Why I abandoned git worktrees for vibe coding with Claude
It looked like the elegant way to work on multiple branches in parallel. Reality was three Next.js instances fighting over the same `.next` folder and a MacBook on the verge of meltdown.
A month ago I started using git worktrees to work with Claude Code on multiple branches at once. Perfect pattern on paper: one folder per branch, one Claude instance per folder, all independent, all syncing the same repo. Three branches, three windows, three parallel conversations. Ten days later I scrapped it completely.
Here's what happened, why the elegant idea doesn't survive an intensive vibe-coding session, and what pattern I went back to.
The promise of worktrees
For anyone unfamiliar: `git worktree` lets you check out multiple working copies of the same repo in different folders, each on its own branch, all sharing the same `.git`. Like so:
```shell
cd ~/Code/agentikas-blog   # main
git worktree add ../agentikas-feat-auth feat/auth
git worktree add ../agentikas-fix-images fix/images

# Three sibling folders, three different branches, zero cloning
ls ~/Code/
# agentikas-blog/  agentikas-feat-auth/  agentikas-fix-images/
```
The vibe-coding appeal is obvious. Three terminals. A Claude Code instance in each. The feat/auth branch handles authentication. The fix/images branch resolves an image-loading bug. The main branch reviews PRs and merges. Three simultaneous flows. Your brain switches between windows, not between branches.
On paper, lovely. In reality, my MacBook started buckling.
What breaks in a Next.js monorepo
The first problem hit in the first hour. Each worktree has its own hoisted node_modules. In a monorepo of three apps with npm workspaces, that's ~80,000 files per folder. Three worktrees: 240,000 files total, with Spotlight trying to index them all in parallel (yes, the same MacBook-on-its-knees bug — there's another post about it).
The second was subtler. Each worktree ran its own `npm run dev`. Next.js generates `.next/` with build cache, manifests, and hot-module-replacement state. Three dev servers, three `.next/` folders, three different ports. So far, manageable.
The third was lethal. Turbopack — the new Next.js bundler — has a global cache shared at ~/.cache/turbo. Three dev servers writing to the same cache folder simultaneously produced race conditions impossible to debug: builds breaking for no reason, hot reloads showing content from another branch, "module not found" errors on modules that did exist. The cache wasn't designed for parallel consumers.
I spent two sessions trying to configure it properly. Different TURBO_CACHE_DIR per worktree. Separate lockfiles. Workspaces "isolated" mode. Each fix solved one problem and opened two. When I caught myself adding shell wrappers to "make sure the worktree uses its own cache," I realized I was fighting the flow.
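For the record, this is roughly the kind of wrapper I ended up writing before giving up. A sketch of the approach, assuming Turborepo honors the `TURBO_CACHE_DIR` environment variable and your dev script is the standard `npm run dev` (the script name `dev-isolated.sh` is made up):

```shell
# Hypothetical per-worktree launcher: point Turbo at a cache inside THIS
# worktree instead of the shared global one, then start the dev server.
cat > dev-isolated.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
export TURBO_CACHE_DIR="$PWD/.turbo-cache"   # per-worktree cache dir
mkdir -p "$TURBO_CACHE_DIR"
exec npm run dev -- --port "${PORT:-3000}"
EOF
chmod +x dev-isolated.sh
```

Every worktree gets its own `.turbo-cache`, which did stop the races, and also immediately multiplied disk usage. That treadmill is exactly what made me stop.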
What I did before — and what I went back to
The pattern I went back to is radically simpler: one checkout, branch switching with `git switch`. One Next.js instance running on main. One conversation with Claude. When I finish a branch: commit, switch back to main, continue.
The argument against this was "I lose parallelism." It turned out to be false. What I gave up by dropping worktrees wasn't parallelism in my head, only parallelism on the machine. My head can't hold three agentic conversations at once without dropping the thread. Even when the worktrees worked perfectly, my decision quality dropped as I jumped between contexts.
What I did gain: deterministic focus. One branch, one story, one Claude conversation that remembers what happened five minutes ago because it's in the same process. If I need to pause a branch and start another, I `git stash` or make a WIP commit, switch, do the new thing, come back. Switch friction is seconds, not minutes.
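The whole pause-and-resume dance, sketched on a throwaway repo so the commands are reproducible end to end (branch and file names are illustrative):

```shell
# Throwaway repo so the sketch runs anywhere
demo=$(mktemp -d) && cd "$demo"
git init -q -b main
git config user.email me@example.com
git config user.name me
git commit -qm init --allow-empty

# Working on a feature when something urgent comes up on main
git switch -q -c feat/auth
echo "half-done form" > auth.ts
git stash push -u -m "wip: auth form"   # park uncommitted (even untracked) work
git switch -q main
# ...fix the urgent thing on main, commit, done...
git switch -q feat/auth
git stash pop                           # back exactly where you left off
cat auth.ts                             # → half-done form
```

The `-u` flag matters: without it, `git stash` ignores files you haven't added yet.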
When worktrees do pay off
Not saying worktrees are always bad. Three scenarios where they earn their keep:
- Serious code review. You want to review a PR without losing your in-progress work. A dedicated "review" worktree where you `git fetch && git checkout origin/feat/whatever` is clean. It doesn't interfere with your current branch or build artifacts.
- Side-by-side comparison. You want to see how the app's behavior changes between two versions. Two worktrees, two servers on different ports, compare in the browser. Not for ongoing dev, but for a specific session.
- Long reproducible builds. If your build takes ten minutes and you need to compare two-branch results, two worktrees with two parallel builds is faster than sequential builds in the same folder.
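For the review case, this is the pattern I still use. A sketch with a local stand-in for `origin` so it's reproducible anywhere (repo paths and the branch name are made up):

```shell
# Stand-in remote; in real life this is just your existing clone and origin
tmp=$(mktemp -d)
git init -q -b main "$tmp/upstream"
cd "$tmp/upstream"
git config user.email me@example.com
git config user.name me
git commit -qm init --allow-empty
git switch -q -c feat/whatever
echo "new code" > feature.ts
git add feature.ts && git commit -qm "feat: whatever"
git switch -q main
git clone -q "$tmp/upstream" "$tmp/repo"
cd "$tmp/repo"

# The actual pattern: a detached, throwaway worktree just for reading the PR
git fetch -q origin
git worktree add --detach ../repo-review origin/feat/whatever
ls ../repo-review                     # → feature.ts
git worktree remove ../repo-review    # done reviewing, clean up
```

`--detach` means no branch bookkeeping at all: the worktree exists only as long as the review does.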
For daily vibe coding in a Next.js monorepo — which is the 90% case for devs considering worktrees for the first time — the answer is no. The theoretical gain doesn't beat the real cost on the machine and in your head.
The rule I take with me
After this experiment, a rule I apply to other tooling decisions: before adding parallelism, verify you have the problem parallelism solves.
Worktrees solve "I need several active branches at once without recompiling." That's a specific claim. If your problem is "I jump between branches and the builds invalidate," the answer might be as simple as a shared build cache (Turborepo gives this for free) or accepting the friction of rebuilds. You don't always need a worktree for a minor annoyance.
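If the shared-cache route fits your repo, the Turborepo side is a few lines of `turbo.json`. A sketch, assuming Turborepo v2 (`tasks`; it was `pipeline` in v1) and a Next.js `build` task:

```json
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": {
      "outputs": [".next/**", "!.next/cache/**"]
    }
  }
}
```

The `outputs` globs cache the Next.js build output while excluding Next's own internal `.next/cache` folder, which is the combination the Turborepo docs recommend.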
The elegant git worktree add in my terminal is still there. I don't use it for daily vibe coding. I use it for PR reviews and for one-off comparisons. The right dose of the tool, not the maximum dose.
If you're vibe coding with Claude Code and the worktree promise is tempting, try a single checkout with `git switch` discipline first. The machine-load difference is huge and decision quality improves. Only if that single flow feels genuinely too constraining should you consider worktrees, and only for the three scenarios above.
