Why your agent rewrites everything from scratch
Without context about existing code, agents start fresh. They create duplicate utilities, ignore your patterns, and rewrite files that already exist.
The file that already existed
Last month I queued up 15 tasks overnight. One of them was "add rate limiting to the API routes." Straightforward. I've got a src/lib/rateLimit.ts that wraps a token bucket implementation. It's been in the project for four months. Used in six files. Fully tested.
I woke up and the agent had written a brand new rate limiter in src/utils/rateLimiter.ts. Different file name, different API surface, different pattern. It imported express-rate-limit from npm even though I already had a local implementation that worked exactly the way I wanted. Then it went through every route file and imported its shiny new version.
The result: two rate limiters in the same codebase. Duplicate logic. Conflicting naming. And about 25,000 tokens burned on code that shouldn't exist.
This happens constantly
If you've used an AI coding agent on anything bigger than a weekend project, you've seen this. The agent doesn't know what's already there. It doesn't check. It starts fresh every time, like it's working on an empty repo.
Here's what that looks like in practice:
- You have formatDate() in src/lib/dates.ts. The agent writes formatTimestamp() in a new file with slightly different behavior.
- You use camelCase for API response fields. The agent uses snake_case because that's what the LLM defaults to.
- You have a custom logger that writes structured JSON. The agent imports winston and sets up its own logging.
- You have a src/types/ directory. The agent puts its types inline in the implementation file.
Every one of these is a 10-minute fix by itself. But when an agent produces 12 tasks overnight and 5 of them have this problem, you're starting your morning with an hour of cleanup before you even look at the actual logic.
Why agents do this
It's not a bug. It's the default behavior.
When you give an agent a task like "add caching to the search endpoint," the agent has a goal and a set of tools. It can read files, write files, run commands. But nothing in that prompt tells it to read your existing code first. So it doesn't. It jumps straight to implementation because that's the most direct path to "task complete."
Think about it from the model's perspective. It's trying to minimize the distance between "task description" and "task done." Reading your utilities directory and analyzing your patterns is a detour. A useful one, but a detour. Without explicit instruction, the agent skips it.
This is why copy-pasting a task description into an agent prompt and hitting enter works fine for greenfield projects and terribly for anything with existing code. I wrote before about how enforcing a read-first step at the pipeline level cuts token spend by around 40%.
The fix: read before you write
In nightloop.sh, this was the first real improvement I made after the initial for loop. Before the agent touches any files, it gets a separate step where its only job is to read and understand.
# The pre-check step from nightloop.sh v0.3
claude -p "Read these directories: src/lib/, src/utils/, src/types/, src/middleware/.
List every exported function and its purpose.
Note the naming conventions used (camelCase vs snake_case, file naming, etc).
Then read this task: $TASK
Which existing functions are relevant?
Which files already handle part of this requirement?
Output an implementation plan that reuses existing code."
That's it. The agent spends 2-3 minutes reading instead of jumping to writing. When it gets to the implementation step, it already knows rateLimit.ts exists, it already knows the project uses camelCase, it already knows there's a custom logger.
The implementation step doesn't start from zero. It starts from understanding.
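As a rough sketch of what that chaining can look like in a script like nightloop.sh (the plan.md filename and the prompt wording here are my own illustrations, not the actual script):

```shell
#!/bin/sh
# Sketch: feed the pre-check output into the implement step.
TASK="add rate limiting to the API routes"

# Step 1 (pre-check) would normally write its findings to a plan file:
# claude -p "Read src/lib/, src/utils/ ... Task: $TASK ..." > plan.md

# Step 2 (implement) embeds that plan, so the agent starts from
# the pre-check's understanding instead of from zero.
PLAN=$(cat plan.md 2>/dev/null || echo "(no plan found)")
IMPLEMENT_PROMPT="Implement this task: $TASK

Follow this plan and reuse the existing code it identifies:
$PLAN"
# claude -p "$IMPLEMENT_PROMPT"
```

The only structural trick is that the implement prompt literally contains the plan, so the second model call can't skip it.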
What happens without pre-check
I tracked this across 40 pipeline runs early on, half with pre-check and half without. Same tasks, same codebase, same model.
Without pre-check, 47% of tasks produced duplicate code or deviated from existing patterns. Almost half. That meant almost half my mornings started with refactoring instead of reviewing.
With pre-check, that number dropped to about 10%. And the 10% that still deviated were cases where my existing code was genuinely inconsistent (two different error handling patterns in different parts of the codebase, for example). The agent picked one; I would've picked the other. Fair enough.
The token cost of the pre-check step is around 3,000-5,000 tokens per task. The token cost of rewriting existing code from scratch is 10,000-20,000 tokens. The math isn't close.
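Using the midpoints of those ranges and the duplication rates from the earlier runs, the expected per-task cost comes out in pre-check's favor even before counting cleanup time. A back-of-envelope sketch (my arithmetic on the numbers above, not new measurements):

```shell
#!/bin/sh
# Midpoints: pre-check ~4,000 tokens; a from-scratch rewrite ~15,000 tokens.
# Duplication rate: 47% without pre-check, 10% with.
WITHOUT=$(( 47 * 15000 / 100 ))       # expected rewrite tokens per task: 7050
WITH=$(( 4000 + 10 * 15000 / 100 ))   # pre-check cost + residual rewrites: 5500
echo "$WITHOUT vs $WITH"
```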
What a good pre-check looks like
Over a few hundred pipeline runs, I've settled on a structure that works reliably:
## Pre-check Instructions
1. Read: src/lib/, src/utils/, src/types/
2. List all exported functions relevant to this task
3. Note naming conventions, error handling patterns, and test patterns
4. For task: [description]
- Which existing code should be reused?
- Which files need modification vs creation?
- Are there any conflicts with existing patterns?
5. Output a 5-10 line implementation plan
The key is specificity. "Read the codebase" is too vague. The agent will skim and miss things. "Read src/lib/ and list all exported functions" gives it a concrete job. It reads every file in that directory, extracts the exports, and reports them back. Now when the implementation step says "add rate limiting," the pre-check output already says "rateLimit() exists in src/lib/rateLimit.ts, accepts a config object with maxRequests and windowMs."
No duplication. No reinvention.
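Wired into a script, the template is just a here-doc with the task substituted in. A minimal sketch, assuming the same claude CLI as earlier (the variable names are mine):

```shell
#!/bin/sh
# Sketch: build a concrete pre-check prompt from the template.
TASK="add caching to the search endpoint"

PRECHECK=$(cat <<EOF
## Pre-check Instructions
1. Read: src/lib/, src/utils/, src/types/
2. List all exported functions relevant to this task
3. Note naming conventions, error handling patterns, and test patterns
4. For task: $TASK
   - Which existing code should be reused?
   - Which files need modification vs creation?
   - Are there any conflicts with existing patterns?
5. Output a 5-10 line implementation plan
EOF
)
# claude -p "$PRECHECK" > plan.md
```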
Pre-check isn't optional
I know it looks like overhead. Another step in the pipeline, more tokens before anything gets built. I thought the same thing when I first added it. It felt like I was slowing the pipeline down.
But the opposite happened. Total pipeline time went down because the agent wasn't wasting cycles building things that already existed. Morning review time went down because the code matched existing patterns instead of inventing new ones. Token spend went down because implementation was shorter when the agent knew what to reuse.
In Zowl's NightLoop template, pre-check is the first step and it's there for a reason. The pipeline goes pre-check, implement, validate. That order isn't arbitrary. Every step depends on the one before it. Skip pre-check and the implement step is flying blind. The agent will write correct code that doesn't fit your project.
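The ordering is easy to enforce mechanically. A toy sketch of the loop shape, where run_step is a stand-in for the real claude calls:

```shell
#!/bin/sh
# Sketch: the pre-check -> implement -> validate ordering as a loop.
STEPS=""
run_step() {
  STEPS="$STEPS $1"   # record the step; real version: claude -p "..."
}

for TASK in "add rate limiting" "add caching to search"; do
  run_step precheck   # read existing code, emit a plan
  run_step implement  # build against that plan
  run_step validate   # run tests, check conventions
done
echo "$STEPS"
```

Because each step is a separate invocation, skipping pre-check means editing the script, not just forgetting a sentence in a prompt.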
I tried running without pre-check exactly once after building Zowl, just to see if maybe the newer models were smart enough to check on their own. They weren't. Got a brand new src/helpers/formatCurrency.ts sitting right next to the existing src/lib/currency.ts. Same function, different name, different file. Classic.
Read before you write. It's such a simple rule that it feels almost too obvious to be worth a blog post. But if you're running agents on a real codebase and you're not enforcing it, you're paying for the same code twice every single night. Making sure your PRD is clear about what done looks like reinforces this. And if you want this pattern built into your workflow, Zowl enforces pre-check automatically.