Task dependencies: the order your agents run matters
Task 5 depends on Task 3. If Task 3 fails, Task 5 will too. Without dependency management, about 30% of task failures are unnecessary.
The 3am collision
February 2025. I had 20 tasks queued up overnight. Task 12 was "build the booking confirmation screen." Task 7 was "create the bookings API endpoint that returns confirmation data." Pretty obvious which one needs to go first.
nightloop.sh didn't care. It ran tasks in the order they appeared in tasks.txt. That night, Task 12 ran before Task 7. The agent tried to build a screen that fetches from an endpoint that didn't exist yet. It hallucinated the response shape, guessed at the URL, and wrote a component that called /api/bookings/confirm with a payload structure that had nothing to do with what Task 7 would eventually create.
Task 7 ran later and built the actual endpoint. Different URL. Different response format. Both tasks "passed" in isolation. But the screen didn't work with the endpoint. I spent my morning wiring them together manually. Two tasks worth of output, zero value.
Order isn't a nice-to-have
When you're running 5 tasks, you can eyeball the order. Put the database schema first, the API endpoints second, the frontend third. Simple enough.
When you're running 30+ tasks overnight, ordering by intuition falls apart. Task 18 needs the type definitions from Task 4, but also needs the middleware from Task 11. Task 23 needs the component from Task 18 and the hook from Task 15. It's not a list. It's a graph.
In nightloop.sh I tried solving this by carefully ordering tasks.txt by hand. I'd sit there for 20 minutes, shuffling lines around like I was solving a puzzle. Then I'd run it, go to bed, and wake up to find that Task 9 (which needed Task 6) ran fine, but Task 14 (which also needed Task 6 and I forgot about) ran before Task 6 and cratered.
Manual ordering doesn't scale. I missed dependencies constantly. About 30% of my task failures in the nightloop.sh era weren't real failures. The code was fine, the task was clear, the agent was capable. The task just ran before its dependency was ready.
30% of failures from bad ordering. Not from bad code, not from bad prompts. Just from running things in the wrong sequence.
What dependency management actually looks like
The fix is explicit dependency declarations. Each task says "I depend on these other tasks." The pipeline reads the dependency graph and runs things in topological order: parents first, children after.
```
## Task 7: Bookings API endpoint
Dependencies: [Task 3: Database schema]

## Task 12: Booking confirmation screen
Dependencies: [Task 7: Bookings API endpoint, Task 9: Shared UI components]

## Task 15: Booking notification hook
Dependencies: [Task 7: Bookings API endpoint]
```
With this, the pipeline knows Task 3 runs first. Then Tasks 7 and 9 can run in parallel (they don't depend on each other). Then Task 12 runs after both 7 and 9 finish. Task 15 runs after 7 finishes but doesn't need to wait for 9.
That parallelism matters. If you have 30 tasks and 10 of them are independent, running them sequentially wastes hours. A dependency graph lets the pipeline run independent branches simultaneously and only serialize where it needs to.
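The scheduling described above is a topological sort grouped into "waves": at each step, every task whose dependencies have all finished is free to run in parallel. A minimal sketch (the task names and graph are hypothetical, matching the example declarations, not Zowl's actual internals):

```python
def run_order(deps):
    """Kahn's algorithm, grouped into waves of tasks that have
    no unmet dependencies and can therefore run in parallel."""
    remaining = {t: set(d) for t, d in deps.items()}
    waves = []
    while remaining:
        # Every task with zero unmet dependencies is ready now.
        ready = [t for t, d in remaining.items() if not d]
        if not ready:
            raise ValueError("dependency cycle detected")
        waves.append(sorted(ready))
        for t in ready:
            del remaining[t]
        # Mark the finished wave as satisfied for everyone else.
        for d in remaining.values():
            d.difference_update(ready)
    return waves

# Each task maps to the tasks it depends on.
deps = {
    "task3": [],
    "task7": ["task3"],
    "task9": [],
    "task12": ["task7", "task9"],
    "task15": ["task7"],
}
print(run_order(deps))
# [['task3', 'task9'], ['task7'], ['task12', 'task15']]
```

Each inner list is a batch the pipeline can dispatch simultaneously; serialization only happens between waves.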
How Generate from Brief handles this
Writing dependency declarations for 60 tasks by hand would be miserable. That's where Zowl's Generate from Brief comes in.
You paste your project brief, a plain English description of what you want built. Zowl breaks it into individual tasks, writes a PRD for each one, and builds the dependency graph automatically. It looks at the files each task will likely touch, the types and functions each task will create or consume, and figures out which tasks need to come first.
When I built the gym app (60 tasks from a 1,400-word brief), the auto-generated dependency graph got the ordering right for 56 out of 60 tasks. The four it missed were edge cases where two tasks referenced the same table but in non-obvious ways. Took me 10 minutes to add those missing edges during the review pass.
The dependency graph also shows you the critical path. If Task 3 is the root dependency for 15 downstream tasks, you want to know that before you hit run. Because if Task 3 fails, those 15 tasks are all blocked. I've started reviewing critical-path tasks more carefully during my pre-run review. If the root is wrong, everything downstream inherits the mistake.
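Finding those high-leverage roots is just transitive reachability on the inverted graph: count how many downstream tasks each task would block if it failed. A sketch under the same hypothetical example graph:

```python
def blocked_by(deps):
    """For each task, count the downstream tasks that are
    transitively blocked if it fails."""
    # Invert the graph: parent -> direct children.
    children = {t: [] for t in deps}
    for task, parents in deps.items():
        for p in parents:
            children[p].append(task)

    def descendants(task, seen=None):
        seen = set() if seen is None else seen
        for c in children[task]:
            if c not in seen:
                seen.add(c)
                descendants(c, seen)
        return seen

    return {t: len(descendants(t)) for t in deps}

deps = {
    "task3": [],
    "task7": ["task3"],
    "task9": [],
    "task12": ["task7", "task9"],
    "task15": ["task7"],
}
print(blocked_by(deps))
# {'task3': 3, 'task7': 2, 'task9': 1, 'task12': 0, 'task15': 0}
```

Sorting tasks by this count is one way to pick which PRDs deserve the most scrutiny before you hit run.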
Failure cascades
Here's the thing that bit me hardest before I had proper dependencies: cascading failures.
Task 3 creates the database schema. Task 7 builds an endpoint using that schema. Task 12 builds a screen using that endpoint. Task 3 fails. What happens?
Without dependency management, Tasks 7 and 12 run anyway. Task 7 tries to query a table that doesn't exist. It fails with a confusing error about missing columns. Task 12 tries to call an endpoint that's broken. It fails with a different confusing error. Now I've got three failures in my morning log, but only one of them is real. The other two are ghosts.
With dependency management, Task 3 fails and Tasks 7 and 12 are automatically skipped with a clear status: "Skipped: dependency Task 3 failed." One failure, two skips. The log tells me exactly what happened. I fix Task 3, re-run the three affected tasks, and I'm done.
```
Task 3:  ✗ FAILED  (schema migration syntax error)
Task 7:  ○ SKIPPED (depends on Task 3)
Task 9:  ✓ PASSED
Task 12: ○ SKIPPED (depends on Task 7 → Task 3)
Task 15: ○ SKIPPED (depends on Task 7 → Task 3)
```
Those three skipped tasks didn't burn any tokens. They didn't produce garbage output I'd have to review and discard. They just waited. That's what dependency management buys you: tasks that fail cleanly instead of tasks that fail expensively.
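The skip logic itself is small: run tasks parents-first, and if any ancestor failed or was skipped, mark the task skipped without ever invoking the agent. A minimal sketch (the `run_task` callback and task names are hypothetical, standing in for whatever actually executes an agent task):

```python
def execute(deps, run_task):
    """Run tasks parents-first; mark a task SKIPPED, without
    running it, if any dependency failed or was itself skipped."""
    status = {}
    remaining = dict(deps)
    while remaining:
        # A task is ready once all its parents have a final status.
        ready = [t for t, d in remaining.items()
                 if all(p in status for p in d)]
        if not ready:
            raise ValueError("dependency cycle detected")
        for t in ready:
            del remaining[t]
            if any(status[p] != "PASSED" for p in deps[t]):
                status[t] = "SKIPPED"   # no agent call, no tokens burned
            else:
                status[t] = "PASSED" if run_task(t) else "FAILED"
    return status

deps = {
    "task3": [], "task7": ["task3"], "task9": [],
    "task12": ["task7", "task9"], "task15": ["task7"],
}
# Simulate Task 3's schema migration failing:
print(execute(deps, run_task=lambda t: t != "task3"))
```

The result is exactly the morning log above: one FAILED, one PASSED, three SKIPPED, and nothing expensive ran for nothing.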
The nightloop.sh lesson
Before I added dependency tracking to nightloop.sh, my morning routine looked like this:
- Open the log file
- See 6 failures out of 20 tasks
- Spend 30 minutes figuring out which failures were real and which were caused by upstream failures
- Fix the real failures
- Manually figure out which downstream tasks needed re-running
- Re-run those tasks and hope I didn't miss any
After I added a basic dependency system (a janky JSON file that mapped task IDs to their parents), the routine became:
- Open the log
- See 2 failures and 4 skips
- Fix the 2 failures
- Re-run the 6 affected tasks
Same 20 tasks. Same codebase. 30% fewer "failures" just because the pipeline stopped running tasks it already knew would fail.
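For the record, a parent-map file like the "janky JSON" described above doesn't need to be fancy. A hypothetical reconstruction (task IDs mapping to the parents they wait on, not the actual file):

```json
{
  "7":  ["3"],
  "9":  [],
  "12": ["7", "9"],
  "15": ["7"]
}
```

Even this crude format is enough to derive run order, skips, and the re-run set after a failure.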
Practical dependency rules
After running a few hundred overnight pipelines, here's what I've landed on:
Schema before endpoints before frontend. Always. If you're building a feature that touches all three layers, the database schema is the root. Everything flows from the shape of the data.
Shared components before pages that use them. If Task 18 creates a `<DataTable>` component and Tasks 22, 25, and 30 all use it, Task 18 better run first. Otherwise three agents will each build their own table component.
Types and interfaces before implementations. If you're generating TypeScript types from a schema, that task is the root of the subgraph. Everything that imports those types depends on it.
Don't over-specify dependencies. If Task 20 and Task 21 touch different parts of the codebase with no overlap, they don't need a dependency between them. False dependencies serialize tasks that could run in parallel and slow down the whole pipeline.
The dependency graph isn't glamorous. It doesn't feel like the exciting part of running AI agents overnight. But every time I get tempted to skip the review pass and just hit run, I remember those mornings staring at six failures and trying to figure out which two were real. The graph saves me from that. Every single night.
Understanding dependencies is essential to the broader pipeline architecture approach that powers modern agent orchestration. It's also critical for implementing proper failure routing when tasks do fail. For more details on how to build these systems, check out Zowl.