How Generate from Brief actually works
Paste a project doc. The AI breaks it into tasks with dependencies. Walk through a real example with a snake game.
You already have the spec
I've written about Generate from Brief before, but people keep asking me to show them exactly what happens when you use it. So let's do that. Step by step, with a real example.
The idea is simple. You have a document that describes what you want to build. Maybe it's a project spec, maybe it's notes from a planning session, maybe it's a wall of text you typed into Apple Notes at midnight. You paste it into Zowl. The AI reads it, finds the gaps, asks you about them, and generates structured tasks with dependencies.
That's the pitch. Here's what it actually looks like.
The snake game brief
I wanted a clean example, so I wrote a short project spec for a classic snake game. Nothing fancy. Here's roughly what I pasted in:
# Snake Game
Browser-based snake game using vanilla JS and Canvas.
Features:
- Arrow key controls
- Snake grows when eating food
- Food spawns randomly after being eaten
- Score counter
- Game over when snake hits wall or itself
- Restart button
- Speed increases every 5 food items
- High score saved to localStorage
- Mobile swipe controls
- Pause with spacebar
That's it. About 15 lines. The kind of thing you'd write in 5 minutes before starting to code.
What happened next
I expected Zowl to just spit out 10 tasks matching my 10 bullet points. It didn't. Instead, it came back with three questions.
Question 1: "Your spec mentions mobile swipe controls but doesn't specify a responsive canvas size. Should the game resize to fit the device viewport, or use a fixed canvas with scrolling on small screens?"
I hadn't thought about that. I said responsive, resize to viewport, keep the grid proportional.
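In code, that answer reduces to one decision: size the canvas to the largest integer cell that fits the viewport, so the grid stays proportional on any screen. Here's a minimal sketch of that idea; the grid dimensions and function names are my own illustration, not anything Zowl generated:

```javascript
// Illustrative grid dimensions (not from the brief).
const GRID_COLS = 20;
const GRID_ROWS = 20;

// Largest integer cell size that fits the viewport in both dimensions.
function cellSize(viewportW, viewportH, cols = GRID_COLS, rows = GRID_ROWS) {
  return Math.floor(Math.min(viewportW / cols, viewportH / rows));
}

function resizeCanvas(canvas, viewportW, viewportH) {
  const cell = cellSize(viewportW, viewportH);
  canvas.width = cell * GRID_COLS;   // canvas stays an exact multiple of the cell
  canvas.height = cell * GRID_ROWS;  // so the grid never distorts
  return cell;                       // renderer uses this to draw each square
}
```

Call `resizeCanvas` once at startup and again on the window's resize event, and the playfield stays square at every viewport size.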
Question 2: "Game over when snake hits wall or itself. Do you want wall wrapping as a separate game mode, or strictly walls-kill-you only?"
Just walls kill you. No wrapping mode for now.
Question 3: "Speed increases every 5 food items. Should there be a max speed cap, or does it keep accelerating until the game is effectively unplayable?"
Good catch. I said cap it at level 10 (so after 50 food items, speed stops increasing).
Three questions. Maybe 90 seconds to answer. Each one caught something I would've hit during implementation.
The 11 tasks it generated
After I answered, Zowl generated 11 tasks, not the 10 you'd expect from my 10 bullet points, because it identified a foundational task I didn't explicitly list. Here's the dependency tree:
Task 1: Set up project structure and Canvas boilerplate
└── no dependencies
Task 2: Implement game loop and rendering engine
└── depends on: Task 1
Task 3: Implement snake movement and arrow key controls
└── depends on: Task 2
Task 4: Implement snake growth and collision detection
└── depends on: Task 3
Task 5: Implement food spawning system
└── depends on: Task 2
Task 6: Implement score counter and UI overlay
└── depends on: Task 4, Task 5
Task 7: Implement speed progression with level cap
└── depends on: Task 6
Task 8: Implement game over screen and restart flow
└── depends on: Task 4
Task 9: Implement localStorage high score persistence
└── depends on: Task 6
Task 10: Implement responsive canvas sizing
└── depends on: Task 1
Task 11: Implement mobile swipe controls
└── depends on: Task 3, Task 10
Notice what it did. It created Task 1 (project structure) even though I didn't mention it. It figured out that you can't implement snake movement without a game loop and rendering engine, so it split that into its own task. It made the responsive canvas a separate task that depends on the boilerplate but can run in parallel with the game logic. Smart.
The dependency links are the part that matters most for pipelines. Tasks 5 and 10 can run in parallel because they don't depend on each other. Task 11 waits for both Task 3 and Task 10 because swipe controls need both the input system and the responsive canvas. This ordering used to live in my head. Now it's explicit, and the pipeline engine uses it to figure out what can run concurrently.
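The readiness rule behind that is easy to sketch: a task can start once every task it depends on is done. This is my own reconstruction of the idea, not Zowl's actual engine, using the tree above as data:

```javascript
// Each task maps to the list of task IDs it depends on (the tree above).
const tasks = {
  1: [], 2: [1], 3: [2], 4: [3], 5: [2], 6: [4, 5],
  7: [6], 8: [4], 9: [6], 10: [1], 11: [3, 10],
};

// A task is ready when it isn't done yet and every dependency is done.
function readyTasks(tasks, done) {
  return Object.keys(tasks)
    .map(Number)
    .filter((id) => !done.has(id) && tasks[id].every((d) => done.has(d)));
}
```

With nothing done, only Task 1 is ready. With tasks 1 through 4 done, `readyTasks` returns `[5, 8, 10]`: three tasks a pipeline can dispatch concurrently, which is exactly the parallelism described above.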
What a single PRD looked like
Each task came with a full PRD (read more about PRD templates for AI agents and how to structure PRDs for your pipeline). Here's Task 7 as an example:
Task: Implement speed progression with level cap
Dependencies: [Score counter and UI overlay]
Complexity: Low
Description:
Increase game tick speed every 5 food items consumed.
Track current level. Display level in UI overlay.
Cap speed increase at level 10 (50 food items).
Acceptance Criteria:
- Game speed increases after every 5th food item
- Current level displayed next to score
- Speed caps at level 10, no further acceleration
- Level persists across the session (resets on game restart)
- Speed change feels smooth, not jarring (use linear interpolation)
That last acceptance criterion is interesting. I didn't mention smooth speed transitions. The AI inferred it because a sudden jump in game tick rate would feel broken. That's the kind of detail that separates "generate a task list" from "generate a task list that an agent can actually implement without producing garbage."
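Here's roughly how I'd expect an agent to satisfy those criteria, as a sketch. The tick values are illustrative; the PRD only pins down the every-5-items rule and the level-10 cap, and the linear interpolation here spreads the speed-up evenly across the level range rather than jumping:

```javascript
// Illustrative tick speeds; the PRD specifies only the rule and the cap.
const BASE_TICK_MS = 200; // starting game tick interval
const MIN_TICK_MS = 80;   // interval at the level cap
const MAX_LEVEL = 10;

// One level per 5 food items, capped at level 10 (50 items).
function levelFor(foodEaten) {
  return Math.min(MAX_LEVEL, Math.floor(foodEaten / 5));
}

// Linearly interpolate the tick interval across the level range,
// so each level-up is a small, even step rather than a jarring jump.
function tickIntervalMs(level) {
  return BASE_TICK_MS + (level / MAX_LEVEL) * (MIN_TICK_MS - BASE_TICK_MS);
}
```

Past 50 food items, `levelFor` keeps returning 10 and the interval stays pinned at the minimum, which is the cap I asked for in Question 3.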
The part that makes people say "wait, that's free?"
Generate from Brief is included in the free tier. You can paste a document and get structured tasks without paying anything. No limit on how many times you use it.
I get asked about this a lot. Why give it away? Because the value of Zowl is in running the pipeline, not in generating the tasks. If Generate from Brief gets someone from "I have a vague idea" to "I have 11 well-defined tasks with dependencies" in 5 minutes, they're going to want to run those tasks. That's where Pro comes in.
But even if you never upgrade, you still get a tool that turns a messy project doc into structured work items faster than any human can. I've watched people use it to generate tasks and then execute them manually or with other tools. That's fine. The feature stands on its own.
Where it breaks
I should be honest. It's not magic.
If your brief is three sentences long and extremely vague, you'll get a lot of questions and the output won't be great. Garbage in, garbage out applies here too. The AI can fill in gaps, but it can't invent your product for you.
It also struggles with highly visual requirements. "Make the UI look like Stripe's dashboard" doesn't translate into useful tasks because the agent can't see Stripe's dashboard. You need to describe the layout, the components, the interactions. Words, not vibes.
And occasionally it gets the granularity wrong. It'll split something into 3 tasks that should be 1, or combine things that should be separate. That's why you review and edit before running the pipeline. Generating takes 5 minutes. Editing takes another 5. That's still a fraction of writing everything from scratch.
The snake game, by the way
I ran those 11 tasks through a NightLoop pipeline overnight. Nine completed cleanly. Two needed minor fixes (the swipe detection threshold was too sensitive, and the high score wasn't displaying on the game over screen). Total time from pasting the brief to a working snake game: one evening of setup, one night of sleeping, 20 minutes of review in the morning.
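For reference, swipe detection typically reduces to comparing the touch start/end delta against a minimum distance, and "too sensitive" usually means that threshold is too low, so tiny accidental touches register as swipes. A minimal sketch of the shape of that code, with illustrative values:

```javascript
// Minimum distance a touch must travel to count as a swipe.
// This is the knob that was too sensitive in my run; value illustrative.
const SWIPE_THRESHOLD_PX = 30;

// Classify a touch gesture from its start and end coordinates.
function swipeDirection(startX, startY, endX, endY) {
  const dx = endX - startX;
  const dy = endY - startY;
  // Ignore movements below the threshold: taps, not swipes.
  if (Math.max(Math.abs(dx), Math.abs(dy)) < SWIPE_THRESHOLD_PX) return null;
  // Dominant axis wins, so diagonal-ish swipes still resolve cleanly.
  if (Math.abs(dx) > Math.abs(dy)) return dx > 0 ? "right" : "left";
  return dy > 0 ? "down" : "up";
}
```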
For a browser game I probably would've spent a full day on if I'd coded it manually. I'll take that trade every time.