Lesson 03 · Part 4 · 15 min

From First Draft to Final

Let Claude run the full workflow, review what comes out, give structured feedback, and iterate from a rough first draft to something you'd actually send. This is where the planning pays off.


Where we are

You've picked a task, built your context library, and planned everything: persona, tools, process sequence, model strategy. All of that lives in your context.md. Now it's time to hit go. Planning is over. You're executing.

This part covers the full execution cycle: generating a first draft, giving the workflow eyes through screenshot verification, and iterating with structured feedback until the output is ready to send. What matters is not a flawless first pass but how quickly you can move from rough to polished.

The three-pass iteration strategy

The first draft won't be perfect, and that's by design. Trying to get to 100% in one pass wastes tokens and usually fails. The fastest path to "done" is rough, refined, polished. Each pass has diminishing scope but increasing precision.

Pass 1: First draft (~60-70%). Hands off: let Claude run the full process sequence end to end. Don't interrupt, don't micromanage.
Pass 2: Bulk feedback (~90-95%). Review the output, write down everything you notice, and deliver it as one structured list of broad changes.
Pass 3: Surgical edits (~100%). Individual, specific fixes, one at a time. Small, targeted prompts until it's done.
Key Insight

The approach: create a first draft, then a second draft driven by general bulk prompts, then surgical edits as needed. Each pass narrows the scope, from "fix everything" down to "fix this one heading."

The first draft: let it run

The first prompt is simple: run the full process sequence. Claude reads the inputs, applies the brand system, and generates the complete output. The key here is to let it run without interrupting. You're looking for a complete first attempt, not a perfect one.

What V1 looked like

The first draft had the right colours and structure, but several things were off. The logo was broken, the content read like a dump of sporadic facts in white tiles rather than a coherent narrative, and Claude used the screenshot tool to create the PDF but never actually read those screenshots to verify the output.

What went right

Correct brand colours applied throughout. Page structure and layout followed the brief. All source files were read and referenced.

What needs work

Logo rendering was broken. Content was sporadic, not narrative. No visual self-verification was performed.

Key Insight

Claude won't self-check its own visual output unless you tell it to. It will generate HTML, even take screenshots, but it won't look at those screenshots and compare them against the brief. This has to be explicit in your instructions.

Context window management

After V1 generation, the context window was at 54% usage. Before giving feedback, I used /compact to summarise and preserve tokens. This freed up space for the detailed feedback that would follow.

Give your AI eyes

The screenshot verification loop is how you give your AI workflow visual awareness. Without it, Claude generates HTML and renders PDFs blindly. The process is straightforward but must be explicit in your instructions:

Create HTML: generate the styled document from your inputs and brand system.
Take screenshots: use Playwright to capture each page as an image.
Read the screenshots: Claude examines the images and identifies visual issues.
Fix and repeat: fix the identified issues, take new screenshots, and verify again. Minimum two passes.
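The capture step above can be sketched in a few lines of Python. This is a minimal sketch, not the workflow's actual script: it assumes Playwright is installed (`pip install playwright && playwright install chromium`), that the rendered document lives at `report.html`, and that one full-page screenshot is enough; a multi-page PDF would need one capture per page.

```python
from pathlib import Path


def capture_pages(html_path: str, out_dir: str = "screenshots") -> list[str]:
    """Render an HTML file and save a full-page screenshot for the AI to read."""
    # Imported lazily so the module loads even where Playwright isn't installed.
    from playwright.sync_api import sync_playwright

    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    shots: list[str] = []
    with sync_playwright() as p:
        browser = p.chromium.launch()
        # Viewport roughly matching an A4 page at 150 dpi (assumption).
        page = browser.new_page(viewport={"width": 1240, "height": 1754})
        page.goto(Path(html_path).resolve().as_uri())
        shot = str(out / "page-1.png")
        page.screenshot(path=shot, full_page=True)
        shots.append(shot)
        browser.close()
    return shots
```

Claude then reads each saved image, compares it against the brief, fixes the HTML, and runs the capture again, for a minimum of two passes.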

After discovering this gap in V1, the screenshot verification step was encoded into context.md as a permanent requirement. Two verification passes minimum. This is the kind of instruction that compounds: you discover it once and it improves every future run.

Structured feedback and V2

Before giving feedback, I wrote down on paper the general things I'd noticed, then dictated the list as structured feedback. This works better than ad-hoc corrections because Claude can address everything in one pass instead of going back and forth.

The feedback list

Add verification process to context.md
Document needs a coherent narrative, not sporadic facts
Fix the logo (full on page 1, icon on other pages)
Research and showcase all awards
Use vector outline icons only, in brand colours
Install Plotly/Kaleido for richer charts

V2 results

V2 took about seven minutes and came back at roughly 90-95%: a photo on the cover pulled from the inspiration folder, better narrative flow, and charts built with Plotly. The structured feedback approach meant Claude could address everything in one pass rather than needing multiple rounds of individual corrections.

Key Insight

Writing feedback down first, then delivering it as a structured list, is more effective than giving corrections one at a time. You save tokens and Claude can make better decisions about how changes interact with each other.

Feedback encoding

This is the step that creates compounding returns. Every piece of feedback you give goes back into context.md permanently. The corrections stop being one-off fixes and become rules that apply to every future run.

What you said

"Fix the footer text." "Two verification passes minimum." "Use vector outline icons only."

What context.md now says

A brand rule, a process step, and a design principle, all permanent, all automatic.

This is what separates AI workflows from one-off prompting. In a regular chat, you correct the same mistakes over and over. With feedback encoding, every correction makes every future run better. "Fix the footer" becomes a brand rule. "Two verification passes" becomes a process step. "Use vector outline icons only" becomes a design principle.
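As a hypothetical example of what that encoding might look like (this is a sketch, not the actual file), the three corrections above could land in context.md as:

```markdown
## Brand rules
- Footer: use the corrected footer text on every page

## Process
- After rendering, screenshot every page with Playwright, read the
  screenshots, and fix any issues. Minimum two verification passes.

## Design principles
- Icons: vector outline only, in brand colours
```

Because Claude reads context.md at the start of every run, each rule fires automatically from then on; no one needs to repeat the correction.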

Key Insight

Your context.md is a living document. It gets better with every project because every piece of feedback becomes a permanent rule. This is how 60 minutes of setup turns into 7 minutes per document.

What's next

You've now taken a first draft from roughly 60% to something you'd actually send, using three passes with decreasing scope and increasing precision. Every correction was encoded back into context.md. The workflow works. But does it work again, on a completely different topic, with zero additional setup?

In Part 5, you'll prove it. You'll run a completely different topic through the same context.md and see whether the investment was worth it. Then we'll recap the four building blocks and what you've accomplished.
