The spec is written. The backlog is ranked. This week, the tool exists — not in a document, but on a screen, doing something a writer can use.
Phase 4 · Bonus Build · Week 30 of 32

Last week you defined the tool. This week you build it. The build sprint is exactly what it sounds like: a concentrated burst of construction, focused on getting a working prototype from zero to functional in seven days. Not polished. Not complete. Functional — meaning a writer can open the tool, type screenplay text, and see something happen that no generic text editor provides. The target for this week is the core of your PRD's feature list: a text editing surface that recognizes scene headings and generates a navigable scene list. That single capability — type a scene heading and watch it appear in a sidebar you can click to jump to that scene — transforms a text editor into a screenwriting workspace. Everything else (metadata panels, outline views, reference docks, AI integration) can come later. This week, you build the foundation: an editor that knows what a scene is.
Vibe-coding: building with AI as your engineer. You're not expected to be a programmer. The Bonus Build is designed for writers who want to make tools, using AI-assisted code generation — sometimes called "vibe-coding" — to produce working software from natural-language descriptions. The process works like this: you describe what you want the tool to do in plain language. The AI generates code. You run the code. You see what works and what doesn't. You describe the fix. The AI revises. You iterate. Each cycle takes minutes, not days. A writer with no coding experience can produce a functional prototype in a week using this method — as long as the scope is small, the features are simple, and the builder knows when to accept "good enough."
The build sprint protocol. Seven days. Three milestones. Each milestone is a checkpoint — if you reach it, you're on track. If you don't, reduce scope (cut a feature from this week's target and move it to Week 31).
Milestone 1 (Days 1–2): The blank editor runs. A text editing surface exists in a browser or local application. You can type in it. You can save what you type (to a file, to local storage, or to the browser's memory). The font is Courier or Courier Prime. The background is clean. Nothing else — no features, no sidebar, no formatting. Just a place to write that looks like a screenplay page. This milestone validates that your tech stack works and that you can get from code to running application.
Milestone 2 (Days 3–5): Scene detection works. The editor recognizes scene headings — lines that begin with INT. or EXT. (case-insensitive, with reasonable variations). When a scene heading is typed, it appears in a sidebar list. The list updates as the writer types. Clicking an item in the sidebar scrolls the editor to that scene. This is the MVP's defining feature: the editor is now scene-aware. A writer can type a screenplay and navigate by scene rather than by scrolling through a continuous document.
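A minimal sketch of the detection logic Milestone 2 describes, assuming the standard INT./EXT. prefixes plus the common combined forms. Function and variable names here are illustrative, not from your PRD — this is one plausible shape the AI might generate, not the required one:

```javascript
// Match lines that begin a scene heading: INT., EXT., INT./EXT., or I/E.
// (case-insensitive, leading whitespace tolerated).
const SCENE_HEADING = /^\s*(INT\.\/EXT\.|I\/E\.|INT\.|EXT\.)/i;

// Scan the editor's full text and return one entry per scene heading:
// the heading text, its line number, and its character offset. The
// offset is what click-to-navigate uses to scroll the editor.
function extractScenes(text) {
  const scenes = [];
  const lines = text.split("\n");
  let offset = 0;
  lines.forEach((line, i) => {
    if (SCENE_HEADING.test(line)) {
      scenes.push({ heading: line.trim(), line: i, offset });
    }
    offset += line.length + 1; // +1 for the newline
  });
  return scenes;
}
```

In the app, something like extractScenes would run on each input event (ideally debounced) to rebuild the sidebar; clicking a sidebar item would set the editor's cursor to the stored offset and scroll it into view.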
Milestone 3 (Days 6–7): One additional feature works. Choose the highest-priority feature from your backlog that isn't the scene navigator. Possibilities: a scene count displayed in the sidebar, a word/page count in a status bar, the ability to tag a scene with a purpose label via a dropdown or text field, or the ability to export the text as a plain-text file. One feature. Not two. The third milestone proves you can extend the prototype beyond its initial capability — that the architecture supports addition, not just the initial build.
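One candidate from the Milestone 3 list, sketched as pure logic: a word/line/page count for a status bar. The 55-lines-per-page figure is a rough screenplay convention for a Courier page, not an exact standard — treat it as an assumption to tune:

```javascript
// Status-bar counts for the editor. Word count is a simple whitespace
// split; page count uses the rough rule of thumb that one formatted
// screenplay page is about 55 lines of Courier.
const LINES_PER_PAGE = 55; // approximation, not an exact standard

function documentStats(text) {
  const words = text.trim() === "" ? 0 : text.trim().split(/\s+/).length;
  const lines = text.split("\n").length;
  const pages = Math.max(1, Math.ceil(lines / LINES_PER_PAGE));
  return { words, lines, pages };
}
```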
The technical minimum. Your tool needs to run somewhere. The simplest options for a solo builder using AI-assisted coding:
Option A: Web app (HTML + JavaScript). A single HTML file with embedded CSS and JavaScript. Opens in any browser. No installation. No backend. Text is saved to the browser's local storage or exported as a file. This is the lowest-friction option — you can start building immediately with no setup, and the AI can generate the complete application in a single code block. Limitation: local storage isn't permanent, so work can be lost if the browser's site data is cleared. Acceptable for a prototype.
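A sketch of the save/restore wiring Option A implies. The storage parameter stands in for the browser's window.localStorage (injected here so the logic runs outside a browser too); the key name is arbitrary:

```javascript
// Persist the editor's contents under a fixed key, and restore it on
// load. `storage` is any object with getItem/setItem — in the browser
// this would be window.localStorage; a plain-object wrapper works for
// testing outside the browser.
const DRAFT_KEY = "screenplay-draft"; // arbitrary key name

function saveDraft(storage, text) {
  storage.setItem(DRAFT_KEY, text);
}

function loadDraft(storage) {
  // Fall back to an empty document if nothing has been saved yet.
  return storage.getItem(DRAFT_KEY) ?? "";
}
```

In the page itself, saveDraft would hang off the editor's input event, ideally debounced so it doesn't write on every keystroke, and loadDraft would run once on page load to repopulate the editor.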
Option B: Local app (Electron, Tauri, or similar). A desktop application that runs on your machine. Saves to local files. Feels more like "real" software. More complex to set up than a web app — requires a development environment, package installation, and a build step. Choose this only if you have some familiarity with development tools or if the AI-assisted setup goes smoothly. The additional complexity buys you file-system access and a more native feel, but it's not necessary for the MVP.
Option C: Command-line tool. A script that processes screenplay text files — reads a .fountain or .txt file, extracts scene headings, displays a scene list, and generates a navigable index. No graphical interface. The writer works in their existing text editor and uses the command-line tool for navigation and analysis. This is the fastest to build but the least satisfying to use. Choose this only if you're comfortable with the terminal and don't need a graphical editor.
For most curriculum students, Option A is the right choice. It produces a visible, interactive prototype with the lowest setup overhead, and AI-assisted coding tools are most effective at generating self-contained web applications.
How to work with AI during the build sprint. The AI is your pair programmer. You describe what you want in natural language; it generates code. The cycle works best when your descriptions are specific and your iterations are small:
Good prompt: "Create a simple web-based text editor with a monospace font (Courier Prime), a sidebar on the left, and the main editing area on the right. When I type a line that starts with 'INT.' or 'EXT.', it should appear as a clickable item in the sidebar. Clicking the item should scroll the editor to that line."
Bad prompt: "Build me a screenwriting app." (Too vague — the AI will make hundreds of assumptions, most of them wrong.)
After each generated code block: run it, test it, identify the one most important thing that's broken or missing, and describe the fix. One fix per iteration. Not three. Not five. One. Small iterations converge faster than large ones because each cycle validates one decision before you move to the next.
This is a build week. No screenplays. No tool reviews. All available time goes to the prototype. If you have spare hours at the end of the week and the build is on track, use them to test the prototype with your own screenplay — paste your finished script into the editor and navigate by scene. That's the best test: does the tool serve the writer who made it?
Deliverable: Running prototype — text editor with scene navigation.
Constraints: By the end of this week, produce a working application (web app, local app, or CLI tool) that meets the three milestones:
Milestone 1: A text editing surface exists and accepts input. The font is monospace (Courier or Courier Prime). Text persists across the session (local storage, file save, or equivalent). The interface is clean enough that a writer would be willing to type in it.
Milestone 2: The editor detects scene headings (lines beginning with INT. or EXT.) and displays them in a navigable list. The list updates as the writer types. Clicking an item in the list scrolls the editor to that scene. The detection covers the standard 95% case — full INT./EXT. headings with location and time of day.
Milestone 3: One additional feature from the backlog is implemented. The feature works — it's not a stub or a placeholder. A writer can use it as part of a real workflow.
Quality bar: The prototype must be functional, not polished. Functional means: a writer can open it, type or paste screenplay text, see scenes appear in the sidebar, click to navigate, and use the additional feature. The prototype does NOT need to be beautiful (default styling is fine), does NOT need to handle every edge case (unusual heading formats, massive documents, concurrent users), and does NOT need to be deployed on the internet (running locally is sufficient). The test: paste your finished screenplay into the editor. Can you navigate to any scene in under five seconds using the sidebar? If yes, the prototype works.
Estimated time: 14–20 hours across 7 days (Milestone 1: 3–5 hours; Milestone 2: 5–8 hours; Milestone 3: 3–5 hours; testing and debugging: 2–3 hours).
This week, AI is your pair programmer. The prompts below are starting points for the build — use them to generate the initial code, then iterate through small fix-describe-regenerate cycles. You'll likely send dozens of prompts this week, each one refining the previous output. The prompts here launch the build. Everything after is conversation.
When something breaks — and it will — use this prompt template to get the AI to fix it:
"Here's my current code: [paste]. When I [describe what you did], I expected [what should happen], but instead [what actually happened]. Fix the [specific area — the sidebar, the scroll function, the localStorage save] without changing the rest of the code."
The key: describe the gap between expected and actual behavior. "It's broken" gives the AI nothing to work with. "Clicking the third scene scrolls to the second scene instead" gives the AI a precise bug to fix. Debugging is the build sprint's most time-consuming activity — expect to spend 30–40% of your build time identifying and fixing problems. That's normal. Every iteration that fixes a bug also teaches you something about how the code works, which makes the next iteration faster.
The hardest moment in a build sprint is Day 3 — when the initial excitement of "it runs" has faded and the reality of "it doesn't do what I want yet" sets in. The editor works but the sidebar flickers. The detection works but misses headings with unusual formatting. The click-to-navigate works but the scroll position is off by a few lines. These are not failures. They're the normal state of software in development. The difference between a builder who ships and one who abandons is the response to Day 3: the builder who ships says "the scroll is off by 50 pixels — let me fix that specific thing." The builder who abandons says "this isn't working" and walks away from the project. Be specific about what's wrong. Fix one thing at a time. The prototype doesn't need to be right on Day 3. It needs to be closer to right than it was on Day 2.
Building software and writing a screenplay have more in common than you might expect. Both start with a plan (PRD or outline) that changes on contact with reality. Both require iterating through a build-test-revise cycle. Both demand the discipline of working within constraints (feature scope or budget tier). And both produce an artifact that must be evaluated by someone other than the maker. Write about one parallel you noticed this week — a moment during the build sprint that felt like a moment from the drafting phase. Did the prototype surprise you the way the draft surprised you? Did you discover a feature you hadn't planned, the way you discovered unplanned scenes during drafting? Did you have to cut a feature for scope, the way you cut scenes for pace? The creative process has a shape that transcends the medium. The writer who sees that shape is a writer who can build anything.
By the end of this week you should have:
• A running prototype — a web app (or local app) that functions as a screenplay editor
• Scene detection working — lines starting with INT./EXT. populate a sidebar list in real time
• Click-to-navigate working — sidebar items scroll the editor to the corresponding scene
• One additional feature implemented from the backlog
• The prototype tested with your actual screenplay (40+ scenes)
• A daily build log tracking what was built, what works, and what's next
• The prototype saved/exported as a file you can share or deploy
Week 31: AI Features (Questions-Only Script Doctor). The editor works. The scene navigator runs. Next week, you add the layer that makes the tool distinctive: an AI panel that doesn't write for the writer — it asks questions. The "Two Readers" model from the curriculum becomes a product feature: the writer selects a scene, invokes the AI panel, and receives diagnostic questions from Reader A and Reader B. The AI doesn't generate dialogue, suggest rewrites, or produce content. It asks the questions a developmental editor would ask and a resistant first reader would ask — and the writer decides what to do with the answers. This is the curriculum's philosophy made software: AI as thinking partner, not as ghost writer.