THE MOVIE ON THE PAGE · WEEK 30 OF 32 · BONUS BUILD
SCREENWRITING STUDIO

Build Sprint
Editor + Scene Navigator

The spec is written. The backlog is ranked. This week, the tool exists — not in a document, but on a screen, doing something a writer can use.

The Movie on the Page Phase 4 · Bonus Build · Week 30 of 32
Commitment
14–20 hours
Craft Focus
Minimum viable screenplay editor: scene-aware text editing
Cinema Lens
N/A — build time
Page Craft
N/A — no screenplay formatting this phase
Exercise Output
Running prototype — text editor with scene navigation
Budget Dial
N/A — build phase

Last week you defined the tool. This week you build it. The build sprint is exactly what it sounds like: a concentrated burst of construction, focused on getting a working prototype from zero to functional in seven days. Not polished. Not complete. Functional — meaning a writer can open the tool, type screenplay text, and see something happen that no generic text editor provides.

The target for this week is the core of your PRD's feature list: a text editing surface that recognizes scene headings and generates a navigable scene list. That single capability — type a scene heading and watch it appear in a sidebar you can click to jump to that scene — transforms a text editor into a screenwriting workspace. Everything else (metadata panels, outline views, reference docks, AI integration) can come later. This week, you build the foundation: an editor that knows what a scene is.

The prototype doesn't need to be beautiful. It needs to run. A working ugly tool is infinitely more useful than a beautiful design that exists only in a document.

Craft Lecture

Vibe-coding: building with AI as your engineer. You're not expected to be a programmer. The Bonus Build is designed for writers who want to make tools, using AI-assisted code generation — sometimes called "vibe-coding" — to produce working software from natural-language descriptions. The process works like this: you describe what you want the tool to do in plain language. The AI generates code. You run the code. You see what works and what doesn't. You describe the fix. The AI revises. You iterate. Each cycle takes minutes, not days. A writer with no coding experience can produce a functional prototype in a week using this method — as long as the scope is small, the features are simple, and the builder knows when to accept "good enough."

The build sprint protocol. Seven days. Three milestones. Each milestone is a checkpoint — if you reach it, you're on track. If you don't, reduce scope (cut a feature from this week's target and move it to Week 31).

Milestone 1 (Days 1–2): The blank editor runs. A text editing surface exists in a browser or local application. You can type in it. You can save what you type (to a file, to local storage, or to the browser's memory). The font is Courier or Courier Prime. The background is clean. Nothing else — no features, no sidebar, no formatting. Just a place to write that looks like a screenplay page. This milestone validates that your tech stack works and that you can get from code to running application.

Milestone 2 (Days 3–5): Scene detection works. The editor recognizes scene headings — lines that begin with INT. or EXT. (case-insensitive, with reasonable variations). When a scene heading is typed, it appears in a sidebar list. The list updates as the writer types. Clicking an item in the sidebar scrolls the editor to that scene. This is the MVP's defining feature: the editor is now scene-aware. A writer can type a screenplay and navigate by scene rather than by scrolling through a continuous document.

Milestone 3 (Days 6–7): One additional feature works. Choose the highest-priority feature from your backlog that isn't the scene navigator. Possibilities: a scene count displayed in the sidebar, a word/page count in a status bar, the ability to tag a scene with a purpose label via a dropdown or text field, or the ability to export the text as a plain-text file. One feature. Not two. The third milestone proves you can extend the prototype beyond its initial capability — that the architecture supports addition, not just the initial build.

The technical minimum. Your tool needs to run somewhere. The simplest options for a solo builder using AI-assisted coding:

Option A: Web app (HTML + JavaScript). A single HTML file with embedded CSS and JavaScript. Opens in any browser. No installation. No backend. Text is saved to the browser's local storage or exported as a file. This is the lowest-friction option — you can start building immediately with no setup, and the AI can generate the complete application in a single code block. Limitation: local storage isn't permanent, so work can be lost if the browser cache is cleared. Acceptable for a prototype.

Option B: Local app (Electron, Tauri, or similar). A desktop application that runs on your machine. Saves to local files. Feels more like "real" software. More complex to set up than a web app — requires a development environment, package installation, and a build step. Choose this only if you have some familiarity with development tools or if the AI-assisted setup goes smoothly. The additional complexity buys you file-system access and a more native feel, but it's not necessary for the MVP.

Option C: Command-line tool. A script that processes screenplay text files — reads a .fountain or .txt file, extracts scene headings, displays a scene list, and generates a navigable index. No graphical interface. The writer works in their existing text editor and uses the command-line tool for navigation and analysis. This is the fastest to build but the least satisfying to use. Choose this only if you're comfortable with the terminal and don't need a graphical editor.

For most curriculum students, Option A is the right choice. It produces a visible, interactive prototype with the lowest setup overhead, and AI-assisted coding tools are most effective at generating self-contained web applications.

How to work with AI during the build sprint. The AI is your pair programmer. You describe what you want in natural language; it generates code. The cycle works best when your descriptions are specific and your iterations are small:

Good prompt: "Create a simple web-based text editor with a monospace font (Courier Prime), a sidebar on the left, and the main editing area on the right. When I type a line that starts with 'INT.' or 'EXT.', it should appear as a clickable item in the sidebar. Clicking the item should scroll the editor to that line."

Bad prompt: "Build me a screenwriting app." (Too vague — the AI will make hundreds of assumptions, most of them wrong.)

After each generated code block: run it, test it, identify the one most important thing that's broken or missing, and describe the fix. One fix per iteration. Not three. Not five. One. Small iterations converge faster than large ones because each cycle validates one decision before you move to the next.

Craft Principle: Build the smallest thing that works, then add one feature at a time — a prototype that does one thing well is worth more than a design document that promises twenty.
MICRO-EXAMPLE 1: THE SCENE DETECTION PATTERN

The core technical problem: detect scene headings in a block of text and extract them into a navigable list. A scene heading follows this pattern:

- Starts with INT. or EXT. (or INT./EXT.)
- Followed by a location name
- Optionally followed by a time of day (DAY, NIGHT, CONTINUOUS, etc.)
- The entire line is in UPPERCASE (or at least begins with the INT./EXT. prefix in uppercase)

SAMPLE INPUT (what the writer types):

FADE IN:

INT. HIGH SCHOOL CHEMISTRY LAB - EARLY MORNING

A beaker of water on a lab bench.

INT. PRINCIPAL'S OFFICE - DAY

Torres stands behind his desk.

EXT. SCHOOL PARKING LOT - EVENING

Nora sits in her car.

EXPECTED OUTPUT (what the sidebar shows):

1. INT. HIGH SCHOOL CHEMISTRY LAB - EARLY MORNING
2. INT. PRINCIPAL'S OFFICE - DAY
3. EXT. SCHOOL PARKING LOT - EVENING

Each item is clickable. Clicking scrolls the editor to the corresponding line.

DETECTION LOGIC (simplified):

For each line in the text:
  If the line starts with "INT." or "EXT." or "INT./EXT." (case-insensitive):
    Add it to the scene list with its line number.

→ This simple regex catches 95% of scene headings. Edge cases (secondary headings without INT./EXT., lines that happen to start with "Interior" or "Exterior" in prose) can be handled in v2. The MVP needs the 95% case to work reliably.
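The detection logic above can be sketched in a few lines of vanilla JavaScript. This is one plausible implementation, not the prescribed one; the function name detectScenes is illustrative.

```javascript
// Sketch of the simplified detection logic: scan each line, keep the ones
// that start with INT., EXT., or INT./EXT. (case-insensitive), and record
// the line number so the sidebar can scroll the editor there later.
function detectScenes(text) {
  const headingRe = /^\s*(INT\.\/EXT\.|INT\.|EXT\.)/i;
  const scenes = [];
  text.split("\n").forEach((line, index) => {
    if (headingRe.test(line)) {
      scenes.push({ line: index, heading: line.trim() });
    }
  });
  return scenes;
}
```

Run against the sample input above, this returns three entries; the sidebar renders each entry's heading and uses its line number for click-to-scroll. Note that "Interior" or "Exterior" in prose never matches, because the regex requires the literal period after INT/EXT.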
MICRO-EXAMPLE 2: BUILD SPRINT DAILY LOG — WHAT TO TRACK

Keep a brief daily log during the build sprint. Not a journal — a build record. Three lines per day:

DAY 1:
BUILT: Basic HTML editor with Courier Prime font and a two-column layout (sidebar + editor).
WORKS: Can type in the editor. Text persists in browser session.
NEXT: Scene heading detection in the sidebar.

DAY 3:
BUILT: Scene detection — sidebar populates when lines start with INT. or EXT.
WORKS: Sidebar updates in real time. Click-to-scroll works for the first 3 scenes.
BROKEN: Scroll position is off by ~50px for scenes lower in the document. Need to fix offset calc.
NEXT: Fix scroll offset. Then add scene count to sidebar header.

DAY 5:
BUILT: Scroll offset fixed. Scene count displays in sidebar header. Added a simple page-count estimate (total characters / 1500 ≈ pages).
WORKS: Navigation is accurate. Page count updates live. Editor feels responsive with 50+ scenes.
BROKEN: If I paste a full screenplay into the editor, the sidebar rebuilds slowly (~2 sec delay). Acceptable for MVP but flagged for optimization.
NEXT: Add scene purpose tagging (dropdown per scene card) — this is Milestone 3.

DAY 7:
BUILT: Purpose tag dropdown on each scene card. Options: establish / complicate / reveal / decide / release / [custom]. Tags save to local storage.
WORKS: Full workflow: type screenplay → see scene list → click to navigate → tag scenes with purpose → filter by tag (stretch goal — partially working).
SHIP STATUS: Milestone 3 achieved. Prototype is functional. Ready for Week 31 AI integration layer.

→ The daily log keeps the build honest. Three lines: what you built, what works, what's next. No narrative. No reflection. Just the state of the tool at the end of each day. This log becomes the demo script in Week 32.

Core Reading

No Core Reading This Week

This is a build week. No screenplays. No tool reviews. All available time goes to the prototype. If you have spare hours at the end of the week and the build is on track, use them to test the prototype with your own screenplay — paste your finished script into the editor and navigate by scene. That's the best test: does the tool serve the writer who made it?

Writing Exercise

Your Project Progress

Deliverable: Running prototype — text editor with scene navigation.

Constraints: By the end of this week, produce a working application (web app, local app, or CLI tool) that meets the three milestones:

Milestone 1: A text editing surface exists and accepts input. The font is monospace (Courier or Courier Prime). Text persists across the session (local storage, file save, or equivalent). The interface is clean enough that a writer would be willing to type in it.

Milestone 2: The editor detects scene headings (lines beginning with INT. or EXT.) and displays them in a navigable list. The list updates as the writer types. Clicking an item in the list scrolls the editor to that scene. The detection covers the standard 95% case — full INT./EXT. headings with location and time of day.

Milestone 3: One additional feature from the backlog is implemented. The feature works — it's not a stub or a placeholder. A writer can use it as part of a real workflow.

Quality bar: The prototype must be functional, not polished. Functional means: a writer can open it, type or paste screenplay text, see scenes appear in the sidebar, click to navigate, and use the additional feature. The prototype does NOT need to be beautiful (default styling is fine), does NOT need to handle every edge case (unusual heading formats, massive documents, concurrent users), and does NOT need to be deployed on the internet (running locally is sufficient). The test: paste your finished screenplay into the editor. Can you navigate to any scene in under five seconds using the sidebar? If yes, the prototype works.

Estimated time: 14–20 hours across 7 days (Milestone 1: 3–4 hours; Milestone 2: 6–8 hours; Milestone 3: 3–5 hours; testing and debugging: 2–3 hours).

AI Workshop

Phase 4: AI as Collaborator — Build Mode

This week, AI is your pair programmer. The prompts below are starting points for the build — use them to generate the initial code, then iterate through small fix-describe-regenerate cycles. You'll likely send dozens of prompts this week, each one refining the previous output. The prompts here launch the build. Everything after is conversation.

Build Prompt 1 — Editor Foundation
Prompt
Build me a single-page web application (HTML + CSS + JavaScript in one file) with the following features:

LAYOUT:
- A sidebar on the left (250px wide, fixed) with a header that says "SCENES" and an empty list below it.
- A main editing area on the right that fills the remaining width.
- The editing area should be a contenteditable div or a textarea — whichever produces a better writing experience.

STYLING:
- Font: Courier Prime (load from Google Fonts). Fallback: Courier, monospace.
- Font size: 15px in the editor, 12px in the sidebar.
- Background: #faf7f2 (cream). Editor background: white. Sidebar background: #f5f0e8.
- Minimal, clean design. No borders heavier than 1px. The editor should feel like a page, not a form field.

FUNCTIONALITY:
- The writer types in the editor.
- As they type, the app scans the text for lines that begin with "INT." or "EXT." (case-insensitive). These are scene headings.
- Each detected scene heading appears as a clickable item in the sidebar list.
- Clicking a sidebar item scrolls the editor to that scene heading's position.
- The sidebar list updates in real time as the writer types (debounced to avoid lag — update every 500ms of inactivity).
- Display the total scene count in the sidebar header: "SCENES (12)" for example.

PERSISTENCE:
- Save the editor content to localStorage every 5 seconds. Restore it on page load.

Make the whole thing work in a single HTML file I can open in a browser. No build tools. No npm. No framework. Just HTML, CSS, and vanilla JavaScript.
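The "debounced to avoid lag" requirement in the prompt is worth understanding before you ask the AI for it. A minimal sketch of a generic debounce helper (the name debounce is conventional; the 500 ms figure comes from the prompt, and editor/rebuildSidebar in the comment are hypothetical names):

```javascript
// Generic debounce: delays fn until `delayMs` of inactivity has passed.
// Rapid repeated calls (one per keystroke) collapse into a single
// trailing call, so the sidebar rebuild never runs on every keypress.
function debounce(fn, delayMs) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), delayMs);
  };
}

// In the editor this would wrap the sidebar rebuild, e.g.:
// editor.addEventListener("input", debounce(rebuildSidebar, 500));
```

If the AI's generated version lags while typing, checking whether the rebuild is wired through something like this is a good first debugging question.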
Build Prompt 2 — Feature Extension
Prompt
I have a working screenplay editor with a scene-detection sidebar. Now I want to add ONE new feature. Here's the current code:

[Paste your current working HTML file]

The feature I want to add: [Choose ONE from your backlog and describe it clearly. Examples:]

OPTION A — SCENE PURPOSE TAGS: "Add a small dropdown next to each scene in the sidebar with these options: establish, complicate, reveal, decide, release, [none]. When the user selects a tag, it should be stored (in localStorage alongside the scene data) and displayed as a colored dot next to the scene title in the sidebar. Add a filter bar at the top of the sidebar that lets the user show only scenes with a specific tag."

OPTION B — PAGE COUNT ESTIMATOR: "Add a status bar at the bottom of the editor that shows an estimated page count. Calculate it as: total character count / 1500 (the rough average characters per screenplay page). Update it in real time. Also show total word count and total scene count."

OPTION C — REFERENCE DOCK: "Add a collapsible panel below the sidebar that stores persistent reference text — the writer can paste their theme sentence, character notes, etc. into this panel. The panel content saves to localStorage. It has a toggle button to expand/collapse. When expanded, it takes up the lower third of the sidebar space."

OPTION D — EXPORT AS TEXT FILE: "Add an 'Export' button in the sidebar header. When clicked, it downloads the editor content as a .txt file with the screenplay title as the filename. Also add an 'Import' button that lets the user load a .txt file into the editor."

Keep the existing functionality working. Add the new feature without breaking what's already there. Same constraints: single HTML file, no build tools, vanilla JavaScript.
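Option B's arithmetic is simple enough to check in isolation before wiring it into the UI. A sketch, assuming the rough 1,500-characters-per-page average quoted in the prompt; the names estimatePages and statusBar are illustrative:

```javascript
// Sketch of Option B's estimate: total characters / 1500 ≈ pages.
// 1500 chars/page is a rough screenplay average, not an exact conversion.
function estimatePages(text) {
  const CHARS_PER_PAGE = 1500;
  // Round up, and never report zero: a few lines still count as page 1.
  return Math.max(1, Math.ceil(text.length / CHARS_PER_PAGE));
}

// Status-bar text combining word count and the page estimate.
function statusBar(text) {
  const words = text.trim() === "" ? 0 : text.trim().split(/\s+/).length;
  return `${words} words · ~${estimatePages(text)} pages`;
}
```

Keeping the math in pure functions like these also makes the AI's job easier when you later ask it to add scene count to the same status bar: the display logic and the arithmetic stay separable.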
Build Sprint Debugging Protocol

When something breaks — and it will — use this prompt template to get the AI to fix it:

"Here's my current code: [paste]. When I [describe what you did], I expected [what should happen], but instead [what actually happened]. Fix the [specific area — the sidebar, the scroll function, the localStorage save] without changing the rest of the code."

The key: describe the gap between expected and actual behavior. "It's broken" gives the AI nothing to work with. "Clicking the third scene scrolls to the second scene instead" gives the AI a precise bug to fix. Debugging is the build sprint's most time-consuming activity — expect to spend 30–40% of your build time identifying and fixing problems. That's normal. Every iteration that fixes a bug also teaches you something about how the code works, which makes the next iteration faster.

Student Self-Check

Before You Move On
Does the prototype run — can you open it and type screenplay text in a clean, monospace editing surface?
Does the scene detection work — do lines beginning with INT. or EXT. appear in the sidebar list as you type?
Does click-to-navigate work — can you click a scene in the sidebar and jump to that location in the editor?
Have you implemented one additional feature from your backlog beyond the scene navigator?
Have you tested the prototype with your actual screenplay — pasted or typed in — and verified that navigation works across 40+ scenes?

Editorial Tip

The Builder's Eye

The hardest moment in a build sprint is Day 3 — when the initial excitement of "it runs" has faded and the reality of "it doesn't do what I want yet" sets in. The editor works but the sidebar flickers. The detection works but misses headings with unusual formatting. The click-to-navigate works but the scroll position is off by a few lines. These are not failures. They're the normal state of software in development. The difference between a builder who ships and one who abandons is the response to Day 3: the builder who ships says "the scroll is off by 50 pixels — let me fix that specific thing." The builder who abandons says "this isn't working" and walks away from the project. Be specific about what's wrong. Fix one thing at a time. The prototype doesn't need to be right on Day 3. It needs to be closer to right than it was on Day 2.

Journal Prompt

Reflection

Building software and writing a screenplay have more in common than you might expect. Both start with a plan (PRD or outline) that changes on contact with reality. Both require iterating through a build-test-revise cycle. Both demand the discipline of working within constraints (feature scope or budget tier). And both produce an artifact that must be evaluated by someone other than the maker. Write about one parallel you noticed this week — a moment during the build sprint that felt like a moment from the drafting phase. Did the prototype surprise you the way the draft surprised you? Did you discover a feature you hadn't planned, the way you discovered unplanned scenes during drafting? Did you have to cut a feature for scope, the way you cut scenes for pace? The creative process has a shape that transcends the medium. The writer who sees that shape is a writer who can build anything.

Week Summary

What You've Built

By the end of this week you should have:

• A running prototype — a web app (or local app) that functions as a screenplay editor
• Scene detection working — lines starting with INT./EXT. populate a sidebar list in real time
• Click-to-navigate working — sidebar items scroll the editor to the corresponding scene
• One additional feature implemented from the backlog
• The prototype tested with your actual screenplay (40+ scenes)
• A daily build log tracking what was built, what works, and what's next
• The prototype saved/exported as a file you can share or deploy

Looking Ahead

Next Week

Week 31: AI Features (Questions-Only Script Doctor). The editor works. The scene navigator runs. Next week, you add the layer that makes the tool distinctive: an AI panel that doesn't write for the writer — it asks questions. The "Two Readers" model from the curriculum becomes a product feature: the writer selects a scene, invokes the AI panel, and receives diagnostic questions from Reader A and Reader B. The AI doesn't generate dialogue, suggest rewrites, or produce content. It asks the questions a developmental editor would ask and a resistant first reader would ask — and the writer decides what to do with the answers. This is the curriculum's philosophy made software: AI as thinking partner, not as ghost writer.

Your Portfolio So Far
Week 1–28: Complete screenplay + packaging ✓ PHASES 0–3
Week 29: Product Spec — PRD + feature backlog
Week 30: Build Sprint — editor + scene navigator (THIS WEEK)
Week 31: AI Features — Questions-Only Script Doctor
Week 32: Deploy + Demo + Roadmap ★
✦ ✦ ✦