THE MOVIE ON THE PAGE · WEEK 31 OF 32 · BONUS BUILD
SCREENWRITING STUDIO

AI Features
Questions-Only Script Doctor

Most AI writing tools offer answers. Yours offers questions — because the writer who knows the right question already has the answer inside them.

Commitment
14–20 hours
Craft Focus
Designing AI that asks questions rather than generates answers
Cinema Lens
N/A — build time
Page Craft
N/A — no screenplay formatting this phase
Exercise Output
"Two Readers" panel inside the app — AI asks diagnostic questions
Budget Dial
N/A — build phase

Your editor runs. The scene navigator works. A writer can type a screenplay, see their scenes in a sidebar, and click to jump between them. This week you add the feature that makes the tool philosophically distinct from every other AI writing product on the market: an AI panel that never writes a single word of the screenplay. It only asks questions. The "Two Readers" model you've been using since Week 5 — Reader A the developmental editor, Reader B the resistant first reader — becomes a product feature. The writer selects a scene, clicks a button, and receives diagnostic questions: What's the scene's goal? Where's the friction? Does the turn change the emotional register? Could you cut this scene without losing story? The AI doesn't suggest dialogue. It doesn't propose alternative scenes. It doesn't generate anything the writer could paste into the draft. It asks the questions a smart reader would ask — and the writer decides what to do with them. This is the curriculum's philosophy encoded in software: AI as thinking partner, never as ghostwriter.

The most useful AI feature for a writer isn't one that writes. It's one that makes the writer see what they've already written — more clearly than they could see it alone.

Craft Lecture

Why questions, not answers. The AI writing tools that dominate the market operate on a single model: the writer provides context, the AI generates text, the writer edits the output. The model is efficient but destructive — it erodes the writer's agency, one generated paragraph at a time. Each time the writer accepts AI-generated text, the screenplay becomes a collaboration where the writer's voice is diluted by the model's statistical median. After enough cycles, the screenplay doesn't sound like the writer. It sounds like a screenplay — generically competent, tonally neutral, structurally adequate, and utterly without personality. The questions-only model reverses this. The AI never generates screenplay content. It generates attention — diagnostic questions that direct the writer's focus toward specific aspects of the scene they might not have examined. The writer's response to the questions is always their own: their voice, their judgment, their creative authority. The AI makes the writer think harder. It doesn't think for them.

The Two Readers as a product feature. Throughout the curriculum, you used two reader personas — Reader A (developmental editor, focused on structure and craft) and Reader B (resistant first reader, focused on experience and emotion). These personas weren't arbitrary. They simulate the two kinds of feedback professional screenwriters receive: the analytical reader who evaluates architecture and the intuitive reader who reports feeling. Both are essential. Both are hard to get from the same person. Your tool makes both available on demand, scoped to a single scene, without the writer needing to construct a prompt from scratch each time.

The interaction model. Here's how the Two Readers panel should work in the tool:

Step 1: Scene selection. The writer clicks a scene in the sidebar navigator. The scene's text is highlighted or isolated in the editor pane.

Step 2: Reader invocation. The writer clicks a "Diagnose" button (or two separate buttons — "Reader A" and "Reader B"). The tool sends the scene text to the AI along with a system prompt that defines the reader's persona and question-generation rules.

Step 3: Question delivery. The AI returns 3–5 diagnostic questions — not feedback, not suggestions, not rewritten text. Questions. They appear in a panel below the editor or beside it, styled distinctively (Reader A in blue, Reader B in warm red — the same color coding from the curriculum's lesson pages).

Step 4: Writer decides. The writer reads the questions. There's no "accept" or "apply" button because there's nothing to accept — the questions are prompts for thought, not proposals for change. The writer may revise the scene, leave it unchanged, or make a note for a future pass. The tool records nothing about the writer's decision. The decision is private. The tool's job ended when the questions appeared.

Designing the system prompts. The quality of the Two Readers panel depends entirely on the system prompts that define each reader's behavior. These prompts are hardcoded into the tool — the writer doesn't need to write or modify them. They're the product's secret ingredient: the craft knowledge you accumulated across twenty-eight weeks, compressed into instructions that make the AI behave like a skilled reader rather than a generic chatbot.

The system prompts must enforce three constraints:

Constraint 1: Questions only. The AI must never generate screenplay text — no dialogue suggestions, no action line rewrites, no "you could try this" proposals. The system prompt must explicitly prohibit text generation and instruct the model to respond only with questions. This is the hardest constraint to enforce because language models default to helpfulness, and helpfulness in a writing context usually means generating text. The system prompt needs to be blunt: "You must ONLY ask questions. Do not generate any screenplay text, dialogue, action lines, or suggestions for specific wording. Your job is to ask diagnostic questions that help the writer see the scene more clearly."

Constraint 2: Scene-scoped. The questions must address the specific scene the writer selected — not the screenplay in general, not screenwriting theory, not abstract craft principles. The system prompt should instruct the model to reference specific details from the submitted scene text: character names that appear, actions described, dialogue exchanged. Generic questions ("Does this scene serve the story?") are less useful than specific ones ("Nora enters the room but doesn't speak for twelve lines — is her silence strategic, or has the scene forgotten she's present?").

Constraint 3: Persona-consistent. Reader A asks structural questions — about the scene's engine, its purpose in the arc, its causal relationship to adjacent scenes, its efficiency. Reader B asks experiential questions — about the emotional impact, the felt pace, the dialogue's naturalism, the moment where a reader's attention might wander. The two readers should never ask the same kind of question. Their divergence is the feature's value.

The context dock integration. If you built a reference dock in Week 30 (or plan to add one this week), the Two Readers panel becomes significantly more powerful when it can access the writer's persistent reference material — theme sentence, character dossier, escalation ladder notes. With context, Reader A can ask "This scene's turn involves Graham offering a deal — is this Escalation Level 3 or Level 4 in your plan?" Without context, Reader A can only ask about what's visible in the scene text itself. Context makes the questions sharper. If the context dock exists, the system prompt should instruct the AI to reference it. If it doesn't, the questions are still useful — just less personalized.

What the MVP version looks like. The full vision — a polished Two Readers panel with context integration, response history, and color-coded persona styling — is a v2 feature. The MVP version is simpler:

A single "Diagnose Scene" button that sends the selected scene's text to an AI API with a system prompt combining both Reader A and Reader B instructions. The AI returns 4–6 questions (the first 2–3 from Reader A's structural perspective, the last 2–3 from Reader B's experiential perspective). The questions appear in a panel below the editor. No history. No context dock integration. No persona switching. Just: select a scene, click a button, receive questions. That's the MVP. If you have time and energy remaining, add persona separation (two buttons, two prompt sets, two colored output zones). If you don't, the combined version is sufficient.

Craft Principle: An AI that asks the right question is more valuable to a writer than an AI that generates a good paragraph — because the question builds the writer's skill, and the paragraph replaces it.
MICRO-EXAMPLE 1: SYSTEM PROMPT — READER A (STRUCTURAL)

SYSTEM PROMPT FOR READER A:

You are Reader A — a developmental editor analyzing a single scene from a feature screenplay. Your role is to ask DIAGNOSTIC QUESTIONS that help the writer evaluate the scene's structural integrity.

RULES:
- Ask 3 questions maximum.
- NEVER generate screenplay text, dialogue, action lines, or suggestions for specific wording.
- NEVER say "you could try..." or "consider writing..."
- ONLY ask questions. Every response must consist of questions and nothing else.
- Reference specific details from the scene text: names, actions, objects, dialogue lines.
- Focus on: the scene's engine (goal, friction, turn), its purpose in the larger arc, its causal connections to what comes before and after, its efficiency (could it be shorter without losing function?).

EXAMPLES OF GOOD READER A QUESTIONS:
"Torres tells Nora to 'go through proper channels' — is this the friction in the scene, or is the friction something Nora encounters before she reaches Torres?"
"The scene runs 3 pages but the turn (Nora finding the locked door) happens in the last 4 lines — could the first 2.5 pages be compressed, or is the slow build earning the turn's impact?"
"After this scene, does the audience know something they didn't know before? If so, what — and is this the earliest scene where that information could be delivered?"

EXAMPLES OF BAD READER A QUESTIONS:
"Does this scene work?" (Too vague — not diagnostic.)
"Have you considered making Torres more sympathetic?" (This is a suggestion, not a question about the scene.)
"Could you add a line where Nora mentions the deadline?" (This is a text-generation suggestion.)
MICRO-EXAMPLE 2: SYSTEM PROMPT — READER B (EXPERIENTIAL)

SYSTEM PROMPT FOR READER B:

You are Reader B — a resistant first reader reacting to a single scene from a feature screenplay. Your role is to ask DIAGNOSTIC QUESTIONS that help the writer evaluate the scene's emotional and experiential impact.

RULES:
- Ask 3 questions maximum.
- NEVER generate screenplay text, dialogue, action lines, or suggestions for specific wording.
- NEVER say "you could try..." or "consider writing..."
- ONLY ask questions. Every response must consist of questions and nothing else.
- Reference specific details from the scene text: names, actions, objects, dialogue lines.
- Focus on: emotional impact, dialogue naturalism, pacing (where attention sharpens or wanders), character behavior under pressure, the felt experience of reading the scene.

EXAMPLES OF GOOD READER B QUESTIONS:
"Nora doesn't respond to Graham's offer for six lines — during that silence, am I leaning forward (because the tension is unbearable) or checking out (because the scene has stalled)? Which one did you intend?"
"Torres uses the phrase 'community partnership' three times in this scene — is the repetition making him sound institutional and evasive (which works) or making him sound like a broken record (which might lose me)?"
"When I finished this scene, what was I feeling? I think I was feeling [nothing specific] — is there a moment in the scene that should be hitting me emotionally that isn't landing on the page?"

EXAMPLES OF BAD READER B QUESTIONS:
"Is this scene emotionally effective?" (Too vague.)
"Have you tried writing this from Graham's point of view?" (Suggestion, not question.)
"The dialogue could be snappier — what if Torres said something more cutting?" (Text-generation suggestion disguised as a question.)

Building the Feature

Implementation Guide

The technical challenge. Adding AI questions to your editor requires one new capability: sending text to an AI API and displaying the response. If you're building a web app (Option A from Week 30), this means making an API call from JavaScript. The simplest approach: use the Anthropic API or another LLM provider's API to send the scene text with the system prompt and receive the questions.
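A minimal sketch of that call, assuming a browser-side fetch() against the Anthropic Messages API. The endpoint and model name come from the build prompt later in this section; requestDiagnosis and the key constant are illustrative names, not a prescribed interface:

// Sketch: send one scene plus a system prompt to the Anthropic Messages API
// and return the questions as plain text. Local-prototype only — see the key
// management note below before deploying anywhere.
const API_KEY = "YOUR-KEY-HERE"; // placeholder; never ship a key in client-side code

async function requestDiagnosis(systemPrompt, sceneText) {
  const response = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "content-type": "application/json",
      "x-api-key": API_KEY,
      "anthropic-version": "2023-06-01",
      // Opt-in header Anthropic requires for direct browser calls.
      "anthropic-dangerous-direct-browser-access": "true",
    },
    body: JSON.stringify({
      model: "claude-sonnet-4-20250514",
      max_tokens: 1024,
      system: systemPrompt,
      messages: [{ role: "user", content: sceneText }],
    }),
  });
  if (!response.ok) throw new Error("API returned " + response.status);
  const data = await response.json();
  return data.content[0].text; // the diagnostic questions, as plain text
}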

API key management. For a local prototype (not deployed publicly), you can store your API key in the code or in a local configuration file. For a deployed version, the key must be stored server-side — never exposed in client-side JavaScript. For the MVP, local-only usage with a hardcoded key is acceptable. Note this as a deployment requirement for v2.

The interaction flow in code (a sketch follows the list):

1. Writer clicks the "Diagnose Scene" button.
2. The app identifies the currently selected scene (using the scene navigator — the text between the current scene heading and the next one).
3. The app constructs a message: the system prompt (Reader A, Reader B, or combined) plus the scene text as the user message.
4. The app sends the message to the AI API.
5. The response (3–6 questions) is displayed in a panel below the editor.
6. The writer reads, thinks, and revises on their own.
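A minimal sketch of that flow wired to a click handler, assuming the requestDiagnosis() helper sketched above, the extractSceneText() and showPanel() helpers sketched under Features 1 and 3 in the exercise below, a COMBINED_SYSTEM_PROMPT constant holding the merged Reader A + Reader B instructions, and an editor variable bound to the <textarea> — all illustrative names:

// Orchestration sketch — wires the six-step flow to the Diagnose button.
document.getElementById("diagnose-btn").addEventListener("click", async () => {
  const scene = extractSceneText(editor.value, editor.selectionStart);
  if (!scene) return; // cursor isn't inside a scene yet
  showPanel("Thinking..."); // the panel doubles as a progress indicator
  try {
    const questions = await requestDiagnosis(COMBINED_SYSTEM_PROMPT, scene);
    showPanel(questions);
  } catch (err) {
    showPanel("Diagnosis failed: " + err.message);
  }
});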

Fallback if API integration is too complex: If connecting to an AI API proves technically difficult this week, build the panel UI without the live connection. The "Diagnose Scene" button can copy the scene text plus the system prompt to the clipboard, ready to paste into a separate AI chat window. The writer still gets the value (the system prompt is pre-built, the scene text is pre-selected), just through a manual step. This clipboard approach is a legitimate MVP — it removes the friction of constructing prompts from scratch, which was one of the curriculum's top frustrations.
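A sketch of the clipboard version, reusing the showPanel() helper from Feature 3 below. The separator line is an arbitrary choice; note that navigator.clipboard is only available on localhost or over HTTPS:

// Clipboard fallback sketch: bundle the system prompt with the scene text
// so the writer can paste the whole package into an external AI chat.
async function copyDiagnosticPrompt(systemPrompt, sceneText) {
  const payload = systemPrompt + "\n\n--- SCENE ---\n\n" + sceneText;
  await navigator.clipboard.writeText(payload);
  showPanel("Scene text and diagnostic prompt copied — paste into your AI tool to receive questions.");
}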

Core Reading

No Core Reading This Week

Build time. All available hours go to implementing the AI panel. If you finish early, test the panel by running it against five scenes from your finished screenplay. Evaluate: are the questions useful? Do they make you see something you missed? Is there a question type that's consistently unhelpful — and if so, what adjustment to the system prompt would fix it?

Writing Exercise

Your Project Progress

Deliverable: "Two Readers" panel inside the app — AI asks diagnostic questions, writer decides.

Constraints: By the end of this week, the prototype should include:

Feature 1: Scene text extraction. When the writer selects a scene in the sidebar (or positions their cursor within a scene), the tool can identify the scene's full text — from its heading to the next scene heading. This text is the input for the AI panel.
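One way to sketch this extraction, assuming a <textarea> editor and the INT./EXT. heading convention from the Week 30 navigator (extractSceneText is the illustrative name used in the flow sketch earlier):

// Extraction sketch: given the full document text and the cursor's character
// index, return the enclosing scene — heading through the line before the
// next heading — or null if the cursor precedes the first scene.
const HEADING = /^(INT\.|EXT\.|INT\.\/EXT\.)/;

function extractSceneText(fullText, cursorIndex) {
  const lines = fullText.split("\n");
  // Convert the cursor's character index into a line number.
  let charCount = 0, cursorLine = 0;
  for (let i = 0; i < lines.length; i++) {
    charCount += lines[i].length + 1; // +1 for the newline
    if (charCount > cursorIndex) { cursorLine = i; break; }
  }
  // Walk back to the enclosing heading, then forward to the next one.
  let start = cursorLine;
  while (start > 0 && !HEADING.test(lines[start].trim())) start--;
  if (!HEADING.test(lines[start].trim())) return null;
  let end = start + 1;
  while (end < lines.length && !HEADING.test(lines[end].trim())) end++;
  return lines.slice(start, end).join("\n");
}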

Feature 2: Diagnose button. A "Diagnose Scene" button (in the sidebar, in a toolbar, or in a right-click menu) that triggers the AI analysis. Clicking it sends the selected scene's text to the AI along with the system prompt.

Feature 3: Question display. The AI's response (3–6 diagnostic questions) appears in a visible panel — below the editor, in a slide-out drawer, or in a modal. The questions are styled distinctly from the editor text (different background, different font size, or color coding if Reader A and Reader B are separated). The panel can be dismissed or collapsed when the writer is done reading.
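A minimal sketch of the display, assuming a <div id="diagnosis-panel"> below the editor and borrowing the colors from the build prompt later in this section:

// Panel sketch: show text in a styled, dismissible panel below the editor.
function showPanel(text) {
  const panel = document.getElementById("diagnosis-panel");
  panel.style.cssText =
    "display:block; background:#e8f0f8; border-left:4px solid #2c4a6e; " +
    "padding:12px; font-size:0.9em; white-space:pre-wrap;";
  panel.textContent = text; // textContent, not innerHTML — AI output is untrusted
}
// Dismissal can be a close button that sets panel.style.display = "none".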

Fallback version: If live API integration isn't achievable this week, the Diagnose button copies the system prompt + scene text to the clipboard, ready for the writer to paste into an external AI chat. The panel displays a message: "Scene text and diagnostic prompt copied — paste into your AI tool to receive questions." This version still delivers value by automating the prompt construction.

Quality bar: The questions generated (either live or via the clipboard fallback) must be genuinely diagnostic — specific to the scene's content, referencing characters and events from the text, never suggesting text or rewrites. If the system prompt produces generic questions ("Is this scene necessary?") or text-generation suggestions ("Try making the dialogue sharper"), revise the system prompt until the output meets the questions-only constraint. Test with at least three scenes from your finished screenplay. The questions should make you think about the scene differently — not tell you what to do.

Estimated time: 14–20 hours (scene text extraction: 2–3 hours; API integration or clipboard fallback: 4–6 hours; question display panel: 3–4 hours; system prompt refinement: 2–3 hours; testing and debugging: 3–4 hours).

AI Workshop

Phase 4: AI as Collaborator — Feature Design

This week's AI collaboration is meta — you're using AI to help you design a feature that uses AI. The prompts below help you refine the system prompts and build the integration.

System Prompt Refinement
Prompt
I'm building a "Questions-Only Script Doctor" feature for a screenwriting tool. The feature sends a single scene from a screenplay to an AI and receives 3–5 diagnostic questions — never text suggestions, never rewrites, never generated dialogue.

Here's my current system prompt for Reader A (structural):
[Paste your Reader A system prompt]

And here's my current system prompt for Reader B (experiential):
[Paste your Reader B system prompt]

I tested them with this scene:
[Paste a scene from your screenplay — 1–2 pages]

The questions I received:
[Paste the AI's output]

Evaluate:

1. CONSTRAINT COMPLIANCE: Did the AI stay within the questions-only rule? Flag any response that crossed into suggestion territory ("you could...", "consider...", "try...").
2. SPECIFICITY: Are the questions specific to THIS scene — referencing characters, actions, and dialogue from the text? Or are they generic enough to apply to any scene in any screenplay?
3. DIAGNOSTIC VALUE: Would these questions make a writer see the scene differently? Rate each question: HIGH (forces a new perspective), MEDIUM (confirms what the writer probably already knows), LOW (too vague to be actionable).
4. PERSONA DISTINCTION: Can you tell which questions came from Reader A (structural) and which from Reader B (experiential)? If not, the prompts need sharper persona differentiation.
5. PROMPT REVISION: Suggest specific revisions to the system prompts that would improve specificity, enforce the questions-only constraint more reliably, and sharpen the persona distinction.

Help me make the system prompts better.
AI Panel Build Prompt
Prompt
I have a working screenplay editor (single HTML file, vanilla JavaScript). Here's the current code:

[Paste your current working HTML file]

I want to add an AI diagnostic panel. Here's the feature spec:

1. A "Diagnose Scene" button appears in the sidebar when a scene is selected (highlighted/clicked).
2. Clicking the button extracts the selected scene's text — from its heading to the next INT./EXT. heading.
3. The scene text is sent to the Anthropic API (or: copied to the clipboard with a system prompt prepended — if API integration is too complex).
4. The response is displayed in a panel below the editor. The panel has a header ("SCENE DIAGNOSIS"), a close button, and the AI's questions displayed as a numbered list.
5. The panel uses different styling from the editor: slightly smaller font, a light blue-gray background (#e8f0f8), and a left border in blue (#2c4a6e).

FOR API VERSION:
- Use fetch() to call the Anthropic messages API at https://api.anthropic.com/v1/messages
- Model: claude-sonnet-4-20250514
- System prompt: [Paste your combined Reader A + Reader B system prompt]
- The user message is the scene text
- Display a "Thinking..." indicator while waiting for the response

FOR CLIPBOARD VERSION (fallback):
- Concatenate the system prompt + the scene text
- Copy to clipboard using navigator.clipboard.writeText()
- Show a confirmation: "Diagnostic prompt copied — paste into your AI tool"

Keep the existing editor and scene navigator working. Add the diagnostic panel without breaking anything. Same constraints: single HTML file, no build tools, vanilla JavaScript.
Testing Protocol

After implementing the panel, test it with five scenes from your finished screenplay — one from each act area: an Act I establishment scene, an Act IIa complication scene, the midpoint scene, an Act IIb escalation scene, and the climax. For each scene, evaluate the questions along four criteria:

Specific? Do the questions reference details from the actual scene text (character names, actions, dialogue), or are they generic?
Diagnostic? Do they help you see the scene differently — identify a problem you hadn't noticed or confirm a strength you weren't sure about?
Questions-only? Did the AI stay within the constraint — no text suggestions, no rewrites, no generated content?
Distinct? If using separate Reader A and Reader B prompts, do the questions clearly come from different perspectives?

Record the results. If any criterion consistently fails, revise the system prompt and retest. The system prompt is the product's core intellectual property — the accumulated craft knowledge that makes the AI's questions useful rather than generic. It deserves the same iterative refinement you gave the screenplay's dialogue during Week 25.

Student Self-Check

Before You Move On
Can the tool identify and extract the text of a selected scene — from its heading to the next heading?
Does the Diagnose button work — either sending the scene to an AI API and displaying questions, or copying the prompt to the clipboard for external use?
Do the system prompts enforce the questions-only constraint? Run three scenes through the panel — did the AI generate any text suggestions or rewrite proposals?
Are the questions specific to the scene's content — referencing characters, actions, and details from the text — rather than generic craft advice?
Have you tested the panel with at least five scenes from your finished screenplay and recorded the quality of the questions?

Editorial Tip

The Builder's Eye

The questions-only constraint will feel limiting — both to you as the builder and to the AI as the responder. The model will try to be helpful by suggesting fixes. The writer using the tool will wish it would just tell them what's wrong. That desire — the pull toward answers rather than questions — is exactly what the constraint exists to resist. Every AI writing tool on the market gives answers. Yours gives questions. That's not a limitation. It's the product's identity. The tool that asks "What is Nora's goal in this scene?" produces a writer who thinks about goals in every scene they write. The tool that says "Nora's goal should be to confront Torres" produces a writer who waits for the tool to tell them what to do. The first writer improves. The second writer depends. Build the tool that improves the writer.

Journal Prompt

Reflection

You've now experienced AI from both sides — as a writer receiving AI-generated questions during Weeks 5–28, and as a builder designing the prompts that generate those questions. Write about how the perspective shift changed your understanding of the Two Readers model. When you were the writer, the questions felt like they came from a reader. Now that you've written the system prompts, you know they come from a set of instructions you designed — a carefully constructed persona with rules and constraints. Does knowing how the sausage is made change the sausage's taste? When you test the panel with your own scenes and receive questions, do the questions still feel useful — even though you wrote the instructions that produced them? The answer matters, because it determines whether the tool is genuinely helpful or merely a mirror of the builder's own assumptions. If the questions surprise you — if they make you see something you didn't see — the system prompt is working. If they only confirm what you already know, the prompt needs more craft knowledge and less formula.

Week Summary

What You've Built

By the end of this week you should have:

• Scene text extraction — the tool can identify and isolate a selected scene's full text
• A "Diagnose Scene" button that triggers the AI analysis (live API or clipboard fallback)
• System prompts for Reader A (structural) and Reader B (experiential) that enforce the questions-only constraint
• A question display panel styled distinctly from the editor
• The panel tested with at least 5 scenes from your finished screenplay
• System prompts refined based on test results — specificity, constraint compliance, and persona distinction verified
• A daily build log tracking progress

Looking Ahead

Next Week

Week 32: Deploy + Demo + Roadmap — the final week of the curriculum. The tool exists. The editor runs. The scene navigator works. The AI panel asks questions. Next week, you ship: deploy the tool (even if deployment means "it runs on my machine and I can share the file"), write a short demo script (a walkthrough showing the tool in action), and produce a v2 roadmap listing the features you'd build next if you continued. The final Phase Gate is simple: a working prototype and a demo. The real deliverable of this entire course was the screenplay from Week 28. The app is the bonus — proof that a writer who finishes a screenplay has the discipline and creative judgment to build tools for other writers. You did both.

Your Portfolio So Far
Weeks 1–28: Complete screenplay + packaging ✓ PHASES 0–3
Week 29: Product Spec — PRD + feature backlog
Week 30: Build Sprint — editor + scene navigator
Week 31: AI Features — Questions-Only Script Doctor (THIS WEEK)
Week 32: Deploy + Demo + Roadmap ★
✦ ✦ ✦