XucroYuri/how-to-make-script

How to Make Script

Open-source screenwriting knowledge infrastructure for writers and agents.
Route, generate, review, and orchestrate narrative, branded, and interactive scripts.


See an Example · Install as a Skill · Browse by Goal · Challenge a Claim

Not a prompt dump. Not a single-method gospel. Not a UI-first product. Durable creative infrastructure for screenplay work: routable knowledge, clear workflow contracts, reusable review logic, and community-driven correction loops.


60-Second Example

Request

Turn this idea into a feature-film beat sheet:
"A journalist who has spent years avoiding the truth behind her father's death
is forced back to her mining hometown to investigate an old case."

Selected route

| Layer | Selection |
| --- | --- |
| Skill | `skill.structure-beat` |
| Protocol | `wp.structure-beat-outline` |
| Review | `rb.outline` + optional `quality_gate_report` |

Artifact excerpt

```markdown
## Beat List
- Opening imbalance: She avoids every mining story that crosses her desk.
- Lock-in: A fragment from her father's case file forces her back home.
- Midpoint turn: She learns her own silence helped protect the cover-up.
```

Full example chain:

What Makes It Different

| Principle | How it works |
| --- | --- |
| route-first | Primary route anchored by intent × medium × stage × output; constraints refine tie-breaks and loading |
| research-first | Stable knowledge lives in versioned assets, not hidden chat memory |
| bounded-loading | Agents load the smallest useful bundle instead of the whole repository |
| challenge-friendly | Counterexamples, objections, and field reports are first-class improvement inputs |
| multi-surface | Covers writing artifacts, review, team orchestration, project surfaces, and downstream handoff |
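The route-first principle can be read as a lookup keyed by the four primary signals, with constraints used only to break ties. The sketch below is illustrative, not the repository's actual router implementation; the request and route field names are assumptions.

```python
# Hypothetical sketch of route-first selection. The "key"/"constraints"
# field names are assumptions, not the repository's real route schema.

def select_route(request, routes):
    """Pick the route matching intent × medium × stage × output.

    Constraints never define the primary key; they only refine tie-breaks.
    """
    key = (request["intent"], request["medium"],
           request["stage"], request["output"])
    candidates = [r for r in routes if r["key"] == key]
    if len(candidates) > 1:
        wanted = set(request.get("constraints", []))
        # Prefer the candidate sharing the most constraint tags.
        candidates.sort(key=lambda r: len(wanted & set(r.get("constraints", []))),
                        reverse=True)
    return candidates[0] if candidates else None
```

In this reading, the repository's `router-matrix.json` would supply the `routes` table and the agent supplies the classified request.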

What It Helps You Do

  • Turn a vague idea into concrete artifacts: logline, premise, beat_sheet, outline, scene_draft, commercial_script
  • Route each request to the right protocol, rubric, and minimal knowledge bundle
  • Compare multiple viable creative directions instead of locking into one method
  • Diagnose drafts with rewrite_report, quality_gate_report, boundary_map, or scope_correction
  • Handle broad theory and long-form continuity with research_background_map and story_memory_checkpoint
  • Bridge into voice calibration, multilingual visual language, and screen-to-video handoff
  • Design multi-agent or writers' room workflows with defined casts, dispatch plans, and handoff contracts

Who It Is For

Good fit

| Audience | What you get |
| --- | --- |
| Writers and story developers | Durable reference, structure, and self-check instead of loose prompt fragments |
| Agent builders | Explicit routing, bounded loading, reusable contracts, and machine-readable registries |
| Script reviewers and educators | Rubrics, failure contrasts, and challengeable heuristics instead of vague taste judgments |
| Multi-agent workflow designers | Team modes, dispatch patterns, handoff packets, and role-aware orchestration |

Not the best fit

| Audience | Why |
| --- | --- |
| People looking for one magic prompt | This repo optimizes for reusable systems, not shortcut prompt hacks |
| People who want one absolute method | The design assumes screenplay work is plural, unstable, and context-bound |
| People who only want a polished app UI | This is a repo-first knowledge and skill system, not a hosted product |

Quick Start

1. Browse a real example

2. Install as a skill

Clone the latest version from GitHub first, then point your tool at that local checkout:

```bash
git clone https://github.com/XucroYuri/how-to-make-script.git ~/.local/share/how-to-make-script
# Later updates:
git -C ~/.local/share/how-to-make-script pull --ff-only
```
Codex

Use the absolute path of the cloned repository, for example /Users/<you>/.local/share/how-to-make-script.

```toml
[[skills.config]]
path = "/Users/<you>/.local/share/how-to-make-script"
enabled = true
```
Claude Code
```bash
mkdir -p ~/.claude/skills
ln -sfn ~/.local/share/how-to-make-script ~/.claude/skills/how-to-make-script
```
OpenCode
```bash
mkdir -p ~/.config/opencode/skills
ln -sfn ~/.local/share/how-to-make-script ~/.config/opencode/skills/how-to-make-script
```
Gemini CLI

Clone https://github.com/XucroYuri/how-to-make-script.git into any shared skills directory your Gemini CLI setup recognizes, then register that local checkout as the extension root.

OpenClaw

Clone https://github.com/XucroYuri/how-to-make-script.git into the skills directory your OpenClaw setup scans, or symlink ~/.local/share/how-to-make-script into that directory. Either way, keep the repo root as the runtime entrypoint so SKILL.md remains the entry contract.

3. Verify repository health

Run validation locally
```bash
python3 scripts/validate_assets.py
python3 scripts/check_semantic_consistency.py
python3 scripts/check_background_bundles.py
python3 scripts/check_routes.py
python3 scripts/check_route_overlaps.py
python3 scripts/check_subagent_registries.py
python3 scripts/check_community_surfaces.py
python3 scripts/check_links.py
python3 scripts/check_forbidden_paths.py
python3 scripts/check_canonical_terms.py
python3 scripts/check_question_todos.py
python3 scripts/check_golden_artifact_formats.py
python3 scripts/run_fixture_suite.py
python3 -m unittest discover -s tests -v
```
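If you prefer a single command that runs the whole list and reports what failed, a small runner like the sketch below works. The script paths come from the list above; the `run_checks` helper itself is a convenience sketch, not a script shipped with the repository.

```python
# Convenience runner for the repository's validation scripts. This helper
# is a sketch, not part of the repo; run it from the repository root.
import subprocess
import sys

CHECKS = [
    [sys.executable, "scripts/validate_assets.py"],
    [sys.executable, "scripts/check_routes.py"],
    # ... add the remaining scripts/check_*.py entries from the list above
    [sys.executable, "-m", "unittest", "discover", "-s", "tests"],
]

def run_checks(commands):
    """Run each command; return the ones that exited non-zero."""
    return [cmd for cmd in commands
            if subprocess.run(cmd).returncode != 0]

# Usage (from the repo root):
#   failed = run_checks(CHECKS)
#   any failures are returned as the full command lists
```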

How The System Works

The diagram below gives a high-level view of the repository's routing and improvement loop.


```mermaid
flowchart LR
    S["Preflight Sync<br/>upstream SHA check"] --> A["Request"]
    A --> B["Classify & Route<br/>intent × medium × stage × output"]
    B --> C["Load Bundle<br/>protocol + rubric + atoms"]
    C --> D["Generate Artifact"]
    D --> E["Self-Check<br/>rubric-based quality gate"]
    E --> F["Human Feedback"]
    F --> G["Improve Assets"]
    G -.->|"next run"| S
```
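The automated half of the loop can be read as a simple pipeline. The sketch below uses illustrative callables (`classify`, `load_bundle`, `generate`, `self_check`) that stand in for the repository's real routing and rubric machinery; none of these names is the repo's actual API.

```python
# Sketch of one automated pass through the loop. All four callables are
# illustrative stand-ins supplied by the caller, not the repository's API.

def run_once(request, classify, load_bundle, generate, self_check):
    route = classify(request)              # Classify & Route
    bundle = load_bundle(route)            # Load Bundle: protocol + rubric + atoms
    artifact = generate(request, bundle)   # Generate Artifact
    report = self_check(artifact, bundle)  # Self-Check: rubric-based quality gate
    return artifact, report                # human feedback and asset fixes happen outside
```

Human feedback and asset improvement close the loop between runs, which is why they sit outside this function.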

Calling From Another Agent

  • Start at SKILL.md for the root orchestration contract.
  • Use references/supported-outputs.md to choose the smallest appropriate output instead of inventing a blended artifact.
  • Use references/output-format-contracts.md when you need the minimum Markdown handoff shape for golden artifacts.
  • Use references/router-matrix.json and references/routing-policy.md to understand route selection and constraint signals.
  • Use research_background_map for broad "how to create a screenplay" or theory-support requests.
  • Use story_memory_checkpoint when the real need is resumable continuity or handoff-safe state.
  • Use project_surface_map when the real need is long-running workflow design or packet/export governance.

Find Your Entry Point

Writers and reviewers

  1. Narrative Pattern Pack
  2. Adaptive Quality Checking
  3. Supported Outputs

Agent and workflow developers

  1. Architecture
  2. Content Model
  3. Routing Policy + Router Matrix
  4. Supported Outputs + Context Loading Policy

Broad or theory-heavy questions

  1. How To Create A Screenplay Research
  2. Research Background Workflow
  3. Narrow into the next output route instead of staying in survey mode

Pause, resume, or hand off long-form work

  1. Story Memory Checkpoint
  2. Project Surface Architecture if the problem is really long-horizon design

Challenge or improve the repo

  1. Community Operations
  2. Contributing
  3. Open the lightest useful thread in GitHub Discussions

Repository At A Glance

| Surface | Scope |
| --- | --- |
| Root skill | SKILL.md — routing, loading, and output discipline |
| Output contracts | 31 routeable outputs in supported-outputs.md |
| Skill folders | 29 folders in skills/ |
| Structured assets | 69 atoms + 28 protocols + 28 rubrics |
| Route fixtures | 95 fixtures in fixtures.json |
| Knowledge base | 168 Markdown files in knowledge/ |
| Examples | 38 files across golden flows, fixtures, and reference packs |
| Validation | 18 scripts in scripts/ |
| Tests | 17 modules in tests/ |

Capability Surface

Writing and development — narrative screenwriting, commercial/branded scripting, interactive/branching narrative, premise through rewrite

Review and correction — rewrite diagnosis, quality gates, targeted recheck, boundary maps, scope correction

Research and continuity — broad theory support, resumable story-memory checkpoints, bounded loading, route-aware research bundles

Expression and downstream — character/IP/brand voice calibration, multilingual visual language, screenplay-to-video bridge

Team and system design — writers' room blueprints, expert subagent casting, dispatch topology, handoff design, project-surface architecture

Quality Guarantees

  • Schemas, registries, routes, and fixtures validated before completeness claims
  • Routes tested for correct output contracts and overlap risk
  • Fixtures exercise narrative, commercial, interactive, and systems workflows
  • Community surfaces checked so issue and discussion routing stays fresh
  • Forbidden local workspace leakage blocked in index and history (denylist in .gitignore + check_forbidden_paths.py)
  • Human disagreement treated as a source of regression tests, rubrics, and scope corrections

Docs By Goal

  • For writers
  • For agent builders
  • For contributors

Community

This project grows through high-signal disagreement.

| Channel | Use for |
| --- | --- |
| Discussions | Questions, rebuttals, rival paths, field notes |
| Issue forms | Concrete route, rubric, asset, or governance changes |
| Support | Support ladder |
| Security | Private vulnerability reporting |

Good first contributions:

  • Challenge one claim that feels too broad
  • Add one counterexample or field note that changes scope
  • Improve one example, rubric explanation, or doc path
  • Reproduce one route mismatch and turn it into a fixture

Project Status

The repository is a usable research-first, agent-ready screenplay monorepo.

Current emphasis: narrative, commercial, and interactive screenplay work; research and continuity layers; voice/visual/video layers; team orchestration and project surfaces; adaptive quality gating with human-in-the-loop iteration.

Open gaps:

  • Collaboration blueprints are mature, but live runtime execution is not yet implemented
  • Bounded loading is well documented, but bundle-planner enforcement is incomplete
  • Route coverage is broad, but edge-case fixture depth is uneven across similar outputs
  • Knowledge coverage is broad, but genre-specific and stage-level depth is thin in several areas
  • Community intake exists, but discussion-to-asset conversion still relies on manual effort

Next-stage roadmap: executable runtime planning; stricter router governance; deeper genre/medium/case-study layers; stronger quality presets and cross-artifact checks; systematic human-in-the-loop conversion; bilingual maturity.

Detailed TODO list: Roadmap


Standards And Metadata

Contributing · Code of Conduct · Support · Security · Citation · License
