`.claude/commands/resume-plan.md` (new file, 140 additions)
Resume a previously created plan — discovers plan files, syncs with latest main, and presents the plan ready for implementation.

## Instructions

Follow these steps precisely.

### Argument Parsing

Parse `$ARGUMENTS`:
- If non-empty, treat as a plan file identifier: a full filename (e.g., `serene-hugging-orbit.md`), a name without extension (e.g., `serene-hugging-orbit`), or a full path.
- If empty, proceed to plan discovery in Step 1.

### Step 1: Discover and select plan file

**If `$ARGUMENTS` is empty:**

1. List all `.md` files in `~/.claude/plans/` sorted by modification time (newest first): `ls -t ~/.claude/plans/*.md 2>/dev/null`
2. If no files are found, report "No plan files found in `~/.claude/plans/`. Create a plan using Claude Code's plan mode first." and stop.
3. For each plan file (limit to the 15 most recent), extract:
- The filename (without path)
   - The modification date, using a portable `stat` invocation (try the BSD/macOS form first, then fall back to GNU/Linux):

     ```sh
     # $file is the current plan file path
     if ! MOD_TIME=$(stat -f '%Sm' -t '%Y-%m-%d %H:%M' "$file" 2>/dev/null); then
       MOD_TIME=$(stat -c '%y' "$file" 2>/dev/null | cut -d'.' -f1)
     fi
     ```
- The first `#` heading from the file content, or "Untitled plan" if none
4. Display the list as a numbered table:

```text
Available plans (most recent first):

# Plan file Last modified Title
1 serene-hugging-orbit.md 2025-03-18 23:14 Plan: Resolve PR #56 Conflicts
2 majestic-knitting-haven.md 2025-03-17 14:22 Plan: Update README
...
```


5. Ask the user: "Which plan would you like to resume? Enter a number or filename." Wait for their response. Save the selection as `PLAN_PATH`.
   - If the response is a number `N`, validate `1 <= N <= <displayed count>` and map it to the Nth entry in the displayed list.
   - If the response is a filename, resolve it exactly as in the filename branch below.
   - If invalid, report an error and prompt again.

**If `$ARGUMENTS` is non-empty:**

1. If `$ARGUMENTS` is a full path:
- Reject if it is outside `~/.claude/plans/` unless the user explicitly confirms.
- Require `.md` extension.
- If valid and the file exists, save as `PLAN_PATH`; otherwise report an error and stop.
2. If `$ARGUMENTS` is a filename (with or without `.md`), look for it in `~/.claude/plans/`. Append `.md` if not already present. If found, save as `PLAN_PATH`.
3. If the file is not found, report "Plan file not found: `$ARGUMENTS`. Run `/resume-plan` without arguments to list available plans." and stop.
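
The filename resolution above can be sketched in shell. The `resolve_plan` helper name and the exact error wording are illustrative, not prescribed by this command:

```shell
# Resolve a plan identifier (filename, name without extension, or full path)
# to a concrete file path, per the rules above.
PLANS_DIR="$HOME/.claude/plans"

resolve_plan() {
  arg="$1"
  case "$arg" in
    */*)  path="$arg" ;;                # full path: confirm separately if outside PLANS_DIR
    *.md) path="$PLANS_DIR/$arg" ;;     # filename with extension
    *)    path="$PLANS_DIR/$arg.md" ;;  # bare name: append .md
  esac
  if [ -f "$path" ]; then
    printf '%s\n' "$path"
  else
    echo "Plan file not found: $arg" >&2
    return 1
  fi
}
```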

### Step 2: Read and validate the plan

1. Read the full content of `PLAN_PATH`. Save as `PLAN_CONTENT`.
2. If the file is empty, report "Plan file is empty: `PLAN_PATH`. Nothing to resume." and stop.
3. Extract the first line matching `# ...` (H1 heading) as `PLAN_TITLE`. If no H1 heading exists, use the filename as `PLAN_TITLE`.
4. If the file contains no markdown headings at all (no lines starting with `#`), warn: "This file does not appear to be a structured plan (no headings found). Proceeding anyway."
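
The title extraction in steps 3–4 amounts to a one-liner; the `plan_title` helper name is hypothetical:

```shell
# Print the first H1 heading of a plan file, or the bare filename if none exists.
plan_title() {
  title=$(grep -m1 '^# ' "$1" | sed 's/^# //')
  if [ -n "$title" ]; then
    printf '%s\n' "$title"
  else
    basename "$1"
  fi
}
```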

### Step 3: Check git state

1. Check for detached HEAD: run `git symbolic-ref -q HEAD`. If this fails (exit code non-zero), report: "You are in a detached HEAD state. Please checkout a branch before resuming a plan (`git checkout main` or `git checkout -b <branch-name>`)." and stop.

2. Run `git status` to check the working tree. If there are uncommitted changes (modified, staged, or untracked files — excluding untracked files under `.claude/`), warn the user:

"There are uncommitted changes in the working tree. These should be committed or stashed before resuming a plan to avoid conflicts."

Suggest: "Commit with `/commit`, stash with `git stash`, or continue at your own risk."

Ask: "Continue with uncommitted changes? (yes/no)". If the user says no, stop.

3. Save the current branch name: `git rev-parse --abbrev-ref HEAD` as `CURRENT_BRANCH`.
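
The three checks above can be combined into one sketch. The message wording is illustrative, and the `.claude/` exclusion mirrors the untracked-file carve-out in step 2:

```shell
# Exit nonzero on detached HEAD or a dirty tree; otherwise print the branch name.
check_git_state() {
  if ! git symbolic-ref -q HEAD >/dev/null; then
    echo "Detached HEAD: checkout a branch before resuming a plan." >&2
    return 1
  fi
  # Porcelain lines look like "XY path"; "?? .claude/..." untracked entries are ignored.
  if git status --porcelain | grep -v '^?? \.claude/' | grep -q .; then
    echo "Uncommitted changes in the working tree." >&2
    return 2
  fi
  git rev-parse --abbrev-ref HEAD
}
```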

### Step 4: Determine main branch

1. Detect the main branch name: `git symbolic-ref refs/remotes/origin/HEAD 2>/dev/null | sed 's@^refs/remotes/origin/@@'`
2. If that fails, check if `origin/main` exists: `git rev-parse --verify origin/main 2>/dev/null`. If it does, use `main`.
3. If that also fails, check `origin/master`: `git rev-parse --verify origin/master 2>/dev/null`. If it does, use `master`.
4. Save as `MAIN_BRANCH`. If none of the above succeeded, report "Cannot determine the main branch. Please specify it manually." and stop.
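
The fallback chain can be sketched as a single helper (`detect_main_branch` is a hypothetical wrapper around the commands above):

```shell
# Print the main branch name, trying origin/HEAD, then origin/main, then origin/master.
detect_main_branch() {
  git symbolic-ref refs/remotes/origin/HEAD 2>/dev/null |
    sed 's@^refs/remotes/origin/@@' | grep . && return 0
  git rev-parse --verify -q origin/main >/dev/null && { echo main; return 0; }
  git rev-parse --verify -q origin/master >/dev/null && { echo master; return 0; }
  echo "Cannot determine the main branch." >&2
  return 1
}
```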

### Step 5: Sync with latest code

1. Fetch latest from remote: `git fetch origin`

2. Initialize a flag `DID_STASH=false` for this step.

3. **If `CURRENT_BRANCH` equals `MAIN_BRANCH`:**
   - If the working tree has uncommitted changes (detected in Step 3), stash them first: `git stash --include-untracked`. Set `DID_STASH=true`.
   - Pull latest: `git pull origin <MAIN_BRANCH>`
   - If the pull fails:
     - If `DID_STASH`, run `git stash pop` to restore the user's changes.
     - Report the error from `git pull` and stop.
   - If the pull succeeds and `DID_STASH`, run `git stash pop`. If stash pop fails with conflicts, report: "Pull from `<MAIN_BRANCH>` succeeded but your stashed changes conflict with the updated code. Run `git stash show` to review and `git stash drop` after resolving." and stop.

4. **If `CURRENT_BRANCH` does not equal `MAIN_BRANCH`:**
   - If the working tree has uncommitted changes (detected in Step 3), stash them first: `git stash --include-untracked`. Set `DID_STASH=true`.
- Merge latest main into the current branch: `git merge origin/<MAIN_BRANCH> --no-edit`
- If the merge fails with conflicts:
- Report: "Merge conflicts detected while syncing with `<MAIN_BRANCH>`. Resolve conflicts first (consider `/resolve-conflicts`) or start from a clean branch."
- Run `git merge --abort` to restore the working tree.
- If `DID_STASH`, run `git stash pop` to restore the user's changes.
- Stop.
- If the merge succeeds:
- If `DID_STASH`, run `git stash pop`. If stash pop fails with conflicts, report: "Merge succeeded but your stashed changes conflict with the merged code. Run `git stash show` to review and `git stash drop` after resolving." and stop.
- Report: "Merged latest `origin/<MAIN_BRANCH>` into `<CURRENT_BRANCH>`."
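
The feature-branch path can be sketched as shell. This assumes `MAIN_BRANCH` is already set and reduces conflict handling to a nonzero return:

```shell
# Stash if dirty, merge origin/<MAIN_BRANCH>, then restore the stash.
sync_with_main() {
  did_stash=false
  if git status --porcelain | grep -q .; then
    git stash push -q --include-untracked
    did_stash=true
  fi
  if ! git merge "origin/$MAIN_BRANCH" --no-edit; then
    git merge --abort
    if $did_stash; then git stash pop; fi
    return 1
  fi
  if $did_stash; then git stash pop -q; fi
}
```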

### Step 6: Validate plan file references

Scan `PLAN_CONTENT` for file paths — look for paths in backtick-quoted strings, markdown links, and "Files to Modify" tables that reference project files.

For each referenced file path, check if the file exists in the current working tree.

If any referenced files are missing, display a warning:

```text
Warning: The following files referenced in the plan may no longer exist:
- path/to/deleted-file.md
- path/to/moved-file.py

These may have been renamed or removed since the plan was created.
The plan may need adjustment for these files.
```

Do NOT stop. This is informational only.
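
One way to sketch the reference scan; the path heuristic (backticked tokens that look like relative paths with an extension) is an assumption, not part of the command:

```shell
# Print backtick-quoted, extension-bearing relative paths from a plan file
# that do not exist in the current working tree.
missing_plan_refs() {
  grep -o '`[^`]*`' "$1" | tr -d '`' |
    grep -E '^[A-Za-z0-9._/-]+\.[A-Za-z0-9]+$' |
    sort -u |
    while read -r path; do
      [ -e "$path" ] || echo "$path"
    done
}
```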

### Step 7: Present the plan

Display the full plan with context:

```text
========================================
Plan: <PLAN_TITLE>
Branch: <CURRENT_BRANCH>
Synced with: <MAIN_BRANCH> (latest)
========================================

<PLAN_CONTENT>
```

Report: "Plan loaded and codebase synced."

### Step 8: SpecOps handoff

After presenting the plan, check if the project uses SpecOps:

1. Check if `.specops.json` exists in the project root.

2. **If `.specops.json` exists:**
- Report: "SpecOps detected — converting plan to structured spec before implementation."
- Invoke `/specops from-plan` with `PLAN_CONTENT` as the plan input. This routes through the full SpecOps lifecycle: spec creation (From Plan Mode) → implementation (Phase 3) → completion (Phase 4).
- Do NOT proceed with direct implementation. The SpecOps workflow handles implementation after spec conversion.

3. **If `.specops.json` does NOT exist:**
- Report: "No SpecOps configuration found. Ready for direct implementation from the plan above."
`.specops/enforcement-roadmap/design.md` (new file, 99 additions)
# Design: Enforcement Roadmap — Advisory Context to Deterministic Enforcement

## Architecture Overview
This spec converts advisory behavioral mechanisms into deterministic enforcement using three proven approaches: (A) validate.py / CI gates for generated output checks, (B) file-persisted state for runtime enforcement, and (C) phase gate checklists for workflow transitions. A new spec artifact linter script provides runtime validation of spec files that validate.py (which checks generated platform outputs) cannot cover.

## Technical Decisions

### Decision 1: Enforcement Approach Selection
**Context:** 35 behavioral mechanisms exist; ~5 are enforced, ~26 are advisory. Need to select the right enforcement approach for each.
**Options Considered:**
1. Enforce everything via validate.py markers — Pros: deterministic CI gate; Cons: validate.py checks generated output, not runtime spec artifacts
2. Enforce everything via workflow language ("MUST", "protocol breach") — Pros: simple; Cons: mixed results for multi-step sequences (dogfood evidence)
3. Tiered approach matching enforcement mechanism to failure type — Pros: uses proven patterns where they work; Cons: more complex implementation

**Decision:** Option 3 — Tiered enforcement
**Rationale:** Dogfood evidence shows: validate.py markers have zero gaps for output checks, file-persisted state machines have zero gaps for runtime enforcement, and mandatory checklists have zero gaps for phase transitions. Match each mechanism to the proven pattern that fits its failure mode.

### Decision 2: Spec Artifact Linter as Separate Script
**Context:** Need to validate spec artifacts (tasks.md, implementation.md, spec.json) — these are runtime files, not generated platform outputs.
**Options Considered:**
1. Extend validate.py — Pros: single validation tool; Cons: validate.py checks generated files in platforms/, mixing concerns
2. New script `scripts/lint-spec-artifacts.py` — Pros: clean separation, conditional on specsDir existence; Cons: another script to maintain

**Decision:** Option 2 — Separate `scripts/lint-spec-artifacts.py`
**Rationale:** validate.py validates the generator pipeline (core → platforms). Spec artifacts are user-project files with different validation rules. Separation keeps each script focused and allows the linter to run conditionally only when a specsDir exists.

### Decision 3: Advisory Tier Preserved
**Context:** ~11 mechanisms are judgment-based with no machine-verifiable criterion.
**Decision:** Keep advisory — no enforcement
**Rationale:** Simplicity principle, communication style, high autonomy mode, memory heuristics, codebase exploration, team conventions, data handling, custom templates, integration references, EARS notation, and interview answers all require human judgment. Forcing enforcement would create false positives or meaningless gates.

## Component Design

### Component 1: Spec Artifact Linter (`scripts/lint-spec-artifacts.py`)
**Responsibility:** Validate spec artifacts in `<specsDir>/` for:
- Checkbox staleness: completed tasks with unchecked items (excluding Deferred Criteria subsections)
- Documentation Review: completed specs must have `## Documentation Review` in implementation.md
- Version validation: `specopsCreatedWith`/`specopsUpdatedWith` must match semver pattern or be absent

**Interface:** `python3 scripts/lint-spec-artifacts.py [specsDir]` (defaults to `.specops`)
**Dependencies:** Standard library only (no pip dependencies). Uses `re`, `json`, `os`, `sys`, `glob`.
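
As a rough illustration of the checkbox-staleness check (the real linter is Python; this shell approximation assumes a `Completed` status marker and `- [ ]` checkbox syntax in tasks.md, both of which are assumptions here):

```shell
# Flag tasks.md files that claim completion but still contain unchecked boxes.
stale_checkboxes() {
  for t in "$1"/*/tasks.md; do
    [ -f "$t" ] || continue
    if grep -q 'Completed' "$t" && grep -q '^- \[ \]' "$t"; then
      echo "STALE: $t"
    fi
  done
}
```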

### Component 2: Phase 1 Context Summary (workflow gate)
**Responsibility:** Enforce that Phase 1 steps 3, 3.5, 4 execute by requiring their output in implementation.md
**Interface:** New `## Phase 1 Context Summary` section template in `core/templates/implementation.md`. Workflow instruction in `core/workflow.md` Phase 1 requiring the section be written before Phase 2.

### Component 3: Phase 4 Documentation Review (workflow gate)
**Responsibility:** Enforce that documentation check executes by requiring `## Documentation Review` in implementation.md
**Interface:** Workflow instruction in `core/workflow.md` Phase 4 requiring the section be written. Linter validates presence for completed specs.

### Component 4: Config-to-Workflow Annotations
**Responsibility:** Make config → workflow bindings explicit and auditable
**Interface:** `### Workflow Impact` subsections in `core/config-handling.md` per config value. Explicit conditionals in `core/workflow.md`.

### Component 5: Coherence Verification (Phase 2 gate)
**Responsibility:** Cross-check NFRs against functional requirements after spec generation
**Interface:** New step at end of Phase 2 in `core/workflow.md`. COHERENCE_MARKERS in `generator/validate.py`.

### Component 6: Pre-Task Anchoring (task-tracking enhancement)
**Responsibility:** Anchor task scope before implementation to enable meaningful pivot checks
**Interface:** New step in `core/task-tracking.md` before In Progress transition. Post-task comparison against anchored scope.

### Component 7: Vertical Vocabulary Verification (verticals enhancement)
**Responsibility:** Verify vertical-specific vocabulary was applied after Phase 2 generation
**Interface:** New verification step in `core/verticals.md`. Result recorded in Context Summary.

## Sequence Diagrams

### Flow 1: Spec Artifact Linting
```text
Linter -> specsDir: scan for spec directories
specsDir -> Linter: list of spec directories
Linter -> tasks.md: parse task statuses and checkboxes
Linter -> implementation.md: check for Documentation Review section
Linter -> spec.json: validate version fields
Linter -> stdout: report errors/warnings/pass
```

### Flow 2: Phase 1 Context Summary Gate
```text
Agent -> config: load .specops.json
Agent -> steering: load steering files
Agent -> repo-map: check/refresh repo map
Agent -> memory: load memory layer
Agent -> implementation.md: WRITE Phase 1 Context Summary
Agent -> Phase 2: proceed (gate passed)
```

## Testing Strategy
- **Linter tests:** Run `scripts/lint-spec-artifacts.py` against all 9 existing completed dogfood specs — expect zero errors
- **Validation tests:** Run `python3 generator/validate.py` — new COHERENCE_MARKERS pass
- **Platform consistency:** Run `python3 tests/test_platform_consistency.py` — new markers present across all 4 platforms
- **Build test:** Run `python3 tests/test_build.py` — generator produces valid outputs with new content
- **Full suite:** `bash scripts/run-tests.sh` — all tests pass

## Risks & Mitigations
- **Risk 1:** New workflow gates increase agent instruction length, potentially causing context window pressure → **Mitigation:** Keep gate instructions concise; use imperative single-line instructions, not explanatory paragraphs
- **Risk 2:** COHERENCE_MARKERS may not be present in all generated outputs if the Coherence Verification section is not added to all platform templates → **Mitigation:** Add to core/workflow.md (which flows to all platforms via generator), verify with cross-platform consistency check
- **Risk 3:** Spec artifact linter may false-positive on edge cases (e.g., tasks with no acceptance criteria) → **Mitigation:** Only lint tasks explicitly marked Completed; skip tasks without Acceptance Criteria sections
`.specops/enforcement-roadmap/implementation.md` (new file, 37 additions)
# Implementation Journal: Enforcement Roadmap

## Summary
8 tasks completed, 0 deviations from design, 0 blockers. Created `scripts/lint-spec-artifacts.py` with 3 validation checks (checkbox staleness, documentation review, version validation). Added Phase 1 Context Summary gate and Phase 4 Documentation Review gate to `core/workflow.md`. Added Workflow Impact annotations for all behavioral config values in `core/config-handling.md`. Added COHERENCE_MARKERS to `generator/validate.py` with cross-platform consistency. Added pre-task anchoring to `core/task-tracking.md` and vertical vocabulary verification to `core/verticals.md`. All 4 platform outputs regenerated, validator passes with new markers, all 8 tests pass including new spec artifact linter.

## Phase 1 Context Summary
- Config: loaded from `.specops.json` (builder vertical, specsDir: .specops, taskTracking: github)
- Context recovery: none (new spec)
- Steering files: loaded 4 files (product.md, tech.md, structure.md, repo-map.md)
- Repo map: loaded (existing, checked in Phase 1)
- Memory: loaded 15+ decisions from 9 specs, 5 patterns detected
- Vertical: builder (from config)
- Affected files: core/workflow.md, core/task-tracking.md, core/config-handling.md, core/verticals.md, core/templates/implementation.md, generator/validate.py, scripts/lint-spec-artifacts.py (new), scripts/run-tests.sh

## Decision Log
| # | Decision | Rationale | Task | Timestamp |
|---|----------|-----------|------|-----------|

## Deviations from Design
| Planned | Actual | Reason | Task |
|---------|--------|--------|------|

## Blockers Encountered
| Blocker | Resolution | Impact | Task |
|---------|------------|--------|------|

## Documentation Review
- `CLAUDE.md`: Updated — added `scripts/lint-spec-artifacts.py` to Key Commands section
- `docs/STRUCTURE.md`: No new core module created (existing modules modified) — up-to-date
- `docs/REFERENCE.md`: No new config options added — up-to-date
- `docs/COMMANDS.md`: No new subcommand added — up-to-date
- `README.md`: No changes needed — up-to-date

## Session Log
### Session 1 — All tasks completed (2026-03-18)
Tasks 1-8 completed sequentially. Created lint-spec-artifacts.py, updated core/workflow.md (Phase 1 Context Summary, Phase 2 Coherence + Vocabulary gates, Phase 4 Docs gate), core/config-handling.md (Workflow Impact annotations), core/task-tracking.md (Pre-Task Anchoring), core/verticals.md (Vocabulary Verification), core/templates/implementation.md (new sections), and generator/validate.py (COHERENCE_MARKERS). All platform outputs regenerated, all 8 tests pass.