Conversation
Introduces /specops feedback command for structured user feedback via GitHub Issues, and writing-quality module with precision/testability/clarity rules for spec authoring. Includes dogfood spec, issue template, validation markers, and updated docs.
> **Note: Reviews paused.** It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings.
📝 **Walkthrough**

Adds a Feedback Mode (issue-drafting/submission flows with privacy checks and 3-tier degradation) and a Writing Quality rules module; wires both into the generator context, templates, validation/tests, platform docs/configs, and repo docs/templates.
**Sequence Diagram**

```mermaid
sequenceDiagram
    actor User
    participant Router as Workflow Router
    participant Detector as Feedback Detector
    participant Composer as Issue Draft Composer
    participant Safety as Privacy Safety Check
    participant Tier1 as gh CLI (Tier 1)
    participant Tier2 as Browser URL (Tier 2)
    participant Tier3 as Local Draft (Tier 3)
    participant GitHub
    User->>Router: Submit request
    Router->>Detector: Evaluate feedback intent
    alt Intent = feedback
        Detector-->>Router: Match
        Router->>Composer: Build draft (category, version, platform, vertical, description)
        Composer->>Safety: Scan and sanitize per privacy rules
        Safety-->>Composer: Sanitized draft / redaction prompts
        Composer->>User: Present draft for confirmation
        User->>Composer: Confirm
        Composer->>Tier1: Attempt gh CLI issue create
        alt gh succeeds
            Tier1->>GitHub: Create issue
            GitHub-->>User: Return issue URL
        else gh fails / unavailable
            Composer->>Tier2: Construct pre-filled issue URL
            alt URL <= 8000 chars
                Tier2-->>User: Open browser for submission
            else URL too long
                Composer->>Tier3: Save local draft file
                Tier3-->>User: Provide manual submission instructions
            end
        end
    else
        Detector-->>Router: No match (normal routing)
    end
```
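The three-tier degradation in the diagram can be sketched as follows. This is a minimal illustration: the helper callables and their signatures are hypothetical stand-ins; only the 8000-character URL cutoff comes from the spec.

```python
URL_LIMIT = 8000  # Tier 2 cutoff from the spec

def submit_feedback(title, body, try_gh, build_url, save_local):
    """Try Tier 1 (gh CLI), then Tier 2 (pre-filled URL), then Tier 3 (local draft)."""
    issue_url = try_gh(title, body)  # Tier 1: returns the issue URL, or None on failure
    if issue_url is not None:
        return ("tier1", issue_url)
    url = build_url(title, body)     # Tier 2: pre-filled browser URL
    if len(url) <= URL_LIMIT:
        return ("tier2", url)
    return ("tier3", save_local(title, body))  # Tier 3: manual submission

# Demo: gh unavailable and the URL is too long, so the flow degrades to Tier 3.
tier, result = submit_feedback(
    "[bug] example", "body text",
    try_gh=lambda t, b: None,
    build_url=lambda t, b: "https://github.com/sanmak/specops/issues/new?" + "x" * 9000,
    save_local=lambda t, b: "/tmp/specops-feedback-draft.md",
)
print(tier, result)
```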
**Estimated code review effort**: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 **Pre-merge checks**: ✅ 2 passed | ❌ 1 failed (1 warning)
Pull request overview
Adds two new core modules—Feedback Mode (submit SpecOps feedback as GitHub issues) and Writing Quality (rules for clearer, more testable spec artifacts)—and wires them through the generator, validation, docs, and regenerated platform outputs.
Changes:
- Introduces `core/feedback.md` and `core/writing-quality.md`, and integrates them into the main workflow and all platform outputs.
- Extends generator validation and platform-consistency testing to enforce marker presence (writing-quality now; feedback in validator).
- Adds a GitHub Issue form for user feedback and updates documentation/checksums accordingly.
Reviewed changes
Copilot reviewed 33 out of 33 changed files in this pull request and generated 8 comments.
| File | Description |
|---|---|
| `core/feedback.md` | New Feedback Mode spec (issue composition, privacy rules, submission tiers). |
| `core/writing-quality.md` | New writing-quality rules module for Phase 2 artifact prose. |
| `core/workflow.md` | Routes `/specops feedback` in the "When invoked" detection chain. |
| `generator/generate.py` | Adds feedback + writing_quality to common template context. |
| `generator/validate.py` | Adds WRITING_QUALITY_MARKERS + FEEDBACK_MARKERS checks and cross-platform consistency enforcement. |
| `generator/templates/claude.j2` | Injects `{{ feedback }}` and `{{ writing_quality }}` into Claude output. |
| `generator/templates/cursor.j2` | Injects `{{ feedback }}` and `{{ writing_quality }}` into Cursor output. |
| `generator/templates/codex.j2` | Injects `{{ feedback }}` and `{{ writing_quality }}` into Codex output. |
| `generator/templates/copilot.j2` | Injects `{{ feedback }}` and `{{ writing_quality }}` into Copilot output. |
| `tests/test_platform_consistency.py` | Adds required marker category for writing-quality content. |
| `.github/ISSUE_TEMPLATE/user-feedback.yml` | Adds a structured issue form for manual feedback submission. |
| `docs/COMMANDS.md` | Documents the new feedback command and adds it to the quick-lookup table. |
| `docs/STRUCTURE.md` | Lists new core modules in repo structure docs. |
| `README.md` | Adds "Writing Philosophy" attribution and references core/writing-quality.md. |
| `CLAUDE.md` | Updates module lists and security-sensitive files list to include feedback/writing-quality. |
| `.claude/commands/docs-sync.md` | Adds doc impact mappings for new modules. |
| `skills/specops/SKILL.md` | Regenerated skill output including Feedback Mode + Writing Quality sections. |
| `platforms/claude/SKILL.md` | Regenerated Claude output including new modules. |
| `platforms/claude/platform.json` | Adds `/specops feedback` example invocation. |
| `platforms/cursor/specops.mdc` | Regenerated Cursor output including new modules. |
| `platforms/cursor/platform.json` | Adds feedback example invocation. |
| `platforms/codex/SKILL.md` | Regenerated Codex output including new modules. |
| `platforms/codex/platform.json` | Adds feedback example invocation. |
| `platforms/copilot/specops.instructions.md` | Regenerated Copilot output including new modules. |
| `platforms/copilot/platform.json` | Adds feedback example invocation. |
| `CHECKSUMS.sha256` | Regenerated checksums list/values. |
| `.specops/writing-quality-rules/spec.json` | Dogfood spec metadata for writing-quality-rules feature. |
| `.specops/writing-quality-rules/requirements.md` | Dogfood requirements for writing-quality-rules feature. |
| `.specops/writing-quality-rules/design.md` | Dogfood design for writing-quality-rules feature. |
| `.specops/writing-quality-rules/tasks.md` | Dogfood task plan for writing-quality-rules feature. |
| `.specops/writing-quality-rules/implementation.md` | Dogfood implementation journal for writing-quality-rules feature. |
| `.specops/memory/context.md` | Adds memory summary entry for writing-quality-rules completion. |
| `.specops/index.json` | Adds writing-quality-rules and reorders entries. |
### Feedback Categories

Four categories, each mapping to a GitHub issue label:

| Category | Label | When to use |
|----------|-------|-------------|
| `bug` | `bug` | Something is broken or behaving incorrectly |
| `feature` | `enhancement` | A new capability or behavior |
| `friction` | `friction` | UX issue, workflow annoyance, or confusing behavior |
| `improvement` | `improvement` | Enhancement to existing functionality |
PR description mentions feedback category classification including “docs gap” and “other”, but this module defines only four categories (bug/feature/friction/improvement). Either update the module/template to include the additional categories, or adjust the PR description/docs to match the implemented set to avoid confusing users.
```yaml
name: User Feedback
description: Feedback submitted via /specops feedback or manually
labels: ["user-feedback"]
```
The issue template applies a fixed user-feedback label, but the /specops feedback workflow maps categories to bug/enhancement/friction/improvement. As a result, manually-filed feedback (template) and command-filed feedback will be labeled differently, making triage inconsistent. Consider aligning the labels (e.g., add user-feedback in the command flow as an additional label, or adjust the template to apply category labels).
| labels: ["user-feedback"] | |
| labels: ["bug", "enhancement", "friction", "improvement"] |
| "writing_quality": [ | ||
| "## Writing Quality", | ||
| "### Structure and Order", | ||
| "### Precision and Testability", | ||
| "### Clarity and Conciseness", | ||
| "### Audience Awareness", | ||
| "### Self-Check", | ||
| "### Sources", | ||
| "ANT test", | ||
| "active voice", | ||
| ], |
This PR adds Feedback Mode and introduces FEEDBACK_MARKERS validation, but tests/test_platform_consistency.py only adds required markers for writing_quality. To keep the consistency test aligned with validate.py (and the PR’s stated goal of marker consistency), add a feedback marker category here as well (e.g., include markers like ## Feedback Mode, Privacy Safety Rules, Graceful Degradation, etc.).
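A sketch of what the suggested test addition might look like. The feedback marker strings below are taken from the comment above; whether they match `generator/validate.py`'s FEEDBACK_MARKERS verbatim is an assumption, and the `missing_markers` helper is illustrative only.

```python
# Hypothetical extension of REQUIRED_MARKERS in tests/test_platform_consistency.py.
REQUIRED_MARKERS = {
    "writing_quality": [
        "## Writing Quality",
        "### Precision and Testability",
    ],
    "feedback": [  # new category mirroring validate.py's FEEDBACK_MARKERS
        "## Feedback Mode",
        "Privacy Safety Rules",
        "Graceful Degradation",
    ],
}

def missing_markers(platform_text):
    """Return (category, marker) pairs absent from a generated platform file."""
    return [
        (cat, m)
        for cat, markers in REQUIRED_MARKERS.items()
        for m in markers
        if m not in platform_text
    ]

print(missing_markers("## Writing Quality\n## Feedback Mode\nGraceful Degradation"))
```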
```text
72a3fd4eeaef93d54b01593715b6551f23fa5e102d4eff2344494ddd55aa9978  platforms/copilot/specops.instructions.md
d49c0f2ae098c3c637cbea9ace5f5dc248fcf04d2ce4c80c5e5f61d9cf5e5075  platforms/copilot/platform.json
de507ccaeb2bb9e32e900af6651c0d5e0063ade4c4ee7fe179724acbac68ab60  core/workflow.md
8321d265f5879b3d3030fbef7f6300b7f1321ee6e178a9addd49c9e789004068  core/safety.md
```
CHECKSUMS.sha256 no longer includes core/reconciliation.md and core/task-delegation.md, but scripts/bump-version.sh --checksums regenerates checksums including both files. This makes checksum regeneration inconsistent and reduces integrity coverage for two core modules. Please add these two files back into CHECKSUMS.sha256 (keeping the same file list as bump-version.sh).
Suggested change:

```diff
 8321d265f5879b3d3030fbef7f6300b7f1321ee6e178a9addd49c9e789004068  core/safety.md
+aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa  core/reconciliation.md
+bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb  core/task-delegation.md
```
**Sensitive content scan**: Before composing the issue body, scan the user's description for:
- File paths (starting with `/`, `./`, or containing directory separators with structure like `src/components/`)
- Credential patterns (strings matching API key formats, connection strings, bearer tokens)
- Code blocks containing what appears to be project-specific code (function definitions, class declarations with project-specific names)

If sensitive content is detected:
- On interactive platforms: ASK_USER("Your feedback appears to contain {file paths / credentials / code}. This will be submitted publicly to GitHub. Would you like to redact these before submitting?")
- On non-interactive platforms: NOTIFY_USER("Warning: feedback may contain project-specific content that will be publicly visible. Review the draft above before it is submitted.")
core/feedback.md defines a sensitive scan and states the issue body MUST NOT contain file paths/credentials/code, but on non-interactive platforms the flow only warns and then proceeds to submission. That contradicts the mandatory privacy rule and can result in publishing sensitive data. Consider making sensitive-scan detection a hard stop on non-interactive platforms (require the user to re-run with redacted text), or otherwise ensure the submitted title/body are redacted before Tier 1 submission.
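One possible reading of that scan, expressed as a hard gate. The regexes below are illustrative assumptions; the module only names the three categories in prose:

```python
import re

# Illustrative patterns for the three categories named in the spec.
SENSITIVE_PATTERNS = {
    "file paths": re.compile(r"(?:\./|/)?[\w.-]+/[\w.-]+/[\w.-]+"),
    "credentials": re.compile(r"\b(?:ghp|sk|AKIA)[A-Za-z0-9_-]{10,}\b"),
    "code": re.compile(r"```|def \w+\(|class \w+[:(]"),
}

def scan_sensitive(description):
    """Return the categories of sensitive content found; empty list means clean."""
    return [kind for kind, pat in SENSITIVE_PATTERNS.items() if pat.search(description)]

print(scan_sensitive("crash in src/components/App.tsx on load"))
print(scan_sensitive("the feedback command is confusing"))
```

On a non-interactive platform, a non-empty result would stop the flow instead of merely warning.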
**Tier 1 — `gh` CLI**:
1. WRITE_FILE a temporary file (e.g., `/tmp/specops-feedback-body.md`) with the composed issue body.
2. RUN_COMMAND(`gh issue create --repo sanmak/specops --title "[{category}] {title}" --label "{label}" --body-file /tmp/specops-feedback-body.md`)
3. If the command succeeds, parse the issue URL from stdout.
4. NOTIFY_USER("Feedback submitted: {issue URL}\n\nThank you for helping improve SpecOps!")
5. RUN_COMMAND(`rm /tmp/specops-feedback-body.md`) to clean up.
6. Stop.

**Tier 2 — Pre-filled browser URL** (if `gh` CLI is not installed, not authenticated, or fails):
1. URL-encode the title, label, and body.
2. Compose the URL: `https://github.com/sanmak/specops/issues/new?title={encoded_title}&labels={encoded_label}&body={encoded_body}`
3. NOTIFY_USER("Could not submit via `gh` CLI. Open this URL to submit your feedback:\n\n{url}")
4. Note: GitHub URL length limits may truncate long feedback bodies. If the composed URL exceeds 8000 characters, skip to Tier 3 instead.
Tier 1 writes the issue body to /tmp/specops-feedback-body.md but only deletes it on the success path. If gh issue create fails and the flow falls back to Tier 2/3, the temp file can be left behind with user-provided content. Please ensure the temporary body file is cleaned up on failure paths as well (e.g., always delete it before entering Tier 2/3).
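A sketch of that fix, assuming a Python adapter. The `runner` injection is for illustration only; passing list-form argv also sidesteps the shell-quoting concern raised elsewhere in this review:

```python
import os
import subprocess
import tempfile

def tier1_submit(title, label, body, runner=subprocess.run):
    """Submit via gh CLI; the temp body file is removed on success AND failure."""
    fd, path = tempfile.mkstemp(prefix="specops-feedback-", suffix=".md")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(body)
        # List-form argv avoids shell interpolation of user-controlled values.
        result = runner(
            ["gh", "issue", "create", "--repo", "sanmak/specops",
             "--title", title, "--label", label, "--body-file", path],
            capture_output=True, text=True,
        )
        return result.stdout.strip() if result.returncode == 0 else None
    finally:
        os.unlink(path)  # guaranteed cleanup before any Tier 2/3 fallback
```

With `returncode != 0` (or an exception from `runner`), the `finally` block still deletes the draft before the caller falls back to Tier 2.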
Compose the GitHub issue with these fields:

**Title**: `[{category}] {first 70 characters of description}`

**Label**: The label from the Feedback Categories table corresponding to the selected category.
The privacy scan/redaction guidance is described for the description/body, but the issue title is also derived directly from the (potentially sensitive) description ([{category}] {first 70 characters of description}). This can leak file paths/identifiers even if the body is later redacted. Consider generating the title from the redacted description (or separately scanning/redacting the title) before submission.
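The suggested ordering can be sketched as: redact first, then derive both title and body from the same sanitized text. The `redact` callable here is a hypothetical placeholder for whatever redaction step the platform applies:

```python
def compose_issue(category, description, redact, limit=70):
    """Redact once, then build title and body from the same sanitized text."""
    safe = " ".join(redact(description).split())  # also normalizes whitespace
    return {"title": f"[{category}] {safe[:limit]}", "body": safe}

issue = compose_issue(
    "bug",
    "crash when opening src/components/App.tsx",
    redact=lambda s: s.replace("src/components/App.tsx", "<redacted path>"),
)
print(issue["title"])
```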
core/feedback.md (outdated diff)
| - Keywords: "friction", "ux", "confusing", "annoying" → `friction` | ||
| - Keywords: "improve", "enhance", "better" → `improvement` | ||
| 2. Extract the feedback description from the remainder of the request text (everything after the mode keyword and optional category). | ||
| 3. If no description could be extracted: print "Feedback mode requires a description. Usage: specops feedback [bug|feature|friction|improvement] <description>" and stop. |
The non-interactive workflow step uses literal print "..." instead of the standard abstract user-notification operation. Other core modules use NOTIFY_USER(...) for user-visible output so platform adapters can render it appropriately. Please switch this step to NOTIFY_USER(...) (or otherwise align with the tool abstraction layer).
Suggested change:

```diff
-3. If no description could be extracted: print "Feedback mode requires a description. Usage: specops feedback [bug|feature|friction|improvement] <description>" and stop.
+3. If no description could be extracted: NOTIFY_USER("Feedback mode requires a description. Usage: specops feedback [bug|feature|friction|improvement] <description>") and stop.
```
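The keyword-to-category rules quoted above can be sketched as below. Only the friction and improvement keyword lists appear in the quoted excerpt; the bug and feature keyword sets are hypothetical placeholders:

```python
KEYWORD_CATEGORIES = [
    (("crash", "broken", "error"), "bug"),                  # assumed keywords
    (("add", "support for"), "feature"),                    # assumed keywords
    (("friction", "ux", "confusing", "annoying"), "friction"),
    (("improve", "enhance", "better"), "improvement"),
]

def detect_category(request_text):
    """First matching keyword set wins; None means the caller should ask."""
    lowered = request_text.lower()
    for keywords, category in KEYWORD_CATEGORIES:
        if any(k in lowered for k in keywords):
            return category
    return None

print(detect_category("the review flow is confusing"))
print(detect_category("please make validation better"))
```

Note that bare substring matching (e.g. "ux") can false-positive inside longer words; word-boundary matching would be safer in a real implementation.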
Actionable comments posted: 14
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
CHECKSUMS.sha256 (1)
**Lines 1-17**: ⚠️ Potential issue | 🔴 Critical — Add missing core/task-delegation.md to checksums per changelog requirement.

The current CHECKSUMS.sha256 is incomplete. Per the CHANGELOG for this version, `core/task-delegation.md` should be included ("Add missing `core/task-delegation.md` entry") but is not present in the file. Additionally, `scripts/install-hooks.sh` is missing, which was added to checksums in v1.3.0 per previous changelog entries. Regenerate CHECKSUMS.sha256 using `bash scripts/bump-version.sh <version> --checksums` to include all critical files. The file is used for supply-chain security verification in CI and remote installations, so completeness is required.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@CHECKSUMS.sha256` around lines 1 - 17, The CHECKSUMS.sha256 is missing core/task-delegation.md (and scripts/install-hooks.sh per changelog); regenerate the checksum file to include those entries by running the repo checksum generation (e.g., invoke scripts/bump-version.sh <version> --checksums) and commit the updated CHECKSUMS.sha256 so that core/task-delegation.md and scripts/install-hooks.sh appear alongside the other listed files.
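For context, verifying files against a `CHECKSUMS.sha256`-style listing is straightforward, which is why completeness of the file list matters. A generic sketch, not the repo's actual verification code:

```python
import hashlib

def verify_checksums(checksums_text, read_bytes):
    """Return paths whose sha256 does not match the '<hash>  <path>' listing."""
    mismatched = []
    for line in checksums_text.strip().splitlines():
        expected, path = line.split(maxsplit=1)
        if hashlib.sha256(read_bytes(path)).hexdigest() != expected:
            mismatched.append(path)
    return mismatched

# Demo against an in-memory "file"
files = {"core/safety.md": b"safety rules"}
listing = hashlib.sha256(files["core/safety.md"]).hexdigest() + "  core/safety.md\n"
print(verify_checksums(listing, files.__getitem__))
```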
🧹 Nitpick comments (1)
skills/specops/SKILL.md (1)
**Lines 2275-2280**: Use a unique temp file and guaranteed cleanup for feedback body.

Using a fixed `/tmp/specops-feedback-body.md` risks collisions and stale sensitive data. Prefer `mktemp` and clean up on both success/failure paths.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@skills/specops/SKILL.md` around lines 2275 - 2280, Replace the fixed temp file usage in the Write/Bash steps with a unique temporary file created via mktemp (or the Write tool's equivalent) and ensure guaranteed cleanup: create the temp file into a variable (instead of `/tmp/specops-feedback-body.md`), run `gh issue create` referencing that variable, parse stdout for the issue URL as before, and always remove the temp file on both success and failure (e.g., via trap or a finally/cleanup step) so the Write tool, Bash tool and the cleanup `rm` handle a unique filename and cannot leak stale data.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/ISSUE_TEMPLATE/user-feedback.yml:
- Around line 13-17: Replace the current YAML "options" list under the template
with the canonical `/specops feedback` categories by changing the existing
entries ("Bug report", "Feature request", "Friction / UX issue", "Improvement")
to the exact canonical values: "bug", "feature", "workflow friction", "docs
gap", and "other"; update the "options" array in the
.github/ISSUE_TEMPLATE/user-feedback.yml so the key "options" contains that
exact five-item list to keep triage consistent.
In @.specops/writing-quality-rules/design.md:
- Around line 44-54: The WRITING_QUALITY_MARKERS array is missing the required
"### Collaborative Voice" marker defined in the design; update the
WRITING_QUALITY_MARKERS constant to include the string "### Collaborative Voice"
among the other section markers (e.g., alongside "### Structure and Order", "###
Precision and Testability", etc.) so validation will enforce that section is
present.
In @.specops/writing-quality-rules/tasks.md:
- Around line 38-68: Task 2 is marked Completed but the "Tests Required"
checkbox for running `python3 generator/generate.py --all` is still unchecked;
update .specops/writing-quality-rules/tasks.md (Task 2) to either mark that test
as passed by changing `- [ ] python3 generator/generate.py --all succeeds` to `-
[x] python3 generator/generate.py --all succeeds` if you ran `python3
generator/generate.py --all` successfully, or move that line into a new
"Deferred Criteria" subsection with a short note explaining why it was skipped;
reference the task title "Task 2: Wire generator pipeline" and the
`build_common_context()` change in `generator/generate.py` when making the
update so the task state accurately reflects test status.
- Around line 143-173: Task 5 is marked Completed but the checklist still has an
unchecked "Tests Required" item; update the task metadata so status and
checklist are consistent by either marking the "Manual review of README
attribution accuracy" checkbox as completed (change "[ ]" to "[x]") or moving
that line into a "Deferred Criteria" section and leaving the task as Completed.
Edit the Task 5 block in .specops/writing-quality-rules/tasks.md (look for "Task
5: Add README attribution and update documentation" and the checklist lines) to
perform one of these two fixes and ensure the Acceptance Criteria checkboxes
reflect the final state.
- Around line 72-101: The Task 3 entry incorrectly remains marked "Completed"
while the two "Tests Required" checkboxes are still unchecked; update the Task 3
Tests Required section so both test items are checked (mark both `python3
generator/validate.py` and `python3 tests/test_platform_consistency.py` as
completed) to align with the Task State Machine rules and the stated Acceptance
Criteria; verify the entry references WRITING_QUALITY_MARKERS,
validate_platform(), and the REQUIRED_MARKERS addition in
test_platform_consistency.py remain accurate.
- Around line 105-140: The Task 4 entry ("Task 4: Regenerate platform outputs
and validate") is marked Completed but the two checklist items under "Tests
Required:" (the lines referencing `python3 generator/validate.py` and `bash
scripts/run-tests.sh`) are still unchecked; either run those tests and update
the two checklist items to checked ([x]) and confirm the validator/tests pass,
or change the "Status:" field from Completed to Incomplete/Blocked and add a
short note explaining why; update the checklist or status on the Task 4 block
accordingly.
In `@CLAUDE.md`:
- Around line 191-192: Update the Validation section in CLAUDE.md to include
feedback-marker checks alongside the existing writing-quality markers: add the
feedback markers the validator enforces (e.g., reviewer-feedback, action-items,
tone-notes, improvement-suggestions) to the checklist and update the line
currently reading "**Writing quality markers present** — structure/order,
precision/testability, clarity/conciseness, audience awareness, self-check,
sources" so it explicitly references "Writing quality markers" and "Feedback
markers" (or similar headings) and lists the required feedback markers; ensure
the text references the validator's enforcement so contributors running
validation see the complete checklist.
In `@generator/validate.py`:
- Line 703: The loop currently iterates over a naive concatenation of
WORKFLOW_MARKERS + SAFETY_MARKERS + ... which can cause the same marker to be
checked multiple times; instead build a deduplicated collection (preserving
first-seen order) from those marker lists—e.g., combine them into a single
iterable and de-duplicate via an order-preserving method—then iterate over that
deduplicated collection in place of the concatenation; update the loop that
references WORKFLOW_MARKERS, SAFETY_MARKERS, TEMPLATE_MARKERS, VERTICAL_MARKERS,
INTERVIEW_MARKERS, STEERING_MARKERS, REVIEW_MARKERS, VIEW_MARKERS,
UPDATE_MARKERS, TASK_TRACKING_MARKERS, EXTERNAL_TRACKING_MARKERS,
REGRESSION_MARKERS, RECONCILIATION_MARKERS, FROM_PLAN_MARKERS, MEMORY_MARKERS,
REPO_MAP_MARKERS, DELEGATION_MARKERS, WRITING_QUALITY_MARKERS, FEEDBACK_MARKERS
to use the new deduped list.
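The order-preserving de-duplication this prompt asks for is a one-liner in Python; the marker names here are placeholders:

```python
# dict.fromkeys preserves first-seen order while dropping repeats.
marker_lists = [
    ["## Workflow", "## Safety"],
    ["## Safety", "## Feedback Mode"],  # "## Safety" repeats across lists
]
deduped = list(dict.fromkeys(m for lst in marker_lists for m in lst))
print(deduped)
```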
In `@platforms/claude/SKILL.md`:
- Line 2244: The submission flow proceeds unconditionally despite the redaction
prompt and builds an unsafe shell command with unescaped user input and a
hardcoded temp path; update core/feedback.md so the redaction question gates the
submission branch (mirror the conditional pattern used in the final confirmation
flow) and, when creating the GitHub issue, replace direct string interpolation
of {title} and {category} with a safe command invocation (use a list-style
subprocess call or properly escape user values, e.g., shlex.quote) and replace
the fixed /tmp/specops-feedback-body.md with a secure temporary file created via
mktemp or tempfile; after editing core/feedback.md regenerate the docs with
python3 generator/generate.py --all.
In `@platforms/codex/SKILL.md`:
- Around line 2167-2177: The SKILL.md table under "Feedback Categories" is
missing the `docs gap` and `other` categories; do not edit SKILL.md directly —
update the source mapping in the generator templates (add rows for `docs gap` ->
`documentation` or desired label and `other` -> `other`) in the appropriate
generator/templates/*.j2 file or core module that defines the feedback taxonomy,
then run the generator to rebuild platform outputs using python3
generator/generate.py --all so SKILL.md and other generated files are refreshed.
- Around line 2273-2286: The temp file /tmp/specops-feedback-body.md is only
removed after a successful `gh issue create`; modify the Tier 1 flow so the temp
file is always deleted on exit/failure (use a finally/ensure block or trap)
regardless of whether `gh issue create` succeeds, and perform the cleanup before
falling back to Tier 2/3; specifically, ensure the code path that writes the
draft (the step that creates the temp file and runs `gh issue create --body-file
/tmp/specops-feedback-body.md`) deletes the file on both success and error
(replace the current unconditional `rm /tmp/specops-feedback-body.md` placed
only after success with an unconditional cleanup in the error/cleanup handler).
In `@skills/specops/SKILL.md`:
- Around line 2246-2269: The policy currently treats detection of
sensitive/project-specific content as optional (it suggests using
AskUserQuestion or displaying a warning) instead of blocking submission; update
the submission flow logic so that when the sensitive content scan (file paths,
credential patterns, or project-specific code blocks) detects any prohibited
content the flow enforces rejection of the submission and requires redaction
before proceeding. Locate the submission handler that invokes the sensitive
content scan and replace the optional branches that call AskUserQuestion or
display a warning with a hard fail path that returns an error to the user and
prevents issue creation (adjust the code paths around the AskUserQuestion tool
invocation and the "Warning: feedback may contain..." branch to enforce
mandatory redaction).
In `@tests/test_platform_consistency.py`:
- Around line 155-166: The consistency test's "writing_quality" marker list in
tests/test_platform_consistency.py is missing the feedback markers required by
generator/validate.py; update the "writing_quality" array to include the same
feedback marker entries that generator/validate.py expects (add the feedback
marker names used by generator/validate.py so CI validates parity), ensuring the
test's keys match exactly the identifiers in generator/validate.py.
---
Outside diff comments:
In `@CHECKSUMS.sha256`:
- Around line 1-17: The CHECKSUMS.sha256 is missing core/task-delegation.md (and
scripts/install-hooks.sh per changelog); regenerate the checksum file to include
those entries by running the repo checksum generation (e.g., invoke
scripts/bump-version.sh <version> --checksums) and commit the updated
CHECKSUMS.sha256 so that core/task-delegation.md and scripts/install-hooks.sh
appear alongside the other listed files.
---
Nitpick comments:
In `@skills/specops/SKILL.md`:
- Around line 2275-2280: Replace the fixed temp file usage in the Write/Bash
steps with a unique temporary file created via mktemp (or the Write tool's
equivalent) and ensure guaranteed cleanup: create the temp file into a variable
(instead of `/tmp/specops-feedback-body.md`), run `gh issue create` referencing
that variable, parse stdout for the issue URL as before, and always remove the
temp file on both success and failure (e.g., via trap or a finally/cleanup step)
so the Write tool, Bash tool and the cleanup `rm` handle a unique filename and
cannot leak stale data.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 8884c21d-93cd-4f89-bc2c-5c48af7ca3a2
📒 Files selected for processing (33)
- .claude/commands/docs-sync.md
- .github/ISSUE_TEMPLATE/user-feedback.yml
- .specops/index.json
- .specops/memory/context.md
- .specops/writing-quality-rules/design.md
- .specops/writing-quality-rules/implementation.md
- .specops/writing-quality-rules/requirements.md
- .specops/writing-quality-rules/spec.json
- .specops/writing-quality-rules/tasks.md
- CHECKSUMS.sha256
- CLAUDE.md
- README.md
- core/feedback.md
- core/workflow.md
- core/writing-quality.md
- docs/COMMANDS.md
- docs/STRUCTURE.md
- generator/generate.py
- generator/templates/claude.j2
- generator/templates/codex.j2
- generator/templates/copilot.j2
- generator/templates/cursor.j2
- generator/validate.py
- platforms/claude/SKILL.md
- platforms/claude/platform.json
- platforms/codex/SKILL.md
- platforms/codex/platform.json
- platforms/copilot/platform.json
- platforms/copilot/specops.instructions.md
- platforms/cursor/platform.json
- platforms/cursor/specops.mdc
- skills/specops/SKILL.md
- tests/test_platform_consistency.py
```yaml
options:
  - Bug report
  - Feature request
  - Friction / UX issue
  - Improvement
```
Align category taxonomy with /specops feedback canonical categories.
Lines 14-17 currently use a different set than the PR objective (bug, feature, workflow friction, docs gap, other). This can fragment issue triage/reporting when submissions come from both the command flow and the manual template flow.
Suggested fix
```diff
 options:
-  - Bug report
-  - Feature request
-  - Friction / UX issue
-  - Improvement
+  - Bug
+  - Feature
+  - Workflow friction
+  - Docs gap
+  - Other
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
options:
  - Bug
  - Feature
  - Workflow friction
  - Docs gap
  - Other
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/ISSUE_TEMPLATE/user-feedback.yml around lines 13 - 17, Replace the
current YAML "options" list under the template with the canonical `/specops
feedback` categories by changing the existing entries ("Bug report", "Feature
request", "Friction / UX issue", "Improvement") to the exact canonical values:
"bug", "feature", "workflow friction", "docs gap", and "other"; update the
"options" array in the .github/ISSUE_TEMPLATE/user-feedback.yml so the key
"options" contains that exact five-item list to keep triage consistent.
### Feedback Categories

Four categories, each mapping to a GitHub issue label:

| Category | Label | When to use |
|----------|-------|-------------|
| `bug` | `bug` | Something is broken or behaving incorrectly |
| `feature` | `enhancement` | A new capability or behavior |
| `friction` | `friction` | UX issue, workflow annoyance, or confusing behavior |
| `improvement` | `improvement` | Enhancement to existing functionality |
Feedback taxonomy is missing docs gap and other categories.
The category mapping here exposes only 4 categories, but the PR objective specifies 5 (bug, feature, workflow friction, docs gap, other). This causes command behavior/UI drift from the intended contract and from the issue template flow.
Please add docs gap and other to the source module/template, then regenerate platform outputs.
As per coding guidelines: "Never edit generated platform output files directly (SKILL.md, specops.mdc, specops.instructions.md). Edit core/ modules or generator/templates/*.j2 instead, then regenerate with python3 generator/generate.py --all."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@platforms/codex/SKILL.md` around lines 2167 - 2177, The SKILL.md table under
"Feedback Categories" is missing the `docs gap` and `other` categories; do not
edit SKILL.md directly — update the source mapping in the generator templates
(add rows for `docs gap` -> `documentation` or desired label and `other` ->
`other`) in the appropriate generator/templates/*.j2 file or core module that
defines the feedback taxonomy, then run the generator to rebuild platform
outputs using python3 generator/generate.py --all so SKILL.md and other
generated files are refreshed.
…stency

- Harden shell injection: sanitize user-controlled title, pass via env var
- Use unique temp file path (mktemp) and clean up on all code paths
- Enforce credential detection as hard block, require redaction for file paths
- Add docs gap and other categories to align taxonomy across module and template
- Replace print with NOTIFY_USER in non-interactive flow
- Add Collaborative Voice marker to validate.py and test_platform_consistency.py
- Add feedback markers to test REQUIRED_MARKERS for CI parity
- Regenerate checksums with full file list (core/reconciliation.md, core/task-delegation.md)
Actionable comments posted: 2
🧹 Nitpick comments (1)
platforms/claude/SKILL.md (1)
3367-3415: Writing Quality rules are well-structured and comprehensive. The module provides clear, actionable guidelines for spec generation with strong theoretical foundations (ANT/OAT tests for precision, sources from Sutton, Orwell, Pinker, etc.). Line 3369 correctly scopes these as "mandatory" rules to apply "during Phase 2."
💡 Optional enhancement for discoverability
Consider adding an explicit reference from Phase 2 (around line 62, where spec files are created) to this Writing Quality module, similar to how Phase 4 step 3 (line 149) explicitly references "Local Memory Layer module." This would improve discoverability for agents scanning the workflow sequentially.
Example:
**Phase 2: Create Specification**
 1. Generate a structured spec directory in the configured `specsDir`
-2. Create four core files:
+2. Create four core files (following the Writing Quality rules):
   - `requirements.md` (or `bugfix.md` for bugs, `refactor.md` for refactors) - User stories with EARS acceptance criteria, bug analysis, or refactoring rationale
This suggestion would need to be applied in `core/workflow.md` or `generator/templates/claude.j2` before regeneration.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@platforms/claude/SKILL.md` around lines 3367 - 3415, Add an explicit pointer from Phase 2's spec-creation step to the Writing Quality module so agents can discover these mandatory rules: update the Phase 2 overview or the specific "create spec files" paragraph in core/workflow.md (or the corresponding template generator/templates/claude.j2) to include a short sentence like "Follow the Writing Quality rules (see Writing Quality module in platforms/claude/SKILL.md) when generating spec artifacts" and a hyperlink or reference anchor to platforms/claude/SKILL.md; ensure the reference mentions "Writing Quality" and "Phase 2 spec artifacts" so automated agents parsing the workflow can locate the guidance.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@core/feedback.md`:
- Line 3: Update the opening category list in the Feedback Mode description so
it matches the defined taxonomy later in the document: replace the current
four-category list with the six categories used in the file — `bug`, `feature`,
`friction`, `improvement`, `docs gap`, and `other` — ensuring the initial
paragraph that starts "The Feedback Mode allows users..." uses the same
six-category wording as the detailed list (so there is no doc drift between the
summary and the taxonomy).
In `@platforms/cursor/specops.mdc`:
- Around line 2289-2299: Ensure the temporary file created by the Tier 1 `gh`
CLI flow (the `mktemp /tmp/specops-feedback-XXXXXX.md` step that yields
`{tmpfile}`) is always removed even on interruptions: change the
implementation/docs to instruct using a shell-level trap/finally pattern that
registers `rm -f {tmpfile}` on EXIT (or equivalent cleanup in the host language)
immediately after creating `{tmpfile}`, so cleanup runs whether step 4 (`gh
issue create`) succeeds, fails, or the process is aborted; reference the
`SPECOPS_TITLE="[{category}] {sanitized_title}" gh issue create ... --body-file
{tmpfile}` step and ensure the cleanup is invoked before falling through to Tier
2 or when reporting the success message.
---
Nitpick comments:
In `@platforms/claude/SKILL.md`:
- Around line 3367-3415: Add an explicit pointer from Phase 2's spec-creation
step to the Writing Quality module so agents can discover these mandatory rules:
update the Phase 2 overview or the specific "create spec files" paragraph in
core/workflow.md (or the corresponding template generator/templates/claude.j2)
to include a short sentence like "Follow the Writing Quality rules (see Writing
Quality module in platforms/claude/SKILL.md) when generating spec artifacts" and
a hyperlink or reference anchor to platforms/claude/SKILL.md; ensure the
reference mentions "Writing Quality" and "Phase 2 spec artifacts" so automated
agents parsing the workflow can locate the guidance.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 379feb6c-fa6d-4bfa-a7b6-1b0a4896ad10
📒 Files selected for processing (10)
- .github/ISSUE_TEMPLATE/user-feedback.yml
- CHECKSUMS.sha256
- core/feedback.md
- generator/validate.py
- platforms/claude/SKILL.md
- platforms/codex/SKILL.md
- platforms/copilot/specops.instructions.md
- platforms/cursor/specops.mdc
- skills/specops/SKILL.md
- tests/test_platform_consistency.py
✅ Files skipped from review due to trivial changes (1)
- .github/ISSUE_TEMPLATE/user-feedback.yml
🚧 Files skipped from review as they are similar to previous changes (3)
- generator/validate.py
- CHECKSUMS.sha256
- tests/test_platform_consistency.py
@@ -0,0 +1,167 @@
## Feedback Mode

The Feedback Mode allows users to submit feedback about SpecOps (bugs, feature requests, friction, improvements) directly as a GitHub issue on the `sanmak/specops` repository. Submission uses a 3-tier strategy: `gh` CLI → pre-filled browser URL → local draft file.
Update the opening category list to match the defined taxonomy.
Line 3 still lists four categories, but Lines 15-24 define six (bug, feature, friction, improvement, docs gap, other). This creates doc drift in the same module.
Suggested fix
-The Feedback Mode allows users to submit feedback about SpecOps (bugs, feature requests, friction, improvements) directly as a GitHub issue on the `sanmak/specops` repository. Submission uses a 3-tier strategy: `gh` CLI → pre-filled browser URL → local draft file.
+The Feedback Mode allows users to submit feedback about SpecOps (bug reports, feature requests, workflow friction, improvements, docs gaps, and other feedback) directly as a GitHub issue on the `sanmak/specops` repository. Submission uses a 3-tier strategy: `gh` CLI → pre-filled browser URL → local draft file.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@core/feedback.md` at line 3, Update the opening category list in the Feedback
Mode description so it matches the defined taxonomy later in the document:
replace the current four-category list with the six categories used in the file
— `bug`, `feature`, `friction`, `improvement`, `docs gap`, and `other` —
ensuring the initial paragraph that starts "The Feedback Mode allows users..."
uses the same six-category wording as the detailed list (so there is no doc
drift between the summary and the taxonomy).
**Tier 1 — `gh` CLI**:
1. Create a unique temporary file: Run the terminal command(`mktemp /tmp/specops-feedback-XXXXXX.md`) and capture the output as `{tmpfile}`.
2. Create the file at `{tmpfile}` with the composed issue body.
3. Set the sanitized title in an environment variable: `SPECOPS_TITLE="[{category}] {sanitized_title}"`
4. Run the terminal command(`SPECOPS_TITLE="[{category}] {sanitized_title}" gh issue create --repo sanmak/specops --title "$SPECOPS_TITLE" --label "{label}" --body-file {tmpfile}`)
5. Run the terminal command(`rm -f {tmpfile}`) to clean up — always run this regardless of whether step 4 succeeded or failed.
6. If step 4 failed, fall through to Tier 2 (the temp file is already cleaned up).
7. If step 4 succeeded, parse the issue URL from stdout.
8. Tell the user("Feedback submitted: {issue URL}\n\nThank you for helping improve SpecOps!")
9. Stop.
Temporary file cleanup should be guaranteed on all paths.
The current implementation creates a temp file at line 2291 and cleans it up at line 2295, but only if execution reaches that point. If the workflow is interrupted (platform crash, user cancellation, unexpected error) between file creation and cleanup, the temp file could leak.
Consider adding explicit cleanup instructions for the implementer, or note that the shell command should use a trap/finally pattern to ensure cleanup on all exit paths.
🧹 Suggested enhancement for cleanup reliability
Add a note after line 2295:
 5. Run the terminal command(`rm -f {tmpfile}`) to clean up — always run this regardless of whether step 4 succeeded or failed.
+ Note: Implementers should ensure cleanup happens even if the workflow is interrupted (e.g., using shell trap handlers or try-finally blocks where supported).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@platforms/cursor/specops.mdc` around lines 2289 - 2299, Ensure the temporary
file created by the Tier 1 `gh` CLI flow (the `mktemp
/tmp/specops-feedback-XXXXXX.md` step that yields `{tmpfile}`) is always removed
even on interruptions: change the implementation/docs to instruct using a
shell-level trap/finally pattern that registers `rm -f {tmpfile}` on EXIT (or
equivalent cleanup in the host language) immediately after creating `{tmpfile}`,
so cleanup runs whether step 4 (`gh issue create`) succeeds, fails, or the
process is aborted; reference the `SPECOPS_TITLE="[{category}]
{sanitized_title}" gh issue create ... --body-file {tmpfile}` step and ensure
the cleanup is invoked before falling through to Tier 2 or when reporting the
success message.
Merge main into feat/add-feedback-submission-and-writing-quality-rules to resolve 10 conflicts:
- .specops/index.json: JSON array union (12 entries: 9 base + 1 PR + 2 main)
- .specops/memory/context.md: additive markdown sections (3 new spec summaries)
- core/workflow.md: combined PR feedback step + main post-plan acceptance gate
- generator/validate.py: included writing-quality, feedback, and coherence marker checks
- CHECKSUMS.sha256: regenerated
- platforms/claude/SKILL.md, platforms/codex/SKILL.md, platforms/copilot/specops.instructions.md, platforms/cursor/specops.mdc, skills/specops/SKILL.md: regenerated from merged sources
Actionable comments posted: 4
♻️ Duplicate comments (1)
CLAUDE.md (1)
195-195: ⚠️ Potential issue | 🟡 Minor: Validation checklist still omits feedback markers.
Line 195 documents writing-quality markers, but this section still does not list feedback-marker checks enforced by validation/tests. Please add a dedicated feedback-marker bullet here to keep contributor guidance complete.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@CLAUDE.md` at line 195, Update the "Writing quality markers present" checklist entry in CLAUDE.md to include an explicit feedback-marker bullet; locate the list item labelled "Writing quality markers present — structure/order, precision/testability, clarity/conciseness, audience awareness, self-check, sources" and append a new bullet such as "feedback markers — presence of explicit feedback/action items and reviewer notes" (or similar wording consistent with your style), ensuring the new bullet appears alongside the other validation checks so tests and contributors can see that feedback-marker checks are enforced.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@platforms/codex/SKILL.md`:
- Around line 2366-2371: Move the URL-length guard so you validate the composed
prefilled GitHub URL before printing it: when building the Tier 2 URL (the
variable {url} used in the stdout message "Could not submit via `gh` CLI. Open
this URL..."), compute and check its length against the 8000-character threshold
first; if it exceeds 8000 characters, skip Tier 2 and proceed to Tier 3 without
printing the URL, otherwise print the message and URL as before. Make this
change in the generator/template that emits the Tier 2 logic (don’t edit the
generated SKILL.md directly) so the generated output validates length before
emitting the prefilled URL.
- Around line 2236-2246: The SKILL.md feedback taxonomy currently lists six
categories (including `improvement`) which conflicts with the declared
5-category command contract; remove the `improvement` category and ensure the
categories map to `bug`, `feature` (or `enhancement` label as used),
`friction`/`workflow friction`, `docs gap`/`documentation`, and `other`, then
regenerate the file from the canonical source instead of editing SKILL.md
directly by updating the generator source (edit the core modules or the
generator/templates/*.j2 that produce the category table) and run python3
generator/generate.py --all to produce an updated SKILL.md consistent with the
PR contract.
In `@skills/specops/SKILL.md`:
- Line 2373: Rename the duplicated heading "### Graceful Degradation" in this
feedback/fallback section to a unique title such as "### Feedback Graceful
Degradation" so marker-based validation can distinguish it; update any internal
references or validation markers that target the old heading text (search for
"### Graceful Degradation" and replace the one in the feedback/fallback block)
and ensure anchors/links that expect the new heading are adjusted accordingly.
- Around line 2370-2372: The Tier-2 flow currently displays the pre-filled
GitHub URL before checking length; modify the logic that decides whether to
present Tier-2 (the pre-filled GitHub link) so it checks the composed feedback
URL length first and if length > 8000 skips directly to Tier-3 instead of
rendering the Tier-2 message; update the function or block that
composes/displays the message ("Could not submit via `gh` CLI. Open this URL to
submit your feedback:\n\n{url}") to perform the >8000 guard prior to any
user-facing output and only render the Tier-2 URL when the length check passes.
---
Duplicate comments:
In `@CLAUDE.md`:
- Line 195: Update the "Writing quality markers present" checklist entry in
CLAUDE.md to include an explicit feedback-marker bullet; locate the list item
labelled "Writing quality markers present — structure/order,
precision/testability, clarity/conciseness, audience awareness, self-check,
sources" and append a new bullet such as "feedback markers — presence of
explicit feedback/action items and reviewer notes" (or similar wording
consistent with your style), ensuring the new bullet appears alongside the other
validation checks so tests and contributors can see that feedback-marker checks
are enforced.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 2983dac0-c448-4b33-a19c-c8232720c4ab
📒 Files selected for processing (13)
- .specops/index.json
- .specops/memory/context.md
- .specops/writing-quality-rules/tasks.md
- CHECKSUMS.sha256
- CLAUDE.md
- core/workflow.md
- generator/validate.py
- platforms/claude/SKILL.md
- platforms/claude/platform.json
- platforms/codex/SKILL.md
- platforms/copilot/specops.instructions.md
- platforms/cursor/specops.mdc
- skills/specops/SKILL.md
✅ Files skipped from review due to trivial changes (6)
- platforms/claude/platform.json
- .specops/index.json
- CHECKSUMS.sha256
- platforms/claude/SKILL.md
- platforms/copilot/specops.instructions.md
- .specops/writing-quality-rules/tasks.md
🚧 Files skipped from review as they are similar to previous changes (3)
- generator/validate.py
- .specops/memory/context.md
- platforms/cursor/specops.mdc
Six categories, each mapping to a GitHub issue label:

| Category | Label | When to use |
|----------|-------|-------------|
| `bug` | `bug` | Something is broken or behaving incorrectly |
| `feature` | `enhancement` | A new capability or behavior |
| `friction` | `friction` | UX issue, workflow annoyance, or confusing behavior |
| `improvement` | `improvement` | Enhancement to existing functionality |
| `docs gap` | `documentation` | Missing, unclear, or outdated documentation |
| `other` | `other` | Anything that does not fit the above categories |
Feedback taxonomy is inconsistent with the declared command contract.
Line 2236 defines six categories, but this PR’s objective contract is the 5-category set (bug, feature, workflow friction, docs gap, other). Keeping improvement as a separate category creates behavior drift against the issue template and parsing flows.
Based on learnings: Never edit generated platform output files directly (SKILL.md, specops.mdc, specops.instructions.md). Edit core/ modules or generator/templates/*.j2 instead, then regenerate with python3 generator/generate.py --all.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@platforms/codex/SKILL.md` around lines 2236 - 2246, The SKILL.md feedback
taxonomy currently lists six categories (including `improvement`) which
conflicts with the declared 5-category command contract; remove the
`improvement` category and ensure the categories map to `bug`, `feature` (or
`enhancement` label as used), `friction`/`workflow friction`, `docs
gap`/`documentation`, and `other`, then regenerate the file from the canonical
source instead of editing SKILL.md directly by updating the generator source
(edit the core modules or the generator/templates/*.j2 that produce the category
table) and run python3 generator/generate.py --all to produce an updated
SKILL.md consistent with the PR contract.
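Whichever category set wins, the table above is just a lookup from category keyword to issue label. A hypothetical helper (names are illustrative, not taken from the codebase) makes the mapping and its default concrete:

```python
# Category -> GitHub label mapping mirroring the six-category table above;
# labels are as declared there, not verified against the target repo.
CATEGORY_LABELS = {
    "bug": "bug",
    "feature": "enhancement",
    "friction": "friction",
    "improvement": "improvement",
    "docs gap": "documentation",
    "other": "other",
}


def label_for(category: str, default: str = "other") -> str:
    """Resolve a category keyword to its issue label.

    Unrecognized input falls back to a default rather than failing,
    matching the graceful-degradation posture of the feedback flow.
    """
    return CATEGORY_LABELS.get(category.strip().lower(), default)
```

Dropping `improvement`, as the review suggests, would then be a one-line change to the dictionary.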
Pull request overview
Copilot reviewed 33 out of 33 changed files in this pull request and generated 7 comments.
"feedback": [
    "## Feedback Mode",
    "Feedback Mode Detection",
    "Interactive Feedback Workflow",
    "Non-Interactive Feedback Workflow",
    "Issue Composition",
    "Privacy Safety Rules",
    "Submission",
    "Graceful Degradation",
    "sanmak/specops",
],
The feedback marker list includes `Graceful Degradation`, but the `external_tracking` markers also require the same string. This makes the feedback category check ambiguous (a platform could miss the feedback graceful-degradation section while still matching the external-tracking marker). Consider replacing this with a feedback-specific marker (e.g., `feedback-draft.md` or `Tier 3 — Local draft file`).
- **Reconciliation markers present** — drift detection audit and reconcile rules included
- **Steering markers present** — steering file format, inclusion modes, loading procedure, foundation templates
- **Memory markers present** — local memory layer storage format, loading, writing, pattern detection, and safety rules
- **Writing quality markers present** — structure/order, precision/testability, clarity/conciseness, audience awareness, self-check, sources
The Validation section lists writing-quality markers, but it does not mention the newly added feedback markers even though generator/validate.py now enforces them. Add a corresponding bullet for feedback markers (Feedback Mode detection/workflows, privacy rules, submission, graceful degradation) to keep the docs aligned with the validator.
  - **Writing quality markers present** — structure/order, precision/testability, clarity/conciseness, audience awareness, self-check, sources
+ - **Feedback markers present** — feedback mode detection and workflows, privacy rules, submission handling, graceful degradation when feedback is unavailable
**Workflow (interactive):**
1. Select category (bug, feature, friction, improvement)
2. Describe the feedback
3. Review the draft issue
4. Confirm submission

**Workflow (non-interactive):**
- Provide category and description inline
- Issue is composed and submitted automatically

**Requirements:** `gh` CLI installed and authenticated. If unavailable, SpecOps provides a pre-filled browser URL as fallback, or saves the feedback locally with manual submission instructions.

**Notes:** Only triggers when referring to SpecOps feedback, not product features like "add feedback form". Privacy-safe: only SpecOps version, platform, and vertical are included — no project code, paths, or configuration.
This section documents feedback as having only four interactive categories and states gh is a requirement, but the feedback module supports six categories (including docs gap and other) and has URL/local-draft fallbacks when gh isn't available. Update the category list and rephrase requirements to reflect that gh is preferred but not required due to the defined fallbacks.
CHECKSUMS.sha256 (Outdated)
d49c0f2ae098c3c637cbea9ace5f5dc248fcf04d2ce4c80c5e5f61d9cf5e5075 platforms/copilot/platform.json
c4b3bceb0f75baa1c437b38048369b16335da2b507e0aad711c9482035555ed7 core/workflow.md
8321d265f5879b3d3030fbef7f6300b7f1321ee6e178a9addd49c9e789004068 core/safety.md
fda1cece0f1831537973fddff45178c7a666409b45eac265835968d87f609859 core/reconciliation.md
CHECKSUMS.sha256 is missing an entry for core/task-delegation.md, but scripts/bump-version.sh --checksums includes it in the canonical regeneration list. Add core/task-delegation.md to keep the checksum file list consistent with regeneration and integrity verification coverage.
  fda1cece0f1831537973fddff45178c7a666409b45eac265835968d87f609859 core/reconciliation.md
+ aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa core/task-delegation.md
core/feedback.md (Outdated)
3. NOTIFY_USER("Could not submit via `gh` CLI. Open this URL to submit your feedback:\n\n{url}")
4. Note: GitHub URL length limits may truncate long feedback bodies. If the composed URL exceeds 8000 characters, skip to Tier 3 instead.
In Tier 2, the user is notified with the pre-filled GitHub URL before checking the 8,000-character guard. This can present a URL that GitHub will truncate even though the workflow intends to fall back to Tier 3 for long bodies. Compute/check the URL length first; if it exceeds the limit, skip notifying the URL and go straight to Tier 3 with an explanation.
- 3. NOTIFY_USER("Could not submit via `gh` CLI. Open this URL to submit your feedback:\n\n{url}")
- 4. Note: GitHub URL length limits may truncate long feedback bodies. If the composed URL exceeds 8000 characters, skip to Tier 3 instead.
+ 3. Compute the length of the full URL string.
+ 4. If the URL length exceeds 8000 characters, **do not** show the URL. Instead, skip directly to Tier 3 with an explanation such as: NOTIFY_USER("Your feedback is too long to safely include in a GitHub URL without truncation. Saving it locally instead."); then continue with the Tier 3 flow.
+ 5. Otherwise, NOTIFY_USER("Could not submit via `gh` CLI. Open this URL to submit your feedback:\n\n{url}")
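The check-then-show ordering the review asks for can be sketched briefly. This is illustrative only: the function name is hypothetical, and the URL shape follows GitHub's documented `?title=...&body=...&labels=...` query parameters for pre-filling a new issue.

```python
from typing import Optional
from urllib.parse import urlencode

MAX_URL_LEN = 8000  # the guard from the review: longer URLs risk truncation


def tier2_url_or_none(title: str, body: str, labels: str) -> Optional[str]:
    """Compose the pre-filled new-issue URL, checking length first.

    Returns None when the URL exceeds the guard, so the caller falls
    through to Tier 3 instead of showing a URL GitHub would truncate.
    """
    query = urlencode({"title": title, "body": body, "labels": labels})
    url = f"https://github.com/sanmak/specops/issues/new?{query}"
    return url if len(url) <= MAX_URL_LEN else None
```

The caller only emits the NOTIFY_USER message when the return value is non-None, which is exactly the reordering both reviewers requested.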
core/feedback.md (Outdated)
1. Create a unique temporary file: RUN_COMMAND(`mktemp /tmp/specops-feedback-XXXXXX.md`) and capture the output as `{tmpfile}`.
2. WRITE_FILE `{tmpfile}` with the composed issue body.
3. Set the sanitized title in an environment variable: `SPECOPS_TITLE="[{category}] {sanitized_title}"`
4. RUN_COMMAND(`SPECOPS_TITLE="[{category}] {sanitized_title}" gh issue create --repo sanmak/specops --title "$SPECOPS_TITLE" --label "{label}" --body-file {tmpfile}`)
The abstract operation is written as `WRITE_FILE {tmpfile}` (and later `WRITE_FILE` the save path) without the standard `WRITE_FILE(path, content)` / at least `WRITE_FILE(<path>)` call syntax used elsewhere. This reduces consistency and also weakens generator/validate.py's raw-abstract-op detection (it only flags `WRITE_FILE(`). Please rewrite these steps to use the canonical `WRITE_FILE(` form (and similarly for the Tier 3 save) so substitution/validation are reliable.
generator/validate.py (Outdated)
    "Issue Composition",
    "Privacy Safety Rules",
    "Submission",
    "Graceful Degradation",
Graceful Degradation appears in both EXTERNAL_TRACKING_MARKERS and FEEDBACK_MARKERS, so a platform output could omit the feedback graceful-degradation subsection and still satisfy the feedback marker via the external-tracking section. Use a feedback-unique marker string here (e.g., the full heading ### Graceful Degradation plus a nearby feedback-specific phrase like feedback-draft.md) to make validation unambiguous.
- "Graceful Degradation",
+ "### Graceful Degradation (feedback-draft.md)",
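The ambiguity this comment describes is easy to check mechanically: no marker string should appear in more than one category's set. A small sketch (the marker values below are illustrative stand-ins, not the actual contents of generator/validate.py):

```python
# Hypothetical marker sets mirroring validate.py's per-category structure.
EXTERNAL_TRACKING_MARKERS = {"External Tracking", "Graceful Degradation"}
FEEDBACK_MARKERS = {
    "## Feedback Mode",
    "Privacy Safety Rules",
    "### Graceful Degradation (feedback-draft.md)",  # renamed, feedback-unique
}


def shared_markers(*marker_sets):
    """Return strings present in more than one category.

    Any shared string would let one section satisfy another category's
    check, which is exactly the validation gap flagged in the review.
    """
    seen, shared = set(), set()
    for markers in marker_sets:
        shared |= seen & set(markers)
        seen |= set(markers)
    return shared
```

With the renamed feedback marker, the shared set is empty; such a uniqueness assertion could live in the test suite so future marker additions cannot reintroduce the collision.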
…s alignment
- Rename Graceful Degradation marker in FEEDBACK_MARKERS to avoid collision with EXTERNAL_TRACKING_MARKERS (validate.py + test)
- Move URL length check before Tier 2 display in feedback submission
- Use WRITE_FILE(path, content) syntax and quote tmpfile in shell command
- Add label-not-found retry guidance to Tier 1 submission flow
- Add core/task-delegation.md to CHECKSUMS.sha256 for parity with bump-version.sh
- Add feedback markers bullet to CLAUDE.md validation section
- Update docs/COMMANDS.md: 6 categories, 3-tier submission
- Note label convention gap in issue template for manual submissions
Actionable comments posted: 5
♻️ Duplicate comments (1)
core/feedback.md (1)
3-3: ⚠️ Potential issue | 🟠 Major: Feedback taxonomy is out of sync with the 5-category command contract.
Lines 15-24 define six categories (`improvement` included), and Line 48 defaults to `improvement`. This conflicts with the declared 5-category set (`bug`, `feature`, `workflow friction`, `docs gap`, `other`) and can cause drift across parsing, prompts, and template-aligned submissions.
Proposed source-level fix
-The Feedback Mode allows users to submit feedback about SpecOps (bugs, feature requests, friction, improvements) directly as a GitHub issue on the `sanmak/specops` repository. Submission uses a 3-tier strategy: `gh` CLI → pre-filled browser URL → local draft file.
+The Feedback Mode allows users to submit feedback about SpecOps (bugs, feature requests, workflow friction, docs gaps, and other feedback) directly as a GitHub issue on the `sanmak/specops` repository. Submission uses a 3-tier strategy: `gh` CLI → pre-filled browser URL → local draft file.
@@
-Six categories, each mapping to a GitHub issue label:
+Five categories, each mapping to a GitHub issue label:
@@
 | `bug` | `bug` | Something is broken or behaving incorrectly |
 | `feature` | `enhancement` | A new capability or behavior |
-| `friction` | `friction` | UX issue, workflow annoyance, or confusing behavior |
-| `improvement` | `improvement` | Enhancement to existing functionality |
+| `workflow friction` | `friction` | UX issue, workflow annoyance, or confusing behavior |
 | `docs gap` | `documentation` | Missing, unclear, or outdated documentation |
 | `other` | `other` | Anything that does not fit the above categories |
@@
-1. Parse the request for a category keyword. If absent, default to `improvement`.
+1. Parse the request for a category keyword. If absent, default to `other`.
@@
-  - Keywords: "friction", "ux", "confusing", "annoying" → `friction`
-  - Keywords: "improve", "enhance", "better" → `improvement`
+  - Keywords: "friction", "workflow friction", "ux", "confusing", "annoying" → `workflow friction`
@@
-3. If no description could be extracted: NOTIFY_USER("Feedback mode requires a description. Usage: specops feedback [bug|feature|friction|improvement|docs gap|other] <description>") and stop.
+3. If no description could be extracted: NOTIFY_USER("Feedback mode requires a description. Usage: specops feedback [bug|feature|workflow friction|docs gap|other] <description>") and stop.
Based on learnings: Never edit generated platform output files directly (SKILL.md, specops.mdc, specops.instructions.md). Edit core/ modules or generator/templates/*.j2 instead, then regenerate with `python3 generator/generate.py --all`.
Also applies to: 15-24, 48-57
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/feedback.md` at line 3, The feedback taxonomy in Feedback Mode is inconsistent: remove the extra "improvement" category and ensure the allowed categories and default align with the 5-category contract (bug, feature, workflow friction, docs gap, other) by updating where the categories list and default are defined (the Feedback Mode taxonomy block and the default-setting that currently uses "improvement"); make these changes in the source generator (core modules or generator/templates/*.j2) not the generated output, then regenerate artifacts with python3 generator/generate.py --all so parsing, prompts, and templates all use the corrected category list and default.
🧹 Nitpick comments (2)
docs/COMMANDS.md (1)
335-363: Add `/specops feedback` to the top Quick Reference table too. The command is documented well here, but it’s missing from the primary Quick Reference table at the top of the file. Adding it there will improve discoverability and keep both summary tables consistent.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/COMMANDS.md` around lines 335 - 363, Add the documented "/specops feedback" entry to the top Quick Reference table so the summary matches the detailed "Send Feedback" section: locate the Quick Reference table near the top of COMMANDS.md and add a row or bullet for "Send Feedback" with the command text "/specops feedback" (and optionally the short alias shown under Other platforms) so the command is discoverable and consistent with the "Send Feedback" section.
platforms/claude/SKILL.md (1)
2238-2246: Non-default feedback labels may not exist, causing loss of categorization. Three of the six feedback categories (`friction`, `improvement`, `other`) map to non-default GitHub labels that may not exist in the `sanmak/specops` repository. Line 2362 acknowledges this: "non-default labels like `friction`, `improvement`, `other` may not exist on the target repo." The fallback strategy retries without the `--label` flag, which succeeds but results in uncategorized issues. This impacts issue triage and discoverability — users submitting friction reports or improvement suggestions won't have their feedback properly labeled.
Trade-off: The current design prioritizes submission success over metadata completeness. An alternative approach would be to map all categories to guaranteed-to-exist default labels (e.g., `friction` → `enhancement`, `improvement` → `enhancement`, `other` → `question`) to preserve categorization.
If you want to preserve categorization, consider adding a fallback label mapping in core/feedback.md:
- If the primary label fails, retry with a safe fallback label (`enhancement` or `question`) instead of no label
- Document the fallback mapping in the Feedback Categories table
Otherwise, the current graceful degradation strategy is acceptable for ensuring submission reliability.
Also applies to: 2362-2362
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@platforms/claude/SKILL.md` around lines 2238 - 2246, The Feedback Categories table currently maps `friction`, `improvement`, and `other` to non-default GitHub labels that may not exist, causing uncategorized issues when the `--label` flag fallback drops labels; update core/feedback.md and the logic that performs issue submission so that when adding a primary label fails it retries using a guaranteed fallback mapping (e.g., map `friction` -> `enhancement`, `improvement` -> `enhancement`, `other` -> `question`) and document this fallback mapping in the Feedback Categories table in SKILL.md (ensure the retry uses the same code path that currently retries without `--label` but supplies the fallback label instead).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@platforms/claude/SKILL.md`:
- Around line 2356-2365: The Tier 1 "gh CLI" flow deletes the temporary file
({tmpfile}) before performing the retry without the --label flag; update the
Tier 1 steps so the retry logic (the alternate gh issue create call that omits
--label but still uses --body-file "{tmpfile}") runs before the rm -f
"{tmpfile}" cleanup, ensure the cleanup happens only after all retries complete,
adjust subsequent step numbering accordingly, and then regenerate the docs with
the project generator (run python3 generator/generate.py --all).
In `@platforms/copilot/specops.instructions.md`:
- Around line 2377-2379: The manual instructions currently tell users to "Select
the '{category}' label" but categories map to different label names (e.g.
'feature'→'enhancement'), so update the template that composes the message (the
string containing "Your feedback has been saved to `{path}` ... Select the
'{category}' label") to use the mapped label value instead of the raw {category}
token (e.g. replace '{category}' with '{label}' or call the mapping helper that
converts category→label), update core/feedback.md (or the template source)
accordingly, and regenerate the platform outputs so the manual instructions show
the correct mapped label names.
- Around line 217-220: The feedback-intent matcher is too broad (patterns like
"report bug", "report issue") and misroutes product bug reports into Feedback
Mode; update the feedback pattern set used by the Feedback Mode detector to
require an explicit SpecOps qualifier (e.g., require the token "specops" or
"/specops" when matching generic phrases like "report bug", "report issue",
"suggest improvement", "feature request") or only accept explicit phrases that
mention SpecOps (e.g., "specops feedback", "feedback for specops"), modify the
generator template or core matcher that emits these patterns (the module that
constructs the Feedback Mode pattern list referenced as "Feedback Mode" in the
spec) and then regenerate artifacts with the generator (python3
generator/generate.py --all) so the change propagates; apply the same tightening
to the duplicate pattern set noted in the other block referenced for Feedback
Mode.
In `@platforms/cursor/specops.mdc`:
- Around line 2361-2363: The cleanup step removes the temporary body file (rm -f
"{tmpfile}") before the retry path that reuses --body-file, so preserve the
tmpfile until after retries complete: modify the flow so the rm -f "{tmpfile}"
call is moved to after the retry/branch that handles the "--label" failure and
subsequent retry without --label (or only delete tmpfile in the
final/error/fall-through paths), ensuring the retry command still references the
same tmpfile; look for the tmpfile variable and the cleanup invocation and
adjust control flow around the retry logic that checks for "label does not
exist" and re-invokes the command without --label.
- Around line 2379-2381: The manual fallback message uses the placeholder
'{category}' which mismatches actual GitHub labels (e.g., feature→enhancement,
docs gap→documentation); update the code that composes the user notification
string (the "Your feedback has been saved to `{path}`..." message) to map
internal category values to the corresponding GitHub label names and insert the
mapped label instead of '{category}'; locate where the message/template is built
and replace the direct '{category}' insertion with a lookup (e.g.,
labelForCategory(category)) that returns the correct label names like
"enhancement" and "documentation".
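The tmpfile-lifetime fix described in the two prompts above amounts to deferring cleanup until every retry has run. A minimal sketch with an injected command runner (the `gh` argument list mirrors the review's description; the function shape is hypothetical, not the generated workflow's exact steps):

```python
import os
import tempfile

def create_issue(run, title: str, body: str, label: str) -> bool:
    """run() is an injected command runner returning an exit code."""
    fd, tmpfile = tempfile.mkstemp(suffix=".md")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(body)
        base = ["gh", "issue", "create", "--title", title, "--body-file", tmpfile]
        if run(base + ["--label", label]) == 0:
            return True
        # Retry without --label; the body file still exists because
        # cleanup is deferred to the finally block below.
        assert os.path.exists(tmpfile)
        return run(base) == 0
    finally:
        os.unlink(tmpfile)  # delete only after all attempts finish
```

Moving the `rm -f "{tmpfile}"` step after the retry branch in the workflow text achieves the same ordering.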
---
Duplicate comments:
In `@core/feedback.md`:
- Line 3: The feedback taxonomy in Feedback Mode is inconsistent: remove the
extra "improvement" category and ensure the allowed categories and default align
with the 5-category contract (bug, feature, workflow friction, docs gap, other)
by updating where the categories list and default are defined (the Feedback Mode
taxonomy block and the default-setting that currently uses "improvement"); make
these changes in the source generator (core modules or generator/templates/*.j2)
not the generated output, then regenerate artifacts with python3
generator/generate.py --all so parsing, prompts, and templates all use the
corrected category list and default.
---
Nitpick comments:
In `@docs/COMMANDS.md`:
- Around line 335-363: Add the documented "/specops feedback" entry to the top
Quick Reference table so the summary matches the detailed "Send Feedback"
section: locate the Quick Reference table near the top of COMMANDS.md and add a
row or bullet for "Send Feedback" with the command text "/specops feedback" (and
optionally the short alias shown under Other platforms) so the command is
discoverable and consistent with the "Send Feedback" section.
In `@platforms/claude/SKILL.md`:
- Around line 2238-2246: The Feedback Categories table currently maps
`friction`, `improvement`, and `other` to non-default GitHub labels that may not
exist, causing uncategorized issues when the `--label` flag fallback drops
labels; update core/feedback.md and the logic that performs issue submission so
that when adding a primary label fails it retries using a guaranteed fallback
mapping (e.g., map `friction` -> `enhancement`, `improvement` -> `enhancement`,
`other` -> `question`) and document this fallback mapping in the Feedback
Categories table in SKILL.md (ensure the retry uses the same code path that
currently retries without `--label` but supplies the fallback label instead).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 4d5eed6c-7d56-4462-b390-e4e01f3ead1a
📒 Files selected for processing (12)
- .github/ISSUE_TEMPLATE/user-feedback.yml
- CHECKSUMS.sha256
- CLAUDE.md
- core/feedback.md
- docs/COMMANDS.md
- generator/validate.py
- platforms/claude/SKILL.md
- platforms/codex/SKILL.md
- platforms/copilot/specops.instructions.md
- platforms/cursor/specops.mdc
- skills/specops/SKILL.md
- tests/test_platform_consistency.py
✅ Files skipped from review due to trivial changes (4)
- CLAUDE.md
- CHECKSUMS.sha256
- .github/ISSUE_TEMPLATE/user-feedback.yml
- skills/specops/SKILL.md
🚧 Files skipped from review as they are similar to previous changes (2)
- generator/validate.py
- tests/test_platform_consistency.py
8. Check if the request is a **feedback** command (see "Feedback Mode" module). Patterns: "feedback", "send feedback", "report bug", "report issue", "suggest improvement", "feature request for specops", "specops friction". These must refer to sending feedback about SpecOps itself, NOT about a product feature (e.g., "add feedback form", "implement user feedback system", "collect user feedback" is NOT feedback mode). If detected, follow the Feedback Mode workflow instead of the standard phases below.
9. Check if the request is a **map** command (see "Map Subcommand" in the Repo Map module). Patterns: "repo map", "generate repo map", "refresh repo map", "show repo map", "codebase map", "/specops map". The bare word "map" alone is NOT sufficient — it must co-occur with "repo", "codebase", or the explicit "/specops" prefix. These must refer to SpecOps repo map management, NOT a product feature (e.g., "add map component", "map API endpoints", "create sitemap" is NOT map mode). If detected, follow the Map Subcommand workflow instead of the standard phases below.
10. Check if the request is an **audit** or **reconcile** command (see the Reconciliation module). Patterns for audit: "audit", "audit <name>", "health check", "check drift", "spec health". Patterns for reconcile: "reconcile <name>", "fix <name>" (when referring to a spec), "repair <name>", "sync <name>". These must refer to SpecOps spec health, NOT product features like "audit log" or "health endpoint". If detected, follow the Reconciliation module workflow instead of the standard phases below.
11. Check if the request is a **from-plan** command (see "From Plan Mode" module). Patterns: "from-plan", "from plan", "import plan", "convert plan", "convert my plan", "from my plan", "use this plan", "turn this plan into a spec", "make a spec from this plan", "implement the plan", "implement my plan", "go ahead with the plan", "proceed with plan". These must refer to converting an AI coding assistant plan into a SpecOps spec, NOT to a product feature. If so, follow the From Plan Mode workflow instead of the standard phases below.
Tighten feedback intent matching to avoid misrouting product bug requests.
The pattern list includes broad phrases like “report bug” and “report issue”, which can capture normal product-development intents. Consider requiring an explicit SpecOps qualifier (e.g., “specops” token) for these generic patterns before routing into Feedback Mode.
Based on learnings, this file is generated output; apply the fix in core/ or generator/templates/*.j2 and regenerate with python3 generator/generate.py --all.
Also applies to: 2227-2231
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@platforms/copilot/specops.instructions.md` around lines 217 - 220, The
feedback-intent matcher is too broad (patterns like "report bug", "report
issue") and misroutes product bug reports into Feedback Mode; update the
feedback pattern set used by the Feedback Mode detector to require an explicit
SpecOps qualifier (e.g., require the token "specops" or "/specops" when matching
generic phrases like "report bug", "report issue", "suggest improvement",
"feature request") or only accept explicit phrases that mention SpecOps (e.g.,
"specops feedback", "feedback for specops"), modify the generator template or
core matcher that emits these patterns (the module that constructs the Feedback
Mode pattern list referenced as "Feedback Mode" in the spec) and then regenerate
artifacts with the generator (python3 generator/generate.py --all) so the change
propagates; apply the same tightening to the duplicate pattern set noted in the
other block referenced for Feedback Mode.
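The tightening both review comments ask for can be sketched as a two-tier matcher. The pattern lists and function name below are illustrative assumptions drawn from this review, not the generator's actual code:

```python
# Phrases that already name SpecOps are always treated as feedback intents.
STRONG = [
    "specops feedback",
    "feedback for specops",
    "feature request for specops",
    "specops friction",
]

# Generic phrases only count when the request also mentions SpecOps.
CONDITIONAL = ["report bug", "report issue", "suggest improvement", "feature request"]

def is_feedback_intent(request: str) -> bool:
    text = request.lower()
    if any(p in text for p in STRONG):
        return True
    has_qualifier = "specops" in text  # matches both "specops" and "/specops"
    return has_qualifier and any(p in text for p in CONDITIONAL)
```

Under this gate, "report bug in checkout flow" falls through to the normal spec workflow, while "report issue with /specops init" still routes into Feedback Mode.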
2. Create the file at the save path with the composed issue content.
3. Tell the user("Your feedback has been saved to `{path}`. You can submit it manually:\n\n1. Go to https://github.com/sanmak/specops/issues/new\n2. Copy the content from `{path}`\n3. Select the '{category}' label\n4. Submit the issue")
Use mapped label (not category name) in Tier 3 manual instructions.
The instruction says “Select the '{category}' label”, but categories and labels differ (feature→enhancement, docs gap→documentation). This can lead to incorrect manual labeling.
Suggested wording fix
-3. Select the '{category}' label
+3. Select the '{label}' label

Based on learnings, this file is generated output; apply the fix in core/feedback.md (or template source) and regenerate platform outputs.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Current:

2. Create the file at the save path with the composed issue content.
3. Tell the user("Your feedback has been saved to `{path}`. You can submit it manually:\n\n1. Go to https://github.com/sanmak/specops/issues/new\n2. Copy the content from `{path}`\n3. Select the '{category}' label\n4. Submit the issue")

Suggested:

2. Create the file at the save path with the composed issue content.
3. Tell the user("Your feedback has been saved to `{path}`. You can submit it manually:\n\n1. Go to https://github.com/sanmak/specops/issues/new\n2. Copy the content from `{path}`\n3. Select the '{label}' label\n4. Submit the issue")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@platforms/copilot/specops.instructions.md` around lines 2377 - 2379, The
manual instructions currently tell users to "Select the '{category}' label" but
categories map to different label names (e.g. 'feature'→'enhancement'), so
update the template that composes the message (the string containing "Your
feedback has been saved to `{path}` ... Select the '{category}' label") to use
the mapped label value instead of the raw {category} token (e.g. replace
'{category}' with '{label}' or call the mapping helper that converts
category→label), update core/feedback.md (or the template source) accordingly,
and regenerate the platform outputs so the manual instructions show the correct
mapped label names.
2. Create the file at the save path with the composed issue content.
3. Tell the user("Your feedback has been saved to `{path}`. You can submit it manually:\n\n1. Go to https://github.com/sanmak/specops/issues/new\n2. Copy the content from `{path}`\n3. Select the '{category}' label\n4. Submit the issue")
Manual fallback asks for the wrong label value.
Line 2380 tells users to select '{category}', but category names and actual labels differ (e.g., feature → enhancement, docs gap → documentation). This can produce inconsistent/manual mislabeling.
Suggested fix
-3. Select the '{category}' label
+3. Select the '{label}' label

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@platforms/cursor/specops.mdc` around lines 2379 - 2381, The manual fallback
message uses the placeholder '{category}' which mismatches actual GitHub labels
(e.g., feature→enhancement, docs gap→documentation); update the code that
composes the user notification string (the "Your feedback has been saved to
`{path}`..." message) to map internal category values to the corresponding
GitHub label names and insert the mapped label instead of '{category}'; locate
where the message/template is built and replace the direct '{category}'
insertion with a lookup (e.g., labelForCategory(category)) that returns the
correct label names like "enhancement" and "documentation".
…validation markers

- Move temp file cleanup after retry step so retry can access --body-file
- Remove redundant SPECOPS_TITLE env var assignment (already inline in gh command)
- Change generic "Submission" marker to "### Submission" in validate.py and tests to prevent false positive matches against non-feedback sections
Pull request overview
Copilot reviewed 33 out of 33 changed files in this pull request and generated 2 comments.
1. Determine the save path:
   - If FILE_EXISTS(`.specops.json`), READ_FILE(`.specops.json`) to get `specsDir`; otherwise use default `.specops`.
   - Save to `<specsDir>/feedback-draft.md`. If `<specsDir>` does not exist, save to `.specops-feedback-draft.md` in the project root.
2. WRITE_FILE the save path with the composed issue content.
Tier 3 references {path} in the user message, but the workflow never assigns a {path} variable (it only describes a “save path”). Also, step 2 uses an inconsistent abstract-op form (“WRITE_FILE the save path…”) compared to the rest of the module (e.g., WRITE_FILE({tmpfile}, ...)). Consider explicitly setting {path} when determining the save path, then using a consistent WRITE_FILE({path}, ...) instruction so downstream steps can refer to a defined value.
Current:

1. Determine the save path:
   - If FILE_EXISTS(`.specops.json`), READ_FILE(`.specops.json`) to get `specsDir`; otherwise use default `.specops`.
   - Save to `<specsDir>/feedback-draft.md`. If `<specsDir>` does not exist, save to `.specops-feedback-draft.md` in the project root.
2. WRITE_FILE the save path with the composed issue content.

Suggested:

1. Determine and set `{path}`:
   - If FILE_EXISTS(`.specops.json`), READ_FILE(`.specops.json`) to get `specsDir`; otherwise use default `.specops`.
   - Set `{path}` to `<specsDir>/feedback-draft.md`. If `<specsDir>` does not exist, set `{path}` to `.specops-feedback-draft.md` in the project root.
2. WRITE_FILE({path}, composed_issue_content).
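The save-path resolution those steps describe can be sketched as a helper; the function name is hypothetical and the rules are taken directly from the quoted workflow:

```python
import json
import os

def feedback_draft_path(root: str = ".") -> str:
    """Resolve the Tier 3 draft path per the rules quoted above."""
    cfg = os.path.join(root, ".specops.json")
    specs_dir = ".specops"  # default when no config file exists
    if os.path.exists(cfg):
        with open(cfg) as f:
            specs_dir = json.load(f).get("specsDir", ".specops")
    target_dir = os.path.join(root, specs_dir)
    if os.path.isdir(target_dir):
        return os.path.join(target_dir, "feedback-draft.md")
    # Fall back to a dotfile in the project root when specsDir is absent.
    return os.path.join(root, ".specops-feedback-draft.md")
```

Binding the result to a single `{path}` value, as the suggestion proposes, lets the later NOTIFY_USER step reference a defined variable.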
- Issue is composed and submitted automatically

**Submission tiers:** (1) `gh` CLI creates a GitHub issue directly. (2) If `gh` is unavailable, a pre-filled browser URL is provided. (3) If the URL is too long or both tiers fail, feedback is saved as a local draft with manual submission instructions.

**Notes:** Only triggers when referring to SpecOps feedback, not product features like "add feedback form". Privacy-safe: only SpecOps version, platform, and vertical are included — no project code, paths, or configuration.
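The 3-tier degradation just summarized can be sketched as a selection function. The 8000-character URL limit and helper shape are assumptions based on this review's description of the module, not verified SpecOps code:

```python
from urllib.parse import urlencode

URL_LIMIT = 8000  # browser-URL length limit described in the feedback module

def choose_tier(gh_available: bool, title: str, body: str) -> str:
    """Return which submission tier the workflow would use."""
    if gh_available:
        return "tier1:gh-cli"
    url = "https://github.com/sanmak/specops/issues/new?" + urlencode(
        {"title": title, "body": body}
    )
    if len(url) <= URL_LIMIT:
        return "tier2:browser-url"
    return "tier3:local-draft"
```

Privacy safety checks run before any tier is attempted, so a draft that trips them never reaches tier 1 or 2 automatically.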
The non-interactive workflow section says the issue is “submitted automatically”, but the Feedback Mode’s Privacy Safety Rules explicitly require saving a local draft (Tier 3) and not auto-submitting when file paths/code are detected, and Tier 1 can also fall back to a browser URL. Update this doc block to reflect that non-interactive submission is best-effort and may degrade to URL or local draft depending on safety checks and tool availability.
Current:

- Issue is composed and submitted automatically

**Submission tiers:** (1) `gh` CLI creates a GitHub issue directly. (2) If `gh` is unavailable, a pre-filled browser URL is provided. (3) If the URL is too long or both tiers fail, feedback is saved as a local draft with manual submission instructions.

**Notes:** Only triggers when referring to SpecOps feedback, not product features like "add feedback form". Privacy-safe: only SpecOps version, platform, and vertical are included — no project code, paths, or configuration.

Suggested:

- Issue composition and submission are attempted automatically, but may fall back to a browser URL or local draft depending on privacy checks and tool availability.

**Submission tiers:** Best-effort, subject to privacy safety rules. (1) `gh` CLI *attempts* to create a GitHub issue directly when safe and available. (2) If `gh` is unavailable or not used, a pre-filled browser URL is provided. (3) If the URL is too long, privacy checks detect code/paths/config, or both tiers fail, feedback is saved as a local draft with manual submission instructions (no auto-submission).

**Notes:** Only triggers when referring to SpecOps feedback, not product features like "add feedback form". Privacy-safe: only SpecOps version, platform, and vertical are included — no project code, paths, or configuration. When potentially sensitive content (for example, code snippets or file paths) is detected, automatic submission is disabled and a local draft is used instead.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
platforms/claude/SKILL.md (1)
219-2234: ⚠️ Potential issue | 🟠 Major

Feedback intent matching is too broad and can misroute normal product requests.

Patterns like "report bug", "report issue", and "suggest improvement" are generic; without an explicit SpecOps-reference gate, valid product work can be routed into Feedback Mode instead of spec generation.

Proposed tightening (illustrative)

-8. Check if the request is a **feedback** command (see "Feedback Mode" module). Patterns: "feedback", "send feedback", "report bug", "report issue", "suggest improvement", "feature request for specops", "specops friction". These must refer to sending feedback about SpecOps itself, NOT about a product feature (e.g., "add feedback form", "implement user feedback system", "collect user feedback" is NOT feedback mode). If detected, follow the Feedback Mode workflow instead of the standard phases below.
+8. Check if the request is a **feedback** command (see "Feedback Mode" module).
+   - Strong matches: "specops feedback", "feature request for specops", "specops friction"
+   - Conditional matches: "report bug", "report issue", "suggest improvement" ONLY if the request also explicitly references "specops"
+   - Otherwise continue with normal workflow (treat as product request)
+   These must refer to sending feedback about SpecOps itself, NOT about a product feature (e.g., "add feedback form", "implement user feedback system", "collect user feedback" is NOT feedback mode). If detected, follow the Feedback Mode workflow instead of the standard phases below.

Based on learnings: Never edit generated platform output files directly (SKILL.md/specops.*). Edit core/ modules or templates, then regenerate with python3 generator/generate.py --all.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@platforms/claude/SKILL.md` around lines 219 - 2234, Feedback Mode detection is too permissive and misroutes generic product requests; tighten the matching in the Feedback Mode Detection logic (the pattern list under "Feedback Mode Detection" / "Feedback Mode") to require an explicit SpecOps context token (e.g., "specops", "SpecOps", "sanmak/specops", or "about SpecOps") or an explicit target phrase like "about SpecOps" alongside generic phrases ("report bug", "suggest improvement"), and update the detection code that builds the patterns accordingly; do NOT edit the generated SKILL.md directly — instead modify the core detection rule source (core/ module or template responsible for mode detection) and then run python3 generator/generate.py --all to regenerate SKILL.md so the change is authoritative and persists.
♻️ Duplicate comments (4)
core/feedback.md (1)
3-3: ⚠️ Potential issue | 🟡 Minor

Align the opening category list with the defined taxonomy.

This sentence still lists 4 categories, but the module defines 6 (`bug`, `feature`, `friction`, `improvement`, `docs gap`, `other`). Please update the opening text to match.

Suggested fix

-The Feedback Mode allows users to submit feedback about SpecOps (bugs, feature requests, friction, improvements) directly as a GitHub issue on the `sanmak/specops` repository. Submission uses a 3-tier strategy: `gh` CLI → pre-filled browser URL → local draft file.
+The Feedback Mode allows users to submit feedback about SpecOps (bug reports, feature requests, workflow friction, improvements, docs gaps, and other feedback) directly as a GitHub issue on the `sanmak/specops` repository. Submission uses a 3-tier strategy: `gh` CLI → pre-filled browser URL → local draft file.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@core/feedback.md` at line 3, Update the opening sentence that lists feedback categories to match the taxonomy used elsewhere by replacing the current four-item list with the six categories: "bug", "feature", "friction", "improvement", "docs gap", and "other" so the description in Feedback Mode aligns with the module's defined taxonomy.

platforms/cursor/specops.mdc (1)
2379-2379: ⚠️ Potential issue | 🟡 Minor

Manual fallback still instructs category instead of mapped GitHub label.

This can mislabel issues (e.g., `feature` should map to `enhancement`).

Proposed fix

-3. Select the '{category}' label
+3. Select the '{label}' label

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@platforms/cursor/specops.mdc` at line 2379, Update the manual fallback message to instruct users to select the mapped GitHub label rather than the raw '{category}'; replace the template placeholder '{category}' with the mapped label variable or function (e.g., use '{mappedLabel}' or call mapCategoryToGithubLabel(category)) so the message reads "Select the '{mappedLabel}' label" (ensure you reference the same mapping function/variable used elsewhere in the codebase that maps categories like "feature" → "enhancement").

platforms/copilot/specops.instructions.md (2)
2377-2377: ⚠️ Potential issue | 🟡 Minor

Tier 3 manual instructions reference unmapped category name instead of GitHub label.

Line 2377 instructs users to "Select the '{category}' label", but categories map to different label names (`feature` → `enhancement`, `docs gap` → `documentation`). This will cause users to look for a non-existent label when manually submitting feedback.

The template should use `{label}` (the mapped value from the Feedback Categories table, lines 2238-2245) instead of `{category}`.

Based on learnings, this file is generated output; apply the fix in `core/feedback.md` (or the template source that composes this message) and regenerate platform outputs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@platforms/copilot/specops.instructions.md` at line 2377, The assembled message uses the unmapped template variable "{category}" instead of the mapped label "{label}", causing incorrect instructions; update the feedback template (the source that composes the message) to use "{label}" where the manual-submission text currently says "Select the '{category}' label", ensure the template pulls the mapped value from the Feedback Categories mapping (e.g., feature→enhancement, docs gap→documentation) rather than the raw category name, and then regenerate platform outputs so platforms/copilot/specops.instructions.md reflects the corrected "{label}" value.
217-218: ⚠️ Potential issue | 🟡 Minor

Feedback intent patterns remain too broad and can misroute product feature requests.

Patterns like "report bug", "report issue", and "suggest improvement" lack explicit SpecOps qualifiers. A user saying "report bug in checkout flow" or "suggest improvement to the login screen" will match these patterns and incorrectly route into Feedback Mode (submitting to the SpecOps repo) instead of creating a product feature spec.

Only "feature request for specops" and "specops friction" contain explicit qualifiers. The other patterns should require context tokens like "specops" or "/specops" to avoid false positives.

Based on learnings, this file is generated output; apply the fix in `core/feedback.md` or `generator/templates/*.j2`, then regenerate with `python3 generator/generate.py --all`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@platforms/copilot/specops.instructions.md` around lines 217 - 218, The Feedback Mode patterns are too broad: restrict generic phrases ("report bug", "report issue", "suggest improvement") to only match when they include explicit SpecOps qualifiers (e.g., contain "specops" or the "/specops" prefix), leaving explicit patterns like "feature request for specops" and "specops friction" unchanged; update the Feedback Mode pattern set (the "Feedback Mode" rules/templates that generate platforms/copilot/specops.instructions.md) so these tokens require a specops qualifier, keep Map Subcommand logic as-is, and regenerate the instructions from the generator/templates (or core feedback source) so the changes propagate to the generated spec.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@platforms/claude/SKILL.md`:
- Around line 219-2234: Feedback Mode detection is too permissive and misroutes
generic product requests; tighten the matching in the Feedback Mode Detection
logic (the pattern list under "Feedback Mode Detection" / "Feedback Mode") to
require an explicit SpecOps context token (e.g., "specops", "SpecOps",
"sanmak/specops", or "about SpecOps") or an explicit target phrase like "about
SpecOps" alongside generic phrases ("report bug", "suggest improvement"), and
update the detection code that builds the patterns accordingly; do NOT edit the
generated SKILL.md directly — instead modify the core detection rule source
(core/ module or template responsible for mode detection) and then run python3
generator/generate.py --all to regenerate SKILL.md so the change is
authoritative and persists.
---
Duplicate comments:
In `@core/feedback.md`:
- Line 3: Update the opening sentence that lists feedback categories to match
the taxonomy used elsewhere by replacing the current four-item list with the six
categories: "bug", "feature", "friction", "improvement", "docs gap", and "other"
so the description in Feedback Mode aligns with the module's defined taxonomy.
In `@platforms/copilot/specops.instructions.md`:
- Line 2377: The assembled message uses the unmapped template variable
"{category}" instead of the mapped label "{label}", causing incorrect
instructions; update the feedback template (the source that composes the
message) to use "{label}" where the manual-submission text currently says
"Select the '{category}' label", ensure the template pulls the mapped value from
the Feedback Categories mapping (e.g., feature→enhancement, docs
gap→documentation) rather than the raw category name, and then regenerate
platform outputs so platforms/copilot/specops.instructions.md reflects the
corrected "{label}" value.
- Around line 217-218: The Feedback Mode patterns are too broad: restrict
generic phrases ("report bug", "report issue", "suggest improvement") to only
match when they include explicit SpecOps qualifiers (e.g., contain "specops" or
the "/specops" prefix), leaving explicit patterns like "feature request for
specops" and "specops friction" unchanged; update the Feedback Mode pattern set
(the "Feedback Mode" rules/templates that generate
platforms/copilot/specops.instructions.md) so these tokens require a specops
qualifier, keep Map Subcommand logic as-is, and regenerate the instructions from
the generator/templates (or core feedback source) so the changes propagate to
the generated spec.
In `@platforms/cursor/specops.mdc`:
- Line 2379: Update the manual fallback message to instruct users to select the
mapped GitHub label rather than the raw '{category}'; replace the template
placeholder '{category}' with the mapped label variable or function (e.g., use
'{mappedLabel}' or call mapCategoryToGithubLabel(category)) so the message reads
"Select the '{mappedLabel}' label" (ensure you reference the same mapping
function/variable used elsewhere in the codebase that maps categories like
"feature" → "enhancement").
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d2179842-e3ac-48e9-825b-b6c495ecccbe
📒 Files selected for processing (9)
- CHECKSUMS.sha256
- core/feedback.md
- generator/validate.py
- platforms/claude/SKILL.md
- platforms/codex/SKILL.md
- platforms/copilot/specops.instructions.md
- platforms/cursor/specops.mdc
- skills/specops/SKILL.md
- tests/test_platform_consistency.py
✅ Files skipped from review due to trivial changes (1)
- CHECKSUMS.sha256
🚧 Files skipped from review as they are similar to previous changes (3)
- generator/validate.py
- tests/test_platform_consistency.py
- platforms/codex/SKILL.md
   - If FILE_EXISTS(`.specops.json`), READ_FILE(`.specops.json`) to get `specsDir`; otherwise use default `.specops`.
   - Save to `<specsDir>/feedback-draft.md`. If `<specsDir>` does not exist, save to `.specops-feedback-draft.md` in the project root.
2. WRITE_FILE the save path with the composed issue content.
3. NOTIFY_USER("Your feedback has been saved to `{path}`. You can submit it manually:\n\n1. Go to https://github.com/sanmak/specops/issues/new\n2. Copy the content from `{path}`\n3. Select the '{category}' label\n4. Submit the issue")
Tier 3 prompt shows category name instead of label name
The Tier 3 manual-submission instruction tells the user to "Select the '{category}' label", but for two of the six categories the category name and label name differ:
| Category (shown to user) | Actual label |
|---|---|
| `feature` | `enhancement` |
| `docs gap` | `documentation` |
If a user reaches Tier 3 with a `feature` category, they'll be instructed to "Select the 'feature' label" — a label that doesn't exist on the repo. They would need to look for `enhancement` instead. Same for `docs gap` → `documentation`.
Substitute `{label}` (the GitHub label value from the Feedback Categories mapping) so the instruction always names the label the user must actually select:
```diff
- 3. NOTIFY_USER("Your feedback has been saved to `{path}`. You can submit it manually:\n\n1. Go to https://github.com/sanmak/specops/issues/new\n2. Copy the content from `{path}`\n3. Select the '{category}' label\n4. Submit the issue")
+ 3. NOTIFY_USER("Your feedback has been saved to `{path}`. You can submit it manually:\n\n1. Go to https://github.com/sanmak/specops/issues/new\n2. Copy the content from `{path}`\n3. Select the '{label}' label\n4. Submit the issue")
```
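A minimal sketch of the category-to-label mapping the fix relies on. Only the `feature` → `enhancement` and `docs gap` → `documentation` pairs come from the review; the function name and the identity entries are illustrative, not taken from the repo:

```python
# Hypothetical sketch of the Feedback Categories -> GitHub label mapping.
# Only the two divergent pairs are sourced from the review comment; the
# default-to-category behavior is an assumption for illustration.
CATEGORY_TO_LABEL = {
    "bug": "bug",                  # category and label match
    "feature": "enhancement",      # category and label differ
    "docs gap": "documentation",   # category and label differ
}

def map_category_to_github_label(category):
    """Return the repo label for a category, defaulting to the category name."""
    return CATEGORY_TO_LABEL.get(category, category)
```

With this helper, the Tier 3 prompt can interpolate `map_category_to_github_label(category)` instead of `{category}`, so the instruction always names a label that exists on the repo.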
Summary
- `core/feedback.md`: Adds the `/specops feedback` command for structured user feedback submission via GitHub Issues, with category classification (bug, feature, workflow friction, docs gap, other) and optional context capture
- `core/writing-quality.md`: Adds spec writing quality rules covering structure/order, precision/testability, clarity/conciseness, audience awareness, and self-check guidelines
- `.github/ISSUE_TEMPLATE/user-feedback.yml`: Structured form for feedback submitted via the command or manually
- `.specops/writing-quality-rules/`: Complete spec used to build the writing quality feature itself

Changes

- `core/feedback.md` — new feedback submission workflow module
- `core/writing-quality.md` — new writing quality rules module
- `core/workflow.md` — integrates feedback and writing quality into the workflow
- `.github/ISSUE_TEMPLATE/user-feedback.yml` — GitHub issue template for user feedback
- `.specops/writing-quality-rules/` — dogfood spec (requirements, design, tasks, implementation)
- `generator/generate.py` — loads new core modules
- `generator/templates/*.j2` — includes new modules in all 4 platform templates
- `generator/validate.py` — adds FEEDBACK_MARKERS and WRITING_QUALITY_MARKERS validation
- `platforms/*/` — regenerated outputs for all 4 platforms (claude, cursor, codex, copilot)
- `skills/specops/SKILL.md` — regenerated plugin skill
- `tests/test_platform_consistency.py` — tests for new marker consistency
- `CLAUDE.md`, `README.md`, `docs/COMMANDS.md`, `docs/STRUCTURE.md` — documentation updates
- `.claude/commands/docs-sync.md` — new mapping entries for feedback and writing-quality modules
- `CHECKSUMS.sha256` — regenerated checksums

Test Plan
- `python3 generator/validate.py` passes (feedback and writing quality markers validated)
- `bash scripts/run-tests.sh` — all 7 tests pass including platform consistency
- `shasum -a 256 -c CHECKSUMS.sha256` — checksums verify
- Regenerated outputs are in sync (`python3 generator/generate.py --all && git diff --exit-code platforms/ skills/ .claude-plugin/`)
- `/specops feedback` command is documented in platform outputs

Summary by CodeRabbit
New Features
Documentation
Tests
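The marker validation the tests exercise can be sketched as follows. The marker strings shown here are placeholders; the real `FEEDBACK_MARKERS` and `WRITING_QUALITY_MARKERS` lists live in `generator/validate.py` and are mirrored in `tests/test_platform_consistency.py`:

```python
# Illustrative sketch of marker-group validation; marker strings are
# assumed examples, not copied from generator/validate.py.
FEEDBACK_MARKERS = ["## Feedback Mode"]            # placeholder marker
WRITING_QUALITY_MARKERS = ["## Writing Quality"]   # placeholder marker

REQUIRED_MARKERS = {
    "feedback": FEEDBACK_MARKERS,
    "writing_quality": WRITING_QUALITY_MARKERS,
}

def missing_marker_groups(text, marker_groups=REQUIRED_MARKERS):
    """Return the names of marker groups not fully present in text."""
    return [
        name
        for name, markers in marker_groups.items()
        if not all(marker in text for marker in markers)
    ]
```

Run against each generated platform output, an empty result means every required marker group survived generation; a non-empty result names the groups a platform is missing.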
Greptile Summary
This PR adds two new core modules — a `/specops feedback` command and writing quality rules — along with the supporting generator, validator, test, and platform output changes to propagate them across all four platforms (claude, cursor, codex, copilot).

- `core/feedback.md`: Implements a 3-tier submission strategy (gh CLI → pre-filled browser URL → local draft file), with privacy safety rules, shell injection protections (env-var title, `--body-file` for the body, `mktemp`-generated temp paths), and a label-not-found retry. Several issues identified in earlier review rounds were addressed in follow-up commits (903e5b7, 112678c): hardcoded temp path, URL length gate ordering, "Graceful Degradation" marker collision, non-default label failures, and label mismatch between the template and programmatic flow.
- `core/writing-quality.md`: Adds mandatory spec-writing rules (ANT/OAT precision tests, active voice, plain language, causal narrative, self-check) sourced from established writing guides.
- Validation: `WRITING_QUALITY_MARKERS` and `FEEDBACK_MARKERS` added to both `validate.py` and `test_platform_consistency.py`; marker uniqueness verified across generated outputs.
- Remaining issue: the Tier 3 prompt uses `{category}` where it should use `{label}`. For `feature` (label: `enhancement`) and `docs gap` (label: `documentation`), users would be told to select a non-existent label on the repo.

Confidence Score: 4/5

- The one outstanding functional issue is the Tier 3 manual-submission prompt, which uses `{category}` instead of `{label}`, causing incorrect label instructions for the `feature` and `docs gap` categories.

Important Files Changed

- `core/feedback.md` — the Tier 3 prompt uses `{category}` instead of `{label}` for the label-selection instruction, which breaks for `feature` (→ `enhancement`) and `docs gap` (→ `documentation`).
- `tests/test_platform_consistency.py` — adds `writing_quality` and `feedback` groups to REQUIRED_MARKERS, mirroring validate.py. The previously missing `feedback` group is now present (fixed in 903e5b7).
- `generator/generate.py` — adds `writing_quality` and `feedback` to the shared render context, matching the dynamic module-loading key convention (`writing-quality` → `writing_quality`, `feedback` → `feedback`). No issues.
- Claude platform template — renders the `{{ feedback }}` and `{{ writing_quality }}` blocks, consistent with the other three platform templates (codex, copilot, cursor). No issues.

Flowchart
```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[User invokes /specops feedback] --> B{canAskInteractive?}
    B -->|true| C[Interactive: ASK_USER for category & description]
    B -->|false| D[Non-interactive: parse category & description inline]
    C --> E[Apply Privacy Safety Rules]
    D --> E
    E -->|Credentials detected| F[HARD BLOCK — notify user, stop]
    E -->|File paths / code detected| G{canAskInteractive?}
    G -->|true| H[ASK_USER to redact — if declined, save draft only]
    G -->|false| I[Auto-save to Tier 3 draft, notify]
    E -->|Clean| J[Compose issue draft]
    H -->|Accepted & redacted| J
    J --> K{canAskInteractive?}
    K -->|true| L[Display draft & ASK_USER to confirm]
    K -->|false| M[Display draft, proceed to submission]
    L -->|edit| L
    L -->|no| N[Cancel — no issue created]
    L -->|yes| O[Tier 1: gh CLI]
    M --> O
    O --> P[mktemp unique tmpfile\nWRITE_FILE body\ngh issue create with env-var title]
    P -->|label not found| Q[Retry without --label flag]
    Q -->|still fails| R[cleanup tmpfile → Tier 2]
    P -->|other failure| R
    P -->|success| S[cleanup tmpfile\nNotify user with issue URL ✓]
    R --> T{URL ≤ 8000 chars?}
    T -->|yes| U[Tier 2: NOTIFY_USER with pre-filled URL]
    T -->|no / both fail| V[Tier 3: WRITE_FILE local draft\nNotify user with path & manual steps]
```

Last reviewed commit: "fix: resolve temp fi..."
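The three-tier degradation described in the flowchart can be sketched in Python. This is a testable approximation only: the real logic is prompt text in `core/feedback.md`, the `run_gh` callable stands in for `gh issue create --body-file ...`, and the default `draft_path` ignores the `.specops.json` `specsDir` resolution the spec actually performs:

```python
import urllib.parse
from pathlib import Path

MAX_URL_LEN = 8000  # Tier 2 gate from the flowchart
REPO_ISSUE_URL = "https://github.com/sanmak/specops/issues/new"

def submit_feedback(title, body, label, run_gh, draft_path="feedback-draft.md"):
    """Three-tier submission: gh CLI -> pre-filled URL -> local draft file.

    run_gh(extra_args) -> bool is injected so the sketch runs without the
    real gh CLI; it models `gh issue create` with extra flags appended.
    """
    # Tier 1: gh CLI; on failure (e.g. label not found), retry without --label.
    if run_gh(["--label", label]) or run_gh([]):
        return ("tier1", "issue created via gh")

    # Tier 2: pre-filled browser URL, only if it stays under the length gate.
    query = urllib.parse.urlencode({"title": title, "body": body, "labels": label})
    url = f"{REPO_ISSUE_URL}?{query}"
    if len(url) <= MAX_URL_LEN:
        return ("tier2", url)

    # Tier 3: save a local draft for manual submission.
    Path(draft_path).write_text(f"# {title}\n\n{body}\n")
    return ("tier3", draft_path)
```

Note how the retry-without-label step falls out of short-circuit evaluation in Tier 1, and how a body long enough to push the URL past 8000 characters skips Tier 2 entirely, matching the `URL ≤ 8000 chars?` decision node.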