diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml
index dc86e1d5..721e8205 100644
--- a/.github/workflows/release.yml
+++ b/.github/workflows/release.yml
@@ -31,4 +31,4 @@ jobs:
           gh release create "$tag" \
             --title="$tag" \
             --draft \
-            main.js styles.css manifest.json
+            dist/main.js dist/manifest.json dist/styles.css
diff --git a/.gitignore b/.gitignore
index 510f61b5..7866f282 100644
--- a/.gitignore
+++ b/.gitignore
@@ -8,9 +8,10 @@
 # npm
 node_modules
 
-# Don't include the compiled main.js file in the repo.
+# Don't include the compiled files in the repo.
 # They should be uploaded to GitHub releases instead.
 main.js
+dist/
 
 # Exclude sourcemaps
 *.map
@@ -25,3 +26,10 @@ data.json
 quokka.ts
 Makefile
 archive/
+
+# exclude MCP file from commit, it can contain secrets
+mcp.json
+.mcp.json
+
+# code coverage
+coverage/
diff --git a/.prettierrc b/.prettierrc
deleted file mode 100644
index 86c4be00..00000000
--- a/.prettierrc
+++ /dev/null
@@ -1,6 +0,0 @@
-{
-  "semi": false,
-  "singleQuote": true,
-  "printWidth": 120,
-  "trailingComma": "none"
-}
diff --git a/.specify/COMPLIANCE_CHECKLIST.md b/.specify/COMPLIANCE_CHECKLIST.md
new file mode 100644
index 00000000..eb61702d
--- /dev/null
+++ b/.specify/COMPLIANCE_CHECKLIST.md
@@ -0,0 +1,223 @@
+# Constitution Compliance Checklist
+
+**Version**: 1.0.0 | **For**: Code Reviews & Pull Requests
+
+Use this checklist to validate that code changes comply with the Tars Plugin Constitution before merging.
+
+## 🧪 Test-Driven Development (CRITICAL)
+
+### Pre-Implementation
+- [ ] Tests were written **BEFORE** implementation code
+- [ ] Tests initially **FAILED** (Red phase confirmed)
+- [ ] Tests now **PASS** after implementation (Green phase)
+- [ ] Code was refactored for clarity (Refactor phase if needed)
+
+### Test Quality
+- [ ] Every test uses **GIVEN / WHEN / THEN** pattern
+- [ ] Each section (GIVEN/WHEN/THEN) is separated by blank lines
+- [ ] Test comments explain **business purpose**, not just code mechanics
+- [ ] Tests start with `// TEST: {clear description}`
+- [ ] `AND:` is used for additional steps within sections
+- [ ] Test file naming: `*.test.ts` in appropriate `tests/` subdirectory
+
+### Example Validation
+```typescript
+// ✅ COMPLIANT
+// TEST: Provider handles malformed API response gracefully
+test('sendRequest catches and logs JSON parse errors', async () => {
+  // GIVEN: API returns invalid JSON response
+  const mockResponse = "not valid json{"
+
+  // WHEN: Provider attempts to parse the response
+  const result = await provider.handleResponse(mockResponse)
+
+  // THEN: Error is caught and user is notified
+  expect(result.error).toBeDefined()
+  // AND: Error is logged for debugging
+  expect(console.error).toHaveBeenCalled()
+})
+
+// ❌ NON-COMPLIANT
+test('parse error', () => {
+  const r = "bad json"
+  expect(() => parse(r)).toThrow() // No GIVEN/WHEN/THEN, no business context
+})
+```
+
+## 🏗️ Plugin Architecture
+
+### Module Organization
+- [ ] New providers added to `src/providers/`
+- [ ] New commands added to `src/commands/`
+- [ ] Editor logic kept in `src/editor.ts` (not scattered)
+- [ ] Settings changes reflected in `src/settings.ts` and `src/settingTab.ts`
+- [ ] No business logic in UI components
+
+### Provider Implementation
+- [ ] Provider options interface **extends BaseOptions**
+- [ ] `sendRequestFunc` returns **async generator** (uses `yield`)
+- [ ] Streaming implemented token-by-token
+- [ ] **AbortController** support implemented
+- [ ] Error handling uses **Notice** for user feedback
+- [ ] Capability emoji implemented via `getCapabilityEmoji()`
+- [ ] Provider registered in `src/providers/index.ts`
+
+### Tag-Based Interaction
+- [ ] Tag syntax preserved: `#Role :` format
+- [ ] Blank lines separate messages
+- [ ] Callouts properly ignored
+- [ ] Conversation order respected: System → User ↔ Assistant
+
+## 📘 TypeScript Type Safety
+
+### Strict Mode Compliance
+- [ ] No `any` types (or explicitly justified in comments)
+- [ ] No `@ts-ignore` (or explicitly justified in comments)
+- [ ] All public functions have explicit return types
+- [ ] All parameters have explicit types (no implicit `any`)
+- [ ] Null/undefined handled explicitly (`strictNullChecks`)
+
+### Type Definitions
+- [ ] `interface` used for extensible contracts
+- [ ] `type` used for unions/intersections/compositions
+- [ ] Provider options extend `BaseOptions`
+- [ ] Message types use `Message` interface from `./providers`
+
+### Naming Conventions
+- [ ] Classes/Interfaces/Types: **PascalCase** (`ClaudeOptions`)
+- [ ] Functions/variables/properties: **camelCase** (`sendRequest`)
+- [ ] Constants: **UPPER_SNAKE_CASE** (`DEFAULT_SETTINGS`)
+- [ ] No `I` prefix on interfaces (use `BaseOptions`, not `IBaseOptions`)
+
+## 🔍 Code Quality
+
+### Linting
+- [ ] `npm run lint` passes with **zero warnings**
+- [ ] Prettier formatting applied: `npm run format`
+- [ ] No unused variables (except `_` prefixed)
+- [ ] ESLint rules from `eslint.config.mjs` followed
+
+### Code Standards
+- [ ] ES6 module syntax (`import`/`export`)
+- [ ] Async/await for promises (not `.then()/.catch()`)
+- [ ] Arrow functions for callbacks
+- [ ] Template literals for string interpolation
+- [ ] Destructuring used where appropriate
+
+### Error Handling
+- [ ] User-facing errors use `new Notice(t('message'))`
+- [ ] Debug information logged to `console.debug` or `console.error`
+- [ ] HTTP errors translated to helpful messages (401, 404, 429, etc.)
+- [ ] All strings wrapped in `t()` for internationalization
+
+## 🔗 Upstream Compatibility
+
+### Architecture Preservation
+- [ ] No breaking changes to existing provider interfaces
+- [ ] Command registration pattern unchanged
+- [ ] Settings structure compatible with upstream
+- [ ] Tag syntax backwards compatible
+- [ ] Plugin lifecycle hooks unchanged (`onload`, `onunload`)
+
+### Configuration Matching
+- [ ] `tsconfig.json` settings match upstream
+- [ ] `eslint.config.mjs` rules match upstream
+- [ ] Build configuration (`esbuild.config.mjs`) compatible
+- [ ] `package.json` scripts unchanged (or documented)
+
+### Documentation
+- [ ] Deviations from upstream documented with rationale
+- [ ] README updated if user-facing changes
+- [ ] Provider-specific behavior documented
+- [ ] Breaking changes clearly marked
+
+## 🧹 Polish & Refinement
+
+### Code Cleanliness
+- [ ] No commented-out code blocks
+- [ ] No `console.log` (use `console.debug` for dev logs)
+- [ ] No TODOs without issue tracking
+- [ ] Duplicate code refactored/extracted
+- [ ] File length reasonable (<500 lines preferred)
+
+### Testing Coverage
+- [ ] Provider message formatting tested
+- [ ] Tag parsing tested for edge cases
+- [ ] Stream cancellation tested
+- [ ] Error scenarios tested (network failures, invalid responses)
+- [ ] Settings validation tested
+
+### Performance
+- [ ] No synchronous file I/O in hot paths
+- [ ] Streaming chunks efficiently (not buffering entire response)
+- [ ] Status bar updates don't block rendering
+- [ ] Large responses don't freeze UI
+
+## 📋 Pull Request Checklist
+
+### Before Submitting
+- [ ] All tests pass locally
+- [ ] `npm run build` succeeds
+- [ ] `npm run lint` passes (zero warnings)
+- [ ] Manual testing in real Obsidian vault completed
+- [ ] No merge conflicts with main branch
+
+### PR Description
+- [ ] Links to related issue/spec
+- [ ] Describes WHAT changed and WHY
+- [ ] Screenshots/GIFs for UI changes
+- [ ] Breaking changes clearly called out
+- [ ] Testing steps documented
+
+### Reviewer Guidance
+- [ ] TDD workflow evidence provided (commit history shows tests first)
+- [ ] Constitution compliance verified (this checklist completed)
+- [ ] Upstream compatibility assessed
+- [ ] Performance impact considered
+
+## 🚦 Gate Status
+
+**Mark each gate as PASS/FAIL before merge:**
+
+| Gate | Status | Notes |
+|------|--------|-------|
+| TDD Workflow | ⬜ PASS / ⬜ FAIL | Tests written before code? |
+| Test Format | ⬜ PASS / ⬜ FAIL | GIVEN/WHEN/THEN with comments? |
+| TypeScript Strict | ⬜ PASS / ⬜ FAIL | No `any`, explicit types? |
+| Linting | ⬜ PASS / ⬜ FAIL | Zero warnings? |
+| Architecture | ⬜ PASS / ⬜ FAIL | Modules in correct locations? |
+| Upstream Compat | ⬜ PASS / ⬜ FAIL | No breaking changes? |
+| Manual Testing | ⬜ PASS / ⬜ FAIL | Works in real Obsidian? |
+
+**All gates must PASS before merge.**
+
+## 🎯 Quick Validation Commands
+
+```bash
+# Run all quality checks
+npm run lint   # Must pass (zero warnings)
+npm run build  # Must succeed
+npm test       # Must pass (if test suite exists)
+
+# Check TypeScript strict mode
+grep -E "(noImplicitAny|strictNullChecks)" tsconfig.json
+# Should show both set to true
+
+# Find potential violations
+grep -r "any" src/         # Check for any types
+grep -r "@ts-ignore" src/  # Check for ts-ignore
+grep -r "console.log" src/ # Check for debug logs
+```
+
+## 📚 Reference
+
+- **Full Constitution**: `.specify/memory/constitution.md`
+- **Quick Reference**: `.specify/QUICK_REFERENCE.md`
+- **Constitution Summary**: `.specify/CONSTITUTION_SUMMARY.md`
+
+---
+
+**Remember**: This is a living document. Update it when the constitution changes.
+
+**Version History**:
+- v1.0.0 (2025-10-01): Initial checklist based on constitution v1.0.0
diff --git a/.specify/CONSTITUTION_SUMMARY.md b/.specify/CONSTITUTION_SUMMARY.md
new file mode 100644
index 00000000..4ed17357
--- /dev/null
+++ b/.specify/CONSTITUTION_SUMMARY.md
@@ -0,0 +1,143 @@
+# Constitution Creation Summary
+
+**Date**: 2025-10-01
+**Version**: 1.0.0
+**Project**: Tars - Obsidian AI Integration Plugin (Fork)
+
+## Overview
+
+The constitution has been created based on comprehensive analysis of the existing obsidian-tars project structure, codebase patterns, and fork development requirements. This document serves as the project's governance framework.
+
+## Project Analysis Conducted
+
+### Repository Structure Examined
+- **Package configuration**: `package.json`, `tsconfig.json`, `eslint.config.mjs`
+- **Source code**: `src/` directory with providers, commands, editor, settings, suggest modules
+- **Providers**: 18 AI provider implementations (Claude, OpenAI, DeepSeek, Gemini, etc.)
+- **Architecture**: Obsidian plugin pattern with tag-based interaction model
+- **Build system**: esbuild with development/production modes
+- **License**: MIT License (TarsLab 2024)
+
+### Key Patterns Identified
+- **Plugin architecture**: Commands, providers, settings, suggest as core modules
+- **TypeScript strict mode**: `noImplicitAny`, `strictNullChecks` enabled
+- **ESLint configuration**: TypeScript ESLint with Prettier integration
+- **Provider pattern**: Interface-based with `BaseOptions`, `SendRequest`, `Vendor`
+- **Tag-driven UX**: `#User :`, `#Assistant :`, `#System :`, `#NewChat` syntax
+- **Streaming responses**: Async generators with `yield` pattern
+- **Multimodal support**: Images, PDFs via embed resolution
+
+## Constitution Structure
+
+### Core Principles (5)
+1. **Test-Driven Development (NON-NEGOTIABLE)** - GIVEN/WHEN/THEN pattern mandatory
+2. **Upstream Compatibility** - Fork-specific governance for TarsLab/obsidian-tars
+3. **Plugin Architecture Modularity** - Obsidian-specific constraints
+4. **TypeScript Type Safety** - Strict mode enforcement
+5. **Code Quality & Linting** - Zero-warning policy
+
+### Additional Sections
+- **Plugin Architecture Standards** - Provider integration, tag-based UX, settings
+- **Code Quality & Testing** - Test structure, coverage requirements
+- **TypeScript Standards** - Module structure, naming conventions
+- **AI Provider Integration** - Required capabilities, error handling
+- **Governance** - Amendment process, compliance enforcement, conflict resolution
+
+## Template Updates
+
+All three templates have been updated to align with the constitution:
+
+### `.specify/templates/plan-template.md`
+- ✅ Added comprehensive Constitution Check section with 5 principle categories
+- ✅ Updated version reference to v1.0.0
+- ✅ Included Tars-specific gates (TypeScript, plugin architecture, upstream compatibility)
+
+### `.specify/templates/spec-template.md`
+- ✅ Added "Tars Plugin Specific" checklist to Review & Acceptance section
+- ✅ Includes Obsidian plugin compatibility checks
+- ✅ Tag-based interaction and multimodal content considerations
+
+### `.specify/templates/tasks-template.md`
+- ✅ Updated task examples to TypeScript/Obsidian plugin patterns
+- ✅ Emphasized GIVEN/WHEN/THEN test format requirement
+- ✅ Included Obsidian-specific tasks (provider registration, tag integration)
+- ✅ Added upstream compatibility verification task
+- ✅ Replaced generic examples with plugin-specific paths
+
+## Key Principles Emphasized
+
+### Test-Driven Development
+- **Red-Green-Refactor** cycle is mandatory
+- Tests MUST be written before implementation
+- **GIVEN/WHEN/THEN** pattern with clear comments
+- Business purpose explanation required in test comments
+- Test sections separated by blank lines
+
+### Fork Development Standards
+- Preserve upstream architecture patterns
+- Match TypeScript and ESLint configurations
+- Document all deviations with rationale
+- Maintain PR-ready state for potential contributions
+- Respect original design decisions
+
+### Obsidian Plugin Constraints
+- Module isolation (providers/, commands/, editor.ts, settingTab.ts)
+- Standard interfaces (SendRequest, BaseOptions, Vendor)
+- Tag-based interaction model preservation
+- Settings persistence via Obsidian APIs (loadData/saveData)
+- Notice-based user feedback
+
+## Compliance Enforcement
+
+### Pre-commit Requirements
+- Tests written before code
+- `npm run lint` passes (zero warnings)
+- TypeScript strict mode compliance
+- GIVEN/WHEN/THEN test format
+- No `@ts-ignore` without justification
+
+### Priority Order for Conflicts
+1. Obsidian plugin API requirements (non-negotiable)
+2. Test-Driven Development principle (non-negotiable)
+3. Upstream compatibility (strong preference)
+4. TypeScript type safety (strong preference)
+5. Code quality standards (enforced)
+
+## Next Steps
+
+### For Development Work
+1. Use `/plan` workflow to generate implementation plans
+2. All plans will reference this constitution
+3. Constitution Check gates will validate compliance
+4. Tests must pass TDD validation before implementation
+
+### For Constitution Updates
+1. Document motivation and impact
+2. Update sync impact report (HTML comment)
+3. Semantic versioning: MAJOR.MINOR.PATCH
+4. Propagate changes to all three templates
+
+## Files Created/Updated
+
+```
+.specify/
+├── memory/
+│   └── constitution.md (created v1.0.0)
+├── templates/
+│   ├── plan-template.md (updated)
+│   ├── spec-template.md (updated)
+│   └── tasks-template.md (updated)
+└── CONSTITUTION_SUMMARY.md (this file)
+```
+
+## Constitution Version Details
+
+- **Version**: 1.0.0
+- **Ratified**: 2025-10-01
+- **Last Amended**: 2025-10-01
+- **Based on**: Analysis of TarsLab/obsidian-tars v3.5.0 fork
+- **Compatibility**: Obsidian plugin API 1.5.8+
+
+---
+
+**This constitution is now active and governs all development work on the obsidian-tars fork.**
diff --git a/.specify/QUICK_REFERENCE.md b/.specify/QUICK_REFERENCE.md
new file mode 100644
index 00000000..37f82c98
--- /dev/null
+++ b/.specify/QUICK_REFERENCE.md
@@ -0,0 +1,217 @@
+# Tars Plugin Constitution - Quick Reference
+
+**Version**: 1.0.0 | **Full Document**: `.specify/memory/constitution.md`
+
+## 🚨 NON-NEGOTIABLE Rules
+
+### ✅ Test-Driven Development
+```typescript
+// 1. Write test FIRST (must fail)
+// TEST: Stream cancellation stops generation
+test('provider respects AbortController signal', () => {
+  // GIVEN: Active stream with abort controller
+  const controller = new AbortController()
+  const stream = provider.sendRequest(messages, controller)
+
+  // WHEN: Abort signal is triggered
+  controller.abort()
+
+  // THEN: Stream stops immediately
+  // AND: No further tokens are yielded
+})
+
+// 2. See it RED (test fails)
+// 3. Implement to GREEN (test passes)
+// 4. Refactor if needed
+```
+
+### 📋 Test Format Requirements
+- **Pattern**: GIVEN / WHEN / THEN with `AND:` for additional steps
+- **Comments**: Clear business purpose, not just code description
+- **Separation**: Blank lines between GIVEN/WHEN/THEN sections
+- **Tag**: Start with `// TEST: {clear description of what's being tested}`
+
+## 🏗️ Architecture Patterns
+
+### Provider Implementation Checklist
+```typescript
+// ✅ 1. Define options interface
+export interface MyProviderOptions extends BaseOptions {
+  api_key: string
+  model: string
+  // ... provider-specific settings
+}
+
+// ✅ 2. Implement sendRequestFunc
+const sendRequestFunc = (settings: MyProviderOptions): SendRequest =>
+  async function* (messages, controller, resolveEmbed) {
+    // ... streaming implementation
+    for await (const chunk of stream) {
+      yield chunk.text // token-by-token
+    }
+  }
+
+// ✅ 3. Export vendor object
+export const myProvider: Vendor = {
+  sendRequest: sendRequestFunc,
+  protocol: 'my-protocol',
+  getCapabilityEmoji: () => '🎯'
+}
+```
+
+### File Structure Rules
+- **Providers**: `src/providers/[name].ts` (isolated, interface-based)
+- **Commands**: `src/commands/[name].ts` (tag-driven)
+- **Tests**: `tests/[category]/[feature].test.ts` (GIVEN/WHEN/THEN)
+- **Settings**: Use `PluginSettings` interface, persist via `loadData()`/`saveData()`
+
+## 🔍 Code Quality Gates
+
+### Before Every Commit
+```bash
+# Must pass with ZERO warnings
+npm run lint
+
+# Must pass TypeScript compilation
+npm run build
+```
+
+### TypeScript Strict Mode
+- ✅ `noImplicitAny: true` (no implicit any types)
+- ✅ `strictNullChecks: true` (handle null/undefined explicitly)
+- ✅ Explicit types for all public interfaces
+- ✅ `interface` for contracts, `type` for composition
+- ❌ No `any` types (except justified third-party APIs)
+- ❌ No `@ts-ignore` (without explicit comment explaining why)
+
+### Naming Conventions
+```typescript
+// PascalCase: Classes, Interfaces, Types
+class TarsPlugin {}
+interface BaseOptions {}
+type Vendor = {}
+
+// camelCase: Functions, variables, properties
+function buildRunEnv() {}
+const sendRequest = () => {}
+
+// UPPER_SNAKE_CASE: Constants
+const DEFAULT_SETTINGS = {}
+const APP_FOLDER = '.tars'
+```
+
+## 🔌 Obsidian Plugin Specifics
+
+### Tag-Based Interaction Model
+```markdown
+#NewChat
+
+#System : You are a helpful assistant.
+
+#User : What is 1+1?
+
+#Claude : (triggers AI response)
+```
+
+### Required Patterns
+- **Commands**: Register via `this.addCommand()` in `main.ts`
+- **Settings**: Extend `PluginSettings`, use `TarsSettingTab`
+- **Notices**: Use `new Notice(t('message'))` for user feedback
+- **Internationalization**: Wrap strings in `t()` helper
+- **Tag parsing**: Respect blank line separators, ignore callouts
+
+### Provider Capabilities
+```typescript
+// ✅ MUST support
+- Streaming (async generators)
+- AbortController cancellation
+- API key validation
+- Base URL configuration
+- Error handling with Notice
+
+// 🔄 MAY support (provider-dependent)
+- Multimodal (images via resolveEmbedAsBinary)
+- Web search
+- Extended thinking
+- Document interpretation (PDF)
+```
+
+## 🔗 Upstream Compatibility
+
+### Fork Development Rules
+- ✅ Preserve existing architecture (commands, providers, settings, suggest)
+- ✅ Match `tsconfig.json` and `eslint.config.mjs` from upstream
+- ✅ Keep PR-ready (document all deviations)
+- ✅ Follow upstream patterns (check recent commits)
+- ❌ Don't break existing tag syntax or provider interfaces
+
+## 🚦 When Things Conflict
+
+### Priority Order (1 = highest)
+1. **Obsidian plugin API requirements** (can't violate)
+2. **Test-Driven Development** (non-negotiable)
+3. **Upstream compatibility** (strong preference)
+4. **TypeScript type safety** (strong preference)
+5. **Code quality standards** (enforced)
+
+## 📚 Quick Commands
+
+### Workflows Available
+```bash
+/specify      # Create feature specification
+/plan         # Generate implementation plan
+/tasks        # Generate ordered task list
+/implement    # Execute tasks
+/constitution # Update this constitution
+```
+
+### Common Patterns
+```typescript
+// ✅ Good: Type-safe provider options
+interface ClaudeOptions extends BaseOptions {
+  max_tokens: number
+  enableThinking: boolean
+}
+
+// ❌ Bad: Implicit any
+function processMessage(msg) { // ← no type
+  return msg.content
+}
+
+// ✅ Good: Explicit error handling
+try {
+  const response = await api.call()
+} catch (error) {
+  new Notice(t('API request failed'))
+  console.error('Claude API error:', error)
+}
+
+// ❌ Bad: Silent failure
+const response = await api.call() // might throw
+```
+
+## 🎯 Test Coverage Priorities
+
+### MUST Test
+- ✅ Provider message formatting
+- ✅ Tag parsing and role assignment
+- ✅ Stream cancellation
+- ✅ Settings validation
+- ✅ Multimodal content handling
+
+### MAY Skip
+- ❌ Third-party API calls (mock instead)
+- ❌ Obsidian internal APIs (assume correct)
+- ❌ UI rendering (manual verification)
+
+## 📖 Reference Documents
+
+- **Full Constitution**: `.specify/memory/constitution.md`
+- **Plan Template**: `.specify/templates/plan-template.md`
+- **Spec Template**: `.specify/templates/spec-template.md`
+- **Tasks Template**: `.specify/templates/tasks-template.md`
+- **Upstream Repo**: `https://github.com/TarsLab/obsidian-tars`
+
+---
+
+**Remember**: When in doubt, write a test first! 🧪
diff --git a/.specify/memory/constitution.md b/.specify/memory/constitution.md
new file mode 100644
index 00000000..f4b06fcd
--- /dev/null
+++ b/.specify/memory/constitution.md
@@ -0,0 +1,226 @@
+
+
+# Tars Plugin Constitution
+
+**Obsidian AI Integration Plugin - Fork Development Standards**
+
+## Core Principles
+
+### I. Test-Driven Development (NON-NEGOTIABLE)
+
+**All code changes MUST follow strict TDD workflow:**
+- Write tests FIRST before implementation
+- Use GIVEN/WHEN/THEN pattern with clear comments
+- Each test section MUST be separated by blank lines
+- Test comments MUST explain business purpose and scenario
+- Red-Green-Refactor cycle is mandatory
+- Tag tests with clear messages using comments
+
+**Rationale**: TDD ensures correctness, prevents regressions, and serves as living documentation. This is particularly critical for AI integration where edge cases and provider-specific behavior must be explicitly validated.
+
+### II. Upstream Compatibility
+
+**Maintain compatibility with upstream TarsLab/obsidian-tars:**
+- Follow upstream coding conventions and patterns
+- Preserve existing plugin architecture (commands, providers, settings, suggest)
+- Match TypeScript configuration and ESLint rules
+- Keep PR-ready state for potential upstream contributions
+- Document all deviations from upstream with clear rationale
+
+**Rationale**: As a fork, we must respect the original project's design decisions while enabling innovation. This ensures we can contribute improvements back and benefit from upstream updates.
+
+### III. Plugin Architecture Modularity
+
+**Obsidian plugin structure MUST be maintained:**
+- Provider implementations are isolated in `src/providers/`
+- Commands are separated in `src/commands/`
+- Editor logic centralized in `src/editor.ts`
+- Settings management isolated in `src/settingTab.ts` and `src/settings.ts`
+- Each provider MUST implement standard interfaces: `SendRequest`, `BaseOptions`, `Vendor`
+- Tag-based interaction model is preserved
+
+**Rationale**: Obsidian plugins have specific architectural requirements. Modularity enables independent testing and makes AI provider integration straightforward.
+
+### IV. TypeScript Type Safety
+
+**Strict TypeScript discipline:**
+- `noImplicitAny: true` and `strictNullChecks: true` MUST be enforced
+- All public interfaces MUST be explicitly typed
+- Use `interface` for extensible contracts, `type` for composition
+- Provider options MUST extend `BaseOptions`
+- Avoid `any` except when interfacing with untyped third-party APIs
+
+**Rationale**: Type safety catches errors at compile time, improves IDE experience, and serves as inline documentation. Critical for AI integrations where message formats vary by provider.
+
+### V. Code Quality & Linting
+
+**Quality gates before all commits:**
+- Code MUST pass `eslint .` without warnings
+- Prefer `prettier --write src/` for formatting
+- No unused variables (except those prefixed with `_`)
+- No `@ts-ignore` comments without explicit justification
+- Console logs: `console.debug` for development, structured logging for production
+
+**Rationale**: Consistent code quality reduces cognitive load, prevents bugs, and maintains professional standards expected in the Obsidian plugin ecosystem.
+
+## Plugin Architecture Standards
+
+### Provider Integration Pattern
+
+**All AI providers MUST:**
+1. Export an interface extending `BaseOptions` with provider-specific settings
+2. Implement `sendRequestFunc(settings): SendRequest` as an async generator
+3. Handle streaming responses via `yield` with token-by-token delivery
+4. Support `AbortController` for cancellation
+5. Handle multimodal content (text, images, documents) per provider capabilities
+6. Return capability emoji via `getCapabilityEmoji()`
+
+### Tag-Based Interaction Model
+
+**Preserve tag-driven UX:**
+- Tags define message roles: `#User :`, `#Assistant :`, `#System :`, `#NewChat`
+- Tag commands MUST be dynamically built from settings
+- Support both command palette and inline tag triggers
+- Respect conversation order: System → User ↔ Assistant
+- Blank lines separate messages; callouts are ignored
+
+### Settings Architecture
+
+**Settings MUST:**
+- Use `PluginSettings` interface for all configuration
+- Persist via `loadData()`/`saveData()` Obsidian APIs
+- Support JSON override parameters for advanced users
+- Validate provider credentials before use
+- Provide sensible defaults in `DEFAULT_SETTINGS`
+
+## Code Quality & Testing
+
+### Test Structure (GIVEN/WHEN/THEN)
+
+**Example test pattern:**
+```typescript
+// TEST: User message parsing with embedded images
+test('parseUserMessage handles image embeds', () => {
+  // GIVEN: A user message with embedded image reference
+  const messageText = "Analyze this image: ![[example.jpg]]"
+  const embeds = [{ link: 'example.jpg', position: {...} }]
+
+  // WHEN: Parsing the message
+  const result = parseMessage(messageText, embeds)
+
+  // THEN: Message includes embed metadata
+  expect(result.embeds).toHaveLength(1)
+  // AND: Embed has correct MIME type
+  expect(result.embeds[0].mimeType).toBe('image/jpeg')
+})
+```
+
+### Test Coverage Requirements
+
+**MUST test:**
+- Provider message formatting for each AI service
+- Tag parsing and role assignment
+- Multimodal content handling (images, PDFs)
+- Stream cancellation and error handling
+- Settings persistence and validation
+- Command registration and tag suggestions
+
+**MAY skip integration tests for:**
+- Third-party API calls (use mocks/stubs)
+- Obsidian internal APIs (assume correct)
+- UI rendering (manual verification acceptable)
+
+## TypeScript Standards
+
+### Module Structure
+
+**Follow existing patterns:**
+- ES6 module syntax: `import`/`export`
+- Target: ES6, Module: NodeNext
+- Inline source maps for debugging
+- External dependencies: `obsidian`, `electron`, `@codemirror/*`
+
+### Naming Conventions
+
+**Observed upstream patterns:**
+- `PascalCase`: Classes, interfaces, types (`TarsPlugin`, `ClaudeOptions`)
+- `camelCase`: Functions, variables, properties (`buildRunEnv`, `sendRequest`)
+- `UPPER_SNAKE_CASE`: Constants (`DEFAULT_SETTINGS`, `APP_FOLDER`)
+- Prefix interfaces with purpose, not `I` (`BaseOptions`, not `IOptions`)
+
+## AI Provider Integration
+
+### Required Capabilities
+
+**Each provider implementation MUST support:**
+- Streaming text generation (async generators)
+- Error handling with user-friendly notices
+- API key validation
+- Base URL configuration for custom endpoints
+- Model selection with parameter overrides
+
+**MAY support (provider-dependent):**
+- Multimodal input (images via `resolveEmbedAsBinary`)
+- Web search (Claude, Zhipu)
+- Extended thinking (Claude)
+- Document interpretation (PDF)
+
+### Error Handling
+
+**Provider errors MUST:**
+- Use Obsidian `Notice` for user-facing errors
+- Log full errors to console for debugging
+- Translate HTTP status codes (401, 404, 429) to helpful messages
+- Respect internationalization via `t()` helper
+
+## Governance
+
+### Amendment Process
+
+**Constitution changes require:**
+1. Clear documentation of motivation and impact
+2. Update sync impact report (HTML comment header)
+3. Version bump following semantic versioning:
+   - **MAJOR**: Breaking changes to principles or architecture
+   - **MINOR**: New principles or substantial expansions
+   - **PATCH**: Clarifications, typos, non-semantic fixes
+4. Propagate changes to all template files
+
+### Compliance Enforcement
+
+**All code contributions MUST:**
+- Pass TDD workflow validation (tests before code)
+- Pass linting (`npm run lint`)
+- Maintain TypeScript strict mode compliance
+- Include test coverage for new functionality
+- Document provider-specific behavior
+
+### Development Guidance
+
+**For runtime assistance:**
+- Use this constitution as primary reference
+- Consult upstream repository for architectural decisions
+- Reference Obsidian API docs for plugin APIs
+- Test against real Obsidian environment for UX validation
+
+### Conflict Resolution
+
+**Priority order when conflicts arise:**
+1. Obsidian plugin API requirements (non-negotiable)
+2. Test-Driven Development principle (non-negotiable)
+3. Upstream compatibility (strong preference)
+4. TypeScript type safety (strong preference)
+5. Code quality standards (enforced)
+
+**Version**: 1.0.0 | **Ratified**: 2025-10-01 | **Last Amended**: 2025-10-01
\ No newline at end of file
diff --git a/.specify/scripts/bash/check-prerequisites.sh b/.specify/scripts/bash/check-prerequisites.sh
new file mode 100755
index 00000000..f32b6245
--- /dev/null
+++ b/.specify/scripts/bash/check-prerequisites.sh
@@ -0,0 +1,166 @@
+#!/usr/bin/env bash
+
+# Consolidated prerequisite checking script
+#
+# This script provides unified prerequisite checking for Spec-Driven Development workflow.
+# It replaces the functionality previously spread across multiple scripts.
+#
+# Usage: ./check-prerequisites.sh [OPTIONS]
+#
+# OPTIONS:
+#   --json              Output in JSON format
+#   --require-tasks     Require tasks.md to exist (for implementation phase)
+#   --include-tasks     Include tasks.md in AVAILABLE_DOCS list
+#   --paths-only        Only output path variables (no validation)
+#   --help, -h          Show help message
+#
+# OUTPUTS:
+#   JSON mode: {"FEATURE_DIR":"...", "AVAILABLE_DOCS":["..."]}
+#   Text mode: FEATURE_DIR:... \n AVAILABLE_DOCS: \n ✓/✗ file.md
+#   Paths only: REPO_ROOT: ... \n BRANCH: ... \n FEATURE_DIR: ... etc.
+
+set -e
+
+# Parse command line arguments
+JSON_MODE=false
+REQUIRE_TASKS=false
+INCLUDE_TASKS=false
+PATHS_ONLY=false
+
+for arg in "$@"; do
+    case "$arg" in
+        --json)
+            JSON_MODE=true
+            ;;
+        --require-tasks)
+            REQUIRE_TASKS=true
+            ;;
+        --include-tasks)
+            INCLUDE_TASKS=true
+            ;;
+        --paths-only)
+            PATHS_ONLY=true
+            ;;
+        --help|-h)
+            cat << 'EOF'
+Usage: check-prerequisites.sh [OPTIONS]
+
+Consolidated prerequisite checking for Spec-Driven Development workflow.
+
+OPTIONS:
+  --json              Output in JSON format
+  --require-tasks     Require tasks.md to exist (for implementation phase)
+  --include-tasks     Include tasks.md in AVAILABLE_DOCS list
+  --paths-only        Only output path variables (no prerequisite validation)
+  --help, -h          Show this help message
+
+EXAMPLES:
+  # Check task prerequisites (plan.md required)
+  ./check-prerequisites.sh --json
+
+  # Check implementation prerequisites (plan.md + tasks.md required)
+  ./check-prerequisites.sh --json --require-tasks --include-tasks
+
+  # Get feature paths only (no validation)
+  ./check-prerequisites.sh --paths-only
+
+EOF
+            exit 0
+            ;;
+        *)
+            echo "ERROR: Unknown option '$arg'. Use --help for usage information." >&2
+            exit 1
+            ;;
+    esac
+done
+
+# Source common functions
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+source "$SCRIPT_DIR/common.sh"
+
+# Get feature paths and validate branch
+eval $(get_feature_paths)
+check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1
+
+# If paths-only mode, output paths and exit (support JSON + paths-only combined)
+if $PATHS_ONLY; then
+    if $JSON_MODE; then
+        # Minimal JSON paths payload (no validation performed)
+        printf '{"REPO_ROOT":"%s","BRANCH":"%s","FEATURE_DIR":"%s","FEATURE_SPEC":"%s","IMPL_PLAN":"%s","TASKS":"%s"}\n' \
+            "$REPO_ROOT" "$CURRENT_BRANCH" "$FEATURE_DIR" "$FEATURE_SPEC" "$IMPL_PLAN" "$TASKS"
+    else
+        echo "REPO_ROOT: $REPO_ROOT"
+        echo "BRANCH: $CURRENT_BRANCH"
+        echo "FEATURE_DIR: $FEATURE_DIR"
+        echo "FEATURE_SPEC: $FEATURE_SPEC"
+        echo "IMPL_PLAN: $IMPL_PLAN"
+        echo "TASKS: $TASKS"
+    fi
+    exit 0
+fi
+
+# Validate required directories and files
+if [[ ! -d "$FEATURE_DIR" ]]; then
+    echo "ERROR: Feature directory not found: $FEATURE_DIR" >&2
+    echo "Run /specify first to create the feature structure." >&2
+    exit 1
+fi
+
+if [[ ! -f "$IMPL_PLAN" ]]; then
+    echo "ERROR: plan.md not found in $FEATURE_DIR" >&2
+    echo "Run /plan first to create the implementation plan." >&2
+    exit 1
+fi
+
+# Check for tasks.md if required
+if $REQUIRE_TASKS && [[ ! -f "$TASKS" ]]; then
+    echo "ERROR: tasks.md not found in $FEATURE_DIR" >&2
+    echo "Run /tasks first to create the task list." >&2
+    exit 1
+fi
+
+# Build list of available documents
+docs=()
+
+# Always check these optional docs
+[[ -f "$RESEARCH" ]] && docs+=("research.md")
+[[ -f "$DATA_MODEL" ]] && docs+=("data-model.md")
+
+# Check contracts directory (only if it exists and has files)
+if [[ -d "$CONTRACTS_DIR" ]] && [[ -n "$(ls -A "$CONTRACTS_DIR" 2>/dev/null)" ]]; then
+    docs+=("contracts/")
+fi
+
+[[ -f "$QUICKSTART" ]] && docs+=("quickstart.md")
+
+# Include tasks.md if requested and it exists
+if $INCLUDE_TASKS && [[ -f "$TASKS" ]]; then
+    docs+=("tasks.md")
+fi
+
+# Output results
+if $JSON_MODE; then
+    # Build JSON array of documents
+    if [[ ${#docs[@]} -eq 0 ]]; then
+        json_docs="[]"
+    else
+        json_docs=$(printf '"%s",' "${docs[@]}")
+        json_docs="[${json_docs%,}]"
+    fi
+
+    printf '{"FEATURE_DIR":"%s","AVAILABLE_DOCS":%s}\n' "$FEATURE_DIR" "$json_docs"
+else
+    # Text output
+    echo "FEATURE_DIR:$FEATURE_DIR"
+    echo "AVAILABLE_DOCS:"
+
+    # Show status of each potential document
+    check_file "$RESEARCH" "research.md"
+    check_file "$DATA_MODEL" "data-model.md"
+    check_dir "$CONTRACTS_DIR" "contracts/"
+    check_file "$QUICKSTART" "quickstart.md"
+
+    if $INCLUDE_TASKS; then
+        check_file "$TASKS" "tasks.md"
+    fi
+fi
\ No newline at end of file
diff --git a/.specify/scripts/bash/common.sh b/.specify/scripts/bash/common.sh
new file mode 100755
index 00000000..34e5d4bb
--- /dev/null
+++ b/.specify/scripts/bash/common.sh
@@ -0,0 +1,113 @@
+#!/usr/bin/env bash
+# Common functions and variables for all scripts
+
+# Get repository root, with fallback for non-git repositories
+get_repo_root() {
+    if git rev-parse --show-toplevel >/dev/null 2>&1; then
+        git rev-parse --show-toplevel
+    else
+        # Fall back to script location for non-git repos
+        local script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+        (cd "$script_dir/../../.."
&& pwd) + fi +} + +# Get current branch, with fallback for non-git repositories +get_current_branch() { + # First check if SPECIFY_FEATURE environment variable is set + if [[ -n "${SPECIFY_FEATURE:-}" ]]; then + echo "$SPECIFY_FEATURE" + return + fi + + # Then check git if available + if git rev-parse --abbrev-ref HEAD >/dev/null 2>&1; then + git rev-parse --abbrev-ref HEAD + return + fi + + # For non-git repos, try to find the latest feature directory + local repo_root=$(get_repo_root) + local specs_dir="$repo_root/specs" + + if [[ -d "$specs_dir" ]]; then + local latest_feature="" + local highest=0 + + for dir in "$specs_dir"/*; do + if [[ -d "$dir" ]]; then + local dirname=$(basename "$dir") + if [[ "$dirname" =~ ^([0-9]{3})- ]]; then + local number=${BASH_REMATCH[1]} + number=$((10#$number)) + if [[ "$number" -gt "$highest" ]]; then + highest=$number + latest_feature=$dirname + fi + fi + fi + done + + if [[ -n "$latest_feature" ]]; then + echo "$latest_feature" + return + fi + fi + + echo "main" # Final fallback +} + +# Check if we have git available +has_git() { + git rev-parse --show-toplevel >/dev/null 2>&1 +} + +check_feature_branch() { + local branch="$1" + local has_git_repo="$2" + + # For non-git repos, we can't enforce branch naming but still provide output + if [[ "$has_git_repo" != "true" ]]; then + echo "[specify] Warning: Git repository not detected; skipped branch validation" >&2 + return 0 + fi + + if [[ ! "$branch" =~ ^[0-9]{3}- ]]; then + echo "ERROR: Not on a feature branch. 
Current branch: $branch" >&2
+        echo "Feature branches should be named like: 001-feature-name" >&2
+        return 1
+    fi
+
+    return 0
+}
+
+get_feature_dir() { echo "$1/specs/$2"; }
+
+get_feature_paths() {
+    local repo_root=$(get_repo_root)
+    local current_branch=$(get_current_branch)
+    local has_git_repo="false"
+
+    if has_git; then
+        has_git_repo="true"
+    fi
+
+    local feature_dir=$(get_feature_dir "$repo_root" "$current_branch")
+
+    cat <<EOF
+REPO_ROOT='$repo_root'
+CURRENT_BRANCH='$current_branch'
+HAS_GIT='$has_git_repo'
+FEATURE_DIR='$feature_dir'
+FEATURE_SPEC='$feature_dir/spec.md'
+IMPL_PLAN='$feature_dir/plan.md'
+TASKS='$feature_dir/tasks.md'
+RESEARCH='$feature_dir/research.md'
+DATA_MODEL='$feature_dir/data-model.md'
+QUICKSTART='$feature_dir/quickstart.md'
+CONTRACTS_DIR='$feature_dir/contracts'
+EOF
+}
+
+check_file() { [[ -f "$1" ]] && echo "  ✓ $2" || echo "  ✗ $2"; }
+check_dir() { [[ -d "$1" && -n $(ls -A "$1" 2>/dev/null) ]] && echo "  ✓ $2" || echo "  ✗ $2"; }
diff --git a/.specify/scripts/bash/create-new-feature.sh b/.specify/scripts/bash/create-new-feature.sh
new file mode 100755
index 00000000..5cb17fab
--- /dev/null
+++ b/.specify/scripts/bash/create-new-feature.sh
@@ -0,0 +1,97 @@
+#!/usr/bin/env bash
+
+set -e
+
+JSON_MODE=false
+ARGS=()
+for arg in "$@"; do
+    case "$arg" in
+        --json) JSON_MODE=true ;;
+        --help|-h) echo "Usage: $0 [--json] <feature_description>"; exit 0 ;;
+        *) ARGS+=("$arg") ;;
+    esac
+done
+
+FEATURE_DESCRIPTION="${ARGS[*]}"
+if [ -z "$FEATURE_DESCRIPTION" ]; then
+    echo "Usage: $0 [--json] <feature_description>" >&2
+    exit 1
+fi
+
+# Function to find the repository root by searching for existing project markers
+find_repo_root() {
+    local dir="$1"
+    while [ "$dir" != "/" ]; do
+        if [ -d "$dir/.git" ] || [ -d "$dir/.specify" ]; then
+            echo "$dir"
+            return 0
+        fi
+        dir="$(dirname "$dir")"
+    done
+    return 1
+}
+
+# Resolve the repository root. Prefer git information when available, but fall
+# back to searching for repository markers so the workflow still functions in
+# repositories that were initialised with --no-git.
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+
+if git rev-parse --show-toplevel >/dev/null 2>&1; then
+    REPO_ROOT=$(git rev-parse --show-toplevel)
+    HAS_GIT=true
+else
+    REPO_ROOT="$(find_repo_root "$SCRIPT_DIR")"
+    if [ -z "$REPO_ROOT" ]; then
+        echo "Error: Could not determine repository root. Please run this script from within the repository."
>&2 + exit 1 + fi + HAS_GIT=false +fi + +cd "$REPO_ROOT" + +SPECS_DIR="$REPO_ROOT/specs" +mkdir -p "$SPECS_DIR" + +HIGHEST=0 +if [ -d "$SPECS_DIR" ]; then + for dir in "$SPECS_DIR"/*; do + [ -d "$dir" ] || continue + dirname=$(basename "$dir") + number=$(echo "$dirname" | grep -o '^[0-9]\+' || echo "0") + number=$((10#$number)) + if [ "$number" -gt "$HIGHEST" ]; then HIGHEST=$number; fi + done +fi + +NEXT=$((HIGHEST + 1)) +FEATURE_NUM=$(printf "%03d" "$NEXT") + +BRANCH_NAME=$(echo "$FEATURE_DESCRIPTION" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/-\+/-/g' | sed 's/^-//' | sed 's/-$//') +WORDS=$(echo "$BRANCH_NAME" | tr '-' '\n' | grep -v '^$' | head -3 | tr '\n' '-' | sed 's/-$//') +BRANCH_NAME="${FEATURE_NUM}-${WORDS}" + +if [ "$HAS_GIT" = true ]; then + git checkout -b "$BRANCH_NAME" +else + >&2 echo "[specify] Warning: Git repository not detected; skipped branch creation for $BRANCH_NAME" +fi + +FEATURE_DIR="$SPECS_DIR/$BRANCH_NAME" +mkdir -p "$FEATURE_DIR" + +TEMPLATE="$REPO_ROOT/.specify/templates/spec-template.md" +SPEC_FILE="$FEATURE_DIR/spec.md" +if [ -f "$TEMPLATE" ]; then cp "$TEMPLATE" "$SPEC_FILE"; else touch "$SPEC_FILE"; fi + +# Set the SPECIFY_FEATURE environment variable for the current session +export SPECIFY_FEATURE="$BRANCH_NAME" + +if $JSON_MODE; then + printf '{"BRANCH_NAME":"%s","SPEC_FILE":"%s","FEATURE_NUM":"%s"}\n' "$BRANCH_NAME" "$SPEC_FILE" "$FEATURE_NUM" +else + echo "BRANCH_NAME: $BRANCH_NAME" + echo "SPEC_FILE: $SPEC_FILE" + echo "FEATURE_NUM: $FEATURE_NUM" + echo "SPECIFY_FEATURE environment variable set to: $BRANCH_NAME" +fi diff --git a/.specify/scripts/bash/setup-plan.sh b/.specify/scripts/bash/setup-plan.sh new file mode 100755 index 00000000..654ba50d --- /dev/null +++ b/.specify/scripts/bash/setup-plan.sh @@ -0,0 +1,60 @@ +#!/usr/bin/env bash + +set -e + +# Parse command line arguments +JSON_MODE=false +ARGS=() + +for arg in "$@"; do + case "$arg" in + --json) + JSON_MODE=true + ;; + --help|-h) + echo "Usage: 
$0 [--json]" + echo " --json Output results in JSON format" + echo " --help Show this help message" + exit 0 + ;; + *) + ARGS+=("$arg") + ;; + esac +done + +# Get script directory and load common functions +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +source "$SCRIPT_DIR/common.sh" + +# Get all paths and variables from common functions +eval $(get_feature_paths) + +# Check if we're on a proper feature branch (only for git repos) +check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1 + +# Ensure the feature directory exists +mkdir -p "$FEATURE_DIR" + +# Copy plan template if it exists +TEMPLATE="$REPO_ROOT/.specify/templates/plan-template.md" +if [[ -f "$TEMPLATE" ]]; then + cp "$TEMPLATE" "$IMPL_PLAN" + echo "Copied plan template to $IMPL_PLAN" +else + echo "Warning: Plan template not found at $TEMPLATE" + # Create a basic plan file if template doesn't exist + touch "$IMPL_PLAN" +fi + +# Output results +if $JSON_MODE; then + printf '{"FEATURE_SPEC":"%s","IMPL_PLAN":"%s","SPECS_DIR":"%s","BRANCH":"%s","HAS_GIT":"%s"}\n' \ + "$FEATURE_SPEC" "$IMPL_PLAN" "$FEATURE_DIR" "$CURRENT_BRANCH" "$HAS_GIT" +else + echo "FEATURE_SPEC: $FEATURE_SPEC" + echo "IMPL_PLAN: $IMPL_PLAN" + echo "SPECS_DIR: $FEATURE_DIR" + echo "BRANCH: $CURRENT_BRANCH" + echo "HAS_GIT: $HAS_GIT" +fi diff --git a/.specify/scripts/bash/update-agent-context.sh b/.specify/scripts/bash/update-agent-context.sh new file mode 100755 index 00000000..d3cc422e --- /dev/null +++ b/.specify/scripts/bash/update-agent-context.sh @@ -0,0 +1,719 @@ +#!/usr/bin/env bash + +# Update agent context files with information from plan.md +# +# This script maintains AI agent context files by parsing feature specifications +# and updating agent-specific configuration files with project information. +# +# MAIN FUNCTIONS: +# 1. 
Environment Validation +# - Verifies git repository structure and branch information +# - Checks for required plan.md files and templates +# - Validates file permissions and accessibility +# +# 2. Plan Data Extraction +# - Parses plan.md files to extract project metadata +# - Identifies language/version, frameworks, databases, and project types +# - Handles missing or incomplete specification data gracefully +# +# 3. Agent File Management +# - Creates new agent context files from templates when needed +# - Updates existing agent files with new project information +# - Preserves manual additions and custom configurations +# - Supports multiple AI agent formats and directory structures +# +# 4. Content Generation +# - Generates language-specific build/test commands +# - Creates appropriate project directory structures +# - Updates technology stacks and recent changes sections +# - Maintains consistent formatting and timestamps +# +# 5. Multi-Agent Support +# - Handles agent-specific file paths and naming conventions +# - Supports: Claude, Gemini, Copilot, Cursor, Qwen, opencode, Codex, Windsurf +# - Can update single agents or all existing agent files +# - Creates default Claude file if no agent files exist +# +# Usage: ./update-agent-context.sh [agent_type] +# Agent types: claude|gemini|copilot|cursor|qwen|opencode|codex|windsurf +# Leave empty to update all existing agent files + +set -e + +# Enable strict error handling +set -u +set -o pipefail + +#============================================================================== +# Configuration and Global Variables +#============================================================================== + +# Get script directory and load common functions +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +source "$SCRIPT_DIR/common.sh" + +# Get all paths and variables from common functions +eval $(get_feature_paths) + +NEW_PLAN="$IMPL_PLAN" # Alias for compatibility with existing code +AGENT_TYPE="${1:-}" + +# 
Agent-specific file paths
+CLAUDE_FILE="$REPO_ROOT/CLAUDE.md"
+GEMINI_FILE="$REPO_ROOT/GEMINI.md"
+COPILOT_FILE="$REPO_ROOT/.github/copilot-instructions.md"
+CURSOR_FILE="$REPO_ROOT/.cursor/rules/specify-rules.mdc"
+QWEN_FILE="$REPO_ROOT/QWEN.md"
+AGENTS_FILE="$REPO_ROOT/AGENTS.md"
+WINDSURF_FILE="$REPO_ROOT/.windsurf/rules/specify-rules.md"
+KILOCODE_FILE="$REPO_ROOT/.kilocode/rules/specify-rules.md"
+AUGGIE_FILE="$REPO_ROOT/.augment/rules/specify-rules.md"
+ROO_FILE="$REPO_ROOT/.roo/rules/specify-rules.md"
+
+# Template file
+TEMPLATE_FILE="$REPO_ROOT/.specify/templates/agent-file-template.md"
+
+# Global variables for parsed plan data
+NEW_LANG=""
+NEW_FRAMEWORK=""
+NEW_DB=""
+NEW_PROJECT_TYPE=""
+
+#==============================================================================
+# Utility Functions
+#==============================================================================
+
+log_info() {
+    echo "INFO: $1"
+}
+
+log_success() {
+    echo "✓ $1"
+}
+
+log_error() {
+    echo "ERROR: $1" >&2
+}
+
+log_warning() {
+    echo "WARNING: $1" >&2
+}
+
+# Cleanup function for temporary files
+cleanup() {
+    local exit_code=$?
+    rm -f /tmp/agent_update_*_$$
+    rm -f /tmp/manual_additions_$$
+    exit $exit_code
+}
+
+# Set up cleanup trap
+trap cleanup EXIT INT TERM
+
+#==============================================================================
+# Validation Functions
+#==============================================================================
+
+validate_environment() {
+    # Check if we have a current branch/feature (git or non-git)
+    if [[ -z "$CURRENT_BRANCH" ]]; then
+        log_error "Unable to determine current feature"
+        if [[ "$HAS_GIT" == "true" ]]; then
+            log_info "Make sure you're on a feature branch"
+        else
+            log_info "Set SPECIFY_FEATURE environment variable or create a feature first"
+        fi
+        exit 1
+    fi
+
+    # Check if plan.md exists
+    if [[ !
-f "$NEW_PLAN" ]]; then + log_error "No plan.md found at $NEW_PLAN" + log_info "Make sure you're working on a feature with a corresponding spec directory" + if [[ "$HAS_GIT" != "true" ]]; then + log_info "Use: export SPECIFY_FEATURE=your-feature-name or create a new feature first" + fi + exit 1 + fi + + # Check if template exists (needed for new files) + if [[ ! -f "$TEMPLATE_FILE" ]]; then + log_warning "Template file not found at $TEMPLATE_FILE" + log_warning "Creating new agent files will fail" + fi +} + +#============================================================================== +# Plan Parsing Functions +#============================================================================== + +extract_plan_field() { + local field_pattern="$1" + local plan_file="$2" + + grep "^\*\*${field_pattern}\*\*: " "$plan_file" 2>/dev/null | \ + head -1 | \ + sed "s|^\*\*${field_pattern}\*\*: ||" | \ + sed 's/^[ \t]*//;s/[ \t]*$//' | \ + grep -v "NEEDS CLARIFICATION" | \ + grep -v "^N/A$" || echo "" +} + +parse_plan_data() { + local plan_file="$1" + + if [[ ! -f "$plan_file" ]]; then + log_error "Plan file not found: $plan_file" + return 1 + fi + + if [[ ! 
-r "$plan_file" ]]; then + log_error "Plan file is not readable: $plan_file" + return 1 + fi + + log_info "Parsing plan data from $plan_file" + + NEW_LANG=$(extract_plan_field "Language/Version" "$plan_file") + NEW_FRAMEWORK=$(extract_plan_field "Primary Dependencies" "$plan_file") + NEW_DB=$(extract_plan_field "Storage" "$plan_file") + NEW_PROJECT_TYPE=$(extract_plan_field "Project Type" "$plan_file") + + # Log what we found + if [[ -n "$NEW_LANG" ]]; then + log_info "Found language: $NEW_LANG" + else + log_warning "No language information found in plan" + fi + + if [[ -n "$NEW_FRAMEWORK" ]]; then + log_info "Found framework: $NEW_FRAMEWORK" + fi + + if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]]; then + log_info "Found database: $NEW_DB" + fi + + if [[ -n "$NEW_PROJECT_TYPE" ]]; then + log_info "Found project type: $NEW_PROJECT_TYPE" + fi +} + +format_technology_stack() { + local lang="$1" + local framework="$2" + local parts=() + + # Add non-empty parts + [[ -n "$lang" && "$lang" != "NEEDS CLARIFICATION" ]] && parts+=("$lang") + [[ -n "$framework" && "$framework" != "NEEDS CLARIFICATION" && "$framework" != "N/A" ]] && parts+=("$framework") + + # Join with proper formatting + if [[ ${#parts[@]} -eq 0 ]]; then + echo "" + elif [[ ${#parts[@]} -eq 1 ]]; then + echo "${parts[0]}" + else + # Join multiple parts with " + " + local result="${parts[0]}" + for ((i=1; i<${#parts[@]}; i++)); do + result="$result + ${parts[i]}" + done + echo "$result" + fi +} + +#============================================================================== +# Template and Content Generation Functions +#============================================================================== + +get_project_structure() { + local project_type="$1" + + if [[ "$project_type" == *"web"* ]]; then + echo "backend/\\nfrontend/\\ntests/" + else + echo "src/\\ntests/" + fi +} + +get_commands_for_language() { + local lang="$1" + + case "$lang" in + *"Python"*) + echo "cd src && pytest && ruff check ." 
+ ;; + *"Rust"*) + echo "cargo test && cargo clippy" + ;; + *"JavaScript"*|*"TypeScript"*) + echo "npm test && npm run lint" + ;; + *) + echo "# Add commands for $lang" + ;; + esac +} + +get_language_conventions() { + local lang="$1" + echo "$lang: Follow standard conventions" +} + +create_new_agent_file() { + local target_file="$1" + local temp_file="$2" + local project_name="$3" + local current_date="$4" + + if [[ ! -f "$TEMPLATE_FILE" ]]; then + log_error "Template not found at $TEMPLATE_FILE" + return 1 + fi + + if [[ ! -r "$TEMPLATE_FILE" ]]; then + log_error "Template file is not readable: $TEMPLATE_FILE" + return 1 + fi + + log_info "Creating new agent context file from template..." + + if ! cp "$TEMPLATE_FILE" "$temp_file"; then + log_error "Failed to copy template file" + return 1 + fi + + # Replace template placeholders + local project_structure + project_structure=$(get_project_structure "$NEW_PROJECT_TYPE") + + local commands + commands=$(get_commands_for_language "$NEW_LANG") + + local language_conventions + language_conventions=$(get_language_conventions "$NEW_LANG") + + # Perform substitutions with error checking using safer approach + # Escape special characters for sed by using a different delimiter or escaping + local escaped_lang=$(printf '%s\n' "$NEW_LANG" | sed 's/[\[\.*^$()+{}|]/\\&/g') + local escaped_framework=$(printf '%s\n' "$NEW_FRAMEWORK" | sed 's/[\[\.*^$()+{}|]/\\&/g') + local escaped_branch=$(printf '%s\n' "$CURRENT_BRANCH" | sed 's/[\[\.*^$()+{}|]/\\&/g') + + # Build technology stack and recent change strings conditionally + local tech_stack + if [[ -n "$escaped_lang" && -n "$escaped_framework" ]]; then + tech_stack="- $escaped_lang + $escaped_framework ($escaped_branch)" + elif [[ -n "$escaped_lang" ]]; then + tech_stack="- $escaped_lang ($escaped_branch)" + elif [[ -n "$escaped_framework" ]]; then + tech_stack="- $escaped_framework ($escaped_branch)" + else + tech_stack="- ($escaped_branch)" + fi + + local recent_change + if [[ -n 
"$escaped_lang" && -n "$escaped_framework" ]]; then + recent_change="- $escaped_branch: Added $escaped_lang + $escaped_framework" + elif [[ -n "$escaped_lang" ]]; then + recent_change="- $escaped_branch: Added $escaped_lang" + elif [[ -n "$escaped_framework" ]]; then + recent_change="- $escaped_branch: Added $escaped_framework" + else + recent_change="- $escaped_branch: Added" + fi + + local substitutions=( + "s|\[PROJECT NAME\]|$project_name|" + "s|\[DATE\]|$current_date|" + "s|\[EXTRACTED FROM ALL PLAN.MD FILES\]|$tech_stack|" + "s|\[ACTUAL STRUCTURE FROM PLANS\]|$project_structure|g" + "s|\[ONLY COMMANDS FOR ACTIVE TECHNOLOGIES\]|$commands|" + "s|\[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE\]|$language_conventions|" + "s|\[LAST 3 FEATURES AND WHAT THEY ADDED\]|$recent_change|" + ) + + for substitution in "${substitutions[@]}"; do + if ! sed -i.bak -e "$substitution" "$temp_file"; then + log_error "Failed to perform substitution: $substitution" + rm -f "$temp_file" "$temp_file.bak" + return 1 + fi + done + + # Convert \n sequences to actual newlines + newline=$(printf '\n') + sed -i.bak2 "s/\\\\n/${newline}/g" "$temp_file" + + # Clean up backup files + rm -f "$temp_file.bak" "$temp_file.bak2" + + return 0 +} + + + + +update_existing_agent_file() { + local target_file="$1" + local current_date="$2" + + log_info "Updating existing agent context file..." + + # Use a single temporary file for atomic update + local temp_file + temp_file=$(mktemp) || { + log_error "Failed to create temporary file" + return 1 + } + + # Process the file in one pass + local tech_stack=$(format_technology_stack "$NEW_LANG" "$NEW_FRAMEWORK") + local new_tech_entries=() + local new_change_entry="" + + # Prepare new technology entries + if [[ -n "$tech_stack" ]] && ! grep -q "$tech_stack" "$target_file"; then + new_tech_entries+=("- $tech_stack ($CURRENT_BRANCH)") + fi + + if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]] && [[ "$NEW_DB" != "NEEDS CLARIFICATION" ]] && ! 
grep -q "$NEW_DB" "$target_file"; then + new_tech_entries+=("- $NEW_DB ($CURRENT_BRANCH)") + fi + + # Prepare new change entry + if [[ -n "$tech_stack" ]]; then + new_change_entry="- $CURRENT_BRANCH: Added $tech_stack" + elif [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]] && [[ "$NEW_DB" != "NEEDS CLARIFICATION" ]]; then + new_change_entry="- $CURRENT_BRANCH: Added $NEW_DB" + fi + + # Process file line by line + local in_tech_section=false + local in_changes_section=false + local tech_entries_added=false + local changes_entries_added=false + local existing_changes_count=0 + + while IFS= read -r line || [[ -n "$line" ]]; do + # Handle Active Technologies section + if [[ "$line" == "## Active Technologies" ]]; then + echo "$line" >> "$temp_file" + in_tech_section=true + continue + elif [[ $in_tech_section == true ]] && [[ "$line" =~ ^##[[:space:]] ]]; then + # Add new tech entries before closing the section + if [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then + printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file" + tech_entries_added=true + fi + echo "$line" >> "$temp_file" + in_tech_section=false + continue + elif [[ $in_tech_section == true ]] && [[ -z "$line" ]]; then + # Add new tech entries before empty line in tech section + if [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then + printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file" + tech_entries_added=true + fi + echo "$line" >> "$temp_file" + continue + fi + + # Handle Recent Changes section + if [[ "$line" == "## Recent Changes" ]]; then + echo "$line" >> "$temp_file" + # Add new change entry right after the heading + if [[ -n "$new_change_entry" ]]; then + echo "$new_change_entry" >> "$temp_file" + fi + in_changes_section=true + changes_entries_added=true + continue + elif [[ $in_changes_section == true ]] && [[ "$line" =~ ^##[[:space:]] ]]; then + echo "$line" >> "$temp_file" + in_changes_section=false + continue + elif [[ $in_changes_section 
== true ]] && [[ "$line" == "- "* ]]; then + # Keep only first 2 existing changes + if [[ $existing_changes_count -lt 2 ]]; then + echo "$line" >> "$temp_file" + ((existing_changes_count++)) + fi + continue + fi + + # Update timestamp + if [[ "$line" =~ \*\*Last\ updated\*\*:.*[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9] ]]; then + echo "$line" | sed "s/[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]/$current_date/" >> "$temp_file" + else + echo "$line" >> "$temp_file" + fi + done < "$target_file" + + # Post-loop check: if we're still in the Active Technologies section and haven't added new entries + if [[ $in_tech_section == true ]] && [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then + printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file" + fi + + # Move temp file to target atomically + if ! mv "$temp_file" "$target_file"; then + log_error "Failed to update target file" + rm -f "$temp_file" + return 1 + fi + + return 0 +} +#============================================================================== +# Main Agent File Update Function +#============================================================================== + +update_agent_file() { + local target_file="$1" + local agent_name="$2" + + if [[ -z "$target_file" ]] || [[ -z "$agent_name" ]]; then + log_error "update_agent_file requires target_file and agent_name parameters" + return 1 + fi + + log_info "Updating $agent_name context file: $target_file" + + local project_name + project_name=$(basename "$REPO_ROOT") + local current_date + current_date=$(date +%Y-%m-%d) + + # Create directory if it doesn't exist + local target_dir + target_dir=$(dirname "$target_file") + if [[ ! -d "$target_dir" ]]; then + if ! mkdir -p "$target_dir"; then + log_error "Failed to create directory: $target_dir" + return 1 + fi + fi + + if [[ ! 
-f "$target_file" ]]; then + # Create new file from template + local temp_file + temp_file=$(mktemp) || { + log_error "Failed to create temporary file" + return 1 + } + + if create_new_agent_file "$target_file" "$temp_file" "$project_name" "$current_date"; then + if mv "$temp_file" "$target_file"; then + log_success "Created new $agent_name context file" + else + log_error "Failed to move temporary file to $target_file" + rm -f "$temp_file" + return 1 + fi + else + log_error "Failed to create new agent file" + rm -f "$temp_file" + return 1 + fi + else + # Update existing file + if [[ ! -r "$target_file" ]]; then + log_error "Cannot read existing file: $target_file" + return 1 + fi + + if [[ ! -w "$target_file" ]]; then + log_error "Cannot write to existing file: $target_file" + return 1 + fi + + if update_existing_agent_file "$target_file" "$current_date"; then + log_success "Updated existing $agent_name context file" + else + log_error "Failed to update existing agent file" + return 1 + fi + fi + + return 0 +} + +#============================================================================== +# Agent Selection and Processing +#============================================================================== + +update_specific_agent() { + local agent_type="$1" + + case "$agent_type" in + claude) + update_agent_file "$CLAUDE_FILE" "Claude Code" + ;; + gemini) + update_agent_file "$GEMINI_FILE" "Gemini CLI" + ;; + copilot) + update_agent_file "$COPILOT_FILE" "GitHub Copilot" + ;; + cursor) + update_agent_file "$CURSOR_FILE" "Cursor IDE" + ;; + qwen) + update_agent_file "$QWEN_FILE" "Qwen Code" + ;; + opencode) + update_agent_file "$AGENTS_FILE" "opencode" + ;; + codex) + update_agent_file "$AGENTS_FILE" "Codex CLI" + ;; + windsurf) + update_agent_file "$WINDSURF_FILE" "Windsurf" + ;; + kilocode) + update_agent_file "$KILOCODE_FILE" "Kilo Code" + ;; + auggie) + update_agent_file "$AUGGIE_FILE" "Auggie CLI" + ;; + roo) + update_agent_file "$ROO_FILE" "Roo Code" + ;; + *) 
+ log_error "Unknown agent type '$agent_type'" + log_error "Expected: claude|gemini|copilot|cursor|qwen|opencode|codex|windsurf|kilocode|auggie|roo" + exit 1 + ;; + esac +} + +update_all_existing_agents() { + local found_agent=false + + # Check each possible agent file and update if it exists + if [[ -f "$CLAUDE_FILE" ]]; then + update_agent_file "$CLAUDE_FILE" "Claude Code" + found_agent=true + fi + + if [[ -f "$GEMINI_FILE" ]]; then + update_agent_file "$GEMINI_FILE" "Gemini CLI" + found_agent=true + fi + + if [[ -f "$COPILOT_FILE" ]]; then + update_agent_file "$COPILOT_FILE" "GitHub Copilot" + found_agent=true + fi + + if [[ -f "$CURSOR_FILE" ]]; then + update_agent_file "$CURSOR_FILE" "Cursor IDE" + found_agent=true + fi + + if [[ -f "$QWEN_FILE" ]]; then + update_agent_file "$QWEN_FILE" "Qwen Code" + found_agent=true + fi + + if [[ -f "$AGENTS_FILE" ]]; then + update_agent_file "$AGENTS_FILE" "Codex/opencode" + found_agent=true + fi + + if [[ -f "$WINDSURF_FILE" ]]; then + update_agent_file "$WINDSURF_FILE" "Windsurf" + found_agent=true + fi + + if [[ -f "$KILOCODE_FILE" ]]; then + update_agent_file "$KILOCODE_FILE" "Kilo Code" + found_agent=true + fi + + if [[ -f "$AUGGIE_FILE" ]]; then + update_agent_file "$AUGGIE_FILE" "Auggie CLI" + found_agent=true + fi + + if [[ -f "$ROO_FILE" ]]; then + update_agent_file "$ROO_FILE" "Roo Code" + found_agent=true + fi + + # If no agent files exist, create a default Claude file + if [[ "$found_agent" == false ]]; then + log_info "No existing agent files found, creating default Claude file..." 
+ update_agent_file "$CLAUDE_FILE" "Claude Code" + fi +} +print_summary() { + echo + log_info "Summary of changes:" + + if [[ -n "$NEW_LANG" ]]; then + echo " - Added language: $NEW_LANG" + fi + + if [[ -n "$NEW_FRAMEWORK" ]]; then + echo " - Added framework: $NEW_FRAMEWORK" + fi + + if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]]; then + echo " - Added database: $NEW_DB" + fi + + echo + log_info "Usage: $0 [claude|gemini|copilot|cursor|qwen|opencode|codex|windsurf|kilocode|auggie|roo]" +} + +#============================================================================== +# Main Execution +#============================================================================== + +main() { + # Validate environment before proceeding + validate_environment + + log_info "=== Updating agent context files for feature $CURRENT_BRANCH ===" + + # Parse the plan file to extract project information + if ! parse_plan_data "$NEW_PLAN"; then + log_error "Failed to parse plan data" + exit 1 + fi + + # Process based on agent type argument + local success=true + + if [[ -z "$AGENT_TYPE" ]]; then + # No specific agent provided - update all existing agent files + log_info "No agent specified, updating all existing agent files..." + if ! update_all_existing_agents; then + success=false + fi + else + # Specific agent provided - update only that agent + log_info "Updating specific agent: $AGENT_TYPE" + if ! 
update_specific_agent "$AGENT_TYPE"; then
+            success=false
+        fi
+    fi
+
+    # Print summary
+    print_summary
+
+    if [[ "$success" == true ]]; then
+        log_success "Agent context update completed successfully"
+        exit 0
+    else
+        log_error "Agent context update completed with errors"
+        exit 1
+    fi
+}
+
+# Execute main function if script is run directly
+if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
+    main "$@"
+fi
diff --git a/.specify/templates/agent-file-template.md b/.specify/templates/agent-file-template.md
new file mode 100644
index 00000000..2301e0ea
--- /dev/null
+++ b/.specify/templates/agent-file-template.md
@@ -0,0 +1,23 @@
+# [PROJECT NAME] Development Guidelines
+
+Auto-generated from all feature plans. Last updated: [DATE]
+
+## Active Technologies
+[EXTRACTED FROM ALL PLAN.MD FILES]
+
+## Project Structure
+```
+[ACTUAL STRUCTURE FROM PLANS]
+```
+
+## Commands
+[ONLY COMMANDS FOR ACTIVE TECHNOLOGIES]
+
+## Code Style
+[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE]
+
+## Recent Changes
+[LAST 3 FEATURES AND WHAT THEY ADDED]
+
+<!-- MANUAL ADDITIONS START -->
+<!-- MANUAL ADDITIONS END -->
\ No newline at end of file
diff --git a/.specify/templates/plan-template.md b/.specify/templates/plan-template.md
new file mode 100644
index 00000000..c9c6fdfb
--- /dev/null
+++ b/.specify/templates/plan-template.md
@@ -0,0 +1,244 @@
+
+# Implementation Plan: [FEATURE]
+
+**Branch**: `[###-feature-name]` | **Date**: [DATE] | **Spec**: [link]
+**Input**: Feature specification from `/specs/[###-feature-name]/spec.md`
+
+## Execution Flow (/plan command scope)
+```
+1. Load feature spec from Input path
+   → If not found: ERROR "No feature spec at {path}"
+2. Fill Technical Context (scan for NEEDS CLARIFICATION)
+   → Detect Project Type from file system structure or context (web=frontend+backend, mobile=app+api)
+   → Set Structure Decision based on project type
+3. Fill the Constitution Check section based on the content of the constitution document.
+4. Evaluate Constitution Check section below
+   → If violations exist: Document in Complexity Tracking
+   → If no justification possible: ERROR "Simplify approach first"
+   → Update Progress Tracking: Initial Constitution Check
+5. Execute Phase 0 → research.md
+   → If NEEDS CLARIFICATION remain: ERROR "Resolve unknowns"
+6. Execute Phase 1 → contracts, data-model.md, quickstart.md, agent-specific template file (e.g., `CLAUDE.md` for Claude Code, `.github/copilot-instructions.md` for GitHub Copilot, `GEMINI.md` for Gemini CLI, `QWEN.md` for Qwen Code or `AGENTS.md` for opencode).
+7. Re-evaluate Constitution Check section
+   → If new violations: Refactor design, return to Phase 1
+   → Update Progress Tracking: Post-Design Constitution Check
+8. Plan Phase 2 → Describe task generation approach (DO NOT create tasks.md)
+9. STOP - Ready for /tasks command
+```
+
+**IMPORTANT**: The /plan command STOPS at step 7. Phases 2-4 are executed by other commands:
+- Phase 2: /tasks command creates tasks.md
+- Phase 3-4: Implementation execution (manual or via tools)
+
+## Summary
+[Extract from feature spec: primary requirement + technical approach from research]
+
+## Technical Context
+**Language/Version**: [e.g., Python 3.11, Swift 5.9, Rust 1.75 or NEEDS CLARIFICATION]
+**Primary Dependencies**: [e.g., FastAPI, UIKit, LLVM or NEEDS CLARIFICATION]
+**Storage**: [if applicable, e.g., PostgreSQL, CoreData, files or N/A]
+**Testing**: [e.g., pytest, XCTest, cargo test or NEEDS CLARIFICATION]
+**Target Platform**: [e.g., Linux server, iOS 15+, WASM or NEEDS CLARIFICATION]
+**Project Type**: [single/web/mobile - determines source structure]
+**Performance Goals**: [domain-specific, e.g., 1000 req/s, 10k lines/sec, 60 fps or NEEDS CLARIFICATION]
+**Constraints**: [domain-specific, e.g., <200ms p95, <100MB memory, offline-capable or NEEDS CLARIFICATION]
+**Scale/Scope**: [domain-specific, e.g., 10k users, 1M LOC, 50 screens or NEEDS CLARIFICATION]
+
+## Constitution Check
+*GATE: Must pass before Phase 0 research. Re-check after Phase 1 design.*
+
+### Test-Driven Development (NON-NEGOTIABLE)
+- [ ] Tests will be written BEFORE implementation
+- [ ] GIVEN/WHEN/THEN pattern will be used with clear comments
+- [ ] All tests will explain business purpose and scenario
+
+### Upstream Compatibility
+- [ ] Changes preserve existing plugin architecture (commands, providers, settings, suggest)
+- [ ] TypeScript config and ESLint rules are maintained
+- [ ] Deviations from upstream are documented with rationale
+
+### Plugin Architecture Modularity
+- [ ] Provider implementations isolated in `src/providers/`
+- [ ] Commands separated in `src/commands/`
+- [ ] Standard interfaces implemented: `SendRequest`, `BaseOptions`, `Vendor`
+- [ ] Tag-based interaction model preserved
+
+### TypeScript Type Safety
+- [ ] `noImplicitAny: true` and `strictNullChecks: true` enforced
+- [ ] All public interfaces explicitly typed
+- [ ] Provider options extend `BaseOptions`
+- [ ] No `any` types without justification
+
+### Code Quality & Linting
+- [ ] Code passes `npm run lint` without warnings
+- [ ] No unused variables (except `_` prefixed)
+- [ ] No `@ts-ignore` without explicit justification
+
+## Project Structure
+
+### Documentation (this feature)
+```
+specs/[###-feature]/
+├── plan.md              # This file (/plan command output)
+├── research.md          # Phase 0 output (/plan command)
+├── data-model.md        # Phase 1 output (/plan command)
+├── quickstart.md        # Phase 1 output (/plan command)
+├── contracts/           # Phase 1 output (/plan command)
+└── tasks.md             # Phase 2 output (/tasks command - NOT created by /plan)
+```
+
+### Source Code (repository root)
+
+```
+# [REMOVE IF UNUSED] Option 1: Single project (DEFAULT)
+src/
+├── models/
+├── services/
+├── cli/
+└── lib/
+
+tests/
+├── contract/
+├── integration/
+└── unit/
+
+# [REMOVE IF UNUSED] Option 2: Web application (when "frontend" + "backend" detected)
+backend/
+├── src/
+│   ├── models/
+│   ├── services/
+│   └── api/
+└── tests/
+
+frontend/
+├── src/
+│   ├── components/
+│   ├── pages/
+│   └── services/
+└── tests/
+
+# [REMOVE IF UNUSED] Option 3: Mobile + API (when "iOS/Android" detected)
+api/
+└── [same as backend above]
+
+ios/ or android/
+└── [platform-specific structure: feature modules, UI flows, platform tests]
+```
+
+**Structure Decision**: [Document the selected structure and reference the real
+directories captured above]
+
+## Phase 0: Outline & Research
+1. **Extract unknowns from Technical Context** above:
+   - For each NEEDS CLARIFICATION → research task
+   - For each dependency → best practices task
+   - For each integration → patterns task
+
+2. **Generate and dispatch research agents**:
+   ```
+   For each unknown in Technical Context:
+     Task: "Research {unknown} for {feature context}"
+   For each technology choice:
+     Task: "Find best practices for {tech} in {domain}"
+   ```
+
+3. **Consolidate findings** in `research.md` using format:
+   - Decision: [what was chosen]
+   - Rationale: [why chosen]
+   - Alternatives considered: [what else evaluated]
+
+**Output**: research.md with all NEEDS CLARIFICATION resolved
+
+## Phase 1: Design & Contracts
+*Prerequisites: research.md complete*
+
+1. **Extract entities from feature spec** → `data-model.md`:
+   - Entity name, fields, relationships
+   - Validation rules from requirements
+   - State transitions if applicable
+
+2. **Generate API contracts** from functional requirements:
+   - For each user action → endpoint
+   - Use standard REST/GraphQL patterns
+   - Output OpenAPI/GraphQL schema to `/contracts/`
+
+3. **Generate contract tests** from contracts:
+   - One test file per endpoint
+   - Assert request/response schemas
+   - Tests must fail (no implementation yet)
+
+4. **Extract test scenarios** from user stories:
+   - Each story → integration test scenario
+   - Quickstart test = story validation steps
+
+5. **Update agent file incrementally** (O(1) operation):
+   - Run `.specify/scripts/bash/update-agent-context.sh windsurf`
+     **IMPORTANT**: Execute it exactly as specified above. Do not add or remove any arguments.
+   - If exists: Add only NEW tech from current plan
+   - Preserve manual additions between markers
+   - Update recent changes (keep last 3)
+   - Keep under 150 lines for token efficiency
+   - Output to repository root
+
+**Output**: data-model.md, /contracts/*, failing tests, quickstart.md, agent-specific file
+
+## Phase 2: Task Planning Approach
+*This section describes what the /tasks command will do - DO NOT execute during /plan*
+
+**Task Generation Strategy**:
+- Load `.specify/templates/tasks-template.md` as base
+- Generate tasks from Phase 1 design docs (contracts, data model, quickstart)
+- Each contract → contract test task [P]
+- Each entity → model creation task [P]
+- Each user story → integration test task
+- Implementation tasks to make tests pass
+
+**Ordering Strategy**:
+- TDD order: Tests before implementation
+- Dependency order: Models before services before UI
+- Mark [P] for parallel execution (independent files)
+
+**Estimated Output**: 25-30 numbered, ordered tasks in tasks.md
+
+**IMPORTANT**: This phase is executed by the /tasks command, NOT by /plan
+
+## Phase 3+: Future Implementation
+*These phases are beyond the scope of the /plan command*
+
+**Phase 3**: Task execution (/tasks command creates tasks.md)
+**Phase 4**: Implementation (execute tasks.md following constitutional principles)
+**Phase 5**: Validation (run tests, execute quickstart.md, performance validation)
+
+## Complexity Tracking
+*Fill ONLY if Constitution Check has violations that must be justified*
+
+| Violation | Why Needed | Simpler Alternative Rejected Because |
+|-----------|------------|-------------------------------------|
+| [e.g., 4th project] | [current need] | [why 3 projects insufficient] |
+| [e.g., Repository pattern] | [specific problem] | [why direct DB access insufficient] |
+
+
+## Progress Tracking
+*This checklist is updated during execution flow*
+
+**Phase Status**:
+- [ ] Phase 0: Research complete (/plan command)
+- [ ] Phase 1: Design complete (/plan command)
+- [ ] Phase 2: Task planning complete (/plan command - describe approach only)
+- [ ] Phase 3: Tasks generated (/tasks command)
+- [ ] Phase 4: Implementation complete
+- [ ] Phase 5: Validation passed
+
+**Gate Status**:
+- [ ] Initial Constitution Check: PASS
+- [ ] Post-Design Constitution Check: PASS
+- [ ] All NEEDS CLARIFICATION resolved
+- [ ] Complexity deviations documented
+
+---
+*Based on Tars Plugin Constitution v1.0.0 - See `.specify/memory/constitution.md`*
diff --git a/.specify/templates/spec-template.md b/.specify/templates/spec-template.md
new file mode 100644
index 00000000..3c4152f3
--- /dev/null
+++ b/.specify/templates/spec-template.md
@@ -0,0 +1,122 @@
+# Feature Specification: [FEATURE NAME]
+
+**Feature Branch**: `[###-feature-name]`
+**Created**: [DATE]
+**Status**: Draft
+**Input**: User description: "$ARGUMENTS"
+
+## Execution Flow (main)
+```
+1. Parse user description from Input
+   → If empty: ERROR "No feature description provided"
+2. Extract key concepts from description
+   → Identify: actors, actions, data, constraints
+3. For each unclear aspect:
+   → Mark with [NEEDS CLARIFICATION: specific question]
+4. Fill User Scenarios & Testing section
+   → If no clear user flow: ERROR "Cannot determine user scenarios"
+5. Generate Functional Requirements
+   → Each requirement must be testable
+   → Mark ambiguous requirements
+6. Identify Key Entities (if data involved)
+7. Run Review Checklist
+   → If any [NEEDS CLARIFICATION]: WARN "Spec has uncertainties"
+   → If implementation details found: ERROR "Remove tech details"
+8. Return: SUCCESS (spec ready for planning)
+```
+
+---
+
+## ⚡ Quick Guidelines
+- ✅ Focus on WHAT users need and WHY
+- ❌ Avoid HOW to implement (no tech stack, APIs, code structure)
+- 👥 Written for business stakeholders, not developers
+
+### Section Requirements
+- **Mandatory sections**: Must be completed for every feature
+- **Optional sections**: Include only when relevant to the feature
+- When a section doesn't apply, remove it entirely (don't leave as "N/A")
+
+### For AI Generation
+When creating this spec from a user prompt:
+1. **Mark all ambiguities**: Use [NEEDS CLARIFICATION: specific question] for any assumption you'd need to make
+2. **Don't guess**: If the prompt doesn't specify something (e.g., "login system" without auth method), mark it
+3. **Think like a tester**: Every vague requirement should fail the "testable and unambiguous" checklist item
+4. **Common underspecified areas**:
+   - User types and permissions
+   - Data retention/deletion policies
+   - Performance targets and scale
+   - Error handling behaviors
+   - Integration requirements
+   - Security/compliance needs
+
+---
+
+## User Scenarios & Testing *(mandatory)*
+
+### Primary User Story
+[Describe the main user journey in plain language]
+
+### Acceptance Scenarios
+1. **Given** [initial state], **When** [action], **Then** [expected outcome]
+2. **Given** [initial state], **When** [action], **Then** [expected outcome]
+
+### Edge Cases
+- What happens when [boundary condition]?
+- How does system handle [error scenario]?
+
+## Requirements *(mandatory)*
+
+### Functional Requirements
+- **FR-001**: System MUST [specific capability, e.g., "allow users to create accounts"]
+- **FR-002**: System MUST [specific capability, e.g., "validate email addresses"]
+- **FR-003**: Users MUST be able to [key interaction, e.g., "reset their password"]
+- **FR-004**: System MUST [data requirement, e.g., "persist user preferences"]
+- **FR-005**: System MUST [behavior, e.g., "log all security events"]
+
+*Example of marking unclear requirements:*
+- **FR-006**: System MUST authenticate users via [NEEDS CLARIFICATION: auth method not specified - email/password, SSO, OAuth?]
+- **FR-007**: System MUST retain user data for [NEEDS CLARIFICATION: retention period not specified]
+
+### Key Entities *(include if feature involves data)*
+- **[Entity 1]**: [What it represents, key attributes without implementation]
+- **[Entity 2]**: [What it represents, relationships to other entities]
+
+---
+
+## Review & Acceptance Checklist
+*GATE: Automated checks run during main() execution*
+
+### Content Quality
+- [ ] No implementation details (languages, frameworks, APIs)
+- [ ] Focused on user value and business needs
+- [ ] Written for non-technical stakeholders
+- [ ] All mandatory sections completed
+
+### Requirement Completeness
+- [ ] No [NEEDS CLARIFICATION] markers remain
+- [ ] Requirements are testable and unambiguous
+- [ ] Success criteria are measurable
+- [ ] Scope is clearly bounded
+- [ ] Dependencies and assumptions identified
+
+### Tars Plugin Specific
+- [ ] Obsidian plugin compatibility considered (if applicable)
+- [ ] Tag-based interaction model implications addressed
+- [ ] Multimodal content requirements specified (text, images, PDFs)
+- [ ] Provider-specific behavior documented (if AI integration)
+
+---
+
+## Execution Status
+*Updated by main() during processing*
+
+- [ ] User description parsed
+- [ ] Key concepts extracted
+- [ ] Ambiguities marked
+- [ ] User scenarios defined
+- [ ] Requirements generated
+- [ ] Entities identified
+- [ ] Review checklist passed
+
+---
diff --git a/.specify/templates/tasks-template.md b/.specify/templates/tasks-template.md
new file mode 100644
index 00000000..525a912a
--- /dev/null
+++ b/.specify/templates/tasks-template.md
@@ -0,0 +1,136 @@
+# Tasks: [FEATURE NAME]
+
+**Input**: Design documents from `/specs/[###-feature-name]/`
+**Prerequisites**: plan.md (required), research.md, data-model.md, contracts/
+
+## Execution Flow (main)
+```
+1. Load plan.md from feature directory
+   → If not found: ERROR "No implementation plan found"
+   → Extract: tech stack, libraries, structure
+2. Load optional design documents:
+   → data-model.md: Extract entities → model tasks
+   → contracts/: Each file → contract test task
+   → research.md: Extract decisions → setup tasks
+3. Generate tasks by category:
+   → Setup: project init, dependencies, linting
+   → Tests: contract tests, integration tests
+   → Core: models, services, CLI commands
+   → Integration: DB, middleware, logging
+   → Polish: unit tests, performance, docs
+4. Apply task rules:
+   → Different files = mark [P] for parallel
+   → Same file = sequential (no [P])
+   → Tests before implementation (TDD)
+5. Number tasks sequentially (T001, T002...)
+6. Generate dependency graph
+7. Create parallel execution examples
+8. Validate task completeness:
+   → All contracts have tests?
+   → All entities have models?
+   → All endpoints implemented?
+9. Return: SUCCESS (tasks ready for execution)
+```
+
+## Format: `[ID] [P?] Description`
+- **[P]**: Can run in parallel (different files, no dependencies)
+- Include exact file paths in descriptions
+
+## Path Conventions
+- **Single project**: `src/`, `tests/` at repository root
+- **Web app**: `backend/src/`, `frontend/src/`
+- **Mobile**: `api/src/`, `ios/src/` or `android/src/`
+- Paths shown below assume single project - adjust based on plan.md structure
+
+## Phase 3.1: Setup
+- [ ] T001 Create project structure per implementation plan
+- [ ] T002 Initialize TypeScript project with Obsidian plugin dependencies
+- [ ] T003 [P] Configure ESLint and Prettier per upstream standards
+- [ ] T004 [P] Verify tsconfig.json: noImplicitAny=true, strictNullChecks=true
+
+## Phase 3.2: Tests First (TDD) ⚠️ MUST COMPLETE BEFORE 3.3
+**CRITICAL: These tests MUST be written and MUST FAIL before ANY implementation**
+**GIVEN/WHEN/THEN format with clear comments required**
+- [ ] T005 [P] Provider message format test in tests/providers/test_provider_format.test.ts
+- [ ] T006 [P] Tag parsing test in tests/editor/test_tag_parsing.test.ts
+- [ ] T007 [P] Stream handling test in tests/providers/test_stream.test.ts
+- [ ] T008 [P] Settings validation test in tests/settings/test_validation.test.ts
+
+## Phase 3.3: Core Implementation (ONLY after tests are failing)
+- [ ] T009 [P] Provider interface implementation in src/providers/[provider].ts
+- [ ] T010 [P] Message formatter extending BaseOptions in src/providers/[provider].ts
+- [ ] T011 [P] Command registration in src/commands/[command].ts
+- [ ] T012 Streaming response handler (async generator)
+- [ ] T013 Settings persistence via loadData/saveData
+- [ ] T014 Type-safe options interface
+- [ ] T015 Error handling with Notice and i18n
+
+## Phase 3.4: Integration
+- [ ] T016 Register provider in src/providers/index.ts
+- [ ] T017 Integrate with tag suggestion system
+- [ ] T018 Connect to status bar manager
+- [ ] T019 Wire AbortController for cancellation
+
+## Phase 3.5: Polish
+- [ ] T020 [P] Unit tests for edge cases in tests/unit/
+- [ ] T021 Verify npm run lint passes (zero warnings)
+- [ ] T022 [P] Update README.md with provider documentation
+- [ ] T023 Remove code duplication
+- [ ] T024 Manual test in Obsidian vault with real API
+- [ ] T025 Verify upstream compatibility (no breaking changes)
+
+## Dependencies
+- Setup (T001-T004) before tests
+- Tests (T005-T008) before implementation (T009-T015)
+- T009 blocks T012 (interface before async generator)
+- T013 blocks T017 (settings before tag integration)
+- Implementation before polish (T020-T025)
+
+## Parallel Example
+```
+# Launch T005-T008 together (different test files):
+Task: "Provider message format test - tests/providers/test_provider_format.test.ts"
+Task: "Tag parsing test - tests/editor/test_tag_parsing.test.ts"
+Task: "Stream handling test - tests/providers/test_stream.test.ts"
+Task: "Settings validation test - tests/settings/test_validation.test.ts"
+
+# Each test MUST use GIVEN/WHEN/THEN pattern with comments
+```
+
+## Notes
+- [P] tasks = different files, no dependencies
+- Verify tests fail before implementing (Red-Green-Refactor)
+- All tests use GIVEN/WHEN/THEN with clear business purpose comments
+- Run `npm run lint` before each commit
+- Commit after each task
+- Maintain TypeScript strict mode compliance
+- Avoid: vague tasks, same file conflicts, `any` types
+
+## Task Generation Rules
+*Applied during main() execution*
+
+1. **From Contracts**:
+   - Each contract file → contract test task [P]
+   - Each endpoint → implementation task
+
+2. **From Data Model**:
+   - Each entity → model creation task [P]
+   - Relationships → service layer tasks
+
+3. **From User Stories**:
+   - Each story → integration test [P]
+   - Quickstart scenarios → validation tasks
+
+4. **Ordering**:
+   - Setup → Tests → Models → Services → Endpoints → Polish
+   - Dependencies block parallel execution
+
+## Validation Checklist
+*GATE: Checked by main() before returning*
+
+- [ ] All contracts have corresponding tests
+- [ ] All entities have model tasks
+- [ ] All tests come before implementation
+- [ ] Parallel tasks truly independent
+- [ ] Each task specifies exact file path
+- [ ] No task modifies same file as another [P] task
\ No newline at end of file
diff --git a/.windsurf/workflows/analyze.md b/.windsurf/workflows/analyze.md
new file mode 100644
index 00000000..f4c1a7bd
--- /dev/null
+++ b/.windsurf/workflows/analyze.md
@@ -0,0 +1,101 @@
+---
+description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation.
+---
+
+The user input to you can be provided directly by the agent or as a command argument - you **MUST** consider it before proceeding with the prompt (if not empty).
+
+User input:
+
+$ARGUMENTS
+
+Goal: Identify inconsistencies, duplications, ambiguities, and underspecified items across the three core artifacts (`spec.md`, `plan.md`, `tasks.md`) before implementation. This command MUST run only after `/tasks` has successfully produced a complete `tasks.md`.
+
+STRICTLY READ-ONLY: Do **not** modify any files. Output a structured analysis report. Offer an optional remediation plan (user must explicitly approve before any follow-up editing commands would be invoked manually).
+
+Constitution Authority: The project constitution (`.specify/memory/constitution.md`) is **non-negotiable** within this analysis scope. Constitution conflicts are automatically CRITICAL and require adjustment of the spec, plan, or tasks, not dilution, reinterpretation, or silent ignoring of the principle. If a principle itself needs to change, that must occur in a separate, explicit constitution update outside `/analyze`.
+
+Execution steps:
+
+1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` once from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS. Derive absolute paths:
+   - SPEC = FEATURE_DIR/spec.md
+   - PLAN = FEATURE_DIR/plan.md
+   - TASKS = FEATURE_DIR/tasks.md
+   Abort with an error message if any required file is missing (instruct the user to run the missing prerequisite command).
+
+2. Load artifacts:
+   - Parse spec.md sections: Overview/Context, Functional Requirements, Non-Functional Requirements, User Stories, Edge Cases (if present).
+   - Parse plan.md: Architecture/stack choices, Data Model references, Phases, Technical constraints.
+   - Parse tasks.md: Task IDs, descriptions, phase grouping, parallel markers [P], referenced file paths.
+   - Load constitution `.specify/memory/constitution.md` for principle validation.
+
+3. Build internal semantic models:
+   - Requirements inventory: Each functional + non-functional requirement with a stable key (derive slug based on imperative phrase; e.g., "User can upload file" -> `user-can-upload-file`).
+   - User story/action inventory.
+   - Task coverage mapping: Map each task to one or more requirements or stories (inference by keyword / explicit reference patterns like IDs or key phrases).
+   - Constitution rule set: Extract principle names and any MUST/SHOULD normative statements.
+
+4. Detection passes:
+   A. Duplication detection:
+      - Identify near-duplicate requirements. Mark lower-quality phrasing for consolidation.
+   B. Ambiguity detection:
+      - Flag vague adjectives (fast, scalable, secure, intuitive, robust) lacking measurable criteria.
+      - Flag unresolved placeholders (TODO, TKTK, ???, etc.).
+   C. Underspecification:
+      - Requirements with verbs but missing object or measurable outcome.
+      - User stories missing acceptance criteria alignment.
+      - Tasks referencing files or components not defined in spec/plan.
+   D. Constitution alignment:
+      - Any requirement or plan element conflicting with a MUST principle.
+      - Missing mandated sections or quality gates from constitution.
+   E. Coverage gaps:
+      - Requirements with zero associated tasks.
+      - Tasks with no mapped requirement/story.
+      - Non-functional requirements not reflected in tasks (e.g., performance, security).
+   F. Inconsistency:
+      - Terminology drift (same concept named differently across files).
+      - Data entities referenced in plan but absent in spec (or vice versa).
+      - Task ordering contradictions (e.g., integration tasks before foundational setup tasks without a dependency note).
+      - Conflicting requirements (e.g., one requires Next.js while another says to use Vue as the framework).
+
+5. Severity assignment heuristic:
+   - CRITICAL: Violates a constitution MUST, missing core spec artifact, or a requirement with zero coverage that blocks baseline functionality.
+   - HIGH: Duplicate or conflicting requirement, ambiguous security/performance attribute, untestable acceptance criterion.
+   - MEDIUM: Terminology drift, missing non-functional task coverage, underspecified edge case.
+   - LOW: Style/wording improvements, minor redundancy not affecting execution order.
+
+6. Produce a Markdown report (no file writes) with sections:
+
+   ### Specification Analysis Report
+   | ID | Category | Severity | Location(s) | Summary | Recommendation |
+   |----|----------|----------|-------------|---------|----------------|
+   | A1 | Duplication | HIGH | spec.md:L120-134 | Two similar requirements ... | Merge phrasing; keep clearer version |
+   (Add one row per finding; generate stable IDs prefixed by category initial.)
+
+   Additional subsections:
+   - Coverage Summary Table:
+     | Requirement Key | Has Task? | Task IDs | Notes |
+   - Constitution Alignment Issues (if any)
+   - Unmapped Tasks (if any)
+   - Metrics:
+     * Total Requirements
+     * Total Tasks
+     * Coverage % (requirements with >=1 task)
+     * Ambiguity Count
+     * Duplication Count
+     * Critical Issues Count
+
+7. At the end of the report, output a concise Next Actions block:
+   - If CRITICAL issues exist: Recommend resolving before `/implement`.
+   - If only LOW/MEDIUM: User may proceed, but provide improvement suggestions.
+   - Provide explicit command suggestions: e.g., "Run /specify with refinement", "Run /plan to adjust architecture", "Manually edit tasks.md to add coverage for 'performance-metrics'".
+
+8. Ask the user: "Would you like me to suggest concrete remediation edits for the top N issues?" (Do NOT apply them automatically.)
+
+Behavior rules:
+- NEVER modify files.
+- NEVER hallucinate missing sections; if absent, report them.
+- KEEP findings deterministic: if rerun without changes, produce consistent IDs and counts.
+- LIMIT total findings in the main table to 50; aggregate the remainder in a summarized overflow note.
+- If zero issues are found, emit a success report with coverage statistics and a proceed recommendation.
+
+Context: $ARGUMENTS
diff --git a/.windsurf/workflows/clarify.md b/.windsurf/workflows/clarify.md
new file mode 100644
index 00000000..26ff530b
--- /dev/null
+++ b/.windsurf/workflows/clarify.md
@@ -0,0 +1,158 @@
+---
+description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec.
+---
+
+The user input to you can be provided directly by the agent or as a command argument - you **MUST** consider it before proceeding with the prompt (if not empty).
+
+User input:
+
+$ARGUMENTS
+
+Goal: Detect and reduce ambiguity or missing decision points in the active feature specification and record the clarifications directly in the spec file.
+
+Note: This clarification workflow is expected to run (and be completed) BEFORE invoking `/plan`. If the user explicitly states they are skipping clarification (e.g., exploratory spike), you may proceed, but must warn that downstream rework risk increases.
+
+Execution steps:
+
+1. Run `.specify/scripts/bash/check-prerequisites.sh --json --paths-only` from repo root **once** (combined `--json --paths-only` mode / `-Json -PathsOnly`). Parse minimal JSON payload fields:
+   - `FEATURE_DIR`
+   - `FEATURE_SPEC`
+   - (Optionally capture `IMPL_PLAN`, `TASKS` for future chained flows.)
+   - If JSON parsing fails, abort and instruct user to re-run `/specify` or verify feature branch environment.
+
+2. Load the current spec file. Perform a structured ambiguity & coverage scan using this taxonomy. For each category, mark status: Clear / Partial / Missing. Produce an internal coverage map used for prioritization (do not output raw map unless no questions will be asked).
+
+   Functional Scope & Behavior:
+   - Core user goals & success criteria
+   - Explicit out-of-scope declarations
+   - User roles / personas differentiation
+
+   Domain & Data Model:
+   - Entities, attributes, relationships
+   - Identity & uniqueness rules
+   - Lifecycle/state transitions
+   - Data volume / scale assumptions
+
+   Interaction & UX Flow:
+   - Critical user journeys / sequences
+   - Error/empty/loading states
+   - Accessibility or localization notes
+
+   Non-Functional Quality Attributes:
+   - Performance (latency, throughput targets)
+   - Scalability (horizontal/vertical, limits)
+   - Reliability & availability (uptime, recovery expectations)
+   - Observability (logging, metrics, tracing signals)
+   - Security & privacy (authN/Z, data protection, threat assumptions)
+   - Compliance / regulatory constraints (if any)
+
+   Integration & External Dependencies:
+   - External services/APIs and failure modes
+   - Data import/export formats
+   - Protocol/versioning assumptions
+
+   Edge Cases & Failure Handling:
+   - Negative scenarios
+   - Rate limiting / throttling
+   - Conflict resolution (e.g., concurrent edits)
+
+   Constraints & Tradeoffs:
+   - Technical constraints (language, storage, hosting)
+   - Explicit tradeoffs or rejected alternatives
+
+   Terminology & Consistency:
+   - Canonical glossary terms
+   - Avoided synonyms / deprecated terms
+
+   Completion Signals:
+   - Acceptance criteria testability
+   - Measurable Definition of Done style indicators
+
+   Misc / Placeholders:
+   - TODO markers / unresolved decisions
+   - Ambiguous adjectives ("robust", "intuitive") lacking quantification
+
+   For each category with Partial or Missing status, add a candidate question opportunity unless:
+   - Clarification would not materially change implementation or validation strategy
+   - Information is better deferred to planning phase (note internally)
+
+3. Generate (internally) a prioritized queue of candidate clarification questions (maximum 5). Do NOT output them all at once. Apply these constraints:
+   - Maximum of 5 total questions across the whole session.
+   - Each question must be answerable with EITHER:
+     * A short multiple-choice selection (2-5 distinct, mutually exclusive options), OR
+     * A one-word / short-phrase answer (explicitly constrain: "Answer in <=5 words").
+   - Only include questions whose answers materially impact architecture, data modeling, task decomposition, test design, UX behavior, operational readiness, or compliance validation.
+   - Ensure category coverage balance: attempt to cover the highest impact unresolved categories first; avoid asking two low-impact questions when a single high-impact area (e.g., security posture) is unresolved.
+   - Exclude questions already answered, trivial stylistic preferences, or plan-level execution details (unless blocking correctness).
+   - Favor clarifications that reduce downstream rework risk or prevent misaligned acceptance tests.
+   - If more than 5 categories remain unresolved, select the top 5 by (Impact * Uncertainty) heuristic.
+
+4. Sequential questioning loop (interactive):
+   - Present EXACTLY ONE question at a time.
+   - For multiple-choice questions render options as a Markdown table:
+
+     | Option | Description |
+     |--------|-------------|
+     | A |