Feature Radar helps your AI coding agent discover, track, and prioritize what to build next.
Whether it's creative ideation, ecosystem scanning, user feedback, or cross-project research, it captures ideas from any source, evaluates them objectively, and maintains a living knowledge base that compounds over time.
Works with any AI agent that supports SKILL.md.
It starts the moment you say "feature radar." Your agent analyzes your project (language, architecture, key feature areas) and builds a structured tracking system at .feature-radar/.
From there, every feature goes through a lifecycle: discovered as an opportunity, evaluated against real demand and strategic fit, built, and archived with mandatory learning extraction. Archiving is not the end; it's a checkpoint. Every shipped feature produces learnings, reveals new gaps, and opens new directions. The archive checklist enforces this so institutional knowledge compounds instead of evaporating.
The skills trigger automatically: just say "what should we build next" or "this feature is done" and the right workflow kicks in.
With skillshare:

```bash
skillshare install runkids/feature-radar --into feature-radar
```

With Vercel Skills CLI:

```bash
npx skills add runkids/feature-radar
```

Or copy the skills to your agent's skill directory:

```bash
# Claude Code
cp -r skills/* ~/.claude/skills/

# Codex
cp -r skills/* ~/.codex/skills/
```

Pick individual skills if you don't need all of them:

```bash
cp -r skills/feature-radar ~/.claude/skills/
cp -r skills/feature-radar-archive ~/.claude/skills/
```

| Skill | Trigger | Output |
|---|---|---|
| feature-radar | "feature radar", "what should we build next" | Full 6-phase workflow → all directories + base.md |
| feature-radar:scan | "scan opportunities", "brainstorm ideas" | New entries → opportunities/ |
| feature-radar:archive | "archive feature", "this feature is done" | Move to archive/ + extraction checklist |
| feature-radar:learn | "extract learnings", "capture what we learned" | Patterns → specs/ |
| feature-radar:ref | "add reference", "interesting approach" | Observations → references/ |
**feature-radar** runs the full workflow. It analyzes your project, creates .feature-radar/ with base.md (the project dashboard), then runs 6 phases: scan, archive, organize, gap analysis, evaluate, propose. It ends by recommending what to build next.
**feature-radar:scan** discovers new ideas from creative brainstorming, user pain points, ecosystem evolution, technical possibilities, or cross-project research. It deduplicates candidates against existing tracking and evaluates each one on 6 criteria, including value uplift and innovation potential.
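As a rough illustration, a scanned opportunity might land in opportunities/ as a short markdown entry like the one below. The fields and ratings are hypothetical; only value uplift and innovation potential are criteria named above, the rest are assumed.

```markdown
# Opportunity: bulk-export

**Source:** user feedback (3 independent asks)
**Status:** open

## Evaluation
- Value uplift: medium
- Innovation potential: low
- Real demand: strong (multiple independent requests)
```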
**feature-radar:archive** archives a shipped, rejected, or covered feature, then runs the mandatory extraction checklist: extract learnings → specs, derive new opportunities, update references, update trends. It does NOT skip steps.
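Conceptually, the checklist works through steps like these. This is a hedged sketch; the authoritative checklist lives in base.md.

```markdown
## Archive extraction: bulk-export (shipped)
- [x] Move the entry to archive/ with its outcome
- [x] Extract learnings → specs/ (patterns, decisions, pitfalls)
- [x] Derive new opportunities revealed by shipping it
- [x] Update references/ with sources consulted
- [x] Update trends
```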
**feature-radar:learn** captures reusable patterns, architectural decisions, and pitfalls from completed work. It names files by the pattern, not the feature that produced it.
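For example (hypothetical file names), a learning about optimistic UI updates extracted from a todo feature would be filed under the pattern's name:

```
specs/optimistic-ui-updates.md    # named by pattern: yes
specs/todo-feature-learnings.md   # named by feature: no
```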
**feature-radar:ref** records external observations and inspiration: ecosystem trends, creative approaches from other projects, research findings, user feedback. It cites source URLs and dates, assesses implications, and suggests new opportunities when it finds unmet needs or innovation angles.
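A reference entry might look something like this. The template is assumed rather than prescribed by the skill; the URL and date are placeholders.

```markdown
# Reference: inline AI review in Editor X

**Source:** https://example.com/changelog (2025-06-12)
**Observation:** Reviews run inline while editing instead of as a separate pass.
**Implication:** Our batch-review flow may start to feel slow by comparison.
**Suggested opportunity:** inline-review mode → opportunities/
```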
On first run, feature-radar creates:
```
.feature-radar/
├── base.md          # Project dashboard: context, feature inventory, strategic overview
├── archive/         # Shipped, rejected, or covered features
├── opportunities/   # Open features ranked by impact and effort
├── specs/           # Reusable patterns and architectural decisions
└── references/      # External inspiration, observations, and ecosystem analysis
```
base.md is the project dashboard, generated by analyzing your codebase and updated incrementally:
- Project Context: language, architecture, key feature areas, core philosophy
- Feature Inventory: what's built, where the code lives, docs coverage gaps
- Tracking Summary: counts across all categories
- Classification Rules: how features move between categories
- Archive Extraction Checklist: the mandatory checks that make knowledge compound
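Taken together, a freshly generated base.md might be skeletoned roughly like this. The section names follow the list above; the project details are invented for illustration.

```markdown
# Feature Radar: my-project

## Project Context
TypeScript · CLI tool · philosophy: local-first, zero config

## Feature Inventory
| Feature | Code | Docs |
|---|---|---|
| export | src/export/ | partial |

## Tracking Summary
opportunities: 4 · specs: 2 · references: 3 · archive: 7

## Classification Rules
...

## Archive Extraction Checklist
...
```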
The design principles behind Feature Radar:
- Compound knowledge: every completed feature feeds back into the system
- Value-driven: chase user value and innovation, not feature checklists
- Honest evaluation: evaluate fit with YOUR architecture and users, not someone else's roadmap
- Signal over noise: one issue with no comments is a weak signal; multiple independent asks are a strong one
- Evidence over assumptions: rank by real demand and creative potential, not hypothetical value
Skills live in the skills/ directory. To contribute:
- Fork the repository
- Create a branch for your skill
- Add your skill under `skills/{skill-name}/SKILL.md` (see the sketch after this list)
- Submit a PR
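If your agent follows the common SKILL.md convention of YAML frontmatter carrying a name and a trigger description, a new skill file might start like this. The field names and content are a sketch, not a required template.

```markdown
---
name: feature-radar-myskill
description: Use when the user says "my trigger phrase". Scans X and writes results to .feature-radar/.
---

# feature-radar:myskill

Step-by-step instructions the agent follows when triggered...
```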
MIT License. See the LICENSE file for details.