A structured framework for evaluating emerging technologies, tools, and practices through systematic pilots and experiments. KE Radar provides templates, workflows, and metrics to help teams make evidence-based technology adoption decisions.
KE Radar enables organizations to:
- Discover technology candidates from various sources (ThoughtWorks Radar, industry trends, team proposals)
- Evaluate them using structured pilots with clear success criteria
- Decide on adoption with documented evidence and metrics
- Track results through standardized templates and workflows
This framework is technology-agnostic - use it to evaluate any tool, practice, or platform regardless of source.
Instead of relying on secondhand recommendations, KE Radar helps you:
- Run hands-on pilots with your actual use cases
- Collect quantified metrics (not just opinions)
- Document decisions with rationale (Architecture Decision Records)
- Build institutional knowledge through experiment reports
```bash
git clone https://github.com/klever-engineering/ke-radar.git
cd ke-radar
```
Check ROADMAP.md for queued technology candidates and experiment priorities.
Follow the RUNBOOK.md workflow:
- Plan: Copy `templates/pilot_plan.md` → `pilots/<technology>/pilot_plan.md` (see the sketch after this list)
- Define: Set scope, hypothesis, and success criteria
- Build: Create proof-of-concept demo
- Evaluate: Measure results against rubric
- Decide: Document with ADR
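To make the Plan step concrete, a helper along these lines could copy the templates into place. This is a hypothetical sketch, not a script shipped with the repo; the `scaffold_pilot` name and the sub-folder list are assumptions based on the repository structure shown below.

```python
"""Hypothetical helper for the Plan step: scaffold a new pilot directory
from the standard templates. Names and layout follow this README."""
from pathlib import Path
import shutil

TEMPLATES = ["pilot_plan.md", "experiment_report.md", "decision_memo.md"]

def scaffold_pilot(technology: str, root: Path = Path(".")) -> Path:
    pilot_dir = root / "pilots" / technology
    # Create the sub-folders described in the repository structure.
    for sub in ("demo", "eval", "docs/adr"):
        (pilot_dir / sub).mkdir(parents=True, exist_ok=True)
    # Copy each template so the pilot starts from the standard formats.
    for name in TEMPLATES:
        shutil.copy(root / "templates" / name, pilot_dir / name)
    return pilot_dir

if __name__ == "__main__":
    print(scaffold_pilot("example-technology"))
```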
Located in templates/:
- `pilot_plan.md` - Experiment definition and planning
- `experiment_report.md` - Results documentation
- `decision_memo.md` - Executive summary
- `adr-0000-template.md` - Architecture Decision Record
```
ke-radar/
├── README.md                        # This file
├── ROADMAP.md                       # Prioritized evaluation queue
├── RUNBOOK.md                       # Experiment execution workflow
├── SOP-Technology-Surveillance.md   # Standard Operating Procedure
├── templates/                       # Reusable templates
│   ├── pilot_plan.md
│   ├── experiment_report.md
│   ├── decision_memo.md
│   └── adr-0000-template.md
├── pilots/                          # Technology evaluations
│   └── <technology-name>/
│       ├── pilot_plan.md
│       ├── experiment_report.md
│       ├── decision_memo.md
│       ├── demo/                    # Proof of concept code
│       ├── eval/                    # Evaluation scripts
│       └── docs/adr/                # Architecture Decision Records
├── scripts/                         # Automation helpers
├── metrics/                         # Scoring rubrics and definitions
├── documentation/                   # Guides and learnings
└── data/                            # Source materials (e.g., ThoughtWorks Radar PDFs)
```
See pilots/curated-shared-instructions/ for a complete reference implementation:
- Pilot Plan: Problem definition, hypothesis, success criteria
- Demo: Working proof-of-concept with instruction pack builder
- Evaluation: Metrics collection and scoring rubrics
- ADR: Final decision documentation with rationale
- Results: Quantified improvements (accuracy, consistency, traceability)
Each technology goes through five phases:
- Discovery - Identify candidate technology
- Planning - Define experiment scope and metrics
- Experimentation - Build demo, collect data
- Evaluation - Score against rubric, measure impact
- Decision - Document with ADR, recommend action (Adopt/Trial/Assess/Hold)
See metrics/metrics.md for scoring methodology.
We use a simplified adoption ladder:
- Adopt: Proven through pilot, ready for production
- Trial: Promising results, continue with broader pilot
- Assess: Needs more research or different use case
- Hold: Not recommended based on pilot results
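As a hypothetical illustration of how rubric scores might map onto this ladder (the authoritative methodology lives in metrics/metrics.md; the criteria, weights, and thresholds below are invented for the example):

```python
"""Illustrative rubric scorer: weights and cut-offs are assumptions,
not the actual values from metrics/metrics.md."""

# Illustrative criteria weights (sum to 1.0); each score is on a 0-10 scale.
WEIGHTS = {"accuracy": 0.4, "consistency": 0.3, "traceability": 0.3}

# Illustrative cut-offs mapping a weighted score to the adoption ladder.
LADDER = [(8.0, "Adopt"), (6.0, "Trial"), (4.0, "Assess"), (0.0, "Hold")]

def recommend(scores: dict[str, float]) -> str:
    weighted = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
    return next(label for cutoff, label in LADDER if weighted >= cutoff)

print(recommend({"accuracy": 8.5, "consistency": 7.0, "traceability": 9.0}))  # Adopt
```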
This framework is source-agnostic. Technology candidates can come from:
- ThoughtWorks Technology Radar (volumes 31-33 PDFs in `data/`)
- Industry publications and conferences
- Team proposals and pain points
- Vendor evaluations and RFPs
- Open source trends and GitHub activity
Parse ThoughtWorks Radar PDFs with `python scripts/extract_radar_blips.py`.
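The actual script may be implemented differently; the following minimal sketch only shows the general idea, assuming pypdf is installed, that blips can be grouped under the ring headings, and a hypothetical PDF filename in `data/`:

```python
"""Minimal sketch of radar blip extraction; scripts/extract_radar_blips.py
may differ. Assumes `pip install pypdf` and crude line-based parsing."""
from pypdf import PdfReader

RINGS = {"Adopt", "Trial", "Assess", "Hold"}

def extract_blips(pdf_path: str) -> dict[str, list[str]]:
    blips: dict[str, list[str]] = {ring: [] for ring in RINGS}
    current = None
    for page in PdfReader(pdf_path).pages:
        for line in (page.extract_text() or "").splitlines():
            line = line.strip()
            if line in RINGS:          # a ring heading starts a new group
                current = line
            elif current and line:     # heuristic: treat other lines as blips
                blips[current].append(line)
    return blips

if __name__ == "__main__":
    # Hypothetical filename; use whatever PDFs actually sit in data/.
    for ring, names in extract_blips("data/thoughtworks-radar-vol33.pdf").items():
        print(ring, len(names))
```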
We welcome contributions! Please see CONTRIBUTING.md for:
- How to propose new technology evaluations
- Pilot contribution guidelines
- Code of conduct
- Pull request process
The standard workflow is documented in SOP-Technology-Surveillance.md:
- Intake: Add technology to roadmap with prioritization
- Branch: Create a `radar-<technology>` branch
- Execute: Follow the RUNBOOK workflow
- Review: Present findings to stakeholders
- Decide: Create ADR with recommendation
- Merge: Only after approval (see branching conventions)
All pilot work lives on branches named radar-<technology>.
- Rebase regularly onto `main`
- Do not merge to `main` without stakeholder approval
- Each branch contains one complete pilot (plan → demo → eval → decision)
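As a small illustration of the naming convention (a hypothetical helper, not a script shipped with the repo; the `start_pilot_branch` name and the validation pattern are assumptions):

```python
"""Hypothetical helper enforcing the radar-<technology> branch convention."""
import re
import subprocess

def start_pilot_branch(technology: str) -> str:
    branch = f"radar-{technology}"
    # Validate the naming convention before touching git.
    if not re.fullmatch(r"radar-[a-z0-9][a-z0-9-]*", branch):
        raise ValueError(f"invalid branch name: {branch}")
    subprocess.run(["git", "checkout", "-b", branch], check=True)
    return branch

start_pilot_branch("example-technology")  # creates radar-example-technology
```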
KE Radar is designed for AI-assisted pilot execution. The structured templates, runbooks, and specifications enable coding agents (GitHub Copilot, Claude Code, Codex) to autonomously create and run technology evaluations.
Instead of manually creating pilots, use natural language prompts:
"Test the next technology from the radar roadmap"
"Prepare the next pilot according to ROADMAP.md"
"Run the pilot for <technology-name>"
"Create evaluation report for the current pilot"
"Generate decision memo based on pilot results"
"Scan the codebase and propose relevant technologies to evaluate"
Agents can analyze your project and propose technologies based on actual needs:
"Analyze the codebase and suggest technologies from the radar that would address current pain points"
"Review our architecture and propose relevant pilots from ROADMAP.md"
"Identify technical debt areas and recommend technologies to evaluate"
How it works:
- Agent scans project structure, dependencies, patterns, and pain points
- Agent correlates findings with technologies in `ROADMAP.md` and radar sources
- Agent prioritizes suggestions based on:
- Relevance: Technologies that address discovered issues
- Impact: Potential improvements to quality, velocity, or maintainability
- Feasibility: Compatibility with existing stack and team skills
- Agent generates prioritized queue with rationale linking each technology to specific codebase findings
Result: Pilots become context-aware and directly relevant to your project's actual needs, not generic recommendations.
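A rough sketch of the correlation step follows. It is purely illustrative: real agents reason over far richer context, and the keyword-to-technology map, scoring, and `requirements.txt` input below are invented for the example.

```python
"""Illustrative correlation of codebase signals with roadmap entries.
The hint map and the relevance heuristic are assumptions for this sketch."""
from pathlib import Path

# Hypothetical mapping from dependency keywords to roadmap technologies.
ROADMAP_HINTS = {
    "openai": "Structured LLM Output",
    "langchain": "LLM Context Management",
    "anthropic": "Model Context Protocol (MCP)",
}

def suggest_pilots(requirements_file: str = "requirements.txt") -> list[str]:
    deps = Path(requirements_file).read_text().lower()
    # Naive relevance: suggest a technology if its keyword appears in the deps.
    return [tech for keyword, tech in ROADMAP_HINTS.items() if keyword in deps]

print(suggest_pilots())
```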
- Agent reads context: Consults `RUNBOOK.md`, `AGENT_SPEC.md`, `ROADMAP.md`, and templates
- Agent creates structure: Creates a `radar-<technology>` branch with complete pilot scaffolding
- Agent executes pilot: Follows the five-phase workflow (Discovery → Planning → Experimentation → Evaluation → Decision)
- Agent generates artifacts: Produces pilot plans, experiment logs, ADRs, and evaluation reports using standard templates
- Human reviews and approves: You validate results before merging to `main`
Agents rely on these framework components:
- `RUNBOOK.md`: Step-by-step pilot execution procedures
- `AGENT_SPEC.md`: Agent persona, capabilities, constraints
- `templates/`: Standardized formats for all artifacts
- `ROADMAP.md`: Prioritized technology backlog
- `metrics/`: Quantified success criteria and scoring rubrics
Benefits: Rapid experimentation velocity while maintaining governance through templates, review gates, and documented decision criteria.
This project is licensed under the MIT License - see the LICENSE file for details.
- ThoughtWorks Technology Radar for inspiring structured technology evaluation
- All contributors who have shared experiment learnings
- Issues: GitHub Issues
- Discussions: GitHub Discussions
See ROADMAP.md for the evaluation queue:
- Model Context Protocol (MCP)
- Structured LLM Output
- Agentic Tool Use
- LLM Context Management
- And more...
Start evaluating technologies with evidence, not hype!