📦 Public Archive — November 2025
This project is being archived as a public reference implementation of a multi-agent website optimization platform. It demonstrates practical patterns for local-first agent orchestration, tri-swarm analysis (PMM + QA + AEO), and reproducible agent-driven QA/SEO workflows.
Note: The repository remains available for reference and reuse. Active development has stopped; see docs/ARCHIVE_SUMMARY.md and the archived docs in docs/legacy/ for details about moved content and rationale.
Status: Archived — read-only reference (no active maintenance)
Multi-Agent Grid Optimization Platform
Master Control is a comprehensive agent platform that deploys specialized programs to analyze, optimize, and enhance websites using AI-powered agents with memory, learning, and cross-agent collaboration.
"I fight for the users." - Tron
Master Control deploys intelligent programs (agents) that work together to optimize your grid (website):
- ⚡ PMM Agent - Product Marketing analysis (clarity, positioning, 7-second rule)
- ⚡ QA Agent - Quality assurance with 6 specialized sub-agents
- ⚡ Git Agent - Clean conventional commit generation
- ⚡ AEO Agent - SEO/AEO optimization via SwarmCoordinator
Stop manually analyzing websites. Let Master Control deploy the programs.
- ~90% code reuse for new agents via base class
- Memory integration with AgentDB
- Cross-learning between agents
- Type-safe TypeScript with generics
- Battle-tested on production sites
- AgentDB integration stores patterns and insights
- Vector embeddings for semantic search
- Success rate tracking for fixes
- Cross-agent knowledge sharing
- Continuous improvement with each analysis
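The semantic-search step above can be sketched as cosine similarity over stored embeddings. This is an illustrative stand-in, not AgentDB's actual API; the `StoredPattern` shape and `findSimilar` helper are hypothetical:

```typescript
// Hypothetical pattern record; AgentDB's real schema may differ.
interface StoredPattern {
  issue: string;
  embedding: number[];
}

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k stored patterns most similar to the query embedding.
function findSimilar(query: number[], patterns: StoredPattern[], k: number): StoredPattern[] {
  return patterns
    .map(p => ({ p, score: cosine(query, p.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map(x => x.p);
}
```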
- Parallel execution of multiple agents
- TriSwarm runs PMM + QA + AEO simultaneously
- Cross-learning validates insights across agents
- Auto-balancing for optimal performance
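Parallel execution can be sketched as fanning independent agents out with `Promise.all`. The agent names and result shape here are illustrative, not the platform's actual coordinator API:

```typescript
// Minimal sketch: run independent analyses concurrently and collect results.
interface AgentResult {
  agent: string;
  score: number;
}

// Stand-in for a real agent run; a real agent would crawl and analyze here.
async function runAgent(agent: string, url: string): Promise<AgentResult> {
  return { agent, score: 100 };
}

// PMM, QA, and AEO run simultaneously, not sequentially.
async function runTriSwarm(url: string): Promise<AgentResult[]> {
  return Promise.all(['pmm', 'qa', 'aeo'].map(a => runAgent(a, url)));
}
```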
```bash
npm install @mastercontrol/grid

# Analyze local grid
npm run grid:swarm http://localhost:3000

# Or use the CLI
npx mcp analyze https://yoursite.com
```

```text
⚡ Master Control Analysis Complete

📊 Tri-Swarm Results:
  PMM Clarity:   100/100 ✅
  7-Second Test: FAIL (value proposition not in first impression)
  QA Quality:    95/100 ✅
  AEO Score:     88/100 ✅

🎯 Key Findings:
  • 3 PMM issues detected
  • 2 QA warnings (performance)
  • 5 AEO optimizations recommended
  • 8 cross-learnings discovered

💾 Stored 15 patterns in AgentDB memory core

End of line.
```
```bash
/mcp-analyze <url>       # Grid analysis protocol
/mcp-grid-optimize       # Complete optimization
/mcp-swarm <url>         # Deploy multi-program swarm
/mcp-tri-swarm <url>     # Deploy PMM + QA + AEO swarms
```

```bash
npm run grid:analyze     # Analyze grid
npm run grid:optimize    # Optimize grid
npm run grid:swarm       # Deploy tri-swarm
npm run commit           # Generate conventional commit
npm run commit:push      # Commit and push
```

```bash
mcp analyze <url>        # Analyze grid
master-control optimize  # Optimize grid
```

Analyzes product marketing using FletchPMM principles:
Core Principles:
- ✅ Clarity Over Cleverness - "More people buy if they understand what it is"
- ✅ 7-Second Rule - Communicate value immediately
- ✅ Product-First - Focus on WHAT you do, not just outcomes
- ✅ Primary Audience - Pick one, serve them well
What It Analyzes:
- Clarity score (0-100)
- 7-second test pass/fail
- Competitive positioning
- Messaging quality
- Primary audience identification
Example:

```typescript
const pmmAgent = new PMMAgent({ memory, verbose: true });

const result = await pmmAgent.analyze({
  url: 'https://example.com',
  html: pageHtml,
  text: pageText,
  metadata: { title, description }
});

console.log(`Clarity: ${result.clarity.score}/100`);
console.log(`7-Second Test: ${result.sevenSecondTest.passed ? 'PASS' : 'FAIL'}`);
```

Autonomous testing with 6 specialized sub-agents:
Sub-Agents:
- Smoke Agent - Page load validation
- Link Agent - Navigation integrity
- Console Agent - JS error detection (with false positive filtering)
- Accessibility Agent - WCAG compliance
- Form Agent - Enhanced detection (Shadcn/ui, React Hook Form)
- Performance Agent - Load time optimization
What It Tests:
- Page accessibility (HTTP 200)
- Broken links
- Console errors (filters SSL/CDN false positives)
- Accessibility violations
- Form functionality
- Performance metrics (A-F grade)
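The A-F performance grade can be sketched as a threshold mapping over page load time. The cutoffs below are illustrative assumptions, not the Performance Agent's actual thresholds:

```typescript
// Map a page load time (in milliseconds) to a letter grade.
// Cutoff values are hypothetical, for illustration only.
function performanceGrade(loadMs: number): 'A' | 'B' | 'C' | 'D' | 'F' {
  if (loadMs <= 1000) return 'A';
  if (loadMs <= 2000) return 'B';
  if (loadMs <= 3000) return 'C';
  if (loadMs <= 5000) return 'D';
  return 'F';
}
```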
Example:

```typescript
const qaAgent = new QAAgent({ memory, verbose: true });

const result = await qaAgent.analyze({
  baseUrl: 'https://example.com',
  maxPages: 10
});

console.log(`Overall Score: ${result.summary.overallScore}/100`);
console.log(`Pages Pass: ${result.summary.pagesPass}/${result.totalPages}`);
```

Generates professional conventional commits:
Features:
- Analyzes staged changes
- Detects type (feat/fix/refactor/docs/test/chore)
- Extracts scope from directory structure
- Identifies breaking changes
- Interactive confirmation
- Never mentions "Claude" or "AI"
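The type and scope detection above can be sketched with a few path heuristics. These rules are illustrative only; the real GitAgent also inspects diff content:

```typescript
// Guess a conventional-commit type and scope from staged file paths.
// Heuristics are hypothetical, for illustration only.
function detectCommit(paths: string[]): { type: string; scope: string } {
  const type =
    paths.every(p => p.endsWith('.md')) ? 'docs' :
    paths.some(p => p.includes('test')) ? 'test' :
    paths.some(p => p.includes('src/') || p.includes('agents/')) ? 'feat' :
    'chore';

  // Scope: parent directory of the first changed file, else a repo-wide scope.
  const segments = paths[0]?.split('/') ?? [];
  const scope = segments.length > 1 ? segments[segments.length - 2] : 'repo';

  return { type, scope };
}
```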
Example:

```text
npm run commit

🔍 Analyzing staged changes...

📊 Change Analysis:
  Type: feat (agents)
  Files: 3
  Changes: +546 -0

📝 Commit Message:
  feat(agents): implement git management agent

  Added GitAgent using Agent Factory pattern.

❓ Commit with this message? (y/n)
```

Automated SEO via SwarmCoordinator:
What It Analyzes:
- Schema.org detection
- Meta tag quality
- Heading structure
- Readability scores
- Content optimization
Handled by: Multi-agent swarm system
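The kinds of checks listed above can be sketched as simple rules over extracted page data. The length limits and the `Page` shape here are illustrative assumptions, not the swarm's actual rules:

```typescript
// Flag common on-page SEO issues. Limits are hypothetical, for illustration.
interface Page {
  title: string;
  metaDescription: string;
  h1Count: number;
}

function aeoIssues(page: Page): string[] {
  const issues: string[] = [];
  if (page.title.length === 0 || page.title.length > 60) {
    issues.push('Title missing or over 60 characters');
  }
  if (page.metaDescription.length > 160) {
    issues.push('Meta description over 160 characters');
  }
  if (page.h1Count !== 1) {
    issues.push(`Expected exactly one H1, found ${page.h1Count}`);
  }
  return issues;
}
```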
All agents extend a common base class:
```typescript
import { AgentFactory, type AgentFactoryConfig } from './agents/base/agent-factory.js';

export class MyAgent extends AgentFactory<MyInput, MyResult> {
  constructor(config: AgentFactoryConfig) {
    super({
      ...config,
      agentType: 'my-agent',
      enableLearning: true
    });
  }

  async analyze(input: MyInput): Promise<MyResult> {
    // Your analysis logic
  }

  async generateRecommendations(result: MyResult): Promise<string[]> {
    // Return recommendations
  }

  protected async storeResults(input: MyInput, result: MyResult, recommendations: string[]): Promise<number> {
    // Store in memory for learning
  }

  protected async extractPatterns(result: MyResult): Promise<number> {
    // Share insights with other agents
  }
}
```

Benefits:
- ~90% code reuse for new agents
- Consistent interface across all agents
- Built-in memory integration
- Cross-learning capabilities
- Type safety with TypeScript generics
All agents store and retrieve patterns:
```typescript
// Store fix pattern
await memory.storeFixPattern({
  category: 'cross-learning-aeo',
  issue: 'Meta descriptions prioritize keywords over clarity',
  fix: 'State "what you do" first, then add keywords naturally',
  successRate: 0.85,
  description: 'Clarity-first meta descriptions'
});

// Find similar analyses
const similar = await memory.findSimilarAnalyses('homepage clarity issues', 5);

// Get site experiences
const experiences = await memory.getSimilarSiteExperiences('example.com', 5);
```

Agents share insights bidirectionally:
```text
┌─────────┐      Insights      ┌─────────┐
│   PMM   │ ←────────────────→ │   AEO   │
└─────────┘                    └─────────┘
     ↕                              ↕
  Insights                       Insights
     ↕                              ↕
┌─────────┐      Insights      ┌─────────┐
│   QA    │ ←────────────────→ │   AEO   │
└─────────┘                    └─────────┘
```
Example Cross-Learning:
- PMM → AEO: "Meta descriptions should state 'what you do' first"
- QA → AEO: "Filter false positives (SSL errors, CDN blocks) before scoring"
- PMM → QA: "Enhanced form detection finds 8+ elements (was 0)"
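Bidirectional sharing like the examples above can be sketched as a simple insight bus that agents publish to and read from between runs. The `InsightBus` class is a hypothetical illustration, not the platform's actual mechanism:

```typescript
// Minimal sketch of cross-agent insight sharing (hypothetical API).
interface Insight {
  from: string;
  to: string;
  note: string;
}

class InsightBus {
  private insights: Insight[] = [];

  // An agent publishes an insight addressed to another agent.
  publish(insight: Insight): void {
    this.insights.push(insight);
  }

  // Each agent reads insights addressed to it before its next run.
  forAgent(agent: string): Insight[] {
    return this.insights.filter(i => i.to === agent);
  }
}
```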
Master Control's primary use case is analyzing ANY website: remote, local, or a competitor's site.
DON'T DO THIS ❌

```bash
# Manual analysis (WRONG!)
cat app/layout.tsx
grep "title" app/page.tsx
# → Guess scores like "PMM: 90/100"
```

DO THIS INSTEAD ✅

```bash
# Run REAL agents (CORRECT!)
node scripts/full-swarm-analysis.mjs /path/to/repo
# → PMMAgent measures clarity, 7-second test
# → QAAgent tests links, performance
# → GitAgent analyzes commits
# → Results are MEASURED, not guessed
```

Manual analysis vs real agents:
- Manual Guess: PMM Clarity 90/100, 7-Second Test PASS ❌
- Real Agent: PMM Clarity 100/100, 7-Second Test FAIL ✅
- Discovery: Found 3 bugs in PMMAgent by running on real sites!
Lesson Stored in AgentDB:

```typescript
{
  category: 'cross-learning-platform',
  issue: 'Manual analysis performed instead of using validated agents',
  fix: 'ALWAYS run real agents with TriSwarm',
  successRate: 1.0
}
```

```bash
node scripts/analyze-external-site.mjs https://example.com /path/to/repo
```

What it does:
- Crawls live site with Puppeteer
- Runs PMMAgent (clarity, positioning)
- Runs QAAgent (links, performance)
- Analyzes git history
- Generates recommendations
```bash
node scripts/full-swarm-analysis.mjs /path/to/repo
```

What it does:
- Extracts page data from local files
- Runs PMMAgent
- Runs GitAgent
- Stores patterns in AgentDB
- Cross-learning between agents
```bash
node scripts/run-tri-swarm.mjs https://example.com ./public
```

What it does:
- Deploys AEO + PMM + QA swarms in parallel
- Cross-learning between all three
- Stores comprehensive patterns
- Generates unified report
Analysis Performed:
```bash
git clone https://github.com/matt-strautmann/mattstrautmann.com /tmp/site
node scripts/full-swarm-analysis.mjs /tmp/site
```

Agents Found:
- PMMAgent - 7-Second Test FAIL (value proposition unclear)
- PMMAgent - Primary audience misdetected as "AI search engines" (BUG!)
- PMMAgent - CTA detection missed React Link components (BUG!)
- GitAgent - Claude found in all 52 commits
- AEOAgent - Meta description too long (169 → 152 chars)
Fixes Applied:
- Fixed 3 bugs in PMMAgent
- Cleaned git history
- Optimized meta description
- Enhanced H1 tag
- Added Speed Insights
Results:
- SEO Score: 85/100 → 95/100 (+10 points)
- Grade: A → A+
- Git History: Clean (no Claude mentions)
- PMMAgent: Now works for ALL products, not just calculators
Lesson: Running real agents on real sites finds bugs that unit tests miss!
```bash
# Memory core settings
AEO_MEMORY_PATH=./data/aeo-memory.db
AEO_MEMORY_CACHE_SIZE=100

# Swarm settings
AEO_MAX_CONCURRENT_AGENTS=6

# Auto-optimization
AEO_AUTO_FIX=true
AEO_CONFIDENCE_THRESHOLD=0.75

# Output
VERBOSE=true
```

```typescript
const triSwarm = new TriSwarmCoordinator({
  memory,
  enableAEO: true,
  enablePMM: true,
  enableQA: true,
  enableCrossLearning: true,
  maxConcurrentAgents: 6,
  verbose: true,
  baseUrl: 'https://example.com'
});
```

| Metric | Value |
|---|---|
| Agent creation | <100ms |
| Analysis time | 5-10s per page |
| Memory storage | <50ms per pattern |
| Cross-learning | Real-time |
| API costs | $0 (rule-based) |
| Platform score | 9.2/10 production-ready |
```bash
# Install dependencies
npm install

# Build TypeScript
npm run build

# Run tests
npm test

# Run with coverage
npm run test:coverage
```

- Define types:
```typescript
export interface MyAgentInput {
  url: string;
  data: any;
}

export interface MyAgentResult {
  score: number;
  findings: string[];
}
```

- Extend AgentFactory:
```typescript
export class MyAgent extends AgentFactory<MyAgentInput, MyAgentResult> {
  constructor(config: AgentFactoryConfig) {
    super({ ...config, agentType: 'my-agent' });
  }

  async analyze(input: MyAgentInput): Promise<MyAgentResult> {
    // Implementation
  }

  async generateRecommendations(result: MyAgentResult): Promise<string[]> {
    // Implementation
  }
}
```

- Add cross-learning:
```typescript
protected async extractPatterns(result: MyAgentResult): Promise<number> {
  await this.storeFixPattern({
    category: 'cross-learning-aeo',
    issue: 'Example issue',
    fix: 'Example fix',
    successRate: 0.90
  });
  return 1;
}
```

```bash
# For local repos
node scripts/full-swarm-analysis.mjs /path/to/repo

# For remote sites
node scripts/analyze-external-site.mjs https://example.com /path/to/repo
```

```typescript
await memory.storeFixPattern({
  category: 'cross-learning-platform',
  issue: 'Description of what went wrong',
  fix: 'How to do it correctly',
  successRate: 1.0
});
```

Check:
- Does the clarity score make sense?
- Did it find issues you know exist?
- Are recommendations actionable?
- Did it store patterns in memory?
```bash
# Validate agents work for different use cases
node scripts/analyze-external-site.mjs https://portfolio.com /tmp/site1
node scripts/analyze-external-site.mjs https://saas-product.com /tmp/site2
node scripts/analyze-external-site.mjs https://blog.com /tmp/site3
```

- CLAUDE.md - Complete integration guide
- PLATFORM_VALIDATION_REPORT.md - 9.2/10 production-ready validation
- ENHANCEMENT_ROADMAP.md - 12-week implementation plan
- LESSONS_LEARNED.md - Critical mistakes and fixes
Solution: Verify you're running REAL agents, not manual analysis
Solution:

```bash
ls -la ./data/aeo-memory.db
chmod 755 ./data
```

Solution:

```bash
echo "AEO_MAX_CONCURRENT_AGENTS=3" >> .env
```

MIT
Matt Strautmann
Contributions welcome! See CONTRIBUTING.md
- Repository: https://github.com/matt-strautmann/aeo-now
- Issues: https://github.com/matt-strautmann/aeo-now/issues
- Documentation: See CLAUDE.md for complete guide
⚡ Master Control Program
Multi-agent platform for grid optimization and analysis.
Built with Agent Factory Architecture, AgentDB Memory Core, and Cross-Agent Learning.
I fight for the users. End of line.