A lightweight, framework-agnostic template for building agentic AI applications quickly. Perfect for MVPs and proof-of-concept projects.
🔗 Repository: https://github.com/kython220282/AgenticAI-MVP-Project-Template
⚠️ SETUP REQUIRED: This template needs configuration before use. See SETUP_GUIDE.md for required steps.
📌 How to Use This Template:
- Click "Use this template" button on GitHub OR clone directly:
```bash
git clone https://github.com/kython220282/AgenticAI-MVP-Project-Template.git
cd AgenticAI-MVP-Project-Template
```
- Follow the Setup Guide to configure API keys and LLM integration
- Run the Quick Start Guide
- Customize for your specific use case
- Modular: Clean separation of agents, tools, and config
- Framework-Agnostic: Works with any LLM provider or agent framework
- Minimal: Only essential components - no bloat
- Production-Ready: Scale up when validated, not before
- Developer-Friendly: Simple to understand and customize
- 🤖 Framework-agnostic agents - Works with OpenAI, Anthropic, Azure, or any LLM
- 🛠️ Extensible tool system - Easy-to-add custom tools with registry
- 🎨 Interactive web dashboard - Showcase agent capabilities in real-time
- 📡 REST API + SSE - Full API with live updates via Server-Sent Events
- 📚 Comprehensive docs - Quick start, examples, and architecture guides
- 🔧 Production patterns - Config management, logging, error handling
- ⚡ Quick setup - Running in 5 minutes with minimal dependencies
- Quick Start Guide - Get running in 5 minutes
- Dashboard Guide - Web interface for showcasing agents
- Project Structure - Architecture and design patterns
This template includes:
- ✅ Base Agent Classes - Inherit and customize for your domain
- ✅ Memory Systems - Short-term, long-term, and hybrid memory for agents
- ✅ RAG Support - Document retrieval and context-augmented generation
- ✅ Caching Layer - Cache expensive LLM calls and tool results
- ✅ Tool System - Add custom tools easily with a registry pattern
- ✅ Web Dashboard - Live demo interface for stakeholders
- ✅ REST API - FastAPI backend with auto-generated docs
- ✅ Real-time Updates - Server-Sent Events for live monitoring
- ✅ Config Management - Multi-provider LLM support (OpenAI, Anthropic, Azure)
- ✅ Working Examples - 6 runnable examples to learn from
- ✅ Production Patterns - Logging, error handling, validation
Perfect for: Proof-of-concepts, hackathons, MVP validation, client demos
This template is framework-agnostic and works with any agent/LLM framework:
- OpenAI - GPT-4, GPT-3.5, GPT-4 Turbo
- Anthropic - Claude 3 (Opus, Sonnet, Haiku)
- Azure OpenAI - Enterprise deployments
- Google - Gemini, PaLM
- Cohere - Command models
- Open Source - Llama, Mistral, etc. via Ollama/vLLM
- Custom APIs - Any HTTP-based LLM service
- LangChain - Wrap agents in SimpleLLMAgent class
- CrewAI - Adapt crew agents to BaseAgent interface
- AutoGen - Use as orchestration layer
- Haystack - Integrate pipeline components
- Custom - Build from scratch with provided base classes
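The framework integrations above all follow the same idea: wrap the third-party agent behind the template's `execute()` contract. A minimal sketch of that adapter pattern follows — the class names here are illustrative stand-ins, not the template's exact API:

```python
from abc import ABC, abstractmethod

class BaseAgent(ABC):
    """Minimal stand-in for the template's BaseAgent."""
    def __init__(self, name, role, goal):
        self.name, self.role, self.goal = name, role, goal

    @abstractmethod
    def execute(self, task, context=None):
        ...

class ThirdPartyAgent:
    """Pretend this comes from CrewAI, AutoGen, etc."""
    def run(self, prompt):
        return f"[framework output for: {prompt}]"

class FrameworkAdapter(BaseAgent):
    """Adapts a framework agent to the template's execute() contract."""
    def __init__(self, name, role, goal, wrapped):
        super().__init__(name, role, goal)
        self.wrapped = wrapped

    def execute(self, task, context=None):
        # Delegate to the wrapped framework, normalize the result shape
        output = self.wrapped.run(task)
        return {"output": output, "status": "success"}

agent = FrameworkAdapter("Researcher", "Research", "Find facts", ThirdPartyAgent())
print(agent.execute("topic X")["status"])  # success
```

Because every adapter returns the same `{"output", "status"}` shape, the orchestrator and dashboard don't need to know which framework produced the result.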
- Pinecone - Managed vector search
- Weaviate - Open-source vector database
- ChromaDB - Embedded vector store
- Qdrant - Performance-focused vector search
- FAISS - Facebook's similarity search
Example: Using with LangChain
```python
from langchain.chat_models import ChatOpenAI
from agents import SimpleLLMAgent

class LangChainAgent(SimpleLLMAgent):
    def __init__(self, name, role, goal):
        llm = ChatOpenAI(model="gpt-4")
        super().__init__(name, role, goal, llm_client=llm)

    def execute(self, task, context=None):
        response = self.llm_client.invoke(task)
        return {"output": response.content, "status": "success"}
```

Example: Using with Anthropic
```python
import os

import anthropic
from agents import SimpleLLMAgent

class ClaudeAgent(SimpleLLMAgent):
    def __init__(self, name, role, goal):
        client = anthropic.Anthropic(api_key=os.getenv("ANTHROPIC_API_KEY"))
        super().__init__(name, role, goal, llm_client=client)

    def execute(self, task, context=None):
        response = self.llm_client.messages.create(
            model="claude-3-sonnet-20240229",
            max_tokens=1024,  # required by the Messages API
            messages=[{"role": "user", "content": task}]
        )
        return {"output": response.content[0].text, "status": "success"}
```

See examples/ for more integration patterns.
Speed & Simplicity
- ✅ Running in 5 minutes vs hours/days from scratch
- ✅ Minimal dependencies - only 3 required packages
- ✅ Clear, readable code - easy to understand and modify
- ✅ No complex build processes or configurations
Flexibility
- ✅ Framework-agnostic - not locked into any vendor
- ✅ LLM-agnostic - swap providers without refactoring
- ✅ Modular design - use only what you need
- ✅ Easy to extend - add agents, tools, features incrementally
Production-Ready Patterns
- ✅ Config management with environment variables
- ✅ Logging and error handling built-in
- ✅ REST API with auto-generated documentation
- ✅ Real-time updates via Server-Sent Events
- ✅ Clean separation of concerns
Showcase & Demo
- ✅ Web dashboard included - impress stakeholders
- ✅ Interactive API docs - test endpoints in browser
- ✅ Working examples - demonstrate capabilities immediately
- ✅ Professional UI - dark theme, responsive design
What This Template Doesn't Include
- ❌ No pre-built LLM integrations - You implement LLM calls (examples provided)
- ❌ No database - Add PostgreSQL/MongoDB when needed (SQLite for dev included)
- ❌ No authentication - Implement JWT/OAuth for production
- ❌ No production cache - Uses in-memory cache (add Redis when scaling)
- ❌ No vector embeddings - Keyword search included (add OpenAI/Cohere embeddings for production RAG)
- ❌ No deployment configs - Add Docker/K8s for production
- ❌ No monitoring - Add Prometheus/Grafana for production
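Since vector embeddings are out of scope, the included retrieval is keyword-based. Conceptually it amounts to ranking chunks by word overlap with the query — a sketch of the idea (illustrative only, not the template's actual code):

```python
import re

def tokens(text):
    """Lowercase word tokens with punctuation stripped."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, chunks, top_k=3):
    """Rank chunks by keyword overlap with the query; return the top_k."""
    q = tokens(query)
    ranked = sorted(chunks, key=lambda ch: len(q & tokens(ch)), reverse=True)
    return ranked[:top_k]

chunks = [
    "Python is a programming language",
    "Bananas are yellow",
    "Python has decorators",
]
print(retrieve("What is Python?", chunks, top_k=2))
# ['Python is a programming language', 'Python has decorators']
```

For production RAG you would replace the overlap score with embedding similarity from a provider like OpenAI or Cohere, as noted above.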
Not Ideal For
- ❌ Enterprise production apps (use full enterprise template)
- ❌ Complex multi-model pipelines (consider LangChain/Haystack)
- ❌ Heavy reinforcement learning (needs training infrastructure)
- ❌ Real-time streaming at scale (needs different architecture)
When to Graduate

Move to a full framework or enterprise template when you:
- Have validated your MVP and need production scale
- Need advanced features (RAG, agents with memory, etc.)
- Require enterprise security and compliance
- Have 10,000+ users or complex workflows
- Need dedicated DevOps/monitoring infrastructure
Migration Path: Start here for MVP → Validate → Add features → Scale to production
Perfect For:
- ✅ Building MVPs in days, not weeks
- ✅ Proof-of-concept demonstrations
- ✅ Hackathon projects
- ✅ Client demos and showcases
- ✅ Learning agentic AI patterns
- ✅ Testing different LLM providers
- ✅ Rapid prototyping and experimentation
Good For:
- ✅ Small to medium agent systems (1-10 agents)
- ✅ Internal tools and automation
- ✅ Research and experimentation
- ✅ Educational projects
Not Recommended For:
- ❌ Production apps with 1000s of concurrent users
- ❌ Mission-critical systems requiring 99.99% uptime
- ❌ Complex enterprise workflows without customization
- ❌ Systems requiring extensive audit trails and compliance
```bash
# Create virtual environment
python -m venv venv

# Activate (Windows)
venv\Scripts\activate

# Activate (Linux/Mac)
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Copy environment template
cp .env.example .env

# Edit .env and add your API keys
# At minimum, add one LLM provider API key
```

Option A: Command Line Usage
```bash
# Run the main application
python main.py

# Or run specific examples
python examples/basic_agent.py
python examples/multi_agent.py
python examples/tools_usage.py
```

Option B: Web Dashboard (Recommended for Demos)
```bash
# Start the web server
python run_server.py

# Open browser to:
# http://localhost:8000       - Interactive Dashboard
# http://localhost:8000/docs  - API Documentation
```

```
AgenticAI_MVP_Project_Template/
├── agents/                  # Agent definitions
│   ├── __init__.py
│   └── base.py              # Base agent classes
├── tools/                   # Tool implementations
│   ├── __init__.py
│   ├── base.py              # Base tool classes
│   └── implementations.py   # Example tools
├── api/                     # Web API backend
│   ├── __init__.py
│   └── app.py               # FastAPI application
├── frontend/                # Web dashboard UI
│   ├── index.html           # Main dashboard
│   ├── styles.css           # Styling
│   └── app.js               # Frontend logic
├── examples/                # Usage examples
│   ├── basic_agent.py       # Single agent example
│   ├── multi_agent.py       # Multi-agent workflow
│   ├── tools_usage.py       # Tools integration
│   ├── memory_agent.py      # Memory system demo
│   ├── rag_example.py       # RAG demonstration
│   └── caching_example.py   # Caching demo
├── docs/                    # Documentation
│   ├── QUICKSTART.md        # Quick start guide
│   ├── SETUP_GUIDE.md       # Setup instructions
│   ├── DASHBOARD.md         # Dashboard guide
│   ├── STRUCTURE.md         # Architecture details
│   └── CONTRIBUTING.md      # Customization guide
├── data/                    # Data storage (gitignored)
│   └── README.md            # Data directory info
├── utils/                   # Utility modules
│   ├── helpers.py           # Logging, formatting, validation
│   ├── cache.py             # Caching system
│   └── rag.py               # RAG utilities
├── config.py                # Configuration management
├── main.py                  # Main entry point
├── run_server.py            # Web server launcher
├── requirements.txt         # Python dependencies
├── .env.example             # Environment template
├── .gitignore               # Git ignore rules
├── LICENSE                  # MIT License
└── README.md                # This file
```
Framework-agnostic agent base classes that work with any LLM:
- `BaseAgent`: Abstract base for all agents
- `SimpleLLMAgent`: Basic LLM-powered agent
- `AgentOrchestrator`: Coordinate multiple agents
Extensible tool system compatible with any framework:
- `BaseTool`: Abstract base for tools
- `FunctionTool`: Wrap Python functions as tools
- `ToolRegistry`: Manage and discover tools
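The registry pattern behind these classes is straightforward. Here is a minimal, self-contained sketch — the names mirror the template's, but the real classes may differ in detail:

```python
class BaseTool:
    """Abstract base: every tool has a name, a description, and execute()."""
    def __init__(self, name, description):
        self.name = name
        self.description = description

    def execute(self, input_data):
        raise NotImplementedError

class FunctionTool(BaseTool):
    """Wrap a plain Python function as a tool."""
    def __init__(self, name, description, fn):
        super().__init__(name, description)
        self.fn = fn

    def execute(self, input_data):
        return self.fn(input_data)

class ToolRegistry:
    """Central lookup so agents can discover tools by name."""
    def __init__(self):
        self._tools = {}

    def register(self, tool):
        self._tools[tool.name] = tool

    def get(self, name):
        return self._tools[name]

    def list_tools(self):
        return [(t.name, t.description) for t in self._tools.values()]

registry = ToolRegistry()
registry.register(FunctionTool("upper", "Uppercase text", str.upper))
print(registry.get("upper").execute("hello"))  # HELLO
```

Registering tools by name keeps agents decoupled from tool implementations: an agent only needs the registry and a tool name.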
Context-aware agents with memory and document retrieval:
- `ShortTermMemory`: Conversation history and recent context
- `LongTermMemory`: Persistent knowledge with disk storage
- `HybridMemory`: Combined short- and long-term memory
- `SimpleRAG`: Document chunking and retrieval
- `VectorRAG`: Placeholder for vector database integration
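To make the hybrid-memory idea concrete, here is a rough sketch: a bounded short-term buffer plus a JSON-persisted long-term store. The class and method names are illustrative, not the template's actual API:

```python
import json
from collections import deque
from pathlib import Path

class HybridMemorySketch:
    """Short-term: bounded in-memory buffer. Long-term: JSON file on disk."""
    def __init__(self, storage_path, short_term_size=10):
        self.short_term = deque(maxlen=short_term_size)  # oldest entries fall off
        self.path = Path(storage_path)
        self.long_term = (
            json.loads(self.path.read_text()) if self.path.exists() else {}
        )

    def add_short_term(self, kind, content):
        self.short_term.append({"kind": kind, "content": content})

    def remember(self, key, value):
        """Persist a fact so it survives restarts."""
        self.long_term[key] = value
        self.path.parent.mkdir(parents=True, exist_ok=True)
        self.path.write_text(json.dumps(self.long_term))

    def get_context(self, recent_count=5):
        """Most recent short-term entries, for inclusion in the next prompt."""
        return list(self.short_term)[-recent_count:]
```

The important property is that a new process pointed at the same storage path reloads the long-term facts, while short-term context stays cheap and bounded.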
Performance optimization for expensive operations:
- `SimpleCache`: In-memory cache with TTL
- `@cached`: Decorator for function caching
- `@cache_agent_response`: Cache agent outputs
- `@cache_tool_result`: Cache tool results
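The TTL-caching idea fits in a few lines. This is an illustration of the pattern, not the template's `@cached` implementation (which lives in `utils/cache.py`):

```python
import time
import functools

def cached(ttl=300):
    """Cache a function's results for `ttl` seconds (positional args only)."""
    def decorator(fn):
        store = {}

        @functools.wraps(fn)
        def wrapper(*args):
            now = time.time()
            if args in store:
                value, expires = store[args]
                if now < expires:
                    return value              # cache hit
            value = fn(*args)                 # cache miss: compute and store
            store[args] = (value, now + ttl)
            return value
        return wrapper
    return decorator

calls = {"n": 0}

@cached(ttl=60)
def expensive(x):
    calls["n"] += 1
    return x * 2

expensive(3); expensive(3)
print(calls["n"])  # 1 -- the second call was served from cache
```

For LLM calls this pattern can save real money during development, since repeated identical prompts hit the cache instead of the API.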
Interactive UI for showcasing agent capabilities:
- Create Agents: Define agents with custom roles and goals
- Execute Tasks: Run single-agent tasks with real-time results
- Multi-Agent Workflows: Chain multiple agents together
- Live Monitoring: Real-time updates via Server-Sent Events
- History Tracking: View all past executions
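Server-Sent Events are a line-oriented text format: optional `event:` lines followed by `data:` lines and a blank line that terminates each message. A sketch of the server-side formatting (the template's actual event names and payloads may differ):

```python
import json

def sse_event(data, event=None):
    """Format a payload as a Server-Sent Events message."""
    msg = ""
    if event:
        msg += f"event: {event}\n"
    msg += f"data: {json.dumps(data)}\n\n"  # blank line terminates the event
    return msg

print(sse_event({"status": "running"}, event="agent_update"))
```

On the browser side, `EventSource` parses this stream and dispatches one callback per event, which is how the dashboard updates without polling.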
Web Dashboard (Best for Showcasing)
```bash
# Start the server
python run_server.py

# Open http://localhost:8000 in your browser
# 1. Create agents with custom roles
# 2. Execute tasks and see results in real-time
# 3. Build multi-agent workflows visually
# 4. View execution history
```

Centralized config supporting multiple LLM providers:
- OpenAI
- Anthropic (Claude)
- Azure OpenAI
- Easy to add more
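A typical shape for env-var-driven provider config is sketched below. The variable names (`LLM_PROVIDER`, etc.) are assumptions for illustration — check `.env.example` and `config.py` for the template's actual keys:

```python
import os
from dataclasses import dataclass

@dataclass
class LLMConfig:
    provider: str
    model: str
    api_key: str

def load_config():
    """Resolve provider settings from environment variables."""
    provider = os.getenv("LLM_PROVIDER", "openai")
    key_var = {
        "openai": "OPENAI_API_KEY",
        "anthropic": "ANTHROPIC_API_KEY",
        "azure": "AZURE_OPENAI_API_KEY",
    }[provider]
    return LLMConfig(
        provider=provider,
        model=os.getenv("LLM_MODEL", "gpt-4"),
        api_key=os.getenv(key_var, ""),
    )
```

Keeping provider selection in the environment means swapping from OpenAI to Anthropic is a `.env` change, not a code change.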
Create and run a single agent:

```python
from agents import SimpleLLMAgent

agent = SimpleLLMAgent(
    name="ResearchAgent",
    role="Research Assistant",
    goal="Find and analyze information"
)

result = agent.execute("Research topic X")
```

Chain multiple agents with the orchestrator:

```python
from agents import SimpleLLMAgent, AgentOrchestrator

# Create agents
researcher = SimpleLLMAgent(name="Researcher", ...)
writer = SimpleLLMAgent(name="Writer", ...)

# Orchestrate
orchestrator = AgentOrchestrator([researcher, writer])
workflow = [
    {"agent": "Researcher", "task": "Research topic"},
    {"agent": "Writer", "task": "Write summary"}
]
results = orchestrator.execute_workflow(workflow)
```

Give an agent memory:

```python
from agents import SimpleLLMAgent, HybridMemory

agent = SimpleLLMAgent(name="Assistant", ...)
memory = HybridMemory(storage_path="data/memory.json")

# Add to memory
memory.add_to_short_term("user_message", "Hello!")

# Get context for the next interaction
context = memory.get_context(query="greeting", recent_count=5)
result = agent.execute("Respond to user", context=context)
```

Augment prompts with retrieved documents:

```python
from utils import SimpleRAG

rag = SimpleRAG(chunk_size=500)
rag.add_document(content="Your document text...", metadata={"source": "doc1"})

# Retrieve context for a query
context = rag.get_context("What is Python?", top_k=3)
result = agent.execute(f"Context: {context}\n\nQuestion: What is Python?")
```

Cache expensive calls:

```python
from utils import cached, cache_agent_response

# Cache function results
@cached(ttl=300)
def expensive_api_call(query):
    return call_external_api(query)

# Cache agent responses
class CachedAgent(SimpleLLMAgent):
    @cache_agent_response(ttl=600)
    def execute(self, task, context=None):
        return super().execute(task, context)
```

Define and register a custom tool:

```python
from tools import BaseTool, tool_registry

class MyTool(BaseTool):
    def __init__(self):
        super().__init__("my_tool", "Does something useful")

    def execute(self, input_data):
        # Your logic here
        return result

# Register
tool_registry.register(MyTool())
```

Use a LangChain model as the LLM client:

```python
from langchain.chat_models import ChatOpenAI
from agents import SimpleLLMAgent

llm = ChatOpenAI(model="gpt-4")
agent = SimpleLLMAgent(name="Agent", llm_client=llm)
```

Or an Anthropic client:

```python
import anthropic
from agents import SimpleLLMAgent

client = anthropic.Anthropic(api_key="your-key")
agent = SimpleLLMAgent(name="Agent", llm_client=client)
```

Adapt the base classes to wrap CrewAI or AutoGen agents - the interface remains consistent.
Adding a new agent:
- Inherit from `BaseAgent`
- Implement the `execute()` method
- Add any custom behavior

Adding a new tool:
- Inherit from `BaseTool`
- Implement the `execute()` method
- Register with `tool_registry`

Changing LLM providers:
- Update `.env` with the new provider credentials
- Modify the agent's LLM client initialization
- Adapt the `execute()` method if needed
When your MVP is validated:
- Add Testing: Integrate pytest, add unit/integration tests
- Add API Layer: Use FastAPI for API endpoints
- Add Monitoring: Logging, metrics, error tracking
- Add Database: PostgreSQL, MongoDB, etc.
- Add CI/CD: GitHub Actions, deployment automation
- Add Security: Auth, rate limiting, input validation
- Add Documentation: API docs, architecture diagrams
Migrate to the full enterprise template when ready!

Customization Checklist
After cloning this template, customize these key areas:
- Update `.env` - Add your LLM API keys
- Customize Agents - Modify `agents/base.py` for your domain
- Add Custom Tools - Create tools in `tools/implementations.py`
- Update Frontend - Change branding in `frontend/styles.css`
- Implement LLM Calls - Replace placeholders in agent `execute()` methods
- Add Your Logic - Customize `main.py` for your use case
- Update README - Replace this README with your project details
See CONTRIBUTING.md for detailed customization guide.
| Feature | This Template | From Scratch | Enterprise Template |
|---|---|---|---|
| Setup Time | 5 minutes | Hours/Days | 30-60 minutes |
| Framework Lock-in | None | Depends | Often Yes |
| Dashboard Included | ✅ Yes | ❌ No | ✅ Yes |
| Production Patterns | ✅ Core ones | ❌ No | ✅✅ Extensive |
| Complexity | Low | Varies | High |
| Best For | MVPs, Demos | Custom builds | Production apps |
Backend:
- Python 3.8+
- FastAPI (API framework)
- Pydantic (Data validation)
Frontend:
- Vanilla JavaScript (no framework dependencies)
- HTML5/CSS3 with modern dark theme
- Server-Sent Events for real-time updates
Optional Integrations:
- Any LLM provider (OpenAI, Anthropic, Azure, etc.)
- Any agent framework (LangChain, CrewAI, AutoGen, etc.)
- Any vector database (Pinecone, Weaviate, ChromaDB, etc.)
This is a template repository designed to be forked and customized for your needs.
Using this template:
- Fork it and make it yours - no attribution required
- Customize everything - it's designed for that
- Share your improvements via PRs if you'd like
See CONTRIBUTING.md for customization guidelines.
If this template helped you ship faster:
- ⭐ Star this repository
- 🐛 Report issues or suggest features
- 🔀 Share your use cases (open an issue with "Showcase" label)
- 📣 Tell others about it
This project is licensed under the MIT License - see the LICENSE file for details.
TL;DR: Use it commercially, modify it, distribute it. Just include the license.
- Focus on rapid iteration
- Document assumptions
This template is provided as-is for your use. Modify freely!
No LLM API key configured
- Add your API key to the `.env` file
Module not found errors
- Ensure you're in the virtual environment: `venv\Scripts\activate`
- Install dependencies: `pip install -r requirements.txt`
Want to use framework X?
- This template is agnostic - adapt the agent/tool classes to wrap your framework
Dashboard not loading?
- See Dashboard Guide for detailed troubleshooting
- Get Started: Follow the Quick Start Guide
- Explore Dashboard: Check out the Dashboard Guide for showcasing
- Understand Structure: Read Project Structure for architecture details
- Configure your LLM provider in `.env`
- Add domain-specific agents and tools
- Build your MVP!
Ready to start?
- 📥 Clone: `git clone https://github.com/kython220282/AgenticAI-MVP-Project-Template.git`
- 📖 Follow: Quick Start Guide
- 🚀 Build your AI MVP!
Found this helpful? ⭐ Star the repo: https://github.com/kython220282/AgenticAI-MVP-Project-Template
Built with the philosophy that perfect is the enemy of done - ship fast, learn, iterate!