AI-powered API mocking server with guaranteed schema compliance.
Stop writing mock data manually. Define your schema once - Helix generates realistic data that always matches your structure.
# Install Helix
git clone https://github.com/ashfromsky/helix.git
cd helix
pip install -e .
# Interactive setup wizard
helix init
# Start server
helix start

Visit http://localhost:8000
# Clone and start
git clone https://github.com/ashfromsky/helix.git
cd helix
docker-compose up

Visit http://localhost:8080
# Clone repository
git clone https://github.com/ashfromsky/helix.git
cd helix
# Create virtual environment
python -m venv venv
source venv/bin/activate # Linux/Mac
# or
venv\Scripts\activate # Windows
# Install dependencies
pip install -r requirements.txt
# Copy configuration
cp .env.example .env
# Start server
uvicorn app.main:app --reload --port 8080

→ Jump to Ollama Setup for completely offline AI with no API keys.
Helix provides a powerful CLI for easy management:
Interactive setup wizard that configures your environment:
helix init

What it does:
- Guides you through AI provider selection (demo/ollama/deepseek/groq)
- Configures API keys if needed
- Creates .env file automatically
- Sets up Redis container (if Docker available)
- Initializes AI system prompt
- Creates required directories
Example output:
AI-Powered API Mocking Platform
SETUP WIZARD
1. AI Provider Configuration
? Select AI provider:
→ demo - Free, no API keys required
ollama - Local LLM, private and unlimited
deepseek - OpenRouter, cost-effective
groq - Ultra-fast inference, free tier available
2. Environment Setup
✓ Configuration applied successfully
✓ Directory structure created
✓ AI system prompt initialized
3. Infrastructure
✓ Redis started successfully
Configuration Applied Successfully
Provider: DEMO
Starts the Helix API server:
helix start [OPTIONS]

Options:
- --host TEXT: Host to bind to (default: 0.0.0.0)
- --port INTEGER: Port to bind to (default: 8000)
- --reload / --no-reload: Enable auto-reload (default: True)
Examples:
# Start with defaults
helix start
# Custom port
helix start --port 3000
# Production mode (no reload)
helix start --no-reload --port 8080

Shows current configuration and system status:
helix status

Example output:
Configuration Status
CURRENT SETTINGS
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Parameter ┃ Value ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ AI Provider │ demo │
│ Redis URL │ redis://localhost:6379 │
│ Server Port │ 8000 │
└──────────────────────────────┴──────────────────────────────────────┘
Manage configuration interactively:
helix config

Options:
- Change AI Provider: Switch between demo/ollama/deepseek/groq
- Update API Keys: Update existing API credentials
- Reset Configuration: Delete and reconfigure from scratch
- Exit: Leave configuration unchanged
Example session:
helix config
Configuration Manager
MODIFY SETTINGS
? What would you like to configure?
→ Change AI Provider
Update API Keys
Reset Configuration
Exit
? Select AI provider:
→ deepseek - OpenRouter, cost-effective
? OpenRouter API key: ****************************
✓ Provider configuration updated

Shows all available commands and options:
helix --help

Helix uses the rules in your MOCKPILOT_SYSTEM.md file to follow your API design strictly.
Basic Mode (No Schema):
# Helix infers structure from path and method
curl http://localhost:8080/api/users
# Returns: realistic user data

Schema Mode (Guaranteed Structure):
# 1. Define your schema
POST /api/users
Body: {
"schema": {
"id": "string",
"name": "string",
"email": "string",
"role": "admin|user|guest"
}
}
# 2. Helix generates data matching your schema exactly
# Every field, every type, every enum value - guaranteed

The key difference:
- Without schema: Helix makes smart guesses (great for demos)
- With schema: Helix enforces exact structure (safe for production)
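The guarantee can also be checked from the client side. The following is a minimal, illustrative sketch (not part of Helix) that verifies a response against a schema like the example above, treating a `|`-separated rule as an enum:

```python
# Illustrative client-side check (not part of Helix): verify a mock
# response against a schema like the one above.

def matches_schema(payload: dict, schema: dict) -> bool:
    """True if every schema field is present and has the right shape."""
    for field, rule in schema.items():
        if field not in payload:
            return False
        value = payload[field]
        if "|" in rule:                      # enum, e.g. "admin|user|guest"
            if value not in rule.split("|"):
                return False
        elif rule == "string" and not isinstance(value, str):
            return False
    return True

schema = {"id": "string", "name": "string",
          "email": "string", "role": "admin|user|guest"}
ok = {"id": "usr_1", "name": "Ada Lovelace",
      "email": "ada@example.com", "role": "admin"}
bad = {"id": "usr_2", "name": "Bob",
       "email": "bob@example.com", "role": "superuser"}

print(matches_schema(ok, schema))   # True
print(matches_schema(bad, schema))  # False
```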
- Schema Enforcement - Define structure once, get consistent data forever (see below)
- Zero Configuration - Hit any endpoint and get instant responses
- AI-Powered - Uses DeepSeek, Groq, Ollama, or built-in templates
- Context Aware - Remembers your actions within sessions
- Smart Data - Generates realistic names, emails, dates, IDs
- REST Compatible - Follows HTTP standards automatically
- Redis Caching - Fast responses with intelligent caching
- Chaos Engineering - Simulate failures and latency
- Live Dashboard - Monitor requests in real-time
- CLI Interface - Easy setup and management with helix commands
After running helix init, your .env file is automatically configured. You can also edit it manually:
# AI Provider (demo/deepseek/groq/ollama)
HELIX_AI_PROVIDER=demo
# DeepSeek (via OpenRouter)
HELIX_OPENROUTER_API_KEY=sk-or-v1-your-key-here
HELIX_OPENROUTER_MODEL=deepseek/deepseek-chat
# Groq
HELIX_GROQ_API_KEY=gsk_your-key-here
HELIX_GROQ_MODEL=llama-3.1-70b-versatile
# Ollama (Local)
HELIX_OLLAMA_HOST=http://localhost:11434
HELIX_OLLAMA_MODEL=llama3
# Server Settings
HELIX_PORT=8080
HELIX_HOST=0.0.0.0
HELIX_DEBUG=true
# Redis
HELIX_REDIS_HOST=localhost
HELIX_REDIS_PORT=6379

Choose your AI provider during helix init or change it later with helix config:
| Provider | Setup | Free Tier | Speed | Best For |
|---|---|---|---|---|
| demo (default) | None needed | ✓ Unlimited | Fast | Getting started |
| DeepSeek | API key required | 500 req/day | Medium | Production |
| Groq | API key required | 14,400 req/day | Ultra-fast | High volume |
| Ollama | Local installation | ✓ Unlimited | Varies | Offline/Privacy |
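For illustration, resolving the provider from the environment might look like the sketch below (it assumes the HELIX_AI_PROVIDER variable documented in the configuration section; this is not Helix's actual code):

```python
# Illustrative sketch (not Helix's actual code): resolve the provider
# from HELIX_AI_PROVIDER, falling back to the free demo provider.
import os

KNOWN_PROVIDERS = {"demo", "ollama", "deepseek", "groq"}

def resolve_provider(env=None):
    env = os.environ if env is None else env
    choice = env.get("HELIX_AI_PROVIDER", "demo").strip().lower()
    return choice if choice in KNOWN_PROVIDERS else "demo"

print(resolve_provider({"HELIX_AI_PROVIDER": "groq"}))  # groq
print(resolve_provider({}))                             # demo
```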
No setup needed. Uses template-based generation with the Faker library.
helix init
# Select: demo - Free, no API keys required

DeepSeek Setup:
- Run helix init or helix config
- Select deepseek
- Enter your OpenRouter API key from openrouter.ai

helix config
# Select: Change AI Provider → deepseek
# Enter API key when prompted

Groq Setup:
- Run helix init or helix config
- Select groq
- Enter your Groq API key from console.groq.com

helix config
# Select: Change AI Provider → groq
# Enter API key when prompted

Ollama runs AI models locally - no API keys, no rate limits, completely offline.
Quick Setup:
# 1. Install Ollama
# Visit https://ollama.com/ and download for your OS
# 2. Pull a model
ollama pull llama3.2
# 3. Configure Helix
helix init
# Select: ollama - Local LLM, private and unlimited
# Enter Ollama host (default: http://localhost:11434)
# 4. Start Helix
helix start

Model Recommendations:
| Model | Size | RAM | Speed | Quality | Best For |
|---|---|---|---|---|---|
| llama3.2:1b | 1GB | 2GB | ⚡⚡⚡⚡⚡ | ⭐⭐⭐ | Testing, demos |
| llama3.2 | 2GB | 4GB | ⚡⚡⚡⚡ | ⭐⭐⭐⭐ | Most users |
| llama3.1:8b | 4.7GB | 8GB | ⚡⚡⚡ | ⭐⭐⭐⭐ | Production |
| llama3.1:70b | 40GB | 32GB+ | ⚡ | ⭐⭐⭐⭐⭐ | Maximum quality |
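As a rule of thumb, the table reduces to a RAM-based picker. The helper below is purely illustrative (not part of Helix); the thresholds mirror the RAM column above:

```python
# Illustrative helper (not part of Helix): pick an Ollama model using
# the RAM thresholds from the recommendations table above.

def recommend_model(ram_gb: float) -> str:
    if ram_gb >= 32:
        return "llama3.1:70b"
    if ram_gb >= 8:
        return "llama3.1:8b"
    if ram_gb >= 4:
        return "llama3.2"
    return "llama3.2:1b"

print(recommend_model(16))  # llama3.1:8b
print(recommend_model(2))   # llama3.2:1b
```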
Docker Setup:
If using Docker, update your Ollama host:
helix config
# Select: Change AI Provider → ollama
# Host: http://host.docker.internal:11434

Troubleshooting:
# Check if Ollama is running
ollama list
# Start Ollama server
ollama serve
# Test connection
curl http://localhost:11434/api/tags
# Pull missing model
ollama pull llama3.2

# GET collection
curl http://localhost:8080/api/products
# GET single item
curl http://localhost:8080/api/products/prod_123
# POST (create)
curl -X POST http://localhost:8080/api/products \
-H "Content-Type: application/json" \
-d '{"name": "Laptop", "price": 999}'
# PUT (full update)
curl -X PUT http://localhost:8080/api/products/prod_123 \
-H "Content-Type: application/json" \
-d '{"name": "Gaming Laptop", "price": 1499}'
# PATCH (partial update)
curl -X PATCH http://localhost:8080/api/products/prod_123 \
-H "Content-Type: application/json" \
-d '{"price": 1299}'
# DELETE
curl -X DELETE http://localhost:8080/api/products/prod_123

Maintain context across requests with the X-Session-ID header:
# Session 1: Create a user
curl -H "X-Session-ID: session-1" \
-X POST http://localhost:8080/api/users \
-d '{"name": "Alice"}'
# Session 1: List users - returns Alice
curl -H "X-Session-ID: session-1" \
http://localhost:8080/api/users
# Session 2: List users - returns different data
curl -H "X-Session-ID: session-2" \
http://localhost:8080/api/users

Problem: Random JSON structures break your frontend.
Solution: Define your schema - Helix guarantees compliance.
1. Your TypeScript Interface:
interface User {
id: string;
name: string;
email: string;
role: 'admin' | 'user' | 'guest';
createdAt: string; // ISO 8601
}

2. Define Schema in System Prompt:
Edit assets/AI/MOCKPILOT_SYSTEM.md (created by helix init):
## User Resource Schema
When handling /api/users endpoints:
- id: string (format: usr_XXXXXXXX)
- name: string (full name)
- email: string (valid email)
- role: enum [admin, user, guest]
- createdAt: ISO 8601 timestamp

3. Helix Response (Always Matches Schema):
{
"id": "usr_9Xk2LmNp",
"name": "Sarah Chen",
"email": "sarah.chen@company.com",
"role": "admin",
"createdAt": "2024-12-19T14:23:00Z"
}

Simulate production failures to test error handling:
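In outline, this style of fault injection rolls once per request and either fails it, delays it, or passes it through. A seeded, illustrative sketch of the idea (not Helix's implementation):

```python
# Illustrative sketch (not Helix's implementation) of per-request chaos:
# a fraction of requests fail, another fraction is delayed within a range.
import random

def inject_chaos(rng, error_rate=0.1, latency_rate=0.15,
                 min_delay_ms=2000, max_delay_ms=5000):
    """Return ("error", 0), ("delay", ms) or ("ok", 0) for one request."""
    roll = rng.random()
    if roll < error_rate:
        return ("error", 0)
    if roll < error_rate + latency_rate:
        return ("delay", rng.randint(min_delay_ms, max_delay_ms))
    return ("ok", 0)

rng = random.Random(42)
outcomes = [inject_chaos(rng) for _ in range(10000)]
errors = sum(1 for kind, _ in outcomes if kind == "error")
print(errors / len(outcomes))  # close to the 0.1 error rate
```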
HELIX_CHAOS_ENABLED=true
HELIX_CHAOS_ERROR_RATE=0.1      # 10% of requests fail
HELIX_CHAOS_LATENCY_RATE=0.15   # 15% of requests delayed
HELIX_CHAOS_MIN_DELAY_MS=2000   # Min delay: 2s
HELIX_CHAOS_MAX_DELAY_MS=5000   # Max delay: 5s

Generate OpenAPI specification from your traffic:
# Make some requests first
curl http://localhost:8080/api/users
curl http://localhost:8080/api/products
# Generate spec
curl "http://localhost:8080/api/generate-spec?limit=50"

Monitor all requests in real-time:
http://localhost:8080/dashboard
Features:
- Live request logging
- Method, path, status, latency
- Request/response inspection
- Clear logs
# Quick health check
curl http://localhost:8080/health
# Detailed status
curl http://localhost:8080/status

Check your current configuration anytime:
helix status

This shows:
- Active AI provider
- Redis connection status
- Server port
- API keys (masked)
- Model being used
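The masked-key display can be illustrated with a small helper. This sketch assumes only the last four characters stay visible; it is not Helix's actual masking code:

```python
# Illustrative sketch (not Helix's actual code): mask an API key for
# display, keeping only the last four characters visible.

def mask_key(key: str, visible: int = 4) -> str:
    if len(key) <= visible:
        return "*" * len(key)
    return "*" * (len(key) - visible) + key[-visible:]

print(mask_key("sk-or-v1-abcdef123456"))  # ends in 3456, rest masked
```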
# Check if setup completed
helix status
# Re-run setup if needed
helix init
# Or start Redis manually
docker run -d -p 6379:6379 redis:7-alpine

# Check current configuration
helix status
# Reconfigure provider
helix config
# Select: Change AI Provider
# Or fallback to demo mode
helix config
# Select: demo

# Reset everything
helix config
# Select: Reset Configuration
# Then reconfigure
helix init

# Reinstall Helix
pip install -e .
# Verify installation
helix --help

# Use different port
helix start --port 3000
# Or kill existing process
lsof -ti:8080 | xargs kill -9

helix/
├── app/
│ ├── cli.py # CLI commands (helix init/start/status)
│ ├── cliAssets/ # CLI resources (logo, prompts)
│ ├── routes/
│ │ ├── requestbased/catch_all.py # Dynamic mock handler
│ │ └── ui/ # Web interface
│ ├── services/
│ │ ├── ai/
│ │ │ ├── providers/ # AI provider implementations
│ │ │ │ ├── demo.py # Template-based (default)
│ │ │ │ ├── deepseek.py # DeepSeek via OpenRouter
│ │ │ │ ├── groq.py # Groq inference
│ │ │ │ └── ollama.py # Local Ollama
│ │ │ └── manager.py # Provider manager
│ │ ├── cache.py # Redis caching
│ │ ├── context.py # Session management
│ │ └── logger.py # Request logging
│ └── main.py # FastAPI application
├── assets/AI/MOCKPILOT_SYSTEM.md # AI system prompt (schema rules)
├── templates/ # HTML templates
├── pyproject.toml # CLI setup configuration
├── docker-compose.yml # Container orchestration
└── .env # Configuration (created by helix init)
- Python: 3.11+
- Redis: 7.0+ (via Docker or local)
- Docker: Optional but recommended
- Ollama: Optional (for local AI)
# Clone repository
git clone https://github.com/ashfromsky/helix.git
cd helix
# Install in editable mode with dev dependencies
pip install -e .
pip install -r requirements-dev.txt
# Run tests
pytest tests/ -v
# Format code
black app/
isort app/
# Type checking
mypy app/

The CLI is built with Typer and Rich. Main commands are in app/cli.py:
- helix init: Setup wizard (init() function)
- helix start: Server launcher (start() function)
- helix status: Configuration viewer (status() function)
- helix config: Configuration manager (config() function)
Contributions welcome! Please read CONTRIBUTING.md first.
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request
GNU Affero General Public License v3.0 (AGPL-3.0)
This project is free and open-source software. See LICENSE for details.
Key points:
- ✓ Free to use, modify, and distribute
- ✓ Must disclose source code
- ✓ Network use = distribution (AGPL requirement)
- ✓ Same license for derivatives
- GitHub: https://github.com/ashfromsky/helix
- Issues: https://github.com/ashfromsky/helix/issues
- Discussions: https://github.com/ashfromsky/helix/discussions
Schema-safe mocking for serious development.
Built with ❤️ for developers who want to focus on features, not infrastructure.
