# Pandora

A local, self-evolving autonomous AI inspired by Overlord.

An intelligent agent that progressively evolves, is aware of its hardware, can request upgrades, and scales its autonomy under human supervision. Not a chatbot: an agent with identity, voice, persistent memory, and self-improvement capabilities.
- Quick Start
- Prerequisites
- Installation
- Personalization
- Screenshots
- Architecture
- Stack
- Documentation
- Changelog
- Contributing
- License
## Quick Start

```bash
# 1. Open terminal and go to the project folder
cd C:\Users\YourUser\source\repos\overlord   # your path here

# 2. Activate the virtual environment
.venv\Scripts\activate        # Windows PowerShell
source .venv/bin/activate     # Linux/Mac

# 3. Make sure Ollama is running (in a separate terminal if needed)
ollama serve                  # skip if already running as a service

# 4. Launch Pandora
python -m interfaces.cli.terminal_ui
```

You MUST be inside the project folder with the virtual environment activated; otherwise Python won't find Pandora's modules.

First time? Follow Prerequisites and Installation first.
| Command | What it does |
|---|---|
| Type any text | Chat with Pandora |
| `/help` | Show all commands |
| `/status` | System status (CPU, RAM, GPU, disk) |
| `/info` | Agent info (model, memory, voice, hardware, evolution) |
| `/hw` | Hardware snapshot (Rich tables: System, GPU, Inference, Bottlenecks) |
| `/monitor` | Live hardware dashboard (2 s refresh, Ctrl+C to exit) |
| `/voice` | Toggle voice mode (mic + text concurrent) |
| `/say` | Speak the last response aloud |
| `/evo` | Evolution engine status |
| `/evolve` | Force an evolution cycle |
| `/benchmark` | Run the benchmark suite |
| `/history` | Show evolution history |
| `/upgrades` | Show pending hardware upgrade requests |
| `/autonomy` | Show/change autonomy level (0-5) |
| `/kill` | Activate the emergency kill switch |
| `/reset` | Reset the kill switch |
| `/clear` | Clear conversation history |
| `/quit` | Exit |
| Input | Tool | What happens |
|---|---|---|
| `read file pyproject.toml` | FileManager | Reads and displays file content |
| `list files in core` | FileManager | Lists directory contents |
| `generate a fibonacci function in Python` | CodeWriter | Generates code with a specialized model |
| `execute print('hello')` | CodeExecutor | Runs the code in a Docker sandbox |
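The routing idea behind this table can be sketched with a naive keyword dispatcher. Pandora's real dispatcher is LLM-driven; the function below and its rules are purely illustrative and not taken from the codebase:

```python
# Hypothetical sketch of request-to-tool routing, matching the table above.
# The actual Pandora dispatcher is LLM-driven; these rules are illustrative.

def route(user_input: str) -> str:
    """Pick a tool name for a raw user request (naive keyword match)."""
    text = user_input.lower()
    if text.startswith(("read file", "list files")):
        return "FileManager"
    if text.startswith("execute"):
        return "CodeExecutor"   # would run inside a Docker sandbox
    if "generate" in text and ("function" in text or "code" in text):
        return "CodeWriter"
    return "LLM"                # plain chat falls through to the main brain

print(route("read file pyproject.toml"))  # FileManager
```

A real agent replaces the keyword rules with a model call, but the shape — classify the request, then dispatch to a tool — stays the same.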
## Prerequisites

### Python 3.11+

Download from python.org. During installation on Windows, make sure to check "Add Python to PATH".

Verify:

```bash
python --version   # should show 3.11 or higher
```

### Ollama

Ollama runs LLM models locally. Download from ollama.com.
After installing, download the models Pandora uses:

```bash
# Start Ollama (if not running as a service)
ollama serve

# Primary brain (~8 GB) — main model for thinking and conversation
ollama pull qwen3:14b

# Lightweight brain (~4.5 GB) — for simple and fast tasks
ollama pull qwen3:8b

# Code brain (~8 GB) — for writing and analyzing code
ollama pull qwen2.5-coder:14b

# Reasoning brain (~5 GB) — for complex problem solving
ollama pull deepseek-r1:8b
```

Total download: ~25.5 GB. At minimum, only `qwen3:14b` is required; the others enhance specific capabilities.

Verify Ollama is running:

```bash
ollama list   # should show your downloaded models
```

### Docker (optional)

Required only for the CodeExecutor tool (sandboxed code execution). Without Docker, all other features work normally.
Download from docker.com. On Windows, Docker Desktop uses WSL2 internally.

Verify:

```bash
docker --version   # should show the Docker version
```

### Git

Download from git-scm.com if not already installed.
### Hardware

| Component | Minimum | Recommended |
|---|---|---|
| GPU VRAM | 8 GB | 16 GB+ |
| RAM | 16 GB | 64 GB |
| Disk | 20 GB free | SSD |

Supported GPUs for Ollama: NVIDIA (CUDA), AMD (ROCm/DirectML), Apple Silicon (Metal), or CPU-only. GPU hardware monitoring (Phase 5) requires NVIDIA.
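A quick way to check the "20 GB free" disk minimum before downloading models, using only the Python standard library (Pandora's own monitoring uses psutil and pynvml for richer metrics; this standalone snippet is just a convenience):

```python
import shutil

# Standalone check of the disk minimum from the table above.
# (Pandora itself uses psutil/pynvml for full hardware metrics.)

def disk_ok(path: str = ".", min_free_gb: float = 20.0) -> bool:
    """Return True if `path` has at least `min_free_gb` gigabytes free."""
    free_gb = shutil.disk_usage(path).free / 1024**3
    return free_gb >= min_free_gb

print("disk:", "OK" if disk_ok() else "low")
```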
## Installation

### 1. Clone the repository

```bash
git clone https://github.com/SoftDryzz/project-pandora.git
cd project-pandora
```

### 2. Create and activate a virtual environment

```bash
# Create
python -m venv .venv

# Activate (Windows PowerShell)
.venv\Scripts\activate

# Activate (Windows CMD)
.venv\Scripts\activate.bat

# Activate (Linux/Mac)
source .venv/bin/activate
```

You should see (.venv) in your terminal prompt.
### 3. Install dependencies

```bash
# Base installation (text-only mode)
pip install -e ".[dev]"

# With voice support (STT + TTS + VAD)
pip install -e ".[dev,voice]"

# NVIDIA GPU only: full GPU metrics (fan speed, power draw via pynvml)
# Skip this if you DON'T have an NVIDIA GPU
pip install -e ".[hardware-gpu]"
```

Required for the WebScraper tool (headless web browsing):

```bash
playwright install chromium
```
### 4. Create your .env file

```bash
# Windows PowerShell
copy .env.example .env

# Linux/Mac
cp .env.example .env
```

The default .env works out of the box. See Personalization for customization options.
### 5. Verify the installation

```bash
# Make sure Ollama is running
ollama serve

# Quick test (in another terminal, with venv activated)
python -c "from core.agent import OverlordAgent; print('OK')"
```

If it prints OK, go to Quick Start to launch Pandora.
## Personalization

Edit your .env file to customize how Pandora interacts with you:
| Variable | Default | Description |
|---|---|---|
| `OVERLORD_MASTER_TITLE` | `Maestro` | How Pandora addresses you |
| `OVERLORD_LANGUAGE` | `auto` | Language: `auto`, `es`, or `en` |
| `OVERLORD_AUTONOMY_LEVEL` | `1` | Autonomy level (0-5) |
| `OLLAMA_MODEL_PRIMARY` | `qwen3:14b` | Main LLM brain |
| `OLLAMA_MODEL_CODE` | `qwen2.5-coder:14b` | Code generation model |
| `VOICE_ENABLED` | `false` | Enable the voice pipeline |
| `VOICE_WHISPER_MODEL` | `large-v3` | Whisper model (`large-v3`, `medium`, `small`) |
| `VOICE_TTS_ENGINE_EN` | `kokoro` | TTS engine for English |
| `VOICE_TTS_ENGINE_ES` | `piper` | TTS engine for Spanish |
| `HARDWARE_ENABLED` | `true` | Enable hardware monitoring |
| `HARDWARE_GPU_VRAM_WARNING_THRESHOLD` | `96` | VRAM % that triggers a warning |
| `HARDWARE_MONITOR_INTERVAL_SECONDS` | `30.0` | Background check interval (seconds) |
| `EVOLUTION_ENABLED` | `true` | Enable the evolution engine |
| `EVOLUTION_CHECK_INTERVAL` | `50` | Interactions between evolution checks |
| `EVOLUTION_BENCHMARK_INTERVAL` | `100` | Interactions between benchmarks |
Examples for `OVERLORD_MASTER_TITLE`:

```bash
OVERLORD_MASTER_TITLE=Maestro
# OVERLORD_MASTER_TITLE=Supreme Being
# OVERLORD_MASTER_TITLE=Boss
# OVERLORD_MASTER_TITLE=My Lord
```

## Architecture

```
+------------------------------------------+
|                CLI (Rich)                |
|       Text + Voice concurrent I/O        |
|    /hw /monitor /evo /benchmark /kill    |
+------------------------------------------+
|              Overlord Agent              |
|     Planner -> Executor -> Evaluator     |
+----------+----------+--------------------+
| Identity | Memory   | Tools              |
| (Persona)|Short/Long| Files | Code       |
|          | Episodic | Docker | Web       |
+----------+----------+--------------------+
| Hardware | Voice    | Evolution | Perms  |
| GPU/CPU  | STT+TTS  | Benchmark | (0-5)  |
| Profiler | VAD+Mic  | Optimizer | Kill   |
| Alerts   |          | Upgrades  | Switch |
+----------+----------+-----------+--------+
|               LLM (Ollama)               |
|       ChromaDB | SQLite | Whisper        |
+------------------------------------------+
```
## Stack

- Core: Python (async)
- LLM: Ollama (Qwen3 14B + Qwen2.5-Coder 14B)
- Memory: ChromaDB (vector) + SQLite (structured)
- Tools: Docker (sandbox), Playwright (web scraping)
- Voice: faster-whisper (STT) + Kokoro/Piper (TTS) + Silero (VAD)
- Hardware: pynvml/nvidia-smi (GPU) + psutil (CPU/RAM/disk) + inference profiler
- CLI: Rich (text + voice concurrent input, live hardware dashboard)
- Config: Pydantic Settings
- Logging: Loguru
- Coming soon: Rust (performance), React (Web UI)
## Documentation

All technical docs live in docs/.
| Document | Description |
|---|---|
| Developer Setup | Prerequisites, installation, environment |
| Architecture | System diagram, module relationships, data flow |
| Database ER | Entity-relationship diagram for all SQLite tables |
| Sequences | Sequence diagrams — agent loop, evolution, voice, kill switch |
| State Machines | Autonomy levels, evolution status, kill switch states |
| Phase Reviews | Post-implementation reviews for each completed phase |
## Contributing

We welcome contributions! Here's how to get started:
- Read the Contributing Guide for rules and workflow
- Set up your environment with the Developer Setup Guide
- Pick an issue or open a new one — bug reports and feature requests use templates
Current priorities: Phase 7 (Web UI), test coverage, bug fixes. See the Changelog for completed work.
## License

Licensed under the Apache License 2.0. Copyright 2026 Cristo Fernández Tomé (SoftDryzz).
