Project Pandora

Pandora — A local, self-evolving autonomous AI inspired by Overlord.

An intelligent agent that progressively evolves, is aware of its hardware, can request upgrades, and scales its autonomy under human supervision. Not a chatbot: an agent with identity, voice, persistent memory, and self-improvement capabilities.

Leer en Español

Quick Start

```shell
# 1. Open a terminal and go to the project folder
cd C:\Users\YourUser\source\repos\overlord    # your path here

# 2. Activate the virtual environment
.venv\Scripts\activate          # Windows PowerShell
source .venv/bin/activate       # Linux/Mac

# 3. Make sure Ollama is running (in a separate terminal if needed)
ollama serve                    # skip if already running as a service

# 4. Launch Pandora
python -m interfaces.cli.terminal_ui
```

You MUST be inside the project folder with the virtual environment activated. Otherwise Python won't find Pandora's modules.

First time? Follow Prerequisites and Installation first.

Commands

| Command | What it does |
|---------|--------------|
| (any text) | Chat with Pandora |
| `/help` | Show all commands |
| `/status` | System status (CPU, RAM, GPU, disk) |
| `/info` | Agent info (model, memory, voice, hardware, evolution) |
| `/hw` | Hardware snapshot (Rich tables: System, GPU, Inference, Bottlenecks) |
| `/monitor` | Live hardware dashboard (2s refresh, Ctrl+C to exit) |
| `/voice` | Toggle voice mode (mic + text concurrent) |
| `/say` | Speak the last response aloud |
| `/evo` | Evolution engine status |
| `/evolve` | Force an evolution cycle |
| `/benchmark` | Run the benchmark suite |
| `/history` | Show evolution history |
| `/upgrades` | Show pending hardware upgrade requests |
| `/autonomy` | Show/change autonomy level (0-5) |
| `/kill` | Activate the emergency kill switch |
| `/reset` | Reset the kill switch |
| `/clear` | Clear conversation history |
| `/quit` | Exit |
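The table above implies a simple dispatch pattern: slash-prefixed input goes to a command handler, everything else goes to the LLM. A minimal sketch of that shape (handler names and messages are hypothetical; the real CLI lives in `interfaces/cli/terminal_ui.py` and supports many more commands):

```python
# Hypothetical command dispatcher: maps "/command" strings to handlers.
# Anything that is not a slash command is treated as chat input.
from typing import Callable, Dict


def handle_help() -> str:
    return "Available commands: /help /status /quit ..."


def handle_quit() -> str:
    return "Goodbye."


COMMANDS: Dict[str, Callable[[], str]] = {
    "/help": handle_help,
    "/quit": handle_quit,
}


def dispatch(line: str) -> str:
    line = line.strip()
    if line.startswith("/"):
        handler = COMMANDS.get(line.split()[0])
        if handler is None:
            return f"Unknown command: {line}"
        return handler()
    return f"[chat] {line}"  # would be sent to the LLM in the real agent
```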

Tool Examples

| Input | Tool | What happens |
|-------|------|--------------|
| `read file pyproject.toml` | FileManager | Reads and displays file content |
| `list files in core` | FileManager | Lists directory contents |
| `generate a fibonacci function in Python` | CodeWriter | Generates code with a specialized model |
| `execute print('hello')` | CodeExecutor | Runs code in a Docker sandbox |
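Conceptually, the routing in the table is a mapping from the user's phrasing to a tool name. A toy keyword-based sketch, purely for illustration (the tool names match the table, but the matching rules here are invented, not Pandora's actual selection logic, which is presumably LLM-driven):

```python
# Toy tool router: picks a tool name from keywords in the user's input.
# Illustrative only; the real agent's tool selection is more sophisticated.
def route_tool(text: str) -> str:
    lowered = text.lower()
    if lowered.startswith("execute"):
        return "CodeExecutor"
    if "read file" in lowered or "list files" in lowered:
        return "FileManager"
    if "generate" in lowered and "function" in lowered:
        return "CodeWriter"
    return "chat"  # no tool: plain conversation
```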

Prerequisites

1. Python 3.11+

Download from python.org. During installation on Windows, make sure to check "Add Python to PATH".

Verify:

```shell
python --version   # Should show 3.11 or higher
```

2. Ollama (Required)

Ollama runs LLM models locally. Download from ollama.com.

After installing, download the models Pandora uses:

```shell
# Start Ollama (if not running as a service)
ollama serve

# Primary brain (~8 GB) — main model for thinking and conversation
ollama pull qwen3:14b

# Lightweight brain (~4.5 GB) — for simple and fast tasks
ollama pull qwen3:8b

# Code brain (~8 GB) — for writing and analyzing code
ollama pull qwen2.5-coder:14b

# Reasoning brain (~5 GB) — for complex problem solving
ollama pull deepseek-r1:8b
```

Total download: ~25.5 GB. At minimum, only qwen3:14b is required. The others enhance specific capabilities.

Verify Ollama is running:

```shell
ollama list   # Should show your downloaded models
```
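You can also check this programmatically: Ollama's HTTP API serves `GET /api/tags` on `localhost:11434`, returning JSON like `{"models": [{"name": "qwen3:14b", ...}]}`. A small helper that checks such a payload against the models this README asks for (the `REQUIRED` list mirrors the minimum above):

```python
# Check an Ollama /api/tags payload for required models.
# Fetch the payload with: curl http://localhost:11434/api/tags
from typing import List


def missing_models(tags_payload: dict, required: List[str]) -> List[str]:
    """Return the required model names not present in the payload."""
    installed = {m["name"] for m in tags_payload.get("models", [])}
    return [name for name in required if name not in installed]


REQUIRED = ["qwen3:14b"]  # minimum; add the optional brains as needed
```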

3. Docker Desktop (Optional)

Required only for the CodeExecutor tool (sandboxed code execution). Without Docker, all other features work normally.

Download from docker.com. On Windows, Docker Desktop uses WSL2 internally.

Verify:

```shell
docker --version   # Should show Docker version
```

4. Git

Download from git-scm.com if not already installed.

Recommended Hardware

| Component | Minimum | Recommended |
|-----------|---------|-------------|
| GPU VRAM | 8 GB | 16 GB+ |
| RAM | 16 GB | 64 GB |
| Disk | 20 GB free | SSD |

Supported GPUs for Ollama: NVIDIA (CUDA), AMD (ROCm/DirectML), Apple Silicon (Metal), or CPU-only. GPU hardware monitoring (Phase 5) requires NVIDIA.
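If you are unsure which backend applies to your machine, a rough stdlib-only probe can help. This is purely illustrative (it checks for vendor CLIs on `PATH` plus the platform string; it is not how Ollama or Pandora actually detect hardware):

```python
# Rough accelerator guess based on PATH and platform. Illustrative only.
import platform
import shutil


def guess_accelerator() -> str:
    """Guess which Ollama backend is likely available on this machine."""
    if shutil.which("nvidia-smi"):
        return "nvidia-cuda"
    if shutil.which("rocm-smi"):
        return "amd-rocm"
    if platform.system() == "Darwin" and platform.machine() == "arm64":
        return "apple-metal"
    return "cpu-only"
```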


Installation

Step 1: Clone the repository

```shell
git clone https://github.com/SoftDryzz/project-pandora.git
cd project-pandora
```

Step 2: Create and activate virtual environment

```shell
# Create
python -m venv .venv

# Activate (Windows PowerShell)
.venv\Scripts\activate

# Activate (Windows CMD)
.venv\Scripts\activate.bat

# Activate (Linux/Mac)
source .venv/bin/activate
```

You should see (.venv) in your terminal prompt.

Step 3: Install dependencies

```shell
# Base installation (text-only mode)
pip install -e ".[dev]"

# With voice support (STT + TTS + VAD)
pip install -e ".[dev,voice]"

# NVIDIA GPU only: full GPU metrics (fan speed, power draw via pynvml)
# Skip this if you DON'T have an NVIDIA GPU
pip install -e ".[hardware-gpu]"
```

Step 4: Install Playwright browser

Required for the WebScraper tool (headless web browsing):

```shell
playwright install chromium
```

Step 5: Configure environment

```shell
# Windows PowerShell
copy .env.example .env

# Linux/Mac
cp .env.example .env
```

The default .env works out of the box. See Personalization for customization options.

Step 6: Verify installation

```shell
# Make sure Ollama is running
ollama serve

# Quick test (in another terminal, with venv activated)
python -c "from core.agent import OverlordAgent; print('OK')"
```

If it prints OK, go to Quick Start to launch Pandora.


Personalization

Edit your .env file to customize how Pandora interacts with you:

| Variable | Default | Description |
|----------|---------|-------------|
| OVERLORD_MASTER_TITLE | Maestro | How Pandora addresses you |
| OVERLORD_LANGUAGE | auto | Language: auto, es, or en |
| OVERLORD_AUTONOMY_LEVEL | 1 | Autonomy level (0-5) |
| OLLAMA_MODEL_PRIMARY | qwen3:14b | Main LLM brain |
| OLLAMA_MODEL_CODE | qwen2.5-coder:14b | Code generation model |
| VOICE_ENABLED | false | Enable voice pipeline |
| VOICE_WHISPER_MODEL | large-v3 | Whisper model (large-v3, medium, small) |
| VOICE_TTS_ENGINE_EN | kokoro | TTS engine for English |
| VOICE_TTS_ENGINE_ES | piper | TTS engine for Spanish |
| HARDWARE_ENABLED | true | Enable hardware monitoring |
| HARDWARE_GPU_VRAM_WARNING_THRESHOLD | 96 | VRAM % to trigger warning |
| HARDWARE_MONITOR_INTERVAL_SECONDS | 30.0 | Background check interval (seconds) |
| EVOLUTION_ENABLED | true | Enable evolution engine |
| EVOLUTION_CHECK_INTERVAL | 50 | Interactions between evolution checks |
| EVOLUTION_BENCHMARK_INTERVAL | 100 | Interactions between benchmarks |

```shell
# Examples for OVERLORD_MASTER_TITLE:
OVERLORD_MASTER_TITLE=Maestro
# OVERLORD_MASTER_TITLE=Supreme Being
# OVERLORD_MASTER_TITLE=Boss
# OVERLORD_MASTER_TITLE=My Lord
```
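The project loads these settings via Pydantic Settings, but the file format itself is just `KEY=VALUE` lines with `#` comments. A minimal stdlib-only parser, for illustration only (this is not Pandora's actual settings-loading code):

```python
# Minimal .env parser: KEY=VALUE per line, '#' starts a comment, blanks ignored.
# Illustrative only; the project loads settings via Pydantic Settings.
def parse_env(text: str) -> dict:
    values = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values
```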

Screenshots

(Screenshots; images not reproduced here.)

  • CLI Startup
  • Conversation
  • System Status (`/status`)
  • Agent Info (`/info`)
  • Persistent Memory Across Sessions
  • Tools in Action (file reading + code generation)
  • Hardware Snapshot (`/hw`)
  • Live Hardware Monitor (`/monitor`)
  • Evolution Engine (`/evo`)
  • Benchmark Suite (`/benchmark`)
  • Autonomy System (`/autonomy`)


Architecture

```
+-------------------------------------------+
|             CLI (Rich)                    |
|       Text + Voice concurrent I/O         |
|  /hw  /monitor  /evo  /benchmark  /kill   |
+-------------------------------------------+
|           Overlord Agent                  |
|    Planner -> Executor -> Evaluator       |
+----------+-----------+--------------------+
| Identity |  Memory   |      Tools         |
| (Persona)| Short/Long| Files | Code       |
|          | Episodic  | Docker | Web       |
+----------+-----------+--------------------+
| Hardware |  Voice    | Evolution | Perms  |
| GPU/CPU  | STT+TTS   | Benchmark | (0-5)  |
| Profiler | VAD+Mic   | Optimizer | Kill   |
| Alerts   |           | Upgrades  | Switch |
+----------+-----------+-----------+--------+
|              LLM (Ollama)                 |
|      ChromaDB  |  SQLite  |  Whisper      |
+-------------------------------------------+
```
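The Planner -> Executor -> Evaluator loop at the heart of the diagram can be sketched as follows. All three steps are stubs here; the real OverlordAgent is async and LLM-backed, so this only shows the control flow, not the actual implementation:

```python
# Control-flow sketch of a plan/execute/evaluate loop, with stub steps.
# The real agent is async and LLM-driven; only the shape is shown here.
def plan(goal: str) -> list:
    """Stub planner: break the goal into steps."""
    return [f"step 1 for {goal}", f"step 2 for {goal}"]


def execute(step: str) -> str:
    """Stub executor: pretend to run a step (tools, code, web...)."""
    return f"done: {step}"


def evaluate(results: list) -> bool:
    """Stub evaluator: judge whether the goal is complete."""
    return all(r.startswith("done") for r in results)


def run_agent(goal: str, max_rounds: int = 3) -> bool:
    for _ in range(max_rounds):
        results = [execute(step) for step in plan(goal)]
        if evaluate(results):
            return True  # goal judged complete
    return False  # gave up after max_rounds
```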

Stack

  • Core: Python (async)
  • LLM: Ollama (Qwen3 14B + Qwen2.5-Coder 14B)
  • Memory: ChromaDB (vector) + SQLite (structured)
  • Tools: Docker (sandbox), Playwright (web scraping)
  • Voice: faster-whisper (STT) + Kokoro/Piper (TTS) + Silero (VAD)
  • Hardware: pynvml/nvidia-smi (GPU) + psutil (CPU/RAM/disk) + inference profiler
  • CLI: Rich (text + voice concurrent input, live hardware dashboard)
  • Config: Pydantic Settings
  • Logging: Loguru
  • Coming soon: Rust (performance), React (Web UI)

Documentation

All technical docs live in docs/.

| Document | Description |
|----------|-------------|
| Developer Setup | Prerequisites, installation, environment |
| Architecture | System diagram, module relationships, data flow |
| Database ER | Entity-relationship diagram for all SQLite tables |
| Sequences | Sequence diagrams: agent loop, evolution, voice, kill switch |
| State Machines | Autonomy levels, evolution status, kill switch states |
| Phase Reviews | Post-implementation reviews for each completed phase |

Contributing

We welcome contributions! Here's how to get started:

  1. Read the Contributing Guide for rules and workflow
  2. Set up your environment with the Developer Setup Guide
  3. Pick an issue or open a new one — bug reports and feature requests use templates

Current priorities: Phase 7 (Web UI), test coverage, bug fixes. See the Changelog for completed work.

License

Licensed under the Apache License 2.0. Copyright 2026 Cristo Fernández Tomé (SoftDryzz).
