
The Local-First Doctor for your PostgreSQL.



🎯 All-In-One Docker Image

OptiSchema Slim now ships as a single ~390MB Docker image containing both the UI and API.

No separate frontend container. No Node.js runtime. One pull, one run.

Why OptiSchema Slim?

  • Privacy First: Your schema and queries never leave localhost.
  • Simulation Engine: Verify index suggestions with HypoPG before touching production.
  • Model Agnostic: Use Ollama (SQLCoder) locally, or bring your own OpenAI/Gemini/DeepSeek keys.
  • All-In-One: Single container, single port (8080), built-in static UI serving.

🚀 Quickstart Guide

Option 1: The 10-Second Demo (Recommended)

Run an auto-configured environment with a "bad database" to see OptiSchema's analysis in action.

# Clone the repo
git clone https://github.com/arnab2001/Optischema-Slim.git
cd Optischema-Slim

# Start the Demo
docker compose -f docker-compose.demo.yml up --build
  • URL: http://localhost:8452
  • Scenario: Pre-loaded with slow queries and missing indexes.

Option 2: Run with Your Database

Pull the official image and connect to your local Postgres.

# pulls ~390MB image
docker pull arnab2001/optischema-slim:latest

# run on port 8080
docker run -p 8080:8080 arnab2001/optischema-slim:latest
  • URL: http://localhost:8080
  • Setup: Enter your postgres:// connection string in the UI.
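If you would rather skip the connection screen, the same container can be pointed at your database via the DATABASE_URL environment variable (described under Configuration below). A minimal sketch; the connection string and host.docker.internal hostname are illustrative and depend on where your Postgres runs:

```shell
# Run against an existing Postgres, pre-filling the connection.
# Substitute your own user, password, host, and database name.
docker run -p 8080:8080 \
  -e DATABASE_URL="postgresql://user:pass@host.docker.internal:5432/mydb" \
  arnab2001/optischema-slim:latest
```

On Linux, you may need to add `--add-host=host.docker.internal:host-gateway` for the container to reach a Postgres on the host.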

🛠️ Development (Local Source Build)

# Backend (FastAPI)
make dev-backend

# Frontend (Next.js)
make dev-frontend

Features

  • Real-time Monitoring: Heatmaps and latency tracking via pg_stat_statements.
  • AI Analysis: Context-aware suggestions using your schema and table stats.
  • Cost Verification: Compare EXPLAIN costs (Original vs. Virtual Index) side-by-side.
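Real-time monitoring depends on the pg_stat_statements extension being enabled in your Postgres. A sketch of the one-time setup, assuming superuser access (note: the timing column is total_exec_time on Postgres 13+, total_time on earlier versions):

```sql
-- pg_stat_statements must be preloaded; add to postgresql.conf
-- (server restart required):
--   shared_preload_libraries = 'pg_stat_statements'
-- Then create the extension in the target database:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Sanity check: the view should exist and accumulate query stats.
SELECT query, calls, total_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 5;
```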

Architecture

The system follows a Collect → Analyze → Simulate pipeline designed for safety and performance:

  • Frontend: Vite + React with Tailwind UI & Recharts for real-time visualization.
  • Backend: FastAPI paired with AsyncPG for high-performance, non-blocking I/O.
  • Core Engine:
    • Metric Collection: Ingests pg_stat_statements to fingerprint and rank queries by Total Time and IO.
    • Context Engine: Enriches queries with live schema definitions, indices, and table statistics (tuple counts, bloat).
    • AI Analysis: Router sends sanitized context to the configured LLM (Local/Cloud) to synthesize optimization strategies.
    • HypoPG Simulation: Creates virtual indexes in a transient session to verify EXPLAIN cost reductions before suggesting them.
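The HypoPG simulation step can be reproduced by hand in psql. A minimal sketch of what a "what-if" index check looks like; the table and column names are illustrative:

```sql
-- Create a virtual (hypothetical) index: no disk writes,
-- visible only to the current session.
SELECT * FROM hypopg_create_index('CREATE INDEX ON orders (customer_id)');

-- EXPLAIN now considers the virtual index, so the estimated cost
-- reflects what the real index would achieve.
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- Discard all hypothetical indexes for this session.
SELECT hypopg_reset();
```

Because the index is never materialized, this costs nothing on production: only the planner's cost model is consulted.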

Configuration / LLM Setup


Environment Setup

  1. Create a .env file from the example:

    cp .env.example .env
  2. To use Ollama:

    • Install Ollama and pull the model: ollama pull sqlcoder:7b
    • Set LLM_PROVIDER=ollama in your .env.
    • Ensure OptiSchema can reach your host (typically http://host.docker.internal:11434).
  3. To use Cloud Models:

    • Add your OPENAI_API_KEY, GEMINI_API_KEY, or DEEPSEEK_API_KEY to the .env file.
    • Set LLM_PROVIDER accordingly (e.g., openai, gemini).
  4. Auto-Connection (Optional):

    • Set DATABASE_URL=postgresql://user:pass@host:5432/db to skip the connection screen on startup.
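Putting the options above together, a .env for a local-first setup might look like this (all values are illustrative placeholders):

```shell
# .env -- example local-first configuration

# Use a local Ollama model (run `ollama pull sqlcoder:7b` first).
LLM_PROVIDER=ollama

# Or switch to a cloud model instead:
# LLM_PROVIDER=openai
# OPENAI_API_KEY=sk-...

# Optional: skip the connection screen on startup.
DATABASE_URL=postgresql://user:pass@host:5432/db
```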

Roadmap / Status

  • ✅ Core Metrics
  • ✅ HypoPG Integration
  • 🚧 Health Scan (In Progress)
  • 🚧 History Persistence

Status: We are currently optimizing the Docker image for the public release. Join the Waitlist to get the v1 image.

Contributing

PRs are welcome! Please check out the backend/services directory to see how the different components are organized.


Built with ❤️ for the PostgreSQL Community
