High-Efficiency Customer Service Assistant
Simba is an open-source customer service assistant built for teams who need full control over their AI. Unlike black-box solutions, Simba is designed from the ground up around evaluation and customization, so you can measure performance, iterate fast, and tailor the assistant to your exact needs.
| Problem | Simba's Solution |
|---|---|
| Can't measure AI quality | Built-in evaluation framework with retrieval and generation metrics |
| Generic responses | Fully customizable RAG pipeline with your own data |
| Hard to integrate | Drop-in npm package for instant website integration |
| Vendor lock-in | Open-source, self-hosted, swap any component |
- Evaluation-First Design - Track retrieval accuracy, generation quality, and latency out of the box. Know exactly how your assistant performs.
- Fully Customizable - Swap embedding models, LLMs, vector stores, chunking strategies, and rerankers. Your pipeline, your rules.
- npm Package for Easy Integration - Add a customer service chat widget to your website with a single npm install.
- Modern Dashboard - Manage documents, monitor conversations, and analyze performance from a clean UI.
- Production-Ready - Streaming responses, async processing, and scalable architecture.
The fastest way to get Simba running:

```bash
git clone https://github.com/GitHamza0206/simba.git
cd simba
```

Create a `.env` file:

```bash
OPENAI_API_KEY=your_openai_api_key
```

Run with Docker:

```bash
# CPU
DEVICE=cpu make build && make up

# NVIDIA GPU
DEVICE=cuda make build && make up
```

Visit http://localhost:3000 to access the dashboard.
If you prefer installing without Docker:

```bash
pip install simba-core
simba server
simba front
```

If you're using Claude Code, you can set up the project with a single command:

```bash
/setup --all
```

This will automatically install all dependencies (Python, frontend, npm package) and start the infrastructure services. Other options:

```bash
/setup --backend   # Python dependencies only
/setup --frontend  # Next.js + simba-chat only
/setup --services  # Start Docker infrastructure only
```

Add Simba to your website with the npm package:
```bash
npm install simba-chat-widget
```

```tsx
import { SimbaChat } from 'simba-chat-widget';

function App() {
  return (
    <SimbaChat
      apiUrl="https://your-simba-instance.com"
      theme="light"
    />
  );
}
```

That's it. Your customers now have an AI assistant powered by your knowledge base.
Simba tracks what matters:
- Retrieval Metrics - Precision, recall, relevance scores
- Generation Metrics - Faithfulness, answer relevancy, latency
- Conversation Analytics - User satisfaction, resolution rates
Use these metrics to continuously improve your assistant's performance.
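As a rough illustration of what a retrieval metric measures (a generic sketch, not Simba's actual evaluation API — `retrieval_metrics` is a hypothetical helper), precision and recall over retrieved document IDs can be computed like this:

```python
# Sketch of retrieval precision/recall over document IDs.
# The function name is illustrative, not part of simba-core's API.

def retrieval_metrics(retrieved: list[str], relevant: set[str]) -> dict[str, float]:
    """Compare the IDs the retriever returned against a labeled relevant set."""
    hits = sum(1 for doc_id in retrieved if doc_id in relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return {"precision": precision, "recall": recall}

# Retriever returned 4 docs; 2 of them appear in the labeled set of 3.
scores = retrieval_metrics(["d1", "d2", "d5", "d9"], {"d1", "d2", "d3"})
print(scores)  # precision 0.5, recall ~0.67
```

Running the same fixed query set after every pipeline change gives you a concrete before/after comparison instead of a gut feeling.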
```
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│  Your Website   │────▶│   Simba API     │────▶│  Vector Store   │
│  (npm package)  │     │   (FastAPI)     │     │ (Qdrant/FAISS)  │
└─────────────────┘     └─────────────────┘     └─────────────────┘
                               │                        ▲
                               │                        │
                               ▼                        │
                        ┌─────────────────┐     ┌───────┴─────────┐
                        │      LLM        │     │    Celery       │
                        │ (OpenAI/Local)  │     │  (Ingestion)    │
                        └─────────────────┘     └─────────────────┘
                                                        │
                                                        ▼
                                                ┌─────────────────┐
                                                │     Redis       │
                                                │  (Task Queue)   │
                                                └─────────────────┘
```
| Component | Options |
|---|---|
| Vector Store | Qdrant, FAISS, Chroma |
| Embeddings | OpenAI, HuggingFace, Cohere |
| LLM | OpenAI, Anthropic, Local models |
| Reranker | Cohere, ColBERT, Cross-encoder |
| Parser | Docling, Unstructured, PyMuPDF |
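One way components stay swappable is behind a common interface. This is a generic sketch using `typing.Protocol` — the class and method names are illustrative, not Simba's actual plugin mechanism:

```python
# Generic sketch of a swappable embedding interface via structural typing.
# Names here (Embedder, HashEmbedder, index) are hypothetical examples.
from typing import Protocol

class Embedder(Protocol):
    def embed(self, text: str) -> list[float]: ...

class HashEmbedder:
    """Tiny deterministic stand-in you could replace with an OpenAI,
    HuggingFace, or Cohere client exposing the same .embed() method."""
    def __init__(self, dim: int = 8):
        self.dim = dim

    def embed(self, text: str) -> list[float]:
        vec = [0.0] * self.dim
        for token in text.lower().split():
            vec[hash(token) % self.dim] += 1.0
        return vec

def index(embedder: Embedder, texts: list[str]) -> list[list[float]]:
    """Any object with a matching .embed() works -- that's the swap point."""
    return [embedder.embed(t) for t in texts]

vectors = index(HashEmbedder(), ["hello world", "refund policy"])
print(len(vectors), len(vectors[0]))  # 2 vectors of dimension 8
```

The same pattern applies to the other rows in the table: as long as a vector store, LLM, or reranker implements the expected interface, it can be dropped in without touching the rest of the pipeline.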
- Core evaluation framework
- npm chat widget
- Streaming responses
- Multi-tenant support
- Advanced analytics dashboard
- Webhook integrations
- Fine-tuning pipeline
We welcome contributions! Fork the repo, create a branch, and submit a PR.
- Open an issue on GitHub
- Contact: zeroualihamza0206@gmail.com
