Seamless Conversations Across Multiple LLMs
Ever started a chat with one LLM, but then realized another model (say Mistral, DeepSeek, or an open-source gem) would give you better answers? Right now, switching models means copy-pasting conversations, losing context, and wasting time.
That’s broken.
MultiChat is an AI orchestration layer that lets you:
- ✅ Start a chat with one model and seamlessly continue with another.
- ✅ Run parallel responses from multiple LLMs and compare answers.
- ✅ Build pipelines where different models handle different steps.
- ✅ Support both closed-source APIs and open-source LLMs.
- ✅ Keep your conversations synced, sessionized, and reusable.
Think of it as your “multi-agent switchboard” for the AI world.
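The "continue anywhere" idea boils down to one shared session history that any model can pick up. Here is an illustrative sketch of that hand-off — `Session` and `call_model` are stand-ins for this example, not MultiChat's actual API:

```python
# Illustrative sketch of cross-model hand-off: one shared history that any
# model can continue. `Session` and `call_model` are stand-ins, not real API names.
def call_model(model, history):
    # A real connector would send `history` to the model's API here.
    return f"[{model}] saw {len(history)} message(s)"

class Session:
    def __init__(self, session_id):
        self.session_id = session_id
        self.history = []  # full transcript, shared across models

    def ask(self, model, message):
        self.history.append({"role": "user", "content": message})
        reply = call_model(model, self.history)
        self.history.append({"role": "assistant", "model": model, "content": reply})
        return reply

s = Session("abc123")
s.ask("mistral", "Summarize photosynthesis")
print(s.ask("deepseek", "Now simplify it for a child"))  # → [deepseek] saw 3 message(s)
```

Because the transcript lives in the session rather than in any one provider, switching models is just a different argument to `ask` — no copy-pasting required.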
- 🔄 Continue Anywhere: Pick up your conversation in DeepSeek, Mistral, Qwen, or any LLM.
- 🧩 Extensible Connectors: Add new LLMs with a simple plug-and-play connector.
- ⚡ Parallel Mode: Ask multiple models the same thing, compare instantly.
- 📡 Pipelines: Chain models (e.g., Mistral for analysis → DeepSeek for summarization → Qwen for creativity).
- 🛠 Developer Friendly: Simple API endpoints to integrate into any app.
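The pipeline feature can be pictured as a simple fold: each model's output becomes the next model's input. This is a hypothetical sketch — `invoke` is a placeholder for a real connector call, and the step names are illustrative:

```python
# Hypothetical pipeline runner: each model's output feeds the next step.
# `invoke` is a stand-in; a real version would call a MultiChat connector.
def invoke(model, prompt):
    return f"<{model}: {prompt}>"  # placeholder reply

def run_pipeline(steps, text):
    for model, task in steps:
        text = invoke(model, f"{task}: {text}")
    return text

result = run_pipeline(
    [("mistral", "analyze"), ("deepseek", "summarize"), ("qwen", "add flair")],
    "raw meeting notes",
)
print(result)  # nested: qwen's prompt wraps deepseek's output, which wraps mistral's
```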
MultiChat is powered by:
- Backend: FastAPI (Python)
- Frontend: React (Next.js/Vite)
- LLM Connectors: DeepSeek, Mistral, Qwen, and more
- Middleware: Session tracking, orchestration logic
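One way the "plug-and-play connector" idea could look in Python is a small registry keyed by model name. The class and method names below are assumptions for illustration, not the interface actually used in the repo:

```python
from abc import ABC, abstractmethod

# Sketch of a plug-and-play connector registry; names are assumptions,
# not the actual interface in the MultiChat repo.
class LLMConnector(ABC):
    name: str

    @abstractmethod
    def chat(self, history: list) -> str:
        """Send the shared conversation history to the model; return its reply."""

class EchoConnector(LLMConnector):
    """Toy connector for local testing: repeats the last user message."""
    name = "echo"

    def chat(self, history):
        return history[-1]["content"]

CONNECTORS = {}

def register(conn):
    """Adding a new LLM means registering one more connector object."""
    CONNECTORS[conn.name] = conn

register(EchoConnector())
print(CONNECTORS["echo"].chat([{"role": "user", "content": "ping"}]))  # → ping
```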
To run MultiChat locally, you'll need:

- Python 3.10+
- Node.js 18+
- API keys for your chosen LLMs (if using hosted ones)
Clone the repository:

```bash
git clone https://github.com/manitejagaddam/Multi-Chat.git
cd MultiChat
```

Start the backend:

```bash
cd server
pip install -r requirements.txt
uvicorn main:app --reload
```

Start the frontend:

```bash
# open another terminal
cd client
npm install
npm run dev
```

(Use `npm run start` instead of `npm run dev` if you prefer the client's start script.)

Once running:

- Visit http://localhost:3000 for the UI.
- Use the /chat and /chatall endpoints for backend orchestration.
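From Python, calling the parallel endpoint could look like the sketch below. The base URL and port are assumptions about your local deployment; the payload shape follows the example request shown in this README:

```python
import json
from urllib import request

BASE_URL = "http://localhost:8000"  # assumed backend address; adjust as needed

def chatall_payload(session_id, message, models):
    """Build the request body for the /chatall endpoint."""
    return {"session_id": session_id, "message": message, "models": models}

def post_chatall(payload):
    """POST the payload to /chatall (requires the FastAPI server to be running)."""
    req = request.Request(
        f"{BASE_URL}/chatall",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

payload = chatall_payload(
    "abc123",
    "Explain quantum computing in simple terms",
    ["deepseek", "mistral", "qwen"],
)
# post_chatall(payload)  # uncomment with the server running
```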
Example (parallel chat request):

```json
{
  "session_id": "abc123",
  "message": "Explain quantum computing in simple terms",
  "models": ["deepseek", "mistral", "qwen"]
}
```

We welcome contributions from the community to make MultiChat even better!
- Fork the repository and clone it locally.
- Create a new branch for your feature or bugfix: `git checkout -b feature/your-feature-name`
- Write clean, documented code and include tests if applicable.
- Commit your changes with a descriptive message: `git commit -m "Add feature: your feature description"`
- Push your branch to your forked repo: `git push origin feature/your-feature-name`
- Open a Pull Request (PR) describing your changes clearly.
- Ensure your PR passes all checks and reviews before merging.
We expect contributors to follow our Code of Conduct to ensure a welcoming environment for everyone.
This project is licensed under the Apache 2.0 License – see the LICENSE file for details.
MultiChat is just the beginning. Our goal is to build a universal orchestration platform where multiple LLMs and agents collaborate — not compete — to give you the best possible intelligence layer for apps, research, and daily life.