This is a simple web application that allows you to run multiple Large Language Models (LLMs) in parallel.
- Simultaneous use of multiple chat interfaces
- Option to select different LLM models for each chat
- Ability to synchronize input across chats
- Toggle between dark and light modes
Built with:

- React
- Next.js
- TypeScript
- Tailwind CSS
- shadcn/ui components
- Vercel AI SDK (per-chat model selection is sketched below)
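To make the per-chat model selection concrete, here is a minimal client-side sketch using the Vercel AI SDK's `useChat` hook (AI SDK 3/4-style API). The component names, the `/api/chat` route, the request body shape, and the model IDs are illustrative assumptions, not code taken from this repository:

```tsx
'use client';

// `useChat` is part of the Vercel AI SDK; older SDK versions export it
// from 'ai/react' instead of '@ai-sdk/react'.
import { useChat } from '@ai-sdk/react';

// One chat panel with its own message history, keyed by model. The chosen
// model is sent to the backend in the request body (assumed field: `model`).
function ChatPanel({ model }: { model: string }) {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    id: model,
    api: '/api/chat',
    body: { model },
  });

  return (
    <div className="rounded border p-4">
      <h2 className="font-bold">{model}</h2>
      {messages.map((m) => (
        <p key={m.id}>
          <strong>{m.role}:</strong> {m.content}
        </p>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} placeholder="Ask this model..." />
      </form>
    </div>
  );
}

// Rendering several panels side by side gives the parallel chats; the model
// IDs below are placeholders.
export default function MultiChat() {
  return (
    <div className="grid grid-cols-2 gap-4">
      <ChatPanel model="gpt-4o" />
      <ChatPanel model="claude-3-5-sonnet-20240620" />
    </div>
  );
}
```

Synchronized input could then be layered on top by lifting the input state into the parent and submitting the same prompt to every panel.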
To run the application locally:

- Clone the repository: `git clone https://github.com/komzweb/multi-llm-runner.git`
- Install dependencies: `cd multi-llm-runner && npm install`
- Start the development server: `npm run dev`
- Open http://localhost:3000 in your browser.
Before running the application, set up the following environment variables:
- `OPENAI_API_KEY`: Your OpenAI API key
- `ANTHROPIC_API_KEY`: Your Anthropic API key
- `GOOGLE_GEMINI_API_KEY`: Your Google Gemini API key
- `GROQ_API_KEY`: Your Groq API key
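In a Next.js project these would typically go in a `.env.local` file at the project root (the exact file this repository reads from is an assumption; the values below are placeholders):

```bash
# .env.local (placeholder values; do not commit real keys)
OPENAI_API_KEY=your-openai-api-key
ANTHROPIC_API_KEY=your-anthropic-api-key
GOOGLE_GEMINI_API_KEY=your-google-gemini-api-key
GROQ_API_KEY=your-groq-api-key
```

The keys are consumed server-side. As a rough sketch of how the per-chat model choice could be mapped to a provider, assuming an AI SDK 4.x-style route handler at `app/api/chat/route.ts` (the route path, the `model` field, and the mapping are assumptions, not code from this repository):

```ts
// Hypothetical app/api/chat/route.ts (Next.js App Router).
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai'; // reads OPENAI_API_KEY from the environment
import { anthropic } from '@ai-sdk/anthropic'; // reads ANTHROPIC_API_KEY from the environment

export async function POST(req: Request) {
  const { messages, model } = await req.json();

  // Pick a provider from the requested model name; Gemini and Groq could be
  // wired up analogously with their own provider packages.
  const llm = model?.startsWith('claude') ? anthropic(model) : openai(model);

  const result = streamText({ model: llm, messages });
  return result.toDataStreamResponse();
}
```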
The current application lacks the following features:
- Multimodal support
- Markdown to HTML conversion
- Saving conversation history to a database
- Adjustable message input area
- And more
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.