Orchestrator-LLM is a proof-of-concept tool for orchestrating the execution of multiple Large Language Models (LLMs) in parallel. Our goal is to provide a simple, user-friendly interface that lets users draw on several AI models at once without needing to understand each model's individual strengths and weaknesses.
- Parallel execution of multiple LLMs
- Intelligent task distribution based on model strengths
- User-friendly interface for prompt input and result visualization
- Seamless integration of various LLM APIs
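At its core, the orchestration pattern is a fan-out/gather over several model backends. The sketch below illustrates that idea with `asyncio`; the function names, model list, and response shape are illustrative assumptions for this README, not the project's actual API.

```python
import asyncio

# Hypothetical model caller -- the real project would wrap actual LLM APIs.
async def query_model(model_name, prompt):
    """Simulate an async call to a single LLM backend."""
    await asyncio.sleep(0.1)  # stand-in for network latency
    return {"model": model_name, "response": f"{model_name} answer to {prompt!r}"}

async def orchestrate(prompt, models):
    """Fan the prompt out to every model concurrently and gather the results."""
    tasks = [query_model(name, prompt) for name in models]
    return await asyncio.gather(*tasks)

if __name__ == "__main__":
    results = asyncio.run(orchestrate("Summarize this README", ["model-a", "model-b", "model-c"]))
    for r in results:
        print(r["model"], "->", r["response"])
```

Because the models run concurrently rather than sequentially, total latency is roughly that of the slowest backend instead of the sum of all of them.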
These instructions will help you set up and run Orchestrator-LLM on your local machine for development and testing purposes.
Before you begin, ensure you have the following installed:
- Python 3.7+
- pip (Python package manager)
- Clone the repository:

  ```bash
  git clone https://github.com/yourusername/Orchestrator-LLM.git
  cd Orchestrator-LLM
  ```
- Install the required dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- Set up environment variables (if necessary):

  ```bash
  cp .env.example .env
  # Edit the .env file with your API keys and other configuration
  ```
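If the app reads its keys from `.env` at runtime, this is commonly done with the `python-dotenv` package. Below is a minimal loading sketch; whether this project uses `python-dotenv`, and the variable name `OPENAI_API_KEY`, are assumptions -- use whatever keys your `.env.example` actually defines.

```python
import os

from dotenv import load_dotenv  # provided by the python-dotenv package

# Read key/value pairs from the .env file into the process environment.
load_dotenv()

# "OPENAI_API_KEY" is an illustrative name, not necessarily what this repo uses.
api_key = os.getenv("OPENAI_API_KEY")
if api_key is None:
    raise RuntimeError("Missing API key -- did you copy and edit .env?")
```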
To run the Streamlit app locally:
```bash
streamlit run app.py
```
The app should now be running at http://localhost:8501.
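For orientation, a pared-down `app.py` following the prompt-in, results-out flow might look like the sketch below. It is an illustration of the pattern, not the repository's actual code; `orchestrate` here is a synchronous stand-in for the real orchestration logic.

```python
import streamlit as st

# Stand-in for the real orchestration call; replace with the project's logic.
def orchestrate(prompt):
    return {"model-a": f"Answer A to {prompt!r}", "model-b": f"Answer B to {prompt!r}"}

st.title("Orchestrator-LLM")

prompt = st.text_input("Enter your prompt")
if st.button("Submit") and prompt:
    results = orchestrate(prompt)
    # Show each model's answer side by side, one column per model.
    for column, (model, answer) in zip(st.columns(len(results)), results.items()):
        with column:
            st.subheader(model)
            st.write(answer)
```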
To run the app using Docker:
```bash
docker run -p 8501:8501 -d --name orchestrator-llm ghcr.io/HRA42/orchestrator-llm:latest
```
Note: The Docker image is coming soon; the repository will be updated with a Dockerfile and build instructions.
- Open the app in your web browser
- Enter your prompt in the text input field
- Click "Submit" to process your request
- View the orchestrated results from multiple LLMs