🧠🔀 Seamlessly manage multiple LLM clients to overcome API limitations. tShift_LLM automatically shifts between language models, ensuring uninterrupted AI responses and bypassing rate limits. Perfect for building robust, scalable AI applications.

tShift_LLM 🧠🔀


tShift_LLM is a robust and flexible LLM manager that seamlessly shifts between multiple language models to ensure uninterrupted AI-powered conversations. Say goodbye to API limitations and hello to consistent, reliable responses!

🌟 Key Features

  • 🔄 Automatic Failover: Shifts between models when errors occur, ensuring you always get a response.
  • 🔌 Multi-Provider Support: Works with various LLM providers (OpenAI, Anthropic, Cohere, etc.).
  • 🌊 Streaming & Non-Streaming: Supports both streaming and non-streaming completions.
  • 📊 Detailed Logging: Keeps track of all interactions for analysis and debugging.
  • ⚖️ Load Balancing: Uses round-robin selection to distribute requests across clients.
  • 🛡️ Rate Limit Protection: Helps bypass API limitations by switching to available clients.

🤔 Why tShift_LLM?

In the real world, LLM APIs often come with limitations:

  • ⏱️ Requests per minute
  • 🕰️ Requests per hour
  • 📅 Requests per day

These constraints can be a roadblock for applications requiring consistent access to LLM capabilities. tShift_LLM solves this by managing multiple LLM clients automatically, ensuring you get a successful response even if some clients fail due to these limitations.

🚀 Quick Start

Installation

pip install tshift-llm

# Or pin the latest release explicitly
pip install tshift-llm==0.2.2

Basic Usage

from tshift_llm import tShift_LLM, LiteLLMClient

# Set up your clients
clients = [
    LiteLLMClient(
        model="gpt-3.5-turbo", 
        api_key="your-openai-key-1",
        api_base="your-openai-url"
    ),
    LiteLLMClient("gpt-3.5-turbo", "your-openai-key-2"),
    LiteLLMClient("claude-2", "your-anthropic-key"),
    LiteLLMClient("command-nightly", "your-cohere-key")
]

# Initialize tShift_LLM
tshift_llm = tShift_LLM(clients)

# Make a completion request
response = tshift_llm.completion(
    messages=[{"role": "user", "content": "What's the meaning of life?"}]
)
print(response.choices[0].message.content)

🛠️ Advanced Usage

Streaming Completions

for chunk in tshift_llm.stream_completion(
    messages=[{"role": "user", "content": "Write a short story about AI."}]
):
    print(chunk.choices[0].delta.content or "", end="", flush=True)
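
If you also need the full response text after streaming finishes, you can accumulate the deltas as they arrive. This small helper assumes the OpenAI-style chunk shape used in the loop above:

```python
def collect_stream(chunks):
    """Print streamed deltas live and return the assembled full text."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content or ""  # final chunk may carry None
        print(delta, end="", flush=True)
        parts.append(delta)
    return "".join(parts)

# Usage: full_text = collect_stream(tshift_llm.stream_completion(messages=[...]))
```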

Custom Error Handling

try:
    response = tshift_llm.completion(messages=[...])
except Exception as e:
    print(f"All LLM clients failed. Last error: {str(e)}")
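
When every client is exhausted, retrying after a pause is often more useful than failing outright. A minimal wrapper with exponential backoff might look like this (generic Python, not part of tShift_LLM's API):

```python
import time

def completion_with_retry(call, attempts=3, base_delay=1.0):
    """Call `call()` with exponential backoff between failed attempts.

    `call` is any zero-argument callable, e.g.
    lambda: tshift_llm.completion(messages=[...]).
    """
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * 2 ** attempt)  # wait 1s, 2s, 4s, ...
```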

📊 Logging

tShift_LLM automatically logs all interactions. You can find the log file at tshift_llm.log in your working directory.
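
If the library logs through Python's standard logging module (an assumption, and the logger name "tshift_llm" here is a guess; check the source), you can raise verbosity or mirror records to the console like this:

```python
import logging

# Assumption: tShift_LLM uses the stdlib `logging` module with a
# logger named "tshift_llm"; verify the name against the source.
logger = logging.getLogger("tshift_llm")
logger.setLevel(logging.DEBUG)  # capture everything while debugging

# Mirror records to the console in addition to tshift_llm.log.
console = logging.StreamHandler()
console.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(console)
```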

🤝 Contributing

We welcome contributions! Please see our Contributing Guide for more details.

📜 License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgements

  • LiteLLM for providing a unified interface to various LLM providers.
  • All the amazing LLM providers out there pushing the boundaries of AI.

🌟 Star Us!

If you find tShift_LLM helpful, please consider giving us a star on GitHub. It helps us know that our work is valuable and encourages further development!


Made with ❤️ by Ryan Saleh

Got questions? Open an issue or reach out to us at hello@ryansaleh.com
