Corrected liteLLM model providers
With the default settings, clicking fast/powerful produced the following error:
500: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=OpenAI/gpt-4o-mini Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers
whyXVI authored Sep 9, 2024
1 parent 883003f commit c0e50de
Showing 1 changed file with 2 additions and 2 deletions.
src/backend/constants.py: 2 additions & 2 deletions
@@ -23,8 +23,8 @@ class ChatModel(str, Enum):


 model_mappings: dict[ChatModel, str] = {
-    ChatModel.GPT_4o: "gpt-4o",
-    ChatModel.GPT_4o_mini: "gpt-4o-mini",
+    ChatModel.GPT_4o: "openai/gpt-4o",
+    ChatModel.GPT_4o_mini: "openai/gpt-4o-mini",
     ChatModel.LLAMA_3_70B: "groq/llama-3.1-70b-versatile",
     ChatModel.LOCAL_LLAMA_3: "ollama_chat/llama3.1",
     ChatModel.LOCAL_GEMMA: "ollama_chat/gemma",
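The fix works because LiteLLM infers the provider from a "provider/model" prefix on the model string; a bare "gpt-4o-mini" gives it nothing to route on, hence the 500 error. A minimal self-contained sketch of the corrected mapping (the enum member values and the `provider_of` helper are illustrative assumptions, not code from the repository beyond the lines shown in the diff):

```python
from enum import Enum


class ChatModel(str, Enum):
    # Values are assumed for illustration; only the member names appear in the diff.
    GPT_4o = "gpt-4o"
    GPT_4o_mini = "gpt-4o-mini"


# Each value carries the "provider/" prefix LiteLLM needs for routing.
model_mappings: dict[ChatModel, str] = {
    ChatModel.GPT_4o: "openai/gpt-4o",
    ChatModel.GPT_4o_mini: "openai/gpt-4o-mini",
}


def provider_of(model_name: str) -> str:
    """Return the provider prefix LiteLLM would parse, or '' if absent."""
    prefix, sep, _ = model_name.partition("/")
    return prefix if sep else ""


for chat_model, litellm_name in model_mappings.items():
    # Every mapped name should now resolve to a provider.
    print(f"{chat_model.value} -> provider: {provider_of(litellm_name)}")
```

With the pre-fix value "gpt-4o-mini", `provider_of` returns an empty string, which mirrors the "LLM Provider NOT provided" failure the commit message describes.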
