Replies: 3 comments
- Hi @amirclam - can we hop on a call to debug this? Hoping to get steps to repro the issue. My cal for your convenience: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
- Did this get resolved?
- Any updates here?
- I am running LiteLLM behind a proxy server, using a local Text Generation Inference model. When running it with our https_proxy setting, every call gets stuck for 5 minutes, even though my model and Redis are inside my k8s cluster. What configuration am I missing?
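A common cause of this symptom (an assumption, not confirmed in this thread) is that setting https_proxy routes cluster-internal requests to the TGI model and Redis through the external proxy, which then hangs until a timeout. The usual fix is to also set NO_PROXY/no_proxy so in-cluster hosts bypass the proxy. A minimal sketch, with hypothetical hostnames and proxy address:

```shell
# Hypothetical corporate proxy; replace with your real proxy URL.
export HTTPS_PROXY=http://corp-proxy.example.com:3128
export https_proxy="$HTTPS_PROXY"

# Exclude cluster-internal traffic from the proxy.
# ".svc" and ".cluster.local" cover Kubernetes service DNS names;
# "tgi-service" and "redis-service" are placeholder service names.
export NO_PROXY=localhost,127.0.0.1,.svc,.cluster.local,tgi-service,redis-service
export no_proxy="$NO_PROXY"

echo "bypassing proxy for: $NO_PROXY"
```

Most HTTP clients (including Python's httpx and requests, which read proxy settings from the environment) honor both the upper- and lowercase variants, so setting both is the safe choice in a k8s Pod spec.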