Shell-GPT with local deepseek-coder-v2 working config #644
JohnRDOrazio started this conversation in Show and tell
After DeepSeek Coder was announced, promising amazing performance and quality in code generation, I was curious to try it out locally. I found a YouTube video that explained a little of how to install `deepseek-coder-v2` with `shell-gpt` and `ollama`/`litellm`. However, in the video the author mentioned that he was having trouble getting it to work correctly locally, and in the end he just resorted to using the platform API.
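For reference, the basic local setup looks roughly like the sketch below; the exact commands are my own assumption of the usual route (the blog post linked at the end has the full walkthrough), using ollama to serve the model and installing shell-gpt with its litellm extra:

```bash
# Pull the deepseek-coder-v2 model into the local ollama instance
ollama pull deepseek-coder-v2

# Start the ollama server if it is not already running as a system service
# (by default it listens on 127.0.0.1:11434)
ollama serve

# Install shell-gpt together with its litellm extra, needed for non-OpenAI backends
pip install "shell-gpt[litellm]"
```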
I have finally been able to do a bit more testing and have succeeded in getting a local `deepseek-coder-v2` to work with Shell-GPT. Here are the correct configurations; make sure to:
- set `DEFAULT_MODEL` to `ollama/deepseek-coder-v2`
- set `API_BASE_URL` to `http://127.0.0.1:11434` (or the IP:PORT your ollama instance is running on)
- set `USE_LITELLM` to `true`
- set the `OPENAI_API_KEY` environment variable
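With those settings applied, the relevant lines of the Shell-GPT config file (`~/.config/shell_gpt/.sgptrc` by default) should end up looking something like this; a minimal sketch showing only the keys mentioned above, with everything else left at its defaults:

```ini
DEFAULT_MODEL=ollama/deepseek-coder-v2
API_BASE_URL=http://127.0.0.1:11434
USE_LITELLM=true
```

`OPENAI_API_KEY` is supplied through the environment in this setup rather than stored in the file.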
You should now be able to run `sgpt hello` and get an answer from your local `deepseek-coder-v2`!
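As a quick smoke test, something along these lines should work. The placeholder key value is an assumption on my part: the local ollama backend does not validate it, but Shell-GPT still expects the variable to be set.

```bash
# Shell-GPT expects an OpenAI-style key to be present; ollama never checks its value
export OPENAI_API_KEY="sk-local-placeholder"

# Ask the local deepseek-coder-v2 model a question through Shell-GPT
sgpt hello
```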
https://www.johnromanodorazio.com/en/2024/11/how-to-install-shell-gpt-with-deepseek-coder-v2/