Env:
PyCharm 2024.3.2 (Community Edition)
Build #PC-243.23654.177, built on January 27, 2025
Windows 11.0
Non-Bundled Plugins:
de.liebki.MyOllamaEnhancer (0.1.3.4)
Ollama is running locally on the standard port 11434; the models I tried: qwen2.5-coder:32b, deepseek-coder-v2:16b
Problem:
Whichever enhancer option I use, with or without prompts, I always receive the same error in a modal window: "An error occurred while executing ollama." There is no additional information in the IDE's log.
Are the models you’re using (qwen2.5-coder:32b, deepseek-coder-v2:16b) ones that typically take a long time to generate a response? For example, if you try them in another application or via the command line, do they take a while before returning an answer? Reasoning-heavy or large models in general can have long initial response times, which might be causing the issue in PyCharm.
A 32b model is quite big for consumer hardware. It’s not impossible to run, but if the inference time is too long, either my code or the IDE could be terminating the job because of the model’s slow response.
Let me know if you’ve tested them outside of the IDE!
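One way to test this outside of the IDE is to time how long Ollama takes to stream back its first token. Below is a minimal sketch using only the Python standard library against Ollama's `/api/generate` streaming endpoint, assuming the default local URL from the report (`http://localhost:11434`); the model name is just whichever one you are debugging.

```python
# Sketch: measure Ollama's time-to-first-token outside the IDE.
import json
import time
import urllib.request


def time_to_first_token(model, prompt,
                        url="http://localhost:11434/api/generate"):
    """Return seconds until the first streamed NDJSON chunk arrives."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": True}
    ).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    start = time.monotonic()
    with urllib.request.urlopen(req) as resp:
        resp.readline()  # blocks until Ollama streams the first chunk
        return time.monotonic() - start


if __name__ == "__main__":
    # A model that loads or prefills slowly will show a large number here,
    # which is exactly the delay a client-side timeout would trip over.
    delay = time_to_first_token("qwen2.5-coder:32b", "hi")
    print(f"first token after {delay:.1f}s")
```

If this number is large while the plugin's internal timeout is short, that would explain the generic error with nothing in the IDE log.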
When I use these models via the command line or Open WebUI they are not super fast, but they start answering immediately. An RTX 4080 with 16 GB manages 32b models quite well. Nevertheless, your assumption is absolutely correct: I tested the plugin with the ultra-light llama3.2 model and it works.
Sorry for bothering you before testing it carefully.