
PyCharm 2024.3.2 (Community Edition): An error occured while executing ollama #5

Open
pavel-sg opened this issue Feb 7, 2025 · 3 comments

Comments

@pavel-sg

pavel-sg commented Feb 7, 2025

Env:
PyCharm 2024.3.2 (Community Edition)
Build #PC-243.23654.177, built on January 27, 2025
Windows 11.0
Non-Bundled Plugins:
de.liebki.MyOllamaEnhancer (0.1.3.4)
Ollama is running locally on the standard port 11434. The models I tried: qwen2.5-coder:32b, deepseek-coder-v2:16b

Problem:
Whatever enhancer option I use, with or without prompts, I always receive the same error in a modal window: "An error occured while executing ollama". There is no additional information in the IDE's log.

@liebki
Owner

liebki commented Feb 7, 2025

Hi @pavel-sg

Are the models you’re using (qwen2.5-coder:32b, deepseek-coder-v2:16b) ones that typically take a long time to generate a response? For example, if you try using them in another application or via the command line, do they take a while before returning an answer? Some reasoning-heavy models, or large models in general, can have long initial response times, which might be causing the issue in PyCharm.

A 32b model is quite big for consumer hardware. It's not impossible to run, but if the inference takes too long, either my code or the IDE could be terminating the job because of the model's slow response.

Let me know if you’ve tested them outside of the IDE!
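If it helps, here is a minimal sketch (not part of the plugin) for timing how long a model takes to return a complete, non-streaming answer. It assumes Ollama's standard `/api/generate` endpoint on `localhost:11434`; the model name and prompt are just placeholders.

```python
# Time a full, non-streaming response from a local Ollama instance.
import json
import time
import urllib.request

payload = json.dumps({
    "model": "qwen2.5-coder:32b",  # swap in the model you want to test
    "prompt": "Write a one-line docstring for a function that adds two numbers.",
    "stream": False,               # wait for the complete answer
}).encode("utf-8")

request = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

start = time.monotonic()
with urllib.request.urlopen(request, timeout=600) as response:
    body = json.loads(response.read())
elapsed = time.monotonic() - start

print(f"Total time: {elapsed:.1f}s")
print(body.get("response", "")[:200])
```

If that total time is longer than the plugin waits for a response, that would explain the error you're seeing.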

Greetings ☺️

@pavel-sg
Author

Hi @liebki,

When I use these models via the command line or Open WebUI they are not super fast, but they start answering immediately. An RTX 4080 with 16 GB manages quite well with 32b models. Nevertheless, your assumption is absolutely correct. I tested the plugin with the ultra-light model llama3.2 and it works.

Sorry for bothering you before I tested it carefully.

@liebki
Owner

liebki commented Feb 10, 2025

It's alright, I will nevertheless take a look; maybe I can do something about it.
