Hi,
I would love it if, instead of relying on llama cpp python, we could use any backend of our choosing — for instance LM Studio via its API at http://localhost:1234/v1/chat/completions . Could that be added as an option, instead of having to load/unload models within ComfyUI itself? See the Plush nodes for one implementation of this.
Thank you!
Thanks for the suggestion, I thought about this. Before llama cpp python, I was actually planning to do this with LM Studio, but it requires installing another program, so I chose llama cpp python instead. But yes, I can add it to my nodes. I already have OpenAI nodes, and they accept a custom URL (I use them with the DeepSeek API the same way). All I need to do is expose the custom URL in the UI. I might add vision capability to that as well.
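For anyone wanting to try this before it lands in the nodes: LM Studio's local server speaks the OpenAI-compatible chat completions format, so a request needs nothing beyond the Python standard library. A minimal sketch — the function names, default model string, and parameters here are illustrative assumptions, not taken from this node pack:

```python
import json
import urllib.request

def build_chat_payload(prompt, model="local-model"):
    # OpenAI-style chat completion body; LM Studio generally ignores the
    # model field and uses whichever model is loaded in the server.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def chat_completion(prompt, base_url="http://localhost:1234/v1"):
    # base_url is the custom URL a node could expose in its UI;
    # swapping it out is all that's needed to target another backend.
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_chat_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Calling `chat_completion("hello")` with LM Studio's server running should return the model's reply; pointing `base_url` at any other OpenAI-compatible endpoint (e.g. a hosted API) should work the same way, possibly with an added Authorization header.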