LM Studio Support #46
Open
As a fan of local models and this CLI, I added support for an application called LM Studio, which serves local endpoints compatible with OpenAI's API.
LinkedIn: https://www.linkedin.com/in/lrdoc/
Example of Addition
https://github.com/user-attachments/assets/f69ebb08-2c99-480b-99a1-cd65f26d9d67
GitHub-generated summary:
This pull request adds support for using LM Studio as a local LLM provider, alongside Ollama, and ensures users can select and use LM Studio models throughout the application. The changes update documentation, environment configuration, model selection logic, and provider handling to fully integrate LM Studio.
LM Studio integration:

- Added LM Studio configuration to `env.example` and documented its usage in `README.md` for local LLM support. [1] [2] [3]
- Added a `getLMStudioModels` utility to fetch available models from a running LM Studio instance, following the OpenAI-compatible API.

Provider and model handling:
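The PR's actual `getLMStudioModels` implementation isn't shown in this summary, but a minimal sketch might look like the following. It assumes LM Studio's default local server address (`http://localhost:1234`) and the OpenAI-compatible `GET /v1/models` response shape (`{ "data": [{ "id": ... }] }`); the function and parameter names here are illustrative, not the PR's.

```typescript
// Shape of an OpenAI-compatible /v1/models response (only the field we use).
interface ModelsResponse {
  data: { id: string }[];
}

// Pure helper: extract model ids from a /v1/models payload.
function parseModelIds(body: ModelsResponse): string[] {
  return body.data.map((m) => m.id);
}

// Hypothetical fetcher; the default base URL is LM Studio's documented
// local-server default, but error handling here is an assumption.
async function getLMStudioModels(
  baseUrl = "http://localhost:1234/v1"
): Promise<string[]> {
  const res = await fetch(`${baseUrl}/models`);
  if (!res.ok) {
    throw new Error(`LM Studio not reachable: HTTP ${res.status}`);
  }
  return parseModelIds((await res.json()) as ModelsResponse);
}
```

Keeping the response parsing separate from the network call makes the model-list logic easy to unit-test without a running LM Studio instance.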
Agent and tool execution updates:

- Updated `ToolExecutor` and the agent orchestrator to use the currently selected model (including local providers) for tool selection, instead of a hardcoded model. [1] [2] [3] [4]
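The "selected model instead of a hardcoded model" change could be sketched as below. The types, the `toolSelectionModel` helper, and Ollama's OpenAI-compatible endpoint default are assumptions for illustration; only the provider names and the general idea come from the PR description.

```typescript
// Hypothetical provider/model types; the PR's real types are not shown.
type Provider = "openai" | "ollama" | "lmstudio";

interface SelectedModel {
  provider: Provider;
  name: string;
}

// Resolve a base URL per provider. The local defaults (Ollama's
// OpenAI-compatible endpoint, LM Studio's default port) are assumptions.
function baseUrlFor(provider: Provider): string {
  switch (provider) {
    case "ollama":
      return "http://localhost:11434/v1";
    case "lmstudio":
      return "http://localhost:1234/v1";
    case "openai":
      return "https://api.openai.com/v1";
  }
}

// Tool selection reuses whatever model the user currently has selected,
// rather than a hardcoded model name.
function toolSelectionModel(selected: SelectedModel): { url: string; model: string } {
  return {
    url: `${baseUrlFor(selected.provider)}/chat/completions`,
    model: selected.name,
  };
}
```

Routing every provider through one OpenAI-compatible URL scheme is what lets the same tool-selection code serve cloud and local models alike.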