Conversation

@LRDOC commented Jan 10, 2026

As a fan of local models and this CLI, I added support for LM Studio, an application that serves local models through an OpenAI-compatible endpoint.

LinkedIn: https://www.linkedin.com/in/lrdoc/

Example of the addition:
https://github.com/user-attachments/assets/f69ebb08-2c99-480b-99a1-cd65f26d9d67

GitHub-generated summary:
This pull request adds support for using LM Studio as a local LLM provider, alongside Ollama, and ensures users can select and use LM Studio models throughout the application. The changes update documentation, environment configuration, model selection logic, and provider handling to fully integrate LM Studio.

LM Studio integration:

  • Added LM Studio configuration to env.example and documented its usage in README.md for local LLM support.
  • Implemented a getLMStudioModels utility to fetch available models from a running LM Studio instance via its OpenAI-compatible API (a sketch follows this list).
  • Updated the model selection UI and CLI logic to list, select, and handle LM Studio models the same way as Ollama models, including prefix handling and user instructions.
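
A rough sketch of what the getLMStudioModels utility might look like; the LMSTUDIO_BASE_URL variable name, return shape, and error handling are assumptions, not the PR's exact code. LM Studio's OpenAI-compatible server defaults to http://localhost:1234/v1.

```typescript
// Hypothetical sketch. LM Studio exposes an OpenAI-compatible API,
// so listing models is a plain GET on /models.
export async function getLMStudioModels(
  baseUrl: string = process.env.LMSTUDIO_BASE_URL ?? "http://localhost:1234/v1",
): Promise<string[]> {
  try {
    const res = await fetch(`${baseUrl}/models`);
    if (!res.ok) return [];
    const body = (await res.json()) as { data?: Array<{ id: string }> };
    // Return bare model ids; the caller adds the provider prefix when
    // presenting them alongside Ollama models in the selection UI.
    return (body.data ?? []).map((m) => m.id);
  } catch {
    // No reachable LM Studio instance: report no local models rather than crash.
    return [];
  }
}
```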

Provider and model handling:

  • Added LM Studio as a recognized provider in the environment and model logic, ensuring correct model prefixing, skipping the API key flow for local providers, and configuring the base URL (see the sketch below).
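
A minimal sketch of that provider handling, assuming a "lmstudio/" model prefix and an LMSTUDIO_BASE_URL variable; both names are illustrative, not the repository's actual identifiers.

```typescript
// Hypothetical provider resolution: strip the local-provider prefix,
// point at the local base URL, and skip the API key prompt.
const LMSTUDIO_PREFIX = "lmstudio/";

interface ResolvedProvider {
  provider: string;
  model: string;
  baseUrl?: string;
  requiresApiKey: boolean;
}

function resolveProvider(selected: string): ResolvedProvider {
  if (selected.startsWith(LMSTUDIO_PREFIX)) {
    return {
      provider: "lmstudio",
      model: selected.slice(LMSTUDIO_PREFIX.length),
      baseUrl: process.env.LMSTUDIO_BASE_URL ?? "http://localhost:1234/v1",
      requiresApiKey: false, // local provider: no API key flow
    };
  }
  // Other providers (Ollama, OpenAI, etc.) would be matched the same way.
  return { provider: "openai", model: selected, requiresApiKey: true };
}
```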

Agent and tool execution updates:

  • Modified the ToolExecutor and agent orchestrator to use the currently selected model (including local providers) for tool selection instead of a hardcoded model (see the sketch below).
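
A sketch of the shape of that change, with all names hypothetical; the PR only says tool selection now routes through the session's selected model rather than a hardcoded one.

```typescript
// Hypothetical sketch: the executor takes the selected model and a
// completion function instead of hardcoding a specific model id.
type CompletionFn = (model: string, prompt: string) => Promise<string>;

class ToolExecutor {
  constructor(
    private readonly selectedModel: string, // e.g. "lmstudio/<model-id>"
    private readonly complete: CompletionFn,
  ) {}

  async chooseTool(prompt: string): Promise<string> {
    // Before the change this would call something like
    // this.complete("some-hardcoded-model", prompt).
    return this.complete(this.selectedModel, prompt);
  }
}
```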

@virattt added the run-ci label Feb 6, 2026