Conversation

d0lphin commented on Jan 22, 2026

Summary

  • Adds LM Studio as a local LLM provider option alongside Ollama
  • LM Studio uses an OpenAI-compatible API, so it is integrated via ChatOpenAI with a custom base URL (see the sketch after this list)
  • Supports LM_STUDIO_BASE_URL env var (defaults to localhost:1234)
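
A minimal sketch of what the lmstudio provider factory could look like, assuming LangChain's ChatOpenAI from @langchain/openai; the function name, the dummy API key, and the "/v1" suffix on the default base URL are assumptions, not taken from the diff:

```ts
// Hypothetical shape of the provider factory in src/model/llm.ts.
import { ChatOpenAI } from "@langchain/openai";

// LM Studio speaks the OpenAI wire protocol, so the existing ChatOpenAI
// client can be reused by pointing it at the local server.
export function createLmStudioModel(modelId: string): ChatOpenAI {
  const baseURL =
    process.env.LM_STUDIO_BASE_URL ?? "http://localhost:1234/v1"; // assumed default

  return new ChatOpenAI({
    model: modelId,
    apiKey: "lm-studio", // LM Studio ignores the key, but the client requires one
    configuration: { baseURL },
  });
}
```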

Changes

  • src/model/llm.ts - Add lmstudio: provider factory
  • src/utils/lm-studio.ts - New utility to fetch available models from the LM Studio API (sketch below)
  • src/utils/env.ts - Add lmstudio to providers config
  • src/components/ModelSelector.tsx - Add LM Studio to UI
  • src/hooks/useModelSelection.ts - Handle LM Studio model fetching/selection
  • env.example - Add LM_STUDIO_BASE_URL
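
The model-fetching utility could be as small as the sketch below, assuming LM Studio's OpenAI-compatible GET /v1/models endpoint and the standard { data: [{ id }] } response shape; the exported name and the error handling are illustrative:

```ts
// Illustrative sketch of src/utils/lm-studio.ts.
interface OpenAIModelList {
  data: { id: string }[];
}

// List the IDs of models currently available in LM Studio via its
// OpenAI-compatible /v1/models endpoint (Node 18+ global fetch).
export async function fetchLmStudioModels(
  baseURL = process.env.LM_STUDIO_BASE_URL ?? "http://localhost:1234/v1",
): Promise<string[]> {
  const res = await fetch(`${baseURL}/models`);
  if (!res.ok) {
    throw new Error(`LM Studio returned ${res.status} from ${baseURL}/models`);
  }
  const body = (await res.json()) as OpenAIModelList;
  return body.data.map((m) => m.id);
}
```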

Test plan

  • Start LM Studio with a model loaded
  • Run Dexter and select LM Studio from provider list
  • Verify models are fetched and selectable
  • Verify chat works with selected model

LM Studio uses an OpenAI-compatible API, so this integrates it
via ChatOpenAI with a custom base URL, similar to the existing
Ollama integration:

- Add lmstudio provider to model selector and env config (env config sketch below)
- Add lm-studio.ts utility to fetch available models
- Support LM_STUDIO_BASE_URL env var (defaults to localhost:1234)
- Add CLAUDE.md to .gitignore
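
The env config side might look roughly like this; the providers object shape is guessed from the file names above and is not copied from the diff:

```ts
// Assumed shape of the providers config in src/utils/env.ts.
export const providers = {
  // ...existing providers such as ollama stay as they are...
  lmstudio: {
    label: "LM Studio",
    // Base URL of LM Studio's OpenAI-compatible server; "/v1" suffix assumed.
    baseURL: process.env.LM_STUDIO_BASE_URL ?? "http://localhost:1234/v1",
  },
} as const;
```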

virattt (Owner) commented on Jan 22, 2026

Hey @d0lphin - thanks for the PR.

What is the benefit of having LM Studio support since we already have Ollama?

Want to be mindful of bloating the options list.

d0lphin (Author) commented on Jan 22, 2026

LM Studio supports MLX, which is significantly faster than llama.cpp on Apple Silicon.

virattt added the run-ci (Runs CI) label on Feb 6, 2026