Add LM Studio Provider #988
base: main
Conversation
@ppranay20 is attempting to deploy a commit to the Inbox Zero OSS Program Team on Vercel. A member of the Team first needs to authorize it.
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

LM Studio is added as a new LLM provider: env vars for base URL and public model were introduced, provider and model constants were extended, selectModel now supports LM_STUDIO via an OpenAI-compatible client, a package dependency was added, and tests were updated to cover the LM_STUDIO path.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Caller as selectModel Caller
    participant selectModel as selectModel()
    participant validate as Validation
    participant client as createOpenAICompatible
    participant result as Return Result
    Caller->>selectModel: provider=LM_STUDIO, modelName, baseURL
    selectModel->>validate: ensure modelName & baseURL present
    alt missing required param
        validate-->>Caller: throw Error
    else
        selectModel->>client: createOpenAICompatible({ name, baseURL, supportsStructuredOutputs: true })
        client-->>selectModel: model instance
        selectModel->>result: { provider: LM_STUDIO, modelName, model, backupModel: null }
        result-->>Caller: return
    end
```
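Read as code, the flow above corresponds roughly to the sketch below. It is a minimal illustration assuming the `@ai-sdk/openai-compatible` call shape that the new test asserts; the helper name and return shape are assumptions, not the PR's actual implementation.

```ts
import { createOpenAICompatible } from "@ai-sdk/openai-compatible";

// Hypothetical helper illustrating the LM_STUDIO branch of selectModel.
function selectLmStudioModel(modelName: string | null, baseURL: string | undefined) {
  // Local providers have no API key, so a model name and base URL are required instead.
  if (!modelName || !baseURL) {
    throw new Error("LM Studio requires a model name and LM_STUDIO_BASE_URL");
  }

  // Create an OpenAI-compatible provider pointed at the local LM Studio server.
  const lmstudio = createOpenAICompatible({
    name: "lmstudio",
    baseURL,
    supportsStructuredOutputs: true,
  });

  return {
    provider: "lmstudio",
    modelName,
    model: lmstudio(modelName),
    backupModel: null,
  };
}
```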
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20–30 minutes
Actionable comments posted: 2
🧹 Nitpick comments (1)
apps/web/utils/llms/model.ts (1)
235-242: Consider extracting the LM Studio check logic to reduce duplication.

The same LM Studio check pattern appears in both `selectEconomyModel` and `selectChatModel`. While the duplication is minimal, extracting this into a helper function would improve maintainability.

Consider creating a helper function:

```ts
function isLocalProvider(provider: string): boolean {
  return provider === Provider.LM_STUDIO || provider === Provider.OLLAMA;
}
```

Then use it in both functions:

```diff
 function selectEconomyModel(userAi: UserAIFields): SelectModel {
   if (env.ECONOMY_LLM_PROVIDER && env.ECONOMY_LLM_MODEL) {
-    const isLMStudio = env.ECONOMY_LLM_PROVIDER === Provider.LM_STUDIO;
-    if (isLMStudio) {
+    if (isLocalProvider(env.ECONOMY_LLM_PROVIDER)) {
       return selectModel({
-        aiProvider: Provider.LM_STUDIO,
+        aiProvider: env.ECONOMY_LLM_PROVIDER,
         aiModel: env.ECONOMY_LLM_MODEL,
         aiApiKey: null,
       });
     }
```

Note: If you add Ollama back or add more local providers in the future, this pattern would be even more valuable.
Also applies to: 282-289
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- apps/web/.env.example (1 hunks)
- apps/web/env.ts (4 hunks)
- apps/web/package.json (1 hunks)
- apps/web/utils/llms/config.ts (2 hunks)
- apps/web/utils/llms/model.test.ts (4 hunks)
- apps/web/utils/llms/model.ts (4 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-07-19T15:06:43.730Z
Learnt from: garnertb
Repo: elie222/inbox-zero PR: 580
File: apps/web/.env.example:4-7
Timestamp: 2025-07-19T15:06:43.730Z
Learning: In apps/web/.env.example, boolean environment variables follow an unquoted convention (e.g., LOG_ZOD_ERRORS=true, AUTH_TRUST_HOST=true). Logical grouping of related variables (like auth variables together) is preferred over strict alphabetical ordering for better developer experience.
Applied to files:
apps/web/.env.example
🧬 Code graph analysis (3)
apps/web/utils/llms/config.ts (1)
- apps/web/env.ts (1)
  - env (17-250)

apps/web/utils/llms/model.test.ts (4)
- apps/web/utils/llms/types.ts (1)
  - UserAIFields (3-9)
- apps/web/utils/llms/config.ts (2)
  - Provider (7-16)
  - Model (18-37)
- apps/web/utils/llms/model.ts (1)
  - getModel (28-42)
- apps/web/env.ts (1)
  - env (17-250)

apps/web/utils/llms/model.ts (2)
- apps/web/utils/llms/config.ts (2)
  - Provider (7-16)
  - Model (18-37)
- apps/web/env.ts (1)
  - env (17-250)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Static Code Analysis Js
- GitHub Check: cubic · AI code reviewer
- GitHub Check: Jit Security
- GitHub Check: test
🔇 Additional comments (6)
apps/web/env.ts (1)
14-14: LGTM! Clean environment configuration for LM Studio.

The environment variable setup follows the established pattern for local providers (similar to Ollama). The configuration properly separates server-side (`LM_STUDIO_BASE_URL`) and client-side (`NEXT_PUBLIC_LM_STUDIO_MODEL`) variables, with appropriate runtime mappings.

Also applies to: 61-61, 182-182, 243-243
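For reference, the wiring described here might look roughly like the sketch below, assuming the `createEnv` pattern from `@t3-oss/env-nextjs` that env.ts appears to use; the exact schema and option names in the real file may differ.

```ts
import { createEnv } from "@t3-oss/env-nextjs";
import { z } from "zod";

export const env = createEnv({
  server: {
    // Server-only: where the local LM Studio server is reachable.
    LM_STUDIO_BASE_URL: z.string().optional(),
  },
  client: {
    // Exposed to the browser: which local model name to use.
    NEXT_PUBLIC_LM_STUDIO_MODEL: z.string().optional(),
  },
  runtimeEnv: {
    LM_STUDIO_BASE_URL: process.env.LM_STUDIO_BASE_URL,
    NEXT_PUBLIC_LM_STUDIO_MODEL: process.env.NEXT_PUBLIC_LM_STUDIO_MODEL,
  },
});
```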
apps/web/package.json (1)
23-23: LGTM! Dependency addition is appropriate.

The `@ai-sdk/openai-compatible` package at version ^1.0.27 is correctly added to support LM Studio integration. The version constraint is consistent with other AI SDK dependencies.

apps/web/.env.example (1)
42-43: LGTM! Example environment variables follow conventions.

The LM Studio configuration examples are correctly placed in the LLM config section and follow the established pattern of unquoted values for optional variables. The default values (localhost:1234 for base URL and qwen/qwen3-vl-4b for model) are sensible defaults for LM Studio users.
Based on learnings
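Based on the defaults mentioned above, the example entries presumably look something like the following; the exact path suffix of the base URL is not shown in this review, so treat these values as illustrative.

```
LM_STUDIO_BASE_URL=http://localhost:1234/v1
NEXT_PUBLIC_LM_STUDIO_MODEL=qwen/qwen3-vl-4b
```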
apps/web/utils/llms/model.test.ts (1)
6-6: LGTM! Test setup and mocking are correctly implemented.

The import and mock for `createOpenAICompatible` follow the same pattern as other provider mocks, and the environment variable mocks are properly configured.

Also applies to: 39-41, 64-65
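The mock pattern referred to here presumably looks something like the sketch below (assuming Vitest, which the existing provider mocks suggest); the exact mock body in the PR may differ.

```ts
import { vi } from "vitest";

// Mock the OpenAI-compatible factory so the test can assert on its arguments
// without network calls; the returned provider yields a stub model object.
vi.mock("@ai-sdk/openai-compatible", () => ({
  createOpenAICompatible: vi.fn(() => vi.fn(() => ({ stub: true }))),
}));
```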
apps/web/utils/llms/config.ts (1)
14-14: LGTM! Provider and model configuration follows the established pattern.

The LM Studio constants are correctly added to both `Provider` and `Model` exports, mirroring the implementation of other local providers like Ollama.

Note: LM Studio is not included in the `providerOptions` array (lines 39-50), which means it won't appear in UI provider dropdowns. This appears intentional and consistent with how local providers are handled, but confirm this aligns with your product requirements.

Also applies to: 35-35
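For context, the constants in question plausibly look like the sketch below. The Provider value comes from the diff later on this page; the import path, the `supportsOllama` definition, and the Model mapping to `NEXT_PUBLIC_LM_STUDIO_MODEL` are assumptions based on how the env vars are described.

```ts
import { env } from "@/env";

// Whether the Ollama local provider is enabled (mirrors the existing pattern).
const supportsOllama = !!env.NEXT_PUBLIC_OLLAMA_MODEL;

export const Provider = {
  // ...existing providers...
  LM_STUDIO: "lmstudio",
  ...(supportsOllama ? { OLLAMA: "ollama" } : {}),
};

export const Model = {
  // ...existing models...
  // Assumed mapping: the public env var names the local model LM Studio serves.
  LM_STUDIO: env.NEXT_PUBLIC_LM_STUDIO_MODEL,
  ...(supportsOllama ? { OLLAMA: env.NEXT_PUBLIC_OLLAMA_MODEL } : {}),
};
```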
apps/web/utils/llms/model.ts (1)
9-9: LGTM! Import added correctly.
| it("should configure LM Studio model correctly", () => { | ||
| const userAi: UserAIFields = { | ||
| aiApiKey: "user-api-key", | ||
| aiProvider: Provider.LM_STUDIO, | ||
| aiModel: Model.LM_STUDIO!, | ||
| }; | ||
|
|
||
| const result = getModel(userAi); | ||
|
|
||
| expect(result.provider).toBe(Provider.LM_STUDIO); | ||
| expect(result.modelName).toBe(Model.LM_STUDIO); | ||
|
|
||
| expect(createOpenAICompatible).toHaveBeenCalledWith({ | ||
| name: Provider.LM_STUDIO, | ||
| baseURL: env.LM_STUDIO_BASE_URL!, | ||
| supportsStructuredOutputs: true, | ||
| }); | ||
|
|
||
| expect(result.model).toBeDefined(); | ||
| }); |
🧩 Analysis chain
Verify that LM Studio supports structured outputs.
The test correctly validates that createOpenAICompatible is called with supportsStructuredOutputs: true (line 189). However, not all OpenAI-compatible APIs support structured outputs.
Please verify using LM Studio documentation or by searching the web whether LM Studio's OpenAI-compatible API supports the structured outputs feature.
🌐 Web query:
Does LM Studio OpenAI-compatible API support structured outputs?
💡 Result:
Yes. LM Studio's OpenAI-compatible API supports Structured Outputs: you can pass response_format: { type: "json_schema", json_schema: ... } to /v1/chat/completions and the model will return JSON conforming to the schema (output appears as a JSON string in choices[0].message.content). Note: not all small models (<7B) support structured output — check the model card. [1][2]
Sources:
- LM Studio docs — Structured Output. [1]
- OpenAI — Introducing Structured Outputs in the API. [2]
Update test to account for model size limitations on structured outputs support.
LM Studio's OpenAI-compatible API does support Structured Outputs. However, not all small models (<7B) support structured output—check the model card.
The test hardcodes supportsStructuredOutputs: true without validating that Model.LM_STUDIO actually supports this feature. If a smaller model is used, this could cause runtime failures when structured outputs are requested. Either validate the model size before enabling this flag, or document that LM Studio configuration requires a model >=7B.
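To make the capability concrete, a request like the following exercises the structured output behavior described in the query result. It is illustrative only: the base URL, model name, and schema are placeholders, not values from this PR.

```ts
// Illustrative only: base URL, model name, and schema are placeholders.
async function demoStructuredOutput() {
  const res = await fetch("http://localhost:1234/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "qwen/qwen3-vl-4b",
      messages: [{ role: "user", content: "Give me a one-word greeting as JSON" }],
      response_format: {
        type: "json_schema",
        json_schema: {
          name: "greeting",
          schema: {
            type: "object",
            properties: { message: { type: "string" } },
            required: ["message"],
          },
        },
      },
    }),
  });

  const data = await res.json();
  // The structured output arrives as a JSON string in the message content.
  return JSON.parse(data.choices[0].message.content);
}
```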
1 issue found across 6 files
Prompt for AI agents (all 1 issues)
Understand the root cause of the following 1 issues and fix them.
<file name="apps/web/utils/llms/config.ts">
<violation number="1" location="apps/web/utils/llms/config.ts:14">
Adding `Provider.LM_STUDIO` without also surfacing it in `providerOptions` leaves LM Studio impossible to select, so the new provider is effectively unusable.</violation>
</file>
Reply to cubic to teach it or ask questions. Re-run a review with @cubic-dev-ai review this PR
| GROQ: "groq", | ||
| OPENROUTER: "openrouter", | ||
| AI_GATEWAY: "aigateway", | ||
| LM_STUDIO: "lmstudio", |
Adding Provider.LM_STUDIO without also surfacing it in providerOptions leaves LM Studio impossible to select, so the new provider is effectively unusable.
Prompt for AI agents
Address the following comment on apps/web/utils/llms/config.ts at line 14:

<comment>Adding `Provider.LM_STUDIO` without also surfacing it in `providerOptions` leaves LM Studio impossible to select, so the new provider is effectively unusable.</comment>

<file context>

```diff
@@ -11,6 +11,7 @@ export const Provider = {
   GROQ: "groq",
   OPENROUTER: "openrouter",
   AI_GATEWAY: "aigateway",
+  LM_STUDIO: "lmstudio",
   ...(supportsOllama ? { OLLAMA: "ollama" } : {}),
 };
```

</file context>
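One way to address this, sketched under the assumption that `providerOptions` holds `{ label, value }` entries (the actual option shape is not shown in this review), would be:

```ts
export const providerOptions = [
  // ...existing entries...
  // Hypothetical entry; adjust to the real option shape used by the dropdown.
  { label: "LM Studio", value: Provider.LM_STUDIO },
];
```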
```ts
      const isLMStudio = env.ECONOMY_LLM_PROVIDER === Provider.LM_STUDIO;
      if (isLMStudio) {
        return selectModel({
          aiProvider: Provider.LM_STUDIO,
          aiModel: env.ECONOMY_LLM_MODEL,
          aiApiKey: null,
        });
      }
```
Seems inconsistent? And would ignore the user settings if those are set?
Same in the other place you did this.
If I don't add this, it will fall through to the default function (because LM Studio does not use an API key; it provides a URL instead). The default function selects the model that is set as the default, so if the economy provider is set to LM Studio it would revert to the default model.

I could also add a URL check for LM Studio, something like:

```ts
if (!apiKey && !url) {
  logger.warn("Economy LLM provider configured but API key not found", {
    provider: env.ECONOMY_LLM_PROVIDER,
  });
  return selectDefaultModel(userAi);
}
```

This would also work.
Closes #507
Description
Integrates LM Studio as a supported AI provider using the OpenAI-compatible SDK.
Changes
- Added `LM_STUDIO` as a provider in `model.ts`.
- Updated `model.ts` for LM_STUDIO.
- Updated `.env.example` to include NEXT_PUBLIC_LM_STUDIO_MODEL and LM_STUDIO_BASE_URL.
- Updated the `chat` and `economy` model resolvers to ensure they return correctly when a local provider is selected.

Summary by CodeRabbit
New Features
Chores
Tests