
Conversation

@ppranay20
Contributor

@ppranay20 ppranay20 commented Nov 19, 2025

Closes #507

Description

Integrates LM Studio as a supported AI provider using the OpenAI-compatible SDK.

Changes

  • Added LM_STUDIO as a provider in model.ts.
  • Added tests in model.ts for LM_STUDIO.
  • Updated env.example to include NEXT_PUBLIC_LM_STUDIO_MODEL and LM_STUDIO_BASE_URL.
  • Patched chat and economy model resolvers to ensure they return correctly when a local provider is selected.

Summary by CodeRabbit

  • New Features

    • Added LM Studio as a supported language-model provider and UI/runtime options to select its model.
    • Integrated OpenAI-compatible connectivity for LM Studio backends.
  • Chores

    • Added an OpenAI-compatible SDK dependency.
  • Tests

    • Added tests validating LM Studio configuration and integration.

@vercel

vercel bot commented Nov 19, 2025

@ppranay20 is attempting to deploy a commit to the Inbox Zero OSS Program Team on Vercel.

A member of the Team first needs to authorize it.

@coderabbitai
Contributor

coderabbitai bot commented Nov 19, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

LM Studio is added as a new LLM provider: env vars for the base URL and public model were introduced, provider and model constants were extended, selectModel now supports LM_STUDIO via an OpenAI-compatible client, a package dependency was added, and tests were updated to cover the LM_STUDIO path.

Changes

Changes by cohort / file(s):

  • Environment examples & wiring (apps/web/.env.example, apps/web/env.ts): Added LM_STUDIO_BASE_URL and NEXT_PUBLIC_LM_STUDIO_MODEL; extended the public and server-side env schema and runtime env mapping; added "lmstudio" to the provider enum.
  • LLM config constants (apps/web/utils/llms/config.ts): Added Provider.LM_STUDIO and Model.LM_STUDIO (sourced from env.NEXT_PUBLIC_LM_STUDIO_MODEL).
  • Model selection & runtime wiring (apps/web/utils/llms/model.ts): Added a Provider.LM_STUDIO branch in selectModel that validates inputs and constructs an OpenAI-compatible client via createOpenAICompatible({ baseURL, supportsStructuredOutputs: true }); wired economy/chat provider fallthroughs for LM_STUDIO.
  • Tests (apps/web/utils/llms/model.test.ts): Added a mock for @ai-sdk/openai-compatible, extended env mocks with LM_STUDIO vars, and added a test asserting LM_STUDIO selection, client invocation, and the returned model.
  • Dependencies (apps/web/package.json): Added dependency @ai-sdk/openai-compatible@^1.0.27.

Sequence Diagram(s)

sequenceDiagram
    participant Caller as selectModel Caller
    participant selectModel as selectModel()
    participant validate as Validation
    participant client as createOpenAICompatible
    participant result as Return Result

    Caller->>selectModel: provider=LM_STUDIO, modelName, baseURL
    selectModel->>validate: ensure modelName & baseURL present
    alt missing required param
        validate-->>Caller: throw Error
    else
        selectModel->>client: createOpenAICompatible({ name, baseURL, supportsStructuredOutputs:true })
        client-->>selectModel: model instance
        selectModel->>result: { provider: LM_STUDIO, modelName, model, backupModel: null }
        result-->>Caller: return
    end
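
Based on the walkthrough and the diagram above, the LM_STUDIO path presumably looks roughly like the following sketch (function name, error wording, and return shape are assumptions; the actual implementation lives in apps/web/utils/llms/model.ts):

import { createOpenAICompatible } from "@ai-sdk/openai-compatible";

// Minimal sketch of LM Studio model selection, assuming the call shape
// asserted by the tests later in this PR; not the project's actual code.
function selectLmStudioModel(modelName: string | null, baseURL: string | null) {
  // Validate the inputs the diagram requires before building a client.
  if (!modelName || !baseURL) {
    throw new Error("LM Studio requires a model name and LM_STUDIO_BASE_URL");
  }

  // OpenAI-compatible client pointed at the local LM Studio server.
  const lmstudio = createOpenAICompatible({
    name: "lmstudio",
    baseURL,
    supportsStructuredOutputs: true,
  });

  return {
    provider: "lmstudio",
    modelName,
    model: lmstudio(modelName),
    backupModel: null,
  };
}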

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20–30 minutes

  • Pay attention to: apps/web/utils/llms/model.ts (validation, proper nulling of aiApiKey/backups), apps/web/env.ts (public vs server schema correctness), and the test mock for @ai-sdk/openai-compatible to ensure it matches runtime behavior.

Possibly related PRs

  • Add ai gateway support #674 — Another LLM provider addition that modifies env, config, and selectModel wiring (very similar integration pattern).
  • Fixes #657 — Changes to the same llms test file and test patterns for model selection.
  • Add Google Gemini Support #297 — Extends Provider/Model constants in utils/llms/config.ts (same surface updated).

Poem

🐇 In a burrow bright and new, I hop,
LM‑Studio joins our LLM shop.
Base URL set, the model's in play,
I nibble tests and dance all day — hooray! 🎉

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 50.00%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (4 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title "Add LM Studio Provider" accurately and concisely summarizes the main change: adding LM Studio as a new LLM provider throughout the codebase.
  • Linked Issues Check: ✅ Passed. The pull request fulfills the issue #507 requirements: it adds LM Studio support via the OpenAI-compatible API, integrates with the existing OpenAI SDK, provides the necessary environment configuration, and implements the model selection logic.
  • Out of Scope Changes Check: ✅ Passed. All changes are directly related to adding LM Studio provider support; no extraneous modifications were detected, and all alterations align with the stated objectives in issue #507.

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1cf1aab and 6996fd7.

📒 Files selected for processing (1)
  • apps/web/utils/llms/model.ts (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • apps/web/utils/llms/model.ts
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: Software Component Analysis Js
  • GitHub Check: Static Code Analysis Js
  • GitHub Check: cubic · AI code reviewer
  • GitHub Check: Jit Security
  • GitHub Check: test


@socket-security

socket-security bot commented Nov 19, 2025

Review the following changes in direct dependencies. Learn more about Socket for GitHub.

Diff: Added @ai-sdk/openai-compatible@1.0.27
  • Supply Chain Security: 100
  • Vulnerability: 100
  • Quality: 100
  • Maintenance: 98
  • License: 100

View full report

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
apps/web/utils/llms/model.ts (1)

235-242: Consider extracting the LM Studio check logic to reduce duplication.

The same LM Studio check pattern appears in both selectEconomyModel and selectChatModel. While the duplication is minimal, extracting this into a helper function would improve maintainability.

Consider creating a helper function:

function isLocalProvider(provider: string): boolean {
  return provider === Provider.LM_STUDIO || provider === Provider.OLLAMA;
}

Then use it in both functions:

   function selectEconomyModel(userAi: UserAIFields): SelectModel {
     if (env.ECONOMY_LLM_PROVIDER && env.ECONOMY_LLM_MODEL) {
-      const isLMStudio = env.ECONOMY_LLM_PROVIDER === Provider.LM_STUDIO;
-      if (isLMStudio) {
+      if (isLocalProvider(env.ECONOMY_LLM_PROVIDER)) {
         return selectModel({
-          aiProvider: Provider.LM_STUDIO,
+          aiProvider: env.ECONOMY_LLM_PROVIDER,
           aiModel: env.ECONOMY_LLM_MODEL,
           aiApiKey: null,
         });
       }

Note: If you add Ollama back or add more local providers in the future, this pattern would be even more valuable.

Also applies to: 282-289

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 45e6d8b and 1cf1aab.

📒 Files selected for processing (6)
  • apps/web/.env.example (1 hunks)
  • apps/web/env.ts (4 hunks)
  • apps/web/package.json (1 hunks)
  • apps/web/utils/llms/config.ts (2 hunks)
  • apps/web/utils/llms/model.test.ts (4 hunks)
  • apps/web/utils/llms/model.ts (4 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-07-19T15:06:43.730Z
Learnt from: garnertb
Repo: elie222/inbox-zero PR: 580
File: apps/web/.env.example:4-7
Timestamp: 2025-07-19T15:06:43.730Z
Learning: In apps/web/.env.example, boolean environment variables follow an unquoted convention (e.g., LOG_ZOD_ERRORS=true, AUTH_TRUST_HOST=true). Logical grouping of related variables (like auth variables together) is preferred over strict alphabetical ordering for better developer experience.

Applied to files:

  • apps/web/.env.example
🧬 Code graph analysis (3)
apps/web/utils/llms/config.ts (1)
apps/web/env.ts (1)
  • env (17-250)
apps/web/utils/llms/model.test.ts (4)
apps/web/utils/llms/types.ts (1)
  • UserAIFields (3-9)
apps/web/utils/llms/config.ts (2)
  • Provider (7-16)
  • Model (18-37)
apps/web/utils/llms/model.ts (1)
  • getModel (28-42)
apps/web/env.ts (1)
  • env (17-250)
apps/web/utils/llms/model.ts (2)
apps/web/utils/llms/config.ts (2)
  • Provider (7-16)
  • Model (18-37)
apps/web/env.ts (1)
  • env (17-250)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Static Code Analysis Js
  • GitHub Check: cubic · AI code reviewer
  • GitHub Check: Jit Security
  • GitHub Check: test
🔇 Additional comments (6)
apps/web/env.ts (1)

14-14: LGTM! Clean environment configuration for LM Studio.

The environment variable setup follows the established pattern for local providers (similar to Ollama). The configuration properly separates server-side (LM_STUDIO_BASE_URL) and client-side (NEXT_PUBLIC_LM_STUDIO_MODEL) variables, with appropriate runtime mappings.

Also applies to: 61-61, 182-182, 243-243
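
For orientation, the env additions probably take roughly this shape, assuming env.ts follows the common createEnv pattern from @t3-oss/env-nextjs with zod (a sketch under that assumption, not the actual file contents):

import { createEnv } from "@t3-oss/env-nextjs";
import { z } from "zod";

export const env = createEnv({
  server: {
    // Base URL of the local LM Studio server (treating it as optional is an assumption)
    LM_STUDIO_BASE_URL: z.string().optional(),
  },
  client: {
    // Model identifier exposed to the browser bundle (optional by assumption)
    NEXT_PUBLIC_LM_STUDIO_MODEL: z.string().optional(),
  },
  runtimeEnv: {
    LM_STUDIO_BASE_URL: process.env.LM_STUDIO_BASE_URL,
    NEXT_PUBLIC_LM_STUDIO_MODEL: process.env.NEXT_PUBLIC_LM_STUDIO_MODEL,
  },
});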

apps/web/package.json (1)

23-23: LGTM! Dependency addition is appropriate.

The @ai-sdk/openai-compatible package at version ^1.0.27 is correctly added to support LM Studio integration. The version constraint is consistent with other AI SDK dependencies.

apps/web/.env.example (1)

42-43: LGTM! Example environment variables follow conventions.

The LM Studio configuration examples are correctly placed in the LLM config section and follow the established pattern of unquoted values for optional variables. The default values (localhost:1234 for base URL and qwen/qwen3-vl-4b for model) are sensible defaults for LM Studio users.

Based on learnings
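
A plausible form of the two added example lines, based on the defaults mentioned above (the /v1 path suffix is an assumption; check the actual .env.example):

LM_STUDIO_BASE_URL=http://localhost:1234/v1
NEXT_PUBLIC_LM_STUDIO_MODEL=qwen/qwen3-vl-4b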

apps/web/utils/llms/model.test.ts (1)

6-6: LGTM! Test setup and mocking are correctly implemented.

The import and mock for createOpenAICompatible follow the same pattern as other provider mocks, and the environment variable mocks are properly configured.

Also applies to: 39-41, 64-65

apps/web/utils/llms/config.ts (1)

14-14: LGTM! Provider and model configuration follows the established pattern.

The LM Studio constants are correctly added to both Provider and Model exports, mirroring the implementation of other local providers like Ollama.

Note: LM Studio is not included in the providerOptions array (lines 39-50), which means it won't appear in UI provider dropdowns. This appears intentional and consistent with how local providers are handled, but confirm this aligns with your product requirements.

Also applies to: 35-35

apps/web/utils/llms/model.ts (1)

9-9: LGTM! Import added correctly.

Comment on lines +174 to +193
it("should configure LM Studio model correctly", () => {
const userAi: UserAIFields = {
aiApiKey: "user-api-key",
aiProvider: Provider.LM_STUDIO,
aiModel: Model.LM_STUDIO!,
};

const result = getModel(userAi);

expect(result.provider).toBe(Provider.LM_STUDIO);
expect(result.modelName).toBe(Model.LM_STUDIO);

expect(createOpenAICompatible).toHaveBeenCalledWith({
name: Provider.LM_STUDIO,
baseURL: env.LM_STUDIO_BASE_URL!,
supportsStructuredOutputs: true,
});

expect(result.model).toBeDefined();
});
Contributor


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

Verify that LM Studio supports structured outputs.

The test correctly validates that createOpenAICompatible is called with supportsStructuredOutputs: true (line 189). However, not all OpenAI-compatible APIs support structured outputs.

Please verify using LM Studio documentation or by searching the web whether LM Studio's OpenAI-compatible API supports the structured outputs feature.


🌐 Web query:

Does LM Studio OpenAI-compatible API support structured outputs?

💡 Result:

Yes. LM Studio's OpenAI-compatible API supports Structured Outputs: you can pass response_format: { type: "json_schema", json_schema: ... } to /v1/chat/completions and the model will return JSON conforming to the schema (output appears as a JSON string in choices[0].message.content). Note: not all small models (<7B) support structured output — check the model card. [1][2]

Sources:

  • LM Studio docs — Structured Output. [1]
  • OpenAI — Introducing Structured Outputs in the API. [2]

Update test to account for model size limitations on structured outputs support.

LM Studio's OpenAI-compatible API does support Structured Outputs. However, not all small models (<7B) support structured output—check the model card.

The test hardcodes supportsStructuredOutputs: true without validating that Model.LM_STUDIO actually supports this feature. If a smaller model is used, this could cause runtime failures when structured outputs are requested. Either validate the model size before enabling this flag, or document that LM Studio configuration requires a model >=7B.
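
For context, this is roughly what a structured-output request to LM Studio's OpenAI-compatible endpoint looks like per the query result above; the base URL, model name, and schema are illustrative assumptions:

// Sketch: ask a local LM Studio server for JSON constrained to a schema.
const response = await fetch("http://localhost:1234/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "qwen/qwen3-vl-4b",
    messages: [{ role: "user", content: "Name a color and its hex code." }],
    response_format: {
      type: "json_schema",
      json_schema: {
        name: "color",
        schema: {
          type: "object",
          properties: { name: { type: "string" }, hex: { type: "string" } },
          required: ["name", "hex"],
        },
      },
    },
  }),
});
const data = await response.json();
// The schema-conforming JSON arrives as a string in choices[0].message.content.
const color = JSON.parse(data.choices[0].message.content);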

Contributor

@cubic-dev-ai cubic-dev-ai bot left a comment


1 issue found across 6 files

Prompt for AI agents (1 issue)

Understand the root cause of the following issue and fix it.


<file name="apps/web/utils/llms/config.ts">

<violation number="1" location="apps/web/utils/llms/config.ts:14">
Adding `Provider.LM_STUDIO` without also surfacing it in `providerOptions` leaves LM Studio impossible to select, so the new provider is effectively unusable.</violation>
</file>

Reply to cubic to teach it or ask questions. Re-run a review with @cubic-dev-ai review this PR

GROQ: "groq",
OPENROUTER: "openrouter",
AI_GATEWAY: "aigateway",
LM_STUDIO: "lmstudio",
Contributor

@cubic-dev-ai cubic-dev-ai bot Nov 19, 2025


Adding Provider.LM_STUDIO without also surfacing it in providerOptions leaves LM Studio impossible to select, so the new provider is effectively unusable.

Prompt for AI agents
Address the following comment on apps/web/utils/llms/config.ts at line 14:

<comment>Adding `Provider.LM_STUDIO` without also surfacing it in `providerOptions` leaves LM Studio impossible to select, so the new provider is effectively unusable.</comment>

<file context>
@@ -11,6 +11,7 @@ export const Provider = {
   GROQ: &quot;groq&quot;,
   OPENROUTER: &quot;openrouter&quot;,
   AI_GATEWAY: &quot;aigateway&quot;,
+  LM_STUDIO: &quot;lmstudio&quot;,
   ...(supportsOllama ? { OLLAMA: &quot;ollama&quot; } : {}),
 };
</file context>
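
If LM Studio should be user-selectable in the UI, the fix would presumably be an extra entry in providerOptions, roughly like the sketch below (the entry shape and label are assumptions about how the existing options are structured):

export const providerOptions = [
  // ...existing provider entries...
  // Hypothetical entry; add only if LM Studio should appear in the provider dropdown.
  { label: "LM Studio", value: Provider.LM_STUDIO },
];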

Comment on lines +243 to +251
const isLMStudio = env.ECONOMY_LLM_PROVIDER === Provider.LM_STUDIO;
if (isLMStudio) {
  return selectModel({
    aiProvider: Provider.LM_STUDIO,
    aiModel: env.ECONOMY_LLM_MODEL,
    aiApiKey: null,
  });
}

Owner


Seems inconsistent? And it would ignore the user settings if those are set?

Same in the other place you did this.

Contributor Author

@ppranay20 ppranay20 Nov 19, 2025


If I don't add this, it falls through to the default function (because LM Studio doesn't use an API key; it provides a URL instead). The default function selects whichever model is set as the default, so if the economy provider is set to LM Studio it would revert to the default model.

I could also add a URL check for LM Studio, like:

if (!apiKey && !url) {
  logger.warn("Economy LLM provider configured but API key not found", {
    provider: env.ECONOMY_LLM_PROVIDER,
  });
  return selectDefaultModel(userAi);
}

This would also work.



Development

Successfully merging this pull request may close these issues.

FEATURE REQUEST: LM-Studio API
