fix: support provider-filtered 'all' in CUSTOM_MODELS #6740
octo-patch wants to merge 1 commit into ChatGPTNextWeb:main
…extWeb#6647) When using `CUSTOM_MODELS=-all,+gpt-4.1`, the model gpt-4.1 was being enabled for ALL providers (OpenAI, Azure, 302.AI) because it exists in multiple `DEFAULT_MODELS` entries. This caused unexpected provider variants to appear in the model list even when those providers are not configured.

This fix adds support for an `@provider` suffix on the "all" keyword, e.g.:

- `-all@azure` disables all Azure models
- `-all@ai302` disables all 302.AI models

Users can now use `CUSTOM_MODELS=-all,+gpt-4.1@openai`, or `CUSTOM_MODELS=-all,+gpt-4.1,-all@azure,-all@ai302`.
Fixes #6647
Problem
When using `CUSTOM_MODELS=-all,+gpt-4.1,+gpt-4o` in a Docker deployment, users unexpectedly see "gpt-4.1 302.AI" and "gpt-4.1 Azure" in the model list alongside "gpt-4.1" (OpenAI). This happens because:

- gpt-4.1 exists in `DEFAULT_MODELS` under three providers: OpenAI, Azure (all OpenAI models are mirrored there), and 302.AI
- `+gpt-4.1` without a provider suffix re-enables gpt-4.1 for all providers after `-all` has disabled them

Solution
Add support for an `@provider` suffix on the `all` keyword in `CUSTOM_MODELS`, enabling provider-scoped enable/disable operations:

- `-all`
- `-all@azure`
- `-all@ai302`
- `+all@openai`

Users with the reported issue now have two clean options:
Option A – Specify provider explicitly when re-enabling:
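Drawing on the commit message, Option A re-enables only the wanted provider's entry by putting the provider suffix directly on the model:

```shell
# Disable everything, then re-enable gpt-4.1 for the OpenAI provider only
CUSTOM_MODELS=-all,+gpt-4.1@openai
```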
Option B – Disable unwanted providers after enabling:
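Again per the commit message, Option B re-enables the model for all providers and then switches off the providers that are not wanted:

```shell
# Enable gpt-4.1 everywhere, then disable all Azure and 302.AI entries
CUSTOM_MODELS=-all,+gpt-4.1,-all@azure,-all@ai302
```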
Changes
app/utils/model.ts:

- Move the `getModelProvider()` call before the `all` check so the provider suffix is parsed in all cases. The `all` branch now filters by `model.provider?.id` when a provider suffix is present.
- Replace `let [customModelName, customProviderName]` in the `if (count === 0)` block with `let newModelName = customModelName` to avoid a confusing variable re-declaration.

Testing
Existing behavior is unchanged for `CUSTOM_MODELS` strings that do not use the new `@provider` suffix on `all`. The new `all@provider` feature can be verified manually by setting, for example, `CUSTOM_MODELS=-all,+gpt-4.1,-all@azure,-all@ai302` and confirming that only the OpenAI gpt-4.1 entry appears in the model selector.
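The provider-scoped `all` handling described above can be sketched standalone. This is a simplified illustration, not the project's actual code: the `Model` shape and the `applyToken` helper are hypothetical stand-ins for the logic in app/utils/model.ts, and only `getModelProvider()` is named after a real function from the PR.

```typescript
// Simplified stand-in for the model records in DEFAULT_MODELS.
interface Model {
  name: string;
  provider?: { id: string };
  available: boolean;
}

// Split a token like "gpt-4.1@openai" at its last "@"; no suffix -> undefined.
function getModelProvider(token: string): [string, string | undefined] {
  const [name, provider] = token.split(/@(?=[^@]*$)/);
  return [name, provider];
}

// Apply one CUSTOM_MODELS token such as "-all", "-all@azure", "+gpt-4.1@openai".
function applyToken(models: Model[], token: string): void {
  const enable = !token.startsWith("-");
  const body = token.replace(/^[+-]/, "");
  // Parse the provider suffix BEFORE the "all" check, so "-all@azure" works.
  const [name, provider] = getModelProvider(body);
  for (const m of models) {
    if (name !== "all" && m.name !== name) continue;
    // With a suffix present, only touch models belonging to that provider.
    if (provider && m.provider?.id !== provider) continue;
    m.available = enable;
  }
}
```

With this, `-all,+gpt-4.1@openai` leaves only the OpenAI gpt-4.1 entry enabled, because the unsuffixed `-all` matches every provider while `+gpt-4.1@openai` is filtered to `provider.id === "openai"`.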