
OpenAI API configuration error: "Max Output Tokens" parameter causes extraction failure #32

@alice-debra

Description


There is a configuration issue when using OpenAI models in the "Bring Your Own LLM" section that causes AI extraction to fail.

Current Behavior

  • When selecting "OpenAI" in the "Bring Your Own LLM" section, only recent GPT models appear to be available
  • These recent models seemingly do not accept the "Max Output Tokens" parameter as input
  • This causes an error during AI extraction

Expected Behavior

OpenAI models should either:

  • Accept the "Max Output Tokens" parameter without error, or
  • Have the parameter omitted or adapted by the integration for models that don't support it

Note

This needs further testing and confirmation
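One way to confirm this independently of the product UI would be to call the OpenAI Chat Completions API directly with both parameter variants. The sketch below is illustrative only: the model name is a placeholder, and the assumption that recent models reject max_tokens in favor of max_completion_tokens is exactly the hypothesis that needs verifying.

```python
# Hypothetical confirmation sketch using the official openai Python SDK.
# Assumes OPENAI_API_KEY is set and that MODEL matches one of the recent
# GPT models offered in the "Bring Your Own LLM" dropdown.
from openai import OpenAI, BadRequestError

client = OpenAI()
MODEL = "gpt-5"  # placeholder; substitute the model actually listed in the UI
messages = [{"role": "user", "content": "Say hello."}]

try:
    # Older-style request: "Max Output Tokens" sent as max_tokens.
    client.chat.completions.create(model=MODEL, messages=messages, max_tokens=256)
    print("max_tokens accepted")
except BadRequestError as exc:
    # If the hypothesis holds, recent models land here instead of succeeding.
    print(f"max_tokens rejected: {exc}")

# Newer-style request: the same limit sent as max_completion_tokens.
resp = client.chat.completions.create(
    model=MODEL, messages=messages, max_completion_tokens=256
)
print(resp.choices[0].message.content)
```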

Steps to Reproduce

  • Go to "Bring Your Own LLM" configuration
  • Select "OpenAI" as provider
  • Select one of the available (recent) GPT models
  • Attempt AI extraction with "Max Output Tokens" parameter set
  • Observe the error

Possible Root Cause

Recent OpenAI models may use different parameter names (e.g., max_completion_tokens instead of max_tokens) or have removed support for output token limits in the API call format being used.
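If that is the cause, one possible fix on the integration side is to translate the single "Max Output Tokens" setting into whichever parameter the selected model expects. The sketch below is only a suggestion: the helper function, the model-name prefixes, and the request-building code are assumptions and would need to match the models and client code this project actually uses.

```python
# Hypothetical adaptation sketch: map the UI's "Max Output Tokens" value onto
# the parameter name the selected OpenAI model expects. The prefixes below are
# assumptions, not a confirmed list of affected models.
from openai import OpenAI

# Model families assumed (not confirmed) to require max_completion_tokens.
NEWER_MODEL_PREFIXES = ("o1", "o3", "o4", "gpt-5")

def build_token_limit_kwargs(model: str, max_output_tokens: int) -> dict:
    """Return the keyword argument carrying the output-token limit."""
    if model.startswith(NEWER_MODEL_PREFIXES):
        return {"max_completion_tokens": max_output_tokens}
    return {"max_tokens": max_output_tokens}

def extract(model: str, prompt: str, max_output_tokens: int) -> str:
    """Run one extraction request with the appropriately named token limit."""
    client = OpenAI()
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        **build_token_limit_kwargs(model, max_output_tokens),
    )
    return resp.choices[0].message.content
```

Alternatively, if a model rejects the output-token limit entirely, the integration could catch that request error and retry without the parameter rather than failing the extraction.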
