
Generation error: Response from LLM is not in JSON format #15

@theotherp

Description

❯ magic-cli config list
Field: llm
Value: "openai"
Description: The LLM to use for generating responses. Supported values: "ollama", "openai"

Field: ollama.base_url
Value: "http://localhost:11434"
Description: The base URL of the Ollama API.

Field: ollama.embedding_model
Value: "nomic-embed-text:latest"
Description: The model to use for generating embeddings.

Field: ollama.model
Value: "codestral:latest"
Description: The model to use for generating responses.

Field: openai.api_key (secret)
Value: **********************************************************
Description: The API key for the OpenAI API.

Field: openai.embedding_model
Value: "gpt-4o"
Description: The model to use for generating embeddings.

Field: openai.model
Value: "gpt-4o"
Description: The model to use for generating responses.

Field: suggest.add_to_history
Value: false
Description: Whether to add the suggested command to the shell history.

Field: suggest.mode
Value: "clipboard"
Description: The mode to use for suggesting commands. Supported values: "clipboard" (copying command to clipboard), "unsafe-execution" (executing in the current shell session)
❯ magic-cli suggest "get kubernetes pods with most memory usage in all namespaces"
Generating suggested command for prompt "get kubernetes pods with most memory usage in all namespaces"...

Error: Generation error: Generation error: Response from LLM is not in JSON format
❯                                                                

Version: magic-cli 0.0.2
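The error above ("Response from LLM is not in JSON format") generally means the model's reply could not be parsed as JSON, for example because the model wrapped the JSON payload in a Markdown code fence or surrounded it with prose. As a rough illustration only (this is not magic-cli's actual code; the function name and handling are hypothetical, and the serde_json crate is assumed), a tolerant parser could extract just the outermost `{...}` span before parsing:

```rust
use serde_json::Value;

// Hypothetical sketch: models sometimes wrap their JSON answer in Markdown
// fences or surrounding prose, which makes a strict serde_json::from_str
// call fail. One tolerant approach is to parse only the outermost {...}
// span of the reply.
fn parse_llm_json(raw: &str) -> Option<Value> {
    let start = raw.find('{')?;
    let end = raw.rfind('}')?;
    if end < start {
        return None;
    }
    serde_json::from_str(&raw[start..=end]).ok()
}

fn main() {
    // A reply with prose around the JSON payload.
    let reply = "Sure! Here is the command:\n{\"command\": \"kubectl top pods -A --sort-by=memory\"}\nLet me know if you need anything else.";
    match parse_llm_json(reply) {
        Some(v) => println!("parsed: {v}"),
        None => eprintln!("still not JSON"),
    }
}
```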

❯ magic-cli sys-info
System information as detected by the CLI:

OS: Windows
OS version: 10 (19045)
CPU architecture: x86_64
Shell: pwsh
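One possibly unrelated observation about the config dump above: openai.embedding_model is set to "gpt-4o", which is a chat model; OpenAI's embedding models are named text-embedding-* (e.g. text-embedding-3-small, text-embedding-ada-002). A hypothetical validation sketch in Rust (not part of magic-cli) that would flag such a value:

```rust
// Hypothetical sketch: warn when the configured OpenAI embedding model does
// not look like an embedding model. OpenAI's embedding model names all start
// with "text-embedding" (e.g. text-embedding-3-small, text-embedding-ada-002).
fn check_embedding_model(model: &str) -> Result<(), String> {
    if model.starts_with("text-embedding") {
        Ok(())
    } else {
        Err(format!(
            "\"{model}\" does not look like an OpenAI embedding model; \
             try \"text-embedding-3-small\" instead"
        ))
    }
}

fn main() {
    // The value from the config dump above.
    if let Err(warning) = check_embedding_model("gpt-4o") {
        eprintln!("warning: openai.embedding_model: {warning}");
    }
}
```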


    Labels

    bug (Something isn't working)
