
Very short answers #135

Answered by SeeknnDestroy
seoeaa asked this question in Q&A

Hello @seoeaa,

Thank you for bringing up your concern about the brevity of the answers. If you'd like to receive longer responses from the LLM, you can adjust the max_tokens parameter in your configuration. To use this feature, please update autollm to the latest version with pip install -U autollm.

Please refer to our config.example.yaml file for a guide on setting this parameter and others:

# config.example.yaml

# ... (other configurations)

# Parameters for the LLM
llm_params:
  model: "gpt-3.5-turbo"    # Model identifier
  max_tokens: 2048          # Increase this value for longer outputs
# ... (other configurations)

By setting max_tokens to a higher value, you allow the LLM to generate longer responses.

Answer selected by SeeknnDestroy