Very short answers #135
-
I don’t understand why it answers so briefly, and the hint doesn’t help.
Replies: 1 comment
-
Hello @seoeaa,

Thank you for bringing up your concern about the brevity of the answers. If you'd like to receive longer responses from the LLM, you can adjust the `max_tokens` parameter in your configuration. To use this feature, please update `autollm` to the latest version with `pip install -U autollm`.

Please refer to our `config.example.yaml` file for a guide on setting this parameter and others:

```yaml
# config.example.yaml
# ... (other configurations)

# Parameters for the LLM
llm_params:
  model: "gpt-3.5-turbo"  # Model identifier
  max_tokens: 2048        # Increase this value for longer outputs

# ... (other configurations)
```

By setting `max_tokens` to a higher value, you allow the model to generate longer outputs. Please try this adjustment and let us know if it improves the length and detail of the answers.
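As a quick sanity check before restarting the app, you can load the config and confirm the value that will be read. This is a minimal sketch, assuming PyYAML is installed and that the key names match the `config.example.yaml` excerpt above:

```python
import yaml  # assumes PyYAML is installed (pip install pyyaml)

# Inline copy of the relevant llm_params section; in practice you would
# open your own config file instead of this string.
config_text = """
llm_params:
  model: "gpt-3.5-turbo"
  max_tokens: 2048
"""

config = yaml.safe_load(config_text)

# Pull out the parameter the LLM will use for output length.
max_tokens = config["llm_params"]["max_tokens"]
print(max_tokens)  # should match the value you set, e.g. 2048
```

If the printed value is still the old one, double-check that you edited the config file the app actually loads.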