Limiting response #139
Replies: 3 comments 1 reply
-
Bard does not expose temperature or other hyperparameter adjustments, but you can approximate a limit on the number of output tokens or output sentences by post-processing the response with simple helpers, as follows:
```python
from bardapi import Bard, max_token, max_sentence

token = 'xxxxxxx'
bard = Bard(token=token)
max_token(bard.get_answer("Tell me about NewJeans, the group my peers and I like")['content'], 30)
```

The two helpers are implemented as follows:

```python
def max_token(text: str, n: int) -> None:
    """
    Print the first 'n' tokens (words) of the given text.

    Args:
        text (str): The input text to be processed.
        n (int): The number of tokens (words) to be printed from the beginning.

    Returns:
        None
    """
    word_count = 0
    for i, char in enumerate(text):
        if char.isspace():
            word_count += 1
            if word_count == n:
                print(text[:i])
                break
    else:
        # Fewer than n words: print the full text.
        print(text)


def max_sentence(text: str, n: int) -> None:
    """
    Print the first 'n' sentences of the given text.

    Args:
        text (str): The input text to be processed.
        n (int): The number of sentences to be printed from the beginning.

    Returns:
        None
    """
    punctuations = set('?!.')
    chars = []
    sentence_count = 0
    for char in text:
        chars.append(char)
        if char in punctuations:
            sentence_count += 1
            if sentence_count == n:
                print(''.join(chars).strip())
                return
    print(''.join(chars).strip())


text = "Is there a way to limit Bard's response to a few sentences? Can this be done with some sort of max_tokens parameter? I want to ask something."
max_sentence(text, 2)
```
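Since the helpers above print their result rather than return it, a variant that returns the truncated string is often more convenient for further processing. The following is a minimal sketch (`truncate_words` is a hypothetical helper, not part of bardapi):

```python
def truncate_words(text: str, n: int) -> str:
    """Return at most the first n whitespace-separated words of text."""
    words = text.split()
    return ' '.join(words[:n])


answer = "Bard is a conversational AI service from Google."
print(truncate_words(answer, 4))  # -> Bard is a conversational
```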
-
Correct, but this doesn't actually change how Bard responds; it merely cuts the answer off after a certain number of sentences, so the result is not necessarily a shortened version of the answer.
-
Hello, you can limit the response by specifying the number of words in your query, for example "Explain in 200 words ...".
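This prompt-level approach can be wrapped in a small helper that prepends a word budget to the question (a sketch; `limited_query` is hypothetical and not part of bardapi):

```python
def limited_query(question: str, max_words: int) -> str:
    """Build a prompt asking the model to keep its answer under max_words words."""
    return f"Explain in {max_words} words or fewer: {question}"


prompt = limited_query("What is the capital of France?", 50)
# The prompt would then be sent via bard.get_answer(prompt)['content'].
print(prompt)
```

Note that the model treats the word limit as a soft instruction, so the answer may still run slightly over the budget.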
-
Is there a way to limit Bard's response to a few sentences? Can this be done with some sort of max_tokens parameter?