Implement models for BrainForge service #8
base: dev
Conversation
…a" BrainForge personas
Add `LLMGetInference.as_llm_request` convenience method
```python
class LLMGetModelsHttpResponse(BaseModel):
    models: List[BrainForgeLLM]
```
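For context, here is a minimal self-contained sketch of the response shape under discussion. It uses stdlib dataclasses instead of Pydantic so the snippet runs without dependencies, and the `BrainForgeLLM` fields shown are assumptions, not the actual model definition from this PR:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BrainForgeLLM:
    # Hypothetical fields; the real model's attributes are not visible in this excerpt.
    name: str
    revision: str
    personas: List[str] = field(default_factory=list)

@dataclass
class LLMGetModelsHttpResponse:
    # Mirrors the Pydantic response model above: a flat list of all models,
    # each of which (per the review comment) also carries its personas.
    models: List[BrainForgeLLM]

resp = LLMGetModelsHttpResponse(
    models=[BrainForgeLLM(name="example-llm", revision="v1", personas=["default"])]
)
assert len(resp.models) == 1
```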
This returns both all models and all personas, making Persona-related requests useless.
Maybe the `brainforge_get_personas` endpoint isn't necessary at all? The only use case I see for it now is if some client wants to get a specific model @revision without parsing all of the available models.
I think that will never be the case, because every service wants to request all available info at once and validate requests before they are sent. But we can keep it and decide later.
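To illustrate the trade-off being discussed: if the full model list is already returned, a client can resolve a specific model at a given revision with a simple filter, which covers the one use case mentioned for a dedicated endpoint. A dependency-free sketch (field and function names here are hypothetical, not part of the PR):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class BrainForgeLLM:
    # Hypothetical fields for illustration only.
    name: str
    revision: str

def find_model(models: List[BrainForgeLLM], name: str, revision: str) -> Optional[BrainForgeLLM]:
    # Client-side lookup of a specific model@revision from the full list,
    # avoiding the need for a separate personas/models endpoint.
    return next((m for m in models if m.name == name and m.revision == revision), None)

models = [BrainForgeLLM("alpha", "1"), BrainForgeLLM("alpha", "2")]
assert find_model(models, "alpha", "2") is not None
assert find_model(models, "beta", "1") is None
```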
… handling Refactor tokenizer model names to be more descriptive
Description
Implement models for LLM MQ requests/responses
Implement models for LLM requests via HANA endpoints
Issues
Adds the incoming user query to `history` in `LLMRequest.to_completion_kwargs`. Missed in #4.

Other Notes
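As a sketch of the history-append fix noted under Issues — appending the incoming user query to the chat history when building completion kwargs. Field names and the message format are assumptions, and stdlib dataclasses stand in for the actual Pydantic model:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class LLMRequest:
    # Hypothetical fields; the real LLMRequest model is not shown in this excerpt.
    query: str
    history: List[Dict[str, str]] = field(default_factory=list)

    def to_completion_kwargs(self) -> dict:
        # Append the incoming user query to the history before building
        # the completion kwargs (the step that was previously missed).
        messages = [*self.history, {"role": "user", "content": self.query}]
        return {"messages": messages}

req = LLMRequest(query="Hi", history=[{"role": "system", "content": "You are helpful."}])
kwargs = req.to_completion_kwargs()
assert kwargs["messages"][-1] == {"role": "user", "content": "Hi"}
```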