Added API Proxy functionality to API LM #68
Conversation
Thanks @AakritiKinra! But I think this changes the signature of APIBasedLM.generate() to be different from that of the parent LanguageModel.generate(). We should keep the signatures of parent and child abstract functions the same. Instead, I think you can specify api_base in the constructor of APIBasedLM.
I have added api_base to the constructor. The signatures of the two generate() functions still differ because of the completion() call: I pass message, the prompt for which the response is generated, but the generate() function of the LanguageModel class has no message parameter. The chat_generate() functions of both classes do take messages as a parameter.
It looks like the modified version is not passing tests. We'll have to make sure that the generate() interface is the same for all LMs.
Thanks!
* added base_url
* Updated function descriptions
* Added api_base to the constructor
* matched structure with lm class
* Pull latest changes
Description
Added the api_base parameter to the completion() and batch_completion() functions to use the API Proxy.
References