Support more than one model #347
Comments
It would be pretty easy to do such routing here: lines 50 to 63 in 803d6e5.
I'm not super interested in it myself, but it should be easy for gptme to modify itself to do!
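For illustration, here is a minimal sketch of what such per-message routing could look like. This is not gptme's actual code; the task→model mapping, the model identifiers, and the `classify_task` heuristic are all placeholders.

```python
# Hypothetical sketch of per-message model routing (not gptme's real implementation).
MODEL_BY_TASK = {
    "code": "openai/gpt-4o",       # placeholder model identifiers
    "vision": "openai/gpt-4o",
    "chat": "openai/gpt-4o-mini",
}

def classify_task(prompt: str, has_images: bool = False) -> str:
    """Very naive task classifier based on the prompt contents."""
    if has_images:
        return "vision"
    if any(kw in prompt.lower() for kw in ("def ", "class ", "traceback", "```")):
        return "code"
    return "chat"

def pick_model(prompt: str, has_images: bool = False) -> str:
    """Return the model identifier to use for this prompt."""
    return MODEL_BY_TASK[classify_task(prompt, has_images)]

print(pick_model("Please fix this Traceback ..."))  # -> "openai/gpt-4o"
```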
@ErikBjare Was this implemented in the last release? Using …
Yes, it could also be done by adding tools to outsource such reasoning: #416. We are not doing any auto-routing, and I don't think we will be, at least not for now.
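As a rough idea of the "outsource reasoning via a tool" approach, the sketch below sends a self-contained sub-question to a separate model and returns the answer as text. It does not use gptme's actual tool API; the client wiring and model name are assumptions.

```python
# Sketch of a "reasoning" tool that delegates a hard sub-question to another model.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def reasoning_tool(question: str, model: str = "reasoning-model-name") -> str:
    """Send a self-contained sub-question to a dedicated reasoning model
    and return its answer as plain text for the main agent to use."""
    response = client.chat.completions.create(
        model=model,  # placeholder: substitute an actual reasoning-capable model
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content or ""
```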
Given the range of models that ollama can run, and given that we now have smaller models that are great for specific tasks, how hard would it be for specific agents to run with a specific model? For example, Qwen-coder for coding tasks, Qwen-VL for image-related tasks, or perhaps Llama 3.3 for overall task management.
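A rough sketch of that kind of setup, using the ollama Python client, might look like the following. The task→model mapping and the model tags are assumptions (they need to match whatever is pulled locally); this is not an existing gptme feature.

```python
# Hypothetical per-task model dispatch via ollama; model tags are illustrative only.
import ollama

TASK_MODELS = {
    "coding": "qwen2.5-coder",  # code-focused model
    "vision": "qwen2-vl",       # image-related tasks (tag is an assumption)
    "manager": "llama3.3",      # overall task management
}

def run_task(task_type: str, prompt: str) -> str:
    """Dispatch a prompt to the model registered for the given task type."""
    model = TASK_MODELS[task_type]
    response = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response["message"]["content"]

# Example: a "manager" model plans, then a coder model implements a step.
plan = run_task("manager", "Break this feature request into coding steps: ...")
code = run_task("coding", f"Implement the first step of this plan:\n{plan}")
```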