feat: added support for deepseek-reasoner #410
Conversation
👍 Looks good to me! Reviewed everything up to aa4ab0e in 15 seconds
More details
- Looked at 67 lines of code in 2 files
- Skipped 0 files when reviewing
- Skipped posting 1 drafted comment based on config settings
1. gptme/llm/llm_openai.py:116
- Draft comment: Consider renaming is_o1 to is_o1_model for clarity, as it represents a boolean indicating if the base model is an 'o1' model. This applies to similar variables like is_deepseek_reasoner and is_reasoner.
- Reason this comment was not posted: Confidence changes required: 20%
The code changes in the PR are consistent with the existing code structure and logic. The addition of the 'deepseek-reasoner' model is handled correctly in both files. The logic for determining 'is_reasoner' is correctly updated to include 'deepseek-reasoner'. The model metadata is also correctly updated in 'models.py'.
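For readers unfamiliar with the flags mentioned above, here is a minimal sketch of how such booleans could be derived from a model name; the function and variable derivation here are illustrative assumptions, not the actual code in gptme/llm/llm_openai.py:

```python
# Illustrative sketch only; gptme's actual implementation in llm_openai.py may differ.
def reasoning_flags(base_model: str) -> tuple[bool, bool, bool]:
    is_o1 = base_model.startswith("o1")
    is_deepseek_reasoner = base_model.startswith("deepseek-reasoner")
    # Reasoning models share restrictions (e.g. different handling of system messages),
    # so a combined flag is convenient.
    is_reasoner = is_o1 or is_deepseek_reasoner
    return is_o1, is_deepseek_reasoner, is_reasoner
```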
Workflow ID: wflow_TcggQuAuo948TChs
You can customize Ellipsis with 👍 / 👎 feedback, review rules, user-specific overrides, quiet mode, and more.
Codecov Report
Attention: Patch coverage is
✅ All tests successful. No failed tests found.
Additional details and impacted files
@@ Coverage Diff @@
## master #410 +/- ##
=========================================
Coverage ? 69.95%
=========================================
Files ? 70
Lines ? 5808
Branches ? 0
=========================================
Hits ? 4063
Misses ? 1745
Partials ? 0
Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
aa4ab0e to 43ed919
👍 Looks good to me! Incremental review on 43ed919 in 51 seconds
More details
- Looked at 140 lines of code in 2 files
- Skipped 0 files when reviewing
- Skipped posting 3 drafted comments based on config settings
1. gptme/llm/llm_openai.py:137
- Draft comment: Consider adding a check to handle an empty msgs list to avoid a potential IndexError.
- Reason this comment was not posted: Decided after close inspection that this draft comment was likely wrong and/or not actionable:
The function does access msgs[0] without a check. However, this is an internal function only called in a context where messages are required for the API call to work. An empty message list would be invalid input for the chat API anyway. The function assumes valid input rather than defensive programming, which is reasonable for an internal implementation detail.
I could be wrong about the API requirements - maybe there are valid cases where an empty message list should be handled gracefully rather than failing.
Looking at OpenAI's API docs, a chat completion requires at least one message. An empty message list would be invalid input. The function is correct to assume valid input since it's an internal implementation detail.
Delete the comment. The function reasonably assumes valid input since it's an internal implementation detail and empty message lists would be invalid for the chat API anyway.
2. gptme/llm/llm_openai.py:142
- Draft comment: Use List from typing instead of list for type hinting. This applies to other instances in the file as well.
- Reason this comment was not posted: Confidence changes required: 50%
The code uses list for type hinting in several places, which is not recommended in modern Python. Instead, List from typing should be used for better compatibility and readability.
3. gptme/llm/models.py:134
- Draft comment: Ensure deepseek-reasoner is properly handled in get_model to avoid potential issues if the model is not found.
- Reason this comment was not posted: Confidence changes required: 50%
The deepseek-reasoner model is added to the MODELS dictionary, but there is no check for its presence in the get_model function. This could lead to issues if the model is not found.
Workflow ID: wflow_hFk3i8RDzDEZpnae
You can customize Ellipsis with 👍 / 👎 feedback, review rules, user-specific overrides, quiet mode, and more.
TODO