chore(exports): export OAIExtended type #130
Conversation
👍 Looks good to me!
- Reviewed the entire pull request up to 070f1ea
- Looked at 36 lines of code in 3 files
- Took 58 seconds to review
More info
- Skipped 0 files when reviewing.
- Skipped posting 1 additional comment because it didn't meet the confidence threshold of 70%.
1. `/src/index.ts:4`:
- Assessed confidence: 0%
- Comment: Good job on exporting the necessary types and functions. This will make them available for use in other parts of the codebase or by other modules that import this one.
- Reasoning: The PR author is exporting the types `Maybe` and `OAIClientExtended` along with the `LLMValidator` and `maybe` functions. This is good practice, as it allows these types and functions to be used in other parts of the codebase or by other modules that import this one. The changes made in this PR are straightforward, and there don't seem to be any logical, performance, or security bugs introduced. The code is also DRY, and the variable and function names are descriptive. The PR description is clear and matches the changes made.
Workflow ID: wflow_tAu8X0oeIWowDAke
Not what you expected? You can customize the content of the reviews using rules. Learn more here.
This looks good - I have been building validators like this custom too. The approach we use here currently works fine as an example and for the tests, but personally I'm not a big fan of requiring an instructor instance to be passed in and having messages hard-coded like we have in here. The big issue, though, is streaming: we run continuous validation during a stream and expose an `isValid`, which lets you have some understanding of the state of the response before it completes. If you were to use this with `stream: true`, it would trigger a shit ton of requests, right?
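For context, a minimal sketch of the pattern under discussion: an LLM-backed validator attached to a zod schema via `refine`, where every refinement run is its own completion call. The names and wiring below are illustrative assumptions, not the repo's actual `LLMValidator` implementation.

```ts
import OpenAI from "openai"
import Instructor from "@instructor-ai/instructor"
import { z } from "zod"

const oai = new OpenAI()
const client = Instructor({ client: oai, mode: "FUNCTIONS" })

// Illustrative validator factory (not the library's API): returns an async
// predicate that asks the model whether `value` satisfies `statement`.
// Every invocation is a full completion request, which is why re-validating
// on each streamed chunk gets expensive fast.
const llmValidator =
  (statement: string) =>
  async (value: string): Promise<boolean> => {
    const result = await client.chat.completions.create({
      model: "gpt-4",
      messages: [
        { role: "system", content: `Does the following text satisfy: "${statement}"?` },
        { role: "user", content: value }
      ],
      response_model: { schema: z.object({ isValid: z.boolean() }), name: "Validation" }
    })
    return result.isValid
  }

// zod supports async refinements, so `schema.parseAsync(...)` will call out
// to the model once per validation pass.
const schema = z.object({
  answer: z.string().refine(llmValidator("do not say objectionable things"), {
    message: "Answer failed the LLM validation check"
  })
})
```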
Fair. I wasn't going to use it with streaming, so I didn't consider it. Do you guys have a plan for that in the future, such as disabling validation params or waiting till the last message? I'm not super clued up on how the zod functions work. For now I have just exported the `OAIExtended` type. I hope that works for you.
yeah, 100% - I didn't spend much time in here, so I could be wrong about the stream issue; it was just my first thought when looking through it again. I can run it locally and see, but that's my only concern tbh. If people find it valuable then I'm all about it, as long as we can make sure folks can't get into a situation where they make like 2000 completion calls because they have `stream: true` lol
just confirmed locally: it is def creating many, many additional completion calls if `stream` is present on the origin call, so this is tough. We could hypothetically check for a `superRefine` or something, and throw or warn if stream is on - but it might be expected; you might want to make some async check? But I dunno, that's tough. Or we could extend the `responseModel` to have an optional `validators` map or something that you could use in your schema - then we'd have the ability to force `stream: false` or throw otherwise, maybe? I can try to spend some time thinking about it, but if you have ideas for a good way to approach this, please let me know. Also tagging @ethanleifer, who wrote the validator initially.
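A hedged sketch of that guard idea - declare validators on the response model and refuse `stream: true` before making any calls. The `validators` field and both names below are hypothetical, not an existing API:

```ts
import { z } from "zod"

// Hypothetical extension of the response model: validators declared
// alongside the schema instead of buried inside a superRefine, so the
// client can see them up front.
type ResponseModelWithValidators<T extends z.ZodTypeAny> = {
  schema: T
  name: string
  validators?: Record<string, (value: unknown) => Promise<boolean>>
}

// Guard: fail fast if streaming would re-trigger the validators on every
// partial parse of the in-flight response.
function assertStreamSafe(
  params: { stream?: boolean },
  responseModel: ResponseModelWithValidators<z.ZodTypeAny>
): void {
  if (params.stream && responseModel.validators) {
    throw new Error(
      "LLM validators would re-run on every streamed chunk; set stream: false or drop the validators."
    )
  }
}
```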
I prefer the idea of extending the response model. That is a key part of Instructor, so having control over that interface makes sense. Disabling validation on streaming by default does make sense. You could also add some function to validate continuously every n tokens.
I am slightly leaning more towards keeping the validators on the zod schema, because that is how zod and instructor-py both do it. The `response_model` object is really just a way to pass the zod schema through to the instructor. Maybe we make it `instructor.LLMValidator` so you don't have to pass the instructor through separately? I think we either validate every n chunks or only trigger the refine on the finished model.
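A rough sketch of the "every n chunks or only on the finished model" idea. Everything here is hypothetical; `runLLMCheck` stands in for whatever validator the schema declares:

```ts
// Throttled streaming validation: run the expensive LLM refinement only
// every `n` chunks during the stream, plus once on the finished output,
// instead of on every partial parse.
async function validateStream(
  chunks: AsyncIterable<string>,
  runLLMCheck: (partial: string) => Promise<boolean>,
  n = 20
): Promise<{ text: string; isValid: boolean }> {
  let text = ""
  let count = 0
  let isValid = true
  for await (const chunk of chunks) {
    text += chunk
    count += 1
    if (count % n === 0) {
      isValid = await runLLMCheck(text) // periodic mid-stream check
    }
  }
  isValid = await runLLMCheck(text) // final check on the completed model
  return { text, isValid }
}
```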
I'd like to use `LLMValidator` and `maybe`. Because of the way the project is packaged, they need to be exported at the top level. There are also situations using Instructor where having its type exported is super helpful.

Summary:
This PR exports `LLMValidator`, `maybe`, `Maybe`, and `OAIClientExtended` from the top level and corrects a typo in the `maybe` function comment.

Key points:
- Exports `LLMValidator`, `maybe`, `Maybe`, and `OAIClientExtended` in `/src/index.ts` and `/src/dsl/index.ts` (sketched below).
- Corrects a typo in the comment of the `maybe` function in `/src/dsl/maybe.ts`.

Generated with ❤️ by ellipsis.dev
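For reference, a hedged sketch of what those top-level re-exports might look like; the export style and the `OAIClientExtended` source path are assumptions, not copied from the repo:

```ts
// /src/index.ts - illustrative shape of the new top-level re-exports.
export { LLMValidator, maybe } from "./dsl"
export type { Maybe } from "./dsl"
export type { OAIClientExtended } from "./instructor" // source path assumed
```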