
chore(exports): export OAIExtended type #130

Merged: 5 commits into instructor-ai:main on Mar 6, 2024
Conversation

@mattzcarey (Contributor) commented Mar 4, 2024

I'd like to use LLMValidator and maybe. Because of the way the project is packaged, they need to be exported from the top level. There are also situations when using Instructor where having its type exported is super helpful.


Ellipsis 🚀 This PR description was created by Ellipsis for commit 070f1ea.

Summary:

This PR exports LLMValidator, maybe, Maybe, and OAIClientExtended from the top level and corrects a typo in the maybe function comment.

Key points:

  • Added exports for LLMValidator, maybe, Maybe, and OAIClientExtended in /src/index.ts and /src/dsl/index.ts (sketched below).
  • Corrected a typo in the comment of the maybe function in /src/dsl/maybe.ts.
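For a concrete picture, the added re-exports might look roughly like this (a sketch only; the module paths below are assumptions, not the actual diff):

```ts
// src/dsl/index.ts (sketch) - re-export the DSL helpers from their modules
// (paths are illustrative, not necessarily the real file layout).
export { LLMValidator } from "./validator"
export { maybe, type Maybe } from "./maybe"

// src/index.ts (sketch) - surface the DSL helpers and the extended client type
// at the package root so consumers can import them directly.
export * from "./dsl"
export { type OAIClientExtended } from "./instructor"
```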

Generated with ❤️ by ellipsis.dev

changeset-bot commented Mar 4, 2024

⚠️ No Changeset found

Latest commit: baee712

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types


@ellipsis-dev (bot) left a comment


👍 Looks good to me!

  • Reviewed the entire pull request up to 070f1ea
  • Looked at 36 lines of code in 3 files
  • Took 58 seconds to review
More info
  • Skipped 0 files when reviewing.
  • Skipped posting 1 additional comment because it didn't meet the confidence threshold of 70%.
1. /src/index.ts:4:
  • Assessed confidence: 0%
  • Comment:
    Good job on exporting the necessary types and functions. This will make them available for use in other parts of the codebase or by other modules that import this one.
  • Reasoning:
    The PR author is exporting the types Maybe and OAIClientExtended along with the LLMValidator and maybe functions. This is a good practice as it allows these types and functions to be used in other parts of the codebase or by other modules that import this one. The changes made in this PR are straightforward and there doesn't seem to be any logical, performance, or security bugs introduced. The code is also DRY and the variable and function names are descriptive. The PR description is clear and matches the changes made.

Workflow ID: wflow_tAu8X0oeIWowDAke


Not what you expected? You can customize the content of the reviews using rules.

@roodboi (Collaborator) commented Mar 4, 2024

This looks good - I have been building custom validators like this too - the approach we use here currently works fine as an example and for the tests - but personally I'm not a big fan of requiring an instructor instance to be passed in and having messages hard-coded like we have in here.

The big issue though with this is streaming - we run continuous validation during a stream and expose an isValid - which lets you have some understanding of the state of the response before it completes.

if you were to use this with stream true it would trigger a shit ton of requests, right?
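Roughly the shape of the concern, as a self-contained sketch (this is not the library's actual internals, and `askModelToValidate` is a made-up stand-in for the extra LLM call):

```ts
import { z } from "zod"

// Hypothetical LLM-backed check: in a real LLM validator this would be one extra
// chat-completion call per invocation. Stubbed here so the sketch stands alone.
async function askModelToValidate(value: string): Promise<{ isValid: boolean; reason: string }> {
  return { isValid: value.length > 0, reason: "empty answer" }
}

// The validator is attached to the schema as an async refinement.
const statement = z.object({
  answer: z.string().superRefine(async (value, ctx) => {
    const { isValid, reason } = await askModelToValidate(value)
    if (!isValid) ctx.addIssue({ code: z.ZodIssueCode.custom, message: reason })
  }),
})

// With stream: true the client re-parses the accumulated partial object on each
// chunk to expose `isValid`; every re-parse re-runs the refinement, so a response
// streamed in N chunks fans out into roughly N extra completion calls.
```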

@mattzcarey (Contributor, Author) commented:


Fair. I wasn't going to use it with streaming so I didn't consider it. Do you guys have a plan for that in the future, such as disabling validation params or waiting until the last message?

I'm not super clued up on how the zod functions work. For now I have just exported the OAIExtended type. I hope that works for you.
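For what it's worth, the kind of downstream usage this unblocks looks roughly like the sketch below (the package name and factory call follow the project README as I understand it; treat the details as assumptions):

```ts
import OpenAI from "openai"
import Instructor from "@instructor-ai/instructor"
// The type being exported in this PR (name per the thread; top-level path assumed).
import type { OAIClientExtended } from "@instructor-ai/instructor"

// Hypothetical downstream helper: without the top-level export there is no clean
// way to write this return type, since the type was previously internal.
function makeClient(openai: OpenAI): OAIClientExtended {
  return Instructor({ client: openai, mode: "FUNCTIONS" })
}
```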

@mattzcarey changed the title from "chore(exports): export LLMValidator and useful types" to "chore(exports): export OAIExtended type" on Mar 4, 2024
@roodboi (Collaborator) commented Mar 4, 2024

Yeah, 100% - I didn't spend much time in here, so I could be wrong about the stream issue; it was just my first thought when looking through it again. I can run it locally and see, but that's my only concern tbh - if people find it valuable then I'm all about it - as long as we can make sure folks can't get into a situation where they make like 2000 completion calls because they have stream true, lol.

roodboi previously approved these changes Mar 4, 2024
@roodboi (Collaborator) commented Mar 4, 2024

Just confirmed locally: it is definitely creating many, many additional completion calls if stream is present on the origin call.

So this is tough - we could hypothetically check for a superRefine or something and throw or warn if stream is on - but it might be expected; you might want to make some async check? But I dunno, that's tough.

Or we could extend the responseModel to have an optional "validators" map or something - you define a map of them that you could use in your schema, and then we have the ability to force stream false or throw otherwise, maybe?

I can try to spend some time thinking about it, but if you have ideas for a good way to approach it please let me know - also tagging in @ethanleifer, who wrote the validator initially.
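A rough sketch of the "validators on the response model" idea from the comment above (purely illustrative; none of these fields or helpers exist in the package today):

```ts
// Purely illustrative: carry validators on the response model so the client can
// see them up front and refuse (or silently disable) streaming when any exist.
type LLMValidatorFn = (value: unknown) => Promise<{ valid: boolean; reason?: string }>

interface ResponseModelWithValidators<S> {
  schema: S
  name: string
  validators?: Record<string, LLMValidatorFn> // keyed by field path, e.g. "answer"
}

function assertStreamable(params: { stream?: boolean }, model: ResponseModelWithValidators<unknown>) {
  if (params.stream && model.validators && Object.keys(model.validators).length > 0) {
    // One option from the thread: throw (or warn and force stream: false) so users
    // can't accidentally fan out thousands of completion calls.
    throw new Error("LLM validators are not supported with stream: true")
  }
}
```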

@mattzcarey (Contributor, Author) commented:

I prefer the idea of extending the response model. That is a key part of Instructor, so having control over that interface makes sense. Disabling validation on streaming by default also makes sense. We could also add some way to validate every n tokens continuously.

@ethanleifer (Contributor) commented:
I am leaning slightly towards keeping the validators on the zod schema, because that is how zod and instructor-py both do it. The response_model object is really just a way to pass the zod schema through to the instructor.

Maybe we make it instructor.LLMValidator so you don't have to pass through instructor separately?

I think we should either validate every n chunks or only trigger the refine on the finished model.
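A sketch of what that could look like (hypothetical API; `instructor.LLMValidator` does not exist today, and the types are illustrative):

```ts
import { z, type RefinementCtx } from "zod"

// Hypothetical shape of the proposal: the validator is created from the instructor
// instance itself, so callers don't have to pass the client in separately.
interface InstructorLike {
  LLMValidator(statement: string): (value: string, ctx: RefinementCtx) => Promise<void>
}

// Usage sketch, given such an instance:
function buildSchema(instructor: InstructorLike) {
  return z.object({
    answer: z.string().superRefine(instructor.LLMValidator("must be polite and accurate")),
  })
}
// Per the last point, the refinement would only run on the finished model
// (or every n chunks) rather than on every streamed partial.
```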

@roodboi merged commit f365a41 into instructor-ai:main on Mar 6, 2024
2 checks passed