
OpenAI error handling for RateLimit, Timeout, and TryAgain errors (fix #1255) #1361

Merged

Conversation

aleric-cusher
Contributor

Description

With these changes, the agent will no longer stop when the openai library raises a RateLimit, Timeout, or TryAgain error; instead, it will retry the call.

Related Issues

#1255

Solution and Design

The agent retries the API call up to 5 times with an exponential backoff strategy, waiting a minimum of 30 seconds and a maximum of 300 seconds between attempts.
If the API call still fails after 5 attempts, the error is returned.
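The retry behavior described above can be sketched with the standard library alone. This is a minimal illustration, not the PR's actual code (which uses the tenacity library); `RateLimitError`, `backoff_wait`, and `call_with_retry` are hypothetical names, and the real change also retries openai's Timeout and TryAgain errors.

```python
import random

# Hypothetical stand-in for openai.error.RateLimitError.
class RateLimitError(Exception):
    pass

MAX_ATTEMPTS = 5
MIN_WAIT, MAX_WAIT = 30, 300  # seconds, per the PR description

def backoff_wait(attempt):
    """Random exponential wait clamped to [MIN_WAIT, MAX_WAIT],
    in the spirit of tenacity's wait_random_exponential."""
    return min(MAX_WAIT, max(MIN_WAIT, random.uniform(0, 2 ** attempt)))

def call_with_retry(api_call, sleep=lambda seconds: None):
    """Retry api_call up to MAX_ATTEMPTS times; if it still fails,
    return the error as a dictionary, as the PR describes."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            return api_call()
        except RateLimitError as err:
            if attempt == MAX_ATTEMPTS:
                return {"error": type(err).__name__, "message": str(err)}
            sleep(backoff_wait(attempt))
```

Passing `sleep` as a parameter (a no-op by default here) mirrors how the real tests avoid waiting out the 30-300 second backoff.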

Test Plan

The code is tested by mocking the API calls to openai so that they raise the errors mentioned above, then checking that the function calls the API the specified number of times and returns the expected error as a dictionary object.
To reduce the time taken to test, the tenacity library's wait_random_exponential function is mocked to wait only 0.1 seconds.
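The test plan above can be sketched with `unittest.mock`. This is a self-contained illustration under assumptions, not the PR's actual test suite: `RateLimitError` and `chat_completion` are hypothetical stand-ins, and the shortened 0.1-second wait is passed directly rather than patched into tenacity.

```python
import time
from unittest.mock import MagicMock

# Hypothetical stand-in for the openai error being simulated.
class RateLimitError(Exception):
    pass

def chat_completion(api_call, attempts=5, wait_seconds=0.1):
    """Hypothetical wrapper mirroring the PR's behavior: retry up to
    `attempts` times, then return the error as a dictionary."""
    for _ in range(attempts):
        try:
            return api_call()
        except RateLimitError as err:
            last = err
            time.sleep(wait_seconds)  # shortened from 30-300s for tests
    return {"error": type(last).__name__, "message": str(last)}

# Mock the API call so it always raises, as described in the test plan.
mock_api = MagicMock(side_effect=RateLimitError("quota exceeded"))
result = chat_completion(mock_api)
```

The test then asserts on `mock_api.call_count` and on the returned dictionary, matching the checks described in the plan.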

Type of change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Docs update

Checklist

  • My pull request is atomic and focuses on a single change.
  • I have read the contributing guide and my code conforms to the guidelines.
  • I have documented my changes clearly and comprehensively.
  • I have added the required tests.

…le and its tests

- Adds test for rate limit error handling in the llms/openai module
- Adds error handling for rate limit error in the llms/openai module
- Refactors code in llms/openai module to be readable and modular
…and its test

- Adds test for timeout error handling in chat_completion in llms/openai module
- Adds error handling for openai's timeout error in chat_completion in llms/openai module
…e and its test

- Adds test for openai's try again error handling in chat_completion in llms/openai module
- Adds error handling for openai's try again error in chat_completion in llms/openai module

CLAassistant commented Nov 8, 2023

CLA assistant check
All committers have signed the CLA.


codecov bot commented Nov 9, 2023

Codecov Report

Attention: 1 line in your changes is missing coverage. Please review.

Comparison is base (4afbd7c) 58.66% compared to head (b6d0720) 58.73%.
Report is 2 commits behind head on main.

❗ Current head b6d0720 differs from pull request most recent head bdd9660. Consider uploading reports for the commit bdd9660 to get more accurate results

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1361      +/-   ##
==========================================
+ Coverage   58.66%   58.73%   +0.06%     
==========================================
  Files         230      230              
  Lines       11188    11202      +14     
  Branches     1206     1209       +3     
==========================================
+ Hits         6563     6579      +16     
+ Misses       4289     4286       -3     
- Partials      336      337       +1     
Files Coverage Δ
superagi/llms/openai.py 68.05% <94.44%> (+11.15%) ⬆️

☔ View full report in Codecov by Sentry.

@luciferlinx101 luciferlinx101 merged commit 240d05d into TransformerOptimus:main Dec 13, 2023
1 check failed