I'm calling instructor roughly as follows, and when doing several hundred concurrent requests, I'm seeing large exception stacktraces in my logs.
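(The snippet below is an illustrative sketch rather than my exact code: the model name, the `ExtractionResult` response model, and the 60-second timeout are placeholders, and `max_retries=3` mirrors the docs example.)

```python
import asyncio

import instructor
from openai import AsyncOpenAI
from pydantic import BaseModel


class ExtractionResult(BaseModel):  # placeholder response model
    summary: str


# Patch the async OpenAI client so create() accepts response_model / max_retries.
# (Could also be instructor.patch(AsyncOpenAI()), depending on instructor version.)
client = instructor.from_openai(AsyncOpenAI())


async def _run_extraction(text: str) -> ExtractionResult:
    return await asyncio.wait_for(
        client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            response_model=ExtractionResult,
            max_retries=3,
            messages=[{"role": "user", "content": text}],
        ),
        timeout=60,  # placeholder timeout
    )


async def main(texts: list[str]) -> list[ExtractionResult]:
    # Several hundred of these run concurrently
    return await asyncio.gather(*(_run_extraction(t) for t in texts))
```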
Here's a small snippet of the stacktrace, where each "The above exception" frame is largely identical and repeated 850+ times.
```
The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/workspace/myapp/.heroku/python/lib/python3.11/site-packages/instructor/retry.py", line 222, in retry_async
    response: ChatCompletion = await func(*args, **kwargs)
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/myapp/.heroku/python/lib/python3.11/site-packages/instructor/patch.py", line 161, in new_create_async
    response = await retry_async(
               ^^^^^^^^^^^^^^^^^^
  File "/workspace/myapp/.heroku/python/lib/python3.11/site-packages/instructor/retry.py", line 248, in retry_async
    raise InstructorRetryException(
instructor.exceptions.InstructorRetryException: Connection error.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/workspace/myapp/.heroku/python/lib/python3.11/site-packages/instructor/retry.py", line 217, in retry_async
    async for attempt in max_retries:
  File "/workspace/myapp/.heroku/python/lib/python3.11/site-packages/tenacity/asyncio/__init__.py", line 166, in __anext__
    do = await self.iter(retry_state=self._retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/myapp/.heroku/python/lib/python3.11/site-packages/tenacity/asyncio/__init__.py", line 153, in iter
    result = await action(retry_state)
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/myapp/.heroku/python/lib/python3.11/site-packages/tenacity/_utils.py", line 99, in inner
    return call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/myapp/.heroku/python/lib/python3.11/site-packages/tenacity/__init__.py", line 419, in exc_check
    raise retry_exc from fut.exception()
tenacity.RetryError: RetryError[<Future at 0x7ec68d12c490 state=finished raised InstructorRetryException>]

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/workspace/myapp/.heroku/python/lib/python3.11/site-packages/myapp/ai/gen2.py", line 98, in _run_extraction
    response = await asyncio.wait_for(
               ^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/myapp/.heroku/python/lib/python3.11/asyncio/tasks.py", line 490, in wait_for
    return fut.result()
           ^^^^^^^^^^^^
  File "/workspace/myapp/.heroku/python/lib/python3.11/site-packages/instructor/patch.py", line 161, in new_create_async
    response = await retry_async(
               ^^^^^^^^^^^^^^^^^^
  File "/workspace/myapp/.heroku/python/lib/python3.11/site-packages/instructor/retry.py", line 248, in retry_async
    raise InstructorRetryException(
instructor.exceptions.InstructorRetryException: Connection error.
```
The docs mention:

> We set the maximum number of retries to 3. This means that if the model returns an error, we'll reask the model up to 3 times.
Does that cover connection timeouts as well? If not, and I wanted to stop retrying after X connection timeouts, would this be possible via a setting or a callback handler?
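For example, since the traceback shows the retries being driven by tenacity (`async for attempt in max_retries:`), and the docs suggest `max_retries` can also take a tenacity `Retrying`/`AsyncRetrying` object, would something along these lines be the intended way to do it? This is just a sketch based on my reading; the `APIConnectionError`/`APITimeoutError` names are from the openai package and may not be what instructor actually surfaces here.

```python
from openai import APIConnectionError, APITimeoutError
from tenacity import AsyncRetrying, retry_if_not_exception_type, stop_after_attempt


async def extract_without_connection_retries(text: str) -> ExtractionResult:
    # Reuses `client` and `ExtractionResult` from the sketch above.
    # Guess: stop after 3 attempts, and skip retries entirely for
    # connection-level errors (exception types not verified against
    # what instructor re-raises internally).
    policy = AsyncRetrying(
        stop=stop_after_attempt(3),
        retry=retry_if_not_exception_type((APIConnectionError, APITimeoutError)),
    )
    return await client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        response_model=ExtractionResult,
        max_retries=policy,
        messages=[{"role": "user", "content": text}],
    )
```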