Not sure if you've run into this before, but when using job chaining with large chains, I'm seeing `Operation timed out after 20003 milliseconds with 0 bytes received` from cURL.
Have you run into this before?
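For context, a minimal sketch of the kind of dispatch that can trigger this (the `ProcessItem` job class and the chain length are illustrative, not from the report; assuming a standard Laravel setup): the whole chain is serialized into a single queued payload, so the payload, and the request that creates the Cloud Tasks task, grows with the chain length.

```php
<?php

use Illuminate\Support\Facades\Bus;

// Illustrative only: a long chain like this is serialized into one
// payload, so the task-creation request grows with the chain length.
// ProcessItem is a hypothetical ShouldQueue job class.
Bus::chain(
    collect(range(1, 500))
        ->map(fn (int $i) => new ProcessItem($i))
        ->all()
)->dispatch();
```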
Thanks @marickvantuil. I'm currently looking on a separate branch to see what the cause could be.
I'm wondering whether a big chained job is causing the underlying cURL request to Google Cloud Tasks to time out, since the Cloud Run containers that run the jobs have a 600-second timeout, with nginx and PHP matching.
@Kyon147 Just for full transparency: which version of the package are you using, and on which platform is the application hosted (App Engine / Cloud Run / something else)?
I first thought it could be slow because of Cloud Tasks' deduplication feature, but even with 500+ tasks in the queue, it only takes around 500ms to create a task. Could it be some Google quota somewhere?
I've also experimented with job chains, but I run into `{ "message": "Task size too large", "code": 3, "status": "INVALID_ARGUMENT", "details": [] }` errors even before anything times out.
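One possible workaround for that error (a sketch, not something the package provides): instead of serializing the whole chain into a single task, let each job dispatch its successor, so every individual Cloud Tasks payload stays small regardless of how many steps the sequence has. `ProcessItem` and its parameters are hypothetical names for illustration.

```php
<?php

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;

// Sketch: a self-chaining job. Each queued payload only contains this
// job's own small state, never the remaining chain, so it stays well
// under any task-size limit. ProcessItem is a hypothetical job class.
class ProcessItem implements ShouldQueue
{
    use Dispatchable, Queueable;

    public function __construct(
        public int $current,
        public int $last,
    ) {}

    public function handle(): void
    {
        // ... do the work for step $this->current ...

        // Dispatch the next step as a fresh, independent task.
        if ($this->current < $this->last) {
            self::dispatch($this->current + 1, $this->last);
        }
    }
}

// Kick off the sequence:
ProcessItem::dispatch(1, 500);
```

The trade-off versus `Bus::chain` is that you lose the built-in chain semantics (e.g. `catch` callbacks on the chain), so failure handling has to live in the job itself.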