Current Behavior
Large appends to large files can result in the error "failed to append to file: Error: Timeout awaiting 'request' for 3000ms".
Reading back large files can also time out.
Expected Behavior
As long as the append is within the Fastify POST body limits and the resulting file store size is below the quota limit, the timeout should not occur.
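For reference, the relevant Fastify setting is `bodyLimit` (the default is 1MiB). A minimal sketch of raising it; the 50MB figure is an assumption, not the value the file-server actually uses:

```js
// Sketch only, not FlowFuse's actual configuration: fastify rejects
// request bodies over `bodyLimit`, so a file-server accepting multi-MB
// appends has to raise it above the 1MiB default.
const fastify = require('fastify')({
    bodyLimit: 50 * 1024 * 1024 // accept request bodies up to ~50MiB (assumed figure)
})
```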
Consideration should be given to the length of time the S3 storage takes. For example, we could simply max out the request timeout, but users would then start experiencing long delays between a file operation and the flow continuing.
One possible interim (containment) option might be to add an explicit "timeout" setting to the FF NR File node config that permits the user to see (and adjust) it if they encounter timeouts. Not ideal, but the presence of the timeout option would go some way (implicitly) towards explaining what can be done about any timeouts incurred.
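The "Timeout awaiting 'request' for 3000ms" wording matches got's TimeoutError, so one way to wire this up would be to surface got's per-request timeout in the node config. A sketch under that assumption; `timeoutMs` coming from a node setting is hypothetical:

```js
// Sketch only: assumes the file node talks to the file-server over HTTP
// via `got`, and that the node config gains a user-adjustable timeout.
const got = require('got')

async function appendToFile (url, body, timeoutMs = 3000) {
    return got.post(url, {
        body,
        // Raise got's overall 'request' timeout from the 3000ms default
        // to whatever the user configured on the node.
        timeout: { request: timeoutMs }
    })
}
```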
Other far-reaching possibilities include:
- employing something other than the S3 backend: is it just too slow for the purpose, or is it our code?
- adding performance logging in the relevant areas of the code to determine where the bottlenecks are (see the timing sketch after this list)
- using an alternative transport
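For the performance-logging idea, a hypothetical helper along these lines would narrow down where the time goes; the label names and call site are illustrative, not existing code paths:

```js
// Wrap an async operation and log how long it took.
async function timed (label, fn) {
    const start = process.hrtime.bigint()
    try {
        return await fn()
    } finally {
        const ms = Number(process.hrtime.bigint() - start) / 1e6
        console.log(`[file-node] ${label}: ${ms.toFixed(1)}ms`)
    }
}

// Usage, e.g.: await timed('s3 append', () => appendToFile(url, body))
```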
Steps To Reproduce
1. Add the (below) flow to a cloud instance.
2. Operate the 5MB button and wait for the debug message.
3. Repeat step 2 eight times, until the file is 40MB.
4. Note how the time to perform the 5MB append increases with each operation.
5. Wait 1 minute.
6. Operate the Read inject.
7. You should get a timeout; if not, try adding some more 5MB parts (step 2).
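The flow JSON referenced in step 1 is not included here. As an approximation only, a Function node like the following would feed the file node a 5MB payload per button press:

```js
// Hypothetical Function-node body: generate a 5MB buffer to append.
msg.payload = Buffer.alloc(5 * 1024 * 1024, 'x') // 5MB of 'x'
return msg
```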
Environment