java.lang.IllegalStateException: The service request was not made within 10 seconds of doBlockingWrite being invoked. Make sure to invoke the service request BEFORE invoking doBlockingWrite if your caller is single-threaded. #4893
Comments
Hi @ramakrishna-g1 apologies for the silence. We identified an issue with multipart uploads using BlockingInputStream where the client enters a bad state and doesn't recover from it. We are working on a fix. We'll also consider creating a timeout configuration so this default value can be customized. Will keep this updated with progress of the fix. |
Would this apply when using ...? I am running into a similar issue. Also, any ETA on a fix? Thx |
Running into this issue with BlockingInputStreamAsyncRequestBody instead of the Output body. Default S3Async setup and creds.
|
Hey all, we've exposed an option to allow users to configure the subscribe timeout.
|
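For illustration, a minimal sketch of how the new option might be applied to the output-stream body (this assumes the builder-style API added by the change referenced in the comments below; the exact method names are an assumption, and the default timeout is 10 seconds):
import java.time.Duration;
import software.amazon.awssdk.core.async.BlockingOutputStreamAsyncRequestBody;

class OutputBodyTimeoutSketch {
    // Builds a blocking output-stream request body with a longer subscribe timeout.
    BlockingOutputStreamAsyncRequestBody bodyWithLongerTimeout(Long contentLength) {
        return BlockingOutputStreamAsyncRequestBody.builder()
                .contentLength(contentLength)              // may be null if the size is unknown
                .subscribeTimeout(Duration.ofSeconds(30))  // raise the default 10-second limit
                .build();
    }
}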
In which version is the fix available? |
2.25.8 |
This issue describes the timeout problem in the BlockingInputStreamAsyncRequestBody, but the change made by #5000 adds the configuration option to BlockingOutputStreamAsyncRequestBody. Is a similar configuration option going to be exposed for the BlockingInputStreamAsyncRequestBody as well? |
Hi @nredhefferprovidertrust, #4893 was created to add the same config for BlockingInputStreamAsyncRequestBody. |
What is the best fail-safe value for .subscribeTimeout() in a PROD environment where we are uploading thousands of messages per minute? |
Hi @zoewangg, I see that you have provided an option to extend the timeout which is good. But it still doesn't solve the original issue of the client going into an unhealthy state. So is there going to be a fix for that? |
Yes, we have a task in our backlog to fix the issue. No ETA to share at the moment. |
Hello! Are there any updates on this issue, or any recommendation on how to avoid it? I am using the CRT client along with BlockingInputStreamAsyncRequestBody. I notice it happens when S3 starts returning backoff and throttling responses. The client enters an unrecoverable bad state and everything after that throws the timeout error described by the OP. For now, I replaced the implementation with AsyncRequestBody.fromInputStream(inputStream, fileSize, executorService) (see the sketch after this snippet). This, however, requires a separate executor service, which does not make much sense for us, as we will just block the calling thread uploading the file anyway. The code we were using:
private void copyFile(URL source, String destination, long fileSize) throws IOException {
try (InputStream inputStream = urlStreamReader.read(source)) {
BlockingInputStreamAsyncRequestBody requestBody =
AsyncRequestBody.forBlockingInputStream(fileSize);
Upload upload = transferManager.upload(
UploadRequest.builder()
.putObjectRequest(PutObjectRequest.builder()
.bucket(s3BucketName)
.expectedBucketOwner(s3BucketOwner)
.key(destination)
.build())
.requestBody(requestBody)
.build());
// Blocks calling thread
requestBody.writeInputStream(inputStream);
upload.completionFuture().join();
}
} |
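For reference, a minimal sketch of the fromInputStream alternative mentioned in the comment above. It is not the poster's exact code; transferManager, bucket, key, and the stream parameters are placeholders.
import java.io.InputStream;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import software.amazon.awssdk.core.async.AsyncRequestBody;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.transfer.s3.S3TransferManager;
import software.amazon.awssdk.transfer.s3.model.Upload;
import software.amazon.awssdk.transfer.s3.model.UploadRequest;

class FromInputStreamWorkaround {
    void copy(S3TransferManager transferManager, InputStream inputStream,
              long fileSize, String bucket, String key) {
        // The executor reads the stream, so the caller never blocks waiting for the
        // subscription that triggers the 10-second timeout.
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            AsyncRequestBody requestBody =
                    AsyncRequestBody.fromInputStream(inputStream, fileSize, executor);
            Upload upload = transferManager.upload(UploadRequest.builder()
                    .putObjectRequest(PutObjectRequest.builder().bucket(bucket).key(key).build())
                    .requestBody(requestBody)
                    .build());
            upload.completionFuture().join();
        } finally {
            executor.shutdown();
        }
    }
}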
Hi @debora-ito, are there any updates on the fix for BlockingInputStreamAsyncRequestBody? |
Was able to bypass the issue using this approach (Scala):
val body: BlockingInputStreamAsyncRequestBody = BlockingInputStreamAsyncRequestBody.builder().subscribeTimeout(Duration.ofSeconds(30)).build()
val upload: Upload = transferManager.upload(UploadRequest.builder().requestBody(body).putObjectRequest(request).build())
body.writeInputStream(inputStream) |
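For Java users (most of this thread is Java), the same workaround would look roughly like the sketch below. The builder call mirrors the Scala snippet above; the bucket, key, and stream names are placeholders.
import java.io.InputStream;
import java.time.Duration;
import software.amazon.awssdk.core.async.BlockingInputStreamAsyncRequestBody;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.transfer.s3.S3TransferManager;
import software.amazon.awssdk.transfer.s3.model.Upload;
import software.amazon.awssdk.transfer.s3.model.UploadRequest;

class SubscribeTimeoutWorkaround {
    void upload(S3TransferManager transferManager, InputStream inputStream,
                String bucket, String key) {
        BlockingInputStreamAsyncRequestBody body = BlockingInputStreamAsyncRequestBody.builder()
                .subscribeTimeout(Duration.ofSeconds(30))   // default is 10 seconds
                .build();
        Upload upload = transferManager.upload(UploadRequest.builder()
                .requestBody(body)
                .putObjectRequest(PutObjectRequest.builder().bucket(bucket).key(key).build())
                .build());
        body.writeInputStream(inputStream);                 // blocks the calling thread
        upload.completionFuture().join();
    }
}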
I also get this exception seemingly "randomly" with code that looks like this (the CRT client, as I understand it, is required to do streaming with unknown size),
and I would really like to find a work-around, as this is too fragile to use in production... For me, specifying the size in advance is not an option (that the size is unknown is the whole point of using streaming for me, as the data is generated over some time and the total data size may be far larger than I could hold in memory). |
I used your method, and it still has this problem. |
Is this still an ongoing issue? |
Unless the code pasted above has some error, it seems to me there is still some intermittent problem with the SDK implementation of S3 streaming... |
It seems that when an S3 API exception such as a 403 happens, the async client does not try to consume (subscribe to) the BlockingInputStreamAsyncRequestBody. |
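If that is what is happening, one way to surface the underlying S3 error instead of the misleading timeout is sketched below. This is an illustration only: upload and body stand in for an in-flight Transfer Manager upload and its request body, and it assumes the completion future completes exceptionally when the request is rejected.
import java.io.InputStream;
import software.amazon.awssdk.core.async.BlockingInputStreamAsyncRequestBody;
import software.amazon.awssdk.transfer.s3.model.Upload;

class SurfaceUnderlyingError {
    void writeBody(Upload upload, BlockingInputStreamAsyncRequestBody body, InputStream in) {
        try {
            body.writeInputStream(in);   // times out if the request failed (e.g. 403) before subscribing
        } catch (IllegalStateException subscribeTimeout) {
            if (upload.completionFuture().isCompletedExceptionally()) {
                upload.completionFuture().join();   // rethrows the underlying S3 error instead of the timeout
            }
            throw subscribeTimeout;
        }
        upload.completionFuture().join();           // wait for the upload to finish in the normal case
    }
}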
Hello, has any fix been made to BlockingInputStreamAsyncRequestBody, such as an option to extend the timeout? |
I have more or less given up on using streaming upload to S3 due to this annoying problem. |
Had to roll back our AWS v2 SDK migration because of this issue. A complete show-stopper, as our main use case involves pushing very large multipart uploads to S3. Not a problem for the v1 S3TransferManager, but it is for the recommended CRT-based S3TransferManager as per https://github.com/awsdocs/aws-doc-sdk-examples/blob/9f83508fe18af7380875cd6d18b6921ba95e85c3/javav2/example_code/s3/src/main/java/com/example/s3/transfermanager/UploadStream.java#L50 |
Thanks for pointing out that v1 works. Can you upload data of unknown size (a stream) with v1 as you can with v2? I would really like to see this bug get some attention - not sure if it would help to report it to AWS support... |
@stephenmontgomery @javafanboy I can't reproduce the error with the sample code provided in the top comment. Do you have self-contained code that reproduces the issue reliably? Increasing subscribeTimeout may help mitigate it in the meantime. |
The problem is that the error is intermittent. When running at a high rate it used to hit me at least within hours, often within as little as 15-20 minutes. I have no stand-alone code I can share on GitHub at the moment, but I have more or less used the example code provided by AWS to upload to S3 with unknown data size. Do you have code that uploads with unknown data size and runs reliably? I could try to integrate it into my use case, see how it differs from my code, and see if it solves the problem. I am also pretty sure my code is not waiting 10 seconds at any point, as I am pushing data as fast as I generate it, so the error message is most likely wrong (i.e. something else is causing it, not a timeout). |
I can try to reproduce again, will leave it running for a couple of hours.
- For repro purposes, is there a pattern on the file size being transferred when the error happens?
- Double-checking: are you using a recent SDK version? Which version?
- Have you tried to increase subscribeTimeout to mitigate?
|
I do not think I have tried changing subscribeTimeout (I have, however, over time played with a lot of options, both in the HTTP component, the transfer manager, etc., without keeping proper notes of them all, so I am not 100% sure) - where is this option documented, how is it applied, and what exactly does it do? As I mentioned, I am quite sure I am never waiting even a second between writes when streaming data as long as an upload to S3 is in progress, so I am not sure how changing a timeout (long or short) would help unless it is a work-around for a known bug.
The amount of data in each upload can vary quite a lot, from a few megabytes to probably 100+. I mostly have 1 Gbit upload speed and run on an M3 Mac when I do my testing.
|
I'm running into this regularly while trying to use the Transfer Manager. I pared down my app code into a small test app that can reliably reproduce the error: https://github.com/ward-eric/transfer-manager-test For whatever reason I had to really ramp up the thread count in the test app, but in the production app I hit it with only 4-5 threads. |
I am also running multi-threaded, with a varying number of threads sharing the same HTTP client and transfer managers. |
Describe the bug
The service request was not made within 10 seconds of doBlockingWrite being invoked. Make sure to invoke the service request BEFORE invoking doBlockingWrite if your caller is single-threaded.
at software.amazon.awssdk.core.async.BlockingInputStreamAsyncRequestBody.waitForSubscriptionIfNeeded(BlockingInputStreamAsyncRequestBody.java:110) ~[sdk-core-2.22.2.jar!/:na]
at software.amazon.awssdk.core.async.BlockingInputStreamAsyncRequestBody.writeInputStream(BlockingInputStreamAsyncRequestBody.java:74) ~[sdk-core-2.22.2.jar!/:na]
Expected Behavior
We are experiencing these failures very often, even after using the latest AWS CRT client "aws-crt-client" 2.23.12.
We expect this to wait for a longer time, or there should be an option to increase the timeout, which would be helpful when there is a huge amount of data with large files.
Current Behavior
We are trying to stream a large number of files from a source system to AWS S3 using the Transfer Manager, reading the stream with HttpURLConnection. Below is sample code:
URL targetURL = new URL("URL");
HttpURLConnection urlConnection = (HttpURLConnection) targetURL.openConnection();
urlConnection.setRequestMethod(HttpMethod.GET.toString());
urlConnection.setRequestProperty(HttpHeaders.ACCEPT, MediaType.ALL_VALUE);
if (urlConnection.getResponseCode() == HttpStatus.OK.value()) {
BlockingInputStreamAsyncRequestBody body = AsyncRequestBody.forBlockingInputStream(null);
Upload upload = transferManager.upload(builder -> builder
.requestBody(body)
.addTransferListener(UploadProcessListener.create(fileTracker.getPath()))
.putObjectRequest(req -> req.bucket(s3BucketName).key("v3/" + s3Key + "/" + fileTracker.getPath()))
.build());
}
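The snippet ends before the blocking write shown in the stack trace above. Presumably the calling code continues, inside the same block, with something along these lines (an assumption for illustration, not part of the original report):
    // hypothetical continuation: write the HTTP response stream into the body
    body.writeInputStream(urlConnection.getInputStream()); // blocks; the 10-second timeout in the stack trace is thrown here
    upload.completionFuture().join();                      // wait for the transfer to finish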
Reproduction Steps
URL targetURL = new URL("URL");
HttpURLConnection urlConnection = (HttpURLConnection) targetURL.openConnection();
urlConnection.setRequestMethod(HttpMethod.GET.toString());
urlConnection.setRequestProperty(HttpHeaders.ACCEPT, MediaType.ALL_VALUE);
if (urlConnection.getResponseCode() == HttpStatus.OK.value()) {
BlockingInputStreamAsyncRequestBody body = AsyncRequestBody.forBlockingInputStream(null);
Upload upload = transferManager.upload(builder -> builder
.requestBody(body)
.addTransferListener(UploadProcessListener.create(fileTracker.getPath()))
.putObjectRequest(req -> req.bucket(s3BucketName).key("v3/" + s3Key + "/" + fileTracker.getPath()))
.build());
}
Possible Solution
No response
Additional Information/Context
Last week I created a ticket (awslabs/aws-crt-java#754) under aws-crt-java; as per the suggestion/comments there, I am creating this ticket here.
AWS Java SDK version used
2.23.12
JDK version used
11
Operating System and version
Windows / Linux