`--profile` does not correctly use `credential_process` (Appstream) #389
Comments
Thank you for reporting the issue. I tested running Mountpoint in Appstream with this command and it works fine for me.
We will look into this problem and provide updates here.
The PR was related to this issue and closed it prematurely; we are continuing to look into this issue and will fix it.
…ide (#830)

* Increase default max retries and expose environment variable to override

We were using the SDK's default retry configuration (actually, slightly wrong -- it's supposed to be 3 total attempts, but we configured 3 *retries*, so 4 attempts). This isn't a good default for file systems, as it works out to only retrying for about 2 seconds before giving up, and applications are rarely equipped to gracefully handle transient errors. This change increases the default to 10 total attempts, which takes about a minute on average. This is in the same ballpark as NFS's defaults (3 attempts, 60 seconds linear backoff), though still a little more aggressive. There's probably scope to go even further (20?), but this is a reasonable step for now.

To allow customers to further tweak this, the S3CrtClient now respects the `AWS_MAX_ATTEMPTS` environment variable, and its value overrides the defaults. This is only a partial solution, as SDKs are supposed to also respect the `max_attempts` config file setting, but we don't have any of the infrastructure for that today (similar issue as #389).

Signed-off-by: James Bornholt <bornholt@amazon.com>

* Surprised Clippy doesn't yell about this

Signed-off-by: James Bornholt <bornholt@amazon.com>

---------

Signed-off-by: James Bornholt <bornholt@amazon.com>
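As a rough sketch of how the override described in that commit could be used: the attempt count, bucket name, and mount point below are illustrative placeholders, not values from this issue.

```sh
# Hypothetical example: raise the retry budget for a single mount.
# AWS_MAX_ATTEMPTS counts total attempts (the initial try plus retries).
AWS_MAX_ATTEMPTS=20 mount-s3 --read-only --region us-west-2 BUCKET_NAME /mnt/
```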
This issue was fixed in Mountpoint v1.9.0. The credential process provider in the CRT is now included as part of the profile provider used by Mountpoint.
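For context, a `credential_process`-based profile of the kind this issue is about looks roughly like the sketch below. The credential helper path is a made-up placeholder; only the profile name and region come from the report.

```sh
# Hypothetical ~/.aws/config entry; the credential helper path is an example only.
cat >> ~/.aws/config <<'EOF'
[profile appstream_machine_role]
credential_process = /usr/local/bin/get-appstream-credentials
region = us-west-2
EOF

# With the fix in v1.9.0, the profile's credential_process should be honored.
mount-s3 --read-only --profile appstream_machine_role --region us-west-2 BUCKET_NAME /mnt/
```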
Mountpoint for Amazon S3 version
mountpoint-s3 0.3.0-d0ef0b9
AWS Region
us-west-2
Describe the running environment
Trying to mount an S3 bucket in Appstream using an IAM profile, I get an error.
Command:
mount-s3 --read-only --allow-other --profile appstream_machine_role --region us-west-2 -f BUCKET_NAME /mnt/
What happened?
Error:
Caused by:
0: HeadBucket failed for bucket BUCKET_NAME in region us-west-2
1: Client error
2: Unknown response error: MetaRequestResult { response_status: 0, crt_error: Error(6146, "aws-c-auth: AWS_AUTH_SIGNING_NO_CREDENTIALS, Attempt to sign an http request without credentials"), error_response_headers: None, error_response_body: None }
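The `AWS_AUTH_SIGNING_NO_CREDENTIALS` error above means the CRT client never resolved credentials for the profile. One way to check that the profile itself resolves outside Mountpoint (a diagnostic sketch, not part of the original report):

```sh
# If this succeeds, the profile and its credential_process are working;
# the failure is then specific to how Mountpoint loads the profile.
aws sts get-caller-identity --profile appstream_machine_role --region us-west-2
```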
Relevant log output