
NodeJS AWS SDK The specified bucket does not exist (also weird aws s3 ls behavior) #202

Closed
yonilerner opened this issue Jan 3, 2020 · 8 comments

@yonilerner

Server is running via docker like:

docker run -e initialBuckets=bucket-1,bucket-2 -e root=/tmp/buckets -p 9090:9090 -p 9191:9191 -v /tmp/buckets:/tmp/buckets -t adobe/s3mock

(Version running is 2.1.18, which is latest as of this writing)

I'm able to:

  • Upload a Dockerfile
  • Confirm I can download it again
  • Confirm I can see the item listed with list-objects-v2
aws --endpoint-url http://localhost:9090 s3 cp Dockerfile s3://bucket-1/Dockerfile
aws --endpoint-url http://localhost:9090 s3 cp s3://bucket-1/Dockerfile ./s
aws --endpoint-url http://localhost:9090 s3api list-objects-v2 --bucket bucket-1

Here's where the bugs start:

  • aws --endpoint-url http://localhost:9090 s3 ls bucket-1 isn't showing anything other than what's in the attached screenshot

  • The Node.js aws-sdk's AWS.S3.putObject always throws "NoSuchBucket: The specified bucket does not exist" (with the server printing "Responding with status 404: The specified bucket does not exist.")

  • AWS.S3.listObjectsV2 isn't listing anything

My Node.js S3 client is configured like:

new aws.S3({
    accessKeyId: globals.AWS_ACCESS_KEY,
    secretAccessKey: globals.AWS_SECRET_KEY,
    endpoint: 'http://localhost:9090'
})

where the access and secret keys are just empty strings right now

My listObjectsV2 is:

const opts = {Bucket: 'bucket-1'}
const response = await s3.listObjectsV2(opts).promise()

My putObject is:

const obj = {
    Bucket: 'bucket-1',
    Body: 'some string',
    Key: 'some-key'
}
await s3.putObject(obj).promise()

Let me know if I can provide any more info. Thanks!

@timoe
Contributor

timoe commented Jan 3, 2020

Thanks for reporting; this looks like a bug.

@timoe timoe added the bug label Jan 3, 2020
@rayzr522

rayzr522 commented Jan 3, 2020

Confirmed using the steps above. When running aws --debug --endpoint-url http://localhost:9090 s3 ls bucket-1, I got the following exception:

2020-01-03 15:18:21,318 - MainThread - awscli.clidriver - DEBUG - Exception caught in main()
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/awscli/clidriver.py", line 217, in main
    return command_table[parsed_args.command](remaining, parsed_args)
  File "/usr/lib/python3/dist-packages/awscli/customizations/commands.py", line 190, in __call__
    parsed_globals)
  File "/usr/lib/python3/dist-packages/awscli/customizations/commands.py", line 187, in __call__
    return self._run_main(parsed_args, parsed_globals)
  File "/usr/lib/python3/dist-packages/awscli/customizations/s3/subcommands.py", line 486, in _run_main
    bucket, key, parsed_args.page_size, parsed_args.request_payer)
  File "/usr/lib/python3/dist-packages/awscli/customizations/s3/subcommands.py", line 514, in _list_all_objects
    self._display_page(response_data)
  File "/usr/lib/python3/dist-packages/awscli/customizations/s3/subcommands.py", line 523, in _display_page
    prefix_components = common_prefix['Prefix'].split('/')
KeyError: 'Prefix'
2020-01-03 15:18:21,318 - MainThread - awscli.clidriver - DEBUG - Exiting with rc 255

Full log attached here:
aws-debug.log

@yonilerner
Author

It appears that when you pass {Bucket: 'bucket-1'} into the JS SDK, rather than requesting localhost:9090/bucket-1, it requests bucket-1.localhost:9090. The server doesn't appear to support this syntax: navigating to /bucket-1 lists the bucket, but navigating to bucket-1.localhost:9090/ does not.
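For reference, here's a tiny sketch of the two addressing styles. The buildS3Url helper below is hypothetical (not part of the aws-sdk); it only exists to show the URL shapes involved:

```javascript
// Illustrates S3 path-style vs virtual-hosted-style addressing.
// buildS3Url is a hypothetical helper, not part of the aws-sdk.
function buildS3Url(endpoint, bucket, key, pathStyle) {
  const url = new URL(endpoint);
  if (pathStyle) {
    // Path-style: the bucket name is the first path segment.
    url.pathname = `/${bucket}/${key}`;
  } else {
    // Virtual-hosted-style: the bucket name becomes a sub-domain of the host.
    url.hostname = `${bucket}.${url.hostname}`;
    url.pathname = `/${key}`;
  }
  return url.toString();
}

console.log(buildS3Url('http://localhost:9090', 'bucket-1', 'Dockerfile', true));
// → http://localhost:9090/bucket-1/Dockerfile   (what the server expects)
console.log(buildS3Url('http://localhost:9090', 'bucket-1', 'Dockerfile', false));
// → http://bucket-1.localhost:9090/Dockerfile   (what the SDK sends by default)
```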

@yonilerner
Copy link
Author

After more digging, it seems I may be able to force the desired behavior with s3ForcePathStyle: true in the S3 SDK config.
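For anyone else hitting this, a sketch of the client config with path-style forced (aws-sdk v2; the endpoint and empty credentials mirror the config earlier in this thread):

```javascript
const aws = require('aws-sdk');

// Force path-style addressing (localhost:9090/bucket-1) instead of
// virtual-hosted-style (bucket-1.localhost:9090), which the mock cannot serve.
const s3 = new aws.S3({
    accessKeyId: '',
    secretAccessKey: '',
    endpoint: 'http://localhost:9090',
    s3ForcePathStyle: true
});
```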

@ifigotin
Copy link

ifigotin commented Jun 15, 2021

I also see the issue while using aws s3 ls locally.

I think the issue may be related to how CommonPrefixes are serialized. The botocore library seems to expect a nested Prefix under CommonPrefixes, see here:
https://github.com/boto/botocore/blob/develop/botocore/handlers.py#L734

But the output XML sometimes has CommonPrefixes as an empty tag with nothing inside. I see this on a new bucket that only has objects at the root level, e.g. running aws s3 ls on a bucket containing s3://abc/a.txt results in KeyError: 'Prefix'.

As long as there are some nested prefixes, things work fine, e.g. listing a bucket with the structure s3://abc/def/a.txt works.
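A rough JavaScript sketch of that failure mode (the response objects below are illustrative stand-ins, not botocore's actual data structures): the CLI handler indexes common_prefix['Prefix'] unconditionally, so an empty <CommonPrefixes/> tag deserializes to an entry without that key and the lookup blows up.

```javascript
// Hypothetical deserialized ListObjectsV2 responses, mimicking the two XML shapes.
const goodResponse = { CommonPrefixes: [{ Prefix: 'def/' }] }; // <CommonPrefixes><Prefix>def/</Prefix></CommonPrefixes>
const badResponse = { CommonPrefixes: [{}] };                  // <CommonPrefixes/> serialized as an empty tag

// Mirrors the botocore handler's assumption: every entry carries a Prefix key.
function displayPrefixes(response) {
  return response.CommonPrefixes.map(cp => {
    if (!('Prefix' in cp)) {
      throw new Error("KeyError: 'Prefix'"); // what the CLI traceback shows
    }
    return cp.Prefix.split('/')[0];
  });
}

console.log(displayPrefixes(goodResponse)); // [ 'def' ]
try {
  displayPrefixes(badResponse);
} catch (e) {
  console.log(e.message); // KeyError: 'Prefix'
}
```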

@afranken
Member

afranken commented Jul 1, 2021

@ifigotin this was fixed recently and has nothing to do with the original issue in this ticket. (see #257)

@afranken
Member

afranken commented Jul 1, 2021

@yonilerner seems like you got your problem solved, then?
When you start the mock locally, either in Docker or by running the Java application directly, the Mock will only ever be reachable on localhost, and never under the sub-domain bucket-name.localhost.
Bucket names must be passed as part of the path or as a query parameter.

@afranken afranken closed this as completed Jul 1, 2021
@guija

guija commented Mar 7, 2023

I had the same issue, but only after upgrading the Java S3 SDK from version 2.17.8 to 2.19.33.
As @yonilerner mentioned, I could also fix it like this:

val serviceConfiguration = S3Configuration.builder()
    .checksumValidationEnabled(false)
    .chunkedEncodingEnabled(true)
    .pathStyleAccessEnabled(true)
    .build();

I added the line .pathStyleAccessEnabled(true).
