build(deps): bump terraform-docs/gh-actions from 1.2.2 to 1.3.0 in the actions group (#563)

Bumps the actions group with 1 update:
[terraform-docs/gh-actions](https://github.com/terraform-docs/gh-actions).

Updates `terraform-docs/gh-actions` from 1.2.2 to 1.3.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/terraform-docs/gh-actions/releases">terraform-docs/gh-actions's
releases</a>.</em></p>
<blockquote>
<h2>v1.3.0</h2>
<h2>What’s Changed</h2>
<ul>
<li>chore: revert the action name back for now (<a
href="https://redirect.github.com/terraform-docs/gh-actions/issues/144">#144</a>)
<a href="https://github.com/khos2ow"><code>@​khos2ow</code></a></li>
<li>Add section about creating a release to CONTRIBUTING.md (<a
href="https://redirect.github.com/terraform-docs/gh-actions/issues/142">#142</a>)
<a
href="https://github.com/pascal-hofmann"><code>@​pascal-hofmann</code></a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/terraform-docs/gh-actions/commit/aeae0038ed47a547e0c0fca5c059d3335f48fb25"><code>aeae003</code></a>
chore: prepare release v1.3.0</li>
<li><a
href="https://github.com/terraform-docs/gh-actions/commit/6c989007421c87790b129e96e425ee5fabd61e0b"><code>6c98900</code></a>
ci: enable sign-off for auto commits</li>
<li><a
href="https://github.com/terraform-docs/gh-actions/commit/752705dfea83ffa07d981644cb45e08b41f9b8cb"><code>752705d</code></a>
chore: bump terraform-docs to v0.19.0</li>
<li><a
href="https://github.com/terraform-docs/gh-actions/commit/ceebb781ca443aeeccbbb15b3f359a78ad2ef4ce"><code>ceebb78</code></a>
chore: update README</li>
<li><a
href="https://github.com/terraform-docs/gh-actions/commit/4b070bdf09a11f67a3944bcb7809c8df783a89b0"><code>4b070bd</code></a>
Update Action name and description</li>
<li><a
href="https://github.com/terraform-docs/gh-actions/commit/fdf26f471dc7aa676877f9e2b21b0371f55a6209"><code>fdf26f4</code></a>
Merge pull request <a
href="https://redirect.github.com/terraform-docs/gh-actions/issues/142">#142</a>
from terraform-docs/add-release-info</li>
<li><a
href="https://github.com/terraform-docs/gh-actions/commit/d8af945d68f1dcdfd28a3bb6c3409ec2240ee15b"><code>d8af945</code></a>
Add section about creating a release to CONTRIBUTING.md</li>
<li><a
href="https://github.com/terraform-docs/gh-actions/commit/f9a33581072a78f38e2f0bb728f56c8b674e50fc"><code>f9a3358</code></a>
fix: update-tag job</li>
<li>See full diff in <a
href="https://github.com/terraform-docs/gh-actions/compare/cca78c27ac9e2b6545debf2ecae9df930fd3461c...aeae0038ed47a547e0c0fca5c059d3335f48fb25">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=terraform-docs/gh-actions&package-manager=github_actions&previous-version=1.2.2&new-version=1.3.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the specified ignore condition for that dependency


</details>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Kenny Leung <kleung@chainguard.dev>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Kenny Leung <kleung@chainguard.dev>
dependabot[bot] and k4leung4 authored Sep 24, 2024
1 parent f5f2d31 commit 6ef6da6
Showing 17 changed files with 53 additions and 52 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/documentation.yaml
@@ -41,7 +41,7 @@ jobs:

- uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7

- uses: terraform-docs/gh-actions@cca78c27ac9e2b6545debf2ecae9df930fd3461c # v1.2.2
- uses: terraform-docs/gh-actions@aeae0038ed47a547e0c0fca5c059d3335f48fb25 # v1.3.0
with:
working-dir: modules/${{ matrix.module }}
output-file: README.md
2 changes: 1 addition & 1 deletion hack/update-docs.sh
@@ -3,7 +3,7 @@
set -o errexit

# Find all directories containing .tf files
directories=$(find . -name '*.tf' -exec dirname {} \;)
directories=$(find . -name '*.tf' -not -path "./.*" -exec dirname {} \;)

# Check if the find command found any directories
if [[ -z "${directories}" ]]; then
6 changes: 3 additions & 3 deletions modules/bucket-events/README.md
@@ -118,13 +118,13 @@ No requirements.
|------|-------------|------|---------|:--------:|
| <a name="input_bucket"></a> [bucket](#input\_bucket) | The name of the bucket to watch for events. The region where the bucket is located will be the region where the Pub/Sub topic and trampoline service will be created. The bucket must be in a region that is in the set of regions passed to the regions variable. | `string` | n/a | yes |
| <a name="input_enable_profiler"></a> [enable\_profiler](#input\_enable\_profiler) | Enable cloud profiler. | `bool` | `false` | no |
| <a name="input_gcs_event_types"></a> [gcs\_event\_types](#input\_gcs\_event\_types) | The types of GCS events to watch for (https://cloud.google.com/storage/docs/pubsub-notifications#payload). | `list(string)` | <pre>[<br> "OBJECT_FINALIZE",<br> "OBJECT_METADATA_UPDATE",<br> "OBJECT_DELETE",<br> "OBJECT_ARCHIVE"<br>]</pre> | no |
| <a name="input_ingress"></a> [ingress](#input\_ingress) | An object holding the name of the ingress service, which can be used to authorize callers to publish cloud events. | <pre>object({<br> name = string<br> })</pre> | n/a | yes |
| <a name="input_gcs_event_types"></a> [gcs\_event\_types](#input\_gcs\_event\_types) | The types of GCS events to watch for (https://cloud.google.com/storage/docs/pubsub-notifications#payload). | `list(string)` | <pre>[<br/> "OBJECT_FINALIZE",<br/> "OBJECT_METADATA_UPDATE",<br/> "OBJECT_DELETE",<br/> "OBJECT_ARCHIVE"<br/>]</pre> | no |
| <a name="input_ingress"></a> [ingress](#input\_ingress) | An object holding the name of the ingress service, which can be used to authorize callers to publish cloud events. | <pre>object({<br/> name = string<br/> })</pre> | n/a | yes |
| <a name="input_max_delivery_attempts"></a> [max\_delivery\_attempts](#input\_max\_delivery\_attempts) | The maximum number of delivery attempts for any event. | `number` | `5` | no |
| <a name="input_name"></a> [name](#input\_name) | n/a | `string` | n/a | yes |
| <a name="input_notification_channels"></a> [notification\_channels](#input\_notification\_channels) | List of notification channels to alert. | `list(string)` | n/a | yes |
| <a name="input_project_id"></a> [project\_id](#input\_project\_id) | n/a | `string` | n/a | yes |
| <a name="input_regions"></a> [regions](#input\_regions) | A map from region names to a network and subnetwork. The bucket must be in one of these regions. | <pre>map(object({<br> network = string<br> subnet = string<br> }))</pre> | n/a | yes |
| <a name="input_regions"></a> [regions](#input\_regions) | A map from region names to a network and subnetwork. The bucket must be in one of these regions. | <pre>map(object({<br/> network = string<br/> subnet = string<br/> }))</pre> | n/a | yes |

## Outputs

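
For orientation, a minimal sketch of how the typed inputs documented in the table above might be supplied to this module; the source path and all values are hypothetical placeholders, not taken from this repository:

```hcl
module "bucket_events" {
  source = "./modules/bucket-events" # hypothetical local path

  project_id = "my-project"        # hypothetical project
  name       = "bucket-events"
  bucket     = "my-watched-bucket" # hypothetical bucket; must be in one of the regions below

  # Map of region names to network/subnetwork; the bucket's region must appear here.
  regions = {
    "us-central1" = {
      network = "projects/my-project/global/networks/my-network"
      subnet  = "projects/my-project/regions/us-central1/subnetworks/my-subnet"
    }
  }

  # Name of the ingress service authorized to publish cloud events.
  ingress = {
    name = "my-ingress" # hypothetical
  }

  notification_channels = [] # hypothetical: no alerting channels
}
```
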
6 changes: 3 additions & 3 deletions modules/cloudevent-broker/README.md
@@ -107,12 +107,12 @@ No requirements.
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| <a name="input_enable_profiler"></a> [enable\_profiler](#input\_enable\_profiler) | Enable cloud profiler. | `bool` | `false` | no |
| <a name="input_limits"></a> [limits](#input\_limits) | Resource limits for the regional go service. | <pre>object({<br> cpu = string<br> memory = string<br> })</pre> | `null` | no |
| <a name="input_limits"></a> [limits](#input\_limits) | Resource limits for the regional go service. | <pre>object({<br/> cpu = string<br/> memory = string<br/> })</pre> | `null` | no |
| <a name="input_name"></a> [name](#input\_name) | n/a | `string` | n/a | yes |
| <a name="input_notification_channels"></a> [notification\_channels](#input\_notification\_channels) | List of notification channels to alert. | `list(string)` | n/a | yes |
| <a name="input_project_id"></a> [project\_id](#input\_project\_id) | n/a | `string` | n/a | yes |
| <a name="input_regions"></a> [regions](#input\_regions) | A map from region names to a network and subnetwork. A pub/sub topic and ingress service (publishing to the respective topic) will be created in each region, with the ingress service configured to egress all traffic via the specified subnetwork. | <pre>map(object({<br> network = string<br> subnet = string<br> }))</pre> | n/a | yes |
| <a name="input_scaling"></a> [scaling](#input\_scaling) | The scaling configuration for the service. | <pre>object({<br> min_instances = optional(number, 0)<br> max_instances = optional(number, 100)<br> max_instance_request_concurrency = optional(number)<br> })</pre> | `{}` | no |
| <a name="input_regions"></a> [regions](#input\_regions) | A map from region names to a network and subnetwork. A pub/sub topic and ingress service (publishing to the respective topic) will be created in each region, with the ingress service configured to egress all traffic via the specified subnetwork. | <pre>map(object({<br/> network = string<br/> subnet = string<br/> }))</pre> | n/a | yes |
| <a name="input_scaling"></a> [scaling](#input\_scaling) | The scaling configuration for the service. | <pre>object({<br/> min_instances = optional(number, 0)<br/> max_instances = optional(number, 100)<br/> max_instance_request_concurrency = optional(number)<br/> })</pre> | `{}` | no |

## Outputs

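
Similarly, a hypothetical sketch of calling this module with the inputs from the table above (source path and values are placeholders):

```hcl
module "cloudevent_broker" {
  source = "./modules/cloudevent-broker" # hypothetical local path

  project_id = "my-project" # hypothetical project
  name       = "broker"

  # A Pub/Sub topic and ingress service are created in each region listed here.
  regions = {
    "us-central1" = {
      network = "projects/my-project/global/networks/my-network"
      subnet  = "projects/my-project/regions/us-central1/subnetworks/my-subnet"
    }
  }

  # Optional: override the default scaling configuration shown in the table.
  scaling = {
    min_instances = 1
    max_instances = 10
  }

  notification_channels = [] # hypothetical: no alerting channels
}
```
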
8 changes: 4 additions & 4 deletions modules/cloudevent-recorder/README.md
@@ -138,7 +138,7 @@ No requirements.
| <a name="input_enable_profiler"></a> [enable\_profiler](#input\_enable\_profiler) | Enable cloud profiler. | `bool` | `false` | no |
| <a name="input_flush_interval"></a> [flush\_interval](#input\_flush\_interval) | Flush interval for logrotate, as a duration string. | `string` | `""` | no |
| <a name="input_ignore_unknown_values"></a> [ignore\_unknown\_values](#input\_ignore\_unknown\_values) | Whether to ignore unknown values in the data, when transferring data to BigQuery. | `bool` | `false` | no |
| <a name="input_limits"></a> [limits](#input\_limits) | Resource limits for the regional go service. | <pre>object({<br> cpu = string<br> memory = string<br> })</pre> | `null` | no |
| <a name="input_limits"></a> [limits](#input\_limits) | Resource limits for the regional go service. | <pre>object({<br/> cpu = string<br/> memory = string<br/> })</pre> | `null` | no |
| <a name="input_location"></a> [location](#input\_location) | The location to create the BigQuery dataset in, and in which to run the data transfer jobs from GCS. | `string` | `"US"` | no |
| <a name="input_max_delivery_attempts"></a> [max\_delivery\_attempts](#input\_max\_delivery\_attempts) | The maximum number of delivery attempts for any event. | `number` | `5` | no |
| <a name="input_maximum_backoff"></a> [maximum\_backoff](#input\_maximum\_backoff) | The maximum delay between consecutive deliveries of a given message. | `number` | `600` | no |
Expand All @@ -148,11 +148,11 @@ No requirements.
| <a name="input_notification_channels"></a> [notification\_channels](#input\_notification\_channels) | List of notification channels to alert (for service-level issues). | `list(string)` | n/a | yes |
| <a name="input_project_id"></a> [project\_id](#input\_project\_id) | n/a | `string` | n/a | yes |
| <a name="input_provisioner"></a> [provisioner](#input\_provisioner) | The identity as which this module will be applied (so it may be granted permission to 'act as' the DTS service account). This should be in the form expected by an IAM subject (e.g. user:sally@example.com) | `string` | n/a | yes |
| <a name="input_regions"></a> [regions](#input\_regions) | A map from region names to a network and subnetwork. A recorder service and cloud storage bucket (into which the service writes events) will be created in each region. | <pre>map(object({<br> network = string<br> subnet = string<br> }))</pre> | n/a | yes |
| <a name="input_regions"></a> [regions](#input\_regions) | A map from region names to a network and subnetwork. A recorder service and cloud storage bucket (into which the service writes events) will be created in each region. | <pre>map(object({<br/> network = string<br/> subnet = string<br/> }))</pre> | n/a | yes |
| <a name="input_retention-period"></a> [retention-period](#input\_retention-period) | The number of days to retain data in BigQuery. | `number` | n/a | yes |
| <a name="input_scaling"></a> [scaling](#input\_scaling) | The scaling configuration for the service. | <pre>object({<br> min_instances = optional(number, 0)<br> max_instances = optional(number, 100)<br> max_instance_request_concurrency = optional(number)<br> })</pre> | `{}` | no |
| <a name="input_scaling"></a> [scaling](#input\_scaling) | The scaling configuration for the service. | <pre>object({<br/> min_instances = optional(number, 0)<br/> max_instances = optional(number, 100)<br/> max_instance_request_concurrency = optional(number)<br/> })</pre> | `{}` | no |
| <a name="input_split_triggers"></a> [split\_triggers](#input\_split\_triggers) | Opt-in flag to split into per-trigger dashboards. Helpful when hitting widget limits | `bool` | `false` | no |
| <a name="input_types"></a> [types](#input\_types) | A map from cloudevent types to the BigQuery schema associated with them, as well as an alert threshold and a list of notification channels (for subscription-level issues). | <pre>map(object({<br> schema = string<br> alert_threshold = optional(number, 50000)<br> notification_channels = optional(list(string), [])<br> partition_field = optional(string)<br> }))</pre> | n/a | yes |
| <a name="input_types"></a> [types](#input\_types) | A map from cloudevent types to the BigQuery schema associated with them, as well as an alert threshold and a list of notification channels (for subscription-level issues). | <pre>map(object({<br/> schema = string<br/> alert_threshold = optional(number, 50000)<br/> notification_channels = optional(list(string), [])<br/> partition_field = optional(string)<br/> }))</pre> | n/a | yes |

## Outputs

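
A hypothetical sketch covering the required inputs visible in the hunks above (the diff is partially collapsed, so inputs hidden in the collapsed rows are omitted; the source path and values are placeholders):

```hcl
module "cloudevent_recorder" {
  source = "./modules/cloudevent-recorder" # hypothetical local path

  project_id  = "my-project"             # hypothetical project
  provisioner = "user:sally@example.com" # identity applying this module (IAM subject form)

  # A recorder service and a storage bucket are created in each region listed here.
  regions = {
    "us-central1" = {
      network = "projects/my-project/global/networks/my-network"
      subnet  = "projects/my-project/regions/us-central1/subnetworks/my-subnet"
    }
  }

  retention-period = 30 # days to retain data in BigQuery

  # Map of cloudevent types to their BigQuery schemas.
  types = {
    "dev.chainguard.example" = {                          # hypothetical event type
      schema = file("${path.module}/example.schema.json") # hypothetical schema file
    }
  }

  notification_channels = [] # hypothetical: no alerting channels
}
```
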
10 changes: 5 additions & 5 deletions modules/cloudevent-trigger/README.md
@@ -114,16 +114,16 @@ No requirements.
|------|-------------|------|---------|:--------:|
| <a name="input_ack_deadline_seconds"></a> [ack\_deadline\_seconds](#input\_ack\_deadline\_seconds) | The deadline for acking a message. | `number` | `300` | no |
| <a name="input_broker"></a> [broker](#input\_broker) | The name of the pubsub topic we are using as a broker. | `string` | n/a | yes |
| <a name="input_filter"></a> [filter](#input\_filter) | A Knative Trigger-style filter over the cloud event attributes.<br><br>This is normally used to filter relevant event types, for example:<br><br> { "type" : "dev.chainguard.foo" }<br><br>In this case, only events with a type attribute of "dev.chainguard.foo" will be delivered. | `map(string)` | `{}` | no |
| <a name="input_filter_has_attributes"></a> [filter\_has\_attributes](#input\_filter\_has\_attributes) | A Knative Trigger-style filter over the cloud event attribute prefixes.<br><br>This can be used to filter on the presence of an event attribute, for example:<br><br> ["location"]<br><br>In this case, any event with a type attribute of "location" will be delivered. | `list(string)` | `[]` | no |
| <a name="input_filter_not_has_attributes"></a> [filter\_not\_has\_attributes](#input\_filter\_not\_has\_attributes) | A Knative Trigger-style filter over the cloud event attribute prefixes.<br><br>This can be used to filter on the absence of an event attribute, for example:<br><br> ["location"]<br><br>In this case, any event with a type attribute of "location" will NOT be delivered. | `list(string)` | `[]` | no |
| <a name="input_filter_prefix"></a> [filter\_prefix](#input\_filter\_prefix) | A Knative Trigger-style filter over the cloud event attribute prefixes.<br><br>This can be used to filter relevant event types, for example:<br><br> { "type" : "dev.chainguard." }<br><br>In this case, any event with a type attribute that starts with "dev.chainguard." will be delivered. | `map(string)` | `{}` | no |
| <a name="input_filter"></a> [filter](#input\_filter) | A Knative Trigger-style filter over the cloud event attributes.<br/><br/>This is normally used to filter relevant event types, for example:<br/><br/> { "type" : "dev.chainguard.foo" }<br/><br/>In this case, only events with a type attribute of "dev.chainguard.foo" will be delivered. | `map(string)` | `{}` | no |
| <a name="input_filter_has_attributes"></a> [filter\_has\_attributes](#input\_filter\_has\_attributes) | A Knative Trigger-style filter over the cloud event attribute prefixes.<br/><br/>This can be used to filter on the presence of an event attribute, for example:<br/><br/> ["location"]<br/><br/>In this case, any event with a type attribute of "location" will be delivered. | `list(string)` | `[]` | no |
| <a name="input_filter_not_has_attributes"></a> [filter\_not\_has\_attributes](#input\_filter\_not\_has\_attributes) | A Knative Trigger-style filter over the cloud event attribute prefixes.<br/><br/>This can be used to filter on the absence of an event attribute, for example:<br/><br/> ["location"]<br/><br/>In this case, any event with a type attribute of "location" will NOT be delivered. | `list(string)` | `[]` | no |
| <a name="input_filter_prefix"></a> [filter\_prefix](#input\_filter\_prefix) | A Knative Trigger-style filter over the cloud event attribute prefixes.<br/><br/>This can be used to filter relevant event types, for example:<br/><br/> { "type" : "dev.chainguard." }<br/><br/>In this case, any event with a type attribute that starts with "dev.chainguard." will be delivered. | `map(string)` | `{}` | no |
| <a name="input_max_delivery_attempts"></a> [max\_delivery\_attempts](#input\_max\_delivery\_attempts) | The maximum number of delivery attempts for any event. | `number` | `20` | no |
| <a name="input_maximum_backoff"></a> [maximum\_backoff](#input\_maximum\_backoff) | The maximum delay between consecutive deliveries of a given message. | `number` | `600` | no |
| <a name="input_minimum_backoff"></a> [minimum\_backoff](#input\_minimum\_backoff) | The minimum delay between consecutive deliveries of a given message. | `number` | `10` | no |
| <a name="input_name"></a> [name](#input\_name) | n/a | `string` | n/a | yes |
| <a name="input_notification_channels"></a> [notification\_channels](#input\_notification\_channels) | List of notification channels to alert. | `list(string)` | n/a | yes |
| <a name="input_private-service"></a> [private-service](#input\_private-service) | The private cloud run service that is subscribing to these events. | <pre>object({<br> name = string<br> region = string<br> })</pre> | n/a | yes |
| <a name="input_private-service"></a> [private-service](#input\_private-service) | The private cloud run service that is subscribing to these events. | <pre>object({<br/> name = string<br/> region = string<br/> })</pre> | n/a | yes |
| <a name="input_project_id"></a> [project\_id](#input\_project\_id) | n/a | `string` | n/a | yes |
| <a name="input_raw_filter"></a> [raw\_filter](#input\_raw\_filter) | Raw PubSub filter to apply, ignores other variables. https://cloud.google.com/pubsub/docs/subscription-message-filter#filtering_syntax | `string` | `""` | no |

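
And a hypothetical sketch of this module using the prefix filter described in the table above (the source path, broker topic, and service names are placeholders):

```hcl
module "cloudevent_trigger" {
  source = "./modules/cloudevent-trigger" # hypothetical local path

  project_id = "my-project"      # hypothetical project
  name       = "example-trigger"
  broker     = "my-broker-topic" # hypothetical Pub/Sub topic used as the broker

  # Deliver only events whose "type" attribute starts with "dev.chainguard."
  filter_prefix = {
    "type" = "dev.chainguard."
  }

  # The private Cloud Run service subscribing to these events.
  private-service = {
    name   = "my-consumer" # hypothetical service name
    region = "us-central1"
  }

  notification_channels = [] # hypothetical: no alerting channels
}
```
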