
Commit ee71aaa

Merge branch 'main' into miguelhar.dataplane
2 parents f5c4940 + 46578e4

File tree

29 files changed: +525 −87 lines changed


examples/deploy/terraform/infra/README.md

Lines changed: 1 addition & 1 deletion
@@ -37,7 +37,7 @@ No resources.
| <a name="input_network"></a> [network](#input\_network) | vpc = {<br/> id = Existing vpc id, it will bypass creation by this module.<br/> subnets = {<br/> private = Existing private subnets.<br/> public = Existing public subnets.<br/> pod = Existing pod subnets.<br/> }), {})<br/> }), {})<br/> network\_bits = {<br/> public = Number of network bits to allocate to the public subnet. i.e /27 -> 32 IPs.<br/> private = Number of network bits to allocate to the private subnet. i.e /19 -> 8,192 IPs.<br/> pod = Number of network bits to allocate to the private subnet. i.e /19 -> 8,192 IPs.<br/> }<br/> cidrs = {<br/> vpc = The IPv4 CIDR block for the VPC.<br/> pod = The IPv4 CIDR block for the Pod subnets.<br/> }<br/> use\_pod\_cidr = Use additional pod CIDR range (ie 100.64.0.0/16) for pod networking. | <pre>object({<br/> vpc = optional(object({<br/> id = optional(string, null)<br/> subnets = optional(object({<br/> private = optional(list(string), [])<br/> public = optional(list(string), [])<br/> pod = optional(list(string), [])<br/> }), {})<br/> }), {})<br/> network_bits = optional(object({<br/> public = optional(number, 27)<br/> private = optional(number, 19)<br/> pod = optional(number, 19)<br/> }<br/> ), {})<br/> cidrs = optional(object({<br/> vpc = optional(string, "10.0.0.0/16")<br/> pod = optional(string, "100.64.0.0/16")<br/> }), {})<br/> use_pod_cidr = optional(bool, true)<br/> })</pre> | `{}` | no |
| <a name="input_region"></a> [region](#input\_region) | AWS region for the deployment | `string` | n/a | yes |
| <a name="input_ssh_pvt_key_path"></a> [ssh\_pvt\_key\_path](#input\_ssh\_pvt\_key\_path) | SSH private key filepath. | `string` | n/a | yes |
- | <a name="input_storage"></a> [storage](#input\_storage) | storage = {<br/> filesystem\_type = File system type(netapp\|efs)<br/> efs = {<br/> access\_point\_path = Filesystem path for efs.<br/> backup\_vault = {<br/> create = Create backup vault for EFS toggle.<br/> force\_destroy = Toggle to allow automatic destruction of all backups when destroying.<br/> backup = {<br/> schedule = Cron-style schedule for EFS backup vault (default: once a day at 12pm).<br/> cold\_storage\_after = Move backup data to cold storage after this many days.<br/> delete\_after = Delete backup data after this many days.<br/> }<br/> }<br/> }<br/> netapp = {<br/> deployment\_type = netapp ontap deployment type,('MULTI\_AZ\_1', 'MULTI\_AZ\_2', 'SINGLE\_AZ\_1', 'SINGLE\_AZ\_2')<br/> storage\_capacity = Filesystem Storage capacity<br/> throughput\_capacity = Filesystem throughput capacity<br/> automatic\_backup\_retention\_days = How many days to keep backups<br/> daily\_automatic\_backup\_start\_time = Start time in 'HH:MM' format to initiate backups<br/><br/> storage\_capacity\_autosizing = Options for the FXN automatic storage capacity increase, cloudformation template<br/> enabled = Enable automatic storage capacity increase.<br/> threshold = Used storage capacity threshold.<br/> percent\_capacity\_increase = The percentage increase in storage capacity when used storage exceeds<br/> LowFreeDataStorageCapacityThreshold. Minimum increase is 10 %.<br/> notification\_email\_address = The email address for alarm notification.<br/> }<br/> volume = {<br/> create = Create a volume associated with the filesystem.<br/> name\_suffix = The suffix to name the volume<br/> storage\_efficiency\_enabled = Toggle storage\_efficiency\_enabled<br/> junction\_path = filesystem junction path<br/> size\_in\_megabytes = The size of the volume<br/> }<br/> }<br/> s3 = {<br/> force\_destroy\_on\_deletion = Toogle to allow recursive deletion of all objects in the s3 buckets. if 'false' terraform will NOT be able to delete non-empty buckets.<br/> }<br/> ecr = {<br/> force\_destroy\_on\_deletion = Toogle to allow recursive deletion of all objects in the ECR repositories. if 'false' terraform will NOT be able to delete non-empty repositories.<br/> }<br/> enable\_remote\_backup = Enable tagging required for cross-account backups<br/> costs\_enabled = Determines whether to provision domino cost related infrastructures, ie, long term storage<br/> }<br/> } | <pre>object({<br/> filesystem_type = optional(string, "efs")<br/> efs = optional(object({<br/> access_point_path = optional(string, "/domino")<br/> backup_vault = optional(object({<br/> create = optional(bool, true)<br/> force_destroy = optional(bool, true)<br/> backup = optional(object({<br/> schedule = optional(string, "0 12 * * ? *")<br/> cold_storage_after = optional(number, 35)<br/> delete_after = optional(number, 125)<br/> }), {})<br/> }), {})<br/> }), {})<br/> netapp = optional(object({<br/> migrate_from_efs = optional(object({<br/> enabled = optional(bool, false)<br/> datasync = optional(object({<br/> enabled = optional(bool, false)<br/> target = optional(string, "netapp")<br/> schedule = optional(string, "cron(0 * * * ? *)")<br/> }), {})<br/> }), {})<br/> deployment_type = optional(string, "SINGLE_AZ_1")<br/> storage_capacity = optional(number, 1024)<br/> throughput_capacity = optional(number, 128)<br/> automatic_backup_retention_days = optional(number, 90)<br/> daily_automatic_backup_start_time = optional(string, "00:00")<br/> storage_capacity_autosizing = optional(object({<br/> enabled = optional(bool, false)<br/> threshold = optional(number, 70)<br/> percent_capacity_increase = optional(number, 30)<br/> notification_email_address = optional(string, "")<br/> }), {})<br/> volume = optional(object({<br/> create = optional(bool, true)<br/> name_suffix = optional(string, "domino_shared_storage")<br/> storage_efficiency_enabled = optional(bool, true)<br/> junction_path = optional(string, "/domino")<br/> size_in_megabytes = optional(number, 1099511)<br/> }), {})<br/> }), {})<br/> s3 = optional(object({<br/> force_destroy_on_deletion = optional(bool, true)<br/> }), {})<br/> ecr = optional(object({<br/> force_destroy_on_deletion = optional(bool, true)<br/> }), {}),<br/> enable_remote_backup = optional(bool, false)<br/> costs_enabled = optional(bool, true)<br/> })</pre> | `{}` | no |
+ | <a name="input_storage"></a> [storage](#input\_storage) | storage = {<br/> filesystem\_type = File system type(netapp\|efs)<br/> efs = {<br/> access\_point\_path = Filesystem path for efs.<br/> backup\_vault = {<br/> create = Create backup vault for EFS toggle.<br/> force\_destroy = Toggle to allow automatic destruction of all backups when destroying.<br/> backup = {<br/> schedule = Cron-style schedule for EFS backup vault (default: once a day at 12pm).<br/> cold\_storage\_after = Move backup data to cold storage after this many days.<br/> delete\_after = Delete backup data after this many days.<br/> }<br/> }<br/> }<br/> netapp = {<br/> migrate\_from\_efs = {<br/> enabled = When enabled, both EFS and NetApp resources will be provisioned simultaneously during the migration period.<br/> datasync = {<br/> enabled = Toggle to enable AWS DataSync for automated data transfer from EFS to NetApp FSx.<br/> schedule = Cron-style schedule for the DataSync task, specifying how often the data transfer will occur (default: hourly).<br/> verify\_mode = One of: POINT\_IN\_TIME\_CONSISTENT, ONLY\_FILES\_TRANSFERRED, NONE.<br/> }<br/> }<br/> deployment\_type = netapp ontap deployment type,('MULTI\_AZ\_1', 'MULTI\_AZ\_2', 'SINGLE\_AZ\_1', 'SINGLE\_AZ\_2')<br/> storage\_capacity = Filesystem Storage capacity<br/> throughput\_capacity = Filesystem throughput capacity<br/> automatic\_backup\_retention\_days = How many days to keep backups<br/> daily\_automatic\_backup\_start\_time = Start time in 'HH:MM' format to initiate backups<br/><br/> storage\_capacity\_autosizing = Options for the FXN automatic storage capacity increase, cloudformation template<br/> enabled = Enable automatic storage capacity increase.<br/> threshold = Used storage capacity threshold.<br/> percent\_capacity\_increase = The percentage increase in storage capacity when used storage exceeds<br/> LowFreeDataStorageCapacityThreshold. Minimum increase is 10 %.<br/> notification\_email\_address = The email address for alarm notification.<br/> }<br/> volume = {<br/> create = Create a volume associated with the filesystem.<br/> name\_suffix = The suffix to name the volume<br/> storage\_efficiency\_enabled = Toggle storage\_efficiency\_enabled<br/> junction\_path = filesystem junction path<br/> size\_in\_megabytes = The size of the volume<br/> }<br/> s3 = {<br/> force\_destroy\_on\_deletion = Toogle to allow recursive deletion of all objects in the s3 buckets. if 'false' terraform will NOT be able to delete non-empty buckets.<br/> }<br/> ecr = {<br/> force\_destroy\_on\_deletion = Toogle to allow recursive deletion of all objects in the ECR repositories. if 'false' terraform will NOT be able to delete non-empty repositories.<br/> }<br/> enable\_remote\_backup = Enable tagging required for cross-account backups<br/> costs\_enabled = Determines whether to provision domino cost related infrastructures, ie, long term storage<br/> }<br/> } | <pre>object({<br/> filesystem_type = optional(string, "efs")<br/> efs = optional(object({<br/> access_point_path = optional(string, "/domino")<br/> backup_vault = optional(object({<br/> create = optional(bool, true)<br/> force_destroy = optional(bool, true)<br/> backup = optional(object({<br/> schedule = optional(string, "0 12 * * ? *")<br/> cold_storage_after = optional(number, 35)<br/> delete_after = optional(number, 125)<br/> }), {})<br/> }), {})<br/> }), {})<br/> netapp = optional(object({<br/> migrate_from_efs = optional(object({<br/> enabled = optional(bool, false)<br/> datasync = optional(object({<br/> enabled = optional(bool, false)<br/> target = optional(string, "netapp")<br/> schedule = optional(string, "cron(0 */4 * * ? *)")<br/> verify_mode = optional(string, "ONLY_FILES_TRANSFERRED")<br/> }), {})<br/> }), {})<br/> deployment_type = optional(string, "SINGLE_AZ_1")<br/> storage_capacity = optional(number, 1024)<br/> throughput_capacity = optional(number, 128)<br/> automatic_backup_retention_days = optional(number, 90)<br/> daily_automatic_backup_start_time = optional(string, "00:00")<br/> storage_capacity_autosizing = optional(object({<br/> enabled = optional(bool, false)<br/> threshold = optional(number, 70)<br/> percent_capacity_increase = optional(number, 30)<br/> notification_email_address = optional(string, "")<br/> }), {})<br/> volume = optional(object({<br/> create = optional(bool, true)<br/> name_suffix = optional(string, "domino_shared_storage")<br/> storage_efficiency_enabled = optional(bool, true)<br/> junction_path = optional(string, "/domino")<br/> size_in_megabytes = optional(number, 1099511)<br/> }), {})<br/> }), {})<br/> s3 = optional(object({<br/> force_destroy_on_deletion = optional(bool, true)<br/> }), {})<br/> ecr = optional(object({<br/> force_destroy_on_deletion = optional(bool, true)<br/> }), {}),<br/> enable_remote_backup = optional(bool, false)<br/> costs_enabled = optional(bool, true)<br/> })</pre> | `{}` | no |
| <a name="input_tags"></a> [tags](#input\_tags) | Deployment tags. | `map(string)` | n/a | yes |
| <a name="input_use_fips_endpoint"></a> [use\_fips\_endpoint](#input\_use\_fips\_endpoint) | Use aws FIPS endpoints | `bool` | `false` | no |
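
The `network` input documented above can be overridden from a tfvars file. A minimal sketch (illustrative values only; every attribute is optional and falls back to the defaults shown in the `object({...})` type column):

```hcl
# Illustrative terraform.tfvars fragment; attribute names follow the
# object type shown in the README table above.
network = {
  cidrs = {
    vpc = "10.0.0.0/16"   # IPv4 CIDR block for the VPC (module default)
    pod = "100.64.0.0/16" # IPv4 CIDR block for the pod subnets
  }
  network_bits = {
    public  = 27 # /27 -> 32 IPs per public subnet
    private = 19 # /19 -> 8,192 IPs per private subnet
    pod     = 19
  }
  use_pod_cidr = true # use the additional pod CIDR range for pod networking
}
```

Supplying `vpc.id` and `vpc.subnets` instead would bypass VPC creation by this module, per the input description.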

examples/deploy/terraform/infra/variables.tf

Lines changed: 12 additions & 4 deletions
@@ -215,6 +215,14 @@ variable "storage" {
  }
  }
  netapp = {
+ migrate_from_efs = {
+ enabled = When enabled, both EFS and NetApp resources will be provisioned simultaneously during the migration period.
+ datasync = {
+ enabled = Toggle to enable AWS DataSync for automated data transfer from EFS to NetApp FSx.
+ schedule = Cron-style schedule for the DataSync task, specifying how often the data transfer will occur (default: hourly).
+ verify_mode = One of: POINT_IN_TIME_CONSISTENT, ONLY_FILES_TRANSFERRED, NONE.
+ }
+ }
  deployment_type = netapp ontap deployment type,('MULTI_AZ_1', 'MULTI_AZ_2', 'SINGLE_AZ_1', 'SINGLE_AZ_2')
  storage_capacity = Filesystem Storage capacity
  throughput_capacity = Filesystem throughput capacity
@@ -234,7 +242,6 @@ variable "storage" {
  storage_efficiency_enabled = Toggle storage_efficiency_enabled
  junction_path = filesystem junction path
  size_in_megabytes = The size of the volume
- }
  }
  s3 = {
  force_destroy_on_deletion = Toogle to allow recursive deletion of all objects in the s3 buckets. if 'false' terraform will NOT be able to delete non-empty buckets.
@@ -265,9 +272,10 @@ variable "storage" {
  migrate_from_efs = optional(object({
  enabled = optional(bool, false)
  datasync = optional(object({
- enabled = optional(bool, false)
- target = optional(string, "netapp")
- schedule = optional(string, "cron(0 * * * ? *)")
+ enabled = optional(bool, false)
+ target = optional(string, "netapp")
+ schedule = optional(string, "cron(0 */4 * * ? *)")
+ verify_mode = optional(string, "ONLY_FILES_TRANSFERRED")
  }), {})
  }), {})
  deployment_type = optional(string, "SINGLE_AZ_1")
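
The new `optional()` defaults above make the EFS-to-NetApp migration opt-in. A sketch of enabling it from a tfvars file, using only attributes and defaults visible in this hunk (values are illustrative):

```hcl
# Illustrative tfvars fragment; defaults mirror the optional() values
# introduced in this commit.
storage = {
  filesystem_type = "netapp"
  netapp = {
    migrate_from_efs = {
      enabled = true # provision EFS and NetApp side by side during migration
      datasync = {
        enabled     = true                  # run AWS DataSync from EFS to FSx
        target      = "netapp"
        schedule    = "cron(0 */4 * * ? *)" # every 4 hours (new default)
        verify_mode = "ONLY_FILES_TRANSFERRED"
      }
    }
  }
}
```

Note that the commit changes the default `schedule` from hourly (`cron(0 * * * ? *)`) to every 4 hours, while the added description string still reads "(default: hourly)".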
