diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml
index 7f19a3b..3fc7648 100644
--- a/.github/workflows/release.yml
+++ b/.github/workflows/release.yml
@@ -25,7 +25,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v5.0.0
with:
- go-version: 1.21
+ go-version: 1.22
- name: Import GPG key
id: import_gpg
uses: crazy-max/ghaction-import-gpg@v6.1.0
diff --git a/README.md b/README.md
index 5a2d918..31e2f43 100644
--- a/README.md
+++ b/README.md
@@ -82,6 +82,7 @@ provider "polaris" {
##### Environment Variables for Local User Accounts
When using a local user account the following environmental variables can be used to override the default local user
account behaviour:
+* *RUBRIK_POLARIS_ACCOUNT_CREDENTIALS* — Overrides the content of the local user account file.
* *RUBRIK_POLARIS_ACCOUNT_FILE* — Overrides the name and path of the file to read local user accounts from.
* *RUBRIK_POLARIS_ACCOUNT_NAME* — Overrides the name of the local user account given to the credentials
parameter in the provider configuration.
@@ -109,6 +110,7 @@ provider "polaris" {
##### Environment Variables for Service Accounts
When using a service account the following environmental variables can be used to override the default service account
behaviour:
+* *RUBRIK_POLARIS_SERVICEACCOUNT_CREDENTIALS* — Overrides the content of the service account credentials file.
* *RUBRIK_POLARIS_SERVICEACCOUNT_FILE* — Overrides the name and path of the service account credentials file.
* *RUBRIK_POLARIS_SERVICEACCOUNT_NAME* — Overrides the name of the service account.
* *RUBRIK_POLARIS_SERVICEACCOUNT_CLIENTID* — Overrides the client id of the service account.
diff --git a/docs/data-sources/account.md b/docs/data-sources/account.md
new file mode 100644
index 0000000..3b0febc
--- /dev/null
+++ b/docs/data-sources/account.md
@@ -0,0 +1,48 @@
+---
+# generated by https://github.com/hashicorp/terraform-plugin-docs
+page_title: "polaris_account Data Source - terraform-provider-polaris"
+subcategory: ""
+description: |-
+ The polaris_account data source is used to access information about the RSC account.
+ -> Note: The fqdn and name fields are read from the local RSC credentials and
+ not from RSC.
+---
+
+# polaris_account (Data Source)
+
+The `polaris_account` data source is used to access information about the RSC account.
+
+-> **Note:** The `fqdn` and `name` fields are read from the local RSC credentials and
+ not from RSC.
+
+## Example Usage
+
+```terraform
+# Output the features enabled for the RSC account.
+data "polaris_account" "account" {}
+
+output "features" {
+ value = data.polaris_account.account.features
+}
+
+# Using the fqdn field from the account data source above to create an
+# Azure AD application.
+
+resource "azuread_application" "app" {
+ display_name = "Rubrik Security Cloud Integration"
+ web {
+ homepage_url = "https://${data.polaris_account.account.fqdn}/setup_azure"
+ }
+}
+```
+
+
+## Schema
+
+### Read-Only
+
+- `features` (Set of String) Features enabled for the RSC account.
+- `fqdn` (String) Fully qualified domain name of the RSC account.
+- `id` (String) SHA-256 hash of the features, the fully qualified domain name and the name.
+- `name` (String) RSC account name.
diff --git a/docs/data-sources/aws_account.md b/docs/data-sources/aws_account.md
new file mode 100644
index 0000000..cb1e8b2
--- /dev/null
+++ b/docs/data-sources/aws_account.md
@@ -0,0 +1,40 @@
+---
+# generated by https://github.com/hashicorp/terraform-plugin-docs
+page_title: "polaris_aws_account Data Source - terraform-provider-polaris"
+subcategory: ""
+description: |-
+ The polaris_aws_account data source is used to access information about an AWS account
+ added to RSC. An AWS account is looked up using either the AWS account ID or the name.
+ -> Note: The account name is the name of the AWS account as it appears in RSC.
+---
+
+# polaris_aws_account (Data Source)
+
+The `polaris_aws_account` data source is used to access information about an AWS account
+added to RSC. An AWS account is looked up using either the AWS account ID or the name.
+
+-> **Note:** The account name is the name of the AWS account as it appears in RSC.
+
+## Example Usage
+
+```terraform
+data "polaris_aws_account" "example" {
+ name = "example"
+}
+
+output "example_aws_account" {
+ value = data.polaris_aws_account.example
+}
+```
+
+
+## Schema
+
+### Optional
+
+- `account_id` (String) AWS account ID.
+- `name` (String) AWS account name.
+
+### Read-Only
+
+- `id` (String) RSC cloud account ID (UUID).
diff --git a/docs/data-sources/aws_archival_location.md b/docs/data-sources/aws_archival_location.md
index 54f7486..79a6d27 100644
--- a/docs/data-sources/aws_archival_location.md
+++ b/docs/data-sources/aws_archival_location.md
@@ -3,19 +3,21 @@
page_title: "polaris_aws_archival_location Data Source - terraform-provider-polaris"
subcategory: ""
description: |-
-
+ The polaris_aws_archival_location data source is used to access information about an
+ AWS archival location. An archival location is looked up using either the ID or the name.
---
# polaris_aws_archival_location (Data Source)
-
+The `polaris_aws_archival_location` data source is used to access information about an
+AWS archival location. An archival location is looked up using either the ID or the name.
## Example Usage
```terraform
# Using the archival location ID.
data "polaris_aws_archival_location" "location" {
- archival_location_id = "db34f042-79ea-48b1-bab8-c40dfbf2ab82"
+ id = "db34f042-79ea-48b1-bab8-c40dfbf2ab82"
}
# Using the name.
@@ -29,16 +31,16 @@ data "polaris_aws_archival_location" "location" {
### Optional
-- `archival_location_id` (String) ID of the archival location.
-- `name` (String) Name of the archival location.
+- `archival_location_id` (String, Deprecated) Cloud native archival location ID (UUID). **Deprecated:** use `id` instead.
+- `id` (String) Cloud native archival location ID (UUID).
+- `name` (String) Name of the cloud native archival location.
### Read-Only
-- `bucket_prefix` (String) AWS bucket prefix.
+- `bucket_prefix` (String) AWS bucket prefix. Note that `rubrik-` will always be prepended to the prefix.
- `bucket_tags` (Map of String) AWS bucket tags.
- `connection_status` (String) Connection status of the archival location.
-- `id` (String) The ID of this resource.
- `kms_master_key` (String, Sensitive) AWS KMS master key alias/ID.
-- `location_template` (String) Location template. If a region was specified, it will be `SPECIFIC_REGION`, otherwise `SOURCE_REGION`.
+- `location_template` (String) RSC location template. If a region was specified, it will be `SPECIFIC_REGION`, otherwise `SOURCE_REGION`.
- `region` (String) AWS region to store the snapshots in. If not specified, the snapshots will be stored in the same region as the workload.
-- `storage_class` (String) AWS bucket storage class.
+- `storage_class` (String) AWS bucket storage class. Possible values are `STANDARD`, `STANDARD_IA`, `ONEZONE_IA`, `GLACIER_INSTANT_RETRIEVAL`, `GLACIER_DEEP_ARCHIVE` and `GLACIER_FLEXIBLE_RETRIEVAL`. Default value is `STANDARD_IA`.
diff --git a/docs/data-sources/aws_cnp_artifacts.md b/docs/data-sources/aws_cnp_artifacts.md
index 6d7f15e..791e272 100644
--- a/docs/data-sources/aws_cnp_artifacts.md
+++ b/docs/data-sources/aws_cnp_artifacts.md
@@ -3,18 +3,110 @@
page_title: "polaris_aws_cnp_artifacts Data Source - terraform-provider-polaris"
subcategory: ""
description: |-
-
+ The polaris_aws_cnp_artifacts data source is used to access information about
+ instance profiles and roles required by RSC for a specified feature set.
+ Permission Groups
+ Following is a list of features and their applicable permission groups. These are used
+ when specifying the feature set.
+ CLOUD_NATIVE_ARCHIVAL
+ BASIC - Represents the basic set of permissions required to onboard the feature.
+ CLOUD_NATIVE_PROTECTION
+ BASIC - Represents the basic set of permissions required to onboard the feature.
+ EXPORT_AND_RESTORE - Represents the set of permissions required for export and
+ restore operations.
+ FILE_LEVEL_RECOVERY - Represents the set of permissions required for file-level
+ recovery operations.
+ SNAPSHOT_PRIVATE_ACCESS - Represents the set of permissions required for private
+ access to disk snapshots.
+ CLOUD_NATIVE_S3_PROTECTION
+ BASIC - Represents the basic set of permissions required to onboard the feature.
+ EXOCOMPUTE
+ BASIC - Represents the basic set of permissions required to onboard the feature.
+ PRIVATE_ENDPOINTS - Represents the set of permissions required for usage of private
+ endpoints.
+ RSC_MANAGED_CLUSTER - Represents the set of permissions required for the Rubrik-
+ managed Exocompute cluster.
+ RDS_PROTECTION
+ BASIC - Represents the basic set of permissions required to onboard the feature.
+ -> Note: When permission groups are specified, the BASIC permission group must
+ always be included.
---
# polaris_aws_cnp_artifacts (Data Source)
+The `polaris_aws_cnp_artifacts` data source is used to access information about
+instance profiles and roles required by RSC for a specified feature set.
+## Permission Groups
+Following is a list of features and their applicable permission groups. These are used
+when specifying the feature set.
+
+### CLOUD_NATIVE_ARCHIVAL
+ * `BASIC` - Represents the basic set of permissions required to onboard the feature.
+
+### CLOUD_NATIVE_PROTECTION
+ * `BASIC` - Represents the basic set of permissions required to onboard the feature.
+ * `EXPORT_AND_RESTORE` - Represents the set of permissions required for export and
+ restore operations.
+ * `FILE_LEVEL_RECOVERY` - Represents the set of permissions required for file-level
+ recovery operations.
+ * `SNAPSHOT_PRIVATE_ACCESS` - Represents the set of permissions required for private
+ access to disk snapshots.
+
+### CLOUD_NATIVE_S3_PROTECTION
+ * `BASIC` - Represents the basic set of permissions required to onboard the feature.
+
+### EXOCOMPUTE
+ * `BASIC` - Represents the basic set of permissions required to onboard the feature.
+ * `PRIVATE_ENDPOINTS` - Represents the set of permissions required for usage of private
+ endpoints.
+ * `RSC_MANAGED_CLUSTER` - Represents the set of permissions required for the Rubrik-
+ managed Exocompute cluster.
+
+### RDS_PROTECTION
+ * `BASIC` - Represents the basic set of permissions required to onboard the feature.
+
+-> **Note:** When permission groups are specified, the `BASIC` permission group must
+ always be included.
## Example Usage
```terraform
+# Permission groups defaults to BASIC.
+data "polaris_aws_cnp_artifacts" "artifacts" {
+ feature {
+ name = "CLOUD_NATIVE_PROTECTION"
+ }
+}
+
+# Multiple permission groups. When permission groups are specified,
+# the BASIC permission group must always be included.
+data "polaris_aws_cnp_artifacts" "artifacts" {
+ feature {
+ name = "CLOUD_NATIVE_PROTECTION"
+
+ permission_groups = [
+ "BASIC",
+ "EXPORT_AND_RESTORE",
+ "FILE_LEVEL_RECOVERY",
+ ]
+ }
+}
+
+# Multiple features with permission groups.
data "polaris_aws_cnp_artifacts" "artifacts" {
- features = ["CLOUD_NATIVE_PROTECTION"]
+ feature {
+ name = "CLOUD_NATIVE_ARCHIVAL"
+
+ permission_groups = [
+ "BASIC",
+ ]
+ }
+
+ feature {
+ name = "CLOUD_NATIVE_PROTECTION"
+
+ permission_groups = [
+ "BASIC",
+ "EXPORT_AND_RESTORE",
+ "FILE_LEVEL_RECOVERY",
+ ]
+ }
}
```
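+
+The computed keys can be inspected directly with outputs, for example to verify
+which instance profiles and roles a feature set requires. A minimal sketch:
+
+```terraform
+# Output the keys computed for the requested feature set.
+output "instance_profile_keys" {
+  value = data.polaris_aws_cnp_artifacts.artifacts.instance_profile_keys
+}
+
+output "role_keys" {
+  value = data.polaris_aws_cnp_artifacts.artifacts.role_keys
+}
+```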
@@ -27,11 +119,11 @@ data "polaris_aws_cnp_artifacts" "artifacts" {
### Optional
-- `cloud` (String) AWS cloud type.
+- `cloud` (String) AWS cloud type. Possible values are `STANDARD`, `CHINA` and `GOV`. Default value is `STANDARD`.
### Read-Only
-- `id` (String) The ID of this resource.
+- `id` (String) SHA-256 hash of the instance profile keys and the role keys.
- `instance_profile_keys` (Set of String) Instance profile keys for the RSC features.
- `role_keys` (Set of String) Role keys for the RSC features.
@@ -40,5 +132,5 @@ data "polaris_aws_cnp_artifacts" "artifacts" {
Required:
-- `name` (String) Feature name.
-- `permission_groups` (Set of String) Permission groups to assign to the feature.
+- `name` (String) RSC feature name. Possible values are `CLOUD_NATIVE_ARCHIVAL`, `CLOUD_NATIVE_PROTECTION`, `CLOUD_NATIVE_S3_PROTECTION`, `EXOCOMPUTE` and `RDS_PROTECTION`.
+- `permission_groups` (Set of String) RSC permission groups for the feature. Possible values are `BASIC`, `EXPORT_AND_RESTORE`, `FILE_LEVEL_RECOVERY`, `SNAPSHOT_PRIVATE_ACCESS`, `PRIVATE_ENDPOINTS` and `RSC_MANAGED_CLUSTER`. For backwards compatibility, `[]` is interpreted as all applicable permission groups.
diff --git a/docs/data-sources/aws_cnp_permissions.md b/docs/data-sources/aws_cnp_permissions.md
index 6da683e..ac2e8cf 100644
--- a/docs/data-sources/aws_cnp_permissions.md
+++ b/docs/data-sources/aws_cnp_permissions.md
@@ -3,24 +3,123 @@ page_title: "polaris_aws_cnp_permissions Data Source - terraform-provider-polari
subcategory: ""
description: |-
+ The polaris_aws_cnp_permissions data source is used to access information about the
+ permissions required by RSC for a specified feature set.
+ Permission Groups
+ Following is a list of features and their applicable permission groups. These are used
+ when specifying the feature set.
+ CLOUD_NATIVE_ARCHIVAL
+ BASIC - Represents the basic set of permissions required to onboard the feature.
+ CLOUD_NATIVE_PROTECTION
+ BASIC - Represents the basic set of permissions required to onboard the feature.
+ EXPORT_AND_RESTORE - Represents the set of permissions required for export and
+ restore operations.
+ FILE_LEVEL_RECOVERY - Represents the set of permissions required for file-level
+ recovery operations.
+ SNAPSHOT_PRIVATE_ACCESS - Represents the set of permissions required for private
+ access to disk snapshots.
+ CLOUD_NATIVE_S3_PROTECTION
+ BASIC - Represents the basic set of permissions required to onboard the feature.
+ EXOCOMPUTE
+ BASIC - Represents the basic set of permissions required to onboard the feature.
+ PRIVATE_ENDPOINTS - Represents the set of permissions required for usage of private
+ endpoints.
+ RSC_MANAGED_CLUSTER - Represents the set of permissions required for the Rubrik-
+ managed Exocompute cluster.
+ RDS_PROTECTION
+ BASIC - Represents the basic set of permissions required to onboard the feature.
+ -> Note: When permission groups are specified, the BASIC permission group must
+ always be included.
+
---
# polaris_aws_cnp_permissions (Data Source)
+The `polaris_aws_cnp_permissions` data source is used to access information about the
+permissions required by RSC for a specified feature set.
+
+## Permission Groups
+Following is a list of features and their applicable permission groups. These are used
+when specifying the feature set.
+
+### CLOUD_NATIVE_ARCHIVAL
+ * `BASIC` - Represents the basic set of permissions required to onboard the feature.
+
+### CLOUD_NATIVE_PROTECTION
+ * `BASIC` - Represents the basic set of permissions required to onboard the feature.
+ * `EXPORT_AND_RESTORE` - Represents the set of permissions required for export and
+ restore operations.
+ * `FILE_LEVEL_RECOVERY` - Represents the set of permissions required for file-level
+ recovery operations.
+ * `SNAPSHOT_PRIVATE_ACCESS` - Represents the set of permissions required for private
+ access to disk snapshots.
+
+### CLOUD_NATIVE_S3_PROTECTION
+ * `BASIC` - Represents the basic set of permissions required to onboard the feature.
+
+### EXOCOMPUTE
+ * `BASIC` - Represents the basic set of permissions required to onboard the feature.
+ * `PRIVATE_ENDPOINTS` - Represents the set of permissions required for usage of private
+ endpoints.
+ * `RSC_MANAGED_CLUSTER` - Represents the set of permissions required for the Rubrik-
+ managed Exocompute cluster.
+
+### RDS_PROTECTION
+ * `BASIC` - Represents the basic set of permissions required to onboard the feature.
+
+-> **Note:** When permission groups are specified, the `BASIC` permission group must
+ always be included.
+
+
## Example Usage
```terraform
+data "polaris_aws_cnp_artifacts" "artifacts" {
+ feature {
+ name = "CLOUD_NATIVE_ARCHIVAL"
+
+ permission_groups = [
+ "BASIC",
+ ]
+ }
+
+ feature {
+ name = "CLOUD_NATIVE_PROTECTION"
+
+ permission_groups = [
+ "BASIC",
+ "EXPORT_AND_RESTORE",
+ ]
+ }
+}
+
+# Lookup the required permissions using the output from the
+# artifacts data source.
data "polaris_aws_cnp_permissions" "permissions" {
for_each = data.polaris_aws_cnp_artifacts.artifacts.role_keys
cloud = data.polaris_aws_cnp_artifacts.artifacts.cloud
- features = data.polaris_aws_cnp_artifacts.artifacts.features
role_key = each.key
+
+ dynamic "feature" {
+ for_each = data.polaris_aws_cnp_artifacts.artifacts.feature
+ content {
+ name = feature.value["name"]
+ permission_groups = feature.value["permission_groups"]
+ }
+ }
}
```
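+
+The permissions returned for each role key can then be inspected with outputs.
+A minimal sketch:
+
+```terraform
+# Output the managed policies per role key.
+output "managed_policies" {
+  value = {
+    for key, permissions in data.polaris_aws_cnp_permissions.permissions :
+    key => permissions.managed_policies
+  }
+}
+```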
-
+
## Schema
### Required
@@ -30,28 +129,29 @@ data "polaris_aws_cnp_permissions" "permissions" {
### Optional
-- `cloud` (String) AWS cloud type.
-- `ec2_recovery_role_path` (String) EC2 recovery role path.
+- `cloud` (String) AWS cloud type. Possible values are `STANDARD`, `CHINA` and `GOV`. Default value is `STANDARD`.
+- `ec2_recovery_role_path` (String) AWS EC2 recovery role path.
### Read-Only
- `customer_managed_policies` (List of Object) Customer managed policies. (see [below for nested schema](#nestedatt--customer_managed_policies))
-- `id` (String) The ID of this resource.
+- `id` (String) SHA-256 hash of the customer managed policies and the managed policies.
- `managed_policies` (List of String) Managed policies.
+
+### Nested Schema for `feature`
+
+Required:
+
+- `name` (String) RSC feature name. Possible values are `CLOUD_NATIVE_ARCHIVAL`, `CLOUD_NATIVE_ARCHIVAL_ENCRYPTION`, `CLOUD_NATIVE_PROTECTION`, `CLOUD_NATIVE_S3_PROTECTION`, `EXOCOMPUTE` and `RDS_PROTECTION`.
+- `permission_groups` (Set of String) RSC permission groups for the feature. Possible values are `BASIC`, `ENCRYPTION`, `EXPORT_AND_RESTORE`, `FILE_LEVEL_RECOVERY`, `SNAPSHOT_PRIVATE_ACCESS`, `PRIVATE_ENDPOINTS` and `RSC_MANAGED_CLUSTER`. Default value is `BASIC`.
+
+
### Nested Schema for `customer_managed_policies`
Read-Only:
-- `feature` (String) RSC Feature.
+- `feature` (String) RSC feature name.
- `name` (String) Policy name.
-- `policy` (String) Policy.
-
-
-### Nested Schema for `feature`
-
-Required:
-
-- `name` (String) Feature name.
-- `permission_groups` (Set of String) Permission groups to assign to the feature.
+- `policy` (String) AWS policy.
diff --git a/docs/data-sources/azure_archival_location.md b/docs/data-sources/azure_archival_location.md
new file mode 100644
index 0000000..851584d
--- /dev/null
+++ b/docs/data-sources/azure_archival_location.md
@@ -0,0 +1,63 @@
+---
+page_title: "polaris_azure_archival_location Data Source - terraform-provider-polaris"
+subcategory: ""
+description: |-
+ The polaris_azure_archival_location data source is used to access information about
+ an Azure archival location. An archival location is looked up using either the ID or
+ the name.
+---
+
+# polaris_azure_archival_location (Data Source)
+
+
+The `polaris_azure_archival_location` data source is used to access information about
+an Azure archival location. An archival location is looked up using either the ID or
+the name.
+
+## Example Usage
+
+```terraform
+# Using the archival location ID.
+data "polaris_azure_archival_location" "archival_location" {
+ id = "db34f042-79ea-48b1-bab8-c40dfbf2ab82"
+}
+
+# Using the archival location name.
+data "polaris_azure_archival_location" "archival_location" {
+ name = "my-archival-location"
+}
+```
+
+
+## Schema
+
+### Optional
+
+- `archival_location_id` (String, Deprecated) Cloud native archival location ID (UUID). **Deprecated:** use `id` instead.
+- `id` (String) Cloud native archival location ID (UUID).
+- `name` (String) Name of the cloud native archival location.
+
+### Read-Only
+
+- `connection_status` (String) Connection status of the cloud native archival location.
+- `container_name` (String) Azure storage container name.
+- `customer_managed_key` (Set of Object) Customer managed storage encryption. Specify the regions and their respective encryption details. For other regions, data will be encrypted using platform managed keys. (see [below for nested schema](#nestedatt--customer_managed_key))
+- `location_template` (String) RSC location template. If a storage account region was specified, it will be `SPECIFIC_REGION`, otherwise `SOURCE_REGION`.
+- `redundancy` (String) Azure storage redundancy. Possible values are `GRS`, `GZRS`, `LRS`, `RA_GRS`, `RA_GZRS` and `ZRS`. Default value is `LRS`.
+- `storage_account_name_prefix` (String) Azure storage account name prefix. The storage account name prefix cannot be longer than 14 characters and can only consist of numbers and lower case letters.
+- `storage_account_region` (String) Azure region to store the snapshots in. If not specified, the snapshots will be stored in the same region as the workload.
+- `storage_account_tags` (Map of String) Azure storage account tags. Each tag will be added to the storage account created by RSC.
+- `storage_tier` (String) Azure storage tier. Possible values are `COOL` and `HOT`. Default value is `COOL`.
+
+
+### Nested Schema for `customer_managed_key`
+
+Read-Only:
+
+- `name` (String) Key name.
+- `region` (String) The region in which the key will be used. Regions without customer managed keys will use platform managed keys.
+- `vault_name` (String) Key vault name.
diff --git a/docs/data-sources/azure_permissions.md b/docs/data-sources/azure_permissions.md
index ec69a4e..a95ac31 100644
--- a/docs/data-sources/azure_permissions.md
+++ b/docs/data-sources/azure_permissions.md
@@ -3,35 +3,118 @@
page_title: "polaris_azure_permissions Data Source - terraform-provider-polaris"
subcategory: ""
description: |-
-
+ The polaris_azure_permissions data source is used to access information about
+ the permissions required by RSC for a specified RSC feature. The features currently
+ supported for Azure subscriptions are:
+ * AZURE_SQL_DB_PROTECTION
+ * AZURE_SQL_MI_PROTECTION
+ * CLOUD_NATIVE_ARCHIVAL
+ * CLOUD_NATIVE_ARCHIVAL_ENCRYPTION
+ * CLOUD_NATIVE_PROTECTION
+ * EXOCOMPUTE
+ See the subscription ../resources/azure_subscription resource for more information
+ on enabling features for an Azure subscription added to RSC.
+ The polaris_azure_permissions data source can be used with the azurerm_role_definition
+ and the permissions fields of the polaris_azure_subscription resources to
+ automatically update the permissions of roles and notify RSC about the updated
+ permissions.
+ -> Note: To better fit the RSC Azure permission model where each RSC feature has
+ two Azure roles, the features field has been deprecated and replaced with the
+ feature field.
+ -> Note: Because the RSC Azure permission model has been refined into subscription
+ level permissions and resource group level permissions, the actions, data_actions,
+ not_actions and not_data_actions fields have been deprecated and replaced with the
+ corresponding subscription and resource group fields.
+ -> Note: For backward compatibility, the features field allows the feature names
+ to be given in 3 different styles: EXAMPLE_FEATURE_NAME, example-feature-name or
+ example_feature_name. The recommended style is EXAMPLE_FEATURE_NAME as it is what
+ the RSC API itself uses.
---
# polaris_azure_permissions (Data Source)
+The `polaris_azure_permissions` data source is used to access information about
+the permissions required by RSC for a specified RSC feature. The features currently
+supported for Azure subscriptions are:
+ * `AZURE_SQL_DB_PROTECTION`
+ * `AZURE_SQL_MI_PROTECTION`
+ * `CLOUD_NATIVE_ARCHIVAL`
+ * `CLOUD_NATIVE_ARCHIVAL_ENCRYPTION`
+ * `CLOUD_NATIVE_PROTECTION`
+ * `EXOCOMPUTE`
+See the [subscription](../resources/azure_subscription) resource for more information
+on enabling features for an Azure subscription added to RSC.
+
+The `polaris_azure_permissions` data source can be used with the `azurerm_role_definition`
+and the `permissions` fields of the `polaris_azure_subscription` resources to
+automatically update the permissions of roles and notify RSC about the updated
+permissions.
+
+-> **Note:** To better fit the RSC Azure permission model where each RSC feature has
+ two Azure roles, the `features` field has been deprecated and replaced with the
+ `feature` field.
+
+-> **Note:** Because the RSC Azure permission model has been refined into subscription
+ level permissions and resource group level permissions, the `actions`, `data_actions`,
+ `not_actions` and `not_data_actions` fields have been deprecated and replaced with the
+ corresponding subscription and resource group fields.
+
+-> **Note:** For backward compatibility, the `features` field allows the feature names
+ to be given in 3 different styles: `EXAMPLE_FEATURE_NAME`, `example-feature-name` or
+ `example_feature_name`. The recommended style is `EXAMPLE_FEATURE_NAME` as it is what
+ the RSC API itself uses.
## Example Usage
```terraform
-data "polaris_azure_permissions" "default" {
- features = [
- "CLOUD_NATIVE_PROTECTION",
- ]
+# Permissions required for the Cloud Native Protection RSC feature.
+data "polaris_azure_permissions" "cloud_native_protection" {
+ feature = "CLOUD_NATIVE_PROTECTION"
+}
+
+# Permissions required for the Exocompute RSC feature. The subscription
+# is set up to notify RSC when the permissions are updated for the feature.
+data "polaris_azure_permissions" "exocompute" {
+ feature = "EXOCOMPUTE"
+}
+
+resource "polaris_azure_subscription" "subscription" {
+ subscription_id = "31be1bb0-c76c-11eb-9217-afdffe83a002"
+ tenant_domain = "my-domain.onmicrosoft.com"
+
+ exocompute {
+ permissions = data.polaris_azure_permissions.exocompute.id
+ regions = [
+ "eastus2",
+ ]
+ resource_group_name = "my-east-resource-group"
+ resource_group_region = "eastus2"
+ }
}
```
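+
+The subscription level permissions can also be fed into an `azurerm_role_definition`
+to keep the Azure role in sync. A minimal sketch, where the role name and the
+subscription scope are illustrative assumptions:
+
+```terraform
+# Define an Azure role from the permissions required by the feature.
+# The name and scope below are illustrative assumptions.
+resource "azurerm_role_definition" "subscription_level" {
+  name  = "RSC Cloud Native Protection"
+  scope = "/subscriptions/31be1bb0-c76c-11eb-9217-afdffe83a002"
+
+  permissions {
+    actions          = data.polaris_azure_permissions.cloud_native_protection.subscription_actions
+    data_actions     = data.polaris_azure_permissions.cloud_native_protection.subscription_data_actions
+    not_actions      = data.polaris_azure_permissions.cloud_native_protection.subscription_not_actions
+    not_data_actions = data.polaris_azure_permissions.cloud_native_protection.subscription_not_data_actions
+  }
+}
+```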
## Schema
-### Required
+### Optional
-- `features` (Set of String) Enabled features.
+- `feature` (String) RSC feature. Note that the feature name must be given in the `EXAMPLE_FEATURE_NAME` style. Possible values are `AZURE_SQL_DB_PROTECTION`, `AZURE_SQL_MI_PROTECTION`, `CLOUD_NATIVE_ARCHIVAL`, `CLOUD_NATIVE_ARCHIVAL_ENCRYPTION`, `CLOUD_NATIVE_PROTECTION` and `EXOCOMPUTE`.
+- `features` (Set of String, Deprecated) RSC features. Possible values are `AZURE_SQL_DB_PROTECTION`, `AZURE_SQL_MI_PROTECTION`, `CLOUD_NATIVE_ARCHIVAL`, `CLOUD_NATIVE_ARCHIVAL_ENCRYPTION`, `CLOUD_NATIVE_PROTECTION` and `EXOCOMPUTE`. **Deprecated:** use `feature` instead.
### Read-Only
-- `actions` (List of String) Allowed actions.
-- `data_actions` (List of String) Allowed data actions.
-- `hash` (String) SHA-256 hash of the permissions, can be used to detect changes to the permissions.
-- `id` (String) The ID of this resource.
-- `not_actions` (List of String) Disallowed actions.
-- `not_data_actions` (List of String) Disallowed data actions.
+- `actions` (List of String, Deprecated) Azure allowed actions. **Deprecated:** use `subscription_actions` and `resource_group_actions` instead.
+- `data_actions` (List of String, Deprecated) Azure allowed data actions. **Deprecated:** use `subscription_data_actions` and `resource_group_data_actions` instead.
+- `hash` (String, Deprecated) SHA-256 hash of the permissions, can be used to detect changes to the permissions. **Deprecated:** use `id` instead.
+- `id` (String) SHA-256 hash of the required permissions, will be updated as the required permissions changes.
+- `not_actions` (List of String, Deprecated) Azure disallowed actions. **Deprecated:** use `subscription_not_actions` and `resource_group_not_actions` instead.
+- `not_data_actions` (List of String, Deprecated) Azure disallowed data actions. **Deprecated:** use `subscription_not_data_actions` and `resource_group_not_data_actions` instead.
+- `resource_group_actions` (List of String) Azure allowed actions on the resource group level.
+- `resource_group_data_actions` (List of String) Azure allowed data actions on the resource group level.
+- `resource_group_not_actions` (List of String) Azure disallowed actions on the resource group level.
+- `resource_group_not_data_actions` (List of String) Azure disallowed data actions on the resource group level.
+- `subscription_actions` (List of String) Azure allowed actions on the subscription level.
+- `subscription_data_actions` (List of String) Azure allowed data actions on the subscription level.
+- `subscription_not_actions` (List of String) Azure disallowed actions on the subscription level.
+- `subscription_not_data_actions` (List of String) Azure disallowed data actions on the subscription level.
diff --git a/docs/data-sources/azure_subscription.md b/docs/data-sources/azure_subscription.md
new file mode 100644
index 0000000..73dc64c
--- /dev/null
+++ b/docs/data-sources/azure_subscription.md
@@ -0,0 +1,49 @@
+---
+# generated by https://github.com/hashicorp/terraform-plugin-docs
+page_title: "polaris_azure_subscription Data Source - terraform-provider-polaris"
+subcategory: ""
+description: |-
+ The polaris_azure_subscription data source is used to access information about an
+ Azure subscription added to RSC. An Azure subscription is looked up using either the
+ Azure subscription ID or the name. When looking up an Azure subscription using the
+ subscription name, the tenant domain can be used to specify in which tenant to look
+ for the name.
+ -> Note: The subscription name is the name of the Azure subscription as it appears
+ in RSC.
+---
+
+# polaris_azure_subscription (Data Source)
+
+The `polaris_azure_subscription` data source is used to access information about an
+Azure subscription added to RSC. An Azure subscription is looked up using either the
+Azure subscription ID or the name. When looking up an Azure subscription using the
+subscription name, the tenant domain can be used to specify in which tenant to look
+for the name.
+
+-> **Note:** The subscription name is the name of the Azure subscription as it appears
+ in RSC.
+
+## Example Usage
+
+```terraform
+data "polaris_azure_subscription" "example" {
+ name = "example"
+}
+
+output "example_azure_subscription" {
+ value = data.polaris_azure_subscription.example
+}
+```
+
+
+## Schema
+
+### Optional
+
+- `name` (String) Azure subscription name.
+- `subscription_id` (String) Azure subscription ID.
+- `tenant_domain` (String) Azure tenant primary domain.
+
+### Read-Only
+
+- `id` (String) RSC cloud account ID (UUID).
diff --git a/docs/data-sources/deployment.md b/docs/data-sources/deployment.md
index 105a7f3..1104705 100644
--- a/docs/data-sources/deployment.md
+++ b/docs/data-sources/deployment.md
@@ -3,17 +3,28 @@
page_title: "polaris_deployment Data Source - terraform-provider-polaris"
subcategory: ""
description: |-
-
+ The polaris_deployment data source is used to access information about the RSC
+ deployment.
---
# polaris_deployment (Data Source)
-
+The `polaris_deployment` data source is used to access information about the RSC
+deployment.
## Example Usage
```terraform
-data "polaris_deployment" "default" {}
+# Output the IP addresses and version used by the RSC deployment.
+data "polaris_deployment" "deployment" {}
+
+output "ip_addresses" {
+ value = data.polaris_deployment.deployment.ip_addresses
+}
+
+output "version" {
+ value = data.polaris_deployment.deployment.version
+}
```
@@ -21,6 +32,6 @@ data "polaris_deployment" "default" {}
### Read-Only
-- `id` (String) The ID of this resource.
+- `id` (String) SHA-256 hash of the IP addresses and the version.
- `ip_addresses` (Set of String) Deployment IP addresses.
- `version` (String) Deployment version.
diff --git a/docs/data-sources/features.md b/docs/data-sources/features.md
index 85ce6cc..29f1219 100644
--- a/docs/data-sources/features.md
+++ b/docs/data-sources/features.md
@@ -3,17 +3,29 @@
page_title: "polaris_features Data Source - terraform-provider-polaris"
subcategory: ""
description: |-
-
+ The polaris_features data source is used to access information about features enabled
+ for an RSC account.
+ !> WARNING: This resource is deprecated and will be removed in a future version.
+ Use the features field of the polaris_account data source instead.
---
# polaris_features (Data Source)
+The `polaris_features` data source is used to access information about features enabled
+for an RSC account.
+!> **WARNING:** This resource is deprecated and will be removed in a future version.
+ Use the `features` field of the `polaris_account` data source instead.
## Example Usage
```terraform
+# Output the features enabled for the RSC account.
data "polaris_features" "features" {}
+
+output "features_enabled" {
+ value = data.polaris_features.features.features
+}
```
@@ -21,5 +33,5 @@ data "polaris_features" "features" {}
### Read-Only
-- `features` (List of String) Enabled features.
-- `id` (String) The ID of this resource.
+- `features` (List of String) Features enabled for the RSC account.
+- `id` (String) SHA-256 hash of the fields in order.
diff --git a/docs/data-sources/role.md b/docs/data-sources/role.md
index ece785e..4024db6 100644
--- a/docs/data-sources/role.md
+++ b/docs/data-sources/role.md
@@ -2,12 +2,17 @@
page_title: "polaris_role Data Source - terraform-provider-polaris"
subcategory: ""
description: |-
+
+The `polaris_role` data source is used to access information about RSC roles.
---
# polaris_role (Data Source)
+The `polaris_role` data source is used to access information about RSC roles.
+
+
## Example Usage
@@ -17,7 +22,7 @@ data "polaris_role" "compliance_auditor" {
}
```
-
+
## Schema
### Required
@@ -27,7 +32,7 @@ data "polaris_role" "compliance_auditor" {
### Read-Only
- `description` (String) Role description.
-- `id` (String) The ID of this resource.
+- `id` (String) Role ID (UUID).
- `is_org_admin` (Boolean) True if the role is the organization administrator.
- `permission` (Set of Object) Role permission. (see [below for nested schema](#nestedatt--permission))
@@ -37,7 +42,7 @@ data "polaris_role" "compliance_auditor" {
Read-Only:
- `hierarchy` (Set of Object) Snappable hierarchy. (see [below for nested schema](#nestedobjatt--permission--hierarchy))
-- `operation` (String) Operation allowed on object ids under the snappable hierarchy.
+- `operation` (String) Operation allowed on object IDs under the snappable hierarchy.
### Nested Schema for `permission.hierarchy`
diff --git a/docs/data-sources/role_template.md b/docs/data-sources/role_template.md
index 2afff3c..1598e0c 100644
--- a/docs/data-sources/role_template.md
+++ b/docs/data-sources/role_template.md
@@ -2,12 +2,19 @@
page_title: "polaris_role_template Data Source - terraform-provider-polaris"
subcategory: ""
description: |-
+
+The `polaris_role_template` data source is used to access information about RSC role
+templates.
---
# polaris_role_template (Data Source)
+The `polaris_role_template` data source is used to access information about RSC role
+templates.
+
+
## Example Usage
@@ -17,18 +24,18 @@ data "polaris_role_template" "compliance_auditor" {
}
```
-
+
## Schema
### Required
-- `name` (String) Role name.
+- `name` (String) Role template name.
### Read-Only
-- `description` (String) Role description.
-- `id` (String) The ID of this resource.
-- `permission` (Set of Object) Role permission. (see [below for nested schema](#nestedatt--permission))
+- `description` (String) Role template description.
+- `id` (String) Role template ID (UUID).
+- `permission` (Set of Object) Role template permission. (see [below for nested schema](#nestedatt--permission))
### Nested Schema for `permission`
@@ -36,7 +43,7 @@ data "polaris_role_template" "compliance_auditor" {
Read-Only:
- `hierarchy` (Set of Object) Snappable hierarchy. (see [below for nested schema](#nestedobjatt--permission--hierarchy))
-- `operation` (String) Operation allowed on object ids under the snappable hierarchy.
+- `operation` (String) Operation allowed on object IDs under the snappable hierarchy.
### Nested Schema for `permission.hierarchy`
diff --git a/docs/guides/aws_cnp_account.md b/docs/guides/aws_cnp_account.md
index 0a617e0..d108dec 100644
--- a/docs/guides/aws_cnp_account.md
+++ b/docs/guides/aws_cnp_account.md
@@ -7,34 +7,50 @@ The `polaris_aws_account` resource uses a CloudFormation stack to grant RSC perm
granted to RSC by the CloudFormation stack can be difficult to understand and track as RSC will request the permissions
to be updated as new features, requiring new permissions, are released.
-To make the process of granting AWS permissions more transparent, a couple of new resources and data sources have been added to
-the RSC Terraform provider:
+To make the process of granting AWS permissions more transparent, a couple of new resources and data sources have been
+added to the RSC Terraform provider:
* `polaris_aws_cnp_account`
* `polaris_aws_cnp_account_attachments`
* `polaris_aws_cnp_account_trust_policy`
* `polaris_aws_cnp_artifacts`
* `polaris_aws_cnp_permissions`
- * `polaris_features`
+ * `polaris_account`
Using these resources, it's possible to add an AWS account to RSC without using a CloudFormation stack.
To add an AWS account to RSC using the new CNP resources, start by using the `polaris_aws_cnp_artifacts` data source:
```terraform
data "polaris_aws_cnp_artifacts" "artifacts" {
- features = ["CLOUD_NATIVE_PROTECTION"]
+ feature {
+ name = "CLOUD_NATIVE_PROTECTION"
+
+ permission_groups = [
+ "BASIC",
+ "EXPORT_AND_RESTORE",
+ "FILE_LEVEL_RECOVERY",
+ "SNAPSHOT_PRIVATE_ACCESS",
+ ]
+ }
}
```
-`features` lists the RSC features to enabled for the AWS account. Use the `polaris_features` data source to obtain a
-list of RSC features available for the RSC account. The `polaris_aws_cnp_artifacts` data source returns the instance
-profiles and roles, referred to as _artifacts_ by RSC, which are required by RSC.
+One or more `feature` blocks list the RSC features to enable for the AWS account. Use the `polaris_account` data
+source to obtain a list of RSC features available for the RSC account. The `polaris_aws_cnp_artifacts` data source
+returns the instance profiles and roles, referred to as _artifacts_ by RSC, which are required by RSC.
Next, use the `polaris_aws_cnp_permissions` data source to obtain the role permission policies, customer managed
policies and managed policies, required by RSC:
```terraform
data "polaris_aws_cnp_permissions" "permissions" {
for_each = data.polaris_aws_cnp_artifacts.artifacts.role_keys
- features = data.polaris_aws_cnp_artifacts.artifacts.features
role_key = each.key
+
+ dynamic "feature" {
+ for_each = data.polaris_aws_cnp_artifacts.artifacts.feature
+ content {
+ name = feature.value["name"]
+ permission_groups = feature.value["permission_groups"]
+ }
+ }
}
```
@@ -42,23 +58,31 @@ After defining the two data sources, use the `polaris_aws_cnp_account` resource
account:
```terraform
resource "polaris_aws_cnp_account" "account" {
- features = polaris_aws_cnp_artifacts.artifacts.features
name = "My Account"
native_id = "123456789123"
regions = ["us-east-2", "us-west-2"]
+
+ dynamic "feature" {
+ for_each = data.polaris_aws_cnp_artifacts.artifacts.feature
+ content {
+ name = feature.value["name"]
+ permission_groups = feature.value["permission_groups"]
+ }
+ }
}
```
-`name` is the name given to the AWS account in RSC, `native_id` is the AWS account ID and `regions` the AWS regions.
-When Terraform processes this resource, the AWS account will show up in the connecting state in the RSC UI.
+`name` is the name given to the AWS account in RSC, `native_id` is the AWS account ID and `regions` the AWS regions to
+protect with RSC. When Terraform processes this resource, the AWS account will show up in the connecting state in the
+RSC UI.
Next, the `polaris_aws_cnp_account_trust_policy` resource needs to be used to define the trust policies required by RSC
for the AWS account:
```terraform
resource "polaris_aws_cnp_account_trust_policy" "trust_policy" {
- for_each = data.polaris_aws_cnp_artifacts.artifacts.role_keys
- account_id = polaris_aws_cnp_account.account.id
- features = polaris_aws_cnp_account.account.features
- role_key = each.key
+ for_each = data.polaris_aws_cnp_artifacts.artifacts.role_keys
+ account_id = polaris_aws_cnp_account.account.id
+ features = polaris_aws_cnp_account.account.feature.*.name
+ role_key = each.key
}
```
This resource provides the trust policies to attach to the IAM roles created, so that RSC can assume the roles to
@@ -95,13 +119,13 @@ Lastly, to finalize the onboarding of the AWS account, use the `polaris_aws_cnp_
```terraform
resource "polaris_aws_cnp_account_attachments" "attachments" {
account_id = polaris_aws_cnp_account.account.id
- features = polaris_aws_cnp_account.account.features
+ features = polaris_aws_cnp_account.account.feature.*.name
dynamic "instance_profile" {
for_each = aws_iam_instance_profile.profile
content {
key = instance_profile.key
- name = instance_profile.value["name"]
+ name = instance_profile.value["arn"]
}
}
diff --git a/docs/guides/changelog.md b/docs/guides/changelog.md
new file mode 100644
index 0000000..ce3537d
--- /dev/null
+++ b/docs/guides/changelog.md
@@ -0,0 +1,50 @@
+---
+page_title: "Changelog"
+---
+
+# Changelog
+
+## v0.9.0
+* Update the `polaris_aws_archival_location` resource to support updates of the `bucket_tags` field without recreating
+ the resource.
+* Add `polaris_aws_account` data source. [[docs](../data-sources/aws_account)]
+* Add `polaris_azure_subscription` data source. [[docs](../data-sources/azure_subscription)]
+* Deprecate the `archival_location_id` field in the `polaris_aws_archival_location` data source. Use the `id` field
+ instead.
+* Deprecate the `archival_location_id` field in the `polaris_azure_archival_location` data source. Use the `id` field
+ instead.
+* Add the field `setup_yaml` to the `polaris_aws_exocompute_cluster_attachment` resource. The `setup_yaml` field
+ contains K8s specs that can be passed to `kubectl` to establish a connection between the cluster and RSC.
+ [[docs](../resources/aws_exocompute_cluster_attachment)]
+* Fix a bug in the AWS feature removal code that causes removal of the `CLOUD_NATIVE_S3_PROTECTION` feature to fail.
+* Improve the code that waits for RSC features to be disabled. The code now checks both the status of the job and the
+ status of the cloud account.
+* Improve the documentation for AWS data sources and resources.
+* Update guides.
+* Add `polaris_azure_archival_location` data source. [[docs](../data-sources/azure_archival_location)]
+* Fix a bug in the `polaris_azure_archival_location` resource where the cloud account UUID would be passed to the RSC
+ API instead of the Azure subscription UUID when creating an Azure archival location.
+* Fix a bug in the `polaris_aws_cnp_account` resource where destroying it would constantly result in an *objects not
+ authorized* error.
+* Increase the wait time for asynchronous RSC operations to 8.5 minutes.
+* Fix an issue with the permissions of subscriptions onboarded using the `polaris_azure_subscription` resource where
+ the RSC UI would show the status as "Update permissions" even though the app registration would have all the required
+ permissions.
+* Move changelog and upgrade guides to guides folder.
+* Add support for creating Azure cloud native archival locations. [[docs](../resources/azure_archival_location)]
+* Fix a bug in the `polaris_aws_exocompute` resource where customer supplied security groups were not validated
+ correctly.
+* Add support for shared Exocompute to the `polaris_azure_exocompute` resource.
+ [[docs](../resources/azure_exocompute#host_cloud_account_id)]
+* Add the `polaris_account` data source. [[docs](../data-sources/account)]
+* Add support for the Cloud Native Archival feature to the `polaris_azure_subscription` resource.
+ [[docs](../resources/azure_subscription#nested-schema-for-cloud_native_archival)]
+* Add support for the Cloud Native Archival Encryption feature to the `polaris_azure_subscription` resource.
+ [[docs](../resources/azure_subscription#nested-schema-for-cloud_native_archival_encryption)]
+* Add support for the Azure SQL Database Protection feature to the `polaris_azure_subscription` resource.
+ [[docs](../resources/azure_subscription#nested-schema-for-sql_db_protection)]
+* Add support for the Azure SQL Managed Instance Protection feature to the `polaris_azure_subscription` resource.
+ [[docs](../resources/azure_subscription#nested-schema-for-sql_mi_protection)]
+* Add support for specifying an Azure resource group when onboarding the Cloud Native Archival, Cloud Native Archival
+ Encryption, Cloud Native Protection or Exocompute features using the `polaris_azure_subscription` resource.
+ [[docs](../resources/azure_subscription#optional)]
diff --git a/docs/guides/permissions.md b/docs/guides/permissions.md
index b75c73f..f16d40f 100644
--- a/docs/guides/permissions.md
+++ b/docs/guides/permissions.md
@@ -7,10 +7,14 @@ RSC requires permissions to operate and as new features are added to RSC the set
guide explains how Terraform can be used to keep this set of permissions up to date.
## AWS
-For AWS this is managed through a CloudFormation stack. When the status of an account feature is `missing-permissions`
-the CloudFormation stack must be updated for the feature to continue to function. This can be managed by setting the
-`permissions` argument to `update`.
-```hcl
+There are two ways to onboard AWS accounts to RSC: with or without a CloudFormation stack. Depending on how an account
+is onboarded, permissions are managed in different ways.
+
+### Using a CloudFormation Stack
+When an account is onboarded using a CloudFormation stack, the permissions are managed through the stack. When the
+status of an account feature is `MISSING_PERMISSIONS` the CloudFormation stack must be updated for the RSC feature to
+continue to function. This can be managed by setting the `permissions` argument to `update`.
+```terraform
resource "polaris_aws_account" "default" {
profile = "default"
permissions = "update"
@@ -22,55 +26,98 @@ resource "polaris_aws_account" "default" {
}
}
```
-This will generate a diff when the status of at least one feature is `missing-permissions`. Applying the account
-resource for this diff will update the CloudFormation stack. If the `permissions` argument is not specified the
+This will generate a diff when the status of at least one feature is in the `MISSING_PERMISSIONS` state. Applying the
+account resource for this diff will update the CloudFormation stack. If the `permissions` argument is not specified the
provider will not attempt to update the CloudFormation stack.
+### Not Using a CloudFormation Stack
+When an account is onboarded without using a CloudFormation stack, the permissions can be managed using the
+`polaris_aws_cnp_artifacts` and `polaris_aws_cnp_permissions` data sources and the
+[aws](https://registry.terraform.io/providers/hashicorp/aws/latest) provider, using IAM roles. Please see the
+[AWS CNP Account](aws_cnp_account.md) guide for more information on how to create IAM roles using the data sources.
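+
+As a hedged sketch (resource names and the attribute wiring are illustrative and assume the trust policies created in
+the [AWS CNP Account](aws_cnp_account.md) guide), the artifacts returned by RSC can be turned into IAM roles like this:
+
+```terraform
+# One IAM role per role key returned by RSC. The assume role policy comes from
+# the polaris_aws_cnp_account_trust_policy resource described in the guide.
+resource "aws_iam_role" "rsc" {
+  for_each           = data.polaris_aws_cnp_artifacts.artifacts.role_keys
+  name_prefix        = "rubrik-${lower(each.key)}-"
+  assume_role_policy = polaris_aws_cnp_account_trust_policy.trust_policy[each.key].policy
+}
+```
+
+When RSC requires new permissions for a feature, the `polaris_aws_cnp_permissions` data source reflects the change,
+producing a diff for the IAM role policies; applying the diff brings the roles up to date.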
+
## Azure
-For Azure permissions are managed through a service principal. When the status of a subscription feature is
-`missing-permissions` the permissions of the service principal must be updated for the feature to continue to
-function. This can be managed by Terraform using the
-[azurerm](https://registry.terraform.io/providers/hashicorp/azurerm/latest) provider:
-```hcl
-data "polaris_azure_permissions" "default" {
- features = [
- "cloud-native-protection",
- "exocompute",
- ]
+For Azure, permissions are managed through the subscription. When the status of a subscription feature is
+`MISSING_PERMISSIONS` the permissions must be updated for the feature to continue to function. This can be managed by
+Terraform using the [azurerm](https://registry.terraform.io/providers/hashicorp/azurerm/latest) provider:
+```terraform
+variable "features" {
+ type = set(string)
+ description = "List of RSC features to enable for subscription."
+}
+
+data "polaris_azure_permissions" "features" {
+ for_each = var.features
+ feature = each.key
}
-resource "azurerm_role_definition" "default" {
- name = "terraform"
- scope = data.azurerm_subscription.default.id
+resource "azurerm_role_definition" "subscription" {
+ for_each = data.polaris_azure_permissions.features
+ name = "RSC - Subscription Level - ${each.value.feature}"
+ scope = data.azurerm_subscription.subscription.id
permissions {
- actions = data.polaris_azure_permissions.default.actions
- data_actions = data.polaris_azure_permissions.default.data_actions
- not_actions = data.polaris_azure_permissions.default.not_actions
- not_data_actions = data.polaris_azure_permissions.default.not_data_actions
+ actions = each.value.subscription_actions
+ data_actions = each.value.subscription_data_actions
+ not_actions = each.value.subscription_not_actions
+ not_data_actions = each.value.subscription_not_data_actions
}
}
-resource "azurerm_role_assignment" "default" {
+resource "azurerm_role_assignment" "subscription" {
+ for_each = data.polaris_azure_permissions.features
principal_id = "9e7f3952-1fc1-11ec-b57a-972144d12d97"
- role_definition_id = azurerm_role_definition.default.role_definition_resource_id
- scope = data.azurerm_subscription.default.id
+ role_definition_id = azurerm_role_definition.subscription[each.key].role_definition_resource_id
+ scope = data.azurerm_subscription.subscription.id
}
-resource "polaris_azure_service_principal" "default" {
- sdk_auth = "${path.module}/sdk-service-principal.json"
- tenant_domain = "mydomain.onmicrosoft.com"
- permissions_hash = data.polaris_azure_permissions.default.hash
+resource "azurerm_role_definition" "resource_group" {
+ for_each = data.polaris_azure_permissions.features
+ name = "RSC - Resource Group Level - ${each.value.feature}"
+ scope = data.azurerm_resource_group.resource_group.id
+
+ permissions {
+ actions = each.value.resource_group_actions
+ data_actions = each.value.resource_group_data_actions
+ not_actions = each.value.resource_group_not_actions
+ not_data_actions = each.value.resource_group_not_data_actions
+ }
+}
+
+resource "azurerm_role_assignment" "resource_group" {
+ for_each = data.polaris_azure_permissions.features
+ principal_id = "9e7f3952-1fc1-11ec-b57a-972144d12d97"
+ role_definition_id = azurerm_role_definition.resource_group[each.key].role_definition_resource_id
+ scope = data.azurerm_resource_group.resource_group.id
+}
+
+resource "polaris_azure_service_principal" "service_principal" {
+ ...
+}
+
+resource "polaris_azure_subscription" "subscription" {
+ subscription_id = data.azurerm_subscription.subscription.subscription_id
+ subscription_name = data.azurerm_subscription.subscription.display_name
+ tenant_domain = polaris_azure_service_principal.service_principal.tenant_domain
+
+ cloud_native_protection {
+ permissions = data.polaris_azure_permissions.features["CLOUD_NATIVE_PROTECTION"].id
+ resource_group_name = data.azurerm_resource_group.resource_group.name
+ resource_group_region = data.azurerm_resource_group.resource_group.location
+ regions = ["eastus2"]
+ }
+
+ ...
depends_on = [
- azurerm_role_definition.default,
- azurerm_role_assignment.default,
+ azurerm_role_definition.subscription,
+ azurerm_role_definition.resource_group,
]
}
```
When the permissions for a feature change, the permissions data source will reflect this, generating a diff for the
-role definition and service principal resources. Applying the diff will first update the permissions of the service
-principal's role definition and then notify RSC about the update.
+role definitions and subscription resources. Applying the diff will first update the permissions of the role
+definitions, then notify RSC about the update.
## GCP
For GCP permissions are managed through a service account. When the status of a project feature is `missing-permissions`
diff --git a/docs/guides/upgrade_guide_v0.3.0.md b/docs/guides/upgrade_guide_v0.3.0.md
index aceeaf4..5a22974 100644
--- a/docs/guides/upgrade_guide_v0.3.0.md
+++ b/docs/guides/upgrade_guide_v0.3.0.md
@@ -1,6 +1,5 @@
---
-page_title: "Upgrade Guide: v0.3.0 "
-subcategory: "Upgrade"
+page_title: "Upgrade Guide: v0.3.0"
---
# RSC provider version v0.3.0
diff --git a/docs/guides/upgrade_guide_v0.6.0.md b/docs/guides/upgrade_guide_v0.6.0.md
index f1a2341..f3c62ad 100644
--- a/docs/guides/upgrade_guide_v0.6.0.md
+++ b/docs/guides/upgrade_guide_v0.6.0.md
@@ -1,6 +1,5 @@
---
-page_title: "Upgrade Guide: v0.6.0 "
-subcategory: "Upgrade"
+page_title: "Upgrade Guide: v0.6.0"
---
# RSC provider version v0.6.0
diff --git a/docs/guides/upgrade_guide_v0.9.0.md b/docs/guides/upgrade_guide_v0.9.0.md
new file mode 100644
index 0000000..431dc44
--- /dev/null
+++ b/docs/guides/upgrade_guide_v0.9.0.md
@@ -0,0 +1,134 @@
+---
+page_title: "Upgrade Guide: v0.9.0"
+---
+
+# RSC provider changes
+The v0.9.0 release introduces changes to the following data sources and resources:
+* `polaris_account` - New data source with 3 fields, `features`, `fqdn` and `name`. `features` holds the features
+ enabled for the RSC account. `fqdn` holds the fully qualified domain name for the RSC account. `name` holds the RSC
+ account name.
+* `polaris_azure_permissions` - Add support for scoped permissions. Permissions are scoped to either the subscription
+ level or the resource group level. The `hash` field has been deprecated and replaced with the `id` field. Both fields
+ will have the same value until the `hash` field is removed in a future release.
+* `polaris_azure_archival_location` - Add support for Azure archival locations, see the data source and resource
+ documentation for more information.
+* `polaris_azure_exocompute` - Add support for shared Exocompute, see the resource documentation for more information.
+ The `subscription_id` field has been deprecated and replaced with the `cloud_account_id` field. The `subscription_id`
+ field referred to the ID of the `polaris_azure_subscription` resource and not the Azure subscription ID, which was
+ confusing. Note, changing an existing `polaris_azure_exocompute` resource to use the `cloud_account_id` field will
+ recreate the resource.
+* `polaris_azure_service_principal` - The `permissions_hash` field has been deprecated and replaced with the
+ `permissions` field. With the changes in the `polaris_azure_permissions` data source, use
+ `permissions = data.polaris_azure_permissions.<name>.id` to connect the `polaris_azure_permissions` data source to
+ the permissions update signal. The `permissions` field has, in turn, been deprecated and replaced with the
+ per-feature `permissions` fields of the `polaris_azure_subscription` resource.
+* `polaris_azure_subscription` - Add support for onboarding `cloud_native_archival`, `cloud_native_archival_encryption`,
+ `sql_db_protection` and `sql_mi_protection`. Note that there are no additional Terraform resources for managing these
+ features yet. Add support for specifying an Azure resource group per RSC feature. Add the `permissions` field to each
+ feature, which can be used with the `polaris_azure_permissions` data source to signal permission updates.
+* `polaris_features` - The data source has been deprecated and replaced with the `features` field of the
+ `polaris_account` data source. Note, the `features` field is a set and not a list.
+* `polaris_aws_exocompute_cluster_attachment` - New field, `setup_yaml`, which holds the K8s spec that can be passed
+ to `kubectl apply` inside the EKS cluster to create a connection between the cluster and RSC.
+* `polaris_aws_account` - New data source for accessing information about an AWS account added to RSC. The account can
+ be looked up by the AWS account ID or the account name. Currently, only the cloud account ID of the account is
+ exposed.
+* `polaris_azure_subscription` - New data source for accessing information about an Azure subscription added to RSC.
+ The subscription can be looked up by the Azure subscription ID or the subscription name. Currently, only the cloud
+ account ID of the subscription is exposed.
+* `polaris_aws_archival_location` - The `bucket_tags` field now supports being updated without the resource being
+ recreated.
+
+Deprecated fields will be removed in a future release, please migrate your configurations to use the replacement field
+as soon as possible.
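+
+As an illustration (the resource names and the chosen feature are hypothetical), migrating the service principal from
+the deprecated `permissions_hash` field to the new `permissions` field looks like this:
+
+```terraform
+data "polaris_azure_permissions" "features" {
+  for_each = toset(["CLOUD_NATIVE_PROTECTION"])
+  feature  = each.key
+}
+
+resource "polaris_azure_service_principal" "service_principal" {
+  sdk_auth      = "${path.module}/sdk-service-principal.json"
+  tenant_domain = "mydomain.onmicrosoft.com"
+
+  # Before: permissions_hash = data.polaris_azure_permissions.features["CLOUD_NATIVE_PROTECTION"].hash
+  permissions = data.polaris_azure_permissions.features["CLOUD_NATIVE_PROTECTION"].id
+}
+```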
+
+# Known issues
+* The user-assigned managed identity for `cloud_native_archival_encryption` is not refreshed when the
+ `polaris_azure_subscription` resource is updated. This will be fixed in a future release.
+
+In addition to the issues listed above, which affect this particular release of the provider, other reported issues
+can be found on [GitHub](https://github.com/rubrikinc/terraform-provider-polaris/issues).
+
+# How to upgrade
+Make sure that the `version` field is configured in a way which allows Terraform to upgrade to the v0.9.0 release. One
+way of doing this is by using the pessimistic constraint operator `~>`, which allows Terraform to upgrade to the latest
+release within the same minor version:
+```hcl
+terraform {
+ required_providers {
+ polaris = {
+ source = "rubrikinc/polaris"
+ version = "~> 0.9.0"
+ }
+ }
+}
+```
+Next, upgrade the Terraform provider to the new version by running:
+```bash
+$ terraform init -upgrade
+```
+After the Terraform provider has been updated, validate the correctness of the Terraform configuration files by running:
+```bash
+$ terraform plan
+```
+If this doesn't produce an error or unwanted diff, proceed by running:
+```bash
+$ terraform apply -refresh-only
+```
+This will read the remote state of the resources and migrate the local Terraform state to the v0.9.0 version.
+
+## Upgrade issues
+When upgrading to the v0.9.0 release you may encounter one or more of the following issues.
+
+### polaris_azure_exocompute
+Replacing the `subscription_id` field with the `cloud_account_id` field will result in the `polaris_azure_exocompute`
+resource being recreated, a diff similar to the following will be shown:
+```hcl
+ # polaris_azure_exocompute.default must be replaced
+-/+ resource "polaris_azure_exocompute" "default" {
+ + cloud_account_id = "a677433c-954c-4af6-842e-0268c4a82a9f" # forces replacement
+ ~ id = "45d68b3f-a78f-4098-922e-367d2a22cb92" -> (known after apply)
+ - subscription_id = "a677433c-954c-4af6-842e-0268c4a82a9f" -> null # forces replacement
+ # (2 unchanged attributes hidden)
+ }
+```
+Apply the diff to recreate the resource and replace the field.
+
+### polaris_azure_service_principal
+Replacing the `permissions_hash` field with the `permissions` field will result in the resource being updated in-place,
+a diff similar to the following will be shown:
+```hcl
+# polaris_azure_service_principal.default will be updated in-place
+~ resource "polaris_azure_service_principal" "default" {
+ id = "6f35cc58-e1c9-445d-8bb0-a0e30dd53a40"
+ + permissions = "0a79e15a989ef9a5191fe9fba62f40f5bd7f7062a90fbe367b29d1ae3dd34e50"
+ - permissions_hash = "0a79e15a989ef9a5191fe9fba62f40f5bd7f7062a90fbe367b29d1ae3dd34e50" -> null
+ # (2 unchanged attributes hidden)
+}
+```
+Apply the diff to replace the field.
+
+### polaris_azure_subscription
+Because of the new Azure resource group support, using the `cloud_native_protection` or `exocompute` fields will result
+in a diff similar to the following:
+```hcl
+# polaris_azure_subscription.default will be updated in-place
+~ resource "polaris_azure_subscription" "default" {
+ id = "f7b298c4-bf1d-4af4-900e-bf69ddfc6187"
+ # (4 unchanged attributes hidden)
+
+ ~ cloud_native_protection {
+ - resource_group_name = "RubrikBackups-RG-DontDelete-9f68a830-36a7-4363-9cf9-c81189fdc410" -> null
+ - resource_group_region = "westus" -> null
+ # (3 unchanged attributes hidden)
+ }
+
+ ~ exocompute {
+ - resource_group_name = "RubrikBackups-RG-DontDelete-e9ee0004-dcb2-4ec5-91b5-329c561c8311" -> null
+ - resource_group_region = "westus" -> null
+ # (3 unchanged attributes hidden)
+ }
+}
+```
+To remove the diff, copy the `resource_group_name` and `resource_group_region` values from the diff and add them to
+their respective places in the Terraform configuration.
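+
+A sketch of the updated configuration (values copied from the example diff above; all other fields are unchanged):
+
+```terraform
+resource "polaris_azure_subscription" "default" {
+  # ...other fields unchanged...
+
+  cloud_native_protection {
+    resource_group_name   = "RubrikBackups-RG-DontDelete-9f68a830-36a7-4363-9cf9-c81189fdc410"
+    resource_group_region = "westus"
+    # ...remaining cloud_native_protection fields unchanged...
+  }
+}
+```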
diff --git a/docs/index.md b/docs/index.md
index 96e4702..2556746 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -40,8 +40,7 @@ provider "polaris" {
The service account can also be passed to the provider using the `RUBRIK_POLARIS_SERVICEACCOUNT_CREDENTIALS` environment
variable. When passing the service account using the environment variable, leave the provider configuration empty:
```terraform
-provider "polaris" {
-}
+provider "polaris" {}
```
For documentation on how to create a service account using RSC, visit the
diff --git a/docs/resources/aws_account.md b/docs/resources/aws_account.md
index 9dc9785..6612395 100644
--- a/docs/resources/aws_account.md
+++ b/docs/resources/aws_account.md
@@ -3,12 +3,42 @@
page_title: "polaris_aws_account Resource - terraform-provider-polaris"
subcategory: ""
description: |-
-
+ The polaris_aws_account resource adds an AWS account to RSC. To grant RSC
+ permissions to perform certain operations on the account, a CloudFormation
+ stack is created from a template provided by RSC.
+ There are two ways to specify the AWS account to onboard:
+ 1. Using the profile field. The AWS profile is used to create the
+ CloudFormation stack and look up the AWS account ID.
+ 2. Using the assume_role field with, or without, the profile field. If the
+ profile field is omitted, the default profile is used. The profile is used
+ to assume the role. The assumed role is then used to create the
+ CloudFormation stack and look up the account ID.
+ Any combination of different RSC features can be enabled for an account:
+ 1. cloud_native_protection - Provides protection for AWS EC2 instances and
+ EBS volumes through the rules and policies of SLA Domains.
+ 2. exocompute - Provides snapshot indexing, file recovery and application
+ protection of AWS objects.
---
# polaris_aws_account (Resource)
+The `polaris_aws_account` resource adds an AWS account to RSC. To grant RSC
+permissions to perform certain operations on the account, a CloudFormation
+stack is created from a template provided by RSC.
+There are two ways to specify the AWS account to onboard:
+ 1. Using the `profile` field. The AWS profile is used to create the
+ CloudFormation stack and look up the AWS account ID.
+ 2. Using the `assume_role` field with, or without, the `profile` field. If the
+ `profile` field is omitted, the default profile is used. The profile is used
+ to assume the role. The assumed role is then used to create the
+ CloudFormation stack and look up the account ID.
+
+Any combination of different RSC features can be enabled for an account:
+ 1. `cloud_native_protection` - Provides protection for AWS EC2 instances and
+ EBS volumes through the rules and policies of SLA Domains.
+ 2. `exocompute` - Provides snapshot indexing, file recovery and application
+ protection of AWS objects.
## Example Usage
@@ -18,17 +48,25 @@ resource "polaris_aws_account" "default" {
profile = "default"
cloud_native_protection {
+ permission_groups = [
+ "BASIC",
+ ]
+
regions = [
"us-east-2",
]
}
}
-# Enable Cloud Native Protection and Exocompte.
+# Enable Cloud Native Protection and Exocompute.
resource "polaris_aws_account" "default" {
profile = "default"
cloud_native_protection {
+ permission_groups = [
+ "BASIC",
+ ]
+
regions = [
"us-east-2",
"us-west-2",
@@ -36,6 +74,11 @@ resource "polaris_aws_account" "default" {
}
exocompute {
+ permission_groups = [
+ "BASIC",
+ "RSC_MANAGED_CLUSTER",
+ ]
+
regions = [
"us-west-2",
]
@@ -44,7 +87,7 @@ resource "polaris_aws_account" "default" {
# The CloudFormation stack ARN is available after creation
output "stack_arn" {
- value = polaris_aws_account.default.exocompute[0].stack_arn
+ value = polaris_aws_account.default.exocompute[0].stack_arn
}
```
@@ -59,25 +102,25 @@ output "stack_arn" {
- `assume_role` (String) Role ARN of role to assume.
- `delete_snapshots_on_destroy` (Boolean) Should snapshots be deleted when the resource is destroyed.
-- `exocompute` (Block List, Max: 1) Enable the exocompute feature for the account. (see [below for nested schema](#nestedblock--exocompute))
+- `exocompute` (Block List, Max: 1) Enable the Exocompute feature for the account. (see [below for nested schema](#nestedblock--exocompute))
- `name` (String) Account name in Polaris. If not given the name is taken from AWS Organizations or, if the required permissions are missing, is derived from the AWS account ID and the named profile.
- `permissions` (String) When set to 'update' feature permissions can be updated by applying the configuration.
- `profile` (String) AWS named profile.
### Read-Only
-- `id` (String) The ID of this resource.
+- `id` (String) RSC cloud account ID (UUID).
### Nested Schema for `cloud_native_protection`
Required:
-- `regions` (Set of String) Regions that Polaris will monitor for instances to automatically protect.
+- `regions` (Set of String) Regions that RSC will monitor for instances to automatically protect.
Optional:
-- `permission_groups` (Set of String) Permission groups to assign to the cloud native protection feature.
+- `permission_groups` (Set of String) Permission groups to assign to the Cloud Native Protection feature. Possible values are `BASIC`, `EXPORT_AND_RESTORE`, `FILE_LEVEL_RECOVERY` and `SNAPSHOT_PRIVATE_ACCESS`.
Read-Only:
@@ -94,7 +137,7 @@ Required:
Optional:
-- `permission_groups` (Set of String) Permission groups to assign to the exocompute feature.
+- `permission_groups` (Set of String) Permission groups to assign to the Exocompute feature. Possible values are `BASIC`, `PRIVATE_ENDPOINT` and `RSC_MANAGED_CLUSTER`.
Read-Only:
diff --git a/docs/resources/aws_archival_location.md b/docs/resources/aws_archival_location.md
index 5d25bcd..783f261 100644
--- a/docs/resources/aws_archival_location.md
+++ b/docs/resources/aws_archival_location.md
@@ -3,12 +3,34 @@
page_title: "polaris_aws_archival_location Resource - terraform-provider-polaris"
subcategory: ""
description: |-
-
+ The polaris_aws_archival_location resource creates an RSC archival location for
+ cloud-native workloads. This resource requires that the AWS account has been
+ onboarded with the CLOUD_NATIVE_ARCHIVAL feature.
+ When creating an archival location, the region where the snapshots are stored needs
+ to be specified:
+ * SOURCE_REGION - Store snapshots in the same region to minimize data transfer
+ charges. This is the default behaviour when the region field is not specified.
+ * SPECIFIC_REGION - Storing snapshots in another region can increase total data
+ transfer charges. The region field specifies the region.
+ -> Note: The AWS bucket holding the archived data is not created until the first
+ protected object is archived.
---
# polaris_aws_archival_location (Resource)
+The `polaris_aws_archival_location` resource creates an RSC archival location for
+cloud-native workloads. This resource requires that the AWS account has been
+onboarded with the `CLOUD_NATIVE_ARCHIVAL` feature.
+When creating an archival location, the region where the snapshots are stored needs
+to be specified:
+ * `SOURCE_REGION` - Store snapshots in the same region to minimize data transfer
+ charges. This is the default behaviour when the `region` field is not specified.
+ * `SPECIFIC_REGION` - Storing snapshots in another region can increase total data
+ transfer charges. The `region` field specifies the region.
+
+-> **Note:** The AWS bucket holding the archived data is not created until the first
+ protected object is archived.
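+
+A minimal sketch of a `SPECIFIC_REGION` archival location (the name, prefix and
+region below are illustrative, and the account is assumed to be onboarded with
+`CLOUD_NATIVE_ARCHIVAL`):
+
+```terraform
+resource "polaris_aws_archival_location" "specific_region" {
+  account_id    = polaris_aws_cnp_account.account.id
+  name          = "archival-us-west-2"
+  bucket_prefix = "archival"
+  region        = "us-west-2"
+  storage_class = "STANDARD_IA"
+}
+```
+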
## Example Usage
@@ -34,19 +56,19 @@ resource "polaris_aws_archival_location" "archival_location" {
### Required
-- `account_id` (String) RSC cloud account ID.
-- `bucket_prefix` (String) AWS bucket prefix. Note that `rubrik-` will always be prepended to the prefix.
-- `name` (String) Name of the archival location.
+- `account_id` (String) RSC cloud account ID (UUID). Changing this forces a new resource to be created.
+- `bucket_prefix` (String) AWS bucket prefix. The prefix cannot be longer than 19 characters. Note that `rubrik-` will always be prepended to the prefix. Changing this forces a new resource to be created.
+- `name` (String) Name of the cloud native archival location.
### Optional
- `bucket_tags` (Map of String) AWS bucket tags. Each tag will be added to the bucket created by RSC.
-- `kms_master_key` (String, Sensitive) AWS KMS master key alias/ID.
-- `region` (String) AWS region to store the snapshots in. If not specified, the snapshots will be stored in the same region as the workload.
-- `storage_class` (String) AWS bucket storage class.
+- `kms_master_key` (String, Sensitive) AWS KMS master key alias/ID. Default value is `aws/s3`.
+- `region` (String) AWS region to store the snapshots in. If not specified, the snapshots will be stored in the same region as the workload. Changing this forces a new resource to be created.
+- `storage_class` (String) AWS bucket storage class. Possible values are `STANDARD`, `STANDARD_IA`, `ONEZONE_IA`, `GLACIER_INSTANT_RETRIEVAL`, `GLACIER_DEEP_ARCHIVE` and `GLACIER_FLEXIBLE_RETRIEVAL`. Default value is `STANDARD_IA`.
### Read-Only
-- `connection_status` (String) Connection status of the archival location.
-- `id` (String) The ID of this resource.
+- `connection_status` (String) Connection status of the cloud native archival location.
+- `id` (String) Cloud native archival location ID (UUID).
- `location_template` (String) Location template. If a region was specified, it will be `SPECIFIC_REGION`, otherwise `SOURCE_REGION`.
diff --git a/docs/resources/aws_cnp_account.md b/docs/resources/aws_cnp_account.md
index 1e092fe..0d0ae98 100644
--- a/docs/resources/aws_cnp_account.md
+++ b/docs/resources/aws_cnp_account.md
@@ -3,21 +3,113 @@
page_title: "polaris_aws_cnp_account Resource - terraform-provider-polaris"
subcategory: ""
description: |-
-
+ The polaris_aws_cnp_account resource adds an AWS account to RSC using the non-CFT
+ (Cloud Formation Template) workflow. The polaris_aws_account resource can be used to
+ add an AWS account to RSC using the CFT workflow.
+ Permission Groups
+ Following is a list of features and their applicable permission groups. These are used
+ when specifying the feature set.
+ CLOUDNATIVEARCHIVAL
+ BASIC - Represents the basic set of permissions required to onboard the feature.
+ CLOUDNATIVEPROTECTION
+ BASIC - Represents the basic set of permissions required to onboard the feature.
+ EXPORT_AND_RESTORE - Represents the set of permissions required for export and
+ restore operations.
+ FILE_LEVEL_RECOVERY - Represents the set of permissions required for file-level
+ recovery operations.
+ SNAPSHOT_PRIVATE_ACCESS - Represents the set of permissions required for private
+ access to disk snapshots.
+ CLOUDNATIVES3_PROTECTION
+ BASIC - Represents the basic set of permissions required to onboard the feature.
+ EXOCOMPUTE
+ BASIC - Represents the basic set of permissions required to onboard the feature.
+ PRIVATE_ENDPOINTS - Represents the set of permissions required for usage of
+ private endpoints.
+ RSC_MANAGED_CLUSTER - Represents the set of permissions required for the Rubrik-
+ managed Exocompute cluster.
+ RDS_PROTECTION
+ BASIC - Represents the basic set of permissions required to onboard the feature.
+ -> Note: When permission groups are specified, the BASIC permission group must
+ always be included.
---
# polaris_aws_cnp_account (Resource)
+The `polaris_aws_cnp_account` resource adds an AWS account to RSC using the non-CFT
+(Cloud Formation Template) workflow. The `polaris_aws_account` resource can be used to
+add an AWS account to RSC using the CFT workflow.
+## Permission Groups
+Following is a list of features and their applicable permission groups. These are used
+when specifying the feature set.
+
+### CLOUD_NATIVE_ARCHIVAL
+ * `BASIC` - Represents the basic set of permissions required to onboard the feature.
+
+### CLOUD_NATIVE_PROTECTION
+ * `BASIC` - Represents the basic set of permissions required to onboard the feature.
+ * `EXPORT_AND_RESTORE` - Represents the set of permissions required for export and
+ restore operations.
+ * `FILE_LEVEL_RECOVERY` - Represents the set of permissions required for file-level
+ recovery operations.
+ * `SNAPSHOT_PRIVATE_ACCESS` - Represents the set of permissions required for private
+ access to disk snapshots.
+
+### CLOUD_NATIVE_S3_PROTECTION
+ * `BASIC` - Represents the basic set of permissions required to onboard the feature.
+
+### EXOCOMPUTE
+ * `BASIC` - Represents the basic set of permissions required to onboard the feature.
+ * `PRIVATE_ENDPOINTS` - Represents the set of permissions required for usage of
+ private endpoints.
+ * `RSC_MANAGED_CLUSTER` - Represents the set of permissions required for the Rubrik-
+ managed Exocompute cluster.
+
+### RDS_PROTECTION
+ * `BASIC` - Represents the basic set of permissions required to onboard the feature.
+
+-> **Note:** When permission groups are specified, the `BASIC` permission group must
+ always be included.
## Example Usage
```terraform
+# Hardcoded values. Permission groups defaults to BASIC.
resource "polaris_aws_cnp_account" "account" {
- features = ["CLOUD_NATIVE_PROTECTION"]
name = "My Account"
native_id = "123456789123"
- regions = ["us-east-2", "us-west-2"]
+
+ regions = [
+ "us-east-2",
+ "us-west-2",
+ ]
+
+ feature {
+ name = "CLOUD_NATIVE_ARCHIVAL"
+ }
+
+ feature {
+ name = "CLOUD_NATIVE_PROTECTION"
+
+ permission_groups = [
+ "BASIC",
+ "EXPORT_AND_RESTORE",
+ ]
+ }
+}
+
+# Using variables for the account values and the features. The dynamic
+# feature block could also be expanded from the polaris_aws_cnp_artifacts
+# data source.
+resource "polaris_aws_cnp_account" "account" {
+ cloud = var.cloud
+ external_id = var.external_id
+ name = var.name
+ native_id = var.native_id
+ regions = var.regions
+
+ dynamic "feature" {
+ for_each = var.features
+ content {
+ name = feature.value["name"]
+ permission_groups = feature.value["permission_groups"]
+ }
+ }
}
```
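+
+The `var.features` variable used in the second example is assumed to be declared
+along these lines (a hypothetical sketch, not part of the provider):
+
+```terraform
+variable "features" {
+  description = "RSC features with their permission groups."
+  type = set(object({
+    name              = string
+    permission_groups = set(string)
+  }))
+}
+```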
@@ -27,24 +119,24 @@ resource "polaris_aws_cnp_account" "account" {
### Required
- `feature` (Block Set, Min: 1) RSC feature with optional permission groups. (see [below for nested schema](#nestedblock--feature))
-- `native_id` (String) AWS account id.
+- `native_id` (String) AWS account ID. Changing this forces a new resource to be created.
- `regions` (Set of String) Regions.
### Optional
-- `cloud` (String) Cloud type.
+- `cloud` (String) AWS cloud type. Possible values are `STANDARD`, `CHINA` and `GOV`. Default value is `STANDARD`. Changing this forces a new resource to be created.
- `delete_snapshots_on_destroy` (Boolean) Should snapshots be deleted when the resource is destroyed.
-- `external_id` (String) External id.
+- `external_id` (String) External ID. Changing this forces a new resource to be created.
- `name` (String) Account name.
### Read-Only
-- `id` (String) The ID of this resource.
+- `id` (String) RSC cloud account ID (UUID).
### Nested Schema for `feature`
Required:
-- `name` (String) Feature name.
-- `permission_groups` (Set of String) Permission groups to assign to the feature.
+- `name` (String) RSC feature name. Possible values are `CLOUD_NATIVE_ARCHIVAL`, `CLOUD_NATIVE_PROTECTION`, `CLOUD_NATIVE_S3_PROTECTION`, `EXOCOMPUTE` and `RDS_PROTECTION`.
+- `permission_groups` (Set of String) RSC permission groups for the feature. Possible values are `BASIC`, `EXPORT_AND_RESTORE`, `FILE_LEVEL_RECOVERY`, `SNAPSHOT_PRIVATE_ACCESS`, `PRIVATE_ENDPOINT` and `RSC_MANAGED_CLUSTER`. For backwards compatibility, `[]` is interpreted as all applicable permission groups.
diff --git a/docs/resources/aws_cnp_account_attachments.md b/docs/resources/aws_cnp_account_attachments.md
index a61ad41..0945329 100644
--- a/docs/resources/aws_cnp_account_attachments.md
+++ b/docs/resources/aws_cnp_account_attachments.md
@@ -3,25 +3,35 @@
page_title: "polaris_aws_cnp_account_attachments Resource - terraform-provider-polaris"
subcategory: ""
description: |-
-
+ The polaris_aws_cnp_account_attachments resource attaches AWS instance profiles
+ and AWS
+ roles to an RSC cloud account.
+ -> Note: The features field takes only the feature names and not the permission
+ groups associated with the features.
---
# polaris_aws_cnp_account_attachments (Resource)
+The `polaris_aws_cnp_account_attachments` resource attaches AWS instance profiles and AWS
+roles to an RSC cloud account.
+-> **Note:** The `features` field takes only the feature names and not the permission
+ groups associated with the features.
## Example Usage
```terraform
+# The configuration assumes that an AWS account has been added
+# to RSC and that one AWS IAM instance profile and one role have
+# been created for each RSC artifact.
resource "polaris_aws_cnp_account_attachments" "attachments" {
account_id = polaris_aws_cnp_account.account.id
- features = polaris_aws_cnp_account.account.features
+ features = polaris_aws_cnp_account.account.feature.*.name
dynamic "instance_profile" {
for_each = aws_iam_instance_profile.profile
content {
key = instance_profile.key
- name = instance_profile.value["name"]
+ name = instance_profile.value["arn"]
}
}
@@ -40,8 +50,8 @@ resource "polaris_aws_cnp_account_attachments" "attachments" {
### Required
-- `account_id` (String) RSC account id.
-- `features` (Set of String) RSC features.
+- `account_id` (String) RSC cloud account ID (UUID). Changing this forces a new resource to be created.
+- `features` (Set of String) RSC features. Possible values are `CLOUD_NATIVE_ARCHIVAL`, `CLOUD_NATIVE_PROTECTION`, `CLOUD_NATIVE_S3_PROTECTION`, `EXOCOMPUTE` and `RDS_PROTECTION`.
- `role` (Block Set, Min: 1) Roles to attach to the cloud account. (see [below for nested schema](#nestedblock--role))
### Optional
@@ -50,7 +60,7 @@ resource "polaris_aws_cnp_account_attachments" "attachments" {
### Read-Only
-- `id` (String) The ID of this resource.
+- `id` (String) RSC cloud account ID (UUID).
### Nested Schema for `role`
@@ -58,7 +68,7 @@ resource "polaris_aws_cnp_account_attachments" "attachments" {
Required:
- `arn` (String) AWS role ARN.
-- `key` (String) Role key.
+- `key` (String) RSC artifact key for the AWS role.
@@ -66,5 +76,5 @@ Required:
Required:
-- `key` (String) Instance profile key.
+- `key` (String) RSC artifact key for the AWS instance profile.
- `name` (String) AWS instance profile name.
diff --git a/docs/resources/aws_cnp_account_trust_policy.md b/docs/resources/aws_cnp_account_trust_policy.md
index 36e4fbf..9aea098 100644
--- a/docs/resources/aws_cnp_account_trust_policy.md
+++ b/docs/resources/aws_cnp_account_trust_policy.md
@@ -3,22 +3,68 @@
page_title: "polaris_aws_cnp_account_trust_policy Resource - terraform-provider-polaris"
subcategory: ""
description: |-
-
+ The polaris_aws_cnp_account_trust_policy resource gets the AWS IAM trust policies
+ required by RSC. The policy field of the polaris_aws_cnp_account_trust_policy
+ resource should be used with the assume_role_policy of the aws_iam_role resource.
+ -> Note: The features field takes only the feature names and not the permission
+ groups associated with the features.
---
# polaris_aws_cnp_account_trust_policy (Resource)
+The `polaris_aws_cnp_account_trust_policy` resource gets the AWS IAM trust policies
+required by RSC. The `policy` field of the `polaris_aws_cnp_account_trust_policy`
+resource should be used with the `assume_role_policy` of the `aws_iam_role` resource.
+-> **Note:** The `features` field takes only the feature names and not the permission
+ groups associated with the features.
## Example Usage
```terraform
+data "polaris_aws_cnp_artifacts" "artifacts" {
+ feature {
+ name = "CLOUD_NATIVE_ARCHIVAL"
+
+ permission_groups = [
+ "BASIC",
+ ]
+ }
+
+ feature {
+ name = "CLOUD_NATIVE_PROTECTION"
+
+ permission_groups = [
+ "BASIC",
+ "EXPORT_AND_RESTORE",
+ ]
+ }
+}
+
+resource "polaris_aws_cnp_account" "account" {
+ name = "My Account"
+ native_id = "123456789123"
+ regions = [
+ "us-east-2",
+ "us-west-2",
+ ]
+
+ dynamic "feature" {
+ for_each = data.polaris_aws_cnp_artifacts.artifacts.feature
+ content {
+ name = feature.value["name"]
+ permission_groups = feature.value["permission_groups"]
+ }
+ }
+}
+
+# Lookup the trust policies using the artifacts data source and the
+# account resource.
resource "polaris_aws_cnp_account_trust_policy" "trust_policy" {
- for_each = data.polaris_aws_cnp_artifacts.artifacts.role_keys
- account_id = polaris_aws_cnp_account.account.id
- features = polaris_aws_cnp_account.account.features
- external_id = polaris_aws_cnp_account.account.external_id
- role_key = each.key
+ for_each = data.polaris_aws_cnp_artifacts.artifacts.role_keys
+ account_id = polaris_aws_cnp_account.account.id
+ features = polaris_aws_cnp_account.account.feature.*.name
+ role_key = each.key
}
```
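+
+The trust policies looked up above can then be wired into AWS IAM roles; a
+minimal sketch using the Terraform AWS provider (the role name pattern is
+illustrative):
+
+```terraform
+resource "aws_iam_role" "rsc" {
+  for_each = polaris_aws_cnp_account_trust_policy.trust_policy
+
+  # Use the RSC artifact key as part of the role name and the RSC
+  # trust policy as the role's assume role policy.
+  name               = "rubrik-${lower(each.key)}"
+  assume_role_policy = each.value.policy
+}
+```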
@@ -27,15 +73,15 @@ resource "polaris_aws_cnp_account_trust_policy" "trust_policy" {
### Required
-- `account_id` (String) RSC account id.
-- `features` (Set of String) RSC features.
-- `role_key` (String) Role key.
+- `account_id` (String) RSC cloud account ID (UUID). Changing this forces a new resource to be created.
+- `features` (Set of String) RSC features. Possible values are `CLOUD_NATIVE_ARCHIVAL`, `CLOUD_NATIVE_PROTECTION`, `CLOUD_NATIVE_S3_PROTECTION`, `EXOCOMPUTE` and `RDS_PROTECTION`. Changing this forces a new resource to be created.
+- `role_key` (String) RSC artifact key for the AWS role.
### Optional
-- `external_id` (String) External id.
+- `external_id` (String) External ID. Changing this forces a new resource to be created.
### Read-Only
-- `id` (String) The ID of this resource.
-- `policy` (String) Trust policy.
+- `id` (String) RSC cloud account ID (UUID).
+- `policy` (String) AWS IAM trust policy.
diff --git a/docs/resources/aws_exocompute.md b/docs/resources/aws_exocompute.md
index d7afde1..6c86fd0 100644
--- a/docs/resources/aws_exocompute.md
+++ b/docs/resources/aws_exocompute.md
@@ -3,19 +3,75 @@
page_title: "polaris_aws_exocompute Resource - terraform-provider-polaris"
subcategory: ""
description: |-
-
+ The polaris_aws_exocompute resource creates an RSC Exocompute configuration for AWS
+ workloads.
+ There are 3 types of Exocompute configurations:
+ 1. RSC Managed Host - When an RSC managed host configuration is created, RSC will
+ automatically deploy the necessary resources in the specified AWS region to run the
+ Exocompute service. AWS security groups can be managed by RSC or by the customer.
+ 2. Customer Managed Host - When a customer managed host configuration is created,
+ RSC will not deploy any resources. Instead, it will use the AWS EKS cluster attached
+ by the customer, using the polaris_aws_exocompute_cluster_attachment resource, for
+ all operations.
+ 3. Application - An application configuration is created by mapping the application
+ cloud account to a host cloud account. The application cloud account will leverage
+ the Exocompute resources deployed for the host configuration.
+ Items 1 and 2 above require that the AWS account has been onboarded with the
+ EXOCOMPUTE feature.
+ Since there are 3 types of Exocompute configurations, there are 3 ways to create a
+ polaris_aws_exocompute resource:
+ 1. Using the account_id, region, vpc_id and subnets fields creates an RSC managed
+ host configuration.
+ 2. Using the account_id and region fields creates a customer managed host
+ configuration. Note that the polaris_aws_exocompute_cluster_attachment resource
+ must be used to attach an AWS EKS cluster to the Exocompute configuration.
+ 3. Using the account_id and host_account_id fields creates an application
+ configuration.
+ -> Note: Customer-managed Exocompute is sometimes referred to as Bring Your Own
+ Kubernetes (BYOK). Using both host and application Exocompute configurations is
+ sometimes referred to as shared Exocompute.
---
# polaris_aws_exocompute (Resource)
-
+The `polaris_aws_exocompute` resource creates an RSC Exocompute configuration for AWS
+workloads.
+
+There are 3 types of Exocompute configurations:
+ 1. *RSC Managed Host* - When an RSC managed host configuration is created, RSC will
+ automatically deploy the necessary resources in the specified AWS region to run the
+ Exocompute service. AWS security groups can be managed by RSC or by the customer.
+ 2. *Customer Managed Host* - When a customer managed host configuration is created,
+    RSC will not deploy any resources. Instead, it will use the AWS EKS cluster
+    attached by the customer, using the `polaris_aws_exocompute_cluster_attachment`
+    resource, for all operations.
+ 3. *Application* - An application configuration is created by mapping the application
+    cloud account to a host cloud account. The application cloud account will leverage
+    the Exocompute resources deployed for the host configuration.
+
+Items 1 and 2 above require that the AWS account has been onboarded with the
+`EXOCOMPUTE` feature.
+
+Since there are 3 types of Exocompute configurations, there are 3 ways to create a
+`polaris_aws_exocompute` resource:
+ 1. Using the `account_id`, `region`, `vpc_id` and `subnets` fields creates an RSC
+    managed host configuration.
+ 2. Using the `account_id` and `region` fields creates a customer managed host
+    configuration. Note that the `polaris_aws_exocompute_cluster_attachment` resource
+    must be used to attach an AWS EKS cluster to the Exocompute configuration.
+ 3. Using the `account_id` and `host_account_id` fields creates an application
+    configuration.
+
+-> **Note:** Customer-managed Exocompute is sometimes referred to as Bring Your Own
+ Kubernetes (BYOK). Using both host and application Exocompute configurations is
+ sometimes referred to as shared Exocompute.
## Example Usage
```terraform
-# With security groups managed by RSC.
-resource "polaris_aws_exocompute" "default" {
- account_id = polaris_aws_account.default.id
+# RSC managed Exocompute with security groups managed by RSC.
+resource "polaris_aws_exocompute" "exocompute" {
+ account_id = polaris_aws_account.account.id
region = "us-east-2"
vpc_id = "vpc-4859acb9"
@@ -25,9 +81,9 @@ resource "polaris_aws_exocompute" "default" {
]
}
-# With security groups managed by the user.
-resource "polaris_aws_exocompute" "default" {
- account_id = polaris_aws_account.default.id
+# RSC managed Exocompute with security groups managed by the customer.
+resource "polaris_aws_exocompute" "exocompute" {
+ account_id = polaris_aws_account.account.id
cluster_security_group_id = "sg-005656347687b8170"
node_security_group_id = "sg-00e147656785d7e2f"
region = "us-east-2"
@@ -39,9 +95,20 @@ resource "polaris_aws_exocompute" "default" {
]
}
-# Using the exocompute resources shared by an exocompute host.
-resource "polaris_aws_exocompute" "default" {
- account_id = polaris_aws_account.app.id
+# Customer managed Exocompute.
+resource "polaris_aws_exocompute" "exocompute" {
+ account_id = polaris_aws_account.account.id
+ region = "us-east-2"
+}
+
+resource "polaris_aws_exocompute_cluster_attachment" "cluster" {
+ cluster_name = "my-eks-cluster"
+ exocompute_id = polaris_aws_exocompute.exocompute.id
+}
+
+# Using the exocompute resources shared by an Exocompute host.
+resource "polaris_aws_exocompute" "exocompute" {
+ account_id = polaris_aws_account.account.id
host_account_id = polaris_aws_account.host.id
}
```
@@ -51,18 +118,18 @@ resource "polaris_aws_exocompute" "default" {
### Required
-- `account_id` (String) RSC account id.
+- `account_id` (String) RSC cloud account ID (UUID). Changing this forces a new resource to be created.
### Optional
-- `cluster_security_group_id` (String) AWS security group id for the cluster.
-- `host_account_id` (String) Shared exocompute host RSC account id.
-- `node_security_group_id` (String) AWS security group id for the nodes.
-- `polaris_managed` (Boolean) If true the security groups are managed by Polaris.
-- `region` (String) AWS region to run the exocompute instance in.
-- `subnets` (Set of String) AWS subnet ids for the cluster subnets.
-- `vpc_id` (String) AWS VPC id for the cluster network.
+- `cluster_security_group_id` (String) AWS security group ID for the cluster. Changing this forces a new resource to be created.
+- `host_account_id` (String) Exocompute host cloud account ID. Changing this forces a new resource to be created.
+- `node_security_group_id` (String) AWS security group ID for the nodes. Changing this forces a new resource to be created.
+- `region` (String) AWS region to run the Exocompute instance in. Changing this forces a new resource to be created.
+- `subnets` (Set of String) AWS subnet IDs for the cluster subnets. Changing this forces a new resource to be created.
+- `vpc_id` (String) AWS VPC ID for the cluster network. Changing this forces a new resource to be created.
### Read-Only
-- `id` (String) The ID of this resource.
+- `id` (String) Exocompute configuration ID (UUID).
+- `polaris_managed` (Boolean) If true the security groups are managed by RSC.
diff --git a/docs/resources/aws_exocompute_cluster_attachment.md b/docs/resources/aws_exocompute_cluster_attachment.md
index 6f22775..9200655 100644
--- a/docs/resources/aws_exocompute_cluster_attachment.md
+++ b/docs/resources/aws_exocompute_cluster_attachment.md
@@ -3,28 +3,40 @@
page_title: "polaris_aws_exocompute_cluster_attachment Resource - terraform-provider-polaris"
subcategory: ""
description: |-
-
+ The polaris_aws_exocompute_cluster_attachment resource attaches an AWS EKS cluster
+ to a customer managed host Exocompute configuration, allowing RSC to use the cluster
+ for Exocompute operations.
---
# polaris_aws_exocompute_cluster_attachment (Resource)
+The `polaris_aws_exocompute_cluster_attachment` resource attaches an AWS EKS cluster
+to a customer managed host Exocompute configuration, allowing RSC to use the cluster
+for Exocompute operations.
+## Example Usage
-
+```terraform
+resource "polaris_aws_exocompute_cluster_attachment" "attachment" {
+ cluster_name = "my-eks-cluster"
+ exocompute_id = polaris_aws_exocompute.exocompute.id
+}
+```
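+
+As an alternative to running `connection_command`, the `setup_yaml` document can be
+written to disk and applied with `kubectl apply`; a sketch using the hashicorp/local
+provider (the file name is illustrative):
+
+```terraform
+# Write the RSC-generated K8s spec to a local file for kubectl apply.
+resource "local_file" "rsc_connection" {
+  content  = polaris_aws_exocompute_cluster_attachment.attachment.setup_yaml
+  filename = "${path.module}/rsc-connection.yaml"
+}
+```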
## Schema
### Required
-- `cluster_name` (String) AWS EKS cluster name.
-- `exocompute_id` (String) RSC exocompute id.
+- `cluster_name` (String) AWS EKS cluster name. Changing this forces a new resource to be created.
+- `exocompute_id` (String) RSC exocompute configuration ID (UUID). Changing this forces a new resource to be created.
### Optional
-- `token_refresh` (Number) To force a refresh of the token, part of the connection command, increase the value of this field.
+- `token_refresh` (Number) To force a refresh of the token, part of the connection command, increase the value of this field. The token is valid for 24 hours.
### Read-Only
-- `connection_command` (String) Cluster connection command.
-- `id` (String) The ID of this resource.
+- `connection_command` (String) `kubectl` command which can be executed inside the EKS cluster to create a connection between the cluster and RSC. See setup_yaml for an alternative connection method.
+- `id` (String) RSC cluster ID (UUID).
+- `setup_yaml` (String) K8s spec which can be passed to `kubectl apply` inside the EKS cluster to create a connection between the cluster and RSC. See connection_command for an alternative connection method.
diff --git a/docs/resources/aws_private_container_registry.md b/docs/resources/aws_private_container_registry.md
index c2b6a7f..118ab81 100644
--- a/docs/resources/aws_private_container_registry.md
+++ b/docs/resources/aws_private_container_registry.md
@@ -3,18 +3,143 @@
page_title: "polaris_aws_private_container_registry Resource - terraform-provider-polaris"
subcategory: ""
description: |-
+ The polaris_aws_private_container_registry resource enables the private container
+ registry (PCR) feature for the RSC customer account. This disables the standard
+ Rubrik container registry. Once PCR has been enabled, it can only be disabled by
+ Rubrik customer support.
+ !> Note: Creating a polaris_aws_private_container_registry resource enables
+ the PCR feature for the RSC customer account. Destroying the resource will not
+ disable PCR; it can only be disabled by contacting Rubrik customer support.
+ ~> Note: Even though the polaris_aws_private_container_registry resource ID
+ is an RSC cloud account ID, there can only be a single PCR per RSC customer
+ account.
+ Exocompute Image Bundles
+ The following GraphQL query can be used to retrieve information about the image
+ bundles used by RSC for exocompute:
+ graphql
+ query ExotaskImageBundle($input: GetExotaskImageBundleInput) {
+ exotaskImageBundle(input: $input) {
+ bundleImages {
+ name
+ sha
+ tag
+ }
+ bundleVersion
+ eksVersion
+ repoUrl
+ }
+ }
+ The repoUrl field holds the URL to the RSC container registry from where the RSC
+ images can be pulled.
+ The input is an object with the following structure:
+ json
+ {
+ "input": {
+ "eksVersion": "1.29"
+ }
+ }
+
+ Where eksVersion is the version of the customer's EKS cluster. eksVersion is
+ optional; if it's not specified, it defaults to the latest EKS version supported by
+ RSC.
+ The following GraphQL mutation can be used to set the approved bundle version for
+ the RSC customer account:
+ graphql
+ mutation SetBundleApprovalStatus($input: SetBundleApprovalStatusInput!) {
+ setBundleApprovalStatus(input: $input)
+ }
+
+ The input is an object with the following structure:
+ json
+ {
+ "input": {
+ "approvalStatus": "APPROVED",
+ "bundleVersion": "1.164",
+ "bundleMetadata": {
+ "eksVersion": "1.29"
+ }
+ }
+ }
+
+ Where approvalStatus can be either APPROVED or REJECTED. bundleVersion is
+ the bundle version being approved or rejected. bundleMetadata is optional.
---
# polaris_aws_private_container_registry (Resource)
+The `polaris_aws_private_container_registry` resource enables the private container
+registry (PCR) feature for the RSC customer account. This disables the standard
+Rubrik container registry. Once PCR has been enabled, it can only be disabled by
+Rubrik customer support.
+
+!> **Note:** Creating a `polaris_aws_private_container_registry` resource enables
+ the PCR feature for the RSC customer account. Destroying the resource will not
+ disable PCR; it can only be disabled by contacting Rubrik customer support.
+
+~> **Note:** Even though the `polaris_aws_private_container_registry` resource ID
+ is an RSC cloud account ID, there can only be a single PCR per RSC customer
+ account.
+
+## Exocompute Image Bundles
+The following GraphQL query can be used to retrieve information about the image
+bundles used by RSC for exocompute:
+```graphql
+query ExotaskImageBundle($input: GetExotaskImageBundleInput) {
+ exotaskImageBundle(input: $input) {
+ bundleImages {
+ name
+ sha
+ tag
+ }
+ bundleVersion
+ eksVersion
+ repoUrl
+ }
+}
+```
+The `repoUrl` field holds the URL to the RSC container registry from where the RSC
+images can be pulled.
+The input is an object with the following structure:
+```json
+{
+ "input": {
+ "eksVersion": "1.29"
+ }
+}
+```
+Where `eksVersion` is the version of the customer's EKS cluster. `eksVersion` is
+optional; if it's not specified, it defaults to the latest EKS version supported by
+RSC.
+
+The following GraphQL mutation can be used to set the approved bundle version for
+the RSC customer account:
+```graphql
+mutation SetBundleApprovalStatus($input: SetBundleApprovalStatusInput!) {
+ setBundleApprovalStatus(input: $input)
+}
+```
+The input is an object with the following structure:
+```json
+{
+ "input": {
+ "approvalStatus": "APPROVED",
+ "bundleVersion": "1.164",
+ "bundleMetadata": {
+ "eksVersion": "1.29"
+ }
+ }
+}
+```
+Where `approvalStatus` can be either `APPROVED` or `REJECTED`. `bundleVersion` is
+the bundle version being approved or rejected. `bundleMetadata` is optional.
## Example Usage
```terraform
-resource "polaris_aws_private_container_registry" "default" {
- account_id = polaris_aws_account.default.id
+resource "polaris_aws_private_container_registry" "registry" {
+ account_id = polaris_aws_account.account.id
native_id = "123456789012"
url = "234567890121.dkr.ecr.us-east-2.amazonaws.com"
}
@@ -25,10 +150,10 @@ resource "polaris_aws_private_container_registry" "default" {
### Required
-- `account_id` (String) RSC account id
+- `account_id` (String) RSC cloud account ID (UUID). Changing this forces a new resource to be created.
- `native_id` (String) AWS account ID of the AWS account that will pull images from the RSC container registry.
- `url` (String) URL for customer provided private container registry.
### Read-Only
-- `id` (String) The ID of this resource.
+- `id` (String) RSC cloud account ID (UUID).
diff --git a/docs/resources/azure_archival_location.md b/docs/resources/azure_archival_location.md
new file mode 100644
index 0000000..4e5d0fa
--- /dev/null
+++ b/docs/resources/azure_archival_location.md
@@ -0,0 +1,107 @@
+---
+# generated by https://github.com/hashicorp/terraform-plugin-docs
+page_title: "polaris_azure_archival_location Resource - terraform-provider-polaris"
+subcategory: ""
+description: |-
+ The polaris_azure_archival_location resource creates an RSC archival location for
+ cloud-native workloads. This resource requires that the Azure subscription has been
+ onboarded with the cloud_native_archival feature.
+ When creating an archival location, the region where the snapshots are stored needs
+ to be specified:
+ * SOURCE_REGION - Store snapshots in the same region to minimize data transfer
+ charges. This is the default behaviour when the storage_account_region field is
+ not specified.
+ * SPECIFIC_REGION - Storing snapshots in another region can increase total data
+ transfer charges. The storage_account_region field specifies the region.
+ Custom storage encryption is enabled by specifying one or more customer_managed_key
+ blocks. Each customer_managed_key block specifies the encryption details to use for
+ a region. For other regions, data will be encrypted using platform managed keys.
+ -> Note: The Azure storage account is not created until the first protected object
+ is archived to the location.
+---
+
+# polaris_azure_archival_location (Resource)
+
+The `polaris_azure_archival_location` resource creates an RSC archival location for
+cloud-native workloads. This resource requires that the Azure subscription has been
+onboarded with the `cloud_native_archival` feature.
+
+When creating an archival location, the region where the snapshots are stored needs
+to be specified:
+ * `SOURCE_REGION` - Store snapshots in the same region to minimize data transfer
+ charges. This is the default behaviour when the `storage_account_region` field is
+ not specified.
+ * `SPECIFIC_REGION` - Storing snapshots in another region can increase total data
+ transfer charges. The `storage_account_region` field specifies the region.
+
+Custom storage encryption is enabled by specifying one or more `customer_managed_key`
+blocks. Each `customer_managed_key` block specifies the encryption details to use for
+a region. For other regions, data will be encrypted using platform managed keys.
+
+-> **Note:** The Azure storage account is not created until the first protected object
+ is archived to the location.
+
+## Example Usage
+
+```terraform
+# Source region.
+resource "polaris_azure_archival_location" "archival_location" {
+ cloud_account_id = polaris_azure_subscription.subscription.id
+ name = "my-archival-location"
+ storage_account_name_prefix = "archival"
+}
+
+# Source region with a customer managed key.
+resource "polaris_azure_archival_location" "archival_location" {
+ cloud_account_id = polaris_azure_subscription.subscription.id
+ name = "my-archival-location"
+ storage_account_name_prefix = "archival"
+
+ customer_managed_key {
+ name = "my-archival-key"
+ region = "eastus"
+ vault_name = "my-archival-key-vault"
+ }
+}
+
+# Specific region.
+resource "polaris_azure_archival_location" "archival_location" {
+ cloud_account_id = polaris_azure_subscription.subscription.id
+ name = "my-archival-location"
+ storage_account_name_prefix = "archival"
+ storage_account_region = "eastus2"
+}
+```
+
+
+## Schema
+
+### Required
+
+- `cloud_account_id` (String) RSC cloud account ID. Changing this forces a new resource to be created.
+- `name` (String) Cloud native archival location name.
+- `storage_account_name_prefix` (String) Azure storage account name prefix. The storage account name prefix cannot be longer than 14 characters and can only consist of numbers and lower case letters. Changing this forces a new resource to be created.
+
+### Optional
+
+- `customer_managed_key` (Block Set) Customer managed storage encryption. Specify the regions and their respective encryption details. For other regions, data will be encrypted using platform managed keys. (see [below for nested schema](#nestedblock--customer_managed_key))
+- `redundancy` (String) Azure storage redundancy. Possible values are `GRS`, `GZRS`, `LRS`, `RA_GRS`, `RA_GZRS` and `ZRS`. Default value is `LRS`. Changing this forces a new resource to be created.
+- `storage_account_region` (String) Azure region to store the snapshots in. If not specified, the snapshots will be stored in the same region as the workload. Changing this forces a new resource to be created.
+- `storage_account_tags` (Map of String) Azure storage account tags. Each tag will be added to the storage account created by RSC.
+- `storage_tier` (String) Azure storage tier. Possible values are `COOL` and `HOT`. Default value is `COOL`.
+
+### Read-Only
+
+- `connection_status` (String) Connection status of the cloud native archival location.
+- `container_name` (String) Azure storage container name.
+- `id` (String) Cloud native archival location ID (UUID).
+- `location_template` (String) RSC location template. If a storage account region was specified, it will be `SPECIFIC_REGION`, otherwise `SOURCE_REGION`.
+
+
+### Nested Schema for `customer_managed_key`
+
+Required:
+
+- `name` (String) Key name.
+- `region` (String) The region in which the key will be used. Regions without customer managed keys will use platform managed keys.
+- `vault_name` (String) Key vault name.
diff --git a/docs/resources/azure_exocompute.md b/docs/resources/azure_exocompute.md
index 6b1c623..96a4e13 100644
--- a/docs/resources/azure_exocompute.md
+++ b/docs/resources/azure_exocompute.md
@@ -3,32 +3,94 @@
page_title: "polaris_azure_exocompute Resource - terraform-provider-polaris"
subcategory: ""
description: |-
-
+ The polaris_azure_exocompute resource creates an RSC Exocompute configuration for
+ Azure workloads.
+ There are 2 types of Exocompute configurations:
+ 1. Host - When a host configuration is created, RSC will automatically deploy the
+ necessary resources in the specified Azure region to run the Exocompute service.
+ A host configuration can be used by both the host cloud account and application
+ cloud accounts mapped to the host account.
+ 2. Application - An application configuration is created by mapping the application
+ cloud account to a host cloud account. The application cloud account will leverage
+ the Exocompute resources deployed for the host configuration.
+ Item 1 above requires that the Azure subscription has been onboarded with the
+ exocompute feature.
+ Since there are 2 types of Exocompute configurations, there are 2 ways to create a
+ polaris_azure_exocompute resource:
+ 1. Using the cloud_account_id, region, subnet and pod_overlay_network_cidr
+ fields. This creates a host configuration.
+ 2. Using the cloud_account_id and host_cloud_account_id fields. This creates an
+ application configuration.
+ ~> Note: A host configuration can be created without specifying the
+ pod_overlay_network_cidr field, but this is discouraged and should only be done
+ for backwards compatibility reasons.
+ -> Note: Customer-managed Exocompute is sometimes referred to as Bring Your Own
+ Kubernetes (BYOK). Using both host and application Exocompute configurations is
+ sometimes referred to as shared Exocompute.
---
# polaris_azure_exocompute (Resource)
+The `polaris_azure_exocompute` resource creates an RSC Exocompute configuration for
+Azure workloads.
+There are 2 types of Exocompute configurations:
+ 1. *Host* - When a host configuration is created, RSC will automatically deploy the
+ necessary resources in the specified Azure region to run the Exocompute service.
+ A host configuration can be used by both the host cloud account and application
+ cloud accounts mapped to the host account.
+ 2. *Application* - An application configuration is created by mapping the application
+ cloud account to a host cloud account. The application cloud account will leverage
+ the Exocompute resources deployed for the host configuration.
+
+Item 1 above requires that the Azure subscription has been onboarded with the
+`exocompute` feature.
+
+Since there are 2 types of Exocompute configurations, there are 2 ways to create a
+`polaris_azure_exocompute` resource:
+ 1. Using the `cloud_account_id`, `region`, `subnet` and `pod_overlay_network_cidr`
+ fields. This creates a host configuration.
+ 2. Using the `cloud_account_id` and `host_cloud_account_id` fields. This creates an
+ application configuration.
+
+~> **Note:** A host configuration can be created without specifying the
+  `pod_overlay_network_cidr` field, but this is discouraged and should only be
+  done for backwards compatibility reasons.
+
+-> **Note:** Customer-managed Exocompute is sometimes referred to as Bring Your Own
+ Kubernetes (BYOK). Using both host and application Exocompute configurations is
+ sometimes referred to as shared Exocompute.
## Example Usage
```terraform
-resource "polaris_azure_exocompute" "default" {
- subscription_id = polaris_azure_subscription.default.id
- region = "eastus2"
- subnet_id = "/subscriptions/65774f88-da6a-11eb-bc8f-e798f8b54eba/resourceGroups/test/providers/Microsoft.Network/virtualNetworks/test/subnets/default"
+# Host configuration.
+resource "polaris_azure_exocompute" "host_exocompute" {
+ cloud_account_id = polaris_azure_subscription.host_subscription.id
+ pod_overlay_network_cidr = "10.244.0.0/16"
+ region = "eastus2"
+ subnet = "/subscriptions/65774f88-da6a-11eb-bc8f-e798f8b54eba/resourceGroups/test/providers/Microsoft.Network/virtualNetworks/test/subnets/default"
+}
+
+# Application configuration.
+resource "polaris_azure_exocompute" "app_exocompute" {
+ cloud_account_id = polaris_azure_subscription.app_subscription.id
+ host_cloud_account_id = polaris_azure_subscription.host_subscription.id
}
```
## Schema
-### Required
+### Optional
-- `region` (String) Azure region to run the exocompute instance in.
-- `subnet` (String) Azure subnet id.
-- `subscription_id` (String) RSC subscription id
+- `cloud_account_id` (String) RSC cloud account ID. This is the ID of the `polaris_azure_subscription` resource for which the Exocompute service runs. Changing this forces a new resource to be created.
+- `host_cloud_account_id` (String) RSC cloud account ID of the shared exocompute host account. Changing this forces a new resource to be created.
+- `pod_overlay_network_cidr` (String) The CIDR range assigned to pods when launching Exocompute with the CNI overlay network plugin mode. Changing this forces a new resource to be created.
+- `region` (String) Azure region to run the exocompute service in. Should be specified in the standard Azure style, e.g. `eastus`. Changing this forces a new resource to be created.
+- `subnet` (String) Azure subnet ID of the cluster subnet corresponding to the Exocompute configuration. This subnet will be used to allocate IP addresses to the nodes of the cluster. Changing this forces a new resource to be created.
+- `subscription_id` (String, Deprecated) RSC cloud account ID. This is the ID of the `polaris_azure_subscription` resource for which the Exocompute service runs. Changing this forces a new resource to be created. **Deprecated:** use `cloud_account_id` instead.
### Read-Only
-- `id` (String) The ID of this resource.
+- `id` (String) Exocompute configuration ID (UUID).
diff --git a/docs/resources/azure_service_principal.md b/docs/resources/azure_service_principal.md
index c1db643..5f188fd 100644
--- a/docs/resources/azure_service_principal.md
+++ b/docs/resources/azure_service_principal.md
@@ -3,34 +3,76 @@
page_title: "polaris_azure_service_principal Resource - terraform-provider-polaris"
subcategory: ""
description: |-
-
+ The polaris_azure_service_principal resource adds an Azure service principal to
+ RSC. A service principal must be added for each Azure tenant before subscriptions
+ for the tenants can be added to RSC.
+ There are 3 ways to create a polaris_azure_service_principal resource:
+ 1. Using the app_id, app_name, app_secret, tenant_id and tenant_domain
+ fields.
+ 2. Using the credentials field which is the path to a custom service principal
+ file. A description of the custom format can be found
+ here https://github.com/rubrikinc/rubrik-polaris-sdk-for-go?tab=readme-ov-file#azure-credentials.
+ 3. Using the sdk_auth field which is the path to an Azure service principal
+ created with the Azure SDK using the --sdk-auth parameter.
+ ~> Note: Removing the last subscription from an RSC tenant will automatically
+ remove the tenant, which also removes the service principal.
+ ~> Note: Destroying the polaris_azure_service_principal resource only updates
+ the local state, it does not remove the service principal from RSC. However,
+ creating another polaris_azure_service_principal resource for the same Azure
+ tenant will overwrite the old service principal in RSC.
+ -> Note: There is no way to verify if a service principal has been added to RSC
+ using the UI. RSC tenants don't show up in the UI until the first subscription is
+ added.
---
# polaris_azure_service_principal (Resource)
+The `polaris_azure_service_principal` resource adds an Azure service principal to
+RSC. A service principal must be added for each Azure tenant before subscriptions
+for the tenants can be added to RSC.
+There are 3 ways to create a `polaris_azure_service_principal` resource:
+ 1. Using the `app_id`, `app_name`, `app_secret`, `tenant_id` and `tenant_domain`
+ fields.
+ 2. Using the `credentials` field which is the path to a custom service principal
+ file. A description of the custom format can be found
+ [here](https://github.com/rubrikinc/rubrik-polaris-sdk-for-go?tab=readme-ov-file#azure-credentials).
+ 3. Using the `sdk_auth` field which is the path to an Azure service principal
+ created with the Azure SDK using the `--sdk-auth` parameter.
+
+~> **Note:** Removing the last subscription from an RSC tenant will automatically
+ remove the tenant, which also removes the service principal.
+
+~> **Note:** Destroying the `polaris_azure_service_principal` resource only updates
+ the local state, it does not remove the service principal from RSC. However,
+ creating another `polaris_azure_service_principal` resource for the same Azure
+ tenant will overwrite the old service principal in RSC.
+
+-> **Note:** There is no way to verify if a service principal has been added to RSC
+ using the UI. RSC tenants don't show up in the UI until the first subscription is
+ added.
## Example Usage
```terraform
-# With service principal file.
+# With custom service principal file.
resource "polaris_azure_service_principal" "default" {
credentials = "${path.module}/service-principal.json"
tenant_domain = "mydomain.onmicrosoft.com"
}
-# With service principal created with the Azure SDK using the --sdk-auth
-# parameter
+# With a service principal created using the Azure SDK and the
+# --sdk-auth parameter.
resource "polaris_azure_service_principal" "default" {
sdk_auth = "${path.module}/sdk-service-principal.json"
tenant_domain = "mydomain.onmicrosoft.com"
}
-# Without service principal file.
+# Without a service principal file.
resource "polaris_azure_service_principal" "default" {
app_id = "25c2b42a-c76b-11eb-9767-6ff6b5b7e72b"
app_name = "My App"
- app_secret = ""
+ app_secret = ""
tenant_domain = "mydomain.onmicrosoft.com"
tenant_id = "2bfdaef8-c76b-11eb-8d3d-4706c14a88f0"
}
@@ -41,18 +83,19 @@ resource "polaris_azure_service_principal" "default" {
### Required
-- `tenant_domain` (String) Tenant directory/domain name.
+- `tenant_domain` (String) Azure tenant primary domain. Changing this forces a new resource to be created.
### Optional
-- `app_id` (String) App registration application id.
-- `app_name` (String) App registration display name.
-- `app_secret` (String, Sensitive) App registration client secret.
-- `credentials` (String) Path to Azure service principal file.
-- `permissions_hash` (String) Signals that the permissions has been updated.
-- `sdk_auth` (String) Path to Azure service principal created with the Azure SDK using the --sdk-auth parameter
-- `tenant_id` (String) Tenant/domain id.
+- `app_id` (String) Azure app registration application ID. Also known as the client ID. Changing this forces a new resource to be created.
+- `app_name` (String) Azure app registration display name. Changing this forces a new resource to be created.
+- `app_secret` (String, Sensitive) Azure app registration client secret. Changing this forces a new resource to be created.
+- `credentials` (String) Path to a custom service principal file. Changing this forces a new resource to be created.
+- `permissions` (String, Deprecated) Permissions updated signal. When this field is updated, the provider will notify RSC that permissions have been updated. Use this field with the `polaris_azure_permissions` data source. **Deprecated:** use the `polaris_azure_subscription` resource's `permissions` fields instead.
+- `permissions_hash` (String, Deprecated) Permissions updated signal. **Deprecated:** use `permissions` instead.
+- `sdk_auth` (String) Path to an Azure service principal created with the Azure SDK using the `--sdk-auth` parameter. Changing this forces a new resource to be created.
+- `tenant_id` (String) Azure tenant ID. Also known as the directory ID. Changing this forces a new resource to be created.
### Read-Only
-- `id` (String) The ID of this resource.
+- `id` (String) Azure app registration application ID (UUID). Also known as the client ID. Note, this might change in the future, use the `app_id` field to reference the application ID in configurations.
diff --git a/docs/resources/azure_subscription.md b/docs/resources/azure_subscription.md
index ad94f0c..c0b8606 100644
--- a/docs/resources/azure_subscription.md
+++ b/docs/resources/azure_subscription.md
@@ -3,44 +3,139 @@
page_title: "polaris_azure_subscription Resource - terraform-provider-polaris"
subcategory: ""
description: |-
-
+ The polaris_azure_subscription resource adds an Azure subscription to RSC. When
+ the first subscription for an Azure tenant is added, a corresponding tenant is
+ created in RSC. The RSC tenant is automatically destroyed when its last subscription
+ is removed.
+ Any combination of different RSC features can be enabled for a subscription:
+ 1. cloud_native_archival - Provides archival of data from data center workloads
+ for disaster recovery and long-term retention.
+ 2. cloud_native_archival_encryption - Allows cloud archival locations to be
+ encrypted with customer managed keys.
+ 3. cloud_native_protection - Provides protection for Azure virtual machines and
+ managed disks through the rules and policies of SLA Domains.
+ 4. exocompute - Provides snapshot indexing, file recovery, storage tiering, and
+ application-consistent protection of Azure objects.
+ 5. sql_db_protection - Provides centralized database backup management and
+ recovery in an Azure SQL Database deployment.
+ 6. sql_mi_protection - Provides centralized database backup management and
+ recovery for an Azure SQL Managed Instance deployment.
+ Each feature's permissions field can be used with the polaris_azure_permissions
+ data source to inform RSC about permission updates when the Terraform configuration
+ is applied.
+ ~> Note: Even though the resource_group_name and the resource_group_region
+ fields are marked as optional, you should always specify them. They are marked as
+ optional to simplify the migration of existing Terraform configurations. If
+ omitted, RSC will generate a unique resource group name but it will not create
+ the actual resource group. Until the resource group is created, the RSC feature
+ depending on the resource group will not function as expected.
+ ~> Note: As mentioned in the documentation for each feature below, changing
+ certain fields causes features to be re-onboarded. Take care when the subscription
+ only has a single feature, as it could cause the tenant to be removed from RSC.
+ -> Note: As of now, sql_db_protection and sql_mi_protection do not support
+ specifying an Azure resource group.
---
# polaris_azure_subscription (Resource)
+The `polaris_azure_subscription` resource adds an Azure subscription to RSC. When
+the first subscription for an Azure tenant is added, a corresponding tenant is
+created in RSC. The RSC tenant is automatically destroyed when its last subscription
+is removed.
+Any combination of different RSC features can be enabled for a subscription:
+ 1. `cloud_native_archival` - Provides archival of data from data center workloads
+ for disaster recovery and long-term retention.
+ 2. `cloud_native_archival_encryption` - Allows cloud archival locations to be
+ encrypted with customer managed keys.
+ 3. `cloud_native_protection` - Provides protection for Azure virtual machines and
+ managed disks through the rules and policies of SLA Domains.
+ 4. `exocompute` - Provides snapshot indexing, file recovery, storage tiering, and
+ application-consistent protection of Azure objects.
+ 5. `sql_db_protection` - Provides centralized database backup management and
+ recovery in an Azure SQL Database deployment.
+ 6. `sql_mi_protection` - Provides centralized database backup management and
+ recovery for an Azure SQL Managed Instance deployment.
+
+Each feature's `permissions` field can be used with the `polaris_azure_permissions`
+data source to inform RSC about permission updates when the Terraform configuration
+is applied.
+
+~> **Note:** Even though the `resource_group_name` and the `resource_group_region`
+ fields are marked as optional, you should always specify them. They are marked as
+ optional to simplify the migration of existing Terraform configurations. If
+ omitted, RSC will generate a unique resource group name but it will not create
+ the actual resource group. Until the resource group is created, the RSC feature
+ depending on the resource group will not function as expected.
+
+~> **Note:** As mentioned in the documentation for each feature below, changing
+ certain fields causes features to be re-onboarded. Take care when the subscription
+ only has a single feature, as it could cause the tenant to be removed from RSC.
+
+-> **Note:** As of now, `sql_db_protection` and `sql_mi_protection` do not support
+  specifying an Azure resource group.
## Example Usage
```terraform
-# Enable Cloud Native Protection
-resource "polaris_azure_subscription" "default" {
+# Enable the Cloud Native Protection feature for the EastUS2 region.
+resource "polaris_azure_subscription" "subscription" {
subscription_id = "31be1bb0-c76c-11eb-9217-afdffe83a002"
- tenant_domain = "mydomain.onmicrosoft.com"
+ tenant_domain = "my-domain.onmicrosoft.com"
cloud_native_protection {
regions = [
"eastus2",
]
+ resource_group_name = "my-resource-group"
+ resource_group_region = "eastus2"
}
}
-# Enable Cloud Native Protection and Exocompte.
-resource "polaris_azure_subscription" "default" {
+# Enable the Cloud Native Protection feature for the EastUS2 and the
+# WestUS2 regions and the Exocompute feature for the EastUS2 region.
+resource "polaris_azure_subscription" "subscription" {
subscription_id = "31be1bb0-c76c-11eb-9217-afdffe83a002"
- tenant_domain = "mydomain.onmicrosoft.com"
+ tenant_domain = "my-domain.onmicrosoft.com"
cloud_native_protection {
regions = [
"eastus2",
"westus2",
]
+ resource_group_name = "my-west-resource-group"
+ resource_group_region = "westus2"
+ resource_group_tags = {
+ environment = "production"
+ }
}
exocompute {
regions = [
"eastus2",
]
+ resource_group_name = "my-east-resource-group"
+ resource_group_region = "eastus2"
+ }
+}
+
+# Using the polaris_azure_permissions data source to inform RSC about
+# permission updates for the feature.
+data "polaris_azure_permissions" "exocompute" {
+ feature = "EXOCOMPUTE"
+}
+
+resource "polaris_azure_subscription" "default" {
+ subscription_id = "31be1bb0-c76c-11eb-9217-afdffe83a002"
+ tenant_domain = "my-domain.onmicrosoft.com"
+
+ exocompute {
+ permissions = data.polaris_azure_permissions.exocompute.id
+ regions = [
+ "eastus2",
+ ]
+ resource_group_name = "my-resource-group"
+ resource_group_region = "eastus2"
}
}
```
@@ -50,26 +145,79 @@ resource "polaris_azure_subscription" "default" {
### Required
-- `cloud_native_protection` (Block List, Min: 1, Max: 1) Enable the Cloud Native Protection feature for the GCP project. (see [below for nested schema](#nestedblock--cloud_native_protection))
-- `subscription_id` (String) Subscription id.
-- `tenant_domain` (String) Tenant directory/domain name.
+- `subscription_id` (String) Azure subscription ID. Changing this forces a new resource to be created.
+- `tenant_domain` (String) Azure tenant primary domain. Changing this forces a new resource to be created.
### Optional
-- `delete_snapshots_on_destroy` (Boolean) Should snapshots be deleted when the resource is destroyed.
-- `exocompute` (Block List, Max: 1) Enable the exocompute feature for the account. (see [below for nested schema](#nestedblock--exocompute))
-- `subscription_name` (String) Subscription name.
+- `cloud_native_archival` (Block List, Max: 1) Enable the RSC Cloud Native Archival feature for the Azure subscription. (see [below for nested schema](#nestedblock--cloud_native_archival))
+- `cloud_native_archival_encryption` (Block List, Max: 1) Enable the RSC Cloud Native Archival Encryption feature for the Azure subscription. (see [below for nested schema](#nestedblock--cloud_native_archival_encryption))
+- `cloud_native_protection` (Block List, Max: 1) Enable the RSC Cloud Native Protection feature for the Azure subscription. (see [below for nested schema](#nestedblock--cloud_native_protection))
+- `delete_snapshots_on_destroy` (Boolean) Should snapshots be deleted when the resource is destroyed. Default value is `false`.
+- `exocompute` (Block List, Max: 1) Enable the RSC Exocompute feature for the Azure subscription. (see [below for nested schema](#nestedblock--exocompute))
+- `sql_db_protection` (Block List, Max: 1) Enable the RSC SQL DB Protection feature for the Azure subscription. (see [below for nested schema](#nestedblock--sql_db_protection))
+- `sql_mi_protection` (Block List, Max: 1) Enable the RSC SQL MI Protection feature for the Azure subscription. (see [below for nested schema](#nestedblock--sql_mi_protection))
+- `subscription_name` (String) Azure subscription name.
### Read-Only
-- `id` (String) The ID of this resource.
+- `id` (String) RSC cloud account ID (UUID).
+
+
+### Nested Schema for `cloud_native_archival`
+
+Required:
+
+- `regions` (Set of String) Azure regions to enable the Cloud Native Archival feature in. Should be specified in the standard Azure style, e.g. `eastus`.
+
+Optional:
+
+- `permissions` (String) Permissions updated signal. When this field changes, the provider will notify RSC that the permissions for the feature have been updated. Use this field with the `polaris_azure_permissions` data source.
+- `resource_group_name` (String) Name of the Azure resource group where RSC places all resources created by the feature. RSC assumes the resource group already exists. Changing this forces the RSC feature to be re-onboarded.
+- `resource_group_region` (String) Region of the Azure resource group. Should be specified in the standard Azure style, e.g. `eastus`. Changing this forces the RSC feature to be re-onboarded.
+- `resource_group_tags` (Map of String) Tags to add to the Azure resource group. Changing this forces the RSC feature to be re-onboarded.
+
+Read-Only:
+
+- `status` (String) Status of the Cloud Native Archival feature.
+
+
+
+### Nested Schema for `cloud_native_archival_encryption`
+
+Required:
+
+- `regions` (Set of String) Azure regions to enable the Cloud Native Archival Encryption feature in. Should be specified in the standard Azure style, e.g. `eastus`.
+- `user_assigned_managed_identity_name` (String) User-assigned managed identity name.
+- `user_assigned_managed_identity_principal_id` (String) ID of the service principal object associated with the user-assigned managed identity.
+- `user_assigned_managed_identity_region` (String) User-assigned managed identity region. Should be specified in the standard Azure style, e.g. `eastus`.
+- `user_assigned_managed_identity_resource_group_name` (String) User-assigned managed identity resource group name.
+
+Optional:
+
+- `permissions` (String) Permissions updated signal. When this field changes, the provider will notify RSC that the permissions for the feature have been updated. Use this field with the `polaris_azure_permissions` data source.
+- `resource_group_name` (String) Name of the Azure resource group where RSC places all resources created by the feature. RSC assumes the resource group already exists. Changing this forces the RSC feature to be re-onboarded.
+- `resource_group_region` (String) Region of the Azure resource group. Should be specified in the standard Azure style, e.g. `eastus`. Changing this forces the RSC feature to be re-onboarded.
+- `resource_group_tags` (Map of String) Tags to add to the Azure resource group. Changing this forces the RSC feature to be re-onboarded.
+
+Read-Only:
+
+- `status` (String) Status of the Cloud Native Archival Encryption feature.
+
### Nested Schema for `cloud_native_protection`
Required:
-- `regions` (Set of String) Regions that Polaris will monitor for instances to automatically protect.
+- `regions` (Set of String) Azure regions that RSC will monitor for resources to protect according to SLA Domains. Should be specified in the standard Azure style, e.g. `eastus`.
+
+Optional:
+
+- `permissions` (String) Permissions updated signal. When this field changes, the provider will notify RSC that the permissions for the feature have been updated. Use this field with the `polaris_azure_permissions` data source.
+- `resource_group_name` (String) Name of the Azure resource group where RSC places all resources created by the feature. RSC assumes the resource group already exists. Changing this forces the RSC feature to be re-onboarded.
+- `resource_group_region` (String) Region of the Azure resource group. Should be specified in the standard Azure style, e.g. `eastus`. Changing this forces the RSC feature to be re-onboarded.
+- `resource_group_tags` (Map of String) Tags to add to the Azure resource group. Changing this forces the RSC feature to be re-onboarded.
Read-Only:
@@ -81,8 +229,47 @@ Read-Only:
Required:
-- `regions` (Set of String) Regions to enable the exocompute feature in.
+- `regions` (Set of String) Azure regions to enable the Exocompute feature in. Should be specified in the standard Azure style, e.g. `eastus`.
+
+Optional:
+
+- `permissions` (String) Permissions updated signal. When this field changes, the provider will notify RSC that the permissions for the feature have been updated. Use this field with the `polaris_azure_permissions` data source.
+- `resource_group_name` (String) Name of the Azure resource group where RSC places all resources created by the feature. RSC assumes the resource group already exists. Changing this forces the RSC feature to be re-onboarded.
+- `resource_group_region` (String) Region of the Azure resource group. Should be specified in the standard Azure style, e.g. `eastus`. Changing this forces the RSC feature to be re-onboarded.
+- `resource_group_tags` (Map of String) Tags to add to the Azure resource group. Changing this forces the RSC feature to be re-onboarded.
Read-Only:
- `status` (String) Status of the Exocompute feature.
+
+
+
+### Nested Schema for `sql_db_protection`
+
+Required:
+
+- `regions` (Set of String) Azure regions to enable the SQL DB Protection feature in. Should be specified in the standard Azure style, e.g. `eastus`.
+
+Optional:
+
+- `permissions` (String) Permissions updated signal. When this field changes, the provider will notify RSC that the permissions for the feature have been updated. Use this field with the `polaris_azure_permissions` data source.
+
+Read-Only:
+
+- `status` (String) Status of the SQL DB Protection feature.
+
+
+
+### Nested Schema for `sql_mi_protection`
+
+Required:
+
+- `regions` (Set of String) Azure regions to enable the SQL MI Protection feature in. Should be specified in the standard Azure style, e.g. `eastus`.
+
+Optional:
+
+- `permissions` (String) Permissions updated signal. When this field changes, the provider will notify RSC that the permissions for the feature have been updated. Use this field with the `polaris_azure_permissions` data source.
+
+Read-Only:
+
+- `status` (String) Status of the SQL MI Protection feature.
diff --git a/docs/resources/cdm_bootstrap.md b/docs/resources/cdm_bootstrap.md
index 7e6fd4a..c014d97 100644
--- a/docs/resources/cdm_bootstrap.md
+++ b/docs/resources/cdm_bootstrap.md
@@ -7,6 +7,9 @@ description: |-
# polaris_cdm_bootstrap (Resource)
+
+
+
## Example Usage
```terraform
@@ -28,6 +31,7 @@ resource "polaris_cdm_bootstrap" "default" {
}
```
+
## Schema
### Required
diff --git a/docs/resources/cdm_bootstrap_cces_aws.md b/docs/resources/cdm_bootstrap_cces_aws.md
index db3e0d9..d859c1d 100644
--- a/docs/resources/cdm_bootstrap_cces_aws.md
+++ b/docs/resources/cdm_bootstrap_cces_aws.md
@@ -2,11 +2,14 @@
page_title: "polaris_cdm_bootstrap_cces_aws Resource - terraform-provider-polaris"
subcategory: ""
description: |-
-
+
---
# polaris_cdm_bootstrap_cces_aws (Resource)
+
+
+
## Example Usage
```terraform
@@ -30,6 +33,7 @@ resource "polaris_cdm_bootstrap_cces_aws" "default" {
}
```
+
## Schema
### Required
diff --git a/docs/resources/cdm_bootstrap_cces_azure.md b/docs/resources/cdm_bootstrap_cces_azure.md
index f95dc1c..94489fe 100644
--- a/docs/resources/cdm_bootstrap_cces_azure.md
+++ b/docs/resources/cdm_bootstrap_cces_azure.md
@@ -2,11 +2,14 @@
page_title: "polaris_cdm_bootstrap_cces_azure Resource - terraform-provider-polaris"
subcategory: ""
description: |-
-
+
---
# polaris_cdm_bootstrap_cces_azure (Resource)
+
+
+
## Example Usage
```terraform
@@ -30,6 +33,7 @@ resource "polaris_cdm_bootstrap_cces_azure" "default" {
}
```
+
## Schema
### Required
diff --git a/docs/resources/custom_role.md b/docs/resources/custom_role.md
index 4a14cb9..36cb172 100644
--- a/docs/resources/custom_role.md
+++ b/docs/resources/custom_role.md
@@ -3,12 +3,12 @@
page_title: "polaris_custom_role Resource - terraform-provider-polaris"
subcategory: ""
description: |-
-
+ The polaris_custom_role resource is used to manage custom roles in RSC.
---
# polaris_custom_role (Resource)
-
+The `polaris_custom_role` resource is used to manage custom roles in RSC.
## Example Usage
@@ -79,7 +79,7 @@ resource "polaris_custom_role" "compliance_auditor" {
### Read-Only
-- `id` (String) The ID of this resource.
+- `id` (String) Role ID (UUID).
### Nested Schema for `permission`
@@ -87,7 +87,7 @@ resource "polaris_custom_role" "compliance_auditor" {
Required:
- `hierarchy` (Block Set, Min: 1) Snappable hierarchy. (see [below for nested schema](#nestedblock--permission--hierarchy))
-- `operation` (String) Operation to allow on object ids under the snappable hierarchy.
+- `operation` (String) Operation to allow on object IDs under the snappable hierarchy.
### Nested Schema for `permission.hierarchy`
diff --git a/docs/resources/role_assignment.md b/docs/resources/role_assignment.md
index 4f6993c..f9c0764 100644
--- a/docs/resources/role_assignment.md
+++ b/docs/resources/role_assignment.md
@@ -3,12 +3,12 @@
page_title: "polaris_role_assignment Resource - terraform-provider-polaris"
subcategory: ""
description: |-
-
+ The polaris_role_assignment resource is used to assign roles to users in RSC.
---
# polaris_role_assignment (Resource)
-
+The `polaris_role_assignment` resource is used to assign roles to users in RSC.
## Example Usage
@@ -31,9 +31,9 @@ resource "polaris_role_assignment" "compliance_auditor" {
### Required
-- `role_id` (String) Role identifier.
-- `user_email` (String) User email address.
+- `role_id` (String) Role ID (UUID). Changing this forces a new resource to be created.
+- `user_email` (String) User email address. Changing this forces a new resource to be created.
### Read-Only
-- `id` (String) The ID of this resource.
+- `id` (String) SHA-256 hash of the user email and the role ID.
diff --git a/docs/resources/user.md b/docs/resources/user.md
index 3f02436..4efd5a2 100644
--- a/docs/resources/user.md
+++ b/docs/resources/user.md
@@ -3,12 +3,12 @@
page_title: "polaris_user Resource - terraform-provider-polaris"
subcategory: ""
description: |-
-
+ The polaris_user resource is used to manage users in RSC.
---
# polaris_user (Resource)
-
+The `polaris_user` resource is used to manage users in RSC.
## Example Usage
@@ -27,11 +27,11 @@ resource "polaris_user" "auditor" {
### Required
-- `email` (String) User email address.
-- `role_ids` (Set of String) Roles assigned to the user.
+- `email` (String) User email address. Changing this forces a new resource to be created.
+- `role_ids` (Set of String) Roles assigned to the user (UUIDs).
### Read-Only
-- `id` (String) The ID of this resource.
+- `id` (String) User email address.
- `is_account_owner` (Boolean) True if the user is the account owner.
- `status` (String) User status.
diff --git a/examples/data-sources/polaris_account/data-source.tf b/examples/data-sources/polaris_account/data-source.tf
new file mode 100644
index 0000000..ad6f264
--- /dev/null
+++ b/examples/data-sources/polaris_account/data-source.tf
@@ -0,0 +1,17 @@
+# Output the features enabled for the RSC account.
+data "polaris_account" "account" {}
+
+output "features" {
+ value = data.polaris_account.account.features
+}
+
+# Using the fqdn field from the account data source to create an Azure
+# AD application.
+
+resource "azuread_application" "app" {
+ display_name = "Rubrik Security Cloud Integration"
+ web {
+ homepage_url = "https://${data.polaris_account.account.fqdn}/setup_azure"
+ }
+}
diff --git a/examples/data-sources/polaris_aws_account/data-source.tf b/examples/data-sources/polaris_aws_account/data-source.tf
new file mode 100644
index 0000000..db9131c
--- /dev/null
+++ b/examples/data-sources/polaris_aws_account/data-source.tf
@@ -0,0 +1,7 @@
+data "polaris_aws_account" "example" {
+ name = "example"
+}
+
+output "example_aws_account" {
+ value = data.polaris_aws_account.example
+}
diff --git a/examples/data-sources/polaris_aws_archival_location/data-source.tf b/examples/data-sources/polaris_aws_archival_location/data-source.tf
index f8c2663..07bbf24 100644
--- a/examples/data-sources/polaris_aws_archival_location/data-source.tf
+++ b/examples/data-sources/polaris_aws_archival_location/data-source.tf
@@ -1,6 +1,6 @@
# Using the archival location ID.
data "polaris_aws_archival_location" "location" {
- archival_location_id = "db34f042-79ea-48b1-bab8-c40dfbf2ab82"
+ id = "db34f042-79ea-48b1-bab8-c40dfbf2ab82"
}
# Using the name.
diff --git a/examples/data-sources/polaris_aws_cnp_artifacts/data-source.tf b/examples/data-sources/polaris_aws_cnp_artifacts/data-source.tf
index ee97d20..f4173b6 100644
--- a/examples/data-sources/polaris_aws_cnp_artifacts/data-source.tf
+++ b/examples/data-sources/polaris_aws_cnp_artifacts/data-source.tf
@@ -1,3 +1,41 @@
+# Permission groups default to BASIC.
data "polaris_aws_cnp_artifacts" "artifacts" {
- features = ["CLOUD_NATIVE_PROTECTION"]
+ feature {
+ name = "CLOUD_NATIVE_PROTECTION"
+ }
+}
+
+# Multiple permission groups. When permission groups are specified,
+# the BASIC permission group must always be included.
+data "polaris_aws_cnp_artifacts" "artifacts" {
+ feature {
+ name = "CLOUD_NATIVE_PROTECTION"
+
+ permission_groups = [
+ "BASIC",
+ "EXPORT_AND_RESTORE",
+ "FILE_LEVEL_RECOVERY",
+ ]
+ }
+}
+
+# Multiple features with permission groups.
+data "polaris_aws_cnp_artifacts" "artifacts" {
+ feature {
+ name = "CLOUD_NATIVE_ARCHIVAL"
+
+ permission_groups = [
+ "BASIC",
+ ]
+ }
+
+ feature {
+ name = "CLOUD_NATIVE_PROTECTION"
+
+ permission_groups = [
+ "BASIC",
+ "EXPORT_AND_RESTORE",
+ "FILE_LEVEL_RECOVERY",
+ ]
+ }
}
diff --git a/examples/data-sources/polaris_aws_cnp_permissions/data-source.tf b/examples/data-sources/polaris_aws_cnp_permissions/data-source.tf
index dde0ca0..c0d9388 100644
--- a/examples/data-sources/polaris_aws_cnp_permissions/data-source.tf
+++ b/examples/data-sources/polaris_aws_cnp_permissions/data-source.tf
@@ -1,6 +1,34 @@
+data "polaris_aws_cnp_artifacts" "artifacts" {
+ feature {
+ name = "CLOUD_NATIVE_ARCHIVAL"
+
+ permission_groups = [
+ "BASIC",
+ ]
+ }
+
+ feature {
+ name = "CLOUD_NATIVE_PROTECTION"
+
+ permission_groups = [
+ "BASIC",
+ "EXPORT_AND_RESTORE",
+ ]
+ }
+}
+
+# Look up the required permissions using the output from the
+# artifacts data source.
data "polaris_aws_cnp_permissions" "permissions" {
for_each = data.polaris_aws_cnp_artifacts.artifacts.role_keys
cloud = data.polaris_aws_cnp_artifacts.artifacts.cloud
- features = data.polaris_aws_cnp_artifacts.artifacts.features
role_key = each.key
+
+ dynamic "feature" {
+ for_each = data.polaris_aws_cnp_artifacts.artifacts.feature
+ content {
+ name = feature.value["name"]
+ permission_groups = feature.value["permission_groups"]
+ }
+ }
}
diff --git a/examples/data-sources/polaris_azure_archival_location/data-source.tf b/examples/data-sources/polaris_azure_archival_location/data-source.tf
new file mode 100644
index 0000000..96d65df
--- /dev/null
+++ b/examples/data-sources/polaris_azure_archival_location/data-source.tf
@@ -0,0 +1,9 @@
+# Using the archival location ID.
+data "polaris_azure_archival_location" "archival_location" {
+ id = "db34f042-79ea-48b1-bab8-c40dfbf2ab82"
+}
+
+# Using the archival location name.
+data "polaris_azure_archival_location" "archival_location" {
+ name = "my-archival-location"
+}
diff --git a/examples/data-sources/polaris_azure_permissions/data-source.tf b/examples/data-sources/polaris_azure_permissions/data-source.tf
index c99650c..549976b 100644
--- a/examples/data-sources/polaris_azure_permissions/data-source.tf
+++ b/examples/data-sources/polaris_azure_permissions/data-source.tf
@@ -1,5 +1,24 @@
-data "polaris_azure_permissions" "default" {
- features = [
- "CLOUD_NATIVE_PROTECTION",
- ]
+# Permissions required for the Cloud Native Protection RSC feature.
+data "polaris_azure_permissions" "cloud_native_protection" {
+ feature = "CLOUD_NATIVE_PROTECTION"
+}
+
+# Permissions required for the Exocompute RSC feature. The subscription
+# is set up to notify RSC when the permissions are updated for the feature.
+data "polaris_azure_permissions" "exocompute" {
+ feature = "EXOCOMPUTE"
+}
+
+resource "polaris_azure_subscription" "subscription" {
+ subscription_id = "31be1bb0-c76c-11eb-9217-afdffe83a002"
+ tenant_domain = "my-domain.onmicrosoft.com"
+
+ exocompute {
+ permissions = data.polaris_azure_permissions.exocompute.id
+ regions = [
+ "eastus2",
+ ]
+ resource_group_name = "my-east-resource-group"
+ resource_group_region = "eastus2"
+ }
}
diff --git a/examples/data-sources/polaris_azure_subscription/data-source.tf b/examples/data-sources/polaris_azure_subscription/data-source.tf
new file mode 100644
index 0000000..6abd742
--- /dev/null
+++ b/examples/data-sources/polaris_azure_subscription/data-source.tf
@@ -0,0 +1,7 @@
+data "polaris_azure_subscription" "example" {
+ name = "example"
+}
+
+output "example_azure_subscription" {
+ value = data.polaris_azure_subscription.example
+}
diff --git a/examples/data-sources/polaris_deployment/data-source.tf b/examples/data-sources/polaris_deployment/data-source.tf
index 1da2e73..bfba4f9 100644
--- a/examples/data-sources/polaris_deployment/data-source.tf
+++ b/examples/data-sources/polaris_deployment/data-source.tf
@@ -1 +1,10 @@
-data "polaris_deployment" "default" {}
+# Output the IP addresses and version used by the RSC deployment.
+data "polaris_deployment" "deployment" {}
+
+output "ip_addresses" {
+ value = data.polaris_deployment.deployment.ip_addresses
+}
+
+output "version" {
+ value = data.polaris_deployment.deployment.version
+}
diff --git a/examples/data-sources/polaris_features/data-source.tf b/examples/data-sources/polaris_features/data-source.tf
index 89eb9e3..7e31b67 100644
--- a/examples/data-sources/polaris_features/data-source.tf
+++ b/examples/data-sources/polaris_features/data-source.tf
@@ -1 +1,6 @@
+# Output the features enabled for the RSC account.
data "polaris_features" "features" {}
+
+output "features_enabled" {
+ value = data.polaris_features.features.features
+}
diff --git a/examples/resources/polaris_aws_account/resource.tf b/examples/resources/polaris_aws_account/resource.tf
index c74abb4..0d80f9e 100644
--- a/examples/resources/polaris_aws_account/resource.tf
+++ b/examples/resources/polaris_aws_account/resource.tf
@@ -3,17 +3,25 @@ resource "polaris_aws_account" "default" {
profile = "default"
cloud_native_protection {
+ permission_groups = [
+ "BASIC",
+ ]
+
regions = [
"us-east-2",
]
}
}
-# Enable Cloud Native Protection and Exocompte.
+# Enable Cloud Native Protection and Exocompute.
resource "polaris_aws_account" "default" {
profile = "default"
cloud_native_protection {
+ permission_groups = [
+ "BASIC",
+ ]
+
regions = [
"us-east-2",
"us-west-2",
@@ -21,6 +29,11 @@ resource "polaris_aws_account" "default" {
}
exocompute {
+ permission_groups = [
+ "BASIC",
+ "RSC_MANAGED_CLUSTER",
+ ]
+
regions = [
"us-west-2",
]
@@ -29,5 +42,5 @@ resource "polaris_aws_account" "default" {
# The CloudFormation stack ARN is available after creation
output "stack_arn" {
- value = polaris_aws_account.default.exocompute[0].stack_arn
+ value = polaris_aws_account.default.exocompute[0].stack_arn
}
diff --git a/examples/resources/polaris_aws_cnp_account/resource.tf b/examples/resources/polaris_aws_cnp_account/resource.tf
index 752a16c..7d09c92 100644
--- a/examples/resources/polaris_aws_cnp_account/resource.tf
+++ b/examples/resources/polaris_aws_cnp_account/resource.tf
@@ -1,6 +1,42 @@
+# Hardcoded values. Permission groups default to BASIC.
resource "polaris_aws_cnp_account" "account" {
- features = ["CLOUD_NATIVE_PROTECTION"]
name = "My Account"
native_id = "123456789123"
- regions = ["us-east-2", "us-west-2"]
+
+ regions = [
+ "us-east-2",
+ "us-west-2",
+ ]
+
+ feature {
+ name = "CLOUD_NATIVE_ARCHIVAL"
+ }
+
+ feature {
+ name = "CLOUD_NATIVE_PROTECTION"
+
+ permission_groups = [
+ "BASIC",
+ "EXPORT_AND_RESTORE",
+ ]
+ }
+}
+
+# Using variables for the account values and the features. The dynamic
+# feature block could also be expanded from the polaris_aws_cnp_artifacts
+# data source.
+resource "polaris_aws_cnp_account" "account" {
+ cloud = var.cloud
+ external_id = var.external_id
+ name = var.name
+ native_id = var.native_id
+ regions = var.regions
+
+ dynamic "feature" {
+ for_each = var.features
+ content {
+ name = feature.value["name"]
+ permission_groups = feature.value["permission_groups"]
+ }
+ }
}
diff --git a/examples/resources/polaris_aws_cnp_account_attachments/resource.tf b/examples/resources/polaris_aws_cnp_account_attachments/resource.tf
index 0f7f2b3..1aa9505 100644
--- a/examples/resources/polaris_aws_cnp_account_attachments/resource.tf
+++ b/examples/resources/polaris_aws_cnp_account_attachments/resource.tf
@@ -1,12 +1,15 @@
+# The configuration assumes that an AWS account has been added to RSC
+# and that one AWS IAM instance profile and role have been created for
+# each RSC artifact.
resource "polaris_aws_cnp_account_attachments" "attachments" {
account_id = polaris_aws_cnp_account.account.id
- features = polaris_aws_cnp_account.account.features
+ features = polaris_aws_cnp_account.account.feature.*.name
dynamic "instance_profile" {
for_each = aws_iam_instance_profile.profile
content {
key = instance_profile.key
- name = instance_profile.value["name"]
+ name = instance_profile.value["arn"]
}
}
diff --git a/examples/resources/polaris_aws_cnp_account_trust_policy/resource.tf b/examples/resources/polaris_aws_cnp_account_trust_policy/resource.tf
index d95e9bb..0da4dbd 100644
--- a/examples/resources/polaris_aws_cnp_account_trust_policy/resource.tf
+++ b/examples/resources/polaris_aws_cnp_account_trust_policy/resource.tf
@@ -1,7 +1,44 @@
+data "polaris_aws_cnp_artifacts" "artifacts" {
+ feature {
+ name = "CLOUD_NATIVE_ARCHIVAL"
+
+ permission_groups = [
+ "BASIC",
+ ]
+ }
+
+ feature {
+ name = "CLOUD_NATIVE_PROTECTION"
+
+ permission_groups = [
+ "BASIC",
+ "EXPORT_AND_RESTORE",
+ ]
+ }
+}
+
+resource "polaris_aws_cnp_account" "account" {
+ name = "My Account"
+ native_id = "123456789123"
+ regions = [
+ "us-east-2",
+ "us-west-2",
+ ]
+
+ dynamic "feature" {
+ for_each = data.polaris_aws_cnp_artifacts.artifacts.feature
+ content {
+ name = feature.value["name"]
+ permission_groups = feature.value["permission_groups"]
+ }
+ }
+}
+
+# Look up the trust policies using the artifacts data source and the
+# account resource.
resource "polaris_aws_cnp_account_trust_policy" "trust_policy" {
- for_each = data.polaris_aws_cnp_artifacts.artifacts.role_keys
- account_id = polaris_aws_cnp_account.account.id
- features = polaris_aws_cnp_account.account.features
- external_id = polaris_aws_cnp_account.account.external_id
- role_key = each.key
+ for_each = data.polaris_aws_cnp_artifacts.artifacts.role_keys
+ account_id = polaris_aws_cnp_account.account.id
+ features = polaris_aws_cnp_account.account.feature.*.name
+ role_key = each.key
}
diff --git a/examples/resources/polaris_aws_exocompute/resource.tf b/examples/resources/polaris_aws_exocompute/resource.tf
index cd9da5c..74e7819 100644
--- a/examples/resources/polaris_aws_exocompute/resource.tf
+++ b/examples/resources/polaris_aws_exocompute/resource.tf
@@ -1,6 +1,6 @@
-# With security groups managed by RSC.
-resource "polaris_aws_exocompute" "default" {
- account_id = polaris_aws_account.default.id
+# RSC managed Exocompute with security groups managed by RSC.
+resource "polaris_aws_exocompute" "exocompute" {
+ account_id = polaris_aws_account.account.id
region = "us-east-2"
vpc_id = "vpc-4859acb9"
@@ -10,9 +10,9 @@ resource "polaris_aws_exocompute" "default" {
]
}
-# With security groups managed by the user.
-resource "polaris_aws_exocompute" "default" {
- account_id = polaris_aws_account.default.id
+# RSC managed Exocompute with security groups managed by the customer.
+resource "polaris_aws_exocompute" "exocompute" {
+ account_id = polaris_aws_account.account.id
cluster_security_group_id = "sg-005656347687b8170"
node_security_group_id = "sg-00e147656785d7e2f"
region = "us-east-2"
@@ -24,8 +24,19 @@ resource "polaris_aws_exocompute" "default" {
]
}
-# Using the exocompute resources shared by an exocompute host.
-resource "polaris_aws_exocompute" "default" {
- account_id = polaris_aws_account.app.id
+# Customer managed Exocompute.
+resource "polaris_aws_exocompute" "exocompute" {
+ account_id = polaris_aws_account.account.id
+ region = "us-east-2"
+}
+
+resource "polaris_aws_exocompute_cluster_attachment" "cluster" {
+ cluster_name = "my-eks-cluster"
+ exocompute_id = polaris_aws_exocompute.exocompute.id
+}
+
+# Using the exocompute resources shared by an Exocompute host.
+resource "polaris_aws_exocompute" "exocompute" {
+ account_id = polaris_aws_account.account.id
host_account_id = polaris_aws_account.host.id
}
diff --git a/examples/resources/polaris_aws_exocompute_cluster_attachment/resource.tf b/examples/resources/polaris_aws_exocompute_cluster_attachment/resource.tf
new file mode 100644
index 0000000..169d3c7
--- /dev/null
+++ b/examples/resources/polaris_aws_exocompute_cluster_attachment/resource.tf
@@ -0,0 +1,4 @@
+resource "polaris_aws_exocompute_cluster_attachment" "attachment" {
+ cluster_name = "my-eks-cluster"
+ exocompute_id = polaris_aws_exocompute.exocompute.id
+}
diff --git a/examples/resources/polaris_aws_private_container_registry/resource.tf b/examples/resources/polaris_aws_private_container_registry/resource.tf
index 94ca7a6..fed8f8d 100644
--- a/examples/resources/polaris_aws_private_container_registry/resource.tf
+++ b/examples/resources/polaris_aws_private_container_registry/resource.tf
@@ -1,5 +1,5 @@
-resource "polaris_aws_private_container_registry" "default" {
- account_id = polaris_aws_account.default.id
+resource "polaris_aws_private_container_registry" "registry" {
+ account_id = polaris_aws_account.account.id
native_id = "123456789012"
url = "234567890121.dkr.ecr.us-east-2.amazonaws.com"
}
diff --git a/examples/resources/polaris_azure_archival_location/resource.tf b/examples/resources/polaris_azure_archival_location/resource.tf
new file mode 100644
index 0000000..81e179a
--- /dev/null
+++ b/examples/resources/polaris_azure_archival_location/resource.tf
@@ -0,0 +1,27 @@
+# Source region.
+resource "polaris_azure_archival_location" "archival_location" {
+ cloud_account_id = polaris_azure_subscription.subscription.id
+ name = "my-archival-location"
+ storage_account_name_prefix = "archival"
+}
+
+# Source region with a customer managed key.
+resource "polaris_azure_archival_location" "archival_location" {
+ cloud_account_id = polaris_azure_subscription.subscription.id
+ name = "my-archival-location"
+ storage_account_name_prefix = "archival"
+
+ customer_managed_key {
+ name = "my-archival-key"
+ region = "eastus"
+ vault_name = "my-archival-key-vault"
+ }
+}
+
+# Specific region.
+resource "polaris_azure_archival_location" "archival_location" {
+ cloud_account_id = polaris_azure_subscription.subscription.id
+ name = "my-archival-location"
+ storage_account_name_prefix = "archival"
+ storage_account_region = "eastus2"
+}
diff --git a/examples/resources/polaris_azure_exocompute/resource.tf b/examples/resources/polaris_azure_exocompute/resource.tf
index b72bbc0..4b35ba2 100644
--- a/examples/resources/polaris_azure_exocompute/resource.tf
+++ b/examples/resources/polaris_azure_exocompute/resource.tf
@@ -1,5 +1,13 @@
-resource "polaris_azure_exocompute" "default" {
- subscription_id = polaris_azure_subscription.default.id
- region = "eastus2"
- subnet_id = "/subscriptions/65774f88-da6a-11eb-bc8f-e798f8b54eba/resourceGroups/test/providers/Microsoft.Network/virtualNetworks/test/subnets/default"
+# Host configuration.
+resource "polaris_azure_exocompute" "host_exocompute" {
+ cloud_account_id = polaris_azure_subscription.host_subscription.id
+ pod_overlay_network_cidr = "10.244.0.0/16"
+ region = "eastus2"
+ subnet = "/subscriptions/65774f88-da6a-11eb-bc8f-e798f8b54eba/resourceGroups/test/providers/Microsoft.Network/virtualNetworks/test/subnets/default"
+}
+
+# Application configuration.
+resource "polaris_azure_exocompute" "app_exocompute" {
+ cloud_account_id = polaris_azure_subscription.app_subscription.id
+ host_cloud_account_id = polaris_azure_subscription.host_subscription.id
}
diff --git a/examples/resources/polaris_azure_service_principal/resource.tf b/examples/resources/polaris_azure_service_principal/resource.tf
index db8d64d..a0c7be1 100644
--- a/examples/resources/polaris_azure_service_principal/resource.tf
+++ b/examples/resources/polaris_azure_service_principal/resource.tf
@@ -1,21 +1,21 @@
-# With service principal file.
+# With a custom service principal file.
resource "polaris_azure_service_principal" "default" {
credentials = "${path.module}/service-principal.json"
tenant_domain = "mydomain.onmicrosoft.com"
}
-# With service principal created with the Azure SDK using the --sdk-auth
-# parameter
+# With a service principal created using the Azure SDK and the
+# --sdk-auth parameter.
resource "polaris_azure_service_principal" "default" {
sdk_auth = "${path.module}/sdk-service-principal.json"
tenant_domain = "mydomain.onmicrosoft.com"
}
-# Without service principal file.
+# Without a service principal file.
resource "polaris_azure_service_principal" "default" {
app_id = "25c2b42a-c76b-11eb-9767-6ff6b5b7e72b"
app_name = "My App"
- app_secret = ""
+ app_secret = ""
tenant_domain = "mydomain.onmicrosoft.com"
tenant_id = "2bfdaef8-c76b-11eb-8d3d-4706c14a88f0"
}
diff --git a/examples/resources/polaris_azure_subscription/resource.tf b/examples/resources/polaris_azure_subscription/resource.tf
index 70ac6c0..b50f686 100644
--- a/examples/resources/polaris_azure_subscription/resource.tf
+++ b/examples/resources/polaris_azure_subscription/resource.tf
@@ -1,30 +1,60 @@
-# Enable Cloud Native Protection
-resource "polaris_azure_subscription" "default" {
+# Enable the Cloud Native Protection feature for the EastUS2 region.
+resource "polaris_azure_subscription" "subscription" {
subscription_id = "31be1bb0-c76c-11eb-9217-afdffe83a002"
- tenant_domain = "mydomain.onmicrosoft.com"
+ tenant_domain = "my-domain.onmicrosoft.com"
cloud_native_protection {
regions = [
"eastus2",
]
+ resource_group_name = "my-resource-group"
+ resource_group_region = "eastus2"
}
}
-# Enable Cloud Native Protection and Exocompte.
-resource "polaris_azure_subscription" "default" {
+# Enable the Cloud Native Protection feature for the EastUS2 and the
+# WestUS2 regions and the Exocompute feature for the EastUS2 region.
+resource "polaris_azure_subscription" "subscription" {
subscription_id = "31be1bb0-c76c-11eb-9217-afdffe83a002"
- tenant_domain = "mydomain.onmicrosoft.com"
+ tenant_domain = "my-domain.onmicrosoft.com"
cloud_native_protection {
regions = [
"eastus2",
"westus2",
]
+ resource_group_name = "my-west-resource-group"
+ resource_group_region = "westus2"
+ resource_group_tags = {
+ environment = "production"
+ }
+ }
+
+ exocompute {
+ regions = [
+ "eastus2",
+ ]
+ resource_group_name = "my-east-resource-group"
+ resource_group_region = "eastus2"
}
+}
+
+# Using the polaris_azure_permissions data source to inform RSC about
+# permission updates for the feature.
+data "polaris_azure_permissions" "exocompute" {
+ feature = "EXOCOMPUTE"
+}
+
+resource "polaris_azure_subscription" "default" {
+ subscription_id = "31be1bb0-c76c-11eb-9217-afdffe83a002"
+ tenant_domain = "my-domain.onmicrosoft.com"
exocompute {
+ permissions = data.polaris_azure_permissions.exocompute.id
regions = [
"eastus2",
]
+ resource_group_name = "my-resource-group"
+ resource_group_region = "eastus2"
}
}
diff --git a/go.mod b/go.mod
index 7c6984d..2c82185 100644
--- a/go.mod
+++ b/go.mod
@@ -8,7 +8,7 @@ require (
github.com/hashicorp/go-cty v1.4.1-0.20200414143053-d3edf31b6320
github.com/hashicorp/terraform-plugin-docs v0.16.0
github.com/hashicorp/terraform-plugin-sdk/v2 v2.10.0
- github.com/rubrikinc/rubrik-polaris-sdk-for-go v0.9.2
+ github.com/rubrikinc/rubrik-polaris-sdk-for-go v0.10.0
)
require (
@@ -96,7 +96,7 @@ require (
golang.org/x/crypto v0.21.0 // indirect
golang.org/x/exp v0.0.0-20230626212559-97b1e661b5df // indirect
golang.org/x/mod v0.11.0 // indirect
- golang.org/x/net v0.21.0 // indirect
+ golang.org/x/net v0.23.0 // indirect
golang.org/x/oauth2 v0.11.0 // indirect
golang.org/x/sys v0.18.0 // indirect
golang.org/x/text v0.14.0 // indirect
diff --git a/go.sum b/go.sum
index 7d29683..b5a97fa 100644
--- a/go.sum
+++ b/go.sum
@@ -412,8 +412,8 @@ github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6L
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.6.1 h1:/FiVV8dS/e+YqF2JvO3yXRFbBLTIuSDkuC7aBOAvL+k=
github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
-github.com/rubrikinc/rubrik-polaris-sdk-for-go v0.9.2 h1:8bfi331z52j0Hr8T1WFonYaG/2EISQmyL1Dm4X16enA=
-github.com/rubrikinc/rubrik-polaris-sdk-for-go v0.9.2/go.mod h1:670TFQkxTdbsBwEwR/fDT75hfHwPDTTOiLnyZerbqQk=
+github.com/rubrikinc/rubrik-polaris-sdk-for-go v0.10.0 h1:tCdwXxqMg7NAcLvph+hBObNJm0zcZ1+4lLB9XiHvBNA=
+github.com/rubrikinc/rubrik-polaris-sdk-for-go v0.10.0/go.mod h1:ryJGDKlbaCvozY3Wvt+TPSN2OZRChQedHUNsnVfCbXE=
github.com/russross/blackfriday v1.6.0 h1:KqfZb0pUVN2lYqZUYRddxF4OR8ZMURnJIG5Y3VRLtww=
github.com/russross/blackfriday v1.6.0/go.mod h1:ti0ldHuxg49ri4ksnFxlkCfN+hvslNlmVHqNRXXJNAY=
github.com/sebdah/goldie v1.0.0/go.mod h1:jXP4hmWywNEwZzhMuv2ccnqTSFpuq8iyQhtQdkkZBH4=
@@ -560,8 +560,8 @@ golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v
golang.org/x/net v0.0.0-20210326060303-6b1517762897/go.mod h1:uSPa2vr4CLtc/ILN5odXGNXS6mhrKVzTaCXzk9m6W3k=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
-golang.org/x/net v0.21.0 h1:AQyQV4dYCvJ7vGmJyKki9+PBdyvhkSd8EIx/qb0AYv4=
-golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
+golang.org/x/net v0.23.0 h1:7EYJ93RZ9vYSZAIb2x3lnuvqO5zneoD6IvWjuhfxjTs=
+golang.org/x/net v0.23.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
diff --git a/internal/provider/data_source_account.go b/internal/provider/data_source_account.go
new file mode 100644
index 0000000..7fbfb01
--- /dev/null
+++ b/internal/provider/data_source_account.go
@@ -0,0 +1,114 @@
+// Copyright 2024 Rubrik, Inc.
+//
+// Permission is hereby granted, free of charge, to any person obtaining a copy
+// of this software and associated documentation files (the "Software"), to
+// deal in the Software without restriction, including without limitation the
+// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+// sell copies of the Software, and to permit persons to whom the Software is
+// furnished to do so, subject to the following conditions:
+//
+// The above copyright notice and this permission notice shall be included in
+// all copies or substantial portions of the Software.
+//
+// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+// DEALINGS IN THE SOFTWARE.
+
+package provider
+
+import (
+ "context"
+ "crypto/sha256"
+ "fmt"
+ "log"
+ "strings"
+
+ "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+ "github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/graphql/core"
+)
+
+const dataSourceAccountDescription = `
+The ´polaris_account´ data source is used to access information about the RSC account.
+
+-> **Note:** The ´fqdn´ and ´name´ fields are read from the local RSC credentials and
+ not from RSC.
+`
+
+func dataSourceAccount() *schema.Resource {
+ return &schema.Resource{
+ ReadContext: accountRead,
+
+ Description: description(dataSourceAccountDescription),
+ Schema: map[string]*schema.Schema{
+ keyID: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "SHA-256 hash of the features, the fully qualified domain name and the name.",
+ },
+ keyFeatures: {
+ Type: schema.TypeSet,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ },
+ Computed: true,
+ Description: "Features enabled for the RSC account.",
+ },
+ keyFQDN: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "Fully qualified domain name of the RSC account.",
+ },
+ keyName: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "RSC account name.",
+ },
+ },
+ }
+}
+
+func accountRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
+ log.Print("[TRACE] accountRead")
+
+ client, err := m.(*client).polaris()
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ accountFeatures, err := core.Wrap(client.GQL).EnabledFeaturesForAccount(ctx)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ accountFQDN := strings.ToLower(client.Account.AccountFQDN())
+ accountName := strings.ToLower(client.Account.AccountName())
+
+ accountFeaturesAttr := &schema.Set{F: schema.HashString}
+ for _, accountFeature := range accountFeatures {
+ accountFeaturesAttr.Add(accountFeature.Name)
+ }
+ if err := d.Set(keyFeatures, accountFeaturesAttr); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set(keyFQDN, accountFQDN); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set(keyName, accountName); err != nil {
+ return diag.FromErr(err)
+ }
+
+ hash := sha256.New()
+ for _, accountFeature := range accountFeatures {
+ hash.Write([]byte(accountFeature.Name))
+ }
+ hash.Write([]byte(accountFQDN))
+ hash.Write([]byte(accountName))
+ d.SetId(fmt.Sprintf("%x", hash.Sum(nil)))
+
+ return nil
+}
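The data source derives its ID by hashing the feature names (in the order returned by the API), the FQDN, and the account name. A minimal standalone sketch of that scheme, with hypothetical inputs:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// accountID mirrors the hashing scheme in accountRead above: each feature
// name is written to the hash in order, followed by the FQDN and the
// account name, and the digest is hex-encoded.
func accountID(features []string, fqdn, name string) string {
	h := sha256.New()
	for _, f := range features {
		h.Write([]byte(f))
	}
	h.Write([]byte(fqdn))
	h.Write([]byte(name))
	return fmt.Sprintf("%x", h.Sum(nil))
}

func main() {
	// Example values are illustrative, not real credentials.
	id := accountID([]string{"CLOUD_NATIVE_PROTECTION"}, "example.my.rubrik.com", "example")
	fmt.Println(id)
}
```

Note that the digest depends on the order in which feature names are written, so the ID is stable only as long as the API returns features in a consistent order.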
diff --git a/internal/provider/data_source_aws_account.go b/internal/provider/data_source_aws_account.go
new file mode 100644
index 0000000..1d90f01
--- /dev/null
+++ b/internal/provider/data_source_aws_account.go
@@ -0,0 +1,103 @@
+// Copyright 2024 Rubrik, Inc.
+//
+// Permission is hereby granted, free of charge, to any person obtaining a copy
+// of this software and associated documentation files (the "Software"), to
+// deal in the Software without restriction, including without limitation the
+// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+// sell copies of the Software, and to permit persons to whom the Software is
+// furnished to do so, subject to the following conditions:
+//
+// The above copyright notice and this permission notice shall be included in
+// all copies or substantial portions of the Software.
+//
+// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+// DEALINGS IN THE SOFTWARE.
+
+package provider
+
+import (
+ "context"
+ "log"
+
+ "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
+ "github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/aws"
+ "github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/graphql/core"
+)
+
+const dataSourceAwsAccountDescription = `
+The ´polaris_aws_account´ data source is used to access information about an AWS account
+added to RSC. An AWS account is looked up using either the AWS account ID or the name.
+
+-> **Note:** The account name is the name of the AWS account as it appears in RSC.
+`
+
+func dataSourceAwsAccount() *schema.Resource {
+ return &schema.Resource{
+ ReadContext: awsAccountRead,
+
+ Description: description(dataSourceAwsAccountDescription),
+ Schema: map[string]*schema.Schema{
+ keyID: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "RSC cloud account ID (UUID).",
+ },
+ keyAccountID: {
+ Type: schema.TypeString,
+ Optional: true,
+ ExactlyOneOf: []string{keyAccountID, keyName},
+ Description: "AWS account ID.",
+ ValidateFunc: validation.StringIsNotEmpty,
+ },
+ keyName: {
+ Type: schema.TypeString,
+ Optional: true,
+ ExactlyOneOf: []string{keyAccountID, keyName},
+ Description: "AWS account name.",
+ ValidateFunc: validation.StringIsNotEmpty,
+ },
+ },
+ }
+}
+
+func awsAccountRead(ctx context.Context, d *schema.ResourceData, m any) diag.Diagnostics {
+ log.Print("[TRACE] awsAccountRead")
+
+ client, err := m.(*client).polaris()
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ // Read the AWS account using either the ID or the name. Prefix searches
+ // are not allowed, since an account whose name is a prefix of another
+ // account's name could not be uniquely identified.
+ var account aws.CloudAccount
+ if accountID := d.Get(keyAccountID).(string); accountID != "" {
+ account, err = aws.Wrap(client).AccountByNativeID(ctx, core.FeatureAll, accountID)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ } else {
+ account, err = aws.Wrap(client).AccountByName(ctx, core.FeatureAll, d.Get(keyName).(string))
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ }
+
+ if err := d.Set(keyAccountID, account.NativeID); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set(keyName, account.Name); err != nil {
+ return diag.FromErr(err)
+ }
+
+ d.SetId(account.ID.String())
+ return nil
+}
diff --git a/internal/provider/data_source_aws_archival_location.go b/internal/provider/data_source_aws_archival_location.go
index cde810e..b4743a8 100644
--- a/internal/provider/data_source_aws_archival_location.go
+++ b/internal/provider/data_source_aws_archival_location.go
@@ -32,60 +32,78 @@ import (
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/aws"
)
+const dataSourceAWSArchivalLocationDescription = `
+The ´polaris_aws_archival_location´ data source is used to access information about an
+AWS archival location. An archival location is looked up using either the ID or the name.
+`
+
func dataSourceAwsArchivalLocation() *schema.Resource {
return &schema.Resource{
ReadContext: awsArchivalLocationRead,
+ Description: description(dataSourceAWSArchivalLocationDescription),
Schema: map[string]*schema.Schema{
- "bucket_prefix": {
+ keyID: {
+ Type: schema.TypeString,
+ Optional: true,
+ ExactlyOneOf: []string{keyID, keyArchivalLocationID, keyName},
+ Description: "Cloud native archival location ID (UUID).",
+ ValidateFunc: validation.IsUUID,
+ },
+ keyArchivalLocationID: {
+ Type: schema.TypeString,
+ Optional: true,
+ ExactlyOneOf: []string{keyID, keyArchivalLocationID, keyName},
+ Description: "Cloud native archival location ID (UUID). **Deprecated:** use `id` instead.",
+ ValidateFunc: validation.IsUUID,
+ Deprecated: "Use `id` instead.",
+ },
+ keyBucketPrefix: {
Type: schema.TypeString,
Computed: true,
- Description: "AWS bucket prefix.",
+ Description: "AWS bucket prefix. Note, `rubrik-` will always be prepended to the prefix.",
},
- "bucket_tags": {
+ keyBucketTags: {
Type: schema.TypeMap,
Computed: true,
Description: "AWS bucket tags.",
},
- "connection_status": {
+ keyConnectionStatus: {
Type: schema.TypeString,
Computed: true,
Description: "Connection status of the archival location.",
},
- "archival_location_id": {
- Type: schema.TypeString,
- Optional: true,
- ExactlyOneOf: []string{"archival_location_id", "name"},
- Description: "ID of the archival location.",
- ValidateFunc: validation.StringIsNotWhiteSpace,
- },
- "kms_master_key": {
+ keyKMSMasterKey: {
Type: schema.TypeString,
Computed: true,
Sensitive: true,
Description: "AWS KMS master key alias/ID.",
},
- "location_template": {
- Type: schema.TypeString,
- Computed: true,
- Description: "Location template. If a region was specified, it will be `SPECIFIC_REGION`, otherwise `SOURCE_REGION`.",
+ keyLocationTemplate: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "RSC location template. If a region was specified, it will be `SPECIFIC_REGION`, " +
+ "otherwise `SOURCE_REGION`.",
},
- "name": {
+ keyName: {
Type: schema.TypeString,
Optional: true,
- ExactlyOneOf: []string{"archival_location_id", "name"},
- Description: "Name of the archival location.",
+ ExactlyOneOf: []string{keyID, keyArchivalLocationID, keyName},
+ Description: "Name of the cloud native archival location.",
ValidateFunc: validation.StringIsNotWhiteSpace,
},
- "region": {
- Type: schema.TypeString,
- Computed: true,
- Description: "AWS region to store the snapshots in. If not specified, the snapshots will be stored in the same region as the workload.",
+ keyRegion: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "AWS region to store the snapshots in. If not specified, the snapshots will be stored " +
+ "in the same region as the workload.",
},
- "storage_class": {
- Type: schema.TypeString,
- Computed: true,
- Description: "AWS bucket storage class.",
+ keyStorageClass: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "AWS bucket storage class. Possible values are `STANDARD`, `STANDARD_IA`, `ONEZONE_IA`, " +
+ "`GLACIER_INSTANT_RETRIEVAL`, `GLACIER_DEEP_ARCHIVE` and `GLACIER_FLEXIBLE_RETRIEVAL`. Default " +
+ "value is `STANDARD_IA`.",
},
},
}
@@ -99,58 +117,56 @@ func awsArchivalLocationRead(ctx context.Context, d *schema.ResourceData, m any)
return diag.FromErr(err)
}
+ // Read the archival location using either the ID or the name.
var targetMapping aws.TargetMapping
- if targetMappingID, ok := d.GetOk("archival_location_id"); ok {
- id, err := uuid.Parse(targetMappingID.(string))
+ targetMappingID := d.Get(keyID).(string)
+ if targetMappingID == "" {
+ targetMappingID = d.Get(keyArchivalLocationID).(string)
+ }
+ if targetMappingID != "" {
+ id, err := uuid.Parse(targetMappingID)
if err != nil {
return diag.FromErr(err)
}
-
- // Read the AWS archival location using the target mapping ID.
targetMapping, err = aws.Wrap(client).TargetMappingByID(ctx, id)
if err != nil {
return diag.FromErr(err)
}
} else {
- targetMappingName := d.Get("name").(string)
-
- // Read the AWS archival location using the target mapping name.
- targetMapping, err = aws.Wrap(client).TargetMappingByName(ctx, targetMappingName)
+ targetMapping, err = aws.Wrap(client).TargetMappingByName(ctx, d.Get(keyName).(string))
if err != nil {
return diag.FromErr(err)
}
}
- // Set the resource string arguments.
- if err := d.Set("bucket_prefix", strings.TrimPrefix(targetMapping.BucketPrefix, "rubrik-")); err != nil {
+ if err := d.Set(keyArchivalLocationID, targetMapping.ID.String()); err != nil {
return diag.FromErr(err)
}
- if err := d.Set("connection_status", targetMapping.ConnectionStatus); err != nil {
+ if err := d.Set(keyBucketPrefix, strings.TrimPrefix(targetMapping.BucketPrefix, "rubrik-")); err != nil {
return diag.FromErr(err)
}
- if err := d.Set("kms_master_key", targetMapping.KMSMasterKey); err != nil {
+ if err := d.Set(keyConnectionStatus, targetMapping.ConnectionStatus); err != nil {
return diag.FromErr(err)
}
- if err := d.Set("location_template", targetMapping.LocTemplate); err != nil {
+ if err := d.Set(keyKMSMasterKey, targetMapping.KMSMasterKey); err != nil {
return diag.FromErr(err)
}
- if err := d.Set("name", targetMapping.Name); err != nil {
+ if err := d.Set(keyLocationTemplate, targetMapping.LocTemplate); err != nil {
return diag.FromErr(err)
}
- if err := d.Set("region", targetMapping.Region); err != nil {
+ if err := d.Set(keyName, targetMapping.Name); err != nil {
return diag.FromErr(err)
}
- if err := d.Set("storage_class", targetMapping.StorageClass); err != nil {
+ if err := d.Set(keyRegion, targetMapping.Region); err != nil {
return diag.FromErr(err)
}
-
- // Set the resource bucket tags argument.
- if err := d.Set("bucket_tags", toBucketTags(targetMapping.BucketTags)); err != nil {
+ if err := d.Set(keyStorageClass, targetMapping.StorageClass); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set(keyBucketTags, toBucketTags(targetMapping.BucketTags)); err != nil {
return diag.FromErr(err)
}
- // Set the resource ID to the target mapping ID.
d.SetId(targetMapping.ID.String())
-
return nil
}
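The lookup precedence above (the `id` field wins, then the deprecated `archival_location_id`, otherwise the name) can be sketched as a small helper. This is an illustrative stand-in, not provider code:

```go
package main

import "fmt"

// lookupKey mirrors the precedence in awsArchivalLocationRead: the `id`
// field is preferred, then the deprecated `archival_location_id`, and
// finally the archival location name. byID reports whether the returned
// key should be parsed as a UUID.
func lookupKey(id, deprecatedID, name string) (key string, byID bool) {
	if id != "" {
		return id, true
	}
	if deprecatedID != "" {
		return deprecatedID, true
	}
	return name, false
}

func main() {
	// Hypothetical values for illustration only.
	key, byID := lookupKey("", "0ddba11c-0000-4000-8000-000000000000", "my-location")
	fmt.Println(key, byID)
}
```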
diff --git a/internal/provider/data_source_aws_cnp_artifacts.go b/internal/provider/data_source_aws_cnp_artifacts.go
index fa32006..25141f3 100644
--- a/internal/provider/data_source_aws_cnp_artifacts.go
+++ b/internal/provider/data_source_aws_cnp_artifacts.go
@@ -33,33 +33,77 @@ import (
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/graphql/core"
)
-// dataSourceAwsArtifacts defines the schema for the AWS artifacts data source.
+const dataSourceAWSArtifactsDescription = `
+The ´polaris_aws_cnp_artifacts´ data source is used to access information about
+instance profiles and roles required by RSC for a specified feature set.
+
+## Permission Groups
+The following lists the features and their applicable permission groups, used when
+specifying the feature set.
+
+### CLOUD_NATIVE_ARCHIVAL
+ * ´BASIC´ - Represents the basic set of permissions required to onboard the feature.
+
+### CLOUD_NATIVE_PROTECTION
+ * ´BASIC´ - Represents the basic set of permissions required to onboard the feature.
+ * ´EXPORT_AND_RESTORE´ - Represents the set of permissions required for export and
+ restore operations.
+ * ´FILE_LEVEL_RECOVERY´ - Represents the set of permissions required for file-level
+ recovery operations.
+ * ´SNAPSHOT_PRIVATE_ACCESS´ - Represents the set of permissions required for private
+ access to disk snapshots.
+
+### CLOUD_NATIVE_S3_PROTECTION
+ * ´BASIC´ - Represents the basic set of permissions required to onboard the feature.
+
+### EXOCOMPUTE
+ * ´BASIC´ - Represents the basic set of permissions required to onboard the feature.
+ * ´PRIVATE_ENDPOINTS´ - Represents the set of permissions required for usage of private
+ endpoints.
+ * ´RSC_MANAGED_CLUSTER´ - Represents the set of permissions required for the
+ Rubrik-managed Exocompute cluster.
+
+### RDS_PROTECTION
+ * ´BASIC´ - Represents the basic set of permissions required to onboard the feature.
+
+-> **Note:** When permission groups are specified, the ´BASIC´ permission group must
+ always be included.
+`
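The feature-set rule above (the `BASIC` permission group must always be included) can be sketched with a small helper. The `Feature` struct and `withBasic` function here are illustrative stand-ins, not the SDK's types:

```go
package main

import "fmt"

// Feature mirrors the shape used by the feature blocks: a feature name plus
// its permission groups. Illustrative stand-in, not the SDK type.
type Feature struct {
	Name             string
	PermissionGroups []string
}

// withBasic ensures the BASIC permission group is always present, per the
// note above. Hypothetical helper for illustration only.
func withBasic(f Feature) Feature {
	for _, g := range f.PermissionGroups {
		if g == "BASIC" {
			return f
		}
	}
	f.PermissionGroups = append([]string{"BASIC"}, f.PermissionGroups...)
	return f
}

func main() {
	f := withBasic(Feature{
		Name:             "CLOUD_NATIVE_PROTECTION",
		PermissionGroups: []string{"EXPORT_AND_RESTORE", "FILE_LEVEL_RECOVERY"},
	})
	fmt.Println(f.Name, f.PermissionGroups)
}
```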
+
func dataSourceAwsArtifacts() *schema.Resource {
return &schema.Resource{
ReadContext: awsArtifactsRead,
+ Description: description(dataSourceAWSArtifactsDescription),
Schema: map[string]*schema.Schema{
- "cloud": {
- Type: schema.TypeString,
- Optional: true,
- Default: "STANDARD",
- Description: "AWS cloud type.",
- ValidateFunc: validation.StringIsNotWhiteSpace,
+ keyID: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "SHA-256 hash of the instance profile keys and the role keys.",
+ },
+ keyCloud: {
+ Type: schema.TypeString,
+ Optional: true,
+ Default: "STANDARD",
+ Description: "AWS cloud type. Possible values are `STANDARD`, `CHINA` and `GOV`. Default value is " +
+ "`STANDARD`.",
+ ValidateFunc: validation.StringInSlice([]string{"STANDARD", "CHINA", "GOV"}, false),
},
- "feature": {
+ keyFeature: {
Type: schema.TypeSet,
- Elem: featureResource,
+ Elem: featureResource(),
MinItems: 1,
Required: true,
Description: "RSC feature with optional permission groups.",
},
- "instance_profile_keys": {
+ keyInstanceProfileKeys: {
Type: schema.TypeSet,
Elem: &schema.Schema{Type: schema.TypeString},
Computed: true,
Description: "Instance profile keys for the RSC features.",
},
- "role_keys": {
+ keyRoleKeys: {
Type: schema.TypeSet,
Elem: &schema.Schema{Type: schema.TypeString},
Computed: true,
@@ -69,9 +113,6 @@ func dataSourceAwsArtifacts() *schema.Resource {
}
}
-// awsArtifactsRead run the Read operation for the AWS artifacts data source.
-// Returns all the instance profiles and roles required for the specified cloud
-// and feature set.
func awsArtifactsRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
log.Print("[TRACE] awsArtifactsRead")
@@ -81,12 +122,12 @@ func awsArtifactsRead(ctx context.Context, d *schema.ResourceData, m interface{}
}
// Get attributes.
- cloud := d.Get("cloud").(string)
+ cloud := d.Get(keyCloud).(string)
var features []core.Feature
- for _, block := range d.Get("feature").(*schema.Set).List() {
+ for _, block := range d.Get(keyFeature).(*schema.Set).List() {
block := block.(map[string]interface{})
- feature := core.Feature{Name: block["name"].(string)}
- for _, group := range block["permission_groups"].(*schema.Set).List() {
+ feature := core.Feature{Name: block[keyName].(string)}
+ for _, group := range block[keyPermissionGroups].(*schema.Set).List() {
feature = feature.WithPermissionGroups(core.PermissionGroup(group.(string)))
}
@@ -104,7 +145,7 @@ func awsArtifactsRead(ctx context.Context, d *schema.ResourceData, m interface{}
for _, profile := range profiles {
profilesAttr.Add(profile)
}
- if err := d.Set("instance_profile_keys", profilesAttr); err != nil {
+ if err := d.Set(keyInstanceProfileKeys, profilesAttr); err != nil {
return diag.FromErr(err)
}
@@ -112,7 +153,7 @@ func awsArtifactsRead(ctx context.Context, d *schema.ResourceData, m interface{}
for _, role := range roles {
rolesAttr.Add(role)
}
- if err := d.Set("role_keys", rolesAttr); err != nil {
+ if err := d.Set(keyRoleKeys, rolesAttr); err != nil {
return diag.FromErr(err)
}
diff --git a/internal/provider/data_source_aws_cnp_permissions.go b/internal/provider/data_source_aws_cnp_permissions.go
index 8b73f20..c5c7088 100644
--- a/internal/provider/data_source_aws_cnp_permissions.go
+++ b/internal/provider/data_source_aws_cnp_permissions.go
@@ -33,75 +33,117 @@ import (
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/graphql/core"
)
-// dataSourceAwsPermissions defines the schema for the AWS permissions data
-// source.
+const dataSourceAWSPermissionsDescription = `
+The ´polaris_aws_cnp_permissions´ data source is used to access information about the
+permissions required by RSC for a specified feature set.
+
+## Permission Groups
+The following lists the features and their applicable permission groups, used when
+specifying the feature set.
+
+### CLOUD_NATIVE_ARCHIVAL
+ * ´BASIC´ - Represents the basic set of permissions required to onboard the feature.
+
+### CLOUD_NATIVE_PROTECTION
+ * ´BASIC´ - Represents the basic set of permissions required to onboard the feature.
+ * ´EXPORT_AND_RESTORE´ - Represents the set of permissions required for export and
+ restore operations.
+ * ´FILE_LEVEL_RECOVERY´ - Represents the set of permissions required for file-level
+ recovery operations.
+ * ´SNAPSHOT_PRIVATE_ACCESS´ - Represents the set of permissions required for private
+ access to disk snapshots.
+
+### CLOUD_NATIVE_S3_PROTECTION
+ * ´BASIC´ - Represents the basic set of permissions required to onboard the feature.
+
+### EXOCOMPUTE
+ * ´BASIC´ - Represents the basic set of permissions required to onboard the feature.
+ * ´PRIVATE_ENDPOINTS´ - Represents the set of permissions required for usage of private
+ endpoints.
+ * ´RSC_MANAGED_CLUSTER´ - Represents the set of permissions required for the
+ Rubrik-managed Exocompute cluster.
+
+### RDS_PROTECTION
+ * ´BASIC´ - Represents the basic set of permissions required to onboard the feature.
+
+-> **Note:** When permission groups are specified, the ´BASIC´ permission group must
+ always be included.
+`
+
+// This data source uses a template for its documentation due to a bug in the TF
+// docs generator. Remember to update the template if the documentation for any
+// field is changed.
func dataSourceAwsPermissions() *schema.Resource {
return &schema.Resource{
ReadContext: awsPermissionsRead,
+ Description: description(dataSourceAWSPermissionsDescription),
Schema: map[string]*schema.Schema{
- "cloud": {
- Type: schema.TypeString,
- Optional: true,
- Default: "STANDARD",
- Description: "AWS cloud type.",
- ValidateFunc: validation.StringIsNotWhiteSpace,
+ keyID: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "SHA-256 hash of the customer managed policies and the managed policies.",
+ },
+ keyCloud: {
+ Type: schema.TypeString,
+ Optional: true,
+ Default: "STANDARD",
+ Description: "AWS cloud type. Possible values are `STANDARD`, `CHINA` and `GOV`. Default value is " +
+ "`STANDARD`.",
+ ValidateFunc: validation.StringInSlice([]string{"STANDARD", "CHINA", "GOV"}, false),
},
- "customer_managed_policies": {
+ keyCustomerManagedPolicies: {
Type: schema.TypeList,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
- "feature": {
+ keyFeature: {
Type: schema.TypeString,
Computed: true,
- Description: "RSC Feature.",
+ Description: "RSC feature name.",
},
- "name": {
+ keyName: {
Type: schema.TypeString,
Computed: true,
Description: "Policy name.",
},
- "policy": {
+ keyPolicy: {
Type: schema.TypeString,
Computed: true,
- Description: "Policy.",
+ Description: "AWS policy.",
},
},
},
Computed: true,
Description: "Customer managed policies.",
},
- "ec2_recovery_role_path": {
+ keyEC2RecoveryRolePath: {
Type: schema.TypeString,
Optional: true,
- Description: "EC2 recovery role path.",
+ Description: "AWS EC2 recovery role path.",
},
- "feature": {
+ keyFeature: {
Type: schema.TypeSet,
- Elem: featureResource,
+ Elem: featureResource(),
MinItems: 1,
Required: true,
Description: "RSC feature with optional permission groups.",
},
- "managed_policies": {
+ keyManagedPolicies: {
Type: schema.TypeList,
Elem: &schema.Schema{Type: schema.TypeString},
Computed: true,
Description: "Managed policies.",
},
- "role_key": {
+ keyRoleKey: {
Type: schema.TypeString,
Required: true,
- Description: "Role key.",
+ Description: "RSC artifact key for the AWS role.",
ValidateFunc: validation.StringIsNotWhiteSpace,
},
},
}
}
-// awsPermissionsRead run the Read operation for the AWS permissions data
-// source. Returns all AWS permissions needed of for the specified cloud and
-// feature set.
func awsPermissionsRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
log.Print("[TRACE] awsPermissionsRead")
@@ -110,39 +152,36 @@ func awsPermissionsRead(ctx context.Context, d *schema.ResourceData, m interface
return diag.FromErr(err)
}
- // Get attributes.
- cloud := d.Get("cloud").(string)
- ec2RecoveryRolePath := d.Get("ec2_recovery_role_path").(string)
+ cloud := d.Get(keyCloud).(string)
+ ec2RecoveryRolePath := d.Get(keyEC2RecoveryRolePath).(string)
var features []core.Feature
- for _, block := range d.Get("feature").(*schema.Set).List() {
+ for _, block := range d.Get(keyFeature).(*schema.Set).List() {
block := block.(map[string]interface{})
- feature := core.Feature{Name: block["name"].(string)}
- for _, group := range block["permission_groups"].(*schema.Set).List() {
+ feature := core.Feature{Name: block[keyName].(string)}
+ for _, group := range block[keyPermissionGroups].(*schema.Set).List() {
feature = feature.WithPermissionGroups(core.PermissionGroup(group.(string)))
}
features = append(features, feature)
}
- roleKey := d.Get("role_key").(string)
+ roleKey := d.Get(keyRoleKey).(string)
- // Request permissions.
customerPolicies, managedPolicies, err := aws.Wrap(client).Permissions(ctx, cloud, features, ec2RecoveryRolePath)
if err != nil {
return diag.FromErr(err)
}
- // Set attributes.
var customerPoliciesAttr []map[string]string
for _, policy := range customerPolicies {
if roleKey == policy.Artifact {
customerPoliciesAttr = append(customerPoliciesAttr, map[string]string{
- "feature": policy.Feature.Name,
- "name": policy.Name,
- "policy": policy.Policy,
+ keyFeature: policy.Feature.Name,
+ keyName: policy.Name,
+ keyPolicy: policy.Policy,
})
}
}
- if err := d.Set("customer_managed_policies", customerPoliciesAttr); err != nil {
+ if err := d.Set(keyCustomerManagedPolicies, customerPoliciesAttr); err != nil {
return diag.FromErr(err)
}
@@ -152,7 +191,7 @@ func awsPermissionsRead(ctx context.Context, d *schema.ResourceData, m interface
managedPoliciesAttr = append(managedPoliciesAttr, policy.Name)
}
}
- if err := d.Set("managed_policies", managedPoliciesAttr); err != nil {
+ if err := d.Set(keyManagedPolicies, managedPoliciesAttr); err != nil {
return diag.FromErr(err)
}
diff --git a/internal/provider/data_source_azure_archival_location.go b/internal/provider/data_source_azure_archival_location.go
new file mode 100644
index 0000000..3b6634c
--- /dev/null
+++ b/internal/provider/data_source_azure_archival_location.go
@@ -0,0 +1,193 @@
+// Copyright 2024 Rubrik, Inc.
+//
+// Permission is hereby granted, free of charge, to any person obtaining a copy
+// of this software and associated documentation files (the "Software"), to
+// deal in the Software without restriction, including without limitation the
+// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+// sell copies of the Software, and to permit persons to whom the Software is
+// furnished to do so, subject to the following conditions:
+//
+// The above copyright notice and this permission notice shall be included in
+// all copies or substantial portions of the Software.
+//
+// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+// DEALINGS IN THE SOFTWARE.
+
+package provider
+
+import (
+ "context"
+ "log"
+
+ "github.com/google/uuid"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
+ "github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/azure"
+)
+
+const dataSourceAzureArchivalLocationDescription = `
+The ´polaris_azure_archival_location´ data source is used to access information about
+an Azure archival location. An archival location is looked up using either the ID or
+the name.
+`
+
+// This data source uses a template for its documentation due to a bug in the TF
+// docs generator. Remember to update the template if the documentation for any
+// field is changed.
+func dataSourceAzureArchivalLocation() *schema.Resource {
+ return &schema.Resource{
+ ReadContext: azureArchivalLocationRead,
+
+ Description: description(dataSourceAzureArchivalLocationDescription),
+ Schema: map[string]*schema.Schema{
+ keyID: {
+ Type: schema.TypeString,
+ Optional: true,
+ ExactlyOneOf: []string{keyID, keyArchivalLocationID, keyName},
+ Description: "Cloud native archival location ID (UUID).",
+ ValidateFunc: validation.IsUUID,
+ },
+ keyArchivalLocationID: {
+ Type: schema.TypeString,
+ Optional: true,
+ ExactlyOneOf: []string{keyID, keyArchivalLocationID, keyName},
+ Description: "Cloud native archival location ID (UUID). **Deprecated:** use `id` instead.",
+ ValidateFunc: validation.IsUUID,
+ Deprecated: "Use `id` instead.",
+ },
+ keyConnectionStatus: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "Connection status of the cloud native archival location.",
+ },
+ keyContainerName: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "Azure storage container name.",
+ },
+ keyCustomerManagedKey: {
+ Type: schema.TypeSet,
+ Elem: customerKeyResource(),
+ Computed: true,
+ Description: "Customer managed storage encryption. Specify the regions and their respective " +
+ "encryption details. For other regions, data will be encrypted using platform managed keys.",
+ },
+ keyLocationTemplate: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "RSC location template. If a storage account region was specified, it will be " +
+ "`SPECIFIC_REGION`, otherwise `SOURCE_REGION`.",
+ },
+ keyName: {
+ Type: schema.TypeString,
+ Optional: true,
+ ExactlyOneOf: []string{keyID, keyArchivalLocationID, keyName},
+ Description: "Cloud native archival location name.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ keyRedundancy: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "Azure storage redundancy. Possible values are `GRS`, `GZRS`, `LRS`, `RA_GRS`, " +
+ "`RA_GZRS` and `ZRS`. Default value is `LRS`.",
+ },
+ keyStorageAccountNamePrefix: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "Azure storage account name prefix. The storage account name prefix cannot be longer " +
+ "than 14 characters and can only consist of numbers and lower case letters.",
+ },
+ keyStorageAccountRegion: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "Azure region to store the snapshots in. If not specified, the snapshots will be " +
+ "stored in the same region as the workload.",
+ },
+ keyStorageAccountTags: {
+ Type: schema.TypeMap,
+ Computed: true,
+ Description: "Azure storage account tags. Each tag will be added to the storage account created by " +
+ "RSC.",
+ },
+ keyStorageTier: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "Azure storage tier. Possible values are `COOL` and `HOT`. Default value is `COOL`.",
+ },
+ },
+ }
+}
+
+func azureArchivalLocationRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
+ log.Print("[TRACE] azureArchivalLocationRead")
+
+ client, err := m.(*client).polaris()
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ // Read the archival location using either the ID or the name.
+ var targetMapping azure.TargetMapping
+ targetMappingID := d.Get(keyID).(string)
+ if targetMappingID == "" {
+ targetMappingID = d.Get(keyArchivalLocationID).(string)
+ }
+ if targetMappingID != "" {
+ id, err := uuid.Parse(targetMappingID)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ targetMapping, err = azure.Wrap(client).TargetMappingByID(ctx, id)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ } else {
+ targetMapping, err = azure.Wrap(client).TargetMappingByName(ctx, d.Get(keyName).(string))
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ }
+
+ if err := d.Set(keyArchivalLocationID, targetMapping.ID.String()); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set(keyConnectionStatus, targetMapping.ConnectionStatus); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set(keyContainerName, targetMapping.ContainerName); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set(keyCustomerManagedKey, toCustomerManagedKeys(targetMapping.CustomerKeys)); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set(keyLocationTemplate, targetMapping.LocTemplate); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set(keyName, targetMapping.Name); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set(keyRedundancy, targetMapping.Redundancy); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set(keyStorageAccountNamePrefix, targetMapping.StorageAccountName); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set(keyStorageAccountRegion, targetMapping.StorageAccountRegion); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set(keyStorageAccountTags, toStorageAccountTags(targetMapping.StorageAccountTags)); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set(keyStorageTier, targetMapping.StorageTier); err != nil {
+ return diag.FromErr(err)
+ }
+
+ d.SetId(targetMapping.ID.String())
+ return nil
+}
diff --git a/internal/provider/data_source_azure_permissions.go b/internal/provider/data_source_azure_permissions.go
index 513a16d..1660d80 100644
--- a/internal/provider/data_source_azure_permissions.go
+++ b/internal/provider/data_source_azure_permissions.go
@@ -25,9 +25,6 @@ import (
"crypto/sha256"
"fmt"
"log"
- "sort"
- "strconv"
- "time"
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
@@ -36,67 +33,197 @@ import (
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/graphql/core"
)
-// dataSourceAzurePermissions defines the schema for the Azure permissions data
-// source.
+const dataSourceAzurePermissionsDescription = `
+The ´polaris_azure_permissions´ data source is used to access information about
+the permissions required by RSC for a specified RSC feature. The features currently
+supported for Azure subscriptions are:
+ * ´AZURE_SQL_DB_PROTECTION´
+ * ´AZURE_SQL_MI_PROTECTION´
+ * ´CLOUD_NATIVE_ARCHIVAL´
+ * ´CLOUD_NATIVE_ARCHIVAL_ENCRYPTION´
+ * ´CLOUD_NATIVE_PROTECTION´
+ * ´EXOCOMPUTE´
+
+See the [subscription](../resources/azure_subscription) resource for more information
+on enabling features for an Azure subscription added to RSC.
+
+The ´polaris_azure_permissions´ data source can be used with the ´azurerm_role_definition´
+resource and the ´permissions´ field of the ´polaris_azure_subscription´ resource to
+automatically update the permissions of roles and notify RSC about the updated
+permissions.
+
+-> **Note:** To better fit the RSC Azure permission model, where each RSC feature has
+ two Azure roles, the ´features´ field has been deprecated and replaced with the
+ ´feature´ field.
+
+-> **Note:** Because the RSC Azure permission model has been refined into subscription
+ level permissions and resource group level permissions, the ´actions´, ´data_actions´,
+ ´not_actions´ and ´not_data_actions´ fields have been deprecated and replaced with the
+ corresponding subscription and resource group fields.
+
+-> **Note:** For backward compatibility, the ´features´ field allows the feature names
+ to be given in three different styles: ´EXAMPLE_FEATURE_NAME´, ´example-feature-name´ or
+ ´example_feature_name´. The recommended style is ´EXAMPLE_FEATURE_NAME´, since it is
+ the style used by the RSC API itself.
+`
+
func dataSourceAzurePermissions() *schema.Resource {
return &schema.Resource{
ReadContext: azurePermissionsRead,
+ Description: description(dataSourceAzurePermissionsDescription),
Schema: map[string]*schema.Schema{
- "actions": {
+ keyID: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "SHA-256 hash of the required permissions, updated whenever the required " +
+ "permissions change.",
+ },
+ keyActions: {
+ Type: schema.TypeList,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ },
+ Computed: true,
+ Description: "Azure allowed actions. **Deprecated:** use `subscription_actions` and " +
+ "`resource_group_actions` instead.",
+ Deprecated: "use `subscription_actions` and `resource_group_actions` instead.",
+ },
+ keyDataActions: {
+ Type: schema.TypeList,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ },
+ Computed: true,
+ Description: "Azure allowed data actions. **Deprecated:** use `subscription_data_actions` and " +
+ "`resource_group_data_actions` instead.",
+ Deprecated: "use `subscription_data_actions` and `resource_group_data_actions` instead.",
+ },
+ keyFeature: {
+ Type: schema.TypeString,
+ Optional: true,
+ ExactlyOneOf: []string{keyFeature, keyFeatures},
+ Description: "RSC feature. Note that the feature name must be given in the `EXAMPLE_FEATURE_NAME` " +
+ "style. Possible values are `AZURE_SQL_DB_PROTECTION`, `AZURE_SQL_MI_PROTECTION`, " +
+ "`CLOUD_NATIVE_ARCHIVAL`, `CLOUD_NATIVE_ARCHIVAL_ENCRYPTION`, `CLOUD_NATIVE_PROTECTION` and " +
+ "`EXOCOMPUTE`.",
+ ValidateFunc: validation.StringInSlice([]string{
+ "AZURE_SQL_DB_PROTECTION", "AZURE_SQL_MI_PROTECTION", "CLOUD_NATIVE_ARCHIVAL",
+ "CLOUD_NATIVE_ARCHIVAL_ENCRYPTION", "CLOUD_NATIVE_PROTECTION", "EXOCOMPUTE",
+ }, false),
+ },
+ keyFeatures: {
+ Type: schema.TypeSet,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ ValidateFunc: validation.StringInSlice([]string{
+ "AZURE_SQL_DB_PROTECTION", "AZURE_SQL_MI_PROTECTION", "CLOUD_NATIVE_ARCHIVAL",
+ "CLOUD_NATIVE_ARCHIVAL_ENCRYPTION", "CLOUD_NATIVE_PROTECTION", "EXOCOMPUTE",
+ }, false),
+ },
+ MinItems: 1,
+ Optional: true,
+ Description: "RSC features. Possible values are `AZURE_SQL_DB_PROTECTION`, " +
+ "`AZURE_SQL_MI_PROTECTION`, `CLOUD_NATIVE_ARCHIVAL`, `CLOUD_NATIVE_ARCHIVAL_ENCRYPTION`, " +
+ "`CLOUD_NATIVE_PROTECTION` and `EXOCOMPUTE`. **Deprecated:** use `feature` instead.",
+ Deprecated: "use `feature` instead.",
+ },
+ keyHash: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "SHA-256 hash of the permissions, can be used to detect changes to the permissions. " +
+ "**Deprecated:** use `id` instead.",
+ Deprecated: "use `id` instead.",
+ },
+ keyNotActions: {
+ Type: schema.TypeList,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ },
+ Computed: true,
+ Description: "Azure disallowed actions. **Deprecated:** use `subscription_not_actions` and " +
+ "`resource_group_not_actions` instead.",
+ Deprecated: "use `subscription_not_actions` and `resource_group_not_actions` instead.",
+ },
+ keyNotDataActions: {
+ Type: schema.TypeList,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ },
+ Computed: true,
+ Description: "Azure disallowed data actions. **Deprecated:** use `subscription_not_data_actions` and " +
+ "`resource_group_not_data_actions` instead.",
+ Deprecated: "use `subscription_not_data_actions` and `resource_group_not_data_actions` instead.",
+ },
+ keyResourceGroupActions: {
Type: schema.TypeList,
Elem: &schema.Schema{
Type: schema.TypeString,
},
Computed: true,
- Description: "Allowed actions.",
+ Description: "Azure allowed actions on the resource group level.",
},
- "data_actions": {
+ keyResourceGroupDataActions: {
Type: schema.TypeList,
Elem: &schema.Schema{
Type: schema.TypeString,
},
Computed: true,
- Description: "Allowed data actions.",
+ Description: "Azure allowed data actions on the resource group level.",
},
- "features": {
- Type: schema.TypeSet,
+ keyResourceGroupNotActions: {
+ Type: schema.TypeList,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ },
+ Computed: true,
+ Description: "Azure disallowed actions on the resource group level.",
+ },
+ keyResourceGroupNotDataActions: {
+ Type: schema.TypeList,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ },
+ Computed: true,
+ Description: "Azure disallowed data actions on the resource group level.",
+ },
+ keySubscriptionActions: {
+ Type: schema.TypeList,
Elem: &schema.Schema{
- Type: schema.TypeString,
- ValidateFunc: validation.StringIsNotWhiteSpace,
+ Type: schema.TypeString,
},
- MinItems: 1,
- Required: true,
- Description: "Enabled features.",
+ Computed: true,
+ Description: "Azure allowed actions on the subscription level.",
},
- "hash": {
- Type: schema.TypeString,
+ keySubscriptionDataActions: {
+ Type: schema.TypeList,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ },
Computed: true,
- Description: "SHA-256 hash of the permissions, can be used to detect changes to the permissions.",
+ Description: "Azure allowed data actions on the subscription level.",
},
- "not_actions": {
+ keySubscriptionNotActions: {
Type: schema.TypeList,
Elem: &schema.Schema{
Type: schema.TypeString,
},
Computed: true,
- Description: "Disallowed actions.",
+ Description: "Azure disallowed actions on the subscription level.",
},
- "not_data_actions": {
+ keySubscriptionNotDataActions: {
Type: schema.TypeList,
Elem: &schema.Schema{
Type: schema.TypeString,
},
Computed: true,
- Description: "Disallowed data actions.",
+ Description: "Azure disallowed data actions on the subscription level.",
},
},
}
}
-// azurePermissionsRead run the Read operation for the Azure permissions data
-// source. Reads the permissions required for the specified Polaris features.
-func azurePermissionsRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
+func azurePermissionsRead(ctx context.Context, d *schema.ResourceData, m any) diag.Diagnostics {
log.Print("[TRACE] azurePermissionsRead")
client, err := m.(*client).polaris()
@@ -104,70 +231,142 @@ func azurePermissionsRead(ctx context.Context, d *schema.ResourceData, m interfa
return diag.FromErr(err)
}
- // Read permissions required for the specified features.
- var features []core.Feature
- for _, f := range d.Get("features").(*schema.Set).List() {
- // The ParseFeature functions accepts different spellings of the
- // features and should not be used. However, we need to keep it for
- // backwards compatibility reasons.
- feature, err := core.ParseFeature(f.(string))
- if err != nil {
- return diag.FromErr(err)
+ // Check both feature and features.
+ var perms []azure.Permissions
+ var groups []azure.PermissionGroupWithVersion
+ if f := d.Get(keyFeature).(string); f != "" {
+ perms, groups, err = azure.Wrap(client).ScopedPermissions(ctx, core.Feature{Name: f})
+ } else {
+ var features []core.Feature
+ for _, f := range d.Get(keyFeatures).(*schema.Set).List() {
+ features = append(features, core.Feature{Name: f.(string)})
}
- features = append(features, feature)
+ perms, err = azure.Wrap(client).ScopedPermissionsForFeatures(ctx, features)
}
-
- perms, err := azure.Wrap(client).Permissions(ctx, features)
if err != nil {
return diag.FromErr(err)
}
- sort.Strings(perms.Actions)
- sort.Strings(perms.DataActions)
- sort.Strings(perms.NotActions)
- sort.Strings(perms.NotDataActions)
-
- // Format permissions according to the data source schema.
hash := sha256.New()
- var actions []interface{}
- for _, perm := range perms.Actions {
+ // Legacy scope. The legacy scope contains the union of the subscription
+ // and resource group scopes, so the hash value only needs to be updated
+ // here, which also keeps the hash backwards compatible.
+ var actions []any
+ for _, perm := range perms[azure.ScopeLegacy].Actions {
actions = append(actions, perm)
hash.Write([]byte(perm))
}
- if err := d.Set("actions", actions); err != nil {
+ if err := d.Set(keyActions, actions); err != nil {
return diag.FromErr(err)
}
- var dataActions []interface{}
- for _, perm := range perms.DataActions {
+ var dataActions []any
+ for _, perm := range perms[azure.ScopeLegacy].DataActions {
dataActions = append(dataActions, perm)
hash.Write([]byte(perm))
}
- if err := d.Set("data_actions", dataActions); err != nil {
+ if err := d.Set(keyDataActions, dataActions); err != nil {
return diag.FromErr(err)
}
- var notActions []interface{}
- for _, perm := range perms.NotActions {
+ var notActions []any
+ for _, perm := range perms[azure.ScopeLegacy].NotActions {
notActions = append(notActions, perm)
hash.Write([]byte(perm))
}
- if err := d.Set("not_actions", notActions); err != nil {
+ if err := d.Set(keyNotActions, notActions); err != nil {
return diag.FromErr(err)
}
- var notDataActions []interface{}
- for _, perm := range perms.NotDataActions {
+ var notDataActions []any
+ for _, perm := range perms[azure.ScopeLegacy].NotDataActions {
notDataActions = append(notDataActions, perm)
hash.Write([]byte(perm))
}
- if err := d.Set("not_data_actions", notDataActions); err != nil {
+ if err := d.Set(keyNotDataActions, notDataActions); err != nil {
+ return diag.FromErr(err)
+ }
+
+ // Subscription scope.
+ var subActions []any
+ for _, perm := range perms[azure.ScopeSubscription].Actions {
+ subActions = append(subActions, perm)
+ }
+ if err := d.Set(keySubscriptionActions, subActions); err != nil {
+ return diag.FromErr(err)
+ }
+
+ var subDataActions []any
+ for _, perm := range perms[azure.ScopeSubscription].DataActions {
+ subDataActions = append(subDataActions, perm)
+ }
+ if err := d.Set(keySubscriptionDataActions, subDataActions); err != nil {
+ return diag.FromErr(err)
+ }
+
+ var subNotActions []any
+ for _, perm := range perms[azure.ScopeSubscription].NotActions {
+ subNotActions = append(subNotActions, perm)
+ }
+ if err := d.Set(keySubscriptionNotActions, subNotActions); err != nil {
+ return diag.FromErr(err)
+ }
+
+ var subNotDataActions []any
+ for _, perm := range perms[azure.ScopeSubscription].NotDataActions {
+ subNotDataActions = append(subNotDataActions, perm)
+ }
+ if err := d.Set(keySubscriptionNotDataActions, subNotDataActions); err != nil {
+ return diag.FromErr(err)
+ }
+
+ // Resource group scope.
+ var rgActions []any
+ for _, perm := range perms[azure.ScopeResourceGroup].Actions {
+ rgActions = append(rgActions, perm)
+ }
+ if err := d.Set(keyResourceGroupActions, rgActions); err != nil {
+ return diag.FromErr(err)
+ }
+
+ var rgDataActions []any
+ for _, perm := range perms[azure.ScopeResourceGroup].DataActions {
+ rgDataActions = append(rgDataActions, perm)
+ }
+ if err := d.Set(keyResourceGroupDataActions, rgDataActions); err != nil {
return diag.FromErr(err)
}
- d.Set("hash", fmt.Sprintf("%x", hash.Sum(nil)))
+ var rgNotActions []any
+ for _, perm := range perms[azure.ScopeResourceGroup].NotActions {
+ rgNotActions = append(rgNotActions, perm)
+ }
+ if err := d.Set(keyResourceGroupNotActions, rgNotActions); err != nil {
+ return diag.FromErr(err)
+ }
+
+ var rgNotDataActions []any
+ for _, perm := range perms[azure.ScopeResourceGroup].NotDataActions {
+ rgNotDataActions = append(rgNotDataActions, perm)
+ }
+ if err := d.Set(keyResourceGroupNotDataActions, rgNotDataActions); err != nil {
+ return diag.FromErr(err)
+ }
+
+ // Hash permission groups. This generates a diff for subscriptions onboarded
+ // with the old onboarding workflow. Applying the diff fixes the backend
+ // state.
+ for _, group := range groups {
+ hash.Write([]byte(group.Name))
+ hash.Write([]byte(fmt.Sprintf("%d", group.Version)))
+ }
+
+ hashValue := fmt.Sprintf("%x", hash.Sum(nil))
+ if err := d.Set(keyHash, hashValue); err != nil {
+ return diag.FromErr(err)
+ }
- d.SetId(strconv.FormatInt(time.Now().Unix(), 10))
+ d.SetId(hashValue)
return nil
}
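The scoped permission fields introduced above are intended to feed two `azurerm_role_definition` resources, one per scope, as the deprecation notes in the description suggest. A hedged sketch of that wiring, assuming the standard `azurerm` provider schema; role names and scope variables are illustrative:

```hcl
data "polaris_azure_permissions" "exocompute" {
  feature = "EXOCOMPUTE"
}

resource "azurerm_role_definition" "subscription_level" {
  name  = "RSC Exocompute - Subscription"   # illustrative name
  scope = var.subscription_scope            # e.g. /subscriptions/<id>

  permissions {
    actions          = data.polaris_azure_permissions.exocompute.subscription_actions
    data_actions     = data.polaris_azure_permissions.exocompute.subscription_data_actions
    not_actions      = data.polaris_azure_permissions.exocompute.subscription_not_actions
    not_data_actions = data.polaris_azure_permissions.exocompute.subscription_not_data_actions
  }
}

resource "azurerm_role_definition" "resource_group_level" {
  name  = "RSC Exocompute - Resource Group" # illustrative name
  scope = var.resource_group_scope          # e.g. .../resourceGroups/<name>

  permissions {
    actions          = data.polaris_azure_permissions.exocompute.resource_group_actions
    data_actions     = data.polaris_azure_permissions.exocompute.resource_group_data_actions
    not_actions      = data.polaris_azure_permissions.exocompute.resource_group_not_actions
    not_data_actions = data.polaris_azure_permissions.exocompute.resource_group_not_data_actions
  }
}
```

Since the data source's `id` is a hash of the required permissions and permission groups, referencing it from the role resources causes Terraform to plan an update whenever RSC's required permissions change.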
diff --git a/internal/provider/data_source_azure_subscription.go b/internal/provider/data_source_azure_subscription.go
new file mode 100644
index 0000000..50186ce
--- /dev/null
+++ b/internal/provider/data_source_azure_subscription.go
@@ -0,0 +1,122 @@
+// Copyright 2024 Rubrik, Inc.
+//
+// Permission is hereby granted, free of charge, to any person obtaining a copy
+// of this software and associated documentation files (the "Software"), to
+// deal in the Software without restriction, including without limitation the
+// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+// sell copies of the Software, and to permit persons to whom the Software is
+// furnished to do so, subject to the following conditions:
+//
+// The above copyright notice and this permission notice shall be included in
+// all copies or substantial portions of the Software.
+//
+// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+// DEALINGS IN THE SOFTWARE.
+
+package provider
+
+import (
+ "context"
+ "log"
+
+ "github.com/google/uuid"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
+ "github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/azure"
+ "github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/graphql/core"
+)
+
+const dataSourceAzureSubscriptionDescription = `
+The ´polaris_azure_subscription´ data source is used to access information about an
+Azure subscription added to RSC. An Azure subscription is looked up using either the
+Azure subscription ID or the name. When looking up an Azure subscription using the
+subscription name, the tenant domain can be used to specify in which tenant to look
+for the name.
+
+-> **Note:** The subscription name is the name of the Azure subscription as it appears
+ in RSC.
+`
+
+func dataSourceAzureSubscription() *schema.Resource {
+ return &schema.Resource{
+ ReadContext: azureSubscriptionRead,
+
+ Description: description(dataSourceAzureSubscriptionDescription),
+ Schema: map[string]*schema.Schema{
+ keyID: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "RSC cloud account ID (UUID).",
+ },
+ keySubscriptionID: {
+ Type: schema.TypeString,
+ Optional: true,
+ ExactlyOneOf: []string{keySubscriptionID, keyName},
+ Description: "Azure subscription ID.",
+ ValidateFunc: validation.IsUUID,
+ },
+ keyName: {
+ Type: schema.TypeString,
+ Optional: true,
+ ExactlyOneOf: []string{keySubscriptionID, keyName},
+ Description: "Azure subscription name.",
+ ValidateFunc: validation.StringIsNotEmpty,
+ },
+ keyTenantDomain: {
+ Type: schema.TypeString,
+ Optional: true,
+ Description: "Azure tenant primary domain.",
+ ValidateFunc: validation.StringIsNotEmpty,
+ },
+ },
+ }
+}
+
+func azureSubscriptionRead(ctx context.Context, d *schema.ResourceData, m any) diag.Diagnostics {
+ log.Print("[TRACE] azureSubscriptionRead")
+
+ client, err := m.(*client).polaris()
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ // Read the Azure subscription using either the ID or the name. We don't
+ // allow prefix searches since it would be impossible to uniquely identify
+ // a subscription whose name is a prefix of another subscription's name.
+ var subscription azure.CloudAccount
+ if subscriptionID := d.Get(keySubscriptionID).(string); subscriptionID != "" {
+ id, err := uuid.Parse(subscriptionID)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ subscription, err = azure.Wrap(client).SubscriptionByNativeID(ctx, core.FeatureAll, id)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ } else {
+ subscription, err = azure.Wrap(client).SubscriptionByName(ctx, core.FeatureAll, d.Get(keyName).(string),
+ d.Get(keyTenantDomain).(string))
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ }
+
+ if err := d.Set(keySubscriptionID, subscription.NativeID.String()); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set(keyName, subscription.Name); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set(keyTenantDomain, subscription.TenantDomain); err != nil {
+ return diag.FromErr(err)
+ }
+
+ d.SetId(subscription.ID.String())
+ return nil
+}
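The new data source above looks up a subscription by `subscription_id` or by `name`, optionally scoped by `tenant_domain`. A minimal usage sketch; the subscription name and tenant domain are illustrative:

```hcl
data "polaris_azure_subscription" "production" {
  name          = "production"              # subscription name as it appears in RSC
  tenant_domain = "example.onmicrosoft.com" # optional, disambiguates tenants
}

output "cloud_account_id" {
  value = data.polaris_azure_subscription.production.id
}
```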
diff --git a/internal/provider/data_source_deployment.go b/internal/provider/data_source_deployment.go
index 87ca709..18412cb 100644
--- a/internal/provider/data_source_deployment.go
+++ b/internal/provider/data_source_deployment.go
@@ -31,13 +31,23 @@ import (
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/graphql/core"
)
-// dataSourceDeployment defines the schema for the RSC deployment data source.
+const dataSourceDeploymentDescription = `
+The ´polaris_deployment´ data source is used to access information about the RSC
+deployment.
+`
+
func dataSourceDeployment() *schema.Resource {
return &schema.Resource{
ReadContext: deploymentRead,
+ Description: description(dataSourceDeploymentDescription),
Schema: map[string]*schema.Schema{
- "ip_addresses": {
+ keyID: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "SHA-256 hash of the fields in order.",
+ },
+ keyIPAddresses: {
Type: schema.TypeSet,
Elem: &schema.Schema{
Type: schema.TypeString,
@@ -45,7 +55,7 @@ func dataSourceDeployment() *schema.Resource {
Computed: true,
Description: "Deployment IP addresses.",
},
- "version": {
+ keyVersion: {
Type: schema.TypeString,
Computed: true,
Description: "Deployment version.",
@@ -54,8 +64,6 @@ func dataSourceDeployment() *schema.Resource {
}
}
-// deploymentRead run the Read operation for the deployment data source. Returns
-// details about the RSC deployment.
func deploymentRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
log.Print("[TRACE] deploymentRead")
@@ -64,7 +72,6 @@ func deploymentRead(ctx context.Context, d *schema.ResourceData, m interface{})
return diag.FromErr(err)
}
- // Request deployment details.
ipAddresses, err := core.Wrap(client.GQL).DeploymentIPAddresses(ctx)
if err != nil {
return diag.FromErr(err)
@@ -74,19 +81,17 @@ func deploymentRead(ctx context.Context, d *schema.ResourceData, m interface{})
return diag.FromErr(err)
}
- // Set attributes.
ipAddressesAttr := &schema.Set{F: schema.HashString}
for _, ipAddress := range ipAddresses {
ipAddressesAttr.Add(ipAddress)
}
- if err := d.Set("ip_addresses", ipAddressesAttr); err != nil {
+ if err := d.Set(keyIPAddresses, ipAddressesAttr); err != nil {
return diag.FromErr(err)
}
- if err := d.Set("version", version); err != nil {
+ if err := d.Set(keyVersion, version); err != nil {
return diag.FromErr(err)
}
- // Generate an ID for the data source.
hash := sha256.New()
for _, ipAddress := range ipAddresses {
hash.Write([]byte(ipAddress))
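The deployment data source keys its ID to a SHA-256 hash of the IP addresses and version, so the ID changes whenever the deployment details change. A minimal usage sketch, assuming the provider is already configured; output names are illustrative:

```hcl
data "polaris_deployment" "current" {}

output "rsc_ip_addresses" {
  value = data.polaris_deployment.current.ip_addresses
}

output "rsc_version" {
  value = data.polaris_deployment.current.version
}
```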
diff --git a/internal/provider/data_source_features.go b/internal/provider/data_source_features.go
index f7ce72d..94675c1 100644
--- a/internal/provider/data_source_features.go
+++ b/internal/provider/data_source_features.go
@@ -33,26 +33,38 @@ import (
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/graphql/core"
)
-// dataSourceFeatures defines the schema for the RSC features data source.
+const dataSourceFeaturesDescription = `
+The ´polaris_features´ data source is used to access information about features enabled
+for an RSC account.
+
+!> **WARNING:** This data source is deprecated and will be removed in a future version.
+ Use the ´features´ field of the ´polaris_account´ data source instead.
+`
+
func dataSourceFeatures() *schema.Resource {
return &schema.Resource{
ReadContext: featuresRead,
+ Description: description(dataSourceFeaturesDescription),
Schema: map[string]*schema.Schema{
- "features": {
+ keyID: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "SHA-256 hash of the fields in order.",
+ },
+ keyFeatures: {
Type: schema.TypeList,
Elem: &schema.Schema{
Type: schema.TypeString,
},
Computed: true,
- Description: "Enabled features.",
+ Description: "Features enabled for the RSC account.",
},
},
+ DeprecationMessage: "use the `features` field of the `polaris_account` data source instead.",
}
}
-// featuresRead run the Read operation for the RSC features data source. Returns
-// all RSC features enabled for the current RSC account.
func featuresRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
log.Print("[TRACE] featuresRead")
@@ -61,7 +73,6 @@ func featuresRead(ctx context.Context, d *schema.ResourceData, m interface{}) di
return diag.FromErr(err)
}
- // Request features.
features, err := core.Wrap(client.GQL).EnabledFeaturesForAccount(ctx)
if err != nil {
return diag.FromErr(err)
@@ -70,16 +81,14 @@ func featuresRead(ctx context.Context, d *schema.ResourceData, m interface{}) di
return cmp.Compare(lhs.Name, rhs.Name)
})
- // Set attributes.
var featuresAttr []string
for _, feature := range features {
featuresAttr = append(featuresAttr, feature.Name)
}
- if err := d.Set("features", featuresAttr); err != nil {
+ if err := d.Set(keyFeatures, featuresAttr); err != nil {
return diag.FromErr(err)
}
- // Generate an ID for the data source.
hash := sha256.New()
for _, feature := range features {
hash.Write([]byte(feature.Name))
diff --git a/internal/provider/data_source_gcp_permissions.go b/internal/provider/data_source_gcp_permissions.go
index 7409413..eb0fccb 100644
--- a/internal/provider/data_source_gcp_permissions.go
+++ b/internal/provider/data_source_gcp_permissions.go
@@ -83,15 +83,7 @@ func gcpPermissionsRead(ctx context.Context, d *schema.ResourceData, m interface
// Read permissions required for the specified features.
var features []core.Feature
for _, f := range d.Get("features").(*schema.Set).List() {
- // The ParseFeature functions accepts different spellings of the
- // features and should not be used. However, we need to keep it for
- // backwards compatibility reasons.
- feature, err := core.ParseFeature(f.(string))
- if err != nil {
- return diag.FromErr(err)
- }
-
- features = append(features, feature)
+ features = append(features, core.ParseFeatureNoValidation(f.(string)))
}
perms, err := gcp.Wrap(client).Permissions(ctx, features)
diff --git a/internal/provider/data_source_role.go b/internal/provider/data_source_role.go
index aafe5be..887c92a 100644
--- a/internal/provider/data_source_role.go
+++ b/internal/provider/data_source_role.go
@@ -30,37 +30,49 @@ import (
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/access"
)
-// dataSourceRole defines the schema for the role data source.
+const dataSourceRoleDescription = `
+The ´polaris_role´ data source is used to access information about RSC roles.
+`
+
+// This data source uses a template for its documentation due to a bug in the TF
+// docs generator. Remember to update the template if the documentation for any
+// field is changed.
func dataSourceRole() *schema.Resource {
return &schema.Resource{
ReadContext: roleRead,
+ Description: description(dataSourceRoleDescription),
Schema: map[string]*schema.Schema{
- "description": {
+ keyID: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "Role ID (UUID).",
+ },
+ keyDescription: {
Type: schema.TypeString,
Computed: true,
Description: "Role description.",
},
- "is_org_admin": {
+ keyIsOrgAdmin: {
Type: schema.TypeBool,
Computed: true,
Description: "True if the role is the organization administrator.",
},
- "name": {
+ keyName: {
Type: schema.TypeString,
Required: true,
Description: "Role name.",
ValidateFunc: validation.StringIsNotWhiteSpace,
},
- "permission": {
+ keyPermission: {
Type: schema.TypeSet,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
- "hierarchy": {
+ keyHierarchy: {
Type: schema.TypeSet,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
- "object_ids": {
+ keyObjectIDs: {
Type: schema.TypeSet,
Elem: &schema.Schema{
Type: schema.TypeString,
@@ -68,7 +80,7 @@ func dataSourceRole() *schema.Resource {
Computed: true,
Description: "Object/workload identifiers.",
},
- "snappable_type": {
+ keySnappableType: {
Type: schema.TypeString,
Computed: true,
Description: "Snappable/workload type.",
@@ -78,10 +90,10 @@ func dataSourceRole() *schema.Resource {
Computed: true,
Description: "Snappable hierarchy.",
},
- "operation": {
+ keyOperation: {
Type: schema.TypeString,
Computed: true,
- Description: "Operation allowed on object ids under the snappable hierarchy.",
+ Description: "Operation allowed on object IDs under the snappable hierarchy.",
},
},
},
@@ -92,7 +104,6 @@ func dataSourceRole() *schema.Resource {
}
}
-// roleRead run the Read operation for the role data source.
func roleRead(ctx context.Context, d *schema.ResourceData, m any) diag.Diagnostics {
log.Print("[TRACE] roleRead")
@@ -101,19 +112,21 @@ func roleRead(ctx context.Context, d *schema.ResourceData, m any) diag.Diagnosti
return diag.FromErr(err)
}
- name := d.Get("name").(string)
- role, err := access.Wrap(client).RoleByName(ctx, name)
+ role, err := access.Wrap(client).RoleByName(ctx, d.Get(keyName).(string))
if err != nil {
return diag.FromErr(err)
}
- if err := d.Set("description", role.Description); err != nil {
+ if err := d.Set(keyDescription, role.Description); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set(keyIsOrgAdmin, role.IsOrgAdmin); err != nil {
return diag.FromErr(err)
}
- if err := d.Set("is_org_admin", role.IsOrgAdmin); err != nil {
+ if err := d.Set(keyName, role.Name); err != nil {
return diag.FromErr(err)
}
- if err := d.Set("permission", fromPermissions(role.AssignedPermissions)); err != nil {
+ if err := d.Set(keyPermission, fromPermissions(role.AssignedPermissions)); err != nil {
return diag.FromErr(err)
}
diff --git a/internal/provider/data_source_role_template.go b/internal/provider/data_source_role_template.go
index 04c2f9a..7ac1680 100644
--- a/internal/provider/data_source_role_template.go
+++ b/internal/provider/data_source_role_template.go
@@ -30,32 +30,45 @@ import (
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/access"
)
-// dataSourceRoleTemplate defines the schema for the role template data source.
+const dataSourceRoleTemplateDescription = `
+The ´polaris_role_template´ data source is used to access information about RSC role
+templates.
+`
+
+// This data source uses a template for its documentation due to a bug in the TF
+// docs generator. Remember to update the template if the documentation for any
+// field is changed.
func dataSourceRoleTemplate() *schema.Resource {
return &schema.Resource{
ReadContext: roleTemplateRead,
+ Description: description(dataSourceRoleTemplateDescription),
Schema: map[string]*schema.Schema{
- "description": {
+ keyID: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "Role template ID (UUID).",
+ },
+ keyDescription: {
Type: schema.TypeString,
Computed: true,
- Description: "Role description.",
+ Description: "Role template description.",
},
- "name": {
+ keyName: {
Type: schema.TypeString,
Required: true,
- Description: "Role name.",
+ Description: "Role template name.",
ValidateFunc: validation.StringIsNotWhiteSpace,
},
- "permission": {
+ keyPermission: {
Type: schema.TypeSet,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
- "hierarchy": {
+ keyHierarchy: {
Type: schema.TypeSet,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
- "object_ids": {
+ keyObjectIDs: {
Type: schema.TypeSet,
Elem: &schema.Schema{
Type: schema.TypeString,
@@ -63,7 +76,7 @@ func dataSourceRoleTemplate() *schema.Resource {
Computed: true,
Description: "Object/workload identifiers.",
},
- "snappable_type": {
+ keySnappableType: {
Type: schema.TypeString,
Computed: true,
Description: "Snappable/workload type.",
@@ -73,10 +86,10 @@ func dataSourceRoleTemplate() *schema.Resource {
Computed: true,
Description: "Snappable hierarchy.",
},
- "operation": {
+ keyOperation: {
Type: schema.TypeString,
Computed: true,
- Description: "Operation allowed on object ids under the snappable hierarchy.",
+ Description: "Operation allowed on object IDs under the snappable hierarchy.",
},
},
},
@@ -87,7 +100,6 @@ func dataSourceRoleTemplate() *schema.Resource {
}
}
-// roleTemplateRead run the Read operation for the role template data source.
func roleTemplateRead(ctx context.Context, d *schema.ResourceData, m any) diag.Diagnostics {
log.Print("[TRACE] roleTemplateRead")
@@ -96,16 +108,18 @@ func roleTemplateRead(ctx context.Context, d *schema.ResourceData, m any) diag.D
return diag.FromErr(err)
}
- name := d.Get("name").(string)
- roleTemplate, err := access.Wrap(client).RoleTemplateByName(ctx, name)
+ roleTemplate, err := access.Wrap(client).RoleTemplateByName(ctx, d.Get(keyName).(string))
if err != nil {
return diag.FromErr(err)
}
- if err := d.Set("description", roleTemplate.Description); err != nil {
+ if err := d.Set(keyDescription, roleTemplate.Description); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set(keyName, roleTemplate.Name); err != nil {
return diag.FromErr(err)
}
- if err := d.Set("permission", fromPermissions(roleTemplate.AssignedPermissions)); err != nil {
+ if err := d.Set(keyPermission, fromPermissions(roleTemplate.AssignedPermissions)); err != nil {
return diag.FromErr(err)
}
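The role and role template data sources above are looked up by name and expose the assigned permissions. A minimal usage sketch; the role name is illustrative:

```hcl
data "polaris_role" "auditor" {
  name = "Auditor" # role name as it appears in RSC
}

output "auditor_role_id" {
  value = data.polaris_role.auditor.id
}
```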
diff --git a/internal/provider/names.go b/internal/provider/names.go
new file mode 100644
index 0000000..9f32d48
--- /dev/null
+++ b/internal/provider/names.go
@@ -0,0 +1,137 @@
+package provider
+
+const (
+ keyAccountID = "account_id"
+ keyActions = "actions"
+ keyAppID = "app_id"
+ keyAppName = "app_name"
+ keyAppSecret = "app_secret"
+ keyArchivalLocationID = "archival_location_id"
+ keyARN = "arn"
+ keyAssumeRole = "assume_role"
+ keyBucketPrefix = "bucket_prefix"
+ keyBucketTags = "bucket_tags"
+ keyCredentials = "credentials"
+ keyCloud = "cloud"
+ keyCloudAccountID = "cloud_account_id"
+ keyCloudNativeArchival = "cloud_native_archival"
+ keyCloudNativeArchivalEncryption = "cloud_native_archival_encryption"
+ keyCloudNativeProtection = "cloud_native_protection"
+ keyClusterName = "cluster_name"
+ keyClusterSecurityGroupID = "cluster_security_group_id"
+ keyConnectionCommand = "connection_command"
+ keyConnectionStatus = "connection_status"
+ keyContainerName = "container_name"
+ keyCustomerManagedKey = "customer_managed_key"
+ keyCustomerManagedPolicies = "customer_managed_policies"
+ keyDataActions = "data_actions"
+ keyDeleteSnapshotsOnDestroy = "delete_snapshots_on_destroy"
+ keyDescription = "description"
+ keyEC2RecoveryRolePath = "ec2_recovery_role_path"
+ keyEmail = "email"
+ keyExocompute = "exocompute"
+ keyExocomputeID = "exocompute_id"
+ keyExternalID = "external_id"
+ keyFeature = "feature"
+ keyFeatures = "features"
+ keyFQDN = "fqdn"
+ keyHash = "hash"
+ keyHierarchy = "hierarchy"
+ keyHostAccountID = "host_account_id"
+ keyHostCloudAccountID = "host_cloud_account_id"
+ keyID = "id"
+ keyInstanceProfile = "instance_profile"
+ keyInstanceProfileKeys = "instance_profile_keys"
+ keyIPAddresses = "ip_addresses"
+ keyIsAccountOwner = "is_account_owner"
+ keyIsOrgAdmin = "is_org_admin"
+ keyKey = "key"
+ keyKMSMasterKey = "kms_master_key"
+ keyLocationTemplate = "location_template"
+ keyManagedPolicies = "managed_policies"
+ keyName = "name"
+ keyNativeID = "native_id"
+ keyNodeSecurityGroupID = "node_security_group_id"
+ keyNotActions = "not_actions"
+ keyNotDataActions = "not_data_actions"
+ keyObjectIDs = "object_ids"
+ keyOperation = "operation"
+ keyPermission = "permission"
+ keyPermissionGroups = "permission_groups"
+ keyPermissions = "permissions"
+ keyPermissionsHash = "permissions_hash"
+ keyPodOverlayNetworkCIDR = "pod_overlay_network_cidr"
+ keyPolarisAccount = "polaris_account"
+ keyPolarisAWSAccount = "polaris_aws_account"
+ keyPolarisAWSArchivalLocation = "polaris_aws_archival_location"
+ keyPolarisAWSCNPAccount = "polaris_aws_cnp_account"
+ keyPolarisAWSCNPAccountAttachments = "polaris_aws_cnp_account_attachments"
+ keyPolarisAWSCNPAccountTrustPolicy = "polaris_aws_cnp_account_trust_policy"
+ keyPolarisAWSCNPArtifacts = "polaris_aws_cnp_artifacts"
+ keyPolarisAWSCNPPermissions = "polaris_aws_cnp_permissions"
+ keyPolarisAWSExocompute = "polaris_aws_exocompute"
+ keyPolarisAWSExocomputeClusterAttachment = "polaris_aws_exocompute_cluster_attachment"
+ keyPolarisAWSPrivateContainerRegistry = "polaris_aws_private_container_registry"
+ keyPolarisAzureArchivalLocation = "polaris_azure_archival_location"
+ keyPolarisAzureExocompute = "polaris_azure_exocompute"
+ keyPolarisAzurePermissions = "polaris_azure_permissions"
+ keyPolarisAzureServicePrincipal = "polaris_azure_service_principal"
+ keyPolarisAzureSubscription = "polaris_azure_subscription"
+ keyPolarisCustomRole = "polaris_custom_role"
+ keyPolarisDeployment = "polaris_deployment"
+ keyPolarisFeatures = "polaris_features"
+ keyPolarisManaged = "polaris_managed"
+ keyPolarisRole = "polaris_role"
+ keyPolarisRoleAssignment = "polaris_role_assignment"
+ keyPolarisRoleTemplate = "polaris_role_template"
+ keyPolarisUser = "polaris_user"
+ keyPolicy = "policy"
+ keyProfile = "profile"
+ keyRedundancy = "redundancy"
+ keyRegion = "region"
+ keyRegions = "regions"
+ keyResourceGroupActions = "resource_group_actions"
+ keyResourceGroupDataActions = "resource_group_data_actions"
+ keyResourceGroupName = "resource_group_name"
+ keyResourceGroupNotActions = "resource_group_not_actions"
+ keyResourceGroupNotDataActions = "resource_group_not_data_actions"
+ keyResourceGroupRegion = "resource_group_region"
+ keyResourceGroupTags = "resource_group_tags"
+ keyRole = "role"
+ keyRoleID = "role_id"
+ keyRoleIDs = "role_ids"
+ keyRoleKey = "role_key"
+ keyRoleKeys = "role_keys"
+ keySDKAuth = "sdk_auth"
+ keySetupYAML = "setup_yaml"
+ keySnappableType = "snappable_type"
+ keySQLDBProtection = "sql_db_protection"
+ keySQLMIProtection = "sql_mi_protection"
+ keyStackARN = "stack_arn"
+ keyStatus = "status"
+ keyStorageAccountNamePrefix = "storage_account_name_prefix"
+ keyStorageAccountRegion = "storage_account_region"
+ keyStorageAccountTags = "storage_account_tags"
+ keyStorageClass = "storage_class"
+ keyStorageTier = "storage_tier"
+ keySubnet = "subnet"
+ keySubnets = "subnets"
+ keySubscriptionActions = "subscription_actions"
+ keySubscriptionDataActions = "subscription_data_actions"
+ keySubscriptionID = "subscription_id"
+ keySubscriptionName = "subscription_name"
+ keySubscriptionNotActions = "subscription_not_actions"
+ keySubscriptionNotDataActions = "subscription_not_data_actions"
+ keyTenantDomain = "tenant_domain"
+ keyTenantID = "tenant_id"
+ keyTokenRefresh = "token_refresh"
+ keyURL = "url"
+ keyUserAssignedManagedIdentityName = "user_assigned_managed_identity_name"
+ keyUserAssignedManagedIdentityPrincipalID = "user_assigned_managed_identity_principal_id"
+ keyUserAssignedManagedIdentityRegion = "user_assigned_managed_identity_region"
+ keyUserAssignedManagedIdentityResourceGroupName = "user_assigned_managed_identity_resource_group_name"
+ keyUserEmail = "user_email"
+ keyVaultName = "vault_name"
+ keyVersion = "version"
+ keyVPCID = "vpc_id"
+)
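Centralizing schema key names in one const block, as names.go does, trades string literals for identifiers the compiler can check. A minimal standalone sketch of the benefit (the plain map here stands in for a Terraform resource schema; it is illustrative, not the SDK's API):

```go
package main

import "fmt"

// keyCredentials mirrors one of the constants above. A misspelled string
// literal such as "credentails" compiles and fails only at runtime, while a
// misspelled identifier such as keyCredentails fails to compile.
const keyCredentials = "credentials"

func main() {
	// A stand-in for a Terraform resource schema keyed by field name.
	schema := map[string]string{keyCredentials: "service account file"}
	fmt.Println(schema[keyCredentials])
	// → service account file
}
```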
diff --git a/internal/provider/provider.go b/internal/provider/provider.go
index 8c0ace5..e9605ec 100644
--- a/internal/provider/provider.go
+++ b/internal/provider/provider.go
@@ -23,13 +23,9 @@ package provider
import (
"context"
"errors"
- "fmt"
- "io/fs"
- "net/mail"
"os"
- "time"
+ "strings"
- "github.com/hashicorp/go-cty/cty"
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
@@ -40,11 +36,15 @@ import (
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/log"
)
+const (
+ appCloudAccountPrefix = "app-"
+)
+
// Provider defines the schema and resource map for the RSC provider.
func Provider() *schema.Provider {
return &schema.Provider{
Schema: map[string]*schema.Schema{
- "credentials": {
+ keyCredentials: {
Type: schema.TypeString,
Optional: true,
Description: "The service account file or local user account name to use when accessing RSC.",
@@ -53,37 +53,42 @@ func Provider() *schema.Provider {
},
ResourcesMap: map[string]*schema.Resource{
- "polaris_aws_account": resourceAwsAccount(),
- "polaris_aws_archival_location": resourceAwsArchivalLocation(),
- "polaris_aws_cnp_account": resourceAwsCnpAccount(),
- "polaris_aws_cnp_account_attachments": resourceAwsCnpAccountAttachments(),
- "polaris_aws_cnp_account_trust_policy": resourceAwsCnpAccountTrustPolicy(),
- "polaris_aws_exocompute": resourceAwsExocompute(),
- "polaris_aws_exocompute_cluster_attachment": resourceAwsExocomputeClusterAttachment(),
- "polaris_aws_private_container_registry": resourceAwsPrivateContainerRegistry(),
- "polaris_azure_exocompute": resourceAzureExocompute(),
- "polaris_azure_service_principal": resourceAzureServicePrincipal(),
- "polaris_azure_subscription": resourceAzureSubscription(),
- "polaris_cdm_bootstrap": resourceCDMBootstrap(),
- "polaris_cdm_bootstrap_cces_aws": resourceCDMBootstrapCCESAWS(),
- "polaris_cdm_bootstrap_cces_azure": resourceCDMBootstrapCCESAzure(),
- "polaris_custom_role": resourceCustomRole(),
- "polaris_gcp_project": resourceGcpProject(),
- "polaris_gcp_service_account": resourceGcpServiceAccount(),
- "polaris_role_assignment": resourceRoleAssignment(),
- "polaris_user": resourceUser(),
+ keyPolarisAWSAccount: resourceAwsAccount(),
+ keyPolarisAWSArchivalLocation: resourceAwsArchivalLocation(),
+ keyPolarisAWSCNPAccount: resourceAwsCnpAccount(),
+ keyPolarisAWSCNPAccountAttachments: resourceAwsCnpAccountAttachments(),
+ keyPolarisAWSCNPAccountTrustPolicy: resourceAwsCnpAccountTrustPolicy(),
+ keyPolarisAWSExocompute: resourceAwsExocompute(),
+ keyPolarisAWSExocomputeClusterAttachment: resourceAwsExocomputeClusterAttachment(),
+ keyPolarisAWSPrivateContainerRegistry: resourceAwsPrivateContainerRegistry(),
+ keyPolarisAzureArchivalLocation: resourceAzureArchivalLocation(),
+ keyPolarisAzureExocompute: resourceAzureExocompute(),
+ keyPolarisAzureServicePrincipal: resourceAzureServicePrincipal(),
+ keyPolarisAzureSubscription: resourceAzureSubscription(),
+ "polaris_cdm_bootstrap": resourceCDMBootstrap(),
+ "polaris_cdm_bootstrap_cces_aws": resourceCDMBootstrapCCESAWS(),
+ "polaris_cdm_bootstrap_cces_azure": resourceCDMBootstrapCCESAzure(),
+ keyPolarisCustomRole: resourceCustomRole(),
+ "polaris_gcp_project": resourceGcpProject(),
+ "polaris_gcp_service_account": resourceGcpServiceAccount(),
+ keyPolarisRoleAssignment: resourceRoleAssignment(),
+ keyPolarisUser: resourceUser(),
},
DataSourcesMap: map[string]*schema.Resource{
- "polaris_aws_archival_location": dataSourceAwsArchivalLocation(),
- "polaris_aws_cnp_artifacts": dataSourceAwsArtifacts(),
- "polaris_aws_cnp_permissions": dataSourceAwsPermissions(),
- "polaris_azure_permissions": dataSourceAzurePermissions(),
- "polaris_deployment": dataSourceDeployment(),
- "polaris_features": dataSourceFeatures(),
+ keyPolarisAccount: dataSourceAccount(),
+ keyPolarisAWSAccount: dataSourceAwsAccount(),
+ keyPolarisAWSArchivalLocation: dataSourceAwsArchivalLocation(),
+ keyPolarisAWSCNPArtifacts: dataSourceAwsArtifacts(),
+ keyPolarisAWSCNPPermissions: dataSourceAwsPermissions(),
+ keyPolarisAzureArchivalLocation: dataSourceAzureArchivalLocation(),
+ keyPolarisAzurePermissions: dataSourceAzurePermissions(),
+ keyPolarisAzureSubscription: dataSourceAzureSubscription(),
+ keyPolarisDeployment: dataSourceDeployment(),
+ keyPolarisFeatures: dataSourceFeatures(),
"polaris_gcp_permissions": dataSourceGcpPermissions(),
- "polaris_role": dataSourceRole(),
- "polaris_role_template": dataSourceRoleTemplate(),
+ keyPolarisRole: dataSourceRole(),
+ keyPolarisRoleTemplate: dataSourceRoleTemplate(),
},
ConfigureContextFunc: providerConfigure,
@@ -123,10 +128,10 @@ func providerConfigure(ctx context.Context, d *schema.ResourceData) (any, diag.D
}
var account polaris.Account
- if c, ok := d.GetOk("credentials"); ok {
+ if c, ok := d.GetOk(keyCredentials); ok {
credentials := c.(string)
- // When credentials refer to an existing file we load the file as a
+ // When credentials refer to an existing file, we load the file as a
// service account, otherwise we assume that it's a user account name.
if _, err := os.Stat(credentials); err == nil {
if account, err = polaris.ServiceAccountFromFile(credentials, true); err != nil {
@@ -144,7 +149,7 @@ func providerConfigure(ctx context.Context, d *schema.ResourceData) (any, diag.D
return nil, diag.FromErr(err)
}
- // Make sure interface value is an untyped nil, see SA4023 for
+ // Make sure the interface value is an untyped nil, see SA4023 for
// details.
account = nil
}
@@ -160,54 +165,8 @@ func providerConfigure(ctx context.Context, d *schema.ResourceData) (any, diag.D
return client, nil
}
-// validateDuration verifies that i contains a valid duration.
-func validateDuration(i interface{}, k string) ([]string, []error) {
- v, ok := i.(string)
- if !ok {
- return nil, []error{fmt.Errorf("expected type of %q to be string", k)}
- }
- if _, err := time.ParseDuration(v); err != nil {
- return nil, []error{fmt.Errorf("%q is not a valid duration", v)}
- }
-
- return nil, nil
-}
-
-// validateEmailAddress verifies that i contains a valid email address.
-func validateEmailAddress(i interface{}, k string) ([]string, []error) {
- v, ok := i.(string)
- if !ok {
- return nil, []error{fmt.Errorf("expected type of %q to be string", k)}
- }
- if _, err := mail.ParseAddress(v); err != nil {
- return nil, []error{fmt.Errorf("%q is not a valid email address", v)}
- }
-
- return nil, nil
-}
-
-// fileExists assumes m is a file path and returns nil if the file exists,
-// otherwise a diagnostic message is returned.
-func fileExists(m interface{}, p cty.Path) diag.Diagnostics {
- if _, err := os.Stat(m.(string)); err != nil {
- details := "unknown error"
-
- var pathErr *fs.PathError
- if errors.As(err, &pathErr) {
- details = pathErr.Err.Error()
- }
-
- return diag.Errorf("failed to access file: %s", details)
- }
-
- return nil
-}
-
-// validateHash verifies that m contains a valid SHA-256 hash.
-func validateHash(m interface{}, p cty.Path) diag.Diagnostics {
- if hash, ok := m.(string); ok && len(hash) == 64 {
- return nil
- }
-
- return diag.Errorf("invalid hash value")
+// description returns the description string with all acute accents replaced
+// with grave accents (backticks).
+func description(description string) string {
+ return strings.ReplaceAll(description, "´", "`")
}
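The new `description` helper exists because Go raw string literals are delimited by backticks, so a backtick cannot appear inside one; the multi-line description constants are written with acute accents instead and converted before being handed to the Terraform SDK. A standalone sketch of the same substitution:

```go
package main

import (
	"fmt"
	"strings"
)

// description replaces all acute accents with backticks, matching the
// provider helper: raw string literals cannot contain backticks, so the
// schema descriptions use ´ and are converted at registration time.
func description(s string) string {
	return strings.ReplaceAll(s, "´", "`")
}

func main() {
	const doc = `The ´polaris_aws_account´ resource adds an AWS account to RSC.`
	fmt.Println(description(doc))
	// → The `polaris_aws_account` resource adds an AWS account to RSC.
}
```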
diff --git a/internal/provider/provider_test.go b/internal/provider/provider_test.go
index 9881910..cbdc82d 100644
--- a/internal/provider/provider_test.go
+++ b/internal/provider/provider_test.go
@@ -219,8 +219,17 @@ type testAzureSubscription struct {
PrincipalName string `json:"principalName"`
PrincipalSecret string `json:"principalSecret"`
+ CloudNativeProtection struct {
+ Regions []string `json:"regions"`
+ ResourceGroupName string `json:"resourceGroupName"`
+ ResourceGroupRegion string `json:"resourceGroupRegion"`
+ } `json:"cloudNativeProtection"`
+
Exocompute struct {
- SubnetID string `json:"subnetId"`
+ Regions []string `json:"regions"`
+ ResourceGroupName string `json:"resourceGroupName"`
+ ResourceGroupRegion string `json:"resourceGroupRegion"`
+ SubnetID string `json:"subnetId"`
} `json:"exocompute"`
}
diff --git a/internal/provider/resource_aws_account.go b/internal/provider/resource_aws_account.go
index cd6e4c4..a2f374f 100644
--- a/internal/provider/resource_aws_account.go
+++ b/internal/provider/resource_aws_account.go
@@ -25,43 +25,35 @@ import (
"errors"
"log"
- "github.com/aws/aws-sdk-go-v2/aws/arn"
"github.com/google/uuid"
- "github.com/hashicorp/go-cty/cty"
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/aws"
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/graphql"
- graphqlaws "github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/graphql/aws"
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/graphql/core"
)
-// validateAwsRegion verifies that the name is a valid AWS region name.
-func validateAwsRegion(m interface{}, p cty.Path) diag.Diagnostics {
- _, err := graphqlaws.ParseRegion(m.(string))
- return diag.FromErr(err)
-}
-
-// validatePermissions verifies that the permissions value is valid.
-func validatePermissions(m interface{}, p cty.Path) diag.Diagnostics {
- if m.(string) != "update" {
- return diag.Errorf("invalid permissions value")
- }
+const resourceAWSAccountDescription = `
+The ´polaris_aws_account´ resource adds an AWS account to RSC. To grant RSC
+permissions to perform certain operations on the account, a CloudFormation
+stack is created from a template provided by RSC.
+
+There are two ways to specify the AWS account to onboard:
+ 1. Using the ´profile´ field. The AWS profile is used to create the
+ CloudFormation stack and look up the AWS account ID.
+ 2. Using the ´assume_role´ field, with or without the ´profile´ field. If the
+ ´profile´ field is omitted, the default profile is used. The profile is used
+ to assume the role, and the assumed role is then used to create the
+ CloudFormation stack and look up the account ID.
+
+Any combination of different RSC features can be enabled for an account:
+ 1. ´cloud_native_protection´ - Provides protection for AWS EC2 instances and
+ EBS volumes through the rules and policies of SLA Domains.
+ 2. ´exocompute´ - Provides snapshot indexing, file recovery, and application
+ protection of AWS objects.
+`
- return nil
-}
-
-// validateRoleARN verifies that the role ARN is a valid AWS ARN.
-func validateRoleARN(m interface{}, p cty.Path) diag.Diagnostics {
- if _, err := arn.Parse(m.(string)); err != nil {
- return diag.Errorf("failed to parse role ARN: %v", err)
- }
-
- return nil
-}
-
-// resourceAwsAccount defines the schema for the AWS account resource.
func resourceAwsAccount() *schema.Resource {
return &schema.Resource{
CreateContext: awsCreateAccount,
@@ -69,40 +61,53 @@ func resourceAwsAccount() *schema.Resource {
UpdateContext: awsUpdateAccount,
DeleteContext: awsDeleteAccount,
+ Description: description(resourceAWSAccountDescription),
Schema: map[string]*schema.Schema{
- "assume_role": {
+ keyID: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "RSC cloud account ID (UUID).",
+ },
+ keyAssumeRole: {
Type: schema.TypeString,
Optional: true,
- AtLeastOneOf: []string{"profile"},
+ AtLeastOneOf: []string{keyProfile},
Description: "Role ARN of role to assume.",
ValidateDiagFunc: validateRoleARN,
},
- "cloud_native_protection": {
+ keyCloudNativeProtection: {
Type: schema.TypeList,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
- "permission_groups": {
- Type: schema.TypeSet,
- Elem: &schema.Schema{Type: schema.TypeString},
- Optional: true,
- Description: "Permission groups to assign to the cloud native protection feature.",
+ keyPermissionGroups: {
+ Type: schema.TypeSet,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ ValidateFunc: validation.StringInSlice([]string{
+ "BASIC", "EXPORT_AND_RESTORE", "FILE_LEVEL_RECOVERY", "SNAPSHOT_PRIVATE_ACCESS",
+ }, false),
+ },
+ Optional: true,
+ Description: "Permission groups to assign to the Cloud Native Protection feature. " +
+ "Possible values are `BASIC`, `EXPORT_AND_RESTORE`, `FILE_LEVEL_RECOVERY` and " +
+ "`SNAPSHOT_PRIVATE_ACCESS`.",
},
- "regions": {
+ keyRegions: {
Type: schema.TypeSet,
Elem: &schema.Schema{
- Type: schema.TypeString,
- ValidateDiagFunc: validateAwsRegion,
+ Type: schema.TypeString,
+ ValidateFunc: validation.StringIsNotWhiteSpace,
},
MinItems: 1,
Required: true,
- Description: "Regions that Polaris will monitor for instances to automatically protect.",
+ Description: "Regions that RSC will monitor for instances to automatically protect.",
},
- "status": {
+ keyStatus: {
Type: schema.TypeString,
Computed: true,
Description: "Status of the Cloud Native Protection feature.",
},
- "stack_arn": {
+ keyStackARN: {
Type: schema.TypeString,
Computed: true,
Description: "Cloudformation stack ARN.",
@@ -113,38 +118,44 @@ func resourceAwsAccount() *schema.Resource {
Required: true,
Description: "Enable the Cloud Native Protection feature for the AWS account.",
},
- "delete_snapshots_on_destroy": {
+ keyDeleteSnapshotsOnDestroy: {
Type: schema.TypeBool,
Optional: true,
Default: false,
Description: "Should snapshots be deleted when the resource is destroyed.",
},
- "exocompute": {
+ keyExocompute: {
Type: schema.TypeList,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
- "permission_groups": {
- Type: schema.TypeSet,
- Elem: &schema.Schema{Type: schema.TypeString},
- Optional: true,
- Description: "Permission groups to assign to the exocompute feature.",
+ keyPermissionGroups: {
+ Type: schema.TypeSet,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ ValidateFunc: validation.StringInSlice([]string{
+ "BASIC", "PRIVATE_ENDPOINT", "RSC_MANAGED_CLUSTER",
+ }, false),
+ },
+ Optional: true,
+ Description: "Permission groups to assign to the Exocompute feature. Possible values " +
+ "are `BASIC`, `PRIVATE_ENDPOINT` and `RSC_MANAGED_CLUSTER`.",
},
- "regions": {
+ keyRegions: {
Type: schema.TypeSet,
Elem: &schema.Schema{
- Type: schema.TypeString,
- ValidateDiagFunc: validateAwsRegion,
+ Type: schema.TypeString,
+ ValidateFunc: validation.StringIsNotWhiteSpace,
},
MinItems: 1,
Required: true,
Description: "Regions to enable the Exocompute feature in.",
},
- "status": {
+ keyStatus: {
Type: schema.TypeString,
Computed: true,
Description: "Status of the Exocompute feature.",
},
- "stack_arn": {
+ keyStackARN: {
Type: schema.TypeString,
Computed: true,
Description: "Cloudformation stack ARN.",
@@ -153,28 +164,31 @@ func resourceAwsAccount() *schema.Resource {
},
MaxItems: 1,
Optional: true,
- Description: "Enable the exocompute feature for the account.",
+ Description: "Enable the Exocompute feature for the account.",
},
- "name": {
- Type: schema.TypeString,
- Optional: true,
- Computed: true,
- Description: "Account name in Polaris. If not given the name is taken from AWS Organizations or, if the required permissions are missing, is derived from the AWS account ID and the named profile.",
- ValidateDiagFunc: validation.ToDiagFunc(validation.StringIsNotWhiteSpace),
+ keyName: {
+ Type: schema.TypeString,
+ Optional: true,
+ Computed: true,
+ Description: "Account name in RSC. If not given, the name is taken from AWS Organizations " +
+ "or, if the required permissions are missing, is derived from the AWS account ID and the " +
+ "named profile.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
},
- "permissions": {
- Type: schema.TypeString,
- Optional: true,
- Computed: true,
- Description: "When set to 'update' feature permissions can be updated by applying the configuration.",
+ keyPermissions: {
+ Type: schema.TypeString,
+ Optional: true,
+ Computed: true,
+ Description: "When set to 'update', feature permissions can be updated by applying the " +
+ "configuration.",
ValidateDiagFunc: validatePermissions,
},
- "profile": {
- Type: schema.TypeString,
- Optional: true,
- AtLeastOneOf: []string{"assume_role"},
- Description: "AWS named profile.",
- ValidateDiagFunc: validation.ToDiagFunc(validation.StringIsNotWhiteSpace),
+ keyProfile: {
+ Type: schema.TypeString,
+ Optional: true,
+ AtLeastOneOf: []string{keyAssumeRole},
+ Description: "AWS named profile.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
},
},
@@ -191,8 +205,6 @@ func resourceAwsAccount() *schema.Resource {
}
}
-// awsCreateAccount run the Create operation for the AWS account resource. This
-// adds the AWS account to the Polaris platform.
func awsCreateAccount(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
log.Print("[TRACE] awsCreateAccount")
@@ -202,8 +214,8 @@ func awsCreateAccount(ctx context.Context, d *schema.ResourceData, m interface{}
}
// Initialize to empty string if missing from the configuration.
- profile, _ := d.Get("profile").(string)
- roleARN, _ := d.Get("assume_role").(string)
+ profile, _ := d.Get(keyProfile).(string)
+ roleARN, _ := d.Get(keyAssumeRole).(string)
var account aws.AccountFunc
switch {
@@ -216,7 +228,7 @@ func awsCreateAccount(ctx context.Context, d *schema.ResourceData, m interface{}
}
var opts []aws.OptionFunc
- if name, ok := d.GetOk("name"); ok {
+ if name, ok := d.GetOk(keyName); ok {
opts = append(opts, aws.Name(name.(string)))
}
@@ -224,17 +236,17 @@ func awsCreateAccount(ctx context.Context, d *schema.ResourceData, m interface{}
// cloud native protection feature.
var id uuid.UUID
- cnpBlock, ok := d.GetOk("cloud_native_protection")
+ cnpBlock, ok := d.GetOk(keyCloudNativeProtection)
if ok {
block := cnpBlock.([]interface{})[0].(map[string]interface{})
feature := core.FeatureCloudNativeProtection
- for _, group := range block["permission_groups"].(*schema.Set).List() {
+ for _, group := range block[keyPermissionGroups].(*schema.Set).List() {
feature = feature.WithPermissionGroups(core.PermissionGroup(group.(string)))
}
var cnpOpts []aws.OptionFunc
- for _, region := range block["regions"].(*schema.Set).List() {
+ for _, region := range block[keyRegions].(*schema.Set).List() {
cnpOpts = append(cnpOpts, aws.Region(region.(string)))
}
@@ -246,17 +258,17 @@ func awsCreateAccount(ctx context.Context, d *schema.ResourceData, m interface{}
}
}
- exoBlock, ok := d.GetOk("exocompute")
+ exoBlock, ok := d.GetOk(keyExocompute)
if ok {
block := exoBlock.([]interface{})[0].(map[string]interface{})
feature := core.FeatureExocompute
- for _, group := range block["permission_groups"].(*schema.Set).List() {
+ for _, group := range block[keyPermissionGroups].(*schema.Set).List() {
feature = feature.WithPermissionGroups(core.PermissionGroup(group.(string)))
}
var exoOpts []aws.OptionFunc
- for _, region := range block["regions"].(*schema.Set).List() {
+ for _, region := range block[keyRegions].(*schema.Set).List() {
exoOpts = append(exoOpts, aws.Region(region.(string)))
}
@@ -273,8 +285,6 @@ func awsCreateAccount(ctx context.Context, d *schema.ResourceData, m interface{}
return nil
}
-// awsReadAccount run the Read operation for the AWS account resource. This
-// reads the state of the AWS account in Polaris.
func awsReadAccount(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
log.Print("[TRACE] awsReadAccount")
@@ -313,17 +323,17 @@ func awsReadAccount(ctx context.Context, d *schema.ResourceData, m interface{})
status := core.FormatStatus(cnpFeature.Status)
err := d.Set("cloud_native_protection", []interface{}{
map[string]interface{}{
- "permission_groups": &groups,
- "regions": ®ions,
- "status": &status,
- "stack_arn": &cnpFeature.StackArn,
+ keyPermissionGroups: &groups,
+ keyRegions: ®ions,
+ keyStatus: &status,
+ keyStackARN: &cnpFeature.StackArn,
},
})
if err != nil {
return diag.FromErr(err)
}
} else {
- if err := d.Set("cloud_native_protection", nil); err != nil {
+ if err := d.Set(keyCloudNativeProtection, nil); err != nil {
return diag.FromErr(err)
}
}
@@ -343,22 +353,22 @@ func awsReadAccount(ctx context.Context, d *schema.ResourceData, m interface{})
status := core.FormatStatus(exoFeature.Status)
err := d.Set("exocompute", []interface{}{
map[string]interface{}{
- "permission_groups": &groups,
- "regions": ®ions,
- "status": &status,
- "stack_arn": &exoFeature.StackArn,
+ keyPermissionGroups: &groups,
+ keyRegions: ®ions,
+ keyStatus: &status,
+ keyStackARN: &exoFeature.StackArn,
},
})
if err != nil {
return diag.FromErr(err)
}
} else {
- if err := d.Set("exocompute", nil); err != nil {
+ if err := d.Set(keyExocompute, nil); err != nil {
return diag.FromErr(err)
}
}
- if err := d.Set("name", account.Name); err != nil {
+ if err := d.Set(keyName, account.Name); err != nil {
return diag.FromErr(err)
}
@@ -368,7 +378,7 @@ func awsReadAccount(ctx context.Context, d *schema.ResourceData, m interface{})
continue
}
- if err := d.Set("permissions", "update-required"); err != nil {
+ if err := d.Set(keyPermissions, "update-required"); err != nil {
return diag.FromErr(err)
}
}
@@ -376,8 +386,6 @@ func awsReadAccount(ctx context.Context, d *schema.ResourceData, m interface{})
return nil
}
-// awsUpdateAccount run the Update operation for the AWS account resource. This
-// updates the state of the AWS account in Polaris.
func awsUpdateAccount(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
log.Print("[TRACE] awsUpdateAccount")
@@ -387,8 +395,8 @@ func awsUpdateAccount(ctx context.Context, d *schema.ResourceData, m interface{}
}
// Initialize to empty string if missing from the configuration.
- profile, _ := d.Get("profile").(string)
- roleARN, _ := d.Get("assume_role").(string)
+ profile, _ := d.Get(keyProfile).(string)
+ roleARN, _ := d.Get(keyAssumeRole).(string)
var account aws.AccountFunc
switch {
@@ -418,18 +426,18 @@ func awsUpdateAccount(ctx context.Context, d *schema.ResourceData, m interface{}
return diag.Errorf("resource id and profile/role refer to different accounts")
}
- if d.HasChange("cloud_native_protection") {
- cnpBlock, ok := d.GetOk("cloud_native_protection")
+ if d.HasChange(keyCloudNativeProtection) {
+ cnpBlock, ok := d.GetOk(keyCloudNativeProtection)
if ok {
block := cnpBlock.([]interface{})[0].(map[string]interface{})
feature := core.FeatureCloudNativeProtection
- for _, group := range block["permission_groups"].(*schema.Set).List() {
+ for _, group := range block[keyPermissionGroups].(*schema.Set).List() {
feature = feature.WithPermissionGroups(core.PermissionGroup(group.(string)))
}
var opts []aws.OptionFunc
- for _, region := range block["regions"].(*schema.Set).List() {
+ for _, region := range block[keyRegions].(*schema.Set).List() {
opts = append(opts, aws.Region(region.(string)))
}
@@ -437,19 +445,19 @@ func awsUpdateAccount(ctx context.Context, d *schema.ResourceData, m interface{}
return diag.FromErr(err)
}
} else {
- if _, ok := d.GetOk("exocompute"); ok {
+ if _, ok := d.GetOk(keyExocompute); ok {
return diag.Errorf("cloud native protection is required by exocompute")
}
- snapshots := d.Get("delete_snapshots_on_destroy").(bool)
+ snapshots := d.Get(keyDeleteSnapshotsOnDestroy).(bool)
if err := aws.Wrap(client).RemoveAccount(ctx, account, []core.Feature{core.FeatureCloudNativeProtection}, snapshots); err != nil {
return diag.FromErr(err)
}
}
}
- if d.HasChange("exocompute") {
- oldExoBlock, newExoBlock := d.GetChange("exocompute")
+ if d.HasChange(keyExocompute) {
+ oldExoBlock, newExoBlock := d.GetChange(keyExocompute)
oldExoList := oldExoBlock.([]interface{})
newExoList := newExoBlock.([]interface{})
@@ -458,12 +466,12 @@ func awsUpdateAccount(ctx context.Context, d *schema.ResourceData, m interface{}
switch {
case len(oldExoList) == 0:
feature := core.FeatureExocompute
- for _, group := range newExoList[0].(map[string]interface{})["permission_groups"].(*schema.Set).List() {
+ for _, group := range newExoList[0].(map[string]interface{})[keyPermissionGroups].(*schema.Set).List() {
feature = feature.WithPermissionGroups(core.PermissionGroup(group.(string)))
}
var opts []aws.OptionFunc
- for _, region := range newExoList[0].(map[string]interface{})["regions"].(*schema.Set).List() {
+ for _, region := range newExoList[0].(map[string]interface{})[keyRegions].(*schema.Set).List() {
opts = append(opts, aws.Region(region.(string)))
}
@@ -478,7 +486,7 @@ func awsUpdateAccount(ctx context.Context, d *schema.ResourceData, m interface{}
}
default:
var opts []aws.OptionFunc
- for _, region := range newExoList[0].(map[string]interface{})["regions"].(*schema.Set).List() {
+ for _, region := range newExoList[0].(map[string]interface{})[keyRegions].(*schema.Set).List() {
opts = append(opts, aws.Region(region.(string)))
}
@@ -489,8 +497,8 @@ func awsUpdateAccount(ctx context.Context, d *schema.ResourceData, m interface{}
}
}
- if d.HasChange("permissions") {
- oldPerms, newPerms := d.GetChange("permissions")
+ if d.HasChange(keyPermissions) {
+ oldPerms, newPerms := d.GetChange(keyPermissions)
if oldPerms == "update-required" && newPerms == "update" {
var features []core.Feature
@@ -506,7 +514,7 @@ func awsUpdateAccount(ctx context.Context, d *schema.ResourceData, m interface{}
return diag.FromErr(err)
}
- if err := d.Set("permissions", "update"); err != nil {
+ if err := d.Set(keyPermissions, "update"); err != nil {
return diag.FromErr(err)
}
}
@@ -516,8 +524,6 @@ func awsUpdateAccount(ctx context.Context, d *schema.ResourceData, m interface{}
return nil
}
-// awsDeleteAccount run the Delete operation for the AWS account resource. This
-// removes the AWS account from Polaris.
func awsDeleteAccount(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
log.Print("[TRACE] awsDeleteAccount")
@@ -533,8 +539,8 @@ func awsDeleteAccount(ctx context.Context, d *schema.ResourceData, m interface{}
// Get the old resource arguments. Initialize to empty string if missing
// from the configuration.
- oldProfile, _ := d.GetChange("profile")
- oldRoleARN, _ := d.GetChange("assume_role")
+ oldProfile, _ := d.GetChange(keyProfile)
+ oldRoleARN, _ := d.GetChange(keyAssumeRole)
profile, _ := oldProfile.(string)
roleARN, _ := oldRoleARN.(string)
@@ -548,7 +554,7 @@ func awsDeleteAccount(ctx context.Context, d *schema.ResourceData, m interface{}
account = aws.DefaultWithRole(roleARN)
}
- oldSnapshots, _ := d.GetChange("delete_snapshots_on_destroy")
+ oldSnapshots, _ := d.GetChange(keyDeleteSnapshotsOnDestroy)
deleteSnapshots := oldSnapshots.(bool)
// Make sure that the resource id and account profile refers to the same
@@ -564,14 +570,14 @@ func awsDeleteAccount(ctx context.Context, d *schema.ResourceData, m interface{}
return diag.Errorf("resource id and profile/role refer to different accounts")
}
- if _, ok := d.GetOk("exocompute"); ok {
+ if _, ok := d.GetOk(keyExocompute); ok {
err = aws.Wrap(client).RemoveAccount(ctx, account, []core.Feature{core.FeatureExocompute}, deleteSnapshots)
if err != nil {
return diag.FromErr(err)
}
}
- if _, ok := d.GetOk("cloud_native_protection"); ok {
+ if _, ok := d.GetOk(keyCloudNativeProtection); ok {
err = aws.Wrap(client).RemoveAccount(ctx, account, []core.Feature{core.FeatureCloudNativeProtection}, deleteSnapshots)
if err != nil {
return diag.FromErr(err)
@@ -579,6 +585,5 @@ func awsDeleteAccount(ctx context.Context, d *schema.ResourceData, m interface{}
}
d.SetId("")
-
return nil
}
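The create and update paths above repeatedly build a feature value and extend it with permission groups through a fluent `WithPermissionGroups` call before onboarding. A minimal sketch of that pattern with illustrative types (the real `core.Feature` lives in rubrik-polaris-sdk-for-go and differs in detail):

```go
package main

import "fmt"

// Feature illustrates the fluent builder pattern used in awsCreateAccount.
// The type and method here are illustrative stand-ins, not the SDK's actual
// definitions.
type Feature struct {
	Name             string
	PermissionGroups []string
}

// WithPermissionGroups returns a copy of the feature extended with the given
// permission groups; the value receiver leaves the original untouched.
func (f Feature) WithPermissionGroups(groups ...string) Feature {
	f.PermissionGroups = append(f.PermissionGroups, groups...)
	return f
}

func main() {
	feature := Feature{Name: "CLOUD_NATIVE_PROTECTION"}
	for _, group := range []string{"BASIC", "EXPORT_AND_RESTORE"} {
		feature = feature.WithPermissionGroups(group)
	}
	fmt.Println(feature.Name, feature.PermissionGroups)
	// → CLOUD_NATIVE_PROTECTION [BASIC EXPORT_AND_RESTORE]
}
```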
diff --git a/internal/provider/resource_aws_account_v0.go b/internal/provider/resource_aws_account_v0.go
index 73bc2cb..ed305ed 100644
--- a/internal/provider/resource_aws_account_v0.go
+++ b/internal/provider/resource_aws_account_v0.go
@@ -27,12 +27,18 @@ import (
"strings"
"github.com/google/uuid"
+ "github.com/hashicorp/go-cty/cty"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/aws"
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/graphql/core"
)
+func validateAwsRegion(m interface{}, p cty.Path) diag.Diagnostics {
+ return nil
+}
+
// resourceAwsAccountV0 defines the schema for version 0 of the AWS account
// resource.
func resourceAwsAccountV0() *schema.Resource {
diff --git a/internal/provider/resource_aws_archival_location.go b/internal/provider/resource_aws_archival_location.go
index 48fa859..dead14e 100644
--- a/internal/provider/resource_aws_archival_location.go
+++ b/internal/provider/resource_aws_archival_location.go
@@ -35,6 +35,26 @@ import (
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/graphql"
)
+const (
+ implicitPrefix = "rubrik-"
+)
+
+const resourceAWSArchivalLocationDescription = `
+The ´polaris_aws_archival_location´ resource creates an RSC archival location for
+cloud-native workloads. This resource requires that the AWS account has been
+onboarded with the ´CLOUD_NATIVE_ARCHIVAL´ feature.
+
+When creating an archival location, the location template, which determines the
+region where the snapshots are stored, is derived from the ´region´ field:
+ * ´SOURCE_REGION´ - Snapshots are stored in the same region as the workload,
+ minimizing data transfer charges. This is the default behaviour when the ´region´
+ field is not specified.
+ * ´SPECIFIC_REGION´ - Snapshots are stored in the region given by the ´region´
+ field, which can increase total data transfer charges.
+
+-> **Note:** The AWS bucket holding the archived data is not created until the first
+ protected object is archived.
+`
+
func resourceAwsArchivalLocation() *schema.Resource {
return &schema.Resource{
CreateContext: awsCreateArchivalLocation,
@@ -42,64 +62,77 @@ func resourceAwsArchivalLocation() *schema.Resource {
UpdateContext: awsUpdateArchivalLocation,
DeleteContext: awsDeleteArchivalLocation,
+ Description: description(resourceAWSArchivalLocationDescription),
Schema: map[string]*schema.Schema{
- "account_id": {
- Type: schema.TypeString,
- Required: true,
- ForceNew: true,
- Description: "RSC cloud account ID.",
- ValidateFunc: validation.StringIsNotWhiteSpace,
+ keyID: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "Cloud native archival location ID (UUID).",
},
- "bucket_prefix": {
+ keyAccountID: {
Type: schema.TypeString,
Required: true,
ForceNew: true,
- Description: "AWS bucket prefix. Note that `rubrik-` will always be prepended to the prefix.",
+ Description: "RSC cloud account ID (UUID). Changing this forces a new resource to be created.",
+ ValidateFunc: validation.IsUUID,
+ },
+ keyBucketPrefix: {
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ Description: "AWS bucket prefix. The prefix cannot be longer than 19 characters. Note that `rubrik-` " +
+ "will always be prepended to the prefix. Changing this forces a new resource to be created.",
ValidateFunc: validation.StringLenBetween(1, 19),
},
- "bucket_tags": {
+ keyBucketTags: {
Type: schema.TypeMap,
Optional: true,
- ForceNew: true,
Description: "AWS bucket tags. Each tag will be added to the bucket created by RSC.",
},
- "connection_status": {
+ keyConnectionStatus: {
Type: schema.TypeString,
Computed: true,
- Description: "Connection status of the archival location.",
+ Description: "Connection status of the cloud native archival location.",
},
- "kms_master_key": {
+ keyKMSMasterKey: {
Type: schema.TypeString,
Optional: true,
Sensitive: true,
Default: "aws/s3",
- Description: "AWS KMS master key alias/ID.",
+ Description: "AWS KMS master key alias/ID. Default value is `aws/s3`.",
ValidateFunc: validation.StringIsNotWhiteSpace,
},
- "location_template": {
- Type: schema.TypeString,
- Computed: true,
- Description: "Location template. If a region was specified, it will be `SPECIFIC_REGION`, otherwise `SOURCE_REGION`.",
+ keyLocationTemplate: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "Location template. If a region was specified, it will be `SPECIFIC_REGION`, otherwise " +
+ "`SOURCE_REGION`.",
},
- "name": {
+ keyName: {
Type: schema.TypeString,
Required: true,
- Description: "Name of the archival location.",
+ Description: "Name of the cloud native archival location.",
ValidateFunc: validation.StringIsNotWhiteSpace,
},
- "region": {
- Type: schema.TypeString,
- Optional: true,
- ForceNew: true,
- Description: "AWS region to store the snapshots in. If not specified, the snapshots will be stored in the same region as the workload.",
+ keyRegion: {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Description: "AWS region to store the snapshots in. If not specified, the snapshots will be " +
+ "stored in the same region as the workload. Changing this forces a new resource to be created.",
ValidateFunc: validation.StringIsNotWhiteSpace,
},
- "storage_class": {
- Type: schema.TypeString,
- Optional: true,
- Default: "STANDARD_IA",
- Description: "AWS bucket storage class.",
- ValidateFunc: validation.StringIsNotWhiteSpace,
+ keyStorageClass: {
+ Type: schema.TypeString,
+ Optional: true,
+ Default: "STANDARD_IA",
+ Description: "AWS bucket storage class. Possible values are `STANDARD`, `STANDARD_IA`, `ONEZONE_IA`, " +
+ "`GLACIER_INSTANT_RETRIEVAL`, `GLACIER_DEEP_ARCHIVE` and `GLACIER_FLEXIBLE_RETRIEVAL`. Default " +
+ "value is `STANDARD_IA`.",
+ ValidateFunc: validation.StringInSlice([]string{
+ "STANDARD", "STANDARD_IA", "ONEZONE_IA", "GLACIER_INSTANT_RETRIEVAL", "GLACIER_DEEP_ARCHIVE",
+ "GLACIER_FLEXIBLE_RETRIEVAL",
+ }, false),
},
},
}
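The schema above maps to a configuration along these lines (a hypothetical sketch; the account reference, names, and tag values are placeholders):

```terraform
resource "polaris_aws_archival_location" "example" {
  account_id    = polaris_aws_account.example.id
  bucket_prefix = "archival-data" # stored as rubrik-archival-data
  name          = "Example Archival Location"

  # Omitting region selects SOURCE_REGION; setting it selects SPECIFIC_REGION.
  region        = "us-east-2"
  storage_class = "STANDARD_IA"

  bucket_tags = {
    Environment = "production"
  }
}
```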
@@ -115,20 +148,18 @@ func awsCreateArchivalLocation(ctx context.Context, d *schema.ResourceData, m in
// Lookup and parse the cloud account ID argument. Note, if this argument
// changes the resource will be recreated.
- accountID, err := uuid.Parse(d.Get("account_id").(string))
+ accountID, err := uuid.Parse(d.Get(keyAccountID).(string))
if err != nil {
return diag.FromErr(err)
}
- // Lookup the resource string arguments.
- bucketPrefix := d.Get("bucket_prefix").(string)
- kmsMasterKey := d.Get("kms_master_key").(string)
- name := d.Get("name").(string)
- region := d.Get("region").(string)
- storageClass := d.Get("storage_class").(string)
+ bucketPrefix := d.Get(keyBucketPrefix).(string)
+ kmsMasterKey := d.Get(keyKMSMasterKey).(string)
+ name := d.Get(keyName).(string)
+ region := d.Get(keyRegion).(string)
+ storageClass := d.Get(keyStorageClass).(string)
- // Lookup the resource bucket tags argument.
- bucketTags, err := fromBucketTags(d.Get("bucket_tags").(map[string]any))
+ bucketTags, err := fromBucketTags(d.Get(keyBucketTags).(map[string]any))
if err != nil {
return diag.FromErr(err)
}
@@ -140,9 +171,7 @@ func awsCreateArchivalLocation(ctx context.Context, d *schema.ResourceData, m in
return diag.FromErr(err)
}
- // Set the resource ID to the target mapping ID.
d.SetId(targetMappingID.String())
-
awsReadArchivalLocation(ctx, d, m)
return nil
}
@@ -155,7 +184,6 @@ func awsReadArchivalLocation(ctx context.Context, d *schema.ResourceData, m inte
return diag.FromErr(err)
}
- // Lookup and parse the target mapping ID from the resource ID.
targetMappingID, err := uuid.Parse(d.Id())
if err != nil {
return diag.FromErr(err)
@@ -172,31 +200,28 @@ func awsReadArchivalLocation(ctx context.Context, d *schema.ResourceData, m inte
return diag.FromErr(err)
}
- // Set the resource string arguments.
- if err := d.Set("bucket_prefix", strings.TrimPrefix(targetMapping.BucketPrefix, "rubrik-")); err != nil {
+ if err := d.Set(keyBucketPrefix, strings.TrimPrefix(targetMapping.BucketPrefix, implicitPrefix)); err != nil {
return diag.FromErr(err)
}
- if err := d.Set("connection_status", targetMapping.ConnectionStatus); err != nil {
+ if err := d.Set(keyConnectionStatus, targetMapping.ConnectionStatus); err != nil {
return diag.FromErr(err)
}
- if err := d.Set("kms_master_key", targetMapping.KMSMasterKey); err != nil {
+ if err := d.Set(keyKMSMasterKey, targetMapping.KMSMasterKey); err != nil {
return diag.FromErr(err)
}
- if err := d.Set("location_template", targetMapping.LocTemplate); err != nil {
+ if err := d.Set(keyLocationTemplate, targetMapping.LocTemplate); err != nil {
return diag.FromErr(err)
}
- if err := d.Set("name", targetMapping.Name); err != nil {
+ if err := d.Set(keyName, targetMapping.Name); err != nil {
return diag.FromErr(err)
}
- if err := d.Set("region", targetMapping.Region); err != nil {
+ if err := d.Set(keyRegion, targetMapping.Region); err != nil {
return diag.FromErr(err)
}
- if err := d.Set("storage_class", targetMapping.StorageClass); err != nil {
+ if err := d.Set(keyStorageClass, targetMapping.StorageClass); err != nil {
return diag.FromErr(err)
}
-
- // Set the resource bucket tags argument.
- if err := d.Set("bucket_tags", toBucketTags(targetMapping.BucketTags)); err != nil {
+ if err := d.Set(keyBucketTags, toBucketTags(targetMapping.BucketTags)); err != nil {
return diag.FromErr(err)
}
@@ -211,21 +236,22 @@ func awsUpdateArchivalLocation(ctx context.Context, d *schema.ResourceData, m in
return diag.FromErr(err)
}
- // Lookup and parse the target mapping ID from the resource ID.
targetMappingID, err := uuid.Parse(d.Id())
if err != nil {
- d.SetId("")
return diag.FromErr(err)
}
- // Lookup the resource string arguments.
- kmsMasterKey := d.Get("kms_master_key").(string)
- name := d.Get("name").(string)
- storageClass := d.Get("storage_class").(string)
+ kmsMasterKey := d.Get(keyKMSMasterKey).(string)
+ name := d.Get(keyName).(string)
+ storageClass := d.Get(keyStorageClass).(string)
+ bucketTags, err := fromBucketTags(d.Get(keyBucketTags).(map[string]any))
+ if err != nil {
+ return diag.FromErr(err)
+ }
// Update the AWS archival location. Note, the API doesn't support updating
// all arguments.
- err = aws.Wrap(client).UpdateStorageSetting(ctx, targetMappingID, name, storageClass, kmsMasterKey)
+ err = aws.Wrap(client).UpdateStorageSetting(ctx, targetMappingID, name, storageClass, kmsMasterKey, bucketTags)
if err != nil {
return diag.FromErr(err)
}
@@ -241,7 +267,6 @@ func awsDeleteArchivalLocation(ctx context.Context, d *schema.ResourceData, m in
return diag.FromErr(err)
}
- // Lookup and parse the target mapping ID from the resource ID.
targetMappingID, err := uuid.Parse(d.Id())
if err != nil {
return diag.FromErr(err)
@@ -255,8 +280,8 @@ func awsDeleteArchivalLocation(ctx context.Context, d *schema.ResourceData, m in
return nil
}
-// fromBucketTags converts from the bucket tags argument to a standard string to
-// string map.
+// fromBucketTags converts from the bucket tags argument to a standard
+// string-to-string map.
func fromBucketTags(bucketTags map[string]any) (map[string]string, error) {
tags := make(map[string]string, len(bucketTags))
for key, value := range bucketTags {
@@ -270,8 +295,8 @@ func fromBucketTags(bucketTags map[string]any) (map[string]string, error) {
return tags, nil
}
-// toBucketTags converts to the bucket tags argument from a standard string to
-// string map.
+// toBucketTags converts to the bucket tags argument from a standard
+// string-to-string map.
func toBucketTags(tags map[string]string) map[string]any {
bucketTags := make(map[string]any, len(tags))
for key, value := range tags {
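The two helpers above convert between Terraform's `map[string]any` attribute representation and a plain string-to-string map. A self-contained sketch of the round trip (a standalone copy for illustration; the real helpers live in the provider package):

```go
package main

import "fmt"

// fromBucketTags converts Terraform's map[string]any attribute value to a
// map[string]string, rejecting any value that is not a string.
func fromBucketTags(bucketTags map[string]any) (map[string]string, error) {
	tags := make(map[string]string, len(bucketTags))
	for key, value := range bucketTags {
		s, ok := value.(string)
		if !ok {
			return nil, fmt.Errorf("bucket tag %q is not a string", key)
		}
		tags[key] = s
	}
	return tags, nil
}

// toBucketTags converts a plain string map back to the map[string]any form
// expected by the Terraform schema layer.
func toBucketTags(tags map[string]string) map[string]any {
	bucketTags := make(map[string]any, len(tags))
	for key, value := range tags {
		bucketTags[key] = value
	}
	return bucketTags
}

func main() {
	in := map[string]any{"team": "storage", "env": "prod"}
	tags, err := fromBucketTags(in)
	if err != nil {
		panic(err)
	}
	fmt.Println(tags["team"], len(toBucketTags(tags))) // → storage 2
}
```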
diff --git a/internal/provider/resource_aws_cnp_account.go b/internal/provider/resource_aws_cnp_account.go
index 1526bd1..247cb9d 100644
--- a/internal/provider/resource_aws_cnp_account.go
+++ b/internal/provider/resource_aws_cnp_account.go
@@ -34,21 +34,43 @@ import (
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/graphql/core"
)
-var featureResource = &schema.Resource{
- Schema: map[string]*schema.Schema{
- "name": {
- Type: schema.TypeString,
- Required: true,
- Description: "Feature name.",
- },
- "permission_groups": {
- Type: schema.TypeSet,
- Elem: &schema.Schema{Type: schema.TypeString},
- Required: true,
- Description: "Permission groups to assign to the feature.",
- },
- },
-}
+const resourceAWSCNPAccount = `
+The ´polaris_aws_cnp_account´ resource adds an AWS account to RSC using the non-CFT
+(CloudFormation Template) workflow. The ´polaris_aws_account´ resource can be used to
+add an AWS account to RSC using the CFT workflow.
+
+## Permission Groups
+The following is a list of features and their applicable permission groups, used
+when specifying the feature set.
+
+### CLOUD_NATIVE_ARCHIVAL
+ * ´BASIC´ - Represents the basic set of permissions required to onboard the feature.
+
+### CLOUD_NATIVE_PROTECTION
+ * ´BASIC´ - Represents the basic set of permissions required to onboard the feature.
+ * ´EXPORT_AND_RESTORE´ - Represents the set of permissions required for export and
+ restore operations.
+ * ´FILE_LEVEL_RECOVERY´ - Represents the set of permissions required for file-level
+ recovery operations.
+ * ´SNAPSHOT_PRIVATE_ACCESS´ - Represents the set of permissions required for private
+ access to disk snapshots.
+
+### CLOUD_NATIVE_S3_PROTECTION
+ * ´BASIC´ - Represents the basic set of permissions required to onboard the feature.
+
+### EXOCOMPUTE
+ * ´BASIC´ - Represents the basic set of permissions required to onboard the feature.
+ * ´PRIVATE_ENDPOINTS´ - Represents the set of permissions required for usage of
+ private endpoints.
+ * ´RSC_MANAGED_CLUSTER´ - Represents the set of permissions required for the Rubrik-
+ managed Exocompute cluster.
+
+### RDS_PROTECTION
+ * ´BASIC´ - Represents the basic set of permissions required to onboard the feature.
+
+-> **Note:** When permission groups are specified, the ´BASIC´ permission group must
+ always be included.
+`
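The feature set described above is expressed as repeated `feature` blocks in the configuration; a hypothetical sketch (all values are placeholders, and `BASIC` is included in every block as the note requires):

```terraform
resource "polaris_aws_cnp_account" "example" {
  name      = "example"
  native_id = "123456789012"
  regions   = ["us-east-2"]

  feature {
    name              = "CLOUD_NATIVE_PROTECTION"
    permission_groups = ["BASIC", "EXPORT_AND_RESTORE"]
  }

  feature {
    name              = "EXOCOMPUTE"
    permission_groups = ["BASIC", "RSC_MANAGED_CLUSTER"]
  }
}
```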
func resourceAwsCnpAccount() *schema.Resource {
return &schema.Resource{
@@ -57,50 +79,55 @@ func resourceAwsCnpAccount() *schema.Resource {
UpdateContext: awsUpdateCnpAccount,
DeleteContext: awsDeleteCnpAccount,
+ Description: description(resourceAWSCNPAccount),
Schema: map[string]*schema.Schema{
- "cloud": {
- Type: schema.TypeString,
- Optional: true,
- ForceNew: true,
- Default: "STANDARD",
- Description: "Cloud type.",
- ValidateFunc: validation.StringIsNotWhiteSpace,
+ keyID: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "RSC cloud account ID (UUID).",
+ },
+ keyCloud: {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Default: "STANDARD",
+ Description: "AWS cloud type. Possible values are `STANDARD`, `CHINA` and `GOV`. Default value is " +
+ "`STANDARD`. Changing this forces a new resource to be created.",
+ ValidateFunc: validation.StringInSlice([]string{"STANDARD", "CHINA", "GOV"}, false),
},
- "delete_snapshots_on_destroy": {
+ keyDeleteSnapshotsOnDestroy: {
Type: schema.TypeBool,
Optional: true,
Default: false,
Description: "Should snapshots be deleted when the resource is destroyed.",
},
- // Needed to force full recreation of account if external id is
- // changed.
- "external_id": {
+ keyExternalID: {
Type: schema.TypeString,
Optional: true,
ForceNew: true,
- Description: "External id.",
+ Description: "External ID. Changing this forces a new resource to be created.",
},
- "feature": {
+ keyFeature: {
Type: schema.TypeSet,
- Elem: featureResource,
+ Elem: featureResource(),
MinItems: 1,
Required: true,
Description: "RSC feature with optional permission groups.",
},
- "name": {
+ keyName: {
Type: schema.TypeString,
Optional: true,
Description: "Account name.",
ValidateFunc: validation.StringIsNotWhiteSpace,
},
- "native_id": {
+ keyNativeID: {
Type: schema.TypeString,
Required: true,
ForceNew: true,
- Description: "AWS account id.",
+ Description: "AWS account ID. Changing this forces a new resource to be created.",
ValidateFunc: validation.StringIsNotWhiteSpace,
},
- "regions": {
+ keyRegions: {
Type: schema.TypeSet,
Elem: &schema.Schema{
Type: schema.TypeString,
@@ -123,21 +150,22 @@ func awsCreateCnpAccount(ctx context.Context, d *schema.ResourceData, m interfac
}
// Get attributes.
- cloud := d.Get("cloud").(string)
+ cloud := d.Get(keyCloud).(string)
var features []core.Feature
- for _, block := range d.Get("feature").(*schema.Set).List() {
+ for _, block := range d.Get(keyFeature).(*schema.Set).List() {
block := block.(map[string]interface{})
- feature := core.Feature{Name: block["name"].(string)}
- for _, group := range block["permission_groups"].(*schema.Set).List() {
+ feature := core.Feature{Name: block[keyName].(string)}
+ for _, group := range block[keyPermissionGroups].(*schema.Set).List() {
feature = feature.WithPermissionGroups(core.PermissionGroup(group.(string)))
}
features = append(features, feature)
}
- name := d.Get("name").(string)
- nativeID := d.Get("native_id").(string)
+
+ name := d.Get(keyName).(string)
+ nativeID := d.Get(keyNativeID).(string)
var regions []string
- for _, region := range d.Get("regions").(*schema.Set).List() {
+ for _, region := range d.Get(keyRegions).(*schema.Set).List() {
regions = append(regions, region.(string))
}
@@ -182,24 +210,24 @@ func awsReadCnpAccount(ctx context.Context, d *schema.ResourceData, m interface{
if err := d.Set("cloud", account.Cloud); err != nil {
return diag.FromErr(err)
}
- features := &schema.Set{F: schema.HashResource(featureResource)}
+ features := &schema.Set{F: schema.HashResource(featureResource())}
for _, feature := range account.Features {
groups := &schema.Set{F: schema.HashString}
for _, group := range feature.Feature.PermissionGroups {
groups.Add(string(group))
}
features.Add(map[string]any{
- "name": feature.Feature.Name,
- "permission_groups": groups,
+ keyName: feature.Feature.Name,
+ keyPermissionGroups: groups,
})
}
- if err := d.Set("feature", features); err != nil {
+ if err := d.Set(keyFeature, features); err != nil {
return diag.FromErr(err)
}
- if err := d.Set("name", account.Name); err != nil {
+ if err := d.Set(keyName, account.Name); err != nil {
return diag.FromErr(err)
}
- if err := d.Set("native_id", account.NativeID); err != nil {
+ if err := d.Set(keyNativeID, account.NativeID); err != nil {
return diag.FromErr(err)
}
regions := &schema.Set{F: schema.HashString}
@@ -208,7 +236,7 @@ func awsReadCnpAccount(ctx context.Context, d *schema.ResourceData, m interface{
regions.Add(region)
}
}
- if err := d.Set("regions", regions); err != nil {
+ if err := d.Set(keyRegions, regions); err != nil {
return diag.FromErr(err)
}
@@ -228,22 +256,22 @@ func awsUpdateCnpAccount(ctx context.Context, d *schema.ResourceData, m interfac
if err != nil {
return diag.FromErr(err)
}
- cloud := d.Get("cloud").(string)
- deleteSnapshots := d.Get("delete_snapshots_on_destroy").(bool)
+ cloud := d.Get(keyCloud).(string)
+ deleteSnapshots := d.Get(keyDeleteSnapshotsOnDestroy).(bool)
var features []core.Feature
- for _, block := range d.Get("feature").(*schema.Set).List() {
+ for _, block := range d.Get(keyFeature).(*schema.Set).List() {
block := block.(map[string]interface{})
- feature := core.Feature{Name: block["name"].(string)}
- for _, group := range block["permission_groups"].(*schema.Set).List() {
+ feature := core.Feature{Name: block[keyName].(string)}
+ for _, group := range block[keyPermissionGroups].(*schema.Set).List() {
feature = feature.WithPermissionGroups(core.PermissionGroup(group.(string)))
}
features = append(features, feature)
}
- name := d.Get("name").(string)
- nativeID := d.Get("native_id").(string)
+ name := d.Get(keyName).(string)
+ nativeID := d.Get(keyNativeID).(string)
var regions []string
- for _, region := range d.Get("regions").(*schema.Set).List() {
+ for _, region := range d.Get(keyRegions).(*schema.Set).List() {
regions = append(regions, region.(string))
}
@@ -257,20 +285,20 @@ func awsUpdateCnpAccount(ctx context.Context, d *schema.ResourceData, m interfac
return diag.FromErr(err)
}
- if d.HasChange("name") {
+ if d.HasChange(keyName) {
if err := aws.Wrap(client).UpdateAccount(ctx, aws.CloudAccountID(id), core.FeatureAll, aws.Name(name)); err != nil {
return diag.FromErr(err)
}
}
- if d.HasChange("feature") {
- oldAttr, newAttr := d.GetChange("feature")
+ if d.HasChange(keyFeature) {
+ oldAttr, newAttr := d.GetChange(keyFeature)
var oldFeatures []core.Feature
for _, block := range oldAttr.(*schema.Set).List() {
block := block.(map[string]interface{})
- feature := core.Feature{Name: block["name"].(string)}
- for _, group := range block["permission_groups"].(*schema.Set).List() {
+ feature := core.Feature{Name: block[keyName].(string)}
+ for _, group := range block[keyPermissionGroups].(*schema.Set).List() {
feature = feature.WithPermissionGroups(core.PermissionGroup(group.(string)))
}
oldFeatures = append(oldFeatures, feature)
@@ -279,8 +307,8 @@ func awsUpdateCnpAccount(ctx context.Context, d *schema.ResourceData, m interfac
var newFeatures []core.Feature
for _, block := range newAttr.(*schema.Set).List() {
block := block.(map[string]interface{})
- feature := core.Feature{Name: block["name"].(string)}
- for _, group := range block["permission_groups"].(*schema.Set).List() {
+ feature := core.Feature{Name: block[keyName].(string)}
+ for _, group := range block[keyPermissionGroups].(*schema.Set).List() {
feature = feature.WithPermissionGroups(core.PermissionGroup(group.(string)))
}
newFeatures = append(newFeatures, feature)
@@ -303,9 +331,9 @@ func awsUpdateCnpAccount(ctx context.Context, d *schema.ResourceData, m interfac
}
}
- if d.HasChange("regions") {
+ if d.HasChange(keyRegions) {
var regions []string
- for _, region := range d.Get("regions").(*schema.Set).List() {
+ for _, region := range d.Get(keyRegions).(*schema.Set).List() {
regions = append(regions, region.(string))
}
@@ -332,7 +360,7 @@ func awsDeleteCnpAccount(ctx context.Context, d *schema.ResourceData, m interfac
if err != nil {
return diag.FromErr(err)
}
- deleteSnapshots := d.Get("delete_snapshots_on_destroy").(bool)
+ deleteSnapshots := d.Get(keyDeleteSnapshotsOnDestroy).(bool)
// Request the cloud account.
account, err := aws.Wrap(client).Account(ctx, aws.CloudAccountID(id), core.FeatureAll)
@@ -356,10 +384,41 @@ func awsDeleteCnpAccount(ctx context.Context, d *schema.ResourceData, m interfac
// Reset ID.
d.SetId("")
-
return nil
}
+func featureResource() *schema.Resource {
+ return &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ keyName: {
+ Type: schema.TypeString,
+ Required: true,
+ Description: "RSC feature name. Possible values are `CLOUD_NATIVE_ARCHIVAL`, " +
+ "`CLOUD_NATIVE_PROTECTION`, `CLOUD_NATIVE_S3_PROTECTION`, `EXOCOMPUTE` and `RDS_PROTECTION`.",
+ ValidateFunc: validation.StringInSlice([]string{
+ "CLOUD_NATIVE_ARCHIVAL", "CLOUD_NATIVE_PROTECTION", "CLOUD_NATIVE_S3_PROTECTION", "EXOCOMPUTE",
+ "RDS_PROTECTION",
+ }, false),
+ },
+ keyPermissionGroups: {
+ Type: schema.TypeSet,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ ValidateFunc: validation.StringInSlice([]string{
+ "BASIC", "EXPORT_AND_RESTORE", "FILE_LEVEL_RECOVERY", "SNAPSHOT_PRIVATE_ACCESS",
+ "PRIVATE_ENDPOINTS", "RSC_MANAGED_CLUSTER",
+ }, false),
+ },
+ Required: true,
+ Description: "RSC permission groups for the feature. Possible values are `BASIC`, " +
+ "`EXPORT_AND_RESTORE`, `FILE_LEVEL_RECOVERY`, `SNAPSHOT_PRIVATE_ACCESS`, `PRIVATE_ENDPOINTS` " +
+ "and `RSC_MANAGED_CLUSTER`. For backwards compatibility, `[]` is interpreted as all applicable " +
+ "permission groups.",
+ },
+ },
+ }
+}
+
func diffFeatures(oldFeatures []core.Feature, newFeatures []core.Feature) ([]core.Feature, []core.Feature) {
oldSet := make(map[string]core.Feature)
for _, feature := range oldFeatures {
diff --git a/internal/provider/resource_aws_cnp_account_attachments.go b/internal/provider/resource_aws_cnp_account_attachments.go
index f0158c6..0423b8b 100644
--- a/internal/provider/resource_aws_cnp_account_attachments.go
+++ b/internal/provider/resource_aws_cnp_account_attachments.go
@@ -34,39 +34,13 @@ import (
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/graphql/core"
)
-var instanceProfileResource = &schema.Resource{
- Schema: map[string]*schema.Schema{
- "key": {
- Type: schema.TypeString,
- Required: true,
- Description: "Instance profile key.",
- ValidateFunc: validation.StringIsNotWhiteSpace,
- },
- "name": {
- Type: schema.TypeString,
- Required: true,
- Description: "AWS instance profile name.",
- ValidateFunc: validation.StringIsNotWhiteSpace,
- },
- },
-}
+const resourceAWSCNPAccountAttachmentsDescription = `
+The ´polaris_aws_cnp_account_attachments´ resource attaches AWS instance profiles
+and AWS roles to an RSC cloud account.
-var roleResource = &schema.Resource{
- Schema: map[string]*schema.Schema{
- "key": {
- Type: schema.TypeString,
- Required: true,
- Description: "Role key.",
- ValidateFunc: validation.StringIsNotWhiteSpace,
- },
- "arn": {
- Type: schema.TypeString,
- Required: true,
- Description: "AWS role ARN.",
- ValidateFunc: validation.StringIsNotWhiteSpace,
- },
- },
-}
+-> **Note:** The ´features´ field takes only the feature names and not the permission
+ groups associated with the features.
+`
func resourceAwsCnpAccountAttachments() *schema.Resource {
return &schema.Resource{
@@ -75,33 +49,43 @@ func resourceAwsCnpAccountAttachments() *schema.Resource {
UpdateContext: awsUpdateCnpAccountAttachments,
DeleteContext: awsDeleteCnpAccountAttachments,
+ Description: description(resourceAWSCNPAccountAttachmentsDescription),
Schema: map[string]*schema.Schema{
- "account_id": {
+ keyID: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "RSC cloud account ID (UUID).",
+ },
+ keyAccountID: {
Type: schema.TypeString,
Required: true,
ForceNew: true,
- Description: "RSC account id.",
- ValidateFunc: validation.StringIsNotWhiteSpace,
+ Description: "RSC cloud account ID (UUID). Changing this forces a new resource to be created.",
+ ValidateFunc: validation.IsUUID,
},
- "features": {
+ keyFeatures: {
Type: schema.TypeSet,
Elem: &schema.Schema{
- Type: schema.TypeString,
- ValidateFunc: validation.StringIsNotWhiteSpace,
+ Type: schema.TypeString,
+ ValidateFunc: validation.StringInSlice([]string{
+ "CLOUD_NATIVE_ARCHIVAL", "CLOUD_NATIVE_PROTECTION", "CLOUD_NATIVE_S3_PROTECTION",
+ "EXOCOMPUTE", "RDS_PROTECTION",
+ }, false),
},
- MinItems: 1,
- Required: true,
- Description: "RSC features.",
+ MinItems: 1,
+ Required: true,
+ Description: "RSC features. Possible values are `CLOUD_NATIVE_ARCHIVAL`, `CLOUD_NATIVE_PROTECTION`, " +
+ "`CLOUD_NATIVE_S3_PROTECTION`, `EXOCOMPUTE` and `RDS_PROTECTION`.",
},
- "instance_profile": {
+ keyInstanceProfile: {
Type: schema.TypeSet,
- Elem: instanceProfileResource,
+ Elem: instanceProfileResource(),
Optional: true,
Description: "Instance profiles to attach to the cloud account.",
},
- "role": {
+ keyRole: {
Type: schema.TypeSet,
- Elem: roleResource,
+ Elem: roleResource(),
Required: true,
Description: "Roles to attach to the cloud account.",
},
@@ -117,24 +101,23 @@ func awsCreateCnpAccountAttachments(ctx context.Context, d *schema.ResourceData,
return diag.FromErr(err)
}
- // Get attributes.
- accountID, err := uuid.Parse(d.Get("account_id").(string))
+ accountID, err := uuid.Parse(d.Get(keyAccountID).(string))
if err != nil {
return diag.FromErr(err)
}
var features []core.Feature
- for _, feature := range d.Get("features").(*schema.Set).List() {
+ for _, feature := range d.Get(keyFeatures).(*schema.Set).List() {
features = append(features, core.Feature{Name: feature.(string)})
}
profiles := make(map[string]string)
- for _, roleAttr := range d.Get("instance_profile").(*schema.Set).List() {
+ for _, roleAttr := range d.Get(keyInstanceProfile).(*schema.Set).List() {
block := roleAttr.(map[string]any)
- profiles[block["key"].(string)] = block["name"].(string)
+ profiles[block["key"].(string)] = block[keyName].(string)
}
roles := make(map[string]string)
- for _, roleAttr := range d.Get("role").(*schema.Set).List() {
+ for _, roleAttr := range d.Get(keyRole).(*schema.Set).List() {
block := roleAttr.(map[string]any)
- roles[block["key"].(string)] = block["arn"].(string)
+ roles[block[keyKey].(string)] = block[keyARN].(string)
}
// Request artifacts be added to account.
@@ -143,9 +126,7 @@ func awsCreateCnpAccountAttachments(ctx context.Context, d *schema.ResourceData,
return diag.FromErr(err)
}
- // Set attributes.
d.SetId(id.String())
-
awsReadCnpAccountAttachments(ctx, d, m)
return nil
}
@@ -158,7 +139,6 @@ func awsReadCnpAccountAttachments(ctx context.Context, d *schema.ResourceData, m
return diag.FromErr(err)
}
- // Get attributes.
id, err := uuid.Parse(d.Id())
if err != nil {
return diag.FromErr(err)
@@ -184,24 +164,23 @@ func awsReadCnpAccountAttachments(ctx context.Context, d *schema.ResourceData, m
return diag.FromErr(err)
}
- // Set attributes.
- if err := d.Set("features", features); err != nil {
+ if err := d.Set(keyFeatures, features); err != nil {
return diag.FromErr(err)
}
- instanceProfilesAttr := &schema.Set{F: schema.HashResource(instanceProfileResource)}
+ instanceProfilesAttr := &schema.Set{F: schema.HashResource(instanceProfileResource())}
for key, name := range instanceProfiles {
- instanceProfilesAttr.Add(map[string]any{"key": key, "name": name})
+ instanceProfilesAttr.Add(map[string]any{keyKey: key, keyName: name})
}
- if err := d.Set("instance_profile", instanceProfilesAttr); err != nil {
+ if err := d.Set(keyInstanceProfile, instanceProfilesAttr); err != nil {
return diag.FromErr(err)
}
- rolesAttr := &schema.Set{F: schema.HashResource(roleResource)}
+ rolesAttr := &schema.Set{F: schema.HashResource(roleResource())}
for key, arn := range roles {
- rolesAttr.Add(map[string]any{"key": key, "arn": arn})
+ rolesAttr.Add(map[string]any{keyKey: key, keyARN: arn})
}
- if err := d.Set("role", rolesAttr); err != nil {
+ if err := d.Set(keyRole, rolesAttr); err != nil {
return diag.FromErr(err)
}
@@ -216,24 +195,23 @@ func awsUpdateCnpAccountAttachments(ctx context.Context, d *schema.ResourceData,
return diag.FromErr(err)
}
- // Get attributes.
id, err := uuid.Parse(d.Id())
if err != nil {
return diag.FromErr(err)
}
var features []core.Feature
- for _, feature := range d.Get("features").(*schema.Set).List() {
+ for _, feature := range d.Get(keyFeatures).(*schema.Set).List() {
features = append(features, core.Feature{Name: feature.(string)})
}
profiles := make(map[string]string)
- for _, roleAttr := range d.Get("instance_profile").(*schema.Set).List() {
+ for _, roleAttr := range d.Get(keyInstanceProfile).(*schema.Set).List() {
block := roleAttr.(map[string]any)
- profiles[block["key"].(string)] = block["name"].(string)
+ profiles[block[keyKey].(string)] = block[keyName].(string)
}
roles := make(map[string]string)
- for _, roleAttr := range d.Get("role").(*schema.Set).List() {
+ for _, roleAttr := range d.Get(keyRole).(*schema.Set).List() {
block := roleAttr.(map[string]any)
- roles[block["key"].(string)] = block["arn"].(string)
+ roles[block[keyKey].(string)] = block[keyARN].(string)
}
// Request artifacts be added to account.
@@ -253,3 +231,41 @@ func awsDeleteCnpAccountAttachments(ctx context.Context, d *schema.ResourceData,
return nil
}
+
+func instanceProfileResource() *schema.Resource {
+ return &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ keyKey: {
+ Type: schema.TypeString,
+ Required: true,
+ Description: "RSC artifact key for the AWS instance profile.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ keyName: {
+ Type: schema.TypeString,
+ Required: true,
+ Description: "AWS instance profile name.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ },
+ }
+}
+
+func roleResource() *schema.Resource {
+ return &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ keyKey: {
+ Type: schema.TypeString,
+ Required: true,
+ Description: "RSC artifact key for the AWS role.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ keyARN: {
+ Type: schema.TypeString,
+ Required: true,
+ Description: "AWS role ARN.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ },
+ }
+}
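The attachments resource defined in this file pairs RSC artifact keys with AWS IAM names and ARNs; a hypothetical configuration sketch (the artifact keys and IAM references shown are placeholders, not values confirmed by this diff):

```terraform
resource "polaris_aws_cnp_account_attachments" "example" {
  account_id = polaris_aws_cnp_account.example.id
  features   = ["CLOUD_NATIVE_PROTECTION", "EXOCOMPUTE"]

  role {
    key = "CROSSACCOUNT"
    arn = aws_iam_role.crossaccount.arn
  }

  instance_profile {
    key  = "EXOCOMPUTE_EKS_WORKER_NODE"
    name = aws_iam_instance_profile.worker.name
  }
}
```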
diff --git a/internal/provider/resource_aws_cnp_account_trust_policy.go b/internal/provider/resource_aws_cnp_account_trust_policy.go
index e0e46f7..f514099 100644
--- a/internal/provider/resource_aws_cnp_account_trust_policy.go
+++ b/internal/provider/resource_aws_cnp_account_trust_policy.go
@@ -36,6 +36,15 @@ import (
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/graphql/core"
)
+const resourceAWSCNPAccountTrustPolicyDescription = `
+The ´aws_cnp_account_trust_policy´ resource gets the AWS IAM trust policies required
+by RSC. The ´policy´ field of the ´aws_cnp_account_trust_policy´ resource should be
+used with the ´assume_role_policy´ field of the ´aws_iam_role´ resource.
+
+-> **Note:** The ´features´ field takes only the feature names and not the permission
+ groups associated with the features.
+`
+
func resourceAwsCnpAccountTrustPolicy() *schema.Resource {
return &schema.Resource{
CreateContext: awsCreateCnpAccountTrustPolicy,
@@ -43,40 +52,51 @@ func resourceAwsCnpAccountTrustPolicy() *schema.Resource {
UpdateContext: awsUpdateCnpAccountTrustPolicy,
DeleteContext: awsDeleteCnpAccountTrustPolicy,
+ Description: description(resourceAWSCNPAccountTrustPolicyDescription),
Schema: map[string]*schema.Schema{
- "account_id": {
+ keyID: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "RSC cloud account ID (UUID).",
+ },
+ keyAccountID: {
Type: schema.TypeString,
Required: true,
ForceNew: true,
- Description: "RSC account id.",
- ValidateFunc: validation.StringIsNotWhiteSpace,
+ Description: "RSC cloud account ID (UUID). Changing this forces a new resource to be created.",
+ ValidateFunc: validation.IsUUID,
},
- "external_id": {
+ keyExternalID: {
Type: schema.TypeString,
Optional: true,
ForceNew: true,
- Description: "External id.",
+ Description: "External ID. Changing this forces a new resource to be created.",
},
- "features": {
+ keyFeatures: {
Type: schema.TypeSet,
Elem: &schema.Schema{
- Type: schema.TypeString,
- ValidateFunc: validation.StringIsNotWhiteSpace,
+ Type: schema.TypeString,
+ ValidateFunc: validation.StringInSlice([]string{
+ "CLOUD_NATIVE_ARCHIVAL", "CLOUD_NATIVE_PROTECTION", "CLOUD_NATIVE_S3_PROTECTION",
+ "EXOCOMPUTE", "RDS_PROTECTION",
+ }, false),
},
- MinItems: 1,
- Required: true,
- ForceNew: true,
- Description: "RSC features.",
+ MinItems: 1,
+ Required: true,
+ ForceNew: true,
+ Description: "RSC features. Possible values are `CLOUD_NATIVE_ARCHIVAL`, `CLOUD_NATIVE_PROTECTION`, " +
+ "`CLOUD_NATIVE_S3_PROTECTION`, `EXOCOMPUTE` and `RDS_PROTECTION`. Changing this forces a new " +
+ "resource to be created.",
},
- "policy": {
+ keyPolicy: {
Type: schema.TypeString,
Computed: true,
- Description: "Trust policy.",
+ Description: "AWS IAM trust policy.",
},
- "role_key": {
+ keyRoleKey: {
Type: schema.TypeString,
Required: true,
- Description: "Role key.",
+ Description: "RSC artifact key for the AWS role.",
ValidateFunc: validation.StringIsNotWhiteSpace,
},
},
@@ -92,11 +112,11 @@ func awsCreateCnpAccountTrustPolicy(ctx context.Context, d *schema.ResourceData,
}
// Get attributes.
- accountID := d.Get("account_id").(string)
- externalID := d.Get("external_id").(string)
- roleKey := d.Get("role_key").(string)
+ accountID := d.Get(keyAccountID).(string)
+ externalID := d.Get(keyExternalID).(string)
+ roleKey := d.Get(keyRoleKey).(string)
var features []core.Feature
- for _, feature := range d.Get("features").(*schema.Set).List() {
+ for _, feature := range d.Get(keyFeatures).(*schema.Set).List() {
features = append(features, core.Feature{Name: feature.(string)})
}
@@ -107,7 +127,7 @@ func awsCreateCnpAccountTrustPolicy(ctx context.Context, d *schema.ResourceData,
}
// Set attributes.
- if err := d.Set("policy", policy); err != nil {
+ if err := d.Set(keyPolicy, policy); err != nil {
return diag.FromErr(err)
}
d.SetId(accountID)
@@ -129,7 +149,7 @@ func awsReadCnpAccountTrustPolicy(ctx context.Context, d *schema.ResourceData, m
if err != nil {
return diag.FromErr(err)
}
- roleKey := d.Get("role_key").(string)
+ roleKey := d.Get(keyRoleKey).(string)
// Request the cloud account.
account, err := aws.Wrap(client).Account(ctx, aws.CloudAccountID(id), core.FeatureAll)
@@ -156,10 +176,10 @@ func awsReadCnpAccountTrustPolicy(ctx context.Context, d *schema.ResourceData, m
for _, feature := range features {
featuresAttr.Add(feature.Name)
}
- if err := d.Set("features", featuresAttr); err != nil {
+ if err := d.Set(keyFeatures, featuresAttr); err != nil {
return diag.FromErr(err)
}
- if err := d.Set("policy", policy); err != nil {
+ if err := d.Set(keyPolicy, policy); err != nil {
return diag.FromErr(err)
}
@@ -175,9 +195,9 @@ func awsUpdateCnpAccountTrustPolicy(ctx context.Context, d *schema.ResourceData,
}
// Get attributes.
- roleKey := d.Get("role_key").(string)
+ roleKey := d.Get(keyRoleKey).(string)
var features []core.Feature
- for _, feature := range d.Get("features").(*schema.Set).List() {
+ for _, feature := range d.Get(keyFeatures).(*schema.Set).List() {
features = append(features, core.Feature{Name: feature.(string)})
}
@@ -193,7 +213,7 @@ func awsUpdateCnpAccountTrustPolicy(ctx context.Context, d *schema.ResourceData,
}
// Set attributes.
- if err := d.Set("policy", policy); err != nil {
+ if err := d.Set(keyPolicy, policy); err != nil {
return diag.FromErr(err)
}
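The description constants use `´` in place of backticks because Go raw string literals cannot contain a backtick. The `description` helper wrapping each constant presumably swaps the characters and trims the surrounding whitespace; a minimal sketch under that assumption:

```go
package main

import (
	"fmt"
	"strings"
)

// description is a sketch of the assumed helper: replace the ´ stand-in
// with a real backtick and trim the leading/trailing newlines of the
// raw string constant.
func description(s string) string {
	return strings.TrimSpace(strings.ReplaceAll(s, "´", "`"))
}

func main() {
	fmt.Println(description("\nThe ´policy´ field.\n"))
}
```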
diff --git a/internal/provider/resource_aws_exocompute.go b/internal/provider/resource_aws_exocompute.go
index 5b6885b..72a7f3d 100644
--- a/internal/provider/resource_aws_exocompute.go
+++ b/internal/provider/resource_aws_exocompute.go
@@ -34,67 +34,106 @@ import (
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/graphql"
)
-// resourceAwsExocompute defines the schema for the AWS exocompute resource.
+const resourceAWSExocomputeDescription = `
+The ´polaris_aws_exocompute´ resource creates an RSC Exocompute configuration for AWS
+workloads.
+
+There are 3 types of Exocompute configurations:
+ 1. *RSC Managed Host* - When an RSC managed host configuration is created, RSC will
+ automatically deploy the necessary resources in the specified AWS region to run the
+ Exocompute service. AWS security groups can be managed by RSC or by the customer.
+ 2. *Customer Managed Host* - When a customer managed host configuration is created,
+ RSC will not deploy any resources. Instead, it will use the AWS EKS cluster attached
+ by the customer, using the ´aws_exocompute_cluster_attachment´ resource, for all
+ operations.
+ 3. *Application* - An application configuration is created by mapping the application
+ cloud account to a host cloud account. The application cloud account will leverage
+ the Exocompute resources deployed for the host configuration.
+
+Items 1 and 2 above require that the AWS account has been onboarded with the
+´EXOCOMPUTE´ feature.
+
+Since there are 3 types of Exocompute configurations, there are 3 ways to create a
+´polaris_aws_exocompute´ resource:
+ 1. Using the ´cloud_account_id´, ´region´, ´subnet´ and ´pod_overlay_network_cidr´
+ fields creates an RSC managed host configuration.
+ 2. Using the ´cloud_account_id´ and ´region´ fields creates a customer managed host
+ configuration. Note, the ´aws_exocompute_cluster_attachment´ resource must be used
+ to attach an AWS EKS cluster to the Exocompute configuration.
+ 3. Using the ´cloud_account_id´ and ´host_cloud_account_id´ fields creates an
+ application configuration.
+
+-> **Note:** Customer-managed Exocompute is sometimes referred to as Bring Your Own
+ Kubernetes (BYOK). Using both host and application Exocompute configurations is
+ sometimes referred to as shared Exocompute.
+`
+
func resourceAwsExocompute() *schema.Resource {
return &schema.Resource{
CreateContext: awsCreateExocompute,
ReadContext: awsReadExocompute,
DeleteContext: awsDeleteExocompute,
+ Description: description(resourceAWSExocomputeDescription),
Schema: map[string]*schema.Schema{
- "account_id": {
- Type: schema.TypeString,
- Required: true,
- ForceNew: true,
- Description: "RSC account id.",
- ValidateDiagFunc: validation.ToDiagFunc(validation.StringIsNotWhiteSpace),
+ keyID: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "Exocompute configuration ID (UUID).",
},
- "cluster_security_group_id": {
- Type: schema.TypeString,
- Optional: true,
- Computed: true,
- ForceNew: true,
- ConflictsWith: []string{"host_account_id"},
- RequiredWith: []string{"node_security_group_id"},
- Description: "AWS security group id for the cluster.",
- ValidateDiagFunc: validation.ToDiagFunc(validation.StringIsNotWhiteSpace),
+ keyAccountID: {
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ Description: "RSC cloud account ID (UUID). Changing this forces a new resource to be created.",
+ ValidateFunc: validation.IsUUID,
},
- "host_account_id": {
- Type: schema.TypeString,
- Optional: true,
- ForceNew: true,
- AtLeastOneOf: []string{"host_account_id", "region"},
- Description: "Shared exocompute host RSC account id.",
- ValidateDiagFunc: validation.ToDiagFunc(validation.StringIsNotWhiteSpace),
+ keyClusterSecurityGroupID: {
+ Type: schema.TypeString,
+ Optional: true,
+ Computed: true,
+ ForceNew: true,
+ ConflictsWith: []string{"host_account_id"},
+ RequiredWith: []string{"node_security_group_id"},
+ Description: "AWS security group ID for the cluster. Changing this forces a new resource to be " +
+ "created.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
},
- "node_security_group_id": {
- Type: schema.TypeString,
- Optional: true,
- Computed: true,
- ForceNew: true,
- ConflictsWith: []string{"host_account_id"},
- RequiredWith: []string{"cluster_security_group_id"},
- Description: "AWS security group id for the nodes.",
- ValidateDiagFunc: validation.ToDiagFunc(validation.StringIsNotWhiteSpace),
+ keyHostAccountID: {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ AtLeastOneOf: []string{"host_account_id", "region"},
+ Description: "Exocompute host cloud account ID. Changing this forces a new resource to be created.",
+ ValidateFunc: validation.IsUUID,
},
- "polaris_managed": {
- Type: schema.TypeBool,
+ keyNodeSecurityGroupID: {
+ Type: schema.TypeString,
Optional: true,
Computed: true,
ForceNew: true,
ConflictsWith: []string{"host_account_id"},
- Description: "If true the security groups are managed by Polaris.",
+ RequiredWith: []string{"cluster_security_group_id"},
+ Description: "AWS security group ID for the nodes. Changing this forces a new resource to be " +
+ "created.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ keyPolarisManaged: {
+ Type: schema.TypeBool,
+ Computed: true,
+ Description: "If true the security groups are managed by RSC.",
},
- "region": {
- Type: schema.TypeString,
- Optional: true,
- ForceNew: true,
- AtLeastOneOf: []string{"host_account_id", "region"},
- ConflictsWith: []string{"host_account_id"},
- Description: "AWS region to run the exocompute instance in.",
- ValidateDiagFunc: validateAwsRegion,
+ keyRegion: {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ AtLeastOneOf: []string{"host_account_id", "region"},
+ ConflictsWith: []string{"host_account_id"},
+ Description: "AWS region to run the Exocompute instance in. Changing this forces a new resource " +
+ "to be created.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
},
- "subnets": {
+ keySubnets: {
Type: schema.TypeSet,
Elem: &schema.Schema{
Type: schema.TypeString,
@@ -105,24 +144,22 @@ func resourceAwsExocompute() *schema.Resource {
ForceNew: true,
ConflictsWith: []string{"host_account_id"},
RequiredWith: []string{"vpc_id"},
- Description: "AWS subnet ids for the cluster subnets.",
+ Description: "AWS subnet IDs for the cluster subnets. Changing this forces a new resource to be " +
+ "created.",
},
- "vpc_id": {
- Type: schema.TypeString,
- Optional: true,
- ForceNew: true,
- ConflictsWith: []string{"host_account_id"},
- RequiredWith: []string{"subnets"},
- Description: "AWS VPC id for the cluster network.",
- ValidateDiagFunc: validation.ToDiagFunc(validation.StringIsNotWhiteSpace),
+ keyVPCID: {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ ConflictsWith: []string{"host_account_id"},
+ RequiredWith: []string{"subnets"},
+ Description: "AWS VPC ID for the cluster network. Changing this forces a new resource to be created.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
},
},
}
}
-// awsCreateExocompute run the Create operation for the AWS exocompute
-// resource. This enables the exocompute feature and adds an exocompute config
-// to the Polaris cloud account.
func awsCreateExocompute(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
log.Print("[TRACE] awsCreateExocompute")
@@ -131,12 +168,12 @@ func awsCreateExocompute(ctx context.Context, d *schema.ResourceData, m interfac
return diag.FromErr(err)
}
- accountID, err := uuid.Parse(d.Get("account_id").(string))
+ accountID, err := uuid.Parse(d.Get(keyAccountID).(string))
if err != nil {
return diag.FromErr(err)
}
- if host, ok := d.GetOk("host_account_id"); ok {
+ if host, ok := d.GetOk(keyHostAccountID); ok {
hostID, err := uuid.Parse(host.(string))
if err != nil {
return diag.FromErr(err)
@@ -147,14 +184,14 @@ func awsCreateExocompute(ctx context.Context, d *schema.ResourceData, m interfac
}
d.SetId("app-" + accountID.String())
} else {
- clusterSecurityGroupID := d.Get("cluster_security_group_id").(string)
- nodeSecurityGroupID := d.Get("node_security_group_id").(string)
- region := d.Get("region").(string)
+ clusterSecurityGroupID := d.Get(keyClusterSecurityGroupID).(string)
+ nodeSecurityGroupID := d.Get(keyNodeSecurityGroupID).(string)
+ region := d.Get(keyRegion).(string)
var subnets []string
- for _, s := range d.Get("subnets").(*schema.Set).List() {
+ for _, s := range d.Get(keySubnets).(*schema.Set).List() {
subnets = append(subnets, s.(string))
}
- vpcID := d.Get("vpc_id").(string)
+ vpcID := d.Get(keyVPCID).(string)
// Note that Managed and Unmanaged below refer to whether the security
// groups are managed by RSC or not, and not the cluster.
@@ -181,8 +218,6 @@ func awsCreateExocompute(ctx context.Context, d *schema.ResourceData, m interfac
return nil
}
-// awsReadExocompute run the Read operation for the AWS exocompute resource.
-// This reads the state of the exocompute config in Polaris.
func awsReadExocompute(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
log.Print("[TRACE] awsReadExocompute")
@@ -192,8 +227,8 @@ func awsReadExocompute(ctx context.Context, d *schema.ResourceData, m interface{
}
id := d.Id()
- if strings.HasPrefix(d.Id(), "app-") {
- appID, err := uuid.Parse(strings.TrimPrefix(id, "app-"))
+ if strings.HasPrefix(d.Id(), appCloudAccountPrefix) {
+ appID, err := uuid.Parse(strings.TrimPrefix(id, appCloudAccountPrefix))
if err != nil {
return diag.FromErr(err)
}
@@ -206,7 +241,7 @@ func awsReadExocompute(ctx context.Context, d *schema.ResourceData, m interface{
return diag.FromErr(err)
}
- if err := d.Set("host_account_id", hostID.String()); err != nil {
+ if err := d.Set(keyHostAccountID, hostID.String()); err != nil {
return diag.FromErr(err)
}
} else {
@@ -223,28 +258,28 @@ func awsReadExocompute(ctx context.Context, d *schema.ResourceData, m interface{
return diag.FromErr(err)
}
- if err := d.Set("region", exoConfig.Region); err != nil {
+ if err := d.Set(keyRegion, exoConfig.Region); err != nil {
return diag.FromErr(err)
}
// Rubrik managed cluster
- if err := d.Set("cluster_security_group_id", exoConfig.ClusterSecurityGroupID); err != nil {
+ if err := d.Set(keyClusterSecurityGroupID, exoConfig.ClusterSecurityGroupID); err != nil {
return diag.FromErr(err)
}
- if err := d.Set("node_security_group_id", exoConfig.NodeSecurityGroupID); err != nil {
+ if err := d.Set(keyNodeSecurityGroupID, exoConfig.NodeSecurityGroupID); err != nil {
return diag.FromErr(err)
}
- if err := d.Set("polaris_managed", exoConfig.ManagedByRubrik); err != nil {
+ if err := d.Set(keyPolarisManaged, exoConfig.ManagedByRubrik); err != nil {
return diag.FromErr(err)
}
subnets := schema.Set{F: schema.HashString}
for _, subnet := range exoConfig.Subnets {
subnets.Add(subnet.ID)
}
- if err := d.Set("subnets", &subnets); err != nil {
+ if err := d.Set(keySubnets, &subnets); err != nil {
return diag.FromErr(err)
}
- if err := d.Set("vpc_id", exoConfig.VPCID); err != nil {
+ if err := d.Set(keyVPCID, exoConfig.VPCID); err != nil {
return diag.FromErr(err)
}
}
@@ -252,8 +287,6 @@ func awsReadExocompute(ctx context.Context, d *schema.ResourceData, m interface{
return nil
}
-// awsDeleteExocompute run the Delete operation for the AWS exocompute
-// resource. This removes the exocompute config from Polaris.
func awsDeleteExocompute(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
log.Print("[TRACE] awsDeleteExocompute")
@@ -263,8 +296,8 @@ func awsDeleteExocompute(ctx context.Context, d *schema.ResourceData, m interfac
}
id := d.Id()
- if strings.HasPrefix(id, "app-") {
- appID, err := uuid.Parse(strings.TrimPrefix(id, "app-"))
+ if strings.HasPrefix(id, appCloudAccountPrefix) {
+ appID, err := uuid.Parse(strings.TrimPrefix(id, appCloudAccountPrefix))
if err != nil {
return diag.FromErr(err)
}
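The hunks above replace the literal `"app-"` with the `appCloudAccountPrefix` constant, which marks resource IDs that refer to an application Exocompute mapping rather than a host configuration. A sketch of that ID round-trip, with the constant's value assumed and the UUID kept as a plain string to stay self-contained:

```go
package main

import (
	"fmt"
	"strings"
)

// appCloudAccountPrefix marks resource IDs that refer to an application
// Exocompute mapping (assumed value, matching the replaced literal).
const appCloudAccountPrefix = "app-"

// splitExocomputeID strips the application prefix, if present, and reports
// whether the ID referred to an application mapping.
func splitExocomputeID(id string) (string, bool) {
	if strings.HasPrefix(id, appCloudAccountPrefix) {
		return strings.TrimPrefix(id, appCloudAccountPrefix), true
	}
	return id, false
}

func main() {
	// Example UUID, not a real cloud account ID.
	id, isApp := splitExocomputeID("app-01234567-89ab-cdef-0123-456789abcdef")
	fmt.Println(id, isApp)
}
```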
diff --git a/internal/provider/resource_aws_exocompute_cluster_attachment.go b/internal/provider/resource_aws_exocompute_cluster_attachment.go
index 2a9fb71..a7049c7 100644
--- a/internal/provider/resource_aws_exocompute_cluster_attachment.go
+++ b/internal/provider/resource_aws_exocompute_cluster_attachment.go
@@ -31,6 +31,12 @@ import (
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/aws"
)
+const awsExocomputeClusterAttachmentDescription = `
+The ´polaris_aws_exocompute_cluster_attachment´ resource attaches an AWS EKS cluster
+to a customer managed host Exocompute configuration, allowing RSC to use the cluster
+for Exocompute operations.
+`
+
func resourceAwsExocomputeClusterAttachment() *schema.Resource {
return &schema.Resource{
CreateContext: awsCreateAwsExocomputeClusterAttachment,
@@ -38,30 +44,46 @@ func resourceAwsExocomputeClusterAttachment() *schema.Resource {
UpdateContext: awsUpdateAwsExocomputeClusterAttachment,
DeleteContext: awsDeleteAwsExocomputeClusterAttachment,
+ Description: description(awsExocomputeClusterAttachmentDescription),
Schema: map[string]*schema.Schema{
- "cluster_name": {
- Type: schema.TypeString,
- Required: true,
- ForceNew: true,
- Description: "AWS EKS cluster name.",
- ValidateDiagFunc: validation.ToDiagFunc(validation.StringIsNotWhiteSpace),
- },
- "connection_command": {
+ keyID: {
Type: schema.TypeString,
Computed: true,
- Description: "Cluster connection command.",
+ Description: "RSC cluster ID (UUID).",
+ },
+ keyClusterName: {
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ Description: "AWS EKS cluster name. Changing this forces a new resource to be created.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
},
- "exocompute_id": {
- Type: schema.TypeString,
- Required: true,
- ForceNew: true,
- Description: "RSC exocompute id.",
- ValidateDiagFunc: validation.ToDiagFunc(validation.StringIsNotWhiteSpace),
+ keyConnectionCommand: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "`kubectl` command which can be executed inside the EKS cluster to create a connection " +
+ "between the cluster and RSC. See " + keySetupYAML + " for an alternative connection method.",
},
- "token_refresh": {
- Type: schema.TypeInt,
- Optional: true,
- Description: "To force a refresh of the token, part of the connection command, increase the value of this field.",
+ keyExocomputeID: {
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ Description: "RSC exocompute configuration ID (UUID). Changing this forces a new resource to be " +
+ "created.",
+ ValidateFunc: validation.IsUUID,
+ },
+ keySetupYAML: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "K8s spec which can be passed to `kubectl apply` inside the EKS cluster to create a " +
+ "connection between the cluster and RSC. See " + keyConnectionCommand + " for an alternative " +
+ "connection method.",
+ },
+ keyTokenRefresh: {
+ Type: schema.TypeInt,
+ Optional: true,
+ Description: "To force a refresh of the token, part of the connection command, increase the value " +
+ "of this field. The token is valid for 24 hours.",
},
},
}
@@ -75,25 +97,24 @@ func awsCreateAwsExocomputeClusterAttachment(ctx context.Context, d *schema.Reso
return diag.FromErr(err)
}
- // Get attributes.
- configID, err := uuid.Parse(d.Get("exocompute_id").(string))
+ configID, err := uuid.Parse(d.Get(keyExocomputeID).(string))
if err != nil {
return diag.FromErr(err)
}
- clusterName := d.Get("cluster_name").(string)
+ clusterName := d.Get(keyClusterName).(string)
- // Request cluster attachment.
- clusterID, cmd, err := aws.Wrap(client).AddClusterToExocomputeConfig(ctx, configID, clusterName)
+ clusterID, kubectlCmd, setupYAML, err := aws.Wrap(client).AddClusterToExocomputeConfig(ctx, configID, clusterName)
if err != nil {
return diag.FromErr(err)
}
-
- // Set read-only attributes.
- d.SetId(clusterID.String())
- if err := d.Set("connection_command", cmd); err != nil {
+ if err := d.Set(keyConnectionCommand, kubectlCmd); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set(keySetupYAML, setupYAML); err != nil {
return diag.FromErr(err)
}
+ d.SetId(clusterID.String())
return nil
}
@@ -102,14 +123,13 @@ func awsReadAwsExocomputeClusterAttachment(ctx context.Context, d *schema.Resour
// There is no way to read the state of the cluster attachment without
// updating the token.
-
return nil
}
func awsUpdateAwsExocomputeClusterAttachment(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
log.Print("[TRACE] awsUpdateAwsExocomputeClusterAttachment")
- if d.HasChange("token_refresh") {
+ if d.HasChange(keyTokenRefresh) {
return awsCreateAwsExocomputeClusterAttachment(ctx, d, m)
}
diff --git a/internal/provider/resource_aws_private_container_registry.go b/internal/provider/resource_aws_private_container_registry.go
index f62c0c5..ca7ceda 100644
--- a/internal/provider/resource_aws_private_container_registry.go
+++ b/internal/provider/resource_aws_private_container_registry.go
@@ -31,6 +31,75 @@ import (
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/aws"
)
+const awsPrivateContainerRegistryDescription = `
+The ´polaris_aws_private_container_registry´ resource enables the private container
+registry (PCR) feature for the RSC customer account. This disables the standard
+Rubrik container registry. Once PCR has been enabled, it can only be disabled by
+Rubrik customer support.
+
+!> **Note:** Creating a ´polaris_aws_private_container_registry´ resource enables
+ the PCR feature for the RSC customer account. Destroying the resource will not
+ disable PCR; it can only be disabled by contacting Rubrik customer support.
+
+~> **Note:** Even though the ´polaris_aws_private_container_registry´ resource ID
+ is an RSC cloud account ID, there can only be a single PCR per RSC customer
+ account.
+
+## Exocompute Image Bundles
+The following GraphQL query can be used to retrieve information about the image
+bundles used by RSC for exocompute:
+´´´graphql
+query ExotaskImageBundle($input: GetExotaskImageBundleInput) {
+ exotaskImageBundle(input: $input) {
+ bundleImages {
+ name
+ sha
+ tag
+ }
+ bundleVersion
+ eksVersion
+ repoUrl
+ }
+}
+´´´
+The ´repoUrl´ field holds the URL of the RSC container registry from which the RSC
+images can be pulled.
+
+The input is an object with the following structure:
+´´´json
+{
+ "input": {
+ "eksVersion": "1.29"
+ }
+}
+´´´
+Where ´eksVersion´ is the version of the customer's EKS cluster. ´eksVersion´ is
+optional; if it is not specified, it defaults to the latest EKS version supported
+by RSC.
+
+The following GraphQL mutation can be used to set the approved bundle version for
+the RSC customer account:
+´´´graphql
+mutation SetBundleApprovalStatus($input: SetBundleApprovalStatusInput!) {
+ setBundleApprovalStatus(input: $input)
+}
+´´´
+The input is an object with the following structure:
+´´´json
+{
+ "input": {
+ "approvalStatus": "APPROVED",
+ "bundleVersion": "1.164",
+ "bundleMetadata": {
+ "eksVersion": "1.29"
+ }
+ }
+}
+´´´
+Where ´approvalStatus´ can be either ´APPROVED´ or ´REJECTED´. ´bundleVersion´ is
+the bundle version being approved or rejected. ´bundleMetadata´ is optional.
+`
+
func resourceAwsPrivateContainerRegistry() *schema.Resource {
return &schema.Resource{
CreateContext: awsCreatePrivateContainerRegistry,
@@ -38,21 +107,28 @@ func resourceAwsPrivateContainerRegistry() *schema.Resource {
UpdateContext: awsUpdatePrivateContainerRegistry,
DeleteContext: awsDeletePrivateContainerRegistry,
+ Description: description(awsPrivateContainerRegistryDescription),
Schema: map[string]*schema.Schema{
- "account_id": {
+ keyID: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "RSC cloud account ID (UUID).",
+ },
+ keyAccountID: {
Type: schema.TypeString,
Required: true,
ForceNew: true,
- Description: "RSC account id",
- ValidateFunc: validation.StringIsNotWhiteSpace,
+ Description: "RSC cloud account ID (UUID). Changing this forces a new resource to be created.",
+ ValidateFunc: validation.IsUUID,
},
- "native_id": {
- Type: schema.TypeString,
- Required: true,
- Description: "AWS account ID of the AWS account that will pull images from the RSC container registry.",
+ keyNativeID: {
+ Type: schema.TypeString,
+ Required: true,
+ Description: "AWS account ID of the AWS account that will pull images from the RSC container " +
+ "registry.",
ValidateFunc: validation.StringIsNotWhiteSpace,
},
- "url": {
+ keyURL: {
Type: schema.TypeString,
Required: true,
Description: "URL for customer provided private container registry.",
@@ -70,17 +146,17 @@ func awsCreatePrivateContainerRegistry(ctx context.Context, d *schema.ResourceDa
return diag.FromErr(err)
}
- id, err := uuid.Parse(d.Get("account_id").(string))
+ id, err := uuid.Parse(d.Get(keyAccountID).(string))
if err != nil {
return diag.FromErr(err)
}
- nativeID := d.Get("native_id").(string)
- url := d.Get("url").(string)
+ nativeID := d.Get(keyNativeID).(string)
+ url := d.Get(keyURL).(string)
if err := aws.Wrap(client).SetPrivateContainerRegistry(ctx, aws.CloudAccountID(id), url, nativeID); err != nil {
return diag.FromErr(err)
}
- d.SetId(id.String())
+ d.SetId(id.String())
awsReadPrivateContainerRegistry(ctx, d, m)
return nil
}
@@ -103,10 +179,10 @@ func awsReadPrivateContainerRegistry(ctx context.Context, d *schema.ResourceData
return diag.FromErr(err)
}
- if err := d.Set("native_id", nativeID); err != nil {
+ if err := d.Set(keyNativeID, nativeID); err != nil {
return diag.FromErr(err)
}
- if err := d.Set("url", url); err != nil {
+ if err := d.Set(keyURL, url); err != nil {
return diag.FromErr(err)
}
@@ -125,8 +201,8 @@ func awsUpdatePrivateContainerRegistry(ctx context.Context, d *schema.ResourceDa
if err != nil {
return diag.FromErr(err)
}
- nativeID := d.Get("native_id").(string)
- url := d.Get("url").(string)
+ nativeID := d.Get(keyNativeID).(string)
+ url := d.Get(keyURL).(string)
if err := aws.Wrap(client).SetPrivateContainerRegistry(ctx, aws.CloudAccountID(id), url, nativeID); err != nil {
return diag.FromErr(err)
}
diff --git a/internal/provider/resource_azure_archival_location.go b/internal/provider/resource_azure_archival_location.go
new file mode 100644
index 0000000..9c75f01
--- /dev/null
+++ b/internal/provider/resource_azure_archival_location.go
@@ -0,0 +1,382 @@
+// Copyright 2024 Rubrik, Inc.
+//
+// Permission is hereby granted, free of charge, to any person obtaining a copy
+// of this software and associated documentation files (the "Software"), to
+// deal in the Software without restriction, including without limitation the
+// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+// sell copies of the Software, and to permit persons to whom the Software is
+// furnished to do so, subject to the following conditions:
+//
+// The above copyright notice and this permission notice shall be included in
+// all copies or substantial portions of the Software.
+//
+// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+// DEALINGS IN THE SOFTWARE.
+
+package provider
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "log"
+ "regexp"
+
+ "github.com/google/uuid"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
+ "github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/azure"
+ "github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/graphql"
+)
+
+const resourceAzureArchivalLocationDescription = `
+The ´polaris_azure_archival_location´ resource creates an RSC archival location for
+cloud-native workloads. This resource requires that the Azure subscription has been
+onboarded with the ´cloud_native_archival´ feature.
+
+When creating an archival location, the region where the snapshots are stored needs
+to be specified:
+ * ´SOURCE_REGION´ - Store snapshots in the same region to minimize data transfer
+ charges. This is the default behaviour when the ´storage_account_region´ field is
+ not specified.
+ * ´SPECIFIC_REGION´ - Storing snapshots in another region can increase total data
+ transfer charges. The ´storage_account_region´ field specifies the region.
+
+Custom storage encryption is enabled by specifying one or more ´customer_managed_key´
+blocks. Each ´customer_managed_key´ block specifies the encryption details to use for
+a region. For other regions, data will be encrypted using platform managed keys.
+
+-> **Note:** The Azure storage account is not created until the first protected object
+ is archived to the location.
+`
+
+func resourceAzureArchivalLocation() *schema.Resource {
+ return &schema.Resource{
+ CreateContext: azureCreateArchivalLocation,
+ ReadContext: azureReadArchivalLocation,
+ UpdateContext: azureUpdateArchivalLocation,
+ DeleteContext: azureDeleteArchivalLocation,
+
+ Description: description(resourceAzureArchivalLocationDescription),
+ Schema: map[string]*schema.Schema{
+ keyID: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "Cloud native archival location ID (UUID).",
+ },
+ keyCloudAccountID: {
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ Description: "RSC cloud account ID. Changing this forces a new resource to be created.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ keyConnectionStatus: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "Connection status of the cloud native archival location.",
+ },
+ keyContainerName: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "Azure storage container name.",
+ },
+ keyCustomerManagedKey: {
+ Type: schema.TypeSet,
+ Elem: customerKeyResource(),
+ Optional: true,
+ Description: "Customer managed storage encryption. Specify the regions and their respective " +
+ "encryption details. For other regions, data will be encrypted using platform managed keys.",
+ },
+ keyLocationTemplate: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "RSC location template. If a storage account region was specified, it will be " +
+ "`SPECIFIC_REGION`, otherwise `SOURCE_REGION`.",
+ },
+ keyName: {
+ Type: schema.TypeString,
+ Required: true,
+ Description: "Cloud native archival location name.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ keyRedundancy: {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Default: "LRS",
+ Description: "Azure storage redundancy. Possible values are `GRS`, `GZRS`, `LRS`, `RA_GRS`, " +
+ "`RA_GZRS` and `ZRS`. Default value is `LRS`. Changing this forces a new resource to be created.",
+ ValidateFunc: validation.StringInSlice([]string{"GRS", "GZRS", "LRS", "RA_GRS", "RA_GZRS", "ZRS"}, false),
+ },
+ keyStorageAccountNamePrefix: {
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ Description: "Azure storage account name prefix. The storage account name prefix cannot be longer " +
+ "than 14 characters and can only consist of numbers and lowercase letters. Changing this forces " +
+ "a new resource to be created.",
+ ValidateFunc: validation.All(validation.StringLenBetween(1, 14),
+ validation.StringMatch(regexp.MustCompile("^[a-z0-9]*$"),
+ "storage account name may only contain numbers and lowercase letters")),
+ },
+ keyStorageAccountRegion: {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Description: "Azure region to store the snapshots in. If not specified, the snapshots will be stored " +
+ "in the same region as the workload. Changing this forces a new resource to be created.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ keyStorageAccountTags: {
+ Type: schema.TypeMap,
+ Optional: true,
+ Description: "Azure storage account tags. Each tag will be added to the storage account created by " +
+ "RSC.",
+ },
+ keyStorageTier: {
+ Type: schema.TypeString,
+ Optional: true,
+ Default: "COOL",
+ Description: "Azure storage tier. Possible values are `COOL` and `HOT`. Default value is `COOL`.",
+ ValidateFunc: validation.StringInSlice([]string{"COOL", "HOT"}, false),
+ },
+ },
+ }
+}
+
+func azureCreateArchivalLocation(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
+ log.Print("[TRACE] azureCreateArchivalLocation")
+
+ client, err := m.(*client).polaris()
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ accountID, err := uuid.Parse(d.Get(keyCloudAccountID).(string))
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ customerKeys := fromCustomerManagedKeys(d.Get(keyCustomerManagedKey).(*schema.Set))
+ name := d.Get(keyName).(string)
+ redundancy := d.Get(keyRedundancy).(string)
+ storageAccountName := d.Get(keyStorageAccountNamePrefix).(string)
+ storageAccountRegion := d.Get(keyStorageAccountRegion).(string)
+ storageAccountTags, err := fromStorageAccountTags(d.Get(keyStorageAccountTags).(map[string]any))
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ storageTier := d.Get(keyStorageTier).(string)
+
+ // Create the archival location.
+ targetMappingID, err := azure.Wrap(client).CreateStorageSetting(
+ ctx, azure.CloudAccountID(accountID), name, redundancy, storageTier, storageAccountName, storageAccountRegion, storageAccountTags, customerKeys)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ d.SetId(targetMappingID.String())
+ azureReadArchivalLocation(ctx, d, m)
+ return nil
+}
+
+func azureReadArchivalLocation(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
+ log.Print("[TRACE] azureReadArchivalLocation")
+
+ client, err := m.(*client).polaris()
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ targetMappingID, err := uuid.Parse(d.Id())
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ // Read the archival location. If the archival location isn't found, we
+ // remove it from the local state and return.
+ targetMapping, err := azure.Wrap(client).TargetMappingByID(ctx, targetMappingID)
+ if errors.Is(err, graphql.ErrNotFound) {
+ d.SetId("")
+ return nil
+ }
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ if err := d.Set(keyConnectionStatus, targetMapping.ConnectionStatus); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set(keyContainerName, targetMapping.ContainerName); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set(keyCustomerManagedKey, toCustomerManagedKeys(targetMapping.CustomerKeys)); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set(keyLocationTemplate, targetMapping.LocTemplate); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set(keyName, targetMapping.Name); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set(keyRedundancy, targetMapping.Redundancy); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set(keyStorageAccountNamePrefix, targetMapping.StorageAccountName); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set(keyStorageAccountRegion, targetMapping.StorageAccountRegion); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set(keyStorageAccountTags, toStorageAccountTags(targetMapping.StorageAccountTags)); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set(keyStorageTier, targetMapping.StorageTier); err != nil {
+ return diag.FromErr(err)
+ }
+
+ return nil
+}
+
+func azureUpdateArchivalLocation(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
+ log.Print("[TRACE] azureUpdateArchivalLocation")
+
+ client, err := m.(*client).polaris()
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ targetMappingID, err := uuid.Parse(d.Id())
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ customerKeys := fromCustomerManagedKeys(d.Get(keyCustomerManagedKey).(*schema.Set))
+ name := d.Get(keyName).(string)
+ storageAccountTags, err := fromStorageAccountTags(d.Get(keyStorageAccountTags).(map[string]any))
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ storageTier := d.Get(keyStorageTier).(string)
+
+ // Update the archival location. Note that the RSC API doesn't support
+ // updating all arguments.
+ err = azure.Wrap(client).UpdateStorageSetting(ctx, targetMappingID, name, storageTier, storageAccountTags, customerKeys)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ return nil
+}
+
+func azureDeleteArchivalLocation(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
+ log.Print("[TRACE] azureDeleteArchivalLocation")
+
+ client, err := m.(*client).polaris()
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ targetMappingID, err := uuid.Parse(d.Id())
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ // Delete the archival location.
+ if err := azure.Wrap(client).DeleteTargetMapping(ctx, targetMappingID); err != nil {
+ return diag.FromErr(err)
+ }
+
+ return nil
+}
+
+// customerKeyResource returns the schema for a customer managed key resource.
+func customerKeyResource() *schema.Resource {
+ return &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ keyName: {
+ Type: schema.TypeString,
+ Required: true,
+ Description: "Key name.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ keyRegion: {
+ Type: schema.TypeString,
+ Required: true,
+ Description: "The region in which the key will be used. Regions without customer managed keys will " +
+ "use platform managed keys.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ keyVaultName: {
+ Type: schema.TypeString,
+ Required: true,
+ Description: "Key vault name.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ },
+ }
+}
+
+// fromCustomerManagedKeys converts from the customer managed keys field type
+// to a customer key slice.
+func fromCustomerManagedKeys(customerManagedKeys *schema.Set) []azure.CustomerKey {
+ var customerKeys []azure.CustomerKey
+ for _, key := range customerManagedKeys.List() {
+ key := key.(map[string]any)
+ customerKeys = append(customerKeys, azure.CustomerKey{
+ Name: key[keyName].(string),
+ Region: key[keyRegion].(string),
+ VaultName: key[keyVaultName].(string),
+ })
+ }
+
+ return customerKeys
+}
+
+// toCustomerManagedKeys converts to the customer managed keys field type
+// from a customer key slice.
+func toCustomerManagedKeys(customerKeys []azure.CustomerKey) *schema.Set {
+ customerManagedKeys := &schema.Set{F: schema.HashResource(customerKeyResource())}
+ for _, key := range customerKeys {
+ customerManagedKeys.Add(map[string]any{
+ keyName: key.Name,
+ keyRegion: key.Region,
+ keyVaultName: key.VaultName,
+ })
+ }
+
+ return customerManagedKeys
+}
+
+// fromStorageAccountTags converts from the storage account tags field type to
+// a standard string-to-string map.
+func fromStorageAccountTags(storageAccountTags map[string]any) (map[string]string, error) {
+ tags := make(map[string]string, len(storageAccountTags))
+ for key, value := range storageAccountTags {
+ value, ok := value.(string)
+ if !ok {
+ return nil, fmt.Errorf("storage account tag value for key %q is not a string", key)
+ }
+ tags[key] = value
+ }
+
+ return tags, nil
+}
+
+// toStorageAccountTags converts to the storage account tags field type from a
+// standard string-to-string map.
+func toStorageAccountTags(tags map[string]string) map[string]any {
+ storageAccountTags := make(map[string]any, len(tags))
+ for key, value := range tags {
+ storageAccountTags[key] = value
+ }
+
+ return storageAccountTags
+}
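The storage account name prefix validation in the schema above combines `StringLenBetween(1, 14)` with the `^[a-z0-9]*$` regular expression. A minimal stand-alone sketch of the same rule, collapsed into a single pattern (the helper name is ours, not the provider's):

```go
package main

import (
	"fmt"
	"regexp"
)

// storageAccountPrefixRE mirrors the schema validation: 1 to 14
// characters, lowercase letters and digits only. RSC derives the final
// storage account name from this prefix.
var storageAccountPrefixRE = regexp.MustCompile(`^[a-z0-9]{1,14}$`)

// validStorageAccountPrefix reports whether the prefix satisfies the
// same constraints the schema enforces.
func validStorageAccountPrefix(prefix string) bool {
	return storageAccountPrefixRE.MatchString(prefix)
}

func main() {
	for _, p := range []string{"archive01", "Archive01", "abcdefghijklmno"} {
		fmt.Printf("%q valid=%v\n", p, validStorageAccountPrefix(p))
	}
}
```

Note that `{1,14}` subsumes the separate length check; the provider keeps the two validators apart so each failure produces its own error message.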
diff --git a/internal/provider/resource_azure_exocompute.go b/internal/provider/resource_azure_exocompute.go
index c48e808..7db226e 100644
--- a/internal/provider/resource_azure_exocompute.go
+++ b/internal/provider/resource_azure_exocompute.go
@@ -24,6 +24,7 @@ import (
"context"
"errors"
"log"
+ "strings"
"github.com/google/uuid"
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
@@ -31,36 +32,105 @@ import (
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/azure"
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/graphql"
- "github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/graphql/core"
)
-// resourceAzureExocompute defines the schema for the Azure exocompute resource.
+const resourceAzureExocomputeDescription = `
+The ´polaris_azure_exocompute´ resource creates an RSC Exocompute configuration for
+Azure workloads.
+
+There are 2 types of Exocompute configurations:
+ 1. *Host* - When a host configuration is created, RSC will automatically deploy the
+ necessary resources in the specified Azure region to run the Exocompute service.
+ A host configuration can be used by both the host cloud account and application
+ cloud accounts mapped to the host account.
+ 2. *Application* - An application configuration is created by mapping the application
+ cloud account to a host cloud account. The application cloud account will leverage
+ the Exocompute resources deployed for the host configuration.
+
+Item 1 above requires that the Azure subscription has been onboarded with the
+´exocompute´ feature.
+
+Since there are 2 types of Exocompute configurations, there are 2 ways to create a
+´polaris_azure_exocompute´ resource:
+ 1. Using the ´cloud_account_id´, ´region´, ´subnet´ and ´pod_overlay_network_cidr´
+ fields. This creates a host configuration.
+ 2. Using the ´cloud_account_id´ and ´host_cloud_account_id´ fields. This creates an
+ application configuration.
+
+~> **Note:** A host configuration can be created without specifying the
+ ´pod_overlay_network_cidr´ field, but this is discouraged and should only be
+ done for backwards compatibility reasons.
+
+-> **Note:** Customer-managed Exocompute is sometimes referred to as Bring Your Own
+ Kubernetes (BYOK). Using both host and application Exocompute configurations is
+ sometimes referred to as shared Exocompute.
+`
+
func resourceAzureExocompute() *schema.Resource {
return &schema.Resource{
CreateContext: azureCreateExocompute,
ReadContext: azureReadExocompute,
DeleteContext: azureDeleteExocompute,
+ Description: description(resourceAzureExocomputeDescription),
Schema: map[string]*schema.Schema{
- "subscription_id": {
- Type: schema.TypeString,
- Required: true,
- ForceNew: true,
- Description: "RSC subscription id",
- ValidateDiagFunc: validation.ToDiagFunc(validation.StringIsNotWhiteSpace),
+ keyID: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "Exocompute configuration ID (UUID).",
},
- "region": {
- Type: schema.TypeString,
- Required: true,
- ForceNew: true,
- Description: "Azure region to run the exocompute instance in.",
- ValidateDiagFunc: validateAzureRegion,
+ keyCloudAccountID: {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ ExactlyOneOf: []string{keyCloudAccountID, keySubscriptionID},
+ Description: "RSC cloud account ID. This is the ID of the `polaris_azure_subscription` resource for " +
+ "which the Exocompute service runs. Changing this forces a new resource to be created.",
+ ValidateFunc: validation.IsUUID,
},
- "subnet": {
- Type: schema.TypeString,
- Required: true,
- ForceNew: true,
- Description: "Azure subnet id.",
+ keyHostCloudAccountID: {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ AtLeastOneOf: []string{keyHostCloudAccountID, keyRegion},
+ Description: "RSC cloud account ID of the shared Exocompute host account. Changing this forces a new " +
+ "resource to be created.",
+ ValidateFunc: validation.IsUUID,
+ },
+ keyPodOverlayNetworkCIDR: {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Description: "The CIDR range assigned to pods when launching Exocompute with the CNI overlay network " +
+ "plugin mode. Changing this forces a new resource to be created.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ keyRegion: {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Description: "Azure region to run the exocompute service in. Should be specified in the standard " +
+ "Azure style, e.g. `eastus`. Changing this forces a new resource to be created.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ keySubnet: {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Description: "Azure subnet ID of the cluster subnet corresponding to the Exocompute configuration. " +
+ "This subnet will be used to allocate IP addresses to the nodes of the cluster. Changing this forces " +
+ "a new resource to be created.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ keySubscriptionID: {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Description: "RSC cloud account ID. This is the ID of the `polaris_azure_subscription` resource for " +
+ "which the Exocompute service runs. Changing this forces a new resource to be created. " +
+ "**Deprecated:** use `cloud_account_id` instead.",
+ Deprecated: "use `cloud_account_id` instead.",
+ ValidateFunc: validation.IsUUID,
},
},
SchemaVersion: 1,
@@ -72,9 +142,6 @@ func resourceAzureExocompute() *schema.Resource {
}
}
-// azureCreateExocompute run the Create operation for the Azure exocompute
-// resource. This enables the exocompute feature and adds an exocompute config
-// to the Polaris cloud account.
func azureCreateExocompute(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
log.Print("[TRACE] azureCreateExocompute")
@@ -83,36 +150,44 @@ func azureCreateExocompute(ctx context.Context, d *schema.ResourceData, m interf
return diag.FromErr(err)
}
- accountID, err := uuid.Parse(d.Get("subscription_id").(string))
- if err != nil {
- return diag.FromErr(err)
- }
- account, err := azure.Wrap(client).Subscription(ctx, azure.CloudAccountID(accountID), core.FeatureExocompute)
- if errors.Is(err, graphql.ErrNotFound) {
- return diag.Errorf("exocompute not enabled on account")
+ id := d.Get(keyCloudAccountID).(string)
+ if id == "" {
+ id = d.Get(keySubscriptionID).(string)
}
+ accountID, err := uuid.Parse(id)
if err != nil {
return diag.FromErr(err)
}
- region := d.Get("region").(string)
- if !account.Features[0].HasRegion(region) {
- return diag.Errorf("region %q not available with exocompute feature", region)
- }
-
- config := azure.Managed(region, d.Get("subnet").(string))
- id, err := azure.Wrap(client).AddExocomputeConfig(ctx, azure.CloudAccountID(accountID), config)
- if err != nil {
- return diag.FromErr(err)
+ if hostCloudAccount, ok := d.GetOk(keyHostCloudAccountID); ok {
+ hostCloudAccountID, err := uuid.Parse(hostCloudAccount.(string))
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ err = azure.Wrap(client).MapExocompute(ctx, azure.CloudAccountID(hostCloudAccountID), azure.CloudAccountID(accountID))
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ d.SetId(appCloudAccountPrefix + accountID.String())
+ } else {
+ var exoConfig azure.ExoConfigFunc
+ if podOverlayNetworkCIDR, ok := d.GetOk(keyPodOverlayNetworkCIDR); ok {
+ exoConfig = azure.ManagedWithOverlayNetwork(d.Get(keyRegion).(string), d.Get(keySubnet).(string),
+ podOverlayNetworkCIDR.(string))
+ } else {
+ exoConfig = azure.Managed(d.Get(keyRegion).(string), d.Get(keySubnet).(string))
+ }
+ exoConfigID, err := azure.Wrap(client).AddExocomputeConfig(ctx, azure.CloudAccountID(accountID), exoConfig)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ d.SetId(exoConfigID.String())
}
- d.SetId(id.String())
azureReadExocompute(ctx, d, m)
return nil
}
-// azureReadExocompute run the Read operation for the Azure exocompute
-// resource. This reads the state of the exocompute config in Polaris.
func azureReadExocompute(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
log.Print("[TRACE] azureReadExocompute")
@@ -121,32 +196,53 @@ func azureReadExocompute(ctx context.Context, d *schema.ResourceData, m interfac
return diag.FromErr(err)
}
- id, err := uuid.Parse(d.Id())
- if err != nil {
- return diag.FromErr(err)
- }
+ if id := d.Id(); strings.HasPrefix(id, appCloudAccountPrefix) {
+ appID, err := uuid.Parse(strings.TrimPrefix(id, appCloudAccountPrefix))
+ if err != nil {
+ return diag.FromErr(err)
+ }
- exoConfig, err := azure.Wrap(client).ExocomputeConfig(ctx, id)
- if errors.Is(err, graphql.ErrNotFound) {
- d.SetId("")
- return nil
- }
- if err != nil {
- return diag.FromErr(err)
- }
+ hostID, err := azure.Wrap(client).ExocomputeHostAccount(ctx, azure.CloudAccountID(appID))
+ if errors.Is(err, graphql.ErrNotFound) {
+ d.SetId("")
+ return nil
+ }
+ if err != nil {
+ return diag.FromErr(err)
+ }
- if err := d.Set("region", exoConfig.Region); err != nil {
- return diag.FromErr(err)
- }
- if err := d.Set("subnet", exoConfig.SubnetID); err != nil {
- return diag.FromErr(err)
+ if err := d.Set(keyHostCloudAccountID, hostID.String()); err != nil {
+ return diag.FromErr(err)
+ }
+ } else {
+ exoConfigID, err := uuid.Parse(id)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ exoConfig, err := azure.Wrap(client).ExocomputeConfig(ctx, exoConfigID)
+ if errors.Is(err, graphql.ErrNotFound) {
+ d.SetId("")
+ return nil
+ }
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ if err := d.Set(keyRegion, exoConfig.Region); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set(keySubnet, exoConfig.SubnetID); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set(keyPodOverlayNetworkCIDR, exoConfig.PodOverlayNetworkCIDR); err != nil {
+ return diag.FromErr(err)
+ }
}
return nil
}
-// azureDeleteExocompute run the Delete operation for the Azure exocompute
-// resource. This removes the exocompute config from Polaris.
func azureDeleteExocompute(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
log.Print("[TRACE] azureDeleteExocompute")
@@ -155,14 +251,25 @@ func azureDeleteExocompute(ctx context.Context, d *schema.ResourceData, m interf
return diag.FromErr(err)
}
- id, err := uuid.Parse(d.Id())
- if err != nil {
- return diag.FromErr(err)
- }
+ if id := d.Id(); strings.HasPrefix(id, appCloudAccountPrefix) {
+ appID, err := uuid.Parse(strings.TrimPrefix(id, appCloudAccountPrefix))
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ err = azure.Wrap(client).UnmapExocompute(ctx, azure.CloudAccountID(appID))
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ } else {
+ exoConfigID, err := uuid.Parse(d.Id())
+ if err != nil {
+ return diag.FromErr(err)
+ }
- err = azure.Wrap(client).RemoveExocomputeConfig(ctx, id)
- if err != nil {
- return diag.FromErr(err)
+ err = azure.Wrap(client).RemoveExocomputeConfig(ctx, exoConfigID)
+ if err != nil {
+ return diag.FromErr(err)
+ }
}
d.SetId("")
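The exocompute resource above overloads `d.Id()`: a host configuration stores the Exocompute config UUID, while an application configuration stores the application cloud account UUID behind `appCloudAccountPrefix`, and the read and delete functions branch on that prefix. A minimal sketch of the round trip (the prefix value here is an assumption for illustration; the real constant is defined elsewhere in the provider):

```go
package main

import (
	"fmt"
	"strings"
)

// appCloudAccountPrefix marks resource IDs that refer to application
// Exocompute configurations. The value "app-" is assumed, not taken
// from the provider source.
const appCloudAccountPrefix = "app-"

// isAppConfigID reports whether a resource ID refers to an application
// configuration rather than a host configuration.
func isAppConfigID(id string) bool {
	return strings.HasPrefix(id, appCloudAccountPrefix)
}

// appCloudAccountID strips the prefix, leaving the cloud account UUID
// string to be parsed with uuid.Parse in the real resource functions.
func appCloudAccountID(id string) string {
	return strings.TrimPrefix(id, appCloudAccountPrefix)
}

func main() {
	id := appCloudAccountPrefix + "9b2b7c2e-0000-0000-0000-000000000000"
	fmt.Println(isAppConfigID(id), appCloudAccountID(id))
}
```

Because host IDs are plain UUIDs, the prefix can never collide with them, which is what makes the single `d.Id()` field safe for both configuration types.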
diff --git a/internal/provider/resource_azure_exocompute_test.go b/internal/provider/resource_azure_exocompute_test.go
index 8ee7f59..a8e131d 100644
--- a/internal/provider/resource_azure_exocompute_test.go
+++ b/internal/provider/resource_azure_exocompute_test.go
@@ -42,12 +42,18 @@ resource "polaris_azure_subscription" "default" {
tenant_domain = "{{ .Resource.TenantDomain }}"
cloud_native_protection {
+ resource_group_name = "{{ .Resource.CloudNativeProtection.ResourceGroupName }}"
+ resource_group_region = "{{ .Resource.CloudNativeProtection.ResourceGroupRegion }}"
+
regions = [
"eastus2",
]
}
exocompute {
+ resource_group_name = "{{ .Resource.Exocompute.ResourceGroupName }}"
+ resource_group_region = "{{ .Resource.Exocompute.ResourceGroupRegion }}"
+
regions = [
"eastus2",
]
@@ -86,12 +92,16 @@ func TestAccPolarisAzureExocompute_basic(t *testing.T) {
resource.TestCheckResourceAttr("polaris_azure_subscription.default", "delete_snapshots_on_destroy", "false"),
// Cloud Native Protection feature
- resource.TestCheckResourceAttr("polaris_azure_subscription.default", "cloud_native_protection.0.status", "connected"),
+ resource.TestCheckResourceAttr("polaris_azure_subscription.default", "cloud_native_protection.0.status", "CONNECTED"),
resource.TestCheckResourceAttr("polaris_azure_subscription.default", "cloud_native_protection.0.regions.#", "1"),
resource.TestCheckTypeSetElemAttr("polaris_azure_subscription.default", "cloud_native_protection.0.regions.*", "eastus2"),
+ resource.TestCheckResourceAttr("polaris_azure_subscription.default", "cloud_native_protection.0.resource_group_name",
+ subscription.CloudNativeProtection.ResourceGroupName),
+ resource.TestCheckResourceAttr("polaris_azure_subscription.default", "cloud_native_protection.0.resource_group_region",
+ subscription.CloudNativeProtection.ResourceGroupRegion),
// Exocompute feature
- resource.TestCheckResourceAttr("polaris_azure_subscription.default", "exocompute.0.status", "connected"),
+ resource.TestCheckResourceAttr("polaris_azure_subscription.default", "exocompute.0.status", "CONNECTED"),
resource.TestCheckResourceAttr("polaris_azure_subscription.default", "exocompute.0.regions.#", "1"),
resource.TestCheckTypeSetElemAttr("polaris_azure_subscription.default", "exocompute.0.regions.*", "eastus2"),
diff --git a/internal/provider/resource_azure_exocompute_v0.go b/internal/provider/resource_azure_exocompute_v0.go
index 902cc92..f26d99a 100644
--- a/internal/provider/resource_azure_exocompute_v0.go
+++ b/internal/provider/resource_azure_exocompute_v0.go
@@ -28,7 +28,8 @@ import (
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
)
-// resourceAzureExocompute defines the schema for the Azure exocompute resource.
+// resourceAzureExocomputeV0 defines the schema for version 0 of the Azure
+// exocompute resource and how to migrate to version 1.
func resourceAzureExocomputeV0() *schema.Resource {
return &schema.Resource{
Schema: map[string]*schema.Schema{
@@ -66,7 +67,7 @@ func resourceAzureExocomputeV0() *schema.Resource {
// resourceAzureExocomputeStateUpgradeV0 removes the polaris_managed parameter.
// Exocompute on Azure only supports RSC managed configurations.
func resourceAzureExocomputeStateUpgradeV0(ctx context.Context, state map[string]any, m any) (map[string]any, error) {
- log.Print("[TRACE] resourceAzureExocomputeStateUpgradeV0")
+ log.Print("[TRACE] azureExocomputeStateUpgradeV0")
delete(state, "polaris_managed")
diff --git a/internal/provider/resource_azure_service_principal.go b/internal/provider/resource_azure_service_principal.go
index 3edf54b..6ea0289 100644
--- a/internal/provider/resource_azure_service_principal.go
+++ b/internal/provider/resource_azure_service_principal.go
@@ -31,9 +31,36 @@ import (
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/azure"
)
+const resourceAzureServicePrincipalDescription = `
+The ´polaris_azure_service_principal´ resource adds an Azure service principal to
+RSC. A service principal must be added for each Azure tenant before subscriptions
+for the tenants can be added to RSC.
+
+There are 3 ways to create a ´polaris_azure_service_principal´ resource:
+ 1. Using the ´app_id´, ´app_name´, ´app_secret´, ´tenant_id´ and ´tenant_domain´
+ fields.
+ 2. Using the ´credentials´ field which is the path to a custom service principal
+ file. A description of the custom format can be found
+ [here](https://github.com/rubrikinc/rubrik-polaris-sdk-for-go?tab=readme-ov-file#azure-credentials).
+ 3. Using the ´sdk_auth´ field which is the path to an Azure service principal
+ created with the Azure SDK using the ´--sdk-auth´ parameter.
+
+~> **Note:** Removing the last subscription from an RSC tenant will automatically
+ remove the tenant, which also removes the service principal.
+
+~> **Note:** Destroying the ´polaris_azure_service_principal´ resource only updates
+ the local state; it does not remove the service principal from RSC. However,
+ creating another ´polaris_azure_service_principal´ resource for the same Azure
+ tenant will overwrite the old service principal in RSC.
+
+-> **Note:** There is no way to verify if a service principal has been added to RSC
+ using the UI. RSC tenants don't show up in the UI until the first subscription is
+ added.
+`
+
// resourceAzureServicePrincipal defines the schema for the Azure service
// principal resource. Note that the delete function cannot remove the service
-// principal since there is no delete operation in the Polaris API.
+// principal since there is no delete operation in the RSC API.
func resourceAzureServicePrincipal() *schema.Resource {
return &schema.Resource{
CreateContext: azureCreateServicePrincipal,
@@ -41,67 +68,92 @@ func resourceAzureServicePrincipal() *schema.Resource {
UpdateContext: azureUpdateServicePrincipal,
DeleteContext: azureDeleteServicePrincipal,
+ Description: description(resourceAzureServicePrincipalDescription),
Schema: map[string]*schema.Schema{
- "app_id": {
- Type: schema.TypeString,
- Optional: true,
- ExactlyOneOf: []string{"app_id", "credentials", "sdk_auth"},
- ConflictsWith: []string{"credentials", "sdk_auth"},
- RequiredWith: []string{"app_name", "app_secret", "tenant_id"},
- Description: "App registration application id.",
- ValidateDiagFunc: validation.ToDiagFunc(validation.IsUUID),
+ keyID: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "Azure app registration application ID (UUID). Also known as the client ID. " +
+ "Note that this might change in the future; use the `app_id` field to reference the application ID " +
+ "in configurations.",
+ },
+ keyAppID: {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ ExactlyOneOf: []string{keyAppID, keyCredentials, keySDKAuth},
+ RequiredWith: []string{keyAppName, keyAppSecret, keyTenantID},
+ Description: "Azure app registration application ID. Also known as the client ID. Changing this " +
+ "forces a new resource to be created.",
+ ValidateFunc: validation.IsUUID,
},
- "app_name": {
- Type: schema.TypeString,
- Optional: true,
- RequiredWith: []string{"app_id", "app_secret", "tenant_id"},
- Description: "App registration display name.",
- ValidateDiagFunc: validation.ToDiagFunc(validation.StringIsNotWhiteSpace),
+ keyAppName: {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ RequiredWith: []string{keyAppID, keyAppSecret, keyTenantID},
+ Description: "Azure app registration display name. Changing this forces a new resource to be created.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
},
- "app_secret": {
- Type: schema.TypeString,
- Optional: true,
- Sensitive: true,
- RequiredWith: []string{"app_id", "app_name", "tenant_id"},
- Description: "App registration client secret.",
- ValidateDiagFunc: validation.ToDiagFunc(validation.StringIsNotWhiteSpace),
+ keyAppSecret: {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Sensitive: true,
+ RequiredWith: []string{keyAppID, keyAppName, keyTenantID},
+ Description: "Azure app registration client secret. Changing this forces a new resource to be created.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
},
- "credentials": {
- Type: schema.TypeString,
- Optional: true,
- ForceNew: true,
- ConflictsWith: []string{"app_id", "sdk_auth"},
- Description: "Path to Azure service principal file.",
- ValidateDiagFunc: fileExists,
+ keyCredentials: {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ ExactlyOneOf: []string{keyAppID, keyCredentials, keySDKAuth},
+ Description: "Path to a custom service principal file. Changing this forces a new resource to be " +
+ "created.",
+ ValidateFunc: isExistingFile,
},
- "sdk_auth": {
- Type: schema.TypeString,
- Optional: true,
- ForceNew: true,
- ConflictsWith: []string{"app_id", "credentials"},
- Description: "Path to Azure service principal created with the Azure SDK using the --sdk-auth parameter",
- ValidateDiagFunc: fileExists,
+ keySDKAuth: {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ ExactlyOneOf: []string{keyAppID, keyCredentials, keySDKAuth},
+ Description: "Path to an Azure service principal created with the Azure SDK using the `--sdk-auth` " +
+ "parameter. Changing this forces a new resource to be created.",
+ ValidateFunc: isExistingFile,
},
- "permissions_hash": {
- Type: schema.TypeString,
- Optional: true,
- Description: "Signals that the permissions has been updated.",
- ValidateDiagFunc: validateHash,
+ keyPermissions: {
+ Type: schema.TypeString,
+ Optional: true,
+ Description: "Permissions updated signal. When this field is updated, the provider will notify RSC " +
+ "that permissions have been updated. Use this field with the `polaris_azure_permissions` data " +
+ "source. **Deprecated:** use the `polaris_azure_subscription` resource's `permissions` fields " +
+ "instead.",
+ Deprecated: "use the `polaris_azure_subscription` resource's `permissions` fields instead.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
},
- "tenant_domain": {
- Type: schema.TypeString,
- Required: true,
- ForceNew: true,
- Description: "Tenant directory/domain name.",
- ValidateDiagFunc: validation.ToDiagFunc(validation.StringIsNotWhiteSpace),
+ keyPermissionsHash: {
+ Type: schema.TypeString,
+ Optional: true,
+ Description: "Permissions updated signal. **Deprecated:** use `permissions` instead.",
+ Deprecated: "use `permissions` instead.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
},
- "tenant_id": {
- Type: schema.TypeString,
- Optional: true,
- ForceNew: true,
- RequiredWith: []string{"app_id", "app_name", "app_secret"},
- Description: "Tenant/domain id.",
- ValidateDiagFunc: validation.ToDiagFunc(validation.IsUUID),
+ keyTenantDomain: {
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ Description: "Azure tenant primary domain. Changing this forces a new resource to be created.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ keyTenantID: {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ RequiredWith: []string{keyAppID, keyAppName, keyAppSecret},
+ Description: "Azure tenant ID. Also known as the directory ID. Changing this forces a new resource to " +
+ "be created.",
+ ValidateFunc: validation.IsUUID,
},
},
SchemaVersion: 1,
@@ -114,7 +166,7 @@ func resourceAzureServicePrincipal() *schema.Resource {
}
// azureCreateServicePrincipal run the Create operation for the Azure service
-// principal resource. This adds the Azure service principal to the Polaris
+// principal resource. This adds the Azure service principal to the RSC
// platform.
func azureCreateServicePrincipal(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
log.Print("[TRACE] azureCreateServicePrincipal")
@@ -124,40 +176,39 @@ func azureCreateServicePrincipal(ctx context.Context, d *schema.ResourceData, m
return diag.FromErr(err)
}
- tenantDomain := d.Get("tenant_domain").(string)
+ tenantDomain := d.Get(keyTenantDomain).(string)
var principal azure.ServicePrincipalFunc
switch {
- case d.Get("credentials").(string) != "":
- principal = azure.KeyFile(d.Get("credentials").(string), tenantDomain)
- case d.Get("sdk_auth").(string) != "":
- principal = azure.SDKAuthFile(d.Get("sdk_auth").(string), tenantDomain)
+ case d.Get(keyCredentials).(string) != "":
+ principal = azure.KeyFile(d.Get(keyCredentials).(string), tenantDomain)
+ case d.Get(keySDKAuth).(string) != "":
+ principal = azure.SDKAuthFile(d.Get(keySDKAuth).(string), tenantDomain)
default:
- appID, err := uuid.Parse(d.Get("app_id").(string))
+ appID, err := uuid.Parse(d.Get(keyAppID).(string))
if err != nil {
return diag.FromErr(err)
}
- tenantID, err := uuid.Parse(d.Get("tenant_id").(string))
+ tenantID, err := uuid.Parse(d.Get(keyTenantID).(string))
if err != nil {
return diag.FromErr(err)
}
- principal = azure.ServicePrincipal(appID, d.Get("app_secret").(string), tenantID, tenantDomain)
+ principal = azure.ServicePrincipal(appID, d.Get(keyAppName).(string), d.Get(keyAppSecret).(string), tenantID, tenantDomain)
}
- id, err := azure.Wrap(client).SetServicePrincipal(ctx, principal)
+ appID, err := azure.Wrap(client).SetServicePrincipal(ctx, principal)
if err != nil {
return diag.FromErr(err)
}
- d.SetId(id.String())
-
+ d.SetId(appID.String())
azureReadServicePrincipal(ctx, d, m)
return nil
}
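Both the `tenant_id` schema field (`validation.IsUUID`) and the default branch above (`uuid.Parse`) rely on the value being a canonical UUID. A stdlib-only sketch of roughly what that check accepts; the real `google/uuid.Parse` also handles URN and braced forms, so this is an illustration, not the SDK's implementation:

```go
package main

import (
	"fmt"
	"regexp"
)

// uuidRE matches the canonical 8-4-4-4-12 hex form. This approximates
// what validation.IsUUID accepts at plan time and uuid.Parse accepts
// at apply time for the common case.
var uuidRE = regexp.MustCompile(`^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$`)

func isUUID(s string) bool {
	return uuidRE.MatchString(s)
}

func main() {
	fmt.Println(isUUID("123e4567-e89b-12d3-a456-426614174000")) // true
	fmt.Println(isUUID("not-a-uuid"))                           // false
}
```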
// azureReadServicePrincipal run the Read operation for the Azure service
// principal resource. This reads the state of the Azure service principal in
-// Polaris.
+// RSC.
func azureReadServicePrincipal(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
log.Print("[TRACE] azureReadServicePrincipal")
@@ -165,7 +216,7 @@ func azureReadServicePrincipal(ctx context.Context, d *schema.ResourceData, m in
}
// azureUpdateServiceAccount run the Update operation for the Azure service
-// principal resource. This updates the Azure service principal in Polaris.
+// principal resource. This updates the Azure service principal in RSC.
func azureUpdateServicePrincipal(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
log.Print("[TRACE] azureUpdateServicePrincipal")
@@ -174,8 +225,8 @@ func azureUpdateServicePrincipal(ctx context.Context, d *schema.ResourceData, m
return diag.FromErr(err)
}
- if d.HasChange("permissions_hash") {
- err := azure.Wrap(client).PermissionsUpdatedForTenantDomain(ctx, d.Get("tenant_domain").(string), nil)
+ if d.HasChanges(keyPermissions, keyPermissionsHash) {
+ err := azure.Wrap(client).PermissionsUpdatedForTenantDomain(ctx, d.Get(keyTenantDomain).(string), nil)
if err != nil {
return diag.FromErr(err)
}
@@ -187,7 +238,7 @@ func azureUpdateServicePrincipal(ctx context.Context, d *schema.ResourceData, m
// azureDeleteServicePrincipal run the Delete operation for the Azure service
// principal resource. This only removes the local state of the GCP service
-// account since the service account cannot be removed using the Polaris API.
+// account since the service account cannot be removed using the RSC API.
func azureDeleteServicePrincipal(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
log.Print("[TRACE] azureDeleteServicePrincipal")
diff --git a/internal/provider/resource_azure_service_principal_v0.go b/internal/provider/resource_azure_service_principal_v0.go
index 9050f95..cbcd29a 100644
--- a/internal/provider/resource_azure_service_principal_v0.go
+++ b/internal/provider/resource_azure_service_principal_v0.go
@@ -31,9 +31,8 @@ import (
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
)
-// resourceAzureServicePrincipal defines the schema for the Azure service
-// principal resource. Note that the delete function cannot remove the service
-// principal since there is no delete operation in the Polaris API.
+// resourceAzureServicePrincipalV0 defines the schema for version 0 of the Azure
+// service principal resource and how to migrate to version 1.
func resourceAzureServicePrincipalV0() *schema.Resource {
return &schema.Resource{
Schema: map[string]*schema.Schema{
@@ -93,10 +92,10 @@ func resourceAzureServicePrincipalV0() *schema.Resource {
// resourceAzureServicePrincipalStateUpgradeV0 makes the tenant domain
// parameter required.
func resourceAzureServicePrincipalStateUpgradeV0(ctx context.Context, state map[string]interface{}, m interface{}) (map[string]interface{}, error) {
- log.Print("[TRACE] resourceAzureServicePrincipalStateUpgradeV0")
+ log.Print("[TRACE] resourceAzureServicePrincipalStateUpgradeV0")
// Tenant domain is only missing when the principal has been given as a
- // credentials file.
+ // credential file.
credentials, ok := state["credentials"]
if !ok {
return state, nil
diff --git a/internal/provider/resource_azure_subscription.go b/internal/provider/resource_azure_subscription.go
index 54bbf1c..3afd56d 100644
--- a/internal/provider/resource_azure_subscription.go
+++ b/internal/provider/resource_azure_subscription.go
@@ -21,30 +21,62 @@
package provider
import (
+ "cmp"
"context"
"errors"
"log"
+ "maps"
+ "slices"
"github.com/google/uuid"
- "github.com/hashicorp/go-cty/cty"
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
-
+ "github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris"
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/azure"
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/graphql"
- graphql_azure "github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/graphql/azure"
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/graphql/core"
)
-// validateAzureRegion verifies that the name is a valid Azure region name.
-func validateAzureRegion(m interface{}, p cty.Path) diag.Diagnostics {
- _, err := graphql_azure.ParseRegion(m.(string))
- return diag.FromErr(err)
-}
+const resourceAzureSubscriptionDescription = `
+The ´polaris_azure_subscription´ resource adds an Azure subscription to RSC. When
+the first subscription for an Azure tenant is added, a corresponding tenant is
+created in RSC. The RSC tenant is automatically destroyed when its last subscription
+is removed.
+
+Any combination of different RSC features can be enabled for a subscription:
+ 1. ´cloud_native_archival´ - Provides archival of data from data center workloads
+ for disaster recovery and long-term retention.
+ 2. ´cloud_native_archival_encryption´ - Allows cloud archival locations to be
+ encrypted with customer managed keys.
+ 3. ´cloud_native_protection´ - Provides protection for Azure virtual machines and
+ managed disks through the rules and policies of SLA Domains.
+ 4. ´exocompute´ - Provides snapshot indexing, file recovery, storage tiering, and
+ application-consistent protection of Azure objects.
+ 5. ´sql_db_protection´ - Provides centralized database backup management and
+ recovery in an Azure SQL Database deployment.
+ 6. ´sql_mi_protection´ - Provides centralized database backup management and
+ recovery for an Azure SQL Managed Instance deployment.
+
+Each feature's ´permissions´ field can be used with the ´polaris_azure_permissions´
+data source to inform RSC about permission updates when the Terraform configuration
+is applied.
+
+~> **Note:** Even though the ´resource_group_name´ and the ´resource_group_region´
+ fields are marked as optional, you should always specify them. They are marked as
+ optional to simplify the migration of existing Terraform configurations. If
+ omitted, RSC will generate a unique resource group name but it will not create
+ the actual resource group. Until the resource group is created, the RSC feature
+ depending on the resource group will not function as expected.
+
+~> **Note:** As mentioned in the documentation for each feature below, changing
+ certain fields causes features to be re-onboarded. Take care when the subscription
+ only has a single feature, as it could cause the tenant to be removed from RSC.
+
+-> **Note:** As of now, ´sql_db_protection´ and ´sql_mi_protection´ do not support
+ specifying an Azure resource group.
+`
-// resourceAzureSubscription defines the schema for the Azure subscription
-// resource.
func resourceAzureSubscription() *schema.Resource {
return &schema.Resource{
CreateContext: azureCreateSubscription,
@@ -52,82 +84,436 @@ func resourceAzureSubscription() *schema.Resource {
UpdateContext: azureUpdateSubscription,
DeleteContext: azureDeleteSubscription,
+ Description: description(resourceAzureSubscriptionDescription),
Schema: map[string]*schema.Schema{
- "cloud_native_protection": {
+ keyID: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "RSC cloud account ID (UUID).",
+ },
+ keyCloudNativeArchival: {
+ Type: schema.TypeList,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ keyPermissions: {
+ Type: schema.TypeString,
+ Optional: true,
+ Description: "Permissions updated signal. When this field changes, the provider will notify " +
+ "RSC that the permissions for the feature have been updated. Use this field with the " +
+ "`polaris_azure_permissions` data source.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ keyRegions: {
+ Type: schema.TypeSet,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ },
+ MinItems: 1,
+ Required: true,
+ Description: "Azure regions to enable the Cloud Native Archival feature in. Should be " +
+ "specified in the standard Azure style, e.g. `eastus`.",
+ },
+ keyResourceGroupName: {
+ Type: schema.TypeString,
+ Optional: true,
+ RequiredWith: []string{
+ keyCloudNativeArchival + ".0." + keyResourceGroupRegion,
+ },
+ Description: "Name of the Azure resource group where RSC places all resources created by " +
+ "the feature. RSC assumes the resource group already exists. Changing this forces the " +
+ "RSC feature to be re-onboarded.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ keyResourceGroupRegion: {
+ Type: schema.TypeString,
+ Optional: true,
+ RequiredWith: []string{
+ keyCloudNativeArchival + ".0." + keyResourceGroupName,
+ },
+ Description: "Region of the Azure resource group. Should be specified in the standard " +
+ "Azure style, e.g. `eastus`. Changing this forces the RSC feature to be re-onboarded.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ keyResourceGroupTags: {
+ Type: schema.TypeMap,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ },
+ Optional: true,
+ RequiredWith: []string{
+ keyCloudNativeArchival + ".0." + keyResourceGroupName,
+ keyCloudNativeArchival + ".0." + keyResourceGroupRegion,
+ },
+ Description: "Tags to add to the Azure resource group. Changing this forces the RSC feature " +
+ "to be re-onboarded.",
+ },
+ keyStatus: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "Status of the Cloud Native Archival feature.",
+ },
+ },
+ },
+ MaxItems: 1,
+ Optional: true,
+ AtLeastOneOf: []string{
+ keyCloudNativeArchival,
+ keyCloudNativeProtection,
+ keyExocompute,
+ keySQLDBProtection,
+ keySQLMIProtection,
+ },
+ Description: "Enable the RSC Cloud Native Archival feature for the Azure subscription.",
+ },
+ keyCloudNativeArchivalEncryption: {
+ Type: schema.TypeList,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ keyPermissions: {
+ Type: schema.TypeString,
+ Optional: true,
+ Description: "Permissions updated signal. When this field changes, the provider will notify " +
+ "RSC that the permissions for the feature have been updated. Use this field with the " +
+ "`polaris_azure_permissions` data source.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ keyRegions: {
+ Type: schema.TypeSet,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ },
+ MinItems: 1,
+ Required: true,
+ Description: "Azure regions to enable the Cloud Native Archival Encryption feature in. " +
+ "Should be specified in the standard Azure style, e.g. `eastus`.",
+ },
+ keyResourceGroupName: {
+ Type: schema.TypeString,
+ Optional: true,
+ RequiredWith: []string{
+ keyCloudNativeArchivalEncryption + ".0." + keyResourceGroupRegion,
+ },
+ Description: "Name of the Azure resource group where RSC places all resources created by " +
+ "the feature. RSC assumes the resource group already exists. Changing this forces the " +
+ "RSC feature to be re-onboarded.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ keyResourceGroupRegion: {
+ Type: schema.TypeString,
+ Optional: true,
+ RequiredWith: []string{
+ keyCloudNativeArchivalEncryption + ".0." + keyResourceGroupName,
+ },
+ Description: "Region of the Azure resource group. Should be specified in the standard " +
+ "Azure style, e.g. `eastus`. Changing this forces the RSC feature to be re-onboarded.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ keyResourceGroupTags: {
+ Type: schema.TypeMap,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ },
+ Optional: true,
+ RequiredWith: []string{
+ keyCloudNativeArchivalEncryption + ".0." + keyResourceGroupName,
+ keyCloudNativeArchivalEncryption + ".0." + keyResourceGroupRegion,
+ },
+ Description: "Tags to add to the Azure resource group. Changing this forces the RSC feature " +
+ "to be re-onboarded.",
+ },
+ keyStatus: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "Status of the Cloud Native Archival Encryption feature.",
+ },
+ keyUserAssignedManagedIdentityName: {
+ Type: schema.TypeString,
+ Required: true,
+ Description: "User-assigned managed identity name.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ keyUserAssignedManagedIdentityPrincipalID: {
+ Type: schema.TypeString,
+ Required: true,
+ Description: "ID of the service principal object associated with the user-assigned managed " +
+ "identity.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ keyUserAssignedManagedIdentityRegion: {
+ Type: schema.TypeString,
+ Required: true,
+ Description: "User-assigned managed identity region. Should be specified in the " +
+ "standard Azure style, e.g. `eastus`.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ keyUserAssignedManagedIdentityResourceGroupName: {
+ Type: schema.TypeString,
+ Required: true,
+ Description: "User-assigned managed identity resource group name.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ },
+ },
+ MaxItems: 1,
+ Optional: true,
+ RequiredWith: []string{
+ keyCloudNativeArchival,
+ },
+ Description: "Enable the RSC Cloud Native Archival Encryption feature for the Azure subscription.",
+ },
+ keyCloudNativeProtection: {
Type: schema.TypeList,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
- "regions": {
+ keyPermissions: {
+ Type: schema.TypeString,
+ Optional: true,
+ Description: "Permissions updated signal. When this field changes, the provider will notify " +
+ "RSC that the permissions for the feature have been updated. Use this field with the " +
+ "`polaris_azure_permissions` data source.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ keyRegions: {
Type: schema.TypeSet,
Elem: &schema.Schema{
- Type: schema.TypeString,
- ValidateDiagFunc: validateAzureRegion,
+ Type: schema.TypeString,
+ },
+ MinItems: 1,
+ Required: true,
+ Description: "Azure regions that RSC will monitor for resources to protect according to " +
+ "SLA Domains. Should be specified in the standard Azure style, e.g. `eastus`.",
+ },
+ keyResourceGroupName: {
+ Type: schema.TypeString,
+ Optional: true,
+ RequiredWith: []string{
+ keyCloudNativeProtection + ".0." + keyResourceGroupRegion,
},
- MinItems: 1,
- Required: true,
- Description: "Regions that Polaris will monitor for instances to automatically protect.",
+ Description: "Name of the Azure resource group where RSC places all resources created by " +
+ "the feature. RSC assumes the resource group already exists. Changing this forces the " +
+ "RSC feature to be re-onboarded.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
},
- "status": {
+ keyResourceGroupRegion: {
+ Type: schema.TypeString,
+ Optional: true,
+ RequiredWith: []string{
+ keyCloudNativeProtection + ".0." + keyResourceGroupName,
+ },
+ Description: "Region of the Azure resource group. Should be specified in the standard " +
+ "Azure style, e.g. `eastus`. Changing this forces the RSC feature to be re-onboarded.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ keyResourceGroupTags: {
+ Type: schema.TypeMap,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ },
+ Optional: true,
+ RequiredWith: []string{
+ keyCloudNativeProtection + ".0." + keyResourceGroupName,
+ keyCloudNativeProtection + ".0." + keyResourceGroupRegion,
+ },
+ Description: "Tags to add to the Azure resource group. Changing this forces the RSC feature " +
+ "to be re-onboarded.",
+ },
+ keyStatus: {
Type: schema.TypeString,
Computed: true,
Description: "Status of the Cloud Native Protection feature.",
},
},
},
- MaxItems: 1,
- Required: true,
- Description: "Enable the Cloud Native Protection feature for the GCP project.",
+ MaxItems: 1,
+ Optional: true,
+ AtLeastOneOf: []string{
+ keyCloudNativeArchival,
+ keyCloudNativeProtection,
+ keyExocompute,
+ keySQLDBProtection,
+ keySQLMIProtection,
+ },
+ Description: "Enable the RSC Cloud Native Protection feature for the Azure subscription.",
},
- "delete_snapshots_on_destroy": {
+ keyDeleteSnapshotsOnDestroy: {
Type: schema.TypeBool,
Optional: true,
Default: false,
- Description: "Should snapshots be deleted when the resource is destroyed.",
+ Description: "Should snapshots be deleted when the resource is destroyed. Default value is `false`.",
},
- "exocompute": {
+ keyExocompute: {
Type: schema.TypeList,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
- "regions": {
+ keyPermissions: {
+ Type: schema.TypeString,
+ Optional: true,
+ Description: "Permissions updated signal. When this field changes, the provider will notify " +
+ "RSC that the permissions for the feature have been updated. Use this field with the " +
+ "`polaris_azure_permissions` data source.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ keyRegions: {
Type: schema.TypeSet,
Elem: &schema.Schema{
- Type: schema.TypeString,
- ValidateDiagFunc: validateAzureRegion,
+ Type: schema.TypeString,
},
- MinItems: 1,
- Required: true,
- Description: "Regions to enable the exocompute feature in.",
+ MinItems: 1,
+ Required: true,
+ Description: "Azure regions to enable the Exocompute feature in. Should be specified in " +
+ "the standard Azure style, e.g. `eastus`.",
},
- "status": {
+ keyResourceGroupName: {
+ Type: schema.TypeString,
+ Optional: true,
+ RequiredWith: []string{
+ keyExocompute + ".0." + keyResourceGroupRegion,
+ },
+ Description: "Name of the Azure resource group where RSC places all resources created by " +
+ "the feature. RSC assumes the resource group already exists. Changing this forces the " +
+ "RSC feature to be re-onboarded.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ keyResourceGroupRegion: {
+ Type: schema.TypeString,
+ Optional: true,
+ RequiredWith: []string{
+ keyExocompute + ".0." + keyResourceGroupName,
+ },
+ Description: "Region of the Azure resource group. Should be specified in the standard " +
+ "Azure style, e.g. `eastus`. Changing this forces the RSC feature to be re-onboarded.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ keyResourceGroupTags: {
+ Type: schema.TypeMap,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ },
+ Optional: true,
+ RequiredWith: []string{
+ keyExocompute + ".0." + keyResourceGroupName,
+ keyExocompute + ".0." + keyResourceGroupRegion,
+ },
+ Description: "Tags to add to the Azure resource group. Changing this forces the RSC feature " +
+ "to be re-onboarded.",
+ },
+ keyStatus: {
Type: schema.TypeString,
Computed: true,
Description: "Status of the Exocompute feature.",
},
},
},
- MaxItems: 1,
- Optional: true,
- Description: "Enable the exocompute feature for the account.",
+ MaxItems: 1,
+ Optional: true,
+ AtLeastOneOf: []string{
+ keyCloudNativeArchival,
+ keyCloudNativeProtection,
+ keyExocompute,
+ keySQLDBProtection,
+ keySQLMIProtection,
+ },
+ Description: "Enable the RSC Exocompute feature for the Azure subscription.",
},
- "subscription_id": {
- Type: schema.TypeString,
- Required: true,
- ForceNew: true,
- Description: "Subscription id.",
- ValidateDiagFunc: validation.ToDiagFunc(validation.IsUUID),
+ keySQLDBProtection: {
+ Type: schema.TypeList,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ keyPermissions: {
+ Type: schema.TypeString,
+ Optional: true,
+ Description: "Permissions updated signal. When this field changes, the provider will notify " +
+ "RSC that the permissions for the feature have been updated. Use this field with the " +
+ "`polaris_azure_permissions` data source.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ keyRegions: {
+ Type: schema.TypeSet,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ },
+ MinItems: 1,
+ Required: true,
+ Description: "Azure regions to enable the SQL DB Protection feature in. Should be " +
+ "specified in the standard Azure style, e.g. `eastus`.",
+ },
+ keyStatus: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "Status of the SQL DB Protection feature.",
+ },
+ },
+ },
+ MaxItems: 1,
+ Optional: true,
+ AtLeastOneOf: []string{
+ keyCloudNativeArchival,
+ keyCloudNativeProtection,
+ keyExocompute,
+ keySQLDBProtection,
+ keySQLMIProtection,
+ },
+ Description: "Enable the RSC SQL DB Protection feature for the Azure subscription.",
},
- "subscription_name": {
- Type: schema.TypeString,
- Optional: true,
- Description: "Subscription name.",
- ValidateDiagFunc: validation.ToDiagFunc(validation.StringIsNotWhiteSpace),
+ keySQLMIProtection: {
+ Type: schema.TypeList,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ keyPermissions: {
+ Type: schema.TypeString,
+ Optional: true,
+ Description: "Permissions updated signal. When this field changes, the provider will notify " +
+ "RSC that the permissions for the feature have been updated. Use this field with the " +
+ "`polaris_azure_permissions` data source.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ keyRegions: {
+ Type: schema.TypeSet,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ },
+ MinItems: 1,
+ Required: true,
+ Description: "Azure regions to enable the SQL MI Protection feature in. Should be " +
+ "specified in the standard Azure style, e.g. `eastus`.",
+ },
+ keyStatus: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "Status of the SQL MI Protection feature.",
+ },
+ },
+ },
+ MaxItems: 1,
+ Optional: true,
+ AtLeastOneOf: []string{
+ keyCloudNativeArchival,
+ keyCloudNativeProtection,
+ keyExocompute,
+ keySQLDBProtection,
+ keySQLMIProtection,
+ },
+ Description: "Enable the RSC SQL MI Protection feature for the Azure subscription.",
+ },
+ keySubscriptionID: {
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ Description: "Azure subscription ID. Changing this forces a new resource to be created.",
+ ValidateFunc: validation.IsUUID,
},
- "tenant_domain": {
- Type: schema.TypeString,
- Required: true,
- ForceNew: true,
- Description: "Tenant directory/domain name.",
- ValidateDiagFunc: validation.ToDiagFunc(validation.StringIsNotWhiteSpace),
+ keySubscriptionName: {
+ Type: schema.TypeString,
+ Optional: true,
+ Description: "Azure subscription name.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
+ },
+ keyTenantDomain: {
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ Description: "Azure tenant primary domain. Changing this forces a new resource to be created.",
+ ValidateFunc: validation.StringIsNotWhiteSpace,
},
},
@@ -145,8 +531,8 @@ func resourceAzureSubscription() *schema.Resource {
}
// azureCreateSubscription run the Create operation for the Azure subscription
-// resource. This adds the Azure subscription to the Polaris platform.
-func azureCreateSubscription(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
+// resource. This adds the Azure subscription to the RSC platform.
+func azureCreateSubscription(ctx context.Context, d *schema.ResourceData, m any) diag.Diagnostics {
log.Print("[TRACE] azureCreateSubscription")
client, err := m.(*client).polaris()
@@ -154,74 +540,43 @@ func azureCreateSubscription(ctx context.Context, d *schema.ResourceData, m inte
return diag.FromErr(err)
}
- subscriptionID, err := uuid.Parse(d.Get("subscription_id").(string))
- if err != nil {
- return diag.FromErr(err)
+ featureKeys := make([]featureKey, 0, len(azureKeyFeatureMap))
+ for key, feature := range azureKeyFeatureMap {
+ featureKeys = append(featureKeys, featureKey{key: key, feature: feature.feature, order: feature.orderAdd})
}
+ slices.SortFunc(featureKeys, func(i, j featureKey) int {
+ return cmp.Compare(i.order, j.order)
+ })
- tenantDomain := d.Get("tenant_domain").(string)
-
- var opts []azure.OptionFunc
- if name, ok := d.GetOk("subscription_name"); ok {
- opts = append(opts, azure.Name(name.(string)))
- }
-
- // Check if the subscription already exist in Polaris.
- account, err := azure.Wrap(client).Subscription(ctx, azure.SubscriptionID(subscriptionID), core.FeatureAll)
- if err == nil {
- return diag.Errorf("subscription %q already added to polaris", account.NativeID)
- }
- if !errors.Is(err, graphql.ErrNotFound) {
- return diag.FromErr(err)
- }
-
- // Polaris Cloud Account id. Returned when the account is added for the
- // cloud native protection feature.
- var id uuid.UUID
-
- cnpBlock, ok := d.GetOk("cloud_native_protection")
- if ok {
- block := cnpBlock.([]interface{})[0].(map[string]interface{})
-
- var cnpOpts []azure.OptionFunc
- for _, region := range block["regions"].(*schema.Set).List() {
- cnpOpts = append(cnpOpts, azure.Region(region.(string)))
+ var accountID uuid.UUID
+ for _, featureKey := range featureKeys {
+ var block map[string]any
+ if v, ok := d.GetOk(featureKey.key); ok {
+ block = v.([]any)[0].(map[string]any)
+ } else {
+ continue
}
- cnpOpts = append(cnpOpts, opts...)
- id, err = azure.Wrap(client).AddSubscription(ctx, azure.Subscription(subscriptionID, tenantDomain),
- core.FeatureCloudNativeProtection, cnpOpts...)
+ id, err := addAzureFeature(ctx, d, client, featureKey.feature, block)
if err != nil {
return diag.FromErr(err)
}
- }
-
- exoBlock, ok := d.GetOk("exocompute")
- if ok {
- block := exoBlock.([]interface{})[0].(map[string]interface{})
-
- var exoOpts []azure.OptionFunc
- for _, region := range block["regions"].(*schema.Set).List() {
- exoOpts = append(exoOpts, azure.Region(region.(string)))
+ if accountID == uuid.Nil {
+ accountID = id
}
-
- exoOpts = append(exoOpts, opts...)
- _, err := azure.Wrap(client).AddSubscription(ctx, azure.Subscription(subscriptionID, tenantDomain),
- core.FeatureExocompute, exoOpts...)
- if err != nil {
- return diag.FromErr(err)
+ if id != accountID {
+ return diag.Errorf("feature %s added to wrong cloud account", featureKey.feature)
}
}
- d.SetId(id.String())
-
+ d.SetId(accountID.String())
azureReadSubscription(ctx, d, m)
return nil
}
// azureReadSubscription run the Read operation for the Azure subscription
-// resource. This reads the state of the Azure subscription in Polaris.
-func azureReadSubscription(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
+// resource. This reads the remote state of the Azure subscription in RSC.
+func azureReadSubscription(ctx context.Context, d *schema.ResourceData, m any) diag.Diagnostics {
log.Print("[TRACE] azureReadSubscription")
client, err := m.(*client).polaris()
@@ -229,72 +584,38 @@ func azureReadSubscription(ctx context.Context, d *schema.ResourceData, m interf
return diag.FromErr(err)
}
- id, err := uuid.Parse(d.Id())
+ accountID, err := uuid.Parse(d.Id())
if err != nil {
return diag.FromErr(err)
}
-
- // Lookup the Polaris cloud account using the cloud account id.
- account, err := azure.Wrap(client).Subscription(ctx, azure.CloudAccountID(id), core.FeatureAll)
+ account, err := azure.Wrap(client).Subscription(ctx, azure.CloudAccountID(accountID), core.FeatureAll)
if errors.Is(err, graphql.ErrNotFound) {
d.SetId("")
return nil
- }
- if err != nil {
+ } else if err != nil {
return diag.FromErr(err)
}
- cnpFeature, ok := account.Feature(core.FeatureCloudNativeProtection)
- if ok {
- regions := schema.Set{F: schema.HashString}
- for _, region := range cnpFeature.Regions {
- regions.Add(region)
- }
-
- status := core.FormatStatus(cnpFeature.Status)
- err := d.Set("cloud_native_protection", []interface{}{
- map[string]interface{}{
- "regions": ®ions,
- "status": &status,
- },
- })
- if err != nil {
- return diag.FromErr(err)
+ for key, feature := range azureKeyFeatureMap {
+ feature, ok := account.Feature(feature.feature)
+ if !ok {
+ if err := d.Set(key, nil); err != nil {
+ return diag.FromErr(err)
+ }
+ continue
}
- } else {
- if err := d.Set("cloud_native_protection", nil); err != nil {
+ if err := updateAzureFeatureState(d, key, feature); err != nil {
return diag.FromErr(err)
}
}
- exoFeature, ok := account.Feature(core.FeatureExocompute)
- if ok {
- regions := schema.Set{F: schema.HashString}
- for _, region := range exoFeature.Regions {
- regions.Add(region)
- }
-
- status := core.FormatStatus(exoFeature.Status)
- err := d.Set("exocompute", []interface{}{
- map[string]interface{}{
- "regions": ®ions,
- "status": &status,
- },
- })
- if err != nil {
- return diag.FromErr(err)
- }
- } else {
- if err := d.Set("exocompute", nil); err != nil {
- return diag.FromErr(err)
- }
+ if err := d.Set(keySubscriptionID, account.NativeID.String()); err != nil {
+ return diag.FromErr(err)
}
-
- if err := d.Set("subscription_name", account.Name); err != nil {
+ if err := d.Set(keySubscriptionName, account.Name); err != nil {
return diag.FromErr(err)
}
-
- if err := d.Set("tenant_domain", account.TenantDomain); err != nil {
+ if err := d.Set(keyTenantDomain, account.TenantDomain); err != nil {
return diag.FromErr(err)
}
@@ -302,8 +623,8 @@ func azureReadSubscription(ctx context.Context, d *schema.ResourceData, m interf
}
// azureUpdateSubscription run the Update operation for the Azure subscription
-// resource. This updates the Azure subscription in Polaris.
-func azureUpdateSubscription(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
+// resource. This updates the Azure subscription in RSC.
+func azureUpdateSubscription(ctx context.Context, d *schema.ResourceData, m any) diag.Diagnostics {
log.Print("[TRACE] azureUpdateSubscription")
client, err := m.(*client).polaris()
@@ -311,83 +632,124 @@ func azureUpdateSubscription(ctx context.Context, d *schema.ResourceData, m inte
return diag.FromErr(err)
}
- id, err := uuid.Parse(d.Id())
+ accountID, err := uuid.Parse(d.Id())
if err != nil {
return diag.FromErr(err)
}
- if d.HasChange("cloud_native_protection") {
- cnpBlock, ok := d.GetOk("cloud_native_protection")
- if ok {
- block := cnpBlock.([]interface{})[0].(map[string]interface{})
+ // Break the update into a series of update operations sequenced in the
+ // correct order.
+ const (
+ opAddFeature = iota
+ opRemoveFeature
+ opTemporaryRemoveFeature
+ opUpdateRegions
+ opUpdatePermissions
+ )
+ type updateOp struct {
+ feature core.Feature
+ op int
+ block map[string]any
+ order int
+ }
+ var updates []updateOp
+ for key, feature := range azureKeyFeatureMap {
+ if !d.HasChange(key) {
+ continue
+ }
- var opts []azure.OptionFunc
- for _, region := range block["regions"].(*schema.Set).List() {
- opts = append(opts, azure.Region(region.(string)))
- }
+ switch oldBlock, newBlock := d.GetChange(key); {
+ case len(oldBlock.([]any)) == 0 && len(newBlock.([]any)) != 0:
+ updates = append(updates, updateOp{
+ op: opAddFeature,
+ feature: feature.feature,
+ block: newBlock.([]any)[0].(map[string]any),
+ order: feature.orderAdd,
+ })
- if err := azure.Wrap(client).UpdateSubscription(ctx, azure.CloudAccountID(id), core.FeatureCloudNativeProtection, opts...); err != nil {
- return diag.FromErr(err)
- }
- } else {
- if _, ok := d.GetOk("exocompute"); ok {
- return diag.Errorf("cloud native protection is required by exocompute")
- }
+ case len(oldBlock.([]any)) != 0 && len(newBlock.([]any)) == 0:
+ updates = append(updates, updateOp{
+ op: opRemoveFeature,
+ feature: feature.feature,
+ order: feature.orderRemove,
+ })
- snapshots := d.Get("delete_snapshots_on_destroy").(bool)
- if err := azure.Wrap(client).RemoveSubscription(ctx, azure.CloudAccountID(id), core.FeatureCloudNativeProtection, snapshots); err != nil {
- return diag.FromErr(err)
+ case len(oldBlock.([]any)) != 0 && len(newBlock.([]any)) != 0:
+ oldBlock := oldBlock.([]any)[0].(map[string]any)
+ newBlock := newBlock.([]any)[0].(map[string]any)
+
+ switch {
+ case diffAzureFeatureResourceGroup(oldBlock, newBlock) || diffAzureUserAssignedManagedIdentity(oldBlock, newBlock):
+ updates = append(updates, updateOp{
+ op: opAddFeature,
+ feature: feature.feature,
+ block: newBlock,
+ order: feature.orderSplitAdd,
+ })
+ updates = append(updates, updateOp{
+ op: opTemporaryRemoveFeature,
+ feature: feature.feature,
+ order: feature.orderSplitRemove,
+ })
+
+ case diffAzureFeatureRegions(oldBlock, newBlock):
+ updates = append(updates, updateOp{
+ op: opUpdateRegions,
+ feature: feature.feature,
+ block: newBlock,
+ })
+
+ case newBlock[keyPermissions] != oldBlock[keyPermissions]:
+ updates = append(updates, updateOp{
+ op: opUpdatePermissions,
+ feature: feature.feature,
+ })
}
}
}
+ slices.SortFunc(updates, func(i, j updateOp) int {
+ return cmp.Compare(i.order, j.order)
+ })
- if d.HasChange("exocompute") {
- oldExoBlock, newExoBlock := d.GetChange("exocompute")
- oldExoList := oldExoBlock.([]interface{})
- newExoList := newExoBlock.([]interface{})
+ // Apply the update operations in the correct order.
+ for _, update := range updates {
+ feature := update.feature
- // Determine whether we are adding, removing or updating the Exocompute
- // feature.
- switch {
- case len(oldExoList) == 0:
- var opts []azure.OptionFunc
- for _, region := range newExoList[0].(map[string]interface{})["regions"].(*schema.Set).List() {
- opts = append(opts, azure.Region(region.(string)))
- }
-
- subscriptionID, err := uuid.Parse(d.Get("subscription_id").(string))
+ switch update.op {
+ case opAddFeature:
+ id, err := addAzureFeature(ctx, d, client, feature, update.block)
if err != nil {
return diag.FromErr(err)
}
-
- tenantDomain := d.Get("tenant_domain").(string)
- _, err = azure.Wrap(client).AddSubscription(ctx, azure.Subscription(subscriptionID, tenantDomain),
- core.FeatureExocompute, opts...)
- if err != nil {
- return diag.FromErr(err)
+ if id != accountID {
+ return diag.Errorf("feature %s added to the wrong cloud account", feature)
}
- case len(newExoList) == 0:
- err := azure.Wrap(client).RemoveSubscription(ctx, azure.CloudAccountID(id), core.FeatureExocompute, false)
- if err != nil {
+ case opRemoveFeature, opTemporaryRemoveFeature:
+ deleteSnapshots := false
+ if update.op == opRemoveFeature {
+ deleteSnapshots = d.Get(keyDeleteSnapshotsOnDestroy).(bool)
+ }
+ if err := azure.Wrap(client).RemoveSubscription(ctx, azure.CloudAccountID(accountID), feature, deleteSnapshots); err != nil {
return diag.FromErr(err)
}
- default:
+ case opUpdateRegions:
var opts []azure.OptionFunc
- for _, region := range newExoList[0].(map[string]interface{})["regions"].(*schema.Set).List() {
+ for _, region := range update.block[keyRegions].(*schema.Set).List() {
opts = append(opts, azure.Region(region.(string)))
}
-
- err = azure.Wrap(client).UpdateSubscription(ctx, azure.CloudAccountID(id), core.FeatureExocompute, opts...)
- if err != nil {
+ if err := azure.Wrap(client).UpdateSubscription(ctx, azure.CloudAccountID(accountID), feature, opts...); err != nil {
+ return diag.FromErr(err)
+ }
+ case opUpdatePermissions:
+ if err := azure.Wrap(client).PermissionsUpdated(ctx, azure.CloudAccountID(accountID), []core.Feature{feature}); err != nil {
return diag.FromErr(err)
}
}
}
- if d.HasChange("subscription_name") {
- opts := []azure.OptionFunc{azure.Name(d.Get("subscription_name").(string))}
- err = azure.Wrap(client).UpdateSubscription(ctx, azure.CloudAccountID(id), core.FeatureCloudNativeProtection, opts...)
- if err != nil {
+ if d.HasChange(keySubscriptionName) {
+ opts := []azure.OptionFunc{azure.Name(d.Get(keySubscriptionName).(string))}
+ if err = azure.Wrap(client).UpdateSubscription(ctx, azure.CloudAccountID(accountID), core.FeatureAll, opts...); err != nil {
return diag.FromErr(err)
}
}
@@ -397,8 +759,8 @@ func azureUpdateSubscription(ctx context.Context, d *schema.ResourceData, m inte
}
// azureDeleteSubscription run the Delete operation for the Azure subscription
-// resource. This removes the Azure subscription from Polaris.
-func azureDeleteSubscription(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
+// resource. This removes the Azure subscription from RSC.
+func azureDeleteSubscription(ctx context.Context, d *schema.ResourceData, m any) diag.Diagnostics {
log.Print("[TRACE] azureDeleteSubscription")
client, err := m.(*client).polaris()
@@ -406,30 +768,342 @@ func azureDeleteSubscription(ctx context.Context, d *schema.ResourceData, m inte
return diag.FromErr(err)
}
- id, err := uuid.Parse(d.Id())
+ accountID, err := uuid.Parse(d.Id())
if err != nil {
return diag.FromErr(err)
}
- // Get the old resource arguments.
- oldSnapshots, _ := d.GetChange("delete_snapshots_on_destroy")
- deleteSnapshots := oldSnapshots.(bool)
+ featureKeys := make([]featureKey, 0, len(azureKeyFeatureMap))
+ for key, feature := range azureKeyFeatureMap {
+ featureKeys = append(featureKeys, featureKey{key: key, feature: feature.feature, order: feature.orderRemove})
+ }
+ slices.SortFunc(featureKeys, func(i, j featureKey) int {
+ return cmp.Compare(i.order, j.order)
+ })
- if _, ok := d.GetOk("exocompute"); ok {
- err = azure.Wrap(client).RemoveSubscription(ctx, azure.CloudAccountID(id), core.FeatureExocompute, deleteSnapshots)
- if err != nil {
- return diag.FromErr(err)
+ for _, featureKey := range featureKeys {
+ if _, ok := d.GetOk(featureKey.key); !ok {
+ continue
}
- }
- if _, ok := d.GetOk("cloud_native_protection"); ok {
- err = azure.Wrap(client).RemoveSubscription(ctx, azure.CloudAccountID(id), core.FeatureCloudNativeProtection, deleteSnapshots)
- if err != nil {
+ deleteSnapshots := d.Get(keyDeleteSnapshotsOnDestroy).(bool)
+ if err = azure.Wrap(client).RemoveSubscription(ctx, azure.CloudAccountID(accountID), featureKey.feature, deleteSnapshots); err != nil {
return diag.FromErr(err)
}
}
d.SetId("")
+ return nil
+}
+
+// featureKey maps a Terraform configuration key to an RSC feature along with
+// order information.
+type featureKey struct {
+ key string
+ feature core.Feature
+ order int
+}
+
+// orderedFeature holds an RSC feature along with its ordering information.
+// The split order information is used when a feature needs to be re-onboarded
+// due to a change in the configuration.
+type orderedFeature struct {
+ feature core.Feature
+ orderAdd int
+ orderRemove int
+ orderSplitAdd int
+ orderSplitRemove int
+}
+
+// azureKeyFeatureMap maps the subscription's Terraform keys to the RSC features
+// and the feature's order information.
+//
+// Adds are performed first, to reduce the risk of the tenant being removed due
+// to the last RSC feature being removed. Next, we perform updates. An update
+// can result in a feature being removed and added again. Lastly, feature
+// removals are performed.
+//
+// Note that all operations must be performed in the correct order, due to the
+// implicit relationship between CLOUD_NATIVE_ARCHIVAL and
+// CLOUD_NATIVE_ARCHIVAL_ENCRYPTION.
+var azureKeyFeatureMap = map[string]orderedFeature{
+ keyCloudNativeArchival: {
+ feature: core.FeatureCloudNativeArchival,
+ orderAdd: 100,
+ orderRemove: 301,
+ orderSplitAdd: 202,
+ orderSplitRemove: 201,
+ },
+ keyCloudNativeArchivalEncryption: {
+ feature: core.FeatureCloudNativeArchivalEncryption,
+ orderAdd: 101,
+ orderRemove: 300,
+ orderSplitAdd: 203,
+ orderSplitRemove: 200,
+ },
+ keyCloudNativeProtection: {
+ feature: core.FeatureCloudNativeProtection,
+ orderAdd: 102,
+ orderRemove: 302,
+ orderSplitAdd: 205,
+ orderSplitRemove: 204,
+ },
+ keyExocompute: {
+ feature: core.FeatureExocompute,
+ orderAdd: 103,
+ orderRemove: 303,
+ orderSplitAdd: 207,
+ orderSplitRemove: 206,
+ },
+ keySQLDBProtection: {
+ feature: core.FeatureAzureSQLDBProtection,
+ orderAdd: 104,
+ orderRemove: 304,
+ orderSplitAdd: 209,
+ orderSplitRemove: 208,
+ },
+ keySQLMIProtection: {
+ feature: core.FeatureAzureSQLMIProtection,
+ orderAdd: 105,
+ orderRemove: 305,
+ orderSplitAdd: 211,
+ orderSplitRemove: 210,
+ },
+}
+
+// addAzureFeature onboards the RSC feature for the Azure subscription.
+func addAzureFeature(ctx context.Context, d *schema.ResourceData, client *polaris.Client, feature core.Feature, block map[string]any) (uuid.UUID, error) {
+ id, err := uuid.Parse(d.Get(keySubscriptionID).(string))
+ if err != nil {
+ return uuid.Nil, err
+ }
+
+ var opts []azure.OptionFunc
+ if name, ok := d.GetOk(keySubscriptionName); ok {
+ opts = append(opts, azure.Name(name.(string)))
+ }
+
+ if regions, ok := block[keyRegions]; ok {
+ for _, region := range regions.(*schema.Set).List() {
+ opts = append(opts, azure.Region(region.(string)))
+ }
+ }
+ if rgOpt, ok := fromAzureResourceGroup(block); ok {
+ opts = append(opts, rgOpt)
+ }
+ if miOpt, ok := fromAzureUserAssignedManagedIdentity(block); ok {
+ opts = append(opts, miOpt)
+ }
+
+ return azure.Wrap(client).AddSubscription(ctx, azure.Subscription(id, d.Get(keyTenantDomain).(string)), feature, opts...)
+}
+
+// updateAzureFeatureState updates the local state with the feature information.
+func updateAzureFeatureState(d *schema.ResourceData, key string, feature azure.Feature) error {
+ var block map[string]any
+ if v, ok := d.GetOk(key); ok {
+ block = v.([]any)[0].(map[string]any)
+ } else {
+ block = make(map[string]any)
+ }
+
+ regions := schema.Set{F: schema.HashString}
+ for _, region := range feature.Regions {
+ regions.Add(region)
+ }
+ block[keyRegions] = &regions
+ block[keyStatus] = string(feature.Status)
+
+ if feature.SupportResourceGroup() {
+ tags := make(map[string]any, len(feature.ResourceGroup.Tags))
+ for key, value := range feature.ResourceGroup.Tags {
+ tags[key] = value
+ }
+ block[keyResourceGroupName] = feature.ResourceGroup.Name
+ block[keyResourceGroupRegion] = feature.ResourceGroup.Region
+ block[keyResourceGroupTags] = tags
+ }
+
+ if err := d.Set(key, []any{block}); err != nil {
+ return err
+ }
return nil
}
+
+// fromAzureResourceGroup returns an OptionFunc holding the resource group
+// information.
+func fromAzureResourceGroup(block map[string]any) (azure.OptionFunc, bool) {
+ var name string
+ if v, ok := block[keyResourceGroupName]; ok {
+ name = v.(string)
+ }
+ var region string
+ if v, ok := block[keyResourceGroupRegion]; ok {
+ region = v.(string)
+ }
+ tags := make(map[string]string)
+ if rgTags, ok := block[keyResourceGroupTags]; ok {
+ for key, value := range rgTags.(map[string]any) {
+ tags[key] = value.(string)
+ }
+ }
+
+ if name != "" || region != "" || len(tags) > 0 {
+ return azure.ResourceGroup(name, region, tags), true
+ }
+
+ return nil, false
+}
+
+// fromAzureUserAssignedManagedIdentity returns an OptionFunc holding the
+// user-assigned managed identity information.
+func fromAzureUserAssignedManagedIdentity(block map[string]any) (azure.OptionFunc, bool) {
+ var name string
+ if v, ok := block[keyUserAssignedManagedIdentityName]; ok {
+ name = v.(string)
+ }
+ var principalID string
+ if v, ok := block[keyUserAssignedManagedIdentityPrincipalID]; ok {
+ principalID = v.(string)
+ }
+ var region string
+ if v, ok := block[keyUserAssignedManagedIdentityRegion]; ok {
+ region = v.(string)
+ }
+ var rgName string
+ if v, ok := block[keyUserAssignedManagedIdentityResourceGroupName]; ok {
+ rgName = v.(string)
+ }
+
+ if name != "" || rgName != "" || principalID != "" || region != "" {
+ return azure.ManagedIdentity(name, rgName, principalID, region), true
+ }
+
+ return nil, false
+}
+
+// diffAzureFeatureRegions returns true if the old and new regions are
+// different.
+func diffAzureFeatureRegions(oldBlock, newBlock map[string]any) bool {
+ var oldRegions []string
+ if v, ok := oldBlock[keyRegions]; ok {
+ for _, region := range v.(*schema.Set).List() {
+ oldRegions = append(oldRegions, region.(string))
+ }
+ }
+ var newRegions []string
+ if v, ok := newBlock[keyRegions]; ok {
+ for _, region := range v.(*schema.Set).List() {
+ newRegions = append(newRegions, region.(string))
+ }
+ }
+ slices.Sort(oldRegions)
+ slices.Sort(newRegions)
+
+ return !slices.Equal(oldRegions, newRegions)
+}
+
+// diffAzureFeatureResourceGroup returns true if the old and new resource group
+// blocks are different.
+func diffAzureFeatureResourceGroup(oldBlock, newBlock map[string]any) bool {
+ var oldName string
+ if v, ok := oldBlock[keyResourceGroupName]; ok {
+ oldName = v.(string)
+ }
+ var newName string
+ if v, ok := newBlock[keyResourceGroupName]; ok {
+ newName = v.(string)
+ }
+ if newName != oldName {
+ return true
+ }
+
+ var oldRegion string
+ if v, ok := oldBlock[keyResourceGroupRegion]; ok {
+ oldRegion = v.(string)
+ }
+ var newRegion string
+ if v, ok := newBlock[keyResourceGroupRegion]; ok {
+ newRegion = v.(string)
+ }
+ if newRegion != oldRegion {
+ return true
+ }
+
+ oldTags := make(map[string]string)
+ if v, ok := oldBlock[keyResourceGroupTags]; ok {
+ for k, v := range v.(map[string]any) {
+ oldTags[k] = v.(string)
+ }
+ }
+ newTags := make(map[string]string)
+ if v, ok := newBlock[keyResourceGroupTags]; ok {
+ for k, v := range v.(map[string]any) {
+ newTags[k] = v.(string)
+ }
+ }
+ if !maps.Equal(oldTags, newTags) {
+ return true
+ }
+
+ return false
+}
+
+// diffAzureUserAssignedManagedIdentity returns true if the old and new
+// user-assigned managed identity blocks are different.
+func diffAzureUserAssignedManagedIdentity(oldBlock, newBlock map[string]any) bool {
+ var oldName string
+ if v, ok := oldBlock[keyUserAssignedManagedIdentityName]; ok {
+ oldName = v.(string)
+ }
+ var newName string
+ if v, ok := newBlock[keyUserAssignedManagedIdentityName]; ok {
+ newName = v.(string)
+ }
+ if newName != oldName {
+ return true
+ }
+
+ var oldRGName string
+ if v, ok := oldBlock[keyUserAssignedManagedIdentityResourceGroupName]; ok {
+ oldRGName = v.(string)
+ }
+ var newRGName string
+ if v, ok := newBlock[keyUserAssignedManagedIdentityResourceGroupName]; ok {
+ newRGName = v.(string)
+ }
+ if newRGName != oldRGName {
+ return true
+ }
+
+ var oldPrincipalID string
+ if v, ok := oldBlock[keyUserAssignedManagedIdentityPrincipalID]; ok {
+ oldPrincipalID = v.(string)
+ }
+ var newPrincipalID string
+ if v, ok := newBlock[keyUserAssignedManagedIdentityPrincipalID]; ok {
+ newPrincipalID = v.(string)
+ }
+ if newPrincipalID != oldPrincipalID {
+ return true
+ }
+
+ var oldRegion string
+ if v, ok := oldBlock[keyUserAssignedManagedIdentityRegion]; ok {
+ oldRegion = v.(string)
+ }
+ var newRegion string
+ if v, ok := newBlock[keyUserAssignedManagedIdentityRegion]; ok {
+ newRegion = v.(string)
+ }
+ if newRegion != oldRegion {
+ return true
+ }
+
+ return false
+}
diff --git a/internal/provider/resource_azure_subscription_test.go b/internal/provider/resource_azure_subscription_test.go
index e7cb991..51c6c2f 100644
--- a/internal/provider/resource_azure_subscription_test.go
+++ b/internal/provider/resource_azure_subscription_test.go
@@ -42,6 +42,9 @@ resource "polaris_azure_subscription" "default" {
tenant_domain = "{{ .Resource.TenantDomain }}"
cloud_native_protection {
+ resource_group_name = "{{ .Resource.CloudNativeProtection.ResourceGroupName }}"
+ resource_group_region = "{{ .Resource.CloudNativeProtection.ResourceGroupRegion }}"
+
regions = [
"eastus2",
]
@@ -67,6 +70,9 @@ resource "polaris_azure_subscription" "default" {
tenant_domain = "{{ .Resource.TenantDomain }}"
cloud_native_protection {
+ resource_group_name = "{{ .Resource.CloudNativeProtection.ResourceGroupName }}"
+ resource_group_region = "{{ .Resource.CloudNativeProtection.ResourceGroupRegion }}"
+
regions = [
"eastus2",
"westus2",
@@ -105,9 +111,13 @@ func TestAccPolarisAzureSubscription_basic(t *testing.T) {
resource.TestCheckResourceAttr("polaris_azure_subscription.default", "delete_snapshots_on_destroy", "false"),
// Cloud Native Protection feature
- resource.TestCheckResourceAttr("polaris_azure_subscription.default", "cloud_native_protection.0.status", "connected"),
+ resource.TestCheckResourceAttr("polaris_azure_subscription.default", "cloud_native_protection.0.status", "CONNECTED"),
resource.TestCheckResourceAttr("polaris_azure_subscription.default", "cloud_native_protection.0.regions.#", "1"),
resource.TestCheckTypeSetElemAttr("polaris_azure_subscription.default", "cloud_native_protection.0.regions.*", "eastus2"),
+ resource.TestCheckResourceAttr("polaris_azure_subscription.default", "cloud_native_protection.0.resource_group_name",
+ subscription.CloudNativeProtection.ResourceGroupName),
+ resource.TestCheckResourceAttr("polaris_azure_subscription.default", "cloud_native_protection.0.resource_group_region",
+ subscription.CloudNativeProtection.ResourceGroupRegion),
),
}, {
Config: subscriptionTwoRegions,
@@ -119,10 +129,14 @@ func TestAccPolarisAzureSubscription_basic(t *testing.T) {
resource.TestCheckResourceAttr("polaris_azure_subscription.default", "delete_snapshots_on_destroy", "false"),
// Cloud Native Protection feature
- resource.TestCheckResourceAttr("polaris_azure_subscription.default", "cloud_native_protection.0.status", "connected"),
+ resource.TestCheckResourceAttr("polaris_azure_subscription.default", "cloud_native_protection.0.status", "CONNECTED"),
resource.TestCheckResourceAttr("polaris_azure_subscription.default", "cloud_native_protection.0.regions.#", "2"),
resource.TestCheckTypeSetElemAttr("polaris_azure_subscription.default", "cloud_native_protection.0.regions.*", "eastus2"),
resource.TestCheckTypeSetElemAttr("polaris_azure_subscription.default", "cloud_native_protection.0.regions.*", "westus2"),
+ resource.TestCheckResourceAttr("polaris_azure_subscription.default", "cloud_native_protection.0.resource_group_name",
+ subscription.CloudNativeProtection.ResourceGroupName),
+ resource.TestCheckResourceAttr("polaris_azure_subscription.default", "cloud_native_protection.0.resource_group_region",
+ subscription.CloudNativeProtection.ResourceGroupRegion),
),
}, {
Config: subscriptionOneRegion,
@@ -134,9 +148,13 @@ func TestAccPolarisAzureSubscription_basic(t *testing.T) {
resource.TestCheckResourceAttr("polaris_azure_subscription.default", "delete_snapshots_on_destroy", "false"),
// Cloud Native Protection feature
- resource.TestCheckResourceAttr("polaris_azure_subscription.default", "cloud_native_protection.0.status", "connected"),
+ resource.TestCheckResourceAttr("polaris_azure_subscription.default", "cloud_native_protection.0.status", "CONNECTED"),
resource.TestCheckResourceAttr("polaris_azure_subscription.default", "cloud_native_protection.0.regions.#", "1"),
resource.TestCheckTypeSetElemAttr("polaris_azure_subscription.default", "cloud_native_protection.0.regions.*", "eastus2"),
+ resource.TestCheckResourceAttr("polaris_azure_subscription.default", "cloud_native_protection.0.resource_group_name",
+ subscription.CloudNativeProtection.ResourceGroupName),
+ resource.TestCheckResourceAttr("polaris_azure_subscription.default", "cloud_native_protection.0.resource_group_region",
+ subscription.CloudNativeProtection.ResourceGroupRegion),
),
}},
})
diff --git a/internal/provider/resource_azure_subscription_v0.go b/internal/provider/resource_azure_subscription_v0.go
index 9782a49..a1ac9a7 100644
--- a/internal/provider/resource_azure_subscription_v0.go
+++ b/internal/provider/resource_azure_subscription_v0.go
@@ -25,14 +25,20 @@ import (
"log"
"github.com/google/uuid"
+ "github.com/hashicorp/go-cty/cty"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/azure"
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/graphql/core"
)
+// validateAzureRegion always succeeds; region values are not validated in
+// the v0 schema.
+func validateAzureRegion(m interface{}, p cty.Path) diag.Diagnostics {
+ return nil
+}
+
// resourceAzureSubscriptionV0 defines the schema for version 0 of the Azure
-// subscription resource.
+// subscription resource and how to migrate to version 1.
func resourceAzureSubscriptionV0() *schema.Resource {
return &schema.Resource{
Schema: map[string]*schema.Schema{
@@ -73,7 +79,7 @@ func resourceAzureSubscriptionV0() *schema.Resource {
// resourceAzureSubscriptionStateUpgradeV0 migrates the resource id from the
// Azure subscription id to the Polaris cloud account id.
func resourceAzureSubscriptionStateUpgradeV0(ctx context.Context, state map[string]interface{}, m interface{}) (map[string]interface{}, error) {
- log.Print("[TRACE] resourceAzureSubscriptionStateUpgradeV0")
+ log.Print("[TRACE] azureSubscriptionStateUpgradeV0")
client, err := m.(*client).polaris()
if err != nil {
@@ -82,7 +88,7 @@ func resourceAzureSubscriptionStateUpgradeV0(ctx context.Context, state map[stri
id, err := uuid.Parse(state["id"].(string))
if err != nil {
- return state, err
+ return nil, err
}
account, err := azure.Wrap(client).Subscription(ctx, azure.SubscriptionID(id), core.FeatureCloudNativeProtection)
diff --git a/internal/provider/resource_azure_subscription_v1.go b/internal/provider/resource_azure_subscription_v1.go
index 308126a..852d303 100644
--- a/internal/provider/resource_azure_subscription_v1.go
+++ b/internal/provider/resource_azure_subscription_v1.go
@@ -32,8 +32,8 @@ import (
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/graphql/core"
)
-// resourceAzureSubscriptionV0 defines the schema for version 1 of the Azure
-// subscription resource.
+// resourceAzureSubscriptionV1 defines the schema for version 1 of the Azure
+// subscription resource and how to migrate to version 2.
func resourceAzureSubscriptionV1() *schema.Resource {
return &schema.Resource{
Schema: map[string]*schema.Schema{
@@ -79,7 +79,7 @@ func resourceAzureSubscriptionV1() *schema.Resource {
// resourceAzureSubscriptionStateUpgradeV1 introduces a cloud native protection
// feature block.
func resourceAzureSubscriptionStateUpgradeV1(ctx context.Context, state map[string]interface{}, m interface{}) (map[string]interface{}, error) {
- log.Print("[TRACE] resourceAzureSubscriptionStateUpgradeV1")
+ log.Print("[TRACE] azureSubscriptionStateUpgradeV1")
client, err := m.(*client).polaris()
if err != nil {
@@ -88,7 +88,7 @@ func resourceAzureSubscriptionStateUpgradeV1(ctx context.Context, state map[stri
id, err := uuid.Parse(state["id"].(string))
if err != nil {
- return state, err
+ return nil, err
}
account, err := azure.Wrap(client).Subscription(ctx, azure.CloudAccountID(id), core.FeatureAll)
diff --git a/internal/provider/resource_cdm_bootstrap.go b/internal/provider/resource_cdm_bootstrap.go
index dcd9cdf..d39c66a 100644
--- a/internal/provider/resource_cdm_bootstrap.go
+++ b/internal/provider/resource_cdm_bootstrap.go
@@ -12,6 +12,9 @@ import (
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/cdm"
)
+// This resource uses a template for its documentation due to a bug in the TF
+// docs generator. Remember to update the template if the documentation for any
+// field is changed.
func resourceCDMBootstrap() *schema.Resource {
return &schema.Resource{
CreateContext: resourceCDMBootstrapCreate,
diff --git a/internal/provider/resource_cdm_bootstrap_cces_aws.go b/internal/provider/resource_cdm_bootstrap_cces_aws.go
index ffba264..16fa7a0 100644
--- a/internal/provider/resource_cdm_bootstrap_cces_aws.go
+++ b/internal/provider/resource_cdm_bootstrap_cces_aws.go
@@ -11,6 +11,9 @@ import (
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/cdm"
)
+// This resource uses a template for its documentation due to a bug in the TF
+// docs generator. Remember to update the template if the documentation for any
+// field is changed.
func resourceCDMBootstrapCCESAWS() *schema.Resource {
return &schema.Resource{
CreateContext: resourceCDMBootstrapCCESAWSCreate,
diff --git a/internal/provider/resource_cdm_bootstrap_cces_azure.go b/internal/provider/resource_cdm_bootstrap_cces_azure.go
index 6f869f0..673d1fa 100644
--- a/internal/provider/resource_cdm_bootstrap_cces_azure.go
+++ b/internal/provider/resource_cdm_bootstrap_cces_azure.go
@@ -11,6 +11,9 @@ import (
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/cdm"
)
+// This resource uses a template for its documentation due to a bug in the TF
+// docs generator. Remember to update the template if the documentation for any
+// field is changed.
func resourceCDMBootstrapCCESAzure() *schema.Resource {
return &schema.Resource{
CreateContext: resourceCDMBootstrapCCESAzureCreate,
diff --git a/internal/provider/resource_custom_role.go b/internal/provider/resource_custom_role.go
index efffcca..292835f 100644
--- a/internal/provider/resource_custom_role.go
+++ b/internal/provider/resource_custom_role.go
@@ -33,6 +33,10 @@ import (
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/graphql"
)
+const resourceCustomRoleDescription = `
+The ´polaris_custom_role´ resource is used to manage custom roles in RSC.
+`
+
// resourceCustomRole defines the schema for the custom role resource.
func resourceCustomRole() *schema.Resource {
return &schema.Resource{
@@ -41,28 +45,34 @@ func resourceCustomRole() *schema.Resource {
UpdateContext: updateCustomRole,
DeleteContext: deleteCustomRole,
+ Description: description(resourceCustomRoleDescription),
Schema: map[string]*schema.Schema{
- "description": {
+ keyID: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "Role ID (UUID).",
+ },
+ keyDescription: {
Type: schema.TypeString,
Optional: true,
Description: "Role description.",
ValidateFunc: validation.StringIsNotWhiteSpace,
},
- "name": {
+ keyName: {
Type: schema.TypeString,
Required: true,
Description: "Role name.",
ValidateFunc: validation.StringIsNotWhiteSpace,
},
- "permission": {
+ keyPermission: {
Type: schema.TypeSet,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
- "hierarchy": {
+ keyHierarchy: {
Type: schema.TypeSet,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
- "object_ids": {
+ keyObjectIDs: {
Type: schema.TypeSet,
Elem: &schema.Schema{
Type: schema.TypeString,
@@ -72,7 +82,7 @@ func resourceCustomRole() *schema.Resource {
MinItems: 1,
Description: "Object/workload identifiers.",
},
- "snappable_type": {
+ keySnappableType: {
Type: schema.TypeString,
Required: true,
Description: "Snappable/workload type.",
@@ -84,10 +94,10 @@ func resourceCustomRole() *schema.Resource {
MinItems: 1,
Description: "Snappable hierarchy.",
},
- "operation": {
+ keyOperation: {
Type: schema.TypeString,
Required: true,
- Description: "Operation to allow on object ids under the snappable hierarchy.",
+ Description: "Operation to allow on object IDs under the snappable hierarchy.",
ValidateFunc: validation.StringIsNotWhiteSpace,
},
},
@@ -109,9 +119,9 @@ func createCustomRole(ctx context.Context, d *schema.ResourceData, m any) diag.D
return diag.FromErr(err)
}
- name := d.Get("name").(string)
- description := d.Get("description").(string)
- permissions := toPermissions(d.Get("permission"))
+ name := d.Get(keyName).(string)
+ description := d.Get(keyDescription).(string)
+ permissions := toPermissions(d.Get(keyPermission))
id, err := access.Wrap(client).AddRole(ctx, name, description, permissions, access.NoProtectableClusters)
if err != nil {
@@ -119,7 +129,6 @@ func createCustomRole(ctx context.Context, d *schema.ResourceData, m any) diag.D
}
d.SetId(id.String())
-
readCustomRole(ctx, d, m)
return nil
}
@@ -147,13 +156,13 @@ func readCustomRole(ctx context.Context, d *schema.ResourceData, m any) diag.Dia
return diag.FromErr(err)
}
- if err := d.Set("name", role.Name); err != nil {
+ if err := d.Set(keyName, role.Name); err != nil {
return diag.FromErr(err)
}
- if err := d.Set("description", role.Description); err != nil {
+ if err := d.Set(keyDescription, role.Description); err != nil {
return diag.FromErr(err)
}
- if err := d.Set("permission", fromPermissions(role.AssignedPermissions)); err != nil {
+ if err := d.Set(keyPermission, fromPermissions(role.AssignedPermissions)); err != nil {
return diag.FromErr(err)
}
@@ -174,10 +183,10 @@ func updateCustomRole(ctx context.Context, d *schema.ResourceData, m any) diag.D
return diag.FromErr(err)
}
- if d.HasChanges("name", "description", "permission") {
- name := d.Get("name").(string)
- description := d.Get("description").(string)
- permissions := toPermissions(d.Get("permission"))
+ if d.HasChanges(keyName, keyDescription, keyPermission) {
+ name := d.Get(keyName).(string)
+ description := d.Get(keyDescription).(string)
+ permissions := toPermissions(d.Get(keyPermission))
if err := access.Wrap(client).UpdateRole(ctx, id, name, description, permissions, access.NoProtectableClusters); err != nil {
return diag.FromErr(err)
@@ -211,7 +220,7 @@ func deleteCustomRole(ctx context.Context, d *schema.ResourceData, m any) diag.D
}
func permissionHash(v any) int {
- return schema.HashString(v.(map[string]any)["operation"])
+ return schema.HashString(v.(map[string]any)[keyOperation])
}
func fromPermissions(permissions []access.Permission) any {
@@ -225,15 +234,15 @@ func fromPermissions(permissions []access.Permission) any {
func fromPermission(permission access.Permission) any {
hierarchyBlocks := &schema.Set{F: func(v any) int {
- return schema.HashString(v.(map[string]any)["snappable_type"])
+ return schema.HashString(v.(map[string]any)[keySnappableType])
}}
for _, hierarchy := range permission.Hierarchies {
hierarchyBlocks.Add(fromSnappableHierarchy(hierarchy))
}
return map[string]any{
- "operation": permission.Operation,
- "hierarchy": hierarchyBlocks,
+ keyOperation: permission.Operation,
+ keyHierarchy: hierarchyBlocks,
}
}
@@ -244,8 +253,8 @@ func fromSnappableHierarchy(hierarchy access.SnappableHierarchy) any {
}
return map[string]any{
- "snappable_type": hierarchy.SnappableType,
- "object_ids": objectIDs,
+ keySnappableType: hierarchy.SnappableType,
+ keyObjectIDs: objectIDs,
}
}
@@ -260,24 +269,24 @@ func toPermissions(permissionBlocks any) []access.Permission {
func toPermission(permissionBlock map[string]any) access.Permission {
var hierarchies []access.SnappableHierarchy
- for _, hierarchy := range permissionBlock["hierarchy"].(*schema.Set).List() {
+ for _, hierarchy := range permissionBlock[keyHierarchy].(*schema.Set).List() {
hierarchies = append(hierarchies, toSnappableHierarchy(hierarchy.(map[string]any)))
}
return access.Permission{
- Operation: permissionBlock["operation"].(string),
+ Operation: permissionBlock[keyOperation].(string),
Hierarchies: hierarchies,
}
}
func toSnappableHierarchy(hierarchyBlock map[string]any) access.SnappableHierarchy {
var objectIDs []string
- for _, objectID := range hierarchyBlock["object_ids"].(*schema.Set).List() {
+ for _, objectID := range hierarchyBlock[keyObjectIDs].(*schema.Set).List() {
objectIDs = append(objectIDs, objectID.(string))
}
return access.SnappableHierarchy{
- SnappableType: hierarchyBlock["snappable_type"].(string),
+ SnappableType: hierarchyBlock[keySnappableType].(string),
ObjectIDs: objectIDs,
}
}
diff --git a/internal/provider/resource_role_assignment.go b/internal/provider/resource_role_assignment.go
index 15335b8..8c60afc 100644
--- a/internal/provider/resource_role_assignment.go
+++ b/internal/provider/resource_role_assignment.go
@@ -35,34 +35,41 @@ import (
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/graphql"
)
-// resourceRoleAssignment defines the schema for the role assignment resource.
+const resourceRoleAssignmentDescription = `
+The ´polaris_role_assignment´ resource is used to assign roles to users in RSC.
+`
+
func resourceRoleAssignment() *schema.Resource {
return &schema.Resource{
CreateContext: createRoleAssignment,
ReadContext: readRoleAssignment,
DeleteContext: deleteRoleAssignment,
+ Description: description(resourceRoleAssignmentDescription),
Schema: map[string]*schema.Schema{
- "role_id": {
+ keyID: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "SHA-256 hash of the user email and the role ID.",
+ },
+ keyRoleID: {
Type: schema.TypeString,
Required: true,
ForceNew: true,
- Description: "Role identifier.",
- ValidateFunc: validation.StringIsNotWhiteSpace,
+ Description: "Role ID (UUID). Changing this forces a new resource to be created.",
+ ValidateFunc: validation.IsUUID,
},
- "user_email": {
+ keyUserEmail: {
Type: schema.TypeString,
Required: true,
ForceNew: true,
- Description: "User email address.",
+ Description: "User email address. Changing this forces a new resource to be created.",
ValidateFunc: validation.StringIsNotWhiteSpace,
},
},
}
}
-// createRoleAssignment run the Create operation for the role assignment
-// resource.
func createRoleAssignment(ctx context.Context, d *schema.ResourceData, m any) diag.Diagnostics {
log.Print("[TRACE] createRoleAssignment")
@@ -71,11 +78,11 @@ func createRoleAssignment(ctx context.Context, d *schema.ResourceData, m any) di
return diag.FromErr(err)
}
- roleID, err := uuid.Parse(d.Get("role_id").(string))
+ roleID, err := uuid.Parse(d.Get(keyRoleID).(string))
if err != nil {
return diag.FromErr(err)
}
- userEmail := d.Get("user_email").(string)
+ userEmail := d.Get(keyUserEmail).(string)
if err := access.Wrap(client).AssignRole(ctx, userEmail, roleID); err != nil {
return diag.FromErr(err)
@@ -87,7 +94,6 @@ func createRoleAssignment(ctx context.Context, d *schema.ResourceData, m any) di
return nil
}
-// readRoleAssignment run the Read operation for the role assignment resource.
func readRoleAssignment(ctx context.Context, d *schema.ResourceData, m any) diag.Diagnostics {
log.Print("[TRACE] readRoleAssignment")
@@ -96,11 +102,11 @@ func readRoleAssignment(ctx context.Context, d *schema.ResourceData, m any) diag
return diag.FromErr(err)
}
- roleID, err := uuid.Parse(d.Get("role_id").(string))
+ roleID, err := uuid.Parse(d.Get(keyRoleID).(string))
if err != nil {
return diag.FromErr(err)
}
- userEmail := d.Get("user_email").(string)
+ userEmail := d.Get(keyUserEmail).(string)
user, err := access.Wrap(client).User(ctx, userEmail)
if errors.Is(err, graphql.ErrNotFound) {
@@ -111,14 +117,12 @@ func readRoleAssignment(ctx context.Context, d *schema.ResourceData, m any) diag
return diag.FromErr(err)
}
if !user.HasRole(roleID) {
- d.Set("role_id", "")
+ d.Set(keyRoleID, "")
}
return nil
}
-// deleteRoleAssignment run the Delete operation for the role assignment
-// resource.
func deleteRoleAssignment(ctx context.Context, d *schema.ResourceData, m any) diag.Diagnostics {
log.Print("[TRACE] deleteRoleAssignment")
@@ -127,11 +131,11 @@ func deleteRoleAssignment(ctx context.Context, d *schema.ResourceData, m any) di
return diag.FromErr(err)
}
- roleID, err := uuid.Parse(d.Get("role_id").(string))
+ roleID, err := uuid.Parse(d.Get(keyRoleID).(string))
if err != nil {
return diag.FromErr(err)
}
- userEmail := d.Get("user_email").(string)
+ userEmail := d.Get(keyUserEmail).(string)
if err := access.Wrap(client).UnassignRole(ctx, userEmail, roleID); err != nil {
return diag.FromErr(err)
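The computed `id` attribute above is documented as a "SHA-256 hash of the user email and the role ID". A minimal stdlib sketch of how such a stable ID can be derived; the exact concatenation order is an assumption for illustration, not necessarily what the provider does:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// roleAssignmentID derives a stable resource ID by hashing the user email
// and role ID with SHA-256 and hex-encoding the digest. The concatenation
// order here is an illustrative assumption.
func roleAssignmentID(userEmail, roleID string) string {
	hash := sha256.Sum256([]byte(userEmail + roleID))
	return hex.EncodeToString(hash[:])
}

func main() {
	id := roleAssignmentID("user@example.com", "9e7f3952-1fc1-11ec-b57a-972144d12d97")
	fmt.Println(len(id)) // 64: two hex characters per SHA-256 byte
}
```

Because the hash is deterministic, the same email and role ID always produce the same resource ID, which is what makes it usable as a Terraform ID.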
diff --git a/internal/provider/resource_user.go b/internal/provider/resource_user.go
index b073637..28de1e5 100644
--- a/internal/provider/resource_user.go
+++ b/internal/provider/resource_user.go
@@ -34,7 +34,10 @@ import (
"github.com/rubrikinc/rubrik-polaris-sdk-for-go/pkg/polaris/graphql"
)
-// resourceUser defines the schema for the user resource.
+const resourceUserDescription = `
+The ´polaris_user´ resource is used to manage users in RSC.
+`
+
func resourceUser() *schema.Resource {
return &schema.Resource{
CreateContext: createUser,
@@ -42,29 +45,35 @@ func resourceUser() *schema.Resource {
UpdateContext: updateUser,
DeleteContext: deleteUser,
+ Description: description(resourceUserDescription),
Schema: map[string]*schema.Schema{
- "email": {
+ keyID: {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "User email address.",
+ },
+ keyEmail: {
Type: schema.TypeString,
Required: true,
ForceNew: true,
- Description: "User email address.",
+ Description: "User email address. Changing this forces a new resource to be created.",
ValidateFunc: validation.StringIsNotWhiteSpace,
},
- "is_account_owner": {
+ keyIsAccountOwner: {
Type: schema.TypeBool,
Computed: true,
Description: "True if the user is the account owner.",
},
- "role_ids": {
+ keyRoleIDs: {
Type: schema.TypeSet,
Elem: &schema.Schema{
Type: schema.TypeString,
- ValidateFunc: validation.StringIsNotWhiteSpace,
+ ValidateFunc: validation.IsUUID,
},
Required: true,
- Description: "Roles assigned to the user.",
+ Description: "Roles assigned to the user (UUIDs).",
},
- "status": {
+ keyStatus: {
Type: schema.TypeString,
Computed: true,
Description: "User status.",
@@ -73,7 +82,6 @@ func resourceUser() *schema.Resource {
}
}
-// createUser run the Create operation for the user resource.
func createUser(ctx context.Context, d *schema.ResourceData, m any) diag.Diagnostics {
log.Print("[TRACE] createUser")
@@ -82,8 +90,8 @@ func createUser(ctx context.Context, d *schema.ResourceData, m any) diag.Diagnos
return diag.FromErr(err)
}
- userEmail := d.Get("email").(string)
- roleIDs, err := parseRoleIDs(d.Get("role_ids").(*schema.Set))
+ userEmail := d.Get(keyEmail).(string)
+ roleIDs, err := parseRoleIDs(d.Get(keyRoleIDs).(*schema.Set))
if err != nil {
return diag.FromErr(err)
}
@@ -98,7 +106,6 @@ func createUser(ctx context.Context, d *schema.ResourceData, m any) diag.Diagnos
return nil
}
-// readUser run the Read operation for the user resource.
func readUser(ctx context.Context, d *schema.ResourceData, m any) diag.Diagnostics {
log.Print("[TRACE] readUser")
@@ -116,13 +123,13 @@ func readUser(ctx context.Context, d *schema.ResourceData, m any) diag.Diagnosti
return diag.FromErr(err)
}
- if err := d.Set("email", user.Email); err != nil {
+ if err := d.Set(keyEmail, user.Email); err != nil {
return diag.FromErr(err)
}
- if err := d.Set("is_account_owner", user.IsAccountOwner); err != nil {
+ if err := d.Set(keyIsAccountOwner, user.IsAccountOwner); err != nil {
return diag.FromErr(err)
}
- if err := d.Set("status", user.Status); err != nil {
+ if err := d.Set(keyStatus, user.Status); err != nil {
return diag.FromErr(err)
}
@@ -130,16 +137,15 @@ func readUser(ctx context.Context, d *schema.ResourceData, m any) diag.Diagnosti
for _, role := range user.Roles {
roleIDs.Add(role.ID.String())
}
- if err := d.Set("role_ids", roleIDs); err != nil {
+ if err := d.Set(keyRoleIDs, roleIDs); err != nil {
return diag.FromErr(err)
}
return nil
}
-// updateUser run the Update operation for the user resource.
func updateUser(ctx context.Context, d *schema.ResourceData, m any) diag.Diagnostics {
- roleIDs, err := parseRoleIDs(d.Get("role_ids").(*schema.Set))
+ roleIDs, err := parseRoleIDs(d.Get(keyRoleIDs).(*schema.Set))
if err != nil {
return diag.FromErr(err)
}
@@ -157,7 +163,6 @@ func updateUser(ctx context.Context, d *schema.ResourceData, m any) diag.Diagnos
return nil
}
-// deleteUser run the Delete operation for the user resource.
func deleteUser(ctx context.Context, d *schema.ResourceData, m any) diag.Diagnostics {
log.Print("[TRACE] deleteUser")
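The `role_ids` elements are now validated with `validation.IsUUID` instead of `validation.StringIsNotWhiteSpace`. A stdlib-only sketch of the equivalent check; the regexp below is an illustrative approximation of the canonical 8-4-4-4-12 textual UUID form, not the SDK's exact implementation:

```go
package main

import (
	"fmt"
	"regexp"
)

// uuidPattern matches the canonical 8-4-4-4-12 hexadecimal UUID layout.
// This approximates what validation.IsUUID accepts.
var uuidPattern = regexp.MustCompile(
	`^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$`)

// isUUID reports whether s looks like a canonical textual UUID.
func isUUID(s string) bool {
	return uuidPattern.MatchString(s)
}

func main() {
	fmt.Println(isUUID("9e7f3952-1fc1-11ec-b57a-972144d12d97")) // true
	fmt.Println(isUUID("not-a-uuid"))                           // false
}
```

Tightening the validator from "not whitespace" to a UUID shape moves malformed role IDs from a runtime `uuid.Parse` error to a plan-time validation error.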
diff --git a/internal/provider/validators.go b/internal/provider/validators.go
new file mode 100644
index 0000000..48d55cd
--- /dev/null
+++ b/internal/provider/validators.go
@@ -0,0 +1,104 @@
+package provider
+
+import (
+ "errors"
+ "fmt"
+ "io/fs"
+ "net/mail"
+ "os"
+ "time"
+
+ "github.com/aws/aws-sdk-go-v2/aws/arn"
+ "github.com/hashicorp/go-cty/cty"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+)
+
+// validateDuration verifies that i contains a valid duration.
+func validateDuration(i interface{}, k string) ([]string, []error) {
+ v, ok := i.(string)
+ if !ok {
+ return nil, []error{fmt.Errorf("expected type of %q to be string", k)}
+ }
+ if _, err := time.ParseDuration(v); err != nil {
+ return nil, []error{fmt.Errorf("%q is not a valid duration", v)}
+ }
+
+ return nil, nil
+}
+
+// validateEmailAddress verifies that i contains a valid email address.
+func validateEmailAddress(i interface{}, k string) ([]string, []error) {
+ v, ok := i.(string)
+ if !ok {
+ return nil, []error{fmt.Errorf("expected type of %q to be string", k)}
+ }
+ if _, err := mail.ParseAddress(v); err != nil {
+ return nil, []error{fmt.Errorf("%q is not a valid email address", v)}
+ }
+
+ return nil, nil
+}
+
+// validatePermissions verifies that m contains "update", the only value
+// currently accepted for the permissions argument.
+func validatePermissions(m interface{}, p cty.Path) diag.Diagnostics {
+ if m.(string) != "update" {
+ return diag.Errorf("invalid permissions value")
+ }
+
+ return nil
+}
+
+// validateRoleARN verifies that the role ARN is a valid AWS ARN.
+func validateRoleARN(m interface{}, p cty.Path) diag.Diagnostics {
+ if _, err := arn.Parse(m.(string)); err != nil {
+ return diag.Errorf("failed to parse role ARN: %v", err)
+ }
+
+ return nil
+}
+
+// fileExists assumes m is a file path and returns nil if the file exists,
+// otherwise a diagnostic message is returned.
+func fileExists(m interface{}, p cty.Path) diag.Diagnostics {
+ if _, err := os.Stat(m.(string)); err != nil {
+ details := "unknown error"
+
+ var pathErr *fs.PathError
+ if errors.As(err, &pathErr) {
+ details = pathErr.Err.Error()
+ }
+
+ return diag.Errorf("failed to access file: %s", details)
+ }
+
+ return nil
+}
+
+// isExistingFile assumes i is a file path and returns an error if the file
+// cannot be accessed. It is the SchemaValidateFunc variant of fileExists.
+func isExistingFile(i interface{}, k string) ([]string, []error) {
+ v, ok := i.(string)
+ if !ok {
+ return nil, []error{fmt.Errorf("expected type of %q to be string", k)}
+ }
+
+ if _, err := os.Stat(v); err != nil {
+ details := "unknown error"
+ var pathErr *fs.PathError
+ if errors.As(err, &pathErr) {
+ details = pathErr.Err.Error()
+ }
+
+ return nil, []error{fmt.Errorf("failed to access file: %s", details)}
+ }
+
+ return nil, nil
+}
+
+// validateHash verifies that m contains a string of SHA-256 hex digest length
+// (64 characters, two per byte). Note that only the length is checked, not
+// that the characters are valid hexadecimal.
+func validateHash(m interface{}, p cty.Path) diag.Diagnostics {
+ if hash, ok := m.(string); ok && len(hash) == 64 {
+ return nil
+ }
+
+ return diag.Errorf("invalid hash value")
+}
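The validators above follow the SDK's legacy `SchemaValidateFunc` shape, `func(interface{}, string) ([]string, []error)`: assert the value is a string, then check its contents. A small standalone sketch exercising the duration check at the heart of `validateDuration` (simplified to return only the errors slice):

```go
package main

import (
	"fmt"
	"time"
)

// checkDuration mirrors the core of validateDuration: it requires a string
// value that parses as a Go duration (e.g. "90s", "8.5m").
func checkDuration(i interface{}, k string) []error {
	v, ok := i.(string)
	if !ok {
		return []error{fmt.Errorf("expected type of %q to be string", k)}
	}
	if _, err := time.ParseDuration(v); err != nil {
		return []error{fmt.Errorf("%q is not a valid duration", v)}
	}
	return nil
}

func main() {
	fmt.Println(len(checkDuration("90s", "timeout")))  // 0: valid duration
	fmt.Println(len(checkDuration("soon", "timeout"))) // 1: not parseable
	fmt.Println(len(checkDuration(42, "timeout")))     // 1: wrong type
}
```

Reporting errors rather than panicking on a failed type assertion keeps schema validation graceful when Terraform hands the validator an unexpected type.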
diff --git a/templates/data-sources/aws_cnp_permissions.md.tmpl b/templates/data-sources/aws_cnp_permissions.md.tmpl
index 6da683e..e7659bf 100644
--- a/templates/data-sources/aws_cnp_permissions.md.tmpl
+++ b/templates/data-sources/aws_cnp_permissions.md.tmpl
@@ -1,26 +1,20 @@
---
-page_title: "polaris_aws_cnp_permissions Data Source - terraform-provider-polaris"
+page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}"
subcategory: ""
description: |-
-
+ {{.Description}}
---
-# polaris_aws_cnp_permissions (Data Source)
-
+# {{.Name}} ({{.Type}})
+{{.Description}}
+{{if .HasExample}}
## Example Usage
-```terraform
-data "polaris_aws_cnp_permissions" "permissions" {
- for_each = data.polaris_aws_cnp_artifacts.artifacts.role_keys
- cloud = data.polaris_aws_cnp_artifacts.artifacts.cloud
- features = data.polaris_aws_cnp_artifacts.artifacts.features
- role_key = each.key
-}
-```
+{{tffile .ExampleFile}}
+{{end}}
-
## Schema
### Required
@@ -30,28 +24,29 @@ data "polaris_aws_cnp_permissions" "permissions" {
### Optional
-- `cloud` (String) AWS cloud type.
-- `ec2_recovery_role_path` (String) EC2 recovery role path.
+- `cloud` (String) AWS cloud type. Possible values are `STANDARD`, `CHINA` and `GOV`. Default value is `STANDARD`.
+- `ec2_recovery_role_path` (String) AWS EC2 recovery role path.
### Read-Only
- `customer_managed_policies` (List of Object) Customer managed policies. (see [below for nested schema](#nestedatt--customer_managed_policies))
-- `id` (String) The ID of this resource.
+- `id` (String) SHA-256 hash of the customer managed policies and the managed policies.
- `managed_policies` (List of String) Managed policies.
+
+### Nested Schema for `feature`
+
+Required:
+
+- `name` (String) RSC feature name. Possible values are `CLOUD_NATIVE_ARCHIVAL`, `CLOUD_NATIVE_ARCHIVAL_ENCRYPTION`, `CLOUD_NATIVE_PROTECTION`, `CLOUD_NATIVE_S3_PROTECTION`, `EXOCOMPUTE` and `RDS_PROTECTION`.
+- `permission_groups` (Set of String) RSC permission groups for the feature. Possible values are `BASIC`, `ENCRYPTION`, `EXPORT_AND_RESTORE`, `SNAPSHOT_PRIVATE_ACCESS`, `PRIVATE_ENDPOINT` and `RSC_MANAGED_CLUSTER`. Default value is `BASIC`.
+
+
### Nested Schema for `customer_managed_policies`
Read-Only:
-- `feature` (String) RSC Feature.
+- `feature` (String) RSC feature name.
- `name` (String) Policy name.
-- `policy` (String) Policy.
-
-
-### Nested Schema for `feature`
-
-Required:
-
-- `name` (String) Feature name.
-- `permission_groups` (Set of String) Permission groups to assign to the feature.
+- `policy` (String) AWS policy.
diff --git a/templates/data-sources/azure_archival_location.md.tmpl b/templates/data-sources/azure_archival_location.md.tmpl
new file mode 100644
index 0000000..9ba1c45
--- /dev/null
+++ b/templates/data-sources/azure_archival_location.md.tmpl
@@ -0,0 +1,45 @@
+---
+page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}"
+subcategory: ""
+description: |-
+ {{.Description}}
+---
+
+# {{.Name}} ({{.Type}})
+
+{{.Description}}
+
+{{if .HasExample}}
+## Example Usage
+
+{{tffile .ExampleFile}}
+{{end}}
+
+## Schema
+
+### Optional
+
+- `archival_location_id` (String, Deprecated) Cloud native archival location ID (UUID). **Deprecated:** use `id` instead.
+- `id` (String) Cloud native archival location ID (UUID).
+- `name` (String) Name of the cloud native archival location.
+
+### Read-Only
+
+- `connection_status` (String) Connection status of the cloud native archival location.
+- `container_name` (String) Azure storage container name.
+- `customer_managed_key` (Set of Object) Customer managed storage encryption. Specify the regions and their respective encryption details. For other regions, data will be encrypted using platform managed keys. (see [below for nested schema](#nestedatt--customer_managed_key))
+- `location_template` (String) RSC location template. If a storage account region was specified, it will be `SPECIFIC_REGION`, otherwise `SOURCE_REGION`.
+- `redundancy` (String) Azure storage redundancy. Possible values are `GRS`, `GZRS`, `LRS`, `RA_GRS`, `RA_GZRS` and `ZRS`. Default value is `LRS`.
+- `storage_account_name_prefix` (String) Azure storage account name prefix. The storage account name prefix cannot be longer than 14 characters and can only consist of numbers and lower case letters.
+- `storage_account_region` (String) Azure region to store the snapshots in. If not specified, the snapshots will be stored in the same region as the workload.
+- `storage_account_tags` (Map of String) Azure storage account tags. Each tag will be added to the storage account created by RSC.
+- `storage_tier` (String) Azure storage tier. Possible values are `COOL` and `HOT`. Default value is `COOL`.
+
+
+### Nested Schema for `customer_managed_key`
+
+Read-Only:
+
+- `name` (String) Key name.
+- `region` (String) The region in which the key will be used. Regions without customer managed keys will use platform managed keys.
+- `vault_name` (String) Key vault name.
diff --git a/templates/data-sources/role.md.tmpl b/templates/data-sources/role.md.tmpl
index ece785e..377280c 100644
--- a/templates/data-sources/role.md.tmpl
+++ b/templates/data-sources/role.md.tmpl
@@ -1,23 +1,20 @@
---
-page_title: "polaris_role Data Source - terraform-provider-polaris"
+page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}"
subcategory: ""
description: |-
-
+ {{.Description}}
---
-# polaris_role (Data Source)
-
+# {{.Name}} ({{.Type}})
+{{.Description}}
+{{if .HasExample}}
## Example Usage
-```terraform
-data "polaris_role" "compliance_auditor" {
- name = "Compliance Auditor Role"
-}
-```
+{{tffile .ExampleFile}}
+{{end}}
-
## Schema
### Required
@@ -27,7 +24,7 @@ data "polaris_role" "compliance_auditor" {
### Read-Only
- `description` (String) Role description.
-- `id` (String) The ID of this resource.
+- `id` (String) Role ID (UUID).
- `is_org_admin` (Boolean) True if the role is the organization administrator.
- `permission` (Set of Object) Role permission. (see [below for nested schema](#nestedatt--permission))
@@ -37,7 +34,7 @@ data "polaris_role" "compliance_auditor" {
Read-Only:
- `hierarchy` (Set of Object) Snappable hierarchy. (see [below for nested schema](#nestedobjatt--permission--hierarchy))
-- `operation` (String) Operation allowed on object ids under the snappable hierarchy.
+- `operation` (String) Operation allowed on object IDs under the snappable hierarchy.
### Nested Schema for `permission.hierarchy`
diff --git a/templates/data-sources/role_template.md.tmpl b/templates/data-sources/role_template.md.tmpl
index 2afff3c..46b95e7 100644
--- a/templates/data-sources/role_template.md.tmpl
+++ b/templates/data-sources/role_template.md.tmpl
@@ -1,34 +1,31 @@
---
-page_title: "polaris_role_template Data Source - terraform-provider-polaris"
+page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}"
subcategory: ""
description: |-
-
+ {{.Description}}
---
-# polaris_role_template (Data Source)
-
+# {{.Name}} ({{.Type}})
+{{.Description}}
+{{if .HasExample}}
## Example Usage
-```terraform
-data "polaris_role_template" "compliance_auditor" {
- name = "Compliance Auditor"
-}
-```
+{{tffile .ExampleFile}}
+{{end}}
-
## Schema
### Required
-- `name` (String) Role name.
+- `name` (String) Role template name.
### Read-Only
-- `description` (String) Role description.
-- `id` (String) The ID of this resource.
-- `permission` (Set of Object) Role permission. (see [below for nested schema](#nestedatt--permission))
+- `description` (String) Role template description.
+- `id` (String) Role template ID (UUID).
+- `permission` (Set of Object) Role template permission. (see [below for nested schema](#nestedatt--permission))
### Nested Schema for `permission`
@@ -36,7 +33,7 @@ data "polaris_role_template" "compliance_auditor" {
Read-Only:
- `hierarchy` (Set of Object) Snappable hierarchy. (see [below for nested schema](#nestedobjatt--permission--hierarchy))
-- `operation` (String) Operation allowed on object ids under the snappable hierarchy.
+- `operation` (String) Operation allowed on object IDs under the snappable hierarchy.
### Nested Schema for `permission.hierarchy`
diff --git a/templates/guides/aws_cnp_account.md.tmpl b/templates/guides/aws_cnp_account.md.tmpl
index 0a617e0..d108dec 100644
--- a/templates/guides/aws_cnp_account.md.tmpl
+++ b/templates/guides/aws_cnp_account.md.tmpl
@@ -7,34 +7,50 @@ The `polaris_aws_account` resource uses a CloudFormation stack to grant RSC perm
granted to RSC by the CloudFormation stack can be difficult to understand and track as RSC will request the permissions
to be updated as new features, requiring new permissions, are released.
-To make the process of granting AWS permissions more transparent, a couple of new resources and data sources have been added to
-the RSC Terraform provider:
+To make the process of granting AWS permissions more transparent, a couple of new resources and data sources have been
+added to the RSC Terraform provider:
* `polaris_aws_cnp_account`
* `polaris_aws_cnp_account_attachments`
* `polaris_aws_cnp_account_trust_policy`
* `polaris_aws_cnp_artifacts`
* `polaris_aws_cnp_permissions`
- * `polaris_features`
+ * `polaris_account`
Using these resources, it's possible to add an AWS account to RSC without using a CloudFormation stack.
To add an AWS account to RSC using the new CNP resources, start by using the `polaris_aws_cnp_artifacts` data source:
```terraform
data "polaris_aws_cnp_artifacts" "artifacts" {
- features = ["CLOUD_NATIVE_PROTECTION"]
+ feature {
+ name = "CLOUD_NATIVE_PROTECTION"
+
+ permission_groups = [
+ "BASIC",
+ "EXPORT_AND_RESTORE",
+ "FILE_LEVEL_RECOVERY",
+ "SNAPSHOT_PRIVATE_ACCESS",
+ ]
+ }
}
```
-`features` lists the RSC features to enabled for the AWS account. Use the `polaris_features` data source to obtain a
-list of RSC features available for the RSC account. The `polaris_aws_cnp_artifacts` data source returns the instance
-profiles and roles, referred to as _artifacts_ by RSC, which are required by RSC.
+One or more `feature` blocks list the RSC features to enable for the AWS account. Use the `polaris_account` data
+source to obtain a list of RSC features available for the RSC account. The `polaris_aws_cnp_artifacts` data source
+returns the instance profiles and roles, referred to as _artifacts_ by RSC, which are required by RSC.
Next, use the `polaris_aws_cnp_permissions` data source to obtain the role permission policies, customer managed
policies and managed policies, required by RSC:
```terraform
data "polaris_aws_cnp_permissions" "permissions" {
for_each = data.polaris_aws_cnp_artifacts.artifacts.role_keys
- features = data.polaris_aws_cnp_artifacts.artifacts.features
role_key = each.key
+
+ dynamic "feature" {
+ for_each = data.polaris_aws_cnp_artifacts.artifacts.feature
+ content {
+ name = feature.value["name"]
+ permission_groups = feature.value["permission_groups"]
+ }
+ }
}
```
@@ -42,23 +58,31 @@ After defining the two data sources, use the `polaris_aws_cnp_account` resource
account:
```terraform
resource "polaris_aws_cnp_account" "account" {
- features = polaris_aws_cnp_artifacts.artifacts.features
name = "My Account"
native_id = "123456789123"
regions = ["us-east-2", "us-west-2"]
+
+ dynamic "feature" {
+ for_each = data.polaris_aws_cnp_artifacts.artifacts.feature
+ content {
+ name = feature.value["name"]
+ permission_groups = feature.value["permission_groups"]
+ }
+ }
}
```
-`name` is the name given to the AWS account in RSC, `native_id` is the AWS account ID and `regions` the AWS regions.
-When Terraform processes this resource, the AWS account will show up in the connecting state in the RSC UI.
+`name` is the name given to the AWS account in RSC, `native_id` is the AWS account ID and `regions` lists the AWS
+regions to protect with RSC. When Terraform applies this resource, the AWS account will show up in the connecting state
+in the RSC UI.
Next, the `polaris_aws_cnp_account_trust_policy` resource needs to be used to define the trust policies required by RSC
for the AWS account:
```terraform
resource "polaris_aws_cnp_account_trust_policy" "trust_policy" {
- for_each = data.polaris_aws_cnp_artifacts.artifacts.role_keys
- account_id = polaris_aws_cnp_account.account.id
- features = polaris_aws_cnp_account.account.features
- role_key = each.key
+ for_each = data.polaris_aws_cnp_artifacts.artifacts.role_keys
+ account_id = polaris_aws_cnp_account.account.id
+ features = polaris_aws_cnp_account.account.feature.*.name
+ role_key = each.key
}
```
This resource provides the trust policies to attach to the IAM roles created, so that RSC can assume the roles to
@@ -95,13 +119,13 @@ Lastly, to finalize the onboarding of the AWS account, use the `polaris_aws_cnp_
```terraform
resource "polaris_aws_cnp_account_attachments" "attachments" {
account_id = polaris_aws_cnp_account.account.id
- features = polaris_aws_cnp_account.account.features
+ features = polaris_aws_cnp_account.account.feature.*.name
dynamic "instance_profile" {
for_each = aws_iam_instance_profile.profile
content {
key = instance_profile.key
- name = instance_profile.value["name"]
+ name = instance_profile.value["arn"]
}
}
diff --git a/templates/guides/changelog.md.tmpl b/templates/guides/changelog.md.tmpl
new file mode 100644
index 0000000..ce3537d
--- /dev/null
+++ b/templates/guides/changelog.md.tmpl
@@ -0,0 +1,50 @@
+---
+page_title: "Changelog"
+---
+
+# Changelog
+
+## v0.9.0
+* Update the `polaris_aws_archival_location` resource to support updates of the `bucket_tags` field without recreating
+ the resources.
+* Add `polaris_aws_account` data source. [[docs](../data-sources/aws_account)]
+* Add `polaris_azure_subscription` data source. [[docs](../data-sources/azure_subscription)]
+* Deprecate the `archival_location_id` field in the `polaris_aws_archival_location` data source. Use the `id` field
+ instead.
+* Deprecate the `archival_location_id` field in the `polaris_azure_archival_location` data source. Use the `id` field
+ instead.
+* Add the field `setup_yaml` to the `polaris_aws_exocompute_cluster_attachment` resource. The `setup_yaml` field
+ contains K8s specs that can be passed to `kubectl` to establish a connection between the cluster and RSC.
+ [[docs](../resources/aws_exocompute_cluster_attachment)]
+* Fix a bug in the AWS feature removal code that causes removal of the `CLOUD_NATIVE_S3_PROTECTION` feature to fail.
+* Improve the code that waits for RSC features to be disabled. The code now checks both the status of the job and the
+ status of the cloud account.
+* Improve the documentation for AWS data sources and resources.
+* Update guides.
+* Add `polaris_azure_archival_location` data source. [[docs](../data-sources/azure_archival_location)]
+* Fix a bug in the `polaris_azure_archival_location` resource where the cloud account UUID would be passed to the RSC
+ API instead of the Azure subscription UUID when creating an Azure archival location.
+* Fix a bug in the `polaris_aws_cnp_account` resource where destroying it would constantly result in an *objects not
+ authorized* error.
+* Increase the wait time for asynchronous RSC operations to 8.5 minutes.
+* Fix an issue with the permissions of subscriptions onboarded using the `polaris_azure_subscription` resource where
+ the RSC UI would show the status as "Update permissions" even though the app registration would have all the required
+ permissions.
+* Move changelog and upgrade guides to guides folder.
+* Add support for creating Azure cloud native archival locations. [[docs](../resources/azure_archival_location)]
+* Fix a bug in the `polaris_aws_exocompute` resource where customer supplied security groups were not validated
+ correctly.
+* Add support for shared Exocompute to the `polaris_azure_exocompute` resource.
+ [[docs](../resources/azure_exocompute#host_cloud_account_id)]
+* Add the `polaris_account` data source. [[docs](../data-sources/account)]
+* Add support for the Cloud Native Archival feature to the `polaris_azure_subscription` resource.
+ [[docs](../resources/azure_subscription#nested-schema-for-cloud_native_archival)]
+* Add support for the Cloud Native Archival Encryption feature to the `polaris_azure_subscription` resource.
+ [[docs](../resources/azure_subscription#nested-schema-for-cloud_native_archival_encryption)]
+* Add support for the Azure SQL Database Protection feature to the `polaris_azure_subscription` resource.
+ [[docs](../resources/azure_subscription#nested-schema-for-sql_db_protection)]
+* Add support for the Azure SQL Managed Instance Protection feature to the `polaris_azure_subscription` resource.
+ [[docs](../resources/azure_subscription#nested-schema-for-sql_mi_protection)]
+* Add support for specifying an Azure resource group when onboarding the Cloud Native Archival, Cloud Native Archival
+ Encryption, Cloud Native Protection or Exocompute features using the `polaris_azure_subscription` resource.
+ [[docs](../resources/azure_subscription#optional)]
diff --git a/templates/guides/permissions.md.tmpl b/templates/guides/permissions.md.tmpl
index b75c73f..f16d40f 100644
--- a/templates/guides/permissions.md.tmpl
+++ b/templates/guides/permissions.md.tmpl
@@ -7,10 +7,14 @@ RSC requires permissions to operate and as new features are added to RSC the set
guide explains how Terraform can be used to keep this set of permissions up to date.
## AWS
-For AWS this is managed through a CloudFormation stack. When the status of an account feature is `missing-permissions`
-the CloudFormation stack must be updated for the feature to continue to function. This can be managed by setting the
-`permissions` argument to `update`.
-```hcl
+There are two ways to onboard AWS accounts to RSC: with or without a CloudFormation stack. Permissions are managed
+differently depending on how the account was onboarded.
+
+### Using a CloudFormation Stack
+When an account is onboarded using a CloudFormation stack, the permissions are managed through the stack. When the
+status of an account feature is `MISSING_PERMISSIONS` the CloudFormation stack must be updated for the RSC feature to
+continue to function. This can be managed by setting the `permissions` argument to `update`.
+```terraform
resource "polaris_aws_account" "default" {
profile = "default"
permissions = "update"
@@ -22,55 +26,98 @@ resource "polaris_aws_account" "default" {
}
}
```
-This will generate a diff when the status of at least one feature is `missing-permissions`. Applying the account
-resource for this diff will update the CloudFormation stack. If the `permissions` argument is not specified the
+This will generate a diff when the status of at least one feature is in the `MISSING_PERMISSIONS` state. Applying the
+account resource for this diff will update the CloudFormation stack. If the `permissions` argument is not specified the
provider will not attempt to update the CloudFormation stack.
+### Not Using a CloudFormation Stack
+When an account is onboarded without a CloudFormation stack, permissions are managed through IAM roles created with
+the [aws](https://registry.terraform.io/providers/hashicorp/aws/latest) provider, using the
+`polaris_aws_cnp_artifacts` and `polaris_aws_cnp_permissions` data sources. See the
+[AWS CNP Account](aws_cnp_account.md) guide for more information on how to create IAM roles using the data sources.
+
## Azure
-For Azure permissions are managed through a service principal. When the status of a subscription feature is
-`missing-permissions` the permissions of the service principal must be updated for the feature to continue to
-function. This can be managed by Terraform using the
-[azurerm](https://registry.terraform.io/providers/hashicorp/azurerm/latest) provider:
-```hcl
-data "polaris_azure_permissions" "default" {
- features = [
- "cloud-native-protection",
- "exocompute",
- ]
+For Azure, permissions are managed through the subscription. When the status of a subscription feature is
+`MISSING_PERMISSIONS` the permissions must be updated for the feature to continue to function. This can be managed by
+Terraform using the [azurerm](https://registry.terraform.io/providers/hashicorp/azurerm/latest) provider:
+```terraform
+variable "features" {
+ type = set(string)
+ description = "List of RSC features to enable for subscription."
+}
+
+data "polaris_azure_permissions" "features" {
+ for_each = var.features
+ feature = each.key
}
-resource "azurerm_role_definition" "default" {
- name = "terraform"
- scope = data.azurerm_subscription.default.id
+resource "azurerm_role_definition" "subscription" {
+ for_each = data.polaris_azure_permissions.features
+ name = "RSC - Subscription Level - ${each.value.feature}"
+ scope = data.azurerm_subscription.subscription.id
permissions {
- actions = data.polaris_azure_permissions.default.actions
- data_actions = data.polaris_azure_permissions.default.data_actions
- not_actions = data.polaris_azure_permissions.default.not_actions
- not_data_actions = data.polaris_azure_permissions.default.not_data_actions
+ actions = each.value.subscription_actions
+ data_actions = each.value.subscription_data_actions
+ not_actions = each.value.subscription_not_actions
+ not_data_actions = each.value.subscription_not_data_actions
}
}
-resource "azurerm_role_assignment" "default" {
+resource "azurerm_role_assignment" "subscription" {
+ for_each = data.polaris_azure_permissions.features
principal_id = "9e7f3952-1fc1-11ec-b57a-972144d12d97"
- role_definition_id = azurerm_role_definition.default.role_definition_resource_id
- scope = data.azurerm_subscription.default.id
+ role_definition_id = azurerm_role_definition.subscription[each.key].role_definition_resource_id
+ scope = data.azurerm_subscription.subscription.id
}
-resource "polaris_azure_service_principal" "default" {
- sdk_auth = "${path.module}/sdk-service-principal.json"
- tenant_domain = "mydomain.onmicrosoft.com"
- permissions_hash = data.polaris_azure_permissions.default.hash
+resource "azurerm_role_definition" "resource_group" {
+ for_each = data.polaris_azure_permissions.features
+ name = "RSC - Resource Group Level - ${each.value.feature}"
+ scope = data.azurerm_resource_group.resource_group.id
+
+ permissions {
+ actions = each.value.resource_group_actions
+ data_actions = each.value.resource_group_data_actions
+ not_actions = each.value.resource_group_not_actions
+ not_data_actions = each.value.resource_group_not_data_actions
+ }
+}
+
+resource "azurerm_role_assignment" "resource_group" {
+ for_each = data.polaris_azure_permissions.features
+ principal_id = "9e7f3952-1fc1-11ec-b57a-972144d12d97"
+ role_definition_id = azurerm_role_definition.resource_group[each.key].role_definition_resource_id
+ scope = data.azurerm_resource_group.resource_group.id
+}
+
+resource "polaris_azure_service_principal" "service_principal" {
+ ...
+}
+
+resource "polaris_azure_subscription" "subscription" {
+ subscription_id = data.azurerm_subscription.subscription.subscription_id
+ subscription_name = data.azurerm_subscription.subscription.display_name
+ tenant_domain = polaris_azure_service_principal.service_principal.tenant_domain
+
+ cloud_native_protection {
+ permissions = data.polaris_azure_permissions.features["CLOUD_NATIVE_PROTECTION"].id
+ resource_group_name = data.azurerm_resource_group.resource_group.name
+ resource_group_region = data.azurerm_resource_group.resource_group.location
+ regions = ["eastus2"]
+ }
+
+ ...
depends_on = [
- azurerm_role_definition.default,
- azurerm_role_assignment.default,
+ azurerm_role_definition.subscription,
+ azurerm_role_definition.resource_group,
]
}
```
When the permissions for a feature changes the permissions data source will reflect this generating a diff for the
-role definition and service principal resources. Applying the diff will first update the permissions of the service
-principal's role definition and then notify RSC about the update.
+role definitions and subscription resources. Applying the diff will first update the permissions of the role
+definitions, then notify RSC about the update.
## GCP
For GCP permissions are managed through a service account. When the status of a project feature is `missing-permissions`
diff --git a/templates/guides/upgrade_guide_v0.3.0.md.tmpl b/templates/guides/upgrade_guide_v0.3.0.md.tmpl
index aceeaf4..5a22974 100644
--- a/templates/guides/upgrade_guide_v0.3.0.md.tmpl
+++ b/templates/guides/upgrade_guide_v0.3.0.md.tmpl
@@ -1,6 +1,5 @@
---
-page_title: "Upgrade Guide: v0.3.0 "
-subcategory: "Upgrade"
+page_title: "Upgrade Guide: v0.3.0"
---
# RSC provider version v0.3.0
diff --git a/templates/guides/upgrade_guide_v0.6.0.md.tmpl b/templates/guides/upgrade_guide_v0.6.0.md.tmpl
index f1a2341..f3c62ad 100644
--- a/templates/guides/upgrade_guide_v0.6.0.md.tmpl
+++ b/templates/guides/upgrade_guide_v0.6.0.md.tmpl
@@ -1,6 +1,5 @@
---
-page_title: "Upgrade Guide: v0.6.0 "
-subcategory: "Upgrade"
+page_title: "Upgrade Guide: v0.6.0"
---
# RSC provider version v0.6.0
diff --git a/templates/guides/upgrade_guide_v0.9.0.md.tmpl b/templates/guides/upgrade_guide_v0.9.0.md.tmpl
new file mode 100644
index 0000000..431dc44
--- /dev/null
+++ b/templates/guides/upgrade_guide_v0.9.0.md.tmpl
@@ -0,0 +1,134 @@
+---
+page_title: "Upgrade Guide: v0.9.0"
+---
+
+# RSC provider changes
+The v0.9.0 release introduces changes to the following data sources and resources:
+* `polaris_account` - New data source with 3 fields, `features`, `fqdn` and `name`. `features` holds the features
+ enabled for the RSC account. `fqdn` holds the fully qualified domain name for the RSC account. `name` holds the RSC
+ account name.
+* `polaris_azure_permissions` - Add support for scoped permissions. Permissions are scoped to either the subscription
+  level or the resource group level. The `hash` field has been deprecated and replaced with the `id` field. Both fields
+  will have the same value until the `hash` field is removed in a future release.
+* `polaris_azure_archival_location` - Add support for Azure archival locations, see the data source and resource
+ documentation for more information.
+* `polaris_azure_exocompute` - Add support for shared Exocompute, see the resource documentation for more information.
+ The `subscription_id` field has been deprecated and replaced with the `cloud_account_id` field. The `subscription_id`
+ field referred to the ID of the `polaris_azure_subscription` resource and not the Azure subscription ID, which was
+ confusing. Note, changing an existing `polaris_azure_exocompute` resource to use the `cloud_account_id` field will
+ recreate the resource.
+* `polaris_azure_service_principal` - The `permissions_hash` field has been deprecated and replaced with the
+  `permissions` field. With the changes in the `polaris_azure_permissions` data source, use
+  `permissions = data.polaris_azure_permissions..id` to connect the `polaris_azure_permissions` data source to
+  the permissions-updated signal. The `permissions` field has in turn been deprecated and replaced with the
+  per-feature `permissions` fields of the `polaris_azure_subscription` resource.
+* `polaris_azure_subscription` - Add support for onboarding `cloud_native_archival`, `cloud_native_archival_encryption`,
+  `sql_db_protection` and `sql_mi_protection`. Note, there are no additional Terraform resources for managing these
+  features yet. Add support for specifying an Azure resource group per RSC feature. Add the `permissions` field to each
+  feature, which can be used with the `polaris_azure_permissions` data source to signal permissions updates.
+* `polaris_features` - The data source has been deprecated and replaced with the `features` field of the
+ `polaris_deployment` data source. Note, the `features` field is a set and not a list.
+* `polaris_aws_exocompute_cluster_attachment` - New field, `setup_yaml`, which holds the K8s spec that can be passed
+  to `kubectl apply` inside the EKS cluster to create a connection between the cluster and RSC.
+* `polaris_aws_account` - New data source for accessing information about an AWS account added to RSC. The account can
+ be looked up by the AWS account ID or the account name. Currently, only the cloud account ID of the account is
+ exposed.
+* `polaris_azure_subscription` - New data source for accessing information about an Azure subscription added to RSC.
+ The subscription can be looked up by the Azure subscription ID or the subscription name. Currently, only the cloud
+ account ID of the subscription is exposed.
+* `polaris_aws_archival_location` - The `bucket_tags` field now supports being updated without the resource being
+  recreated.
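+
+The `setup_yaml` field can, for example, be written to disk and applied manually. A minimal sketch, assuming an
+existing `polaris_aws_exocompute_cluster_attachment` resource named `attachment` and the `hashicorp/local` provider:
+```terraform
+# Write the K8s spec produced by RSC to a local file.
+resource "local_file" "setup_yaml" {
+  content  = polaris_aws_exocompute_cluster_attachment.attachment.setup_yaml
+  filename = "${path.module}/rsc-connection.yaml"
+}
+```
+The generated file can then be applied inside the EKS cluster with `kubectl apply -f rsc-connection.yaml`.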
+
+Deprecated fields will be removed in a future release; please migrate your configurations to use the replacement
+fields as soon as possible.
+
+# Known issues
+* The user-assigned managed identity for `cloud_native_archival_encryption` is not refreshed when the
+ `polaris_azure_subscription` resource is updated. This will be fixed in a future release.
+
+In addition to the issues listed above, which affect this particular release of the provider, other reported issues
+can be found on [GitHub](https://github.com/rubrikinc/terraform-provider-polaris/issues).
+
+# How to upgrade
+Make sure that the `version` field is configured in a way which allows Terraform to upgrade to the v0.9.0 release. One
+way of doing this is by using the pessimistic constraint operator `~>`, which allows Terraform to upgrade to the latest
+release within the same minor version:
+```hcl
+terraform {
+ required_providers {
+ polaris = {
+ source = "rubrikinc/polaris"
+ version = "~> 0.9.0"
+ }
+ }
+}
+```
+Next, upgrade the Terraform provider to the new version by running:
+```bash
+$ terraform init -upgrade
+```
+After the Terraform provider has been updated, validate the correctness of the Terraform configuration files by running:
+```bash
+$ terraform plan
+```
+If this doesn't produce an error or unwanted diff, proceed by running:
+```bash
+$ terraform apply -refresh-only
+```
+This will read the remote state of the resources and migrate the local Terraform state to the v0.9.0 version.
+
+## Upgrade issues
+When upgrading to the v0.9.0 release you may encounter one or more of the following issues.
+
+### polaris_azure_exocompute
+Replacing the `subscription_id` field with the `cloud_account_id` field will result in the `polaris_azure_exocompute`
+resource being recreated; a diff similar to the following will be shown:
+```hcl
+ # polaris_azure_exocompute.default must be replaced
+-/+ resource "polaris_azure_exocompute" "default" {
+ + cloud_account_id = "a677433c-954c-4af6-842e-0268c4a82a9f" # forces replacement
+ ~ id = "45d68b3f-a78f-4098-922e-367d2a22cb92" -> (known after apply)
+ - subscription_id = "a677433c-954c-4af6-842e-0268c4a82a9f" -> null # forces replacement
+ # (2 unchanged attributes hidden)
+ }
+```
+Apply the diff to recreate the resource and replace the field.
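+
+A minimal sketch of a migrated configuration, assuming the cloud account ID is taken from a
+`polaris_azure_subscription` resource named `default` and the remaining fields are kept from the existing
+configuration:
+```terraform
+resource "polaris_azure_exocompute" "default" {
+  # The RSC cloud account ID, not the Azure subscription ID.
+  cloud_account_id = polaris_azure_subscription.default.id
+
+  # Remaining fields unchanged from the previous configuration.
+  region = "eastus2"
+  subnet = "..."
+}
+```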
+
+### polaris_azure_service_principal
+Replacing the `permissions_hash` field with the `permissions` field will result in the resource being updated
+in-place; a diff similar to the following will be shown:
+```hcl
+# polaris_azure_service_principal.default will be updated in-place
+~ resource "polaris_azure_service_principal" "default" {
+ id = "6f35cc58-e1c9-445d-8bb0-a0e30dd53a40"
+ + permissions = "0a79e15a989ef9a5191fe9fba62f40f5bd7f7062a90fbe367b29d1ae3dd34e50"
+ - permissions_hash = "0a79e15a989ef9a5191fe9fba62f40f5bd7f7062a90fbe367b29d1ae3dd34e50" -> null
+ # (2 unchanged attributes hidden)
+}
+```
+Apply the diff to replace the field.
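+
+A minimal sketch of a migrated configuration, assuming a `polaris_azure_permissions` data source named `default`:
+```terraform
+resource "polaris_azure_service_principal" "default" {
+  sdk_auth      = "${path.module}/sdk-service-principal.json"
+  tenant_domain = "mydomain.onmicrosoft.com"
+
+  # Replaces the deprecated permissions_hash field; the id field of the
+  # permissions data source replaces the deprecated hash field.
+  permissions = data.polaris_azure_permissions.default.id
+}
+```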
+
+### polaris_azure_subscription
+Because of the new Azure resource group support, using the `cloud_native_protection` or `exocompute` fields will result
+in a diff similar to the following:
+```hcl
+# polaris_azure_subscription.default will be updated in-place
+~ resource "polaris_azure_subscription" "default" {
+ id = "f7b298c4-bf1d-4af4-900e-bf69ddfc6187"
+ # (4 unchanged attributes hidden)
+
+ ~ cloud_native_protection {
+ - resource_group_name = "RubrikBackups-RG-DontDelete-9f68a830-36a7-4363-9cf9-c81189fdc410" -> null
+ - resource_group_region = "westus" -> null
+ # (3 unchanged attributes hidden)
+ }
+
+ ~ exocompute {
+ - resource_group_name = "RubrikBackups-RG-DontDelete-e9ee0004-dcb2-4ec5-91b5-329c561c8311" -> null
+ - resource_group_region = "westus" -> null
+ # (3 unchanged attributes hidden)
+ }
+}
+```
+To remove the diff, copy the `resource_group_name` and `resource_group_region` values from the diff and add them to
+their respective places in the Terraform configuration.
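+
+A minimal sketch of the updated configuration, using the resource group values from the diff above, with all other
+fields kept unchanged:
+```terraform
+resource "polaris_azure_subscription" "default" {
+  # ...
+
+  cloud_native_protection {
+    resource_group_name   = "RubrikBackups-RG-DontDelete-9f68a830-36a7-4363-9cf9-c81189fdc410"
+    resource_group_region = "westus"
+    # ...
+  }
+
+  exocompute {
+    resource_group_name   = "RubrikBackups-RG-DontDelete-e9ee0004-dcb2-4ec5-91b5-329c561c8311"
+    resource_group_region = "westus"
+    # ...
+  }
+}
+```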
diff --git a/templates/index.md.tmpl b/templates/index.md.tmpl
index c8317b2..d44495b 100644
--- a/templates/index.md.tmpl
+++ b/templates/index.md.tmpl
@@ -40,8 +40,7 @@ provider "polaris" {
The service account can also be passed to the provider using the `RUBRIK_POLARIS_SERVICEACCOUNT_CREDENTIALS` environment
variable. When passing the service account using the environment variable, leave the provider configuration empty:
```terraform
-provider "polaris" {
-}
+provider "polaris" {}
```
For documentation on how to create a service account using RSC, visit the
diff --git a/templates/resources/cdm_bootstrap.md.tmpl b/templates/resources/cdm_bootstrap.md.tmpl
index f521050..b9293ac 100644
--- a/templates/resources/cdm_bootstrap.md.tmpl
+++ b/templates/resources/cdm_bootstrap.md.tmpl
@@ -2,14 +2,18 @@
page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}"
subcategory: ""
description: |-
-
+ {{.Description}}
---
# {{.Name}} ({{.Type}})
+{{.Description}}
+
+{{if .HasExample}}
## Example Usage
-{{printf "examples/resources/%s/resource.tf" .Name | tffile}}
+{{tffile .ExampleFile}}
+{{end}}
## Schema
diff --git a/templates/resources/cdm_bootstrap_cces_aws.md.tmpl b/templates/resources/cdm_bootstrap_cces_aws.md.tmpl
index 5193490..294f7c3 100644
--- a/templates/resources/cdm_bootstrap_cces_aws.md.tmpl
+++ b/templates/resources/cdm_bootstrap_cces_aws.md.tmpl
@@ -2,14 +2,18 @@
page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}"
subcategory: ""
description: |-
-
+ {{.Description}}
---
# {{.Name}} ({{.Type}})
+{{.Description}}
+
+{{if .HasExample}}
## Example Usage
-{{printf "examples/resources/%s/resource.tf" .Name | tffile}}
+{{tffile .ExampleFile}}
+{{end}}
## Schema
diff --git a/templates/resources/cdm_bootstrap_cces_azure.md.tmpl b/templates/resources/cdm_bootstrap_cces_azure.md.tmpl
index 9307f95..260772e 100644
--- a/templates/resources/cdm_bootstrap_cces_azure.md.tmpl
+++ b/templates/resources/cdm_bootstrap_cces_azure.md.tmpl
@@ -2,14 +2,18 @@
page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}"
subcategory: ""
description: |-
-
+ {{.Description}}
---
# {{.Name}} ({{.Type}})
+{{.Description}}
+
+{{if .HasExample}}
## Example Usage
-{{printf "examples/resources/%s/resource.tf" .Name | tffile}}
+{{tffile .ExampleFile}}
+{{end}}
## Schema