Terraforming GCP: deploy DataDomain, PowerProtect Data Manager, NetWorker, Avamar and more from GCP Marketplace offerings using Terraform
These modules can deploy Dell PowerProtect DataDomain Virtual Edition (DDVE), PowerProtect Data Manager (PPDM), NetWorker Virtual Edition (NVE) and Avamar Virtual Edition (AVE) to GCP using Terraform. Instance sizes and disk count/size are evaluated automatically by specifying a ddve_type and ave_type.
Individual modules are called from main by evaluating variables.
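Because every machine type is guarded by a count or create_* variable, a single plan can enable only the pieces you need. A minimal sketch, overriding variables on the command line instead of a tfvars file:
# plan only a DDVE plus a new VPC; everything else stays disabled
terraform plan -var="ddve_count=1" -var="create_networks=true"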
Name | Version |
---|---|
terraform | >= 0.14 |
google | ~> 5.3.0 |
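Before the first plan, initialize the working directory so Terraform installs the pinned google provider and wires up the local modules:
terraform init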
Name | Source | Version |
---|---|---|
cloud_nat | ./modules/cloud_nat | n/a |
ddve | ./modules/ddve | n/a |
ddve_project_role | ./modules/ddve_project_role | n/a |
gke | ./modules/gke | n/a |
networks | ./modules/networks | n/a |
nve | ./modules/nve | n/a |
ppdm | ./modules/ppdm | n/a |
s2svpn | ./modules/s2svpn | n/a |
ubuntu | ./modules/ubuntu | n/a |
windows | ./modules/windows | n/a |
No resources.
Name | Description | Type | Default | Required |
---|---|---|---|---|
DDVE_HOSTNAME | Hostname of the DDVE machine | string | "ddve-tf" | no |
ENV_NAME | Environment name, concatenated to resource names | string | "demo" | no |
NVE_HOSTNAME | Hostname prefix (a counting number is appended) of the NVE machine | string | "nve-tf" | no |
PPDM_HOSTNAME | Hostname prefix (a counting number is appended) of the PPDM machine | string | "ppdm-tf" | no |
create_cloud_nat | n/a | bool | false | no |
create_ddve_project_role | Deploy a role for DDVE OAuth to Google Cloud Storage | bool | false | no |
create_gke | Deploy a basic Google Kubernetes Engine cluster for test/dev | bool | false | no |
create_networks | Do you want to create a VPC | bool | false | no |
create_s2svpn | Should a Site-to-Site VPN gateway be deployed | bool | false | no |
ddve_count | Number of DDVE instances to create | number | 0 | no |
ddve_disk_type | DDVE disk type, can be: 'Performance Optimized', 'Cost Optimized' | string | "Cost Optimized" | no |
ddve_role_id | ID of the role for DDVE, in the format roles/{role}, organizations/{organization_id}/roles/{role}, or projects/{project_id}/roles/{role} when using existing roles; otherwise it will be created for you | string | "ddve_oauth_role" | no |
ddve_sa_account_id | The ID of the service account for the DDVE IAM policy to access the storage bucket via OAuth | string | "" | no |
ddve_source_tags | Source tags applied to the instance for firewall rules | list(any) | [] | no |
ddve_target_tags | Target tags applied to the instance for firewall rules | list(any) | [] | no |
ddve_type | DDVE type, can be: '16 TB DDVE', '32 TB DDVE', '96 TB DDVE', '256 TB DDVE' | string | "16 TB DDVE" | no |
ddve_version | DDVE version, can be: 'LTS2022 7.7.5.50', 'LTS2023 7.10.1.40', 'LTS2024 7.13.1.05', '8.1.0.10' | string | "8.1.0.10" | no |
gcp_network | GCP network to be used, change for your own infra | string | "default" | no |
gcp_project | The GCP project to deploy resources into | any | null | no |
gcp_region | GCP region to be used | string | "europe-west3" | no |
gcp_subnet_cidr_block_1 | CIDR block of the first subnet to be used | string | "10.0.16.0/20" | no |
gcp_subnetwork_name_1 | Name of the first subnet | string | "default" | no |
gcp_zone | GCP zone to be used | string | "europe-west3-c" | no |
gke_master_ipv4_cidr_block | Subnet CIDR block for Google Kubernetes Engine master nodes | string | "172.16.0.16/28" | no |
gke_num_nodes | Number of GKE worker nodes | number | 2 | no |
gke_subnet_secondary_cidr_block_0 | Cluster CIDR block for Google Kubernetes Engine | string | "10.4.0.0/14" | no |
gke_subnet_secondary_cidr_block_1 | Services CIDR block for Google Kubernetes Engine | string | "10.0.32.0/20" | no |
gke_zonal | Use the zonal deployment model for GKE | bool | true | no |
labels | Key/value map of labels you want to apply to resources | map(any) | {} | no |
nve_count | Number of NVE instances to create | number | 0 | no |
nve_source_tags | Source tags applied to the instance for firewall rules | list(any) | [] | no |
nve_target_tags | Target tags applied to the instance for firewall rules | list(any) | [] | no |
nve_type | NVE type, can be: 'small', 'medium', 'large' | string | "small" | no |
nve_version | NVE version, can be: '19.9', '19.10' | string | "19.10" | no |
ppdm_count | Number of PPDM instances to create | number | 0 | no |
ppdm_source_tags | Source tags applied to the instance for firewall rules | list(any) | [] | no |
ppdm_target_tags | Target tags applied to the instance for firewall rules | list(any) | [] | no |
ppdm_version | PPDM version, can be: '19.16', '19.17' | string | "19.17" | no |
s2s_vpn_route_dest | Routing destinations (on-premises local networks) for the VPN | list(string) | ["127.0.0.1/32"] | no |
ubuntu_HOSTNAME | Hostname prefix (a counting number is appended) of the Ubuntu machine | string | "ubuntu-tf" | no |
ubuntu_count | Number of Ubuntu instances to create | number | 0 | no |
ubuntu_deletion_protection | Protect the Ubuntu instance from deletion | bool | false | no |
ubuntu_source_tags | Source tags applied to the instance for firewall rules | list(any) | [] | no |
ubuntu_target_tags | Target tags applied to the instance for firewall rules | list(any) | [] | no |
vpn_shared_secret | Shared secret for the VPN connection | string | "topsecret12345" | no |
vpn_wan_ip | IP address of the local VPN gateway | string | "0.0.0.0" | no |
windows_HOSTNAME | Hostname prefix (a counting number is appended) of the Windows machine | string | "windows-tf" | no |
windows_count | Number of Windows instances to create | number | 0 | no |
windows_deletion_protection | Protect the Windows instance from deletion | bool | false | no |
windows_source_tags | Source tags applied to the instance for firewall rules | list(any) | [] | no |
windows_target_tags | Target tags applied to the instance for firewall rules | list(any) | [] | no |
Name | Description |
---|---|
NVE_FQDN | The private IP address for the NVE instance |
PPDM_FQDN | The private IP address for the PPDM instance |
atos_bucket | The object bucket name created for ATOS configuration |
ddve_instance_id | The instance ID (initial password) for the DDVE instance |
ddve_private_ip | The private IP address for the DDVE instance |
ddve_ssh_private_key | The SSH private key for the DDVE instance |
ddve_ssh_public_key | The SSH public key for the DDVE instance |
kubernetes_cluster_host | GKE cluster host |
kubernetes_cluster_name | GKE cluster name |
location | GKE cluster location |
nve_instance_id | The instance ID (initial password) for the NVE instance |
nve_ssh_private_key | The SSH private key for the NVE instance |
nve_ssh_public_key | The SSH public key for the NVE instance |
ppdm_instance_id | The instance ID (initial password) for the PPDM instance |
ppdm_ssh_private_key | The SSH private key for the PPDM instance |
ppdm_ssh_public_key | The SSH public key for the PPDM instance |
ubuntu_instance_id | The instance ID (initial password) for the Ubuntu instance |
ubuntu_private_ip | The private IP address for the Ubuntu instance |
ubuntu_ssh_private_key | The SSH private key for the Ubuntu instance |
ubuntu_ssh_public_key | The SSH public key for the Ubuntu instance |
vpn_ip | n/a |
windows_instance_id | The instance ID (initial password) for the Windows instance |
windows_private_ip | The private IP address for the Windows instance |
windows_ssh_private_key | The SSH private key for the Windows instance |
windows_ssh_public_key | The SSH public key for the Windows instance |
DDVE_HOSTNAME = "ddve-tf"
ENV_NAME = "demo"
NVE_HOSTNAME = "nve-tf"
PPDM_HOSTNAME = "ppdm-tf"
create_cloud_nat = false
create_ddve_project_role = false
create_gke = false
create_networks = false
create_s2svpn = false
ddve_count = 0
ddve_disk_type = "Cost Optimized"
ddve_role_id = "roles/ddve_oauth_role"
ddve_sa_account_id = "tfddve-sa"
ddve_source_tags = []
ddve_target_tags = []
ddve_type = "16 TB DDVE"
ddve_version = "8.1.0.10"
gcp_network = "default"
gcp_project = ""
gcp_region = "europe-west3"
gcp_subnet_cidr_block_1 = "10.0.16.0/20"
gcp_subnetwork_name_1 = "default"
gcp_zone = "europe-west3-c"
gke_master_ipv4_cidr_block = "172.16.0.16/28"
gke_num_nodes = 2
gke_subnet_secondary_cidr_block_0 = "10.4.0.0/14"
gke_subnet_secondary_cidr_block_1 = "10.0.32.0/20"
gke_zonal = true
labels = {}
nve_count = 0
nve_source_tags = []
nve_target_tags = []
nve_type = "small"
nve_version = "19.10"
ppdm_count = 0
ppdm_source_tags = []
ppdm_target_tags = []
ppdm_version = "19.17"
s2s_vpn_route_dest = [
"127.0.0.1/32"
]
ubuntu_HOSTNAME = "ubuntu-tf"
ubuntu_count = 0
ubuntu_deletion_protection = false
ubuntu_source_tags = []
ubuntu_target_tags = []
vpn_shared_secret = "topsecret12345"
vpn_wan_ip = "0.0.0.0"
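As a starting point, a minimal terraform.tfvars can override only what you need; the project ID below is a placeholder:
# write a minimal terraform.tfvars (adjust the values for your environment)
cat > terraform.tfvars <<EOF
gcp_project     = "my-gcp-project"
create_networks = true
ddve_count      = 1
EOF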
Once you have configured all required settings and the machines to be deployed, check your deployment plan with
terraform plan
When everything meets your requirements, run the deployment with
terraform apply --auto-approve
This assumes that you use my Ansible playbooks for AVE, PPDM and DDVE from ansible-dps. Set the required variables (don't worry about the "Public" notations / names):
When the deployment is finished, you can connect to and configure DDVE in multiple ways. For an SSH connection, use:
export DDVE_PRIVATE_FQDN=$(terraform output -raw ddve_private_ip)
terraform output -raw ddve_ssh_private_key > ~/.ssh/ddve_key
chmod 0600 ~/.ssh/ddve_key
ssh -i ~/.ssh/ddve_key sysadmin@${DDVE_PRIVATE_FQDN}
Proceed with the CLI configuration:
export DDVE_PUBLIC_FQDN=$(terraform output -raw ddve_private_ip)
export DDVE_USERNAME=sysadmin
export DDVE_INITIAL_PASSWORD=$(terraform output -raw ddve_instance_id)
export DDVE_PASSWORD=Change_Me12345_
export PPDD_PASSPHRASE='Change_Me12345_!'
export DDVE_PRIVATE_FQDN=$(terraform output -raw ddve_private_ip)
export ATOS_BUCKET=$(terraform output -raw atos_bucket)
export PPDD_TIMEZONE="Europe/Berlin"
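As an optional sanity check, confirm the variables the playbooks expect are populated before running them:
env | grep -E '^(DDVE|PPDD|ATOS)' | sort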
Set the initial DataDomain password:
ansible-playbook ~/workspace/ansible_ppdd/1.0-Playbook-configure-initial-password.yml
If you have a valid DD license, set the variable PPDD_LICENSE, for example:
export PPDD_LICENSE=$(cat ~/workspace/license.xml)
ansible-playbook ~/workspace/ansible_ppdd/3.0-Playbook-set-dd-license.yml
Next, we set the passphrase, as it is required for ATOS. Then we set the timezone and NTP to the GCP link-local NTP server:
ansible-playbook ~/workspace/ansible_ppdd/2.1-Playbook-configure-ddpassphrase.yml
ansible-playbook ~/workspace/ansible_ppdd/2.1.1-Playbook-set-dd-timezone-and-ntp-gcp.yml
Although there is an ansible-playbook ~/workspace/ansible_ppdd/2.2-Playbook-configure-dd-atos-aws.yml, we cannot use it, as the REST API call to create the Active Tier on Object is not currently available for GCP. Therefore, use the UI wizard.
Use the bucket from:
terraform output -raw atos_bucket
Once the filesystem is enabled, we go ahead and enable the Boost protocol:
ansible-playbook ~/workspace/ansible_ppdd/2.2-Playbook-configure-dd-atos-gcp.yml
Set ppdm_count to the desired number and check your deployment plan with
terraform plan
When everything meets your requirements, run the deployment with
terraform apply --auto-approve
Similar to the DDVE configuration, we set environment variables for Ansible to configure PPDM automatically. The eval below exports every Terraform output whose name starts with PP as an environment variable:
# Refresh your environment variables if deploying in multiple steps!
eval "$(terraform output --json | jq -r 'with_entries(select(.key|test("^PP+"))) | keys[] as $key | "export \($key)=\"\(.[$key].value)\""')"
export PPDM_INITIAL_PASSWORD=Change_Me12345_
export PPDM_NTP_SERVERS='["169.254.169.254"]'
export PPDM_SETUP_PASSWORD=admin # default password of the cloud PPDM REST API
export PPDM_TIMEZONE="Europe/Berlin"
export PPDM_POLICY=PPDM_GOLD
Set the initial configuration:
ansible-playbook ~/workspace/ansible_ppdm/1.0-playbook_configure_ppdm.yml
Verify the config:
ansible-playbook ~/workspace/ansible_ppdm/1.1-playbook_get_ppdm_config.yml
We add the DataDomain:
ansible-playbook ~/workspace/ansible_ppdm/2.0-playbook_set_ddve.yml
We can get the Server Disaster Recovery (SDR) config after DataDomain Boost auto-configuration for the primary source from PPDM:
ansible-playbook ~/workspace/ansible_ppdm/3.0-playbook_get_sdr.yml
and see the DR job status:
ansible-playbook ~/workspace/ansible_ppdm/31.1-playbook_get_activities.yml --extra-vars "filter='category eq \"DISASTER_RECOVERY\"'"
Create a Kubernetes policy and rule:
ansible-playbook ~/workspace/ansible_ppdm/playbook_add_k8s_policy_and_rule.yml
Similar to PPDM, export the NVE-related Terraform outputs as environment variables:
eval "$(terraform output --json | jq -r 'with_entries(select(.key|test("^NVE+"))) | keys[] as $key | "export \($key)=\"\(.[$key].value)\""')"
export NVE_PRIVATE_IP=$NVE_FQDN
export NVE_PASSWORD="Change_Me12345_"
export NVE_TIMEZONE="Europe/Berlin"
Set create_gke to true and check your deployment plan with
terraform plan
When everything meets your requirements, run the deployment with
terraform apply --auto-approve
Get the cluster context / log in:
export USE_GKE_GCLOUD_AUTH_PLUGIN=True
gcloud container clusters get-credentials $(terraform output --raw kubernetes_cluster_name) --region $(terraform output --raw location)
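As a quick check that the kubeconfig context works before going on:
kubectl get nodes -o wide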
Add the cluster to PPDM:
ansible-playbook ~/workspace/ansible_ppdm/playbook_rbac_add_k8s_to_ppdm.yml
Let's view the StorageClasses:
kubectl get sc
We need to create a new default class, as GKE will always reconcile its CSI classes to WaitForFirstConsumer. So we will read the default class, unset it as the default, and create a new one from it as the default with Immediate binding mode:
# Get the default StorageClass
STORAGECLASS=$(kubectl get storageclass -o=jsonpath='{.items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=="true")].metadata.name}')
# Read the StorageClass into a new one with volumeBindingMode Immediate
kubectl get sc $STORAGECLASS -o json | jq '.volumeBindingMode = "Immediate" | .metadata.name = "standard-rwo-csi"' > default.sc.json
# Patch default class to *not* be default
kubectl patch storageclass standard-rwo -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
# Create a new default Class
kubectl apply -f default.sc.json
kubectl get sc
STORAGECLASS=$(kubectl get storageclass -o=jsonpath='{.items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=="true")].metadata.name}')
To use PPDM, we need to create a VolumeSnapshotClass:
kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: standard-rwo-csi-vsc
driver: pd.csi.storage.gke.io
deletionPolicy: Delete
EOF
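Verify that the VolumeSnapshotClass was created:
kubectl get volumesnapshotclass standard-rwo-csi-vsc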
When the deployment is finished, you can connect to the Ubuntu machine. For an SSH connection, use:
export UBUNTU_PRIVATE_FQDN=$(terraform output -raw ubuntu_private_ip)
terraform output -raw ubuntu_ssh_private_key > ~/.ssh/ubuntu_key
chmod 0600 ~/.ssh/ubuntu_key
ssh -i ~/.ssh/ubuntu_key cloudadmin@${UBUNTU_PRIVATE_FQDN}
If you branched from here, you might only want to update the versions.
The versions can be found in the Marketplace default Jinja file on GCP, e.g. for DDVE:
ddve.jinja:
{% if ddveVersion == "8.1.0.10" %}
{% set ddveImage = "ddve-gcp-8-1-0-10-1127744" %}
{% elif ddveVersion == "LTS2024 7.13.1.05" %}
{% set ddveImage = "ddve-gcp-7-13-1-05-1126976" %}
{% elif ddveVersion == "LTS2023 7.10.1.40" %}
{% set ddveImage = "ddve-gcp-7-10-1-40-1126469" %}
{% elif ddveVersion == "LTS2022 7.7.5.50" %}
{% set ddveImage = "ddve-gcp-7-7-5-50-1129444" %}
{% endif %}
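If you do not have the Jinja file at hand, the available image names can usually be queried straight from the public image project (assuming the images remain publicly listable):
gcloud compute images list --project dellemc-ddve-public --no-standard-images --filter="name~ddve"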
The code will always be maintained in ./modules/ddve/ddve.tf:
ddve_image = {
  "8.1.0.10" = {
    projectId = "dellemc-ddve-public"
    imageName = "ddve-gcp-8-1-0-10-1127744"
  }
  "LTS2024 7.13.1.05" = {
    projectId = "dellemc-ddve-public"
    imageName = "ddve-gcp-7-13-1-05-1126976"
  }
  "LTS2023 7.10.1.40" = {
    projectId = "dellemc-ddve-public"
    imageName = "ddve-gcp-7-10-1-40-1126469"
  }
  "LTS2022 7.7.5.50" = {
    projectId = "dellemc-ddve-public"
    imageName = "ddve-gcp-7-7-5-50-1129444"
  }
}
And in ./ddve_variables.tf:
variable "ddve_version" {
  type        = string
  default     = "8.1.0.10"
  description = "DDVE Version, can be: 'LTS2022 7.7.5.50', 'LTS2023 7.10.1.40', 'LTS2024 7.13.1.05', '8.1.0.10'"
  validation {
    condition = anytrue([
      var.ddve_version == "LTS2022 7.7.5.50",
      var.ddve_version == "LTS2023 7.10.1.40",
      var.ddve_version == "LTS2024 7.13.1.05",
      var.ddve_version == "8.1.0.10",
    ])
    error_message = "Must be a valid DDVE Version, can be: 'LTS2022 7.7.5.50', 'LTS2023 7.10.1.40', 'LTS2024 7.13.1.05', '8.1.0.10'."
  }
}
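The validation can be exercised quickly; an unsupported version fails at plan time with the error message above:
terraform plan -var="ddve_version=1.2.3.4"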