
terraforming-azure

terraforming-azure is a set of Terraform modules to deploy Dell DPS products to Azure.

Requirements

| Name | Version |
|------|---------|
| terraform | >= 0.15.0 |
| azurerm | ~> 2.94 |
| http | ~> 3.4.2 |
| random | ~> 3.1 |
| tls | ~> 3.1 |

Modules

| Name | Source | Version |
|------|--------|---------|
| aks | ./modules/aks | n/a |
| ave | ./modules/ave | n/a |
| common_rg | ./modules/rg | n/a |
| crs_s2s_vpn | ./modules/s2s_vpn | n/a |
| ddve | ./modules/ddve | n/a |
| linux | ./modules/linux | n/a |
| networks | ./modules/networks | n/a |
| nve | ./modules/nve | n/a |
| ppdm | ./modules/ppdm | n/a |
| s2s_vpn | ./modules/s2s_vpn | n/a |

Resources

No resources.

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| LINUX_ADMIN_USERNAME | n/a | `string` | `"ubuntu"` | no |
| LINUX_DATA_DISKS | n/a | `list(string)` | `[]` | no |
| LINUX_HOSTNAME | n/a | `string` | `"client1"` | no |
| LINUX_IMAGE | n/a | `map(any)` | `{"offer": "UbuntuServer", "publisher": "Canonical", "sku": "18.04-LTS", "version": "latest"}` | no |
| LINUX_PRIVATE_IP | IP for the linux instance | `string` | `"10.10.8.12"` | no |
| LINUX_VM_SIZE | n/a | `string` | `"Standard_DS1_v2"` | no |
| aks_count | will deploy AKS clusters when greater than 0; the number indicates the number of AKS clusters | `number` | `0` | no |
| aks_private_cluster | determines whether the AKS cluster is private; currently not supported | `bool` | `false` | no |
| aks_private_dns_zone_id | the zone ID for AKS; currently not supported | `any` | `null` | no |
| aks_subnet | n/a | `list(string)` | `["10.10.6.0/24"]` | no |
| ave_count | will deploy AVE when greater than 0; the number indicates the number of AVE instances | `number` | `0` | no |
| ave_initial_password | n/a | `string` | `"Change_Me12345_"` | no |
| ave_public_ip | n/a | `string` | `"false"` | no |
| ave_resource_group_name | bring your own resource group; the code will read the data from the resource group name specified here | `string` | `null` | no |
| ave_tcp_inbound_rules_Inet | inbound traffic rule for the security group from the Internet | `list(string)` | `["22", "443"]` | no |
| ave_type | AVE type, can be: '0.5 TB AVE', '1 TB AVE', '2 TB AVE', '4 TB AVE', '8 TB AVE', '16 TB AVE' | `string` | `"0.5 TB AVE"` | no |
| ave_version | AVE version, can be: '19.8.0', '19.7.0', '19.4.02', '19.3.03', '19.2.04' | `string` | `"19.8.0"` | no |
| azure_bastion_subnet | n/a | `list(string)` | `["10.10.0.224/27"]` | no |
| azure_environment | the Azure cloud environment to use. Available values at https://www.terraform.io/docs/providers/azurerm/#environment | `string` | `"public"` | no |
| client_id | n/a | `any` | `null` | no |
| client_secret | n/a | `any` | `null` | no |
| common_location | name of a common resource group location for all but network resources | `any` | `null` | no |
| common_resource_group_name | name of a common resource group for all but network resources | `any` | n/a | yes |
| create_bastion | n/a | `bool` | `false` | no |
| create_common_rg | create a common RG | `bool` | `false` | no |
| create_crs_s2s_vpn | whether to create a Cyber Vault | `bool` | `false` | no |
| create_linux | a demo linux client | `bool` | `false` | no |
| create_networks | if set to true, networks will be created in the environment | `bool` | `false` | no |
| create_s2s_vpn | n/a | `bool` | `false` | no |
| crs_network_rg_name | name of the resource group of the existing vnet | `string` | `""` | no |
| crs_tunnel1_preshared_key | the preshared key for the VPN tunnel when deploying the S2S VPN | `string` | `""` | no |
| crs_vnet_name | name of the existing vnet | `string` | `""` | no |
| crs_vpn_destination_cidr_blocks | the CIDR blocks (as strings) for the destination route in your local network, when the S2S VPN is deployed | `list(string)` | `[]` | no |
| crs_vpn_subnet | n/a | `list(string)` | `["10.150.1.0/24"]` | no |
| crs_wan_ip | the IP of your VPN device for the S2S VPN | `any` | `null` | no |
| ddve_count | will deploy DDVE when greater than 0; the number indicates the number of DDVE instances | `number` | `0` | no |
| ddve_initial_password | the initial password for the Data Domain; it is exposed as the DDVE_PASSWORD output for further configuration. As the DD is configured via SSH, the password must be changed from changeme | `string` | `"Change_Me12345_"` | no |
| ddve_meta_disks | n/a | `list(string)` | `["1023", "1023"]` | no |
| ddve_networks_resource_group_name | bring your own network resource group; the code will read the data from the resource group name specified here | `string` | `null` | no |
| ddve_public_ip | enable a public IP on the Data Domain network interface | `string` | `"false"` | no |
| ddve_resource_group_name | bring your own resource group; the code will read the data from the resource group name specified here | `string` | `null` | no |
| ddve_tcp_inbound_rules_Inet | inbound traffic rule for the security group from the Internet | `list(string)` | `["22", "443"]` | no |
| ddve_type | DDVE type, can be: '16 TB DDVE', '32 TB DDVE', '96 TB DDVE', '256 TB DDVE', '16 TB DDVE PERF', '32 TB DDVE PERF', '96 TB DDVE PERF', '256 TB DDVE PERF' | `string` | `"16 TB DDVE"` | no |
| ddve_version | DDVE version, can be: '7.7.525', '7.7.530', '7.10.115', '7.10.120', '7.13.020', '8.0.010', '7.10.1015.MSDN', '7.10.120.MSDN', '7.7.5020.MSDN', '7.13.0020.MSDN', '8.0.010.MSDN' | `string` | `"8.0.010.MSDN"` | no |
| ddvelist | n/a | `map(object({ ddve_name = string, ddve_meta_disks = list(string), ddve_type = string, ddve_version = string }))` | `{"firstdd": {"ddve_name": "ddve1", "ddve_meta_disks": [1000, 1000], "ddve_type": "16 TB DDVE", "ddve_version": "8.0.010.MSDN"}}` | no |
| dns_suffix | the DNS suffix when we create a network with internal DNS | `any` | n/a | yes |
| enable_aks_subnet | if set to true, create a subnet for AKS | `bool` | `false` | no |
| enable_tkg_controlplane_subnet | if set to true, create a subnet for the TKG control plane | `bool` | `false` | no |
| enable_tkg_workload_subnet | if set to true, create a subnet for the TKG workload | `bool` | `false` | no |
| environment | n/a | `any` | n/a | yes |
| file_uris_cs | file URI for the custom script extension with linux | `string` | `null` | no |
| infrastructure_subnet | n/a | `list(string)` | `["10.10.8.0/26"]` | no |
| location | n/a | `any` | `null` | no |
| network_rg_name | the RG for the network if a different one is used for an existing vnet | `any` | `null` | no |
| networks_aks_subnet_id | the AKS subnet ID if not deployed from the module | `string` | `""` | no |
| networks_dns_zone_name | n/a | `any` | `null` | no |
| networks_infrastructure_resource_group_name | name of the network RG when using an existing one | `any` | `null` | no |
| networks_infrastructure_subnet_id | ID of the subnet when using an existing one | `any` | `null` | no |
| nve_count | will deploy NVE when greater than 0; the number indicates the number of NVE instances | `number` | `0` | no |
| nve_initial_password | the initial password for the NVE | `string` | `"Change_Me12345_"` | no |
| nve_public_ip | n/a | `string` | `"false"` | no |
| nve_resource_group_name | bring your own resource group; the code will read the data from the resource group name specified here | `string` | `null` | no |
| nve_tcp_inbound_rules_Inet | inbound traffic rule for the security group from the Internet | `list(string)` | `["22", "443"]` | no |
| nve_type | NVE type, can be: 'SMALL', 'MEDIUM', 'HIGH'; see the NetWorker Virtual Edition Deployment Guide for more | `string` | `"SMALL"` | no |
| nve_version | NVE version, can be: '19.8.0', '19.9.2', '19.10.0' | `string` | `"19.10.0"` | no |
| ppdm_count | will deploy PPDM when greater than 0; the number indicates the number of PPDM instances | `number` | `0` | no |
| ppdm_initial_password | for use only if ansible playbooks shall hide the password | `string` | `"Change_Me12345_"` | no |
| ppdm_name | instances will be named envname+ppdmname+instanceid, e.g. tfdemo-ppdm1, tfdemo-ppdm2 | `string` | `"ppdm"` | no |
| ppdm_public_ip | whether to assign a public IP to PPDM | `bool` | `false` | no |
| ppdm_resource_group_name | bring your own resource group; the code will read the data from the resource group name specified here | `string` | `null` | no |
| ppdm_version | PPDM version, can be: '19.16.0', '19.15.0', '19.14.0' | `string` | `"19.16.0"` | no |
| resource_group_name | default name of the provided RG | `any` | `null` | no |
| storage_account_cs | storage account when using the custom script extension with linux | `string` | `null` | no |
| storage_account_key_cs | storage account key when using the custom script extension with linux | `string` | `null` | no |
| subscription_id | n/a | `any` | `null` | no |
| tenant_id | n/a | `any` | `null` | no |
| tkg_controlplane_subnet | n/a | `list(string)` | `["10.10.2.0/24"]` | no |
| tkg_workload_subnet | n/a | `list(string)` | `["10.10.4.0/24"]` | no |
| tunnel1_preshared_key | n/a | `any` | `null` | no |
| virtual_network_address_space | n/a | `list(any)` | `["10.10.0.0/16"]` | no |
| vnet_name | n/a | `any` | `null` | no |
| vpn_destination_cidr_blocks | the CIDR blocks (as strings) for the destination route in your local network, when the S2S VPN is deployed | `list(string)` | `[]` | no |
| vpn_subnet | n/a | `list(string)` | `["10.10.12.0/24"]` | no |
| wan_ip | n/a | `any` | `null` | no |
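
The inputs `subscription_id`, `tenant_id`, `client_id`, and `client_secret` are presumably intended for service principal credentials; alternatively, the azurerm provider also reads the standard `ARM_*` environment variables. A minimal sketch (the GUIDs below are placeholders, not real values):

```bash
# Hypothetical service principal credentials; replace with your own values.
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_SECRET="<service-principal-secret>"
```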

Outputs

| Name | Description |
|------|-------------|
| AKS_KUBE_API | first API server |
| AKS_KUBE_CONFIG | first cluster kubeconfig |
| AVE_PASSWORD | n/a |
| AVE_PRIVATE_FQDN | the private FQDN of the first AVE |
| AVE_PRIVATE_IP | the private IP address of the first AVE instance |
| AVE_PUBLIC_FQDN | we will use the private IP as FQDN if no public one is registered, so API calls can work |
| AVE_PUBLIC_IP | we will use the private IP as FQDN if no public one is registered, so API calls can work |
| AVE_SSH_PRIVATE_KEY | the SSH private key for the AVE instance |
| AVE_SSH_PUBLIC_KEY | the SSH public key for the AVE instance |
| AZURE_SUBSCRIPTION_ID | n/a |
| DDVE_ATOS_CONTAINER | n/a |
| DDVE_ATOS_STORAGE_ACCOUNT | n/a |
| DDVE_PASSWORD | n/a |
| DDVE_PRIVATE_IP | the private IP address of the first DDVE instance |
| DDVE_PUBLIC_FQDN | we will use the private IP as FQDN if no public one is registered, so API calls can work |
| DDVE_SSH_PRIVATE_KEY | the SSH private key for the DDVE instance |
| DDVE_SSH_PUBLIC_KEY | the SSH public key for the DDVE instance |
| DEPLOYMENT_DOMAIN | n/a |
| K8S_CLUSTER_NAME | the name of the K8S cluster |
| K8S_FQDN | the FQDN of the AKS cluster |
| NVE_PASSWORD | n/a |
| NVE_PRIVATE_FQDN | the private FQDN of the first NVE |
| NVE_PRIVATE_IP | the private IP address of the first NVE instance |
| NVE_PUBLIC_FQDN | we will use the private IP as FQDN if no public one is registered, so API calls can work |
| NVE_PUBLIC_IP | we will use the private IP as FQDN if no public one is registered, so API calls can work |
| NVE_SSH_PRIVATE_KEY | the SSH private key for the NVE instance |
| NVE_SSH_PUBLIC_KEY | the SSH public key for the NVE instance |
| PPDM_FQDN | we will use the private IP as FQDN if no public one is registered, so API calls can work |
| PPDM_HOSTNAME | the private IP address of the first PPDM instance |
| PPDM_PRIVATE_IP | the private IP address of the first PPDM instance |
| PPDM_PUBLIC_IP_ADDRESS | n/a |
| PPDM_SSH_PRIVATE_KEY | n/a |
| PPDM_SSH_PUBLIC_KEY | n/a |
| RESOURCE_GROUP | n/a |
| aks_cluster_name | all Kubernetes cluster names |
| aks_kube_api | all API servers |
| aks_kube_config | all kubeconfigs |
| ave_private_fqdn | the private FQDNs of the AVEs |
| ave_private_ip | the private IP addresses of the AVE instances |
| ave_public_fqdn | the private FQDNs of the AVEs |
| ave_ssh_private_key | the SSH private keys for the AVE instances |
| ave_ssh_public_key | the SSH public keys for the AVE instances |
| crs_vpn_public_ip | the IP of the VPN VNet gateway |
| ddve_atos_container | n/a |
| ddve_atos_storageaccount | n/a |
| ddve_private_ip | the private IP addresses of the DDVE instances |
| ddve_ssh_private_key | the SSH private keys for the DDVE instances |
| ddve_ssh_public_key | the SSH public keys for the DDVE instances |
| k8s_fqdn | the FQDNs of all AKS clusters |
| nve_private_fqdn | the private FQDNs of the NVEs |
| nve_private_ip | the private IP addresses of the NVE instances |
| nve_public_fqdn | the private FQDNs of the NVEs |
| nve_ssh_private_key | the SSH private keys for the NVE instances |
| nve_ssh_public_key | the SSH public keys for the NVE instances |
| ppdm_fqdn | n/a |
| ppdm_hostname | the private IP address of the first PPDM instance |
| ppdm_initial_password | `output "PPDM_PRIVATE_FQDN" { sensitive = false value = var.ppdm_count > 0 ? module.ppdm[0].private_fqdn : null }` |
| ppdm_private_ip | the private IP addresses of all PPDM instances |
| ppdm_public_ip_address | n/a |
| ppdm_ssh_private_key | n/a |
| ppdm_ssh_public_key | n/a |
| vpn_public_ip | the IP of the VPN VNet gateway |

usage

```bash
cd terraforming-dps/terraforming-azure
```

create a terraform.tfvars file or terraform.tfvars.json file
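
As an illustration, here is a minimal terraform.tfvars.json sketch (the values are hypothetical; the required inputs `environment`, `common_resource_group_name`, and `dns_suffix`, and the other variable names, come from the Inputs table above). The per-module counts such as `ddve_count`, `ppdm_count`, and `nve_count` described below go into the same file:

```bash
# Hypothetical starter terraform.tfvars.json; adjust all values to your environment.
cat > terraform.tfvars.json <<'EOF'
{
  "environment": "tfdemo",
  "common_resource_group_name": "tfdemo-rg",
  "create_common_rg": true,
  "create_networks": true,
  "dns_suffix": "tfdemo.local"
}
EOF

# Standard Terraform workflow: initialize providers and modules before planning.
terraform init
```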

module_ddve

set `ddve_count` to the desired number in your tfvars:

```json
"ddve_count":1,
```

review the deployment

```bash
terraform plan
```

when everything meets your requirements, run the deployment with

```bash
terraform apply --auto-approve
```

configure using ansible

export outputs from terraform into environment variables:

```bash
export DDVE_PUBLIC_FQDN=$(terraform output -json DDVE_PRIVATE_IP  | jq -r  '.[0]')
export DDVE_USERNAME=sysadmin
export DDVE_INITIAL_PASSWORD=changeme
export DDVE_PASSWORD=Change_Me12345_
export PPDD_PASSPHRASE=Change_Me12345_!
export DDVE_PRIVATE_FQDN=$(terraform output -json DDVE_PRIVATE_IP | jq -r  '.[0]')
export PPDD_TIMEZONE="Europe/Berlin"
export DDVE_ATOS_STORAGEACCOUNT=$(terraform output -json DDVE_ATOS_STORAGE_ACCOUNT  | jq -r  '.[0]')
export DDVE_ATOS_CONTAINER=$(terraform output -json DDVE_ATOS_CONTAINER  | jq -r  '.[0]')
```
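
Optionally, verify that the exported values resolved before running the playbooks (a quick sanity check):

```bash
# Show the exported DDVE/PPDD variables.
env | grep -E '^(DDVE|PPDD)_'
```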

set the Initial DataDomain Password

```bash
ansible-playbook ~/workspace/ansible_ppdd/1.0-Playbook-configure-initial-password.yml
```

If you have a valid DD license, set the variable PPDD_LICENSE, for example:

```bash
export PPDD_LICENSE=$(cat ~/workspace/internal.lic)
ansible-playbook ~/workspace/ansible_ppdd/3.0-Playbook-set-dd-license.yml
```

next, we set the passphrase (the exported PPDD_PASSPHRASE is required for ATOS); then we set the timezone and point NTP to the Azure link-local NTP server

```bash
ansible-playbook ~/workspace/ansible_ppdd/2.1-Playbook-configure-ddpassphrase.yml
ansible-playbook ~/workspace/ansible_ppdd/2.1.1-Playbook-set-dd-timezone-and-ntp-azure.yml
```

review the container name and storage account:

```bash
echo $DDVE_ATOS_CONTAINER
echo $DDVE_ATOS_STORAGEACCOUNT
```

Wait for the Filesystem

```bash
ansible-playbook ~/workspace/ansible_ppdd/2.2-Playbook-wait-dd-filesystems.yml
```

Although there is an ansible-playbook ~/workspace/ansible_ppdd/2.2-Playbook-configure-dd-atos-aws.yml, we cannot use it, as the REST API call to create the Active Tier on Object is not yet available for Azure. Therefore, use the UI wizard.

Add the Metadata Disks, then Finish the wizard.

Once the filesystem is enabled, we go ahead and enable the Boost protocol. (The runbook below will create the filesystem on ATOS once the API is ready, and also enable the Boost protocol.)

```bash
ansible-playbook ~/workspace/ansible_ppdd/2.2-Playbook-configure-dd-atos-azure.yml
```

for an ssh connection to the ddve, use:

```bash
export DDVE_PRIVATE_FQDN=$(terraform output -raw ddve_private_ip)
terraform output ddve_ssh_private_key > ~/.ssh/ddve_key
chmod 0600 ~/.ssh/ddve_key
ssh -i ~/.ssh/ddve_key sysadmin@${DDVE_PRIVATE_FQDN}
```

module_ppdm

set `ppdm_count` to the desired number in your tfvars:

```json
"ppdm_count":1,
```

review the deployment

```bash
terraform plan
```

when everything meets your requirements, run the deployment with

```bash
terraform apply --auto-approve
```

Configure PPDM

Similar to the DDVE configuration, we set environment variables for Ansible to automatically configure PPDM

```bash
# Refresh your environment variables if doing this in multiple steps!
eval "$(terraform output --json | jq -r 'with_entries(select(.key|test("^PP+"))) | keys[] as $key | "export \($key)=\"\(.[$key].value)\""')"
export PPDM_INITIAL_PASSWORD=Change_Me12345_
export PPDM_NTP_SERVERS='["time.windows.com"]'
export PPDM_SETUP_PASSWORD=admin          # default password on the Cloud PPDM REST API
export PPDM_TIMEZONE="Europe/Berlin"
export PPDM_POLICY=PPDM_GOLD
```

Set the initial Configuration

the playbook will wait for PPDM to be ready for configuration and then starts the configuration process

```bash
ansible-playbook ~/workspace/ansible_ppdm/1.0-playbook_configure_ppdm.yml
```


and will wait for the configuration to succeed.

verify the config:

```bash
ansible-playbook ~/workspace/ansible_ppdm/1.1-playbook_get_ppdm_config.yml
```


we add the DataDomain:

```bash
ansible-playbook ~/workspace/ansible_ppdm/2.0-playbook_set_ddve.yml
```


we can get the SDR config from PPDM after the Data Domain Boost auto-configuration for the primary source:

```bash
ansible-playbook ~/workspace/ansible_ppdm/3.0-playbook_get_sdr.yml
```


and check the status of the server disaster recovery jobs:

```bash
ansible-playbook ~/workspace/ansible_ppdm/31.1-playbook_get_activities.yml --extra-vars "filter='category eq \"DISASTER_RECOVERY\"'"
```


module_nve

set `nve_count` to the desired number in your tfvars:

```json
"nve_count":1,
```

review the deployment

```bash
terraform plan
```

when everything meets your requirements, run the deployment with

```bash
terraform apply --auto-approve
```

Configure NVE

Similar to the DDVE configuration, we set environment variables for Ansible to automatically configure the NVE

```bash
# Refresh your environment variables if doing this in multiple steps!
eval "$(terraform output --json | jq -r 'with_entries(select(.key|test("^NV+"))) | keys[] as $key | "export \($key)=\"\(.[$key].value)\""')"
export NVE_FQDN=$(terraform output -raw  NVE_PRIVATE_IP)
export NVE_TIMEZONE="Europe/Berlin"
export NVE_PASSPHRASE=ChangeMe12345
```

Set the initial Configuration

the playbook will wait for the NVE to be ready for configuration and then starts the configuration process via the AVI endpoint

```bash
ansible-playbook ~/workspace/ansible_avi/01-playbook-configure-nve.yaml
```

Configure a 2nd NVE as a Storage Node

increase `nve_count` to 2 in your tfvars, redeploy, and then set the environment variables for Ansible to configure the second NVE:

```json
"nve_count":2,
```

review the deployment

```bash
terraform plan
```

when everything meets your requirements, run the deployment with

```bash
terraform apply --auto-approve
# Refresh your environment variables if doing this in multiple steps!
eval "$(terraform output --json | jq -r 'with_entries(select(.key|test("^NV+"))) | keys[] as $key | "export \($key)=\"\(.[$key].value)\""')"
export NVE_FQDN=$(terraform output -json nve_private_ip | jq -r  '.[1]')
export NVE_TIMEZONE="Europe/Berlin"
export NVE_PASSPHRASE=ChangeMe12345
export NVE_PRIVATE_IP=$(terraform output -json nve_private_ip | jq -r  '.[1]' )
ansible-playbook ~/workspace/ansible_avi/01-playbook-configure-nve.yaml --extra-vars="nve_as_storage_node=true"
```

for an ssh connection to the NVE, use:

```bash
export NVE_FQDN=$(terraform output -json nve_private_ip | jq -r  '.[1]' )
export NVE_PRIVATE_IP=$(terraform output -json nve_private_ip | jq -r  '.[1]' )
terraform output -json nve_ssh_private_key | jq -r  '.[1]' > ~/.ssh/nve_key
chmod 0600 ~/.ssh/nve_key
ssh -i ~/.ssh/nve_key admin@${NVE_PRIVATE_IP}
```

Appendix

Deploying multiple Systems

When deploying multiple DD systems, the information required by ansible is served from a JSON array. The example below shows how to configure the second DD ([1] represents the second entry in the array):

configure using ansible

export outputs from terraform into environment variables:

```bash
export DDVE_PUBLIC_FQDN=$(terraform output -json DDVE_PRIVATE_IP  | jq -r  '.[1]')
export DDVE_USERNAME=sysadmin
export DDVE_INITIAL_PASSWORD=changeme
export DDVE_PASSWORD=Change_Me12345_
export PPDD_PASSPHRASE=Change_Me12345_!
export DDVE_PRIVATE_FQDN=$(terraform output -json DDVE_PRIVATE_IP | jq -r  '.[1]')
export PPDD_TIMEZONE="Europe/Berlin"
export DDVE_ATOS_STORAGEACCOUNT=$(terraform output -json DDVE_ATOS_STORAGE_ACCOUNT  | jq -r  '.[1]')
export DDVE_ATOS_CONTAINER=$(terraform output -json DDVE_ATOS_CONTAINER  | jq -r  '.[1]')
```
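
To see which index belongs to which instance, the output arrays can be listed together with their indices (a sketch, assuming jq is installed):

```bash
# List all deployed DDVE private IPs together with their array index.
terraform output -json DDVE_PRIVATE_IP | jq -r 'to_entries[] | "\(.key): \(.value)"'
```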