Demonstration of various Azure Red Hat OpenShift features and basic steps to create and configure a cluster. Always refer to the official docs for the latest guidance, as things may have changed since this was last updated.
Note:
- Red Hat's Managed OpenShift Black Belt Team also has great documentation on configuring ARO, so check that out (it's more up-to-date than this repo!).
- Prerequisites
- VNET setup
- Create a default cluster
- Create a private cluster
- Configure Custom Domain and TLS
- Configure bastion host access
- Use an App Gateway
- Configure Identity Providers
- Setup user roles
- Setup in-cluster logging - Elasticsearch and Kibana
- Setup egress firewall - Azure Firewall
- Onboard to Azure Monitor
- Deploy a demo app
- Automation with Bicep (ARM DSL)
- Install the latest Azure CLI
- Log in to your Azure subscription from a console window:
az login
# Follow SSO prompts to authenticate
az account list -o table
az account set -s <subscription_id>
- Register the `Microsoft.RedHatOpenShift` resource provider to be able to create ARO clusters (only required once per Azure subscription):
az provider register -n Microsoft.RedHatOpenShift --wait
az provider show -n Microsoft.RedHatOpenShift -o table
- Install the OpenShift CLI for managing the cluster
wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-linux.tar.gz
tar -zxvf openshift-client-linux.tar.gz oc
sudo mv oc /usr/local/bin/
oc version
- (Optional) Install Helm v3 if you want to integrate with Azure Monitor (a short install sketch follows this list)
- (Optional) Install the `htpasswd` utility if you want to try HTPasswd as an OCP Identity Provider:
# Ubuntu
sudo apt install apache2-utils -y
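For the optional Helm v3 prerequisite above, a minimal install sketch using the official get-helm-3 script (assuming a Linux shell with curl available):
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod +x get_helm.sh
./get_helm.sh
helm version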
cp aro4-env.sh.template aro4-env.sh
# Edit aro4-env.sh to suit your environment
The VNET and subnet sizes here are for illustrative purposes only. You need to design the network according to your scale needs and existing networks (to avoid overlaps).
# Source variables into your shell environment
source ./aro4-env.sh
# Create resource group to hold cluster resources
az group create -g $RESOURCEGROUP -l $LOCATION
# Create the ARO virtual network
az network vnet create \
--resource-group $RESOURCEGROUP \
--name $VNET \
--address-prefixes 10.0.0.0/22
# Add two empty subnets to your virtual network (master subnet and worker subnet)
az network vnet subnet create \
--resource-group $RESOURCEGROUP \
--vnet-name $VNET \
--name master-subnet \
--address-prefixes 10.0.2.0/24 \
--service-endpoints Microsoft.ContainerRegistry
az network vnet subnet create \
--resource-group $RESOURCEGROUP \
--vnet-name $VNET \
--name worker-subnet \
--address-prefixes 10.0.3.0/24 \
--service-endpoints Microsoft.ContainerRegistry
# Disable network policies for Private Link Service on your virtual network and subnets.
# This is a requirement for the ARO service to access and manage the cluster.
az network vnet subnet update \
--name master-subnet \
--resource-group $RESOURCEGROUP \
--vnet-name $VNET \
--disable-private-link-service-network-policies true
See the official instructions.
It normally takes about 35 minutes to create a cluster.
# Create the ARO cluster
az aro create \
--resource-group $RESOURCEGROUP \
--name $CLUSTER \
--vnet $VNET \
--master-subnet master-subnet \
--worker-subnet worker-subnet \
--pull-secret @pull-secret.txt \
--domain $DOMAIN
# pull-secret: OPTIONAL, but recommended
# domain: OPTIONAL custom domain for ARO (set in aro4-env.sh)
If you have created a cluster with a public ingress (default) you can change that to private later or add a second ingress to handle private traffic whilst still serving public traffic.
- TODO
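One option for the second-ingress case is an additional internal IngressController. This is a rough sketch only - the controller name, domain and replica count below are illustrative, so verify against the current OpenShift/ARO docs:
# Create a second IngressController published through an internal Azure load balancer
cat > private-ingress.yaml << EOF
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: private
  namespace: openshift-ingress-operator
spec:
  domain: private-apps.$DOMAIN
  replicas: 2
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: Internal
EOF
oc apply -f private-ingress.yaml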
See the official instructions.
It normally takes about 35 minutes to create a cluster.
# Create the ARO cluster
az aro create \
--resource-group $RESOURCEGROUP \
--name $CLUSTER \
--vnet $VNET \
--master-subnet master-subnet \
--worker-subnet worker-subnet \
--apiserver-visibility Private \
--ingress-visibility Private \
--pull-secret @pull-secret.txt \
--domain $DOMAIN
# pull-secret: OPTIONAL, but recommended
# domain: OPTIONAL custom domain for ARO (set in aro4-env.sh)
If you used the `--domain` flag with an FQDN (e.g. `my.domain.com`) to create your cluster, you'll need to configure DNS and a certificate authority for your API server and apps ingress.
If you used a short name (e.g. `mycluster`) with the `--domain` flag, you don't need to set up a custom domain or configure DNS/certs; in that case you're assigned an FQDN ending in `aroapp.io`, like so:
https://console-openshift-console.apps.<shortname>.<region>.aroapp.io/
If you set an FQDN custom domain, proceed to configure the DNS and TLS/certificate settings.
If needed, follow the steps in TLS.md.
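For the DNS piece, a rough sketch assuming the domain is hosted in an Azure DNS zone (`$DNS_RESOURCEGROUP` is a placeholder for the resource group that holds the zone):
# Look up the IPs ARO assigned to the API server and the default ingress
API_IP=$(az aro show -g $RESOURCEGROUP -n $CLUSTER --query apiserverProfile.ip -o tsv)
INGRESS_IP=$(az aro show -g $RESOURCEGROUP -n $CLUSTER --query 'ingressProfiles[0].ip' -o tsv)
# Create the api and *.apps A records in the zone for $DOMAIN
az network dns record-set a add-record -g $DNS_RESOURCEGROUP -z $DOMAIN -n api -a $API_IP
az network dns record-set a add-record -g $DNS_RESOURCEGROUP -z $DOMAIN -n '*.apps' -a $INGRESS_IP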
To connect to a private Azure Red Hat OpenShift cluster, you need to run CLI commands from a host that is either in the Virtual Network you created or in a Virtual Network peered with the one the cluster was deployed to - for example, an on-prem host connected over ExpressRoute.
az network vnet create -g $RESOURCEGROUP -n utils-vnet --address-prefix 10.0.4.0/22 --subnet-name AzureBastionSubnet --subnet-prefix 10.0.4.0/27
az network public-ip create -g $RESOURCEGROUP -n bastion-ip --sku Standard
az network bastion create --name bastion-service --public-ip-address bastion-ip --resource-group $RESOURCEGROUP --vnet-name $UTILS_VNET --location $LOCATION
See how to peer VNETs from CLI: https://docs.microsoft.com/en-us/azure/virtual-network/tutorial-connect-virtual-networks-cli#peer-virtual-networks
# Get the resource ID of the ARO VNET.
vNet1Id=$(az network vnet show \
--resource-group $RESOURCEGROUP \
--name $VNET \
--query id --out tsv)
# Get the resource ID of the utils VNET.
vNet2Id=$(az network vnet show \
--resource-group $RESOURCEGROUP \
--name $UTILS_VNET \
--query id \
--out tsv)
az network vnet peering create \
--name aro-utils-peering \
--resource-group $RESOURCEGROUP \
--vnet-name $VNET \
--remote-vnet $vNet2Id \
--allow-vnet-access
az network vnet peering create \
--name utils-aro-peering \
--resource-group $RESOURCEGROUP \
--vnet-name $UTILS_VNET \
--remote-vnet $vNet1Id \
--allow-vnet-access
az network vnet subnet create \
--resource-group $RESOURCEGROUP \
--vnet-name $UTILS_VNET \
--name utils-hosts \
--address-prefixes 10.0.5.0/24 \
--service-endpoints Microsoft.ContainerRegistry
STORAGE_ACCOUNT="jumpboxdiag$(openssl rand -hex 5)"
az storage account create -n $STORAGE_ACCOUNT -g $RESOURCEGROUP -l $LOCATION --sku Standard_LRS
winpass=$(openssl rand -base64 12)
echo $winpass > winpass.txt
az vm create \
--resource-group $RESOURCEGROUP \
--name jumpbox \
--image MicrosoftWindowsServer:WindowsServer:2022-Datacenter:latest \
--vnet-name $UTILS_VNET \
--subnet utils-hosts \
--public-ip-address "" \
--admin-username azureuser \
--admin-password $winpass \
--authentication-type password \
--boot-diagnostics-storage $STORAGE_ACCOUNT \
--generate-ssh-keys
az vm open-port --port 3389 --resource-group $RESOURCEGROUP --name jumpbox
Recommended: Enable update management or automatic guest OS patching.
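For example, automatic guest OS patching can be enabled on the jumpbox roughly like this (a sketch; check the current Azure docs for prerequisites such as the VM agent):
az vm update -g $RESOURCEGROUP -n jumpbox --set \
  osProfile.windowsConfiguration.enableAutomaticUpdates=true \
  osProfile.windowsConfiguration.patchSettings.patchMode=AutomaticByPlatform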
Connect to the `jumpbox` host using the Bastion connection type and enter the username (`azureuser`) and password (use the value of `$winpass` set above, or view the file `winpass.txt`).
Install the Microsoft Edge browser (if you used the Windows Server 2022 image for your VM then you can skip this step):
- Open a Powershell prompt
$Url = "http://dl.delivery.mp.microsoft.com/filestreamingservice/files/c39f1d27-cd11-495a-b638-eac3775b469d/MicrosoftEdgeEnterpriseX64.msi"
Invoke-WebRequest -UseBasicParsing -Uri $Url -OutFile "\MicrosoftEdgeEnterpriseX64.msi"
Start-Process msiexec.exe -Wait -ArgumentList '/I \MicrosoftEdgeEnterpriseX64.msi /norestart /qn'
Or you can Download and deploy Microsoft Edge for business.
Install utilities:
- Install the Azure CLI
- Install Git For Windows so you have access to a Bash shell
- Log in to your Azure subscription from a console window:
az login
# Follow SSO prompts (or create a Service Principal and login with that)
az account list -o table
az account set -s <subscription_id>
- Install the OpenShift CLI for managing the cluster (example steps in the sketch after this list)
- (Optional) Install Helm v3 if you want to integrate with Azure Monitor
Given this is a Windows jumpbox, you may need to install a Bash shell like Git Bash.
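For the OpenShift CLI item above, a rough sketch from a Git Bash prompt on the jumpbox (the URL assumes the public OpenShift client mirror; use any zip tool or PowerShell's Expand-Archive to extract):
curl -LO https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-windows.zip
# Extract oc.exe from the zip and put it somewhere on your PATH, then check it
oc version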
This approach does not use the AppGw Ingress Controller; instead it deploys an App Gateway WAFv2 in front of the ARO cluster and load-balances traffic to the exposed ARO Routes for services. This method can be used to selectively expose private routes for public access rather than exposing the routes directly.
az network vnet subnet create \
--resource-group $RESOURCEGROUP \
--vnet-name utils-vnet \
--name myAGSubnet \
--address-prefixes 10.0.6.0/24 \
--service-endpoints Microsoft.ContainerRegistry
az network public-ip create \
--resource-group $RESOURCEGROUP \
--name myAGPublicIPAddress \
--allocation-method Static \
--sku Standard
If your ARO cluster is using Private ingress, you'll need to peer the AppGw VNET and the ARO VNET (if you haven't already done so).
az network application-gateway create \
--name myAppGateway \
--location $LOCATION \
--resource-group $RESOURCEGROUP \
--capacity 1 \
--sku WAF_v2 \
--http-settings-cookie-based-affinity Disabled \
--public-ip-address myAGPublicIPAddress \
--vnet-name utils-vnet \
--subnet myAGSubnet
Create or procure your App Gateway frontend PKCS #12 (*.PFX file) certificate chain (e.g. see below for manually, using Let's Encrypt):
# Specify the frontend domain for App Gw (must be different to the internal ARO domain, i.e. not *.apps.<domain>, but you can use *.<domain>)
APPGW_DOMAIN=$DOMAIN
./acme.sh --issue --dns -d "*.$APPGW_DOMAIN" --yes-I-know-dns-manual-mode-enough-go-ahead-please --fullchain-file fullchain.cer --cert-file file.crt --key-file file.key
# Add the TXT entry for _acme-challenge to the $DOMAIN record set, then...
./acme.sh --renew --dns -d "*.$APPGW_DOMAIN" --yes-I-know-dns-manual-mode-enough-go-ahead-please --fullchain-file fullchain.cer --cert-file file.crt --key-file file.key
cd ~/.acme.sh/\*.$APPGW_DOMAIN/
cat fullchain.cer \*.$APPGW_DOMAIN.key > gw-bundle.pem
openssl pkcs12 -export -out gw-bundle.pfx -in gw-bundle.pem
TODO: The following steps require Azure Portal access until I get around to writing the CLI/PowerShell steps.
Define Azure DNS entries for the App Gateway frontend IP:
- Create a `*` A record with the public IP address of your App Gateway in your APPGW_DOMAIN domain (or better yet, create an alias record pointing to the public IP resource)
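A sketch of the alias-record option, assuming the APPGW_DOMAIN zone also lives in Azure DNS (`$DNS_RESOURCEGROUP` is a placeholder for the zone's resource group):
PIP_ID=$(az network public-ip show -g $RESOURCEGROUP -n myAGPublicIPAddress --query id -o tsv)
az network dns record-set a create -g $DNS_RESOURCEGROUP -z $APPGW_DOMAIN -n '*' --target-resource $PIP_ID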
In the Listeners section, create a new HTTPS listener:
- Listener name: aro-route-https-listener
- Frontend IP: Public
- Port: 443
- Protocol: HTTPS
- Http Settings - choose to Upload a Certificate (upload the PFX file from earlier)
  - Cert Name: gw-bundle
  - PFX certificate file: gw-bundle.pfx
  - Password: ****** (what you used when creating the PFX file)
- Additional settings - Multi site: enter your site host names, comma separated (note: wildcard hostnames are not yet supported)
  - e.g. rating-web.<domain>
  - Note: You can also create multiple listeners - one per site - re-using the certificate and selecting Basic site
- Define Backend pools to point to the exposed ARO routes x n (one per web site/API)
- Define backend HTTP Settings (HTTPS, 443, Trusted CA) x 1
In the Backend pools section, create a new backend pool:
- Name: aro-routes
- Backend Targets: Enter the FQDN(s), e.g. rating-web-workshop.apps.<domain>
- Click Add
In the HTTP settings section, create a new HTTP setting:
- HTTP settings name: aro-route-https-settings
- Backend protocol: HTTPS
- Backend port: 443
- Use well-known CA certificate: Yes (if you used one; otherwise upload your CA .cer file)
- Override with new host name: Yes
- Choose: Pick host name from backend target
In the Rules section, define rules x n (one per website/api):
- Name: e.g. rating-web-rule
- Select the https listener above
- Enter backend target details - select the target and HTTP settings created above
- Click 'Add'
TODO: Define Health probes
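As a rough CLI alternative to the portal steps above (resource names match the examples used; flag availability - e.g. --priority on rules - varies across az CLI versions, so treat this as a sketch):
# Upload the PFX created earlier and add a 443 frontend port
az network application-gateway ssl-cert create -g $RESOURCEGROUP --gateway-name myAppGateway \
  -n gw-bundle --cert-file gw-bundle.pfx --cert-password '<pfx-password>'
az network application-gateway frontend-port create -g $RESOURCEGROUP --gateway-name myAppGateway \
  -n port443 --port 443
# Multi-site HTTPS listener for one hostname (repeat per site)
az network application-gateway http-listener create -g $RESOURCEGROUP --gateway-name myAppGateway \
  -n aro-route-https-listener --frontend-port port443 --ssl-cert gw-bundle \
  --host-name rating-web.$APPGW_DOMAIN
# Backend pool pointing at the exposed ARO route
az network application-gateway address-pool create -g $RESOURCEGROUP --gateway-name myAppGateway \
  -n aro-routes --servers rating-web-workshop.apps.$DOMAIN
# HTTPS backend settings, picking the host name from the backend target
az network application-gateway http-settings create -g $RESOURCEGROUP --gateway-name myAppGateway \
  -n aro-route-https-settings --port 443 --protocol Https --host-name-from-backend-pool true
# Rule tying the listener, pool and settings together
az network application-gateway rule create -g $RESOURCEGROUP --gateway-name myAppGateway \
  -n rating-web-rule --rule-type Basic --priority 100 --http-listener aro-route-https-listener \
  --address-pool aro-routes --http-settings aro-route-https-settings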
Access the website/API via App Gateway: e.g. https://rating-web.<domain>/
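A quick smoke test from any machine that can resolve the new record (the hostname is the example used above):
curl -sSI https://rating-web.$APPGW_DOMAIN/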
See: automation section.
# Get Console URL from command output
az aro list -o table
webConsole=$(az aro show -g $RESOURCEGROUP -n $CLUSTER --query consoleProfile.url -o tsv | tr -d '[:space:]')
echo $webConsole
# ==> https://console-openshift-console.apps.<aro-domain>
# Get kubeadmin username and password
az aro list-credentials -g $RESOURCEGROUP -n $CLUSTER
API_URL=$(az aro show -g $RESOURCEGROUP -n $CLUSTER --query apiserverProfile.url -o tsv)
KUBEADMIN_PASSWD=$(az aro list-credentials -g $RESOURCEGROUP -n $CLUSTER | jq -r .kubeadminPassword)
oc login -u kubeadmin -p $KUBEADMIN_PASSWD --server=$API_URL
oc status
Add one or more identity providers to allow other users to log in. `kubeadmin` is intended as a temporary login to set up the cluster.
Configure HTPasswd identity provider.
htpasswd -c -B -b aro-user.htpasswd <user1> <somepassword1>
htpasswd -b $(pwd)/aro-user.htpasswd <user2> <somepassword2>
htpasswd -b $(pwd)/aro-user.htpasswd <user3> <somepassword3>
oc create secret generic htpass-secret --from-file=htpasswd=./aro-user.htpasswd -n openshift-config
oc apply -f htpasswd-cr.yaml
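For reference, a minimal HTPasswd OAuth resource looks roughly like the following (the provider name and file name are illustrative, but `htpass-secret` must match the secret created above; the htpasswd-cr.yaml in this repo may differ slightly):
cat > htpasswd-cr-example.yaml << EOF
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: htpasswd_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret
EOF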
See the official CLI steps to configure Azure AD, or follow the steps below.
Configure OAuth callback URL:
domain=$(az aro show -g $RESOURCEGROUP -n $CLUSTER --query clusterProfile.domain -o tsv | tr -d '[:space:]')
location=$(az aro show -g $RESOURCEGROUP -n $CLUSTER --query location -o tsv | tr -d '[:space:]')
apiServer=$(az aro show -g $RESOURCEGROUP -n $CLUSTER --query apiserverProfile.url -o tsv | tr -d '[:space:]')
webConsole=$(az aro show -g $RESOURCEGROUP -n $CLUSTER --query consoleProfile.url -o tsv | tr -d '[:space:]')
# If using default domain
oauthCallbackURL=https://oauth-openshift.apps.$domain.$location.aroapp.io/oauth2callback/AAD
# If using custom domain
oauthCallbackURL=https://oauth-openshift.apps.$DOMAIN/oauth2callback/AAD
Create an Azure Active Directory application:
clientSecret=$(openssl rand -base64 16)
echo $clientSecret > clientSecret.txt
appDisplayName="aro-auth-$(openssl rand -hex 4)"
appId=$(az ad app create \
--query appId -o tsv \
--display-name $appDisplayName \
--reply-urls $oauthCallbackURL \
--password $clientSecret)
tenantId=$(az account show --query tenantId -o tsv | tr -d '[:space:]')
Create manifest file for optional claims to include in the ID Token:
cat > manifest.json << EOF
[
  {
    "name": "upn",
    "source": null,
    "essential": false,
    "additionalProperties": []
  },
  {
    "name": "email",
    "source": null,
    "essential": false,
    "additionalProperties": []
  }
]
EOF
Update AAD application's optionalClaims with a manifest:
az ad app update \
--set optionalClaims.idToken=@manifest.json \
--id $appId
Update AAD application scope permissions:
# Azure Active Directory Graph.User.Read = 311a71cc-e848-46a1-bdf8-97ff7156d8e6
az ad app permission add \
--api 00000002-0000-0000-c000-000000000000 \
--api-permissions 311a71cc-e848-46a1-bdf8-97ff7156d8e6=Scope \
--id $appId
Log in to the `oc` CLI as `kubeadmin`.
Create a secret to store AAD application secret:
oc create secret generic openid-client-secret-azuread \
--namespace openshift-config \
--from-literal=clientSecret=$clientSecret
Create OIDC configuration file for AAD:
cat > oidc.yaml << EOF
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: AAD
    mappingMethod: claim
    type: OpenID
    openID:
      clientID: $appId
      clientSecret:
        name: openid-client-secret-azuread
      extraScopes:
      - email
      - profile
      extraAuthorizeParameters:
        include_granted_scopes: "true"
      claims:
        preferredUsername:
        - email
        - upn
        name:
        - name
        email:
        - email
      issuer: https://login.microsoftonline.com/$tenantId
EOF
Apply the configuration to the cluster:
oc apply -f oidc.yaml
Verify login to ARO console using AAD.
See other supported identity providers.
You can assign various roles or cluster roles to users.
oc adm policy add-cluster-role-to-user <role> <username>
You'll want to have at least one cluster-admin (similar to the `kubeadmin` user):
oc adm policy add-cluster-role-to-user cluster-admin <username>
If you get sign-in errors, you may need to delete users and/or identities:
oc get user
oc delete user <user>
oc get identity
oc delete identity <name>
See: https://docs.openshift.com/aro/4/authentication/remove-kubeadmin.html
Ensure you have at least one other cluster-admin and sign in as that user; then you can remove the `kubeadmin` user:
oc delete secrets kubeadmin -n kube-system
See logging/
See firewall/
Refer to the ARO Monitoring README in this repo.
Follow the Demo steps to deploy a sample microservices app.
See Bicep automation example.
Disable monitoring (if enabled):
helm del azmon-containers-release-1
or if using Arc-enabled monitoring, follow these cleanup steps.
az aro delete -g $RESOURCEGROUP -n $CLUSTER
# (optional)
az network vnet subnet delete -g $RESOURCEGROUP --vnet-name $VNET -n master-subnet
az network vnet subnet delete -g $RESOURCEGROUP --vnet-name $VNET -n worker-subnet
(optional) Delete Azure AD application (if using Azure AD for Auth)
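For example (assuming `$appId` from the identity provider setup above is still set in your shell):
az ad app delete --id $appId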
./cleanup-failed-clusters.sh