This environment has been created for the sole purpose of providing an easy-to-deploy and easy-to-consume Red Hat OpenShift Container Platform 4 environment as a sandpit.
This install will create a “Minimal Viable Setup”, which anyone can extend to their needs and purposes.
Recent tests show that SSD storage on the server might be required for any persistent deployment to work correctly.
Use it at your own pleasure and risk!
If you want to provide additional features, please feel free to contribute via pull requests or any other means.
We are happy to track and discuss ideas, topics and requests via Issues.
Our instructions are based on the CentOS Root Server as provided by Hetzner; please feel free to adapt them to the needs of your preferred hosting provider. We are happy to receive pull requests that update the documentation so this setup is also easy to consume on other hosting providers.
These instructions are for CentOS 'root' machines set up following the Hetzner CentOS documentation. You might have to modify commands if running on another Linux distro. Feel free to provide instructions for alternative providers.
NOTE: If you are running on an environment other than a Hetzner bare-metal server, check whether there are specific instructions in the Infra providers list and then jump to the section Initialize tools.
Supported root server operating systems:
- RHEL 8 - How to install RHEL8: https://keithtenzer.com/cloud/how-to-create-a-rhel-8-image-for-hetzner-root-servers/
- RHEL 9 - leapp update from RHEL 8
- RHEL 9 (How to install RHEL9)
- CentOS Stream 9 base
- Rocky Linux 9.1 base
- Debian 11
When following the steps below, you will end up with a setup similar to this:
Important: the Hetzner Firewall only supports IPv4 - IPv6 must be handled by the host firewall (firewalld)!
Here is an example Hetzner Firewall configuration:
Name | Source IP | Destination IP | Source port | Destination port | Protocol | TCP flags | Action |
---|---|---|---|---|---|---|---|
ssh | | | | 22 | tcp | | accept |
api+ingress | | | | 80,443,6443 | tcp | | accept |
icmp | | | | | icmp | | accept |
outgoing connections | | | | 32768-65535 | tcp | ack | accept |
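Since the Hetzner Firewall cannot filter IPv6, you can mirror the same rules on the host with firewalld. This is a minimal sketch assuming the default firewalld zone; adjust the ports to match your setup:

```bash
# Allow SSH plus the OpenShift API and ingress ports (applies to IPv4 and IPv6)
firewall-cmd --permanent --add-service=ssh
firewall-cmd --permanent --add-port=80/tcp --add-port=443/tcp --add-port=6443/tcp
# Activate the new rules
firewall-cmd --reload
```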
Subscribe your RHEL host:
subscription-manager register
# get pool id via:
# subscription-manager list --available
subscription-manager attach --pool=...   # or let it auto-attach: subscription-manager attach --auto
subscription-manager repos --disable="*"
# On RHEL 8:
subscription-manager repos \
--enable=rhel-8-for-x86_64-baseos-rpms \
--enable=rhel-8-for-x86_64-appstream-rpms \
--enable=rhel-8-for-x86_64-highavailability-rpms \
--enable=ansible-automation-platform-2.3-for-rhel-8-x86_64-rpms
# On RHEL 9:
subscription-manager repos \
--enable=rhel-9-for-x86_64-baseos-rpms \
--enable=rhel-9-for-x86_64-appstream-rpms \
--enable=rhel-9-for-x86_64-highavailability-rpms \
--enable=ansible-automation-platform-2.3-for-rhel-9-x86_64-rpms
# On RHEL, install the tooling from the Ansible Automation Platform repository enabled above:
dnf install -y ansible-navigator git podman
Alternatively, install ansible-navigator from PyPI as described in the upstream documentation, for example on distributions that do not provide the Ansible Automation Platform repository (CentOS Stream, Rocky Linux, Debian):
dnf install -y python3-pip podman git
python3 -m pip install ansible-navigator --user
echo 'export PATH=$HOME/.local/bin:$PATH' >> ~/.profile
source ~/.profile
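To confirm that the user install landed on your PATH, you can print the version:

```bash
ansible-navigator --version
```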
# Create an SSH key and authorize it for root so the playbooks can connect to this host:
ssh-keygen
cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys
You are now ready to clone this project to your host system.
git clone https://github.com/RedHat-EMEA-SSA-Team/hetzner-ocp4.git
We are now ready to install libvirt as our hypervisor, provision VMs and prepare those for OCP.
The cluster to be installed is described in a cluster.yml file. The parameters that can be configured are as follows; a minimal example follows the table.
Variable | Description | Default |
---|---|---|
cluster_name | Name of the cluster to be installed. | Required |
dns_provider | DNS provider; value can be route53, cloudflare, digitalocean, gcp, azure, transip, hetzner, gandi or none. Check Setup public DNS records for more info. | Required |
image_pull_secret | Token used to authenticate to the Red Hat image registry. You can download your pull secret from https://cloud.redhat.com/openshift/install/metal/user-provisioned | Required |
letsencrypt_account_email | Email address used to create Let's Encrypt certificates. If cloudflare_account_email is not present for Cloudflare DNS records, letsencrypt_account_email is also used as the Cloudflare account email. | Required |
public_domain | Root domain that will be used for your cluster. | Required |
ip_families | Decide whether you want IPv4, IPv6 or dual-stack. | ['IPv4'] |
listen_address | Listen address for the load balancer on your host system. | hostvars['localhost']['ansible_default_ipv4']['address'] |
listen_address_ipv6 | Same as listen_address but for IPv6. | hostvars['localhost']['ansible_default_ipv6']['address'] |
public_ip | Optional override of the public IP if it differs from listen_address; used for the DNS records at your dns_provider. | listen_address |
public_ipv6 | Same as public_ip but for IPv6. | listen_address_ipv6 |
masters_schedulable | Optionally allow or disallow scheduling workloads onto the master nodes. | true |
sdn_plugin_name | Optionally change the SDN plugin between OVNKubernetes and OpenShiftSDN. | OVNKubernetes |
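As a rough orientation, a minimal cluster.yml could look like the sketch below. All values are placeholders and only the required parameters from the table are shown; see cluster-example.yml in the repository for a complete reference.

```yaml
# Minimal illustrative cluster.yml (placeholder values)
cluster_name: ocp4
public_domain: example.com
dns_provider: none                       # or route53, cloudflare, gcp, azure, ...
image_pull_secret: '{"auths": {...}}'    # paste your pull secret from cloud.redhat.com
letsencrypt_account_email: admin@example.com
```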
It is possible to install three different cluster designs: single node, compact, or normal.
Single node cluster - recommended cluster.yml settings:
master_count: 1
compute_count: 0
masters_schedulable: true # is default
# It's recommended to increase the master capacity too, for example:
# master_vcpu: 4
# master_memory_size: 16384
# master_memory_unit: 'MiB'
# master_root_disk_size: '120G'
Compact cluster (three schedulable masters, no dedicated workers) - recommended cluster.yml settings:
master_count: 3
compute_count: 0
masters_schedulable: true # is default
Normal cluster (three masters plus dedicated workers) - recommended cluster.yml settings:
master_count: 3
compute_count: 2 # at least 2 recommended
masters_schedulable: false
Read this if you want to deploy pre-releases
The current tooling supports the following DNS providers: AWS Route53, Azure, Cloudflare, DigitalOcean, Gandi, GCP DNS, Hetzner, TransIP, or none. If you want to use one of these as your DNS provider, you have to add a few variables; check the instructions below.
DNS records are constructed from the cluster_name and public_domain values. With the values above, the DNS records will be:
- api.cluster_name.public_domain
- *.apps.cluster_name.public_domain
For example, with cluster_name: ocp4 and public_domain: example.com, the records are api.ocp4.example.com and *.apps.ocp4.example.com.
If you use another DNS provider, feel free to contribute! 😀
With dns_provider: none, the playbooks will not create public DNS entries (Let's Encrypt will be skipped too). Please create the public DNS entries yourself if you want to access your cluster.
Please configure all necessary credentials in cluster.yml:
DNS provider | Variables |
---|---|
Azure | azure_client_id: 'client_id'<br>azure_secret: 'key'<br>azure_subscription_id: 'subscription_id'<br>azure_tenant: 'tenant_id'<br>azure_resource_group: 'dns_zone_resource_group' |
CloudFlare | cloudflare_account_email: john@example.com<br>cloudflare_account_api_token: 9348234sdsd894.....<br>cloudflare_zone: domain.tld<br>Use the global API key here! (API tokens are not supported, details in #86) |
DigitalOcean | digitalocean_token: e7a6f82c3245b65cf4.....<br>digitalocean_zone: domain.tld |
Gandi | gandi_account_api_token: 0123456...<br>gandi_zone: domain.tld |
GCP | gcp_project: project-name<br>gcp_managed_zone_name: 'zone-name'<br>gcp_managed_zone_domain: 'example.com.'<br>gcp_serviceaccount_file: ../gcp_service_account.json |
Hetzner | hetzner_account_api_token: 93543ade82AA$73.....<br>hetzner_zone: domain.tld |
Route53 / AWS | aws_access_key: key<br>aws_secret_key: secret<br>aws_zone: domain.tld |
TransIP | transip_token: eyJ0eXAiOiJKV....<br>transip_zone: domain.tld |
none | With dns_provider: none the playbooks will not create public dns entries. (It will skip letsencrypt too) Please create public dns entries if you want to access your cluster. |
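For example, using Cloudflare, the relevant cluster.yml entries would look roughly like this (placeholder values taken from the table above):

```yaml
dns_provider: cloudflare
cloudflare_account_email: john@example.com
cloudflare_account_api_token: 9348234sdsd894.....   # global API key, not an API token
cloudflare_zone: domain.tld
```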
The following optional variables allow further customization of the cluster:

Variable | Default | Description |
---|---|---|
storage_nfs | false | Set up a local NFS server, create a StorageClass (with nfs-subdir-external-provisioner) pointing to it, and use that StorageClass for the internal registry storage. |
vm_autostart | false | Create the cluster VMs with autostart enabled. |
vm_storage_backend | qcow2 | Storage backend for the VMs; you can choose between the default qcow2 and lvm. |
vm_storage_backend_location | empty | Important for vm_storage_backend: lvm; set the volume group to use, for example vg0. |
auth_redhatsso | empty | Install Red Hat SSO; check out cluster-example.yml for an example. |
auth_htpasswd | empty | Install htpasswd; check out cluster-example.yml for an example. |
auth_github | empty | Install GitHub IDP; check out cluster-example.yml for an example. |
cluster_role_bindings | empty | Set up cluster role bindings; check out cluster-example.yml for an example. |
openshift_install_command | check defaults | Important for air-gapped installation; check out docs/air-gapped.md. |
install_config_additionalTrustBundle | empty | Important for air-gapped installation; check out docs/air-gapped.md. |
install_config_imageContentSources | empty | Important for air-gapped installation; check out docs/air-gapped.md. |
letsencrypt_disabled | false | Disable the Let's Encrypt setup (enabled by default). |
sdn_plugin_name | OVNKubernetes | Change the SDN plugin; valid values are OpenShiftSDN and OVNKubernetes. |
masters_schedulable | true | Set to false if you don't want to allow workloads onto the master nodes. |
install_config_capabilities | null | Configure cluster capabilities. |
fips | false | Enable FIPS mode on the OpenShift cluster. |
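As an illustration, a cluster.yml that enables a few of these optional features might contain (hypothetical values):

```yaml
storage_nfs: true              # local NFS server + StorageClass for the internal registry
vm_autostart: true             # start the cluster VMs when the host boots
vm_storage_backend: lvm
vm_storage_backend_location: vg0
letsencrypt_disabled: false
```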
With cluster.yml configured, change into the repository and run the all-in-one setup playbook:
cd hetzner-ocp4
ansible-navigator run ansible/setup.yml
When using an Ansible-vault containing sensitive configuration data:
% ansible-vault create ansible/group_vars/all/my-vault.yml
% touch ~/.vault-password && chmod 0600 ~/.vault-password
% vi ~/.vault-password # enter password as plain-text
% export ANSIBLE_VAULT_PASSWORD_FILE=~/.vault-password
% ansible-navigator run ansible/setup.yml
Please note that group_vars and host_vars will have to reside in the directory of the playbook to be run. In our setup, this will be ansible/group_vars and ansible/host_vars.
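For example, the vault created above could hold the sensitive variables referenced in this guide, keeping them out of cluster.yml (hypothetical content):

```yaml
# ansible/group_vars/all/my-vault.yml (encrypted with ansible-vault)
image_pull_secret: '{"auths": {...}}'
cloudflare_account_api_token: 9348234sdsd894.....
```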
- How to use add-ons (post_install_add_ons)
- How to install and manage more than one OpenShift Cluster with hetzner-ocp4
- How to install an air-gapped cluster with hetzner-ocp4
- How to install a proxy cluster with hetzner-ocp4
- How to setup a container native virtualization lab (nested) with hetzner-ocp4
- How to install an OpenShift nightly or RC build (any kind of pre-release)
- Disk management (add disk to vm, wipe node)
- How to pass through NVMe or GPU devices (pci-passthrough)
- How to install OKD
- Virsh commands cheatsheet to manage KVM guest virtual machines
- Remote execution, run the playbooks on your laptop
Playbook | Description |
---|---|
ansible/00-provision-hetzner.yml | Automated operating system installation for your Hetzner bare-metal server. Detailed documentation: docs/hetzner.md |
ansible/01-prepare-host.yml | Install all dependencies (KVM, libvirt & co.) on your Hetzner bare-metal server. |
ansible/02-create-cluster.yml | Installation of your OpenShift 4 cluster. |
ansible/03-stop-cluster.yml | Stop all virtual machines related to your OpenShift 4 cluster. |
ansible/04-start-cluster.yml | Start all virtual machines related to your OpenShift 4 cluster. |
ansible/99-destroy-cluster.yml | Delete everything that was created via ansible/02-create-cluster.yml. |
ansible/renewal-certificate.yml | Renew your Let's Encrypt certificate and replace it in your OpenShift 4 cluster. There is no automatic renewal process; please run the renewal yourself. |
ansible/run-add-ons.yml | Run all enabled add-ons against your OpenShift 4 cluster. |
ansible/setup.yml | One-shot cluster installation, including operating system installation and configuration of your Hetzner bare-metal server. |
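The playbooks can also be run individually with ansible-navigator, for example to destroy and recreate the cluster without touching the host setup:

```bash
ansible-navigator run -m stdout ./ansible/99-destroy-cluster.yml
ansible-navigator run -m stdout ./ansible/02-create-cluster.yml
```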
Problem | Command |
---|---|
Check haproxy connections | podman exec -ti openshift-4-loadbalancer-${cluster_name} ./watch-stats.sh |
Start cluster after reboot | ansible-navigator run -m stdout ./ansible/04-start-cluster.yml |
To build and publish a new Ansible execution environment image for ansible-navigator (adjust the image tag if you push to your own registry):
# Tag the image with a timestamp
VERSION=$(date +%Y%m%d%H%M)
# Build the execution environment with ansible-builder using podman
ansible-builder build \
    --verbosity 3 \
    --container-runtime podman \
    --tag quay.io/redhat-emea-ssa-team/hetzner-ocp4-ansible-ee:$VERSION
# Push the image to the registry
podman push quay.io/redhat-emea-ssa-team/hetzner-ocp4-ansible-ee:$VERSION