From 4bfdeb523e76fddc4397a5ca2923bd231c40be30 Mon Sep 17 00:00:00 2001 From: Jaime Magiera Date: Sat, 14 Mar 2020 12:25:40 -0400 Subject: [PATCH 1/8] initial commit of govc-specific document --- Documentation/UPI/vSphere/vSphere-govc.md | 257 ++++++++++++++++++++++ 1 file changed, 257 insertions(+) create mode 100644 Documentation/UPI/vSphere/vSphere-govc.md diff --git a/Documentation/UPI/vSphere/vSphere-govc.md b/Documentation/UPI/vSphere/vSphere-govc.md new file mode 100644 index 0000000..e7bf8a7 --- /dev/null +++ b/Documentation/UPI/vSphere/vSphere-govc.md @@ -0,0 +1,257 @@ +# Install OKD 4 on top of an UPI VMware vSphere configuration +This guide explains how to provision Fedora CoreOS on vSphere and install OKD on it. + +## Assumptions +- You have `openshift-installer` and `oc` for the OKD version you're installing in your PATH. See [Getting Started](/README.md#getting-started) +- This guide uses [govc](https://github.com/vmware/govmomi/tree/master/govc) to interface with vSphere. The examples assume you have already set up a connection with the required authenticated. You can complete the same tasks using a variety of tools, including PowerCLI, the vSphere web UI and terraform. +- The configuration uses `platform: none` which means that OKD will not integrate into vSphere and can not, for example, automatically provision volumes backed by vSphere datastores. +- You have a network / portgroup in vSphere you can use for the cluster. + +## Walkthrough + +### Obtain Fedora CoreOS images +Find and download an image of FCOS for VMware vSphere from https://getfedora.org/en/coreos/download/ + +``` +wget https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/31.20200113.3.1/x86_64/fedora-coreos-31.20200113.3.1-vmware.x86_64.ova + +# Import into vSphere +govc import.ova -ds= \ + -name fedora-coreos-31.20200113.3.1-vmware.x86_64 \ + fedora-coreos-31.20200113.3.1-vmware.x86_64.ova +``` + +### Create FCOS VMs +``` +#!/bin/bash +# Title: UPI-vSphere-GenerateVMs +# Description: This is an example bash script to create the VMs iteratively. Set the values for cluster_name, datastore_name, vm_folder, network_name, master_node_count, and worker_node_count. + +template_name="fedora-coreos-31.20200113.3.1-vmware.x86_64" +cluster_name= +datastore_name= +vm_folder= +network_name= +master_node_count= +worker_node_count= + +# Create the master nodes + +for (( i=1; i<=${master_node_count}; i++ )); do + vm="${cluster_name}-master-${i}" + govc vm.clone -vm "${template_name}" \ + -ds "${datastore_name}" \ + -folder "${vm_folder}" \ + -on="false" \ + -c="4" -m="8192" \ + -net="${network_name}" \ + $vm + govc vm.disk.change -vm $vm -disk.label "Hard disk 1" -size 120G +done + +# Create the worker nodes + +for (( i=1; i<=${worker_node_count}; i++ )); do + vm="${cluster_name}-worker-${i}" + govc vm.clone -vm "${template_name}" \ + -ds "${datastore_name}" \ + -folder "${vm_folder}" \ + -on="false" \ + -c="4" -m="8192" \ + -net="${network_name}" \ + $vm + govc vm.disk.change -vm $vm -disk.label "Hard disk 1" -size 120G +done + + +# Create the bootstrap node + +vm="${cluster_name}-bootstrap" +govc vm.clone -vm "${template_name}" \ + -ds "${datastore_name}" \ + -folder "${vm_folder}" \ + -on="false" \ + -c="4" -m="8192" \ + -net="${network_name}" \ + $vm +govc vm.disk.change -vm $vm -disk.label "Hard disk 1" -size 120G + + +``` + +### Configure DNS, DHCP and LB +The installation requires specific configuration of DNS and a load balancer. 
The requirements are listed in the official Openshift documentation: [Creating the user-provisioned infrastructure](https://docs.openshift.com/container-platform/4.2/installing/installing_vsphere/installing-vsphere.html#installation-infrastructure-user-infra_installing-vsphere). Example configurations are available at [requirements](/Documentation/UPI/Requirements) + +You will also need working DHCP on the network the cluster hosts are connected to. The DHCP server should assign the hosts unique FQDNs. + +### Create cluster configuration and Ignition files +Create `install-config.yaml`: + +``` +apiVersion: v1 +baseDomain: domain.tld +metadata: + name: cluster + +compute: +- hyperthreading: Enabled + name: worker + replicas: 3 + +controlPlane: + hyperthreading: Enabled + name: master + replicas: 3 + +platform: + none: {} + +pullSecret: '' +sshKey: +``` + +**NOTE**: It is a good idea to keep a copy of `install-config.yaml` for if you need to recreate the Ignition files since the following step destroys it. + + +Generate Ignition-configs: + +``` +openshift-install create ignition-configs +``` + +Your `install-config.yaml` file should have been replaced with a bunch of `.ign` files which will be used to configure the FCOS hosts. + +**NOTE**: If you want to totally regenerate the Ignition configs (for example to replace expired temporary certificates) you will also need to remove a hidden `.openshift_install_state.json`-file + +### Serve bootstrap.ign + +Due to the size of the bootstrap.ign file it can't be directly written into the VM metadata but needs to be served over HTTP instead. One way to do this is to use `python3 -m http.server`. + +Create a file `append-bootstrap.ign` which contains an URL to the full `bootstrap.ign`: + +``` +{ + "ignition": { + "config": { + "merge": [ + { + "source": "http://10.0.0.50:8000/bootstrap.ign", + "verification": {} + } + ] + }, + "timeouts": {}, + "version": "3.0.0" + } +} +``` + +### Set the VM metadata +Steps which need to be done: +- Set the VM property `guestinfo.ignition.config.data` to a base64-encoded version of the Ignition-config +- Set the VM property `guestinfo.ignition.config.data.encoding` to `base64` +- Set the VM property `disk.EnableUUID` to `TRUE` + +``` +#!/bin/bash +# Title: UPI-vSphere-AddMetadata +# Description: This is an example bash script to set the metadata on the VMs iteratively. Set the values for cluster_name, master_node_count, and worker_node_count. + +cluster_name= +master_node_count= +worker_node_count= + +# Set the metadata on the master nodes + +for (( i=1; i<=${master_node_count}; i++ )); do + vm="${cluster_name}-master-${i}" + govc vm.change -vm $vm \ + -e guestinfo.ignition.config.data="$(cat master.ign | base64 -w0)" \ + -e guestinfo.ignition.config.data.encoding="base64" \ + -e disk.EnableUUID="TRUE" +done + +# Set the metadata on the worker nodes + +for (( i=1; i<=${worker_node_count}; i++ )); do + vm="${cluster_name}-worker-${i}" + govc vm.change -vm $vm \ + -e guestinfo.ignition.config.data="$(cat worker.ign | base64 -w0)" \ + -e guestinfo.ignition.config.data.encoding="base64" \ + -e disk.EnableUUID="TRUE" +done + +# Set the metadata on the bootstrap node + +vm="${cluster_name}-bootstrap" +govc vm.change -vm $vm \ + -e guestinfo.ignition.config.data="$(cat append-bootstrap.ign | base64 -w0)" \ + -e guestinfo.ignition.config.data.encoding="base64" \ + -e disk.EnableUUID="TRUE" + +``` + +### Start the bootstrap server +After every server in your cluster was provisioned, start the bootstrap server. 
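+With govc, powering it on might look like the following sketch (reusing the `cluster_name` variable from the provisioning scripts above; adapt to however you manage your VMs):
+
+```
+# Power on the bootstrap VM created earlier (assumes cluster_name is set as in the earlier scripts)
+govc vm.power -on "${cluster_name}-bootstrap"
+```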
+
+By default, Fedora CoreOS will install the OS from the official OSTree image, so you have to wait a few minutes while the machine-config-daemon pulls and installs the pivot image from the registry. This image is necessary for the kubelet service, as the official Fedora CoreOS image does not include hyperkube.
+Once the image has been pulled and installed, the server reboots itself.
+When the server is back up, wait for the API service and the MachineConfig service to come up (check ports 6443 and 22623). Also check the status of `bootkube.service`.
+
+### Start the other servers
+**NOTE:** You can start every server in the cluster at the same time as the bootstrap server, since they will simply wait for the latter to expose the Kubernetes and MachineConfig API ports. These steps are separated only for convenience.
+
+Now that the bootstrap server is ready, you can start the remaining servers of your cluster.
+Just like the bootstrap server, the control plane and worker nodes boot from the official Fedora CoreOS image, which does not contain hyperkube. Because hyperkube is missing, the kubelet service will not start, and neither will cluster bootstrapping.
+Wait for the machine-config-daemon to pull the same image as on the bootstrap server. The servers will reboot themselves and then try to join the cluster, starting the bootstrapping process.
+
+For debugging you can use `sudo crictl ps` and `sudo crictl logs <container_id>` to inspect the state of the various components.
+
+### Install OKD cluster
+#### Bootstrap stage
+Now that every server is up and running, they are ready to form the cluster.
+Bootstrap will start as soon as the master nodes finish forming the etcd cluster.
+
+Meanwhile, run the OKD installer to check the status of the installation:
+
+`$ openshift-install wait-for bootstrap-complete --log-level debug`
+
+The installer checks for the availability of the Kubernetes API and then for the `bootstrap-complete` event, which is emitted once the cluster has almost finished installing every cluster operator.
+The OKD installer will wait up to 30 minutes, which should be enough to complete the bootstrap process.
+
+#### Intermediate stage
+When the bootstrap is finished you have to approve the node CSRs, configure the storage backend for the `image-registry` cluster operator, and shut down the bootstrap node.
+
+Shut down the bootstrap VM and then remove it from the load balancer pools. If you followed the [LB_HAProxy.md](../Requirements/LB_HAProxy.md) guide to configure HAProxy as your load balancer, simply comment out the two `bootstrap` records in the configuration file and restart the service.
+
+After the bootstrap VM is offline, authenticate as `system:admin` in OKD using the `kubeconfig` file, which was created when the Ignition configs were [generated](#create-cluster-configuration-and-ignition-files).
+
+Export the `KUBECONFIG` variable as in the following example:
+
+`$ export KUBECONFIG=$(pwd)/auth/kubeconfig`
+
+You should now be able to interact with the OKD cluster using the `oc` utility.
+
+You can approve pending certificate requests with:
+
+`$ oc get csr -ojson | jq -r '.items[] | select(.status == {} ) | .metadata.name' | xargs --no-run-if-empty oc adm certificate approve`
+
+The `image-registry` cluster operator is a bit trickier.
+
+By default the registry expects a storage provider to supply an RWX volume, or it must be configured to be ephemeral.
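+If you want to inspect the operator's current state before changing it, something like the following should work (read-only checks against the same `configs.imageregistry.operator.openshift.io/cluster` resource that is patched below):
+
+```
+# Show the image-registry operator configuration, including the current storage stanza
+oc get configs.imageregistry.operator.openshift.io cluster -o yaml
+
+# Check the operator's overall status
+oc get clusteroperator image-registry
+```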
+
+If you want the registry to store your container images, follow the [official OKD 4 documentation](https://docs.okd.io/latest/registry/configuring-registry-storage/configuring-registry-storage-baremetal.html) to configure a persistent storage backend. There are many backends you can use, so choose the most appropriate one for your infrastructure.
+
+If you instead want an ephemeral registry, just run the following command to use `emptyDir`:
+`$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed","storage":{"emptyDir":{}}}}'`
+
+**NOTE:** While `emptyDir` is suitable for non-production or temporary clusters, it is not recommended for production environments.
+
+#### Final stage
+Now that everything is configured, run the OKD installer again to wait for the `install-complete` event.
+
+`$ openshift-install wait-for install-complete --log-level debug`
+
+After the installation is complete you can log in to your cluster via the web UI as `kubeadmin`. The password for this account is auto-generated and stored in the `auth/kubeadmin-password` file. If you want to use the `oc` utility, you can still use the `kubeconfig` file you used [before](#intermediate-stage).
+
+**NOTE:** `kubeadmin` is a temporary user and should not be left enabled after the cluster is up and running.
+Follow the [official OKD 4 documentation](https://docs.okd.io/latest/authentication/understanding-authentication.html) to configure an alternative Identity Provider and to remove `kubeadmin`.

From a00d8b9512e6c6c6ee6d670d6b438e5ccddcb340 Mon Sep 17 00:00:00 2001
From: Jaime Magiera
Date: Sat, 14 Mar 2020 18:31:36 -0400
Subject: [PATCH 2/8] Moving and renaming to follow new convention

---
 .../{vSphere/vSphere/vSphere-govc.md => vSphere_govc/README.md} | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename Guides/UPI/{vSphere/vSphere/vSphere-govc.md => vSphere_govc/README.md} (100%)

diff --git a/Guides/UPI/vSphere/vSphere/vSphere-govc.md b/Guides/UPI/vSphere_govc/README.md
similarity index 100%
rename from Guides/UPI/vSphere/vSphere/vSphere-govc.md
rename to Guides/UPI/vSphere_govc/README.md

From 72e02bae84229bf4b8cb11b96d801da921a69ae5 Mon Sep 17 00:00:00 2001
From: Jaime Magiera
Date: Sat, 14 Mar 2020 18:36:05 -0400
Subject: [PATCH 3/8] fixing language

---
 Guides/UPI/vSphere_govc/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Guides/UPI/vSphere_govc/README.md b/Guides/UPI/vSphere_govc/README.md
index e7bf8a7..4135479 100644
--- a/Guides/UPI/vSphere_govc/README.md
+++ b/Guides/UPI/vSphere_govc/README.md
@@ -1,5 +1,5 @@
 # Install OKD 4 on top of an UPI VMware vSphere configuration
-This guide explains how to provision Fedora CoreOS on vSphere and install OKD on it.
+This guide explains how to provision Fedora CoreOS on vSphere and install OKD on it. The guide includes bash scripts to automate the gogovc command line tool for interacting with the vSphere cluster.
 
 ## Assumptions
 - You have `openshift-installer` and `oc` for the OKD version you're installing in your PATH.
See [Getting Started](/README.md#getting-started) From 042075493b10dc012458bdbd3bef1d4a1fc487db Mon Sep 17 00:00:00 2001 From: Jaime Magiera Date: Sat, 14 Mar 2020 18:47:18 -0400 Subject: [PATCH 4/8] initial commmit --- Guides/UPI/vSphere_govc/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Guides/UPI/vSphere_govc/README.md b/Guides/UPI/vSphere_govc/README.md index 4135479..2ccdd77 100644 --- a/Guides/UPI/vSphere_govc/README.md +++ b/Guides/UPI/vSphere_govc/README.md @@ -1,5 +1,5 @@ # Install OKD 4 on top of an UPI VMware vSphere configuration -This guide explains how to provision Fedora CoreOS on vSphere and install OKD on it. The guide includes bash scripts to automate the gogovc command line tool for interacting with the vSphere cluster. +This guide explains how to provision Fedora CoreOS on vSphere and install OKD on it. The guide includes bash scripts to automate the govc command line tool for interacting with the vSphere cluster. ## Assumptions - You have `openshift-installer` and `oc` for the OKD version you're installing in your PATH. See [Getting Started](/README.md#getting-started) From 88956eb5d27ece18088aab1df7b641fbfcdcd3d7 Mon Sep 17 00:00:00 2001 From: Jaime Magiera Date: Sat, 14 Mar 2020 19:04:39 -0400 Subject: [PATCH 5/8] removed old vSphere folder --- Guides/UPI/vSphere/Requirements/DNS_Bind.md | 121 ----------- Guides/UPI/vSphere/Requirements/LB_HAProxy.md | 176 ---------------- Guides/UPI/vSphere/vSphere.md | 195 ------------------ 3 files changed, 492 deletions(-) delete mode 100644 Guides/UPI/vSphere/Requirements/DNS_Bind.md delete mode 100644 Guides/UPI/vSphere/Requirements/LB_HAProxy.md delete mode 100644 Guides/UPI/vSphere/vSphere.md diff --git a/Guides/UPI/vSphere/Requirements/DNS_Bind.md b/Guides/UPI/vSphere/Requirements/DNS_Bind.md deleted file mode 100644 index 5b8ff28..0000000 --- a/Guides/UPI/vSphere/Requirements/DNS_Bind.md +++ /dev/null @@ -1,121 +0,0 @@ -# Configure Bind/named for DNS service -This guide will explain how to install and configure Bind/named as a DNS server for OKD. - -## Assumptions - - - This guide is based on CentOS 7; - - Firewall rules are managed by firewalld. - - This guide use `example.com` as base domain. Replace it with your own. - -## Walkthrough -### Install the requirements -Bind is included in `base` repository, so you can install just with: -``` -$ sudo yum install bind -``` -### General configuration -At the end of `/etc/named.conf` add the following file: -`include "/etc/named/named.conf.local";` - -`/etc/named/named.conf.local` contains the configuration of the DNS zones. -Such file should be something like the following example: -``` -# cat /etc/named/named.conf.local -zone "example.com" { - type master; - file "/var/named/zones/db.example.com"; # zone file path -}; - -zone "100.168.192.in-addr.arpa" { - type master; - file "/var/named/zones/db.192.168.100"; # 192.168.100.0/24 subnet -}; -``` -### DNS Zone configuration -#### Main Zone -Create the file `/var/named/zones/db.example.com` with a content like the following example. -``` -$TTL 604800 -@ IN SOA ns1.example.com. admin.example.com. ( - 1 ; Serial - 604800 ; Refresh - 86400 ; Retry - 2419200 ; Expire - 604800 ; Negative Cache TTL -) - -; name servers - NS records - IN NS ns1 - -; name servers - A records -ns1.example.com. IN A BASTION_IP - -; OpenShift Container Platform Cluster - A records -BOOTSTRAP_SERVER_FQDN. IN A BOOTSTRAP_SERVER_IP -CONTROL_PLANE_0_FQDN. IN A CONTROL_PLANE_0_IP -CONTROL_PLANE_1_FQDN. 
IN A CONTROL_PLANE_1_IP -CONTROL_PLANE_2_FQDN. IN A CONTROL_PLANE_2_IP -COMPUTE_NODE_0_FQDN. IN A COMPUTE_NODE_0_IP -COMPUTE_NODE_1_FQDN. IN A COMPUTE_NODE_1_IP - -; OpenShift internal cluster IPs - A records -api.CLUSTER_NAME.example.com. IN A BASTION_IP -api-int.CLUSTER_NAME.example.com. IN A BASTION_IP -*.apps.CLUSTER_NAME.example.com. IN A BASTION_IP -etcd-0.CLUSTER_NAME.example.com. IN A CONTROL_PLANE_0_IP -etcd-1.CLUSTER_NAME.example.com. IN A CONTROL_PLANE_1_IP -etcd-2.CLUSTER_NAME.example.com. IN A CONTROL_PLANE_2_IP -console-openshift-console.apps.CLUSTER_NAME.example.com. IN A BASTION_IP -oauth-openshift.apps.CLUSTER_NAME.example.com. IN A BASTION_IP - -; OpenShift internal cluster IPs - SRV records -_etcd-server-ssl._tcp.CLUSTER_NAME.example.com. 86400 IN SRV 0 10 2380 etcd-0.CLUSTER_NAME -_etcd-server-ssl._tcp.CLUSTER_NAME.example.com. 86400 IN SRV 0 10 2380 etcd-1.CLUSTER_NAME -_etcd-server-ssl._tcp.CLUSTER_NAME.example.com. 86400 IN SRV 0 10 2380 etcd-2.CLUSTER_NAME -``` -Replace IP and FQDN placeholders accordingly to the configuration of your cluster. - -**NOTE:** `CLUSTER_NAME` shall be the same name you're going to use in the install-config.yaml. - -#### Reverse Zone -Create the file `/var/named/zones/db.192.168.100` with a content like the following example. -``` -$TTL 604800 -@ IN SOA ns1.example.com. admin.example.com. ( - 6 ; Serial - 604800 ; Refresh - 86400 ; Retry - 2419200 ; Expire - 604800 ; Negative Cache TTL -) - -; name servers - NS records - IN NS ns1.example.com. - -; name servers - PTR records -BASTION_LAST_OCTECT_IP IN PTR ns1.example.com. - -; OpenShift Container Platform Cluster - PTR records -BOOTSTRAP_SERVER_LAST_OCTECT_IP IN PTR BOOTSTRAP_SERVER_FQDN. -CONTROL_PLANE_0_LAST_OCTECT_IP IN PTR CONTROL_PLANE_0_FQDN. -CONTROL_PLANE_1_LAST_OCTECT_IP IN PTR CONTROL_PLANE_1_FQDN. -CONTROL_PLANE_2_LAST_OCTECT_IP IN PTR CONTROL_PLANE_2_FQDN. -COMPUTE_NODE_0_LAST_OCTECT_IP IN PTR COMPUTE_NODE_0_FQDN. -COMPUTE_NODE_1_LAST_OCTECT_IP IN PTR COMPUTE_NODE_1_FQDN. -``` -Replace every last octet and FQDN placeholders accordingly to the configuration of your cluster. - -### Start DNS -Now that both the main and the reverse zones are configured, you can start the `named` service with the following command: -``` -$ sudo systemctl enable --now named -``` -### Configure firewall -If your DNS is intended to be internal and cluster-specific, and not general purpose, you could configure firewalld to block any requests to the port 53 that came from the outside of the OKD network, with the following commands: -``` -$ sudo firewall-cmd --add-rich-rule='rule family="ipv4" source address="LIBVIRT_OKD_SUBNET" service name="dns" accept' --permanent -$ sudo firewall-cmd --reload -``` -where `LIBVIRT_OKD_SUBNET` is the subnet you're going to allow. -Alternatively you can bind named to a specific IP or restrict the hosts that can inquiry the DNS. - diff --git a/Guides/UPI/vSphere/Requirements/LB_HAProxy.md b/Guides/UPI/vSphere/Requirements/LB_HAProxy.md deleted file mode 100644 index 266c52b..0000000 --- a/Guides/UPI/vSphere/Requirements/LB_HAProxy.md +++ /dev/null @@ -1,176 +0,0 @@ -# Configure HAProxy as a cluster load balancer - -This guide will explain how to install and configure a load balancer with HAProxy, to use it as a front-end for the OKD cluster. - -## Assumptions - - - This guide is based on CentOS 7; - - The cluster example has two dedicated infra nodes, identified by `OKD4_INFRA_NODE_0_IP` and `OKD4_INFRA_NODE_1_IP`; - - Firewall rules are managed by firewalld. 
- -## Walkthrough -### Install the requirements -Since HAProxy is included in the `base` repository, you can just install it with the following command: -``` -$ sudo yum install haproxy -``` -### Configure the pools for bootstrapping -After the installation you need to configure the pools it needs to balance. -For an OKD installation, HAProxy has to provide load balancing capabilities to the following services: - - - OKD default route (ports 443 and 80); - - Kubernetes API/CLI (port 6443); - - MachineConfig API (port 22623). - -Edit `/etc/haproxy/haproxy.cfg` like following example: -``` -# Global settings -#--------------------------------------------------------------------- -global - maxconn 20000 - log /dev/log local0 info - chroot /var/lib/haproxy - pidfile /var/run/haproxy.pid - user haproxy - group haproxy - daemon - - # turn on stats unix socket - stats socket /var/lib/haproxy/stats - -#--------------------------------------------------------------------- -# common defaults that all the 'listen' and 'backend' sections will -# use if not designated in their block -#--------------------------------------------------------------------- -defaults - mode http - log global - option httplog - option dontlognull - option http-server-close - option forwardfor except 127.0.0.0/8 - option redispatch - retries 3 - timeout http-request 10s - timeout queue 1m - timeout connect 10s - timeout client 300s - timeout server 300s - timeout http-keep-alive 10s - timeout check 10s - maxconn 20000 - -listen stats - bind :9000 - mode http - stats enable - stats uri / - -frontend ocp4_k8s_api_fe - bind :6443 - default_backend ocp4_k8s_api_be - mode tcp - option tcplog - -backend ocp4_k8s_api_be - balance roundrobin - mode tcp - server bootstrap OKD4_BOOTSTRAP_SERVER_IP:6443 check - server master0 OKD4_CONTROL_PLANE_0_IP:6443 check - server master1 OKD4_CONTROL_PLANE_1_IP:6443 check - server master2 OKD4_CONTROL_PLANE_2_IP:6443 check - -frontend ocp4_machine_config_server_fe - bind :22623 - default_backend ocp4_machine_config_server_be - mode tcp - option tcplog - -backend ocp4_machine_config_server_be - balance roundrobin - mode tcp - server bootstrap OKD4_BOOTSTRAP_SERVER_IP:22623 check - server master0 OKD4_CONTROL_PLANE_0_IP:22623 check - server master1 OKD4_CONTROL_PLANE_1_IP:22623 check - server master2 OKD4_CONTROL_PLANE_2_IP:22623 check - -frontend ocp4_http_ingress_traffic_fe - bind :80 - default_backend ocp4_http_ingress_traffic_be - mode tcp - option tcplog - -backend ocp4_http_ingress_traffic_be - balance roundrobin - mode tcp - server infra0 OKD4_INFRA_NODE_0_IP:80 check - server infra1 OKD4_INFRA_NODE_1_IP:80 check - -frontend ocp4_https_ingress_traffic_fe - bind :443 - default_backend ocp4_https_ingress_traffic_be - mode tcp - option tcplog - -backend ocp4_https_ingress_traffic_be - balance roundrobin - mode tcp - server infra0 OKD4_INFRA_NODE_0_IP:443 check - server infra1 OKD4_INFRA_NODE_1_IP:443 check -``` -Replace `OKD4_BOOTSTRAP_SERVER_IP`, `OKD4_CONTROL_PLANE_0_IP`, `OKD4_CONTROL_PLANE_1_IP`, `OKD4_CONTROL_PLANE_2_IP`, `OKD4_INFRA_NODE_0_IP` and `OKD4_INFRA_NODE_1_IP` with the IPs of your cluster. - -As described [above](#assumptions), in this example the pools `ocp4_http_ingress_traffic_be` and `ocp4_https_ingress_traffic_be` will balance on the two infra nodes indentified as `infra0` and `infra1`. -If you're not going to provision two separate infra nodes, ensure that those pools will balance the compute nodes instead. 
- -### Configure SELinux to allow non-standard port binding -Since HAProxy is going to bind itself to non-standard ports like 6443 and 22623, SELinux needs to be configured to allow such configurations. -``` -$ sudo setsebool -P haproxy_connect_any on -``` -### Starting HAProxy -Now that SELinux allows HAProxy to bind to non-standard ports, you have to start it service. -``` -$ sudo systemctl start haproxy -``` -Inquiry the service status should show something like: -``` -$ systemctl status haproxy -● haproxy.service - HAProxy Load Balancer - Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled) - Active: active (running) since dom 2019-12-29 01:44:51 CET; 1 day 13h ago - Main PID: 20458 (haproxy-systemd) - Tasks: 3 - CGroup: /system.slice/haproxy.service - ├─20458 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid - ├─20459 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds - └─20460 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds - -dic 29 01:44:51 localhost haproxy-systemd-wrapper[20458]: [WARNING] 362/014451 (20459) : config : 'option forwardfor' ignored for frontend 'ocp4_https_ingress...TTP mode. -dic 29 01:44:51 localhost haproxy-systemd-wrapper[20458]: [WARNING] 362/014451 (20459) : config : 'option forwardfor' ignored for backend 'ocp4_https_ingress_...TTP mode. -dic 29 01:44:51 localhost haproxy[20459]: Proxy ocp4_k8s_api_fe started. -dic 29 01:44:51 localhost haproxy[20459]: Proxy ocp4_k8s_api_be started. -dic 29 01:44:51 localhost haproxy[20459]: Proxy ocp4_machine_config_server_fe started. -dic 29 01:44:51 localhost haproxy[20459]: Proxy ocp4_machine_config_server_be started. -dic 29 01:44:51 localhost haproxy[20459]: Proxy ocp4_http_ingress_traffic_fe started. -dic 29 01:44:51 localhost haproxy[20459]: Proxy ocp4_http_ingress_traffic_be started. -dic 29 01:44:51 localhost haproxy[20459]: Proxy ocp4_https_ingress_traffic_fe started. -dic 29 01:44:51 localhost haproxy[20459]: Proxy ocp4_https_ingress_traffic_be started. -Hint: Some lines were ellipsized, use -l to show in full. -``` - -### Configure the pools after bootstrapping -When the cluster is correctly deployed and the bootstrap node can be turned off, comment the lines related to the node that in the [above](#configure-the-pools-for-bootstrapping) configuration example is called `bootstrap`. -Then restart the service to activate the new configuration: -``` -$ sudo systemctl restart haproxy -``` - -### Configure firewall -If your server is exposed to internet, like a rented dedicated server, you can restrict the access to some ports in order to let the API, such as the Kubernetes' and the MachingConfig's, to be reachable only from the internal cluster network, through some firewall rules, like the following example: -``` -$ sudo firewall-cmd --add-rich-rule='rule family="ipv4" source address="LIBVIRT_OKD_SUBNET" port port="6443" protocol="tcp" accept' --permanent -$ sudo firewall-cmd --add-rich-rule='rule family="ipv4" source address="LIBVIRT_OKD_SUBNET" port port="22623" protocol="tcp" accept' --permanent -$ sudo firewall-cmd --reload -``` - diff --git a/Guides/UPI/vSphere/vSphere.md b/Guides/UPI/vSphere/vSphere.md deleted file mode 100644 index 374558a..0000000 --- a/Guides/UPI/vSphere/vSphere.md +++ /dev/null @@ -1,195 +0,0 @@ -# Install OKD 4 on top of an UPI VMware vSphere configuration -This guide explains how to provision Fedora CoreOS on vSphere and install OKD on it. 
- -## Assumptions -- You have `openshift-installer` and `oc` for the OKD version you're installing in your PATH. See [Getting Started](/README.md#getting-started) -- This guide uses [govc](https://github.com/vmware/govmomi/tree/master/govc) to interface with vSphere. The examples assume you have already set up a connection with the required authenticated. You can complete the same tasks using a variety of tools, including PowerCLI, the vSphere web UI and terraform. -- The configuration uses `platform: none` which means that OKD will not integrate into vSphere and can not, for example, automatically provision volumes backed by vSphere datastores. -- You have a network / portgroup in vSphere you can use for the cluster. - -## Walkthrough - -### Obtain Fedora CoreOS images -Find and download an image of FCOS for VMware vSphere from https://getfedora.org/en/coreos/download/ - -``` -wget https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/31.20200113.3.1/x86_64/fedora-coreos-31.20200113.3.1-vmware.x86_64.ova - -# Import into vSphere -govc import.ova -ds= \ - -name fedora-coreos-31.20200113.3.1-vmware.x86_64 \ - fedora-coreos-31.20200113.3.1-vmware.x86_64.ova -``` - -### Create FCOS VMs -``` -for vm in \ - okd4-master-1 okd4-master-2 okd4-master-3 \ - okd4-worker-1 okd4-worker-2 okd4-worker-3 \ - okd4-bootstrap; do - govc vm.clone -vm fedora-coreos-31.20200113.3.1-vmware.x86_64 \ - -ds -folder -on=false \ - -c=4 -m=8192 -net= $vm - - govc vm.disk.change -vm $vm -disk.label "Hard disk 1" -size 120G -done -``` - -### Configure DNS, DHCP and LB -The installation requires specific configuration of DNS and a load balancer. The requirements are listed in the official OKD documentation: [Creating the user-provisioned infrastructure](https://docs.okd.io/latest/installing/installing_vsphere/installing-vsphere.html#installation-infrastructure-user-infra_installing-vsphere). Example configurations are available at [requirements](/Guides/UPI/vSphere/Requirements) - -You will also need working DHCP on the network the cluster hosts are connected to. The DHCP server should assign the hosts unique FQDNs. - -### Create cluster configuration and Ignition files -Create `install-config.yaml`: - -``` -apiVersion: v1 -baseDomain: domain.tld -metadata: - name: cluster - -compute: -- hyperthreading: Enabled - name: worker - replicas: 3 - -controlPlane: - hyperthreading: Enabled - name: master - replicas: 3 - -platform: - none: {} - -pullSecret: '' -sshKey: -``` - -**NOTE**: It is a good idea to keep a copy of `install-config.yaml` for if you need to recreate the Ignition files since the following step destroys it. - - -Generate Ignition-configs: - -``` -openshift-install create ignition-configs -``` - -Your `install-config.yaml` file should have been replaced with a bunch of `.ign` files which will be used to configure the FCOS hosts. - -**NOTE**: If you want to totally regenerate the Ignition configs (for example to replace expired temporary certificates) you will also need to remove a hidden `.openshift_install_state.json`-file - -### Serve bootstrap.ign - -Due to the size of the bootstrap.ign file it can't be directly written into the VM metadata but needs to be served over HTTP instead. One way to do this is to use `python3 -m http.server`. 
- -Create a file `append-bootstrap.ign` which contains an URL to the full `bootstrap.ign`: - -``` -{ - "ignition": { - "config": { - "merge": [ - { - "source": "http://10.0.0.50:8000/bootstrap.ign", - "verification": {} - } - ] - }, - "timeouts": {}, - "version": "3.0.0" - } -} -``` - -### Set the VM metadata -Steps which need to be done: -- Set the VM property `guestinfo.ignition.config.data` to a base64-encoded version of the Ignition-config -- Set the VM property `guestinfo.ignition.config.data.encoding` to `base64` -- Set the VM property `disk.EnableUUID` to `TRUE` - -``` -for host in okd4-master-1 okd4-master-2 okd4-master-3; do - govc vm.change -vm $host \ - -e guestinfo.ignition.config.data="$(cat master.ign | base64 -w0)" \ - -e guestinfo.ignition.config.data.encoding="base64" \ - -e disk.EnableUUID="TRUE" -done - -for host in okd4-worker-1 okd4-worker-2 okd4-worker-3; do - govc vm.change -vm $host \ - -e guestinfo.ignition.config.data="$(cat worker.ign | base64 -w0)" \ - -e guestinfo.ignition.config.data.encoding="base64" \ - -e disk.EnableUUID="TRUE" -done - -govc vm.change -vm okd4-bootstrap \ - -e guestinfo.ignition.config.data="$(cat append-bootstrap.ign | base64 -w0)" \ - -e guestinfo.ignition.config.data.encoding="base64" \ - -e disk.EnableUUID="TRUE" -``` - -### Start the bootstrap server -After every server in your cluster was provisioned, start the bootstrap server. -By default, Fedora CoreOS will install the OS from the official OSTree image, so we have to wait a few minutes for the machine-config-daemon to pull and install the pivot image from the registry. This image is necessary for the kubelet service, as the official Fedora CoreOS image does not include hyperkube. -After the image was pulled and installed, the server will be rebooted by itself. -When the server is up again wait for the API service and the MachineConfig service to be spawned (check for the ports 6443 and 22623). Check also for the status of the `bootkube.service`. - -### Start the other servers -**NOTE:** You can start every server in the cluster in the same time of the boostrap server, as they will still waiting for the latter to expose the Kubernetes and MachineConfig API ports. These steps were separated just for convenience. - -Now that the bootstrap server is ready, you can start every server of your cluster. -Just like the bootstrap server, the control planes and the workers will boot with the official Fedora CoreOS image, that does not contains hyperkube. Since hyperkube is missing the kubelet service will not start and so the cluster bootstrapping. -Wait for the machine-config-daemon to pull the same image as the bootstrap server. The servers will reboot themselves and after that they will try to join the cluster, starting the bootstrapping process. - -For debugging you can use `sudo crictl ps` and `sudo crictl logs ` to inspect the state of the various components. - -### Install OKD cluster -#### Bootstrap stage -Now that every servers is up and running, they are ready to form the cluster. -Bootstrap will start as soon as the master nodes finish forming the etcd cluster. - -Meanwhile just run the OpenShift Installer in order to check the status of the installation: - -`$ openshift-installer wait-for bootstrap-complete --log-level debug` - -The installer will now check for the availability of the Kubernetes API and then for the `bootstrap-complete` event that will be spawned after the cluster has almost finished to install every cluster operator. -OpenShift installer will wait for 30 minutes. 
It should be enough to complete the bootstrap process. - -#### Intermediate stage -When the bootstrap is finished you have to approve the nodes CSR, configure the storage backend for the `image-registry` cluster operator, and shutting down the bootstrap node. - -Shut down the bootstrap vm and then remove it from the pools of the load balancer. If you followed the [LB_HAProxy.md](Requirements/LB_HAProxy.md) guide to configure HAProxy as you load balancer, just comment the two `bootstrap` records in the configuration file, and then restart its service. - -After the bootstrap vm is offline, authenticate as `system:admin` in OKD, by using the `kubeconfig` file, which was created when Ingnition configs were [generated](#generate-the-ignition-configuration-files). - -Export the `KUBECONFIG` variable like the following example: - -`$ export KUBECONFIG=$(pwd)/auth/kubeconfig` - -You should now bo able to interact with the OKD cluster by using the `oc` utility. - -For the certificate requests, you can approve them with: - -`$ oc get csr -ojson | jq -r '.items[] | select(.status == {} ) | .metadata.name' | xargs --no-run-if-empty oc adm certificate approve`. - -For the `image-registry` cluster operator things are getting a bit more tricky. - -By default registry would expect a storage provider to provide an RWX volume, or to be configured to be ephemeral. - -If you want the registry to store your container images, follow the [official OKD 4 documentation](https://docs.okd.io/latest/registry/configuring-registry-storage/configuring-registry-storage-baremetal.html) to configure a persistent storage backend. There are many backend you can use, so just choose the more appropriate for your infrastructure. - -If you want instead to use an ephemeral registry, just run the following command to use `emptyDir`: -`$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed","storage":{"emptyDir":{}}}}'` - -**NOTE:** While `emptyDir` is suitable for non-production or temporary cluster, it is not recommended for production environments. - -#### Final stage -Now that everything is configured run the OpenShift installer again to wait for the `install-complete` event. - -`$ openshift-install wait-for install-complete --log-level debug` - -After the installation is complete you can login into your cluster via WebUI using `kubeadmin` as login. Password for this account is auto-generated and stored in `auth/kubeadmin-password` file. If you want to use the `oc` utility, you can still use the `kubeconfig` file you used [before](#intermediate-stage). - -**NOTE:** `kubeadmin` is a temporary user and should not be left enabled after the cluster is up and running. -Follow the [official OKD 4 documentation](https://docs.okd.io/latest/authentication/understanding-authentication.html) to configure an alternative Identity Provider and to remove `kubeadmin`. 
From c2606c3aa8ab593e2edc17aa7c8e708a39187360 Mon Sep 17 00:00:00 2001 From: Jaime Magiera Date: Sat, 14 Mar 2020 19:06:40 -0400 Subject: [PATCH 6/8] Added Requirements --- .../UPI/vSphere_govc/Requirements/DNS_Bind.md | 121 ++++++++++++ .../vSphere_govc/Requirements/LB_HAProxy.md | 176 ++++++++++++++++++ 2 files changed, 297 insertions(+) create mode 100644 Guides/UPI/vSphere_govc/Requirements/DNS_Bind.md create mode 100644 Guides/UPI/vSphere_govc/Requirements/LB_HAProxy.md diff --git a/Guides/UPI/vSphere_govc/Requirements/DNS_Bind.md b/Guides/UPI/vSphere_govc/Requirements/DNS_Bind.md new file mode 100644 index 0000000..5b8ff28 --- /dev/null +++ b/Guides/UPI/vSphere_govc/Requirements/DNS_Bind.md @@ -0,0 +1,121 @@ +# Configure Bind/named for DNS service +This guide will explain how to install and configure Bind/named as a DNS server for OKD. + +## Assumptions + + - This guide is based on CentOS 7; + - Firewall rules are managed by firewalld. + - This guide use `example.com` as base domain. Replace it with your own. + +## Walkthrough +### Install the requirements +Bind is included in `base` repository, so you can install just with: +``` +$ sudo yum install bind +``` +### General configuration +At the end of `/etc/named.conf` add the following file: +`include "/etc/named/named.conf.local";` + +`/etc/named/named.conf.local` contains the configuration of the DNS zones. +Such file should be something like the following example: +``` +# cat /etc/named/named.conf.local +zone "example.com" { + type master; + file "/var/named/zones/db.example.com"; # zone file path +}; + +zone "100.168.192.in-addr.arpa" { + type master; + file "/var/named/zones/db.192.168.100"; # 192.168.100.0/24 subnet +}; +``` +### DNS Zone configuration +#### Main Zone +Create the file `/var/named/zones/db.example.com` with a content like the following example. +``` +$TTL 604800 +@ IN SOA ns1.example.com. admin.example.com. ( + 1 ; Serial + 604800 ; Refresh + 86400 ; Retry + 2419200 ; Expire + 604800 ; Negative Cache TTL +) + +; name servers - NS records + IN NS ns1 + +; name servers - A records +ns1.example.com. IN A BASTION_IP + +; OpenShift Container Platform Cluster - A records +BOOTSTRAP_SERVER_FQDN. IN A BOOTSTRAP_SERVER_IP +CONTROL_PLANE_0_FQDN. IN A CONTROL_PLANE_0_IP +CONTROL_PLANE_1_FQDN. IN A CONTROL_PLANE_1_IP +CONTROL_PLANE_2_FQDN. IN A CONTROL_PLANE_2_IP +COMPUTE_NODE_0_FQDN. IN A COMPUTE_NODE_0_IP +COMPUTE_NODE_1_FQDN. IN A COMPUTE_NODE_1_IP + +; OpenShift internal cluster IPs - A records +api.CLUSTER_NAME.example.com. IN A BASTION_IP +api-int.CLUSTER_NAME.example.com. IN A BASTION_IP +*.apps.CLUSTER_NAME.example.com. IN A BASTION_IP +etcd-0.CLUSTER_NAME.example.com. IN A CONTROL_PLANE_0_IP +etcd-1.CLUSTER_NAME.example.com. IN A CONTROL_PLANE_1_IP +etcd-2.CLUSTER_NAME.example.com. IN A CONTROL_PLANE_2_IP +console-openshift-console.apps.CLUSTER_NAME.example.com. IN A BASTION_IP +oauth-openshift.apps.CLUSTER_NAME.example.com. IN A BASTION_IP + +; OpenShift internal cluster IPs - SRV records +_etcd-server-ssl._tcp.CLUSTER_NAME.example.com. 86400 IN SRV 0 10 2380 etcd-0.CLUSTER_NAME +_etcd-server-ssl._tcp.CLUSTER_NAME.example.com. 86400 IN SRV 0 10 2380 etcd-1.CLUSTER_NAME +_etcd-server-ssl._tcp.CLUSTER_NAME.example.com. 86400 IN SRV 0 10 2380 etcd-2.CLUSTER_NAME +``` +Replace IP and FQDN placeholders accordingly to the configuration of your cluster. + +**NOTE:** `CLUSTER_NAME` shall be the same name you're going to use in the install-config.yaml. 
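+Optionally, you can sanity-check the configuration and the main zone file before moving on, for example with the `named-checkconf` and `named-checkzone` utilities that ship with bind (paths as used above):
+
+```
+$ sudo named-checkconf /etc/named.conf
+$ sudo named-checkzone example.com /var/named/zones/db.example.com
+```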
+ +#### Reverse Zone +Create the file `/var/named/zones/db.192.168.100` with a content like the following example. +``` +$TTL 604800 +@ IN SOA ns1.example.com. admin.example.com. ( + 6 ; Serial + 604800 ; Refresh + 86400 ; Retry + 2419200 ; Expire + 604800 ; Negative Cache TTL +) + +; name servers - NS records + IN NS ns1.example.com. + +; name servers - PTR records +BASTION_LAST_OCTECT_IP IN PTR ns1.example.com. + +; OpenShift Container Platform Cluster - PTR records +BOOTSTRAP_SERVER_LAST_OCTECT_IP IN PTR BOOTSTRAP_SERVER_FQDN. +CONTROL_PLANE_0_LAST_OCTECT_IP IN PTR CONTROL_PLANE_0_FQDN. +CONTROL_PLANE_1_LAST_OCTECT_IP IN PTR CONTROL_PLANE_1_FQDN. +CONTROL_PLANE_2_LAST_OCTECT_IP IN PTR CONTROL_PLANE_2_FQDN. +COMPUTE_NODE_0_LAST_OCTECT_IP IN PTR COMPUTE_NODE_0_FQDN. +COMPUTE_NODE_1_LAST_OCTECT_IP IN PTR COMPUTE_NODE_1_FQDN. +``` +Replace every last octet and FQDN placeholders accordingly to the configuration of your cluster. + +### Start DNS +Now that both the main and the reverse zones are configured, you can start the `named` service with the following command: +``` +$ sudo systemctl enable --now named +``` +### Configure firewall +If your DNS is intended to be internal and cluster-specific, and not general purpose, you could configure firewalld to block any requests to the port 53 that came from the outside of the OKD network, with the following commands: +``` +$ sudo firewall-cmd --add-rich-rule='rule family="ipv4" source address="LIBVIRT_OKD_SUBNET" service name="dns" accept' --permanent +$ sudo firewall-cmd --reload +``` +where `LIBVIRT_OKD_SUBNET` is the subnet you're going to allow. +Alternatively you can bind named to a specific IP or restrict the hosts that can inquiry the DNS. + diff --git a/Guides/UPI/vSphere_govc/Requirements/LB_HAProxy.md b/Guides/UPI/vSphere_govc/Requirements/LB_HAProxy.md new file mode 100644 index 0000000..266c52b --- /dev/null +++ b/Guides/UPI/vSphere_govc/Requirements/LB_HAProxy.md @@ -0,0 +1,176 @@ +# Configure HAProxy as a cluster load balancer + +This guide will explain how to install and configure a load balancer with HAProxy, to use it as a front-end for the OKD cluster. + +## Assumptions + + - This guide is based on CentOS 7; + - The cluster example has two dedicated infra nodes, identified by `OKD4_INFRA_NODE_0_IP` and `OKD4_INFRA_NODE_1_IP`; + - Firewall rules are managed by firewalld. + +## Walkthrough +### Install the requirements +Since HAProxy is included in the `base` repository, you can just install it with the following command: +``` +$ sudo yum install haproxy +``` +### Configure the pools for bootstrapping +After the installation you need to configure the pools it needs to balance. +For an OKD installation, HAProxy has to provide load balancing capabilities to the following services: + + - OKD default route (ports 443 and 80); + - Kubernetes API/CLI (port 6443); + - MachineConfig API (port 22623). 
+ +Edit `/etc/haproxy/haproxy.cfg` like following example: +``` +# Global settings +#--------------------------------------------------------------------- +global + maxconn 20000 + log /dev/log local0 info + chroot /var/lib/haproxy + pidfile /var/run/haproxy.pid + user haproxy + group haproxy + daemon + + # turn on stats unix socket + stats socket /var/lib/haproxy/stats + +#--------------------------------------------------------------------- +# common defaults that all the 'listen' and 'backend' sections will +# use if not designated in their block +#--------------------------------------------------------------------- +defaults + mode http + log global + option httplog + option dontlognull + option http-server-close + option forwardfor except 127.0.0.0/8 + option redispatch + retries 3 + timeout http-request 10s + timeout queue 1m + timeout connect 10s + timeout client 300s + timeout server 300s + timeout http-keep-alive 10s + timeout check 10s + maxconn 20000 + +listen stats + bind :9000 + mode http + stats enable + stats uri / + +frontend ocp4_k8s_api_fe + bind :6443 + default_backend ocp4_k8s_api_be + mode tcp + option tcplog + +backend ocp4_k8s_api_be + balance roundrobin + mode tcp + server bootstrap OKD4_BOOTSTRAP_SERVER_IP:6443 check + server master0 OKD4_CONTROL_PLANE_0_IP:6443 check + server master1 OKD4_CONTROL_PLANE_1_IP:6443 check + server master2 OKD4_CONTROL_PLANE_2_IP:6443 check + +frontend ocp4_machine_config_server_fe + bind :22623 + default_backend ocp4_machine_config_server_be + mode tcp + option tcplog + +backend ocp4_machine_config_server_be + balance roundrobin + mode tcp + server bootstrap OKD4_BOOTSTRAP_SERVER_IP:22623 check + server master0 OKD4_CONTROL_PLANE_0_IP:22623 check + server master1 OKD4_CONTROL_PLANE_1_IP:22623 check + server master2 OKD4_CONTROL_PLANE_2_IP:22623 check + +frontend ocp4_http_ingress_traffic_fe + bind :80 + default_backend ocp4_http_ingress_traffic_be + mode tcp + option tcplog + +backend ocp4_http_ingress_traffic_be + balance roundrobin + mode tcp + server infra0 OKD4_INFRA_NODE_0_IP:80 check + server infra1 OKD4_INFRA_NODE_1_IP:80 check + +frontend ocp4_https_ingress_traffic_fe + bind :443 + default_backend ocp4_https_ingress_traffic_be + mode tcp + option tcplog + +backend ocp4_https_ingress_traffic_be + balance roundrobin + mode tcp + server infra0 OKD4_INFRA_NODE_0_IP:443 check + server infra1 OKD4_INFRA_NODE_1_IP:443 check +``` +Replace `OKD4_BOOTSTRAP_SERVER_IP`, `OKD4_CONTROL_PLANE_0_IP`, `OKD4_CONTROL_PLANE_1_IP`, `OKD4_CONTROL_PLANE_2_IP`, `OKD4_INFRA_NODE_0_IP` and `OKD4_INFRA_NODE_1_IP` with the IPs of your cluster. + +As described [above](#assumptions), in this example the pools `ocp4_http_ingress_traffic_be` and `ocp4_https_ingress_traffic_be` will balance on the two infra nodes indentified as `infra0` and `infra1`. +If you're not going to provision two separate infra nodes, ensure that those pools will balance the compute nodes instead. + +### Configure SELinux to allow non-standard port binding +Since HAProxy is going to bind itself to non-standard ports like 6443 and 22623, SELinux needs to be configured to allow such configurations. +``` +$ sudo setsebool -P haproxy_connect_any on +``` +### Starting HAProxy +Now that SELinux allows HAProxy to bind to non-standard ports, you have to start it service. 
+``` +$ sudo systemctl start haproxy +``` +Inquiry the service status should show something like: +``` +$ systemctl status haproxy +● haproxy.service - HAProxy Load Balancer + Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled) + Active: active (running) since dom 2019-12-29 01:44:51 CET; 1 day 13h ago + Main PID: 20458 (haproxy-systemd) + Tasks: 3 + CGroup: /system.slice/haproxy.service + ├─20458 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid + ├─20459 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds + └─20460 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds + +dic 29 01:44:51 localhost haproxy-systemd-wrapper[20458]: [WARNING] 362/014451 (20459) : config : 'option forwardfor' ignored for frontend 'ocp4_https_ingress...TTP mode. +dic 29 01:44:51 localhost haproxy-systemd-wrapper[20458]: [WARNING] 362/014451 (20459) : config : 'option forwardfor' ignored for backend 'ocp4_https_ingress_...TTP mode. +dic 29 01:44:51 localhost haproxy[20459]: Proxy ocp4_k8s_api_fe started. +dic 29 01:44:51 localhost haproxy[20459]: Proxy ocp4_k8s_api_be started. +dic 29 01:44:51 localhost haproxy[20459]: Proxy ocp4_machine_config_server_fe started. +dic 29 01:44:51 localhost haproxy[20459]: Proxy ocp4_machine_config_server_be started. +dic 29 01:44:51 localhost haproxy[20459]: Proxy ocp4_http_ingress_traffic_fe started. +dic 29 01:44:51 localhost haproxy[20459]: Proxy ocp4_http_ingress_traffic_be started. +dic 29 01:44:51 localhost haproxy[20459]: Proxy ocp4_https_ingress_traffic_fe started. +dic 29 01:44:51 localhost haproxy[20459]: Proxy ocp4_https_ingress_traffic_be started. +Hint: Some lines were ellipsized, use -l to show in full. +``` + +### Configure the pools after bootstrapping +When the cluster is correctly deployed and the bootstrap node can be turned off, comment the lines related to the node that in the [above](#configure-the-pools-for-bootstrapping) configuration example is called `bootstrap`. +Then restart the service to activate the new configuration: +``` +$ sudo systemctl restart haproxy +``` + +### Configure firewall +If your server is exposed to internet, like a rented dedicated server, you can restrict the access to some ports in order to let the API, such as the Kubernetes' and the MachingConfig's, to be reachable only from the internal cluster network, through some firewall rules, like the following example: +``` +$ sudo firewall-cmd --add-rich-rule='rule family="ipv4" source address="LIBVIRT_OKD_SUBNET" port port="6443" protocol="tcp" accept' --permanent +$ sudo firewall-cmd --add-rich-rule='rule family="ipv4" source address="LIBVIRT_OKD_SUBNET" port port="22623" protocol="tcp" accept' --permanent +$ sudo firewall-cmd --reload +``` + From faa858e5b48a5c04b2ad3d3dda557a569462ac4d Mon Sep 17 00:00:00 2001 From: Jaime Magiera Date: Sat, 28 Mar 2020 16:36:41 -0400 Subject: [PATCH 7/8] Changed link for 'user-provisioned infrastructure' to point to OKD docs. Fixed 'Requirements' to point to local requirements file. 
--- Guides/UPI/vSphere_govc/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Guides/UPI/vSphere_govc/README.md b/Guides/UPI/vSphere_govc/README.md index 2ccdd77..a7bf771 100644 --- a/Guides/UPI/vSphere_govc/README.md +++ b/Guides/UPI/vSphere_govc/README.md @@ -80,7 +80,7 @@ govc vm.disk.change -vm $vm -disk.label "Hard disk 1" -size 120G ``` ### Configure DNS, DHCP and LB -The installation requires specific configuration of DNS and a load balancer. The requirements are listed in the official Openshift documentation: [Creating the user-provisioned infrastructure](https://docs.openshift.com/container-platform/4.2/installing/installing_vsphere/installing-vsphere.html#installation-infrastructure-user-infra_installing-vsphere). Example configurations are available at [requirements](/Documentation/UPI/Requirements) +The installation requires specific configuration of DNS and a load balancer. The requirements are listed in the official Openshift documentation: [Creating the user-provisioned infrastructure](https://docs.okd.io/latest/installing/installing_vsphere/installing-vsphere.html#installation-infrastructure-user-infra_installing-vsphere). Example configurations are available at [requirements](Guides/UPI/vSphere_govc/Requirements) You will also need working DHCP on the network the cluster hosts are connected to. The DHCP server should assign the hosts unique FQDNs. From 65b2853524e5ad7bcabdd29b1ed45c0b53422bff Mon Sep 17 00:00:00 2001 From: Jaime Magiera Date: Sat, 28 Mar 2020 16:45:23 -0400 Subject: [PATCH 8/8] missed a slash --- Guides/UPI/vSphere_govc/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Guides/UPI/vSphere_govc/README.md b/Guides/UPI/vSphere_govc/README.md index a7bf771..3dfd226 100644 --- a/Guides/UPI/vSphere_govc/README.md +++ b/Guides/UPI/vSphere_govc/README.md @@ -80,7 +80,7 @@ govc vm.disk.change -vm $vm -disk.label "Hard disk 1" -size 120G ``` ### Configure DNS, DHCP and LB -The installation requires specific configuration of DNS and a load balancer. The requirements are listed in the official Openshift documentation: [Creating the user-provisioned infrastructure](https://docs.okd.io/latest/installing/installing_vsphere/installing-vsphere.html#installation-infrastructure-user-infra_installing-vsphere). Example configurations are available at [requirements](Guides/UPI/vSphere_govc/Requirements) +The installation requires specific configuration of DNS and a load balancer. The requirements are listed in the official Openshift documentation: [Creating the user-provisioned infrastructure](https://docs.okd.io/latest/installing/installing_vsphere/installing-vsphere.html#installation-infrastructure-user-infra_installing-vsphere). Example configurations are available at [requirements](/Guides/UPI/vSphere_govc/Requirements) You will also need working DHCP on the network the cluster hosts are connected to. The DHCP server should assign the hosts unique FQDNs.