diff --git a/Documentation/k8s-install-openshift-okd.rst b/Documentation/k8s-install-openshift-okd.rst
new file mode 100644
index 0000000..f9622ab
--- /dev/null
+++ b/Documentation/k8s-install-openshift-okd.rst
@@ -0,0 +1,243 @@
*****************************
Installation on OpenShift OKD
*****************************

OpenShift Requirements
======================

1. Choose a preferred cloud provider. This guide was tested with AWS, Azure, and GCP
   from a Linux host.

2. Read the `OpenShift documentation `_ to learn about provider-specific prerequisites.

3. `Get OpenShift Installer `_.

.. note::

   It is highly recommended to read the OpenShift documentation, unless you have
   installed OpenShift in the past. Here are a few notes that you may find
   useful.

   - With the AWS provider, ``openshift-install`` will not work properly
     when MFA credentials are stored in ``~/.aws/credentials``; traditional credentials are required.
   - With the Azure provider, ``openshift-install`` will prompt for
     credentials and store them in ``~/.azure/osServicePrincipal.json``; it
     does not simply pick up ``az login`` credentials. It is recommended to
     set up a dedicated service principal and use it.
   - With the GCP provider, ``openshift-install`` will only work with a service
     account key, which has to be set using the ``GOOGLE_CREDENTIALS``
     environment variable (e.g. ``GOOGLE_CREDENTIALS=service-account.json``).
     Follow the `OpenShift Installer documentation `_
     to assign the required roles to your service account.

Create an OpenShift OKD Cluster
===============================

First, set the cluster name:

.. code-block:: shell-session

   CLUSTER_NAME="cluster-1"

Now, create the configuration files:

.. note::

   The sample output below shows the AWS provider, but
   the steps work the same way with other providers.

.. code-block:: shell-session

   $ openshift-install create install-config --dir "${CLUSTER_NAME}"
   ? SSH Public Key ~/.ssh/id_rsa.pub
   ? Platform aws
   INFO Credentials loaded from default AWS environment variables
   ? Region eu-west-1
   ? Base Domain openshift-test-1.cilium.rocks
   ? Cluster Name cluster-1
   ? Pull Secret [? for help] **********************************

Then set ``networkType: Cilium``:

.. code-block:: shell-session

   sed -i "s/networkType: .*/networkType: Cilium/" "${CLUSTER_NAME}/install-config.yaml"

The resulting configuration will look like this:

.. code-block:: yaml

   apiVersion: v1
   baseDomain: ilya-openshift-test-1.cilium.rocks
   compute:
   - architecture: amd64
     hyperthreading: Enabled
     name: worker
     platform: {}
     replicas: 3
   controlPlane:
     architecture: amd64
     hyperthreading: Enabled
     name: master
     platform: {}
     replicas: 3
   metadata:
     creationTimestamp: null
     name: cluster-1
   networking:
     clusterNetwork:
     - cidr: 10.128.0.0/14
       hostPrefix: 23
     machineNetwork:
     - cidr: 10.0.0.0/16
     networkType: Cilium
     serviceNetwork:
     - 172.30.0.0/16
   platform:
     aws:
       region: eu-west-1
   publish: External
   pullSecret: '{"auths":{"fake":{"auth": "bar"}}}'
   sshKey: |
     ssh-rsa

You may wish to make a few changes, e.g. increase the number of nodes.

If you do change any of the CIDRs, you will need to make sure that the Helm values in ``${CLUSTER_NAME}/manifests/cluster-network-07-cilium-ciliumconfig.yaml``
reflect those changes. Namely, ``clusterNetwork`` should match ``ipv4NativeRoutingCIDR``, ``clusterPoolIPv4PodCIDRList``, and ``clusterPoolIPv4MaskSize``, as illustrated below.
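For illustration, a minimal sketch of how those values could be set in the
``CiliumConfig`` object is shown below. It assumes the ``CiliumConfig`` CRD
shipped with the ``olm-for-cilium`` manifests, which embeds Helm values under
``spec``; the CIDR and mask-size values are only examples matching the default
``install-config.yaml`` above, and the file generated in
``${CLUSTER_NAME}/manifests`` may contain additional values that should be
preserved.

.. code-block:: yaml

   apiVersion: cilium.io/v1alpha1
   kind: CiliumConfig
   metadata:
     name: cilium
     namespace: cilium
   spec:
     # should match the clusterNetwork CIDR from install-config.yaml
     ipv4NativeRoutingCIDR: "10.128.0.0/14"
     ipam:
       mode: cluster-pool
       operator:
         # pod CIDRs handed out to nodes; keep these within clusterNetwork
         clusterPoolIPv4PodCIDRList:
           - "10.128.0.0/14"
         # should match the clusterNetwork hostPrefix
         clusterPoolIPv4MaskSize: 23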
Also make sure that the ``clusterNetwork`` does not conflict with ``machineNetwork`` (which represents the VPC CIDR in AWS).

.. warning::

   Ensure that there are multiple ``controlPlane`` replicas. A single
   ``controlPlane`` replica will cause the cluster bootstrap to fail during
   installation.

Next, generate the OpenShift manifests:

.. code-block:: shell-session

   openshift-install create manifests --dir "${CLUSTER_NAME}"

Next, obtain the Cilium manifests for the target installation version from the
``isovalent/olm-for-cilium`` repository and copy them to ``${CLUSTER_NAME}/manifests``:

.. parsed-literal::

   cilium_version="replace_me"
   git_dir="/tmp/cilium-olm"

   git clone https://github.com/isovalent/olm-for-cilium.git ${git_dir}
   cp ${git_dir}/manifests/cilium.v${cilium_version}/* "${CLUSTER_NAME}/manifests"

   test -d ${git_dir} && rm -rf -- ${git_dir}

At this stage, the manifests directory contains everything needed to install Cilium.
To get a list of the Cilium manifests, run:

.. code-block:: shell-session

   ls ${CLUSTER_NAME}/manifests/cluster-network-*-cilium-*

You can set any custom Helm values by editing ``${CLUSTER_NAME}/manifests/cluster-network-07-cilium-ciliumconfig.yaml``.

It is also possible to update Helm values once the cluster is running by
changing the ``CiliumConfig`` object, e.g. with ``kubectl edit ciliumconfig -n cilium cilium``.
You may need to restart the Cilium agent pods for certain options to take effect.

Create the cluster:

.. note::

   The sample output below shows the AWS provider, but
   the steps work the same way with other providers.

.. code-block:: shell-session

   $ openshift-install create cluster --dir "${CLUSTER_NAME}"
   INFO Consuming OpenShift Install (Manifests) from target directory
   INFO Consuming Master Machines from target directory
   INFO Consuming Worker Machines from target directory
   INFO Consuming Openshift Manifests from target directory
   INFO Consuming Common Manifests from target directory
   INFO Credentials loaded from the "default" profile in file "/home/twp/.aws/credentials"
   INFO Creating infrastructure resources...
   INFO Waiting up to 20m0s for the Kubernetes API at https://api.cluster-name.ilya-openshift-test-1.cilium.rocks:6443...
   INFO API v1.20.0-1058+7d0a2b269a2741-dirty up
   INFO Waiting up to 30m0s for bootstrapping to complete...
   INFO Destroying the bootstrap resources...
   INFO Waiting up to 40m0s for the cluster at https://api.cluster-name.ilya-openshift-test-1.cilium.rocks:6443 to initialize...
   INFO Waiting up to 10m0s for the openshift-console route to be created...
   INFO Install complete!
   INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/twp/okd/cluster-name/auth/kubeconfig'
   INFO Access the OpenShift web-console here: https://console-openshift-console.apps.cluster-name.ilya-openshift-test-1.cilium.rocks
   INFO Login to the console with user: "kubeadmin", and password: ""
   INFO Time elapsed: 32m9s

Accessing the cluster
---------------------

To access the cluster you will need to use the ``kubeconfig`` file from the ``${CLUSTER_NAME}/auth`` directory:

.. code-block:: shell-session

   export KUBECONFIG="${CLUSTER_NAME}/auth/kubeconfig"

Prepare cluster for Cilium connectivity test
--------------------------------------------

In order for the Cilium connectivity test pods to run on OpenShift, a simple custom ``SecurityContextConstraints``
object is required.
It allows the ``hostPort``/``hostNetwork`` usage that some of the connectivity test pods rely on;
it sets only ``allowHostPorts`` and ``allowHostNetwork`` without granting any other privileges.

.. code-block:: shell-session

   kubectl apply -f - <<EOF
   apiVersion: security.openshift.io/v1
   kind: SecurityContextConstraints
   metadata:
     name: cilium-test
   allowHostPorts: true
   allowHostNetwork: true
   users:
     - system:serviceaccount:cilium-test:default
   priority: null
   readOnlyRootFilesystem: false
   runAsUser:
     type: MustRunAsRange
   seLinuxContext:
     type: MustRunAs
   volumes: null
   allowHostDirVolumePlugin: false
   allowHostIPC: false
   allowHostPID: false
   allowPrivilegeEscalation: false
   allowPrivilegedContainer: false
   allowedCapabilities: null
   defaultAddCapabilities: null
   requiredDropCapabilities: null
   groups: null
   EOF

diff --git a/README.md b/README.md
--- a/README.md
+++ b/README.md
 > **Note**
 > This repository is a fork of the original repository hosted at [cilium/cilium-olm](https://github.com/cilium/cilium-olm), which has been deprecated.

-> NOTE: this documentation is for Cilium maintainers, the user guide for OpenShift is part of [Cilium documentation][okd-gsg]
-
 This repository contains Cilium packaging for OpenShift, which is centred around [Operator Lifecycle Management APIs (OLM)][olm].

 ## Key Components

@@ -62,7 +60,6 @@
 The metadata bundle image contains just YAML files and no software as such, howe

 [rhpc-projects]: https://connect.redhat.com/projects
 [GitHub Actions]: ../../actions/workflows/ci.yaml
-[okd-gsg]: https://docs.cilium.io/en/v1.10/gettingstarted/k8s-install-openshift-okd
 [olm]: https://docs.openshift.com/container-platform/4.7/operators/understanding/olm/olm-understanding-olm.html
 [kuegen]: https://github.com/errordeveloper/kuegen