This repository contains instructions and files for a demonstration deployment that shows what a CI/CD workflow for `VirtualMachines` across multiple clusters could look like. It works with Open Cluster Management, ArgoCD and KubeVirt, or with Red Hat Advanced Cluster Management, OpenShift GitOps and OpenShift Virtualization.
Open Cluster Management (OCM) or Red Hat Advanced Cluster Management (ACM) simplifies the management of multiple clusters by offering end-to-end management, visibility and control of the whole cluster and application life cycle. It acts as a central point for keeping an inventory of all your clusters and applications and enables multi-cluster and multi-cloud scenarios, such as deploying the same application across clusters in different regions, possibly on several cloud providers. It uses a hub-and-spoke architecture and allows the targeted distribution of Kubernetes manifests across clusters.
The hub cluster is the cluster on which OCM/ACM is running. It acts as an inventory and carries out all management actions. It usually does not run any actual workloads (though this is still possible); these run on managed clusters. Managed clusters are kept in the inventory of the hub cluster. Existing clusters can be added to the inventory, and with ACM they can also be created directly. The terms Open Cluster Management and Advanced Cluster Management might be used interchangeably in the following sections. For more information have a look at the OCM documentation.
The GitOps way uses Git repositories as a single source of truth to deliver infrastructure as code. Automation is employed to keep the desired and the live state of clusters in sync at all times. This means any change to a repository is automatically applied to one or more clusters while changes to a cluster will be automatically reverted to the state described in the single source of truth.
ArgoCD or Red Hat OpenShift GitOps enables declarative GitOps workflows and allows deploying applications on demand. It monitors the live state of clusters against the desired state in a Git repository and keeps them in sync. The terms ArgoCD and OpenShift GitOps might be used interchangeably in the following sections. For more information have a look at the ArgoCD documentation.
The ArgoCD `Application` is a `CustomResourceDefinition` (CRD) which essentially describes a source of manifests and a target cluster to apply the manifests to. Besides that, options like the automatic creation of namespaces or the automatic reversion of changes can be configured.
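A minimal sketch of such an `Application`, assuming a placeholder repository, path and namespace (not taken from this repository):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/example-repo.git  # placeholder repository
    targetRevision: main
    path: manifests  # placeholder path
  destination:
    server: https://kubernetes.default.svc  # the cluster to apply the manifests to
    namespace: demo
  syncPolicy:
    automated:
      prune: true     # undo changes that are not in Git
      selfHeal: true  # revert manual changes on the cluster
    syncOptions:
      - CreateNamespace=true  # automatic creation of the namespace
```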
The ArgoCD `ApplicationSet` is a CRD building on ArgoCD `Applications`, targeted at deploying and managing `Applications` across multiple clusters while using the same manifest or declaration. It is possible to deploy multiple `ApplicationSets` contained in one monorepo. By using generators it is possible to dynamically select a subset of the clusters available to ArgoCD to deploy resources to.
In this demo we are going to use `ApplicationSets` to deploy KubeVirt or OpenShift Virtualization and `VirtualMachines` to multiple clusters while using the same declaration of resources for all clusters. For more information on `ApplicationSets` see the documentation.
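Here is a hedged sketch of what such an `ApplicationSet` could look like; the name, repository URL and path are illustrative, not the actual manifests in this repository. The cluster generator produces one `Application` per cluster known to ArgoCD:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: demo
  namespace: argocd
spec:
  generators:
    - clusters: {}  # one set of parameters per cluster available to ArgoCD
  template:
    metadata:
      name: 'demo-{{name}}'  # {{name}} is provided by the generator
    spec:
      project: default
      source:
        repoURL: https://github.com/example/example-repo.git  # placeholder repository
        targetRevision: main
        path: manifests  # placeholder path
      destination:
        server: '{{server}}'  # API server URL of the generated cluster
        namespace: demo
      syncPolicy:
        automated: {}
```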
The following requirements need to be satisfied to build the setup described in this demo yourself:
- A Git repository accessible by the hub cluster
- One Kubernetes or OpenShift cluster acting as hub cluster
  - Needs to be publicly accessible or at least accessible by the managed clusters
- One or more Kubernetes or OpenShift clusters acting as managed clusters
  - Can be in private networks
  - Virtualization has to be available
    - Nested virtualization is fine for demonstration purposes
You need to clone this repository to somewhere where you are able to make changes to it (e.g. by forking it on GitHub). Then open a terminal on your machine, check out the repository locally and change your working directory into the cloned repository.
The `ApplicationSets` in this repository use the URL of this repository as `repoURL`. To be able to make changes to your `ApplicationSets`, you need to adjust the `repoURL` to the URL of your own repository. If you do this later, do not forget to update any existing `ApplicationSets` on your hub cluster.
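The field in question lives in the `Application` template of each `ApplicationSet`; a hypothetical excerpt:

```yaml
spec:
  template:
    spec:
      source:
        repoURL: https://github.com/<your-user>/<your-fork>.git  # change this to your own repository
```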
The following sections will cover the Kubernetes-specific part of this demo.
See the OCM documentation on how to install Open Cluster Management.
Make sure to install `multicloud-integrations` for the integration with ArgoCD too.
See the OCM documentation on how to add managed clusters to OCM.
Managed clusters can be grouped into `ManagedClusterSets`. These sets can be bound to namespaces with a `ManagedClusterSetBinding` to make managed clusters available in the bound namespaces.
To add managed clusters to a new set see the OCM documentation. Make sure to name your set `managed` for compatibility with the manifests in this repository.
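As a hedged sketch (API versions may differ between OCM releases), a `ManagedClusterSet` is a cluster-scoped resource, and a `ManagedCluster` joins it via a label:

```yaml
apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSet
metadata:
  name: managed
---
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: cluster1  # placeholder cluster name
  labels:
    cluster.open-cluster-management.io/clusterset: managed  # adds the cluster to the set
spec:
  hubAcceptsClient: true
```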
Now we have a `ManagedClusterSet` that can be used to make the managed clusters available to ArgoCD.
See the ArgoCD documentation on how to install ArgoCD.
The following sections will cover the OpenShift-specific part of this demo.
- Login as cluster administrator on the UI of the hub cluster
- Open the `Administrator` view if it is not already selected
- In the menu click on `Operators` and open `OperatorHub`
- In the search type `Advanced Cluster Management for Kubernetes` and click on it in the results
- Click on `Install`, keep the defaults and click on `Install` again
- Wait until the `MultiClusterHub` can be created and create it
- Wait until the created `MultiClusterHub` is ready (`Operators` --> `Installed Operators` --> see status of ACM)
Managed clusters can be added to ACM in two ways:
- Create a new cluster with ACM
- Add an existing cluster to ACM
Note: For the sake of simplicity we will let ACM create the managed clusters on a public cloud provider in this blog post. Please note that nested virtualization is not supported in production deployments.
To create one or more managed clusters follow these steps:
- Login as cluster administrator on the UI of the hub cluster
- At the top of the menu select `All Clusters` (`local-cluster` should be selected initially)
- Add credentials for your cloud provider by clicking on `Credentials` in the menu and then clicking on `Add credentials`
- Click on `Infrastructure` and then on `Clusters` in the menu
- Click `Create cluster`, select your cloud provider and complete the wizard (use the default cluster set for now)
Note: When using Azure as cloud provider, select instance type `Standard_D8s_v3` for the control plane and `Standard_D4s_v3` for the worker nodes; otherwise resources might become too tight to run virtual machines on the cluster.
Managed clusters can be grouped into `ManagedClusterSets`. These sets can be bound to namespaces with a `ManagedClusterSetBinding` to make managed clusters available in the bound namespaces.
To add managed clusters to a new set follow these steps:
- Login as cluster administrator on the UI of the hub cluster
- At the top of the menu select `All Clusters` (`local-cluster` should be selected initially)
- Click on `Infrastructure` and then on `Clusters` in the menu
- Click on `Cluster sets` and then on `Create cluster set`
- Enter `managed` as the name for the new set and click on `Create`
- Click on `Managed resource assignments`
- Select all clusters you want to add, click on `Review` and then on `Save`
Now we have a `ManagedClusterSet` that can be used to make the managed clusters available to ArgoCD.
If done correctly, the cluster list of the created `ManagedClusterSet` in ACM should look like the screenshot above.
- Login as cluster administrator on the UI of the hub cluster
- Open the `Administrator` view if it is not already selected
- In the menu click on `Operators` and open `OperatorHub`
- In the search type `Red Hat OpenShift GitOps` and click on it in the results
- Click on `Install`, keep the defaults and click on `Install` again
- Wait until OpenShift GitOps is ready (`Operators` --> `Installed Operators` --> see status of OpenShift GitOps)
If installed correctly, the list of installed operators on your cluster should look like the following screenshot:
The OpenShift GitOps web UI is exposed with a `Route`. To get the exact URL of the `Route` follow these steps:
- Login as cluster administrator on the UI of the hub cluster
- Open the `Administrator` view if it is not already selected
- In the menu click on `Networking` and open `Routes`
- In the `Projects` drop down select `openshift-gitops` (enable `Show default projects` if not visible)
- There will be a `Route` called `openshift-gitops-server`; the location of this `Route` is the URL to the GitOps UI
- You can log in to the GitOps UI with your OpenShift credentials
Alternatively you can use the command line to get the URL to the GitOps UI with the following command:
```shell
oc get route -n openshift-gitops openshift-gitops-server -o jsonpath='{.spec.host}'
```
To make a set of managed clusters available to ArgoCD or OpenShift GitOps, a tight integration between OCM and ArgoCD or ACM and GitOps exists. The integration is controlled with the `GitOpsCluster` CRD (a hedged sketch of the three involved resources follows the list below).
Follow these steps to make the managed clusters available to ArgoCD or GitOps:
- Make sure you are logged in to your hub cluster on the CLI (e.g. copy the login command from the web console and run it in your terminal)
- Create a `ManagedClusterSetBinding` in the `argocd` or `openshift-gitops` namespace to make the `ManagedClusterSet` available in this namespace
  - See file managedclustersetbinding.yaml
  - On Kubernetes run `kubectl create -n argocd -f acm-gitops-integration/managedclustersetbinding.yaml`
  - On OpenShift run `oc create -n openshift-gitops -f acm-gitops-integration/managedclustersetbinding.yaml`
- Create a `Placement` to let OCM/ACM decide which clusters should be made available to GitOps
  - See file placement.yaml
  - On Kubernetes run `kubectl create -n argocd -f acm-gitops-integration/placement.yaml`
  - On OpenShift run `oc create -n openshift-gitops -f acm-gitops-integration/placement.yaml`
  - For the sake of simplicity this will select the whole `ManagedClusterSet`, but advanced use cases are possible
- Create a `GitOpsCluster` to finally make the selected clusters available to GitOps on the hub cluster
  - For Kubernetes see file k8s-gitopscluster.yaml
  - On Kubernetes run `kubectl create -f acm-gitops-integration/k8s-gitopscluster.yaml`
  - For OpenShift see file gitopscluster.yaml
  - On OpenShift run `oc create -f acm-gitops-integration/gitopscluster.yaml`
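For orientation, here is a hedged sketch of what these three resources could look like on OpenShift; the files in `acm-gitops-integration/` are authoritative, and API versions may differ between releases:

```yaml
apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSetBinding
metadata:
  name: managed
  namespace: openshift-gitops
spec:
  clusterSet: managed  # binds the set to this namespace
---
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: all-managed-clusters  # placeholder name
  namespace: openshift-gitops
spec:
  clusterSets:
    - managed  # selects the whole set, no further predicates
---
apiVersion: apps.open-cluster-management.io/v1beta1
kind: GitOpsCluster
metadata:
  name: gitops-cluster  # placeholder name
  namespace: openshift-gitops
spec:
  argoServer:
    argoNamespace: openshift-gitops  # namespace of the ArgoCD/GitOps instance
    cluster: local-cluster
  placementRef:
    apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: Placement
    name: all-managed-clusters  # clusters selected here are imported into GitOps
```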
In this screenshot you can see that the managed clusters were made available to ArgoCD successfully. This view can be opened by going to ArgoCD's settings and opening the `Clusters` menu. Until an `Application` is deployed to a cluster, its connection status may still be `Unknown`.
In our setup we assign managed clusters to specific environments by setting a label on them. Ideally it would be possible to assign them from OCM/ACM, but for the time being this still has to be done in ArgoCD. In an upcoming OCM/ACM release it will be possible to carry over labels set in OCM/ACM to ArgoCD.
See this PR for details.
In this post we will work with the `dev` and the `prod` environments. Add your managed clusters to the environments by following these steps: Open ArgoCD's settings and open the `Clusters` menu. Then click on the three dots on the right side of a cluster to edit it. After editing the cluster, do not forget to save your changes.
One or more of the clusters should belong to the `dev` environment. This is achieved by setting the `env` label to the value `dev` on the managed cluster. One or more of the clusters should belong to the `prod` environment. This is achieved by setting the `env` label to the value `prod` on the managed cluster.
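Under the hood ArgoCD stores each cluster as a `Secret` in its own namespace, so the UI edit roughly corresponds to adding a label to that `Secret` (a hedged sketch; the secret name is a placeholder):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cluster1-cluster-secret  # placeholder, created by the OCM/ACM integration
  namespace: openshift-gitops
  labels:
    argocd.argoproj.io/secret-type: cluster
    env: dev  # assigns this cluster to the dev environment
```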
To deploy KubeVirt or OpenShift Virtualization to the managed clusters with the help of an `ApplicationSet`, run the following commands from your cloned repository (see Repository preparation):
On Kubernetes run the following commands:
```shell
kubectl create -f applicationsets/cdi/applicationset-cdi.yaml
kubectl create -f applicationsets/kubevirt/applicationset-kubevirt.yaml
```
On OpenShift run the following command:
```shell
oc create -f applicationsets/virtualization/applicationset-virtualization.yaml
```
This will create an `Application` for each managed cluster that deploys KubeVirt or OpenShift Virtualization with its default settings. The `Application` will ensure that the appropriate namespaces exist, and it will automatically apply any changes to this repository or undo changes which are not in this repository. Sync waves are used to ensure that resources are created in the right order.
On Kubernetes the sync waves create resources in this order:

1. `Namespace`, `CustomResourceDefinition`, `ClusterRole`
2. `ServiceAccount`, `Role`, `ConfigMap`
3. `ClusterRoleBinding`, `RoleBinding`
4. `Deployment`
5. `CDI` CR, `KubeVirt` CR

On OpenShift the order is:

1. `OperatorGroup`
2. `Subscription`
3. `HyperConverged`
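The order is enforced with the `argocd.argoproj.io/sync-wave` annotation: resources in lower waves must be applied and healthy before higher waves start. A minimal sketch:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: cdi  # placeholder, created in the first wave
  annotations:
    argocd.argoproj.io/sync-wave: "1"
```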
Because the `HyperConverged` CRD is unknown to ArgoCD, the sync option `SkipDryRunOnMissingResource=true` is set to allow ArgoCD to create a CR without knowing its CRD.
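The option is set per resource via an annotation; a hedged sketch with the `HyperConverged` spec reduced to its defaults:

```yaml
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
  annotations:
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true  # create the CR even though its CRD is unknown to ArgoCD
    argocd.argoproj.io/sync-wave: "3"  # applied after OperatorGroup and Subscription
spec: {}
```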
There is only one update channel for OpenShift Virtualization (called `stable`), so the appropriate version for the managed cluster is selected automatically.
To force a specific version from the channel do the following:
- Make sure that `grpcurl` and `jq` are available on your machine
- Extract the available `CSV` versions from the Operator registry
  - Login to the command line of the managed cluster
  - Run `oc port-forward service/redhat-operators -n openshift-marketplace 50051:50051`
  - In a separate terminal run `grpcurl -plaintext localhost:50051 api.Registry/ListBundles | jq 'select(.csvName | match ("kubevirt-hyperconverged-operator")) | .version'`
- Set the following fields in the `Subscription` spec (as sketched below)
  - `installPlanApproval`: `Manual`
  - `startingCSV`: Your desired and available CSV version
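A hedged sketch of the resulting `Subscription`; the `startingCSV` value is a placeholder and must be one of the versions reported by the registry query above:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  channel: stable
  name: kubevirt-hyperconverged
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual  # upgrades wait for manual approval
  startingCSV: kubevirt-hyperconverged-operator.v4.x.y  # placeholder, use an available CSV version
```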
This technique can for example be used to control the upgrade process of OpenShift Virtualization in a declarative way.
In ArgoCD's UI you can follow the synchronization status of the newly created `Application` for each cluster. Eventually every `Application` will reach the healthy and synced status as in the following screenshot.
To see what is actually deployed, have a look into the following directories: On Kubernetes look into `applicationsets/cdi/manifests` and `applicationsets/kubevirt/manifests`. On OpenShift look into `applicationsets/virtualization/manifests`.
To deploy a Fedora `VirtualMachine` on all managed clusters with the help of an `ApplicationSet`, run the following command from your cloned repository (see Repository preparation):
On Kubernetes run the following command:
```shell
kubectl create -n argocd -f applicationsets/demo-vm/applicationset-demo-vm.yaml
```
On OpenShift run the following command:
```shell
oc create -n openshift-gitops -f applicationsets/demo-vm/applicationset-demo-vm.yaml
```
This will create an `Application` for each managed cluster that deploys a simple `VirtualMachine` on each cluster. It uses the Fedora `DataSource` available on the OpenShift cluster by default to boot a Fedora cloud image.
On Kubernetes please create the `DataSource` with the following command:

```shell
kubectl create -f acm-gitops-integration/k8s-datasource.yaml
```
Notice how the health state of the created `Application` is `Suspended`. This is because the created `VirtualMachine` is still in the stopped state.
Instead of using plain manifests, this `ApplicationSet` uses Kustomize. This allows applying customizations to an `Application` depending on the environment a managed cluster belongs to. In this post it is achieved by using the `metadata.labels.env` value to choose the right Kustomize overlay.
The `dev` overlay prefixes the names of created resources with `dev-`, while the `prod` overlay prefixes names with `prod-`. Furthermore, the created `VirtualMachines` get more or less memory assigned depending on the environment. These are only simple customizations, but the possibilities are endless!
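A hedged sketch of how the `dev` overlay could look; the actual files live in `applicationsets/demo-vm/kustomize`, and the layout and values here are illustrative:

```yaml
# overlays/dev/kustomization.yaml (illustrative layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namePrefix: dev-  # resources are created as dev-<name>
patches:
  - target:
      kind: VirtualMachine
    patch: |-
      - op: replace
        path: /spec/template/spec/domain/resources/requests/memory
        value: 2Gi  # dev clusters get less memory than prod
```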
To see what is actually deployed, have a look into the following directory: `applicationsets/demo-vm/kustomize`.
Here is a quick summary of the required steps:
- Choose to modify all environments (`base`) or a single environment (e.g. `dev` or `prod`)
- To start the `VirtualMachine` in all environments edit `applicationsets/demo-vm/kustomize/base/virtualmachine.yaml`
  - Set `spec.running` to `true`
- Commit and push the change to your repository
- Refresh ArgoCD to pick up the change
The following sections will explain the steps in more detail.
First let us have a closer look at the `Application` of the stopped `VirtualMachine`. Notice the `Suspended` health state. Also notice the `dev-` prefix of the created `VirtualMachine`. It was created on a cluster belonging to the `dev` environment.
To start or stop a `VirtualMachine` you need to edit the `spec.running` field of the `VirtualMachine` and set it to the corresponding value (`true` or `false`). You can do this in the `applicationsets/demo-vm/kustomize` directory.
If the `VirtualMachine` has an appropriate termination grace period (`spec.template.spec.terminationGracePeriodSeconds`), setting `spec.running` to `false` will shut the `VirtualMachine` down gracefully. If the termination grace period is set to 0 seconds, however, the `VirtualMachine` is stopped immediately.
When modifying the `VirtualMachine` you can choose to modify either the base or a specific overlay of Kustomize. This allows starting or stopping the `VirtualMachine` in every environment or just in a specific one. In this example the `VirtualMachine` was started in every environment by modifying the Kustomize base.
To apply new changes with ArgoCD you need to commit and push changes to the Git repository containing your `Application`. To start or stop a `VirtualMachine` you have to update the manifest, then commit and push to your repository. In the ArgoCD UI select the `Application` of the `VirtualMachine` and click `Refresh` to apply the change immediately. Otherwise, it will take some time until ArgoCD scans the repository and picks up the change.
After ArgoCD has picked up the change, it will sync it to the `VirtualMachine`, as visible by the `Progressing` health state in the following screenshot:
Eventually the `VirtualMachine` will be running and healthy:
For the sake of simplicity the `Placement` created in this demo selects the whole `ManagedClusterSet`, but more advanced use cases are possible. OCM/ACM can dynamically select a subset of clusters from the `ManagedClusterSet` by following a defined set of criteria. This for example allows scheduling `VirtualMachines` on clusters with the most resources available at the time of the placement decision.
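As a hedged sketch, a `Placement` could prefer the cluster with the most allocatable memory by using one of OCM's built-in prioritizers (API version and prioritizer names may differ between releases):

```yaml
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: most-memory  # placeholder name
  namespace: openshift-gitops
spec:
  clusterSets:
    - managed
  numberOfClusters: 1  # pick only the best-scoring cluster
  prioritizerPolicy:
    mode: Exact
    configurations:
      - scoreCoordinate:
          builtIn: ResourceAllocatableMemory  # prefer clusters with the most free memory
        weight: 1
```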
For more on this topic see Using the Open Cluster Management Placement for Multicluster Scheduling.
An OCM/ACM add-on that deploys OpenShift Virtualization to managed clusters was implemented for evaluation purposes. The add-on is fully functional and can deploy OpenShift Virtualization to all managed clusters that have a specific label set.
Although the add-on serves the purpose of deploying OpenShift Virtualization, it was found to be unnecessarily complex when OpenShift GitOps is available too. The add-on only deploys a small set of static manifests, which could be deployed by GitOps as well.
Its benefit is outweighed by the additional maintenance burden and the resource usage of another container running on the cluster. Therefore, it was decided not to follow this path any further.
The add-on can be found here.
OCM/ACM can be integrated with Ansible AWX or Ansible Automation Controller to trigger Playbook runs after certain events. To make use of `VirtualMachines` in Ansible, a dynamic inventory is needed which makes the `VirtualMachines` available and accessible to Ansible.
There is already a collection of KubeVirt modules for Ansible; this collection, however, is deprecated and no longer working.
For evaluation purposes a fork was created. This fork provides limited functionality but shows that this type of integration is still possible. A demo of this Ansible collection can be found here.
A future use case is to pre-configure a `VirtualMachine` on a cluster and then export it into a blob format which can be stored somewhere it can be accessed from other clusters. The blob could then be imported into other clusters to allow the deployment of replicas of a pre-configured `VirtualMachine` across multiple clusters.
A possible blob format for this kind of export/import feature could be `ContainerDisks`, which are already supported by KubeVirt or OpenShift Virtualization.
To show that this is already possible with the current `ContainerDisk` implementation, a Proof-of-Concept was created. The PoC can be found here.
In this demo we set up a hub cluster and two clusters managed by OCM/ACM to deploy applications to from a centralized management point. As example applications we deployed KubeVirt or OpenShift Virtualization with simple manifests, and a virtual machine with manifests customized by Kustomize. We learned how to apply customizations to specific environments and how to start and stop virtual machines in a declarative way. All of this was accomplished the GitOps way, by using a Git repository as the single source of truth.
This is of course only the tip of the iceberg. Building on this setup allows you to customize your `ApplicationSets` for different environments like development, staging and production, or to schedule your applications based on custom criteria (e.g. available resources) with advanced placement rules.