diff --git a/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-1/asciicast.svg b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-1/asciicast.svg
new file mode 100644
index 0000000..b129b1c
--- /dev/null
+++ b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-1/asciicast.svg
@@ -0,0 +1,497 @@
diff --git a/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-1/baremetal.svg b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-1/baremetal.svg
new file mode 100644
index 0000000..08f964f
--- /dev/null
+++ b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-1/baremetal.svg
@@ -0,0 +1,60 @@
diff --git a/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-1/cloud.svg b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-1/cloud.svg
new file mode 100644
index 0000000..1605c38
--- /dev/null
+++ b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-1/cloud.svg
@@ -0,0 +1,67 @@
diff --git a/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-1/index.md b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-1/index.md
new file mode 100644
index 0000000..30fa032
--- /dev/null
+++ b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-1/index.md
@@ -0,0 +1,255 @@
+---
+layout: blog
+title: "DIY: Create Your Own Cloud with Kubernetes (Part 1)"
+slug: diy-create-your-own-cloud-with-kubernetes-part-1
+date: 2024-04-05T07:30:00+00:00
+---
+
+**Author**: Andrei Kvapil (Ænix)
+
+At Ænix, we have a deep affection for Kubernetes and dream that all modern technologies will soon
+start utilizing its remarkable patterns.
+
+Have you ever thought about building your own cloud? I bet you have. But is it possible to do this
+using only modern technologies and approaches, without leaving the cozy Kubernetes ecosystem?
+Our experience in developing Cozystack required us to delve deeply into it.
+
+You might argue that Kubernetes is not intended for this purpose, and that you could simply use
+OpenStack for bare metal servers and run Kubernetes inside it as intended. But by doing so, you
+would simply shift the responsibility from your hands to the hands of OpenStack administrators.
+This would add at least one more huge and complex system to your ecosystem.
+
+Why complicate things? After all, Kubernetes already has everything needed to run tenant
+Kubernetes clusters at this point.
+
+I want to share with you our experience in developing a cloud platform based on Kubernetes,
+highlighting the open-source projects that we use ourselves and believe deserve your attention.
+
+In this series of articles, I will tell you our story about how we prepare managed Kubernetes
+from bare metal using only open-source technologies: starting from the basic level of data
+center preparation, through running virtual machines, isolating networks, and setting up
+fault-tolerant storage, to provisioning full-featured Kubernetes clusters with dynamic volume
+provisioning, load balancers, and autoscaling.
+
+With this article, I start a series consisting of several parts:
+
+- **Part 1**: Preparing the groundwork for your cloud. Challenges faced during the preparation
+and operation of Kubernetes on bare metal and a ready-made recipe for provisioning infrastructure.
+- **Part 2**: Networking, storage, and virtualization. How to turn Kubernetes into a tool for
+launching virtual machines and what is needed for this.
+- **Part 3**: Cluster API and how to start provisioning Kubernetes clusters at the push of a
+button. How autoscaling works, dynamic provisioning of volumes, and load balancers.
+
+I will try to describe various technologies as independently as possible, but at the same time,
+I will share our experience and why we came to one solution or another.
+
+To begin with, let's understand the main advantage of Kubernetes and how it has changed the
+approach to using cloud resources.
+
+It is important to understand that the use of Kubernetes in the cloud and on bare metal differs.
+
+## Kubernetes in the cloud
+
+When you operate Kubernetes in the cloud, you don't worry about persistent volumes,
+cloud load balancers, or the process of provisioning nodes. All of this is handled by your cloud
+provider, who accepts your requests in the form of Kubernetes objects. In other words, the server
+side is completely hidden from you, and you don't really want to know how exactly the cloud
+provider implements it, as it's not in your area of responsibility.
+
+{{< figure src="cloud.svg" alt="A diagram showing cloud Kubernetes, with load balancing and storage done outside the cluster" caption="A diagram showing cloud Kubernetes, with load balancing and storage done outside the cluster" >}}
+
+Kubernetes offers convenient abstractions that work the same everywhere, allowing you to deploy
+your application on any Kubernetes in any cloud.
+
+In the cloud, you very commonly have several separate entities: the Kubernetes control plane,
+virtual machines, persistent volumes, and load balancers. Using these entities, you can create
+highly dynamic environments.
+
+Thanks to Kubernetes, virtual machines are now only seen as a utility entity for utilizing
+cloud resources. You no longer store data inside virtual machines. You can delete all your virtual
+machines at any moment and recreate them without breaking your application. The Kubernetes control
+plane will continue to hold information about what should run in your cluster. The load balancer
+will keep sending traffic to your workload, simply changing the endpoint to send traffic to a new
+node. And your data will be safely stored in external persistent volumes provided by the cloud.
+
+This approach is fundamental when using Kubernetes in clouds. The reason for it is quite obvious:
+the simpler the system, the more stable it is, and it is for this simplicity that you buy
+Kubernetes in the cloud.
+
+## Kubernetes on bare metal
+
+Using Kubernetes in the clouds is really simple and convenient, which cannot be said about bare
+metal installations. In the bare metal world, Kubernetes, on the contrary, becomes unbearably
+complex. Firstly, because the entire network, backend storage, cloud load balancers, etc. are usually
+run not outside, but inside your cluster. As a result, such a system is much more difficult to
+update and maintain.
+
+{{< figure src="baremetal.svg" alt="A diagram showing bare metal Kubernetes, with load balancing and storage done inside the cluster" caption="A diagram showing bare metal Kubernetes, with load balancing and storage done inside the cluster" >}}
+
+
+Judge for yourself: in the cloud, to update a node, you typically delete the virtual machine
+(or even use `kubectl delete node`) and you let your node management tooling create a new
+one, based on an immutable image. The new node will join the cluster and "just work" as a node,
+following a very simple and commonly used pattern in the Kubernetes world.
+Many clusters order new virtual machines every few minutes, simply because they can use
+cheaper spot instances. However, when you have a physical server, you can't just delete and
+recreate it: it often runs cluster services, stores data, and its update process
+is significantly more complicated.
+
+There are different approaches to solving this problem, ranging from in-place updates, as done by
+kubeadm, kubespray, and k3s, to full automation of provisioning physical nodes through Cluster API
+and Metal3.
+
+I like the hybrid approach offered by Talos Linux, where your entire system is described in a
+single configuration file. Most parameters of this file can be applied without rebooting or
+recreating the node, including the version of the Kubernetes control-plane components, while the
+declarative nature of Kubernetes is still preserved.
+This approach minimizes unnecessary impact on cluster services when updating bare metal nodes.
+In most cases, you won't need to migrate your virtual machines or rebuild the cluster filesystem
+for minor updates.
+
+## Preparing a base for your future cloud
+
+So, suppose you've decided to build your own cloud. To start somewhere, you need a base layer.
+You need to think not only about how you will install Kubernetes on your servers but also about how
+you will update and maintain it. Consider the fact that you will have to take care of things like
+updating the kernel, installing the necessary modules, as well as packages and security patches.
+Now you have to think about much more than you would when using ready-made Kubernetes in the cloud.
+
+Of course, you can use standard distributions like Ubuntu or Debian, or you can consider specialized
+ones like Flatcar Container Linux, Fedora CoreOS, and Talos Linux. Each has its advantages and
+disadvantages.
+
+What about us? At Ænix, we use quite a few specific kernel modules like ZFS, DRBD, and Open vSwitch,
+so we decided to go the route of forming a system image with all the necessary modules in advance.
+In this case, Talos Linux turned out to be the most convenient for us.
+For example, the following config is enough to build a system image with all the necessary kernel modules:
+
+```yaml
+arch: amd64
+platform: metal
+secureboot: false
+version: v1.6.4
+input:
+ kernel:
+ path: /usr/install/amd64/vmlinuz
+ initramfs:
+ path: /usr/install/amd64/initramfs.xz
+ baseInstaller:
+ imageRef: ghcr.io/siderolabs/installer:v1.6.4
+ systemExtensions:
+ - imageRef: ghcr.io/siderolabs/amd-ucode:20240115
+ - imageRef: ghcr.io/siderolabs/amdgpu-firmware:20240115
+ - imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20240115
+ - imageRef: ghcr.io/siderolabs/i915-ucode:20240115
+ - imageRef: ghcr.io/siderolabs/intel-ice-firmware:20240115
+ - imageRef: ghcr.io/siderolabs/intel-ucode:20231114
+ - imageRef: ghcr.io/siderolabs/qlogic-firmware:20240115
+ - imageRef: ghcr.io/siderolabs/drbd:9.2.6-v1.6.4
+ - imageRef: ghcr.io/siderolabs/zfs:2.1.14-v1.6.4
+output:
+ kind: installer
+ outFormat: raw
+```
+
+Then we use the `docker` command line tool to build an OS image:
+
+```
+cat config.yaml | docker run --rm -i -v /dev:/dev --privileged "ghcr.io/siderolabs/imager:v1.6.4" -
+```
+
+And as a result, we get a Docker container image with everything we need, which we can use to
+install Talos Linux on our servers. You can do the same; this image will contain all the necessary
+firmware and kernel modules.
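+
+For example, once the resulting installer image is pushed to a registry, it can be referenced from
+the Talos machine configuration so that nodes install (and later upgrade) from it. Here is a minimal
+sketch, with a hypothetical registry path:
+
+```yaml
+# Fragment of a Talos machine configuration (not the imager config shown above)
+machine:
+  install:
+    disk: /dev/sda
+    # Hypothetical image reference - push the built installer to your own registry
+    image: registry.example.org/talos-installer:v1.6.4
+    wipe: false
+```
+
+Existing nodes can later be switched to a new image with, for example,
+`talosctl upgrade --nodes <node> --image <your installer image>`.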
+
+But the question arises, how do you deliver the freshly formed image to your nodes?
+
+I have been contemplating the idea of PXE booting for quite some time. For example, the
+**Kubefarm** project that I wrote an
+[article](/blog/2021/12/22/kubernetes-in-kubernetes-and-pxe-bootable-server-farm/) about
+two years ago was entirely built using this approach. But unfortunately, it does not help you
+deploy your very first parent cluster that will hold the others. So now I have prepared a
+solution that will help you do the same using the PXE approach.
+
+Essentially, all you need to do is [run temporary](https://cozystack.io/docs/get-started/)
+**DHCP** and **PXE** servers inside containers. Then your nodes will boot from your
+image, and you can use a simple Debian-flavored script to help you bootstrap your nodes.
+
+[![asciicast](asciicast.svg)](https://asciinema.org/a/627123)
+
+The [source](https://github.com/aenix-io/talos-bootstrap/) for that `talos-bootstrap` script is
+available on GitHub.
+
+This script allows you to deploy Kubernetes on bare metal in five minutes and obtain a kubeconfig
+for accessing it. However, many unresolved issues still lie ahead.
+
+## Delivering system components
+
+At this stage, you already have a Kubernetes cluster capable of running various workloads. However,
+it is not fully functional yet. In other words, you need to set up networking and storage, as well
+as install necessary cluster extensions, like KubeVirt to run virtual machines, as well as the
+monitoring stack and other system-wide components.
+
+Traditionally, this is solved by installing **Helm charts** into your cluster. You can do this by
+running `helm install` commands locally, but this approach becomes inconvenient when you want to
+track updates, and if you have multiple clusters and you want to keep them uniform. In fact, there
+are plenty of ways to do this declaratively. To solve this, I recommend using the best GitOps
+practices, by which I mean tools like ArgoCD and FluxCD.
+
+While ArgoCD is more convenient for dev purposes with its graphical interface and a central control
+plane, FluxCD is better suited for creating Kubernetes distributions. With FluxCD,
+you can specify which charts with what parameters should be launched and describe dependencies. Then,
+FluxCD will take care of everything for you.
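+
+As an illustration, here is a minimal sketch of such a declarative description using FluxCD's
+HelmRelease resources. The chart source is a hypothetical HelmRepository; `dependsOn` is what lets
+FluxCD reconcile releases in the right order, mirroring the kubevirt-operator/kubevirt pair in the
+listing below:
+
+```yaml
+apiVersion: helm.toolkit.fluxcd.io/v2beta1
+kind: HelmRelease
+metadata:
+  name: kubevirt
+  namespace: cozy-kubevirt
+spec:
+  interval: 10m
+  chart:
+    spec:
+      chart: kubevirt
+      sourceRef:
+        kind: HelmRepository
+        name: example-charts        # hypothetical chart source
+        namespace: flux-system
+  # Reconcile this release only after the operator release is ready
+  dependsOn:
+    - name: kubevirt-operator
+      namespace: cozy-kubevirt
+```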
+
+By carrying out a single installation of FluxCD in your newly minted cluster and configuring it
+accordingly, you enable it to automatically deploy all the essentials, bringing the cluster to the
+expected state and allowing it to upgrade itself into the desired state. For example, after
+installing our platform you'll see the following pre-configured Helm charts with system components:
+
+```
+NAMESPACE NAME AGE READY STATUS
+cozy-cert-manager cert-manager 4m1s True Release reconciliation succeeded
+cozy-cert-manager cert-manager-issuers 4m1s True Release reconciliation succeeded
+cozy-cilium cilium 4m1s True Release reconciliation succeeded
+cozy-cluster-api capi-operator 4m1s True Release reconciliation succeeded
+cozy-cluster-api capi-providers 4m1s True Release reconciliation succeeded
+cozy-dashboard dashboard 4m1s True Release reconciliation succeeded
+cozy-fluxcd cozy-fluxcd 4m1s True Release reconciliation succeeded
+cozy-grafana-operator grafana-operator 4m1s True Release reconciliation succeeded
+cozy-kamaji kamaji 4m1s True Release reconciliation succeeded
+cozy-kubeovn kubeovn 4m1s True Release reconciliation succeeded
+cozy-kubevirt-cdi kubevirt-cdi 4m1s True Release reconciliation succeeded
+cozy-kubevirt-cdi kubevirt-cdi-operator 4m1s True Release reconciliation succeeded
+cozy-kubevirt kubevirt 4m1s True Release reconciliation succeeded
+cozy-kubevirt kubevirt-operator 4m1s True Release reconciliation succeeded
+cozy-linstor linstor 4m1s True Release reconciliation succeeded
+cozy-linstor piraeus-operator 4m1s True Release reconciliation succeeded
+cozy-mariadb-operator mariadb-operator 4m1s True Release reconciliation succeeded
+cozy-metallb metallb 4m1s True Release reconciliation succeeded
+cozy-monitoring monitoring 4m1s True Release reconciliation succeeded
+cozy-postgres-operator postgres-operator 4m1s True Release reconciliation succeeded
+cozy-rabbitmq-operator rabbitmq-operator 4m1s True Release reconciliation succeeded
+cozy-redis-operator redis-operator 4m1s True Release reconciliation succeeded
+cozy-telepresence telepresence 4m1s True Release reconciliation succeeded
+cozy-victoria-metrics-operator victoria-metrics-operator 4m1s True Release reconciliation succeeded
+```
+
+## Conclusion
+
+As a result, you achieve a highly repeatable environment that you can provide to anyone, knowing
+that it operates exactly as intended.
+This is actually what the [Cozystack](https://github.com/aenix-io/cozystack) project does, which
+you can try out for yourself absolutely free.
+
+In the following articles, I will discuss
+[how to prepare Kubernetes for running virtual machines](/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-2/)
+and [how to run Kubernetes clusters with the click of a button](/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-3/).
+Stay tuned, it'll be fun!
+
+---
+
+*Originally published at [https://kubernetes.io](https://kubernetes.io/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-1/) on April 5, 2024.*
diff --git a/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-2/index.md b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-2/index.md
new file mode 100644
index 0000000..58993d2
--- /dev/null
+++ b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-2/index.md
@@ -0,0 +1,264 @@
+---
+layout: blog
+title: "DIY: Create Your Own Cloud with Kubernetes (Part 2)"
+slug: diy-create-your-own-cloud-with-kubernetes-part-2
+date: 2024-04-05T07:35:00+00:00
+---
+
+**Author**: Andrei Kvapil (Ænix)
+
+Continuing our series of posts on how to build your own cloud using just the Kubernetes ecosystem.
+In the [previous article](/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-1/), we
+explained how we prepare a basic Kubernetes distribution based on Talos Linux and Flux CD.
+In this article, we'll walk you through several virtualization technologies in Kubernetes and prepare
+everything needed to run virtual machines in Kubernetes, primarily storage and networking.
+
+We will talk about technologies such as KubeVirt, LINSTOR, and Kube-OVN.
+
+But first, let's explain what virtual machines are needed for, and why you can't just use Docker
+containers for building a cloud.
+The reason is that containers do not provide a sufficient level of isolation.
+Although the situation improves year by year, we often encounter vulnerabilities that allow
+escaping the container sandbox and elevating privileges in the system.
+
+On the other hand, Kubernetes was not originally designed to be a multi-tenant system, meaning
+the basic usage pattern involves creating a separate Kubernetes cluster for every independent
+project and development team.
+
+Virtual machines are the primary means of isolating tenants from each other in a cloud environment.
+In virtual machines, users can execute code and programs with administrative privilege, but this
+doesn't affect other tenants or the environment itself. In other words, virtual machines allow you to
+achieve [hard multi-tenancy isolation](/docs/concepts/security/multi-tenancy/#isolation), and run
+in environments where tenants do not trust each other.
+
+## Virtualization technologies in Kubernetes
+
+There are several different technologies that bring virtualization into the Kubernetes world:
+[KubeVirt](https://kubevirt.io/) and [Kata Containers](https://katacontainers.io/)
+are the most popular ones. But you should know that they work differently.
+
+**Kata Containers** implements the CRI (Container Runtime Interface) and provides an additional
+level of isolation for standard containers by running them in virtual machines.
+But these virtual machines still run within the same, single Kubernetes cluster.
+
+{{< figure src="kata-containers.svg" caption="A diagram showing how container isolation is ensured by running containers in virtual machines with Kata Containers" alt="A diagram showing how container isolation is ensured by running containers in virtual machines with Kata Containers" >}}
+
+**KubeVirt** allows running traditional virtual machines using the Kubernetes API. KubeVirt virtual
+machines are run as regular Linux processes in containers. In other words, in KubeVirt, a container
+is used as a sandbox for running virtual machine (QEMU) processes.
+This can be clearly seen in the figure below, by looking at how live migration of virtual machines
+is implemented in KubeVirt. When migration is needed, the virtual machine moves from one container
+to another.
+
+{{< figure src="kubevirt-migration.svg" caption="A diagram showing live migration of a virtual machine from one container to another in KubeVirt" alt="A diagram showing live migration of a virtual machine from one container to another in KubeVirt" >}}
+
+There is also an alternative project - [Virtink](https://github.com/smartxworks/virtink), which
+implements lightweight virtualization using
+[Cloud-Hypervisor](https://github.com/cloud-hypervisor/cloud-hypervisor) and is initially focused
+on running virtual Kubernetes clusters using the Cluster API.
+
+Considering our goals, we decided to use KubeVirt as the most popular project in this area.
+Besides, we have extensive expertise with it and have already made a lot of contributions to KubeVirt.
+
+KubeVirt is [easy to install](https://kubevirt.io/user-guide/operations/installation/) and allows
+you to run virtual machines out-of-the-box using the
+[containerDisk](https://kubevirt.io/user-guide/virtual_machines/disks_and_volumes/#containerdisk)
+feature - this allows you to store and distribute VM images directly as OCI images from a container
+image registry.
+Virtual machines with containerDisk are well suited for creating Kubernetes worker nodes and other
+VMs that do not require state persistence.
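+
+For example, a minimal VirtualMachine manifest using a containerDisk might look like the sketch
+below (the image reference is a placeholder; any bootable containerDisk image from your registry
+will do):
+
+```yaml
+apiVersion: kubevirt.io/v1
+kind: VirtualMachine
+metadata:
+  name: example-vm
+spec:
+  running: true
+  template:
+    spec:
+      domain:
+        cpu:
+          cores: 2
+        resources:
+          requests:
+            memory: 2Gi
+        devices:
+          disks:
+            - name: containerdisk
+              disk:
+                bus: virtio
+      volumes:
+        - name: containerdisk
+          containerDisk:
+            # Hypothetical image reference - replace with your own containerDisk image
+            image: registry.example.org/images/ubuntu-containerdisk:22.04
+```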
+
+For managing persistent data, KubeVirt offers a separate tool, Containerized Data Importer (CDI).
+It allows for cloning PVCs and populating them with data from base images. The CDI is necessary
+if you want to automatically provision persistent volumes for your virtual machines, and it is
+also required for the KubeVirt CSI Driver, which is used to handle persistent volume claims
+from tenant Kubernetes clusters.
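+
+As a sketch of how this looks in practice, a CDI DataVolume can pull a base image over HTTP and
+populate a new PVC with it (the URL, sizes, and modes here are just illustrative):
+
+```yaml
+apiVersion: cdi.kubevirt.io/v1beta1
+kind: DataVolume
+metadata:
+  name: ubuntu-base
+spec:
+  source:
+    http:
+      # Example source image URL - replace with the base image you actually use
+      url: https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
+  pvc:
+    accessModes:
+      - ReadWriteMany
+    volumeMode: Block
+    resources:
+      requests:
+        storage: 10Gi
+```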
+
+But first, you have to decide where and how you will store this data.
+
+## Storage for Kubernetes VMs
+
+With the introduction of the CSI (Container Storage Interface), a wide range of technologies that
+integrate with Kubernetes has become available.
+In fact, KubeVirt fully utilizes the CSI interface, aligning the choice of storage for
+virtualization closely with the choice of storage for Kubernetes itself.
+However, there are nuances you need to consider. Unlike containers, which typically use a
+standard filesystem, block devices are more efficient for virtual machines.
+
+Although the CSI interface in Kubernetes allows requesting both types of volumes - filesystems
+and block devices - it's important to verify that your storage backend supports this.
+
+Using block devices for virtual machines eliminates the need for an additional abstraction layer,
+such as a filesystem, which makes them more performant and in most cases enables the use of the
+_ReadWriteMany_ mode. This mode allows concurrent access to the volume from multiple nodes, which
+is a critical feature for enabling the live migration of virtual machines in KubeVirt.
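+
+In Kubernetes terms, this combination is expressed directly on the PersistentVolumeClaim; here is a
+sketch (the storage class name is a placeholder for whatever your backend provides):
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: vm-disk
+spec:
+  # Request a raw block volume instead of a filesystem
+  volumeMode: Block
+  # Concurrent access from multiple nodes, needed for live migration
+  accessModes:
+    - ReadWriteMany
+  storageClassName: replicated-block   # hypothetical StorageClass name
+  resources:
+    requests:
+      storage: 20Gi
+```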
+
+The storage system can be external or internal (in the case of hyper-converged infrastructure).
+Using external storage in many cases makes the whole system more stable, as your data is stored
+separately from compute nodes.
+
+{{< figure src="storage-external.svg" caption="A diagram showing external data storage communication with the compute nodes" alt="A diagram showing external data storage communication with the compute nodes" >}}
+
+External storage solutions are often popular in enterprise systems because such storage is
+frequently provided by an external vendor that takes care of its operations. The integration with
+Kubernetes involves only a small component installed in the cluster - the CSI driver. This driver
+is responsible for provisioning volumes in this storage and attaching them to pods run by Kubernetes.
+However, such storage solutions can also be implemented using purely open-source technologies.
+One of the popular solutions is [TrueNAS](https://www.truenas.com/) powered by
+[democratic-csi](https://github.com/democratic-csi/democratic-csi) driver.
+
+{{< figure src="storage-local.svg" caption="A diagram showing local data storage running on the compute nodes" alt="A diagram showing local data storage running on the compute nodes" >}}
+
+On the other hand, hyper-converged systems are often implemented using local storage (when you do
+not need replication) and software-defined storage, often installed directly in Kubernetes,
+such as [Rook/Ceph](https://rook.io/), [OpenEBS](https://openebs.io/),
+[Longhorn](https://longhorn.io/), [LINSTOR](https://linbit.com/linstor/), and others.
+
+{{< figure src="storage-clustered.svg" caption="A diagram showing clustered data storage running on the compute nodes" alt="A diagram showing clustered data storage running on the compute nodes" >}}
+
+A hyper-converged system has its advantages, for example, data locality: when your data is stored
+locally, access to that data is faster. But there are disadvantages, as such a system is usually
+more difficult to manage and maintain.
+
+At Ænix, we wanted to provide a ready-to-use solution that could be used without the need to
+purchase and set up additional external storage, and that was optimal in terms of speed and
+resource utilization. LINSTOR became that solution.
+Its use of time-tested and industry-popular technologies such as LVM and ZFS as backends gives confidence
+that data is stored securely. DRBD-based replication is incredibly fast and consumes a small amount
+of computing resources.
+
+For installing LINSTOR in Kubernetes, there is the Piraeus project, which already provides a
+ready-made block storage to use with KubeVirt.
+
+{{< note >}}
+In case you are using Talos Linux, as we described in the
+[previous article](/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-1/), you will
+need to enable the necessary kernel modules in advance, and configure Piraeus as described in the
+[instructions](https://github.com/piraeusdatastore/piraeus-operator/blob/v2/docs/how-to/talos.md).
+{{< /note >}}
+
+## Networking for Kubernetes VMs
+
+Despite having a similar interface - CNI - the network architecture in Kubernetes is actually more
+complex and typically consists of many independent components that are not directly connected to
+each other. In fact, you can split Kubernetes networking into four layers, which are described below.
+
+### Node Network (Data Center Network)
+
+The network through which nodes are interconnected with each other. This network is usually not
+managed by Kubernetes, but it is an important one because, without it, nothing would work.
+In practice, bare metal infrastructure usually has more than one such network, e.g.
+one for node-to-node communication, a second for storage replication, a third for external access, etc.
+
+{{< figure src="net-nodes.svg" caption="A diagram showing the role of the node network (data center network) on the Kubernetes networking scheme" alt="A diagram showing the role of the node network (data center network) on the Kubernetes networking scheme" >}}
+
+Configuring the physical network interaction between nodes goes beyond the scope of this article,
+as in most situations, Kubernetes utilizes already existing network infrastructure.
+
+### Pod Network
+
+This is the network provided by your CNI plugin. The task of the CNI plugin is to ensure transparent
+connectivity between all containers and nodes in the cluster. Most CNI plugins implement a flat
+network from which separate blocks of IP addresses are allocated for use on each node.
+
+{{< figure src="net-pods.svg" caption="A diagram showing the role of the pod network (CNI-plugin) on the Kubernetes network scheme" alt="A diagram showing the role of the pod network (CNI-plugin) on the Kubernetes network scheme" >}}
+
+In practice, your cluster can have several CNI plugins managed by
+[Multus](https://github.com/k8snetworkplumbingwg/multus-cni). This approach is often used in
+virtualization solutions based on KubeVirt, such as [Rancher](https://www.rancher.com/) and
+[OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift/virtualization).
+The primary CNI plugin is used for integration with Kubernetes services, while additional CNI
+plugins are used to implement private networks (VPC) and integration with the physical networks
+of your data center.
+
+The [default CNI-plugins](https://github.com/containernetworking/plugins/tree/main/plugins) can
+be used to connect bridges or physical interfaces. Additionally, there are specialized plugins
+such as [macvtap-cni](https://github.com/kubevirt/macvtap-cni), which are designed to provide
+better performance.
+
+One additional aspect to keep in mind when running virtual machines in Kubernetes is the need for
+IPAM (IP Address Management), especially for secondary interfaces provided by Multus. This is
+commonly managed by a DHCP server operating within your infrastructure. Additionally, the allocation
+of MAC addresses for virtual machines can be managed by
+[Kubemacpool](https://github.com/k8snetworkplumbingwg/kubemacpool).
+
+In our platform, however, we decided to go another way and fully rely on
+[Kube-OVN](https://www.kube-ovn.io/). This CNI plugin is based on OVN (Open Virtual Network), which
+was originally developed for OpenStack. It provides a complete network solution for virtual
+machines in Kubernetes, features Custom Resources for managing IPs and MAC addresses, supports
+live migration while preserving IP addresses between the nodes, and enables the creation of VPCs
+for physical network separation between tenants.
+
+In Kube-OVN you can assign separate subnets to an entire namespace or connect them as additional
+network interfaces using Multus.
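+
+As a rough sketch of the first option, a dedicated Kube-OVN Subnet bound to a namespace might look
+like this (the CIDR, gateway, and namespace name are placeholders, and the exact fields may vary
+between Kube-OVN versions):
+
+```yaml
+apiVersion: kubeovn.io/v1
+kind: Subnet
+metadata:
+  name: tenant-a
+spec:
+  cidrBlock: 10.66.0.0/16
+  gateway: 10.66.0.1
+  # Pods created in these namespaces get addresses from this subnet
+  namespaces:
+    - tenant-a
+```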
+
+### Services Network
+
+In addition to the CNI plugin, Kubernetes also has a services network, which is primarily needed
+for service discovery.
+Unlike traditional virtual machines, Kubernetes was originally designed to run pods with random
+addresses.
+And the services network provides a convenient abstraction (stable IP addresses and DNS names)
+that will always direct traffic to the correct pod.
+The same approach is also commonly used with virtual machines in clouds despite the fact that
+their IPs are usually static.
+
+{{< figure src="net-services.svg" caption="A diagram showing the role of the services network (services network plugin) on the Kubernetes network scheme" alt="A diagram showing the role of the services network (services network plugin) on the Kubernetes network scheme" >}}
+
+
+The implementation of the services network in Kubernetes is handled by the services network plugin.
+The standard implementation is called **kube-proxy** and is used in most clusters.
+But nowadays, this functionality might be provided as part of the CNI plugin. The most advanced
+implementation is offered by the [Cilium](https://cilium.io/) project, which can be run in kube-proxy replacement mode.
+
+Cilium is based on the eBPF technology, which allows for efficient offloading of the Linux
+networking stack, thereby improving performance and security compared to traditional methods based
+on iptables.
+
+In practice, Cilium and Kube-OVN can be easily
+[integrated](https://kube-ovn.readthedocs.io/zh-cn/stable/en/advance/with-cilium/) to provide a
+unified solution that offers seamless, multi-tenant networking for virtual machines, as well as
+advanced network policies and combined services network functionality.
+
+### External Traffic Load Balancer
+
+At this stage, you already have everything needed to run virtual machines in Kubernetes.
+But there is actually one more thing.
+You still need to access your services from outside your cluster, and an external load balancer
+will help you with organizing this.
+
+For bare metal Kubernetes clusters, there are several load balancers available:
+[MetalLB](https://metallb.universe.tf/), [kube-vip](https://kube-vip.io/),
+[LoxiLB](https://www.loxilb.io/), while [Cilium](https://docs.cilium.io/en/latest/network/lb-ipam/) and
+[Kube-OVN](https://kube-ovn.readthedocs.io/zh-cn/latest/en/guide/loadbalancer-service/)
+provide built-in implementations.
+
+The role of an external load balancer is to provide a stable address available externally and direct
+external traffic to the services network.
+The services network plugin will direct it to your pods and virtual machines as usual.
+
+{{< figure src="net-loadbalancer.svg" caption="A diagram showing the role of the external load balancer on the Kubernetes network scheme" alt="The role of the external load balancer on the Kubernetes network scheme" >}}
+
+In most cases, setting up a load balancer on bare metal is achieved by creating a floating IP address
+on the nodes within the cluster, and announcing it externally using ARP/NDP or BGP protocols.
+
+After exploring various options, we decided that MetalLB is the simplest and most reliable solution,
+although we do not strictly enforce its use.
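+
+A minimal L2-mode configuration, assuming MetalLB v0.13+ with its CRD-based configuration (the
+address range here is just an example from a private network):
+
+```yaml
+apiVersion: metallb.io/v1beta1
+kind: IPAddressPool
+metadata:
+  name: external-pool
+  namespace: metallb-system
+spec:
+  # Example address range - use addresses routable in your data center network
+  addresses:
+    - 192.168.100.200-192.168.100.250
+---
+apiVersion: metallb.io/v1beta1
+kind: L2Advertisement
+metadata:
+  name: external-pool
+  namespace: metallb-system
+spec:
+  ipAddressPools:
+    - external-pool
+```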
+
+Another benefit is that in L2 mode, MetalLB speakers continuously check their neighbours' state by
+performing liveness checks using the memberlist protocol.
+This enables failover that works independently of the Kubernetes control plane.
+
+## Conclusion
+
+This concludes our overview of virtualization, storage, and networking in Kubernetes.
+The technologies mentioned here are available and already pre-configured on the
+[Cozystack](https://github.com/aenix-io/cozystack) platform, where you can try them with no limitations.
+
+In the [next article](/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-3/),
+I'll detail how, on top of this, you can implement the provisioning of fully functional Kubernetes
+clusters with just the click of a button.
+
+---
+
+*Originally published at [https://kubernetes.io](https://kubernetes.io/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-2/) on April 5, 2024.*
diff --git a/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-2/kata-containers.svg b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-2/kata-containers.svg
new file mode 100644
index 0000000..724cd4f
--- /dev/null
+++ b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-2/kata-containers.svg
@@ -0,0 +1,281 @@
diff --git a/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-2/kubevirt-migration.svg b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-2/kubevirt-migration.svg
new file mode 100644
index 0000000..06850ae
--- /dev/null
+++ b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-2/kubevirt-migration.svg
@@ -0,0 +1,172 @@
diff --git a/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-2/net-loadbalancer.svg b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-2/net-loadbalancer.svg
new file mode 100644
index 0000000..05b2ff1
--- /dev/null
+++ b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-2/net-loadbalancer.svg
@@ -0,0 +1,136 @@
diff --git a/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-2/net-nodes.svg b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-2/net-nodes.svg
new file mode 100644
index 0000000..6fefe61
--- /dev/null
+++ b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-2/net-nodes.svg
@@ -0,0 +1,124 @@
diff --git a/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-2/net-pods.svg b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-2/net-pods.svg
new file mode 100644
index 0000000..25c0bbf
--- /dev/null
+++ b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-2/net-pods.svg
@@ -0,0 +1,131 @@
diff --git a/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-2/net-services.svg b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-2/net-services.svg
new file mode 100644
index 0000000..528cfe6
--- /dev/null
+++ b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-2/net-services.svg
@@ -0,0 +1,133 @@
diff --git a/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-2/storage-clustered.svg b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-2/storage-clustered.svg
new file mode 100644
index 0000000..1115e28
--- /dev/null
+++ b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-2/storage-clustered.svg
@@ -0,0 +1,213 @@
diff --git a/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-2/storage-external.svg b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-2/storage-external.svg
new file mode 100644
index 0000000..5c97e8f
--- /dev/null
+++ b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-2/storage-external.svg
@@ -0,0 +1,219 @@
diff --git a/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-2/storage-local.svg b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-2/storage-local.svg
new file mode 100644
index 0000000..9fbbd13
--- /dev/null
+++ b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-2/storage-local.svg
@@ -0,0 +1,309 @@
diff --git a/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-3/clusterapi1.svg b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-3/clusterapi1.svg
new file mode 100644
index 0000000..7c5b792
--- /dev/null
+++ b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-3/clusterapi1.svg
@@ -0,0 +1,41 @@
diff --git a/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-3/clusterapi2.svg b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-3/clusterapi2.svg
new file mode 100644
index 0000000..5178888
--- /dev/null
+++ b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-3/clusterapi2.svg
@@ -0,0 +1,29 @@
diff --git a/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-3/clusterapi3.svg b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-3/clusterapi3.svg
new file mode 100644
index 0000000..87e99ec
--- /dev/null
+++ b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-3/clusterapi3.svg
@@ -0,0 +1,29 @@
diff --git a/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-3/components1.svg b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-3/components1.svg
new file mode 100644
index 0000000..00715d8
--- /dev/null
+++ b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-3/components1.svg
@@ -0,0 +1,42 @@
diff --git a/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-3/components2.svg b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-3/components2.svg
new file mode 100644
index 0000000..3601d61
--- /dev/null
+++ b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-3/components2.svg
@@ -0,0 +1,67 @@
diff --git a/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-3/components3.svg b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-3/components3.svg
new file mode 100644
index 0000000..74a8597
--- /dev/null
+++ b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-3/components3.svg
@@ -0,0 +1,97 @@
diff --git a/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-3/components4.svg b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-3/components4.svg
new file mode 100644
index 0000000..d19f794
--- /dev/null
+++ b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-3/components4.svg
@@ -0,0 +1,66 @@
diff --git a/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-3/fluxcd.svg b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-3/fluxcd.svg
new file mode 100644
index 0000000..039b1e9
--- /dev/null
+++ b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-3/fluxcd.svg
@@ -0,0 +1,48 @@
diff --git a/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-3/index.md b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-3/index.md
new file mode 100644
index 0000000..c00dbbe
--- /dev/null
+++ b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-3/index.md
@@ -0,0 +1,271 @@
+---
+layout: blog
+title: "DIY: Create Your Own Cloud with Kubernetes (Part 3)"
+slug: diy-create-your-own-cloud-with-kubernetes-part-3
+date: 2024-04-05T07:40:00+00:00
+---
+
+**Author**: Andrei Kvapil (Ænix)
+
+Approaching the most interesting phase, this article delves into running Kubernetes within
+Kubernetes. Technologies such as Kamaji and Cluster API are highlighted, along with their
+integration with KubeVirt.
+
+Previous discussions have covered
+[preparing Kubernetes on bare metal](/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-1/)
+and
+[how to turn Kubernetes into a virtual machine management system](/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-2).
+This article concludes the series by explaining how, using all of the above, you can build a
+full-fledged managed Kubernetes and run virtual Kubernetes clusters with just a click.
+
+First up, let's dive into the Cluster API.
+
+## Cluster API
+
+Cluster API is an extension for Kubernetes that allows the management of Kubernetes clusters as
+custom resources within another Kubernetes cluster.
+
+The main goal of the Cluster API is to provide a unified interface for describing the basic
+entities of a Kubernetes cluster and managing their lifecycle. This enables the automation of
+processes for creating, updating, and deleting clusters, simplifying scaling, and infrastructure
+management.
+
+Within the context of Cluster API, there are two terms: **management cluster** and
+**tenant clusters**.
+
+- **Management cluster** is a Kubernetes cluster used to deploy and manage other clusters.
+This cluster contains all the necessary Cluster API components and is responsible for describing,
+creating, and updating tenant clusters. It is often used just for this purpose.
+- **Tenant clusters** are the user clusters or clusters deployed using the Cluster API. They are
+created by describing the relevant resources in the management cluster. They are then used for
+deploying applications and services by end-users.
+
+It's important to understand that physically, tenant clusters do not necessarily have to run on
+the same infrastructure as the management cluster; more often, they run elsewhere.
+
+{{< figure src="clusterapi1.svg" caption="A diagram showing interaction of management Kubernetes cluster and tenant Kubernetes clusters using Cluster API" alt="A diagram showing interaction of management Kubernetes cluster and tenant Kubernetes clusters using Cluster API" >}}
+
+For its operation, Cluster API utilizes the concept of _providers_ which are separate controllers
+responsible for specific components of the cluster being created. Within Cluster API, there are
+several types of providers. The major ones are:
+
+ - **Infrastructure Provider**, which is responsible for providing the computing infrastructure, such as virtual machines or physical servers.
+ - **Control Plane Provider**, which provides the Kubernetes control plane, namely the components kube-apiserver, kube-scheduler, and kube-controller-manager.
+ - **Bootstrap Provider**, which is used for generating cloud-init configuration for the virtual machines and servers being created.
+
+To get started, you will need to install the Cluster API itself and one provider of each type.
+You can find a complete list of supported providers in the project's
+[documentation](https://cluster-api.sigs.k8s.io/reference/providers.html).
+
+For installation, you can use the `clusterctl` utility, or
+[Cluster API Operator](https://github.com/kubernetes-sigs/cluster-api-operator)
+as the more declarative method.
+
+## Choosing providers
+
+### Infrastructure provider
+
+To run Kubernetes clusters using KubeVirt, the
+[KubeVirt Infrastructure Provider](https://github.com/kubernetes-sigs/cluster-api-provider-kubevirt)
+must be installed.
+It enables the deployment of virtual machines for worker nodes in the same management cluster, where
+the Cluster API operates.
+
+### Control plane provider
+
+The [Kamaji](https://github.com/clastix/kamaji) project offers a ready solution for running the
+Kubernetes control plane for tenant clusters as containers within the management cluster.
+This approach has several significant advantages:
+
+- **Cost-effectiveness**: Running the control plane in containers avoids the use of separate control
+plane nodes for each cluster, thereby significantly reducing infrastructure costs.
+- **Stability**: Simplifying architecture by eliminating complex multi-layered deployment schemes.
+Instead of sequentially launching a virtual machine and then installing etcd and Kubernetes components
+inside it, there's a simple control plane that is deployed and run as a regular application inside
+Kubernetes and managed by an operator.
+- **Security**: The cluster's control plane is hidden from the end user, reducing the possibility
+of its components being compromised, and also eliminates user access to the cluster's certificate
+store. This approach to organizing a control plane invisible to the user is often used by cloud providers.
+
+### Bootstrap provider
+
+We use [Kubeadm](https://github.com/kubernetes-sigs/cluster-api/tree/main/bootstrap) as the Bootstrap
+Provider, as it is the standard method for preparing clusters in Cluster API. This provider is developed
+as part of the Cluster API itself. It requires only a prepared system image with kubelet and kubeadm
+installed and allows generating configs in the cloud-init and ignition formats.
+
+It's worth noting that Talos Linux also supports provisioning via the Cluster API and
+[has](https://github.com/siderolabs/cluster-api-bootstrap-provider-talos)
+[providers](https://github.com/siderolabs/cluster-api-control-plane-provider-talos) for this.
+Although [previous articles](/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-1/)
+discussed using Talos Linux to set up a management cluster on bare-metal nodes, to provision tenant
+clusters the Kamaji+Kubeadm approach has more advantages.
+It facilitates the deployment of Kubernetes control planes in containers, thus removing the need for
+separate virtual machines for control plane instances. This simplifies the management and reduces costs.
+
+## How it works
+
+The primary object in Cluster API is the Cluster resource, which acts as the parent for all the others.
+Typically, this resource references two others: a resource describing the **control plane** and a
+resource describing the **infrastructure**, each managed by a separate provider.
+
+Unlike the Cluster, these two resources are not standardized, and their kind depends on the specific
+provider you are using:
+
+{{< figure src="clusterapi2.svg" caption="A diagram showing the relationship of a Cluster resource and the resources it links to in Cluster API" alt="A diagram showing the relationship of a Cluster resource and the resources it links to in Cluster API" >}}
+
+Within Cluster API, there is also a resource named MachineDeployment, which describes a group of nodes,
+whether they are physical servers or virtual machines. This resource functions similarly to standard
+Kubernetes resources such as Deployment, ReplicaSet, and Pod, providing a mechanism for the
+declarative description of a group of nodes and automatic scaling.
+
+In other words, the MachineDeployment resource allows you to declaratively describe nodes for your
+cluster, automating their creation, deletion, and updating according to specified parameters and
+the requested number of replicas.
+
+{{< figure src="machinedeploymentres.svg" caption="A diagram showing the relationship of a MachineDeployment resource and its children in Cluster API" alt="A diagram showing the relationship of a Cluster resource and its children in Cluster API" >}}
+
+To create machines, MachineDeployment refers to a template for generating the machine itself and a
+template for generating its cloud-init config:
+
+{{< figure src="clusterapi3.svg" caption="A diagram showing the relationship of a MachineDeployment resource and the resources it links to in Cluster API" alt="A diagram showing the relationship of a Cluster resource and the resources it links to in Cluster API" >}}
+
+To deploy a new Kubernetes cluster using Cluster API, you will need to prepare the following set of resources:
+
+- A general Cluster resource
+- A KamajiControlPlane resource, responsible for the control plane operated by Kamaji
+- A KubevirtCluster resource, describing the cluster configuration in KubeVirt
+- A KubevirtMachineTemplate resource, responsible for the virtual machine template
+- A KubeadmConfigTemplate resource, responsible for generating tokens and cloud-init
+- At least one MachineDeployment to create some workers
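+
+To give an idea of how these pieces reference each other, here is a sketch of the top-level Cluster
+resource (the API versions of the referenced kinds may differ between provider releases):
+
+```yaml
+apiVersion: cluster.x-k8s.io/v1beta1
+kind: Cluster
+metadata:
+  name: tenant-1
+spec:
+  controlPlaneRef:
+    # Control plane run as containers by Kamaji
+    apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
+    kind: KamajiControlPlane
+    name: tenant-1
+  infrastructureRef:
+    # Infrastructure provided by KubeVirt virtual machines
+    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
+    kind: KubevirtCluster
+    name: tenant-1
+```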
+
+## Polishing the cluster
+
+In most cases, this is sufficient, but depending on the providers used, you may need other resources
+as well. You can find examples of the resources created for each type of provider in the
+[Kamaji project documentation](https://github.com/clastix/cluster-api-control-plane-provider-kamaji?tab=readme-ov-file#-supported-capi-infrastructure-providers).
+
+At this stage, you already have a ready tenant Kubernetes cluster, but so far, it contains nothing
+but API workers and a few core plugins that are standardly included in the installation of any
+Kubernetes cluster: **kube-proxy** and **CoreDNS**. For full integration, you will need to install
+several more components:
+
+To install additional components, you can use a separate
+[Cluster API Add-on Provider for Helm](https://github.com/kubernetes-sigs/cluster-api-addon-provider-helm),
+or the same [FluxCD](https://fluxcd.io/) discussed in
+[previous articles](/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-1/).
+
+When creating resources in FluxCD, it's possible to specify the target cluster by referring to the
+kubeconfig generated by Cluster API. Then, the installation will be performed directly into it.
+Thus, FluxCD becomes a universal tool for managing resources both in the management cluster and
+in the user tenant clusters.
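+
+For example, a HelmRelease can point at the kubeconfig Secret that Cluster API creates for each
+tenant cluster (named `<cluster-name>-kubeconfig`); the chart details below are placeholders:
+
+```yaml
+apiVersion: helm.toolkit.fluxcd.io/v2beta1
+kind: HelmRelease
+metadata:
+  name: cilium
+  namespace: tenant-1
+spec:
+  interval: 10m
+  # Install into the tenant cluster instead of the management cluster
+  kubeConfig:
+    secretRef:
+      name: tenant-1-kubeconfig
+  chart:
+    spec:
+      chart: cilium
+      sourceRef:
+        kind: HelmRepository
+        name: cilium               # hypothetical HelmRepository in the same namespace
+```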
+
+{{< figure src="fluxcd.svg" caption="A diagram showing the interaction scheme of fluxcd, which can install components in both management and tenant Kubernetes clusters" alt="A diagram showing the interaction scheme of fluxcd, which can install components in both management and tenant Kubernetes clusters" >}}
+
+What components are being discussed here? Generally, the set includes the following:
+
+### CNI Plugin
+
+To ensure communication between pods in a tenant Kubernetes cluster, it's necessary to deploy a
+CNI plugin. This plugin creates a virtual network that allows pods to interact with each other
+and is traditionally deployed as a Daemonset on the cluster's worker nodes. You can choose and
+install any CNI plugin that you find suitable.
+
+{{< figure src="components1.svg" caption="A diagram showing a CNI plugin installed inside the tenant Kubernetes cluster on a scheme of nested Kubernetes clusters" alt="A diagram showing a CNI plugin installed inside the tenant Kubernetes cluster on a scheme of nested Kubernetes clusters" >}}
+
+### Cloud Controller Manager
+
+The main task of the Cloud Controller Manager (CCM) is to integrate Kubernetes with the cloud
+infrastructure provider's environment (in your case, it is the management Kubernetes cluster
+in which all workers of tenant Kubernetes are provisioned). Here are some tasks it performs:
+
+1. When a service of type LoadBalancer is created, the CCM initiates the process of creating a cloud load balancer, which directs traffic to your Kubernetes cluster.
+1. If a node is removed from the cloud infrastructure, the CCM ensures its removal from your cluster as well, maintaining the cluster's current state.
+1. When using the CCM, nodes are added to the cluster with a special taint, `node.cloudprovider.kubernetes.io/uninitialized`,
+ which allows for the processing of additional business logic if necessary. After successful initialization, this taint is removed from the node.
+
+Depending on the cloud provider, the CCM can operate both inside and outside the tenant cluster.
+
+[The KubeVirt Cloud Provider](https://github.com/kubevirt/cloud-provider-kubevirt) is designed
+to be installed in the external parent management cluster. Thus, creating services of type
+LoadBalancer in the tenant cluster initiates the creation of LoadBalancer services in the parent
+cluster, which direct traffic into the tenant cluster.
+
+{{< figure src="components2.svg" caption="A diagram showing a Cloud Controller Manager installed outside of a tenant Kubernetes cluster on a scheme of nested Kubernetes clusters and the mapping of services it manages from the parent to the child Kubernetes cluster" alt="A diagram showing a Cloud Controller Manager installed outside of a tenant Kubernetes cluster on a scheme of nested Kubernetes clusters and the mapping of services it manages from the parent to the child Kubernetes cluster" >}}
+
+### CSI Driver
+
+The Container Storage Interface (CSI) is divided into two main parts for interacting with storage
+in Kubernetes:
+
+- **csi-controller**: This component is responsible for interacting with the cloud provider's API
+to create, delete, attach, detach, and resize volumes.
+- **csi-node**: This component runs on each node and facilitates the mounting of volumes to pods
+as requested by kubelet.
+
+In the context of using the [KubeVirt CSI Driver](https://github.com/kubevirt/csi-driver), a unique
+opportunity arises. Since virtual machines in KubeVirt run within the management Kubernetes cluster,
+where a full-fledged Kubernetes API is available, this opens the path for running the csi-controller
+outside of the user's tenant cluster. This approach is popular in the KubeVirt community and offers
+several key advantages:
+
+- **Security**: This method hides the internal cloud API from the end-user, providing access to
+resources exclusively through the Kubernetes interface. Thus, it reduces the risk of direct access
+to the management cluster from user clusters.
+- **Simplicity and Convenience**: Users don't need to manage additional controllers in their clusters,
+simplifying the architecture and reducing the management burden.
+
+However, the csi-node component must run inside the tenant cluster, as it directly interacts with
+kubelet on each node. This component is responsible for the mounting and unmounting of volumes into pods,
+requiring close integration with processes occurring directly on the cluster nodes.
+
+The KubeVirt CSI Driver acts as a proxy for ordering volumes. When a PVC is created inside the tenant
+cluster, a corresponding PVC is created in the management cluster, and then the resulting PV is
+attached to the virtual machine.
+
+{{< figure src="components3.svg" caption="A diagram showing a CSI plugin components installed on both inside and outside of a tenant Kubernetes cluster on a scheme of nested Kubernetes clusters and the mapping of persistent volumes it manages from the parent to the child Kubernetes cluster" alt="A diagram showing a CSI plugin components installed on both inside and outside of a tenant Kubernetes cluster on a scheme of nested Kubernetes clusters and the mapping of persistent volumes it manages from the parent to the child Kubernetes cluster" >}}
+
+### Cluster Autoscaler
+
+The [Cluster Autoscaler](https://github.com/kubernetes/autoscaler) is a versatile component that
+can work with various cloud APIs, and its integration with Cluster API is just one of the available
+functions. For proper configuration, it requires access to two clusters: the tenant cluster, to
+track pods and determine the need for adding new nodes, and the management Kubernetes cluster,
+where it interacts with the MachineDeployment resource and adjusts the number of replicas.
+
+Although Cluster Autoscaler usually runs inside the tenant Kubernetes cluster, in this situation,
+it is suggested to install it outside for the same reasons described before. This approach is
+simpler to maintain and more secure as it prevents users of tenant clusters from accessing the
+management API of the management cluster.
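+
+With the Cluster API provider, scaling limits are set by annotating the MachineDeployment in the
+management cluster, roughly like this:
+
+```yaml
+apiVersion: cluster.x-k8s.io/v1beta1
+kind: MachineDeployment
+metadata:
+  name: tenant-1-workers
+  annotations:
+    # Boundaries within which the Cluster Autoscaler may adjust the replica count
+    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
+    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "10"
+```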
+
+{{< figure src="components4.svg" caption="A diagram showing a Cluster Autoscaler installed outside of a tenant Kubernetes cluster on a scheme of nested Kubernetes clusters" alt="A diagram showing a Cloud Controller Manager installed outside of a tenant Kubernetes cluster on a scheme of nested Kubernetes clusters" >}}
+
+### Konnectivity
+
+There's another additional component I'd like to mention -
+[Konnectivity](https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/).
+You will likely need it later on to get webhooks and the API aggregation layer working in your
+tenant Kubernetes cluster. This topic is covered in detail in one of my
+[previous articles](https://kubernetes.io/blog/2021/12/22/kubernetes-in-kubernetes-and-pxe-bootable-server-farm/#webhooks-and-api-aggregation-layer).
+
+Unlike the components presented above, Kamaji allows you to easily enable Konnectivity and manage
+it as one of the core components of your tenant cluster, alongside kube-proxy and CoreDNS.
+
+## Conclusion
+
+Now you have a fully functional Kubernetes cluster with the capability for dynamic scaling, automatic
+provisioning of volumes, and load balancers.
+
+Going forward, you might consider metrics and logs collection from your tenant clusters, but that
+goes beyond the scope of this article.
+
+Of course, all the components necessary for deploying a Kubernetes cluster can be packaged into a
+single Helm chart and deployed as a unified application. This is precisely how we organize the
+deployment of managed Kubernetes clusters with the click of a button on our open PaaS platform,
+[Cozystack](https://cozystack.io/), where you can try all the technologies described in the article
+for free.
+
+---
+
+*Originally published at [https://kubernetes.io](https://kubernetes.io/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-3/) on April 5, 2024.*
diff --git a/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-3/machinedeploymentres.svg b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-3/machinedeploymentres.svg
new file mode 100644
index 0000000..8adeee6
--- /dev/null
+++ b/content/en/blog/2024-04-05-diy-create-your-own-cloud-with-kubernetes-part-3/machinedeploymentres.svg
@@ -0,0 +1,25 @@