title | weight
---|---
Upgrades | 60
This guide will walk you through the manual steps to upgrade the software in a Rook cluster from one version to the next. Rook is a distributed software system and therefore there are multiple components to individually upgrade in the sequence defined in this guide. After each component is upgraded, it is important to verify that the cluster returns to a healthy and fully functional state.
This guide is just the beginning of upgrade support in Rook. The goal is to provide prescriptive guidance and knowledge on how to upgrade a live Rook cluster and we hope to get valuable feedback from the community that will be incorporated into an automated upgrade solution by the Rook operator.
We welcome feedback and opening issues!
The supported upgrade path for this guide is from a 0.7 release to the latest builds. Until 0.8 is released, the latest builds are labeled such as `v0.7.0-27.gbfc8ec6`. Build-to-build upgrades are not guaranteed to work; this guide is intended to test upgrades only between the official releases.
For a guide to upgrade previous versions of Rook, please refer to the version of documentation for those releases.
With this manual upgrade guide, there are a few notes to consider:
- WARNING: Upgrading a Rook cluster is a manual process in its very early stages. There may be unexpected issues or obstacles that damage the integrity and health of your storage cluster, including data loss. Only proceed with this guide if you are comfortable with that.
- Rook is still in an alpha state. Migrations and general support for breaking changes across versions are not supported or covered in this guide.
- This guide assumes that your Rook operator and its agents are running in the `rook-system` namespace. It also assumes that your Rook cluster is in the `rook` namespace. If any of these components is in a different namespace, search/replace all instances of `-n rook-system` and `-n rook` in this guide with `-n <your namespace>` (see the example after this list).
- New Ceph specific namespaces (`rook-ceph-system` and `rook-ceph`) are now used by default in the new release, but this guide maintains the usage of `rook-system` and `rook` for backwards compatibility. Note that all user guides and examples have been updated to the new namespaces, so you will need to tweak them to maintain compatibility with the legacy `rook-system` and `rook` namespaces.
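For example, if you save the commands from this guide into a local script, the namespaces can be rewritten in a single pass with `sed`. The file name and the replacement namespaces below are hypothetical; substitute your own:

```bash
# Hypothetical script file and target namespaces; adjust to your environment.
sed -i -e 's/-n rook-system/-n my-rook-system/g' \
       -e 's/-n rook /-n my-rook /g' upgrade-commands.sh
```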
In order to successfully upgrade a Rook cluster, the following prerequisites must be met:
- The cluster should be in a healthy state with full functionality. Review the health verification section in order to verify your cluster is in a good starting state.
- `dataDirHostPath` must be set in your Cluster spec. This persists metadata on host nodes, enabling pods to be terminated during the upgrade and for new pods to be created in their place. More details about `dataDirHostPath` can be found in the Cluster CRD readme.
- All pods consuming Rook storage should be created, running, and in a steady state. No Rook persistent volumes should be in the act of being created or deleted.
The minimal sample Cluster spec that will be used in this guide can be found below (note that the specific configuration may not be applicable to all environments):
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: rook
---
apiVersion: rook.io/v1alpha1
kind: Cluster
metadata:
  name: rook
  namespace: rook
spec:
  dataDirHostPath: /var/lib/rook
  storage:
    useAllNodes: true
    useAllDevices: true
    storeConfig:
      storeType: bluestore
      databaseSizeMB: 1024
      journalSizeMB: 1024
```
Before we begin the upgrade process, let's first review some ways that you can verify the health of your cluster, ensuring that the upgrade is going smoothly after each step. Most of the health verification checks for your cluster during the upgrade process can be performed with the Rook toolbox. For more information about how to run the toolbox, please visit the Rook toolbox readme.
In a healthy Rook cluster, the operator, the agents and all Rook namespace pods should be in the `Running` state and have few, if any, pod restarts.
To verify this, run the following commands:
kubectl -n rook-system get pods
kubectl -n rook get pod
If pods aren't running or are restarting due to crashes, you can get more information with `kubectl describe pod` and `kubectl logs` for the affected pods.
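For example, to inspect a specific pod reported by the commands above, substitute its name into the following (use `-n rook-system` instead for operator or agent pods):

```bash
kubectl -n rook describe pod <pod-name>
kubectl -n rook logs <pod-name>
```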
The Rook toolbox contains the Ceph tools that can give you status details of the cluster with the `ceph status` command.
Let's look at some sample output and review some of the details:
```
> kubectl -n rook exec -it rook-ceph-tools -- ceph status
  cluster:
    id:     fe7ae378-dc77-46a1-801b-de05286aa78e
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum rook-ceph-mon0,rook-ceph-mon1,rook-ceph-mon2
    mgr: rook-ceph-mgr0(active)
    osd: 1 osds: 1 up, 1 in

  data:
    pools:   1 pools, 100 pgs
    objects: 0 objects, 0 bytes
    usage:   2049 MB used, 15466 MB / 17516 MB avail
    pgs:     100 active+clean
```
In the output above, note the following indications that the cluster is in a healthy state:
- Cluster health: The overall cluster status is `HEALTH_OK` and there are no warning or error status messages displayed.
- Monitors (mon): All of the monitors are included in the `quorum` list.
- OSDs (osd): All OSDs are `up` and `in`.
- Manager (mgr): The Ceph manager is in the `active` state.
- Placement groups (pgs): All PGs are in the `active+clean` state.
If your `ceph status` output has deviations from the general good health described above, there may be an issue that needs to be investigated further. There are other commands you may run for more details on the health of the system, such as `ceph osd status`.
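For example, `ceph osd status` can be run from the same toolbox pod used above:

```bash
kubectl -n rook exec -it rook-ceph-tools -- ceph osd status
```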
The version of a specific pod in the Rook cluster can be verified in its pod spec output. For example, for the monitor pod `mon0`, we can verify the version it is running with the below commands:
MON0_POD_NAME=$(kubectl -n rook get pod -l mon=rook-ceph-mon0 -o jsonpath='{.items[0].metadata.name}')
kubectl -n rook get pod ${MON0_POD_NAME} -o jsonpath='{.spec.containers[0].image}'
The status and version of all Rook pods can be collected all at once with the following commands:
kubectl -n rook-system get pod -o jsonpath='{range .items[*]}{.metadata.name}{"\n\t"}{.status.phase}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
kubectl -n rook get pod -o jsonpath='{range .items[*]}{.metadata.name}{"\n\t"}{.status.phase}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
Any pod that is using a Rook volume should also remain healthy:
- The pod should be in the `Running` state with no restarts
- There shouldn't be any errors in its logs
- The pod should still be able to read and write to the attached Rook volume.
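A quick spot check of a consumer pod might look like the following sketch. The pod name and mount path are hypothetical; substitute a pod in your cluster that mounts a Rook volume.

```bash
# Hypothetical consumer pod and volume mount path; substitute your own.
CONSUMER_POD=my-app-pod
MOUNT_PATH=/data

# The pod should be Running with no restarts
kubectl get pod ${CONSUMER_POD} -o jsonpath='{.status.phase}{" restarts="}{.status.containerStatuses[0].restartCount}{"\n"}'

# The attached Rook volume should still accept reads and writes
kubectl exec ${CONSUMER_POD} -- sh -c "echo upgrade-check > ${MOUNT_PATH}/upgrade-check && cat ${MOUNT_PATH}/upgrade-check"
```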
The general flow of the upgrade process will be to upgrade the version of a Rook pod, verify the pod is running with the new version, then verify that the overall cluster health is still in a good state.
In this guide, we will be upgrading a live Rook cluster running `v0.7.0` to the next available version of `v0.8`. Until the `v0.8` release is completed, we will instead use the latest `v0.7` tag such as `v0.7.0-27.gbfc8ec6`.
Let's get started!
The Rook agents are deployed by the operator to run on every node. They are in charge of handling all operations related to the consumption of storage from the cluster. The agents are deployed and managed by a Kubernetes daemonset. Since the agents are stateless, the simplest way to update them is by deleting them and allowing the operator to create them again.
Delete the agent daemonset and permissions:
kubectl -n rook-system delete daemonset rook-agent
kubectl delete clusterroles rook-agent
kubectl delete clusterrolebindings rook-agent
Now when the operator is recreated, the agent daemonset will automatically be created again with the new version.
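Once the new operator (created in the next section) has recreated the agents, a generic check of the daemonsets in the `rook-system` namespace can confirm they are running the new image. No specific daemonset name is assumed here, since the new operator may create the agents under a different name than `rook-agent`:

```bash
kubectl -n rook-system get daemonset -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.spec.template.spec.containers[0].image}{"\n"}{end}'
kubectl -n rook-system get pod -o wide
```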
The Rook operator is the management brains of the cluster, so it should be upgraded first before other components. In the event that the new version requires a migration of metadata or config, the operator is the one that would understand how to perform that migration.
Since the upgrade process for this version includes support for storage providers beyond Ceph, we will need to start up a Ceph specific operator. Let's delete the deployment for the old operator and its permissions first:
kubectl -n rook-system delete deployment rook-operator
kubectl delete clusterroles rook-operator
kubectl delete clusterrolebindings rook-operator
Now we need to create the new Ceph specific operator.
IMPORTANT: Ensure that you are using the latest manifests from either `master` or the `release-0.8` branch. If you have custom configuration options set in your old `rook-operator.yaml` manifest, you will need to set those values in the new Ceph operator manifest below.
Navigate to the new Ceph manifests directory, apply your custom configuration options if you are using any, and then create the new Ceph operator with the command below.
Note that the new operator uses the `rook-ceph-system` namespace by default, but we will use `sed` to edit it in place to use `rook-system` instead for backwards compatibility with your existing cluster.
cd cluster/examples/kubernetes/ceph
cat operator.yaml | sed -e 's/namespace: rook-ceph-system/namespace: rook-system/g' | kubectl create -f -
To verify the operator pod is `Running` and using the new version of `rook/ceph:master`, use the following commands:
OPERATOR_POD_NAME=$(kubectl -n rook-system get pods -l app=rook-ceph-operator -o jsonpath='{.items[0].metadata.name}')
kubectl -n rook-system get pod ${OPERATOR_POD_NAME} -o jsonpath='{.status.phase}{"\n"}{.spec.containers[0].image}{"\n"}'
Once you've verified the operator is `Running` and on the new version, verify the health of the cluster is still OK.
Instructions for verifying cluster health can be found in the health verification section.
After upgrading the operator, the placement groups may show a status of `unknown`. If you see this, go to the section on upgrading OSDs; upgrading the OSDs will resolve this issue.
```
> kubectl -n rook exec -it rook-ceph-tools -- ceph status
  ...
  pgs:     100.000% pgs unknown
           100 unknown
```
The toolbox pod runs the tools we will use during the upgrade for cluster status. The toolbox is not expected to contain any state, so we will delete the old pod and start the new toolbox.
kubectl -n rook delete pod rook-tools
After verifying the old tools pod has terminated, start the new toolbox.
You will need to either create the toolbox using the yaml in the master branch or simply set the version of the container to `rook/ceph-toolbox:master` before creating the toolbox. Note that the below command uses `sed` to change the new default namespace for the toolbox from `rook-ceph` to `rook` to be backwards compatible with your existing cluster.
cat toolbox.yaml | sed -e 's/namespace: rook-ceph/namespace: rook/g' | kubectl create -f -
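To confirm the new toolbox pod is up and running the new image (assuming it keeps the `rook-ceph-tools` pod name used elsewhere in this guide):

```bash
kubectl -n rook get pod rook-ceph-tools -o jsonpath='{.status.phase}{"\n"}{.spec.containers[0].image}{"\n"}'
```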
The Rook API service has been removed. Delete the service and its deployment with the following commands:
kubectl -n rook delete svc rook-api
kubectl -n rook delete deploy rook-api
There are multiple monitor pods to upgrade and they are each individually managed by their own replica set.
For each monitor's replica set, you will need to update the pod template spec's image version field to `rook/ceph:master`.
For example, we can update the replica set for `mon0` with:
kubectl -n rook set image replicaset/rook-ceph-mon0 rook-ceph-mon=rook/ceph:master
Once the replica set has been updated, we need to manually terminate the old pod which will trigger the replica set to create a new pod using the new version.
kubectl -n rook delete pod -l mon=rook-ceph-mon0
After the new monitor pod comes up, we can verify that it's in the `Running` state and on the new version:
kubectl -n rook get pod -l mon=rook-ceph-mon0 -o jsonpath='{.items[0].status.phase}{"\n"}{.items[0].spec.containers[0].image}{"\n"}'
At this point, it's very important to ensure that all monitors are `OK` and in `quorum`. Refer to the status output section for instructions.
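As a quick check from the toolbox, the standard `ceph mon stat` command summarizes the monitor quorum (the full `ceph status` output described earlier works just as well):

```bash
kubectl -n rook exec -it rook-ceph-tools -- ceph mon stat
```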
If all of the monitors (and the cluster health overall) look good, then we can move on and repeat the same upgrade steps for the next monitor until all are completed.
NOTE: It is possible while upgrading your monitor pods that the operator will find them out of quorum and immediately replace them with a new monitor, such as `mon0` getting replaced by `mon3`. This is okay as long as the cluster health looks good and all monitors eventually reach quorum again.
The OSD pods can be managed in two different ways, depending on how you specified your storage configuration in your Cluster spec.
- Use all nodes: all storage nodes in the cluster will be managed by a single daemon set. Only the one daemon set will need to be edited to update the image version, then each OSD pod will need to be deleted so that a new pod will be created by the daemon set to take its place.
- Specify individual nodes: each storage node specified in the cluster spec will be managed by its own individual replica set. Each of these replica sets will need to be edited to update the image version, then each OSD pod will need to be deleted so its replica set will start a new pod on the new version to replace it.
In this example, we are going to walk through the case where `useAllNodes: true` was set in the cluster spec, so there will be a single daemon set managing all the OSD pods.
Let's update the container version of either the single OSD daemonset or every OSD replicaset (depending on how the OSDs were deployed).
# If using a daemonset for all nodes
kubectl -n rook edit daemonset rook-ceph-osd
# If using a replicaset for specific nodes, edit each one by one
kubectl -n rook edit replicaset rook-ceph-osd-<node>
Update the version of the container.
image: rook/ceph:master
Once the daemon set (or replica set) is updated, we can begin deleting each OSD pod one at a time and verifying a new one comes up to replace it that is running the new version. After each pod, the cluster health and OSD status should remain or return to an okay state as described in the health verification section. To get the names of all the OSD pods, the following can be used:
kubectl -n rook get pod -l app=rook-ceph-osd -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
Below is an example of deleting just one of the OSD pods (note that the names of your OSD pods will be different):
kubectl -n rook delete pod rook-ceph-osd-kcj8f
The status and version for all OSD pods can be collected with the following command:
kubectl -n rook get pod -l app=rook-ceph-osd -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.phase}{" "}{.spec.containers[0].image}{"\n"}{end}'
Remember after each OSD pod to verify the cluster health using the instructions found in the health verification section.
Similar to the Rook operator, the Ceph manager pods are managed by a deployment.
We will edit the deployment to use the new image version of `rook/ceph:master`:
kubectl -n rook set image deploy/rook-ceph-mgr0 rook-ceph-mgr0=rook/ceph:master
To verify that the manager pod is `Running` and on the new version, use the following:
kubectl -n rook get pod -l app=rook-ceph-mgr -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.phase}{" "}{.spec.containers[0].image}{"\n"}{end}'
During this upgrade process, the new Ceph operator automatically migrated legacy custom resources to their new `rook.io/v1alpha2` and `ceph.rook.io/v1alpha1` types.
First confirm that there are no remaining legacy CRD instances:
kubectl -n rook get clusters.rook.io
kubectl -n rook get objectstores.rook.io
kubectl -n rook get filesystems.rook.io
kubectl -n rook get pools.rook.io
kubectl -n rook get volumeattachments.rook.io
After confirming that each of those commands returns `No resources found`, it is safe to go ahead and delete the legacy CRD types:
kubectl delete crd clusters.rook.io
kubectl delete crd filesystems.rook.io
kubectl delete crd objectstores.rook.io
kubectl delete crd pools.rook.io
kubectl delete crd volumeattachments.rook.io
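To double check what remains after the deletions, the Rook CRDs registered in the cluster can be listed with a simple name filter; only the new types should be present:

```bash
kubectl get crd | grep rook.io
```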
If you have optionally installed either object storage or a shared file system in your Rook cluster, the sections below will provide guidance on how to update them as well. They are both managed by deployments, which we have already covered in this guide, so the instructions will be brief.
If you have object storage installed, first edit the RGW deployment to use the new image version of `rook/ceph:master`:
kubectl -n rook set image deploy/rook-ceph-rgw-my-store rook-ceph-rgw-my-store=rook/ceph:master
To verify that the RGW pod is `Running` and on the new version, use the following:
kubectl -n rook get pod -l app=rook-ceph-rgw -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.phase}{" "}{.spec.containers[0].image}{"\n"}{end}'
If you have a shared file system installed, first edit the MDS deployment to use the new image version of `rook/ceph:master`:
kubectl -n rook set image deploy/rook-ceph-mds-myfs rook-ceph-mds-myfs=rook/ceph:master
To verify that the MDS pod is `Running` and on the new version, use the following:
kubectl -n rook get pod -l app=rook-ceph-mds -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.phase}{" "}{.spec.containers[0].image}{"\n"}{end}'
At this point, your Rook cluster should be fully upgraded to running version `rook/ceph:master` and the cluster should be healthy according to the steps in the health verification section.
Rook cluster installations on Kubernetes versions prior to 1.7.x use ThirdPartyResources (TPRs), which were deprecated in Kubernetes 1.7 and removed in 1.8. If you are upgrading your Kubernetes cluster, the Rook TPRs have to be migrated to CustomResourceDefinitions (CRDs) by following the Kubernetes documentation. The Rook TPRs that require migration during the upgrade are:
- Cluster
- Pool
- ObjectStore
- Filesystem
- VolumeAttachment
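To see which of these TPRs currently exist before migrating them (this assumes a pre-1.8 Kubernetes cluster where the `thirdpartyresources` API is still served):

```bash
kubectl get thirdpartyresources
```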