Commit 7b0cdd8

docs(kraft): kraft-based upgrade and downgrade instructions (strimzi#9435)

Signed-off-by: prmellor <pmellor@redhat.com>
PaulRMellor authored Dec 8, 2023
1 parent 33e12f9 commit 7b0cdd8
Showing 23 changed files with 496 additions and 239 deletions.


@@ -0,0 +1,13 @@
// This assembly is included in the following assemblies:
//
// assembly-downgrade.adoc

[id='assembly-downgrade-kafka-versions-{context}']
= Downgrading Kafka when using ZooKeeper

If you are using Kafka in ZooKeeper mode, the downgrade process involves changing the Kafka version and the related `log.message.format.version` and `inter.broker.protocol.version` properties.
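The example below is only a sketch of the relevant `Kafka` resource properties; the version values are illustrative and must match the Kafka version you are downgrading to.

[source,yaml]
----
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 3.5.2                         # illustrative downgrade target version
    config:
      log.message.format.version: "3.5"    # must be supported by the target version
      inter.broker.protocol.version: "3.5"
    # ...
----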

//Version constraints on the downgrade
include::../../modules/upgrading/con-downgrade-target-version.adoc[leveloffset=+1]
//procedure to downgrade Kafka
include::../../modules/upgrading/proc-downgrade-kafka-zookeeper.adoc[leveloffset=+1]
17 changes: 8 additions & 9 deletions documentation/assemblies/upgrading/assembly-downgrade.adoc
@@ -9,13 +9,9 @@
If you are encountering issues with the version of Strimzi you upgraded to,
you can revert your installation to the previous version.

If you used the YAML installation files to install Strimzi, you can use the YAML installation files from the previous release to perform the following downgrade procedures:

. xref:proc-downgrade-cluster-operator-{context}[]
. xref:assembly-downgrade-kafka-versions-{context}[]

If the previous version of Strimzi does not support the version of Kafka you are using,
you can also downgrade Kafka as long as the log message format versions appended to messages match.
If you used the YAML installation files to install Strimzi, you can use the YAML installation files from the previous release to perform the downgrade procedures.
You can downgrade Strimzi by updating the Cluster Operator and the version of Kafka you are using.
Kafka version downgrades are performed by the Cluster Operator.
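For illustration only, redeploying the previous release's installation files points the Cluster Operator Deployment at the earlier operator image; the file layout and image tag shown here are assumptions and may differ between releases.

[source,yaml]
----
# Hypothetical fragment of the Cluster Operator Deployment taken from the
# previous release's installation files (image tag is illustrative).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strimzi-cluster-operator
spec:
  template:
    spec:
      containers:
        - name: strimzi-cluster-operator
          image: quay.io/strimzi/operator:0.38.0   # previous Strimzi version
----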

WARNING: The following downgrade instructions are only suitable if you installed Strimzi using the installation files.
If you installed Strimzi using another method, like {OperatorHub}, downgrade may not be supported by that method unless specified in their documentation.
@@ -24,5 +20,8 @@ To ensure a successful downgrade process, it is essential to use a supported app
//steps to downgrade the operators
include::../../modules/upgrading/proc-downgrade-cluster-operator.adoc[leveloffset=+1]

//steps to downgrade Kafka
include::assembly-downgrade-kafka-versions.adoc[leveloffset=+1]
//steps to downgrade KRaft-based Kafka
include::../../modules/upgrading/proc-downgrade-kafka-kraft.adoc[leveloffset=+1]

//steps to downgrade ZooKeeper-based Kafka
include::assembly-downgrade-zookeeper.adoc[leveloffset=+1]
@@ -8,25 +8,27 @@
[role="_abstract"]
Use the same method to upgrade the Cluster Operator as the initial method of deployment.

Using installation files:: If you deployed the Cluster Operator using the installation YAML files, perform your upgrade by modifying the Operator installation files, as described in xref:proc-upgrading-the-co-{context}[Upgrading the Cluster Operator].
include::../../modules/upgrading/proc-upgrade-cluster-operator.adoc[leveloffset=+1]

== Upgrading the Cluster Operator using the OperatorHub

If you deployed Strimzi from {OperatorHub}, use the Operator Lifecycle Manager (OLM) to change the update channel for the Strimzi operators to a new Strimzi version.

Using the OperatorHub.io:: If you deployed Strimzi from {OperatorHub}, use the Operator Lifecycle Manager (OLM) to change the update channel for the Strimzi operators to a new Strimzi version.
+
Updating the channel starts one of the following types of upgrade, depending on your chosen upgrade strategy:
+
--

* An automatic upgrade is initiated
* A manual upgrade that requires approval before installation begins
--
+

NOTE: If you subscribe to the _stable_ channel, you can get automatic updates without changing channels.
However, enabling automatic updates is not recommended because of the potential for missing any pre-installation upgrade steps.
Use automatic upgrades only on version-specific channels.
+
For more information on using OperatorHub.io to upgrade Operators, see the {OLMOperatorDocs}.

Using a Helm chart:: If you deployed the Cluster Operator using a Helm chart, use `helm upgrade`.
+
For more information on using OperatorHub to upgrade Operators, see the {OLMOperatorDocs}.
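For illustration, changing the update channel means editing the `channel` field of the OLM `Subscription` for the Strimzi operator; the package, catalog source, namespace, and channel names below are assumptions and may differ in your cluster.

[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: strimzi-kafka-operator
  namespace: operators
spec:
  name: strimzi-kafka-operator
  channel: strimzi-0.38.x        # channel for the target Strimzi version (illustrative)
  source: operatorhubio-catalog
  sourceNamespace: olm
  installPlanApproval: Manual    # require approval instead of automatic upgrades
----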

== Upgrading the Cluster Operator using a Helm chart

If you deployed the Cluster Operator using a Helm chart, use `helm upgrade`.

The `helm upgrade` command does not upgrade the {HelmCustomResourceDefinitions}.
Install the new CRDs manually after upgrading the Cluster Operator.
You can access the CRDs from the {ReleaseDownload} or find them in the `crd` subdirectory inside the Helm Chart.
@@ -48,4 +50,4 @@ kubectl get kafka <kafka_cluster_name> -n <namespace> -o jsonpath='{.status.cond

Replace <kafka_cluster_name> with the name of your Kafka cluster and <namespace> with the Kubernetes namespace where the pod is running.

include::../../modules/upgrading/proc-upgrade-cluster-operator.adoc[leveloffset=+1]


14 changes: 14 additions & 0 deletions documentation/assemblies/upgrading/assembly-upgrade-zookeeper.adoc
@@ -0,0 +1,14 @@
// This assembly is included in the following assemblies:
//
// assembly-upgrade.adoc

[id='assembly-upgrade-zookeeper-{context}']
= Upgrading Kafka when using ZooKeeper

[role="_abstract"]
If you are using a ZooKeeper-based Kafka cluster, an upgrade requires an update to the Kafka version and the inter-broker protocol version.
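As a minimal sketch (version values are illustrative), a ZooKeeper-based upgrade touches the following `Kafka` resource properties:

[source,yaml]
----
spec:
  kafka:
    version: 3.6.0                          # new Kafka version
    config:
      inter.broker.protocol.version: "3.5"  # leave at the current version until all brokers are upgraded,
                                            # then increase it to match the new Kafka version
----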

include::../../modules/upgrading/ref-upgrade-kafka-versions.adoc[leveloffset=+1]
include::../../modules/upgrading/con-upgrade-older-clients.adoc[leveloffset=+1]

include::../../modules/upgrading/proc-upgrade-kafka-zookeeper.adoc[leveloffset=+1]
23 changes: 20 additions & 3 deletions documentation/assemblies/upgrading/assembly-upgrade.adoc
@@ -9,6 +9,11 @@
Upgrade your Strimzi installation to version {ProductVersion} and benefit from new features, performance improvements, and enhanced security options.
During the upgrade, Kafka is also updated to the latest supported version, introducing additional features and bug fixes to your Strimzi deployment.

Use the same method to upgrade the Cluster Operator as the initial method of deployment.
For example, if you used the Strimzi installation files, modify those files to perform the upgrade.
After you have upgraded your Cluster Operator to {ProductVersion}, the next step is to upgrade all Kafka nodes to the latest supported version of Kafka.
Kafka upgrades are performed by the Cluster Operator through rolling updates of the Kafka nodes.

If you encounter any issues with the new version, Strimzi can be xref:assembly-downgrade-{context}[downgraded] to the previous version.

Released Strimzi versions can be found at {ReleaseDownload}.
@@ -20,17 +25,29 @@ For topics configured with high availability (replication factor of at least 3 a
The upgrade triggers rolling updates, where brokers are restarted one by one at different stages of the process.
During this time, overall cluster availability is temporarily reduced, which may increase the risk of message loss in the event of a broker failure.
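For example (a sketch only), a topic managed through the Topic Operator that meets these availability requirements might be configured as follows:

[source,yaml]
----
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster   # Kafka cluster the topic belongs to
spec:
  partitions: 12
  replicas: 3                        # replication factor of at least 3
  config:
    min.insync.replicas: 2           # tolerates one broker restarting during rolling updates
----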

//sequence
include::../../modules/upgrading/con-upgrade-sequence.adoc[leveloffset=+1]

//kafka upgrade concepts
include::../../modules/upgrading/con-upgrade-paths.adoc[leveloffset=+1]
include::../../modules/upgrading/con-upgrade-versions-and-images.adoc[leveloffset=+2]

include::../../modules/upgrading/con-upgrade-sequence.adoc[leveloffset=+1]
//client upgrade concepts
include::../../modules/upgrading/con-upgrade-strategies-for-upgrading-clients.adoc[leveloffset=+1]

//upgrading kubernetes
include::../../modules/upgrading/con-upgrade-cluster.adoc[leveloffset=+1]

//upgrading cluster operator
include::assembly-upgrade-cluster-operator.adoc[leveloffset=+1]

include::assembly-upgrade-kafka-versions.adoc[leveloffset=+1]
//upgrading kafka: KRaft-based
include::../../modules/upgrading/proc-upgrade-kafka-kraft.adoc[leveloffset=+1]

//upgrading Kafka: ZooKeeper-based
include::assembly-upgrade-zookeeper.adoc[leveloffset=+1]

//checking the status of an upgrade
include::../../modules/upgrading/con-upgrade-status.adoc[leveloffset=+1]

//Using FIPS
include::../../modules/upgrading/proc-switching-to-FIPS-mode-when-upgrading-Strimzi.adoc[leveloffset=+1]
17 changes: 8 additions & 9 deletions documentation/modules/configuring/con-config-kafka.adoc
@@ -21,18 +21,17 @@ Configuration options that are particularly important include the following:
* Rack awareness
* Metrics
* Cruise Control for cluster rebalancing
* Metadata version for KRaft-based Kafka clusters
* Inter-broker protocol version for ZooKeeper-based Kafka clusters
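The following fragment is only a sketch of where some of these options sit in the `Kafka` resource; the values shown are illustrative.

[source,yaml]
----
spec:
  kafka:
    version: 3.6.0
    listeners:                        # client access
      - name: plain
        port: 9092
        type: internal
        tls: false
    rack:
      topologyKey: topology.kubernetes.io/zone   # rack awareness
    # config, storage, metricsConfig, and other options follow
  cruiseControl: {}                   # enables Cruise Control for cluster rebalancing
----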

For a deeper understanding of the Kafka cluster configuration options, refer to the link:{BookURLConfiguring}[Strimzi Custom Resource API Reference^].

.Kafka versions
The `.spec.kafka.metadataVersion` property or the `inter.broker.protocol.version` property in `config` must be a version supported by the specified Kafka version (`spec.kafka.version`).
The property represents the Kafka metadata or inter-broker protocol version used in a Kafka cluster.
If either of these properties is not set in the configuration, the Cluster Operator updates the version to the default for the Kafka version used.

The `inter.broker.protocol.version` property for the Kafka `config` must be the version supported by the specified Kafka version (`spec.kafka.version`).
The property represents the version of Kafka protocol used in a Kafka cluster.
NOTE: The oldest supported metadata version is 3.3.
Using a metadata version that is older than the Kafka version might cause some features to be disabled.

From Kafka 3.0.0, when the `inter.broker.protocol.version` is set to `3.0` or higher, the `log.message.format.version` option is ignored and doesn't need to be set.

An update to the `inter.broker.protocol.version` is required when upgrading your Kafka version.
For more information, see xref:assembly-upgrading-kafka-versions-str[Upgrading Kafka].
For a deeper understanding of the Kafka cluster configuration options, refer to the link:{BookURLConfiguring}[Strimzi Custom Resource API Reference^].
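To make the relationship concrete, the following sketch shows the two variants; the version values are illustrative and must be supported by `spec.kafka.version`.

[source,yaml]
----
# KRaft-based cluster: the metadata version is set directly on the Kafka spec
spec:
  kafka:
    version: 3.6.0
    metadataVersion: 3.6-IV2
---
# ZooKeeper-based cluster: the protocol version is set in the broker config
spec:
  kafka:
    version: 3.6.0
    config:
      inter.broker.protocol.version: "3.6"
----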

.Managing TLS certificates
When deploying Kafka, the Cluster Operator automatically sets up and renews TLS certificates to enable encryption and authentication within your cluster.
@@ -19,7 +19,7 @@ For example, you can deploy the `Kafka` custom resource, and the installed Clust
Upgrades between versions might include manual steps.
Always read the release notes before upgrading.

For information on upgrades, see xref:assembly-upgrade-cluster-operator-{context}[Cluster Operator upgrade options].
For information on upgrades, see xref:assembly-upgrade-{context}[].

WARNING: Make sure you use the appropriate update channel.
Installing Strimzi from the default _stable_ channel is generally safe.
@@ -12,7 +12,7 @@ Strimzi provides a Helm chart to deploy the Cluster Operator.
After you have deployed the Cluster Operator this way, you can deploy Strimzi components using custom resources.
For example, you can deploy the `Kafka` custom resource, and the installed Cluster Operator will create a Kafka cluster.

For information on upgrades, see xref:assembly-upgrade-cluster-operator-{context}[Cluster Operator upgrade options].
For information on upgrades, see xref:assembly-upgrade-{context}[].

.Prerequisites

@@ -10,7 +10,7 @@ This procedure shows how to deploy a Kafka cluster to your Kubernetes cluster us

The deployment uses a YAML file to provide the specification to create a `Kafka` resource.

Strimzi provides the following xref:config-examples-{context}[example files] you can use to create a Kafka cluster:
Strimzi provides the following xref:config-examples-{context}[example files] to create a Kafka cluster that uses ZooKeeper for cluster management:

`kafka-persistent.yaml`:: Deploys a persistent cluster with three ZooKeeper and three Kafka nodes.
`kafka-jbod.yaml`:: Deploys a persistent cluster with three ZooKeeper and three Kafka nodes (each using multiple persistent volumes).
@@ -37,8 +37,6 @@ The property represents the version of Kafka protocol used in a Kafka cluster.

From Kafka 3.0.0, when the `inter.broker.protocol.version` is set to `3.0` or higher, the `log.message.format.version` option is ignored and doesn't need to be set.

An update to the `inter.broker.protocol.version` is required when xref:assembly-upgrading-kafka-versions-str[upgrading Kafka].

The example clusters are named `my-cluster` by default.
The cluster name is defined by the name of the resource and cannot be changed after the cluster has been deployed.
To change the cluster name before you deploy the cluster, edit the `Kafka.metadata.name` property of the `Kafka` resource in the relevant YAML file.
30 changes: 30 additions & 0 deletions documentation/modules/upgrading/con-upgrade-older-clients.adoc
@@ -0,0 +1,30 @@
// Module included in the following assemblies:
//
// assembly-upgrade-zookeeper.adoc

[id='con-upgrade-older-clients-{context}']
= Upgrading clients with older message formats

[role="_abstract"]
Before Kafka 3.0, you could configure a specific message format for brokers using the `log.message.format.version` property (or the `message.format.version` property at the topic level).
This allowed brokers to accommodate older Kafka clients that were using an outdated message format.
Though Kafka inherently supports older clients without explicitly setting this property, brokers would then need to convert the messages from the older clients, which came with a significant performance cost.

Apache Kafka Java clients have supported the latest message format version since version 0.11.
If all of your clients are using the latest message version, you can remove the `log.message.format.version` or `message.format.version` overrides when upgrading your brokers.

However, if you still have clients that are using an older message format version, we recommend upgrading your clients first.
Start with the consumers, then upgrade the producers before removing the `log.message.format.version` or `message.format.version` overrides when upgrading your brokers.
This will ensure that all of your clients can support the latest message format version and that the upgrade process goes smoothly.
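A hypothetical broker configuration fragment with such an override, which you can remove once all clients support the latest message format, might look like this (the version value is illustrative):

[source,yaml]
----
spec:
  kafka:
    config:
      log.message.format.version: "2.8"   # kept only while older clients are in use;
                                          # remove after consumers and producers are upgraded
----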

You can track Kafka client names and versions using this metric:

* `kafka.server:type=socket-server-metrics,clientSoftwareName=<name>,clientSoftwareVersion=<version>,listener=<listener>,networkProcessor=<processor>`

[TIP]
====
The following Kafka broker metrics help monitor the performance of message down-conversion:
* `kafka.network:type=RequestMetrics,name=MessageConversionsTimeMs,request={Produce|Fetch}` provides metrics on the time taken to perform message conversion.
* `kafka.server:type=BrokerTopicMetrics,name={Produce|Fetch}MessageConversionsPerSec,topic=([-.\w]+)` provides metrics on the number of messages converted over a period of time.
====
8 changes: 6 additions & 2 deletions documentation/modules/upgrading/con-upgrade-sequence.adoc
@@ -14,6 +14,10 @@ Strimzi {ProductVersion} requires Kubernetes {KubernetesVersion}.
+
You can xref:con-upgrade-cluster-{context}[upgrade Kubernetes with minimal downtime].

. xref:assembly-upgrade-cluster-operator-{context}[Upgrade the Cluster Operator].
. xref:assembly-upgrade-{context}[Upgrade the Cluster Operator].

. xref:assembly-upgrading-kafka-versions-{context}[Upgrade all Kafka brokers and client applications] to the latest supported Kafka version.
. Upgrade Kafka depending on the cluster configuration:
.. If using Kafka in KRaft mode, update the Kafka version and `spec.kafka.metadataVersion` to xref:proc-upgrade-kafka-kraft-{context}[upgrade all Kafka brokers and client applications].
.. If using ZooKeeper-based Kafka, update the Kafka version and `inter.broker.protocol.version` to xref:assembly-upgrade-zookeeper-{context}[upgrade all Kafka brokers and client applications].

NOTE: From Strimzi 0.39, upgrades and downgrades between KRaft-based clusters are supported.
4 changes: 3 additions & 1 deletion documentation/modules/upgrading/con-upgrade-status.adoc
@@ -6,13 +6,14 @@
= Checking the status of an upgrade

[role="_abstract"]
When performing an upgrade, you can check it completed successfully in the status of the `Kafka` custom resource.
When performing an upgrade (or downgrade), you can check that it completed successfully in the status of the `Kafka` custom resource.
The status provides information on the Strimzi and Kafka versions being used.

To ensure that you have the correct versions after completing an upgrade, verify the `kafkaVersion` and `operatorLastSuccessfulVersion` values in the Kafka status.

* `operatorLastSuccessfulVersion` is the version of the Strimzi operator that last performed a successful reconciliation.
* `kafkaVersion` is the version of Kafka being used by the Kafka cluster.
* `kafkaMetadataVersion` is the metadata version used by KRaft-based Kafka clusters.

You can use these values to check that an upgrade of Strimzi or Kafka has completed.

@@ -28,4 +29,5 @@ status:
# ...
kafkaVersion: {DefaultKafkaVersion}
operatorLastSuccessfulVersion: {ProductVersion}
kafkaMetadataVersion: {DefaultKafkaMetadataVersion}
----