From 6e1a6a120464d1f76c4595b1b69d1328dabe9bef Mon Sep 17 00:00:00 2001
From: James Munson
Date: Thu, 25 Jul 2024 10:48:09 -0600
Subject: [PATCH] Fix broken reference.

Signed-off-by: James Munson
---
 .../docs/1.7.0/high-availability/rwx-volume-fast-failover.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/docs/1.7.0/high-availability/rwx-volume-fast-failover.md b/content/docs/1.7.0/high-availability/rwx-volume-fast-failover.md
index c3195b04a..29af38fee 100644
--- a/content/docs/1.7.0/high-availability/rwx-volume-fast-failover.md
+++ b/content/docs/1.7.0/high-availability/rwx-volume-fast-failover.md
@@ -5,7 +5,7 @@
 
 Release 1.7.0 adds a feature that minimizes the downtime for ReadWriteMany volumes when a node fails. When enabled, Longhorn uses a lease-based mechanism to monitor the state of the NFS server pod that exports the volume. Longhorn reacts quickly to move it to another node if it becomes unresponsive. See [RWX Volumes](../../nodes-and-volumes/volumes/rwx-volumes) for details on how the NFS server works.
 
-To enable the feature, you set [RWX Volume Fast Failover](../../references/settings#rwx-volume-fast-failover) to "true". Existing RWX volumes will need to be restarted to use the feature after the setting is changed. That is done by scaling the workload down to zero and then back up again. New volumes will pick up the setting at creation and be configured appropriately.
+To enable the feature, you set [RWX Volume Fast Failover](../../references/settings#rwx-volume-fast-failover-experimental) to "true". Existing RWX volumes will need to be restarted to use the feature after the setting is changed. That is done by scaling the workload down to zero and then back up again. New volumes will pick up the setting at creation and be configured appropriately.
 
 With the feature enabled, when a pod is created or re-created, Longhorn also creates an associated lease object in the `longhorn-system` namespace, with the same name as the volume. The NFS server pod keeps the lease renewed as proof of life. If the renewal stops happening, Longhorn will take steps to create a new NFS server pod on another node and to re-attach the workload, even before the old node is marked as `Not Ready` by Kubernetes.
 
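
For context on the enable-and-restart flow described in the patched paragraph, a minimal kubectl sketch might look like the following. It assumes Longhorn's usual pattern of exposing settings as custom resources in `longhorn-system`; the `settings.longhorn.io` resource name, its top-level `value` field, and the `my-app` deployment are illustrative assumptions, not part of this patch.

```shell
# Sketch, not a verified procedure: enable RWX Volume Fast Failover, then
# restart an existing RWX workload so it picks up the setting.

# Longhorn settings are assumed here to be namespaced custom resources whose
# name matches the setting ID referenced by the fixed anchor above.
kubectl -n longhorn-system patch settings.longhorn.io rwx-volume-fast-failover \
  --type=merge -p '{"value": "true"}'

# Existing RWX volumes adopt the setting only after a restart: scale the
# workload (a hypothetical "my-app" deployment) down to zero and back up.
kubectl scale deployment/my-app --replicas=0
kubectl scale deployment/my-app --replicas=1
```

New volumes need none of this; per the patched text, they are configured appropriately at creation.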
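
The lease-based proof-of-life in the final paragraph can likewise be observed with plain kubectl, since leases are ordinary `coordination.k8s.io` objects; `pvc-example` below is a placeholder for the volume's real name.

```shell
# Sketch: inspect the per-volume Lease that the NFS server pod keeps renewed.
# The lease lives in longhorn-system and shares the volume's name.
kubectl -n longhorn-system get lease pvc-example

# spec.renewTime should keep advancing while the NFS server pod is healthy;
# per the doc, a stalled renewal is what prompts Longhorn to start a new NFS
# server pod on another node and re-attach the workload.
kubectl -n longhorn-system get lease pvc-example -o jsonpath='{.spec.renewTime}'
```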