From 4ef30f2a903b38a446015daed68d8a6cb5701cca Mon Sep 17 00:00:00 2001 From: gardener-robot-ci-2 Date: Wed, 7 Aug 2024 21:52:58 +0000 Subject: [PATCH] Automatic build triggered by last commit --- docs/404.html | 2 +- docs/__resources/controller_989379.png | Bin 112104 -> 0 bytes docs/__resources/defrag_ab05c8.png | Bin 349871 -> 0 bytes .../gardenlet_api_access_graph_5486ba.png | Bin 709029 -> 809015 bytes docs/_print/adopter/index.html | 2 +- docs/_print/community/index.html | 2 +- docs/_print/contribute/docs/index.html | 2 +- docs/_print/docs/contribute/code/index.html | 2 +- docs/adopter/index.html | 2 +- docs/blog/2018/06.11-anti-patterns/index.html | 2 +- .../index.html | 2 +- .../index.html | 2 +- .../index.html | 2 +- .../index.html | 2 +- .../2018/06.11-namespace-isolation/index.html | 2 +- .../2018/06.11-namespace-scope/index.html | 2 +- .../index.html | 2 +- .../index.html | 2 +- .../index.html | 2 +- .../index.html | 2 +- .../12.22-cookies-are-dangerous/index.html | 2 +- .../2018/12.25-gardener_cookies/index.html | 2 +- docs/blog/2018/_print/index.html | 2 +- docs/blog/2018/index.html | 2 +- docs/blog/2018/page/2/index.html | 2 +- .../index.html | 2 +- .../index.html | 2 +- docs/blog/2019/_print/index.html | 2 +- docs/blog/2019/index.html | 2 +- .../index.html | 2 +- .../2020/05.27-pingcaps-experience/index.html | 2 +- .../08.06-gardener-v1.8.0-released/index.html | 2 +- .../index.html | 2 +- .../index.html | 2 +- .../index.html | 2 +- .../index.html | 2 +- .../index.html | 2 +- .../11.23-gardener-v1.13-released/index.html | 2 +- .../index.html | 2 +- docs/blog/2020/_print/index.html | 2 +- docs/blog/2020/index.html | 2 +- .../index.html | 2 +- .../index.html | 2 +- docs/blog/2021/_print/index.html | 2 +- docs/blog/2021/index.html | 2 +- .../index.html | 2 +- .../index.html | 2 +- .../index.html | 2 +- .../index.html | 2 +- .../index.html | 2 +- docs/blog/2022/_print/index.html | 2 +- docs/blog/2022/index.html | 2 +- .../index.html | 2 +- docs/blog/2023/_print/index.html | 2 +- docs/blog/2023/index.html | 2 +- .../index.html | 2 +- .../index.html | 2 +- .../index.html | 2 +- .../index.html | 2 +- docs/blog/2024/_print/index.html | 2 +- docs/blog/2024/index.html | 2 +- docs/blog/_print/index.html | 2 +- docs/blog/index.html | 2 +- docs/blog/page/2/index.html | 2 +- docs/blog/page/3/index.html | 2 +- docs/blog/page/4/index.html | 2 +- docs/community/index.html | 2 +- docs/contribute/docs/index.html | 5 +- docs/curated-links/index.html | 5 +- docs/docs/_print/index.html | 116 +- docs/docs/contribute/_print/index.html | 2 +- docs/docs/contribute/code/cicd/index.html | 5 +- .../contributing-bigger-changes/index.html | 5 +- .../contribute/code/dependencies/index.html | 5 +- docs/docs/contribute/code/index.html | 5 +- .../code/security-guide/_print/index.html | 2 +- .../contribute/code/security-guide/index.html | 5 +- .../adding-existing-documentation/index.html | 5 +- .../documentation/formatting-guide/index.html | 5 +- .../documentation/images/index.html | 5 +- .../documentation/markup/index.html | 5 +- .../documentation/organization/index.html | 5 +- .../documentation/pr-description/index.html | 5 +- .../documentation/shortcodes/index.html | 5 +- .../style-guide/_print/index.html | 2 +- .../style-guide/concept_template/index.html | 5 +- .../documentation/style-guide/index.html | 5 +- .../style-guide/reference_template/index.html | 5 +- .../style-guide/task_template/index.html | 5 +- docs/docs/contribute/index.html | 5 +- docs/docs/dashboard/_print/index.html | 2 +- 
.../dashboard/access-restrictions/index.html | 5 +- docs/docs/dashboard/architecture/index.html | 5 +- .../automated-resource-management/index.html | 5 +- .../docs/dashboard/connect-kubectl/index.html | 5 +- docs/docs/dashboard/custom-fields/index.html | 5 +- docs/docs/dashboard/customization/index.html | 5 +- docs/docs/dashboard/index.html | 5 +- docs/docs/dashboard/local-setup/index.html | 5 +- docs/docs/dashboard/process/index.html | 5 +- .../dashboard/project-operations/index.html | 5 +- docs/docs/dashboard/readme/index.html | 5 +- .../dashboard/terminal-shortcuts/index.html | 5 +- docs/docs/dashboard/testing/index.html | 5 +- docs/docs/dashboard/using-terminal/index.html | 5 +- docs/docs/dashboard/webterminals/index.html | 5 +- .../working-with-projects/index.html | 5 +- docs/docs/extensions/_print/index.html | 2 +- .../_print/index.html | 2 +- .../_print/index.html | 2 +- .../index.html | 5 +- .../container-runtime-extensions/index.html | 5 +- docs/docs/extensions/index.html | 5 +- .../_print/index.html | 2 +- .../_print/index.html | 2 +- .../deployment/index.html | 5 +- .../index.html | 5 +- .../local-setup/index.html | 5 +- .../operations/index.html | 5 +- .../tutorials/_print/index.html | 2 +- .../tutorials/index.html | 5 +- .../index.html | 5 +- .../usage/index.html | 5 +- .../_print/index.html | 2 +- .../deployment/index.html | 5 +- .../dual-stack-ingress/index.html | 5 +- .../index.html | 5 +- .../local-setup/index.html | 5 +- .../operations/index.html | 5 +- .../index.html | 5 +- .../usage/index.html | 5 +- .../_print/index.html | 2 +- .../azure-permissions/index.html | 5 +- .../deployment/index.html | 5 +- .../index.html | 5 +- .../local-setup/index.html | 5 +- .../migrate-loadbalancer/index.html | 5 +- .../operations/index.html | 5 +- .../tutorials/_print/index.html | 2 +- .../tutorials/index.html | 5 +- .../index.html | 5 +- .../usage/index.html | 5 +- .../_print/index.html | 2 +- .../index.html | 5 +- .../operations/index.html | 5 +- .../usage/index.html | 5 +- .../_print/index.html | 2 +- .../deployment/index.html | 5 +- .../index.html | 5 +- .../local-setup/index.html | 5 +- .../operations/index.html | 5 +- .../datadisk-image-restore/index.html | 5 +- .../tutorials/_print/index.html | 2 +- .../tutorials/index.html | 5 +- .../index.html | 5 +- .../usage/index.html | 5 +- .../_print/index.html | 2 +- .../deployment/index.html | 5 +- .../index.html | 5 +- .../local-setup/index.html | 5 +- .../operations/index.html | 5 +- .../usage/index.html | 5 +- .../infrastructure-extensions/index.html | 5 +- .../network-extensions/_print/index.html | 2 +- .../_print/index.html | 2 +- .../deployment/index.html | 5 +- .../index.html | 5 +- .../operations/index.html | 5 +- .../shoot_overlay_network/index.html | 5 +- .../usage/index.html | 5 +- .../_print/index.html | 2 +- .../index.html | 5 +- .../usage/index.html | 5 +- .../extensions/network-extensions/index.html | 5 +- .../os-extensions/_print/index.html | 2 +- .../_print/index.html | 2 +- .../gardener-extension-os-coreos/index.html | 5 +- .../usage/index.html | 5 +- .../_print/index.html | 2 +- .../index.html | 5 +- .../_print/index.html | 2 +- .../index.html | 5 +- .../usage/index.html | 5 +- .../_print/index.html | 2 +- .../gardener-extension-os-ubuntu/index.html | 5 +- .../usage/index.html | 5 +- docs/docs/extensions/os-extensions/index.html | 5 +- docs/docs/extensions/others/_print/index.html | 2 +- .../_print/index.html | 2 +- .../extension-registry-cache/index.html | 5 +- .../getting-started-locally/index.html | 5 +- 
.../getting-started-remotely/index.html | 5 +- .../index.html | 5 +- .../registry-cache/configuration/index.html | 5 +- .../upstream-credentials/index.html | 5 +- .../registry-mirror/configuration/index.html | 5 +- .../_print/index.html | 2 +- .../alerting/index.html | 5 +- .../custom_shoot_issuer/index.html | 5 +- .../deployment/index.html | 5 +- .../index.html | 5 +- .../request_cert/index.html | 5 +- .../request_default_domain_cert/index.html | 5 +- .../tutorials/gateway-api-gateways/index.html | 5 +- .../tutorials/istio-gateways/index.html | 5 +- .../index.html | 5 +- .../_print/index.html | 2 +- .../configuration/index.html | 5 +- .../deployment/index.html | 5 +- .../dns_names/index.html | 5 +- .../dns_providers/index.html | 5 +- .../index.html | 5 +- .../tutorials/gateway-api-gateways/index.html | 5 +- .../tutorials/istio-gateways/index.html | 5 +- .../_print/index.html | 2 +- .../deployment/index.html | 5 +- .../index.html | 5 +- .../lakom/index.html | 5 +- .../shoot-extension/index.html | 5 +- .../_print/index.html | 2 +- .../deployment/index.html | 5 +- .../index.html | 5 +- .../shoot-networking-filter/index.html | 5 +- .../_print/index.html | 2 +- .../deployment/index.html | 5 +- .../index.html | 5 +- .../index.html | 5 +- .../_print/index.html | 2 +- .../deployment/index.html | 5 +- .../index.html | 5 +- .../openidconnects/index.html | 5 +- .../_print/index.html | 2 +- .../configuration/index.html | 5 +- .../getting-started-remotely/index.html | 5 +- .../getting-started/index.html | 5 +- .../index.html | 5 +- .../monitoring/index.html | 5 +- .../shoot-rsyslog-relp/index.html | 5 +- docs/docs/extensions/others/index.html | 5 +- docs/docs/faq/_print/index.html | 2 +- docs/docs/faq/add-feature-gates/index.html | 5 +- docs/docs/faq/automatic-migrate/index.html | 5 +- docs/docs/faq/automatic-upgrade/index.html | 5 +- docs/docs/faq/backup/index.html | 5 +- docs/docs/faq/clusterhealthz/index.html | 5 +- .../faq/configure-worker-pools/index.html | 5 +- docs/docs/faq/dns-config/index.html | 5 +- docs/docs/faq/index.html | 5 +- .../docs/faq/privileged-containers/index.html | 5 +- .../docs/faq/reconciliation-impact/index.html | 5 +- docs/docs/faq/rotate-iaas-keys/index.html | 5 +- docs/docs/gardenctl-v2/index.html | 5 +- docs/docs/gardener/_print/index.html | 10 +- .../gardener/api-reference/_print/index.html | 8 +- .../api-reference/authentication/index.html | 5 +- .../gardener/api-reference/core-v1/index.html | 5 +- .../gardener/api-reference/core/index.html | 5 +- .../api-reference/extensions/index.html | 15 +- docs/docs/gardener/api-reference/index.html | 5 +- docs/docs/gardener/api-reference/index.xml | 23 + .../api-reference/operations/index.html | 5 +- .../api-reference/operator/index.html | 5 +- .../api-reference/provider-local/index.html | 5 +- .../api-reference/resources/index.html | 5 +- .../api-reference/security/index.html | 5 +- .../api-reference/seedmanagement/index.html | 5 +- .../api-reference/settings/index.html | 5 +- .../index.html | 5 +- .../docs/gardener/changing-the-api/index.html | 5 +- .../gardener/component-checklist/index.html | 5 +- docs/docs/gardener/concepts/_print/index.html | 2 +- .../concepts/admission-controller/index.html | 5 +- .../apiserver-admission-plugins/index.html | 5 +- .../gardener/concepts/apiserver/index.html | 5 +- .../gardener/concepts/architecture/index.html | 5 +- .../concepts/backup-restore/index.html | 5 +- .../gardener/concepts/cluster-api/index.html | 5 +- .../concepts/controller-manager/index.html | 5 +- 
docs/docs/gardener/concepts/etcd/index.html | 5 +- .../gardener/concepts/gardenlet/index.html | 5 +- docs/docs/gardener/concepts/index.html | 5 +- .../gardener/concepts/node-agent/index.html | 5 +- .../gardener/concepts/operator/index.html | 5 +- .../concepts/resource-manager/index.html | 5 +- .../gardener/concepts/scheduler/index.html | 5 +- docs/docs/gardener/configuration/index.html | 5 +- .../index.html | 5 +- .../index.html | 5 +- .../control_plane_migration/index.html | 5 +- docs/docs/gardener/csi_components/index.html | 5 +- .../custom-containerd-config/index.html | 5 +- .../gardener/custom-dns-config/index.html | 5 +- .../default_seccomp_profile/index.html | 5 +- docs/docs/gardener/defaulting/index.html | 5 +- docs/docs/gardener/dependencies/index.html | 5 +- .../gardener/deployment/_print/index.html | 4 +- .../index.html | 5 +- .../deployment/configuring_logging/index.html | 5 +- .../deployment/deploy_gardenlet/index.html | 5 +- .../deploy_gardenlet_automatically/index.html | 5 +- .../deploy_gardenlet_manually/index.html | 5 +- .../deploy_gardenlet_via_operator/index.html | 5 +- .../deployment/feature_gates/index.html | 5 +- .../gardenlet_api_access/index.html | 11 +- .../getting_started_locally/index.html | 5 +- .../index.html | 5 +- .../deployment/image_vector/index.html | 5 +- docs/docs/gardener/deployment/index.html | 5 +- docs/docs/gardener/deployment/index.xml | 6 + .../deployment/migration_v0_to_v1/index.html | 5 +- .../index.html | 5 +- .../deployment/setup_gardener/index.html | 5 +- .../deployment/version_skew_policy/index.html | 5 +- docs/docs/gardener/dns-autoscaling/index.html | 5 +- .../dns-search-path-optimization/index.html | 5 +- .../etcd_encryption_config/index.html | 5 +- docs/docs/gardener/exposureclasses/index.html | 5 +- .../gardener/extensions/_print/index.html | 2 +- .../gardener/extensions/admission/index.html | 5 +- .../extensions/backupbucket/index.html | 5 +- .../extensions/backupentry/index.html | 5 +- .../gardener/extensions/bastion/index.html | 5 +- .../extensions/ca-rotation/index.html | 5 +- .../gardener/extensions/cluster/index.html | 5 +- .../extensions/containerruntime/index.html | 5 +- .../controllerregistration/index.html | 5 +- .../controlplane-exposure/index.html | 5 +- .../controlplane-webhooks/index.html | 5 +- .../extensions/controlplane/index.html | 5 +- .../extensions/conventions/index.html | 5 +- .../gardener/extensions/dnsrecord/index.html | 5 +- .../gardener/extensions/extension/index.html | 5 +- .../extensions/force-deletion/index.html | 5 +- .../extensions/garden-api-access/index.html | 5 +- .../extensions/healthcheck-library/index.html | 5 +- .../gardener/extensions/heartbeat/index.html | 5 +- docs/docs/gardener/extensions/index.html | 5 +- .../extensions/infrastructure/index.html | 5 +- .../logging-and-monitoring/index.html | 5 +- .../index.html | 5 +- .../extensions/managedresources/index.html | 5 +- .../gardener/extensions/migration/index.html | 5 +- .../gardener/extensions/network/index.html | 5 +- .../operatingsystemconfig/index.html | 5 +- .../gardener/extensions/overview/index.html | 5 +- .../extensions/project-roles/index.html | 5 +- .../extensions/provider-local/index.html | 5 +- .../extensions/reconcile-trigger/index.html | 5 +- .../referenced-resources/index.html | 5 +- .../shoot-health-status-conditions/index.html | 5 +- .../extensions/shoot-maintenance/index.html | 5 +- .../extensions/shoot-webhooks/index.html | 5 +- .../gardener/extensions/worker/index.html | 5 +- .../getting_started_locally/index.html | 5 +- 
.../gardener/high-availability/index.html | 5 +- docs/docs/gardener/index.html | 5 +- docs/docs/gardener/ipv6/index.html | 5 +- docs/docs/gardener/istio/index.html | 5 +- .../gardener/kubernetes-clients/index.html | 5 +- docs/docs/gardener/local_setup/index.html | 5 +- docs/docs/gardener/log_parsers/index.html | 5 +- docs/docs/gardener/logging-usage/index.html | 5 +- docs/docs/gardener/logging/index.html | 5 +- docs/docs/gardener/managed_seed/index.html | 5 +- .../docs/gardener/monitoring-stack/index.html | 5 +- .../gardener/monitoring/_print/index.html | 2 +- .../gardener/monitoring/alerting/index.html | 5 +- .../monitoring/connectivity/index.html | 5 +- docs/docs/gardener/monitoring/index.html | 5 +- .../gardener/monitoring/profiling/index.html | 5 +- .../gardener/monitoring/readme/index.html | 5 +- .../docs/gardener/network_policies/index.html | 5 +- .../gardener/new-cloud-provider/index.html | 5 +- .../new-kubernetes-version/index.html | 5 +- docs/docs/gardener/node-local-dns/index.html | 5 +- docs/docs/gardener/node-readiness/index.html | 5 +- .../gardener/openidconnect-presets/index.html | 5 +- docs/docs/gardener/pod-security/index.html | 5 +- .../docs/gardener/priority-classes/index.html | 5 +- docs/docs/gardener/process/index.html | 5 +- docs/docs/gardener/projects/index.html | 5 +- .../gardener/reversed-vpn-tunnel/index.html | 5 +- .../gardener/secrets_management/index.html | 5 +- .../gardener/seed_bootstrapping/index.html | 5 +- docs/docs/gardener/seed_settings/index.html | 5 +- .../service-account-manager/index.html | 5 +- docs/docs/gardener/shoot_access/index.html | 5 +- .../gardener/shoot_auditpolicy/index.html | 5 +- .../gardener/shoot_autoscaling/index.html | 5 +- docs/docs/gardener/shoot_cleanup/index.html | 5 +- .../shoot_credentials_rotation/index.html | 5 +- docs/docs/gardener/shoot_hibernate/index.html | 5 +- .../shoot_high_availability/index.html | 5 +- .../gardener/shoot_info_configmap/index.html | 5 +- .../index.html | 5 +- .../gardener/shoot_maintenance/index.html | 5 +- .../docs/gardener/shoot_networking/index.html | 5 +- .../docs/gardener/shoot_operations/index.html | 5 +- docs/docs/gardener/shoot_purposes/index.html | 5 +- .../shoot_scheduling_profiles/index.html | 5 +- .../gardener/shoot_serviceaccounts/index.html | 5 +- docs/docs/gardener/shoot_status/index.html | 5 +- .../shoot_supported_architectures/index.html | 5 +- docs/docs/gardener/shoot_updates/index.html | 5 +- docs/docs/gardener/shoot_versions/index.html | 5 +- .../docs/gardener/shoot_workerless/index.html | 5 +- .../shoot_workers_settings/index.html | 5 +- .../supported_k8s_versions/index.html | 5 +- docs/docs/gardener/testing/index.html | 5 +- .../gardener/testmachinery_tests/index.html | 5 +- docs/docs/gardener/tolerations/index.html | 5 +- .../topology_aware_routing/index.html | 5 +- .../trusted-tls-for-control-planes/index.html | 5 +- .../trusted-tls-for-garden-runtime/index.html | 5 +- .../worker_pool_k8s_versions/index.html | 5 +- docs/docs/getting-started/_print/index.html | 2 +- .../getting-started/architecture/index.html | 5 +- .../getting-started/ca-components/index.html | 5 +- .../common-pitfalls/index.html | 5 +- .../features/_print/index.html | 2 +- .../certificate-management/index.html | 5 +- .../features/cluster-autoscaler/index.html | 5 +- .../features/credential-rotation/index.html | 5 +- .../features/dns-management/index.html | 5 +- .../features/hibernation/index.html | 5 +- docs/docs/getting-started/features/index.html | 5 +- .../getting-started/features/vpa/index.html | 5 +- 
.../features/workerless-shoots/index.html | 5 +- docs/docs/getting-started/index.html | 5 +- .../getting-started/introduction/index.html | 5 +- .../docs/getting-started/lifecycle/index.html | 5 +- .../observability/_print/index.html | 2 +- .../observability/alerts/index.html | 5 +- .../observability/components/index.html | 5 +- .../getting-started/observability/index.html | 5 +- .../observability/shoot-status/index.html | 5 +- docs/docs/getting-started/project/index.html | 5 +- docs/docs/getting-started/shoots/index.html | 5 +- docs/docs/glossary/_print/index.html | 2 +- docs/docs/glossary/index.html | 5 +- docs/docs/guides/_print/index.html | 2 +- .../administer-shoots/_print/index.html | 2 +- .../backup-restore/index.html | 5 +- .../conversion-webhook/index.html | 5 +- .../create-delete-shoot/index.html | 5 +- .../index.html | 5 +- .../guides/administer-shoots/gpu/index.html | 5 +- docs/docs/guides/administer-shoots/index.html | 5 +- .../maintain-shoot/index.html | 5 +- .../administer-shoots/oidc-login/index.html | 5 +- .../administer-shoots/scalability/index.html | 5 +- .../administer-shoots/tailscale/index.html | 5 +- .../guides/applications/_print/index.html | 2 +- .../access-pod-from-local/index.html | 5 +- .../applications/antipattern/index.html | 5 +- .../commit-secret-fail/index.html | 5 +- .../applications/container-startup/index.html | 5 +- .../applications/content_trust/index.html | 5 +- .../dockerfile-pitfall/index.html | 5 +- .../applications/dynamic-pvc/index.html | 5 +- .../applications/image-pull-policy/index.html | 5 +- docs/docs/guides/applications/index.html | 5 +- .../insecure-configuration/index.html | 5 +- .../applications/knative-install/index.html | 5 +- .../missing-registry-permission/index.html | 5 +- .../applications/network-isolation/index.html | 5 +- .../pod-disruption-budget/index.html | 5 +- .../guides/applications/prometheus/index.html | 5 +- .../applications/secure-seccomp/index.html | 5 +- .../service-cache-control/index.html | 5 +- .../index.html | 5 +- .../guides/client-tools/_print/index.html | 2 +- .../client-tools/bash-kubeconfig/index.html | 5 +- .../guides/client-tools/bash-tips/index.html | 5 +- docs/docs/guides/client-tools/index.html | 5 +- .../working-with-kubeconfig/index.html | 5 +- .../high-availability/_print/index.html | 2 +- .../best-practices/index.html | 5 +- .../chaos-engineering/index.html | 5 +- .../control-plane/index.html | 5 +- docs/docs/guides/high-availability/index.html | 5 +- docs/docs/guides/index.html | 5 +- .../_print/index.html | 2 +- .../analysing-node-failures/index.html | 5 +- .../debug-a-pod/index.html | 5 +- .../monitoring-and-troubleshooting/index.html | 5 +- .../shell-to-node/index.html | 5 +- .../tail-logfile/index.html | 5 +- docs/docs/guides/networking/_print/index.html | 2 +- .../index.html | 5 +- .../certificate-extension/index.html | 5 +- .../networking/dns-extension/index.html | 5 +- .../index.html | 5 +- docs/docs/guides/networking/index.html | 5 +- docs/docs/index.html | 5 +- docs/docs/other-components/_print/index.html | 108 +- .../dependency-watchdog/_print/index.html | 2 +- .../concepts/_print/index.html | 2 +- .../dependency-watchdog/concepts/index.html | 5 +- .../concepts/prober/index.html | 5 +- .../concepts/weeder/index.html | 5 +- .../contribution/index.html | 5 +- .../deployment/_print/index.html | 2 +- .../deployment/configure/index.html | 5 +- .../dependency-watchdog/deployment/index.html | 5 +- .../deployment/monitor/index.html | 5 +- .../dependency-watchdog/index.html | 5 +- 
.../dependency-watchdog/readme/index.html | 5 +- .../setup/dwd-using-local-garden/index.html | 5 +- .../dependency-watchdog/testing/index.html | 5 +- .../etcd-druid/_print/index.html | 108 +- .../etcd-druid/api-reference/index.html | 5 +- .../concepts/controllers/index.html | 5 +- .../etcd-druid/concepts/webhooks/index.html | 5 +- .../deployment/cli-flags/index.html | 5 +- .../deployment/feature-gates/index.html | 5 +- .../etcd-network-latency/index.html | 5 +- .../index.html | 5 +- .../index.html | 5 +- .../getting-started-locally/index.html | 5 +- .../other-components/etcd-druid/index.html | 92 +- .../other-components/etcd-druid/index.xml | 40 +- .../etcd-druid/local-e2e-tests/index.html | 5 +- .../etcd-druid/metrics/index.html | 5 +- .../proposals/00-template/index.html | 5 +- .../01-multi-node-etcd-clusters/index.html | 5 +- .../02-snapshot-compaction/index.html | 5 +- .../03-scaling-up-an-etcd-cluster/index.html | 5 +- .../04-etcd-member-custom-resource/index.html | 5 +- .../05-etcd-operator-tasks/index.html | 5 +- .../etcd-druid/readme/index.html | 1390 +++++++++++++++++ .../index.html | 5 +- .../index.html | 5 +- .../supported_k8s_versions/index.html | 5 +- docs/docs/other-components/index.html | 5 +- docs/docs/other-components/index.xml | 151 +- .../_print/index.html | 2 +- .../cp_support_new/index.html | 5 +- .../deployment/index.html | 5 +- .../documents/_print/index.html | 2 +- .../documents/apis/index.html | 5 +- .../documents/index.html | 5 +- .../machine-controller-manager/faq/index.html | 5 +- .../machine-controller-manager/index.html | 5 +- .../integration_tests/index.html | 5 +- .../local_setup/index.html | 5 +- .../machine/index.html | 5 +- .../machine_deployment/index.html | 5 +- .../machine_error_codes/index.html | 5 +- .../machine_set/index.html | 5 +- .../prerequisite/index.html | 5 +- .../proposals/_print/index.html | 2 +- .../excess_reserve_capacity/index.html | 5 +- .../external_providers_grpc/index.html | 5 +- .../proposals/hotupdate-instances/index.html | 5 +- .../proposals/index.html | 5 +- .../proposals/initialize-machine/index.html | 5 +- .../testing_and_dependencies/index.html | 5 +- .../todo/_print/index.html | 2 +- .../todo/index.html | 5 +- .../todo/outline/index.html | 5 +- docs/docs/resources/_print/index.html | 2 +- docs/docs/resources/index.html | 5 +- docs/docs/resources/videos/_print/index.html | 2 +- .../resources/videos/fairy-tail/index.html | 5 +- .../videos/gardener-teaser/index.html | 5 +- .../videos/in-out-networking/index.html | 5 +- docs/docs/resources/videos/index.html | 5 +- .../videos/livecheck-readiness/index.html | 5 +- .../microservices-in_kubernetes/index.html | 5 +- .../resources/videos/namespace/index.html | 5 +- .../videos/small-container/index.html | 5 +- .../videos/why-kubernetes/index.html | 5 +- .../security-and-compliance/_print/index.html | 2 +- docs/docs/security-and-compliance/index.html | 5 +- .../kubernetes-hardening/index.html | 5 +- .../partial-disa-k8s-stig-shoot/index.html | 5 +- .../regional-restrictions/index.html | 5 +- .../security-and-compliance/report/index.html | 5 +- docs/index.html | 2 +- docs/js/404.js | 758 ++++----- ...dex.7a338a9cbbf3410338ec9dfe480fd8b4.json} | 2 +- docs/sitemap.xml | 2 +- 583 files changed, 3844 insertions(+), 1443 deletions(-) delete mode 100644 docs/__resources/controller_989379.png delete mode 100644 docs/__resources/defrag_ab05c8.png create mode 100644 docs/docs/other-components/etcd-druid/readme/index.html rename docs/{offline-search-index.9e2e5a1b479d2b9cc7532cfc3ea163e6.json => 
offline-search-index.7a338a9cbbf3410338ec9dfe480fd8b4.json} (61%) diff --git a/docs/404.html b/docs/404.html index 0e18d94ac8a..a89fc80fbea 100644 --- a/docs/404.html +++ b/docs/404.html @@ -1,5 +1,5 @@ 404 Page not found | Gardener -

Page Not Found

We dug around, but couldn't find the page that you were looking for.

You could go back to our home page or use the search bar to find what you were looking for.

6.3.15 - Local e2e Tests

e2e Test Suite

Developers can run extended e2e tests, in addition to unit tests, for Etcd-Druid in or from their local environments. This is recommended to verify the desired behavior of several features and to avoid regressions in future releases.

The very same tests typically run as part of the component’s release job as well as on demand, e.g., when triggered by Etcd-Druid maintainers for open pull requests.

Testing Etcd-Druid automatically involves a certain test coverage for gardener/etcd-backup-restore.

STEPS="setup,deploy,test,undeploy,cleanup" \
  test-e2e
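
For reference, a complete run against a real AWS bucket would export provider credentials before invoking the test-e2e target. In the sketch below, only STEPS and the test-e2e target come from the snippet above; PROVIDERS and the AWS_* variable names are illustrative assumptions and may not match the Makefile's actual interface.

    # Sketch only: variable names other than STEPS are assumptions, not the verified Makefile interface.
    export AWS_ACCESS_KEY_ID="<access-key-id>"
    export AWS_SECRET_ACCESS_KEY="<secret-access-key>"
    export AWS_REGION="<region>"
    PROVIDERS="aws" \
    STEPS="setup,deploy,test,undeploy,cleanup" \
    make test-e2e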

e2e test with localstack

The above-mentioned e2e tests need storage from real cloud providers to be set up. However, there is a tool named localstack that makes it possible to run the e2e tests against mock AWS storage. We can also provision a KIND cluster for the e2e tests. So, with localstack and a KIND cluster together, we don't need any actual cloud provider infrastructure to be set up in order to run the e2e tests.

How are the KIND cluster and localstack set up

KIND, or Kubernetes-in-Docker, is a Kubernetes cluster that is set up inside a Docker container. Such a cluster has limited capability, as it does not have much compute power, but it can easily be set up inside a container and torn down again simply by removing that container. That is why a KIND cluster is very convenient to use for e2e tests. A Makefile command helps to spin up a KIND cluster and uses the cluster to run the e2e tests.

There is a Docker image for localstack. The image is deployed as a pod inside the KIND cluster through hack/e2e-test/infrastructure/localstack/localstack.yaml; the Makefile takes care of deploying this YAML file in the KIND cluster.

The developer needs to run the make ci-e2e-kind command. This command in turn runs hack/ci-e2e-kind.sh, which spins up the KIND cluster, deploys localstack into it, and then runs the e2e tests using localstack as the mock AWS storage provider. The e2e tests themselves run on the host machine but deploy the druid controller inside the KIND cluster, where it spawns multi-node etcd clusters. The e2e tests verify whether the druid controller performs its jobs correctly. The mock localstack storage is cleaned up after every e2e test, which is why the e2e tests need to access the localstack pod running inside the KIND cluster. The network traffic between the host machine and the localstack pod is handled by mapping the localstack pod port to a host port while setting up the KIND cluster via hack/e2e-test/infrastructure/kind/cluster.yaml.
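
For illustration, such a port mapping is typically expressed in the KIND cluster configuration via extraPortMappings. The sketch below shows the general shape; the concrete port number is an assumption (localstack's default edge port) rather than the value actually used in hack/e2e-test/infrastructure/kind/cluster.yaml.

    # Minimal sketch of a KIND cluster config that exposes the localstack port on the host.
    # The port value 4566 is an assumption for illustration.
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
    - role: control-plane
      extraPortMappings:
      - containerPort: 4566   # port the localstack pod is reachable on via the node
        hostPort: 4566        # port the e2e tests on the host machine connect to
        protocol: TCP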

How to execute e2e tests with localstack and KIND cluster

Run the following make command to spin up a KinD cluster, deploy localstack and run the e2e tests with provider aws:

make ci-e2e-kind

6.3.16 - Metrics

Monitoring

etcd-druid uses Prometheus for metrics reporting. The metrics can be used for real-time monitoring and debugging of compaction jobs.

The simplest way to see the available metrics is to cURL the metrics endpoint /metrics. The format is described here.
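
For example, assuming etcd-druid runs as a Deployment named etcd-druid and serves metrics on port 8080 (both placeholders for illustration; check your installation), the endpoint can be queried like this:

    # Forward the (assumed) metrics port of the etcd-druid deployment to the local machine.
    kubectl -n <namespace> port-forward deployment/etcd-druid 8080:8080
    # In another shell, fetch the compaction-related metrics in Prometheus text format.
    curl -s http://localhost:8080/metrics | grep '^etcddruid_'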

Follow the Prometheus getting started doc to spin up a Prometheus server to collect etcd metrics.

The naming of metrics follows the suggested Prometheus best practices. All compaction related metrics are put under namespace etcddruid and the respective subsystems.

Snapshot Compaction

These metrics provide information about the compaction jobs that run at regular intervals in shoot control planes. By studying the metrics, we can deduce how many compaction jobs ran successfully, how many failed, how many delta events were compacted, and so on.

Name | Description | Type
etcddruid_compaction_jobs_total | Total number of compaction jobs initiated by the compaction controller. | Counter
etcddruid_compaction_jobs_current | Number of currently running compaction jobs. | Gauge
etcddruid_compaction_job_duration_seconds | Total time taken in seconds to finish a running compaction job. | Histogram
etcddruid_compaction_num_delta_events | Total number of etcd events to be compacted by a compaction job. | Gauge

The etcddruid_compaction_jobs_total metric has two labels: the label succeeded shows how many compaction jobs succeeded, and the label failed shows how many compaction jobs failed.

The etcddruid_compaction_job_duration_seconds metric has the same two labels: succeeded records how much time a successful job took to complete, and failed records how much time a failed compaction job took.

The etcddruid_compaction_jobs_current metric comes with the label etcd_namespace, which indicates the namespace of the Etcd running in the control plane of a shoot cluster.

Etcd

These metrics are exposed by the etcd process that runs in each etcd pod.

The following list of metrics is applicable to the clustering of a multi-node etcd cluster. The full list of metrics exposed by etcd is available here.

No. | Metrics Name | Description | Comments
1 | etcd_disk_wal_fsync_duration_seconds | Latency distributions of fsync called by WAL. | High disk operation latencies indicate disk issues.
2 | etcd_disk_backend_commit_duration_seconds | Latency distributions of commit called by backend. | High disk operation latencies indicate disk issues.
3 | etcd_server_has_leader | Whether or not a leader exists. 1: leader exists, 0: no leader. | To capture quorum loss or to check the availability of the etcd cluster.
4 | etcd_server_is_leader | Whether or not this member is a leader. 1 if it is, 0 otherwise. |
5 | etcd_server_leader_changes_seen_total | Number of leader changes seen. | Helpful in fine-tuning the zonal cluster, e.g. the etcd heartbeat time; it can also indicate etcd load and network issues.
6 | etcd_server_is_learner | Whether or not this member is a learner. 1 if it is, 0 otherwise. |
7 | etcd_server_learner_promote_successes | Total number of successful learner promotions while this member is leader. | Might be helpful in checking the success of API calls made by backup-restore.
8 | etcd_network_client_grpc_received_bytes_total | Total number of bytes received from gRPC clients. | Client traffic in.
9 | etcd_network_client_grpc_sent_bytes_total | Total number of bytes sent to gRPC clients. | Client traffic out.
10 | etcd_network_peer_sent_bytes_total | Total number of bytes sent to peers. | Useful for network usage.
11 | etcd_network_peer_received_bytes_total | Total number of bytes received from peers. | Useful for network usage.
12 | etcd_network_active_peers | Current number of active peer connections. | Might be useful in detecting issues like network partition.
13 | etcd_server_proposals_committed_total | Total number of consensus proposals committed. | A consistently large lag between a single member and its leader indicates that member is slow or unhealthy.
14 | etcd_server_proposals_pending | Current number of pending proposals to commit. | Pending proposals suggest there is a high client load or the member cannot commit proposals.
15 | etcd_server_proposals_failed_total | Total number of failed proposals seen. | Might indicate downtime caused by a loss of quorum.
16 | etcd_server_proposals_applied_total | Total number of consensus proposals applied. | The difference between etcd_server_proposals_committed_total and etcd_server_proposals_applied_total should usually be small.
17 | etcd_mvcc_db_total_size_in_bytes | Total size of the underlying database physically allocated in bytes. |
18 | etcd_server_heartbeat_send_failures_total | Total number of leader heartbeat send failures. | Might be helpful in fine-tuning the cluster or detecting slow disks or network issues.
19 | etcd_network_peer_round_trip_time_seconds | Round-trip-time histogram between peers. | Might be helpful in fine-tuning network usage, especially for a zonal etcd cluster.
20 | etcd_server_slow_apply_total | Total number of slow apply requests. | Might indicate an overload caused by a slow disk.
21 | etcd_server_slow_read_indexes_total | Total number of pending read indexes not in sync with the leader's, or timed-out read index requests. |

The full list of metrics is available here.

Etcd-Backup-Restore

These metrics are exposed by the etcd-backup-restore container in each etcd pod.

The following list of metrics is applicable to the clustering of a multi-node etcd cluster. The full list of metrics exposed by etcd-backup-restore is available here.

No. | Metrics Name | Description
1. | etcdbr_cluster_size | Captures the scale-up/scale-down scenarios.
2. | etcdbr_is_learner | Whether or not this member is a learner. 1 if it is, 0 otherwise.
3. | etcdbr_is_learner_count_total | Total number of times the member was added as a learner.
4. | etcdbr_restoration_duration_seconds | Total latency distribution required to restore the etcd member.
5. | etcdbr_add_learner_duration_seconds | Total latency distribution of adding the etcd member as a learner to the cluster.
6. | etcdbr_member_remove_duration_seconds | Total latency distribution of removing the etcd member from the cluster.
7. | etcdbr_member_promote_duration_seconds | Total latency distribution of promoting the learner to a voting member.
8. | etcdbr_defragmentation_duration_seconds | Total latency distribution of defragmentation of each etcd cluster member.

Prometheus supplied metrics

The Prometheus client library provides a number of metrics under the go and process namespaces.

6.3.17 - operator out-of-band tasks

DEP-05: Operator Out-of-band Tasks

Table of Contents

Summary

This DEP proposes an enhancement to etcd-druid’s capabilities to handle out-of-band tasks, which are presently performed manually or invoked programmatically via suboptimal APIs. The document proposes the establishment of a unified interface by defining a well-structured API to harmonize the initiation of any out-of-band task, monitor its status, and simplify the process of adding new tasks and managing their lifecycles.

Terminology

Motivation

Today, etcd-druid mainly acts as an etcd cluster provisioner (creation, maintenance and deletion). In the future, the capabilities of etcd-druid will be enhanced via the etcd-member proposal by providing it access to much more detailed information about each etcd cluster member. While the reconciliation and monitoring capabilities of etcd-druid are being enhanced, it still lacks the ability to let users invoke out-of-band tasks on an existing etcd cluster.

Operating etcd clusters at scale has produced new learnings. It has been observed that we regularly need capabilities to trigger out-of-band tasks that are outside the purview of a regular etcd reconciliation run. Many of these tasks are multi-step processes, and performing them manually is error-prone, even if an operator follows a well-written step-by-step guide. Thus, there is a need to automate these tasks. Some examples of on-demand/out-of-band tasks:

Goals

Non-Goals

Proposal

The authors propose the creation of a single, dedicated custom resource to represent an out-of-band task. etcd-druid will be enhanced to process such task requests and to update the task's status, which can then be tracked/observed.

Custom Resource Golang API

EtcdOperatorTask is the new custom resource that will be introduced. This API will initially be in version v1alpha1 and will be subject to change. We will respect the Kubernetes Deprecation Policy.

// EtcdOperatorTask represents an out-of-band operator task resource.
 type EtcdOperatorTask struct {
   metav1.TypeMeta
     maxBackups: <maximum no. of backups that will be copied>    
 

Note: For detailed object store specification please refer here
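
To make the shape of such a request more concrete, the following is a purely hypothetical manifest sketch for a copy-backups task under the proposed API. Apart from the kind, the v1alpha1 version, and the maxBackups setting mentioned above, all field names and values are illustrative assumptions, not the schema defined by this proposal.

    # Hypothetical sketch only; the actual EtcdOperatorTask schema is defined by this proposal and may differ.
    apiVersion: druid.gardener.cloud/v1alpha1
    kind: EtcdOperatorTask
    metadata:
      name: copy-backups-etcd-main      # illustrative name
      namespace: <namespace>            # namespace of the target etcd cluster
    spec:
      type: copy-backups-task           # assumed field for selecting the task type
      config:                           # assumed task-specific configuration section
        maxBackups: 10                  # maximum no. of backups that will be copied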

Pre-Conditions

Note: copy-backups-task runs as a separate job, and it operates only on the backup bucket; hence it doesn't depend on the health of the etcd cluster members.

Note: copy-backups-task has already been implemented and is currently being used in Control Plane Migration, but it will be harmonized with the EtcdOperatorTask custom resource.

Metrics

The authors propose to introduce the following metrics:

6.3.18 - Recovery From Permanent Quorum Loss In Etcd Cluster

Recovery from Permanent Quorum Loss in an Etcd Cluster

Quorum loss in Etcd Cluster

Quorum loss occurs when the majority of Etcd pods (greater than or equal to n/2 + 1) are down simultaneously for some reason.

There are two types of quorum loss that can happen to an Etcd multinode cluster:

  1. Transient quorum loss - A quorum loss is called transient when the majority of Etcd pods are down simultaneously for some time. The pods may be down due to network unavailability, high resource usages, etc. When the pods come back after some time, they can re-join the cluster and quorum is recovered automatically without any manual intervention. There should not be a permanent failure for the majority of etcd pods due to hardware failure or disk corruption.

  2. Permanent quorum loss - A quorum loss is called permanent when the majority of Etcd cluster members experience permanent failure, whether due to hardware failure or disk corruption, etc. In that case, the etcd cluster is not going to recover automatically from the quorum loss. A human operator will now need to intervene and execute the following steps to recover the multi-node Etcd cluster.

If permanent quorum loss occurs to a multinode Etcd cluster, the operator needs to note down the PVCs, configmaps, statefulsets, CRs, etc. related to that Etcd cluster and work on those resources only. The following steps guide a human operator to recover from permanent quorum loss of an etcd cluster. We assume the name of the Etcd CR for the Etcd cluster is etcd-main.

Etcd cluster in the shoot control plane of a Gardener deployment: there are two Etcd clusters running in the shoot control plane. One is named etcd-events and the other is named etcd-main. The operator needs to handle permanent quorum loss for the specific affected cluster: if permanent quorum loss occurs in the etcd-events cluster, the operator needs to note down the PVCs, configmaps, statefulsets, CRs, etc. related to the etcd-events cluster and work on those resources only.

⚠️ Note: Please note that manually restoring etcd can result in data loss. This guide is the last resort to bring an Etcd cluster up and running again.

If etcd-druid and etcd-backup-restore are being used with Gardener, then:

Target the control plane of the affected shoot cluster via kubectl. Alternatively, you can use gardenctl to target the control plane of the affected shoot cluster. You can get the details to target the control plane from the Access tile in the shoot cluster details page on the Gardener dashboard. Ensure that you are targeting the correct namespace.
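
For example, with gardenctl v2 the control plane of the affected shoot can be targeted roughly as follows; all names are placeholders, and the namespace pattern shown is the usual Gardener seed namespace convention:

    # Target the control plane (seed namespace) of the affected shoot via gardenctl; names are placeholders.
    gardenctl target --garden <garden> --project <project> --shoot <shoot> --control-plane
    # Alternatively, point kubectl at the seed cluster and switch to the shoot's control plane namespace,
    # which by Gardener convention is named shoot--<project>--<shoot>.
    kubectl config set-context --current --namespace=shoot--<project>--<shoot>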

  1. Add the following annotations to the Etcd resource etcd-main:

    1. kubectl annotate etcd etcd-main druid.gardener.cloud/suspend-etcd-spec-reconcile=

    2. kubectl annotate etcd etcd-main druid.gardener.cloud/disable-resource-protection=

  2. Note down the configmap name that is attached to the etcd-main statefulset. If you describe the statefulset with kubectl describe sts etcd-main, look for lines similar to the following to identify the attached configmap name; it will be needed at later stages:

    Volumes:
       etcd-config-file:
         Type:      ConfigMap (a volume populated by a ConfigMap)
     etcd-main-0 4c37667312a3912b:Member 1m
     etcd-main-1 75a9b74cfd3077cc:Member 1m
     etcd-main-2 c62ee6af755e890d:Leader 1m

6.3.19 - Restoring Single Member In Multi Node Etcd Cluster

Restoration of a single member in multi-node etcd deployed by etcd-druid

Note:

  • For a cluster with n members, this proposal covers only the restoration of a single member within an etcd cluster, not the quorum loss scenario (when the majority of members within a cluster fail).
  • This proposal does not target the recovery of a single member that got separated from the cluster due to a network partition.

Motivation

If a single etcd member within a multi-node etcd cluster goes down due to DB corruption, PVC corruption, or an invalid data-dir, it needs to be brought back. Unlike in the single-node case, a minority member of a multi-node cluster can't be restored from the snapshots present in the storage container: the old snapshots contain the cluster's metadata, which leads to a member ID mismatch and prevents the new member from coming up, since the new member would get its metadata from a DB restored from old snapshots.

Solution

  • If a backup-restore sidecar detects that its corresponding etcd member is down due to data-dir corruption or an invalid data-dir,
  • then backup-restore first removes the failing etcd member from the cluster using the MemberRemove API call and cleans the data-dir of the failed etcd member.
  • This won't affect the etcd cluster, as quorum is still maintained.
  • After successfully removing the failed etcd member from the cluster, the backup-restore sidecar tries to add a new etcd member to the cluster to restore the same cluster size as before.
  • Backup-restore first adds the new member as a learner using the MemberAddAsLearner API call; once the learner is added to the cluster and has caught up with the leader, it promotes the learner (non-voting member) to a voting member using the MemberPromote API call.
  • So, the failed member first needs to be removed from the cluster and then added back as a new member; an illustrative etcdctl sequence is sketched below.
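
The backup-restore sidecar drives this flow programmatically through the etcd client API. Purely for illustration, the equivalent manual sequence with etcdctl (v3.4 or later) looks roughly like the following, with member IDs, names, and URLs as placeholders:

    # Remove the failed member while quorum is still intact (IDs are taken from 'etcdctl member list').
    etcdctl member remove <failed-member-id>
    # After cleaning the member's data-dir, add it back as a learner (non-voting member).
    etcdctl member add <member-name> --learner --peer-urls=https://<member-peer-address>:2380
    # Once the learner has caught up with the leader, promote it to a full voting member.
    etcdctl member promote <learner-member-id>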

Example

  1. If a 3-member etcd cluster has 1 downed member (due to an invalid data-dir), the cluster can still make forward progress because the quorum is 2.
  2. The downed etcd member gets restarted, and its corresponding backup-restore sidecar receives an initialization request.
  3. The backup-restore sidecar then checks for data corruption or an invalid data-dir.
  4. The backup-restore sidecar detects that the data-dir is invalid and that this is a multi-node etcd cluster.
  5. The backup-restore sidecar then removes the downed etcd member from the cluster.
  6. The number of members in the cluster becomes 2 while the quorum requirement remains at 2, so the etcd cluster is not affected.
  7. The data-dir is cleaned and the member is added back as a learner (non-voting member).
  8. As soon as the learner gets in sync with the leader, it is promoted to a voting member, bringing the number of members in the cluster back to 3.

6.3.20 - Supported K8s Versions

Supported Kubernetes Versions

We strongly recommend using etcd-druid with the supported Kubernetes versions published in this document. The following is a list of Kubernetes versions supported by the respective etcd-druid versions.

Etcd-druid version | Kubernetes version
>=0.20 | >=1.21
>=0.14 && <0.20 | All versions supported
<0.14 | < 1.25

6.3.21 - Webhooks

Webhooks

The etcd-druid controller-manager registers certain admission webhooks that allow for validation or mutation of requests on resources in the cluster, in order to prevent misconfiguration and restrict access to the etcd cluster resources.

All webhooks that are a part of etcd-druid reside in package internal/webhook, as sub-packages.

Package Structure

The typical package structure for the webhooks that are part of etcd-druid is shown with the EtcdComponents Webhook:

internal/webhook/etcdcomponents
 ├── config.go
 ├── handler.go
 └── register.go
diff --git a/docs/docs/contribute/_print/index.html b/docs/docs/contribute/_print/index.html
index aa40095f933..cd9f2dc79e2 100644
--- a/docs/docs/contribute/_print/index.html
+++ b/docs/docs/contribute/_print/index.html
@@ -1,5 +1,5 @@
 Contribute | Gardener

This is the multi-page printable view of this section. Click here to print.

Return to the regular view of this page.

Contribute

Contributors guides for code and documentation

Contributing to Gardener

Welcome

Welcome to the Contributor section of Gardener. Here you can learn how to contribute your ideas and expertise to the project and help it grow even more.

Prerequisites

Before you begin contributing to Gardener, there are a couple of things you should become familiar with and complete first.

Code of Conduct

All members of the Gardener community must abide by the Contributor Covenant. Only by respecting each other can we develop a productive, collaborative community. diff --git a/docs/docs/contribute/code/cicd/index.html b/docs/docs/contribute/code/cicd/index.html index 3f0bfb4ef99..4ad323ce095 100644 --- a/docs/docs/contribute/code/cicd/index.html +++ b/docs/docs/contribute/code/cicd/index.html @@ -7,7 +7,7 @@ Typical workloads encompass the execution of tests and builds of a variety of technologies, as well as building and publishing container images, typically containing build results."> -