Releases: sighupio/distribution

Release v1.30.2

03 Apr 16:23
e40d62d

SIGHUP Distribution Release v1.30.2

Welcome to SD release v1.30.2.

The distribution is maintained with ❤️ by the SIGHUP by ReeVo team.

New Features since v1.30.1

Installer Updates

  • on-premises 📦 installer: v1.32.3
    • Add support for and install Kubernetes 1.31.7
    • [#116] Add support for etcd cluster on dedicated nodes
    • [#124] Add support for kubeadm and kubelet reconfiguration

Module updates

  • networking 📦 core module: v2.1.0
    • Updated Tigera operator to v1.36.5 (that includes calico v3.29.2)
    • Updated Cilium to v1.17.2
  • monitoring 📦 core module: v3.4.0
    • Updated alert-manager to v0.27.0
    • Updated x509-exporter to v3.18.1
    • Updated mimir to v2.15.0
    • Updated minio to version RELEASE.2025-02-28T09-55-16Z
  • logging 📦 core module: v5.0.0
    • Updated opensearch and opensearch-dashboards to v2.19.1
    • Updated logging-operator to v5.2.0
    • Updated loki to v3.4.2
    • Updated minio to version RELEASE.2025-02-28T09-55-16Z
  • ingress 📦 core module: v4.0.0
    • Updated cert-manager to v1.17.1
    • Updated external-dns to v0.16.1
    • Updated forecastle to v1.0.156
    • Updated nginx to v1.12.0
  • auth 📦 core module: v0.5.0
    • Updated dex to v2.42.0
    • Updated pomerium to v0.28.0
  • dr 📦 core module: v3.1.0
    • Updated velero to v1.15.2
    • Updated all velero plugins to v1.11.1
    • Added snapshot-controller v8.2.0
  • tracing 📦 core module: v1.2.0
    • Updated tempo to v2.1.1
    • Updated minio to version RELEASE.2025-02-28T09-55-16Z
  • policy 📦 core module: v1.14.0
    • Updated gatekeeper to v3.18.2
    • Updated kyverno to v1.13.4
  • aws 📦 module: v5.0.0
    • Updated cluster-autoscaler to v1.32.0
    • Removed snapshot-controller
    • Updated aws-load-balancer-controller to v2.12.0
    • Updated node-termination-handler to v1.25.0

Breaking changes 💔

  • Feature removal in Ingress NGINX Controller: the upstream Ingress NGINX Controller introduces some breaking changes in version 1.12.0, which is included in this version of the ingress module. We recommend reading the module's release notes for further information.

  • kustomize upgrade to version 5.6.0: plugins that use old, deprecated constructs in their kustomization.yaml may not work anymore. Please refer to the release notes of kustomize version 4.0.0 and version 5.0.0 for the breaking changes that might affect your plugins (see the migration example at the end of this section).

  • Loki update: starting with v5.0.0 of the Logging core module, Loki has been bumped to v3.4.2. Please refer to Loki's documentation for the complete release notes.

  • Policy update: potential breaking changes in Kyverno that depend on the target environment.

    • Removal of wildcard permissions: Prior versions contained wildcard view permissions which allowed Kyverno controllers to view all resources. In v1.13, these were replaced with more granular permissions. This change will not impact policies during admission controls but may impact reports, and may impact users with mutate and generate policies on CRs as the controller may no longer be able to view them.
    • Default exception settings: in Kyverno v1.12 and earlier, policy exceptions were enabled by default for all namespaces. The new default in Kyverno v1.13 no longer automatically enables exceptions for all namespaces (to address CVE-2024-48921) and instead requires explicitly configuring the namespaces to which exceptions apply, which may need to be added to your configuration.
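
    As a reference for the kustomize upgrade above, the following is a minimal sketch of a kustomization.yaml migrated from the deprecated constructs to their current equivalents (bases → resources, patchesStrategicMerge → patches, commonLabels → labels); the file and label names are only illustrative:

    # Before (deprecated, may break with kustomize 5.x):
    #
    # bases:
    #   - ../base
    # patchesStrategicMerge:
    #   - patch.yaml
    # commonLabels:
    #   app: my-app

    # After:
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - ../base
    patches:
      - path: patch.yaml
    labels:
      - pairs:
          app: my-app
        includeSelectors: true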

New features 🌟

  • [#355] Support for etcd cluster on dedicated nodes: the OnPremises provider now supports deploying etcd on dedicated nodes instead of on the control-plane nodes. For new clusters, users can define specific hosts for etcd, each with a name and an IP. If the etcd key is omitted, etcd will be provisioned on the control-plane nodes. Migrating etcd from the control-plane nodes to dedicated nodes (and vice versa) is not currently supported.

    To make use of this new feature, you need to define the hosts where etcd will be deployed in your configuration file using the .spec.kubernetes.etcd key, for example:

    ...
    spec:
      kubernetes:
        masters:
          hosts:
            - name: master1
              ip: 192.168.66.29
            - name: master2
              ip: 192.168.66.30
            - name: master3
              ip: 192.168.66.31
        etcd:
          hosts:
            - name: etcd1
              ip: 192.168.66.39
            - name: etcd2
              ip: 192.168.66.40
            - name: etcd3
              ip: 192.168.66.41
        nodes:
          - name: worker
            hosts:
              - name: worker1
                ip: 192.168.66.49
    ...
  • [#359] Add etcd backup to S3 and PVC: we added two new options for snapshotting your etcd cluster, allowing you to automatically and periodically save etcd snapshots to a PersistentVolumeClaim and/or to a remote S3 bucket.

    To make use of this new feature, you need to define how etcdBackup will be deployed in your configuration file, using the .spec.distribution.dr.etcdBackup key, for example:

    ...
    spec:
      distribution:
        dr:
          etcdBackup:
            type: "all" # it can be: pvc, s3, all (pvc and s3), none
            backupPrefix: "" # prefix for the filename of the snapshot
            pvc:
              schedule: "0 2 * * *"
              # name: test-pvc (optional name of the pvc: if set it uses an existing one, if left unset it creates one for you)
              size: 1G # size of the created PVC, ignored if name is set
              # accessModes: [] # accessMode used for the created PVC, ignored if name is set
              # storageClass: storageclass # storage class to use for the created PVC, ignored if name is set
              retentionTime: 10m # how long to keep the snapshots
            s3:
              schedule: "0 0 * * *"
              endpoint: play.min.io:9000 # s3 endpoint to upload your snapshots to
              accessKeyId: test
              secretAccessKey: test
              retentionTime: 10m
              bucketName: bucketname
    ...
  • [#368] Add support for kubeadm and kubelet reconfiguration in the OnPremises provider: this feature allows reconfiguring kubeadm and kubelet components after initial provisioning.

    The kubeletConfiguration key allows users to specify any parameter supported by the KubeletConfiguration object, at three different levels:

    • Global level (spec.kubernetes.advanced.kubeletConfiguration).
    • Master nodes level (spec.kubernetes.masters.kubeletConfiguration).
    • Worker node groups level (spec.kubernetes.nodes.kubeletConfiguration).

    Examples of use include controlling the maximum number of pods per core (podsPerCore), managing container log rotation (containerLogMaxSize), and setting Topology Manager options (topologyManagerPolicyOptions). All values must follow the official KubeletConfiguration specification. Usage examples:

    ...
    spec:
      kubernetes:
        masters:
          hosts:
            - name: master1
              ip: 192.168.56.20
          kubeletConfiguration:
            podsPerCore: 150
            systemReserved:
              memory: 1Gi
    ...
    ...
    spec:
      kubernetes:
        masters:
          hosts:
            - name: master1
              ip: 192.168.66.29
        nodes:
          - name: worker
            hosts:
              - name: worker1
                ip: 192.168.66.49
            kubeletConfiguration:
              podsPerCore: 200
              systemReserved:
                memory: 2Gi
    ...
    ...
    spec:
      kubernetes:
        masters:
          hosts:
            - name: master1
              ip: 192.168.56.20
        advanced:
          kubeletConfiguration:
            podsPerCore: 100
            enforceNodeAllocatable:
              - pods
              - system-reserved
            systemReserved:
              memory: 500Mi
    ...

    This feature also adds the kubeadm reconfiguration logic and exposes the missing apiServerCertSANs field, under the spec.kubernetes.advanced key.

    ...
    spec:
      kubernetes:
        masters:
          hosts:
            - name: master1
              ip: 192.168.56.20
        advanced:
          apiServerCertSANs:
            - my.domain.com
            - other.domain.net
    ...

Note

If you have previously made manual change...

Release v1.31.1

02 Apr 16:33
7afa25f

SIGHUP Distribution Release v1.31.1

Welcome to SD release v1.31.1.

The distribution is maintained with ❤️ by the SIGHUP by ReeVo team.

New Features since v1.31.0

Installer Updates

  • on-premises 📦 installer: v1.32.3
    • Add support for and install Kubernetes 1.31.7
    • [#116] Add support for etcd cluster on dedicated nodes
    • [#124] Add support for kubeadm and kubelet reconfiguration

Module updates

  • networking 📦 core module: v2.1.0
    • Updated Tigera operator to v1.36.5 (that includes calico v3.29.2)
    • Updated Cilium to v1.17.2
  • monitoring 📦 core module: v3.4.0
    • Updated alert-manager to v0.27.0
    • Updated x509-exporter to v3.18.1
    • Updated mimir to v2.15.0
    • Updated minio to version RELEASE.2025-02-28T09-55-16Z
  • logging 📦 core module: v5.0.0
    • Updated opensearch and opensearch-dashboards to v2.19.1
    • Updated logging-operator to v5.2.0
    • Updated loki to v3.4.2
    • Updated minio to version RELEASE.2025-02-28T09-55-16Z
  • ingress 📦 core module: v4.0.0
    • Updated cert-manager to v1.17.1
    • Updated external-dns to v0.16.1
    • Updated forecastle to v1.0.156
    • Updated nginx to v1.12.0
  • auth 📦 core module: v0.5.0
    • Updated dex to v2.42.0
    • Updated pomerium to v0.28.0
  • dr 📦 core module: v3.1.0
    • Updated velero to v1.15.2
    • Updated all velero plugins to v1.11.1
    • Added snapshot-controller v8.2.0
  • tracing 📦 core module: v1.2.0
    • Updated tempo to v2.1.1
    • Updated minio to version RELEASE.2025-02-28T09-55-16Z
  • policy 📦 core module: v1.14.0
    • Updated gatekeeper to v3.18.2
    • Updated kyverno to v1.13.4
  • aws 📦 module: v5.0.0
    • Updated cluster-autoscaler to v1.32.0
    • Removed snapshot-controller
    • Updated aws-load-balancer-controller to v2.12.0
    • Updated node-termination-handler to v1.25.0

Breaking changes 💔

  • Feature removal in Ingress NGINX Controller: the upstream Ingress NGINX Controller introduces some breaking changes in version 1.12.0, which is included in this version of the ingress module. We recommend reading the module's release notes for further information.

  • kustomize upgrade to version 5.6.0: plugins that use old, deprecated constructs in their kustomization.yaml may not work anymore. Please refer to the release notes of kustomize version 4.0.0 and version 5.0.0 for the breaking changes that might affect your plugins.

  • Loki update: starting with v5.0.0 of the Logging core module, Loki has been bumped to v3.4.2. Please refer to Loki's documentation for the complete release notes.

  • Policy update: potential breaking changes in Kyverno that depend on the target environment.

    • Removal of wildcard permissions: Prior versions contained wildcard view permissions which allowed Kyverno controllers to view all resources. In v1.13, these were replaced with more granular permissions. This change will not impact policies during admission controls but may impact reports, and may impact users with mutate and generate policies on CRs as the controller may no longer be able to view them.
    • Default exception settings: in Kyverno v1.12 and earlier, policy exceptions were enabled by default for all namespaces. The new default in Kyverno v1.13 no longer automatically enables exceptions for all namespaces (to address CVE-2024-48921) and instead requires explicitly configuring the namespaces to which exceptions apply, which may need to be added to your configuration.

New features 🌟

  • [#355] Support for etcd cluster on dedicated nodes: the OnPremises provider now supports deploying etcd on dedicated nodes instead of on the control-plane nodes. For new clusters, users can define specific hosts for etcd, each with a name and an IP. If the etcd key is omitted, etcd will be provisioned on the control-plane nodes. Migrating etcd from the control-plane nodes to dedicated nodes (and vice versa) is not currently supported.

    To make use of this new feature, you need to define the hosts where etcd will be deployed in your configuration file using the .spec.kubernetes.etcd key, for example:

    ...
    spec:
      kubernetes:
        masters:
          hosts:
            - name: master1
              ip: 192.168.66.29
            - name: master2
              ip: 192.168.66.30
            - name: master3
              ip: 192.168.66.31
        etcd:
          hosts:
            - name: etcd1
              ip: 192.168.66.39
            - name: etcd2
              ip: 192.168.66.40
            - name: etcd3
              ip: 192.168.66.41
        nodes:
          - name: worker
            hosts:
              - name: worker1
                ip: 192.168.66.49
    ...
  • [#359] Add etcd backup to S3 and PVC: we added two new options for snapshotting your etcd cluster, allowing you to automatically and periodically save etcd snapshots to a PersistentVolumeClaim and/or to a remote S3 bucket.

    To make use of this new feature, you need to define how etcdBackup will be deployed in your configuration file, using the .spec.distribution.dr.etcdBackup key, for example:

    ...
    spec:
      distribution:
        dr:
          etcdBackup:
            type: "all" # it can be: pvc, s3, all (pvc and s3), none
            backupPrefix: "" # prefix for the filename of the snapshot
            pvc:
              schedule: "0 2 * * *"
              # name: test-pvc (optional name of the pvc: if set it uses an existing one, if left unset it creates one for you)
              size: 1G # size of the created PVC, ignored if name is set
              # accessModes: [] # accessMode used for the created PVC, ignored if name is set
              # storageClass: storageclass # storage class to use for the created PVC, ignored if name is set
              retentionTime: 10m # how long to keep the snapshots
            s3:
              schedule: "0 0 * * *"
              endpoint: play.min.io:9000 # s3 endpoint to upload your snapshots to
              accessKeyId: test
              secretAccessKey: test
              retentionTime: 10m
              bucketName: bucketname
    ...
  • [#368] Add support for kubeadm and kubelet reconfiguration in the OnPremises provider: this feature allows reconfiguring kubeadm and kubelet components after initial provisioning.

    The kubeletConfiguration key allows users to specify any parameter supported by the KubeletConfiguration object, at three different levels:

    • Global level (spec.kubernetes.advanced.kubeletConfiguration).
    • Master nodes level (spec.kubernetes.masters.kubeletConfiguration).
    • Worker node groups level (spec.kubernetes.nodes.kubeletConfiguration).

    Examples of use include controlling the maximum number of pods per core (podsPerCore), managing container log rotation (containerLogMaxSize), and setting Topology Manager options (topologyManagerPolicyOptions). All values must follow the official KubeletConfiguration specification. Usage examples:

    ...
    spec:
      kubernetes:
        masters:
          hosts:
            - name: master1
              ip: 192.168.56.20
          kubeletConfiguration:
            podsPerCore: 150
            systemReserved:
              memory: 1Gi
    ...
    ...
    spec:
      kubernetes:
        masters:
          hosts:
            - name: master1
              ip: 192.168.66.29
        nodes:
          - name: worker
            hosts:
              - name: worker1
                ip: 192.168.66.49
            kubeletConfiguration:
              podsPerCore: 200
              systemReserved:
                memory: 2Gi
    ...
    ...
    spec:
      kubernetes:
        masters:
          hosts:
            - name: master1
              ip: 192.168.56.20
        advanced:
          kubeletConfiguration:
            podsPerCore: 100
            enforceNodeAllocatable:
              - pods
              - system-reserved
            systemReserved:
              memory: 500Mi
    ...

    This feature also adds the kubeadm reconfiguration logic and exposes the missing apiServerCertSANs field, under the spec.kubernetes.advanced key.

    ...
    spec:
      kubernetes:
        masters:
          hosts:
            - name: maste...

Release v1.29.7

02 Apr 17:05
62ef2ab

SIGHUP Distribution Release v1.29.7

Welcome to SD release v1.29.7.

The distribution is maintained with ❤️ by the SIGHUP by ReeVo team.

New Features since v1.29.6

Installer Updates

  • on-premises 📦 installer: v1.32.3
    • Add support for Kubernetes 1.31.7
    • [#116] Add support for etcd cluster on dedicated nodes
    • [#124] Add support for kubeadm and kubelet reconfiguration

Module updates

  • networking 📦 core module: v2.1.0
    • Updated Tigera operator to v1.36.5 (that includes calico v3.29.2)
    • Updated Cilium to v1.17.2
  • monitoring 📦 core module: v3.4.0
    • Updated alert-manager to v0.27.0
    • Updated x509-exporter to v3.18.1
    • Updated mimir to v2.15.0
    • Updated minio to version RELEASE.2025-02-28T09-55-16Z
  • logging 📦 core module: v5.0.0
    • Updated opensearch and opensearch-dashboards to v2.19.1
    • Updated logging-operator to v5.2.0
    • Updated loki to v3.4.2
    • Updated minio to version RELEASE.2025-02-28T09-55-16Z
  • ingress 📦 core module: v4.0.0
    • Updated cert-manager to v1.17.1
    • Updated external-dns to v0.16.1
    • Updated forecastle to v1.0.156
    • Updated nginx to v1.12.0
  • auth 📦 core module: v0.5.0
    • Updated dex to v2.42.0
    • Updated pomerium to v0.28.0
  • dr 📦 core module: v3.1.0
    • Updated velero to v1.15.2
    • Updated all velero plugins to v1.11.1
    • Added snapshot-controller v8.2.0
  • tracing 📦 core module: v1.2.0
    • Updated tempo to v2.1.1
    • Updated minio to version RELEASE.2025-02-28T09-55-16Z
  • policy 📦 core module: v1.14.0
    • Updated gatekeeper to v3.18.2
    • Updated kyverno to v1.13.4
  • aws 📦 module: v5.0.0
    • Updated cluster-autoscaler to v1.32.0
    • Removed snapshot-controller
    • Updated aws-load-balancer-controller to v2.12.0
    • Updated node-termination-handler to v1.25.0

Breaking changes 💔

  • Feature removal in Ingress NGINX Controller: the upstream Ingress NGINX Controller introduces some breaking changes in version 1.12.0, which is included in this version of the ingress module. We recommend reading the module's release notes for further information.

  • kustomize upgrade to version 5.6.0: plugins that use old, deprecated constructs in their kustomization.yaml may not work anymore. Please refer to the release notes of kustomize version 4.0.0 and version 5.0.0 for the breaking changes that might affect your plugins.

  • Loki update: starting with v5.0.0 of the Logging core module, Loki has been bumped to v3.4.2. Please refer to Loki's documentation for the complete release notes.

  • Policy update: potential breaking changes in Kyverno that depend on the target environment.

    • Removal of wildcard permissions: Prior versions contained wildcard view permissions which allowed Kyverno controllers to view all resources. In v1.13, these were replaced with more granular permissions. This change will not impact policies during admission controls but may impact reports, and may impact users with mutate and generate policies on CRs as the controller may no longer be able to view them.
    • Default exception settings: in Kyverno v1.12 and earlier, policy exceptions were enabled by default for all namespaces. The new default in Kyverno v1.13 no longer automatically enables exceptions for all namespaces (to address CVE-2024-48921) and instead requires explicitly configuring the namespaces to which exceptions apply, which may need to be added to your configuration.

New features 🌟

  • [#355] Support for etcd cluster on dedicated nodes: the OnPremises provider now supports deploying etcd on dedicated nodes instead of on the control-plane nodes. For new clusters, users can define specific hosts for etcd, each with a name and an IP. If the etcd key is omitted, etcd will be provisioned on the control-plane nodes. Migrating etcd from the control-plane nodes to dedicated nodes (and vice versa) is not currently supported.

    To make use of this new feature, you need to define the hosts where etcd will be deployed in your configuration file using the .spec.kubernetes.etcd key, for example:

    ...
    spec:
      kubernetes:
        masters:
          hosts:
            - name: master1
              ip: 192.168.66.29
            - name: master2
              ip: 192.168.66.30
            - name: master3
              ip: 192.168.66.31
        etcd:
          hosts:
            - name: etcd1
              ip: 192.168.66.39
            - name: etcd2
              ip: 192.168.66.40
            - name: etcd3
              ip: 192.168.66.41
        nodes:
          - name: worker
            hosts:
              - name: worker1
                ip: 192.168.66.49
    ...
  • [#359] Add etcd backup to S3 and PVC: we added two new options for snapshotting your etcd cluster, allowing you to automatically and periodically save etcd snapshots to a PersistentVolumeClaim and/or to a remote S3 bucket.

    To make use of this new feature, you need to define how etcdBackup will be deployed in your configuration file, using the .spec.distribution.dr.etcdBackup key, for example:

    ...
    spec:
      distribution:
        dr:
          etcdBackup:
            type: "all" # it can be: pvc, s3, all (pvc and s3), none
            backupPrefix: "" # prefix for the filename of the snapshot
            pvc:
              schedule: "0 2 * * *"
              # name: test-pvc (optional name of the pvc: if set it uses an existing one, if left unset it creates one for you)
              size: 1G # size of the created PVC, ignored if name is set
              # accessModes: [] # accessMode used for the created PVC, ignored if name is set
              # storageClass: storageclass # storage class to use for the created PVC, ignored if name is set
              retentionTime: 10m # how long to keep the snapshots
            s3:
              schedule: "0 0 * * *"
              endpoint: play.min.io:9000 # s3 endpoint to upload your snapshots to
              accessKeyId: test
              secretAccessKey: test
              retentionTime: 10m
              bucketName: bucketname
    ...
  • [#368] Add support for kubeadm and kubelet reconfiguration in the OnPremises provider: this feature allows reconfiguring kubeadm and kubelet components after initial provisioning.

    The kubeletConfiguration key allows users to specify any parameter supported by the KubeletConfiguration object, at three different levels:

    • Global level (spec.kubernetes.advanced.kubeletConfiguration).
    • Master nodes level (spec.kubernetes.masters.kubeletConfiguration).
    • Worker node groups level (spec.kubernetes.nodes.kubeletConfiguration).

    Examples of use include controlling the maximum number of pods per core (podsPerCore), managing container log rotation (containerLogMaxSize), and setting Topology Manager options (topologyManagerPolicyOptions). All values must follow the official KubeletConfiguration specification. Usage examples:

    ...
    spec:
      kubernetes:
        masters:
          hosts:
            - name: master1
              ip: 192.168.56.20
          kubeletConfiguration:
            podsPerCore: 150
            systemReserved:
              memory: 1Gi
    ...
    ...
    spec:
      kubernetes:
        masters:
          hosts:
            - name: master1
              ip: 192.168.66.29
        nodes:
          - name: worker
            hosts:
              - name: worker1
                ip: 192.168.66.49
            kubeletConfiguration:
              podsPerCore: 200
              systemReserved:
                memory: 2Gi
    ...
    ...
    spec:
      kubernetes:
        masters:
          hosts:
            - name: master1
              ip: 192.168.56.20
        advanced:
          kubeletConfiguration:
            podsPerCore: 100
            enforceNodeAllocatable:
              - pods
              - system-reserved
            systemReserved:
              memory: 500Mi
    ...

    This feature also adds the kubeadm reconfiguration logic and exposes the missing apiServerCertSANs field, under the spec.kubernetes.advanced key.

    ...
    spec:
      kubernetes:
        masters:
          hosts:
            - name: master1
              ip: 192.168.56.20
        advanced:
          apiServerCertSANs:
            - my.domain.com
            - other.domain.net
    ...

Note

If you have previously made manual changes to kube...

Prerelease v1.29.7-rc.2

02 Apr 11:05
Pre-release

SIGHUP Distribution Release v1.29.7

Welcome to SD release v1.29.7.

The distribution is maintained with ❤️ by the SIGHUP by ReeVo team.

New Features since v1.29.6

Installer Updates

  • on-premises 📦 installer: v1.32.3
    • Add support for Kubernetes 1.31.7
    • [#116] Add support for etcd cluster on dedicated nodes
    • [#124] Add support for kubeadm and kubelet reconfiguration

Module updates

  • networking 📦 core module: v2.1.0
    • Updated Tigera operator to v1.36.5 (that includes calico v3.29.2)
    • Updated Cilium to v1.17.2
  • monitoring 📦 core module: v3.4.0
    • Updated alert-manager to v0.27.0
    • Updated x509-exporter to v3.18.1
    • Updated mimir to v2.15.0
    • Updated minio to version RELEASE.2025-02-28T09-55-16Z
  • logging 📦 core module: v5.0.0
    • Updated opensearch and opensearch-dashboards to v2.19.1
    • Updated logging-operator to v5.2.0
    • Updated loki to v3.4.2
    • Updated minio to version RELEASE.2025-02-28T09-55-16Z
  • ingress 📦 core module: v4.0.0
    • Updated cert-manager to v1.17.1
    • Updated external-dns to v0.16.1
    • Updated forecastle to v1.0.156
    • Updated nginx to v1.12.0
  • auth 📦 core module: v0.5.0
    • Updated dex to v2.42.0
    • Updated pomerium to v0.28.0
  • dr 📦 core module: v3.1.0
    • Updated velero to v1.15.2
    • Updated all velero plugins to v1.11.1
    • Added snapshot-controller v8.2.0
  • tracing 📦 core module: v1.2.0
    • Updated tempo to v2.1.1
    • Updated minio to version RELEASE.2025-02-28T09-55-16Z
  • policy 📦 core module: v1.14.0
    • Updated gatekeeper to v3.18.2
    • Updated kyverno to v1.13.4
  • aws 📦 module: v5.0.0
    • Updated cluster-autoscaler to v1.32.0
    • Removed snapshot-controller
    • Updated aws-load-balancer-controller to v2.12.0
    • Updated node-termination-handler to v1.25.0

Breaking changes 💔

  • Feature removal in Ingress NGINX Controller: the upstream Ingress NGINX Controller introduces some breaking changes in version 1.12.0, which is included in this version of the ingress module. We recommend reading the module's release notes for further information.

  • kustomize upgrade to version 5.6.0: plugins that use old, deprecated constructs in their kustomization.yaml may not work anymore. Please refer to the release notes of kustomize version 4.0.0 and version 5.0.0 for the breaking changes that might affect your plugins.

  • Loki update: starting with v5.0.0 of the Logging core module, Loki has been bumped to v3.4.2. Please refer to Loki's documentation for the complete release notes.

  • Policy update: potential breaking changes in Kyverno that depend on the target environment.

    • Removal of wildcard permissions: Prior versions contained wildcard view permissions which allowed Kyverno controllers to view all resources. In v1.13, these were replaced with more granular permissions. This change will not impact policies during admission controls but may impact reports, and may impact users with mutate and generate policies on CRs as the controller may no longer be able to view them.
    • Default exception settings: in Kyverno v1.12 and earlier, policy exceptions were enabled by default for all namespaces. The new default in Kyverno v1.13 no longer automatically enables exceptions for all namespaces (to address CVE-2024-48921) and instead requires explicitly configuring the namespaces to which exceptions apply, which may need to be added to your configuration.

New features 🌟

  • [#355] Support for etcd cluster on dedicated nodes: the OnPremises provider now supports deploying etcd on dedicated nodes instead of on the control-plane nodes. For new clusters, users can define specific hosts for etcd, each with a name and an IP. If the etcd key is omitted, etcd will be provisioned on the control-plane nodes. Migrating etcd from the control-plane nodes to dedicated nodes (and vice versa) is not currently supported.

    To make use of this new feature, you need to define the hosts where etcd will be deployed in your configuration file using the .spec.kubernetes.etcd key, for example:

    ...
    spec:
      kubernetes:
        masters:
          hosts:
            - name: master1
              ip: 192.168.66.29
            - name: master2
              ip: 192.168.66.30
            - name: master3
              ip: 192.168.66.31
        etcd:
          hosts:
            - name: etcd1
              ip: 192.168.66.39
            - name: etcd2
              ip: 192.168.66.40
            - name: etcd3
              ip: 192.168.66.41
        nodes:
          - name: worker
            hosts:
              - name: worker1
                ip: 192.168.66.49
    ...
  • [#359] Add etcd backup to S3 and PVC: we added two new options for snapshotting your etcd cluster, allowing you to automatically and periodically save etcd snapshots to a PersistentVolumeClaim and/or to a remote S3 bucket.

    To make use of this new feature, you need to define how etcdBackup will be deployed in your configuration file, using the .spec.distribution.dr.etcdBackup key, for example:

    ...
    spec:
      distribution:
        dr:
          etcdBackup:
            type: "all" # it can be: pvc, s3, all (pvc and s3), none
            backupPrefix: "" # prefix for the filename of the snapshot
            pvc:
              schedule: "0 2 * * *"
              # name: test-pvc (optional name of the pvc: if set it uses an existing one, if left unset it creates one for you)
              size: 1G # size of the created PVC, ignored if name is set
              # accessModes: [] # accessMode used for the created PVC, ignored if name is set
              # storageClass: storageclass # storage class to use for the created PVC, ignored if name is set
              retentionTime: 10m # how long to keep the snapshots
            s3:
              schedule: "0 0 * * *"
              endpoint: play.min.io:9000 # s3 endpoint to upload your snapshots to
              accessKeyId: test
              secretAccessKey: test
              retentionTime: 10m
              bucketName: bucketname
    ...
  • [#368] Add support for kubeadm and kubelet reconfiguration in the OnPremises provider: this feature allows reconfiguring kubeadm and kubelet components after initial provisioning.

    The kubeletConfiguration key allows users to specify any parameter supported by the KubeletConfiguration object, at three different levels:

    • Global level (spec.kubernetes.advanced.kubeletConfiguration).
    • Master nodes level (spec.kubernetes.masters.kubeletConfiguration).
    • Worker node groups level (spec.kubernetes.nodes.kubeletConfiguration).

    Examples of use include controlling the maximum number of pods per core (podsPerCore), managing container log rotation (containerLogMaxSize), and setting Topology Manager options (topologyManagerPolicyOptions). All values must follow the official KubeletConfiguration specification. Usage examples:

    ...
    spec:
      kubernetes:
        masters:
          hosts:
            - name: master1
              ip: 192.168.56.20
          kubeletConfiguration:
            podsPerCore: 150
            systemReserved:
              memory: 1Gi
    ...
    ...
    spec:
      kubernetes:
        masters:
          hosts:
            - name: master1
              ip: 192.168.66.29
        nodes:
          - name: worker
            hosts:
              - name: worker1
                ip: 192.168.66.49
            kubeletConfiguration:
              podsPerCore: 200
              systemReserved:
                memory: 2Gi
    ...
    ...
    spec:
      kubernetes:
        masters:
          hosts:
            - name: master1
              ip: 192.168.56.20
        advanced:
          kubeletConfiguration:
            podsPerCore: 100
            enforceNodeAllocatable:
              - pods
              - system-reserved
            systemReserved:
              memory: 500Mi
    ...

    This feature also adds the kubeadm reconfiguration logic and exposes the missing apiServerCertSANs field, under the spec.kubernetes.advanced key.

    ...
    spec:
      kubernetes:
        masters:
          hosts:
            - name: master1
       ...

Release v1.30.1

16 Jan 17:21
6853060

Kubernetes Fury Distribution Release v1.30.1

Welcome to KFD release v1.30.1.

The distribution is maintained with ❤️ by the team SIGHUP.

New Features since v1.30.0

Installer Updates

Module updates

No module updates from the last version.

Breaking changes 💔

No breaking changes on this version.

New features 🌟

  • [#320] Custom Labels and Annotations for on-premises nodes: the configuration file for on-premises clusters now supports specifying custom labels and annotations for the control-plane nodes and for the node groups. The labels and annotations specified will be applied to all the nodes in the group (and deleted when removed from the configuration). Usage example:

    ...
    spec:
      kubernetes:
        masters:
          hosts:
            - name: master1
              ip: 192.168.66.29
            - name: master2
              ip: 192.168.66.30
            - name: master3
              ip: 192.168.66.31
          labels:
            node-role.kubernetes.io/dungeon-master: ""
            dnd-enabled: "true"
          annotations:
            level: "100"
        nodes:
          - name: infra
            hosts:
              - name: infra1
                ip: 192.168.66.32
              - name: infra2
                ip: 192.168.66.33
              - name: infra3
                ip: 192.168.66.34
            taints:
              - effect: NoSchedule
                key: node.kubernetes.io/role
                value: infra
            labels:
              a-label: with-content
              empty-label: ""
              label/sighup: "with-slashes"
              node-role.kubernetes.io/wizard: ""
              dnd-enabled: "true"
            annotations:
              with-spaces: "annotation with spaces"
              without-spaces: annotation-without-spaces
              level: "20"
          - name: worker
            hosts:
              - name: worker1
                ip: 192.168.66.35
            taints: []
            labels:
              node-role.kubernetes.io/barbarian: ""
              dnd-enabled: "true"
              label-custom: "with-value"
            annotations:
              level: "10"
          - name: empty-labels-and-annotations
            hosts:
              - name: empty1
                ip: 192.168.66.50
            taints: []
            labels:
            annotations:
          - name: undefined-labels-and-annotations
            hosts:
              - name: undefined1
                ip: 192.168.66.51
            taints: []
    ...

Fixes 🐞

No fixes in this version.

Upgrade procedure

Check the upgrade docs for the detailed procedure.

Release v1.29.6

16 Jan 16:25
c63bb47

Kubernetes Fury Distribution Release v1.29.6

Welcome to KFD release v1.29.6.

The distribution is maintained with ❤️ by the team SIGHUP.

New Features since v1.29.5

Installer Updates

Module updates

No module updates from the last version.

Breaking changes 💔

No breaking changes on this version.

New features 🌟

  • [#320] Custom Labels and Annotations for on-premises nodes: the configuration file for on-premises clusters now supports specifying custom labels and annotations for the control-plane nodes and for the node groups. The labels and annotations specified will be applied to all the nodes in the group (and deleted when removed from the configuration). Usage example:

    ...
    spec:
      kubernetes:
        masters:
          hosts:
            - name: master1
              ip: 192.168.66.29
            - name: master2
              ip: 192.168.66.30
            - name: master3
              ip: 192.168.66.31
          labels:
            node-role.kubernetes.io/dungeon-master: ""
            dnd-enabled: "true"
          annotations:
            level: "100"
        nodes:
          - name: infra
            hosts:
              - name: infra1
                ip: 192.168.66.32
              - name: infra2
                ip: 192.168.66.33
              - name: infra3
                ip: 192.168.66.34
            taints:
              - effect: NoSchedule
                key: node.kubernetes.io/role
                value: infra
            labels:
              a-label: with-content
              empty-label: ""
              label/sighup: "with-slashes"
              node-role.kubernetes.io/wizard: ""
              dnd-enabled: "true"
            annotations:
              with-spaces: "annotation with spaces"
              without-spaces: annotation-without-spaces
              level: "20"
          - name: worker
            hosts:
              - name: worker1
                ip: 192.168.66.35
            taints: []
            labels:
              node-role.kubernetes.io/barbarian: ""
              dnd-enabled: "true"
              label-custom: "with-value"
            annotations:
              level: "10"
          - name: empty-labels-and-annotations
            hosts:
              - name: empty1
                ip: 192.168.66.50
            taints: []
            labels:
            annotations:
          - name: undefined-labels-and-annotations
            hosts:
              - name: undefined1
                ip: 192.168.66.51
            taints: []
    ...

Fixes 🐞

No fixes in this version.

Upgrade procedure

Check the upgrade docs for the detailed procedure.

Release v1.28.6

16 Jan 12:33
cdc033b

Kubernetes Fury Distribution Release v1.28.6

Welcome to KFD release v1.28.6.

The distribution is maintained with ❤️ by the team SIGHUP.

New Features since v1.28.5

Installer Updates

Module updates

No module updates from the last version.

Breaking changes 💔

No breaking changes on this version.

New features 🌟

  • [#320] Custom Labels and Annotations for on-premises nodes: the configuration file for on-premises clusters now supports specifying custom labels and annotations for the control-plane nodes and for the node groups. The labels and annotations specified will be applied to all the nodes in the group (and deleted when removed from the configuration). Usage example:

    ...
    spec:
      kubernetes:
        masters:
          hosts:
            - name: master1
              ip: 192.168.66.29
            - name: master2
              ip: 192.168.66.30
            - name: master3
              ip: 192.168.66.31
          labels:
            node-role.kubernetes.io/dungeon-master: ""
            dnd-enabled: "true"
          annotations:
            level: "100"
        nodes:
          - name: infra
            hosts:
              - name: infra1
                ip: 192.168.66.32
              - name: infra2
                ip: 192.168.66.33
              - name: infra3
                ip: 192.168.66.34
            taints:
              - effect: NoSchedule
                key: node.kubernetes.io/role
                value: infra
            labels:
              a-label: with-content
              empty-label: ""
              label/sighup: "with-slashes"
              node-role.kubernetes.io/wizard: ""
              dnd-enabled: "true"
            annotations:
              with-spaces: "annotation with spaces"
              without-spaces: annotation-without-spaces
              level: "20"
          - name: worker
            hosts:
              - name: worker1
                ip: 192.168.66.35
            taints: []
            labels:
              node-role.kubernetes.io/barbarian: ""
              dnd-enabled: "true"
              label-custom: "with-value"
            annotations:
              level: "10"
          - name: empty-labels-and-annotations
            hosts:
              - name: empty1
                ip: 192.168.66.50
            taints: []
            labels:
            annotations:
          - name: undefined-labels-and-annotations
            hosts:
              - name: undefined1
                ip: 192.168.66.51
            taints: []
    ...

Fixes 🐞

No fixes in this version.

Upgrade procedure

Check the upgrade docs for the detailed procedure.

Release v1.31.0

24 Dec 11:24
75bb61f

Kubernetes Fury Distribution Release v1.31.0

Caution

Use furyctl >= 0.31.1 to upgrade to this version.
This KFD version used version 1.31.4 of the on-premises installer, which had an issue in the upgrade process to v1.31.
The issue was patched in v1.31.4-rev.1 of the installer, which is used automatically with furyctl >= 0.31.1.

Welcome to KFD release v1.31.0.

The distribution is maintained with ❤️ by the team SIGHUP.

New Features since v1.30.0

Installer Updates

Module updates

No module updates from the last version.

Breaking changes 💔

No breaking changes on this version.

New features 🌟

  • [#320] Custom Labels and Annotations for on-premises nodes: the configuration file for on-premises clusters now supports specifying custom labels and annotations for the control-plane nodes and for the node groups. The labels and annotations specified will be applied to all the nodes in the group (and deleted when removed from the configuration). Usage example:

    ...
    spec:
      kubernetes:
        masters:
          hosts:
            - name: master1
              ip: 192.168.66.29
            - name: master2
              ip: 192.168.66.30
            - name: master3
              ip: 192.168.66.31
          labels:
            node-role.kubernetes.io/dungeon-master: ""
            dnd-enabled: "true"
          annotations:
            level: "100"
        nodes:
          - name: infra
            hosts:
              - name: infra1
                ip: 192.168.66.32
              - name: infra2
                ip: 192.168.66.33
              - name: infra3
                ip: 192.168.66.34
            taints:
              - effect: NoSchedule
                key: node.kubernetes.io/role
                value: infra
            labels:
              a-label: with-content
              empty-label: ""
              label/sighup: "with-slashes"
              node-role.kubernetes.io/wizard: ""
              dnd-enabled: "true"
            annotations:
              with-spaces: "annotation with spaces"
              without-spaces: annotation-without-spaces
              level: "20"
          - name: worker
            hosts:
              - name: worker1
                ip: 192.168.66.35
            taints: []
            labels:
              node-role.kubernetes.io/barbarian: ""
              dnd-enabled: "true"
              label-custom: "with-value"
            annotations:
              level: "10"
          - name: empty-labels-and-annotations
            hosts:
              - name: empty1
                ip: 192.168.66.50
            taints: []
            labels:
            annotations:
          - name: undefined-labels-and-annotations
            hosts:
              - name: undefined1
                ip: 192.168.66.51
            taints: []
    ...
  • [#322] Apply step now uses kapp: the manifest apply for the distribution phase and for kustomize plugins is now done via kapp instead of kubectl. kapp applies manifests and verifies that everything being installed is functioning correctly. It can also apply CRDs, wait for them to become available, and then apply the CRs that reference them. This significantly reduces and simplifies the complexity of apply operations, which were previously performed with plain kubectl.

Fixes 🐞

No fixes in this version.

Upgrade procedure

Check the upgrade docs for the detailed procedure.

Release v1.29.5

29 Nov 11:52
8bb6eb5

Kubernetes Fury Distribution Release v1.29.5

Welcome to KFD release v1.29.5. This patch release also updates Kubernetes from 1.29.3 to 1.29.10 on the OnPremises provider.

The distribution is maintained with ❤️ by the team SIGHUP.

New Features since v1.29.4

Installer Updates

  • on-premises 📦 installer: v1.30.6
    • Updated etcd default version to 3.5.15
    • Updated HAProxy version to 3.0 LTS
    • Updated containerd default version to 1.7.23
    • Added support for Kubernetes versions 1.30.6, 1.29.10 and 1.28.15
  • eks 📦 installer: v3.2.0
    • Introduced AMI selection type: alinux2023 and alinux2
    • Fixed eks-managed nodepool node labels

Module updates

  • networking 📦 core module: v2.0.0
    • Updated Tigera operator to v1.36.1 (that includes calico v3.29.0)
    • Updated Cilium to v1.16.3
  • monitoring 📦 core module: v3.3.0
    • Updated blackbox-exporter to v0.25.0
    • Updated grafana to v11.3.0
    • Updated kube-rbac-proxy to v0.18.1
    • Updated kube-state-metrics to v2.13.0
    • Updated node-exporter to v1.8.2
    • Updated prometheus-adapter to v0.12.0
    • Updated prometheus-operator to v0.76.2
    • Updated prometheus to v2.54.1
    • Updated x509-exporter to v3.17.0
    • Updated mimir to v2.14.0
    • Updated minio to version RELEASE.2024-10-13T13-34-11Z
  • logging 📦 core module: v4.0.0
    • Updated opensearch and opensearch-dashboards to v2.17.1
    • Updated logging-operator to v4.10.0
    • Updated loki to v2.9.10
    • Updated minio to version RELEASE.2024-10-13T13-34-11Z
  • ingress 📦 core module: v3.0.1
    • Updated cert-manager to v1.16.1
    • Updated external-dns to v0.15.0
    • Updated forecastle to v1.0.145
    • Updated nginx to v1.11.3
  • auth 📦 core module: v0.4.0
    • Updated dex to v2.41.1
    • Updated pomerium to v0.27.1
  • dr 📦 core module: v3.0.0
    • Updated velero to v1.15.0
    • Updated all velero plugins to v1.11.0
    • Added snapshot-controller v8.0.1
  • tracing 📦 core module: v1.1.0
    • Updated tempo to v2.6.0
    • Updated minio to version RELEASE.2024-10-13T13-34-11Z
  • opa 📦 core module: v1.13.0
    • Updated gatekeeper to v3.17.1
    • Updated gatekeeper-policy-manager to v1.0.13
    • Updated kyverno to v1.12.6
  • aws 📦 module: v4.3.0
    • Updated cluster-autoscaler to v1.30.0
    • Updated snapshot-controller to v8.1.0
    • Updated aws-load-balancer-controller to v2.10.0
    • Updated node-termination-handler to v1.22.0

Breaking changes 💔

  • Loki store and schema change: a new store and schema have been introduced to improve the efficiency, speed, and scalability of Loki clusters. See the "New features" section below for more details.
  • DR schema change: a new format for the schedule customization has been introduced to improve usability. See the "New features" section below for more details.
  • Kyverno validation failure action: Kyverno has deprecated audit and enforce as valid options for validationFailureAction; the valid options are now Audit and Enforce, in title case. Adjust your .spec.distribution.modules.policy.kyverno.validationFailureAction value accordingly.
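
    For example, a configuration that previously set the lowercase value should be updated as follows:

    ...
    spec:
      distribution:
        modules:
          policy:
            kyverno:
              validationFailureAction: Enforce # previously: enforce
    ...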

New features 🌟

  • New option for Logging: Loki's configuration has been extended with a new required tsdbStartDate option that allows migrating towards the TSDB store and schema v13 (note: this is a breaking change):

    ...
    spec:
      distribution:
        modules:
          logging:
            loki:
              tsdbStartDate: "2024-11-18"
    ...
    • tsdbStartDate (required): a string in ISO 8601 date format that represents the day starting from which Loki will record logs with the new store and schema.

    ℹ️ Note: Loki will assume the day starts at midnight UTC of the specified date.

  • Improved configurable schedules for DR backups: the schedule configuration has been updated to enhance the usability of schedule customization (note: this is a breaking change):

    ...
    spec:
      distribution:
        modules:
          dr:
            velero:
              schedules:
                install: true
                definitions:
                  manifests:
                    schedule: "*/15 * * * *"
                    ttl: "720h0m0s"
                  full:
                    schedule: "0 1 * * *"
                    ttl: "720h0m0s"
                    snapshotMoveData: false
    ...
  • DR snapshotMoveData options for full schedule: a new parameter has been introduced in the velero full schedule to enable the snapshotMoveData feature. This feature allows data captured from a snapshot to be copied to the object storage location. Important: Setting this parameter to true will cause Velero to upload all data from the snapshotted volumes to S3 using Kopia. While backups are deduplicated, significant storage usage is still expected. To enable this use the following parameter in the full schedule configuration:

    ...
    spec:
      distribution:
        modules:
          dr:
            velero:
              schedules:
                install: true
                definitions:
                  full:
                    snapshotMoveData: true
    ...

General example to enable Volume Snapshotting on rook-ceph (from our storage add-on module):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: velero-snapclass
  labels:
    velero.io/csi-volumesnapshot-class: "true"
driver: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/snapshotter-secret-namespace: rook-ceph
deletionPolicy: Retain

deletionPolicy: Retain is important because if the VolumeSnapshot is deleted from the namespace, the cluster-wide VolumeSnapshotContent CR will be preserved, keeping the snapshot on the storage backend that the cluster is using.

NOTE: For EKSCluster provider, a default VolumeSnapshotClass is created automatically.

  • DR optional snapshot-controller installation: to leverage VolumeSnapshots on the OnPremises and KFDDistribution providers, a new option has been added to velero to install the snapshot-controller component. Before enabling this option, make sure that no other snapshot-controller component is already deployed in your cluster. By default this parameter is false.

    ...
    spec:
      distribution:
        modules:
          dr:
            velero:
              snapshotController:
                install: true
    ...
  • Prometheus ScrapeConfigs: the Monitoring module now enables the ScrapeConfig CRD from the Prometheus Operator by default. All the ScrapeConfig objects present in the cluster will now be detected by the operator. ScrapeConfig objects instruct Prometheus to scrape specific endpoints, which can also be outside the cluster; see the sketch below.
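
    A minimal sketch of a ScrapeConfig targeting an endpoint outside the cluster could look like the following; the name, namespace, and target are only illustrative, and the object may need labels matching your Prometheus scrapeConfigSelector:

    apiVersion: monitoring.coreos.com/v1alpha1
    kind: ScrapeConfig
    metadata:
      name: external-endpoint # illustrative name
      namespace: monitoring   # assumed monitoring namespace
    spec:
      staticConfigs:
        - targets:
            - 192.168.66.100:9100 # e.g. a node exporter running outside the cluster
      metricsPath: /metrics
      scrapeInterval: 30s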

  • Components Hardening: we hardened the security context of several components, improving the out-of-the-box security of the distribution.

  • On-premises minimal clusters: it is now possible to create clusters with only control-plane nodes, for minimal installations that need to handle only light workloads.

  • Helm Plugins: Helm plugins now allow disabling validation at installation time with the disableValidationOnInstall option. This can be useful when installing Helm charts that fail the diff step on a first installation, for example.
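
    A minimal sketch, assuming Helm releases are declared under .spec.plugins.helm as in previous versions; the repository, chart, and release names below are only illustrative:

    ...
    spec:
      plugins:
        helm:
          repositories:
            - name: example # illustrative repository
              url: https://charts.example.com
          releases:
            - name: my-app # illustrative release
              namespace: my-app
              chart: example/my-app
              version: "1.0.0"
              disableValidationOnInstall: true # skip the validation/diff step on the first install
    ...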

  • Network Policies (experimental 🧪): a new experimental feature is introduced in this version. You can now enable the installation of network policies that restrict traffic across all the infrastructural namespaces of KFD to only the access needed for their proper functioning, denying everything else and improving the overall security of the cluster. This experimental feature is currently available only for OnPremises clusters. Read more in the Pull Request introducing the feature and in the related documentation.

  • Global CVE patched images for core modules: This distribution version includes images that have been patched for OS vulnerabilities (CVE). To use these patched images, select the following option:

    ...
    spec:
      distribution:
        common:
          registry: registry.sighup.io/fury-secured
    ...

Fixes 🐞

  • Improved Configuration Schema documentation: documentation for the configuration schemas was lacking, we great...

Release v1.28.5

29 Nov 11:51
dfac8ad

Kubernetes Fury Distribution Release v1.28.5

Welcome to KFD release v1.28.5. This patch release also updates Kubernetes from 1.28.7 to 1.28.15 on the OnPremises provider.

The distribution is maintained with ❤️ by the team SIGHUP.

New Features since v1.28.4

Installer Updates

  • on-premises 📦 installer: v1.30.6
    • Updated etcd default version to 3.5.15
    • Updated HAProxy version to 3.0 LTS
    • Updated containerd default version to 1.7.23
    • Added support for Kubernetes versions 1.30.6, 1.29.10 and 1.28.15
  • eks 📦 installer: v3.2.0
    • Introduced AMI selection type: alinux2023 and alinux2
    • Fixed eks-managed nodepool node labels

Module updates

  • networking 📦 core module: v2.0.0
    • Updated Tigera operator to v1.36.1 (that includes calico v3.29.0)
    • Updated Cilium to v1.16.3
  • monitoring 📦 core module: v3.3.0
    • Updated blackbox-exporter to v0.25.0
    • Updated grafana to v11.3.0
    • Updated kube-rbac-proxy to v0.18.1
    • Updated kube-state-metrics to v2.13.0
    • Updated node-exporter to v1.8.2
    • Updated prometheus-adapter to v0.12.0
    • Updated prometheus-operator to v0.76.2
    • Updated prometheus to v2.54.1
    • Updated x509-exporter to v3.17.0
    • Updated mimir to v2.14.0
    • Updated minio to version RELEASE.2024-10-13T13-34-11Z
  • logging 📦 core module: v4.0.0
    • Updated opensearch and opensearch-dashboards to v2.17.1
    • Updated logging-operator to v4.10.0
    • Updated loki to v2.9.10
    • Updated minio to version RELEASE.2024-10-13T13-34-11Z
  • ingress 📦 core module: v3.0.1
    • Updated cert-manager to v1.16.1
    • Updated external-dns to v0.15.0
    • Updated forecastle to v1.0.145
    • Updated nginx to v1.11.3
  • auth 📦 core module: v0.4.0
    • Updated dex to v2.41.1
    • Updated pomerium to v0.27.1
  • dr 📦 core module: v3.0.0
    • Updated velero to v1.15.0
    • Updated all velero plugins to v1.11.0
    • Added snapshot-controller v8.0.1
  • tracing 📦 core module: v1.1.0
    • Updated tempo to v2.6.0
    • Updated minio to version RELEASE.2024-10-13T13-34-11Z
  • opa 📦 core module: v1.13.0
    • Updated gatekeeper to v3.17.1
    • Updated gatekeeper-policy-manager to v1.0.13
    • Updated kyverno to v1.12.6
  • aws 📦 module: v4.3.0
    • Updated cluster-autoscaler to v1.30.0
    • Updated snapshot-controller to v8.1.0
    • Updated aws-load-balancer-controller to v2.10.0
    • Updated node-termination-handler to v1.22.0

Breaking changes 💔

  • Loki store and schema change: a new store and schema have been introduced to improve the efficiency, speed, and scalability of Loki clusters. See the "New features" section below for more details.
  • DR schema change: a new format for the schedule customization has been introduced to improve usability. See the "New features" section below for more details.
  • Kyverno validation failure action: Kyverno has deprecated audit and enforce as valid options for validationFailureAction; the valid options are now Audit and Enforce, in title case. Adjust your .spec.distribution.modules.policy.kyverno.validationFailureAction value accordingly.

New features 🌟

  • New option for Logging: Loki's configuration has been extended with a new required tsdbStartDate option that allows migrating towards the TSDB store and schema v13 (note: this is a breaking change):

    ...
    spec:
      distribution:
        modules:
          logging:
            loki:
              tsdbStartDate: "2024-11-18"
    ...
    • tsdbStartDate (required): a string in ISO 8601 date format that represents the day starting from which Loki will record logs with the new store and schema.

    ℹ️ Note: Loki will assume the day starts at midnight UTC of the specified date.

  • Improved configurable schedules for DR backups: the schedule configuration has been updated to enhance the usability of schedule customization (note: this is a breaking change):

    ...
    spec:
      distribution:
        modules:
          dr:
            velero:
              schedules:
                install: true
                definitions:
                  manifests:
                    schedule: "*/15 * * * *"
                    ttl: "720h0m0s"
                  full:
                    schedule: "0 1 * * *"
                    ttl: "720h0m0s"
                    snapshotMoveData: false
    ...
  • DR snapshotMoveData options for full schedule: a new parameter has been introduced in the velero full schedule to enable the snapshotMoveData feature. This feature allows data captured from a snapshot to be copied to the object storage location. Important: Setting this parameter to true will cause Velero to upload all data from the snapshotted volumes to S3 using Kopia. While backups are deduplicated, significant storage usage is still expected. To enable this use the following parameter in the full schedule configuration:

    ...
    spec:
      distribution:
        modules:
          dr:
            velero:
              schedules:
                install: true
                definitions:
                  full:
                    snapshotMoveData: true
    ...

General example to enable Volume Snapshotting on rook-ceph (from our storage add-on module):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: velero-snapclass
  labels:
    velero.io/csi-volumesnapshot-class: "true"
driver: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/snapshotter-secret-namespace: rook-ceph
deletionPolicy: Retain

deletionPolicy: Retain is important because if the VolumeSnapshot is deleted from the namespace, the cluster-wide VolumeSnapshotContent CR will be preserved, keeping the snapshot on the storage backend that the cluster is using.

NOTE: For EKSCluster provider, a default VolumeSnapshotClass is created automatically.

  • DR optional snapshot-controller installation: to leverage VolumeSnapshots on the OnPremises and KFDDistribution providers, a new option has been added to velero to install the snapshot-controller component. Before enabling this option, make sure that no other snapshot-controller component is already deployed in your cluster. By default this parameter is false.

    ...
    spec:
      distribution:
        modules:
          dr:
            velero:
              snapshotController:
                install: true
    ...
  • Prometheus ScrapeConfigs: the Monitoring module now enables the ScrapeConfig CRD from the Prometheus Operator by default. All the ScrapeConfig objects present in the cluster will now be detected by the operator. ScrapeConfig objects instruct Prometheus to scrape specific endpoints, which can also be outside the cluster.

  • Components Hardening: we hardened the security context of several components, improving the out-of-the-box security of the distribution.

  • On-premises minimal clusters: it is now possible to create clusters with only control-plane nodes, for minimal installations that need to handle only light workloads.

  • Helm Plugins: Helm plugins now allow disabling validation at installation time with the disableValidationOnInstall option. This can be useful when installing Helm charts that fail the diff step on a first installation, for example.

  • Network Policies (experimental 🧪): a new experimental feature is introduced in this version. You can now enable the installation of network policies that restrict traffic across all the infrastructural namespaces of KFD to only the access needed for their proper functioning, denying everything else and improving the overall security of the cluster. This experimental feature is currently available only for OnPremises clusters. Read more in the Pull Request introducing the feature and in the related documentation.

  • Global CVE patched images for core modules: This distribution version includes images that have been patched for OS vulnerabilities (CVE). To use these patched images, select the following option:

    ...
    spec:
      distribution:
        common:
          registry: registry.sighup.io/fury-secured
    ...

Fixes 🐞

  • Improved Configuration Schema documentation: documentation for the configuration schemas was lacking, we great...