diff --git a/assets/prometheus-federator/prometheus-federator-105.0.0-rc.1+up0.4.2.tgz b/assets/prometheus-federator/prometheus-federator-105.0.0-rc.1+up0.4.2.tgz new file mode 100644 index 0000000000..662b6cff18 Binary files /dev/null and b/assets/prometheus-federator/prometheus-federator-105.0.0-rc.1+up0.4.2.tgz differ diff --git a/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/Chart.yaml b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/Chart.yaml new file mode 100644 index 0000000000..25d97114b2 --- /dev/null +++ b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/Chart.yaml @@ -0,0 +1,21 @@ +annotations: + catalog.cattle.io/certified: rancher + catalog.cattle.io/display-name: Prometheus Federator + catalog.cattle.io/kube-version: '>= 1.28.0-0 < 1.32.0-0' + catalog.cattle.io/namespace: cattle-monitoring-system + catalog.cattle.io/os: linux,windows + catalog.cattle.io/permits-os: linux,windows + catalog.cattle.io/provides-gvr: helm.cattle.io.projecthelmchart/v1alpha1 + catalog.cattle.io/rancher-version: '>= 2.9.0-0 < 2.10.0-0' + catalog.cattle.io/release-name: prometheus-federator +apiVersion: v2 +appVersion: 0.3.5 +dependencies: +- condition: helmProjectOperator.enabled + name: helmProjectOperator + repository: file://./charts/helmProjectOperator + version: 0.2.1 +description: Prometheus Federator +icon: https://raw.githubusercontent.com/rancher/prometheus-federator/main/assets/logos/prometheus-federator.svg +name: prometheus-federator +version: 105.0.0-rc.1+up0.4.2 diff --git a/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/README.md b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/README.md new file mode 100644 index 0000000000..7da4edfc22 --- /dev/null +++ b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/README.md @@ -0,0 +1,120 @@ +# Prometheus Federator + +This chart deploys a Helm Project Operator (based on the [rancher/helm-project-operator](https://github.com/rancher/helm-project-operator)), an operator that manages deploying Helm charts, each containing a Project Monitoring Stack, where each stack contains: +- [Prometheus](https://prometheus.io/) (managed externally by [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator)) +- [Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/) (managed externally by [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator)) +- [Grafana](https://github.com/helm/charts/tree/master/stable/grafana) (deployed via an embedded Helm chart) +- Default PrometheusRules and Grafana dashboards based on the collection of community-curated resources from [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus/) +- Default ServiceMonitors that watch the deployed resources + +> **Important Note: Prometheus Federator is designed to be deployed alongside an existing Prometheus Operator deployment in a cluster that has already installed the Prometheus Operator CRDs.** + +By default, the chart is configured and intended to be deployed alongside [rancher-monitoring](https://rancher.com/docs/rancher/v2.6/en/monitoring-alerting/), which deploys Prometheus Operator alongside a Cluster Prometheus that each Project Monitoring Stack is configured to federate namespace-scoped metrics from by default. 
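For orientation, here is a minimal sketch of a ProjectHelmChart that would ask the operator to deploy a Project Monitoring Stack (the resource name, project ID, and namespace are illustrative assumptions; the `apiVersion`, `kind`, and `spec` fields are the ones described in this README):

```yaml
# Hypothetical example: requests a Project Monitoring Stack for a Rancher
# Project with the assumed ID p-example; created in that Project's
# Project Registration Namespace.
apiVersion: helm.cattle.io/v1alpha1
kind: ProjectHelmChart
metadata:
  name: project-monitoring
  namespace: cattle-project-p-example
spec:
  helmApiVersion: monitoring.cattle.io/v1alpha1
  values: {}  # overrides for the underlying rancher-project-monitoring chart
```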
+ +## Pre-Installation: Using Prometheus Federator with Rancher and rancher-monitoring + +If you are running your cluster on [Rancher](https://rancher.com/) and already have [rancher-monitoring](https://rancher.com/docs/rancher/v2.6/en/monitoring-alerting/) deployed onto your cluster, Prometheus Federator's default configuration should already work with your existing Cluster Monitoring Stack; however, here are some notes on how we recommend you configure rancher-monitoring to optimize the security and usability of Prometheus Federator in your cluster: + +### Ensure the cattle-monitoring-system namespace is placed into the System Project (or a similarly locked down Project that has access to other Projects in the cluster) + +Prometheus Operator's security model expects that the namespace it is deployed into (`cattle-monitoring-system`) has limited access for anyone except Cluster Admins to avoid privilege escalation via exec'ing into Pods (such as the Jobs executing Helm operations). In addition, deploying Prometheus Federator and all Project Prometheus stacks into the System Project ensures that each Project Prometheus is able to reach out to scrape workloads across all Projects (even if Network Policies are defined via Project Network Isolation) while limiting the ability of Project Owners, Project Members, and other users to access data they shouldn't have access to (e.g. by being allowed to exec into pods or to set up the ability to scrape namespaces outside of a given Project). + +### Configure rancher-monitoring to only watch for resources created by the Helm chart itself + +Since each Project Monitoring Stack will already watch the other namespaces and collect additional custom workload metrics or dashboards, it's recommended to configure the following settings on all selectors to ensure that the Cluster Prometheus Stack only monitors resources created by the Helm Chart itself: + +``` +matchLabels: + release: "rancher-monitoring" +``` + +The following selector fields are recommended to have this value: +- `.Values.alertmanager.alertmanagerSpec.alertmanagerConfigSelector` +- `.Values.prometheus.prometheusSpec.serviceMonitorSelector` +- `.Values.prometheus.prometheusSpec.podMonitorSelector` +- `.Values.prometheus.prometheusSpec.ruleSelector` +- `.Values.prometheus.prometheusSpec.probeSelector` + +Once this setting is turned on, you can always create ServiceMonitors or PodMonitors that are picked up by the Cluster Prometheus by adding the label `release: "rancher-monitoring"` to them (in which case they will be ignored by Project Monitoring Stacks automatically by default, even if the namespaces in which those ServiceMonitors or PodMonitors reside are not system namespaces). 
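Put together, a sketch of the corresponding rancher-monitoring values override might look like the following (only the selector fields listed above are shown; everything else is left at its defaults):

```yaml
# Sketch of a rancher-monitoring values.yaml override applying the
# recommended matchLabels to all of the selector fields listed above.
alertmanager:
  alertmanagerSpec:
    alertmanagerConfigSelector:
      matchLabels:
        release: "rancher-monitoring"
prometheus:
  prometheusSpec:
    serviceMonitorSelector:
      matchLabels:
        release: "rancher-monitoring"
    podMonitorSelector:
      matchLabels:
        release: "rancher-monitoring"
    ruleSelector:
      matchLabels:
        release: "rancher-monitoring"
    probeSelector:
      matchLabels:
        release: "rancher-monitoring"
```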
+ +> Note: If you don't want to allow users to be able to create ServiceMonitors and PodMonitors that aggregate into the Cluster Prometheus in Project namespaces, you can additionally set the namespaceSelectors on the chart to only target system namespaces (which must contain `cattle-monitoring-system` and `cattle-dashboards`, where rancher-monitoring deploys resources by default; you will also need to monitor the `default` namespace to get apiserver metrics or create a custom ServiceMonitor to scrape apiserver metrics from the Service residing in the default namespace) to limit your Cluster Prometheus from picking up other Prometheus Operator CRs; in that case, it is recommended to set `.Values.prometheus.prometheusSpec.ignoreNamespaceSelectors=true` to allow you to define ServiceMonitors that can monitor non-system namespaces from within a system namespace. + +In addition, if you modified the default `.Values.grafana.sidecar.*.searchNamespace` values on the Grafana Helm subchart for Monitoring V2, it is also recommended to remove the overrides or ensure that your defaults are scoped to only system namespaces for the following values: +- `.Values.grafana.sidecar.dashboards.searchNamespace` (default `cattle-dashboards`) +- `.Values.grafana.sidecar.datasources.searchNamespace` (default `null`, which means it uses the release namespace `cattle-monitoring-system`) +- `.Values.grafana.sidecar.plugins.searchNamespace` (default `null`, which means it uses the release namespace `cattle-monitoring-system`) +- `.Values.grafana.sidecar.notifiers.searchNamespace` (default `null`, which means it uses the release namespace `cattle-monitoring-system`) + +### Increase the CPU / memory limits of the Cluster Prometheus + +Depending on a cluster's setup, it's generally recommended to give a large amount of dedicated memory to the Cluster Prometheus to avoid restarts due to out-of-memory errors (OOMKilled), usually caused by churn created in the cluster that causes a large number of high cardinality metrics to be generated and ingested by Prometheus within one block of time; this is one of the reasons why the default Rancher Monitoring stack expects around 4GB of RAM to be able to operate in a normal-sized cluster. However, when introducing Project Monitoring Stacks that are all sending `/federate` requests to the same Cluster Prometheus and are reliant on the Cluster Prometheus being "up" to federate that system data on their namespaces, it's even more important that the Cluster Prometheus has an ample amount of CPU / memory assigned to it to prevent an outage that can cause data gaps across all Project Prometheis in the cluster. + +> Note: There are no specific recommendations on how much memory the Cluster Prometheus should be configured with since it depends entirely on the user's setup (namely the likelihood of encountering a high churn rate and the scale of metrics that could be generated at that time); it generally varies per setup. + +## How does the operator work? + +1. On deploying this chart, users can create ProjectHelmChart CRs with `spec.helmApiVersion` set to `monitoring.cattle.io/v1alpha1` (also known as "Project Monitors" in the Rancher UI) in a **Project Registration Namespace (`cattle-project-<id>`)**. +2. 
On seeing each ProjectHelmChart CR, the operator will automatically deploy a Project Prometheus stack on the Project Owner's behalf in the **Project Release Namespace (`cattle-project-<id>-monitoring`)** based on a HelmChart CR and a HelmRelease CR automatically created by the ProjectHelmChart controller in the **Operator / System Namespace**. +3. RBAC will automatically be assigned in the Project Release Namespace to allow users to view the Prometheus, Alertmanager, and Grafana UIs of the Project Monitoring Stack deployed; this will be based on RBAC defined on the Project Registration Namespace against the [default Kubernetes user-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) (see below for more information about configuring RBAC). + +### What is a Project? + +In Prometheus Federator, a Project is a group of namespaces that can be identified by a `metav1.LabelSelector`; by default, the label used to identify projects is `field.cattle.io/projectId`, the label used to identify namespaces that are contained within a given [Rancher](https://rancher.com/) Project. + +### Configuring the Helm release created by a ProjectHelmChart + +The `spec.values` of the ProjectHelmChart resource will correspond to the `values.yaml` override to be supplied to the underlying Helm chart deployed by the operator on the user's behalf; to see the underlying chart's `values.yaml` spec, either: +- View the chart's definition located at [`rancher/prometheus-federator` under `charts/rancher-project-monitoring`](https://github.com/rancher/prometheus-federator/blob/main/charts/rancher-project-monitoring) (where the chart version will be tied to the version of this operator) +- Look for the ConfigMap named `monitoring.cattle.io.v1alpha1` that is automatically created in each Project Registration Namespace, which will contain both the `values.yaml` and `questions.yaml` that were used to configure the chart (which were embedded directly into the `prometheus-federator` binary). + +### Namespaces + +As a Project Operator based on [rancher/helm-project-operator](https://github.com/rancher/helm-project-operator), Prometheus Federator has three different classifications of namespaces that the operator looks out for: +1. **Operator / System Namespace**: this is the namespace that the operator is deployed into (e.g. `cattle-monitoring-system`). This namespace will contain all HelmCharts and HelmReleases for all ProjectHelmCharts watched by this operator. **Only Cluster Admins should have access to this namespace.** +2. **Project Registration Namespace (`cattle-project-<id>`)**: this is the set of namespaces that the operator watches for ProjectHelmCharts within. The RoleBindings and ClusterRoleBindings that apply to this namespace will also be the source of truth for the auto-assigned RBAC created in the Project Release Namespace (see more details below). **Project Owners (admin), Project Members (edit), and Read-Only Members (view) should have access to this namespace**. +> Note: Project Registration Namespaces will be auto-generated by the operator and imported into the Project it is tied to if `.Values.global.cattle.projectLabel` is provided (which is set to `field.cattle.io/projectId` by default); this indicates that a Project Registration Namespace should be created by the operator if at least one namespace is observed with that label. The operator will not let these namespaces be deleted unless either all namespaces with that label are gone (e.g. 
this is the last namespace in that project, in which case the namespace will be marked with the label `"helm.cattle.io/helm-project-operator-orphaned": "true"`, which signals that it can be deleted) or it is no longer watching that project (because the project ID was provided under `.Values.helmProjectOperator.otherSystemProjectLabelValues`, which serves as a denylist for Projects). These namespaces will also never be auto-deleted to avoid destroying user data; it is recommended that users clean up these namespaces manually, if desired, on creating or deleting a project. +> Note: if `.Values.global.cattle.projectLabel` is not provided, the Operator / System Namespace will also be the Project Registration Namespace +3. **Project Release Namespace (`cattle-project-<id>-monitoring`)**: this is the set of namespaces that the operator deploys Project Monitoring Stacks within on behalf of a ProjectHelmChart; the operator will also automatically assign RBAC to Roles created in this namespace by the Project Monitoring Stack based on bindings found in the Project Registration Namespace. **Only Cluster Admins should have access to this namespace; Project Owners (admin), Project Members (edit), and Read-Only Members (view) will be assigned limited access to this namespace by the deployed Helm Chart and Prometheus Federator.** +> Note: Project Release Namespaces are automatically deployed and imported into the project whose ID is specified under `.Values.helmProjectOperator.projectReleaseNamespaces.labelValue` (which defaults to the value of `.Values.global.cattle.systemProjectId` if not specified) whenever a ProjectHelmChart is specified in a Project Registration Namespace +> Note: Project Release Namespaces follow the same orphaning conventions as Project Registration Namespaces (see note above) +> Note: if `.Values.projectReleaseNamespaces.enabled` is false, the Project Release Namespace will be the same as the Project Registration Namespace + +### Helm Resources (HelmChart, HelmRelease) + +On deploying a ProjectHelmChart, Prometheus Federator will automatically create and manage two child custom resources that manage the underlying Helm resources in turn: +- A HelmChart CR (managed via an embedded [k3s-io/helm-controller](https://github.com/k3s-io/helm-controller) in the operator): this custom resource automatically creates a Job in the same namespace that triggers a `helm install`, `helm upgrade`, or `helm uninstall` depending on the change applied to the HelmChart CR; this CR is automatically updated on changes to the ProjectHelmChart (e.g. modifying the values.yaml) or changes to the underlying Project definition (e.g. adding or removing namespaces from a project). +> **Important Note: If a ProjectHelmChart is not deploying or updating the underlying Project Monitoring Stack for some reason, the Job created by this resource in the Operator / System namespace should be the first place you check to see if there's something wrong with the Helm operation; however, this is generally only accessible by a Cluster Admin.** +- A HelmRelease CR (managed via an embedded [rancher/helm-locker](https://github.com/rancher/helm-locker) in the operator): this custom resource automatically locks a deployed Helm release in place and automatically overwrites updates to underlying resources unless the change happens via a Helm operation (`helm install`, `helm upgrade`, or `helm uninstall` performed by the HelmChart CR). 
+ +> Note: the operator emits Kubernetes Events on HelmRelease CRs when it detects that an underlying Helm release is being modified and locks it back into place; to view these events, you can use `kubectl describe helmrelease <helm-release-name> -n <operator/system-namespace>`; you can also view the logs on this operator to see when changes are detected and which resources were attempted to be modified + +Both of these resources are created for all Helm charts in the Operator / System namespaces to avoid escalation of privileges to underprivileged users. + +### RBAC + +As described in the section on namespaces above, Prometheus Federator expects that Project Owners, Project Members, and other users in the cluster with Project-level permissions (e.g. permissions in a certain set of namespaces identified by a single label selector) have minimal permissions in any namespaces except the Project Registration Namespace (which is imported into the project by default) and those that already comprise their projects. Therefore, in order to allow Project Owners to assign specific chart permissions to other users in their Project namespaces, the Helm Project Operator will automatically watch the following bindings: +- ClusterRoleBindings +- RoleBindings in the Project Release Namespace + +On observing a change to one of those types of bindings, the Helm Project Operator will check whether the `roleRef` that the binding points to matches a ClusterRole with the name provided under `helmProjectOperator.releaseRoleBindings.clusterRoleRefs.admin`, `helmProjectOperator.releaseRoleBindings.clusterRoleRefs.edit`, or `helmProjectOperator.releaseRoleBindings.clusterRoleRefs.view`; by default, these roleRefs will correspond to `admin`, `edit`, and `view` respectively, which are the [default Kubernetes user-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles). + +> Note: for Rancher RBAC users, these [default Kubernetes user-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) directly correlate to the `Project Owner`, `Project Member`, and `Read-Only` default Project Role Templates. + +If the `roleRef` matches, the Helm Project Operator will filter the `subjects` of the binding for all Users and Groups and use that to automatically construct a RoleBinding for each Role in the Project Release Namespace with the same name as the role and the following labels: +- `helm.cattle.io/project-helm-chart-role: {{ .Release.Name }}` +- `helm.cattle.io/project-helm-chart-role-aggregate-from: <admin|edit|view>` + +By default, `rancher-project-monitoring` (the underlying chart deployed by Prometheus Federator) creates three default Roles per Project Release Namespace that provide `admin`, `edit`, and `view` users with permissions to view the Prometheus, Alertmanager, and Grafana UIs of the Project Monitoring Stack, following least privilege; however, if a Cluster Admin would like to assign additional permissions to certain users, they can either directly assign RoleBindings in the Project Release Namespace to certain users or create Roles with the above two labels on them to allow Project Owners to control assigning those RBAC roles to users in their Project Registration namespaces. 
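As an illustration, a custom Role that opts into this aggregation might look like the following sketch (the Role name, namespace, and rules are assumptions for illustration; the two labels are the ones described above, with `helm.cattle.io/project-helm-chart-role` set to the name of the Project Monitoring Stack's Helm release):

```yaml
# Hypothetical custom Role in a Project Release Namespace. Because of the two
# labels below, the operator will create RoleBindings to it for all User and
# Group subjects it discovers in "admin" bindings on the Project Registration
# Namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: extra-prometheusrule-viewer                  # assumed name
  namespace: cattle-project-p-example-monitoring     # assumed Project Release Namespace
  labels:
    helm.cattle.io/project-helm-chart-role: cattle-project-p-example-monitoring  # assumed release name
    helm.cattle.io/project-helm-chart-role-aggregate-from: admin
rules:                                               # assumed permissions
- apiGroups: ["monitoring.coreos.com"]
  resources: ["prometheusrules"]
  verbs: ["get", "list", "watch"]
```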
+ +### Advanced Helm Project Operator Configuration + +|Value|Configuration| +|---|---------------------------| +|`helmProjectOperator.valuesOverride`| Allows an Operator to override values that are set on each ProjectHelmChart deployment on an operator-level; user-provided options (specified on the `spec.values` of the ProjectHelmChart) are automatically overridden if operator-level values are provided. For an example, see how the default value overrides `federate.targets` (note: when overriding list values like `federate.targets`, user-provided list values will **not** be concatenated) | +|`helmProjectOperator.projectReleaseNamespaces.labelValue`| The ID of the Project that all Project Release Namespaces should be auto-imported into (via label and annotation). Not recommended to be overridden on a Rancher setup. | +|`helmProjectOperator.otherSystemProjectLabelValues`| Other Project label values whose namespaces the operator should treat as system namespaces that should not be monitored. By default, all namespaces that match `global.cattle.systemProjectId` will not be monitored. `cattle-monitoring-system`, `cattle-dashboards`, and `kube-system` are explicitly marked as system namespaces as well, regardless of label or annotation. | +|`helmProjectOperator.releaseRoleBindings.aggregate`| Whether to automatically create RBAC resources in Project Release namespaces | +|`helmProjectOperator.releaseRoleBindings.clusterRoleRefs.<admin\|edit\|view>`| ClusterRoles to reference to discover subjects to create RoleBindings for in the Project Release Namespace for all corresponding Project Release Roles. See RBAC above for more information | +|`helmProjectOperator.hardenedNamespaces.enabled`| Whether to automatically patch the default ServiceAccount with `automountServiceAccountToken: false` and create a default NetworkPolicy in all managed namespaces in the cluster; the default values ensure that the creation of the namespace does not break a CIS 1.16 hardened scan | +|`helmProjectOperator.hardenedNamespaces.configuration`| The configuration to be supplied to the default ServiceAccount or auto-generated NetworkPolicy on managing a namespace | +|`helmProjectOperator.helmController.enabled`| Whether to enable an embedded k3s-io/helm-controller instance within the Helm Project Operator. Should be disabled for RKE2/K3s clusters before v1.23.14 / v1.24.8 / v1.25.4 since RKE2/K3s clusters already run Helm Controller at a cluster-wide level to manage internal Kubernetes components | +|`helmProjectOperator.helmLocker.enabled`| Whether to enable an embedded rancher/helm-locker instance within the Helm Project Operator. 
| diff --git a/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/app-README.md b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/app-README.md new file mode 100644 index 0000000000..99fa7ca1c8 --- /dev/null +++ b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/app-README.md @@ -0,0 +1,27 @@ +# Prometheus Federator + +This chart deploys an operator that manages Project Monitoring Stacks composed of the following set of resources that are scoped to project namespaces: +- [Prometheus](https://prometheus.io/) (managed externally by [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator)) +- [Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/) (managed externally by [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator)) +- [Grafana](https://github.com/helm/charts/tree/master/stable/grafana) (deployed via an embedded Helm chart) +- Default PrometheusRules and Grafana dashboards based on the collection of community-curated resources from [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus/) +- Default ServiceMonitors that watch the deployed Prometheus, Grafana, and Alertmanager + +Since this Project Monitoring Stack deploys Prometheus Operator CRs, an existing Prometheus Operator instance must already be deployed in the cluster for Prometheus Federator to successfully deploy Project Monitoring Stacks. It is recommended to use [`rancher-monitoring`](https://rancher.com/docs/rancher/v2.6/en/monitoring-alerting/) for this. For more information on how the chart works or advanced configurations, please read the `README.md`. + +## Upgrading to Kubernetes v1.25+ + +Starting in Kubernetes v1.25, [Pod Security Policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/) have been removed from the Kubernetes API. + +As a result, **before upgrading to Kubernetes v1.25** (or on a fresh install in a Kubernetes v1.25+ cluster), users are expected to perform an in-place upgrade of this chart with `global.cattle.psp.enabled` set to `false` if it has been previously set to `true`. + +> **Note:** +> In this chart release, any previous fields that were associated with any PSP resources have been removed in favor of a single global field: `global.cattle.psp.enabled`. + +> **Note:** +> If you upgrade your cluster to Kubernetes v1.25+ before removing PSPs via a `helm upgrade` (even if you manually clean up resources), **it will leave the Helm release in a broken state within the cluster such that further Helm operations will not work (`helm uninstall`, `helm upgrade`, etc.).** +> +> If your charts get stuck in this state, please consult the Rancher docs on how to clean up your Helm release secrets. +Upon setting `global.cattle.psp.enabled` to false, the chart will remove any PSP resources deployed on its behalf from the cluster. This is the default setting for this chart. + +As a replacement for PSPs, [Pod Security Admission](https://kubernetes.io/docs/concepts/security/pod-security-admission/) should be used. Please consult the Rancher docs for more details on how to configure your chart release namespaces to work with the new Pod Security Admission and apply Pod Security Standards. 
\ No newline at end of file diff --git a/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/Chart.yaml b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/Chart.yaml new file mode 100644 index 0000000000..c4d14e1d93 --- /dev/null +++ b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/Chart.yaml @@ -0,0 +1,15 @@ +annotations: + catalog.cattle.io/certified: rancher + catalog.cattle.io/display-name: Helm Project Operator + catalog.cattle.io/kube-version: '>=1.16.0-0' + catalog.cattle.io/namespace: cattle-helm-system + catalog.cattle.io/os: linux,windows + catalog.cattle.io/permits-os: linux,windows + catalog.cattle.io/provides-gvr: helm.cattle.io.projecthelmchart/v1alpha1 + catalog.cattle.io/rancher-version: '>= 2.6.0-0' + catalog.cattle.io/release-name: helm-project-operator +apiVersion: v2 +appVersion: 0.2.1 +description: Helm Project Operator +name: helmProjectOperator +version: 0.2.1 diff --git a/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/README.md b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/README.md new file mode 100644 index 0000000000..fc1d39e81b --- /dev/null +++ b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/README.md @@ -0,0 +1,77 @@ +# Helm Project Operator + +## How does the operator work? + +1. On deploying a Helm Project Operator, users can create ProjectHelmChart CRs with `spec.helmApiVersion` set to `dummy.cattle.io/v1alpha1` in a **Project Registration Namespace (`cattle-project-<id>`)**. +2. On seeing each ProjectHelmChart CR, the operator will automatically deploy the embedded Helm chart on the Project Owner's behalf in the **Project Release Namespace (`cattle-project-<id>-dummy`)** based on a HelmChart CR and a HelmRelease CR automatically created by the ProjectHelmChart controller in the **Operator / System Namespace**. +3. RBAC will automatically be assigned in the Project Release Namespace based on Roles created in the Project Release Namespace with a given set of labels; this will be based on RBAC defined on the Project Registration Namespace against the [default Kubernetes user-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) (see below for more information about configuring RBAC). + +### What is a Project? + +In Helm Project Operator, a Project is a group of namespaces that can be identified by a `metav1.LabelSelector`; by default, the label used to identify projects is `field.cattle.io/projectId`, the label used to identify namespaces that are contained within a given [Rancher](https://rancher.com/) Project. + +### What is a ProjectHelmChart? + +A ProjectHelmChart is an instance of a (project-scoped) Helm chart deployed on behalf of a user who has permissions to create ProjectHelmChart resources in a Project Registration namespace. + +Generally, the best way to think about the ProjectHelmChart model is by comparing it to two other models: +1. 
Managed Kubernetes providers (EKS, GKE, AKS, etc.): in this model, a user has the ability to say "I want a Kubernetes cluster" but the underlying cloud provider is responsible for provisioning the infrastructure and offering **limited view and access** of the underlying resources created on their behalf; similarly, Helm Project Operator allows a Project Owner to say "I want this Helm chart deployed", but the underlying Operator is responsible for "provisioning" (deploying) the Helm chart and offering **limited view and access** of the underlying Kubernetes resources created on their behalf (based on configuring "least-privilege" Kubernetes RBAC for the Project Owners / Members in the newly created Project Release Namespace). +2. Dynamically-provisioned Persistent Volumes: in this model, a single resource (PersistentVolumeClaim) exists that allows you to specify a Storage Class that actually implements provisioning the underlying storage via a Storage Class Provisioner (e.g. Longhorn). Similarly, the ProjectHelmChart exists that allows you to specify a `spec.helmApiVersion` ("storage class") that actually implements deploying the underlying Helm chart via a Helm Project Operator (e.g. [`rancher/prometheus-federator`](https://github.com/rancher/prometheus-federator)). + +### Configuring the Helm release created by a ProjectHelmChart + +The `spec.values` of the ProjectHelmChart resource will correspond to the `values.yaml` override to be supplied to the underlying Helm chart deployed by the operator on the user's behalf; to see the underlying chart's `values.yaml` spec, either: +- View the chart's definition located at [`rancher/helm-project-operator` under `charts/example-chart`](https://github.com/rancher/helm-project-operator/blob/main/charts/example-chart) (where the chart version will be tied to the version of this operator) +- Look for the ConfigMap named `dummy.cattle.io.v1alpha1` that is automatically created in each Project Registration Namespace, which will contain both the `values.yaml` and `questions.yaml` that were used to configure the chart (which were embedded directly into the `helm-project-operator` binary). + +### Namespaces + +All Helm Project Operators have three different classifications of namespaces that the operator looks out for: +1. **Operator / System Namespace**: this is the namespace that the operator is deployed into (e.g. `cattle-helm-system`). This namespace will contain all HelmCharts and HelmReleases for all ProjectHelmCharts watched by this operator. **Only Cluster Admins should have access to this namespace.** +2. **Project Registration Namespace (`cattle-project-<id>`)**: this is the set of namespaces that the operator watches for ProjectHelmCharts within. The RoleBindings and ClusterRoleBindings that apply to this namespace will also be the source of truth for the auto-assigned RBAC created in the Project Release Namespace (see more details below). **Project Owners (admin), Project Members (edit), and Read-Only Members (view) should have access to this namespace**. +> Note: Project Registration Namespaces will be auto-generated by the operator and imported into the Project it is tied to if `.Values.global.cattle.projectLabel` is provided (which is set to `field.cattle.io/projectId` by default); this indicates that a Project Registration Namespace should be created by the operator if at least one namespace is observed with that label. The operator will not let these namespaces be deleted unless either all namespaces with that label are gone (e.g. 
this is the last namespace in that project, in which case the namespace will be marked with the label `"helm.cattle.io/helm-project-operator-orphaned": "true"`, which signals that it can be deleted) or it is no longer watching that project (because the project ID was provided under `.Values.helmProjectOperator.otherSystemProjectLabelValues`, which serves as a denylist for Projects). These namespaces will also never be auto-deleted to avoid destroying user data; it is recommended that users clean up these namespaces manually, if desired, on creating or deleting a project. +> Note: if `.Values.global.cattle.projectLabel` is not provided, the Operator / System Namespace will also be the Project Registration Namespace +3. **Project Release Namespace (`cattle-project-<id>-dummy`)**: this is the set of namespaces that the operator deploys Helm charts within on behalf of a ProjectHelmChart; the operator will also automatically assign RBAC to Roles created in this namespace by the Helm charts based on bindings found in the Project Registration Namespace. **Only Cluster Admins should have access to this namespace; Project Owners (admin), Project Members (edit), and Read-Only Members (view) will be assigned limited access to this namespace by the deployed Helm Chart and Helm Project Operator.** +> Note: Project Release Namespaces are automatically deployed and imported into the project whose ID is specified under `.Values.helmProjectOperator.projectReleaseNamespaces.labelValue` (which defaults to the value of `.Values.global.cattle.systemProjectId` if not specified) whenever a ProjectHelmChart is specified in a Project Registration Namespace +> Note: Project Release Namespaces follow the same orphaning conventions as Project Registration Namespaces (see note above) +> Note: if `.Values.projectReleaseNamespaces.enabled` is false, the Project Release Namespace will be the same as the Project Registration Namespace + +### Helm Resources (HelmChart, HelmRelease) + +On deploying a ProjectHelmChart, the Helm Project Operator will automatically create and manage two child custom resources that manage the underlying Helm resources in turn: +- A HelmChart CR (managed via an embedded [k3s-io/helm-controller](https://github.com/k3s-io/helm-controller) in the operator): this custom resource automatically creates a Job in the same namespace that triggers a `helm install`, `helm upgrade`, or `helm uninstall` depending on the change applied to the HelmChart CR; this CR is automatically updated on changes to the ProjectHelmChart (e.g. modifying the values.yaml) or changes to the underlying Project definition (e.g. adding or removing namespaces from a project). +> **Important Note: If a ProjectHelmChart is not deploying or updating the underlying Project Monitoring Stack for some reason, the Job created by this resource in the Operator / System namespace should be the first place you check to see if there's something wrong with the Helm operation; however, this is generally only accessible by a Cluster Admin.** +- A HelmRelease CR (managed via an embedded [rancher/helm-locker](https://github.com/rancher/helm-locker) in the operator): this custom resource automatically locks a deployed Helm release in place and automatically overwrites updates to underlying resources unless the change happens via a Helm operation (`helm install`, `helm upgrade`, or `helm uninstall` performed by the HelmChart CR). 
+ +> Note: the operator emits Kubernetes Events on HelmRelease CRs when it detects that an underlying Helm release is being modified and locks it back into place; to view these events, you can use `kubectl describe helmrelease <helm-release-name> -n <operator/system-namespace>`; you can also view the logs on this operator to see when changes are detected and which resources were attempted to be modified + +Both of these resources are created for all Helm charts in the Operator / System namespaces to avoid escalation of privileges to underprivileged users. + +### RBAC + +As described in the section on namespaces above, Helm Project Operator expects that Project Owners, Project Members, and other users in the cluster with Project-level permissions (e.g. permissions in a certain set of namespaces identified by a single label selector) have minimal permissions in any namespaces except the Project Registration Namespace (which is imported into the project by default) and those that already comprise their projects. Therefore, in order to allow Project Owners to assign specific chart permissions to other users in their Project namespaces, the Helm Project Operator will automatically watch the following bindings: +- ClusterRoleBindings +- RoleBindings in the Project Release Namespace + +On observing a change to one of those types of bindings, the Helm Project Operator will check whether the `roleRef` that the binding points to matches a ClusterRole with the name provided under `helmProjectOperator.releaseRoleBindings.clusterRoleRefs.admin`, `helmProjectOperator.releaseRoleBindings.clusterRoleRefs.edit`, or `helmProjectOperator.releaseRoleBindings.clusterRoleRefs.view`; by default, these roleRefs will correspond to `admin`, `edit`, and `view` respectively, which are the [default Kubernetes user-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles). + +> Note: for Rancher RBAC users, these [default Kubernetes user-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) directly correlate to the `Project Owner`, `Project Member`, and `Read-Only` default Project Role Templates. + +If the `roleRef` matches, the Helm Project Operator will filter the `subjects` of the binding for all Users and Groups and use that to automatically construct a RoleBinding for each Role in the Project Release Namespace with the same name as the role and the following labels: +- `helm.cattle.io/project-helm-chart-role: {{ .Release.Name }}` +- `helm.cattle.io/project-helm-chart-role-aggregate-from: <admin|edit|view>` + +By default, the `example-chart` (the underlying chart deployed by Helm Project Operator) does not create any default roles; however, if a Cluster Admin would like to assign additional permissions to certain users, they can either directly assign RoleBindings in the Project Release Namespace to certain users or create Roles with the above two labels on them to allow Project Owners to control assigning those RBAC roles to users in their Project Registration namespaces. + +### Advanced Helm Project Operator Configuration + +|Value|Configuration| +|---|---------------------------| +|`valuesOverride`| Allows an Operator to override values that are set on each ProjectHelmChart deployment on an operator-level; user-provided options (specified on the `spec.values` of the ProjectHelmChart) are automatically overridden if operator-level values are provided. 
For an example, see how the default value overrides `federate.targets` (note: when overriding list values like `federate.targets`, user-provided list values will **not** be concatenated) | +|`projectReleaseNamespaces.labelValue`| The ID of the Project that all Project Release Namespaces should be auto-imported into (via label and annotation). Not recommended to be overridden on a Rancher setup. | +|`otherSystemProjectLabelValues`| Other Project label values whose namespaces the operator should treat as system namespaces that should not be monitored. By default, all namespaces that match `global.cattle.systemProjectId` will not be monitored. `kube-system` is explicitly marked as a system namespace as well, regardless of label or annotation. | +|`releaseRoleBindings.aggregate`| Whether to automatically create RBAC resources in Project Release namespaces | +|`releaseRoleBindings.clusterRoleRefs.<admin\|edit\|view>`| ClusterRoles to reference to discover subjects to create RoleBindings for in the Project Release Namespace for all corresponding Project Release Roles. See RBAC above for more information | +|`hardenedNamespaces.enabled`| Whether to automatically patch the default ServiceAccount with `automountServiceAccountToken: false` and create a default NetworkPolicy in all managed namespaces in the cluster; the default values ensure that the creation of the namespace does not break a CIS 1.16 hardened scan | +|`hardenedNamespaces.configuration`| The configuration to be supplied to the default ServiceAccount or auto-generated NetworkPolicy on managing a namespace | +|`helmController.enabled`| Whether to enable an embedded k3s-io/helm-controller instance within the Helm Project Operator. Should be disabled for RKE2 clusters since RKE2 clusters already run Helm Controller to manage internal Kubernetes components | +|`helmLocker.enabled`| Whether to enable an embedded rancher/helm-locker instance within the Helm Project Operator. | diff --git a/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/app-readme.md b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/app-readme.md new file mode 100644 index 0000000000..fd551467d3 --- /dev/null +++ b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/app-readme.md @@ -0,0 +1,20 @@ +# Helm Project Operator + +This chart installs the example [Helm Project Operator](https://github.com/rancher/helm-project-operator) onto your cluster. + +## Upgrading to Kubernetes v1.25+ + +Starting in Kubernetes v1.25, [Pod Security Policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/) have been removed from the Kubernetes API. + +As a result, **before upgrading to Kubernetes v1.25** (or on a fresh install in a Kubernetes v1.25+ cluster), users are expected to perform an in-place upgrade of this chart with `global.cattle.psp.enabled` set to `false` if it has been previously set to `true`. + +> **Note:** +> In this chart release, any previous fields that were associated with any PSP resources have been removed in favor of a single global field: `global.cattle.psp.enabled`. + +> **Note:** +> If you upgrade your cluster to Kubernetes v1.25+ before removing PSPs via a `helm upgrade` (even if you manually clean up resources), **it will leave the Helm release in a broken state within the cluster such that further Helm operations will not work (`helm uninstall`, `helm upgrade`, etc.).** +> +> If your charts get stuck in this state, please consult the Rancher docs on how to clean up your Helm release secrets. 
+Upon setting `global.cattle.psp.enabled` to false, the chart will remove any PSP resources deployed on its behalf from the cluster. This is the default setting for this chart. + +As a replacement for PSPs, [Pod Security Admission](https://kubernetes.io/docs/concepts/security/pod-security-admission/) should be used. Please consult the Rancher docs for more details on how to configure your chart release namespaces to work with the new Pod Security Admission and apply Pod Security Standards. \ No newline at end of file diff --git a/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/questions.yaml b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/questions.yaml new file mode 100644 index 0000000000..054361a7a4 --- /dev/null +++ b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/questions.yaml @@ -0,0 +1,43 @@ +questions: +- variable: global.cattle.psp.enabled + default: "false" + description: "Flag to enable or disable the installation of PodSecurityPolicies by this chart in the target cluster. If the cluster is running Kubernetes 1.25+, you must update this value to false." + label: "Enable PodSecurityPolicies" + type: boolean + group: "Security Settings" +- variable: helmController.enabled + label: Enable Embedded Helm Controller + description: 'Note: If you are running this chart in an RKE2 cluster, this should be disabled.' + type: boolean + group: Helm Controller +- variable: helmLocker.enabled + label: Enable Embedded Helm Locker + type: boolean + group: Helm Locker +- variable: projectReleaseNamespaces.labelValue + label: Project Release Namespace Project ID + description: By default, the System Project is selected. This can be overridden to a different Project (e.g. p-xxxxx) + type: string + required: false + group: Namespaces +- variable: releaseRoleBindings.clusterRoleRefs.admin + label: Admin ClusterRole + description: By default, admin selects Project Owners. This can be overridden to a different ClusterRole (e.g. rt-xxxxx) + type: string + default: admin + required: false + group: RBAC +- variable: releaseRoleBindings.clusterRoleRefs.edit + label: Edit ClusterRole + description: By default, edit selects Project Members. This can be overridden to a different ClusterRole (e.g. rt-xxxxx) + type: string + default: edit + required: false + group: RBAC +- variable: releaseRoleBindings.clusterRoleRefs.view + label: View ClusterRole + description: By default, view selects Read-Only users. This can be overridden to a different ClusterRole (e.g. rt-xxxxx) + type: string + default: view + required: false + group: RBAC diff --git a/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/templates/NOTES.txt b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/templates/NOTES.txt new file mode 100644 index 0000000000..32baeebcbe --- /dev/null +++ b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/templates/NOTES.txt @@ -0,0 +1,2 @@ +{{ $.Chart.Name }} has been installed. Check its status by running: + kubectl --namespace {{ template "helm-project-operator.namespace" . 
}} get pods -l "release={{ $.Release.Name }}" diff --git a/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/templates/_helpers.tpl b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/templates/_helpers.tpl new file mode 100644 index 0000000000..97dd6b368b --- /dev/null +++ b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/templates/_helpers.tpl @@ -0,0 +1,66 @@ +# Rancher +{{- define "system_default_registry" -}} +{{- if .Values.global.cattle.systemDefaultRegistry -}} +{{- printf "%s/" .Values.global.cattle.systemDefaultRegistry -}} +{{- end -}} +{{- end -}} + +# Windows Support + +{{/* +Windows clusters will add a default taint for linux nodes; +add the below linux tolerations to workloads so that they can be scheduled to those linux nodes +*/}} + +{{- define "linux-node-tolerations" -}} +- key: "cattle.io/os" + value: "linux" + effect: "NoSchedule" + operator: "Equal" +{{- end -}} + +{{- define "linux-node-selector" -}} +{{- if semverCompare "<1.14-0" .Capabilities.KubeVersion.GitVersion -}} +beta.kubernetes.io/os: linux +{{- else -}} +kubernetes.io/os: linux +{{- end -}} +{{- end -}} + +# Helm Project Operator + +{{/* vim: set filetype=mustache: */}} +{{/* Expand the name of the chart. This is suffixed with -alertmanager, which means subtract 13 from longest 63 available */}} +{{- define "helm-project-operator.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 50 | trimSuffix "-" -}} +{{- end }} + +{{/* +Allow the release namespace to be overridden for multi-namespace deployments in combined charts +*/}} +{{- define "helm-project-operator.namespace" -}} + {{- if .Values.namespaceOverride -}} + {{- .Values.namespaceOverride -}} + {{- else -}} + {{- .Release.Namespace -}} + {{- end -}} +{{- end -}} + +{{/* Create chart name and version as used by the chart label. */}} +{{- define "helm-project-operator.chartref" -}} +{{- replace "+" "_" .Chart.Version | printf "%s-%s" .Chart.Name -}} +{{- end }} + +{{/* Generate basic labels */}} +{{- define "helm-project-operator.labels" -}} +app.kubernetes.io/managed-by: {{ .Release.Service }} +app.kubernetes.io/instance: {{ .Release.Name }} +app.kubernetes.io/version: "{{ replace "+" "_" .Chart.Version }}" +app.kubernetes.io/part-of: {{ template "helm-project-operator.name" . }} +chart: {{ template "helm-project-operator.chartref" . }} +release: {{ $.Release.Name | quote }} +heritage: {{ $.Release.Service | quote }} +{{- if .Values.commonLabels}} +{{ toYaml .Values.commonLabels }} +{{- end }} +{{- end -}} diff --git a/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/templates/cleanup.yaml b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/templates/cleanup.yaml new file mode 100644 index 0000000000..98675642d0 --- /dev/null +++ b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/templates/cleanup.yaml @@ -0,0 +1,82 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: {{ template "helm-project-operator.name" . }}-cleanup + namespace: {{ template "helm-project-operator.namespace" . }} + labels: {{ include "helm-project-operator.labels" . | nindent 4 }} + app: {{ template "helm-project-operator.name" . }} + annotations: + "helm.sh/hook": pre-delete + "helm.sh/hook-delete-policy": before-hook-creation, hook-succeeded, hook-failed +spec: + template: + metadata: + name: {{ template "helm-project-operator.name" . }}-cleanup + labels: {{ include "helm-project-operator.labels" . 
| nindent 8 }} + app: {{ template "helm-project-operator.name" . }} + spec: + serviceAccountName: {{ template "helm-project-operator.name" . }} +{{- if .Values.cleanup.securityContext }} + securityContext: {{ toYaml .Values.cleanup.securityContext | nindent 8 }} +{{- end }} + initContainers: + - name: add-cleanup-annotations + image: {{ template "system_default_registry" . }}{{ .Values.cleanup.image.repository }}:{{ .Values.cleanup.image.tag }} + imagePullPolicy: "{{ .Values.image.pullPolicy }}" + command: + - /bin/sh + - -c + - > + echo "Labeling all ProjectHelmCharts with helm.cattle.io/helm-project-operator-cleanup=true"; + EXPECTED_HELM_API_VERSION={{ .Values.helmApiVersion }}; + IFS=$'\n'; + for namespace in $(kubectl get namespaces -l helm.cattle.io/helm-project-operated=true --no-headers -o=custom-columns=NAME:.metadata.name); do + for projectHelmChartAndHelmApiVersion in $(kubectl get projecthelmcharts -n ${namespace} --no-headers -o=custom-columns=NAME:.metadata.name,HELMAPIVERSION:.spec.helmApiVersion); do + projectHelmChartAndHelmApiVersion=$(echo ${projectHelmChartAndHelmApiVersion} | xargs); + projectHelmChart=$(echo ${projectHelmChartAndHelmApiVersion} | cut -d' ' -f1); + helmApiVersion=$(echo ${projectHelmChartAndHelmApiVersion} | cut -d' ' -f2); + if [[ ${helmApiVersion} != ${EXPECTED_HELM_API_VERSION} ]]; then + echo "Skipping marking ${namespace}/${projectHelmChart} with cleanup annotation since spec.helmApiVersion: ${helmApiVersion} is not ${EXPECTED_HELM_API_VERSION}"; + continue; + fi; + kubectl label projecthelmcharts -n ${namespace} ${projectHelmChart} helm.cattle.io/helm-project-operator-cleanup=true --overwrite; + done; + done; +{{- if .Values.cleanup.resources }} + resources: {{ toYaml .Values.cleanup.resources | nindent 12 }} +{{- end }} +{{- if .Values.cleanup.containerSecurityContext }} + securityContext: {{ toYaml .Values.cleanup.containerSecurityContext | nindent 12 }} +{{- end }} + containers: + - name: ensure-subresources-deleted + image: {{ template "system_default_registry" . }}{{ .Values.cleanup.image.repository }}:{{ .Values.cleanup.image.tag }} + imagePullPolicy: IfNotPresent + command: + - /bin/sh + - -c + - > + SYSTEM_NAMESPACE={{ .Release.Namespace }}; + EXPECTED_HELM_API_VERSION={{ .Values.helmApiVersion }}; + HELM_API_VERSION_TRUNCATED=$(echo ${EXPECTED_HELM_API_VERSION} | cut -d'/' -f1); + echo "Ensuring HelmCharts and HelmReleases are deleted from ${SYSTEM_NAMESPACE}..."; + while [[ "$(kubectl get helmcharts,helmreleases -l helm.cattle.io/helm-api-version=${HELM_API_VERSION_TRUNCATED} -n ${SYSTEM_NAMESPACE} 2>&1)" != "No resources found in ${SYSTEM_NAMESPACE} namespace." ]]; do + echo "waiting for HelmCharts and HelmReleases to be deleted from ${SYSTEM_NAMESPACE}... sleeping 3 seconds"; + sleep 3; + done; + echo "Successfully deleted all HelmCharts and HelmReleases in ${SYSTEM_NAMESPACE}!"; +{{- if .Values.cleanup.resources }} + resources: {{ toYaml .Values.cleanup.resources | nindent 12 }} +{{- end }} +{{- if .Values.cleanup.containerSecurityContext }} + securityContext: {{ toYaml .Values.cleanup.containerSecurityContext | nindent 12 }} +{{- end }} + restartPolicy: OnFailure + nodeSelector: {{ include "linux-node-selector" . | nindent 8 }} + {{- if .Values.cleanup.nodeSelector }} + {{- toYaml .Values.cleanup.nodeSelector | nindent 8 }} + {{- end }} + tolerations: {{ include "linux-node-tolerations" . 
| nindent 8 }} + {{- if .Values.cleanup.tolerations }} + {{- toYaml .Values.cleanup.tolerations | nindent 8 }} + {{- end }} diff --git a/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/templates/clusterrole.yaml b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/templates/clusterrole.yaml new file mode 100644 index 0000000000..60ed263ba8 --- /dev/null +++ b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/templates/clusterrole.yaml @@ -0,0 +1,57 @@ +{{- if and .Values.global.rbac.create .Values.global.rbac.userRoles.create }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: {{ template "helm-project-operator.name" . }}-admin + labels: {{ include "helm-project-operator.labels" . | nindent 4 }} + {{- if .Values.global.rbac.userRoles.aggregateToDefaultRoles }} + rbac.authorization.k8s.io/aggregate-to-admin: "true" + {{- end }} +rules: +- apiGroups: + - helm.cattle.io + resources: + - projecthelmcharts + - projecthelmcharts/finalizers + - projecthelmcharts/status + verbs: + - '*' +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: {{ template "helm-project-operator.name" . }}-edit + labels: {{ include "helm-project-operator.labels" . | nindent 4 }} + {{- if .Values.global.rbac.userRoles.aggregateToDefaultRoles }} + rbac.authorization.k8s.io/aggregate-to-edit: "true" + {{- end }} +rules: +- apiGroups: + - helm.cattle.io + resources: + - projecthelmcharts + - projecthelmcharts/status + verbs: + - 'get' + - 'list' + - 'watch' +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: {{ template "helm-project-operator.name" . }}-view + labels: {{ include "helm-project-operator.labels" . | nindent 4 }} + {{- if .Values.global.rbac.userRoles.aggregateToDefaultRoles }} + rbac.authorization.k8s.io/aggregate-to-view: "true" + {{- end }} +rules: +- apiGroups: + - helm.cattle.io + resources: + - projecthelmcharts + - projecthelmcharts/status + verbs: + - 'get' + - 'list' + - 'watch' +{{- end }} diff --git a/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/templates/configmap.yaml b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/templates/configmap.yaml new file mode 100644 index 0000000000..d4def157dc --- /dev/null +++ b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/templates/configmap.yaml @@ -0,0 +1,14 @@ +## Note: If you add another entry to this ConfigMap, make sure a corresponding env var is set +## in the deployment of the operator to ensure that a Helm upgrade will force the operator +## to reload the values in the ConfigMap and redeploy +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "helm-project-operator.name" . }}-config + namespace: {{ template "helm-project-operator.namespace" . }} + labels: {{ include "helm-project-operator.labels" . 
| nindent 4 }} +data: + hardening.yaml: |- +{{ .Values.hardenedNamespaces.configuration | toYaml | indent 4 }} + values.yaml: |- +{{ .Values.valuesOverride | toYaml | indent 4 }} diff --git a/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/templates/deployment.yaml b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/templates/deployment.yaml new file mode 100644 index 0000000000..33b81e72e6 --- /dev/null +++ b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/templates/deployment.yaml @@ -0,0 +1,126 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: {{ template "helm-project-operator.name" . }} + namespace: {{ template "helm-project-operator.namespace" . }} + labels: {{ include "helm-project-operator.labels" . | nindent 4 }} + app: {{ template "helm-project-operator.name" . }} +spec: + {{- if .Values.replicas }} + replicas: {{ .Values.replicas }} + {{- end }} + selector: + matchLabels: + app: {{ template "helm-project-operator.name" . }} + release: {{ $.Release.Name | quote }} + template: + metadata: + labels: {{ include "helm-project-operator.labels" . | nindent 8 }} + app: {{ template "helm-project-operator.name" . }} + spec: + containers: + - name: {{ template "helm-project-operator.name" . }} + image: "{{ template "system_default_registry" . }}{{ .Values.image.repository }}:{{ .Values.image.tag }}" + imagePullPolicy: "{{ .Values.image.pullPolicy }}" + args: + - {{ template "helm-project-operator.name" . }} + - --namespace={{ template "helm-project-operator.namespace" . }} + - --controller-name={{ template "helm-project-operator.name" . }} + - --values-override-file=/etc/helmprojectoperator/config/values.yaml +{{- if .Values.global.cattle.systemDefaultRegistry }} + - --system-default-registry={{ .Values.global.cattle.systemDefaultRegistry }} +{{- end }} +{{- if .Values.global.cattle.url }} + - --cattle-url={{ .Values.global.cattle.url }} +{{- end }} +{{- if .Values.global.cattle.projectLabel }} + - --project-label={{ .Values.global.cattle.projectLabel }} +{{- end }} +{{- if not .Values.projectReleaseNamespaces.enabled }} + - --system-project-label-values={{ join "," (append .Values.otherSystemProjectLabelValues .Values.global.cattle.systemProjectId) }} +{{- else if and (ne (len .Values.global.cattle.systemProjectId) 0) (ne (len .Values.projectReleaseNamespaces.labelValue) 0) (ne .Values.projectReleaseNamespaces.labelValue .Values.global.cattle.systemProjectId) }} + - --system-project-label-values={{ join "," (append .Values.otherSystemProjectLabelValues .Values.global.cattle.systemProjectId) }} +{{- else if len .Values.otherSystemProjectLabelValues }} + - --system-project-label-values={{ join "," .Values.otherSystemProjectLabelValues }} +{{- end }} +{{- if .Values.projectReleaseNamespaces.enabled }} +{{- if .Values.projectReleaseNamespaces.labelValue }} + - --project-release-label-value={{ .Values.projectReleaseNamespaces.labelValue }} +{{- else if .Values.global.cattle.systemProjectId }} + - --project-release-label-value={{ .Values.global.cattle.systemProjectId }} +{{- end }} +{{- end }} +{{- if .Values.global.cattle.clusterId }} + - --cluster-id={{ .Values.global.cattle.clusterId }} +{{- end }} +{{- if .Values.releaseRoleBindings.aggregate }} +{{- if .Values.releaseRoleBindings.clusterRoleRefs }} +{{- if .Values.releaseRoleBindings.clusterRoleRefs.admin }} + - --admin-cluster-role={{ .Values.releaseRoleBindings.clusterRoleRefs.admin }} +{{- end }} +{{- if 
.Values.releaseRoleBindings.clusterRoleRefs.edit }}
+          - --edit-cluster-role={{ .Values.releaseRoleBindings.clusterRoleRefs.edit }}
+{{- end }}
+{{- if .Values.releaseRoleBindings.clusterRoleRefs.view }}
+          - --view-cluster-role={{ .Values.releaseRoleBindings.clusterRoleRefs.view }}
+{{- end }}
+{{- end }}
+{{- end }}
+{{- if .Values.hardenedNamespaces.enabled }}
+          - --hardening-options-file=/etc/helmprojectoperator/config/hardening.yaml
+{{- else }}
+          - --disable-hardening
+{{- end }}
+{{- if .Values.debug }}
+          - --debug
+          - --debug-level={{ .Values.debugLevel }}
+{{- end }}
+{{- if not .Values.helmController.enabled }}
+          - --disable-embedded-helm-controller
+{{- else }}
+          - --helm-job-image={{ template "system_default_registry" . }}{{ .Values.helmController.job.image.repository }}:{{ .Values.helmController.job.image.tag }}
+{{- end }}
+{{- if not .Values.helmLocker.enabled }}
+          - --disable-embedded-helm-locker
+{{- end }}
+{{- if .Values.additionalArgs }}
+{{- toYaml .Values.additionalArgs | nindent 10 }}
+{{- end }}
+        env:
+        - name: NODE_NAME
+          valueFrom:
+            fieldRef:
+              fieldPath: spec.nodeName
+        ## Note: The two values below exist only to force Helm to roll the Deployment when
+        ## the contents of the ConfigMap change during an upgrade. Neither serves any
+        ## practical purpose at runtime; both can be removed and replaced with a ConfigMap
+        ## reloader in a future change if dynamic updates are required.
+        - name: HARDENING_OPTIONS_SHA_256_HASH
+          value: {{ .Values.hardenedNamespaces.configuration | toYaml | sha256sum }}
+        - name: VALUES_OVERRIDE_SHA_256_HASH
+          value: {{ .Values.valuesOverride | toYaml | sha256sum }}
+{{- if .Values.resources }}
+        resources: {{ toYaml .Values.resources | nindent 12 }}
+{{- end }}
+{{- if .Values.containerSecurityContext }}
+        securityContext: {{ toYaml .Values.containerSecurityContext | nindent 12 }}
+{{- end }}
+        volumeMounts:
+        - name: config
+          mountPath: "/etc/helmprojectoperator/config"
+      serviceAccountName: {{ template "helm-project-operator.name" . }}
+{{- if .Values.securityContext }}
+      securityContext: {{ toYaml .Values.securityContext | nindent 8 }}
+{{- end }}
+      nodeSelector: {{ include "linux-node-selector" . | nindent 8 }}
+{{- if .Values.nodeSelector }}
+{{- toYaml .Values.nodeSelector | nindent 8 }}
+{{- end }}
+      tolerations: {{ include "linux-node-tolerations" . | nindent 8 }}
+{{- if .Values.tolerations }}
+{{- toYaml .Values.tolerations | nindent 8 }}
+{{- end }}
+      volumes:
+      - name: config
+        configMap:
+          name: {{ template "helm-project-operator.name" . }}-config
diff --git a/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/templates/psp.yaml b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/templates/psp.yaml
new file mode 100644
index 0000000000..73dcc4560e
--- /dev/null
+++ b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/templates/psp.yaml
@@ -0,0 +1,68 @@
+{{- if .Values.global.cattle.psp.enabled }}
+apiVersion: policy/v1beta1
+kind: PodSecurityPolicy
+metadata:
+  name: {{ template "helm-project-operator.name" . }}-psp
+  namespace: {{ template "helm-project-operator.namespace" . }}
+  labels: {{ include "helm-project-operator.labels" . | nindent 4 }}
+    app: {{ template "helm-project-operator.name" . }}
+{{- if .Values.global.rbac.pspAnnotations }}
+  annotations: {{ toYaml .Values.global.rbac.pspAnnotations | nindent 4 }}
+{{- end }}
+spec:
+  privileged: false
+  hostNetwork: false
+  hostIPC: false
+  hostPID: false
+  runAsUser:
+    # Permits the container to run with root privileges as well.
+    rule: 'RunAsAny'
+  seLinux:
+    # This policy assumes the nodes are using AppArmor rather than SELinux.
+    rule: 'RunAsAny'
+  supplementalGroups:
+    rule: 'MustRunAs'
+    ranges:
+      # Allows all group IDs, including the root group (GID 0).
+      - min: 0
+        max: 65535
+  fsGroup:
+    rule: 'MustRunAs'
+    ranges:
+      # Allows all group IDs, including the root group (GID 0).
+      - min: 0
+        max: 65535
+  readOnlyRootFilesystem: false
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: {{ template "helm-project-operator.name" . }}-psp
+  labels: {{ include "helm-project-operator.labels" . | nindent 4 }}
+    app: {{ template "helm-project-operator.name" . }}
+rules:
+{{- if semverCompare "> 1.15.0-0" .Capabilities.KubeVersion.GitVersion }}
+- apiGroups: ['policy']
+{{- else }}
+- apiGroups: ['extensions']
+{{- end }}
+  resources: ['podsecuritypolicies']
+  verbs: ['use']
+  resourceNames:
+  - {{ template "helm-project-operator.name" . }}-psp
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: {{ template "helm-project-operator.name" . }}-psp
+  labels: {{ include "helm-project-operator.labels" . | nindent 4 }}
+    app: {{ template "helm-project-operator.name" . }}
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: {{ template "helm-project-operator.name" . }}-psp
+subjects:
+  - kind: ServiceAccount
+    name: {{ template "helm-project-operator.name" . }}
+    namespace: {{ template "helm-project-operator.namespace" . }}
+{{- end }}
diff --git a/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/templates/rbac.yaml b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/templates/rbac.yaml
new file mode 100644
index 0000000000..b1c4092029
--- /dev/null
+++ b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/templates/rbac.yaml
@@ -0,0 +1,32 @@
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: {{ template "helm-project-operator.name" . }}
+  labels: {{ include "helm-project-operator.labels" . | nindent 4 }}
+    app: {{ template "helm-project-operator.name" . }}
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: "cluster-admin" # see note below
+subjects:
+- kind: ServiceAccount
+  name: {{ template "helm-project-operator.name" . }}
+  namespace: {{ template "helm-project-operator.namespace" . }}
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: {{ template "helm-project-operator.name" . }}
+  namespace: {{ template "helm-project-operator.namespace" . }}
+  labels: {{ include "helm-project-operator.labels" . | nindent 4 }}
+    app: {{ template "helm-project-operator.name" . }}
+{{- if .Values.global.imagePullSecrets }}
+imagePullSecrets: {{ toYaml .Values.global.imagePullSecrets | nindent 2 }}
+{{- end }}
+# ---
+# NOTE:
+# Currently, because k3s-io/helm-controller can only deploy Jobs that are bound to the cluster-admin
+# ClusterRole, the only way this operator can create that binding is to be bound to cluster-admin itself
+# (a sketch of the rendered binding follows below).
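With the parent chart's default `nameOverride: prometheus-federator` and an install into the chart's catalog namespace `cattle-monitoring-system`, the ClusterRoleBinding above renders to roughly the following. This is an illustrative sketch only; the actual name and namespace depend on the release and any overrides:

```
# Illustrative rendered output; name and namespace depend on the release.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus-federator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin   # full cluster-admin, per the NOTE in rbac.yaml
subjects:
- kind: ServiceAccount
  name: prometheus-federator
  namespace: cattle-monitoring-system
```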
+#
+# As a result, this ClusterRoleBinding is left as a work-in-progress until k3s-io/helm-controller
+# allows granting only scoped-down permissions to the Job that it deploys.
diff --git a/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/templates/system-namespaces-configmap.yaml b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/templates/system-namespaces-configmap.yaml
new file mode 100644
index 0000000000..f4c85254e8
--- /dev/null
+++ b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/templates/system-namespaces-configmap.yaml
@@ -0,0 +1,62 @@
+{{- if .Values.systemNamespacesConfigMap.create }}
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: {{ template "helm-project-operator.name" . }}-system-namespaces
+  namespace: {{ template "helm-project-operator.namespace" . }}
+  labels: {{ include "helm-project-operator.labels" . | nindent 4 }}
+data:
+  system-namespaces.json: |-
+    {
+{{- if .Values.projectReleaseNamespaces.enabled }}
+{{- if .Values.projectReleaseNamespaces.labelValue }}
+      "projectReleaseLabelValue": {{ .Values.projectReleaseNamespaces.labelValue | quote }},
+{{- else if .Values.global.cattle.systemProjectId }}
+      "projectReleaseLabelValue": {{ .Values.global.cattle.systemProjectId | quote }},
+{{- else }}
+      "projectReleaseLabelValue": "",
+{{- end }}
+{{- else }}
+      "projectReleaseLabelValue": "",
+{{- end }}
+{{- if not .Values.projectReleaseNamespaces.enabled }}
+      "systemProjectLabelValues": {{ append .Values.otherSystemProjectLabelValues .Values.global.cattle.systemProjectId | toJson }}
+{{- else if and (ne (len .Values.global.cattle.systemProjectId) 0) (ne (len .Values.projectReleaseNamespaces.labelValue) 0) (ne .Values.projectReleaseNamespaces.labelValue .Values.global.cattle.systemProjectId) }}
+      "systemProjectLabelValues": {{ append .Values.otherSystemProjectLabelValues .Values.global.cattle.systemProjectId | toJson }}
+{{- else if len .Values.otherSystemProjectLabelValues }}
+      "systemProjectLabelValues": {{ .Values.otherSystemProjectLabelValues | toJson }}
+{{- else }}
+      "systemProjectLabelValues": []
+{{- end }}
+    }
+---
+{{- if (and .Values.systemNamespacesConfigMap.rbac.enabled .Values.systemNamespacesConfigMap.rbac.subjects) }}
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: {{ template "helm-project-operator.name" . }}-system-namespaces
+  namespace: {{ template "helm-project-operator.namespace" . }}
+  labels: {{ include "helm-project-operator.labels" . | nindent 4 }}
+rules:
+- apiGroups:
+  - ""
+  resources:
+  - configmaps
+  resourceNames:
+  - "{{ template "helm-project-operator.name" . }}-system-namespaces"
+  verbs:
+  - 'get'
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: {{ template "helm-project-operator.name" . }}-system-namespaces
+  namespace: {{ template "helm-project-operator.namespace" . }}
+  labels: {{ include "helm-project-operator.labels" . | nindent 4 }}
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: Role
+  name: {{ template "helm-project-operator.name" . 
}}-system-namespaces +subjects: {{ .Values.systemNamespacesConfigMap.rbac.subjects | toYaml | nindent 2 }} +{{- end }} +{{- end }} diff --git a/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/templates/validate-psp-install.yaml b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/templates/validate-psp-install.yaml new file mode 100644 index 0000000000..a30c59d3b7 --- /dev/null +++ b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/templates/validate-psp-install.yaml @@ -0,0 +1,7 @@ +#{{- if gt (len (lookup "rbac.authorization.k8s.io/v1" "ClusterRole" "" "")) 0 -}} +#{{- if .Values.global.cattle.psp.enabled }} +#{{- if not (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy") }} +#{{- fail "The target cluster does not have the PodSecurityPolicy API resource. Please disable PSPs in this chart before proceeding." -}} +#{{- end }} +#{{- end }} +#{{- end }} diff --git a/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/values.yaml b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/values.yaml new file mode 100644 index 0000000000..63fae45af8 --- /dev/null +++ b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/charts/helmProjectOperator/values.yaml @@ -0,0 +1,228 @@ +# Default values for helm-project-operator. +# This is a YAML-formatted file. +# Declare variables to be passed into your templates. + +# Helm Project Operator Configuration + +global: + cattle: + clusterId: "" + psp: + enabled: false + projectLabel: field.cattle.io/projectId + systemDefaultRegistry: "" + systemProjectId: "" + url: "" + rbac: + ## Create RBAC resources for ServiceAccounts and users + ## + create: true + + userRoles: + ## Create default user ClusterRoles to allow users to interact with ProjectHelmCharts + create: true + ## Aggregate default user ClusterRoles into default k8s ClusterRoles + aggregateToDefaultRoles: true + + pspAnnotations: {} + ## Specify pod annotations + ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#apparmor + ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp + ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#sysctl + ## + # seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*' + # seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default' + # apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default' + + ## Reference to one or more secrets to be used when pulling images + ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ + ## + imagePullSecrets: [] + # - name: "image-pull-secret" + +helmApiVersion: dummy.cattle.io/v1alpha1 + +## valuesOverride overrides values that are set on each ProjectHelmChart deployment on an operator-level +## User-provided values will be overwritten based on the values provided here +valuesOverride: {} + +## projectReleaseNamespaces are auto-generated namespaces that are created to host Helm Releases +## managed by this operator on behalf of a ProjectHelmChart +projectReleaseNamespaces: + ## Enabled determines whether Project Release Namespaces should be created. 
If false, the underlying
+  ## Helm release will be deployed in the Project Registration Namespace
+  enabled: true
+  ## labelValue is the project label value identifying the Project in which
+  ## Project Release Namespaces should be created
+  ## If empty, this will be set to the value of global.cattle.systemProjectId
+  ## If global.cattle.systemProjectId is also empty, project release namespaces will be disabled
+  labelValue: ""
+
+## otherSystemProjectLabelValues are project label values that identify namespaces as belonging
+## to system projects, i.e. namespaces that will be entirely ignored by the operator
+## By default, the global.cattle.systemProjectId will be included in this list
+otherSystemProjectLabelValues: []
+
+## releaseRoleBindings configures RoleBindings automatically created by the Helm Project Operator
+## in Project Release Namespaces where underlying Helm charts are deployed
+releaseRoleBindings:
+  ## aggregate enables creating these RoleBindings by aggregating RoleBindings in the
+  ## Project Registration Namespace or ClusterRoleBindings that bind users to the ClusterRoles
+  ## specified under clusterRoleRefs
+  aggregate: true
+
+  ## clusterRoleRefs are the ClusterRoles whose RoleBindings or ClusterRoleBindings should determine
+  ## the RoleBindings created in the Project Release Namespace
+  ##
+  ## By default, these are set to create RoleBindings based on the RoleBindings / ClusterRoleBindings
+  ## attached to the default K8s user-facing ClusterRoles of admin, edit, and view.
+  ## ref: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles
+  ##
+  clusterRoleRefs:
+    admin: admin
+    edit: edit
+    view: view
+
+hardenedNamespaces:
+  # Whether to automatically manage the configuration of the default ServiceAccount and
+  # auto-create a NetworkPolicy for each namespace created by this operator
+  enabled: true
+
+  configuration:
+    # Values to be applied to each default ServiceAccount created in a managed namespace
+    serviceAccountSpec:
+      secrets: []
+      imagePullSecrets: []
+      automountServiceAccountToken: false
+    # Values to be applied to each default generated NetworkPolicy created in a managed namespace
+    networkPolicySpec:
+      podSelector: {}
+      egress: []
+      ingress: []
+      policyTypes: ["Ingress", "Egress"]
+
+## systemNamespacesConfigMap is a ConfigMap created to allow users to see valid entries
+## for registering a ProjectHelmChart for a given Project on the Rancher Dashboard UI.
+## It does not need to be enabled for a non-Rancher use case.
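To make these values concrete, here is a minimal sketch of the kind of ProjectHelmChart a user would register against this operator. The name and namespace are hypothetical, and `spec.helmApiVersion` must match the operator's configured `helmApiVersion` (the parent chart sets `monitoring.cattle.io/v1alpha1`):

```
# Hypothetical example; metadata and values are illustrative only.
apiVersion: helm.cattle.io/v1alpha1
kind: ProjectHelmChart
metadata:
  name: project-monitoring
  namespace: cattle-project-p-example   # an assumed Project Registration Namespace
spec:
  helmApiVersion: monitoring.cattle.io/v1alpha1  # must match the operator's helmApiVersion
  values: {}  # user-provided values; operator-level valuesOverride always wins
```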
+systemNamespacesConfigMap:
+  ## Create indicates whether the system namespaces configmap should be created
+  ## This is a required value for integration with Rancher Dashboard
+  create: true
+
+  ## rbac configures the Role and RoleBinding created so that users can view the
+  ## systemNamespacesConfigMap; if not specified, only users with the ability to
+  ## view ConfigMaps in the namespace where this chart is deployed will be able to
+  ## properly view the system namespaces on the Rancher Dashboard UI
+  rbac:
+    ## enabled indicates that we should deploy a RoleBinding and Role to view this ConfigMap
+    enabled: true
+    ## subjects are the subjects that should be bound to this default RoleBinding
+    ## By default, we allow anyone who is authenticated to the system to be able to view
+    ## this ConfigMap in the deployment namespace
+    subjects:
+    - kind: Group
+      name: system:authenticated
+
+nameOverride: ""
+
+namespaceOverride: ""
+
+replicas: 1
+
+image:
+  repository: rancher/helm-project-operator
+  tag: v0.2.1
+  pullPolicy: IfNotPresent
+
+helmController:
+  # Note: should be disabled for RKE2 clusters since they already run Helm Controller to manage internal Kubernetes components
+  enabled: true
+
+  job:
+    image:
+      repository: rancher/klipper-helm
+      tag: v0.7.0-build20220315
+
+helmLocker:
+  enabled: true
+
+# Additional arguments to be passed into the Helm Project Operator image
+additionalArgs: []
+
+## Define which Nodes the Pods are scheduled on.
+## ref: https://kubernetes.io/docs/user-guide/node-selection/
+##
+nodeSelector: {}
+
+## Tolerations for use with node taints
+## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
+##
+tolerations: []
+# - key: "key"
+#   operator: "Equal"
+#   value: "value"
+#   effect: "NoSchedule"
+
+resources: {}
+  # limits:
+  #   memory: 500Mi
+  #   cpu: 1000m
+  # requests:
+  #   memory: 100Mi
+  #   cpu: 100m
+
+containerSecurityContext: {}
+  # allowPrivilegeEscalation: false
+  # capabilities:
+  #   drop:
+  #   - ALL
+  # privileged: false
+  # readOnlyRootFilesystem: true
+
+securityContext: {}
+  # runAsGroup: 1000
+  # runAsUser: 1000
+  # supplementalGroups:
+  #   - 1000
+
+debug: false
+debugLevel: 0
+
+cleanup:
+  image:
+    repository: rancher/shell
+    tag: v0.1.19
+    pullPolicy: IfNotPresent
+
+  ## Define which Nodes the Pods are scheduled on.
+  ## ref: https://kubernetes.io/docs/user-guide/node-selection/
+  ##
+  nodeSelector: {}
+
+  ## Tolerations for use with node taints
+  ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
+  ##
+  tolerations: []
+  # - key: "key"
+  #   operator: "Equal"
+  #   value: "value"
+  #   effect: "NoSchedule"
+
+  containerSecurityContext: {}
+    # allowPrivilegeEscalation: false
+    # capabilities:
+    #   drop:
+    #   - ALL
+    # privileged: false
+    # readOnlyRootFilesystem: true
+
+  securityContext:
+    runAsNonRoot: false
+    runAsUser: 0
+
+  resources: {}
+    # limits:
+    #   memory: 500Mi
+    #   cpu: 1000m
+    # requests:
+    #   memory: 100Mi
+    #   cpu: 100m
diff --git a/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/questions.yaml b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/questions.yaml
new file mode 100644
index 0000000000..87cf1339f8
--- /dev/null
+++ b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/questions.yaml
@@ -0,0 +1,43 @@
+questions:
+- variable: global.cattle.psp.enabled
+  default: "false"
+  description: "Flag to enable or disable the installation of PodSecurityPolicies by this chart in the target cluster. If the cluster is running Kubernetes 1.25+, you must leave this value set to false."
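The question above toggles `global.cattle.psp.enabled`, which gates both psp.yaml and the `fail` guard in validate-psp-install.yaml. On Kubernetes 1.25+, where the `policy/v1beta1` PodSecurityPolicy API has been removed, a minimal values override that keeps the chart installable looks like this (a sketch; this simply restates the chart's default):

```
# Keep PSPs disabled; required on Kubernetes 1.25+ where the
# policy/v1beta1 PodSecurityPolicy API no longer exists.
global:
  cattle:
    psp:
      enabled: false
```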
+  label: "Enable PodSecurityPolicies"
+  type: boolean
+  group: "Security Settings"
+- variable: helmProjectOperator.helmController.enabled
+  label: Enable Embedded Helm Controller
+  description: 'Note: If you are running Prometheus Federator in an RKE2 / K3s cluster before v1.23.14 / v1.24.8 / v1.25.4, this should be disabled.'
+  type: boolean
+  group: Helm Controller
+- variable: helmProjectOperator.helmLocker.enabled
+  label: Enable Embedded Helm Locker
+  type: boolean
+  group: Helm Locker
+- variable: helmProjectOperator.projectReleaseNamespaces.labelValue
+  label: Project Release Namespace Project ID
+  description: By default, the System Project is selected. This can be overridden to a different Project (e.g. p-xxxxx)
+  type: string
+  required: false
+  group: Namespaces
+- variable: helmProjectOperator.releaseRoleBindings.clusterRoleRefs.admin
+  label: Admin ClusterRole
+  description: By default, admin selects Project Owners. This can be overridden to a different ClusterRole (e.g. rt-xxxxx)
+  type: string
+  default: admin
+  required: false
+  group: RBAC
+- variable: helmProjectOperator.releaseRoleBindings.clusterRoleRefs.edit
+  label: Edit ClusterRole
+  description: By default, edit selects Project Members. This can be overridden to a different ClusterRole (e.g. rt-xxxxx)
+  type: string
+  default: edit
+  required: false
+  group: RBAC
+- variable: helmProjectOperator.releaseRoleBindings.clusterRoleRefs.view
+  label: View ClusterRole
+  description: By default, view selects Read-Only users. This can be overridden to a different ClusterRole (e.g. rt-xxxxx)
+  type: string
+  default: view
+  required: false
+  group: RBAC
\ No newline at end of file
diff --git a/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/templates/NOTES.txt b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/templates/NOTES.txt
new file mode 100644
index 0000000000..f551f36613
--- /dev/null
+++ b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/templates/NOTES.txt
@@ -0,0 +1,3 @@
+{{ $.Chart.Name }} has been installed. Check its status by running:
+  kubectl --namespace {{ template "prometheus-federator.namespace" . }} get pods -l "release={{ $.Release.Name }}"
+
diff --git a/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/templates/_helpers.tpl b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/templates/_helpers.tpl
new file mode 100644
index 0000000000..15ea4e5c88
--- /dev/null
+++ b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/templates/_helpers.tpl
@@ -0,0 +1,66 @@
+# Rancher
+{{- define "system_default_registry" -}}
+{{- if .Values.global.cattle.systemDefaultRegistry -}}
+{{- printf "%s/" .Values.global.cattle.systemDefaultRegistry -}}
+{{- end -}}
+{{- end -}}
+
+# Windows Support
+
+{{/*
+Windows clusters add a default taint to Linux nodes, so the Linux tolerations below
+must be added to any workloads that should be scheduled onto those Linux nodes.
+*/}}
+
+{{- define "linux-node-tolerations" -}}
+- key: "cattle.io/os"
+  value: "linux"
+  effect: "NoSchedule"
+  operator: "Equal"
+{{- end -}}
+
+{{- define "linux-node-selector" -}}
+{{- if semverCompare "<1.14-0" .Capabilities.KubeVersion.GitVersion -}}
+beta.kubernetes.io/os: linux
+{{- else -}}
+kubernetes.io/os: linux
+{{- end -}}
+{{- end -}}
+
+# Helm Project Operator
+
+{{/* vim: set filetype=mustache: */}}
+{{/* Expand the name of the chart. Resource names may be suffixed with "-alertmanager"
+(13 characters), so truncate to 50 of the 63 characters Kubernetes allows. */}}
+{{- define "prometheus-federator.name" -}}
+{{- default .Chart.Name .Values.nameOverride | trunc 50 | trimSuffix "-" -}}
+{{- end }}
+
+{{/*
+Allow the release namespace to be overridden for multi-namespace deployments in combined charts
+*/}}
+{{- define "prometheus-federator.namespace" -}}
+  {{- if .Values.namespaceOverride -}}
+    {{- .Values.namespaceOverride -}}
+  {{- else -}}
+    {{- .Release.Namespace -}}
+  {{- end -}}
+{{- end -}}
+
+{{/* Create chart name and version as used by the chart label. */}}
+{{- define "prometheus-federator.chartref" -}}
+{{- replace "+" "_" .Chart.Version | printf "%s-%s" .Chart.Name -}}
+{{- end }}
+
+{{/* Generate basic labels */}}
+{{- define "prometheus-federator.labels" }}
+app.kubernetes.io/managed-by: {{ .Release.Service }}
+app.kubernetes.io/instance: {{ .Release.Name }}
+app.kubernetes.io/version: "{{ replace "+" "_" .Chart.Version }}"
+app.kubernetes.io/part-of: {{ template "prometheus-federator.name" . }}
+chart: {{ template "prometheus-federator.chartref" . }}
+release: {{ $.Release.Name | quote }}
+heritage: {{ $.Release.Service | quote }}
+{{- if .Values.commonLabels }}
+{{ toYaml .Values.commonLabels }}
+{{- end }}
+{{- end }}
diff --git a/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/values.yaml b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/values.yaml
new file mode 100644
index 0000000000..a9b898baf4
--- /dev/null
+++ b/charts/prometheus-federator/105.0.0-rc.1+up0.4.2/values.yaml
@@ -0,0 +1,94 @@
+# Default values for prometheus-federator.
+# This is a YAML-formatted file.
+# Declare variables to be passed into your templates.
+
+# Prometheus Federator Configuration
+
+global:
+  cattle:
+    psp:
+      enabled: false
+    systemDefaultRegistry: ""
+    projectLabel: field.cattle.io/projectId
+    clusterId: ""
+    systemProjectId: ""
+    url: ""
+  rbac:
+    pspEnabled: true
+    pspAnnotations: {}
+      ## Specify pod annotations
+      ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#apparmor
+      ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp
+      ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#sysctl
+      ##
+      # seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
+      # seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
+      # apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
+
+  ## Reference to one or more secrets to be used when pulling images
+  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
+  ##
+  imagePullSecrets: []
+  # - name: "image-pull-secret"
+
+helmProjectOperator:
+  enabled: true
+
+  # ensures that all resources created by the subchart show up as prometheus-federator
+  helmApiVersion: monitoring.cattle.io/v1alpha1
+
+  nameOverride: prometheus-federator
+
+  helmController:
+    # Note: should be disabled for RKE2 clusters since they already run Helm Controller to manage internal Kubernetes components
+    enabled: true
+
+  helmLocker:
+    enabled: true
+
+  ## valuesOverride overrides values that are set on each Project Prometheus Stack Helm Chart deployment at the operator level
+  ## all values provided here will override any user-provided values automatically
+  valuesOverride:
+
+    federate:
+      # Change this to point at all Prometheuses you want all your Project Prometheus Stacks to federate from
+      # By default, this matches the default deployment of Rancher Monitoring
+      targets:
+      - 
rancher-monitoring-prometheus.cattle-monitoring-system.svc:9090 + + image: + repository: rancher/prometheus-federator + tag: v0.3.4 + pullPolicy: IfNotPresent + + # Additional arguments to be passed into the Prometheus Federator image + additionalArgs: [] + + ## Define which Nodes the Pods are scheduled on. + ## ref: https://kubernetes.io/docs/user-guide/node-selection/ + ## + nodeSelector: {} + + ## Tolerations for use with node taints + ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ + ## + tolerations: [] + # - key: "key" + # operator: "Equal" + # value: "value" + # effect: "NoSchedule" + + resources: {} + # limits: + # memory: 500Mi + # cpu: 1000m + # requests: + # memory: 100Mi + # cpu: 100m + + securityContext: {} + # allowPrivilegeEscalation: false + # readOnlyRootFilesystem: true + + debug: false + debugLevel: 0 \ No newline at end of file diff --git a/index.yaml b/index.yaml index 8237fe23e4..3235d3b3f3 100755 --- a/index.yaml +++ b/index.yaml @@ -5117,6 +5117,31 @@ entries: - assets/neuvector-monitor/neuvector-monitor-102.0.3+up2.6.0.tgz version: 102.0.3+up2.6.0 prometheus-federator: + - annotations: + catalog.cattle.io/certified: rancher + catalog.cattle.io/display-name: Prometheus Federator + catalog.cattle.io/kube-version: '>= 1.28.0-0 < 1.32.0-0' + catalog.cattle.io/namespace: cattle-monitoring-system + catalog.cattle.io/os: linux,windows + catalog.cattle.io/permits-os: linux,windows + catalog.cattle.io/provides-gvr: helm.cattle.io.projecthelmchart/v1alpha1 + catalog.cattle.io/rancher-version: '>= 2.9.0-0 < 2.10.0-0' + catalog.cattle.io/release-name: prometheus-federator + apiVersion: v2 + appVersion: 0.3.5 + created: "2024-09-20T15:30:52.543053408-04:00" + dependencies: + - condition: helmProjectOperator.enabled + name: helmProjectOperator + repository: file://./charts/helmProjectOperator + version: 0.2.1 + description: Prometheus Federator + digest: 85040647da03dde6a0bb8b88a9fa18c230995ee6a7c1527d3be040ddd3dd1285 + icon: https://raw.githubusercontent.com/rancher/prometheus-federator/main/assets/logos/prometheus-federator.svg + name: prometheus-federator + urls: + - assets/prometheus-federator/prometheus-federator-105.0.0-rc.1+up0.4.2.tgz + version: 105.0.0-rc.1+up0.4.2 - annotations: catalog.cattle.io/certified: rancher catalog.cattle.io/display-name: Prometheus Federator diff --git a/packages/rancher-monitoring/prometheus-federator/generated-changes/patch/Chart.yaml.patch b/packages/rancher-monitoring/prometheus-federator/generated-changes/patch/Chart.yaml.patch index acf9eb6801..ed74232f7f 100644 --- a/packages/rancher-monitoring/prometheus-federator/generated-changes/patch/Chart.yaml.patch +++ b/packages/rancher-monitoring/prometheus-federator/generated-changes/patch/Chart.yaml.patch @@ -4,7 +4,7 @@ annotations: catalog.cattle.io/certified: rancher catalog.cattle.io/display-name: Prometheus Federator -+ catalog.cattle.io/kube-version: '>= 1.26.0-0 < 1.31.0-0' ++ catalog.cattle.io/kube-version: '>= 1.28.0-0 < 1.32.0-0' catalog.cattle.io/namespace: cattle-monitoring-system catalog.cattle.io/os: linux,windows catalog.cattle.io/permits-os: linux,windows diff --git a/packages/rancher-monitoring/prometheus-federator/package.yaml b/packages/rancher-monitoring/prometheus-federator/package.yaml index e1c31398ee..0ac48be062 100644 --- a/packages/rancher-monitoring/prometheus-federator/package.yaml +++ b/packages/rancher-monitoring/prometheus-federator/package.yaml @@ -1,4 +1,4 @@ url: https://github.com/rancher/prometheus-federator.git subdirectory: 
charts/prometheus-federator/0.4.2 commit: 734fcff1d3e46f2eb2deb9fa4a388fa11905af12 -version: 104.0.1 +version: 105.0.0-rc.1 diff --git a/release.yaml b/release.yaml index d9c7dc2cac..205fb8f267 100644 --- a/release.yaml +++ b/release.yaml @@ -40,3 +40,5 @@ rancher-monitoring: rancher-monitoring-crd: - 105.0.0+up57.0.3 - 105.1.0-rc.1+up61.3.2 +prometheus-federator: + - 105.0.0-rc.1+up0.4.2
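Since the operator-level `valuesOverride` pins every Project Prometheus Stack's federation source, clusters whose Cluster Prometheus is not the rancher-monitoring default can repoint the stacks with an install-time override. A minimal sketch, where the target service name is hypothetical:

```
# Hypothetical install-time override; the chart's default target is
# rancher-monitoring-prometheus.cattle-monitoring-system.svc:9090
helmProjectOperator:
  valuesOverride:
    federate:
      targets:
      - my-cluster-prometheus.my-monitoring-ns.svc:9090
```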