[release-v2.8] Upgrading charts-build-scripts to v0.5.5 (#3814)
Co-authored-by: Josh Meranda <joshua.meranda@suse.com>
lucasmlp and joshmeranda committed Apr 25, 2024
Parent: 1912709 | Commit: 2838874
Showing 222 changed files with 9,911 additions and 23 deletions.
4 binary files not shown.
29 changes: 29 additions & 0 deletions charts/rancher-alerting-drivers/103.0.2/Chart.yaml
@@ -0,0 +1,29 @@
annotations:
catalog.cattle.io/certified: rancher
catalog.cattle.io/display-name: Alerting Drivers
catalog.cattle.io/kube-version: '>= 1.23.0-0 < 1.29.0-0'
catalog.cattle.io/os: linux
catalog.cattle.io/permits-os: linux,windows
catalog.cattle.io/rancher-version: '>= 2.8.0-0 < 2.9.0-0'
catalog.cattle.io/release-name: rancher-alerting-drivers
catalog.cattle.io/type: cluster-tool
catalog.cattle.io/upstream-version: 100.0.1
apiVersion: v2
appVersion: 1.16.0
dependencies:
- condition: prom2teams.enabled
name: prom2teams
repository: file://./charts/prom2teams
version: 0.2.0
- condition: sachet.enabled
name: sachet
repository: file://./charts/sachet
version: 1.0.1
description: The manager for third-party webhook receivers used in Prometheus Alertmanager
icon: https://charts.rancher.io/assets/logos/alerting-drivers.svg
keywords:
- monitoring
- alertmanager
- webhook
name: rancher-alerting-drivers
version: 103.0.2
11 changes: 11 additions & 0 deletions charts/rancher-alerting-drivers/103.0.2/README.md
@@ -0,0 +1,11 @@
# Rancher Alerting Drivers

This chart installs one or more [Alertmanager Webhook Receiver Integrations](https://prometheus.io/docs/operating/integrations/#alertmanager-webhook-receiver) (i.e. Drivers).

Those Drivers can be targeted by an existing deployment of Alertmanager to send alerts to notification mechanisms that are not natively supported.

Currently, this chart supports the following Drivers:
- Microsoft Teams, based on [prom2teams](https://github.com/idealista/prom2teams)
- SMS, based on [Sachet](https://github.com/messagebird/sachet)

After installing rancher-alerting-drivers, please refer to the upstream documentation for each Driver for configuration options.
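
For reference, each Driver is gated by the dependency conditions in the Chart.yaml above (`prom2teams.enabled`, `sachet.enabled`). A minimal install sketch, assuming the chart is served from a repo alias named rancher-charts pointing at https://charts.rancher.io and installed into a placeholder cattle-monitoring-system namespace:

helm repo add rancher-charts https://charts.rancher.io
helm install rancher-alerting-drivers rancher-charts/rancher-alerting-drivers \
  --version 103.0.2 \
  --namespace cattle-monitoring-system --create-namespace \
  --set prom2teams.enabled=true \
  --set sachet.enabled=false
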
29 changes: 29 additions & 0 deletions charts/rancher-alerting-drivers/103.0.2/app-readme.md
@@ -0,0 +1,29 @@
# Rancher Alerting Drivers

This chart installs one or more [Alertmanager Webhook Receiver Integrations](https://prometheus.io/docs/operating/integrations/#alertmanager-webhook-receiver) (i.e. Drivers).

Those Drivers can be targeted by an existing deployment of Alertmanager to send alerts to notification mechanisms that are not natively supported.

Currently, this chart supports the following Drivers:
- Microsoft Teams, based on [prom2teams](https://github.com/idealista/prom2teams)
- SMS, based on [Sachet](https://github.com/messagebird/sachet)

After installing rancher-alerting-drivers, please refer to the upstream documentation for each Driver for configuration options.

## Upgrading to Kubernetes v1.25+

Starting in Kubernetes v1.25, [Pod Security Policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/) have been removed from the Kubernetes API.

As a result, **before upgrading to Kubernetes v1.25** (or on a fresh install in a Kubernetes v1.25+ cluster), users are expected to perform an in-place upgrade of this chart with `global.cattle.psp.enabled` set to `false` if it has been previously set to `true`.
> **Note:**
> In this chart release, any previous fields that were associated with PSP resources have been removed in favor of a single global field: `global.cattle.psp.enabled`.

> **Note:**
> If you upgrade your cluster to Kubernetes v1.25+ before removing PSPs via a `helm upgrade` (even if you manually clean up resources), **it will leave the Helm release in a broken state within the cluster such that further Helm operations will not work (`helm uninstall`, `helm upgrade`, etc.).**
>
> If your charts get stuck in this state, please consult the Rancher docs on how to clean up your Helm release secrets.

Upon setting `global.cattle.psp.enabled` to `false`, the chart will remove any PSP resources deployed on its behalf from the cluster. This is the default setting for this chart.

As a replacement for PSPs, [Pod Security Admission](https://kubernetes.io/docs/concepts/security/pod-security-admission/) should be used. Please consult the Rancher docs for more details on how to configure your chart release namespaces to work with the new Pod Security Admission and apply Pod Security Standards.
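
A sketch of the in-place upgrade described above, to be run before moving the cluster to Kubernetes v1.25+ (the release name, namespace, and repo alias are placeholders):

helm upgrade rancher-alerting-drivers rancher-charts/rancher-alerting-drivers \
  --namespace cattle-monitoring-system \
  --reuse-values \
  --set global.cattle.psp.enabled=false
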
@@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
@@ -0,0 +1,10 @@
annotations:
catalog.cattle.io/certified: rancher
catalog.cattle.io/hidden: "true"
catalog.cattle.io/os: linux
catalog.cattle.io/release-name: rancher-prom2teams
apiVersion: v1
appVersion: 4.2.1
description: A Helm chart for Prom2Teams based on the upstream https://github.com/idealista/prom2teams
name: prom2teams
version: 0.2.0
@@ -0,0 +1,44 @@
{%- set
theme_colors = {
'resolved' : '2DC72D',
'critical' : '8C1A1A',
'severe' : '8C1A1A',
'warning' : 'FF9A0B',
'unknown' : 'CCCCCC'
}
-%}

{
"@type": "MessageCard",
"@context": "http://schema.org/extensions",
"themeColor": "{% if status=='resolved' %} {{ theme_colors.resolved }} {% else %} {{ theme_colors[msg_text.severity] }} {% endif %}",
"summary": "{% if status=='resolved' %}(Resolved) {% endif %}{{ msg_text.summary }}",
"title": "Prometheus alert {% if status=='resolved' %}(Resolved) {% elif status=='unknown' %} (status unknown) {% endif %}",
"sections": [{
"activityTitle": "{{ msg_text.summary }}",
"facts": [{% if msg_text.name %}{
"name": "Alert",
"value": "{{ msg_text.name }}"
},{% endif %}{% if msg_text.instance %}{
"name": "In host",
"value": "{{ msg_text.instance }}"
},{% endif %}{% if msg_text.severity %}{
"name": "Severity",
"value": "{{ msg_text.severity }}"
},{% endif %}{% if msg_text.description %}{
"name": "Description",
"value": "{{ msg_text.description }}"
},{% endif %}{
"name": "Status",
"value": "{{ msg_text.status }}"
}{% if msg_text.extra_labels %}{% for key in msg_text.extra_labels %},{
"name": "{{ key }}",
"value": "{{ msg_text.extra_labels[key] }}"
}{% endfor %}{% endif %}
{% if msg_text.extra_annotations %}{% for key in msg_text.extra_annotations %},{
"name": "{{ key }}",
"value": "{{ msg_text.extra_annotations[key] }}"
}{% endfor %}{% endif %}],
"markdown": true
}]
}
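
The template above consumes the fields prom2teams extracts from an Alertmanager webhook payload (summary, severity, instance, description, status, plus any extra labels and annotations). A rough way to exercise it with a hand-built alert; a sketch, assuming prom2teams is reachable locally on the port 8089 exposed by the deployment below and that the connector is named Connector (the /v2/<connector-name> path is an assumption about the upstream prom2teams API):

curl -X POST http://localhost:8089/v2/Connector \
  -H 'Content-Type: application/json' \
  -d '{
        "status": "firing",
        "alerts": [{
          "status": "firing",
          "labels": {"alertname": "HighCPU", "severity": "warning", "instance": "node-1"},
          "annotations": {"summary": "CPU usage high", "description": "CPU above 90% for 5m"}
        }]
      }'
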
@@ -0,0 +1,2 @@
Prom2Teams has been installed. Check its status by running:
kubectl --namespace {{ .Release.Namespace }} get pods -l "app.kubernetes.io/instance={{ .Release.Name }}"
@@ -0,0 +1,73 @@
{{/* vim: set filetype=mustache: */}}

{{- define "system_default_registry" -}}
{{- if .Values.global.cattle.systemDefaultRegistry -}}
{{- printf "%s/" .Values.global.cattle.systemDefaultRegistry -}}
{{- end -}}
{{- end -}}

{{/*
Windows clusters add a default taint to Linux nodes; add the Linux tolerations
below so workloads can be scheduled onto those Linux nodes.
*/}}

{{- define "linux-node-tolerations" -}}
- key: "cattle.io/os"
value: "linux"
effect: "NoSchedule"
operator: "Equal"
{{- end -}}

{{- define "linux-node-selector" -}}
{{- if semverCompare "<1.14-0" .Capabilities.KubeVersion.GitVersion -}}
beta.kubernetes.io/os: linux
{{- else -}}
kubernetes.io/os: linux
{{- end -}}
{{- end -}}

{{/*
Expand the name of the chart.
*/}}
{{- define "prom2teams.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "prom2teams.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}

{{/*
Allow the release namespace to be overridden for multi-namespace deployments in combined charts
*/}}
{{- define "prom2teams.namespace" -}}
{{ default .Release.Namespace .Values.global.namespaceOverride }}
{{- end -}}

{{/*
Common labels
*/}}
{{- define "prom2teams.labels" -}}
app.kubernetes.io/name: {{ include "prom2teams.name" . }}
helm.sh/chart: {{ printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
app.kubernetes.io/instance: {{ .Release.Name }}
release: {{ .Release.Name }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
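
A quick way to see how these helpers expand without touching a cluster is to render the subchart locally; a sketch, assuming a local checkout of this chart directory:

helm template my-drivers ./charts/rancher-alerting-drivers/103.0.2 \
  --set prom2teams.enabled=true \
  --show-only charts/prom2teams/templates/deployment.yaml

The --show-only flag limits output to the one rendered manifest, which makes it easy to confirm how prom2teams.fullname and the node selector/toleration helpers expand.
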
@@ -0,0 +1,39 @@
{{- $valid := list "DEBUG" "INFO" "WARNING" "ERROR" "CRITICAL" -}}
{{- if not (has .Values.prom2teams.loglevel $valid) -}}
{{- fail "Invalid log level"}}
{{- end -}}
{{- if and .Values.prom2teams.connector (hasKey .Values.prom2teams.connectors "Connector") -}}
{{- fail "Invalid configuration: prom2teams.connectors can't have a connector named Connector when prom2teams.connector is set"}}
{{- end -}}
{{/* Create the configmap when the operation is helm install and the target configmap does not exist. */}}
{{- if not (lookup "v1" "ConfigMap" (include "prom2teams.namespace" . ) (include "prom2teams.fullname" .)) }}
apiVersion: v1
kind: ConfigMap
metadata:
namespace: {{ include "prom2teams.namespace" . }}
name: {{ include "prom2teams.fullname" . }}
labels: {{ include "prom2teams.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": pre-install, pre-upgrade
"helm.sh/hook-weight": "3"
"helm.sh/resource-policy": keep
data:
config.ini: |-
[HTTP Server]
Host: {{ .Values.prom2teams.host }}
Port: {{ .Values.prom2teams.port }}
[Microsoft Teams]
{{- with .Values.prom2teams.connector }}
Connector: {{ . }}
{{- end }}
{{- range $key, $val := .Values.prom2teams.connectors }}
{{ $key }}: {{ $val }}
{{- end }}
[Group Alerts]
Field: {{ .Values.prom2teams.group_alerts_by }}
[Log]
Level: {{ .Values.prom2teams.loglevel }}
[Template]
Path: {{ .Values.prom2teams.templatepath }}
teams.j2: {{ .Files.Get "files/teams.j2" | quote }}
{{- end -}}
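
Because the [Microsoft Teams] section is rendered straight from .Values.prom2teams.connectors, named connectors can be supplied at install time; a sketch (the release name, namespace, repo alias, connector name, and webhook URL are placeholders):

helm upgrade --install rancher-alerting-drivers rancher-charts/rancher-alerting-drivers \
  --namespace cattle-monitoring-system \
  --set prom2teams.enabled=true \
  --set prom2teams.connectors.AlertsChannel=https://example.webhook.office.com/webhookb2/placeholder

Note the design choice in the template above: the lookup guard only creates the ConfigMap when it does not already exist, and "helm.sh/resource-policy": keep prevents Helm from deleting it, so manual edits to the live ConfigMap survive later upgrades.
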
@@ -0,0 +1,83 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "prom2teams.fullname" . }}
namespace: {{ include "prom2teams.namespace" . }}
labels: {{ include "prom2teams.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app.kubernetes.io/name: {{ include "prom2teams.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ include "prom2teams.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
serviceAccountName: {{ include "prom2teams.fullname" . }}
{{- with .Values.imagePullSecrets }}
imagePullSecrets: {{ toYaml . | nindent 8 }}
{{- end }}
volumes:
- name: config
configMap:
name: {{ include "prom2teams.fullname" . }}
containers:
- name: {{ .Chart.Name }}
image: {{ include "system_default_registry" . }}{{ .Values.image.repository }}:{{ .Values.image.tag }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 8089
protocol: TCP
volumeMounts:
- name: config
mountPath: /opt/prom2teams/helmconfig/
env:
- name: APP_CONFIG_FILE
value: {{ .Values.prom2teams.config | quote }}
- name: PROM2TEAMS_PORT
value: {{ .Values.prom2teams.port | quote }}
- name: PROM2TEAMS_HOST
value: {{ .Values.prom2teams.host | quote }}
- name: PROM2TEAMS_CONNECTOR
value: {{ .Values.prom2teams.connector | quote }}
- name: PROM2TEAMS_GROUP_ALERTS_BY
value: {{ .Values.prom2teams.group_alerts_by | quote }}
- name: PROM2TEAMS_LOGLEVEL
value: {{ .Values.prom2teams.loglevel }}
{{- range $key, $value := .Values.prom2teams.extraEnv }}
- name: "{{ $key }}"
value: "{{ $value }}"
{{- end }}
resources: {{ toYaml .Values.resources | nindent 12 }}
{{- if .Values.securityContext.enabled }}
securityContext:
privileged: false
readOnlyRootFilesystem: false
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
{{- end }}
nodeSelector: {{ include "linux-node-selector" . | nindent 8 }}
{{- if .Values.nodeSelector }}
{{- toYaml .Values.nodeSelector | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity: {{ toYaml . | nindent 8 }}
{{- end }}
tolerations: {{ include "linux-node-tolerations" . | nindent 8 }}
{{- if .Values.tolerations }}
{{- toYaml .Values.tolerations | nindent 8 }}
{{- end }}
{{- if .Values.securityContext.enabled }}
securityContext:
runAsNonRoot: {{ if eq (int .Values.securityContext.runAsUser) 0 }}false{{ else }}true{{ end }}
runAsUser: {{ .Values.securityContext.runAsUser }}
runAsGroup: {{ .Values.securityContext.runAsGroup }}
fsGroup: {{ .Values.securityContext.fsGroup }}
{{- end }}
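
Once applied, the Deployment can be checked roughly as follows; a sketch, assuming the default fullname resolves to rancher-alerting-drivers-prom2teams and the release lives in a placeholder cattle-monitoring-system namespace (port 8089 matches the containerPort above):

kubectl --namespace cattle-monitoring-system rollout status \
  deployment/rancher-alerting-drivers-prom2teams
kubectl --namespace cattle-monitoring-system port-forward \
  deployment/rancher-alerting-drivers-prom2teams 8089:8089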
