
fix(helm): update k8s-monitoring ( 1.6.4 → 1.6.12 ) #161

Merged — 2 commits into main from renovate/k8s-monitoring-1.x, Dec 3, 2024

Conversation

@botty-white (bot), Contributor, commented Nov 9, 2024

This PR contains the following updates:

| Package | Update | Change |
| --- | --- | --- |
| k8s-monitoring | patch | `1.6.4` → `1.6.12` |
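Renovate classifies this as a patch update because only the last component of the semantic version changes between 1.6.4 and 1.6.12. A quick illustrative check (plain Python, not part of the chart or of Renovate itself):

```python
def bump_type(old: str, new: str) -> str:
    """Classify a semver change as major, minor, or patch."""
    o = [int(x) for x in old.split(".")]
    n = [int(x) for x in new.split(".")]
    for level, (a, b) in zip(("major", "minor", "patch"), zip(o, n)):
        if a != b:
            return level
    return "none"

print(bump_type("1.6.4", "1.6.12"))  # patch
```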

Release Notes

grafana/helm-charts (k8s-monitoring)

v1.6.12

Compare Source

A Helm chart for gathering, scraping, and forwarding Kubernetes telemetry data to a Grafana Stack.

Source commit: grafana/k8s-monitoring-helm@b7429a7

Tag on source: https://github.com/grafana/k8s-monitoring-helm/releases/tag/v1.6.12

v1.6.11

Compare Source

A Helm chart for gathering, scraping, and forwarding Kubernetes telemetry data to a Grafana Stack.

Source commit: grafana/k8s-monitoring-helm@ab233c0

Tag on source: https://github.com/grafana/k8s-monitoring-helm/releases/tag/v1.6.11

v1.6.10

Compare Source

A Helm chart for gathering, scraping, and forwarding Kubernetes telemetry data to a Grafana Stack.

Source commit: grafana/k8s-monitoring-helm@aa765dd

Tag on source: https://github.com/grafana/k8s-monitoring-helm/releases/tag/v1.6.10

v1.6.9

Compare Source

A Helm chart for gathering, scraping, and forwarding Kubernetes telemetry data to a Grafana Stack.

Source commit: grafana/k8s-monitoring-helm@a3ceed6

Tag on source: https://github.com/grafana/k8s-monitoring-helm/releases/tag/v1.6.9

v1.6.8

Compare Source

A Helm chart for gathering, scraping, and forwarding Kubernetes telemetry data to a Grafana Stack.

Source commit: grafana/k8s-monitoring-helm@2d70f02

Tag on source: https://github.com/grafana/k8s-monitoring-helm/releases/tag/v1.6.8

v1.6.7

Compare Source

A Helm chart for gathering, scraping, and forwarding Kubernetes telemetry data to a Grafana Stack.

Source commit: grafana/k8s-monitoring-helm@1bb6855

Tag on source: https://github.com/grafana/k8s-monitoring-helm/releases/tag/v1.6.7

v1.6.6

Compare Source

A Helm chart for gathering, scraping, and forwarding Kubernetes telemetry data to a Grafana Stack.

Source commit: grafana/k8s-monitoring-helm@6d61def

Tag on source: https://github.com/grafana/k8s-monitoring-helm/releases/tag/v1.6.6

v1.6.5

Compare Source

A Helm chart for gathering, scraping, and forwarding Kubernetes telemetry data to a Grafana Stack.

Source commit: grafana/k8s-monitoring-helm@4e826a7

Tag on source: https://github.com/grafana/k8s-monitoring-helm/releases/tag/v1.6.5


Configuration

📅 Schedule: Branch creation - "every weekend" in timezone America/New_York, Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


- [ ] If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.
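The schedule and automerge behavior described above map onto a Renovate configuration along these lines (a hypothetical sketch; the repository's actual `renovate.json` may differ, and the manager names here are assumptions):

```json
{
  "timezone": "America/New_York",
  "packageRules": [
    {
      "matchManagers": ["helm-values", "flux"],
      "schedule": ["every weekend"],
      "automerge": false
    }
  ]
}
```

With `automerge` disabled, Renovate opens and rebases the PR on schedule but leaves the merge to a human, matching the "Please merge this manually" note above.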

@botty-white (bot), Contributor Author, commented Nov 9, 2024

--- kubernetes/apps/observability/grafana-k8s-monitoring/app Kustomization: flux-system/grafana-k8s-monitoring HelmRelease: observability/grafana-k8s-monitoring

+++ kubernetes/apps/observability/grafana-k8s-monitoring/app Kustomization: flux-system/grafana-k8s-monitoring HelmRelease: observability/grafana-k8s-monitoring

@@ -13,13 +13,13 @@

     spec:
       chart: k8s-monitoring
       sourceRef:
         kind: HelmRepository
         name: grafana
         namespace: flux-system
-      version: 1.6.4
+      version: 1.6.12
   install:
     remediation:
       retries: 3
   interval: 30m
   upgrade:
     cleanupOnFail: true

@botty-white (bot), Contributor Author, commented Nov 9, 2024

--- HelmRelease: observability/grafana-k8s-monitoring ConfigMap: observability/grafana-k8s-monitoring-alloy

+++ HelmRelease: observability/grafana-k8s-monitoring ConfigMap: observability/grafana-k8s-monitoring-alloy

@@ -186,12 +186,13 @@

         metrics = [otelcol.exporter.prometheus.metrics_converter.input]
         logs = [otelcol.exporter.loki.logs_converter.input]
         traces = [otelcol.exporter.otlp.traces_service.input]
       }
     }
     otelcol.exporter.prometheus "metrics_converter" {
+      add_metric_suffixes = true
       forward_to = [prometheus.relabel.metrics_service.receiver]
     }
     otelcol.exporter.loki "logs_converter" {
       forward_to = [loki.process.pod_logs.receiver]
     }
     // Annotation Autodiscovery
@@ -925,8 +926,8 @@

       level  = "info"
       format = "logfmt"
     }
   k8s-monitoring-build-info-metric.prom: |
     # HELP grafana_kubernetes_monitoring_build_info A metric to report the version of the Kubernetes Monitoring Helm chart as well as a summary of enabled features
     # TYPE grafana_kubernetes_monitoring_build_info gauge
-    grafana_kubernetes_monitoring_build_info{version="1.6.4", namespace="observability", metrics="enabled,alloy,autoDiscover,kube-state-metrics,node-exporter,kubelet,kubeletResource,cadvisor", logs="enabled,events,pod_logs", traces="enabled", deployments="kube-state-metrics,prometheus-node-exporter,prometheus-operator-crds"} 1
-
+    grafana_kubernetes_monitoring_build_info{version="1.6.12", namespace="observability", metrics="enabled,alloy,autoDiscover,kube-state-metrics,node-exporter,kubelet,kubeletResource,cadvisor", logs="enabled,events,pod_logs", traces="enabled", deployments="kube-state-metrics,prometheus-node-exporter,prometheus-operator-crds"} 1
+
--- HelmRelease: observability/grafana-k8s-monitoring DaemonSet: observability/grafana-k8s-monitoring-alloy-logs

+++ HelmRelease: observability/grafana-k8s-monitoring DaemonSet: observability/grafana-k8s-monitoring-alloy-logs

@@ -23,13 +23,13 @@

         app.kubernetes.io/name: alloy-logs
         app.kubernetes.io/instance: grafana-k8s-monitoring
     spec:
       serviceAccountName: grafana-k8s-monitoring-alloy-logs
       containers:
       - name: alloy
-        image: docker.io/grafana/alloy:v1.4.2
+        image: docker.io/grafana/alloy:v1.5.0
         imagePullPolicy: IfNotPresent
         args:
         - run
         - /etc/alloy/config.alloy
         - --storage.path=/tmp/alloy
         - --server.http.listen-addr=0.0.0.0:12345
--- HelmRelease: observability/grafana-k8s-monitoring Deployment: observability/grafana-k8s-monitoring-alloy-events

+++ HelmRelease: observability/grafana-k8s-monitoring Deployment: observability/grafana-k8s-monitoring-alloy-events

@@ -24,13 +24,13 @@

         app.kubernetes.io/name: alloy-events
         app.kubernetes.io/instance: grafana-k8s-monitoring
     spec:
       serviceAccountName: grafana-k8s-monitoring-alloy-events
       containers:
       - name: alloy
-        image: docker.io/grafana/alloy:v1.4.2
+        image: docker.io/grafana/alloy:v1.5.0
         imagePullPolicy: IfNotPresent
         args:
         - run
         - /etc/alloy/config.alloy
         - --storage.path=/tmp/alloy
         - --server.http.listen-addr=0.0.0.0:12345
--- HelmRelease: observability/grafana-k8s-monitoring Deployment: observability/grafana-k8s-monitoring-kube-state-metrics

+++ HelmRelease: observability/grafana-k8s-monitoring Deployment: observability/grafana-k8s-monitoring-kube-state-metrics

@@ -44,13 +44,13 @@

       - name: kube-state-metrics
         args:
         - --port=8080
         - --resources=certificatesigningrequests,configmaps,cronjobs,daemonsets,deployments,endpoints,horizontalpodautoscalers,ingresses,jobs,leases,limitranges,mutatingwebhookconfigurations,namespaces,networkpolicies,nodes,persistentvolumeclaims,persistentvolumes,poddisruptionbudgets,pods,replicasets,replicationcontrollers,resourcequotas,secrets,services,statefulsets,storageclasses,validatingwebhookconfigurations,volumeattachments
         - --metric-labels-allowlist=nodes=[agentpool,alpha.eksctl.io/cluster-name,alpha.eksctl.io/nodegroup-name,beta.kubernetes.io/instance-type,cloud.google.com/gke-nodepool,cluster_name,ec2_amazonaws_com_Name,ec2_amazonaws_com_aws_autoscaling_groupName,ec2_amazonaws_com_aws_autoscaling_group_name,ec2_amazonaws_com_name,eks_amazonaws_com_nodegroup,k8s_io_cloud_provider_aws,karpenter.sh/nodepool,kubernetes.azure.com/cluster,kubernetes.io/arch,kubernetes.io/hostname,kubernetes.io/os,node.kubernetes.io/instance-type,topology.kubernetes.io/region,topology.kubernetes.io/zone]
         imagePullPolicy: IfNotPresent
-        image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.13.0
+        image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.14.0
         ports:
         - containerPort: 8080
           name: http
         livenessProbe:
           failureThreshold: 3
           httpGet:
@@ -64,13 +64,13 @@

           timeoutSeconds: 5
         readinessProbe:
           failureThreshold: 3
           httpGet:
             httpHeaders: null
             path: /readyz
-            port: 8080
+            port: 8081
             scheme: HTTP
           initialDelaySeconds: 5
           periodSeconds: 10
           successThreshold: 1
           timeoutSeconds: 5
         resources: {}
--- HelmRelease: observability/grafana-k8s-monitoring StatefulSet: observability/grafana-k8s-monitoring-alloy

+++ HelmRelease: observability/grafana-k8s-monitoring StatefulSet: observability/grafana-k8s-monitoring-alloy

@@ -26,13 +26,13 @@

         app.kubernetes.io/name: alloy
         app.kubernetes.io/instance: grafana-k8s-monitoring
     spec:
       serviceAccountName: grafana-k8s-monitoring-alloy
       containers:
       - name: alloy
-        image: docker.io/grafana/alloy:v1.4.2
+        image: docker.io/grafana/alloy:v1.5.0
         imagePullPolicy: IfNotPresent
         args:
         - run
         - /etc/alloy/config.alloy
         - --storage.path=/tmp/alloy
         - --server.http.listen-addr=0.0.0.0:12345
--- HelmRelease: observability/grafana-k8s-monitoring ConfigMap: observability/validate-grafana-k8s-monitoring

+++ HelmRelease: observability/grafana-k8s-monitoring ConfigMap: observability/validate-grafana-k8s-monitoring

@@ -193,12 +193,13 @@

         metrics = [otelcol.exporter.prometheus.metrics_converter.input]
         logs = [otelcol.exporter.loki.logs_converter.input]
         traces = [otelcol.exporter.otlp.traces_service.input]
       }
     }
     otelcol.exporter.prometheus "metrics_converter" {
+      add_metric_suffixes = true
       forward_to = [prometheus.relabel.metrics_service.receiver]
     }
     otelcol.exporter.loki "logs_converter" {
       forward_to = [loki.process.pod_logs.receiver]
     }
     // Annotation Autodiscovery
--- HelmRelease: observability/grafana-k8s-monitoring Pod: observability/validate-grafana-k8s-monitoring

+++ HelmRelease: observability/grafana-k8s-monitoring Pod: observability/validate-grafana-k8s-monitoring

@@ -19,13 +19,13 @@

   - effect: NoSchedule
     key: kubernetes.io/arch
     operator: Equal
     value: arm64
   containers:
   - name: alloy
-    image: docker.io/grafana/alloy:v1.4.2
+    image: docker.io/grafana/alloy:v1.5.0
     command:
     - bash
     - -c
     - |
       echo Validating Grafana Alloy config file
       if ! alloy fmt /etc/alloy/config.alloy > /dev/null; then
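The `k8s-monitoring-build-info-metric.prom` entry in the diff above encodes the chart version as a Prometheus label. As an aside, a small Python sketch (illustrative only, not part of the chart) shows how such a label can be pulled out of one exposition-format sample line:

```python
import re
from typing import Optional

# Sample line in Prometheus exposition format, abbreviated from the diff above.
metric = ('grafana_kubernetes_monitoring_build_info{version="1.6.12", '
          'namespace="observability", traces="enabled"} 1')

def label_value(sample: str, label: str) -> Optional[str]:
    """Extract a single label's value from one exposition-format sample."""
    m = re.search(rf'{label}="([^"]*)"', sample)
    return m.group(1) if m else None

print(label_value(metric, "version"))    # 1.6.12
print(label_value(metric, "namespace"))  # observability
```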

@zebernst zebernst force-pushed the main branch 6 times, most recently from c691458 to fff0219 Compare November 12, 2024 19:08
@botty-white botty-white bot force-pushed the renovate/k8s-monitoring-1.x branch from 7dd4683 to cdaabf5 Compare November 14, 2024 18:24
@botty-white botty-white bot changed the title fix(helm): update k8s-monitoring ( 1.6.4 → 1.6.6 ) fix(helm): update k8s-monitoring ( 1.6.4 → 1.6.7 ) Nov 14, 2024
@botty-white botty-white bot force-pushed the renovate/k8s-monitoring-1.x branch from cdaabf5 to f0f7455 Compare November 19, 2024 23:17
@botty-white botty-white bot changed the title fix(helm): update k8s-monitoring ( 1.6.4 → 1.6.7 ) fix(helm): update k8s-monitoring ( 1.6.4 → 1.6.8 ) Nov 19, 2024
@botty-white botty-white bot force-pushed the renovate/k8s-monitoring-1.x branch from f0f7455 to bfb73e2 Compare November 20, 2024 17:16
@botty-white botty-white bot changed the title fix(helm): update k8s-monitoring ( 1.6.4 → 1.6.8 ) fix(helm): update k8s-monitoring ( 1.6.4 → 1.6.9 ) Nov 20, 2024
@botty-white botty-white bot force-pushed the renovate/k8s-monitoring-1.x branch from bfb73e2 to ec3240c Compare November 22, 2024 02:57
@botty-white botty-white bot changed the title fix(helm): update k8s-monitoring ( 1.6.4 → 1.6.9 ) fix(helm): update k8s-monitoring ( 1.6.4 → 1.6.10 ) Nov 22, 2024
@botty-white botty-white bot force-pushed the renovate/k8s-monitoring-1.x branch from ec3240c to e5a6961 Compare November 22, 2024 16:23
@botty-white botty-white bot changed the title fix(helm): update k8s-monitoring ( 1.6.4 → 1.6.10 ) fix(helm): update k8s-monitoring ( 1.6.4 → 1.6.11 ) Nov 22, 2024
@botty-white botty-white bot force-pushed the renovate/k8s-monitoring-1.x branch from e5a6961 to 936fce2 Compare November 27, 2024 04:23
@botty-white botty-white bot changed the title fix(helm): update k8s-monitoring ( 1.6.4 → 1.6.11 ) fix(helm): update k8s-monitoring ( 1.6.4 → 1.6.12 ) Nov 27, 2024
@botty-white botty-white bot force-pushed the renovate/k8s-monitoring-1.x branch from 936fce2 to 8de26bc Compare December 2, 2024 23:40
@botty-white botty-white bot force-pushed the renovate/k8s-monitoring-1.x branch from 8de26bc to ddfaecd Compare December 2, 2024 23:42
@zebernst zebernst merged commit 5cc56f8 into main Dec 3, 2024
3 checks passed
@zebernst zebernst deleted the renovate/k8s-monitoring-1.x branch December 3, 2024 14:58