diff --git a/charts/spark-operator-chart/README.md b/charts/spark-operator-chart/README.md
index 2314c82be..50c7cd840 100644
--- a/charts/spark-operator-chart/README.md
+++ b/charts/spark-operator-chart/README.md
@@ -112,6 +112,7 @@ All charts linted successfully
 | rbac.create | bool | `false` | **DEPRECATED** use `createRole` and `createClusterRole` |
 | rbac.createClusterRole | bool | `true` | Create and use RBAC `ClusterRole` resources |
 | rbac.createRole | bool | `true` | Create and use RBAC `Role` resources |
+| rbac.annotations | object | `{}` | Optional annotations for the spark rbac |
 | replicaCount | int | `1` | Desired number of pods, leaderElection will be enabled if this is greater than 1 |
 | resourceQuotaEnforcement.enable | bool | `false` | Whether to enable the ResourceQuota enforcement for SparkApplication resources. Requires the webhook to be enabled by setting `webhook.enable` to true. Ref: https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/docs/user-guide.md#enabling-resource-quota-enforcement. |
 | resources | object | `{}` | Pod resource requests and limits Note, that each job submission will spawn a JVM within the Spark Operator Pod using "/usr/local/openjdk-11/bin/java -Xmx128m". Kubernetes may kill these Java processes at will to enforce resource limits. When that happens, you will see the following error: 'failed to run spark-submit for SparkApplication [...]: signal: killed' - when this happens, you may want to increase memory limits. |
diff --git a/charts/spark-operator-chart/templates/rbac.yaml b/charts/spark-operator-chart/templates/rbac.yaml
index 6f5d97c0d..c78a73df9 100644
--- a/charts/spark-operator-chart/templates/rbac.yaml
+++ b/charts/spark-operator-chart/templates/rbac.yaml
@@ -7,6 +7,9 @@ metadata:
     "helm.sh/hook": pre-install, pre-upgrade
     "helm.sh/hook-delete-policy": hook-failed, before-hook-creation
     "helm.sh/hook-weight": "-10"
+{{- with .Values.rbac.annotations }}
+{{ toYaml . | indent 4 }}
+{{- end }}
   labels:
     {{- include "spark-operator.labels" . | nindent 4 }}
 rules:
diff --git a/charts/spark-operator-chart/values.yaml b/charts/spark-operator-chart/values.yaml
index c7d672f60..3436a8f75 100644
--- a/charts/spark-operator-chart/values.yaml
+++ b/charts/spark-operator-chart/values.yaml
@@ -33,6 +33,8 @@ rbac:
   createRole: true
   # -- Create and use RBAC `ClusterRole` resources
   createClusterRole: true
+  # -- Optional annotations for rbac
+  annotations: {}
 serviceAccounts:
   spark:
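
With this change, chart users can attach custom annotations to the generated RBAC resources via their values file. A minimal sketch (the annotation key/value here is purely illustrative, not part of the chart):

```yaml
# values.yaml (user-supplied override)
rbac:
  createRole: true
  createClusterRole: true
  # Rendered into the Role/ClusterRole metadata.annotations,
  # alongside the existing helm.sh/hook annotations.
  annotations:
    example.com/owner: data-platform  # hypothetical annotation
```

Because the template uses `{{- with .Values.rbac.annotations }}`, the block is skipped entirely when `annotations` is left at its `{}` default, so existing installs render unchanged.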