Merge branch 'main' into add-optional-securityContext
tplavcic authored Aug 4, 2023
2 parents 54a99b9 + 87d260e commit 3a2f3e2
Showing 20 changed files with 703 additions and 342 deletions.
4 changes: 2 additions & 2 deletions charts/pg-db/Chart.yaml
@@ -1,8 +1,8 @@
 apiVersion: v2
 name: pg-db
-description: 'A Helm chart for Deploying the Percona PostgreSQL database by Percona Distribution for PostgreSQL Operator'
+description: 'A Helm chart to deploy the PostgreSQL database with the Percona Operator for PostgreSQL'
 type: application
-version: 2.2.0
+version: 2.2.3
 appVersion: 2.2.0
 home: https://docs.percona.com/percona-operator-for-postgresql/2.0/
 maintainers:
46 changes: 27 additions & 19 deletions charts/pg-db/README.md
@@ -6,7 +6,7 @@ Useful links:
 - [Operator Documentation](https://www.percona.com/doc/kubernetes-operator-for-postgresql/index.html)

 ## Pre-requisites
-* [Percona Operator for PostgreSQL](https://hub.helm.sh/charts/percona/pg-operator) running in you Kubernetes cluster. See installation details [here](https://github.com/percona/percona-helm-charts/tree/main/charts/pg-operator) or in the [Operator Documentation](https://www.percona.com/doc/kubernetes-operator-for-postgresql/helm.html).
+* [Percona Operator for PostgreSQL](https://hub.helm.sh/charts/percona/pg-operator) running in your Kubernetes cluster. See installation details [here](https://github.com/percona/percona-helm-charts/tree/main/charts/pg-operator) or in the [Operator Documentation](https://www.percona.com/doc/kubernetes-operator-for-postgresql/helm.html).
 * Kubernetes 1.22+
 * At least `v3.2.3` version of helm

@@ -157,31 +157,39 @@ Specify parameters using `--set key=value[,key=value]` argument to `helm install`
 Notice that you can use multiple replica sets only with sharding enabled.

 ## Examples
-This is great one for a dev Percona Distribution for PostgreSQL cluster as it doesn't bother with backups.
+
+### Deploy for tests - single PostgreSQL node and automated PVCs deletion
+
+Such a setup is good for testing, as it does not require a lot of compute power
+and performs and automated clean up of the Persistent Volume Claims (PVCs).
+It also deploys just one pgBouncer node, instead of 3.
 ```bash
-$ helm install dev --namespace pgdb .
-NAME: dev
-LAST DEPLOYED:
-NAMESPACE: pgdb
-STATUS: deployed
-REVISION: 1
-TEST SUITE: None
-NOTES:
+$ helm install my-test percona/pg-db \
+  --set instances[0].name=test \
+  --set instances[0].replicas=1 \
+  --set instances[0].dataVolumeClaimSpec.resources.requests.storage=1Gi \
+  --set proxy.pgBouncer.replicas=1 \
+  --set finalizers={'percona\.com\/delete-pvc,percona\.com\/delete-ssl'}
 ```

-You can start up the cluster with only S3 backup storage like this
+### Expose pgBouncer with a Load Balancer
+
+Expose the cluster's pgBouncer with a LoadBalancer:

 ```bash
-$ helm install dev --namespace pgdb . \
-  --set backup.repos.repo1.s3.bucket=my-s3-bucket \
-  --set backup.repos.repo1.s3.endpoint='s3.amazonaws.com' \
-  --set backup.repos.repo1.s3.region='us-east-1' \
+$ helm install my-test percona/pg-db \
+  --set proxy.pgBouncer.expose.type=LoadBalancer
 ```

-GCS and local backup storages:
+### Add a custom user and a database
+
+The following command is going to deploy the cluster with the user `test`
+and give it access to the database `mytest`:

 ```bash
-$ helm install dev --namespace pgdb . \
-  --set backup.repos.repo2.gcs.bucket=my-gcs-bucket
-```
+$ helm install my-test percona/pg-db \
+  --set users[0].name=test \
+  --set users[0].databases={mytest}
+```
+
+Read more about custom users in our [documentation](https://docs.percona.com/percona-operator-for-postgresql/2.0/users.html)
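The `--set` flags in the test-deployment example above map one-to-one onto a values file, which avoids the shell escaping of the dotted finalizer names. A sketch (key paths inferred from the flags shown; the file name and release name are placeholders):

```yaml
# values-test.yaml -- assumed equivalent of the --set flags above
instances:
  - name: test
    replicas: 1
    dataVolumeClaimSpec:
      resources:
        requests:
          storage: 1Gi
proxy:
  pgBouncer:
    replicas: 1
finalizers:
  - percona.com/delete-pvc
  - percona.com/delete-ssl
```

It would then be installed with something like `helm install my-test percona/pg-db -f values-test.yaml`.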
13 changes: 9 additions & 4 deletions charts/pg-db/templates/NOTES.txt
@@ -21,10 +21,15 @@ Join Percona Squad! Get early access to new product features, invite-only ”ask
 >>> https://percona.com/k8s <<<

 To get a PostgreSQL prompt inside your new cluster you can run:
+{{ $clusterName := include "pg-database.fullname" . }}
+{{- if .Values.users }}
+{{ $users := .Values.users }} {{ $firstUser := first $users }} {{ $userName := $firstUser.name }}
+PGBOUNCER_URI=$(kubectl -n {{ .Release.Namespace }} get secrets {{ $clusterName }}-pguser-{{ $userName }} -o jsonpath="{.data.pgbouncer-uri}" | base64 --decode)
+{{- else }}
+PGBOUNCER_URI=$(kubectl -n {{ .Release.Namespace }} get secrets {{ $clusterName }}-pguser-{{ $clusterName }} -o jsonpath="{.data.pgbouncer-uri}" | base64 --decode)
+{{- end }}

-POSTGRES_USER=$(kubectl -n {{ .Release.Namespace }} get secrets {{ include "pg-database.fullname" . }}-{{ .Values.defaultUser }}-secret -o jsonpath="{.data.username}" | base64 --decode)
-POSTGRES_PASSWORD=$(kubectl -n {{ .Release.Namespace }} get secrets {{ include "pg-database.fullname" . }}-{{ .Values.defaultUser }}-secret -o jsonpath="{.data.password}" | base64 --decode)
+And then connect to a cluster with a temporary Pod:

-And then
 $ kubectl run -i --rm --tty percona-client --image=perconalab/percona-distribution-postgresql:15 --restart=Never \
-  -- psql "postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@{{ include "pg-database.fullname" . }}-pgbouncer.{{ .Release.Namespace }}.svc.cluster.local/{{ .Values.defaultDatabase }}"
+  -- psql $PGBOUNCER_URI
144 changes: 61 additions & 83 deletions charts/pg-db/templates/cluster.yaml
@@ -11,6 +11,8 @@ metadata:
   pgo-version: {{ .Chart.AppVersion }}
   pgouser: admin
 {{ include "pg-database.labels" . | indent 4 }}
+  finalizers:
+{{ .Values.finalizers | toYaml | indent 4 }}
   name: {{ include "pg-database.fullname" . }}
 spec:
   crVersion: {{ .Values.crVersion}}
@@ -20,7 +22,6 @@ spec:
   postgresVersion: {{ .Values.postgresVersion}}
   standby:
     enabled: {{ .Values.standby.enabled }}
-  pause: {{ .Values.pause }}
 {{- if or (.Values.customTLSSecret.name) (.Values.customReplicationTLSSecret.name) }}
   secrets:
 {{- if .Values.customTLSSecret.name }}
@@ -33,22 +34,28 @@
 {{- end }}
 {{- end }}

-{{- if .Values.openshift }}
-  openshift: .Values.openshift
-{{- end }}
+  openshift: {{ default false .Values.openshift }}

 {{- if .Values.users }}
   users:
 {{- range $user := .Values.users }}
-  - name: {{ .Values.users.name }}
+  - name: {{ $user.name }}
+    {{- if $user.databases }}
     databases:
-    {{- range $database := $user.databeses}}
-      - $database
+    {{- range $database := $user.databases }}
+      - {{ $database }}
     {{- end }}
+    {{- end }}
+    {{- if $user.options }}
+    options: {{ $user.options }}
+    {{- end }}
-    options: {{ .Values.users.options }}
+    {{- if $user.password }}
     password:
-      type: {{ .Values.users.password.type }}
-      secretName: {{ .Values.users.secretName }}
+      type: {{ $user.password.type }}
+    {{- end }}
+    {{- if $user.secretName }}
+    secretName: {{ $user.secretName }}
+    {{- end }}
 {{- end }}
 {{- end }}

@@ -90,16 +97,11 @@ spec:
   expose:
     type: {{ .Values.expose.type }}
     annotations:
-    {{- range $annotation := .Values.expose.annotations}}
-      $annotation
-    {{- end }}
+{{ .Values.expose.annotations | toYaml | indent 6 }}
     labels:
-    {{- range $label := .Values.expose.labels}}
-      $labels
-    {{- end }}
+{{ .Values.expose.labels | toYaml | indent 6 }}
 {{- end }}

-
   instances:
 {{- range $instance := .Values.instances }}
   - name: {{ $instance.name }}
@@ -119,14 +121,14 @@ spec:
 {{- end }}
 {{- if $instance.topologySpreadConstraints }}
     topologySpreadConstraints:
-    - maxSkew: {{ $instance.topologySpreadConstraints.maxSkew}}
-      topologyKey: {{ $instance.topologySpreadConstraints.topologyKey }}
-      whenUnsatisfiable: {{ $instance.topologySpreadConstraints.whenUnsatisfiable}}
-      labelSelector:
-        matchLabels:
-        {{- range $label := $instance.topologySpreadConstraints.labelSelector.matchLabels}}
-          $label
-        {{- end }}
+    {{- range $constraint := $instance.topologySpreadConstraints }}
+    - maxSkew: {{ $constraint.maxSkew }}
+      topologyKey: {{ $constraint.topologyKey }}
+      whenUnsatisfiable: {{ $constraint.whenUnsatisfiable }}
+      labelSelector:
+        matchLabels:
+{{ $constraint.labelSelector.matchLabels | toYaml | indent 6 }}
+    {{- end }}
 {{- end }}

 {{- if $instance.tolerations }}
@@ -179,13 +181,9 @@ spec:
       expose:
         type: {{ .Values.proxy.pgBouncer.expose.type }}
         annotations:
-        {{- range $annotation := .Values.proxy.pgBouncer.expose.annotations}}
-          $annotation
-        {{- end }}
+{{ .Values.proxy.pgBouncer.expose.annotations | toYaml | indent 10 }}
         labels:
-        {{- range $label := .Values.proxy.pgBouncer.expose.labels}}
-          $labels
-        {{- end }}
+{{.Values.proxy.pgBouncer.expose.labels | toYaml | indent 10 }}
 {{- end }}
 {{- if .Values.proxy.pgBouncer.sidecars }}
       sidecars:
@@ -201,35 +199,30 @@ spec:
 {{- if .Values.proxy.pgBouncer.config }}
       config:
         global:
-        {{- range $setting := .Values.proxy.pgBouncer.config.global }}
-          $setting
-        {{- end }}
+{{ .Values.proxy.pgBouncer.config.global | toYaml | indent 10 }}
 {{- end }}

 {{- if .Values.proxy.pgBouncer.topologySpreadConstraints }}
       topologySpreadConstraints:
-      - maxSkew: {{.Values.proxy.pgBouncer.topologySpreadConstraints.maxSkew}}
-        topologyKey: {{ .Values.proxy.pgBouncer.topologySpreadConstraints.topologyKey }}
-        whenUnsatisfiable: {{ .Values.proxy.pgBouncer.topologySpreadConstraints.whenUnsatisfiable}}
-        labelSelector:
-          matchLabels:
-          {{- range $label := .Values.proxy.pgBouncer.topologySpreadConstraints.labelSelector.matchLabels}}
-            $label
-          {{- end }}
+      {{- range $constraint := .Values.proxy.pgBouncer.topologySpreadConstraints }}
+      - maxSkew: {{ $constraint.maxSkew }}
+        topologyKey: {{ $constraint.topologyKey }}
+        whenUnsatisfiable: {{ $constraint.whenUnsatisfiable }}
+        labelSelector:
+          matchLabels:
+{{ $constraint.labelSelector.matchLabels | toYaml | indent 6 }}
+      {{- end }}
 {{- end }}

 {{- if .Values.proxy.pgBouncer.affinity }}
       affinity:
         podAntiAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
-          - weight: {{ .Values.proxy.pgBouncer.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.weight }}
-            podAffinityTerm:
-              labelSelector:
-                matchLabels:
-                {{- range $label := .Values.proxy.pgBouncer.affinity.podAntiAffinity.podAffinityTerm.labelSelector.matchLabels }}
-                  $label
-                {{- end }}
-              topologyKey: {{ .Values.proxy.pgBouncer.affinity.podAntiAffinity.podAffinityTerm.topologyKey }}
+          - weight: {{ .Values.proxy.pgBouncer.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.weight }}
+            podAffinityTerm:
+              labelSelector:
+                matchLabels:
+{{ .Values.proxy.pgBouncer.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.labelSelector.matchLabels | toYaml | indent 18 }}
+              topologyKey: {{ .Values.proxy.pgBouncer.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.topologyKey }}
 {{- end }}
 {{- if .Values.proxy.pgBouncer.tolerations }}
       tolerations:
@@ -270,36 +263,32 @@ spec:
 {{- end }}
 {{- if .Values.backups.pgbackrest.global }}
     global:
-    {{- range $setting := .Values.backups.pgbackrest.global }}
-      $setting
-    {{- end }}
+{{ .Values.backups.pgbackrest.global | toYaml | indent 8 }}
 {{- end }}
 {{- if .Values.backups.pgbackrest.repoHost }}
     repoHost:
       priorityClassName: {{ .Values.backups.pgbackrest.repoHost.priorityClassName }}
 {{- if .Values.backups.pgbackrest.repoHost.topologySpreadConstraints }}
      topologySpreadConstraints:
-      - maxSkew: {{.Values.backups.pgbackrest.repoHost.topologySpreadConstraints.maxSkew}}
-        topologyKey: {{ .Values.backups.pgbackrest.repoHost.topologySpreadConstraints.topologyKey }}
-        whenUnsatisfiable: {{ .Values.backups.pgbackrest.repoHost.topologySpreadConstraints.whenUnsatisfiable }}
-        labelSelector:
-          matchLabels:
-          {{- range $label := .Values.backups.pgbackrest.repoHost.topologySpreadConstraints.labelSelector.matchLabels }}
-            $label
-          {{- end }}
+      {{- range $constraint := .Values.backups.pgbackrest.repoHost.topologySpreadConstraints }}
+      - maxSkew: {{ $constraint.maxSkew }}
+        topologyKey: {{ $constraint.topologyKey }}
+        whenUnsatisfiable: {{ $constraint.whenUnsatisfiable }}
+        labelSelector:
+          matchLabels:
+{{ $constraint.labelSelector.matchLabels | toYaml | indent 6 }}
+      {{- end }}
 {{- end }}
 {{- if .Values.backups.pgbackrest.repoHost.affinity }}
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
-          - weight: {{ .Values.backups.pgbackrest.repoHost.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.weight }}
-            podAffinityTerm:
-              labelSelector:
-                matchLabels:
-                {{- range $label := .Values.backups.pgbackrest.repoHost.affinity.podAntiAffinity.podAffinityTerm.labelSelector.matchLabels }}
-                  $label
-                {{- end }}
-              topologyKey: {{ .Values.backups.pgbackrest.repoHost.affinity.podAntiAffinity.podAffinityTerm.topologyKey }}
+          - weight: {{ .Values.backups.pgbackrest.repoHost.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.weight }}
+            podAffinityTerm:
+              labelSelector:
+                matchLabels:
+{{ .Values.backups.pgbackrest.repoHost.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.labelSelector.matchLabels | toYaml | indent 18 }}
+              topologyKey: {{ .Values.backups.pgbackrest.repoHost.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.topologyKey }}
 {{- end }}
 {{- end }}
     manual:
@@ -335,18 +324,7 @@ spec:
         container: {{ $repo.azure.container}}
 {{- end }}
 {{- end }}
-{{- if .Values.backups.pgbackrest.restore }}
-    restore:
-      enabled: {{ .Values.backups.pgbackrest.restore.enabled }}
-      repoName: {{ .Values.backups.pgbackrest.restore.repoName }}
-{{- if .Values.backups.pgbackrest.options }}
-      options:
-      {{- range $option := .Values.backups.pgbackrest.options }}
-      # PITR restore in place
-        - $option
-      {{- end }}
-{{- end }}
-{{- end }}


 {{- if .Values.patroni}}
   patroni:
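The template changes above switch `topologySpreadConstraints` from reading a single object to iterating over a list, so the matching values entry becomes a list of constraint objects. A sketch of what such an entry might look like (the label key/value and constraint settings are placeholders, not taken from the chart):

```yaml
# Hypothetical values fragment for the list-based topologySpreadConstraints
proxy:
  pgBouncer:
    topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app.kubernetes.io/name: pg-db
```

The same list shape would apply to `instances[].topologySpreadConstraints` and `backups.pgbackrest.repoHost.topologySpreadConstraints`, since all three now use the same `range` loop.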
11 changes: 1 addition & 10 deletions charts/pg-db/values.yaml
@@ -244,21 +244,12 @@ backups:
 # azure:
 # container: "<YOUR_AZURE_CONTAINER>"
 #
-# restore:
-# enabled: true
-# repoName: repo1
-# options:
-# PITR restore in place
-# - --type=time
-# - --target="2021-06-09 14:15:11-04"
-# restore individual databases
-# - --db-include=hippo

 pmm:
   enabled: false
   image:
     repository: percona/pmm-client
-    tag: 2.37.0
+    tag: 2.38.0
   # imagePullPolicy: IfNotPresent
   secret: cluster1-pmm-secret
   serverHost: monitoring-service
4 changes: 2 additions & 2 deletions charts/pg-operator/Chart.yaml
@@ -1,8 +1,8 @@
 apiVersion: v2
 name: pg-operator
-description: 'A Helm chart to deploy the v2 version of Percona Operator for PostgreSQL.'
+description: 'A Helm chart to deploy the Percona Operator for PostgreSQL'
 type: application
-version: 2.2.0
+version: 2.2.2
 appVersion: 2.2.0
 home: https://docs.percona.com/percona-operator-for-postgresql/2.0/
 maintainers:
6 changes: 6 additions & 0 deletions charts/pg-operator/templates/role.yaml
@@ -123,6 +123,12 @@ rules:
   - patch
   - update
   - watch
+- apiGroups:
+  - pgv2.percona.com
+  resources:
+  - perconapgclusters/finalizers
+  verbs:
+  - update
 - apiGroups:
   - policy
   resources:
