K8SPSMDB-1213 Add arm64 support for the e2e tests #1735

Open · wants to merge 42 commits into base: main

Changes from 38 commits

Commits (42)
97a6f79
Merge pull request #1714 from percona/release-1.18.0-push-test
eleo007 Nov 11, 2024
dc3633e
support for running e2e tests against arm64 GKE nodes
ptankov Nov 11, 2024
0a03ce9
making cert-manager able to be installed on arm64 nodes (using helm)
ptankov Nov 12, 2024
1111fdc
use apply_client always so that client pod has proper tolerations applied
ptankov Nov 19, 2024
be2c942
update e2e test to use alpine/curl with tolerations for arm64 archite…
ptankov Nov 19, 2024
970942a
use apply_client always so that client pod has proper tolerations appl…
ptankov Nov 19, 2024
da6966d
use apply_client always so that client pod has proper tolerations appl…
ptankov Nov 19, 2024
09d5151
use apply_client always so that client pod has proper tolerations appl…
ptankov Nov 19, 2024
d49669e
update deploy_cmctl function to handle arm64 architecture tolerations
ptankov Nov 19, 2024
f56e773
use apply_client always so that client pod has proper tolerations appl…
ptankov Nov 20, 2024
b1a00f2
add tolerations for arm64 architecture in run_simple_cli_inside_image…
ptankov Nov 20, 2024
908eb0f
add tolerations for arm64 architecture in deploy_operator_gh and clea…
ptankov Nov 20, 2024
4c0c72d
add initial run-release-arm64 configuration with various options
ptankov Nov 20, 2024
e7eeeab
Merge branch 'release-1.18.0' into K8SPSMDB-1213
ptankov Nov 20, 2024
43c416b
amazon/aws-cli instead of perconalab/awscli
ptankov Nov 21, 2024
f9282dd
add support for arm64 architecture in various e2e tests and enhance b…
ptankov Nov 25, 2024
ee95241
for upgrade-consistency and upgrade-consistency-sharded-tls tests, we…
ptankov Nov 21, 2024
07aac83
remove hardcoded image references in configuration files for e2e tests
ptankov Nov 25, 2024
feaef62
Merge branch 'release-1.18.0' into K8SPSMDB-1213
ptankov Nov 25, 2024
49cfb0c
Merge branch 'main' into K8SPSMDB-1213
ptankov Nov 25, 2024
3d286ed
Refactor architecture checks and streamline cluster deletion process …
ptankov Nov 25, 2024
fa54637
Add architecture-specific tolerations for arm64 in OpenLDAP deployment
ptankov Nov 26, 2024
ce97657
Merge branch 'main' into K8SPSMDB-1213
ptankov Nov 26, 2024
061b459
Remove --no-hooks option from cert-manager Helm installation
ptankov Nov 26, 2024
fe10601
Refactor cert-manager deployment to use kubectl patch for arm64 toler…
ptankov Nov 27, 2024
35b3167
Update deploy_cmctl function to use service account for RBAC configur…
ptankov Nov 27, 2024
48ab905
Remove arm64 toleration from chaos pod failure configuration in self-…
ptankov Nov 27, 2024
b25251c
Set cluster variable for telemetry transfer function in e2e tests
ptankov Nov 28, 2024
bf9d74c
Committed by mistake. Removed.
ptankov Dec 11, 2024
74ba5cb
Merge branch 'main' into K8SPSMDB-1213
ptankov Dec 11, 2024
f78625c
returning commented out code, used for debugging purposes
ptankov Dec 11, 2024
4666fc9
Merge branch 'main' into K8SPSMDB-1213
ptankov Dec 11, 2024
dc76f30
refactor: replace hardcoded tolerations with variable for arm64 archi…
ptankov Dec 11, 2024
6bc5de5
Merge branch 'K8SPSMDB-1213' of github.com:percona/percona-server-mon…
ptankov Dec 11, 2024
592bbdc
refactor: simplify architecture check by removing quotes around varia…
ptankov Dec 11, 2024
15d6819
refactor: use regex for architecture check to improve readability and…
ptankov Dec 11, 2024
175f6d6
Merge branch 'main' into K8SPSMDB-1213
ptankov Dec 11, 2024
352cebd
Merge branch 'main' into K8SPSMDB-1213
ptankov Dec 18, 2024
13eb721
Merge branch 'main' into K8SPSMDB-1213
ptankov Dec 19, 2024
048db89
resolving the bug about applying secrets after creating cluster
ptankov Dec 19, 2024
55ff954
Merge branch 'main' into K8SPSMDB-1213
ptankov Dec 19, 2024
268b731
Merge branch 'main' into K8SPSMDB-1213
ptankov Dec 20, 2024
5 changes: 2 additions & 3 deletions e2e-tests/arbiter/run
@@ -74,9 +74,8 @@ main() {
deploy_cert_manager

desc 'create secrets and start client'
kubectl_bin apply \
-f $conf_dir/client.yml \
-f $conf_dir/secrets.yml
kubectl_bin apply -f $conf_dir/secrets.yml
apply_client $conf_dir/client.yml

desc 'check arbiter without service-per-pod'
check_cr_config "arbiter-rs0"
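
Note: apply_client is a new helper (defined in e2e-tests/functions, not shown in this diff) that replaces the plain kubectl apply of the client manifest. A minimal sketch of what it presumably does, assuming client.yml is a Deployment and that ARCH and TOLERATIONS_ARM64 are exported by the harness (both names appear in later hunks of this PR); this is an illustration, not the actual implementation:

# hypothetical sketch of the apply_client helper
apply_client() {
	local manifest="$1"

	if [[ $ARCH == "arm64" ]]; then
		# merge the shared arm64 tolerations so the client pod can be
		# scheduled on tainted arm64 node pools
		yq eval '(.spec.template.spec.tolerations += '"$TOLERATIONS_ARM64"')' "$manifest" \
			| kubectl_bin apply -f -
	else
		kubectl_bin apply -f "$manifest"
	fi
}
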
6 changes: 3 additions & 3 deletions e2e-tests/balancer/run
@@ -69,9 +69,9 @@ main() {

desc 'create first PSMDB cluster'
cluster="some-name"
kubectl_bin apply \
-f "$conf_dir/secrets.yml" \
-f "$conf_dir/client-70.yml"
kubectl_bin apply -f $conf_dir/secrets.yml
apply_client $conf_dir/client-70.yml

if version_gt "1.19" && [ $EKS -ne 1 ]; then
$sed 's/docker/runc/g' "$conf_dir/container-rc.yaml" | kubectl_bin apply -f -
2 changes: 1 addition & 1 deletion e2e-tests/conf/client.yml
@@ -15,7 +15,7 @@ spec:
terminationGracePeriodSeconds: 10
containers:
- name: psmdb-client
image: percona/percona-server-mongodb:4.4
image: percona/percona-server-mongodb:4.4-multi
imagePullPolicy: Always
command:
- sleep
2 changes: 1 addition & 1 deletion e2e-tests/conf/client_with_tls.yml
@@ -15,7 +15,7 @@ spec:
terminationGracePeriodSeconds: 10
containers:
- name: psmdb-client
image: percona/percona-server-mongodb:4.4
image: percona/percona-server-mongodb:4.4-multi
imagePullPolicy: Always
command: ["/bin/bash","-c","cat /etc/mongodb-ssl/tls.key /etc/mongodb-ssl/tls.crt > /tmp/tls.pem && sleep 100500"]
volumeMounts:
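
The switch from the 4.4 tag to 4.4-multi points the client at a multi-arch manifest that also provides an arm64 variant. One way to confirm which platforms a tag publishes (assuming Docker and jq are available locally):

# list the architectures included in the multi-arch manifest
docker manifest inspect percona/percona-server-mongodb:4.4-multi \
	| jq -r '.manifests[].platform | "\(.os)/\(.architecture)"'
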
8 changes: 3 additions & 5 deletions e2e-tests/cross-site-sharded/run
@@ -39,9 +39,8 @@ desc "create main cluster"
create_infra "$namespace"

desc 'create secrets and start client'
kubectl_bin apply \
-f "$conf_dir/client.yml" \
-f "$test_dir/conf/secrets.yml"
kubectl_bin apply -f $test_dir/conf/secrets.yml
apply_client $conf_dir/client.yml

desc "create main PSMDB cluster $main_cluster."
apply_cluster "$test_dir/conf/$main_cluster.yml"
@@ -112,8 +111,7 @@ create_namespace $replica_namespace 0
deploy_operator

desc 'start client'
kubectl_bin apply \
-f "$conf_dir/client.yml"
apply_client $conf_dir/client.yml

desc "copy secrets from main to replica namespace and create all of them"
kubectl get secret ${main_cluster}-secrets -o yaml -n ${namespace} \
4 changes: 2 additions & 2 deletions e2e-tests/custom-replset-name/conf/some-name.yml
@@ -6,7 +6,7 @@ spec:
crVersion: 1.18.0
backup:
enabled: true
image: percona/percona-backup-mongodb:2.0.4
image:
pitr:
enabled: false
serviceAccountName: percona-server-mongodb-operator
@@ -33,7 +33,7 @@ spec:
bucket: operator-testing
prefix: psmdb
endpointUrl: https://storage.googleapis.com
image: percona/percona-server-mongodb:4.4.10-11
image:
imagePullPolicy: Always
pmm:
enabled: false
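
With the hardcoded tags removed, the images are expected to come from the test harness rather than this config file. A hedged sketch of the kind of substitution the harness can perform before applying the CR; IMAGE_MONGOD and IMAGE_BACKUP are assumed variable names, not confirmed by this diff:

# fill the blank image fields from environment variables so the same CR
# works for both amd64 and arm64 runs
yq eval '.spec.image = strenv(IMAGE_MONGOD) |
	.spec.backup.image = strenv(IMAGE_BACKUP)' "$test_dir/conf/some-name.yml" \
	| kubectl_bin apply -f -
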
6 changes: 5 additions & 1 deletion e2e-tests/custom-replset-name/run
@@ -10,7 +10,11 @@ create_infra $namespace
apply_s3_storage_secrets
deploy_minio

kubectl_bin apply -f $conf_dir/secrets.yml -f $conf_dir/client.yml -f $conf_dir/minio-secret.yml
desc 'create secrets and start client'
kubectl_bin apply -f $conf_dir/secrets.yml
kubectl_bin apply -f $conf_dir/minio-secret.yml
apply_client $conf_dir/client.yml

cluster="some-name"

desc 'create first PSMDB cluster'
4 changes: 2 additions & 2 deletions e2e-tests/custom-tls/run
@@ -32,8 +32,8 @@ main() {
destroy_cert_manager || true # We need to be sure that we are getting certificates created by the operator, not by cert-manager

desc 'create secrets and start client'
kubectl_bin apply -f "$conf_dir/secrets.yml"
kubectl_bin apply -f "$conf_dir/client_with_tls.yml"
kubectl_bin apply -f $conf_dir/secrets.yml
apply_client $conf_dir/client_with_tls.yml

cluster="some-name"
desc "create first PSMDB cluster $cluster"
9 changes: 4 additions & 5 deletions e2e-tests/custom-users-roles-sharded/run
@@ -79,10 +79,9 @@ create_infra "$namespace"
mongosUri="userAdmin:userAdmin123456@$cluster-mongos.$namespace"

desc 'create secrets and start client'
kubectl_bin apply -f "${conf_dir}/client.yml" \
-f "${conf_dir}/secrets.yml" \
-f "${test_dir}/conf/app-user-secrets.yml"

kubectl_bin apply -f $conf_dir/secrets.yml
kubectl_bin apply -f $test_dir/conf/app-user-secrets.yml
apply_client $conf_dir/client.yml

apply_s3_storage_secrets
if version_gt "1.19" && [ $EKS -ne 1 ]; then
@@ -135,7 +134,7 @@ kubectl_bin patch psmdb ${cluster} --type=merge --patch '{
"key": "userTwoPassKey"
},
"roles": [
{"db":"admin","name":"userAdminAnyDatabase"},
{"db":"admin","name":"userAdminAnyDatabase"},
{"db":"admin","name":"clusterAdmin"}
]
}
8 changes: 4 additions & 4 deletions e2e-tests/custom-users-roles/run
@@ -68,9 +68,9 @@ cluster="some-name-rs0"
create_infra $namespace

desc 'create secrets and start client'
kubectl_bin apply -f "${conf_dir}/client.yml" \
-f "${conf_dir}/secrets.yml" \
-f "${test_dir}/conf/app-user-secrets.yml"
kubectl_bin apply -f $conf_dir/secrets.yml
kubectl_bin apply -f $test_dir/conf/app-user-secrets.yml
apply_client $conf_dir/client.yml

mongoUri="userAdmin:userAdmin123456@$cluster.$namespace"

@@ -107,7 +107,7 @@ kubectl_bin patch psmdb ${psmdb} --type=merge --patch '{
"key": "userTwoPassKey"
},
"roles": [
{"db":"admin","name":"userAdminAnyDatabase"},
{"db":"admin","name":"userAdminAnyDatabase"},
{"db":"admin","name":"clusterAdmin"}
]
}
8 changes: 3 additions & 5 deletions e2e-tests/data-at-rest-encryption/run
@@ -13,7 +13,8 @@ deploy_minio
apply_s3_storage_secrets

desc 'create secrets and start client'
kubectl_bin apply -f "$conf_dir/secrets.yml" -f "$conf_dir/client.yml"
kubectl_bin apply -f $conf_dir/secrets.yml
apply_client $conf_dir/client.yml

cluster='some-name'
desc "create PSMDB cluster $cluster"
@@ -57,10 +58,7 @@ sleep 5

desc "check backup and restore -- minio"
backup_dest_minio=$(get_backup_dest "$backup_name_minio")
retry 3 8 kubectl_bin run -i --rm aws-cli --image=perconalab/awscli --restart=Never -- \
/usr/bin/env AWS_ACCESS_KEY_ID=some-access-key AWS_SECRET_ACCESS_KEY=some-secret-key AWS_DEFAULT_REGION=us-east-1 \
/usr/bin/aws --endpoint-url http://minio-service:9000 s3 ls s3://${backup_dest_minio}/rs0/ \
| grep myApp.test.gz
retry 3 8 aws_cli "s3 ls s3://${backup_dest_minio}/rs0/" | grep "myApp.test.gz"
run_mongos 'use myApp\n db.test.insert({ x: 100501 })' "myApp:myPass@$cluster-mongos.$namespace"
compare_mongos_cmd "find" "myApp:myPass@$cluster-mongos.$namespace" "-2nd"
run_restore "$backup_name_minio"
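
aws_cli wraps the throwaway kubectl run pod that used to be spelled out inline. The helper itself lives outside this diff; the commit history suggests it now uses the multi-arch amazon/aws-cli image instead of perconalab/awscli. A rough sketch under those assumptions, with the arm64 toleration handling omitted for brevity:

# hypothetical sketch of the aws_cli wrapper; credentials match the minio
# deployment used by these e2e tests
aws_cli() {
	local cmd=$1

	kubectl_bin run -i --rm aws-cli \
		--image=amazon/aws-cli \
		--restart=Never \
		--env=AWS_ACCESS_KEY_ID=some-access-key \
		--env=AWS_SECRET_ACCESS_KEY=some-secret-key \
		--env=AWS_DEFAULT_REGION=us-east-1 \
		-- --endpoint-url http://minio-service:9000 $cmd
}
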
4 changes: 2 additions & 2 deletions e2e-tests/data-sharded/run
@@ -39,8 +39,8 @@ main() {
deploy_cert_manager

desc 'create secrets and start client'
kubectl_bin apply -f "$conf_dir/secrets.yml"
kubectl_bin apply -f "$conf_dir/client_with_tls.yml"
kubectl_bin apply -f $conf_dir/secrets.yml
apply_client $conf_dir/client_with_tls.yml

cluster="some-name"
desc "create first PSMDB cluster $cluster"
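
Several commits in this PR adjust cert-manager for arm64 (helm install, then a kubectl patch for tolerations). deploy_cert_manager itself is outside this diff; a hedged sketch of the patch step it likely performs on the standard cert-manager deployments:

# hypothetical sketch: add arm64 tolerations to the cert-manager deployments
# after installation, so their pods can run on tainted arm64 node pools
if [[ $ARCH == "arm64" ]]; then
	for deploy in cert-manager cert-manager-cainjector cert-manager-webhook; do
		kubectl_bin patch deployment $deploy -n cert-manager --type=json \
			-p='[{"op": "add", "path": "/spec/template/spec/tolerations", "value": '"$TOLERATIONS_ARM64"'}]'
	done
fi
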
45 changes: 39 additions & 6 deletions e2e-tests/default-cr/run
@@ -48,26 +48,46 @@ function main() {

desc 'create secrets and start client'
kubectl_bin apply -f $deploy_dir/secrets.yaml
kubectl_bin apply -f $conf_dir/client.yml
apply_client $conf_dir/client.yml

desc "create first PSMDB cluster $cluster"
kubectl_bin apply ${OPERATOR_NS:+-n $OPERATOR_NS} --server-side --force-conflicts -f $deploy_dir/crd.yaml


local temp_operator_yaml="$(mktemp)"

if [ -n "$OPERATOR_NS" ]; then
apply_rbac cw-rbac
kubectl_bin apply -n ${OPERATOR_NS} -f $deploy_dir/cw-operator.yaml
else
apply_rbac rbac
yq eval '((.. | select(.[] == "DISABLE_TELEMETRY")) |= .value="true")' "$deploy_dir/operator.yaml" \
| kubectl_bin apply -f -
yq eval '((.. | select(.[] == "DISABLE_TELEMETRY")) |= .value="true")' "$deploy_dir/operator.yaml" > $temp_operator_yaml

if [[ $ARCH == "arm64" ]]; then
yq eval -i '(.spec.template.spec.tolerations += '"$TOLERATIONS_ARM64"')' $temp_operator_yaml
fi

kubectl_bin apply -f $temp_operator_yaml
fi

local temp_cr="$(mktemp)"
yq eval '.spec.upgradeOptions.versionServiceEndpoint = "https://check-dev.percona.com" |
.spec.replsets[].affinity.antiAffinityTopologyKey = "none" |
.spec.replsets[].nonvoting.affinity.antiAffinityTopologyKey = "none" |
.spec.replsets[].arbiter.affinity.antiAffinityTopologyKey = "none" |
.spec.sharding.configsvrReplSet.affinity.antiAffinityTopologyKey = "none" |
.spec.sharding.mongos.affinity.antiAffinityTopologyKey = "none"' $deploy_dir/cr.yaml \
| kubectl_bin apply -f -
.spec.sharding.mongos.affinity.antiAffinityTopologyKey = "none"' $deploy_dir/cr.yaml > $temp_cr

if [[ $ARCH == "arm64" ]]; then
yq eval '.spec.replsets[].tolerations += '"$TOLERATIONS_ARM64"' |
(.spec | select(has("sharding"))).sharding.configsvrReplSet.tolerations += '"$TOLERATIONS_ARM64"' |
(.spec | select(has("sharding"))).sharding.mongos.tolerations += '"$TOLERATIONS_ARM64"' |
(.spec.replsets[] | select(has("arbiter"))).arbiter.tolerations += '"$TOLERATIONS_ARM64"' |
(.spec.replsets[] | select(has("nonvoting"))).nonvoting.tolerations += '"$TOLERATIONS_ARM64"'' $temp_cr |
kubectl_bin apply -f -
else
kubectl_bin apply -f $temp_cr
fi

desc 'check if all 3 Pods started'
wait_cluster_consistency $cluster 70
@@ -137,7 +157,20 @@ function main() {
cluster="minimal-cluster"
yq eval '.metadata.name = "'${cluster}'"' $deploy_dir/secrets.yaml | kubectl_bin apply -f -

yq eval '.spec.upgradeOptions.versionServiceEndpoint = "https://check-dev.percona.com"' $deploy_dir/cr-minimal.yaml | kubectl_bin apply -f -
local temp_cr_minimal="$(mktemp)"
yq eval '.spec.upgradeOptions.versionServiceEndpoint = "https://check-dev.percona.com"' $deploy_dir/cr-minimal.yaml > $temp_cr_minimal

if [[ $ARCH == "arm64" ]]; then
yq eval '.spec.replsets[].tolerations += '"$TOLERATIONS_ARM64"' |
(.spec | select(has("sharding"))).sharding.configsvrReplSet.tolerations += '"$TOLERATIONS_ARM64"' |
(.spec | select(has("sharding"))).sharding.mongos.tolerations += '"$TOLERATIONS_ARM64"' |
(.spec.replsets[] | select(has("arbiter"))).arbiter.tolerations += '"$TOLERATIONS_ARM64"' |
(.spec.replsets[] | select(has("nonvoting"))).nonvoting.tolerations += '"$TOLERATIONS_ARM64"'' $temp_cr_minimal |
kubectl_bin apply -f -
else
kubectl_bin apply -f $temp_cr_minimal
fi

desc 'check if all Pods started'
wait_cluster_consistency "${cluster}"

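
ARCH and TOLERATIONS_ARM64 are harness-level variables referenced by the arm64 branches above; their definitions are not part of this diff. A plausible shape, assuming GKE arm64 node pools tainted with kubernetes.io/arch=arm64:NoSchedule (the exact taint key and values are an assumption):

# detect the node architecture once and define the tolerations as a JSON array,
# so it can be spliced directly into yq expressions like the ones above
ARCH=${ARCH:-$(kubectl_bin get nodes -o jsonpath='{.items[0].status.nodeInfo.architecture}')}

TOLERATIONS_ARM64='[{"key": "kubernetes.io/arch", "operator": "Equal", "value": "arm64", "effect": "NoSchedule"}]'
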
5 changes: 2 additions & 3 deletions e2e-tests/demand-backup-eks-credentials/run
@@ -14,9 +14,8 @@ fi
create_infra $namespace

desc 'create secrets and start client'
kubectl_bin apply \
-f "$conf_dir/secrets.yml" \
-f "$conf_dir/client.yml"
kubectl_bin apply -f $conf_dir/secrets.yml
apply_client $conf_dir/client.yml

cluster="some-name-rs0"
desc "create first PSMDB cluster $cluster"
10 changes: 6 additions & 4 deletions e2e-tests/demand-backup-physical-sharded/run
@@ -66,11 +66,13 @@ apply_s3_storage_secrets
### Case 1: Backup and restore on sharded cluster
desc 'Testing on sharded cluster'

echo "Creating PSMDB cluster"
desc 'create secrets and start client'
kubectl_bin apply -f $conf_dir/secrets.yml
apply_client $conf_dir/client_with_tls.yml

cluster="some-name"
kubectl_bin apply -f "${conf_dir}/secrets.yml"
apply_cluster "${test_dir}/conf/${cluster}-sharded.yml"
kubectl_bin apply -f "${conf_dir}/client_with_tls.yml"
desc "create first PSMDB cluster $cluster"
apply_cluster $test_dir/conf/$cluster-sharded.yml

echo "check if all pods started"
wait_for_running ${cluster}-rs0 3
8 changes: 5 additions & 3 deletions e2e-tests/demand-backup-physical/run
@@ -58,11 +58,13 @@ apply_s3_storage_secrets

desc 'Testing on not sharded cluster'

desc 'create secrets and start client'
kubectl_bin apply -f $test_dir/conf/secrets.yml
apply_client $conf_dir/client_with_tls.yml

echo "Creating PSMDB cluster"
cluster="some-name"
kubectl_bin apply -f "${test_dir}/conf/secrets.yml"
apply_cluster "${test_dir}/conf/${cluster}.yml"
kubectl_bin apply -f "${conf_dir}/client_with_tls.yml"
apply_cluster $test_dir/conf/$cluster.yml

echo "check if all pods started"
wait_for_running ${cluster}-rs0 3
31 changes: 18 additions & 13 deletions e2e-tests/demand-backup-sharded/run
@@ -19,11 +19,9 @@ create_infra "$namespace"

deploy_minio

desc 'create first PSMDB cluster'
cluster="some-name"
kubectl_bin apply \
-f "$conf_dir/secrets.yml" \
-f "$conf_dir/client.yml"
desc 'create secrets and start client'
kubectl_bin apply -f $conf_dir/secrets.yml
apply_client $conf_dir/client.yml

apply_s3_storage_secrets
if version_gt "1.19" && [ $EKS -ne 1 ]; then
@@ -34,6 +32,8 @@ else
kubectl_bin apply -f "$conf_dir/container-rc.yaml"
fi

desc 'create first PSMDB cluster'
cluster="some-name"
apply_cluster "$test_dir/conf/$cluster-rs0.yml"
desc 'check if all 3 Pods started'
wait_for_running $cluster-rs0 3
@@ -146,10 +146,18 @@ fi

desc 'check backup and restore -- minio'
backup_dest_minio=$(get_backup_dest "$backup_name_minio")
kubectl_bin run -i --rm aws-cli --image=perconalab/awscli --restart=Never -- \
/usr/bin/env AWS_ACCESS_KEY_ID=some-access-key AWS_SECRET_ACCESS_KEY=some-secret-key AWS_DEFAULT_REGION=us-east-1 \
/usr/bin/aws --endpoint-url http://minio-service:9000 s3 ls "s3://${backup_dest_minio}/rs0/" \
| grep "myApp.test.gz"

retry=0
until aws_cli "s3 ls s3://$backup_dest_minio/rs0/" | grep "myApp.test.gz"; do
if [[ $retry -ge 10 ]]; then
echo "Max retry count $retry reached. File myApp.test.gz wasn't found on s3://$backup_dest_minio/rs0/"
exit 1
fi
((retry += 1))
echo -n .
sleep 5
done

insert_data_mongos "100501" "myApp"
insert_data_mongos "100501" "myApp1"
insert_data_mongos "100501" "myApp2"
@@ -161,10 +169,7 @@ check_data
desc 'delete backup and check if it is removed from bucket -- minio'
kubectl_bin delete psmdb-backup --all

backup_exists=$(kubectl_bin run -i --rm aws-cli --image=perconalab/awscli --restart=Never -- \
/usr/bin/env AWS_ACCESS_KEY_ID=some-access-key AWS_SECRET_ACCESS_KEY=some-secret-key AWS_DEFAULT_REGION=us-east-1 \
/usr/bin/aws --endpoint-url http://minio-service:9000 s3 ls s3://operator-testing/ \
| grep -c ${backup_dest_minio}_ | cat)
backup_exists=$(aws_cli "s3 ls s3://operator-testing/" | grep -c ${backup_dest_minio}_ | cat)
if [[ $backup_exists -eq 1 ]]; then
echo "Backup was not removed from bucket -- minio"
exit 1
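
The inline polling loop above could be factored into a reusable helper if more tests need to wait for an object to appear in the bucket; a sketch only, not part of this PR:

# poll the bucket until the expected object shows up or the retry budget runs out
wait_for_s3_object() {
	local path="$1" pattern="$2" max_retries="${3:-10}" retry=0

	until aws_cli "s3 ls $path" | grep "$pattern"; do
		if [[ $retry -ge $max_retries ]]; then
			echo "Max retry count $max_retries reached. $pattern wasn't found on $path"
			exit 1
		fi
		((retry += 1))
		echo -n .
		sleep 5
	done
}

# usage: wait_for_s3_object "s3://$backup_dest_minio/rs0/" "myApp.test.gz"
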