
Commit

Fixes #1002, updating the EC2 workerpool type for ROSA HCP cluster nodes (#1003)

Signed-off-by: Kamesh Akella <kamesh.asp@gmail.com>
kami619 authored Oct 8, 2024
1 parent 9c46388 commit 3d270de
Showing 8 changed files with 13 additions and 11 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/README.md
@@ -14,7 +14,7 @@
2. Click on Run workflow button
3. Fill in the form and click on Run workflow button
1. Name of the cluster - the name of the cluster that will be later used for other workflows. Default value is `gh-${{ github.repository_owner }}`, this results in `gh-<owner of fork>`.
- 2. Instance type for compute nodes - see [AWS EC2 instance types](https://aws.amazon.com/ec2/instance-types/). Default value is `m5.xlarge`.
+ 2. Instance type for compute nodes - see [AWS EC2 instance types](https://aws.amazon.com/ec2/instance-types/). Default value is `m5.2xlarge`.
3. Deploy to multiple availability zones in the region - if checked, the cluster will be deployed to multiple availability zones in the region. Default value is `false`.
4. Number of worker nodes to provision - number of compute nodes in the cluster. Default value is `2`.
4. Wait for the workflow to finish.
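For reference, the workflow described above can also be triggered from the GitHub CLI. This is only a sketch: `clusterName` and `computeMachineType` are inputs defined in `rosa-cluster-create.yml` later in this commit, while the remaining inputs and the example values are assumptions.

[source,bash]
----
# Sketch: start the ROSA cluster creation workflow with an explicit compute instance type.
# Other inputs (availability zones, number of worker nodes, ...) are omitted here.
gh workflow run rosa-cluster-create.yml \
  -f clusterName=gh-my-fork \
  -f computeMachineType=m5.2xlarge
----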
2 changes: 1 addition & 1 deletion .github/workflows/keycloak-scalability-benchmark.yml
@@ -93,7 +93,7 @@ jobs:
- name: Allow cluster to scale
if: ${{ !inputs.skipCreateDeployment }}
- run: rosa edit machinepool -c ${{ inputs.clusterName }} --min-replicas 3 --autorepair scaling
+ run: rosa edit machinepool -c ${{ inputs.clusterName }} --min-replicas 4 --autorepair scaling

- name: Create Keycloak deployment
if: ${{ !inputs.skipCreateDeployment }}
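After this step, the effect of the new minimum can be checked with the ROSA CLI; a sketch, using the same `scaling` machine pool name as in the workflow above:

[source,bash]
----
# Sketch: list the machine pools of the cluster and confirm the replica range of the "scaling" pool.
rosa list machinepools -c <clusterName>
----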
2 changes: 2 additions & 0 deletions .github/workflows/rosa-cluster-create.yml
@@ -11,6 +11,7 @@ on:
type: string
computeMachineType:
description: 'Instance type for the compute nodes'
+ default: 'm5.2xlarge'
type: string
availabilityZones:
description: 'Availability zones to deploy to'
@@ -34,6 +35,7 @@ on:
default: 10.0.0.0/24
computeMachineType:
description: 'Instance type for the compute nodes'
+ default: 'm5.2xlarge'
type: string
availabilityZones:
description: 'Availability zones to deploy to'
4 changes: 2 additions & 2 deletions .github/workflows/rosa-multi-az-cluster-create.yml
@@ -141,8 +141,8 @@ jobs:

- name: Scale ROSA clusters
run: |
- rosa edit machinepool -c ${{ env.CLUSTER_PREFIX }}-a --min-replicas 3 --max-replicas 10 --autorepair scaling
- rosa edit machinepool -c ${{ env.CLUSTER_PREFIX }}-b --min-replicas 3 --max-replicas 10 --autorepair scaling
+ rosa edit machinepool -c ${{ env.CLUSTER_PREFIX }}-a --min-replicas 4 --max-replicas 15 --autorepair scaling
+ rosa edit machinepool -c ${{ env.CLUSTER_PREFIX }}-b --min-replicas 4 --max-replicas 15 --autorepair scaling
- name: Setup Go Task
uses: ./.github/actions/task-setup
4 changes: 2 additions & 2 deletions (documentation file; path not shown in this view)
@@ -14,7 +14,7 @@ Collecting the CPU usage for refreshing a token is currently performed manually
This setup is run https://github.com/keycloak/keycloak-benchmark/blob/main/.github/workflows/rosa-cluster-auto-provision-on-schedule.yml[daily on a GitHub action schedule]:

* OpenShift 4.15.x deployed on AWS via ROSA with two AWS availability zones in one AWS region.
- * Machinepool with `m5.4xlarge` instances.
+ * Machinepool with `m5.2xlarge` instances.
* Keycloak 25 release candidate build deployed with Operator and 3 pods in each site as an active/passive setup, and Infinispan connecting the two sites.
* Default user password hashing with Argon2 and 5 hash iterations and minimum memory size 7 MiB https://cheatsheetseries.owasp.org/cheatsheets/Password_Storage_Cheat_Sheet.html#argon2id[as recommended by OWASP].
* Database seeded with 100,000 users and 100,000 clients.
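For illustration, such a password policy can be set through the Keycloak admin CLI. This is a sketch only: the realm name is a placeholder, and the 7 MiB Argon2 memory size is configured separately as a provider option, not through the password policy string.

[source,bash]
----
# Sketch: enforce Argon2 with 5 hash iterations as the realm password policy.
./kcadm.sh update realms/test-realm \
  -s 'passwordPolicy=hashAlgorithm(argon2) and hashIterations(5)'
----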
@@ -53,7 +53,7 @@ Each test ran for 10 minutes.
+
[source,bash,subs="+quotes"]
----
- rosa edit machinepool -c **<clustername>** --min-replicas 3 --autorepair scaling
+ rosa edit machinepool -c **<clustername>** --min-replicas 4 --autorepair scaling
----
. Deploy Keycloak and Monitoring
+
6 changes: 3 additions & 3 deletions (documentation file; path not shown in this view)
@@ -36,7 +36,7 @@ After the installation process is finished, it creates a new admin user.
CLUSTER_NAME=rosa-kcb
VERSION=4.13.8
REGION=eu-central-1
- COMPUTE_MACHINE_TYPE=m5.xlarge
+ COMPUTE_MACHINE_TYPE=m5.2xlarge
MULTI_AZ=false
REPLICAS=3
----
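A minimal sketch of how these settings are typically passed to the provisioning script in this repository, assuming they are exported as environment variables before running `provision/aws/rosa_create_cluster.sh` (the exact entry point may differ):

[source,bash]
----
# Sketch: export the cluster settings and run the ROSA create script.
# CLUSTER_NAME and COMPUTE_MACHINE_TYPE are read by rosa_create_cluster.sh (see its diff below).
export CLUSTER_NAME=rosa-kcb
export VERSION=4.13.8
export REGION=eu-central-1
export COMPUTE_MACHINE_TYPE=m5.2xlarge
export MULTI_AZ=false
export REPLICAS=3

cd provision/aws
./rosa_create_cluster.sh
----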
@@ -85,14 +85,14 @@ The above installation script creates an admin user automatically but in case th
== Scaling the cluster's nodes on demand

The standard setup of nodes might be too small for running a load test; at the same time, using a different instance type and rebuilding the cluster takes a lot of time (about 45 minutes).
- To scale the cluster on demand, the standard setup has a machine pool named `scaling` with instances of type `m5.4xlarge` which is auto-scaled based on the current demand from 0 to 10 instances.
+ To scale the cluster on demand, the standard setup has a machine pool named `scaling` with instances of type `m5.2xlarge` which is auto-scaled based on the current demand from 4 to 15 instances.
However, auto-scaling of worker nodes is quite time-consuming as nodes are scaled one by one.

To scale the machine pool faster, issue a command like the following, and the additional nodes will be spawned at the same time:

[source,bash,subs=+quotes]
----
- rosa edit machinepool -c _**clustername**_ --min-replicas 3 --autorepair scaling
+ rosa edit machinepool -c _**clustername**_ --min-replicas 4 --autorepair scaling
----

To allow scaling the machine pool back to 0, use a command like the following:
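The command itself is collapsed in this view; based on the scale-up command above it presumably looks roughly like this (a sketch, not necessarily the exact line from the file):

[source,bash]
----
# Sketch: let the autoscaler remove all nodes from the "scaling" pool again.
rosa edit machinepool -c <clustername> --min-replicas 0 --autorepair scaling
----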
2 changes: 1 addition & 1 deletion provision/aws/rosa_create_cluster.sh
@@ -63,7 +63,7 @@ fi

SCALING_MACHINE_POOL=$(rosa list machinepools -c "${CLUSTER_NAME}" -o json | jq -r '.[] | select(.id == "scaling") | .id')
if [[ "${SCALING_MACHINE_POOL}" != "scaling" ]]; then
- rosa create machinepool -c "${CLUSTER_NAME}" --instance-type m5.4xlarge --max-replicas 10 --min-replicas 1 --name scaling --enable-autoscaling --autorepair
+ rosa create machinepool -c "${CLUSTER_NAME}" --instance-type "${COMPUTE_MACHINE_TYPE:-m5.2xlarge}" --max-replicas 15 --min-replicas 1 --name scaling --enable-autoscaling --autorepair
fi

cd ${SCRIPT_DIR}
2 changes: 1 addition & 1 deletion provision/opentofu/modules/rosa/hcp/variables.tf
@@ -61,7 +61,7 @@ variable "openshift_version" {

variable "instance_type" {
type = string
- default = "m5.4xlarge"
+ default = "m5.2xlarge"
nullable = false
}

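When the OpenTofu module is applied directly, the new default can still be overridden per run; a sketch, where only the `instance_type` variable name comes from `variables.tf` above and the rest of the invocation is an assumption:

[source,bash]
----
# Sketch: override the ROSA HCP instance_type variable for a single apply.
tofu apply -var 'instance_type=m5.4xlarge'
----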
