This is the v2.0 release of our benchmark automation suite.
Please refer to the 1.0 release for the automation discussed in our 2019 blog post.
The suite includes:
- orchestrator tooling and Helm charts for deploying benchmark clusters from an orchestrator cluster
- metrics of all benchmark clusters are scraped and made available in the orchestrator cluster
- a stand-alone benchmark cluster configuration for use with Lokomotive
- Helm charts for deploying Emojivoto to provide application endpoints to run benchmarks against
- Helm charts for deploying a wrk2 benchmark job, as well as a job to create summary metrics of multiple benchmark runs
- Grafana dashboards to view benchmark metrics
Prerequisites:
- the cluster is set up
- the push gateway is installed
- the dashboards are uploaded to Grafana
- the applications are installed
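Each prerequisite is covered in its own section below. A quick, optional way to confirm they are all in place before starting (exact pod and release names depend on your set-up):
$ kubectl -n monitoring get pods
$ helm list --all-namespaces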
- Start the benchmark:
$ helm install --create-namespace benchmark --namespace benchmark configs/benchmark
This will start a 120s, 3000 RPS benchmark against 10 emojivoto app instances, with 96 threads / simultaneous connections. See the Helm chart values for all parameters, and use Helm command line parameters for different values (e.g. add
--set wrk2.RPS="500"
to change the target RPS).
- Refer to the "wrk2 cockpit" Grafana dashboard for live metrics.
- After the run has concluded, run the "metrics-merger" job to update summary metrics:
$ helm install --create-namespace --namespace metrics-merger \
      metrics-merger configs/metrics-merger/
This will update the "wrk2 summary" dashboard.
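To follow a run from the command line as well, you can watch the jobs and pods in the benchmark namespaces (a quick, optional check; the exact job and pod names are generated by the charts):
$ kubectl -n benchmark get jobs,pods
$ kubectl -n metrics-merger get jobs,pods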
The benchmark suite script installs applications and service meshes, and runs several benchmarks in a loop.
Use the supplied scripts/run_benchmarks.sh to run a full benchmark suite:
5 runs of 10 minutes each, for 500-5000 RPS in 500 RPS increments, with 128 threads,
for "bare metal" (no service mesh), Linkerd, and Istio, against 60 emojivoto instances.
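A minimal invocation, assuming KUBECONFIG already points at the benchmark cluster (see the provisioning steps below) and the script's defaults are acceptable; run counts, RPS range, and mesh selection are changed by editing the script itself:
$ ./scripts/run_benchmarks.sh 2>&1 | tee benchmark-suite.log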
We use Equinix Metal infrastructure to run the benchmarks on, AWS S3 for sharing cluster state, and AWS Route53 for the clusters' public DNS entries. You'll need an Equinix Metal account and the respective API token, as well as an AWS account and accompanying secret key, before you can provision a cluster.
You'll also need a recent version of Lokomotive.
- Make the authentication tokens available to the lokoctl command. You can do this in a couple of ways, for example by exporting them:
$ export PACKET_AUTH_TOKEN="Your Equinix Metal Auth Token"
$ export AWS_ACCESS_KEY_ID="your access key for AWS"
$ export AWS_SECRET_ACCESS_KEY="your secret for the above access key"
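If you have the AWS CLI installed, a quick, optional check that the AWS credentials are picked up:
$ aws sts get-caller-identity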
- Create the Route53 hosted zone that will be used by the cluster, as well as an S3 bucket and a DynamoDB table for storing Lokomotive's state. Check out Lokomotive's "Using S3 as backend" documentation for how to do this.
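If you prefer the AWS CLI over the console, the AWS-side resources can be created roughly like this (a sketch only; all names, the region, and the domain are placeholders, the LockID key schema follows the Terraform S3 backend convention that Lokomotive's state locking builds on, and Lokomotive's documentation remains the reference):
$ aws route53 create-hosted-zone --name example.com --caller-reference "$(date +%s)"
$ aws s3api create-bucket --bucket my-lokomotive-state --region eu-central-1 \
      --create-bucket-configuration LocationConstraint=eu-central-1
$ aws dynamodb create-table --table-name my-lokomotive-lock \
      --attribute-definitions AttributeName=LockID,AttributeType=S \
      --key-schema AttributeName=LockID,KeyType=HASH \
      --billing-mode PAY_PER_REQUEST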
- Create configs/lokocfg.vars by copying the example file configs/lokocfg.vars.example, and editing its contents:
metal_project_id = "[ID of the Equinix Metal project to deploy to]"
route53_zone = "[cluster's Route53 zone]"
state_s3_bucket = "[PRIVATE AWS S3 bucket to share cluster state in]"
state_s3_key = "[key in S3 bucket, e.g. cluster name]"
state_s3_region = "[AWS S3 region to use]"
lock_dynamodb_table = "[DynamoDB table name to use as state lock, e.g. cluster name]"
region_private_cidr = "[Your Equinix Metal region's private CIDR]"
ssh_pub_keys = [ "[Your SSH pub keys]" ]
- Review the benchmark cluster config in configs/equinix-metal-cluster.lokocfg
- Provision the cluster by running
$ cd configs
$ lokoctl cluster apply
After provisioning has concluded, make sure to run
$ export KUBECONFIG=assets/cluster-assets/auth/kubeconfig
to get kubectl access to the cluster.
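A quick check that access works:
$ kubectl get nodes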
The benchmark load generator will push intermediate run-time metrics as well as final latency metrics to a Prometheus push gateway. A push gateway is currently not bundled with Lokomotive's Prometheus component. Deploy one by issuing
$ helm install pushgateway --namespace monitoring configs/pushgateway
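To check that the push gateway came up (the exact pod and service names depend on the chart, so adjust the filter as needed):
$ kubectl -n monitoring get pods,svc | grep -i pushgateway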
Demo apps will be used to run the benchmarks against. We'll use Linkerd's emojivoto.
We will deploy multiple instances of the app to emulate many applications in a cluster. For the default set-up, which includes 4 application nodes, the loop below deploys 10 "emojivoto" instances, matching the default benchmark configuration; adjust the loop count for larger runs (the full benchmark suite above targets 60 instances):
$ for i in $(seq 10) ; do \
    helm install --create-namespace emojivoto-$i \
        --namespace emojivoto-$i \
        configs/emojivoto ; \
  done
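Once the loop finishes, each instance runs in its own namespace; a quick check that the pods are up:
$ kubectl get pods --all-namespaces | grep emojivoto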
- Get the Grafana Admin password from the cluster
$ kubectl -n monitoring get secret prometheus-operator-grafana -o jsonpath='{.data.admin-password}' | base64 -d && echo
- Forward the Grafana service port from the cluster
$ kubectl -n monitoring port-forward svc/prometheus-operator-grafana 3000:80 &
- Log in to Grafana and create an API key we'll use to upload the dashboard (a curl sketch for this follows the upload command below)
- Upload the dashboard:
$ cd dashboard
$ ./upload_dashboard.sh "[API KEY]" grafana-wrk2-cockpit.json localhost:3000
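If you prefer creating the API key from the command line instead of the Grafana UI, the HTTP API can be used while the port-forward from above is active (a sketch; substitute the admin password retrieved earlier, and note that newer Grafana releases use service account tokens instead of API keys):
$ curl -s -X POST -u "admin:[ADMIN PASSWORD]" http://localhost:3000/api/auth/keys \
      -H "Content-Type: application/json" \
      -d '{"name": "dashboard-upload", "role": "Admin"}'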
To benchmark OSM with AKS, follow these steps:
- First, create an AKS cluster and set it as the current context in your local kubeconfig (see the sketch after this list).
- Update the variables in scripts/osm-setup.sh as needed, and run the script to set up OSM.
- Upload the Kinvolk dashboard to Grafana, following steps similar to the Upload Grafana dashboard section above, but upload the wrk2-dash-osm.json file instead.
- Update the variables in scripts/run_benchmarks.sh as needed, and run the script to start the benchmark.
- Collect the results on the uploaded dashboard in Grafana.
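A minimal sketch of the AKS step using the Azure CLI (the resource group, cluster name, node count, and location below are placeholders; size the node pool to match the scale you intend to benchmark):
$ az group create --name mesh-bench --location westeurope
$ az aks create --resource-group mesh-bench --name mesh-bench-aks --node-count 4 --generate-ssh-keys
$ az aks get-credentials --resource-group mesh-bench --name mesh-bench-aks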