There are some [scripts](scripts) available to help you run the application:

  - Runs a set of requests against a running application.
- [`infra.sh`](scripts/infra.sh)
  - Starts/stops required infrastructure

## Running performance comparisons

Of course you want to start generating numbers and making comparisons; that's why you're here!
There are lots of *wrong* ways to run benchmarks, and running them reliably requires a controlled environment, strong automation, and multiple machines.
Realistically, that kind of setup isn't always possible.
Here's a range of options, from easiest to best practice.
Remember that the easy setup will *not* be particularly accurate, but it does sidestep some of the worst pitfalls of casual benchmarking.
### Quick and dirty: Single laptop, simple scripts
Before we go any further, know that this kind of test is not going to be reliable.
Laptops usually have a number of other processes running on them, and modern laptop CPUs are subject to power management, which can wildly skew results.
Often, some cores are 'fast' and some are 'slow', and without extra care, you don't know which core your test is running on.
Thermal management also means 'fast' jobs get throttled, while 'slow' jobs might run at their normal speed.
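If you do run on a laptop, you can at least control core placement yourself. Here's an illustrative sketch (not part of this project's scripts), using Python's `os.sched_setaffinity`, which is Linux-only; the choice of core 0 is arbitrary, and which cores are 'fast' or 'slow' varies by machine:

```python
import os

# Linux-only: pin this process to a single core (core 0 here) so the
# measurement isn't silently migrated between 'fast' and 'slow' cores.
os.sched_setaffinity(0, {0})

# Confirm where we're allowed to run before starting the measurement.
print("running on cores:", os.sched_getaffinity(0))
```

Pinning removes one variable (core migration), but it does nothing about frequency scaling or thermal throttling, so treat it as a partial mitigation at best.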
Load shouldn't be generated on the same machine as the one running the workload, because the work of load generation can interfere with what's being measured.
But if you accept all that, and know these results should be treated with caution, here's our recommendation for the least-worst way of running a quick and dirty test.
We use [Hyperfoil](https://hyperfoil.io/) instead of [wrk](https://github.com/wg/wrk), to avoid [coordinated omission](https://redhatperf.github.io/post/coordinated-omission/) issues. For simplicity, we use the [wrk2](https://github.com/giltene/wrk2) Hyperfoil bindings.
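To see why coordinated omission matters, here's a small self-contained simulation (our own sketch, not Hyperfoil's code): a server that normally answers in 10 ms stalls once for 2 s, under an intended load of one request every 100 ms. A closed-loop generator (wrk-style) only sends the next request after the previous one returns and records service times, so the stall shows up as a single slow sample. An open-loop generator measures each request from its *intended* send time, so the requests queued behind the stall also record the delay they actually experienced:

```python
# One 2 s stall followed by 99 normal 10 ms responses,
# with an intended inter-arrival time of 100 ms.
service_times = [2.0] + [0.01] * 99
interval = 0.1

# Closed-loop: each recorded latency is just the service time.
closed_loop = list(service_times)

# Open-loop: latency = completion time minus intended send time,
# over a single connection (requests are serialized).
open_loop = []
finish = 0.0
for i, st in enumerate(service_times):
    intended = i * interval            # when the request was *due*
    start = max(finish, intended)      # can't start before the previous finishes
    finish = start + st
    open_loop.append(finish - intended)

closed_mean = sum(closed_loop) / len(closed_loop)
open_mean = sum(open_loop) / len(open_loop)
print(f"closed-loop mean latency: {closed_mean:.3f} s")
print(f"open-loop   mean latency: {open_mean:.3f} s")
```

The closed-loop mean comes out roughly eight times lower than the open-loop mean for the exact same server behaviour, which is the measurement error coordinated-omission-aware tools are designed to avoid.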