[Feature Request] Benchmark HTTP/1.1 vs HTTP/2 (over clear text) for _bulk APIs #17257
Comments
Benchmarks results

Every benchmark round consisted of 3 runs.

Benchmarks comparison

Default transport:
Latencies | HTTP/1.1 (run 1) | HTTP/1.1 (run 2) | HTTP/1.1 (run 3) | HTTP/2 (run 1) | HTTP/2 (run 2) | HTTP/2 (run 3) |
---|---|---|---|---|---|---|
min (ms) | 9.478 | 9.224 | 9.174 | 🔼 9.637 🔼 | 🔼 9.228 🔼 | 🔼 9.215 🔼 |
mean (ms) | 70.225 | 64.53 | 57.28 | 🔽 50.003 🔽 | 🔼 68.099 🔼 | 🔼 60.492 🔼 |
50th (ms) | 11.885 | 11.909 | 12.043 | 🔼 12.539 🔼 | 🔼 12.325 🔼 | 🔼 12.188 🔼 |
90th (ms) | 13.443 | 13.384 | 13.488 | 🔼 23.265 🔼 | 🔼 16.919 🔼 | 🔼 18.018 🔼 |
95th (ms) | 17.363 | 17.319 | 16.61 | 🔼 31.214 🔼 | 🔼 35.241 🔼 | 🔼 29.076 🔼 |
99th (ms) | 2612 | 2398 | 1956 | 🔽 1692 🔽 | 🔼 2451 🔼 | 🔼 2209 🔼 |
max (ms) | 5149 | 5099 | 4975 | 🔽 4199 🔽 | 🔽 4669 🔽 | 🔽 4746 🔽 |
Reactive transport: transport-reactor-netty4
Latencies | HTTP/1.1 (run 1) | HTTP/1.1 (run 2) | HTTP/1.1 (run 3) | HTTP/2 (run 1) | HTTP/2 (run 2) | HTTP/2 (run 3) |
---|---|---|---|---|---|---|
min (ms) | 9.67 | 9.779 | 9.743 | 🔽 9.660 🔽 | 🔽 9.483 🔽 | 🔼 9.812 🔼 |
mean (ms) | 52.997 | 81.012 | 88.863 | 🔽 45.976 🔽 | 🔽 63.149 🔽 | 🔽 53.230 🔽 |
50th (ms) | 12.525 | 12.546 | 12.641 | 🔼 12.741 🔼 | 🔼 12.720 🔼 | 🔼 12.835 🔼 |
90th (ms) | 14.094 | 14.245 | 14.158 | 🔼 18.183 🔼 | 🔼 18.511 🔼 | 🔼 17.654 🔼 |
95th (ms) | 17.205 | 22.254 | 26.195 | 🔼 29.889 🔼 | 🔼 30.777 🔼 | 🔼 27.709 🔼 |
99th (ms) | 1861 | 2930 | 3255 | 🔽 1470 🔽 | 🔽 2359 🔽 | 🔽 1876 🔽 |
max (ms) | 4531 | 5826 | 6329 | 🔽 4326 🔽 | 🔽 4573 🔽 | 🔽 4502 🔽 |
Conclusions
- HTTP/2 has consistently lower tail latencies.
- HTTP/1.1 overall shows a better latency distribution across the majority of measurements.
- The benchmarking is limited to the `_bulk` API only (non-streaming).
Thanks for the detailed comparison @reta! It's interesting that p50 latencies stay around ~12 ms across the board, while the largest discrepancies show up at p90/p99. I'm curious to hear your thoughts on which HTTP/2 features we expect to deliver the most impact. My initial impression is that multiplexing is the biggest deal, allowing the server to reuse a single connection to handle multiple requests from a client. I think we would see this impact in the worker thread pool/queue of our client/server transport? It's not clear to me how much this should trickle down to total latency, particularly if our thread pool is not under significant strain and no requests are being pushed back into a queue.
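For illustration only (not the harness used for the numbers above): a minimal sketch, assuming a local cluster on `localhost:9200`, a hypothetical `test` index, and the JDK 11+ `java.net.http.HttpClient`, of what multiplexing buys on the client side: several in-flight `_bulk` requests can share one HTTP/2 connection instead of holding one connection each.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.IntStream;

public class BulkMultiplexingSketch {
    public static void main(String[] args) {
        // Requesting HTTP_2 over a plain http:// URI makes the client attempt
        // the cleartext upgrade (h2c) and fall back to HTTP/1.1 if it fails.
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();

        // Hypothetical single-document bulk payload (NDJSON).
        String bulkBody = "{\"index\":{\"_index\":\"test\"}}\n{\"field\":\"value\"}\n";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/_bulk"))
                .header("Content-Type", "application/x-ndjson")
                .POST(HttpRequest.BodyPublishers.ofString(bulkBody))
                .build();

        // With HTTP/2 these concurrent requests are multiplexed as streams on
        // one TCP connection; with HTTP/1.1 each needs its own connection.
        List<CompletableFuture<HttpResponse<String>>> inFlight = IntStream.range(0, 8)
                .mapToObj(i -> client.sendAsync(request, HttpResponse.BodyHandlers.ofString()))
                .toList();

        for (CompletableFuture<HttpResponse<String>> f : inFlight) {
            HttpResponse<String> r = f.join();
            // version() reports the protocol actually negotiated for the exchange.
            System.out.println(r.version() + " -> " + r.statusCode());
        }
    }
}
```

Whether this shows up in end-to-end latency still depends on the server-side threading model, as you point out; the sketch only removes per-request connection setup and head-of-line blocking on the client.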
Thanks @finnegancarroll!
I would agree with you; that is something we should get out of the box now, at least with respect to JVM-based implementations, since clients do support HTTP/2 out of the box. More importantly, I would love to see streaming get first-class support in transports: we do have it now (with transport-reactor-netty4). One of the questions I would like to have an answer for rather soon-ish is whether we could swap the default transport for it.
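Purely as an illustration of the client side of that streaming story (this is not the transport-level streaming work referenced above): a sketch, assuming the same hypothetical local endpoint and `test` index, of feeding a `_bulk` body to the JDK `HttpClient` incrementally instead of buffering it all in memory first.

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.io.SequenceInputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Enumeration;

public class StreamingBulkBodySketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();

        // Lazily generate NDJSON action/document pairs; the full bulk body is
        // never materialized in memory before the request is written.
        Enumeration<InputStream> chunks = new Enumeration<>() {
            private int i = 0;
            @Override public boolean hasMoreElements() { return i < 10_000; }
            @Override public InputStream nextElement() {
                String line = "{\"index\":{\"_index\":\"test\"}}\n{\"n\":" + (i++) + "}\n";
                return new ByteArrayInputStream(line.getBytes(StandardCharsets.UTF_8));
            }
        };

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/_bulk"))
                .header("Content-Type", "application/x-ndjson")
                // ofInputStream streams the body as it is read; note the supplier
                // may be invoked again on retries, which this sketch ignores.
                .POST(HttpRequest.BodyPublishers.ofInputStream(
                        () -> new SequenceInputStream(chunks)))
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.version() + " -> " + response.statusCode());
    }
}
```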
Is your feature request related to a problem? Please describe
As part of implementing an experimental gRPC transport [1], a round of benchmarks was performed [2] to compare HTTP/1.1 and gRPC in the context of the `_bulk` API. However, it turned out we have not actually benchmarked HTTP/1.1 vs HTTP/2.

[1] #16534
[2] #16711 (comment)
Describe the solution you'd like
Benchmark HTTP/1.1 vs HTTP/2 (over clear text) for the `_bulk` APIs.

Setup
Same as in [1]; an illustrative client-side sketch of such a comparison follows the reference below.
[1] #16534
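Not part of the original setup: a minimal sketch, assuming a local cluster on `localhost:9200`, a hypothetical `test` index, and the JDK `HttpClient`, of timing the same `_bulk` payload over HTTP/1.1 and then over HTTP/2 cleartext. This is a single-shot smoke test, not a substitute for the actual benchmark harness.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BulkProtocolComparisonSketch {

    static void timeBulk(HttpClient.Version version, String bulkBody) throws Exception {
        HttpClient client = HttpClient.newBuilder().version(version).build();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/_bulk"))
                .header("Content-Type", "application/x-ndjson")
                .POST(HttpRequest.BodyPublishers.ofString(bulkBody))
                .build();

        long start = System.nanoTime();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        // response.version() shows the protocol actually negotiated; h2c quietly
        // falls back to HTTP/1.1 if the server does not accept the upgrade.
        System.out.printf("%s -> status %d in %d ms%n",
                response.version(), response.statusCode(), elapsedMs);
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical single-document payload; a real run would use the bulk
        // sizes and document corpus from the benchmark setup.
        String bulkBody = "{\"index\":{\"_index\":\"test\"}}\n{\"field\":\"value\"}\n";
        timeBulk(HttpClient.Version.HTTP_1_1, bulkBody);
        timeBulk(HttpClient.Version.HTTP_2, bulkBody);
    }
}
```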
Related component
Indexing
Describe alternatives you've considered
N/A
Additional context
N/A