
Conversation

@Sashan commented Nov 27, 2025

This change simplifies the current HAProxy test setup. Testing no longer requires an apache/nginx server as a backend. Instead of siege, the test uses h1load [1] as the client.

The pull request also installs the httpterm [2] HTTP/1.1 server. It is currently unused.

The HAProxy configuration used for testing matches the configuration used in the 'State of SSL stacks' write-up [3].
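
For orientation, a minimal TLS-terminating HAProxy configuration in the spirit of that write-up could look roughly like the sketch below; the port, certificate path, and backend address are placeholders for illustration, not the actual values used by this PR:

    global
        maxconn 5000

    defaults
        mode http
        timeout connect 5s
        timeout client 10s
        timeout server 10s

    frontend tls_in
        bind :4443 ssl crt /etc/haproxy/rsa.pem
        default_backend local_http

    backend local_http
        server httpterm 127.0.0.1:8000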

The h1load client currently runs with the following options:

    h1load
        -l \                  # long-format results, the output expected by the h1load shell script
        -P \                  # also report percentiles for the gathered data
        -d ${TEST_TIME} \     # test duration; TEST_TIME is 10 seconds
        -c 500 \              # 500 concurrent connections
        -t ${THREAD_COUNT} \  # gather data for 1, 2, 4, 8, 16, 32, 64 threads
        -u \                  # use runtime instead of system time
        ${BASE_URL}${PORT}    # URL to connect to

The options above are just an initial version.
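
Purely as an illustration of how the sweep over thread counts could be driven, a loop along these lines would do (a sketch only; the actual invocation lives in bench_run_haproxy.sh and is shown in the review discussion below):

    #!/bin/sh
    # Sketch only: run h1load once per thread count and keep each result file.
    TEST_TIME=10
    BASE_URL="https://127.0.0.1:"   # placeholder target
    PORT=4443                       # placeholder port
    for THREAD_COUNT in 1 2 4 8 16 32 64; do
        ./h1load -l -P -d ${TEST_TIME} -c 500 -t ${THREAD_COUNT} -u \
            "${BASE_URL}${PORT}" > h1load-${THREAD_COUNT}.out || exit 1
    done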

[1] https://github.com/wtarreau/h1load

[2] https://github.com/wtarreau/httpterm

[3] https://www.haproxy.com/blog/state-of-ssl-stacks

@Sashan moved this to Waiting Review in Development Board Nov 27, 2025
@vavroch2010 moved this from Waiting Review to In Progress in Development Board Dec 1, 2025

@nhorman left a comment

This looks fine to me, but I'm a bit confused as to how h1load gets set up with this test. Is it meant to be run by hand independently?

@Sashan commented Dec 8, 2025

> This looks fine to me, but I'm a bit confused as to how h1load gets set up with this test. Is it meant to be run by hand independently?

It's run by bench_run_haproxy.sh; this snippet comes from the run_test() function:

    RESULT=${RESULT_DIR}/h1load-dh-rsa-noreuse-${THREAD_COUNT}-${SSL_LIB}.out
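    # The port to connect to: the RSA base port plus the PROXY_CHAIN offset.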
    PORT=$(( ${PORT_RSA} + ${PROXY_CHAIN}  ))
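    # Run h1load with LD_LIBRARY_PATH pointing at the OpenSSL build under test.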
    LD_LIBRARY_PATH=${OPENSSL_DIR}/lib ${H1LOAD} \
        -l \
        -P \
        -d ${TEST_TIME} \
        -c 500 \
        -t ${THREAD_COUNT} \
        -u \
        ${BASE_URL}${PORT} > ${RESULT} || exit 1

It's the H1LOAD variable which holds the path to the h1load client; the client is linked with the desired SSL library. I'm still verifying the setup and figuring out which command-line options to use. I've also added the ability to use the siege [1] client, just to cross-check the results with a different kind of test.

The part up to collecting the results is mostly done. I'm still working on gnuplot scripts to post-process the data; I will include them in a separate PR.
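
Purely for illustration, the post-processing could eventually look something like the sketch below; it assumes a hypothetical two-column summary file (thread count, requests/s) distilled from the h1load result files, which is not a format this PR produces:

    #!/bin/sh
    # Sketch only: plot throughput vs. thread count from a hypothetical summary.dat.
    gnuplot <<'EOF'
    set terminal png size 800,600
    set output "throughput.png"
    set xlabel "threads"
    set ylabel "requests/s"
    plot "summary.dat" using 1:2 with linespoints title "haproxy"
    EOF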
