
testing: use Hurl in CI to test Caddy against spec #6255

Draft · mohammed90 wants to merge 15 commits into master from hurl-tests

Conversation

@mohammed90 (Member) commented Apr 20, 2024

Since #5704 was posted, we've been on-and-off brainstorming how to approach testing of a web server. We sorta agreed that a declarative approach is desirable, but we weren't aware of any tools that would facilitate it, nor did we have a concrete plan. We just knew we needed solid tests.

I recently came across Hurl (https://github.com/Orange-OpenSource/hurl) and was curious whether it meets our needs. It is declarative. It makes HTTP calls. It stands on the shoulders of The Giant®, namely curl. The PoC presented in this branch seems to work. In fact, PR #6249 is a fix for a bug found while building this PoC.

This PR is to discuss the approach and to collaboratively add the tests. The core idea is simple:

HTTP handlers claim to conform to particular behavior. That behavior can be specified as a collection of Hurl tests, i.e. a collection of HTTP requests and expected responses. The Hurl file defines the spec the handler shall meet.
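For illustration, a minimal sketch of what such a spec file could look like (the port, path, and asserted headers here are hypothetical, not taken from this branch):

```hurl
# Hypothetical spec for a static file handler: a plain GET
# should return 200 with validators that enable caching.
GET http://localhost:8080/index.html

HTTP 200
[Asserts]
header "Etag" exists
header "Last-Modified" exists
body contains "<html"
```

Each request/response pair is an executable assertion, so the file doubles as readable documentation of the handler's contract.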

TODO:

  • Agree on the structure/placement of the spec files
  • Write the spec files for all the existing handlers by inspecting the docs and the code

For TODO item 2, code coverage is a helpful tool. There's a way to extract execution coverage of the Hurl tests†, but I haven't found a neat way to present it on GitHub PRs/Actions.

Based on the work done to resolve #5849 and the existing REDbot project, we can translate those expectations and rules into Hurl files.
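As a sketch of that translation (the endpoint is hypothetical, and the rule wording is mine, not REDbot's), a rule like "a 304 revalidation response should still carry the Vary header" could be expressed as:

```hurl
# Hypothetical revalidation check: capture the ETag from a fresh
# response, replay the request conditionally, and assert that the
# 304 still carries Vary (cf. the linked file-server issue).
GET http://localhost:8080/style.css

HTTP 200
[Captures]
etag: header "Etag"

GET http://localhost:8080/style.css
If-None-Match: {{etag}}

HTTP 304
[Asserts]
header "Vary" exists
```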

† Using this article as a guide: build Caddy with coverage instrumentation using `go build -cover`. Run Caddy with `GOCOVERDIR=./coverdir caddy run`, then run the Hurl tests. Stop Caddy with either `caddy stop` or Ctrl-C. Run `go tool covdata textfmt -i=coverdir -o profile.txt`, then `go tool cover -html profile.txt`. An HTML page opens in the browser with each file annotated by color according to whether it was executed.
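Transcribed as a shell session (the `coverdir` path and `go tool` invocations are as described above; the spec-file location, the `hurl --test` invocation, and the presence of a Caddyfile in the working directory are assumptions for illustration):

```sh
# Build an instrumented Caddy binary (Go 1.20+).
go build -cover -o caddy ./cmd/caddy

# Run it with a directory for the coverage counters.
mkdir -p coverdir
GOCOVERDIR=./coverdir ./caddy run --config Caddyfile &

# Exercise the server against the spec files, then stop Caddy
# gracefully so the counters get flushed.
hurl --test spec/*.hurl
./caddy stop

# Convert the counters to a text profile and view it as HTML.
go tool covdata textfmt -i=coverdir -o profile.txt
go tool cover -html profile.txt
```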

@mohammed90 mohammed90 added the in progress 🏃‍♂️ (Being actively worked on), discussion 💬 (The right solution needs to be found), and CI/CD 🔩 (Automated tests, releases) labels on Apr 20, 2024
@mohammed90 mohammed90 added this to the v2.9.0 milestone Apr 20, 2024
@mohammed90 (Member, Author) commented:

Example of how it looks in the Actions run summary:

[screenshot: Actions run summary]

Example of how failure is presented:

[screenshot: failure output]

Example of how success is presented:

[screenshot: success output]

github-actions bot commented Apr 20, 2024

Test Results

6 tests   6 ✅  2s ⏱️
6 suites  0 💤
1 file    0 ❌

Results for commit c718744.

♻️ This comment has been updated with latest results.

@mholt (Member) commented Apr 20, 2024

Ooo, I like where this is going! Will revisit this after 2.8.

@dkarlovi commented May 1, 2024

@mohammed90 this looks great! \o/

I'm only wondering one thing:

> we can translate those expectations and rules into Hurl files

Since the expectations are basically "the HTTP protocol" (and related stuff), would Caddy actually be the right place to do this? REDbot's value here, IMO, is exactly that it already knows which requests to make and which assertions to run against them; converting that into Hurl-based specs seems like a sizeable project with a sizeable deliverable.

Would creating this "Hurl-based HTTP spec test suite" be better as a standalone project which Caddy (and others, hopefully) can take advantage of and, hopefully, maintain?

@mohammed90 (Member, Author) commented:

> Since the expectations are basically "the HTTP protocol" (and related stuff), would Caddy actually be the right place to do this? REDbot's value here, IMO, is exactly that it already knows which requests to make and which assertions to run against them; converting that into Hurl-based specs seems like a sizeable project with a sizeable deliverable.

Caddy can be the right place :) We're aiming to test Caddy's conformance to the spec. I skimmed the REDbot repo, and it isn't too complex, but integrating REDbot itself into the CI pipeline might be more of a hassle to maintain. Translating the behavior into Hurl files makes the expectations easier to understand and poke at.

> Would creating this "Hurl-based HTTP spec test suite" be better as a standalone project which Caddy (and others, hopefully) can take advantage of and, hopefully, maintain?

Perhaps, but maintaining such a project is beyond my capacity. I can't initiate and commit to it (my personal backlog is too long already), though I may help if it's maintained by a group.

Signed-off-by: Mohammed Al Sahaf <msaa1990@gmail.com>
@mohammed90 mohammed90 force-pushed the hurl-tests branch 4 times, most recently from 7e174a0 to 6d4992e on June 20, 2024 17:06
Signed-off-by: Mohammed Al Sahaf <msaa1990@gmail.com>
@mohammed90 mohammed90 force-pushed the hurl-tests branch 2 times, most recently from 1c8a91e to 05c2380 on October 29, 2024 21:00
@mohammed90 mohammed90 force-pushed the hurl-tests branch 5 times, most recently from 755c510 to f59151e on October 29, 2024 22:01
@mohammed90 mohammed90 force-pushed the hurl-tests branch 2 times, most recently from b7e90d4 to 0d9f7b3 on October 31, 2024 07:28
@mholt (Member) commented Nov 15, 2024

I like where this is going. I'll have to give it a closer look soon :)


Successfully merging this pull request may close these issues:

  • File server missing Vary on 206, 304