Automated tests using PackTest #930
Conversation
This PR is ready for review, so have a go at trying to run the tests on your machine! I'm going to keep writing tests for the remaining modules, but we can merge this PR before every module has tests.
Are there any concerns with this running after the commit / release step? I know we need the packs after beet is done with them in order to test, but it would be nice if packtest prevented the release if there were failures.
That's a fair comment and I considered doing that. We could delay the release to gm4.co because that happens in a separate action step (Commit release and Push release), but since the Modrinth and Smithed releases happen during beet build it would be a bit trickier. We could separate the publishing logic out of beet, which might not be a bad idea.
For now I kept it last because I was a bit concerned with how long these tests could take and didn't want to needlessly delay the release. It looks like the tests only take about 40s right now, so that is probably fine.
Tests will run in pull requests anyway, so we should be able to catch most issues early. And when a test fails on master we can investigate anyway. Still an improvement over not knowing that a bug was created and hearing about it months later in an issue on GitHub.
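(For reference, a minimal sketch of what gating the release on the tests could look like with a `needs:` dependency between jobs; the job names, commands, and script paths here are hypothetical, not the ones in our workflow.)

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: beet build                    # assumed build command, placeholder
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/run_packtest.sh     # hypothetical test entry point
  release:
    needs: [build, test]                   # release is skipped if either job fails
    runs-on: ubuntu-latest
    steps:
      - run: echo "push release / publish to Modrinth and Smithed here"
```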
Alright, I took a peek at this and I think it looks pretty good! Tests look pretty easy to create and that's hopefully super powerful going forward. There are a few points I want to point out or briefly discuss.

1. Keeping test files in the released pack

I feel a little strange about releasing packs with the test files still intact, though there are a lot of facets to that.
Pros:
- No test files in the release means adding tests won't trigger an update patch, which is probably preferable.
Cons:
- Since half of the test files (structures, predicates etc.) are mixed in with regular datapack resources, we wouldn't be able to simply remove the `tests` directory.
- Would complicate the build process, since we'd need to first emit a pack with tests to hand to packtest, and then re-emit the pack without tests for release.

2. Pack load errors

One of the things I was most excited about in the first iteration of packtest was the ability to see game output log errors in the GH action. This kinda shows up, but it doesn't fail the action. I think this leads into my next point though.

3. Action Annotations

This is something I looked into a bit earlier but got too confused by, but maybe you know some tricks I couldn't find on Google. I would love it if the failed tests (and pack load errors) showed up as action annotations (to my best understanding, just specially formatted messages to stdout). Right now they're just reported as "action exited with exit code 1". This might be a whole other PR for improving our action logging though.
I have added both 2 and 3 to the packtest roadmap.
I kinda like the idea of splitting our current action into build/test, at least so it's a little more cleanly organized in the action summary. Is that possible?
I tested this in-game. I have a couple of questions/notes still:
It shouldn't. Public repos have unlimited action minutes, even for organizations. Looking at the action time, it seems to add roughly 40 seconds to every build.
I have split the workflow into two jobs. Still to do: strip the tests from the build job and do less work in the test job (like not building libraries separately, skipping all the readme stuff, etc.).
This looks like it runs the release build twice, once for the build and once for the test. Is it possible to reuse the files in test, or was this done to run the jobs in parallel? I don't think there's that much to strip out of the build that would benefit us. I think I've also decided it's fine to distribute test files? It might allow for some better debugging for advanced users (for example on modded servers) or point more people to the existence of packtest.
@SpecialBuilder32 If we wanted to reuse the same build then there was no point in splitting the action into two jobs. I was fine with the original workflow too 🤷
I wanted to split it so we could get status badges for build and test separately. Can different tasks not reuse the same files / work on different runners? |
They can, using artifacts, but from experience on this repo it has a significant overhead (taking 10-20 seconds). I do see the advantage of having separate status badges so I'll keep it split. I don't think building it twice is a big problem: it happens in parallel and we can cut away stuff we don't need for testing, so overall the action will be faster.
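(A minimal sketch of the artifact hand-off between the two jobs, in case we revisit it; the artifact name, output directory, and test command are assumptions, not the actual paths in this repo.)

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: beet build                    # assumed build command
      - uses: actions/upload-artifact@v4   # hand the built packs to the test job
        with:
          name: built-packs
          path: out/                       # hypothetical build output directory
  test:
    needs: build                           # wait for build, then reuse its output
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: built-packs
          path: out/
      - run: ./scripts/run_packtest.sh out/   # hypothetical test entry point
```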
- Adds line numbers to errors, detects resource loading errors
@SpecialBuilder32 Pack load errors will now show up red in the log and will cause the action to fail (see below). I tried implementing the annotations and found that it would be difficult for us because of the beet build step: when the server loads these data packs it can't know where those tests were defined in the source (without some heuristics and folder scanning, or an explicit source map from beet). I decided to leave that feature out for now.
I thought you could make an annotation without pointing it to a file, just having it essentially copy a message to the action summary. Pointing it to a source file was an optional thing for better debugging |
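(For what it's worth, a rough sketch of the workflow-command format GitHub picks up from stdout: `::error::` without a `file=` parameter produces a file-less annotation, and writing to `$GITHUB_STEP_SUMMARY` puts text on the run summary page. A step along these lines could go in the test job; the messages, module name, and file path below are placeholders.)

```yaml
- name: Report packtest failures
  if: failure()
  run: |
    # file-less annotation, shows up on the run without pointing at a source file
    echo "::error title=PackTest::3 tests failed in gm4_example_module"
    # optional file/line form, only useful if we ever get a source map from beet
    echo "::error file=gm4_example_module/tests/example.mcfunction,line=12::assertion failed"
    # free-form markdown on the job summary page
    echo "### PackTest results" >> "$GITHUB_STEP_SUMMARY"
```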
Edit after I saw Special's message: Oh, I didn't even think to try that. I'll give it one more go.
Original message: I'm happy with this PR in its current state. Future work can include:
All requested changes implemented and it has been 2 weeks, so merging...
* Initial prototype of tests
* Add more tests using await
* More tests
* Update to "await delay" + add distance check to some selectors
* Try running tests in github action
* Test job needs build
* Fix data pack file names in test action
* Update packtest to v1.3
* Add documentation
* Some more tests because why not
* Include new tests in CI
* Fix typo
* Try to use a glob instead of listing all packs
* try it another way
* Disable guidebook tests and make animi respawn test optional
* Fix guidebook filtering
* Disable animi shamir test asserts
* Update packtest_runner
* Remove some generated files before uploading artifact
* Switch to a simpler test workflow
* Adding tests
* Update packtest to v1.5
* A few more tests
* Fix animi shamir test with undead players
* Add seperate test job
* Fix also build modules for PRs
* Strip tests in release builds and optimize test build
* Fix guidebook path
* Include guidebook in tests
* Update packtest to 1.6 beta version - Adds line numbers to errors, detects resource loading errors
* Replace packtest jar with modrinth cdn
* Update packtest to v1.6 beta 2
* Fix purple -> dark_purple error in combat expanded loot tables
* Update packtest to v1.6 and add some temporary mistakes
* Fix temporary mistakes
Revisiting automated tests for our modules. See earlier prototype in #777.
Tests are written in `*.mcfunction` files and run using the PackTest mod. See the readme there for an explanation of the syntax in test files. This PR will be considered ready when: