feat: new benchmarking tool #9
Conversation
@evenstensberg fyi
Great work @avivkeller. @alexander-akait will look at this soon.
In the future, let's have a discussion before we start such work.
- We can't remove `compare.yml`/`measure-all-cases.yml`/etc.; it will break our site - https://webpack.github.io/benchmark/
- This repo focuses on webpack performance, not on comparing with other bundlers (but I am not opposed to doing that too)
- This repo should be merged into the webpack repo in the future and archived
What we should do, step by step:
- We have https://github.com/webpack/webpack/tree/main/test/benchmarkCases - these tests run when a developer pushes a PR (this lets us track changes right when the code is changed); here we measure performance only after the merge, which is not great for tracking. So our first step is moving all scenarios/cases/addons into the webpack benchmarks, and avoiding outdated/deprecated packages (or replacing them with something new). We use the `-long` postfix to avoid running them on PRs, so some of them will run on new PRs too; more benchmarks means better perf regression tracking
- Move all CI files to store/verify/artifacts/etc. (maybe improve/update them) and set up publishing for them (we can reuse the same page - https://webpack.github.io/benchmark/)
- Improve our benchmark runner in webpack (measure memory/CPU, allow using custom bundlers for comparison, etc.); any other features are welcome
- Use `jest-worker` or another worker solution to run multiple benchmarks in parallel and speed them up (we need to test this; maybe it is not a good idea, but if we have a lot of benchmarks, running them serially can be slow); we have a draft - webpack/webpack#19780 - see the sketch after this list
- Profiling is not always working - webpack/webpack#19866; we are writing to codspeed support about it (WIP)
- Stabilize our `watch` benchmarks - webpack/webpack#19952; as you can see, some watch benchmarks are not very stable, and we need to find a way to make them stable
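A minimal sketch of the `jest-worker` idea mentioned in the list above, assuming a hypothetical worker module `./bench-worker.js` that exports an async `runBenchmark(name)` function (neither name comes from the draft PR):

```js
// run-parallel.mjs - sketch only; the worker module path, method name,
// and benchmark names are assumptions, not the draft PR's actual code.
import { Worker } from "jest-worker";
import { fileURLToPath } from "node:url";

const workerPath = fileURLToPath(new URL("./bench-worker.js", import.meta.url));

// jest-worker exposes the worker module's exports as async methods on the pool.
const worker = new Worker(workerPath, {
  numWorkers: 4, // cap parallelism so benchmarks don't fight over CPU
  exposedMethods: ["runBenchmark"],
});

const results = await Promise.all(
  ["minimal", "large-app"].map((name) => worker.runBenchmark(name)),
);

await worker.end();
console.log(results);
```

Parallel runs share CPU, which can itself skew timing-sensitive benchmarks, so in practice this likely needs a conservative `numWorkers` value and testing, as the comment above notes.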
Rewrites Webpack's benchmarker to be more modular, allowing easier support for future bundlers. Refer to the README excerpt below for how to add/use bundlers with this benchmark.
Currently, `vite`, `webpack`, `parcel`, and `esbuild` are supported. Let me know what other bundlers to include (`rollup`? `rolldown`?). The benchmark measures output size and time taken, but I can add more (memory usage?).
Lastly, the only fixtures currently added are:
This has been adapted from the new README:
CLI
| Flag | Alias | Default |
| --- | --- | --- |
| `--bundlers` | `-b` | `["vite", "webpack"]` |
| `--metrics` | `-m` | `["build-time", "size"]` |
| `--reporter` | `-r` | `"console"` (only reporter implemented atm) |
| `--fixtures` | | `"./fixtures/*"` |
| `--verbose` | | `false` |
| `--silent` | | `false` |
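A hypothetical invocation (the package's actual bin name isn't shown in this excerpt, so `benchmark` is a placeholder, and the array-flag syntax is assumed):

```
benchmark --bundlers vite webpack --metrics build-time size --reporter console
```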
Fixtures
Fixtures are test projects used to benchmark bundlers. Each fixture should exist in `fixtures/[name]` and have a `main.js` as an entrypoint.
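For example, a hypothetical fixture named `minimal` would be laid out as:

```
fixtures/
  minimal/
    main.js   # entrypoint that each bundler builds
```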
Adding Bundlers
Adding a New Bundler
Create a new file in `src/bundlers/[bundler-name].mjs` that exports two functions:
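The excerpt omits the two function signatures, so the names below (`bundle` and `version`) are assumptions, sketched against esbuild's real API:

```js
// src/bundlers/esbuild.mjs - hypothetical shape; the runner's real export
// names aren't shown in this excerpt.
import { build, version as esbuildVersion } from "esbuild";

// Bundle the fixture's main.js entrypoint into outDir.
export async function bundle(fixturePath, outDir) {
  await build({
    entryPoints: [`${fixturePath}/main.js`],
    bundle: true,
    outdir: outDir,
    logLevel: "silent",
  });
}

// Report the bundler's version so results can be labelled.
export function version() {
  return esbuildVersion;
}
```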
Adding a New Metric
Create a new file in `src/metrics/[metric-name].mjs` that default exports a class:
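The class interface isn't shown either; a sketch assuming hypothetical `start`/`stop` hooks around each build:

```js
// src/metrics/build-time.mjs - hypothetical interface; the method names
// are assumptions, not the PR's actual contract.
export default class BuildTime {
  name = "build-time";

  // Capture a high-resolution timestamp before the bundler runs.
  start() {
    this.startedAt = performance.now();
  }

  // Return elapsed wall-clock milliseconds after the build finishes.
  stop() {
    return performance.now() - this.startedAt;
  }
}
```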
Adding a New Reporter
Create a new file in `src/reporters/[reporter-name].mjs` that default exports a function:
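Again a sketch; the shape of the results object passed to the reporter is an assumption:

```js
// src/reporters/console.mjs - hypothetical signature; the real results
// shape is defined by the runner, not shown here.
export default function report(results) {
  // Assume results maps bundler names to { metricName: value } objects.
  for (const [bundler, metrics] of Object.entries(results)) {
    console.log(bundler);
    for (const [metric, value] of Object.entries(metrics)) {
      console.log(`  ${metric}: ${value}`);
    }
  }
}
```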
Using with Git Repositories
To run a benchmark on a bundler directly from its source, install it directly from its repository, and benchmark as normal.
For example, to benchmark Webpack's bleeding edge, install it with:
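The command itself is omitted from this excerpt; npm can install straight from a GitHub repository, so presumably something like:

```
npm install webpack/webpack
```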