@avivkeller commented Sep 18, 2025:

Rewrites Webpack's benchmarker to be more modular, allowing easier support for future bundlers. Refer to the README excerpt below for how to add/use bundlers with this benchmark.

Currently, vite, webpack, parcel, and esbuild are supported. Let me know which other bundlers to include (rollup? rolldown?).

The benchmark measures output size and time taken, but I can add more metrics (memory usage?).

Lastly, the only fixtures currently added are:

  • Entire ThreeJS
  • Minimal JS file

This has been adapted from the new README:

CLI

| Option | Short | Description | Default |
| ------ | ----- | ----------- | ------- |
| `--bundlers` | `-b` | Select bundlers to benchmark | `["vite", "webpack"]` |
| `--metrics` | `-m` | Choose metrics to collect | `["build-time", "size"]` |
| `--reporter` | `-r` | Choose output format | `"console"` (only reporter implemented so far) |
| `--fixtures` | | Glob pattern for fixture directories | `"./fixtures/*"` |
| `--verbose` | | Enable verbose logging | `false` |
| `--silent` | | Suppress all output except errors | `false` |
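
For instance, a run comparing three bundlers on both metrics might look like this (the entry command and multi-value flag syntax are illustrative assumptions, not taken from the PR):

```sh
# Hypothetical invocation; the script name and flag syntax are assumed.
node src/index.mjs --bundlers vite webpack esbuild --metrics build-time size --reporter console
```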

Fixtures

Fixtures are test projects used to benchmark bundlers.

Each fixture should exist in `fixtures/[name]` and have a `main.js` as its entrypoint.
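
The "Minimal JS file" fixture mentioned above could be as small as the following (illustrative contents only; the actual fixture in the PR may differ):

```js
// fixtures/minimal/main.js — hypothetical contents for the minimal fixture.
export const answer = 42;
console.log(answer);
```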


Adding a New Bundler

Create a new file in `src/bundlers/[bundler-name].mjs` that exports two functions:

```js
export async function build(fixture) { ... }
export async function clean(fixture) { ... }
```
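
For example, here is a minimal sketch of an esbuild bundler module. It assumes the `fixture` argument exposes a `path` property and that output goes to a `dist` directory inside the fixture; neither detail is confirmed by the PR:

```js
// src/bundlers/esbuild.mjs — a sketch only; `fixture.path` and the
// dist output directory are assumptions, not the PR's actual contract.
import * as esbuild from "esbuild";
import { rm } from "node:fs/promises";
import path from "node:path";

export async function build(fixture) {
  // Bundle the fixture's main.js entrypoint into dist/.
  await esbuild.build({
    entryPoints: [path.join(fixture.path, "main.js")],
    bundle: true,
    minify: true,
    outdir: path.join(fixture.path, "dist"),
  });
}

export async function clean(fixture) {
  // Remove previous output so each run starts from a clean slate.
  await rm(path.join(fixture.path, "dist"), { recursive: true, force: true });
}
```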

Adding a New Metric

Create a new file in `src/metrics/[metric-name].mjs` that default-exports a class:

```js
export default class {
  name = "My Metric Name"; // Required

  start(options) { ... } // Optional

  stop(options) { ... } // Optional

  collect(options) { ... } // Required
}
```
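
As an illustration, a build-time metric matching this interface could be sketched like so (the `options` argument is accepted but unused here, since its shape isn't specified in the PR):

```js
// src/metrics/build-time.mjs — a sketch of the interface above; the
// contents of `options` are assumed to be irrelevant for this metric.
export default class {
  name = "Build Time";

  start(options) {
    // High-resolution timestamp taken just before the bundler runs.
    this.startTime = process.hrtime.bigint();
  }

  stop(options) {
    this.endTime = process.hrtime.bigint();
  }

  collect(options) {
    // Elapsed time in milliseconds between start() and stop().
    return Number(this.endTime - this.startTime) / 1e6;
  }
}
```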

Adding a New Reporter

Create a new file in `src/reporters/[reporter-name].mjs` that default-exports a function:

```js
export default async function report(results, options) { ... }
```
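
For example, a hypothetical JSON reporter could look like this (it assumes `results` is JSON-serializable and that `options` carries the `--silent` flag; both are guesses at the PR's internals):

```js
// src/reporters/json.mjs — a hypothetical reporter; the shapes of
// `results` and `options` are assumed, not taken from the PR.
export default async function report(results, options) {
  const output = JSON.stringify(results, null, 2);
  if (!options?.silent) {
    console.log(output);
  }
}
```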

Using with Git Repositories

To run a benchmark on a bundler directly from its source, install it from its repository and benchmark as normal.

For example, to benchmark Webpack's bleeding edge, install it with:

```sh
yarn add webpack@https://github.com/webpack/webpack
```

@avivkeller marked this pull request as ready for review September 19, 2025 00:02
@avivkeller changed the title from "[WIP] New benchmarking tool" to "feat: new benchmarking tool" Sep 19, 2025
@avivkeller commented:
@evenstensberg fyi

@evenstensberg commented:
Great work @avivkeller. @alexander-akait will look at this soon.

@alexander-akait left a comment:

In the future, let's have a discussion before we start such work.

  1. We can't remove compare.yml/measure-all-cases.yml/etc.; that would break our site: https://webpack.github.io/benchmark/
  2. This repo focuses on webpack performance, not on comparing against other bundlers (but I am not afraid of doing that too)
  3. This repo should eventually be merged into the webpack repo and archived

What we should do, step by step:

  1. We have https://github.com/webpack/webpack/tree/main/test/benchmarkCases. Those tests run when a developer pushes a PR (which lets us track changes at the moment the code changes); here we measure performance only after the merge, which is not great for tracking.

So our first step is to move all scenarios/cases/addons into the webpack benchmarks, avoiding outdated/deprecated packages (or replacing them with something new).

We use the `-long` postfix to avoid running those on PRs, so some of the benchmarks will run on new PRs too; more benchmarks means better perf regression tracking.

  2. Move all CI files to store/verify artifacts/etc. (maybe improve/update them) and set up publishing for them (we can reuse the same page: https://webpack.github.io/benchmark/)

  3. Improve our benchmark runner in webpack (measure memory/CPU, allow using custom bundlers for comparison, etc.); any other features are welcome

  4. Use jest-worker or another worker solution to run multiple benchmarks in parallel to speed them up (we need to test this; maybe it is not a good idea, but if we end up with a lot of benchmarks they can be slow). We have a draft: webpack/webpack#19780

  5. Profiling is not always working (webpack/webpack#19866); we are writing to codspeed support about it (WIP)

  6. Stabilize our watch benchmarks (webpack/webpack#19952); as you can see, some watch benchmarks are not very stable, and we need to find a way to make them stable
