NOTE: This package is still a work in progress.
Command line tool for running and publishing benchmarks to a git repository.
It uses pyperf under the hood for the performance-analysis heavy lifting.
Name | Description
---|---
git | Fast, scalable, distributed revision control system. Benchmarker assumes git as the underlying VCS.
pip install benchmarker-cli
$ benchmarker --help
# TODO: show output
Prepares the repository by creating the orphan branch used to store all benchmark results.
Assumes it is run from inside an initialised git repository.
$ benchmarker init --help
Usage: benchmarker init [-v|--verbosity] [-d|--folderPath FOLDER_PATH]
[-b|--resultBranch REPO_BRANCH]
Initialise repository for benchmarking
Available options:
-v,--verbosity Level of verbosity used for the command output e.g.
-vv
-d,--folderPath FOLDER_PATH
Folder path to the benchmark result
file (default: "run/{run_id}")
-b,--resultBranch REPO_BRANCH
The repository branch that the results will be stored
on (default: "benchmarks")
-h,--help Show this help text
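The orphan branch that `init` creates can be sketched with plain git commands. This is not the tool's actual implementation, only the git mechanics it relies on, using the documented default branch name `benchmarks`:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you
git commit -q --allow-empty -m "initial commit"

# Create the orphan result branch: it shares no history with the
# current branch, so benchmark results stay out of the main log.
current=$(git rev-parse --abbrev-ref HEAD)
git checkout -q --orphan benchmarks
git rm -rfq --cached . 2>/dev/null || true   # drop any carried-over index entries
git commit -q --allow-empty -m "initialise benchmark results branch"
git checkout -q "$current"

git branch --list benchmarks   # the orphan branch now exists
```

Because the branch is an orphan, `git log benchmarks` shows only result commits, independent of the project's own history.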
Runs the commands specified in the config file.
If the repository is clean, the results are committed using the current commit as the run id.
$ benchmarker start --help
# TODO: show output
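The run-id rule above can be sketched in git terms, assuming "clean" means `git status --porcelain` prints nothing (the dirty-tree fallback below is a hypothetical placeholder, not from the docs):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you
git commit -q --allow-empty -m "initial commit"

if [ -z "$(git status --porcelain)" ]; then
  run_id=$(git rev-parse HEAD)    # clean worktree: use the current commit hash
else
  run_id="dirty-$(date +%s)"      # hypothetical fallback for a dirty worktree
fi
echo "$run_id"
```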
Compares the results of multiple benchmarks
$ benchmarker compare --help
# TODO: show output
# -e: When enabled, the exit code returned depends on the failure conditions specified in your config file.
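At its core, comparing two benchmark runs means relating their timing samples. A toy sketch (not the tool's or pyperf's actual logic, which also does statistical significance testing) of the headline number such a comparison produces:

```python
import statistics

def speedup(baseline_samples, contender_samples):
    """Ratio of mean timings: < 1.0 means the contender is faster,
    > 1.0 means it is slower than the baseline."""
    return statistics.mean(contender_samples) / statistics.mean(baseline_samples)

# Contender runs in roughly half the time of the baseline.
print(speedup([2.0, 2.2, 2.1], [1.0, 1.1, 0.9]))
```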
Commit benchmark results to repo branch
$ benchmarker commit --help
# TODO: show output
# -p: When enabled, the branch is pushed to its remote tracking target.
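In git terms, the `-p` flag amounts to pushing the result branch to its tracking target. A sketch using a local bare repository as a stand-in remote (the `benchmarks` branch name is the documented default; the rest is assumption):

```shell
set -e
work=$(mktemp -d)
git init -q --bare "$work/origin.git"          # stand-in for the real remote
git clone -q "$work/origin.git" "$work/repo"
cd "$work/repo"
git config user.email you@example.com
git config user.name you

git checkout -q --orphan benchmarks
git commit -q --allow-empty -m "benchmark results"

# Rough equivalent of `benchmarker commit -p`: push the result branch
# and set its remote tracking target in one step.
git push -q -u origin benchmarks
```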
CI Recipes for using this tool.
Easily add this to your GitHub CI workflow. See the actions repository for documentation.
Feel free to open a PR or GitHub issue. Contributions welcome!
To develop locally, clone this repository and run `. script/bootstrap` to set up dependencies.
See contributing guide.
Emmanuel Ogbizi 🤔 🎨 🚇 💻 📖 |
This section is automatically generated by tagging the all-contributors bot in a PR:
@all-contributors please add <username> for <contribution type>