👷 Setup benchmark workflow with pytest-codspeed #1

Workflow file for this run

# Run performance benchmarks
#
# Continuous benchmarking using pytest-codspeed. Measures the execution speed
# of tests marked with the @pytest.mark.benchmark decorator.

name: Benchmarks

on:
  # Run on pushes to the main branch
  push:
    branches: [ main ]
  # Run on pull requests
  pull_request:
    types: [ opened, reopened, synchronize ]
  # `workflow_dispatch` allows CodSpeed to trigger backtest
  # performance analysis in order to generate initial data.
  workflow_dispatch:
  release:
    types:
      - published

jobs:
  benchmarks:
    runs-on: ubuntu-22.04
    defaults:
      run:
        shell: bash -l {0}

    steps:
      # Checkout current git repository
      - name: Checkout
        uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1

      # Setup Python interpreter
      - uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # v5.0.0
        with:
          python-version: '3.12'

      # Build binary distribution wheel
      - name: Build wheels
        uses: PyO3/maturin-action@60d11847b29f81ca5375519a8eb33cc336ba4bfa # v1.41.0
        with:
          target: x86_64
          args: --release --out dist --find-interpreter
          sccache: 'true'
          manylinux: auto

      # Install the package that we want to test
      - name: Install the package
        run: |
          set -e
          python -m pip install 'cog3pio[tests]' --find-links dist --force-reinstall
          python -m pip list
      # Run the benchmark tests
      - name: Run benchmarks
        uses: CodSpeedHQ/action@2e04019f4572c19684929a755da499f19a00b25b # v2.2.1
        with:
          run: |
            python -m pip show cog3pio
            python -m pytest --verbose --codspeed
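For reference, a test only needs the @pytest.mark.benchmark marker mentioned in the workflow's header comment to be picked up by this benchmark run. A minimal sketch of such a test is below; the fibonacci workload is a placeholder chosen for illustration and is not part of cog3pio:

import pytest


def fibonacci(n: int) -> int:
    # Deliberately naive recursive implementation, used only as a stand-in workload.
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)


@pytest.mark.benchmark
def test_fibonacci_speed():
    # With pytest-codspeed installed and pytest invoked with --codspeed, the whole
    # test body is measured; the assert keeps the test meaningful on its own.
    assert fibonacci(20) == 6765

When this runs inside the CodSpeedHQ/action step above, the collected measurements are uploaded to CodSpeed, which can then report performance changes on the pull request relative to the main branch.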