
Fix faulty validation for maintained forward model objectives. #1275

Workflow file for this run

name: Benchmark Adaptive Localization

on:
  push:
    branches:
      - main

permissions:
  # deployments permission to deploy GitHub pages website
  deployments: write
  # contents permission to update benchmark contents in gh-pages branch
  contents: write

env:
  ERT_SHOW_BACKTRACE: 1
  ECL_SKIP_SIGNAL: 1
  UV_SYSTEM_PYTHON: 1

jobs:
  benchmark:
    name: Run pytest-benchmark benchmark example
    runs-on: ubuntu-latest
    timeout-minutes: 60
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
          submodules: true
          lfs: true

      - uses: actions/setup-python@v5
        id: setup_python
        with:
          # pin this to maintain comparable benchmark results
          python-version: "3.10"

      - name: Install uv
        uses: astral-sh/setup-uv@v3

      - name: Install ert with dev-deps
        run: |
          uv pip install ".[dev]"

      - name: Run benchmark
        run: |
          pytest tests/ert/performance_tests/test_analysis.py::test_and_benchmark_adaptive_localization_with_fields --benchmark-json output.json

      - name: Store benchmark result
        uses: benchmark-action/github-action-benchmark@v1
        with:
          name: Python Benchmark with pytest-benchmark
          tool: 'pytest'
          output-file-path: output.json
          github-token: ${{ secrets.GITHUB_TOKEN }}
          auto-push: true
          max-items-in-chart: 30
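The "Run benchmark" step writes pytest-benchmark's results to `output.json`, which the `github-action-benchmark` step then reads via `output-file-path`. As a minimal sketch of what that file looks like and how it might be consumed, the snippet below builds a synthetic stand-in (the `benchmarks` list of entries with a `stats` dict mirrors pytest-benchmark's JSON layout; the exact field values are invented for illustration) and reads it back:

```python
import json
from pathlib import Path

# Synthetic stand-in for the output.json produced by the
# "Run benchmark" step. The shape (a "benchmarks" list whose
# entries carry a "stats" dict) follows pytest-benchmark's JSON
# layout; the values here are made up for illustration.
sample = {
    "benchmarks": [
        {
            "name": "test_and_benchmark_adaptive_localization_with_fields",
            "stats": {"mean": 12.3, "stddev": 0.4, "rounds": 5},
        }
    ]
}
Path("output.json").write_text(json.dumps(sample))

# Read the file back the way a reporting step might,
# printing one line per benchmark with its mean runtime.
data = json.loads(Path("output.json").read_text())
for bench in data["benchmarks"]:
    print(f"{bench['name']}: mean={bench['stats']['mean']}s")
```

The charting action tracks these per-benchmark stats across pushes to `main`, which is why the workflow pins the Python version: changing interpreters would make the stored means incomparable.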