Project: Lecture Performance Benchmarking & Monitoring
Motivation
As our lecture repositories evolve — with code changes, software upgrades in environment.yml, and new releases (via publish-* tags) — we currently have no systematic way to track how these changes affect build and execution performance over time.
We need a monitoring system that can:
- Track notebook execution times across releases and environment changes
- Correlate performance changes with specific code commits, `environment.yml` updates, and `publish-*` releases
- Detect regressions early when software upgrades cause slowdowns
- Provide historical trends for each lecture across all lecture repositories
Scope
This project will serve all lecture repositories, including:
- lecture-python-intro
- lecture-python-programming.myst
- lecture-python-advanced.myst
- lecture-jax
- lecture-stats
- lecture-tools-techniques
- lecture-dynamics
- and others
Proposed Approach
Create a new repository, `lecture-benchmark`, that houses:
- A benchmarking tool (possibly integrated with jupyter-book as a Sphinx extension, or a standalone tool) that captures per-notebook execution metrics during builds
- A storage system for benchmark results (git-friendly format like JSON/CSV, or a lightweight database)
- Reporting & visualization — either as part of the jupyter-book build (benchmark pages) or as a standalone dashboard
- GitHub Actions integration — automatically run benchmarks on `publish-*` releases and `environment.yml` changes
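As a sketch of what the per-notebook data collection could look like: the snippet below times each code cell and returns a metrics record. The function name and record shape are assumptions, and `exec()` stands in for real kernel execution (an actual tool would drive a Jupyter kernel, e.g. via `nbclient`):

```python
import time

def benchmark_cells(cells, env=None):
    """Execute code cells (given as source strings) and record per-cell
    wall-clock time plus a total for the notebook.

    exec() is a stand-in here; a real implementation would execute each
    cell through a Jupyter kernel so imports, magics, and display work.
    """
    env = env if env is not None else {}
    timings = []
    start = time.perf_counter()
    for i, src in enumerate(cells):
        t0 = time.perf_counter()
        exec(src, env)  # stand-in for kernel execution of one cell
        timings.append({"cell": i, "seconds": time.perf_counter() - t0})
    return {"total_seconds": time.perf_counter() - start, "cells": timings}

# Two toy "cells" sharing one namespace, as cells in a notebook would.
result = benchmark_cells(["x = sum(range(10000))", "y = x * 2"])
```

A Sphinx-extension variant would hook the same timing logic into jupyter-book's execution phase instead of running cells itself.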
Key Design Decisions
- Integration vs standalone: Should we build a Sphinx/jupyter-book extension (enabling benchmark result pages in the built lectures) or a separate CLI tool?
- Storage format: Git-friendly flat files (JSON/CSV) vs lightweight DB (SQLite)?
- Metrics to capture: Execution time, memory usage, cell-level timings, output sizes?
- Triggering: On every cache build? On publish only? On environment.yml changes?
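On the storage-format question, one git-friendly option is an append-only JSON Lines file with one record per (lecture, run). The field names below are illustrative, not a settled schema, and `io.StringIO` stands in for a `benchmarks.jsonl` file on disk:

```python
import io
import json

# Hypothetical record: one JSON object per benchmark run, keyed by
# commit SHA, release tag, and a hash of environment.yml so results
# can be correlated with code and environment changes.
record = {
    "lecture": "aiyagari",
    "repo": "lecture-python-advanced.myst",
    "commit": "0123abc",
    "tag": "publish-2024-06-01",
    "env_hash": "d41d8cd9",
    "total_seconds": 182.4,
    "cells": [{"cell": 0, "seconds": 1.7}],
}

buf = io.StringIO()  # stands in for an on-disk benchmarks.jsonl
buf.write(json.dumps(record) + "\n")

# Reading the history back is one json.loads per line.
buf.seek(0)
rows = [json.loads(line) for line in buf]
```

JSON Lines appends cleanly in CI and diffs reasonably in git; a SQLite file would make ad-hoc queries easier but produces opaque binary diffs.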
Related Work
The existing benchmarks/ repository contains hardware profiling and JAX-specific benchmarks. This project would be complementary — focused on lecture-level performance tracking over time rather than micro-benchmarks.
Next Steps
- Create `tool-lecture-benchmark` repository with a detailed `PLAN.md`
- Prototype the data collection approach
- Define the storage schema
- Build integration with existing CI/CD workflows
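For the regression-detection piece of the CI integration, a minimal comparison of each lecture's latest run against the median of its history might look like this (the 25% threshold and the data shapes are assumptions for the sketch):

```python
from statistics import median

def detect_regressions(history, latest, threshold=1.25):
    """Flag lectures whose latest total time exceeds threshold x the
    median of prior runs.

    history: dict mapping lecture name -> list of past total_seconds
    latest:  dict mapping lecture name -> latest total_seconds
    Lectures with no history are skipped (nothing to compare against).
    """
    flagged = {}
    for lecture, seconds in latest.items():
        prior = history.get(lecture, [])
        if not prior:
            continue
        baseline = median(prior)
        if seconds > threshold * baseline:
            flagged[lecture] = (baseline, seconds)
    return flagged

# Illustrative data: "aiyagari" jumps from ~100s to 150s and is flagged;
# "new-lecture" has no history, so it is skipped.
flags = detect_regressions(
    {"aiyagari": [100.0, 102.0, 98.0]},
    {"aiyagari": 150.0, "new-lecture": 10.0},
)
```

In CI this check could run after each `publish-*` build and fail (or open an issue) when `flags` is non-empty.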