It would be really useful to track the overall pytest execution time as a statistic on the repo. asv can't capture this directly today, but if we could create some kind of mock metric that is plotted alongside the other benchmarks, we could at least notice when there's a significant regression and potentially track down the culprit commit.
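One possible route (a rough sketch, not a worked-out design): asv does support `track_`-prefixed benchmarks that report an arbitrary numeric value, which gets plotted like any other metric. We could abuse that to time a full pytest run in a subprocess. The class name, test path, and timeout below are placeholders and would need to match the actual benchmark suite layout.

```python
import subprocess
import time


class TrackPytestSuite:
    """Mock asv metric: wall-clock time of a full pytest run."""

    # Placeholder settings; adjust to the real repo layout.
    timeout = 3600       # give the whole test suite room to finish
    unit = "seconds"     # asv uses this for axis labels on the plot

    def track_pytest_runtime(self):
        # Run the test suite in a subprocess and report elapsed wall time.
        # asv plots whatever number a track_* benchmark returns.
        start = time.perf_counter()
        subprocess.run(["pytest", "tests"], check=True, capture_output=True)
        return time.perf_counter() - start
```

The obvious caveat is that this measures whatever machine the benchmarks run on, so it would only be meaningful for flagging large regressions rather than small drifts.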
This came up while addressing astronomy-commons/lsdb#194