-
I agree. It's not good to be changing 20k lines with every PR, for multiple reasons. Thinking about it, we could change the regression tests so that you can pass in a path to some old results (with an env var or something), and then we could drop the results from the repo. We can do some trickery with our CI workflows to first generate the old results from whatever version of MUSE is on the commit the PR is based on (e.g. the tip of …
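Something along these lines is what I have in mind — just a rough sketch, where the env var names and the CSV layout are made up for illustration, not anything MUSE currently has:

```python
"""Rough sketch only: MUSE_REGRESSION_BASELINE / MUSE_REGRESSION_NEW are invented
names. In the real tests the "new" results would come from running the example
models, and CI would generate the baseline from the PR's base commit."""
import os
from pathlib import Path

import pandas as pd
import pytest

BASELINE_DIR = os.environ.get("MUSE_REGRESSION_BASELINE")
NEW_RESULTS_DIR = os.environ.get("MUSE_REGRESSION_NEW")


def assert_matches_baseline(results_dir: Path, baseline_dir: Path) -> None:
    """Compare every CSV under results_dir with the matching file in baseline_dir."""
    for new_file in sorted(results_dir.glob("**/*.csv")):
        old_file = baseline_dir / new_file.relative_to(results_dir)
        new = pd.read_csv(new_file)
        old = pd.read_csv(old_file)
        # Tolerant comparison, since results differ slightly between OSes.
        pd.testing.assert_frame_equal(new, old, check_exact=False, rtol=1e-5)


@pytest.mark.skipif(
    BASELINE_DIR is None or NEW_RESULTS_DIR is None,
    reason="baseline results not supplied",
)
def test_results_match_baseline() -> None:
    assert_matches_baseline(Path(NEW_RESULTS_DIR), Path(BASELINE_DIR))
```

The tests would then only need the baseline directory to exist at run time, rather than having it committed to the repo.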
-
I'm going to close this. The outputs are now much smaller than they once were, so I'm less worried about the repository ballooning. I also find it quite useful to commit the results files, as it's easy to see exactly what's changed in the diffs, which you wouldn't get with a checksum.
-
I think it's worth thinking again about which files we're committing to the repo.
Currently we're committing all results files from all the example/tutorial models. This means that every time any of these models change, we get a whole load of changes (currently 25,000 additions and deletions in #349), and I don't think it's sustainable to keep committing all these changes to the repo. It's already 1.5 GB and growing.
As far as I can tell, the only reason that we need to commit the results files is for the regression tests. Would it not be better to create some kind of checksum? We'd have to be a bit careful because we often get subtly different results on different operating systems, so we'd need the checksum to be tolerant to some degree of error, but there must be some solution that doesn't involve committing the full results files...
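To make that concrete, here's the sort of thing I'm imagining — just a sketch, with the rounding precision and CSV handling made up: quantise the numeric columns before hashing, so that tiny cross-platform differences don't change the digest.

```python
"""Illustrative only: round float columns before hashing so small cross-platform
numerical differences produce the same digest. The precision (6 decimal places)
and the choice of SHA-256 are arbitrary here."""
import hashlib

import pandas as pd


def results_digest(csv_path: str, decimals: int = 6) -> str:
    df = pd.read_csv(csv_path)
    # Round floats so values differing only in the last few bits hash identically.
    float_cols = df.select_dtypes(include=["float"]).columns
    df[float_cols] = df[float_cols].round(decimals)
    payload = df.to_csv(index=False).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()
```

The obvious catch is that values sitting right on a rounding boundary could still flip the digest, so a direct tolerant comparison against some baseline might end up being more robust than a true checksum — but either way we'd avoid committing the full results files.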
@dalonsoa @alexdewar