@gowerc I think this catches the NOTE that CRAN was finding. It's not actually related to `\donttest` at all. It's triggered by [...]. We could eventually integrate this with another workflow.

@gravesti / @danielinteractive / @luwidmer Some notes about the changes that I've made here:
- I will disable all Stan tests in a future PR.

@gowerc you mentioned "I will disable all Stan tests in a future PR" - is this just on CRAN, or everywhere?

@luwidmer - That would just be for CRAN. I need to adjust the code and GHA workflows a bit, but I was thinking we'd have 3 tiers of tests: [...]

The main justification for running only the "minimal" tier on CRAN is that we need to get the run time below 10 minutes, which is difficult now that we don't cache the compiled model on CRAN, so the easiest option is to disable all rstan tests. This is also justifiable because rstan is only in Suggests for the package anyway.
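For concreteness, a minimal sketch of what the CRAN-tier skip could look like with testthat. The helper name and file placement are assumptions for illustration, not the package's actual code; `skip_on_cran()` and `skip_if_not_installed()` are real testthat functions:

```r
# Hypothetical helper, e.g. in tests/testthat/helper-skips.R.
# Skips any rstan-dependent test when running on CRAN, and also
# when rstan (only a Suggests dependency) is not installed.
skip_if_no_rstan <- function() {
  testthat::skip_on_cran()
  testthat::skip_if_not_installed("rstan")
}

# Usage at the top of an rstan-dependent test:
# test_that("model fits", {
#   skip_if_no_rstan()
#   ...
# })
```

`skip_on_cran()` keys off the `NOT_CRAN` environment variable, so the same tests still run in full on CI and locally.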
Works for me. For OncoBayes2 we still sample on CRAN with 10 total iterations and 2 warmup iterations ("fake sampling"), but if your runtime is dominated by compilation, then I'd skip there too.
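The "fake sampling" idea above can be sketched as follows; the object names (`stanmodel`, `stan_data`) are placeholders, not code from either package, and the `NOT_CRAN` check is the usual testthat convention:

```r
# Sketch: shrink MCMC settings to near-nothing when running on CRAN,
# so the test exercises the code path without paying for real sampling.
on_cran <- !identical(Sys.getenv("NOT_CRAN"), "true")

fit <- rstan::sampling(
  stanmodel,                              # placeholder compiled model
  data    = stan_data,                    # placeholder data list
  chains  = 1,
  iter    = if (on_cran) 10 else 2000,    # 10 total iterations on CRAN
  warmup  = if (on_cran) 2 else 1000,     # 2 warmup iterations on CRAN
  refresh = 0                             # silence progress output
)
```

As noted, this only helps when sampling dominates the runtime; it does nothing about model compilation time.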
Ok, apologies, this PR has ballooned a bit. Unfortunately, getting the unit tests to work in a more sensible and clear way required a minor overhaul of them.

@gravesti / @luwidmer / @danielinteractive - I think I am done with this now, if any of you can spare any time to skim the code again. To be honest, I'd ignore all the docker / unit test changes; I think the important stuff is just the changes to [...]. In particular I wasn't sure about the [...].

Closes #537