param_names in model-specific closure test #347
Conversation
…can be listed by name in the runcard for model-specific closure test
Codecov Report ✅ All modified and coverable lines are covered by tests.

```diff
@@ Coverage Diff @@
##             main     #347      +/-   ##
==========================================
+ Coverage   94.52%   94.55%   +0.02%
==========================================
  Files          28       28
  Lines        1297     1304       +7
==========================================
+ Hits         1226     1233       +7
  Misses         71       71
```
comane left a comment:
Thanks for the addition, I have a couple of comments.
colibri/config.py (outdated):

```python
    Checks that the required keys are included in closure_test_model_settings
    and constructs a dictionary for the settings.
    """
    known_keys = {"model", "fitted_flavours", "parameters"}
```
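For context, a minimal sketch of what such a strict parse rule could look like (the function name and error handling here are illustrative assumptions, not the actual colibri implementation):

```python
def parse_closure_test_model_settings(settings: dict) -> dict:
    """Check that only known keys appear in closure_test_model_settings
    and return a plain dictionary of the settings.

    Illustrative sketch only; the function name and the choice of
    ValueError are assumptions, not colibri's actual API.
    """
    known_keys = {"model", "fitted_flavours", "parameters"}
    unknown = set(settings) - known_keys
    if unknown:
        raise ValueError(
            f"Unknown keys in closure_test_model_settings: {sorted(unknown)}"
        )
    return dict(settings)
```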
In the runcard I have been using to run model-specific closure tests I have the following:
```yaml
closure_test_model_settings:
  model: les_houches_example
  fitted_flavours: [\Sigma, g, V, V3]
```
So that's why I thought fitted_flavours was a necessary key in this case. If it is only optional, I can remove it from the parse rule.
But we have also removed fitted_flavours from the Les Houches PDF. Also, I am a bit confused by this: does it allow using wmin? It seems to allow only these three keys now, and for wmin one has to specify wmin_settings. @comane
Why would you need to specify wmin_settings? I am a bit confused, since this is independent of the closure test: it is a key that is needed for any wmin fit, no?
I was also looking to see whether I could find a wmin model-specific closure test runcard, but didn't; do you have one, perhaps?
There was an example in the PR. You have to specify it because you need to specify the settings to build the PDF.
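A hedged sketch of how such a runcard might be laid out, assuming wmin_settings is a top-level key consumed by any wmin fit; the contents of wmin_settings are not shown in this thread, so they are left elided here:

```yaml
# Hypothetical layout, not a runcard from this PR.
closure_test_model_settings:
  model: wmin
  parameters: [...]
wmin_settings:
  # settings needed to build the wmin PDF (not shown in this thread)
```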
I mean, the point of this model-specific closure test is that you can build PDFs from the installed models, so you have to be free to specify everything that is needed to build them, i.e. what the constructor of the model needs. See this:
Line 494 in bab2c97
…_model_pdf and allow for any keys, as long as parameters and model are there. Adapt unit test accordingly
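The commit message above describes the relaxed rule: only parameters and model are required, and any other key is passed through for the model constructor. A minimal sketch of that adapted validation, with assumed names (not the actual colibri code):

```python
def parse_closure_test_model_settings(settings: dict) -> dict:
    """Require only 'model' and 'parameters'; pass any other keys through
    unchanged, so whatever the model constructor needs can be supplied.

    Sketch under assumed names; error handling is illustrative.
    """
    required = {"model", "parameters"}
    missing = required - set(settings)
    if missing:
        raise ValueError(
            f"Missing required keys in closure_test_model_settings: {sorted(missing)}"
        )
    return dict(settings)
```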
Make sure that it is possible to include the parameter names in any order in the runcard when running a model-specific closure test.
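One way to make the runcard order-independent is to accept the parameters as a name-to-value mapping and reorder the values to the model's canonical order before fitting. A minimal sketch, assuming the model exposes a param_names list (the helper name and signature are assumptions):

```python
def order_parameters(param_names, runcard_params):
    """Reorder runcard parameter values to match the model's canonical
    parameter order, so the runcard may list them in any order.

    param_names: canonical order, assumed to be exposed by the model.
    runcard_params: {name: value} mapping parsed from the runcard.
    """
    missing = set(param_names) - set(runcard_params)
    if missing:
        raise ValueError(f"Missing parameters: {sorted(missing)}")
    return [runcard_params[name] for name in param_names]
```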