* update documentation of train.py and predict.py
* add new docs
* force correct model naming if emulator_label is provided
* clean hardcodings
* update docs developers
* new model cp
* fix bugs compressed parameters
* chunksize
* chunksize
* new model with running
* new docu under develop.
* update with new files to ignore
* new structure docs
* cov data for lace
* modify function to make inference cleaner
* name star parameters with _cp
* updated notebooks
* update cosmopower documentation
* Remove DESI_cov from git tracking
* ne notebooks version
* new notebooks
* add desi data to gitignore

---------

Co-authored-by: Laura Cabayol Garcia <lauracabayol@Lauras-MacBook-Pro.local>
Co-authored-by: Laura Cabayol-Garcia <lcabayol@login13.chn.perlmutter.nersc.gov>
Co-authored-by: Laura Cabayol Garcia <lauracabayol@lauras-macbook-pro.home>
1 parent e552f1f · commit 656f626

Showing 28 changed files with 1,282 additions and 390 deletions.
# PREDEFINED EMULATORS AND TRAINING SETS

## PREDEFINED EMULATORS
LaCE provides a set of predefined emulators that have been validated. These emulators are (a hypothetical loading sketch follows the list):

- Neural network emulators:
  - Gadget emulators:
    - Cabayol23: Neural network emulating the optimal P1D of Gadget simulations by fitting the coefficients of a 5th-degree polynomial. It covers scales up to 4 Mpc^{-1} and redshifts z <= 4.5.
    - Cabayol23+: Neural network emulating the optimal P1D of Gadget simulations by fitting the coefficients of a 5th-degree polynomial. It covers scales up to 4 Mpc^{-1} and redshifts z <= 4.5. Updated version compared to the Cabayol+23 paper.
    - Cabayol23_extended: Neural network emulating the optimal P1D of Gadget simulations by fitting the coefficients of a 7th-degree polynomial. It covers scales up to 8 Mpc^{-1} and redshifts z <= 4.5.
    - Cabayol23+_extended: Neural network emulating the optimal P1D of Gadget simulations by fitting the coefficients of a 7th-degree polynomial. It covers scales up to 8 Mpc^{-1} and redshifts z <= 4.5. Updated version compared to the Cabayol+23 paper.
  - Nyx emulators:
    - Nyx_v0: Neural network emulating the optimal P1D of Nyx simulations by fitting the coefficients of a 6th-degree polynomial. It covers scales up to 4 Mpc^{-1} and redshifts z <= 4.5.
    - Nyx_v0_extended: Neural network emulating the optimal P1D of Nyx simulations by fitting the coefficients of a 6th-degree polynomial. It covers scales up to 8 Mpc^{-1} and redshifts z <= 4.5.
    - Nyx_alphap: Neural network emulating the optimal P1D of Nyx simulations by fitting the coefficients of a 6th-degree polynomial. It covers scales up to 4 Mpc^{-1} and redshifts z <= 4.5.
    - Nyx_alphap_extended: Neural network emulating the optimal P1D of Nyx simulations by fitting the coefficients of a 6th-degree polynomial. It covers scales up to 8 Mpc^{-1} and redshifts z <= 4.5.
    - Nyx_alphap_cov: Neural network under testing for the Nyx_alphap emulator.

- Gaussian Process emulators:
  - Gadget emulators:
    - "Pedersen21": Gaussian process emulating the optimal P1D of Gadget simulations. Pedersen+21 paper.
    - "Pedersen23": Updated version of the Pedersen21 emulator. Pedersen+23 paper.
    - "Pedersen21_ext": Extended version of the Pedersen21 emulator.
    - "Pedersen21_ext8": Extended version of the Pedersen21 emulator, up to k = 8 Mpc^{-1}.
    - "Pedersen23_ext": Extended version of the Pedersen23 emulator.
    - "Pedersen23_ext8": Extended version of the Pedersen23 emulator, up to k = 8 Mpc^{-1}.

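For illustration, here is a minimal sketch of how one of these predefined emulators might be loaded by its label. The `NNEmulator`/`GPEmulator` import paths, the `emulator_label` and `training_set` arguments, and the `train=False` flag are assumptions about the LaCE API (suggested by the emulator-label handling mentioned in this commit) and may differ from the actual signatures.

```python
# Hypothetical sketch: load predefined emulators by label.
# Constructor arguments below are assumptions and may not match the exact LaCE API.
from lace.emulator.nn_emulator import NNEmulator
from lace.emulator.gp_emulator import GPEmulator

# Neural-network emulator, e.g. the updated Gadget emulator
nn_emu = NNEmulator(
    training_set="Cabayol23",     # default training set (see table below)
    emulator_label="Cabayol23+",  # one of the predefined labels above
    train=False,                  # load the pretrained model instead of retraining
)

# Gaussian-process emulator
gp_emu = GPEmulator(
    training_set="Pedersen21",
    emulator_label="Pedersen23",
)
```

Presumably, passing only `emulator_label` would pick up the default training set listed in the table further below.
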
## PREDEFINED TRAINING SETS

Similarly, LaCE provides a set of predefined training sets that have been used to train the emulators. Each training set corresponds to a simulation suite, a post-processing, and the addition (or not) of mean-flux rescalings. The training sets are (a loading sketch follows the list):

- "Pedersen21": Training set used in the [Pedersen+21 paper](https://arxiv.org/abs/2103.05195). Gadget simulations without mean-flux rescalings.
- "Cabayol23": Training set used in the [Cabayol+23 paper](https://arxiv.org/abs/2303.05195). Gadget simulations with mean-flux rescalings, measuring the P1D along the three principal axes of the simulation box.
- "Nyx_Oct2023": Training set using the Nyx version from October 2023.
- "Nyx_Jul2024": Training set using the Nyx version from July 2024.

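As an illustration, the snippet below sketches how these training sets might be loaded through the archive classes. The `GadgetArchive`/`NyxArchive` names, the `postproc` and `nyx_version` arguments, the `get_training_data` method, and the parameter list are assumptions about the LaCE archive API and may not match the exact signatures.

```python
# Hypothetical sketch: load predefined training sets through the archives.
# Class names, argument names, and values are assumptions about the LaCE API.
from lace.archive.gadget_archive import GadgetArchive
from lace.archive.nyx_archive import NyxArchive

# Gadget post-processings corresponding to the two Gadget training sets
gadget_p21 = GadgetArchive(postproc="Pedersen21")  # without mean-flux rescalings
gadget_c23 = GadgetArchive(postproc="Cabayol23")   # with mean-flux rescalings

# Nyx training set, selected by release (label format is an assumption)
nyx_oct23 = NyxArchive(nyx_version="Oct2023")

# Training points used to fit an emulator, keyed by the emulator parameters
training_data = gadget_c23.get_training_data(
    emu_params=["Delta2_p", "n_p", "mF", "sigT_Mpc", "gamma", "kF_Mpc"]
)
```
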
## CONNECTION BETWEEN PREDEFINED EMULATORS AND TRAINING SETS
The following table shows the default training set for each predefined emulator.

| Emulator | Training Set | Simulation | Type | Description |
|----------|--------------|------------|------|-------------|
| Cabayol23 | Cabayol23 | Gadget | NN | Neural network emulator trained on Gadget simulations with mean-flux rescalings |
| Cabayol23+ | Cabayol23 | Gadget | NN | Updated version of the Cabayol23 emulator |
| Cabayol23_extended | Cabayol23 | Gadget | NN | Extended version of the Cabayol23 emulator (k up to 8 Mpc^-1) |
| Cabayol23+_extended | Cabayol23 | Gadget | NN | Extended version of the Cabayol23+ emulator (k up to 8 Mpc^-1) |
| Nyx_v0 | Nyx_Oct2023 | Nyx | NN | Neural network emulator trained on Nyx simulations |
| Nyx_v0_extended | Nyx_Oct2023 | Nyx | NN | Extended version of the Nyx_v0 emulator (k up to 8 Mpc^-1) |
| Nyx_alphap | Nyx_Oct2023 | Nyx | NN | Neural network emulator trained on updated Nyx simulations |
| Nyx_alphap_extended | Nyx_Oct2023 | Nyx | NN | Extended version of the Nyx_alphap emulator (k up to 8 Mpc^-1) |
| Nyx_alphap_cov | Nyx_Jul2024 | Nyx | NN | Testing version of the Nyx_alphap emulator |
| Pedersen21 | Pedersen21 | Gadget | GP | GP emulator trained on Gadget simulations without mean-flux rescalings |
| Pedersen23 | Pedersen21 | Gadget | GP | Updated version of the Pedersen21 GP emulator |
| Pedersen21_ext | Pedersen21 | Gadget | GP | Extended version of the Pedersen21 GP emulator |
| Pedersen21_ext8 | Pedersen21 | Gadget | GP | Extended version of the Pedersen21 GP emulator (k up to 8 Mpc^-1) |
| Pedersen23_ext | Pedersen21 | Gadget | GP | Extended version of the Pedersen23 GP emulator |
| Pedersen23_ext8 | Pedersen21 | Gadget | GP | Extended version of the Pedersen23 GP emulator (k up to 8 Mpc^-1) |

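As a usage illustration, the sketch below shows how an emulator loaded with its default training set might be evaluated to predict a P1D. The `emulate_p1d_Mpc` method and the parameter names in the `model` dictionary are assumptions about the LaCE API, and the numerical values are placeholders only.

```python
import numpy as np

# Hypothetical sketch: evaluate a predefined emulator at one point in parameter
# space. Method name, parameter names, and values are assumptions/placeholders.
from lace.emulator.nn_emulator import NNEmulator

emu = NNEmulator(emulator_label="Cabayol23+", train=False)  # default training set assumed

k_Mpc = np.logspace(np.log10(0.1), np.log10(4.0), 50)  # within the 4 Mpc^{-1} range

model = {
    "Delta2_p": 0.35,  # amplitude of the linear power at the pivot scale
    "n_p": -2.30,      # slope of the linear power
    "mF": 0.66,        # mean transmitted flux
    "sigT_Mpc": 0.13,  # thermal broadening scale
    "gamma": 1.50,     # slope of the temperature-density relation
    "kF_Mpc": 10.5,    # pressure smoothing scale
}

p1d_Mpc = emu.emulate_p1d_Mpc(model=model, k_Mpc=k_Mpc)
```
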
# AVAILABLE SIMULATIONS

This section lists the simulations available in the archives.

## Gadget simulations
The Gadget suite contains 30 training simulations, named "mpg_{x}", where x is an integer from 0 to 29.
Additionally, there are 7 test simulations (a sketch of how to access them follows the list):

- "mpg_central": The simulation parameters are at the center of the parameter space.
- "mpg_neutrinos": The simulation contains massive neutrinos.
- "mpg_running": The simulation has a non-zero running of the spectral index.
- "mpg_growth": The growth factor of the simulation differs from that of the training set.
- "mpg_reio": The reionization history differs from that of the training set.
- "mpg_seed": Identical to the central simulation but with different initial conditions, meant to test the impact of cosmic variance.
- "mpg_curved": The simulation has a different curvature power spectrum from that of the training set. | ||

Information about the simulation parameters can be found in [Pedersen+21](https://arxiv.org/abs/2103.05195).

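For illustration, here is a minimal sketch of how one of these test simulations might be retrieved from the archive. The `GadgetArchive` class, the `postproc` argument, and the `get_testing_data` method are assumptions about the LaCE archive API.

```python
# Hypothetical sketch: retrieve a Gadget test simulation from the archive.
# Class, argument, and method names are assumptions about the LaCE API.
from lace.archive.gadget_archive import GadgetArchive

archive = GadgetArchive(postproc="Cabayol23")
central = archive.get_testing_data("mpg_central")  # snapshots of the central test sim

# Each entry would correspond to one snapshot/redshift of the test simulation
print(len(central), sorted(central[0].keys()))
```
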
## Nyx simulations

The Nyx simulation suite contains 18 training simulations, named "nyx_{x}", where x is an integer from 0 to 17. Additionally, there are 2 test simulations:

- "nyx_central": The simulation parameters are at the center of the parameter space.
- "nyx_seed": Identical to the central simulation but with different initial conditions, meant to test the impact of cosmic variance.

Information about the simulation parameters can be found in [TBD](..).
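
Similarly, a hypothetical sketch for the Nyx archive; the `NyxArchive` class, the `nyx_version` argument, and `get_testing_data` are assumptions about the API.

```python
# Hypothetical sketch: retrieve a Nyx test simulation from the archive.
from lace.archive.nyx_archive import NyxArchive

archive = NyxArchive(nyx_version="Oct2023")        # version label is an assumption
central = archive.get_testing_data("nyx_central")  # snapshots of the central test sim
```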