Commit

All GRB examples look fine to me
ThibeauWouters committed Dec 19, 2024
1 parent 0c10084 commit d90fba9
Showing 11 changed files with 219 additions and 4 deletions.
27 changes: 27 additions & 0 deletions examples/GRB/bash.sh
@@ -0,0 +1,27 @@
#!/bin/bash -l
#Set job requirements
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -p gpu
#SBATCH -t 00:30:00
#SBATCH --gpus-per-node=1
#SBATCH --cpus-per-gpu=1
#SBATCH --mem-per-gpu=5G
#SBATCH --output=outdir_GRB170817_tophat/log.out
#SBATCH --job-name=GRB170817

now=$(date)
echo "$now"

# Loading modules
# module load 2024
# module load Python/3.10.4-GCCcore-11.3.0
conda activate /home/twouters2/miniconda3/envs/ninjax

# Display GPU name
nvidia-smi --query-gpu=name --format=csv,noheader

# Run the script
python run_GRB170817_tophat.py

echo "DONE"
Binary file modified examples/GRB/injection_gaussian/corner.png
Binary file modified examples/GRB/injection_gaussian/lightcurves.png
64 changes: 64 additions & 0 deletions examples/GRB/injection_gaussian/log.out
@@ -0,0 +1,64 @@
Thu Dec 19 17:46:39 CET 2024
NVIDIA A100-SXM4-40GB
GPU found? [cuda(id=0)]
Loaded SurrogateLightcurveModel with filters ['radio-3GHz', 'radio-6GHz', 'X-ray-1keV', 'bessellv'].
Converting error budget to dictionary.
NOTE: No detection limit is given. Putting it to infinity.
Loading and preprocessing observations in likelihood . . .
Loading and preprocessing observations in likelihood . . . DONE
INFO: Using MALA as local sampler
Setting up a single Gaussian distribution
No autotune found, use input sampler_params
Training normalizing flow
Tuning global sampler: 100%|██████████| 7/7 [01:01<00:00, 8.80s/it]
Compiling MALA body
Starting Production run
Production run: 100%|██████████| 3/3 [00:01<00:00, 1.60it/s]
Training summary
==========
inclination_EM: 0.214 +/- 0.204
log10_E0: 54.721 +/- 1.193
thetaCore: 0.215 +/- 0.127
alphaWing: 2.141 +/- 0.752
log10_n0: -1.354 +/- 1.631
p: 2.589 +/- 0.102
log10_epsilon_e: -2.107 +/- 0.648
log10_epsilon_B: -5.277 +/- 1.683
Log probability: -1550.697 +/- 12224.367
Local acceptance: 0.381 +/- 0.486
Global acceptance: 0.180 +/- 0.384
Max loss: 11.349, Min loss: 3.928
Production summary
==========
inclination_EM: 0.139 +/- 0.088
log10_E0: 55.129 +/- 0.681
thetaCore: 0.146 +/- 0.050
alphaWing: 2.176 +/- 0.725
log10_n0: -1.815 +/- 1.418
p: 2.603 +/- 0.018
log10_epsilon_e: -2.114 +/- 0.522
log10_epsilon_B: -5.498 +/- 1.494
Log probability: -12.507 +/- 1.690
Local acceptance: 0.285 +/- 0.451
Global acceptance: 0.338 +/- 0.473
Saving samples to ./injection_gaussian/results_training.npz
Saving samples to ./injection_gaussian/results_production.npz
Total runtime: 1.0 m 19.91 s
Plotting lightcurves
Plotting lightcurves . . . done
DONE
DONE

JOB STATISTICS
==============
Job ID: 9119701
Cluster: snellius
User/Group: twouters2/twouters2
State: COMPLETED (exit code 0)
Nodes: 1
Cores per node: 18
CPU Utilized: 00:01:53
CPU Efficiency: 5.81% of 00:32:24 core-walltime
Job Wall-clock time: 00:01:48
Memory Utilized: 1.54 GB
Memory Efficiency: 30.86% of 5.00 GB
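For reference, the core-walltime above is 18 cores × 00:01:48 wall-clock = 00:32:24; the 5.81% CPU efficiency is the 00:01:53 of CPU time divided by that core-walltime, and the memory efficiency is the 1.54 GB used out of the 5 GB requested per GPU. The same arithmetic applies to the job statistics of the other two runs below.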
Binary file modified examples/GRB/injection_tophat/corner.png
Binary file modified examples/GRB/injection_tophat/lightcurves.png
62 changes: 62 additions & 0 deletions examples/GRB/injection_tophat/log.out
@@ -0,0 +1,62 @@
Thu Dec 19 17:38:06 CET 2024
NVIDIA A100-SXM4-40GB
GPU found? [cuda(id=0)]
Loaded SurrogateLightcurveModel with filters ['radio-3GHz', 'radio-6GHz', 'X-ray-1keV', 'bessellv'].
Converting error budget to dictionary.
NOTE: No detection limit is given. Putting it to infinity.
Loading and preprocessing observations in likelihood . . .
Loading and preprocessing observations in likelihood . . . DONE
INFO: Using MALA as local sampler
Setting up a single Gaussian distribution
No autotune found, use input sampler_params
Training normalizing flow
Tuning global sampler: 100%|██████████| 7/7 [01:07<00:00, 9.59s/it]
Compiling MALA body
Starting Production run
Production run: 100%|██████████| 3/3 [00:01<00:00, 1.97it/s]
Training summary
==========
inclination_EM: 0.472 +/- 0.254
log10_E0: 53.771 +/- 1.152
thetaCore: 0.153 +/- 0.121
log10_n0: -1.269 +/- 1.361
p: 2.467 +/- 0.110
log10_epsilon_e: -1.212 +/- 0.529
log10_epsilon_B: -4.616 +/- 1.669
Log probability: -1434.384 +/- 10642.869
Local acceptance: 0.183 +/- 0.387
Global acceptance: 0.282 +/- 0.450
Max loss: 9.741, Min loss: 1.378
Production summary
==========
inclination_EM: 0.339 +/- 0.056
log10_E0: 54.048 +/- 0.617
thetaCore: 0.097 +/- 0.017
log10_n0: -1.707 +/- 0.918
p: 2.469 +/- 0.017
log10_epsilon_e: -1.240 +/- 0.152
log10_epsilon_B: -4.511 +/- 1.341
Log probability: -11.601 +/- 1.869
Local acceptance: 0.112 +/- 0.315
Global acceptance: 0.608 +/- 0.488
Saving samples to ./injection_tophat/results_training.npz
Saving samples to ./injection_tophat/results_production.npz
Total runtime: 1.0 m 18.61 s
Plotting lightcurves
Plotting lightcurves . . . done
DONE
DONE

JOB STATISTICS
==============
Job ID: 9119646
Cluster: snellius
User/Group: twouters2/twouters2
State: COMPLETED (exit code 0)
Nodes: 1
Cores per node: 18
CPU Utilized: 00:01:54
CPU Efficiency: 5.92% of 00:32:06 core-walltime
Job Wall-clock time: 00:01:47
Memory Utilized: 1.39 GB
Memory Efficiency: 27.77% of 5.00 GB
Binary file modified examples/GRB/outdir_GRB170817_tophat/corner.png
Binary file modified examples/GRB/outdir_GRB170817_tophat/lightcurves.png
62 changes: 62 additions & 0 deletions examples/GRB/outdir_GRB170817_tophat/log.out
@@ -0,0 +1,62 @@
Thu Dec 19 17:56:44 CET 2024
NVIDIA A100-SXM4-40GB
GPU found? [cuda(id=0)]
Loaded SurrogateLightcurveModel with filters ['radio-3GHz', 'radio-6GHz', 'X-ray-1keV', 'bessellv']
Converting error budget to dictionary.
NOTE: No detection limit is given. Putting it to infinity.
Loading and preprocessing observations in likelihood . . .
Loading and preprocessing observations in likelihood . . . DONE
INFO: Using MALA as local sampler
Setting up a single Gaussian distribution
No autotune found, use input sampler_params
Training normalizing flow
Tuning global sampler: 100%|██████████| 7/7 [01:00<00:00, 8.65s/it]
Compiling MALA body
Starting Production run
Production run: 100%|██████████| 3/3 [00:00<00:00, 4.27it/s]
Training summary
==========
inclination_EM: 0.370 +/- 0.185
log10_E0: 53.264 +/- 1.349
thetaCore: 0.227 +/- 0.068
log10_n0: -3.481 +/- 1.962
p: 2.149 +/- 0.132
log10_epsilon_e: -2.420 +/- 1.418
log10_epsilon_B: -3.904 +/- 2.005
Log probability: -465.326 +/- 3419.754
Local acceptance: 0.415 +/- 0.493
Global acceptance: 0.201 +/- 0.400
Max loss: 9.719, Min loss: 1.823
Production summary
==========
inclination_EM: 0.332 +/- 0.118
log10_E0: 53.533 +/- 1.128
thetaCore: 0.224 +/- 0.065
log10_n0: -3.972 +/- 1.801
p: 2.113 +/- 0.056
log10_epsilon_e: -2.568 +/- 1.515
log10_epsilon_B: -3.780 +/- 2.058
Log probability: -13.369 +/- 1.754
Local acceptance: 0.391 +/- 0.488
Global acceptance: 0.328 +/- 0.470
Saving samples to ./outdir_GRB170817_tophat/results_training.npz
Saving samples to ./outdir_GRB170817_tophat/results_production.npz
Total runtime: 1.0 m 10.91 s
Plotting lightcurves
Plotting lightcurves . . . done
DONE
DONE

JOB STATISTICS
==============
Job ID: 9119871
Cluster: snellius
User/Group: twouters2/twouters2
State: COMPLETED (exit code 0)
Nodes: 1
Cores per node: 18
CPU Utilized: 00:01:38
CPU Efficiency: 5.61% of 00:29:06 core-walltime
Job Wall-clock time: 00:01:37
Memory Utilized: 983.49 MB
Memory Efficiency: 19.21% of 5.00 GB
8 changes: 4 additions & 4 deletions examples/GRB/run_GRB170817_tophat.py
@@ -11,7 +11,7 @@

from fiesta.inference.lightcurve_model import AfterglowpyLightcurvemodel
from fiesta.inference.likelihood import EMLikelihood
-from fiesta.inference.prior import Uniform, Composite
+from fiesta.inference.prior import Uniform, CompositePrior
from fiesta.inference.fiesta import Fiesta
from fiesta.utils import load_event_data

@@ -67,7 +67,7 @@
##############

name = "tophat"
-model_dir = f"../trained_models/afterglowpy/{name}/"
+model_dir = f"../../lightcurve_models/afterglowpy/{name}/"
FILTERS = ["radio-3GHz", "radio-6GHz", "X-ray-1keV", "bessellv"]

model = AfterglowpyLightcurvemodel(name,
@@ -79,7 +79,7 @@
### DATA ###
############

-data = load_event_data("./data/GRB170817A.dat") # only one filter of the GRB170817A data
+data = load_event_data("../data/GRB170817A.dat") # only one filter of the GRB170817A data

#############################
### PRIORS AND LIKELIHOOD ###
@@ -105,7 +105,7 @@
# luminosity_distance
]

-prior = Composite(prior_list)
+prior = CompositePrior(prior_list)

detection_limit = None
likelihood = EMLikelihood(model,
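In short, this diff makes three changes to run_GRB170817_tophat.py: the composite prior class is now imported and constructed as CompositePrior instead of Composite, model_dir points to ../../lightcurve_models/afterglowpy/{name}/ instead of ../trained_models/afterglowpy/{name}/, and the GRB170817A event data is loaded from ../data/GRB170817A.dat instead of ./data/GRB170817A.dat.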
