Validation of 8GeV Signal #1215
Conversation
replace the old ntuplizer with one that puts data in the event bus. The old one had a better energy calculation, but the new one adds the info to the event bus and extracts vertex material information for us
use mA=10MeV and the template simulation from the Biasing.target module; use the init.sh script infrastructure to generate the reference library before attempting the simulation
not updating labels for material/elements but that's how it goes
- move tick labels to bin centers so the number of labels equals the number of bins (not the number of bin edges)
- update hcal and photonuclear modules to conform to this change, removing the trailing empty label present in those tick-label lists
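The tick-label change above can be sketched as follows. This is a minimal illustration (not the actual Validation module code): for N bins there are N+1 edges, so labels placed at bin centers need exactly N entries, which is why the trailing empty label could be dropped.

```python
import numpy as np

# Hypothetical bin edges for an energy histogram (units illustrative).
edges = np.array([0.0, 2.0, 4.0, 6.0, 8.0])

# One center per bin: N centers for N+1 edges.
centers = (edges[:-1] + edges[1:]) / 2
tick_labels = [f"{c:.1f}" for c in centers]

# With matplotlib one would then do, e.g.:
#   ax.set_xticks(centers)
#   ax.set_xticklabels(tick_labels)
```

Note there is no trailing empty string in `tick_labels`: the list length matches the bin count by construction.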
this does not affect the distribution shapes greatly since the weights are extremely close to exactly 1/B within each sample
The PR Validations are passing except for the new signal validation sample, which is failing at the comparison step since there is no gold to compare against.
At the last SW dev meeting, I was requested to investigate the cross section calculation at the new beam energy as well. The good news is that I've already extensively investigated the xsec calculation. The bad news is that it is really complicated! I have a standing issue in SimCore related to this where I'm putting plots and the code/data I've used to make them. Specifically, the tar-ball linked there has the data files holding the total xsec as estimated by MadGraph/MadEvent, which I am using as the source of "truth".

TLDR: The ratio of the cross section used within our simulation and the cross section estimated by MG/ME is roughly constant over the energy range we care about. The "constant-ness" of this ratio gets worse as the dark photon mass grows, but I don't think it has a large enough effect to matter at this point.

Long Answer: In the linked SimCore issue, I compare a "normalized" cross section calculated by the different cross section methods available in G4DB to a "normalized" cross section estimated by taking the mean of several MG/ME runs. The normalization is done to remove overall-scale effects that can be easily handled when calculating the total expected rate at analysis time. Currently, we use the G4DB default xsec method, which uses "Full" for dark photon masses below 25MeV and "Improved" for masses above. Looking at the plots and focusing on our energy range of interest in the insets, these xsec calculations are roughly flat when compared to MG/ME (emphasis on roughly). While we could spend time optimizing this xsec calculation in order to more perfectly align with the MG/ME shape, I don't think that has enough benefit. I believe if we want the xsec calculation to match MG/ME we should simply import the MG/ME xsec and do an interpolation of it within G4DB rather than trying to tweak/hack a WW approximation. This would require some more development, and this hurdle combined with the fact that the xsec is already kinda close causes me to conclude we are good (for now). Perhaps in the future it will be worth it to update G4DB, or G4DB will be superseded by another generator like Pythia.
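The "import the MG/ME xsec and interpolate" idea can be sketched in a few lines. This is a hypothetical illustration, not G4DB code: the energy grid and cross-section values below are made up, and a real implementation would live in C++ inside G4DB and read the tabulated MG/ME data files mentioned above.

```python
import numpy as np

# Hypothetical MG/ME total-xsec table: incident energy [GeV] -> xsec
# (arbitrary units; real values would come from the MG/ME data files).
mgme_energy = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
mgme_xsec = np.array([1.80, 1.20, 0.90, 0.75, 0.65])

def xsec_interp(energy):
    """Linearly interpolate the tabulated MG/ME cross section.

    Energies outside the table are clamped to the endpoints
    (numpy.interp's default behavior).
    """
    return np.interp(energy, mgme_energy, mgme_xsec)
```

The appeal of this approach is that the simulated rate would track MG/ME by construction, rather than depending on how well a WW-approximation formula happens to match it.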
This looks good so far, and I'm particularly happy to see a suite of validation DQM histograms + the corresponding plotting being set up. Just waiting to see the 4 GeV comparison and then I think this is good to go. The next step after merging would be to share your config to the LDCS repo where I will template-ify it and start production.
Looks good to me, I just had a minor comment suggestion and (as usual) a request for a corresponding header file for the source file. I (and I think a lot of people) will look in the header directory to find out what's available in a project, so if there's only a source file you'll miss that that translation unit exists. Histogram bin update looks good, it was a bit hacky before :)
@EinarElen I moved the declarations into their own header. I cannot see another "minor comment suggestion", did you "finish your review" so that it gets posted for me?
The 4GeV versions of the same histograms (with only 1k events sampled and doing density instead of overall count) have similar shapes and similar dependence on the A' mass. This is as expected because a 4->8GeV change is not too drastic of a change in the context of this model. WARNING: the colors are different in this set of plots compared to the 8GeV plots above. This is because I just used my Validation module to plot the directory of plots again and did not bother trying to do an overlay or side-by-side comparison with matplotlib. I could, but I don't think it is worth it given how close these distributions are.
explain in a doxygen comment why this function exists, basically copying the rationale written for the produce function as well
I am updating ldmx-sw, here are the details.
What are the issues that this addresses?
This PR also updates the tick_labels to be the bin centers rather than the bin edges. I applied this update to the hcal and photonuclear plots by removing the trailing empty tick label from all the tick label lists in those modules. Please confirm this was correct or guide me on how to fix this if not.

Check List
- I attached any sub-module related changes to this PR.

Running
Rough outline, have not confirmed this can work out of the box.

The config.py I used is the same as the one in the new signal validation sample, but with some tweaks to have the dark brem library be passed on the command line.

diff config.py .github/validation_samples/signal/config.py
Plots
Generally, we see the expected behavior. The A' takes a majority of the energy. The incident particle is close to the beam energy (most of the time), and the dark brem occurs in the tungsten target (most of the time). The heaviest mass point starts to happen more frequently in the trigger pads, but that is not too worrisome. This may inform us to include carbon and hydrogen in the dark brem reference library, but it is not necessary since the recoil electron loses so much energy either way in this high mass scenario.
You may notice that there is a bin for the dark photon energy above the beam energy. This is a non-empty bin because we are trying to replicate the dark brem process from MadGraph without the nucleus. G4DarkBreM chooses to put all of this messiness into the invisible dark photon so that the recoil electron's distributions follow the true process to the highest degree possible. In the high mass scenario, this means the dark photon's energy can easily exceed the beam energy since it also (in some sense) includes the energy that would be given to the nucleus. The two electrons involved (incident and recoil) also have this bin, but (if their energies are calculated correctly1) they stay within physical bounds and faithfully model the beam electron prior to dark brem and the dark brem recoiling electron respectively.
(Apologies for the legend labels being out of order, they are merely ordered by how they get listed within python and I didn't feel like implementing a sorting mechanism within Validation.)
Footnotes
The calculation for the incident energy needs to use the recoil and dark photon's three momentum to calculate the incident's three momentum and then calculate its energy using its mass. This is necessary since G4DarkBreM chooses to conserve three momentum when ignoring the nucleus which therefore "shoves" a lot of "error" into the dark photon's total energy. This calculation is done correctly in the DQM processor in this PR. ↩
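The footnote's incident-energy calculation can be written out explicitly. This is a minimal sketch (not the actual DQM processor code): since G4DarkBreM conserves three-momentum while ignoring the nucleus, the incident electron's three-momentum is reconstructed as the sum of the recoil and dark photon three-momenta, and its energy then follows from the mass-shell relation E = sqrt(|p|^2 + m^2). The function and variable names are illustrative.

```python
import math

M_ELECTRON = 0.000511  # electron mass [GeV]

def incident_energy(p_recoil, p_aprime):
    """Reconstruct the incident electron's energy [GeV].

    p_recoil, p_aprime: (px, py, pz) three-momenta in GeV of the
    recoil electron and the dark photon, respectively.
    Three-momentum conservation gives p_incident = p_recoil + p_A',
    and the mass-shell relation gives the energy.
    """
    px, py, pz = (r + a for r, a in zip(p_recoil, p_aprime))
    p2 = px * px + py * py + pz * pz
    return math.sqrt(p2 + M_ELECTRON**2)
```

Computing the energy this way keeps the incident electron on its physical mass shell, instead of inheriting the "error" that G4DarkBreM shoves into the dark photon's total energy.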