tabulated sersics #566
Based on a discussion with @rmjarvis and @esheldon today, it would be useful to have ways to circumvent a limitation of the Sersic profile: there is a significant setup time for every single new value of n. We discussed two options, one easier and one harder:

(1) Easier: if one has a p(n) they want to draw from, we could offer a discretization option in DistDeviate so that instead of varying n continuously, it draws from p(n) at some specified number of discrete values. Presumably, if you tell it you want e.g. 100 different values from p(n), it should choose those values to be evenly spaced in the CDF (e.g., at p=0.01, 0.02, 0.03, ...) rather than evenly spaced in n. (A sketch of this idea follows the list.)

(2) Harder: we could tabulate the Hankel transforms that are the main setup cost for some number of n values, and just interpolate between them. There is a tricky research problem here: figuring out what resolution in n is needed to get systematics below the desired level. Actually doing the tabulation, and having code that can read the tables in, shouldn't be hard.
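Here is a rough numpy sketch of the discretization idea in (1). The p(n) and grid sizes are made up purely for illustration, and a real implementation would presumably live inside DistDeviate rather than in user code:

```python
import numpy as np

# Made-up p(n): any positive, integrable function over the allowed n range.
def p(n):
    return np.exp(-0.5 * ((n - 2.5) / 1.0) ** 2)

n_min, n_max, n_points = 0.3, 6.2, 100

# Build the CDF on a fine grid and invert it at evenly spaced quantiles,
# so the discrete values are evenly spaced in the CDF rather than in n.
grid = np.linspace(n_min, n_max, 10001)
cdf = np.cumsum(p(grid))
cdf /= cdf[-1]
quantiles = (np.arange(n_points) + 0.5) / n_points   # p = 0.005, 0.015, ...
discrete_n = np.interp(quantiles, cdf, grid)

# Drawing now means picking one of the precomputed values, so at most
# n_points distinct Sersic setups are ever triggered.
rng = np.random.default_rng(1234)
samples = discrete_n[rng.integers(0, n_points, size=5)]
print(samples)
```

Because the values sit at evenly spaced quantiles, each one carries equal probability mass, so a plain uniform integer deviate over the indices reproduces p(n).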
Comments

(1) would be easy to implement, I think: add a […]

(Um, the internal uniform deviate which gives you the […]

Yes, (1) should indeed be that easy! Another question is whether anyone would use it... :)
For reference on why this is important, here is a quick script you can run on your machine to show you how long the setup takes for larger-n Sersics:
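(The following is a sketch of such a script rather than a verbatim copy; it assumes the galsim.Sersic and galsim.GSParams interfaces, and the image size and loosened accuracy values are arbitrary choices.)

```python
import time
import galsim

def setup_time(n, gsparams=None):
    """Time construction plus a small draw, which triggers the expensive
    Hankel-transform setup for a new Sersic index n."""
    t0 = time.time()
    gal = galsim.Sersic(n=n, half_light_radius=1.0, gsparams=gsparams)
    gal.drawImage(nx=64, ny=64, scale=0.2)
    return time.time() - t0

# Loosened accuracy requirements (the defaults are kvalue_accuracy=1.e-5,
# maxk_threshold=1.e-3), to show how much of the cost is accuracy-driven.
loose = galsim.GSParams(kvalue_accuracy=1.e-4, maxk_threshold=1.e-2)

for n in [1.5, 2.5, 4.0, 6.2]:
    t_first = setup_time(n)
    t_cached = setup_time(n)               # same n: hits the internal cache
    t_loose = setup_time(n, gsparams=loose)
    print('n=%.1f: first %.2fs, cached %.2fs, loose %.2fs'
          % (n, t_first, t_cached, t_loose))
```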
On my machine, n=6.2 takes over 7 seconds on the first pass, but only 0.16 seconds on the second pass. Loosening the accuracy requirements helps: these drop to 0.65 and 0.08 seconds respectively. Smaller n are better, and n<2 is quite reasonable. But it would be nice if we could have some precomputation that would allow GalSim to go directly to the more efficient calculations. The question is how best to do the interpolation in two variables (n and k) to meet our accuracy requirements; this is the tricky research problem Rachel mentioned.
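As a concrete (and deliberately naive) framing of that two-variable problem: tabulate the Fourier-space profile on a grid in n and k, fit a bivariate spline, and measure the error at an intermediate n against a direct setup. This only illustrates how the error could be measured, not the accuracy study itself; the grids, ranges, and spline order below are arbitrary:

```python
import numpy as np
import galsim
from scipy.interpolate import RectBivariateSpline

# Grid in Sersic index n and in k (units of 1/half_light_radius here).
# How fine the n grid must be is exactly the open research question.
n_grid = np.linspace(1.5, 6.0, 10)
k_grid = np.linspace(0.0, 20.0, 200)

# Tabulate the (real, since the profile is round) Fourier profile for each n.
# This loop is the slow one-time cost that shipping tables would amortize.
table = np.empty((len(n_grid), len(k_grid)))
for i, n in enumerate(n_grid):
    gal = galsim.Sersic(n=n, half_light_radius=1.0)
    table[i] = [gal.kValue(galsim.PositionD(k, 0.0)).real for k in k_grid]

spline = RectBivariateSpline(n_grid, k_grid, table)

# Compare the interpolant at an intermediate n against a direct computation.
n_test = 3.17
exact = np.array([galsim.Sersic(n=n_test, half_light_radius=1.0)
                  .kValue(galsim.PositionD(k, 0.0)).real for k in k_grid])
approx = spline(n_test, k_grid)[0]
print('max abs error at n=%.2f: %.2e' % (n_test, np.max(np.abs(approx - exact))))
```

Pushing the maximum error in a test like this below GalSim's kvalue_accuracy over the full usable ranges of n and k would set the required table resolution.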
Cross-referencing this useful comment by @dkirkby on a different issue about his efforts at tabulating the high-k functional form of the Hankel transforms. (I'd remembered that comment and thought it would be here in this issue, but it wasn't, so I went looking for it...)