Gradient-guided JAXNS #205
Joshuaalbert started this conversation in General
With 2.6.5, gradient-guided nested sampling is implemented. It is not the same as Lemos et al. 2023, where a trajectory is integrated: trajectories require step-size tuning and handling the case of getting stuck outside the likelihood constraint. Instead, I have devised an equivalent approach that uses JAXNS' exponential contraction and Householder reflections.

The figure shows results for a $D=8$, 0.99-correlation ellipsoid: the top panel is the error in the calculated $H = \mathrm{KL}(\mathrm{posterior} \,\|\, \mathrm{prior})$, the middle panel is the error in $\log Z$, and the bottom panel is runtime. We see that the bias in $H$ and $\log Z$ is smaller for small numbers of slices when gradients are on. This means you can use fewer slices per sample and still get decorrelated samples. The feature is still experimental, and I will gradually improve it and choose default parameters that perform well for large-$D$ problems.
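To illustrate the reflection step, below is a minimal sketch of a Householder reflection that uses the gradient of the log-likelihood as the mirror normal, as in reflective slice-sampling schemes. This is not the JAXNS implementation; the function names and the toy likelihood are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def householder_reflect(direction, grad):
    """Reflect `direction` about the hyperplane orthogonal to `grad`.

    v' = v - 2 (v . n) n, with n = grad / ||grad||. The reflection is
    orthogonal, so it preserves the norm of `direction` while reversing
    its component along `grad`, steering a proposal back toward the
    interior of the likelihood constraint.
    """
    n = grad / jnp.linalg.norm(grad)
    return direction - 2.0 * jnp.dot(direction, n) * n

# Illustrative correlated-Gaussian log-likelihood, a stand-in for the
# D=8, 0.99-correlation ellipsoid benchmarked above.
def log_likelihood(x):
    D = x.shape[0]
    cov = 0.99 * jnp.ones((D, D)) + 0.01 * jnp.eye(D)  # unit variance, 0.99 correlation
    return -0.5 * x @ jnp.linalg.solve(cov, x)

x = jnp.ones(8)                                     # current live point
v = jax.random.normal(jax.random.PRNGKey(0), (8,))  # proposal direction
g = jax.grad(log_likelihood)(x)                     # log-likelihood gradient
v_new = householder_reflect(v, g)                   # reflected direction
```

Because the reflection preserves the step length, it presumably composes cleanly with the exponential contraction of the proposal scale, avoiding the trajectory integration and step-size tuning required by the Lemos et al. approach.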