Compute a KL divergence between a Gaussian mixture prior and a normal distribution posterior #1723
-
Hi, thanks for reaching out! If I understand correctly, you're asking about computing a KL divergence between two continuous distributions via Monte Carlo sampling. I would expect that it's possible to use MCMC to do this, although I'm not sure how well it would converge. However, BoTorch doesn't have any functionality for estimating a KL divergence in that way, and since BoTorch is intended as a Bayesian Optimization library, we're not planning on adding that functionality. You can implement MCMC methods on top of BoTorch Posterior objects, which represent distributions. However, you can also do this with the underlying torch distributions that you're already using in the VAE, so I don't think BoTorch would add a lot of value there.
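As a rough illustration of that last point, here is a minimal sketch of a Monte Carlo estimate of KL(q ‖ p) = E_{z~q}[log q(z) − log p(z)] using only `torch.distributions` (the dimensions, mixture weights, and component parameters below are made up for illustration; this is not BoTorch functionality):

```python
import torch
from torch.distributions import Categorical, Independent, MixtureSameFamily, Normal

latent_dim, n_components, n_samples = 2, 3, 10_000  # made-up sizes for illustration

# Gaussian mixture prior p(z): equal weights, arbitrary component parameters
mixture = Categorical(logits=torch.zeros(n_components))
components = Independent(
    Normal(torch.randn(n_components, latent_dim), torch.ones(n_components, latent_dim)),
    1,
)
prior = MixtureSameFamily(mixture, components)

# Diagonal-Gaussian posterior q(z|x), e.g. one encoder output
posterior = Independent(Normal(torch.zeros(latent_dim), torch.ones(latent_dim)), 1)

# Monte Carlo estimate of KL(q || p) = E_{z~q}[log q(z) - log p(z)]
z = posterior.rsample((n_samples,))  # reparameterized draws, so gradients flow
kl_estimate = (posterior.log_prob(z) - prior.log_prob(z)).mean()
print(kl_estimate)
```

The variance of the estimate shrinks as `n_samples` grows, and since the draws are reparameterized, the estimate is differentiable and could be used directly in a VAE training loss.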
-
Thank you for your reply. Could you provide an example of how to combine an MCMC method with a Posterior object?
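For what it's worth, a minimal sketch of the building block involved, assuming a standard `SingleTaskGP` surrogate (the toy data and shapes below are made up): a BoTorch `Posterior` exposes `rsample`, and an MCMC scheme would be written on top of draws like these rather than being provided by the library.

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from gpytorch.mlls import ExactMarginalLogLikelihood

# Toy training data; double precision is recommended for BoTorch models
train_X = torch.rand(20, 2, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True) + 0.05 * torch.randn(20, 1, dtype=torch.double)

model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

# model.posterior(...) returns a Posterior object wrapping the predictive distribution
test_X = torch.rand(5, 2, dtype=torch.double)
posterior = model.posterior(test_X)

# rsample draws reparameterized Monte Carlo samples from that distribution;
# any sampling-based estimator would be layered on top of draws like these
samples = posterior.rsample(torch.Size([1024]))  # shape: 1024 x 5 x 1
print(samples.mean(dim=0).squeeze(-1))
```

Note this only shows direct Monte Carlo draws from a Posterior; any actual MCMC transition logic (proposals, accept/reject steps) would have to be written by hand around it.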
-
🚀 Feature Request
Hi,
I am trying to compute a KL divergence for my variational autoencoder architecture between a Gaussian mixture prior and a normal distribution posterior. Since this KL divergence is analytically intractable and can only be evaluated by some sort of approximation, I am wondering whether it is possible to compute it via Monte Carlo sampling using `botorch`. I would appreciate it if you could suggest an implementation with your library. Here is the architecture of my encoder and prior networks:

[Image: encoder and prior network architecture]

Is there any possibility of using the `botorch` library and explicit Monte Carlo sampling to compute this KL divergence? I was wondering whether `botorch.acquisition.active_learning.StatisticalDistance` with a KL `distance_metric` could be used? Thanks in advance.