TensorFlow implementations of various variational autoencoders applied to the MNIST dataset. This repository was created for practicing and testing different methods. Please let me know if you find any bug or mistake in this repository. Thank you.
pip install -r requirements.txt
Implementation of the multi-facet clustering variational autoencoder from Multi-Facet Clustering Variational Autoencoders (Falck et al., NeurIPS 2021). As with VLAE, progressive training is also implemented.
Implementation of the variational ladder autoencoder from Learning Hierarchical Features from Generative Models (Zhao et al., PMLR 2017). Following the paper, progressive training is also implemented.
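The exact progressive schedule isn't spelled out here, so the following is only a rough sketch of one way to fade in the higher rungs of the ladder during training; the function name `level_weights` and the hyperparameters are hypothetical, not this repository's API.

```python
import tensorflow as tf

def level_weights(step, num_levels=3, steps_per_level=10_000):
    """Hypothetical fade-in schedule: the lowest level is trained from the start,
    and each higher level's loss terms are ramped in linearly afterwards."""
    step = tf.cast(step, tf.float32)
    weights = [tf.constant(1.0)]  # level 0 is always active
    for level in range(1, num_levels):
        start = float(level * steps_per_level)
        weights.append(tf.clip_by_value((step - start) / steps_per_level, 0.0, 1.0))
    return weights  # scale each level's loss contribution by its weight
```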
Implementation of the variational deep embedding model from Variational Deep Embedding: An Unsupervised and Generative Approach to Clustering (Jiang et al., IJCAI 2017).
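The clustering behaviour of VaDE comes from its Gaussian-mixture prior: the cluster posterior q(c|x) is approximated by p(c|z) under that prior. A minimal sketch of that computation, with names of my own choosing rather than this repository's:

```python
import tensorflow as tf

def cluster_responsibilities(z, pi_logits, mu_c, logvar_c):
    """Approximate q(c | x) with p(c | z) under the GMM prior, as in the VaDE ELBO.

    z:         (batch, dim)  sampled latent codes
    pi_logits: (K,)          unnormalised mixture weights
    mu_c:      (K, dim)      component means
    logvar_c:  (K, dim)      component log-variances
    """
    z = tf.expand_dims(z, 1)                                   # (batch, 1, dim)
    log_2pi = tf.math.log(2.0 * 3.141592653589793)
    log_pdf = -0.5 * (log_2pi + logvar_c
                      + tf.square(z - mu_c) / tf.exp(logvar_c))
    log_p_z_given_c = tf.reduce_sum(log_pdf, axis=2)           # (batch, K)
    log_prior = tf.nn.log_softmax(pi_logits)                   # (K,) log pi_c
    return tf.nn.softmax(log_p_z_given_c + log_prior, axis=1)  # (batch, K)
```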
Implementation of the conditional variational autoencoder from Learning Structured Output Representation using Deep Conditional Generative Models (Sohn et al., NeurIPS 2015).
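The usual way to build the conditional model is to concatenate a one-hot label with the encoder input and again with the latent code fed to the decoder. A minimal Keras sketch under that assumption (layer sizes and names are illustrative, not this repository's):

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES, LATENT_DIM = 10, 16  # illustrative sizes for MNIST

# Encoder q(z | x, y): the label is concatenated with the flattened image.
x_in = tf.keras.Input(shape=(784,))
y_in = tf.keras.Input(shape=(NUM_CLASSES,))
h = layers.Dense(256, activation="relu")(layers.Concatenate()([x_in, y_in]))
encoder = tf.keras.Model([x_in, y_in],
                         [layers.Dense(LATENT_DIM)(h),   # z_mean
                          layers.Dense(LATENT_DIM)(h)])  # z_logvar

# Decoder p(x | z, y): the same label is concatenated with the latent code.
z_in = tf.keras.Input(shape=(LATENT_DIM,))
y_dec = tf.keras.Input(shape=(NUM_CLASSES,))
h_dec = layers.Dense(256, activation="relu")(layers.Concatenate()([z_in, y_dec]))
decoder = tf.keras.Model([z_in, y_dec], layers.Dense(784)(h_dec))  # pixel logits
```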
Implementation of the factor variational autoencoder (FactorVAE) from Disentangling by Factorising (Kim and Mnih, ICML 2018).
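FactorVAE penalises the total correlation of the aggregate posterior, estimated with a discriminator via the density-ratio trick on dimension-wise permuted latents. The sketch below shows only those two pieces under my own naming conventions (a two-logit discriminator; the two-batch and stop-gradient details of the paper's training loop are omitted):

```python
import tensorflow as tf

def permute_dims(z):
    """Shuffle each latent dimension independently across the batch, which
    approximates sampling from the product of marginals prod_j q(z_j)."""
    return tf.stack([tf.random.shuffle(z[:, j]) for j in range(z.shape[1])], axis=1)

def factor_vae_tc_terms(z, discriminator, gamma):
    """TC penalty added to the VAE loss, plus the discriminator's own loss.
    `discriminator` is assumed to return two logits per sample:
    class 0 = "drawn from q(z)", class 1 = "drawn from prod_j q(z_j)"."""
    logits_joint = discriminator(z)
    logits_perm = discriminator(permute_dims(z))
    # Density-ratio trick: TC(z) ~= E_q(z)[logit_0 - logit_1].
    tc_estimate = tf.reduce_mean(logits_joint[:, 0] - logits_joint[:, 1])
    disc_loss = 0.5 * (
        tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=tf.zeros_like(logits_joint[:, 0], dtype=tf.int32),
            logits=logits_joint))
        + tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=tf.ones_like(logits_perm[:, 0], dtype=tf.int32),
            logits=logits_perm)))
    return gamma * tc_estimate, disc_loss
```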
Implementation of the total correlation variational autoencoder (beta-TCVAE) from Isolating Sources of Disentanglement in Variational Autoencoders (Chen et al., NeurIPS 2018).
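The distinctive piece of beta-TCVAE is the minibatch-weighted estimate of the total correlation term in the KL decomposition, which is then up-weighted by (beta - 1) in the objective. A sketch of that estimator, assuming a diagonal-Gaussian posterior and dropping the constant log(N·M) offsets (names are mine, not this repository's):

```python
import tensorflow as tf

def log_normal_pdf(z, mean, logvar):
    """Element-wise log density of a diagonal Gaussian."""
    log_2pi = tf.math.log(2.0 * 3.141592653589793)
    return -0.5 * (log_2pi + logvar + tf.square(z - mean) / tf.exp(logvar))

def total_correlation(z, z_mean, z_logvar):
    """Minibatch-weighted estimate of TC(z) = KL(q(z) || prod_j q(z_j));
    weight this by (beta - 1) and add it to the standard ELBO."""
    # log q(z_i | x_j) for every pair (i, j): shape (batch, batch, dim).
    log_qz_pairs = log_normal_pdf(tf.expand_dims(z, 1),
                                  tf.expand_dims(z_mean, 0),
                                  tf.expand_dims(z_logvar, 0))
    # log q(z_i): logsumexp over the batch of the joint log density.
    log_qz = tf.reduce_logsumexp(tf.reduce_sum(log_qz_pairs, axis=2), axis=1)
    # log prod_j q(z_ij): sum over dims of a per-dimension logsumexp.
    log_qz_product = tf.reduce_sum(tf.reduce_logsumexp(log_qz_pairs, axis=1), axis=1)
    return tf.reduce_mean(log_qz - log_qz_product)
```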
Implementation of the variational autoencoder from Auto-Encoding Variational Bayes (Kingma and Welling, ICLR 2014) and the beta variational autoencoder from beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework (Higgins et al., ICLR 2017).
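The two models share the same objective up to the weight on the KL term (beta = 1 recovers the plain VAE). A minimal sketch of that loss, assuming flattened images and a Bernoulli decoder (function and argument names are illustrative):

```python
import tensorflow as tf

def beta_vae_loss(x, x_recon_logits, z_mean, z_logvar, beta=1.0):
    """Negative ELBO with a beta weight on the KL term; beta > 1 gives beta-VAE."""
    # Bernoulli reconstruction term, summed over the 784 pixels of a flattened digit.
    recon = tf.reduce_sum(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=x, logits=x_recon_logits),
        axis=1)
    # Analytic KL between q(z|x) = N(mu, sigma^2) and the standard normal prior.
    kl = 0.5 * tf.reduce_sum(
        tf.exp(z_logvar) + tf.square(z_mean) - 1.0 - z_logvar, axis=1)
    return tf.reduce_mean(recon + beta * kl)
```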
- Ladder variational autoencoder (Ladder Variational Autoencoders by Sønderby et al., NeurIPS 2016)
- Relevance factor variational autoencoder (Relevance Factor VAE: Learning and Identifying Disentangled Factors by Kim et al., 2019)
- Multi-level variational autoencoder (Multi-Level Variational Autoencoder: Learning Disentangled Representations from Grouped Observations by Bouchacourt et al., AAAI 2018)
- (Soft) Introspective variational autoencoder (Soft-IntroVAE: Analyzing and Improving the Introspective Variational Autoencoder by Daniel and Tamar, CVPR 2021)