A sparse autoencoder uses a sparsity penalty to direct a single-layer network to learn a code dictionary that minimizes the error in reproducing the input while constraining the number of code words used in each reconstruction.
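That objective can be sketched as a reconstruction error plus a sparsity term. Below is a minimal NumPy illustration; the dictionary `W`, code `h`, and penalty weight `lam` are hypothetical names for this sketch (an L1 penalty stands in for the code-word constraint), not identifiers from the sample scripts:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)        # input vector
W = rng.normal(size=(4, 8))   # code dictionary: 8 code words of dimension 4
h = rng.normal(size=8)        # code (coefficients over the dictionary) for x

# Reconstruction of the input as a weighted sum of code words
x_hat = W @ h

# Loss = reconstruction error + L1 penalty limiting how many code words are active
lam = 0.1
loss = np.sum((x - x_hat) ** 2) + lam * np.sum(np.abs(h))
print(float(loss) > 0)
```

Minimizing this loss over both `W` and `h` trades reconstruction accuracy against how many code words are used.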
The sparse autoencoder consists of a single hidden layer connected to the input vector by a weight matrix, forming the encoding step. The hidden layer then outputs to a reconstruction vector, using a tied (transposed) weight matrix to form the decoder.
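The encode/decode path with tied weights can be sketched as follows. The layer sizes and the sigmoid activation here are illustrative assumptions for the sketch, not taken from the sample scripts:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
n_in, n_hidden = 6, 3
W = rng.normal(scale=0.1, size=(n_hidden, n_in))  # encoder weight matrix
b = np.zeros(n_hidden)                            # encoder bias
c = np.zeros(n_in)                                # decoder bias

x = rng.normal(size=n_in)

# Encoding step: hidden-layer activations from the input
h = sigmoid(W @ x + b)

# Decoding step: reconstruction through the tied (transposed) weight matrix
x_hat = sigmoid(W.T @ h + c)

print(x_hat.shape)  # reconstruction has the same dimension as the input
```

Tying the decoder to `W.T` halves the number of parameters and couples the learned dictionary to the encoder.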
Sparse autoencoders are typically used to learn features for another task, such as classification. Regularizing an autoencoder to be sparse forces it to respond to the unique statistical features of the dataset it is trained on, rather than simply acting as an identity function.
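A common way to enforce that sparsity is a KL-divergence penalty that pushes each hidden unit's mean activation toward a small target rate, in the style of the Stanford CS294A notes linked below. The names `rho` and `rho_hat` and the numbers here are illustrative:

```python
import numpy as np

def kl_sparsity(rho, rho_hat):
    """Sum of KL divergences between the target sparsity rho and each
    hidden unit's observed mean activation rho_hat."""
    return np.sum(
        rho * np.log(rho / rho_hat)
        + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))
    )

rho = 0.05                        # target: each unit active ~5% of the time
rho_hat = np.array([0.05, 0.2])   # observed mean activations of two hidden units

penalty = kl_sparsity(rho, rho_hat)
# The unit matching the target contributes ~0; the over-active unit is penalized
print(float(penalty) > 0)
```

Adding this penalty (scaled by a weight) to the reconstruction loss is what keeps most hidden units near zero for any given input.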
```shell
python3 sample_keras.py
python3 sample_pytorch.py
```
- https://stats.stackexchange.com/questions/118199/what-are-the-differences-between-sparse-coding-and-autoencoder
- https://www.hindawi.com/journals/jcse/2018/8676387/
- https://medium.com/@venkatakrishna.jonnalagadda/sparse-stacked-and-variational-autoencoder-efe5bfe73b64
- https://web.stanford.edu/class/cs294a/sparseAutoencoder.pdf
- https://github.com/jadhavhninad/Sparse_autoencoder
- https://www.jeremyjordan.me/autoencoders/