Generative Adversarial Networks
Generative Adversarial Networks (GANs) are a powerful class of neural networks used for unsupervised learning. They were introduced by Ian J. Goodfellow and his colleagues in 2014. A GAN is made up of two competing neural network models that, through this competition, learn to analyze, capture and reproduce the variations within a dataset.
- Generative: To learn a generative model, which describes how data is generated in terms of a probabilistic model.
- Adversarial: The training of a model is done in an adversarial setting.
- Networks: Use deep neural networks as the artificial intelligence (AI) algorithms for training purposes.
- It has been observed that most mainstream neural nets can easily be fooled into misclassifying inputs when only a small amount of noise is added to the original data.
- Surprisingly, after the noise is added, the model is often more confident in the wrong prediction than it was in the correct one.
- One reason for this vulnerability is that most machine learning models learn from a limited amount of data, which is a huge drawback, as it makes them prone to overfitting.
- Also, the mapping between the input and the output is almost linear. Although the boundaries of separation between the various classes may seem complex, in reality they are composed of nearly linear pieces, so even a small change to a point in the feature space can lead to the misclassification of data.
In GANs, there is a Generator and a Discriminator. Both are neural networks, and they run in competition with each other during the training phase. These steps are repeated many times, and with each repetition the Generator and the Discriminator get better at their respective jobs.
![](https://github.com/ikathuria/python-to-ai/raw/main/.resources/GAN%20Architecture.png)
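As a rough illustration of this setup, the sketch below defines the two competing networks as simple multi-layer perceptrons in PyTorch; the framework choice, layer sizes and data dimensions are illustrative assumptions rather than details taken from this repository.

```python
# Minimal sketch of the two competing networks (illustrative sizes, PyTorch assumed).
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28   # noise vector size and a flattened 28x28 image
batch_size = 32

# Generator: maps a random noise vector z to a fake data sample
generator = nn.Sequential(
    nn.Linear(latent_dim, 256),
    nn.ReLU(),
    nn.Linear(256, img_dim),
    nn.Tanh(),                       # outputs scaled to [-1, 1]
)

# Discriminator: maps a data sample to the probability that it is real
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256),
    nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
    nn.Sigmoid(),
)

criterion = nn.BCELoss()                                        # binary real-vs-fake loss
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)       # updates only G
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)   # updates only D
```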
The Generator generates new data samples (be it an image, audio, etc.) and tries to fool the Discriminator.
- After the Discriminator has completed training, the Generator is trained while the Discriminator is idle.
- Since the Discriminator has already been trained on the Generator's fake data, its predictions can be used as a training signal for the Generator, which improves on its previous state and tries harder to fool the Discriminator.
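Continuing the sketch above, the Generator phase might look roughly like this: the Discriminator's weights are left untouched, and only the Generator is updated using the Discriminator's predictions on freshly generated fakes.

```python
# Generator phase (continuing the sketch above): D is idle, only G is updated.
z = torch.randn(batch_size, latent_dim)             # random noise input
fake = generator(z)                                  # G produces fake samples
pred = discriminator(fake)                           # D scores the fakes

# G is rewarded when D mistakes its fakes for real data (target label = 1)
g_loss = criterion(pred, torch.ones(batch_size, 1))

opt_g.zero_grad()
g_loss.backward()   # gradients flow back through D into G...
opt_g.step()        # ...but only G's parameters are updated
```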
The Discriminator tries to distinguish between real and generated samples. It decides whether each instance of data it reviews belongs to the actual training dataset or not.
- Initially, the Discriminator is trained while the Generator is idle.
- In this phase, the Generator is only forward propagated to produce fake samples; no back-propagation is applied to it.
- The Discriminator is trained on real data for n epochs to see if it can correctly predict them as real.
- In this phase, the Discriminator is also trained on the fake data generated by the Generator, to see if it can correctly predict it as fake.
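Under the same assumptions as the sketch above, the Discriminator phase might look like this: the Generator is only run forward to produce fakes, and only the Discriminator is updated.

```python
# Discriminator phase (continuing the sketch above): G is idle, only D is updated.
real = torch.rand(batch_size, img_dim) * 2 - 1      # stand-in for a batch of real data in [-1, 1]
z = torch.randn(batch_size, latent_dim)
with torch.no_grad():                                # G is only forward propagated here
    fake = generator(z)

# D should predict 1 for real samples and 0 for generated ones
d_loss_real = criterion(discriminator(real), torch.ones(batch_size, 1))
d_loss_fake = criterion(discriminator(fake), torch.zeros(batch_size, 1))
d_loss = d_loss_real + d_loss_fake

opt_d.zero_grad()
d_loss.backward()
opt_d.step()
```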
- The Vanilla GAN is the simplest type of GAN.
- Here, the Generator and the Discriminator are simple multi-layer perceptrons.
- In the vanilla GAN, the algorithm is really simple: it tries to optimize the mathematical equation below using stochastic gradient descent.
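The equation referred to here is the standard minimax objective from Goodfellow et al. (2014), where $D(x)$ is the Discriminator's estimate that $x$ is real and $G(z)$ is the Generator's output for noise $z$:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]$$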
- A Conditional GAN (CGAN) can be described as a deep learning method in which some conditional parameters are put in place.
- In a CGAN, an additional parameter 'y' is added to the Generator's input so that it generates data corresponding to that condition.
- Labels are also added to the input of the Discriminator to help it distinguish the real data from the fake generated data.
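As a rough sketch of how this conditioning could be wired up (PyTorch assumed, with illustrative sizes and a hypothetical 10-class label), the one-hot label y is simply concatenated to the inputs of both networks:

```python
# Minimal CGAN-style conditioning sketch (illustrative, PyTorch assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, img_dim, n_classes = 64, 28 * 28, 10

cond_generator = nn.Sequential(
    nn.Linear(latent_dim + n_classes, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
cond_discriminator = nn.Sequential(
    nn.Linear(img_dim + n_classes, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

z = torch.randn(8, latent_dim)
y = F.one_hot(torch.randint(0, n_classes, (8,)), n_classes).float()  # one-hot labels

fake = cond_generator(torch.cat([z, y], dim=1))           # G receives noise + label
score = cond_discriminator(torch.cat([fake, y], dim=1))   # D receives sample + label
```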
- The Deep Convolutional GAN (DCGAN) is one of the most popular and most successful implementations of GAN.
- It is composed of ConvNets in place of multi-layer perceptrons.
- The ConvNets are implemented without max pooling, which is replaced by strided convolutions.
- Also, the layers are not fully connected.
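A minimal sketch of what such an architecture might look like (PyTorch assumed; channel counts and image size are illustrative): transposed and strided convolutions handle up- and down-sampling in place of max pooling, and neither network uses fully connected layers.

```python
# Minimal DCGAN-style sketch (illustrative, PyTorch assumed).
import torch
import torch.nn as nn

latent_dim = 100

# Generator: upsamples a 1x1 noise "image" to a 32x32 single-channel image
dc_generator = nn.Sequential(
    nn.ConvTranspose2d(latent_dim, 128, kernel_size=4, stride=1, padding=0),  # -> 4x4
    nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),          # -> 8x8
    nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),           # -> 16x16
    nn.BatchNorm2d(32), nn.ReLU(),
    nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),            # -> 32x32
    nn.Tanh(),
)

# Discriminator: strided convolutions downsample instead of max pooling
dc_discriminator = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=4, stride=2, padding=1),    # 32x32 -> 16x16
    nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),   # 16x16 -> 8x8
    nn.BatchNorm2d(64), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, kernel_size=8, stride=1, padding=0),    # 8x8 -> 1x1 score
    nn.Sigmoid(),
)

z = torch.randn(8, latent_dim, 1, 1)
img = dc_generator(z)               # shape: (8, 1, 32, 32)
score = dc_discriminator(img)       # shape: (8, 1, 1, 1)
```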
- To learn more about Artificial Intelligence concepts, see Artificial Intelligence, Machine Learning, and Deep Learning.
- Learn ML with the Google Machine Learning Crash Course.