We studied neural networks and applied them to text recognition and face recognition as part of the COMP4107 course at Carleton University.
This work was done by myself, Basim Ramadhan, and my partner Christian Abbott.
For our end-of-course project, we aimed to classify images from the Tiny ImageNet dataset.
We referred to network structures and techniques used by YOLO and AlexNet, and derived simplified versions that we could run on a single GTX 970 GPU.
After plenty of troubleshooting, parameter tweaking, and experimentation, we achieved a top-1 accuracy of 30%.
Check out our report here.
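The report covers the full architectures; as a rough illustration of the building blocks such networks stack — convolution, ReLU, and max pooling — here is a minimal NumPy sketch. The image size, kernel, and single-channel setup are arbitrary stand-ins, not values from our actual models:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as CNN layers compute it)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling; trims edges that don't divide evenly."""
    h = (x.shape[0] // size) * size
    w = (x.shape[1] // size) * size
    x = x[:h, :w]
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A toy 6x6 "image" passed through one conv -> ReLU -> pool stage.
rng = np.random.default_rng(0)
image = rng.standard_normal((6, 6))
kernel = rng.standard_normal((3, 3))
features = max_pool(relu(conv2d(image, kernel)))  # shape (2, 2)
```

Real networks stack several such stages (with many kernels per layer) before the fully connected classifier at the end.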
We used neural networks to recognize numbers from the MNIST dataset.
Check out the write-up.
Our best model achieved an accuracy of 96%:
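The approach boils down to a feed-forward network trained by backpropagation. As a scaled-down sketch of that training loop — using the XOR problem instead of digits, with made-up layer sizes and learning rate — something like this captures the mechanics:

```python
import numpy as np

# Toy stand-in for the MNIST setup: a one-hidden-layer network trained
# with plain gradient descent on squared error. The task here is XOR,
# not digit recognition, so all sizes are illustrative only.
rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.standard_normal((2, 8)) * 0.5
b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)) * 0.5
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((p - y) ** 2)))
    # Backward pass: gradients of the squared error through both layers.
    grad_p = (p - y) * p * (1 - p)
    grad_h = (grad_p @ W2.T) * h * (1 - h)
    W2 -= h.T @ grad_p
    b2 -= grad_p.sum(axis=0)
    W1 -= X.T @ grad_h
    b1 -= grad_h.sum(axis=0)
```

The MNIST version is the same loop with 784 inputs, 10 softmax outputs, and mini-batches instead of the full dataset each step.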
We experimented with Hopfield networks, using the Storkey and Hebbian learning rules to store images for later retrieval.
See part 1 of this write-up.
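Hebb's rule is the simpler of the two: the weight matrix is the sum of outer products of the stored patterns, with the diagonal zeroed, and retrieval repeatedly thresholds the weighted input until the state settles. (The Storkey rule adds correction terms for higher capacity; only Hebb's rule is sketched here, on tiny ±1 vectors rather than our actual images.)

```python
import numpy as np

# Two ±1 patterns to memorize (stand-ins for flattened binary images).
patterns = np.array([
    [1, 1, 1, 1, -1, -1, -1, -1],
    [1, -1, 1, -1, 1, -1, 1, -1],
], dtype=float)
n = patterns.shape[1]

# Hebbian learning: sum of outer products, zero self-connections.
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0.0)

def recall(state, steps=5):
    """Synchronous updates until the state (hopefully) settles on a memory."""
    s = state.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)
    return s

# Corrupt the first pattern by flipping one bit, then retrieve it.
noisy = patterns[0].copy()
noisy[0] *= -1
restored = recall(noisy)
```

With images, each pixel becomes one ±1 unit, and a partially corrupted image is "cleaned up" into the nearest stored memory in the same way.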
Continuing with the MNIST dataset, we experimented with self-organizing maps (SOMs) to recognize the hand-written digits.
See part 2 of this write-up.
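A SOM learns by repeatedly picking a sample, finding the grid unit whose weight vector is closest (the best-matching unit), and pulling that unit and its grid neighbours toward the sample, with the learning rate and neighbourhood radius shrinking over time. A minimal sketch on 2-D points — MNIST inputs are 784-dimensional, and the grid size and schedules here are made up — with the weights deliberately initialized away from the data so the learning is visible:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((200, 2))                    # toy 2-D samples in [0, 1]^2

grid_w, grid_h = 4, 4
weights = rng.random((grid_w * grid_h, 2)) * 0.1   # start bunched near the origin
coords = np.array([[i, j] for i in range(grid_w) for j in range(grid_h)],
                  dtype=float)                 # each unit's position on the grid

def quantization_error(w):
    d = np.linalg.norm(data[:, None, :] - w[None, :, :], axis=2)
    return d.min(axis=1).mean()

err_before = quantization_error(weights)
steps = 1000
for t in range(steps):
    x = data[rng.integers(len(data))]
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
    lr = 0.5 * (1 - t / steps)                 # decaying learning rate
    sigma = 2.0 * (1 - t / steps) + 0.5        # decaying neighbourhood radius
    dist = np.linalg.norm(coords - coords[bmu], axis=1)
    h = np.exp(-dist ** 2 / (2 * sigma ** 2))  # Gaussian neighbourhood
    weights += lr * h[:, None] * (x - weights)
err_after = quantization_error(weights)
```

For classification, each trained unit gets labelled by the digits that map to it, and a test image takes the label of its best-matching unit.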
We used neural networks to recognize faces from the LFW dataset.
See part 3 of this write-up.
We achieved an accuracy of 83% in recognizing faces using a regular feed-forward network.
We then used principal component analysis (PCA) to reduce the size of the network and see how much accuracy it would retain:
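The idea is to project each flattened face image onto its top-k principal components, so the network's input layer shrinks from thousands of pixels to k values. A sketch of that reduction via SVD — the sample and feature counts here are arbitrary, not the real LFW dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 50))   # stand-in: 100 samples, 50 features each

# Center the data, then take the top-k right singular vectors as components.
mean = X.mean(axis=0)
Xc = X - mean
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 10
components = Vt[:k]                  # top-k principal directions, shape (10, 50)
Z = Xc @ components.T                # reduced network inputs, shape (100, 10)
X_rec = Z @ components + mean        # approximate reconstruction from k values

# Fraction of the variance the k components keep.
explained = (S[:k] ** 2).sum() / (S ** 2).sum()
```

The trade-off is direct: larger k keeps more variance (and usually more accuracy) but gives the network a bigger input layer to train.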
Later in the course, we tackled the CIFAR-10 dataset, which contains images of various animals and objects.
We used convolutional neural networks (CNNs) for this task.
We achieved an accuracy of 75%:
Check out our write-up here.
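The write-up covers the convolutional layers; the final stage of such a network maps the learned features to the 10 CIFAR-10 classes through a softmax layer trained with cross-entropy. A minimal sketch of that output stage — the feature size and random weights here are placeholders, not values from our model:

```python
import numpy as np

# Hypothetical feature vector produced by the convolutional layers.
rng = np.random.default_rng(0)
features = rng.standard_normal(64)
W = rng.standard_normal((64, 10)) * 0.1   # one column per CIFAR-10 class
b = np.zeros(10)

def softmax(z):
    z = z - z.max()                        # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

probs = softmax(features @ W + b)          # class probabilities, sum to 1
pred = int(np.argmax(probs))               # predicted class index, 0..9

def cross_entropy(probs, label):
    """Training loss: negative log-probability of the true class."""
    return -np.log(probs[label])
```

Top-1 accuracy — the 75% figure above — simply counts how often `pred` matches the true label.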