This collection of notebooks is based on the Dive into Deep Learning book. It is meant to serve as a reference when working on future projects. All of the notes are written in PyTorch and the d2l library.
Ralph Waldo Emerson: "Nothing great was ever achieved without enthusiasm..."
Steven Wright: "Everywhere is walking distance if you have the time..."
Lee Haney: "Exercise to stimulate, not to annihilate. The world wasn't formed in a day, and neither were we. Set small goals and build upon them..."
"The Man who loves walking will walk further than the man who loves the destination. When you fall in love with the journey, everything else takes care of itself. Trip, fall, pick yourself up. Get up, learn, do it over again..."
-
Basics ✅
- Linear Neural Networks
- Multilayer Perceptrons
- Builder's guide
-
Convolutional Neural Networks ✅
- LeNet -> DenseNet (a minimal LeNet sketch follows this list)
- CNNs for Audio and Text (Maybe)
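To anchor the LeNet -> DenseNet progression, here is a minimal LeNet-5 sketch in PyTorch. Layer sizes follow the d2l version of the architecture; the input shape is illustrative.

```python
import torch
from torch import nn

# LeNet-5 as in the d2l chapter: two conv/pool stages, then three dense layers
lenet = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5, padding=2), nn.Sigmoid(),
    nn.AvgPool2d(kernel_size=2, stride=2),
    nn.Conv2d(6, 16, kernel_size=5), nn.Sigmoid(),
    nn.AvgPool2d(kernel_size=2, stride=2),
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 120), nn.Sigmoid(),
    nn.Linear(120, 84), nn.Sigmoid(),
    nn.Linear(84, 10),
)

x = torch.randn(1, 1, 28, 28)  # one 28x28 grayscale image (Fashion-MNIST shape)
print(lenet(x).shape)          # torch.Size([1, 10])
```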
-
Review Probability and Information Theory (Deep Learning: Adaptive Computation and Machine Learning, Chapter III) ✅
- Estimators, Bias and Variance
- Maximum Likelihood Estimation (a small MLE sketch follows this list)
- Bayesian Statistics
- Deep Feedforward Networks
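A quick illustration of the estimation topics above: maximum likelihood for a Gaussian, on synthetic data (the numbers are illustrative). Note that the variance MLE divides by n rather than n - 1, which is exactly the biased-estimator example from the bias/variance discussion.

```python
import torch

torch.manual_seed(0)
data = torch.randn(1000) * 2.0 + 3.0      # synthetic samples from N(3, 4)

mu_mle = data.mean()                       # MLE of the mean: the sample mean
var_mle = ((data - mu_mle) ** 2).mean()    # MLE of the variance: divides by n (biased)

print(f"MLE mean ~ {mu_mle:.3f}, MLE variance ~ {var_mle:.3f}")
```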
-
Deep Learning: Adaptive Computation and Machine Learning Chapter VII ✅
- Regularization for Deep Learning (applied to CNNs; see the sketch below)
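A small sketch of how two of the chapter's regularizers plug into a CNN in PyTorch: dropout as a layer inside the network, and L2 weight decay via the optimizer. All sizes are illustrative, assuming 28x28 single-channel inputs.

```python
import torch
from torch import nn

net = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                    # 28x28 -> 14x14
    nn.Flatten(),
    nn.Dropout(p=0.5),                  # zeroes half the activations at train time
    nn.Linear(8 * 14 * 14, 10),
)

# weight_decay adds an L2 penalty to every parameter update
optimizer = torch.optim.SGD(net.parameters(), lr=0.1, weight_decay=1e-4)
```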
-
Optimization Algorithms d2l.ai chapter 12 ✅
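The chapter builds everything on the minibatch SGD update; here is a hand-written version as a sketch, mirroring the d2l style of implementing the update rule directly (the toy objective is illustrative).

```python
import torch

def sgd(params, lr):
    """In-place update: theta <- theta - lr * grad (then reset the gradient)."""
    with torch.no_grad():
        for p in params:
            p -= lr * p.grad
            p.grad.zero_()

# one step on a toy least-squares objective
w = torch.randn(3, requires_grad=True)
loss = (w ** 2).sum()
loss.backward()
sgd([w], lr=0.1)
```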
-
Deep Learning: Adaptive Computation and Machine Learning Chapter IX ✅
- Convolutional Neural Networks from a mathematical perspective
-
Computational Performance d2l.ai chapter 13 🔜
- When covering parallelization, do not forget to review the multi-GPU implementation described in the AlexNet paper (a data-parallel sketch follows this list).
- Implementation: cuda-convnet
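A sketch of the simplest multi-GPU route in today's PyTorch, nn.DataParallel. Note this is data parallelism (split the batch, replicate the model), whereas the AlexNet paper split the model itself across two GPUs; DistributedDataParallel is the recommended option for serious runs. The model here is a placeholder.

```python
import torch
from torch import nn

net = nn.Sequential(nn.Linear(100, 10))   # stand-in for a real network
if torch.cuda.device_count() > 1:
    net = nn.DataParallel(net)            # replicates the model, scatters each batch
device = "cuda" if torch.cuda.is_available() else "cpu"
net = net.to(device)
```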
-
Computer Vision d2l.ai chapter 14 ✅
-
Final Project 🔜
-
Recurrent Neural Networks d2l.ai chapters 9-10 ✅
-
Final Project ✅
-
Attention Mechanisms and Transformers d2l.ai chapter 11 ✅
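The core formula of this chapter is scaled dot-product attention, softmax(QK^T / sqrt(d)) V. A self-contained sketch (all shapes are illustrative):

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    d = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / math.sqrt(d)   # (..., n_queries, n_keys)
    weights = torch.softmax(scores, dim=-1)           # attention weights sum to 1
    return weights @ v

q = torch.randn(2, 4, 8)   # (batch, queries, dim)
k = torch.randn(2, 6, 8)
v = torch.randn(2, 6, 8)
print(scaled_dot_product_attention(q, k, v).shape)    # torch.Size([2, 4, 8])
```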
-
Natural Language Processing: Pretraining d2l.ai chapter 15 ✅
-
Natural Language Processing: Applications d2l.ai chapter 16 ✅
-
Final Project ✅
- Machine Translation
- Document Summarization
- Document Generation
- Transformers in Computer Vision
- Diffusion Models
- Video Understanding
- Nice time to reconsider GAN + Transformers!
-
Hyperparameter Optimization d2l.ai chapter 19 🔜
-
Generative Adversarial Networks d2l.ai chapter 20 ✅
- This can come after the CNN final project
- Small project
- GANformer = GAN + Transformers
- Video: NIPS 2016 tutorial, Generative Adversarial Networks, by Ian Goodfellow (a minimal GAN sketch follows this list)
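A minimal GAN sketch as a starting point for the small project (all layer sizes and the toy batch are illustrative, not from the course): a generator mapping noise to samples, a discriminator scoring real vs. fake, and the two opposing losses.

```python
import torch
from torch import nn

latent_dim, data_dim = 16, 2
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))
loss = nn.BCEWithLogitsLoss()

z = torch.randn(8, latent_dim)
fake = G(z)
d_loss_fake = loss(D(fake.detach()), torch.zeros(8, 1))  # D: label fakes as 0
g_loss = loss(D(fake), torch.ones(8, 1))                 # G: fool D into "real"
```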
-
Recommender Systems d2l.ai chapter 21
- This can come before the Transformer final project
- Small Project + Website
- Check out this paper on LLMs + recommender systems: GPT4Rec: A Generative Framework for Personalized Recommendation and User Interests Interpretation. A classic matrix-factorization baseline is sketched below.
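Before jumping to LLM-based approaches like GPT4Rec, the chapter's classic baseline is matrix factorization: predict a rating as the dot product of a user embedding and an item embedding. A minimal sketch (user/item counts and embedding size are made up):

```python
import torch
from torch import nn

class MatrixFactorization(nn.Module):
    def __init__(self, n_users, n_items, k=16):
        super().__init__()
        self.user = nn.Embedding(n_users, k)   # one latent vector per user
        self.item = nn.Embedding(n_items, k)   # one latent vector per item

    def forward(self, u, i):
        return (self.user(u) * self.item(i)).sum(dim=-1)  # predicted score

model = MatrixFactorization(n_users=100, n_items=500)
print(model(torch.tensor([0, 1]), torch.tensor([10, 20])))  # two predicted scores
```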
-
Reinforcement Learning d2l.ai chapter 17 ✅
- Companion: course by David Silver (DeepMind)
- Reinforcement Learning Recommender System:
-
Gaussian Processes d2l.ai chapter 18
- Why study Gaussian Processes?
- They provide a function-space perspective on modelling, which makes understanding a variety of model classes, including deep neural networks, much more approachable.
- They have an extraordinary range of applications where they are SOTA, including active learning, hyperparameter learning, auto-ML, and spatiotemporal regression.
- Over the last few years, algorithmic advances have made Gaussian processes increasingly scalable and relevant, harmonizing with deep learning through frameworks such as GPyTorch. A tiny exact-GP regression sketch follows below.
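To make the function-space perspective concrete, here is a tiny exact GP regression sketch in plain torch with an RBF kernel (GPyTorch wraps and scales this same machinery). The training points and noise level are illustrative.

```python
import torch

def rbf(x1, x2, lengthscale=1.0):
    """RBF kernel matrix between two 1-D sets of points."""
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return torch.exp(-0.5 * d2 / lengthscale**2)

x_train = torch.tensor([-2.0, -1.0, 0.0, 1.5])
y_train = torch.sin(x_train)
x_test = torch.linspace(-3, 3, 5)
noise = 1e-4

K = rbf(x_train, x_train) + noise * torch.eye(len(x_train))
K_s = rbf(x_test, x_train)
mean = K_s @ torch.linalg.solve(K, y_train)                     # posterior mean
cov = rbf(x_test, x_test) - K_s @ torch.linalg.solve(K, K_s.T)  # posterior covariance
print(mean, cov.diagonal())   # predictive mean and per-point variance
```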
@article{zhang2021dive,
title={Dive into Deep Learning},
author={Zhang, Aston and Lipton, Zachary C. and Li, Mu and Smola, Alexander J.},
journal={arXiv preprint arXiv:2106.11342},
year={2021}
}