Course notes for fast.ai 2019
- Lenovo ThinkPad X1 Extreme
- Install Visual Studio 2017 with the following workloads:
  - .NET desktop development
  - Data science and analytical applications
  - Python development
- git clone https://github.com/parthopdas/c-fastai2019
Set up fastai course v3 using any of the following methods:
- Anaconda (default)
- PIP (in case Anaconda doesn't work out for some reason)
- git linked (when you want to debug or contribute to the fastai library)
For all three approaches, ensure these steps are performed first:
- conda deactivate
- conda env remove --name my-fastai2019
- conda create --name my-fastai2019 python=3.7.2
- conda activate my-fastai2019
- Anaconda (default):
  - conda install -c pytorch -c fastai -c conda-forge fastai pytorch torchvision cuda100
  - tools\verify-CUDA.ps1
  - conda install jupyter
- PIP:
  - pip install fastai
  - tools\verify-CUDA.ps1
  - In case of build errors with the pip installation:
    - conda install -c pytorch fastai pytorch torchvision cuda100
    - conda install jupyter
    - tools\verify-CUDA.ps1
- git linked:
  - git clone https://github.com/fastai/fastai
  - cd fastai
  - ./tools/run-after-git-clone
  - pip install -e ".[dev]"
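The contents of tools\verify-CUDA.ps1 aren't reproduced here; a hypothetical Python equivalent of what such a check might do (all standard torch/fastai calls):

```python
import torch
import fastai

# Confirm the environment resolved to working versions
print("fastai:", fastai.__version__)   # course v3 targets fastai 1.0.x
print("torch:", torch.__version__)

# Confirm PyTorch can actually see the GPU
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```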
- Start VS2017 from the conda environment
- Create a new Python project
- Add fastai code to the Python file (e.g. the sketch below)
- Set a breakpoint and debug with F5 (run), F10 (step over), F11 (step into)
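A minimal fastai v1 script to drop into the Python project for debugging (MNIST_SAMPLE is just a convenient tiny dataset; cnn_learner was named create_cnn in older v1 releases):

```python
from fastai.vision import *

# Tiny dataset keeps each debug run short
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path, size=26, bs=64)

learn = cnn_learner(data, models.resnet18, metrics=accuracy)
learn.fit_one_cycle(1)  # breakpoint here; F11 steps into the fastai internals
```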
- hyperlayout default
- .\tools\monitor-cpu.ps1
- .\tools\monitor-gpu.ps1
- Kaggle CLI
- Faster experimentation (see the sketch after this list):
  - Transfer learning
  - Use the fastai samples/tiny datasets
  - Train with images at reduced sizes
- For creating new datasets: download images from Google search [TODO: Google image downloader / ImageDownloader tool]
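A sketch of that loop, assuming fastai v1's bundled MNIST_TINY as the tiny dataset:

```python
from fastai.vision import *

# Iterate on a tiny built-in dataset first (MNIST_TINY, MNIST_SAMPLE, ...)
path = untar_data(URLs.MNIST_TINY)

# ...and at a reduced image size; scale both up once the pipeline works
data = ImageDataBunch.from_folder(path, size=32, bs=32)

# Transfer learning: start from an ImageNet-pretrained backbone
learn = cnn_learner(data, models.resnet18, metrics=accuracy)
learn.fit_one_cycle(1)
```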
- Switch between CPU & GPU:
  - defaults.device = torch.device('cpu')
  - defaults.device = torch.device('cuda')
- Fixing GPU OOM (see the sketch after this list):
  - Reduce batch size
  - Reduce image size
  - Use a simpler architecture
  - Mixed precision training (train using half-precision floating point)
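A sketch of those knobs combined in fastai v1 (bs and size shrunk, resnet18 as the simpler architecture, to_fp16() for mixed precision):

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_SAMPLE)

# Smaller batches (bs) and smaller images (size) both cut GPU memory use
data = ImageDataBunch.from_folder(path, size=24, bs=16)

# Simpler architecture (resnet18) plus mixed precision via to_fp16()
learn = cnn_learner(data, models.resnet18, metrics=accuracy).to_fp16()
learn.fit_one_cycle(1)
```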
- Ways to get more data (augmentation sketch below):
  - Supervised learning: manually label data
  - Data augmentation
  - NLP: use the entire train and test sets to fine-tune the language model
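For images, fastai v1's get_transforms() does the augmentation; a minimal sketch (parameter values are illustrative):

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_SAMPLE)

# Random rotations/zooms give the model fresh variants of each image every
# epoch (flips disabled here: mirrored digits are not valid digits)
tfms = get_transforms(do_flip=False, max_rotate=10.0, max_zoom=1.1)
data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=26)
```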
- Ways to prevent underfitting?
  - Decrease regularization
- Ways to prevent overfitting?
  - Increase regularization (fastai's knobs sketched below)
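In fastai v1 the usual regularization dials are dropout (ps) and weight decay (wd); a sketch with illustrative values:

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path, size=26)

# ps = dropout probability, wd = weight decay; values here are illustrative.
# Underfitting: lower them; overfitting: raise them.
learn = cnn_learner(data, models.resnet18, ps=0.25, wd=1e-2, metrics=accuracy)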
- Ways to speed up training?
- Use transfer learning (recipe sketched below)
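The standard fastai v1 recipe: train the new head on a frozen pretrained body, then unfreeze and fine-tune with discriminative learning rates:

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path, size=26)

# Pretrained body starts frozen: only the new head trains at first
learn = cnn_learner(data, models.resnet18, metrics=accuracy)
learn.fit_one_cycle(1)

# Then unfreeze and fine-tune everything with discriminative learning rates
learn.unfreeze()
learn.fit_one_cycle(1, max_lr=slice(1e-5, 1e-3))
```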
- Will 'x' work? Try it and see!
- Getting a feel for the data as quickly as possible:
  - Use the fastai samples/tiny datasets
  - Initially train with images at reduced sizes
- How to pick the learning rate? From the LR finder plot, pick a value ~10x smaller than the point right before the loss shoots up (sketch below).
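With fastai v1 that reads as follows (the dataset and the 1e-2 pick are illustrative):

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path, size=26)
learn = cnn_learner(data, models.resnet18, metrics=accuracy)

learn.lr_find()        # short sweep over learning rates
learn.recorder.plot()  # loss vs. LR; find where the loss starts shooting up

# If the loss blows up around 1e-1, pick ~10x smaller:
learn.fit_one_cycle(4, max_lr=1e-2)
```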
- Train on GPU, predict on CPU (export/load sketch below).
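A sketch using fastai v1's export()/load_learner() pair; the paths are hypothetical placeholders:

```python
from fastai.vision import *

# After GPU training: learn.export() writes export.pkl next to the data.

defaults.device = torch.device('cpu')  # force inference onto the CPU

# Hypothetical paths: point at the folder containing export.pkl and an image
learn = load_learner(Path('path/to/export_dir'))
img = open_image(Path('path/to/image.jpg'))
pred_class, pred_idx, probs = learn.predict(img)
print(pred_class)
```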
- Skill in training models is honing the intuition for driving across the loss-function landscape to find the lowest and flattest spot as quickly as possible.
  - Lowest => best fit across the seen data
  - Flattest => generalizes best across unseen data
- With DL there is still feature engineering, but the features are encoded in the activations of the various layers and figured out automagically by the network during training.
- verify-CUDA:
  - Add fastai verification
  - Rename to verify-FAI
- Look at the results section for each kata
- Make a single prediction using the CPU
- Official instructions for Win10
- Kaggle winners' interviews