Objective : This is a mini tutorial which aims to develop intuition about how matrices change shape as they pass from layer to layer in a neural network.
Prerequisite : Basic knowledge of the representation of neural networks and matrices.
-
This concept is essential for building more complex neural networks: stacking layers on top of one another keeps the matrix computations abstracted away, but deeper insight into their shapes helps us understand how our inputs and outputs are related. It is also very useful when debugging code, since most errors occur due to inconsistent matrix shapes.
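As a minimal sketch of the kind of shape error mentioned above (the sizes here are made up for illustration): a weight matrix can only multiply an input whose length matches its inner dimension, and NumPy raises a ValueError otherwise.

```python
import numpy as np

W = np.random.randn(3, 4)   # maps 4 input features to 3 units

x_bad = np.random.randn(3)  # wrong: only 3 features, inner dimensions (4 vs 3) disagree
try:
    W @ x_bad
except ValueError as e:
    print("shape error:", e)

x_ok = np.random.randn(4)   # correct: 4 features
print((W @ x_ok).shape)     # (3,) -- a 3-element output
```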
-
Here we only discuss a simple network with simple examples because, again, we need to understand the fundamentals first; more complex networks are just these same pieces stacked on top of one another.
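To make the layer-to-layer shape changes concrete, here is a minimal sketch of a two-layer forward pass in NumPy (the layer sizes are made up for illustration, not taken from the notebook):

```python
import numpy as np

# batch of 5 examples, 3 input features, hidden layer of 4 units, 2 outputs
X = np.random.randn(5, 3)    # (batch, in_features)
W1 = np.random.randn(3, 4)   # (in_features, hidden)
W2 = np.random.randn(4, 2)   # (hidden, out_features)

H = np.maximum(0, X @ W1)    # ReLU hidden layer: (5, 3) @ (3, 4) -> (5, 4)
Y = H @ W2                   # output layer:      (5, 4) @ (4, 2) -> (5, 2)

print(X.shape, H.shape, Y.shape)   # (5, 3) (5, 4) (5, 2)
```

Note how the inner dimension of each product must match the previous layer's output size; tracking these pairs is exactly the intuition this tutorial builds.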
-
Clone the repository and navigate to the folder where the repo is downloaded.
git clone https://github.com/souravs17031999/NeuralNets-Pure-Python.git
cd NeuralNets-Pure-Python
-
Install all the requirements (you may want to create a separate environment using conda).
pip install -r requirements.txt
- For your reference : the requirements mainly include Python, NumPy, and Jupyter Notebook.
-
Open the file "Analysis_neural_networks.ipynb" in Jupyter Notebook.
jupyter notebook Analysis_neural_networks.ipynb
-
Now you should see the notebook opened in your browser, served from a local host.
Feel free to explore.
The following tutorials and articles are highly recommended if you feel a bit perplexed!
References :
Andrew Trask blog
Numpy tutorial
Python tutorial
Refresher on Gradient descent
Refresher on backpropagation
- ⭐️ this repo if you liked it !