- ml5 Coding Train video playlist
- ml5 website, ml5 github
- Image Classification Transfer Learning example code
- Image "Regression" Transfer Learning example code
- Intro to TensorFlow.js video playlist
- Building a "Color Classifier" with TensorFlow.js video playlist
- Yining Shi's TensorFlow.js Doodle Classifier
- TensorFlow.js website
- Coding Challenge: Solving XOR with tf.js
- Coding Challenge: Linear Regression with tf.js
- Coding Challenge: Polynomial Regression with tf.js
- Neural Networks (Nature of Code Chapter 10)
- Perceptron Video, Perceptron p5.js code
- 3Blue1Brown Neural Network series
- Toy Neural Network JavaScript library
- Coding Train Neural Network Playlist
- Coding Train Doodle Classifier Playlist
What is "Machine Learning"? (From Andrew Ng's Coursera Course)
- "Field of study that gives computers the ability to learn without being explicitly programmed." -- Arthur Samuels (1959). Self-learning and checkers.
- "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E." -- Tom Mitchell (1998): Maching Learning book.
- Example: classifying images of dogs and cats.
- E = Watching you classify images as dogs or cats.
- T = Classifying images as dogs or cats.
- P = The % of images correctly classified.
- "In supervised learning, we are given a data set and already know what our correct output should look like, having the idea that there is a relationship between the input and the output." -- Andrew Ng
- "Supervised Learning is a strategy that involves a 'teacher' that trains the learning system. For example, consider facial recognition. The 'teacher' shows the network a bunch of faces (the teacher already knows the names associated with each face). The learning system makes its guesses and the teacher provides the answers. The learning system can then compare its answers to the known 'correct' ones and make adjustments according to its errors." -- Nature of Code, Chapter 10
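The loop both quotes describe (guess, compare against the known answer, adjust) can be sketched in a few lines of TensorFlow.js. This is a minimal illustration, assuming tf.js is loaded via a script tag; the tiny XOR dataset is just a stand-in for real labeled data.

```js
// The learning system: a small network with one hidden layer.
const model = tf.sequential();
model.add(tf.layers.dense({ units: 8, inputShape: [2], activation: 'sigmoid' }));
model.add(tf.layers.dense({ units: 1, activation: 'sigmoid' }));
model.compile({ optimizer: 'adam', loss: 'meanSquaredError' });

// The "teacher": inputs paired with known correct answers (XOR here).
const xs = tf.tensor2d([[0, 0], [0, 1], [1, 0], [1, 1]]);
const ys = tf.tensor2d([[0], [1], [1], [0]]);

// Each epoch the network guesses, measures its error against the
// known answers (the loss), and adjusts its weights to reduce it.
model.fit(xs, ys, { epochs: 500 }).then(() => {
  model.predict(xs).print(); // values should approach [0, 1, 1, 0]
});
```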
- Classification and regression both involve making a "prediction" based on input data.
- Classification refers to predicting an output from a discrete set of possibilities, such as a set of categories or labels. For example: "Given an input image, is it a dog or a cat?"
- Regression refers to predicting a "continuous" output (a fancy way of saying number). For example: "Given the number of bedrooms, what is the price of a house?" or "Given an input image of a cat, how much does the cat weigh?"
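To make the distinction concrete, here is a hedged TensorFlow.js sketch: structurally, the only difference between the two kinds of model is the output layer. The input shape and layer sizes are arbitrary, chosen just for illustration.

```js
// Classification: one output unit per label; softmax turns the raw
// outputs into probabilities over the discrete categories.
const classifier = tf.sequential();
classifier.add(tf.layers.dense({ units: 16, inputShape: [4], activation: 'relu' }));
classifier.add(tf.layers.dense({ units: 2, activation: 'softmax' })); // e.g. [dog, cat]
classifier.compile({ optimizer: 'adam', loss: 'categoricalCrossentropy' });

// Regression: a single linear output unit predicts a continuous number.
const regressor = tf.sequential();
regressor.add(tf.layers.dense({ units: 16, inputShape: [4], activation: 'relu' }));
regressor.add(tf.layers.dense({ units: 1 })); // e.g. price or weight
regressor.compile({ optimizer: 'adam', loss: 'meanSquaredError' });
```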
This short list comes thanks to Andrey Kurenkov's excellent A 'Brief' History of Neural Nets and Deep Learning.
- In 1943, Warren S. McCulloch, a neuroscientist, and Walter Pitts, a logician, developed the first conceptual model of an artificial neural network. In their paper, "A logical calculus of the ideas immanent in nervous activity," they describe the concept of a neuron, a single cell living in a network of cells that receives inputs, processes those inputs, and generates an output.
- Hebb's Rule from The Organization of Behavior: A Neuropsychological Theory: "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."
- Invented in 1957 by Frank Rosenblatt at the Cornell Aeronautical Laboratory (original paper), a perceptron is the simplest neural network possible: a computational model of a single neuron. A perceptron consists of one or more inputs, a processor, and a single output (see the code sketch after this list).
- In 1969, in their book Perceptrons, Marvin Minsky and Seymour Papert demonstrate that perceptrons can solve only "linearly separable" problems. AI Winter #1!
- Paul Werbos's 1974 thesis Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences proposes "backpropagation" as a solution to adjusting weights in the hidden layers of a neural network. The technique was popularized in the 1986 paper Learning representations by back-propagating errors by David Rumelhart, Geoffrey Hinton, and Ronald Williams.
- Neural Networks come back with Yann LeCun's paper Backpropagation Applied to Handwritten Zip Code Recognition. Here's a 1993 video on convolutional neural networks. But AI Winter returns with the "vanishing gradient problem."
- "Deep Learning" thaws the wintr with new methodologies for training: A fast learning algorithm for deep belief nets by Hinton, Osindero, Teh and raw power with GPUs: Large-scale Deep Unsupervised Learning using Graphics Processors
- A Quick Introduction to Neural Networks by Ujjwal Karn
- Let’s code a Neural Network from scratch by Charles Fried
- Linear Algebra Cheatsheet by Brendan Fortuner
- A 'Brief' History of Neural Nets and Deep Learning by Andrey Kurenkov
- Make Your Own Neural Network by Tariq Rashid
- Chapter 22 of The Computational Beauty of Nature by Gary Flake
- Nature of Code Chapter 10 Processing examples
- Charles Fried's Neural Network in Processing
- Another Processing Example
- Make Your Own Neural Network from Tariq Rashid
- Abishek's TensorFlow Example
- TBD