A very simple C++ DNN implementation that uses the stochastic gradient descent (SGD) optimization algorithm
-
Create an object of type "NeuralNetwork":
```cpp
//specify the neuron count at each layer
std::vector<uint32_t> layers_lengths = { 250, 40, 30, 5 };
//specify the activation functions (one per weighted layer, i.e. layers_lengths.size() - 1 = 3 here)
std::vector<NeuralNetwork::func_ptr> activations(3, tanh);
//specify the activation functions' derivatives
//(written in terms of the activated output y: tanh'(x) = 1 - tanh(x)^2 = 1 - y^2)
std::vector<NeuralNetwork::func_ptr> activations_derivatives(3, [](double x) { return 1.0 - x * x; });
NeuralNetwork NN(layers_lengths, activations, activations_derivatives);
```
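With four layers there are three weighted layers, so three activations and three derivatives are expected. They do not have to be identical at every layer; here is a minimal sketch of mixing activations, assuming `NeuralNetwork::func_ptr` is a plain `double (*)(double)` pointer (which capture-less lambdas convert to):

```cpp
#include <cmath>
#include <vector>

//hypothetical sketch: tanh on the hidden layers, sigmoid on the output layer
double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

std::vector<NeuralNetwork::func_ptr> mixed_activations = { tanh, tanh, sigmoid };
std::vector<NeuralNetwork::func_ptr> mixed_derivatives = {
    [](double y) { return 1.0 - y * y; },  //tanh' in terms of the output y
    [](double y) { return 1.0 - y * y; },
    [](double y) { return y * (1.0 - y); } //sigmoid' in terms of the output y
};
NeuralNetwork NN2(layers_lengths, mixed_activations, mixed_derivatives);
```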
---
Call the "forward_pass" function to calculate the output layer values:
```cpp
//assuming that the vector "inputs" is defined somewhere
NN.forward_pass(inputs);
```
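The length of `inputs` must match the first layer's length (250 in the example above); a minimal sketch of preparing it, assuming the features are plain doubles:

```cpp
//sketch: the input vector's size must equal the first layer's length (250 above)
std::vector<double> inputs(250, 0.0);
//fill "inputs" with your (ideally normalized) feature values before the forward pass
```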
---
You can read the output layer values from the member variable "neurons":
```cpp
for (size_t i = 0; i < NN.neurons.back().size(); i++)
{
    double output = NN.neurons.back()[i];
    //...
}
```
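For a classifier, a common follow-up is to take the index of the largest output neuron as the predicted class; a minimal sketch, assuming `neurons` is a `std::vector<std::vector<double>>` (which the loop above suggests):

```cpp
#include <algorithm>
#include <iterator>

//sketch: the index of the strongest output neuron is the predicted class
const std::vector<double>& outputs = NN.neurons.back();
size_t predicted_class = std::distance(outputs.begin(),
                                       std::max_element(outputs.begin(), outputs.end()));
```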
---
To optimize the neural network, call the "backward_pass" function:
```cpp
//the vector "desired_outputs" holds the correct values that the neural network was supposed to produce
NN.backward_pass(desired_outputs, 0.3/*learning rate*/);
```
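Putting the pieces together, SGD training is just a forward pass followed by a backward pass for each sample. A minimal sketch; `training_inputs` and `training_targets` are hypothetical containers, not part of the library:

```cpp
//sketch of a per-sample SGD training loop; "training_inputs" and
//"training_targets" are hypothetical std::vector<std::vector<double>> datasets
for (int epoch = 0; epoch < 100; epoch++)
{
    for (size_t s = 0; s < training_inputs.size(); s++)
    {
        NN.forward_pass(training_inputs[s]);                          //compute the outputs
        NN.backward_pass(training_targets[s], 0.3/*learning rate*/);  //update the weights
    }
}
```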