A simple C++ neural network library implemented from scratch for educational purposes. This project demonstrates the fundamentals of neural networks, including forward and backward propagation, with optional CUDA GPU acceleration for faster training on NVIDIA GPUs.
- Author: Timothée Blanpied
- Created: 2023/05
- Customizable neural network architecture with multiple layers
- Support for common activation functions (ReLU, Sigmoid, Tanh)
- Gradient descent-based training with configurable learning rate
- Matrix operations library for CPU and GPU (CUDA) computations
- Example implementations, including digit recognition on MNIST dataset
- Terminal-based progress bars and accuracy graphs during training
- Save and load neural network models
- Language: C++ (C++11 or later)
- GPU Acceleration: CUDA (optional, requires NVIDIA GPU and CUDA toolkit)
- Build System: Makefile
- Dependencies: Standard C++ libraries, CUDA runtime (if enabled)
- C++ compiler supporting C++11 or later (e.g., GCC)
- Make
- For GPU acceleration: NVIDIA GPU with CUDA toolkit installed
- Clone the repository:

  ```sh
  git clone https://github.com/tblanpied/NeuralNetworkCPP.git
  cd NeuralNetworkCPP
  ```

- Build the project:

  ```sh
  make
  ```

  This creates the executable at `build/bin/nncpp`.

- (Optional) Enable CUDA for GPU acceleration:
  - Install the CUDA toolkit
  - Edit `inc/Config.hpp` and set `ENABLE_CUDA` to `YES`
  - Edit the `Makefile` and set `USE_CUDA = YES`
  - Rebuild:

    ```sh
    make clean && make
    ```
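The `ENABLE_CUDA` switch in `inc/Config.hpp` is a compile-time setting; the exact layout of that header is not reproduced here, but a plausible sketch (the `ENABLE_CUDA`/`YES` names come from the steps above, the rest is assumed) might look like:

```cpp
// inc/Config.hpp (hypothetical layout -- check the actual file)
#define YES 1
#define NO  0

// Set to YES to compile the CUDA code paths.
// Requires the CUDA toolkit and USE_CUDA = YES in the Makefile.
#define ENABLE_CUDA NO
```

Because the flag is resolved at compile time, the project must be rebuilt (`make clean && make`) after changing it.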
```cpp
#include <iostream>
#include <vector>

#include "NeuralNetwork.hpp"

using namespace std;

int main() {
    // Create a neural network with 3 layers: input (2), hidden (4), output (1)
    NeuralNetwork nn({2, 4, 1}, 0.1);

    // Set activation functions
    nn.setActivationFunction(RELU, 0);    // Hidden layer
    nn.setActivationFunction(SIGMOID, 1); // Output layer

    // Prepare training data (XOR example)
    vector<vector<float>> inputs  = {{0,0}, {0,1}, {1,0}, {1,1}};
    vector<vector<float>> targets = {{0}, {1}, {1}, {0}};

    // Train the network
    nn.train(inputs, targets, 1000, 4); // 1000 epochs, batch size 4

    // Make a prediction
    vector<float> prediction = nn.predict({{1, 0}});
    cout << "Prediction for [1,0]: " << prediction[0] << endl;

    return 0;
}
```

The project includes example programs accessible via the command-line tool:
- **Digit Recognition**: Train a neural network on the MNIST dataset

  ```sh
  ./build/bin/nncpp --example digit_recognition
  ```

  The dataset files should be placed in the `datasets/` directory, with the labels file at `datasets/mnist_labels.csv` and the digit images under `datasets/mnist/*.png`.

- **Create a New Network**: Create and save a new neural network

  ```sh
  ./build/bin/nncpp --new mynetwork.nn --shape 784,128,10
  ```

  The shape `784,128,10` fits MNIST: 784 inputs (one per pixel of a 28×28 image), 128 hidden units, and 10 output classes (one per digit).

- **Load and Inspect a Network**: Load an existing network and display its structure

  ```sh
  ./build/bin/nncpp --neural_network mynetwork.nn --shape
  ```
- `src/`: Source code files (`.cpp`)
- `inc/`: Header files (`.hpp`)
- `datasets/`: Sample datasets (e.g., MNIST for digit recognition)
- `build/`: Build artifacts (generated, ignored in git)
- `Makefile`: Build configuration
- `inc/Config.hpp`: Configuration settings (e.g., CUDA enable/disable)
- This is a prototype implementation focused on learning and demonstration
- Limited to basic feedforward neural networks (no convolutional layers, RNNs, etc.)
- GPU acceleration is experimental and may require tuning for large networks
- No advanced optimization techniques (e.g., Adam optimizer, regularization beyond dropout)
- Training data must fit in memory
This project is licensed under the GNU General Public License v3.0 - see the LICENSE file for details.
This project is provided as-is for educational use. It is not actively maintained or accepting external contributions.