
NeuralNetworkCPP

A simple C++ neural network library implemented from scratch for educational purposes. This project demonstrates the fundamentals of neural networks, including forward and backward propagation, with optional CUDA GPU acceleration for faster training on NVIDIA GPUs.

  • Author: Timothée Blanpied
  • Created: 2023/05

Features

  • Customizable neural network architecture with multiple layers
  • Support for common activation functions (ReLU, Sigmoid, TanH)
  • Gradient descent-based training with configurable learning rate
  • Matrix operations library for CPU and GPU (CUDA) computations
  • Example implementations, including digit recognition on MNIST dataset
  • Terminal-based progress bars and accuracy graphs during training
  • Save and load neural network models

Tech Stack

  • Language: C++ (C++11 or later)
  • GPU Acceleration: CUDA (optional, requires NVIDIA GPU and CUDA toolkit)
  • Build System: Makefile
  • Dependencies: Standard C++ libraries, CUDA runtime (if enabled)

Installation

Prerequisites

  • C++ compiler supporting C++11 or later (e.g., GCC)
  • Make
  • For GPU acceleration: NVIDIA GPU with CUDA toolkit installed

Setup

  1. Clone the repository:

    git clone https://github.com/tblanpied/NeuralNetworkCPP.git
    cd NeuralNetworkCPP
  2. Build the project:

    make

    This creates the executable at build/bin/nncpp.

  3. (Optional) Enable CUDA for GPU acceleration:

    • Install CUDA toolkit
    • Edit inc/Config.hpp and set ENABLE_CUDA to YES
    • Edit Makefile and set USE_CUDA = YES
    • Rebuild: make clean && make
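
The two switches live in different files; as an illustration only (the exact macro names and the YES/NO convention come from the repository's own files, so treat this as a sketch, not the literal file contents):

```cpp
// inc/Config.hpp — hypothetical excerpt illustrating the CUDA switch:
#define ENABLE_CUDA YES

// Makefile — the corresponding Make variable (shown here as a comment):
//   USE_CUDA = YES
```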

Usage

Basic Example

#include <iostream>
#include <vector>

#include "NeuralNetwork.hpp"

using namespace std;

int main() {
    // Create a neural network with 3 layers: input (2), hidden (4), output (1)
    // and a learning rate of 0.1
    NeuralNetwork nn({2, 4, 1}, 0.1);

    // Set activation functions
    nn.setActivationFunction(RELU, 0);    // Hidden layer
    nn.setActivationFunction(SIGMOID, 1); // Output layer

    // Prepare training data (XOR example)
    vector<vector<float>> inputs = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
    vector<vector<float>> targets = {{0}, {1}, {1}, {0}};

    // Train the network
    nn.train(inputs, targets, 1000, 4); // 1000 epochs, batch size 4

    // Make a prediction on a single input
    vector<float> prediction = nn.predict({1, 0});
    cout << "Prediction for [1,0]: " << prediction[0] << endl;

    return 0;
}

Running Examples

The project includes example programs accessible via the command-line tool:

  • Digit Recognition: Train a neural network on the MNIST dataset

    ./build/bin/nncpp --example digit_recognition

    Place the dataset files in the datasets/ directory: the labels file at datasets/mnist_labels.csv and the digit images under datasets/mnist/*.png.

  • Create a New Network: Create and save a new neural network

    ./build/bin/nncpp --new mynetwork.nn --shape 784,128,10
  • Load and Inspect a Network: Load an existing network and display its structure

    ./build/bin/nncpp --neural_network mynetwork.nn --shape

Project Structure

  • src/: Source code files (.cpp)
  • inc/: Header files (.hpp)
  • datasets/: Sample datasets (e.g., MNIST for digit recognition)
  • build/: Build artifacts (generated, ignored in git)
  • Makefile: Build configuration
  • inc/Config.hpp: Configuration settings (e.g., CUDA enable/disable)

Limitations

  • This is a prototype implementation focused on learning and demonstration
  • Limited to basic feedforward neural networks (no convolutional layers, RNNs, etc.)
  • GPU acceleration is experimental and may require tuning for large networks
  • No advanced optimization techniques (e.g., the Adam optimizer) and no regularization beyond dropout
  • Training data must fit in memory

License

This project is licensed under the GNU General Public License v3.0 - see the LICENSE file for details.

This project is provided as-is for educational use. It is not actively maintained or accepting external contributions.
