
Neural Network Classifier from Scratch (PyTorch)

Project Overview

This project demonstrates how to build, train, evaluate, and deploy a neural network classifier from scratch in PyTorch.
The goal was to work through the full deep learning workflow by hand rather than relying on high-level training frameworks.

The project includes:

  • Model construction using nn.Module
  • Autograd and backpropagation
  • Training loop with optimizer and loss
  • Validation evaluation
  • Model saving and inference
  • Batch prediction
  • Confidence calibration
  • Histogram visualization

This is intended as a foundational deep learning portfolio project.


Tech Stack

  • Python
  • PyTorch
  • NumPy
  • Matplotlib
  • Pandas
  • Logging

Model Architecture

A simple multilayer perceptron (MLP):

Input → Linear → ReLU → Linear → ReLU → Linear → Output (logits)

Example default configuration:

  • Input: 2 features
  • Hidden Layer 1: 64 neurons
  • Hidden Layer 2: 32 neurons
  • Output: 2 classes
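
A minimal sketch of what this architecture might look like as an nn.Module (the class name TinyMLP and the keyword arguments are illustrative, not taken verbatim from the repository):

import torch.nn as nn

class TinyMLP(nn.Module):
    """Two-hidden-layer MLP that returns raw logits (no softmax inside)."""

    def __init__(self, in_features=2, hidden1=64, hidden2=32, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden1),
            nn.ReLU(),
            nn.Linear(hidden1, hidden2),
            nn.ReLU(),
            nn.Linear(hidden2, num_classes),  # logits out
        )

    def forward(self, x):
        return self.net(x)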

Training Pipeline

The training process follows the standard deep learning loop:

  1. Forward pass
  2. Loss calculation
  3. Backpropagation
  4. Weight update
  5. Accuracy tracking
  6. Validation evaluation

Loss Function:

  • nn.CrossEntropyLoss (applied directly to the raw logits)

Optimizer:

  • torch.optim.Adam
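
Putting the pieces above together, a condensed sketch of the loop (the synthetic dataset here is a stand-in for the project's real data, and TinyMLP refers to the sketch in the architecture section):

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in data: 2 features, 2 classes.
X = torch.randn(512, 2)
y = (X.sum(dim=1) > 0).long()
train_loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

model = TinyMLP()
criterion = nn.CrossEntropyLoss()  # expects raw logits and integer class labels
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(20):
    model.train()
    correct = 0
    for xb, yb in train_loader:
        optimizer.zero_grad()
        logits = model(xb)            # 1. forward pass
        loss = criterion(logits, yb)  # 2. loss calculation
        loss.backward()               # 3. backpropagation via autograd
        optimizer.step()              # 4. weight update
        correct += (logits.argmax(dim=1) == yb).sum().item()  # 5. accuracy tracking
    print(f"epoch {epoch + 1}: train accuracy {correct / len(X):.3f}")

Step 6, validation evaluation, is shown in the next section.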

Validation Evaluation

Formal evaluation is performed on a held-out validation dataset.

Metrics computed:

  • Validation loss
  • Validation accuracy

The model is switched to evaluation mode using:

model.eval()

Gradient calculation is disabled during evaluation with the torch.no_grad() context manager.
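
Combined, a validation pass might look like this sketch (val_loader and criterion are assumed to be defined the same way as in the training example above):

model.eval()                           # switch to evaluation mode
val_loss, correct, total = 0.0, 0, 0
with torch.no_grad():                  # no gradients needed during evaluation
    for xb, yb in val_loader:
        logits = model(xb)
        val_loss += criterion(logits, yb).item() * yb.size(0)
        correct += (logits.argmax(dim=1) == yb).sum().item()
        total += yb.size(0)
print(f"val loss {val_loss / total:.4f}, val accuracy {correct / total:.3f}")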

Batch Prediction

The project supports batch inference on new data.

Features:

  • Batch input loading
  • Softmax probability output
  • Class prediction using argmax

Example output per batch:

Predicted Class: 0  
Probabilities: [0.93, 0.07]
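
A sketch of how a batch-inference step like this can be implemented (new_batch is a placeholder for real input data):

import torch.nn.functional as F

new_batch = torch.randn(4, 2)         # placeholder batch of new samples
model.eval()
with torch.no_grad():
    logits = model(new_batch)
    probs = F.softmax(logits, dim=1)  # logits -> probabilities
    preds = probs.argmax(dim=1)       # class prediction via argmax
for pred, prob in zip(preds, probs):
    print(f"Predicted Class: {pred.item()}  "
          f"Probabilities: {[round(p, 2) for p in prob.tolist()]}")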

Confidence Calibration

The project includes confidence calibration analysis.

[Figures: Prediction Confidence Distribution; Prediction Confidence Histogram]

Steps performed:

  • Softmax probability extraction
  • Confidence histogram generation
  • Prediction confidence distribution analysis

This helps answer:

"How confident is the model in its predictions?"


Visualization

The project includes:

  • Test accuracy vs. epochs
  • Test accuracy vs. learning rate
  • Confidence distribution histogram

These plots help interpret:

  • Model learning behaviour
  • Confidence reliability
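
For instance, the accuracy-vs-epochs plot can be produced from a list of per-epoch accuracies recorded during training (the values below are placeholders, not real results):

import matplotlib.pyplot as plt

acc_history = [0.62, 0.74, 0.81, 0.86, 0.88]  # placeholder per-epoch accuracies
plt.plot(range(1, len(acc_history) + 1), acc_history, marker="o")
plt.xlabel("Epoch")
plt.ylabel("Test accuracy")
plt.title("Test Accuracy vs. Epochs")
plt.show()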

Saving and Loading Model

The trained model is saved using:

torch.save(model.state_dict(), "tiny_mlp.pth")

To reload for inference, first instantiate the model with the same architecture, then:

model.load_state_dict(torch.load("tiny_mlp.pth"))
model.eval()

Key Learning Outcomes

By completing this project, I learned:

  • How backpropagation works in practice
  • How PyTorch autograd builds computation graphs
  • How to implement training loops manually
  • How to evaluate properly with validation data
  • How softmax differs from logits
  • How to save / load deep learning models
  • How confidence calibration works
  • How batch inference is structured
  • How real-world training pipelines operate

This project reflects my understanding of:

✅ Neural network fundamentals
✅ PyTorch model development
✅ Autograd & gradients
✅ Validation workflow
✅ Inference pipeline
✅ Model confidence analysis

This is my first complete PyTorch model project, and it serves as the foundation for:

  • CNNs
  • RNNs
  • Transformers
  • Deployment projects

Author

Rajesh Arigala

License

MIT License
