
Optimized Vision Transformer Training using GPU and Multi-threading

This repository contains optimized implementations of Convolutional Neural Network (CNN), Transformer, and Vision Transformer (ViT) models.

Authors

Anonymized during the paper submission process.

Overview

This project focuses on optimizing Vision Transformer training using GPU acceleration and multi-threading techniques. It provides implementations of popular deep learning models, including Convolutional Neural Networks (CNN), Transformer, and a customized version of Vision Transformer (ViT) tailored for improved performance.
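
As a rough illustration of the two techniques, the sketch below moves the model and each batch to the GPU and uses PyTorch's DataLoader workers to load batches in parallel. This is a minimal, generic example of the pattern, not the repository's actual training code: the dataset (CIFAR-10), the stand-in model, and all hyperparameters are assumptions.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    import torchvision
    import torchvision.transforms as transforms

    # Run on the GPU when one is available, otherwise fall back to the CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # CIFAR-10 is assumed here purely for illustration.
    train_set = torchvision.datasets.CIFAR10(
        root="./data", train=True, download=True,
        transform=transforms.ToTensor())

    # num_workers > 0 launches parallel workers that load and preprocess
    # batches concurrently; pin_memory speeds up host-to-GPU copies.
    train_loader = DataLoader(train_set, batch_size=128, shuffle=True,
                              num_workers=4, pin_memory=True)

    # Stand-in classifier; the repository's models live in cnn.py,
    # transformer.py, and vit.py.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for images, labels in train_loader:
        images = images.to(device, non_blocking=True)
        labels = labels.to(device, non_blocking=True)
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()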

Contents

  • CNN: Implementation of Convolutional Neural Networks.
  • Transformer: Implementation of the Transformer model.
  • ViT: Customized version of the Vision Transformer (ViT) model, based on the vision-transformers-cifar10 repository (a minimal patch-embedding sketch follows this list).
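
To make the ViT entry concrete: the distinctive first step of a Vision Transformer is splitting each image into fixed-size patches and projecting them to token embeddings. The sketch below shows that step only; it is illustrative, and the repository's actual ViT (which follows vision-transformers-cifar10) may differ in its sizes and details.

    import torch
    import torch.nn as nn

    class PatchEmbedding(nn.Module):
        """Split an image into non-overlapping patches and embed each one.

        Illustrative only; sizes below assume 32x32 CIFAR-style images.
        """
        def __init__(self, patch_size=4, in_channels=3, embed_dim=192):
            super().__init__()
            # A strided convolution is the standard trick: each kernel
            # application covers exactly one non-overlapping patch.
            self.proj = nn.Conv2d(in_channels, embed_dim,
                                  kernel_size=patch_size, stride=patch_size)

        def forward(self, x):
            # (B, C, H, W) -> (B, embed_dim, H/P, W/P) -> (B, patches, embed_dim)
            x = self.proj(x)
            return x.flatten(2).transpose(1, 2)

    tokens = PatchEmbedding()(torch.randn(1, 3, 32, 32))
    print(tokens.shape)  # torch.Size([1, 64, 192])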

Getting Started

Prerequisites

  • Python (>=3.6)
  • Anaconda 3
  • PyTorch
  • CUDA-enabled GPU (for GPU acceleration; see the quick check below)
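
Before training, it is worth confirming that PyTorch can actually see the GPU. Assuming a working PyTorch install, this standard check suffices:

    import torch

    # True only if PyTorch was built with CUDA and a GPU is visible.
    print(torch.cuda.is_available())
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))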

Installation

  1. Clone this repository:

    git clone https://github.com/jonledet/vision-transformer.git
  2. Create and activate a new Anaconda environment:

    conda create --name your-env-name python=3.6
    conda activate your-env-name
  3. Install dependencies:

    pip install -r requirements.txt

Usage

  • To run the models, execute the corresponding Python script:

    python cnn.py
    python transformer.py
    python vit.py
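
Since the project emphasizes multi-threading, note that PyTorch's CPU thread count can also be controlled from Python. The calls below are standard PyTorch; whether the repository's scripts expose their own options for this is not documented here.

    import torch

    # Cap the number of intra-op CPU threads PyTorch uses for parallel ops.
    torch.set_num_threads(4)
    print(torch.get_num_threads())  # -> 4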

Acknowledgments

The ViT implementation builds on the vision-transformers-cifar10 repository.

License

This project is licensed under the MIT License.
