QR Code Detection with YOLOv8


This repository contains the complete source code for training and inference of a YOLOv8 model to detect QR codes in images.

The project prepares and splits the dataset, trains YOLOv8, and runs inference on test images, writing bounding boxes to both annotated images and a JSON file.


Workflow

  • Set up the environment
  • Create labels for the training images using Label Studio
  • Run train.py
  • Run infer.py

📂 Project Structure

├── labels/
│   ├── img001.txt
│   ├── img002.txt
│   ├── ...
│   └── all YOLO-annotated txt files for the train_images in QR_Dataset/
│
├── QR_Dataset/
│   ├── train_images/        # Training images
│   ├── test_images/         # Test images for inference
│   ├── labels/              # YOLO-format label files (.txt), 80/20 train/val split (generated by train.py)
│   │   ├── train/
│   │   └── val/
│   ├── images/              # 80/20 train/val split of train_images/ (generated by train.py)
│   │   ├── train/
│   │   └── val/
│   └── data.yaml            # Auto-generated by train.py
│
├── src/
│   └── model/               # YOLO training outputs (weights, logs)
│
├── outputs/
│   ├── image_output/        # Annotated inference images
│   ├── submission_detection_1.json       # Final detection results
│   └── submission_decoding_2.json       # Final detection, decoding and classification results
│
├── train.py                 # Training script
├── infer.py                 # Inference script
├── requirements.txt
└── README.md                # Project documentation


⚙️ Environment Setup

1️⃣ Install the project requirements

pip install -r requirements.txt

2️⃣ Install Ultralytics (if it is not already pulled in by requirements.txt)

pip install ultralytics
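
As a quick sanity check that the environment is ready, the packages can be verified from Python (a convenience sketch, not part of the repository's scripts; the `installed` helper is mine):

```python
# Convenience check -- not part of the repository's scripts.
import importlib.util

def installed(packages):
    """Map each package name to whether Python can find it."""
    return {p: importlib.util.find_spec(p) is not None for p in packages}

# Both should report True after the setup steps above.
print(installed(["ultralytics", "label_studio"]))
```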

Manual annotation of training images in QR_Dataset

1. Install Label Studio

Open a terminal and run:

pip install label-studio

2. Start Label Studio

Launch the tool with:

label-studio start

3. Upload Images

  • After Label Studio opens in your browser, create a new project.
  • Upload all images from your dataset.

4. Create Custom Labels

  • Add a custom label (e.g., "QR Code") for annotating bounding boxes.

5. Annotate Images

  • Open each image in the project.
  • Draw bounding boxes around all QR codes present in the image.
  • Save each annotation.

6. Export Annotations in YOLO Format

  • After completing all annotations, export them in YOLO format (.txt files).
  • Each image should have a corresponding .txt annotation file.

Important

The .txt file name must match the image file name. Example:

Image: train_images/img001.jpg
Annotation: labels/img001.txt
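
Each line of a YOLO-format .txt file stores `class cx cy w h`, with coordinates normalized to the image size. A small sketch of converting one line back to a pixel bounding box (the helper name is mine, not from the repository):

```python
# Decode one YOLO label line ("class cx cy w h", all coords normalized to 0-1).
def yolo_to_pixel_bbox(line, img_w, img_h):
    cls, cx, cy, w, h = line.split()
    cx, w = float(cx) * img_w, float(w) * img_w
    cy, h = float(cy) * img_h, float(h) * img_h
    # Center + size -> corner coordinates [x_min, y_min, x_max, y_max]
    return int(cls), [int(cx - w / 2), int(cy - h / 2),
                      int(cx + w / 2), int(cy + h / 2)]

# A box centered in a 100x100 image covering half of each dimension:
print(yolo_to_pixel_bbox("0 0.5 0.5 0.5 0.5", 100, 100))  # -> (0, [25, 25, 75, 75])
```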

Note

Only a few images are included in the labels/ and QR_Dataset/ folders to illustrate the layout. Running train.py on the full QR_Dataset produces the folder structure shown above.

🚀 Training

Run the training script to prepare the dataset and train YOLOv8:

python train.py

This will:

  • Split your dataset into train/val
  • Generate data.yaml
  • Train YOLOv8 for 50 epochs
  • Save the best model in src/model/qr_yolo_model_aug/weights/best.pt
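
The split step can be pictured as below. This is an assumed sketch of what train.py does, not its actual code; the Ultralytics training call is shown commented since it needs the installed package and GPU time:

```python
import random

def split_80_20(paths, seed=42):
    """Deterministically shuffle file paths and split them 80% train / 20% val."""
    paths = sorted(paths)
    random.Random(seed).shuffle(paths)
    cut = int(len(paths) * 0.8)
    return paths[:cut], paths[cut:]

# The training call itself (standard Ultralytics API; the exact arguments
# used by train.py may differ):
# from ultralytics import YOLO
# YOLO("yolov8n.pt").train(data="QR_Dataset/data.yaml", epochs=50,
#                          project="src/model", name="qr_yolo_model_aug")
```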

Original structure of dataset before running train.py

├── QR_Dataset/
   ├── train_images/        # Training images
   ├── test_images/         # Test images for inference

Structure of dataset after running train.py

├── QR_Dataset/
   ├── train_images/        # Training images
   ├── test_images/         # Test images for inference
   ├── labels/              # YOLO-format label files (.txt), 80/20 train/val split (generated by train.py)
   │   ├── train/
   │   └── val/
   ├── images/              # 80/20 train/val split of train_images/ (generated by train.py)
   │   ├── train/
   │   └── val/
   └── data.yaml            # Auto-generated by train.py
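
The generated data.yaml typically follows the standard Ultralytics layout. A plausible sketch (the exact keys and the class name `qr_code` are assumptions; check the file train.py actually writes):

```yaml
# Assumed shape -- verify against the data.yaml generated by train.py
path: QR_Dataset
train: images/train
val: images/val
names:
  0: qr_code
```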

🔎 Inference

Run inference on a folder of test images:

python infer.py

Note

In the infer.py script, update the line

IMAGES_FOLDER = "QR_Dataset/test_images"

to your own custom folder path. This tells the code where to look for images, and the output (annotated images and JSON) will be generated based on the images inside that folder.

Note

In the infer.py script, make sure this line

MODEL_PATH = "src/model/qr_yolo_model_aug/weights/best.pt"

properly points to the best.pt weights generated in src/model. By default it points to the current model, but if you train more than once, update the path accordingly.

This will:

  • Load your trained YOLOv8 model
  • Run inference on all .jpg / .png images in the input folder
  • Save annotated images in outputs/image_output/
  • Save detection results in outputs/submission_detection_1.json
  • Save decoding+classification results in outputs/submission_decoding_2.json
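
The image-gathering step can be sketched as follows (assumed implementation; the actual infer.py may differ, and the model call is shown commented because it needs the trained weights):

```python
from pathlib import Path

def find_images(folder):
    """Collect the .jpg / .png files that inference will process."""
    exts = {".jpg", ".png"}
    return sorted(p for p in Path(folder).iterdir() if p.suffix.lower() in exts)

# Running the trained detector on each image (standard Ultralytics API):
# from ultralytics import YOLO
# model = YOLO("src/model/qr_yolo_model_aug/weights/best.pt")
# for img in find_images("QR_Dataset/test_images"):
#     result = model(str(img))[0]  # result.boxes holds the detections
```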

Example Detection JSON output:

[
  {
    "image_id": "image_001",
    "qrs": [
      {"bbox": [34, 45, 120, 200]}
    ]
  },
  {
    "image_id": "image_002",
    "qrs": []
  }
]
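
Entries in this schema can be assembled with a small helper (the function name is mine; infer.py's internals are not shown in this README):

```python
import json

def to_detection_entry(image_id, bboxes):
    """Format one image's detections into the submission JSON schema above."""
    return {"image_id": image_id, "qrs": [{"bbox": list(b)} for b in bboxes]}

entries = [
    to_detection_entry("image_001", [[34, 45, 120, 200]]),
    to_detection_entry("image_002", []),  # image with no detections
]
print(json.dumps(entries, indent=2))
```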

Example Decoding JSON output:

[
  {
    "image_id": "image_001",
    "qrs": [
      {
         "bbox": [34, 45, 120, 200],
         "value": "5a0SBZ0D",
         "type": "serial"
      }
    ]
  },
  {
    "image_id": "image_002",
    "qrs": [
      {
         "bbox": [869, 616, 990, 730],
         "value": "",
         "type": "undecoded"
      }
   ]
  }
]
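
A QR whose value could not be read is reported with an empty string and type "undecoded". A sketch of building one such entry; the actual classification rules (e.g. when a value counts as "serial") live in infer.py and are assumptions here:

```python
def to_decoding_entry(image_id, detections):
    """detections: list of (bbox, decoded_value, qr_type) triples.
    An empty decoded value is always reported as type "undecoded"."""
    qrs = [{"bbox": list(bbox),
            "value": value,
            "type": qr_type if value else "undecoded"}
           for bbox, value, qr_type in detections]
    return {"image_id": image_id, "qrs": qrs}

print(to_decoding_entry("image_002", [([869, 616, 990, 730], "", "serial")]))
```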

Author

Nithish Gowda - BTech (Hons.) CSE, AI & ML Major

About

YOLOv8-based QR code detector with training + inference and JSON output.
