This repository contains code developed at AIVolved for quality testing using Machine Learning & Artificial Intelligence. Several approaches were pursued on key datasets, including:
- Eye-patch shift detection using YOLOv8 cut extraction and linear regression (92.98% accuracy)
- Clustering & SVD on ResNet output for unsupervised defect detection in soap (99.55% accuracy)
- Unsupervised, single-shot defect detection in soap using the Fourier Transform (100% accuracy)
- Cut detection in shampoo using Sobel & Canny filters + Hough Transform (no quantitative measure)
eyeshift.ipynb contains code that identifies defects in eye-patches for shampoo packets. First, a YOLOv8 model detects horizontal and vertical cuts; a linear regression is then fitted through the horizontal cuts, and eye-patches lying outside a threshold distance from the fitted line are categorised as defective.
Accuracy: 92.98%
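The regression step might look like the following minimal sketch; the cut centres and pixel threshold are hypothetical stand-ins, not values from the notebook:

```python
import numpy as np

# Hypothetical (x, y) centres of the horizontal cuts detected by YOLOv8;
# the fourth cut is vertically shifted to simulate a defective eye-patch.
cuts = np.array([[12.0, 101.2], [55.0, 100.8], [98.0, 101.5],
                 [141.0, 111.0], [184.0, 100.9], [227.0, 101.3]])

# Fit a line y = m*x + c through the horizontal cut centres.
m, c = np.polyfit(cuts[:, 0], cuts[:, 1], deg=1)

# Distance of each cut from the fitted line.
residuals = np.abs(cuts[:, 1] - (m * cuts[:, 0] + c))

# Eye-patches whose cut lies beyond a pixel threshold are flagged as shifted.
THRESHOLD_PX = 5.0  # illustrative value, not taken from the notebook
defective = residuals > THRESHOLD_PX
print(defective)  # -> [False False False  True False False]
```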
soap-binary-classifier.ipynb uses a simple fully-connected layer on the outputs of ResNet18, fine-tuned on a dataset of soap images to classify each as either defective or non-defective.
Accuracy: 100%
*Example images: Non-Defective | Defective.*
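A minimal sketch of this setup, assuming a standard torchvision ResNet18 whose final layer is swapped for a 2-class head; the dummy batch stands in for the real soap data:

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNet18 backbone with its final fully-connected layer replaced by a
# 2-class head (non-defective / defective).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative fine-tuning step on a dummy batch; the notebook
# fine-tunes on the actual soap dataset.
images = torch.randn(8, 3, 224, 224)   # stand-in for soap images
labels = torch.randint(0, 2, (8,))     # 0 = non-defective, 1 = defective
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```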
soap-feature-clustering.ipynb takes an unsupervised approach to defect detection on the same dataset: features from the ResNet18 output are reduced with a Singular Value Decomposition (SVD) and then clustered using BIRCH.
Accuracy: 99.55%
*Example images: Ground Truth | SVD & Clustering | Prediction.*
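A sketch of this pipeline using scikit-learn's TruncatedSVD and Birch; the random tensors stand in for the soap images, and the component count is illustrative:

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import Birch

# ResNet18 used as a fixed feature extractor: drop the classification head
# so the network emits its 512-d feature vector.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()
backbone.eval()

images = torch.randn(32, 3, 224, 224)    # stand-in for the soap dataset
with torch.no_grad():
    features = backbone(images).numpy()  # shape (32, 512)

# Reduce the features with a truncated SVD before clustering.
reduced = TruncatedSVD(n_components=8).fit_transform(features)

# Two clusters: defective vs non-defective (labels assigned by inspection).
labels = Birch(n_clusters=2).fit_predict(reduced)
print(labels)
```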
soap-autoencoder.ipynb, despite its name, implements a U-Net that attempts to reconstruct masked images of soap; pieces whose reconstructions deviate strongly from the originals are predicted to be defective.
Accuracy: untested.
*Example images: Masked Input | Prediction.*
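The scoring step might look like the sketch below, where `unet` is assumed to be the trained reconstruction network (the U-Net definition itself is omitted) and the centre-patch masking scheme is illustrative:

```python
import torch
import torch.nn.functional as F

def defect_score(unet, image, patch=32):
    """Mask a square patch, reconstruct it, and return the reconstruction error."""
    masked = image.clone()
    h, w = image.shape[-2:]
    y, x = (h - patch) // 2, (w - patch) // 2
    masked[..., y:y + patch, x:x + patch] = 0.0  # zero out the centre patch
    with torch.no_grad():
        recon = unet(masked.unsqueeze(0)).squeeze(0)
    # High error over the masked region suggests the piece deviates from
    # the non-defective distribution the network learned to reconstruct.
    return F.mse_loss(recon[..., y:y + patch, x:x + patch],
                      image[..., y:y + patch, x:x + patch]).item()
```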
soap-fourier-analysis.ipynb is a single-shot, unsupervised method for defect detection on a normalised dataset. A single non-defective reference image is chosen, and the squared complex difference between its Fourier Transform and that of every other image in the dataset is computed and clustered.
Accuracy: 100%
*Example images: Fourier Transform of Soap | Histogram of Differences to Reference.*
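A minimal sketch of the method; random arrays stand in for the normalised dataset, and a two-cluster KMeans is assumed for the clustering step:

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in data: `reference` is a non-defective greyscale image and
# `images` is the normalised dataset.
reference = np.random.rand(128, 128)
images = [np.random.rand(128, 128) for _ in range(20)]

ref_fft = np.fft.fft2(reference)

# Squared complex difference between each image's spectrum and the reference.
scores = np.array([np.sum(np.abs(np.fft.fft2(img) - ref_fft) ** 2)
                   for img in images])

# Cluster the scalar scores into two groups (defective / non-defective).
labels = KMeans(n_clusters=2, n_init=10).fit_predict(scores.reshape(-1, 1))
print(labels)
```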
shampoo.ipynb contains code that identifies defective cuts in shampoo packets. First, vertical cuts are extracted using a YOLOv8 model; the cuts are then equalised and normalised, and a Sobel filter is applied to enhance edges. A Canny edge detector is applied next, followed by a Hough transform to identify the cuts.
Accuracy: visually excellent. No quantitative measure.
*Example images: Masked Input | Cuts Extracted from YOLOv8 | Equalised & Sobel Filtered | Canny Edge-detection & Hough Transform.*
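A sketch of the filtering pipeline in OpenCV, assuming `cut.png` is a greyscale cut crop already extracted by the YOLOv8 model; all filter parameters are illustrative:

```python
import cv2
import numpy as np

# Hypothetical input: a greyscale crop of one vertical cut.
cut = cv2.imread("cut.png", cv2.IMREAD_GRAYSCALE)

# Equalise and normalise, then apply a Sobel filter to enhance edges.
eq = cv2.equalizeHist(cut)
norm = cv2.normalize(eq, None, 0, 255, cv2.NORM_MINMAX)
sobel = cv2.Sobel(norm, cv2.CV_64F, dx=1, dy=0, ksize=3)
sobel = cv2.convertScaleAbs(sobel)

# Canny edge detection followed by a probabilistic Hough transform
# to pick out the straight lines of the cuts.
edges = cv2.Canny(sobel, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                        minLineLength=30, maxLineGap=5)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(cut, (x1, y1), (x2, y2), 255, 1)  # draw detected cuts
```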