(Initially I thought of going through each topic one by one, but that doesn't seem practical. So I'm writing about the topic I'm currently working on.)
Disclaimer: These algorithms will look pretty dumb until you get the thrill of it and see for yourself what magic you can do with them.
- Simple Image Pre-Processing Techniques (Rotation, Morphological Operations)
- Image Thresholding
- Morphological Operations
- Connected Components and Contour Detection
- Edge Detection Algorithms
- Histogram Equalization
- Filtering (Linear Filters and Non-Linear Filters)
- Frequency Domain Analysis
- Image Similarity Detection
- Feature Extraction (SIFT, SURF, AKAZE, etc.)
- Template Matching
- Anisotropic Filters
- Image Moments
- Training Common Mistakes
Libraries used:
opencv
numpy
matplotlib
Images are just a 3-D matrix of 8-bit numbers, 0-255 (referred to from here on as uint8), where 0 means black and 255 means white. Images are stored in an array of shape:
(height, width, dimensions)
dimensions = 3 for colored images, dimensions = 1 for grayscale images (in practice OpenCV returns grayscale images as a plain (height, width) array).
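A quick way to see this for yourself (a minimal sketch; "some_image.png" is a placeholder for any image you have on disk):

```python
import cv2
import numpy as np

img = cv2.imread("some_image.png")            # color: shape (height, width, 3), dtype uint8
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # grayscale: shape (height, width), dtype uint8

print(img.shape, img.dtype)    # e.g. (480, 640, 3) uint8
print(gray.min(), gray.max())  # values lie in [0, 255]
```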
Some of the most important, and probably most overlooked, techniques in Computer Vision are the most basic ones.
Techniques:
- Rotation
For a given image, you rotate the image by a certain angle $\theta$. How rotation works in OpenCV can be a little different from the rotation we've learned; the operation is the same but more generalized in OpenCV.

```python
def rotate_image(img: np.ndarray, angle: int) -> np.ndarray:
    """Rotate a grayscale image by `angle` degrees around its center."""
    rows, cols = img.shape
    M = cv2.getRotationMatrix2D(((cols - 1) / 2.0, (rows - 1) / 2.0), angle, 1)
    img_rotated = cv2.warpAffine(img, M, (cols, rows))
    print(img_rotated.shape)
    return img_rotated
```
For more information, refer to rotation.ipynb.
An image basically has 3 channels: Red, Green, Blue, further referred to as RGB. For some reason OpenCV uses BGR. (Images can have another channel, the alpha channel, but we'll ignore it.) In any image pre-processing, if we can decrease dimensions without compromising the information, we tend to do so (Google "Curse of Dimensionality"). An image can be converted into grayscale by taking the mean over the channels, but we could go one step further and binarize the image. This technique is known as image thresholding.
Algorithm: For any pixel 'p' in an image matrix M and a given threshold 't'.
```python
if p < t:
    p = 0
else:
    p = 255
```
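The same rule, vectorized (a minimal sketch; `img` is assumed to be a grayscale uint8 array and `t` a threshold you've already picked):

```python
def binarize(img: np.ndarray, t: int) -> np.ndarray:
    # Pixels below the threshold become 0 (black), the rest become 255 (white).
    return np.where(img < t, 0, 255).astype(np.uint8)
```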
The algorithm is fundamentally simple but there's a catch. How do we find the optimum threshold?
In general, there are two kinds of thresholding techniques.
- Global Thresholding
- Local Thresholding
Global Thresholding
In Global Thresholding, there is a single threshold for the entire image matrix: based on the given image, one threshold is calculated and applied everywhere. Some of the methods of Global Thresholding (an OpenCV sketch follows the list):
- Otsu Thresholding
- Entropy Based Method
- Based on Histogram Analysis
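For instance, Otsu's method is available directly through cv2.threshold (a minimal sketch; `gray` is assumed to be a grayscale uint8 image):

```python
# With THRESH_OTSU the threshold argument (0 here) is ignored and OpenCV
# picks a threshold automatically from the image histogram.
t, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("Otsu picked threshold:", t)
```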
Local Thresholding
A given image matrix is sub-divided into smaller matrices (sub-images). For each sub-image, a threshold is computed and then applied. Some of the methods for Local Thresholding (see the adaptive-thresholding sketch after this list):
- Niblack's Binarization Method
- Adaptive Thresholding
- PASS Algorithm
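Adaptive thresholding ships with OpenCV (a minimal sketch; the block size and constant C below are arbitrary values you'd tune per image):

```python
binary = cv2.adaptiveThreshold(
    gray, 255,
    cv2.ADAPTIVE_THRESH_GAUSSIAN_C,  # local threshold = Gaussian-weighted neighborhood mean - C
    cv2.THRESH_BINARY,
    11,  # blockSize: neighborhood over which each local threshold is computed
    2,   # C: constant subtracted from the local mean
)
```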
#TODO: Explain all algorithms with citations and code.
(Morphological Operations are done only on binarized images.) Code: morphology.ipynb
They are non-linear operations related to the shape and morphology of an image. If we're doing morphological operations on images, we are going to need an image kernel: a 2-D matrix smaller than the image.
(Definition could look intimidating but they are one of the most easy to understand algorithms in Image Processing)
- Image Dilation:
For a white pixel inside a binarized image, convert its neighboring black pixels to white. E.g. if I have a white circle of radius r, it'll have a bigger circle of radius r' after dilation. For two given matrices, an image matrix and a structuring matrix, the structuring matrix is superimposed onto the image matrix. A pixel element is 1 (white) if at least one pixel under the kernel (after superimposition) is 1 (white).
- Image Erosion:
Opposite of dilation. The structuring matrix is superimposed onto the image matrix. A pixel element is white only if all the pixels under the kernel (after superimposition) are 1 (white).
- Image Opening:
Image Opening is Erosion followed by Dilation.
- Image Closing:
Image Closing is Dilation followed by Erosion.
Personally, I don't ever think in terms of the Opening or Closing operation. E.g. if I have small white noise specks around a big white blob laid on an empty canvas, what I'd like to do is remove the smaller specks. So I'd first erode the image so that the smaller white specks are removed but not the main blob. After that, I'd dilate the image so that the blob of interest comes back to (nearly) its original size. And sometimes you'd want to use different kernels for opening and closing too; a small sketch of these operations follows.
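Here's roughly what that looks like in OpenCV (a minimal sketch; the 5x5 kernel is an arbitrary choice and `binary` is assumed to be a binarized uint8 image):

```python
kernel = np.ones((5, 5), np.uint8)

dilated = cv2.dilate(binary, kernel, iterations=1)
eroded = cv2.erode(binary, kernel, iterations=1)

# Opening (erode then dilate) removes small white specks;
# closing (dilate then erode) fills small black holes.
opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
```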
- Image Skeletonization
Skeletonization is the process of reducing the foreground regions of a binary image to a skeleton-like image that preserves the connectivity of the original region (well, basically creating a skeleton). Rather than explaining it, it's better to view the code and dissect it to see how it works, and I leave it up to the reader to refer to the code and do so.
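For orientation, this is one common way a morphological skeleton is computed: repeatedly erode the image and collect whatever opening throws away at each step (a sketch only; it may not match the notebook's exact implementation):

```python
def skeletonize(binary: np.ndarray) -> np.ndarray:
    img = binary.copy()
    skel = np.zeros_like(img)
    kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
    while cv2.countNonZero(img) > 0:
        opened = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
        # Pixels removed by the opening belong to the skeleton at this scale.
        skel = cv2.bitwise_or(skel, cv2.subtract(img, opened))
        img = cv2.erode(img, kernel)
    return skel
```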
Code: connected_components_contour.ipynb
Connected Component Labelling is used in Computer Vision to detect regions in binary digital images, although color images with higher dimensionality can also be processed.
The Connected Components Algorithm is one of the fundamentally simpler algorithms: for any binarized image, find all the (white) pixels that are connected to each other and label them. E.g. take a small matrix with two distinct white patches (255). What connected component labelling does is label those two patches based on whether they are connected or not. The label matrix you get back is pretty self-explanatory; see the small illustration below.
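Something like this (an illustrative toy example of my own, since the original example matrices aren't reproduced here):

```python
# A tiny binary image with two separate white patches...
image = np.array([
    [255, 255,   0,   0,   0],
    [255, 255,   0,   0, 255],
    [  0,   0,   0, 255, 255],
], dtype=np.uint8)

# ...and the label matrix connected component labelling would produce:
# one label per connected patch, 0 for the background.
labels = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 2],
    [0, 0, 0, 2, 2],
])
```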
Algorithm:
(There have been various algorithms for connected components, but I'll write a simple and inefficient one.)
- Set component_no = 1 (0 is reserved for the background).
- For each pixel in the image, going over rows and columns:
- If the pixel is 0 (background), continue.
- Check the pixel's neighbors. If a neighbor is already labelled, take the lowest neighboring label. If the neighbors carry multiple labels, also change all the pixels of the other labels to that lowest label (if one pixel has two neighbors labelled 1 and 2, label the pixel 1 and change all pixels labelled 2 to 1, since this pixel connects the two blobs).
- Else, label the pixel with component_no and increment component_no.
In OpenCV, to do a Connected Components Labelling:
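A minimal sketch (`binary` is assumed to be a binarized uint8 image):

```python
# labels is an int32 matrix of the same size as the image, where 0 is the
# background and 1..num_labels-1 are the individual components.
num_labels, labels = cv2.connectedComponents(binary)

# Or, if you also want per-component bounding boxes, areas and centroids:
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
```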
We could take connected components as the base and add a boundary-following algorithm on top. What we're trying to get is a proper boundary of each connected component, and in turn detect contours for a given image.
OpenCV provides a simple API for contour detection:

```python
# The retrieval mode and approximation method are choices you'd tune;
# RETR_EXTERNAL and CHAIN_APPROX_SIMPLE are common defaults.
contours, hierarchy = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
```
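A typical follow-up, just to look at the result (assuming `img` is a single-channel binary image):

```python
vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)          # 3-channel copy to draw on
cv2.drawContours(vis, contours, -1, (0, 255, 0), 2)  # -1 draws every contour
```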
You might have heard about moments in statistics.
In simple terms, image moments are a set of statistical parameters to measure the distribution of where the pixels are and their intensities.
Mathematically, the raw image moment of order $(i + j)$ is

$$M_{ij} = \sum_{x}\sum_{y} x^i \, y^j \, I(x, y)$$

where $x, y$ refer to the row and column index and $I(x, y)$ is the intensity of the pixel at that location.
Simple Uses Of Image Moments: (Used to describe properties of a binary image)
To calculate the area of a binary image, you'd need to calculate the zeroth moment, as

$$M_{00} = \sum_{x}\sum_{y} I(x, y)$$

(for an image binarized to 0/1; with a 0/255 image the sum is 255 times the pixel count).
This might look intimidating, but converting it to code might change your perspective.
```python
def get_area(img):
    height, width = img.shape
    area = 0
    for w in range(0, width):
        for h in range(0, height):
            area += img[h, w]
    return area

# Easier and faster method
area = np.sum(img)
# Or
area = cv2.moments(img)['m00']
```
The centroid of an image is just a pixel location, given by $\bar{x} = M_{10}/M_{00}$ and $\bar{y} = M_{01}/M_{00}$:
```python
def get_centroid(img):
    mu = cv2.moments(img)
    centroid = mu['m10'] // mu['m00'], mu['m01'] // mu['m00']
    return centroid
```
Personally, feature extraction is one of the most exciting topics, and the places it is used are immense: whether it be Google Image Search, the panorama images we take on our phones, or many other tasks, feature extraction is involved. Feature extraction alone is not enough for most of these tasks, though; Google Image Search probably uses Bag of Words for images, but more on that later. Meanwhile, panorama images use feature matching to stitch the images together.
Firstly, let's discuss what Feature Extraction is.
For any Image, there are certain features in that image. If we could extract the important features from that image, we'd have reduced the size of the image and also have the most important details from that image.
But What exactly are the important features?
Well, the starting point is the derivative.
A plain white image doesn't have any detail, so its gradient is zero. But if there's a distinct black line, the intensity changes sharply from white to black where the line is encountered, so the gradient is large there. We could also record by how much the colors change at that specific point. That is the most basic idea behind edge detection. If we push this idea further, we reach corner detection, and then finally our actual goal: feature descriptors like SIFT, SURF, AKAZE etc.
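To make that concrete (a minimal sketch; `gray` is assumed to be a grayscale uint8 image and the Canny thresholds are arbitrary):

```python
# Horizontal and vertical intensity gradients (Sobel derivatives).
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
magnitude = np.sqrt(gx ** 2 + gy ** 2)  # large wherever intensity changes sharply

# Canny builds on this idea (plus non-maximum suppression and hysteresis).
edges = cv2.Canny(gray, 100, 200)
```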
With that roadmap, let's dive into the first sub-topic. Corner Detection.
A corner, in simple words, is a junction of two edges, an edge being a pixel where there is a sudden change in brightness in a certain direction. Corners are important features because they are relatively stable under operations like rotation and translation (scale invariance takes extra work, which is where descriptors like SIFT come in).
The equation of Harris Corner Detection is:

$$E(u, v) = \sum_{x, y} w(x, y)\,\big[I(x + u, y + v) - I(x, y)\big]^2$$

where $w(x, y)$ is the window function, $I$ is the image intensity, and $(u, v)$ is the shift of the window.
The window $w(x, y)$ could either be a rectangular window or a Gaussian window.
For corner detection, we'd have to maximize the term $E(u, v)$, i.e. find points where shifting the window in any direction changes the intensities a lot.
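In practice you rarely write this from scratch; OpenCV's implementation is essentially a one-liner (a minimal sketch; the block size, Sobel aperture and k value are common defaults, not tuned choices):

```python
gray_f = np.float32(gray)                        # cornerHarris expects float32 input
response = cv2.cornerHarris(gray_f, 2, 3, 0.04)  # blockSize=2, ksize=3, k=0.04

# Keep the strong corners: points whose response exceeds a fraction of the maximum.
corners = response > 0.01 * response.max()
print("corner pixels:", np.count_nonzero(corners))
```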