#!/usr/bin/env python
__author__ = "Sreenivas Bhattiprolu"
__license__ = "Feel free to copy, I appreciate if you acknowledge Python for Microscopists"
# https://www.youtube.com/watch?v=DZtUt4bKtmY
"""What are features, detectors, and keypoints?
Features in an image are unique regions that the computer can easily tell apart.
Corners are good examples of features. Finding these unique features is called feature detection.
Once features are detected, we need to find similar ones in a different image.
This means we need to describe the features.
Once you have the features and their descriptions, you can find the same features
in all images and align them, stitch them, or do whatever you want.
The Harris corner detector is a good example of a feature detector.
Keypoints are the same thing as points of interest.
They are spatial locations, or points in the image, that define what is interesting or what stands out in the image.
Keypoints are special because no matter how the image changes...
whether it rotates, shrinks/expands, is translated, or distorted...
you should be able to find the same keypoints in the modified image when comparing it with the original.
Harris corner detector detects corners.
FAST: Features from Accelerated Segment Test - also detects corners
Each keypoint that you detect has an associated descriptor that accompanies it.
SIFT, SURF and ORB all detect and describe the keypoints.
Descriptors are primarily concerned with both the scale and the orientation of the keypoint.
e.g., run ORB and draw the keypoints with the rich-keypoints flag:
some of the points have different circle radii. These reflect scale.
The larger the "circle", the larger the scale at which the point was detected.
Also, there is a line that radiates from the centre of the circle to the edge.
This is the orientation of the keypoint, which we will cover next.
Usually when we want to detect keypoints, we just take a look at the locations.
However, if you want to match keypoints between images,
then you definitely need the scale and the orientation to facilitate this.
"""
#HARRIS corner
#
import cv2
import numpy as np
img = cv2.imread('images/grains.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
gray = np.float32(gray) #Harris works on float32 images.
#Input parameters
# image, block size (size of neighborhood considered), ksize (aperture parameter for Sobel), k
harris = cv2.cornerHarris(gray,2,3,0.04)
# Threshold the corner response; the optimal value may vary depending on the image.
img[harris>0.01*harris.max()]=[255,0,0] # mark these pixels in blue (BGR)
cv2.imshow('Harris Corners',img)
cv2.waitKey(0)
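# A minimal follow-up sketch (not from the original lesson): pull out the pixel
# coordinates of the thresholded Harris response, reusing the `harris` array
# and the 0.01*max threshold from above.
corner_pixels = np.argwhere(harris > 0.01*harris.max())  # array of (row, col) pairs
print("Harris corner pixels above threshold:", len(corner_pixels))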
#############################################
# Shi-Tomasi Corner Detector & Good Features to Track
# In OpenCV it is called goodFeaturesToTrack
import cv2
import numpy as np
img = cv2.imread('images/grains.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
#input image, max number of corners, quality level (0-1), min euclidean dist. between detected corners
corners = cv2.goodFeaturesToTrack(gray,50,0.05,10)
corners = np.intp(corners) #np.intp is the platform index integer (int64 on most systems); np.int0 is a deprecated alias
for i in corners:
    x,y = i.ravel() # ravel returns a contiguous flattened array, here the (x, y) pair
    # print(x,y)
    cv2.circle(img,(x,y),3,255,-1) #Draws a circle (img, center, radius, color, thickness; -1 fills it)
cv2.imshow('Corners',img)
cv2.waitKey(0)
#####################################
#SIFT and SURF are patented and not included in the default OpenCV 3 build
#(they live in the opencv-contrib xfeatures2d module). SIFT's patent has since
#expired and it is back in the main module from OpenCV 4.4 onwards.
#SIFT stands for scale invariant feature transform
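# Hedged sketch, assuming OpenCV >= 4.4 (or an opencv-contrib build), where
# SIFT is exposed as cv2.SIFT_create(); left commented out so the script still
# runs on older OpenCV 3 installs.
#import cv2
#img = cv2.imread('images/grains.jpg', 0)
#sift = cv2.SIFT_create()                    # detector + descriptor in one object
#kp, des = sift.detectAndCompute(img, None)  # des: 128-dim float descriptors
#print("SIFT keypoints:", len(kp), " descriptor shape:", des.shape)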
#####################################
# FAST
# Features from Accelerated Segment Test
# High speed corner detector
# FAST is only a keypoint detector; it does not compute descriptors.
import cv2
img = cv2.imread('images/grains.jpg', 0)
# Initiate FAST object with default values
detector = cv2.FastFeatureDetector_create(50) # 50 is the intensity threshold, not the number of keypoints
kp = detector.detect(img, None)
img2 = cv2.drawKeypoints(img, kp, None, flags=0)
cv2.imshow('Corners',img2)
cv2.waitKey(0)
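# Small addition (not in the original lesson): the detector object exposes its
# settings, which helps when tuning the threshold passed above.
print("FAST threshold:", detector.getThreshold())
print("Non-max suppression:", detector.getNonmaxSuppression())
print("Keypoints found:", len(kp))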
#############################################
#BRIEF (Binary Robust Independent Elementary Features)
#One important point is that BRIEF is a feature descriptor,
#it doesn’t provide any method to find the features.
# The lesson skips a full example since BRIEF is also not available in a plain
# OpenCV 3 install; a commented sketch for opencv-contrib builds follows.
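# Hedged sketch, assuming opencv-contrib-python is installed (BRIEF is exposed
# there as cv2.xfeatures2d.BriefDescriptorExtractor_create). BRIEF only
# describes keypoints, so pair it with a detector such as FAST.
#import cv2
#img = cv2.imread('images/grains.jpg', 0)
#fast = cv2.FastFeatureDetector_create(50)
#brief = cv2.xfeatures2d.BriefDescriptorExtractor_create()
#kp = fast.detect(img, None)
#kp, des = brief.compute(img, kp)   # des: 32-byte binary descriptor per keypoint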
###############################################
#ORB
# Oriented FAST and Rotated BRIEF
# An efficient alternative to SIFT or SURF
# ORB is basically a fusion of FAST keypoint detector and BRIEF descriptor
import numpy as np
import cv2
img = cv2.imread('images/grains.jpg', 0)
orb = cv2.ORB_create(100) # nfeatures=100: retain at most 100 keypoints
kp, des = orb.detectAndCompute(img, None)
# draw only keypoints location,not size and orientation
#img2 = cv2.drawKeypoints(img, kp, None, flags=None)
# Now, let us draw with rich key points, reflecting descriptors.
# Descriptors here show both the scale and the orientation of the keypoint.
img2 = cv2.drawKeypoints(img, kp, None, flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imshow("With keypoints", img2)
cv2.waitKey(0)
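# Hedged sketch of the matching step hinted at in the docstring (not part of
# this lesson): ORB descriptors are binary, so a brute-force matcher with
# Hamming distance is the usual choice. 'images/grains2.jpg' is a hypothetical
# second image of the same scene, so the block is left commented out.
#img_b = cv2.imread('images/grains2.jpg', 0)
#kp_b, des_b = orb.detectAndCompute(img_b, None)
#bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
#matches = sorted(bf.match(des, des_b), key=lambda m: m.distance)
#img_matches = cv2.drawMatches(img, kp, img_b, kp_b, matches[:20], None, flags=2)
#cv2.imshow("Matches", img_matches)
#cv2.waitKey(0)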
############################################################
#Next lecture. Use this information to register images.