Implementation of EigenFace theory for a pattern recognition course.
PCA is a method of dimensionality reduction. The goal of PCA is to find a basis of eigenvectors (i.e., principal components) that describes the data. PCA is useful in that it de-correlates the data while preserving its original variance. This allows one to reduce the number of dimensions of the data for classification while still preserving a large amount of the information/variance in the data. To perform PCA on a database of images, the database must first be reshaped into an array of shape $N^2 \times M$, where each of the $M$ images of size $N \times N$ is flattened into a column vector $\Gamma_i$ of length $N^2$. The sample mean of the database is $\Psi = \frac{1}{M}\sum_{i=1}^{M}\Gamma_i$.
Once the sample mean is computed, it is subtracted from each image ($\Phi_i = \Gamma_i - \Psi$), centering the data about the origin.
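A minimal sketch of this preprocessing step, assuming NumPy and a list of equally sized grayscale images (the helper name `build_data_matrix` and its variables are illustrative, not part of the assignment):

```python
import numpy as np

def build_data_matrix(images):
    """Flatten M images of shape (N, N) into the columns of an (N^2, M)
    array, compute the sample mean face, and center every column."""
    A = np.stack([img.ravel().astype(np.float64) for img in images], axis=1)
    mean_face = A.mean(axis=1)           # sample mean Psi, shape (N^2,)
    A_centered = A - mean_face[:, None]  # Phi_i = Gamma_i - Psi
    return A_centered, mean_face
```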
Next, we can compute the eigenvectors and eigenvalues of the covariance matrix $C = \frac{1}{M}\sum_{i=1}^{M}\Phi_i\Phi_i^T = \frac{1}{M}AA^T$, where $A = [\Phi_1\ \Phi_2\ \cdots\ \Phi_M]$ is the $N^2 \times M$ matrix of mean-subtracted images.
If we also assume that $M \ll N^2$, then $C$ has at most $M - 1$ nonzero eigenvalues, so solving the full $N^2 \times N^2$ eigenproblem is unnecessary.
Since $A^TA$ is only $M \times M$, we can instead solve the much smaller eigenproblem $A^TAv_i = \mu_i v_i$. Left-multiplying by $A$ gives $AA^T(Av_i) = \mu_i(Av_i)$, showing that $u_i = Av_i$ is an eigenvector of $AA^T$ (and hence of $C$) with the same eigenvalue.
Once the eigenvectors of $A^TA$ are computed, the eigenvectors of the covariance matrix are recovered as

$$u_k = Av_k,$$

where $v_k$ is the eigenvector of $A^TA$ associated with the $k$-th largest eigenvalue $\lambda_k$, and each $u_k$ is normalized to unit length. From these we keep only the top $K$ components,

where $K \le M$ is the number of eigenvectors retained.
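A sketch of this small-eigenproblem trick, continuing the hypothetical helpers above (`np.linalg.eigh` is appropriate here because $A^TA$ is symmetric):

```python
def compute_eigenfaces(A_centered):
    """Eigendecompose the small M x M matrix A^T A instead of the
    N^2 x N^2 covariance matrix, then map back with u = A v."""
    M = A_centered.shape[1]
    small = (A_centered.T @ A_centered) / M  # (M, M), shares C's eigenvalues
    eigvals, V = np.linalg.eigh(small)       # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]        # re-sort: largest first
    eigvals, V = eigvals[order], V[:, order]
    U = A_centered @ V                       # u_k = A v_k, shape (N^2, M)
    U /= np.linalg.norm(U, axis=0, keepdims=True) + 1e-12  # unit length
    return eigvals, U
```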
This reduction allows us to preserve some chosen threshold of information within the data, while drastically reducing dimensionality and computational time. \
Since each eigenvalue found from $A^TA$ measures the variance of the data along its eigenvector, $K$ can be chosen as the smallest value satisfying

$$\frac{\sum_{k=1}^{K}\lambda_k}{\sum_{k=1}^{M}\lambda_k} \ge T,$$

where $T$ is the chosen information threshold. An image can then be approximately reconstructed from its projection onto the retained eigenvectors as $\hat{\Gamma} = \Psi + \sum_{k=1}^{K}\omega_k u_k$, where $\omega_k = u_k^T\Phi$ are its eigen-coefficients.
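One possible way to pick $K$ and reconstruct an image under these definitions (a sketch; `threshold` plays the role of $T$):

```python
def choose_k(eigvals, threshold):
    """Smallest K whose eigenvalues capture `threshold` of total variance."""
    ratios = np.cumsum(eigvals) / np.sum(eigvals)
    return int(min(np.searchsorted(ratios, threshold) + 1, len(eigvals)))

def reconstruct(image, mean_face, U, K):
    """Project a flattened image onto the top-K eigenfaces and rebuild it."""
    phi = image.ravel().astype(np.float64) - mean_face  # Phi = Gamma - Psi
    omega = U[:, :K].T @ phi             # eigen-coefficients, shape (K,)
    return mean_face + U[:, :K] @ omega  # Gamma_hat = Psi + sum w_k u_k
```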
In addition to reconstructing images within the training data, we can also see how well new test images are recognized. During the training phase, we can compute the eigen-coefficients for each training image such that $\Omega_i = [\omega_1^i\ \omega_2^i\ \cdots\ \omega_K^i]^T$, where $\omega_k^i = u_k^T\Phi_i$.
Then, we can do the same for an unknown image $\Gamma$, computing $\Omega = [\omega_1\ \omega_2\ \cdots\ \omega_K]^T$ with $\omega_k = u_k^T(\Gamma - \Psi)$.
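With the hypothetical helpers above, both projections reduce to a single matrix product each:

```python
def train_coefficients(A_centered, U, K):
    """Omega_i for every training image, stacked as columns: shape (K, M)."""
    return U[:, :K].T @ A_centered

def test_coefficients(test_image, mean_face, U, K):
    """Omega for an unknown image, after mean subtraction: shape (K,)."""
    phi = test_image.ravel().astype(np.float64) - mean_face
    return U[:, :K].T @ phi
```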
We can then compare the eigen-coefficients of the unknown face ($\Omega$) against those of every training face ($\Omega_i$) and assign the unknown face the identity of its nearest training match.
For this assignment, we will compute this using the Mahalanobis distance, where

$$d(\Omega, \Omega_i) = \sum_{k=1}^{K}\frac{1}{\lambda_k}\left(\omega_k - \omega_k^i\right)^2.$$
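A sketch of this matching step, weighting each squared coefficient difference by $1/\lambda_k$ (the function name and return convention are illustrative):

```python
def mahalanobis_match(omega, Omega_train, eigvals, K):
    """Index of the training face nearest to `omega` under
    d = sum_k (omega_k - omega_k^i)^2 / lambda_k.
    Assumes the K retained eigenvalues are all nonzero."""
    diffs = Omega_train - omega[:, None]          # (K, M) differences
    dists = np.sum(diffs**2 / eigvals[:K, None], axis=0)
    return int(np.argmin(dists)), dists
```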
Turk & Pentland (1991) refer to the eigenvectors $u_k$ as *eigenfaces*, since they resemble ghost-like faces when reshaped back to image dimensions.