Caltech-101 contains a total of 9,146 images, split between 101 distinct object categories (faces, watches, ants, pianos, etc.) and a background category.
The Faces category has the highest image count at 870, while the smallest category has as few as 31 images. Such class imbalance is one of the major reasons for the poor performance of deep neural networks and other general-purpose classifiers.
Options
1. Run classifiers with a K-fold strategy (training|testing).
2. Run classifiers with a Stratified K-fold strategy (training|testing).
3. Run SVM with [3, 5, 10, 15, 20, 25, 30] images per class for training.
For options 1 and 2, the following classifiers are used and their performance is observed:
- Multi-layer Perceptron (MLP) Classifier
- SVM Classifier
- Random Forest Classifier
- KNN Classifier
- Logistic Regression Classifier
- LightGBM Classifier
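As a rough sketch of how options 1 and 2 differ, the snippet below compares plain K-fold with Stratified K-fold cross-validation using one of the listed classifiers (SVM). The data here is synthetic and imbalanced on purpose, standing in for the 80-dimensional Edge Histogram features; it is an illustrative assumption, not the project's actual pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold, StratifiedKFold, cross_val_score
from sklearn.svm import SVC

# Synthetic imbalanced stand-in for the 80-dim Edge Histogram features
# (hypothetical data, not the actual Caltech-101 CSVs).
X, y = make_classification(
    n_samples=600, n_features=80, n_informative=20, n_classes=5,
    weights=[0.5, 0.2, 0.15, 0.1, 0.05], random_state=42,
)

clf = SVC(kernel="rbf", random_state=42)

# Plain K-fold: folds can under-represent (or miss) the rare classes.
kf_scores = cross_val_score(
    clf, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=42)
)

# Stratified K-fold: each fold preserves the overall class distribution,
# which usually matters on imbalanced data like Caltech-101.
skf_scores = cross_val_score(
    clf, X, y, cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
)

print("K-fold mean accuracy:           ", kf_scores.mean())
print("Stratified K-fold mean accuracy:", skf_scores.mean())
```

Swapping `SVC` for any of the other listed classifiers (MLP, Random Forest, KNN, Logistic Regression, LightGBM) follows the same pattern.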
Dataset consideration
- Original dataset
- Dataset after removing the `BACKGROUND_Google` class (noise class)
Some general observations
- The training|testing strategy has a high impact on classifier performance.
- `random_state` has a considerable impact on classifier performance.
- Accuracy increases with [3, 5, 10, 15, 20, 25, 30] images per class for training, but after the 30-image training split the classifier appears to overfit and accuracy flatlines. This is due to the imbalanced dataset: as the frequency plot shows, the minimum number of images in a class is 31 and the maximum is 870, which considerably impacts classifier performance.
- The SVM classifier shows good promise in terms of computational cost and accuracy, which is the reason for choosing SVM over the others in the train-split scenario.
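The fixed-images-per-class experiment above can be sketched as follows. The helper below samples n training images from each class, trains an SVM on that subset, and evaluates on everything else; the data, function name, and seed are illustrative assumptions, not the project's exact code.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC

# Hypothetical stand-in data: 80 features to mirror the Edge Histogram dimension.
X, y = make_classification(
    n_samples=2000, n_features=80, n_informative=30,
    n_classes=10, random_state=0,
)

def accuracy_with_n_per_class(X, y, n_train, seed=0):
    """Train an SVM on n_train images per class; test on the remaining images."""
    rng = np.random.default_rng(seed)
    train_mask = np.zeros(len(y), dtype=bool)
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        # A class may have fewer than n_train images (min is 31 in Caltech-101).
        chosen = rng.choice(idx, size=min(n_train, len(idx)), replace=False)
        train_mask[chosen] = True
    clf = SVC(kernel="rbf")
    clf.fit(X[train_mask], y[train_mask])
    return accuracy_score(y[~train_mask], clf.predict(X[~train_mask]))

for n in [3, 5, 10, 15, 20, 25, 30]:
    print(f"{n:2d} images/class -> accuracy {accuracy_with_n_per_class(X, y, n):.3f}")
```

On the real dataset, classes near the 31-image minimum leave almost no held-out images once 30 are used for training, which is one way the imbalance shows up in these curves.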
- `Images.csv`: Contains the images, represented by an image ID and the corresponding class.
- `EdgeHistogram.csv`: Contains the feature data, i.e. the Edge Histogram features for the images (dimension of 80).
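A minimal sketch of joining the two CSVs on the image ID and dropping the `BACKGROUND_Google` noise class. The column names (`image_id`, `class`) and the tiny inline data are assumptions for illustration; adjust them to the real CSV headers.

```python
import io
import pandas as pd

# Tiny inline stand-ins for Images.csv and EdgeHistogram.csv
# (hypothetical column names and values).
images_csv = io.StringIO(
    "image_id,class\n1,Faces\n2,ant\n3,BACKGROUND_Google\n"
)
features_csv = io.StringIO(
    "image_id,f0,f1\n1,0.1,0.2\n2,0.3,0.4\n3,0.5,0.6\n"
)

images = pd.read_csv(images_csv)        # image ID -> class label
features = pd.read_csv(features_csv)    # image ID + feature columns (80 in the real file)

# Join labels with features on the image ID, then remove the noise class.
data = images.merge(features, on="image_id")
data = data[data["class"] != "BACKGROUND_Google"]

X = data.drop(columns=["image_id", "class"]).to_numpy()
y = data["class"].to_numpy()
print(X.shape, list(y))  # the noise-class row is gone
```

With the real files, `pd.read_csv("Images.csv")` and `pd.read_csv("EdgeHistogram.csv")` replace the in-memory buffers, yielding the two dataset variants listed above (with and without `BACKGROUND_Google`).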
For a local installation, make sure you have pip installed and run:

```shell
pip install notebook
```
Note: Use a conda environment to ease up the setup and future environment setups.

```shell
conda create --name <envname> --file requirements.txt
conda activate <envname>
```
Install the necessary dependencies:

```shell
python -m pip install -r requirements.txt
```
For a local installation, launch the notebook with:

```shell
jupyter notebook
```
If using Google Colab: just run the cells.