Official implementation of our 3rd-place solution for the ICCV 2021 Workshop on Self-supervised Learning for Next-Generation Industry-level Autonomous Driving (SSLAD), Track 3A - Continual Learning Classification, described in "Online Continual Learning via Multiple Deep Metric Learning and Uncertainty-guided Episodic Memory Replay".
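The core idea of uncertainty-guided episodic memory replay can be illustrated with a small sketch. This is not the repository's actual implementation; the `UncertaintyMemory` class and the entropy-based selection criterion below are simplified assumptions for illustration:

```python
import heapq
import math

def predictive_entropy(probs):
    """Shannon entropy of a predicted class distribution (higher = more uncertain)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

class UncertaintyMemory:
    """Bounded episodic memory that retains the most uncertain samples seen so far."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []      # min-heap of (entropy, insertion_id, sample)
        self._counter = 0    # tie-breaker so samples are never compared directly

    def add(self, sample, probs):
        """Store a sample if memory has room or it is more uncertain than the
        least-uncertain sample currently held."""
        self._counter += 1
        item = (predictive_entropy(probs), self._counter, sample)
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, item)
        elif item[0] > self._heap[0][0]:
            heapq.heapreplace(self._heap, item)

    def samples(self):
        """Return the stored samples, e.g. for replay in the next training step."""
        return [s for _, _, s in self._heap]
```

During online training, each incoming batch would be scored with the model's softmax outputs and offered to the memory; replayed samples are then mixed into subsequent batches.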
First, install the dependencies:

```bash
# clone project
git clone https://github.com/mrifkikurniawan/sslad.git

# install project
cd sslad
pip3 install -r requirements.txt
```
Next, prepare the dataset via the links below.
- Train/val: SODA10M Labeled Trainval (Google Drive). Then, replace the original annotations with these new annotations with timestamps: [train][val].
- Test: SODA10M 3A Testset (Google Drive)
Next, run training.

```bash
# run training module with our proposed CL strategy
python3.9 classification.py \
    --config configs/cl_strategy.yaml \
    --name {path/to/log} \
    --root {root/of/your/dataset} \
    --num_workers {num workers} \
    --gpu_id {your-gpu-id} \
    --comment {any-comments} \
    --store
```

Or see `train.sh` for an example.
Method | Val AMCA | Test AMCA |
---|---|---|
Baseline (Uncertainty Replay)* | 57.517 | - |
+ Multi-step LR Scheduler* | 59.591 (+2.07) | - |
+ Soft Labels Retrospection* | 59.825 (+0.23) | - |
+ Contrastive Learning* | 60.363 (+0.53) | 59.68 |
+ Supervised Contrastive Learning* | 61.49 (+1.13) | - |
+ Change backbone to ResNet50-D* | 62.514 (+1.02) | - |
+ Focal loss* | 62.71 (+0.19) | - |
+ Cost Sensitive Cross Entropy | 63.33 (+0.62) | - |
+ Class Balanced Focal loss* | 64.01 (+1.03) | 64.53 (+4.85) |
+ Head Fine-tuning with Class Balanced Replay | 65.291 (+1.28) | 62.58 (-1.56) |
+ Head Fine-tuning with Soft Labels Retrospection | 66.116 (+0.83) | 62.97 (+0.39) |
*Applied to our final method.
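The class-balanced focal loss in the ablation above combines the focal term with per-class weights based on the effective number of samples (Cui et al., CVPR 2019). A minimal sketch of that loss for a single example, assuming the standard formulation rather than the repository's exact code:

```python
import math

def class_balanced_focal_loss(probs, target, class_counts, beta=0.999, gamma=2.0):
    """Class-balanced focal loss for one example.

    probs:        predicted class probabilities (softmax output)
    target:       index of the ground-truth class
    class_counts: number of training samples per class
    beta:         controls the effective number of samples per class
    gamma:        focal-loss focusing parameter
    """
    p_t = probs[target]
    # Effective-number weight: rare classes get a larger weight.
    weight = (1.0 - beta) / (1.0 - beta ** class_counts[target])
    # Focal term down-weights well-classified (high-confidence) examples.
    return -weight * (1.0 - p_t) ** gamma * math.log(p_t)
```

With `gamma = 0` this reduces to class-balanced cross-entropy; with uniform class counts it reduces (up to a constant) to plain focal loss.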
- `classification.py`: Driver code for the classification subtrack. A few things can be changed here, such as the model, optimizer, and loss criterion. Several arguments can be set to store results etc. (Run `classification.py --help` for an overview, or check the file.)
- `class_strategy.py`: Provides an empty plugin. Here, you can define your own strategy by implementing the necessary callbacks. Helper methods and classes can of course be implemented as needed. See here for examples of strategy plugins.
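The callback-based plugin pattern can be sketched in plain Python. The base class and callback names below mirror the style of continual-learning frameworks such as Avalanche, but are simplified stand-ins, not the framework's actual API:

```python
class StrategyPlugin:
    """Base class with no-op callbacks; a strategy calls these hooks
    at the corresponding points of the training loop."""
    def before_training_exp(self, strategy, **kwargs): pass
    def after_training_exp(self, strategy, **kwargs): pass
    def before_training_iteration(self, strategy, **kwargs): pass
    def after_training_iteration(self, strategy, **kwargs): pass

class CountingPlugin(StrategyPlugin):
    """Example plugin: counts the experiences and iterations it observes.
    Only the callbacks you need are overridden; the rest stay no-ops."""
    def __init__(self):
        self.experiences = 0
        self.iterations = 0

    def after_training_exp(self, strategy, **kwargs):
        self.experiences += 1

    def after_training_iteration(self, strategy, **kwargs):
        self.iterations += 1
```

A real strategy (e.g. a replay method) would override hooks such as `before_training_exp` to inject memory samples, and `after_training_exp` to update the memory.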
- `data_intro.ipynb`: This notebook further introduces and explains the stream of data. Feel free to experiment with the dataset to get a good feel for the challenge.

Note: not all callbacks have to be implemented; you can simply delete those you don't need.
- `classification_util.py` & `haitain_classification.py`: These files contain helper code for data loading etc. There should be no reason to change them.
Please consider citing our paper in your publications if this project helps your research. The BibTeX reference is as follows.
```bibtex
@article{Kurniawan2021OnlineCL,
  title={Online Continual Learning via Multiple Deep Metric Learning and Uncertainty-guided Episodic Memory Replay - 3rd Place Solution for ICCV 2021 Workshop SSLAD Track 3A Continual Object Classification},
  author={Muhammad Rifki Kurniawan and Xing Wei and Yihong Gong},
  journal={ArXiv},
  year={2021},
  volume={abs/2111.02757}
}
```