We have updated the evaluation results using ResNet50 as both the localization and classification backbone. The table has also been updated in our arXiv paper.
Method | Loc. Backbone | Cls. Backbone | CUB Top-1/Top-5 Loc (%) | CUB GT-Known Loc (%) | ImageNet Top-1/Top-5 Loc (%) | ImageNet GT-Known Loc (%) |
---|---|---|---|---|---|---|
ORNet | VGG16 | VGG16 | 67.74 / 80.77 | 86.20 | 52.05 / 63.94 | 68.27 |
PSOL | ResNet50 | ResNet50 | 70.68 / 86.64 | 90.00 | 53.98 / 63.08 | 65.44 |
C2AM (supervised initialization) | ResNet50 | ResNet50 | 76.36 / 89.15 | 93.40 | 54.41 / 64.77 | 67.80 |
C2AM (unsupervised initialization) | ResNet50 | ResNet50 | 74.76 / 87.37 | 91.54 | 54.65 / 65.05 | 68.07 |
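For reference, GT-Known Loc counts a prediction as correct when the predicted box overlaps the ground-truth box with IoU ≥ 0.5 (the class label is assumed known), while Top-1/Top-5 Loc additionally require the ground-truth class to appear in the top-1/top-5 classification results. The snippet below is only an illustrative sketch of these standard metrics, not the repository's evaluation script:

```python
def box_iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter + 1e-8)

def loc_correct(pred_box, gt_box, pred_top5, gt_label, iou_thresh=0.5):
    """Return (gt_known, top1_loc, top5_loc) flags for a single image."""
    hit = box_iou(pred_box, gt_box) >= iou_thresh
    return hit, hit and pred_top5[0] == gt_label, hit and gt_label in pred_top5
```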
Code repository for our paper "C2AM: Contrastive learning of Class-agnostic Activation Map for Weakly Supervised Object Localization and Semantic Segmentation" in CVPR 2022.
😍 Code for our paper "CLIMS: Cross Language Image Matching for Weakly Supervised Semantic Segmentation" in CVPR 2022 is also available here.
The repository includes the full training, evaluation, and visualization code for the CUB-200-2011, ILSVRC2012, and PASCAL VOC2012 datasets.
We also provide the extracted class-agnostic bounding boxes (for CUB-200-2011 and ILSVRC2012) and background cues (for PASCAL VOC2012) here.
- Python 3
- PyTorch 1.7.1
- OpenCV-Python
- NumPy
- SciPy
- Matplotlib
- PyYAML
- EasyDict
You will need to download the images (JPEG format) of the CUB-200-2011 dataset from here. Make sure your data/CUB_200_2011 folder is structured as follows:
├── CUB_200_2011/
| ├── images
| ├── images.txt
| ├── bounding_boxes.txt
| ...
| └── train_test_split.txt
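The CUB-200-2011 index files are plain text keyed by image id (images.txt: id and relative path; bounding_boxes.txt: id, x, y, width, height; train_test_split.txt: id and a train flag). A hypothetical parsing sketch, not part of this repository's code:

```python
import os

def load_cub_metadata(root="data/CUB_200_2011"):
    """Read the CUB-200-2011 index files into dicts keyed by image id."""
    def read_lines(name):
        with open(os.path.join(root, name)) as f:
            return [line.strip().split() for line in f if line.strip()]

    paths = {int(i): p for i, p in read_lines("images.txt")}
    boxes = {int(i): tuple(map(float, rest))            # (x, y, w, h)
             for i, *rest in read_lines("bounding_boxes.txt")}
    is_train = {int(i): flag == "1" for i, flag in read_lines("train_test_split.txt")}
    return paths, boxes, is_train
```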
You will need to download the images (JPEG format) of the ILSVRC2012 dataset from here. Make sure your data/ILSVRC2012 folder is structured as follows:
├── ILSVRC2012/
| ├── train
| ├── val
| ├── val_boxes
| | ├── val
| | | ├── ILSVRC2012_val_00050000.xml
| | | └── ...
| ├── train.txt
| └── val.txt
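The annotations under val_boxes follow the standard PASCAL-VOC-style ILSVRC XML format. A hedged sketch for reading the ground-truth boxes from one of these files (the helper name is illustrative and not part of this repository):

```python
import xml.etree.ElementTree as ET

def load_ilsvrc_boxes(xml_path):
    """Parse an ILSVRC2012 validation annotation into (wnid, x1, y1, x2, y2) tuples."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.findall("object"):
        wnid = obj.findtext("name")          # WordNet id, e.g. n01440764
        bb = obj.find("bndbox")
        boxes.append((
            wnid,
            int(bb.findtext("xmin")), int(bb.findtext("ymin")),
            int(bb.findtext("xmax")), int(bb.findtext("ymax")),
        ))
    return boxes

# e.g. load_ilsvrc_boxes("data/ILSVRC2012/val_boxes/val/ILSVRC2012_val_00050000.xml")
```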
You will need to download the images (JPEG format) of the PASCAL VOC2012 dataset from here. Make sure your data/VOC2012 folder is structured as follows:
├── VOC2012/
| ├── Annotations
| ├── ImageSets
| ├── SegmentationClass
| ├── SegmentationClassAug
| └── SegmentationObject
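The masks in SegmentationClass / SegmentationClassAug are palette-mode PNGs whose pixel values are class indices (0-20, with 255 marking the ignore region). A minimal loading sketch, assuming Pillow (not listed in the requirements above):

```python
import numpy as np
from PIL import Image

def load_voc_mask(png_path):
    """Load a VOC segmentation mask as an array of class indices (0-20, 255 = ignore)."""
    return np.array(Image.open(png_path))   # palette PNG decodes to class indices, not RGB

# e.g. mask = load_voc_mask("data/VOC2012/SegmentationClassAug/<image_id>.png")
```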
Please refer to the './WSOL' directory:
cd WSOL
Please refer to the './WSSS' directory:
cd WSSS
As CCAM is trained in an unsupervised manner, it can be applied to various scenarios such as person re-identification (ReID), saliency detection, or skin lesion detection. We provide an example of applying CCAM to a custom dataset such as Market-1501 (see the dataset sketch after the command below).
cd CUSTOM
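Since CCAM needs only unlabeled images at this stage, a plain image-folder dataset is enough as a starting point. The sketch below is hypothetical (class, directory, image size, and transforms are illustrative, not the repository's own loader):

```python
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class CustomImageDataset(Dataset):
    """Class-agnostic dataset: returns images only, no labels required."""
    def __init__(self, image_dir, size=(448, 448)):
        self.paths = sorted(
            os.path.join(image_dir, f) for f in os.listdir(image_dir)
            if f.lower().endswith((".jpg", ".jpeg", ".png"))
        )
        self.transform = transforms.Compose([
            transforms.Resize(size),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
        ])

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        img = Image.open(self.paths[idx]).convert("RGB")
        return self.transform(img)

# e.g. loader = DataLoader(CustomImageDataset("<your_image_dir>"), batch_size=32)
```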
If you use our code, please consider citing our paper.
@InProceedings{Xie_2022_CVPR,
author = {Xie, Jinheng and Xiang, Jianfeng and Chen, Junliang and Hou, Xianxu and Zhao, Xiaodong and Shen, Linlin},
title = {C2AM: Contrastive Learning of Class-Agnostic Activation Map for Weakly Supervised Object Localization and Semantic Segmentation},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {989-998}
}
@article{xie2022contrastive,
title={Contrastive learning of Class-agnostic Activation Map for Weakly Supervised Object Localization and Semantic Segmentation},
author={Xie, Jinheng and Xiang, Jianfeng and Chen, Junliang and Hou, Xianxu and Zhao, Xiaodong and Shen, Linlin},
journal={arXiv preprint arXiv:2203.13505},
year={2022}
}