Learning Self-Supervised Low-Rank Network for Single-Stage Weakly and Semi-Supervised Semantic Segmentation
This is the official implementation of "Learning Self-Supervised Low-Rank Network for Single-Stage Weakly and Semi-Supervised Semantic Segmentation" (IJCV 2022; also available on arXiv).
This repository contains the code for SLRNet, a unified framework that generalizes well to learning label-efficient segmentation models in various weakly and semi-supervised settings. The key component of our approach is the Cross-View Low-Rank (CVLR) module, which decomposes the multi-view representations via collective matrix factorization. We provide scripts for the Pascal VOC and COCO datasets. Moreover, SLRNet ranked 2nd in the WSSS Track of the CVPR 2021 Learning from Limited and Imperfect Data (L2ID) Challenge.
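The CVLR module is only summarized above; as a rough illustration of the underlying idea (a sketch, not the repository's implementation), the snippet below factorizes the features of two augmented views against a shared low-rank dictionary with a few alternating least-squares steps. The function name, rank, and iteration count are assumptions for illustration.

```python
# Illustrative sketch of collective low-rank factorization across two views.
# Not the repository's CVLR implementation; rank and iteration count are arbitrary.
import torch

def collective_low_rank(f1, f2, rank=8, n_iter=5):
    """f1, f2: (C, N) feature matrices from two augmented views of the same image."""
    f = torch.cat([f1, f2], dim=1)                       # (C, N1 + N2): both views share one dictionary
    d = torch.randn(f.shape[0], rank, device=f.device)   # shared low-rank dictionary D
    for _ in range(n_iter):                              # alternating least squares
        a = torch.linalg.lstsq(d, f).solution            # coefficients A: (rank, N1 + N2)
        d = torch.linalg.lstsq(a.T, f.T).solution.T      # dictionary D: (C, rank)
    a = torch.linalg.lstsq(d, f).solution                # final coefficients for the updated D
    a1, a2 = a[:, : f1.shape[1]], a[:, f1.shape[1]:]
    return d @ a1, d @ a2                                # low-rank reconstructions of each view

# Example: 256-channel features from two 32x32 views.
r1, r2 = collective_low_rank(torch.randn(256, 1024), torch.randn(256, 1024))
```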
- Minimum requirements. This project was developed with Python 3.7, PyTorch 1.x. The training requires at least two Titan XP GPUs (12 GB memory each).
- Set up your Python environment. Install the Python dependencies from requirements.txt:
pip install -r requirements.txt
- Download Datasets.
Download Pascal VOC data from:
- SBD: Training (1.4GB .tgz file)
Convert SBD data using tools/convert_sbd.py. Link to the data:
ln -s <your_path_to_voc> <project>/data/voc
ln -s <your_path_to_sbd> <project>/data/sbd
Make sure that the first directory in data/voc is VOCdevkit and the first directory in data/sbd is benchmark_RELEASE.
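If in doubt, the layout can be verified from the project root with a quick check (a minimal sketch, assuming the symlinks above were created under data/):

```python
# Minimal sanity check for the dataset layout described above (run from the project root).
from pathlib import Path

data = Path("data")
assert (data / "voc" / "VOCdevkit").is_dir(), "expected data/voc/VOCdevkit"
assert (data / "sbd" / "benchmark_RELEASE").is_dir(), "expected data/sbd/benchmark_RELEASE"
print("VOC and SBD directories look correct.")
```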
Download COCO data from:
- COCO: Training, Validation, Annotation
Convert COCO data using tools/convert_coco.py.
Download L2ID challenge data from:
- Download pre-trained backbones.
| Backbone | Initial Weights | Comment |
|---|---|---|
| WideResNet38 | ilsvrc-cls_rna-a1_cls1000_ep-0001.pth (402M) | Converted from mxnet |
| ResNet101 | resnet101-5d3b4d8f.pth | PyTorch official |
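The ResNet101 checkpoint is the standard torchvision release, so it can be sanity-checked with torchvision directly; a minimal sketch (not the project's own backbone loader) is shown below, while the WideResNet38 weights require the repository's network definition:

```python
# Minimal sketch: verify the downloaded ResNet101 checkpoint loads into a torchvision model.
# The WideResNet38 weights need the repository's own WideResNet38 definition and are not shown here.
import torch
from torchvision.models import resnet101

model = resnet101()  # architecture matching resnet101-5d3b4d8f.pth
state_dict = torch.load("resnet101-5d3b4d8f.pth", map_location="cpu")
model.load_state_dict(state_dict)
print("Loaded ResNet101 with", sum(p.numel() for p in model.parameters()), "parameters.")
```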
For testing, we provide our checkpoints on the Pascal VOC dataset:

| Setting | Backbone | Val (mIoU) | Link |
|---|---|---|---|
| Weakly-sup. w/ image-level label | WideResNet38 | 67.2 (w/ CRF) | link |
| Weakly-sup. w/ image-level label (Two-stage) | ResNet101 | 69.3 (w/o CRF) | |
| Semi-sup. w/ image-level label | WideResNet38 | 75.1 (w/o CRF) | link |
| Semi-sup. w/o image-level label | WideResNet38 | 72.4 (w/o CRF) | link |
Train the weakly supervised model:
python tools/train_voc_wss.py --config experiments/slrnet_voc_wss_v100x2.yaml --run custom_experiment_run_id
For the COCO or L2ID dataset, please refer to the relevant scripts/configs with the suffix "_coco".

Run inference on the Pascal VOC validation set:
python tools/infer_voc.py \
--config experiments/slrnet_voc_wss_v100x2.yaml \
--checkpoint path/to/checkpoint20.pth.tar \
--use_cls_label 0 \
--output outputs/wss_prediction \
--data_list voc12/val.txt \
--data_root path/to/VOCdevkit/VOC2012 \
--fp_cut 0.3 \
--bg_pow 3
Generate pseudo labels with the following parameters:
...
--data_list voc12/train_aug.txt \
--use_cls_label 1 \
...
Then train the DeepLabV3+ network (with a ResNet101 backbone) implemented in mmsegmentation on the generated pseudo labels.
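A hypothetical config sketch for this second stage, in mmsegmentation 0.x style, might look like the following; the base config ships with mmsegmentation, while the pseudo-label directory name is a placeholder to be adapted to the --output path used above:

```python
# Hypothetical second-stage config sketch (not the exact config used for the reported numbers):
# DeepLabV3+ with a ResNet-101 backbone trained on the SLRNet pseudo labels.
_base_ = './deeplabv3plus_r50-d8_512x512_40k_voc12aug.py'

# Swap the base ResNet-50 backbone for ResNet-101.
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))

# Point the training annotations at the generated pseudo labels
# (ann_dir is resolved relative to the dataset's data_root; the directory name is an assumption).
data = dict(train=dict(ann_dir='SegmentationClassPseudo'))
```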
Train the semi-supervised model with image-level labels:
python tools/train_voc_semi.py --config experiments/slrnet_voc_semi_w_cls_v100x2.yaml --run custom_experiment_run_id
Run inference on the validation set:
python tools/infer_voc.py \
--config experiments/slrnet_voc_semi_w_cls_v100x2.yaml \
--checkpoint path/to/checkpoint28.pth.tar \
--use_cls_label 0 \
--output outputs/semi_val_prediction \
--data_list voc12/val.txt \
--data_root ../VOCdevkit/VOC2012 \
--fp_cut 0.3 \
--bg_pow 1 \
--apply_crf 0 \
--verbose 0
Train the semi-supervised model without image-level labels:
python tools/train_voc_semi.py --config experiments/slrnet_voc_semi_wo_cls.yaml --run custom_experiment_run_id
Run inference on the validation set:
python tools/infer_voc.py \
--config experiments/slrnet_voc_semi_wo_cls.yaml \
--checkpoint path/to/checkpoint32.pth.tar \
--use_cls_label 0 \
--output outputs/semi_val_prediction \
--data_list voc12/val.txt \
--data_root ../VOCdevkit/VOC2012 \
--fp_cut 0.3 \
--bg_pow 1 \
--apply_crf 0 \
--verbose 0
We thank Nikita Araslanov and Jiwoon Ahn for their great work, which helped in the early stages of this project.
We hope that you find this work useful. If you would like to acknowledge us, please use the following citation:
@article{pan2022learning,
title={Learning Self-supervised Low-Rank Network for Single-Stage Weakly and Semi-supervised Semantic Segmentation},
author={Pan, Junwen and Zhu, Pengfei and Zhang, Kaihua and Cao, Bing and Wang, Yu and Zhang, Dingwen and Han, Junwei and Hu, Qinghua},
journal={International Journal of Computer Vision},
pages={1--15},
year={2022},
publisher={Springer}
}
Junwen Pan junwenpan@tju.edu.cn