
[ACM MM2019] Learning Semantics-aware Distance Map with Semantics Layering Network for Amodal Instance Segmentation


This is the code repository of SLN-Amodal. The paper can be downloaded here.

This repository is largely based on Multimodallearning's Mask_RCNN; we also borrow the amodal evaluation code from AmodalMask and the COCO API. The training and evaluation datasets are derived from COCOA and D2SA. We would like to thank all of them for their work.

(Figure: SLN overview)

In this work, we demonstrate a new approach to the amodal segmentation problem. Specifically, we first introduce a new representation, namely the semantics-aware distance map (sem-dist map), to serve as our target for amodal segmentation instead of the commonly used masks and heatmaps. The sem-dist map is a kind of level-set representation, in which the different regions of an object are placed on different levels of the map according to their visibility. It is a natural extension of masks and heatmaps, in which modal and amodal segmentation, as well as depth order information, are all well described. We then introduce a novel convolutional neural network (CNN) architecture, which we refer to as the semantic layering network (SLN), to estimate sem-dist maps layer by layer, from the global level to the instance level, for all objects in an image. Extensive experiments on the COCOA and D2SA datasets demonstrate that our framework predicts amodal segmentation, occlusion and depth order with state-of-the-art performance.
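To make the layering idea concrete, here is a toy numpy sketch (our simplified illustration, not the paper's exact formulation) of how per-pixel levels could be assigned for a single object: background stays at level 0, the visible region sits at level 1, and occluded regions sink deeper according to how many instances cover them.

    import numpy as np

    def toy_sem_dist_map(amodal, occluder_count):
        """amodal: HxW bool mask of the full (amodal) object extent;
        occluder_count: HxW int array counting how many other instances
        cover each pixel (a hypothetical input for this illustration)."""
        level = np.zeros(amodal.shape, dtype=np.int32)  # 0 = background
        level[amodal] = 1 + occluder_count[amodal]      # 1 = visible, >1 = occluded
        return level

    # A 1x4 strip: the object spans three pixels, two of them covered once.
    amodal = np.array([[True, True, True, False]])
    occl   = np.array([[0, 1, 1, 0]])
    print(toy_sem_dist_map(amodal, occl))  # [[1 2 2 0]]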

Authors:

Ziheng Zhang*, Anpei Chen*, Ling Xie, Jingyi Yu, Shenghua Gao

Set up environment

  1. We pin pytorch==0.4.0.

    You can use Anaconda to create a new environment (strongly recommended) in the root directory:

    conda create -n SLN-env
    conda install pytorch=0.4.0 cuda90 -c pytorch
    conda install -c conda-forge scikit-image
    conda install -c anaconda cudatoolkit==9.0
    conda install tqdm
    pip install tensorboardX
  2. Configure the COCO API

    We modified the COCO API to meet our needs. Soft-link pycocotool into the root directory so it can be imported:

    ln -s /path/to/pycocotool /path/to/our/root/directory
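
    After both steps, a quick sanity check (our suggestion, not part of the repo), run from the repository root, confirms the pinned PyTorch build and the soft link:

    import torch
    import pycocotool  # resolves via the soft link created above

    print(torch.__version__)          # expect 0.4.0
    print(torch.cuda.is_available())  # expect True with a working CUDA 9.0 setup
    print(pycocotool.__file__)        # should point into cocoapi/PythonAPI/pycocotool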

Datasets

  • Download pre-trained weights and unzip the package to the root directory.
  • We provide layer-based versions of the COCOA and D2SA datasets, along with scripts for converting the original amodal annotations [COCOA, D2SA] into our layer-based format (see the inspection snippet after the folder structure). If you find these two datasets useful, please cite the COCOA and D2SA papers.
  • BaiduYunPan link with verification code: yr2i
  • Folder structure
      ├── datasets                        - dataset folder
      │   ├── coco_amodal
      │   │   ├── annotations
      │   │   │   └── COCO_amodal_val[train]2014.json
      │   │   ├── val2014
      │   │   │   └── ###.jpg ###.npz
      │   │   └── train2014
      │   │       └── ###.jpg ###.npz
      │   └── D2S
      │       ├── annotations
      │       │   └── COCO_amodal_val[train]2014.json
      │       ├── val2014
      │       │   └── ###.jpg ###.npz
      │       └── train2014
      │           └── ###.jpg ###.npz
      ├── checkpoints
      │   └── COCOA[D2SA,deeplabv2,mask_rcnn_coco].pth
      └── pycocotool                      - soft link to cocoapi/PythonAPI/pycocotool
    
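  • The per-image .npz files hold the layer-based annotations. Their internal array names are dataset-specific, so rather than guessing them, you can list what a given archive actually contains (the filename below is hypothetical):

    import numpy as np

    # Pick any .npz from val2014 or train2014; add allow_pickle=True if
    # your numpy version complains about object arrays.
    data = np.load("datasets/coco_amodal/val2014/example.npz")
    print(data.files)  # names of the arrays stored in the archive
    for name in data.files:
        print(name, data[name].shape, data[name].dtype)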

Usage

  • For training,

    python amodal_train.py train --dataset ./datasets/coco_amodal --model coco
    python amodal_train.py train --dataset ./datasets/D2S --model coco
  • For evaluation,

    python amodal_train.py evaluate --dataset ./datasets/coco_amodal --model ./checkpoints/COCOA.pth --data_type COCOA
    python amodal_train.py evaluate --dataset ./datasets/D2S --model ./checkpoints/D2SA.pth --data_type D2SA
  • For testing on your own images,

    python amodal_test.py
    You can modify the path to your image folder inside the script; see the sketch below.
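
    A hedged illustration of that edit (the actual variable name inside amodal_test.py may differ):

    import glob, os

    IMAGE_DIR = "./path/to/your/images"  # hypothetical name; check amodal_test.py
    image_paths = sorted(glob.glob(os.path.join(IMAGE_DIR, "*.jpg")))
    print("found %d images" % len(image_paths))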

Citation

If you find this code useful to your research, please consider citing:

@inproceedings{zhang2019amodal,
  title={Learning Semantics-aware Distance Map with Semantics Layering Network for Amodal Instance Segmentation},
  author={Zhang, Ziheng and Chen, Anpei and Xie, Ling and Yu, Jingyi and Gao, Shenghua},
  booktitle={Proceedings of the 27th ACM International Conference on Multimedia},
  pages={1--9},
  year={2019},
  organization={ACM}
}
