PAPS is a bottom-up approach for amodal panoptic segmentation, where the goal is to concurrently predict pixel-wise semantic segmentation labels for the visible regions of "stuff" classes (e.g., road, sky) and instance segmentation labels for both the visible and occluded regions of "thing" classes (e.g., car, truck).
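To illustrate the task format, the following is a minimal, purely illustrative sketch (not this repository's actual API or output format): a dense semantic map covers the visible pixels of "stuff" classes, while each "thing" instance carries both a visible mask and an amodal mask that also includes its occluded region. All names and shapes below are assumptions for the example.

```python
import numpy as np

# Illustrative toy representation of an amodal panoptic prediction for one image
# (names and shapes are assumptions, not the PAPS output format).
H, W = 4, 6

# Visible semantic segmentation of "stuff" classes (e.g., 0 = road, 1 = sky).
stuff_semantics = np.zeros((H, W), dtype=np.int64)
stuff_semantics[:2, :] = 1  # top half labelled as sky

# One "thing" instance (e.g., a car) described by two binary masks:
# the visible mask and the amodal mask, which additionally covers the
# region hidden behind an occluder.
visible_mask = np.zeros((H, W), dtype=bool)
visible_mask[2:, 1:3] = True

amodal_mask = visible_mask.copy()
amodal_mask[2:, 3:5] = True  # occluded part that amodal segmentation must recover

instances = [{"category": "car",
              "visible_mask": visible_mask,
              "amodal_mask": amodal_mask}]

# The occluded region is the amodal mask minus the visible mask.
occluded_region = amodal_mask & ~visible_mask
print("occluded pixels:", occluded_region.sum())
```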
This repository contains the PyTorch implementation of our RA-L'2022 paper Perceiving the Invisible: Proposal-Free Amodal Panoptic Segmentation. The repository builds on Detectron2.
If you find this code useful for your research, we kindly ask you to consider citing our paper:
@article{mohan2022perceiving,
  title={Perceiving the Invisible: Proposal-Free Amodal Panoptic Segmentation},
  author={Mohan, Rohit and Valada, Abhinav},
  journal={IEEE Robotics and Automation Letters},
  volume={7},
  number={4},
  pages={9302--9309},
  year={2022},
  publisher={IEEE}
}
- Linux
- Python 3.9
- PyTorch 1.12.1
- CUDA 11
- GCC 7 or 8
IMPORTANT NOTE: These requirements are not strictly mandatory. However, we have only tested the code with the setup above and cannot provide support for other configurations.
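As a quick sanity check of your environment (a minimal sketch, not part of the official installation steps; see the installation documentation for the supported procedure), you can verify the Python, PyTorch, and CUDA versions from within Python:

```python
import sys
import torch

# The code was tested with Python 3.9, PyTorch 1.12.1, and CUDA 11.
print("Python:", sys.version.split()[0])
print("PyTorch:", torch.__version__)
print("CUDA (build):", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
```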
Please refer to the installation documentation for detailed instructions.
Please refer to the dataset documentation for detailed instructions.
For detailed instructions on training, evaluation, and inference processes, please refer to the usage documentation.
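Since the repository builds on Detectron2, inference will likely follow the usual Detectron2 pattern sketched below. The config and checkpoint paths are placeholders, and any PAPS-specific config keys or output fields are omitted; please follow the usage documentation for the actual entry points.

```python
# Sketch of Detectron2-style inference; paths and file names are placeholders,
# not files shipped with this repository.
import cv2
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
# Hypothetical paths: replace with a PAPS config file and a checkpoint from the model zoo.
cfg.merge_from_file("configs/paps_example.yaml")
cfg.MODEL.WEIGHTS = "checkpoints/paps_example.pth"

predictor = DefaultPredictor(cfg)
image = cv2.imread("example.png")  # BGR image, as expected by Detectron2
outputs = predictor(image)
print(outputs.keys())
```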
Pre-trained models can be found in the model zoo.
We have used utility functions from other open-source projects and especially thank the authors of:
For academic usage, the code is released under the GPLv3 license. For any commercial purpose, please contact the authors.