SwiftMOS: A Fast and Lightweight Moving Object Segmentation via Feature Flowing Direct View Transformation
This repository provides the official SwiftMOS code.

We recommend using a PyTorch-CUDA Docker image:
```bash
$ docker pull pytorch/pytorch:1.9.1-cuda11.1-cudnn8-devel
$ export DISPLAY=***.***.***.***:0   # * : your IP address, for visualization
$ xhost +
$ docker run -it \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix/:/tmp/.X11-unix:ro \
    -v (path to KITTI dataset in local):(path to KITTI dataset to set in container) \
    --privileged \
    --name SwiftMOS \
    --ipc=host \
    --gpus all \
    pytorch/pytorch:1.9.1-cuda11.1-cudnn8-devel \
    /bin/bash
```
Inside the container, set up the environment:

```bash
$ conda init
$ source ~/.bashrc
$ apt-get update -y
$ apt install -y git vim unzip wget dpkg build-essential
$ apt install -y libgl1-mesa-glx libglib2.0-0 libxcb-cursor0 x11-apps
$ conda create -n swiftmos python=3.8
$ conda activate swiftmos
```

Then clone the repository and install the dependencies:

```bash
git clone https://github.com/MinChoi0129/SwiftMOS.git
cd SwiftMOS
pip install torch==1.9.1+cu111 torchvision==0.10.1+cu111 torchaudio==0.9.1 -f https://download.pytorch.org/whl/torch_stable.html
pip install --no-index torch-scatter -f https://pytorch-geometric.com/whl/torch-1.9.1+cu111.html
pip install -r requirements.txt
cd deep_point
python setup.py install
```

Please download the SemanticKITTI dataset to the folder `SemanticKITTI`; the folder structure should look like:
```
ROOT_to_SemanticKITTI
└── dataset/
    └── sequences/
        ├── 00/
        │   ├── velodyne/
        │   │   ├── 000000.bin
        │   │   ├── 000001.bin
        │   │   └── ...
        │   └── labels/
        │       ├── 000000.label
        │       ├── 000001.label
        │       └── ...
        ├── ...
        ├── 08/   # for validation
        ├── 11/   # 11-21 for testing
        ├── ...
        └── 21/
```
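As a quick sanity check of this layout, the following is a minimal sketch (not part of the repository; the sequence path is a placeholder) that verifies every scan in a sequence has a matching label:

```python
import os

# Placeholder path: point this at one of your local sequences.
seq_dir = "/path/to/SemanticKITTI/dataset/sequences/00"

scan_ids = {os.path.splitext(f)[0]
            for f in os.listdir(os.path.join(seq_dir, "velodyne"))
            if f.endswith(".bin")}
label_ids = {os.path.splitext(f)[0]
             for f in os.listdir(os.path.join(seq_dir, "labels"))
             if f.endswith(".label")}

# Every 000123.bin scan should have a matching 000123.label file.
missing = sorted(scan_ids - label_ids)
print(f"{len(scan_ids)} scans, {len(label_ids)} labels, {len(missing)} scans without labels")
```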
Also download the object bank for SemanticKITTI to the folder `object_bank_semkitti`; the folder structure should look like:
```
ROOT_to_Object_Bank
├── bicycle
├── bicyclist
├── car
├── motorcycle
├── motorcyclist
├── other-vehicle
├── person
└── truck
```
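Similarly, a short sketch (the root path is again a placeholder) to confirm that all eight class folders of the object bank are present:

```python
import os

# Placeholder path: point this at your local object bank.
bank_dir = "/path/to/object_bank_semkitti"

classes = ["bicycle", "bicyclist", "car", "motorcycle",
           "motorcyclist", "other-vehicle", "person", "truck"]
for c in classes:
    print(c, "ok" if os.path.isdir(os.path.join(bank_dir, c)) else "MISSING")
```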
In `config/config_MOS.py`:

- Fill in `batch_size_per_gpu` according to your computing resources.
- Set `SeqDir` to the SemanticKITTI `sequences` path (an absolute path is recommended).
- Set `ObjBackDir` to the Object Bank path (an absolute path is recommended); example settings are sketched below.
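For illustration, a sketch of how these entries might look; the values are placeholders, and the exact formatting in `config_MOS.py` may differ:

```python
# config/config_MOS.py (excerpt) -- placeholder values, adjust to your setup
batch_size_per_gpu = 4  # depends on your GPU memory
SeqDir = "/absolute/path/to/SemanticKITTI/dataset/sequences"  # absolute path recommended
ObjBackDir = "/absolute/path/to/object_bank_semkitti"         # absolute path recommended
```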
In `scripts/train_multi_gpu.sh`:

- Fill in `CUDA_VISIBLE_DEVICES` and `NumGPUs` according to your computing resources.
- Note: `NumGPUs` must equal the number of device IDs in the exported `CUDA_VISIBLE_DEVICES` variable; one way to keep them consistent is sketched below.
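A minimal sketch of that consistency check, deriving the GPU count from the device list itself (this helper is not part of the repository):

```python
import os

def num_visible_gpus() -> int:
    """Count the device IDs listed in CUDA_VISIBLE_DEVICES."""
    devices = os.environ.get("CUDA_VISIBLE_DEVICES", "")
    return len([d for d in devices.split(",") if d.strip()])

print(num_visible_gpus())  # e.g. 2 when CUDA_VISIBLE_DEVICES=0,1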
```bash
bash scripts/train_multi_gpu.sh
```

After every epoch of the training session, you will see metrics such as Moving IoU on the validation sequence (08). However, the evaluation run during training does not save the prediction labels, in order to keep training fast.
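For reference, Moving IoU follows the standard intersection-over-union definition for the moving class; a sketch from confusion counts (this is the standard formula, not the repository's exact evaluation code):

```python
def moving_iou(tp: int, fp: int, fn: int) -> float:
    """Standard IoU for the moving class: TP / (TP + FP + FN)."""
    denom = tp + fp + fn
    return tp / denom if denom else 0.0

print(moving_iou(tp=900, fp=50, fn=50))  # 0.9
```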
If you want to save the labels, just run the 5.2 Evaluation Process below.
This process saves the prediction labels. If you do not want that, comment out `--save-label` in the script and remove the trailing backslash (`\`) on the line above it.
Before you run the validation command below, first create the folder `experiments/config_MOS/checkpoint`. We provide a pretrained SwiftMOS model file as `./50-checkpoint.pth`; move this `.pth` file into the folder you just created.
```bash
bash scripts/validate.sh
```

To measure the model's inference speed:

```bash
bash scripts/model_infer_speed.sh
```

We provide a Dockerfile due to the complex installation process for the Nvidia AGX ORIN NX hardware. This Dockerfile primarily sets up the environment; you will still need to install SwiftMOS within it.
