Made with Python · Made with PyTorch

# PointPainting-Semantic-Segmentation

My PyTorch implementation of the PointPainting paper: real-time pointcloud semantic segmentation that paints each point (labels it with a class) based on semantic segmentation maps produced by BiSeNetv2.

*(demo image: painted pointcloud)*

## Project

- BiSeNetv2 model trained on the KITTI dataset
- Implementation of the PointPainting fusion algorithm
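The fusion step is conceptually simple: project each lidar point into the camera image with the calibration matrices and look up the class at that pixel. Below is a minimal numpy sketch under assumed shapes (a 3x4 projection matrix and a 2D label map); the helper name is hypothetical and the repo's real implementation lives in `pointpainting.py`:

```python
import numpy as np

def paint_pointcloud(points, proj_matrix, semantic_map):
    """Append a class label to each lidar point by projecting it onto the
    semantic segmentation map (hypothetical helper, not the repo's API).

    points:       (N, 3) lidar xyz
    proj_matrix:  (3, 4) lidar-to-image projection
    semantic_map: (H, W) integer class ids
    """
    n = points.shape[0]
    # homogeneous lidar coordinates (N, 4)
    pts_h = np.hstack([points[:, :3], np.ones((n, 1))])
    # project to the image plane: (3, 4) @ (4, N) -> (3, N)
    cam = proj_matrix @ pts_h.T
    # normalize by depth to get pixel coordinates
    depth = cam[2]
    u = np.round(cam[0] / depth).astype(int)
    v = np.round(cam[1] / depth).astype(int)
    h, w = semantic_map.shape
    labels = np.full(n, -1)  # -1 marks points not visible in the image
    valid = (depth > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    labels[valid] = semantic_map[v[valid], u[valid]]
    # return the points with the class label appended as a 4th column
    return np.hstack([points, labels[:, None]])
```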

## Update (1 June 2022)

## Demo Video

*(demo video)*

## Download the checkpoint

Download the weights from Drive and place the file in `BiSeNetv2/checkpoints`.

**Important note:** the downloaded file is named `BiseNetv2_150.pth.tar`. Do not untar it; just rename it to `BiseNetv2_150.pth`.

## Run Demo

```shell
python3 demo.py --image_path PATH_TO_IMAGE --pointcloud_path PATH_TO_POINTCLOUD --calib_path PATH_TO_CALIB --weights_path PATH_TO_MODEL_WEIGHTS

# note that default arguments are set to one of the uploaded KITTI samples, so you can simply run
python3 demo.py
```
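The calib file KITTI ships is plain text with one named matrix per line; in this repo, `KittiCalibration.py` is responsible for parsing it. As a rough sketch of what such a parser looks like (hypothetical helper, not the repo's actual code):

```python
import numpy as np

def read_kitti_calib(path):
    """Parse a KITTI calibration file into a dict of flat numpy arrays.

    Each line has the form 'KEY: v1 v2 v3 ...' (e.g. 'P2: 7.2e+02 0 ...').
    Hypothetical helper; the repo's KittiCalibration.py is the real parser.
    """
    calib = {}
    with open(path) as f:
        for line in f:
            if ":" not in line:
                continue  # skip blank lines and headers
            key, vals = line.split(":", 1)
            calib[key.strip()] = np.array([float(v) for v in vals.split()])
    return calib
```

The flat arrays would then be reshaped, e.g. `calib["P2"].reshape(3, 4)` for a projection matrix.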

*(2D demo output)*

```shell
# add --mode 3d to see a 3D visualization of the painted pointcloud
python3 demo.py --mode 3d
```
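The `--pointcloud_path` argument expects a KITTI velodyne `.bin` file, which stores float32 `x, y, z, reflectance` records back to back. Reading one is nearly a one-liner (a sketch assuming the standard KITTI format; the helper name is mine):

```python
import numpy as np

def load_kitti_bin(path):
    """Read a KITTI velodyne .bin file into an (N, 4) float32 array of
    x, y, z, reflectance. Hypothetical helper name."""
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)
```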

*(3D demo output)*

## Run Demo on KITTI Videos

KITTI provides sequential videos for testing. Download them from KITTI Videos: for the selected sequence, get the video data (left color images and pointclouds, from [synced+rectified data]) and the calibration files (from [calibration]).

```shell
# PATH_TO_VIDEO is a directory containing both 'image_02' and 'velodyne_points'
# PATH_TO_CALIB is a directory containing the calib files ['calib_cam_to_cam', '', '']
# --mode 2d visualizes image + BEV; --mode 3d visualizes the 3D painted pointcloud
python3 demo_video.py --video_path PATH_TO_VIDEO --calib_path PATH_TO_CALIB --mode 3d
```
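Under the hood a video demo like this just has to walk the `image_02` and `velodyne_points` directories in lockstep (the repo's `KittiVideo.py` handles this). A minimal sketch with `pathlib`, assuming the standard KITTI raw layout with `data` subfolders (hypothetical helper):

```python
from pathlib import Path

def list_kitti_frames(video_path):
    """Pair each camera frame with its pointcloud in a KITTI raw sequence.

    Assumes the standard layout:
        <video_path>/image_02/data/*.png
        <video_path>/velodyne_points/data/*.bin
    Sorting both listings keeps matching frame indices aligned.
    """
    images = sorted(Path(video_path, "image_02", "data").glob("*.png"))
    clouds = sorted(Path(video_path, "velodyne_points", "data").glob("*.bin"))
    return list(zip(images, clouds))
```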

*(3D video output)*

## BiSeNetv2

Real-time semantic segmentation on images.

*(model architecture)*

Thanks to https://github.com/CoinCheung/BiSeNet for the implementation trained on the Cityscapes dataset; I fine-tuned it on the KITTI dataset using PyTorch.

## Training on KITTI dataset

```shell
cd BiSeNetv2
python3 train.py
```

I trained the model on Colab and provide the notebook.

*(training screenshot)*

## Test on KITTI Semantic

```shell
cd BiSeNetv2
python3 test.py
```
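A segmentation test script like this typically reports per-class IoU and mean IoU. The sketch below shows the metric on flat label arrays (my own sketch of the standard definition, not necessarily how `test.py` computes it):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union for semantic segmentation.

    Classes absent from both prediction and ground truth are skipped
    so they do not drag the mean down.
    """
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```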

## KITTI Dataset

The KITTI semantic segmentation dataset contains 200 images for training and 200 for testing. Download it from the KITTI website.

```shell
# visualize the dataset on tensorboard
python3 visualization.py --tensorboard

# PATH_TO_TENSORBOARD_FOLDER is "BiSeNetv2/checkpoints/tensorboard/"
tensorboard --logdir PATH_TO_TENSORBOARD_FOLDER
```

*(tensorboard screenshot)*

## Folder structure

```
├── BiSeNetv2
│   ├── checkpoints
│   │   ├── BiseNetv2_150.pth   # model weights
│   │   ├── tensorboard         # saved tensorboard events
│   ├── data                    # KITTI semantic dataset
│   │   ├── KITTI
│   │   │   ├── testing
│   │   │   │   ├── image_2
│   │   │   ├── training
│   │   │   │   ├── image_2
│   │   │   │   ├── instance
│   │   │   │   ├── semantic
│   │   │   │   ├── semantic_rgb
│   ├── utils
│   │   ├── label.py            # label information (colors/ids/names)
│   │   ├── utils.py            # utility functions
│   ├── train.py
│   ├── test.py
├── Kitti_sample                # 2 images, pointclouds & calib for testing (used by demo.py)
├── KittiCalibration.py         # stores calibration file matrices
├── KittiVideo.py               # KITTI video reader
├── bev_utils.py                # BEV algorithms
├── demo.py                     # demo on one sample (Kitti_sample)
├── demo_video.py               # demo on KITTI videos
├── pointpainting.py            # implementation of PointPainting
├── visualizer.py               # visualizer using open3d & opencv
```

## References