- Hardware: GPU(s) with at least 14000 MB of memory
- Software: Linux (tested on Ubuntu 18.04); PyTorch >= 1.5.0, Python >= 3, CUDA >= 10.1, tensorboardX, tqdm, PyYAML
Download and unzip ShapeNet Part (674 MB). Then symlink the path to it as follows (alternatively, modify the dataset path used by the code):
```
mkdir -p data
ln -s /path/to/shapenet_part/shapenetcore_partanno_segmentation_benchmark_v0_normal data
```
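As an optional sanity check (not part of the repository's scripts), you can verify that the symlink resolves; the index file name below follows the standard ShapeNet Part layout and is an assumption about your copy of the data.

```python
import os

# Hypothetical sanity check: confirm the symlinked dataset is reachable.
# "synsetoffset2category.txt" is the standard ShapeNet Part index file;
# adjust the name if your copy is organized differently.
root = os.path.join("data", "shapenetcore_partanno_segmentation_benchmark_v0_normal")
index_file = os.path.join(root, "synsetoffset2category.txt")

assert os.path.isdir(root), f"Dataset folder not found: {root}"
assert os.path.isfile(index_file), f"Category index not found: {index_file}"

with open(index_file) as f:
    categories = [line.split()[0] for line in f if line.strip()]
print(f"Found {len(categories)} categories, e.g. {categories[:3]}")
```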
Build the CUDA kernel:
When you run the program for the first time, please wait a few moments while the cuda_lib is compiled automatically. Once the CUDA kernel has been built, later runs will skip this step.
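For reference only: first-run compilation of a CUDA extension in PyTorch is commonly handled by the JIT loader in torch.utils.cpp_extension, roughly as sketched below; the extension name and source paths are placeholders, not necessarily the repository's actual cuda_lib layout.

```python
from torch.utils.cpp_extension import load

# Illustrative JIT build of a CUDA extension. The first call compiles the
# sources and caches the result; subsequent runs reuse the cached build.
# The extension name and source paths below are hypothetical placeholders.
cuda_lib = load(
    name="paconv_cuda",                       # hypothetical extension name
    sources=["cuda_lib/src/ops.cpp",          # hypothetical C++ bindings
             "cuda_lib/src/ops_kernel.cu"],   # hypothetical CUDA kernel
    verbose=True,
)
print("CUDA extension ready:", cuda_lib)
```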
Train:
Multi-thread training (nn.DataParallel):

```
python main.py --config config/dgcnn_paconv_train.yaml
```

This embeds PAConv into DGCNN.
We also provide fast multi-process training (nn.parallel.DistributedDataParallel, recommended) with the official nn.SyncBatchNorm. Remember to specify the GPU IDs:

```
CUDA_VISIBLE_DEVICES=x,x python main_ddp.py --config config/dgcnn_paconv_train.yaml
```

This also embeds PAConv into DGCNN.
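As background only, the multi-process setup generally follows the standard PyTorch DistributedDataParallel recipe sketched below, with BatchNorm layers converted to nn.SyncBatchNorm; this is not the repository's actual launcher, and a toy network stands in for the PAConv/DGCNN model.

```python
import os
import torch
import torch.nn as nn
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(rank: int, world_size: int):
    # One process per GPU; NCCL is the usual backend for multi-GPU training.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    # Toy network standing in for the PAConv/DGCNN model. SyncBatchNorm shares
    # batch statistics across processes, unlike plain BatchNorm under nn.DataParallel.
    model = nn.Sequential(nn.Linear(3, 64), nn.BatchNorm1d(64),
                          nn.ReLU(), nn.Linear(64, 50)).cuda(rank)
    model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
    model = DDP(model, device_ids=[rank])

    points = torch.randn(16, 3).cuda(rank)           # dummy batch
    labels = torch.randint(0, 50, (16,)).cuda(rank)  # dummy part labels
    loss = nn.functional.cross_entropy(model(points), labels)
    loss.backward()                                  # gradients are all-reduced here
    dist.destroy_process_group()

if __name__ == "__main__":
    n_gpus = torch.cuda.device_count()
    mp.spawn(worker, args=(n_gpus,), nprocs=n_gpus)
```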
Test:
Download our pretrained model and put it under the part_seg folder.
Run the voting evaluation script to test our pretrained model; if everything goes right, voting yields an instance mIoU of 86.1%:

```
python eval_voting.py --config config/dgcnn_paconv_test.yaml
```
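Here, "voting" refers to averaging the network's predictions over several augmented forward passes before taking the argmax; the sketch below illustrates that idea with random rescaling, which is an assumption about what eval_voting.py does rather than a copy of it.

```python
import torch

def vote_predict(model, points, num_votes=10, scale_range=(0.8, 1.2)):
    """Illustrative voting: average softmax outputs over randomly rescaled
    copies of the input cloud. The augmentation and vote count are assumed,
    not taken from eval_voting.py."""
    model.eval()
    summed = None
    with torch.no_grad():
        for _ in range(num_votes):
            scale = torch.empty(1).uniform_(*scale_range).item()
            probs = torch.softmax(model(points * scale), dim=-1)  # (B, N, parts)
            summed = probs if summed is None else summed + probs
    return (summed / num_votes).argmax(dim=-1)  # voted per-point part labels
```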
You can also test our pretrained model directly, without voting, to get an instance mIoU of 86.0%:

```
python main.py --config config/dgcnn_paconv_test.yaml
```
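On the metric: instance mIoU averages the per-shape IoU over all test shapes, while class mIoU first averages within each of the 16 ShapeNet Part categories and then across categories. The sketch below uses these common definitions, which are an assumption about this repository's exact implementation.

```python
import numpy as np

def miou_summaries(per_shape_iou, shape_category, num_categories=16):
    """Instance mIoU: mean IoU over all shapes.
    Class mIoU: mean of per-category average IoUs (assumed definitions)."""
    per_shape_iou = np.asarray(per_shape_iou, dtype=float)
    shape_category = np.asarray(shape_category)
    ins_miou = per_shape_iou.mean()
    cls_means = [per_shape_iou[shape_category == c].mean()
                 for c in range(num_categories)
                 if np.any(shape_category == c)]
    return float(ins_miou), float(np.mean(cls_means))

# Toy usage with made-up numbers (3 categories, 4 shapes):
print(miou_summaries([0.9, 0.8, 0.7, 0.95], [0, 0, 1, 2], num_categories=3))
```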
For the full test after training the model:

- Specify `eval` as `True` in your config file.
- Make sure to use main.py (main_ddp.py may give a wrong result due to the repetition problem of the all_reduce function in multi-process training):

```
python main.py --config config/your_config_file.yaml
```
You can choose to test the model with the best instance mIoU, class mIoU, or accuracy by setting `model_type` to `insiou`, `clsiou`, or `acc` in the test config file.
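If you prefer to script these settings, PyYAML (already listed in the requirements) can rewrite an existing config. The keys `eval` and `model_type` are the options named above, but their exact placement inside the YAML file is an assumption here, and the file paths are placeholders.

```python
import yaml  # PyYAML, listed in the requirements

# Hypothetical helper: derive a test config from a training config.
# Whether "eval" and "model_type" are top-level keys is an assumption.
src, dst = "config/dgcnn_paconv_train.yaml", "config/my_test.yaml"  # placeholder paths
with open(src) as f:
    cfg = yaml.safe_load(f)

cfg["eval"] = True            # run the full test instead of training
cfg["model_type"] = "insiou"  # or "clsiou" / "acc"

with open(dst, "w") as f:
    yaml.safe_dump(cfg, f)
print(f"Wrote test config to {dst}")
```

Then run `python main.py --config config/my_test.yaml` as above.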
Visualization: tensorboardX is incorporated for better visualization:

```
tensorboard --logdir=checkpoints/exp_name
```
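For reference, writing scalars with tensorboardX follows the usual SummaryWriter pattern; the experiment directory and tag names below are placeholders rather than what the training scripts actually log.

```python
from tensorboardX import SummaryWriter

# Illustrative logging; "checkpoints/exp_name" and the scalar tags are placeholders.
writer = SummaryWriter(logdir="checkpoints/exp_name")
for epoch in range(3):
    writer.add_scalar("train/loss", 1.0 / (epoch + 1), epoch)      # dummy values
    writer.add_scalar("val/ins_miou", 0.80 + 0.01 * epoch, epoch)  # dummy values
writer.close()
```

Point `tensorboard --logdir` at the same directory to view the curves.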
If you find the code or trained models useful, please consider citing:
```
@inproceedings{xu2021paconv,
  title={PAConv: Position Adaptive Convolution with Dynamic Kernel Assembling on Point Clouds},
  author={Xu, Mutian and Ding, Runyu and Zhao, Hengshuang and Qi, Xiaojuan},
  booktitle={CVPR},
  year={2021}
}
```
You are welcome to send pull requests or share some ideas with us. Contact information: Mutian Xu (mino1018@outlook.com) or Runyu Ding (ryding@eee.hku.hk).
This code is partially borrowed from DGCNN and PointNet++.