Kangli Wang1,
Qianxi Yi1,2,
Yuqi Ye1,
Shihao Li1,
Wei Gao1,2*
(* Corresponding author)
1SECE, Peking University
2Peng Cheng Laboratory, Shenzhen, China
TL;DR: AnyPcc compresses any source point cloud with a single universal model.
- [25-10-24] 🔥 We released the initial version of the paper and project.
- [26-02-21] 🔥 Congratulations on the acceptance of AnyPcc to CVPR 2026!
- [26-02-24] 🔥 The complete training and testing code and pre-trained checkpoint of AnyPcc have been released.
- [26-03-03] 🔥 All datasets have been released.
Our other work on point cloud and 3DGS compression has also been released; feel free to check it out.
- 🔥 UniPCGC [AAAI'25]: A unified point cloud geometry compression framework. [Paper] [Arxiv] [Project]
- 🔥 GausPcc [Arxiv'25]: Efficient 3D Gaussian compression! [Arxiv] [Project]
Generalization remains a critical challenge for deep learning-based point cloud geometry compression. We argue this stems from two key limitations: the lack of robust context models and the inefficient handling of out-of-distribution (OOD) data. To address both, we introduce AnyPcc, a universal point cloud compression framework. AnyPcc first employs a Universal Context Model that leverages priors from both spatial and channel-wise grouping to capture robust contextual dependencies. Second, our novel Instance-Adaptive Fine-Tuning (IAFT) strategy tackles OOD data by synergizing explicit and implicit compression paradigms. It fine-tunes a small subset of network weights for each instance and incorporates them into the bitstream, where the marginal bit cost of the weights is dwarfed by the resulting savings in geometry compression. Extensive experiments on a benchmark of 15 diverse datasets confirm that AnyPcc sets a new state-of-the-art in point cloud compression. Our code and datasets will be released to encourage reproducible research.
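The IAFT trade-off described above can be sketched as a simple bit-accounting check. This is an illustrative sketch only; the function name and the numbers are assumptions, not the actual AnyPcc implementation:

```python
def iaft_worthwhile(base_geometry_bits: int,
                    tuned_geometry_bits: int,
                    weight_update_bits: int) -> bool:
    """IAFT ships a small set of fine-tuned weights inside the bitstream.
    Doing so pays off only when the geometry-bit savings from the adapted
    model exceed the extra bits spent on the weight updates."""
    savings = base_geometry_bits - tuned_geometry_bits
    return savings > weight_update_bits

# Illustrative numbers: a 500-bit weight update that saves 12 kbit of geometry.
print(iaft_worthwhile(100_000, 88_000, 500))  # → True
```

In this hypothetical accounting, the weight bits are "dwarfed" by the geometry savings exactly when the check returns True.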
The code has been tested on Ubuntu with Python 3.10, PyTorch 2.4.0, and CUDA 12.1. The environment requires only a few libraries, so older versions of CUDA and PyTorch should also work. Configure the environment as follows.
# 1. Clone the repository
git clone https://github.com/Wangkkklll/AnyPcc.git
cd AnyPcc
# 2. Create and activate conda environment
conda create -n anypcc python=3.10 -y
conda activate anypcc
# 3. Install PyTorch (CUDA 12.1)
pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/cu121
# 4. Install specific dependencies directly via Git
pip install git+https://github.com/mit-han-lab/torchsparse.git
pip install git+https://github.com/fraunhoferhhi/DeepCABAC.git
# 5. Install other requirements
pip install torchac
pip install -r requirements.txt
The training sets we used include 8iVFB, MVUB, KITTI, Ford, ScanNet, Thuman, and GausPcc-1K. Please refer to the training and testing config files for specific details.
- KITTI : https://www.cvlibs.net/datasets/kitti/
- 8iVFB : http://plenodb.jpeg.org/pc/8ilabs/
- Owlii : https://mpeg-pcc.org/index.php/pcc-content-database/owlii-dynamic-human-textured-mesh-sequence-dataset/
- ScanNet : https://github.com/ScanNet/ScanNet
- MVUB : http://plenodb.jpeg.org/pc/microsoft/
- GausPcc-1K : https://github.com/Wangkkklll/GausPcc
- Thuman : https://github.com/ytrock/THuman2.0-Dataset (we use this mesh dataset and sample it to 10-bit dense point clouds; some examples can be found at our Link.)
- Ford : This dataset can be found at our Link.
Please refer to the following Link to obtain the pretrained models and dataset.
# train for lossless unified model
script/train/ucm_u.sh
# train for lossy model
script/train/ucm_1stage_u.sh
# for other single models, see script/train
Before compression, all point clouds must be quantized. The quantization parameters are preprocess_scale, preprocess_shift, and posQ, and the quantization formula is torch.round((xyz / preprocess_scale + preprocess_shift) / posQ).
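As a minimal illustration, the same mapping can be written in plain Python (Python's round() rounds half to even, matching torch.round; the parameter values below are hypothetical, not recommended settings):

```python
def quantize(xyz, preprocess_scale, preprocess_shift, posQ):
    # README formula: torch.round((xyz / preprocess_scale + preprocess_shift) / posQ)
    return [round((c / preprocess_scale + preprocess_shift) / posQ) for c in xyz]

def dequantize(q, preprocess_scale, preprocess_shift, posQ):
    # Inverse mapping; exact only when the forward rounding loses nothing.
    return [(v * posQ - preprocess_shift) * preprocess_scale for v in q]

coords = [10.0, 20.0, 30.0]
q = quantize(coords, preprocess_scale=2.0, preprocess_shift=1.0, posQ=1)
print(q)                           # → [6, 11, 16]
print(dequantize(q, 2.0, 1.0, 1))  # → [10.0, 20.0, 30.0]
```

With posQ = 1 the round trip is lossless for coordinates that land on the grid; posQ > 1 coarsens the grid and makes the compression lossy.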
# test lossless compression on all datasets
script/test/ucm_u_all.sh
# test lossy compression on dense point clouds
script/test/ucm_u_lossy.sh
# test lossy compression on sparse point clouds
script/test/ucm_u_all.sh (set posQ != 1 for quantization-based lossy compression)
# test OOD data using IAFT
script/test/ucm_u_tune.sh
Please refer to the parameter settings in compress_decompress.sh.
script/test/compress_decompress.sh
If you have any comments or questions, feel free to contact kangliwang@stu.pku.edu.cn.
If you find our work helpful, please consider citing it as follows and starring our repo.
@article{wang2025anypcc,
title={AnyPcc: Compressing Any Point Cloud with a Single Universal Model},
author={Wang, Kangli and Yi, Qianxi and Ye, Yuqi and Li, Shihao and Gao, Wei},
journal={arXiv preprint arXiv:2510.20331},
year={2025}
}