AnyPcc: Compressing Any Point Cloud with a Single Universal Model

Kangli Wang1, Qianxi Yi1,2, Yuqi Ye1, Shihao Li1, Wei Gao1,2*
(* Corresponding author)

1SECE, Peking University
2Peng Cheng Laboratory, Shenzhen, China


TL;DR: AnyPcc compresses any source point cloud with a single universal model.

📣 News

  • [25-10-24] 🔥 Initial release of the paper and project page.
  • [26-02-21] 🔥 Congratulations on the acceptance of AnyPcc to CVPR 2026!
  • [26-02-24] 🔥 The complete training and testing code and pre-trained checkpoints of AnyPcc have been released.
  • [26-03-03] 🔥 All datasets have been released.

Links

Our other work on point cloud and 3DGS compression has also been released; feel free to check it out.

📌 Introduction

Generalization remains a critical challenge for deep learning-based point cloud geometry compression. We argue this stems from two key limitations: the lack of robust context models and the inefficient handling of out-of-distribution (OOD) data. To address both, we introduce AnyPcc, a universal point cloud compression framework. AnyPcc first employs a Universal Context Model that leverages priors from both spatial and channel-wise grouping to capture robust contextual dependencies. Second, our novel Instance-Adaptive Fine-Tuning (IAFT) strategy tackles OOD data by synergizing explicit and implicit compression paradigms. It fine-tunes a small subset of network weights for each instance and incorporates them into the bitstream, where the marginal bit cost of the weights is dwarfed by the resulting savings in geometry compression. Extensive experiments on a benchmark of 15 diverse datasets confirm that AnyPcc sets a new state-of-the-art in point cloud compression. Our code and datasets will be released to encourage reproducible research.
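The core premise of IAFT is a rate trade-off: the per-instance weight update is worth encoding only when the geometry bits it saves exceed the bits it costs. The toy sketch below (not the paper's implementation; all numbers are hypothetical) illustrates that accounting.

```python
# Toy sketch of the rate accounting behind Instance-Adaptive Fine-Tuning (IAFT).
# This is NOT the paper's implementation; function name and numbers are
# illustrative only.

def iaft_worthwhile(base_geometry_bits: float,
                    tuned_geometry_bits: float,
                    weight_update_bits: float) -> bool:
    """Accept the per-instance weight update only if it lowers the total rate.

    base_geometry_bits:  geometry bitstream size with the universal model as-is
    tuned_geometry_bits: geometry bitstream size after instance fine-tuning
    weight_update_bits:  cost of transmitting the fine-tuned weight subset
    """
    total_tuned = tuned_geometry_bits + weight_update_bits
    return total_tuned < base_geometry_bits

# Hypothetical OOD point cloud: fine-tuning a tiny weight subset saves far
# more geometry bits (150k) than the weight update costs (20k).
print(iaft_worthwhile(1_000_000, 850_000, 20_000))  # True
```

When the savings do not cover the weight cost (e.g. in-distribution data the universal model already handles well), the update is simply not included and the framework falls back to the shared model.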


Illustration of the proposed framework.

🔑 Setup

The code has been tested on Ubuntu with Python 3.10, PyTorch 2.4.0, and CUDA 12.1. The environment requires very few libraries, so lower versions of CUDA and PyTorch should also work. Configure the environment as follows.

# 1. Clone the repository
git clone https://github.com/Wangkkklll/AnyPcc.git
cd AnyPcc

# 2. Create and activate conda environment
conda create -n anypcc python=3.10 -y
conda activate anypcc

# 3. Install PyTorch (CUDA 12.1)
pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/cu121

# 4. Install specific dependencies directly via Git
pip install git+https://github.com/mit-han-lab/torchsparse.git
pip install git+https://github.com/fraunhoferhhi/DeepCABAC.git

# 5. Install other requirements
pip install torchac
pip install -r requirements.txt

🧩 Dataset Preparation and Pretrained Model

The training sets we used include 8iVFB, MVUB, KITTI, Ford, ScanNet, Thuman, and GausPcc-1K. Please refer to the training and testing config files for specific details.

Trainset

Testset and Pretrained Model

Please refer to the following link to obtain the pretrained models and datasets.

🚀 Running

Training

# train the lossless unified model
script/train/ucm_u.sh
# train the lossy model
script/train/ucm_1stage_u.sh
# for other single-model variants, see script/train

Testing

Before compression, all point clouds must be quantized. The quantization parameters are preprocess_scale, preprocess_shift, and posQ, and the quantization formula is torch.round((xyz / preprocess_scale + preprocess_shift) / posQ).
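The quantization step above can be sketched in plain Python as follows (parameter names mirror the README; the dequantize helper is our addition for illustration and is lossy whenever posQ > 1):

```python
# Minimal sketch of the coordinate quantization described above, in plain
# Python rather than torch. Parameter names (preprocess_scale,
# preprocess_shift, posQ) follow the README; dequantize() is an illustrative
# inverse, not part of the released code.

def quantize(xyz, preprocess_scale=1.0, preprocess_shift=0.0, posQ=1):
    """Quantize raw coordinates: round((xyz / scale + shift) / posQ)."""
    return [round((c / preprocess_scale + preprocess_shift) / posQ) for c in xyz]

def dequantize(q, preprocess_scale=1.0, preprocess_shift=0.0, posQ=1):
    """Invert the mapping; exact only when posQ == 1 and coords were integral."""
    return [(c * posQ - preprocess_shift) * preprocess_scale for c in q]

point = [12.7, -3.2, 100.0]
print(quantize(point))                    # posQ=1: lossless-style rounding
print(quantize(point, posQ=2))            # posQ>1: coarser, lossy quantization
```

Setting posQ to 1 keeps full integer precision (lossless pipeline); larger posQ values trade geometry fidelity for rate, which is how the sparse lossy test below is driven.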

# test lossless compression on all datasets
script/test/ucm_u_all.sh

# test lossy compression on dense point clouds
script/test/ucm_u_lossy.sh

# test lossy compression on sparse point clouds
# (set posQ != 1 in the script for quantization-based lossy compression)
script/test/ucm_u_all.sh

# test OOD data using IAFT
script/test/ucm_u_tune.sh

Compress and Decompress

Please refer to the parameter settings in compress_decompress.sh.

script/test/compress_decompress.sh

🔎 Contact

If you have any comments or questions, feel free to contact kangliwang@stu.pku.edu.cn.

📘 Citation

If you find our work helpful, please consider citing it as follows and starring our repo.

@article{wang2025anypcc,
  title={AnyPcc: Compressing Any Point Cloud with a Single Universal Model},
  author={Wang, Kangli and Yi, Qianxi and Ye, Yuqi and Li, Shihao and Gao, Wei},
  journal={arXiv preprint arXiv:2510.20331},
  year={2025}
}

About

[CVPR 2026] Official Implementation for "AnyPcc: Compressing Any Point Cloud with a Single Universal Model"
