
ContextGS: Compact 3D Gaussian Splatting with Anchor Level Context Model

Welcome! This is the official implementation of the paper "ContextGS: Compact 3D Gaussian Splatting with Anchor Level Context Model".

Yufei Wang, Zhihao Li, Lanqing Guo, Wenhan Yang, Alex C. Kot, Bihan Wen

⭐ Overall

Our method, ContextGS, is the first to reduce the spatial redundancy among anchors using an autoregressive model.

We divide anchors into levels as shown in Fig. (b); anchors from coarser levels are used to predict anchors at finer levels, i.e., red anchors predict blue anchors, and then red and blue anchors together predict black anchors. Fig. (c) verifies the spatial redundancy by measuring the cosine similarity between anchors at level $0$ and their context anchors at levels $1$ and $2$. Fig. (d) shows the bit savings of the proposed anchor-level context model, evaluated on our strong entropy-coding baseline built on Scaffold-GS.
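
To make the coding order concrete, below is a minimal Python sketch of level-wise autoregressive entropy coding. It is illustrative only: encode_anchor_levels, context_net, and entropy_coder are hypothetical names, not the interfaces of the released code.

# Conceptual sketch of anchor-level autoregressive coding
# (hypothetical API, not this repository's implementation).
import torch

def encode_anchor_levels(anchors_per_level, context_net, entropy_coder):
    """anchors_per_level: list of [N_l, C] anchor feature tensors, coarsest first."""
    coded = []  # anchors already encoded, reusable as context for finer levels
    for level, anchors in enumerate(anchors_per_level):
        if level == 0:
            # coarsest level: no context available yet, fall back to a learned prior
            params = context_net.prior(anchors)
        else:
            # predict entropy parameters (e.g., Gaussian mean/scale) for this
            # level from all already-coded coarser levels
            params = context_net.predict(torch.cat(coded, dim=0), anchors)
        entropy_coder.encode(anchors, params)  # better prediction -> fewer bits
        coded.append(anchors)

The decoder mirrors this loop: because coarser levels are decoded first, the same context is available on both sides without extra signaling.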

🚀 Performance

Compared with Scaffold-GS, we achieve better rendering quality, faster rendering speed, and a size reduction of up to $15$ times, averaged over all the datasets we used.


🔥 Train/Evaluation

Installation

  1. Unzip files
cd submodules
unzip diff-gaussian-rasterization.zip
unzip simple-knn.zip
cd ..
  2. Install the environment
conda env create --file environment.yml
# or `sh setup_env.sh` # tested on CUDA 11.8
conda activate contextgs
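
To sanity-check the build, you can try importing the compiled submodules (the module names below are assumed from the packages unzipped above; adjust if your build differs):

# optional sanity check (assumed module names)
python -c "import torch, diff_gaussian_rasterization, simple_knn; print(torch.cuda.is_available())"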

Data

First, create a data/ folder inside the project path by

mkdir data

The data structure will be organised as follows:

data/
├── dataset_name
│   ├── scene1/
│   │   ├── images
│   │   │   ├── IMG_0.jpg
│   │   │   ├── IMG_1.jpg
│   │   │   ├── ...
│   │   ├── sparse/
│   │       └──0/
│   ├── scene2/
│   │   ├── images
│   │   │   ├── IMG_0.jpg
│   │   │   ├── IMG_1.jpg
│   │   │   ├── ...
│   │   ├── sparse/
│   │       └──0/
...
  • For instance: ./data/blending/drjohnson/
  • For instance: ./data/bungeenerf/amsterdam/
  • For instance: ./data/mipnerf360/bicycle/
  • For instance: ./data/nerf_synthetic/chair/
  • For instance: ./data/tandt/train/

Public Data (we follow the suggestions from Scaffold-GS)

  • The BungeeNeRF dataset is available on Google Drive or Baidu Netdisk (extraction code: 4whv).
  • The MipNeRF360 scenes are provided by the paper authors here. We test on all 9 scenes: bicycle, bonsai, counter, garden, kitchen, room, stump, flowers, and treehill.
  • The SfM datasets for Tanks&Temples and Deep Blending are hosted by 3D-Gaussian-Splatting here. Download and uncompress them into the data/ folder.

Custom Data

For custom data, process the image sequences with COLMAP to obtain the SfM points and camera poses, then place the results into the data/ folder.
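
For reference, here is a minimal sketch of the standard COLMAP pipeline. The paths are illustrative; any workflow that produces the images/ and sparse/0/ layout shown above works.

# minimal sketch: run the standard COLMAP pipeline so a custom scene matches
# the layout above (paths are illustrative; requires COLMAP on the PATH)
import subprocess
from pathlib import Path

scene = Path("data/my_dataset/my_scene")  # hypothetical scene folder
(scene / "sparse").mkdir(parents=True, exist_ok=True)

subprocess.run(["colmap", "feature_extractor",
                "--database_path", str(scene / "database.db"),
                "--image_path", str(scene / "images")], check=True)
subprocess.run(["colmap", "exhaustive_matcher",
                "--database_path", str(scene / "database.db")], check=True)
subprocess.run(["colmap", "mapper",
                "--database_path", str(scene / "database.db"),
                "--image_path", str(scene / "images"),
                "--output_path", str(scene / "sparse")], check=True)
# COLMAP writes the reconstruction to sparse/0/, matching the expected layout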

Training

To train scenes, we provide the following training scripts in ./scripts:

  • Tanks&Temples: run_shell_tnt.py
  • MipNeRF360: run_shell_mip360.py
  • BungeeNeRF: run_shell_bungee.py
  • Deep Blending: run_shell_db.py

Run them with:

python run_shell_xxx.py

The code will automatically run the entire pipeline: training, encoding, decoding, and testing.

  • The training log is recorded in output.log in the output directory, including detailed fidelity, size, and timing results.
  • Encoded bitstreams will be stored in ./bitstreams of the output directory.
  • Evaluated output images will be saved in ./test/ours_30000/renders of the output directory.
  • Optionally, you can change lmbda in these run_shell_xxx.py scripts to try different bitrates (see the sketch after this list).
  • After training, the original model point_cloud.ply is losslessly compressed into ./bitstreams. Refer to ./bitstreams for the final model size, not to point_cloud.ply. You can even delete point_cloud.ply if you like :).
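
For example, a hypothetical sweep over bitrates could look like this; lmbda is the variable exposed by the scripts, but the values below are placeholders, so check each run_shell_xxx.py for the ones it actually supports.

# hypothetical bitrate sweep (placeholder values); a larger lmbda typically
# weights the rate term more heavily and yields a smaller file
for lmbda in [0.004, 0.002, 0.001]:
    ...  # launch the training/encoding pipeline with this lmbda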

Decompress from binary files

You can use the following command to decompress from the binary files:

python3 decompress.py \
  -s 'scene path to calculate the metrics' \
  --eval \
  --lod '[int] level of detail' \
  -m 'output path' \
  --voxel_size '[float] voxel size used to train the model'
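
For instance (paths and values are illustrative; --lod and --voxel_size must match the values used for training):

python3 decompress.py -s data/tandt/train --eval --lod 0 -m outputs/tandt/train --voxel_size 0.001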

⭐ Citation

Please cite our paper if you find our work useful. Thanks!

@inproceedings{wang2024contextgs,
  title={Context{GS}: Compact 3D Gaussian Splatting with Anchor Level Context Model},
  author={Yufei Wang and Zhihao Li and Lanqing Guo and Wenhan Yang and Alex Kot and Bihan Wen},
  booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
  year={2024},
  url={https://openreview.net/forum?id=W2qGSMl2Uu}
}

📧 Contact

If you have any questions, please feel free to contact me via yufei001@ntu.edu.sg.

Acknowledgement
