Welcome! This is the official implementation of the paper "ContextGS: Compact 3D Gaussian Splatting with Anchor Level Context Model".
Yufei Wang, Zhihao Li, Lanqing Guo, Wenhan Yang, Alex C. Kot, Bihan Wen
Our method, ContextGS, is the first to propose reducing the spatial redundancy among anchors using an autoregressive model.
We divide anchors into levels as shown in Fig.(b), and anchors from coarser levels are used to predict anchors in finer levels, i.e., red anchors predict blue anchors, then red and blue anchors together predict black anchors. Fig.(c) verifies the spatial redundancy by calculating the cosine similarity between anchors across levels.
Compared with Scaffold-GS, we achieve better rendering quality, faster rendering speed, and a substantial reduction in model size.
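As a rough illustration of this coarse-to-fine idea, the toy sketch below partitions anchor positions into levels by voxelizing at progressively finer voxel sizes. This is only a sketch under our own assumptions: the function name `split_levels`, the voxel sizes, and the greedy first-occupant assignment are illustrative choices, not the paper's actual implementation.

```python
# Toy coarse-to-fine level assignment for anchor positions (illustrative;
# the real partition and context model are defined in the paper/code).
import numpy as np

def split_levels(xyz, voxel_sizes):
    """Assign each anchor to the coarsest level whose voxel it is the
    first occupant of; leftovers fall through to finer levels."""
    levels, remaining = [], np.arange(len(xyz))
    for v in sorted(voxel_sizes, reverse=True):   # coarsest voxels first
        keys = np.floor(xyz[remaining] / v).astype(np.int64)
        _, first = np.unique(keys, axis=0, return_index=True)
        picked = remaining[first]                 # one anchor per voxel
        levels.append(picked)
        remaining = np.setdiff1d(remaining, picked)
    if len(remaining):
        levels.append(remaining)                  # finest level: the rest
    return levels

# Coarser levels are entropy-decoded first; once decoded, they serve as
# context for predicting (and thus compressing) the finer levels.
xyz = np.random.rand(1000, 3).astype(np.float32)
print([len(lvl) for lvl in split_levels(xyz, [0.2, 0.1])])
```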
- Unzip files:

```
cd submodules
unzip diff-gaussian-rasterization.zip
unzip simple-knn.zip
cd ..
```

- Install the environment:

```
conda env create --file environment.yml
# or `sh setup_env.sh`  # tested on CUDA 11.8
conda activate contextgs
```
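As a quick sanity check that the environment is usable, you can run a snippet like the one below (illustrative; it assumes the environment ships PyTorch with CUDA support):

```python
# Environment sanity check (illustrative; assumes PyTorch is installed).
import torch
print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
```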
First, create a `data/` folder inside the project path:

```
mkdir data
```
The data structure will be organised as follows:
```
data/
├── dataset_name
│   ├── scene1/
│   │   ├── images
│   │   │   ├── IMG_0.jpg
│   │   │   ├── IMG_1.jpg
│   │   │   └── ...
│   │   └── sparse/
│   │       └── 0/
│   ├── scene2/
│   │   ├── images
│   │   │   ├── IMG_0.jpg
│   │   │   ├── IMG_1.jpg
│   │   │   └── ...
│   │   └── sparse/
│   │       └── 0/
│   └── ...
```
For instance:

- `./data/blending/drjohnson/`
- `./data/bungeenerf/amsterdam/`
- `./data/mipnerf360/bicycle/`
- `./data/nerf_synthetic/chair/`
- `./data/tandt/train/`
Public Data (we follow the suggestions from Scaffold-GS):
- The BungeeNeRF dataset is available on Google Drive / Baidu Netdisk [extraction code: 4whv].
- The MipNeRF360 scenes are provided by the paper authors here. We test on all 9 of its scenes: `bicycle`, `bonsai`, `counter`, `garden`, `kitchen`, `room`, `stump`, `flowers`, `treehill`.
- The SfM datasets for Tanks&Temples and Deep Blending are hosted by 3D-Gaussian-Splatting here. Download and uncompress them into the `data/` folder.
For custom data, process the image sequences with COLMAP to obtain the SfM points and camera poses, then place the results into the `data/` folder.
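To quickly confirm that a scene is laid out as expected before training, a check like the following can help (the scene path is just an example):

```python
# Verify a scene folder follows the expected COLMAP layout (illustrative).
from pathlib import Path

scene = Path("data/tandt/train")  # example scene path
assert (scene / "images").is_dir(), "missing images/ folder"
assert (scene / "sparse" / "0").is_dir(), "missing sparse/0/ COLMAP output"
print("Scene layout looks good.")
```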
To train scenes, we provide the following training scripts in `./scripts`:

- Tanks&Temples: `run_shell_tnt.py`
- MipNeRF360: `run_shell_mip360.py`
- BungeeNeRF: `run_shell_bungee.py`
- Deep Blending: `run_shell_db.py`

Run them with:

```
python run_shell_xxx.py
```
The code will automatically run the entire pipeline: training, encoding, decoding, and testing.
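For orientation, such a launcher typically loops over scenes and shells out to the training entry point. The sketch below is purely illustrative: the scene list, the `train.py` entry point, and the flags are our assumptions, so consult the actual scripts in `./scripts` for the exact commands.

```python
# Hypothetical outline of a run_shell_*.py launcher (illustrative only;
# entry point, flags, and values are assumptions, not the repo's exact CLI).
import os

scenes = ["train", "truck"]   # example Tanks&Temples scenes
lmbda = 0.002                 # example rate-distortion trade-off value
for scene in scenes:
    os.system(f"python train.py -s data/tandt/{scene} "
              f"-m outputs/tandt/{scene} --lmbda {lmbda} --eval")
```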
- The training log will be recorded in `output.log` in the output directory. Detailed results for fidelity, size, and timing will all be recorded there.
- Encoded bitstreams will be stored in `./bitstreams` in the output directory.
- Evaluated output images will be saved in `./test/ours_30000/renders` in the output directory.
- Optionally, you can change `lmbda` in these `run_shell_xxx.py` scripts to try variable bitrates.
- After training, the original model `point_cloud.ply` is losslessly compressed as `./bitstreams`. You should refer to `./bitstreams` for the final model size (see the sketch below), not to `point_cloud.ply`. You can even delete `point_cloud.ply` if you like :).
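One simple way to report that final size is to sum the files under the bitstream folder, e.g. (the output path below is a placeholder):

```python
# Sum all bitstream file sizes to get the final compressed model size
# (illustrative; replace the path with your actual output directory).
from pathlib import Path

bitstreams = Path("outputs/tandt/train/bitstreams")  # placeholder path
total = sum(f.stat().st_size for f in bitstreams.rglob("*") if f.is_file())
print(f"Compressed model size: {total / 2**20:.2f} MiB")
```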
You can use the following command to decompress from the binary files:

```
python3 decompress.py \
    -s 'scene path to calculate the metrics' \
    --eval \
    --lod '[int] level of detail' \
    -m 'output path' \
    --voxel_size '[float] voxel size used to train the model'
```
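For instance, an invocation might look like the following (every path and value below is a placeholder, not a verified setting):

```python
# Example decompress.py invocation (paths/values are placeholders).
import subprocess

subprocess.run([
    "python3", "decompress.py",
    "-s", "data/tandt/train",      # scene path used to calculate metrics
    "--eval",
    "--lod", "0",                  # level of detail (placeholder)
    "-m", "outputs/tandt/train",   # output path of the trained model
    "--voxel_size", "0.001",       # must match the voxel size used in training
], check=True)
```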
Please cite our paper if you find our work useful. Thanks!
```bibtex
@inproceedings{wang2024contextgs,
  title={Context{GS}: Compact 3D Gaussian Splatting with Anchor Level Context Model},
  author={Yufei Wang and Zhihao Li and Lanqing Guo and Wenhan Yang and Alex Kot and Bihan Wen},
  booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
  year={2024},
  url={https://openreview.net/forum?id=W2qGSMl2Uu}
}
```
If you have any questions, please feel free to contact me via yufei001@ntu.edu.sg.
- We thank the authors of HAC, Scaffold-GS, and 3D-GS for their excellent work.