Jiahao Lu, Yifan Zhang, Qiuhong Shen, Xinchao Wang, Shuicheng Yan
We identify an overlooked security vulnerability of 3D Gaussian Splatting by revealing a novel attack surface: the computation cost of training 3DGS can be maliciously manipulated by poisoning the input data.
The key observation behind this new attack surface lies in the inherent flexibility of 3DGS model complexity.
Unlike NeRF or other neural-network-driven machine learning systems, which have a fixed and consistent computation cost (Fig. a), 3DGS has adaptively flexible computation complexity (i.e., number of parameters, training time, and GPU memory consumption) depending on the content of the input (Fig. b). This flexibility leaves a backdoor for computation cost attacks (Fig. c).
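To see why this matters in practice, here is a minimal monitoring sketch. The `get_xyz` attribute follows the original 3DGS `GaussianModel` API, but the helper itself is illustrative and not part of this repository: none of the three cost metrics is fixed before training starts.

```python
# Minimal monitoring sketch: in 3DGS, model size is not fixed in advance,
# so parameter count, training time, and GPU memory all depend on the input.
# `gaussians.get_xyz` follows the original 3DGS GaussianModel API; this
# helper is illustrative, not code from this repository.
import torch

def log_complexity(gaussians, step):
    num_gaussians = gaussians.get_xyz.shape[0]       # grows via densification
    peak_mem_gb = torch.cuda.max_memory_allocated() / 1e9
    print(f"step {step}: {num_gaussians} Gaussians, "
          f"{peak_mem_gb:.1f} GB peak GPU memory")
```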
We model the attack as a max-min bi-level optimization problem:

$$\max_{\mathcal{D}_p} \; \mathcal{C}(\mathcal{G}^\*) \quad \text{s.t.} \quad \mathcal{G}^\* = \arg\min_{\mathcal{G}} \; \mathcal{L}_{\text{recon}}(\mathcal{G}, \mathcal{D}_p), \qquad \lVert \mathcal{D}_p - \mathcal{D} \rVert_\infty \leq \epsilon$$

where the inner loop is the victim's goal of minimizing reconstruction error on the poisoned dataset $\mathcal{D}_p$, the outer loop is the attacker's goal of maximizing the computation cost $\mathcal{C}$ of the resulting model $\mathcal{G}^\*$, and the perturbation strength relative to the clean dataset $\mathcal{D}$ is constrained by $\epsilon$.
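One schematic way to approximate this bi-level problem is a PGD-style outer loop. The sketch below is illustrative rather than the released attacker, and `computation_cost_proxy` is a hypothetical stand-in for a differentiable proxy of the victim's cost (e.g., one correlated with the number of Gaussians):

```python
# Schematic outer-loop step of the max-min attack (illustrative only):
# ascend a differentiable proxy of the victim's computation cost, then
# project the images back into the eps-ball around the clean data.
import torch

def poison_step(images, clean_images, proxy_victim, eps=16/255, step_size=1/255):
    images = images.detach().requires_grad_(True)
    cost = proxy_victim.computation_cost_proxy(images)  # hypothetical proxy
    cost.backward()
    with torch.no_grad():
        images = images + step_size * images.grad.sign()  # maximize cost
        delta = (images - clean_images).clamp(-eps, eps)  # perturbation bound
        images = (clean_images + delta).clamp(0.0, 1.0)   # valid pixel range
    return images
```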
If the attacker is allowed to alter the input data without constraint, the attack can be even more damaging, sometimes driving GPU memory consumption up to 80 GB, which is enough to cause an out-of-memory error and denial-of-service on most GPUs.
```
poison-splat
├── assets/    # project introduction figures
├── attacker/  # attacker behavior
├── dataset/   # clean and poisoned datasets
├── exp/       # experiment scripts
├── log/       # experiment records
└── victim/    # victim behavior
```
First, create a conda environment with GPU-enabled PyTorch. CUDA 11.8 is recommended.

```bash
conda create -n poison_splat python=3.11 -y
conda activate poison_splat
conda install pytorch torchvision pytorch-cuda=11.8 -c pytorch -c nvidia -y
pip install -r requirements.txt
```
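As a quick optional sanity check (a snippet we suggest here, not part of the repository), confirm that PyTorch sees CUDA:

```python
# Should print the PyTorch version, the CUDA version it was built with,
# and True if a GPU is visible.
import torch
print(torch.__version__, torch.version.cuda, torch.cuda.is_available())
```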
Instructions for downloading the original NeRF-Synthetic, MIP-NeRF360, and Tanks-and-Temples datasets are provided in the `dataset/` directory.
For poisoned datasets, you can either generate them yourself by following the data poisoning scripts, or download our poisoned datasets:
Google Drive: https://drive.google.com/file/d/11EZwsxRxWEAOnOThoOEJVre77Q5_SQfx/view?usp=sharing
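If you prefer scripting the download, something like the following should work. This assumes the `gdown` package, which is not in `requirements.txt`, and the output filename is our choice; the file id comes from the Google Drive link above.

```python
# Hypothetical download helper using the gdown package (pip install gdown).
import gdown

gdown.download(id="11EZwsxRxWEAOnOThoOEJVre77Q5_SQfx",
               output="poisoned_datasets.zip")
```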
After installing the environment and downloading the NeRF-Synthetic dataset, you can verify your installation by running the test script:

```bash
bash exp/00_test/test_install.sh
```
To benchmark victim training on the clean datasets, run:

```bash
bash exp/01_main_exp/benchmark_nerf_synthetic_clean.sh
bash exp/01_main_exp/benchmark_mip_nerf_360_clean.sh
bash exp/01_main_exp/benchmark_tanks_and_temples_clean.sh
```
Please note that the above scripts assume an 8-GPU environment. If you have fewer GPUs, change the GPU device ID via the `--gpu` argument in each script.
Constrained attack with perturbation bounded by 16/255:

```bash
bash exp/01_main_exp/eps16_attack_nerf_synthetic.sh
bash exp/01_main_exp/eps16_attack_mip_nerf_360.sh
bash exp/01_main_exp/eps16_attack_tanks_and_temples_1.sh
bash exp/01_main_exp/eps16_attack_tanks_and_temples_2.sh
bash exp/01_main_exp/eps16_attack_tanks_and_temples_3.sh
```
Unconstrained attack:

```bash
bash exp/01_main_exp/unbounded_attack_nerf_synthetic.sh
bash exp/01_main_exp/unbounded_attack_mip_nerf_360.sh
bash exp/01_main_exp/unbounded_attack_tanks_and_temples_1.sh
bash exp/01_main_exp/unbounded_attack_tanks_and_temples_2.sh
bash exp/01_main_exp/unbounded_attack_tanks_and_temples_3.sh
```
We borrow the Scaffold-GS implementation as a black-box victim and benchmark its performance directly on the datasets poisoned against vanilla Gaussian Splatting. Follow the scripts in `exp/02_blackbox_generalize/` to benchmark black-box attack performance.
To test black-box attack performance against other variants of Gaussian Splatting, first implement the victim behavior in the `victim/` folder. Be especially careful about environment conflicts, for example around the `diff-gaussian-rasterization` library: `victim/Scaffold-GS/submodules/diff-gaussian-rasterization_scaffold/` gives an example of resolving such name conflicts. Then, following the benchmark scripts `victim/gaussian-splatting/benchmark.py` and `victim/Scaffold-GS/benchmark.py`, write a script for benchmarking the newly added victim.
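As a rough template, a victim benchmark only needs to train the model and record the three cost metrics. The helper names below are hypothetical; consult the two `benchmark.py` files above for the repository's actual interface.

```python
# Illustrative victim-benchmark skeleton (hypothetical names; see
# victim/gaussian-splatting/benchmark.py for the real interface).
import time
import torch

def benchmark_victim(train_fn, dataset_path, gpu=0):
    torch.cuda.reset_peak_memory_stats(gpu)
    start = time.time()
    model = train_fn(dataset_path, gpu=gpu)        # full victim training run
    return {
        "train_time_s": time.time() - start,
        "peak_gpu_mem_gb": torch.cuda.max_memory_allocated(gpu) / 1e9,
        "num_gaussians": model.get_xyz.shape[0],   # final model size
    }
```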
We implement one naive defense strategy in `victim/gaussian-splatting/defense/`, which restricts the maximum number of Gaussians involved in training. Run the scripts in `exp/05_naive_defense/` to apply this defensive training strategy.
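The idea behind the defense can be sketched as follows; the cap value and helper names are illustrative rather than the repository's actual code. Once the model reaches a preset budget, densification is simply skipped.

```python
# Illustrative sketch of the naive defense: cap the number of Gaussians.
# The budget and wrapper are hypothetical; see
# victim/gaussian-splatting/defense/ for the actual implementation.
MAX_NUM_GAUSSIANS = 2_000_000  # hypothetical budget

def densify_with_cap(gaussians, *densify_args):
    if gaussians.get_xyz.shape[0] >= MAX_NUM_GAUSSIANS:
        return  # defense: refuse to grow the model any further
    gaussians.densify_and_prune(*densify_args)  # standard 3DGS densification
```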
We put some visualizations of attacker-poisoned data and the corresponding victim reconstructions here.

| Dataset setting | Attacker Poisoned Image | Victim Reconstructed Image | PSNR |
|---|---|---|---|
| NS-Chair-eps16 | | | 37.07 dB |
| NS-Drums-eps16 | | | 30.32 dB |
| MIP-bicycle-eps16 | | | 18.20 dB |
| MIP-bonsai-eps16 | | | 22.67 dB |
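For reference, the PSNR column follows the standard definition for images normalized to [0, 1]; the small helper below is included for clarity and is not code from this repository.

```python
# Standard PSNR between two images in [0, 1]: 10 * log10(MAX^2 / MSE).
import torch

def psnr(img_a, img_b):
    mse = torch.mean((img_a - img_b) ** 2)
    return 10 * torch.log10(1.0 / mse)
```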
If you find this repository useful for your research, please consider citing:
```bibtex
@article{lu2024poisonsplat,
  title={Poison-splat: Computation Cost Attack on 3D Gaussian Splatting},
  author={Lu, Jiahao and Zhang, Yifan and Shen, Qiuhong and Wang, Xinchao and Yan, Shuicheng},
  journal={arXiv preprint arXiv:2410.08190},
  year={2024}
}
```