Modified code of the paper "Generative Gaussian Splatting for Efficient 3D Content Creation"

WWmore/dreamgaussian
DreamGaussian

Implementation of the paper DreamGaussian: Generative Gaussian Splatting for Efficient 3D Content Creation. Project Page | arXiv



Install

  • Hui: tested on Windows 10 with torch 2.1 & CUDA 12.1 on an RTX A4500.

  • Hui: `pip install ./diff-gaussian-rasterization ./simple-knn` fails until 'cuda_12.1.0_windows_network' is installed.

  • Hui: import errors for `diff_gaussian_rasterization` and `simple_knn._C` (path problem) --> add an `__init__.py` file inside the `diff-gaussian-rasterization` and `simple-knn` folders.
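The `__init__.py` fix from the last note can be scripted. A minimal sketch, assuming the two extension folders sit in the current directory with the names used by the clone step (adjust the paths if your layout differs):

```python
from pathlib import Path

# Create empty package markers so Python can resolve the compiled extensions.
# Folder names are assumptions taken from the note above.
for pkg in ("diff-gaussian-rasterization", "simple-knn"):
    marker = Path(pkg) / "__init__.py"
    marker.parent.mkdir(parents=True, exist_ok=True)  # no-op inside the repo
    marker.touch(exist_ok=True)
```

After this, `pip install ./diff-gaussian-rasterization ./simple-knn` should be retried from the repository root.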

Setting environment

```bash
# install 'cuda_12.1.0_windows_network' first, then open an Anaconda prompt
cd C:\Users\NAME
git clone https://github.com/dreamgaussian/dreamgaussian --recursive
cd C:\Users\NAME\dreamgaussian
conda create -n dreamgaussian
conda activate dreamgaussian
```

Setting packages

```bash
pip install -r requirements.txt

# a modified gaussian splatting (+ depth, alpha rendering)
git clone --recursive https://github.com/ashawkey/diff-gaussian-rasterization
pip install ./diff-gaussian-rasterization

# simple-knn
pip install ./simple-knn

# nvdiffrast
pip install git+https://github.com/NVlabs/nvdiffrast/

# kiuikit
pip install git+https://github.com/ashawkey/kiuikit
```

Image-to-3D

Parameters are set in ./configs/image.yaml.
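The training scripts layer `key=value` command-line overrides (like `input=...` and `elevation=-30` below) on top of the base config file. The real code uses OmegaConf for this; the stand-in below only illustrates the layering, and the keys shown are illustrative, not the real contents of `image.yaml`:

```python
# Simplified stand-in for OmegaConf-style `key=value` CLI overrides.
def apply_overrides(base: dict, args: list) -> dict:
    merged = dict(base)
    for arg in args:
        key, _, value = arg.partition("=")
        merged[key] = value  # CLI value wins over the config default
    return merged

base = {"elevation": 0, "gui": False}  # toy defaults, not the real yaml
cfg = apply_overrides(base, ["input=data/name_rgba.png", "elevation=-30"])
print(cfg["input"], cfg["elevation"])  # → data/name_rgba.png -30
```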

File structure of folder `./logs`: (screenshot)

1. Process the initial image (choose one of the three commands below):

```bash
# background removal and recentering, save rgba at 256x256
python process.py data/name.png

# save at a larger resolution
python process.py data/name.png --size 512

# process all jpg images under a dir
python process.py data
```
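Conceptually, the recentering step finds the opaque bounding box of the alpha channel and centers it in a square canvas. A stdlib-only toy version on a list-of-rows alpha mask (the real script additionally does background removal and resizing):

```python
def recenter(alpha, size):
    """Center the non-zero region of a 2D alpha mask in a size x size canvas."""
    rows = [i for i, r in enumerate(alpha) if any(r)]
    cols = [j for j in range(len(alpha[0])) if any(r[j] for r in alpha)]
    top, left = rows[0], cols[0]
    h, w = rows[-1] - top + 1, cols[-1] - left + 1
    out = [[0] * size for _ in range(size)]
    y0, x0 = (size - h) // 2, (size - w) // 2  # offset that centers the box
    for dy in range(h):
        for dx in range(w):
            out[y0 + dy][x0 + dx] = alpha[top + dy][left + dx]
    return out

# Object stuck in the top-left corner gets moved to the middle.
mask = [[1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
centered = recenter(mask, 4)
```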


2. Train Gaussian stage (choose one of the five commands below):

If you do not need the GUI, choose one of the first three commands:

```bash
## New file by Hui:
python main_anacondaprompt.py --config configs/image.yaml input=data/name_rgba.png save_path=name_gaussian/name

# train 500 iters (~1min) and export ckpt & coarse_mesh to logs/name_gaussian
python main.py --config configs/image.yaml input=data/name_rgba.png save_path=name_gaussian/name

# use an estimated elevation angle if the image is not front-view (e.g., a common looking-down image can use -30)
python main.py --config configs/image.yaml input=data/name_rgba.png save_path=name_gaussian/name elevation=-30
```

If you need the GUI, choose one:

```bash
# gui mode (supports visualizing training)
python main.py --config configs/image.yaml input=data/name_rgba.png save_path=name_gaussian/name gui=True

# load and visualize a saved ckpt
python main.py --config configs/image.yaml load=name_gaussian/name_model.ply gui=True
```
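The `elevation` override tells training at what pitch the input photo was taken, which fixes the assumed camera placement on the viewing sphere. A quick sketch of how an elevation/azimuth pair maps to a camera position; the radius and the sign convention here are assumptions for illustration, not the repo's exact code:

```python
import math

def orbit_camera_position(elevation_deg, azimuth_deg, radius=2.0):
    """Camera position on a sphere; elevation measured from the horizontal plane."""
    el, az = math.radians(elevation_deg), math.radians(azimuth_deg)
    x = radius * math.cos(el) * math.sin(az)
    y = radius * math.sin(el)           # negative elevation -> below the equator
    z = radius * math.cos(el) * math.cos(az)
    return (x, y, z)

pos = orbit_camera_position(-30, 0)  # the elevation=-30 case from above
```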


Result video: beacon_gaussian.mp4

3. Train mesh stage (choose one of the five commands below):

If you do not need the GUI, choose one of the first four commands:

```bash
## New file by Hui:
# specify the coarse mesh path explicitly
python main2_anacondaprompt.py --config configs/image.yaml input=data/name_rgba.png save_path=name_mesh/name mesh=logs/name_gaussian/name_mesh.obj

# specify the coarse mesh path explicitly
python main2.py --config configs/image.yaml input=data/name_rgba.png save_path=name_mesh/name mesh=logs/name_gaussian/name_mesh.obj

# auto load coarse_mesh and refine 50 iters (~1min), export fine_mesh to logs/name_mesh
python main2.py --config configs/image.yaml input=data/name_rgba.png save_path=name_mesh/name

# export glb instead of obj
python main2.py --config configs/image.yaml input=data/name_rgba.png save_path=name_gaussian/name mesh_format=glb
```

If you need the GUI:

```bash
# gui mode
python main2.py --config configs/image.yaml input=data/name_rgba.png save_path=name_gaussian/name gui=True
```


Result video: beacon_mesh.mp4

4. Visualization (choose one of the three commands below):

```bash
# gui for visualizing mesh
python -m kiui.render logs/name_mesh/name.obj

# save a 360-degree video of the mesh (can run without gui)
python -m kiui.render logs/name_mesh/name.obj --save_video logs/name.mp4 --wogui

# save 8 view images of the mesh (can run without gui)
python -m kiui.render logs/name_mesh/name.obj --save logs/name/ --wogui
```


Result video: beacon.mp4

5. Evaluation

```bash
# evaluation of CLIP-similarity
## Hui note: AttributeError: 'Namespace' object has no attribute 'force_cuda_rast'
python -m kiui.cli.clip_sim data/name_rgba.png logs/name_mesh/name.obj
```
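The CLIP-similarity score compares the CLIP embedding of the input image with embeddings of rendered views of the mesh; the metric itself is just cosine similarity, i.e., a normalized dot product. A sketch of the metric with made-up toy vectors standing in for CLIP features:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

emb_image = [0.2, 0.7, 0.1]    # toy stand-ins for CLIP features
emb_render = [0.25, 0.65, 0.2]
score = cosine_similarity(emb_image, emb_render)
```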

Conclusion

  • The whole process has GUI layouts to navigate, but the rotation control is very sensitive.
  • Both the Gaussian training in Step 2 and the mesh training in Step 3 are fast, though speed depends heavily on the hardware.
  • However, both the 3D Gaussian splatting result and the generated mesh look bad.
  • The mesh has a very large background area with blurry texture.
  • Starting from a single image to produce a mesh is not suitable for parametrically designed models.
  • Resolutions are low.

Get a mesh from a given .ply file

Load a .ply file into the GUI:

```bash
python main.py --config configs/image.yaml load=logs/bonsai.ply save_path=name gui=True
```
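The saved `.ply` checkpoint stores the Gaussian point cloud. Even when the body is binary, a PLY header is ASCII, so the number of Gaussians can be sanity-checked with a few lines of stdlib Python (the toy file below is a stand-in for a real checkpoint like `logs/bonsai.ply`):

```python
def ply_vertex_count(path):
    """Read the ASCII header of a PLY file and return its vertex count."""
    with open(path, "rb") as f:
        for raw in f:
            line = raw.decode("ascii", errors="replace").strip()
            if line.startswith("element vertex"):
                return int(line.split()[-1])
            if line == "end_header":
                break
    return 0

# Toy file for demonstration; a real checkpoint has many more properties.
header = b"ply\nformat binary_little_endian 1.0\nelement vertex 3\nproperty float x\nend_header\n"
with open("toy.ply", "wb") as f:
    f.write(header)
print(ply_vertex_count("toy.ply"))  # → 3
```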

First training to get an initial mesh

The extract_mesh function does not help to get a good mesh.

Second training to get a finer mesh

Stuck: only after obtaining a good initial mesh can it be processed into a finer mesh.
