Peng Wang, Yichun Shi
Project Page | Paper | Demo
(Teaser video: imagedream-teaser.mp4)
This part is the same as in the original MVDream-threestudio; skip it if you have already installed the environment.
Clone the model repository from the Hugging Face ImageDream model page into ./extern/ImageDream/release_models/.
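For example, a minimal sketch using git with Git LFS; the repository URL below is a placeholder, so substitute the one shown on the model page:

```sh
# Clone the Hugging Face model repo into release_models/.
# The URL is a placeholder -- use the actual one from the model page.
cd ./extern/ImageDream/release_models/
git lfs install
git clone https://huggingface.co/<hf-user>/ImageDream
```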
In the paper, we use the configuration with soft shading; in most cases it requires an A100 GPU to compute normals:
```sh
export PYTHONPATH=$PYTHONPATH:./extern/ImageDream

method="imagedream-sd21-shading"
image_file="./extern/ImageDream/assets/astronaut.png"
ckpt_file="./extern/ImageDream/release_models/ImageDream/sd-v2.1-base-4view-ipmv.pt"
cfg_file="./extern/ImageDream/imagedream/configs/sd_v2_base_ipmv.yaml"

python3 launch.py \
    --config configs/$method.yaml --train --gpu 0 \
    name="imagedream-sd21-shading" tag="astronaut" \
    system.prompt_processor.prompt="an astronaut riding a horse" \
    system.prompt_processor.image_path="${image_file}" \
    system.guidance.ckpt_path="${ckpt_file}" \
    system.guidance.config_path="${cfg_file}"
```
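Once training has finished, a textured mesh can typically be exported through threestudio's standard `--export` mode. A hedged sketch, assuming the stock threestudio exporter and a placeholder trial directory under outputs/:

```sh
# Export a textured mesh from a finished trial (threestudio --export mode).
# The trial directory below is a placeholder; point it at your actual run,
# e.g. outputs/<name>/<tag>@<timestamp>.
trial_dir="outputs/imagedream-sd21-shading/astronaut"
python3 launch.py \
    --config "${trial_dir}/configs/parsed.yaml" \
    --export --gpu 0 \
    resume="${trial_dir}/ckpts/last.ckpt" \
    system.exporter_type=mesh-exporter
```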
For the diffusion-only model, refer to the subdirectory ./extern/ImageDream/.
Check ./threestudio/scripts/run_imagedream.sh for a bash example.
- This code is forked from threestudio and MVDream for SDS and 3D generation.
- Place the object at the center of the image and do not make it too large or too small (see the sketch after this list for one way to pad an input onto a centered square canvas).
- If the object is cut off at the image edge, tune the elevation and FoV parameters in the config to a larger range, e.g. [0, 30]; otherwise, outpaint the image first and then follow tip 1.
- Check the results with the ImageDream diffusion model before using it for 3D generation, to save time.
- Since the diffusion model involves some randomness and baseline results are time-consuming to compute, we provide our pre-computed results for reproducing Tab. 1 of the paper in a Hugging Face dataset card.
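For the centering tip above, here is a minimal ImageMagick sketch (illustrative, not part of this repo) that scales the input so its longer side fills roughly 80% of a 512x512 canvas and pads the rest with white:

```sh
# Resize so the longer side is ~80% of the canvas, then pad to a
# centered 512x512 square. Requires ImageMagick; filenames are
# illustrative placeholders.
convert input.png -resize 410x410 \
    -background white -gravity center -extent 512x512 centered.png
```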
If you find ImageDream helpful, please consider citing:
```bibtex
@article{wang2023imagedream,
  title   = {ImageDream: Image-Prompt Multi-view Diffusion for 3D Generation},
  author  = {Wang, Peng and Shi, Yichun},
  journal = {arXiv preprint arXiv:2312.02201},
  year    = {2023}
}
```