Commit: Clean pull request for #5502

BirgerMoell committed Mar 20, 2024
1 parent 24c8638 commit 2274526
Showing 3 changed files with 91 additions and 10 deletions.
94 changes: 88 additions & 6 deletions examples/python/depth_guided_stable_diffusion/README.md
@@ -1,26 +1,108 @@
<!--[metadata]
title = "Depth Guided Stable Diffusion"
tags = ["2D", "depth", "huggingface", "stable-diffusion", "tensor", "text"]
tags = ["depth-guided", "stable-diffusion", "huggingface", "3D", "tensor", "text"]
description = "Leverage Depth Guided Stable Diffusion to generate images with enhanced depth perception. This method integrates depth maps to guide the Stable Diffusion model, creating more visually compelling and contextually accurate images."
thumbnail = "https://static.rerun.io/depth_guided_stable_diffusion/a85516aba09f72649517891d767e15383ce7f4ea/480w.png"
thumbnail_dimensions = [480, 253]
channel = "nightly"
-->


<picture>
<source media="(max-width: 480px)" srcset="https://static.rerun.io/depth_guided_stable_diffusion/a85516aba09f72649517891d767e15383ce7f4ea/480w.png">
<source media="(max-width: 768px)" srcset="https://static.rerun.io/depth_guided_stable_diffusion/a85516aba09f72649517891d767e15383ce7f4ea/768w.png">
<source media="(max-width: 1024px)" srcset="https://static.rerun.io/depth_guided_stable_diffusion/a85516aba09f72649517891d767e15383ce7f4ea/1024w.png">
<source media="(max-width: 1200px)" srcset="https://static.rerun.io/depth_guided_stable_diffusion/a85516aba09f72649517891d767e15383ce7f4ea/1200w.png">
<img src="https://static.rerun.io/depth_guided_stable_diffusion/a85516aba09f72649517891d767e15383ce7f4ea/full.png" alt="Depth-guided stable diffusion example">
</picture>

Leverage [Depth Guided Stable Diffusion](https://github.com/Stability-AI/stablediffusion?tab=readme-ov-file#depth-conditional-stable-diffusion) to generate images with enhanced depth perception. This method integrates depth maps to guide the Stable Diffusion model, creating more visually compelling and contextually accurate images.

## Used Rerun Types
[`Image`](https://www.rerun.io/docs/reference/types/archetypes/image), [`Tensor`](https://www.rerun.io/docs/reference/types/archetypes/tensor), [`DepthImage`](https://www.rerun.io/docs/reference/types/archetypes/depth_image), [`TextDocument`](https://www.rerun.io/docs/reference/types/archetypes/text_document)

## Background
Depth Guided Stable Diffusion enriches the image generation process by incorporating depth information, providing a unique way to control the spatial composition of generated images. This approach allows for more nuanced and layered creations, making it especially useful for scenes requiring a sense of three-dimensionality.
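
The example ships its own instrumented copy of the Hugging Face pipeline, but as an illustration of the underlying idea, here is a minimal sketch using the stock `StableDiffusionDepth2ImgPipeline` from diffusers. The image path and prompts are placeholders, and a CUDA GPU is assumed:

```python
# Minimal sketch: depth-conditioned generation with the stock diffusers pipeline.
# The example in this repository uses its own instrumented copy of this pipeline instead.
import torch
from diffusers import StableDiffusionDepth2ImgPipeline
from PIL import Image

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth",  # depth-conditioned SD 2.0 checkpoint
    torch_dtype=torch.float16,
).to("cuda")  # assumes a CUDA GPU is available

init_image = Image.open("input.jpg")  # placeholder path to your input image

result = pipe(
    prompt="A fantasy landscape, trending on artstation",
    negative_prompt="blurry, deformed",
    image=init_image,
    strength=0.7,  # how far the result may drift from the input image
).images[0]
result.save("output.png")
```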

# Logging and Visualizing with Rerun
The visualizations in this example were created with the Rerun SDK, demonstrating the integration of depth information in the Stable Diffusion image generation process. Here is the code for generating the visualization in Rerun.
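
Before any of the logging calls below will show up anywhere, the Rerun SDK has to be initialized. A minimal setup looks like this (the application ID below is illustrative, not necessarily what the example script uses):

```python
import rerun as rr

# Initialize a recording and spawn the native Rerun viewer.
rr.init("rerun_example_depth_guided_stable_diffusion", spawn=True)
```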

## Prompt
Visualizing the prompt and negative prompt
```python
rr.log("prompt/text", rr.TextDocument(prompt))
rr.log("prompt/text_negative", rr.TextLog(negative_prompt))
```

## Text
Visualizing the text input IDs, the text attention mask, and the unconditional input IDs
```python
rr.log("prompt/text_input/ids", rr.Tensor(text_input_ids))
rr.log("prompt/text_input/attention_mask", rr.Tensor(text_inputs.attention_mask))
rr.log("prompt/uncond_input/ids", rr.Tensor(uncond_input.input_ids))

```
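
These tensors come from tokenizing the prompt. A sketch of how such inputs are typically produced with the checkpoint's CLIP tokenizer, mirroring the tokenizer call visible in the pipeline diff further down (the prompt string is a placeholder):

```python
from transformers import CLIPTokenizer

# The depth-conditioned checkpoint ships its tokenizer in a "tokenizer" subfolder.
tokenizer = CLIPTokenizer.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", subfolder="tokenizer"
)

prompt = "A fantasy landscape, trending on artstation"  # placeholder prompt

# Fixed-length token ids plus an attention mask for the prompt.
text_inputs = tokenizer(
    prompt,
    padding="max_length",
    max_length=tokenizer.model_max_length,
    truncation=True,
    return_tensors="pt",
)
text_input_ids = text_inputs.input_ids

# The unconditional ("empty prompt") counterpart used for classifier-free guidance.
uncond_input = tokenizer(
    "",
    padding="max_length",
    max_length=text_input_ids.shape[-1],
    return_tensors="pt",
)
```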

## Text embeddings
Visualizing the text embeddings. The text embeddings are generated in response to the specific prompts used, while the unconditional text embeddings represent a neutral or baseline state without specific input conditions.
```python
rr.log("prompt/text_embeddings", rr.Tensor(text_embeddings))
rr.log("prompt/uncond_embeddings", rr.Tensor(uncond_embeddings))

```
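
A small illustration of how these two embeddings relate downstream: for classifier-free guidance they are typically stacked into one batch so the UNet evaluates both branches in a single forward pass. The tensors below are stand-ins with standard SD 2.x shapes, not the example's real data:

```python
import torch

# Stand-ins for the encoder outputs: (batch, tokens, embedding dim) for SD 2.x.
text_embeddings = torch.randn(1, 77, 1024)    # embeddings of the actual prompt
uncond_embeddings = torch.randn(1, 77, 1024)  # embeddings of the empty prompt

# Classifier-free guidance stacks both into a single batch so the UNet can
# evaluate the unconditional and conditional branch in one pass.
text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
print(text_embeddings.shape)  # torch.Size([2, 77, 1024])
```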

## Depth map
Visualizing the pixel values of the depth estimation, the estimated depth image, the interpolated depth image, and the normalized depth image
```python
rr.log("depth/input_preprocessed", rr.Tensor(pixel_values))
rr.log("depth/estimated", rr.DepthImage(depth_map))
rr.log("depth/interpolated", rr.DepthImage(depth_map))
rr.log("depth/normalized", rr.DepthImage(depth_map))
```
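
For reference, here is a hedged sketch of the usual depth-preparation recipe behind these tensors: monocular depth estimation with a DPT model, resizing to the latent resolution, and normalization to [-1, 1]. The model name and image path are placeholders and may differ from what the example's pipeline actually uses:

```python
import torch
from PIL import Image
from transformers import DPTForDepthEstimation, DPTImageProcessor

processor = DPTImageProcessor.from_pretrained("Intel/dpt-large")           # assumed depth model
depth_estimator = DPTForDepthEstimation.from_pretrained("Intel/dpt-large")

image = Image.open("input.jpg").convert("RGB")  # placeholder input image

# Preprocess the image and predict a monocular depth map.
pixel_values = processor(images=image, return_tensors="pt").pixel_values
with torch.no_grad():
    depth_map = depth_estimator(pixel_values).predicted_depth  # (1, H, W)

# Resize to the UNet's latent resolution (1/8 of a 512x512 output) ...
depth_map = torch.nn.functional.interpolate(
    depth_map.unsqueeze(1), size=(512 // 8, 512 // 8), mode="bicubic", align_corners=False
)

# ... and normalize to [-1, 1] so it can be concatenated with the latents.
depth_min, depth_max = depth_map.amin(), depth_map.amax()
depth_map = 2.0 * (depth_map - depth_min) / (depth_max - depth_min) - 1.0
```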

## Latents
Log the latents, the representation of the images in the format used by the diffusion model.
```python
rr.log("diffusion/latents", rr.Tensor(latents, dim_names=["b", "c", "h", "w"]))
```
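
The `dim_names` label the four axes: batch, channel, height, and width. As a point of reference (shapes assumed from standard Stable Diffusion, not read from the example's code), the latents have four channels at 1/8 of the output resolution:

```python
import torch

# Stable Diffusion operates on 4-channel latents at 1/8 of the output resolution,
# so a single 512x512 image corresponds to a latent tensor of shape (1, 4, 64, 64).
height, width = 512, 512
latents = torch.randn(1, 4, height // 8, width // 8)
print(latents.shape)  # torch.Size([1, 4, 64, 64])
```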

## Denoising loop

For each step in the denoising loop, we set a time sequence with the step and timestep, and log the latent model input, the noise predictions, the latents, and the image. This makes it possible to inspect every denoising step in the Rerun viewer.

```python
rr.set_time_sequence("step", i)
rr.set_time_sequence("timestep", t)

rr.log("diffusion/latent_model_input", rr.Tensor(latent_model_input))
rr.log("diffusion/noise_pred", rr.Tensor(noise_pred, dim_names=["b", "c", "h", "w"]))
rr.log("diffusion/latents", rr.Tensor(latents, dim_names=["b", "c", "h", "w"]))
rr.log("image/diffused", rr.Image(image))
```
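
For context, the calls above sit inside a loop that roughly follows the standard depth-conditioned denoising step: scale the latents, append the depth map as an extra channel, predict noise for the unconditional and conditional branches, apply classifier-free guidance, and step the scheduler. The sketch below shows that shape with illustrative variable names and stand-in tensors; it is not the example's verbatim loop:

```python
import rerun as rr
import torch
from diffusers import DDIMScheduler, UNet2DConditionModel

# Assumed checkpoint and hyperparameters; this is a structural sketch only.
model_id = "stabilityai/stable-diffusion-2-depth"
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")

rr.init("rerun_example_denoising_sketch", spawn=True)

guidance_scale = 7.5
text_embeddings = torch.randn(2, 77, 1024)  # stand-in for [uncond, cond] embeddings
depth_map = torch.randn(2, 1, 64, 64)       # stand-in for the normalized depth map
latents = torch.randn(1, 4, 64, 64)         # initial noise latents

scheduler.set_timesteps(10)
for i, t in enumerate(scheduler.timesteps):
    rr.set_time_sequence("step", i)
    rr.set_time_sequence("timestep", int(t))

    # Duplicate the latents for the unconditional/conditional branches and
    # append the depth map as an extra conditioning channel.
    latent_model_input = torch.cat([latents] * 2)
    latent_model_input = scheduler.scale_model_input(latent_model_input, t)
    latent_model_input = torch.cat([latent_model_input, depth_map], dim=1)
    rr.log("diffusion/latent_model_input", rr.Tensor(latent_model_input))

    # Predict the noise residual and apply classifier-free guidance.
    with torch.no_grad():
        noise_pred = unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
    noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
    noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
    rr.log("diffusion/noise_pred", rr.Tensor(noise_pred, dim_names=["b", "c", "h", "w"]))

    # Step the scheduler to obtain the latents for the next iteration.
    latents = scheduler.step(noise_pred, t, latents).prev_sample
    rr.log("diffusion/latents", rr.Tensor(latents, dim_names=["b", "c", "h", "w"]))
```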

For more information on the model, see the [Stability-AI stablediffusion repository](https://github.com/Stability-AI/stablediffusion).

## Diffused image
Finally, we log the diffused image generated by the model.

```python
rr.log("image/diffused", rr.Image(image_8))
```
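
Under the hood, that image is produced by decoding the final latents with the VAE and converting the result to 8-bit. A hedged sketch of that decode step (the 0.18215 factor is the standard Stable Diffusion latent scaling; variable names and stand-in latents are illustrative):

```python
import numpy as np
import torch
from diffusers import AutoencoderKL

# Assumed checkpoint; stand-in latents instead of the real denoising output.
vae = AutoencoderKL.from_pretrained("stabilityai/stable-diffusion-2-depth", subfolder="vae")
latents = torch.randn(1, 4, 64, 64)

# Undo the latent scaling and decode back to pixel space.
with torch.no_grad():
    image = vae.decode(latents / 0.18215).sample

# Map from [-1, 1] to 8-bit RGB (NHWC) for logging or saving.
image = (image / 2 + 0.5).clamp(0, 1)
image_8 = (image.permute(0, 2, 3, 1).numpy() * 255).round().astype(np.uint8)[0]
```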

# Run the Code

To run this example, make sure you have the Rerun repository checked out and the latest SDK installed:
```bash
# Setup
pip install --upgrade rerun-sdk # install the latest Rerun SDK
git clone git@github.com:rerun-io/rerun.git # Clone the repository
cd rerun
git checkout latest # Check out the commit matching the latest SDK release
```

Install the necessary libraries specified in the requirements file:
```bash
pip install -r examples/python/depth_guided_stable_diffusion/requirements.txt
```

To run this example, use:
```bash
python examples/python/depth_guided_stable_diffusion/main.py
```

You can specify your own image and prompts using:
```bash
python examples/python/depth_guided_stable_diffusion/main.py [--img-path IMG_PATH] [--depth-map-path DEPTH_MAP_PATH] [--prompt PROMPT]
```
@@ -208,8 +208,8 @@ def _encode_prompt(self, prompt, device, num_images_per_prompt, do_classifier_fr
        if `guidance_scale` is less than `1`).
        """
        batch_size = len(prompt) if isinstance(prompt, list) else 1
-       rr.log("prompt/text", rr.TextLog(prompt))
-       rr.log("prompt/text_negative", rr.TextLog(negative_prompt))
+       rr.log("prompt/text", rr.TextDocument(prompt))
+       rr.log("prompt/text_negative", rr.TextDocument(negative_prompt))
        text_inputs = self.tokenizer(
            prompt,
            padding="max_length",
3 changes: 1 addition & 2 deletions scripts/lint.py
@@ -627,12 +627,11 @@ def lint_example_description(filepath: str, fm: Frontmatter) -> list[str]:
        return []

    desc = fm.get("description", "")
-   if len(desc) > 130:
+   if len(desc) > 512:
        return [f"Frontmatter: description is too long ({len(desc)} > 130)"]
    else:
        return []


def lint_frontmatter(filepath: str, content: str) -> list[str]:
    """Only for Markdown files."""

