From 981a229e3d35e723f6526cb8732829331646239b Mon Sep 17 00:00:00 2001
From: TimothyAlexisVass <55708319+TimothyAlexisVass@users.noreply.github.com>
Date: Fri, 6 Oct 2023 11:54:28 +0200
Subject: [PATCH] tiny fixes

---
 docs/source/en/using-diffusers/write_own_pipeline.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/source/en/using-diffusers/write_own_pipeline.md b/docs/source/en/using-diffusers/write_own_pipeline.md
index 42b3e4d6761d..a9243a7b9adc 100644
--- a/docs/source/en/using-diffusers/write_own_pipeline.md
+++ b/docs/source/en/using-diffusers/write_own_pipeline.md
@@ -112,7 +112,7 @@ As you can see, this is already more complex than the DDPM pipeline which only c
 
 <Tip>
 
-💡 Read the [How does Stable Diffusion work?](https://huggingface.co/blog/stable_diffusion#how-does-stable-diffusion-work) blog for more details about how the VAE, UNet, and text encoder models.
+💡 Read the [How does Stable Diffusion work?](https://huggingface.co/blog/stable_diffusion#how-does-stable-diffusion-work) blog for more details about how the VAE, UNet, and text encoder models work.
 
 </Tip>
 
@@ -214,7 +214,7 @@ Next, generate some initial random noise as a starting point for the diffusion p
 
 ```py
 >>> latents = torch.randn(
-...     (batch_size, unet.in_channels, height // 8, width // 8),
+...     (batch_size, unet.config.in_channels, height // 8, width // 8),
 ...     generator=generator,
 ... )
 >>> latents = latents.to(torch_device)
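
For context, here is a minimal sketch of the fixed snippet as it would run outside the docs. The checkpoint name and the surrounding setup (batch size, resolution, seed) are illustrative assumptions, not part of this patch; only the `unet.config.in_channels` access reflects the change above.

```py
import torch
from diffusers import UNet2DConditionModel

# Illustrative checkpoint; any Stable Diffusion UNet checkpoint works the same way.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

batch_size, height, width = 1, 512, 512
generator = torch.manual_seed(0)

# Model hyperparameters such as `in_channels` live on the model's config object;
# reading them off the module itself (`unet.in_channels`) is deprecated in
# recent diffusers releases, which is what this patch corrects.
latents = torch.randn(
    (batch_size, unet.config.in_channels, height // 8, width // 8),
    generator=generator,
)
```

The `// 8` divisions reflect the VAE's downsampling factor: the UNet denoises in latent space, so the noise tensor is 8x smaller per spatial dimension than the output image.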