
How can I fix the error "LoRAModule.forward() takes 2 positional arguments but 3 were given" #27

Closed

xdfeng07 opened this issue Nov 27, 2023 · 5 comments

@xdfeng07
0%| | 0/50 [00:00<?, ?it/s]

TypeError Traceback (most recent call last)
Cell In[6], line 54
52 for scale in scales:
53 generator = torch.manual_seed(seed)
---> 54 images = pipe(prompt, num_images_per_prompt=1, num_inference_steps=50, generator=generator, network=network, start_noise=start_noise, scale=scale, unet=unet).images[0]
55 image_list.append(images)
56 del unet, network, pipe

File ~/miniconda3/envs/sd/lib/python3.10/site-packages/torch/autograd/grad_mode.py:27, in _DecoratorContextManager.__call__.<locals>.decorate_context(*args, **kwargs)
24 @functools.wraps(func)
25 def decorate_context(*args, **kwargs):
26 with self.clone():
---> 27 return func(*args, **kwargs)

Cell In[2], line 313, in __call__(self, prompt, prompt_2, height, width, num_inference_steps, denoising_end, guidance_scale, negative_prompt, negative_prompt_2, num_images_per_prompt, eta, generator, latents, prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds, output_type, return_dict, callback, callback_steps, cross_attention_kwargs, guidance_rescale, original_size, crops_coords_top_left, target_size, negative_original_size, negative_crops_coords_top_left, negative_target_size, network, start_noise, scale, unet)
311 added_cond_kwargs = {"text_embeds": add_text_embeds, "time_ids": add_time_ids}
312 with network:
--> 313 noise_pred = unet(
314 latent_model_input,
315 t,
316 encoder_hidden_states=prompt_embeds,
317 cross_attention_kwargs=cross_attention_kwargs,
318 added_cond_kwargs=added_cond_kwargs,
319 return_dict=False,
320 )[0]
322 # perform guidance
323 if do_classifier_free_guidance:

File ~/miniconda3/envs/sd/lib/python3.10/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs)
1190 # If we don't have any hooks, we want to skip the rest of the logic in
1191 # this function, and just call forward.
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []

File ~/miniconda3/envs/sd/lib/python3.10/site-packages/diffusers/models/unet_2d_condition.py:966, in UNet2DConditionModel.forward(self, sample, timestep, encoder_hidden_states, class_labels, timestep_cond, attention_mask, cross_attention_kwargs, added_cond_kwargs, down_block_additional_residuals, mid_block_additional_residual, encoder_attention_mask, return_dict)
956 sample, res_samples = downsample_block(
957 hidden_states=sample,
958 temb=emb,
(...)
963 **additional_residuals,
964 )
965 else:
--> 966 sample, res_samples = downsample_block(hidden_states=sample, temb=emb, scale=lora_scale)
968 if is_adapter and len(down_block_additional_residuals) > 0:
969 sample += down_block_additional_residuals.pop(0)

File ~/miniconda3/envs/sd/lib/python3.10/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs)
1190 # If we don't have any hooks, we want to skip the rest of the logic in
1191 # this function, and just call forward.
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []

File ~/miniconda3/envs/sd/lib/python3.10/site-packages/diffusers/models/unet_2d_blocks.py:1183, in DownBlock2D.forward(self, hidden_states, temb, scale)
1179 hidden_states = torch.utils.checkpoint.checkpoint(
1180 create_custom_forward(resnet), hidden_states, temb
1181 )
1182 else:
-> 1183 hidden_states = resnet(hidden_states, temb, scale=scale)
1185 output_states = output_states + (hidden_states,)
1187 if self.downsamplers is not None:

File ~/miniconda3/envs/sd/lib/python3.10/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs)
1190 # If we don't have any hooks, we want to skip the rest of the logic in
1191 # this function, and just call forward.
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []

File ~/miniconda3/envs/sd/lib/python3.10/site-packages/diffusers/models/resnet.py:637, in ResnetBlock2D.forward(self, input_tensor, temb, scale)
626 input_tensor = (
627 self.downsample(input_tensor, scale=scale)
628 if isinstance(self.downsample, Downsample2D)
629 else self.downsample(input_tensor)
630 )
631 hidden_states = (
632 self.downsample(hidden_states, scale=scale)
633 if isinstance(self.downsample, Downsample2D)
634 else self.downsample(hidden_states)
635 )
--> 637 hidden_states = self.conv1(hidden_states, scale)
639 if self.time_emb_proj is not None:
640 if not self.skip_time_act:

File ~/miniconda3/envs/sd/lib/python3.10/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs)
1190 # If we don't have any hooks, we want to skip the rest of the logic in
1191 # this function, and just call forward.
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []

TypeError: LoRAModule.forward() takes 2 positional arguments but 3 were given
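The trace shows where the extra argument comes from: in this diffusers build, ResnetBlock2D.forward calls self.conv1(hidden_states, scale), passing a positional scale that the patched LoRAModule.forward does not accept. Below is a minimal sketch of a forward that tolerates the extra argument, assuming a kohya-style wrapper with lora_down/lora_up layers, a multiplier, and a saved org_forward (these names are illustrative assumptions, not necessarily the repository's exact attributes). The other option is pinning diffusers to the version the README expects, as the next comment suggests.

import torch
import torch.nn as nn

class LoRAModule(nn.Module):
    # Sketch of a LoRA wrapper whose forward tolerates the extra positional
    # `scale` argument that newer diffusers blocks pass through.
    def __init__(self, org_module: nn.Linear, rank: int = 4, multiplier: float = 1.0):
        super().__init__()
        self.lora_down = nn.Linear(org_module.in_features, rank, bias=False)
        self.lora_up = nn.Linear(rank, org_module.out_features, bias=False)
        nn.init.zeros_(self.lora_up.weight)      # start as a no-op
        self.multiplier = multiplier
        self.org_forward = org_module.forward    # keep the original forward

    def forward(self, x, *args, **kwargs):
        # Accept and ignore extra arguments instead of raising
        # "takes 2 positional arguments but 3 were given".
        return self.org_forward(x) + self.lora_up(self.lora_down(x)) * self.multiplier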

@rohitgandikota
Owner

Did you use the installation guide we provided in README.md? I suspect it could be a diffusers version issue.
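To confirm that, a quick check of what is actually installed in the notebook environment (a sketch; compare the output against the versions pinned in the repository's requirements.txt):

import diffusers
import torch

# Print installed versions to compare against the repository's requirements.txt.
print("diffusers:", diffusers.__version__)
print("torch:", torch.__version__)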

@manzonif

manzonif commented Nov 30, 2023

Yes, with diffusers v0.24.0 I got that error.

I finally got it working on Windows:

python -m pip install bitsandbytes==0.41.0 --prefer-binary --extra-index-url=https://jllllll.github.io/bitsandbytes-windows-webui

pip install torch==2.0.1 torchvision==0.15.2 --index-url https://download.pytorch.org/whl/cu118

pip install -r requirements.txt

Then I tried to run train_lora_XL with a custom checkpoint from Civitai (newrealityxl_v11.safetensors),
and it failed with the error: Unexpected key(s) in state_dict: "text_model.embeddings.position_ids".
I found a related issue in the diffusers repository, so I installed:

 pip install accelerate
 pip install omegaconf

And it seems to work fine now.
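For reference, a minimal sketch of loading a single-file Civitai checkpoint through diffusers' StableDiffusionXLPipeline.from_single_file, the code path that, in diffusers versions from around that time, needed omegaconf for the conversion config and used accelerate for low-memory loading; the checkpoint name is just the one from this comment:

import torch
from diffusers import StableDiffusionXLPipeline

# from_single_file converts the .safetensors checkpoint on the fly;
# that conversion step is what pulls in omegaconf/accelerate.
pipe = StableDiffusionXLPipeline.from_single_file(
    "newrealityxl_v11.safetensors",
    torch_dtype=torch.float16,
)
pipe.to("cuda")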

@rohitgandikota
Owner

Closing this issue - feel free to reopen if the issue persists.

@sdbds

sdbds commented Dec 14, 2023

https://github.com/sdbds/sliders-for-windows/tree/qinglong
A Windows version that auto-installs and runs via PowerShell scripts.

@lixida123

(Quoting @manzonif's solution above.)

Thank you for your answer. I had been getting errors when loading a local SD model for a long time; running your pip install accelerate and pip install omegaconf solved the problem of loading the local model.
