
RuntimeError: Error(s) in loading state_dict for EncoderUNetModelWT #36

Open · Experienment-Gu opened this issue Jul 3, 2023 · 6 comments

Comments

@Experienment-Gu

RuntimeError: Error(s) in loading state_dict for EncoderUNetModelWT: Missing key(s) in state_dict: "time_embed.0.weight", "time_embed.0.bias", "time_embed.2.weight", "time_embed.2.bias", "input_blocks.0.0.weight", "input_blocks.0.0.bias", "input_blocks.1.0.in_layers.0.weight", "input_blocks.1.0.in_layers.0.bias", "input_blocks.1.0.in_layers.2.weight", "input_blocks.1.0.in_layers.2.bias", "input_blocks.1.0.emb_layers.1.weight", "input_blocks.1.0.emb_layers.1.bias", "input_blocks.1.0.out_layers.0.weight", "input_blocks.1.0.out_layers.0.bias", "input_blocks.1.0.out_layers.3.weight", "input_blocks.1.0.out_layers.3.bias", "input_blocks.1.1.norm.weight", "input_blocks.1.1.norm.bias", "input_blocks.1.1.qkv.weight", "input_blocks.1.1.qkv.bias", "input_blocks.1.1.proj_out.weight", "input_blocks.1.1.proj_out.bias", "input_blocks.2.0.in_layers.0.weight", "input_blocks.2.0.in_layers.0.bias", "input_blocks.2.0.in_layers.2.weight", "input_blocks.2.0.in_layers.2.bias", "input_blocks.2.0.emb_layers.1.weight", "input_blocks.2.0.emb_layers.1.bias", "input_blocks.2.0.out_layers.0.weight", "input_blocks.2.0.out_layers.0.bias", "input_blocks.2.0.out_layers.3.weight", "input_blocks.2.0.out_layers.3.bias", "input_blocks.2.1.norm.weight", "input_blocks.2.1.norm.bias", "input_blocks.2.1.qkv.weight", "input_blocks.2.1.qkv.bias", "input_blocks.2.1.proj_out.weight", "input_blocks.2.1.proj_out.bias", "input_blocks.3.0.op.weight", "input_blocks.3.0.op.bias", "input_blocks.4.0.in_layers.0.weight", "input_blocks.4.0.in_layers.0.bias", "input_blocks.4.0.in_layers.2.weight", "input_blocks.4.0.in_layers.2.bias", "input_blocks.4.0.emb_layers.1.weight", "input_blocks.4.0.emb_layers.1.bias", "input_blocks.4.0.out_layers.0.weight", "input_blocks.4.0.out_layers.0.bias", "input_blocks.4.0.out_layers.3.weight", "input_blocks.4.0.out_layers.3.bias", "input_blocks.4.1.norm.weight", "input_blocks.4.1.norm.bias", "input_blocks.4.1.qkv.weight", "input_blocks.4.1.qkv.bias", "input_blocks.4.1.proj_out.weight", "input_blocks.4.1.proj_out.bias", "input_blocks.5.0.in_layers.0.weight", "input_blocks.5.0.in_layers.0.bias", "input_blocks.5.0.in_layers.2.weight", "input_blocks.5.0.in_layers.2.bias", "input_blocks.5.0.emb_layers.1.weight", "input_blocks.5.0.emb_layers.1.bias", "input_blocks.5.0.out_layers.0.weight", "input_blocks.5.0.out_layers.0.bias", "input_blocks.5.0.out_layers.3.weight", "input_blocks.5.0.out_layers.3.bias", "input_blocks.5.1.norm.weight", "input_blocks.5.1.norm.bias", "input_blocks.5.1.qkv.weight", "input_blocks.5.1.qkv.bias", "input_blocks.5.1.proj_out.weight", "input_blocks.5.1.proj_out.bias", "input_blocks.6.0.op.weight", "input_blocks.6.0.op.bias", "input_blocks.7.0.in_layers.0.weight", "input_blocks.7.0.in_layers.0.bias", "input_blocks.7.0.in_layers.2.weight", "input_blocks.7.0.in_layers.2.bias", "input_blocks.7.0.emb_layers.1.weight", "input_blocks.7.0.emb_layers.1.bias", "input_blocks.7.0.out_layers.0.weight", "input_blocks.7.0.out_layers.0.bias", "input_blocks.7.0.out_layers.3.weight", "input_blocks.7.0.out_layers.3.bias", "input_blocks.7.0.skip_connection.weight", "input_blocks.7.0.skip_connection.bias", "input_blocks.7.1.norm.weight", "input_blocks.7.1.norm.bias", "input_blocks.7.1.qkv.weight", "input_blocks.7.1.qkv.bias", "input_blocks.7.1.proj_out.weight", "input_blocks.7.1.proj_out.bias", "input_blocks.8.0.in_layers.0.weight", "input_blocks.8.0.in_layers.0.bias", "input_blocks.8.0.in_layers.2.weight", "input_blocks.8.0.in_layers.2.bias", "input_blocks.8.0.emb_layers.1.weight", 
"input_blocks.8.0.emb_layers.1.bias", "input_blocks.8.0.out_layers.0.weight", "input_blocks.8.0.out_layers.0.bias", "input_blocks.8.0.out_layers.3.weight", "input_blocks.8.0.out_layers.3.bias", "input_blocks.8.1.norm.weight", "input_blocks.8.1.norm.bias", "input_blocks.8.1.qkv.weight", "input_blocks.8.1.qkv.bias", "input_blocks.8.1.proj_out.weight", "input_blocks.8.1.proj_out.bias", "input_blocks.9.0.op.weight", "input_blocks.9.0.op.bias", "input_blocks.10.0.in_layers.0.weight", "input_blocks.10.0.in_layers.0.bias", "input_blocks.10.0.in_layers.2.weight", "input_blocks.10.0.in_layers.2.bias", "input_blocks.10.0.emb_layers.1.weight", "input_blocks.10.0.emb_layers.1.bias", "input_blocks.10.0.out_layers.0.weight", "input_blocks.10.0.out_layers.0.bias", "input_blocks.10.0.out_layers.3.weight", "input_blocks.10.0.out_layers.3.bias", "input_blocks.11.0.in_layers.0.weight", "input_blocks.11.0.in_layers.0.bias", "input_blocks.11.0.in_layers.2.weight", "input_blocks.11.0.in_layers.2.bias", "input_blocks.11.0.emb_layers.1.weight", "input_blocks.11.0.emb_layers.1.bias", "input_blocks.11.0.out_layers.0.weight", "input_blocks.11.0.out_layers.0.bias", "input_blocks.11.0.out_layers.3.weight", "input_blocks.11.0.out_layers.3.bias", "middle_block.0.in_layers.0.weight", "middle_block.0.in_layers.0.bias", "middle_block.0.in_layers.2.weight", "middle_block.0.in_layers.2.bias", "middle_block.0.emb_layers.1.weight", "middle_block.0.emb_layers.1.bias", "middle_block.0.out_layers.0.weight", "middle_block.0.out_layers.0.bias", "middle_block.0.out_layers.3.weight", "middle_block.0.out_layers.3.bias", "middle_block.1.norm.weight", "middle_block.1.norm.bias", "middle_block.1.qkv.weight", "middle_block.1.qkv.bias", "middle_block.1.proj_out.weight", "middle_block.1.proj_out.bias", "middle_block.2.in_layers.0.weight", "middle_block.2.in_layers.0.bias", "middle_block.2.in_layers.2.weight", "middle_block.2.in_layers.2.bias", "middle_block.2.emb_layers.1.weight", "middle_block.2.emb_layers.1.bias", "middle_block.2.out_layers.0.weight", "middle_block.2.out_layers.0.bias", "middle_block.2.out_layers.3.weight", "middle_block.2.out_layers.3.bias", "fea_tran.0.in_layers.0.weight", "fea_tran.0.in_layers.0.bias", "fea_tran.0.in_layers.2.weight", "fea_tran.0.in_layers.2.bias", "fea_tran.0.emb_layers.1.weight", "fea_tran.0.emb_layers.1.bias", "fea_tran.0.out_layers.0.weight", "fea_tran.0.out_layers.0.bias", "fea_tran.0.out_layers.3.weight", "fea_tran.0.out_layers.3.bias", "fea_tran.1.in_layers.0.weight", "fea_tran.1.in_layers.0.bias", "fea_tran.1.in_layers.2.weight", "fea_tran.1.in_layers.2.bias", "fea_tran.1.emb_layers.1.weight", "fea_tran.1.emb_layers.1.bias", "fea_tran.1.out_layers.0.weight", "fea_tran.1.out_layers.0.bias", "fea_tran.1.out_layers.3.weight", "fea_tran.1.out_layers.3.bias", "fea_tran.2.in_layers.0.weight", "fea_tran.2.in_layers.0.bias", "fea_tran.2.in_layers.2.weight", "fea_tran.2.in_layers.2.bias", "fea_tran.2.emb_layers.1.weight", "fea_tran.2.emb_layers.1.bias", "fea_tran.2.out_layers.0.weight", "fea_tran.2.out_layers.0.bias", "fea_tran.2.out_layers.3.weight", "fea_tran.2.out_layers.3.bias", "fea_tran.2.skip_connection.weight", "fea_tran.2.skip_connection.bias", "fea_tran.3.in_layers.0.weight", "fea_tran.3.in_layers.0.bias", "fea_tran.3.in_layers.2.weight", "fea_tran.3.in_layers.2.bias", "fea_tran.3.emb_layers.1.weight", "fea_tran.3.emb_layers.1.bias", "fea_tran.3.out_layers.0.weight", "fea_tran.3.out_layers.0.bias", "fea_tran.3.out_layers.3.weight", "fea_tran.3.out_layers.3.bias", 
"fea_tran.3.skip_connection.weight", "fea_tran.3.skip_connection.bias".
Time taken: 17.57s · Torch active/reserved: 2515/2600 MiB, Sys VRAM: 4125/12288 MiB (33.57%)
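
For anyone hitting this, a quick way to narrow it down is to open the selected checkpoint and look at its keys before the extension calls load_state_dict. The missing keys in the traceback (time_embed.*, input_blocks.*, middle_block.*, fea_tran.*) are what EncoderUNetModelWT expects, so if none of those prefixes appear in the file, the wrong ckpt was selected. A minimal diagnostic sketch, assuming a checkpoint that may nest its tensors under a "state_dict" entry; the path is a placeholder:

```python
import torch

ckpt_path = "path/to/selected_sr_module.ckpt"  # placeholder: the file chosen in the script

ckpt = torch.load(ckpt_path, map_location="cpu")
# Some checkpoints nest the tensors under "state_dict"; fall back to the top level.
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt

print(f"{len(state_dict)} tensors in checkpoint")
# Prefixes taken from the missing-keys list in the traceback above.
expected = ("time_embed.", "input_blocks.", "middle_block.", "fea_tran.")
for prefix in expected:
    hits = sum(k.startswith(prefix) for k in state_dict)
    print(f"{prefix:<16} {hits} matching keys")
```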

@pkuliyi2015 (Owner)

Please select the correct SR module ckpt file in the script. Its size is around 400 MB.
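
As a quick sanity check on which file the script is actually pointed at, you can compare the on-disk size against the roughly 400 MB mentioned above; a small sketch with a placeholder path:

```python
import os

ckpt_path = "path/to/selected_sr_module.ckpt"  # placeholder
size_mb = os.path.getsize(ckpt_path) / 2**20
print(f"{ckpt_path}: {size_mb:.0f} MiB")  # the SR module ckpt should be ~400 MB
```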

@WSJUSA commented Jul 8, 2023

Same issue; I think I have the correct SD and SR ckpts.
[Screenshot: Screenshot_2023-07-08_12-13-48]

@JackeyDeng

Hello, I ran into the same problem when loading my self-trained StableSR model. Have you solved it?

@202030481266

It's OK. You should use the webui_*.ckpt.

@WSJUSA commented Aug 31, 2023

> It's OK. You should use the webui_*.ckpt.

Do you mean that for the SR Model type I should use webui_768v_139.ckpt instead of stablesr_768v_000139.ckpt?

@202030481266

@WSJUSA Yep.
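
If in doubt about which of the two files is the SR module, one way to verify is to check that the webui_* file actually carries the key prefixes named in the original traceback. A hedged sketch; it assumes the tensors sit at the top level or under a "state_dict" entry:

```python
import torch

sd = torch.load("webui_768v_139.ckpt", map_location="cpu")
if isinstance(sd, dict) and "state_dict" in sd:
    sd = sd["state_dict"]

# Prefixes come from the missing-keys list in the original error.
prefixes = ("time_embed.", "input_blocks.", "middle_block.", "fea_tran.")
ok = all(any(k.startswith(p) for k in sd) for p in prefixes)
print("Looks like an SR module checkpoint:", ok)
```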
