Traceback (most recent call last):
  File "train_styleswin.py", line 409, in <module>
    generator.load_state_dict(ckpt["g"])
  File "/mnt/anaconda3/envs/StyleSwin/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1668, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for Generator:
	Unexpected key(s) in state_dict: "layers.4.blocks.0.attn_mask2", "layers.4.blocks.0.norm1.style.weight", "layers.4.blocks.0.norm1.style.bias", "layers.4.blocks.0.qkv.weight", "layers.4.blocks.0.qkv.bias", "layers.4.blocks.0.proj.weight", "layers.4 and so on
Hi, --use_checkpoint enables PyTorch's gradient checkpointing technique to save GPU memory, while --ckpt /mnt/PROCESSEDdata/StyleSwin/FFHQ_1024.pt resumes training from that checkpoint. It seems you are training a model at size 32 but loading a higher-resolution checkpoint (e.g. 512 or 1024), which causes this error. A checkpoint can only be loaded when you are training a model of the corresponding resolution; when training at any other resolution, you should train from scratch without the pre-trained ckpt.
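To see why the resolution mismatch produces "Unexpected key(s)", it can help to diff the checkpoint's keys against the keys the model actually defines, which is essentially the check load_state_dict performs. The sketch below is a minimal illustration using plain dicts of key names (the variable names and the two example key lists are hypothetical, chosen to mimic the extra "layers.4" stage a 1024px StyleSwin generator has over a 32px one); it is not the StyleSwin code itself.

```python
# Minimal sketch: diff checkpoint keys against model keys, the same
# comparison load_state_dict reports as "Unexpected"/"Missing" key(s).

def diff_state_dict_keys(model_keys, ckpt_keys):
    """Return (unexpected, missing) key lists.

    unexpected: present in the checkpoint but not in the model
    missing:    expected by the model but absent from the checkpoint
    """
    model_set, ckpt_set = set(model_keys), set(ckpt_keys)
    unexpected = sorted(ckpt_set - model_set)
    missing = sorted(model_set - ckpt_set)
    return unexpected, missing

# Hypothetical key lists: a higher-resolution generator carries extra
# upsampling stages ("layers.4...") that a 32px generator never defines,
# so its checkpoint holds keys the smaller model cannot accept.
model_32_keys = [
    "layers.0.blocks.0.qkv.weight",
    "layers.1.blocks.0.qkv.weight",
]
ckpt_1024_keys = model_32_keys + [
    "layers.4.blocks.0.attn_mask2",
    "layers.4.blocks.0.qkv.weight",
]

unexpected, missing = diff_state_dict_keys(model_32_keys, ckpt_1024_keys)
print(unexpected)  # the "layers.4..." keys load_state_dict complains about
print(missing)     # empty here: the checkpoint covers all 32px keys
```

Printing the key diff before calling load_state_dict makes the failure mode obvious: every unexpected key starts with "layers.4", i.e. a stage that only exists at the higher resolution, so no flag combination will make the checkpoint fit the smaller model.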
Thanks for sharing, I am having this error:
Traceback (most recent call last):
  File "train_styleswin.py", line 409, in <module>
    generator.load_state_dict(ckpt["g"])
  File "/mnt/anaconda3/envs/StyleSwin/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1668, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for Generator:
	Unexpected key(s) in state_dict: "layers.4.blocks.0.attn_mask2", "layers.4.blocks.0.norm1.style.weight", "layers.4.blocks.0.norm1.style.bias", "layers.4.blocks.0.qkv.weight", "layers.4.blocks.0.qkv.bias", "layers.4.blocks.0.proj.weight", "layers.4 and so on
This is the command I am running:
I tried with and without the --use_checkpoint flag, and also with the 256 checkpoint; both give back the same error.
Best