
Assistance in training manga-colorization #4

Open
My12123 opened this issue May 26, 2023 · 20 comments


My12123 commented May 26, 2023

zyddnys/manga-image-translator#378

@Keiser04

[image]
@My12123 I think I found a possible solution: add --no_dropout to the manga colorization; the question is how. If you get it working, you avoid the problems with the architecture.

@Keiser04

RuntimeError: Error(s) in loading state_dict (#812, #671, #461, #296)

If you get the above errors when loading the generator at test time, you probably used different network configurations for training and test. There are a few things to check:
(1) the network architecture --netG: you will get an error if you use --netG unet256 during training and --netG resnet_6blocks during test. Make sure the flag is the same.
(2) the normalization parameters --norm: different default --norm values are used for --model cycle_gan, --model pix2pix, and --model test, and they might differ from the one used at training time. Make sure you add the --norm flag in your test command.
(3) dropout: if you used dropout during training, make sure you use the same dropout setting at test time. Check the flag --no_dropout.

Note that we use different default generators, normalization, and dropout options for different models. The model file can overwrite the default arguments and add new arguments. For example, this line adds and changes default arguments for pix2pix. For CycleGAN, the default is --netG resnet_9blocks --no_dropout --norm instance --dataset_mode unaligned. For pix2pix, the default is --netG unet_256 --norm batch --dataset_mode aligned. For model testing with single direction (--model test), the default is --netG resnet_9blocks --norm instance --dataset_mode single. To make sure that your training and test follow the same setting, you are encouraged to explicitly specify --netG, --norm, --dataset_mode, and --no_dropout (or not) in your script.
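In practice that means repeating the same architecture flags in both commands. A minimal sketch for CycleGAN, assuming the dataset path and experiment name are placeholders:

```bash
# Training: pin the generator, norm, dropout, and dataset mode explicitly.
python train.py --dataroot ./datasets/manga --name manga_cyclegan --model cycle_gan \
  --netG resnet_9blocks --norm instance --no_dropout --dataset_mode unaligned

# Testing: repeat the exact same architecture flags so the saved weights load.
python test.py --dataroot ./datasets/manga --name manga_cyclegan --model test \
  --netG resnet_9blocks --norm instance --no_dropout --dataset_mode single
```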


Keiser04 commented Nov 7, 2023

I managed to train 1 epoch and save the model; now I'm really testing it with 800 epochs (on a dataset of 1406 color images from One Piece), and if it goes well and looks good, I'll share with you how I did it. @My12123
[image]


My12123 commented Nov 7, 2023

@Keiser04 Okay, thanks


Keiser04 commented Nov 8, 2023

Do you know what this means?
[image]


Keiser04 commented Nov 8, 2023

hmmmmmmmmmmmmmmmmmmmmmmmm
[image]


My12123 commented Nov 8, 2023

@Keiser04 Did you train on manga pages? If yes, then you need to colorize manga pages, not art.
Can you share the model you got? Then I'll be able to test it more.
Your photo shows only an outline; ControlNet will cope with that better. Below are the color photo and the black-and-white one, which should have shades of gray.
the original: [image: nn03 png_res]
sketch: [image: sketch]
result: [image: 5-translated]


Keiser04 commented Nov 9, 2023

I didn't understand the ControlNet thing; maybe I'm taking too long, since I'm specifying how the dataset has to be and how to use the scripts I created to speed up building the dataset. Anyway, what I am doing is converting the images to black and white with Python, since I don't know how to make them look like manga, though some of them do look like the original panels.
[image]
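For reference, a minimal sketch of that kind of conversion, assuming Pillow and placeholder folder names (plain grayscale, which won't reproduce manga screentones):

```python
from pathlib import Path
from PIL import Image

src = Path("color_pages")   # placeholder: folder of color panels
dst = Path("bw_pages")      # placeholder: output folder
dst.mkdir(exist_ok=True)

for page in src.glob("*.png"):
    # "L" mode is plain 8-bit grayscale; it keeps shades of gray but
    # does not imitate manga screentones or line-art thresholds.
    Image.open(page).convert("L").save(dst / page.name)
```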


Keiser04 commented Nov 9, 2023

Do you know how to use Kaggle? If so, I'll pass you the notebook, but I haven't made a tutorial or anything else yet.


Keiser04 commented Nov 9, 2023

My biggest problem is that I don't know what you mean by dfm images.


My12123 commented Nov 9, 2023

> I didn't understand the ControlNet thing

I mean https://github.com/lllyasviel/ControlNet
https://github.com/Mikubill/sd-webui-controlnet


My12123 commented Nov 9, 2023

Only the outlines that are filled with gray get colored.
[image: 281716456-40fa256f-69ed-4050-9b0e-0d9fef0c8337]
[image: 281716456-40fa256f-69ed-4050-9b0e-0d9fef0c8337-translated]


Keiser04 commented Nov 9, 2023

Well, wish me luck, and we'll see how it turns out: 18k images in total counting the black-and-white ones, 100 epochs, in theory in 10 hours.
[image]


Keiser04 commented Nov 9, 2023

> Only the outlines that are filled with gray get colored. [images]

Does ControlNet do that? Or does it just grayscale it?


My12123 commented Nov 9, 2023

@Keiser04 I don't know for sure. I know that only the outlines filled with gray will be colored, with the exception of faces.


My12123 commented Nov 9, 2023

Results in ControlNet:
[images: 2023-02-13_19-25-49, 00002-12345, 281240545-1a872a57-2e53-4c91-b2a7-1ff13dade808, 00004-168945997, 00005-168945998, 00008-891187452]

@Keiser04

The training mode is useless.
[image]

@Keiser04

I can only think that the model he gave us was trained with another type of AI. @My12123

@Keiser04

Do you know Python? I think the problem is that v1 doesn't have colorization.py while v2 has one. The thing is that the models were modified, I think, making the state dict static or something like that. If only we could fix it. @My12123
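If it really is a state_dict mismatch, a small diagnostic sketch can show where the checkpoints diverge. This assumes the failing file is a plain PyTorch checkpoint; the filename is a placeholder:

```python
import torch

# Placeholder filename: the generator checkpoint that fails to load.
state = torch.load("latest_net_G.pth", map_location="cpu")
state = state.get("state_dict", state)  # some checkpoints nest the weights

# Print every stored parameter name and shape; comparing this listing with
# model.state_dict().keys() shows exactly which layers do not line up.
for name, tensor in state.items():
    print(name, tuple(tensor.shape))

# With a generator built from the same flags as training, strict=False loads
# the overlapping keys and reports the rest instead of raising:
#   result = model.load_state_dict(state, strict=False)
#   print(result.missing_keys, result.unexpected_keys)
```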

@Keiser04

[image]
@My12123 This is my model: 30 images... 10 h of training...
