
Rule of thumb for kimg? #274

Open
lebeli opened this issue Jan 13, 2023 · 3 comments

Comments

@lebeli

lebeli commented Jan 13, 2023

I have a dataset of ~4000 images (clean CAD images without any noise). Is there a rule of thumb for choosing the kimg hyperparameter? Also, the kimg hyperparameter is basically how one controls the number of epochs, right?

@thinkercache

The general understanding is: 1 kimg = 1000 images, meaning 1000 real images are shown to the network during training. From my experience, I would suggest --kimg=4000 in the training configuration as a good starting point to observe how G and D behave. After that, you can go lower or higher with kimg, depending on your dataset.
kimg is also the quantity used to satisfy the built-in plugins in PyTorch.
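As a rough sanity check on the "kimg controls epochs" intuition, the conversion is simple arithmetic (a sketch, assuming the 1 kimg = 1000 images convention described above; the function name is mine, not from any StyleGAN codebase):

```python
def kimg_to_epochs(kimg: float, dataset_size: int) -> float:
    # 1 kimg = 1000 real images shown to the discriminator,
    # so epochs ~= total images shown / dataset size
    return kimg * 1000 / dataset_size

# e.g. --kimg=4000 on a ~4000-image dataset means roughly
# 1000 passes over the data
print(kimg_to_epochs(4000, 4000))
```

So kimg fixes the total number of images shown rather than the epoch count, which is why the same --kimg value means very different epoch counts on datasets of different sizes.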

@lebeli

lebeli commented Jan 13, 2023

Thank you for the explanation. What's a good metric for observing both the generator and the discriminator? FID for the generator and logits for the discriminator, or simply logits for both?

@thinkercache

thinkercache commented Jan 13, 2023

@lebeli KID is good for small datasets, as the original KID paper suggests ("Demystifying MMD GANs", Mikołaj Bińkowski, Danica J. Sutherland, Michael Arbel, Arthur Gretton, https://arxiv.org/abs/1801.01401). FID is widely used and a good fit for large datasets (over ~10k images, as I currently understand it). One metric for both.
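For concreteness, KID is the unbiased squared-MMD estimate between real and generated Inception features, using a degree-3 polynomial kernel. A minimal NumPy sketch of that estimator (feature extraction is omitted; the inputs are assumed to be precomputed feature arrays, and the function names are mine):

```python
import numpy as np

def polynomial_kernel(x: np.ndarray, y: np.ndarray, degree: int = 3) -> np.ndarray:
    # KID's default kernel: k(x, y) = (x . y / d + 1)^degree,
    # where d is the feature dimensionality
    d = x.shape[1]
    return (x @ y.T / d + 1.0) ** degree

def kid(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
    """Unbiased MMD^2 estimate between two feature sets (the KID score)."""
    m, n = feats_real.shape[0], feats_fake.shape[0]
    k_rr = polynomial_kernel(feats_real, feats_real)
    k_ff = polynomial_kernel(feats_fake, feats_fake)
    k_rf = polynomial_kernel(feats_real, feats_fake)
    # exclude the diagonal terms for the unbiased within-set estimates
    term_rr = (k_rr.sum() - np.trace(k_rr)) / (m * (m - 1))
    term_ff = (k_ff.sum() - np.trace(k_ff)) / (n * (n - 1))
    term_rf = k_rf.mean()
    return float(term_rr + term_ff - 2.0 * term_rf)
```

In practice the paper averages this estimate over several random feature subsets; unlike FID, the estimator is unbiased, which is why it behaves better on small datasets like a ~4000-image one.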
