Training time and the total number of images in the dataset used #61
Comments
Hello,

Hello, roughly how large is the dataset built from DIV2K, DIV8K, Flickr2K, OST, and FFHQ10k? I made a rough estimate: generating the paired dataset with make_paired_data.py with epoch set to 60 seems to take about 4 TB of storage. Is that about right?
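For the storage question above, one quick way to check is to sum the file sizes under the output directory after make_paired_data.py finishes. This is only a minimal sketch; the `./paired_data` path is a placeholder for wherever the paired data is actually written.

```python
import os

def dir_size_bytes(root: str) -> int:
    """Sum the sizes of all files under `root`, including subdirectories."""
    total = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path):
                total += os.path.getsize(path)
    return total

if __name__ == "__main__":
    # "./paired_data" is a placeholder for the output directory of make_paired_data.py.
    size_tib = dir_size_bytes("./paired_data") / 1024**4
    print(f"Generated paired data occupies about {size_tib:.2f} TiB")
```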
Hello, I haven't kept exact statistics on that on my end.
Hello, could you also release the pretrained model trained only on the DIV2K, DIV8K, Flickr2K, OST, and FFHQ10k datasets?
Hello, I have recently been reproducing this work and would like to ask a few questions:
1. With the 8 NVIDIA Tesla 32G-V100 GPUs described in the paper, how long does training take in total?
2. The paper states a batch size of 192 and 150K iterations. How should train_batch_size and gradient_accumulation_steps be set? My understanding is that train_batch_size × gradient_accumulation_steps × number of GPUs = 192 (see the sketch after this list); is that correct? Also, the arXiv v1 version gives a batch size of 32 while v2 gives 192, which is a large difference. Do these refer to the total batch size or the per-GPU batch size, and which one should be followed?
3. After cropping, how many images does the training dataset contain in total? Are the original images cropped directly, or resized first and then cropped?
4. The LSDIR dataset is quite large. Have you run experiments on the DF2K dataset, and does LSDIR already include the contents of DF2K?
Looking forward to your clarification.
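Regarding question 2: the following sketch only checks the effective-batch-size arithmetic under the convention used by typical Accelerate/diffusers-style training scripts (effective batch = per-GPU train_batch_size × gradient_accumulation_steps × number of GPUs). The splits shown are merely examples that reach 192 on 8 GPUs, not the authors' confirmed settings.

```python
# Sanity-check the effective (total) batch size implied by a training config.
# Assumed convention (typical for Accelerate/diffusers-style scripts):
#   effective_batch = per-GPU train_batch_size * gradient_accumulation_steps * num_gpus
def effective_batch_size(train_batch_size: int,
                         gradient_accumulation_steps: int,
                         num_gpus: int) -> int:
    return train_batch_size * gradient_accumulation_steps * num_gpus

if __name__ == "__main__":
    num_gpus = 8  # 8x Tesla V100-32G, as described in the paper
    # Example splits that all reach a total batch size of 192; the actual
    # per-GPU batch / accumulation split used by the authors is not confirmed here.
    for per_gpu, accum in [(24, 1), (12, 2), (6, 4)]:
        total = effective_batch_size(per_gpu, accum, num_gpus)
        print(f"train_batch_size={per_gpu}, grad_accum={accum}, "
              f"gpus={num_gpus} -> effective batch {total}")
```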