Thanks for your solid work!
I have some problems reproducing the results on your leaderboard. The suggested batch size in your paper is 512, but the largest batch size that fits on a single GPU (a 2080 Ti) is 64 (128 causes OOM).
tensorflow_hub does not seem to work with tf.distribute.MirroredStrategy() for multi-GPU training, failing with the error below:
RuntimeError: variable_scope module/ was unused but the corresponding name_scope was already taken.
This is the same issue as tensorflow/hub#64.
Could you give me some suggestions? (I'm not very familiar with TF.)
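For context, MirroredStrategy splits the global batch evenly across replicas, so the per-GPU cap determines how many GPUs the paper's setting would need. A quick sanity check (plain Python; the 64-per-GPU cap is the one observed above, and `gpus_needed` is just an illustrative helper, not part of any library):

```python
# MirroredStrategy divides the global batch evenly across replicas,
# so each GPU only needs to hold global_batch / num_gpus examples.
def gpus_needed(global_batch, per_gpu_cap):
    """Minimum number of GPUs so each replica's share fits in memory."""
    return -(-global_batch // per_gpu_cap)  # ceiling division

# Paper's batch size 512, observed single-2080-Ti cap of 64:
print(gpus_needed(512, 64))  # -> 8
```

So even if the MirroredStrategy error were fixed, reaching a global batch of 512 would take roughly 8 GPUs of this size (or gradient accumulation as a workaround).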