Reason for using args.rank in torch.manual_seed? #372
gouthamk1998 asked this question in Q&A (unanswered, 1 comment)

gouthamk1998:
What is the reason for adding args.rank to args.seed in torch.manual_seed in train.py? In distributed mode, won't different processes end up with different seeds, so that the model gets initialized with different weights in each process?
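For reference, the line being asked about follows roughly this pattern (a sketch with hypothetical placeholder values standing in for args.seed and args.rank; the exact train.py code may differ):

```python
import torch

# Hypothetical stand-ins for the values parsed in train.py.
seed = 42   # args.seed: base seed shared by all processes
rank = 1    # args.rank: this process's index in the distributed job

# Each process seeds PyTorch's RNG with a rank-dependent offset, so
# random operations (augmentations, dropout, etc.) differ per process.
torch.manual_seed(seed + rank)
```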
@gouthamk1998 The goal was to ensure that each process generates a different sequence of augmentations, etc. For distributed training, the DDP wrapper broadcasts the model's params/buffers from rank 0 when it is created, so every process ends up with the same initial weights despite the differing seeds.
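A minimal sketch of why the differing initial weights are not a problem, assuming a CPU/gloo setup launched with torchrun (the real train.py setup will differ): each rank seeds its RNG differently and therefore builds a model with different random weights, but wrapping it in DistributedDataParallel broadcasts rank 0's parameters and buffers to every rank.

```python
# Sketch only; assumes launch via `torchrun --nproc_per_node=2 this_file.py`.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="gloo")  # use "nccl" for GPU training
    rank = dist.get_rank()

    torch.manual_seed(42 + rank)      # per-rank seed, as in train.py
    model = torch.nn.Linear(4, 4)     # weights differ across ranks at this point

    ddp_model = DDP(model)            # constructor broadcasts rank 0's params/buffers

    # After wrapping, the underlying weights are identical on every rank,
    # so the printed sum should match across processes.
    print(f"rank {rank}: weight sum = {ddp_model.module.weight.sum().item():.6f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Both processes should print the same weight sum, even though each constructed its Linear layer under a different seed.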