Commit: fix
Borda committed Dec 31, 2020
1 parent 64c73d5 commit 7af6832
Showing 1 changed file with 3 additions and 1 deletion.
4 changes: 3 additions & 1 deletion pytorch_lightning/accelerators/accelerator_connector.py
@@ -321,7 +321,9 @@ def set_distributed_mode(self):
             rank_zero_warn(
                 'You requested distributed training on GPUs, but none is available, so we set backend to `ddp_cpu`.'
             )
-            if self.trainer.num_nodes > 1 or self.trainer.num_processes > 1:
+            # in some cases the comparison would otherwise be between None and an int
+            if ((self.trainer.num_nodes and self.trainer.num_nodes > 1)
+                    or (self.trainer.num_processes and self.trainer.num_processes > 1)):
                 self.trainer._distrib_type = DistributedType.DDP
             else:
                 rank_zero_warn('You are running on single node with no parallelization, so distributed has no effect.')
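
For context on the bug this commit fixes: in Python 3, ordering comparisons between `None` and an `int` raise a `TypeError`, so the old condition crashed whenever `num_nodes` or `num_processes` was unset. A minimal sketch of the failure and of the short-circuit guard the patch uses (with a stand-in variable, not the actual `Trainer`):

    # Minimal sketch (stand-in variable, not the actual Trainer) of the
    # failure mode this commit guards against.

    num_nodes = None  # e.g. the flag was never populated

    try:
        if num_nodes > 1:  # old-style condition
            print("multi-node")
    except TypeError as err:
        # Python 3: '>' not supported between instances of 'NoneType' and 'int'
        print(f"broken comparison: {err}")

    # Patched style: the truthiness check short-circuits on None (and 0),
    # so the `> 1` comparison only ever runs between two ints.
    if num_nodes and num_nodes > 1:
        print("multi-node")
    else:
        print("single node")

Because `and` short-circuits, the `> 1` comparison is simply skipped when the value is falsy, which is why wrapping each operand this way is enough to make the condition safe.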
