Commit de2a842: fix a comment

vanbasten23 committed Oct 20, 2023 (parent: f0989bf)

Showing 1 changed file (docs/pjrt.md) with 2 additions and 1 deletion.
````diff
@@ -249,11 +249,12 @@ On the second GPU machine, run
     --nnodes=2 \
     --node_rank=1 \
     --nproc_per_node=4 \
-    --rdzv_endpoint="<MACHINE_0_IP_ADDRESS>:12355" pytorch/xla/test/test_train_mp_imagenet_torchrun.py --fake_data --pjrt_distributed --batch_size=128 --num_epochs=1
+    --rdzv_endpoint="<MACHINE_0_IP_ADDRESS>:12355" pytorch/xla/test/test_train_mp_imagenet.py --fake_data --pjrt_distributed --batch_size=128 --num_epochs=1
 ```
 
 The difference between the two commands above is `--node_rank` and, potentially, `--nproc_per_node` if you want to use a different number of GPU devices on each machine. All the other arguments are identical.
+
 
 ## Differences from XRT
 
 Although in most cases we expect PJRT and XRT to work mostly interchangeably
````
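The hunk header notes that this command is for the second GPU machine. For context, a sketch of the first machine's counterpart command, assuming (as the surrounding docs describe) it differs only in `--node_rank`:

```shell
# Sketch of the machine-0 invocation; an assumption based on the hunk above,
# not shown in this diff. It mirrors the machine-1 command with --node_rank=0.
# <MACHINE_0_IP_ADDRESS> is a placeholder for the rendezvous host's IP.
torchrun \
  --nnodes=2 \
  --node_rank=0 \
  --nproc_per_node=4 \
  --rdzv_endpoint="<MACHINE_0_IP_ADDRESS>:12355" \
  pytorch/xla/test/test_train_mp_imagenet.py \
  --fake_data --pjrt_distributed --batch_size=128 --num_epochs=1
```

Both machines point `--rdzv_endpoint` at the same host (machine 0), which is how torchrun coordinates the two nodes into one job.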
