Add information about the ongoing DTensor API integration in spmd.md (#5735)
yeounoh authored and jonb377 committed Oct 31, 2023
commit a2b3b30 (1 parent: a9f4298)
docs/spmd.md: 14 additions & 0 deletions

@@ -202,6 +202,20 @@ The main use case for `XLAShardedTensor` [[RFC](https://github.com/pytorch/xla/i

There is also an ongoing effort to integrate `XLAShardedTensor` into the `DistributedTensor` API to support the XLA backend [[RFC](https://github.com/pytorch/pytorch/issues/92909)].

### DTensor Integration
PyTorch released [DTensor](https://github.com/pytorch/pytorch/blob/main/torch/distributed/_tensor/README.md) as a prototype in 2.1.
We are integrating PyTorch/XLA SPMD into the DTensor API [[RFC](https://github.com/pytorch/pytorch/issues/92909)]. We have a proof-of-concept integration for `distribute_tensor`, which calls the `mark_sharding` annotation API to shard a tensor and its computation using XLA:
```python
import torch
import torch_xla.runtime as xr
# In 2.1, the DTensor API lives in the prototype module `torch.distributed._tensor`.
from torch.distributed._tensor import DeviceMesh, Shard, distribute_tensor

# Total number of XLA devices visible to this SPMD program.
world_size = xr.global_runtime_device_count()

# distribute_tensor now works with the `xla` backend using PyTorch/XLA SPMD.
mesh = DeviceMesh("xla", list(range(world_size)))
big_tensor = torch.randn(100000, 88)
my_dtensor = distribute_tensor(big_tensor, mesh, [Shard(0)])
```
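As noted above, `distribute_tensor` on the `xla` backend calls the `mark_sharding` annotation API under the hood. For comparison, here is a minimal sketch of the equivalent direct annotation; it assumes the 2.1-era `torch_xla.experimental.xla_sharding` module and a 1-D mesh over all devices.

```python
import numpy as np
import torch
import torch_xla.core.xla_model as xm
import torch_xla.runtime as xr
import torch_xla.experimental.xla_sharding as xs

xr.use_spmd()  # enable SPMD execution mode

num_devices = xr.global_runtime_device_count()
# A 1-D mesh over all devices, mirroring DeviceMesh("xla", list(range(world_size))).
mesh = xs.Mesh(np.array(range(num_devices)), (num_devices,))

big_tensor = torch.randn(100000, 88).to(xm.xla_device())
# Shard dim 0 across the mesh and replicate dim 1 -- the equivalent of [Shard(0)].
xs.mark_sharding(big_tensor, mesh, (0, None))
```

Both paths attach the same XLA sharding annotation to the tensor, so DTensor code can express a PyTorch/XLA SPMD sharding without calling `mark_sharding` directly.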

This feature is experimental; stay tuned for more updates, examples, and tutorials in upcoming releases.

### Sharding-Aware Host-to-Device Data Loading
