Add API to assemble CPU shards to a sharded tensor #5681
Conversation
LGTM.
Kokoro failure is due to a dependency issue.
I'll merge after TPU CI. Thanks Jiewen!
Looking into the TPU CI failure; it's new since the rebase. The test passes locally on v4, so it may be that it breaks with 8 devices.
Surprisingly, the test actually failed on the original PR, but TPU CI still passed: https://github.com/pytorch/xla/runs/17442958115
* Add API to assemble CPU shards to a sharded tensor
* Handle replicated sharding
* Move validations into get_op_sharding
* Improve tests and error handling
* Don't WrapXlaData
* Fix test for v3
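The commits above describe assembling per-device CPU shards back into a single global tensor, with replicated sharding handled separately. The sketch below is a hypothetical illustration of that idea in plain Python, not the actual torch_xla API: `assemble_shards` and its `sharding` argument are invented names for illustration only.

```python
# Hypothetical sketch (not torch_xla's real API): reassemble per-device CPU
# shards into one global tensor, represented here as nested Python lists.
def assemble_shards(shards, sharding):
    if not shards:
        raise ValueError("expected at least one shard")
    if sharding == "replicated":
        # Replicated sharding: every device holds a full copy of the tensor,
        # so validate that all copies agree and return one of them.
        if any(s != shards[0] for s in shards):
            raise ValueError("replicated shards must be identical")
        return shards[0]
    if sharding == "tiled":
        # Tiled sharding: shards partition the tensor along dim 0;
        # concatenating them in device order recovers the global tensor.
        out = []
        for s in shards:
            out.extend(s)
        return out
    raise ValueError(f"unknown sharding type: {sharding}")
```

Usage: `assemble_shards([[1, 2], [3, 4]], "tiled")` yields `[1, 2, 3, 4]`, while `assemble_shards([[5], [5]], "replicated")` yields `[5]`. The real implementation additionally moves validation into `get_op_sharding` and avoids wrapping the XLA data, per the commit messages.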
This PR reintroduces #5630, which was reverted in #5680 due to failing CI on master.
The following patch shows the difference between this and the original PR: