Partially implement LTC Backend interface for PyTorch/XLA. #3571
Conversation
@wonjoolee95 I think this PR is ready to be reviewed. This PR only partially implements the LTC Backend interface; I will have follow-up PR(s) for the remaining pieces.
std::vector<std::string> GetCompilationDevices(
    const std::string& device,
    c10::ArrayRef<std::string> devices) const override {
  return xla::ComputationClient::Get()->GetCompilationDevices(device, devices);
}
Thanks! So this XlaBackendImpl won't be used in XLA yet until we implement the InitXlaBackend function, correct?
@wonjoolee95 It won't be used even if we implement InitXlaBackend. One thing we can do incrementally is to start using the function in