[Fori_loop|While_loop] Enable fori_loop with add/sub test case #6603
Conversation
Force-pushed 7a8f9f5 to c0d3359
torch_xla/csrc/lowering_context.cpp
Outdated
if (!root_tuple_.empty() && (root_tuple_.size() > 1)) {
  xla::XlaOp root = xla::Tuple(builder(), root_tuple_);
  xla = builder()->Build(root);
} else if (!root_tuple_.empty() && (root_tuple_.size() == 1)) {
Explain: we need to skip the tuple wrapping when creating the cond/body computations, to match the format check xla::While applies to cond (see the error log).
Hi @JackCaoG, since this PR would add a new function to
@amithrm FYI
The kokoro failure should be fixed on the master branch; let's skip it for now.
Left some suggestions
Force-pushed c35ac53 to c7f09d5
@@ -1027,7 +1033,9 @@ void BuildLoweringContextSubmodule(py::module* m) {
       .def("hlo_json", &PyLoweringContext::GetHloJsonText)
       .def("parameter_id_tensor_mapping",
            &PyLoweringContext::GetParameterIdTensorMapping)
-      .def("tensor_parameter_id", &PyLoweringContext::GetTensorParameterId);
+      .def("tensor_parameter_id", &PyLoweringContext::GetTensorParameterId)
+      .def("set_name_string", &PyLoweringContext::SetNameString)
Good, thanks!
torch_xla/csrc/lowering_context.cpp
Outdated
xla::XlaOp root = xla::Tuple(builder(), root_tuple_);
xla = builder()->Build(root);
} else if (!root_tuple_.empty() && (root_tuple_.size() == 1)) {
I think we should condition on get_name_string(). Add this check at the top and build for the while loop if `get_name_string() == "condctx" or get_name_string() == "bodyctx"`; otherwise, keep the original build logic.
Put the while-loop build logic in a separate private method, and call it when that condition is true. That way you can keep BuildXla() simple.
Thanks, makes sense; updated in the newest commit. Since the while-loop build logic is a single line of code, we run it directly rather than wrapping it in a separate private method.
torch_xla/csrc/lowering_context.h
Outdated
@@ -68,7 +72,8 @@ class LoweringContext : public torch::lazy::LoweringContext {
   xla::XlaOp GetOutputOp(const torch::lazy::Output& output);

   // Build the XLA computation capturing all the operations created with the
-  // embedded XLA builder (returned by the builder() API).
+  // embedded XLA builder (returned by the builder() API) with check whether
I think we can keep the original comment, and add the while-specific comment on the new private method, as described above.
Thanks, updated
def cond_fn(x):
  return x.sum() <= 10

ten = torch.ones(1, dtype=torch.int32, device=device)
cosmetic nit: `ten` can be read as the number 10; please pick a clearer name.
Thanks, updated
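The cond_fn/body_fn contract exercised by the test above can be sketched in plain Python, independent of torch_xla. The driver function below is only an illustrative stand-in for `torch._higher_order_ops.while_loop`, and the scalar cond/body functions are simplified analogues of the test case, not the PR's actual code:

```python
def while_loop(cond_fn, body_fn, operands):
    # Reference semantics of a functional while loop: keep applying
    # body_fn while cond_fn holds, then return the carried operands.
    while cond_fn(*operands):
        operands = body_fn(*operands)
    return operands

def cond_fn(x):
    return x <= 10  # scalar stand-in for x.sum() <= 10

def body_fn(x):
    return (x + 1,)  # stand-in for the "add" test case

(result,) = while_loop(cond_fn, body_fn, (1,))
print(result)  # 11: the first carried value for which cond_fn is False
```

The key property being lowered is that cond and body are pure functions of the carried operands, which is what lets them become standalone XLA computations.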
Force-pushed 96ce688 to 173ff44
For the fori_loop implementation with while_loop, this PR lowers body/cond to replace the formal placeholder.
This is the step-two PR; parent PR (#6532), child PR (#6529), source PR (#6563).
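The intended fori_loop semantics can be stated as a counted loop that threads a carried value through the body. This pure-Python sketch assumes a JAX-style signature `fori_loop(lower, upper, body_fun, init_val)`; the signature is an assumption for illustration, not the PR's actual implementation:

```python
def fori_loop(lower, upper, body_fun, init_val):
    # Run body_fun for i in [lower, upper), threading the carried value.
    val = init_val
    for i in range(lower, upper):
        val = body_fun(i, val)
    return val

# add/sub test cases analogous to the PR's
print(fori_loop(0, 5, lambda i, v: v + 1, 0))   # 5
print(fori_loop(0, 3, lambda i, v: v - 2, 10))  # 4
```

Implementing this on top of while_loop means packing (i, upper, val) into the carried operands and using `i < upper` as the cond computation.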
Some issues fixed:
- body fn: tried torch.sub(a, b), which also passed locally; it is not a torch func, will test later
- inputs are limited to list/tuple; this matches what torch._higher_order_ops.while_loop requires
- input was transformed from a list to a non-list after torch.compile; TODO: add the same logic as torch.compile to consume the inputs, instead of currently creating a duplicated tensor in the fori_loop.py file