[retiarii] refactor of pytorch operators #3365
Conversation
QuanluZhang commented Feb 4, 2021 (edited)
- support more unit tests
- support shared modules
- refactor of PyTorch operators
- write more unit tests: PyTorch models
- write more unit tests: basic operations (partial)
- support control flow (v2.2)
    return None

def remove_inject_pytorch_nn():
    Identity = unwrap_module(nn.Identity)
This file looks tedious. For test usage, is it possible to write it like this?

from torch.nn import *
for module in torch.nn.__all__:
    wrap_module(getattr(torch.nn, module))

The injection seems to take effect on the module itself.
let's leave it for now
    str
        generated code line
    """
    if self.type == 'aten::slice':
Do this check in PyTorchOperation
Will refactor this part in the next PR.
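One way the suggested refactor could look: instead of branching on `self.type` inside a single codegen method, register one `PyTorchOperation` subclass per op kind and dispatch by type name. This is a hypothetical sketch; the class and method names below are made up for illustration, not NNI's actual API.

```python
# Registry mapping an op type name (e.g. 'aten::slice') to its handler class.
_OP_REGISTRY = {}

class PyTorchOperation:
    type_name = None  # subclasses set this to the op kind they handle

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        if cls.type_name:
            _OP_REGISTRY[cls.type_name] = cls

    @classmethod
    def create(cls, type_name: str) -> 'PyTorchOperation':
        # Fall back to the generic base class for unregistered ops.
        return _OP_REGISTRY.get(type_name, cls)()

    def to_forward_code(self) -> str:
        raise NotImplementedError

class SliceOp(PyTorchOperation):
    type_name = 'aten::slice'

    def to_forward_code(self) -> str:
        return '# slice-specific code generation lives here'

op = PyTorchOperation.create('aten::slice')
print(type(op).__name__)  # SliceOp
```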
elif tensor.node().kind() == 'aten::le':
    left = _generate_expr(tensor.node().inputsAt(0))
    right = _generate_expr(tensor.node().inputsAt(1))
    return f'({left} <= {right})'
Better to symmetrically add aten::ge
added
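The symmetric handling suggested above could be expressed with a lookup table so every comparison kind is covered the same way. A minimal sketch, assuming a helper that receives the node kind and the already-generated operand expressions (the real code derives these from `tensor.node()` and `inputsAt`):

```python
# Map TorchScript comparison node kinds to their Python operators, so
# aten::ge, aten::gt, etc. are handled symmetrically with aten::le.
_CMP_OPS = {
    'aten::lt': '<',
    'aten::le': '<=',
    'aten::gt': '>',
    'aten::ge': '>=',
    'aten::eq': '==',
    'aten::ne': '!=',
}

def generate_cmp_expr(kind: str, left: str, right: str) -> str:
    """Render one comparison node as a parenthesized Python expression."""
    op = _CMP_OPS[kind]
    return f'({left} {op} {right})'

print(generate_cmp_expr('aten::le', 'x', 'y'))  # (x <= y)
print(generate_cmp_expr('aten::ge', 'x', 'y'))  # (x >= y)
```

A table avoids duplicating the three-line pattern once per operator and makes a missing kind an explicit `KeyError` rather than a silent fall-through.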
subcell = ir_graph.add_node(submodule_full_name, submodule_type_str, sub_m_attrs)
if isinstance(submodule_obj, Placeholder):
    subcell.update_label(submodule_obj.label)
elif isinstance(submodule_obj, (LayerChoice, InputChoice)):
Why this check? Where is ValueChoice?
Because only our provided APIs can be given a label. If ValueChoice were a graph node, we would need to handle it here; otherwise there is no need to deal with it here.
I like this refactor, it seems everything is on the right track again. In order to find a good design pattern, I browsed some
nni/retiarii/codegen/pytorch.py
Outdated
@@ -33,8 +33,25 @@ def _sorted_incoming_edges(node: Node) -> List[Edge]:

def _format_inputs(node: Node) -> List[str]:
-> Tuple[List[str], List[Any]]:
It might be more pythonic to return List[Tuple[str, Any]]. Just a suggestion.
updated, thanks
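The two return shapes discussed above are easy to convert between, which is part of why the list-of-pairs form is convenient: each input's name stays next to its value, and `zip(*pairs)` recovers the parallel lists when a caller needs them. A small sketch with made-up data (not the real graph inputs):

```python
from typing import Any, List, Tuple

def format_inputs_pairs(inputs) -> List[Tuple[str, Any]]:
    """Return each input as a (name, value) pair, keeping them together."""
    return [(name, value) for name, value in inputs]

inputs = [('x', 1), ('weight', None), ('bias', 0.5)]
pairs = format_inputs_pairs(inputs)

# Recover the Tuple[List[str], List[Any]] shape if a caller needs it:
names, values = map(list, zip(*pairs))
print(names)   # ['x', 'weight', 'bias']
print(values)  # [1, None, 0.5]
```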
class NoopIdentity(PyTorchOperation):
NoOp
updated
# discover methods
for elem in dir(torch.Tensor):
    if not hidden(elem):
        schemas = torch._C._jit_get_schemas_for_operator("aten::" + elem)
This API looks dangerous to me...
Yes, kind of. This one is copied from this file: https://github.com/pytorch/pytorch/blob/master/torch/jit/supported_ops.py
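The discovery loop above can be sketched without PyTorch: enumerate a class's public attributes with `dir()` and filter out private names the way a `hidden` helper would. `FakeTensor` and this `hidden` implementation are stand-ins for illustration; the real code then asks TorchScript for each operator's schemas, which we skip here.

```python
def hidden(name: str) -> bool:
    """Treat underscore-prefixed (including dunder) names as hidden."""
    return name.startswith('_')

class FakeTensor:
    """Stand-in for torch.Tensor with two public methods."""
    def add(self, other): ...
    def mul(self, other): ...
    def _internal(self): ...

# dir() returns a sorted attribute list; the filter drops dunders and
# private helpers, leaving the candidate operator names.
methods = [elem for elem in dir(FakeTensor) if not hidden(elem)]
print(methods)  # ['add', 'mul']
```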
@tczhangzhi thanks for your suggestion! It would be great if you could also provide the links to the discussion here :). I agree with your suggestion; we also thought about this design. The problem is that we have to convert the graph back to code. If we only parse a submodule while keeping the top-level module unchanged, it is not easy to stitch the unchanged part and the generated part together. We will keep thinking about this. If you have good suggestions, feel free to tell us.