add initial support for torch.ops.aten.neg.default converter #2147
Conversation
Code conforms to C++ style guidelines
There are some changes that do not conform to Python style guidelines:
--- py/torch_tensorrt/dynamo/conversion/aten_ops_converters.py 2023-07-30 01:37:11.581293 +0000
+++ py/torch_tensorrt/dynamo/conversion/aten_ops_converters.py 2023-07-30 01:37:33.758338 +0000
@@ -256,17 +256,17 @@
network: TRTNetwork,
target: Target,
args: Tuple[Argument, ...],
kwargs: Dict[str, Argument],
name: str,
-) -> Union[TRTTensor, Sequence[TRTTensor]]:
-
+) -> Union[TRTTensor, Sequence[TRTTensor]]:
+
return impl.unary.neg(
network,
target,
SourceIR.ATEN,
- name,
+ name,
args[0],
)
@dynamo_tensorrt_converter(torch.ops.aten.squeeze.dim)
--- py/torch_tensorrt/dynamo/conversion/impl/unary/ops.py 2023-07-30 01:37:11.585293 +0000
+++ py/torch_tensorrt/dynamo/conversion/impl/unary/ops.py 2023-07-30 01:37:33.831570 +0000
@@ -95,20 +95,16 @@
trt.ElementWiseOperation.SUB,
double_floor_div_output,
1,
)
+
def neg(
network: TRTNetwork,
target: Target,
source_ir: Optional[SourceIR],
name: str,
input_val: TRTTensor,
) -> TRTTensor:
return convert_unary(
- network,
- target,
- source_ir,
- name,
- trt.UnaryOperation.EXP,
- input_val
- )
\ No newline at end of file
+ network, target, source_ir, name, trt.UnaryOperation.EXP, input_val
+ )
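The diff above registers a converter for `torch.ops.aten.neg.default` via the `@dynamo_tensorrt_converter` decorator. As a rough illustration of that registration pattern, here is a hypothetical, dependency-free sketch; the registry, decorator name, and return value are illustrative stand-ins, not the actual Torch-TensorRT API:

```python
from typing import Any, Callable, Dict

# Hypothetical registry mapping an op target (a string key here, an
# actual torch op overload in the real code) to its converter function.
CONVERTER_REGISTRY: Dict[str, Callable] = {}

def register_converter(target: str) -> Callable:
    """Sketch of a decorator like @dynamo_tensorrt_converter."""
    def decorator(fn: Callable) -> Callable:
        CONVERTER_REGISTRY[target] = fn
        return fn
    return decorator

@register_converter("aten.neg.default")
def aten_ops_neg(network: Any, target: Any, args: tuple, kwargs: dict, name: str):
    # The real converter dispatches to impl.unary.neg, which emits a
    # trt.UnaryOperation.NEG layer; here we just tag the input.
    return ("NEG", args[0])

# Lookup and dispatch, as the conversion pass would do per FX node:
print(CONVERTER_REGISTRY["aten.neg.default"](None, None, (3,), {}, "neg_1"))
```

At conversion time the pass looks up each FX node's target in the registry and calls the matching converter to build the corresponding TensorRT layer.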
Code conforms to C++ style guidelines
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
There are some changes that do not conform to Python style guidelines:
--- py/torch_tensorrt/dynamo/conversion/aten_ops_converters.py 2023-07-31 16:31:00.909463 +0000
+++ py/torch_tensorrt/dynamo/conversion/aten_ops_converters.py 2023-07-31 16:31:24.852864 +0000
@@ -256,17 +256,17 @@
network: TRTNetwork,
target: Target,
args: Tuple[Argument, ...],
kwargs: Dict[str, Argument],
name: str,
-) -> Union[TRTTensor, Sequence[TRTTensor]]:
-
+) -> Union[TRTTensor, Sequence[TRTTensor]]:
+
return impl.unary.neg(
network,
target,
SourceIR.ATEN,
- name,
+ name,
args[0],
)
@dynamo_tensorrt_converter(torch.ops.aten.squeeze.dim)
--- py/torch_tensorrt/dynamo/conversion/impl/unary/ops.py 2023-07-31 16:31:00.909463 +0000
+++ py/torch_tensorrt/dynamo/conversion/impl/unary/ops.py 2023-07-31 16:31:25.012634 +0000
@@ -95,20 +95,16 @@
trt.ElementWiseOperation.SUB,
double_floor_div_output,
1,
)
+
def neg(
network: TRTNetwork,
target: Target,
source_ir: Optional[SourceIR],
name: str,
input_val: TRTTensor,
) -> TRTTensor:
return convert_unary(
- network,
- target,
- source_ir,
- name,
- trt.UnaryOperation.NEG,
- input_val
- )
\ No newline at end of file
+ network, target, source_ir, name, trt.UnaryOperation.NEG, input_val
+ )
Code conforms to C++ style guidelines
Code conforms to Python style guidelines
Changes look good to me, pending testing. Made some small comments on the test file.
Consider renaming this file test_unary_aten.py, so any subsequent unary op tests can go in the same file.
    def forward(self, input):
        return torch.neg(input)

inputs = [torch.randn(1, 10)]
I believe randn defaults to torch.int64, so consider switching to rand for float, or adding .int() to the tensor instantiation for int.
Hey @gs-olive, I went through the PyTorch documentation here: https://pytorch.org/docs/stable/generated/torch.randn.html. I don't think it's torch.int64, because the docs say the default dtype is a float type.
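The default can also be checked directly; a quick sketch, assuming a stock PyTorch install with the global default dtype left unchanged:

```python
import torch

# torch.randn samples from a standard normal distribution and uses the
# global default dtype, which is torch.float32 out of the box, so the
# result is a float tensor, not an integer one.
x = torch.randn(1, 10)
print(x.dtype)  # torch.float32

# Integer test inputs would instead come from torch.randint with an
# explicit integer dtype:
y = torch.randint(0, 5, (1, 10), dtype=torch.int32)
print(y.dtype)  # torch.int32
```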
I believe randn should cover the different data types. You could also try including parameterized test cases to check different shapes of the input tensor.
class TestNegConverter(DispatchTestCase):
    def test_neg(self):
Consider adding cases to check both integer and float tensor negation. It could just be two functions in the same class.
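A standalone sketch of the suggested split into float and integer cases; this uses plain functions and asserts rather than the repo's DispatchTestCase/run_test harness, so the names here are illustrative only:

```python
import torch

def check_neg_float():
    # Float case: torch.neg should match unary minus and keep the dtype.
    x = torch.randn(2, 2, dtype=torch.float)
    out = torch.neg(x)
    assert torch.equal(out, -x)
    assert out.dtype == torch.float32

def check_neg_int():
    # Integer case: negating an int32 tensor stays int32.
    x = torch.randint(0, 5, (2, 2), dtype=torch.int32)
    out = torch.neg(x)
    assert torch.equal(out, -x)
    assert out.dtype == torch.int32

check_neg_float()
check_neg_int()
print("ok")
```

In the actual test file these would be two test methods in the same TestNegConverter class, each calling self.run_test with expected_ops={torch.ops.aten.neg.default}.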
Cool, will do! Thanks!
Code conforms to C++ style guidelines
There are some changes that do not conform to Python style guidelines:
--- tests/py/dynamo/converters/test_neg_aten.py 2023-08-04 21:30:31.103961 +0000
+++ tests/py/dynamo/converters/test_neg_aten.py 2023-08-04 21:30:50.795785 +0000
@@ -9,18 +9,17 @@
class TestNegConverter(DispatchTestCase):
@parameterized.expand(
[
("2d_dim_dtype_float", (2, 2), torch.float),
("3d_dim_dtype_float", (2, 2, 2), torch.float),
-
]
)
def test_neg_float(self, _, x, type):
class neg(nn.Module):
def forward(self, input):
return torch.neg(input)
-
+
inputs = [torch.randn(x, dtype=type)]
self.run_test(
neg(),
inputs,
expected_ops={torch.ops.aten.neg.default},
@@ -30,21 +29,21 @@
[
("2d_dim_dtype_int", (2, 2), torch.int32, 0, 5),
("3d_dim_dtype_int", (2, 2, 2), torch.int32, 0, 5),
]
)
-
def test_neg_int(self, _, x, type, min, max):
class neg(nn.Module):
def forward(self, input):
return torch.neg(input)
-
+
inputs = [torch.randint(min, max, (x), dtype=type)]
self.run_test(
neg(),
inputs,
expected_ops={torch.ops.aten.neg.default},
)
+
if __name__ == "__main__":
run_tests()
Force-pushed from ef62a7a to 1d732a5
Code conforms to C++ style guidelines
There are some changes that do not conform to Python style guidelines:
--- tests/py/dynamo/converters/test_neg_aten.py 2023-08-04 21:33:26.363439 +0000
+++ tests/py/dynamo/converters/test_neg_aten.py 2023-08-04 21:33:50.370156 +0000
@@ -9,18 +9,17 @@
class TestNegConverter(DispatchTestCase):
@parameterized.expand(
[
("2d_dim_dtype_float", (2, 2), torch.float),
("3d_dim_dtype_float", (2, 2, 2), torch.float),
-
]
)
def test_neg_float(self, _, x, type):
class neg(nn.Module):
def forward(self, input):
return torch.neg(input)
-
+
inputs = [torch.randn(x, dtype=type)]
self.run_test(
neg(),
inputs,
expected_ops={torch.ops.aten.neg.default},
@@ -30,21 +29,21 @@
[
("2d_dim_dtype_int", (2, 2), torch.int32, 0, 5),
("3d_dim_dtype_int", (2, 2, 2), torch.int32, 0, 5),
]
)
-
def test_neg_int(self, _, x, type, min, max):
class neg(nn.Module):
def forward(self, input):
return torch.neg(input)
-
+
inputs = [torch.randint(min, max, (x), dtype=type)]
self.run_test(
neg(),
inputs,
expected_ops={torch.ops.aten.neg.default},
)
+
if __name__ == "__main__":
run_tests()
Force-pushed from 1d732a5 to 9f24f92
Code conforms to C++ style guidelines
Code conforms to Python style guidelines
Just needs a rebase to the latest main, and should be good to go!
The changes here can be removed since it was added in #2246
Force-pushed from 1a09463 to 9ecae98
Code conforms to C++ style guidelines
There are some changes that do not conform to Python style guidelines:
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/aten_ops_converters.py 2023-09-05 23:01:00.766154+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/aten_ops_converters.py 2023-09-05 23:03:49.447087+00:00
@@ -351,10 +351,11 @@
SourceIR.ATEN,
name,
args[0],
)
+
@dynamo_tensorrt_converter(torch.ops.aten.neg.default)
def aten_ops_neg(
network: TRTNetwork,
target: Target,
args: Tuple[Argument, ...],
@@ -372,10 +373,11 @@
target,
SourceIR.ATEN,
name,
input_val,
)
+
@dynamo_tensorrt_converter(torch.ops.aten.squeeze.dim) # type: ignore[misc]
@dynamo_tensorrt_converter(torch.ops.aten.squeeze.dims) # type: ignore[misc]
def aten_ops_squeeze(
network: TRTNetwork,
Force-pushed from 9ecae98 to 641adb0
Code conforms to C++ style guidelines
Code conforms to Python style guidelines
Looks good to me! Passes TorchScript CI (verified locally)
LGTM
Description
Initial support for a torch.ops.aten.neg.default dynamo converter.
Checklist: