
add initial support for torch.ops.aten.neg.default converter #2147

Merged
merged 1 commit into main from dynamo_neg_converter on Sep 7, 2023

Conversation

bowang007
Copy link
Collaborator

Description

Initial support for a Dynamo converter for torch.ops.aten.neg.default.
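For context, the registration pattern behind `@dynamo_tensorrt_converter` can be sketched as a plain decorator registry. This is a minimal, illustrative sketch; the registry and function names below are hypothetical stand-ins, not the torch_tensorrt API:

```python
# Minimal sketch of a decorator-based converter registry
# (illustrative names; not the real torch_tensorrt implementation).
from typing import Any, Callable, Dict

CONVERTERS: Dict[str, Callable[..., Any]] = {}

def register_converter(op_name: str) -> Callable[[Callable[..., Any]], Callable[..., Any]]:
    """Map an ATen op name to its converter function."""
    def decorator(fn: Callable[..., Any]) -> Callable[..., Any]:
        CONVERTERS[op_name] = fn
        return fn
    return decorator

@register_converter("aten.neg.default")
def convert_neg(x: float) -> float:
    # The real converter emits a TensorRT unary NEG layer;
    # here we only model the op's semantics.
    return -x

# Dispatch through the registry, as a conversion pass would:
result = CONVERTERS["aten.neg.default"](3.5)  # -3.5
```

The real decorator additionally attaches validation and capability metadata; the lookup-by-op-name dispatch is the core idea.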

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes
  • I have added the relevant labels to my PR so that relevant reviewers are notified

@github-actions github-actions bot left a comment

Code conforms to C++ style guidelines

@github-actions github-actions bot left a comment

There are some changes that do not conform to Python style guidelines:

--- py/torch_tensorrt/dynamo/conversion/aten_ops_converters.py	2023-07-30 01:37:11.581293 +0000
+++ py/torch_tensorrt/dynamo/conversion/aten_ops_converters.py	2023-07-30 01:37:33.758338 +0000
@@ -256,17 +256,17 @@
    network: TRTNetwork,
    target: Target,
    args: Tuple[Argument, ...],
    kwargs: Dict[str, Argument],
    name: str,
-) -> Union[TRTTensor, Sequence[TRTTensor]]: 
-    
+) -> Union[TRTTensor, Sequence[TRTTensor]]:
+
    return impl.unary.neg(
        network,
        target,
        SourceIR.ATEN,
-        name, 
+        name,
        args[0],
    )


@dynamo_tensorrt_converter(torch.ops.aten.squeeze.dim)
--- py/torch_tensorrt/dynamo/conversion/impl/unary/ops.py	2023-07-30 01:37:11.585293 +0000
+++ py/torch_tensorrt/dynamo/conversion/impl/unary/ops.py	2023-07-30 01:37:33.831570 +0000
@@ -95,20 +95,16 @@
        trt.ElementWiseOperation.SUB,
        double_floor_div_output,
        1,
    )

+
def neg(
    network: TRTNetwork,
    target: Target,
    source_ir: Optional[SourceIR],
    name: str,
    input_val: TRTTensor,
) -> TRTTensor:
    return convert_unary(
-        network,
-        target,
-        source_ir,
-        name,
-        trt.UnaryOperation.EXP,
-        input_val
-    )
\ No newline at end of file
+        network, target, source_ir, name, trt.UnaryOperation.EXP, input_val
+    )

@github-actions github-actions bot left a comment

Code conforms to C++ style guidelines

@github-actions github-actions bot left a comment

There are some changes that do not conform to Python style guidelines:

--- py/torch_tensorrt/dynamo/conversion/aten_ops_converters.py	2023-07-31 16:31:00.909463 +0000
+++ py/torch_tensorrt/dynamo/conversion/aten_ops_converters.py	2023-07-31 16:31:24.852864 +0000
@@ -256,17 +256,17 @@
    network: TRTNetwork,
    target: Target,
    args: Tuple[Argument, ...],
    kwargs: Dict[str, Argument],
    name: str,
-) -> Union[TRTTensor, Sequence[TRTTensor]]: 
-    
+) -> Union[TRTTensor, Sequence[TRTTensor]]:
+
    return impl.unary.neg(
        network,
        target,
        SourceIR.ATEN,
-        name, 
+        name,
        args[0],
    )


@dynamo_tensorrt_converter(torch.ops.aten.squeeze.dim)
--- py/torch_tensorrt/dynamo/conversion/impl/unary/ops.py	2023-07-31 16:31:00.909463 +0000
+++ py/torch_tensorrt/dynamo/conversion/impl/unary/ops.py	2023-07-31 16:31:25.012634 +0000
@@ -95,20 +95,16 @@
        trt.ElementWiseOperation.SUB,
        double_floor_div_output,
        1,
    )

+
def neg(
    network: TRTNetwork,
    target: Target,
    source_ir: Optional[SourceIR],
    name: str,
    input_val: TRTTensor,
) -> TRTTensor:
    return convert_unary(
-        network,
-        target,
-        source_ir,
-        name,
-        trt.UnaryOperation.NEG,
-        input_val
-    )
\ No newline at end of file
+        network, target, source_ir, name, trt.UnaryOperation.NEG, input_val
+    )

@github-actions github-actions bot added component: conversion Issues re: Conversion stage component: converters Issues re: Specific op converters component: dynamo Issues relating to the `torch.compile` or `torch._dynamo.export` paths labels Aug 2, 2023
@github-actions github-actions bot left a comment

Code conforms to C++ style guidelines

@github-actions github-actions bot left a comment

Code conforms to Python style guidelines

@github-actions github-actions bot left a comment

Code conforms to Python style guidelines

@github-actions github-actions bot left a comment

Code conforms to C++ style guidelines

@github-actions github-actions bot left a comment

Code conforms to C++ style guidelines

@github-actions github-actions bot left a comment

Code conforms to Python style guidelines

@github-actions github-actions bot left a comment

Code conforms to C++ style guidelines

@github-actions github-actions bot left a comment

Code conforms to C++ style guidelines

@github-actions github-actions bot left a comment

Code conforms to Python style guidelines

@github-actions github-actions bot left a comment

Code conforms to Python style guidelines

@gs-olive gs-olive (Collaborator) left a comment

Changes look good to me, pending testing. Made some small comments on the test file.

Collaborator

Consider renaming this file test_unary_aten.py, so any subsequent unary op tests can go in the same file

    def forward(self, input):
        return torch.neg(input)

inputs = [torch.randn(1, 10)]
Collaborator

I believe randn defaults to torch.int64, so consider switching to rand for float or adding .int() for int, to the instantiation of the tensor

@bowang007 bowang007 (Collaborator, Author) Aug 3, 2023

Hey @gs-olive, I went through the PyTorch documentation here: https://pytorch.org/docs/stable/generated/torch.randn.html
I don't think it's torch.int64, since the docs say the default is a float type.
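As a quick sanity check of the defaults under discussion: `torch.randn` produces floating-point tensors (default dtype `torch.float32`), while `torch.randint` defaults to `torch.int64`:

```python
import torch

a = torch.randn(2, 2)            # standard normal samples, default dtype float32
b = torch.randint(0, 5, (2, 2))  # integers in [0, 5), default dtype int64

print(a.dtype)  # torch.float32
print(b.dtype)  # torch.int64
```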

Collaborator

I believe randn should cover the different data types. You could also include parameterized test cases to check different shapes of the input tensor.
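The final test file uses `parameterized.expand` for this. The same idea, one test body run over several named cases, can be sketched with only the standard library's `unittest.subTest`; the `neg` helper below is a stand-in for illustration, not the converter itself:

```python
import unittest

def neg(values):
    # Stand-in for the operation under test: elementwise negation.
    return [-v for v in values]

class TestNegCases(unittest.TestCase):
    def test_neg_cases(self):
        cases = [
            ("ints", [1, -2, 3], [-1, 2, -3]),
            ("floats", [0.5, -1.5], [-0.5, 1.5]),
        ]
        for name, inp, expected in cases:
            # Each case is reported separately on failure.
            with self.subTest(name=name):
                self.assertEqual(neg(inp), expected)
```

Running with `python -m unittest` reports each subtest individually, much as `parameterized.expand` generates one test per case.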



class TestNegConverter(DispatchTestCase):
    def test_neg(self):
Collaborator

Consider adding cases to check both integer and float tensor negation. It could just be two functions in the same class.

Collaborator (Author)

cool will do! thanks!

@github-actions github-actions bot left a comment

Code conforms to C++ style guidelines

@github-actions github-actions bot left a comment

There are some changes that do not conform to Python style guidelines:

--- tests/py/dynamo/converters/test_neg_aten.py	2023-08-04 21:30:31.103961 +0000
+++ tests/py/dynamo/converters/test_neg_aten.py	2023-08-04 21:30:50.795785 +0000
@@ -9,18 +9,17 @@
class TestNegConverter(DispatchTestCase):
    @parameterized.expand(
        [
            ("2d_dim_dtype_float", (2, 2), torch.float),
            ("3d_dim_dtype_float", (2, 2, 2), torch.float),
-        
        ]
    )
    def test_neg_float(self, _, x, type):
        class neg(nn.Module):
            def forward(self, input):
                return torch.neg(input)
-            
+
        inputs = [torch.randn(x, dtype=type)]
        self.run_test(
            neg(),
            inputs,
            expected_ops={torch.ops.aten.neg.default},
@@ -30,21 +29,21 @@
        [
            ("2d_dim_dtype_int", (2, 2), torch.int32, 0, 5),
            ("3d_dim_dtype_int", (2, 2, 2), torch.int32, 0, 5),
        ]
    )
-
    def test_neg_int(self, _, x, type, min, max):
        class neg(nn.Module):
            def forward(self, input):
                return torch.neg(input)
-            
+
        inputs = [torch.randint(min, max, (x), dtype=type)]

        self.run_test(
            neg(),
            inputs,
            expected_ops={torch.ops.aten.neg.default},
        )

+
if __name__ == "__main__":
    run_tests()

@github-actions github-actions bot left a comment

Code conforms to C++ style guidelines

@github-actions github-actions bot left a comment

There are some changes that do not conform to Python style guidelines:

--- tests/py/dynamo/converters/test_neg_aten.py	2023-08-04 21:33:26.363439 +0000
+++ tests/py/dynamo/converters/test_neg_aten.py	2023-08-04 21:33:50.370156 +0000
@@ -9,18 +9,17 @@
class TestNegConverter(DispatchTestCase):
    @parameterized.expand(
        [
            ("2d_dim_dtype_float", (2, 2), torch.float),
            ("3d_dim_dtype_float", (2, 2, 2), torch.float),
-        
        ]
    )
    def test_neg_float(self, _, x, type):
        class neg(nn.Module):
            def forward(self, input):
                return torch.neg(input)
-            
+
        inputs = [torch.randn(x, dtype=type)]
        self.run_test(
            neg(),
            inputs,
            expected_ops={torch.ops.aten.neg.default},
@@ -30,21 +29,21 @@
        [
            ("2d_dim_dtype_int", (2, 2), torch.int32, 0, 5),
            ("3d_dim_dtype_int", (2, 2, 2), torch.int32, 0, 5),
        ]
    )
-
    def test_neg_int(self, _, x, type, min, max):
        class neg(nn.Module):
            def forward(self, input):
                return torch.neg(input)
-            
+
        inputs = [torch.randint(min, max, (x), dtype=type)]

        self.run_test(
            neg(),
            inputs,
            expected_ops={torch.ops.aten.neg.default},
        )

+
if __name__ == "__main__":
    run_tests()

@github-actions github-actions bot left a comment

Code conforms to C++ style guidelines

@github-actions github-actions bot left a comment

Code conforms to Python style guidelines

@bowang007 bowang007 requested a review from gs-olive August 8, 2023 00:08
@github-actions github-actions bot left a comment

Code conforms to C++ style guidelines

@github-actions github-actions bot left a comment

Code conforms to Python style guidelines

@gs-olive gs-olive (Collaborator) left a comment

Just needs a rebase to the latest main, and should be good to go!

Collaborator

The changes here can be removed since it was added in #2246

@bowang007 bowang007 force-pushed the dynamo_neg_converter branch 2 times, most recently from 1a09463 to 9ecae98 Compare September 5, 2023 23:00
@github-actions github-actions bot left a comment

Code conforms to C++ style guidelines

@github-actions github-actions bot left a comment

There are some changes that do not conform to Python style guidelines:

--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/aten_ops_converters.py	2023-09-05 23:01:00.766154+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/aten_ops_converters.py	2023-09-05 23:03:49.447087+00:00
@@ -351,10 +351,11 @@
        SourceIR.ATEN,
        name,
        args[0],
    )

+
@dynamo_tensorrt_converter(torch.ops.aten.neg.default)
def aten_ops_neg(
    network: TRTNetwork,
    target: Target,
    args: Tuple[Argument, ...],
@@ -372,10 +373,11 @@
        target,
        SourceIR.ATEN,
        name,
        input_val,
    )
+

@dynamo_tensorrt_converter(torch.ops.aten.squeeze.dim)  # type: ignore[misc]
@dynamo_tensorrt_converter(torch.ops.aten.squeeze.dims)  # type: ignore[misc]
def aten_ops_squeeze(
    network: TRTNetwork,

@github-actions github-actions bot left a comment

Code conforms to C++ style guidelines

@github-actions github-actions bot left a comment

Code conforms to Python style guidelines

@gs-olive gs-olive (Collaborator) left a comment

Looks good to me! Passes TorchScript CI (verified locally)

@narendasan narendasan (Collaborator) left a comment

LGTM

@gs-olive gs-olive merged commit 1fec519 into main Sep 7, 2023
15 checks passed
Labels
cla signed component: api [Python] Issues re: Python API component: conversion Issues re: Conversion stage component: converters Issues re: Specific op converters component: dynamo Issues relating to the `torch.compile` or `torch._dynamo.export` paths component: tests Issues re: Tests