
[Relay][Pytorch] Add support for aten::linalg_vector_norm #16123

Merged: 6 commits, Nov 26, 2023
python/tvm/relay/frontend/pytorch.py (27 additions, 0 deletions)

@@ -3844,6 +3844,32 @@ def inplace_copy(self, inputs, input_types):
        # Return
        return _op.scatter_nd(source, indices, values, mode)

+    def linalg_vector_norm(self, inputs, input_types):
+        data = inputs[0]
Contributor (review comment):

It looks like this method is based on torch.linalg.vector_norm. The latter assumes that the input data type is float, double, or complex; the dtype argument should also be real or complex. Would you check it?

Contributor Author (@mshr-h, Nov 14, 2023):

Thanks for your review. That's true. It's based on torch.linalg.vector_norm, which supports float, double, and complex dtypes as input. It seems the frontend's convert_pt_to_tvm_type doesn't support the complex data type, though.

Contributor:

Hello @mshr-h! My idea was to add something like `assert data.dtype == float or data.dtype == double`, and maybe a TODO for supporting complex values later, but I do not think that is needed just now.

Contributor Author (@mshr-h):

Hi @vvchernov! I've added the assertion and test cases for double-precision input data.
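As context for the dtype discussion, here is a quick sanity check of what torch.linalg.vector_norm itself accepts (a standalone sketch assuming a recent PyTorch where the op is available, not part of the PR): the op takes float, double, and complex inputs and rejects integers, while the converter below narrows this further to float32/float64.

```python
import torch

x = torch.tensor([3.0, -4.0])
print(torch.linalg.vector_norm(x, ord=2))           # tensor(5.), float32 input
print(torch.linalg.vector_norm(x.double(), ord=2))  # double is accepted too

# Complex inputs are valid for the PyTorch op itself...
print(torch.linalg.vector_norm(torch.tensor([3.0 + 4.0j])))  # tensor(5.)

# ...but integer inputs are rejected with a RuntimeError.
try:
    torch.linalg.vector_norm(torch.tensor([3, 4]))
except RuntimeError as err:
    print("integer input rejected:", err)
```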

+        dtype = input_types[0]
+        ord = inputs[1]
+        dim = inputs[2]
+        keepdim = inputs[3]
+
+        assert dtype == "float32" or dtype == "float64"
+
+        if ord == 0:
+            return _op.reduce.sum(
+                _op.cast(_op.not_equal(data, _expr.const(0, dtype=dtype)), dtype=dtype),
+                axis=dim,
+                keepdims=keepdim,
+            )
+        elif ord == np.inf:
+            return _op.reduce.max(_op.abs(data), axis=dim, keepdims=keepdim)
+        elif ord == np.NINF:
+            return _op.reduce.min(_op.abs(data), axis=dim, keepdims=keepdim)
+        reci_ord = _expr.const(1.0 / ord, dtype=dtype)
+        ord = _expr.const(ord, dtype=dtype)
+        return _op.power(
+            _op.reduce.sum(_op.power(_op.abs(data), ord), axis=dim, keepdims=keepdim),
+            reci_ord,
+        )

    # Operator mappings
    def create_convert_map(self):
        self.convert_map = {
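To make the lowering above easier to follow, here is a minimal NumPy model of the same branch logic (vector_norm_ref is a hypothetical helper introduced purely for illustration, not code from the PR):

```python
import numpy as np

def vector_norm_ref(data, ord, dim=None, keepdim=False):
    """NumPy mirror of the converter's four branches."""
    if ord == 0:
        # ord=0 counts non-zero elements (not a true norm).
        return (data != 0).astype(data.dtype).sum(axis=dim, keepdims=keepdim)
    if ord == np.inf:
        # L-infinity norm: largest absolute value.
        return np.abs(data).max(axis=dim, keepdims=keepdim)
    if ord == -np.inf:
        # Negative-infinity "norm": smallest absolute value.
        return np.abs(data).min(axis=dim, keepdims=keepdim)
    # General case: (sum(|x| ** ord)) ** (1 / ord).
    return (np.abs(data) ** ord).sum(axis=dim, keepdims=keepdim) ** (1.0 / ord)

x = np.array([[1.0, -2.0], [0.0, 3.0]])
print(vector_norm_ref(x, 2))       # sqrt(1 + 4 + 9) ~= 3.7417
print(vector_norm_ref(x, 0))       # 3.0, three non-zero entries
print(vector_norm_ref(x, np.inf))  # 3.0
```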
@@ -4118,6 +4144,7 @@ def create_convert_map(self):
"aten::_weight_norm": self.weight_norm,
"aten::copy_": self.inplace_copy,
"aten::swapaxes": self.transpose,
"aten::linalg_vector_norm": self.linalg_vector_norm,
}

    def update_convert_map(self, custom_map):
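As a usage sketch, once the mapping is registered the op imports end to end through the frontend (the class and the input name "x" are illustrative; this assumes a TVM build that includes this change):

```python
import torch
from tvm import relay

class VectorNorm(torch.nn.Module):
    def forward(self, x):
        return torch.linalg.vector_norm(x, ord=3.5)

inp = torch.rand(3, 3)
traced = torch.jit.trace(VectorNorm().eval(), inp)

# from_pytorch takes the traced module plus (name, shape) pairs for its inputs.
mod, params = relay.frontend.from_pytorch(traced, [("x", list(inp.shape))])
print(mod)  # aten::linalg_vector_norm now lowers to abs/power/sum Relay ops
```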
tests/python/frontend/pytorch/test_forward.py (25 additions, 2 deletions)

@@ -1780,7 +1780,6 @@ def forward(self, *args):
    verify_model(LogSoftmax1().float().eval(), input_data=input_data)


-@pytest.mark.skip(reason="unsupported op aten::linalg_vector_norm")
@tvm.testing.uses_gpu
def test_forward_norm():
    """test_forward_norm"""
@@ -1840,7 +1839,6 @@ def forward(self, *args):
    verify_model(Norm10().float().eval(), input_data=input_data)


-@pytest.mark.skip(reason="unsupported op aten::linalg_vector_norm")
@tvm.testing.uses_gpu
def test_forward_frobenius_norm():
    """test_forward_frobenius_norm"""
@@ -5432,6 +5430,31 @@ def forward(self, *args):
    verify_model(Swapaxes3().float().eval(), input_data=input_data)


+def test_linalg_vector_norm():
+    """test_linalg_vector_norm"""
+    torch.set_grad_enabled(False)
+
+    def test_fn(order):
+        return lambda x: torch.linalg.vector_norm(x, ord=order)
+
+    input_shape = [3, 3]
+
+    input_data = torch.rand(input_shape).float()
+    verify_model(test_fn(order=2), input_data=input_data)
+    verify_model(test_fn(order=3.5), input_data=input_data)
+    verify_model(test_fn(order=np.inf), input_data=input_data)
+    verify_model(test_fn(order=np.NINF), input_data=input_data)
+    verify_model(test_fn(order=0), input_data=input_data)
+
+    # Also test on double
+    input_data = torch.rand(input_shape).double()
+    verify_model(test_fn(order=2), input_data=input_data)
+    verify_model(test_fn(order=3.5), input_data=input_data)
+    verify_model(test_fn(order=np.inf), input_data=input_data)
+    verify_model(test_fn(order=np.NINF), input_data=input_data)
+    verify_model(test_fn(order=0), input_data=input_data)


class TestSetSpan:
    """test structural equal between translated / hand-crafted relay IR with span tagged."""
