test_tinshift in tests/test_ops/test_tin_shift.py fails on CUDA: tin_shift returns all zeros
Checklist
Environment
Build: full mmcv version, built with the pytorch:pytorch image
Image version: ubuntu_1804_py_37_cuda_101_cudnn_7_torch_160_dev
Hardware: V100
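For completeness, the same environment details can be captured with mmcv's own environment collector; a minimal sketch, assuming an mmcv 1.x install where mmcv.utils exposes collect_env():

```python
# Sketch: print the build environment (PyTorch/CUDA versions, compiler,
# whether the CUDA ops were compiled). Assumes mmcv 1.x.
from mmcv.utils import collect_env

for name, value in collect_env().items():
    print(f'{name}: {value}')
```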
____________________________ test_tinshift[dtype0] _____________________________

dtype = torch.float32

    @pytest.mark.skipif(
        not torch.cuda.is_available(), reason='requires CUDA support')
    @pytest.mark.parametrize('dtype', [torch.float, torch.double, torch.half])
    def test_tinshift(dtype):
>       _test_tinshift_allclose(dtype=dtype)

tests/test_ops/test_tin_shift.py:105:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

dtype = torch.float32

    def _test_tinshift_allclose(dtype):
        try:
            from mmcv.ops import tin_shift
        except ModuleNotFoundError:
            pytest.skip('TinShift op is not successfully compiled')

        for shift, output, grad in zip(shifts, outputs, grads):
            np_input = np.array(inputs)
            np_shift = np.array(shift)
            np_output = np.array(output)
            np_grad = np.array(grad)

            x = torch.tensor(
                np_input, dtype=dtype, device='cuda', requires_grad=True)
            shift = torch.tensor(np_shift, device='cuda').int()

            output = tin_shift(x, shift)
            output.backward(torch.ones_like(output))
>           assert np.allclose(
                output.data.type(torch.float).cpu().numpy(), np_output, 1e-3)
E           AssertionError: assert False
E            +  where False = np.allclose(<all-zero tin_shift output>,
E                   array([[[[ 0.4369, -3.7571],
E                            [-1.1835, -1.6374],
E                            [ 0.9534, -0.1321]],
E                           [[-0.4658,  0.2162], ...]]]), 0.001)

tests/test_ops/test_tin_shift.py:95: AssertionError

____________________________ test_tinshift[dtype1] _____________________________

dtype = torch.float64

(Same assertion failure at tests/test_ops/test_tin_shift.py:95: tin_shift
returns an all-zero tensor instead of the expected output.)

____________________________ test_tinshift[dtype2] _____________________________

dtype = torch.float16

(Same assertion failure at tests/test_ops/test_tin_shift.py:95: tin_shift
returns an all-zero tensor instead of the expected output.)
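For anyone triaging, here is a minimal standalone sketch of the failing call pattern. The tensor shapes are an assumption (input as (N, T, C, H*W) and shift as (N, groups), following the TIN shift convention), not values taken from the test data:

```python
# Hedged repro sketch: shapes are assumed, not copied from the test.
import torch
from mmcv.ops import tin_shift

x = torch.randn(1, 4, 2, 3, device='cuda', requires_grad=True)  # (N, T, C, HW)
shift = torch.tensor([[1, -1]], device='cuda').int()            # (N, groups)

out = tin_shift(x, shift)
out.backward(torch.ones_like(out))
# On the affected build the output is all zeros; a correct build should
# produce shifted copies of the input, so this sum should be non-zero.
print(out.abs().sum().item())
```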
Hi, thanks for the report. There are some problems with the current test data; we will update it soon.
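In case it helps with regenerating the expected tensors, here is a rough NumPy reference of the temporal shift: each channel group is moved along the T axis with zero padding. The sign convention and group layout are assumptions and would need to be checked against the CUDA kernel:

```python
import numpy as np

def tin_shift_ref(x, shift):
    """Assumed reference: x is (N, T, C, HW); shift is (N, G); each channel
    group of size C // G is moved shift[n, g] steps along T, zero-padded."""
    n, t, c, hw = x.shape
    g = shift.shape[1]
    gs = c // g
    out = np.zeros_like(x)
    for ni in range(n):
        for gi in range(g):
            s = int(shift[ni, gi])
            # out[dst] = x[dst - s]; clamp source/destination ranges to [0, t)
            src_lo, src_hi = max(0, -s), min(t, t - s)
            dst_lo, dst_hi = max(0, s), min(t, t + s)
            if src_lo < src_hi:
                out[ni, dst_lo:dst_hi, gi * gs:(gi + 1) * gs] = \
                    x[ni, src_lo:src_hi, gi * gs:(gi + 1) * gs]
    return out
```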
Closed by #1426.