
#6361: Update ttnn repeat to use correct shapes when formatting output #6526

Merged
merged 1 commit into main from aho/repeat on Mar 22, 2024

Conversation

tt-aho
Contributor

tt-aho commented Mar 19, 2024

@arakhmati I'm not too familiar with everything that is expected/set up with ttnn, but for the tt_lib version of the op the only restriction is that the byte size of the last dim must be aligned if we are repeating on the last dim.

Is it correct for me to remove the dtype restriction if the tt_lib version supports any dtype? I'm not sure how/if we can make ttnn fall back only for the specific alignment case. Another potential issue with removing the restrictions is that ttnn does the pad/reshape to tile afterwards, so I'm not sure whether that is affected.
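
A minimal sketch of the alignment condition described above; the ALIGNMENT_BYTES constant and the helper are hypothetical, for illustration only, and are not the actual tt_lib check:

# Hypothetical illustration of the last-dim byte-alignment restriction.
ALIGNMENT_BYTES = 32  # illustrative assumption, not the real tt_lib constant

def last_dim_is_aligned(shape, element_size_bytes):
    # The byte size of the last dim must be a multiple of the alignment
    # when repeating along the last dim; otherwise the op would need a fallback.
    return (shape[-1] * element_size_bytes) % ALIGNMENT_BYTES == 0

# e.g. a [1, 1, 32, 16] bfloat16 tensor: 16 elements * 2 bytes = 32 bytes
print(last_dim_is_aligned([1, 1, 32, 16], 2))  # True under the assumed alignment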

@kpaigwar
Contributor

kpaigwar commented Mar 19, 2024

@tt-aho I tested with your fix for repeat; it worked for cases where the tensor doesn't have any padding. Is it possible to make the ttnn.repeat op smarter so it ignores the padding? I think ttnn.repeat_interleave supports that.

Failing unit test

import ttnn
import torch
from tests.ttnn.utils_for_testing import assert_with_pcc


device = ttnn.open_device(device_id=0)
# Input is 16 wide in the last dim; in TILE layout this gets padded up to the tile width of 32.
torch_input_tensor = torch.randn((1, 1, 32, 16), dtype=torch.bfloat16)
# This tensor is only used for its shape, (1, 1, 1, 2048), which is passed as the repeat shape.
repeat_shape = torch.randn((1, 1, 1, 2048), dtype=torch.bfloat16)

input_tensor1 = ttnn.from_torch(repeat_shape, layout=ttnn.TILE_LAYOUT)
input_tensor1 = ttnn.to_device(input_tensor1, device)
torch_result = torch_input_tensor.repeat(repeat_shape.shape)

input_tensor = ttnn.from_torch(torch_input_tensor, layout=ttnn.TILE_LAYOUT, device=device)

output = ttnn.repeat(input_tensor, input_tensor1.shape)
output = ttnn.to_torch(output)

assert_with_pcc(torch_result, output, 0.9999)

ttnn.close_device(device)

Error

  File "/proj_sw/user_dev/kpaigwar/tt-metal/tests/ttnn/utils_for_testing.py", line 24, in assert_with_pcc                                                                                     │···················
    assert list(expected_pytorch_result.shape) == list(                                                                                                                                       │···················
AssertionError: list(expected_pytorch_result.shape)=[1, 1, 32, 32768] vs list(actual_pytorch_result.shape)=[1, 1, 32, 65520]  

@tt-aho
Contributor Author

tt-aho commented Mar 19, 2024

I have a fix for the ttnn version in PR #6526. Note that for your specific case with input shape [1, 1, 32, 16], you should create your input tensor in ROW_MAJOR (RM) layout and not TILE, or else your output will have padding interleaved inside the tensor.

I mentioned that you needed to create your input tensor as RM so that it would not have padding.
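
A minimal sketch of that suggestion, reusing the tensors from the failing test above; only the layout argument changes (ttnn.ROW_MAJOR_LAYOUT is the row-major layout constant):

# Create the input in row-major layout so the 16-wide last dim is not tile-padded.
input_tensor = ttnn.from_torch(torch_input_tensor, layout=ttnn.ROW_MAJOR_LAYOUT, device=device)
output = ttnn.repeat(input_tensor, input_tensor1.shape)
output = ttnn.to_torch(output)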

@kpaigwar
Contributor

> I have a fix for the ttnn version in PR #6526. Note that for your specific case with input shape [1, 1, 32, 16], you should create your input tensor in ROW_MAJOR (RM) layout and not TILE, or else your output will have padding interleaved inside the tensor.
>
> I mentioned that you needed to create your input tensor as RM so that it would not have padding.

Got it, thanks! I just wanted to add that ttnn does have a smart understanding of padding for a few ops, even in TILE layout. It would be good if users did not have to worry about padding.

@tt-aho
Contributor Author

tt-aho commented Mar 19, 2024

> > I have a fix for the ttnn version in PR #6526. Note that for your specific case with input shape [1, 1, 32, 16], you should create your input tensor in ROW_MAJOR (RM) layout and not TILE, or else your output will have padding interleaved inside the tensor.
> >
> > I mentioned that you needed to create your input tensor as RM so that it would not have padding.
>
> Got it, thanks! I just wanted to add that ttnn does have a smart understanding of padding for a few ops, even in TILE layout. It would be good if users did not have to worry about padding.

Yes, should be possible, I'll take a look.
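
Until the op itself is padding-aware, one possible user-level workaround is to round-trip through ROW_MAJOR layout around the repeat. This is only a sketch, assuming ttnn.to_layout drops the tile padding when converting to row-major; it is not the fix in this PR:

# Sketch: strip tile padding before repeating, then return to TILE layout.
# Assumes to_layout to ROW_MAJOR removes the padding added by TILE layout.
rm_tensor = ttnn.to_layout(input_tensor, ttnn.ROW_MAJOR_LAYOUT)
repeated = ttnn.repeat(rm_tensor, input_tensor1.shape)
output = ttnn.to_layout(repeated, ttnn.TILE_LAYOUT)
output = ttnn.to_torch(output)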

tt-aho merged commit 43aaab5 into main on Mar 22, 2024
4 checks passed
tt-aho deleted the aho/repeat branch on May 7, 2024