
[Feature Request] Support pad_value for ttnn.from_torch #14379

Open
mrshaw01 opened this issue Oct 28, 2024 · 0 comments
Labels: feature-request (External feature request), moreh (moreh contribution)

@mrshaw01 (Contributor)

Is your feature request related to a problem? Please describe.
The ttnn.from_torch function is very convenient for converting a PyTorch tensor into a ttnn tensor. However, it currently lacks support for specifying a pad_value when working with TILE_LAYOUT tensors. Adding this support would greatly enhance testing of ttnn operations.

Currently, ttnn.from_torch always pads TILE_LAYOUT tensors with 0, which can produce misleading results in computations. Allowing NaN as a pad_value would provide a more natural error assertion by immediately revealing when padding cells are mistakenly included in kernel computations.

For example, padding with NaN would result in NaN in the computed output if padding cells are unintentionally accessed, clearly signaling an issue. In contrast, padding with 0 may yield seemingly "correct" results, such as in tile summation, thus concealing potential errors.
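A minimal pure-PyTorch sketch of this concealment effect, assuming a 32×32 tile and a faulty reduction that covers the whole padded tile (the shapes and values are purely illustrative):

```python
import torch

# A 1x1 logical tensor that must be padded up to a full 32x32 tile.
logical = torch.full((1, 1), 5.0)

# Zero padding: a reduction that wrongly covers the whole tile still
# returns 5.0, so the out-of-bounds access goes unnoticed.
padded_zero = torch.nn.functional.pad(logical, (0, 31, 0, 31), value=0.0)
print(padded_zero.sum())  # tensor(5.) -- looks "correct"

# NaN padding: the same faulty reduction propagates NaN,
# making the bug immediately visible.
padded_nan = torch.nn.functional.pad(logical, (0, 31, 0, 31), value=float("nan"))
print(padded_nan.sum())  # tensor(nan) -- error is surfaced
```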

Describe the solution you'd like
I propose adding a pad_value parameter to ttnn.from_torch:

  • If pad_value is not specified (i.e., None), ttnn.from_torch would retain its current behavior, padding TILE_LAYOUT tensors with 0.
  • If a pad_value is specified and the layout is TILE_LAYOUT, ttnn.from_torch would apply tensor.pad_to_tile(pad_value) to pad the tensor accordingly (see the sketch below).
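
A minimal sketch of the proposed dispatch, written as a standalone wrapper rather than the actual implementation: pad_to_tile is the existing tensor method named in this proposal, while the wrapper name and the exact conversion path (row-major conversion, then padding, then tilizing via ttnn.to_layout) are illustrative assumptions.

```python
import ttnn

# Hypothetical wrapper illustrating the proposed behavior; not the
# actual ttnn.from_torch implementation.
def from_torch_with_pad_value(torch_tensor, *, layout=ttnn.ROW_MAJOR_LAYOUT,
                              pad_value=None, **kwargs):
    if pad_value is None or layout != ttnn.TILE_LAYOUT:
        # Current behavior is retained: TILE_LAYOUT tensors are padded with 0.
        return ttnn.from_torch(torch_tensor, layout=layout, **kwargs)

    # Proposed behavior: convert in row-major order first, pad to tile
    # boundaries with the requested value, then tilize.
    tensor = ttnn.from_torch(torch_tensor, layout=ttnn.ROW_MAJOR_LAYOUT, **kwargs)
    tensor = tensor.pad_to_tile(pad_value)  # method named in this proposal
    return ttnn.to_layout(tensor, ttnn.TILE_LAYOUT)
```

With such a parameter in place, a test could request NaN padding directly, e.g. ttnn.from_torch(x, layout=ttnn.TILE_LAYOUT, pad_value=float("nan")).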

Describe alternatives you've considered
N/A

Additional context
N/A
