Is your feature request related to a problem? Please describe.
The ttnn.from_torch function is very convenient for converting a PyTorch tensor into a ttnn tensor. However, it currently lacks support for specifying a pad_value when working with TILE_LAYOUT tensors. Adding this support would greatly enhance testing of ttnn operations.
Currently, ttnn.from_torch defaults to padding with 0 when pad_value is not specified, which may lead to misleading results in computations. Using NaN as a pad_value would provide a more "natural error assertion" by immediately indicating when padding cells are mistakenly included in kernel computations.
For example, padding with NaN would result in NaN in the computed output if padding cells are unintentionally accessed, clearly signaling an issue. In contrast, padding with 0 may yield seemingly "correct" results, such as in tile summation, thus concealing potential errors.
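To illustrate the point with plain PyTorch (not ttnn), here is a minimal sketch that shrinks the tile to 4x4 for brevity: summing a zero-padded "tile" looks correct even though padding cells were included, while NaN padding immediately exposes the leak.

```python
import torch

# 2x2 of real data, padded up to a toy 4x4 "tile".
data = torch.ones(2, 2)

# Zero padding: the sum over the whole tile is 4.0, identical to the
# correct sum of the data, so accidentally summing padding goes unnoticed.
padded_zero = torch.zeros(4, 4)
padded_zero[:2, :2] = data
print(padded_zero.sum())  # tensor(4.)

# NaN padding: any reduction that touches a padding cell yields NaN,
# which acts as a "natural error assertion".
padded_nan = torch.full((4, 4), float("nan"))
padded_nan[:2, :2] = data
print(padded_nan.sum())  # tensor(nan)
```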
Describe the solution you'd like
I propose adding a pad_value parameter to ttnn.from_torch.
If pad_value is not specified (i.e., None), ttnn.from_torch would retain its current behavior, padding TILE_LAYOUT tensors with 0.
If a pad_value is specified and the layout is TILE_LAYOUT, ttnn.from_torch would apply tensor.pad_to_tile(pad_value) to pad the tensor accordingly.
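A rough sketch of the intended usage; the pad_value parameter shown here is the proposed addition and does not exist in ttnn.from_torch today:

```python
import torch
import ttnn

torch_input = torch.rand(3, 3, dtype=torch.bfloat16)

# Proposed: pad the 3x3 tensor up to the 32x32 tile with NaN instead of 0,
# so any kernel that reads padding cells produces NaN in its output.
ttnn_input = ttnn.from_torch(
    torch_input,
    layout=ttnn.TILE_LAYOUT,
    pad_value=float("nan"),  # proposed parameter; None would keep today's 0-padding
)
```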
Describe alternatives you've considered
N/A
Additional context
N/A