
Allow Fortran-style memory layout #372

Closed
Markus-Goetz opened this issue Sep 6, 2019 · 1 comment · Fixed by #423

@Markus-Goetz (Member)

Feature functionality
Allow tensors to have a Fortran-style, i.e. column-first, data layout. PyTorch now supports this feature directly via a passable parameter. This issue requires working through the factory functions, creating the layout-conversion calls, and investigating if and where auto-conversion between C- and Fortran-style layouts needs to happen.
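For orientation, NumPy's analogous surface couples a factory keyword with explicit conversion calls; a minimal sketch (NumPy shown only as the model here, not the Heat API):

import numpy as np

a = np.zeros((3, 4), order='F')  # factory function with a layout keyword
a.flags['F_CONTIGUOUS']          # True: column-major
b = np.ascontiguousarray(a)      # explicit conversion back to C order
b.flags['C_CONTIGUOUS']          # True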

Markus-Goetz added the API and type system labels Sep 6, 2019
ClaudiaComito self-assigned this Oct 9, 2019
@ClaudiaComito (Contributor) commented Oct 28, 2019

@Markus-Goetz

Just checking that we are on the same page. For NumPy ndarrays, order='C' (the default) means rows first and order='F' means columns first. So what we're talking about is allowing a ht.tensor attribute order so that the memory layout of the tensor can be specified (or modified). E.g.:

a = ht.zeros((3,4,5), order='F')

would create a tensor with Fortran-style memory layout. We would then have a strides attribute to check the layout.

(Numpy details here)
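In NumPy terms, the strides make the layout directly visible; a small sketch (float64, so 8-byte elements):

import numpy as np

np.zeros((3, 4, 5), order='C').strides  # (160, 40, 8): last axis varies fastest
np.zeros((3, 4, 5), order='F').strides  # (8, 24, 96): first axis varies fastest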

I'm not sure which PyTorch parameter you're referring to. With PyTorch 1.2.0 we can specify the memory layout (they call it memory_format) when creating a torch tensor, but this is designed with 4d tensors in NCHW order - <batch, channels, height, width> - in mind. It's not what we want.
pytorch/pytorch#19092
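To illustrate (a sketch; torch.channels_last only exists in more recent PyTorch releases, and only applies to 4d tensors):

import torch

x = torch.empty(2, 3, 4, 5, memory_format=torch.channels_last)
x.stride()  # (60, 1, 15, 3): channels vary fastest, not a general column-major layout
x.is_contiguous(memory_format=torch.channels_last)  # True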

Then there's the torch.Tensor attribute layout, but the only two options are torch.strided (default) or torch.sparse_coo, so this isn't what we need either.
https://pytorch.org/docs/stable/tensor_attributes.html#torch-layout
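A quick check of that attribute (sketch):

import torch

torch.randn(2, 2).layout  # torch.strided
i = torch.tensor([[0, 1], [0, 1]])  # COO indices, shape (ndim, nnz)
v = torch.tensor([3.0, 4.0])
torch.sparse_coo_tensor(i, v, (2, 2)).layout  # torch.sparse_coo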

So far, the only way I've found within torch to get a column-major data layout is:

import torch

t = torch.randn(3, 5)
t = t.t().contiguous().t()  # transpose, force a contiguous copy, transpose back

or

t.set_(t.storage(), t.storage_offset(), t.size(), tuple(reversed(t.stride())))  # reinterpret the same storage with reversed strides

(from here)

ADDENDUM:
the torch option t = t.t().contiguous().t() only works with 2d tensors.
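For what it's worth, the strides confirm that the 2d trick yields column-major data (sketch):

import torch

t = torch.randn(3, 5)
t.stride()  # (5, 1): row-major
t = t.t().contiguous().t()
t.stride()  # (1, 3): column-major, shape still (3, 5)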
