Is it possible to replace your TrilinearInterpolation with torch.nn.functional.grid_sample? #14

Closed
Dongshengjiang opened this issue Oct 10, 2020 · 7 comments

Comments

@Dongshengjiang

Is it possible to replace your TrilinearInterpolation with torch.nn.functional.grid_sample? If so, it would be more convenient.

@HuiZeng
Owner

HuiZeng commented Oct 10, 2020

Hi, this is an interesting idea.
But it doesn't seem that easy, since the interpolation details of the two differ.

@tuxa

tuxa commented Nov 10, 2020

Hi! grid_sample actually performs trilinear interpolation when a 5D input is passed and mode is set to 'bilinear'.

So, instead of using

_, result = trilinear_(LUT, img)

one can use:

# scale img to [-1, 1] since it is used as the sampling grid in grid_sample
img = (img - .5) * 2.

# grid_sample expects the grid as N x D x H x W x 3 (here 1 x 1 x H x W x 3)
img = img.permute(0, 2, 3, 1)[:, None]

# add a batch dim to the LUT: 1 x 3 x D x D x D
LUT = LUT[None]

# trilinear lookup via grid_sample
result = F.grid_sample(LUT, img, mode='bilinear', padding_mode='border', align_corners=True)

# drop the depth dim and move channels last: 1 x H x W x 3
result = result[:, :, 0].permute(0, 2, 3, 1)
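
For reference, here is a minimal, self-contained sketch of the above. The LUT resolution, the dummy LUT, and the dummy image are assumptions for illustration only, not code from this repo:

import torch
import torch.nn.functional as F

D = 33                               # assumed LUT resolution
LUT = torch.rand(3, D, D, D)         # dummy 3D LUT: 3 output channels over a D x D x D grid
img = torch.rand(1, 3, 256, 256)     # dummy image: N x 3 x H x W, values in [0, 1]

# scale img to [-1, 1] since it is used as the sampling grid
grid = (img - 0.5) * 2.0

# reshape the grid to N x D_out x H x W x 3 (here 1 x 1 x H x W x 3)
grid = grid.permute(0, 2, 3, 1)[:, None]

# trilinear lookup: mode='bilinear' on a 5D input is trilinear
out = F.grid_sample(LUT[None], grid, mode='bilinear', padding_mode='border', align_corners=True)

# out is 1 x 3 x 1 x H x W; drop the depth dim and move channels last
out = out[:, :, 0].permute(0, 2, 3, 1)
print(out.shape)                     # torch.Size([1, 256, 256, 3])

Note that grid_sample reads the last grid dimension as (x, y, z), i.e. the R channel indexes the LUT's innermost (W) axis, so whether this matches the custom op depends on how the LUT tensor is laid out.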

@coallar

coallar commented Aug 24, 2022

Since the grid_sample result above is already channels-last (H x W x 3 after squeeze), changing
ndarr = result.squeeze().mul_(255).add_(0.5).clamp_(0, 255).permute(1, 2, 0).to('cpu', torch.uint8).numpy()
to
ndarr = result.squeeze().mul_(255).add_(0.5).clamp_(0, 255).to('cpu', torch.uint8).numpy()
is necessary.
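
A short sketch of the adjusted saving step; PIL and the output path are assumptions for illustration:

from PIL import Image

# result is 1 x H x W x 3 (channels last), so no permute(1, 2, 0) is needed
ndarr = result.squeeze().mul_(255).add_(0.5).clamp_(0, 255).to('cpu', torch.uint8).numpy()
Image.fromarray(ndarr).save('output.png')   # 'output.png' is a placeholder path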

@ZedFu

ZedFu commented Apr 8, 2023

Has anyone ever used grid_sample instead of Trilinear Interpolation? Why is the result of using grid_sample worse?

@11923303233

(quoting @tuxa's grid_sample snippet above)

How do I set batch_size > 1? Is changing LUT = LUT[None] to LUT = LUT[None].repeat(bs, 1, 1, 1, 1) OK? (A possible sketch follows below.)
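
As a hedged sketch (not from the repo): grid_sample requires the input and the grid to share the same batch size, so repeating the LUT along the batch dimension as suggested should satisfy that; expand is a copy-free alternative. Variable names are the ones from the snippet above.

bs = img.shape[0]

# match the LUT's batch dim to the image batch; expand avoids copying,
# LUT[None].repeat(bs, 1, 1, 1, 1) has the same effect
LUT_b = LUT[None].expand(bs, -1, -1, -1, -1)             # bs x 3 x D x D x D

grid = ((img - .5) * 2.).permute(0, 2, 3, 1)[:, None]    # bs x 1 x H x W x 3
result = F.grid_sample(LUT_b, grid, mode='bilinear', padding_mode='border', align_corners=True)
result = result[:, :, 0].permute(0, 2, 3, 1)             # bs x H x W x 3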

@fedral

fedral commented May 22, 2024

(quoting @tuxa's grid_sample snippet above)

good trick!

@gganyy

gganyy commented May 29, 2024

(quoting @tuxa's grid_sample snippet and @HuiZeng's reply above)

Could you please add the _ext function to the code? It seems the code is missing it.
