
Add torchscriptable adjust_gamma transform #2459

Merged
Merged: 3 commits into pytorch:master on Jul 17, 2020

Conversation

@nairbv (Contributor) commented Jul 10, 2020

Add scriptable transform for adjust_gamma. Related to: #1375 #2292

BC-breaking: off-by-one relative to the PIL-based version, due to a change in the conversion between float and uint8 image representations.
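The conversion change can be illustrated with a small standalone sketch (my own reproduction, not code from the PR; the `float_to_uint8` helper is hypothetical and assumed to mirror the `mul(max + 1 - eps)` scaling introduced below):

```python
import torch

# Hypothetical helper mirroring the float -> uint8 conversion in this PR:
# scale by iinfo(uint8).max + 1 - eps, then truncate via the dtype cast.
def float_to_uint8(image: torch.Tensor) -> torch.Tensor:
    eps = 1e-3
    return image.mul(torch.iinfo(torch.uint8).max + 1 - eps).to(torch.uint8)

img = torch.tensor([0.0, 0.5, 1.0])
print(float_to_uint8(img))  # -> [0, 127, 255] as uint8
```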

@@ -161,8 +161,8 @@ def convert_image_dtype(image: torch.Tensor, dtype: torch.dtype = torch.float) -
        msg = f"The cast from {image.dtype} to {dtype} cannot be performed safely."
        raise RuntimeError(msg)

    eps = 1e-3
    return image.mul(torch.iinfo(dtype).max + 1 - eps).to(dtype)
Contributor Author (@nairbv):

this was giving results that didn't match PIL

Member (@fmassa):

Hum, we had quite a lot of discussion about this behavior in #2078 (comment). I believe if we make the multiplication go to dtype.max, we will end up with a non-uniform distribution over the last values.

@pmeier thoughts?
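The non-uniformity concern is easy to demonstrate with a short sketch (my own illustration, not code from the thread): scaling by `dtype.max` before truncation gives the top value a near-zero-width bucket, while `max + 1 - eps` gives it a bucket the same width as the others.

```python
import torch

# Compare the two scalings on a uniform grid of floats in [0, 1].
x = torch.linspace(0, 1, steps=1000)

by_max = x.mul(255).to(torch.uint8)             # scale by dtype.max
by_max_eps = x.mul(256 - 1e-3).to(torch.uint8)  # scale by max + 1 - eps

# With mul(255) + truncation, only x == 1.0 lands in bucket 255,
# so the top value is far rarer than every other value.
print((by_max == 255).sum().item())      # 1
print((by_max_eps == 255).sum().item())  # 4 (comparable to the other buckets)
```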

Collaborator (@pmeier):

@fmassa is right about the intention. I think this boils down to:

  • do we want it "right", or
  • do we want it compatible with other packages.

I'm in favor of the former (hence my implementation), but I can see why the latter is also reasonable.

Contributor Author (@nairbv):

I think my issue was internal consistency, not a difference between us and PIL. That comment thread is helpful -- we certainly expect more than one floating point value to map to 255.

I was able to fix it by just making the gamma adjustment consistent with convert_image_dtype.

This raises another issue, though: I'm not sure it's still OK for the docs to express the adjust_gamma equation as 255 * gain * (img / 255) ** gamma when in reality the factor is 255.999 * gain. I want to be accurate but also don't want to be unnecessarily confusing, and the doc does say "based on."
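Concretely, the consistent uint8 path looks roughly like this (a sketch under my reading of the thread; `adjust_gamma_uint8` and the `256 - eps` constant are illustrative, not the PR's exact code):

```python
import torch

# Illustrative uint8 gamma adjustment using the same max + 1 - eps scale
# factor as convert_image_dtype, so the two functions stay consistent.
def adjust_gamma_uint8(img: torch.Tensor, gamma: float, gain: float = 1.0) -> torch.Tensor:
    eps = 1e-3
    scale = 256 - eps  # the "255.999" factor, matching convert_image_dtype
    result = scale * gain * (img.float() / 255.0) ** gamma
    return result.clamp(0, 255).to(torch.uint8)

img = torch.tensor([0, 64, 128, 255], dtype=torch.uint8)
print(adjust_gamma_uint8(img, gamma=2.0))  # -> [0, 16, 64, 255] as uint8
```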

@nairbv nairbv requested a review from fmassa July 10, 2020 20:50
@nairbv nairbv force-pushed the adjust_gamma branch 2 times, most recently from cc11df7 to 5f702bf on July 10, 2020 21:04
@vfdev-5 vfdev-5 mentioned this pull request Jul 15, 2020
@fmassa (Member) left a comment:


The PR looks great, thanks a lot Brian!

ONNX failures are unrelated.

I just wanted to get @pmeier thoughts on the change regarding convert_image_dtype before merging this, so that we converge on the desired behavior of the function.


    if torch.is_floating_point(img):
        return gain * result ** gamma

    result = 255.0 * gain * (result / 255.0) ** gamma
Member:

For the future: we should support other integer types as well. This could probably be implemented via

convert_image_dtype(..., float)
result = torch.clamp(gain * result ** gamma, 0, 1)
convert_image_dtype(..., dtype)
Required by mypy, even though technically incorrect due to the possible Image parameter; torchscript doesn't support a union-based type hint.

Co-authored-by: vfdev <vfdev.5@gmail.com>
@nairbv nairbv merged commit d481f2d into pytorch:master Jul 17, 2020
4 participants