
Porting to pytest #3996

Merged (19 commits) on Jun 10, 2021
Conversation

@tanvimoharir (Contributor) commented on Jun 7, 2021

Refers to #3987

Group A

These could be bundled into a single test_random() function, parametrized over func, method, fn_kwargs, and match_kwargs (see the sketch after this list):

  • test_random_horizontal_flip
  • test_random_vertical_flip
  • test_random_invert
  • test_random_posterize
  • test_random_solarize
  • test_random_adjust_sharpness
  • test_random_autocontrast
  • test_random_equalize
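A minimal sketch of that bundling, assuming a _test_op helper along the lines of the one in test_transforms_tensor.py; the fn_kwargs values shown are illustrative, not the PR's exact table:

    import pytest
    import torch
    import torchvision.transforms as T
    from torchvision.transforms import functional as F

    @pytest.mark.parametrize('func, method, fn_kwargs, match_kwargs', [
        (F.hflip, T.RandomHorizontalFlip, None, {}),
        (F.vflip, T.RandomVerticalFlip, None, {}),
        (F.invert, T.RandomInvert, None, {}),
        (F.posterize, T.RandomPosterize, {'bits': 4}, {}),
        (F.solarize, T.RandomSolarize, {'threshold': 192.0}, {}),
        (F.adjust_sharpness, T.RandomAdjustSharpness, {'sharpness_factor': 2.0}, {}),
        (F.autocontrast, T.RandomAutocontrast, None, {}),
        (F.equalize, T.RandomEqualize, None, {}),
    ])
    def test_random(func, method, fn_kwargs, match_kwargs):
        # _test_op (the suite's existing helper) compares the functional op
        # against the scripted transform class; a device parametrization via
        # cpu_and_gpu() would be stacked on top, per the review below.
        _test_op(func, method, fn_kwargs=fn_kwargs, meth_kwargs=fn_kwargs, **match_kwargs)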

Group F

  • test_random_erasing -- maybe split this one into different functions, and parametrize over test_configs
  • test_convert_image_dtype -- parametrize over all loop variables and convert the continue and the assertRaises into a pytest.xfail (see the sketch after this list)
  • test_autoaugment -- parametrize over policy and fill
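A hedged sketch of the test_convert_image_dtype conversion above; int_dtypes()/float_dtypes() are simplified stand-ins for the suite's helpers, and the overflow condition mirrors the cases the old loop skipped:

    import pytest
    import torch
    import torchvision.transforms as T

    def int_dtypes():
        # simplified stand-in for the suite's helper
        return [torch.uint8, torch.int8, torch.int16, torch.int32, torch.int64]

    def float_dtypes():
        return [torch.float32, torch.float64]

    @pytest.mark.parametrize('in_dtype', int_dtypes() + float_dtypes())
    @pytest.mark.parametrize('out_dtype', int_dtypes() + float_dtypes())
    def test_convert_image_dtype(in_dtype, out_dtype):
        if (in_dtype == torch.float32 and out_dtype in (torch.int32, torch.int64)) or \
                (in_dtype == torch.float64 and out_dtype == torch.int64):
            # the old loop hit a `continue` after an assertRaises here; under
            # pytest this becomes an explicit expected failure instead
            pytest.xfail('float -> int32/int64 conversion overflows')
        in_tensor = torch.rand(3, 16, 16).to(in_dtype)
        out = T.ConvertImageDtype(out_dtype)(in_tensor)
        assert out.dtype == out_dtype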

@facebook-github-bot commented

Hi @tanvimoharir!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks!

@facebook-github-bot commented

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Facebook open source project. Thanks!

@NicolasHug (Member) commented

Thanks for the PR @tanvimoharir! I see this is still a draft; let me know if you need any help, and when this is ready for a review! Thanks!

@tanvimoharir (Contributor, Author) commented on Jun 9, 2021

> Thanks for the PR @tanvimoharir! I see this is still a draft; let me know if you need any help, and when this is ready for a review! Thanks!

@NicolasHug thanks, and sorry it's taking me a few tries. I'm actually a bit stuck on how to parametrize over T.AutoAugmentPolicy (which is an enum) from https://github.com/tanvimoharir/vision/blob/port-test-transform-to-pytest/test/test_transforms_tensor.py#L656
I tried passing a list (list(T.AutoAugmentPolicy)) but I think mypy complains there (same for T.AutoAugmentPolicy.__members__.items() or T.AutoAugmentPolicy.__members__.keys()).
I'm trying to find a way of doing this.
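For reference, one pattern that typically satisfies mypy is to build the argument list with a comprehension over the enum, so parametrize receives an explicit list of members (a sketch, not necessarily what the PR settled on; _test_transform_vs_scripted is the suite's existing helper, and the fill values are illustrative):

    import pytest
    import torch
    import torchvision.transforms as T

    @pytest.mark.parametrize('policy', [policy for policy in T.AutoAugmentPolicy])
    @pytest.mark.parametrize('fill', [None, 85, (128, 128, 128)])
    def test_autoaugment(policy, fill):
        tensor = torch.randint(0, 256, size=(3, 44, 56), dtype=torch.uint8)
        transform = T.AutoAugment(policy=policy, fill=fill)
        s_transform = torch.jit.script(transform)
        # compare eager vs scripted over several random draws
        for _ in range(10):
            _test_transform_vs_scripted(transform, s_transform, tensor)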

@tanvimoharir marked this pull request as ready for review on June 9, 2021, 19:19
@NicolasHug (Member) left a comment

Thanks @tanvimoharir

I made a few comments but this looks good. In particular I think we should remove the use of cpu_only everywhere and instead use the cpu_and_gpu() parametrization.

Let me know if you need any help!

        s_transform.save(os.path.join(tmp_dir, "t_autoaugment.pt"))

@cpu_only
@pytest.mark.parametrize(
    'func,method,device,fn_kwargs,match_kwargs', [
@NicolasHug (Member):

Suggested change:

-    'func,method,device,fn_kwargs,match_kwargs', [
+    'func, method, device, fn_kwargs, match_kwargs', [

    if s_transform is not None:
        with get_tmp_dir() as tmp_dir:
            s_transform.save(os.path.join(tmp_dir, "t_autoaugment.pt"))

@cpu_only
@NicolasHug (Member):

This test is actually called twice: with self.device = 'cpu', and with self.device = 'cuda' from the CUDATester class.

So instead of using the @cpu_only decorator, we should parametrize with a new

@pytest.mark.parametrize('device', cpu_and_gpu())

and replace self.device with device (a sketch follows below).

You'll need to remove device from the parametrization below as well :)
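A minimal sketch of that pattern; cpu_and_gpu() below is a simplified stand-in for the suite's helper in common_utils, and the test body is elided:

    import pytest
    import torch

    def cpu_and_gpu():
        # simplified stand-in: 'cpu' always, 'cuda' when a device is available
        return ('cpu',) + (('cuda',) if torch.cuda.is_available() else ())

    @pytest.mark.parametrize('device', cpu_and_gpu())
    def test_autoaugment(device):
        tensor = torch.randint(0, 256, size=(3, 44, 56), dtype=torch.uint8, device=device)
        ...  # body unchanged, with every former self.device replaced by device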

@@ -481,7 +383,7 @@ def test_resized_crop_save(self):


 @unittest.skipIf(not torch.cuda.is_available(), reason="Skip if no CUDA device")
-class CUDATester(Tester):
+class CUDATester(unittest.TestCase):
@NicolasHug (Member):

we should be able to remove this class now

@@ -608,6 +510,86 @@ def test_to_grayscale(device, Klass, meth_kwargs):
)


@cpu_only
@NicolasHug (Member):

here as well we should parametrize with cpu_and_gpu() instead.

@NicolasHug (Member):

In fact we should do it in all of the other tests here that were using self.device

@tanvimoharir (Contributor, Author) commented on Jun 10, 2021:

So self.device had a value of 'cpu', which is why I thought of using cpu_only() instead of cpu_and_gpu().

@NicolasHug (Member):

Yes, but as I mentioned above, all these tests were called twice: once as part of Tester with 'cpu', and once as part of CUDATester with 'cuda'. That's why we need to parametrize over 'device' with cpu_and_gpu() now :)

@tanvimoharir (Contributor, Author):

Okay, thanks for clarifying :)

Comment on lines 515 to 519
@pytest.mark.parametrize(
    'in_dtype,out_dtype', [
        (int_dtypes() + float_dtypes(), int_dtypes() + float_dtypes())
    ]
)
@NicolasHug (Member):

There are 2 nested for loops here, so we should not parametrize over tuples; instead we should have 2 separate parametrizations to get a cross-product:

@pytest.mark.parametrize('in_dtype', int_dtypes() + float_dtypes())
@pytest.mark.parametrize('out_dtype', int_dtypes() + float_dtypes())
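With the two decorators stacked, pytest collects one test per (in_dtype, out_dtype) pair, i.e. the full cross-product, instead of a single run that receives two large tuples.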

@@ -608,6 +510,86 @@ def test_to_grayscale(device, Klass, meth_kwargs):
)


@cpu_only
@pytest.mark.xfail()
@NicolasHug (Member):

Why mark xfail? This should probably be removed.

Comment on lines +540 to +541
    with get_tmp_dir() as tmp_dir:
        scripted_fn.save(os.path.join(tmp_dir, "t_convert_dtype.pt"))
@NicolasHug (Member):

This can take a bit of time, especially when the test is heavily parametrized. Here and in the rest of the tests, let's extract the saving part into separate tests. Here we could name it test_convert_image_dtype_save() (see the sketch below).
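A minimal sketch of such an extracted, unparametrized save test; tempfile stands in for the suite's get_tmp_dir() helper, and the dtype passed to ConvertImageDtype is illustrative:

    import os
    import tempfile

    import torch
    import torchvision.transforms as T

    def test_convert_image_dtype_save():
        fn = T.ConvertImageDtype(torch.uint8)
        scripted_fn = torch.jit.script(fn)
        # script and save once, rather than once per dtype combination
        with tempfile.TemporaryDirectory() as tmp_dir:
            scripted_fn.save(os.path.join(tmp_dir, "t_convert_dtype.pt"))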

    _test_transform_vs_scripted(transform, s_transform, tensor)
    _test_transform_vs_scripted_on_batch(transform, s_transform, batch_tensors)

    if s_transform is not None:
@NicolasHug (Member):

Let's extract this out into another test without parametrization as well.

Comment on lines +581 to +582
    with get_tmp_dir() as tmp_dir:
        scripted_fn.save(os.path.join(tmp_dir, "t_random_erasing.pt"))
@NicolasHug (Member):

here as well

@NicolasHug NicolasHug merged commit 13ed657 into pytorch:master Jun 10, 2021
@github-actions commented

Hey @NicolasHug!

You approved or merged this PR, but no labels were added.

@NicolasHug (Member) commented

Thanks a lot @tanvimoharir! I just took care of the merge conflicts, and I removed some parametrization on the _save() tests.

@tanvimoharir (Contributor, Author) commented

> Thanks a lot @tanvimoharir! I just took care of the merge conflicts, and I removed some parametrization on the _save() tests.

Thank you for helping me with this. 👍

@tanvimoharir deleted the port-test-transform-to-pytest branch on June 10, 2021, 15:53
facebook-github-bot pushed a commit that referenced this pull request on Jun 14, 2021:
Reviewed By: fmassa

Differential Revision: D29097733

fbshipit-source-id: 6ab7e5bb7c1d21e3aba922bb52659aab65e5abdf