Porting to pytest #3996
Conversation
Hi @tanvimoharir! Thank you for your pull request and welcome to our community.

Action Required: In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process: In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA. Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks!

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Facebook open source project. Thanks!
Thanks for the PR @tanvimoharir! I see this is still a draft; let me know if you need any help, and when this is ready for a review. Thanks!

@NicolasHug thanks, and sorry it's taking me a few tries. I'm actually a bit stuck on how to parametrize `T.AutoAugmentPolicy` (which is an enum) from https://github.com/tanvimoharir/vision/blob/port-test-transform-to-pytest/test/test_transforms_tensor.py#L656
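One way to parametrize over an enum is to pass its members directly to `pytest.mark.parametrize`, since iterating an `Enum` yields every member. A minimal sketch, using a stand-in class with the same members as torchvision's `T.AutoAugmentPolicy` rather than the real import:

```python
import enum

import pytest


class AutoAugmentPolicy(enum.Enum):
    # Stand-in for torchvision's T.AutoAugmentPolicy, which has these members.
    IMAGENET = "imagenet"
    CIFAR10 = "cifar10"
    SVHN = "svhn"


# list(SomeEnum) yields every member, so pytest can parametrize over it directly;
# test ids will use the member names (IMAGENET, CIFAR10, SVHN).
@pytest.mark.parametrize("policy", list(AutoAugmentPolicy))
def test_autoaugment_policy(policy):
    assert isinstance(policy, AutoAugmentPolicy)
```

With the real enum, the decorator line would simply read `@pytest.mark.parametrize("policy", list(T.AutoAugmentPolicy))`.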
Thanks @tanvimoharir. I made a few comments but this looks good. In particular I think we should remove the use of `cpu_only` everywhere and instead use the `cpu_and_gpu()` parametrization. Let me know if you need any help!
test/test_transforms_tensor.py (Outdated)

```python
        s_transform.save(os.path.join(tmp_dir, "t_autoaugment.pt"))

@cpu_only
@pytest.mark.parametrize(
    'func,method,device,fn_kwargs,match_kwargs', [
```
Suggested change:

```diff
-    'func,method,device,fn_kwargs,match_kwargs', [
+    'func, method, device, fn_kwargs, match_kwargs', [
```
test/test_transforms_tensor.py (Outdated)

```python
if s_transform is not None:
    with get_tmp_dir() as tmp_dir:
        s_transform.save(os.path.join(tmp_dir, "t_autoaugment.pt"))

@cpu_only
```
This test is actually called twice: with `self.device = 'cpu'`, and with `self.device = 'cuda'` from the `CUDATester` class. So instead of using the `@cpu_only` decorator, we should parametrize with a new `@pytest.mark.parametrize('device', cpu_and_gpu())` and replace `self.device` by `device`. You'll need to remove `device` from the parametrization below as well :)
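The suggested pattern can be sketched as follows. The `cpu_and_gpu()` helper below is a hypothetical simplification of the one in torchvision's test utilities (the real helper inspects CUDA availability; here that is reduced to a plain flag so the sketch is self-contained):

```python
import pytest


def cpu_and_gpu(cuda_available=False):
    # Hypothetical stand-in for torchvision's cpu_and_gpu() test helper:
    # returns the device strings to run each test on. The real helper marks
    # 'cuda' so it is skipped automatically on machines without a GPU.
    devices = ["cpu"]
    if cuda_available:
        devices.append("cuda")
    return devices


# Instead of @cpu_only and self.device, each test takes `device` as a parameter:
@pytest.mark.parametrize("device", cpu_and_gpu(cuda_available=True))
def test_example(device):
    assert device in ("cpu", "cuda")
```

Every reference to `self.device` inside the old method body then becomes a reference to the `device` argument.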
test/test_transforms_tensor.py (Outdated)

```diff
@@ -481,7 +383,7 @@ def test_resized_crop_save(self):

 @unittest.skipIf(not torch.cuda.is_available(), reason="Skip if no CUDA device")
-class CUDATester(Tester):
+class CUDATester(unittest.TestCase):
```
We should be able to remove this class now.
test/test_transforms_tensor.py (Outdated)

```diff
@@ -608,6 +510,86 @@ def test_to_grayscale(device, Klass, meth_kwargs):
 )

+@cpu_only
```
Here as well, we should parametrize with `cpu_and_gpu()` instead.
In fact, we should do it in all of the other tests here that were using `self.device`.
So `self.device` had a value of `'cpu'`, which is why I thought of using `cpu_only()` instead of `cpu_and_gpu()`.
Yes, but as I mentioned above, all these tests were called twice: once as part of `Tester` with `'cpu'`, and once as part of `CUDATester` with `'cuda'`. Which is why we need to parametrize over `device` with `cpu_and_gpu()` now :)
Okay, thanks for clarifying :)
test/test_transforms_tensor.py (Outdated)

```python
@pytest.mark.parametrize(
    'in_dtype,out_dtype', [
        (int_dtypes() + float_dtypes(), int_dtypes() + float_dtypes())
    ]
)
```
There are 2 nested for loops here, so we should not parametrize over tuples; instead we should have 2 separate parametrizations to get a cross-product:

```python
@pytest.mark.parametrize('in_dtype', int_dtypes() + float_dtypes())
@pytest.mark.parametrize('out_dtype', int_dtypes() + float_dtypes())
```
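Stacking two `parametrize` decorators makes pytest run the test once per combination of values. A self-contained sketch, with simple string stand-ins for the real `int_dtypes()` / `float_dtypes()` helpers from the torchvision test suite:

```python
import itertools

import pytest


def int_dtypes():
    return ["int32", "int64"]     # stand-ins for the real torch dtypes


def float_dtypes():
    return ["float32", "float64"]  # stand-ins for the real torch dtypes


# Two stacked decorators yield the full cross-product (4 x 4 = 16 cases here),
# whereas a single tuple-valued parametrize would only pair values positionally.
@pytest.mark.parametrize("in_dtype", int_dtypes() + float_dtypes())
@pytest.mark.parametrize("out_dtype", int_dtypes() + float_dtypes())
def test_convert_dtype(in_dtype, out_dtype):
    assert in_dtype and out_dtype


# The cases pytest would generate, spelled out for clarity:
expected_cases = list(itertools.product(
    int_dtypes() + float_dtypes(),
    int_dtypes() + float_dtypes(),
))
```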
test/test_transforms_tensor.py (Outdated)

```diff
@@ -608,6 +510,86 @@ def test_to_grayscale(device, Klass, meth_kwargs):
 )

+@cpu_only
+@pytest.mark.xfail()
```
Why mark this xfail? It should probably be removed.
```python
with get_tmp_dir() as tmp_dir:
    scripted_fn.save(os.path.join(tmp_dir, "t_convert_dtype.pt"))
```
This can take a bit of time, especially when the test is heavily parametrized. Here and in the rest of the tests, let's extract the saving part into separate tests. Here we could name it test_convert_image_dtype_save().
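The extracted test runs the expensive save path exactly once instead of once per dtype combination. A sketch of the shape such a test could take; `FakeScriptedFn` is a hypothetical stand-in for the `torch.jit.script`-ed transform so the example runs without torch:

```python
import os
import tempfile


class FakeScriptedFn:
    # Hypothetical stand-in for a torch.jit scripted transform; only the
    # save() call matters for the shape of the extracted test.
    def save(self, path):
        with open(path, "wb") as f:
            f.write(b"scripted-module")


def test_convert_image_dtype_save():
    # Unparametrized: the serialization path is exercised once, independent
    # of the (in_dtype, out_dtype) cross-product in the main test.
    scripted_fn = FakeScriptedFn()  # in the real test: torch.jit.script(fn)
    with tempfile.TemporaryDirectory() as tmp_dir:
        path = os.path.join(tmp_dir, "t_convert_dtype.pt")
        scripted_fn.save(path)
        assert os.path.exists(path)
```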
test/test_transforms_tensor.py (Outdated)

```python
_test_transform_vs_scripted(transform, s_transform, tensor)
_test_transform_vs_scripted_on_batch(transform, s_transform, batch_tensors)

if s_transform is not None:
```
Let's extract this out into another test as well, without parametrization.
```python
with get_tmp_dir() as tmp_dir:
    scripted_fn.save(os.path.join(tmp_dir, "t_random_erasing.pt"))
```
here as well
Hey @NicolasHug! You approved or merged this PR, but no labels were added.

Thanks a lot @tanvimoharir! I just took care of the merge conflicts and I removed some parametrization on the

Thank you for helping me with this. 👍
Reviewed By: fmassa
Differential Revision: D29097733
fbshipit-source-id: 6ab7e5bb7c1d21e3aba922bb52659aab65e5abdf

Refers to #3987
Group A: these could be bundled into a single test_random() function and parametrized over `func`, `method`, `fn_kwargs` and `match_kwargs`.
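The bundling suggested for group A can be sketched as one parametrized test. The two `fake_*` functions below are hypothetical stand-ins for the real functional transforms (e.g. `F.hflip`) so the sketch is self-contained; in the real test, `method` would name the class-based transform to compare against:

```python
import pytest


def fake_hflip(x, **kwargs):
    # Hypothetical stand-in for a functional transform such as F.hflip.
    return x[::-1]


def fake_vflip(x, **kwargs):
    # Hypothetical stand-in for a functional transform such as F.vflip.
    return list(x)


@pytest.mark.parametrize(
    "func, method, fn_kwargs, match_kwargs",
    [
        (fake_hflip, "RandomHorizontalFlip", {}, {}),
        (fake_vflip, "RandomVerticalFlip", {}, {}),
    ],
)
def test_random(func, method, fn_kwargs, match_kwargs):
    # One body replaces a family of near-identical test methods: each case
    # supplies its own functional op, transform name, and kwargs.
    data = [1, 2, 3]
    assert func(data, **fn_kwargs) is not None
```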
group F