Add tests for datasets #963
Comments
you can do things like mocking, where a fake OS / fake filesystem / fake files are read and processed, and we assert that the right things happened when interacting with this fake OS. See https://www.toptal.com/python/an-introduction-to-mocking-in-python for an example.
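For illustration, here is a minimal sketch of that idea (the `load_annotations` helper and the file path are made up for the example, not torchvision code): the built-in `open` is patched so no real file is touched, and the test asserts both the returned data and that the code interacted with the fake filesystem as expected.

```python
import io
import unittest
from unittest import mock


def load_annotations(path):
    # Toy stand-in for dataset code that reads an annotation file from disk.
    with open(path, "rb") as f:
        return f.read().decode("utf-8").splitlines()


class FakeFilesystemTest(unittest.TestCase):
    def test_reads_expected_file(self):
        fake_file = io.BytesIO(b"cat\ndog\n")
        # Patch the built-in open so no real filesystem access happens.
        with mock.patch("builtins.open", return_value=fake_file) as mocked_open:
            labels = load_annotations("/fake/annotations.txt")
        mocked_open.assert_called_once_with("/fake/annotations.txt", "rb")
        self.assertEqual(labels, ["cat", "dog"])


if __name__ == "__main__":
    unittest.main()
```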
alternatively,
I think that's a reasonable compromise.
@soumith Do you have a specific size in mind below which we consider a dataset to be small?
I'm currently implementing the "small" dataset download locally, and for MNIST / EMNIST / KMNIST / FashionMNIST it takes 3 min to run a trivial test on my laptop. I think the small-dataset threshold might need to be smaller than EMNIST for tests to run reasonably fast. I'll soon push a PR to test how long the download takes on CI.
We are going for an approach that generates random dataset files on the fly, so that the datasets can be tested more easily. Here is an example: https://github.com/pytorch/vision/blob/master/test/test_datasets.py
We have context managers that generate datasets on the fly for testing, which helps us catch bugs; see vision/test/fakedata_generation.py, lines 18 to 46 (commit 4886ccc).
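As a rough sketch of that idea (not the actual helpers from fakedata_generation.py; the layout below mimics an ImageFolder-style dataset), a context manager can build a temporary tree of tiny random images and clean it up afterwards:

```python
import contextlib
import os
import tempfile

import numpy as np
from PIL import Image


@contextlib.contextmanager
def fake_imagefolder_root(num_classes=2, images_per_class=3):
    """Create a temporary ImageFolder-style tree filled with tiny random images."""
    with tempfile.TemporaryDirectory() as root:
        for cls in range(num_classes):
            cls_dir = os.path.join(root, f"class_{cls}")
            os.makedirs(cls_dir)
            for i in range(images_per_class):
                pixels = np.random.randint(0, 256, (2, 3, 3), dtype=np.uint8)
                Image.fromarray(pixels).save(os.path.join(cls_dir, f"{i}.png"))
        yield root


# Usage in a test:
# with fake_imagefolder_root() as root:
#     dataset = torchvision.datasets.ImageFolder(root)
#     assert len(dataset) == 6
#     img, target = dataset[0]
```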
Have you reached a consensus on how to test contributed datasets yet? I would like to contribute wrappers for several hand pose estimation datasets, like RHD, but haven't found a definitive guide on how to structure a dataset wrapper or a transform.
This approach has the following benefits:
Writing tests would also be separated into dataset tests that interact with the file system and separate transform tests that need only a single dummy sample.
Resumed in #1080.
We should add unit tests for datasets. This will make the codebase more reliable, and I expect that adding new datasets to torchvision will become a more straightforward procedure for the reviewer, as they will be able to rely on CI for a lot of the code checking.
We need to define how we will structure the tests for the datasets so that we avoid downloading the full datasets, which would probably make the CI flaky due to network issues.
Here are a few alternatives:
1 - Add dummy test files for each dataset
Every dataset that has a `download` option should use a (newly introduced) function that performs the downloading and the extraction. The dummy test files should have the same folder structure as the original files after extraction. This way, any postprocessing of the extracted files can be tested by the testing suite.
The dummy files would be fake images (e.g., of 2x3 pixels), but that enables testing both the `__init__` and the `__getitem__` of the dataset in a straightforward way, which are the only functions that we guarantee should be implemented for datasets.
Drawback
While this is a very flexible alternative, adding a new dataset adds an extra burden, as the contributor now also needs to create a set of fake data that should be committed into the repo.
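As a rough sketch of what a test against committed dummy files could look like (the `test/assets/fakedata` location and the use of `ImageFolder` are only illustrative; each real dataset would need its own committed folder structure):

```python
import os
import unittest

import torchvision

# Hypothetical location of the committed dummy files, mirroring the folder
# structure the dataset produces after download + extraction.
FAKEDATA_ROOT = os.path.join(
    os.path.dirname(__file__), "assets", "fakedata", "imagefolder"
)


class DummyFilesTest(unittest.TestCase):
    def test_init_and_getitem(self):
        dataset = torchvision.datasets.ImageFolder(FAKEDATA_ROOT)
        # __len__ should reflect the number of committed dummy images.
        self.assertGreater(len(dataset), 0)
        # __getitem__ should return an (image, target) pair.
        img, target = dataset[0]
        self.assertIsInstance(target, int)


if __name__ == "__main__":
    unittest.main()
```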
2 - Add an option to load images / data from a buffer
There is a common pattern in torchvision datasets:
1 - download / extract the original dataset
2 - get a mapping of `image_id -> image_path / class_id / etc`
3 - load the image (either from memory or from the `image_path`) and the annotations
The `download` / `extract` parts should be factored out into a common function, which handles zip / tar / etc.
If we add an extra method to the datasets, `get_paths_for_idx`, that gives the mapping `image_id -> image_path / class_id / etc`, we could then patch this method during testing so that instead of returning an `image_path`, it returns a file-like object, which can be read by PIL, `scipy.io.loadmat`, `xml.etree.ElementTree` and other Python libraries without problems. This will require that we specify somewhere the type of values that `get_paths_for_idx` returns, so that we know how to generate random data that fits what the dataset expects.
Drawback
We might end up adding extra complexity when defining a dataset, and we might not even test all possible code paths.
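To make the idea concrete, here is a sketch under the assumption that a dataset exposes the proposed `get_paths_for_idx` hook (the `ToyDataset` below is made up for the example; it is not an existing torchvision class). The test patches the hook so that `__getitem__` receives an in-memory buffer instead of a path:

```python
import io
import unittest
from unittest import mock

import numpy as np
from PIL import Image


class ToyDataset:
    """Minimal stand-in for a torchvision-style dataset using the proposed hook."""

    def __init__(self, root):
        self.root = root

    def get_paths_for_idx(self, idx):
        # In a real dataset this would return (image_path, class_id).
        return f"{self.root}/img_{idx}.png", idx % 2

    def __getitem__(self, idx):
        path_or_buffer, target = self.get_paths_for_idx(idx)
        # PIL accepts either a path or a file-like object.
        img = Image.open(path_or_buffer).convert("RGB")
        return img, target


def random_png_buffer():
    """Return an in-memory PNG that PIL can open like a file."""
    pixels = np.random.randint(0, 256, (2, 3, 3), dtype=np.uint8)
    buf = io.BytesIO()
    Image.fromarray(pixels).save(buf, format="PNG")
    buf.seek(0)
    return buf


class BufferPatchTest(unittest.TestCase):
    def test_getitem_with_buffer(self):
        dataset = ToyDataset(root="/does/not/exist")
        # Patch the mapping so __getitem__ receives a buffer instead of a path.
        with mock.patch.object(ToyDataset, "get_paths_for_idx",
                               return_value=(random_png_buffer(), 0)):
            img, target = dataset[0]
        self.assertEqual(img.size, (3, 2))
        self.assertEqual(target, 0)


if __name__ == "__main__":
    unittest.main()
```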
3 - Factor out dataset-specific functions and test only those
This would be indirect testing: we would split the dataset into standalone functions and test each function with some dummy inputs.
For example, in vision/torchvision/datasets/mnist.py, lines 311 to 328 (commit d4a126b), we could generate a dummy object on the fly to test those functions, without testing all of MNIST itself.
Drawback
We wouldn't have end-to-end tests, but would only be testing parts of the functionality.
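To illustrate what this option could look like in practice, a standalone parsing helper can be exercised with a handful of bytes generated inside the test itself (the `parse_labels` function below is hypothetical, not the actual MNIST reader, though it follows the same IDX-style layout of magic number, item count, then raw labels):

```python
import struct
import unittest


def parse_labels(data: bytes):
    """Hypothetical standalone helper: parse an IDX-style label buffer
    (big-endian magic number, item count, then one byte per label)."""
    magic, num_items = struct.unpack(">ii", data[:8])
    assert magic == 2049, "unexpected magic number"
    return list(data[8:8 + num_items])


class ParseLabelsTest(unittest.TestCase):
    def test_parse_labels_roundtrip(self):
        labels = [5, 0, 4]
        # Build a dummy buffer on the fly instead of downloading MNIST.
        payload = struct.pack(">ii", 2049, len(labels)) + bytes(labels)
        self.assertEqual(parse_labels(payload), labels)


if __name__ == "__main__":
    unittest.main()
```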
Wrapping up
There are other potential ways of implementing unit tests for the datasets that I'm probably missing in this list.
One thing that is pretty clear to me is that we should provide a tested function that performs the `download` + `extract` for zip / tar / etc, and use it everywhere in the codebase.
I'm currently inclined towards going for approach number 1, but it will involve adding a number of (dummy and very small) files to the repo. It's currently the only one that tests end-to-end all the functionality required by the dataset API, and is thus the most robust IMO.
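Regarding that shared download + extract function, a possible shape for it is sketched below (the name, signature, and error handling are placeholders, not an existing torchvision API):

```python
import os
import tarfile
import urllib.request
import zipfile


def download_and_extract(url, download_root, extract_root=None, filename=None):
    """Download `url` into `download_root` and extract zip / tar archives in place."""
    extract_root = extract_root or download_root
    filename = filename or os.path.basename(url)
    os.makedirs(download_root, exist_ok=True)
    archive = os.path.join(download_root, filename)

    # Skip the download if the archive is already cached locally.
    if not os.path.exists(archive):
        urllib.request.urlretrieve(url, archive)

    if zipfile.is_zipfile(archive):
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(extract_root)
    elif tarfile.is_tarfile(archive):
        with tarfile.open(archive) as tf:
            tf.extractall(extract_root)
    else:
        raise RuntimeError(f"Don't know how to extract {archive}")
    return extract_root
```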
Thoughts?