Unified Sub-package Testing #992

Closed

seanpmorgan opened this issue Jan 31, 2020 · 11 comments
Labels: discussion needed, test-cases (Related to Addons tests)

Comments

@seanpmorgan
Member

Similar to Keras' layer_test, it would be nice to have a suite of tests that all addons in a subpackage are verified against.

We've seen a number of times that an addon looked correct but later failed serialization or something along those lines because it was missed in review. We maintainers try our best on reviews, but it's volunteer work and things will always slip through.

Any thoughts on the complexities of implementing this? If there are none, we can move forward with tracking the implementation of these tests.

seanpmorgan added the discussion needed and test-cases labels on Jan 31, 2020
@gabrieldemarmiesse
Member

I can look into this. As you said, it requires some digging.

@hyang0129
Contributor

I could try designing a common test for optimizers.

If I understand correctly, all optimizer subclasses should share common behaviors (e.g. applying gradients, serializing, etc.). Given these common behaviors, we could have a test that identifies all the classes in the optimizers subpackage and tests them the same way.

For example, we could get a list of optimizers, instantiate them with default parameters, create a dummy model, compile it with MSE loss, fit the model for x steps, serialize, deserialize, etc. Would this be a test that the contributor has to add, or would it be better if this test automatically found all optimizers in the subpackage?
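
Something like the following could drive that (a rough sketch only, not the final test; `discover_optimizers` and `check_optimizer` are made-up names, and wrapper optimizers that can't be built with default arguments would still need special handling):

```python
# Sketch: discover every optimizer in tfa.optimizers, fit a tiny model with
# it, then round-trip it through Keras serialization.
import inspect

import numpy as np
import tensorflow as tf
import tensorflow_addons as tfa


def discover_optimizers():
    # Collect every optimizer class exported by tfa.optimizers.
    return [
        cls
        for _, cls in inspect.getmembers(tfa.optimizers, inspect.isclass)
        if issubclass(cls, tf.keras.optimizers.Optimizer)
    ]


def check_optimizer(optimizer_cls):
    # Instantiate with defaults; wrappers such as Lookahead that require an
    # inner optimizer would need an exclusion list (discussed further below).
    optimizer = optimizer_cls()
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer=optimizer, loss="mse")
    x = np.random.rand(8, 4).astype("float32")
    y = np.random.rand(8, 1).astype("float32")
    model.fit(x, y, epochs=1, verbose=0)
    # Round-trip through Keras serialization and make sure we get the same class back.
    config = tf.keras.optimizers.serialize(optimizer)
    restored = tf.keras.optimizers.deserialize(config)
    assert isinstance(restored, optimizer_cls)
```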

@seanpmorgan
Member Author

I could try designing a common test for optimizers.

If I understand correctly, all optimizer subclasses should share common behaviors (e.g. applying gradients, serializing, etc.). Given these common behaviors, we could have a test that identifies all the classes in the optimizers subpackage and tests them the same way.

For example, we could get a list of optimizers, instantiate them with default parameters, create a dummy model, compile it with MSE loss, fit the model for x steps, serialize, deserialize, etc. Would this be a test that the contributor has to add, or would it be better if this test automatically found all optimizers in the subpackage?

That would be great! I really like the idea of the test finding and testing all optimizers, but I want to make sure the test case has a good UX when reading the test logs (e.g., if we could utilize pytest parametrize, that'd be great). Feel free to open a PR and we can iterate on how we enforce the test.
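
As a sketch of what that could look like (reusing the hypothetical `discover_optimizers` / `check_optimizer` helpers from the comment above), parametrizing gives one readable log entry per optimizer:

```python
# Sketch: one pytest entry per discovered optimizer, so a failure shows up
# in the logs with a readable ID, e.g. test_optimizer_contract[LAMB].
import pytest

OPTIMIZERS = discover_optimizers()


@pytest.mark.parametrize(
    "optimizer_cls", OPTIMIZERS, ids=[cls.__name__ for cls in OPTIMIZERS]
)
def test_optimizer_contract(optimizer_cls):
    check_optimizer(optimizer_cls)
```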

@hyang0129
Contributor

@seanpmorgan could you take a look at #2208?

@hyang0129
Contributor

@seanpmorgan please review #2208. It should work well as a standardized test for optimizers.

@seanpmorgan
Member Author

Leaving this open because this could apply to other subpackages as well. If anyone is interested in this, let's build it out as a task list so we can easily keep track.

@hyang0129
Contributor

@seanpmorgan I'll look into refactoring #2208 sometime this week.

Do you have any recommendations on where we should put the generalized module testing code? Maybe in test utils?

@seanpmorgan
Member Author

@seanpmorgan I'll look into refactoring #2208 sometime this week.

Do you have any recommendations on where we should put the generalized module testing code? Maybe in test utils?

Yeah https://github.com/tensorflow/addons/blob/master/tensorflow_addons/utils/test_utils.py would be a good spot. Do you have any thoughts on how to keep separate exclusion lists for each subpackage?

@hyang0129
Contributor

@seanpmorgan I think we should have a separate standard_tests.py in each module's testing folder. This file would define the shared tests and the per-module exclusions.
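
As an illustration of that layout (the helper names and excluded classes here are assumptions, not the final design), an optimizers standard_tests.py could look roughly like:

```python
# Hypothetical optimizers standard_tests.py: shared checks live in
# tensorflow_addons/utils/test_utils.py, exclusions stay local to the module.
import pytest

from tensorflow_addons.utils import test_utils  # hypothetical shared helpers

# Wrappers that cannot be built with default arguments are excluded here.
EXCLUDED = {"Lookahead", "MovingAverage"}

OPTIMIZERS = [
    cls
    for cls in test_utils.discover_optimizers()  # hypothetical helper
    if cls.__name__ not in EXCLUDED
]


@pytest.mark.parametrize("optimizer_cls", OPTIMIZERS, ids=lambda cls: cls.__name__)
def test_standard_optimizer_behavior(optimizer_cls):
    test_utils.check_optimizer(optimizer_cls)  # hypothetical helper
```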

Also, please review #2233

@hyang0129
Contributor

@seanpmorgan @WindQAQ Are we still working on this?

If so, we can probably add some standardized tests for the layers submodule. We could use the existing layer test from Keras together with the standardized testing module to automatically test existing and future layers in the layers submodule.
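
A rough sketch of what a shared layer check could do (not tied to any particular Keras helper; `check_layer` is an illustrative name): run a forward pass and round-trip the layer config.

```python
# Sketch of a generic per-layer check: build the layer into a tiny model,
# run a forward pass, and round-trip the config via get_config/from_config.
import numpy as np
import tensorflow as tf


def check_layer(layer_cls, input_shape=(2, 8), **layer_kwargs):
    layer = layer_cls(**layer_kwargs)
    inputs = tf.keras.Input(shape=input_shape[1:])
    model = tf.keras.Model(inputs, layer(inputs))
    model(np.random.rand(*input_shape).astype("float32"))  # forward pass
    restored = layer_cls.from_config(layer.get_config())
    assert restored.get_config() == layer.get_config()
```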

seanpmorgan mentioned this issue on Mar 29, 2021
@seanpmorgan
Member Author

TensorFlow Addons is transitioning to a minimal maintenance and release mode. New features will not be added to this repository. For more information, please see our public messaging on this decision:
TensorFlow Addons Wind Down

Please consider sending feature requests / contributions to other repositories in the TF community with similar charters to TFA:
Keras
Keras-CV
Keras-NLP
