Unified Sub-package Testing #992
I can look into this. As you said, it requires some digging.
I could try designing a common test for optimizers. If I understand correctly, all optimizer subclasses should share common behaviors (e.g. applying gradients, serializing, etc.). Given these common behaviors, we could have a test that identifies all the classes in the optimizers subpackage and tests them the same way. For example, we get a list of optimizers, instantiate them with default parameters, create a dummy model and compile with MSE loss, fit the model for x steps, serialize, deserialize, etc. Would this be a test that the contributor has to add, or would it be better if this test automatically finds all optimizers in the subpackage?
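A minimal sketch of what this could look like, assuming the helper names below (`discover_optimizers`, `run_common_checks`) are hypothetical and that wrapper optimizers requiring a constructor argument (e.g. Lookahead) would need an exclusion list:

```python
# Sketch only, not the final PR code: discover every optimizer exported by
# tfa.optimizers and run it through a common smoke test.
import inspect

import numpy as np
import tensorflow as tf
import tensorflow_addons as tfa


def discover_optimizers():
    """Collect all Keras optimizer subclasses exposed by tfa.optimizers."""
    classes = []
    for name in dir(tfa.optimizers):
        obj = getattr(tfa.optimizers, name)
        if inspect.isclass(obj) and issubclass(obj, tf.keras.optimizers.Optimizer):
            classes.append(obj)
    return classes


def run_common_checks(optimizer_cls):
    """Instantiate with defaults, fit a dummy model, then round-trip the config."""
    optimizer = optimizer_cls()
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer=optimizer, loss="mse")

    x = np.random.rand(8, 4).astype("float32")
    y = np.random.rand(8, 1).astype("float32")
    model.fit(x, y, epochs=1, verbose=0)

    # Serialization round trip: the restored optimizer should reproduce the config.
    config = optimizer.get_config()
    restored = optimizer_cls.from_config(config)
    assert restored.get_config() == config
```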
That would be great! I really like the idea of the test finding and testing all optimizers, but I want to make sure the test case has a good UX when reading the test logs (e.g. if we could utilize pytest parametrize, that'd be great). Feel free to open a PR and we can iterate on how we enforce the test.
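For readable logs, the discovery could feed `pytest.mark.parametrize` so each optimizer shows up as its own test ID. This reuses the hypothetical helpers from the sketch above:

```python
# Sketch: one test entry per discovered optimizer, named after the class,
# so a failure log points directly at the offending optimizer.
import pytest


@pytest.mark.parametrize(
    "optimizer_cls",
    discover_optimizers(),
    ids=lambda cls: cls.__name__,
)
def test_optimizer_common_behavior(optimizer_cls):
    run_common_checks(optimizer_cls)
```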
@seanpmorgan could you take a look at #2208?
@seanpmorgan please review #2208. It should work well as a standardized test for optimizers. |
Leaving this open because this could apply to other subpackages as well. If anyone is interested in this, let's build it out as a task list so we can easily keep track. |
@seanpmorgan I'll look into refactoring #2208 sometime this week. Do you have any recommendations on where we should put the generalized module testing code? Maybe in test utils?
Yeah, https://github.com/tensorflow/addons/blob/master/tensorflow_addons/utils/test_utils.py would be a good spot. Do you have any thoughts on how to keep separate exclusion lists for each subpackage?
@seanpmorgan I think we should have a separate standard_tests.py in each module's testing folder. That file would define both the standard tests and the exclusions for that subpackage. Also, please review #2233.
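A hypothetical layout for such a per-subpackage standard_tests.py: the shared machinery would live in tensorflow_addons/utils/test_utils.py, while each subpackage keeps its own exclusion list next to its tests. The helper names (`discover_classes`, `run_standard_optimizer_checks`) and the exclusion entries are illustrative, not the actual PR #2233 code:

```python
# Sketch of tensorflow_addons/optimizers/tests/standard_tests.py
import pytest
import tensorflow_addons as tfa
from tensorflow_addons.utils import test_utils  # hypothetical shared helpers

# Classes that need non-default constructor arguments (e.g. wrapper
# optimizers) are excluded here and covered by their own dedicated tests.
EXCLUDED = {"Lookahead", "MovingAverage", "SWA"}


@pytest.mark.parametrize(
    "optimizer_cls",
    [
        cls
        for cls in test_utils.discover_classes(tfa.optimizers)
        if cls.__name__ not in EXCLUDED
    ],
    ids=lambda cls: cls.__name__,
)
def test_standard_optimizer_behavior(optimizer_cls):
    test_utils.run_standard_optimizer_checks(optimizer_cls)
```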
@seanpmorgan @WindQAQ Are we still working on this? If so, we can probably add some standardized tests for the layers submodule. We could use the existing layer test from Keras together with the standardized testing module to automatically test existing and future layers in the layers submodule.
TensorFlow Addons is transitioning to a minimal maintenance and release mode. New features will not be added to this repository. For more information, please see our public messaging on this decision. Please consider sending feature requests / contributions to other repositories in the TF community with similar charters to TFA.
Similar to Keras' layer_test, it would be nice to have a suite of tests that all addons in a subpackage are verified against.
We've seen, a number of times, an addon that looked correct but later failed on serialization or something along those lines because it was missed in review. Maintainers try our best on reviews, but it's volunteer work and things will always slip through.
Any thoughts on the complexities of implementing this? If not, we can move forward with tracking the implementation of these suites.
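One concrete check such a suite could enforce, sketched here with public Keras APIs (the function name is illustrative, and the exact assertions would be up for discussion in review):

```python
# Sketch: every addon layer should survive a serialize/deserialize round trip,
# since serialization is where reviewed-but-untested addons have broken before.
import tensorflow as tf


def check_layer_serialization(layer):
    # get_config/from_config round trip should reproduce the original config.
    config = layer.get_config()
    restored = layer.__class__.from_config(config)
    assert restored.get_config() == config

    # Also exercise the Keras (de)serialization entry points.
    serialized = tf.keras.layers.serialize(layer)
    tf.keras.layers.deserialize(
        serialized, custom_objects={layer.__class__.__name__: layer.__class__}
    )
```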