Implement Default Training Loop for TensorFlowV2Classifier
#2124
Conversation
Signed-off-by: Farhan Ahmed <Farhan.Ahmed@ibm.com>
Codecov Report
```diff
@@            Coverage Diff             @@
##           dev_1.15.0    #2124      +/-   ##
==============================================
+ Coverage       84.30%   85.64%   +1.34%
==============================================
  Files             299      299
  Lines           26648    26657       +9
  Branches         4878     4878
==============================================
+ Hits            22465    22831     +366
+ Misses           2931     2581     -350
+ Partials         1252     1245       -7
```
Hi @f4str, thank you very much for implementing a default training step for TensorFlow v2 classifiers! I have added a few suggestions; what do you think?
```diff
@@ -2055,6 +2055,7 @@ def logits_difference(y_true, y_pred):
     nb_classes=estimator.nb_classes,
     input_shape=estimator.input_shape,
     loss_object=self._loss_object,
+    optimizer=estimator._optimizer,
```
Should we use the new property:
```diff
- optimizer=estimator._optimizer,
+ optimizer=estimator.optimizer,
```
```diff
@@ -203,6 +203,7 @@ def __call__(self, y_true: tf.Tensor, y_pred: tf.Tensor, *args, **kwargs) -> tf.
     nb_classes=estimator.nb_classes,
     input_shape=estimator.input_shape,
     loss_object=_loss_object_tf,
+    optimizer=estimator._optimizer,
```
Should we use the new property:
```diff
- optimizer=estimator._optimizer,
+ optimizer=estimator.optimizer,
```
```diff
@@ -2055,6 +2055,7 @@ def logits_difference(y_true, y_pred):
     nb_classes=estimator.nb_classes,
     input_shape=estimator.input_shape,
     loss_object=self._loss_object,
+    optimizer=estimator._optimizer,
```
Should we use the new property:
```diff
- optimizer=estimator._optimizer,
+ optimizer=estimator.optimizer,
```
```diff
@@ -224,6 +224,7 @@ def __call__(self, y_true: tf.Tensor, y_pred: tf.Tensor, *args, **kwargs) -> tf.
     nb_classes=estimator.nb_classes,
     input_shape=estimator.input_shape,
     loss_object=_loss_object_tf,
+    optimizer=estimator._optimizer,
```
Should we use the new property:
```diff
- optimizer=estimator._optimizer,
+ optimizer=estimator.optimizer,
```
Hi @beat-buesser, thank you for the comments! Since ART is shifting from private attributes to public properties, I think it may be worth changing all of these to use the public property for consistency.
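As a sketch of the pattern being discussed here — exposing a private attribute through a read-only public property so callers write `estimator.optimizer` instead of `estimator._optimizer` (the class and names below are illustrative, not ART's actual code):

```python
class Estimator:
    """Illustrative stand-in for an ART estimator; not the real class."""

    def __init__(self, optimizer=None):
        self._optimizer = optimizer  # private attribute, set at construction

    @property
    def optimizer(self):
        # Public read-only accessor: external code (attacks, wrappers)
        # reads the optimizer without touching the private attribute.
        return self._optimizer


est = Estimator(optimizer="adam")
print(est.optimizer)  # → adam
```

Because no setter is defined, attempting to assign `est.optimizer = ...` raises `AttributeError`, which keeps the attribute effectively read-only from outside the class.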
Signed-off-by: Farhan Ahmed <Farhan.Ahmed@ibm.com>
Hi @f4str, thank you very much, the updates look good to me.
Description
A new `optimizer` parameter was added to the `TensorFlowV2Classifier` and its derivatives (e.g. `TensorFlowV2RandomizedSmoothing`). This new parameter is now used in the `.fit(...)` methods to create a default `train_step` training loop that is optimized with the `@tf.function` decorator. The existing `train_step` parameter can still be provided to override the default training loop, which preserves backwards compatibility. The docstrings have been updated accordingly.

All test cases and relevant examples/notebooks have been updated to use the default (optimized) training loop rather than creating one manually. This simplifies the setup for the TensorFlow v2 classifiers and makes them more similar to other estimators such as `KerasClassifier` and `PyTorchClassifier`. All test cases behave the same as before and did not need to be modified; this change just cleans things up.

By using this optimized default training loop, training performance for `TensorFlowV2Classifier` and its derivatives should improve significantly.

Fixes #2083
Type of change
Please check all relevant options.
Testing
Please describe the tests that you ran to verify your changes. Consider listing any relevant details of your test configuration.
Test Configuration:
Checklist