Build against TF2.13 #2835
Conversation
@qlzh727 getting a lot of
I think there is a newly added check in keras-team/keras@7eb8ef2#diff-5b101fb3499cadc6810aaa4ec68b961018d6d0c86ed9e9295e5ffb097e516be2 for layer instance type checking in the wrapper.
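For context, a hedged sketch of what that check looks like from the caller's side on TF 2.13; the exact exception type and message are assumptions, not copied from the Keras commit.

import tensorflow as tf

# Sketch: as of the referenced Keras change, wrapping something that is
# not a keras Layer is rejected at construction time. The exception types
# caught here are an assumption.
try:
    tf.keras.layers.Wrapper(tf.constant([1.0, 2.0]))
except (TypeError, ValueError) as e:
    print("non-Layer input rejected:", e)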
Thanks... this is causing breakage throughout the repo. Is there documentation you can share on what changed here? Can we use legacy serialization instead? I'm also seeing
From the test error log, it seems that the wrapper was being fed a tensor rather than a layer instance; is that expected behavior? Also, @nkovela1 is the original author of the change on the Keras side.
Ack, the test was to verify that an error is thrown on a Tensor input, and it now raises a different error type. There is another error, though, that only happens on TF2.13:
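A hedged sketch of the kind of test described at the start of this comment (illustrative test name and wrapped layer; not the elided TF2.13 error output). Asserting a broad set of exception types keeps the test valid whichever check fires first.

import pytest
import tensorflow as tf
import tensorflow_addons as tfa

# Hypothetical test; passing a Tensor instead of a Layer to a wrapper
# should fail. The broad exception assertion keeps the test green whether
# the older Addons-side check or the newer Keras-side check raises first.
def test_wrapper_rejects_tensor_input():
    with pytest.raises((TypeError, ValueError)):
        tfa.layers.WeightNormalization(tf.constant([1.0, 2.0]))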
We register our layers using
There is a fork of the SpectralNormalization layer that was sent to Keras in keras-team/keras#17648, and I think you are now getting a naming conflict.
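To make the naming conflict concrete, a minimal sketch, assuming registration goes through tf.keras.utils.register_keras_serializable; the class body and package name below are illustrative, not the actual Addons source.

import tensorflow as tf

# Illustrative stand-in for an Addons layer registered under a package
# namespace. TF 2.13 also ships a core Keras SpectralNormalization, so any
# lookup that resolves the layer by its bare class name (rather than the
# "package>name" key) can pick up the wrong implementation.
@tf.keras.utils.register_keras_serializable(package="Addons")
class SpectralNormalization(tf.keras.layers.Layer):
    pass

print(tf.keras.utils.get_registered_name(SpectralNormalization))
# -> "Addons>SpectralNormalization"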
@qlzh727 It appears that this isn't related to Addons, as I can make minimal reproducible examples: Can you please help get this looked at before the final release is cut?
Ack. CC'ed the relevant people on the issue.
This reverts commit a6ff967.
@seanpmorgan You can cherry-pick a6ff967 when
@qlzh727 The test is the same as always, but it is failing now. Here is the log:
This is quite weird; the same test is passing for 2.12 and failing for 2.13? Do we see the same error if we build with tf-nightly?
Yes, the tests pass on TF2.12: Based on this commit:
This reverts commit 606071c.
Confirmed that it is TF2.13 and not the container:
See the tests above; confirmed that this is failing for TF2.13 and not TF2.12. It's also failing for tf-nightly:
This reverts commit 275ef5e.
Thanks for the confirmation. Since we should have the same test internally, let me see what the status is there.
Just want to quickly address this comment. The upcoming EOL of TFA should not lower the priority of TFA testing. Over the years we have probably caught 20-30 bugs in releases because of the number of tests we run. Checking TFA tests should help the Google team prevent breaking changes and net new bugs, not simply ensure that our builds pass so we can publish.
@seanpmorgan Is it different on nightly?
I just saw a similar error in the Keras nightly build, which is caused by a NumPy version change in tf-nightly.
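A quick environment check (nothing Addons-specific) can confirm which NumPy each build actually pulls in:

import numpy as np
import tensorflow as tf

# Comparing these across the passing (TF 2.12) and failing (TF 2.13 /
# tf-nightly) containers shows whether the NumPy bump is the variable.
print("tensorflow:", tf.__version__)
print("numpy:", np.__version__)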
It seems that with 2.13 we get an error when tracing:

@tf.function
def apply_gradients():
    opt.apply_gradients([(grads, var)])

device.run(apply_gradients)
def autograph_handler(*args, **kwargs):
    """Calls a converted version of original_func."""
    try:
        return api.converted_call(
            original_func,
            args,
            kwargs,
            options=converter.ConversionOptions(
                recursive=True,
                optional_features=autograph_options,
                user_requested=True,
            ))
    except Exception as e:  # pylint:disable=broad-except
        if hasattr(e, "ag_error_metadata"):
>           raise e.ag_error_metadata.to_exception(e)
E           tensorflow.python.autograph.impl.api.StagingError: in user code:
E
E           File "/usr/local/lib/python3.9/dist-packages/keras/src/optimizers/utils.py", line 175, in _all_reduce_sum_fn  **
E               return distribution.extended.batch_reduce_to(
E
E           IndexError: tuple index out of range

/usr/local/lib/python3.9/dist-packages/tensorflow/python/eager/polymorphic_function/autograph_util.py:52: StagingError
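For readers without the TFA test handy, here is a self-contained sketch of the kind of pattern that exercises this code path: gradients applied inside strategy.run, where the Keras optimizer aggregates them via batch_reduce_to. All names are illustrative, this is not the Addons test code, and it is not claimed to reproduce the failure exactly.

import tensorflow as tf

# Minimal sketch: a single-device MirroredStrategy, a variable and optimizer
# created under the strategy scope, and apply_gradients called from within
# strategy.run so the gradient aggregation path shown in the traceback runs.
strategy = tf.distribute.MirroredStrategy(["CPU:0"])

with strategy.scope():
    var = tf.Variable(1.0)
    opt = tf.keras.optimizers.SGD(learning_rate=0.1)

grads = tf.constant(0.5)

@tf.function
def apply_gradients():
    opt.apply_gradients([(grads, var)])

strategy.run(apply_gradients)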
Description
Waiting on: