Unexpected breaking change: Optimizer.get_weights() removed #442
Comments
@gowthamkpr,

@chenmoneygithub can you take a look here?
@adriangb Thanks for reporting the issue! There has not been any change on

On

```python
>>> import tensorflow as tf
>>> tf.keras.optimizers.get("rmsprop").get_weights
<bound method OptimizerV2.get_weights of <keras.optimizer_v2.rmsprop.RMSprop object at 0x7f9f580e9360>>
```

On

```python
>>> import tensorflow as tf
>>> tf.keras.optimizers.get("rmsprop").get_weights
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'RMSprop' object has no attribute 'get_weights'
```

This is because the new
@chenmoneygithub any updates on this? This is breaking SciKeras (and presumably other downstream projects). Here are notebooks proving this is broken:
Until https://github.com/keras-team/keras/issues/16983 gets resolved
@mattdangerw would you mind giving an update now that 2.11.0 was released? Please let me know if I am doing something wrong or if there are alternatives, but as far as I can tell this was an unannounced breaking change with no alternative API.
But I don't think the issue is caused by this deprecation; at the time the issue was created, the new optimizer was still in

At the current version, I would encourage reworking the optimizer serialization strategy to not rely on the
Digging a bit I see that there at least is a public

```python
import tensorflow as tf

model = tf.keras.Sequential(
    [
        tf.keras.Input(shape=(1,)),
        tf.keras.layers.Dense(1, activation="softmax"),
    ]
)
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.fit([[1]], [0])

optimizer = model.optimizer
variables = optimizer.variables()
print(len(variables))  # 5

new = tf.keras.optimizers.Adam()
print(len(new.variables()))  # 1
new.build(variables)
print(len(new.variables()))  # 11!
new.set_weights(variables)  # fails
# ValueError: Optimizer variable m/iteration_568 has shape () not compatible with provided weight shape (1, 1)
```

Maybe that's an unrelated bug? I would expect this roundtrip to work.
Here's the current code: https://github.com/adriangb/scikeras/blob/master/scikeras/_saving_utils.py. It's the only reliable way that I know of to serialize optimizers with state, like Adam.
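For illustration, here is a minimal sketch of the kind of `get_weights()`-based round trip being described (not SciKeras's actual code), using the legacy optimizer class, which still exposes `get_weights()`/`set_weights()` in TF 2.11; the one-step `fit()` on the fresh model is just one way to force slot-variable creation before restoring:

```python
import pickle
import tensorflow as tf


def make_model():
    model = tf.keras.Sequential(
        [tf.keras.Input(shape=(1,)), tf.keras.layers.Dense(1)]
    )
    # The legacy optimizer keeps get_weights()/set_weights() in TF 2.11.
    model.compile(optimizer=tf.keras.optimizers.legacy.Adam(), loss="mse")
    return model


model = make_model()
model.fit([[1.0]], [[0.0]], verbose=0)

# Serialize the optimizer state (iteration count plus m/v slots) as numpy arrays.
blob = pickle.dumps(model.optimizer.get_weights())

new = make_model()
new.fit([[1.0]], [[0.0]], verbose=0)  # create the slot variables before restoring
new.optimizer.set_weights(pickle.loads(blob))
```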
I reported the issue because I test with
There is not

Some more context - we no longer keep the
(from the 2.11.0 tag)

That's good news! It may "just work" then and I can remove these workarounds. Let me try.
Exciting news! I think all of the bugs with optimizer serialization are fixed. So I think all of the following tickets are resolved:

I'll update SciKeras and run tests in CI to confirm.
Nope, I got my hopes up too soon. The bug reported in tensorflow/tensorflow#44670 is still there. So yeah, @chenmoneygithub can you think of a way to serialize and deserialize stateful optimizers in 2.11.0?
You need to call

For your code snippet, you want to do
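The exact calls were lost from this transcript, but a sketch of what the advice presumably amounts to, assuming the TF/Keras 2.11 optimizer API, where `build()` should receive the model's trainable variables rather than the optimizer's own variable list:

```python
import tensorflow as tf

model = tf.keras.Sequential(
    [tf.keras.Input(shape=(1,)), tf.keras.layers.Dense(1, activation="softmax")]
)
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.fit([[1]], [0])

variables = model.optimizer.variables()

new = tf.keras.optimizers.Adam()
# Build against the *model's* trainable variables so the optimizer
# creates one m/v slot pair per model variable (plus the iteration
# counter), matching the layout of the saved variable list.
new.build(model.trainable_variables)
new.set_weights(variables)
```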
I think

If I'm missing something, maybe you could give me a self-contained example where a model is saved and re-loaded preserving the optimizer state?
It loads the optimizer state. You cannot do the length-equal assertion because
Here's what I'm getting:

```python
import tensorflow as tf

model = tf.keras.Sequential(
    [
        tf.keras.Input(shape=(1,)),
        tf.keras.layers.Dense(1, activation="softmax"),
    ]
)
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.fit([[1]], [0])
model.save("model")

new = tf.keras.models.load_model("model")
new.load_weights("model")
print([v.name for v in model.optimizer.variables()])
# ['iteration:0', 'Adam/m/dense/kernel:0', 'Adam/v/dense/kernel:0', 'Adam/m/dense/bias:0', 'Adam/v/dense/bias:0']
print([v.name for v in new.optimizer.variables()])
# ['iteration:0']
```

My understanding is that

So I think that snippet you posted does not work.
@chenmoneygithub looping back here. Am I missing something, or does your suggestion indeed not work? Thanks
@adriangb What's your TF and Keras version? The snippet works as expected in my testing.
https://colab.research.google.com/drive/1p9XOAE9SwU3ZATKVHmzxWIDHK1BGF7S3?usp=sharing

This notebook confirms my results. The TF and Keras versions are printed out as well (2.11.0 for both). Is there something I'm missing or wrong with this notebook?
In 2.11 the optimizer does lazy loading; if you want to explicitly restore the variable values, you need to call

A little more context - the Keras team made a new-version optimizer, which is available via

For serialization/deserialization purposes, I don't know what the current approach is. One potential solution I am thinking about is to explicitly call
Please check if this works for you, thx!
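The specific call is missing from the transcript above. A sketch of the explicit-`build()` idea being suggested, under the assumption that TF/Keras 2.11's lazy restoration is triggered once the optimizer's variables actually exist:

```python
import tensorflow as tf

model = tf.keras.Sequential(
    [tf.keras.Input(shape=(1,)), tf.keras.layers.Dense(1, activation="softmax")]
)
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.fit([[1]], [0])
model.save("model")

new = tf.keras.models.load_model("model")
# Variable creation is deferred after loading; force it so the
# checkpointed slot values can be restored into the new optimizer.
new.optimizer.build(new.trainable_variables)
print([v.name for v in new.optimizer.variables()])
```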
Yes, I think that works! I'll give it some more in-depth testing and confirm. Thank you for your help.
… Keras team has temporarily broken cross-validation in their current release's optimizers (https://github.com/keras-team/keras/issues/16983). Open question: why are the CV results for the neural network model consistently 1.5-2x better than the single-random-test results?
Hello everyone, I ran some experiments (keras and tf == 2.15).

Actually, I think I compared the last loss of the first model with the loss at the end of the first epoch of the second model. They are similar. It seems that the training is continuing :)
@ValerioSpenn Hi Valerio, is it expected that the compared variables are all zeros? It seems to me that the comparison mechanism is missing some variables.
It seems like `Optimizer.get_weights()` is being removed. SciKeras was using it to serialize optimizer weights, since SavedModel silently fails to do so (see tensorflow/tensorflow#44670 and other linked issues; this is a longstanding bug that hasn't been fixed). Could someone fill me in on what the plans are going forward? Pickling models is an essential part of how Scikit-Learn operates, so SciKeras is completely broken if TensorFlow models can't be serialized.