keras.models.load_model resets the optimizer's state #70
@SiLiKhon I think the error is expected: the optimizer you print there is unknown to Keras, so it throws. Alternatively, you can save the weights along with the model. Please check the gist here. I am not sure about your use case; if you want to retrain the model from where it left off, you can load the model and retrain (without recompiling). Thanks!
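One common workaround along these lines is to save the model weights and the optimizer weights separately and restore both. The sketch below shows the general idea only, not necessarily the gist's exact approach; the build_model helper, file name, and data are illustrative assumptions:

```python
import numpy as np
from tensorflow import keras

def build_model():
    # Rebuilds the same architecture; both "sessions" must use the same definition.
    model = keras.Sequential([
        keras.layers.Dense(8, activation="relu", input_shape=(4,)),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

x, y = np.random.rand(32, 4), np.random.rand(32, 1)

# Train, then save the model weights and keep the optimizer weights separately.
model = build_model()
model.fit(x, y, epochs=1, verbose=0)
model.save_weights("my_weights.h5")
optimizer_weights = model.optimizer.get_weights()  # would need to be pickled for a real restart

# "Later": rebuild the model, reload its weights, then push the optimizer state back in.
new_model = build_model()
new_model.load_weights("my_weights.h5")
# Adam creates its slot variables lazily, so run one step before calling set_weights.
new_model.train_on_batch(x[:1], y[:1])
new_model.optimizer.set_weights(optimizer_weights)
```

In a real restart the optimizer weights would also have to be serialized to disk (e.g. with pickle or numpy) rather than kept in memory as above.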
@jvishnuvardhan thanks for the reply! To be honest, I don't quite understand: I thought a call to keras.models.load_model was supposed to bring the optimizer back, as the documentation example states: "The reconstructed model is already compiled and has retained the optimizer state, so training can resume." If, however, you run the example code, the optimizer's state is not restored; it only comes back if you add separate handling for the optimizer weights.

That's exactly my use case. And I'm saying that just a pair of save/load calls does not resume training from where it left off.
For Adam and other optimizers with slots, I think we still have this: tensorflow/tensorflow#44670
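For context, a brief sketch of what "slots" means here for Adam (illustrative code, not from the linked issue): the optimizer keeps per-variable slot variables for the first and second moments, and this per-variable state is exactly what gets lost on reload.

```python
import tensorflow as tf
from tensorflow import keras

opt = keras.optimizers.Adam()
var = tf.Variable([1.0, 2.0])

# Slots are created lazily, on the first apply_gradients call.
opt.apply_gradients([(tf.constant([0.1, 0.1]), var)])

print(opt.get_slot_names())    # ['m', 'v'] for Adam
print(opt.get_slot(var, "m"))  # per-variable first-moment accumulator
```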
@k-w-w Please take a look at this issue. Thanks
As per tensorflow/tensorflow#44670 (comment), this is solvable. @bhack, what is holding this up from being fixed? Is the implementation really that hard?
@adriangb If you want my opinion, as a first step I would try to write a PR with some expected-failure tests that cover this missing feature, e.g. like https://github.com/tensorflow/tensorflow/pull/51538/files. Once the test PR is approved and the team agrees on the use cases this feature should cover, we could wait for another user-contributed PR that invalidates those tests, so that the expected-failure annotation can be removed and this bug closed. Sometimes features also get implemented through normal internal development, so IMHO the failing tests are still useful: if the issue is solved by internal work, they act as a natural way to monitor open, confirmed tickets. This is just my own view; someone else might not like filing a feature request (or bug) with only expected-failure tests.
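As an illustration of what such an expected-failure test could look like (a sketch only, not the proposed PR; the test name, path, and tiny model are placeholders):

```python
import unittest

import numpy as np
from tensorflow import keras


class OptimizerStateRestoreTest(unittest.TestCase):
    @unittest.expectedFailure  # remove once SavedModel restores optimizer state
    def test_savedmodel_keeps_optimizer_weights(self):
        model = keras.Sequential([keras.layers.Dense(1, input_shape=(2,))])
        model.compile(optimizer="adam", loss="mse")
        model.fit(np.random.rand(8, 2), np.random.rand(8, 1), verbose=0)

        model.save("tmp_saved_model")  # SavedModel format
        restored = keras.models.load_model("tmp_saved_model")

        self.assertEqual(
            len(model.optimizer.get_weights()),
            len(restored.optimizer.get_weights()),
        )


if __name__ == "__main__":
    unittest.main()
```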
Looks like I created a dupe of this by accident here: tensorflow/tensorflow#53064.
@janhartman Is this warning not working in your case?
@bhack Check out the notebook I linked in my issue: I don't see the warning in Colab or on my machine. Regardless of the warning, this should still be put into the docs.
+1 for plastering this warning all over the docs. I would even go so far as to make this an error (only if fit is called). TensorFlow emits all sorts of warnings left and right, so even if this did emit a warning, it would get lost in the noise. It is a relatively obscure and hard-to-detect bug, since you can only see it via the resulting weights and training results. I could easily see this causing great harm to research projects or real-world applications.
@adriangb From a historical point of view, you can read a bit of the discussion around this warning at tensorflow/tensorflow#42846. I still believe that an expected-failure test PR like https://github.com/tensorflow/tensorflow/pull/51538/files could really help and also support a community fix. At the very least, once the failing tests are merged we would have a good overview of which tests need to pass to implement/resolve this missing feature. Are you interested in contributing a PR that extends the tests with this (expected) failing case?
Same situation here. But I find that if the model is saved in the H5 format, the optimizer state is restored. Is this a bug in the SavedModel format?
Yes, it is primarily a bug in SavedModel. I believe that, as you say, H5 works fine.
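A minimal sketch of that H5-vs-SavedModel comparison (the tiny model and file names are illustrative, not from the reports above):

```python
import numpy as np
from tensorflow import keras

model = keras.Sequential([keras.layers.Dense(1, input_shape=(2,))])
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.rand(8, 2), np.random.rand(8, 1), verbose=0)

model.save("model_tf")  # SavedModel format (the default in TF 2.x)
model.save("model.h5")  # legacy HDF5 format, selected by the .h5 extension

from_savedmodel = keras.models.load_model("model_tf")
from_h5 = keras.models.load_model("model.h5")

print(len(from_savedmodel.optimizer.get_weights()))  # 0 on affected versions
print(len(from_h5.optimizer.get_weights()))          # non-empty: H5 keeps the optimizer state
```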
(Moving an issue from the tf repo)
System information
Have I written custom code (as opposed to using a stock example script): yes, mostly based on the example from https://www.tensorflow.org/guide/keras/save_and_serialize
OS Platform and Distribution: Linux 59a52e5448f6 5.4.104+ #1 SMP Sat Jun 5 09:50:34 PDT 2021 x86_64 x86_64 x86_64 GNU/Linux
Mobile device: no
TensorFlow installed from: Google Colab version
TensorFlow version: v2.6.0-0-g919f693420e 2.6.0
Python version: 3.7.12 (default, Sep 10 2021, 00:21:48) [GCC 7.5.0]
Bazel version: no
GCC/compiler version: no
CUDA/cuDNN version: 11.2
GPU model and memory: Tesla K80, 11441MiB
Describe the current behavior
When restoring a Keras model with keras.models.load_model, the returned model's optimizer is in a reset state (e.g. its weights attribute is empty).

Describe the expected behavior
The original call to keras.models.load_model should have restored and kept the optimizer's weights.
Standalone code to reproduce the issue:
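The original snippet and its output are not reproduced here; the following is a minimal sketch of the reported behavior, assuming a small Sequential model in the spirit of the save_and_serialize guide (model, data, and path are illustrative):

```python
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.rand(32, 4), np.random.rand(32, 1), epochs=1, verbose=0)

print(len(model.optimizer.get_weights()))  # non-empty: iteration count plus Adam slots

model.save("my_model")                          # SavedModel format
restored = keras.models.load_model("my_model")  # default compile=True

print(len(restored.optimizer.get_weights()))    # 0 on affected versions: state was reset
```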
If we additionally pass a compile=False argument, the optimizer's weights are restored:
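Continuing the same illustrative sketch, the compile=False variant would look like this ("my_model" is the directory saved above):

```python
from tensorflow import keras

# Loading without recompiling: per the report, the optimizer weights come back here.
restored_uncompiled = keras.models.load_model("my_model", compile=False)
print(len(restored_uncompiled.optimizer.get_weights()))  # non-empty
```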
However, trying to use the restored optimizer fails with an exception:
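A sketch of that failure mode, continuing the same example; the exact exception and the exact call the report used are not reproduced here, so the usage pattern below is an assumption:

```python
import numpy as np
from tensorflow import keras

restored_uncompiled = keras.models.load_model("my_model", compile=False)

# Recompile with the revived optimizer object and try to train with it.
restored_uncompiled.compile(optimizer=restored_uncompiled.optimizer, loss="mse")
restored_uncompiled.fit(np.random.rand(32, 4), np.random.rand(32, 1),
                        epochs=1, verbose=0)  # raises an exception on affected versions
```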