
Not able to resume training after loading model + weights #2378

Closed
trane293 opened this issue Apr 18, 2016 · 90 comments

@trane293 commented Apr 18, 2016

I work at an institute where workstations are not allowed to run overnight, so I had to split the training process across multiple days. I trained a model for 10 epochs, which took approximately 1 day, and saved the model + weights using the methods described in the Keras documentation, like this:

 modelPath = './SegmentationModels/'
 modelName = 'Arch_1_10'
 sys.setrecursionlimit(10000)
 json_string = model.to_json()
 open(str(modelPath + modelName + '.json'), 'w').write(json_string)
 model.save_weights(str(modelPath + modelName + '.h5'))
 import cPickle as pickle
 with open(str(modelPath + modelName + '_hist.pckl'), 'wb') as f:
     pickle.dump(history.history, f, -1)

and load the model the next day like this:

 modelPath = './SegmentationModels/'
 modelName = 'Arch_1_10'
 model = model_from_json(open(str(modelPath + modelName + '.json')).read())
 model.compile(loss='categorical_crossentropy', optimizer=optim_sgd)
 model.load_weights(str(modelPath + modelName + '.h5'))
 #     import cPickle as pickle
 #     with open(str(modelPath + modelName + '_hist.pckl'), 'r') as f:
 #         history = pickle.load(f)
 model.summary()

but when I restarted the training process, it initialized to the same training and validation loss that I had gotten on the first epoch the previous day! It should have started around 60% accuracy, which was the last best accuracy from the previous day, but it doesn't.

I have also tried to call model.compile() before and after load_weights, as well as leaving it out altogether, but that doesn't work either.

Please help me in this regard. Thanks in advance.

@NasenSpray

Does it work when you construct the model with the original code instead of loading it from json?

@trane293 (Author)

Nope. It doesn't. Still starts with 20% accuracy as it did on the 1st epoch.

@NasenSpray commented Apr 18, 2016

Did the weights file already exist before you tried to save them?

@trane293 (Author) commented Apr 18, 2016

It did, but now I have tried using the ModelCheckpoint callback, which saves a weights file for each epoch. In my case the last weights file, for epoch 70, was newly created (it did not exist before). I tried loading it into the model built i) from JSON and ii) from the original code, but still no luck.

@NasenSpray

It did

That's it, save_weights() doesn't overwrite existing files unless you also pass overwrite=True. It should have asked for user input, though.
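
For reference, forcing the overwrite looks like this (a minimal sketch reusing the path variables from the snippet above):

model.save_weights(str(modelPath + modelName + '.h5'), overwrite=True)  # replaces an existing file without prompting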

@trane293 (Author)

Actually, sorry for my last comment: all the architectures and weights I save have unique names, and yes, I know save_weights() asks for user input when overwriting a file, but in my case it doesn't, since the files do not exist yet. So we can safely rule out the possibility that the file was not overwritten.

@trane293 (Author) commented Apr 18, 2016

[Screenshot: directory listing of weight files saved after every epoch]
You can see the weights saved after every epoch. When I try to load these weights, the training still restarts from where it originally started.

Here's my full loadModel() function:

# optimizers
optim_sgd = keras.optimizers.SGD(lr=0.01, momentum=0.9, decay=0.002, nesterov=True)
optim_adadelta = keras.optimizers.Adadelta()
optim_adagrad = keras.optimizers.Adagrad()
optim_adam = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08)

imageSize = (19, 19)
img_rows, img_cols = imageSize[0], imageSize[1]
batch_size = 200
# number of convolutional filters to use
nb_filters = 32
# size of pooling area for max pooling
nb_pool = 2
# convolution kernel size
nb_conv = 3

nb_epoch = 1000

# callbacks
def scheduler(epoch):
    if epoch % 10 == 0 and epoch != 0:  # use != rather than 'is not' for integer comparison
        x = float(input("Enter a learning rate (Current: {}): ".format(model.optimizer.lr.get_value())))
        model.optimizer.lr.set_value(x)
        print("Changed learning rate to: {}".format(model.optimizer.lr.get_value()))
    return model.optimizer.lr.get_value()

change_lr = oc.LearningRateScheduler(scheduler)
early_stop = oc.StopEarly(10)
plot_history = oc.PlotHistory()

# # Load the model
modelPath = './SegmentationModels/'
modelName = 'Arch_1_40'
model = model_from_json(open(str(modelPath + modelName + '.json')).read())
model.compile(loss='categorical_crossentropy', optimizer=optim_sgd)
model.load_weights(str(modelPath + 'weights.70-0.74.hdf5'))
#     import cPickle as pickle
#     with open(str(modelPath + modelName + '_hist.pckl'), 'r') as f:
#         history = pickle.load(f)
model.summary()

@NasenSpray commented Apr 18, 2016

That's strange...

Replace this line with if 1: and try to load the weights again. (Edit: nope, don't.)

@trane293 (Author)

I found out that I was using an older version of Keras. I upgraded the version and found model_summary() is no longer there. Delved deeper and found that it has now been changed to print_summary().

Anyway, I tried changing the line of code you mentioned, but that didn't work either.

@trane293 (Author) commented Apr 19, 2016

UPDATE: Came to the institute this morning, built the model using the original code, and loaded the model weights saved by the ModelCheckpoint callback. Started training and it still restarts from the beginning, with no memory of past metrics. The performance is actually even worse than it was when the model first started training. Normally the network starts at 20% accuracy and goes to around 70% in 60 epochs, but when I restart the training process using loaded weights, the network starts at 20% on epoch 1 and keeps going lower and lower, down to 16% at epoch 5. I have no idea what's happening here.

UPDATE 2: When I evaluate the loaded model + weights on the same validation data, I get 60% accuracy, as intended. But if I call model.fit(), training starts from 20% and oscillates around it. So I can confirm that the weights are being loaded correctly, since the model can make predictions, but the model is not able to retrain.

Please help! @NasenSpray

@carlthome (Contributor) commented Apr 19, 2016

So what model do you have, precisely? Perhaps some weights aren't actually saved or loaded at all (like the states in an LSTM or something)? Or perhaps they are accidentally shuffled (flipped dimensions or whatever) somehow.

EDIT: Check #2378 (comment)

@carlthome (Contributor) commented Apr 19, 2016

Grasping at straws here, but some optimizers are stateful, right? Are you just using SGD? I'm not familiar with this part of Keras, but perhaps the optimizer state should be saved as well; otherwise, when you restart learning with pretrained weights instead of your original weight initialization, training might diverge due to high learning rates.

@NasenSpray commented Apr 19, 2016

Run this plz

import numpy as np

model = make_model()          # rebuild the architecture with the original code
w1 = model.get_weights()      # freshly initialized weights
model.load_weights('your_saved_weights.h5')
w2 = model.get_weights()      # weights after loading from disk

for a, b in zip(w1, w2):
    if np.all(a == b):
        print("wtf is happening")

Does it print?

@trane293 (Author)

It doesn't print. The weights are loaded successfully, I suppose; it's the training procedure that's problematic. After running this script (it didn't print anything), I ran model.fit() and it started with a loss 10x higher than it originally was at epoch 1, and with 20% accuracy again. sigh

@carlthome (Contributor) commented Apr 19, 2016

Obviously something must be different, as you're seeing different results. Perhaps get_weights() doesn't actually return everything it could.

I'm curious whether you have the same problem just by restarting training in the same session, never mind loading a model and its weights with Keras' builtins. If not, consider saving states with something like this instead.

@trane293 (Author)

Thanks. When I restart an interrupted training process, the training continues from where it left off successfully. The problem is when I load the model and weights.

My main aim is to save "snapshots" or "states" of the model that can be loaded back and used as a starting point when training the next day. I'll have a look at the shelve module too, thanks! But I think the problem with Keras must be debugged as well.

Please guide me on how I can help you reproduce this issue so you can fix it sometime in the future. I would love to help.

@NasenSpray commented Apr 19, 2016

In your loadModel(), hardcode the learning rate to 0. Does it make the loaded model better?

- and -

Instead of training, just evaluate the loaded model on the training set. Still worse than before?
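
A minimal sketch of those two checks (X_train/Y_train are placeholder names for the training arrays, and the lr handling assumes the Theano-style variable used elsewhere in this thread):

import keras.backend as K

# Check 1: zero out the learning rate so fit() cannot move the loaded weights.
K.set_value(model.optimizer.lr, 0.0)

# Check 2: skip training entirely and just score the loaded model on the training set.
score = model.evaluate(X_train, Y_train, batch_size=batch_size)
print('training-set score with loaded weights: {}'.format(score))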

@trane293 (Author)

I'll try your suggestion as soon as I get to the institute tomorrow.

@trane293 (Author) commented Apr 21, 2016

I could not try validating on the training set for some reason, but I solved my problem by pickling the model after training it for the day. I restarted my IPython Notebook kernel, loaded the pickled model, and restarted the training process. Fortunately it started from where it left off.

I will also try your suggestion and report back what I get.
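
For anyone wanting to try the same workaround, it was roughly the following sketch (the file name and data arrays are placeholders, and whether the model object pickles cleanly depends on the backend; it worked on the Theano-era setup described in this thread):

import cPickle as pickle  # Python 2, as in the rest of this thread

# End of the day: dump the entire compiled model object.
with open('model_snapshot.pckl', 'wb') as f:
    pickle.dump(model, f, -1)

# Next day, in a fresh session: load it back and keep training where it left off.
with open('model_snapshot.pckl', 'rb') as f:
    model = pickle.load(f)
model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=10)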

@carlthome (Contributor)

Dang! That clearly means some states are not saved properly, whether they are weights or something else.

I assume the intended use case for having load and save functions in Keras has more to do with being able to share pretrained models, like people do with Caffe, rather than pausing your own training, in which case pickling is probably safer.

I do wonder, though, if it wouldn't be easier to just scrap the manual parsing of states, which is bug-prone, and simply have everything rely on Python's built-in object serialization with pickle, shelve or similar. Keras' builtins are pretty meaty though, so I'm probably missing something important about why they're needed.

I could do a PR with shelve for save_model(...), load_model(...), save_weights(...), load_weights(...) if it is of interest @fchollet.
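
A rough sketch of what the shelve-based approach could look like (the file name is a placeholder, and it assumes the model object is picklable on the backend in use):

import shelve

# Persist the compiled model (and anything else worth keeping) under string keys.
db = shelve.open('training_state')
db['model'] = model
db['history'] = history.history
db.close()

# In a later session, pull everything back out and resume.
db = shelve.open('training_state')
model = db['model']
history_dict = db['history']
db.close()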

@tboquet (Contributor) commented Apr 21, 2016

From what @carlthome said here, you could try to take a snapshot of the optimizer too.
I have 2 functions working to serialize the model and the optimizer as in the pre-1.0 release. Note that I return a dictionary instead of a JSON dump. It's basically something really similar to the old functionality.
You could try them and let me know if it works (I didn't have time to test them extensively):

import six
from keras import optimizers
from keras.utils.layer_utils import layer_from_config  # imports assumed for this snippet (Keras 1.x-era locations)


def get_function_name(o):
    """Utility function to return the model's name
    """
    if isinstance(o, six.string_types):
        return o
    else:
        return o.__name__

def to_dict_w_opt(model):
    """Serialize a model and add the config of the optimizer and the loss.
    """
    config = dict()
    config_m = model.get_config()
    config['config'] = {
        'class_name': model.__class__.__name__,
        'config': config_m,
    }
    if hasattr(model, 'optimizer'):
        config['optimizer'] = model.optimizer.get_config()
    if hasattr(model, 'loss'):
        if isinstance(model.loss, dict):
            config['loss'] = dict([(k, get_function_name(v))
                                   for k, v in model.loss.items()])
        else:
            config['loss'] = get_function_name(model.loss)

    return config


def model_from_dict_w_opt(model_dict, custom_objects=None):
    """Builds a model from a serialized model using `to_dict_w_opt`
    """
    if custom_objects is None:
        custom_objects = {}

    model = layer_from_config(model_dict['config'],
                              custom_objects=custom_objects)

    if 'optimizer' in model_dict:
        model_name = model_dict['config'].get('class_name')
        # if it has an optimizer, the model is assumed to be compiled
        loss = model_dict.get('loss')

        # if a custom loss function is passed replace it in loss
        if model_name == "Graph":
            for l in loss:
                for c in custom_objects:
                    if loss[l] == c:
                        loss[l] = custom_objects[c]
        elif model_name == "Sequential" and loss in custom_objects:
            loss = custom_objects[loss]

        optimizer_params = dict([(
            k, v) for k, v in model_dict.get('optimizer').items()])
        optimizer_name = optimizer_params.pop('name')
        optimizer = optimizers.get(optimizer_name, optimizer_params)

        if model_name == "Sequential":
            sample_weight_mode = model_dict.get('sample_weight_mode')
            model.compile(loss=loss,
                          optimizer=optimizer,
                          sample_weight_mode=sample_weight_mode)
        elif model_name == "Graph":
            sample_weight_modes = model_dict.get('sample_weight_modes', None)
            loss_weights = model_dict.get('loss_weights', None)
            model.compile(loss=loss,
                          optimizer=optimizer,
                          sample_weight_modes=sample_weight_modes,
                          loss_weights=loss_weights)
    return model

@carlthome, if this solution is OK, we could work on a PR that includes these functionalities and the other relevant elements (weights, states, ...)?
It should be possible to include all of this in an HDF5 file.

@carlthome (Contributor)

@tboquet, cool! Sounds good to me! I'm no authority on Keras, but I would probably have based loading/saving around object serialization of Model() and Sequential() just to be safe. In the future, new things will probably be stateful, which will screw things up again. The slight additional overhead of saving too much is worth the extra stability and reduced code complexity, in my mind.

@rtatishvili

This is what I am using (taken from the Keras docs) and it works without a problem on Keras 1.0:

def load_model():
    model = model_from_json(open('model.json').read())
    model.load_weights('weights.h5')
    model.compile(optimizer=rmsprop, loss='mse')
    return model


def save_model(model):    
    json_string = model.to_json()
    open('model.json', 'w').write(json_string)
    model.save_weights('weights.h5', overwrite=True)

I had one example with, say, 10 epochs and another example with save and load in a loop of 10 iterations, each with 1 epoch, and the loss for both was similarly decreasing. Additionally, both resulting models were predicting fine.
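
For concreteness, that comparison looks roughly like this (build_model, X_train and Y_train are placeholders for the original model-building code and data; build_model is assumed to return a compiled model):

# Variant A: ten epochs in a single run.
model_a = build_model()
model_a.fit(X_train, Y_train, nb_epoch=10, batch_size=32)

# Variant B: one epoch at a time, saving and reloading between epochs.
model_b = build_model()
for i in range(10):
    model_b.fit(X_train, Y_train, nb_epoch=1, batch_size=32)
    save_model(model_b)
    model_b = load_model()
# If save/load round-trips correctly, both variants show a similar loss curve.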

Have you tried to call model.load_weights before model.compile?

@trane293 (Author) commented May 1, 2016

Thank you for your suggestions, everyone. I will try them again and report back what I find. If the method described in the official Keras documentation works for everyone else, it should work for me too. I will dig a little deeper and find out if it's something I am doing wrong.

@carlthome (Contributor) commented May 14, 2016

I ran into a similar problem today. It really seems like it could be the optimizer that needs to be saved/loaded too, aside from the weights.

Basically anything like this seems to go bonkers (in my case loss='mse' and optimizer='rmsprop'):

# Starting fresh, training for a while and saving the weights to file.
model = create_model()
model.compile(...)
model.fit(...)
model.save_weights(...)

# Creating the model again, but loading the previous weights and resuming training.
model = create_model()
model.compile(...)
model.load_weights(...)
model.fit(...) # Diverges!

The data is the same in both fit calls.
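
One workaround people tried at the time was to snapshot the optimizer's internal variables alongside the model weights and push them back in after recompiling. A sketch (create_model, X_train and Y_train are placeholders; it assumes model.optimizer.get_weights()/set_weights() are available in the Keras version in use, and that the optimizer variables only exist once a training step has been built):

import cPickle as pickle

# After training: save the model weights plus the optimizer's slot variables
# (moments / accumulators / iteration count).
model.save_weights('model_weights.h5', overwrite=True)
with open('optimizer_state.pckl', 'wb') as f:
    pickle.dump(model.optimizer.get_weights(), f, -1)

# New session: rebuild and compile, load the weights, run one tiny update so the
# optimizer's variables get created, then overwrite them with the saved state
# (and reload the weights, since that one update nudged them slightly).
model = create_model()
model.compile(loss='mse', optimizer='rmsprop')
model.load_weights('model_weights.h5')
model.train_on_batch(X_train[:1], Y_train[:1])
with open('optimizer_state.pckl', 'rb') as f:
    model.optimizer.set_weights(pickle.load(f))
model.load_weights('model_weights.h5')
model.fit(X_train, Y_train)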

@trane293 (Author)

@carlthome Had the same problem. I haven't checked the current status recently, but now I use vanilla cPickle to pickle my trained model. Loading the pickled model and resuming training seems to work just as expected. However, I'm not sure about the JSON + h5 weight saving/loading functionality. If you are having the same problem, then there must be something wrong.

@NasenSpray

@carlthome: RMSprop makes really shitty updates during the first couple of steps which easily wreck pre-trained models. Could you retry with plain SGD?

@Rocketknight1 (Contributor)

I also encountered this problem training a 2-layer LSTM with one dense layer at the end. Testing showed the following:

- Compiling two identical models in the same script, training the first model, and then loading the weights into the second model via save_weights and load_weights worked as it should, even if the two models had separate optimizer instances. If I did this and then started training with the second model, its training loss was the same as the training loss of the first model when the weights were saved, as expected.

- However, once Python was closed and reopened, loading weights saved in the previous instance resulted in, if anything, a worse loss at the start of training than the untrained model, though it quickly learned again.

- I'm not sure the optimizer is at fault, because I've tried saving the weights from a model, reloading them, and then testing predictions without any further training. If the two models were compiled in the same session it works fine, but if I close the session, start a new session, compile a new model, and load the previous session's weights, then its predictions are garbage.

@Rocketknight1 (Contributor) commented May 21, 2016

Also, I'm using the Theano backend and training on Windows with CUDA, which is probably a weird use-case. Not sure what backend/OS the other people with this problem are using.

@LeZhengThu

I got the same problem. Does anyone have an idea?

@whikwon commented Apr 9, 2018

I got the same problem!

@ruiyuanlu commented Apr 16, 2018

I got the same problem too, any ideas?

My architecture is:

Encoder part

# imports for this snippet
from keras.layers import Input, Embedding, LSTM, Dense
from keras.models import Model

encoder_input_layer = Input(shape=(None,), name='encoder_Input')
encoder_embedding_layer = Embedding(src_token_num, embedding_dim, name='encoder_Embedding')(encoder_input_layer)
encoder_lstm_1_layer = LSTM(embedding_dim, return_sequences=True, return_state=True, name='encoder_LSTM_1')(encoder_embedding_layer)
encoder_lstm_2_layer = LSTM(embedding_dim, return_sequences=True, return_state=True, name='encoder_LSTM_2')(encoder_lstm_1_layer)
encoder_output, state_h, state_c = LSTM(embedding_dim, return_state=True, name='encoder_LSTM_Final')(encoder_lstm_2_layer)
encoder_states = [state_h, state_c]

Decoder part

decoder_input_layer = Input(shape=(None,), name='decoder_Input')
decoder_embedding_output = Embedding(trgt_token_num, embedding_dim, name='decoder_Embedding')(decoder_input_layer)
# Use encoder states to initialize decoder LSTM
decoder_lstm_1_layer = LSTM(embedding_dim, return_sequences=True, return_state=True, name='decoder_LSTM_1')
decoder_lstm_1_layer_output = decoder_lstm_1_layer(decoder_embedding_output, encoder_states)
decoder_lstm_2_layer = LSTM(embedding_dim, return_sequences=True, return_state=True, name='decoder_LSTM_2')
decoder_lstm_2_layer_output = decoder_lstm_2_layer(decoder_lstm_1_layer_output)
# State_h and state_c discarded.
decoder_lstm_final_layer = LSTM(embedding_dim, return_sequences=True, return_state=True, name='decoder_LSTM_Final')
decoder_lstm_output, _, _ = decoder_lstm_final_layer(decoder_lstm_2_layer_output)
# Classify words
decoder_dense_1_layer = Dense(embedding_dim, activation='relu', name='decoder_Dense_1_relu')
decoder_dense_1_output = decoder_dense_1_layer(decoder_lstm_output)
decoder_dense_final_layer = Dense(trgt_token_num, activation='softmax', name='decoder_Dense_Final')
decoder_dense_output = decoder_dense_final_layer(decoder_dense_1_output)

Final Model

encoder_decoder_model = Model(inputs=[encoder_input_layer, decoder_input_layer], outputs=decoder_dense_output)

[Screenshot: model summary]

When I tried to save the model, I got a warning as follows:
[Screenshot: save warning]

When I load the saved model and run model.fit(), the training accuracy is close to 0!

@NahianHasan commented Apr 30, 2018

Hello everyone, I faced the same problem, but I think it's solved in my case.
First, save the model and weights as in the code below:

# Save the final model architecture...
model_json = model.to_json()
mdl_save_path = 'model.json'
with open(mdl_save_path, "w") as json_file:
    json_file.write(model_json)
# ...and serialize the weights to HDF5
mdl_wght_save_path = 'model.h5'
model.save_weights(mdl_wght_save_path)

Then I started another session for retraining, completely closing all open Python files. I also tried this by checkpointing the model while training. At the time of resuming training, I first loaded the model architecture from the .json file, then loaded the weights from the .h5 file using load_weights(), compiled the model with model.compile(), and fit it with model.fit().

N.B.: I used SGD both for the original training and for resumed training... It worked.

I did not check this with other optimizers, though. I saw that if, at retraining time, I use an optimizer other than SGD (having used SGD in the original training), the issue persists. So I am pretty confident that using different optimizers during normal training and resumed training will cause you a problem.
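
Spelled out, the resume sequence described above is roughly the following sketch (the data, batch size and SGD settings are placeholders; the point is to compile with the same optimizer configuration in both runs):

from keras.models import model_from_json
from keras.optimizers import SGD

# Fresh session: rebuild the architecture from JSON and load the checkpointed weights.
with open('model.json') as json_file:
    model = model_from_json(json_file.read())
model.load_weights('model.h5')

# Compile with the same SGD settings as the original run, then keep training.
model.compile(loss='categorical_crossentropy',
              optimizer=SGD(lr=0.01, momentum=0.9, nesterov=True),
              metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=128, epochs=10)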

@peymanrah

Guys,

I fixed the problem by reducing the learning rate to 1e-5 (a small lr for Adam) when I fine-tune my pretrained model, which had been trained using Adadelta with a much higher starting lr. I think the issue is the starting lr for Adam that messes things up. For fine-tuning, just use a small lr for the new optimizer of your choice. Hope this helps.
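
In code, that amounts to recompiling the loaded model with a deliberately small learning rate before resuming (a sketch; the 1e-5 value is the one mentioned above, everything else is a placeholder):

from keras.optimizers import Adam

# Recompile with a small lr so the fresh optimizer state cannot wreck the loaded weights.
model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=1e-5), metrics=['accuracy'])
model.fit(X_train, Y_train, epochs=5, batch_size=32)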

@PatrickZGW

Thanks @peymanrah, this is exactly the solution I needed!

@ad12 commented Aug 28, 2018

@peymanrah when you first trained the model with Adam, what learning rate did your training stop at? I'm trying to see by what factor I should decrease the lr for fine-tuning.

@Jingnan-Jia commented Dec 12, 2018

@guyko81 @peymanrah I agree with you! The main reason is that when we save the model after several epochs, its learning rate is much smaller. Therefore, once we load the same model and continue training it, we have to set the learning rate to its last value instead of the default value (the default value is meant for the first epoch and is relatively large).

Thank you very much. This problem is fixed finally.

@bnaman50

It is sad that such a basic issue has still not been solved.

It is happening because model.save(filename.h5) does not save the state of the optimizer. So optimizers like Adam and RMSProp do not work, but SGD works, as mentioned in one of the previous comments (I verified this), since it is a stateless optimizer (the learning rate is fixed).

This is just sad that such a popular library has such basic/glaring/trivial bugs/problems :(

@salarbashi

Guys,

I fixed the problem by reducing the learning rate to 1e-5 (small lr for Adam) when I fine tune my pretrained model, which has been trained using Adadelta with a much higher starting lr. I think, the issue is the starting lr for Adam that mess things up. For fine tuing, just use a small lr for a new optimiser of your choice.. Hope this helps.

Reducing the learning rate solved my problem, too. At first the lr was 0.01 and then I reduced it to 0.001 on the second try. After one epoch it returned to its last state (in terms of acc and loss).
Note that I just saved and loaded the weights.

@nicolefinnie

It is sad that such a basic issue has still not been solved.

It is happening because model.save(filename.h5) does not save the state of the optimizer. So the optimizers like Adam, RMSProp does not work but SGD works as mentioned in one of the previous comments (I verified this) since it is stateless optimizer (learning rate is fixed).

This is just sad that such a popular library has such basic/glaring/trivial bugs/problems :(
@champnaman

What states of Adam and RMSProp are you referring to? The weights (states) of optimizers such as RMSProp and Adam are saved, except for TensorFlow optimizers wrapped in TFOptimizer, if you look at save_model() in saving.py (mine is Keras 2.1, which differs from the current code, but even the old version saves the optimizer states). This is confirmed by @fchollet in this issue, and the tensorflow.keras save_model documentation also confirms it.

And the learning rate (lr) of that epoch, along with epsilon, rho, and the whole optimizer instance, is saved as well. load_model() in saving.py loads those hyperparameters successfully in my case. However, my loaded loss is very different from the saved model's, which is a problem I'm still investigating. It could be related to a problem with multiple GPUs, which is beyond the scope of this issue.
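
One way to check this on a given install is to compare the optimizer weights before saving and after loading (a sketch; 'some_model.h5' is a placeholder, and it assumes the model was trained and then saved with model.save rather than save_weights):

import numpy as np
from keras.models import load_model

before = model.optimizer.get_weights()   # optimizer state of the live, trained model
model.save('some_model.h5')              # full save: architecture + weights + optimizer state

restored = load_model('some_model.h5')
after = restored.optimizer.get_weights()

# If save/load round-trips the optimizer, every slot should match exactly.
print(all(np.array_equal(a, b) for a, b in zip(before, after)))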

@ParikhKadam

@trane293 You mentioned that you used pickle to save and load the model, which worked very well here. Saving the model implies saving the weights of the model, the states of the optimizer, custom objects like layers, custom loss functions, custom accuracy metrics and more.

Can you please share your code to save and load the model? I too am facing this issue and am in urgent need of an alternative for now. My model takes approx. 10 days to train fully, but because of an electricity cutoff, its training got interrupted. Please help.

Also, I have opened a new issue in Keras. Can you help me debug the problem in a step-by-step manner? It would be a great help to a newbie like me. Thank you.

Issue link - #12263

@ParikhKadam

@trane293 You mentioned that the issue is fixed here, but I am still facing it. I am saving my model using the ModelCheckpoint callback in Keras. Is it that the issue is fixed in model.save() but not in ModelCheckpoint?

I am also using the multi_gpu_model() function for training. Is it interfering? Can you please help with my issue mentioned in the above comment?

@macmatt22

I experienced this issue with Keras on both the MXNet and TensorFlow backends. My solution was to switch from keras to tensorflow.keras. This obviously only works with the TensorFlow backend; however, if you are already using the TensorFlow backend, it is just a matter of changing your import statements, as the functionality of tensorflow.keras is almost identical to keras.
Since switching, I have not experienced this annoying bug.
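
The switch is essentially just the imports (a sketch; the layer names are examples, and the rest of the code can stay the same as long as only the public Keras API is used):

# Before: standalone Keras
# from keras.models import Sequential, load_model
# from keras.layers import Dense, LSTM

# After: the Keras implementation bundled with TensorFlow
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Dense, LSTM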

@ruiyuanlu

Thanks, I'll try the solution this week.

@15a15a commented Apr 6, 2019

This issue is not yet fixed. I'm experiencing it with TensorFlow as the backend. Any ideas?

@jayeew commented Apr 8, 2019

I have the same issue. Though I can't resume training after load_weights (the loss value is just like epoch 1), I can load the weights and predict well.
I also found that the loss goes down quickly after load_weights: in the normal case, the loss goes from 3 down to 1.5 in maybe 10 epochs, but after calling load_weights it goes from 3 down to 1.5 in maybe 3 or 4 epochs.
I have searched for a solution for a long time and have not fixed it yet.
Fortunately, it predicts well.

@turb0bur commented Jun 3, 2019

After loading your weights, when you train your model, set the parameter initial_epoch to the last epoch you previously trained to. E.g. if you trained your model for 100 epochs, saved weights after each epoch via ModelCheckpoint, and want to resume training from the 101st epoch, you should do it the following way:
model.load_weights('path_to_the_last_weights_file')
model.fit(initial_epoch=100)
Keep the other parameters the same.
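
Slightly more fully, and assuming the usual checkpoint setup (the data, callbacks and epoch counts are placeholders based on the example above):

model.load_weights('path_to_the_last_weights_file')

# Tell fit() that 100 epochs are already done, so training (and epoch-based
# callbacks such as LearningRateScheduler) resumes at epoch 101 instead of epoch 1.
model.fit(X_train, Y_train,
          batch_size=batch_size,
          initial_epoch=100,
          epochs=200,
          validation_data=(X_val, Y_val),
          callbacks=[checkpoint])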

@veqtor commented Jun 14, 2019

Also experiencing this, especially with multi_gpu_model of a multi-model (a model of models), saving the original multi-model. When I load the weights, it's as if they've never been saved (although the load isn't erroring).

I'm using an "altmodelcheckpoint" to save the weights of the original model. Not sure if it is working.
When checking val loss I get the exact same patterns, as if the weights had been reinitialized...

I think there might actually be a bug in here somewhere.

@ParikhKadam

There is no longer an issue with saving/loading model weights in Keras. I know, as I am a Keras user. It is some error in your program which leads to such issues.

@veqtor commented Jun 15, 2019

I'll try to find reproduction steps... But if this bug shows up only in rarer circumstances, then it's still a bug. But with 1.14 we'll no longer use multi_gpu_model, so it doesn't really matter.

@ParikhKadam

@veqtor You can check my mini project. I faced the same issues when I built it, but now everything works fine. I added support for multi-GPU and that too is working. Check the scripts here - https://github.com/ParikhKadam/bidaf-keras

@jkfy commented Dec 19, 2019

model.fit(x_train, y_train, batch_size=batch_size, initial_epoch=30,
          epochs=epochs, validation_data=(x_test, y_test),
          callbacks=[modelcheckpoint, earlystopping])
model.fit(x_train, y_train, batch_size=batch_size,
          epochs=epochs,
          validation_data=(x_test, y_test),
          callbacks=[modelcheckpoint, earlystopping])

Add an initial_epoch argument to indicate how many epochs were already trained. I had originally written only one fit() call, and the program just printed output without continuing training; only after I added another fit() call did training actually resume.

@igo312 commented Mar 23, 2020

My solution was to switch from using keras to tensorflow.keras. This obviously only works with tensorflow backend.
Since switching I have not experienced this annoying bug

Here's my script:

from tensorflow.keras.callbacks import ModelCheckpoint, TensorBoard, ReduceLROnPlateau, EarlyStopping

saver = ModelCheckpoint(os.path.join(save_path, model_name + '-{epoch:02d}-{val_loss:.2f}.hdf5'),
                        monitor='val_loss',
                        verbose=0,
                        save_best_only=False,
                        save_weights_only=False,
                        mode='auto',
                        save_freq=save_interval)

Still not working... And I have trained one epoch, but it does not seem to speed up convergence.

@jayeew commented Mar 23, 2020

(quoting @igo312's comment above)

My solution seems to work well. Here's my repository. You can try saving the weights only, then rebuilding the network and loading the weights.

@igo312 commented Mar 23, 2020

As @peymanrah and @turb0bur said, I set initial_epoch=39, where my training had paused according to TensorBoard, and kept lr=0.008 unchanged. I tried reducing the lr, but I didn't give it enough time to train for a few epochs. Here's my TensorBoard visualization; we can see that after one epoch the network seems to be back on track.
But unfortunately, after load_weights the model still cannot predict.

[TensorBoard screenshot: epoch_loss — blue line is the validation trend, orange line is the training trend]

@usher123

great!
