As part of section 1, at the end, when evaluating the best model we decide to save model_2 with model_2.save("model_name.h5"). However, when loading the same model (in this case model_name.h5), I get different (worse) results when checking the mean absolute error on the very same data. Is this normal behaviour? In the course, Daniel gets the same metrics with the same data.
I apologize if someone has already answered this question; I did not find it. In case there is an answer, just link it to me. Thanks.
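For reference, here is a minimal, self-contained sketch of the save/reload round trip described above (using toy data and a hypothetical small model, not the course's actual dataset). Since the .h5 file stores both the architecture and the trained weights, evaluating the reloaded model on the exact same data should reproduce the same MAE:

```python
import numpy as np
import tensorflow as tf

# Toy regression data (a stand-in for the course's X/y).
X = np.arange(-10, 10, 0.5).reshape(-1, 1).astype("float32")
y = (X * 3 + 7).astype("float32")

tf.random.set_seed(42)
model_2 = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="relu"),
    tf.keras.layers.Dense(1),
])
model_2.compile(loss="mae", optimizer="adam", metrics=["mae"])
model_2.fit(X, y, epochs=10, verbose=0)

# Evaluate, save in HDF5 format, reload, and evaluate again.
mae_before = model_2.evaluate(X, y, verbose=0)[1]
model_2.save("model_name.h5")
loaded = tf.keras.models.load_model("model_name.h5")
mae_after = loaded.evaluate(X, y, verbose=0)[1]

# The saved weights are restored exactly, so the metrics should match.
print(np.isclose(mae_before, mae_after))
```

If the metrics differ in your notebook, a common cause is evaluating a different model object (e.g. one re-trained after saving) or re-running data preprocessing between the two evaluations, rather than the save itself losing information.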