-
Hey there,

Massive effort experimenting with different layers for fine-tuning. As for the `initial_epoch` argument, it tells `fit()` which epoch to start counting from when you continue training. See the docs: https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit

For example, if you saved a model and wanted to continue training, you could start training on the last epoch of the training run where it was saved (e.g. setting `initial_epoch` to the last epoch of the previous run's history). If you wanted to go back further, you could use a lower `initial_epoch` value. When fine-tuning you might want to experiment with unfreezing layers using `layer.trainable = True` on the layers you want to train.
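For reference, here's a minimal sketch of that pattern, assuming an EfficientNetB0 feature extractor and `tf.data` pipelines named `train_data`/`test_data` (all names illustrative, not taken verbatim from the notebooks):

```python
import tensorflow as tf

# Assumed setup (illustrative names): an EfficientNetB0 feature extractor
# with a frozen base, as in the transfer learning notebooks
base_model = tf.keras.applications.EfficientNetB0(include_top=False)
base_model.trainable = False  # freeze the whole base for feature extraction

inputs = tf.keras.layers.Input(shape=(224, 224, 3))
x = base_model(inputs, training=False)  # keep BatchNorm layers in inference mode
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(loss="categorical_crossentropy",
              optimizer=tf.keras.optimizers.Adam(),
              metrics=["accuracy"])

# `train_data` and `test_data` are assumed tf.data.Dataset pipelines
history_feature_extract = model.fit(train_data,
                                    epochs=5,
                                    validation_data=test_data)

# --- Fine-tuning: unfreeze only the top 10 layers of the base model ---
base_model.trainable = True
for layer in base_model.layers[:-10]:
    layer.trainable = False

# Recompile after changing `trainable` flags, with a lower learning rate
model.compile(loss="categorical_crossentropy",
              optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              metrics=["accuracy"])

# Continue training where the last run left off: `epochs` is the total
# target, `initial_epoch` resumes the epoch counter from the saved history
history_fine_tune = model.fit(
    train_data,
    epochs=10,
    initial_epoch=history_feature_extract.epoch[-1],
    validation_data=test_data,
)
```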
-
Fantastic course! The notebooks are a go-to reference. I have a related question: what are the benefits and drawbacks of starting each fine-tuning session from the previous run's last epoch, versus reloading the feature-extraction weights, thawing the top n layers of the base model, and then training from the last feature-extraction epoch with the new model each time? I'm currently experimenting with both approaches and will update with my observations. Meanwhile, I'd be interested in your perspectives.
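For concreteness, the second approach might look something like this sketch (the checkpoint path is an assumption; `model`, `base_model`, `train_data`, `test_data`, and `history_feature_extract` carry over from the sketch in the reply above):

```python
# Sketch of the "reload and unfreeze top-n" approach described above
n_unfreeze = 10  # how many top base-model layers to thaw in this experiment

# Start every experiment from the same saved feature-extraction weights
# (assumed save path)
model.load_weights("checkpoints/feature_extraction.ckpt")

# Thaw only the top n layers of the base model
base_model.trainable = True
for layer in base_model.layers[:-n_unfreeze]:
    layer.trainable = False

# Recompile (required after changing `trainable`) with a lower learning rate
model.compile(loss="categorical_crossentropy",
              optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              metrics=["accuracy"])

# Resume the epoch counter from the end of the feature-extraction run
history_fine_tune = model.fit(
    train_data,
    epochs=10,
    initial_epoch=history_feature_extract.epoch[-1],
    validation_data=test_data,
)
```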
-
Hey all,
I'm having fun with the course and wanted to experiment with altering the number of trainable layers in the fine-tuning section of transfer learning.
I ran into an issue though: when I altered the number of trainable layers, I got an error message indicating that I needed to modify the `initial_epoch` argument in the `fit()` call. What is the general rule for that argument? How do I set it to accommodate an arbitrary number of layers I want to train?
Thanks!
Sky