
Transfer Learning and Fine-Tuning in Keras with State-of-the-Art Pre-Trained Networks:

This repository contains a detailed description and implementation of transfer learning and fine-tuning for image-classification problems in the computer-vision domain. For a detailed theoretical guide, see Here.


Transfer Learning:

Transfer learning refers to a process where a model trained on one problem is reused in some way on a second, related problem. It is a popular method in the computer-vision domain because it allows us to build efficient models in a time-saving way (Rawat & Wang, 2017). Instead of training a model from scratch, we start from patterns that have already been learned while solving a different but related problem, thereby leveraging previous learning.

Transfer Learning Strategies:

When reusing a pre-trained model for our own needs, we start by removing the original classifier, then add a new classifier that fits our purpose, and finally fine-tune the model according to one of the three strategies shown in Figure 1.


Figure 1: Transfer learning strategies.
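A minimal Keras sketch of the head-replacement step described above, assuming a VGG16 base, 224x224 RGB inputs, and a placeholder number of classes (all assumptions, not the repository's exact setup):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Load a pre-trained convolutional base without its original ImageNet classifier.
base_model = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)
)

# Attach a new classifier head that matches the target problem.
num_classes = 2  # placeholder: set to the number of classes in your dataset
model = models.Sequential([
    base_model,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),
])
model.summary()
```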

Transfer Learning Process:

1. Select a pre-trained model:

From the wide range of pre-trained models available Here, we pick one that suits our problem.
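As a hedged sketch, a few candidate bases can be instantiated from tf.keras.applications and compared by size before choosing one; the candidates listed here are illustrative, not the repository's final shortlist:

```python
import tensorflow as tf

# Candidate ImageNet-pre-trained architectures shipped with Keras.
candidates = {
    "VGG16": tf.keras.applications.VGG16,
    "VGG19": tf.keras.applications.VGG19,
    "ResNet50": tf.keras.applications.ResNet50,
}

# Instantiate each convolutional base to compare its size before choosing one.
for name, build in candidates.items():
    base = build(weights="imagenet", include_top=False)
    print(f"{name}: {base.count_params():,} parameters in the convolutional base")
```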

2. Classify the problem and Fine-Tune the Model according to the Size-Similarity Matrix:

Figure 2 shows the Size-Similarity Matrix that guides how much of the model to retrain. The matrix classifies the computer-vision problem by the size of our dataset and its similarity to the dataset on which the pre-trained model was originally trained (each option is sketched in Keras after the list below):

  • Large dataset, different from the pre-trained data (train the entire model)
  • Large dataset, similar to the pre-trained data (train some layers and freeze the others)
  • Small dataset, different from the pre-trained data (train some layers and freeze the others)
  • Small dataset, similar to the pre-trained data (freeze the convolutional base)
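
A hedged Keras sketch of these options, using a VGG16 base as an example (the choice of base and the layer split point are assumptions):

```python
import tensorflow as tf

base_model = tf.keras.applications.VGG16(weights="imagenet", include_top=False)

# Quadrant 1 - large, dissimilar dataset: train the entire model.
base_model.trainable = True

# Quadrants 2/3 - train some layers and freeze the others, e.g. keep only the
# last convolutional block trainable (the split point is an assumption).
base_model.trainable = True
for layer in base_model.layers[:-4]:
    layer.trainable = False

# Quadrant 4 - small, similar dataset: freeze the whole convolutional base and
# train only the newly added classifier head.
base_model.trainable = False
```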

Size-Similarity Matrix and Decision Map:


Figure 2: Size-Similarity Matrix (left) and decision map (right) for fine-tuning pre-trained models.

Datasets and Code Implementations:

I have used the following pre-trained networks for transfer learning on the Tuberculosis Classification and Skin Cancer Detection tasks.

Datasets:

The original datasets are publicly available at Tuberculosis Dataset and Skin Cancer Dataset, and can also be downloaded Here to reproduce the same accuracy results.
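A minimal sketch of loading one of the downloaded datasets with Keras; the directory paths and one-sub-folder-per-class layout below are hypothetical, not necessarily the repository's exact structure:

```python
import tensorflow as tf

IMG_SIZE = (224, 224)
BATCH_SIZE = 32

# "data/tuberculosis/train" and "data/tuberculosis/val" are hypothetical paths
# with one sub-folder per class; adjust them to wherever the dataset is unpacked.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/tuberculosis/train",
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    label_mode="categorical",
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/tuberculosis/val",
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    label_mode="categorical",
)
```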

Implementations:

Transfer Learning with VGG16:

Code: Google Colab Notebook

Transfer Learning with VGG19:

Code: Google Colab Notebook

Transfer Learning with AttentionBased-VGG16:

Code: Google Colab Notebook

Transfer Learning with ResNet50:

Code: Google Colab Notebook
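
The notebooks above follow the same overall recipe; a simplified, hedged sketch of that recipe with a VGG16 base is shown below (the hyperparameters and head architecture are assumptions, not the notebooks' exact settings):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Pre-trained convolutional base, kept frozen for the initial training phase.
base_model = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)
)
base_model.trainable = False

# New classifier head for a two-class problem (e.g. TB vs. normal); the head
# architecture and dropout rate are assumptions.
model = models.Sequential([
    base_model,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(2, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# train_ds / val_ds come from the dataset-loading sketch above.
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```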
