
My Ph.D. thesis "Contributions to Deep Transfer Learning: from supervised to reinforcement learning", obtained at the Department of Computer Science and Electrical Engineering of the University of Liège


Contributions to Deep Transfer Learning: from supervised to reinforcement learning

PhD dissertation, Matthia Sabatelli. Defended on March 30th, 2022.

License: BSD 3-Clause

Contact: Matthia Sabatelli (@MatthiawitH, matthia.sabatelli@gmail.com)


Throughout our lifetime we constantly need to deal with unforeseen events, which can sometimes be so overwhelming as to seem insurmountable. A common strategy that humans as well as animals have learned to adopt over millions of years of evolution is to tackle novel, unseen situations by re-using knowledge that led to successful solutions in the past. The ability to recognize patterns across similar settings, as well as the capacity to re-use and potentially adapt an already established skillset, is a crucial component of human and animal intelligence. This capacity is known as Transfer Learning.

The field of Artificial Intelligence (AI) aims to create computer programs that can mimic, at least to a certain extent, the properties underlying natural intelligence. It follows that among such properties there is also the capability of learning how to solve new tasks while exploiting previously acquired knowledge. Within the mathematical and algorithmic AI toolbox, Convolutional Neural Networks (CNNs) are nowadays by far among the most successful techniques for machine learning problems involving high-dimensional and spatially organized inputs. In this dissertation, we focus on studying their transfer learning properties and investigate whether such models can be transferred and trained across a large variety of domains and tasks.
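The core idea of transferring previously acquired knowledge to a new task can be sketched in miniature. The snippet below is a hypothetical toy illustration (not code from the thesis, and using a one-dimensional perceptron rather than a CNN): a model is trained on a source task, and its learned weights are reused to initialize training on a related target task, which typically requires fewer corrective updates than starting from scratch.

```python
import random

def train(points, labels, w, b, lr=0.1, epochs=20):
    """Perceptron training loop; returns final weight, bias, and
    the total number of corrective updates performed."""
    updates = 0
    for _ in range(epochs):
        for x, y in zip(points, labels):
            pred = 1 if w * x + b > 0 else 0
            if pred != y:
                w += lr * (y - pred) * x
                b += lr * (y - pred)
                updates += 1
    return w, b, updates

random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(200)]

# Source task: is x greater than 0?
src_labels = [1 if x > 0 else 0 for x in xs]
w_src, b_src, _ = train(xs, src_labels, w=0.0, b=0.0)

# Related target task: is x greater than 0.1?
tgt_labels = [1 if x > 0.1 else 0 for x in xs]

# Train from scratch vs. from the transferred source weights.
_, _, updates_scratch = train(xs, tgt_labels, w=0.0, b=0.0)
_, _, updates_transfer = train(xs, tgt_labels, w=w_src, b=b_src)

print("from scratch:", updates_scratch, "with transfer:", updates_transfer)
```

The same principle underlies transferring CNNs: a feature representation learned on one domain serves as the starting point for a related one, instead of learning everything anew.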


In the quest to better characterize the transfer learning potential of CNNs, we focus on two of the most common machine learning paradigms: supervised learning and reinforcement learning. After a first part (Part I) devoted to presenting all the necessary machine learning background, we will move to Part II, where the transfer learning properties of CNNs will be studied from a supervised learning perspective. Here we will focus on several computer vision tasks, ranging from image classification to object detection, which will be tackled by regular CNNs as well as by pruned models. Next, in Part III, we will shift our transfer learning analysis to the reinforcement learning scenario. Here we will first introduce a novel family of deep reinforcement learning algorithms and then move towards studying their transfer learning properties alongside those of several other popular model-free reinforcement learning algorithms.

Our transfer learning experiments allow us to identify the benefits, as well as some of the possible drawbacks, that can come from adopting transfer learning strategies, while at the same time shedding some light on how convolutional neural networks work.
