Interpretable AI at VISion Understanding and Machine intelligence (VISUM) Summer School 2022

On-site hands-on session for the Explainable AI track led by Prof. Henning Müller

Presented by: Mara Graziani

Welcome to the hands-on session on Explainable AI at the VISUM 2022 Summer School. This tutorial showcases several ways in which developers can interpret the outcomes of Deep Learning (DL) models for vision tasks and build models that are more reliable and transparent than existing ones.

Table of Contents

  1. Motivation
  2. Definition of Interpretability
  3. Overview
  4. Getting Started
  5. License
  6. Contact
  7. Acknowledgements

Motivation

Over the past decades, the availability of annotated data has grown exponentially, accompanied by a steady improvement in computing hardware. With the resources available today we can train complex deep learning models with billions of parameters in just a few hours. These models benefit from large datasets and are used for many tasks, from predicting animal species to analysing microscopy images. It is important to keep in mind that these models, like other machine learning (ML) models, are only approximations of the true underlying phenomenon.

As George Box observed long ago, these approximations are never exactly true. Estimating their usefulness and applicability depends on the context, the task and the risk involved in each application. There is ample evidence that deep learning models do not always predict outcomes as we would expect, despite impeccable performance on test data. In a variety of situations we observe performance drops on shifted data, a lack of robustness to samples engineered to trick the models, and the incorporation of bias and discrimination into the learning procedure.

An increasing number of researchers argue that evaluating DL models by their test performance alone is insufficient, and that development should be driven towards models that are both accurate and reliable, rather than merely accurate. We obviously want to keep the accuracy, but we also want reliability.

Reliability comes from multiple desirable factors, including the ability to generalise, to justify the decision making and, eventually, to improve trust through sustained use.

In this hands-on session, I will present several ways of making black boxes more transparent, of building grey boxes, and of testing the reliability and robustness of DL models.

Definition of Interpretability

There is still considerable debate about the meaning of interpretability, with several papers disagreeing on how interpretable and explainable should be defined. In particular, in my research I have identified a divide between the social and the technical sciences, with researchers from the social sciences using very different definitions. For this hands-on session, we adopt the following definition:

“An AI system is interpretable if it is possible to translate its working principles and outcomes in human-understandable language without affecting the validity of the system”

Overview

The hands-on will be structured in two parts:

  1. Interpretability methods for vision models [Colab]
    1. Gradient-based methods: Gradient-weighted Class Activation Maps (Grad-CAM) and Integrated Gradients (see the Grad-CAM sketch below)
    2. Model-agnostic methods: LIME and Sharp-LIME (see the LIME sketch below)
    3. Concept-based post-hoc attribution (see the concept-vector sketch below)
  2. Evaluation beyond interpretability: Robustness to data shifts and Uncertainty [Colab] (see the Monte-Carlo dropout sketch below)
Note: to annotate your own images, you can use the VGG Image Annotator demo: https://www.robots.ox.ac.uk/~vgg/software/via/via_demo.html
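
To give a flavour of part 1.1, here is a minimal Grad-CAM sketch. It is not the code of the Colab notebook: it assumes a TensorFlow/Keras classifier, and `model`, `image` and `last_conv_layer_name` are placeholders for whatever network, input and layer you are working with.

    # Minimal Grad-CAM sketch (assumes TensorFlow 2.x / Keras; `model`, `image`
    # and `last_conv_layer_name` are placeholders, not the notebook's own names).
    import numpy as np
    import tensorflow as tf

    def grad_cam(model, image, last_conv_layer_name, class_index=None):
        # Auxiliary model mapping the input to the last conv feature maps and the predictions
        grad_model = tf.keras.models.Model(
            model.inputs,
            [model.get_layer(last_conv_layer_name).output, model.output],
        )
        with tf.GradientTape() as tape:
            conv_output, predictions = grad_model(image[np.newaxis, ...])
            if class_index is None:
                class_index = tf.argmax(predictions[0])
            class_score = predictions[:, class_index]
        # Gradients of the class score w.r.t. the last conv feature maps
        grads = tape.gradient(class_score, conv_output)
        # Global-average-pool the gradients: one importance weight per channel
        weights = tf.reduce_mean(grads, axis=(0, 1, 2))
        # Weighted sum of the feature maps, followed by a ReLU
        cam = tf.nn.relu(tf.reduce_sum(conv_output[0] * weights, axis=-1))
        # Normalise to [0, 1] so the map can be upsampled and overlaid on the image
        return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

The resulting low-resolution map is usually resized to the input size and overlaid on the image as a heatmap.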
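
For part 1.2, a LIME explanation of an image classifier can be obtained with the `lime` package roughly as follows; `image` and `predict_fn` (a function mapping a batch of images to class probabilities) are again placeholders, and the parameter values shown are illustrative rather than the ones used in the notebook.

    # Illustrative LIME sketch for an image classifier (assumes the `lime` and
    # `scikit-image` packages; `image` and `predict_fn` are placeholders).
    from lime import lime_image
    from skimage.segmentation import mark_boundaries

    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        image.astype("double"),  # H x W x 3 image to explain
        predict_fn,              # batch of images -> array of class probabilities
        top_labels=3,            # explain the three highest-scoring classes
        num_samples=1000,        # number of perturbed samples around the image
    )
    # Superpixels that contribute positively to the top predicted class
    temp, mask = explanation.get_image_and_mask(
        explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
    )
    overlay = mark_boundaries(temp / 255.0, mask)  # image with the selected superpixels outlined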
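
Concept-based attribution (part 1.3) typically works in the activation space of an intermediate layer. As an illustration of the general idea, and not of the exact method used in the session, the sketch below fits a concept activation vector in the spirit of TCAV: a linear classifier separates the activations of concept images from those of random images, and its weight vector gives the concept direction.

    # Sketch of a concept activation vector (CAV). `concept_activations` and
    # `random_activations` are placeholders for intermediate-layer activations
    # computed on concept images and on random images, respectively.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def compute_cav(concept_activations, random_activations):
        X = np.concatenate([concept_activations, random_activations])
        y = np.concatenate([
            np.ones(len(concept_activations)),
            np.zeros(len(random_activations)),
        ])
        # Linear classifier separating concept from random activations
        clf = LogisticRegression(max_iter=1000).fit(X.reshape(len(X), -1), y)
        cav = clf.coef_[0]
        return cav / np.linalg.norm(cav)  # unit-norm concept direction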
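
For part 2, one simple way to probe predictive uncertainty is Monte-Carlo dropout: keep the dropout layers active at inference time and look at the spread of repeated predictions. The sketch below assumes a Keras model that contains dropout layers; it is one of several possible uncertainty estimates, not necessarily the one used in the notebook.

    # Monte-Carlo dropout sketch: repeat the forward pass with dropout active and
    # use the spread of the predictions as an uncertainty signal (`model` and
    # `images` are placeholders for a Keras model with dropout and a batch of inputs).
    import numpy as np

    def mc_dropout_predict(model, images, n_samples=20):
        # training=True keeps dropout layers active during the forward pass
        preds = np.stack(
            [model(images, training=True).numpy() for _ in range(n_samples)], axis=0
        )
        mean = preds.mean(axis=0)  # averaged class probabilities
        std = preds.std(axis=0)    # per-class spread across the stochastic passes
        return mean, std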

Getting Started

Make sure to check the Prerequisites and Installation sections before the workshop.

Prerequisites

To run the hands-on you will need a laptop and a web browser in which the Colab notebooks can be run. Colab lets you execute arbitrary Python code in the browser and requires no setup. See more information at https://research.google.com/colaboratory/faq.html.

Installation

To run the notebook locally:

  1. Clone the repo
    git clone https://github.com/maragraziani/InterpretabilityVISUM22.git
  2. Install the required Python packages
    pip install -r requirements.txt

License

Distributed under the MIT License. See LICENSE for more information.

Contact

Mara Graziani - @mormontre - mara.graziani@hevs.ch

Acknowledgements
