Hands-on Sessions 1 and 2 at the Building Interpretable AI for Digital Pathology AMLD workshop 2021


Welcome to the AMLD 2021 workshop on Building Interpretable AI for Digital Pathology. These hands-on sessions showcase multiple ways in which developers can interpret automated decision making in digital pathology.

Schedule

The workshop will take place on 27 April 2021, from 9:00 to 12:00 CET.

| Time        | Title                                            | Presenter                     |
|-------------|--------------------------------------------------|-------------------------------|
| 9:00-9:05   | Welcome                                          | Guillaume Jaume               |
| 9:05-9:25   | Introduction to Digital Pathology                | Prof. Dr. Inti Zlobec         |
| 9:25-9:45   | Introduction to Interpretability                 | Mara Graziani                 |
| 9:45-9:55   | Break 1                                          | -                             |
| 9:55-10:35  | Hands-on session 1: CNNs & Concept Attribution   | Mara Graziani                 |
| 10:35-10:45 | Break 2                                          | -                             |
| 10:45-11:55 | Hands-on session 2: Graph-based interpretability | Guillaume Jaume, Pushpak Pati |
| 11:55-12:00 | Closing remarks                                  | Pushpak Pati                  |

What to do before the workshop:

Participants should bring their own laptop with a basic development setup. We recommend testing the following steps before the workshop:

  • Clone the repository
>> git clone https://github.com/maragraziani/interpretAI_DigiPath.git && cd interpretAI_DigiPath
  • Launch Jupyter Notebook
>> jupyter notebook
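
If Jupyter is not already available, it can be installed with pip (a minimal suggestion, assuming a working Python 3 setup; individual notebooks may require additional packages):
>> pip install notebook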

Content

Deep learning algorithms may hide inherent risks such as the codification of biases, weak accountability, and a lack of transparency in their decision making. Because they give little insight into how they reach their final output, deep models are perceived by clinicians as black boxes. Clinicians, for their part, are the ones legally responsible and accountable for diagnoses and treatment decisions. Providing justifications for automated predictions may therefore have a positive impact on computer-aided diagnosis, for example by increasing the uptake of automated support within the decision-making process.

Part 1: Interpreting 2D CNNs

This part focuses on understanding the decision process of ConvNets with:

  • feature attribution: Class Activation Mapping (CAM) and its Gradient-weighted version
  • concept attribution: Regression Concept Vectors (RCV)

You will work on the implementation of Gradient-weighted Class Activation Mapping (Grad-CAM) as an example of feature attribution. RCVs will be applied to generate complementary explanations in terms of clinically relevant measures such as nuclei area and appearance.
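
For orientation, below is a minimal Grad-CAM sketch in PyTorch. It is illustrative only and not the workshop implementation: the choice of ResNet-18, of layer4 as the target layer, and the random input are assumptions, and the workshop notebooks may use a different framework.

```python
# A minimal Grad-CAM sketch (illustrative only, not the workshop code).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18().eval()  # assumption: any CNN with a last conv block
store = {}

# Capture the activations and gradients of the last convolutional block.
model.layer4.register_forward_hook(
    lambda m, i, o: store.update(act=o.detach()))
model.layer4.register_full_backward_hook(
    lambda m, gi, go: store.update(grad=go[0].detach()))

def grad_cam(x, class_idx=None):
    """x: a (1, 3, H, W) image tensor; returns an (H, W) heatmap in [0, 1]."""
    logits = model(x)
    if class_idx is None:
        class_idx = int(logits.argmax(dim=1))
    model.zero_grad()
    logits[0, class_idx].backward()
    # Global-average-pool the gradients to get one weight per channel ...
    weights = store["grad"].mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
    # ... then take a weighted sum of the feature maps, clipped by ReLU.
    cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                        align_corners=False)[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

heatmap = grad_cam(torch.randn(1, 3, 224, 224))  # random stand-in input
```

The resulting heatmap highlights the image regions that most increased the chosen class score.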

The notebooks and instructions for this part are in the folder 2DCNNs.
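
To give a flavour of concept attribution before the session, here is a minimal RCV sketch using scikit-learn on random stand-in data. It is not the workshop implementation: in the notebooks, the activations and concept measurements would come from a real model and annotated patches.

```python
# A minimal Regression Concept Vector (RCV) sketch (illustrative only).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
acts = rng.normal(size=(200, 512))   # stand-in: layer activations, N patches
concept = rng.normal(size=200)       # stand-in: e.g. nuclei-area per patch

# The RCV is the direction in activation space along which the concept
# measure increases: the coefficient vector of a least-squares fit.
reg = LinearRegression().fit(acts, concept)
rcv = reg.coef_ / np.linalg.norm(reg.coef_)

# Sensitivity of a class score to the concept is the directional derivative
# of the score along the RCV, i.e. the gradient projected onto it.
grad_of_class_score = rng.normal(size=512)   # stand-in for a real gradient
sensitivity = float(grad_of_class_score @ rcv)
print(f"concept sensitivity: {sensitivity:+.3f}")
```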

Part 2: Explainable Graph-based Representations in Digital Pathology

The second part of this tutorial guides you through building interpretable entity-based representations of tissue regions. The motivation starts from the observation that cancer diagnosis and prognosis are driven by the distribution of histological entities, e.g., cells, nuclei, and tissue regions. A natural way to characterize the tissue is therefore to represent it as a set of interacting entities, i.e., a graph. Unlike most deep learning techniques, which operate at the pixel level, entity-based analysis preserves the notion of histopathological entities, which pathologists can relate to and reason with. Explanations produced by entity-graph-based methodologies can thus be interpreted by pathologists, potentially helping to build trust in, and adoption of, AI in clinical practice. Notably, explanations produced in the entity space are better localized and therefore easier to discern.
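
To make the entity-graph idea concrete, here is a minimal sketch that connects nuclei centroids with k-nearest-neighbour edges using SciPy. It is illustrative only: the centroids are random stand-ins for real nuclei detections, and the hands-on notebooks build and process their graphs with their own tooling.

```python
# A minimal entity-graph sketch: nuclei centroids as nodes, k-NN edges.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
centroids = rng.uniform(0, 1000, size=(50, 2))  # stand-in nuclei positions

k = 5
tree = cKDTree(centroids)
# Query k+1 neighbours: the nearest neighbour of each point is itself.
_, idx = tree.query(centroids, k=k + 1)

edges = {(min(i, j), max(i, j))                 # undirected, deduplicated
         for i, row in enumerate(idx) for j in row[1:]}
print(f"{len(centroids)} nodes, {len(edges)} edges")
```

Such a graph can then be fed to a graph neural network, so that node-level explanations refer directly to nuclei rather than pixels.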

Reference papers

@article{graziani2020,
    title = "Concept attribution: Explaining {{CNN}} decisions to physicians",
    author = "Mara Graziani and Vincent Andrearczyk and Stephane Marchand-Maillet and Henning Müller",
    journal = "Computers in Biology and Medicine",
    pages = "103865",
    year = "2020",
    doi = "10.1016/j.compbiomed.2020.103865"
}

@misc{pati2021,
    title = "Hierarchical Graph Representations in Digital Pathology",
    author = "Pushpak Pati and Guillaume Jaume and Antonio Foncubierta and Florinda Feroce and Anna Maria Anniciello and Giosuè Scognamiglio and Nadia Brancati and Maryse Fiche and Estelle Dubruc and Daniel Riccio and Maurizio Di Bonito and Giuseppe De Pietro and Gerardo Botti and Jean-Philippe Thiran and Maria Frucci and Orcun Goksel and Maria Gabrani",
    note = "arXiv preprint",
    url = "https://arxiv.org/abs/2102.11057",
    year = "2021"
}

@inproceedings{jaume2021,
    title = "Quantifying Explainers of Graph Neural Networks in Computational Pathology",
    author = "Guillaume Jaume and Pushpak Pati and Behzad Bozorgtabar and Antonio Foncubierta-Rodríguez and Florinda Feroce and Anna Maria Anniciello and Tilman Rau and Jean-Philippe Thiran and Maria Gabrani and Orcun Goksel",
    booktitle = "IEEE CVPR",
    url = "https://arxiv.org/abs/2011.12646",
    year = "2021"
}
