Official implementation of "Pathological Semantics-Preserving Learning for H&E-to-IHC Virtual Staining" (MICCAI 2024) [arXiv]
Conventional hematoxylin-eosin (H&E) staining is limited to revealing cell morphology and distribution, whereas immunohistochemical (IHC) staining provides precise and specific visualization of protein activation at the molecular level. Virtual staining technology has emerged as a solution for highly efficient IHC examination, which directly transforms H&E-stained images to IHC-stained images. However, virtual staining is challenged by the insufficient mining of pathological semantics and the spatial misalignment of pathological semantics. To address these issues, we propose the Pathological Semantics-Preserving Learning method for Virtual Staining (PSPStain), which directly incorporates the molecular-level semantic information and enhances semantics interaction despite any spatial inconsistency. Specifically, PSPStain comprises two novel learning strategies: 1) Protein-Aware Learning Strategy (PALS) with Focal Optical Density (FOD) map maintains the coherence of protein expression level, which represents molecular-level semantic information; 2) Prototype-Consistent Learning Strategy (PCLS), which enhances cross-image semantic interaction by prototypical consistency learning. We evaluate PSPStain on two public datasets using five metrics: three clinically relevant metrics and two for image quality. Extensive experiments indicate that PSPStain outperforms current state-of-the-art H&E-to-IHC virtual staining methods and demonstrates a high pathological correlation between the staging of real and virtual stains.
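To make the PALS idea concrete, here is a minimal sketch of the kind of computation it builds on: an optical-density (OD) map obtained from an RGB image via the Beer-Lambert law, aggregated into an expression-level proxy whose coherence between real and generated IHC is enforced with an L1 penalty. This is an illustration only, assuming inputs in [0, 1]; the paper's actual Focal Optical Density map adds a focal weighting, and all function names below are ours, not the repo's.

```python
import torch
import torch.nn.functional as F

def optical_density(rgb: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Beer-Lambert optical density for an RGB batch in [0, 1]: OD = -log10(I / I0), with I0 = 1."""
    return -torch.log10(rgb.clamp(min=eps))

def expression_coherence_loss(fake_ihc: torch.Tensor, real_ihc: torch.Tensor) -> torch.Tensor:
    """L1 distance between per-image aggregated OD, a crude stand-in for the
    protein expression level that PALS keeps coherent (focal weighting omitted)."""
    od_fake = optical_density(fake_ihc).sum(dim=(1, 2, 3))  # one scalar per image
    od_real = optical_density(real_ihc).sum(dim=(1, 2, 3))
    return F.l1_loss(od_fake, od_real)
```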
- Code is still being updated.
- The evaluation process will be described in a future update.
Create the conda environment from the provided file:

```bash
conda env create -f environment.yml
```
- Breast Cancer Immunohistochemical (BCI) challenge dataset
- Multi-IHC Stain Translation (MIST) dataset
More information and download links for these two datasets can be found on the BCI and MIST pages.
We use `experiments/PSPStain_launcher.py` to generate the command-line arguments for training and testing. More details on the parameters used to train our models can be found in that launcher file.
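As a rough orientation, launchers in this codebase follow the CUT/ASP launcher convention, in which a `Launcher` class turns a list of `Options` into full command lines. The snippet below is only a sketch of that pattern with placeholder values, assuming the CUT-style `Options`/`TmuxLauncher` API; it is not the shipped `PSPStain_launcher.py`.

```python
# Sketch of a CUT/ASP-style launcher (placeholder values, not the shipped config).
from .tmux_launcher import Options, TmuxLauncher

class Launcher(TmuxLauncher):
    def common_options(self):
        return [
            Options(
                name="PSPStain",
                dataroot="/path/to/dataset",  # <- your data path goes here
            )
        ]

    def commands(self):
        # str(opt) renders each Options object as "--key value" flags
        return ["python train.py " + str(opt) for opt in self.common_options()]
```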
- Set `dataroot` in `experiments/PSPStain_launcher.py` to your data path, organized as follows:
```
dataset/
│
├── trainA/   # H&E images
├── trainB/   # IHC images
├── valA/
├── valB/
```
💡Important tips💡
- To train on a different dataset, you need to change the pretrained UNet segmentation model: `pretrain/BCI_unet_seg.pth` or `pretrain/MIST_unet_seg.pth`.
```bash
# training
python -m experiments --name PSPStain --cmd train --id 0 --unet_seg 'BCI_unet_seg'

# testing
python -m experiments --name PSPStain --cmd test --id 0
```
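To train on MIST, pass `--unet_seg 'MIST_unet_seg'` instead, matching the pretrained segmentation model mentioned in the tips above.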
- The latest weights of PSPStain are available at the link; the extraction key is `u6qo`.
- We use ImageJ to calculate the optical density values.
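If you prefer to script this step instead of using the ImageJ GUI, the snippet below follows ImageJ's uncalibrated optical density convention for 8-bit images, OD = log10(255 / pixel value). Treat it as a sketch of the measurement, not our exact evaluation code.

```python
import numpy as np

def uncalibrated_od(gray_u8: np.ndarray) -> np.ndarray:
    """ImageJ-style uncalibrated OD for an 8-bit grayscale image:
    OD = log10(255 / pixel value); pixel values are clipped to [1, 255]
    to avoid division by zero."""
    pv = np.clip(gray_u8.astype(np.float64), 1.0, 255.0)
    return np.log10(255.0 / pv)
```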
This repo is built upon Contrastive Unpaired Translation (CUT) and Adaptive Supervised PatchNCE Loss (ASP).