Pouria Rouzrokh1,2,*, Bardia Khosravi1,2,*, Shahriar Faghani1, Mana Moassefi1, Sanaz Vahdati1, Bradley J. Erickson1,+
(1) Mayo Clinic Artificial Intelligence Laboratory (2) Orthopedic Surgery Artificial Intelligence Laboratory
Mayo Clinic, MN, USA
(*) co-first authors (+) corresponding author
Multitask Brain Tumor Inpainting (MBTI) is a denoising diffusion probabilistic model (DDPM), developed by the Mayo Clinic Artificial Intelligence Laboratory, that can inpaint an axial slice of a brain magnetic resonance imaging (MRI) scan in any of the T1-weighted, post-contrast T1-weighted, T2-weighted, or fluid-attenuated inversion recovery (FLAIR) sequences. "Inpainting" refers to the ability of the proposed model to fill one or multiple cropped areas of an input image (i.e., a two-dimensional axial brain MRI slice) with tumor-free (apparently normal) brain tissue, necrotic tumor core, tumoral edema, tumoral enhancement, or a combination of the tumoral components. Similarly, the term "multitask" means that the model can simultaneously inpaint various cropped regions of the input image with distinct components in a single inference run. Please refer to our manuscript for further information about our tool.
The current repository includes both an online tool for using our model and the source code for the UNet model that was used to train our algorithm (adapted from here).
Our online tool is a graphical user interface, built with Gradio, that illustrates how our model can be used to generate synthetic image data. The tool is accessible through the hyperlink provided at the top of this page. Please watch the demo animation above to learn how to use it, or continue reading for its specifications.
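To give a sense of how such an interface is wired up, below is a minimal, hypothetical Gradio sketch. It is not the actual source code of this tool, and `run_inpainting` is a placeholder function used only for illustration.

```python
import gradio as gr
import numpy as np

def run_inpainting(image: np.ndarray) -> np.ndarray:
    # Placeholder: the real tool extracts the colored ROI annotations from the
    # uploaded slice and runs the diffusion model; here the input is returned as-is.
    return image

demo = gr.Interface(
    fn=run_inpainting,
    inputs=gr.Image(type="numpy", label="input image"),
    outputs=gr.Image(type="numpy", label="output image"),
    title="Multitask Brain Tumor Inpainting (illustrative sketch)",
)

if __name__ == "__main__":
    # queue() limits the number of concurrent requests, as described further below.
    demo.queue().launch()
```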
Note: If you are interested in using our tool in a more flexible manner (for example, to apply it to your projects or a large dataset), please contact us using the email addresses listed at the bottom of this page.
- You can drag and drop your desired image onto the "input image" placeholder, or click on that placeholder to open an upload dialog box for selecting your image.
- Uploaded input images should be axial slices from brain magnetic resonance imaging (MRI) in any of the following sequences: T1-weighted, post-contrast T1-weighted, T2-weighted, or fluid-attenuated inversion recovery (FLAIR).
- For our tool to work properly, the uploaded input images should be pre-processed in the same way as the images in the BraTS 2021 dataset (e.g., skull-stripped) and have a resolution of at least 240 × 240 pixels.
- If you just need a few example images to try our app, feel free to download some from this link. These examples all come from a subset of the BraTS 2021 dataset that was held out as a separate test set for our deep learning model.
- Once an appropriate input image has been uploaded, the next step is to annotate your desired regions of interest (ROIs) on the input image. You can draw your annotations using the color-sketch tool available at the top of the input-image placeholder.
- Please note that the interface expects specific annotation colors; only certain RGB values are recognized by the model as indicating the ROI you wish to inpaint.
- Use the color-picker tool to select the color that corresponds to your desired ROI. After selecting the color picker, click on the colored portion of any of the color boxes below the input image to draw with that color (please watch the demo above to see how this works).
- If you are using a non-standard monitor, such as a mobile device, the color-picker tool may select non-standard pixel values; therefore, it is advisable to manually enter the RGB values for your ROI brush color. The following table displays the RGB values for the various ROIs that can be annotated on input images:
Region of Interest (ROI) | Red | Green | Blue |
---|---|---|---|
Tumor-less brain tissue | 0 | 255 | 255 |
Necrotic Tumor Core | 255 | 0 | 0 |
Tumoral Edema | 0 | 255 | 0 |
Tumoral Enhancement | 0 | 0 | 255 |
Mixed Tumoral Components | 255 | 255 | 0 |
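As an illustration of how the brush colors in the table above can be turned into per-ROI binary masks, here is a short, hypothetical Python sketch. The dictionary keys and the function name are placeholders and are not part of the released tool.

```python
import numpy as np

# Brush colors from the table above, keyed by illustrative ROI names.
ROI_COLORS = {
    "tumorless_brain_tissue": (0, 255, 255),
    "necrotic_tumor_core": (255, 0, 0),
    "tumoral_edema": (0, 255, 0),
    "tumoral_enhancement": (0, 0, 255),
    "mixed_tumoral_components": (255, 255, 0),
}

def extract_roi_masks(annotated_rgb: np.ndarray) -> dict:
    """Return one boolean mask per ROI by exact-matching the brush RGB values."""
    assert min(annotated_rgb.shape[:2]) >= 240, "slice should be at least 240 x 240 pixels"
    return {
        name: np.all(annotated_rgb[..., :3] == color, axis=-1)
        for name, color in ROI_COLORS.items()
    }

# Example usage with an annotated slice saved as a PNG:
# from PIL import Image
# masks = extract_roi_masks(np.array(Image.open("annotated_slice.png").convert("RGB")))
```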
- You can select the checkbox for any of the ROIs you have drawn to have our tool treat it as a bounding box. In bounding-box mode, synthetic pixels are generated within the smallest bounding box that fits around your drawn ROI, but the resulting ROI will not necessarily fill that entire box: the model decides on its own what contour the ROI should have and fills the surrounding area with tumor-free brain tissue.
- For the Gradio interface to work correctly with our tool, we slightly dilate your drawn ROI masks before feeding them to the model. As a result, the inpainted region of the input image may be slightly larger than the ROI you originally drew.
- Once you have uploaded and annotated your image, you may press the "inpaint" button to have our model generate the output image.
- DDPMs are not deterministic by default. Setting the seed to any value other than zero makes the model run deterministically; in other words, the same input image and mask should then produce the same output image. Seed = 0 indicates a random seed.
- For the inference runs, we have switched our model to a denoising diffusion implicit model (DDIM). This means you can adjust the number of steps the model takes to denoise the input image and generate the synthetic one. Reducing the number of denoising steps increases inference speed, although occasionally at the expense of image quality (see the sketch after this list).
- Increasing the conditioning weight can force the model to produce more realistic-looking tissue, but may also distort your results.
- With 25 denoising steps, inpainting can take the model up to 20 seconds.
- We have configured the Gradio interface to handle a limited number of concurrent requests. If others are using the app ahead of you, you may have to wait in a queue before you can use it.
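The sketch below illustrates the seed and step-count behavior described in the list above. It is a simplified, hypothetical example (assuming, for illustration, a 1,000-step training noise schedule and these function and variable names), not the tool's internal sampling code.

```python
import numpy as np
import torch

TRAIN_TIMESTEPS = 1000  # assumed length of the training noise schedule

def prepare_sampling(seed: int, num_ddim_steps: int = 25) -> np.ndarray:
    """Seed the RNGs (seed = 0 means 'stay random') and pick the DDIM timesteps."""
    if seed != 0:
        # A fixed, nonzero seed makes the stochastic sampler reproducible:
        # the same input image, mask, and seed should yield the same output.
        torch.manual_seed(seed)
        np.random.seed(seed)
    # DDIM visits only a subset of the training timesteps; fewer steps means
    # faster inference, occasionally at the expense of image quality.
    return np.linspace(0, TRAIN_TIMESTEPS - 1, num_ddim_steps).round().astype(int)[::-1]

print(prepare_sampling(seed=42, num_ddim_steps=25))
```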
The Mayo Clinic Artificial Intelligence (AI) Laboratory, located in Rochester, Minnesota, USA, is directed by Dr. Bradley Erickson and focuses on enhancing patient care by developing and deploying advanced machine learning (ML) solutions for a number of medical specialties, including diagnostic radiology, pathology, and cardiology. To learn more about the Mayo Clinic AI Laboratory and read about our recent publications, please feel free to follow our Twitter account or have a look at the Massive Open Online Course (MOOC) our lab has created to explain machine learning (ML) to individuals who are eager to apply it to healthcare.
The following data scientists contributed to developing the Multitask Brain Tumor Inpainting tool:
If you are interested in collaborating with our lab (e.g., to use our tool for a project), or would like to give us feedback or report an issue, please send a note to either of the following email addresses: