Volumetric Multimodality Neural Network

Adaptation of the V-Net architecture for brain lesion segmentation in medical images with four different MRI modalities.

Getting Started

These instructions will get you a copy of the project up and running on your local machine for development and testing purposes. See the Deployment section for references on the underlying V-Net architecture.

Prerequisites

Programs and packages required to run our network:

SimpleITK 
Python 2
Caffe-3D
CUDA with cuDNN

SimpleITK installation

Caffe-3D repo
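
A quick way to confirm that the environment is ready is an import check such as the minimal sketch below (not part of the repository; it only assumes that SimpleITK, NumPy, and the pycaffe module from the Caffe-3D build are on PYTHONPATH):

```python
# Minimal environment sanity check (illustrative sketch, not part of this repo).
from __future__ import print_function

def check(module_name):
    try:
        __import__(module_name)
        print("[ok]      " + module_name)
    except ImportError as err:
        print("[missing] " + module_name + " (" + str(err) + ")")

if __name__ == "__main__":
    # SimpleITK handles the .nii/.mhd I/O; "caffe" is the Python interface
    # built from the Caffe-3D fork and added to PYTHONPATH.
    for name in ("SimpleITK", "numpy", "caffe"):
        check(name)
```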

Training Process

The dataset must be in a medical image format (.nii, .mhd, etc.).
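
If your volumes come in another format, SimpleITK can convert them; below is a minimal, illustrative sketch (file names are placeholders, the script is not part of the repository):

```python
# Illustrative SimpleITK read/write sketch (placeholder file names).
from __future__ import print_function
import SimpleITK as sitk

# Read any format SimpleITK understands (.mhd, .nii, .nrrd, ...).
image = sitk.ReadImage("input_volume.mhd")   # placeholder path

# Voxel data as a NumPy array (z, y, x ordering), e.g. for inspection or preprocessing.
array = sitk.GetArrayFromImage(image)
print(image.GetSize(), image.GetSpacing(), array.dtype)

# Rewrite as NIfTI; spacing, origin and direction are carried over from `image`.
sitk.WriteImage(image, "1_channel1.nii")
```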

The data must be organized in the following directory structure:

Volumetric Multimodality Neural Network
│   README.md
│   main.py  
│   DataManager.py  
│   utilities.py  
│   layers.py
│   Vnet2.py    
└───Prototxt
│   │   test_noPooling_ResNet_cinque2.prototxt
│   │   train_noPooling_ResNet_cinque2.prototxt
└───Train
|   │   1_channel1.nii
|   │   1_channel2.nii
|   │   1_channel3.nii
|   │   1_channel4.nii
|   │   1_segmentation.nii
└───Test
|   │   1_channel1.nii
|   │   1_channel2.nii
|   │   1_channel3.nii
|   │   1_channel4.nii

Where channel# denotes the different modalities (for example: FLAIR, DWI, T1, T1c, ...); a sketch of how such a case can be read is shown after the training command below. To run the network, first set the training parameters in main.py. After setting the paths to the files and the training values, run:

python main.py -train
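
For reference, the layout above means four modality volumes plus one label volume per training case. The actual loading is done by DataManager.py; the sketch below is only an illustration of the naming convention (the load_case helper, the channel-first stacking, and the hard-coded paths are assumptions, not the repository's API):

```python
# Illustrative loader for one multi-modality case (sketch, not the repo's DataManager).
from __future__ import print_function
import os
import numpy as np
import SimpleITK as sitk

def load_case(data_dir, case_id, n_channels=4):
    """Stack the modality volumes of one case into a (channels, z, y, x) array."""
    channels = []
    for c in range(1, n_channels + 1):
        path = os.path.join(data_dir, "{}_channel{}.nii".format(case_id, c))
        channels.append(sitk.GetArrayFromImage(sitk.ReadImage(path)))
    volume = np.stack(channels, axis=0).astype(np.float32)

    # The segmentation is only present for training cases (Train/), not for Test/.
    seg_path = os.path.join(data_dir, "{}_segmentation.nii".format(case_id))
    labels = None
    if os.path.exists(seg_path):
        labels = sitk.GetArrayFromImage(sitk.ReadImage(seg_path)).astype(np.uint8)
    return volume, labels

if __name__ == "__main__":
    vol, seg = load_case("Train", case_id=1)
    print(vol.shape, None if seg is None else seg.shape)
```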

The trained models will be saved in /Models/MRI_cinque_snapshots.

Note: this process uses a lot of memory, so we recommend using a GPU. For that you must have CUDA and cuDNN installed and set up.
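
Assuming the Caffe-3D fork exposes the standard pycaffe interface, GPU mode is selected the usual way (device id 0 is just an example); cuDNN itself is enabled when building Caffe via USE_CUDNN in Makefile.config:

```python
import caffe

caffe.set_device(0)   # GPU id to use; adjust to your machine
caffe.set_mode_gpu()  # use caffe.set_mode_cpu() if no GPU is available
```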

Test Process

To test your model, update the parameters in main.py to select the last model, or the specific iteration, that you want to evaluate. After that, simply run:

python main.py -test

The output will be saved in the /Results folder.
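
When post-processing the predictions in /Results, it is useful to write each label map with the geometry of the corresponding input channel. The sketch below is only an illustration (the probability-map layout, the argmax step, and the file names are assumptions):

```python
# Illustrative: write a predicted label map with the geometry of its reference image.
import numpy as np
import SimpleITK as sitk

def save_prediction(probability_map, reference_image_path, output_path):
    """probability_map: (classes, z, y, x) array; the reference image provides the geometry."""
    labels = np.argmax(probability_map, axis=0).astype(np.uint8)
    reference = sitk.ReadImage(reference_image_path)

    prediction = sitk.GetImageFromArray(labels)
    prediction.CopyInformation(reference)  # keep the original spacing, origin and direction
    sitk.WriteImage(prediction, output_path)

# Hypothetical usage, following the Test/ naming above:
# save_prediction(probs, "Test/1_channel1.nii", "Results/1_segmentation.nii")
```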

Some of the results obtained with the best trained model are shown as example figures in the repository.

Deployment

For additional information on how the basic V-Net architecture works, have a look at the paper VNet and the tutorial VNet tutorial.

License

This project is licensed under the MIT License

Acknowledgments

  • We thank @faustomilletari for the original V-Net and 3D-Caffe implementations.

Authors

  • Silvana Castillo @SilvanaC
  • Laura Daza @lauradaza
  • Luis Rivera @luiscarm9
