# MRI_Segmentation

MRI Segmentation using U-Net

## Table of Contents

1. Usage
2. Process
3. Testing Network
4. Result
## Usage

1. Clone the repository:

   ```shell
   git clone <repository_url> your-folder
   ```

2. Install the requirements (the list targets CPU; for GPU support, install the GPU build of PyTorch):

   ```shell
   python -m venv .venv
   source .venv/bin/activate   # on Windows: .venv\Scripts\activate
   pip install -r requirements.txt
   ```

3. Modify the `config.yaml` file.
4. Run the main script:

   ```shell
   python main.py
   ```

(back to top)

## Process

*(figure: training pipeline)*

As the figure shows, the process starts with preprocessing the collected data. The data are then stored in a local database and used to train the network. The cycle of fetching data from the database, running it through the network, and optimizing and updating the parameters repeats for every batch. In this pipeline, the network is the heart of the process, and we aim to build an optimized network that performs its given task well.

### Data Loader

To load data, we first select a number of samples from the list of data names, load them from the database, and group them into a batch. The main challenge is ensuring that the samples in each batch are unique; we achieve this with Python generators, removing each selected name from the data list. We also patch the data to a desired size.
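The loading scheme described above can be sketched as follows. This is a minimal illustration, not the repository's actual loader: `batch_generator` draws names without replacement (so every batch is unique), and `patch_to_size` is a hypothetical helper that center-crops or zero-pads a volume to the desired patch size.

```python
import random
import numpy as np

def batch_generator(name_list, batch_size, load_fn):
    """Yield batches of loaded samples; each name is consumed once,
    so no sample can appear in more than one batch."""
    names = list(name_list)   # copy so the caller's list is untouched
    random.shuffle(names)
    while names:
        selected, names = names[:batch_size], names[batch_size:]
        yield [load_fn(n) for n in selected]

def patch_to_size(volume, target_shape):
    """Center-crop or zero-pad each axis to reach the target patch size."""
    out = np.zeros(target_shape, dtype=volume.dtype)
    src, dst = [], []
    for v, t in zip(volume.shape, target_shape):
        if v >= t:                      # axis too large: crop
            start = (v - t) // 2
            src.append(slice(start, start + t))
            dst.append(slice(0, t))
        else:                           # axis too small: pad
            start = (t - v) // 2
            src.append(slice(0, v))
            dst.append(slice(start, start + v))
    out[tuple(dst)] = volume[tuple(src)]
    return out
```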

### Augmenter

For data augmentation, we apply spatial transformations (rotation and scaling), Gaussian noise, Gaussian blur, multiplicative brightness, contrast augmentation, simulated low resolution, gamma transformation, and mirroring to a percentage of the samples in the dataset. However, since we want to measure how well the network copes with different deteriorations, we do not use this stage when training the networks.
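The idea of applying each transformation to only a fraction of the samples can be sketched as below. This is an illustrative subset, not the project's pipeline: only three of the listed transforms are shown, and the probability `p` and noise/brightness ranges are assumed values.

```python
import numpy as np

def augment(image, rng, p=0.5):
    """Apply each augmentation independently with probability p.
    A sketch: the full pipeline also includes rotation, scaling,
    blur, contrast, low-resolution simulation, and gamma."""
    if rng.random() < p:        # additive Gaussian noise
        image = image + rng.normal(0.0, 0.05, image.shape)
    if rng.random() < p:        # multiplicative brightness
        image = image * rng.uniform(0.8, 1.2)
    if rng.random() < p:        # mirroring along the last axis
        image = image[..., ::-1]
    return image
```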

### Network

Currently, we use a three-depth U-Net as the base network. The following picture shows the network that will be trained and tested.

- Real-valued network

*(figure: network architecture)*
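A three-depth U-Net can be sketched in PyTorch as below. This is a generic illustration under assumed hyperparameters (channel counts, batch norm, 2D convolutions), not the repository's exact architecture; it omits the deep-supervision heads mentioned in the loss section.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 conv layers, each followed by batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class UNet3(nn.Module):
    """Three-depth U-Net: three encoder levels, a bottleneck, and
    three decoder levels with skip connections."""
    def __init__(self, in_ch=1, n_classes=2, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.enc3 = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.bott = conv_block(base * 4, base * 8)
        self.up3 = nn.ConvTranspose2d(base * 8, base * 4, 2, stride=2)
        self.dec3 = conv_block(base * 8, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, n_classes, 1)   # per-pixel class logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        b = self.bott(self.pool(e3))
        d3 = self.dec3(torch.cat([self.up3(b), e3], dim=1))
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)
```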

### Loss Function

The loss used in training is the sum of dice loss (an implementation of the dice loss proposed in this paper) and cross-entropy loss. The networks are, moreover, trained with deep supervision.
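The combined loss can be sketched as below. This is a generic soft-dice plus cross-entropy formulation, not necessarily the paper's exact implementation, and the deep-supervision weighting over intermediate outputs is omitted.

```python
import torch
import torch.nn.functional as F

def dice_ce_loss(logits, target, eps=1e-5):
    """Sum of soft dice loss and cross-entropy.
    logits: (N, C, H, W) raw scores; target: (N, H, W) class indices."""
    ce = F.cross_entropy(logits, target)
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, logits.shape[1]).permute(0, 3, 1, 2).float()
    dims = (0, 2, 3)                      # sum over batch and spatial axes
    inter = (probs * onehot).sum(dims)
    denom = probs.sum(dims) + onehot.sum(dims)
    dice = (2 * inter + eps) / (denom + eps)
    return ce + (1 - dice.mean())
```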

### Optimizer

For optimization, stochastic gradient descent with a poly learning-rate schedule is used.
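The poly schedule decays the learning rate polynomially toward zero over training. A minimal sketch, assuming the common exponent of 0.9 (the repository's actual value lives in `config.yaml`):

```python
def poly_lr(initial_lr, epoch, max_epochs, power=0.9):
    """Poly learning-rate decay: lr = lr0 * (1 - epoch / max_epochs) ** power."""
    return initial_lr * (1 - epoch / max_epochs) ** power
```

In training, the returned value would be written into each parameter group of the SGD optimizer at the start of every epoch.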

(back to top)

## Testing Network

After training the networks, we test them on deteriorated data obtained with the methods proposed in FAST-AID Brain. Below you can see the effect of each change on a sample of hippocampus data.

*(figure: deteriorations)*


(back to top)

## Result

Here are the results of the networks trained in the previous tests:

### Brain Tumor Segmentation

Train dice score = 78.8%

Labeled data:

*(figure: brain_ideal)*

Labeled with predicted labels:

*(figure: brain_predicted)*

### Hippocampus Segmentation

Labeled data:

*(figure: hippocampus_ideal)*

Labeled with predicted labels by the real-valued network (train dice score = 82.3%):

*(figure: hippocampus_predicted_real)*

(back to top)