This project was part of my master's thesis: "A reconstruction method for electrical impedance tomography based on machine learning and real measurement data".\
Electrical impedance tomography (EIT) is a cost-effective, non-invasive imaging technique for determining the spatial impedance distribution of a body. It is used in scientific, industrial, and medical contexts, for example to monitor lung ventilation.
Many approaches to EIT image reconstruction already exist; they can be divided into model-based and data-based approaches. In recent years, data-based approaches have proven increasingly promising, but they require large training data sets, which are currently generated mostly by simulation. This work tests whether such models can also be trained on a data set of real measurement data.
First, an experimental setup is developed to generate a corresponding data set. This setup includes an EIT device, specifically the EIT32 model from ScioSpec, and a water-filled Plexiglas tank whose impedance distribution is recorded. In addition, a positioning system moves objects inside the tank to generate different impedance distributions.\
The following image shows the experimental setup:
The data generation process is shown in the following image:

The experimental setup is then used to collect over 14,000 samples. Three types of models are evaluated and compared on this data set: a linear regression, a K-Nearest Neighbors (KNN) regression, and a neural network. In addition, the possibilities of data augmentation and dimensionality reduction on EIT data and their effects are analyzed, using noise, rotation, Gaussian blur, and superposition augmentation.

The possibility of training data-based models on real measurement data was confirmed. The neural networks performed up to 28 % better in the relevant metrics than the linear and K-Nearest Neighbors regression approaches. It was also found that the quality of the data set significantly influences the generalization ability of the models. For this reason, it was investigated whether the variation in the data can be increased through augmentation. Augmentation was found to improve model quality, especially for small amounts of data (120 samples). Using noise and rotation augmentation increased the performance of the algorithms by up to 50 %, and Gaussian blur augmentation improved the visual impression of the reconstructions. It was also demonstrated that superposition augmentation enables models to generalize better to complex scenarios.
To install the required packages, you can use the following command:
pip install -r requirements.txt
- Control of the G-code positioning device is handled in GCodeDevice.py (a minimal sketch of this kind of control is shown after this list).
- Data handling for the EIT devices is done in data_reader.py.
- Most of the remaining code is used for data generation, training, and evaluation of the models.
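To illustrate what controlling such a positioning device looks like, here is a minimal sketch that sends Marlin-style G-code moves over a serial connection. The port, baud rate, and feed rate are placeholder assumptions; the actual interface is implemented in GCodeDevice.py.

```python
# Minimal sketch of moving a G-code positioning device over serial.
# Assumptions: Marlin-style firmware, reachable on /dev/ttyUSB0 (placeholder).
# The real implementation lives in GCodeDevice.py.
import time
import serial  # pyserial

def move_to(ser, x, y, feed_rate=1500):
    """Send an absolute linear move to the given XY position."""
    ser.write(b"G90\n")  # absolute positioning
    ser.write(f"G1 X{x:.2f} Y{y:.2f} F{feed_rate}\n".encode())
    ser.flush()
    time.sleep(0.1)  # give the firmware a moment to process the command

if __name__ == "__main__":
    with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=2) as ser:
        time.sleep(2)  # firmware typically resets when the port is opened
        move_to(ser, x=50.0, y=50.0)
```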
The general procedure is:
- Start the EIT32 software and run a measurement with your preferred settings (choose an output folder).
- Run the collect_real_data.py script to collect the data (enter the correct settings and output path).
- The script controls the 3D printer to move the object in the tank and collects the data.
- The data is saved in the output folder as a pickle file.
- In many places it is necessary to choose between absolute EIT and relative EIT as a function parameter (see the sketch after this list).
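The difference between the two modes can be illustrated as follows: absolute EIT uses the raw voltage frame directly, while relative (difference) EIT expresses each frame as a change with respect to a reference frame of the empty tank. This is only a sketch; the parameter and variable names are illustrative and do not match the scripts exactly.

```python
from typing import Optional
import numpy as np

def preprocess_frame(frame: np.ndarray,
                     reference_frame: Optional[np.ndarray],
                     eit_mode: str = "relative") -> np.ndarray:
    """Illustrative preprocessing: absolute EIT keeps the raw voltages,
    relative EIT normalizes against a reference frame (e.g. the empty tank)."""
    if eit_mode == "absolute":
        return frame
    if reference_frame is None:
        raise ValueError("relative EIT needs a reference frame of the empty tank")
    return (frame - reference_frame) / reference_frame
```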
The scripts used for data collection are:
- collect_negative_samples.py generates negative samples of the empty phantom tank.
- collect_real_data.py generates positive samples of the phantom tank with an object inside.
- collect_real_data_multi_freq.py generates positive samples of the phantom tank with an object inside at different frequencies of the EIT settings.
- simulate_phantom_voltages.py simulates the voltages for the phantom tank with an object inside.
- combine_datasets_and_convert_to_correct_format_for_training.py combines the collected data and converts it to the correct format for training (a simplified sketch follows below).
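The combination step could look roughly like the sketch below; the file pattern, folder names, and the assumption that each pickle file already contains a pandas DataFrame are illustrative, and the actual logic is in combine_datasets_and_convert_to_correct_format_for_training.py.

```python
# Sketch: merge several pickled measurement files into one training DataFrame.
# Assumes every *.pkl file in the folder holds a pandas DataFrame of samples.
from pathlib import Path
import pandas as pd

def combine_datasets(data_dir: str) -> pd.DataFrame:
    frames = [pd.read_pickle(p) for p in sorted(Path(data_dir).glob("*.pkl"))]
    return pd.concat(frames, ignore_index=True)

if __name__ == "__main__":
    combined = combine_datasets("Collected_Data")
    Path("Training_Data").mkdir(exist_ok=True)
    combined.to_pickle("Training_Data/combined_dataset.pkl")
```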
The default data location is Collected_Data or Collected_Data_Experiments.
It is recommended to move the data used for training to a separate folder such as Training_Data, to keep newly collected data apart from the training data.
Training will use the GPU if available (some manual adjustments in the code might be necessary).
The scripts used for training are located in Model_Training:
- Aggragated_Model_Training.py runs multiple training sessions with different parameters.
- Model_Training_with_pca_reduction_copy.py is the newest version of the training script.
- Models.py contains the models used for training.
- dimensionality_reduction.py contains the PCA reduction function.
- data_augmentation.py contains the data augmentation functions.
- Choose the model you want to train in the Model_Training_with_pca_reduction_copy.py script
- Choose the location of the training data
- Choose the augmentations and the PCA reduction in the script
- Run the script and enter the correct parameters
- The script will train the model and save it to the specified location
- The script will also save the training history and the model architecture (a simplified sketch of this training workflow is shown below).
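As a rough, self-contained sketch of such a training run (not the actual implementation in Model_Training_with_pca_reduction_copy.py and Models.py), the voltage vectors can be reduced with PCA and fed to a regressor that predicts the flattened impedance image. The column names, the PCA size, and the use of scikit-learn's MLPRegressor are assumptions made for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Illustrative columns: "voltages" = one measurement frame per row,
# "image" = flattened target impedance distribution.
df = pd.read_pickle("Training_Data/combined_dataset.pkl")
X = np.stack(df["voltages"].to_numpy())
y = np.stack(df["image"].to_numpy())

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Optional PCA reduction of the measurement vectors.
pca = PCA(n_components=64)
X_train_red = pca.fit_transform(X_train)
X_test_red = pca.transform(X_test)

# Simple fully connected network as a stand-in for the models in Models.py.
model = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=200, random_state=42)
model.fit(X_train_red, y_train)

print("Test MSE:", mean_squared_error(y_test, model.predict(X_test_red)))
```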
There are multiple types of augmentation available: noise, rotation, Gaussian blur, and superposition (see the sketch below).
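The following is a simplified sketch of what these augmentations can look like on (voltage, image) pairs; the actual functions are in data_augmentation.py, and the rotation augmentation (which has to shift the electrode channels and rotate the image consistently) is omitted here for brevity.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def augment_noise(voltages: np.ndarray, sigma: float = 0.01) -> np.ndarray:
    """Add Gaussian noise to the measured voltage vector."""
    return voltages + np.random.normal(0.0, sigma, size=voltages.shape)

def augment_blur(image: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Smooth the target impedance image with a Gaussian kernel."""
    return gaussian_filter(image, sigma=sigma)

def augment_superposition(sample_a, sample_b):
    """Combine two single-object samples into one pseudo multi-object sample
    by adding voltages and images (a simplification of the actual method)."""
    (va, ia), (vb, ib) = sample_a, sample_b
    return va + vb, np.clip(ia + ib, 0.0, 1.0)
```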
The scripts used for evaluation are located in Evaluation:
- Evaluate_Test_Set_Dataframe.py evaluates the test set stored in a DataFrame (a simplified sketch follows below).
- Live_Evaluate_Model.py evaluates the model with live data from the EIT32.
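A minimal sketch of the test-set evaluation is shown below; the pickle paths, column names, and the use of joblib for loading the model are assumptions, and Evaluate_Test_Set_Dataframe.py contains the actual evaluation.

```python
import joblib
import numpy as np
import pandas as pd
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Illustrative paths and column names ("voltages", "image").
test_df = pd.read_pickle("Training_Data/test_set.pkl")
X_test = np.stack(test_df["voltages"].to_numpy())
y_test = np.stack(test_df["image"].to_numpy())

model = joblib.load("trained_model.joblib")
y_pred = model.predict(X_test)

print("MSE:", mean_squared_error(y_test, y_pred))
print("MAE:", mean_absolute_error(y_test, y_pred))
```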
Sample usages can be found in:
Try_Other_Reconstruction_methodes.py
- Integrate the EIT32 controlling software into the Python code.
- First steps were started in ISX_3_eit.py.
- Check out https://github.com/spatialaudio/sciopy for more information.
For more in-depth information, see my master's thesis.