This is the companion repository for our paper "Automatic alignment of surgical videos using kinematic data", also available on ArXiv. The paper has been accepted at the Conference on Artificial Intelligence in Medicine (AIME) 2019.
The following is an example of how time series alignment is used to synchronize videos by duplicating gray-scale frames (a hedged sketch of this step follows the table below).
| Video without alignment | Video with alignment |
|---|---|
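As an illustration of the frame-duplication step, the sketch below takes a precomputed warping path (a list of `(i, j)` frame-index pairs, as produced by a DTW-style alignment) and repeats frames so that both videos end up with the same length. The function name `synchronize_frames` and the array-based frame representation are assumptions made for this example, not the repository's actual code.

```python
import numpy as np

def synchronize_frames(frames_a, frames_b, warping_path):
    """Duplicate frames along a warping path so both videos end up with
    the same number of frames: one pair of frames per path step.

    frames_a, frames_b : arrays of shape (n_frames, height, width)
    warping_path       : list of (i, j) pairs mapping frame i of video A
                         to frame j of video B (hypothetical input)
    """
    synced_a = np.stack([frames_a[i] for i, _ in warping_path])
    synced_b = np.stack([frames_b[j] for _, j in warping_path])
    return synced_a, synced_b

# Toy usage: video A has 4 frames, video B has 3; frame 0 of B is
# duplicated so the two synchronized videos have equal length.
path = [(0, 0), (1, 0), (2, 1), (3, 2)]
a = np.random.rand(4, 2, 2)  # stand-ins for gray-scale frames
b = np.random.rand(3, 2, 2)
sa, sb = synchronize_frames(a, b, path)
print(sa.shape, sb.shape)  # both (4, 2, 2)
```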
The following is an example of aligning the X coordinate's time series for subject F performing three trials of the suturing surgical task (a toy DTW sketch follows the table below).
| Time series without alignment | Time series with alignment |
|---|---|
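To make the time series alignment concrete, here is a minimal, self-contained dynamic time warping (DTW) sketch in NumPy that aligns two 1-D series and recovers the warping path. It is a toy illustration only: the paper's NLTS algorithm aligns multiple series and is not reproduced here, and the helper `dtw_path` is hypothetical.

```python
import numpy as np

def dtw_path(x, y):
    """Classic dynamic time warping between 1-D series x and y.
    Returns the accumulated cost and the warping path as (i, j) pairs."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (x[i - 1] - y[j - 1]) ** 2
            cost[i, j] = d + min(cost[i - 1, j - 1],  # match
                                 cost[i - 1, j],      # insertion
                                 cost[i, j - 1])      # deletion
    # Backtrack from (n, m) to recover the optimal alignment.
    i, j, path = n, m, []
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin((cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return cost[n, m], path[::-1]

# Toy usage: the second series is a time-shifted copy of the first.
x = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0])
y = np.array([0.0, 0.0, 1.0, 2.0, 3.0, 2.0, 1.0])
distance, path = dtw_path(x, y)
print(distance)   # 0.0: a perfect alignment exists
print(path)       # e.g. [(0, 0), (0, 1), (1, 2), ...]
```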
You will need JIGSAWS (the JHU-ISI Gesture and Skill Assessment Working Set) to re-run the experiments of the paper.
| Suturing | Knot-Tying | Needle-Passing |
|---|---|---|
To run the code, you will also need to download and install the following dependencies separately (the full list can be found here):
JIGSAWS contains three surgical tasks:

- Suturing
- Knot_Tying
- Needle_Passing
Before running the code, you might need to generate the Cython files using the following command:
```
cd src
./build-cython.sh
```
To align multiple videos for the Suturing task using the NLTS algorithm, you can run:
```
python3 main.py Suturing align_videos
```
To align only two videos for the Suturing task using the classic DTW algorithm, you can run:

```
python3 main.py Suturing align_2_videos
```
If you re-use this work, please cite:
```bibtex
@inproceedings{IsmailFawaz2019automatic,
  title     = {Automatic alignment of surgical videos using kinematic data},
  author    = {Ismail Fawaz, Hassan and Forestier, Germain and Weber, Jonathan and Petitjean, François and Idoumghar, Lhassane and Muller, Pierre-Alain},
  booktitle = {Artificial Intelligence in Medicine},
  year      = {2019},
  pages     = {104--113}
}
```