Implement torchIO for transformations #107
I would suggest digging a little into the types of artifacts available in torchIO and then deciding. Adding synthetic MS lesions is a good idea, but it is unclear what the best way to do it is. Maybe with GANs, as @naga-karthik mentioned.
Cool idea! If I understand correctly (to simulate lesions in the SC): Pre-processing:
GAN (Generative Adversarial Nets): Note that both networks are trained simultaneously, similarly to a two-player minimax game. The game ends when most outputs of G are classified by D as real (i.e., as coming from the baseline with MS). Articles using GANs to mimic atrophy:
@jcohenadad, could I present a plan to implement something similar at a meeting?
Sure! At our next ivadomed meeting, for example.
@PaulBautin So, I might have a few suggestions on how to synthesize with GANs. There are many types of GANs (here's a list; note that it is not up to date). The reason I put that list is that the GAN you mentioned is the most basic one, so there might be a more advanced or even better way of synthesizing images. If I understand the original GAN correctly, G does not see any input other than random noise (I'm not sure what you mean by point 4); the baseline segmentation only enters the picture with D, because it is compared with G's output within the adversarial loss function (which you correctly mentioned).

About the minimax game: AFAIK it is only in theory that the game "ends" (see Nash equilibrium). In practice, the loss function never converges, because G is minimizing it while D is trying to maximize it, so the stationary point will be a saddle point rather than a local/global minimum. As a result, GAN training is unstable and deciding "when to stop training" is quite subjective; it really depends on the quality of the generated images. If you are interested in this, there's a famous paper which lays out the standard "guidelines" for training GANs to get good results.

Please let me know if something isn't clear; I'd be happy to discuss. Also, I realize that I was being too picky with my response, please do not mind (I did not intend that). I was just hoping for us to be on the same page, because GANs can get a bit tricky to understand.
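To make the minimax dynamics above concrete, here is a minimal, untested PyTorch sketch of the alternating updates (D is pushed to maximize the objective, G to minimize it). The toy MLP architectures, latent dimension, and learning rates are arbitrary assumptions for illustration, not something proposed in this thread.

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 32 * 32  # assumed toy sizes (flattened 2D patches)

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))  # outputs a logit: "real" vs "generated"

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    b = real_batch.size(0)
    real_labels, fake_labels = torch.ones(b, 1), torch.zeros(b, 1)

    # D step: maximize log D(x) + log(1 - D(G(z)))
    fake = G(torch.randn(b, latent_dim)).detach()  # no gradient into G here
    loss_d = bce(D(real_batch), real_labels) + bce(D(fake), fake_labels)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # G step: fool D, i.e. push D(G(z)) towards the "real" label
    loss_g = bce(D(G(torch.randn(b, latent_dim))), real_labels)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

Because the two updates pull the same objective in opposite directions, the losses typically oscillate around a saddle point rather than converge, which is why stopping is usually judged from the quality of the generated images rather than from the loss curves.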
Hello @naga-karthik and @jcohenadad, I have been testing different artifacts within TorchIO to implement transformations that simulate more "realistic" intra-subject (scan-rescan) variability. I now have to choose the artifacts and transformations to keep. Two protocols that may allow for more "realistic" outcomes are:
Have statistics on artifact occurrence rates been published? Do you have suggestions for obtaining more realistic data augmentation?
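For reference, here is a hedged TorchIO sketch of how some of the artifacts discussed here could be sampled. Every parameter and probability is a placeholder to be tuned against real scan-rescan data, and the file name is hypothetical (API as of torchio ~0.18).

```python
import torchio as tio

# Pick at most one artifact per sample, with placeholder probabilities.
artifacts = tio.OneOf({
    tio.RandomMotion(degrees=5, translation=5, num_transforms=2): 0.34,  # subject motion
    tio.RandomGhosting(num_ghosts=(2, 6)): 0.33,                         # k-space ghosting
    tio.RandomBiasField(coefficients=0.3): 0.33,                         # B1 inhomogeneity
}, p=0.75)  # leave ~25% of samples artifact-free

augment = tio.Compose([
    tio.RandomNoise(std=(0, 0.05)),  # mild acquisition noise
    artifacts,
])

# Hypothetical single-subject usage:
subject = tio.Subject(t2=tio.ScalarImage('sub-01_T2w.nii.gz'))
augmented = augment(subject)
```

If occurrence statistics do exist, they could be plugged directly into the OneOf probabilities so that the augmentation frequencies match what is seen in practice.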
Up to now we use the script `affine_transform` to implement random rigid transformations that mimic subject repositioning for each scan. This method works well but is not very realistic, as it may oversimplify the reality of image acquisition (i.e., it produces non-realistic intra-subject variability). An improvement would be to augment the dataset with things like random noise, random elastic transformations, and several artifacts (all part of torchIO).
@jcohenadad, what do you think? An alternative would be to use the data augmentation tools that were implemented in ivadomed to train models (https://github.com/ivadomed/ivadomed/blob/master/ivadomed/transforms.py).
A remaining issue is implementing biologically realistic transformations (e.g., the choice of max_displacement for random elastic deformations, SC lesions...).
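As a starting point, here is a rough, untested sketch of how the current affine_transform step could be complemented with the torchIO transforms mentioned above. All parameter values (including max_displacement) are placeholders to be tuned for biological plausibility, and the file names are hypothetical.

```python
import torchio as tio

augment = tio.Compose([
    tio.RandomAffine(scales=0.05, degrees=10, translation=5),  # repositioning-like affine changes
    tio.RandomElasticDeformation(num_control_points=7,
                                 max_displacement=4),          # small, smooth deformations
    tio.RandomNoise(std=(0, 0.03)),                            # scanner noise
])

subject = tio.Subject(
    t2=tio.ScalarImage('sub-01_T2w.nii.gz'),
    seg=tio.LabelMap('sub-01_lesion-manual.nii.gz'),
)
augmented = augment(subject)  # spatial transforms are also applied to the label map
```

Keeping the segmentation in the Subject keeps the spatial transforms consistent between the image and its labels, which matters if these augmentations feed the ivadomed training pipeline.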