How to compute the metrics between testset predictions and true labels? #3
Comments
Understood a bit. There is cross-validation that can be done with
Hello @cndu234, have you taken a look at
Hello @hoangtan96dl and @cndu234, I am also developing a piece of code to do the evaluation on the test dataset. I am not very familiar with PyTorch Lightning, so please ignore this completely if it does not make sense. As far as I understood, there is a data module whose stage can be set for the fit, validation, and test stages. Setting the stage of the data module to test is a bit strange to me, because if you set the stage to test then you should call
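To make the stage mechanism concrete, here is a minimal sketch of the `setup(stage)` pattern that a LightningDataModule follows. The class and dataset names are hypothetical, and plain Python is used so the idea is visible without installing pytorch-lightning; a real datamodule would subclass `LightningDataModule` and build actual datasets.

```python
# Hypothetical sketch of the stage mechanism of a Lightning-style datamodule.
# Not the repository's code; names (MyDataModule, the placeholder datasets)
# are illustrative only.

class MyDataModule:
    def __init__(self, data_root="data/"):
        self.data_root = data_root
        self.train_ds = self.val_ds = self.test_ds = None

    def setup(self, stage=None):
        # Lightning calls setup("fit") before training/validation,
        # and setup("test") before the test loop runs.
        if stage in (None, "fit"):
            self.train_ds = ["train_sample"]  # placeholder dataset
            self.val_ds = ["val_sample"]
        if stage in (None, "test"):
            self.test_ds = ["test_sample"]


dm = MyDataModule()
dm.setup("test")
# After setup("test"), only the test dataset is prepared:
print(dm.test_ds is not None, dm.train_ds is None)
```

This is why setting the stage to "test" and then running a validation-style loop feels mismatched: each stage only prepares its own datasets.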
What I was looking for was predicting on the test set to collect the predicted label maps (image outputs) and the class-based Dice coefficients, precision, and recall. As far as I understood, this is not possible by setting the stage to "test", so I set the stage to "validate". But then the problem arises that the data module reads from the "training" key of the JSON file.
Then in the evaluate file, I call
I could have had a self.test_transforms and so on, but I preferred to keep things simple until I understand the script better. You need to change _load_data_dicts accordingly:
I turned off shuffling because, in my dataset, I am predicting over small subvolumes of a large volume which should be attached to each other later; at the time of attaching the subvolumes together, I want to make sure the sequence of outputs is the same as the sequence of inputs. My test_dataloader looks like this:
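The reason shuffling must stay off can be sketched with a toy split/stitch round trip. This is a hypothetical illustration, not the repository's code: it assumes the volume is cut into equal-length chunks along one axis, with the chunks standing in for the subvolumes that are predicted on and later reassembled.

```python
# Hypothetical sketch: stitching subvolumes back together is only correct
# if the dataloader preserved their order (shuffle=False).

import numpy as np

def split_volume(vol, n_chunks):
    """Split along axis 0 into equally sized subvolumes."""
    return np.split(vol, n_chunks, axis=0)

def stitch_volume(chunks):
    """Reassemble subvolumes; correct only if the order was preserved."""
    return np.concatenate(chunks, axis=0)

vol = np.arange(24).reshape(6, 2, 2)
chunks = split_volume(vol, 3)         # pretend each chunk is predicted on
restored = stitch_volume(chunks)      # in-order: identical to the input
print(np.array_equal(restored, vol))  # True only because order was kept
```

If the chunks came back in a shuffled order, the concatenation would produce a scrambled volume, which is exactly the failure mode shuffle=False avoids.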
You can use the one for
I know it is not logical to have test data under the training key of the JSON, but I have worked around this for days and it seems there is a lack of support from the framework, or I am a newbie and don't know the right way to do it. This worked for me, though: while prediction on the validation set is perfect, it fails on the test set, suggesting that the model overfits on my data. Please update me if you find a straightforward approach.
Thank you @noushinha for your comment. Let me clarify a little bit about the pytorch-lightning framework. If you want to strictly follow the PyTorch Lightning way, then you must follow their rules (which means you should take time to learn their framework). For example, as you pointed out, if I want to let the Trainer handle the testing phase for me with
The benefit of this is that my main function will be very short and clear, as you can see in the
However, another option is that you can use
Then I can write the whole evaluation loop on these outputs instead of defining it in
Another suggestion: if you want to use a custom dataset, you should look at the datamodule of the iSeg or LUNA dataset. The hippocampus and cardiac datasets use a supported function.
I know it is pretty hard to understand and modify if you are not familiar with these frameworks, but there are two reasons I want to use them:
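An offline evaluation loop over collected outputs can be sketched as below. This is a hypothetical, minimal version (not the repository's code): it assumes the outputs have been gathered as (prediction, label) pairs of integer label maps, and it computes a per-class Dice coefficient over each pair.

```python
# Hypothetical sketch of an evaluation loop over collected
# (prediction, label) pairs, computing a per-class Dice coefficient.
# The dummy `outputs` list stands in for what prediction would return.

import numpy as np

def dice_per_class(pred, label, num_classes, eps=1e-8):
    """Dice = 2|P ∩ L| / (|P| + |L|) for each class index."""
    scores = []
    for c in range(num_classes):
        p = (pred == c)
        l = (label == c)
        inter = np.logical_and(p, l).sum()
        scores.append((2.0 * inter + eps) / (p.sum() + l.sum() + eps))
    return scores

# dummy "outputs" as the evaluation loop would see them
outputs = [(np.array([0, 1, 1, 2]), np.array([0, 1, 2, 2]))]
for pred, label in outputs:
    print([round(s, 3) for s in dice_per_class(pred, label, num_classes=3)])
    # → [1.0, 0.667, 0.667]
```

The same loop shape extends naturally to precision, recall, or any other metric computed from the collected pairs.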
Thanks a lot @hoangtan96dl. As I mentioned, I was looking for a quick solution to see whether I should continue working on the model for my custom dataset. That is why I chose a naive approach based on the little knowledge I had of the frameworks used. I must repeat that my solution is neither a general, straightforward solution nor the best one. From my point of view, with the whole storm of new frameworks released on a monthly basis, it is not wise to sit down and learn all of them. I have been developing in PyTorch for a while and could relate PyTorch Lightning to it a bit; this is what was taught on the website as well. What you explained is quite helpful for coming up with a smart solution for test evaluation. Now that I am sure further development might be useful on my data, I have started by writing
Thanks again for the repository. I always learn from others.
Thank you @noushinha @hoangtan96dl, I will try these out.
I am using my custom data. After training, how can I compute the metrics between test-set predictions and true labels? I am using the Hippocampus data loader provided by you, but I have imagesTr, labelsTr for training, and imagesTs, labelsTs for testing. I want to compute metrics for the test set.
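For the metrics themselves, per-class precision and recall between test-set predictions and true labels can be sketched as below. This is a hypothetical illustration, not code from the repository: it assumes predictions and labels are integer label maps of the same shape, with one function call per class of interest.

```python
# Hypothetical sketch for per-class precision and recall between
# test-set predictions and true labels (integer label maps assumed).

import numpy as np

def precision_recall(pred, label, cls, eps=1e-8):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN) for one class."""
    p = (pred == cls)   # predicted-positive mask for this class
    l = (label == cls)  # ground-truth-positive mask for this class
    tp = np.logical_and(p, l).sum()
    precision = tp / (p.sum() + eps)
    recall = tp / (l.sum() + eps)
    return float(precision), float(recall)

pred = np.array([1, 1, 0, 2, 2])
label = np.array([1, 0, 0, 2, 1])
print(tuple(round(x, 3) for x in precision_recall(pred, label, cls=1)))
# → (0.5, 0.5)
```

Looping this over each class index (and over each imagesTs/labelsTs pair) gives the class-based test-set metrics asked about above.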