Requirements of WSI image for registration to work? #2

Open
rutgerfick opened this issue Sep 22, 2021 · 6 comments

Comments

@rutgerfick

rutgerfick commented Sep 22, 2021

Hello,

I found your repo following your COMPAY publication, congratulations on that!

However, I've been trying out the example from your main README.md with some of my local images (HER2 or KI67 to HE), and I find that the registration either fails with an error for various (OpenCV-internal) reasons (it can't find a homography, or some function returns None), or it returns a very poor registration no matter which parameters I perturb.

The registration is poor if I give it cross-stain images to register (HER2 to HE or KI67 to HE), even though the images look pretty aligned and similar. If I give the same HE image as input and target then the registration works (so at least that sanity test works).

My images are .svs files that only contain a 40x magnification.

Are there any hidden parameters or requirements that a WSI file must have that the code intrinsically uses? Must it be a pyramid image for example?

Thank you for your help,
Rutger

@ChristianMarzahl
Owner

ChristianMarzahl commented Sep 23, 2021

Dear Rutger,

I appreciate your interest in our registration algorithm and implementation.
These are all the parameters you can currently use:

parameters = {
    # feature extractor parameters
    "point_extractor": "sift",  # "orb" or "sift"
    "maxFeatures": 512,
    "crossCheck": False,
    "flann": False,
    "ratio": 0.6,
    "use_gray": False,

    # QTree parameters
    "homography": True,
    "filter_outliner": False,
    "debug": False,
    "target_depth": 1,
    "run_async": True,
    "num_workers": 2,
    "thumbnail_size": (1024, 1024)
}

As the paper explains, our algorithm works best on cytology images or when the same slide is digitised with different stains.
It sounds like your setup involves different sections of a tissue sample. As we showed in the paper (on the MSSC dataset), our approach has problems with this and no longer finds keypoints. We are working on this challenge and will post an update.
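
For orientation, the matching step that these parameters control corresponds roughly to the following plain OpenCV sketch (illustrative only, not the exact code in this repository; the function name and defaults are made up for the example):

import cv2
import numpy as np

def match_thumbnails(src_gray, dst_gray, point_extractor="sift",
                     max_features=512, ratio=0.6, use_homography=True):
    """Find keypoints in both thumbnails, match them, and fit a transform."""
    if point_extractor == "sift":
        detector = cv2.SIFT_create(nfeatures=max_features)
        norm = cv2.NORM_L2
    else:  # "orb"
        detector = cv2.ORB_create(nfeatures=max_features)
        norm = cv2.NORM_HAMMING

    kp1, des1 = detector.detectAndCompute(src_gray, None)
    kp2, des2 = detector.detectAndCompute(dst_gray, None)
    if des1 is None or des2 is None:
        return None  # no keypoints found in one of the images

    # crossCheck=False so that Lowe's ratio test can be applied to the two best matches
    matcher = cv2.BFMatcher(norm, crossCheck=False)
    good = [pair[0] for pair in matcher.knnMatch(des1, des2, k=2)
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
    if len(good) < 4:
        return None  # too few matches to fit any transform

    src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    if use_homography:
        transform, _ = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    else:
        transform, _ = cv2.estimateAffinePartial2D(src_pts, dst_pts, method=cv2.RANSAC)
    return transform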

With kind regards,
Christian

@rutgerfick
Author

Thanks for the reply Christian.

The new parameters you gave didn't help; they even prevented me from visualizing the example images :(.

Let me at least show you what I'm looking at with the old parameters for two cases.

case that sort of works

When the slides are aligned, it sort of works. If I understand correctly, the source (top row) and GT (bottom row) show the anchor points that the algorithm finds in the source and target slide, and these look like they could be the same area. However, the transformation (middle row) is then something completely different. What do you think is the problem?
[attached screenshot: debug output for the aligned case]

case that doesn't work

When the slides are rotated relative to each other, it basically just doesn't work. Do the slides need to be pre-registered (so they are at least in the same orientation) for your algorithm to find good anchor points?
[attached screenshot: debug output for the rotated case]

@ChristianMarzahl
Owner

Dear Rutger,

with "debug": False or "debug": True you can toggle the debug view.

Please try:
"homography": False,

The upper (aligned) case actually should work, and it looks good.

But without the data, I have no chance of figuring out what's going on.

Please also have a look at the examples we provided:
https://github.com/ChristianMarzahl/WsiRegistration/blob/main/demo/IHC_HE.ipynb

With kind regards,
Christian

@rutgerfick
Author

rutgerfick commented Oct 26, 2021

Hi Christian, I hope you had a nice time on your holidays!

So, I looked into your code a little more deeply, and I think I know what the problem is when the source and target WSI are rotated w.r.t. each other: OpenSlide does not allow you to request a rotated region from the WSI.

Your SIFT algorithm is able to find rotationally invariant keypoints (so it will find matching points even if the WSIs are rotated w.r.t. each other). However, as the code stands (I believe), you can't query a rotated region in your debug mode, so it returns a garbage crop that makes no sense. Did you not account for rotated images in your approach? (It is quite common.)
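
To illustrate what I mean (with a placeholder file name and angle): OpenSlide only ever hands back an upright rectangle, so any rotation has to be applied to the returned image afterwards.

import openslide

slide = openslide.OpenSlide("source.svs")             # placeholder path
region = slide.read_region((0, 0), 0, (2048, 2048))   # always an axis-aligned crop
region = region.convert("RGB")

# Compensating for a rotated target would mean rotating (and re-cropping)
# the returned PIL image yourself, e.g.:
rotated = region.rotate(-30, expand=True)             # the angle here is just an example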

Moreover, I appreciate the setup of the repo and your approach, but I have some questions:

  • Are you planning on adding functionality for any of the follow-up steps after a reasonable registration has been found? By this I mean, for example, actually transforming and saving the source WSI into the target space (a rough sketch of the kind of warp I mean follows this list).
  • When the algorithm fails, the reasons are quite opaque. If it finds (I guess) no keypoints, it's hard for me to know what to do about it: two images may look visually similar to my eye and I could overlay them roughly, but your algorithm will sometimes just straight-up do nothing. What do you suggest doing in this case?
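
Here is the rough sketch I mean, at thumbnail scale only and with placeholder names (the homography H would come from the registration; I use the identity as a stand-in):

import cv2
import numpy as np

H = np.eye(3, dtype=np.float64)                     # stand-in for the estimated 3x3 homography
source_thumb = cv2.imread("source_thumbnail.png")   # placeholder path
target_size = (1024, 1024)                          # (width, height) of the target thumbnail

# Warp the source thumbnail into the target's coordinate system and save it
warped = cv2.warpPerspective(source_thumb, H, target_size)
cv2.imwrite("source_in_target_space.png", warped)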

Thanks for your help,
Rutger

@ChristianMarzahl
Owner

Dear Rutger,

Thanks for your interest in the repository, and thanks for the hint about what the problem with the rotated images could be. I will look into that as soon as possible.

I plan to do the following with this repository soon (Q1 2022):

  • Fix some bugs and add a detailed visualisation of why something is not working
  • Add a registration algorithm that can handle consecutive WSI slices. We are currently collecting data for that and experimenting with some approaches.

Regarding your question: I'm currently not planning to integrate functions to transform WSIs to the target space.

With kind regards,
Christian Marzahl

@rutgerfick
Author

Hi Christian,
I sent you an email at Christian.marzahl@fau.de to follow up on this conversation. Is that the correct address?

Best,
Rutger
