Comparing different staining on consecutive sections


bene....@gmail.com

Jun 8, 2018, 2:26:04 AM
to QuPath users
Dear QuPath community,

We are fairly new to QuPath, so please excuse us if these questions are trivial or have been raised before.

We have consecutive sections of the same tissue which are stained and scanned independently (on different scanners). The two scans have slightly different pixel sizes. If we load both images, display them side-by-side, and synchronize both viewers, they are not shown at the same scale (it seems pixel dimensions are not taken into account, i.e. the scale bars have different lengths; see screenshot 1). Is there a way to circumvent this?

Is there a way to get an overlay of the two slide images?

What we want to achieve in the end is to identify cells which are positive in both scans. This requires transferring detections from one slide image to the other (and we don't know if this is actually possible - if somebody could clarify...).

The most severe problem, I guess, is that the two images are not aligned perfectly (screenshots 1 and 2), and if one compares corresponding cores, they might well be shifted and rotated within the automatic TMA core detection. I'm sure people have had this problem before. Is there any advice on how to deal with that?

Or any general advice on how we could achieve our goal?

Best wishes and thanks a lot in advance,

Benjamin Schmid
qupath1.jpg
qupath2.jpg

Pete

Jun 8, 2018, 3:16:35 AM
to QuPath users
Hi Benjamin,

I'm afraid this looks very difficult... not helped by the fact that QuPath doesn't contain any image registration.

I've explored ways of interactively overlaying two different slides, but it only really supports rigid alignment (and is in an obscure branch of my own fork of QuPath*, not in v0.1.2).  I could probably extend it to compensate for the differences in pixel size fairly easily... but it still wouldn't really help you, since the low power view makes clear that more complex alignment is needed.  It's also not particularly useful yet.

It can be done with QuPath in the sense that pretty much anything can be done, depending on how much Java or Groovy someone writes to do it.  But it looks like it would require a new suite of tools to help, and be a pretty big project.

If you could take care of the image registration outside of QuPath, then we could get inventive and create an image reader that basically dynamically appends a new pseudo-fluorescence channel based on color deconvolving the (pre-registered) brightfield image - in which case it should work in QuPath directly as if there were just one more channel.  That could be somewhat fun, but achieving successful whole slide registration is a big if.  I think microscopyra has previously mentioned a (commercial) tool for whole slide registration, but I've never used it.
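For what it's worth, the color deconvolution half of that idea can be sketched outside QuPath too. A minimal sketch of the Ruifrok & Johnston optical-density transform in Python/NumPy - the stain vectors below are generic H-DAB defaults, not values from either of these slides, and registration is assumed to have happened already:

```python
import numpy as np

# Generic H-DAB stain vectors (Ruifrok & Johnston defaults); each row is the
# optical-density direction of one stain in RGB space.  Real slides would
# need their own estimated vectors.
STAINS = np.array([
    [0.650, 0.704, 0.286],   # hematoxylin
    [0.269, 0.568, 0.778],   # DAB
    [0.716, 0.801, 0.558],   # residual
])
STAINS /= np.linalg.norm(STAINS, axis=1, keepdims=True)

def pseudo_fluorescence(rgb, stain=1):
    """Deconvolve an RGB tile (H x W x 3, 0-255) and return one stain's
    optical density as a single 'pseudo-fluorescence' channel (H x W)."""
    # Beer-Lambert: optical density = -log10(transmitted / incident light)
    od = -np.log10(np.maximum(rgb.astype(float), 1.0) / 255.0)
    # Unmix: per-pixel stain concentrations solve od = conc @ STAINS
    conc = od.reshape(-1, 3) @ np.linalg.inv(STAINS)
    return conc[:, stain].reshape(rgb.shape[:2])
```

The resulting channel could then be appended to the fluorescence image tile-by-tile by such a hypothetical reader.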

Other options that come to mind:
  • Export all cores as ImageJ TIFFs, and turn it into a core-by-core ImageJ/Fiji problem.  Because the TIFFs contain calibration information, you could potentially bring back any ImageJ ROIs created into QuPath as a final step to visualize them in the context of at least one of the whole slide images.
  • Handle each image separately, and export the cell features - along with centroids.  Then the problem turns into one of aligning point clouds and finding matches... maybe in R or Python.  This is probably difficult in itself, and even more so given that the same cells don't necessarily appear in the consecutive sections.
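If you go the point-cloud route, the rigid part at least is tractable with standard tools. A hypothetical sketch in Python/NumPy (assuming cell centroids exported as N x 2 arrays in a shared physical unit, and a handful of manually matched landmark pairs to seed the fit - none of this exists in QuPath itself):

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation + translation (Kabsch algorithm) mapping src
    onto dst; src and dst are N x 2 arrays of corresponding landmarks."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    r = (u @ vt).T
    if np.linalg.det(r) < 0:          # guard against reflections
        vt[-1] *= -1
        r = (u @ vt).T
    t = dst.mean(axis=0) - src.mean(axis=0) @ r.T
    return r, t

def match_cells(src, dst, r, t, max_dist=5.0):
    """Apply the fitted transform to src centroids and pair each with its
    nearest dst centroid within max_dist; returns (src_idx, dst_idx) pairs.
    Cells with no neighbour inside max_dist stay unmatched."""
    moved = src @ r.T + t
    d = np.linalg.norm(moved[:, None, :] - dst[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    return [(i, int(j)) for i, j in enumerate(nearest) if d[i, j] <= max_dist]
```

Non-rigid distortion between sections would still defeat a single global fit, so one fit per TMA core might be a more realistic granularity - and, as noted above, the same cell may simply not exist in both sections, which no matching scheme can fix.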
If you find a way that works, I'd be really curious - it's an interesting problem, and not one I've ever had to try to solve myself (yet)...

Pete

* See https://petebankhead.github.io/qupath/2018/03/19/qupath-updates.html - this might help with the differing pixel sizes when viewing images side-by-side, by not forcing the downsamples to match for both images when synchronized.

micros...@gmail.com

Jun 8, 2018, 11:02:58 AM
to QuPath users