Image overlay


Alistair

Mar 2, 2018, 10:16:11 AM
to QuPath users
Hi,

Thanks for all the hard work with QuPath; I'm finding it really useful for my work.

We are currently using multiplex immunohistochemistry.  This involves staining a section by IHC, then scanning the image using a slide scanner.  The same section is then destained and a new IHC target is stained, using the same chromogen as before.  We do this multiple times on the same tissue section for several different markers.  In order to analyse the images, I select ROIs, produce a DAB (or AEC) B+W image and export them to ImageJ for false colouring and image merge/overlay with the same area in the other images.  In a perfect world, I'd be able to overlay the whole scanned images in QuPath; however, I believe there is commercial software that can do this.

The problem I'm having with my own particular work-around is that each time the section is scanned between stains, there is a small difference in the x/y coordinates of the image.  I suspect that the slide scanner is responsible: it automatically detects the edge of the tissue slightly differently each time due to subtle differences in focus or counterstain.  Thus, when I mark out ROIs in one image, they appear in different positions when I copy them onto the next image, and I then have to manually reposition them over the same area of tissue.  This is quite cumbersome and difficult.

Looking through previous questions, I found a script that compensates for the x/y differences.  However, the offset has to be worked out for each image, is never quite accurate, and the process is still quite cumbersome.

Is it possible to manually assign a point in each image individually as the origin (0,0), so that anything done in one image can then be copied to the other images and appear over exactly the same area of tissue?

Many thanks in advance for any help with this. 

micros...@gmail.com

Mar 2, 2018, 11:24:26 AM
to QuPath users
Well, in addition to the free trial of Slidematch to align the images before importing them into QuPath (only useful for a short project; no clue how much it actually costs), there are both the script you described and another one that handles rotation.  You would need at least two points, though, as a single point would not be enough to measure rotation, of course!

If you think the differences do not involve any rotation, I would recommend finding the X,Y coordinate of interest in one image, writing it out to a text file, and then reading that coordinate back in and using it to automatically adjust all objects in each other image in the project.
I'm sure there are other examples, but the first one I remember for reading and writing files like this comes from my attempt at a slide exploration tool, here: https://gist.github.com/Svidro/86fb224d69484ae5955631ce68d27054

Lines 75-81 of Location File Creator.groovy, and lines 28-30 plus lines 45-47 of Slide Explorer.groovy, demonstrate the code for writing and reading coordinates.  Read the stored coordinates back in with one or two further scripts (depending on whether you want to calculate rotation), and you should be good to go.

Once you have the imported coordinates and the coordinates marked in the current image, subtract them to get your translation, select all objects, and apply the translation to their coordinates.
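That recipe can be sketched outside QuPath too.  Below is a minimal plain-Python illustration of the logic only - the real scripts are Groovy and use the QuPath API, and the file name, format and coordinates here are invented for the example:

```python
# Sketch of the anchor-point idea: store the reference coordinate from one
# image, read it back for the next image, and shift object coordinates by
# the difference. File name/format are illustrative, not from the scripts.

def write_anchor(path, x, y):
    """Save the anchor coordinate marked in the reference image."""
    with open(path, "w") as f:
        f.write(f"{x}\t{y}\n")

def read_anchor(path):
    """Read the stored anchor coordinate back."""
    with open(path) as f:
        x, y = f.readline().split("\t")
    return float(x), float(y)

def translate(points, dx, dy):
    """Apply the same x/y shift to every object coordinate."""
    return [(x + dx, y + dy) for x, y in points]

# Reference image: anchor marked at (1200.0, 850.0)
write_anchor("anchor.txt", 1200.0, 850.0)

# Second image: the same nucleus sits at (1215.5, 842.0)
ref_x, ref_y = read_anchor("anchor.txt")
dx, dy = 1215.5 - ref_x, 842.0 - ref_y   # offset between the two scans

roi = [(100.0, 100.0), (300.0, 100.0), (300.0, 250.0)]
shifted = translate(roi, dx, dy)         # ROI repositioned for image 2
```

In QuPath itself the same subtraction would be applied to the ROI coordinates of every selected object.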

It sounds like you have the translation script, and the rotation script is here: https://groups.google.com/forum/#!searchin/qupath-users/rotation%7Csort:date/qupath-users/UvkNb54fYco/ri_4K6tiCwAJ


Pete

Mar 2, 2018, 12:36:26 PM
to QuPath users
I think this is a really interesting application, and a few possibilities come to mind.

As microscopyra suggested, life is a lot easier the simpler the transformation can be.  Translation only should be fine.  Any kind of rotation is a lot more awkward, but tolerable.  Both of those can be represented efficiently and applied dynamically, but if any non-rigid or local transformations are involved then it probably requires a completely separate alignment step that writes out 'perfectly' aligned whole slide images.
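Incidentally, the reason rotation needs two matched points is that one pair only pins down the translation; the second pair fixes the angle.  A hedged Python sketch of recovering a rigid (rotation + translation) transform from two matched landmarks, with made-up coordinates:

```python
import math

# Recover a rigid transform mapping landmarks p (image A) onto q (image B).
# One matched pair gives only the shift; the vector between two pairs
# additionally gives the rotation angle.

def rigid_from_two_points(p1, p2, q1, q2):
    """Return (angle, tx, ty) such that rotate-then-translate maps p onto q."""
    # Rotation = change in direction of the vector between the two landmarks.
    angle = (math.atan2(q2[1] - q1[1], q2[0] - q1[0])
             - math.atan2(p2[1] - p1[1], p2[0] - p1[0]))
    c, s = math.cos(angle), math.sin(angle)
    # Translation = whatever is left over after rotating the first landmark.
    tx = q1[0] - (c * p1[0] - s * p1[1])
    ty = q1[1] - (s * p1[0] + c * p1[1])
    return angle, tx, ty

def apply_rigid(point, angle, tx, ty):
    """Apply the recovered transform to a single (x, y) coordinate."""
    c, s = math.cos(angle), math.sin(angle)
    x, y = point
    return (c * x - s * y + tx, s * x + c * y + ty)
```

With the transform recovered, `apply_rigid` would be run over every object coordinate being transferred.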

Under the assumption that, because it's the same tissue and scanner, we just need to deal with translation, the 'simple' approach might be this:
  1. Mark a single point on each image, in the centre of the exact same nucleus in each case.
  2. Mark one or more 'area' annotations (e.g. rectangles) on one image.
  3. With a script, determine the x & y shift from the point and apply it to each area annotation when transferring it to the other images.
If you think that would be useful then I can certainly help with that script.  It could either be applied to all images in a project 'blindly', or to images that are all opened simultaneously in QuPath using multiple viewers - whichever approach you think would help most.  The latter makes it quick to check that it has done the right thing, but having lots of images open at the same time can get pretty awkward.

(In the longer term, this could be built into QuPath so that the Transfer last annotation command understands it - but it would require defining some kind of 'anchor point' annotation... and therefore be a slightly bigger change, and not possible through a script alone.)

The elaborate approach I can think of is something like this:
  1. Figure out the transformation between images either using manual points as above, or automatically (OpenCV has a nice method for this that could be called from within QuPath; the main question is how well it can handle different staining patterns, and if it can be run at a high enough resolution)
  2. Create a QuPath 'ImageServer' that automatically applies color deconvolution to separate out the DAB (or AEC) staining, and 'tricks' QuPath into thinking that the deconvolved DAB for each image represents a single channel of a fluorescence image.  Multiple channels would come from the multiple images, applying the transform dynamically.  You might optionally also include the hematoxylin channel from one image (or more) too.
To get reasonable performance in this case, the spatial transform has to be really fast - which it sounds like it should be here.  If it is, the color deconvolution part should not be a problem.  And then as far as QuPath is concerned, it should just 'think' it has a fluorescence image and handle it accordingly - with access to the same annotation and analysis tools.
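The colour deconvolution step itself is standard.  As a rough NumPy illustration (using the published Ruifrok & Johnson H-DAB stain vectors; QuPath estimates its own vectors per image, so these numbers are only textbook defaults), extracting a DAB 'channel' looks something like this:

```python
import numpy as np

# Colour deconvolution sketch (Ruifrok & Johnson H-DAB defaults).
# Each row of M is a unit optical-density vector for one stain.
H   = np.array([0.650, 0.704, 0.286])        # hematoxylin OD vector
DAB = np.array([0.269, 0.568, 0.778])        # DAB OD vector
RES = np.cross(H, DAB)                       # residual, orthogonal to both
M = np.stack([H / np.linalg.norm(H),
              DAB / np.linalg.norm(DAB),
              RES / np.linalg.norm(RES)])

def deconvolve(rgb):
    """rgb: float array (..., 3) with values in [0, 255].
    Returns stain 'concentrations' (..., 3): hematoxylin, DAB, residual."""
    od = -np.log10((rgb + 1.0) / 256.0)      # convert to optical density
    return od @ np.linalg.inv(M)             # unmix into stain contributions

pixels = np.array([[255.0, 255.0, 255.0],    # white background
                   [160.0, 110.0,  60.0]])   # brownish, DAB-stained pixel
conc = deconvolve(pixels)
dab_channel = conc[:, 1]   # the would-be pseudo-fluorescence DAB channel
```

Presented to QuPath as one channel per scan of a single pseudo-fluorescence image - with the spatial transform applied on the fly - the channels could then be viewed and analysed together.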

The second option would be a comparatively bigger task... if you want to take it on, it could be a fantastic addition to QuPath.  Or send me an email if you want to discuss more whether it's something we could feasibly collaborate on to solve together - and, indeed, if the idea is likely to work.  Without seeing the images I'm guessing quite a bit, but it sounds to me like it should be possible.

Alistair

Mar 16, 2018, 9:17:51 AM
to QuPath users
Hi Pete,

I just tried emailing you at p.ban...@qub.ac.uk and it bounced. Do you have another email address?

BW,

Alistair

Pete

Mar 16, 2018, 9:22:04 AM
to QuPath users
Ah yes, I'm afraid that one doesn't work any more since I left Queen's... I've just sent you a message through other means.
Thanks,
Pete