Script to get smallest bounding box of tissue area and reload to QuPath


lingd...@tempus.com

Oct 5, 2017, 12:04:08 PM
to QuPath users
Hello everyone,

I want to get rid of the background area and only keep tissue within the smallest bounding box.

I'm trying to write a script which can automatically detect tissue, get only the smallest bounding box area and reload this area to qupath.

Does anyone know how to do it?

Thanks,
Lingdao

Pete

Oct 5, 2017, 12:34:17 PM
to QuPath users
Hi,

There’s an easier bit and a much more difficult bit to this answer.

The easier bit is getting the bounding box.  If you create an annotation for the tissue (e.g. manually with a drawing tool, with an ImageJ macro, or with Simple Tissue Detection) then you can get the bounding box like this:

def selected = getSelectedObject() // If the object is selected...
def roi = selected.getROI()
int x = roi.getBoundsX()
int y = roi.getBoundsY()
int width = roi.getBoundsWidth()
int height = roi.getBoundsHeight()

print([x, y, width, height])

Now, reloading only that part of the image within QuPath is difficult.  Currently, you’d need to crop the image and save a new cropped file.  QuPath doesn’t give you a way to do that for a whole slide image, so you’d need to crop and write the new image some other way.

A QuPath-based alternative would require that you write a custom ImageServer that wraps around your existing ImageServer that QuPath uses to request pixels from the whole slide, and which also knows about the crop bounds.  This custom ImageServer would then perform all the coordinate transforms required to ‘trick’ QuPath into thinking that the cropped image is actually a whole slide (i.e. subtracting the x, y coordinates of the bounding box from any pixel request, and setting the image width and height according to those of the bounding box).
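To illustrate the coordinate trick only (this is not the real ImageServer interface, which has many more methods; the class and method names here are invented for the sketch), the wrapper would do something like:

```groovy
// Illustrative sketch only -- 'base' stands in for the existing server,
// and cropX/cropY/cropW/cropH are the tissue bounding box.
class CroppedServerSketch {
    def base
    int cropX, cropY, cropW, cropH

    // Report the crop size as if it were the full image size
    int getWidth()  { cropW }
    int getHeight() { cropH }

    // Translate a request made in cropped coordinates back to the original slide
    def readRegion(int x, int y, int w, int h, double downsample) {
        base.readRegion(cropX + x, cropY + y, w, h, downsample)
    }
}
```

Every pixel request is shifted by the crop origin before being passed through, so the caller never sees coordinates outside the bounding box.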

This should be technically possible, but it would take some work to write the custom ImageServer and you'd need to be careful not to thwart QuPath's built-in method of caching image regions.  You would also somehow need to tell QuPath to use the cropped bounds whenever you reopen the image in the future.  This could require a modification to the core QuPath code, or else some more custom trickery that takes you deeper into how QuPath is working than you might want to go.

Given the effort that would be required, I really wouldn’t advise trying this unless it is absolutely necessary.  Rather, I’d try to find something simpler.  For example, if you are happy with coding you could try writing a script that sets the displayed image magnification and location so that the tissue bounding box is fit into the center of the viewer at the maximum size that contains all the tissue.  The rest of the image will still be there...  just outside the visible bounds of the viewer.
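A rough sketch of that simpler viewer-based approach (written against QuPath 0.1.2's scripting API; method names may differ in later versions, so treat this as a starting point rather than a finished script):

```groovy
import qupath.lib.gui.QuPathGUI

// Assumes an annotation (e.g. from Simple Tissue Detection) is currently selected
def roi = getSelectedObject().getROI()
def viewer = QuPathGUI.getInstance().getViewer()

// Choose the downsample so the bounding box just fits inside the viewer component
double dx = roi.getBoundsWidth()  / viewer.getView().getWidth()
double dy = roi.getBoundsHeight() / viewer.getView().getHeight()
viewer.setDownsampleFactor(Math.max(dx, dy))

// Center the viewer on the middle of the tissue bounding box
viewer.setCenterPixelLocation(
    roi.getBoundsX() + roi.getBoundsWidth()  / 2,
    roi.getBoundsY() + roi.getBoundsHeight() / 2)
```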

Hope this helps,

Pete

lingd...@tempus.com

Oct 5, 2017, 3:15:35 PM
to QuPath users
Thank you so much Pete! This is very helpful.

I have figured out the first part; however, I want to make it fully automatic, without any drawing or selection. Currently I use 'Simple Tissue Detection' in QuPath and choose the annotation with the largest bounding box.
Do you know how I can run 'Simple Tissue Detection' from a script? Is it the SimpleThresholding.thresholdBelow(ImageProcessor ip, float) method? I'm learning how to use ij.process.ImageProcessor.

I think centering the tissue area is a very good solution. The reason behind it is that I want to do image registration (H&E vs IHC). I found an algorithm that gives me a good approximation of the rotation based on downsampled cropped tissues. I think that by centering the tissue I should be able to apply the rotation in QuPath.

Thanks,
Lingdao

Pete

Oct 5, 2017, 6:06:47 PM
to QuPath users
The easiest (and most 'official') way would be to take advantage of the script that can be automatically generated under the Workflow tab: see https://github.com/qupath/qupath/wiki/From-workflows-to-scripts

The script should be one (rather long) line, and look something like this:

runPlugin('qupath.imagej.detect.tissue.SimpleTissueDetection2', '{"threshold": 220,  "requestedPixelSizeMicrons": 10.0,  "minAreaMicrons": 1000000.0,  "maxHoleAreaMicrons": 100000.0,  "darkBackground": false,  "smoothImage": true,  "medianCleanup": true,  "dilateBoundaries": false,  "smoothCoordinates": true,  "excludeOnBoundary": false,  "singleAnnotation": true}');
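Putting the two parts together, a hedged sketch of what could follow the detection call (assuming "singleAnnotation": true, so that exactly one annotation is produced and no manual selection is needed):

```groovy
// After running Simple Tissue Detection with "singleAnnotation": true,
// grab the resulting annotation and report its bounding box.
def annotation = getAnnotationObjects()[0]
def roi = annotation.getROI()
print([roi.getBoundsX(), roi.getBoundsY(), roi.getBoundsWidth(), roi.getBoundsHeight()])
```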

micros...@gmail.com

Oct 5, 2017, 6:36:33 PM
to QuPath users
Not to discourage you, especially if you get the image registration working (and better yet are willing to share!), but another option if you only need alignment for a small project is Slidematch by MicroDimensions.  The 20 day free trial should be enough to get through quite a few images, and it works fairly well if your tissue is opaque enough.  It can be fairly slow if you are aligning many images at once, though, especially on older computers.

lingd...@tempus.com

Oct 5, 2017, 8:33:59 PM
to QuPath users
Great! It works very well. Thanks

lingd...@tempus.com

Oct 5, 2017, 8:40:57 PM
to QuPath users
Thanks! I'm using imreg_dft (https://github.com/matejak/imreg_dft) and SimpleElastix (https://simpleelastix.github.io/) for image registration. 

They work pretty well for me. What I do is first register the down-sampled images, and then use the resulting transformation matrix to align the original images.
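One common way to carry a transform estimated on downsampled images over to the originals (a NumPy sketch; imreg_dft actually reports scale/angle/translation rather than a matrix, so you would build the 3x3 affine from those first) is to conjugate the affine matrix by the downsample scaling:

```python
import numpy as np

def upscale_affine(T_down, downsample):
    """Convert a 3x3 affine estimated on images downsampled by 'downsample'
    into the equivalent transform in full-resolution coordinates."""
    S = np.diag([downsample, downsample, 1.0])          # downsampled -> full coords
    S_inv = np.diag([1.0 / downsample, 1.0 / downsample, 1.0])
    return S @ T_down @ S_inv

# Example: a pure translation of (10, 20) px estimated at 8x downsample
T = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0, 20.0],
              [0.0, 0.0, 1.0]])
print(upscale_affine(T, 8))   # translation scales to (80, 160); rotation part unchanged
```

The rotation/scale block of the matrix is left untouched by the conjugation; only the translation column is multiplied by the downsample factor.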

Thanks

Thomas Kilvær

Nov 14, 2018, 4:40:01 PM
to QuPath users
Sorry to revive this old thread. Do you have the time to elaborate on how you apply the transforms to the original WSIs?