Distance between cells and annotation


Oscar Brück

Aug 8, 2017, 9:53:14 AM
to QuPath users
Hi!

I have a dataset of 80 chromogen images. In each image I have annotated 3 different areas ("A", "B", "C") and classified the cells in them as either "Immune cells" or "Other". I would like to calculate the mean/median distance from each Immune cell to area A. So if a cell is located inside annotation "A", the distance would be 0. And if the cell is in annotation "B" or "C", the distance would depend on the distance from the cell to the border of annotation "A". I am sure there could be a nice script to run that, but I am not that familiar with Groovy... Could someone help me with the script?
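[For readers landing here later, the rule Oscar describes (0 inside the annotation, otherwise distance to its border) can be sketched in a few lines. This is Python purely for illustration; the same logic ports directly to a QuPath Groovy script. All function names are mine, and it assumes annotation A's border is available as a list of (x, y) vertices.]

```python
import math

def point_in_polygon(px, py, polygon):
    """Even-odd (ray casting) test: is (px, py) inside the polygon?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > py) != (y2 > py):
            # x-coordinate where this edge crosses the horizontal ray
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

def point_to_segment(px, py, x1, y1, x2, y2):
    """Euclidean distance from (px, py) to the segment (x1,y1)-(x2,y2)."""
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(px - x1, py - y1)
    # Project the point onto the segment, clamping to its endpoints
    t = max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (x1 + t * dx), py - (y1 + t * dy))

def distance_to_annotation(px, py, polygon):
    """0 if the point lies inside the annotation, else distance to its border."""
    if point_in_polygon(px, py, polygon):
        return 0.0
    n = len(polygon)
    return min(point_to_segment(px, py, *polygon[i], *polygon[(i + 1) % n])
               for i in range(n))
```

For example, with a 10x10 square annotation, a cell centroid at (5, 5) gets distance 0, while one at (15, 5) gets distance 5 to the nearest edge.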

Thanks in advance,
Oscar

micros...@gmail.com

Aug 8, 2017, 2:11:02 PM
to QuPath users
Well, it should be fairly straightforward to get a script that compares distance to the middle of the annotation, since that is represented by an X,Y coordinate. But for what you want, you would either need to convert the annotation into a set of coordinates and check each one (similar to the processes listed elsewhere for importing and exporting annotations via lists), or perhaps use a distance transform through an ImageJ macro.

Oscar Brück

Aug 10, 2017, 3:22:48 AM
to QuPath users
Thanks for your reply. One possibility would be to export the coordinates of the annotation (or only the coordinates of the annotation's periphery) and then compute a Delaunay triangulation with R or another computing environment, which might be what you meant with your first option. However, I think there could be a nicer way to solve this inside QuPath :)

micros...@gmail.com

Aug 10, 2017, 1:27:01 PM
to QuPath users
True, as long as you can get past the part where you get the list of XY coordinates for the annotation (I think that was posted elsewhere, but I can't recall where), you should also be able to, at worst, cycle through all of them calculating the distance from each cell to all of the points and only keeping the smallest value.  That could certainly all be done fairly easily within QuPath, though you would probably want to cut it down slightly using the centroids of annotation A (check the centroids for the cell and annotation, to see which direction the cell is in from the centroid, then only use the annotation coordinates that are on the correct side of the annotation).  If there are multiple A areas, this might be slightly more difficult unless they are distinct (not one merged AWT area).
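[The brute-force part of this suggestion, cycling through all boundary coordinates and keeping the smallest distance, is only a couple of lines. A Python sketch for illustration (function names are mine; it assumes the annotation boundary has been exported as a list of (x, y) points):]

```python
import math

def min_distance_to_points(px, py, boundary_points):
    """Brute force: distance from (px, py) to the nearest boundary coordinate."""
    return min(math.hypot(px - bx, py - by) for bx, by in boundary_points)

def nearest_boundary_distances(cells, boundary_points):
    """One distance per cell centroid; O(len(cells) * len(boundary_points)),
    which is where the centroid-based pruning described above would help."""
    return [min_distance_to_points(cx, cy, boundary_points)
            for cx, cy in cells]
```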


Pete

Aug 10, 2017, 9:59:27 PM
to QuPath users
I'm afraid I don't know of a much nicer way...
Using the vertices alone would probably work most of the time, although could fail if there is a large distance between any consecutive pair of vertices.
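[One way to work around that failure mode is to resample each edge at a fixed spacing before doing the nearest-point search, so no two consecutive boundary points are far apart. A Python sketch, with names of my own choosing:]

```python
import math

def densify(polygon, spacing):
    """Resample the polygon outline so consecutive boundary points are at
    most `spacing` apart; returns the new list of boundary points."""
    points = []
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        length = math.hypot(x2 - x1, y2 - y1)
        steps = max(1, math.ceil(length / spacing))
        for s in range(steps):  # include the start, not the end (next edge adds it)
            t = s / steps
            points.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return points
```

For a 10x10 square with spacing 1, this yields 40 boundary points, so a nearest-vertex search against the densified outline is within `spacing` of the true border distance.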

I don't really have a feeling for how the Delaunay triangulation would help, but there is an OpenCV implementation accessible through QuPath.  You can see an example of its use within DelaunayTriangulation.java - although be warned that this is not exactly the easiest code to follow.  It's also not the easiest to write; because using OpenCV brings things outside the Java world, small errors and bugs frequently end quite catastrophically (i.e. QuPath may shut down without warning, rather than throwing a more helpful exception).  Debugging becomes particularly exhilarating when the catastrophe turns out to be triggered by the garbage collector.

On the other hand, the implementation is fast and it doesn't require another dependency.

If you don't mind adding another dependency, then there may well be other specialized Java geometry-related libraries that would help perform the calculations much more easily.

Personally, my tendency is to convert problems like these into image processing ones.  With that in mind, my first approach would be to create a Groovy script that does the following:
  • Based on the original image dimensions, create a ByteProcessor in ImageJ that has been scaled down to a manageable size - i.e. figure out a 'downsample' factor and divide the full image width and height by this
  • Convert the QuPath ROI for Annotation A into an ImageJ 'Roi' - scaling accordingly - and use it to fill the ByteProcessor with white pixels
  • Apply a (32-bit) distance transform to the ByteProcessor in ImageJ to calculate the distance to every white pixel (ImageJ has 'EDM.java' to do this)
  • For each cell centroid, downsample it accordingly to fit with the distance map and calculate the interpolated pixel value for the downsampled x,y coordinates
  • Scale the interpolated distance map value by the downsample * pixel size in microns to get an approximate distance
  • Add this distance to the measurement list for each cell, and use Measure -> Show measurement maps and the line tool to check everything looks ok
This doesn't exactly feel optimal, but it should be fast.  If you decide to go this way, feel free to ask for more information about any of the vaguely-defined steps.  But if you find an alternative that is better, please do post it here as well - it's an interesting problem, and one that I suspect could be worth solving for others as well.
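[The pipeline in the bullets above can be sketched end-to-end in plain Python. In QuPath the distance map would come from ImageJ's EDM plugin; here a brute-force map stands in for it so the logic is self-contained, and all names are mine.]

```python
import math

def distance_map(mask):
    """Brute-force stand-in for ImageJ's EDM: for every pixel, the Euclidean
    distance (in pixels) to the nearest True ('annotation A') pixel."""
    h, w = len(mask), len(mask[0])
    white = [(x, y) for y in range(h) for x in range(w) if mask[y][x]]
    return [[min(math.hypot(x - wx, y - wy) for wx, wy in white)
             for x in range(w)] for y in range(h)]

def sample_bilinear(grid, x, y):
    """Interpolated value of the distance map at fractional (x, y)."""
    h, w = len(grid), len(grid[0])
    x0 = max(min(int(x), w - 2), 0)
    y0 = max(min(int(y), h - 2), 0)
    fx, fy = x - x0, y - y0
    top = grid[y0][x0] * (1 - fx) + grid[y0][x0 + 1] * fx
    bot = grid[y0 + 1][x0] * (1 - fx) + grid[y0 + 1][x0 + 1] * fx
    return top * (1 - fy) + bot * fy

def cell_distance_um(cx, cy, dist_map, downsample, pixel_size_um):
    """Map a full-resolution centroid into the downsampled distance map and
    convert the sampled value back to microns (the last two bullet steps)."""
    d = sample_bilinear(dist_map, cx / downsample, cy / downsample)
    return d * downsample * pixel_size_um
```

For instance, with a 10x10 mask whose left half is annotation A, a downsample of 2, and a 0.5 µm pixel size, a cell centred at full-resolution (16, 6) lands 4 map-pixels from the annotation, i.e. 4 µm.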

Pete

Aug 10, 2017, 10:07:06 PM
to QuPath users
Actually... I've just remembered the OpenCV function pointPolygonTest, which may do what you want.

This would still require some Groovy effort to convert your annotation to an OpenCV Mat.

I have not tried it, so I don't know what the limitations are here... for example, it might be restricted to work with only a subset of possible QuPath annotation shapes (e.g. non-self-intersecting polygons).

Alexander Katsis

Jan 31, 2019, 1:49:26 PM
to QuPath users
Hi Pete,

Thanks for this explanation; it has been very helpful. One follow-up question I had was how to get the distance measurements for each detection from ImageJ back into QuPath, so that they can be added to the measurement list.

So far I have sent the annotation and detections to ImageJ, then converted the detections to rois and gotten the centroids from the results table. Finally, by interpolating on the distance map, I have calculated the distance metric for every cell. Now what I want to do is send this list of numbers back into QuPath so that they can be added as measurements. Is there a way to do this? (being able to send the centroid measurements directly from QuPath to ImageJ would be helpful as well so they don't have to be recalculated in ImageJ)

Thanks for your help!

-Alex


micros...@gmail.com

Jan 31, 2019, 2:09:55 PM
to QuPath users
If you import the ROIs themselves as detections, they should be able to include any measurements with them.
I am not sure about adding new measurements to similar objects (based on their ImageJ counterparts) back in QuPath, though. If there is an easy way to import data that I have missed, that would be neat!

micros...@gmail.com

Feb 5, 2019, 11:07:54 AM
to QuPath users
Revisiting this, if you shrink your cell ROIs (erode?) in ImageJ, when you reimport them they should be "children" of their parent cells, which would make it very easy to go through the cell list and getMeasurement(childObject) -> putMeasurement(parentCell).

I have not played with it, but it sounds like you could use Edit > Selection > Enlarge in ImageJ and input a negative value, so that all of the imported ROIs should nicely nest inside their original cell. As long as you don't have any weird overlapping going on.

Alexander Katsis

Feb 6, 2019, 10:57:30 AM
to QuPath users
Thanks! The erosion is a clever solution. This sounds like it will work nicely. QuPath definitely becomes even more versatile when you get past the "intended" use of certain objects and workflows and instead take advantage of the hierarchy structure.

micros...@gmail.com

Feb 6, 2019, 5:49:16 PM
to QuPath users
I have also been known to break things, so use with care :)
And that pseudocode was terrible. It would be something closer to:

    cells.each {
        def subcells = it.getChildObjects() as List
        if (subcells) {
            it.getMeasurementList().putMeasurement("newCellMeasurementName",
                subcells[0].getMeasurementList().getMeasurementValue("imageJMeasurementName"))
        } else {
            // No child object found - record 0 rather than leaving the cell unmeasured
            it.getMeasurementList().putMeasurement("newCellMeasurementName", 0)
        }
    }

This assumes you had only one object within each cell, so that subcells[0] is the right one.