If you have some kind of counterstain that lets you detect 'all' nuclei on one channel, that would probably be the easiest approach: you should automatically get various nucleus intensity measurements in all channels (mean, min, max), and you can then classify each nucleus as positive or negative in each channel based on those intensity values.
That's the method I'd normally try first with fluorescence images. I assume from your description that isn't what you need here, but mention it just in case.
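For completeness, that classification can be scripted in a few lines once the measurements exist. A rough, untested sketch, assuming it runs inside QuPath's script editor (where getDetectionObjects() and getPathClass() should be available), and where the measurement name and threshold are placeholders you'd adjust to match your own measurement list:

// Hypothetical measurement name and threshold - adjust to match your data
def measurementName = 'Nucleus: Channel 2 mean'
double threshold = 50
for (def nucleus in getDetectionObjects()) {
    double value = nucleus.getMeasurementList().getMeasurementValue(measurementName)
    // Classify each nucleus by comparing its intensity to the threshold
    nucleus.setPathClass(getPathClass(value > threshold ? 'Channel 2 positive' : 'Channel 2 negative'))
}
fireHierarchyUpdate() // Refresh the display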
Keeping with the approach you're considering, combineROIs could be helpful to calculate the intersection:
import qupath.lib.roi.PathROIToolsAwt
import qupath.lib.roi.PathROIToolsAwt.CombineOp

// Compute the intersection of the two ROIs
def r3 = PathROIToolsAwt.combineROIs(r1, r2, CombineOp.INTERSECT)
double area1 = r1.getArea()
double area2 = r2.getArea()
double areaIntersection = r3.getArea()
I admit I haven't actually tested the code, but hopefully it points in a useful direction: the proportion of overlap is then just the ratio of the intersection area to the nucleus area. A few caveats (a fuller sketch follows the list):
- The areas here are given in pixels; for ratios that should be fine assuming 'square' pixels, i.e. the width and height match. If not, you'd need to use getScaledArea(pixelWidth, pixelHeight).
- You'd likely need to use your intersection test first, and deal with cases where there isn't any intersection.
- I suspect there could be awkward cases that need special consideration, e.g. if there are multiple partial overlaps of the same region in another channel.
- Technically, r1, r2 and r3 are PathShapes, which don't actually have a getArea() method at all. But unless a line ROI sneaks into your analysis, you can be pretty confident that your ROIs implement both the PathShape and PathArea interfaces... so it should work anyway.
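Putting those caveats together, a minimal helper might look something like the following. This is untested, and assumes the PathArea interface lives in qupath.lib.roi.interfaces as in older QuPath versions:

import qupath.lib.roi.PathROIToolsAwt
import qupath.lib.roi.PathROIToolsAwt.CombineOp
import qupath.lib.roi.interfaces.PathArea

// Proportion of r1's area covered by r2, returning 0 if there is no overlap
double proportionOverlap(def r1, def r2) {
    def intersection = PathROIToolsAwt.combineROIs(r1, r2, CombineOp.INTERSECT)
    // Guard against an empty or non-area result
    if (!(intersection instanceof PathArea))
        return 0
    double area1 = r1.getArea()
    return area1 > 0 ? intersection.getArea() / area1 : 0
}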
If you have a lot of comparisons, I could also imagine this might be quite slow to run. One reason is that your intersection test creates at least four new Java objects (areas and bounding rectangles) for every single comparison, and that cost can really add up.
You can get some improvement by taking advantage of the fact that ROIs already provide getBoundsX(), getBoundsY(), getBoundsWidth() and getBoundsHeight() methods. You can use these directly for a more efficient test of intersecting bounding boxes, e.g.
import java.awt.geom.Rectangle2D

// Create a single reusable rectangle, outside any loop (this is a standard Java class)
def rect1 = new Rectangle2D.Double()
//....
// Set the rectangle from the cached bounds of the first QuPath ROI
rect1.setFrame(r1.getBoundsX(), r1.getBoundsY(), r1.getBoundsWidth(), r1.getBoundsHeight())
// Check for intersection with the bounds of the second ROI
if (rect1.intersects(r2.getBoundsX(), r2.getBoundsY(), r2.getBoundsWidth(), r2.getBoundsHeight())) {
    // Do something...
}
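For example (again untested), the pre-check might slot into a pairwise loop along these lines, where rois1 and rois2 stand in for your two lists of ROIs:

import java.awt.geom.Rectangle2D
import qupath.lib.roi.PathROIToolsAwt
import qupath.lib.roi.PathROIToolsAwt.CombineOp

def rect1 = new Rectangle2D.Double()
for (def r1 in rois1) {
    // Reuse the same rectangle for every comparison involving r1
    rect1.setFrame(r1.getBoundsX(), r1.getBoundsY(), r1.getBoundsWidth(), r1.getBoundsHeight())
    for (def r2 in rois2) {
        // Cheap bounding-box test first; skip pairs that can't possibly overlap
        if (!rect1.intersects(r2.getBoundsX(), r2.getBoundsY(), r2.getBoundsWidth(), r2.getBoundsHeight()))
            continue
        // Only now pay for the exact intersection
        def r3 = PathROIToolsAwt.combineROIs(r1, r2, CombineOp.INTERSECT)
        // ... area calculations as above
    }
}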
Alternatively, if you happen to be comfortable with writing ImageJ plugins/macros, then you could also consider turning it into an ImageJ problem rather than a QuPath one, by writing a custom detection that gets the pixels from QuPath and sends them to ImageJ for processing, then converts the ImageJ ROIs back into QuPath objects. Although potentially more work, it gives full control. With this approach, you might create a binary mask of the nuclei in each channel; it then becomes a matter of creating ROIs for one channel and measuring the intensities within the other (binary) channel. Because each pixel in the binary image is either 0 (background) or 255 (foreground), dividing the mean intensity by 255 directly gives you the proportion of overlap.
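The measurement itself is then essentially a one-liner on the ImageJ side. A rough sketch, where the binary mask and nucleus ROI are hypothetical inputs you'd obtain from your own detection step:

import ij.ImagePlus
import ij.gui.Roi

// Proportion of the nucleus ROI covered by foreground (255) pixels in a binary mask
double proportionOverlap(ImagePlus binaryImp, Roi nucleusRoi) {
    binaryImp.setRoi(nucleusRoi)
    // Mean of a 0/255 image within the ROI, divided by 255, is the foreground fraction
    return binaryImp.getStatistics().mean / 255.0
}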