How to check if detections overlap?


Sam Vanmassenhove

Apr 6, 2018, 10:41:41 AM
to QuPath users

Hi everyone,

I'm working on fluorescence images with two channels. I would like to detect nuclei in both channels and check for overlap between detections in channel 1 and those in channel 2.

If two detections overlap, I would like to keep the larger and discard the other.

The only way I've found to do this is the following: 

       PathROIToolsAwt.getArea(r1).getBounds().intersects(new Rectangle(PathROIToolsAwt.getArea(r2).getBounds()))

where r1 and r2 are the ROIs of the two PathDetectionObjects.

But this seems to flag any overlap, whereas I would prefer to have some sort of threshold (say, 50% overlap for example). Is there a simple way to measure how much overlapping area there is, and then make a decision based on that?

Thanks in advance,

Sam




micros...@gmail.com

Apr 6, 2018, 12:48:55 PM
to QuPath users
Hi! Interesting question, and I'm a bit curious what you are doing that makes you want to discard the other measurements. I'm afraid I don't know of any easy method to get the percentage overlap; that might be for Pete to know.

If I were approaching two different nuclear stains by size, and there was a primary stain (one that 100% of the nuclei have), I would generate my cells based on that one, then perform a Subcellular detection in the other channel, compare the nuclear size to the subcellular detection size, and classify the "cell" as one type or the other. That gets you both areas in your detection measurements and a count/percentage in your annotation.

Not sure if that works for the rest of your project.

Pete

Apr 6, 2018, 1:22:55 PM
to QuPath users
In the event that you have some kind of counterstain that allows you to detect 'all' nuclei in one channel, that would potentially be the easiest approach: you should automatically get various nucleus intensity measurements in all channels (mean, min, max), and you can then classify each nucleus as positive or negative in each channel based on those intensity values.

That's the method I'd normally try first with fluorescence images.  I assume from your description that isn't what you need here, but mention it just in case.
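
If it did turn out to apply, a short (untested) script along these lines could do the classification afterwards - the measurement name and intensity threshold are placeholders that would need adjusting for your images:

import qupath.lib.objects.classes.PathClassFactory

// Untested sketch: classify each nucleus as channel 2 positive/negative
// based on a mean nucleus intensity measurement (name & threshold are placeholders)
double threshold = 50
for (detection in getDetectionObjects()) {
    double mean = detection.getMeasurementList().getMeasurementValue('Nucleus: Channel 2 mean')
    def name = mean > threshold ? 'Channel 2 positive' : 'Channel 2 negative'
    detection.setPathClass(PathClassFactory.getPathClass(name))
}
fireHierarchyUpdate()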

Keeping with the approach you're considering, combineROIs could be helpful to calculate the intersection:

import qupath.lib.roi.PathROIToolsAwt
import qupath.lib.roi.PathROIToolsAwt.CombineOp

// Intersection of the two ROIs
def r3 = PathROIToolsAwt.combineROIs(r1, r2, CombineOp.INTERSECT)

// Areas (in pixels) of each ROI and of the intersection
double area1 = r1.getArea()
double area2 = r2.getArea()
double areaIntersection = r3.getArea()

I admit I haven't actually tested the code, but hopefully it points in a useful direction: you can then get the proportion of overlap from the ratio of the intersection area to the nucleus areas.  A few caveats:
  • The areas here are given in pixels; for ratios that should be fine assuming 'square' pixels, i.e. the width and height match.  If not, you'd need to use getScaledArea(pixelWidth, pixelHeight).
  • You'd likely need to use your intersection test first, and deal with cases where there isn't any intersection.
  • I suspect there could be awkward cases that need special consideration, e.g. if there are multiple partial overlaps of the same region in another channel.
  • Technically, r1, r2 and r3 are PathShapes, which don't actually have a getArea() method at all.  But unless a line ROI sneaks into your analysis, you can be pretty confident that your ROIs implement both the PathShape and PathArea interfaces... so it should work anyway.
If you have a lot of comparisons, I could also imagine this might be quite slow to run.  One element of this is that your intersection test requires creating at least four new Java objects for every single comparison (areas and bounding rectangles), and the cost of this could really add up.

You can get some improvement by taking advantage of the fact that ROIs already give you getBoundsX(), getBoundsY(), getBoundsWidth() and getBoundsHeight() methods.  You can use these directly for a more efficient test of intersecting bounding boxes, e.g.

import java.awt.geom.Rectangle2D

// Outside any loop - Rectangle2D.Double is a standard Java class
def rect1 = new Rectangle2D.Double()
//....
// Inside the loop, set the rectangle from the cached bounds of the QuPath ROI
rect1.setFrame(r1.getBoundsX(), r1.getBoundsY(), r1.getBoundsWidth(), r1.getBoundsHeight())
// Check whether the bounding boxes intersect
if (rect1.intersects(r2.getBoundsX(), r2.getBoundsY(), r2.getBoundsWidth(), r2.getBoundsHeight())) {
    // Do something...
}
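
Putting those pieces together, a rough (again untested) sketch of the whole 'discard the smaller of two overlapping nuclei' step might look something like the following - here detectionsChannel1 and detectionsChannel2 are placeholders for however you separate your two sets of detections, and the 0.5 threshold is arbitrary:

import java.awt.geom.Rectangle2D
import qupath.lib.roi.PathROIToolsAwt
import qupath.lib.roi.PathROIToolsAwt.CombineOp

def toRemove = new HashSet()
def rect1 = new Rectangle2D.Double()
for (det1 in detectionsChannel1) {
    def r1 = det1.getROI()
    rect1.setFrame(r1.getBoundsX(), r1.getBoundsY(), r1.getBoundsWidth(), r1.getBoundsHeight())
    for (det2 in detectionsChannel2) {
        def r2 = det2.getROI()
        // Cheap bounding box test first
        if (!rect1.intersects(r2.getBoundsX(), r2.getBoundsY(), r2.getBoundsWidth(), r2.getBoundsHeight()))
            continue
        // Proportion of the smaller nucleus covered by the intersection
        def intersection = PathROIToolsAwt.combineROIs(r1, r2, CombineOp.INTERSECT)
        double smallerArea = Math.min(r1.getArea(), r2.getArea())
        if (smallerArea > 0 && intersection.getArea() / smallerArea > 0.5)
            toRemove.add(r1.getArea() < r2.getArea() ? det1 : det2)
    }
}
removeObjects(toRemove, true)
fireHierarchyUpdate()

Collecting the objects in a set and removing them at the end avoids modifying the detection lists while you're still looping over them.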


Alternatively, if you happen to be comfortable with writing ImageJ plugins/macros, you could also consider turning this into an ImageJ problem rather than a QuPath one by writing a custom detection that gets the pixels from QuPath, sends them to ImageJ for processing, and then converts the ImageJ ROIs back to QuPath objects.  Although potentially more work, it gives full control.  With this approach, you might create a binary mask of the nuclei in each channel, and then it becomes a matter of creating ROIs for one channel and measuring the intensities in the other (binary) channel.  Because each pixel in the binary image will be either 0 (background) or 255 (foreground), dividing the mean intensity within a nucleus ROI by 255 directly gives you the proportion of overlap.
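
In ImageJ terms, the measurement itself could be as simple as something like this (untested), where ipMask would be the ImageProcessor holding the binary mask of channel 2 nuclei and roi an ImageJ ROI for one channel 1 nucleus:

// ipMask: ImageProcessor holding the 0/255 binary mask; roi: ij.gui.Roi for one nucleus
ipMask.setRoi(roi)
double proportionOverlap = ipMask.getStatistics().mean / 255.0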

Sam Vanmassenhove

Apr 6, 2018, 4:05:23 PM
to QuPath users

Thank you Pete, that's basically solved my problem. After following your suggestions about the ROIs, it seems there's no real problem with speed - it's completely negligible compared with the detection itself.

To address the earlier question of why I want to do this in the first place: I'm working on TUNEL images, so I do have a 100% primary DAPI stain, but I've noticed that cells which are completely dead actually no longer show up in the DAPI channel. So the most interesting cases are likely to slip through if I only study the first channel. To catch everything - I don't want to miss any of the nuclei in the second channel because there are so few to start with - I need to detect in both channels and then remove any 'duplicates' by checking the overlap.

I hope that explains it.

Thanks again,

Sam


micros...@gmail.com

Apr 6, 2018, 5:10:40 PM
to QuPath users
Ah, neat!  It's nice that there is a way to do that within QuPath.  It sounds like the sort of thing I would previously have tried to handle with a separate cell detection per channel, then cross-checking the results in R.  Glad there is a cleaner way to do it!