Pipeline for brightfield 2-plex immunoquantification - Differential quantification of nuclear staining by keratin cell positivity

Carlos Moro

May 6, 2017, 2:23:25 PM
to QuPath users
Hello,

I'd be very happy and grateful to share and discuss with you this (still initial) procedure for differential quantification of a nuclear stain (DAB) in a brightfield 2-plex image, using keratin (AP-red) as the differential marker.

After calculating the HTX, DAB, and AP vectors (left panel), following a previous thread:
Slide 1: Selection of a typical cell spheroid in the slide.
Slide 2: Copy the vectors from the script ("Create command history script") to an external text editor.
         Exchange the vector values between DAB and AP, then run the edited script (see the sketch after this list).
Slide 3: The vector values are now interchanged in the left panel - but not the stain labels.
Slide 4: That was needed because the great "Cytokeratin annotation creation" command currently only operates with DAB, so I needed to trick it ;)
         Run "Cytokeratin annotation creation" with out-of-the-box parameters.
Slides 5-6: AP-based keratin segmentation performed :)
            Sorry, I forgot to include an additional screenshot: at this point I interchanged the stain vector values between DAB and AP again, to revert to the originals (as in the left panel of slide 1).
Slide 7: "Positive cell detection" of (nuclear) DAB staining in the keratin-positive area (annotation). Standard parameters, only lowering the threshold a bit to 0.15.
Slide 8: AP-positive region, cell segmentation and nuclear detection performed, 35 %.
Slides 9-11: "Positive cell detection" in the keratin-negative area (annotation), parameters as before.
Slide 12: AP (keratin)-negative region, cell segmentation and nuclear detection performed, 71 %.
Slide 13: Keratin-differential nuclear cell count measurements.
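
In case it's useful, the vector swap in slide 2 amounts to something like the sketch below when scripted (the numbers are placeholders rather than my real vectors - take yours from the command history script):

// Illustrative only - the values below are placeholders, not the real vectors.
// Paste your own line from "Create command history script", then swap the
// "Values 2" (DAB) and "Values 3" (AP) strings before running.
setImageType('BRIGHTFIELD_OTHER')
setColorDeconvolutionStains('{"Name" : "HTX-DAB-AP swapped", "Stain 1" : "Hematoxylin", "Values 1" : "0.65 0.70 0.29", "Stain 2" : "DAB", "Values 2" : "0.22 0.85 0.48", "Stain 3" : "AP", "Values 3" : "0.27 0.57 0.78", "Background" : "255 255 255"}')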

What do you think about this process and the results? Is there any parameter you would recommend tweaking to improve the keratin segmentation, cell detection, or nuclear detection?
I briefly tried "Fast cell counts" but got multiple counts within many nuclei (large, and with more intensely stained nucleoli), so I went back to "Positive cell detection", which seemed to perform well and be straightforward.

In a quick read of https://github.com/qupath/qupath/blob/master/qupath-processing-opencv/src/main/java/qupath/opencv/DetectCytokeratinCV.java I couldn't see anything specific to DAB, which is why I imagined this command would work just as well with another stain (e.g. AP).
Its window title says (TMA, IHC), but the command is also very useful on non-TMA slides. Also, it doesn't need to be a keratin; it will work nicely with other cytoplasmic stains too, like mucins, maspin, EMA... Maybe a more generic title/description would suit it, something like "Immuno/stain-based annotation creation"?
As a suggestion for improvement, a combo box to select the stain on which to base the annotation would be very useful, so there would be no need for the trick of exchanging the DAB and AP (or other stain) vector values back and forth.

QuPath rocks, congratulations again to the whole dev team!   ;)

Best wishes,
Carlos
2plex_brightfield_immunostaining_segmentbykeratin_and_nuclear_count.pdf

Pete

May 7, 2017, 7:41:52 AM
to QuPath users
Hi Carlos,

Thanks for your post, and for describing your approach.  I had started thinking a bit about this before seeing your message, and came up with 3 ideas...

The one you’ve chosen is the one that I thought I probably shouldn’t suggest, since I expected it would be most awkward and you’d like it least :) But having thought about the other options, they all have their awkwardness…  Nevertheless, I would hope/expect they all end with something similar.

In any case, you’re completely correct about the Create cytokeratin annotations command… it exists because it was needed once, for TMAs, with DAB staining.  Lacking time and urgency, it never reached the generality that it should have.
So you do have to trick it to apply it here.  But so long as it produces the desired results, and you can verify that they are appropriate and that the right cells have been identified and counted, I would say that’s ok.

Anyway, here are the other two methods:

Method #1: Recreate the cytokeratin command with an ImageJ macro

The Create cytokeratin annotations command is quite simple… simple enough that it can be more or less reproduced with an ImageJ macro, which can in turn be run through Extensions -> ImageJ -> ImageJ macro runner

I’ve put together an example of what such an ImageJ macro might look like, and attached it to this message.  If you drag the attached file onto QuPath, it should automatically be opened in the ‘ImageJ macro runner’.  You should then just need to do three things:
1 - Select the annotation (or TMA core) in which you want to run the command
2 - Select ‘Create annotation from ImageJ ROI’ in the macro runner, to ensure the results are sent back from ImageJ
3 - Press ‘Run'

You may notice that the AP staining doesn’t feature explicitly in the macro.  In this case, I skipped color deconvolution and wrote the macro to threshold the result of subtracting the green from the red channel instead.  That seemed to work too… at least in this example.

Note, you can use Send Region to ImageJ and run all/part of the macro directly in ImageJ to see what it is doing exactly.


Method #1a: Create annotations, then detect cells

Having done the above, it is then necessary to detect cells inside and outside of the keratin annotation.

Inside is straightforward; outside is not.  If you use the ‘parent’ annotation, then when you run cell detection everything inside will be deleted and replaced by cells… including the keratin annotation itself.  Therefore you need three annotations; one ‘parent’, containing one keratin annotation and one annotation for everything else.  The detection should be applied only inside the ‘child’ annotations and not the ‘parent’.

One way to create the ‘everything else’ annotation is to select the keratin annotation and choose Objects -> Make inverse annotation.  In case you want to script that, here’s how it looks in Groovy:

for (keratin in getAnnotationObjects().findAll { it.getName() == "Cytokeratin" }) {
    makeInverseAnnotation(keratin).setName("Everything else")
}
fireHierarchyUpdate()

I took the liberty of adding a name for the new annotation, which clearly you might change/remove/improve.  The resulting annotation is added to the hierarchy by default.

Now you can detect inside the two regions.  The next problem occurs if you want to apply cell detection in batch.  If you have a TMA you’re ok; just run the detection over all annotations.  But if you don’t have a TMA and you run over all annotations, you risk the same problem as above; the keratin / everything else annotations being deleted whenever the cell detection is applied to the parent annotation containing them.

Again, that should be surmountable.  Here's one Groovy line to select all annotations that themselves have a parent which is also an annotation.

selectObjects { p -> p.isAnnotation() && p.getParent().isAnnotation() }

Annotations that are not nested inside another annotation will not be selected.  You should then be able to run cell detection.  Of course, the predicate can be adjusted as needed (e.g. adding p.getName() == "Cytokeratin" or similar).
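
For example, combining the two ideas (just a sketch - adjust the names to whatever your annotations are actually called):

// Select only the named child annotations, so batch cell detection never touches the parent
selectObjects { p ->
    p.isAnnotation() && p.getParent().isAnnotation() &&
    (p.getName() == "Cytokeratin" || p.getName() == "Everything else")
}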


Method #1b: Detect cells, then create annotations, then resolve the results

A disadvantage of 1a is that the cell detections inside & outside the keratin are independent; this means that cells around the boundary can be split/detected twice, or otherwise ‘expand’ into one another.  I don’t know how much this would matter for your application.

To avoid this, you could instead do it in the other order; detect cells, then create the keratin annotation.

Again, this works… partially.  After running the ImageJ macro to create the annotation, QuPath does not bother trying to ‘resolve the object hierarchy’, meaning it doesn’t look to see whether cells fall inside or outside of the new annotation.  This can be computationally expensive, and isn’t always necessary - or even desirable - but it is clearly important in this case.

Fortunately, a short QuPath script can help by simply removing and re-adding the cells:

// Get the cells
cells = getDetectionObjects()

// Remove the cells...
removeObjects(cells, false)

// Add the cells again; the hierarchy will be resolved 
// based on the cell centroid location
addObjects(cells)

// Good to print this, so we know the script didn't get stuck
print "Done!"


Method #2: Detect the cells, measure nearby

After all the above, there is potentially an easier way to get a result for this kind of image - one that avoids creating any kind of keratin annotation altogether.

The aim is to detect cells, then measure the AP staining in the area nearby; if this is above a particular threshold, then consider the cells to be positive.

Rather unfortunately, when you run cell detection on a brightfield image QuPath (currently) only automatically makes measurements for the first two channels; here, hematoxylin and DAB.  You can supplement this with new measurements of any channel (including AP) with Analyze -> Calculate features -> Add intensity features (experimental), although no longer separately for each cell compartment.

Still, you have 3 choices:
1 - Measure AP within the full cell ROI
2 - Measure AP within a rectangle of fixed size centred on the cell (nucleus) ROI
3 - Measure AP within a circle of fixed size centred on the cell (nucleus) ROI

I would suggest either 1 or 3.

To restrict the AP measurement to a region close to the detected nucleus, run the cell detection with a small ‘Cell expansion’ parameter value. Then use Add intensity features with ‘Region’ set to ‘ROI’.  Choose to measure ‘AP’ and (at least) ‘Mean’ and ‘Min & Max’.

You can now train a simple classifier to identify AP cells, or choose a fixed threshold based on one of your AP measurements (likely ‘Mean’ or ‘Max’).  Keep in mind that the size of the cell nucleus will ‘dilute’ the mean value, but likely not influence the max.  A suitable threshold could be chosen by trial and error, inspecting histograms under Measure -> Show detection measurements or by exploring Measure -> Show measurement manager.
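
If it helps with the trial and error, a few lines in the script editor will count how many cells exceed a candidate threshold (just a sketch - the measurement name and the 0.5 are examples; the exact name can be printed with the script shown below):

// Count detections above a candidate threshold for one measurement
def name = "ROI: 2.00 µm per pixel: AP:  Max"   // replace with your exact measurement name
def threshold = 0.5
def cells = getDetectionObjects()
def above = cells.findAll { measurement(it, name) > threshold }
print "${above.size()} / ${cells.size()} cells above ${threshold}"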

If you decide to train a classifier, QuPath will take care of most things for you.  If you want to write a script to classify based on one specific measurement and threshold, be aware that QuPath is extremely unforgiving regarding measurement names; what you type must be an exact match.  This is confounded further by a horribly subtle issue: the Add intensity features command sometimes uses 2 spaces instead of 1… which is hard to spot when typing in measurement names.

Therefore it is best to get QuPath to print out the measurements it has - then copy and paste.  Select a cell, and use this script to print a list of available measurements:

getSelectedObject().getMeasurementList().getMeasurementNames().each { print it }

In this case, I would like to use ROI: 2.00 µm per pixel: AP:  Max
(note the two spaces before ‘Max’…. sigh….)

Now, a script like this would classify based on an AP measurement, then further subclassify by nuclear DAB staining:

getDetectionObjects().each { p ->
    if (measurement(p, "ROI: 2.00 µm per pixel: AP:  Max") > 0.5)
        p.setPathClass(getPathClass("Tumor"))
    else
        p.setPathClass(getPathClass("Other"))
}
setCellIntensityClassifications("Nucleus: DAB OD mean", 0.15)
fireHierarchyUpdate()

The summary measurements for the parent annotation should contain the positive percentage overall, and also for each class.

Now one final gotcha.  On the Mac I used for developing QuPath, this worked fine.  I subsequently learned that there is an encoding issue on Windows, where the µ symbol is lost when the script is read back into QuPath… so you may end up needing to replace these values each time you reload your script.  Alternatively, if you save your script using an editor that uses a different encoding then you might be able to get around this.
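
Another possible workaround (just an idea, not a built-in feature) is to avoid typing the µ at all, and instead look the measurement name up at runtime:

// Find the AP 'Max' measurement name without typing the µ character
// (select a detected cell first, as in the earlier print script)
def cell = getSelectedObject()
def apMax = cell.getMeasurementList().getMeasurementNames().find {
    it.contains("AP") && it.endsWith("Max")
}
print "Using measurement: " + apMax
// ...then use measurement(p, apMax) in the classification script above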


Conclusion
Hopefully that helps choose a method.  If you’d like me to create screenshots for any of these parts to show how they look for me, just let me know.

I think all of the approaches are more inconvenient than they ought to be, and thinking them through there are lots of places where small QuPath improvements/bug fixes could help reduce the pain a lot.  It's not an application or type of data I'd ever had before, so I wasn't able to design/test for it in advance.  But at least it's possible… in more ways than one :)

Pete
AP_annotation_macro.ijm

Carlos Moro

May 7, 2017, 11:00:49 AM
to QuPath users
Hi Pete!

Oooh, thank you very much for such enlightening and detailed feedback!!

Intuitively, I find methods 1b and 2.1 definitely worth trying, and worth comparing their results with those of the tricked "Cytokeratin annotation creation" approach in my first post.
One difficulty I can imagine for 2.2-2.3 is the "fixed size", as cell nuclei vary considerably (at least in this slide, probably cut at different z-levels).

I'll continue working on them over the following days, start Groovying, and will post the results (and any doubts if I get stuck somewhere).

The fact that such a 2-plex problem can be solved in so many different ways, all within QuPath, is to me clear evidence that... QuPath rocks!!  ;)

Best wishes,
Carlos

micros...@gmail.com

May 7, 2017, 5:15:26 PM
to QuPath users
I had not understood what those other Add intensity features options were meant for, but now I see they could reduce the need for extra subcellular detection, as long as I use a small enough circle in the centre of the nucleus and the entire nucleus is stained in a given experiment.

Another fun use of the Cytokeratin tool (since I wasn't using fast cell counts): because it classifies all of the annotations as "Tumor" or "Stroma" by default, you can use 
selectObjects { p -> p.getPathClass() == getPathClass("Stroma") }
to select a given set of annotations and run Cell detection on each type of area individually.  This has allowed me to up the sensitivity for hematoxylin in heavy DAB areas, among other things.  Once the cells are generated in a satisfactory manner, I would still use the Subcellular detection command to get the total AP-positive area of each cell and classify based on the expected spots.

Unfortunately, I don't know enough about modifying image settings to allow subcellular detection to work on an image with no µm information (plus low resolution at this point), but a quick run at it (now that I am back at a computer!) can yield the attached.
For the low-resolution version, all I did was choose an intermediate Hema/DAB vector for my first vector and AP for the second, and then determine AP positivity by the mean cytoplasmic AP OD.  Similarly, I kind of hacked at the DAB positivity by using the nuclear hematoxylin OD (roughly what that looks like in script form is sketched below).  Had I been able to use subcellular detection, I am sure a good nuclear positivity threshold would have been much easier to establish.
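
In script form that hack is roughly the following (my own sketch rather than the exact commands; the measurement names and thresholds depend on how the stains are named and set up, so check them against your own measurement list first):

// Classify by cytoplasmic 'AP' OD (measured via the second stain slot, named DAB here),
// then subclassify using nuclear hematoxylin OD as a stand-in for DAB positivity
getDetectionObjects().each { cell ->
    if (measurement(cell, "Cytoplasm: DAB OD mean") > 0.2)          // the AP vector sits in the DAB slot
        cell.setPathClass(getPathClass("AP positive"))
    else
        cell.setPathClass(getPathClass("AP negative"))
}
setCellIntensityClassifications("Nucleus: Hematoxylin OD mean", 0.3)   // hacked DAB threshold
fireHierarchyUpdate()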
Basic 2plex test.JPG

Carlos Moro

May 8, 2017, 12:25:28 PM
to QuPath users
Hi,

I'm making progress with the several approaches. One alternative that could also be interesting is 1b, but with the AP already deconvolved, instead of sending "the whole" image/stains and then converting to an RGB stack in ImageJ.

Manual way: if the AP view is selected in QuPath > Send region to ImageJ, only the AP is sent, and I'm currently adapting the macro to process just the sent AP image.

ImageJ macro runner: however, even when the AP view is selected in QuPath, dragging in the ImageJ macro file / using the Macro runner seems to send the whole image to ImageJ, not only the AP. Is there a way to send only that stain through the Macro runner, as in the "manual" way?

Best wishes,
Carlos

micros...@gmail.com

May 8, 2017, 12:31:10 PM
to QuPath users
I am afraid that is not possible at the moment, though Peter is aware of the issue and is looking into it.  See link.


At the moment, the only work-around I know of is to use ImageJ to recreate the AP view with each sent image region.

Carlos Moro

May 8, 2017, 1:09:14 PM
to QuPath users
Thanks a lot!

I'll try including in the ImageJ macro some code like http://imagej.1557.x6.nabble.com/Fiji-color-deconvolution-macro-td5012404.html

Best wishes,
Carlos

Carlos Moro

May 11, 2017, 9:24:02 AM
to QuPath users
Hello!

It took a while but here's the comparison of cell and positive cell detection using the various methods:
Slide 1: TMA keratin segmentation
Slide 2: ImageJ 1b
Slide 3: Cell detection method 2
Slide 4: Manual.

Also attached is an Excel file with the numbers.

Purely in terms of %, I'd think both "TMA keratin segmentation" and "Cell detection method 2" perform very similarly and would be suitable. 
I now have some doubts regarding cell detection, but I'd rather comment in a new thread, as that is a somewhat different (although related) question.

How do you see it?

Best wishes,
Carlos
Comparison QuPath keratin segmentation 2-plex ihc.pdf
Comparison QuPath keratin segmentation 2-plex ihc.xlsx

micros...@gmail.com

May 11, 2017, 10:32:52 PM
to QuPath users
Cell detection looks pretty good; I'm looking forward to the new thread, as I have spent quite a bit of time with it!  Depending on how your classifier is set up, you might consider lowering the cytoplasm expansion setting to something more like 2-3 µm, so that cytoplasm-based measurements are less likely to pick up neighboring cell cytoplasm with a different marker.

Pete

May 13, 2017, 3:52:39 AM
to QuPath users
This looks very good to me, and the systematic comparison of all methods is wonderful.  And intriguing...

Considering the numbers alone, it's reassuring that the percentages are fairly similar throughout.  The biggest discrepancy looks to be that the manual keratin counts are much lower, and the manual cell counts overall are somewhat lower.

Explanations that come to mind, without considering the images, include:
  • Imperfect stain separation causes the keratin to interfere with the detection and produce false positives
  • Imperfect manual counts (sorry!) result in under-counting - the automated algorithm may be more sensitive, and less inclined to miss anything
  • Imperfect detection may cause excessive splitting of single large nuclei to boost the counts, or the detection of artefacts
  • Imperfect classification/region identification causes more non-keratin-positive cells to be put into the keratin group because they happen to be nearby
I think all are possible... and perhaps all are involved.

I think microscopyra's suggestion is an important one, and it could slightly mitigate the last possibility.  Where region identification is used, decreasing any smoothing parameter would have a similar effect.

I also do think that, while the manual counts look very accurate overall, on closer inspection quite a few cells have been missed (looking towards the top right, for example)... but I don't know if that is because extra pathology knowledge was applied to exclude them, or if they should really be included?  I do know that every time I try manual cell counting, whenever I look back the next day I find that I missed considerably more than I realised at the time.  But if there are good reasons for excluding those cells, then QuPath lacks that knowledge.

With regard to detection, I don't really see a problem with false positives being caused by the keratin, although there is probably a bit too much splitting.

For what it's worth - and with the caveat that I'm not a pathologist and don't necessarily know what cells look like - I think the truth probably lies between the manual and automated values.  I think there are slightly too few cells with the manual counts, slightly too many with the automated counts, and there could be improvements made in classifying the not-positive-but-close-to-positive cells.  

Adjusting some parameters might help bring the values a bit closer together; for that I'll join you on the other thread...

Just a few other suggestions for manual counting here (apologies if you already knew/applied these):
  • You can initialize your manual counts using the automated cell detections.  Within the window that appears when you select the 'Points tool', click on Convert detections to points.  Then you can clean up the result by adding/removing points, rather than having to click everything.
  • You can turn on a counting grid to help, with Shift + G or under the View menu; you can also set the grid spacing there.
  • You can adjust the point size & press F to fill/unfill the points to make them easier to see.
  • I find it helpful to occasionally separate the stains when manually counting using the shortcut keys https://github.com/qupath/qupath/wiki/Changing-colors
And one other suggestion about classifying the positive cells based on nearby AP staining: it may seem overkill, but you could try using Classify -> Create detection classifier to help do this step, and under Advanced options you can select what features you want to give the classifier.  This would make it possible for more complex combinations of features to be used when making the decision... although this would also necessitate some more caution and verification, to check that the 'wrong' information isn't being used (e.g. nucleus size, DAB staining) to override the 'right' information.

micros...@gmail.com

May 13, 2017, 4:45:39 AM
to QuPath users
Mm, I really need to go back and try the trainable classifier again.  Once I got ahold of your classifier script, I started blissfully if/threshold statementing (verbing) my way to happy classifications.  It's great for simple things, but I'm beginning to see the allure of some of the Haralick features, and I don't know enough about them or their expected values to really handle that part manually.  I read a paper on them once but... hah.


Carlos Moro

May 14, 2017, 10:10:25 AM
to QuPath users
Hello,

@Pete - thank you very much for the enlightening reflections and proposal!

I fully agree, the best approximation (truth? ;) ) must lie somewhere between the manual and the computed counts.

An increasing number of pathologists are aware that the manual "gold standards" are rather subjective and unreliable in daily clinical practice. For some of us, that's the main reason to strive for the introduction of digital pathology and quantitative methods, in order to improve the quality of diagnosis (here is a recent reference: https://www.nature.com/modpathol/journal/v29/n4/full/modpathol201634a.html). As I see it, quantitative analysis (and profuse use of immunohistochemistry) are the best tools we have available for more objective and reproducible pathological assessments ;)

You're very right about the reasons for underestimation of cell and positivity counts in visual assessments. Every time I look again after counting, I find some cells I missed O:)  Another reason was that I simply couldn't decide whether some were "more positive" or "more negative", so I just skipped them. Also, even though 2-plex is very powerful for cell and tissue context segmentation, there's a drawback when visually assessing DAB in an AP-stained cell... no colour deconvolution in our conscious image representations! ;)

The tip you mention about initializing a manual cell count with the automatic cell detections could be very useful in the transition from manual to computed analysis, thanks a lot!

I was also eager to give method 2 (cell detection) with classifiers a try. Here it comes, following the systematic approach (PDF and Excel attached). I recorded each step in the PDF in the hope it may be helpful for other users to try it, which I strongly recommend.

For the machine learning parameters, as this is a cell culture and all cells look (to a pathologist ;) ) very strange (highly reactive), I tried two approaches for training:
   - AP-only parameters: ROI: 2.00 µm per pixel: AP:  Mean, ROI: 2.00 µm per pixel: AP:  Std.dev., ROI: 2.00 µm per pixel: AP:  Min, ROI: 2.00 µm per pixel: AP:  Max, ROI: 2.00 µm per pixel: AP:  Median

   - Comprehensive parameter set (basically everything except DAB, which shouldn't be relevant in this use case): Nucleus: Area, Nucleus: Perimeter, Nucleus: Circularity, Nucleus: Max caliper, Nucleus: Min caliper, Nucleus: Eccentricity, Nucleus: Hematoxylin OD mean, Nucleus: Hematoxylin OD sum, Nucleus: Hematoxylin OD std dev, Nucleus: Hematoxylin OD max, Nucleus: Hematoxylin OD min, Nucleus: Hematoxylin OD range, Cell: Area, Cell: Perimeter, Cell: Circularity, Cell: Max caliper, Cell: Min caliper, Cell: Eccentricity, Nucleus/Cell area ratio, ROI: 2.00 µm per pixel: AP:  Mean, ROI: 2.00 µm per pixel: AP:  Std.dev., ROI: 2.00 µm per pixel: AP:  Min, ROI: 2.00 µm per pixel: AP:  Max, ROI: 2.00 µm per pixel: AP:  Median

Overall, it seemed the AP-only parameters performed better. That can make sense, as these specific cells are very difficult or impossible to differentiate without keratin, which would be the discriminating parameter...?

Among the AP-only classifiers:
From the % positive keratin cells it seems SVM, random trees, k-nearest, and boosted decision trees perform quite similarly. K-nearest and boosted detect slightly fewer keratin cells, maybe closer to the manual count, although to be sure those will need to be re-evaluated with a tailored cell segmentation, as being discussed in the other thread...

How do you see these data? 

Intuitively, I'd think this would be the most effective approach for keratin segmentation, as it avoids the ROI particularities of methods 0 (keratin segmentation) and 1 (ImageJ segmentation) as well as the hard-coded condition of method 2 (cell detection); as you very well pointed out in your previous mail, the classifiers will take care of it  ;)

Thank you again for all your kind help, this is all very interesting and enlightening. 
And QuPath is such powerful and effective software!   ;)

Best wishes,
Carlos
Comparison QuPath keratin segmentation 2-plex ihc_v2.xlsx
2-plex_keratin_segmentation-classifiers1-2.pdf

Carlos Moro

May 14, 2017, 10:13:33 AM
to QuPath users
Hello,

The PDF in my previous mail was the training process; here comes the testing one.

Best wishes,
Carlos
2-plex_keratin_segmentation-classifiers2-2.pdf

Pete

May 14, 2017, 2:15:08 PM
to QuPath users
Hi,

Thanks for posting all these tests!

My inclination would be towards coming up with a criterion / single threshold and not having to rely on the classifier for this kind of thing... but in reality, that should probably be based on AP measurements made from the cytoplasm only.  Since QuPath doesn't currently give these measurements (sadly!), I agree that the classifier is probably a good way to go.  No single measurement (mean, min, max, median...) is quite right, due to the influence of the nucleus size, and it helps that you can limit the information that the classifier has to work with, to point it in the right direction.

If you want, you can check visually across an entire image how likely a measurement is to be useful by using Measure -> Show measurement maps or Measure -> Show detection measurements.  The first is good for an overall impression, while the second gives more detail.

If you create a detection measurement table, you can click on 'Show histograms' and then double-click on a column of interest to see the distribution of values; you can also sort the column, and click on rows of the table to view the corresponding cell, using the arrow keys to go up and down.  This can help you see in detail what the cells look like around any critical threshold.  It can also help identify outliers, which should perhaps be removed.
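
If you prefer a quick number over the histogram, a couple of lines in the script editor can summarise any column (just a sketch; swap in whichever measurement you're inspecting, and it assumes detections already exist):

// Quick summary statistics for one detection measurement
def name = "Nucleus: DAB OD mean"
def values = getDetectionObjects().collect { measurement(it, name) }.findAll { !Double.isNaN(it) }
print "n=${values.size()}, min=${values.min()}, max=${values.max()}, mean=${values.sum() / values.size()}"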

Returning to the classifier plan, I would tend towards preferring any of the ones associated with trees.  Other classifiers have additional parameters that ought to be tuned (especially SVM), or can be unduly impacted by whether the data is normalized or there are missing values.  This probably lies behind some of the outlier results.  Random trees tends to work well without this additional tuning.

Best wishes,

Pete

micros...@gmail.com

May 14, 2017, 7:20:20 PM
to QuPath users
Looks good!  I would re-emphasize dropping the pixel size for your AP measurement to 0.25 (or whatever your actual pixel height/width is listed as under Image) as it should not greatly increase the time the command takes to run and should give more accurate data.  Though, to be fair, your staining is very clear and well defined, so it may not be an issue.  I wish I saw more cell pellets like that one!

Also, if your primary measurement of interest is detecting cytoplasmic AP, I would start the Cell detection over with AP as your second stain vector instead of DAB.  A couple of caveats there: if you run cell detection in Brightfield Other (because you want DAB as the third vector), you do need to keep the Name of the second vector as DAB, or the measurements for the second stain vector will not show up.  If you use H-DAB, I don't believe the name matters.  There are a number of tricks you can play after that if you want to pick up other parts of the cell.

In this case, I would probably run the cell detection so that you get the AP measurement for the cytoplasm, then go back and run Cell analysis -> Subcellular detection with a fairly high minimum spot size (the smallest DAB-stained nucleus), and use that to pick up DAB-stained cells (a binary yes/no based on the estimated DAB spot count).

If you used a combined DAB/Hematoxylin vector to generate the nuclei in the first place, you might just be able to use a fixed threshold for the nuclear hematoxylin OD to pick up the ones that are stained with DAB.





micros...@gmail.com

May 15, 2017, 12:13:59 AM
to QuPath users
Oh, and a fun trick Pete mentioned a few days ago: you can use https://github.com/qupath/qupath/wiki/Multiple-images to view masked and mask-less versions side by side, synced up, so you can scan around your image.  It also makes for nice screenshots showing both the overlay and no overlay at once, to best see the accuracy of the detections!  It is a little nicer than tapping a key to swap back and forth really fast as I skim around.

I have both multiple monitors and a slew of memory, so I now like to have QuPath open across two monitors (add column) showing both views at once.  I usually have to unsync them first and align them, as they almost never start synced.  Though they might if you don't move at all before opening both?  The two images write to separate temporary data files though, so keep track of which one you are making changes in!

Carlos Moro

May 17, 2017, 12:37:02 PM
to QuPath users
Thank you all so much for the enlightening feedback and discussion!  :)

Best wishes,
Carlos