An outline of using SLICs to select regions for cell by cell analysis

932 views

micros...@gmail.com

19 Jul 2017 15:16:09
to QuPath users
This is a rough outline, and I plan on filling it in as I have time.  From the Gitter page:

1. Generate your annotations, usually by Simple tissue detection, probably with a threshold in the 230-250 range.  I tend to choose a very small "Max fill area" because we are generally looking at tissue specifically and want to avoid including large, empty blood vessel cross sections in the cells/mm^2 measurements.
2. Use the SLIC superpixel segmentation to select regions.  I usually prefer more regular superpixels (a higher regularization value) when I am merging the SLICs for cell generation, or a very low value when I am identifying areas for MTC or other types of stains that mark oddly shaped regions.

3. Add measurements and classify!  I don't have much guidance here; just make sure you have enough of the right kind of measurements.  I tend to have a lot of luck either with Haralick features on OD when I am looking at larger patterns in cells, or with just mean values on color vectors when I am looking for regions of a particular stain ("blue" for MTC again).

4. Convert the SLICs to Annotations:
    At this point you have a couple of options.  I usually handle this step by first deleting all of the original Annotations, or by classifying the original Annotation as something that I am never going to use in my classifier, so that I can differentiate between the SLICs-as-annotations and the original full tissue annotations.  Your end goal here is to avoid merging the original Annotation in with the others, since that would overwrite/wipe out all of your SLIC data and could cause problems down the line (or not, if you are really careful).
Next, use the script in Automate -> Open sample scripts -> Create annotation from ROI.  It is fairly generic and works on all "selected objects," so in this case you probably want to pick one type of classified detection at a time to run it on.
One line of code should do that for you:

selectObjects { p -> p.getPathClass() == getPathClass("Stroma") }

This will select all of the SLICs you have classified as Stroma, or whatever you change the text "Stroma" to.
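The "classify the original Annotation so it never gets merged" idea above can be sketched in a few lines of Groovy.  This is a minimal sketch assuming the QuPath 0.1.x scripting functions; the "Ignore" class name is an arbitrary choice, not anything QuPath requires:

```groovy
// Tag the original full-tissue annotation (still unclassified at this point)
// with a throwaway class so it is easy to exclude from later merging.
// "Ignore" is an arbitrary name - anything unused by your classifier works.
getAnnotationObjects().findAll { it.getPathClass() == null }.each {
    it.setPathClass(getPathClass("Ignore"))
}
```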

5. At this point it is probably important to mention that you now have a complete set of both detection SLICs and annotation SLICs, so you may want to selectDetections() and clearSelectedObjects().  Alternatively, you may want to selectAnnotations(); assuming you already deleted your first, outer annotation, you can simply right-click on one and set the class for the whole set.  This lets you go back, select a different type of SLIC detection class, and perform the same steps.  Another way to select the new annotations is to use the same object-selection script above, except with == null, as everything else should have a classification at this point.
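The clean-up described in step 5 might look like this as a script - a sketch built only from the QuPath functions mentioned above:

```groovy
// Remove the detection SLICs now that annotation copies of them exist
selectDetections()
clearSelectedObjects()

// Alternative route: select only the new, not-yet-classified annotations
// (everything else should already carry a classification by this point)
selectObjects { p -> p.isAnnotation() && p.getPathClass() == null }
```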

6.  The slow part!  Select all of your annotations that are of a given class, and mergeSelectedAnnotations() (many of these commands are available through the Objects menu as well; you don't have to script them all).  This can take forever.  If your SLICs are large enough not to interfere with cell generation, you may want to just select those annotations, run your Cell Detection on them, and sum the results outside of QuPath (export function).
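Steps 4-6 can be chained one class at a time.  A hedged sketch, again against the QuPath 0.1.x scripting functions; the class names in the list are examples only:

```groovy
// Merge the annotation SLICs one classification at a time.
// "Stroma" and "Tumor" are example class names - substitute your own.
["Stroma", "Tumor"].each { name ->
    selectObjects { p -> p.isAnnotation() && p.getPathClass() == getPathClass(name) }
    mergeSelectedAnnotations()   // the slow part!
}
```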

Hope this helps.  It can be scripted such that all of these steps are done with the click of a button, and can be applied across all of the slides in a project, but that requires a bit of tinkering, plus making sure your original annotations are correct and don't include artifacts!

Pete

24 Jul 2017 05:06:37
to QuPath users
Thanks for writing this up!

One question: the command Analyze -> Region identification -> Tiles & superpixels -> Tile classifications to annotations is intended to help with steps 4-6, and should be faster.  Are there any reasons to avoid the Tile classifications to annotations command in favour of the script method?

Also, there is another option to use superpixels to help in the event that you don't have very many regions to identify, and a 'semi-manual' approach would be acceptable.  To use this, you need a little-known option under 'Edit -> Preferences' called 'Use tile brush'.  If you turn this on, then for an image containing superpixels (they don't even need to have features) the regular 'Brush' tool will switch its behavior to select entire tiles as you draw over them.  This makes it behave a bit more like the Wand tool, but more controlled.

Having created your SLIC-enabled annotations this way, you might then simply delete all the detections (which includes the superpixels) afterwards to be left with your annotations only.

Furthermore, you can hold down the shift key while using the brush in 'tile brush' mode to temporarily revert to the old behaviour of a circular brush - this means you are not entirely dependent on the superpixel boundaries being correct.  In both cases, the alt key has the same effect as before - turning the brush into an eraser, either one circle or one superpixel at a time.

micros...@gmail.com

24 Jul 2017 10:14:55
to QuPath users
Yes!  That tool is MUCH better for the actual merging into annotations!  I had not tried it because of the "Tile" in the name, which I thought referred only to the grid-based tiles.  Another overlooked gem.  It's too bad, because the merge times using the script were a major stumbling block on a previous project :(

James Hurst

10 Aug 2017 08:34:15
to QuPath users
Hi. Thanks for the great help. 

I've been using QuPath to delineate between endometrium and myometrium on tissue-sectioned slides, then doing a DAB-positive cell count in the two different compartments/annotations.  The classification of the SLICs (later to be merged) is causing me some difficulties at the moment.  I train the classifier on a set of 5-10 slides, only for test classifications to come back looking very poor.

Does anyone have any tips for using QuPath for classifying large tiles or structures rather than sets of cells? I know it's not exactly what it was built for. 

For instance, there are a couple of things I'm changing but not sure of the effect of:

- I'm currently adding Haralick features; does changing the computation distance or the number of bins significantly impact the analysis?
- How does the resolution of the intensity features affect classification on this larger scale? 

Thanks in advance for any help!

James

James Hurst

10 Aug 2017 08:50:34
to QuPath users
I should say I'm adding Haralick features on Optical Density, currently. 

Pete

10 Aug 2017 21:05:00
to QuPath users
Hi James,

I'd start by asking:
  • Do you have enough images to really require full automation using classified superpixels, or could you save yourself effort by drawing the regions manually?
  • Does it look in your image like intensity and/or texture features alone are enough to make the distinction between regions, or is some additional knowledge needed?
  • Is there much variation in staining across your images, which might be thwarting the classification?
I don't know what stains you have beyond DAB, and I don't know how subtle or clear the distinction would be.  Personally, I tend to prefer just drawing manual regions in many cases - often creating these with the brush or wand tools, taking advantage of the fact these can adapt depending on the magnification at which I am viewing the image.  Where two regions are needed instead of one, Objects -> Make inverse annotation can help.  The classifiers can help a lot when they work well, but a lot of time can be spent fighting with them otherwise - and the drawing tools let me define the regions more quickly and accurately.

If classifying superpixels does still seem necessary, then the Changing colors section of the wiki could help to ascertain whether the optical density sum is likely to be sufficiently informative; depending on stains, even switching to use red, green and blue might help.  I have never found a compelling reason to change the distance value or number of bins from the defaults; I would not expect these to make a major difference... but of course you may be able to find a case in which they do.

Something that may make more of a difference: in the Compute intensity features command you can specify whether to use the ROI (i.e. just use textures within the superpixels) or Square/Circular tiles.  The latter options result in measuring textures potentially beyond the confines of the superpixel itself, in a square or circle around the superpixel centroid.  This gives a way to 'look further' away.  Depending on how far you look, changing the resolution for those features may help.

I don't know of any particular rules beyond that, but I would use the Measure -> Show measurement maps command to get more of a feeling for how informative the texture features seem to be when using different parameters.

Two other possibly-important points:
  • Show measurement maps can also help check that the features are being calculated properly on the test images in the first place.  If run from a script, a misplaced line could mean that the features aren't calculated on the superpixels, e.g. because the superpixels haven't been selected at the right time.  The 'Image type' also needs to be set properly for the right features to be created (in the 'Image' tab, double-click the 'Image type' entry to change its value if it looks wrong).  If any features are missing, the classifier receives 'NaN' (Not a Number) values instead - and still makes an effort at classification, but usually a very bad one.
  • It shouldn't matter if you are working with a whole slide image, but be sure to check this bug description (and potential workaround), in case it could be affecting you: https://github.com/qupath/qupath/issues/74
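The missing-features check in the first point can also be scripted.  This is a rough sketch assuming the QuPath 0.1.x MeasurementList API; method names may differ in other versions:

```groovy
// Count superpixels that have at least one NaN feature value -
// a quick way to spot features that were never actually computed.
def flawed = getDetectionObjects().findAll { d ->
    def ml = d.getMeasurementList()
    ml.getMeasurementNames().any { Double.isNaN(ml.getMeasurementValue(it)) }
}
print "Superpixels with NaN feature values: " + flawed.size()
```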
Finally, if you are familiar with ImageJ then you might also want to send your image there and explore other approaches that could help identify your regions more reproducibly - see the Working with ImageJ section of the wiki.  If so, then this could be run as a macro from QuPath, and the ImageJ ROI sent back as a QuPath object.
Message has been deleted

James Hurst

18 Aug 2017 12:03:38
to QuPath users
Hi Pete,

Thanks for the great help.  I have hundreds of slides to deal with, so getting this right would potentially save a lot of time.  I've been playing around with adding Basic and Haralick features to hematoxylin/OD and using your advice to visualise these textures in the measurement maps.  It's given me a fantastic idea of what I have to play with in regard to the classifier and the SLIC superpixels.  Unfortunately, as you have probably surmised, there aren't many that really distinguish between the tissue structures with the specificity that would allow for a decent set of training data.  I'm not giving up yet (I'm in the process of picking out the best ones and trying to find a complementary combination!), but I'm formulating a plan B.

This would basically be the semi-automated approach; estimating stain vectors, hand drawing annotations and then doing positive cell counts. Standard stuff. With this in mind (and your comment on the 'make inverse annotation' command) is there a way of excluding a smaller annotation which lies inside a second, large annotation when doing cell counts? I have two areas of interest, one within the other, and I don't want to get double-counted cells. I realize I could produce a hole in the larger annotation using the 'alt' function of a tile brush/regular brush, but I reckon there is a more glamorous solution that might save me some time. 

Many thanks,

James

EDIT: Sorry, I was just using the Make Inverse Annotation function incorrectly. It works perfectly, thanks! No need for any extra glamorous solutions. 