Hi Naoko,
Thanks for your comments. I looked into the resolution issue, and fortunately I think there is a straightforward explanation for it. It comes down to this line of code:
// Reset interpolation - this roughly halves repaint times
gBuffered.setRenderingHint(RenderingHints.KEY_INTERPOLATION, RenderingHints.VALUE_INTERPOLATION_NEAREST_NEIGHBOR);
Since your image was acquired at 20x, viewing it at any higher magnification requires interpolation. QuPath uses the simplest - and fastest - interpolation approach, which basically makes every pixel look like a square of a single color as you zoom in further. It appears from the screenshots that ImageScope is using a different interpolation method. In your attachment, the image was viewed at 80x, so each ‘image pixel’ within QuPath is shown as a 4x4 block of 16 identically-colored screen pixels with no smoothing; consequently it looks blocky.
I tested this by changing that line in QuPath and viewing the same image with the two other interpolation methods that Java offers: bilinear and bicubic. Either one makes the appearance in QuPath much more similar to ImageScope. I've attached examples.
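In case it's useful to see exactly what I changed, below is a minimal, self-contained sketch of scaling an image with each of the three hints. To be clear, this is my own illustration rather than QuPath's actual painting code, and the class/method names are made up for the example:

import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

public class InterpolationDemo {
    // Scale 'src' by an integer factor using the requested interpolation hint
    static BufferedImage scale(BufferedImage src, int factor, Object hint) {
        BufferedImage dest = new BufferedImage(
                src.getWidth() * factor, src.getHeight() * factor, src.getType());
        Graphics2D g = dest.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION, hint);
        g.drawImage(src, 0, 0, dest.getWidth(), dest.getHeight(), null);
        g.dispose();
        return dest;
    }
    public static void main(String[] args) {
        BufferedImage src = new BufferedImage(64, 64, BufferedImage.TYPE_INT_RGB);
        // NEAREST_NEIGHBOR: blocky but fast - what QuPath uses
        scale(src, 4, RenderingHints.VALUE_INTERPOLATION_NEAREST_NEIGHBOR);
        // BILINEAR and BICUBIC: smoother, closer to ImageScope's appearance
        scale(src, 4, RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        scale(src, 4, RenderingHints.VALUE_INTERPOLATION_BICUBIC);
    }
}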
One reason QuPath uses nearest neighbor interpolation is speed; it results in less lag when browsing the image. However, another reason is that QuPath is mostly focussed on analysis… and personally I strongly prefer image analysis software to display the image with minimal processing/interpolation, to help better understand its contents. The blocky squares visible in QuPath make it clear that the image is zoomed in beyond the resolution actually provided by the scanner.
In any case, the cell detection in QuPath with the default settings is run at a resolution where the pixel size is 0.5 µm… this is 20x in your case, so no further magnification or interpolation is needed or applied. I suspect ImageScope is doing the same, although I don't know for sure. Either way, I don't think this should impact the detection.
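To make the arithmetic explicit (my own sanity check, not QuPath code - and the 0.5 µm pixel size at 20x is an assumption based on typical scanners):

public class DownsampleCheck {
    public static void main(String[] args) {
        double scanPixelSizeMicrons = 0.5;      // ~20x acquisition (assumed)
        double requestedPixelSizeMicrons = 0.5; // QuPath's default for cell detection
        // The downsample factor is the requested pixel size over the scan's pixel size
        double downsample = requestedPixelSizeMicrons / scanPixelSizeMicrons;
        System.out.println("Downsample = " + downsample); // 1.0 - pixels used exactly as scanned
    }
}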
For small cells there may be some advantage in acquiring the image at 40x magnification; this not only adds a small amount of extra information to the image, but also means that any JPEG compression artefacts occur at the higher resolution. So even if you run your analysis at 20x, the quality should be better, because these artefacts are reduced when the image is scaled down. The impact may be too subtle to be worthwhile, though.
With regard to the cell detection, I would suggest reducing the ‘Minimum area’ in QuPath substantially; in your case, it looks like a value in the range 2-5 brings in many of the missed cells. You might also try reducing the sigma value from 1.5 to 1. And, if you would like to relate QuPath's detections more easily to the exact pixels at high magnification, you can turn off ‘Smooth boundaries’ to get a more ‘raw’ result.
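If it helps to keep these settings reproducible, the same parameters can be set from a script run in QuPath's script editor (which also accepts this Java-style syntax). This is only a sketch - I've written the JSON keys from memory and they can differ between versions, so the safest approach is to run the command once from the menu and copy the exact line that QuPath records in the workflow/command history:

selectAnnotations();
runPlugin("qupath.imagej.detect.cells.WatershedCellDetection",
        "{\"requestedPixelSizeMicrons\": 0.5, "
        + "\"sigmaMicrons\": 1.0, "         // reduced from the default 1.5
        + "\"minAreaMicrons\": 3.0, "       // somewhere in the suggested 2-5 range
        + "\"smoothBoundaries\": false}");  // show the more ‘raw’ boundaries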
In interpreting all this, one aspect of QuPath's nucleus detection may be useful to know... or at least the thinking behind it at the time.
Simple detection very often depends upon setting an intensity threshold to detect dark/light pixels in the image. This can form the basis of nucleus detection and give very intuitive results; however, a problem is that the exact choice of intensity threshold can have a huge influence on the results - not only on the number of detections, but also on the measured size of each detection, because a small change in threshold might switch from detecting only the darkest part of a cell to detecting the cell along with the surrounding blur (described here). A simple change in intensity threshold might make a cell be measured as 2 or 3 times larger.
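To see how strong this effect can be, here is a toy illustration (my own sketch, nothing to do with QuPath's code): for a blurred ‘cell’ with a Gaussian intensity profile, the area above a threshold t works out to 2*pi*sigma^2*ln(peak/t), so a modest threshold change translates into a big size change:

public class ThresholdArea {
    // Area of the region where peak * exp(-r^2 / (2 sigma^2)) exceeds threshold t
    static double areaAboveThreshold(double peak, double sigma, double t) {
        return 2 * Math.PI * sigma * sigma * Math.log(peak / t);
    }
    public static void main(String[] args) {
        double peak = 1.0, sigma = 3.0; // arbitrary toy values, in pixels
        System.out.printf("t = 0.5 -> area %.0f px%n", areaAboveThreshold(peak, sigma, 0.5)); // ~39
        System.out.printf("t = 0.2 -> area %.0f px%n", areaAboveThreshold(peak, sigma, 0.2)); // ~91
        // Lowering the threshold from 0.5 to 0.2 more than doubles the measured area
    }
}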
This isn’t good whenever cell measurements are also important for later classifying the cell type; it makes size and intensity more tightly related than they should be. As a result, when designing the cell detection in QuPath, I tried to reduce the trouble this can cause by basing the detected boundaries effectively on the zero crossings after Laplacian of Gaussian filtering; this has more of a mathematical justification for matching the cell boundary. To see this in action, zoom in on a suitably dark cell and run the detection with wildly different intensity thresholds: you should find that the boundary of the cell (usually) remains unchanged; the primary difference is whether the cell is detected at all.
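Here is another toy sketch (again mine, not the actual QuPath implementation) showing why zero crossings behave this way: for a Gaussian-shaped blob, the second derivative - the 1D analogue of the Laplacian of Gaussian - crosses zero at x = ±s regardless of how bright the blob is, so rescaling the intensity changes whether the blob is detected, not where its boundary falls:

public class ZeroCrossingDemo {
    // Second derivative of A * exp(-x^2 / (2 s^2)): zero exactly at x = +/- s
    static double secondDerivative(double A, double s, double x) {
        return A / (s * s) * Math.exp(-x * x / (2 * s * s)) * (x * x / (s * s) - 1);
    }
    public static void main(String[] args) {
        double s = 3.0; // blob 'size' in pixels
        for (double A : new double[] { 1.0, 5.0 }) { // two very different brightnesses
            double prev = secondDerivative(A, s, 0);
            for (double x = 0.1; x < 10; x += 0.1) {
                double cur = secondDerivative(A, s, x);
                if (prev < 0 && cur >= 0) { // sign change marks the boundary
                    System.out.printf("A = %.1f -> zero crossing near x = %.1f%n", A, x);
                    break;
                }
                prev = cur;
            }
        }
        // Both brightnesses report the same crossing (x ~ 3.0 = s)
    }
}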
This improved boundary consistency comes at the cost of making it a bit harder to relate what is detected directly to the intensities within the image. Because of the preprocessing involved in computing the boundaries, cells that look obvious might occasionally be missed - especially if they lie very close to cells that are stained more darkly. On the other hand, the cells that are detected should be detected more consistently. In fact, the localization of nucleus boundaries depends more on the sigma parameter than on the intensity threshold (which makes some sense, since sigma is defined in spatial units).
Whether this is actually a good or a bad thing is up for debate; I suspect it depends on the image, and it’s very hard to compare algorithms with multiple parameters fairly. I also don't know anything about how the ImageScope algorithm works or performs. But I thought that perhaps it would be useful to know a bit more of the rationale behind the method in QuPath.
In any case, I think there is definitely room for improving the cell detection in QuPath, or adding in new detection algorithms for different types of images. There are already two distinct methods for cell detection; given the ability to add extensions, I hope that more will be made available.
Best wishes,
Pete