Dear Scikit-Image (SKI) team,
I have over a hundred scanning electron microscope (SEM) images of gold nanoparticles on glass surfaces, and I've generated several scripts in ImageJ/Python to batch analyze them. The analysis is fairly crude: users manually threshold to make a binary image, apply a simple noise filter, and run ImageJ's particle counting routine. Afterwards, my scripts use Python for plotting and statistics, and then output .txt, Excel and .tex files. Eventually, I'd like to remove the ImageJ portion altogether and refactor the code to use SKI exclusively; for now, though, I am mainly interested in improving the results with some features of SKI.
The images are in a .pdf and can be downloaded directly here. (Just a hair too big to attach.)
What I'd like to do is look at a subset of our images and see if SKI can enhance the image/remove defects. I've chosen 10 images to represent various cases and attached a summary via Google Drive. The images are categorized as follows (preliminary questions are in blue):
NICE - Image is about as good as we can get, and shouldn't have many artifacts. Can these be further enhanced?
LOWCONTRAST - Can the contrast in these images be enhanced automatically in SKI?
NONCIRC - Particles appear non-circular due to stigmation offset in the microscope. Is it possible to reshape them/make them more circular?
WARPED - Images that have artifacts, or uneven contrast, due to aberrations in the SEM beam during imaging. I'm especially interested in removing uneven contrast.
WATERSHED - These images have overlapping AuNPs, and I had hoped that SKI's watershedding routines might help disentangle them. The watershed segmentation guide indicates that there are several ways to approach this problem.
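For the WATERSHED case, this is the sort of approach I understood from the guide (a rough, untested sketch on my part; the function locations are from current SKI, the filename is a placeholder, and min_distance is a guess tied to the expected particle radius in pixels):

import numpy as np
from scipy import ndimage
from skimage import feature, filters, io, segmentation

image = io.imread('watershed_f3.png', as_gray=True)  # hypothetical filename
binary = image > filters.threshold_otsu(image)

# The distance to the background peaks at the particle centres.
distance = ndimage.distance_transform_edt(binary)

# One marker per presumed particle centre.
coords = feature.peak_local_max(distance, min_distance=10, labels=binary)
markers = np.zeros(distance.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)

# Flood from the markers; touching particles get split at the ridges.
labels = segmentation.watershed(-distance, markers, mask=binary)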
On the attached PDF, each page shows the original SEM image (converted from highres.tiff to png), a binary image, our manually chosen adjustment threshold, and two estimates of the particle diameter distribution (don't worry about details of this).
I was really hoping that some SKI experts would examine these images and suggest some algorithms or insights to address the aforementioned concerns. The overall goal is to survey the most common problems in SEM imaging of nanoparticles, give examples of each, and demonstrate how SKI can improve the particle counting.
Thanks for your time, and for making a really nice open-source package.
As for downloading without an account: I am not familiar with any hosting solution that allows it, but if you guys have any recommendations I'd love to hear them.
Thank you for your help, I will test out the methods you suggested.
Is the goal only to count particles? In that case, I think a local thresholding (threshold_adaptive) would work on all these images. Then, just do a labelling (scipy.ndimage.label) and draw a histogram of particle sizes. You'll get a sharp peak around the true particle size, with peaks at larger sizes for clumps.
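A minimal sketch of that pipeline, for the record (untested; threshold_adaptive has since been renamed threshold_local in skimage.filters, and the filename is a placeholder):

import numpy as np
from scipy import ndimage
from skimage import filters, io

image = io.imread('sem_image.png', as_gray=True)  # hypothetical filename

# Local (adaptive) threshold: compare each pixel to its neighbourhood mean.
binary = image > filters.threshold_local(image, block_size=51)

# Label connected components and histogram their sizes.
labels, n_objects = ndimage.label(binary)
sizes = ndimage.sum(binary, labels, index=np.arange(1, n_objects + 1))
hist, bin_edges = np.histogram(sizes, bins=50)  # sharp peak near true size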
On 20 Nov 2013 07:33, "Juan Nunez-Iglesias" <jni....@gmail.com> wrote:
>
> I'm guessing you are applying label() directly to your image, which is not the right way to use it.
Looks like isolating objects is a common problem. Time for another gallery example? We already have the coins, but perhaps something more realistic with some biological data.
Stéfan
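For what it's worth, a toy illustration of Juan's point, that label() treats every nonzero pixel as foreground and so needs an already thresholded image:

import numpy as np
from scipy import ndimage

image = np.array([[0.2, 0.9, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.1, 0.1, 0.7]])

_, n = ndimage.label(image)        # n == 1: every nonzero pixel merges
_, n = ndimage.label(image > 0.5)  # n == 2: the two bright regions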
Adam,
On Wed, Nov 20, 2013 at 7:44 AM, Adam Hughes <hughes...@gmail.com> wrote:
> To download without an account, I am not familiar with any hosting solution, but if you guys have any recommendations I'd love to hear them.

Dropbox can be used for this...

> Thank you for your help, I will test out the methods you suggested.

Is the goal only to count particles? In that case, I think a local thresholding (threshold_adaptive) would work on all these images. Then, just do a labelling (scipy.ndimage.label) and draw a histogram of particle sizes. You'll get a sharp peak around the true particle size, with peaks at larger sizes for clumps. Once you have the mean particle size you can estimate the number of particles in each clump (barring occlusion in 3D, in which case you're stuffed anyway), and then the total number of particles in your image.
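A rough sketch of that estimate (reusing a hypothetical `sizes` array of labelled-region areas, e.g. from scipy.ndimage.sum over the labels; the modal size stands in for the single-particle area):

import numpy as np

# `sizes` is a hypothetical array of labelled-region areas (in pixels).
counts, edges = np.histogram(sizes, bins=50)
i = np.argmax(counts)
single_particle_area = 0.5 * (edges[i] + edges[i + 1])  # modal region size

# Each region contributes roughly area / single-particle-area particles.
particles_per_region = np.round(sizes / single_particle_area).astype(int)
total_particles = particles_per_region.sum()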
Looking at your images, I don't think watershed (or anything else that I know) will do very well with the clumps. The contrast between adjacent particles is too low.
Low-contrast-4 looks tricky... Are the smaller "points" particles of different sizes or just image noise?

Finally, Watershed-f3 also looks hard, because it appears all the particles are touching... Again, I don't think watershed will help you here, nor anything else that doesn't have a priori knowledge of the particle size.
--
Juan.
I tried this, but to share a link it asks for the emails of the recipients. Have you used Dropbox to host a publicly accessible link? If so, I will certainly start doing this, thanks.
Yes, that is the goal. We had done a similar process in ImageJ, but did the thresholding manually. I will read into the adaptive threshold a bit more. We had hoped that some of these corrections, such as histogram equalization, would make the automatic threshold more likely to give correct results.
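For instance, something like this is what I had in mind (untested; names from skimage.exposure, filename a placeholder):

from skimage import exposure, filters, io

image = io.imread('lowcontrast_4.png', as_gray=True)  # hypothetical filename

# Global equalization, or CLAHE if the contrast varies across the image.
eq = exposure.equalize_hist(image)
eq_local = exposure.equalize_adapthist(image, clip_limit=0.03)

# The hope: an automatic threshold works better on the equalized image.
binary = eq > filters.threshold_otsu(eq)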
Hmm I see. I will still try it out, but thanks for the heads up. I'll feel better now if it doesn't work well.
We do have a priori knowledge, actually. What I've been doing already is putting a lower limit on particle size, with anything under it being noise. After doing particle counts and binning the data, we fit it with a Gaussian, and optionally scale the data so that the Gaussian is centered around the mean particle diameter (which we believe we know to about 3 nm, based on TEM imaging and indirect spectroscopic techniques). Based on the size distribution, we try to further bin the data into small (dimers/trimers) and large aggregates. For all the particles that are large enough to be considered an aggregate, we assume that they fill a half-sphere volume, and then we infer the true particle count due to these aggregates. It's pretty ad hoc, but we certainly apply some knowledge of the expected particle size distributions. I realize watershedding won't split up huge clumps, but maybe it could assist with the dimers and trimers? In any case, even if it doesn't significantly enhance our results, it would still be helpful to explore that option and I'll try it out.
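For concreteness, our fitting step looks roughly like this (simplified; the input file and noise cutoff are placeholders):

import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, mean, sigma):
    return amplitude * np.exp(-0.5 * ((x - mean) / sigma) ** 2)

diameters = np.loadtxt('diameters.txt')  # hypothetical measured diameters (nm)
diameters = diameters[diameters > 10.0]  # hypothetical noise-floor cutoff

counts, edges = np.histogram(diameters, bins=40)
centers = 0.5 * (edges[:-1] + edges[1:])

# Initial guesses: peak height, histogram mode, ~3 nm spread.
p0 = (counts.max(), centers[np.argmax(counts)], 3.0)
(amplitude, mean_diameter, sigma), _ = curve_fit(gaussian, centers, counts, p0=p0)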
For the uneven background issue, you can always filter out the low-frequency parts of the image. You can do this in Fourier space, or just subtract a Gaussian-filtered version of the image:
from skimage import img_as_float
from scipy import ndimage

def preprocess_highpass(image, filter_width=100):
    '''Emulates a highpass filter by subtracting a smoothed version
    of the image from the image itself.

    Parameters
    ----------
    image : ndarray
        The input image.
    filter_width : int
        Width of the Gaussian lowpass filter; should be much bigger
        than the relevant features in the image, and about the scale
        of the background variations.

    Returns
    -------
    f_image : ndarray
        The filtered image, with the same shape as the input, float
        dtype, and minimum shifted to 0.
    '''
    # Work in float to avoid integer wrap-around when subtracting.
    image = img_as_float(image)
    # A heavily smoothed image approximates the slowly varying background.
    lowpass = ndimage.gaussian_filter(image, filter_width)
    f_image = image - lowpass
    # Shift so that the result is non-negative.
    f_image -= f_image.min()
    return f_image
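For example (hypothetical filename; filter_width should be tuned to the scale of the background variation):

from skimage import io

image = io.imread('warped_sem.tif')  # hypothetical input image
flattened = preprocess_highpass(image, filter_width=100)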