Advice on improving an analysis of images of nanoparticles


Adam Hughes

Nov 15, 2013, 12:02:26 PM
to scikit...@googlegroups.com

Dear Scikit-Image (SKI) team,


I have over a hundred scanning electron microscope (SEM) images of gold nanoparticles on glass surfaces, and I've written several scripts in ImageJ/Python to batch analyze them.  The analysis is fairly crude, and consists mostly of users manually thresholding to make a binary image, applying a simple noise filter, and running ImageJ's particle counting routine.  Afterwards, my scripts use Python to do plotting and statistics, and then output .txt, Excel and .tex files.  Eventually, I'd like to remove the ImageJ portion altogether and refactor the code to use SKI exclusively; however, for now, I am mainly interested in improving the results with some features of SKI.


The images are in a .pdf that can be downloaded directly here. (Just a hair too big to attach.)


What I'd like to do is look at a subset of our images and see if SKI can enhance the images/remove defects.  I've chosen 10 images to represent various cases and attached a summary via Google Drive. The images are categorized as follows (preliminary questions are in blue):


NICE - Image is about as good as we can get, and shouldn't have many artifacts.  Can these be further enhanced?


LOWCONTRAST - Can the contrast in these images be enhanced automatically in SKI?  


NONCIRC - Particles appear non-circular due to stigmation offset in microscope.  Is it possible to reshape them/make them more circular?


WARPED - Images that have artifacts, or uneven contrast, due to aberrations in SEM beam during imaging.  I'm especially interested in removing uneven contrast.


WATERSHED - These images have overlapping AuNPs, and I had hoped that SKI's watershedding routines might help disentangle them.  The watershed segmentation guide indicates that there are several ways to approach this problem.


On the attached PDF, each page shows the original SEM image (converted from high-res .tiff to .png), a binary image at our manually chosen threshold, and two estimates of the particle diameter distribution (don't worry about the details of this).


I was really hoping that some SKI experts would examine these images and suggest some algorithms or insights to address the aforementioned concerns.  The overall goal is to survey the most common problems in SEM imaging of nanoparticles, give examples of each, and demonstrate how SKI can improve the particle counting.


Thanks for your time, and for making a really nice open-source package.

Stéfan van der Walt

Nov 17, 2013, 4:27:12 AM
to scikit-image
Hi Adam

On Fri, Nov 15, 2013 at 7:02 PM, Adam Hughes <hughes...@gmail.com> wrote:
> The images are in a .pdf that can be downloaded directly here. (Just a hair too
> big to attach)

This download requires a login--could you please provide us with a
direct link / HTML page showing the images?

> LOWCONTRAST - Can the contrast in these images be enhanced automatically in
> SKI?

We have several histogram equalization algorithms, including Contrast
Limited Adaptive Histogram Equalization (CLAHE), which should do the trick.
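For concreteness, here is a minimal global histogram equalization sketch in plain NumPy. This is an illustrative re-implementation, not skimage's code (in skimage these routines live in the exposure module):

```python
import numpy as np

def equalize_hist(image, nbins=256):
    """Global histogram equalization: map each intensity through the
    normalized cumulative histogram, so the output is roughly uniform."""
    hist, bin_edges = np.histogram(image.ravel(), bins=nbins)
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]                                   # normalize to [0, 1]
    centers = (bin_edges[:-1] + bin_edges[1:]) / 2.0
    return np.interp(image.ravel(), centers, cdf).reshape(image.shape)

# A low-contrast image: a ramp squeezed into the narrow band [0.4, 0.6].
low = np.linspace(0.4, 0.6, 10000).reshape(100, 100)
eq = equalize_hist(low)
# eq now spans roughly [0, 1] instead of [0.4, 0.6]
```

CLAHE does the same thing per tile, with a clip on the histogram that keeps noise in flat regions from being amplified.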

> NONCIRC - Particles appear non-circular due to stigmation offset in
> microscope. Is it possible to reshape them/make them more circular?

It depends on what you mean by "make them more circular". We have
geometric transformation utilities in `skimage.transform`.

> WARPED - Images that have artifacts, or uneven contrast, due to aberrations
> in SEM beam during imaging. I'm especially interested in removing uneven
> contrast.

Localized contrast equalization in the `skimage.filter.rank` module might help.

> WATERSHED - These images have overlapping AuNPs, and I had hoped that SKI's
> watershedding routines might help disentangle them. The watershed
> segmentation guide indicates that there are several ways to approach this
> problem.

See Juan's post here:
https://groups.google.com/d/msg/scikit-image/4z-hPiFFDj8/2gIDMxfGrU4J

Regards
Stéfan

Adam Hughes

Nov 19, 2013, 3:44:41 PM
to scikit...@googlegroups.com
Thanks for the help Stefan.  Here is a direct link that anyone can view:  


I'm not familiar with any hosting solution that allows downloading without an account, but if you guys have any recommendations I'd love to hear them.

Thank you for your help; I will test out the methods you suggested.

Juan Nunez-Iglesias

Nov 19, 2013, 9:39:47 PM
to scikit...@googlegroups.com
Adam,

On Wed, Nov 20, 2013 at 7:44 AM, Adam Hughes <hughes...@gmail.com> wrote:
I'm not familiar with any hosting solution that allows downloading without an account, but if you guys have any recommendations I'd love to hear them.

Dropbox can be used for this... 

Thank you for your help; I will test out the methods you suggested.

Is the goal only to count particles? In that case, I think a local thresholding (threshold_adaptive) would work on all these images. Then, just do a labelling (scipy.ndimage.label) and draw a histogram of particle sizes. You'll get a sharp peak around the true particle size, with bigger peaks for clumps. Once you have the mean particle size you can estimate the number of particles in each clump (barring occlusion in 3D, in which case you're stuffed anyway), and then the total number of particles in your image.
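The clump-size bookkeeping Juan describes can be sketched in a few lines of plain NumPy/SciPy (the synthetic image and the hard-coded typical size below are for illustration only; a real pipeline would take the typical size from the histogram peak):

```python
import numpy as np
from scipy import ndimage as nd

def estimate_particle_count(binary, typical_size):
    """Label connected components and estimate how many particles each
    clump contains by dividing its area by the typical single-particle
    area (a deliberately crude estimate, as discussed above)."""
    labeled, n_regions = nd.label(binary)
    sizes = np.bincount(labeled.ravel())[1:]     # drop the background bin
    per_clump = np.maximum(1, np.round(sizes / float(typical_size)))
    return int(per_clump.sum())

# Synthetic image: three isolated 5x5 "particles" and one 5x10 "dimer".
img = np.zeros((40, 40), dtype=bool)
img[2:7, 2:7] = img[2:7, 12:17] = img[20:25, 2:7] = True
img[30:35, 10:20] = True                         # clump of ~2 particles
print(estimate_particle_count(img, typical_size=25))   # -> 5
```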

Looking at your images, I don't think watershed (or anything else that I know) will do very well with the clumps. The contrast between adjacent particles is too low.

Low-contrast-4 looks tricky... Are the smaller "points" particles of different sizes or just image noise?

Finally, Watershed-f3 also looks hard, because it appears all the particles are touching... Again, I don't think watershed will help you here, nor anything else that doesn't have a priori knowledge of the particle size.

Juan.

Evelyn Liu

Nov 19, 2013, 11:45:19 PM
to scikit...@googlegroups.com
Hi Juan,

On Tuesday, November 19, 2013 9:39:47 PM UTC-5, Juan Nunez-Iglesias wrote:
Is the goal only to count particles? In that case, I think a local thresholding (threshold_adaptive) would work on all these images. Then, just do a labelling (scipy.ndimage.label) and draw a histogram of particle sizes. You'll get a sharp peak around the true particle size, with bigger peaks for clumps

I'd like to use scikit-image to plot the size distribution histogram of particles in an image similar to Adam's. I tried scipy.ndimage.measurements.label(image), which I thought would give an array with the particle sizes. However, the output array is all 1s, obviously nothing about size. I must be getting something wrong... So which function should I call for the size distribution? Thanks Juan!


Juan Nunez-Iglesias

Nov 20, 2013, 12:33:19 AM
to scikit...@googlegroups.com, eve...@gmail.com
Hi Evelyn,

I'm guessing you are applying label() directly to your image, which is not the right way to use it. label() connects all neighboring nonzero points together. Since image pixels are rarely exactly zero (they usually have some very small intensity value), you are simply connecting all the pixels of your image together into one label.

The correct way to do this is to threshold your image first, e.g. using threshold_adaptive:

>>> import numpy as np
>>> from skimage.filter import threshold_adaptive
>>> from scipy import ndimage as nd
>>> diam = 51  # a guess; you might have to fiddle with this parameter
>>> image_t = image > threshold_adaptive(image, diam)
>>> image_labeled = nd.label(image_t)[0]
>>> particle_sizes = np.bincount(image_labeled.ravel())[1:]  # [1:] keeps only the foreground labels

Hope this helps!

Juan.



--
You received this message because you are subscribed to the Google Groups "scikit-image" group.
To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

Stéfan van der Walt

Nov 20, 2013, 1:13:57 AM
to scikit-image

On 20 Nov 2013 07:33, "Juan Nunez-Iglesias" <jni....@gmail.com> wrote:
>
> I'm guessing you are applying label() directly to your image, which is not the right way to use it.

Looks like isolating objects is a common problem. Time for another gallery example? We already have the coins, but perhaps something more realistic with some biological data.

Stéfan

Guillaume Gay

Nov 20, 2013, 2:32:44 AM
to scikit...@googlegroups.com
Hi all,

I'm currently working on cell nuclei segmentation (in 3D) of fluorescence imagery. I'm now using quite a panel of scikit-image routines to segment those out (local contrast enhancement, Otsu thresholding, etc.). Would you be interested if I turned that into an example (maybe something that could be chunked into several pieces)?

Best,

Guillaume

Brickle Macho

Nov 20, 2013, 5:50:46 AM
to scikit...@googlegroups.com
I could adapt this example (http://pythonvision.org/basic-tutorial), which uses pymorph and mahotas to count nuclei and segment the image, into a gallery example for skimage (obviously acknowledging the original tutorial).  Or is it bad form to copy/use another example?

Michael.

Juan Nunez-Iglesias

Nov 20, 2013, 7:55:37 AM
to scikit...@googlegroups.com
@Guillaume, please contribute the example ASAP because I'm struggling with the exact same problem! =D In particular, the nuclei example from pythonvision is not ideal because they gloss over a lot of parameter tuning that is much easier to deal with in 2D than 3D. I'll be happy to hear about your experience!

Juan.

Stéfan van der Walt

Nov 20, 2013, 9:10:27 AM
to scikit-image
On Wed, Nov 20, 2013 at 12:50 PM, Brickle Macho <brickl...@gmail.com> wrote:
> I could adapt this example (http://pythonvision.org/basic-tutorial),
> which uses pymorph and mahotas to count nuclei and segment the image,
> into a gallery example for skimage (obviously acknowledging the original
> tutorial). Or is it bad form to copy/use another example?

I think you should ask Luis for permission, just to be on the safe side.

But this reminds me: someone needs to implement voronoi! Should be
quite fun, actually.

Stéfan

Adam Hughes

Nov 20, 2013, 1:04:46 PM
to scikit...@googlegroups.com
Hi Juan, thanks for your helpful response.  See my replies inline:


On Tue, Nov 19, 2013 at 9:39 PM, Juan Nunez-Iglesias <jni....@gmail.com> wrote:
Adam,

On Wed, Nov 20, 2013 at 7:44 AM, Adam Hughes <hughes...@gmail.com> wrote:
I'm not familiar with any hosting solution that allows downloading without an account, but if you guys have any recommendations I'd love to hear them.

Dropbox can be used for this... 

I tried this, but to share a link it asks for the emails of the recipients.  Have you used Dropbox to host a publicly accessible link?  If so, I will certainly start doing this, thanks.
 

Thank you for your help; I will test out the methods you suggested.

Is the goal only to count particles? In that case, I think a local thresholding (threshold_adaptive) would work on all these images. Then, just do a labelling (scipy.ndimage.label) and draw a histogram of particle sizes. You'll get a sharp peak around the true particle size, with bigger peaks for clumps. Once you have the mean particle size you can estimate the number of particles in each clump (barring occlusion in 3D, in which case you're stuffed anyway), and then the total number of particles in your image.

Yes, that is the goal.  We had done a similar process in ImageJ, but did the thresholding manually.  I will read up on adaptive thresholding a bit more.  We had hoped that some of these corrections, such as histogram equalization, would make automatic thresholding more likely to give correct results.
 

Looking at your images, I don't think watershed (or anything else that I know) will do very well with the clumps. The contrast between adjacent particles is too low.

Hmm I see.   I will still try it out, but thanks for the heads up.  I'll feel better now if it doesn't work well.
 

Low-contrast-4 looks tricky... Are the smaller "points" particles of different sizes or just image noise?

Finally, Watershed-f3 also looks hard, because it appears all the particles are touching... Again, I don't think watershed will help you here, nor anything else that doesn't have a priori knowledge of the particle size.

We do have a priori knowledge, actually.  What I've been doing already is putting a lower limit on particle size, with anything under it counted as noise.  After doing particle counts and binning the data, we fit it with a Gaussian, and optionally scale the data so that the Gaussian is centered around the mean particle diameter (which we believe we know to about 3 nm based on TEM imaging and indirect spectroscopic techniques).  Based on the size distribution, we try to further bin the data into small (dimers/trimers) and large aggregates.  For all the particles that are large enough to be considered an aggregate, we assume that they fill a half-sphere volume, and then we infer the true particle count due to these aggregates.  It's pretty ad hoc, but we certainly apply some knowledge of the expected particle size distributions.  I realize watershedding won't split up huge clumps, but maybe it could assist with the dimers and trimers?  In any case, even if it doesn't significantly enhance our results, it would still be helpful to explore that option and I'll try it out.
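The Gaussian-fit step described here can be sketched with scipy.optimize.curve_fit; the synthetic diameters below (30 nm mean, 3 nm spread, made-up numbers) stand in for real measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

# Synthetic diameters around a 30 nm mean with a 3 nm spread:
rng = np.random.RandomState(0)
diameters = rng.normal(loc=30.0, scale=3.0, size=2000)

counts, edges = np.histogram(diameters, bins=40)
centers = (edges[:-1] + edges[1:]) / 2.0

# Initial guesses: peak height, sample mean, sample std.
p0 = (counts.max(), diameters.mean(), diameters.std())
(amp, mu, sigma), _ = curve_fit(gaussian, centers, counts, p0=p0)
# mu comes out close to 30 and sigma close to 3
```

The fitted mu can then be used to center/scale the distribution, exactly as described above.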
 
Thanks for this, and other examples!


Juan.

--
You received this message because you are subscribed to a topic in the Google Groups "scikit-image" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/scikit-image/EbS7o2HvcUc/unsubscribe.
To unsubscribe from this group and all of its topics, send an email to scikit-image...@googlegroups.com.

Adam Hughes

Nov 20, 2013, 1:11:31 PM
to scikit...@googlegroups.com
Hey guys,

One last thing to add to this thread.  We also have the issue that some images were taken at low resolution, or low magnification.  As a result, it becomes quite difficult to ascertain particle distributions.  Up to this point, I've mostly just made sure to get high quality images, but sometimes this is not possible due to microscope-induced artifacts at long imaging times.  Have you any suggestions or examples for improving low-res images of small particles?  Otherwise I'll just note the limitations in resolution and leave it at that in my writeup.

Stéfan van der Walt

Nov 20, 2013, 1:15:31 PM
to scikit-image
Hi Adam

On Wed, Nov 20, 2013 at 8:11 PM, Adam Hughes <hughes...@gmail.com> wrote:
> One last thing to add to this thread. We also have the issue that some
> images were taken at low resolution, or low magnification. As a result, it
> becomes quite difficult to ascertain particle distributions. Unto this
> point, I mostly just made sure to get high quality images, but sometimes
> this is not possible due to microscope-induced artifacts at long image
> times. Have you any suggestions or examples for improving low res images of
> small particles? Otherwise I'll just note the limitations in resolution and
> leave it at that in my writeup.

If you have multiple images of the same object, you could do
super-resolution. If you only have one image, you could try
single-frame super-resolution, which is essentially dictionary
learning on a large set of similar photos, used to update
high-frequency information in the current image.
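The first step of multi-frame super-resolution is registering the frames; a minimal phase-correlation sketch in plain NumPy follows (integer shifts only, with an exactly-shifted synthetic frame; real data needs subpixel registration plus an upsampling model):

```python
import numpy as np

def estimate_shift(ref, moving):
    """Estimate the integer (row, col) shift d such that
    moving ~= np.roll(ref, d), via the phase-correlation peak."""
    R = np.fft.fft2(moving) * np.conj(np.fft.fft2(ref))
    R /= np.abs(R) + 1e-12          # keep only the phase information
    corr = np.fft.ifft2(R).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks in the upper half-range to negative shifts:
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, corr.shape))

rng = np.random.RandomState(1)
ref = rng.rand(64, 64)
moving = np.roll(np.roll(ref, 3, axis=0), -5, axis=1)  # shifted copy
# estimate_shift(ref, moving) recovers (3, -5)
```

Once the shifts are known, the frames can be aligned and averaged (or fed to a proper super-resolution reconstruction).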

Stéfan

Adam Hughes

Nov 20, 2013, 1:21:32 PM
to scikit...@googlegroups.com
Cool suggestions, thanks!  Although I don't have multiple images of the same region of one of our surfaces, I will try it out next time we take images.


Juan Nunez-Iglesias

Nov 20, 2013, 8:45:20 PM
to scikit...@googlegroups.com
Hi Adam, more responses inline below. =)


On Thu, Nov 21, 2013 at 5:04 AM, Adam Hughes <hughes...@gmail.com> wrote:
I tried this, but to share a link it asks for the emails of the recipients.  Have you used Dropbox to host a publicly accessible link?  If so, I will certainly start doing this, thanks.

The web interface is a bit wonky that way, but it's certainly possible, it's called something like "copy share link". Then you get a URL that has a random token embedded in it, which anyone with the link can open. In OSX, you can just right-click > share dropbox link to have the link copied to your clipboard. That's how I got this:

=)

(You can add a "?dl=1" to the end to go straight to the file.) 

Yes, that is the goal.  We had done a similar process in ImageJ, but did the thresholding manually.  I will read up on adaptive thresholding a bit more.  We had hoped that some of these corrections, such as histogram equalization, would make automatic thresholding more likely to give correct results.

threshold_adaptive is automatic, but it adjusts the threshold individually for each pixel based on the surrounding pixels. This should help e.g. for your image warped_f2_b1, where the background is not uniform and no single threshold may work for your entire image.
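A minimal version of that idea in plain NumPy/SciPy (not skimage's implementation; it compares each pixel to a Gaussian-smoothed local mean, and the synthetic ramp image is made up for illustration):

```python
import numpy as np
from scipy import ndimage as nd

def adaptive_threshold(image, sigma=10, offset=0.0):
    """Binarize by comparing each pixel to a smoothed local mean,
    so a slowly varying background is cancelled out."""
    local_mean = nd.gaussian_filter(image.astype(float), sigma)
    return image > local_mean + offset

# Bright 6x6 blobs on a background that ramps from 0 to 0.5, so no
# single global threshold could isolate all four blobs:
ramp = np.linspace(0, 0.5, 100).reshape(1, -1) * np.ones((100, 1))
img = ramp.copy()
for r, c in [(20, 20), (20, 80), (80, 20), (80, 80)]:
    img[r:r + 6, c:c + 6] += 0.3

binary = adaptive_threshold(img, sigma=10, offset=0.1)
labels, n = nd.label(binary)
# n == 4: one component per blob, despite the uneven background
```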

Hmm I see.   I will still try it out, but thanks for the heads up.  I'll feel better now if it doesn't work well.

Great strategy. =D 

We do have a priori knowledge, actually.  What I've been doing already is putting a lower limit on particle size, with anything under it counted as noise.  After doing particle counts and binning the data, we fit it with a Gaussian, and optionally scale the data so that the Gaussian is centered around the mean particle diameter (which we believe we know to about 3 nm based on TEM imaging and indirect spectroscopic techniques).  Based on the size distribution, we try to further bin the data into small (dimers/trimers) and large aggregates.  For all the particles that are large enough to be considered an aggregate, we assume that they fill a half-sphere volume, and then we infer the true particle count due to these aggregates.  It's pretty ad hoc, but we certainly apply some knowledge of the expected particle size distributions.  I realize watershedding won't split up huge clumps, but maybe it could assist with the dimers and trimers?  In any case, even if it doesn't significantly enhance our results, it would still be helpful to explore that option and I'll try it out.

That's the strategy I would suggest, but my point is that in some images, such as your last one, you have more of a carpet of particles, and no single particle will separate, so you will need knowledge from different images. If you have that, no problem! =)

If you want to publish the result of your explorations as an IPython notebook, we won't stop you. =D

Juan.

Guillaume Gay

Nov 21, 2013, 8:29:30 AM
to scikit...@googlegroups.com
Hi,

This:
http://damcb.com/posts/segmenting-nuclei-with-skimage.html

links to a notebook where I show what I'm doing for the segmentation. It won't be too much work to turn it into an example if needed.
Also feel free to comment on the post... As you'll see, I kind of get around the 3D part with clustering strategies, more than by doing real 3D stuff...

Cheers.

Guillaume

PS: I'll try to have syntax highlighting in nikola soon, so that this code is more readable.

Adam Hughes

Nov 21, 2013, 4:50:41 PM
to scikit...@googlegroups.com
Thanks Juan,

I will follow up and get a notebook or two returned to the list.



Evelyn Liu

Nov 25, 2013, 2:49:04 AM
to scikit...@googlegroups.com, eve...@gmail.com
Thanks for your helpful response Juan. But I'm having an issue choosing the parameter for threshold_adaptive. Some of my images have uneven contrast; for example, the lower part of the image has a darker background than the upper part. So if the threshold is right for the upper part, some neighboring particles get merged into a cluster even though they are separate. I tried different values of diam, but none of them leads to a good threshold over the full image. Is there any other thresholding method for those uneven-contrast images?

I also tried the edge operator filter.sobel, and it looks good for drawing out the particles' edges (see the attached image). I wonder if I can fill these circles in to get the thresholded image?  I tried ndimage.binary_fill_holes, but it gives me either a blank or a totally black picture.
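One likely cause: binary_fill_holes only fills regions whose outline is completely closed; any gap in the edge map lets the "hole" leak out to the background, so nothing gets filled. A hedged sketch of the usual fix, morphologically closing the edges before filling (scipy only; skimage's binary_closing would do the same), on a synthetic broken outline:

```python
import numpy as np
from scipy import ndimage as nd

# A circular outline with a small gap, like a broken sobel edge map:
yy, xx = np.mgrid[0:50, 0:50]
r = np.hypot(yy - 25, xx - 25)
edges = np.abs(r - 15) < 1.2
edges[24:27, 38:] = False            # knock a hole in the outline

leaky = nd.binary_fill_holes(edges)  # the gap leaks: nothing is filled

# Close the gap first, then fill:
closed = nd.binary_closing(edges, structure=np.ones((5, 5)))
filled = nd.binary_fill_holes(closed)
# filled now contains the full disk; leaky is just the broken ring
```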

Guillaume Gay

Nov 25, 2013, 3:27:17 AM
to scikit...@googlegroups.com

For the uneven background issue, you can always filter out the low-frequency parts of the image. You can do this in Fourier space, or just subtract a Gaussian-filtered version of the image:

from skimage import img_as_float
from scipy import ndimage

def preprocess_highpass(image, filter_width=100):
    '''Emulates a high-pass filter by subtracting a smoothed version
    of the image from the image.

    Parameters
    ----------
    image : ndarray
    filter_width : int
        Should be much bigger than the relevant features in the image,
        and about the scale of the background variations.

    Returns
    -------
    f_image : ndarray
        Same shape as the input image, float dtype: the filtered image,
        with minimum at 0.
    '''
    image = img_as_float(image)
    lowpass = ndimage.gaussian_filter(image, filter_width)
    f_image = image - lowpass
    f_image -= f_image.min()
    return f_image
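For instance, on a synthetic frame with a strong left-to-right background ramp (the function is restated in condensed form so this snippet runs on its own; the numbers are made up):

```python
import numpy as np
from scipy import ndimage

def preprocess_highpass(image, filter_width=100):
    # Condensed restatement of the function above, without img_as_float.
    image = image.astype(float)
    lowpass = ndimage.gaussian_filter(image, filter_width)
    f_image = image - lowpass
    return f_image - f_image.min()

# Two bright 5x5 particles (+0.5) on a background ramping from 0 to
# 0.8; no single global threshold on img separates both particles.
img = np.linspace(0, 0.8, 200).reshape(1, -1) * np.ones((200, 1))
img[50:55, 60:65] += 0.5
img[150:155, 180:185] += 0.5

flat = preprocess_highpass(img, filter_width=30)
binary = flat > 0.35          # a single global threshold now works
labels, n = ndimage.label(binary)
# n == 2: one component per particle, uneven background removed
```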

Evelyn Liu

Nov 26, 2013, 1:47:01 PM
to scikit...@googlegroups.com
This is really helpful. Thanks Guillaume!