Apply known decomposition model to new data (EELS Data)


Søren Roesgaard Nielsen

Mar 2, 2017, 5:27:06 AM
to hyperspy-users
Hi

I am new to hyperspy, but I am impressed with the software so far!

My problem is as follows:
I load and perform decomposition on a map

s = hs.load("XXX_reference.dm3")
s.decomposition(algorithm='nmf', output_dimension=6)
s.plot_decomposition_results()

Which results in a satisfactory element decomposition.
Now I want to load a second EELS map and use the already acquired decomposition model on this data, is this possible?

Best regards
Søren

Francisco de la Peña

Mar 2, 2017, 5:46:52 AM
to hyperspy-users
Hi Søren,

No, that's unfortunately not possible. We've been wanting to rewrite the decomposition and BSS code for quite a while to enable this and other interesting features, but time is limited and we haven't done it yet.

You could do it manually (although less conveniently) using the NMF algorithm in sklearn, or by fitting the components to the new dataset with scipy's linear least squares. (We'll add linear least squares fitting to HyperSpy soon too, but it's not there yet.)
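To make the manual route concrete, here is a minimal NumPy/SciPy sketch: keep the spectral factors from the first map fixed and solve a non-negative least squares problem for each pixel of the second map. The shapes and the random `factors` array below are purely illustrative stand-ins for what HyperSpy would store in `s.learning_results.factors`; this is a sketch of the idea, not HyperSpy's API.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 'factors' plays the role of the spectral
# components from the first map's NMF; here it is random positive data
# so the sketch is self-contained.
n_channels, n_components, n_pixels = 100, 3, 50
factors = rng.random((n_channels, n_components))
true_loadings = rng.random((n_pixels, n_components))
new_data = true_loadings @ factors.T  # flattened second map, one spectrum per row

# Fit non-negative loadings for every pixel against the fixed factors.
loadings = np.array([nnls(factors, spectrum)[0] for spectrum in new_data])

# With noise-free synthetic data the reconstruction is essentially exact.
residual = np.abs(loadings @ factors.T - new_data).max()
```

The per-pixel loop is slow for large maps; an unconstrained `np.linalg.lstsq` on the whole flattened dataset at once is much faster if negativity is acceptable.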

Best regards,

Francisco

--
You received this message because you are subscribed to the Google Groups "hyperspy-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to hyperspy-users+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Thomas Aarholt

Mar 2, 2017, 6:32:30 AM
to hyperspy-users
Francisco, can't he turn each decomposition factor into a ScalableFixedPattern and fit it to the new EELS map with multifit? It'll be slow, but it'll do the job.
That said, nmf is reasonably fast, so it might be better just to do nmf on the new map.

Best,
Tom
--
Thomas M. Aarholt
Reading for DPhil in Nuclear Materials
University of Oxford
 
Mobile: 07794820885

Søren Roesgaard Nielsen

Mar 3, 2017, 3:51:20 AM
to hyperspy-users
Hi Francisco

Thanks for the fast reply! I will do the linear least squares.

One more related question: I am using the NMF algorithm in the decomposition, and then I try to do a blind_source_separation based on it. However, this returns negative results, which I thought was ruled out since the decomposition was done using NMF. Is there a similar way to get only non-negative solutions? I do the following:

sc = s.get_decomposition_model()
sc.blind_source_separation(number_of_components=3, on_loadings=True)
sc.plot_bss_results()


Thanks in advance!

Søren Roesgaard Nielsen

Mar 3, 2017, 4:20:10 AM
to hyperspy-users
Also one more problem:

When doing
s.decomposition(algorithm='nmf', output_dimension=8, normalize_poissonian_noise=True)
s.plot_decomposition_results()

I cannot get the following line to work. It seems to work when not using NMF; can that be true?
s.plot_explained_variance_ratio()


Francisco de la Peña

Mar 3, 2017, 4:36:26 AM
to hyperspy-users
Hi Søren,

BSS can return negative values when operating on an NMF decomposition, despite the fact that the NMF components are positive. We don't currently have any algorithm other than NMF that enforces positivity. NMF is useful when positivity alone is enough to unmix the sources; when it isn't, and the sources are independent, it is usually better to perform BSS on a PCA decomposition.

Regarding the explained variance plot: the explained variance is only computed for PCA, which is why you can't plot it when decomposing with NMF. It is of course possible to calculate it manually for NMF (and it may actually be a good idea to add this feature), but, as NMF is usually computed with a low number of components, it is not as useful for discerning signal from noise as it is for PCA.
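The manual calculation for NMF can be sketched with scikit-learn: refit with an increasing number of components and measure what fraction of the total sum of squares each reconstruction captures. The synthetic data, shapes, and component counts below are illustrative assumptions, and since the data is not centred this is really an "explained sum of squares" rather than a variance.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)

# Hypothetical flattened spectrum image: 3 positive sources plus noise.
sources = rng.random((3, 80))
mix = rng.random((200, 3))
data = mix @ sources + 0.01 * rng.random((200, 80))

total_ss = (data ** 2).sum()
explained = []
for k in range(1, 5):
    model = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
    w = model.fit_transform(data)
    reconstruction = w @ model.components_
    residual_ss = ((data - reconstruction) ** 2).sum()
    # Fraction of the total sum of squares captured by k components.
    explained.append(1 - residual_ss / total_ss)
```

With three true sources, the explained fraction should saturate near 1 once k reaches 3, which mimics the "scree plot" reading one would do for PCA.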

Thomas's suggestion should also work for applying the decomposition to a different dataset. He's actually working on implementing linear least squares fitting in HyperSpy (#1462), which should make the computation a lot faster.

Best regards,

Francisco


Søren Roesgaard Nielsen

Jul 11, 2017, 7:33:24 AM
to hyperspy-users
Hi Francisco

I am now looking a bit more into this. However, calculating the explained variance manually would require knowing the eigenvalues, as far as I understand? These are not forwarded to the EELS signal during the decomposition, which makes it a lot more difficult to calculate. Or is there something I am missing?

Best regards
Søren

Francisco

Jul 13, 2017, 12:54:56 PM
to hyperspy-users
Hi Søren,

I don't know if there is a better way but this is how I do it:

import numpy as np

def sse(x, y):
    # Sum of squared errors between the data and a reconstruction.
    return ((x - y) ** 2).sum()

def ssq(signal):
    # Sum of squares explained by each successive component, i.e. the
    # drop in residual SSE each time one more component is included.
    ssq = []
    sse_ = (signal.data ** 2).sum()
    for i in range(1, signal.learning_results.output_dimension + 1):
        d = signal.get_decomposition_model(i).data
        ssen = sse(signal.data, d)
        ssq.append(sse_ - ssen)
        sse_ = ssen
    return np.array(ssq)

To compare it with what HyperSpy stores in s.learning_results.explained_variance (which, strictly speaking, is only the explained variance when centre=True), you must divide it by s.axes_manager.navigation_size.
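The normalization step amounts to simple array arithmetic. The numbers below are made up, standing in for the output of the ssq() function above on a map with a hypothetical 256 navigation pixels:

```python
import numpy as np

# Hypothetical per-component sums of squares from ssq(), for a map
# whose navigation space has 256 pixels.
navigation_size = 256
ssq_values = np.array([9000.0, 600.0, 300.0, 100.0])

# Comparable to s.learning_results.explained_variance (without centring,
# this is really an explained sum of squares per pixel).
explained_variance = ssq_values / navigation_size

# Fraction of the total sum of squares captured by each component,
# analogous to an explained variance ratio.
explained_ratio = ssq_values / ssq_values.sum()
```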

I hope that this helps.

Francisco