Memory problem when using linear prediction in NMRglue


Ahmed Youssef

May 21, 2015, 6:00:25 AM
to nmrglue...@googlegroups.com
Hi,
First, thanks for the excellent package.
I am trying to increase the spectral resolution of 1D spectra using linear prediction (as introduced here). I tried it on the 1D Bruker example data as below.

import nmrglue as ng
dic, data = ng.bruker.read("/expnmr_00001_1/", read_pulseprogram=False)
ng.proc_lp.lp(data, pred=data.shape[-1], bad_roots=None, method='tls')



And also using the default parameters:
ng.proc_lp.lp(data)



Both calls result in huge memory consumption, to the point that I had to kill the whole Python process. I couldn't find a working example in nmrglue or any tests for this function.
Thanks in advance for your help.
Ahmed.

Eiso AB

May 21, 2015, 1:04:09 PM
to nmrglue...@googlegroups.com
this is a somewhat atypical use of LP. normally it's used to increase the resolution in the indirect dimensions of 2D or nD spectra.

you seem to be using it on a 1D, right? how many points does the fid have?
if it's very large that may be a problem (not sure, I haven't tried it myself).

it could be that the number of signals (or number of roots) it tries to predict is some fraction
of the size of the fid. is there a parameter that allows setting that lower?
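for what it's worth, lp() does seem to take an 'order' keyword (the number of LP coefficients), so something like the sketch below might be worth a try (no idea if it helps with the memory use, I haven't tested it):

import nmrglue as ng
dic, data = ng.bruker.read("/expnmr_00001_1/", read_pulseprogram=False)
# order sets the number of LP coefficients used in the prediction (the docs list 8 as the default)
ng.proc_lp.lp(data, pred=16, order=8, method='svd')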

also, is there still signal at the end of the fid? 

good luck





Jonathan Helmus

May 21, 2015, 1:32:10 PM
to nmrglue...@googlegroups.com
Ahmed,

    As Eiso mentioned, this is a very atypical use of LP.  The sample Bruker data you are processing is a one-dimensional data set with 16384 points.  The signal appears to be fully decayed by the end of the FID, so there is little reason to try to increase the spectral resolution using LP; zero filling is more than sufficient.
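    For what it's worth, a zero-fill plus Fourier transform route on the same sample data would look something like the sketch below (the target size of 32768 is just an example, and this uses the proc_base helpers rather than proc_lp):

import nmrglue as ng
dic, data = ng.bruker.read("/expnmr_00001_1/", read_pulseprogram=False)
data = ng.proc_base.zf_size(data, 32768)  # zero fill the FID to 32k complex points
spec = ng.proc_base.fft(data)             # Fourier transform the zero-filled FID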

    The reason that you are running out of memory is that the 'svd' (the default) and 'tls' LP methods require a singular value decomposition to find the linear prediction filter.  These methods require storing an N x N array, where N is the size of the FID.  For the sample data this means storing a 16384 x 16384 array of complex128 values, which requires ~4 GB of memory, or ~2 GB if you accept lower precision and cast the data array to complex64.  The 'tls' method can be implemented in a manner that does not require storing the entire decomposed matrix, which would lessen the memory requirements, but this is not how the method is implemented in nmrglue.
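    For concreteness, the back-of-the-envelope numbers work out like this (just arithmetic, not part of nmrglue):

import numpy as np
N = 16384
print(N * N * np.dtype(np.complex128).itemsize / 2**30)  # 4.0 GiB for the N x N complex128 matrix
print(N * N * np.dtype(np.complex64).itemsize / 2**30)   # 2.0 GiB if the FID is cast to complex64
# e.g. data = data.astype(np.complex64) before calling ng.proc_lp.lp roughly halves the requirement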

    That said, I had no issues running the code snippets on my machine, which has 16 GB of RAM.

    Finally, I should note that the LP implementations in nmrglue are mainly for educational use; they were written so that I could understand and show others how linear prediction works.  Little thought was given to the speed or numerical stability of the algorithms [1].  As such, they are most likely not appropriate for use with real-world data.  Other NMR processing packages offer more robust and optimized implementations of linear prediction, which I would recommend.

Cheers,

    - Jonathan Helmus


[1] http://nmrglue.readthedocs.org/en/latest/reference/proc_lp.html#developer-functions