Ahmed,
As Eiso mentioned, this is a very atypical use of LP. The
sample Bruker data you are processing is a one-dimensional data
set with 16384 points. The signal appears to be fully decayed by
the end of the FID, so there is little reason to try to increase
the spectral resolution using LP; zero filling is more than
sufficient.
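As a minimal sketch of what I mean, here is zero filling with plain
numpy on a simulated FID (the frequency and decay constant are
arbitrary values chosen for illustration, not from your data):

```python
import numpy as np

# Simulated 1D FID: one decaying resonance, 16384 complex points.
t = np.arange(16384)
fid = np.exp(2j * np.pi * 0.05 * t) * np.exp(-t / 2000.0)

# Zero fill to twice the original length, then Fourier transform.
# Since the signal has decayed to ~zero by the end of the FID,
# this interpolates the spectrum without distorting the lineshape.
padded = np.concatenate([fid, np.zeros(fid.size, dtype=fid.dtype)])
spectrum = np.fft.fftshift(np.fft.fft(padded))

print(padded.size)  # 32768
```

In nmrglue itself the equivalent step should be
ng.proc_base.zf_size(data, 32768), if I recall the function name
correctly.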
The reason you are running out of memory is that the
'svd' (the default) and 'tls' LP methods require a singular value
decomposition to find the linear prediction filter. These methods
require storing an NxN array where N is the size of the FID. For
the sample data this requires storing a 16384 x 16384 array of
complex128 values which requires ~4 GB of memory, or ~2 GB if you
accept lower precision and cast the data array to complex64. The
'tls' method can be implemented in a manner that does not require
storing the entire decomposed matrix, which would lessen the memory
requirements, but this is not how the method is implemented in
nmrglue.
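The arithmetic behind those figures, if you want to check it:

```python
import numpy as np

n = 16384  # number of complex points in the FID

# An N x N matrix of complex values: 16 bytes each for complex128,
# 8 bytes each for complex64.
bytes_c128 = n * n * np.dtype(np.complex128).itemsize
bytes_c64 = n * n * np.dtype(np.complex64).itemsize

print(bytes_c128 / 2**30)  # 4.0 GiB
print(bytes_c64 / 2**30)   # 2.0 GiB
```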
That said I had no issues running the code snippets on my
machine which has 16 GB of RAM.
Finally, I should note that the LP implementations in nmrglue
are mainly for educational use; they were written so that I could
understand and show others how linear prediction works. Little
thought was given to the speed or numerical stability of the
algorithms [1]. As such, they are most likely not appropriate for
use with real-world data. Other NMR processing packages offer more
robust and optimized implementations of linear prediction, which I
would recommend.
Cheers,
- Jonathan Helmus
[1]
http://nmrglue.readthedocs.org/en/latest/reference/proc_lp.html#developer-functions