
Many years ago I played around with using Yule-Walker as a digital filter design
technique. There, I added white noise to the model (adding to the diagonal
terms of the correlation matrix), and by varying the amount of noise you could
vary the filter characteristics.
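A minimal sketch of what that could look like (my own illustration, not the
original code): adding a white-noise variance sigma2 to the process only
changes the zero-lag autocovariance, i.e. the diagonal of the Toeplitz
correlation matrix, and a larger sigma2 damps the fitted AR filter.

import numpy as np
from scipy import linalg

def ar_design_with_noise(acov, order, sigma2=0.0):
    # white noise only affects the zero-lag term, i.e. the diagonal
    # of the Toeplitz correlation matrix
    r = np.asarray(acov, dtype=float).copy()
    r[0] += sigma2
    R = linalg.toeplitz(r[:order])
    # Yule-Walker equations: R a = r[1:order+1]
    return linalg.solve(R, r[1:order + 1])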
Thanks for the references; the paper referenced in the Matlab help has some
interesting parts.
I haven't seen anything like this in Python; maybe it's work for the
new scikits.signal developers.
The idea of using the modified Yule-Walker equations to get the AR
coefficients of an ARMA process looked interesting, so I coded that part. I
didn't manage to get equation (54) for the MA coefficients to work.
(Instead I just use a deconvolution to go from the autocovariance function
and the AR coefficients to the MA coefficients. It looks like it's possible to
recover the ARMA coefficients from the autocovariance function
with one Yule-Walker, one modified Yule-Walker and one deconvolution.
I just wrote some toy code for this to see if the round trip works. I
still have no idea about filter design.)
------------
import numpy as np
from scipy import linalg

def modified_yule_walker(acov, n_ar, n_ma):
    '''estimate AR part of ARMA from modified Yule-Walker equations

    The autocovariance of an ARMA process for lags greater than n_ma is
    independent of the MA part and can be used to estimate the AR part.
    (If I remember correctly.)

    Parameters
    ----------
    acov : array_like
        autocovariance, needs to have at least n_ar+n_ma terms and start with
        the zero lag, i.e. the variance.
    n_ar : int
        length of the autoregressive lag polynomial, including the leading
        one.
    n_ma : int
        length of the moving-average lag polynomial, including the leading
        one. Lag polynomials with a leading (zero lag) value different from
        one have not been tested.

    Returns
    -------
    ar_coefs : ndarray
        the estimated AR coefficients; the autoregressive lag polynomial is
        [1, -ar_coefs]

    References
    ----------
    Friedlander, B., and B. Porat, 1984, equation (2)
    '''
    p, q = n_ar - 1, n_ma   # Friedlander/Porat convention
    r_idx = np.arange(q, q + p + 1)   # easier with an index than with slices
    r2_idx = np.abs(r_idx[0] - 1 - np.arange(p))
    R = linalg.toeplitz(acov[r_idx - 1], acov[r2_idx])
    return linalg.lstsq(R, acov[r_idx])[0]
---------
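A hypothetical usage sketch (my own toy example with made-up numbers, showing
only the AR part of the round trip, not the MA deconvolution step):

import numpy as np

# simulate an ARMA(2,1) process a(L) x = b(L) e with
# a = [1, -0.6, 0.2] and b = [1, 0.4]
np.random.seed(12345)
e = np.random.randn(10000)
x = np.zeros_like(e)
for t in range(2, len(e)):
    x[t] = 0.6 * x[t-1] - 0.2 * x[t-2] + e[t] + 0.4 * e[t-1]

# sample autocovariance up to lag 5
n = len(x)
xd = x - x.mean()
acov = np.array([np.dot(xd[:n-k], xd[k:]) / n for k in range(6)])

ar_coefs = modified_yule_walker(acov, n_ar=3, n_ma=2)
# for a long enough sample this should be roughly [0.6, -0.2]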
Josef
> in python for educational purpose, and C for performance.
> bye
No need to use C for performance here.
Computing the autocovariance for Yule-Walker can be vectorized with
np.dot, which lets BLAS do the work. Something like this:
def covmtx_yulewalker(x, p):
    ''' autocorrelation method '''
    x = np.ascontiguousarray(x)
    n = x.shape[0]
    Rxx = np.zeros(p + 1)
    for k in range(0, p + 1):
        Rxx[k] = np.dot(x[:n-k], x[k:]) / (n - k - 1.0)
    return Rxx
Later on, in the code Josef posted, the bulk of the remaining computation is
done by LAPACK (linalg.lstsq).
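Putting the two pieces together might look roughly like this (a sketch of my
own, not Josef's exact code):

from scipy import linalg

def ar_yulewalker(x, p):
    Rxx = covmtx_yulewalker(x, p)          # O(n*p), all in BLAS dot products
    R = linalg.toeplitz(Rxx[:p])           # p x p Toeplitz correlation matrix
    ar = linalg.lstsq(R, Rxx[1:p+1])[0]    # O(p^3), handled by LAPACK
    return ar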
With NumPy linked against optimized BLAS and LAPACK libraries (e.g. MKL,
ACML, GotoBLAS2, Cray libsci), doing this in C might actually end up
being slower. Don't waste your time on C before (1) NumPy is proven to
be too slow and (2) you have good reasons to believe that C will be
substantially faster. (NumPy users familiar with MATLAB make the latter
assumption far too often.)
Sturla
A quick guess is that it's the same as the denf function, but given
only the one-sided autocovariance function.
> I tried to translate Matlab's yulewalk.m, and it kind of works, but with an
> issue. Code is here: http://paste.pocoo.org/show/537224/
>
> The main problem is on lines 79-80 where the impulse response is computed. The
> results of the FFT/IFFT functions are different. Everything up to that point
> gives the correct result with respect to the Matlab function. I don't know
> whether it's a precision issue or something else, but I had a hard enough time
> producing the linked and not so representative code.
No help from my side, I don't understand the code.
I'm curious why they use log and exp around the FFT:
ifft(exp(fft(hf)))
The only other mention of this that I found is on the Octave mailing list.
numf looks interesting if I can find a reference to avoid license problems.
Josef
>
> I attached example image to show how results differ slightly.
> Any comment is appreciated
>
> Cheers
>
I think the main argument is that Levinson-Durbin uses fewer
calculations, which might matter if the AR polynomial is very large.
I've read conflicting comments about numerical stability; some argue
in favor of Levinson-Durbin, some in favor of least squares, but Burg
seems to be generally considered numerically better than either
of the other two.
Josef
Looks like the cepstrum, a quite common tool in speech analysis, but in
fact it is not the same thing.
The cepstrum would be based on
log |fft(x)|
see http://en.wikipedia.org/wiki/Cepstrum
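For reference, a minimal sketch of the real cepstrum as defined on that page
(not the construct used in yulewalk.m):

import numpy as np

def real_cepstrum(x):
    # inverse FFT of the log magnitude spectrum; the tiny epsilon only
    # guards against log(0) for an exactly zero frequency bin
    spectrum = np.fft.fft(x)
    return np.fft.ifft(np.log(np.abs(spectrum) + 1e-300)).real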
We are not talking about the same algorithm here. Because the
correlation matrix has a very specific structure (Toeplitz), it can be
inverted in O(N^2) instead of O(N^3); this is what the Levinson-Durbin
algorithm is all about. You cannot easily implement Levinson-Durbin in
numpy because of its recursive nature.
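For illustration, a minimal loop-based Levinson-Durbin sketch (my own, not
talkbox code), just to show the recursion that makes the O(N^2) solve possible
but hard to vectorize:

import numpy as np

def levinson_durbin(r, order):
    # r: autocovariance sequence, r[0] is the zero-lag term
    # returns the AR lag polynomial a (with a[0] == 1) and the final
    # prediction error variance
    r = np.asarray(r, dtype=float)
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, order + 1):
        # reflection coefficient from the current prediction error
        acc = r[m] + np.dot(a[1:m], r[m-1:0:-1])
        k = -acc / err
        # the order update uses the polynomial's own reversed coefficients,
        # so each step depends on the result of the previous one
        a_prev = a.copy()
        a[1:m] = a_prev[1:m] + k * a_prev[m-1:0:-1]
        a[m] = k
        err *= (1.0 - k * k)
    return a, err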
I think you can also reasonably expect the talkbox author to know a
thing or two about numpy ;)
David
Sure :)
But whenever I have used autoregression, it seems the expensive part is
estimating the covariance, not inverting it.
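As a rough back-of-the-envelope check (a sketch using the covmtx_yulewalker
function from above; the actual numbers depend on the BLAS/LAPACK in use):

import time
import numpy as np
from scipy import linalg

x = np.random.randn(1000000)
p = 50

t0 = time.time()
Rxx = covmtx_yulewalker(x, p)                             # O(n*p) estimation
t1 = time.time()
ar = linalg.solve(linalg.toeplitz(Rxx[:p]), Rxx[1:p+1])   # O(p^3) solve
t2 = time.time()
print('acov: %.4f s   solve: %.4f s' % (t1 - t0, t2 - t1))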
Levinson-Durbin would belong in scipy.linalg though; it's not just for
Yule-Walker.
Sturla