
Rootmusic


Derrick Rea
Dec 10, 2003, 12:40:35 PM

Hi All,

I'm trying to apply the Rootmusic algorithm, as found in Matlab, to
detect sinusoidal components in a blood velocity signal.

I've synthesized a compound sinusoidal test signal that consists of 6
sinusoids (12 complex exponentials). The signal's positive frequency
components are given below, and it's 219 samples long.

Frequency components (Hz):
0.91, 1.82, 2.73, 3.64, 4.55, 5.46
Respective amplitudes:
5.469722684, 2.649544942, 2.960454796, 2.418314605, 1.480919941,
1.86812776
Respective phase angles (radians):
-2.080504298, -2.743620721, 2.604208101, 1.366778808,
0.610463199, -0.158876205
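
For concreteness, the synthesis looks something like this (variable
names are arbitrary; Fs = 200 Hz matches the rootmusic call below):

Fs  = 200;                 % sampling frequency in Hz
N   = 219;                 % number of samples
t   = (0:N-1)'/Fs;         % time axis in seconds
f   = [0.91 1.82 2.73 3.64 4.55 5.46];
A   = [5.469722684 2.649544942 2.960454796 ...
       2.418314605 1.480919941 1.86812776];
phi = [-2.080504298 -2.743620721 2.604208101 ...
       1.366778808 0.610463199 -0.158876205];

input_signal = zeros(N,1);
for k = 1:6
    input_signal = input_signal + A(k)*cos(2*pi*f(k)*t + phi(k));
end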

The rootmusic function is used as below.

[X,R] = corrmtx(input_signal,80,'covariance'); % order-80 covariance-method data matrix
[f,pow] = rootmusic(X,12,200);                 % 12 complex exponentials, Fs = 200 Hz

This code returns the correct frequency locations of the complex
exponentials, but the powers do not match. Some of the powers are even
negative, which makes no sense.

f = [5.5045; -5.5045; 0.91516; -0.91516; 4.5865; -4.5865;
     3.6706; -3.6706; 2.7529; -2.7529; 1.8317; -1.8317]
pow = [-30.605; 40.492; 120.59; -128.92; -10.432; 98.776]

The algorithm works well up through calculating the eigenvalues and
eigenvectors and rooting the polynomial to find the signal component
frequencies, but it falls over when solving the linear equations for
the powers. Does anyone know of a problem in the algorithm, or in the
Matlab implementation, that could explain this?

The algorithm is also very sensitive to changes in the size of the
covariance matrix: the estimated powers can jump from 2 digits to 5.
Can anyone shed light on why this is so?

I've followed loads of music/rootmusic threads hoping for ideas, but
they don't seem to go anywhere. Is there a fundamental problem with
the algorithm? The concept of fitting exponentials to a signal seems
like it should be straightforward, but then I'm new to this game.

Ta!

Rune Allnor
Dec 10, 2003, 6:59:42 PM

derri...@hotmail.com (Derrick Rea) wrote in message news:<5d5e74f5.03121...@posting.google.com>...

Well, MUSIC is, for starters, a frequency estimator. No MUSIC algorithm
I'm aware of claims to provide signal power as output (but that doesn't
mean that people who implement the algorithms don't make such claims!).

MUSIC relies on an eigenvector decomposition of the signal autocovariance
matrix. Based on the corresponding eigenvalues, MUSIC groups the
eigenvectors into a basis for the signal subspace and a basis for the
noise subspace. There is a matrix theorem which says that these spaces
are orthogonal, so any sinusoidal vector that belongs in the signal
subspace will be orthogonal to the basis for the noise subspace.

The very basic idea behind MUSIC is to search over all possible
sinusoids, check them against the noise subspace basis, and see
whether the sines are orthogonal to that subspace. Root MUSIC is just
one variant of the algorithm; it exploits the fact that the signal is
regularly sampled, which the basic MUSIC scheme does not require.

This "orthogonal to noise" test has several consequences. For starters,
it means that the order of the signal autocovariance matrix must be
higher than the number of sines present in the signal. Which, in turn,
means that you need to know something about what you will measure in the
design stages of your system. First of all you would need to know the
absolute maximum nunber of sines, and would also need an additional
flexibility in the algorithm to decide the number of sines if it by
any chance should be fewer sines present.

The next consequence deals with the MUSIC pseudo spectrum. Take a look
at how it is constructed: in regular MUSIC the pseudo spectrum is
computed as

                  1
    P(f) = ----------------
           |Vn^H*e(f)|^2

where Vn is the matrix of eigenvectors that spans the *noise* subspace
and e(f) is the unit norm exponential test vector at frequency f.

Now, contemplate that equation for a second. We have taken great care
to separate out that part of the signal autocovariance matrix that has
the *least* to do with the signal we are after. We compute an inner
product between a test vector and a vector basis that we ideally want
to vanish. And we invert that number. Of course, this has nothing
whatsoever to do with the powers of the sines.
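
As a rough sketch of this in Matlab terms (my own toy code, not the
toolbox implementation; R is a p-by-p autocovariance estimate, M is the
assumed number of complex exponentials, fgrid is a vector of test
frequencies in Hz and Fs is the sampling frequency):

[V, D]   = eig(R);                   % eigendecomposition of the autocovariance
[d, idx] = sort(diag(D), 'descend'); % largest eigenvalues <-> signal subspace
Vn = V(:, idx(M+1:end));             % remaining eigenvectors span the noise subspace
p  = size(R, 1);
P  = zeros(size(fgrid));
for i = 1:length(fgrid)
    e = exp(1j*2*pi*fgrid(i)*(0:p-1)'/Fs) / sqrt(p); % unit-norm test vector e(f)
    P(i) = 1 / norm(Vn'*e)^2;        % P(f) = 1 / |Vn^H*e(f)|^2
end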

As for root MUSIC, I don't know how Matlab estimates the power.
Some techniques I have seen solve a linear least-squares problem
based on the estimated frequencies. This can be done either for
amplitude or for signal power. The point to watch out for is that
the estimates of amplitude or power are highly sensitive to errors
in the estimated frequency. Remember, in the usual DFT, if two
sinusoids are separated in frequency by fs/N (fs being the sampling
frequency and N the number of samples in the time sequence), they are
orthogonal. Which means that the error df of the frequency estimate
must be very small, more specifically

   df << fs/N

for the amplitude estimate to succeed.
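
A sketch of the kind of least-squares step I mean (my own toy code, not
necessarily what Matlab's rootmusic does internally; x is the N-by-1
signal, fhat the vector of estimated frequencies in Hz, Fs the sampling
frequency):

t = (0:length(x)-1)'/Fs;
E = exp(1j*2*pi*t*fhat(:)'); % N-by-K matrix of complex exponentials
c = E \ x;                   % least-squares complex amplitudes
pow = abs(c).^2;             % power estimates; an error in fhat corrupts pow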

In your case you have 12 complex exponentials, which means you would
need a covariance matrix of order at least 13; with some noise I'd
expect you would need to work with an order of 20 or so for the
algorithm to work.

So, to answer your question: yes, there are known problems with MUSIC
and power/amplitude/phase estimates. The algorithm requires a very
specific setting to work, and the user really needs to know how the
thing works and what he is trying to achieve. Fitting data to
complex exponentials isn't quite as easy as it seems.

Rune

Derrick Rea
Dec 15, 2003, 11:59:06 AM

all...@tele.ntnu.no (Rune Allnor) wrote in message news:<f56893ae.0312...@posting.google.com>...

Rune,
Thanks for getting back to me on Rootmusic. Yours was the only response,
so I hope you're still out there and have a little more time to answer
another question.

I am trying to understand what effect the size of the autocorrelation
matrix has on the MUSIC algorithm.

Can the whole autocorrelation sequence for a summation of sinusoids be
determined from any section of it?
My reasoning is that the autocorrelation sequence of a pure sinusoid
is a cosine sequence, and the autocorrelation sequence of a signal
constructed of pure sinusoids is a summation of cosines. Hence the
whole autocorrelation sequence is determinable from examining a short
section of the sequence, because the cosine function is determinable.
Is this true?
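
One way to check this numerically would be to compare the estimated
autocorrelation of a single sinusoid against the theoretical cosine
sequence (A^2/2)*cos(2*pi*f0*tau), e.g. (a sketch, using xcorr from
the Signal Processing Toolbox):

Fs = 200; N = 219;
t  = (0:N-1)'/Fs;
A0 = 2; f0 = 2.73;
x  = A0*cos(2*pi*f0*t);
[r, lags] = xcorr(x, 'unbiased');          % lag-by-lag averaged products
r_theory  = (A0^2/2)*cos(2*pi*f0*lags/Fs); % expected cosine sequence
plot(lags/Fs, r, lags/Fs, r_theory); legend('estimated', 'theoretical');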

The autocorrelation sequence for my 6 sinusoids, though, decays from a
correlation coefficient of 1 over 30 samples, hovers around zero for 60,
drops to oscillate around -0.2, and then climbs back up as the period
length approaches. It seems that, if I only use 30 lags of my
autocorrelation matrix, I lose the frequency information present in the
correlation coefficients at the longer lags. Surely, if the last
paragraph is true, there should be no additional frequencies in the
longer lags of the autocorrelation sequence.

If I know that a signal is composed of 12 complex exponentials in
noise, and you recommend an order of around 20 for the MUSIC algorithm
to work, I assume that the additional 7 eigenvalues are averaged to
estimate the noise variance and their respective eigenvectors are dot
producted with the signal eigenvectors to locate the complex
exponential signal frequencies. Increasing the size of the
autocorrelation matrix should only increase the number of noise
eigenvectors and so help locate the signal frequencies.

I investigated the relationship between the correlation matrix size
and MUSIC's ability to locate my 12 complex frequencies, and I found
that the likelihood of finding the frequencies increases as the size
of the autocorrelation matrix is increased. Is this generally true, or
is there a limit of lag beyond which the estimate of the autocorrelation
sequence becomes so inaccurate that MUSIC finds complex exponentials
that aren't there? Why did you advise before that I would need to work
with an order of approx 20 for 12 complex exponentials buried in
noise? Isn't bigger better?

Derrick

Rune Allnor
Dec 15, 2003, 8:53:34 PM

derri...@hotmail.com (Derrick Rea) wrote in message news:<5d5e74f5.03121...@posting.google.com>...
> all...@tele.ntnu.no (Rune Allnor) wrote in message news:<f56893ae.0312...@posting.google.com>...
> > derri...@hotmail.com (Derrick Rea) wrote in message news:<5d5e74f5.03121...@posting.google.com>...
> Rune,
> Thanks for getting back to me on Rootmusic. Yours was the only response,
> so I hope you're still out there and have a little more time to answer
> another question.
>
> I am trying to understand what effect the size of the autocorrelation
> matrix has on the MUSIC algorithm.
>
> Can the whole autocorrelation sequence for a summation of sinusoids be
> determined from any section of it?
> My reasoning is that the autocorrelation sequence of a pure sinusoid
> is a cosine sequence, and the autocorrelation sequence of a signal
> constructed of pure sinusoids is a summation of cosines. Hence the
> whole autocorrelation sequence is determinable from examining a short
> section of the sequence, because the cosine function is determinable.
> Is this true?

I don't know. I have usually worked with autocovariance functions
computed from random data, and have focused on short-lag correlations.
There is a field of signal analysis that deals with "cyclostationary
processes" where, if I have understood things correctly, the
autocovariance function "oscillates" over the longer term. I don't know
why or how those techniques work.

> The autocorrelation sequence for my 6 sinusoids, though, decays from a
> correlation coefficient of 1 over 30 samples, hovers around zero for 60,
> drops to oscillate around -0.2, and then climbs back up as the period
> length approaches. It seems that, if I only use 30 lags of my
> autocorrelation matrix, I lose the frequency information present in the
> correlation coefficients at the longer lags. Surely, if the last
> paragraph is true, there should be no additional frequencies in the
> longer lags of the autocorrelation sequence.

Well, it's hard to say. Normalized autocovariance values with magnitudes
on the order of 0.2 - 0.3 aren't very high. Some of the variation may be
due to random effects. But then, if you have one sinusoid with period
N samples and no noise, you would expect the autocovariance function
to have a peak every N lags.

> If I know that a signal is composed of 12 complex exponentials in
> noise and you recommend an order of around 20 for the music algorithm
> to work, I assume that the additional 7 eigenvalues are averaged to
> estimate the noise variance

Indeed

> and their respective eigenvectors are dot
> producted with the signal eigenvectors to locate the complex
> exponential signal frequencies.

Nope, they are "dot producted" with the exponential test vectors.
Signal eigenvectors and noise eigenvectors are, by definition,
orthogonal. Once you have grouped the eigenvectors into signal
eigenvectors and noise eigenvectors, the signal eigenvectors leave the
saga never to be seen again. At least as far as MUSIC is concerned.

> Increasing the size of the
> autocorrelation matrix should only increase the number of noise
> eigenvectors and so help locate the signal frequencies.
>
> I investigated the relationship between the correlation matrix size
> and MUSIC's ability to locate my 12 complex frequencies, and I found
> that the likelihood of finding the frequencies increases as the size
> of the autocorrelation matrix is increased. Is this generally true, or
> is there a limit of lag beyond which the estimate of the autocorrelation
> sequence becomes so inaccurate that MUSIC finds complex exponentials
> that aren't there? Why did you advise before that I would need to work
> with an order of approx 20 for 12 complex exponentials buried in
> noise? Isn't bigger better?

No, it isn't for a number of reasons.

First, you don't want to compute eigenvector decompositions of larger
matrices than absolutely necessary. The computational cost of decomposing
a 200 x 200 matrix is significantly higher than that of decomposing a
20 x 20 matrix. It's a question of both time and numerical accuracy.
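
You can get a feel for the difference with a quick (and unscientific)
timing test:

A20  = randn(20);  A20  = A20 + A20';   % symmetric 20 x 20 matrix
A200 = randn(200); A200 = A200 + A200'; % symmetric 200 x 200 matrix
tic; eig(A20);  t20  = toc;
tic; eig(A200); t200 = toc;
fprintf('20x20: %g s   200x200: %g s\n', t20, t200);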

Second, you want to pay some attention to the individual entries in the
autocovariance matrix. If you compute the autocovariance sequence the
"usual" way, you implement something like (the process is assumed to be
zero mean)

             1   N-1-|k|
   Rxx(k) = ---    sum   x(n) x^*(n+|k|)
             N     n=0

which means that only N-|k| cross-terms are summed to produce the
autocovariance coefficient of lag |k|. All available data points are
used to compute the autocovariance coefficient of lag 0, while only
x(0) and x(N-1) are used to compute the autocovariance coefficients of
lags -N+1 and N-1. Which, in turn, means that the estimates of
coefficients at different lags carry different bias. The basic MUSIC
recipe does not address the question of biased estimators of the
signal autocovariance, so it is reasonable to believe that MUSIC
assumes the estimator to be unbiased.

It is possible to make unbiased estimators for the signal autocovariance
matrix for orders up to N/2. This unbiasedness comes at the expense of
higher variance in the individual terms.
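
In Matlab terms the two normalizations correspond to the 'biased' and
'unbiased' options of xcorr (a sketch; x is the data vector):

[rb, lags] = xcorr(x, 'biased');   % 1/N scaling: biased, lower variance
[ru, lags] = xcorr(x, 'unbiased'); % 1/(N-|k|) scaling: unbiased at each
                                   % lag, but variance grows with |k|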

In summary:

- You want a low order of the autocovariance matrix, for computational
  reasons.
- You want an unbiased autocovariance estimator.
- You want as little variance in the autocovariance estimator as
  possible.
- You still need the basic requirement, relating the number of
  sinusoids present to the order of the autocovariance matrix, to
  hold, with a little slack for the unexpected.

Finding both an estimator and an order that meet all these criteria
isn't always easy, and I usually say that these things have more to
do with black art and voodoo than anything else. My experience is
that using an order P such that

   1.5*M < P < 2*M

where M is the true (maximum) number of sinusoids present, usually
works quite well. (For your 12 complex exponentials, that means a P
somewhere between 18 and 24, which is why I suggested an order of 20.)
There are, of course, exceptions that go both ways, but the above
works well as a general rule of thumb.

Rune
