
Incorrect Sound Pressure Level from FFT in Matlab


Johannes Buechler

Apr 8, 2010, 9:47:26 AM
Hello folks,

Obviously I am doing something wrong but I can't figure out what it is.

My procedure: I have time data measured with a commercial data acquisition tool (in this case, PAK from Muller-BBM).

Now I would like to draw a correct Campbell diagram from that. The plot looks more or less exactly like the one from the professional software, but the numerical dB values are wrong: I have to shift the color axis in order to get the same output. The values in Matlab are incorrect, whereas the professional software shows the correct scaling.

The same happens if I take just one block, calculate its FFT and compare, or if I calculate the overall level from each spectrum.

Does anyone know what could be the reason?

The output for comparison can be seen here, beginning one hour after this post (I'll have to upload the files at home; at work the upload is blocked):
http://www.JohannesBuechler.de/Campbell_PAKvsMatlab_v2.bmp
http://www.JohannesBuechler.de/Campbell_PAKvsMatlab_v3.bmp

v2: SPL calculated with the correct reference value for sound pressure levels, p0 = 2*10^-5 Pa. It is shifted by approximately 50 dB relative to the correct graph from PAK.

v3: SPL calculated with a reference value of 1 Pa instead of p0 = 2*10^-5 Pa. This one is not shifted as much as v2, but still by approximately 20 dB.

What the heck am I doing wrong?

Here's my code:
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
function F = JAB_Campbell2(Y)

nfft = 2^14;
Fs = 2^14;
OvlpPerc = 50;

rpm = Y(:,end);
signal = Y(:,1);

window = hanning(nfft);
noverlap = floor(OvlpPerc/100*nfft);
% noverlap = 0;

effective_BL = nfft - noverlap;
NumberOfBlocks = floor(length(signal)/effective_BL);
LengthDifference = NumberOfBlocks * effective_BL - length(signal);

if ~exist('Weighting','var')
    Weighting = 'lin';
end

dBText = ['dB(' Weighting ') SPL'];

% DEBUG
% disp(['   Data length: ' num2str(length(signal)) ' samples']);
%
% disp(['=> Number of blocks without overlap would be: ' num2str(floor(length(signal)/nfft))]);
% disp(['   Number of blocks with ' num2str(OvlpPerc) '% overlap: ' num2str(NumberOfBlocks) ]);
%
% disp(['   That corresponds to ' num2str(NumberOfBlocks * effective_BL) ' samples']);
% disp(['=> Length difference: ' num2str(-LengthDifference) ]);
% DEBUG end

[B,F,T] = specgram(signal,nfft,Fs,window,noverlap);

% B is the single-sided spectrum including f=0
% => F(1) = 0
%
% T contains the start times of each time block
% => T(1) = 0

if max(size(B))-1 ~= nfft/2
    error('Length of calculated FFT vectors incorrect!');
end

idx = round(T*Fs+1); % start indices of the (possibly overlapping) FFT blocks

rpmT = zeros(1,length(idx));
for i = 1:length(idx)
    rpmT(i) = mean(rpm(idx(i):idx(i)+nfft-1));
end

p0 = 2*10^-5;
% L_p = 20*log10(abs(B)/(sqrt(2)*p0)); % v1
L_p = 20*log10(abs(B)/p0); % v2
% L_p = 20*log10(abs(B)); % v3

% disp(' -------------- DEBUG ');
% min(min(abs(B)))
% max(max(abs(B)))
% min(min(L_p))
% max(max(L_p))
% disp(' -------------- DEBUG ');

fig1 = figure('Color',[1 1 1]);
ax1=axes('XGrid','on','YGrid','on');
surface(rpmT,F,L_p);
shading('interp');
set(gcf,'Renderer','zbuffer');
title('Campbell Diagram');
xlabel('Engine Revolution RPM');
ylabel('Frequency [Hz]');
zlabel('TEST');
axis tight;
set(ax1,'Layer','top');
set(ax1,'YDir','normal');
set(ax1,'Box','on');
set(ax1,'XMinorTick','on');
set(ax1,'YMinorTick','on');
% set(ax1,'YLim',[20,Fs/2]);
set(ax1, 'YLim',[20,2000]);
set(ax1, 'YScale', 'log');
% set(ax1, 'XLim', [rpmT(1),rpmT(end)]);
set(ax1, 'XLim', [1000,4000]);
% set(ax1, 'CLim', [round(min(min(L_p))/5)*5 round(max(max(L_p))/5)*5]);
set(ax1, 'CLim', [25 round(max(max(L_p))/5)*5]);
% set(ax1, 'CLim', [25 75]);
% set(ax1, 'ZLim', [50 150]);

% colormap('jet');

c_ax1 = colorbar( ...
'Box','on',...
'YLim',[0 150],...
'Location','SouthOutside' ...
);

%% Create textbox
annotation1 = annotation(...
fig1,'textbox',...
'Position',[0.4393 0.045 0.1607 0.0746],...
'LineStyle','none',...
'String',dBText,...
'FitHeightToText','on');


SumE = zeros(1,min(size(B)));
for i = 1:min(size(B))   % for each individual spectrum
    SumE(i) = sum(abs(B(2:end,i)).^2)/(nfft/2);   % abs() needed: B is complex
end

MeanE = SumE ./ T(2); % T(2)-T(1) is the hop time between block starts (equals the block length only when noverlap = 0)

% fig2 = figure;
% plot(rpmT,20*log10(abs(MeanE)));

OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO

Thanks for your help :-)

Best regards, Joe

Johannes Buechler

Apr 8, 2010, 11:17:25 AM
Pictures are uploaded now...

"Johannes Buechler" <jb_...@gmx.DELnet> wrote in message <hpkmpe$bqf$1...@fred.mathworks.com>...

Rune Allnor

Apr 8, 2010, 11:46:54 AM
On 8 apr, 15:47, "Johannes Buechler" <jb_s...@gmx.DELnet> wrote:
> Hello folks,
>
> Obviously I am doing something wrong but I can't figure out what it is.
>
> My procedure: Having time data measured with a commercial data aquisition tool (in this case, PAK from Muller-BBM).
>
> Now I would like to draw a correct Campbell diagram from that. It works and looks more or less exactly like in the professional software, however I have wrong numerical dB values.

If you want the correct numbers you need to calibrate your measurement system. That is not at all a trivial task: it requires you to determine the (possibly frequency dependent and nonlinear) scaling coefficients throughout your measurement system.

Rune

Johannes Buechler

Apr 8, 2010, 12:03:05 PM
Rune Allnor <all...@tele.ntnu.no> wrote in message <06b4754e-1c5d-4985-

> If you want the correct numbers you need to calibrate
> your measurement system. That's a not at all trivial
> task, which requires you to determine the (possibly frequency
> dependent and nonlinear) scaling coefficients throughout
> your measurement system.
>
> Rune

Hi Rune,

The system is calibrated, of course. That is why the graph from the measurement system shows the correct pressure levels. The same values are exported, so they should also be correct in the Matlab plot. The question is: what is wrong in the Matlab calculation? Anyway, thanks for the hint.

Joe

Mark Shore

Apr 8, 2010, 12:06:08 PM
Rune Allnor <all...@tele.ntnu.no> wrote in message <06b4754e-1c5d-4985...@r1g2000yqb.googlegroups.com>...

As Rune notes, it appears your code derives dB values through intermediate steps from Y -> signal -> B -> L_p without any explicit calibration constant. Luckily - based on the very similar appearance of the paired plots - the scaling factor appears to be constant over the range of frequencies you're measuring. Presumably you can derive it either by looking through the instrument documentation, or by normalizing your MATLAB results to the PAK ones.

Mark Shore

Apr 8, 2010, 12:35:27 PM
"Johannes Buechler" <jb_...@gmx.DELnet> wrote in message <hpkunp$9sg$1...@fred.mathworks.com>...

(Delayed posting effect in action.) But I'd guess you don't have the source code for the PAK software, so are you certain that it isn't applying calibration or scaling factors to the instrument output?

Johannes Buechler

Apr 8, 2010, 2:25:10 PM
"Mark Shore" <msh...@magmageosciences.ca> wrote in message <hpl0kf$e22$1...@fred.mathworks.com>...

Hello Rune and Mark,

This thread is heading in the wrong direction. I guess I did not deliver the right facts & figures.

Tomorrow at work I'll re-post, showing that the numerical values of the time data are the same in Matlab and in the measurement system. Only the SPL from the FFT leads to different numerical values. So something is wrong with my Matlab code.

As for PAK: Of course it does apply calibration factors to the data stream. And this calibration factor makes the numerical values directly express the sound pressure in Pascals. The measurement microphone has a sensitivity of approx. 50 mV/Pa and a flat frequency response across the audible frequency range (that is why it costs several thousand Euros). The measurement system (PAK) knows this value because it is set in the properties of the measurement channel, and with a calibrator that creates exactly 1 Pa placed on the microphone, the calibration factor is determined to be something very close to 1.00 (for instance 1.02, which means that the microphone has almost, but not quite, precisely 50 mV/Pa).

Example: a "1" in the data stream means 1.00000000 Pa, which is 20*log10(1/2E-5) dB = 93.9794 dB. So if I plot the time data, it will show a "1". If I plot SPL, it should show 94 dB. In both cases, also under Matlab - but only if my calculation is correct...
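The arithmetic in this example is easy to check in a few lines (sketched in Python here purely for illustration, since the point is language independent; p0 is the standard SPL reference pressure):

```python
import math

p0 = 2e-5   # SPL reference pressure, 20 micropascals
p = 1.0     # a calibrated "1" in the data stream means 1 Pa

# sound pressure level re 20 uPa
L_p = 20 * math.log10(p / p0)
print(round(L_p, 4))   # 93.9794, i.e. roughly 94 dB
```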

Rune Allnor

Apr 8, 2010, 4:08:02 PM

The most likely explanation is that you process the raw,
uncalibrated data in matlab, whereas the other package
uses additional calibration information.

The basic principle is that one records a signal from
a source with known parameters (the calibration source),
determines the relevant calibration parameters from this
recording, and then applies the calibration parameters
as a pre-processing step in subsequent processing.

So the data that comes from the recorder - and are stored
to file - will be dumped before any calibration information
has been applied.

Rune

Godzilla

Apr 8, 2010, 10:48:23 PM
Rune Allnor <all...@tele.ntnu.no> wrote in message <a9ad38ca-84ed-40cd...@11g2000yqr.googlegroups.com>...

The window function will affect the FFT amplitude estimates. Did you compensate?
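This effect is easy to demonstrate with a small hand-rolled DFT (a Python sketch, since the principle is language independent). A Hann window has a mean value of 0.5, so it halves a sinusoid's spectral peak (a 6 dB loss); normalizing by the window sum instead of N restores the amplitude. A periodic Hann window is used here for exact numbers; MATLAB's hanning() is the symmetric variant, so real results differ very slightly.

```python
import cmath
import math

N = 64
k = 8   # sine placed exactly on DFT bin k, so there is no leakage
x = [math.sin(2 * math.pi * k * n / N) for n in range(N)]

# periodic Hann window; its mean value is 0.5
w = [0.5 * (1 - math.cos(2 * math.pi * n / N)) for n in range(N)]

def dft_bin(sig, b):
    """Single bin b of the DFT of sig."""
    return sum(s * cmath.exp(-2j * math.pi * b * n / N) for n, s in enumerate(sig))

xw = [s * wn for s, wn in zip(x, w)]

raw       = abs(dft_bin(x, k)) / (N / 2)        # ~1.0: amplitude recovered
windowed  = abs(dft_bin(xw, k)) / (N / 2)       # ~0.5: Hann halves the peak
corrected = abs(dft_bin(xw, k)) / (sum(w) / 2)  # ~1.0: normalize by window sum
print(raw, windowed, corrected)
```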

Johannes Buechler

Apr 9, 2010, 8:26:07 AM
Rune Allnor <all...@tele.ntnu.no> wrote in message <a9ad38ca-84ed-40cd...@11g2000yqr.googlegroups.com>...

> The most likely explanation is that you process the raw,
> uncalibrated data in matlab, whereas the other package
> uses additional calibration information.

The time signals are exactly identical in PAK and in Matlab, as can be seen here:
http://www.JohannesBuechler.de/TimeSignal_PAKvsMatlab.bmp
(the upload will be done tonight from my home computer)

As you can see, the numerical values are identical down to at least 0.1 mPa... This proves that the export has already applied all correction/calibration factors. So please, take this as given.

The problem however lies somewhere in Matlab.

In the meantime Mr Kaeferstein (author of http://www.mathworks.com/matlabcentral/fileexchange/2251-campbell ) gave me some possible reasons for a wrong scaling. The most striking one was that the result of the FFT is not scaled in Matlab at all. After dividing the result by nfft/2 I get roughly the right levels now:
http://www.JohannesBuechler.de/Campbell_PAKvsMatlab_v4.bmp (tonight...)

There is still a difference of approx. 5.6 dB, though, which I cannot explain. I don't resample, and I have window length = Fs. Also:

1.) the hanning window is compensated through the 50% overlap
2.) peak vs. RMS is chosen alike in PAK and Matlab: in PAK one can select it in the FFT settings, and in Matlab I have divided the sound pressure by sqrt(2):
L_p = 20*log10(abs(B)/(sqrt(2)*p0)); % RMS scaling

So these two points cannot be the reason for it.
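The nfft/2 and sqrt(2) steps can be sanity-checked on a synthetic tone with a hand-rolled DFT (a Python sketch for illustration; no window, tone exactly on a bin, so the scaling is clean):

```python
import cmath
import math

N = 64     # stands in for nfft; block length assumed equal to window length
k = 5      # tone placed exactly on bin k, so there is no leakage
A = 1.0    # peak amplitude: a 1 Pa sine
p0 = 2e-5  # SPL reference pressure

x = [A * math.sin(2 * math.pi * k * n / N) for n in range(N)]
Xk = sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))

peak = abs(Xk) / (N / 2)         # dividing by nfft/2 recovers the peak amplitude
rms = peak / math.sqrt(2)        # RMS of a sine is A/sqrt(2)
L_p = 20 * math.log10(rms / p0)  # about 91 dB: 94 dB (peak) minus ~3 dB
print(round(peak, 6), round(L_p, 2))   # 1.0 90.97
```

A remaining constant offset after these two corrections would point at something else, e.g. the window's amplitude loss.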

Any more ideas?

Rune Allnor

Apr 9, 2010, 8:34:04 AM
On 9 apr, 14:26, "Johannes Buechler" <jb_s...@gmx.DELnet> wrote:
> Rune Allnor <all...@tele.ntnu.no> wrote in message <a9ad38ca-84ed-40cd-96ef-b11aa4cb9...@11g2000yqr.googlegroups.com>...

> > The most likely explanation is that you process the raw,
> > uncalibrated data in matlab, whereas the other package
> > uses additional calibration information.
>
> The time signals are exactly identical in PAK and in Matlab, as is to be seen here:http://www.JohannesBuechler.de/TimeSignal_PAKvsMatlab.bmp
> (the upload will be done tonight from my home computer)
>
> As you can see the numerical values are identical up to at least 0.1 mPa... This proofs that the export has already applied all correction/calibration/whatever factors. So please, take this as given.

No, it doesn't. It only proves that the same data were exported.

> The problem however lies somewhere in Matlab.

The problem lies somewhere in your brain. Read what I posted
yesterday, and contemplate what I said.

Rune

Johannes Buechler

Apr 9, 2010, 9:27:05 AM
Rune Allnor <all...@tele.ntnu.no> wrote in message <5a499dbf-8e5f-4969...@g11g2000yqe.googlegroups.com>...

> On 9 apr, 14:26, "Johannes Buechler" <jb_s...@gmx.DELnet> wrote:
> > Rune Allnor <all...@tele.ntnu.no> wrote in message <a9ad38ca-84ed-40cd-96ef-b11aa4cb9...@11g2000yqr.googlegroups.com>...
> > > The most likely explanation is that you process the raw,
> > > uncalibrated data in matlab, whereas the other package
> > > uses additional calibration information.
> >
> > The time signals are exactly identical in PAK and in Matlab, as is to be seen here: http://www.JohannesBuechler.de/TimeSignal_PAKvsMatlab.bmp
> > (the upload will be done tonight from my home computer)
> >
> > As you can see the numerical values are identical up to at least 0.1 mPa... This proofs that the export has already applied all correction/calibration/whatever factors. So please, take this as given.
>
> No, it doesn't. It only proves that the same data were exported.

True. But a regular measurement in PAK with a calibration device on the microphone delivers a "1" in Matlab and 94 dB in PAK, so together with this information it is proof. Because, as I said, if I calculate the overall level in Matlab from the spectra, it also leads to different numerical values than in PAK.

> > The problem however lies somewhere in Matlab.
>
> The problem lies somewhere in your brain. Read what I posted
> yesterday, and contemplate what I said.

Thanks for pointing this out so nicely. In fact this is what I meant when I said "in Matlab": the problem is in the Matlab code that I posted and use... We agree that there is no Matlab bug or anything like that leading to a "wrong" plot. Matlab just calculates what the user tells it to calculate. So I have to find the mistake in my code, which is why I asked in the Matlab group.

So let me redefine the problem just for you: I'd like to get the same levels in the Matlab plot as in the professional 20,000+ EUR software PAK. I also have ArtemiS on my computer, which delivers results identical to PAK. They are the two most common acoustic/vibration measurement and analysis tools in the world, together with LMS maybe.

Joe

Rune Allnor

Apr 9, 2010, 9:42:15 AM
On 9 apr, 15:27, "Johannes Buechler" <jb_s...@gmx.DELnet> wrote:
> Rune Allnor <all...@tele.ntnu.no> wrote in message <5a499dbf-8e5f-4969-b0ca-687805178...@g11g2000yqe.googlegroups.com>...

> > On 9 apr, 14:26, "Johannes Buechler" <jb_s...@gmx.DELnet> wrote:
> > > Rune Allnor <all...@tele.ntnu.no> wrote in message <a9ad38ca-84ed-40cd-96ef-b11aa4cb9...@11g2000yqr.googlegroups.com>...
> > > > The most likely explanation is that you process the raw,
> > > > uncalibrated data in matlab, whereas the other package
> > > > uses additional calibration information.
>
> > > The time signals are exactly identical in PAK and in Matlab, as is to be seen here:http://www.JohannesBuechler.de/TimeSignal_PAKvsMatlab.bmp
> > > (the upload will be done tonight from my home computer)
>
> > > As you can see the numerical values are identical up to at least 0.1 mPa... This proofs that the export has already applied all correction/calibration/whatever factors. So please, take this as given.
>
> > No, it doesn't. It only proves that the same data were exported.
>
> True. But a plot in PAK from a regular measurement with a calibration device on the microphone delivers a "1" in Matlab, and 94 dB in PAK, so with this information it is a proof. Because, as I said, if I calculate the overall level in Matlab from the spectra, it also leads to different numerical values than in PAK.
>
> > > The problem however lies somewhere in Matlab.
>
> > The problem lies somewhere in your brain. Read what I posted
> > yesterday, and contemplate what I said.
>
> Thanks for pointing this out so nicely.

You're welcome.

> In fact this is what I meant when I said "in Matlab", because the problem is in the Matlab code that I posted and use... There is an agreement that there is no Matlab-bug or anything like that leading to a "wrong" plot. Matlab just calculates what the user tells it to calculate. So I have to find the mistake in my code, which is why I asked in the Matlab group.

I have pointed out repeatedly that you don't have all the
required information.

> So let me redefine the problem just for you: I'd like to get the same Levels in the Matlab plot as in the professional 20,000+ EUR software PAK. I also have ArtemiS on my computer which delivers identical results to PAK. They are the two most common acoustical/vibration measurement and analysis tools in the world, together with LMS maybe.

So what? You could have a EUR 17 billion package and it wouldn't
matter if you don't get access to the calibration data.

Rune

Mark Shore

Apr 9, 2010, 10:14:05 AM
snip

> True. But a plot in PAK from a regular measurement with a calibration device on the microphone delivers a "1" in Matlab, and 94 dB in PAK, so with this information it is a proof. Because, as I said, if I calculate the overall level in Matlab from the spectra, it also leads to different numerical values than in PAK.
>
snip

> So let me redefine the problem just for you: I'd like to get the same Levels in the Matlab plot as in the professional 20,000+ EUR software PAK. I also have ArtemiS on my computer which delivers identical results to PAK. They are the two most common acoustical/vibration measurement and analysis tools in the world, together with LMS maybe.
>
> Joe

I take a very empirical approach to this sort of thing, so it could be that I'm missing something, but if you have dB values you trust from PAK and want to reproduce in MATLAB (you've managed to create remarkably similar output plots by the way), then why not simply adjust the value of p0 in L_p = 20*log10(abs(B)/p0) until the dB levels match? (And figure out why it works afterward.)
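Mark's empirical normalization takes only a couple of lines: read a few paired levels off the two plots, confirm the offset is constant, and fold it into p0. The level readings below are made up for illustration:

```python
# hypothetical paired dB readings at the same frequency bins
L_pak    = [72.1, 65.4, 80.0, 58.7]
L_matlab = [66.5, 59.8, 74.4, 53.1]

offsets = [a - b for a, b in zip(L_pak, L_matlab)]
mean_offset = sum(offsets) / len(offsets)
print(round(mean_offset, 1))   # 5.6 -> a constant offset hints at one scale factor

# a fixed dB offset c is equivalent to scaling the reference p0 by 10^(-c/20),
# since 20*log10(B/(p0*s)) = 20*log10(B/p0) - 20*log10(s)
scale = 10 ** (-mean_offset / 20)
print(round(scale, 4))
```

If the offset is not constant across frequency, no single p0 adjustment will fix it, which itself is diagnostic.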

Johannes Buechler

Apr 12, 2010, 3:42:03 AM
"Mark Shore" <msh...@magmageosciences.ca> wrote in message <hpncnd$3q7$1...@fred.mathworks.com>...

> > So let me redefine the problem just for you: I'd like to get the same Levels in the Matlab plot as in the professional 20,000+ EUR software PAK. I also have ArtemiS on my computer which delivers identical results to PAK. They are the two most common acoustical/vibration measurement and analysis tools in the world, together with LMS maybe.
> >
> > Joe
>
> I take a very empirical approach to this sort of thing, so it could be that I'm missing something, but if you have dB values you trust from PAK and want to reproduce in MATLAB (you've managed to create remarkably similar output plots by the way), then why not simply adjust the value of p0 in L_p = 20*log10(abs(B)/p0) until the dB levels match? (And figure out why it works afterward.)

Thanks Mark for this comment. It is quite a refreshing remark, considering that some people obviously think they are the most clever in the world, and that if they don't know something then they haven't been provided with all the necessary information.

However, I have to find the reason for the discrepancy before I can proceed with my work (drawing the Campbell diagram is not the goal; it is just a way of easily comparing calculation results). The calculation of SPL is defined with p0, so the question is why the FFT delivers different numerical values even before taking the logarithm. In order to find the reason, I'll have to start with a simple sinusoidal signal.

I'll probably try to get some help as soon as I can define the problem in a more detailed way. Thanks a lot so far, Joe

Cris Luengo

Apr 12, 2010, 6:32:04 AM
"Johannes Buechler" <jb_...@gmx.DELnet> wrote in message
> However I have to find the reason for the discrepancy before I can proceed with my work (drawing of the Campbell is not the goal, it is just a way of easily comparing calculation results). The calculation of SPL is defined with p0 so the question is why does the FFT deliver different numerical values already before the logarithmisation. In order to find the reason, I'll have to start with a simple sinusoidal signal.
>
> I'll probably try to get some help as soon as I can define the problem in a more detailed way. Thanks a lot so far, Joe

You might want to look for the solution in the differences between the Fourier transform (which you want to compute) and the FFT (which you are computing). For one, the FFT is unitless. This means that the output values are not in Pascal, and the coordinates are not in Hz. Normalized frequencies go from 0 to just below 1, and the value at f=0 is the sum of all input values, not the integral over the signal.
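Both properties are easy to verify with a small hand-rolled DFT (a Python sketch for illustration; the sample values and sampling rate are arbitrary):

```python
import cmath
import math

Fs = 1000.0  # assumed sampling rate in Hz
N = 8
x = [1.0, 2.0, 0.5, -1.0, 3.0, 0.0, -2.0, 1.5]   # arbitrary samples

X = [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
     for k in range(N)]

# bin 0 is the plain sum of the samples, not an integral over time
print(abs(X[0] - sum(x)))   # ~0

# the physical frequency of bin k is k*Fs/N; one-sided part runs up to Fs/2
freqs = [k * Fs / N for k in range(N // 2 + 1)]
print(freqs)   # [0.0, 125.0, 250.0, 375.0, 500.0]
```

So attaching physical units (Pa, Hz) to FFT output is entirely the caller's job.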

I have no experience whatsoever with your application, so I cannot help you further. But I'm sure you can compute the correct normalisation based on this.

Good luck!

dpb

Apr 12, 2010, 9:12:01 AM

I'd venture, as another respondent just noted, that it's all in the normalization assumed by TMW in the FFT algorithm as compared to the computed values; that suggestion and the one above are your Rosetta stone.

If you simply compute the difference (ratio), I suspect a short amount of experimenting with the various quantities - length of the sample record, sampling units, etc. - will uncover the difference.

Waiting for magical insight will probably come in behind the empirical observation, a la tortoise/hare - maybe not as magical as an "aha!" moment of divine inspiration, but undoubtedly satisfying when you uncover it...

--

Rune Allnor

Apr 12, 2010, 9:21:50 AM
On 12 apr, 15:12, dpb <n...@non.net> wrote:

> I'd venture as another respondent just noted it's all in the
> normalization assumed by TMW in the FFT algorithm

TMW does not 'assume' anything about how to normalize the FFT,
but rather uses the de facto standard definition. There is the
omnipresent discussion about the 1/N factor not being symmetric
in the FFT/IFFT pair, but that's the conventional way of defining
the FFT/IFFT pair. Presumably, it has to do with saving FLOPs,
as there usually are more FFTs than IFFTs computed, and any
computations shifted from the FFT side to the IFFT side will
result in an overall computational gain.

Apart from that, the OP is looking in the wrong place for
explanations of his trivial conundrum.

Rune

dpb

Apr 12, 2010, 9:25:17 AM


I've seen normalizations other than the particular de facto 'standard' in ML, in both computations and the literature... plus windowing corrections, etc., that may be in the other software but aren't in ML without manual incorporation.

--

Rune Allnor

Apr 12, 2010, 9:36:43 AM
On 12 apr, 15:25, dpb <n...@non.net> wrote:
> Rune Allnor wrote:
> > On 12 apr, 15:12, dpb <n...@non.net> wrote:
>
> >> I'd venture as another respondent just noted it's all in the
> >> normalization assumed by TMW in the FFT algorithm
>
> > TMW does not 'assume' anything about how to normalize the FFT,
> > but rather uses the de facto standard definition. There is the
> > omnipresent discussion about the 1/N factor not being symmetric
> > in the FFT/IFFT pair, but that's the convetional way of defining
> > the FFT/IFFT pair. Presumably, it has to do with saving FLOPs,
> > as there usually are more FFTs than IFFTs computed, and any
> > computations shifted from the FFT side to the IFFT side will
> > result in an overall computational gain.
>
> > Apart from that, the OP is looking in the wrong place for
> > explanations of his trivial conundrum.
>
> I've seen others in both computations and literature than the particular
> de facto 'standard' in ML...

Examples?

> plus windowing corrections, etc., that may
> be in the other software that aren't in ML w/o manual incorporation.

The FFT is a well-defined operation that you might use as a
building block in other algorithms. The FFT/IFFT pair as such
are for all intents and purposes standardized as what is
implemented in matlab. What the FFT/IFFT pair is used for, is
not standardized. If you want to implement, say, a periodogram
spectrum estimator based on the FFT, then by all means do that.
To get the numbers to match up - e.g. Parseval's relation - you
need to introduce some additional scaling constants. Those have
nothing to do with the FFT, but with the periodogram. If you
want the numbers to match up to the physical world, you need to
introduce calibration constants. Which have nothing to do with
the periodogram or the FFT, but with the sensors.

Again, all this is trivial. One only needs to know what one
is talking about.

Rune

dpb

Apr 12, 2010, 10:29:40 AM
Rune Allnor wrote:
> On 12 apr, 15:25, dpb <n...@non.net> wrote:
...

>> I've seen others in both computations and literature than the particular
>> de facto 'standard' in ML...
>
> Examples?

Over 40 years, and having been retired for over 10 now, which instrument specifically had which would take some effort that I'm not sure would make any difference to go find again; but various HP (now Agilent) and Tek instruments had their own idiosyncrasies in the normalizations they used internally.

>> plus windowing corrections, etc., that may
>> be in the other software that aren't in ML w/o manual incorporation.
>

...

> To get the numbers to match up - e.g. Parseval's relation - you
> need to introduce some additional scaling constants. Those have
> nothing to do with the FFT, but with the periodogram. If you
> want the numbers to match up to the physical world, you need to
> introduce calibration constants. Which have nothing to do with
> the peridogram or the FFT, but with the sensors.

Some do, others don't... I've been in precisely the same position as the OP in multiple instances, and found empirically things like an extra (or missing) "N", "sqrt(N)", "N/2" and various other combinations of the above and more; one simply needed to find empirically what the instrument returned, as opposed to the "gold standard" as you seem to want to refer to it.

That there's a specific definition of the FFT is true; that TMW uses perhaps even the most common one doesn't mean that others haven't been used in various other places, either on purpose or perhaps even out of ignorance.

> Again, all this is trivial. One only needs to know what one
> is talking about.

As well as what the instrument makers were talking about, which sometimes required some empirical prodding to uncover, in my experience...

I've never had a case where the transducer calibrations weren't already in the returned data from the instrument; only other normalizations for windowing, the N vis-a-vis N/2, proper scaling for frequency binning, etc., which were, in the end, a constant; one could determine which factors were what by looking at the number required to obtain consonance.

My wager is still that that's what the OP will find...

If your experience is different, so be it.

--

Rune Allnor

Apr 12, 2010, 11:04:39 AM

I have said nothing about a 'gold standard'. I only happen to have some clue about DSP. The definition of the FFT/IFFT pair as used in matlab is found in a number of textbooks:

Oppenheim & Schafer: "Digital Signal Processing" (1975), eqs. 3.5 & 3.6, p. 89.
Rabiner & Gold: "Theory and Applications of DSP" (1975), eqs. 2.132 & 2.136, p. 51.
Oppenheim & Schafer: "Discrete-Time Signal Processing" (1989), eqs. 8.2 & 8.4, p. 542.
Proakis & Manolakis: "DSP, Principles, Algorithms and Applications" (1996), eqs. 5.1.1 & 5.1.8, pp. 394 & 395.
Leland B. Jackson: "Digital Filters and Signal Processing" (1989), eqs. 7.1.4 & 7.1.6, p. 134.

and those are just the books I have easily available.

> That there's a specific definition of FFT is true, that TMW uses perhaps
> even the most common doesn't mean that others haven't been used in
> various other places either on purpose or perhaps even out of ignorance.

Maybe not. But not knowing that what TMW uses is the de facto
standard can *only* be attributed to ignorance.

> > Again, all this is trivial. One only needs to know what one
> > is talking about.
>
> As well as what the instrument makers were talking about which sometimes
> required some empirical prodding to uncover ime...

Too bad that you used sub-standard gear.

> I've never had a case where the transducer calibrations weren't already
> in the returned data from the instrument; only other normalizations for
> windowing, the N vis a vis N/2, proper scaling for frequency binning,
> etc., that were, in the end, a constant that could be determined which
> factors were what by looking at the required number to obtain consonance.

That might be the case in a dedicated instrument.

> My wager is still that that's what the OP will find...

He might find that if he gets access to the proprietary data formats for the commercial package at hand. However, the prices he quotes all but guarantee that those data formats will not be openly available.

The open data formats like .wav do not contain calibration information, only raw quantized data from the fixed-point ADC.

No sane programmer will allow the software package to interact with
the sensor except by reading the output. If the software package is
worth the suggested price tag, the full instrument package - from
sensor through cables through PC and software license - will have
been calibrated by some independent body, saving the calibration
data for the set-up locally on the PC. One might imagine the same
sensors and cables be used with a different PC/SW combo, in which
case the previous calibrations will be invalid if the sensor settings
are messed with. Similarly, the ADC on a single PC might be used
with a number of sensors, in which case other calibrations will
be invalidated if the ADC settings are messed with.

As I said several days ago:

1) Calibration data will be stored independently of the measured
data.
2) Calibration corrections will be implemented as pre-processing
steps in the processing chain.
3) Calibration data will not be available to arbitrary users, only
to the internals of the commercial package. That restricted access
is *the* reason why SW vendors can charge $20k per license.

All of this is totally trivial instrumentation issues.

> If your experience is different, so be it.

What is different between the two of us is that I actually know
what I am talking about, both with respect to DSP and
instrumentation.

Rune

dpb

Apr 12, 2010, 1:06:54 PM
Rune Allnor wrote:
...

> What is different between the two of us is that I actually know
> what I am talking about, both with respect to DSP and
> instrumentation.

Well, I am quite confident I'm aware of what I've done as well.

The difference appears to be in what you think the OP is getting returned to him vs. what I presume he has, which is the output _INCLUDING_ those internal transducer calibrations, etc., but lacking the scaling to engineering units.

If he instead simply has raw transducer ADC values, that is indeed something else again; the compensation will not be a constant, and he's likely hosed unless he can find it or derive it by comparison on a bin-by-bin basis.

--

Rune Allnor

Apr 12, 2010, 1:28:31 PM
On 12 apr, 19:06, dpb <n...@non.net> wrote:

> The difference appears to be in what you're thinking the OP is getting
> returned to him vs what I presume he has which is the output _INCLUDING_
> those internal transducer calibrations, etc., etc., etc., ... but
> lacking the scaling to engineering units.

The guy talks about "sound pressure levels", which can only
be interpreted as if he thinks he gets the calibrated numbers.
If that is correct - that the package indeed produces the
sound pressure numbers - then the calibration data are missing
from the data he exports from that package.

> If he instead does simply have raw transducer ADC values, that is,
> indeed something else again and the compensation will not be a constant
> and he's likely hosed unless he can find same or derive them by
> comparison on a bin-by-bin basis.

I see no reason why he would get anything but the ADC values
exported to a public format. The calibration data is the sole
reason why anyone could defend a price tag of several tens of
thousands of dollars for what are effectively simple spectrum
analyses. Anyone selling a package for $20k would protect that
chargeable core. If the calibrated data were exported, anyone
could import them to some other package (e.g. matlab) and
do the analysis there.

There is partially a business aspect to this - the SW vendor
losing sales - but far more importantly, a QA/QC aspect. The
serious SW vendor will have had their computational algorithms
tested and certified by some standardizing body, which has
a number of boring legal implications for what the SW can be
used for.

If a measurement is to be used for something with legal
implications, e.g. settle a quarrel between a blast contractor
who used too much dynamite and a house owner who discovered
cracks in the basin walls, one does *not* want arbitrary
people or programs in that loop. One wants to be certain that
any particular analysis result is dominated by the data, and
not screwed up by an amateur fiddling with stuff he does
not understand.

The obvious way for the SW vendor to protect himself from
amateurs going in and doing stupid analyses is to be able
to say that "we don't export the calibration information
with the non-proprietary data formats, so any analyses based
on data available in non-proprietary data formats are invalid."

That way the SW vendors cover their backs wrt a number of
legal nuisances, ensuring that only their own, certified
software - and trained, certified operators - can do the
legally relevant stuff.

Rune

Johannes Buechler
Apr 13, 2010, 3:33:06 PM
Rune Allnor <all...@tele.ntnu.no> wrote in message <3cd2aa66-1f00-4a4a...@u21g2000yqc.googlegroups.com>...

>
> > That there's a specific definition of FFT is true, that TMW uses perhaps
> > even the most common doesn't mean that others haven't been used in
> > various other places either on purpose or perhaps even out of ignorance.
>
> Maybe not. But not knowing that what TMW uses is the de facto
> standard can *only* be attributed to ignorance.

Ok, if that makes you happy.

> > > Again, all this is trivial. One only needs to know what one
> > > is talking about.

Right, but if everybody knew, you wouldn't be special any more, so that'd be boring as well.

> He might find that if he gets access to the proprietary data formats
> for the commercial package at hand. However, the prices he is quotes
> all but guarantee that those data formats will not be openly
> available.

You are right that the details of the data format are not available to the customer. This however is irrelevant for this thread, because the numerical information contained in an exported format such as .mat or .wav (with restrictions) or .asc is identical, apart from some minor rounding errors due to different data word representations.

Or how else would you explain that if I import data that I had formerly exported as .mat or .asc (pure uncompressed numbers, representing the unit of the respective channel - to stay with the example of this thread, Pascal), I get the exact same plots?

> No sane programmer will allow the software package to interact with
> the sensor except by reading the output. If the software package is

Am I wrong, or do you completely ignore the fact that there is an actual measurement frontend between the sensor and the software?

The software used to record the data, by the way, may be different from the software used to create the plots. I can measure with a frontend from a different manufacturer and still do the analysis in PAK. This is not standard of course, but it is done regularly in my daily work with different companies and different departments within one and the same company. Luckily, this works; otherwise we'd have the same dilemma as with MS Word...

It is true though that plots don't look exactly alike between different software packages. However, there is for sure no 3..5 dB shift in anything; it's rather much more minor things. We can usually reproduce analyses very well between different packages, even when it comes to things like psychoacoustic parameters, modulation analyses and what not.

> worth the suggested price tag, the full instrument package - from
> sensor through cables through PC and software lisence - will have
> been calibrated by some independent body, saving the calibration
> data for the set-up locally on the PC. One might imagine the same
> sensors and cables be used with a different PC/SW combo, in which
> case the previous calibrations will be invalid if the sensor settings
> are messed with. Similarly, the ADC on a single PC might be used

Again, you obviously ignore the presence of a measurement frontend. Also it appears that you have no clue at all how multi-channel measurements are performed in real life. We have 120 parallel measurement channels; some record accelerometers, some record microphones, some record AH's, some record TTL signals and some record data from a bus system like CAN (extracted from the data stream by the frontend). Each channel can have sampling rates of up to 2^18 Hz - some channels (those for the TTL signals) have even higher sampling rates; in return they store only the times of the edges, not the voltages themselves.

The whole setup looks different each time, because different kinds of sensors are applied to the object of interest and connected to the frontend just before the measurements are done. For instance, when I perform a measurement for a vehicle interior noise transfer path analysis on a dynamometer, I typically have around 90 accelerometer channels (30 sensors with 3D), some 10 microphones and some other stuff. The frontend just knows the sensitivity of each sensor and whether it needs an operating voltage or not. The sensitivity incorporates the calibration value, which is obtained before the measurement by the user by means of a calibrator (at least for the microphones).

The computer can be exchanged, it is just connected to the measurement frontend via a LAN cable. Its only purpose during the measurement is to store the data that the frontend delivers. Also it can show some useful information on the display, like time data of one or several channels, or on-line-analyses of the data that is coming in.

No specific information other than the sensitivity of a sensor is available, since TEDS information is not used (a while ago it did not exist anyway). The sensors are assumed to have a flat frequency response within a specified frequency range. It is up to the engineer to keep this in mind while interpreting the data afterwards.

So what would be the ominous "calibration data" you are talking about the whole time? I would really like to understand this.

If I import a Matlab data stream into the Software, I have to be able to get the same FFT plot as in Matlab itself. This is the whole issue of this thread.

> 1) Calibration data will be stored independently of the measured
> data.
> 2) Calibration corrections will be implemented as pre-processing
> steps in the processing chain.
> 3) Calibration data will not be available to arbitrary users, only
> to the internals of the commercial package. That restricted access
> is *the* reason why SW vendors can charge $20k per lisence.

Nobody said that each single license costs 20k. We are quite a few people working with it here.

> All of this is totally trivial instrumentation issues.

Good to know; doesn't solve the problem, though.

Joe/"the guy"/OP

Rune Allnor
Apr 13, 2010, 4:01:55 PM
On 13 apr, 21:33, "Johannes Buechler" <jb_s...@gmx.DELnet> wrote:

> > All of this is totally trivial instrumentation issues.
>
> good to know, doesn't solve the problem, though.

Of course not. We established a week ago that the real problem
is in your mind.

Rune

noisemetric stojiljkovic
May 24, 2010, 10:51:07 AM
I'm no expert in the topics discussed above and don't want to be crucified for trying to contribute, but this is a forum, so what the hell... I've come across similar posts before, and vaguely remember a note about RMS accuracy calculations based on the FFT algorithms. Could it be that Matlab computes the RMS over the double-sided spectrum, while LabVIEW and similar apps first convert to a single-sided spectrum? That would go in line with the 5-6 dB differences seen in the plots. Now duck and cover...

Milan

Rune Allnor
May 24, 2010, 10:56:50 AM
On 24 May, 16:51, "noisemetric stojiljkovic" <noisemet...@gmail.com>
wrote:

> i'm no expert in the topics discussed above and don't want to be crucified for trying to contribute, but this is a forum so what the hell...i've come across similar posts before, and vaguely remember a note about r.m.s accuracy calculations based on the  fft algorithms. could it be that the same formula is used to calculate rms from matlab which performs the calculation over double sided spectra, while labview and similar apps first convert to single sided spectra ? this would go in line with the 5-6 dB differences seen in the plots. now duck and cover...

The factor-2-ish issue (the DC and Nyquist components must
be treated separately, as they only appear once in the spectrum)
is one of a large number of issues that must be handled correctly
to get the correct numbers in the end.
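That bookkeeping can be sketched in a few lines of NumPy (an illustrative aside, not taken from any package discussed here; an amplitude-3 cosine landing exactly on a bin should come back as 3):

```python
import numpy as np

fs = 1024
n = 1024
t = np.arange(n) / fs
x = 3.0 * np.cos(2 * np.pi * 100 * t)   # amplitude-3 cosine, exactly on bin 100

X = np.abs(np.fft.rfft(x)) / n          # 1/nfft scaling
X[1:-1] *= 2                            # interior bins appear twice; DC and Nyquist only once
# X[100] is now 3.0, the original peak amplitude
```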

The problem as stated in this thread was how to relate the computed
numbers to sound pressure. You can't do that without the calibration
factors.

Rune

Johannes Buechler
Jul 14, 2010, 3:36:22 AM
Rune Allnor <all...@tele.ntnu.no> wrote in message <3dab608e-d1bf-4015...@r9g2000vbk.googlegroups.com>...

> The factor-2-ish issue (the DC and Nyquist components must
> be treated separately, as they only appear once in the spectrum)
> is one of a large number of issues that must be handled correctly
> to get the correct numbers in the end.

correct

> The problem as stated in this thread was how to relate the computed
> numbers to sound pressure. You can't do that without the calibration
> factors.

It would be appreciated a lot if you could finally stop this repeated and unhelpful referring to these ominous "calibration factors"; you have at no point clarified what exactly you mean by "calibration data", nor have you tried to help find the real reasons for the difference between Matlab and other programs.

The problem was solved quite a while ago, the plots are equal now. Since some people might experience similar problems in the future, here the reasons for the (now gone) differences in the Campbell diagrams that this thread was about:

1.) As Godzilla had pointed out somewhere during the thread, the windowing (Hann) needs to be compensated for. I did not do that in my original code (though I thought I had --> thanks Godzilla). Interestingly, the amplitude compensation of "2" is obviously applied by both PAK and ArtemiS in all FFT plots, leading to results that show (with broadband noise) levels that are something like 1..2 dB higher than if a rectangular window was used instead. However, the window correction "2*sqrt(2/3)" is applied in overall level plots. So as to get an overall level plot equal to PAK (ArtemiS uses a different algorithm, following those of sound level meters), I have to "re-correct" the sums of each window-corrected FFT set by 2*sqrt(2/3)/2; that is, the energy values need to be multiplied by 2/3. I am not sure why PAK uses different window correction factors depending on the plot to show. For a discussion of the "correct" correction factor after application of a Hann window, see http://maintenanceforums.com/eve/forums/a/tpc/f/3751089011/m/9711055853?r=3921059853#3921059853
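For reference, both correction factors fall out of the window samples themselves; a small NumPy check (illustrative only, nothing PAK-specific) confirms the amplitude factor of 2 and the RMS factor of 2*sqrt(2/3):

```python
import numpy as np

n = 4096
w = np.hanning(n)

acf = 1 / np.mean(w)                 # amplitude correction factor: Hann coherent gain is 1/2
ecf = 1 / np.sqrt(np.mean(w ** 2))   # energy (RMS) correction factor: 2*sqrt(2/3), about 1.633
```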

2.) As Mr Käferstein had told me in a PM, the FFT in Matlab needs to be scaled by a factor of 1/(nfft * sqrt(2)) in order to get the expected levels. The advantage of the 1/nfft scaling is that one gets the correct overall level after summing up all the Stützstellen (frequency points) - something that obviously makes perfect sense and is standard in all analysis software that I've seen so far. The second part, sqrt(2), gets the numerical values right if RMS values are to be considered (as usually done, since they represent the energy of a signal and are the basis for the SPL calculation).
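A minimal NumPy check of this scaling (illustrative; the frequency and amplitude are my own choice): a 1 Pa peak sine, i.e. 1/sqrt(2) Pa RMS, should come out at about 91 dB SPL:

```python
import numpy as np

p0 = 2e-5                            # SPL reference pressure in Pa
nfft = 4096
t = np.arange(nfft) / nfft           # fs == nfft, as in the thread
p = np.sin(2 * np.pi * 200 * t)      # 1 Pa peak sine, exactly on bin 200

X = np.abs(np.fft.rfft(p)) / (nfft * np.sqrt(2))   # 1/nfft and peak-to-RMS scaling
X[1:-1] *= 2                                       # single-sided spectrum
spl = 20 * np.log10(X[200] / p0)                   # about 90.97 dB
```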

3.) The setting called "Überlappung" (overlap) does not mean the same thing in PAK as I had originally assumed. I had expected it to refer to the number of samples which are shared by two successive/adjacent FFT windows, as the window is moved by less than its own size. In PAK however, this (FFT block) overlap is only implicitly set via the step size of the reference parameter (time). This step size is, however, set in a different window than the "Überlappung" (which is set in the main window of the FFT analysis settings), which made the whole thing a little confusing to me.

After these three points, numerical values were identical. To get the 3D plot right as well, a last fix had to be done:

4.) The colormap that I used in PAK as an approximation of the Matlab "Jet" colormap was incorrect. It had been created visually (= quick & dirty) by a co-worker earlier, who had simply compared resulting plots from different programs. Now I have plotted the RGB values in Matlab, resampled them from 64 to 100 (due to a difference in the number of RGB values defining the whole map in the two programs) and created a PAK colormap with these values.

The resulting Campbell diagrams now look exactly identical - on my screen up to approx. 2000 Hz. Above 2 kHz the Matlab plots still look different from PAK or ArtemiS, even though the numerical values are identical. The latter becomes clear when plotting the averaged FFT, which is identical between all three programs up to fs/2 (or in the case of PAK, up to where it calculates, which is somewhat lower than fs/2). Thus, the difference in the 3D plots could be an issue of different color interpolation for higher frequencies, especially in logarithmic representations of the f-axis. If a solution can be found for that, great - otherwise I can live with it, as I know the numerical values are identical (which was my main concern).

Also the overall level can be plotted with identical results between PAK and Matlab.
Here are some plots showing the result. Other than the pictures that were named earlier in this thread, I'll try to keep them available on my domain for a longer time period.

(1) Campbell:
http://www.JohannesBuechler.de/FFT/Campbell_Matlab-vs-PAK.jpg
http://www.JohannesBuechler.de/FFT/Campbell_Matlab-vs-ArtemiS.jpg

(2) Average FFT:
http://www.JohannesBuechler.de/FFT/AverageFFT_Matlab-vs-PAK.jpg

Comparison of different Hann window correction factors:
http://www.JohannesBuechler.de/FFT/Matlab_RECT-vs-HANN_CorrectionTypes.jpg

Best regards

Johannes

Joerg Braeuer
Jul 22, 2010, 10:31:03 AM
Hello Johannes,

I encountered the same problem when trying to plot ArtemiS data in Matlab. If I understood correctly, there are two window compensation factors: a division by 2 to compensate for the Hann window (leakage?) and another factor of 2*sqrt(2/3) to get the RMS values, meaning there is an overall scaling factor of sqrt(2/3). Furthermore the scaling factor of 1/(nfft*sqrt(2)) needs to be applied.

If I incorporate these information, my code looks like this:

test_signal = Data{2,1}.channels.values;
Fs = Data{2,1}.Fs;

nfft = 16384;
noverlap = floor(0.5*nfft);
window = hanning(nfft);

[B,F,T] = spectrogram(test_signal,window,noverlap,nfft,Fs);

p0 = 2*10^-5;
X = B*sqrt(2/3)/(nfft*sqrt(2));
L_p = 20*log10(abs(X)/p0);

To plot the campbell, I use:

rpm = Data{3,1}.channels.values;
idx = round(T*Fs + 1);                    % indices of the block start times
rpmT = rpm(idx);                          % set the abscissa to the speed in RPS -> Campbell
rpmT = (rpmT + [rpmT(2:end), rpmT(end)])/2;

figure('Color',[1 1 1]);
ax1=axes;
surface(rpmT,F,L_p)
shading('interp');
set(gcf,'Renderer','zbuffer');
title('Spectrogram');
xlabel('rpm');
ylabel('frequency [Hz]');
colormap(mycolormap);
view(-90,90); %rotate view
axis tight;
set(ax1,'Layer','top');
set(ax1,'YDir','normal');
set(ax1,'Box','on');
set(ax1,'XMinorTick','on');
set(ax1,'YMinorTick','on');
set(ax1,'YLim',[0,350]);
set(ax1,'ydir','reverse');
set(ax1, 'CLim', [0 100]);
colorbar;

However, the Matlab Campbell diagram is still way off from what I get using ArtemiS (about 20 dB). Hopefully you can help me understand what I am doing wrong.

Best regards
Joerg Braeuer

Johannes Buechler
Aug 12, 2010, 5:45:19 AM
"Joerg Braeuer" <aball...@DELgmx.net> wrote in message <i29kn7$g63$1...@fred.mathworks.com>...

> Hello Johannes,
>
> I encountered the same problem when trying to plot Artemis-data in Matlab. If I understood correctly, there are two window-compensation factors. A Division by 2 to compensate for the hann-window (leakage?) and another factor of 2*sqrt(2/3) to get the rms-values, meaning there is an overall scaling factor of sqrt(2/3). Furthermore the scaling factor of 1/(nfft*sqrt(2)) needs to be applied.
>
Hi Jörg,

Sorry, I only now saw this message, but we've already figured the issue out via PM. For others, here are the necessary code lines to get the right plots:

if strcmpi(Scaling, 'peak')
    ScalingFactor = 1;
else  % RMS
    ScalingFactor = 2^(-1/2);
end

if ~isempty(strfind(lower(wintype), 'rect'))
    window = rectwin(nfft);
    WindowCorrectionFactor = 1;
else  % Hann
    window = hann(nfft);
    WindowCorrectionFactor = 2;
end

[B, F, T] = specgram(signal, nfft, Fs, window, noverlap);

% Scaling
FFTScalingFactor = WindowCorrectionFactor * ScalingFactor / nfft;
B = FFTScalingFactor * abs(B);

B(2:end-1,:) = 2.*B(2:end-1,:);  % single-sided spectrum with even block length
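For readers who want to cross-check the whole chain outside MATLAB, here is a rough NumPy sketch combining all of the factors above (the function name and structure are my own; it assumes a Hann window with the stated overlap and is not the PAK or ArtemiS implementation):

```python
import numpy as np

def spl_spectrogram(x, fs, nfft=4096, overlap=0.5, p0=2e-5):
    """RMS-scaled, Hann-windowed SPL spectrogram (illustrative sketch)."""
    w = np.hanning(nfft)
    hop = int(nfft * (1 - overlap))
    cols = []
    for start in range(0, len(x) - nfft + 1, hop):
        X = np.abs(np.fft.rfft(x[start:start + nfft] * w))
        X *= 2 * 2 ** -0.5 / nfft    # WindowCorrectionFactor * ScalingFactor / nfft
        X[1:-1] *= 2                 # single-sided spectrum (even block length)
        cols.append(20 * np.log10(np.maximum(X, 1e-12) / p0))
    f = np.arange(nfft // 2 + 1) * fs / nfft
    return f, np.array(cols).T       # rows: frequency bins, columns: time blocks
```

A 1 Pa peak sine on a bin centre then peaks at about 91 dB SPL, matching the single-block numbers discussed earlier in the thread.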

Best regards, Johannes
