Log scaling of power values, pre vs. post averaging across trials, for paradigms that don't include a baseline period?


Daniel Roberts

unread,
Mar 23, 2017, 3:51:10 PM
to AnalyzingNeuralTimeSeriesData

In Chapter 18 of “Analyzing Neural Time Series Data,” which covers baseline normalizations for time-frequency power, it is mentioned with respect to decibel scaling (p. 222): “… first average trials together and then transform to decibels; do not transform each trial to decibels separately and then average.” My question is whether this statement applies primarily to dB scaling relative to a baseline period (which the chapter covers) or to dB scaling of EEG power values more generally.

 

In paradigms in which spectral power is calculated in a single window without baseline correction (for example, power in a pre-stimulus period, or power during a resting period), it is also common to transform raw power to dB power, not relative to a value in a baseline window, but simply by computing 10 * log10(raw power).
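A minimal sketch of that baseline-free conversion using NumPy (the power values here are made up for illustration):

```python
import numpy as np

# Hypothetical raw spectral power values (arbitrary units), e.g. one per trial
raw_power = np.array([1.2, 0.8, 2.5, 1.1])

# dB scaling without a baseline: simply 10 * log10 of raw power
power_db = 10 * np.log10(raw_power)
```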

 

It seems that if the distribution of trial-level power values is not normal but instead ‘right’ or ‘positively’ skewed, then log scaling prior to averaging across trials would be useful, because the resulting distribution of log-scaled power values would more closely approximate a normal distribution.

 

Of course 10 * log10(mean(trials)) is not equivalent to mean(10 * log10(trials)), since log10(x) + log10(y) = log10(x * y). Log scaling the trial-level values prior to averaging across trials is therefore analogous to a geometric rather than an arithmetic mean of the underlying raw values, though it is not precisely the geometric mean because the result is not exponentiated back to the original scale: the mean of the log values equals the log of the geometric mean.
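To make that distinction concrete, here is a small numerical illustration (the trial values are made up):

```python
import numpy as np

# Hypothetical power values at one time-frequency point, across four trials
trials = np.array([1.0, 2.0, 4.0, 8.0])

db_of_mean = 10 * np.log10(trials.mean())      # average first, then convert to dB
mean_of_db = (10 * np.log10(trials)).mean()    # convert each trial, then average

# mean_of_db is exactly the dB of the geometric mean of the raw values
geometric_mean = trials.prod() ** (1.0 / trials.size)
```

Because log is concave, mean_of_db is always less than or equal to db_of_mean (Jensen's inequality), which is one reason the two orderings can give visibly different results on skewed data.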

 

Any thoughts on the pros vs. cons of log scaling before vs. after averaging across trials within a condition, when there is no baseline involved? Thanks!

Mike X Cohen

unread,
Mar 24, 2017, 8:00:21 AM
to analyzingneura...@googlegroups.com
Hi Daniel. There are a few motivations for baseline normalization, chief among them being (1) to help separate task-irrelevant from task-relevant activity, and (2) to allow qualitative and quantitative comparisons across frequencies.

Single-trial normalization usually isn't necessary because the two aforementioned goals are achieved at the trial-average level. The main potential disadvantage of single-trial normalization is that a bit of noise can have a disproportionate effect on the log transform.
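A quick illustration of that sensitivity, with made-up numbers: one near-zero trial barely moves the trial average, but its very negative dB value dominates the average of the per-trial logs.

```python
import numpy as np

power = np.ones(50)   # 49 'clean' trials with power 1.0 ...
power[0] = 1e-6       # ... plus one artifactual trial with near-zero power

db_after_averaging = 10 * np.log10(power.mean())     # barely moved by the outlier
db_before_averaging = (10 * np.log10(power)).mean()  # the -60 dB trial drags this down
```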

This is an issue that is often discussed on this list and others (e.g., eeglab and fieldtrip), in part because there is no simple answer. I have found in my own data that single-trial normalizations generally produce less interpretable results, but that doesn't mean I think it's wrong or incorrect.

As for taking the log of the raw power without dividing by any reference period, that can be done to remove some of the 1/f in the data. However, the 1/f is not a pure power law, so taking the log is not guaranteed to help goal #2 listed above, and it certainly does nothing for #1. It works OK sometimes and not other times.

Hope that helps,
Mike






--
Mike X Cohen, PhD
mikexcohen.com