Cycle selection in Morlet wavelet analysis


A

May 2, 2017, 2:51:20 PM
to AnalyzingNeuralTimeSeriesData

Hello Dr. Cohen,


After reading the chapter in your book explaining Morlet wavelet analysis, I would like further clarification on how to select the number of cycles. The general guidelines you described were useful, but I was wondering whether you could explain them on a mathematical basis. For example, say I am interested in measuring 1–80 Hz activity using a wavelet range of 4–10 cycles. How can I show mathematically that this cycle range is appropriate?


On the FieldTrip data analysis tutorial website, the Morlet section gives some basic equations for calculating spectral bandwidth and wavelet duration. Based on the wavelet duration equation, (4 cycles / 1 Hz)/π ≈ 1.27 s. Is it correct to say that 4 cycles may not be appropriate because the wavelet duration is too short to measure 1 Hz? I am also not sure how to interpret their definition of spectral bandwidth. I'm not even quite sure how to derive these equations, or whether they are the right ones for answering my original question above.
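For concreteness, here is how I understand those two FieldTrip rules of thumb, sketched in Python (the function names are mine, just for illustration):

```python
import math

def wavelet_duration(freq_hz, n_cycles):
    """FieldTrip rule of thumb: wavelet duration = width / (F * pi), in seconds."""
    return n_cycles / (freq_hz * math.pi)

def spectral_bandwidth(freq_hz, n_cycles):
    """FieldTrip rule of thumb: full spectral bandwidth = 2 * F / width, in Hz
    (i.e., +/- F/width around the center frequency F)."""
    return 2.0 * freq_hz / n_cycles

# The 1 Hz / 4-cycle case from my question:
print(wavelet_duration(1, 4))    # ≈ 1.27 s
print(spectral_bandwidth(1, 4))  # 0.5 Hz, i.e. roughly 1 ± 0.25 Hz
```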


Some clarification would be immensely helpful!


Thank you,

A

Mike X Cohen

May 3, 2017, 3:42:02 AM
to analyzingneura...@googlegroups.com
Hi A. Unfortunately, the math will only provide limited help here, because there is no mathematically optimal amount of smoothing to apply in a time-frequency decomposition. Fortunately, there are respectably wide ranges of smoothing that generally produce the same pattern of results. That means that although you may get different results from using 1 cycle vs. 20 cycles, you'll get basically the same pattern within the range of, say, 3–10 cycles.

I usually prefer to compute smoothing as the empirical FWHM. You can obtain this in the time domain from the Gaussian used to create the wavelet, or in the frequency domain from the amplitude spectrum of the wavelet (both are Gaussian-shaped). If the Gaussian is normalized to a maximum of 1, then the distance in ms or Hz between the lower and upper points closest to 0.5 is the empirical FWHM. This is described in the book in the chapter on wavelets.
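As a rough illustration of that procedure, here is a Python sketch (the book's code is in MATLAB; the wavelet parameters and the empirical_fwhm helper below are purely illustrative):

```python
import numpy as np

def empirical_fwhm(x, y):
    """Empirical FWHM of a peaked curve y(x): normalize to a maximum of 1,
    then take the distance between the points closest to 0.5 on either
    side of the peak."""
    y = y / y.max()
    pk = np.argmax(y)
    left = np.argmin(np.abs(y[:pk] - 0.5))
    right = pk + np.argmin(np.abs(y[pk:] - 0.5))
    return x[right] - x[left]

# Complex Morlet wavelet at 10 Hz with a 5-cycle Gaussian (illustrative numbers)
srate, freq, n_cyc = 1000, 10.0, 5.0
t = np.arange(-2, 2, 1 / srate)
s = n_cyc / (2 * np.pi * freq)                 # Gaussian width in seconds
gauss = np.exp(-t**2 / (2 * s**2))             # time-domain Gaussian envelope
wavelet = np.exp(2j * np.pi * freq * t) * gauss

# time-domain FWHM, from the Gaussian envelope
fwhm_t = empirical_fwhm(t, gauss)

# frequency-domain FWHM, from the amplitude spectrum (positive frequencies)
amp = np.abs(np.fft.fft(wavelet))[:len(t) // 2]
hz = np.arange(len(t) // 2) / (len(t) / srate)
fwhm_f = empirical_fwhm(hz, amp)
```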

Mike



--
You received this message because you are subscribed to the Google Groups "AnalyzingNeuralTimeSeriesData" group.
To unsubscribe from this group and stop receiving emails from it, send an email to analyzingneuraltimeseriesdata+unsub...@googlegroups.com.
Visit this group at https://groups.google.com/group/analyzingneuraltimeseriesdata.
For more options, visit https://groups.google.com/d/optout.



--
Mike X Cohen, PhD
mikexcohen.com

A

May 3, 2017, 11:41:08 AM
to AnalyzingNeuralTimeSeriesData
Hi again,

Thank you for your response.  I went back and re-read the FWHM section of Ch13 and played around with the code as well.  I think I understand the basic concept, but I was wondering if you could comment further on how to evaluate whether a computed FWHM value is appropriate for the frequency of interest. 

Let's look at a specific example, where the wavelets are constructed for 1–80 Hz using 3–10 cycles. Fig1 shows the FWHM plot for 1 Hz at 3 cycles, and Fig2 shows the FWHM plot for 10 Hz at 10 cycles. Fig3 shows the FWHM values for 1–80 Hz. For 1 Hz, the FWHM is 1 Hz, and for 80 Hz, the FWHM is 19 Hz. Based on this, how do you determine whether the FWHM is appropriate? Should the FWHM be half of the frequency of interest or less?

Looking forward to your response,
A
Attachments: Fig1.PNG, Fig2.PNG, Fig3.PNG

Mike X Cohen

May 3, 2017, 12:13:58 PM
to analyzingneura...@googlegroups.com
Hi A. I'm sorry to disappoint you, but as far as I know, there is no algorithm for determining the appropriate level of smoothing for these kinds of analyses, because it depends very much on the characteristics of the data. In the brain, higher frequencies tend to have wider bands, in part because of larger fluctuations in time-varying frequency. This means that more spectral smoothing is generally preferable at higher frequencies. If you have a measurement level and experimental setup that are conducive to narrowband gamma (e.g., LFP recordings from V1 during moving spatial gratings), then 10–15 Hz smoothing might be too much. If these are scalp EEG recordings during a cognitive task, then this level of smoothing seems appropriate (perhaps it could be even more). If you are doing cross-frequency coupling analyses, you'll probably want even more spectral smoothing in order to increase the temporal precision. And so on.

I think the best approach is to start with some test/pilot data and try re-running the time-frequency decomposition with different parameter settings. I predict that you will find a pocket in the parameter space where the settings have little effect on the qualitative pattern of results, and then you can feel reasonably comfortable that you have a good parameter selection. Keep in mind that brain oscillations have non-stationarities and that measurement devices have noise. This is very much unlike, e.g., FM radio, where spectral smoothing is a bad thing and must be minimized.
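To make that concrete, here is a rough Python sketch of such a parameter sweep (the pilot signal, frequency grid, and the tf_power helper are all made up for illustration; this is not code from the book):

```python
import numpy as np

def tf_power(signal, srate, freqs, n_cycles):
    """Time-frequency power via complex Morlet wavelet convolution,
    implemented in the frequency domain (a standard textbook approach)."""
    n = len(signal)
    t = np.arange(-2, 2, 1 / srate)            # wavelet time axis
    half = (len(t) - 1) // 2                   # half-wavelet, for trimming
    nconv = n + len(t) - 1                     # length of linear convolution
    sig_fft = np.fft.fft(signal, nconv)
    power = np.zeros((len(freqs), n))
    for i, (f, c) in enumerate(zip(freqs, n_cycles)):
        s = c / (2 * np.pi * f)                # Gaussian width for c cycles
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * s**2))
        wav_fft = np.fft.fft(wavelet, nconv)
        wav_fft /= np.abs(wav_fft).max()       # amplitude-normalize the wavelet
        conv = np.fft.ifft(sig_fft * wav_fft)[half:half + n]
        power[i] = np.abs(conv) ** 2
    return power

# Fake pilot data: a 10 Hz oscillation plus noise
np.random.seed(0)
srate = 500
tt = np.arange(0, 2, 1 / srate)
pilot = np.sin(2 * np.pi * 10 * tt) + 0.5 * np.random.randn(len(tt))

# Re-run the same decomposition with several cycle ranges and compare
# the qualitative pattern of results across settings
freqs = np.linspace(2, 40, 20)
for lo, hi in [(3, 10), (4, 8), (7, 7)]:
    cycles = np.linspace(lo, hi, len(freqs))
    power = tf_power(pilot, srate, freqs, cycles)
```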

Mike


