Legendre polynomial in the surface Laplacian


affective neurodynamics

Jun 12, 2015, 4:13:18 PM6/12/15
to analyzingneura...@googlegroups.com
Hi,

I'm just starting to play with surface Laplacians and noticed a discrepancy I was hoping someone could clear up for me. The text of Analyzing Neural Time Series Data states that beyond an order of 10 or so the Laplacian exceeds the spatial resolution of a 64-electrode EEG recording, though it might make sense to go up to 13 or 15 for denser montages. In the accompanying code (laplacian_perrinX.m, as downloaded from http://mikexcohen.com/getthecode.html on May 29th, 2015), however, the default is 20 for montages of fewer than 100 electrodes and 40 for more than 100. Tenke's CSD toolbox implementation is even more dramatic, with a hard-coded value of 50 - assuming I am reading his code correctly, which admittedly is not certain.

In contrast, the code accompanying Mike's 2015 methods paper (www.mikexcohen.com/Cohen_phaseConnectivityComparison.zip) uses defaults of 12 for dense and 10 for sparse montages. Was this a change of heart post-book code, a matter of wanting differing spatial resolutions for different purposes, or something else entirely?

Tangentially related, I notice that in papers and books lambda values are most often given as 10^-5 (although Mike identifies 10^-6 as an alternative sensible value), while in both the laplacian_perrinX.m and CSD toolbox implementations the defaults read as 1^-5. Is this a case of nobody liking the default, or are digits just getting lost all over the place? It is a relatively small thing, but it has been somewhat disconcerting as I work through the process. Thanks for any help you can offer on either point.

-Karl

Mike X Cohen

Jun 13, 2015, 2:04:11 AM6/13/15
to analyzingneura...@googlegroups.com
Hi Karl. Those are good questions, and I'm sorry if the book wasn't clear enough. The Legendre order is actually not the smoothing parameter (that's the variable "m" in the code); it's the number of iterations used when computing the G and H transformation matrices.

While working on the book, I made the recommendation of 10 based on my investigations of how many iterations it took for the data to "settle," such that the result did not change with additional iterations. After the book was published, some colleagues convinced me that it doesn't hurt (very small additional computation time) to increase the order, just in case some montage needs more iterations (this might happen if there are missing channels or regional changes in electrode density). So I changed the code but obviously can't change the book. I don't think it really matters, though. You can test it yourself, and you will probably find that the Laplacian results do not appreciably change after an order of ~10.
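For anyone who wants to check the convergence claim without an EEG dataset in hand, here is a minimal sketch (in Python rather than Matlab, and assuming the standard Perrin et al., 1989, g-function series with m = 4; the exact form in laplacian_perrinX.m may differ in details) showing that the partial sums barely change past an order of roughly 10:

```python
import math

def g_series(x, order, m=4):
    """Partial sum of the Perrin et al. (1989) g-function:
       g(x) = (1/4pi) * sum_{n=1..order} (2n+1) / (n(n+1))^m * P_n(x),
    with the Legendre polynomials P_n built via the Bonnet recurrence."""
    p_prev, p_curr = 1.0, x               # P_0(x), P_1(x)
    total = 3.0 / (1 * 2) ** m * p_curr   # n = 1 term of the series
    for n in range(1, order):             # build P_{n+1}, add its term
        p_prev, p_curr = p_curr, ((2 * n + 1) * x * p_curr - n * p_prev) / (n + 1)
        k = n + 1
        total += (2 * k + 1) / (k * (k + 1)) ** m * p_curr
    return total / (4 * math.pi)

# x is the cosine of the angular distance between two electrodes on the sphere
x = math.cos(math.radians(30))
for order in (5, 10, 20, 40):
    print(order, g_series(x, order))
```

With m = 4 the n-th coefficient shrinks roughly like n^-7, so the order-10, order-20, and order-40 partial sums differ by less than about 1e-7 here: consistent with the point above that raising the order beyond ~10 is cheap insurance that changes essentially nothing.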

The spatial frequency characteristics come from the "m" parameter, which is hard-coded in the function. You can also try changing that to see the effect on the results. That's the same in the script as discussed in the book.
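To see why "m" (and not the Legendre order) sets the spatial-frequency content, one can look at how fast the series coefficients decay with n. A small sketch, again assuming the (2n+1)/(n(n+1))^m coefficient form from Perrin et al. (1989):

```python
# Coefficient of the n-th Legendre term in the g-series: (2n+1) / (n(n+1))^m.
# Smaller m means slower decay across n, i.e. more weight on high spatial
# frequencies (sharper, less smooth Laplacian); larger m smooths more.
def coef(n, m):
    return (2 * n + 1) / (n * (n + 1)) ** m

for m in (2, 3, 4):
    ratio = coef(10, m) / coef(1, m)   # how much the 10th term is attenuated
    print(f"m={m}: term-10 / term-1 ratio = {ratio:.2e}")
```

The ratio shrinks by about a factor of 55 for each unit increase in m, which is why changing m has a visible effect on the spatial smoothness of the result while changing the order (past convergence) does not.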

As for the lambda parameter, note that in Matlab you can write powers of ten in scientific notation: 1e-5 is the same thing as 10^-5. By contrast, 1^-5 is actually just 1 (1 raised to any power is 1), so if you see that in a paper, it's most likely an honest typo.
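The same distinction holds in most languages; a quick check (shown in Python here, though Matlab's 1e-5 notation behaves identically):

```python
lam = 1e-5             # scientific notation: 1 times 10 to the -5
assert lam == 10 ** -5  # same value as explicit exponentiation of 10

print(1 ** -5)          # one raised to any power is still one, not 10^-5
```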

I hope that helps clarify things a bit,
Mike


--
You received this message because you are subscribed to the Google Groups "AnalyzingNeuralTimeSeriesData" group.
To unsubscribe from this group and stop receiving emails from it, send an email to analyzingneuraltimes...@googlegroups.com.
Visit this group at http://groups.google.com/group/analyzingneuraltimeseriesdata.
For more options, visit https://groups.google.com/d/optout.



--
Mike X Cohen, PhD
mikexcohen.com

affective neurodynamics

Aug 3, 2015, 1:04:30 PM8/3/15
to AnalyzingNeuralTimeSeriesData
Mike,

I just realized I'd never come back after playing around with these parameters based on your answer - you did in fact clarify things. And you were right: the order doesn't seem to matter much past a minimum in the ballpark of ten for all the data I've touched. Thanks for the answers.

-Karl