Using a Laplacian with ICA


abbydick...@gmail.com

Jan 10, 2017, 3:46:52 PM
to AnalyzingNeuralTimeSeriesData
Hi Mike, 

I have some spontaneous EEG data we have collected with low-functioning children (EGI 128-channel, 500 Hz). Due to the nature of our sample I am working with a few constraints, including a high amount of EMG and relatively short recording lengths (1-2 minutes). We're really interested in calculating coherence, and also in looking at multiple frequency bands, so applying a Laplacian seems like it would have numerous benefits for our data (including attenuating some of the EMG, alongside ICA).

The processing pipeline I currently use involves:

-FIR filter (high-pass: 1 Hz, low-pass: 100 Hz)

-remove bad channels

-down-sampling to the 10-20 system 25-channel montage (in order to have an adequate k-factor to run ICA)

-remove bad segments of data 

-Run ICA

-Remove artifactual components

-Re-reference to average
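[Editor's note: a minimal sketch of the first pipeline step above, a 1-100 Hz FIR band-pass at 500 Hz, using SciPy. The filter length, design choices, and placeholder data are illustrative, not the poster's actual settings.]

```python
import numpy as np
from scipy.signal import firwin, filtfilt

fs = 500                                   # sampling rate from the post
ntaps = 501                                # illustrative filter length
b = firwin(ntaps, [1.0, 100.0], pass_zero=False, fs=fs)  # 1-100 Hz band-pass

# placeholder data: 25 channels x 10 seconds
rng = np.random.default_rng(0)
eeg = rng.standard_normal((25, 10 * fs))

# zero-phase filtering (forward-backward, so no phase distortion)
filtered = filtfilt(b, [1.0], eeg, axis=-1)
print(filtered.shape)                      # (25, 5000)
```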


I just wanted to check whether I would be correct to apply the Laplacian right at the start of processing (before removing bad channels), and whether I would still be able to re-reference to average at the end of the pipeline, or whether that step would now be redundant.


Thanks in advance!


Best wishes, 


Abby 

Mike X Cohen

Jan 10, 2017, 3:52:43 PM
to analyzingneura...@googlegroups.com
Hi Abby. I would say you could apply the Laplacian instead of computing the average reference, i.e., as the final preprocessing step.

If you are removing electrodes only to boost the timepoints-to-channels ratio for ICA, then you might instead consider first running PCA to reduce the data to, e.g., 25 dimensions and then running the ICA on that subspace. This will allow you to keep the original number of electrodes. I know that the jade algorithm does this (available in eeglab, or you can search the Internet for jader.m), and perhaps other ICA algorithms as well. 
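[Editor's note: a bare-bones NumPy sketch of this PCA-then-ICA idea. It is not jade/jader.m — a minimal symmetric FastICA with a cubic nonlinearity stands in for the ICA step, and the data are random placeholders.]

```python
import numpy as np

rng = np.random.default_rng(1)
n_chan, n_time, n_keep = 128, 5000, 25
eeg = rng.standard_normal((n_chan, n_time))      # placeholder for real data

# --- PCA: project onto the top 25 principal components and whiten ---
eeg = eeg - eeg.mean(axis=1, keepdims=True)
cov = eeg @ eeg.T / n_time
U, s, _ = np.linalg.svd(cov)
W_pca = (U[:, :n_keep] / np.sqrt(s[:n_keep])).T  # (25, 128) projection
Z = W_pca @ eeg                                  # whitened 25-D subspace

# --- minimal symmetric FastICA on the subspace (cubic nonlinearity) ---
W = rng.standard_normal((n_keep, n_keep))
for _ in range(100):
    Y = W @ Z
    W_new = (Y**3) @ Z.T / n_time - 3 * W        # FastICA fixed-point step
    Uw, _, Vw = np.linalg.svd(W_new)             # symmetric decorrelation
    W = Uw @ Vw
sources = W @ Z                                  # 25 components; all 128 electrodes kept upstream
print(sources.shape)                             # (25, 5000)
```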

Hope that helps,
Mike



--
You received this message because you are subscribed to the Google Groups "AnalyzingNeuralTimeSeriesData" group.
To unsubscribe from this group and stop receiving emails from it, send an email to analyzingneuraltimeseriesdata+unsub...@googlegroups.com.
Visit this group at https://groups.google.com/group/analyzingneuraltimeseriesdata.
For more options, visit https://groups.google.com/d/optout.



--
Mike X Cohen, PhD
mikexcohen.com

Abby Dickinson

Jan 10, 2017, 4:10:09 PM
to analyzingneura...@googlegroups.com
Thanks Mike. 

PCA instead of downsampling our montage was something that we considered. However, we decided on downsampling to 25 channels because we wanted to move to a smaller montage anyway during analyses, so we thought doing this earlier would also solve our k-factor problem (rather than using PCA + ICA and then downsampling to 25 channels).

If I wanted to keep this approach, would it be possible for me to apply the Laplacian earlier in the pipeline (rather than after ICA)? From what I understand, it should only be applied to montages that are higher density than the 25 channels we have post-ICA in our case. And if I did this, should I do it after bad channels have been removed (seeing as it will replace the average-reference step), or on the original 128 channels?

Thanks again, 

Abby 


Mike X Cohen

Jan 10, 2017, 4:24:08 PM
to analyzingneura...@googlegroups.com
Given that you have 128 channels, you might consider the following steps (in addition to the temporal filtering and so on):

1) Remove all "high-risk" electrodes that are likely to contain a lot of noise, like the ones on the face (including forehead) and neck, plus other particularly bad electrodes.
2) Reject bad data periods.
3) Run ICA on PCA'ed data of the remaining ~100 electrodes. 
4) Laplacian
5) Select a subset of 25 electrodes for further analyses.

The reason I think this might be a good approach is that the spatial transformations will benefit from having more electrodes, unless those electrodes are really noisy; hence the new step 1.

On the other hand, the above argument is a bit of a gut-feeling, certainly not a formal proof. I recommend taking one of your datasets as a test-case, trying out several different preprocessing strategies and running the key analyses. You can then compare various aspects of the data from the different preprocessing approaches and try to make an informed decision. Once you decide on a preferred strategy, you would then apply the same procedure to all other datasets.

Mike

Abby Dickinson

Jan 10, 2017, 4:32:35 PM
to analyzingneura...@googlegroups.com
Thanks Mike, that's really helpful!

Best wishes, 

Abby 

Abby Dickinson

Jan 16, 2017, 5:12:27 PM
to analyzingneura...@googlegroups.com
Hi Mike, 

Sorry to bother you with this again. I just wanted to double check a couple of things before I go ahead with my analysis. 

1) From comparing a couple of pipelines (in terms of the order of steps), there don't seem to be any large differences. For practical reasons it would be much easier for me to apply the Laplacian to 128 channels, downsample (to 25), and then run ICA (I know you originally recommended ICA and then the Laplacian). I just wanted to check that there isn't anything inherently wrong with this approach (using the Laplacian before ICA) before I implement it on all datasets.

2) I'm a little confused about the differences between a surface laplacian and CSD. Would your laplacian code transform the signal into the current density domain?

Best wishes, 

Abby 

Mike X Cohen

Jan 16, 2017, 5:26:23 PM
to analyzingneura...@googlegroups.com
Hi Abby. See below.


On Mon, Jan 16, 2017 at 11:11 PM, Abby Dickinson <abbydick...@gmail.com> wrote:
Hi Mike, 

Sorry to bother you with this again. I just wanted to double check a couple of things before I go ahead with my analysis. 

1) From comparing a couple of pipelines (in terms of order of steps) there doesn't seem to be any large differences. For practical reasons it would be much easier for me to apply the laplacian to 128 channels, downsample (to 25) and then ICA (I know you originally recommended ICA and then laplacian). I just wanted to check there wasn't anything inherently wrong with this approach (using the laplacian before ICA) before I implemented it on all datasets?


That should be comforting. When you do things differently and get basically the same result, it means that your various pipelines are not overly sensitive to noise or weird methodological idiosyncrasies. When you change things slightly and get completely different results, that's when you should start worrying.

I don't see that there is anything wrong with Laplacian and then ICA. ICA is simply a multivariate decomposition method; it does not make assumptions about volume conduction or reference montage (as opposed to source localization, where the reference scheme does matter). The reason I recommended ICA first is that it seemed intuitive to me that having more volume conduction would help isolate the components better. But I don't know if that's been demonstrated and my intuition could be wrong. 
Actually, that paper is part of a special issue on the Laplacian.


 
2) I'm a little confused about the differences between a surface laplacian and CSD. Would your laplacian code transform the signal into the current density domain?


Just terminology; nowadays they're the same. To be more precise: The surface Laplacian is one of several methods to estimate the CSD, but because that's really the only method that is used these days, the two terms are used interchangeably.
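[Editor's note: to make the estimate concrete, the simplest finite-difference version of the surface Laplacian is Hjorth's nearest-neighbor scheme — each channel minus the mean of its neighbors. A toy sketch with an invented neighbor map for a 1-D row of electrodes; Mike's code uses the spherical-spline method instead.]

```python
import numpy as np

def hjorth_laplacian(eeg, neighbors):
    """eeg: (channels, time); neighbors: dict mapping channel -> neighbor list."""
    lap = np.empty_like(eeg)
    for ch, nbrs in neighbors.items():
        lap[ch] = eeg[ch] - eeg[nbrs].mean(axis=0)
    return lap

# invented montage: 5 electrodes in a row, each neighboring the adjacent ones
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
rng = np.random.default_rng(2)
eeg = rng.standard_normal((5, 1000))
csd = hjorth_laplacian(eeg, neighbors)
print(csd.shape)   # (5, 1000)
```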

Abby Dickinson

Jan 16, 2017, 5:43:49 PM
to analyzingneura...@googlegroups.com
Thanks so much Mike - that's really helpful. 

And having the CSD data won't mean I need to change any of my later analysis strategies (for computing spectral power, etc.)? The data can just be treated the same?

Mike X Cohen

Jan 17, 2017, 5:40:40 AM
to analyzingneura...@googlegroups.com
Correct, the electrode data are still just time series, to which you can apply any appropriate time-series analysis method.

Mike


abbydick...@gmail.com

Dec 4, 2017, 4:28:31 PM
to AnalyzingNeuralTimeSeriesData
Hi!

I just wanted to double check something with people who had been using CSD longer than me. 

As I mentioned in my previous posts, we are using a pipeline where we clean EEG data (using manual cleaning & ICA), then convert to CSD. 

The CSD estimates obviously look very different to the cleaned EEG data. I was wondering whether other people clean the data after the transformation to CSD.

Best, 

Abby 

Roy Cox

Dec 4, 2017, 5:13:08 PM
to analyzingneura...@googlegroups.com
hi Abby,

I've been doing all the typical preprocessing (temporal filtering, rejecting bad segments/epochs, interpolating bad channels, ICA-based removal of eye/EMG components) on EEG, prior to CSD transformation. As you say, CSD looks really different (appears more "noisy" due to more high frequency content). As far as I know, there are no accepted guidelines on what should count as an artifact in CSD data, so performing these steps on regular EEG seems preferable (if still subjective). Another temporal filtering pass after CSD couldn't hurt, as the time series has now been changed, but I've typically not bothered with this.

Intuitively, I wouldn't want to interpolate channels after the Laplacian transformation, because how channels are weighted for interpolation with regular EEG doesn't seem compatible with the spatial derivative trick that is central to CSD. (Whereas CSD is trying to make things more "focal", interpolation blurs things again.) I'm sure Mike has more to say on whether that checks out mathematically.

Re-referencing to average ref before CSD isn't necessary, as the CSD is reference-independent. So going from nose, linked mastoid, or average reference to CSD results in (theoretically) identical solutions (and extremely similar solutions in practice - I checked).
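[Editor's note: this reference-independence is easy to verify numerically with a toy Laplacian. A common reference signal added to every channel cancels out of a channel-minus-mean-of-neighbors estimate; the nearest-neighbor (Hjorth) scheme below is used purely for illustration.]

```python
import numpy as np

def hjorth_laplacian(eeg, neighbors):
    # each channel minus the mean of its neighbors (toy CSD estimate)
    lap = np.empty_like(eeg)
    for ch, nbrs in neighbors.items():
        lap[ch] = eeg[ch] - eeg[nbrs].mean(axis=0)
    return lap

neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
rng = np.random.default_rng(3)
eeg = rng.standard_normal((5, 1000))
ref = rng.standard_normal(1000)                     # arbitrary reference signal

lap_orig  = hjorth_laplacian(eeg, neighbors)
lap_reref = hjorth_laplacian(eeg - ref, neighbors)  # same data, new reference
print(np.allclose(lap_orig, lap_reref))             # True: the reference cancels
```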

Just my two cents.

Roy


Mike X Cohen

Dec 5, 2017, 4:51:50 AM
to analyzingneura...@googlegroups.com
Hi both. I agree -- you should clean the voltage data and then CSD the cleaned post-processed data. 

@Abby, what do you mean by "CSD estimates obviously look very different to the cleaned EEG data"? If the Laplacian and voltage data look really qualitatively different, then there might be something wrong. If this is a concern, you might want to try your code on some simulated data. For example, you could apply the method for creating and filtering spatial Gaussians that I show in figure 22.4.

As Roy mentioned, the Laplacian is basically just a high-pass spatial filter. It will reveal dynamics that are present in the voltage data but obscured by the low-frequency activity. 
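[Editor's note: the high-pass-spatial-filter point can be illustrated on a toy 1-D electrode row, in the spirit of the spatial Gaussians of figure 22.4 (spacing and widths here are arbitrary): a spatially broad pattern is strongly attenuated by a discrete Laplacian, while a focal pattern survives.]

```python
import numpy as np

x = np.arange(64, dtype=float)                       # 64 electrodes in a row
broad = np.exp(-((x - 32) ** 2) / (2 * 15.0**2))     # wide spatial pattern
focal = np.exp(-((x - 32) ** 2) / (2 * 2.0**2))      # narrow spatial pattern

def laplacian_1d(v):
    # discrete second spatial derivative (sign-flipped), interior points only
    return -(v[2:] - 2 * v[1:-1] + v[:-2])

# peak gain of each pattern after the Laplacian, relative to its peak before
gain_broad = np.abs(laplacian_1d(broad)).max() / broad.max()
gain_focal = np.abs(laplacian_1d(focal)).max() / focal.max()
print(gain_focal > 10 * gain_broad)                  # True: broad pattern suppressed
```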

Mike


--
Mike X Cohen, PhD
New online courses: mikexcohen.com

Abby Dickinson

Dec 5, 2017, 1:24:41 PM
to analyzingneura...@googlegroups.com
Thank you for your feedback, Roy & Mike; it's very helpful!

@Mike, when I said they looked different I was referring to the fact that the Laplacian data look 'noisier' due to the low-frequency dynamics being removed, revealing higher-frequency activity (as Roy mentioned).

That was the crux of my question really: I was cleaning the voltage data, but after the transformation the Laplacian data looked noisy because the high-frequency activity was more prominent. But it seems like you both would not recommend re-cleaning the Laplacian data, so I'll stick to my current pipeline :)


Mike X Cohen

Dec 5, 2017, 1:33:56 PM
to analyzingneura...@googlegroups.com
"Noise" is an ambiguous term. I think you mean that you can see more features in the Laplacian data than in the voltage data. I guess people had similar feelings when they went from 3 electrodes to 64. But yes, indeed, the Laplacian data are a bit more granular. The key is that all of those features were already there, just hidden underneath the larger-amplitude lower-spatial-frequency features.

You should always feel free to email screenshots if you want a more concrete opinion. Otherwise you'll just get these vague hand-wavey overtures ;)

Mike


Abby Dickinson

Dec 5, 2017, 2:01:20 PM
to analyzingneura...@googlegroups.com
Thank you!

I just had one more question, if that's OK.

I've been using the Laplacian with our connectivity data (and I've really liked this approach). I now want to implement the Laplacian across other types of analyses (i.e., previously when I quantified spectral power I did so on average-referenced EEG data, but I now want to use the Laplacian instead). I was using a measure of relative power before (the power in a certain spectral band relative to the total power in each channel). Would it still be appropriate to use relative power on CSD estimates, or should I use absolute power?

Mike X Cohen

Dec 5, 2017, 2:19:27 PM
to analyzingneura...@googlegroups.com
In general, I recommend consistency in analyses. If you will use Laplacian for one analysis, then you should use it for all analyses, or at least have a good reason to treat the data differently for different analyses. Laplacian also generally boosts SNR and increases selectivity, so I think it's overall a good spatial filter for channel-level analyses. 

Anyway, the direct answer to your question is Yes, you can compute power or relative power on the CSD estimate.
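[Editor's note: computationally, relative power on CSD data is the same recipe as on voltage data — band power divided by total power per channel. A sketch using SciPy's Welch PSD; the 8-12 Hz "alpha" band, segment length, and placeholder data are all illustrative.]

```python
import numpy as np
from scipy.signal import welch

fs = 500
rng = np.random.default_rng(4)
csd = rng.standard_normal((25, 60 * fs))     # placeholder for CSD estimates

# Welch PSD per channel; 2-second segments -> 0.5 Hz frequency resolution
freqs, psd = welch(csd, fs=fs, nperseg=2 * fs, axis=-1)

band = (freqs >= 8) & (freqs <= 12)          # illustrative "alpha" band
band_power  = psd[:, band].sum(axis=-1)      # uniform grid: sums suffice for a ratio
total_power = psd.sum(axis=-1)
rel_power = band_power / total_power         # one relative-power value per channel
print(rel_power.shape)                       # (25,)
```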

Mike

