Stat inquiries for the analysis of iEEG data


Gabriel Obregon-Henao

Aug 22, 2019, 5:50:13 PM
to AnalyzingNeuralTimeSeriesData

Hi Mike,


After reading Chapter 35 on group-level analyses, I’ve been thinking about how to apply strategies 2a and 2b to a within-subject analysis. Since I work with iEEG, the number and coverage of the electrodes vary considerably across subjects. I imagine that I could define time-frequency-region windows based on my hypotheses, but averaging power within a region might not be the best strategy given the higher spatial resolution of iEEG compared to M/EEG. Furthermore, my sample size is quite small, and therefore I think it makes more sense to focus on within-subject statistics.


For defining TF-window boundaries based on pixels that are statistically significant, while avoiding circular inference, I was wondering if comparing time-frequency samples in all conditions vs. baseline (which you mention is a common practice in fMRI research) is equivalent to your suggestion in the YouTube videos of choosing the window based on the average of all data (across all conditions)? Moreover, if my hypothesis includes more than one channel (within or across brain regions), does it make sense to choose independent TF windows for each channel? Couldn’t the time-frequency peaks vary across channels, similar to how they may vary across subjects? Treating channels as independent would also imply forming clusters across time-frequency only, and I was wondering if one would then have to correct for multiple comparisons across channels?


Finally, for testing power relative to baseline, is there a difference in shuffling the trial labels of the baseline periods and of the activation periods (I think this is the way it’s performed in Fieldtrip) to form the null-hypothesis distribution, compared to the method of temporally shifting the time series (depicted in Fig. 34.2A)? Specifically, are there differences in the null hypotheses of these two methods? Also, when would it be better to use the method (shifting the baseline period) shown in Fig. 34.2B? Is it better suited for experiments in which your ITI is very short (or non-existent)? Do you gain anything by preserving the temporal structure of the time series?


Thanks!


--Gabriel

Mike X Cohen

Aug 26, 2019, 5:42:04 AM
to analyzingneura...@googlegroups.com
Hi Gabriel. Apologies for the delayed reply. See below. 



On Thu, Aug 22, 2019, 23:50 Gabriel Obregon-Henao <gabrielobr...@gmail.com> wrote:

Hi Mike,


After reading Chapter 35 on group-level analyses, I’ve been thinking about how to apply strategies 2a and 2b to a within-subject analysis. Since I work with iEEG, the number and coverage of the electrodes vary considerably across subjects. I imagine that I could define time-frequency-region windows based on my hypotheses, but averaging power within a region might not be the best strategy given the higher spatial resolution of iEEG compared to M/EEG. Furthermore, my sample size is quite small, and therefore I think it makes more sense to focus on within-subject statistics.


For defining TF-window boundaries based on pixels that are statistically significant, while avoiding circular inference, I was wondering if comparing time-frequency samples in all conditions vs. baseline (which you mention is a common practice in fMRI research) is equivalent to your suggestion in the YouTube videos of choosing the window based on the average of all data (across all conditions)?


Yes, averaging over conditions first will avoid biased data selection for condition comparisons.
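[Editor's note: a minimal Python sketch of this non-circular selection, on simulated data; all dimensions and numbers are made up for illustration.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated TF power, trials x frequencies x time, for two conditions.
n_trials, n_freq, n_time = 40, 20, 100
condA = rng.normal(0, 1, (n_trials, n_freq, n_time))
condB = rng.normal(0, 1, (n_trials, n_freq, n_time))
condB[:, 8:12, 40:60] += 1.0  # an effect in one TF region of condition B

# Step 1: define the TF window from the CONDITION-AVERAGED map only,
# so the selection cannot favor either condition (no circular inference).
grand_avg = (condA.mean(axis=0) + condB.mean(axis=0)) / 2
window = grand_avg > np.percentile(grand_avg, 95)  # top 5% of pixels

# Step 2: only now compare conditions, using power averaged in the window.
a = condA[:, window].mean(axis=1)  # one value per trial, condition A
b = condB[:, window].mean(axis=1)
print(f"window: {window.sum()} pixels; "
      f"mean condition difference: {b.mean() - a.mean():.2f}")
```

Because the window is chosen from the grand average, both conditions contribute equally to the selection, and the subsequent A-vs-B test is unbiased.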


Moreover, if my hypothesis includes more than one channel (within or across brain regions), does it make sense to choose independent TF windows for each channel? Couldn’t the time-frequency peaks vary across channels, similar to how they may vary across subjects?


Interesting thought. It depends on the quality of the data. I'd be concerned about the peak-finding algorithm getting caught up by noise.


Treating channels as independent would also imply forming clusters across time-frequency only, and I was wondering if one would then have to correct for multiple comparisons across channels?


I guess it depends on how you set it up and what level you will use to make inferences. Each "electrode" is a sample from the population of all possible electrodes in this group of patients. So you wouldn't need to correct for the number of electrodes. It might be useful to do a lot of qualitative visualizations, for example, showing effect sizes across the brain.



Finally, for testing power relative to baseline, is there a difference in shuffling the trial labels of the baseline periods and of the activation periods (I think this is the way it’s performed in Fieldtrip) to form the null-hypothesis distribution, compared to the method of temporally shifting the time series (depicted in Fig. 34.2A)? Specifically, are there differences in the null hypotheses of these two methods? Also, when would it be better to use the method (shifting the baseline period) shown in Fig. 34.2B? Is it better suited for experiments in which your ITI is very short (or non-existent)? Do you gain anything by preserving the temporal structure of the time series?


The baseline-shifting method is useful when you want to test every TF pixel in the map. 
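[Editor's note: for concreteness, a Python sketch of the temporal-shifting null in the spirit of Fig. 34.2A, on simulated data. Each trial's time series is circularly shifted by a random offset, which destroys the time-locking to the stimulus while preserving the autocorrelation structure; all numbers here are fabricated.]

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated single-channel power, trials x time; activation at samples 80-120.
n_trials, n_time = 50, 200
power = rng.gamma(2.0, 1.0, (n_trials, n_time))
power[:, 80:120] *= 1.5  # task-related power increase

base = slice(0, 30)  # baseline period
obs = power[:, 80:120].mean() / power[:, base].mean()

# Null distribution: circularly shift each trial by a random offset,
# then recompute the activation/baseline ratio.
n_perm = 500
null = np.empty(n_perm)
for i in range(n_perm):
    offsets = rng.integers(1, n_time, n_trials)
    shifted = np.array([np.roll(tr, k) for tr, k in zip(power, offsets)])
    null[i] = shifted[:, 80:120].mean() / shifted[:, base].mean()

z = (obs - null.mean()) / null.std()
print(f"observed ratio {obs:.2f} vs. null mean {null.mean():.2f} (z = {z:.1f})")
```

Shuffling trial labels between baseline and activation windows instead would ask a subtly different null question (exchangeability of labeled segments rather than of time-locking), though in practice the two often give similar answers.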




Thanks!


--Gabriel

--
You received this message because you are subscribed to the Google Groups "AnalyzingNeuralTimeSeriesData" group.
To unsubscribe from this group and stop receiving emails from it, send an email to analyzingneuraltimes...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/analyzingneuraltimeseriesdata/f4c2a741-d2fa-4dee-9008-e21eee80f071%40googlegroups.com.

Gabriel Obregon-Henao

Sep 3, 2019, 4:27:36 PM
to AnalyzingNeuralTimeSeriesData
Thanks Mike.

I was wondering if it would be better to compute median power across trials, as opposed to average power? I've done my best to remove trials with interictal discharges from my analyses, but there still seem to be some outlier trials with spikes, sharp waves, and/or slow waves that could bias the average.

Another option would be to convert the power values to dB before averaging across trials, and then normalize by subtracting the average baseline (instead of computing average power, dividing by the average baseline, and then transforming to dB). I've seen the latter approach used in some papers, but I don't know if it's appropriate for defining TF windows and/or for running statistical analyses.

Best,

--Gabriel


Mike X Cohen

Sep 4, 2019, 4:22:38 AM
to analyzingneura...@googlegroups.com
Median power is an interesting idea that has been discussed in my book and on this list before. It's a nonlinear empirical measure, so its properties are not as well understood as the mean. Another possibility is to use amplitude instead of power -- outliers will have a much smaller impact. 
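[Editor's note: a quick simulation (Python, fabricated numbers) of the point above: a single artifact trial distorts the mean but not the median, and averaging amplitude (the square root of power) before squaring back also dampens the outlier.]

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated single-trial power values at one TF pixel, with one
# artifactual trial (e.g., an interictal spike) inflating power.
power = rng.gamma(2.0, 1.0, 100)
power[0] = 200.0  # outlier trial

mean_p = power.mean()
median_p = np.median(power)
# Averaging amplitude compresses outliers before they enter the average;
# squaring afterwards only puts the result back on the power scale.
mean_amp_sq = np.sqrt(power).mean() ** 2

print(f"mean: {mean_p:.1f}, median: {median_p:.1f}, "
      f"(mean amplitude)^2: {mean_amp_sq:.1f}")
```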

As for single-trial dB, I generally don't recommend that. The results can become unstable and overly influenced by noise.

Mike


Gabriel Obregon-Henao

Sep 12, 2019, 5:38:22 AM
to AnalyzingNeuralTimeSeriesData
Hey Mike,

Does one need to account for 1/f when averaging power across a TF-window? In the Fieldtrip example of analyzing high-gamma in human ECoG, for example, they multiply the power values at each time-frequency sample by the square of the corresponding frequency prior to averaging across frequencies within the high-gamma range. I’m mainly asking because I’m planning on using Strategy 2a from the book for a between-trials analysis.

Thanks!

-Gabriel

Mike X Cohen

Sep 14, 2019, 6:51:51 AM
to analyzingneura...@googlegroups.com
Hi Gabriel. I don't think that's necessary. If the frequency window is relatively narrow, then the 1/f won't really bias the spectrum. And if the frequency window is wide enough that the 1/f will introduce a bias, then the window is probably too wide ;)

That said, baseline normalization has several benefits aside from getting rid of the 1/f issue. For example, baseline normalization also helps to separate background/ongoing activity from the specific task-related modulation. In general, I recommend baseline normalization unless there is a specific reason not to use it.
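[Editor's note: a small Python sketch (simulated data, arbitrary numbers) of the recommended order of operations: average power across trials first, then normalize by the trial-averaged baseline and convert to dB.]

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated TF power at one frequency: trials x time. Baseline = first
# 20 samples; a 2x power increase at samples 50-70.
n_trials, n_time = 60, 100
power = rng.gamma(2.0, 1.0, (n_trials, n_time))
power[:, 50:70] *= 2.0

# Average over trials FIRST, then dB-normalize by the trial-averaged
# baseline (rather than dB-converting single trials).
avg = power.mean(axis=0)
baseline = avg[:20].mean()
db = 10 * np.log10(avg / baseline)

print(f"peak change: {db.max():.1f} dB (a 2x power change is ~3 dB)")
```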

Mike





--
Mike X Cohen, PhD
Fresh look: mikexcohen.com

Gabriel Obregon-Henao

Sep 14, 2019, 1:26:51 PM
to AnalyzingNeuralTimeSeriesData
Understood. That makes me wonder why, in your toy example of comparing the first and second halves of the experimental trials, you omitted the normalization step. Also, if one were to extrapolate from that example to a comparison between two experimental conditions, does it really make sense to run stats on the whole trial? Wouldn't you just want to compare the post-stimulus periods between conditions, regardless of whether you perform baseline normalization?

I was also wondering why you use n-1 degrees of freedom in Matlab's tinv function if you constructed the null distribution using Welch's t-test. Shouldn't we be using the Welch–Satterthwaite equation for estimating the degrees of freedom (or n-2 if we assume equal variances)?

Finally, I've seen that in Fieldtrip, when you use a Monte Carlo sample rather than the full permutation distribution, they adjust their p-values because the minimum p-value shouldn't be zero but 1/num_permutations. How do we account for this in your code?

Thanks!

--Gabriel



Mike X Cohen

Sep 16, 2019, 4:37:36 AM
to analyzingneura...@googlegroups.com
See below...


On Sat, Sep 14, 2019 at 7:26 PM Gabriel Obregon-Henao <gabrielobr...@gmail.com> wrote:
Understood. That makes me wonder why, in your toy example of comparing the first and second halves of the experimental trials, you omitted the normalization step. Also, if one were to extrapolate from that example to a comparison between two experimental conditions, does it really make sense to run stats on the whole trial? Wouldn't you just want to compare the post-stimulus periods between conditions, regardless of whether you perform baseline normalization?


When using single trials for statistical comparisons, you don't need normalization, because anything that would need to be normalized (1/f, non-task-related ongoing activity) will be present in all trials, and thus on both sides of the null hypothesis testing. Normalization is something you would do at the trial-averaged level, not for analyses that involve single trials.

 
I was also wondering why you use n-1 degrees of freedom in Matlab's tinv function if you constructed the null distribution using Welch's t-test. Shouldn't we be using the Welch–Satterthwaite equation for estimating the degrees of freedom (or n-2 if we assume equal variances)?


Can you let me know which piece of code (book section or video) you are referring to? Then I can try to reconstruct my reasoning there. But in general, there are several ways to set up a t-test and its threshold, depending on the assumptions. I'm actually not familiar with the Welch–Satterthwaite equation, which would already explain why I didn't use it ;)

 
Finally, I've seen that in Fieldtrip, when you use a Monte Carlo sample rather than the full permutation distribution, they adjust their p-values because the minimum p-value shouldn't be zero but 1/num_permutations. How do we account for this in your code?


There is indeed some debate in the non-parametric stats world about whether a p-value can actually be zero. In a formal sense, no, it cannot, because it is read off of a normal distribution, which itself never actually touches zero. But if you compute a p-value by counting empirical null-hypothesis values from a finite number of iterations, then it can be zero. Honestly, I try to avoid these kinds of issues, because I think they are purely theoretical and have no practical implications. For example, would you interpret your finding differently if the p-value were p=.0001 vs. p=0?
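[Editor's note: the adjustment mentioned in the question — bounding the Monte Carlo p-value below by 1/num_permutations — amounts to the standard (k+1)/(N+1) estimator, which is easy to add to any permutation script. A minimal Python sketch with a simulated null:]

```python
import numpy as np

rng = np.random.default_rng(4)

# A simulated case: the observed statistic exceeds every null sample.
obs = 10.0
null = rng.normal(0, 1, 1000)  # 1000 Monte Carlo permutations

# The naive p-value can be exactly zero...
p_naive = np.mean(null >= obs)
# ...while the (k+1)/(N+1) estimator bounds it below by 1/(n_perm + 1).
p_corrected = (np.sum(null >= obs) + 1) / (len(null) + 1)

print(f"naive p = {p_naive}, corrected p = {p_corrected:.5f}")
```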

 
Thanks!

--Gabriel



Gabriel Obregon-Henao

Sep 16, 2019, 5:52:25 AM
to AnalyzingNeuralTimeSeriesData
Thanks Mike, the code I’m referring to belongs to the toy example, and I think it’s used for creating figure 34.3. You basically form the clusters by parametrically thresholding the t-maps at each permutation, using the t-values output by tinv. If you were to threshold them based on their p-values, I think you wouldn’t use n-1 degrees of freedom, because you have unequal sample sizes and you’re assuming unequal variances. Therefore, I’m not sure whether you should use your critical alpha level and the degrees of freedom calculated via the Welch–Satterthwaite equation as inputs to tinv.

Best,

-Gabriel

Mike X Cohen

Sep 19, 2019, 7:10:04 AM
to analyzingneura...@googlegroups.com
(Apologies for the response delay.) I see what you are talking about, but I'm not sure it really matters in this example. The difference in the threshold from nudging the degrees of freedom around is quite tiny, particularly after around 100 dfs (e.g., plot(40:200,tinv(.95,40:200))). 
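[Editor's note: the Matlab one-liner translates directly; a numpy-only Monte Carlo version (simulated t-distributed samples, so the values are approximate) makes the same point.]

```python
import numpy as np

rng = np.random.default_rng(5)

# Monte Carlo stand-in for Matlab's tinv(.95, df): draw t-distributed
# samples and take the 95th percentile. The one-tailed critical value
# barely moves between df = 40 and df = 200 (~1.68 vs. ~1.65).
crit = {df: np.quantile(rng.standard_t(df, 1_000_000), 0.95)
        for df in (40, 200)}
print(f"df=40: {crit[40]:.3f}, df=200: {crit[200]:.3f}, "
      f"difference: {crit[40] - crit[200]:.3f}")
```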

But your general point is well-taken, that people should be thinking carefully about how the statistics are set up and applied to their data, what assumptions are being made, and so forth. I'm a big fan of trusting convergence in findings, and so if the statistical significance of an effect depends on one test and one particular df value, then I wouldn't consider that to be a "significant" effect.

Mike




Gabriel Obregon-Henao

Sep 19, 2019, 4:37:31 PM
to analyzingneura...@googlegroups.com
Thanks Mike,

I have a final question regarding one- vs. two-tailed tests. I’ve seen that some people compute cluster size and cluster mass separately for negative and positive test statistics, similar to how you proceeded with the minimum/maximum t-value in the toy example, and use a separate two-tailed test for the negative/positive clusters. Other people take the maximum cluster size or the maximum absolute cluster mass between the negative and positive clusters, and my question is whether you should use a one- vs. a two-tailed test in this case? In your examples, you usually form clusters on the absolute t-maps and use a one-tailed test, and my intuition is that it should be similar for the latter scenario. 

Also, do you see any advantage in forming clusters separately for the negative/positive test statistics? 

Best,

-Gabriel

Mike X Cohen

Sep 20, 2019, 8:56:01 AM
to analyzingneura...@googlegroups.com
Also a great question. There is no a priori correct answer here. On the one hand, if positive and negative clusters are really coming from different distributions, then it's justified to come up with separate thresholds. But do positive and negative clusters really come from different distributions? And how would we know the answer without already looking at the data and making the decision post-hoc? 
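[Editor's note: to illustrate why the max-absolute-cluster-mass approach needs no extra tail correction: clusters are formed on |t|, so extreme clusters of either sign feed the same null distribution. A rough Python sketch with simulated statistic maps, which stand in for maps recomputed after shuffling trial labels:]

```python
import numpy as np

rng = np.random.default_rng(6)

def max_abs_cluster_mass(tvals, thresh=2.0):
    """Largest sum of |t| over a contiguous run of suprathreshold |t|."""
    best = cur = 0.0
    for v in np.abs(tvals):
        cur = cur + v if v > thresh else 0.0
        best = max(best, cur)
    return best

# Observed map, e.g., t-values over time; the sign of the effect
# doesn't matter because clusters are formed on the absolute values.
obs_map = rng.normal(0, 1, 200)
obs_map[50:70] -= 4.0  # a strong NEGATIVE cluster
obs_mass = max_abs_cluster_mass(obs_map)

# Null: effect-free maps (in real data, maps recomputed after shuffling
# condition labels). Taking the max over absolute cluster masses makes
# the test two-sided at the map level.
null = np.array([max_abs_cluster_mass(rng.normal(0, 1, 200))
                 for _ in range(500)])
p = (np.sum(null >= obs_mass) + 1) / (len(null) + 1)
print(f"cluster mass {obs_mass:.1f}, p = {p:.4f}")
```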

For better or worse (actually, it's better and worse), there is a lot of statistical leeway in neuroscience. This is good because it increases freedom for individual researchers to custom-tailor the statistical procedures to their needs; but at the level of the entire field, it also decreases confidence because there aren't such well-defined standards that are ubiquitously appropriate. It is clear that there are new statistical issues in multivariate neuroscience for which we lack good protocols and analytic solutions. Pearson, for example, never had to worry about cluster correction in data where the sizes/shapes of the clusters are data- and parameter-dependent, vary over frequency and over brain region, and come from distributions with unknown parameters.

I fear this will all get worse before it gets better. We ("we" referring to the neuroscience community) are collecting ever-larger datasets with increasing dimensionality, and we have basically no clue how to analyze them, other than to apply whatever is the most recent and popular machine-learning/deep-network technique. I don't mean to sound too skeptical -- this level of exploration was not previously possible, and so there needs to be a period of wild-west data mining. But I think (hope) that in 50 years, we will look back on this period with shame at how much was produced/published without proper analyses and statistical controls. Perhaps I share some of the blame by giving people more analysis/statistical tools without strict guidelines on how (not) to use them. I cannot tell people when to use which method, which statistic, or which p-value correction, because I don't know myself, and I am not convinced that the methods that were appropriate for datasets 100 years ago are still valid on datasets we collect today.

This is the main reason why I feel strongly about convergence of findings, and why I don't trust any individual finding or individual publication. I have no idea how many Type-I errors there are in neuroscience, but I suspect it's way more than 5%. Of course, simply re-publishing a finding doesn't guarantee that it isn't an alpha-error, but I trust that integrating over multiple datasets, analysis methods, and research groups will eventually weed out fact from fiction. Ask me again how I feel about this issue in 50 years ;)   I think I'll be.... 74 years old at the time :/

Mike



Gabriel Obregon-Henao

Nov 5, 2019, 6:05:04 PM
to AnalyzingNeuralTimeSeriesData
Hi Mike,

I saw that you migrated the group to a new platform, for which I'll definitely sign up, but I wanted to go back to this thread. Specifically, you recommended that I plot effect sizes across the brain, and I was wondering if you have a good reference or guide for doing so (I've only seen people report p-values)?

Thanks,

--Gabriel

Mike X Cohen

Nov 6, 2019, 2:03:16 AM
to analyzingneura...@googlegroups.com
Before converting to p-values, you have some effect size measure, like a t-value or z-value or r-value. You could simply make a histogram of all effect sizes over all channels. Or if you have the physical XYZ locations of all the electrodes, you could map these effect sizes onto the anatomy and show it that way. I'm not sure of any specific references off-hand, but I know people in Mike Kahana's group often show results this way.

My suggestion was basically to try to show a lot of qualitative patterns before getting too much into statistical thresholding. I appreciate the importance of statistical thresholding, but thresholding is a lossy, nonlinear transformation with an arbitrary cut-off. A lot of rich details get lost in that approach. Like in most of the fMRI literature, where a brain region is reduced to a binary "active" or "not active" depending on this threshold. 
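[Editor's note: a concrete starting point for the histogram idea, in Python with fabricated per-electrode data: compute a t-value per electrode, convert it to Cohen's d, and summarize the whole distribution instead of thresholding each electrode.]

```python
import numpy as np

rng = np.random.default_rng(7)

# Fabricated per-electrode data: baseline-subtracted power per trial.
n_electrodes, n_trials = 80, 50
true_effects = rng.normal(0.3, 0.5, n_electrodes)
data = true_effects[:, None] + rng.normal(0, 1, (n_electrodes, n_trials))

# One-sample t per electrode, then Cohen's d = t / sqrt(n).
tvals = data.mean(1) / (data.std(1, ddof=1) / np.sqrt(n_trials))
cohens_d = tvals / np.sqrt(n_trials)

# Summarize the distribution of effect sizes rather than thresholding it.
counts, edges = np.histogram(cohens_d, bins=10)
print(f"median d = {np.median(cohens_d):.2f}, "
      f"range [{cohens_d.min():.2f}, {cohens_d.max():.2f}]")
```

With electrode XYZ coordinates, the same `cohens_d` values could be mapped onto a brain rendering instead of a histogram.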

Please do come join us at discuss.sincxpress.com! But I also don't mind continuing existing threads here on google-groups. 

Mike




Gabriel Obregon-Henao

Nov 7, 2019, 4:59:22 PM
to AnalyzingNeuralTimeSeriesData
Have you ever tried the full-epoch single-trial corrections used by Grandchamp & Delorme in the following paper: https://www.ncbi.nlm.nih.gov/pubmed/21994498? You cite the paper in the creation of Figure 18.10 in the book, but you don't use their method.

I'm trying to code the single-trial correction that they recommend (the full-TB-z ERSP correction in conjunction with the baseline permutation), but I'm not sure I fully understand how they're creating their null distribution. Part of my confusion has to do with why one has to recompute the classical trial-averaged pre-stimulus baseline prior to computing statistics, and how to do so. Moreover, I'm not sure what test statistic is being used per frequency; is it a min and max t-value across time?

Thanks!

--Gabriel 


Mike X Cohen

Nov 8, 2019, 6:01:03 AM
to analyzingneura...@googlegroups.com
I remember testing their suggestions a bit when the paper first came out. And I also remember coming to the conclusion that a trial-averaged baseline is best in most cases, and that baselining is often unnecessary when doing single-trial analyses, particularly for a time-frequency decomposition, where drifts and DC offsets are already removed by the narrowband filtering. 

It's been many years since I've read the paper closely; if you have specific questions about how they implemented something, it's probably best to contact them. Both of those authors are still active in the field. 

Mike


