Single trial baseline normalization


nonn...@u.northwestern.edu
May 20, 2017, 4:31:12 PM
to AnalyzingNeuralTimeSeriesData

Hi Dr. Mike Cohen,

Your book and lectures are really amazing.

I have a question regarding single-trial baseline normalization of time-frequency power. I am working with an EEG dataset in which a continuous variable changes trial by trial. Because of this, I would like to look at trial-by-trial changes in power using bandpass filtering and the Hilbert transform, with single-trial baseline normalization.

 

In your firfilter.m file, you normalize the band-passed, Hilbert-transformed power after averaging across trials:

 

    temppow  = mean(abs(hilbert(filtered_data)).^2,2);

    pow(i,:) = 10*log10( temppow./mean(temppow(baseidx(1):baseidx(2))) );
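For reference, the same trial-average dB baseline can be sketched in Python with numpy/scipy; the data shape, baseline indices, and variable names here are illustrative assumptions, not taken from firfilter.m:

```python
import numpy as np
from scipy.signal import hilbert

# Illustrative band-passed data: trials x time (assumed shape)
rng = np.random.default_rng(0)
filtered_data = rng.standard_normal((50, 1000))

# Power of the analytic signal, averaged over trials (axis 0 = trials)
temppow = np.mean(np.abs(hilbert(filtered_data, axis=1))**2, axis=0)

# Divisive dB normalization against mean power in an assumed baseline window
baseidx = (100, 200)
pow_db = 10 * np.log10(temppow / temppow[baseidx[0]:baseidx[1]].mean())
```

By construction, the power in the baseline window averages to 1 before the log, so the baseline period sits near 0 dB.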

 

I, however, need to do this at the single-trial level. When I tried, I got a very noisy plot:

    % EEG.data has already been band-passed.
    % hilbert operates along columns, so squeeze each channel to a
    % time-by-trials matrix before taking the analytic signal
    for curChan = 1:EEG.nbchan
        EEG.data(curChan,:,:) = abs(hilbert(squeeze(EEG.data(curChan,:,:)))).^2;
    end

    for curChan = 1:EEG.nbchan
        for curTrial = 1:EEG.trials
            % this trial's own baseline: mean power in the baseline window
            basePow(curChan,curTrial) = mean(EEG.data(curChan,baseidx(1):baseidx(2),curTrial));
            % divisive dB normalization against the single-trial baseline
            EEG.data(curChan,:,curTrial) = 10*log10( EEG.data(curChan,:,curTrial) ./ basePow(curChan,curTrial) );
        end
    end

 

    pow27 = mean(EEG.data(27,:,:),3);
    plot(EEG.times,pow27);

 

Strangely enough, when I tried percent-change normalization (as opposed to dB) at the single-trial level, it seemed to work:

 

    for curChan = 1:EEG.nbchan
        % squeeze to a time-by-trials matrix so hilbert runs along time
        EEG.data(curChan,:,:) = abs(hilbert(squeeze(EEG.data(curChan,:,:)))).^2;
    end

    for curChan = 1:EEG.nbchan
        for curTrial = 1:EEG.trials
            basePow(curChan,curTrial) = mean(EEG.data(curChan,baseidx(1):baseidx(2),curTrial));
            % percent change relative to the single-trial baseline
            EEG.data(curChan,:,curTrial) = 100 * (EEG.data(curChan,:,curTrial) - basePow(curChan,curTrial)) ./ basePow(curChan,curTrial);
        end
    end

 

    pow27 = mean(EEG.data(27,:,:),3);
    plot(EEG.times,pow27);

 

Am I missing something important here?

Thank you so much,

Narun

 

PS: Note, though, that the script below gives the same result as your two-line version : )

 

    for curChan = 1:EEG.nbchan
        % squeeze to a time-by-trials matrix so hilbert runs along time
        EEG.data(curChan,:,:) = abs(hilbert(squeeze(EEG.data(curChan,:,:)))).^2;
        for curTrial = 1:EEG.trials
            basePow(curChan,curTrial) = mean(EEG.data(curChan,baseidx(1):baseidx(2),curTrial));
        end
        % baseline averaged over trials, as in the two-line version
        basePowMeanTrial(curChan) = mean(basePow(curChan,:));
    end

    for curChan = 1:EEG.nbchan
        for curTrial = 1:EEG.trials
            EEG.data(curChan,:,curTrial) = EEG.data(curChan,:,curTrial) ./ basePowMeanTrial(curChan);
        end
    end

    pow27 = 10*log10(mean(EEG.data(27,:,:),3));
    plot(EEG.times,pow27);


 

Mike X Cohen
May 22, 2017, 12:38:31 AM
to analyzingneura...@googlegroups.com

Hi Narun. Single-trial baselining is a tricky issue, and best avoided if at all possible. dB and percent change are not identical transforms, and although they give quite similar results at the trial-average level, I'm not surprised to hear that they diverge more at the single-trial level. But I would be hesitant to recommend one over the other, because percent change might look better for one dataset, dB for another, and so on. You could try a linear baseline subtraction for the single-trial analyses, although this would preclude using dB or percent change at the trial-average level. Depending on your goal, you could apply linear baseline subtraction for the single-trial analyses and no single-trial baselining for the trial-average analyses.
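The linear (subtractive) single-trial baseline described here could look like the following Python/numpy sketch; the array shape and baseline window are assumptions for illustration:

```python
import numpy as np

# Assumed single-trial power: channels x time x trials
rng = np.random.default_rng(1)
power = rng.random((16, 500, 80)) + 0.5

baseidx = (50, 100)  # assumed baseline window, in samples

# Per-channel, per-trial baseline: mean power in the window
base = power[:, baseidx[0]:baseidx[1], :].mean(axis=1, keepdims=True)

# Subtractive baseline correction; no division, so a near-zero
# baseline on a noisy trial cannot blow the values up
power_corr = power - base
```

After correction, each trial's baseline window averages exactly to zero, which is the stability property that makes the subtractive version attractive at the single-trial level.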

Hope that helps. There have been several other posts about this issue on this list (and, I would imagine, also on the eeglab and fieldtrip lists), so you might want to search around for related discussions.

Mike



--
You received this message because you are subscribed to the Google Groups "AnalyzingNeuralTimeSeriesData" group.
To unsubscribe from this group and stop receiving emails from it, send an email to analyzingneuraltimeseriesdata+unsub...@googlegroups.com.
Visit this group at https://groups.google.com/group/analyzingneuraltimeseriesdata.
For more options, visit https://groups.google.com/d/optout.



--
Mike X Cohen, PhD
mikexcohen.com

Narun Pornpattananangkul
May 22, 2017, 1:10:35 AM
to analyzingneura...@googlegroups.com

Hi Mike,

Thank you so much for your insightful answer.

It seems to me that you also use linear baseline subtraction in your Figure 18.10:

% convenientize power

convdatPower  = abs(convdat2keep).^2;

% single-trial linear baseline correction

convdat2keepB = convdatPower - repmat(mean(convdatPower(baselineidx(1):baselineidx(2),:),1),size(convdatPower,1),1);

I think eeglab also has additive (linear baseline subtraction) and divisive options. I'll dig into the list(s) more, but in your opinion, why might an additive baseline be a better choice for single-trial analyses? 


Thank you so much,

Narun




--
Narun Pornpattananangkul, PhD

****please note new e-mail address: ps...@nus.edu.sg****

Research Fellow
Department of Psychology
National University of Singapore
Google Scholar: goo.gl/5ZJkdc




Mike X Cohen
May 22, 2017, 1:26:51 AM
to analyzingneura...@googlegroups.com

Linear methods are more stable. With dB and percent change, the baseline goes into the denominator. So if the baseline happens to be really small in one trial, maybe just because of noise fluctuations, then the post-stim power will be (possibly artifactually) high. Consider how large the fraction A/B gets as B goes to zero, vs. what happens to A-B as B goes to zero.
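A quick numeric check of this point (all values are arbitrary):

```python
import numpy as np

post = 2.0                                     # fixed post-stimulus power
baselines = np.array([1.0, 0.1, 0.01, 0.001])  # baseline shrinking toward zero

divisive = post / baselines      # A/B explodes as B -> 0: 2, 20, 200, 2000
subtractive = post - baselines   # A-B stays bounded near A: 1.0, 1.9, 1.99, 1.999
```

The same fixed post-stimulus power looks a thousand times "stronger" under the divisive baseline simply because one trial's baseline happened to be small, while the subtractive version barely moves.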

Mike


Narun Pornpattananangkul
May 22, 2017, 5:14:57 AM
to analyzingneura...@googlegroups.com

Hi Mike, 

Thank you so much. Things are much clearer now. 

I'm actually trying to do a single-trial time-frequency multiple regression analysis similar to your earlier paper (e.g., Cohen & Cavanagh, 2011, Front. Psychol.). Reading that paper, it seems to me that, even though you looked at single-trial ERSP, you used a trial-average, divisive baseline (is this correct?). I'll try this approach as well as single-trial linear baseline subtraction.

Thanks again,

Narun

Mike X Cohen
May 22, 2017, 7:41:18 AM
to analyzingneura...@googlegroups.com

Hi Narun. We didn't use a single-trial baseline in that study. We were interested only in the "beta" (regression parameter, not frequency band) coefficients, which reflect the trial-by-trial relationship between power and RT. The 1/f influence would be taken up by the intercept term, which we didn't report in that paper. We did some baselining at the group level (if I remember correctly), but that was subtractive. So if you are going to do single-trial regression, I'm not sure you need to do any baselining.
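A minimal Python sketch of that kind of single-trial regression, with simulated power and RT values; every number and name below is made up for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 200

# Simulated single-trial log power at one channel/frequency/time point,
# and reaction times that partly track it
log_power = rng.standard_normal(n_trials)
rt = 400 + 30 * log_power + 10 * rng.standard_normal(n_trials)

# Design matrix with an intercept: the intercept absorbs the overall
# (1/f) power offset, while the beta for power carries the
# trial-by-trial power-RT relationship -- so no baselining is needed
X = np.column_stack([np.ones(n_trials), log_power])
betas, *_ = np.linalg.lstsq(X, rt, rcond=None)
intercept, beta_power = betas
```

Because any constant offset in power loads onto the intercept, adding or removing a baseline shifts `intercept` but leaves `beta_power` essentially unchanged, which is the point made above.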

Mike


Narun Pornpattananangkul
May 22, 2017, 12:49:08 PM
to analyzingneura...@googlegroups.com

Hi Mike, 

Thanks so much for your input. I'll try both single-trial linear baseline subtraction and no baseline. I think having a baseline might be useful even for a regression-type analysis, in case I need to make an ERP-image-like plot of power at a certain frequency.

Really appreciate your time,

Narun 