Suggestion for a pre-processing protocol


leontion

Mar 31, 2015, 3:57:42 AM
to analyzingneura...@googlegroups.com

Hello everybody, 

I am very new to EEG analysis, and I was wondering if you could give me some tips on how to pre-process continuous EEG data.
I have read up to chapter 15 of the book, but I am not very confident about how to apply the knowledge to my data.

The EEG data consist of a 4-minute eyes-open/eyes-closed session followed by 30 minutes of two tasks (15 minutes each). Each task consists of 15 trials of 60 seconds each.
The sampling rate is 256 Hz, and there are 64 channels plus EOG.

The goal is to analyze multiple epochs from each trial. Markers for these epochs are triggered whenever the participant responds (responses are self-paced), so I am interested in analyzing a pre-response interval.

The analysis will focus on alpha-band activity and functional connectivity, though I may also need to examine the theta band.

I have spoken to several people, but the approaches they suggest are quite contradictory, so I would love to have your opinion.
Here is the sequence of steps I have gathered so far.

1. Re-reference to the common average.
2. Band-pass filter at 0.5–40 Hz with a 6th- or 7th-order Butterworth filter.
3. Run ICA after removing M1 and M2 (the data were recorded with the mastoids as reference).
4. Run the ADJUST algorithm, plus manual inspection, to remove artifacts.
5. Extract epochs using the response onsets as markers.

Any comments or tips would be of great help to me. 

Thank you!

Mike X Cohen

Mar 31, 2015, 3:22:03 PM
to analyzingneura...@googlegroups.com
Hi. Protocols for preprocessing EEG data vary widely across researchers and labs, so I'm not surprised that you heard contradictory advice. I'll write down the steps that we do in my lab, but keep in mind that this is not necessarily the best protocol for your data. Some of these items do not necessarily need to be in this order.

1. Import raw data.
2. High-pass filter at 0.5 Hz (we don't use a low-pass).
3. Import standard channel locations.
4. Re-reference the EOG channels.
5. Epoch the data into one EEG structure (eeglab format) that contains ALL trials across all conditions.
6. Subtract a prestimulus baseline.
7. Adjust marker values as appropriate (for example, mark trials as error or post-error).
8. Task-based trial rejection (for example, remove trials with no response or really long responses).
9. Manual trial rejection based on visual inspection.
10. Mark electrodes as bad if necessary. Electrodes are not marked as bad if they contain signal and noise, only if they are pure noise, for example if the electrode wasn't even plugged in during the recording.
11. Average reference. Note that you should re-reference the data only after marking electrodes as bad. You don't want the noise from a single bad electrode to infect the good data from other electrodes.
12. Run ICA and mark components for removal.
13. Apply the scalp Laplacian. In my book, I generally promote the use of the Laplacian. A recent special issue on the Laplacian in EEG research appeared in the International Journal of Psychophysiology. After reading those papers, I became more convinced that basically all EEG research should use the Laplacian, and you should need a good reason not to use it.
14. Separate epochs according to experiment condition and start analyzing (i.e., the fun part)!
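
In case it helps, here is a rough eeglab sketch of several of these steps. This is only an illustration, not my actual settings: the filename, marker label, epoch window, baseline window, and bad-channel list are placeholders, and exact function signatures can differ across eeglab versions.

% Minimal sketch of steps 1-3, 5-6, and 10-12 (filename, marker label,
% windows, and bad-channel list below are placeholder assumptions).
eeglab;                                                  % initialize EEGLAB
EEG = pop_loadset('filename', 'mydata.set');             % 1. import raw data (hypothetical file)
EEG = pop_eegfiltnew(EEG, 0.5, []);                      % 2. high-pass at 0.5 Hz, no low-pass
EEG = pop_chanedit(EEG, 'lookup', 'standard-10-5-cap385.elp'); % 3. standard channel locations

EEG = pop_epoch(EEG, {'stim'}, [-1 2]);                  % 5. ALL trials in one structure ('stim' is a placeholder marker)
EEG = pop_rmbase(EEG, [-200 0]);                         % 6. subtract a prestimulus baseline (window in ms)

badchans = {};                                           % 10. e.g., {'FP1'} if it wasn't even plugged in
if ~isempty(badchans), EEG = pop_select(EEG, 'nochannel', badchans); end
EEG = pop_reref(EEG, []);                                % 11. average reference, only after removing bad channels
EEG = pop_runica(EEG, 'extended', 1);                    % 12. ICA; then mark components, e.g., with pop_selectcomps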

I hope that helps. Ultimately, you will need to develop your own protocol that is best suited for your experiment. 

Mike






--
Mike X Cohen, PhD
mikexcohen.com

Li Zhu

Apr 5, 2015, 2:14:52 AM
to analyzingneura...@googlegroups.com
Hi Mike,

Thank you for sharing the preprocessing steps. I have a question about step two, which says that no low-pass filter is used in your analysis. I'm wondering whether a low-pass filter is applied during data recording? If not, how do you deal with the aliasing problem? Thank you.

Best Regards,
Li

Mike X Cohen

Apr 5, 2015, 2:58:06 AM
to analyzingneura...@googlegroups.com
Hi Li. Many (or perhaps all?) amplifiers have built-in anti-aliasing filters. These are necessary because aliasing must be prevented during data acquisition rather than with off-line filtering: by the time you filter off-line, the aliased signal is already in the data as a lower-frequency artifact.

That said, it's fine to apply a low-pass filter. However, I would recommend keeping the cutoff fairly high (e.g., >100 Hz). Imagine you low-pass filter the continuous data at 40 Hz and then, during the analyses, you want to see whether there is gamma activity: you would need to go back to the very beginning and redo all of the preprocessing, which would be quite time-consuming and annoying.
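
For example, a sketch in eeglab (keeping in mind that the cutoff must stay below the Nyquist frequency, e.g., 128 Hz at a 256 Hz sampling rate):

% Low-pass with a deliberately high cutoff so higher frequencies (e.g., gamma)
% survive for later analyses; [] means no high-pass is applied here.
EEG = pop_eegfiltnew(EEG, [], 100);   % 100 Hz cutoff; must be below Nyquist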

Mike


Li Zhu

Apr 5, 2015, 11:27:44 AM
to analyzingneura...@googlegroups.com
Thank you, Mike, for your kind advice.



Sincerely,
Li Zhu

Li Zhu

Apr 5, 2015, 4:14:06 PM
to analyzingneura...@googlegroups.com
Hi Mike,

Hope you are having a great weekend. Regarding the surface Laplacian: does it work for an unevenly spaced electrode montage? For example, what if all the electrodes cover only the prefrontal and temporal cortices? Thank you!



Sincerely,
Li Zhu

Mike X Cohen

Apr 5, 2015, 4:37:48 PM
to analyzingneura...@googlegroups.com
Hi Li. There are two separate questions here: One is whether the Laplacian will technically work with a montage that doesn't have full coverage. The answer is yes, as long as the electrode positions are fairly accurate. 

The second -- and more important -- question is whether this will produce good and interpretable results. If you really have huge gaps in the topography, then the Laplacian will interpolate over large distances, and there might be some interpolation issues. But do you really have electrodes over only prefrontal and temporal regions? That seems like a bit of an unusual montage. 

One way to see whether the results will be interpretable is to use the sample EEG data (or any other data with reasonable coverage), compute the Laplacian, then remove electrodes so that the montage matches your EEG cap, and compute the Laplacian again. If the results look similar, then you should be OK. If the results look really different, then it's probably not a good idea to use the Laplacian. I've never tried this test before, so if you do it, it would be interesting to hear about the results!
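
For anyone who wants to try this, here is a sketch in MATLAB, assuming the sampleEEGdata file and the laplacian_perrinX function that come with the book's code; the reduced-montage indices are placeholders for whatever matches your cap.

% Sketch of the coverage test (the electrode subset is a placeholder).
load sampleEEGdata.mat                         % EEG structure from the book's code
erp = mean(EEG.data, 3);                       % channels x time

% Laplacian of the full 64-channel montage
lapFull = laplacian_perrinX(erp, [EEG.chanlocs.X], [EEG.chanlocs.Y], [EEG.chanlocs.Z]);

% Laplacian after reducing the montage to match your cap
subset = [1:10 30:40];                         % hypothetical biased layout
lapSub = laplacian_perrinX(erp(subset,:), [EEG.chanlocs(subset).X], ...
                           [EEG.chanlocs(subset).Y], [EEG.chanlocs(subset).Z]);

% Compare topographies at one time point over the shared electrodes
tidx = dsearchn(EEG.times', 300);              % e.g., 300 ms post-stimulus
figure
subplot(1,2,1), topoplot(lapFull(subset,tidx), EEG.chanlocs(subset)), title('Full montage')
subplot(1,2,2), topoplot(lapSub(:,tidx),       EEG.chanlocs(subset)), title('Reduced montage')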

Mike


Victor

Apr 5, 2015, 4:59:23 PM
to analyzingneura...@googlegroups.com
Great suggestion, Mike! Yes, I have a dataset like that, but I'm quite worried about the accuracy of running the surface Laplacian on it. I will try out your suggestion and get back to you.

Best,
Li

Li Zhu

Apr 6, 2015, 7:09:42 PM
to analyzingneura...@googlegroups.com
Hi, I'm just getting back to report that I have checked my montage following Mike's suggestion:

"One way to see whether the results will be interpretable is to use the sample EEG data (or any other data with a reasonable coverage), compute the Laplacian, and then remove electrodes so that you have what matches your EEG cap, and then compute the Laplacian again. If the results look similar, then you should be OK."

I compared pre- and post-Laplacian topoplots from a 128-channel dataset with those from its channel-reduced (30-channel, biased-layout) counterpart. The results were very similar, although there was some inaccuracy (which is really due to the inherently lower spatial resolution of a low-density montage). So I think the surface Laplacian works well and is reasonably reliable even with a biased electrode layout.

Thanks Mike again!



Sincerely,
Li Zhu

leontion

Apr 22, 2015, 7:54:09 AM
to analyzingneura...@googlegroups.com
Hi Mike, 

Thank you so much for your detailed response and guidance. 
I wonder if I can ask for some clarification regarding some of the steps.

2. If I want to reduce signal complexity but still have the option to analyze gamma activity, would it be a good idea to use a band-pass filter with a higher cutoff, say 80 Hz? Do you recommend pop_eegfiltnew, EEGLAB's default FIR filter?

4. Do you mean re-referencing the eye channels to their common average, or something else? Our system re-references the raw data online to the common average. Do you still think I should re-reference the EOG channels?

6. I will not conduct an ERP analysis; I will go for functional connectivity (or maybe effective connectivity, if that is possible with EEG data). Do I still need to remove a baseline? I collect some eyes-open/eyes-closed data before the behavioral task.

Many many thanks for your support and contribution, 
Mina


Mike X Cohen

Apr 22, 2015, 8:05:22 AM
to analyzingneura...@googlegroups.com
Hi Mina. See below. 

On Wed, Apr 22, 2015 at 1:54 PM, leontion <mina.m...@gmail.com> wrote:
Hi Mike, 

Thank you so much for your detailed response and guidance. 
I wonder if I can ask for some clarification regarding some of the steps.

2. If I want to reduce signal complexity but still have the option to analyze gamma activity, would it be a good idea to use a band-pass filter with a higher cutoff, say 80 Hz? Do you recommend pop_eegfiltnew, EEGLAB's default FIR filter?


I don't recommend using any low-pass filters unless you have some unusual and horrible high-frequency noise. Keep in mind that although the cut-off might be 80 Hz, the attenuation will extend below the cutoff (the transition band is not infinitely sharp). Time-frequency analyses *are* band-pass filters, so doing, e.g., wavelet convolution will involve removing the higher frequencies anyway.


 
4. Do you mean re-referencing the eye channels to their common average, or something else? Our system re-references the raw data online to the common average. Do you still think I should re-reference the EOG channels?


You should re-reference the EOG channels to each other. This is done simply by subtraction: upper minus lower for the VEOG, and left minus right for the HEOG.
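
In code this could be as simple as the following sketch; the four EOG channel labels are assumptions, so substitute the labels from your own montage.

% Bipolar EOG by subtraction, for continuous (channels x time) data.
% The four channel labels here are placeholders.
labels = {EEG.chanlocs.labels};
VEOG = EEG.data(strcmpi(labels,'VEOGup'),:)   - EEG.data(strcmpi(labels,'VEOGlow'),:);
HEOG = EEG.data(strcmpi(labels,'HEOGleft'),:) - EEG.data(strcmpi(labels,'HEOGright'),:);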

 
 
6. I will not conduct an ERP analysis; I will go for functional connectivity (or maybe effective connectivity, if that is possible with EEG data). Do I still need to remove a baseline? I collect some eyes-open/eyes-closed data before the behavioral task.


Yes, you should always do a time-domain baseline subtraction. This will help with the data cleaning (for example, ICA will be driven by mean offsets if there are any), and it will also help with visual inspection of the data. You can try this for yourself by plotting some ERPs and topographical maps before vs. after removing the baseline. 

Even if you have no plans to include ERPs in your final analyses, you should always compute ERPs and ERP topographical maps, because these provide fast and powerful data inspection opportunities. For example, I have my students make screenshots of ERPs and ERP topomaps for each subject, and before starting any of the more interesting analyses, we sit down and look at each subject's condition-average ERPs. This tells you right away whether there are any problems, bad channels, massive artifacts, etc. 


 
Many many thanks for your support and contribution, 
Mina


Mike

 

leontion

Apr 23, 2015, 12:35:19 AM
to analyzingneura...@googlegroups.com
Thank you Mike,

My prestimulus conditions are eyes closed (EC) and eyes open (EO, with a fixation cross).
Then, before each trial of the task, a jittered fixation-cross period is inserted.
From which of these periods do you think I should take the mean to subtract from my trial data?

Mike X Cohen

Apr 23, 2015, 1:54:54 PM
to analyzingneura...@googlegroups.com
This time-domain baseline subtraction would involve subtracting the mean signal of each trial/channel. This is also known as removing the DC component. You can also do it easily in eeglab with the function pop_rmbase (I'm sure it's just as easy in fieldtrip or any other analysis toolbox). 
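
In other words, something like this sketch:

% Remove the DC component: subtract each channel's/trial's mean over the whole epoch.
EEG = pop_rmbase(EEG, [], []);                           % empty ranges = use the entire epoch
% Equivalent by hand:
EEG.data = bsxfun(@minus, EEG.data, mean(EEG.data, 2));  % mean over time, per channel and trial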

Mike


Mina Marmpena

Apr 24, 2015, 5:04:54 AM
to analyzingneura...@googlegroups.com
OK, now I get it. If it's the DC component, I think that's already handled by EEGLAB's pop_eegfiltnew().
Regarding not low-pass filtering: I wanted to ask, Mike, whether I should use a notch filter (say, at 50 Hz) at this pre-processing stage, and if so, after which step should it be applied?

Raquel London

Jul 13, 2016, 2:17:32 PM
to AnalyzingNeuralTimeSeriesData
Hi Mike,

Just a question about this:
"You should re-reference the EOG channels to each other. This is done simply by subtraction: upper minus lower for the VEOG, and left minus right for the HEOG."
If I understand correctly, we combine the two channels and are then left with two fewer channels, correct? Is this done for the ICA or for another reason?

Thanks!
Raquel

Mike X Cohen

Jul 13, 2016, 3:00:11 PM
to analyzingneura...@googlegroups.com
Yes, the bipolar referencing means you'd go from 4 channels to 2. This is done because you want as pure an estimate of the eye activity as possible. Leaving those channels referenced to the earlobe/mastoid/scalp average/etc. would mean mixing the eye and brain activity.

Mike




Katharina Limbach

Jul 14, 2016, 2:49:11 AM
to analyzingneura...@googlegroups.com
Dear Mike,

I just saw your preprocessing routine in one of the previous emails. Thanks for sharing that with us!

I have a question about the baseline correction. If I understood you correctly, you always do (and recommend doing) a time-domain baseline correction. If you go on to do time-frequency analysis later, do you just baseline-correct again? To my understanding, baseline correction/normalization should be done differently for time-frequency data (e.g., dB conversion rather than a simple baseline subtraction). My question is whether it is a problem to baseline-correct the data twice, or whether you do an additional step between the preprocessing and the baseline correction of the time-frequency data to deal with this.

I am also interested in your comment stating that:
"More generally, what to do about noisy electrodes also depends on how important that electrode is. If you are looking at occipital activity and FP1 is noisy, I probably wouldn't even bother with it."
So if you are interested in occipital activity and you have a number of trials in which a frontal electrode is going a bit crazy, do you still remove those trials during your visual inspection, or do you leave them in since it is not an electrode of interest? I am asking because I often wonder whether it would be fine to leave those trials in (especially if all the other electrodes look good in that particular trial) but then not use the average reference.

Thanks a lot,
Katharina

Mike X Cohen

Jul 14, 2016, 3:14:41 AM
to analyzingneura...@googlegroups.com
Hi Katharina. See below.


On Thu, Jul 14, 2016 at 8:49 AM, Katharina Limbach <katharin...@gmail.com> wrote:
Dear Mike,

I just saw your preprocessing routine in one of the previous emails. Thanks for sharing that with us!

I have a question about the baseline correction. If I understood you correctly, you always do (and recommend doing) a time-domain baseline correction. If you go on to do time-frequency analysis later, do you just baseline-correct again? To my understanding, baseline correction/normalization should be done differently for time-frequency data (e.g., dB conversion rather than a simple baseline subtraction). My question is whether it is a problem to baseline-correct the data twice, or whether you do an additional step between the preprocessing and the baseline correction of the time-frequency data to deal with this.



There is a distinction here between a linear baseline subtraction during preprocessing, and nonlinear baseline normalization during time-frequency analyses. It's tempting to think that they are similar or related, because they both have the word "baseline" in them. But they are different operations with different goals. The linear baseline subtraction is simply a DC shift, and is done to facilitate data inspection and cleaning. For example, if each trial/channel has a wildly different average value, ERPs and topographical maps will be difficult or impossible to interpret, and the first few components of an ICA decomposition will be dominated by these offsets. Those are the reasons for baseline subtraction.

Baseline normalization during analyses is a nonlinear operation, and has many useful purposes (e.g., eliminate 1/f scaling, separate ongoing from task-related activity, create normally distributed data values), all of which are unrelated to the linear baseline subtraction during preprocessing. Remember that time-frequency analyses are bandpass filters, and 0 Hz is excluded. That means that any trial offsets resulting from baseline subtraction are ignored. You can try this yourself by adding 100000 to the time-domain data; the time-frequency results won't change at all (except for the increased edge artifact).
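
Here is a small simulated sketch of that offset test; all parameter values are arbitrary choices for illustration, and the wavelet is explicitly mean-centered so that its 0 Hz gain is exactly zero.

% A large DC offset leaves wavelet-derived power unchanged except at the edges.
srate  = 256;  t = 0:1/srate:3;
signal = sin(2*pi*10*t) .* exp(-(t-1.5).^2);            % a 10 Hz transient

f = 10;  ncyc = 5;  wt = -0.5:1/srate:0.5;              % Morlet wavelet at 10 Hz
wavelet = exp(2*1i*pi*f*wt) .* exp(-wt.^2 / (2*(ncyc/(2*pi*f))^2));
wavelet = wavelet - mean(wavelet);                      % zero-mean: 0 Hz is fully rejected

pow1 = abs(conv(signal,       wavelet, 'same')).^2;
pow2 = abs(conv(signal + 1e5, wavelet, 'same')).^2;

plot(t, pow1, t, pow2)   % identical in the middle; the offset inflates only the edges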


 
I am also interested in your comment stating that:
"More generally, what to do about noisy electrodes also depends on how important that electrode is. If you are looking at occipital activity and FP1 is noisy, I probably wouldn't even bother with it."
So if you are interested in occipital activity and you have a number of trials in which a frontal electrode is going a bit crazy, do you still remove those trials during your visual inspection, or do you leave them in since it is not an electrode of interest? I am asking because I often wonder whether it would be fine to leave those trials in (especially if all the other electrodes look good in that particular trial) but then not use the average reference.



Well, perhaps I was a bit glib with that comment. In general, you should always strive to have clean data, and the best way to have clean data is to collect clean data. Cleaning data offline is imperfect and annoying. Let's imagine you are interested in posterior alpha oscillations after different kinds of visual stimuli. If you have EMG noise on frontal channels because your subject had a tense face, that artifact is far from posterior alpha on two dimensions -- space and frequency. Is it worth worrying about that artifact? I'd say probably not. 

The problem is that artifact removal is never perfect. Signal and noise are rarely perfectly separable, and so the more noise you remove, the more signal you will also inadvertently remove. Therefore, noise should be removed only when it's really important and when it's worse than the signal. Eyeblink artifacts are worse than signal, so they should be removed. Should you fret about removing 50 Hz line noise? If your analyses only go up to 40 Hz, then the answer is no. Should you reject trials or channels based on noise? It depends on how bad the noise is, what the frequency range is, whether the noise is correlated with the task, and whether the noise is in channels that you want to analyze. 

I hope that helps rather than confuses ;)   I'll finish with one observation I've made from teaching many students over the years: People new to EEG often have the idea that the data can become noise-free if they remove enough trials, interpolate enough channels, and subtract enough independent components. At the end of the day, they're left with information-less data and their results look like shit, because all of the signal has been thrown out with the proverbial bathwater. There are many sources of noise -- neural noise, cognitive noise, muscle noise, equipment noise -- but if you have a good experimental design, appropriate analyses, and enough trials, noise is not a problem.

Mike

PS: The Laplacian has been shown to topographically isolate EMG noise as well as improve SNR. You can look for the special issue on the Laplacian in EEG in the International Journal of Psychophysiology, published I think in 2015 or maybe 2014.

Raquel London

Jul 15, 2016, 9:34:34 AM
to AnalyzingNeuralTimeSeriesData
Thank you for all your help, Mike!

Raquel London

Aug 30, 2016, 6:54:21 AM
to AnalyzingNeuralTimeSeriesData
Hi Mike,

I have a question about the preprocessing order regarding ICA, marking bad electrodes, and re-referencing to the average. I was thinking about running the ICA before getting rid of bad electrodes, because I thought the ICA might help me clean up some electrodes that I might otherwise discard (is this already a bad idea?). But then I would have to do the re-referencing after ICA; is that problematic?

Thanks!
Raquel

Mike X Cohen

Aug 30, 2016, 6:58:25 AM
to analyzingneura...@googlegroups.com
Hi Raquel. By "bad" do you mean absolutely rubbish, like the electrode wasn't measuring any brain signal? Or do you mean that there is real brain signal but the electrode is also noisy or has drift? If the former, you should remove the electrode entirely before running ICA and then interpolate afterwards. If the latter, then I would try keeping it in the data and seeing whether ICA can isolate a component that accounts for the noise. In my experience, this sometimes works and sometimes doesn't. If you don't get a component that convincingly isolates the noise, then remove the electrode.

Mike





Raquel London

Aug 30, 2016, 10:23:38 AM
to AnalyzingNeuralTimeSeriesData
Hi Mike,

Thank you for your answer. I meant lots of noise but with some signal still visible. If I follow this strategy, I think I would have to re-reference to the average after the ICA; is that the correct thing to do?

Cheers,
Raquel

Mike X Cohen

Aug 30, 2016, 10:57:20 AM
to analyzingneura...@googlegroups.com
Yes, probably. If you are unsure, you might want to ask the eeglab list. There are real ICA experts on that list; I'm more of a casual ICA user. Or maybe some ICA experts on this list can offer some words of wisdom...

Mike


Raquel London

Aug 30, 2016, 11:32:01 AM
to AnalyzingNeuralTimeSeriesData
All right, I will. Thank you!

Raquel London

Oct 18, 2016, 10:26:01 AM
to AnalyzingNeuralTimeSeriesData
Hi Mike and everyone,

I've been trying many things to get nicer IC decompositions than I previously had, so I thought I'd share the steps I took, which, at least for my data, hugely improved the cleanliness of the ICs. I tested some of these steps together due to time constraints, so I can't be exactly sure how much of the improvement is due to which step. I hope it's OK to post this here. I'd love to hear any thoughts people might have on the subject.

A resource I used a lot was the pre-processing pipeline as published by Makoto Miyakoshi: https://sccn.ucsd.edu/wiki/Makoto's_preprocessing_pipeline

What I ended up doing:

- Set the eeglab preferences to double precision.
- High-pass filter at 0.5 Hz. (The eeglab people recommend 1 Hz, but I used 0.5 because I found it to be much more common in the literature on my topic.) Compared to the 0.1 Hz filter I used before, this was already a huge improvement.

For ICA I used short epochs so that no components would be wasted on artifacts outside the time window of interest. Later, I applied these weights to the dataset that was epoched with buffer zones to accommodate edge artifacts.

- Epoch to the time window of interest (no buffer zone) and do a linear baseline correction to the mean of the whole epoch. For ICA, this is preferable to correcting to a pre-stimulus baseline (see https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3062525/).
- Get rid of any super-bad channels (if the noise on a channel was such that I thought ICA might pick it up, I left it in for now).
- Average reference.
- Very thorough artifact rejection, throwing away even slightly weird trials I would normally keep. I think ICA is very good at removing repetitive artifacts but not good at removing one-of-a-kind stuff, and a big one-of-a-kind artifact will probably take up a component, which is a waste. I did have a lot of data; if you have less data per subject, you'd have to be careful to retain enough trials.
- Run ICA. When using the average reference, the rank is reduced by one, so I reduced the number of components that eeglab returns to avoid weird duplicates and noise. I did it like this (I had 68 channels): EEG = pop_runica(EEG, 'extended',1,'interupt','on','pca',67);

Here I went back to the continuous, high-pass-filtered data:

- Epoch to the time window of interest plus buffer zones and do a linear baseline correction to the mean of the whole epoch.
- Average reference.
- Apply the ICA weights to this file and do component rejection (see the sketch after this list).
- If there were any very bad channels that ICA couldn't clean, I'd go all the way back to before the ICA, get rid of the channel, and redo everything for that subject.
- Trial rejection.
- Laplacian
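
For anyone curious, the apply-the-weights step can be done by copying the decomposition from the short-epoch dataset to the buffered one. This sketch uses placeholder variable names (EEGica, EEGbuf), and both datasets must of course contain the same channels.

% Copy the ICA decomposition from the short-epoch dataset to the buffered one.
EEGbuf.icaweights  = EEGica.icaweights;
EEGbuf.icasphere   = EEGica.icasphere;
EEGbuf.icachansind = EEGica.icachansind;
EEGbuf = eeg_checkset(EEGbuf, 'ica');       % recompute icawinv and component activations
% Then inspect and reject components, e.g.:
% EEGbuf = pop_subcomp(EEGbuf, [2 7]);      % hypothetical component indices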

Cheers,
Raquel