> I have a question about performing a bonferroni correction with a
> paired samples t-test.
> In my experiment, I have measured reaction times to a sound at 6
> different points under two conditions. I have run a paired samples
> t-test to compare the means of these reaction times under the two
> conditions. However, my supervisor has told me that I need to perform a
> Bonferroni correction. As far as I am aware, I would divide .05 by 6 to
> do this.
Why do you have 6 "different points"?
Do you really have 6 hypotheses? - They are closely
connected, are they not?
Six separate tests must have less power than doing
one test on a designed contrast, or doing one test on
the "most likely point". If you specify a linear trend, or
an average score, or whatever gives a single 1-d.f. test of
the hypothesis, then you eliminate the need for a correction.
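A minimal sketch of that collapse-to-one-score idea, using hypothetical data (the sizes, means, and SDs below are purely illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 12

# Hypothetical per-subject reaction times: 6 time points x 2 conditions.
rt_a = rng.normal(400, 30, size=(n_subjects, 6))  # condition A
rt_b = rng.normal(420, 30, size=(n_subjects, 6))  # condition B

# Instead of 6 separate paired t-tests, collapse each subject's
# 6 differences into one score (here: the average difference).
avg_diff = (rt_b - rt_a).mean(axis=1)

# One 1-d.f. test on the collapsed score -- no Bonferroni needed.
t, p = stats.ttest_1samp(avg_diff, 0.0)
print(f"t = {t:.2f}, p = {p:.4f}")
```

A linear-trend contrast would work the same way: replace the plain average with a weighted sum of the 6 differences using trend weights, still yielding one score per subject and one test.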
> Do I then adjust the confidence interval accordingly and check the
> significance levels of the t-tests, or do I just check the significance
> level of the t-tests without adjusting the confidence interval and see
> whether they are more/less than my Bonferroni-adjusted alpha level?
Personally, I'm unhappy to see "adjusted CIs", for the most
part. The way that you describe the problem might serve to
justify that, but a different use of the "correction" goes like this:
The "overall test of differences" uses Bonferroni correction, or
it could be done by MANOVA. Once the overall test is rejected,
the *conclusion*, for a rather tightly-bound set of hypotheses,
is that "There is a real difference." After that, the estimation
problem is one of describing the differences, on each of the
variables. That can be conveniently done by giving the mean
differences with the ordinary, nominal 95% CIs. Carefully describe
them that way.
It could be 'fair' to mention to the reader how much wider
the 99% CI would be, as a fraction.
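That fraction is just a ratio of t critical values; a small sketch, assuming (illustratively) 11 d.f., as in a paired design with 12 subjects:

```python
from scipy import stats

df = 11  # illustrative: n = 12 subjects in a paired design

t95 = stats.t.ppf(0.975, df)  # critical value for the nominal 95% CI
t99 = stats.t.ppf(0.995, df)  # critical value for the 99% CI

# The CI half-width scales with the critical value, so the ratio
# tells the reader how much wider the 99% interval is (about 1.41x here).
print(f"99% CI is {t99 / t95:.2f}x as wide as the 95% CI")
```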
--
Rich Ulrich, wpi...@pitt.edu
http://www.pitt.edu/~wpilib/index.html
1. If there are x means, then the complete set of pairwise comparisons is
x(x - 1) / 2.
So there are already 6 tests with only 4 means; with 6 means there are
15 pairwise tests in total.
2. Point 1 leads to another fact: p < 0.0033 (0.05 / 15) may be too
stringent. That is the characteristic of Bonferroni: easy to
understand, but it gets very strict as the number of means goes up.
You may consider consulting a statistician for better guidance.
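The counting in point 1 is easy to check directly:

```python
def n_pairwise(x):
    """Number of pairwise comparisons among x means: x(x-1)/2."""
    return x * (x - 1) // 2

for x in (4, 6):
    k = n_pairwise(x)
    print(f"{x} means -> {k} tests, per-test alpha = {0.05 / k:.4f}")
# 4 means -> 6 tests,  per-test alpha = 0.0083
# 6 means -> 15 tests, per-test alpha = 0.0033
```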
It sounds like you have a 2x6 design with repeated measures on both
factors. So why not do a 2x6 repeated measures ANOVA? Then the
question becomes whether or not there is an interaction between the two
factors. If the interaction is resoundingly non-significant, you can
stop with the F-test for the main effect of condition, I should think.
If the interaction is significant (or close), then you may wish to
explore further.
By the way, RT distributions are notoriously (positively) skewed. Do
you have several RTs per condition x time cell, or only one? In areas
of cognitive psychology that use RT as a dependent measure, it is
customary to have several raw RTs per cell, and then use medians (or
trimmed means) in the ANOVA.
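To illustrate why the median or a trimmed mean is preferred for skewed RTs (the numbers below are made up; scipy's trim_mean does the trimming):

```python
import numpy as np
from scipy import stats

# Hypothetical raw RTs (ms) for one condition x time cell, with the
# long right tail typical of RT data.
raw_rt = np.array([320, 340, 355, 360, 370, 380, 390, 400, 410, 950])

print(f"mean    = {raw_rt.mean():.1f}")                  # 427.5, pulled up by the tail
print(f"median  = {np.median(raw_rt):.1f}")              # 375.0
print(f"trimmed = {stats.trim_mean(raw_rt, 0.2):.1f}")   # 375.8 (20% trimmed mean)
```

The one extreme RT drags the ordinary mean more than 50 ms above the bulk of the data, while the median and the trimmed mean stay with it; computing one such robust summary per cell and feeding those into the ANOVA is the customary practice described above.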
--
Bruce Weaver
bwe...@lakeheadu.ca
www.angelfire.com/wv/bwhomedir
Anybody got any advice?
Danny