Bonferroni in logistic and multiple regression?


Mitchell Maltenfort

Jan 6, 2013, 5:01:05 PM
to MedS...@googlegroups.com

My understanding had been that you do not need Bonferroni for tests of significance of individual parameters in a multiple regression.  I certainly do not recall seeing such an adjustment in any stats text.  Can anyone confirm or correct?  Thanks.

Thompson,Paul

Jan 6, 2013, 5:03:55 PM
to meds...@googlegroups.com

I’d generally agree with that. Is this in reference to a review of a paper, or something like that? Many reviewers have excessively rigid views of the Bonferroni adjustment. It’s not a strict formula, but rather a method of adjustment.
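
(For readers skimming the thread: in its basic form, Bonferroni just multiplies each p-value by the number of tests. A minimal sketch in Python; the p-values are hypothetical.)

```python
def bonferroni(p_values):
    """Bonferroni adjustment: multiply each p-value by the number
    of tests performed, capping the result at 1.0."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Hypothetical raw p-values for five regression coefficients.
print(bonferroni([0.001, 0.012, 0.030, 0.045, 0.200]))
```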

 

If the question has arisen from a review, I’d be happy to look at it offline if you would like some confidential advice.

--
To post a new thread to MedStats, send email to MedS...@googlegroups.com .
MedStats' home page is http://groups.google.com/group/MedStats .
Rules: http://groups.google.com/group/MedStats/web/medstats-rules




Mitchell Maltenfort

Jan 6, 2013, 5:17:17 PM
to meds...@googlegroups.com

That is precisely how it came up. 

Is any sort of adjustment necessary? 

Thompson,Paul

Jan 6, 2013, 5:17:50 PM
to meds...@googlegroups.com

Can’t say without seeing the comment.

Thompson,Paul

Jan 6, 2013, 5:19:04 PM
to meds...@googlegroups.com

Are you the statistician on the paper? If not, is there a statistician?

 


Mitchell Maltenfort

Jan 6, 2013, 5:22:39 PM
to meds...@googlegroups.com

Reading it again, I think I see the reviewer's point.  This was an exploratory analysis without a priori expectations.

Thompson,Paul

Jan 6, 2013, 5:26:24 PM
to meds...@googlegroups.com

Traditionally, I would interpret a regression equation (logistic, continuous, etc.) by first looking at some overall measure of significance, and then at the significance of individual coefficients. Generally, I would say the overall p-value for the equation is the more important one, and that it provides license to interpret the individual coefficients without adjustment. Things become a bit more complicated when stepwise methods are used, of course.
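
(As a concrete sketch of that two-step reading for a logistic model: compare the fitted model against the intercept-only model with a likelihood-ratio test before looking at individual coefficients. The log-likelihood values below are made up for illustration; for even degrees of freedom the chi-square tail probability has a closed form, so this needs only the standard library.)

```python
import math

def chi2_sf_even_df(x, df):
    """Chi-square survival function P(X > x) for even df, via the
    Poisson tail identity (stdlib only)."""
    assert df % 2 == 0 and df > 0
    half = x / 2.0
    return math.exp(-half) * sum(half ** k / math.factorial(k)
                                 for k in range(df // 2))

def overall_lr_test(ll_full, ll_null, df):
    """Likelihood-ratio test of the full model against the
    intercept-only model."""
    stat = 2.0 * (ll_full - ll_null)
    return stat, chi2_sf_even_df(stat, df)

# Hypothetical log-likelihoods for a logistic model with 4 predictors.
stat, p_overall = overall_lr_test(ll_full=-210.3, ll_null=-225.8, df=4)
if p_overall < 0.05:
    # Overall model is significant; now read the individual
    # coefficient p-values at the same unadjusted alpha.
    pass
```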

Mitchell Maltenfort

Jan 6, 2013, 5:29:03 PM
to meds...@googlegroups.com

I am, yes.

Thompson,Paul

Jan 6, 2013, 5:31:42 PM
to meds...@googlegroups.com

Kirk’s Experimental Design provides a good discussion of correction methods. I recently had a reviewer make a comment reflecting, again, a very severe and strict view of the correction. It’s not a formula; it’s a general guideline (IMO).

Mitchell Maltenfort

Jan 6, 2013, 5:35:24 PM
to meds...@googlegroups.com

I will look for that one.

I hope I do not seem under-clued.  I have read Faraway, Harrell, and others, and I do not recall seeing any mention of multiple-comparison adjustments for multiple regression coefficients.

Thompson,Paul

Jan 6, 2013, 5:37:59 PM
to meds...@googlegroups.com

That’s because it is not a reasonable idea. It’s part of creeping incremental rigidity. Many people believe that there is no problem with becoming overly strict; of course, it runs the risk of Type II error. The ideal is an appropriate medium level of strictness: if the overall model is significant at the stated alpha, then individual coefficients can be interpreted at that level as well.
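
(The rule Paul describes, sometimes called "protected" testing, is simple enough to write down. A sketch; the p-values are hypothetical.)

```python
def protected_tests(p_overall, p_coefs, alpha=0.05):
    """Protected testing: interpret individual coefficients at the
    stated alpha only if the overall model test clears it."""
    if p_overall >= alpha:
        return [False] * len(p_coefs)
    return [p < alpha for p in p_coefs]

# Overall model significant, so each coefficient is judged at 0.05.
print(protected_tests(0.003, [0.01, 0.04, 0.30]))  # -> [True, True, False]
```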

Mitchell Maltenfort

Jan 6, 2013, 5:39:46 PM
to meds...@googlegroups.com

Thank you!  That had been how I understood and used the tools.

Kornbrot, Diana

Jan 7, 2013, 7:19:46 AM
to meds...@googlegroups.com
If there are many variables and/or hypotheses, you cannot ignore the problem,
BUT what you need are standard procedures for false discovery rates:
Genovese, C. R., K. Roeder, et al. (2006). "False discovery control with p-value weighting." Biometrika 93(3): 509-524.
    We present a method for multiple hypothesis testing that maintains control of the false discovery rate while incorporating prior information about the hypotheses. The prior information takes the form of p-value weights. If the assignment of weights is positively associated with the null hypotheses being false, the procedure improves power, except in cases where power is already near one. Even if the assignment of weights is poor, power is only reduced slightly, as long as the weights are not too large. We also provide a similar method for controlling false discovery exceedance.

Or google false discovery rates to get other key references
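
(The standard Benjamini-Hochberg step-up procedure for FDR control is short enough to sketch in Python. This is the unweighted version, not the p-value-weighted variant of the paper above; the p-values are hypothetical.)

```python
def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure: return a True/False
    rejection decision for each p-value, controlling the false
    discovery rate at level q."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k with p_(k) <= k * q / m.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank * q / m:
            k_max = rank
    # Reject the k_max smallest p-values.
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.60]))
# -> [True, True, False, False, False]
```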
Best
Diana




Emeritus Professor Diana Kornbrot
email:  d.e.ko...@herts.ac.uk    
 web:    http://dianakornbrot.wordpress.com/
Work
Department of Psychology
School of Life and Medical Sciences
University of Hertfordshire
College Lane, Hatfield, Hertfordshire AL10 9AB, UK
voice:   +44 (0) 170 728 4626


Mitchell Maltenfort

Jan 7, 2013, 9:49:27 AM
to meds...@googlegroups.com
 
Thanks for the interesting reference, Diana. 
 
Fortunately (?) the data set I was working with is much smaller than genetic analyses, and usually I see these multivariate tables published without mention of adjustment. I asked a colleague, and he didn't recall seeing Bonferroni adjustments for logistic regressions either.
 
But I was going through my statistical texts this morning looking for anything I had missed.  The only detailed consideration of Bonferroni or other adjustment for explanatory variables within a multivariate fit was in the 3rd edition of Logistic Regression (Kleinbaum and Klein) -- so I went back and checked the 2nd edition, which had only a cursory mention of the problem of model selection when adding and removing variables.
 
Well, that's the fun of being a statistician -- discovering trends!
 

Bruce Weaver

Jan 7, 2013, 2:41:37 PM
to meds...@googlegroups.com, MedS...@googlegroups.com
Here are a couple of other points, in addition to those that others have made.

1. Very frequently, some of the variables in the model are there only to control for potential confounding.  For those types of variables, you probably don't care (very much) about whether they are statistically significant or not.  (The same point might apply to variables that are only of secondary interest.)

2. You mentioned in a later post that this is a genetic study.  I don't know much about genetics; but IF it is a case where the design is experimental, you could potentially have orthogonality that is not present in purely observational data.  When you have orthogonality, correction for multiple comparisons is generally seen as unnecessary (or at least less necessary), because you won't be making the same mistake repeatedly, which can happen (to some extent) when you lack orthogonality.
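
(A toy illustration of the orthogonality point, assuming an effect-coded, balanced 2x2 factorial design -- not from the thread: the two design columns have zero dot product, so each factor's estimate is unaffected by whether the other factor is in the model.)

```python
# Effect-coded columns for two factors in a balanced 2x2 design.
factor_a = [+1, +1, -1, -1]
factor_b = [+1, -1, +1, -1]

# Orthogonality: the columns are uncorrelated by construction.
dot = sum(a * b for a, b in zip(factor_a, factor_b))
print(dot)  # -> 0
```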

HTH.