
How to calculate the standard error of coefficient in logistic regression.


Sungjoon

Sep 17, 2011, 12:15:04 PM
I would really appreciate it if anyone could explain how to calculate
the standard error of the coefficient (0.400084) in the following very
simple example.

X Y
1 0
2 0
3 0
4 1
5 1
6 1
7 0
8 1
9 1
10 1

Logistic Regression Result

Predictor   Coef       SE Coef    z      p
Constant    -2.92651   2.060005   -1.42  0.155
X            0.662208  0.400084    1.66  0.098



Ray Koopman

Sep 17, 2011, 7:13:37 PM
The standard errors of the coefficients are the square roots of the
diagonals of the covariance matrix of the coefficients. The usual
estimate of that covariance matrix is the inverse of the negative of
the matrix of second partial derivatives of the log of the likelihood
with respect to the coefficients, evaluated at the values of the
coefficients that maximize the likelihood.
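Ray's recipe can be checked numerically on the example above. Below is a minimal sketch in plain Python (no libraries): it maximizes the log-likelihood by Newton-Raphson, then takes the standard errors as the square roots of the diagonal of the inverse of the negative Hessian. It should reproduce the Coef and SE Coef values in the posted table.

```python
import math

# Data from the original post
x = list(range(1, 11))
y = [0, 0, 0, 1, 1, 1, 0, 1, 1, 1]

# Logistic model: P(y=1 | x) = 1 / (1 + exp(-(b0 + b1*x)))
b0, b1 = 0.0, 0.0
for _ in range(25):
    p = [1.0 / (1.0 + math.exp(-(b0 + b1 * xi))) for xi in x]
    # Gradient of the log-likelihood
    g0 = sum(yi - pi for yi, pi in zip(y, p))
    g1 = sum(xi * (yi - pi) for xi, yi, pi in zip(x, y, p))
    # Negative Hessian of the log-likelihood (the information matrix)
    w = [pi * (1.0 - pi) for pi in p]
    i00 = sum(w)
    i01 = sum(wi * xi for wi, xi in zip(w, x))
    i11 = sum(wi * xi * xi for wi, xi in zip(w, x))
    det = i00 * i11 - i01 * i01
    # Newton step: beta += inverse(information) * gradient
    b0 += (i11 * g0 - i01 * g1) / det
    b1 += (-i01 * g0 + i00 * g1) / det

# Covariance matrix of the coefficients = inverse of the information
# matrix at the maximum; SEs are square roots of its diagonal.
se_b0 = math.sqrt(i11 / det)
se_b1 = math.sqrt(i00 / det)
print(b0, se_b0)  # ~ -2.9265, 2.0600
print(b1, se_b1)  # ~  0.6622, 0.4001
```

The information matrix used for the SEs comes from the last iteration, at which point the Newton step is essentially zero, so it is evaluated at the maximizing coefficients exactly as Ray describes.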

Sungjoon

Sep 21, 2011, 6:54:42 PM
Thank you!!

ako...@gmail.com

Oct 21, 2012, 6:46:38 AM
Hello, I have the same question as this post.
But I cannot understand the details of "the matrix of second partial
derivatives of the log of the likelihood with respect to the
coefficients, evaluated at the values of the coefficients that
maximize the likelihood".
Please explain that matrix in detail for this example.

Rich Ulrich

Oct 21, 2012, 7:38:30 PM
On Sun, 21 Oct 2012 03:46:38 -0700 (PDT), ako...@gmail.com wrote:

>On Sunday, September 18, 2011 8:13:37 AM UTC+9, Ray Koopman wrote:
>> On Sep 17, 9:15 am, Sungjoon <sungjoon....@gmail.com> wrote:
>>
>> > I really appreciate it if anyone could answer how to calculate the
>> > standard error of coefficient (0.400084) with the following very
>> > simple example.

[snip, example]

>> > Logistic Regression Result
>> > Predictor   Coef       SE Coef    z      p
>> > Constant    -2.92651   2.060005   -1.42  0.155
>> > X            0.662208  0.400084    1.66  0.098
>>
>> The standard errors of the coefficients are the square roots of the
>> diagonals of the covariance matrix of the coefficients. The usual
>> estimate of that covariance matrix is the inverse of the negative of
>> the matrix of second partial derivatives of the log of the likelihood
>> with respect to the coefficients, evaluated at the values of the
>> coefficients that maximize the likelihood.
>
>Hello, I have the same question as this post.
>But I cannot understand the details of "the matrix of second partial
>derivatives of the log of the likelihood with respect to the
>coefficients, evaluated at the values of the coefficients that
>maximize the likelihood".
>Please explain that matrix in detail for this example.

Here is some explanation of the Fisher information matrix,
which is what you are asking about.
http://en.wikipedia.org/wiki/Fisher_information

That won't mean much if you don't read calculus.

It is evaluated "at the values that maximize..." in
the general case because the error would take
different values at other choices of the coefficients.

--
Rich Ulrich

Chloe C

Nov 24, 2012, 4:52:52 PM
Agree. It is related to the asymptotic behavior of the estimate via the Fisher information.
log-likelihood: log(L)
parameter: theta
MLE: d log(L)/d(theta) = 0  ->  theta_hat
Fisher information: I = E[-d^2 log(L)/d(theta)^2], evaluated at theta = theta_hat
theta_hat ~ N(theta, 1/I)
That says 1/I is the asymptotic variance.
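The recipe above can be seen end-to-end in the simplest possible case: a Bernoulli(theta) sample (here, just the y values from the thread's example, ignoring x). This only illustrates the I and 1/I steps, not the full logistic-regression calculation.

```python
import math

# Simplest instance of the recipe above: a Bernoulli(theta) sample
# (the y values from the thread's example, ignoring x)
y = [0, 0, 0, 1, 1, 1, 0, 1, 1, 1]
n = len(y)

# MLE: set d log(L)/d(theta) = 0  ->  theta_hat = sample mean
theta_hat = sum(y) / n

# Fisher information I = E[-d^2 log(L)/d(theta)^2] = n / (theta*(1-theta)),
# plugged in at theta = theta_hat
info = n / (theta_hat * (1 - theta_hat))

# theta_hat ~ N(theta, 1/I), so the standard error is sqrt(1/I)
se = math.sqrt(1 / info)
print(theta_hat, se)
```

For these data theta_hat is 0.6, and sqrt(theta_hat*(1-theta_hat)/n) is the familiar SE of a proportion, which is exactly what 1/I gives here.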

marcc...@gmail.com

Dec 22, 2012, 10:21:42 AM
Hello,

sorry, I didn't get it until now.
Is the calculation of the standard error of the coefficients in binary logistic regression different from the way they are calculated in linear regression?

In linear regression the calculation is simple:
SE = sqrt[ Σ(yi - ŷi)² / (n - 2) ] / sqrt[ Σ(xi - x̄)² ]

Maybe someone can explain the example and show how it is calculated?
Kind regards!
Marc
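For what it's worth, yes, the calculations differ: the linear-regression formula above gives a very different number on this thread's data than the logistic SE of 0.400084. A quick sketch in plain Python:

```python
import math

# The x, y data from the original post, fit by ordinary least squares
# (only to show the linear-regression SE formula, not the logistic one)
x = list(range(1, 11))
y = [0, 0, 0, 1, 1, 1, 0, 1, 1, 1]
n = len(x)

xbar = sum(x) / n
ybar = sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
intercept = ybar - slope * xbar

# SE = sqrt[ Σ(yi - ŷi)² / (n - 2) ] / sqrt[ Σ(xi - x̄)² ]
resid_ss = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
se_slope = math.sqrt(resid_ss / (n - 2)) / math.sqrt(sxx)
print(se_slope)  # ~ 0.046, nowhere near the logistic SE of 0.400084
```

So the linear formula cannot be reused for logistic regression; the logistic SE comes from the information matrix described earlier in the thread.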

trinhv...@gmail.com

Mar 2, 2014, 9:35:04 PM
On Saturday, December 22, 2012 at 22:21:42 UTC+7, marcc...@gmail.com wrote:
Hello Marc,
Now I have the same question you asked, about the std. error of the coefficient in logistic regression.
Do you have the answer for this example?
If you do, can you share it with me? I still do not understand.
Thank you

trinhv...@gmail.com

Mar 2, 2014, 9:36:32 PM
On Saturday, September 17, 2011 at 23:15:04 UTC+7, Sungjoon wrote:
Hello Sungjoon,

Rich Ulrich

Mar 3, 2014, 6:09:57 PM
On Sun, 2 Mar 2014 18:36:32 -0800 (PST), trinhv...@gmail.com
wrote:

>On Saturday, September 17, 2011 at 23:15:04 UTC+7, Sungjoon wrote:
I can see that the answers given in this thread in
2011 and 2012 were abstract and depended on
knowledge of calculus.

Here is something that depends only on algebra,
and knowledge of how a 2x2 table is described in
terms of E and O (Expected and Observed) for
the more familiar Pearson chisquared.

The error can be found by inverting (in the obvious
way) the statistical test. The z in the table is the
square root of the corresponding chisquared,
which is a likelihood test. The SE can be found by
dividing the Coef by z.

See the Wikipedia entry for the G-test, which gives the
likelihood-ratio test in terms of O and E.
http://en.wikipedia.org/wiki/G-test

G = 2 Σ O ln(O/E), summed across the cells.
(Put an _i on each O and E if you want.)

I expect that this is the exact formula for what is used in
the Maximum likelihood logistic regression, though I am
not 100% sure of that.

--
Rich Ulrich

Ryan

Mar 4, 2014, 12:01:27 AM
If my math is correct, the formula for the standard error of the regression coefficient (slope) in a binomial logistic regression, where the independent variable is a binary variable (2x2 Contingency Table), should be:

SE(log OR) = sqrt[(1/n11) + (1/n12) + (1/n21) + (1/n22)]

I arrived at this formula using the delta method.
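That delta-method formula can be checked against the Hessian-based SE from earlier in the thread, since both have closed forms for a binary predictor. The counts below are made-up illustrative numbers, not from the thread's example:

```python
import math

# Made-up counts for a 2x2 table (NOT the thread's example):
# x = 0: a with y=0, b with y=1;  x = 1: c with y=0, d with y=1
a, b = 10, 5
c, d = 4, 12

# With a binary predictor, the logistic slope is the log odds ratio
slope = math.log((d / c) / (b / a))

# Delta-method SE, as in the formula above
se_delta = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)

# Hessian-based SE: Var(slope) = 1/(n0*p0*q0) + 1/(n1*p1*q1),
# where p0, p1 are the per-group proportions with y=1
n0, n1 = a + b, c + d
p0, p1 = b / n0, d / n1
se_info = math.sqrt(1 / (n0 * p0 * (1 - p0)) + 1 / (n1 * p1 * (1 - p1)))
print(se_delta, se_info)  # the two agree
```

They agree because n0*p0*(1-p0) = a*b/(a+b), whose reciprocal is 1/a + 1/b (and likewise for the other group), so the inverse-information variance telescopes to the sum of reciprocal cell counts.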

Ryan

Rich Ulrich

Mar 4, 2014, 12:03:27 AM
Oops! My mistake -- I looked at that row of 0-1 data
and mistakenly took it that I could give an answer for
an example with a 0-1 predictor.

Sorry -- The above probably *is* an answer for that
very selected case of a logistic regression with a single
dichotomous predictor. And maybe it does suggest to
the reader something about how the likelihood solution
is going to differ from a least-squares solution... taking
logs is not quite as elementary as taking squares of
differences.

So - I don't know whether there is any simple, reduced
form for the standard error of a single predictor in
a logistic regression solved by ML. The fact that I
don't remember seeing one -- and that the questions from
2011 and 2012 did not elicit one -- makes me suspect that
there isn't a form that reduces so far.



--
Rich Ulrich

Bruce Weaver

Mar 4, 2014, 6:55:52 AM