
MLE vs MME (or MOM)


USER

Mar 13, 2009, 3:30:30 PM
Hello all,

I have been doing some self-study on point estimation, comparing the
method of moments with maximum likelihood.

I have two questions I hope to get some help with.

Question 1:
I would like to know if anyone can give an example where the MLE (maximum
likelihood estimate) has variance greater than that of the MME (method
of moments estimate). I have read in a book that this is possible,
although no book I have seen gives an example of it.


Question 2:
I noticed that when using the MME for the well-known example of the
continuous uniform distribution U(0, theta) [with theta as the unknown
parameter], matching the first moment gives

\theta-hat = 2*x-bar,

but we can use the second moment to get

\theta-hat = sqrt( 3*M_2 ), with M_2 as the second sample moment.

I generalized this to

\theta_r-hat = ( (r+1) * M_r )^(1/r),

with M_r as the r^{th} sample moment.
( sample moment: M_r = (1/n) * sum_{i=1}^n x_i^r )
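For reference, the moment-matching step behind this generalization is just the standard U(0, theta) moment calculation:

```latex
E[M_r] = \frac{1}{\theta}\int_0^{\theta} x^r \, dx = \frac{\theta^r}{r+1},
\qquad \text{so solving } M_r = \frac{\hat\theta^{\,r}}{r+1}
\text{ gives } \hat\theta_r = \bigl((r+1)\,M_r\bigr)^{1/r}.
```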

This seems(?) to yield unbiased estimates of \theta (for any positive
integer r), but with variance that decreases as r increases, although
I have not found a formula for the variance of \theta_r-hat.

Is this the case? This seems counterintuitive in that we get "better"
estimates by using larger sample moments, i.e. somewhat more work
(calculations) yields much better estimates (unbiased, very small variance).
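A quick Monte Carlo sketch of this (Python; the values of theta, n, and the trial count below are arbitrary choices, not from the thread). In runs like this, r = 1 comes out unbiased and the variance does shrink as r grows; for r > 1, though, taking the r-th root introduces a small downward bias (Jensen's inequality), so "unbiased for every r" appears to hold only approximately:

```python
import random

def simulate_theta_r(theta=1.0, n=20, r_values=(1, 2, 4, 8),
                     trials=20000, seed=42):
    """Estimate the mean and variance of theta_r-hat = ((r+1)*M_r)**(1/r)
    from repeated samples of size n drawn from U(0, theta)."""
    rng = random.Random(seed)
    results = {}
    for r in r_values:
        estimates = []
        for _ in range(trials):
            xs = [rng.uniform(0.0, theta) for _ in range(n)]
            m_r = sum(x ** r for x in xs) / n          # r-th sample moment
            estimates.append(((r + 1) * m_r) ** (1.0 / r))
        mean = sum(estimates) / trials
        var = sum((e - mean) ** 2 for e in estimates) / trials
        results[r] = (mean, var)
    return results

if __name__ == "__main__":
    for r, (mean, var) in simulate_theta_r().items():
        print(f"r={r}: mean={mean:.4f}, var={var:.5f}")
```

A delta-method calculation suggests Var(\theta_r-hat) is roughly theta^2 / (n*(2r+1)), which matches the decreasing pattern the simulation shows.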


I hope the notation is understandable and that someone can give answers
to these questions.

Thank you for any help,
A stats student/teacher

Jack Tomsky

Mar 13, 2009, 8:37:21 PM
> Hello all,
>
> I have been doing some self-study in point estimates
> and looking at the
> method of moments vs. maximum likelihood.
>
> I have two questions I hope to get some help with.
>
> Question 1:
> I would like to know if anyone can give an example
> where the MLE (max.
> likelihood estimate) has variation that is greater
> than the MME (method
> of moments estimate). I have read in a book that this
> is possible,
> although no book seems to give an example of this.
>
>


Here's a simple example. Suppose you have a single observation x from

x ~ N(mu, 1).

Let theta = mu^2.

Then, by the invariance property of maximum likelihood, the MLE of theta is

MLE = x^2.

For MOM, set

E(x^2) = mu^2 + 1 = theta + 1.

Then

MOM = x^2 - 1.

While the MLE and MOM have the same variance, MOM has the smaller
mean-squared error because it is an unbiased estimate of theta, whereas
the MLE carries a bias of E(x^2) - theta = 1:

MSE(MLE) = Var(MLE) + 1
MSE(MOM) = Var(MOM) = Var(MLE).
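A small simulation sketch of this example (Python; the choice mu = 2 and the trial count are arbitrary), comparing the empirical MSEs of x^2 and x^2 - 1 as estimates of theta = mu^2:

```python
import random

def compare_mse(mu=2.0, trials=200_000, seed=0):
    """Empirical MSE of MLE = x^2 and MOM = x^2 - 1 for theta = mu^2,
    using a single observation x ~ N(mu, 1) per trial."""
    rng = random.Random(seed)
    theta = mu ** 2
    se_mle = se_mom = 0.0
    for _ in range(trials):
        x = rng.gauss(mu, 1.0)
        se_mle += (x ** 2 - theta) ** 2        # squared error of the MLE
        se_mom += (x ** 2 - 1.0 - theta) ** 2  # squared error of the MOM
    return se_mle / trials, se_mom / trials

if __name__ == "__main__":
    mse_mle, mse_mom = compare_mse()
    # the difference should come out near 1, matching the bias^2 term
    print(f"MSE(MLE)={mse_mle:.3f}, MSE(MOM)={mse_mom:.3f}, "
          f"difference={mse_mle - mse_mom:.3f}")
```

For x ~ N(mu, 1), Var(x^2) = 4*mu^2 + 2, so with mu = 2 the two MSEs should land near 18 and 19.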

Jack
www.tomskystatistics.com
