
Eta-squared in linear mixed models


Naepsy

Apr 29, 2009, 5:13:15 AM
Hi. In SPSS it is not possible to get any effect-size measures, such as
eta-squared, for linear mixed models. Note: I'm talking about linear
mixed models (the MIXED procedure), not mixed-model GLM, linear
regression, etc., where such an option exists. Do you have any
suggestions on how one could calculate this by hand from the
information that SPSS can produce?

Bruce Weaver

Apr 29, 2009, 10:18:26 PM

See if this helps.

   http://ssc.utexas.edu/consulting/answers/hlm/hlm4.html

Ryan

May 1, 2009, 8:01:51 AM

In addition to what Bruce recommended, remember that you can always
compute "raw" effect sizes. For example, if you're comparing the
averages of two independent samples, you might do the following:

pred_avg_grp_1 - pred_avg_grp_2

If it's a clinical-trial type of design (group x time), the raw effect
size could be:

(pred_avg_grp_1_Posttreatment - pred_avg_grp_1_Baseline) -

(pred_avg_grp_2_Posttreatment - pred_avg_grp_2_Baseline)

You can get the fixed predicted values from the drop-down menus when
setting up your model, or you can just add this to your syntax:

/SAVE=FIXPRED
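
For anyone who wants to see that in context, here is a minimal sketch of
what such a run might look like in full (purely illustrative: Y, GROUP,
and SUBJ are hypothetical names for the outcome, the grouping factor,
and the subject ID, assuming repeated observations nested within
subjects; adapt to your own data):

* Hypothetical example: Y = outcome, GROUP = treatment factor, SUBJ = subject ID.
MIXED Y BY GROUP
/FIXED=GROUP | SSTYPE(3)
/METHOD=REML
/PRINT=SOLUTION
/EMMEANS=TABLES(GROUP)
/RANDOM=INTERCEPT | SUBJECT(SUBJ) COVTYPE(VC)
/SAVE=FIXPRED.

The EMMEANS table gives the model-based group averages directly, and
FIXPRED adds the fixed predicted values to the dataset so you can form
the raw differences described above.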

HTH

Ryan

Naepsy

May 4, 2009, 5:01:32 AM
Bruce and Ryan, thanks for your comments. However, I couldn't see how to
apply that link to my problem, and I would like a more sophisticated
effect-size measure than just a difference in group averages.

On Apr 30, 5:18 am, Bruce Weaver <bwea...@lakeheadu.ca> wrote:
> Naepsy wrote:
> > Hi. In SPSS it is not possible to get any effect size parameters for
> > linear mixed models such as eta-squared. Note: I'm talking about
> > linear mixed models, not mixed model GLM, linear regression etc where
> > such option exists. Do you have any suggestions how one could
> > calculate this by hand from the information that SPSS can produce?
>
> See if this helps.
>
>    http://ssc.utexas.edu/consulting/answers/hlm/hlm4.html
>
> --
> Bruce Weaver

> bwea...@lakeheadu.ca
> http://sites.google.com/a/lakeheadu.ca/bweaver/

Ryan

May 4, 2009, 8:13:32 AM

The only formula I learned for computing a pseudo R-square in linear
mixed modeling was:

R-square = 1 - [(Residual_ConditionalModel + Intercept_ConditionalModel) /
               (Residual_UnconditionalModel + Intercept_UnconditionalModel)]

which answers the question of by what proportion the conditional (full)
model reduces error in predicting the outcome, compared to the
unconditional (intercept-only) model.
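
For instance (numbers made up purely for illustration): if the
unconditional model shows Residual = .80 and Intercept = .40, and the
conditional model shows Residual = .60 and Intercept = .30, then
R-square = 1 - [(.60 + .30)/(.80 + .40)] = 1 - .75 = .25, i.e. the full
model reduces prediction error by about 25%.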
---------
I think there is a way to compute Cohen's D when comparing averages of
two independent samples in linear mixed modeling in SAS, but I've
never actually done it. If you have SAS, this link might help you:

http://groups.google.com/group/comp.soft-sys.sas/browse_thread/thread/c1d7e5b5caf0429a/0ff38bc364319048?hl=en&lnk=gst&q=mixed+cohen%27s+d+effect#0ff38bc364319048

--------
The following article explains how to compute R-Square in different
types of linear mixed modeling designs:

Feng, Z., Diehr, P., Peterson, A., McLerran, D. Annual Review of
Public Health, 2001, 22:167-187.

The following book is where I found the pseudo-R-Square formula I
wrote above:

http://www.amazon.com/Multilevel-Analysis-Applied-Research-Methodology/dp/159385191X
--------

Ryan

Naepsy

May 4, 2009, 10:50:43 AM
OK, so in order to find the effect size of a variable, I need to
calculate the error (residual) variance and the intercept variance for
both the unconditional (without the variable) model and the conditional
(full) model. So: how do I find these variances in SPSS?

If it has to be done manually, I can save residuals with /SAVE=RESID
and calculate their variance - but what about the intercept variance?

> http://groups.google.com/group/comp.soft-sys.sas/browse_thread/thread...


>
> --------
> The following article explains how to compute R-Square in different
> types of linear mixed modeling designs:
>
> Feng, Z., Diehr, P., Peterson, A., McLerran, D. Annual Review of
> Public Health, 2001, 22:167-187.
>
> The following book is where I found the pseudo-R-Square formula I
> wrote above:
>

> http://www.amazon.com/Multilevel-Analysis-Applied-Research-Methodolog...
> --------
>
> Ryan

Ryan

May 5, 2009, 9:47:29 AM


Let's assume you have a model with a random slope and random intercept
term. You need to make all slopes fixed, and keep the intercept as
random.

Run the model with the categorical independent variable:

MIXED Y BY X
/FIXED=X | SSTYPE(3)
/METHOD=ML
/PRINT=SOLUTION TESTCOV
/RANDOM=INTERCEPT | SUBJECT(SUBJ) COVTYPE(VC).

and then run the model with the intercept only.

MIXED Y
/FIXED=| SSTYPE(3)
/METHOD=ML
/PRINT=SOLUTION TESTCOV
/RANDOM=INTERCEPT | SUBJECT(SUBJ) COVTYPE(VC).

------
In your output, go to the "Estimates of Covariance Parameters" table.
That is where you'll find the values necessary to compute this pseudo
R-square.
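
If you prefer to do that last bit of arithmetic in syntax rather than by
hand, a throwaway sketch like the one below works. The four values are
placeholders only; substitute the Residual and Intercept estimates read
from the two "Estimates of Covariance Parameters" tables (and run it in
a fresh session or a new dataset window, since DATA LIST replaces the
active file):

* Placeholder values only -- replace with the estimates from your own output.
DATA LIST FREE / res_cond int_cond res_uncond int_uncond.
BEGIN DATA
0.60 0.30 0.80 0.40
END DATA.
COMPUTE pseudo_rsq = 1 - (res_cond + int_cond) / (res_uncond + int_uncond).
EXECUTE.
LIST.

With these placeholder numbers the result is .25.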

Ryan

Naepsy

May 6, 2009, 12:27:03 PM
Thanks again, but I'm afraid I still can't find the needed information.
Looking at "Estimates of Covariance Parameters", I get only three
parameters, something like the output below. Even worse, when I remove
one variable in the complex design I'm using, the intercept variance
stays practically the same.

Repeated Measures                  AR1 diagonal   ,1701
                                   AR1 rho        ,6542
Intercept + BLscl [subject=subj]   Variance       ,0533

Ryan

May 6, 2009, 12:53:01 PM

I need to know a bit more about your design. Now that I see you're
using an AR1 variance-covariance structure, that tells me you have more
than two time points. Could you tell me the following:

# groups
# covariates (if any)
# time points
type of outcome
MIXED syntax you're using

schmitz...@gmail.com

Jul 23, 2013, 7:06:29 AM
Hi,

I just ran into the exact same problem and was wondering if you could help me?

I'm analyzing a 2 x 2 design with one covariate and 5 time points - this is my MIXED syntax:

MIXED ITI BY pbgp sex WITH AGE
/CRITERIA=CIN(95) MXITER(100) MXSTEP(10) SCORING(1) SINGULAR(0.000000000001) HCONVERGE(0,
ABSOLUTE) LCONVERGE(0, ABSOLUTE) PCONVERGE(0.000001, ABSOLUTE)
/FIXED=pbgp sex pbgp*sex AGE | SSTYPE(3)
/METHOD=REML
/PRINT=DESCRIPTIVES
/REPEATED=time | SUBJECT(ID) COVTYPE(DIAG).

Is there any way to calculate effect sizes?

peter....@gmail.com

Nov 18, 2014, 5:34:26 AM
I've been struggling with this topic too, and found a way to calculate an omega squared on http://www.let.rug.nl/~heeringa/statistics/stat03_2013/lect16.pdf

The method is based on work by Xu (2003), "Measuring explained
variation in linear mixed effects models," Statistics in Medicine,
22:3527-3541. See http://onlinelibrary.wiley.com/doi/10.1002/sim.1572/pdf.

In the example posed by schmitz, the way to calculate the effect size of the ITI effect would be as follows:

First, run the analysis and save the residuals:
MIXED ITI BY pbgp sex WITH AGE
/CRITERIA=CIN(95) MXITER(100) MXSTEP(10) SCORING(1) SINGULAR(0.000000000001) HCONVERGE(0,
ABSOLUTE) LCONVERGE(0, ABSOLUTE) PCONVERGE(0.000001, ABSOLUTE)
/FIXED=pbgp sex pbgp*sex AGE | SSTYPE(3)
/METHOD=REML
/PRINT=DESCRIPTIVES
/REPEATED=time | SUBJECT(ID) COVTYPE(DIAG)
/SAVE=RESID.

Rename the RESID_1 variable to something like Residuals_full_model.

Next, run the same analysis, but now without the variable you are interested in (in this case pbgp):

MIXED ITI BY sex WITH AGE
/CRITERIA=CIN(95) MXITER(100) MXSTEP(10) SCORING(1) SINGULAR(0.000000000001) HCONVERGE(0,
ABSOLUTE) LCONVERGE(0, ABSOLUTE) PCONVERGE(0.000001, ABSOLUTE)
/FIXED=sex AGE | SSTYPE(3)
/METHOD=REML
/PRINT=DESCRIPTIVES
/REPEATED=time | SUBJECT(ID) COVTYPE(DIAG)
/SAVE=RESID.

Again, rename the residuals variable to something like Residuals_without_pbgp.

Next, get the variances of those residuals from the Descriptives menu.
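
In syntax form, the renaming and variance steps might look like the
sketch below (this assumes each run saved its residuals as RESID_1
before the first one was renamed; the saved variable's suffix can
differ, so check the actual name in your own file):

RENAME VARIABLES (RESID_1 = Residuals_without_pbgp).
DESCRIPTIVES VARIABLES=Residuals_full_model Residuals_without_pbgp
/STATISTICS=VARIANCE.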

Now, omega squared = 1 - (variance(Residuals_full_model) / variance(Residuals_without_pbgp)).
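
For example (made-up numbers): if variance(Residuals_full_model) = .039
and variance(Residuals_without_pbgp) = .040, then
omega squared = 1 - (.039/.040) = .025.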

The way I interpret it is that omega squared is a proportion of explained variance (so an omega squared of 0.025 means 2.5% explained variance). I'm happy to receive comments on this method, because I have not seen it used in any published paper, nor is it widely explained anywhere (despite the fact that many people seem to be working with linear mixed models in SPSS).

Anyway, I hope this helps.

peter....@gmail.com

Nov 18, 2014, 5:36:42 AM
Sorry, instead of "the way to calculate the effect size of the ITI effect" it should of course be "the way to calculate the effect size of the pbgp effect"

Rich Ulrich

Nov 18, 2014, 4:45:31 PM
On Tue, 18 Nov 2014 02:34:24 -0800 (PST), peter....@gmail.com
wrote:
I did not look at the article you cited.

The Wikipedia article says that omega squared is a version
of eta squared that is less biased --
http://en.wikipedia.org/wiki/Effect_size#Eta-squared.2C_.CE.B72
The section on omega squared, immediately after the short one on
eta-squared, starts off with a link to "adjusted R2", which is
something that accounts for overfitting; your formula does not.

Your computation seems to be a straightforward computation
of eta-squared in which what is taken as the Total is the model prior
to the new variable; that seems okay. But there is nothing
resembling the term in the Wikipedia formula that subtracts an
element from the numerator for overfitting, or adds one to the denominator.

Offhand, I don't remember ever seeing omega-squared before,
and I almost never ran into eta-squared, which I probably never
used unless someone forced me into it.

--
Rich Ulrich

ER

Jan 21, 2016, 5:50:29 PM
Does anyone know if there are any updates for calculating the effect size for linear mixed models? The original link to lecture 16 posted above doesn't work. This method is simple enough, but I'm not sure what the source is or if it's possible to publish using it.

Bruce Weaver

Jan 21, 2016, 9:52:35 PM
On 21/01/2016 5:50 PM, ER wrote:
>
> Does anyone know if there are any updates for calculating the effect size for linear mixed models? The original link to lecture 16 posted above doesn't work. This method is simple enough, but I'm not sure what the source is or if it's possible to publish using it.
>

Back up a couple of steps to here:

http://www.let.rug.nl/~heeringa/statistics/


--
Bruce Weaver
bwe...@lakeheadu.ca
http://sites.google.com/a/lakeheadu.ca/bweaver/Home

Ryan

Jan 21, 2016, 10:59:56 PM
Much has been written on calculating various types of standardized effect sizes from linear mixed models. A simple Google search should produce many resources. Moreover, there are chapters/sections in linear mixed model books that cover the topic of standardized effect sizes. The standardized effect sizes you might consider depend on what you want and the actual model. Finally, you can certainly publish effect sizes calculated from linear mixed models. Search the scientific literature and you will find dozens and dozens of examples.

Ryan

ER

Jan 22, 2016, 2:17:06 AM
Ryan, could you cite a few books that discuss effect sizes for mixed models? I am not a statistician, so mathematical formulas are often not helpful. I have looked in West's and Heck's mixed-model books, but neither addresses effect sizes.

I am looking for practical calculation advice for SPSS. I am aware of the papers discussing effect sizes, such as those by Nakagawa and Xu (referenced in the lecture above), but I didn't understand how the calculations could be done until I saw Peter's response above, which I'm assuming is the correct interpretation. The only other helpful article I've found is based on calculating the intraclass correlation (http://www.unt.edu/rss/class/Jon/SPSS_SC/Module9/M9_LMM/SPSS_M9_LMM.htm). That's about it. So any practical citations would be very helpful, as it seems the matter of effect sizes for LMMs is far from settled. I know the Selya paper describes a specific procedure that calculates Cohen's f^2, but that's limited to SAS.

References:

Heck, R. H., et al. (2013). Multilevel and Longitudinal Modeling with IBM SPSS. Taylor & Francis.

Nakagawa, S., et al. (2013). "A general and simple method for obtaining R2 from generalized linear mixed-effects models." Methods in Ecology and Evolution, 4(2), 133-142.

Selya, A. S., et al. (2012). "A practical guide to calculating Cohen's f2, a measure of local effect size, from PROC MIXED." Frontiers in Psychology, 3, 111.

West, B. T., et al. (2014). Linear Mixed Models: A Practical Guide Using Statistical Software (2nd ed.). Taylor & Francis.

Xu, R. (2003). "Measuring explained variation in linear mixed effects models." Statistics in Medicine, 22(22), 3527-3541.

Ryan

Jan 22, 2016, 11:43:28 AM
One book that immediately comes to mind is Hierarchical Linear Models by Raudenbush and Bryk. I do not think, however, that you need to buy this book to calculate a standardized effect size from a mixed model. There are plenty of resources online and in articles. In this very thread, I provided a way to calculate a standardized effect size accompanied by SPSS code. That approach is still commonly used.

I decided to look up the article on how to calculate f^2 using the mixed procedure in SAS:

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3328081/

The approach in the article above is straightforward, and it can certainly be used with the MIXED procedure in SPSS.
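
If I recall the article correctly, the core of it is Cohen's formula
f^2 = (R2_full - R2_reduced) / (1 - R2_full), where the two R2 values
come from fitting the model with and without the predictor of interest
(against a common baseline) -- much like the with/without comparisons
described earlier in this thread -- so nothing about the calculation
itself is SAS-specific.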

Unless you provide specifics about your mixed model and exactly what you are trying to obtain, we can only make general statements.

Ryan

Rich Ulrich

Jan 22, 2016, 12:56:25 PM
On Fri, 22 Jan 2016 08:43:21 -0800 (PST), Ryan
<ryan.and...@gmail.com> wrote:

>One book that immediately comes to mind is Hierarchical Linear Models by Raudenbush and Bryk. I do not think, however, that you need to buy this book to calculate a standardized effect size from a mixed model. There are plenty of resources online and in articles. In this very thread, I provided a way to calculate a standardized effect size accompanied by SPSS code. That approach is still commonly used.
>
>I decided to look up the article on how to calculate f^2 using the mixed procedure in SAS:
>
>http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3328081/
>
>The approach in the article above is straightforward, and certainly be used with the mixed procedure in SPSS.
>
>Unless you provide specifics about your mixed model and exactly what you are trying to obtain, we can only make general statements.

Also, even with specifics, you might find more than one opinion.

Mixed models sometimes leave room for more than one opinion
as to what term should be used for the actual F-test, if you
do not test over the simple residual. I do assume that the proper
effect size should be a transformation of the proper F-test, but
that leaves the question of what the proper F is.

You can always (or usually?) get some "variance accounted for"
by subtraction. The simplest R-sq just uses that, along with the
Total SS. Other computations that are relatively simple substitute some
alternative for the Total SS when, for instance, the between-subjects
variance is not relevant to the within-subject effect size.
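
(In symbols: R-sq = (SS_Total - SS_Residual) / SS_Total, with some
other baseline sum of squares taking the place of SS_Total when
between-subjects variation is not the relevant yardstick.)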

Intraclass r's use a term for the Expected Value of variance terms,
so they are potentially less biased by irrelevant design features
such as a total N that is not large compared to the number of groups.
And their computation is more complicated.
(I'm thinking of how the conventional Adjusted R-squared differs
from the raw R-squared.)

--
Rich Ulrich

ER

Jan 25, 2016, 1:08:22 PM
I didn't find Raudenbush that understandable, at least not in a way that lets you use it as a reference just for calculating LMM effect sizes. In any case, maybe there is enough information in this thread to calculate the effect size in one of several ways, so I'm trying to figure that out.

In my design I have 23 repeated measures (SI) and 4 groups (Dose). Here is the syntax:

MIXED aPS_aFV_mean BY Dose SI
/CRITERIA=CIN(95) MXITER(200) MXSTEP(20) SCORING(1) SINGULAR(0.000000000001) HCONVERGE(0, ABSOLUTE) LCONVERGE(0, ABSOLUTE) PCONVERGE(0.000001, ABSOLUTE)
/FIXED=Dose SI Dose*SI | SSTYPE(3)
/METHOD=REML
/PRINT=SOLUTION TESTCOV
/REPEATED=SI | SUBJECT(Subject) COVTYPE(ARH1).

The Estimates of Covariance Parameters table has 23 estimates (one for each repeated measure) plus one more for the ARH1 rho. Does the method you suggested, which I think is based on the intraclass correlation, work for this case? Or does it have certain limitations? From what I understood, that effect size would be called "pseudo R sq," and I would have to add up all 24 estimates, then repeat the analysis without Dose, which would give the R sq for Dose. But then to get the R sq for SI, I would have to drop the repeated measures altogether.

Based on this thread, it seems there are at least three methods of calculating an LMM effect size: omega sq, pseudo R sq, and Cohen's f sq. It would be rather helpful if we could briefly summarize whether each method has specific limitations and/or what the advantages of each method are over the others in relation to mixed models. I know that for ANOVAs it is recommended to use omega squared, since it's less biased than partial eta squared (which is upwardly biased) and can be calculated from the SPSS ANOVA output. So if a publication used both ANOVA and mixed models, it would probably make sense to stick to omega squared in order to report the same measure of effect size for both tests.

eca...@gmail.com

Apr 6, 2016, 5:49:48 AM
Not sure if anyone is still reading this -- but if using the omega squared method recommended by Peter (i.e., Xu, 2003), how would one handle an interaction term? E.g., when running a model minus a main effect I can see how this works, but I am confused about how to generate an omega squared for an interaction.

Thanks!!

eca...@gmail.com

Apr 8, 2016, 4:10:33 AM

nathant...@gmail.com

Jun 30, 2016, 3:59:37 AM
Hi everyone, this thread seems to cover methods for calculating an effect size for the overall model fit in LMM, but I'm wondering what method to use specifically for pairwise comparisons.

My data involve small sample sizes, and usually it will be a within-subjects or repeated-measures design.

As an example, I've run a simple LMM on a single sample of subjects with only one fixed effect (5 conditions), where the subject intercept and slope are random effects. If I wanted to examine the effect size for a specific pairwise comparison, is it simply a matter of using omega squared (which is apparently less biased than eta squared for small sample sizes)?

Rich Ulrich

Jun 30, 2016, 6:38:15 PM
On Thu, 30 Jun 2016 00:59:34 -0700 (PDT), nathant...@gmail.com
wrote:

>Hi everyone, in this thread there looks to be methods to calculate effect size for the overal model fit in LMM, but I'm wondering about what method to use specifically for pairwise comparisons?
>
>My data uses small sample sizes and usually it will be a within-subjects or repeated measures design.

The fundamental question for a within-subjects or repeated measures
design is whether your effect size is appropriate in terms of "within-
subject" or "between subject" variation. I've had a case or two
where there was justification to report both figures.

>
>As an example, I've run a simple LMM on a single subject sample with only one fixed effect (5 conditions), where subject intercept and slope are random factors. If I wanted to examine the effect size for a specific pairwaise comparison is it simply a matter of using omega squared (which is less biased apparently for small sample size than eta squared)?

I never know what to make of omega squared or eta squared when
I read about them, so I never use them unless (for one reason or
another) there is no choice. Try for Cohen's d.

--
Rich Ulrich

Ryan

Jul 1, 2016, 4:18:50 PM
If you post this to SPSS-L, several people will likely respond, including me.

David Marso

unread,
Jul 1, 2016, 6:26:13 PM7/1/16
to
On Friday, July 1, 2016 at 4:18:50 PM UTC-4, Ryan wrote:
> If you post this to SPSS-L, several people will likely respond including me.
Link: http://spssx-discussion.1045642.n5.nabble.com/