estimate changes depending on reference level of another variable


ekbrown77

Jul 13, 2016, 5:55:15 PM
to StatForLing with R
I apparently don't understand how best to specify interaction terms in a mixed-effects model with lmerTest::lmer (a light wrapper around lme4::lmer). With an interaction term 'discrete_var * gradient_var', when I change the reference level of 'discrete_var' in order to flip the polarity of its estimate and of the interaction term, the estimate of 'gradient_var' (as a main effect) changes as well. In the reproducible example below this doesn't matter, since nothing is significant, but in my real dataset the change makes 'gradient_var' significant in one model and not significant in the other, depending on the reference level of 'discrete_var'.

Here's the reproducible example:
# sets up reproducible example
set.seed(1)
person <- rep(LETTERS, 100)
dependent_var <- sample(1:10, length(person), replace = TRUE)
discrete_var <- factor(rep(c("consonant", "vowel"), length(person) / 2))
gradient_var <- runif(length(person))  # numeric (gradient) predictor
df <- data.frame(person, dependent_var, discrete_var, gradient_var)

# gets mixed effects model with "consonant" as reference level in "discrete_var"
model1 <- lmerTest::lmer(dependent_var ~ discrete_var * gradient_var + (1|person), data = df)

# changes reference level to "vowel" and fits another model
df$discrete_var <- relevel(df$discrete_var, ref = "vowel")
model2 <- lmerTest::lmer(dependent_var ~ discrete_var * gradient_var + (1|person), data = df)

# Why does the estimate and p-value for the gradient_var (as a main effect) change?!
summary(model1)
summary(model2)

Part of output of model 1:
Fixed effects:
Estimate Std. Error         df t value Pr(>|t|)
(Intercept)                       5.46693    0.16174  259.70000  33.800   <2e-16 ***
discrete_varvowel                -0.12538    0.23088  251.50000  -0.543    0.588
gradient_var                      0.02189    0.27485 2595.40000   0.080    0.937
discrete_varvowel:gradient_var    0.13820    0.38888 2595.20000   0.355    0.722

Part of output of model 2:
Fixed effects:

                                   Estimate Std. Error        df t value Pr(>|t|)
(Intercept)                          5.3415     0.1648  259.7000  32.421   <2e-16 ***
discrete_varconsonant                0.1254     0.2309  251.5000   0.543    0.588
gradient_var                         0.1601     0.2751 2595.4000   0.582    0.561
discrete_varconsonant:gradient_var  -0.1382     0.3889 2595.2000  -0.355    0.722

What am I missing? Has anyone dealt with this? How?

Earl

Matías Guzmán Naranjo

Jul 13, 2016, 6:18:04 PM
Dear Earl,

this has nothing to do with lme4 or mixed-effects models; you get the same disparity with ordinary linear models:

lm(formula = dependent_var ~ discrete_var * gradient_var, data = df)

Residuals:
Min      1Q  Median      3Q     Max
-4.5040 -2.4729 -0.3938  2.5295  4.6614

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)                     5.46519    0.15748  34.703   <2e-16 ***
discrete_varvowel              -0.12682    0.22491  -0.564    0.573

lm(formula = dependent_var ~ discrete_var2 * gradient_var, data = df)

Residuals:
Min      1Q  Median      3Q     Max
-4.5040 -2.4729 -0.3938  2.5295  4.6614

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)                           5.3384     0.1606  33.246   <2e-16 ***
discrete_var2consonant                0.1268     0.2249   0.564    0.573

This is because of how interactions are specified. You can take a look at this: http://www.kenbenoit.net/courses/quant1/Quant1_Week10_interactions.pdf If I find something better, I'll send it to you.
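Here is a minimal sketch of what that means, with made-up data and illustrative names (`d`, `g`, `x`), not Earl's variables: in `y ~ g * x`, the coefficient reported for `x` is the simple slope of `x` at whichever level of `g` is the reference, and the interaction coefficient is what you add to get the slope at the other level.

```r
set.seed(1)
d <- data.frame(g = factor(rep(c("a", "b"), 50)),  # discrete predictor
                x = runif(100))                    # gradient predictor
d$y <- ifelse(d$g == "a", 1, 3) * d$x + rnorm(100) # true slopes: 1 for "a", 3 for "b"
m <- lm(y ~ g * x, data = d)
coef(m)["x"]                    # slope of x at the reference level "a" (near 1)
coef(m)["x"] + coef(m)["gb:x"]  # slope of x at level "b" (near 3)
```

Relevel `g` and the coefficient labelled `x` switches to the other simple slope, which is exactly the change Earl observed.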

Best,

Matías


Stefan Th. Gries

Jul 14, 2016, 4:14:00 AM
to StatForLing with R
This is a question of what the coefficients of the linear (mixed-effects) model mean, which I discuss in the code files of the 2013 book (but, for lack of space, not in the printed book). Here's how it works:

#####################################
# I apparently don't understand how best to specify interaction terms
# in a mixed effects model with lmerTest::lmer (which is a light wrapper
# for lme4::lmer). With an interaction term between 'discrete_var *
# gradient_var', when I change the [...]
rm(list=ls(all=TRUE)); set.seed(1)
library(lme4); library(effects)

PERSON <- factor(rep(LETTERS, 100))
DV <- sample(1:10, length(PERSON), replace=TRUE)
IVCAT1 <- factor(rep(c("consonant", "vowel"), length(PERSON)/2),
levels=c("consonant", "vowel"))
IVCAT2 <- factor(rep(c("consonant", "vowel"), length(PERSON)/2),
levels=c("vowel", "consonant"))
IVNUM <- runif(length(PERSON))
summary(EXMPL <- data.frame(DV, PERSON, IVCAT1, IVCAT2, IVNUM))

# fixed-effects modeling
model.1.lm <- lm(DV ~ IVCAT1 * IVNUM, data=EXMPL)
model.2.lm <- lm(DV ~ IVCAT2 * IVNUM, data=EXMPL)

allEffects(model.1.lm, xlevels=list(IVNUM=seq(0, 1, 0.25)))
summary(model.1.lm)
# coef 1 / the intercept is the predicted value of the DV when IVCAT is "consonant" and IVNUM is 0
# coef 2 is what you need to add to the intercept to get the prediction for when IVCAT is "vowel" and IVNUM is 0
# coef 3 is what you need to add to the intercept to get the prediction for when IVCAT is "consonant" and IVNUM is 1
# coef 4 is what you need to add to all previous coefs to get the prediction for when IVCAT is "vowel" and IVNUM is 1

allEffects(model.2.lm, xlevels=list(IVNUM=seq(0, 1, 0.25)))
summary(model.2.lm)
# coef 1 / the intercept is the predicted value of the DV when IVCAT is "vowel" and IVNUM is 0
# coef 2 is what you need to add to the intercept to get the prediction for when IVCAT is "consonant" and IVNUM is 0
# coef 3 is what you need to add to the intercept to get the prediction for when IVCAT is "vowel" and IVNUM is 1
# coef 4 is what you need to add to all previous coefs to get the prediction for when IVCAT is "consonant" and IVNUM is 1

# And all p-values are testing whether the coefficients/estimates are significantly different from 0.
# The coefficients for IVNUM are different in the two models because they are the slopes of IVNUM in two different situations:
# - model.1.lm's 3rd coefficient gives you the slope of IVNUM when IVCAT is "consonant"
# - model.2.lm's 3rd coefficient gives you the slope of IVNUM when IVCAT is "vowel"

# This is because the interaction term, while not significant, still says they differ: note that
# - the coefficients of the interaction terms are the same in both models and, more importantly for your question,
# - the interaction term is the difference between the two slopes in the two models:
coef(model.2.lm)[3] - coef(model.1.lm)[3] # same as
coef(model.1.lm)[4] # or coef(model.2.lm)[4], with the sign flipped
#####################################
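One more sanity check along these lines: releveling changes only the parameterization, not the model itself, so the fitted values are identical either way. A self-contained sketch with simulated data (the names `d`, `g`, `x` are illustrative, not from the thread):

```r
set.seed(1)
d <- data.frame(g = factor(rep(c("consonant", "vowel"), 100)),
                x = runif(200))
d$y <- sample(1:10, 200, replace = TRUE)
m1 <- lm(y ~ g * x, data = d)           # reference level "consonant"
d$g2 <- relevel(d$g, ref = "vowel")
m2 <- lm(y ~ g2 * x, data = d)          # reference level "vowel"
all.equal(fitted(m1), fitted(m2))       # TRUE: same model, different coding
```

Only the labels and values of the individual coefficients change; every prediction the two models make is the same.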

Cheers,
STG
--
Stefan Th. Gries
----------------------------------
Univ. of California, Santa Barbara
http://tinyurl.com/stgries
----------------------------------

ekbrown77

Jul 14, 2016, 12:29:19 PM
to StatForLing with R
Is it safe to conclude that if IVNUM becomes significant when the reference level of IVCAT is changed, that is (further) evidence of a significant interaction between IVNUM and IVCAT (in addition to, possibly, a significant p-value for the interaction term itself)?

Here's a reproducible example with some significance:

# sets up sample df
set.seed(1)
PERSON <- factor(rep(LETTERS, 100))
DV <- sample(1:10, length(PERSON), replace=TRUE)
IVCAT1 <- factor(rep(c("consonant", "vowel"), length(PERSON)/2),
                 levels=c("consonant", "vowel"))
IVCAT2 <- factor(rep(c("consonant", "vowel"), length(PERSON)/2),
                 levels=c("vowel", "consonant"))
IVNUM <- runif(length(PERSON))
summary(EXMPL <- data.frame(DV, PERSON, IVCAT1, IVCAT2, IVNUM))

# creates significance for 'consonant' and a significant interaction
for (i in 1:nrow(EXMPL)) {
  if (i %% 5 == 0) {
    if (EXMPL$IVCAT1[i] == "consonant") {
      EXMPL$DV[i] <- EXMPL$DV[i] + 3
      EXMPL$IVNUM[i] <- EXMPL$IVNUM[i] + 0.3
    }
  }
}

# visualizes the significance
library("ggplot2")
ggplot(EXMPL, aes(IVNUM, DV, color = IVCAT1)) +
  geom_point(alpha = 0.5) +
  geom_smooth(method = "lm") +
  theme_bw()

# fits linear regressions
model.1.lm <- lm(DV ~ IVCAT1 * IVNUM, data=EXMPL)
model.2.lm <- lm(DV ~ IVCAT2 * IVNUM, data=EXMPL)

# IVNUM is significant when IVCAT's reference level is 'consonant'
summary(model.1.lm)

# IVNUM is not significant when IVCAT's reference level is 'vowel'
summary(model.2.lm)

Thanks for the help Stefan and Matías.

Stefan Th. Gries

Jul 14, 2016, 2:18:29 PM
to StatForLing with R
1) In this case you _have_ to look at the interaction anyway: it is significant, so you shouldn't concern yourself with the main effects in the first place. Also note that it can be problematic to take a coefficient in the summary(lm(...)) output as _the_ indicator of a predictor's significance when, as here, the intercepts are not the same, because coefficients reflect differences from intercepts (and other sums), as per my previous mail.
2) One way to get p-values that measure/reflect what you apparently want is car::Anova (as long as you obey the principle of marginality!):

###################################
car::Anova(model.1.lm, type="II")
# Anova Table (Type II tests)
# Response: DV
#               Sum Sq   Df F value    Pr(>F)
# IVCAT1         243.9    1  26.860 2.355e-07 ***
# IVNUM          182.1    1  20.049 7.873e-06 ***
# IVCAT1:IVNUM   101.2    1  11.141 0.0008565 ***
# Residuals    23573.1 2596

car::Anova(model.2.lm, type="II")
# Anova Table (Type II tests)
# Response: DV
#               Sum Sq   Df F value    Pr(>F)
# IVCAT2         243.9    1  26.860 2.355e-07 ***
# IVNUM          182.1    1  20.049 7.873e-06 ***
# IVCAT2:IVNUM   101.2    1  11.141 0.0008565 ***
# Residuals    23573.1 2596
# see? the results are identical
###################################
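A related option, not mentioned in the thread but worth noting: with sum-to-zero (deviation) coding for the categorical predictor, the slope reported for the continuous predictor in summary() is the unweighted average of the two simple slopes, so it no longer depends on which level comes first. A hedged sketch with freshly simulated data (the `EX` object is made up here, not the thread's EXMPL):

```r
set.seed(1)
EX <- data.frame(IVCAT = factor(rep(c("consonant", "vowel"), 1300)),
                 IVNUM = runif(2600),
                 DV    = sample(1:10, 2600, replace = TRUE))
contrasts(EX$IVCAT) <- contr.sum(2)     # sum-to-zero instead of treatment coding
m.sum <- lm(DV ~ IVCAT * IVNUM, data = EX)
summary(m.sum)$coefficients["IVNUM", ]  # average slope across levels of IVCAT,
                                        # invariant to the order of the levels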

HTH :-)

Matías Guzmán Naranjo

Jul 14, 2016, 2:27:42 PM

Stefan Th. Gries

Jul 18, 2016, 6:24:33 PM
to StatForLing with R
I wanted to add one more comment to this, which I think makes the difference between the two model outputs even more obvious:

#####################################
rm(list=ls(all=TRUE)); set.seed(1)
library(lme4); library(effects)

PERSON <- factor(rep(LETTERS, 100))
DV <- sample(1:10, length(PERSON), replace=TRUE)
IVCAT1 <- factor(rep(c("consonant", "vowel"), length(PERSON)/2),
levels=c("consonant", "vowel"))
IVCAT2 <- factor(rep(c("consonant", "vowel"), length(PERSON)/2),
levels=c("vowel", "consonant"))
IVNUM <- runif(length(PERSON))
summary(EXMPL <- data.frame(DV, PERSON, IVCAT1, IVCAT2, IVNUM))

# fixed-effects modeling
summary(model.1.lm <- lm(DV ~ IVCAT1 * IVNUM, data=EXMPL))
summary(model.2.lm <- lm(DV ~ IVCAT2 * IVNUM, data=EXMPL))

IVNUM.m1 <- data.frame(effect("IVCAT1*IVNUM", model.1.lm, xlevels=list(IVNUM=seq(0, 1, 0.25))))
IVNUM.m1.consonant <- split(IVNUM.m1, IVNUM.m1$IVCAT1)[["consonant"]]
IVNUM.m2 <- data.frame(effect("IVCAT2*IVNUM", model.2.lm, xlevels=list(IVNUM=seq(0, 1, 0.25))))
IVNUM.m2.vowel <- split(IVNUM.m2, IVNUM.m2$IVCAT2)[["vowel"]]

par(mfrow=c(1,2))
# for this model, coef 3 is the slope of IVNUM when IVCAT1 = "consonant", as shown by the regression line
plot(IVNUM.m1.consonant$IVNUM, IVNUM.m1.consonant$fit, type="l", ylim=c(5, 6)); grid()
# this is the slope that the regression line would have if coef 3 were 0, not 0.02541
abline(h=coef(model.1.lm)[1], lty=2)

# for this model, coef 3 is the slope of IVNUM when IVCAT2 = "vowel", as shown by the regression line
plot(IVNUM.m2.vowel$IVNUM, IVNUM.m2.vowel$fit, type="l", ylim=c(5, 6)); grid()
# this is the slope that the regression line would have if coef 3 were 0, not 0.1664
abline(h=coef(model.2.lm)[1], lty=2)
par(mfrow=c(1,1))
# the difference between the two coefficients 3 for IVNUM is which level of IVCAT they apply to and how much they differ from the H0 tested (that coef 3 = 0)
#####################################

Cheers,