Fix parameter without effect


Jorge Sinval

Jul 27, 2018, 1:02:46 PM
to lavaan

Hello!

I'm trying to fix a parameter (the variance of an observed variable), but it doesn't have any effect on the output.

item ~~ 0.001*item

I also get the warning:

lavaan WARNING:
    The variance-covariance matrix of the estimated parameters (vcov)
    does not appear to be positive definite! The smallest eigenvalue
    (= -1.437742e-16) is smaller than zero. This may be a symptom that
    the model is not identified.

lavaan WARNING: some estimated ov variances are negative

Using `eigen(inspect(fit, "cov.lv"))$values`, I get:

[1] 3.5744820051 0.4209028657 0.2839836555 0.2477959891 0.1991937449 0.0898171920 0.0134621281 0.0005180068


Any thoughts?

Thanks.

Jeremy Miles

Jul 27, 2018, 4:46:42 PM
to lav...@googlegroups.com

What do you mean by "doesn't have any effect on the output"?

You're getting a warning because you're constraining the variance to a value that is impossible (given the covariances).  It's a warning, not an error, so if you really want to do this, and have a good reason, you can.

--
You received this message because you are subscribed to the Google Groups "lavaan" group.
To unsubscribe from this group and stop receiving emails from it, send an email to lavaan+un...@googlegroups.com.
To post to this group, send email to lav...@googlegroups.com.
Visit this group at https://groups.google.com/group/lavaan.
For more options, visit https://groups.google.com/d/optout.
--
My employer has nothing to do with this email. 

jpma...@gmail.com

Jul 28, 2018, 7:41:23 AM
to lav...@googlegroups.com

Hi Jeremy,

… the problem was that the item's error variance was negative and its standardized loading was greater than 1 (1.022, not by much, and a standardized loading can legitimately exceed 1). The idea was therefore to fix the variance to a small number (.001). With ML estimation, the standardized factor loading then comes out less than 1 (.99). But with WLSMV, the standardized loading does not change; it has the exact same value as before fixing the variance. Any clues why?

Best,

João

Terrence Jorgensen

Jul 29, 2018, 7:35:36 PM
to lavaan

With ML estimation, the standardized factor loading is less than 1 (.99). But with WLSMV, the standardized loading does not change; it has the exact same value as before fixing the variance. Any clues why?


Are you using the default parameterization = "delta"? In that case, residual variances are not estimated parameters. Try setting parameterization = "theta".
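A minimal sketch of that refit (assuming `model` and `mydata` stand in for the model syntax and data set from this thread; `ordered = TRUE` requires lavaan >= 0.6-4):

```r
library(lavaan)

# Refit under the theta parameterization, in which residual variances
# are model parameters (and can therefore be fixed) rather than
# quantities derived from the delta scaling constraints.
fit_theta <- cfa(model, data = mydata, estimator = "WLSMV",
                 ordered = TRUE,             # treat all indicators as ordinal
                 parameterization = "theta")
```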


But before you bother with constrained estimation, which causes problems for testing model fit and model comparison:


Make sure the out-of-bounds standardized loading is actually inconsistent with mere sampling error by checking its confidence interval:


standardizedSolution(fit)
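For instance (a sketch; `fit` is the fitted lavaan object, and `standardizedSolution()` returns delta-method confidence limits by default):

```r
std <- standardizedSolution(fit)   # columns include est.std, ci.lower, ci.upper
std[std$op == "=~", ]              # if a loading's CI contains 1, the
                                   # out-of-bounds estimate may be sampling error
```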

Terrence D. Jorgensen
Postdoctoral Researcher, Methods and Statistics
Research Institute for Child Development and Education, the University of Amsterdam



Jorge Sinval

Jul 31, 2018, 5:58:26 AM
to lavaan
Thanks!
After changing the parameterization to "theta" I get:

lavaan WARNING: the optimizer warns that a solution has NOT been found!


Terrence Jorgensen

Jul 31, 2018, 8:33:32 AM
to lavaan
After changing the parameterization to "theta" I get:

lavaan WARNING: the optimizer warns that a solution has NOT been found!

That is not uncommon.  For whatever reason, the optimizer has more trouble with theta than delta parameterization, especially (in my experience) when using the weird identification constraints of Millsap & Tein (2004) for multiple groups or Liu et al. (2017) for longitudinal CFAs.  But the reason for nonconvergence in your case might be an unidentified model.  Can you post your syntax example?

Jorge Sinval

Jul 31, 2018, 9:24:29 AM
to lav...@googlegroups.com
Sure, although I should mention that my sample is small (N = 187):

model <- ' F1=~ item1  + item2 +  item3
           F2=~ item4  + item5 +  item6
           F3=~ item7  + item8
           F4=~ item9  + item10
           F5=~ item11 + item12
           F6=~ item13 + item14
           F7=~ item15 + item16


           Factor_2L =~ F1 + F2 + F3 + F4 + F5 + F6 + F7
'



Terrence Jorgensen

Aug 1, 2018, 4:46:24 AM
to lavaan
Sure, although I should mention that my sample is small (N = 187)

That is a problem, and I have noticed that convergence issues with categorical data are exacerbated by small sample sizes.  Simulation studies also indicate that results from DWLS do not really stabilize until around N = 500, so the test statistics often have inflated Type I error rates and estimates might be biased.  If you have at least 5 categories, I would recommend robust ML instead.
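A sketch of that alternative, assuming the items have at least 5 categories (`model` and `mydata` are the hypothetical names used above):

```r
# Treat the items as continuous and use robust (sandwich) standard errors
# with a scaled (Satorra-Bentler-type) test statistic.
fit_mlr <- cfa(model, data = mydata, estimator = "MLR")
summary(fit_mlr, fit.measures = TRUE, standardized = TRUE)
```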

model <- ' F1=~ item1  + item2 +  item3
           F2=~ item4  + item5 +  item6
           F3=~ item7  + item8
           F4=~ item9  + item10
           F5=~ item11 + item12
           F6=~ item13 + item14
           F7=~ item15 + item16


Factor_2L =~ F1 + F2 + F3 + F4 + F5 + F6 + F7
'

You have several 2-indicator factors, which can be empirically underidentified if any of the factor correlations are close to zero.  I would recommend fixing both factor loadings to 1 instead of only the first.  With 2 indicators whose loadings are both fixed to 1, the estimated factor variance is simply the covariance between the indicators, which gives a nice interpretation and prevents the possible empirical underidentification that comes with having only 2 indicators.
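Applied to the syntax posted above, that suggestion might look like this (a sketch; `1*` fixes a loading to 1, and `#` comments are allowed inside lavaan model syntax):

```r
model2 <- ' F1 =~ item1  + item2  + item3
            F2 =~ item4  + item5  + item6
            F3 =~ 1*item7  + 1*item8    # both loadings fixed to 1, so the
            F4 =~ 1*item9  + 1*item10   # factor variance equals the
            F5 =~ 1*item11 + 1*item12   # covariance of its two indicators
            F6 =~ 1*item13 + 1*item14
            F7 =~ 1*item15 + 1*item16

            Factor_2L =~ F1 + F2 + F3 + F4 + F5 + F6 + F7 '
```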

Jorge Sinval

Aug 3, 2018, 3:11:35 AM
to lav...@googlegroups.com
Thanks, Jorgensen! Helpful as always.


Yakhoub NDIAYE

Feb 16, 2020, 1:07:52 PM
to lavaan
Hello, I've encountered a similar problem with an even smaller sample size (N = 100). Could you take a look at how to fix it?

The model is below:

MODELE <- '
F1 =~ var3 + var4 + var5
F2 =~ var6 + var7
F3 =~ var1 + var2
G =~ F1 + F2 + F3'

Fitted_model <- sem(MODELE, data = mydata, estimator = "WLS", missing = "pairwise",
                    ordered = c("var1", "var2", "var3", "var4", "var5", "var6", "var7"))

Warning message: In lav_model_estimate(lavmodel = lavmodel, lavpartable = lavpartable,  :
  lavaan WARNING: the optimizer warns that a solution has NOT been found!

Thanks in advance, 
Yakhoub 

Edward Rigdon

Feb 16, 2020, 1:59:00 PM
to lav...@googlegroups.com
You cannot expect WLS to find a stable solution with N = 100. A small sample size and ordinal data are not a good combination.


Terrence Jorgensen

Feb 20, 2020, 8:28:26 AM
to lavaan
I concur with Ed.  I would also add that estimator = "WLS" is the "ADF" estimator for continuous data.  For ordinal data, request estimator = "DWLS" (although it is the default when ordered data are detected).  It does not require nearly as large an N, but N < 500 is still generally inadequate in terms of bias and Type I error inflation.  You could try estimator = "PML" for pairwise maximum likelihood (I think that stabilizes at smaller N).
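For example, reusing the syntax posted above (a sketch; `estimator = "PML"` requests pairwise maximum likelihood in lavaan):

```r
fit_pml <- sem(MODELE, data = mydata, estimator = "PML",
               ordered = c("var1", "var2", "var3", "var4",
                           "var5", "var6", "var7"))
```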

Terrence D. Jorgensen
Assistant Professor, Methods and Statistics