I should add that I'm using lavaan version 0.5-20.
I can easily see how to test the above models against the configural model using anova(), but to what does one compare the configural model to see whether adding the configural constraints had a significant effect? I tried anova(cfa.no.group, cfa.config).
I have a large sample size, so I'm wary of focussing on the chi-squared test. I therefore manually calculated the difference in CFI between cfa.no.group and cfa.config, which was 0.012. That is greater than 0.01, which would indicate a meaningful deterioration in goodness of fit (from memory I had attributed that cutoff to Hu & Bentler, though the ΔCFI ≤ .01 criterion is usually credited to Cheung & Rensvold, 2002), so there might not be configural invariance.
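For what it's worth, here is a minimal sketch of that comparison using lavaan's bundled HolzingerSwineford1939 data (an illustrative model and grouping variable, not yours). Because the pooled and configural models are fitted to different sets of sample moments, anova() is not appropriate for this particular pair, so comparing fit indices directly is one common alternative:

```r
library(lavaan)

# Illustrative model on lavaan's bundled HolzingerSwineford1939 data;
# substitute your own model, data, and grouping variable.
HS.model <- ' visual  =~ x1 + x2 + x3
              textual =~ x4 + x5 + x6
              speed   =~ x7 + x8 + x9 '

fit.pooled <- cfa(HS.model, data = HolzingerSwineford1939)  # ignores groups
fit.config <- cfa(HS.model, data = HolzingerSwineford1939,
                  group = "school")                         # configural model

# Extract the CFI from each fit and take the difference manually
cfi.pooled <- fitMeasures(fit.pooled, "cfi")
cfi.config <- fitMeasures(fit.config, "cfi")
cfi.pooled - cfi.config
```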
P.S. In case you are interested in what happened with group = "Gender" in measurementInvarianceCat: it produced an error message saying that it had not converged.
Thank you so much for your help! The thing that didn’t converge was:
measurementInvarianceCat(model, data = data, group = "Male",
                         parameterization = "theta", estimator = "wlsmv",
                         ordered = c(". . . all the names of the items in the data, each of them separated by a comma"))
I forgot to say in my post that the psychological battery the data come from contains 37 items, 11 primary factors, and 4 correlated residual error covariances, and there are 1357 observations per item (the sample size is 1357). I had a look at the link to Sunthud's website, but he says the code assumes no measurement error correlations, and the code is in a style so different from what I'm used to that I don't understand it. I'm trained in medicine, not statistics, and have picked up R through the kind help of R experts who have helped me analyse data from the clinical trials I have done. Sometimes the gaps in what I know about R make me feel so stupid!
Thank you so much!
Best wishes
Brent
--
You received this message because you are subscribed to a topic in the Google Groups "lavaan" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/lavaan/FQ5EWmclbjI/unsubscribe.
To unsubscribe from this group and all its topics, send an email to lavaan+un...@googlegroups.com.
To post to this group, send email to lav...@googlegroups.com.
Visit this group at https://groups.google.com/group/lavaan.
For more options, visit https://groups.google.com/d/optout.
Hi, thank you so much for your advice. I've attached my data and my code (I've shortened the original vector names). I'd be so grateful if anyone could tell me why measurementInvarianceCat does not converge, or how I can use lavaan's group syntax (with group = "Male.bis4") to investigate this, even though the separate CFAs for males and females each fit fine and have very similar goodness-of-fit indices. The attached code takes a while to run; the measurementInvarianceCat commands sometimes take 15 minutes each on my computer, which has a 3.6 GHz CPU, I think.
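Since measurementInvarianceCat essentially wraps a series of cfa() calls, one way to investigate the non-convergence is to fit the steps by hand with lavaan's group syntax. A hedged sketch, using the names from your own script (model1, vrawdataframe, Male.bis4 — adjust as needed):

```r
library(lavaan)

# Hedged sketch of fitting the invariance steps manually instead of via
# measurementInvarianceCat(); 'model1', 'vrawdataframe', and 'Male.bis4'
# are the names from your own script, not something lavaan provides.
items <- paste0("v", 1:37)

# Step 1: configural model (same pattern, all parameters free per group)
fit.config <- cfa(model1, data = vrawdataframe, group = "Male.bis4",
                  ordered = items, parameterization = "theta",
                  estimator = "WLSMV")

# Step 2: constrain loadings and (for ordered items) thresholds equal
fit.equal <- cfa(model1, data = vrawdataframe, group = "Male.bis4",
                 ordered = items, parameterization = "theta",
                 estimator = "WLSMV",
                 group.equal = c("loadings", "thresholds"))

# Fitting one step at a time makes it easier to see exactly which
# step fails to converge, and to inspect it with summary() or lavInspect().
anova(fit.config, fit.equal)
```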
Thank you so much for your help, and especially for taking the time to teach me about statistics in general and writing more beautiful R code in particular. This is vital to the veracity of the results because, as Keats noted,
Dear Terrence
I apologise for continuing to ask more questions. Am I correct in thinking that, because there were acceptable goodness-of-fit measures when I fitted the model to the subset containing only data from females, the female data fit the model? And that, because there was not even configural measurement invariance for gender, the female data do not fit the model when features like the indicator-to-latent-factor pattern are constrained to be the same as for males, which probably means that females have a different indicator-to-latent-factor loading pattern? So I thought I would try to calculate factor scores for females from the CFA that was fitted to the female data only, which was:
fit.model1.female <- cfa(model1, ordered = paste0("v", 1:37),
                         data = subset(vrawdataframe, Male.bis4 == 0))
because it wouldn't be right to get scores for females by subsetting scores from fit.model1 <- cfa(model1, ordered = paste0("v", 1:37), data = vrawdataframe), since that model assumes equal indicator-to-factor loading patterns, equal loadings, equal thresholds, etc. between genders, which there aren't. So I tried to calculate the scores with predict(fit.model1.female), but all the scores were NA, as can be seen from
summary(predict(fit.model1.female))
What have I done wrong? How can there be no scores in this fit, when it is possible to calculate goodness-of-fit measures from it with summary(fit.model1.female, fit.measures = TRUE)?
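I can't be sure why the scores come out NA without seeing the data, but here is a minimal factor-score sketch on lavaan's bundled HolzingerSwineford1939 data (an illustrative model, not yours) showing the expected behaviour. lavPredict() is the function predict() calls under the hood, and type = "lv" (the default) returns factor scores for the latent variables:

```r
library(lavaan)

# Minimal factor-score sketch on lavaan's bundled data (not your model).
HS.model <- ' visual  =~ x1 + x2 + x3
              textual =~ x4 + x5 + x6
              speed   =~ x7 + x8 + x9 '
fit <- cfa(HS.model, data = HolzingerSwineford1939)

fs <- lavPredict(fit, type = "lv")  # same as predict(fit)
head(fs)       # one row of factor scores per observation
colMeans(fs)   # scores are centred near zero by construction
```

One thing worth checking in your own data (an assumption on my part) is whether the female subset contains missing values in the indicators, since factor scores cannot be computed for incomplete rows without special handling, and that can surface as NA scores.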
Thank you so much for your patience with me!
Best wishes
Brent
From: lav...@googlegroups.com [mailto:lav...@googlegroups.com] On Behalf Of Terrence Jorgensen
Sent: Wednesday, 27 January 2016 9:33 p.m.
To: lavaan <lav...@googlegroups.com>
Subject: Re: How to test if there is significant configural measurement invariance using lavaan group syntax?