How to do a convergent and discriminant validity check in R?


Min Chen

Aug 11, 2022, 9:09:36 AM
to lavaan
Hello all,

I am new to R, and I now want to do a convergent and discriminant validity check for my model.
I use semTools::AVE(my_model) to obtain the AVE. How can I compute the MSV and the Fornell-Larcker ratio in R? Is there a more efficient way to do this?

Thank you all for answering my questions in advance. 

best,
Min

Rönkkö, Mikko

Aug 11, 2022, 10:16:26 AM
to lav...@googlegroups.com

Hi,

 

The AVE criterion is not a very useful technique for assessing discriminant validity. You can find better alternatives here:

Rönkkö, M., & Cho, E. (2020). An updated guideline for assessing discriminant validity. Organizational Research Methods. doi:10.1177/1094428120968614

These techniques are implemented in the discriminantValidity function of semTools.
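For a fitted lavaan model, the call is straightforward. Here is a minimal sketch using the HolzingerSwineford1939 example data that ships with lavaan; check ?semTools::discriminantValidity for your installed version, since argument defaults may differ across semTools releases:

```r
library(lavaan)
library(semTools)

# Fit a simple three-factor CFA on the built-in example data
model <- "
visual  =~ x1 + x2 + x3
textual =~ x4 + x5 + x6
speed   =~ x7 + x8 + x9
"
fit <- cfa(model, data = HolzingerSwineford1939)

# Tests each factor correlation (with confidence intervals)
# against a cutoff, as recommended in Rönkkö & Cho (2020)
discriminantValidity(fit, cutoff = 0.9)
```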

 

Mikko


Christian Arnold

Aug 11, 2022, 11:02:36 AM
to lav...@googlegroups.com
Hello Min,

Here is a quick-and-dirty function for a simple one-group CFA that produces the "usual" table. If desired, the HTMT values are inserted into the upper triangle of the correlation matrix.

No guarantee that everything is correct - please check the results!

Note: Discriminant validity is a strange phenomenon. One can argue about it forever. In any case, it makes sense to read Rönkkö/Cho (https://journals.sagepub.com/doi/full/10.1177/1094428120968614) and consider their advice.

Best

Christian

library(lavaan)
library(semTools)


# Creates the "usual" table: CR, AVE, MSV, latent correlations
# (lower triangle), and optionally HTMT (upper triangle)
rel <- function(object, CR = TRUE, AVE = TRUE, MSV = TRUE, HTMT = TRUE, digits = 3) {
  # latent-variable correlation matrix
  cor.lv <- data.frame(lavaan::lavInspect(object, "cor.lv"))
  # measurement-model rows of the parameter table
  pt <- lavaan::parTable(object)[lavaan::parTable(object)$op == "=~", ]
  mtrx <- round(cor.lv, digits)
  diag(mtrx) <- ""
  if (HTMT) {
    # rebuild the measurement syntax and compute HTMT on the raw data
    htmt <- round(semTools::htmt(paste(pt[, "lhs"], pt[, "op"], pt[, "rhs"], sep = ""),
                                 data.frame(lavaan::lavInspect(object, "data"))), digits)
    upper <- htmt[upper.tri(htmt)]
  } else {
    upper <- ""
  }
  mtrx[upper.tri(mtrx)] <- upper
  # MSV: squared largest correlation with any other factor
  # (the diagonal of 1 is excluded as the column maximum)
  if (MSV) mtrx <- cbind(MSV = round(sapply(abs(cor.lv), function(x) max(x[x != max(x)])^2), digits), mtrx)
  if (AVE) mtrx <- cbind(AVE = round(semTools::AVE(object), digits), mtrx)
  if (CR)  mtrx <- cbind(CR  = round(semTools::compRelSEM(object), digits), mtrx)
  return(mtrx)
}


# Example
pop.model <- "
f1 =~ 1.0 * x1 + 1.0 * x2 + 1.0 * x3
f2 =~ 1.0 * x4 + 1.0 * x5 + 1.0 * x6
f3 =~ 1.0 * x7 + 1.0 * x8 + 1.0 * x9
f1 ~~ 0.4 * f2
f1 ~~ -0.3 * f3
f2 ~~ -0.6 * f3
"


model <- "
f1 =~ x1 + x2 + x3
f2 =~ x4 + x5 + x6
f3 =~ x7 + x8 + x9
"

set.seed(1)
data <-  simulateData(pop.model, sample.nobs = 1000)

fit <- cfa(model, data)

rel(fit, CR = TRUE, AVE = TRUE, MSV = TRUE, HTMT = TRUE, digits = 3)





Christian Arnold

Aug 11, 2022, 11:06:19 AM
to lav...@googlegroups.com
Uh, I hadn't seen that you were faster. Sorry, Mikko.

Nickname

Aug 12, 2022, 10:11:24 AM
to lavaan
Let me elaborate a little on Mikko's answer (Mikko, congrats on the article).  First, of course, 'convergent validity' and 'discriminant validity' are terms that are out of step with the modern understanding of validity.  These are not distinct and independent types of validity that one collects like stamps in a stamp collection (Guion's analogy).  Instead, they are lines of evidence that contribute to a unified validity argument for your measure(s).  This is related to the idea that validation is not like going through a checklist of independent steps.  As such, it is best to avoid thinking of validation in terms of "checking" and instead think of it as an evaluation or assessment of validity involving inter-related lines of evidence.

The lasting contribution of Campbell and Fiske's article that seems to get lost is that they recognized and attempted to solve the "how big is big" problem.  There are no universal cut scores for how related measures of the same construct should be or how unrelated measures of different constructs should be.  This depends on the constructs.  (Moreover, shifting attention to the items instead of the measures asks a very different question and should not be treated simply as a matter of expedience.)  As Zumbo and Chan have emphasized, Campbell and Fiske focused on the comparative sizes of the above associations, suggesting that associations between measures of different constructs set a lower bound for associations between measures of the same construct, and conversely, associations between measures of the same construct set an upper bound for associations between measures of different constructs.  These are just outer bounds; depending on your constructs, you might want to set the bar lower for discrimination and higher for convergence.  The fundamental point is that the two need to be evaluated in relation to one another, not separately.

More generally, no statistical analysis offers a gold standard or operational definition of convergence of measures of the same construct or discrimination between different constructs by different measures.  (This relates back to the checklist idea.)  Any one analysis is just one line of evidence for these hypotheses.  Ideally, a program of research should triangulate on these hypotheses using different lines of evidence.  At the very least, no one analysis should ever be interpreted as offering definitive evidence.  When you report your results, remember that they are only partial and fallible.  Interpret the results in light of all the evidence available to help you critically evaluate your measures.

Keith
------------------------
Keith A. Markus
John Jay College of Criminal Justice, CUNY
http://jjcweb.jjay.cuny.edu/kmarkus
Frontiers of Test Validity Theory: Measurement, Causation and Meaning.
http://www.routledge.com/books/details/9781841692203/