"Well, you could just turn the question around. Why would this approach be better than comparing the mean differences using multigroup CFA?"
For this project, it saves me a bit of time. I have seven latent factors and several groups to test for MI.
When I use lavPredict to get the scores, I can, for example, easily create plots comparing the means of all seven factors across the groups, split the scores into subgroups and compare the seven factors there, etc.
It's generally very easy to plug the predicted scores into other R packages for further analysis :-)
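Just as a rough sketch of what I mean (the data set, grouping variable and factor names below are made up, and I'm only showing two of the seven factors):

    library(lavaan)

    ## Two made-up factors with made-up indicator names
    model <- '
      control =~ c1 + c2 + c3
      warmth  =~ w1 + w2 + w3
    '

    ## Multigroup CFA with loadings and intercepts constrained equal
    ## (i.e. after scalar invariance has been established)
    fit <- cfa(model, data = mydata, group = "country",
               group.equal = c("loadings", "intercepts"))

    ## lavPredict() returns one matrix of factor scores per group
    scores <- lavPredict(fit)

    ## Stack them with a group label so they drop straight into
    ## ggplot2 / dplyr / aggregate() etc.
    scores_df <- do.call(rbind, Map(function(s, g) data.frame(s, country = g),
                                    scores, lavInspect(fit, "group.label")))

    aggregate(cbind(control, warmth) ~ country, data = scores_df, mean)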
However, Putnick and Bornstein (2016, p. 77), for example, write: "One common way to do this is to set the latent factor mean to 0 in one group and allow it to vary in the second group. The estimated mean parameter in the second group represents the difference in latent means across groups. For example, if the latent factor variance is set to 1.0 and the standardized mean of the parental control latent factor is estimated at 1.00, p < .05, in the United States, then control in the United States is one standard deviation higher than control in China."
This seems a bit "hacky" to me, to be honest, like something you would do if you couldn't easily get the predicted scores, which IIRC was the case with older software.
But maybe setting the latent factor mean to 0 and allowing it to vary in the second group encodes some different assumptions, or produces different mean estimates than the predicted scores would?
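For comparison, my understanding is that the approach from the quote would look roughly like this in lavaan (same made-up names as in the sketch above): once the intercepts are constrained equal, lavaan fixes the latent means to 0 in the first group and frees them in the other groups, so the estimated factor means in those groups are directly the mean differences from group 1.

    ## Same made-up model and data as above
    fit_scalar <- cfa(model, data = mydata, group = "country",
                      group.equal = c("loadings", "intercepts"))

    ## Latent means appear as "~1" rows for the factors:
    ## fixed at 0 in group 1, freely estimated elsewhere
    est <- parameterEstimates(fit_scalar)
    subset(est, op == "~1" & lhs %in% c("control", "warmth"))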
It's definitely more work for me to do this, which I would like to avoid unless there's a compelling reason :'-)