Lavaan, sem, word freq, scale data


R McLaren

Oct 30, 2016, 11:29:43 PM
to lavaan
I've been using Likert scales that yield fairly normal data for CFA/SEM in lavaan, and I've generated a plausible model for those data. I also have word frequency measures (wf) that are categorical and can't be assumed normally distributed, so I've treated these wfs in a separate log-linear, logit-type analysis. I'd prefer to have one CFA/SEM model do the job of both the Likert and the wf data and drop the separate analysis. Searching online, I've been unable to find a documented procedure for this with R/lavaan; sources point to OpenMx and Mplus instead, and I haven't found a solid reference on categorical, non-normal data in a lavaan CFA/SEM.

Can I incorporate a categorical variable into a lavaan CFA/SEM model and test it, within the same model, alongside more continuous data like these scale data? I don't want to violate distributional assumptions. I believe the categorical data are right-skewed, with many subjects providing none of the key terms, for example.

I've tried OpenMx, but after two attempts I keep getting 'loading failed' messages (Ubuntu 16.04 LTS), so I'm hopeful that lavaan has the same functionality in this respect as OpenMx and Mplus.

Thanks in advance for any pointers.

Terrence Jorgensen

Nov 1, 2016, 4:49:44 AM
to lavaan
I also have word frequency measures (wf) that are categorical and can't be assumed normally distributed.

Are these count data?  If the expected values are quite large, normal is a good approximation for Poisson.  I forget what the cutoff is (it's 30 trials for binomial to be approximately normal, maybe something close for Poisson?).  But lavaan doesn't have a Poisson link available, so you would have to treat counts like ordered categories using a probit link.

Can I incorporate a categorical variable into a lavaan CFA/SEM model and test it, within the same model, alongside more continuous data like these scale data?

There is nothing wrong with having a mixture of continuous and categorical outcomes in the same model.  WLS doesn't assume normality, but lavaan will use an identity link and give you linear regression slopes for factor loadings of continuous items, and probit link for items that are declared as ordered.
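For illustration, a minimal sketch of what that looks like in lavaan syntax; the factor names, item names, and the data frame mydata are placeholders, not taken from this thread:

library(lavaan)

# One factor measured by continuous Likert-type items (identity link),
# another measured by count items treated as ordered categories (probit link).
model <- '
  Attitude =~ lik1 + lik2 + lik3
  Usage    =~ cnt1 + cnt2 + cnt3
'

# Declaring the count items as "ordered" switches lavaan to the WLSMV-type
# estimator (DWLS with robust corrections) for the whole model.
fit <- cfa(model, data = mydata, ordered = c("cnt1", "cnt2", "cnt3"))
summary(fit, standardized = TRUE)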


I believe the categorical data are right-skewed, with many subjects providing none of the key terms, for example.

Skew doesn't matter, that just means that the thresholds will not be symmetric.  I recall that chi-squared has more inflated Type I error rates when thresholds are asymmetric, but lavaan provides a robust statistic by default with categorical data.

I'm hopeful that lavaan has the same functionality in this respect as OpenMx and Mplus.

OpenMx is vastly more flexible, but has a steeper learning curve.  Mplus has more features available, including links for count outcomes/indicators.

Terrence D. Jorgensen
Postdoctoral Researcher, Methods and Statistics
Research Institute for Child Development and Education, the University of Amsterdam

R McLaren

Nov 2, 2016, 8:25:08 AM
to lavaan
Thank you, Dr. Jorgensen! Yes, these are count data. I ran my code with the categorical variable added, but I'm not sure my syntax properly identifies WordFlow to lavaan as ordered and categorical, though the data are ordinal. I did get one unexpected negative covariance value, so I'm sorting that out at the moment. I also want to try loading more categorical counts onto a latent variable.

fit2wf <- cfa(cfa.model2wf, data = FlowDataset, ordered = "WordFlow")



Terrence Jorgensen

Nov 3, 2016, 4:05:24 AM
to lavaan
I'm not sure my syntax properly identifies WordFlow to lavaan as ordered and categorical, though the data are ordinal
fit2wf <- cfa(cfa.model2wf, data = FlowDataset, ordered = "WordFlow")

That's what the "ordered" argument does, and you should be able to tell because WordFlow thresholds will be among the parameter estimates in the summary() output.
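For example, with the fit object from that earlier post, the thresholds can also be pulled out directly; rows whose operator is "|" in the parameter table are thresholds:

pe <- parameterEstimates(fit2wf)
subset(pe, op == "|" & lhs == "WordFlow")   # one row per WordFlow threshold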

R MacLaren

Nov 4, 2016, 1:34:00 PM
to lav...@googlegroups.com
Yes there were three non-predicted covariances in the output, one negative.

147] WARNING: 11 warnings.
First and last 5 warnings:
Warning in muthen1984(Data = X[[g]], ov.names = ov.names[[g]], ov.types = ov.types, :
lavaan WARNING: trouble inverting W matrix; used generalized inverse
Warning in lav_model_vcov(lavmodel = lavmodel, lavsamplestats = lavsamplestats, :
lavaan WARNING: could not compute standard errors!
lavaan NOTE: this may be a symptom that the model is not identified.
. . .
Warning in lav_model_test(lavmodel = lavmodel, lavpartable = lavpartable, :
lavaan WARNING: could not compute scaled test statistic
Warning in lav_object_post_check(lavobject) :
lavaan WARNING: some estimated ov variances are negative

I've binned the categorical variable into bins of n = 2, 3, and 4 to try to gain power, and though the model converges, it does so with warnings (a sketch of the binning appears at the end of this post). The output below is from the un-binned raw data.

Thresholds:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
    wPFlow|t1        -0.340       NA                     -0.340   -0.340
    wPFlow|t2         0.606       NA                      0.606    0.606
    wPFlow|t3         1.245       NA                      1.245    1.245
    wPFlow|t4         1.805       NA                      1.805    1.805
    wPFlow|t5         1.887       NA                      1.887    1.887
    wPFlow|t6         2.262       NA                      2.262    2.262
    wPFlow|t7         2.517       NA                      2.517    2.517


When I treat the variable as interval-scaled from 0 through 8 and don't declare it as ordered, the model settles down, processes the input (of course), and gives meaningful, warning-free output.

So regarding declaring variables as ordered, does it make a difference that I'm hypothesizing the categorical variable in question is an 'outcome'? It is not hypothesized to be a causal precursor of other variables; in fact, temporally it doesn't seem likely to precede any of them. I'm hoping that with constrained hypotheses I can get some leverage.
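For reference, the binning mentioned above can be done with cut(); the column name matches the dataset used in this thread, but the cut points and bin labels here are purely illustrative:

# Collapse a sparse 0-8 count into a few ordered bins so that each
# category has enough observations to estimate its threshold.
FlowDataset$binwPFlow <- cut(FlowDataset$wPFlow,
                             breaks = c(-Inf, 0, 2, Inf),
                             labels = c("none", "few", "many"),
                             ordered_result = TRUE)
table(FlowDataset$binwPFlow)   # check cell sizes before refitting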





R McLaren

Nov 5, 2016, 3:42:30 PM
to lavaan
Adding categorical measures to the latent variable: I get a warning about negative variances, but I think I can ignore it.

Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
WordFlow         -0.000    0.015   -0.029    0.977   -0.003   -0.003

At this stage, is there any benefit in collapsing threshold ranges or binning the data column to gain power when N < 200? I.e., DMST, TSC, and wPFlow each with two levels. I'm assuming that, as in some multivariate models, precision and power might trade off. Will doing so help the model definition? All of these ordinals contribute to one latent variable.
Thresholds:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
    binwPFlow|t1      0.606    0.103    5.861    0.000    0.606    0.606
    binDMST|t1        0.202    0.097    2.069    0.039    0.202    0.202
    binTSC|t1         0.082    0.097    0.844    0.399    0.082    0.082

anova(fit1, fit2, fitMeasures=TRUE) does not work on my fit1 and fit2, but manual calculation shows the same justification for the less parsimonious model.
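One note: anova() for lavaan objects doesn't take a fitMeasures argument, which may be why that call fails. A sketch of how the comparison is usually requested, assuming fit1 and fit2 are nested models fit to the same data:

anova(fit1, fit2)        # scaled chi-square difference test; calls lavTestLRT()
lavTestLRT(fit1, fit2)   # the same test, called directly

# Fit measures are requested per model, separately from the difference test
fitMeasures(fit2, c("chisq.scaled", "df", "cfi", "rmsea", "srmr"))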

Is there a good book on R and lavaan for these sorts of models? I am reading Brown (2006), but it isn't written for R/lavaan. I've got a Kindle.

Terrence Jorgensen

Nov 8, 2016, 6:35:57 AM
to lavaan
At this stage, is there any benefit in collapsing threshold ranges or binning the data column to gain power when N < 200?

SEMNET is a good place to ask questions about SEM that aren't about lavaan:


Is there a good book on R and lavaan for these sorts of models? I am reading Brown (2006), but it isn't written for R/lavaan. I've got a Kindle.

This book has a chapter on categorical indicators:


This book also covers SEM using lavaan, but goes into more detail about IRT for categorical items using the psychometric and ltm packages.  If you want to use IRT, I suggest looking into the mirt package.

R McLaren

Nov 12, 2016, 1:15:39 PM
to lavaan
I've ordered the book, thank you! In the meantime, this morning I tried loading the latent factor with a few more measures of word frequency. The results still have a nuisance negative regression coefficient and a negative variance:
Regression
Satisfaction     -0.099    0.071   -1.399    0.162   -0.200   -0.200
Variance
SCHOOL         -0.077    0.143   -0.538    0.590   -0.077   -0.059
Should I be doing anything about the negative variance? Visual inspection of the data shows an increasing trend despite the negative variance and regression coefficient.
R

One comment I found says:

> I looked for the discussion in several multilevel modeling  
> textbooks but only found one short discussion in the book by Brown  
> and Prescott. SEM literature usually suggest fixing the negative  
> variances to 0. However, I wonder whether this is the only way to
> get around this problem or the sensible way because if the random  
> effects are fixed to 0 the model is no longer a random effects model.
>
> With best regards,
>
> Yu-Kang

Thanks! R
[Attachment: AAA NOV 12 2016 wPFlow x SAT RGraph.png]

R McLaren

Nov 15, 2016, 8:24:40 AM
to lavaan
OK, so I modified the model, and the output is showing negative variances for two variables.

Per http://davidakenny.net/cm/1factor.htm#Heywood, the options are:
                1) Treat as specification error and modify the model.
                2) Create a non-linear constraint on the loading to prevent it from being too large or prevent the error variance from being negative.
                3) Fix the standardized loading to one (usually one is subtracted from the degrees of freedom outputted by computer programs).

OK, so assuming I choose to fix the standardized loading to one (or zero) for a variable x, how might I do so? Do I just precede the variable with 1*?

How might I accomplish #2 if model respecification doesn't resolve it?

Thanks, rm

Terrence Jorgensen

Nov 16, 2016, 4:08:04 AM
to lavaan
OK, so I modified the model, and the output is showing negative variances for two variables.

Did you try SEMNET yet?  You keep posting issues about interpreting your output and what to do about a Heywood case, so you don't seem to have any problems running lavaan.

You haven't shared much information about the model you are fitting (e.g., path diagram, model syntax), but both of the negative variance estimates you have shown are much smaller in magnitude than their SEs, so they are within the bounds of sampling error.  If the true residual variance is close to zero, negative estimates become more frequent with smaller samples.  That doesn't rule out misspecification, but we don't have enough information to comment on that.

                1) Treat as specification error and modify the model.

If you have poor model fit, this is probably the one to focus on.

                2) Create a non-linear constraint on the loading to prevent it from being too large or prevent the error variance from being negative.

Doing this would make your test statistic distributed differently from a chi-squared random variable; it would be a mixture of chi-squareds.  Unconstrained estimation is also preferred so that the constraints don't bias any other parameter estimates.  If your model fits well and the Heywood case is within sampling error, then there is no strong evidence against the null hypothesis, and sampling error is the simplest explanation for your negative estimate.
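For reference, such a constraint would be written in lavaan by labeling the offending variance and adding an inequality constraint, roughly like the sketch below (placeholder names, and subject to the caveats above):

model.constrained <- '
  F  =~ x1 + x2 + x3
  x2 ~~ v2*x2    # label the residual variance of x2
  v2 > 0.001     # keep it (just) above zero
'
fit.constrained <- cfa(model.constrained, data = mydata)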




OK, so assuming I choose to fix the standardized loading to one (or zero) for a variable x, how might I do so? Do I just precede the variable with 1*?

That's how you constrain a parameter to a fixed value.  Standardization of estimates happens after estimation is complete, so you don't constrain standardized estimates (well, you can, but it's complicated -- it would involve labeling your parameters and solving the equation for the estimate you want to standardize, then using that as your equality constraint). 
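A small sketch of that pre-multiplication syntax with placeholder names, fixing an unstandardized loading and a residual variance (whether either fix is advisable is a separate question):

model.fixed <- '
  F  =~ 1*x1 + x2 + x3   # loading of x1 fixed to 1 (lavaan's default for the first indicator)
  x2 ~~ 0*x2             # residual variance of x2 fixed to 0
'
fit.fixed <- cfa(model.fixed, data = mydata)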

How might I accomplish #2 if model respecification doesn't resolve it?

Again, I'd advise against it, but...

R McLaren

Nov 16, 2016, 1:23:37 PM
to lavaan
Yes, I'm thinking the best way to approach SEMNET (I'm on the list) is to search the archives for comparable concerns. I'm currently reading up on SEM model evaluation. Sorry that my questions are chaotic, but I'm learning R, SEM, and the data all at the same time. The difference in the DWLS test statistic between fit1 and fit2 is 3 with 1 df, so the measurement side is consistent with what was shown before the ordinal variable was added to both models. I hope I'm reporting what is customary here.

> summary(fit2, fit.measures = TRUE, standardized=TRUE)
lavaan (0.5-22) converged normally after 112 iterations

  Number of observations                           169

  Estimator                                       DWLS      Robust
  Minimum Function Test Statistic              655.299     803.516
  Degrees of freedom                               550         550
  P-value (Chi-square)                           0.001       0.000
  Scaling correction factor                                  1.394
  Shift parameter                                          333.594
    for simple second-order correction (Mplus variant)

Model test baseline model:

  Minimum Function Test Statistic             5393.949    1729.473
  Degrees of freedom                               595         595
  P-value                                        0.000       0.000

User model versus baseline model:

  Comparative Fit Index (CFI)                    0.978       0.777
  Tucker-Lewis Index (TLI)                       0.976       0.758

  Robust Comparative Fit Index (CFI)                            NA
  Robust Tucker-Lewis Index (TLI)                               NA

Root Mean Square Error of Approximation:

  RMSEA                                          0.034       0.052
  90 Percent Confidence Interval          0.022  0.043       0.044  0.060
  P-value RMSEA <= 0.05                          0.998       0.304

Standardized Root Mean Square Residual:

  SRMR                                           0.081       0.081
  WRMR                                           0.994       0.994

Regressions:

                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
  SSA ~                                                                
    Intrinsic         0.135    0.035    3.908    0.000    0.531    0.531
  Performance ~                                                        
    SSA               0.251    0.189    1.328    0.184    0.319    0.319
    Flow              0.186    0.171    1.088    0.277    0.243    0.243
  Flow ~                                                               
    SSA               0.828    0.203    4.074    0.000    0.807    0.807
    Intrinsic         0.027    0.016    1.674    0.094    0.105    0.105
  Satisfaction ~                                                       
    Flow              1.847    0.247    7.481    0.000    0.676    0.676
  WordFlow ~                                                           
    Flow              1.446    0.466    3.100    0.002    0.245    0.245
    Satisfaction     -0.404    0.132   -3.065    0.002   -0.187   -0.187
    Performance       0.413    0.397    1.038    0.299    0.054    0.054

RMSEA, CFI, and TLI improved with the addition of the ordinal (what I consider an outcome) variable. So I'm evaluating model fit at the moment; it was really the presence of the negative estimate that had me spooked. Thanks for clearing that up. I'm not yet dissatisfied with model fit. So I look for outliers:


Thresholds:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
    binwf|t1          0.232    0.098    2.376    0.018    0.232    0.232
    binDT2|t1         0.202    0.097    2.069    0.039    0.202    0.202

Variances:

                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
   .INTRINSIC         0.000                               0.000    0.000
   .GPA               0.041    0.014    3.009    0.003    0.041    0.273
   .HON               0.398    0.141    2.825    0.005    0.398    0.236
   .ACAH              0.131    0.034    3.856    0.000    0.131    0.614
   .SCHOOL           -0.090    0.146   -0.615    0.538   -0.090   -0.068
   .LIFE              0.786    0.114    6.906    0.000    0.786    0.504
   .HEALTH            1.463    0.191    7.659    0.000    1.463    0.827
   .PHONE             1.157    0.153    7.573    0.000    1.157    0.866
   .STUDY             0.905    0.109    8.303    0.000    0.905    0.838
   .DISTRACT          0.693    0.077    8.947    0.000    0.693    0.813
   .FUN               1.108    0.154    7.174    0.000    1.108    0.834
   .MISTAKE           0.624    0.070    8.974    0.000    0.624    0.651
   .OFFICE            0.910    0.120    7.609    0.000    0.910    0.867
   .PART              0.687    0.086    7.979    0.000    0.687    0.741
   .REVIEW            0.658    0.077    8.579    0.000    0.658    0.776
   .DUE               2.085    0.291    7.164    0.000    2.085    0.843
   .CREATE            0.567    0.069    8.253    0.000    0.567    0.719
   .CLASS             0.213    0.019   11.005    0.000    0.213    0.754
   .MENTAL            0.261    0.033    7.842    0.000    0.261    0.604
   .INVOLVE           1.505    0.234    6.443    0.000    1.505    0.958
   .OPPORT            1.153    0.157    7.326    0.000    1.153    0.850
   .BOOKS             0.877    0.125    6.997    0.000    0.877    0.811
   .CONT              0.491    0.056    8.825    0.000    0.491    0.723
   .STAND             0.515    0.058    8.818    0.000    0.515    0.722
   .NEW               0.582    0.054   10.687    0.000    0.582    0.703
   .ATTEND            1.801    0.221    8.134    0.000    1.801    0.750
   .SKILLS            0.828    0.074   11.245    0.000    0.828    0.773
   .SUCCESS           0.787    0.102    7.712    0.000    0.787    0.678
   .ELSE              1.370    0.174    7.890    0.000    1.370    0.720
   .RELATION          1.075    0.107   10.068    0.000    1.075    0.871
   .ASSIGN            0.832    0.093    8.914    0.000    0.832    0.635
   .ESCAPE            2.784    0.417    6.673    0.000    2.784    0.888
   .TRAVEL            1.278    0.144    8.900    0.000    1.278    0.632
   .binwf            -5.560                              -5.560   -5.560
   .binDT2            0.954                               0.954    0.954
    Intrinsic         2.747    0.297    9.247    0.000    1.000    1.000
   .Performance       0.078    0.017    4.713    0.000    0.706    0.706
   .Satisfaction      0.763    0.158    4.822    0.000    0.543    0.543
   .SSA               0.128    0.056    2.275    0.023    0.718    0.718
   .Flow              0.047    0.015    3.018    0.003    0.248    0.248
   .WordFlow          6.281   19.355    0.325    0.746    0.958    0.958

binwf is the elephant in the room. Aside from that, I'm currently judging the model fit to be satisfactory. But keep in mind that I'm new to this kind of work, so I don't have much confidence in my determination.
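One way to follow up on local fit, assuming the fit2 object above, is to look at the correlation residuals and modification indices rather than only the global indices:

residuals(fit2, type = "cor")                       # large residual correlations flag local misfit
head(modificationIndices(fit2, sort. = TRUE), 10)   # largest suggested modifications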

RM

R McLaren

Nov 16, 2016, 1:50:15 PM
to lavaan

Here's a screen shot of the model so far.
Thx.

RM

R McLaren

Nov 16, 2016, 2:38:09 PM
to lavaan
Yes, I just wanted to add that my aim has been not to adjust the model too much, to stay mindful of the initial research hypotheses. But if there's a way to pare down or tighten the model, I'll look for that too; I just haven't figured out how yet.

I've included Satisfaction as influencing WordFlow only because it seemed plausible and reasonable, as with Performance; though these paths were not initially our focus, they are explicable, sensible, and therefore discussable. If the analysis suggests a causal path from WordFlow to Satisfaction, I'm not confident in it, because in terms of order of presentation Satisfaction precedes WordFlow. I didn't explicitly control for that, but it is fair to assume given the layout of the materials.

RM

R McLaren

Nov 17, 2016, 2:43:36 PM
to lavaan
The model layout is below. I've tried to simplify the regressions to only those hypothesized a priori, with the exceptions of Satisfaction -> WordFlow, Performance -> WordFlow.

cfa.model2 <- '
    Intrinsic =~ INTRINSIC
    Performance =~ GPA + HON + ACAH
   
    Satisfaction =~ SCHOOL+LIFE+HEALTH
    SSA =~ PHONE+STUDY+DISTRACT+FUN+MISTAKE+OFFICE+PART+REVIEW+DUE+CREATE+CLASS+MENTAL+INVOLVE+OPPORT+BOOKS     
    Flow =~ CONT+STAND+NEW+ATTEND+SKILLS+SUCCESS+ELSE+RELATION+ASSIGN+ESCAPE+TRAVEL
    WordFlow =~ binwf + binDT2
   
    # regressions
    SSA ~ Intrinsic
    Performance ~ SSA + Flow
    Flow ~ SSA + Intrinsic
    Satisfaction ~ Flow
    WordFlow ~ Flow + Satisfaction + Performance
   
    # residual correlations
    MENTAL ~~ STUDY
    CONT ~~ STUDY
'
fit2 <- cfa(cfa.model2, data = FlowDataset, ordered=c("binwf","binDT2"))

summary(fit2, fit.measures = TRUE, standardized=TRUE)

When lavaan warns me that some variances are negative, for example, I don't want to ignore that in interpretation unless I can be sure it isn't critical. I understand now that there's a way to judge which negative estimates are critical. I'm not getting warnings about model identification so far.
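Following the earlier advice about sampling error, a quick way to screen negative estimates against their standard errors (a sketch; only parameters that have an SE can be judged this way):

pe <- parameterEstimates(fit2)
neg <- subset(pe, op == "~~" & lhs == rhs & est < 0)   # negative (residual) variance estimates
neg[, c("lhs", "est", "se", "z", "pvalue")]            # small |z| suggests sampling error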




Yves Rosseel

Nov 18, 2016, 5:52:17 AM
to lav...@googlegroups.com
On 11/17/2016 08:43 PM, R McLaren wrote:
> When lavaan warns me that some variances are negative

Negative variances can be a problem of your dataset (perhaps a small sample?). In that case, the problem should go away if you resample or add more observations.

But they can also be a symptom of model misspecification. In that case, getting a new sample or adding new observations will (typically) not help.

Yves.

R McLaren

Nov 18, 2016, 11:44:31 AM
to lavaan
Getting more subjects is not possible at this point. One way the model could be respecified would be to make the WordFlow factor an indicator of Flow (i.e., 'Flow =~ ... + WordFlow'), with fewer regressions, +1 df, and more parsimony; a sketch of that syntax follows. The output shows SRMR rose by a fraction and the other indices are unremarkable, but one previously negative covariance (Performance ~~ Satisfaction) is now positive.
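For concreteness, here is one reading of that respecification as lavaan syntax, built from the cfa.model2 posted earlier; the thread doesn't show the exact refit, so which regressions were dropped is a guess, and fit3 is just a placeholder name:

cfa.model3 <- '
    Intrinsic =~ INTRINSIC
    Performance =~ GPA + HON + ACAH

    Satisfaction =~ SCHOOL + LIFE + HEALTH
    SSA =~ PHONE + STUDY + DISTRACT + FUN + MISTAKE + OFFICE + PART + REVIEW + DUE + CREATE + CLASS + MENTAL + INVOLVE + OPPORT + BOOKS
    # WordFlow is now an indicator of Flow rather than a regression outcome
    Flow =~ CONT + STAND + NEW + ATTEND + SKILLS + SUCCESS + ELSE + RELATION + ASSIGN + ESCAPE + TRAVEL + WordFlow
    WordFlow =~ binwf + binDT2

    # regressions (the WordFlow ~ Flow + Satisfaction + Performance line is dropped)
    SSA ~ Intrinsic
    Performance ~ SSA + Flow
    Flow ~ SSA + Intrinsic
    Satisfaction ~ Flow

    # residual correlations
    MENTAL ~~ STUDY
    CONT ~~ STUDY
'
fit3 <- cfa(cfa.model3, data = FlowDataset, ordered = c("binwf", "binDT2"))
summary(fit3, fit.measures = TRUE, standardized = TRUE)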

lavaan (0.5-22) converged normally after 105 iterations


  Number of observations                           169

  Estimator                                       DWLS      Robust
  Minimum Function Test Statistic              668.634     814.000
  Degrees of freedom                               551         551
  P-value (Chi-square)                           0.000       0.000
  Scaling correction factor                                  1.393
  Shift parameter                                          334.128

    for simple second-order correction (Mplus variant)

Model test baseline model:

  Minimum Function Test Statistic             5393.949    1729.473
  Degrees of freedom                               595         595
  P-value                                        0.000       0.000

User model versus baseline model:

  Comparative Fit Index (CFI)                    0.975       0.768
  Tucker-Lewis Index (TLI)                       0.974       0.750


  Robust Comparative Fit Index (CFI)                            NA
  Robust Tucker-Lewis Index (TLI)                               NA

Root Mean Square Error of Approximation:

  RMSEA                                          0.036       0.053
  90 Percent Confidence Interval          0.025  0.045       0.045  0.061
  P-value RMSEA <= 0.05                          0.996       0.239

  Robust RMSEA                                                  NA
  90 Percent Confidence Interval                                NA     NA


Standardized Root Mean Square Residual:

  SRMR                                           0.082       0.082

Weighted Root Mean Square Residual:

  WRMR                                           1.004       1.004

Parameter Estimates:

  Information                                 Expected
  Standard Errors                           Robust.sem

Latent Variables:

                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
  Intrinsic =~                                                         
    INTRINSIC         1.000                               1.657    1.000
  Performance =~                                                       
    GPA               1.000                               0.333    0.855
    HON               3.411    0.515    6.630    0.000    1.136    0.875
    ACAH              0.862    0.162    5.335    0.000    0.287    0.622
  Satisfaction =~                                                      
    SCHOOL            1.000                               1.192    1.040
    LIFE              0.733    0.095    7.746    0.000    0.874    0.700
    HEALTH            0.462    0.103    4.489    0.000    0.551    0.414
  SSA =~                                                               
    PHONE             1.000                               0.423    0.366
    STUDY             0.989    0.248    3.988    0.000    0.418    0.402
    DISTRACT          0.944    0.236    4.009    0.000    0.399    0.432
    FUN               1.111    0.313    3.550    0.000    0.470    0.408
    MISTAKE           1.370    0.335    4.083    0.000    0.579    0.591
    OFFICE            0.884    0.286    3.095    0.002    0.374    0.365
    PART              1.161    0.295    3.936    0.000    0.491    0.510
    REVIEW            1.032    0.246    4.197    0.000    0.436    0.474
    DUE               1.472    0.442    3.328    0.001    0.622    0.396
    CREATE            1.114    0.286    3.900    0.000    0.471    0.530
    CLASS             0.622    0.176    3.539    0.000    0.263    0.495
    MENTAL            0.980    0.231    4.240    0.000    0.414    0.630
    INVOLVE           0.605    0.332    1.825    0.068    0.256    0.204
    OPPORT            1.069    0.379    2.818    0.005    0.452    0.388
    BOOKS             1.070    0.347    3.079    0.002    0.452    0.435
  Flow =~                                                              
    CONT              1.000                               0.434    0.526
    STAND             1.029    0.157    6.568    0.000    0.446    0.528
    NEW               1.143    0.181    6.308    0.000    0.496    0.545
    ATTEND            1.787    0.344    5.193    0.000    0.775    0.500
    SKILLS            1.134    0.173    6.567    0.000    0.492    0.475
    SUCCESS           1.408    0.266    5.289    0.000    0.611    0.567
    ELSE              1.681    0.301    5.591    0.000    0.729    0.529
    RELATION          0.923    0.248    3.724    0.000    0.400    0.360
    ASSIGN            1.594    0.266    5.987    0.000    0.692    0.604
    ESCAPE            1.363    0.353    3.858    0.000    0.591    0.334
    TRAVEL            1.990    0.297    6.698    0.000    0.863    0.607
    WordFlow          0.808    0.251    3.219    0.001    0.112    0.112
  WordFlow =~                                                          
    binwf             1.000                               3.131    3.131
    binDT2            0.056    0.282    0.198    0.843    0.175    0.175

Covariances:

                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
.Performance ~~                                                       
   .Satisfaction      0.009    0.025    0.364    0.716    0.036    0.036

...and again, the large negative variance estimate for binwf:

Variances:

                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all

   .binwf            -8.801                              -8.801   -8.801
   .binDT2            0.969                               0.969    0.969

    Intrinsic         2.747    0.297    9.247    0.000    1.000    1.000
   .Performance       0.078    0.017    4.716    0.000    0.708    0.708
   .Satisfaction      0.801    0.167    4.806    0.000    0.564    0.564
   .SSA               0.128    0.056    2.276    0.023    0.719    0.719
   .Flow              0.045    0.015    2.954    0.003    0.237    0.237
    WordFlow          9.678   49.074    0.197    0.844    0.987    0.987
 
The other predicted regressions are unchanged. I'm going to assume that simplifying the model is an appropriate tactic for respecification and work toward it by generating some test models.

RM