That's not exactly what the SEM in your diagram represents. The indicators are not predictors of satisfaction; they are outcomes of the latent variables (just as satisfaction is an outcome of the latent variables). In principle, your model is equivalent to one in which satisfaction is a multidimensional indicator of the latent constructs, whereas the other indicators are unidimensional indicators of only one construct each (but that is obviously not representative of your theory).
Hypothetically, if only ONE of your indicators changed its level while the other indicators of the same construct remained at the same levels, then that change would be due to something unique about that indicator, not something common to all the indicators. The latent variable represents the source of the common variance among all the indicators. If you think there is something unique about an indicator that predicts satisfaction (beyond what the latent construct itself explains about satisfaction), then you need a regression path from that indicator to satisfaction, or equivalently, another latent variable that points to both that indicator and to satisfaction.
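In lavaan syntax, such a direct path is just one more line in the regressions block. A hypothetical sketch (the construct and indicator names here are placeholders, not taken from your model):

```r
## hypothetical model: y4 is an indicator of F1, but also has a
## direct path to satisfaction beyond what F1 explains
model <- '
  F1 =~ y1 + y2 + y3 + y4
  F2 =~ y5 + y6 + y7
  satisfaction ~ F1 + F2 + y4   # direct effect of the unique part of y4
'
```

You would then fit this with sem() or lavaan() as usual.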
If, however, you want to see how the predicted values of satisfaction would change as levels of the predictor(s) (i.e., the latent variables) change, then you already have the information you need in the lavaan output. Save the regression coefficients, then create a new data object (the way you would when using the predict() method for an lm or glm object) that contains combinations of values of your latent variables, then plug those "newdata" values into the regression equation to generate predicted values.
For example, if you have 2 latent variables (L1 and L2), then your regression coefficients can be found in the "Intercepts" and "Regressions" sections of the summary() output from the lavaan object. Save those values as b0, b1, and b2. If you used the fixed-factor method of identification, then you can pick 3 meaningful levels of each latent variable: the mean (0) and 1 SD above and below the mean (+/- 1). Put all combinations of these into a single data frame:
newdata <- expand.grid(L1 = c(-1, 0, 1), L2 = c(-1, 0, 1))
newdata
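If you would rather not copy b0, b1, and b2 out of summary() by hand, you can extract them from the fitted object. A sketch, assuming your fitted lavaan object is named fit (a hypothetical name), it was fitted with meanstructure = TRUE, and the structural equation is satisfaction ~ L1 + L2:

```r
## assumes a fitted lavaan object `fit` with a mean structure
est <- parameterEstimates(fit)
b0 <- est$est[est$lhs == "satisfaction" & est$op == "~1"]
b1 <- est$est[est$lhs == "satisfaction" & est$op == "~" & est$rhs == "L1"]
b2 <- est$est[est$lhs == "satisfaction" & est$op == "~" & est$rhs == "L2"]
```

The op column distinguishes intercepts ("~1") from regression slopes ("~"), so the same logical subsetting works for any number of predictors.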
Then calculate predicted values by plugging them into the regression equation:
newdata$pred <- b0 + b1*newdata$L1 + b2*newdata$L2
newdata
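As a quick sanity check, note that when both latent variables are at their means (0), the prediction is just the intercept. A self-contained sketch with made-up coefficient values (b0, b1, b2 below are arbitrary, not from your output):

```r
## made-up coefficients purely for illustration
b0 <- 3.0; b1 <- 0.5; b2 <- 0.25

newdata <- expand.grid(L1 = c(-1, 0, 1), L2 = c(-1, 0, 1))
newdata$pred <- b0 + b1 * newdata$L1 + b2 * newdata$L2

## at L1 = 0 and L2 = 0 the prediction equals the intercept
newdata$pred[newdata$L1 == 0 & newdata$L2 == 0]  # 3
```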
Finally, you can plot the predicted values:
plot(pred ~ L1, data = newdata[newdata$L2 == 0, ], type = "l")
plot(pred ~ L2, data = newdata[newdata$L1 == 1, ], type = "l")
If you want to get fancy, the "car" package provides a 3D scatterplot that builds on the "rgl" package:
install.packages(c("car", "rgl"))
library(car)
scatter3d(pred ~ L1 + L2, data = newdata, fit = "smooth")
But because you can't (easily) model a latent interaction in SEM (unless you use Bayesian estimation), the prediction plane will be flat, so you don't gain much from the 3D plot.
Terry