I would like to run a regression of a target variable "y" on two observed variables, "A" and "B".
My goal is a sensitivity analysis for y: I want to use the coefficients to estimate the effects of A and B on y. However, I also want to incorporate my theory that variation in B is partially caused by A.
There is a two-step solution for this situation that does not involve SEM:
1) Do a linear regression to predict B as a function of A. Then compute a new variable Z, the residual of B from this regression. This residual is the part of B that is not caused by A.
2) Do a linear regression to predict "y" as a function of A and Z. The sensitivity analysis will then assign credit to A and, separately, to the part of B that is not due to A.
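For concreteness, here is a sketch of the two-step procedure in Python/numpy, using simulated data whose true coefficients (0.5, 2.0, 3.0) I made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Simulated data consistent with the theory: A causes part of B,
# and both A and B affect y. All coefficients are arbitrary choices.
A = rng.normal(size=n)
B = 0.5 * A + rng.normal(size=n)        # B is partially caused by A
y = 2.0 * A + 3.0 * B + rng.normal(size=n)

# Step 1: regress B on A; Z is the residual of B
# (the part of B not caused by A).
X1 = np.column_stack([np.ones(n), A])
b1, *_ = np.linalg.lstsq(X1, B, rcond=None)
Z = B - X1 @ b1

# Step 2: regress y on A and Z. Because Z is orthogonal to A by
# construction, the coefficient on A absorbs everything correlated
# with A (here 2.0 + 3.0 * 0.5 = 3.5), while the coefficient on Z
# reflects the A-free part of B (here 3.0).
X2 = np.column_stack([np.ones(n), A, Z])
b2, *_ = np.linalg.lstsq(X2, y, rcond=None)
print(b2)  # roughly [0, 3.5, 3.0] for this data-generating process
```

Note that the coefficient on A in step 2 comes out as the *total* effect of A (direct plus via B), which is exactly the credit assignment I want for the sensitivity analysis.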
Question: Is there a natural way to do this modeling in SEM in one step? Is SEM appropriate for this situation? Is there some slightly more complex situation where SEM would become clearly preferable to the two-stage linear regression approach?
My (possibly misguided) idea of how to do this sensitivity analysis in SEM is to regress B on A. Since B is an observed variable, it will have a latent residual variable attached to it (an error term). I could then add a regression to the graph of the form "y ~ A + residualB". Is it possible to access the error term of B in lavaan? Is this approach reasonable?
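To make the question concrete, here is the lavaan model syntax I have in mind. The phantom-latent trick for exposing the error term of B is my guess at how this might be done, not something I know lavaan supports:

```r
# lavaan model syntax only (a sketch; not tested)
B ~ A              # variation in B is partially caused by A
residB =~ 1*B      # phantom latent with loading on B fixed to 1
B ~~ 0*B           # fix the residual variance of B to zero, so residB absorbs it
residB ~~ 0*A      # the residual of B is uncorrelated with A by construction
y ~ A + residB     # sensitivity regression: A plus the A-free part of B
```

Here `residB` is a name I made up, and I do not know whether lavaan actually permits this combination of constraints; that is part of what I am asking.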