If you're calculating correlations between single observed variables, the only things you need to determine the p-value of the correlation are the magnitude of the correlation and the sample size.
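For instance, here is a quick Python sketch (stdlib only, using the Fisher z approximation rather than the exact t test) showing that r and n alone are enough to get a p-value:

```python
import math

def correlation_p_value(r, n):
    """Approximate two-sided p-value for a Pearson correlation r with
    sample size n, via the Fisher z-transformation:
    z = atanh(r) * sqrt(n - 3) is roughly standard normal under the
    null hypothesis of zero correlation."""
    z = math.atanh(r) * math.sqrt(n - 3)
    # two-sided normal tail probability via the complementary error function
    p = math.erfc(abs(z) / math.sqrt(2))
    return z, p

# Nothing else about the variables is needed - just r and n
z_stat, p_value = correlation_p_value(0.3, 100)
```

The same r with a larger n gives a smaller p-value, and vice versa - which is exactly why, for observed variables, the correlation and the sample size pin the p-value down.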
When you're working with latent variables (factors) it's more complex, because there's more than that going into the calculation of the standard error.
For example, when you correlate single observed variables, the reliability of each variable affects the correlation - the higher the reliability, the higher the correlation will be (imagine you measured two variables with zero reliability - your correlation would be zero). You could calculate the attenuation-corrected correlation coefficient - you'd say "Well, I found a correlation of 0.3, but my variables are not perfectly reliable, so when I correct the correlation for attenuation, I get 0.5." Two correlations of the same magnitude (and hence the same p-value) can therefore lead to different attenuation-corrected correlations - the p-values are the same, but the corrected correlations are different.
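The correction here is Spearman's classic disattenuation formula, dividing the observed correlation by the square root of the product of the two reliabilities. The reliabilities of 0.6 below are made-up illustrative values, chosen so that the 0.3 in the example comes out as exactly 0.5:

```python
import math

def disattenuate(r_xy, rel_x, rel_y):
    """Spearman's correction for attenuation: divide the observed
    correlation by the square root of the product of the reliabilities."""
    return r_xy / math.sqrt(rel_x * rel_y)

# assumed reliabilities of 0.6 for both variables (illustrative only)
corrected = disattenuate(0.3, 0.6, 0.6)  # -> 0.5
```

With lower reliabilities the same observed 0.3 would correct to an even larger value, which is why identical observed correlations can yield different corrected ones.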
It's worse than that, though - the reliability estimates also have standard errors, and these feed into the standard error of the corrected correlation. And then there are things like correlated errors - a model might give you an estimate, but if it's not identified, its standard error will be infinite. You don't need to go to that extreme, though - adding a correlation in one part of a model could change the estimates, or the p-values, or both (or neither) of any other part of the model.
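A rough Monte Carlo sketch of the first point - if the reliability estimates are themselves noisy, that noise propagates into the corrected correlation. The reliabilities of 0.6 and their standard error of 0.05 are made-up illustrative values:

```python
import math
import random
import statistics

random.seed(0)

def disattenuate(r_xy, rel_x, rel_y):
    """Spearman's correction for attenuation."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Pretend each reliability of 0.6 is only an estimate with standard error 0.05,
# and see how that uncertainty flows through to the corrected correlation.
corrected_draws = []
for _ in range(10_000):
    rel_x = random.gauss(0.6, 0.05)
    rel_y = random.gauss(0.6, 0.05)
    corrected_draws.append(disattenuate(0.3, rel_x, rel_y))

center = statistics.mean(corrected_draws)   # near the corrected value of 0.5
spread = statistics.stdev(corrected_draws)  # extra uncertainty from the reliabilities
```

The observed correlation of 0.3 is held fixed here, yet the corrected correlation still varies from draw to draw - uncertainty the plain r-and-n p-value knows nothing about.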
In short, these aren't inconsistencies. They are expected. You interpret the statistical significance in the same way that you interpret the significance of any other parameter in the model.
Jeremy