Eric--
A factor derives its variance from its indicators, unless the variance is assigned directly (say, through standardization). You must always do something to set the scale (that is, the variance) of your factors. There are good statistical reasons to use a reference variable (set one indicator's loading to 1) rather than standardizing the factor. When indicators are ordinal, their own variance is arbitrary, so many packages will scale each indicator's variance to 1.
I'm sorry, I got a little confused previously and made an incorrect connection. Here is what happens. You have an ordinal indicator y, and y's variance is arbitrarily scaled to 1. It loads on factor f. Then you have an equation like:
y = 1 * f + e
if you make y the factor's reference variable. Here, e is "error."
Let's assume that y, f and e all have means of 0, and f and e are orthogonal. Then this equation implies:
Variance(y) = 1^2 * Variance(f) + Variance(e)
or
1 = 1^2 * Variance(f) + Variance(e)
or
1 = Variance(f) + Variance(e)
or
Variance(f) = 1 - Variance(e)
and that is where the variance of the factor comes from, if you don't standardize the factor.
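The decomposition above is easy to check by simulation. Here is a minimal sketch (not from the original message) that generates y = 1*f + e with f and e orthogonal, zero-mean, and Variance(y) scaled to 1; the variance values 0.6 and 0.4 are arbitrary choices for illustration:

```python
import random

random.seed(0)
n = 100_000
var_f, var_e = 0.6, 0.4  # chosen so Variance(y) = var_f + var_e = 1

# f and e are independent (hence orthogonal) with mean 0
f = [random.gauss(0, var_f ** 0.5) for _ in range(n)]
e = [random.gauss(0, var_e ** 0.5) for _ in range(n)]

# y = 1*f + e, with loading fixed to 1 (y as reference variable)
y = [fi + ei for fi, ei in zip(f, e)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

print(round(variance(y), 2))                 # close to 1
print(round(variance(y) - variance(e), 2))   # close to Variance(f) = 0.6
```

The second printed value recovers Variance(f) as 1 - Variance(e), exactly as in the derivation.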
--Ed Rigdon