Yes, you can use the NEW command to name the new variable, then define it below (I think in a MODEL CONSTRAINT command). You can search for examples in the user guide or forum, especially in the mediation models, for defining new parameters.
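For example, a minimal Mplus sketch of that approach (the variable names x, m, y and the labels a and b are hypothetical, just to illustrate the syntax for a simple mediation model):

```
MODEL:
  m ON x (a);   ! label the a path
  y ON m (b);   ! label the b path
MODEL CONSTRAINT:
  NEW(ab);      ! declare the new parameter
  ab = a*b;     ! define it as the indirect effect
```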
Well, they are if you use parameterization = "theta", although they are nonetheless fixed (to 1) by default for identification. But the issue with identification is that with a single continuous indicator, you only have 1 piece of observed information (the variance), so you can only estimate 1 piece of information (either the factor variance or loading, fixing the other to 1). With a categorical indicator, you don't have an observed variance. You are fitting the model to a polychoric correlation matrix, in which the (total) variances are fixed to 1. So you cannot estimate anything in a single-indicator construct.
Using the default parameterization = "delta", the residual variances are fixed to 1 minus the common-factor variance for identification, such that the model-implied total variances equal 1. So the trick is simply to fix the factor loading to the square root of the reliability (with std.lv = TRUE, the factor variance is 1, so the squared loading is the common-factor variance, i.e., the reliability), in which case the residual variance will be set to 1 minus the reliability. For example, suppose the reliability of "u4" is 0.64 (however that is supposed to have been estimated...), the square root of which is 0.8:
myData <- read.table("http://www.statmodel.com/usersguide/chap5/ex5.16.dat")
names(myData) <- c("u1","u2","u3","u4","u5","u6","x1","x2","x3","g")
model <- '
f1 =~ u1 + u2 + u3
f2 =~ 0.8*u4  # reliability == 0.64
'
summary(cfa(model, data = myData, ordered = paste0("u", 1:4), std.lv = TRUE))
Notice that the residual variance of u4 in the summary is fixed to 1 - 0.64 = 0.36.
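Under parameterization = "theta", the equivalent trick works too, but the arithmetic changes because the residual variances are fixed to 1 instead. A sketch (assuming std.lv = TRUE, so the factor variance is 1): the reliability is loading^2 / (loading^2 + 1), so solve for the loading, which for reliability 0.64 gives sqrt(0.64 / 0.36), about 1.333:

```r
rel <- 0.64
lam <- sqrt(rel / (1 - rel))  # lam^2 / (lam^2 + 1) == rel
model <- paste0('
f1 =~ u1 + u2 + u3
f2 =~ ', lam, '*u4
')
fit <- cfa(model, data = myData, ordered = paste0("u", 1:4),
           std.lv = TRUE, parameterization = "theta")
```

The two parameterizations should imply the same standardized solution; the difference is only in which parameter (residual variance vs. total variance) is fixed for identification.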
It's up to you. I agree it is highly dubious to assume a single indicator is error-free, but it is also suspicious to use a reliability estimate for a single categorical item. When we calculate scale reliability, it is the reliability of the composite (e.g., a scale sum or scale mean), not the reliability of each individual scale item. If you are using a single categorical indicator, I doubt it is a composite of many items. There is nothing wrong with choosing a few different values (including perfect reliability) and comparing results as a sensitivity analysis to different reliability assumptions.
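A minimal sketch of such a sensitivity analysis in lavaan, reusing myData from the example above (the candidate reliability values and the choice to compare the factor correlation are arbitrary, just for illustration):

```r
relVals <- c(1, 0.9, 0.8, 0.64)  # assumed reliabilities for u4
fits <- lapply(relVals, function(rel) {
  model <- paste0('
f1 =~ u1 + u2 + u3
f2 =~ ', sqrt(rel), '*u4
')
  cfa(model, data = myData, ordered = paste0("u", 1:4), std.lv = TRUE)
})
## compare, e.g., the standardized f1-f2 covariance across assumptions
sapply(fits, function(f) lavInspect(f, "std")$psi["f2", "f1"])
```

If the substantive conclusions are stable across the assumed values, the reliability assumption matters little; if they change, that is worth reporting.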