For a hierarchical factor model, I agree that the most sensible identification strategy is to fix the loading of a single indicator on each factor to 1.0 and freely estimate the factor's (residual) variance. That applies to both the first-order factors and the higher-order factor. You should not fix both the variance and a loading for the same factor: doing so over-constrains the factor's scale, and the results will not be sensible.
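For concreteness, the unit-loading strategy might look like the following lavaan-style model syntax (the kind of specification Genomic SEM accepts); the factor and indicator names here are placeholders, not from your model. Note that fixing the first indicator's loading to 1 while freeing the factor variance is lavaan's default behavior, so the `1*` prefixes below just make the identification explicit:

```
# First-order factors: one loading per factor fixed to 1,
# factor variances freely estimated
F1 =~ 1*x1 + x2 + x3
F2 =~ 1*y1 + y2 + y3
F3 =~ 1*z1 + z2 + z3

# Higher-order factor: same strategy, one loading fixed to 1
G =~ 1*F1 + F2 + F3

# (Residual) variances estimated freely
F1 ~~ F1
F2 ~~ F2
F3 ~~ F3
G  ~~ G
```

The alternative (unit-variance) strategy would instead free all loadings and fix each factor variance to 1 (e.g., `F1 ~~ 1*F1`); the point is to use one convention or the other per factor, never both.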
If you are using a traditional identification strategy (either unit variance or unit loading, but not both) for each factor and you still get an unstable solution with very large SEs, a few possible explanations come to mind:

1. a coding error or otherwise misspecified model,
2. one of the phenotypes has trivial SNP heritability and/or a very low N,
3. the rG between at least one pair of indicators of a factor is extremely close to zero,
4. a factor has fewer than 3 indicators.

I'm sure there are other possible reasons, but those are the ones that come to mind.
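Cause 3 is easy to screen for before fitting anything: scan the genetic correlation matrix for indicator pairs with near-zero rG. A minimal sketch, using a hypothetical 4-phenotype rG matrix (in practice you would pull this from your LD score regression output):

```python
import numpy as np

# Hypothetical genetic correlation (rG) matrix for four phenotypes;
# substitute the standardized genetic covariance matrix from your own pipeline.
rg = np.array([
    [1.00, 0.45, 0.02, 0.50],
    [0.45, 1.00, 0.40, 0.55],
    [0.02, 0.40, 1.00, 0.48],
    [0.50, 0.55, 0.48, 1.00],
])

# Flag indicator pairs whose |rG| is extremely close to zero: such pairs
# give a common factor almost no shared signal, which can destabilize
# the solution and inflate SEs.
threshold = 0.05
i_upper, j_upper = np.triu_indices_from(rg, k=1)
weak_pairs = [(int(i), int(j)) for i, j in zip(i_upper, j_upper)
              if abs(rg[i, j]) < threshold]
print(weak_pairs)  # → [(0, 2)]: phenotypes 0 and 2 share almost no rG
```

The threshold of 0.05 is arbitrary; the useful signal is any off-diagonal entry that is an order of magnitude smaller than the rest of the pairs loading on the same factor.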