Thanks for developing SALSA and making it freely available.
I think the elephant in the room with least squares for surveying is how propagated errors get scaled. It happens twice - once by the standard deviation of unit weight (s0), and again for the confidence interval (usually 95%). Unfortunately, different software packages handle each of those steps differently.
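To make the two steps concrete, here's a minimal sketch in Python. The numbers (a 0.010 m propagated error, s0 = 1.30) are made up purely for illustration and don't come from any of the packages discussed below.

```python
# Minimal sketch of the two scalings, with made-up numbers (not from any package).
sigma_prop = 0.010   # a priori propagated std. error of a coordinate, meters
s0 = 1.30            # standard deviation of unit weight from the adjustment
k95 = 1.96           # 95% multiplier for a 1D quantity, large degrees of freedom

# Step 1: scale by s0 (some packages always do this, others only on a failed chi-square test)
sigma_post = s0 * sigma_prop

# Step 2: scale up to the 95% confidence level
error_95 = k95 * sigma_post

print(f"95% error = {error_95:.4f} m")   # 1.96 * 1.30 * 0.010 = 0.0255 m
```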
I'm most familiar with Trimble Business Center (formerly Trimble Geomatics Office), Microsurvey StarNet and Carlson SurvNet, all of which have been around for decades. I'm generalizing a little (and correct me if I'm wrong), but basically:
For the first scaling (by s0), Trimble always applies it, but SurvNet & StarNet only do so if the network fails the chi-square test (measurement residuals too large).
For the second scaling (using 95% here), Trimble & StarNet use ~1.96 for univariate and ~2.45 for bivariate (2D error ellipses), regardless of the degrees of freedom in the network. Carlson has the option to do the same, or to use larger multipliers based on the F statistic and the network's degrees of freedom.
As I understand from the user manual, SALSA always does the first scaling (up or down by s0), and then scales up to 95% using the larger scales based on the network's degrees of freedom.
As I see it, the differences in scaling are fundamentally different opinions about how reliable your a priori standard errors are (how well you know your measurement capabilities). StarNet, for example, assumes you know them well, from lots of experience. If you get a really low s0, that's too good to be true, so it's not going to scale all the propagated errors down. For the second scaling up to 95%, even if you have 3 degrees of freedom in the network, StarNet is still going to scale by 1.96 on coordinates & 2.45 on 2D ellipses (not 3.18 & 4.37 as I think SALSA would - see end), again because you know your a priori standard errors from lots of experience, not just from this last survey.
The major issue with all this in surveying is that our primary national property surveying accuracy standard (ALTA-NSPS Land Title Surveys) is based on the semi-major axis of the 95% relative error ellipse, but different software makes those ellipses different sizes.
Since there's least squares in the standards, and in other things like total station resection setups, we have to do better at explaining it to surveyors in a fairly simple way. For example, 1.96 (for 1D) gets you 95% of the area under a normal curve, while 2.45 (for 2D ellipses) gets you 95% of the volume under a normal surface. And each multiplier is even bigger if you say you only know your population from a small sample.
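For anyone who wants to check those large-sample multipliers, here's a short Python/scipy snippet (assuming scipy is available); it's the normal/chi-square equivalent of the F.INV formulas at the end of this post.

```python
# Large-sample 95% multipliers from the normal and chi-square distributions.
from scipy.stats import norm, chi2

k_1d = norm.ppf(0.975)               # two-sided 95% for one coordinate -> ~1.96
k_2d = chi2.ppf(0.95, df=2) ** 0.5   # 95% region of a 2D error ellipse -> ~2.45

print(f"1D: {k_1d:.3f}, 2D: {k_2d:.3f}")   # 1.960, 2.448
```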
What's hard to explain is what a Wild West it is with scaling approaches, even among the experts & well-established surveying software.
For the F-statistic in the examples above, I used the Excel function F.INV().
For network degrees of freedom = "infinite" (999999):
1D 95% multiplier = sqrt(1*F.INV(0.95,1,999999)) = 1.96 (same as z-distrib)
2D 95% multiplier = sqrt(2*F.INV(0.95,2,999999)) = 2.45
For network degrees of freedom = 3:
1D 95% multiplier = sqrt(1*F.INV(0.95,1,3)) = 3.18 (same as t-distrib)
2D 95% multiplier = sqrt(2*F.INV(0.95,2,3)) = 4.37
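And here's the same calculation in Python/scipy for anyone without Excel handy; scipy.stats.f.ppf(q, dfn, dfd) plays the role of F.INV(q, dfn, dfd). The helper function name is mine, just for illustration.

```python
# Same multipliers via scipy; f.ppf(q, dfn, dfd) corresponds to Excel's F.INV(q, dfn, dfd).
from math import sqrt
from scipy.stats import f

def multiplier(dim, dof, conf=0.95):
    """Error multiplier for a dim-dimensional quantity at the given confidence
    level, using the F distribution with the network's degrees of freedom."""
    return sqrt(dim * f.ppf(conf, dim, dof))

for dof in (999999, 3):
    print(f"dof={dof}: 1D = {multiplier(1, dof):.2f}, 2D = {multiplier(2, dof):.2f}")
# dof=999999: 1D = 1.96, 2D = 2.45
# dof=3:      1D = 3.18, 2D = 4.37
```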
Dan Rodman
Wisconsin Professional Land Surveyor
Instructor, Civil Engineering Technology
Madison College, Madison WI