When pressing the "Select by Distance" button, the field is prefilled
with a default value.
Could you tell me how this default value is computed?
I have not found an explanation in the wiki: http://wiki.panotools.org/Hugin_Control_Points_table
Thanks and best regards,
Alain Carrie
IIRC, it is mean(errors) + std(errors). It was just a guess, so I'm open to
any other ideas on how to compute this.
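For reference, that guess can be written down as a short sketch (this is only an illustration of the formula Pablo describes, not Hugin's actual source):

```python
import statistics

def default_select_distance(errors):
    # Guessed heuristic for the prefilled value: mean of the
    # control-point errors plus their standard deviation
    # (population SD assumed here).
    return statistics.mean(errors) + statistics.pstdev(errors)

# Example: five control-point errors in pixels.
print(default_select_distance([0.5, 1.2, 0.8, 3.1, 0.9]))  # ~2.23
```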
ciao
Pablo
I'd like to see another statistic: calculate the sum of distances of
all points shared by each pair of images. E.g.: images 2 and 4 share 9
different points, and the distance sum over these 9 points is 21.4.
This way I can see very quickly which images need more, or more accurate,
control points. It helps me answer the question of where to add more
control points. Good suggestion?
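The suggested per-pair statistic could look something like this (a sketch with a hypothetical data layout, assuming errors are already computed per control point):

```python
from collections import defaultdict

def pair_statistics(control_points):
    """control_points: list of (img1, img2, error) tuples.
    Returns {(img1, img2): (count, error_sum)} so that image pairs
    with few or inaccurate control points stand out."""
    stats = defaultdict(lambda: [0, 0.0])
    for img1, img2, error in control_points:
        # Normalize the pair so (2, 4) and (4, 2) are counted together.
        key = (min(img1, img2), max(img1, img2))
        stats[key][0] += 1
        stats[key][1] += error
    return {k: tuple(v) for k, v in stats.items()}

cps = [(2, 4, 2.0), (4, 2, 3.5), (0, 1, 1.0)]
print(pair_statistics(cps))  # {(2, 4): (2, 5.5), (0, 1): (1, 1.0)}
```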
Thomas
Here are my comments:
1) Regarding the default value of the "Select by Distance" button, I
have tested several iterations (Run Optimizer, then Select by Distance
with the default value, then Delete selected CP) with the same two images
(the CP being initially generated by Autopano). Taking into account
the values provided at each iteration by the Optimizer (Mean, SD²,
Max), I find that the default value of "Select by Distance" is only
roughly correlated with Mean(errors) + SD(errors); the data is in
the Excel file in this archive:
http://alaincarrie.net/download/HuginPtxTemp.zip
2) Anyway, with this default value, we often reach situations where,
after several iterations in which everything runs fine, the default value
becomes equal to or greater than the maximum error (whatever its value);
after that iteration, nothing can improve in further iterations, as
no CP is deleted, whatever the current statistics on the errors.
3) Another formula would allow a true dichotomy: keeping only the 50%
best points resulting from each optimization steadily improves the
results at each iteration. The default value is in this case the median,
and can be approximated by the mean (straight, without any addition);
this is the solution I would prefer to see implemented, if you agree.
4) We could also think of more sophisticated formulas, like keeping
only the P% best points, P being an adjustable parameter (for instance
80%, and ideally between 50% and 100%); but the average (approximating
the median, i.e. P=50%) is fine for me.
5) Regarding Thomas' comments, I do agree that further information in
the Control Point Table would be interesting, to identify the quality
of the current project for each image pair (n,m), with statistics per
pair like "number of CP", "mean error" and "max error"; I am not sure
that the sum of the errors is a good metric.
6) Maybe a CP Table sortable on each field (column) of the table
(image #1 in the pair, image #2 in the pair, error of the
corresponding CP, ...) would also be a solution.
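The stall described in point 2 and the fixed-fraction rule from points 3 and 4 can be sketched together (a toy illustration with hypothetical function names, not Hugin's actual code):

```python
import statistics

def mean_plus_sd(errors):
    # The assumed current heuristic: mean + standard deviation.
    return statistics.mean(errors) + statistics.pstdev(errors)

def select_worst(errors, keep_fraction=0.5):
    # The proposed rule: keep the best keep_fraction of points and
    # return the indices of the rest, so every pass deletes something.
    n_keep = int(len(errors) * keep_fraction)
    order = sorted(range(len(errors)), key=lambda i: errors[i])
    return order[n_keep:]

# A skewed error distribution where mean + SD exceeds the maximum error:
errors = [0.2, 1.0, 1.0, 1.0]
threshold = mean_plus_sd(errors)  # ~1.15, above max(errors) == 1.0
stalled = [i for i, e in enumerate(errors) if e > threshold]  # [] - stall
progress = select_worst(errors)   # [2, 3] - always half the points
```

With the fixed-fraction rule the selection can never be empty (as long as keep_fraction < 1), so each iteration is guaranteed to make progress.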
Kind regards,
Alain
In fact the Optimizer provides the Mean (RMS), which is different
from the Mean computed directly from the data.
> 2) Anyway, with this default value, we often reach situations where,
> after several iterations in which everything runs fine, the default value
> becomes equal to or greater than the maximum error (whatever its value);
> after that iteration, nothing can improve in further iterations, as
> no CP is deleted, whatever the current statistics on the errors.
>
> 3) Another formula would allow a true dichotomy: keeping only the 50%
> best points resulting from each optimization steadily improves the
> results at each iteration. The default value is in this case the median,
> and can be approximated by the mean (straight, without any addition);
> this is the solution I would prefer to see implemented, if you agree.
I have made tests with the Mean (RMS) as an alternative default value,
and it works fine. I have made no tests with the plain Mean (which I
think would be too aggressive).
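The difference between the two means is easy to see with a small worked example (plain statistics, nothing Hugin-specific):

```python
import math
import statistics

errors = [1.0, 2.0, 3.0, 10.0]
arithmetic_mean = statistics.mean(errors)                  # 4.0
rms = math.sqrt(sum(e * e for e in errors) / len(errors))  # ~5.34
# The RMS weights large errors more heavily, so it is always >= the
# arithmetic mean; they coincide only when all errors are equal. This
# is why the plain mean is the more aggressive threshold of the two.
```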
Kind regards,
Alain