Dustin Lang wrote:
> Yes, you could try that. The match probability is not very smart; it
> does not adapt to the actual distribution of scatters it sees; it's
> based on the distance from the matched quadrangle and the assumed
> scatter of the input positions.
In the end, I would try to reproduce the logic I apply in my head when
looking at the red/green plots: if there are many perfect red/green
matches across the whole image (or, in my case, across the whole masked
image area), then I know two things: a) the solution is very accurate,
and b) the lens distortion profile I used for that image is of very good
quality. A simple "perfect / not perfect" verdict is all I need, and
probably all I can really get at the moment.
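
For what it's worth, here is a minimal sketch of how I would turn that
"eyeball the red/green plot" check into numbers. The arrays and the
match radius are hypothetical placeholders, not astrometry.net API
calls: given projected catalog positions and detected star positions,
count what fraction of catalog stars have a detection within a small
radius and how large the residual scatter is. A high fraction with
small scatter across the whole (masked) image is what I read off the
plots as "solution and distortion profile are good".

    import numpy as np

    def match_quality(cat_xy, img_xy, match_radius_px=2.0):
        """Crude quality check: for each projected catalog star, find
        the nearest detected star; count how many lie within
        match_radius_px and compute the RMS residual of those pairs."""
        cat_xy = np.asarray(cat_xy, dtype=float)  # catalog stars, shape (N, 2)
        img_xy = np.asarray(img_xy, dtype=float)  # detected stars, shape (M, 2)

        # distance from every catalog star to every detected star
        d = np.linalg.norm(cat_xy[:, None, :] - img_xy[None, :, :], axis=2)
        nearest = d.min(axis=1)

        matched = nearest < match_radius_px
        frac = matched.mean()
        rms = np.sqrt(np.mean(nearest[matched] ** 2)) if matched.any() else np.nan
        return frac, rms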
> Some time I would love to sink some more time into getting the
> fine-tuning code working really well...
Well, without distortion it already works quite well, I would say. I
imagine it's really hard to handle distortion in general, especially
with the very general SIP model. Maybe have a look at the rather simple
lensfun models
(http://lensfun.sourceforge.net/manual/group__Lens.html#gaa505e04666a189274ba66316697e308e),
which only depend on the radius and don't treat x and y separately:
poly3 has only one parameter, poly5 two, and the common ptlens model
three. I would guess that these models work for the majority of the
uploaded images.
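
To make that concrete, this is roughly what those radial models look
like in Python, going by my reading of the linked page; the exact
parameter conventions should be double-checked there, so treat this as
a sketch rather than a reference implementation:

    import numpy as np

    # r is the undistorted radius (normalized the way lensfun expects);
    # each function returns the distorted radius.

    def poly3(r, k1):
        # one parameter
        return r * (1 - k1 + k1 * r**2)

    def poly5(r, k1, k2):
        # two parameters
        return r * (1 + k1 * r**2 + k2 * r**4)

    def ptlens(r, a, b, c):
        # three parameters (the common PTLens/Hugin model)
        return r * (a * r**3 + b * r**2 + c * r + 1 - a - b - c)

The point being: one to three radial coefficients, instead of the full
two-dimensional polynomial of the SIP model.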
Maik