Hi everyone.
Using astrometry.net's solve-field with an appropriate tweak order (in my case, --tweak-order 4), I managed to obtain SIP coefficients that, I believe, truly represent the distortion of my lens.
The average solving precision (the average distance between the field and index stars) is around 1/4 of a pixel, which is a pretty amazing result.
For new images taken with the same camera, I can undistort the star field by hand using the previously obtained SIP coefficients (A and B) and the function described here:
https://fits.gsfc.nasa.gov/registry/sip/SIP_distortion_v1_0.pdf

f(u, v) = sum over p, q of A_p_q * u^p * v^q
X_corrected = X + f(u, v)

where u and v are relative pixel coordinates with respect to the center of the image (in the SIP convention, they are measured from the reference pixel CRPIX, which solve-field typically places near the center). The same correction, with the B coefficients, applies to the Y-axis.
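For concreteness, here is a minimal numpy sketch of the correction above. It is not my actual script; the coefficient arrays A, B and the reference pixel crpix are placeholders you would fill in from the solved FITS header (A_p_q, B_p_q, CRPIX1/2):

```python
import numpy as np

def sip_poly(u, v, coeffs):
    """Evaluate f(u, v) = sum_{p,q} coeffs[p, q] * u**p * v**q."""
    order = coeffs.shape[0] - 1
    f = 0.0
    for p in range(order + 1):
        for q in range(order + 1 - p):
            f += coeffs[p, q] * u**p * v**q
    return f

def undistort(x, y, crpix, A, B):
    """Apply the SIP correction:
       X_corrected = x + f(u, v), Y_corrected = y + g(u, v),
       with (u, v) measured from the reference pixel crpix."""
    u = x - crpix[0]
    v = y - crpix[1]
    return x + sip_poly(u, v, A), y + sip_poly(u, v, B)

# Hypothetical 4th-order fit: A and B are (5, 5) arrays with
# A[p, q] = A_p_q copied from the FITS header. Here only one
# made-up term is set, just to exercise the functions.
A = np.zeros((5, 5))
B = np.zeros((5, 5))
A[2, 0] = 1e-6  # stands in for the header's A_2_0 keyword
xc, yc = undistort(1000.0, 800.0, crpix=(512.0, 512.0), A=A, B=B)
```

With only the A_2_0 term set, u = 488 and the X shift is 1e-6 * 488^2 = 0.238144 px, while Y is unchanged.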
I believe the procedure explained above is correct. Yet once I do it and run solve-field again (on the undistorted star positions), I get an average solving precision of more than one pixel, which is at least 4 times worse than with plain solving.
If I apply some tweak order in this second solve (after the undistortion is already applied), the solving precision actually improves. But I think this is wrong: no tweak order should be needed after the undistortion has been applied to the original star positions.
Does anyone have a better understanding of this?
Cheers