Hi,

Can one use many images to improve calibration?

Situation: I have many (thousands of) images [0], all taken with the same camera, same lens (50mm prime on an EOS 450D), and same settings. I know the time (+/- a few seconds) and the geo-position at which the images were taken, but I don't have good values for the camera orientation.

Astrometry.net works well for these images: I get calibrations that seem to have average squared errors of less than one pixel.

I want to merge these images (spatially and/or across time). Since all images were taken with an identical setup, they could presumably share some calibration data, like scale and distortion, but not others, like image center and rotation.

Is it possible to compute a "shared" distortion across many images? Can such distortion values then be used to compute even better values for the camera orientation or image center?

I don't mind diving into the code.

Kendy

[0] raw examples:
--
You received this message because you are subscribed to the Google Groups "astrometry" group.
To unsubscribe from this group and stop receiving emails from it, send an email to astrometry+...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/astrometry/CABnrD-VP9E1Yhqkp5hu%3DRwOBhwJmxizUGh1VUmQ%2BFw1a3czGfg%40mail.gmail.com.
One thought: extract the features (stars) and look angles obtained with astrometry, and then use a panorama stitcher to solve for the lens properties. Hugin is pretty flexible in allowing you to constrain certain parameters (here, look angle) and solving for others.
Good question. We don't really have anything like that in the code right now.
One relevant feature could be the "solve-field --predistort" option, which lets you feed in an optical distortion model ("SIP solution").
You could imagine solving for a really good SIP solution for your lens using a bunch of exposures, and then keeping that fixed. It might vary with temperature, though!
A thing that's majorly missing is the bit that would do that SIP fitting based on many exposures (while probably also adjusting each image's center, scale, and rotation). If you're interested in diving into what would probably be a pretty big project, I can suggest where to possibly start...
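As a toy illustration of that shared-fit idea (this is not anything in the astrometry.net codebase): if each image's linear WCS solution is taken as already known, the distortion residuals from all exposures can be stacked into a single least-squares problem for one shared set of polynomial coefficients. The function name and the quadratic-only model below are my own simplifications:

```python
import numpy as np

def fit_shared_distortion(points, residuals):
    """Fit one shared quadratic distortion model across many exposures:
    dx = a20*x^2 + a11*x*y + a02*y^2 (and likewise dy with b coefficients).

    points    : (N, 2) pixel offsets from the distortion center (CRPIX)
    residuals : (N, 2) measured minus linear-WCS-predicted pixel positions,
                pooled from all images
    """
    x, y = points[:, 0], points[:, 1]
    # Second-order design matrix, shared by every star from every image.
    A = np.column_stack([x**2, x * y, y**2])
    coeffs_x, *_ = np.linalg.lstsq(A, residuals[:, 0], rcond=None)
    coeffs_y, *_ = np.linalg.lstsq(A, residuals[:, 1], rcond=None)
    return coeffs_x, coeffs_y

# Synthetic demo: stars pooled from "many images", one shared distortion.
rng = np.random.default_rng(0)
true_a = np.array([1e-7, -2e-8, 5e-8])   # hypothetical SIP-like values
true_b = np.array([3e-8, 1e-7, -4e-8])
pts = rng.uniform(-1500, 1500, size=(5000, 2))
terms = np.column_stack([pts[:, 0]**2, pts[:, 0] * pts[:, 1], pts[:, 1]**2])
res = np.column_stack([terms @ true_a, terms @ true_b])
res += rng.normal(0, 0.05, res.shape)    # ~0.05 px measurement noise
ax, bx = fit_shared_distortion(pts, res)
```

With real data, `points` would come from matched star positions and `residuals` from the per-image solutions; the recovered coefficients play the role of shared A_2_0, A_1_1, A_0_2 (and B_*) terms. A full version would also re-adjust each image's center, scale, and rotation while fitting.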
But actually maybe a simpler thing to look at first would be to take a hundred images and look at what the SIP coefficients look like. You'll want to run with the "solve-field --crpix-center" flag if you don't already -- that makes the center of distortion always be the center of the image. Then (if you're using Python) read in the .wcs files and check out the values and scatter of the A_x_y and B_x_y SIP coefficients. Ideally, you'll find that only a few of these have significant values, and that they're really stable!
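A minimal sketch of that check, assuming the .wcs files are first read into header mappings (e.g. with astropy.io.fits.getheader, since the .wcs files astrometry.net writes are FITS headers). The toy headers below stand in for real solved exposures:

```python
import re
import statistics

def sip_coefficient_stats(headers):
    """Collect SIP polynomial keywords (A_i_j / B_i_j) from a set of
    solved headers and return {keyword: (mean, scatter)} for each term."""
    pattern = re.compile(r"^[AB]_\d+_\d+$")  # matches A_2_0 etc., not A_ORDER
    values = {}
    for hdr in headers:
        for key, val in hdr.items():
            if pattern.match(key):
                values.setdefault(key, []).append(float(val))
    return {
        key: (statistics.mean(v),
              statistics.stdev(v) if len(v) > 1 else 0.0)
        for key, v in values.items()
    }

# Two toy headers in place of e.g. astropy.io.fits.getheader("frame0001.wcs").
headers = [
    {"A_2_0": 1.1e-7, "A_0_2": -4.8e-8, "B_2_0": 3.0e-8, "A_ORDER": 2},
    {"A_2_0": 0.9e-7, "A_0_2": -5.2e-8, "B_2_0": 3.2e-8, "A_ORDER": 2},
]
stats = sip_coefficient_stats(headers)
```

If the scatter of each coefficient is small compared to its mean, that's evidence the lens distortion really is shared across exposures.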
So the processing flow would be:

Once:
- solve some images with astrometry, feed them to Hugin to compute a lensfun model

Then, for every new image:
- use lensfun to correct the image
- give the corrected image to astrometry
- in theory, the solution computed by astrometry should now have no optical distortion anymore (does that mean all SIP parameters should be zero?)

Sounds a bit convoluted, but possible.
Hmm, if I'm remembering correctly, the A_2_0 coefficient is the one that multiplies (X - CRPIX1)^2 -- in other words, with A_2_0 around 1e-7, if you're 1000 pixels from the image center in X, that contributes 1000^2 * 1e-7 = 0.1 pixels -- a pretty small distortion!
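That back-of-the-envelope, as a one-liner:

```python
# The A_2_0 SIP term contributes A_2_0 * (X - CRPIX1)**2 pixels of
# distortion in X.  1000 px from the center, with A_2_0 = 1e-7:
A_2_0 = 1e-7
dx = 1000 ** 2 * A_2_0   # about 0.1 px
```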