better calibration across many images


Kendy Kutzner

Mar 1, 2021, 11:18:05 AM
to astro...@googlegroups.com
Hi

Can one use many images to improve calibration?

Situation: I have many (thousands of) images [0] taken with the same camera, the same lens (a 50mm prime on an EOS 450D), and the same settings. I know the time (+- a few seconds) and the geo-position at which the images were taken. I don't have good values for the camera orientation.

Astrometry.net works well for these images: I get calibrations that seem to have average squared errors of less than one pixel.

I want to merge these images (spatially and/or across time). Since all images were taken with the identical setup, presumably they could share some calibration data, like scale and distortion, but not others, such as image center and rotation.

Is it possible to compute a "shared" distortion across many images? Can such distortion values then be used to compute even better values for camera orientation or image center?

I don't mind diving into the code.

Kendy

[0] raw examples:


Magnus Ringman

Mar 1, 2021, 11:55:24 AM
to Kendy Kutzner, astro...@googlegroups.com
Hi Kendy!

On Mon, Mar 1, 2021 at 5:18 PM Kendy Kutzner <kendy....@gmail.com> wrote:
Hi

Can one use many images to improve calibration?

Situation: I have many (thousands of) images [0] taken with the same camera, the same lens (a 50mm prime on an EOS 450D), and the same settings. I know the time (+- a few seconds) and the geo-position at which the images were taken. I don't have good values for the camera orientation.

Astrometry.net works well for these images: I get calibrations that seem to have average squared errors of less than one pixel.

I want to merge these images (spatially and/or across time). Since all images were taken with the identical setup, presumably they could share some calibration data, like scale and distortion, but not others, such as image center and rotation.

Is it possible to compute a "shared" distortion across many images? Can such distortion values then be used to compute even better values for camera orientation or image center?

One thought: extract the features (stars) and look angles obtained with astrometry, and then use a panorama stitcher to solve for the lens properties. Hugin is pretty flexible in allowing you to constrain certain parameters (here, the look angle) while solving for others.

Magnus


Dustin Lang

Mar 1, 2021, 2:42:09 PM
to Kendy Kutzner, astrometry
Hi Kendy,

Good question.  We don't really have anything like that in the code right now.  One relevant feature could be the "solve-field --predistort" option, which lets you feed in an optical distortion model ("SIP solution").  You could imagine solving for a really good SIP solution for your lens using a bunch of exposures, and then keeping that fixed.  It might vary with temperature, though!  A thing that's majorly missing is the bit that would do that SIP fitting based on many exposures (while probably also adjusting each image's center, scale, and rotation).  If you're interested in diving into what would probably be a pretty big project, I can suggest where to possibly start...
cheers,
--dustin



Kendy Kutzner

Mar 1, 2021, 3:00:34 PM
to Magnus Ringman, astro...@googlegroups.com
Hi Magnus!

Nice to see you here!

On Mon, Mar 1, 2021 at 5:55 PM Magnus Ringman <magnus....@gmail.com> wrote:
One thought: extract the features (stars) and look angles obtained with astrometry, and then use a panorama stitcher to solve for the lens properties. Hugin is pretty flexible in allowing you to constrain certain parameters (here, the look angle) while solving for others.

Interesting. So Hugin could compute a lens model based on a number of frames. Looks like the format for storing those lens models is lensfun's. [0]

So the processing flow would be:
Once: Solve some images with astrometry, feed them to Hugin to compute a lensfun model
Then, for every new image:
 - use lensfun to correct the image
 - give the corrected image to astrometry. 
 - in theory, the solution computed by astrometry should now have no optical distortion anymore (does that mean all SIP parameters should be zero?).

Sounds a bit convoluted, but possible.
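
For illustration, the per-image correction step could look roughly like this in Python, assuming the lensfunpy and OpenCV bindings; the lens/camera lookup strings and file names below are placeholders, not something I have tested:

# Sketch: undistort one frame with lensfun before handing it to astrometry.
# Assumes lensfunpy and opencv-python; lookup strings and paths are placeholders.
import cv2
import lensfunpy

db = lensfunpy.Database()
cam = db.find_cameras('Canon', 'Canon EOS 450D')[0]
lens = db.find_lenses(cam, 'Canon', 'EF 50mm')[0]

img = cv2.imread('frame.jpg')
height, width = img.shape[:2]

mod = lensfunpy.Modifier(lens, cam.crop_factor, width, height)
mod.initialize(50.0, 1.8, 1000.0)  # focal length [mm], aperture, distance [m] (stars ~ infinity)

# apply_geometry_distortion() returns, for each output pixel, the source
# coordinates to sample, so cv2.remap produces the geometrically corrected image.
undist_coords = mod.apply_geometry_distortion()
corrected = cv2.remap(img, undist_coords, None, cv2.INTER_LANCZOS4)
cv2.imwrite('frame_corrected.jpg', corrected)
# ...then run solve-field on frame_corrected.jpg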

Kendy

[0] I haven't thought of correcting for chromatic aberration yet. What an interesting new can of worms :)

Kendy Kutzner

Mar 1, 2021, 3:21:16 PM
to Dustin Lang, astrometry
Hi Dustin,

On Mon, Mar 1, 2021 at 8:42 PM Dustin Lang <dstn...@gmail.com> wrote:
Good question.  We don't really have anything like that in the code right now.

Would you expect this to be useful if added?

  One relevant feature could be the "solve-field --predistort" option, which lets you feed in an optical distortion model ("SIP solution").

Yes, that is my idea behind this thread. What I currently don't understand is how to compute such a "shared" SIP solution. Magnus suggested using Hugin/lensfun to compute a model; I don't know whether a lensfun distortion model can be converted into a SIP solution.

Of course I could just compute many different SIP solutions on many different images and eyeball them to pick one that isn't an outlier. But can I do better?
 
  You could imagine solving for a really good SIP solution for your lens using a bunch of exposures, and then keeping that fixed.  It might vary with temperature, though!

My camera records sensor temperature, and I have readings from a nearby outside thermometer. I guess lens temperature correlates with both. So we can test this hypothesis.
 
  A thing that's majorly missing is the bit that would do that SIP fitting based on many exposures (while probably also adjusting each image's center, scale, and rotation).  If you're interested in diving into what would probably be a pretty big project, I can suggest where to possibly start...

I'll bite: please tell me more.

So what I could easily produce is a large number of correspondence pairs (my images typically contain thousands of stars, and I have thousands of images, which yields millions of pairs). I would correct them for image center and for rotation.

Then feed millions of pairs into the approximators for scale and for SIP??
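
To make that concrete, here is a rough sketch (not existing astrometry.net code) of what such a shared fit could be: one polynomial distortion model fitted by least squares to residuals pooled from all images, assuming each pair has already been reduced to a center-relative pixel position (u, v) and a residual (dx, dy) with rotation and scale taken out:

# Hypothetical sketch: fit one shared 3rd-order distortion polynomial to
# residuals pooled from many images.  u, v are pixel offsets from the image
# center; dx, dy are the residuals (measured minus linear-WCS prediction)
# that the shared SIP-like terms should explain.
import numpy as np

def fit_shared_distortion(u, v, dx, dy, order=3):
    # Monomials u^p * v^q with 2 <= p+q <= order, i.e. the non-linear
    # terms of a SIP polynomial.
    terms = [(p, q) for p in range(order + 1) for q in range(order + 1)
             if 2 <= p + q <= order]
    M = np.column_stack([u**p * v**q for p, q in terms])
    coeffs_x, *_ = np.linalg.lstsq(M, dx, rcond=None)  # roughly the A_p_q
    coeffs_y, *_ = np.linalg.lstsq(M, dy, rcond=None)  # roughly the B_p_q
    return dict(zip(terms, coeffs_x)), dict(zip(terms, coeffs_y))

# u, v, dx, dy would each be the concatenation over all solved images.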

Kendy



Dustin Lang

Mar 1, 2021, 3:33:32 PM
to Kendy Kutzner, astrometry
I think this would be a useful thing to have!

But actually maybe a simpler thing to look at first would be to take a hundred images and look at what the SIP coefficients look like.  You'll want to run with the "solve-field --crpix-center" flag if you don't already -- that makes the center of distortion always be the center of the image.  Then (if you're using python) read in the .wcs files and check out the values and scatters of the A_x_y and B_x_y SIP coefficients.  Ideally, you find that only a few of these have significant values, and they're really stable!
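
A minimal sketch of that check, assuming astropy and a directory of solve-field .wcs outputs (the file pattern is a placeholder):

# Sketch: collect the SIP A_p_q / B_p_q coefficients from many .wcs files
# (which are FITS headers) and look at their scatter.
import glob
import re
from collections import defaultdict

import numpy as np
from astropy.io import fits

values = defaultdict(list)
for path in glob.glob('solved/*.wcs'):
    hdr = fits.getheader(path)
    for key in hdr:
        if re.match(r'^[AB]_\d+_\d+$', key):
            values[key].append(hdr[key])

for key in sorted(values):
    v = np.array(values[key])
    print(f'{key}: mean={v.mean():.3e}  std={v.std():.3e}  n={len(v)}')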

cheers,
--dustin

Kendy Kutzner

Mar 1, 2021, 6:15:24 PM
to Dustin Lang, astrometry
On Mon, Mar 1, 2021 at 9:33 PM Dustin Lang <dstn...@gmail.com> wrote:
But actually maybe a simpler thing to look at first would be to take a hundred images and look at what the SIP coefficients look like.  You'll want to run with the "solve-field --crpix-center" flag if you don't already -- that makes the center of distortion always be the center of the image.  Then (if you're using python) read in the .wcs files and check out the values and scatters of the A_x_y and B_x_y SIP coefficients.  Ideally, you find that only a few of these have significant values, and they're really stable!

(Using C++.) It turned out I had not yet set crpix_set_center/crpix_set to true. Having done that, here is what I get for a small set of images (printing all non-zero coefficients):

sip_scale 2.085461e+01 sip a_order 3 b_order 3 a[2][0] 1.401444e-07 b[2][0] 4.913099e-08
sip_scale 2.085450e+01 sip a_order 3 b_order 3 a[2][0] 1.547643e-07 b[2][0] 4.699726e-08
sip_scale 2.082325e+01 sip a_order 3 b_order 3 a[2][0] -1.820227e-06 b[2][0] -9.556101e-07
sip_scale 2.080421e+01 sip a_order 3 b_order 3 a[2][0] -1.629884e-06 b[2][0] -4.849871e-07
sip_scale 2.085463e+01 sip a_order 3 b_order 3 a[2][0] 1.412750e-07 b[2][0] 4.520182e-08
sip_scale 2.085406e+01 sip a_order 3 b_order 3 a[2][0] 1.463231e-07 b[2][0] 4.172158e-08
sip_scale 2.085356e+01 sip a_order 3 b_order 3 a[2][0] 1.373220e-07 b[2][0] 4.642783e-08
sip_scale 2.085535e+01 sip a_order 3 b_order 3 a[2][0] 1.793202e-07 b[2][0] 3.730814e-08
sip_scale 2.080203e+01 sip a_order 3 b_order 3 a[2][0] -1.321124e-06 b[2][0] -4.169784e-07
sip_scale 2.085560e+01 sip a_order 3 b_order 3 a[2][0] 1.562301e-07 b[2][0] 7.855175e-08
sip_scale 2.085677e+01 sip a_order 3 b_order 3 a[2][0] 1.873542e-07 b[2][0] 8.949012e-08
sip_scale 2.085470e+01 sip a_order 3 b_order 3 a[2][0] 1.271072e-07 b[2][0] 6.257633e-08
sip_scale 2.085631e+01 sip a_order 3 b_order 3 a[2][0] 1.656766e-07 b[2][0] 5.918579e-08
sip_scale 2.085531e+01 sip a_order 3 b_order 3 a[2][0] 1.530417e-07 b[2][0] 7.672947e-08
sip_scale 2.085505e+01 sip a_order 3 b_order 3 a[2][0] 1.481932e-07 b[2][0] 5.781259e-08
sip_scale 2.085643e+01 sip a_order 3 b_order 3 a[2][0] 1.199078e-07 b[2][0] 3.143217e-08
sip_scale 2.085527e+01 sip a_order 3 b_order 3 a[2][0] 1.383138e-07 b[2][0] 4.314240e-08
sip_scale 2.085422e+01 sip a_order 3 b_order 3 a[2][0] 1.428724e-07 b[2][0] 4.386321e-08
sip_scale 2.085446e+01 sip a_order 3 b_order 3 a[2][0] 1.307135e-07 b[2][0] 4.301696e-08
sip_scale 2.085550e+01 sip a_order 3 b_order 3 a[2][0] 1.563860e-07 b[2][0] 5.139219e-08
sip_scale 2.085643e+01 sip a_order 3 b_order 3 a[2][0] 1.300370e-07 b[2][0] 4.443508e-08

Indeed they are somewhat stable (except when they flip signs).

Next steps would be
 - find an "agreeable" solution (one possible approach is sketched below)
 - inject it back into the solver
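
One possible sketch of the first step, again assuming astropy: take the per-coefficient median over many solutions as the shared value and write it into a copy of one header, which could then in principle be fed to something like the --predistort option Dustin mentioned (file names are placeholders):

# Sketch: per-coefficient median of the SIP terms over many solved images,
# written into one solution's header as the "agreeable" shared model.
import glob
import re
from collections import defaultdict

import numpy as np
from astropy.io import fits

per_key = defaultdict(list)
for path in glob.glob('solved/*.wcs'):
    hdr = fits.getheader(path)
    for key in hdr:
        if re.match(r'^[AB]_\d+_\d+$', key):
            per_key[key].append(hdr[key])

template = fits.getheader('solved/reference.wcs')   # any one solution as template
for key, vals in per_key.items():
    template[key] = float(np.median(vals))          # median is robust to outliers
fits.PrimaryHDU(header=template).writeto('shared_sip.wcs', overwrite=True)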

Kendy

Dustin Lang

Mar 1, 2021, 7:02:25 PM
to Kendy Kutzner, astrometry
Hmm, if I'm remembering correctly, the A_2_0 coefficient is the one that multiplies (X - CRPIX_1)^2, in other words, if you're 1000 pixels from the image center in X, then 1000^2 * 1e-7 = 0.1 pixels -- a pretty small distortion!
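
(As a quick numeric check of that estimate, assuming that convention:)

a_2_0 = 1e-7   # order of magnitude of the values listed above
u = 1000.0     # pixels from the image center in x
print(a_2_0 * u**2)   # -> 0.1 pixels of distortion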

Magnus Ringman

Mar 2, 2021, 8:06:56 AM
to Kendy Kutzner, astro...@googlegroups.com
On Mon, Mar 1, 2021 at 9:00 PM Kendy Kutzner <kendy....@gmail.com> wrote:
So the processing flow would be:
Once: Solve some images with astrometry, feed them to Hugin to compute a lensfun model
Then, for every new image:
 - use lensfun to correct the image
 - give the corrected image to astrometry. 
 - in theory, the solution computed by astrometry should now have no optical distortion anymore (does that mean all SIP parameters should be zero?).

Sounds a bit convoluted, but possible.

Not quite. I was thinking: for every star identified in any image, use its location in each image as a "control point" for the solver.  With many overlapping pairs of images thus linked by common points, the solver can derive the parameters for the lens by optimizing over the whole set.  But yes, clunky and subpar to doing direct statistics on the SIP parameters identified directly by astrometry.
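
Roughly, and assuming Hugin's plain-text .pto project format (each control point is a "c" line referencing two images), a hypothetical generator for those lines could look like:

# Hypothetical sketch: turn matched star positions from two solved frames
# into Hugin control-point ("c") lines.  The coordinates are placeholders;
# in practice they would come from astrometry.net's matched-star lists.
matches = [
    # (x in image 0, y in image 0, x in image 1, y in image 1)
    (1203.4, 855.2, 1187.9, 910.6),
    (2010.1, 402.7, 1995.3, 458.0),
]

with open('control_points.pto.part', 'w') as f:
    for x0, y0, x1, y1 in matches:
        # n/N are image indices, lower-case x/y belong to image n,
        # upper-case X/Y to image N, and t0 marks a normal control point.
        f.write(f'c n0 N1 x{x0:.2f} y{y0:.2f} X{x1:.2f} Y{y1:.2f} t0\n')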

Magnus Ringman

Mar 2, 2021, 8:22:18 AM
to Dustin Lang, Kendy Kutzner, astrometry
On Tue, Mar 2, 2021 at 1:02 AM Dustin Lang <dstn...@gmail.com> wrote:
Hmm, if I'm remembering correctly, the A_2_0 coefficient is the one that multiplies (X - CRPIX_1)^2, in other words, if you're 1000 pixels from the image center in X, then 1000^2 * 1e-7 = 0.1 pixels -- a pretty small distortion!

Wow.  With <=0.1 pixels of accuracy, a different problem may come up.  At the tens of degrees of field of view you get with a 50mm lens, distortion due to differing atmospheric refraction across the image can become significant, especially for images taken at low elevation angles.  Refraction shows up as the vertical axis of the image covering a slightly greater angle of beyond-atmosphere sky than an equal length of the horizontal axis, and the amount is a nonlinear function of the elevation.  The scale of the phenomenon is a few arcminutes, and your pixel scale will be about half an arcminute, so it's not unlikely that astrometry's precise SIP solutions will show variations between images depending just on the look angle.
 

Wilfred Tyler Gee

Mar 2, 2021, 2:32:55 PM
to astrometry
Hi all, I'm following along at home as I'm interested in the same kind of thing.  We have *lots* of observation sequences of images taken with consistent hardware (85mm f/1.4 lenses on some Canon DSLRs) from numerous geographic locations and spanning a couple of years (i.e. a ground-based wide-field survey), and we have most of the relevant EXIF metadata, such as camera temperature, for all of these as well. At most sites we had two cameras running simultaneously (insert shameless plug for https://projectpanoptes.org).

Most of the data can be accessed via BigQuery, so you wouldn't necessarily even need the images. All of this is public, so I'm happy to help make it available and would also be interested in some collaboration if people want to do a little more with this.  That said, I'm just finishing up my PhD in the next few months, so I don't have time to work on this personally right away.  But if someone wants more data on this front, let me know.

Cheers,
Wilfred

Kendy Kutzner

Apr 10, 2021, 6:19:18 PM
to Magnus Ringman, Dustin Lang, astrometry
On Tue, Mar 2, 2021 at 2:22 PM Magnus Ringman <magnus....@gmail.com> wrote:
> On Tue, Mar 2, 2021 at 1:02 AM Dustin Lang <dstn...@gmail.com> wrote:
>> Hmm, if I'm remembering correctly, the A_2_0 coefficient is the one that multiplies (X - CRPIX_1)^2, in other words, if you're 1000 pixels from the image center in X, then 1000^2 * 1e-7 = 0.1 pixels -- a pretty small distortion!
>
> Wow. With <=0.1 pixels of accuracy, a different problem may come up. At the tens of degrees of field of view you get with a 50mm lens, distortion due to differing atmospheric refraction across the image can become significant, especially if you have images taken at low elevation angles.

Absolutely, you are right. I hadn't considered atmospheric refraction so far.

Using the rough formulas from https://en.wikipedia.org/wiki/Atmospheric_refraction: aiming at the sky at 45 deg elevation with a 50mm lens (portrait orientation gives me a 30 deg vertical field of view), I get about 0.58 arcmin of refraction at the top of the image and 1.7 arcmin at the bottom.

This dwarfs all the other corrections! The assumption of a "tangential plane" doesn't seem that sound anymore.


Kendy

Toni Šarić

Jul 5, 2022, 9:43:34 AM
to astrometry
Hi @Kendy.
I am also interested in this subject, and I would like to know: did you manage to measure the lens distortion?
Specifically, did you succeed in computing SIP coefficients from a series of images as a good shared solution and then applying it to new images?
Or did you have any luck with Hugin and lensfun?

BR
Toni