Reimplementation Adventure


Matt Goodman

Sep 3, 2021, 2:11:58 PM
to astrometry
Hey all,
  I have spent the last few weeks reimplementing the core of Astrometry.net in Java, both to understand how it all works and for use in a personal project.

I have run into a couple of issues that I am having a hard time getting my head around as I track down internal sources of inaccuracy and poor performance:
  • How is focal length (and corresponding distortions) dealt with? I am starting by building my index from RaDec/Magnitude tables, which requires a projection of some sort. Is this implicitly linked to the input dataset (USNO-B) used to build the tables?
  • "The proposed alignment acceptance model" has a pretty vague description of how things are tallied. I am specifically unsure how to implement the line "Our foreground model is therefore a mixture of a uniform probability that a star will be found anywhere in the image—a query star that has no counterpart in the index—plus a blob of probability around each star in the index, where the size of the blob is determined by the combined positional variances of the index and query stars." Can I get a hint/link to where this is in the astrometry.net code?
Any leads appreciated, and thanks for running a great service :)

Dustin Lang

Sep 4, 2021, 11:30:31 AM
to Matt Goodman, astrometry
Hi,

The core algorithms don't deal with distortion at all; both for building the index (computing the geometric hash code from reference stars) and at run time (computing the geometric hash code for image stars), we assume a TANgent-plane projection.  This is usually pretty good for the professional-grade telescopes we were thinking about when designing the algorithms.  It doesn't work nearly as well for very wide-field cameras; fish-eye images usually fail, even though they should be the easiest to recognize.  One option there would be to apply a preliminary undistortion (e.g., using camera EXIF headers if available, or user input) to get close enough to a TAN projection that it would work.  Or maybe a small set of distortion parameters would be enough to cover the common cases.
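
For concreteness, since you're working in Java, here is a minimal sketch of that TAN (gnomonic) projection; the class and method names are illustrative, not taken from the astrometry.net source. It maps a reference star's RA,Dec onto the tangent plane touching the sky at (ra0, dec0), which is the projection you'd apply when building index quads from an RA,Dec catalog:

/*
 * Minimal gnomonic (TAN) projection sketch. Illustrative names, not from
 * the astrometry.net source. All angles are in radians.
 */
public final class TanProjection {
    private TanProjection() {}

    /** Returns {xi, eta}: tangent-plane ("standard") coordinates, in radians. */
    public static double[] project(double ra, double dec, double ra0, double dec0) {
        double dRa = ra - ra0;
        double cosC = Math.sin(dec0) * Math.sin(dec)
                    + Math.cos(dec0) * Math.cos(dec) * Math.cos(dRa);
        // cosC <= 0 means the star is 90+ degrees from the tangent point,
        // where the gnomonic projection is undefined.
        if (cosC <= 0) {
            throw new IllegalArgumentException("Star is on the far hemisphere");
        }
        double xi  = Math.cos(dec) * Math.sin(dRa) / cosC;
        double eta = (Math.cos(dec0) * Math.sin(dec)
                    - Math.sin(dec0) * Math.cos(dec) * Math.cos(dRa)) / cosC;
        return new double[] { xi, eta };
    }
}

Multiplying (xi, eta) by the focal length expressed in pixel units gives approximate pixel offsets from the image centre, which is where focal length enters the picture.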

But anyway, no, it's not linked to USNO-B in any way; Tycho-2, Gaia, 2MASS, or any other reference catalog with RA,Dec coordinates works equally well.

For the proposed alignment verification, there's an excruciatingly detailed discussion in my thesis (https://tspace.library.utoronto.ca/bitstream/1807/19281/3/Lang_Dustin_A_200911_PhD_thesis.pdf) chapter 3.  In the code, it happens hereabouts;
here is where it finds a nearest matching star;
and here's the Gaussian blob of probability around each star.
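
If it helps while you read, here's a rough Java sketch of that foreground model as the thesis describes it; the names are made up, and a linear scan stands in for the kd-tree nearest-neighbour lookup the real C code uses:

/*
 * Sketch of the per-query-star foreground likelihood: a uniform
 * "distractor" term over the image area, plus a 2-D Gaussian blob around
 * the nearest index star whose variance is the sum of the index and
 * query positional variances. Hypothetical names; see thesis ch. 3.
 */
class VerifySketch {
    static double foregroundLikelihood(
            double qx, double qy,          // query star position (pixels)
            double[][] indexStars,         // projected index star positions (pixels)
            double sigma2Index,            // index-star positional variance (px^2)
            double sigma2Query,            // query-star positional variance (px^2)
            double imageArea,              // image width * height (px^2)
            double distractorFrac) {       // prior prob. of a spurious detection
        // Uniform term: a distractor star can land anywhere in the image.
        double uniform = distractorFrac / imageArea;

        // Gaussian blob around the nearest index star; the blob's variance
        // is the *combined* positional variance of index and query stars.
        double sigma2 = sigma2Index + sigma2Query;
        double best = 0.0;
        for (double[] s : indexStars) {
            double dx = qx - s[0], dy = qy - s[1];
            double gauss = Math.exp(-0.5 * (dx * dx + dy * dy) / sigma2)
                         / (2.0 * Math.PI * sigma2);
            best = Math.max(best, gauss);
        }
        return uniform + (1.0 - distractorFrac) * best;
    }
}

Summing the log of this over all the query stars and comparing against a background-only (pure uniform) model gives the log-odds that the verifier thresholds on.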

cheers,
--dustin



John Murrell

Sep 5, 2021, 4:48:41 AM
to Dustin Lang, Matt Goodman, astrometry
If you are using wide-angle lenses on a DSLR, the camera often applies a geometry correction, as well as corrections for chromatic aberration and vignetting, before it saves the JPEG image. This does not apply to RAW images.

Of course, for this to work the camera has to have a record of the lens characteristics in its database. If you use new lenses you need to ensure your camera software is updated so it has the appropriate data.

If you save RAW images this correction is done in the post-processing RAW 'development' software, and it can be turned on and off as required. As above, this means you need to keep your digital processing software updated.

If you use third-party lenses it is not clear how this works; I think they may use a 'nearest guess' model.

This applies to Canon DSLRs, but I think something similar applies to other makes.

As a result, the optical distortion in the image can be very different depending on the lens used and on your own processing pipeline before you submit images.
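
If you want to take the distortion out yourself before solving, along the lines Dustin suggests, a simple radial model often gets you close enough to a TAN projection. This is only an illustrative sketch; k1 and k2 would have to come from a lens profile or your own calibration, not from the camera's database:

/*
 * Illustrative radial undistortion sketch (simple polynomial model, my
 * own names, not from any camera or astrometry.net code). The forward
 * model is r_d = r_u * (1 + k1*r_u^2 + k2*r_u^4); it is inverted here by
 * fixed-point iteration, which converges quickly for mild distortion.
 * Coordinates are pixels relative to the optical centre.
 */
class UndistortSketch {
    static double[] undistort(double xd, double yd, double k1, double k2) {
        double xu = xd, yu = yd;
        for (int i = 0; i < 10; i++) {
            double r2 = xu * xu + yu * yu;
            double scale = 1.0 + k1 * r2 + k2 * r2 * r2;
            xu = xd / scale;
            yu = yd / scale;
        }
        return new double[] { xu, yu };
    }
}

Strong fish-eye lenses need a proper fish-eye model rather than a polynomial like this.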

As an aside, the whole question of which projection an image uses seems to be a minefield: some projections are only valid within the image boundaries and can have large errors and big distortions outside them.

John Murrell