From opencv.fisheye camera parameters into panoramic coefficients a, b, c


Ilia Shipachev

Aug 1, 2021, 6:20:59 PM
to PTGui Support
Dear authors,

I'm struggling to understand the distortion model used by PTGui, with its a, b, c coefficients and also r0 (the smallest-side radius) and f (the focal length).
I've managed to hack it into an almost ideal fit based on the parameters I have in OpenCV fisheye form, but I would still like to understand it properly.

Here is the camera model I use (Description section):
https://docs.opencv.org/4.5.2/db/d58/group__calib3d__fisheye.html
I do camera rig calibration, and the best output I have so far is in this form. The model fits our cameras pretty well, so all calibrations are done in this form, with a really small error for our cameras.

Based on this model I'm trying to build a polynomial that fits it into the panoramic model, which uses a polynomial of another, lower degree.
In the end I get an approximation with a polynomial of the proper degree, something close to what I would expect PTGui to use based on the documentation I've read.
I also read the guide:
https://wiki.panotools.org/Lens_Correction_in_PanoTools

It seems some conversions are still missing, such as a scaling factor either for the focal length or for the coefficients as they are understood by PTGui.

So it left me wondering: what is the panoramic distortion model used by PTGui, in the same terms as the OpenCV fisheye camera model, i.e. base Xc, Yc, Zc?
Is there a proper formula where I can see how the geometric Xc, Yc, Zc first gets transformed by the appropriate distortion model used in PTGui and is then mapped by a pinhole camera model into the image frame?

I take into account that PTGui bases its scale not only on the focal length but also on the radius, and does some post-polynomial normalization to keep this radius stable for any coefficients (something like a [1 = a + b + c] normalization).
But in the end I lack the full picture. I've tried a lot of variations, and I've checked all the internet topics and forums on this, but didn't find an appropriate solution, or even a distortion formula complete enough for me to write an optimization step.
I understand that the opencv.fisheye model can't be fit exactly by the panoramic model used by PTGui, but with both models in hand I can achieve good results by computing a numerical approximation from one model to the other.

Any hints, links or advice would be really appreciated.

Thank you,
Ilia Shipachev

PTGui Support

Aug 2, 2021, 5:21:21 AM
to pt...@googlegroups.com
Hi Ilia,

PTGui follows the Panotools lens distortion model:
https://wiki.panotools.org/Lens_correction_model

The radius is normalized such that 1 corresponds to the smaller edge of
the image (0 is center). The polynomial maps destination radius to
source radius (not the other way round).

For fisheye images PTGui uses the lens model as explained in 3.28:
https://www.ptgui.com/support.html#3_28

Transform is done in this order:
crop - lens shift - abc correction - lens projection

Parsing the cropping data from the .pts file can be complicated; this
script shows how it can be done:
http://www.ptgui.com/parsepts.html

Hope this helps.

Kind regards,

Joost Nieuwenhuijse
www.ptgui.com


Ilia Shipachev

Aug 2, 2021, 5:59:20 AM
to PTGui Support
Hi Joost,

This still leaves the main question unanswered: what is the polynomial form of the lens distortion used in PTGui? Is it normalized to have [1 = a + b + c]?

>The radius is normalized such that 1 corresponds to the smaller edge of the image (0 is center). 
But I have fisheye lenses, and the fisheye factor equals one. All computations are then in terms of angles rather than the radius, which becomes quite tricky with all the additional normalization parameters.
Is there a formula where I can see something like: source 3D point -> angle (usually theta) -> distorted angle -> image point?
I understand that for the undistortion process the whole pipeline has to be reversed, and I'm aware that the "distortion coefficients" are distortion coefficients, not "undistortion coefficients". The main unknown for me is a proper and clear formula showing how PTGui sees those transformations, so that I can properly fit the a, b, c coefficients given the k1, k2, k3, k4 I obtained during calibration with the Kannala-Brandt fisheye camera model.

Sincerely,
Ilia Shipachev

Erik Krause

Aug 2, 2021, 7:28:24 AM
to pt...@googlegroups.com
Am 02.08.21 um 11:59 schrieb Ilia Shipachev:
> This still leaves the main question unanswered -> what is the polynomial
> form for lens distortion used in PTGui? Does it normalized to have [1 = a +
> b + c]?

Did you read the wiki article? It's all there:

r(src) = a*r(dest)^4 + b*r(dest)^3 + c*r(dest)^2 + d*r(dest)

with d = 1-(a+b+c)
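
In code (a minimal sketch; the function name is mine), that correction reads:

```python
def panotools_radius(r_dest, a, b, c):
    """Panotools a-b-c correction: maps destination radius to source
    radius. Radii are normalized so that 1.0 corresponds to half the
    shorter image side; d = 1 - (a + b + c) keeps r = 1 fixed."""
    d = 1.0 - (a + b + c)
    return a * r_dest**4 + b * r_dest**3 + c * r_dest**2 + d * r_dest
```

With that choice of d, panotools_radius(1.0, a, b, c) is always exactly 1.0, which is the normalization asked about above.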

> But I have fisheye lenses. Fisheye factor equals one.

If the Fisheye Factor k as used in PTGui were one, it would be a
rectilinear lens: https://wiki.panotools.org/Fisheye_Projection

> And all computations
> regarding angles, but not radius, becomes quite tricky. With all additional
> normalization parameters.

But that's the point of all that: mapping light rays from the outside
world to coordinates on the image plane. For incoming rays you don't
have coordinate or radius information, you only have angles.

--
Erik Krause

Ilia Shipachev

Aug 2, 2021, 7:45:02 AM
to PTGui Support
Dear Erik,

I'm sorry, I made a mistake: the fisheye factor equals 0.
That is why it becomes complicated for me. With fisheye factor 0 I understand that a 3D point (X, Y, Z) is projected through theta = atan(r), with r computed from (X/Z, Y/Z); this theta is then distorted into theta_d, the point is rescaled as p -> p * theta_d / r, and finally the focal length and optical-center shifts are applied. All those steps are clear to me, as is their reverse order when we need to undistort.
But! Does this theta get normalized to something different? I've noticed that it is normalized somehow so that a radius angle of 1 in 3D maps to r0 in pixels (which is half of the shortest side). This normalization of theta could be applied before taking the atan of the 3D point, or after.
That is the thing: I have a good formula for the radius, but not for theta, and those normalization procedures are subject to change.
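
The pipeline I described can be written down as follows (a minimal sketch of the OpenCV fisheye / Kannala-Brandt forward projection; function and variable names are mine):

```python
import math

def opencv_fisheye_project(X, Y, Z, fx, fy, cx, cy, k1, k2, k3, k4):
    """OpenCV fisheye forward projection of a 3D camera-frame point."""
    a, b = X / Z, Y / Z                # pinhole normalization
    r = math.hypot(a, b)
    theta = math.atan(r)               # angle from the optical axis
    # polynomial distortion of the angle:
    theta_d = theta * (1 + k1 * theta**2 + k2 * theta**4
                         + k3 * theta**6 + k4 * theta**8)
    scale = theta_d / r if r > 1e-12 else 1.0
    xd, yd = a * scale, b * scale      # p -> p * theta_d / r
    return fx * xd + cx, fy * yd + cy  # focal length and center shift
```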

Sincerely,
Ilia Shipachev

Erik Krause

Aug 2, 2021, 11:07:41 AM
to pt...@googlegroups.com
Am 02.08.21 um 13:45 schrieb Ilia Shipachev:
> I've noticed that it is normalized somehow to map a radius angle of 1
> in 3D into r0 in pixels (which is half of the shortest side).

No, in mm, since focal length is in mm and theta is unitless (radians).

Perhaps this might help:
->
http://michel.thoby.free.fr/Fisheye_history_short/Projections/Models_of_classical_projections.html

I'd be curious why you want to know all this? What do you want to
achieve? You don't need to know all that for panorama stitching...

--
Erik Krause

Ilia Shipachev

Aug 2, 2021, 11:26:11 AM
to PTGui Support
I have calibration results for several fisheye cameras together, and a quite good and precise model to describe them.
The model is called equidistant, or "f-theta", or simply the fisheye camera model with fisheye factor equal to 1.

I saw the discussion on transforming k1, k2, k3 coefficients into a, b, c, but those were for the Brown-Conrady camera model (i.e. the standard OpenCV camera model), and there were some formulas for that case.
I have the calibration in Kannala-Brandt form, but I also have to use it to extract properly stitched panoramic pictures for each camera rig. I have no problems with any of the other parameters except the way PTGui sees and understands the theta angle.
For example, PTGui keeps the r0 radius (half the shortest side) of the picture stable for any a, b, c parameters. That means r0 is mapped from (theta = 1). But the classical approach measures the angle in radians, e.g. through the focal length. It seems PTGui renormalizes theta to have this correspondence [r0 <-> 1 radian].
Pixels and mm in the formulas are interchangeable and I understand how to handle that, but this angle scaling is what I'm missing.

For panorama stitching I do an approximation from the calibration polynomial into the panoramic polynomial. Everything is quite good and well behaved, except for this angle scaling issue. That is why I was looking for the distortion formula.

Sincerely,
Ilia Shipachev

Erik Krause

Aug 2, 2021, 1:42:33 PM
to pt...@googlegroups.com
Am 02.08.2021 um 17:26 schrieb Ilia Shipachev:

> That means that r0 was mapped from (theta = 1). But the classical
> approach measures the angle in radians, e.g. through the focal length.

theta must be given in radians, otherwise you won't get meaningful
results, so PTGui does this as well. It's only the resulting radius
which is normalized to the shorter side for further calculations (PTGui
uses both abc and k lens corrections).

So if you have an object 60° away from the optical axis and your lens is
8mm (k=0), the point will be about 8.378mm from the center. Or the other
way round: the neutral radius 1 on a 24x36mm sensor is 12mm and with the
same lens corresponds to 1.5rad or 85.9437°

--
Erik Krause
http://www.erik-krause.de

Erik Krause

Aug 2, 2021, 1:48:33 PM
to pt...@googlegroups.com
Am 02.08.2021 um 17:26 schrieb Ilia Shipachev:
> For panorama stitching I do an approximation from calibration polynomial
> into panoramic polynomial. Everything is quite good and well behaved.
> Except this angle scaling issue. That is why I was looking for the
> distortion formula.

For panorama stitching you normally don't need all that. Just drop the
files on PTGui (choosing the right fisheye preset if necessary) and
press Align. PTGui will optimize all necessary parameters.

PTGui Support

Aug 3, 2021, 5:04:00 AM
to pt...@googlegroups.com
Hi Ilia,

First you apply the a/b/c polynomial correction. You should see this as
just a preprocessing step, independent of the actual lens projection.

r_src is the distance from the image center, scaled to half the short
side. Apply the polynomial in inverse, to get r_dest. Convert back to xy
coordinates in the original image scale.

Next calculate R, which is the distance of the pixel from the optical
center in millimeters. In other words, divide by the pixel density
(pixels per millimeter).

The fisheye projection formula will give you theta from R and the focal
length.

Also calculate the polar angle phi=atan2(y,x) in the original 2d image.

Now, phi and theta are the angles of the light ray in 3d space, you can
convert this pair to a 3d coordinate on the surface of the unit sphere.
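
A sketch of these steps in one function (my own sketch, not PTGui code; it assumes the equidistant case, no crop or lens shift, and an image centered on the optical axis):

```python
import math

def pixel_to_ray(x, y, width, height, a, b, c, f_mm, mm_per_px):
    """Map an image pixel to a direction on the unit sphere:
    1) invert the a/b/c polynomial on the normalized radius,
    2) convert the corrected radius to mm, then to theta (R = f * theta),
    3) combine theta with the polar angle phi into a 3D vector."""
    cx, cy = width / 2.0, height / 2.0
    r0 = min(width, height) / 2.0        # half the short side, in pixels
    dx, dy = x - cx, y - cy
    r_src = math.hypot(dx, dy) / r0      # normalized source radius

    # invert r_src = a r^4 + b r^3 + c r^2 + d r numerically (bisection)
    d = 1.0 - (a + b + c)
    poly = lambda r: a * r**4 + b * r**3 + c * r**2 + d * r
    lo, hi = 0.0, 4.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if poly(mid) < r_src:
            lo = mid
        else:
            hi = mid
    r_dest = 0.5 * (lo + hi)

    R = r_dest * r0 * mm_per_px          # radius in mm on the sensor
    theta = R / f_mm                     # equidistant fisheye: R = f * theta
    phi = math.atan2(dy, dx)             # polar angle in the image
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))
```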

Kind regards,

Joost Nieuwenhuijse
www.ptgui.com


Ilia Shipachev

Aug 4, 2021, 12:34:48 PM
to PTGui Support
Erik and Joost,

Thank you for your advice. I will come back with a short note describing what worked for me if I manage to do it straight from the formulas, without any hacks.

Ilia

Ilia Shipachev

Feb 10, 2022, 12:45:08 PM
to PTGui Support
Hi guys,

I promised to report on this a long time ago. I can't disclose the exact solution.
But it was indeed solved by numerical optimization of the distortion parameters and focal length, as understood by PTGui, starting from the usual ones used by OpenCV.
This approximation from (f_opencv, k1, k2, k3, k4) -> (f_pano, a, b, c) has to include the focal length, as the lens geometry can't be matched properly if we keep the focal length fixed. I'm not talking about units, like pixels vs. mm, but about the value itself: the panoramic distortion strictly holds points at the circle of the image radius, so to properly fit a calibrated OpenCV distortion into panoramic parameters we also have to relax the focal length to find the best fit.

For finding the right formula for the parameters used by PTGui, this link was helpful for me, and it matched what actually happens in PTGui.

Hope it helps others who are looking for how to get (a, b, c) from classic computer vision camera models.
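
For what it's worth, since I can't share the exact code, here is roughly how such a fit could be set up (my own sketch, not the actual solution; it assumes the equidistant base model with radii normalized to half the short side, and all names are mine):

```python
import numpy as np

def ptgui_poly(r, a, b, c):
    """Panotools polynomial with d = 1 - (a + b + c); radii normalized to 1."""
    d = 1.0 - (a + b + c)
    return a * r**4 + b * r**3 + c * r**2 + d * r

def fit_abc(thetas, r_target_px, r0, f_pano):
    """Least-squares fit of (a, b, c) for a fixed panoramic focal length.
    r_target_px would come from the calibrated OpenCV/Kannala-Brandt model."""
    u = f_pano * thetas / r0                 # equidistant base radius
    # model: r_target/r0 = u + a(u^4 - u) + b(u^3 - u) + c(u^2 - u)
    M = np.column_stack([u**4 - u, u**3 - u, u**2 - u])
    y = r_target_px / r0 - u
    coefs, *_ = np.linalg.lstsq(M, y, rcond=None)
    return coefs, np.linalg.norm(M @ coefs - y)

def fit_f_and_abc(thetas, r_target_px, r0, f_grid):
    """Relax the focal length too: scan candidate f values, keep the best."""
    best_f = min(f_grid, key=lambda f: fit_abc(thetas, r_target_px, r0, f)[1])
    return best_f, fit_abc(thetas, r_target_px, r0, best_f)[0]
```

In the real problem r_target_px is sampled from the (f_opencv, k1..k4) model over a grid of theta values, and the scan over f_grid is exactly the "relax the focal length" step.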

Ilia