Equirectangular to Fisheye and vice versa


pan...@gmail.com

Oct 28, 2007, 8:39:32 AM
to hugin and other free panoramic software
Hi,
PTools uses the erect_sphere_tp function to transform from equirectangular
projection to fisheye projection, and the sphere_tp_erect function to
transform from fisheye projection to equirectangular projection.
I have spent some time trying to understand the code, but without success.
Does anyone understand the code well enough to explain it in detail? I
think it would also help to draw some diagrams describing the process.

void erect_sphere_tp( double x_dest, double y_dest, double* x_src,
                      double* y_src, void* params )
{
    // params: double distance ('distance' below is a macro for *((double*)params))

    register double theta, r, s;
    double v[3];
#if 0   /* older implementation, disabled in the source */
    theta = sqrt( x_dest * x_dest + y_dest * y_dest ) / *((double*)params);
    phi   = atan2( y_dest, x_dest );

    v[1] = *((double*)params) * sin( theta ) * cos( phi );   // x' -> y
    v[2] = *((double*)params) * sin( theta ) * sin( phi );   // y' -> z
    v[0] = *((double*)params) * cos( theta );                // z' -> x

    theta = atan( sqrt( v[0]*v[0] + v[1]*v[1] ) / v[2] );    // was atan2
    phi   = atan2( v[1], v[0] );

    *x_src = *((double*)params) * phi;
    if( theta > 0.0 )
    {
        *y_src = *((double*)params) * ( -theta + PI / 2.0 );
    }
    else
        *y_src = *((double*)params) * ( -theta - PI / 2.0 );
#endif
    // radial distance of (x_dest, y_dest) from the origin
    r     = sqrt( x_dest * x_dest + y_dest * y_dest );
    theta = r / distance;
    if( theta == 0.0 )
        s = 1.0 / distance;     // limit of sin(theta)/r as r -> 0
    else
        s = sin( theta ) / r;

    v[1] = s * x_dest;
    v[0] = cos( theta );

    *x_src = distance * atan2( v[1], v[0] );
    *y_src = distance * atan( s * y_dest / sqrt( v[0]*v[0] + v[1]*v[1] ) );
}


void sphere_tp_erect( double x_dest, double y_dest, double* x_src,
                      double* y_src, void* params )
{
    // params: double distance

    register double phi, theta, r, s;
    double v[3];

    phi   = x_dest / distance;
    theta = -y_dest / distance + PI / 2;
    if( theta < 0 )
    {
        theta = -theta;
        phi  += PI;
    }
    if( theta > PI )
    {
        theta = PI - ( theta - PI );
        phi  += PI;
    }

#if 0   /* older implementation, disabled in the source */
    v[2] = *((double*)params) * sin( theta ) * cos( phi );   // x' -> z
    v[0] = *((double*)params) * sin( theta ) * sin( phi );   // y' -> x
    v[1] = *((double*)params) * cos( theta );                // z' -> y

    theta = atan2( sqrt( v[0]*v[0] + v[1]*v[1] ), v[2] );
    phi   = atan2( v[1], v[0] );

    *x_src = *((double*)params) * theta * cos( phi );
    *y_src = *((double*)params) * theta * sin( phi );
#endif
    s    = sin( theta );
    v[0] = s * sin( phi );      // y' -> x
    v[1] = cos( theta );        // z' -> y

    r = sqrt( v[1]*v[1] + v[0]*v[0] );

    theta = distance * atan2( r, s * cos( phi ) );

    *x_src = theta * v[0] / r;
    *y_src = theta * v[1] / r;
}

Pablo d'Angelo

Oct 28, 2007, 9:40:54 AM
to pan...@gmail.com, hugin and other free panoramic software
Hi panovr,

pan...@gmail.com wrote:


> Hi,
> PTools uses the erect_sphere_tp function to transform from equirectangular
> projection to fisheye projection, and the sphere_tp_erect function to
> transform from fisheye projection to equirectangular projection.
> I have spent some time trying to understand the code, but without success.
> Does anyone understand the code well enough to explain it in detail? I
> think it would also help to draw some diagrams describing the process.

Hmm, actually, one of the things I'd like to do is write a cleaned-up small
library for doing the transformations, one that works mainly in Cartesian
coordinates instead of spherical coordinates (as panotools does). This
would be much faster, since many calls to trigonometric functions could be
avoided that way.
Anyway, that part of the panotools code is quite hard to read, and I haven't
really tried to figure out all the projection functions in detail.

> void erect_sphere_tp( double x_dest,double y_dest, double* x_src,
> double* y_src, void* params)
> {
> // params: double distance
>
> register double theta,r,s;
> double v[3];

> r = sqrt( x_dest * x_dest + y_dest * y_dest );

Actually, I believe sphere_tp_erect() transforms from fisheye coordinates
given in x_dest, y_dest to equirectangular, since I can't think of a reason
to compute r like that on an equirectangular image, whereas r is quite
meaningful for fisheye images.

ciao
Pablo

Tom Sharpless

Oct 29, 2007, 4:06:34 PM
to hugin and other free panoramic software
Hey, Pablo---

Dersch did it right. A 3D sphere is the proper reference frame for
stitching, and there is no way to
avoid using the correct equations of spherical geometry. I go way
back in computer graphics and
image processing and have spent some time studying the panotools code,
so trust me on this.

Here is my own conceptual overview of the panotools geometry model.

The camera rotates about a point, represented by the center of the
panosphere. The sphere's radius, R, is analogous to the lens focal length:
it is the scale factor that relates distances in an image to angles.
A = d/R is the central angle in radians corresponding to distance d on the
surface of the sphere.

[The numerical value of R depends on resolution, scaling and other
things. For coordinates measured
in radians, R == 1; for coordinates in pixels, we measure R in pixels,
and so on. R can also be
varied for the purpose of changing the apparent focal length. Dersch
mostly just calls it the
distance factor.]

In panotools the panosphere is represented by its equirectangular
projection, a Cartesian coordinate system in which distances along X and Y
represent central angles. The range of the Y axis is -Pi/2 to Pi/2 radians,
or -90 to 90 degrees; the range of X is -Pi to Pi radians, or -180 to 180
degrees. The X axis is cyclic: its left and right ends are at the same
place [Y is properly skew-cyclic but is usually treated as open]. Because
this is a map of a sphere, distances must be computed by the "great
circle" formulae, not the Euclidean sqrt(dx^2 + dy^2).
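
For instance, the central angle between two points given as (longitude,
latitude) in radians can be computed like this (a minimal sketch of mine,
not panotools code; multiply by R for a distance):

#include <math.h>

/* Central angle between two points on the panosphere, each given as
   (longitude, latitude) in radians.  Multiply by R for a distance. */
double great_circle_angle( double lon1, double lat1,
                           double lon2, double lat2 )
{
    /* dot product of the two unit direction vectors */
    double dot = cos( lat1 ) * cos( lat2 ) * cos( lon1 - lon2 )
               + sin( lat1 ) * sin( lat2 );
    return acos( dot );
}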

The yaw and pitch angles define the direction the camera is pointing
when a picture is taken. They
are the panosphere coordinates of the image point that lies on the
optical axis of the lens. Roll is
rotation of the camera around the optical axis, hence of the image
around that point.

The optical axis is the center of the lens's projection function. For
most real lenses this function
is circularly symmetric, that is, it operates only in the radial
direction. The ideal function for a
rectilinear lens is r/FL = tan(angle), for most fisheye lenses r/FL =
2*sin(angle/2). But Panotools
also allows for "lens" functions that are not circularly symmetric,
such as cylindrical and
equirectangular projections.
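
In C those two ideal radial functions are just (my own sketch, not
panotools code):

#include <math.h>

/* Ideal image radius, in units of focal length, for a ray 'angle'
   radians off the optical axis. */
double rectilinear_radius( double angle ) { return tan( angle ); }
double fisheye_radius( double angle )     { return 2.0 * sin( angle / 2.0 ); }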

Coordinates on the panosphere are angles: to project a picture onto
it, we must change pixel indices
into angles. For circularly symmetric lenses, panotools does this by
converting to polar coordinates
(A,Theta) centered on the optical axis. The raw radial angle,
sqrt(x*x + y*y)/(FL in pixels), has to
be converted to true angle A by inverting the actual lens projection;
panotools uses the ideal
function times a 4th order correction polynomial. The polar angle,
Theta = atan2(y,x), does not depend
on the projection in this case. [For most non-circularly symmetric
"lenses", Panotools converts
directly to equirectangular coordinates centered at (0,0) instead].
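
In rough code the polar conversion step looks like this (a sketch of the
idea only; invert_lens() is a hypothetical stand-in for undoing the actual
lens projection and correction polynomial, which needs a numeric solve):

#include <math.h>

extern double invert_lens( double raw_angle );   /* hypothetical */

/* Convert image coordinates (x, y), already centered on the optical
   axis, to polar form (A, Theta).  fl_pixels is the focal length in
   pixels. */
void to_polar( double x, double y, double fl_pixels,
               double* A, double* Theta )
{
    double raw = sqrt( x * x + y * y ) / fl_pixels;  /* raw radial angle */
    *A     = invert_lens( raw );    /* true angle off the optical axis */
    *Theta = atan2( y, x );         /* independent of the projection   */
}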

Next we must transform to the coordinate system of the panosphere, with the
optical center at the point (y,p). The essential step is a 3-dimensional
rotation around the center of the sphere, that puts the optical axis at
(y,p) and also rotates by the Roll angle (r). The common case adds a final
transformation from polar to Cartesian coordinates. [The rotation is
preceded and followed by transformations to and from 3D Cartesian
coordinates, so the whole mapping is quite arithmetic intensive].
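
Schematically (again my sketch, not Dersch's code), the rotation step is an
ordinary 3x3 matrix applied to a direction vector:

/* Apply a 3x3 rotation matrix m (composed from yaw, pitch and roll)
   to a 3D direction vector v, writing the result into out. */
void rotate_vec3( const double m[3][3], const double v[3], double out[3] )
{
    for ( int i = 0; i < 3; i++ )
        out[i] = m[i][0] * v[0] + m[i][1] * v[1] + m[i][2] * v[2];
}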

The beautiful fact is that all this math eventually results in a pair
of (real) pixel coordinates
associated with each output pixel, giving its position in the input
image. So the actual remapping of
pixel data is done rather fast by a geometry-ignorant interpolation
engine. [This means that panotools
actually composes a reversed series of coordinate transformations that
are the inverses of the ones
mentioned above].
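
In outline, the remapper looks something like this (a sketch with invented
names; I've assumed the panotools-style transform signature):

typedef void  (*xform_fn)( double x_dest, double y_dest,
                           double* x_src, double* y_src, void* params );
typedef float (*interp_fn)( const float* img, int w, int h,
                            double x, double y );

/* Geometry-ignorant remapper: for each output pixel, ask the composed
   inverse transform where it comes from, then sample the input there. */
void remap( const float* in, int in_w, int in_h,
            float* out, int out_w, int out_h,
            xform_fn transform, interp_fn interpolate, void* params )
{
    for ( int y = 0; y < out_h; y++ )
        for ( int x = 0; x < out_w; x++ )
        {
            double xs, ys;
            transform( x - out_w / 2.0, y - out_h / 2.0, &xs, &ys, params );
            out[y * out_w + x] = interpolate( in, in_w, in_h,
                                              xs + in_w / 2.0,
                                              ys + in_h / 2.0 );
        }
}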

Dersch's excellent C code carries out these complicated calculations
as efficiently as any portable
code could be expected to do, taking (so far as I can see) all
shortcuts possible without violating
mathematical correctness. I doubt very much if any of us could
improve on it.

Some practical speedup might come from re-using previously computed
transformations when possible. I'd be on the lookout for places where it
might help to cache the interpolation indices for possible re-use. Obvious
cases include any separable transformations, where the same interpolation
indices apply to every row and/or column; places where several images need
the same transformation (e.g. to prepare sets of geometry- and
color-corrected pictures before stitching); and situations where two or
more transformations are linearly related (the interpolation indices need
only be rescaled).

Cheers, Tom

Pablo d'Angelo

Oct 29, 2007, 5:17:02 PM
to Tom Sharpless, hugin and other free panoramic software
Hi Tom,

Tom Sharpless wrote:


> Hey, Pablo---
>
> Dersch did it right. A 3D sphere is the proper reference frame for
> stitching, and there is no way to
> avoid using the correct equations of spherical geometry.

I agree that the 3D sphere is a proper reference frame for stitching, but it
is not the only possible way. It is also possible to hold exactly the same
information that is stored in spherical coordinates in 3D Cartesian
coordinates, where each 3D point describes a point on the sphere, if you
like to think of it that way. Both forms can be used to specify a ray of sight.

So instead of passing around spherical coordinates, it is possible to
express the same rays in Cartesian coordinates. I believe using this
representation is significantly faster than working with spherical
coordinates directly, at least for many of the common use cases, since many
transformations can easily be written as matrix computations instead of
using their equivalent spherical forms. Profile a typical stitching run,
and you will see that more than half of the time is spent in the sin() and
cos() functions.

For example, for the common rectilinear <-> rectilinear case, there is no
need to use spherical trigonometry at all. I'd like to exploit this without
losing generality, which works when representing the rays of sight as
points on a sphere with Cartesian coordinates rather than spherical
coordinates. If other projections are used, some spherical trigonometry is
of course required, but I'd like to keep it to a minimum. Do you see any
problems there?
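
To make that concrete, here is a minimal sketch of what I mean (my naming,
untested):

#include <math.h>

typedef struct { double x, y, z; } Ray;   /* unit ray of sight */

/* A rectilinear image point maps to a ray without any trigonometry:
   the pixel lies on a plane at distance fl in front of the center. */
Ray ray_from_rectilinear( double x, double y, double fl )
{
    double n = 1.0 / sqrt( x * x + y * y + fl * fl );
    Ray r = { x * n, y * n, fl * n };
    return r;
}

Rotating such rays is a plain 3x3 matrix multiply, and projecting back into
another rectilinear image is a perspective division, so the whole
rectilinear <-> rectilinear path needs no sin() or cos() at all.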

Btw, one limitation of the current transform stack is that it does not allow
easy "flat" stitching of images of a flat surface captured from different
viewpoints with non-perfect rectilinear or other lenses. One might say that
this is not stitching panoramic images, but it is nevertheless a very useful
operation. One can work around this with various multi-step applications of
panotools, but one loses the nice and elegant way panotools works for
normal, single-viewpoint panoramas.

> The beautiful fact is that all this math eventually results in a pair
> of (real) pixel coordinates
> associated with each output pixel, giving its position in the input
> image. So the actual remapping of
> pixel data is done rather fast by a geometry-ignorant interpolation
> engine. [This means that panotools
> actually composes a reversed series of coordinate transformations that
> are the inverses of the ones
> mentioned above].

I know. However, it is possible to use a different internal representation
for this function.

> Some practical speedup might come from re-using previously computed
> transformations when possible.

Something like that is done by the remapper built into panotools, when using
the fast transform option, by subsampling the transformation and doing
linear interpolation in between. This is not 100% accurate, but the
interpolation errors are kept at subpixel level. It is not yet supported by
nona though.
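
Roughly like this (a sketch of the subsampling idea, not the actual
panotools code):

typedef void (*xform_fn)( double x_dest, double y_dest,
                          double* x_src, double* y_src, void* params );

/* Evaluate the exact transform only every 'step' pixels along a row
   and linearly interpolate the source coordinates in between. */
void fast_transform_row( double y_dest, int width, int step,
                         xform_fn transform, void* params,
                         double* xs, double* ys )  /* 'width' entries each */
{
    for ( int x0 = 0; x0 < width; x0 += step )
    {
        int x1 = ( x0 + step < width ) ? x0 + step : width - 1;
        double ax, ay, bx, by;
        transform( x0, y_dest, &ax, &ay, params );
        transform( x1, y_dest, &bx, &by, params );
        for ( int x = x0; x <= x1; x++ )
        {
            double t = ( x1 > x0 ) ? (double)( x - x0 ) / ( x1 - x0 ) : 0.0;
            xs[x] = ax + t * ( bx - ax );
            ys[x] = ay + t * ( by - ay );
        }
    }
}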

> I'd be on the lookout for places where it might help to cache the
> interpolation indices for possible re-use. Obvious cases include any
> separable transformations, where the same interpolation indices apply to
> every row and/or column; places where several images need the same
> transformation (e.g. to prepare sets of geometry- and color-corrected
> pictures before stitching); and situations where two or more
> transformations are linearly related (the interpolation indices need only
> be rescaled).

The problem is that those are quite hard to exploit (from a software coding
point of view) when using a general model such as the one panotools uses.

ciao
Pablo

Bruno Postle

Oct 29, 2007, 5:37:07 PM
to Hugin ptx
On Mon 29-Oct-2007 at 22:17 +0100, Pablo d'Angelo wrote:
>
>I agree that the 3D sphere is a proper reference frame for stitching, but it
>is not the only possible way. It is also possible to hold exactly the same
>information that is stored in spherical coordinates in 3D Cartesian
>coordinates, where each 3D point describes a point on the sphere, if you
>like to think of it that way. Both forms can be used to specify a ray of sight.

I can confirm that this technique works - when I wrote my own
panorama rendering stack (in Perl, doh) five years ago, it didn't
occur to me that there would be any other way.

>> Some practical speedup might come from re-using previously computed
>> transformations when possible.
>
>Something like that is done by the remapper built into panotools, when using
>the fast transform option, by subsampling the transformation and doing
>linear interpolation in between. This is not 100% accurate, but the
>interpolation errors are kept at subpixel level. It is not yet supported by
>nona though.

I seem to remember that the 'fast transform' option only interpolates
within a single scanline; this was because it had to work with the
closed-source PTStitcher. A two-dimensional 'fast transform' ought to be
even faster.

--
Bruno

pan...@gmail.com

Oct 29, 2007, 11:24:35 PM
to hugin and other free panoramic software
Hi,
for example, when I stitch four fisheye images (so every two adjacent
images have a 90 degree overlap region), I select a control point
p1(x1, y1) on Image 1 (which has yaw = 0, pitch = 0, roll = 0) and the
corresponding control point p2(x2, y2) on Image 2 (which has yaw = 90
degrees, pitch = 0, roll = 0).
Then I want to use PTools to optimize these two points: what is the
process that PTools follows, and which functions does it invoke?
If I know which functions PTools invokes and in what order, maybe I can
understand it more easily.


pan...@gmail.com

Oct 30, 2007, 12:31:56 AM
to hugin and other free panoramic software
Hi Pablo,
by Cartesian coordinates, do you mean using (x, y, z) to represent a point
in 3-D space, instead of using (r, theta, phi) in spherical coordinates?
(For spherical coordinates, take theta to be the azimuthal angle in the
xy-plane measured from the x-axis (denoted lambda when referred to as the
longitude), phi to be the polar angle from the z-axis (the colatitude,
equal to phi = 90 - delta, where delta is the latitude), and r to be the
distance (radius) from the point to the origin.)
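
In code, I imagine the conversion like this (just a sketch following those
conventions):

#include <math.h>

/* Spherical (r, theta = azimuth, phi = polar angle / colatitude) to
   Cartesian, following the conventions above. */
void sph_to_cart( double r, double theta, double phi,
                  double* x, double* y, double* z )
{
    *x = r * sin( phi ) * cos( theta );
    *y = r * sin( phi ) * sin( theta );
    *z = r * cos( phi );
}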

Tom Sharpless

Oct 30, 2007, 12:44:52 AM
to hugin and other free panoramic software
I know almost nothing about how the optimizer works, but clearly it must
apply the specified coordinate transformations to the control points in the
forward direction (mapping from photos to pano) in order to determine how
far apart they will be in the pano; whereas the stitcher applies them in
the reverse direction (mapping pano to photos). Both mappings can't be done
by the same code, and it is possible (even likely) that the optimizer has
its own algorithms for this. This could cause problems if the mapping used
by the optimizer is not exactly the inverse of the stitcher's mapping.

Maybe one of the experienced developers can tell us more.

In your example, the optimizer might measure the yaw between images 1 and 2
by mapping both control points to the panosphere at yaw = 0, and taking the
difference of X coordinates. If the lens parameters were accurate, that
would give an accurate result, but probably there is some error. The
difference between 90 degrees and the measured angle could be taken as a
first estimate of the error. The error could be reduced by adjusting either
the supposed yaw (90 degrees) or, say, the lens focal length, and with just
one control point there is no way to decide which is correct. With enough
control points, this question could be resolved.

The steps for mapping one control point to the panosphere might be as
follows: 1) convert pixel coordinates to polar form around the optical
center (image center + d,e parameters); 2) convert the radius to an
angle by dividing it by the focal length in pixels; 3) apply the
polynomial correction given by parameters (a,b,c); 4) convert back to
Cartesian coordinates, now in radians rather than pixels. Note that
the polynomial needed here is the inverse of the one used to map pano
pixels onto image pixels.
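
In C those steps might look like this (my sketch; inv_poly() is a
hypothetical stand-in for the inverse correction polynomial, which in
general needs a numeric solve):

#include <math.h>

extern double inv_poly( double raw_angle );   /* hypothetical inverse of a,b,c */

/* Map one control point to panosphere angles, following steps 1-4:
   cx, cy is the optical center (image center plus d,e); fl_pixels is
   the focal length in pixels. */
void point_to_panosphere( double px, double py,
                          double cx, double cy, double fl_pixels,
                          double* x_rad, double* y_rad )
{
    double x = px - cx, y = py - cy;      /* 1) polar form about the center */
    double r = sqrt( x * x + y * y );
    double t = atan2( y, x );
    double a = r / fl_pixels;             /* 2) radius -> raw angle        */
    a = inv_poly( a );                    /* 3) apply inverse polynomial   */
    *x_rad = a * cos( t );                /* 4) back to Cartesian, radians */
    *y_rad = a * sin( t );
}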

I hope this is helpful.

pan...@gmail.com

Oct 30, 2007, 1:20:14 AM
to hugin and other free panoramic software

On Oct 30, 4:06 am, Tom Sharpless <TKSharpl...@gmail.com> wrote:

> Hey, Pablo---
>
> Dersch did it right. A 3D sphere is the proper reference frame for
> stitching, and there is no way to
> avoid using the correct equations of spherical geometry. I go way
> back in computer graphics and
> image processing and have spent some time studying the panotools code,
> so trust me on this.
>
> Here is my own conceptual overview of the panotools geometry model.
>
> The camera rotates about a point, represented by the center of the
> panosphere. The sphere's radius,
> R, is analogous to the lens focal length: it is the scale factor that
> relates distances in an image to
> angles. A = d/R is the central angle in radians corresponding to
> distance d on the surface of the
> sphere.

What do you mean by "distances in an image to angles"?
And for "A = d / R", do you mean arc length = central angle * radius?

> [The numerical value of R depends on resolution, scaling and other
> things. For coordinates measured
> in radians, R == 1; for coordinates in pixels, we measure R in pixels,
> and so on. R can also be
> varied for the purpose of changing the apparent focal length. Dersch
> mostly just calls it the
> distance factor.]

Why R == 1?

>
> In panotools the panosphere is represented by its equirectangular
> projection, a Cartesian coordinate
> system in which distances along X and Y represent central angles. The
> range of the Y axis is -Pi/2 to
> Pi/2 radians, or -90 to 90 degrees; the range of X is -Pi to Pi
> radians, or -180 to 180 degrees. The X
> axis is cyclic: its left and right ends are at the same place [Y is
> properly skew-cyclic but is
> usually treated as open]. Because this is a map of a sphere, distances
> must be computed by the "great
> circle" formulae, not the Euclidean sqrt(dx^2 + dy^2).

For the distance part, do you mean the following:
to find the great circle distance between two points
located at latitude delta and longitude lambda,
i.e. (delta1, lambda1) and (delta2, lambda2), on a
sphere of radius r, convert the spherical coordinates
to Cartesian coordinates using

    xi = r * cos(lambdai) * cos(deltai)
    yi = r * sin(lambdai) * cos(deltai)
    zi = r * sin(deltai)

then find the angle alpha between v1 and v2 using the
dot product (dividing by r^2 to normalize):

    cos(alpha) = (v1 . v2) / r^2, so alpha = arccos((v1 . v2) / r^2)

and the great circle distance is then:

    d = r * alpha

Could you draw some diagrams to show this?

> Next we must transform to the coordinate system of the panosphere,
> with the optical center at the
> point (y,p). The essential step is a 3-dimensional rotation around
> the center of the sphere, that
> puts the optical axis at (y,p) and also rotates by the Roll angle
> (r). The common case adds a final
> transformation from polar to Cartesian coordinates. [The rotation is
> preceded and followed by
> transformations to and from 3D Cartesian coordinates, so the whole
> mapping is quite arithmetic
> intensive].

Should this part be computed with a matrix multiply?

> The beautiful fact is that all this math eventually results in a pair
> of (real) pixel coordinates
> associated with each output pixel, giving its position in the input
> image. So the actual remapping of
> pixel data is done rather fast by a geometry-ignorant interpolation
> engine. [This means that panotools
> actually composes a reversed series of coordinate transformations that
> are the inverses of the ones
> mentioned above].

And is it all done as a 2-D to 2-D mapping?
