Undistort a set of image points (not the whole image)


Pete Rocket

Feb 11, 2015, 6:35:04 AM
to ocamcali...@googlegroups.com
Hello everyone,

I want to undistort keypoints in pixel coordinates, which I obtained via the SIFT algorithm in OpenCV.

Based on the provided C++ code (world2cam, cam2world, creating look-up tables), I'm trying to implement my own undistort algorithm for normalized image points (equivalent to the function undistortPoints(…) in OpenCV).
However, all attempts have led to obviously wrong results.

I would be very grateful for any suggestions to solve this problem.

Best regards,
Pete

Pete Rocket

Aug 20, 2015, 3:39:24 AM
to OCamCalib Toolbox
I finally found the solution:
1. First, obtain a projective ray for the distorted pixel location via the cam2world function.
2. To get the normalized, undistorted image point, divide the X and Y coordinates of the ray by its Z component.
3. Define a 3x3 pinhole camera matrix with a specific field of view, which must be smaller than 180° due to the mathematical restriction of the tangent at 90°. (Note: the radial mapping function of perspective cameras is r = f*tan(theta), with focal length f and angle theta between the principal axis and the incoming ray.)
4. Finally, multiply the camera matrix with the normalized image point in homogeneous coordinates to obtain the final undistorted image point.
Below is my code example using OpenCV data types. I hope this will be helpful for other readers.

Best regards,
Pete


```c++
#include <cmath>
#include <vector>
#include <opencv2/core.hpp>

// cam2world(cv::Point2f, cv::Mat) is assumed to be the C++ port of the
// OCamCalib cam2world function (returns the viewing ray as a cv::Vec3d).

// Pinhole camera matrix K for a virtual perspective camera with the given
// field of view (in degrees, must be < 180) and image size.
cv::Matx33d getPinholeCameraMatrix(float viewfield, cv::Size imageSize)
{
    double f = 0.5 * imageSize.width / tan(0.5 * viewfield * CV_PI / 180.0);

    return cv::Matx33d(
        f, 0, 0.5 * imageSize.width,
        0, f, 0.5 * imageSize.height,
        0, 0, 1);
}

// Undistort single image points: cam2world gives the viewing ray of a
// distorted pixel, which is normalized by its Z component and reprojected
// with the virtual pinhole camera matrix.
void undistortPoints(std::vector<cv::Point2f> src, std::vector<cv::Point2f>& dst, cv::Mat ocamModel, float viewfield, cv::Size imageSize)
{
    dst.clear();

    cv::Matx33d K = getPinholeCameraMatrix(viewfield, imageSize);

    for (size_t i = 0; i < src.size(); i++)
    {
        // viewing ray of the distorted pixel
        cv::Vec3d ray = cam2world(src.at(i), ocamModel);

        // normalize by Z and project with the pinhole camera
        cv::Vec3d undisPoint = K * cv::Vec3d(ray[0] / ray[2], ray[1] / ray[2], 1.0);

        dst.push_back(cv::Point2f(undisPoint[0] / undisPoint[2], undisPoint[1] / undisPoint[2]));
    }
}
```
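
A call could then look like this (ocamModel is the calibration model loaded from the toolbox export; the keypoint, field of view and image size are just placeholder values):

```c++
// example usage with placeholder values
std::vector<cv::Point2f> keypoints = { cv::Point2f(150.f, 253.f) };
std::vector<cv::Point2f> undistorted;

undistortPoints(keypoints, undistorted, ocamModel, 90.0f, cv::Size(720, 480));
```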

khandelw...@gmail.com

Aug 20, 2015, 2:14:29 PM
to OCamCalib Toolbox, jasch...@gmail.com
Hey!

This is definitely useful. I finally wrote a small Matlab script, which was basically taken from the last few commented lines in the undistort.m script written by Scaramuzza. For some reason I had to interchange the X and Y coordinates of the distorted points at the input to get the desired undistorted points, i.e., input (Y,X) -> output (X,Y). After reading your reply, this makes much more sense to me now.

Thanks!

Pete Rocket

Jul 10, 2016, 6:27:35 AM
to OCamCalib Toolbox, jasch...@gmail.com
Hi Filip,

If you want to distort image points, you have to apply the distortion with the world2cam function using the inverse polynomial coefficients.
Those can be obtained during the calibration procedure in Matlab: just press 'Export Data' after calibration and extract the inverse coefficients from the text file.
The distort function in C++ would look like this:


```c++
// Distort (project back) undistorted image points. Here ocamModel must
// contain the INVERSE polynomial coefficients exported from the toolbox.
void distortPoints(std::vector<cv::Point2f> src, std::vector<cv::Point2f>& dst, cv::Mat ocamModel, float viewfield, cv::Size imageSize)
{
    dst.clear();

    // inverse pinhole camera matrix (undistorted pixel -> normalized ray)
    cv::Matx33d Kinv = getPinholeCameraMatrix(viewfield, imageSize).inv();

    for (size_t i = 0; i < src.size(); i++)
    {
        // undistorted point in homogeneous coordinates
        cv::Vec3d undisPoint = cv::Vec3d(src.at(i).x, src.at(i).y, 1);

        // normalized undistorted point (viewing ray)
        cv::Vec3d normPoint = Kinv * undisPoint;

        // distorted point from world2cam with the INVERSE polynomial coefficients
        cv::Point2f disPoint = world2cam(normPoint, ocamModel);

        dst.push_back(disPoint);
    }
}
```

Hope that helped.


On 28.06.2016 13:30, Filip wrote:
> Hi Pete,
>
> I've calibrated a camera with a 150° field of view and undistorted the fisheye image. I'm trying to find a good way to apply the effect of distortion to some points in the undistorted image. I suppose a good way is to use cam2world (using the C++ code), but after getting the X,Y,Z mapping I don't know how to get the image coordinates back in the distorted image. I'm aware of losing information when undistorting the image, but I only need to apply the distortion effect back onto some points of the undistorted image.
>
> I tried undistorting with your code, but how can I do the inverse process, i.e., transforming points from the undistorted image into the "same points" in the distorted image? Thank you in advance.
>
>
> Kind regards,
>
> Filip.


bruce...@gmail.com

Oct 19, 2018, 3:54:48 AM
to OCamCalib Toolbox
Hi Pete,
I referred to your approach to write the C code below, but I don't get the correct undistorted points. Please help me solve this problem, thank you very much.
```c
double point3D[3] = {0,0,0};
double point2D[2] = {150,253};

/* virtual pinhole camera matrix (f = 118.31..., principal point (359.5, 239.5)) */
double kt[9];
kt[0] = 118.311481351196; kt[1] = 0; kt[2] = 359.5;
kt[3] = 0; kt[4] = 118.311481351196; kt[5] = 239.5;
kt[6] = 0; kt[7] = 0; kt[8] = 1;

double x11, y11, z11;

/* viewing ray of the distorted pixel (OCamCalib cam2world) */
cam2world(point3D, point2D);
printf("~~!!![0] = %2.4f , [1]=%2.4f , [2]=%2.4f \n", point3D[0], point3D[1], point3D[2]);

/* normalize by Z */
point3D[0] /= point3D[2];
point3D[1] /= point3D[2];
point3D[2] = 1.0;

/* project with the pinhole camera matrix */
x11 = kt[0] * point3D[0] + kt[1] * point3D[1] + kt[2] * point3D[2];
y11 = kt[3] * point3D[0] + kt[4] * point3D[1] + kt[5] * point3D[2];
z11 = kt[6] * point3D[0] + kt[7] * point3D[1] + kt[8] * point3D[2];

x11 /= z11;
y11 /= z11;

printf("[0] = %2.4f , [1]=%2.4f , [2]=%2.4f \n", x11, y11 , z11);
getchar();
```
Best regards,
Lin

Pete Rocket

Oct 20, 2018, 5:01:42 PM
to OCamCalib Toolbox
Your code looks correct to me. Maybe you have to interchange the X and Y coordinates of the image point (X,Y) => (Y,X). Or interchange the coordinates of the principal point in the cam2world function (xc=model->yc and yc=model->xc).

myste...@gmail.com

Nov 9, 2018, 3:49:00 AM
to OCamCalib Toolbox
Hello, could you please post your matlab code mentioned in your comment?

Thank you!

yongjie...@gmail.com

Aug 13, 2019, 7:08:54 AM
to OCamCalib Toolbox
Could you please share the Matlab code? Thank you very much!

On Friday, August 21, 2015 at 2:14:29 AM UTC+8, khandelw...@gmail.com wrote:

Robert Waller

Oct 29, 2020, 10:28:03 AM
to OCamCalib Toolbox
Hi,

I am running into the same situation that you did, where I need to undistort only a set of points and not the entire image. The problem I'm facing is that I cannot use your method directly, as my images are taken with a 220-degree FOV fisheye camera. I tried to do this using the method from the undistort.m file. I ported it to C++, but I do not really understand how the function works. It seems a bit counter-intuitive to me that it would use world2cam and not cam2world. Would picking a FOV of less than 180° for the camera matrix still work even though my images are taken with a 220-degree FOV camera? Any help you can give me will be appreciated.

Kind regards
Robert

Pete Rocket

Oct 30, 2020, 5:49:40 AM
to OCamCalib Toolbox
Hi Robert,

You can still use my approach for undistorting image points. However, be aware that you will lose image content that lies outside the maximum possible pinhole field of view (at or beyond 180°). This means that distorted points from the image boundary are likely to produce incorrect or negative results with my approach.

But I think the correct way would be to use Scaramuzza's approach from the Matlab script undistort.m (commented lines 65 - 73). A user in this group reported that he successfully undistorted the image points with that snippet. He just had to switch the X and Y coordinates for the input points.

About the projection functions:
Here it depends on what you want to do. If you want to undistort the whole image, you usually do a backward warping with interpolation, which requires the inverse projection model because you go from destination (undistorted) to source (distorted). Therefore you need the world2cam function with the inverse polynomial coefficients to project your undistorted 3D point/ray back to the distorted image. This is basically what happens in the Matlab script undistort.m.
In contrast, if you want to undistort just single image points, a forward warping is sufficient (no interpolation required). That's why you use cam2world there.
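
Just to make the backward warping a bit more concrete, a rough, untested sketch could look like the code below. It reuses getPinholeCameraMatrix and the world2cam wrapper from my earlier snippets (the name buildUndistortMaps is made up), and ocamModelInv is assumed to hold the inverse polynomial coefficients:

```c++
// Builds remap tables that map each destination (undistorted) pixel back to
// its source location in the distorted fisheye image.
void buildUndistortMaps(cv::Mat ocamModelInv, float viewfield, cv::Size imageSize,
                        cv::Mat& mapx, cv::Mat& mapy)
{
    // destination pixel -> viewing ray of the virtual pinhole camera
    cv::Matx33d Kinv = getPinholeCameraMatrix(viewfield, imageSize).inv();

    mapx.create(imageSize, CV_32FC1);
    mapy.create(imageSize, CV_32FC1);

    for (int v = 0; v < imageSize.height; v++)
    {
        for (int u = 0; u < imageSize.width; u++)
        {
            cv::Vec3d ray = Kinv * cv::Vec3d(u, v, 1.0);

            // source pixel in the distorted image (world2cam with the
            // INVERSE polynomial coefficients)
            cv::Point2f srcPix = world2cam(ray, ocamModelInv);

            mapx.at<float>(v, u) = srcPix.x;
            mapy.at<float>(v, u) = srcPix.y;
        }
    }
}

// usage: cv::remap(fisheyeImg, undistortedImg, mapx, mapy, cv::INTER_LINEAR);
```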

Cheers,
Pete

Manuel Rey Area

Nov 24, 2020, 7:01:55 AM
to OCamCalib Toolbox
Hey, thanks for finding this out. I would like to do something different, though. I would like to know the area/patch that a virtual pinhole camera with a 53° FOV, rotated by an angle alpha w.r.t. the fisheye camera, samples from the fisheye image. The virtual camera is at exactly the same position in the world as the fisheye camera but tilted by alpha about the x axis (it could be any axis). My first approach is to center a rectangle in the middle of the fisheye image, get the 3D coordinates of this set of points, and rotate them by alpha. Then perform cam2world again to see the patch projection on the fisheye image. Is this right? My intuition tells me that the bigger the angle, the more distorted the patch projection should be. The problem I am facing is that no matter the angle of rotation, I cannot manage to get samples near the edges of the fisheye image. Not even close...

Pete Rocket

Dec 3, 2020, 11:03:42 AM
to OCamCalib Toolbox
Hi Manuel,

I think you must rotate the virtual pinhole camera and not the 3D points. This can be achieved by combining the intrinsic matrix with a rotation matrix, which basically results in a homography matrix. So if you have the camera matrix K for your virtual pinhole camera, you need to set up a 3x3 rotation matrix R for your desired axis. You will get your homography with
H = K*R 

For undistorting image points just use H instead of K. If you want to undistort the image (the backward warping procedure) you need the inverse of H. Be aware that you will get weird image effects, e.g. content from the right image side appears on the left side at the borders in your undistorted image.

In theory this should work, but I'm not sure what happens if you look at the borders of a fisheye camera with a rotated pinhole camera. I would still expect some heavy distortion because of the perspective projection.
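
As a rough, untested sketch (reusing getPinholeCameraMatrix from above; the name getRotatedPinholeMatrix and the rotation about the x axis are just examples):

```c++
// Homography H = K * R for a virtual pinhole camera rotated by alphaDeg
// about the camera x axis.
cv::Matx33d getRotatedPinholeMatrix(float viewfield, cv::Size imageSize, double alphaDeg)
{
    cv::Matx33d K = getPinholeCameraMatrix(viewfield, imageSize);

    double a = alphaDeg * CV_PI / 180.0;

    // rotation matrix about the x axis
    cv::Matx33d R(1, 0,       0,
                  0, cos(a), -sin(a),
                  0, sin(a),  cos(a));

    return K * R;
}
```

For single points you would then multiply the normalized ray with H instead of K (as in undistortPoints above); for warping the whole image you would use H.inv() when going from destination pixels back to rays.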

Cheers
Pete

Manuel Rey Area

Jan 20, 2021, 8:21:14 AM
to OCamCalib Toolbox
Thanks Pete. Really helpful!