A*x^2 + B*x*y + C*y^2 + D*x + E*y + F = f(x,y)
where f(x,y) is the value at that point, and A, B, C, D, E,
and F are constants determined by your existing data set.
So take the six data points closest to the point where you
want your new value, and plug them and the functional values
at those points into the above equation. This gives you a
linear system of six equations in six unknowns; solve it as
you were taught in introductory linear algebra, or simply
wade through the mess once by hand.
Ensure that you correctly handle the degenerate
cases (i.e., avoid divide by zero errors or properly
handle singular matrices).
Now simply plug the (x,y) pair into the f(x,y) you just
calculated and get your inter/extrapolated result for your
new point.
Now repeat as needed for every point in your new
image.
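For concreteness, here is a rough C++ sketch of the fit-and-evaluate
procedure just described. The helper names (solve6x6, fitQuadraticSurface)
and the naive Gaussian-elimination solver are illustrative assumptions on
the editor's part, not anything from the post itself:

#include <cmath>

// Solve a 6x6 linear system M*c = b (naive Gaussian elimination with
// partial pivoting); returns false if the matrix is (nearly) singular.
static bool solve6x6(double M[6][6], double b[6], double c[6])
{
    for (int col = 0; col < 6; col++) {
        int pivot = col;
        for (int r = col + 1; r < 6; r++)
            if (std::fabs(M[r][col]) > std::fabs(M[pivot][col])) pivot = r;
        if (std::fabs(M[pivot][col]) < 1e-12) return false;   // degenerate case
        for (int k = 0; k < 6; k++) { double t = M[col][k]; M[col][k] = M[pivot][k]; M[pivot][k] = t; }
        { double t = b[col]; b[col] = b[pivot]; b[pivot] = t; }
        for (int r = col + 1; r < 6; r++) {
            double f = M[r][col] / M[col][col];
            for (int k = col; k < 6; k++) M[r][k] -= f * M[col][k];
            b[r] -= f * b[col];
        }
    }
    for (int r = 5; r >= 0; r--) {          // back substitution
        double s = b[r];
        for (int k = r + 1; k < 6; k++) s -= M[r][k] * c[k];
        c[r] = s / M[r][r];
    }
    return true;
}

// Fit f(x,y) = A*x^2 + B*x*y + C*y^2 + D*x + E*y + F through six samples
// (px[i], py[i], pf[i]), then evaluate the fitted surface at (x, y).
static bool fitQuadraticSurface(const double px[6], const double py[6],
                                const double pf[6], double x, double y,
                                double *out)
{
    double M[6][6], b[6], c[6];
    for (int i = 0; i < 6; i++) {
        M[i][0] = px[i] * px[i];
        M[i][1] = px[i] * py[i];
        M[i][2] = py[i] * py[i];
        M[i][3] = px[i];
        M[i][4] = py[i];
        M[i][5] = 1.0;
        b[i] = pf[i];
    }
    if (!solve6x6(M, b, c)) return false;   // singular: caller must handle it
    *out = c[0]*x*x + c[1]*x*y + c[2]*y*y + c[3]*x + c[4]*y + c[5];
    return true;
}

A real implementation would want a more robust solver (or a library
routine) to deal with the degenerate cases mentioned above.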
enjoy your day,
Sean
"Sean" <sdemerc...@hotmail.com> wrote in message
news:ul8qhst...@corp.supernews.com...
> Take the formula for a bicubic surface, plug in your
> current 16 points to get your localized surface
> approximation formula, plug in your current coordinates,
> and there you have it. If you cannot handle the
> algebra, then use Maple V or Mathematica to solve
> for it and dump out the C++ for you.
>
> If you just want the answer without understanding
> what you are using, then go to your U Mass engineering
> library and grab a book on splines. The answer you
> seek should be in the first or second chapter.
>
> enjoy your day,
>
> Sean
>
>
>
> "Jamie L. Rothfeder" <jrot...@freya.cs.umass.edu> wrote in message
> news:Pine.OSF.4.31.020809...@freya.cs.umass.edu...
> > Hello,
> > Does anyone know where I can find information about a bicubic image
> > resize algorithm. I need to implement it in c or c++.
> >
> > Thank you,
> > Jamie Rothfeder
> >
> >
>
>
The full bi-cubic has 16 terms:
A*x^3*y^3, B*x^3*y^2, C*x^3*y, D*x^3,
E*x^2*y^3, F*x^2*y^2, G*x^2*y, H*x^2,
I*x*y^3,   J*x*y^2,   K*x*y,   L*x,
M*y^3,     N*y^2,     O*y,     P
For a degenerate form, you need at least 7 points:
A*x^3, B*x^2, C*x, D*y^3, E*y^2, F*y, G
If your points are in a 4x4 grid, you can take advantage of
that and unroll the bi-cubic equation into 5 simpler uni-cubic
equations. In other words, take the 4 points in each row and
solve the uni-cubic equation for that row; the outputs of those
can then be solved for the final answer.
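As a rough illustration of that two-pass idea, here is a short C++ sketch.
The helper names (cubic4, bicubic4x4), the unit sample spacing, and the
choice of a Lagrange-form cubic are the editor's assumptions, not anything
specified in the post:

// Cubic through p[0..3] placed at t = -1, 0, 1, 2, evaluated at t in [0,1]
// (Lagrange basis for unit-spaced nodes).
static double cubic4(const double p[4], double t)
{
    double w0 = -t * (t - 1.0) * (t - 2.0) / 6.0;
    double w1 =  (t + 1.0) * (t - 1.0) * (t - 2.0) / 2.0;
    double w2 = -(t + 1.0) * t * (t - 2.0) / 2.0;
    double w3 =  (t + 1.0) * t * (t - 1.0) / 6.0;
    return w0 * p[0] + w1 * p[1] + w2 * p[2] + w3 * p[3];
}

// Bicubic through a 4x4 grid g[row][col]: four row cubics, then one
// cubic across the four row results.
static double bicubic4x4(const double g[4][4], double tx, double ty)
{
    double col[4];
    for (int r = 0; r < 4; r++)
        col[r] = cubic4(g[r], tx);   // cubic along each row
    return cubic4(col, ty);          // final cubic through the row results
}

The same unit-spaced Lagrange cubic appears later in the thread in the
WinMain example.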
Let me know if you would like a more thorough answer.
Oscar
"Sean" <sdemerc...@hotmail.com> wrote in message
news:ul9viu...@corp.supernews.com...
thanks,
Sean
"Oscar Papel" <oep...@rogers.com> wrote in message
news:zkc59.78403$wh1....@news01.bloor.is.net.cable.rogers.com...
You both seem to be describing a way of fitting a single bicubic
function to the space between 4 input samples. This produces a
function that interpolates (passes through) those 4 points. However,
once you get outside that one little region, you'll calculate an
entirely different bicubic function for the space just one pixel away,
in any direction. This bicubic function will also interpolate its
corner points, but there's no guarantee that the two patches you create
this way will match each other along their common boundary, or that they
will match in any of their derivatives.
The normal way of doing "bicubic interpolation" avoids these problems.
In one dimension, cubic interpolation is usually done using a cubic
spline of four segments. These are four different cubic polynomials
defined over the domains [-2..-1], [-1..0], [0..1], and [1..2]. The
polynomials are chosen to have a bunch of special properties, and they
are predetermined - you don't need to solve any equations to find
coefficients that fit data points. All you have to do is evaluate the
four polynomials at an X value that depends on the fractional component
of the output sample location. It's relatively fast and cheap. The
design of the polynomials guarantees that the resulting interpolated
function passes through the data points *and* has a continuous first
derivative everywhere.
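For example, one widely used predetermined kernel of this kind is the Keys
cubic-convolution kernel with a = -1/2 (the Catmull-Rom case). Whether that
is the exact kernel Dave has in mind is an assumption, but it has the
properties he lists: it interpolates the samples, has a continuous first
derivative, and consists of four cubic segments over [-2..2]. A minimal
C++ sketch:

#include <cmath>

// Keys cubic-convolution kernel, a = -0.5 (Catmull-Rom case).
static double cubicKernel(double x)
{
    const double a = -0.5;
    x = std::fabs(x);
    if (x < 1.0)
        return (a + 2.0) * x * x * x - (a + 3.0) * x * x + 1.0;
    if (x < 2.0)
        return a * x * x * x - 5.0 * a * x * x + 8.0 * a * x - 4.0 * a;
    return 0.0;
}

// 1D interpolation at fractional offset t in [0,1] between s1 and s2,
// using the four neighbouring samples s0..s3 (at positions -1, 0, 1, 2).
static double cubicInterp1D(double s0, double s1, double s2, double s3, double t)
{
    return s0 * cubicKernel(-1.0 - t) + s1 * cubicKernel(-t)
         + s2 * cubicKernel(1.0 - t)  + s3 * cubicKernel(2.0 - t);
}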
In two dimensions, bicubic interpolation is done simply by using the
method above to interpolate in the vertical direction in 4 adjacent
columns, then interpolate these 4 new points in the horizontal
direction. (Or you can do horizontal first then vertical - the answer
is the same). The result is an interpolated 2D function that passes
through all the original data points *and* has no gaps at the edges
between patches *and* has continuous first derivatives at all the edges
between patches.
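A small sketch of that separable 2D pass, reusing the cubicInterp1D helper
from the sketch above (the 4x4 block layout g[row][col] and the names are
the editor's assumptions):

// 1D cubic interpolation as sketched earlier.
double cubicInterp1D(double s0, double s1, double s2, double s3, double t);

// g[row][col] is the 4x4 block of samples around the output location;
// tx, ty are the fractional offsets between rows/columns 1 and 2 of the block.
static double bicubicInterp2D(const double g[4][4], double tx, double ty)
{
    double h[4];
    for (int c = 0; c < 4; c++)   // vertical interpolation in each of the 4 columns
        h[c] = cubicInterp1D(g[0][c], g[1][c], g[2][c], g[3][c], ty);
    // then one horizontal interpolation through the 4 intermediate values
    return cubicInterp1D(h[0], h[1], h[2], h[3], tx);
}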
There are lots of references that talk about this and other methods
of interpolating data in 1D, and interpolating a 2D grid of points
specifically for image resizing. One is "Digital Image Warping"
by Wolberg.
Dave
You gotta walk before you can run.
Oscar.
"Dave Martindale" <da...@cs.ubc.ca> wrote in message
news:aj682g$pen$1...@trappist.cs.ubc.ca...
Sorry. I should have followed up the article above yours. No criticism
of what you wrote was meant.
>You gotta walk before you can run.
Indeed. That's why I described one of the more "normal" bicubic
interpolation methods, which is what the original poster probably
wanted. And if he gets the Wolberg book and reads it, he'll find
there are actually several different usable methods that can all be
called "bicubic".
Dave
>Hello,
> Does anyone know where I can find information about a bicubic image
>resize algorithm. I need to implement it in c or c++.
>
>Thank you,
>Jamie Rothfeder
Granted my math has become very weak (almost non-existent) over the
years due to near non-use; however, I worked out the following
subroutine a few months back. As far as I know it's accurate, or at
least a place to begin.
The function is the bi-cubic interpolation used by the main
scaling routine, which feeds it the 16 points from a 4x4 array.
Note that you'll need to run a sharpening filter over the image
after you do the bi-cubic rescale.
--min
inline float deformation::bicubic(
    float tx, float ty,
    float P00, float P01, float P02, float P03,
    float P10, float P11, float P12, float P13,
    float P20, float P21, float P22, float P23,
    float P30, float P31, float P32, float P33
    )
{
    // fractional parts of the sample position
    float dx = f(tx);
    float dy = f(ty);

    // kernel weights in the x direction (Rdx) and y direction (Rdy)
    float Rdx[4], Rdy[4];
    for (int n = 0; n <= 3; n++)
    {
        Rdx[n] = R(n - 1 - dx);
        Rdy[n] = R(n - 1 - dy);
    }

    // weighted sum over the 4x4 neighbourhood
    float s;
    s  = P00*Rdx[0]*Rdy[0];
    s += P01*Rdx[1]*Rdy[0];
    s += P02*Rdx[2]*Rdy[0];
    s += P03*Rdx[3]*Rdy[0];
    s += P10*Rdx[0]*Rdy[1];
    s += P11*Rdx[1]*Rdy[1];
    s += P12*Rdx[2]*Rdy[1];
    s += P13*Rdx[3]*Rdy[1];
    s += P20*Rdx[0]*Rdy[2];
    s += P21*Rdx[1]*Rdy[2];
    s += P22*Rdx[2]*Rdy[2];
    s += P23*Rdx[3]*Rdy[2];
    s += P30*Rdx[0]*Rdy[3];
    s += P31*Rdx[1]*Rdy[3];
    s += P32*Rdx[2]*Rdy[3];
    s += P33*Rdx[3]*Rdy[3];
    return s;
}
Oops. Forgot to include the dependent subfunctions. :)
--min
inline float deformation::f(float x)
{
    // fractional part of x
    return (x - floor(x));
}
inline float deformation::P(float x)
{
    // positive part: max(x, 0)
    return (x > 0.0f) ? x : 0.0f;
}
inline float deformation::R(float x)
{
    // cubic B-spline kernel:
    // R(x) = ( P(x+2)^3 - 4*P(x+1)^3 + 6*P(x)^3 - 4*P(x-1)^3 ) / 6
    float p0 = P(x + 2);
    float p1 = P(x + 1);
    float p2 = P(x);
    float p3 = P(x - 1);
    return ( (p0*p0*p0) - 4*(p1*p1*p1) + 6*(p2*p2*p2) -
             4*(p3*p3*p3) ) / 6.0f;
}
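A hypothetical usage sketch of the routine above (everything here is an
editorial assumption: the deformation instance, the sample values, and how
the caller derives tx/ty). The scaler maps each output pixel back into
source coordinates and passes the 4x4 block of source pixels around
(floor(tx), floor(ty)); bicubic() itself only uses the fractional parts:

deformation d;                        // instance of the (not shown) deformation class
float value = d.bicubic(
    12.3f, 45.7f,                     // source-space location of the output pixel
    10.f, 12.f, 15.f, 13.f,           // P00..P03
    11.f, 14.f, 16.f, 15.f,           // P10..P13
    13.f, 15.f, 18.f, 17.f,           // P20..P23
    12.f, 14.f, 17.f, 16.f);          // P30..P33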
Basically, you can do a bicubic interpolation as four vertical-direction
one-variable interpolations, which give you 4 points along a horizontal
line which is spaced somewhere between the rows of the input grid. Then
you do one single horizontal interpolation, using the output of the
first phase as the input to the second. Now you've got your output
point.
Better yet, when you go to calculate your *next* output point, you can
probably re-use either three or all four of the previous vertical
interpolations. So you need only zero or one vertical 1D interpolation
steps, and always the one horizontal interpolation step. This is much
cheaper than doing a 16-point convolution at every output point.
By the way, you can also do the bicubic interpolation as four
horizontal-direction interpolations followed by one vertical-direction
interpolation - the mathematics are symmetric. But then you can't
re-use results from one pixel to the next unless you're generating the
output image one column at a time. Images are usually stored by rows,
so the vertical interpolation first is fastest.
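A rough sketch of that reuse for one output row. All names, the fixed
scale factor, and the edge-clamping choice are the editor's assumptions;
cubicInterp1D is the 1D helper sketched earlier:

#include <cmath>
#include <vector>

// 1D cubic interpolation, as in the earlier sketch.
double cubicInterp1D(double s0, double s1, double s2, double s3, double t);

// Clamp an index into [0, n-1] so the 4x4 neighbourhood never runs off the image.
static int clampIndex(int i, int n) { return i < 0 ? 0 : (i >= n ? n - 1 : i); }

// Produce one output row at vertical source position srcY.  src[y][x] holds
// the source image; xScale maps output column -> source column.
static void resampleRow(const std::vector<std::vector<double> >& src,
                        double srcY, int outWidth, double xScale,
                        std::vector<double>& outRow)
{
    int srcH = (int)src.size();
    int srcW = (int)src[0].size();
    int y0 = (int)std::floor(srcY);
    double ty = srcY - y0;

    // Cache of vertical (per-column) interpolations; each is computed at most once.
    std::vector<double> colCache(srcW);
    std::vector<char>   haveCol(srcW, 0);

    outRow.resize(outWidth);
    for (int ox = 0; ox < outWidth; ox++) {
        double srcX = ox * xScale;
        int x0 = (int)std::floor(srcX);
        double tx = srcX - x0;

        double h[4];
        for (int k = -1; k <= 2; k++) {
            int c = clampIndex(x0 + k, srcW);
            if (!haveCol[c]) {           // vertical pass, done once per source column
                colCache[c] = cubicInterp1D(src[clampIndex(y0 - 1, srcH)][c],
                                            src[clampIndex(y0,     srcH)][c],
                                            src[clampIndex(y0 + 1, srcH)][c],
                                            src[clampIndex(y0 + 2, srcH)][c], ty);
                haveCol[c] = 1;
            }
            h[k + 1] = colCache[c];
        }
        // one horizontal pass per output pixel
        outRow[ox] = cubicInterp1D(h[0], h[1], h[2], h[3], tx);
    }
}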
Dave
#include<windows.h>
#include<stdio.h>
int WINAPI WinMain(HINSTANCE, HINSTANCE, LPSTR, int)
{
//
// coordinates
//
int x0 = 24;
int x1 = 25;
int x2 = 26;
int x3 = 27;
int y0 = 15;
int y1 = 16;
int y2 = 17;
int y3 = 18;
//
// sixteen pixel values
//
BYTE c00 = 240; BYTE c04 = 220; BYTE c08 = 200; BYTE c12 = 200; // row #1
BYTE c01 = 160; BYTE c05 = 175; BYTE c09 = 160; BYTE c13 = 140; // row #2
BYTE c02 = 70; BYTE c06 = 75; BYTE c10 = 60; BYTE c14 = 20; // row #3
BYTE c03 = 120; BYTE c07 = 115; BYTE c11 = 150; BYTE c15 = 170; // row #4
//
// the point to interpolate
//
double x = 26.0;
double y = 17.0;
//
// Lagrange interpolation along the y-axis
// formula assumes y1 - y0 = y2 - y1 = y3 - y2 = 1
//
double u0 = y - y0;
double u1 = y - y1;
double u2 = y - y2;
double u3 = y - y3;
double s1 = (u1*u2)/6;
double s2 = (u0*u3)/2;
double ip0 = s1*(c03*u0 - c00*u3) + s2*(c01*u2 - c02*u1); // interpolate #1
double ip1 = s1*(c07*u0 - c04*u3) + s2*(c05*u2 - c06*u1); // interpolate #2
double ip2 = s1*(c11*u0 - c08*u3) + s2*(c09*u2 - c10*u1); // interpolate #3
double ip3 = s1*(c15*u0 - c12*u3) + s2*(c13*u2 - c14*u1); // interpolate #4
//
// Lagrange interpolation along the x-axis
// formula assumes x1 - x0 = x2 - x1 = x3 - x2 = 1
//
u0 = x - x0;
u1 = x - x1;
u2 = x - x2;
u3 = x - x3;
s1 = (u1*u2)/6;
s2 = (u0*u3)/2;
double ip4 = s1*(ip3*u0 - ip0*u3) + s2*(ip1*u2 - ip2*u1);
//
// force interpolated value to BYTE boundary
//
if(ip4 < 0)
ip4 = 0;
else if(ip4 > 255)
ip4 = 255;
//
// display results
//
char buffer[256];
sprintf(buffer, "final interpolated value = %u.", (unsigned)ip4);
MessageBox(0, buffer, "value", MB_OK);
return 0;
}
What you made was not an approximation of a cubic. It IS a cubic.
You used 4 points in your Lagrange expansion, which creates a cubic function.
Lagrange is used to approximate arbitrary functions within a degree of error
by generating a "best fit" polynomial. If the function was already a
polynomial of degree N and you used exactly N+1 distinct points then your
error term disappears and it is an exact solution.
Now, if you used fewer points (say 3) or were trying to model a non-cubic
function, then it would be an approximation.
Oscar
"none" <no...@none.com> wrote in message
news:Xns92C110D...@64.154.60.179...
Doing the math using 16 points at a time for every single output point
is pretty wasteful, and only necessary if you aren't sampling a regular
grid (such as for arbitrary image warps).
Chris
In article <Xns92C110D...@64.154.60.179>, none <no...@none.com>
wrote:
Chris Cox <cc...@mindspring.com> wrote in news:101120021434245213%
cc...@mindspring.com:
PhotoEdit is a powerful photo editor; its source code is available,
and it can read, write, and manipulate images in many image formats.
You can also resize, rotate, sharpen, color-reduce, or add special
effects to an image.
Please visit http://www.pdfimage.com/photoedit/index.htm for more
information.
The method that you and Dave are using does not take continuity into
account.
By continuity, I mean that two bi-cubic surfaces that are 1 pixel apart
smoothly flow into each other instead of meeting at a "kink".
Take the one-dimensional case as an example: let's say you have the
following data values: {99, 101, 98, 100, 56}.
If you solve the cubic that goes through the first 4 values and then
solve the cubic that goes through the last 4 values, you will see that
there is a slight "kink" where the cubic surfaces meet at value 98.
Spline solving methods are designed so that this does not occur.
They do this by incorporating supplied or derived first and second
derivative information so that one bi-cubic flows smoothly into the next.
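Here is a quick numeric check of that kink (the helper name lagrangeCubic
and the finite-difference slope comparison are editorial additions, not
part of the post): fit one cubic through {99,101,98,100} at x = 0..3 and
another through {101,98,100,56} at x = 1..4, then compare their slopes at
the shared sample x = 2.

#include <cstdio>

// Cubic through (xs[i], ys[i]), i = 0..3, evaluated at x (Lagrange form).
static double lagrangeCubic(const double xs[4], const double ys[4], double x)
{
    double sum = 0.0;
    for (int i = 0; i < 4; i++) {
        double w = ys[i];
        for (int j = 0; j < 4; j++)
            if (j != i) w *= (x - xs[j]) / (xs[i] - xs[j]);
        sum += w;
    }
    return sum;
}

int main()
{
    double xa[4] = {0, 1, 2, 3}, ya[4] = {99, 101, 98, 100};
    double xb[4] = {1, 2, 3, 4}, yb[4] = {101, 98, 100, 56};
    double h = 1e-6;
    double slopeA = (lagrangeCubic(xa, ya, 2 + h) - lagrangeCubic(xa, ya, 2 - h)) / (2 * h);
    double slopeB = (lagrangeCubic(xb, yb, 2 + h) - lagrangeCubic(xb, yb, 2 - h)) / (2 * h);
    // Both cubics pass through (2, 98), but their first derivatives differ,
    // which is the "kink" at the patch boundary.
    printf("slope of first cubic at x=2:  %f\n", slopeA);
    printf("slope of second cubic at x=2: %f\n", slopeB);
    return 0;
}

The two printed slopes come out clearly different, which is exactly the
first-derivative discontinuity a spline construction is designed to remove.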
Oscar
"none" <no...@none.com> wrote in message
news:Xns92C2D86...@64.154.60.180...
Bicubic interpolation just means you are using cubic functions in two
directions for the interpolation - it doesn't say where you get the
cubic functions from.
Spline mathematics are one popular way of obtaining the cubic functions.
There are several different ways to do this, even within spline math.
If you understand how to interpolate a curve through a series of data
points, you can use exactly the same math to interpolate a continuous
2d intensity function through a 2D grid of data points - which is
exactly what is needed to convert a sampled image back into a continuous
image that can then be resampled at different points.
Dave
> cool, i think i get it now thanks. :D
> so then what's all this spline stuff they keep talking
> about? is it "reinventing the bicubic interpolation
> wheel" of sorts?
You have to use the spline (or other function) to create the kernel.
But the same basic methodology can be used for bilinear, bicubic, or
much larger kernels (such as weighted sinc, or enlarged cubics for
dealing with image downsampling). The difference comes down to the
kernel function.
Again, "Digital Image Warping" by George Wolberg is a great (and the
standard) reference on this sort of thing.
Chris
> Hello,
>
> The method that you and Dave are using does not take continuity into
> account.
> By continuity, I mean that two bi-cubic surfaces that are 1 pixel apart
> smoothly flow into each other instead of meeting at a "kink".
Continuity is very much taken into account.
You really seem to be missing what's going on.
You might want to read "Digital Image Warping" by George Wolberg.
> Take the 1 dimension case as an example:
> lets say you have the following data values : {99,101,98,100,56}
>
> if you solve the cubic that goes through the first 4 values and then solve
> the cubic that goes through the last 4 values you will see that there is a
> slight "kink" where the cubic surfaces meet at value 98.
We're not solving a cubic.
We're convolving with a function described by a piecewise cubic
polynomial (usually designed to approximate a windowed sinc function
without all the nasty side effects).
> Spline solving methods are designed so that this does not occur.
> They do this by incorporating supplied or derived first and second derivative
> information so that one bi-cubic flows smoothly into the next.
That's already taken care of in the cubic kernel design.
Chris
>if you solve the cubic that goes through the first 4 values and then solve
>the cubic that goes through the last 4 values you will see that there is a
>slight "kink" where the cubic surfaces meet at value 98.
This isn't what is done in practice.
>Spline solving methods are designed so that this does not occur.
>They do this by incorporating supplied or derived first and second derivative
>information so that one bi-cubic flows smoothly into the next.
And this is how bicubic interpolation is done. There are different
splines used. One method uses splines with local support, so every
output point depends on 4x4 input points. This gives first derivative
continuity. You can get second-derivative continuity, but you have to
give up local support - every output point depends on all the input
points. (It's not as expensive as it sounds; in practice you use
ordinary B-splines (that do have local support) plus a preprocessing step).
Dave
> > The method that you and Dave are using does not take continuity into
> > account.
> > By continuity, I mean that two bi-cubic surfaces that are 1 pixel apart
> > smoothly flow into each other instead of meeting at a "kink".
>
> Continuity is very much taken into account.
> You really seem to be missing what's going on.
> You might want to read "Digital Image Warping" by George Wolberg.
I have read it.
There is nothing within the Lagrange code posted by the original poster
that tries to constrain the cubic so that neighbouring cubics meet in a
smooth manner.
> > Take the 1 dimension case as an example:
> > lets say you have the following data values : {99,101,98,100,56}
> >
> > if you solve the cubic that goes through the first 4 values and then
> > solve the cubic that goes through the last 4 values you will see that
> > there is a slight "kink" where the cubic surfaces meet at value 98.
>
> We're not solving a cubic.
> We're convolving with a function described by a piecewise cubic
> polynomial (usually designed to approximate a windowed sinc function
> without all the nasty side effects).
>
This is not being done in the original poster's code.
The Lagrange method coded by the original poster implicitly samples one
cubic for the range [101 98] and another, completely independent cubic for
the range [98 100], and these two cubic functions do NOT join smoothly
at value 98.
My point was that using Lagrange to create a piecewise cubic was not going
to be able to guarantee continuity.
Other methods do.
>
> > Spline solving methods are designed so that this does not occur.
> > They do this by incorporating supplied or derived first and second
> > derivative information so that one bi-cubic flows smoothly into the next.
>
> That's already taken care of in the cubic kernel design.
There is no "cubic kernel design" in the original poster's implementation
>> > The method that you and Dave are using does not take continuity into
>> > account.
>> > By continuity, I mean that two bi-cubic surfaces that are 1 pixel apart
>> > smoothly flow into each other instead of meeting at a "kink".
>> Continuity is very much taken into account.
>> You really seem to be missing what's going on.
>> You might want to read "Digital Image Warping" by George Wolberg.
>I have read it.
>There is nothing within the Lagrange code posted by the original poster
>that tries to constrain the cubic so that neighbouring cubics meet in a
>smooth manner.
There seems to be some confusion about what algorithm is being discussed.
The Lagrange method probably does not worry about continuity, so you're
right in commenting about that. But Chris Cox and I (I'm the Dave you
refer to) are not referring to that Lagrange code, we're talking about
the bicubic methods that people actually use. These are based on
spline methods that do preserve at least function value and first
derivative continuity, and sometimes second derivative continuity as
well.
Dave
I was commenting on the original poster's code that used Lagrange.
The O.P. then asked why anyone would use any other method such as a spline
method.
He thought that it was just "reinventing the wheel".
I was trying to explain why this is not so.
I was actually arguing FOR spline methods.
Oscar.
>I was commenting on the original poster's code that used Lagrange.
>The O.P. then asked why anyone would use any other method such as a spline
>method.
>He thought that it was just "reinventing the wheel".
>I was trying to explain why this is not so.
>I was actually arguing FOR spline methods.
Yes, I realize that. But you wrote about the code "you and Dave are
using", which is confusing because I don't use Lagrange, I use splines.
And that may well have created confusion about what you were actually
saying - were you for or against splines? Then Chris Cox entered, and
it seems he thought you were arguing against splines.
Is this less confusing now, or more?
Dave