I'm the author of the Anti-Grain Geometry library, http://antigrain.com
and I would like to share my new article with you:
This may look somewhat naive, but I have never seen a fast and perfect
method of Bézier approximation. Even the Adobe engine fails in some cases.
This is an attempt to achieve a perfect result. I would appreciate your
comments, suggestions, and criticism.
Looks like you've been working hard, but without adequate literature.
Many of the issues you tackle have been discussed here, and certainly
in the professional literature.
Let me first correct a claim you make:
But this situation is solved by forcing the subdivision the
first time. There is a proof that after one division by half
a cubic curve cannot have two inflection points, nor a
loop, so this approach is safe.
A cubic curve can never have two inflection points, so that claim is
vacuously safe; but a midpoint subdivision is *not* a safe way to
eliminate a loop. What is true is that we can choose a subdivision
point that cuts the loop, just not at the midpoint. We can also cut an
"S" shape (which has no inflection, only extremes) to separate the two
changes in direction (the two bends of the "S").
Paul Bourke's code is needlessly expensive for direct evaluation; a
form of Horner's rule can be used for better speed. Even so, other
methods are faster.
The professional literature includes a careful study of adaptive
forward differencing, which combines some of the best features of both
an incremental approach and an adaptive approach.
Lien, Shantz, Pratt. Adaptive forward differencing for rendering
curves and surfaces. SIGGRAPH '87. ISBN 0-89791-227-6.
When using a flatness criterion, notice that a fat curve can amplify
the visibility of any error, so the tolerance might better be adjusted
based on width, not just thin-curve pixel errors.
A well-known flatness test is both cheaper and more reliable than the
ones you have tried. The essential observation is that when the curve
is a uniform speed straight line from end to end, the control points
are evenly spaced from beginning to end. Therefore, our measure of how
far we deviate from that ideal uses distance of the middle controls,
not from the line itself, but from their ideal *arrangement*. Point 2
should be half-way between points 1 and 3; point 3 should be half-way
between points 2 and 4.
This, too, can be improved. Yes, we can eliminate the square roots in
the distance tests, retaining the Euclidean metric; but the taxicab
metric is faster, and also safe. The length of displacement (x,y) in
this metric is |x|+|y|.
Detecting loops and cusps is a classical problem in algebraic geometry
that is also addressed specifically in computer graphics literature:
Stone, DeRose. A geometric characterization of parametric cubic
curves. ACM TOG v8 n3, 1989, pp. 147-163. ISSN 0730-0301.
But if we skip straight to the bottom line, past all this prior work
on the bits and pieces, the crucial paper to read is:
Fabris, Forrest. High Quality Rendering of Two-Dimensional
Continuous Curves. SIBGRAPI '97.
Also follow the reference trail connected with that paper: the works it
cites and any works citing it.
In my recent framework (www.amanith.org) I've implemented a non-recursive
method for cubic Bézier curves, based on parabolic approximation; it
takes care of cusps and inflection points.
It does an adaptive flattening, using chordal distance as the "maximum
permitted variation" threshold.
Maybe you can have a look at it.
Thanks for your investigation.
It will take some time to check and honor your achievements.
In order to stabilize a printable version I would recommend
to convert the HTML pages into PDF.
Best regards --Gernot Hoffmann
First of all, thank you for your correction; you are right here. My
lack of math knowledge results in misbeliefs.
Yes, I have seen all the papers you refer to; the most interesting of them is:
> Lien, Shantz, Pratt. Adaptive forward differencing for rendering
> curves and surfaces. SIGGRAPH '87. ISBN 0-89791-227-6.
However, I found out that the incremental forward differencing approach
can't be considerably improved. I agree it can perfectly handle the
deviation error by the distance criterion, but there's no way to draw a
good looking fat stroke.
I might be mistaken, but as far as I understand you can only use some
constant coefficient that defines the maximal deviation. So, for a fat
stroke you will have to increase the number of points along the *entire
curve*. You can't do that locally, at sharp turns. So, any
distance criterion is not enough and we simply have to add the *angle*
criterion as soon as we want to draw a fat curve. You are absolutely
right, a wide stroke amplifies the visibility of the errors, so that
the deviation from the ideal curve must be much smaller at sharp turns.
This is the whole idea: to considerably decrease the error at sharp turns.
Initially I used only the angle criterion but it doesn't work well at
flat parts, so that I use a more traditional criterion.
Then you say:
> A well-known flatness test is both cheaper and more reliable than the
> ones you have tried. The essential observation is that when the curve
> is a uniform speed straight line from end to end, the control points
> are evenly spaced from beginning to end. Therefore, our measure
> of how far we deviate from that ideal uses distance of the middle
> controls, not from the line itself, but from their ideal *arrangement*.
> Point 2 should be half-way between points 1 and 3; point 3 should
> be half-way between points 2 and 4.
First of all, it doesn't seem to be cheaper and more reliable. The
curve can be very flat, like this one:
And it requires only 9 points, with the maximal deviation of 0.12 of a
pixel. That's quite enough. Your criterion (well known flatness test)
will produce more points.
And it's not obvious how you define the value of "being half-way". In
terms of Manhattan distance? But that works only if points 1, 2, 3 are
approximately collinear. In the common case they are not.
It's very interesting, so could you explain it more and provide some details?
I claim the following.
1. The line-point distance criterion calculated in the Euclidean metric is
the best way to estimate the deviation from the ideal curve, simply
because it's always proportional to the deviation. I agree it isn't the
cheapest way, but on modern processors the difference between my method
and the most efficient one is not that considerable. A floating point
multiplication is about as fast as FP addition.
2. At sharp turns the wide stroke amplifies visible errors, so the
approach with only a distance criterion isn't enough. My method
allows you to increase the accuracy locally, so that the distance error
at sharp turns is smaller by a factor of tens, hundreds, or even thousands.
3. My approach is actually a combination of different conditions, not
just one method. It isn't mathematically beautiful, but it's a kind of
an engineering approach that works well in practice.
Just d' FAQs wrote:
> A cubic curve can never have two inflection points, so that claim is
> vacuously safe
> Detecting loops and cusps is a classical problem in algebraic geometry
> that is also addressed specifically in computer graphics literature:
> Stone, DeRose. A geometric characterization of parametric cubic
Interestingly enough, the very same paper talks about and shows an
example of a cubic curve with 2 inflection points.
I like your method.
It looks very good and understandable for me.
And it comes at right moment ;-)
(just now I am implementing Bézier drawing in Java;
many thanks to Gernot Hoffmann,
without his help I would be lost in the math).
Keep up the good work!
My mistake. I was thinking of a function cubic,
f(t) = a t^3 + b t^2 + c t + d
For an inflection, the second derivative must be zero.
f''(t) = 6a t + 2b
This being a linear function, it can cross zero only once.
But the major point still holds: subdivision at the midpoint is not a
reliable way to simplify the curve.
Really, it's as simple and reliable as stated. With points P1...P4,
the midpoint between P1 and P3 is (P1+P3)/2, so we compute a vector
dev2 = (P1+P3)/2 - P2
Better still, double it.
dd2 = P1 + P3 - 2 P2
This has an x component and a y component. Take their absolute values
and sum them. Likewise, compute the deviation of P3:
dd3 = P2 + P4 - 2 P3
Compute its Manhattan length as well, and sum the two. That's what we
use for our flatness test.
A variation, not quite so cheap, is to expect P2 to be 1/3 of the way
from P1 to P4, and P3 to be 2/3.
dev2 = (2 P1 + P4)/3 - P2
dev3 = (P1 + 2 P4)/3 - P3
Tripled, these give
td2 = 2 P1 + P4 - 3 P2
td3 = P1 + 2 P4 - 3 P3
As you discovered, collinearity is not a safe and reliable test; this
is. We could still use a Euclidean norm, but the Manhattan value will
be within a constant factor of it, which makes it safe.
Again, however, the SIBGRAPI paper and those it cites (such as the
doctoral dissertation) study the issues in far greater detail. It is
hard to believe you could have absorbed those contents and still have
the difficulties against which you fought. Perhaps you should entice
Fabris to take an interest in the Anti-Grain Geometry project.
> Really, it's as simple and reliable as stated. With points P1...P4,
> the midpoint between P1 and P3 is (P1+P3)/2, so we compute a vector...
I have tried your methods (those very "well known" criteria) and here are the results.
Maybe I did something wrong but my code is as follows (for the sake of
clarity I use many variables).
double dd2x = fabs(x1 + x3 - 2 * x2);
double dd2y = fabs(y1 + y3 - 2 * y2);
double dd3x = fabs(x2 + x4 - 2 * x3);
double dd3y = fabs(y2 + y4 - 2 * y3);
double dist = dd2x + dd2y + dd3x + dd3y;
if(dist <= m_distance_tolerance)
{
    // flat enough: draw the line 1---4 and stop subdividing
}
And for the 1/3 : 2/3 variant:
double td2x = fabs(2 * x1 + x4 - 3 * x2);
double td2y = fabs(2 * y1 + y4 - 3 * y2);
double td3x = fabs(x1 + 2 * x4 - 3 * x3);
double td3y = fabs(y1 + 2 * y4 - 3 * y3);
double dist = td2x + td2y + td3x + td3y;
if(dist <= m_distance_tolerance)
{
    // flat enough: draw the line 1---4 and stop subdividing
}
Please correct me if there are mistakes.
Rather flat curves:
4. AGG Subdivision:
You see, keeping the same maximal deviation, the number of points differs considerably.
I consider the 4th picture close to ideal. It gives about the same
error on different scales, keeping the minimal number of points. I
totally agree my method is much more expensive, and besides, we should
take special care of the collinear cases. But your estimation works
about the same as a simple incremental method, as I suspected.
But the main point of my method is that it can be combined with the
*angle* error estimation. The trick here is to handle cusps where the
angle condition will theoretically never occur. As practice shows,
the collinearity check protects us from infinite recursion quite well.
In my method there is a ready collinearity estimation, while in yours
it's not obvious how to get it. Since we can't get the collinearity
value we can't easily use the angle estimation because it will result
in infinite recursion on degenerate cases.
There's another case, a curve with a loop:
4. AGG Subdivision:
I turned off my *angle* condition to make the chances equal.
> Again, however, the SIBGRAPI paper and those it cites (such as the
> doctoral dissertation) study the issues in far greater detail. It is
> hard to believe you could have absorbed those contents and still have
> the difficulties against which you fought. Perhaps you should entice
> Fabris to take an interest in the Anti-Grain Geometry project.
Thanks for the suggestion. However, my work differs considerably from
all those papers. Once again, it's mostly a matter of how to draw a
relatively thick equidistant outline (a stroke) keeping it looking
nice. And how NOT to increase the total number of points dramatically.
I suspect you blame me for some ignorance in math and I agree. What I
did is just an engineering research that works quite well in practice
within the defined conditions.
Thanks again, I really appreciate your comments!
The methods are not mine; I'm merely passing them on. The most direct
ancestor is GhostScript, but the observations have a long history.
>Please correct me if there are mistakes.
Looks correct. One fine point we cannot judge from the snippet is that
it may be suboptimal to use the same numeric distance tolerance for
all the methods. Consider a "circle" in Manhattan versus Euclidean:
the former crosses a diagonal at (0.5,0.5); the latter, at (0.7,0.7).
But both cross the x axis at (1,0). This may be worth tuning.
>You see, keeping the same maximal deviation, the number of points is
>I consider the 4th picture close to ideal. It gives about the same
>error on different scales, keeping the minimal number of points. I
>totally agree my method is much more expensive, and besides, we should
>take special care of the collinear cases. But yours estimation works
>about the same as simple incremental method, as I suspected.
>But the main point of my method is that it can be combined with the
>*angle* error estimation. The trick here is to handle cusps where the
>angle condition will never theoretically occur. As the practice shows
>the collinearity check protects us from infinite recursion quite well.
>In my method there is a ready collinearity estimation, while in yours
>it's not obvious how to get it. Since we can't get the collinearity
>value we can't easily use the angle estimation because it will result
>in infinite recursion on degenerate cases.
There is a subtle logic behind the difference. Consider a case where
the first three control points are identical, and the last elsewhere.
This is a straight line, for which two points (the ends) suffice. Or
is it? The *parameterization* is not uniform, and for certain uses
(maybe not yours), that becomes visible. For example, if the curve is
not a solid stroke but a dash pattern, or if we animate motion along
the curve, we may expect to see the parameterization.
The tests I describe include collinearity, but are more stringent. The
central controls are tested against the line at specific positions. If
they are close to those positions, they are nearly collinear. But yes,
even far from their ideal positions they can still be collinear.
>> Again, however, the SIBGRAPI paper and those it cites (such as the
>> doctoral dissertation) study the issues in far greater detail. It is
>> hard to believe you could have absorbed those contents and still have
>> the difficulties against which you fought. Perhaps you should entice
>> Fabris to take an interest in the Anti-Grain Geometry project.
>Thanks for the suggestion. However, my work differs considerably from
>all those papers. Once again, it's mostly a matter of how to draw a
>relatively thick equidistant outline (a stroke) keeping it looking
>nice. And how NOT to increase the total number of points dramatically.
The focus of the Fabris and Forrest paper is antialiasing, which you
do not consider at all (though perhaps you will). But they cite a
paper by Klassen, "Drawing antialiased cubic spline curves", that is
more closely related to your stated intent. And a web search for who
cites that turns up another paper which may be of interest.
>I suspect you blame me for some ignorance in math and I agree. What I
>did is just an engineering research that works quite well in practice
>within the defined contitions.
Not at all. I'm suggesting that you might spare yourself some effort
and also improve your results with a deeper use of the literature.
Sometimes a single paper provides a "silver bullet" solution to the
problem we are tackling. More often we have to dig, understand, and
synthesize new solutions from the ideas we find there.
One of the peculiar tensions in finding novel solutions is between the
benefits of doing everything ourselves (ab initio) versus studying
prior art. Each has benefits and risks. We better understand a topic
if we discover the facts for ourselves, and are more likely to form a
novel view. Yet we also benefit from the insights of our predecessors
and can build rather than repeat. (Also, academia and patent offices
insist that we not duplicate prior work.)
In reading your page, I saw more evidence for ab initio work than for
use of the literature. Perhaps that's an artifact of presentation, but
it seemed an obvious gap to highlight.
> Looks correct. One fine point we cannot judge from the snippet is that
> it may be suboptimal to use the same numeric distance tolerance for
> all the methods. Consider a "circle" in Manhattan versus Euclidean:
> the former crosses a diagonal at (0.5,0.5); the latter, at (0.7,0.7).
> But both cross the x axis at (1,0). This may be worth tuning.
Well, I refer to the estimation of the *actual* deviation of the
resulting polyline from the ideal curve. This is namely the Euclidean
distance calculated using the local normal to the curve. And this
estimation minimizes the number of points without exceeding the error.
In the Manhattan metric it's much less accurate, although I agree it's
very fast, simple, and 100% robust.
> There is a subtle logic behind the difference. Consider a case where
> the first three control points are identical, and the last elsewhere.
> This is a straight line, for which two points (the ends) suffice. Or
> is it?
In all *strictly* collinear cases my method behaves about the same as
yours, that is, it produces more points than necessary. But in my
opinion it's not that important, because strictly collinear cases are
very rare. As soon as a control point deviates from the straight line
even by 0.001 pixel the method works well. My point is that strictly
collinear cases are extremely rare in practice, but *almost* collinear
ones happen quite often. Well, actually we only need to handle the
following cases:
2---1---4---3, 1---2---4---3, and 2---1---3---4
Like in this picture:
I just simplified the whole thing, not detecting these cases and
assuming that strictly collinear cases are rare. In the case of
1---2---3---4 it will do the same as in the above cases, although it
would be enough to draw just the line 1---4.
> The *parameterization* is not uniform, and for certain uses
> (maybe not yours), that becomes visible. For example, if the curve is
> not a solid stroke but a dash pattern, or if we animate motion along
> the curve, we may expect to see the parameterization.
There's no problem with dash lines in AGG:
As you can see, uniformity isn't important at all. But what is
important is that the *tangent* estimation becomes valuable in cases of
very thick dashed lines; otherwise the calculated local normals will
have considerable (and unpredictable) error.
> The focus of the Fabris and Forrest paper is antialiasing, which you
> do not consider at all (though perhaps you will). But they cite a
> paper by Klassen, "Drawing antialiased cubic spline curves", that is
> more closely related to your stated intent. And a web search for who
> cites that turns up another paper which may be of interest.
Yes, I know Cairo, but we are not competitors. It only *seems* that
it's all about the same, but essentially AGG and Cairo are quite
different.
As for anti-aliasing I do that differently. I didn't carefully study
the papers you refer to, but I read them quickly and found out that
drawing anti-aliased curves directly is too restrictive. I use a
scanline rasterizer approach that calculates exact pixel coverage
values. As far as I know it's one of the fastest algorithms and it
produces the best possible result. First it was used in libArt by Raph
Levien, then David Turner (FreeType) considerably improved it
switching to integer fixed point calculations, then I rewrote it in C++
and David gave me permission to release it under a separate license
with a reference.
This algorithm perfectly works with subpixel accuracy (1/256) but it
works with straight line segments only.
First of all, I have my doubts whether it's cheap and accurate to calculate
the coverage of pixel 2 in this picture:
David's algorithm perfectly handles it if we approximate the curve with
line segments (with subpixel accuracy, of course). No matter how short
the line segments are. And it works amazingly accurately and fast.
That's especially important when rendering glyphs.
Once again, even if we can draw anti-aliased curves (lines and fills)
directly it's still too restrictive. You can't easily calculate an
equidistant to the curve, because generally the exact equidistant to
the curve is a curve of a *higher degree*! With the approach where you
rasterize curves directly you can't even draw text with a "false bold"
effect:
As you can see it's perfectly anti-aliased and smooth. I do that in a
very simple way, that is, approximating the curves with line segments
and then calculating the contour. Also, it's quite easy to apply any
non-linear transformation in this way:
In case of drawing Bezier curves directly it's not obvious how to apply
non-linear transformations to the curve.
In other words I want to be able to use the result of my curve
approximation in many different ways, not only to render the curves
directly. This is the whole point of generalization.
Let me be a bit extra strict here - the major point does not hold since
you haven't proved anything. You just stated something to be true.
In the same way, the opposite statement doesn't hold either. Without
the proof, that is.
I would tend to agree with you - just playing with the curve and curve
equation I could not spot anything that would guarantee that a loop would
always be removed by one subdivision at parameter value t=1/2. But I am
not that good at math so this doesn't prove anything.
However, reading Stone,deRose article again I managed to (almost)
convince myself that, at least in one case (loop with intersection
point at t=0 endpoint), it is not possible that the loop won't be cut
after the first subdivision.
Proving it for me (so far) involves looking at their diagram of the
loop region for a cubic curve in 'canonical' form, so this claim can be
regarded as unsubstantiated and as vacuous as any other. Until some
mathematician jumps into the discussion or until somebody provides a
reference, I can just comment that it is also possible that McSeem's
statement is true.
You are right. Here's the case:
378, 439, 378, 497, 487, 432, 14, 338
"Beauty is the first test: there is no permanent place
in the world for ugly mathematics." G.H. Hardy
This is a bit silly. If any statement needed proof, it would be the
one asserting that a bisection *does* work. No proof was given, and
none can be given because that is false, which is what I had said.
I didn't offer a proof since my statement was not disputed; in fact,
it's fairly obvious when raised. A single counterexample is enough to
show bisection is unreliable, or consider the following construction.
Take a curve with whatever geometry you like, cusp or inflections or
whatever. Instead of subdividing it, double it. That is, let the
parameter range from 0 to 2 instead of 0 to 1, then create the control
points for that curve. Now when we halve the doubled curve, one half
is the original, which retains all the geometry in question. Therefore
midpoint bisection is an unreliable way to simplify the geometry, and
the original assertion on the AGG page is discredited. (A more careful
statement, allowing splitting at a *well chosen point*, is a different
matter.)
We can use a de Casteljau blossom algorithm to compute the new control
points, in much the same way as we would evaluate. Let points 1,...,4
be the given controls, blossom values 000, 001, 011, 111. Our new
points will be blossom values 000, 002, 022, 222. The de Casteljau
triangle for the extension is:
000 001 011 111
  \ / \ / \ /
  002 012 112
    \ / \ /
    022 122
      \ /
      222
More explicitly, the de Casteljau weights will be -1 and 2, like so:
p002 = 2 p001 - p000
p012 = 2 p011 - p001
p112 = 2 p111 - p011
p022 = 2 p012 - p002
p122 = 2 p112 - p012
p222 = 2 p122 - p022
It's pretty easy to make a symmetric curve with a loop, a bit trickier
to produce a double inflection. Since the loop is easier to confirm
visually, let's try one. Take as control points
(-1,0) (3,2) (-3,2) (1,0)
Doubled, this becomes
(-1,0) (7,4) (-25,0) (63,-12)
For most readers, probably the most interesting part of this inquiry
is the idea that a de Casteljau evaluation can be used to *extend* a
Bezier curve, not just subdivide it.
If you are restricting yourself to the 2D cubic bezier case, here's a
reference to a paper I wrote describing a method to quickly determine
the geometric form of the curve. Source code is available from that
page.
> Do you know if there is anything preventing cubic curves from having
> multiple loops, for example? IOW, if one half has certain geometry,
> does it and how does it restrict the other half of the geometry?
Look for yourself.
For every coordinate, x(t), y(t), z(t) (if you need the third dimension)
are cubic polynomials in t.
Cubic polynomials may have at most two extrema.
How can you produce multiple loops?
"Math would be less confusing and difficult if only those people who
understand it would be the ones teaching it and explaining it"
A cubic Bezier curve is a piece of a cubic polynomial curve, and as
such, even a small portion determines the entire infinite curve. With
a properly chosen set of four control points we can delimit as large
or as small a piece as we wish.
Specifically with respect to multiple loops, the question can now be
asked with respect to a piece, or for any cubic. The curve cannot loop
unless the control polyline loops, and it is clearly impossible for a
four-point polyline to loop twice. Bezier curves are called variation
diminishing, because the curve cannot cross any line more times than
the control polyline; we could employ this for a more formal proof.
Also, we can implicitize a parametric cubic curve to produce a cubic
polynomial incorporating that curve in implicit form. A classic result
from algebraic geometry says that a line cannot contact an implicit
curve more times than its degree. These contacts include multiplicity,
so that a tangent contact counts twice, as does a double point. If a
curve has two loops, then it has two double points where it crosses
itself. A line through those double points contacts the curve four
times, which is impossible for a cubic.
Implicit cubics are a strictly larger class of curves than parametric
cubics, even when we allow rational parametrics. In fact, if a curve
in implicit form has a parametric form, then somewhere on it (or more
precisely, on its complex projective version) we must be able to make
a double contact with any line. Sweeping the line while maintaining
the double contact produces a moving third point of intersection, and
thus parameterizes the implicit curve.
Thanks for making me discover the very interesting topic of
implicitization. That was a stumbling block for me: connecting the
argument for extrema in parametric form to implicit form.
> So a cubic, having genus 0
> I'm not even sure that genus is 0 but it seemed to follow from
> one definition on some Web page.
Sort of. The key point here is that we have a *parametric* cubic.
However, if you wish to continue this discussion I would suggest
moving it to news://sci.math; there you will probably get more answers.
If you wish to learn more by yourself I would suggest to read
something on algebraic geometry. Miles Reid "Undergraduate
Algebraic Geometry" (http://isbn.nu/0521356628)
is a good starting point. For more in-depth treatment
Robin Hartshorne "Algebraic geometry" (http://isbn.nu/0387902449)
is a must-read.
> I would appreciate your
> comments, suggestions, and criticism.
I've discussed here in the past how to locate inflection points, cusps,
and horizontal/vertical tangents. This lets you first minimally divide a
curve into normalized pieces, and then attack the subdivision without
any special cases.
I suppose it is here: