
Circle fit from moments


Greg Egan

Mar 15, 2005, 3:31:43 AM
A recent thread on comp.graphics.algorithms prompted me to write up some
observations on this topic.

It's fairly well known that to perform a true least-squares fit of a
circle to a set of points in two dimensions, iterative numerical methods
are needed to find the minimum of the sum of d_i^2, where:

d_i = sqrt((x_i-p)^2 + (y_i-q)^2) - r

and the circle parameters, centre (p,q) and radius r, are varied to locate
the minimum.
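As a rough sketch of that iterative fit, a few Gauss-Newton steps suffice for well-conditioned data (NumPy assumed; the function name and the starting-guess scheme in the usage note are my own, not from any library mentioned here):

```python
import numpy as np

def circle_fit_geometric(xy, p, q, r, iters=50):
    # Gauss-Newton on the residuals d_i = sqrt((x_i-p)^2 + (y_i-q)^2) - r.
    for _ in range(iters):
        dx = xy[:, 0] - p
        dy = xy[:, 1] - q
        rho = np.hypot(dx, dy)          # distance of each point to (p, q)
        d = rho - r                     # the residuals d_i
        # Jacobian of d_i with respect to (p, q, r)
        J = np.column_stack([-dx / rho, -dy / rho, -np.ones_like(rho)])
        step = np.linalg.solve(J.T @ J, -J.T @ d)
        p, q, r = p + step[0], q + step[1], r + step[2]
    return p, q, r
```

A reasonable starting guess is the centroid of the points for (p, q) and their mean distance from it for r. A production implementation would also want a convergence test and damping (Levenberg-Marquardt); this sketch omits both.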

It's not as widely known that there's a different, but related, problem
that can be solved explicitly. If we define the quadratic function:

f(p,q,r,x,y) = (x-p)^2 + (y-q)^2 - r^2

then this function will be zero for values of (x,y) that lie on the circle
with centre (p,q) and radius r. So we can find the p, q, r that minimise
the sum of f_i^2, where:

f_i = f(p,q,r,x_i,y_i)

I think this is what Dave Eberly's Geometric Tools library routine
QuadraticCircleFit2 in
http://www.geometrictools.com/Foundation/Approximation/Wm3ApprQuadraticFit2.cpp
does.

There are several different ways to express the result. The routine
mentioned above expresses it in terms of the eigenvector of a matrix of
moments. Another nice way is the following:

c = m + (1/2) C^(-1) k (I)
r^2 = varx + vary + (c-m).(c-m) (II)


where:

c is the centre of the fitted circle
r is the radius of the fitted circle
m is the mean of the data, (xm,ym)
varx is the variance of the data's x coordinates
(i.e. the average of (x_i-xm)^2)
vary is the variance of the data's y coordinates
cov is the covariance of the data
(i.e. the average of (x_i-xm)*(y_i-ym))

C is the covariance matrix:

                  | varx  cov  |
                  | cov   vary |

C^(-1) is the matrix inverse of C, which is just:

                  1          | vary  -cov |
          ------------------ |            |
          varx*vary - cov^2  | -cov  varx |

k is the vector found by taking the average of:

((X_i-m).(X_i-m)) (X_i-m)

where X_i is just the data point (x_i,y_i),
and we're taking the dot product of two copies
of the difference between X_i and the mean,
then multiplying the difference vector by
that number.
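A minimal NumPy sketch of equations (I) and (II) above (my own naming throughout; this is not the Geometric Tools code):

```python
import numpy as np

def circle_fit_moments(xy):
    # Explicit circle fit from the data's moments:
    #   c   = m + (1/2) C^(-1) k          (I)
    #   r^2 = varx + vary + |c - m|^2     (II)
    m = xy.mean(axis=0)                    # mean of the data
    d = xy - m                             # X_i - m
    C = (d.T @ d) / len(xy)                # covariance matrix (population form)
    # k = average of (|X_i - m|^2) (X_i - m)
    k = ((d ** 2).sum(axis=1)[:, None] * d).mean(axis=0)
    c = m + 0.5 * np.linalg.solve(C, k)    # equation (I)
    r2 = np.trace(C) + (c - m) @ (c - m)   # equation (II); tr(C) = varx + vary
    return c, np.sqrt(r2)
```

Note that np.linalg.solve(C, k) computes C^(-1) k without forming the inverse explicitly, which is the numerically preferable route.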


At first sight, it might seem amazing that such simple equations can
identify the centre of the circle from the moments of the data. Yet if the
data all lie exactly on some circle, then no matter how evenly or unevenly
the points are distributed, these equations will still identify the centre
and radius exactly.

To see why this works, it helps to imagine taking the moments that define
the vector k around a different point, other than the mean. If we define:

k(w) = average of ((X_i-w).(X_i-w)) (X_i-w)

for any point w=(w_x,w_y), then it's not hard to show that:

k(w) - k =

2 C (m-w) + (varx + vary + (m-w).(m-w)) (m-w)
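This identity is easy to check numerically on arbitrary data; a throwaway NumPy script of my own:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))          # arbitrary data, not on any circle
m = X.mean(axis=0)
d = X - m
C = (d.T @ d) / len(X)                # covariance matrix

def k_about(w):
    # k(w) = average of (|X_i - w|^2) (X_i - w)
    e = X - w
    return ((e ** 2).sum(axis=1)[:, None] * e).mean(axis=0)

w = np.array([0.7, -1.3])             # an arbitrary reference point
u = m - w
lhs = k_about(w) - k_about(m)         # k(w) - k
rhs = 2 * C @ u + (np.trace(C) + u @ u) * u
```

Here lhs and rhs agree to machine precision, for any choice of w.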

Also, if we take the variance about the point w=(w_x,w_y) instead of the mean:

varx(w) = average of (x_i-w_x)^2
vary(w) = average of (y_i-w_y)^2

we get:

varx(w)+vary(w) = varx + vary + (m-w).(m-w)

Suppose we have data that all lie on a circle. Then if we choose w to be
the centre of that circle, we'll have:

varx(w)+vary(w) = average squared dist from X_i to w
= r^2

and

k(w) = average of ((X_i-w).(X_i-w)) (X_i-w)
= average of r^2 (X_i-w)
= r^2 (m-w)

Combining these results, we get

r^2 = varx(w) + vary(w)
= varx + vary + (m-w).(m-w)

k(w) = r^2 (m-w)
= (varx + vary + (m-w).(m-w)) (m-w)

k(w) - k =

(varx + vary + (m-w).(m-w)) (m-w) - k =

2 C (m-w) + (varx + vary + (m-w).(m-w)) (m-w)


which yields:

k = 2 C (w-m)

w = m + (1/2) C^(-1) k

--
Greg Egan

Email address (remove name of animal and add standard punctuation):
gregegan netspace zebra net au

Ray Koopman

Mar 18, 2005, 3:25:02 AM
Greg Egan wrote:
> It's fairly well known that to perform a true least-squares fit of
> a circle to a set of points in two dimensions, iterative numerical
> methods are needed to find the minimum of the sum of d_i^2, where:
>
> d_i = sqrt((x_i-p)^2 + (y_i-q)^2) - r
>
> and the circle parameters, centre (p,q) and radius r, are varied to
> locate the minimum.
>
> It's not as widely known that there's a different, but related,
> problem that can be solved explicitly. If we define the quadratic
> function:
>
> f(p,q,r,x,y) = (x-p)^2 + (y-q)^2 - r^2
>
> then this function will be zero for values of (x,y) that lie on the
> circle with centre (p,q) and radius r. So we can find the p, q, r
> that minimise the sum of f_i^2, where:
>
> f_i = f(p,q,r,x_i,y_i)

There's a more compact way to state what's going on. From an Oct 4,
2004, post in the comp.soft-sys.math.mathematica thread "3D data set":

> Given n points in two dimensions, {{x1,y1},...,{xn,yn}}, the
> following routine fits a circle thru them by minimizing the variance
> of the squared distances of the points from the putative center.

However you characterize it, this approach generalizes nicely to
fitting a hypersphere to points in m dimensions.
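Indeed, the moment formulas carry over verbatim to m dimensions: varx + vary becomes tr(C), and C becomes the m x m covariance matrix. A hedged NumPy sketch of that generalization (my own naming, not Koopman's routine):

```python
import numpy as np

def hypersphere_fit(X):
    # Same moment formulas in m dimensions:
    #   c = mu + (1/2) C^(-1) k,   r^2 = tr(C) + |c - mu|^2
    mu = X.mean(axis=0)
    d = X - mu
    C = (d.T @ d) / len(X)                 # m x m covariance matrix
    k = ((d ** 2).sum(axis=1)[:, None] * d).mean(axis=0)
    c = mu + 0.5 * np.linalg.solve(C, k)
    return c, np.sqrt(np.trace(C) + (c - mu) @ (c - mu))
```

As in two dimensions, data lying exactly on a hypersphere (in general position, so C is invertible) is recovered exactly, however unevenly the points are spread.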
