
Quintic Splines, Solving rapidly


pnachtwey

Sep 29, 2007, 1:33:04 PM
I need to compute quintic splines rapidly. Below is a link to the
algorithm I am using.

http://books.google.com/books?id=kYG17apHcrYC&pg=PA1&dq=%22Computer+Aided+Geometric+Design%22&ei=woL-RoujEJ3qpwLyw_jsBQ&sig=z19_EIHAvEi9bLZ6lC7s1mdtg3I#PPA95,M1

Please look at the quintic spline section at the end of the splines chapter. There you can see a main array made up of four NxN tridiagonal blocks. The main array itself is not tridiagonal, so finding the second and fourth derivatives will not be as simple as in the cubic spline example, where a simple tridiagonal method works.

The hidden pages after the spline section mention a 'block iterative' method and a book in German. I have found that book, but my German isn't that good. I need to know what 'block iterative' means.

Gauss-Seidel is iterative, and I can optimize the main array so that only the non-zero entries are used in the iterations, but I am hoping for something better. Actually, I was hoping there would be some form of tridiagonal algorithm that wouldn't be iterative and would therefore be faster. Does anybody have any suggestions as to the best (fastest) way to find the second and fourth derivatives?

Thanks,

Peter Nachtwey

Chip Eastham

Sep 29, 2007, 5:28:00 PM
On Sep 29, 1:33 pm, pnachtwey <pnacht...@gmail.com> wrote:
> I need to compute quintic splines rapidly. Below is a link to the
> algorithm I am using.
>
> http://books.google.com/books?id=kYG17apHcrYC&pg=PA1&dq=%22Computer+A...

>
> Please, look at the quintic spline section and the end of the splines
> section. One can see there a main array made of four NxN tridiagonal
> arrays. The main array is not tridiagonal so a simple tridiagonal
> method of finding the second and fourth derivatives will not be as
> simple as the cubic spline example.
>
> The hidden pages after the spline section mentions a 'block iterative'
> method and a book in German. I have found that book but my German
> isn't that good. I need to know what 'block iterative' is.
>
> Gauss Seidel is iterative and I can optimize the main array so only
> the non zero entries are used in the iterations. I am hoping for
> something better. Actually I was hoping there would be some form of
> tridiagonal algorithm that wouldn't be iterative and therefore
> faster. Does anybody have any suggestions as to the best( fastest)
> way to find find the second and fourth derivatives?
>
> Thanks,
>
> Peter Nachtwey

I'm seeing only a book cover when I go to the link you provide.

More details, please, about what you want to do. Presumably you are fitting a quintic spline in one dimension. There are some variations, but from the mention of fourth derivatives you likely have in mind continuity at the knots through the fourth derivative. Often the knots are equally spaced, but this is not always the case, and it affects the ease of assembling the linear system to solve for the quintic coefficients in each interval between knots.

Taken together with some homogeneous conditions on derivatives
at the first and last knots, the function values + higher
order continuity conditions at the internal knots result in a
system of equations for the second and fourth derivatives at
internal knots that has a block structure of this form:

[ A -B ] [ u ] = [ d ]
[ C  A ] [ v ]   [ 0 ]

with tridiagonal blocks A,B,C. Here u represents the second
derivatives and v the fourth derivatives.

Because of the sparsity of these matrices, I'd be tempted to
experiment with one of the non-symmetric conjugate gradient
type algorithms, like CGS (conjugate gradient squared).
Sorry, I don't know of a study specifically for your
application. It might make a good paper for a numerical
analysis journal.

regards, chip

pnachtwey

Sep 29, 2007, 8:37:42 PM
On Sep 29, 2:28 pm, Chip Eastham <hardm...@gmail.com> wrote:
> On Sep 29, 1:33 pm, pnachtwey <pnacht...@gmail.com> wrote:
> I'm seeing only a book cover when I go to the link you provide.

That is funny; I see the page that describes the same matrix equations you wrote below.

> More details please, about what you want to do.

Solve for u and v, the second and fourth derivative vectors, as efficiently as possible. I can find u and v easily now, but not quickly.

> Presumably you
> are fitting a quintic spline in one dimension.

Yes

> Often the knots are equally spaced,

I can't assume this. You never know what customers will come up with.

> Taken together with some homogeneous conditions on derivatives
> at the first and last knots, the function values + higher
> order continuity conditions at the internal knots result in a
> system of equations for the second and fourth derivatives at
> internal knots that has a block structure of this form:
>
> [ A -B ] [ u ] = [ d ]
> [ C A ] [ v ] [ 0 ]
>
> with tridiagonal blocks A,B,C. Here u represents the second
> derivatives and v the fourth derivatives.

That describes the equations in the book perfectly.

>
> Because of the sparsity of these matrices, I'd be tempted to
> experiment with one of the non-symmetric conjugate gradient
> type algorithms, like CGS (conjugate gradient squared).

This is the kind of information I am looking for. I will try what you suggest and compare it to what I already have. I have never heard of CGS; it looks like it is time for more research. I am quickly going beyond my Numerical Recipes in C. Fun.

> Sorry, I don't know of a study specifically for your
> application. It might make a good paper for a numerical
> analysis journal.
>

I want to do electronic camming for machinery applications. Cubic splines work but don't provide a continuous third derivative (jerk).

>Sorry, I don't know of a study specifically for your
>application. It might make a good paper for a numerical
>analysis journal.

I am sure everything I want to do has been done before. That is why I am reading books, looking for information and asking here. :)

Actually, I am amazed you were able to figure out exactly what I am talking about even though you couldn't see the pages in the book. I searched the math groups and there wasn't much useful information on quintic splines, so I had little expectation of getting a meaningful reply. I posted the link because I thought others would find it useful.

Thanks,

Peter Nachtwey


Dave Dodson

Sep 30, 2007, 9:15:07 AM
On Sep 29, 4:28 pm, Chip Eastham <hardm...@gmail.com> wrote:
> Taken together with some homogeneous conditions on derivatives
> at the first and last knots, the function values + higher
> order continuity conditions at the internal knots result in a
> system of equations for the second and fourth derivatives at
> internal knots that has a block structure of this form:
>
> [ A -B ] [ u ] = [ d ]
> [ C A ] [ v ] [ 0 ]
>
> with tridiagonal blocks A,B,C. Here u represents the second
> derivatives and v the fourth derivatives.
>
> Because of the sparsity of these matrices, I'd be tempted to
> experiment with one of the non-symmetric conjugate gradient
> type algorithms, like CGS (conjugate gradient squared).

I'd be tempted to try something simple first, such as the iteration

A*u_{n+1} = d + B*v_n

A*v_{n+1} = -C*u_{n+1}

using a tridiagonal solver to solve the indicated systems.
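
A minimal C sketch of that block iteration, assuming the sub-, main and super-diagonals of the tridiagonal blocks A, B and C are stored in plain arrays (the names Al/Ad/Au, Bl/Bd/Bu, Cl/Cd/Cu, d, u, v are hypothetical; indices run 0..N-1 with Al[0] and Au[N-1] unused). The Thomas solver does no pivoting, so it assumes A is diagonally dominant, as spline matrices normally are:

#include <stdlib.h>

/* Solve a tridiagonal system: l = sub-diagonal, dg = main diagonal,
   s = super-diagonal, rhs = right-hand side, x = solution. */
static void thomas_solve(int n, const double *l, const double *dg,
                         const double *s, const double *rhs, double *x)
{
    double *c = malloc(n * sizeof *c);    /* modified super-diagonal  */
    double *g = malloc(n * sizeof *g);    /* modified right-hand side */
    c[0] = s[0] / dg[0];
    g[0] = rhs[0] / dg[0];
    for (int i = 1; i < n; i++) {
        double m = dg[i] - l[i] * c[i - 1];
        c[i] = (i < n - 1) ? s[i] / m : 0.0;
        g[i] = (rhs[i] - l[i] * g[i - 1]) / m;
    }
    x[n - 1] = g[n - 1];
    for (int i = n - 2; i >= 0; i--)      /* back substitution */
        x[i] = g[i] - c[i] * x[i + 1];
    free(c);
    free(g);
}

/* Block Gauss-Seidel: repeat  A u = d + B v  and  A v = -C u. */
void block_iterate(int N, int sweeps,
                   const double *Al, const double *Ad, const double *Au,
                   const double *Bl, const double *Bd, const double *Bu,
                   const double *Cl, const double *Cd, const double *Cu,
                   const double *d, double *u, double *v)
{
    double *rhs = malloc(N * sizeof *rhs);
    for (int k = 0; k < sweeps; k++) {
        for (int i = 0; i < N; i++) {     /* rhs = d + B*v */
            rhs[i] = d[i] + Bd[i] * v[i];
            if (i > 0)     rhs[i] += Bl[i] * v[i - 1];
            if (i < N - 1) rhs[i] += Bu[i] * v[i + 1];
        }
        thomas_solve(N, Al, Ad, Au, rhs, u);
        for (int i = 0; i < N; i++) {     /* rhs = -C*u */
            rhs[i] = -Cd[i] * u[i];
            if (i > 0)     rhs[i] -= Cl[i] * u[i - 1];
            if (i < N - 1) rhs[i] -= Cu[i] * u[i + 1];
        }
        thomas_solve(N, Al, Ad, Au, rhs, v);
    }
    free(rhs);
}

Each sweep is two O(N) tridiagonal solves plus two O(N) matrix-vector products, so even a few dozen sweeps are cheap compared with a dense solve.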

Dave

John Herman

Sep 30, 2007, 11:01:13 AM
Did you look here?

De Boor, C., 1978. A Practical Guide to Splines. Springer, New York


Peter Nachtwey

Sep 30, 2007, 12:52:38 PM
"John Herman" <John_W...@yahoo.com> wrote in message
news:46ffba10$0$28800$4c36...@roadrunner.com...

> Did you look here?
>
> De Boor, C., 1978. A Practical Guide to Splines. Springer, New York
>
Yes, I bought it and read the chapter on higher order splines. It was worthless for my purposes. The higher order splines it mentions are B-splines. I didn't like the style either. De Boor seems to be too hung up on using cryptic math symbology. I am an engineer who looks at math books to solve problems, not a mathematician who is familiar with the cryptic symbols. If you look at the book, there are pages of definitions for cryptic symbols. Why should a mathematician need to do that? I wasted $55 and the time spent trying to dig some useful information from it :( Some day maybe, but not now. Thumbs down.

If I ever write a book it will definitely be application-oriented, so that an engineer 20 years out of college will still be able to understand it.

The algorithm in "Fundamentals of Computer Aided Geometric Design" is exactly what I wanted. Its derivation of the quintic spline is simple and straightforward. The quintic spline algorithm is similar to the cubic spline algorithm in NR in C, with which we are very familiar. If I hadn't found the quintic spline algorithm in "Fundamentals of Computer Aided Geometric Design", I would have started with the cubic splines in NR in C and extended them. I had imagined the result would be something similar to the quintic splines in "Fundamentals of Computer Aided Geometric Design". "Fundamentals of Computer Aided Geometric Design" is not just a book on splines. It covers shading, rotations, interpolating between wire meshes, etc. It is a book for software engineers who write CAD programs or even games. Thumbs up.

Peter Nachtwey

Peter Nachtwey

Sep 30, 2007, 1:25:14 PM
"Chip Eastham" <hard...@gmail.com> wrote in message
news:1191101280.2...@k79g2000hse.googlegroups.com...

I have looked at CGS and am still trying to get a handle on it.
http://citeseer.ist.psu.edu/cache/papers/cs/80/ftp:zSzzSzftp.cerfacs.frzSzpubzSzalgozSzworkshopszSzHCMzSzIterativezSzMeeting.11.7_8.1994zSzUtrechtzSzgcgs.pdf/fokkema94generalized.pdf
It appears to be an optimization routine. I will have to test this carefully because of the warnings. It also says it is best for non-symmetric matrices, which leads to the question of what is best for symmetric matrices. I have a lot of testing to do.

Peter Nachtwey


Chip Eastham

Sep 30, 2007, 1:28:15 PM
On Sep 29, 8:37 pm, pnachtwey <pnacht...@gmail.com> wrote:

[snip]


> I want to do electronic camming for machinery applications. Cubic
> splines work but don't provide a continuous 3rd derivative or jerk.

Are these then to be periodic splines? The matrix assembly should
perhaps be slightly altered to account for this.

The other thing to play around with, in conjunction with your existing Gauss-Seidel iteration (which converges, but too slowly), is an over-relaxation parameter. See here:

[Successive Over Relaxation -- SOR Method]
http://math.fullerton.edu/mathews/n2003/SORmethodMod.html

"When we choose w=1 the SOR method reduces to the Gauss-Seidel
method."
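
For reference, one SOR sweep over a generic n x n system M x = b looks like the sketch below (a minimal illustration, not code from the page above; a production version would only touch the nonzero diagonals of the block system). With w = 1.0 it is exactly the Gauss-Seidel update:

/* One SOR sweep; M is row-major, x is updated in place. */
void sor_sweep(int n, const double *M, const double *b, double *x, double w)
{
    for (int i = 0; i < n; i++) {
        double sigma = 0.0;
        for (int j = 0; j < n; j++)
            if (j != i)
                sigma += M[i * n + j] * x[j];
        double gs = (b[i] - sigma) / M[i * n + i];   /* Gauss-Seidel value */
        x[i] = (1.0 - w) * x[i] + w * gs;            /* relax toward it    */
    }
}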

regards, chip

Peter Nachtwey

Sep 30, 2007, 4:16:24 PM

"Chip Eastham" <hard...@gmail.com> wrote in message
news:1191173295.1...@22g2000hsm.googlegroups.com...

> On Sep 29, 8:37 pm, pnachtwey <pnacht...@gmail.com> wrote:
>
> [snip]
>> I want to do electronic camming for machinery applications. Cubic
>> splines work but don't provide a continuous 3rd derivative or jerk.
>
> Are these then to be periodic splines? The matrix assembly should
> perhaps be slightly altered to account for this.

They could be. I am making a product that users will use for their
purposes. I never know what that may be.

>
> The other thing to play around with in conjunction with your existing
> Gauss-Siedel iteration (that converges but too slowly) is an over-
> relaxation parameter. See here:
>
> [Successive Over Relaxation -- SOR Method]
> http://math.fullerton.edu/mathews/n2003/SORmethodMod.html
>
> "When we choose w=1 the SOR method reduces to the Gauss-Seidel
> method."

SOR definitely looks better than Gauss-Seidel, and it looks like just a small tweak that needs 'tuning'. It still takes a lot of iterations, though. I can see that making intelligent initial estimates is very important for reducing the number of iterations. Thanks again.

Peter Nachtwey


Chip Eastham

Sep 30, 2007, 9:17:11 PM
On Sep 30, 1:25 pm, "Peter Nachtwey" <pnacht...@comcast.net> wrote:
> "Chip Eastham" <hardm...@gmail.com> wrote in message

>
> news:1191101280.2...@k79g2000hse.googlegroups.com...
>
> > On Sep 29, 1:33 pm, pnachtwey <pnacht...@gmail.com> wrote:
>
> > Because of the sparsity of these matrices, I'd be tempted to
> > experiment with one of the non-symmetric conjugate gradient
> > type algorithms, like CGS (conjugate gradient squared).
> > Sorry, I don't know of a study specifically for your
> > application. It might make a good paper for a numerical
> > analysis journal.
>
> I have looked at CGS and am still trying to get a handle on it.http://citeseer.ist.psu.edu/cache/papers/cs/80/ftp:zSzzSzftp.cerfacs....

> It appears to be an optimizing routine. I am will have to test this
> carefully because the warnings. It also say it is best for non-symetrical
> arrays. This leads to the question about what is best for symeterical
> arrays.
> I have a lot of testing to do.
>
> Peter Nachtwey

The conjugate gradient method is an iterative algorithm
which, in principle, is also a direct method for symmetric
positive definite matrices. CGS is one variant that can be
used on nonsymmetric matrices.

[Conjugate Gradient Method -- Wikipedia]
http://en.wikipedia.org/wiki/Conjugate_gradient_method

The attraction of conjugate gradient-like methods is that
they require limited storage (some constant number of
vectors plus a few constants) together with the ability
to compute matrix-vector products. For sparse matrices
(such as you are dealing with) these matrix-vector
products are computationally cheap.
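
For this block system the whole 2N x 2N matrix never has to be formed. A sketch of the product [ A -B ; C A ] applied to (u, v) using only the nine stored diagonals (hypothetical array names, 0-based, Al[0] and Au[N-1] unused) is all a method like CGS needs, and it costs O(N) per call:

/* ru = A*u - B*v,  rv = C*u + A*v */
void block_matvec(int N,
                  const double *Al, const double *Ad, const double *Au,
                  const double *Bl, const double *Bd, const double *Bu,
                  const double *Cl, const double *Cd, const double *Cu,
                  const double *u, const double *v,
                  double *ru, double *rv)
{
    for (int i = 0; i < N; i++) {
        double au = Ad[i] * u[i], bv = Bd[i] * v[i];
        double cu = Cd[i] * u[i], av = Ad[i] * v[i];
        if (i > 0) {
            au += Al[i] * u[i - 1];  bv += Bl[i] * v[i - 1];
            cu += Cl[i] * u[i - 1];  av += Al[i] * v[i - 1];
        }
        if (i < N - 1) {
            au += Au[i] * u[i + 1];  bv += Bu[i] * v[i + 1];
            cu += Cu[i] * u[i + 1];  av += Au[i] * v[i + 1];
        }
        ru[i] = au - bv;
        rv[i] = cu + av;
    }
}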

The desire to accelerate convergence of conjugate
gradient-like methods leads to the topic of
preconditioning:

[Preconditioned Conjugate Gradient Method]
http://www.math-linux.com/spip.php?article55

[Preconditioned Conjugate Gradient Methods]
http://www.cs.utk.edu/~dongarra/etemplates/node418.html


--c

Peter Spellucci

Oct 1, 2007, 6:57:32 AM


"block iterative" means: you deal with submatrices in your system "as if they
were numbers" (but of course obeying the rules of matrix algebra). For example, if you have

[ A11 A12 ; A21 A22 ] [x;y] = [b1;b2]

with A11 and A22 square, then you could iterate

A11*x^{k+1} = b1 - A12*y^{k}
A22*y^{k+1} = b2 - A21*x^{k+1}

This would be the Gauss-Seidel variant. Unfortunately these block iterations converge only a little better than the componentwise version.
If you can restrict yourself to natural splines, then this will help:

toms/600
keywords: approximation, interpolation, spline approximation, quintic spline
gams: E1a
title: QUINAT, QUINEQ, and QUIND
for: quintic natural spline interpolation; translation of algorithm 507
by: J.G. Herriot and C.H. Reinsch
ref: ACM TOMS 9 (1983) 258-259

Maybe the linear solver used there gives you a hint about what to do.

Also, the library

http://netlib.org/dierckx/

contains curve-fitting routines with splines of arbitrary order, not only cubic.

And finally, if, as already discussed in the group, your system is only two blocks with tridiagonal subblocks,

[ A -C ] [ x ]   [ b1 ]
[ B  A ] [ y ] = [ b2 ]

with three tridiagonal matrices, and A has a Cholesky decomposition A = L*L^T, then you could formally, using

u = L^T x,   v = L^T y

(with x and y easily recovered from u and v), write this as

[ I              -L^{-1}CL^{-T} ] [ u ]   [ c1 ]     where L c1 = b1
[ L^{-1}BL^{-T}   I             ] [ v ] = [ c2 ]           L c2 = b2
or, by substituting u from the first equation into the second,

[ I + L^{-1}BL^{-T}L^{-1}CL^{-T} ] [ v ] = [ c2 ] - L^{-1}BL^{-T} [ c1 ]

and iterate on this one. Of course you never have to invert L or L^T; instead you solve linear systems with L or L^T, which is only O(n) in complexity.

hth
peter

pnachtwey

Oct 1, 2007, 4:15:04 PM
On Oct 1, 3:57 am, spellu...@fb04373.mathematik.tu-darmstadt.de (Peter
Spellucci) wrote:

> In article <1191087184.651854.271...@57g2000hsv.googlegroups.com>, pnachtwey <pnacht...@gmail.com> writes:
>
> >I need to compute quintic splines rapidly. Below is a link to the
> >algorithm I am using.
> >
> >http://books.google.com/books?id=kYG17apHcrYC&pg=PA1&dq=%22Computer+A...

Thanks, I will investigate that too.

Peter Nachtwey


Peter Spellucci

Oct 2, 2007, 6:34:11 AM


Thinking a little bit more about this, I believe that your problem simply comes from a wrong enumeration of the unknowns (assuming we are talking about a quintic spline curve, not a surface). If you proceed exactly as is done in the cubic case, then you begin with the fourth derivative, which is piecewise linear and continuous, and write it as

s^{(4)}(x) = (a_{5,i}*(x_{i+1} - x) + a_{5,i+1}*(x-x_i))/(x_{i+1}-x_i)
for x_i <= x <= x_{i+1}
( a_{5,i}/120 is the coefficient for x^5 in the interval [x_i,x_{i+1}]. )
We integrate this formally four times, introducing four new integration constants b_i, c_i, d_i, e_i, and write the result as

s(x) = (1/120)*(a_{5,i}*(x_{i+1}-x)^5 + a_{5,i+1}*(x-x_i)^5)/(x_{i+1}-x_i)
       + b_i*(x-x_i)^3 + c_i*(x-x_i)^2 + d_i*(x-x_i) + e_i
    =: s_i(x)   (the "piece" between x_i and x_{i+1})

for x_i <= x <= x_{i+1}.                                        (**)

This is possible since the choice of the integration constants is free. The properties that go into this construction are the continuity of the piecewise-defined fourth derivative and the fact that s is piecewise a polynomial of degree 5. Now we proceed to determine the remaining constants: from

s(x_i) = s_i(x_i) = y_i   and   s(x_{i+1}) = s_i(x_{i+1}) = y_{i+1}

we compute e_i and d_i as functions of a_{5,i}, a_{5,i+1}, b_i, c_i and, of course, (x_i, y_i) and (x_{i+1}, y_{i+1}). These are linear equations fixing e_i and d_i, and we insert the two results into the representation (**). With this, the piecewise-defined function is also continuous.

For the interior nodes we now write down the remaining conditions needed to obtain C^4 smoothness: the first, second and third derivatives taken from the left must equal the first, second and third derivatives taken from the right:

s_i'(x_{i+1})   = s_{i+1}'(x_{i+1})
s_i''(x_{i+1})  = s_{i+1}''(x_{i+1})
s_i'''(x_{i+1}) = s_{i+1}'''(x_{i+1})

This gives three equations for each interior node x_i, involving the unknowns a_{5,i}, a_{5,i+1}, a_{5,i+2}, b_i, c_i, b_{i+1}, c_{i+1}. If you number the vector of _all_ unknowns as a_{5,i}, b_i, c_i, a_{5,i+1}, b_{i+1}, c_{i+1}, a_{5,i+2}, ... then you get a seven-banded matrix (bandwidth 7) for which standard Gaussian elimination should work. You have, of course, 4 degrees of freedom in this system and make the construction unique by appropriate additional requirements. If you use separable boundary conditions, then the total system matrix remains banded; only in the periodic case is there a coupling between the first three and the last three components of the solution vector. This requires a small modification of the Gaussian elimination, just as the periodic cubic spline does, but nevertheless you can still make full use of sparsity here; only the last three columns of the U part of the LU decomposition get filled, and the last three rows of the L part as well.

hth
peter

pnachtwey

Oct 2, 2007, 7:32:56 PM
On Oct 2, 3:34 am, spellu...@fb04373.mathematik.tu-darmstadt.de (Peter Spellucci) wrote:
>
> thinking a little bit more on this I believe that your problem simply comes from
> a wrong enumeration of the unknowns:
The book covers how to use the P(i), P''(i) and P''''(i) arrays, where P'' = u and P'''' = v.
P(i), P(i+1), P''(i), P''(i+1), P''''(i) and P''''(i+1) are used to calculate the coefficients A, B, C, D, E, F for the interpolation formula

y(x) = ((((A*(x-x(i)) + B)*(x-x(i)) + C)*(x-x(i)) + D)*(x-x(i)) + E)*(x-x(i)) + F

A = (P''''(i+1) - P''''(i))/(120*(x(i+1)-x(i)))
B = P''''(i)/24
.
D = 0.5*P''(i)
.
F = P(i)
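
The interpolation itself is then just a Horner evaluation per interval; a minimal sketch (the function name is made up):

/* Evaluate the quintic piece on [x(i), x(i+1)] at the point x,
   given the six coefficients for that interval. */
double quintic_eval(double A, double B, double C,
                    double D, double E, double F,
                    double x, double x_i)
{
    double dx = x - x_i;
    return ((((A * dx + B) * dx + C) * dx + D) * dx + E) * dx + F;
}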

> then you get a seven banded matrix for which standard Gaussian elimination
> should work.

Actually, the values on the main diagonal of the -B matrix have a bigger magnitude than the values on the A matrix's main diagonal. I am hoping it will converge at all.

My test case is

    for i = 0 to N+1, where N is 90:
        x = 4*i
        y(x) = sin(x*PI/180)

The difference between the points is only 4 degrees, but the B matrix cubes these values, so they are relatively large.

> you have of course 4 degrees of freedom in this system
> and make the construction unique by appropriate additional requirements.
> if you use separable boundary conditions, then the total system matrix remains
> banded and only in the periodic case there is a coupling between the
> first three and the last three components of the solution vector.
> this requires a little modification in the Gaussian elimination like the
> periodic cubic spline does. but nevertheless also here you can make full use of
> sparsitiy ; only the last three columns of the U-part of the LU-decompositions
> get filled and the last three rows of the L-part too.

Yes, I know about eliminating the zeros by having arrays for the
diagonals.

BTW, this is what one customer does with the cubic splines:
ftp://ftp.deltamotion.com/public/movies/JAN-04%20VSS_0001.wmv
It is a big 74 MB file, so be prepared for it to take a while to download. You can see there are 6 axes geared to a feed chain or rollers that move the wood through the chipper heads, so you can see why I need to calculate the splines quickly.

I have the arrays set up and can calculate the A, B, C, D, E, and F coefficients. Now I must solve for u and v, i.e. P'' and P''''. Give me a day or two; I'll be back.

Peter Nachtwey

Dave Dodson

Oct 3, 2007, 7:58:58 AM
On Oct 2, 5:34 am, spellu...@fb04373.mathematik.tu-darmstadt.de (Peter Spellucci) wrote:
> thinking a little bit more on this I believe that your problem simply comes from
> a wrong enumeration of the unknowns:

This is a good observation. Rearrange the columns of the original system

[ A -B ] [ u ] = [ d ]
[ C  A ] [ v ]   [ 0 ]

to interleave the u and v vectors, then rearrange the rows the same way, and the system is transformed into a band matrix, which you can solve with a direct (non-iterative) general band solver such as LAPACK's DGBTRF and DGBTRS.

Maybe this is what Peter was saying, because it results in a matrix
with a total bandwidth of 7, as did Peter's.

If you have the nine diagonals representing A, B, and C, you can form
the band matrix by sprinkling those 9n values around appropriately in
a matrix with 2n rows and 7 columns. If you use DGBTRF and DGBTRS, you
will have to store this matrix in an array with 3 additional columns,
which are used for pivoting.
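
A sketch of that assembly and the factor/solve calls, written as plain C calling the Fortran LAPACK routines (the diagonal array names Al/Ad/Au, Bl/Bd/Bu, Cl/Cd/Cu are hypothetical, 0-based, Al[0] and Au[N-1] unused; the interleaved unknowns are z = (u_1, v_1, u_2, v_2, ...), which gives KL = KU = 3):

#include <stdlib.h>

/* Fortran LAPACK prototypes; some ABIs also expect a hidden string-length
   argument for the character parameter of dgbtrs_. */
extern void dgbtrf_(int *m, int *n, int *kl, int *ku,
                    double *ab, int *ldab, int *ipiv, int *info);
extern void dgbtrs_(char *trans, int *n, int *kl, int *ku, int *nrhs,
                    double *ab, int *ldab, int *ipiv,
                    double *b, int *ldb, int *info);

#define KL 3
#define KU 3

/* LAPACK band storage, column-major: AB(KL+KU+1+i-j, j) = M(i,j), 1-based. */
static void band_add(double *ab, int ldab, int i, int j, double val)
{
    ab[(KL + KU + i - j) + (size_t)(j - 1) * ldab] += val;
}

/* Assemble and solve the interleaved 2N x 2N system; on return z holds
   (u_1, v_1, u_2, v_2, ...).  Returns the LAPACK info code (0 = success). */
int solve_quintic_system(int N,
                         const double *Al, const double *Ad, const double *Au,
                         const double *Bl, const double *Bd, const double *Bu,
                         const double *Cl, const double *Cd, const double *Cu,
                         const double *d, double *z)
{
    int n = 2 * N, kl = KL, ku = KU, ldab = 2 * KL + KU + 1;
    int nrhs = 1, info;
    char trans = 'N';
    double *ab = calloc((size_t)ldab * n, sizeof *ab);
    int *ipiv = malloc(n * sizeof *ipiv);

    for (int i = 0; i < N; i++) {
        int ru = 2 * i + 1, rv = 2 * i + 2;   /* 1-based rows of the u_i, v_i equations */
        int cu = 2 * i + 1, cv = 2 * i + 2;   /* 1-based columns of u_i, v_i            */
        /* u-equation:  A u - B v = d */
        band_add(ab, ldab, ru, cu, Ad[i]);
        band_add(ab, ldab, ru, cv, -Bd[i]);
        if (i > 0) {
            band_add(ab, ldab, ru, cu - 2, Al[i]);
            band_add(ab, ldab, ru, cv - 2, -Bl[i]);
        }
        if (i < N - 1) {
            band_add(ab, ldab, ru, cu + 2, Au[i]);
            band_add(ab, ldab, ru, cv + 2, -Bu[i]);
        }
        /* v-equation:  C u + A v = 0 */
        band_add(ab, ldab, rv, cu, Cd[i]);
        band_add(ab, ldab, rv, cv, Ad[i]);
        if (i > 0) {
            band_add(ab, ldab, rv, cu - 2, Cl[i]);
            band_add(ab, ldab, rv, cv - 2, Al[i]);
        }
        if (i < N - 1) {
            band_add(ab, ldab, rv, cu + 2, Cu[i]);
            band_add(ab, ldab, rv, cv + 2, Au[i]);
        }
        z[2 * i] = d[i];          /* right-hand side: d for u-rows, 0 for v-rows */
        z[2 * i + 1] = 0.0;
    }

    dgbtrf_(&n, &n, &kl, &ku, ab, &ldab, ipiv, &info);
    if (info == 0)
        dgbtrs_(&trans, &n, &kl, &ku, &nrhs, ab, &ldab, ipiv, z, &n, &info);

    free(ab);
    free(ipiv);
    return info;
}

For a fixed bandwidth the factorization is O(N), so this is a direct, non-iterative solve of the whole system.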

Dave

pnachtwey

Oct 4, 2007, 3:00:27 PM
I am still watching for suggestions. I have just gotten to the point where I can start testing the different methods of calculating the acceleration and snap. I wrote the interpolator and all the test routines first and verified the interpolation routine against my Mathcad results. The interpolator works well and is much more accurate than the cubic spline, but we would expect that.

Chip Eastham

Oct 4, 2007, 4:38:38 PM

Hi, Peter:

I may be reading something into the problem that isn't there, but
is the "snap" the same as a discontinuity in the cam's slope? In
my mind I was imagining a cam with a smooth rim, and hence the
boundary condition imposed is that of a periodic spline. If you
have one point where the slope changes abruptly, you can probably
handle that best by making this the location of your endpoints.

If there is more than one point where the slope is discontinuous,
then the spline equations probably have to be modified to model
that. The spline equations assume the continuity of derivatives
(up to the fourth order) within the interval.

regards, chip

pnachtwey

Oct 4, 2007, 11:01:36 PM
On Oct 4, 1:38 pm, Chip Eastham <hardm...@gmail.com> wrote:
> I may be reading something into the problem that isn't there, but
> is the "snap" the same as a discontinuity in the cam's slope?

Snap is the fourth derivative, i.e. the v vector in your matrix example.
http://en.wikipedia.org/wiki/Snap_(physics)

> my mind I was imagining a cam with a smooth rim, and hence the
> boundary condition imposed is that of a periodic spline. If you
> have one point where the slope changes abruptly, you can probably
> handle that best by making this the location of your endpoints.

Not a problem. We have a lot of experience with cubic splines. We can make the end points wrap around (periodic) or be anything we want them to be. We are sure we can extend our techniques to the quintic splines.

> If there is more than one point where the slope is discontinuous,
> then the spline equations probably have to be modified to model
> that.

Not a problem. The problem has always been how to solve for the second and fourth derivatives quickly, as in the video, where 6 splines are calculated and executed for each piece of wood.

> The spline equations assume the continuity of derivatives
> (up to the fourth order) within the interval.
>
> regards, chip

Yes, we were able to verify that today. A lot depends on how accurate the acceleration and snap calculations are. The Gauss-Seidel method is working. I treated the top half and the bottom half of the array as two separate problems that are solved in parallel. This worked better (faster) than expected; I thought I might have problems with convergence. I am computing the mean squared error of the residual just to get a feel for the rate of convergence. It appears to drop by a factor of 3 each iteration using the Gauss-Seidel method.

I am looking forward to trying the SOR method tomorrow and comparing how the mean squared error drops each iteration. I am also looking at initializing the acceleration (u) and snap (v) vectors. This takes time, and I must weigh that against an iteration or two. At this time I initialize the acceleration and snap vectors to 0, which is kind of stupid.

Peter Nachtwey


pnachtwey

Oct 5, 2007, 10:06:01 PM
On Sep 30, 1:16 pm, "Peter Nachtwey" <pnacht...@comcast.net> wrote:
> "Chip Eastham" <hardm...@gmail.com> wrote in message

Update: SOR is better than Gauss-Seidel. It converges in about 60-70% of the iterations that Gauss-Seidel requires. What surprised me was which value of omega was best: I found it worked best when set to 0.8. This seemed odd because it looks like it shouldn't converge as fast as Gauss-Seidel. I tried many different values and different ways of changing omega as a function of the iteration number. The system wasn't stable with omega above 1.3. I also compared the number of iterations required if the u and v vectors were initialized to something other than 0; so far it doesn't look like that is worth the effort.

Peter Nachtwey
