Just to fix a common misunderstanding :) The DDA is used to compute the
values of a function on a uniform grid incrementally. This is usually
not useful for line drawing, but it is useful for scan conversion of
shapes. The midpoint and Bresenham line algorithms do a different kind
of incremental computation.
Visually:
DDA
---
If x is given as a function of y, the DDA allows one to compute the
x-values incrementally, row by row:
*
*
*
Bresenham and Midpoint
----------------------
In the Bresenham and midpoint line algorithms, a pixel is moved one
step at a time, either to the right or diagonally up-right (in one of
the eight octant-symmetric cases).
*****
*****
*****
DDA explained
=============
If f : RR^d --> RR is a function, then the DDA can be used, in some
cases, to compute the value of f in a uniform grid cheaply
(incrementally). It is most commonly used to evaluate m-degree
polynomials in a grid of points (when m is low), which we shall
concentrate on next.
Case d = 1, m = 1
-----------------
Consider f(x) = ax + b. We would like to evaluate f at the grid of
points 0, dx, 2 dx, ..., (n - 1) dx, where dx in RR. The _first
difference of f_ is defined by
df(x) = f(x + dx) - f(x).
Since I am lazy, I write d instead of the more proper Delta. In our
example case,
df(x) = (a(x + dx) + b) - (ax + b)
= a dx,
which is a constant (a zero degree polynomial).
To evaluate f, we proceed as follows. The first point f_0 is computed
directly by
f_0 = f(0) = b.
The subsequent (n - 1) points {f_1, ..., f_{n - 1}} are computed
iteratively by
f_i = f_{i - 1} + df_{i - 1}
= f_{i - 1} + a dx.
Thus we get the favorable result that when evaluating f in a uniform
grid, direct evaluation takes n multiplications and n additions, and DDA
takes 1 multiplication and n additions.
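As an illustration, here is a minimal Python sketch of this first-order
DDA (the function name and interface are my own, not from any
particular library):

```python
def dda_linear(a, b, dx, n):
    """Evaluate f(x) = a*x + b at 0, dx, 2 dx, ..., (n - 1) dx
    incrementally: one multiplication up front, then only additions."""
    values = []
    f = b        # f_0 = f(0) = b
    df = a * dx  # the constant first difference a dx
    for _ in range(n):
        values.append(f)
        f += df  # f_i = f_{i - 1} + a dx
    return values
```

For example, f(x) = 2x + 3 on the grid 0, 1, 2, 3 yields 3, 5, 7, 9.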
Case d = 1, m = 2
-----------------
Consider f(x) = ax^2 + bx + c. We would like to evaluate f at the grid
of points 0, dx, 2 dx, ..., (n - 1) dx, where dx in RR. The first
difference of f is given by
df(x) = f(x + dx) - f(x)
= a(x + dx)^2 + b(x + dx) + c - (ax^2 + bx + c)
= ax^2 + 2ax dx + a dx^2 + bx + b dx + c - ax^2 - bx - c
= 2ax dx + a dx^2 + b dx,
which is a first-degree polynomial. Since we know how to evaluate df
quickly from the first example, we compute the _second difference_ of f:
(d^2 f)(x) = df(x + dx) - df(x)
= [2a (x + dx) dx + a dx^2 + b dx] -
[2ax dx + a dx^2 + b dx]
= 2a dx^2,
which is a constant (a zero-degree polynomial).
To evaluate f, we proceed as follows. All of f(0), df(0), and (d^2 f)(0)
are computed directly:
f_0 = f(0) = c,
df_0 = df(0) = a dx^2 + b dx, and
[d^2 f]_0 = (d^2 f)(0) = 2a dx^2.
The subsequent (n - 1) points {f_1, ..., f_{n - 1}} are computed
iteratively by
f_i = f_{i - 1} + df_{i - 1}
df_i = df_{i - 1} + [d^2 f]_{i - 1}
= df_{i - 1} + 2a dx^2.
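A corresponding Python sketch for the quadratic case (again, the name
and interface are mine):

```python
def dda_quadratic(a, b, c, dx, n):
    """Evaluate f(x) = a*x^2 + b*x + c at 0, dx, ..., (n - 1) dx
    incrementally, using first and second differences."""
    values = []
    f = c                      # f(0) = c
    df = a * dx * dx + b * dx  # df(0) = a dx^2 + b dx
    d2f = 2 * a * dx * dx      # the constant second difference 2a dx^2
    for _ in range(n):
        values.append(f)
        f += df    # f_i = f_{i - 1} + df_{i - 1}
        df += d2f  # df_i = df_{i - 1} + 2a dx^2
    return values
```

For example, f(x) = x^2 on the grid 0, 1, 2, 3, 4 yields 0, 1, 4, 9, 16.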
Case d = 1, m in NN
-------------------
Consider f(x) = sum_{i = 0}^m a_i x^i. We would like to evaluate f at
the grid of points 0, dx, 2 dx, ..., (n - 1) dx, where dx in RR. If
we think of the d as an operator, the _difference operator_, on
functions, then we can apply d multiple times to get the _i:th
difference of f_:
(d^i f)(x) = (d^(i - 1)f)(x + dx) - (d^(i - 1)f)(x).
The 0:th difference of f is f itself (d^0 f = f). The reader might
see a resemblance between the difference operator and the
differentiation operator. Perhaps the use of the difference operator
could be called _differenciation_. Each differenciation decreases the
polynomial degree by one. Thus the m:th difference of f is a constant.
To evaluate f, we proceed as follows. First all of the differences at 0
up to degree m are computed directly:
f(0) = a_0,
(d^1 f)(0) (where d^1 f is an (m - 1)-degree polynomial),
...
(d^m f)(0) (where d^m f is a constant).
The subsequent (n - 1) points {f_1, ..., f_{n - 1}} are computed
iteratively by
f_i = f_{i - 1} + [d^1 f]_{i - 1},
[d^1 f]_i = [d^1 f]_{i - 1} + [d^2 f]_{i - 1},
...
[d^(m - 1)f]_i = [d^(m - 1)f]_{i - 1} + [d^m f]_{i - 1}.
Thus, most of the computation is done by using additions alone.
Multiplication is only needed at the initialization.
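The general scheme can be sketched in Python as follows: the initial
differences at 0 are obtained by directly evaluating f at the first
m + 1 grid points and building a difference table (the helper names
here are my own):

```python
def dda_poly(coeffs, dx, n):
    """Evaluate the polynomial with coefficients coeffs (coeffs[i] is a_i)
    at 0, dx, 2 dx, ..., (n - 1) dx using forward differences.
    Multiplications occur only in this initialization."""
    m = len(coeffs) - 1

    def f(x):
        # Direct evaluation, used only for the first m + 1 grid points.
        return sum(a * x**i for i, a in enumerate(coeffs))

    # Build the difference table: diffs = [f(0), df(0), ..., (d^m f)(0)].
    row = [f(k * dx) for k in range(m + 1)]
    diffs = []
    while row:
        diffs.append(row[0])
        row = [row[j + 1] - row[j] for j in range(len(row) - 1)]

    # Incremental evaluation: only additions from here on.
    values = []
    for _ in range(n):
        values.append(diffs[0])
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]  # [d^i f]_k = [d^i f]_{k-1} + [d^(i+1) f]_{k-1}
    return values
```

Note that the inner update runs in increasing i, so each difference is
incremented by the old value of the next-higher difference, exactly as
in the recurrences above.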
Case d in NN, m = 2
-------------------
Consider f(x) = x^T A x, where x in RR^d, and A in RR^{d times d}. We
would like to evaluate f at the grid of points
{0, dy_1, 2 dy_1, ..., (n_1 - 1) dy_1} times
... times
{0, dy_d, 2 dy_d, ..., (n_d - 1) dy_d}
Here f is a homogeneous second-degree polynomial (a quadratic form); we
may take A to be symmetric without loss of generality. While
homogeneity might seem like losing generality, a non-homogeneous
polynomial can always be lifted to a (d + 1)-dimensional homogeneous
polynomial. If A is positive-definite, then each level set of f is an
ellipse (an ellipsoid for d > 2). This case of the DDA is used in the
Elliptically-Weighted-Average (EWA) filter when drawing textured
polygons.
(See some images here:)
http://kaba.hilvi.org/cg/ewa/ewa_images.htm
If dx_1 in RR^d, then the first difference of f is given by
df(x; dx_1) = f(x + dx_1) - f(x)
= (x + dx_1)^T A (x + dx_1) - x^T A x
= x^T A x + 2x^T A dx_1 + dx_1^T A dx_1 - x^T A x,
= 2x^T A dx_1 + dx_1^T A dx_1,
which is a first-degree polynomial. The second difference of f is given
by (dx_2 in RR^d)
(d^2 f)(x; dx_1, dx_2)
= df(x + dx_2; dx_1) - df(x; dx_1)
= 2(x + dx_2)^T A dx_1 + dx_1^T A dx_1 -
[2x^T A dx_1 + dx_1^T A dx_1]
= 2 dx_2^T A dx_1
= 2 dx_1^T A dx_2,
which is a constant. These differences can again be used to
incrementally compute the value of f at the desired grid.
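A Python sketch of this two-dimensional walk over the grid (the
function name is mine; A is assumed symmetric, and dx1, dx2 are the
grid step vectors):

```python
def dda_quadric_grid(A, dx1, dx2, n1, n2):
    """Evaluate f(x) = x^T A x at the grid points i*dx1 + j*dx2,
    0 <= i < n1, 0 <= j < n2, incrementally. Assumes A is symmetric."""
    def q(u, v):
        # The bilinear form u^T A v.
        return sum(u[i] * A[i][j] * v[j]
                   for i in range(len(u)) for j in range(len(v)))

    # Constant second differences.
    d11 = 2 * q(dx1, dx1)   # along dx1
    d22 = 2 * q(dx2, dx2)   # along dx2
    d12 = 2 * q(dx2, dx1)   # mixed: 2 dx_2^T A dx_1

    # First differences at the origin: df(0; dx) = dx^T A dx.
    df1_row = q(dx1, dx1)   # df along dx1 at the start of row 0
    df2 = q(dx2, dx2)       # df along dx2 at column 0
    f_row = 0.0             # f at the start of row 0

    out = []
    for j in range(n2):
        f, df1 = f_row, df1_row
        row = []
        for i in range(n1):
            row.append(f)
            f += df1
            df1 += d11
        out.append(row)
        # Step to the next row along dx2.
        f_row += df2
        df2 += d22
        df1_row += d12  # moving by dx2 shifts df along dx1 by 2 dx_2^T A dx_1
    return out
```

With A the identity, dx1 = (1, 0), and dx2 = (0, 1), this reproduces
f(i, j) = i^2 + j^2 on the integer grid.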
Case d in NN, m in NN
---------------------
Similarly, other cases can be developed: arbitrary polynomials in
arbitrary dimension. But I'm not sure which mathematical machinery would
be appropriate for handling them. I'd put my bets on tensor algebra (of
which I know little). Also, the value of the function f can be
generalized to structures other than fields (such as RR).
(Since I wrote this from scratch, errors are inevitable.)