first of all, is it true? integration is defined on a larger class of
functions than differentiation, so in some sense, it is easier to show
the existence of the integral than the derivative.
but what we really want to know is why, given a function built out of
certain "elementary functions", it is easy to construct the derivative
in terms of those elementary functions, but hard (and sometimes
impossible) to construct the antiderivative in terms of those elementary
functions.
from a practical standpoint, the reason is clear: there are rules for
the derivatives of the two constructions you can do to elementary
functions, namely the product and the composition. if there were a rule
for the integral of the composition of two functions and for the product
of two functions, then from that, we could write any integrals of
elementary functions in terms of elementary functions.
but we cannot. so why not? what is different about integration that
makes it not have these rules? on the surface, the definitions of
differentiation and integration seem at least slightly similar: take
the limit as epsilon goes to zero of some algebraic operation on your
function.
i wanted to mutter something about differential Galois theory, but i
think that would just have been a cover for the more honest "i don't know".
so, what do you say?
-z
One major difference is that differentiation is a local operation while
integration involves "finite intervals" -- okay, measurable sets, but
intuitively it is defined over an extended region of the domain while
differentiation isn't.
Norm
differentiation is local and integration is nonlocal. hmm. i like that
idea, but i am not convinced. actually, didn't i once read that one of
the amazing things about de Rham cohomology is that the differential
forms on a manifold, which are all locally defined objects, can encode
global information about the manifold? but i am not sure if that has
any relevance here.
anyway, just thinking out loud here, but the difference between
integration and differentiation that makes the former hard and the
latter easy (in the elementary calculus sense) is that differentiation
is a sort of "forward" operation, and integration is an "inverse"
operation, in a way that is not symmetric.
i think the analogy with algebraic operations is good. anyone can
square a small number in her head, but who can take the square root?
solving the quadratic equation is hard, and solving certain quintics in
radicals is impossible.
but why does differentiation have to be the forward operation and
integration have to be its inverse? could we make it go the other way?
in other words, is there some fundamental property of integration that
rules out the possibility of writing the integral of a product in terms
of the integrals of the multiplicands? if we had a rule like that, and
one for composition of functions, then integration would be as
algorithmic as differentiation, and the integral of every elementary function
would be elementary.
since i know that not all elementary functions have elementary
integrals, then i must conclude that such a product rule cannot exist.
but this is a very indirect way to see this, and sort of relies on the
property i am trying to understand to explain this property. it is a
bit circular. is there an obvious reason why there can be no
integration rule for products? (the familiar product rule is not good
enough here)
Concerning the "difficulty"
I guess we have to differentiate [no pun intended] between the definite
integral, which is always taken with respect to a given set in the function's
domain, and the process of indefinite integration, which is just the reverse
of differentiation. In an algebraic sense I don't think that inverse
differentiation is all that much more difficult than forward
differentiation. We have a set of rules to apply to "elementary" functions
and algebraic (and some transcendental) combinations of them to
differentiate a given function. If none of our usual rules apply, then
differentiation is -- at least symbolically -- impossible and we're stuck
with a numerical approximation. Ditto the process of inverse
differentiation. This is generally thought to be difficult because most
people simply don't recognize the inverse rules as readily as the forward
ones. There are tables to use in both cases.
Concerning the inverse relationship
Assuming that we define the indefinite integral in terms of the definite
integral, it's relatively easy to see that integration doesn't really care
if a function has, for example, discrete jump discontinuities: the function
is still integrable. But that same function fails to be differentiable at
the points of discontinuity so in that case I'd have to say that the process
of integration is easier than the process of differentiation.
Norm
>
> .... what we really want to know is why, given a function built out of
> certain "elementary functions", it is easy to construct the derivative
> in terms of those elementary functions, but hard (and sometimes
> impossible) to construct the antiderivative in terms of those elementary
> functions.
>
> from a practical standpoint, the reason is clear: there are rules for
> the derivatives of the two constructions you can do to elementary
> functions, namely the product and the composition. if there were a rule
> for the integral of the composition of two functions and for the product
> of two functions, then from that, we could write any integrals of
> elementary functions in terms of elementary functions.
>
> but we cannot....
This is not an explanation of why it's harder to integrate, but a
comment on the major historical effect of that fact.
Integrals came first. There are quite a few arguments in Euclid
(probably due to Eudoxus) and in Archimedes, which find various areas,
volumes and centres of gravity by using limits of sums of little bits.
Modern writers often mention that Archimedes also handled a tangent to a
spiral, but this isolated case is very unlike modern differentiation.
Most of what we see as calculus-style arguments in the ancient and early
modern periods were integrations. But integration is hard, so each new
integral was a new research problem.
Then came the 17th-century development of differentiation, and the
Fundamental Theorem of the Calculus. It was Newton and Leibniz who
appreciated that differentiation admits a collection of easy algorithms,
and antidifferentiating is a practical way to find a lot of integrals.
_That_ is the sense in which Newton and Leibniz "invented the calculus".
Derivatives and integrals were already there before them, but those two
men showed how easy differentiation is, and what a lot of integrals it
lets you find.
Ken Pledger.
> someone asked this question, and i have been thinking about it for a
> while. "why is integration harder than differentiation?"
>
> first of all, is it true? integration is defined on a larger class of
> functions than differentiation, so in some sense, it is easier to show
> the existence of the integral than the derivative.
>
If you are dealing with a finite precision machine, then integration is
actually easier than differentiation. Integration is a stable process
while differentiation is ill-posed.
I'll leave it to you to find out why (hint: think about the definitions).
- Tim
A few thoughts ... First, there is a rule of sorts for integrating
the product of two functions, namely integration by parts. I'm not
sure whether this is something important in this context but
integration doesn't give a unique answer, it's only unique up to an
additive constant. Differentiation, when it can be done at all gives
a unique answer. Finally, if we restrict ourselves to a suitable
class of functions then integration and differentiation are equally
"difficult", each being a termwise operation on power series.
If you have a sequence a(n), it is generally much easier to compute its difference:
a(n+1)-a(n)
than the partial sums:
a(1)+a(2)+...+a(n).
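for a concrete instance: with a(n) = 1/n, the difference a(n+1)-a(n) = -1/(n(n+1))
comes out in closed form at once, but the partial sum 1 + 1/2 + ... + 1/n is the
n-th harmonic number, which has no elementary closed form at all.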
Zig> A few thoughts ... First, there is a rule of sorts for integrating
Zig> the product of two functions, namely integration by parts. I'm not
Zig> sure whether this is something important in this context but
Zig> integration doesn't give a unique answer, it's only unique up to an
Zig> additive constant. Differentiation, when it can be done at all gives
Zig> a unique answer. Finally, if we restrict ourselves to a suitable
Zig> class of functions then integration and differentiation are equally
Zig> "difficult", each being a termwise operation on power series.
A few thoughts echoing some of the opinions/ideas already posted here. The
class of functions that we are willing to accept as giving us *closed form*
solutions limits our ability to consider integrals as easy or
difficult. The simplest examples are the error function and certain log
integrals...
That reminds me, long ago somebody on this group had pointed out some
theory that enables one to decide whether a particular integral is
evaluable (evaluatable?) in ``closed form,'' or not -- could somebody
please resend some of that information -- It might help to shed light on
the difficulty of integration!
Greetings,
-suvrit.
>
>
> A few thoughts ... First, there is a rule of sorts for integrating
> the product of two functions, namely integration by parts.
sure, but this is not good enough. for example, consider the integral
\int 1/sqrt(b^2-x^2)*1/sqrt(a^2-x^2) dx
this is the product of two functions, each of whose integrals i know in
terms of elementary functions (arcsine). and yet, the integral of their
product is not an elementary function (it is an elliptic integral).
so i know that integration by parts will never help me solve this
integral.
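just to make the elliptic connection concrete, here is a quick numerical
cross-check (a sketch, assuming scipy is available; it uses the textbook
substitution x = b*sin(theta), valid for 0 <= u <= b < a):

import numpy as np
from scipy import integrate, special

a, b, u = 3.0, 2.0, 1.5   # need u <= b < a for this substitution

def integrand(x):
    return 1.0 / (np.sqrt(b**2 - x**2) * np.sqrt(a**2 - x**2))

# brute-force value of int_0^u dx / (sqrt(b^2-x^2) sqrt(a^2-x^2))
numeric, _ = integrate.quad(integrand, 0.0, u)

# same thing via x = b*sin(theta): (1/a) F(arcsin(u/b), m) with m = (b/a)^2
# (scipy's ellipkinc takes the parameter m = k^2)
closed = special.ellipkinc(np.arcsin(u / b), (b / a)**2) / a

print(numeric, closed)   # these agree to quad's tolerance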
compare this with differentiation, where i could immediately write down
the derivative of the product, knowing the individual derivatives. this
difference between integration and differentiation makes differentiation
an algorithm, and integration an art form.
but why the difference?
> I'm not
> sure whether this is something important in this context but
> integration doesn't give a unique answer, it's only unique up to an
> additive constant. Differentiation, when it can be done at all gives
> a unique answer.
yes. perhaps that has something to do with it. it is a good suggestion.
> Finally, if we restrict ourselves to a suitable
> class of functions then integration and differentiation are equally
> "difficult", each being a termwise operation on power series.
>
yes, of course, if we view functions in terms of their power series,
then neither one is easier or harder. so i guess i am only talking
about finite compositions of elementary functions, not power series.
perhaps there is something unnatural about restricting yourself to only
talking about closed-form expressions like that? but differentiation doesn't
care, why should integration?
>
> That reminds me, long ago somebody on this group had pointed out some
> theory that enables one to decide whether a particular integral is
> evaluable (evaluatable?) in ``closed form,'' or not -- could somebody
> please resend some of that information -- It might help to shed light on
> the difficulty of integration!
>
> Greetings,
> -suvrit.
>
I think the math that tells you when some integrals can be expressed in
terms of elementary functions is called differential Galois theory.
this theory tells you, for example, that the error function is not a
finite composition of elementary functions. it uses the same types of
ideas as used in showing that the quintic cannot be solved in terms of
simple root extractions.
i don't know much about it beyond what i have said here, so it will be
cool if someone who knows a lot about it weighs in.
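for what it's worth, the computer algebra systems already know about this
particular example; a quick check (a sketch, assuming sympy is installed)
shows the antiderivative of exp(-x^2) coming back in terms of the special
function erf rather than in elementary terms:

import sympy as sp

x = sp.symbols('x')

# sympy reports the antiderivative using erf; there is provably no way
# to write it as a finite combination of elementary functions
print(sp.integrate(sp.exp(-x**2), x))   # sqrt(pi)*erf(x)/2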
but this is exactly the issue; differentiation never gets "stuck" in
this way. given any finite composition of elementary functions, i can
use the chain rule and product rule to algorithmically reduce this
completely. i can differentiate any such function with impunity.
not so with integration (or inverse differentiation, if you would like
to make the distinction).
> Ditto the process of inverse
> differentiation. This is generally thought to be difficult because most
> people simply don't recognize the inverse rules as readily as the forward
> ones. There are tables to use in both cases.
>
> Concerning the inverse relationship
> Assuming that we define the indefinite integral in terms of the definite
> integral, it's relatively easy to see that integration doesn't really care
> if a function has, for example, discrete jump discontinuities, the function
> is still integrable. But that same function fails to be differentiable at
> the points of discontinuity so in that case I'd have to say that the process
> of integration is easier than the process of differentiation.
>
> Norm
>
yes, i mentioned something about this in the original post. in some
sense, integration is easier, because more functions are integrable than
are differentiable. a lot more, i think. it is easier for a function
to have an integral than a derivative. if we chose a function at
random, it would be more likely to be integrable than differentiable
(although i suspect actually both probabilities would be zero, eh?)
anyway, maybe this is also exactly the reason why the process of finding
that elementary antiderivative is harder. simply because the
antiderivative has to exist for a broader class of functions? somehow?
i dunno...
thanks for your responses
-z
this seems related to Norm Dresner's suggestion that basically
integration is nonlocal; you have to know a lot more about the function.
i wonder if this connection could be made more explicit?
i am not sure right away what the answer to your question is; something
about needing more significant digits to get an accurate difference
between two close numbers than you do to take the sum of a bunch of
numbers?
thinking about this, it struck me as surprising that derivatives are
defined in terms of subtraction and division, which are inverse
operations in algebra, whereas (Riemann) integration is defined in terms
of multiplication and summation, which are ``primitive'' operations
algebraically, so to speak. and yet differentiation turns out to be the
primitive operation on functions, and integration the inverse operation.
>Suvrit wrote:
>>
>> That reminds me, long ago somebody on this group had pointed out some
>> theory that enables one to decide whether a particular integral is
>> evaluable (evaluatable?) in ``closed form,'' or not -- could somebody
>> please resend some of that information -- It might help to shed light on
>> the difficulty of integration!
>>
>I think the math that tells you when some integrals can be expressed in
>terms of elementary functions is called differential Galois theory.
>this theory tells you, for example, that the error function is not a
>finite composition of elementary functions. it uses the same types of
>ideas as used in showing that the quintic cannot be solved in terms of
>simple root extractions.
>i don't know much about it beyond what i have said here, so it will be
>cool if someone who knows a lot about it weighs in.
At this moment I have no library at hand so I cannot be very
specific. But I remember something like the Risch method, which is
used in Maple and which (here I have to rely on memory, so I may be
wrong) enables one (Maple?) to say whether a function built from
elementary functions has an elementary integral or not.
A good reference is "Symbolic Integration I" by Manuel Bronstein.
I wonder if the question being asked in this thread is a non-question.
It seems to presuppose that the class of elementary functions is the
"right" set of functions to consider. That is, we pick a class of
functions that behaves well under differentiation, and puzzle over
why it doesn't behave well under antidifferentiation. But given that
we can pick other classes of functions for which differentiation
and antidifferentiation are equally easy, why puzzle? Presumably
one can also pick another class of functions that behaves well under
antidifferentiation and not under differentiation (derivatives of
C^1 functions, say). So some classes have an affinity for one process
and others have an affinity for the other process. Unless there's
some reason to think that elementary functions are "canonical" in some
sense, rather than an arbitrary artifact of notation, why would we
expect an answer beyond what has already been stated?
--
Tim Chow tchow-at-alum-dot-mit-dot-edu
The range of our projectiles---even ... the artillery---however great, will
never exceed four of those miles of which as many thousand separate us from
the center of the earth. ---Galileo, Dialogues Concerning Two New Sciences
It is exactly for these reasons that, on a computer, differentiation is "harder"
than integration. If you look at the definition of the derivative,
lim(dx->0) [f(x+dx)-f(x)]/dx, if dx really is small, almost zero, you are
subtracting two almost equal values f(x+dx) and f(x). This result is then
also close to zero, so you lose many, many digits of accuracy. Then, you
are dividing a number that is almost zero by another number that is almost
zero. This again leads to a loss of many digits of accuracy. In fact,
depending on the function f(x), it could blow up towards positive infinity
or negative infinity, with only a small change in how you put it into your
computer.
Integration, on the other hand, is just addition, which for the most part
does not lose any accuracy (sure, there are some cases), and multiplication,
which, depending, can increase accuracy (and occasionally decrease it).
Differentiation is an example of an ill-posed or unstable problem. Maybe a
more concrete example: think of tan(3^x). For x=1.411, it is positive,
nearly 4000. For x=1.412 it is negative, about -200. Just a small
difference in the x and there is a huge difference in the answer.
Differentiation, for a computer, is a lot like this.
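To see the loss of digits directly, here is a little experiment (just a
sketch, standard double precision assumed): shrink the step h in a forward
difference for sin at x=1 and watch the error first fall and then climb back
up once the cancellation takes over, somewhere around h ~ 1e-8.

import math

x = 1.0
exact = math.cos(x)                                  # true derivative of sin
for k in range(1, 16):
    h = 10.0 ** (-k)
    approx = (math.sin(x + h) - math.sin(x)) / h     # forward difference
    print(f"h=1e-{k:02d}  error={abs(approx - exact):.2e}")

# the error shrinks roughly like h down to about h = 1e-8 and then grows
# again, because sin(x+h) and sin(x) agree in most of their digits and
# the subtraction wipes those digits out.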
Here's something else to think about: when you learned about the
exponential and natural logarithm, which did you learn first and how was
the other one presented to you? In most cases you learn exp first, then ln as
the inverse function of exp. But this is misleading. In fact, exp is the
inverse function of ln. Why? ln exists as an antiderivative of 1/x (the
integral of 1/t from 1 to x) whether exp exists or not.
But, don't think about it too much.
- Tim
---
Timothy M. Brauch
Graduate Student
Department of Mathematics
Wake Forest University
>>>If you are dealing with a finite precision machine, then integration
>>>is actually easier than differentiation. Integration is a stable
>>>process while differentiation is ill-posed.
> If you look at the definition of the derivative,
> lim(dx->0) [f(x+dx)-f(x)]/dx, if dx really is small, almost zero, you are
> subtracting two almost equal values f(x+dx) and f(x). This result is then
> also close to zero, so you lose many, many digits of accuracy. Then, you
> are dividing a number that is almost zero by another number that is almost
> zero. This again leads to a loss of many digits of accuracy.
More precisely: The subtraction is done without error but it turns the
small relative error in the function values into a small absolute but large
relative error. The division has a small relative error, and has little
effect on the relative error; but it turns the small absolute error
into a large absolute error. The result is useless.
Nevertheless, one can get full accuracy numerical derivatives using
extrapolation procedures, as described in my numerical analysis book.
One just needs care.
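For the curious, here is a bare-bones illustration of one such scheme
(Richardson extrapolation applied to central differences; just a sketch,
not the version worked out in the book):

import math

def central(f, x, h):
    # plain central difference, error O(h^2)
    return (f(x + h) - f(x - h)) / (2.0 * h)

def richardson(f, x, h, levels=4):
    # extrapolation tableau: each new column cancels the next even
    # power of h in the error expansion of the central difference
    T = [[central(f, x, h / 2**i)] for i in range(levels)]
    for j in range(1, levels):
        for i in range(j, levels):
            T[i].append(T[i][j-1] + (T[i][j-1] - T[i-1][j-1]) / (4**j - 1))
    return T[-1][-1]

print(abs(central(math.sin, 1.0, 1e-2) - math.cos(1.0)))     # roughly 1e-5
print(abs(richardson(math.sin, 1.0, 1e-2) - math.cos(1.0)))  # many digits better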
Arnold Neumaier
The rules for differentiation write a derivative in terms of derivatives
of simpler pieces (using fewer letters to represent them), hence the
differentiation of any expression is a finite process.
Already integration by parts (the product rule for integration)
does not have this property.
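To make the "finite process" point concrete, here is a toy recursive
differentiator (a sketch in Python; a real system handles many more node
types). Every rule rewrites the derivative of a node in terms of derivatives
of strictly smaller subexpressions, so the recursion must terminate:

# expressions: numbers, the variable 'x', or tuples like ('+', u, v),
# ('*', u, v), ('sin', u)
def d(e):
    if isinstance(e, (int, float)):
        return 0
    if e == 'x':
        return 1
    op = e[0]
    if op == '+':
        return ('+', d(e[1]), d(e[2]))                             # sum rule
    if op == '*':
        return ('+', ('*', d(e[1]), e[2]), ('*', e[1], d(e[2])))   # product rule
    if op == 'sin':
        return ('*', ('cos', e[1]), d(e[1]))                       # chain rule
    raise ValueError("no rule for " + str(op))

# d/dx of x*sin(x): each rule only asks for derivatives of subtrees,
# so the whole process finishes after finitely many steps
print(d(('*', 'x', ('sin', 'x'))))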
> i wanted to mutter something about differential Galois theory, but i
> think that would just have been a cover for the more honest "i don't know".
The theory of differential fields gives a fairly precise answer as to
which classes of expressions can be integrated in terms of elementary functions
(and, if desired, a finite number of new functions defined as solutions
of differential equations - generalizing the definitions of exp, sin, etc.)
Thus it delineates the limits of symbolic integration packages.
Numerically, integration is simpler than differentiation in one dimension,
but in higher dimensions, integration suffers from the curse of dimensionality
while differentiation doesn't. In particular, it is very hard to get
accurate integrals in dimensions >100, say.
This has to do with the global aspects of integration and the infinitesimal
aspect of differentiation.
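A back-of-the-envelope way to see the contrast (a rough sketch, assuming
numpy): a tensor-product rule with even 3 nodes per axis needs 3^d points,
while plain Monte Carlo keeps its roughly 1/sqrt(N) error no matter what d
is -- which is only a few digits of accuracy for any realistic N.

import numpy as np

d = 100
print(3 ** d)      # nodes for a 3-point-per-axis product rule: astronomically many

# Monte Carlo estimate of the integral over [0,1]^d of sum_i x_i^2,
# whose exact value is d/3; the ~1/sqrt(N) error does not depend on d
rng = np.random.default_rng(0)
N = 100_000
x = rng.random((N, d))
print(np.mean(np.sum(x**2, axis=1)), d / 3.0)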
Arnold Neumaier
If your question is "how can the inverse of differentiation be so much
harder than differentiation itself", then I can say that it happens all
the time in mathematics that an operation is easier to apply than invert.
For example, multiplying prime numbers is easier than factoring a number
into primes. This example is more relevant than you might suppose.
Factoring a number into primes is similar to factoring a polynomial;
integrating a rational function requires factoring its denominator.
To be sure, there are fast algorithms to numerically factor polynomials.
But that dodges your question, because there are also fast algorithms
to numerically integrate any analytic function. Your question
is really about exact integration.
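The factoring connection is easy to see in a computer algebra system. A small
illustration (a sketch, assuming sympy): once the denominator is factored,
partial fractions hand you the antiderivative as a sum of logarithms (and,
in general, arctangents).

import sympy as sp

x = sp.symbols('x')
f = 1 / (x**3 - x)            # denominator factors as x*(x - 1)*(x + 1)

print(sp.apart(f))            # partial fraction decomposition over the factors
print(sp.integrate(f, x))     # the antiderivative: logarithms of those factors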
--
/\ Greg Kuperberg (UC Davis)
/ \
\ / Visit the Math ArXiv Front at http://front.math.ucdavis.edu/
\/ * All the math that's fit to e-print *
Readers may be interested in my new paper in which I fight the
curse of dimensionality for numerical integration in high dimensions:
http://front.math.ucdavis.edu/math.NA/0402047
Granted, it's impossible to completely defeat the curse. I derive
good methods for functions that are well-approximated by low-degree
polynomials.
The key terms are: integration in finite terms, Risch's algorithm, Liouville's theorem.
The authors associated with it are Liouville, Ostrowski, Ritt, Risch, and Rosenlicht.
The main references I know of are
Robert Risch, The Problem of Integration in Finite Terms, Trans. of AMS, v139 (1969) p167-189.
(the main results)
and
Max Rosenlicht, Integration in Finite Terms, Am. Math. Monthly, v79 (1972) p963-972.
(good exposition with classic examples (like int e^(-x^2)))
--
Mitch Harris
(remove q to reply)
This answer seems right to me.
And (to spell it out more explicitly) we find the situation
puzzling mainly because we don't take ease of differentiation
as our criterion for selecting this class. Rather, we generate
this class "naturally" by combining a small number of elementary
functions via algebraic ops (multiplication and division being
the pesky ones) and by other kinds of functional composition.
The chain rule makes differentiation a piece of cake. On the
other hand, to apply the chain rule in reverse we first have to
solve a factorization problem, and that's where the difficulty
lies.
Those are indeed the classical references, and the keywords Mitch listed are the
right ones. However, to dispel a common misconception, let me mention that
the general problem of integration in finite terms is not completely solved.
See Bronstein's book, that I mentioned in my other article in this thread,
for more information.
>Readers may be interested in my new paper in which I fight the
>curse of dimensionality for numerical integration in high dimensions:
> http://front.math.ucdavis.edu/math.NA/0402047
>Granted, it's impossible to completely defeat the curse. I derive
>good methods for functions that are well-approximated by low-degree
>polynomials.
Even in one dimension, there are lots of cases of
numerical integration where the integrand is not
well-approximated by even high degree polynomials.
One such example encountered is \int (F(x+c))^n dF(x),
where dF(x) = exp(-x^2/2)/sqrt(2*\pi) dx. For moderate
values of n, using 32-point Gauss-Hermite integration,
accurate for integrating all polynomials of degree 63
with respect to dF, the results are not too good.
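(In case anyone wants to reproduce this kind of comparison, here is roughly
how one might set it up -- a sketch assuming numpy and scipy, with F the
standard normal cdf; the values of n and c below are only illustrative. The
change of variable x = sqrt(2)*t converts the weight exp(-x^2/2)/sqrt(2 pi)
into the Gauss-Hermite weight exp(-t^2).)

import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy import integrate, stats

n, c = 20, 0.5                                   # illustrative values only

def g(x):
    return stats.norm.cdf(x + c) ** n            # (F(x+c))^n

# 32-point Gauss-Hermite after substituting x = sqrt(2)*t
t, w = hermgauss(32)
gh = np.sum(w * g(np.sqrt(2.0) * t)) / np.sqrt(np.pi)

# adaptive quadrature as a reference value
ref, _ = integrate.quad(lambda x: g(x) * np.exp(-x**2 / 2) / np.sqrt(2 * np.pi),
                        -np.inf, np.inf)
print(gh, ref)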
--
This address is for information only. I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Department of Statistics, Purdue University
hru...@stat.purdue.edu Phone: (765)494-6054 FAX: (765)494-0558
gr...@conifold.math.ucdavis.edu (Greg Kuperberg) writes:
> Readers may be interested in my new paper in which I fight the
> curse of dimensionality for numerical integration in high dimensions:
>
> http://front.math.ucdavis.edu/math.NA/0402047
>
> Granted, it's impossible to completely defeat the curse. I derive
> good methods for functions that are well-approximated by low-degree
> polynomials.
Greg's work deals with the problem of finding cubature rules over
the unit cube (of high dimension) that have a high degree of exactness.
Numerical analysts have long been interested in this problem. For
instance, there's the beautiful theory of how orthogonal polynomials give
us maximal-degree rules.
However, perhaps the more fundamental problem is to determine the minimal
cost of computing an approximation having given accuracy. This is a
problem of information-based complexity (IBC).
There's been a huge amount of recent work on breaking the curse of
dimensionality for high dimensional integration. One source of such
problems is mathematical finance, in which 360-dimensional problems
occur (a thirty-year mortgage, with twelve months per year, yields 360
dimensions). Much of this work deals with discrepancy theory.
For further info, you might want to look at (shameless plug:-)
@Book{TraubWerschulz:ComplInfor,
author = {J. F. Traub and A. G. Werschulz},
title = {Complexity and Information},
publisher = {Cambridge University Press},
year = 1998,
address = {Cambridge}
}
especially Chapters 3 and 4.
You also might want to check recent issues of the Journal of Complexity.
Finally, there's an online IBC database (mainly consisting of the
References sections of several IBC-related books). This can be found at
http://www.ibc-research.org/search-refs.cgi
A caveat ... I haven't had the time to update this, and so it's somewhat
out-of-date. In particular, there's been a lot of work on quantum IBC, and
none of this has been entered into the database.
--
Art Werschulz (8-{)} "Metaphors be with you." -- bumper sticker
GCS/M (GAT): d? -p+ c++ l u+(-) e--- m* s n+ h f g+ w+ t++ r- y?
Internet: a...@cs.columbia.edu  WWW: http://www.cs.columbia.edu/~agw/
ATTnet: Columbia U. (212) 939-7060, Fordham U. (212) 636-6325
A couple more ideas ... First, integration and differentiation are
equally easy for polynomials, and polynomials are uniformly dense among the
continuous functions on an interval, say. That suggests that
the problem comes from trying to work with specific algebraic
representations of functions. Also, there are plenty of functions not
easily definable in closed form (maybe most functions are of this
sort) for which differentiation is as difficult as integration unless
a power series is available. So the question really seems to be about
functions built up from basic algebraic operations and composition,
including the standard transcendental functions, too. You are asking
why integration isn't well related to the basic operations for
building up these functions.
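For polynomials (and truncated power series) the two operations really are
the same kind of bookkeeping on coefficients; a tiny illustration, assuming
numpy:

from numpy.polynomial import polynomial as P

c = [1.0, 2.0, 3.0]        # coefficients of 1 + 2x + 3x^2, lowest degree first

print(P.polyder(c))        # termwise derivative: 2 + 6x
print(P.polyint(c))        # termwise antiderivative: x + x^2 + x^3 (constant 0)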
I should study the literature of this and your reference to your book
is certainly helpful. However, I suspect that cubature which is exact
for polynomials is a very reasonable limiting case of information-based
complexity. The extra generality from not taking the limit may or may
not be important, depending on the application.
To be precise, in your article "Information-based complexity and
information-based optimization" with Traub on your home page, you
discuss both worst-case and average-case error of sampling methods.
Your example of average-case error assumes a function f on the interval
[0,1] whose rth derivative is a Brownian path. This amounts to assuming
a spectral decay rate for f. I.e. if f were a function on the circle
instead of the interval, you would say that the Fourier components are
i.i.d. Gaussian with some decay rate for their standard deviations.
In high dimensions you have to assume an abrupt decay for the problem
to be tractable. It is then very easy to construct natural-looking
spectral assumptions that limit to the polynomial cubature problem.
For example, you might very reasonably expand f in Hermite polynomials
times a Gaussian. Assuming a decay rate for this expansion seems very
similar to me to polynomial cubature with a Gaussian weight function.
This is one of the main cases of the cubature problem, and the results
in my paper apply to it.
Yes, because the integrand (without its Gaussian weight) is
approximately a step function. If you take the vector space spanned by
this integrand for different c, then the information-based complexity of
integration is high, because the integrand can take abrupt steps.
Any numerical method is going to be bad for this class. I'm more
interested in cases that information-based complexity doesn't show to
be impossible.
>Yes, because the integrand (without its Gaussian weight) is
>approximately a step function. If you take the vector space spanned by
>this integrand for different c, then the information-based complexity of
>integration is high, because the integrand can take abrupt steps.
>Any numerical method is going to be bad for this class. I'm more
>interested in cases that information-based complexity doesn't show to
>be impossible.
This is not a typical integral arising in obtaining
statistical procedures, but it is in evaluating them.
The integral is wanted for "small" c, where "small" is a
function of n.
A more important integral, which is needed for
asymptotic (already an approximation) purposes,
is the probability that
max (B(t) - L(B)(t)) > q,
where B is the Brownian bridge, and L is a linear
operator taking B to smooth functions. The probability
exceeds the maximum of the probabilities over t, which is
the normal tail integral from hq to infinity. This is
approximately exp(-(hq)^2/2)/(hq*sqrt(2\pi)). If L=0, it
is known that h=2 and the precise answer just drops the
denominator. For other L, the h can be computed, and it
is known that the correction should be of a similar form,
but is not that simple. Even a decent first-order
approximation for large q would be quite useful. The
methods used for L=0 cannot be used here. Note that B is
continuous but not differentiable, and we are interested
in small probabilities, while L(B) is differentiable.
Differentiation is a breaking up procedure, but integration puts it
all together. A quote from Sam Rayburn: "Any donkey can kick down a
barn, but it takes a good carpenter to build one." [Absolutely no
offence meant for the first activity though.]
A differential equation links/relates the two. Performing integration
into an analytical form is as difficult as the complexity of this
relationship. y'' + y' = 1 may be easily solved, but
log(y'' + y')^sin(x y') = 1 is much more difficult to solve than simply
writing out such a differential relationship.
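For the easy one, a computer algebra system does it in a line (a sketch,
assuming sympy); the contrived second equation is exactly the kind of thing
no solver will hand back in closed form.

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# y'' + y' = 1  has the closed-form family  y = C1 + C2*exp(-x) + x
print(sp.dsolve(sp.Eq(y(x).diff(x, 2) + y(x).diff(x), 1), y(x)))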
I too had often wondered why there is no proper mathematical
definition of "degree of hardness" of solvability of differential
equations, focusing on relational complexity. All we have is
categorization into ordinary, partial, non-linear, etc. The arrival of
numerical machine solvers like Runge-Kutta or implicit methods
has rendered such an enquiry perhaps needless... once and for all?
Is there an example available to look at for computing a cumulative
multivariate normal distribution?
--
use mail ät axelvogt dot de