
Float Groebner Basis question (Re: Groebner Basis Computations over Complex Numbers (in Maple))


Gottfried Barthel

Mar 22, 2000
to PENCEMIKE, sing...@mathematik.uni-kl.de
In his recent posting (Date: 21 Mar 2000) to this group, PENCEMIKE
wrote:
>
> I have the following inquiry regarding Groebner basis computations and
> Maple.
>
> The Maple Groebner basis package assumes that the polynomials for
> which the Groebner basis is to be computed have the rationals as their
> coefficient field.
>
> Are there Groebner basis packages (say, in a contributed library, share
> library, etc.) that perform the computations on polynomials having
> coefficients from the field of complex (real) numbers?
>
> More generally, is this a reasonable inquiry? I am a PhD candidate in
> physical chemistry. Part of my dissertation is concerned with solving
> systems of consistent polynomial equations having floating-point
> (real), but not easily rationalized, coefficients. I say "not easily
> rationalized" in the sense that if I convert these equations'
> coefficients to rational numbers, I get mixed results when I attempt
> to compute the Groebner basis: sometimes the procedures work,
> sometimes they don't, and I'm sure it's due to the accuracy of the
> float -> rational conversion. (In the context of my thesis problem, I
> can readily cook up polynomial equations for which the coefficients
> are accurately represented by rational numbers, and the GB
> calculations provide a usable basis. Physically, however, these
> examples are not of much interest.)
>
> In actual practice, how do people deal with these issues?
>
> I would be grateful for any insight that anyone would like to share.
>
> Thanks!
>
> Mike Pence

I don't know for sure, but I doubt very much that any available Maple
package provides the proper tools for float GB computations --
perhaps the improved support for hardware floats in the latest release
may help in the future. There are efforts towards GB computations with
float coefficients, but I do not know the present state of things.
The free system "Singular" (http://www.singular.uni-kl.de/; available
for various UNIX platforms, Win and Mac) that is especially good at GB
computations supports floats. I have never had to use this feature, but
you might wish to give it a try. If the number of coefficients is not
too big and the systems are not too complicated, you could also try to
work with symbolic coefficients.
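
For instance, keeping an uncertain coefficient symbolic avoids the
float-to-rational conversion entirely. A minimal sketch in Python with
SymPy (my own illustration, not one of the systems discussed above; the
polynomials are made up):

```python
from sympy import symbols, groebner

x, y, a = symbols('x y a')

# Leave the uncertain coefficient 'a' symbolic instead of forcing a
# float or rational value; the basis is computed with 'a' as a parameter.
G = groebner([a*x**2 + y - 1, x*y + 1], x, y, order='lex')
print(G.exprs)
```

Specific numerical values can then be substituted into the result
afterwards.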

Concerning the problem with "not easily rationalized" coefficients, if
you increase the accuracy, you usually increase numerators and
denominators, and thus, all intermediate coefficients will increase in
size -- possibly enormously! This may slow down the computation. If you
make a simple test by taking an example that Maple can do, but that is
not immediately done by hand, and replace the integer coefficients by
rationals (say 1 by 11/10 or 99/100), then you can immediately notice
the effect. I think it would be important to understand the role and the
nature of the coefficients in your polynomials.
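
The "not easily rationalized" effect is easy to see directly: a measured
coefficient stored as a binary float converts to an exact rational with
an enormous denominator, while a tolerance-based conversion recovers a
small fraction. A quick illustration in Python (my own sketch, not tied
to any of the systems discussed):

```python
from fractions import Fraction

c = 0.1  # a "measured" coefficient stored as a binary double
print(Fraction(c))                          # exact binary value:
                                            # 3602879701896397/36028797018963968
print(Fraction(c).limit_denominator(1000))  # 1/10, the intended coefficient
```

Which conversion is "right" depends on whether the float is an exact
datum or an approximation of something simpler, which is exactly the
ambiguity behind the mixed results described above.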

The problem of float GB computations seems to be rather complicated, as
the following quote from a recent report shows ("Computer Algebra and
Algebraic Geometry -- Achievements and Perspectives" by G.-M. Greuel,
one of the leaders of the "Singular" project in Kaiserslautern,
http://www.mathematik.uni-kl.de/~zca/Reports_on_ca/29/paper_html/paper.html,
section 8, #4): "A completely open subproblem is the validity resp. error
estimation of Gröbner basis computations with floating point
coefficients."

I'm not sure that this helps much, but perhaps the changed subject will
attract some experts in the field who can give better advice.

Regards from Konstanz,

Gottfried Barthel

Daniel Lichtblau

Mar 23, 2000
to Gottfried Barthel
Gottfried Barthel wrote:
> [...]

> The problem of float GB computations seems to be rather complicated, as
> the following quote from a recent report shows ("Computer Algebra and
> Algebraic Geometry -- Achievements and Perspectives" by G.-M. Greuel,
> one of the leaders of the "Singular" project in Kaiserslautern,
> http://www.mathematik.uni-kl.de/~zca/Reports_on_ca/29/paper_html/paper.html,
> section 8, #4): "A completely open subproblem is the validity resp. error
> estimation of Gröbner basis computations with floating point
> coefficients."
>
> I'm not sure that this helps much, but perhaps the changed subject will
> attract some experts in the field who can give better advice.
>
> Regards from Konstanz,
>
> Gottfried Barthel

I'm certain the statement is true in theory. Let me say a few words
about how Groebner bases are computed with approximate arithmetic in
Mathematica, as it has some bearing on a practical approach (and may
shed light on shortcomings thereof).

First, while computations with machine numbers might be attempted, this
can be remarkably tricky when it comes to recognizing zero. Hence the
preferred approach is through software "bignums". In Mathematica this is
done by a means known as "significance arithmetic." Throughout
computations the software keeps track of estimated precision loss.
Recognizing zero is then just a matter of noticing that all significant
digits are zero.
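
The idea can be caricatured in a few lines (an illustrative Python toy
of my own; Mathematica's actual significance arithmetic is considerably
more refined): carry an error estimate alongside each value, and declare
a result zero when the estimate swamps the value.

```python
# Toy error-tracking arithmetic (illustrative only).
class SigNum:
    def __init__(self, value, err):
        self.value, self.err = value, err

    def __add__(self, other):
        # Absolute error estimates add under addition.
        return SigNum(self.value + other.value, self.err + other.err)

    def __mul__(self, other):
        # First-order error propagation for a product.
        return SigNum(self.value * other.value,
                      abs(self.value) * other.err + abs(other.value) * self.err)

    def is_zero(self):
        # "All significant digits are zero": the error swamps the value.
        return abs(self.value) <= self.err

a = SigNum(1.0000000001, 1e-8)
b = SigNum(-1.0, 1e-8)
print((a + b).is_zero())  # True: the sum is below its error estimate
print((a * b).is_zero())  # False: the product is about -1, well determined
```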

Offhand I can think of a few potential problems. First, the significance
arithmetic model is basically a faster surrogate for full-fledged
interval arithmetic. It might in principle err in precision estimates.
As a practical matter, though, the sorts of computations one does in
Groebner basis construction are quite simple, just addition,
multiplication, and some division. To the best of my knowledge these do
not have trouble with correct estimation of precision.

A second issue is that one might get a "false" zero in that two close
quantities that are not really "identical" (whatever that means in the
context of approximate arithmetic) might cancel to give an apparent
zero. As a simple example of what I mean, compare the polynomials
epsilon*x^5 + x^2 + 1 vs. x^2 + 1. The first has three extra roots, very
large (for small epsilon). In general, roots might get lost due to an
erroneous decision that some lead coefficient is zero.
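
The effect is easy to check numerically (a quick sketch of my own using
numpy's polynomial root-finder): the perturbed polynomial has three
extra roots of magnitude roughly epsilon^(-1/3), and they silently
disappear if the tiny leading coefficient is declared zero.

```python
import numpy as np

eps = 1e-6
# eps*x^5 + x^2 + 1, coefficients listed from highest degree down
r_full = np.roots([eps, 0, 0, 1, 0, 1])
# x^2 + 1: what remains if the tiny leading coefficient is zeroed out
r_trunc = np.roots([1, 0, 1])

print(len(r_full), len(r_trunc))  # 5 2
print(max(abs(r_full)))           # the extra roots are huge, ~ eps**(-1/3)
```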

A third issue is that Groebner basis results are now dependent on the
algorithm specifics. This is in contrast to the exact-arithmetic case.
In some sense this is related to the previous caveat, in that different
approaches might give rise to essentially the same polynomial except in
one case a leading coefficient is zero whereas in the other it is merely
"small" compared to the other coefficients.

Yet another issue is that, while significance arithmetic is generally
preferable to exact arithmetic, it is no match for machine arithmetic
in terms of speed. Moreover for some problems the initial precision
needed in order to get a result (rather than a "catastrophic loss of
precision" error) is unrealistically high.

These various caveats notwithstanding, our experience with this
significance arithmetic approach is that it is fairly reliable when
compared to exact computations. It is used in the version 4 Mathematica
NSolve code, when the solution set is finite. But of course practical
experience is no "proof of correctness" and G.-M. Greuel's statement
still stands.


Daniel Lichtblau
Wolfram Research
(speaking for myself, not my employer)

Richard Fateman

Mar 24, 2000

Daniel Lichtblau wrote:
>
<snip>

> First, while computations with machine numbers might be attempted, this
> can be remarkably tricky when it comes to recognizing zero. Hence the
> preferred approach is through software "bignums". In Mathematica this is
> done by a means known as "significance arithmetic." Throughout
> computations the software keeps track of estimated precision loss.
> Recognizing zero is then just a matter of noticing that all significant
> digits are zero.
>

<snip>

This doesn't make sense to me. Loss of all significant digits means
you don't know what the number is at all. Not that it is the number
zero.

I think that the original question is off base, though. If you
want to find approximations to a solution of a system of polynomial
equations with approximate coefficients, at least those that are
point solutions, there are numerical solution methods that are
likely to be many, many orders of magnitude faster than Groebner
basis calculations. If you have some idea of where the solutions
of interest are, you are truly wasting time to find ALL the
solutions.

For example, from netlib.org one can find the (free) program

hompack/polsys.f (plus dependencies)

for finding all (complex) solutions to a system of n polynomial
equations in n unknowns with real coefficients. It can return solutions
at infinity if requested.
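
POLSYS itself is Fortran, but the underlying idea fits in a short sketch
(Python, my own illustration of the homotopy method, not the HOMPACK
code): deform a start system with known roots into the target system,
correcting each tracked root by Newton steps; the complex "gamma" factor
is the standard trick for keeping paths nonsingular.

```python
import numpy as np

# Homotopy continuation for a single equation f(x) = 0 (illustrative;
# POLSYS handles full systems and can report solutions at infinity).
def f(x):  return x**3 - 2*x + 2
def df(x): return 3*x**2 - 2

gamma = 0.6 + 0.8j                      # fixed "random" complex constant
def g(x):  return gamma * (x**3 - 1)    # start system with known roots
def dg(x): return gamma * 3*x**2

def track(x, steps=200):
    # March t from 0 to 1 in H(x,t) = (1-t)*g(x) + t*f(x),
    # Newton-correcting the tracked root after each step.
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(5):
            x -= ((1 - t)*g(x) + t*f(x)) / ((1 - t)*dg(x) + t*df(x))
    return x

starts = [np.exp(2j*np.pi*k/3) for k in range(3)]  # the roots of x^3 - 1
roots = [track(x) for x in starts]
print([abs(f(r)) for r in roots])  # residuals near zero, one per root
```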

There is, in this computer algebra business, a substantial
problem in applying exact solution methods to approximate
problems when approximate methods, carefully applied, are
much more appropriate.

For example, I have encountered a recently published paper that improves
previous results in polynomial zero-finding and that (for a polynomial
of degree 30) takes 30 seconds per zero. And this is an
improvement over previously published "exact" methods.
The result, however, is approximate.

In the numerical analysis world, more attuned to applications, this is
an absurdly slow computation -- perhaps 1,000 to 10,000 times too slow.

Daniel Lichtblau

Mar 27, 2000

[This response has been through a couple of iterations but is generally
similar to that which I originally made in private e-mail. --dl]


Richard Fateman wrote:
>
> Daniel Lichtblau wrote:
> >
> <snip>
>
> > First, while computations with machine numbers might be attempted, this
> > can be remarkably tricky when it comes to recognizing zero. Hence the
> > preferred approach is through software "bignums". In Mathematica this is
> > done by a means known as "significance arithmetic." Throughout
> > computations the software keeps track of estimated precision loss.
> > Recognizing zero is then just a matter of noticing that all significant
> > digits are zero.
> >
> <snip>
>
> This doesn't make sense to me. Loss of all significant digits means
> you don't know what the number is at all. Not that it is the number
> zero.

You also know its accuracy, in other words, that it is zero to some
number of places. It is true that one can be fooled due to cancellation
(as I noted in the last post). In practice this generally means you are
solving a nearby system because the original one had some degeneracy.


You raise several points that I'd like to address.

First, sparse homotopy methods and the Groebner/eigenvalue method are
both examples, in some sense, of so-called hybrid "symbolic-numeric"
methods. While it is true that these can incur a substantial cost, it is
also the case that they have been used with great success on problems
for which purely numeric methods have not been adequate. (As I was
reminded in private e-mail, another such method might be that of
multipolynomial resultants).
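
The Groebner/eigenvalue method reduces root-finding to linear algebra:
a Groebner basis yields a finite monomial basis of the quotient ring,
and the eigenvalues of a multiplication matrix are coordinates of the
solutions. A tiny hand-worked sketch of my own (Python; the system is
made up):

```python
import numpy as np

# For {x^2 - 2 = 0, y^2 - 3 = 0} the quotient ring has monomial basis
# (1, x, y, xy).  Multiplication by x acts as
#   1 -> x,   x -> 2 (since x^2 = 2),   y -> xy,   xy -> 2y,
# giving the matrix below (column j = image of the j-th basis monomial).
Mx = np.array([[0, 2, 0, 0],
               [1, 0, 0, 0],
               [0, 0, 0, 2],
               [0, 0, 1, 0]], dtype=float)

eig = sorted(np.linalg.eigvals(Mx).real)
print(eig)  # +-sqrt(2), each twice: the x-coordinates of the 4 roots
```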

One disadvantage for the Groebner-based technique is that the complexity
of the symbolic part is not nicely bounded. Offhand I'm not sure how
serious a disadvantage this is in practice. Also the homotopy approach
appears to be better able to handle machine arithmetic, and clearly this
is an advantage. Whether it is overall superior under most circumstances
is not known to me, and I suspect there is no consensus at this time. I
must note that I cannot claim expertise with the sparse homotopy method,
so to some extent my opinion is uninformed.

One place in which (I believe) all current methods fall short is in
getting all real-valued roots to a system that has finitely many
solutions. This appears to be a reasonably important problem in applied
math, not just a curiosity. My guess is that a hybrid method will
ultimately lead the way (I think I have seen an outline of a viable
approach, except that it might require too-high precision to be
workable).

I must emphasize that I do not advocate using these general root-finders
in preference over the Newton-type solvers in cases where individual
roots are desired and reasonable initial approximations are known. (I
wanted to say that nobody advocates such use, but was reminded in effect
that one never knows how software might be misused.) Note, however, that
there are real-world uses of the more general polynomial system
root-finders, such as the one I outlined from the Mathematica NSolve
procedure. Here is an excerpt from a note that came to our technical
support group a few days ago.

"...Recently I developed an algorithm for performing automated magnetic
optics matching deterministically with NSolve being the core
engine...the mathematical complexity of the problem has defied previous
attempts...reduce the problem to 2 simultaneous polynomial equations in
2 variables homogeneous in order...NSolve did a very good job after
that, giving me apparently global solutions almost always equal in
number to the theoretical maximum. I have recently tested it
on-line...and the outcome was very satisfactory... I am
going to the European Center for Nuclear Research (CERN) in Geneva,
Switzerland to share this result with the accelerator physicists and
engineers there. I believe this algorithm will find wide application
again at CERN...."

This is but one example of an application where ALL roots are desired.
Whether the Groebner approach is best possible is debatable, but
certainly it can be adequate in some situations where local root-finders
are not.


Daniel Lichtblau
Wolfram Research
not speaking for my employer
