
s.m.r. Problems for the next century


John McKay

Mar 27, 1997

Following Hilbert, a set of problems is going to be
presented at the ICM next year, intended to have a
guiding influence similar to his.
It would be fun to accumulate a set of problems on
sci.math.research of a similar nature.
Any takers?

JM

--
Deep ideas are simple.
Odd groups are even.
Even simples are not;
and Gal/F2(t)(x^24-x-t) = Mathieu group, M24.


G. A. Edgar

Mar 27, 1997

In article <5hcg0b$n6c$1...@newsflash.concordia.ca>, mc...@alcor.concordia.ca
(John McKay) wrote:

> Following Hilbert, a set of problems is going to be
> presented at the ICM next year, intended to have a
> guiding influence similar to his.
> It would be fun to accumulate a set of problems on
> sci.math.research of a similar nature.
> Any takers?

There was a "new" list of problems collected for a 1974 conference
on the Hilbert problems. It was published by the AMS in the proceedings
volume of that conference, "Mathematical Developments Arising from
Hilbert Problems".


MCKAY john

Mar 28, 1997

In article <edgar-ya02408000...@news.math.ohio-state.edu>
Yes, I am aware of this new list. It is, in part, because of dissatisfaction
with that list that I suggest we try one of our own.

JM


--
Cogito ergo sum aut miror ergo sim?


john baez

Apr 4, 1997

In article <5hcg0b$n6c$1...@newsflash.concordia.ca>, mc...@alcor.concordia.ca
(John McKay) wrote:

> Following Hilbert, a set of problems is going to be
> presented at the ICM next year, intended to have a
> guiding influence similar to his.
> It would be fun to accumulate a set of problems on
> sci.math.research of a similar nature.
> Any takers?

It takes a certain chutzpah to list "Hilbert problems for the
next millennium". Fortunately I am not lacking in that quality....

Hilbert's problems included not only precise conjectures but
also what one might call research programs. Following those lines,
I would propose the following:

1) Develop n-category theory into a convenient tool and use it
to unify proof theory, homotopy theory, differential topology,
singularity theory, topological quantum field theory and representation
theory. A good place to start might be by making precise and proving
the "Tangle Hypothesis," which says that the n-category of n-dimensional
framed tangles in (n+k)-dimensional space is the free k-tuply monoidal
n-category with duals on one object. One should also make precise and
prove the "Stabilization Hypothesis", which says that a k-tuply monoidal
n-category is essentially the same thing as (k+1)-tuply monoidal
n-category when k >= n+2. More ambitiously, one could try to use
these to classify extended topological quantum field theories in
arbitrary dimensions.

Reference:

Higher-dimensional algebra and topological quantum field theory, by
John Baez and James Dolan, Jour. Math. Phys. 36 (1995), 6073-6105.

2) Give a unified explanation of all contexts in which Dynkin diagrams
and ADE classifications arise. In particular, give a conceptual explanation
of the role played by quiver representations in the theory of the
canonical bases for simple Lie algebras.

Reference:

M. Hazewinkel, W. Hesselink, D. Siersma, and F. D. Veldkamp, The ubiquity of
Coxeter-Dynkin diagrams (an introduction to the ADE problem), Nieuw Arch.
Wisk. 25 (1977), 257-307.

3) Give a short conceptual proof of the classification of finite simple
groups.

Reference:

Ron Solomon, On finite simple groups and their classification, AMS Notices
Vol. 42, February 1995, 231-239.

Michael Aschbacher, The finite simple groups and their classification,
New Haven : Yale University Press, 1980.

Daniel Gorenstein, Finite simple groups: an introduction to their
classification, New York : Plenum Press, c1982.

4) Determine whether or not P equals NP.

5) Determine whether or not the following widely studied physical
theories can be given a satisfactory mathematically rigorous
formulation:

a) the classical electrodynamics of point particles
b) quantum electrodynamics
c) the Standard Model

References:

Stephen Parrott, Relativistic electrodynamics and differential geometry,
New York : Springer-Verlag, c1987.

James Glimm, Arthur Jaffe, Quantum physics : a functional integral point of
view, New York : Springer-Verlag, c1981.

Vincent Rivasseau, From perturbative to constructive renormalization,
Princeton, N.J. : Princeton University Press, c1991.

Noam Elkies

Apr 8, 1997

In article <5i46vd$b...@charity.ucr.edu>, john baez <ba...@math.ucr.edu> wrote:
>In article <5hcg0b$n6c$1...@newsflash.concordia.ca>, mc...@alcor.concordia.ca
>
>It takes a certain chutzpah to list "Hilbert problems for the
>next millennium". Fortunately I am not lacking in that quality....

I'll comment on these suggestions in reverse order...

>5) Determine whether or not the following widely studied physical
>theories can be given a satisfactory mathematically rigorous
>formulation:
>
> a) the classical electrodynamics of point particles
> b) quantum electrodynamics
> c) the Standard Model

The difficulties with (b) and (c) are well known, and probably present
problems worthy of succeeding Hilbert's. What's the difficulty with (a)?

>4) Determine whether or not P equals NP.

>3) Give a short conceptual proof of the classification of finite simple groups.

Excellent.

>2) Give a unified explanation of all contexts in which Dynkin diagrams
>and ADE classifications arise.

Good, though a "unified explanation of all contexts" is probably
too tall an order...

> In particular, give a conceptual explanation
>of the role played by quiver representations in the theory of the
>canonical bases for simple Lie algebras.

>1) Develop n-category theory into a convenient tool and use it
>to unify proof theory, homotopy theory, differential topology,
>singularity theory, topological quantum field theory and representation
>theory. A good place to start might be by making precise and proving
>the "Tangle Hypothesis," which says that the n-category of n-dimensional
>framed tangles in (n+k)-dimensional space is the free k-tuply monoidal
>n-category with duals on one object. One should also make precise and
>prove the "Stabilization Hypothesis", which says that a k-tuply monoidal
>n-category is essentially the same thing as (k+1)-tuply monoidal
>n-category when k >= n+2. More ambitiously, one could try to use
>these to classify extended topological quantum field theories in
>arbitrary dimensions.

...But this, I think, is crossing the line from chutzpah to polemic,
if not parochialism. Though it may be heresy to some, not all
important mathematics is categorical, and I daresay that not much
of that mathematics is so fundamentally categorical that it will benefit
much from n-category theory. This is not to denigrate the study of
n-categories, which might conceivably yield the kind of mathematical
grand unification you envision; but surely the field has not yet
reached the point where its fundamental problem/hope belongs in the
same neo-Hilbert list as P=NP or the classification of simple groups,
let alone at the top of that list.

As long as we're proposing Hilbert-level research problems, here's
a proposal for the top two of modern number theory:

I) Develop an understanding of zeta and L-functions in sufficient depth
and generality to prove the Riemann hypotheses for all such functions
for which an R.H. holds.

[If I remember right, Riemann's original Hypothesis was on Hilbert's list;
but we have many more zeta and L-functions now, and are still arguably no
closer to a proof even for Classic RH. N.B. I do not include the Langlands
program, Beilinson conjectures etc. on this short list because they are
subsumed by the most general arithmetic Riemann hypotheses; but even
a proof of the Riemann and Artin conjectures for zeta and L-functions
of number fields would be a significant legacy of 21st-century math.]

II) Obtain general, effective results as envisioned by Lang/Vojta
for Diophantine equations and inequalities. [See S.Lang: Old and new
conjectured diophantine inequalities, Bull. AMS 23 (1990), 37-75.]

--Noam D. Elkies (elk...@math.harvard.edu)
Dept. of Mathematics, Harvard University


JoachimHagemann

Apr 9, 1997

G. A. Edgar wrote:
> (John McKay) wrote:
> > Following Hilbert, a set of problems is going to be
> > presented at the ICM next year [...].

> > Any takers?
>
> There was a "new" list of problems collected for a 1974 conference
> on the Hilbert problems.

In 1974, Hilbert's deep understanding of future research directions
was demonstrated again. Since a 'Hilbert' is not available,
this challenge to the mathematical community seems reasonable.
Stimulated by John Baez's posting, I dare to post my own:
he started with physics and arrived at categorical universal
algebra; I went the road in the other direction.

1) --Goldbach conjecture--
Prove that every even number greater than 2 is the sum of two primes.
(A quick numerical check is sketched after this list of problems.)


2) --7D vector analysis--
Develop a vector analysis for the 7D cross product algebra
(which is derived from the octonions in the same way as the
3D cross product algebra is derived from the quaternions)
and use it in physics and engineering in a way which is
similar to the traditional 3D vector analysis.
Comment:
This should include an analog of algebraic Grassmann-type
theory in higher dimensions, an analog of Cartan's alternating
differential forms, and an analog of Stokes' theorem.
Instead of 'oriented' manifolds, 'unoriented' manifolds should
play a fundamental role; see [Ebbinghaus, ...: Numbers,
10.3 and 11 of the second or third edition], where the close
connection with topological K-theory is surveyed.


3) --string theory based on compact 'unoriented' manifolds--
Present (8+2)D and (24+2)D string theories admit only 'oriented'
compact manifolds, which seems too limited. So create string
theories where 'unoriented' compact manifolds are admitted.
Comment:
Hopefully, they turn out to be 8D and 24D: getting rid of those
ingenious two extra dimensions would be quite welcome.
Recently, Elkies & Gross gave the first 'octonion'
construction of the 24D Leech lattice. Their result may
be useful for merging the various string theories.


4) --categorical algebraic geometry--
Create a 'categorical algebraic geometry' within the
framework of 'algebraic theories'.
Comment 1:
'Commutative' rings have the property that the product
of two finitely generated ideals is again a finitely
generated ideal. General associative rings certainly
fail to have this property, but PI rings may have it.
Trivially, the commutator of two finitely generated
abelian groups is again finitely generated. Nilpotent
groups may have it. The 'commutator theory of universal
algebra' unifies ideal multiplication and the commutator
theory of groups and Lie algebras, and extends it to
any 'congruence modular equational class of universal
algebras'. Recently, category theory (especially the
branch 'algebraic theories', which absorbs universal
algebra into category theory) succeeded in capturing
'modularity' and hence in creating a categorical
commutator theory.
Comment 2:
'Continuous' lattices generalize the notion of
'algebraic' lattices, where any element is a join
of finitely generated elements.
Universal algebra and algebraic theories
admit only 'algebraic' congruence lattices:
a 'generalized concept of algebraic theories'
should be devised where the more general
'continuous' lattices are admitted.
Of course, the 'commutator theory of algebraic
theories' and 'categorical algebraic geometry'
should be lifted to this generalized setting.
Connections to the 'Riemann hypothesis on
the ZEROS of the zeta-function'???
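
Coming back to problem 1: here is the quick numerical check mentioned there,
a minimal Python sketch with helper names of my own choosing, verifying the
conjecture for small even numbers.

def is_prime(n):
    # trial division; fine for the small n used here
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_pair(n):
    # return a pair of primes (p, n - p) summing to n, or None
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# check every even number from 4 up to 10000
assert all(goldbach_pair(n) is not None for n in range(4, 10001, 2))
print("Goldbach verified for all even n <= 10000")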


Tal Kubo

Apr 9, 1997

P=NP, RH, physics and so on already have minor industries
associated with them, and are broadly recognized as important
and fruitful problems. Would they really benefit from inclusion
on a neo-Hilbert list? Perhaps the pulpit would be better used
to call attention to less traditional directions whose fruitfulness
is yet to be recognized.


Dan Asimov [allan 3/96]

Apr 10, 1997

For "problems for the next century" I'd like to see some old chestnuts included.
And I'd prefer that most of the included problems be on such a level that
undergraduate mathematics majors could understand their statements.

I propose as exemplars of some of Mathematics' most profound areas of ignorance:

1. The remaining Poincare' conjectures:

a) Every closed simply-connected 3-manifold is homeomorphic to the 3-sphere.

b) Every smooth manifold homeomorphic to the 4-sphere is diffeomorphic to the standard one.

2. The Riemann hypothesis: The zeroes of the zeta-function that lie in the
strip 0 < Re(z) < 1 all lie on the line Re(z) = 1/2.

3. The Collatz problem: Iterating f(n) = n/2 for even n, f(n) = 3n + 1 for odd n,
starting from any positive integer, eventually leads to 4 -> 2 -> 1.
(A small numerical check is sketched after this list.)

4. The bellows conjecture: Any closed flexible polyhedron in R^3 maintains
the same volume as it flexes.

5. The 4-color theorem and Fermat's last theorem: Come up with vastly
simplified proofs, if possible.

6. Kissing and Kepler's conjectures:

a) The kissing number of 3-spheres in 4-space is 24.

b) The face-centered cubic sphere-packing in 3-space is as dense as any
non-lattice packing.

7. The continuum hypothesis: Come up with a natural, palatable axiom which
when added to ZF + AC will yield a proof or disproof of CH.

8. Improved axioms for probability theory, so that events such as
([0,1) - {x})^aleph are measurable (where x lies in [0,1) and aleph is a
cardinal number greater than the continuum).

9. Does the 6-sphere admit a complex structure?

10. Percolation: Color each lattice point of Z^d red independently with
probability p. Let R(d) = { p : Prob( infinite cluster of red ) = 1 }.
Does R(d) contain its infimum?
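
Here is the small numerical check mentioned under problem 3 (Collatz): a
minimal Python sketch, with an illustrative function name and an arbitrary
step cap, confirming that every starting value up to 100000 reaches 1.

def collatz_reaches_one(n, max_steps=10**6):
    # iterate n -> n/2 (n even), n -> 3n + 1 (n odd) until we hit 1
    for _ in range(max_steps):
        if n == 1:
            return True
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    return False

assert all(collatz_reaches_one(n) for n in range(1, 100001))
print("Every starting value n <= 100000 reaches 4 -> 2 -> 1")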


--Dan Asimov


john baez

Apr 10, 1997

In article <5iccpc$q0k$1...@news.fas.harvard.edu>,
Noam Elkies <elk...@ramanujan.math.harvard.edu> wrote:
>In article <5i46vd$b...@charity.ucr.edu>, john baez <ba...@math.ucr.edu> wrote:

>>5) Determine whether or not the following widely studied physical
>>theories can be given a satisfactory mathematically rigorous
>>formulation:
>>
>> a) the classical electrodynamics of point particles
>> b) quantum electrodynamics
>> c) the Standard Model
>
>The difficulties with (b) and (c) are well known, and probably present
>problems worthy of succeeding Hilbert's. What's the difficulty with (a)?

Problem (a) is even more of an embarrassment to the mathematical
physics community than (b) and (c), because we've had so much longer
to solve it, and it seems easier at first glance, but apparently
it remains unsolved. (More precisely, it remains mired in a murk of
controversy.)

While taking a different mathematical guise, the physics problems
lurking in (a) are very similar to those lurking in (b) and (c).
In all 3 cases, the basic problem is that the electric field of
a charged point particle approaches infinity as the distance to that
particle approaches zero. This has at least 2 bad consequences. First,
a naive computation indicates that the total energy of the field in
the vicinity of the particle is infinite. Thus a charged particle
will have infinite mass unless we pull some clever tricks (called
"renormalization"). Second, the acceleration of the charged particle
due to its own electric field is ill-defined, because its own field
is infinite (or undefined) where it happens to be! We can't reasonably
say that each particle interacts only with the field produced by the
*other* particles, so we need some clever trick to get a well-defined
answer for the "self-interaction".

Lorentz spent a lot of time working on problem (a), and later
Dirac also spent a lot of time on it, in the hopes that solving (a)
would help with (b). One result of their cogitations was the
"Lorentz-Dirac" equation, a modification of the law describing
the acceleration of a charged particle in an external field, designed
to handle the subtleties of self-interaction. Parrott's book:

Stephen Parrott, Relativistic Electrodynamics and Differential
Geometry, Springer-Verlag, 1987

presents a careful "derivation" of this equation. However, this
equation is very odd: it is a *3rd-order* differential equation
for the position of the charged particle as a function of time.
This seems to lead to the strange (unobserved) phenomenon of
"preacceleration", in which a charged particle starts accelerating
*before* an external field is applied. Another oddity: consider
two particles of equal mass and opposite charge that begin at
rest. The Lorentz-Dirac equation predicts that they will *not*
collide; instead, as t -> infinity, they will wind up moving
away from each other at velocities that approach the speed of
light. Various alternatives to the Lorentz-Dirac approach have
been proposed but none seems to be clearly what one wants.

So the problem is to set up a bunch of axioms for what would be
a "good" theory of the classical electrodynamics, and to see if
you can find a model of all these axioms, or only various subsets of
them. I don't know how important this will turn out to be --- possibly
not very important; I am just pissed off that nobody understands this
stuff yet.

>>2) Give a unified explanation of all contexts in which Dynkin diagrams
>>and ADE classifications arise.
>
>Good, though a "unified explanation of all contexts" is probably
>too tall an order...

We can afford to be bold here, since we aren't actually going to
solve the problems ourselves. Hilbert was much more bold: one of his
problems was to axiomatize physics.

>>1) Develop n-category theory into a convenient tool and use it
>>to unify proof theory, homotopy theory, differential topology,
>>singularity theory, topological quantum field theory and representation
>>theory. A good place to start might be by making precise and proving
>>the "Tangle Hypothesis," which says that the n-category of n-dimensional
>>framed tangles in (n+k)-dimensional space is the free k-tuply monoidal
>>n-category with duals on one object. One should also make precise and
>>prove the "Stabilization Hypothesis", which says that a k-tuply monoidal
>>n-category is essentially the same thing as (k+1)-tuply monoidal
>>n-category when k >= n+2. More ambitiously, one could try to use
>>these to classify extended topological quantum field theories in
>>arbitrary dimensions.
>
>...But this, I think, is crossing the line from chutzpah to polemic,
>if not parochialism. Though it may be heresy to some, not all
>important mathematics is categorical, and I daresay that not much
>of that mathematics is so fundamentally categorical that it will benefit
>much from n-category theory. This is not to denigrate the study of
>n-categories, which might conceivably yield the kind of mathematical
>grand unification you envision; but surely the field has not yet
>reached the point where its fundamental problem/hope belongs in the
>same neo-Hilbert list as P=NP or the classification of simple groups,
>let alone at the top of that list.

Chutzpah I admit to, and polemicism too, but it's hard to see how
trying to unify proof theory, homotopy theory, differential topology,
singularity theory, topological quantum field theory and representation
theory suffers from "parochialism". Perhaps you're upset that I left out
algebraic number theory? I could have included it, because Kapranov
has argued that the Langlands program is deeply related to topological
quantum field theory, but I didn't want to get carried away. :-)

In short, I really think n-category theory will offer a powerful unifying
perspective on the subjects I listed, and I think I even know roughly how.
So while n-category theory is not a "classic problem" like the Riemann
hypothesis or P = NP, I think it's going to be a very broadly important
direction of research, and will drastically change our perspective
on what math is like. I may be wrong, of course, and we could have some
fun arguing about it.

I won't present a long argument for my view unless you ask, since
I suspect everyone is already sick of me talking about n-categories.
I'll just say one thing: while perhaps only a little mathematics is
fundamentally categorical, a whole lot more is fundamentally n-categorical.

>As long as we're proposing Hilbert-level research problems, here's
>a proposal for the top two of modern number theory:
>
>I) Develop an understanding of zeta and L-functions in sufficient depth
>and generality to prove the Riemann hypotheses for all such functions
>for which an R.H. holds.
>
>[If I remember right, Riemann's original Hypothesis was on Hilbert's list;
>but we have many more zeta and L-functions now, and are still arguably no
>closer to a proof even for Classic RH. N.B. I do not include the Langlands
>program, Beilinson conjectures etc. on this short list because they are
>subsumed by the most general arithmetic Riemann hypotheses; but even
>a proof of the Riemann and Artin conjectures for zeta and L-functions
>of number fields would be a significant legacy of 21st-century math.]

This sounds good, and the main reason I didn't include something like
this is that I didn't know enough to say the right thing. For example,
I had no idea that the Langlands program had been "subsumed". Could
you say a word or two about this, bearing in mind that I don't understand
anything about the Langlands program? (Maybe the subsuming problem
will be easier to understand, at least superficially? One can always
hope....)

>II) Obtain general, effective results as envisioned by Lang/Vojta
>for Diophantine equations and inequalities. [See S.Lang: Old and new
>conjectured diophantine inequalities, Bull. AMS 23 (1990), 37-75.]

That also sounds good; that article seemed packed with amazing stuff.


Noam Elkies

Apr 11, 1997

In article <5ik7il$e...@charity.ucr.edu>, john baez <ba...@math.ucr.edu> wrote:
>In article <5iccpc$q0k$1...@news.fas.harvard.edu>,
>Noam Elkies <elk...@ramanujan.math.harvard.edu> wrote:
>>In article <5i46vd$b...@charity.ucr.edu>, john baez <ba...@math.ucr.edu> wrote:

>>>5) Determine whether or not the following widely studied physical
>>>theories can be given a satisfactory mathematically rigorous
>>>formulation:

>>> a) the classical electrodynamics of point particles

>>The difficulties with (b) and (c) are well known, and probably present
>>problems worthy of succeeding Hilbert's. What's the difficulty with (a)?

>[...]

>In all 3 cases, the basic problem is that the electric field of
>a charged point particle approaches infinity as the distance to that
>particle approaches zero. This has at least 2 bad consquences. First,
>a naive computation indicates that the total energy of the field in
>the vicinity of the particle is infinite. Thus a charged particle
>will have infinite mass

I thought we're in a classical universe without mass-energy equivalence.
So one can have infinite energies around as long as we cannot actually
harness them.

>We can't reasonably say that each particle interacts only
>with the field produced by the *other* particles,

Why not? In a classical universe that's exactly what we do,
just as with gravitation.

I suspect that you have something more sophisticated in mind
than I imagine when you refer to "classical electrodynamics";
something which tries at least to deal with classical relativity
including E=mc^2. That the mathematics of "point particles"
becomes problematic as we try to account for more physics is
probably to be expected, because we don't expect "point particles"
to exist at that level, so we no longer have any reason to expect
a mathematically consistent and physically sensible theory of their
behavior.

>We can afford to be bold here, since we aren't actually going to
>solve the problems ourselves. Hilbert was much more bold: one of his
>problems was to axiomatize physics.

True enough.

>>...But this, I think, is crossing the line from chutzpah to polemic,
>>if not parochialism. [...]

>Chutzpah I admit to, and polemicism too, but it's hard to see how
>trying to unify proof theory, homotopy theory, differential topology,
>singularity theory, topological quantum field theory and representation
>theory suffers from "parochialism". Perhaps you're upset that I left out
>algebraic number theory? [...]

No. It is not the grand scope of the proposal that is parochial
but the expectation that the mathematics of your own parish will
realize it -- that it is the crop of your own garden that will
become a "very broadly important direction of research [that] will
drastically change our perspective of what math is like".
But then perhaps an artist (including a mathematician or
mathematical physicist) needs to believe, even against the
evidence, in the fundamental significance of his/her work
to produce anything of consequence.

>>I) Develop an understanding of zeta and L-functions in sufficient depth
>>and generality to prove the Riemann hypotheses for all such functions
>>for which an R.H. holds.
>>
>>[If I remember right, Riemann's original Hypothesis was on Hilbert's list;
>>but we have many more zeta and L-functions now, and are still arguably no
>>closer to a proof even for Classic RH. N.B. I do not include the Langlands
>>program, Beilinson conjectures etc. on this short list because they are
>>subsumed by the most general arithmetic Riemann hypotheses; but even
>>a proof of the Riemann and Artin conjectures for zeta and L-functions
>>of number fields would be a significant legacy of 21st-century math.]

>This sounds good, and the main reason I didn't include something like
>this is that I didn't know enough to say the right thing. For example,
>I had no idea that the Langlands program had been "subsumed". [...]

I've been asked about this in e-mail too, and on re-reading the above I
see that I did not state it clearly. The existence of various L-functions
as meromorphic functions satisfying functional equations is an essential
component of the Langlands program, and I imagine if we can do that the
rest will not be far behind; and understanding these functions should
certainly include an understanding of their special values (values at
suitable integers), which are conjecturally given by Beilinson-style
conjectures. Then there are the p-adic analogues of L-functions, obtained
by interpolating their normalized values at integers, etc.

john baez

Apr 11, 1997

In article <5igveo$l87$1...@news.fas.harvard.edu>,
Tal Kubo <ku...@abel.math.harvard.edu> wrote:
>Perhaps the pulpit would be better used to call attention to less
>traditional directions whose fruitfulness is yet to be recognized.

Sounds good. Like what? (I guess n-categories count as a less
traditional direction, but there should be many others.)

Robert Israel

Apr 11, 1997

In article <5ilica$edg$1...@news.fas.harvard.edu>, elk...@ramanujan.math.harvard.edu (Noam Elkies) writes:

|> I suspect that you have something more sophisticated in mind
|> than I imagine when you refer to "classical electrodynamics";
|> something which tries at least to deal with classical relativity
|> including E=mc^2. That the mathematics of "point particles"
|> becomes problematic as we try to account for more physics is
|> probably to be expected, because we don't expect "point particles"
|> to exist at that level, so we no longer have any reason to expect
|> a mathematically consistent and physically sensible theory of their
|> behavior.

Classical electrodynamics really requires special relativity, because
Maxwell's equations are not invariant under Galilean transformations. It is
"classical" in the sense of "not quantum".

There's a nice elementary discussion of the problems of electromagnetic mass
and energy in the Feynman Lectures on Physics, Vol. II, Chap. 28.

Robert Israel isr...@math.ubc.ca
Department of Mathematics (604) 822-3629
University of British Columbia fax 822-6074
Vancouver, BC, Canada V6T 1Y4


Bill Wood

Apr 11, 1997

john baez wrote:
>
> >>5) Determine whether or not the following widely studied physical
> >>theories can be given a satisfactory mathematically rigorous
> >>formulation:
> >>
> >> a) the classical electrodynamics of point particles
> >> b) quantum electrodynamics
> >> c) the Standard Model
> >
> >The difficulties with (b) and (c) are well known, and probably present
> >problems worthy of succeeding Hilbert's. What's the difficulty with (a)?

> So the problem is to set up a bunch of axioms for what would be
> a "good" theory of the classical electrodynamics, and to see if
> you can find a model of all these axioms, or only various subsets of
> them. I don't know how important this will turn out to be --- possibly
> not very important; I am just pissed off that nobody understands this
> stuff yet.

I thought the Feynman-Wheeler "action-at-a-distance" theory "solved"
the self-energy problem at the classical level?? No?

I always wondered whether the problem didn't have something to do
with the unphysical q --> 0 limit in the classical operational
definition of electric field.

Best regards,

Bill Wood

--
William A. Wood, Ph.D. | TECHNOLOGY SERVICE CORPORATION |
Senior Systems Engineer | Radar Systems Group |
tel. 812.336.7576 | 116 West 6 Street, Suite 200 |
fax 812.336.7735 | Bloomington, Indiana 47404 |
----------------------------------------------------------
http://www.tsc.com/ http://photonics.crane.navy.mil/ |
----------------------------------------------------------


Pertti Lounesto

Apr 12, 1997

isr...@math.ubc.ca (Robert Israel) writes:
> Classical electrodynamics really requires special relativity, because
> Maxwell's equations are not invariant under Galilean transformations.
> It is "classical" in the sense of "not quantum".

Classical electrodynamics not only requires special relativity.
Historically, classical electrodynamics implied special relativity.
From the Maxwell equations Lorentz derived his kinematical laws,
by demonstrating, in 1903, that the Maxwell equations are covariant
under the Lorentz transformations. In all fairness, it should be
mentioned that Lorentz transformations of space-time events were
already discussed before by Larmor, in 1900.
--
Pertti Lounesto http://www.math.hut.fi/~lounesto


ilias kastanas 08-14-90

Apr 12, 1997

In article <334EAA...@tsc.com>, Bill Wood <bi...@walt.tsc.com> wrote:
@john baez wrote:
@>
@> >>5) Determine whether or not the following widely studied physical
@> >>theories can be given a satisfactory mathematically rigorous
@> >>formulation:
@> >>
@> >> a) the classical electrodynamics of point particles
@> >> b) quantum electrodynamics
@> >> c) the Standard Model
@> >
@> >The difficulties with (b) and (c) are well known, and probably present
@> >problems worthy of succeeding Hilbert's. What's the difficulty with (a)?
@
@> So the problem is to set up a bunch of axioms for what would be
@> a "good" theory of the classical electrodynamics, and to see if
@> you can find a model of all these axioms, or only various subsets of
@> them. I don't know how important this will turn out to be --- possibly
@> not very important; I am just pissed off that nobody understands this
@> stuff yet.


@I thought the Feynman-Wheeler "action-at-a-distance" theory "solved"
@the self-energy problem at the classical level?? No?


No... according to Feynman, at least. "Retarded" action of
one part of an accelerating electron on another yields radiation
resistance containing Lorentz's (d/dt)^3 x term and all -- in
particular, the diverging one! Dirac noticed that a combination of advanced
and retarded Lienard-Wiechert potentials produces the 'right' answer, and
F/W gave this a new twist: e's action on the world is retarded, the world's
action on e is advanced!

There are other approaches too. Some of them predict things that
haven't been observed, but of course this has not stopped anybody! However,
none of them seems to extend to a quantum theory... and _that_ has. A
pity; even a non-quantum effort would be interesting.


@I always wondered whether the problem didn't have something to do
@with the unphysical q --> 0 limit in the classical operational
@definition of electric field.


In a sense. EM is a "continuous" theory somehow trying to
accommodate "point" particles. E.g. the simple-minded self energy of the
electron is the same formula as for assembling a macroscopic sphere
of charge. Ascribing energy to the electromagnetic field seems
unavoidable... and does not quite jell with particles.


Incidentally, here is a Wheeler/Feynman flight of fancy on the
question of why all electrons have the same mass, charge, etc.: there
is only one electron. Busy critter, threading its way.


Ilias


C. T. McMullen

Apr 12, 1997

In article <E8FMC...@research.att.com>,
Dan Asimov [allan 3/96] <d...@research.att.com> wrote:
>For "problems for the next century" I'd like to see some old chestnuts included.
>4. The bellows conjecture: Any closed flexible polyhedron in R^3 maintains
> the same volume as it flexes.

The bellows conjecture was recently solved in the affirmative by
Connelly et al. The proof is to show by algebra that the
volume lies in a certain countable subfield of the reals, so
it cannot change continuously.

>6. b) The face-centered cubic sphere-packing in 3-space is as dense as any
> non-lattice packing.

Hsiang's proof (or "strategy of proof", as it is called in
Math Reviews) of the Kepler conjecture appeared in
Internat. J. Math. 4 (1993).


Ilias Kastanas

Apr 12, 1997

In article <E8FMC...@research.att.com>,
Dan Asimov [allan 3/96] <d...@research.att.com> wrote:
@For "problems for the next century" I'd like to see some old chestnuts included.

@And I'd prefer that most of the included problems be on such a level that
@undergraduate mathematics majors could understand their statements.

Hmm... A student got to class late and saw on the board statements
of Goldbach, Collatz, P=?NP, ... Next time he told his professor:
"That homework you assigned... I solved two of the problems, but I just
cannot do the third one!"


...
@4. The bellows conjecture: Any closed flexible polyhedron in R^3 maintains
@ the same volume as it flexes.

What is the intended meaning of "flexible"? (Surely more than edges
maintaining their lengths... Is it faces being rigid?)


...
@8. Improved axioms for probability theory, so that events such as
@ ([0,1) - {x})^aleph are measurable (where x lies in [0,1) and aleph is a
@ cardinal number greater than the continuum).

Say X carries the product measure mu of aleph copies of J = [0, 1)
(Lebesgue measure). Then sets (J-{x})^aleph are not mu-measurable; but
there is an extension of mu making them measurable (and many others, e.g.
with a countable set C in place of {x}). The question is, does this do the
job for probability theory?

To construct the Wiener process (Brownian motion) we can set up
measures on R^n's (by integrating Gaussian densities) and arrive at a measure
on R^[0, inf), a la Kolmogorov. Unfortunately, the set of continuous
functions isn't then measurable, so we cannot proceed; fortunately, we can
backtrack and use R^Q instead -- at the cost of a sizable amount of work.
Theory says we could have extended the measure and made the set measurable
... but in practice this doesn't seem to be a viable alternative.

So one wonders whether such matters call for axiomatics or for
something else.

@10. Percolation: Color each lattice point of Z^d red independently with
@ probability p. Let R(d) = { p : Prob( infinite cluster of red ) = 1 }.
@ Does R(d) contain its infimum?


Interesting. One usually sees first-passage percolation. Is there
background to this problem?


Ilias

john baez

Apr 13, 1997

In article <5ilica$edg$1...@news.fas.harvard.edu>,
Noam Elkies <elk...@ramanujan.math.harvard.edu> wrote:

>I suspect that you have something more sophisticated in mind
>than I imagine when you refer to "classical electrodynamics";
>something which tries at least to deal with classical relativity
>including E=mc^2.

Right. "Classical" means a couple of things, and in speaking of
the "classical electrodynamics of point particles" people use it
to mean "non-quantum", not "non-relativistic". A good example is
the 641-page tome "Classical Electrodynamics" by Jackson, through
which every physics grad student dutifully plods.

>That the mathematics of "point particles"
>becomes problematic as we try to account for more physics is
>probably to be expected, because we don't expect "point particles"
>to exist at that level, so we no longer have any reason to expect
>a mathematically consistent and physically sensible theory of their
>behavior.

Right. Of course, the quantum theory of charged point particles ---
quantum electrodynamics --- suffers from a lot of the same problems
as the classical theory. If quantum electrodynamics were known to be
consistent we could probably assuage our guilt for having never figured
out if the theory it's a quantization of is consistent. But, alas, we
don't know if quantum electrodynamics is consistent. Lately, the
mathematical physicists' consensus appears to be that it's *not*,
since it's not asymptotically free at high momenta.

Eventually we may see that point particles are a fiction and that
everything is made of strings or something. Even so, the problems
I listed will be of some interest, because we use

a) the classical electrodynamics of point charges,
b) quantum electrodynamics, and
c) the Standard model

so much that it'd be a shame to never work out whether they were
consistent theories!

>>Chutzpah I admit to, and polemicism too, but it's hard to see how
>>trying to unify proof theory, homotopy theory, differential topology,
>>singularity theory, topological quantum field theory and representation
>>theory suffers from "parochialism". Perhaps you're upset that I left out
>>algebraic number theory? [...]
>
>No. It is not the grand scope of the proposal that is parochial
>but the expectation that the mathematics of your own parish will
>realize it -- that it is the crop of your own garden that will
>become a "very broadly important direction of research [that] will
>drastically change our perspective of what math is like".
>But then perhaps an artist (including a mathematician or
>mathematical physicist) needs to believe, even against the
>evidence, in the fundamental significance of his/her work
>to produce anything of consequence.

I agree that I have the need to feel I'm doing something "fundamental"
to get up the energy to work on stuff. But I don't think the claims
above reflect parochialism on my part --- probably just arrogance or
stupidity or a desire to stir up trouble or something.

Sometimes you see someone who has spent decades studying, say, hyperfinite
type II Moufang loops, and who, driven somewhat crazy by overspecialization,
comes to believe that hyperfinite type II Moufang loops are the answer to
all the world's problems, from the Poincare conjecture to winning the
lottery. I'm just saying: it's not as if I spent years becoming a
specialist in n-category theory and then got the crazy idea that my
speciality might be good for studying all the different subjects listed
above. On the contrary, it was in trying to understand the relationships
between certain problems in those subjects that I decided I needed to
learn about n-category theory, and then became convinced it would be useful.

Of course, it's possible that the very process of studying n-categories
has driven me nuts --- I wouldn't be the first! --- but a number of seemingly
reasonable mathematicians seem to agree that this concept, far from being
of purely local importance, should be broadly useful, just as were sets
and categories (the cases n = 0 and n = 1).

Also, just to be clear, I don't think n-category theory will somehow solve
all the problems of all those subjects or turn them all into the same
subject. I just think it'll help solve *some* problems in these subjects,
by making precise the relations between them.

Here's the basic idea:

1) Proof theory. A lot of modern logic involves the study of cartesian
closed categories; examples include Boolean algebras, Heyting algebras,
the category of sets, and more general topoi. The common thread here is
the trick of thinking of an "implication" P -> Q as a morphism from P to Q.
Sometimes it's also good to think of a "proof" that P -> Q as a morphism
F: P -> Q. But when people start studying proof theory, they
get interested in tricks for shortening proofs and otherwise modifying
proofs, like "cut elimination". Sometimes it turns out to be good to
think of a proof as analogous to a "path" from P to Q, and the process
of modifying a proof as analogous to a "homotopy" between paths.

So in addition to morphisms F: P -> Q which are sequences of statements
"leading from P to Q", one has 2-morphisms T: F => G, which are sequences
of proofs "leading from F to G". There are clearly higher morphisms
as well, but to talk about these, we need the language of n-categories
(or at least something awfully like it).

Andre Joyal has recently discovered some relations between cartesian
closed categories and n-categories that might make this business a lot
easier than I would've thought.

2) Homotopy theory. This is actually the thing that got n-category
theory going in the first place. We can associate to any space X an
n-category called its "fundamental n-groupoid", which encodes the
whole homotopy n-type of the space. The objects of this n-category
are points in X, the morphisms are (roughly) paths in X, the 2-morphisms
are (roughly) paths of paths, and so on, where we take homotopy classes
at the top level.

Now an n-groupoid is a special sort of n-category in which, roughly
speaking, every j-morphism has an inverse. So we can think of homotopy
theory as a special branch of n-category theory. Of course this does
not in itself help us do homotopy theory! What it does is suggest how
to extend techniques from homotopy theory to other branches of math.
For example, Craig Squier has done nice stuff where he starts with a
logic problem involving "rewriting systems", associates to it a category
using the ideas in 1) above, turns that into topological space using a
well-known trick, and then solves the problem he's interested in using
algebraic topology!

3) Differential topology. As I noted in my first post on this thread,
there is a slightly vague conjecture out there which says that framed
n-manifolds embedded in (n+k)-dimensional space form a certain n-category
with a nice algebraic description. The idea here is that the objects
are points, the morphisms are 1-dimensional curves going between points,
the 2-morphisms are surfaces going between curves and so on. This
sort of n-category is not quite an n-groupoid, it's something weaker
which I call an "n-category with duals". So far these are only understood
for rather low n. For example, the thesis of my student Laurel Langford
contains a purely algebraic description of the isotopy classes of embeddings
of surfaces in 4-dimensional space: the case n = k = 2.

If we could understand n-categories with duals more generally and make
the relation between them and n-groupoids more precise, we could give a
nice purely algebraic proof that the stable homotopy groups of spheres
are the framed cobordism groups of a point. Of course we already know
this, but the new proof would prove a lot of other stuff, too. The idea
again is to exploit analogies between different subjects - namely
2 and 3) - using the formalism of n-category theory.

4) Singularity theory. Here the basic idea is that given a stratified
space, we should be able to get ahold of an n-category very roughly as
follows. Let the objects be the points in the open dense stratum, let the
morphisms be generic paths between these points (which intersect only the
codimension 1 stratum), let the 2-morphisms be the generic paths of
paths (which intersect only the codimension 2 stratum), and so on.

This viewpoint explains some otherwise truly remarkable phenomena,
such as how quantum groups wind up giving Vassiliev invariants of
links, a concept originally defined using the stratified space of
immersions of a circle in 3-dimensional space.

5) Representation theory. Recently, under the influence of physics,
the traditional theory of semisimple Lie groups has developed a profusion
of strange new offshoots: quantum groups, loop groups, more general Kac-
Moody algebras, vertex operator algebras, etc.. Some of these have shed
a lot of new light on other aspects of group theory, like the study of
finite simple groups. All these different objects have their own
representation theory, but clearly they are all allied and part of the
same big subject.

Topological quantum field theories give invariants of manifolds, and more
generally, of n-manifolds embedded in (n+k)-dimensional space. They do
this by exploiting some still rather surprising "coincidences" between
differential topology and the representation theory of some of these new
gadgets (mainly quantum groups and loop groups). In an effort to understand
these coincidences, people like Jean-Luc Brylinski and Dan Freed have been
led to study "n-vector spaces" --- the n-categorical analogues of vector
spaces --- and also "n-Hilbert spaces". A good example is the category of
representations of a quantum group, which should be a special sort of
4-Hilbert space, having only one object and one 1-morphism. General
n-Hilbert spaces have only been given a rigorous definition up to
n = 2, but 4-Hilbert spaces with only one object and one 1-morphism
can be thought of as 2-Hilbert spaces with extra bells and whistles,
so we are already up to handling them.

In particular, if an n-Hilbert space is a particular sort of (n-1)-category
with duals, the "coincidences" that make TQFTs tick will be explained by
the fact that subjects 3) and 5) both concern n-categories with duals.
The further relation with 2) then explains Timothy Porter's construction
of n-dimensional TQFTs from homotopy (n-1)-types in a manner generalizing the
Dijkgraaf-Witten model and the work of Yetter.

I hope this begins to show that there is a broad but still fairly
well-defined circle of ideas out there which can be brought into
clarity using n-category theory. The common theme is a very simple
one: things, ways to go between things, ways to go between ways to
go between things, etc.. But there are a lot of nontrivial spinoffs
that come from taking this simple theme seriously. At least that's my
hope.


Thomas Haeberlen

Apr 13, 1997

Noam Elkies wrote:

>
> >3) Give a short conceptual proof of the classification of finite simple groups.
>
> Excellent.
>

Maybe I am not an expert, but I think that giving a "proof" at all might
already be a bold task. As far as I am told, the classification of
finite simple groups today is widely considered to be complete, but many
group theorists seem to feel a little bit uneasy about it, and results
using the classification "in a strong sense" always seem to have a
little uncertainty in them. So if someone came and published a book
containing something like a real proof - the books of Gorenstein and
Aschbacher are IMO more like overviews, not actually proofs - ... that
could be something! And in order to be a proof that was short enough to
be printed in a normal book it would surely have to be "conceptual"...
even if it failed to be "short" in the strong sense.

Thomas Haeberlen (haeb...@cip.mathematik.uni-stuttgart.de)


MCKAY john

Apr 13, 1997

In article <33512B...@cip.mathematik.uni-stuttgart.de>

The problem is not unease in using a perhaps incomplete proof. In my view -
the problem is that the classification is "internal" - namely that it uses
only tools from within finite group theory. I believe that we shall find
that other tools will shed light on the finite simples. There are strong
indications that this may be so.
Witten has a construction for obtaining Lie algebras from hyper-Kähler
manifolds. The existence of 26 sporadics needs explanation. Of the 26,
20 are involved in the Monster. I believe that an explanation of the
Monster, Baby Monster, and Fischer 24 lies in the existence of E8, E7, E6
respectively [Schur multipliers and fundamental groups coincide].

Why do sporadics arise when Schur multipliers are exceptionally large?

There are tantalizing suggestions that the Monster may be related to a
hyper-Kähler manifold. I now examine moonshine using differential operators!
We know the groups generated by a single element... what about two?
Why are finite simples generated by two elements? There's the rub!
When more tools from geometry and topology are brought to bear on the
finite simples, maybe we will comprehend what we can now at best merely
prove.

The current revisionist philosophy is like catching a fly by confining it
to a room. The real work remains to be done!

JoachimHagemann

Apr 14, 1997

john baez wrote:

|> [...] Here's the basic idea:


|> 1) Proof theory. A lot of modern logic involves the study of cartesian
|> closed categories; examples include Boolean algebras, Heyting algebras,
|> the category of sets, and more general topoi. The common thread here is
|> the trick of thinking of an "implication" P -> Q as a morphism from P
|> to Q.
|> Sometimes it's also good to think of a "proof" that P -> Q as a morphism
|> F: P -> Q. [...]

|>
|> 5) Representation theory. Recently, under the influence of physics,
|> the traditional theory of semisimple Lie groups has developed a profusion
|> of strange new offshoots: quantum groups, loop groups, more general Kac-
|> Moody algebras, vertex operator algebras, etc.. Some of these have shed
|> a lot of new light on other aspects of group theory, like the study of
|> finite simple groups. All these different objects have their own
|> representation theory, but clearly they are all allied and part of the
|> same big subject.
|> [...]

It seems that n-categories are a useful means to construct interesting
continuous lattices and hence interesting models of Lambda calculus.
Abstracting interesting types of representation theory as special
'Lambda calculi' would indeed be quite interesting.
See [Gierz, Hofmann, Keimel, Lawson, Mislove, Scott]
A Compendium of Continuous Lattices, Springer 1980.
For the aspects of Lambda calculus see
[Scott] Models for type-free calculi, Logic,
Methodology and Philosophy of Science IV, P. Suppes et alii eds.,
North-Holland 1973, 157-187.


Timothy Chow

Apr 16, 1997

In article <5igveo$l87$1...@news.fas.harvard.edu>,
Tal Kubo <ku...@abel.math.harvard.edu> wrote:
>Perhaps the pulpit would be better used to call attention to less
>traditional directions whose fruitfulness is yet to be recognized.

At the recent Garrett Birkhoff Memorial Conference, Gian-Carlo Rota
spoke on "The many lives of lattice theory." After giving a splendid
overview of the subject, he concluded as follows.

These developments, and several others that will not be mentioned
because my time is up, are a belated validation of Garrett Birkhoff's
vision, which we learned in three editions of his "Lattice Theory,"
and they betoken Prof. Gelfand's oft-repeated prediction that lattice
theory will play a leading role in the mathematics of the twenty-first
century.

On a different note, my personal feeling is that mathematics has grown
increasingly fragmented over the past century. This is in spite of
the mathematician's love for generalizations and unifying themes.
Maybe the sheer volume of mathematics makes such fragmentation
inevitable, but I would like to see more unity in the next century.
Such "housekeeping" doesn't have much prestige, though.
--
Tim Chow tc...@umich.edu
Where a calculator on the ENIAC is equipped with 18,000 vacuum tubes and weighs
30 tons, computers in the future may have only 1,000 vacuum tubes and weigh
only 1 1/2 tons. ---Popular Mechanics, March 1949


Christian Ronse in Strasbourg

Apr 16, 1997

In article <5j1a2s$a...@thighmaster.admin.lsa.umich.edu>
tc...@math.lsa.umich.edu (Timothy Chow) writes:

> At the recent Garrett Birkhoff Memorial Conference, Gian-Carlo Rota
> spoke on "The many lives of lattice theory." After giving a splendid
> overview of the subject, he concluded as follows.
>
> These developments, and several others that will not be mentioned
> because my time is up, are a belated validation of Garrett Birkhoff's
> vision, which we learned in three editions of his "Lattice Theory,"
> and they betoken Prof. Gelfand's oft-repeated prediction that lattice
> theory will play a leading role in the mathematics of the twenty-first
> century.

It seems to me that "lattice theory" is mostly used in "theoretical
computer science". For example, the overwhelming majority of research on
complete lattices is done within two fields:
- the theory of abstract interpretation of programs;
- mathematical morphology (a branch of image processing).
I have myself devoted much time to the latter...

In "Mathematics: Form and Function", Mac Lane expresses distaste for
lattice theory, mainly because it seems to him that research in this
topic looks artificial. Maybe it is lattice theory not motivated by
applications (computer science or image processing) which is
artificial...

Christian Ronse LSIIT - URA CNRS 1871
Universite Louis Pasteur, UFR de Mathematique et Informatique
7 rue Rene Descartes, F-67000 Strasbourg
e-mail: ro...@dpt-info.u-strasbg.fr Tel. +33-3-88.41.66.38
http://gr6.u-strasbg.fr/~ronse/ Fax. +33-3-88.60.26.54


Michael Smith

Apr 17, 1997

Noam Elkies (elk...@ramanujan.math.harvard.edu) wrote:
: In article <5i46vd$b...@charity.ucr.edu>, john baez <ba...@math.ucr.edu> wrote:

: >5) Determine whether or not the following widely studied physical
: >theories can be given a satisfactory mathematically rigorous
: >formulation:
: >
: > a) the classical electrodynamics of point particles
: > b) quantum electrodynamics
: > c) the Standard Model

: The difficulties with (b) and (c) are well known, and probably present
: problems worthy of succeeding Hilbert's. What's the difficulty with (a)?

May I suggest another physics problem which, I believe, is worthy of
considerable mathematical research effort?

5') Normalize the constants in String Theory.

As I understand it - and I admit my understanding of the subject is
very poor - the mathematical tools which could be used to attack this
problem as yet do not exist or are not well-understood. Until such
tools are developed, String Theory cannot even be tested
experimentally.

Mike Smith

DISCLAIMER: My opinions do not necessarily, or even remotely, reflect
those of Loyola University, Chicago.


Pertti Lounesto

Apr 17, 1997

JoachimHagemann <hage...@horus.mch.sni.de> wrote:
>2) --7D vector analysis--
>Develop a vector analysis for the 7D cross product algebra
>(which is derived from the octonions in the same way as the
>3D cross product algebra is derived from the quaternions)
>and use it in physics and engineering in a way which is
>similar to the traditional 3D vector analysis.

You might want to take a look at the book G. Dixon: "Division
Algebras: Octonions, Quaternions, Complex Numbers and the
Algebraic Design of Physics", Kluwer, 1994. It is a very nice
starting point for somebody who wants to develop the seven
dimensional vector analysis. See the web-site:
http://www.7stones.com/Homepage/history.html

Phil Gibbs

Apr 18, 1997

The list needs a replacement for FLT, a simple
diophantine problem which the masses can understand.

I propose the problem to answer the question:
Do there exist 5 positive integers such that the
product of any two is one less than a square?
It is an old problem even known to Diophantus
himself.
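
A minimal brute-force sketch in Python (illustrative only; the helper names
and the search bound are my own) that looks for such sets and quickly turns
up the known quadruple 1, 3, 8, 120:

from math import isqrt

def shy_square(n):
    # True if n is one less than a perfect square
    r = isqrt(n + 1)
    return r * r == n + 1

def extend(sets, bound):
    # extend each set by one larger integer m so that every pairwise
    # product with m is still one less than a square
    return [s + (m,) for s in sets for m in range(s[-1] + 1, bound + 1)
            if all(shy_square(a * m) for a in s)]

bound = 200
sets = [(a,) for a in range(1, bound + 1)]
for _ in range(3):          # grow singletons into quadruples
    sets = extend(sets, bound)
print(sets)                 # expected output for this bound: [(1, 3, 8, 120)]

Whether a fifth member can ever be added is exactly the question above.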

--
Phil Gibbs

Turnpike evaluation. For information, see http://www.turnpike.com/


MCKAY john

Apr 19, 1997

In article <1sxxlAAv...@weburbia.demon.co.uk>
Phil Gibbs <philip...@pobox.com> writes:

|>The list needs a replacement for FLT, a simple
|>diophantine problem which the masses can understand.

Agreed: How about a+b+c = 0?

|>I propose the problem to answer the question:
|>Do there exist 5 positive integers such that the
|>product of any two is one less than a square?
|>It is an old problem even known to Diophantus
|>himself.


MCKAY john

unread,
Apr 20, 1997, 3:00:00 AM4/20/97
to

In article <1sxxlAAv...@weburbia.demon.co.uk>
Phil Gibbs <philip...@pobox.com> writes:
>
>The list needs a replacement for FLT, a simple
>diophantine problem which the masses can understand.
>
>I propose the problem to answer the question:
>Do there exist 5 positive integers such that the
>product of any two is one less than a square?
>It is an old problem even known to Diophantus
>himself.

Has this not been solved? See a paper by Stephen Muir (et al?) using
Baker's (then new) approximation results. Try Quarterly J. Math. about
1967.

Four numbers with this property are 1,3,8,120. Muir, I believe, proves
THIS set cannot be augmented. Does this solve the problem?

Kevin Brown

unread,
Apr 20, 1997, 3:00:00 AM4/20/97
to

On Fri, 18 Apr 1997 10:16:31 +0100, Phil Gibbs
<philip...@pobox.com> wrote:
>I propose the problem to answer the question:
>Do there exist 5 positive integers such that the
>product of any two is one less than a square?

Incidentally, there's a connection between this problem and the sets
of points called "clouds" by L. Comtet in his book "Advanced
Combinatorics". If you have 3 (distinct) positive integers a,b,c
(e.g., 1,3,8) such that each of the three pairwise products ab, ac, bc
is a "shy square" (defined as one less than a square), then obviously
the square (abc)^2 is expressible as a product of three shy squares,
i.e., (abc)^2 = (A^2-1)(B^2-1)(C^2-1).

However, if you have 4 distinct integers a,b,c,d (e.g., 1,3,8,120)
such that each of the SIX pairwise products is a shy square, then
the square (abcd)^2 must not only be expressible as a product of
four shy squares, it must be so expressible in THREE distinct ways,
corresponding to the three ways of choosing four out of the six
pairwise products such that each of a,b,c,d appears exactly twice.
For example, with a=1,b=3,c=8,d=120 we have

(2880)^2 = (2^2 - 1)(3^2 - 1)(19^2 - 1)(31^2 - 1)
= (2^2 - 1)(5^2 - 1)(11^2 - 1)(31^2 - 1)
= (3^2 - 1)(5^2 - 1)(11^2 - 1)(19^2 - 1)

Now, if you imagine 5 distinct integers a,b,c,d,e such that each
of the TEN pairwise products is a shy square, then the square
(abcde)^2 must be expressible as a product of 5 distinct shy squares
in TWELVE distinct ways. It isn't trivial to find numbers that are
expressible as products of shy squares in multiple ways. For example,
no one has ever found ANY integer (let alone a square) that is
expressible in the form (n^2 - 1)(m^2 - 1) in more than five distinct
ways. (And only 6 five-way expressible numbers are known).

Anyway, the connection to Comtet's "clouds" is that if you go
on to consider N distinct integers such that each of the N(N-1)/2
pairwise products is a shy square, then the square of the product
of those N integers must be expressible as a product of N shy squares
in c(N) distinct ways, where the first few values of c(N) are

N c(N)
--- -----
3 1
4 3
5 12
6 70
7 465
8 3507

This is the number of ways of choosing N out of N(N-1)/2 pairwise
products of N numbers such that each of the individual numbers appears
exactly twice. Checking the Encyclopedia of Integer Sequences (Sloane
and Plouffe) we find that this is sequence M2937, which Comtet defined
in terms of the intersection points of N lines in general position in
the plane.
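
Neither check below is from the post; it is just a quick Python
confirmation of the arithmetic above: the three factorizations of
2880^2, and the first few values of c(N) obtained by brute force.

# Sketch (mine): verify the arithmetic quoted above.
from itertools import combinations

# The three "shy square" factorizations of 2880^2.
for ms in [(2, 3, 19, 31), (2, 5, 11, 31), (3, 5, 11, 19)]:
    prod = 1
    for m in ms:
        prod *= m * m - 1
    assert prod == 2880 ** 2

# c(N): ways to choose N of the N(N-1)/2 pairs of {1,...,N} so that
# every element lies in exactly two chosen pairs.
def c(N):
    pairs = list(combinations(range(N), 2))
    count = 0
    for chosen in combinations(pairs, N):
        degree = [0] * N
        for i, j in chosen:
            degree[i] += 1
            degree[j] += 1
        count += all(d == 2 for d in degree)
    return count

print([c(N) for N in range(3, 8)])   # [1, 3, 12, 70, 465]; c(8) = 3507 takes longer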


Phil Gibbs

unread,
Apr 21, 1997, 3:00:00 AM4/21/97
to

In article <5jcof2$4ni$1...@newsflash.concordia.ca>, MCKAY john
<mc...@cs.concordia.ca> writes:
>
>In article <1sxxlAAv...@weburbia.demon.co.uk>
> Phil Gibbs <philip...@pobox.com> writes:

>>Do there exist 5 positive integers such that the
>>product of any two is one less than a square?

>>It is an old problem even known to Diophantus
>>himself.
>
>Has this not been solved? See a paper by Stephen Muir (et al?) using
>Baker's (then new) approximation results. Try Quarterly J. Math. about
>1967.
>
>Four numbers with this property are 1,3,8,120. Muir, I believe, proves
>THIS set cannot be augmented. Does this solve the problem?
>

Unless there has been a more recent development, they have only proved
that there is no fifth integer extending this particular set (and a
finite number of others). There are infinitely many sets of four
numbers like that one. It would be a much more impressive result
to prove that there is no set of five, or to find one.

JoachimHagemann

unread,
Apr 21, 1997, 3:00:00 AM4/21/97
to

JoachimHagemann wrote, on John Baez's n-categories:

> It seems that n-categories are a useful means to construct interesting
> continuous lattices and hence interesting models of lambda calculus.
> [...] See [Gierz, Hofmann, Keimel, Lawson, Mislove, Scott],
> A Compendium of Continuous Lattices, Springer, 1980.

Please let me give more details. The reason is that
I consider n-categories to be something important.
Important in MATHEMATICS.
(In physics I would rather try to avoid n-categories:
it is too strange a mathematics for most physicists.)

--new categories from a given 'good' base category--
Consider the base category K of commutative rings,
just so as not to be too abstract. The following holds for K:
- K is an algebraic theory: the ideal lattices are algebraic.
- K is a commutator theory, with 'commutator = ideal multiplication'
(a toy instance, the ideal lattice of Z/12Z, is sketched at the end
of this section).
- The product of two finitely generated ideals
is again finitely generated.
Now start constructing from K 2-categories K2, 3-categories K3, ...
Q1: Are K2, K3, ... again algebraic theories?
Or at least continuous geometries?
Q2: Are K2, K3, ... again commutator theories?
If yes, is the commutator of finitely generated
'congruences' again finitely generated?
(At least in the sense of Continuous Lattices?)
Remark: M. Cristina Pedicchio of the University of Trieste
in Italy developed 'categorical commutator theory' based on
internal category theory / internal groupoids.
At present it is available for congruence-modular algebraic
equational classes and hence has quite a limited range:
for example, semigroups are not covered.
It seems that 2-categories are an alternative setting for
her results. Commutator theory has the advantage that
the constructed category can be pushed back into the original
category; probably these are 'well-behaved' 2-categories.
Extending Pedicchio's results to a less limited scope
(perhaps by using Continuous Lattices + n-categories)
is something important.
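
To make 'commutator = ideal multiplication' concrete, here is a toy
check of mine (not the poster's) in the single ring Z/12Z: the ideals
are generated by the divisors of 12, the join of two ideals is their
sum, the meet is their intersection, and the product of two ideals
(the 'commutator') always lies below their meet.

# Toy illustration (mine): ideals of Z/12Z, with the ideal product
# playing the role of the commutator.
from functools import reduce
from math import gcd

n = 12

def ideal(d):
    """The ideal (d) of Z/nZ, as a set of residues."""
    return frozenset((d * k) % n for k in range(n))

divisors = [d for d in range(1, n + 1) if n % d == 0]     # 1, 2, 3, 4, 6, 12

def ideal_product(a, b):
    """Ideal generated by all products xy with x in (a), y in (b)."""
    gens = {(x * y) % n for x in ideal(a) for y in ideal(b)}
    return ideal(reduce(gcd, gens, n))

for a in divisors:
    for b in divisors:
        meet = ideal(a) & ideal(b)           # intersection = (lcm(a, b))
        join = ideal(gcd(a, b))              # sum of the two ideals
        assert ideal(a) | ideal(b) <= join
        assert ideal_product(a, b) <= meet   # "commutator" below the meet
print("ideal product lies below the intersection for every pair of ideals")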


--Structure analysis of an abstract category--
Consider the class L of all categories which can
be derived from K by n-category techniques in
finitely many steps. In particular, n-category techniques
could be applied to K2, K3, ... again.
Q3: What are the properties of L? Is there some algebraic
structure on L, as sometimes in algebraic topology?
Q4: Can any category in L be characterized in an abstract way?
If yes: construct a 'good' representation for it.
Q5: Define various nice properties which you would
love an 'abstract' L to have. What properties
do they force on the 'generating' category K?

--
Somehow I hope that all this stuff will be one useful
means of proving the Riemann Hypothesis. Joachim


Noam Elkies

unread,
Apr 21, 1997, 3:00:00 AM4/21/97
to

In article <5jcof2$4ni$1...@newsflash.concordia.ca>,
MCKAY john <mc...@cs.concordia.ca> wrote:
>In article <1sxxlAAv...@weburbia.demon.co.uk>
> Phil Gibbs <philip...@pobox.com> writes:
>>Do there exist 5 positive integers such that the
>>product of any two is one less than a square?
>>It is an old problem even known to Diophantus
>>himself.

>Has this not been solved? See a paper by Stephen Muir (et al?) using
>Baker's (then new) approximation results. Try Quarterly J. Math. about
>1967.

>Four numbers with this property are 1,3,8,120. Muir, I believe, proves
>THIS set cannot be augmented. Does this solve the problem?

Alas no, because there are infinitely many quadruples. This
problem, and the solution for {1,3,8,120,x}, was even reported
by Martin Gardner in his Sci.Am. column several decades ago.
Does it really go back to Diophantus? I thought Diophantus
never required integer solutions, only rational ones. (Are
there even any examples of rational quintuples?)

Kevin Brown

unread,
Apr 23, 1997, 3:00:00 AM4/23/97
to

Phil Gibbs <philip...@pobox.com> writes:
>Do there exist 5 positive integers such that the
>product of any two is one less than a square?
>It is an old problem even known to Diophantus
>himself.

elk...@ramanujan.math.harvard.edu (Noam Elkies) wrote:
>Does it really go back to Diophantus? I thought Diophantus
>never required integer solutions, only rational ones.

It's true that Diophantus only required rational numbers, but of
course many of his results have relevance for integers too. In Book
III of The Arithmetica he treated the problem of finding three numbers
such that the product of any two of them increased by 1 is a square,
and he gave the triple a=x, b=x+2, c=4x+4. Then in Problem 21, Book
IV, he treated the problem of finding FOUR numbers such that all six
pairwise products are 1 less than a square. He built on his triple
solution a,b,c, but oddly enough he *didn't* use the case x=1 (as
Fermat later did to find 1,3,8,120). Instead he decided to make the
product of the first and fourth numbers equal to one less than
(3x+1)^2. I suppose he was just following the pattern

ab+1 = (1x+1)^2
ac+1 = (2x+1)^2 bc+1 = (2x+3)^2
ad+1 = (3x+1)^2 bd+1 = ? cd+1 = (6x+5)^2

Notice that setting ad+1 = (3x+1)^2 forces d to be 9x+6, which
automatically gives cd+1 = (6x+5)^2. Diophantus didn't comment
much on this, but it's clearly systematic. For example, if he
went on to a 5th number e such that ae+1 = (4x+1)^2 it would
force e to be 16x+8, which automatically gives de+1 = (12x+7)^2,
and so on.

Anyway, all he needs to do now is find a (rational) value of x such
that bd+1 = 9x^2 + 24x + 13 is a (rational) square. He reasons
that if we assume bd+1 = (3x+k)^2 for some integer k then we will have
(24-6k)x + (13-k^2) = 0, and so x can be any number of the form
x = (k^2 - 13)/(24 - 6k). He chooses k=-4, which gives x=1/16, so
his four numbers are 1/16, 33/16, 68/16, and 105/16. (He might
have chosen k=-11 to give x=6/5.)

elk...@ramanujan.math.harvard.edu (Noam Elkies) wrote:
> (Are there even any examples of rational quintuples?)

Euler gave a family of such rational quintuples, including these
two examples

1/2 5/2 6 48 44880/128881
1 3 8 120 777480/8288641
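
Everything above is easy to confirm with exact rational arithmetic.
The following sketch (mine, not from the post) rechecks Diophantus's
parametrized quadruple for k = -4 and k = -11, and both of Euler's
quintuples.

# Sketch (mine): exact-rational check of Diophantus's construction
# (a = x, b = x+2, c = 4x+4, d = 9x+6 with x = (k^2-13)/(24-6k))
# and of Euler's two rational quintuples.
from fractions import Fraction as F
from itertools import combinations
from math import isqrt

def square_rational(q):
    q = F(q)
    return isqrt(q.numerator) ** 2 == q.numerator and \
           isqrt(q.denominator) ** 2 == q.denominator

def all_pairwise_shy(nums):
    return all(square_rational(a * b + 1) for a, b in combinations(nums, 2))

for k in (-4, -11):
    x = F(k * k - 13, 24 - 6 * k)
    assert all_pairwise_shy([x, x + 2, 4 * x + 4, 9 * x + 6])

assert all_pairwise_shy([F(1, 2), F(5, 2), F(6), F(48), F(44880, 128881)])
assert all_pairwise_shy([F(1), F(3), F(8), F(120), F(777480, 8288641)])
print("Diophantus's quadruples and Euler's quintuples all check out")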
________________________________________________________________
| /*\ |
| MathPages / \ http://www.seanet.com/~ksbrown/ |
|_____________/_____\____________________________________________|


Phil Gibbs

unread,
Apr 23, 1997, 3:00:00 AM4/23/97
to

In article <5jeqmh$ma6$1...@news.fas.harvard.edu>, Noam Elkies
<elkies@ramanujan.math.harvard.edu> writes:

>Alas no, because there are infinitely many quadruples. This
>problem, and the solution for {1,3,8,120,x}, was even reported
>by Martin Gardner in his Sci.Am. column several decades ago.
>Does it really go back to Diophantus? I thought Diophantus
>never required integer solutions, only rational ones. (Are
>there even any examples of rational quintuples?)


There is always a fifth positive rational number for any starting
sequence of four positive integers. I don't know if there is a sixth.

ilias kastanas 08-14-90

unread,
Apr 23, 1997, 3:00:00 AM4/23/97
to

In article <5jeqmh$ma6$1...@news.fas.harvard.edu>,
Noam Elkies <elk...@ramanujan.math.harvard.edu> wrote:
@In article <5jcof2$4ni$1...@newsflash.concordia.ca>,
@MCKAY john <mc...@cs.concordia.ca> wrote:
@>In article <1sxxlAAv...@weburbia.demon.co.uk>
@> Phil Gibbs <philip...@pobox.com> writes:
@>>Do there exist 5 positive integers such that the
@>>product of any two is one less than a square?
@>>It is an old problem even known to Diophantus
@>>himself.
@
@>Has this not been solved? See a paper by Stephen Muir (et al?) using
@>Baker's (then new) approximation results. Try Quarterly J. Math. about
@>1967.
@
@>Four numbers with this property are 1,3,8,120. Muir, I believe, proves
@>THIS set cannot be augmented. Does this solve the problem?
@
@Alas no, because there are infinitely many quadruples. This
@problem, and the solution for {1,3,8,120,x}, was even reported
@by Martin Gardner in his Sci.Am. column several decades ago.
@Does it really go back to Diophantus? I thought Diophantus
@never required integer solutions, only rational ones. (Are


On occasion Diophantus would consider integer solutions. He
obtained the general formula for Pythagorean triples, and gave some
results on sums of 2, 3 and 4 squares, for example.


Ilias


JoachimHagemann

unread,
Apr 24, 1997, 3:00:00 AM4/24/97
to

Pertti Lounesto wrote:

>
> JoachimHagemann <hage...@horus.mch.sni.de> wrote:
> > --7D vectoranalysis--
> > Develop a vectoranalysis for the 7D crossproduct algebra
> > [...] similar to the traditional 3D vectoranalysis.

>
> You might want to take a look at the book G. Dixon: "Division
> Algebras: Octonions, Quaternions, Complex Numbers and the
> Algebraic Design of Physics", Kluwer, 1994. [...]


--Let me give some comments on a possible 7D vector analysis,
and raise a few questions
--1. School of the Russian mathematician Malcev--
I like Dixon's booklet. The connection of the 7D
cross product algebra R^7 with the 7-element projective
plane PG(2,2) may be found in the useful booklet of Okubo, too,
and of course in Conway/Sloane. But it was already known in the late
fifties to the Russian mathematician Malcev, who (besides creating
Universal Algebra & Model Theory) extended Lie groups /
Lie algebras to analytic loops / Malcev algebras.
Of course, Malcev was inspired by the multiplicative
loop of the octonions and possible applications to physics.
Due to the Cold War & its importance, little is known about this work.
Q1: Does an ENGLISH version of Malcev's results on analytic loops exist?
Q2: What is the state of octonionic analysis/physics in Russia?

--2. trivialities of 7D vector analysis--
In the following I sketch the trivial aspects of a 7D vector analysis,
up to the point where deep knowledge of unoriented manifolds and
topological K-theory is indispensable. Of course, an analyst at a
computer company points to where drilling seems worthwhile but does
not drill on his own.
1) R^7 is no longer a Lie algebra but a Malcev algebra, and hence
satisfies the Malcev identity J(x,y,[x,z]) = [J(x,y,z),x], where as
usual J(x,y,z) := [[x,y],z] + [[y,z],x] + [[z,x],y].
(A numerical check of this identity is sketched at the end of this
section.)
2) Let x, y, z be three elements of PG(2,2) which generate PG(2,2).
Observe that then, within PG(2,2), quite naturally
J([x,y],[y,z],[z,x]) = 0 holds,
which is even satisfied by all Malcev algebras.
3) Remember that the interplay of [,] and the scalar product
is already settled in the definition of a cross product algebra.
As in 3D vector analysis, define the identities of vector analysis
by replacing one or two variables of the defining identities
of the cross product algebra, or of the above two identities,
by NABLA (which in this case is of course the 7D NABLA).
Q3: Is there a 'geometric meaning' of the above identities,
as in 3D vector analysis?
Remark: In 3D vector analysis, compact oriented 2D manifolds
can be treated well in an intuitive geometric way.
What is the 7D analogue?
Q4: Symmetries & gauge invariance of 7D vector analysis.
Remark: In electrodynamics you usually do not discuss
the "symmetries" of the theory and "gauge invariance".
The reason is that this is so much part of the machinery
of 3D vector analysis that you use it 'unconsciously'.
I consider this a central property of 3D vector analysis.
So what are the symmetries of the above 7D identities?
Do they extend to any further identity of R^7 which might
be discovered?
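
Item 1) can at least be probed numerically. The sketch below (mine,
not the poster's) builds the 7D cross product from the cyclic
Fano-plane convention e_i x e_{i+1} = e_{i+3} (indices mod 7; one of
the standard octonion conventions) and checks, on random vectors, both
the defining norm identity |x x y|^2 = |x|^2 |y|^2 - (x.y)^2 and the
Malcev identity.

# Sketch (mine): numerical check that the 7D cross product is a Malcev
# algebra.  Assumed convention: e_i x e_{i+1} = e_{i+3}, indices mod 7.
import random

EPS = [[[0.0] * 7 for _ in range(7)] for _ in range(7)]   # structure constants
for i in range(7):
    a, b, c = i, (i + 1) % 7, (i + 3) % 7
    for x, y, z in ((a, b, c), (b, c, a), (c, a, b)):
        EPS[x][y][z] = 1.0
        EPS[y][x][z] = -1.0

def cross(u, v):
    return [sum(EPS[i][j][k] * u[i] * v[j] for i in range(7) for j in range(7))
            for k in range(7)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def jac(x, y, z):
    """The Jacobian J(x,y,z) = [[x,y],z] + [[y,z],x] + [[z,x],y]."""
    terms = (cross(cross(x, y), z), cross(cross(y, z), x), cross(cross(z, x), y))
    return [sum(t[k] for t in terms) for k in range(7)]

def rand_vec():
    return [random.uniform(-1, 1) for _ in range(7)]

random.seed(1)
for _ in range(25):
    x, y, z = rand_vec(), rand_vec(), rand_vec()
    xy = cross(x, y)
    norm_err = abs(dot(xy, xy) - (dot(x, x) * dot(y, y) - dot(x, y) ** 2))
    lhs = jac(x, y, cross(x, z))                 # J(x, y, [x,z])
    rhs = cross(jac(x, y, z), x)                 # [J(x,y,z), x]
    malcev_err = max(abs(l - r) for l, r in zip(lhs, rhs))
    assert norm_err < 1e-9 and malcev_err < 1e-9
print("norm identity and Malcev identity hold to rounding error")

A numerical check is no substitute for the structure theory, of course,
but it at least pins down the sign conventions.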

--3. concluding remarks--
1) The above two identities are the only identities of R^7 I know.
It is disturbing that both are valid for ALL Malcev algebras:
I do not know any identity 'peculiar' to R^7.
I expect that the combinatorics associated with the 15-element
projective space PG(3,2) (known from the 'Kirkman schoolgirl
problem') yields special R^7 identities.
Q5: What are the identities of the Malcev algebra R^7?
2) The above two identities could model the geometrical
structure of low-dimensional compact unoriented manifolds.
I selected them because they are close to 3D vector analysis.
To be plain: I do not have competence in unoriented manifolds.
But these are the DECISIVE aspects for any vector analysis.
Probably the above identities have to be replaced by others
which have a geometric meaning for unoriented manifolds.
3) Recently, Elkies & Gross gave the first 'octonion'
construction of the 24D Leech lattice. Hence my impression is
that the various string theories of physics are a SECOND type
of 'octonion' analysis. (The FIRST type, established in the
late fifties, is of course topological K-theory.)
My bold conjecture is that both are EQUIVALENT.
String theory seems to be tied to old-fashioned QED-type
mathematics which dates back to the thirties, and hence is tied
to the traditional 'oriented integration theory' so appropriate
for the everyday life of 3D animals.
Affine Lie theory and its generalisations seem to 'approximate'
something which is as yet unknown. We have had that situation before:
the 'epicycle theory' of old geocentric Ptolemaic astronomy
approximates the physical facts to any degree of precision,
but eventually it was replaced by 'Kepler's conics',
which allow a more elegant treatment.
Let me add the 'wild speculation' that combinatorics could
provide the means to establish the above EQUIVALENCE.
- Observation:
Elkies & Gross seem to use something which is equivalent to
the embedding 7-element PG(2,2) --> 8-element AG(3,2) as found
in the Appendix of Oxley's 'Matroids'. This is similar to
passing from affine geometry to projective geometry.
- Conjecture:
The full projective space PG(3,2) over the 2-element field
(with 15x35 incidence matrix having 3*35=105 incidences)
is somehow EQUIVALENT to
the 21-element projective plane PG(2,4) over the 4-element field
(with 21x21 incidence matrix having 5*21=105 incidences).
(The counts are checked in the sketch below.)
- Comment:
The number 2 seems to be too small to be on an equal footing with
the other prime numbers p:
in the 'quadratic reciprocity' law, 4 has to be used instead of 2.
The fundamental theorems of projective geometry can be proved
for projective geometry over any field EXCEPT the 2-element field.
So I conjecture the equivalence of the two natural PG(2,2)-extensions.
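
The incidence counts quoted in the conjecture are easy to confirm from
the standard q-binomial formulas; a small sketch of mine:

# Small check (mine) of the incidence counts above, using the Gaussian
# binomial [n+1 choose 2]_q for the number of lines of PG(n, q).
def gauss_binom(n, k, q):
    num = den = 1
    for i in range(k):
        num *= q ** (n - i) - 1
        den *= q ** (i + 1) - 1
    return num // den

for name, n, q in (("PG(3,2)", 3, 2), ("PG(2,4)", 2, 4)):
    points = (q ** (n + 1) - 1) // (q - 1)
    lines = gauss_binom(n + 1, 2, q)
    print(f"{name}: {points} points, {lines} lines, {lines * (q + 1)} incidences")
# PG(3,2): 15 points, 35 lines, 105 incidences
# PG(2,4): 21 points, 21 lines, 105 incidences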
--
Joachim

