G. Spencer-Brown RH "proof": follow-up

mwat...@maths.ex.ac.uk

3 Sep 2006, 14:34:05

I recently spent an afternoon with G. Spencer-Brown, seeking further
explanation of his puzzling document "A Short Proof of Riemann's
Hypothesis".

[see
http://groups.google.co.uk/group/sci.math.research/browse_thread/thread/a177df8bb8641eea/?hl=en#
for earlier discussion]

After six hours of listening and asking questions, I left not at all
convinced that he has "proved the Riemann Hypothesis" in any sense that
the mathematical community would use those words. However, it does
seem worth recording some of what he said, in case anyone else can see
what he's getting at. It may turn out that there's nothing there, but
based on some of his earlier work, I can't help but keep an open mind.

The small number of mathematicians I was previously able to get to look at
his paper and comment were left, like myself, mystified/bewildered at
the point where he claims to have established his bounds (from which
the RH would uncontroversially follow) somehow as a result of the
Tschebyshev-Sylvester bounds. No one seems to 'get' his "principle of
shrinkage" which he invokes in this context. It is this "principle"
which is the crux of the matter.

He told me that he originally wanted to introduce the principle as *a
new axiom*, but was convinced by one of his correspondents that the
mathematical community would never accept such a thing. So he claims
to have made a "political" decision to instead "prove" the principle
from Tschebyshev's bounds.

If it can be proved from known results, then one naturally wonders why
he would want to introduce it as an axiom. There are two clues I was
able to glean: (1) he sees the principle as having a wider domain of
application than the mere number system - it applies, he says, to
"any system defined in terms of an arithmetic progression" (he
attempted to illustrate this with the example of 3-graphs, with which
he is seemingly familiar from his work on the four-colour theorem); (2)
he makes a distinction (as Hardy did) between "proof" (which takes
place outside the system in question) and "demonstration" (which takes
place within it). He illustrated this with the distinction between (i)
"proving" the infinitude of primes with Euclid's famous argument
involving supposition and contradiction, something a machine could
never do and (ii) demonstrating that 7 x 8 = 56, a calculation,
something a machine could perform.

It's also worth pointing out that he sees the period 1730-1830 as the
golden age of mathematics (and science, music, human intelligence
generally), and likes to point out that the current ideas of proof and
rigour were not always in place, that mathematics was previously
treated more like another of the natural sciences, to be pursued
'experimentally'. He seems more at ease with making hypotheses based
on his tables of calculations (i.e., in the way Gauss arrived at his
x/logx asymptote) than he is with proofs framed in terms of modern
mathematical analysis.

In any case, his attempt to prove/demonstrate his "principle of
shrinkage" (or contraction) left me no less bewildered. He repeatedly
claimed that "Everyone knows it's true, they just won't (or can't)
admit it.". He accused the mathematical community (and me, in the role
of its representative in his presence) of being "neurotic" for not
seeing it. "It's universal, but no one will say it."

So what is the "principle of shrinkage"? You can't easily get GSB to
pin down clear definitions. I believe this was the source of a lot of
criticism for his 1969 magnum opus "Laws of Form". In that case, he
took an almost Taoist stance (reproducing a passage from the Tao Te
Ching adjacent to the first page) and wrote of the extraordinarily
demanding task of having to use "words and other symbols in an attempt
to express what the use of words and other symbols has hitherto
obscured..." In that instance, I could see the possible value in a
seeming lack of precision, but in the case of his trying to "prove" the
RH through the introduction of a new axiom, being less-than-clear about
definitions is a lot harder to accept.

I got a few different versions of the "principle of shrinkage", but
this seems to be the basic assertion:

"Errors in prime counts shrink, as n gets larger, compared with a
fraction of an asymptote."

He went further to say that the principle applied not just to pi(x),
but to *any function whatsoever*.

"Errors between any function and a proven asymptote will have maximum
errors at the beginning".

He sometimes just states it as "error shrinks and stays shrunk".

Error is defined as absolute difference between function and asymptote
value. 'Fraction' is being used fairly loosely here, as it should
include square roots, as in the case of his proposed eventual bounds
for pi(x), that is, li(x) +/- (li(x))^(1/2)... and presumably other roots
too(?) I'm not sure which functions of an asymptote he would/wouldn't
consider a 'fraction' in this sense. And in at least one example he
later mentioned, he was talking about 'error', but there didn't seem to
be an asymptote involved. When I asked how he can talk of error when
there's no asymptote against which to measure it, he exasperatedly
invoked the etymology of 'error' (it means 'wandering', apparently).

A clue: In Ribenboim's prime number records book on p.237 he claims
that pi(n)/(n/logn) peaks at 113 - no reference is given, but GSB is
very interested in obtaining one (can anyone help?). I noted down that
he described this phenomenon as the "essence of shrinkage". Lacking a
reference for the assertion, he claims to have attempted to prove it
independently, a process which led him to his "proof" of the RH.
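
For what it's worth, the location of the peak is easy to check by brute
force. Here is a minimal Python 3 sketch (my own, not anything GSB
provided) that sieves up to 10^6 and tracks the largest value of
pi(n)/(n/log n):

    import math

    N = 10**6
    # Sieve of Eratosthenes: is_prime[n] == 1 iff n is prime
    is_prime = bytearray([1]) * (N + 1)
    is_prime[:2] = b'\x00\x00'
    for p in range(2, math.isqrt(N) + 1):
        if is_prime[p]:
            is_prime[p*p::p] = bytearray(len(range(p*p, N + 1, p)))

    # Track the running prime count and the largest ratio seen so far
    best_n, best_ratio, count = 0, 0.0, 0
    for n in range(2, N + 1):
        count += is_prime[n]
        ratio = count / (n / math.log(n))
        if ratio > best_ratio:
            best_n, best_ratio = n, ratio
    print(best_n, best_ratio)   # 113 1.2550587129...

A computation up to 10^6 is, of course, evidence for where the peak
sits, not a proof that nothing larger occurs beyond it.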

I asked about the seemingly imprecise use of terms like "the
beginning", "relatively small" (as in "all errors violating the bounds
occur for relatively small n"), etc., but couldn't get what I
considered to be any clear answers. There's an impatience, as if to
say "come on, we all know what we mean by these words - stop
pretending, stop being neurotic, just accept it..." He talks of
"having to do psychiatry and arithmetic at the same time" in trying to
get his ideas across to me (and the mathematical world at large). The
fact that he worked with (and co-authored a book with) the maverick psychiatrist
R.D. Laing makes me wonder how much of this is just GSB being
disparaging and how much is based on some deep insight into the
workings of the human mind.

If I were a GSB devotee (I'm not, but quite a few exist out there), I
might conclude that he's just a century or so ahead of the rest of us,
so we'll never quite see what he means. Instead, as a curious onlooker
trying to understand how the author of "Laws of Form" could put
together such a transparently unconvincing "proof" of the RH, I'm a lot
less likely to believe that, but, of course, one can never *entirely*
rule such things out...

So it's not a proof of the RH, really, it's a gauntlet thrown down to
the mathematical community, challenging them to accept the truth of
something that's already being unconsciously used in some way. I am
reminded of a passage that stood out when I first looked at his "Laws
of Form" (from notes for Chapter 4, p.85):

"In all mathematics it becomes apparent, at some stage, that we have
for some time been following a rule without being consciously aware of
the fact. This might be described as the use of a covert convention.
A recognizable aspect of the advancement of mathematics consists in the
advancement of the consciousness of what we are doing, whereby the
covert becomes overt. Mathematics is in this respect psychedelic."

So in what way are we already using his principle? He claims that
whenever a proposition is formulated in the familiar terms of there
existing an integer after which some deviation or error function stays
within some bounds, that betrays the fact we KNOW the principle of
shrinkage to be true. If it weren't true, he argued, such statements
would be meaningless. I protested that these statements might be
*false* but not *meaningless*. He strongly disagreed, insisting they
would be meaningless.

He provided another example, a "theorem" of his own (a proposition
which follows if you accept his principle, apparently). That is,
every prime p > 113 has a neighbour either side within p^(1/2) of it.
He claims this follows directly from shrinkage. As a way of trying to
convince me, he looked up the record for the largest known prime gap in
Ribenboim's book and compared it to the square root of one of the
adjacent primes, to show the extent to which "error had shrunk" (it
was, indeed, something like 1/94,000th of the bound he's proposed). He
challenged me to say I really believed that the error might suddenly
explode outside such bounds after such extensive evidence of shrinkage.
I tried to string together an argument involving Skewes' number and
the fact that the number system can behave incredibly
counterintuitively despite any amount of numerical evidence - why not
just accept the RH as true on the basis of 1.5 billion (or whatever)
calculated zeros on the critical line? He claimed that had nothing to
do with it, as his was a *principle*, not an isolated phenomenon. He
again accused me of neurosis, and said I was being unreasonable not to
expect of the "laws governing the primes" the same "uniformity" I
naturally attributed to the laws governing their complement, the
composites. He claimed that for me not to accept the truth of
"shrinkage" is directly akin to believing that multiplication and
division might cease to work consistently for integers beyond the range
of what can be currently computed. I argued that multiplication was an
operation, not a "principle". He disagreed, forcefully, asserting that
multiplication was a principle.
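
Reading his gap claim as "for every prime p > 113, the next prime is
less than p^(1/2) away" (my reading, not necessarily his exact
formulation), it is at least easy to test. A throwaway Python 3 sketch
of my own:

    import math

    N = 10**6
    # Sieve of Eratosthenes: is_prime[n] == 1 iff n is prime
    is_prime = bytearray([1]) * (N + 1)
    is_prime[:2] = b'\x00\x00'
    for p in range(2, math.isqrt(N) + 1):
        if is_prime[p]:
            is_prime[p*p::p] = bytearray(len(range(p*p, N + 1, p)))
    primes = [n for n in range(2, N + 1) if is_prime[n]]

    # Consecutive primes p < q with p > 113 whose gap reaches sqrt(p)
    bad = [(p, q) for p, q in zip(primes, primes[1:])
           if p > 113 and q - p >= math.sqrt(p)]
    print(bad)   # [] -- no violations below 10**6

Needless to say, an empty list below 10^6 is exactly the kind of finite
evidence I was objecting to in the exchange above.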

He used the following argument a lot in trying to convince me of
"shrinkage": "Laws governing the number system are universal, and apply
the same everywhere."

He makes a repeated point about the mathematical community labouring
under some illusion that the primes behave somehow "lawlessly",
"randomly" or "devilishly", whereas in fact they're just "the
complement of the multiplication tables", and therefore perfectly
regular and lawful if you know how to look at them. He seems to think
the reason no one has previously found his "easy" RH "proof" is because
no one has had the independence of mind to break out of this illusory
view of the number system.

The closest I got to crossing the necessary mental threshold (if there
indeed is one) to get to a point where I could understand his
"principle" is when he scribbled a crude graph of pi(x) and a pair of
Tschebyshev-type bounds which it wavered outside before settling down,
permanently, within. This IS shrinkage, he proclaimed, forcefully,
pointing to the graph. The bounds, he pointed out, are just human
impositions, the prime count itself just IS. The choice of bounds
doesn't matter, he's effectively saying. The Tschebyshev bounds show
that the "errors" is the prime count are shrinking, so they will
necessarily shrink relative to any other bounds you might choose. It
doesn't matter if the asymptote you use is x/log x or li(x). That
still doesn't convince me, mathematically, but just for a second I
thought I caught an "intuitive glimmer" of what he was suggesting.

It would be *very* easy to dismiss his ideas on the RH on the basis of
what I've read/heard from him. However, I feel I should make some
public record of this for other people to consider. Can anyone
envision a precise formulation of his "principle" which wouldn't simply
follow from the definition of asymptoticity, yet to which there is no
immediate counterexample (so that it's not obviously true or false)?
Can anyone think of a counterexample, i.e. a function that has an
asymptote such that the error function refuses to be bound by a
'fraction' of the asymptote?

No doubt a lot of sci.math.research readers will find the pursuit of
this issue a complete waste of time, but any feedback from other
curious onlookers would be welcomed.

MW

P.S. something else he claims to have discovered, which people may wish
to comment on:
(1+x^(-n))/log(1+x^(-n)) -> x^n + 1.5 as n->infinity.

Robert Israel

4 Sep 2006, 03:14:50

In article <1157308445....@e3g2000cwe.googlegroups.com>,
mwat...@maths.ex.ac.uk <mwat...@maths.ex.ac.uk> wrote:

>P.S. something else he claims to have discovered, which people may wish
>to comment on:
>(1+x^(-n))/log(1+x^(-n)) -> x^n + 1.5 as n->infinity.

???
Obviously not for x = 1. Only slightly less obviously, not for
0 < |x| <= 1. A simple exercise for undergraduates for |x| > 1,
in the sense that (1+t)/log(1+t) = t^(-1) + 3/2 + O(t) as t -> 0.
Nothing deep there, just the first two terms of a Laurent series.
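
The expansion is easy to reproduce with sympy; a quick supplementary
check, not part of the original post:

    from sympy import symbols, log, series

    t = symbols('t')
    # Laurent expansion of (1+t)/log(1+t) about t = 0
    print(series((1 + t)/log(1 + t), t, 0, 2))
    # prints, up to term ordering: 1/t + 3/2 + 5*t/12 + O(t**2)

The 5*t/12 term reappears in Victor Meldrew's reply below.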

Robert Israel isr...@math.ubc.ca
Department of Mathematics http://www.math.ubc.ca/~israel
University of British Columbia Vancouver, BC, Canada

Tim Peters

5 Sep 2006, 14:08:13

[mwat...@maths.ex.ac.uk, on his personalized GSB mathemapsychoanalysis]

Thank you for writing this up! While it's admirable to keep an open mind, I
think you can stop now ;-) Just a few tech comments:

> ...


> A clue: In Ribenboim's prime number records book on p.237 he claims
> that pi(n)/(n/logn) peaks at 113 - no reference is given, but GSB is
> very interested in obtaining one (can anyone help?).

There's an important paper (which I haven't read! this is a /partly/
educated guess) which seems a very likely candidate:

Rosser, J. B. and Schoenfeld, L.
"Approximate Formulas for Some Functions of Prime Numbers."
Illinois J. Math. 6, 64-97, 1962.

That's always referenced as the source of the widely quoted:

pi(n) < 1.25506 * (n/ln(n))
for n > 1

and a bit of calculation shows that 1.25506 is the rounded (up) value of
pi(113)/(113/ln(113)) = 1.255058712932...
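
That constant is a one-liner to confirm (my arithmetic, not from the
paper):

    import math

    # pi(113) by trial division: the 30 primes up to 113
    pi_113 = sum(all(n % d for d in range(2, math.isqrt(n) + 1))
                 for n in range(2, 114))
    print(pi_113, pi_113 / (113 / math.log(113)))   # 30 1.2550587129...

so the rounding story checks out.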

> ...


> It would be *very* easy to dismiss his ideas on the RH on the basis of
> what I've read/heard from him. However, I feel I should make some
> public record of this for other people to consider. Can anyone
> envision a precise formulation of his "principle" which wouldn't simply
> follow from the definition of asymptoticity, yet to which there is no
> immediate counterexample (so that it's not obviously true or false)?
> Can anyone think of a counterexample, i.e. a function that has an
> asymptote such that the error function refuses to be bound by a
> 'fraction' of the asymptote?

As you noted, it's unclear what "fraction" means. The first examples that
came to mind were so simple that I'm left wondering what "fraction" /could/
sensibly mean without leaving the "principle of shrinkage" either obviously
false or trivial; e.g.,

x^3 + x^2

is asymptotically equal to

x^3

The absolute difference is |x^2|.

"Errors between any function and a proven asymptote will
have maximum errors at the beginning".

Certainly not here (x^2 gets relentlessly bigger) if:

Error is defined as absolute difference between function
and asymptote value.

The difference x^2 doesn't get less than sqrt(x^3) = x^(3/2) (the square
root of the asymptote, as he tries to claim wrt pi(x) and sqrt(Li(x)))
either.
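
A supplementary sympy check of this example, not in the original post:

    from sympy import symbols, limit, sqrt, oo

    x = symbols('x', positive=True)
    f, g = x**3 + x**2, x**3
    err = f - g                         # the absolute error, x**2
    print(limit(err / g, x, oo))        # 0: f ~ g, so err is eventually
                                        #    below g/C for any fixed C
    print(limit(err / sqrt(g), x, oo))  # oo: err is not eventually
                                        #     bounded by sqrt(g)

confirming both halves: the error drops below any fixed fraction of g,
but never stays below sqrt(g).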

OTOH, if we take "fraction" in its obvious sense, "f(x) is asymptotically
equal to g(x) as x approaches infinity" /means/ that for every epsilon > 0
there's a real M s.t. x > M implies (and for simplicity let's assume that f
and g are strictly positive):

|f/g - 1| < epsilon

from which it's immediate that (multiply through by g)

|f - g| < g * epsilon

So, sure, for every fixed real C > 0, no matter how large, |f-g| is
eventually bounded by "the fraction of g" g/C (pick epsilon = 1/C, and stare
at the above).

But there's nothing new in that, and in fact it's really just a way of
rephrasing the definition of limit as applied to asymptotic equality.

So we can conclude that f~g implies |f-g| is eventually bounded by any
"simple fraction" of g, but that's not strong enough to conclude that |f-g|
is bounded by any function h where h/g -> 0. In particular, sqrt(x)/x -> 0,
and as above it was dead easy to show an f and g with f~g s.t. |f-g| is not
eventually bounded by sqrt(g). Simple variants of the same example show
that from f~g we can't conclude |f-g| is bounded by g^(1-epsilon) for /any/
epsilon > 0. For example, for g^(9/10), consider f = x^11 + x^10 ~ x^11 =
g; then it's not the case that the difference x^10 is eventually smaller
than g^(9/10) = x^(99/10).
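
The same kind of supplementary check disposes of the g^(9/10) variant:

    from sympy import symbols, limit, oo, Rational

    x = symbols('x', positive=True)
    # err = x**10 and g = x**11, so g**(9/10) = x**(99/10)
    print(limit(x**10 / (x**11)**Rational(9, 10), x, oo))   # oo

since x**10 eventually dwarfs x**(99/10).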


victor_me...@yahoo.co.uk

4 Sep 2006, 04:15:08

mwat...@maths.ex.ac.uk wrote:
> I recently spent an afternoon with G. Spencer-Brown, seeking further
> explanation of his puzzling document "A Short Proof of Riemann's
> Hypothesis".

You poor thing.

> After six hours of listening and asking questions, I left not at all
> convinced that he has "proved the Riemann Hypothesis

After six minutes of looking at his manuscript I was convinced
that he had provided therein no proofs at all (of anything).

> However, it does
> seem worth recording some of what he said, in case anyone else can see
> what he's getting at. It may turn out that there's nothing there, but
> based on some of his earlier work, I can't help but keep an open mind.

The pleasure of open-mindedness: being able to tolerate the gabblings
of a garrulous bore for hours in the hope of gleaning a tiny nugget of
wisdom.

> The small number of mathematicians I was previously able to get to look at
> his paper and comment were left, like myself, mystified/bewildered at
> the point where he claims to have established his bounds (from which
> the RH would uncontroversially follow) somehow as a result of the
> Tschebyshev-Sylvester bounds.

I was neither mystified nor bewildered. He simply failed to attempt
proofs of any of his assertions. Had he, like many cranks, provided
some dense pages of cryptic and incomprehensible assertions as
claimed proofs, he might have "bewildered" his audience, but he
did not even attempt this.

> No one seems to 'get' his "principle of
> shrinkage" which he invokes in this context.

> He told me that he originally wanted to introduce the principle as *a
> new axiom*,

Brilliant! Solving the Riemann Hypothesis by making it a new axiom!
Slicing the Gordian knot indeed! Why does he not do that to the
Birch and Swinnerton-Dyer conjecture, etc.? There would be six
million dollars to be won, not simply the one.

> but was convinced by one of his correspondents that the
> mathematical community would never accept such a thing. So he claims
> to have made a "political" decision to instead "prove" the principle
> from Tschebyshev's bounds.

> If it can be proved from known results, then one naturally wonders why
> he would want to introduce it as an axiom.

Actually I think that "one"'s natural curiosity about this
chap's mental processes might have ceased long before, but
the obvious "why" that comes to mind is:
because he can't prove it from known results.

> It's also worth pointing out that he sees the period 1730-1830 as the
> golden age of mathematics (and science, music, human intelligence
> generally), and likes to point out that the current ideas of proof and
> rigour were not always in place, that mathematics was previously
> treated more like another of the natural sciences, to be pursued
> 'experimentally'. He seems more at ease with making hypotheses based
> on his tables of calculations (i.e., in the way Gauss arrived at his
> x/logx asymptote) than he is with proofs framed in terms of modern
> mathematical analysis.

Sounds like a rationalization of his inability to justify his
grandiose claims.

> So what is the "principle of shrinkage"? You can't easily get GSB to
> pin down clear definitions.

What a surprise!

> I got a few different versions of the "principle of shrinkage", but
> this seems to be the basic assertion:
>
> "Errors in prime counts shrink, as n gets larger, compared with a
> fraction of an asymptote."

Classic wishful thinking, it seems.

> He went further to say that the principle applied not just to pi(x),
> but to *any function whatsoever*.
>
> "Errors between any function and a proven asymptote will have maximum
> errors at the beginning".
>
> He sometimes just states it as "error shrinks and stays shrunk".
>
> Error is defined as absolute difference between function and asymptote
> value. 'Fraction' is being used fairly loosely here, as it should
> include square roots, as in the case of his proposed eventual bounds
> for pi(x), that is, li(x) +/- (li(x))^(1/2)... and presumably other roots
> too(?) I'm not sure which functions of an asymptote he would/wouldn't
> consider a 'fraction' in this sense. And in at least one example he
> later mentioned, he was talking about 'error', but there didn't seem to
> be an asymptote involved. When I asked how he can talk of error when
> there's no asymptote against which to measure it, he exasperatedly
> invoked the etymology of 'error' (it means 'wandering', apparently).

Resort to etymology is a classic diversionary tactic.

> I asked about the seemingly imprecise use of terms like "the
> beginning", "relatively small" (as in "all errors violating the bounds
> occur for relatively small n"), etc., but couldn't get what I
> considered to be any clear answers. There's an impatience, as if to
> say "come on, we all know what we mean by these words - stop
> pretending, stop being neurotic, just accept it..." He talks of
> "having to do psychiatry and arithmetic at the same time" in trying to
> get his ideas across to me (and the mathematical world at large). The
> fact that he worked with (and co-authored a book with) the maverick psychiatrist
> R.D. Laing makes me wonder how much of this is just GSB being
> disparaging and how much is based on some deep insight into the
> workings of the human mind.

It shouldn't: Laing was a classic bully with no deep insights
but much pompous pontificating.

> If I were a GSB devotee (I'm not, but quite a few exist out there),

But you have taken it upon yourself to act as his apologist
here. Why did you not devote the time you spent with him
interacting with genuine mathematicians, or doing
your own mathematics?

> Instead, as a curious onlooker
> trying to understand how the author of "Laws of Form" could put
> together such a transparently unconvincing "proof" of the RH, I'm a lot
> less likely to believe that, but, of course, one can never *entirely*
> rule such things out...

Let's play "spot the flying pig".

> So it's not a proof of the RH, really, it's a gauntlet thrown down to
> the mathematical community, challenging them to accept the truth of
> something that's already being unconsciously used in some way.

Now we are seeing some of your assumptions coming to the fore.
What is this "something that's already being unconsciously used "?

> So in what way are we already using his principle?

Like "have you stopped beating your wife?"
this is a question based on a prior assumption. The more
germane question is "are we using his principle?"
(My answer: no.)

Like his hero Laing he is trying to bully you. He is trying to
induce guilt ("neurosis") in you to divert attention from his
dubious claims.

> He makes a repeated point about the mathematical community labouring
> under some illusion that the primes behave somehow "lawlessly",
> "randomly" or "devilishly", whereas in fact they're just "the
> complement of the multiplication tables", and therefore perfectly
> regular and lawful if you know how to look at them. He seems to think
> the reason no one has previously found his "easy" RH "proof" is because
> no one has had the independence of mind to break out of this illusory
> view of the number system.

Ah yes; it's great to be the only one with "independence of mind"!

> It would be *very* easy to dismiss his ideas on the RH on the basis of
> what I've read/heard from him. However, I feel I should make some
> public record of this for other people to consider. Can anyone
> envision a precise formulation of his "principle" which wouldn't simply
> follow from the definition of asymptoticity, yet to which there is no
> immediate counterexample (so that it's not obviously true or false)?
> Can anyone think of a counterexample, i.e. a function that has an
> asymptote such that the error function refuses to be bound by a
> 'fraction' of the asymptote?

I am sure that were it possible to state a precise "principle of
shrinkage" then it would be easy to construct counterexamples.
This shows the rhetorical advantage of keeping the principle vague.

> No doubt a lot of sci.math.research readers will find the pursuit of
> this issue a complete waste of time, but any feedback from other
> curious onlookers would be welcomed.

It is a complete waste of time. I don't understand why the moderators
passed this posting anyway; its relationship to research mathematics is
tenuous to say the least.

> P.S. something else he claims to have discovered, which people may wish
> to comment on:
> (1+x^(-n))/log(1+x^(-n)) -> x^n + 1.5 as n->infinity.

Ah yes, he has reached the same intellectual level as many
of my first-year students who confidently assert things such as
(x^2+1)/(x + 1) -> x^2/x
yet when asked refuse to explain what it means for a function
to converge to another function (I certainly don't know).

If we set y = x^{-n} in the LHS we get
(1 + y)/log(1 + y) = (1 + y)/(y - y^2/2 + y^3/3 - ...)
= y^{-1} + 3/2 + 5y/12 + ...
for small y.

Well, it seems that Spencer-Brown can compute the first two
terms of a Laurent series. I expect that no one else in the
history of mathematics can claim this result; I just cannot
conceive of anyone before who would have claimed this
result as their own personal discovery.

Victor Meldrew

mwat...@maths.ex.ac.uk

6 Sep 2006, 05:27:00

> here. Why did you not devote the time you spent with him
> interacting with genuine mathematicians, or doing
> your own mathematics?

Do you consider Louis Kauffman to be a genuine mathematician? I just
got these from him, which may be of some interest to some readers.

---------------------------------------------------------------------------------------------------
I am aware of GSB's recent paper about RH, but have not studied it.
I have spent from 1978 until the present day thinking about his work on
the four color theorem (not to mention thinking pretty constantly on
Laws of Form since 1974). So you can well imagine that he has
influenced my mathematics a great deal (and there has been a lot of
mathematics there -- look me up on Google Scholar to get a peek at it).
For a recent take on the four color work from my angle see
math.CO/0112266

In the case of the four color theorem he also eventually appeals to a
"new axiom" or principle. In this case the essence of it is that "a
thing is what it is not". Instantiating this in a
mathematical context may be easy or hard, depending on the context. In
a set theory with a fixed universal set, a set and its complement
determine one another. A prime knot is topologically determined by the
topology of its complement in three-space, but this is a deep theorem
of Gordon and Luecke, and is false for higher dimensional knots. Links
in dimension three are not determined by their complements. But the
context is key, and I think that he is right about the context in which
the map theorem occurs. How to prove it is the real question. Will
mathematicians of the future take it as an axiom? I rather doubt it.
Time will tell.

In the case of the four color theorem, he has, in my opinion, put his
finger on the simple essence of the problem and advanced the discussion
toward a good proof far more than all the computer work. It is quite
possible that his parity pass algorithm is in fact a complete solution
to the problem in a form that Kempe himself would have appreciated.

The problem with GSB as "initiator of debate" is that in close contact
he seems to want his listener to acknowledge the veracity of his work
without criticism. This can lead to very sharp breaks in communication.


If my experience with him from 1978 to 1996 is any indication, his work
on RH will contain very deep and true commentary on the nature of the
problem. Perhaps mathematicians a century from now will look wisely at
his work, knowing that it all worked out, and that the mathematicians
of his time just could not see what he intended. Not unlike Grassmann,
for example.

Best,
Lou Kauffman
----------------------------------------------------------------------------------------

I wanted to add another comment or two. First of all, there is no lack
of precision in Laws of Form. The thing is that if one is to begin with
the notion of discrimination, then one has to admit that there is no
definition of discrimination that will not involve some act of
discrimination! Thus one is forced out of the idea that there can be a
way in to the concept of distinction that is void of experience. I will
not say more about this here, but jump to Chapter 11 of LOF. In Chapter
11 GSB discusses properties of abstract digital circuits in what at
first appears to be a very cryptic mode. It took a while for me and some
friends to decipher this back in 1974. Once deciphered it is a clear
model and everything he says makes sense (up to how one might interpret
the concept of imaginary values in logic, which is still an interesting
subject of debate for me). Eventually and some years later, I had some
long discussions and explorations with GSB about circuit design
and it was clear that we were on the same page and understood each
other perfectly. He is, however, a very empirical mathematician, and has
the tendency to notice a pattern and call it a Theorem long before
there is a vestige of proof in sight. The situation in which he
observes it IS the proof. I have seen theorems rise and fall in this
way in our conversation. In the case of the four color theorem the
pattern is the same: When there are good proofs or demonstrations he
will use them, but the actual empirical/mathematical situation of the
mathematician observing directly the phenomenon and knowing it is
paramount. I am sure that this, plus his unusual personality
characteristics, explains most of the behaviour that you have
witnessed.

Best,
Lou K.

------------------------------------------------------------------------------------------------

Incidentally (in the spirit of your above question), why did you spend
time replying (under a pseudonym) to a posting you consider worthless
when you could be doing something more constructive?

MW

tc...@lsa.umich.edu

6 Sep 2006, 09:24:32

>Can anyone
>envision a precise formulation of his "principle" which wouldn't simply
>follow from the definition of asymptoticity, yet to which there is no
>immediate counterexample (so that it's not obviously true or false)?

This isn't precise, but it appears to be one of the things he's trying
to claim:

If f:R -> R or f:N -> R is a naturally occurring function and
lim(x->oo) f(x) appears to exist, then it does.

Can anyone give an example of a conjecture of the form "lim(x->oo) f(x)
exists" for which there was extensive numerical evidence and which
remained open for some time, but was eventually shown to be false?
--
Tim Chow tchow-at-alum-dot-mit-dot-edu
The range of our projectiles---even ... the artillery---however great, will
never exceed four of those miles of which as many thousand separate us from
the center of the earth. ---Galileo, Dialogues Concerning Two New Sciences
