
Godel's proof, truth, reality, self-awareness, and all that jazz


david petry

Sep 6, 2007, 10:50:47 AM

What is the nature of mathematics? Does it have a connection to
reality? Is it part of mankind's quest for truth? Or is it merely a
game which mathematicians have chosen to play, which may have had its
origins in mankind's quest for truth but no longer has a connection to
such a quest? I'm asking the reader to keep such questions in mind as
he reads the rest of this article.

Godel's proof is something of a joke. When we acknowledge that
mathematics has a strong connection to truth and reality, then we can
easily see that Godel's theorem is quite trivial, and the proof has no
content. And if you deny that mathematics has a strong connection to
truth and reality, then are you not implicitly claiming that all of
mathematics, including Godel's proof, is something of a joke?

Godel told us that he had the Liar paradox in mind when he came up
with his proof. And that's a good place to start this analysis.

When we talk, we want people to believe that we are telling the truth.
That simply cannot be denied. That is, when we communicate, we are
implicitly claiming that we are telling the truth. That's one of the
ground rules of communication. So, likewise, when we try to understand
what someone is saying, we take it as implicit that that person is
claiming to be telling the truth. So if we set about to analyze the
statement "I am lying", we must remember to take into consideration
the implicit claim to truth. So the statement must be interpreted as
equivalent to "<implicitly> I am telling the truth; <explicitly> I am
lying". And that is nothing more than a simple contradiction. There's
simply nothing paradoxical about it. It doesn't require any further
analysis.
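The reading above can be put in propositional terms: the implicit claim "I am telling the truth" is T, the explicit claim "I am lying" is not-T, and their conjunction is false under every valuation. A minimal sketch of that truth-table check (the function names are mine, not from the post):

```python
# Petry's reading of the Liar sentence: the speaker implicitly asserts T
# ("I am telling the truth") and explicitly asserts not-T ("I am lying").
def liar_as_contradiction(t):
    implicit = t          # implicit claim: I am telling the truth
    explicit = not t      # explicit claim: I am lying
    return implicit and explicit

# Exhaustive truth-table check: the conjunction is false under every
# valuation, i.e. a plain contradiction rather than a paradox.
satisfiable = any(liar_as_contradiction(t) for t in (True, False))
print(satisfiable)  # False: no valuation makes it true
```

On this reading there is nothing left to resolve: the sentence is simply unsatisfiable, like any other P-and-not-P.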

Now Godel did not understand what is going on with the Liar paradox.
On his interpretation, the Liar paradox leads to some kind of infinite
loop of reasoning with no resolution possible. He completely ignored
the implicit claim to truth we make when we communicate. His proof
suffers the same defect as his analysis of the Liar paradox.

When mathematicians talk, they want people to believe that they are
telling the truth. Again, that cannot be denied. When they claim to
have proven a theorem, they are implicitly claiming that the theorem
is actually true. That is, they are claiming that proofs are
compelling arguments. They are claiming that whatever formal proof
system they are using must always lead to true theorems. So they are
implicitly claiming that their formal system is consistent! So if the
mathematicians were to claim that they had a proof that their formal
system is consistent, we would have to point out to them that they are
implicitly making the claim that their formal system is consistent the
moment they claim that they can use the formal system to prove
theorems, so they are claiming to prove something they claimed to be
true in the first place; their argument would be circular. And
circular arguments are not to be accepted as compelling arguments.

So what have we shown? We have shown that once it is acknowledged that
mathematics does have a strong connection to truth and reality--i.e.
that mathematicians are implicitly making the claim that their
theorems are true--then Godel's theorem (the claim that no formal
system can prove its own consistency) is merely an immediate and
rather trivial implication of the notion of proof. But we can go
beyond that. Since Godel's theorem itself is a direct implication of
the mathematicians' implicit claim that they tell the truth, we have
to admit that a formal proof of Godel's theorem would be circular. So
Godel's proof is something of a joke.

Note carefully what I'm saying here: if we accept the claim that
mathematics has strong connections to truth and reality, and we agree
that we can use that claim when we reason about mathematics, then
Godel's proof is something of a joke. If we deny the claim that
mathematics has strong connections to truth and reality, then
mathematics itself, including Godel's theorem, is something of a joke.

If we were to take the view that mathematics is a science, then the
triviality of Godel's theorem and the absurdity of the proof would be
immediately apparent. It is part of the scientists' view of their
subject that we must always be open to the possibility that our best
theories will prove to be quite wrong when we move on to unexplored
territory. If we apply that to mathematics, it says that we must be
open to the possibility that our theories (formal systems) may be
found to be inconsistent when we gain the ability to examine them in
much greater depth than we are currently able to (i.e. when we create
an artificial intelligence a million times more intelligent than we
are, we will ask it to re-examine the question of the consistency of
our current theories). So when we take the view that mathematics is a
science, we are forced to acknowledge that we simply cannot prove that
we will never come across an inconsistency in our theories. And of
course, any scientist would say that it is absurd to think that we
need a formal proof to tell us that we should be open to the
possibility that our theories may ultimately prove to be wrong.

To be sure, it definitely does make sense to view mathematics as a
science: mathematics may be defined as the science of phenomena
observable in the world of computation. To clarify this, it helps to
think of the computer as the mathematicians' microscope which helps us
peer deeply into the world of computation, and then mathematics
studies the phenomena we observe when we look through that microscope.
When we think of mathematics in this way, then our theories must make
predictions about observable phenomena, and those predictions can be
tested, and hence mathematics fits the criteria for being a science.
Certainly all of the mathematics that has strong connections to truth
and reality, including all of the mathematics used in science and
technology, falls within the scope of this view.
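On this "computer as microscope" view, a conjecture makes predictions about observable computational phenomena that can be tested one instance at a time. A hedged illustration using the Collatz conjecture (my example, not from the post; the step bound and test range are arbitrary):

```python
def collatz_reaches_one(n, max_steps=10_000):
    # Follow the 3n+1 iteration and report whether it reaches 1.
    steps = 0
    while n != 1 and steps < max_steps:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return n == 1

# An "observation through the microscope": the prediction holds for every
# starting value tested here; a single counterexample would refute it.
assert all(collatz_reaches_one(n) for n in range(1, 1000))
```

The conjecture plays the role of a scientific theory: each run of the program is an experiment that could, in principle, falsify it.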

So why do we believe that mathematics is consistent? And is it even
sensible to try to reason about the consistency of mathematics? For
starters, consider this: we simply cannot believe that we have the
ability to reason consistently about the possibility that we lack the
ability to reason consistently (and you might want to read that a
second time). So simply by virtue of the empirical fact that we are
self-aware, and aware that we can reason, we are compelled to believe
that we have the ability to reason consistently, but we cannot
"reason" about our own consistency, since we cannot reason about the
possibility that we lack the ability to reason consistently! So if
mathematics were innate--that is, if mathematics were part of our own
model of who we are--then we would be forced to agree that we must
believe that mathematics is consistent, and that we cannot reason
about its consistency. But that's precisely the case! When we reason
about how our thought processes work, we come to the conclusion that
every thought process we have can be modelled on a digital computer
(that's not to say that our brains are digital computers, but there's
an equivalence between what a computer can do and what we can do). And
furthermore, when we reason about what computers are, we come to the
conclusion that our best models of computation must include a model of
arithmetic. For example, real world computers use arithmetic in a very
basic and essential way (e.g. for computing memory addresses), and so
we are forced to believe that the rules (axioms) of arithmetic are
part of our model of computation (and we would arrive at the same
conclusion if we use Turing machines as our model for computation).
So what all this implies is that if we know who we are--that is, if
our models of who we are are correct--then we are compelled to believe
that the basic axioms of mathematics are consistent, and that we
cannot reason about that consistency. And conversely, if we don't
know who we are (i.e. if our model of who we are is wrong), then we
would have to admit that we are confused and we really cannot be sure
that we can reason about anything at all. In other words, the
consistency of basic mathematics (i.e. the mathematics implicit in our
models of computation) is implicit in our self-awareness, and trying
to use reason to gain additional insight about that consistency is a
rather silly thing to do (and hence, Godel's proof is rather silly).
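The remark above about real-world computers using arithmetic for memory addresses can be sketched concretely. The layout below (a base address and fixed-size elements, C-array style) is a typical convention I am assuming for illustration, not something stated in the post:

```python
def element_address(base, index, item_size=4):
    # The address of a[index] in a C-style array: base + index * size.
    # Even this most basic machine operation presupposes the rules of
    # arithmetic (stepping by item_size for each increment of index).
    return base + index * item_size

assert element_address(0x1000, 0) == 0x1000
assert element_address(0x1000, 3) == 0x100C  # 0x1000 + 3 * 4
```

Any model of what such a machine does must therefore already contain a model of arithmetic, which is the point being made.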


Clearly I'm making a rather remarkable claim which I can't really
expect the reader to immediately accept. I'm claiming that a huge
portion of modern logic (and also of the foundations of mathematics)
is silly. I'm claiming that logic and the foundations of mathematics
have lost touch with truth and reality; they've become mere games
played with symbols, only vaguely resembling an honest search for
truth. So here are some things to think about which might help the
reader come to agree with me.

1) If the purpose of mathematics and logic is to provide tools to help
us reason about the real world, then shouldn't the most important
advance in logic in the twentieth century help us reason about the
real world? That is, after seventy five years, shouldn't some kind of
practical application in the sciences or in technology have emerged
for Godel's theorem? Do philosophical arguments to the effect that
artificial intelligences will never be as intelligent as humans
qualify as practical applications?

2) Is Godel's proof falsifiable? If we were to find a reasonably
powerful consistent formal system that could "prove" its own
consistency, then that might be a falsification, but how would we
determine that the formal system is consistent? And if Godel's proof
is safe from falsification, in what sense does it tell us anything
about the real world?

3) Godel believed that an analysis of the Liar paradox leads to a
vicious circle which cannot be broken using standard logic, and then
he based his proof on his own understanding of that "paradox". Is it
plausible that the Liar paradox reveals innate flaws in our ability to
reason? Shouldn't we be highly suspicious of Godel's proof?

4) Think about how a sincere seeker of truth would react to criticism,
and compare that to the way the experts on Godel's theorem react to
this article. Do sincere seekers of truth call critics "crackpots"?

aatu.kos...@xortec.fi

Sep 6, 2007, 11:07:10 AM
david petry wrote:
> Godel's proof is something of a joke.

It is well known Gödel was something of a jester. Ultimately, life is
nothing but a dark absurd joke with no punch-line, as he once put it
before starving to death.

--
Aatu Koskensilta (aatu.kos...@xortec.fi)

"Wovon man nicht sprechen kann, darüber muss man schweigen"
- Ludwig Wittgenstein, Tractatus Logico-Philosophicus

Peter_Smith

Sep 6, 2007, 11:29:32 AM

> Godel's proof is something of a joke.

Well, Gödel proved rather a lot of things. But you seem to mean his
first incompleteness theorem. Which is, in one form,

If T is a recursively axiomatized theory which can represent all
recursive functions, then there is a sentence G of Goldbach type such
that, if T is consistent then T doesn't prove G, and if T is
omega-consistent then T doesn't prove not-G either.

That is a technical result in mathematics about formalized theories,
which has a number of different mathematical proofs (including Gödel's
relatively straightforward one -- straightforward when you spot the
proof-strategy!). Would you like to explain to us very clearly why the
proof is "trivial"?

It is worth adding that there are other proofs than Gödel's, some of which
don't involve even the faintest whiff of anything vaguely related to
the Liar paradox. For example, Gödel's theorem is a pretty immediate
consequence of Kleene's Normal Form theorem about recursive functions
(as explained in Ch.33 of my book).
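A sentence "of Goldbach type" is one whose individual instances can each be checked mechanically, so each instance of the claim has a machine-testable face even though the universal claim cannot be verified by finitely many checks. A small sketch of what that instance-checking looks like (the bound is arbitrary; this is an illustration I am adding, not part of any proof):

```python
def is_prime(n):
    # Trial division; adequate for small illustrative inputs.
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_holds(n):
    # One instance of Goldbach's conjecture: an even n > 2 is the sum
    # of two primes. Each instance is a finite, mechanical check.
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n))

# A Goldbach-type sentence is refutable by a single failing instance,
# but verifying it outright would take infinitely many checks.
assert all(goldbach_holds(n) for n in range(4, 1000, 2))
```

This is what makes such sentences a natural home for the Gödel sentence G: a counterexample would be a concrete, checkable fact.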

www.godelbook.net

Jesse F. Hughes

Sep 6, 2007, 1:14:29 PM
david petry <david_lawr...@yahoo.com> writes:

> When we talk, we want people to believe that we are telling the truth.
> That simply cannot be denied. That is, when we communicate, we are
> implicitly claiming that we are telling the truth. That's one of the
> ground rules of communication. So, likewise, when we try to understand
> what someone is saying, we take it as implicit that that person is
> claiming to be telling the truth.

I have no doubt this is correct. Without exception.

--
Mo memorized the dictionary
But just can't seem to find a job
Or anyone who wants to marry "Memorizin' Mo",
Someone who memorized the dictionary. Shel Silverstein

david petry

Sep 6, 2007, 4:13:36 PM
On Sep 6, 8:07 am, aatu.koskensi...@xortec.fi wrote:

> It is well known Gödel was something of a jester. Ultimately, life is
> nothing but a dark absurd joke with no punch-line, as he once put it
> before starving to death.

That's funny. I was under the impression Godel was rather humorless.
And I didn't know he starved to death. Oh well.


david petry

Sep 6, 2007, 4:17:31 PM
On Sep 6, 8:29 am, Peter_Smith <ps...@cam.ac.uk> wrote:

> Would you like to explain to us very clearly why the
> proof is "trivial"?

Didn't I say that the theorem (not the proof) is trivial?


david petry

Sep 6, 2007, 4:30:34 PM
On Sep 6, 10:14 am, "Jesse F. Hughes" <je...@phiwumbda.org> wrote:

> david petry <david_lawrence_pe...@yahoo.com> writes:
> > When we talk, we want people to believe that we are telling the truth.
> > That simply cannot be denied. That is, when we communicate, we are
> > implicitly claiming that we are telling the truth. That's one of the
> > ground rules of communication.

> I have no doubt this is correct. Without exception.

I forgot to mention that although it is one of the ground rules of
communication, flakes, liars, jokesters, and hypocrites don't always
obey the ground rules.


MoeBlee

Sep 6, 2007, 4:34:28 PM

However lacking in humor he may have been, you just contributed to
your own reputation for the same.

MoeBlee


MoeBlee

Sep 6, 2007, 4:35:45 PM

And people who are none of those NEVER use irony of any sort
WHATSOEVER.

MoeBlee

Peter_Smith

Sep 6, 2007, 5:10:32 PM

You did -- though normally to say a theorem is trivial is to say that
it is an immediate result, i.e. has a trivial proof.

OK: what exactly is trivial about the theorem that if T is a
recursively axiomatized theory which can represent all
recursive functions, then there is a sentence G of Goldbach type such
that, if T is consistent then T doesn't prove G, and if T is
omega-consistent then T doesn't prove not-G either?

Jesse F. Hughes

Sep 6, 2007, 6:53:06 PM
david petry <david_lawr...@yahoo.com> writes:

Right. Liars don't obey the rules.

So then why did you claim that the Liar's Paradox is implicitly true?

It's a pretty puzzling analysis.

--
Jesse F. Hughes

"How can I miss you when you won't go away?"
-- Dan Hicks and his Hot Licks

Boris Borcic

Sep 6, 2007, 11:01:24 PM
david petry wrote:
>
> And if Godel's proof
> is safe from falsification, in what sense does it tell us anything
> about the real world?

Before its first proof is produced, the statement of any theorem-to-be is just a
conjecture and it is falsifiable in your sense. Once a proof is obtained...
well, you might repeat the Michelson-Morley experiment and fail to verify that
the speed of light is a constant... and you might repeat, say, what you call
Goedel's proof, and fail to convince your audience that it demonstrates the
corresponding theorem statement.

Curt Welch

Sep 6, 2007, 11:19:50 PM
david petry <david_lawr...@yahoo.com> wrote:
> To be sure, it definitely does make sense to view mathematics as a
> science: mathematics may be defined as the science of phenomena
> observable in the world of computation.

Mathematics is the science of formal language. Language is the real
subject of mathematics, not computation machines. Anything that can be
expressed in formal language is the subject of mathematics. The study of
Mathematics is the study of what can be expressed with a formal language.

Once we scientifically study, and understand, the limits and the powers of
formal language, we then use our understanding of that language as a tool
for understanding the universe, just like we use a tape measure as a tool
for understanding aspects of the universe. We place it side by side with
some aspect of the universe, and if it matches, we can then use our
knowledge of the language, to help us understand that aspect of the
universe.

Godel's proof is neither stupid nor a joke. It's a formal proof about a
very real aspect of formal language. More simply, it's a proof about what
happens when you try to make language reference itself, and then put a NOT
in the self-reference loop. Or something like that.

I've never really tried to figure it all out, but I believe if you approach
it from the understanding that it's just language we are studying, you can get
a better grasp on what Godel's proof really shows.
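The "NOT in the self-reference loop" can be imitated, loosely, in executable form: a predicate applied to itself through a negation never settles on an answer. This is only an analogy I am adding, not Gödel's actual construction:

```python
import sys

def not_self(f):
    # "f does NOT hold of itself" -- the negation inside the loop.
    return not f(f)

# Asking whether not_self holds of not_self: each evaluation defers to
# another evaluation of the same question, so no answer is ever produced.
sys.setrecursionlimit(200)  # keep the inevitable blow-up small
try:
    answer = not_self(not_self)
except RecursionError:
    answer = "undetermined"
print(answer)  # "undetermined": the evaluation never bottoms out
```

Gödel's trick, by contrast, arithmetizes the self-reference so that the sentence is a perfectly ordinary statement about numbers rather than a non-terminating loop.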

> To clarify this, it helps to
> think of the computer as the mathematicians' microscope which helps us
> peer deeply into the world of computation, and then mathematics
> studies the phenomena we observe when we look through that microscope.

Yes, except I think you are wrong. This is not what mathematics is about.

> When we think of mathematics in this way, then our theories must make
> predictions about observable phenomena, and those predictions can be
> tested, and hence mathematics fits the criteria for being a science.

That's all very true if your assumption that mathematics is the study of
computation were correct, but since your assumption is wrong, your
conclusions have turned out to be wrong.

The true scope of mathematics is anything that can be expressed with formal
language. If you limit the scope to only those language constructs which
describe computation machines, then you have placed a limit on the scope of
what type of language you are going to study. Mathematics places no such
limits on the scope of what they study. There are good reasons to explore
that subset of mathematics (Turing machines) and the implications which can
exist in that domain. But real mathematics doesn't limit itself to that.

It's very useful to look at this limited subset of mathematics simply
because it connects so much better to the physical world. It limits
mathematics to the description of things that can actually exist. However,
real mathematics has no such limit. Real mathematics is only about the
description, and not about the thing it is describing. It's a study of
what can be described with language, not a study of what can exist in the
universe.

In Mathematics, it's perfectly valid to talk about a rock with no mass.
It's valid, because I can produce the language to describe it:

Given a rock with no mass (m=0), in a universe where F=ma is true, and
a is > 0, how long will the rock take to hit the ground when we
drop it from one meter?

The study of math is the study of language like the above. It's not the
study of rocks dropping to the ground. It's the study of what is implied
by all forms of formal language.
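The massless-rock example can even be "run": under constant acceleration a, the fall time from height h is t = sqrt(2h/a), and the mass never appears in the formula, so the description computes happily for an object that cannot exist. A sketch with assumed values (h = 1 m, a = 9.8 m/s^2):

```python
import math

def fall_time(height, accel):
    # Kinematics under constant acceleration: h = (1/2) * a * t^2,
    # so t = sqrt(2h / a). Note that the mass m appears nowhere.
    return math.sqrt(2 * height / accel)

# The "impossible" massless rock falls from one meter exactly as any
# other object would, because the describing language doesn't care.
t = fall_time(1.0, 9.8)
```

Which is the point being made: the language and its consequences are the subject matter, independently of whether the described object is physically possible.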

The fact that such a thing can't exist in the universe, doesn't stop us
from looking at the language and talking about it. The subject of
mathematics, is the words and symbols themselves, not the thing they try to
describe.

> Certainly all of the mathematics that has strong connections to truth
> and reality, including all of the mathematics use in science and
> technology, falls within the scope of this view.

The only connection between math and reality is that the language exists in
reality. The things described by language, don't have to exist in reality.

One of the first problems we get into with language, which separates us
from reality, is the ability to describe infinity. Then, we add to that,
the ability for one set of words, to make reference to other language,
which describes infinity. What does such language imply? This is the fun
stuff we get into when we create language describing infinite sets.

> So why do we believe that mathematics is consistent? And is it even
> sensible to try to reason about the consistency of mathematics? For
> starters, consider this: we simply cannot believe that we have the
> ability to reason consistently about the possibility that we lack the
> ability to reason consistently (and you might want to read that a
> second time). So simply by virtue of the empirical fact that we are
> self-aware, and aware that we can reason, we are compelled to believe
> that we have the ability to reason consistently, but we cannot
> "reason" about our own consistency, since we cannot reason about the
> possibility that we lack the ability to reason consistently!

I have no clue what you think "reason consistently" means. However, your
desire to tie together language, and human intelligence, I think is another
fundamental flaw in your reasoning. Far too many people seem to have made
the mistake of assuming that since humans do so much productive and
interesting work by manipulating language that the fundamental nature of
language must somehow be the fundamental nature of human intelligence.
They couldn't be further from the truth. Human intelligence is a problem
of physical behavior. It's about moving our arms and legs in a way that
improves our odds of survival. It's about flapping our lips in a way that
improves our odds of survival. It's not about language (per se), it's about
producing the best possible body movements. It so happens that at the
lowest levels, the brain is using a type of language to solve this problem
(neural and chemical signals), but when we look at understanding the
problem of human intelligence in forms of producing useful body motions, we
get a very different perspective than the abstract and often vague ideas
that arise when we think of intelligence as the processing of information in
the form of language (like the information stored in books).

> So if
> mathematics were innate--that is, if mathematics were part of our own
> model of who we are--then we would be forced to agree that we must
> believe that mathematics is consistent, and that we cannot reason
> about its consistency. But that's precisely the case! When we reason
> about how our thought processes work, we come to the conclusion that
> every thought process we have can be modelled on a digital computer
> (that's not to say that our brains are digital computers, but there's
> an equivalence between what a computer can do and what we can do).

There are many people that would make a very strong argument against that
view. I basically agree with the statement, but there are too many vague
concepts in what you wrote to allow us to be clear about what we would be
agreeing with. You don't for example address at all what you might
actually mean by "an equivalence". And you don't address precisely enough
what it means to "model a thought process". etc. Basically, because we
can't reach agreement on what the brain is doing, or what human
consciousness is, we can't reach any agreement on the type of argument you
are trying to make. Your argument is no better than speculation. It's no
better than speculating there's a God which created the universe in 7 days
and he did it all 10,000 years ago. It's certainly not a logical argument,
even if you present it as if it were one, since your logic is based on
assumptions you won't be able to get agreement on.

> And
> furthermore, when we reason about what computers are, we come to the
> conclusion that our best models of computation must include a model of
> arithmetic. For example, real world computers use arithmetic in a very
> basic and essential way (e.g. for computing memory addresses), and so
> we are forced to believe that the rules (axioms) of arithmetic are
> part of our model of computation (and we would arrive at the same
> conclusion if we use Turing machines as our model for computation).
> So what all this implies is that if we know who we are--that is, if
> our models of who we are are correct--then we are compelled to believe
> that the basic axioms of mathematics are consistent, and that we
> cannot reason about that consistency. And conversely, if we don't
> know who we are (i.e. if our model of who we are is wrong), then we
> would have to admit that we are confused and we really cannot be sure
> that we can reason about anything at all. In other words, the
> consistency of basic mathematics (i.e. the mathematics implicit in our
> models of computation) is implicit in our self-awareness, and trying
> to use reason to gain additional insight about that consistency is a
> rather silly thing to do (an hence, Godel's proof is rather silly).

The one point that's quite valid in your argument is that since we study
and create science and mathematics using our mind, and since we don't have
a consensus on what the mind is, we don't have a way to know for sure what
math and science are. That is, math and science are creations of the mind,
but since we don't know what the mind is, we have to stop there. We just have
to say they exist, and until we solve the AI problem, the mind-body problem,
and all those related fun little issues that are still unresolved in our
society, we can't do a very good job of saying what math really is.

I argued it's the study of formal language. But note that I didn't bother
to try and define what formal language is or what it means for something to
be a "study of". This is where we run into the mind. And if we can't agree
on what the mind is, we can't agree on what "study of" means, and we can't
agree on what it means for something to be a study of language. We are
simply left hanging. We talk about stuff as if we understood it, when in
fact we don't.

> Clearly I'm making a rather remarkable claim which I can't really
> expect the reader to immediately accept.

It's because your logic is based on a number of assumptions which seem good
(maybe even obvious) to you, but which, for the most part, most people
reading your post have never even thought about. I believe if you examine
your arguments more closely, you will understand how your logic leads to valid
conclusions, but they are based on axioms you simply choose to believe were
true without proof. They were assumptions that just seemed obvious to you
but which are not actually obvious at all.

I have a lot of my own answers about what human intelligence is and what
consciousness is. But, like your arguments, they are all based on
assumptions that I happen to like - assumptions I like so much, that they
cause me to write, and talk, as if they were facts, when they aren't facts
at all.

> I'm claiming that a huge
> portion of modern logic (and also of the foundations of mathematics)
> is silly. I'm claiming that logic and the foundations of mathematics
> have lost touch with truth and reality; they've become mere games
> played with symbols, only vaguely resembling an honest search for
> truth. So here are some things to think about which might help the
> reader come to agree with me.

You hit the nail right on the head when you said mathematics is a mere game
played with symbols. That's exactly what it is. It's the study of
concepts which can be expressed with symbols. There never was a
requirement that it only describe things that can exist. About the time
they started playing with concepts of infinity is when I think they
realized there was a split between reality, and what you could describe
with formal language. Before then, there was probably an underlying belief
that the study of mathematics was a study of reality.

> 1) If the purpose of mathematics and logic is to provide tools to help
> us reason about the real world, then shouldn't the most important
> advance in logic in the twentieth century help us reason about the
> real world? That is, after seventy five years, shouldn't some kind of
> practical application in the sciences or in technology have emerged
> for Godel's theorem?

I think this just shows that there are properties of language that simply
do not map to the real world. And as such, those properties of language
serve no known practical use as a tool in understanding reality.

> Do philosophical arguments to the effect that
> artificial intelligences will never be as intelligent as humans
> qualify as practical applications?

The connections some people make between Godel's proof and AI are total
hogwash in my view. It's roughly based on yet another error similar to the
one you are making. It's based on the idea that formal language is the
foundation of intelligence, instead of physical reality being the
foundation of intelligence. It's the same error you made, but in the other
direction. You think mathematics must be some type of study of physical
reality, when it's not (other than the fact that we need physical language
processing machines for there to be language in the first place).

> 2) Is Godel's proof falsifiable? If we were to find a reasonably
> powerful consistent formal system that could "prove" its own
> consistency, then that might be a falsification, but how would we
> determine that the formal system is consistent? And if Godel's proof
> is safe from falsification, in what sense does it tell us anything
> about the real world?

Godel's proof is safe from falsification. It tells us nothing about the
real world other than telling us something about the nature of language,
which is itself, part of the real world.

> 3) Godel believed that an analysis of the Liar paradox leads to a
> vicious circle which cannot be broken using standard logic, and then
> he based his proof on his own understanding of that "paradox". Is it
> plausible that the Liar paradox reveals innate flaws in our ability to
> reason? Shouldn't we be highly suspicious of Godel's proof?

It reveals innate flaws in what can be done with formal language. It has
nothing to do with reason, because formal language is not the basis of
reason. It's only one of the tools we use to reason with. It shows an
innate flaw (or limitation really) in what can be done with formal
language.

> 4) Think about how a sincere seeker of truth would react to criticism,
> and compare that to the way the experts on Godel's theorem react to
> this article. Do sincere seekers of truth call critics "crackpots"?

There is truth about the universe, such as how planets like to move in
space, and there is truth about how computation machines works, and there
is truth about what can be described with language. All of these are
different scientific truths about the nature of the universe because all
these things are part of the universe. However, language, is not a
computer, and a computer is not a planet moving through space. Don't get
these things confused, or else you won't correctly understand any of them.

There are a few points where I think we see some basic assumptions about
math diverting from reality. A fundamental tenet of "formal language" as
studied in the field of mathematics is the idea of absolute truth. It
starts at the very lowest levels of math. We make a fundamental
assumption about the nature of language. We assume that two different
symbols, like the digit 1, and the digit 2, are absolutely different. We
assume that with 100% certainty, these two digits will always be different
symbols in the language. The belief in the existence of absolute truth is
a founding principle for the entire field of mathematics. Yet all
evidence about reality points to the idea that absolute truth is a myth.
We can't make anything work absolutely. We can only make it work with some
high degree of certainty. The abstract languages we use to describe
computers (Turing machines and the like) are premised on absolute truth.
We assume that a computer, when it adds one and one together, will always
(absolutely) get the answer two. But no hardware we know how to build can
do this. It can only get the right answer most of the time.

Why then, you might ask, do we spend so much time on mathematics, when
its founding assumption is likely to be a diversion from reality?

It's because even with this problem, it's still extremely useful.
Computers, when built correctly, perform as if they were absolutely
perfect machines, most of the time. And since they work that way most of
the time, if we assume that they will always follow the rules of absolute
truth, then we will be right about what the machine will do, most of the
time. How right we are is limited only by how closely we can make the
machine behave absolutely (how many times can it add 1 and 1 and get 2,
instead of some other answer?). The answer, as we all know, is that we
can build computers that produce the answer we want so many times without
mistake that they are still very useful to us.
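That "most of the time" can be put in numbers. A minimal sketch,
assuming a purely made-up per-operation fault rate (the 1e-18 figure is
illustrative, not a real hardware spec):

```python
import math

per_op_fault = 1e-18   # assumed, purely illustrative per-operation fault rate
ops = 10**12           # a trillion additions

# Probability that every single operation is correct: (1 - p)^n,
# computed in log space so (1 - 1e-18) doesn't round to exactly 1.
p_all_correct = math.exp(ops * math.log1p(-per_op_fault))
print(p_all_correct)   # very close to 1: effectively "absolute" in practice
```

Even a trillion operations at that fault rate are overwhelmingly likely
to all come out right, which is why treating the machine as absolutely
correct works so well in practice.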

It also doesn't do us much good (in general) to think in terms of
computers making mistakes only once out of 10^100 calculations when we
write software for them. We simply assume they will do what we ask every
time, and ignore the fact that real computers can't always produce the
right answer.

But here's the problem with absolute truth. In our mind, absolute truth
does seem to exist. We absolutely, and always, know that one is not two.
Or at least it seems that this is a fundamental aspect of our mind. But is
it? Lester would like to declare that absolute truth does exist, and he
has chosen to believe that it's the foundation of all mental reality (or
something like that). But since we, as a society, have not yet figured
out (or even come close to some type of agreement in principle on) what
the mind is and what it's able to do, we are left unable to answer this
most simple question about absolute truth. Does it exist, or not?

Even though we have some very strong evidence that absolute truth doesn't
exist in the material universe (the Heisenberg uncertainty principle, for
one example), people aren't willing to rule out the idea that it might
exist in the mental universe, because we don't know the link between the
material universe (the body) and the mental universe (the mind). So maybe
mental reality does have absolute truth, even if physical reality doesn't?

Of course, if you are a basic materialist, and believe that mental reality
is nothing more than the physical operation of a physical brain, you are
forced to also believe that absolute truth doesn't exist in the mind.

And the problem with all this comes from trying to define what language
is. Can language exist without there being a machine to produce it and
use it? If it can't, then we are forced to face the fact that language
must ultimately be an abstract way of talking about language-processing
machines (like humans). So to really understand what language is, we have
to understand the true nature of the machines that produce it.

Let's assume that computers do have the same power to use, and produce,
language that humans have. Since we can't agree on what powers humans
have, we can't agree that this is a valid assumption, but let's just
explore the idea as if it's valid.

At this point, we have done what you wanted to do. We have linked the
study of math, to the study of computers. But we have done it in a very
different way. We are no longer talking about language which describes
what the computer is doing. We are talking about language produced by the
computer. What, for example, does it mean for the computer to print out
the message, "assume we have a rock with zero mass", as it's talking to
another computer? I've not tried to explore the answer to this question.
But if you want to understand the domain of mathematics in physical,
objective, scientific terms, you have to first find a way to define what
language is, free from any associations to the human mind, or human
consciousness. Only then can you make some real assault on the validity
and usefulness of different areas of mathematics, and understand whether
there is some reason to limit the scope of mathematics simply because
going beyond some line makes the language meaningless, while staying
inside it gives the language some extra amount of power.

If such a line exists, I suspect the problems happen when the language
starts to talk about an infinite amount of language existing, and when the
language creates self-referencing loops - especially when there is a NOT
included in the loop.

--
Curt Welch http://CurtWelch.Com/
cu...@kcwc.com http://NewsReader.Com/

david petry

unread,
Sep 6, 2007, 11:40:59 PM9/6/07
to
On Sep 6, 2:10 pm, Peter_Smith <ps...@cam.ac.uk> wrote:
> On 6 Sep, 21:17, david petry <david_lawrence_pe...@yahoo.com> wrote:
>
> > On Sep 6, 8:29 am, Peter_Smith <ps...@cam.ac.uk> wrote:
>
> > > Would you like to explain to us very clearly why the
> > > proof is "trivial"?
>
> > Didn't I say that the theorem (not the proof) is trivial?
>
> You did -- though normally to say a theorem is trivial is to say that
> it is an immediate result, i.e. has a trivial proof.

Right. The claim I was making is that if we identify "proof" with
"compelling argument", then the theorem that a formal system cannot
prove its own consistency is an immediate result. And furthermore, if
we don't identify proof with "compelling argument", then mathematics
loses touch with reality.


> OK: what exactly is trivial about the theorem that if T is a
> recursively axiomatized theory which can represent all
> recursive functions, then there is a sentence G of Goldbach type such
> that, if T is consistent then T doesn't prove G, and if T is
> omega-consistent then T doesn't prove not-G either??

That wasn't the subject of the article I wrote, but as I have argued
in other articles posted in sci.math, simple and almost trivial
probabilistic reasoning leads to the conclusion that that theorem must
be true, and even that it is extremely easy to generate infinitely
many simple and concrete sentences logically independent of T and also
of each other.


david petry

unread,
Sep 6, 2007, 11:47:25 PM9/6/07
to

Not really. Even though Fermat's Last Theorem has been proved, we can
still say that the theorem could be falsified by finding a solution in
positive integers to the equation a^p + b^p = c^p with p > 2, and then we
would be forced to admit the proof is flawed.
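That falsification criterion can at least be made mechanical for small
ranges. A hedged sketch (the function name and search bounds are my own,
and by Wiles's proof the search must come up empty):

```python
def fermat_counterexamples(max_val, max_exp):
    """Brute-force search for positive integers a <= b and exponents
    p > 2 with a**p + b**p == c**p for some integer c."""
    hits = []
    for p in range(3, max_exp + 1):
        for a in range(1, max_val + 1):
            for b in range(a, max_val + 1):
                s = a**p + b**p
                c = round(s ** (1.0 / p))
                # test neighbours of the float estimate to dodge rounding error
                for cand in (c - 1, c, c + 1):
                    if cand > 0 and cand**p == s:
                        hits.append((a, b, cand, p))
    return hits

print(fermat_counterexamples(50, 5))   # []
```

The theorem is falsifiable in the sense that this search *could* return
a hit; the proof tells us it never will.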


david petry

unread,
Sep 7, 2007, 12:32:26 AM9/7/07
to
On Sep 6, 8:19 pm, c...@kcwc.com (Curt Welch) wrote:

> david petry <david_lawrence_pe...@yahoo.com> wrote:
> > To be sure, it definitely does make sense to view mathematics as a
> > science: mathematics may be defined as the science of phenomena
> > observable in the world of computation.
>
> Mathematics is the science of formal language. [...]

Whatever.

The basic assumption I make is that mathematics is part of mankind's
quest for truth. That is, we're trying to understand the world we
observe, and the world we observe includes both the physical world and
the world of the mind. If we define mathematics as the science of
formal language, we lose touch with that quest for truth.

What I get out of reading your article is that you're not willing to
join me in the quest for truth. I have no reason to argue with you.


Curt Welch

unread,
Sep 7, 2007, 1:16:26 AM9/7/07
to

Well, you would have to explain to me what you think you are questing
for before I could actually answer whether I wanted to join you. The
truth I'm interested in is improving our understanding of the nature of
the universe we exist in.

I am very much interested in the quest for that type of truth (aka a more
accurate understanding of our universe - which includes a more accurate
understanding of ourselves, since we are part of the universe). That's
exactly why I took the time to write the long reply - to help you (and me,
and others) see the truth about what mathematics is.

I'm posting from c.a.p. I don't read sci.math or sci.logic. Both of those
groups are ultimately just groups dedicated to the study of formal
language, and for the most part, such work doesn't interest me very much.
I find the tools they produce extremely useful, and it would be good for
me to know more math than I do, but currently, I'm more interested in
exploring ideas that help us make machines act more like humans than I am
in helping mathematicians extend the outer edges of our understanding of
formal language. I like to work with machines I can touch (even if it's
just a keyboard and mouse I spend most of my time touching :)).

I believe that the problem of making machines act like humans is
fundamentally a statistical problem (a machine learning problem), and as
such, all the tools of mathematics from the areas of statistics are of
much use to me. But there's no confusion in mathematics about what a mean
is, or what a standard deviation is. We have all the truth we need there.
The confusing stuff in mathematics, like the significance of Godel's
proof, is just way beyond anything that's important to me. That's
something I'll let the mathematicians play with. I'll just step in and
try to add some clarity when I think I can provide it, by pointing out
how I think the field of math fits into the big puzzle of the universe
when I see someone who seems to have it confused.

On the issue of human intelligence, and consciousness, I believe I already
know the truth about these mysteries. And on the issue of what type of
machine we need to build to duplicate human intelligence and human
consciousness, I also believe I understand what we need to do. So to me,
there are no big mysteries left in these areas. There are only low level
implementation details that still need to be created to make it work.
There's no way to currently know if what I believe will turn out to be the
truth, but it's my current best guess at the truth, and the one I use to
guide my thoughts and actions in my attempts to build AI.

So, my question to you is exactly what truth are you looking for?

Someone else (maybe it was you? - there seems to be much commonality in
your thoughts) posted not long ago about how the failure of mathematics
has caused a failure in our ability to solve AI. I think any attempt to
solve AI by thinking in terms of math is doomed from the start. Math, as
you point out, is not about physical reality. AI, however, is all about
the behavior of a physical machine - the human body - and more important,
the human brain and how it's able to make the human body move. It makes
my fingers move and type out these messages, and that's a very
interesting set of motions to understand. If you want to solve AI, the
last thing you should be wasting your time on is trying to fix problems
in higher level math. If you want to figure out how the brain works,
study neuroscience, or study EE and signal processing systems. Study
computers and what you can do to make them control the motion of machines
like robots.

If you think self awareness is something other than the natural result of
what happens when you build an advanced adaptive signal processing control
system into a body, then I suspect you are approaching the problem from the
wrong direction.

abo

unread,
Sep 7, 2007, 1:25:05 AM9/7/07
to
On Sep 6, 4:50 pm, david petry <david_lawrence_pe...@yahoo.com> wrote:

Is "There is an infinity of primes" falsifiable? If so, how?

Real-world computers do not obey the Successor Axiom, since there is a
limit to their memory. While Turing Machines have infinite memory, you
seem to assume that they must exist (or exist as models), and this does
not seem to be falsifiable. So you are handwaving in the most important
part of the proof - saying what exactly is consistent, what this
mathematics (which is consistent) is.
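The point about the Successor Axiom can be made concrete. A minimal
sketch, assuming a hypothetical 8-bit register purely for illustration:

```python
WIDTH = 8            # a hypothetical 8-bit register, for illustration only
MOD = 1 << WIDTH     # 256 representable values

def succ(n):
    """Successor as fixed-width hardware computes it: wraps on overflow."""
    return (n + 1) % MOD

# Peano's axioms demand that 0 is not the successor of any number;
# on a wrapping register, it is.
print(succ(MOD - 1))   # 0
```

Any fixed amount of memory gives the same picture at a larger modulus,
which is the sense in which no physical machine satisfies the axiom.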

aatu.kos...@xortec.fi

unread,
Sep 7, 2007, 1:44:56 AM9/7/07
to
Curt Welch wrote:
> Mathematics is the science of formal language. Language is the real
> subject of mathematics, not computation machines. Anything that can be
> expressed in formal language is the subject of mathematics. The study of
> Mathematics is the study of what can be expressed with a formal language.

A very idiosyncratic view. In the ordinary sense the "study of what
can be expressed with a formal language" is a rather marginal part of
mathematics.

> I've never really tried to figure it all out, but I believe if you approach
> it from the understanding that it's just language we are studying, you can
> get a better grasp on what Godel's proof really shows.

How so?

Peter_Smith

unread,
Sep 7, 2007, 5:17:18 AM9/7/07
to
On 7 Sep, 04:40, david petry <david_lawrence_pe...@yahoo.com> wrote:
> The claim I was making is that if we identify "proof" with
> "compelling argument", then the theorem that a formal system cannot
> prove its own consistency is an immediate result.

I see. So the consequence that a formalized theory cannot prove the
consistency of a stronger theory is also an immediate result? So the
impossibility of using finitistic reasoning (formalized in some
fragment of arithmetic) to prove the consistency of set theory, for
example, is an *immediate* result? So Hilbert -- the greatest
mathematician of his day -- and his associates working on what we now
think of as Hilbert's programme throughout the 1920s hoping to find
such consistency proofs were overlooking an immediate result, a
triviality? Wow. Somehow I think not!


>
> > OK: what exactly is trivial about the theorem that if T is a
> > recursively axiomatized theory which can represent all
> > recursive functions, then there is a sentence G of Goldbach type such
> > that, if T is consistent then T doesn't prove G, and if T is
> > omega-consistent then T doesn't prove not-G either??
>
> That wasn't the subject of the article I wrote,

My apologies. I only read the first half, where you talked of "Gödel's
proof" and the Liar paradox, before I lost the plot entirely. So I'd
assumed you were talking about the first incompleteness theorem. OK,
it is the second theorem that is in your sights. Well, that's even
less trivial. In fact it is quite a delicate matter even to state it
right in an informal way, given that there are for example non-
canonical consistency sentences for PA which are provable within PA
(Gödel himself wrote that "the consistency, even of very strong
systems S, may be provable in S"). The formal result correctly stated
is certainly non-trivial.

>but as I have argued
> in other articles posted in sci.math, simple and almost trivial
> probabilistic reasoning leads to the conclusion that that theorem must
> be true, and even that it is extremely easy to generate infinitely
> many simple and concrete sentences logically independent of T and also
> of each other.

Really? Wow!! That is some discovery! Would you like to share with us
just *one* "simple and concrete" sentence S of first-order arithmetic
of Goldbach type such that neither S nor not-S is provable in first-
order Peano Arithmetic??

===
www.godelbook.net

Boris Borcic

unread,
Sep 7, 2007, 6:28:12 AM9/7/07
to
david petry wrote:
>
> Even though Fermat's Last Theorem has been proved, we can
> still say that the theorem could be falsified by finding a solution to
> the equation a^p + b^p = c^p

Saying that Fermat's last theorem has been proved *is* saying that this is
not a possibility. Of course one can always contradict oneself, but that
is a different sense of "can".

David C. Ullrich

unread,
Sep 7, 2007, 7:52:19 AM9/7/07
to
On Thu, 06 Sep 2007 07:50:47 -0700, david petry
<david_lawr...@yahoo.com> wrote:

>
>
>What is the nature of mathematics? Does it have a connection to
>reality? Is it part of mankind's quest for truth? Or is it merely a
>game which mathematicians have chosen to play, which may have had its
>origins in mankind's quest for truth but no longer has a connection to
>such a quest? I'm asking the reader to keep such questions in mind as
>he reads the rest of this article.
>
>Godel's proof is something of a joke. When we acknowledge that
>mathematics has a strong connection to truth and reality, then we can
>easily see that Godel's theorem is quite trivial,

I'd ask you to explain this, but I see that that's been tried
and you declined.

>and the proof has no
>content. And if you deny that mathematics has a strong connection to
>truth and reality, then are you not implicitly claiming that all of
>mathematics, including Godel's proof, is something of a joke?
>
>Godel told us that he had the Liar paradox in mind when he came up
>with his proof. And that's a good place to start this analysis.

Uh, no. If we're going to analyze Godel's theorem then any
comments he supposedly made about the motivation for the
proof are a very bad place to start. Nothing below has any
actual connection to the actual theorem.

Well, most of it has no connection to the actual theorem.

[much snipped]

>
>2) Is Godel's proof falsifiable?

Yes.

>If we were to find a reasonably
>powerful consistent formal system that could "prove" its own
>consistency, then that might be a falsification,

No, that _would_ show that theorem was false.

>but how would we
>determine that the formal system is consistent?

By exhibiting a model. I mean duh, how else would we do it?

>And if Godel's proof
>is safe from falsification, in what sense does it tell us anything
>about the real world?

The idea that it's "safe from falsification" is a figment of
your lack of understanding.

>3) Godel believed that an analysis of the Liar paradox leads to a
>vicious circle which cannot be broken using standard logic, and then
>he based his proof on his own understanding of that "paradox".

Uh, no, the proof is not based on his understanding of that paradox.
The proof is based on simple definitions regarding formal systems.

It's possible that his understanding of that paradox has something
to do with how he _found_ the proof. But how he was led to the proof
has no relevance whatever to the correctness of the proof itself.

>Is it
>plausible that the Liar paradox reveals innate flaws in our ability to
>reason?

I doubt it, but supposing for the sake of argument that it does:

>Shouldn't we be highly suspicious of Godel's proof?

No, because the liar paradox is not part of the proof.

>4) Think about how a sincere seeker of truth would react to criticism,
>and compare that to the way the experts on Godel's theorem react to
>this article. Do sincere seekers of truth call critics "crackpots"?

When the "critics" are basing their criticism on notions that have
nothing at all to do with the actual content of what they're
crticising, then yes, sincere seekers of truth call them crackpots.

I mean really, this is another no-brainer, isn't it? Surely you've
been called a crackpot many times. QED.

************************

David C. Ullrich

Alpha

unread,
Sep 7, 2007, 9:57:00 AM9/7/07
to

"Curt Welch" <cu...@kcwc.com> wrote in message
news:20070907011630.870$u...@newsreader.com...

> On the issue of human intelligence, and consciousness, I believe I already
> know the truth about these mysteries.

Your mental cloud of unknowing is about a quadrillion times larger than it
was a few days ago.

--
Posted via a free Usenet account from http://www.teranews.com

Boris Borcic

unread,
Sep 7, 2007, 10:19:25 AM9/7/07
to
aatu.kos...@xortec.fi wrote:

> Curt Welch wrote:
>
>> I've never really tried to figure it all out, but I believe if you approach
>> it from the understanding that it's just language we are studying, you can
>> get a better grasp on what Godel's proof really shows.
>
> How so?
>

One might for instance propose that the common theme of limitation theorems
is to deny that formalization can achieve what it was first invented for:
expelling ambiguity.

And illustrate the idea by saying that formalization is like turning into
mechanical constraints the relationships between words of a language that
are stipulated by the definitions of the latter's standard dictionary. To
then provide a portrait of what's shown by the theorems thusly:

While the process of formalization removes ambiguity locally from the
relationship between pairs of words, it's the whole system of words formed by
the dictionary that becomes ambiguous, in the sense that it allows coordinated
reinterpretations of many words at once to leave all the (mechanically
constraining) definitions of the dictionary invariant. The new meaning of words
is given in terms of the new meanings of the words of their definitions, but the
latter do not change.

In that image, what's usually called "incompleteness" boils down to the idea
that despite leaving the dictionary invariant, the above reinterpretation really
makes a difference that can be evidenced by its action on extensions to the
dictionary.

Zooming out from there, "what Goedel's proof really shows" is that we are
collectively really naive about diagnosing the presence of ambiguity; it
strongly suggests that we wrongly tend to think of ambiguity as marginal
because we essentially only diagnose it when it is antecedent to manifest
pathologies.

That's my opinion, anyway, and I share it :)

Boris Borcic

Curt Welch

unread,
Sep 7, 2007, 10:44:41 AM9/7/07
to
aatu.kos...@xortec.fi wrote:
> Curt Welch wrote:
> > Mathematics is the science of formal language. Language is the real
> > subject of mathematics, not computation machines. Anything that can be
> > expressed in formal language is the subject of mathematics. The study
> > of Mathematics is the study of what can be expressed with a formal
> > language.
>
> A very idiosyncratic view. In the ordinary sense the "study of what
> can be expressed with a formal language" is a rather marginal part of
> mathematics.
>
> > I've never really tried to figure it all out, but I believe if you
> > approach it from the understanding that it's just language we are
> > studying, you can get a better grasp on what Godel's proof really
> > shows.
>
> How so?

By that comment, I was talking about what it shows relative to the rest of
reality. Normally, we are only interested in what it shows within the
context of mathematics. Within the scope of mathematics, what it proves I
believe is rather clear for anyone who understands it. But people will at
times try to apply the proof to areas outside of mathematics, such as to
humans ability to think and reason and/or a computer's ability to think and
reason. It's those attempts to use the proof where I believe people have
stepped over the line and made assumptions that are not valid.
Understanding the true scope of mathematics to be the scope of what can be
expressed with formal language (the language of mathematics) might help
people to understand how they have gone too far when trying to use Godel's
work to say something about human mental ability.

Wolf Kirchmeir

unread,
Sep 7, 2007, 11:30:32 AM9/7/07
to
Curt Welch wrote:
> david petry <david_lawr...@yahoo.com> wrote:
>> To be sure, it definitely does make sense to view mathematics as a
>> science: mathematics may be defined as the science of phenomena
>> observable in the world of computation.
>
> Mathematics is the science of formal language. Language is the real
> subject of mathematics, not computation machines. Anything that can be
> expressed in formal language is the subject of mathematics. The study of
> Mathematics is the study of what can be expressed with a formal language.
[...]

Precisely so.

Now, define "formal language." ;-)

david petry

unread,
Sep 7, 2007, 12:36:01 PM9/7/07
to
On Sep 6, 10:25 pm, abo <dkfjd...@yahoo.com> wrote:
> On Sep 6, 4:50 pm, david petry <david_lawrence_pe...@yahoo.com> wrote:
>
> Is "There is an infinity of primes" falsifiable? If so, how?

No. Why do you ask?


david petry

unread,
Sep 7, 2007, 12:43:06 PM9/7/07
to
On Sep 7, 2:17 am, Peter_Smith <ps...@cam.ac.uk> wrote:

> Would you like to share with us
> just *one* "simple and concrete" sentence S of first-order arithmetic
> of Goldbach type such that neither S nor not-S is provable in first-
> order Peano Arithmetic??

Well, it's entirely possible that neither Goldbach's conjecture nor
its negation is provable.


aatu.kos...@xortec.fi

unread,
Sep 7, 2007, 12:49:16 PM9/7/07
to
david petry wrote:
> Well, it's entirely possible that neither Goldbach's conjecture nor
> its negation is provable.

That's just a rather oblique way of stating that, for all we know,
Goldbach's conjecture might be true. Your observation in no way
provides a simple and concrete sentence S of first-order arithmetic of
Goldbach type such that neither S nor not-S is provable in first-order
arithmetic.

david petry

unread,
Sep 7, 2007, 12:57:27 PM9/7/07
to
On Sep 6, 10:16 pm, c...@kcwc.com (Curt Welch) wrote:
> david petry <david_lawrence_pe...@yahoo.com> wrote:

> Someone else (maybe it was you? - there seems to be much commonality in
> your thoughts) posted not long ago about how the failure of mathematics has
> caused a failure in our ability to solve AI.

You're probably thinking of this article of mine:

http://groups.google.com/group/sci.math/msg/0845a6308e5d4633


abo

unread,
Sep 7, 2007, 1:11:32 PM9/7/07
to

1/ Is it safe from falsification?
2/ Does it tell us anything about the real world?
3/ Does your intuitive idea of mathematics include at least the Peano
Axioms?

david petry

unread,
Sep 7, 2007, 1:30:50 PM9/7/07
to
On Sep 7, 4:52 am, David C. Ullrich <ullr...@math.okstate.edu> wrote:
> On Thu, 06 Sep 2007 07:50:47 -0700, david petry
>
> <david_lawrence_pe...@yahoo.com> wrote:

> >Godel's proof is something of a joke. When we acknowledge that
> >mathematics has a strong connection to truth and reality, then we can
> >easily see that Godel's theorem is quite trivial,
>
> I'd ask you to explain this, but I see that that's been tried
> and you declined.

Except that the whole long article I wrote was devoted to explaining
it, and so I have to conclude that you understood nothing of what I
wrote. So I really have no idea where to go from there.

I've often wondered why you have so much difficulty understanding the
things I write. They seem quite obvious to me. Here are a couple of
fundamental ideas which I accept as true, which evidently you've never
even considered. So, if you would, consider them now, and if you think
they're not valid, try to explain why you think so.

1) The world of computation is real.

2) We (our minds) live within the world of computation.

3) A very large and important part of mathematics can be accurately
described as the science of phenomena observable in the world of
computation.

4) It's reasonable to ask what assumptions are implicit in our own
self-awareness, and then to apply those assumptions when reasoning
about the foundations of mathematics.

5) If we think of the world of computation as but a small part of a
larger world of the infinite, we are fantasizing. Likewise, if we
think of ourselves as somehow living above the world of computation,
looking down upon it, we are deluding ourselves.

david petry

unread,
Sep 7, 2007, 1:49:04 PM9/7/07
to
On Sep 7, 9:49 am, aatu.koskensi...@xortec.fi wrote:
> david petry wrote:
> > Well, it's entirely possible that neither Goldbach's conjecture nor
> > its negation is provable.
>
> That's just a rather oblique way of stating that, for all we know,
> Goldbach's conjecture might be true. Your observation in no way
> provides a simple and concrete sentence S of first-order arithmetic of
> Goldbach type such that neither S nor not-S is provable in first-order
> arithmetic.

Really? I guess you mean "in no way provides with certainty (i.e.
proof)...". I believe what I said is correct: Goldbach's conjecture
very possibly is an example of what you are asking for.
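The "probabilistic reasoning" referred to elsewhere in this thread is
usually the heuristic that the number of ways to write an even n as a sum
of two primes tends to grow with n. A small sketch that just counts
representations (the helper names are my own):

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_reps(n):
    """Number of ways to write even n as p + q with p <= q, both prime."""
    return sum(1 for p in range(2, n // 2 + 1)
               if is_prime(p) and is_prime(n - p))

for n in (10, 100, 1000):
    print(n, goldbach_reps(n))
```

The counts growing rather than shrinking is what makes the conjecture
heuristically plausible, but of course that is evidence, not proof.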

david petry

unread,
Sep 7, 2007, 1:56:41 PM9/7/07
to
On Sep 7, 10:11 am, abo <dkfjd...@yahoo.com> wrote:
> On Sep 7, 6:36 pm, david petry <david_lawrence_pe...@yahoo.com> wrote:
>
> > On Sep 6, 10:25 pm, abo <dkfjd...@yahoo.com> wrote:
>
> > > On Sep 6, 4:50 pm, david petry <david_lawrence_pe...@yahoo.com> wrote:
>
> > > Is "There is an infinity of primes" falsifiable? If so, how?
>
> > No.
>
> 1/ Is it safe from falsification?

Yes.

> 2/ Does it tell us anything about the real world?

No. Note that the usual proofs of that statement can be coaxed into
giving us bounds on how far out we have to go to find another prime, so
those proofs do tell us something about the real world.
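The bound alluded to here is Euclid's: given any finite list of primes,
their product plus one leaves remainder 1 on division by each of them, so
it has a prime factor outside the list. A sketch (helper names are my
own):

```python
from math import prod

def smallest_prime_factor(n):
    """Smallest prime dividing n (n itself if n is prime)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def euclid_bound(primes):
    """Some prime not in `primes` exists at or below this value."""
    return prod(primes) + 1

primes = [2, 3, 5, 7]
bound = euclid_bound(primes)            # 211
new_prime = smallest_prime_factor(bound)
print(new_prime, new_prime in primes)   # 211 False
```

So the proof doesn't just assert "there is another prime"; it tells you
concretely how far you would at most have to look.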

> 3/ Does your intuitive idea of mathematics include at least the Peano
> Axioms?

I would want to know what you mean by "your intuitive idea" before I
respond to that.

aatu.kos...@xortec.fi

unread,
Sep 7, 2007, 1:58:25 PM9/7/07
to
david petry wrote:
> Really? I guess you mean "in no way provides with certainty (i.e.
> proof)...". I believe what I said is correct: Goldbach's conjecture
> very possibly is an example of what you are asking for.

Possibly. But as an answer to Peter's query, yours was about as
relevant as noting there might be aliens in Alpha Centauri, or that
possibly dogs are really from another planet, as a reply to someone
asking for an example of extraterrestrial lifeforms. It also does not
in any way support your idea that the incompleteness theorem is a
triviality.

Peter_Smith

unread,
Sep 7, 2007, 2:05:44 PM9/7/07
to

You mean: we don't know whether Goldbach's conjecture is independent
of PA.

Well, so what? That in no way at all substantiates your claim "that it
is extremely easy to generate infinitely many simple and concrete
sentences logically independent of T and also of each other". "We don't
know whether Goldbach's conjecture is independent of PA" very plainly
doesn't entail "Goldbach's conjecture IS independent of PA".

So to repeat: what is an example that does substantiate your bold
claim (one which, if you could prove it, would put you in line for a
Fields Medal!)?

J.A. Legris

unread,
Sep 7, 2007, 2:50:21 PM9/7/07
to
On Sep 6, 11:19 pm, c...@kcwc.com (Curt Welch) wrote:

> david petry <david_lawrence_pe...@yahoo.com> wrote:
> > To be sure, it definitely does make sense to view mathematics as a
> > science: mathematics may be defined as the science of phenomena
> > observable in the world of computation.
>
> The study of Mathematics is the study of what can be expressed with a formal language.
>

But *anything* can be expressed with a formal language if you make
your definitions loose enough. That's how you came up with the above
sentence.

I'll place my bets on wikipedia:
http://en.wikipedia.org/wiki/Mathematics
http://en.wikipedia.org/wiki/Formal_language

--
Joe


abo

unread,
Sep 7, 2007, 3:07:21 PM9/7/07
to
On Sep 7, 7:56 pm, david petry <david_lawrence_pe...@yahoo.com> wrote:

> > 3/ Does your intuitive idea of mathematics include at least the Peano
> > Axioms?
>
> I would want to know what you mean by "your intuitive idea" before I
> respond to that.

You say that there is mathematics which has a connection to truth.
Does this "true mathematics" include the Peano Axioms? If there is a
Peano Axiom which does not have a connection with "true mathematics",
which one is it (which ones, if there are more than one)?

At some point in the past you wrote,

"PA with restrictions on the use of the existential quantifier is a
good starting point to create a model of the world of computation, and
the successor axiom is part of that."

I interpreted this as meaning you accept the successor axiom, that is,
you believe that the successor axiom is part of the mathematics which
has a connection to truth. Was that the case? Is that still the
case?

Jesse F. Hughes

unread,
Sep 7, 2007, 3:49:35 PM9/7/07
to
david petry <david_lawr...@yahoo.com> writes:

> On Sep 7, 11:05 am, Peter_Smith <ps...@cam.ac.uk> wrote:
>> On 7 Sep, 17:43, david petry <david_lawrence_pe...@yahoo.com> wrote:
>>
>> > On Sep 7, 2:17 am, Peter_Smith <ps...@cam.ac.uk> wrote:
>>
>> > > Would you like to share with us
>> > > just *one* "simple and concrete" sentence S of first-order arithmetic
>> > > of Goldbach type such that neither S nor not-S is provable in first-
>> > > order Peano Arithmetic??
>>
>> > Well, it's entirely possible that neither Goldbach's conjecture nor
>> > its negation is provable.
>>
>> You mean: we don't know whether Goldbach's conjecture is independent
>> of PA.
>>
>> Well, so what? That in no way at all substantiates your claim "that it
>> is extremely easy to generate infinitely
>> many simple and concrete sentences logically independent of T and also
>> of each other".
>

> OK, at this point I will have to accuse you of being outright
> dishonest. What I actually wrote started with "simple and almost
> trivial probabilistic reasoning leads to the conclusion that that
> theorem must be true, and even that..."
>
> What I wrote is true.

Exactly what probabilistic reasoning do you have in mind?

--
Jesse F. Hughes
"I think the burden is on those people who think he didn't have
weapons of mass destruction to tell the world where they are."
-- White House spokesman Ari Fleischer

david petry

unread,
Sep 7, 2007, 4:11:47 PM9/7/07
to
On Sep 7, 10:58 am, aatu.koskensi...@xortec.fi wrote:
> david petry wrote:
> > Really? I guess you mean "in no way provides with certainty (i.e.
> > proof)...". I believe what I said is correct: Goldbach's conjecture
> > very possibly is an example of what you are asking for.
>
> Possibly. But as an answer to Peter's query yours was about as
> relevant as noting there might be aliens in Alpha Centauri, or that
> possibly dogs are really from another planet, as a reply to someone
> asking for an example of extraterrestrial lifeforms. It also does not
in any way support your idea that the incompleteness theorem is a
> triviality.

Here are three additional facts:

1) A probabilistic analysis of Goldbach's conjecture suggests that it
is true with extremely high probability.

2) Assertions that are extremely likely to be true for probabilistic
reasons alone tend to be extremely difficult or even impossible to
prove formally.

3) It is very easy to construct infinite classes of statements having
the same probabilistic properties as Goldbach's conjecture.
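The probabilistic flavor of point 1 can be made concrete with a rough sketch:
count the ways an even number can be written as a sum of two primes. The
heuristic is that this count tends to grow with n, so a counterexample becomes
increasingly improbable. A sketch of the heuristic only, not a proof of
anything:

```python
def is_prime(m):
    # trial division; fine for small illustrative inputs
    if m < 2:
        return False
    for d in range(2, int(m ** 0.5) + 1):
        if m % d == 0:
            return False
    return True

def goldbach_count(n):
    # number of ways to write even n as p + q with p <= q, both prime
    return sum(1 for p in range(2, n // 2 + 1)
               if is_prime(p) and is_prime(n - p))

# every even number in this range has at least one representation,
# and the counts tend to grow as n grows
assert goldbach_count(10) == 2          # 10 = 3+7 = 5+5
assert all(goldbach_count(n) > 0 for n in range(4, 200, 2))
```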


david petry

unread,
Sep 7, 2007, 4:15:45 PM9/7/07
to
On Sep 6, 10:16 pm, c...@kcwc.com (Curt Welch) wrote:
> david petry <david_lawrence_pe...@yahoo.com> wrote:

> > The basic assumption I make is that mathematics is part of mankind's
> > quest for truth. That is, we're trying to understand the world we
> > observe, and the world we observe includes both the physical world and
> > the world of the mind. If we define mathematics as the science of
> formal language, we lose touch with that quest for truth.
>
> > What I get out of reading your article is that you're not willing to
> > join me in the quest for truth. I have no reason to argue with you.
>

> The truth I'm
> interested in is improving our understanding of the nature of the universe
> we exist in.


Then the relevant question is, how does Godel's proof help you
understand the nature of the universe we exist in?

david petry

unread,
Sep 7, 2007, 4:46:49 PM9/7/07
to

As I wrote in the original article, basic mathematics is implicit in
our models of who we are, and hence has to be accepted on faith, so to
speak. What I didn't write about (just to keep the article from
getting too long) is that there is more to our models of who we are
than just the axioms of basic mathematics. For example, we are aware
of our own limitations, and so those limitations must be recognized in
our model of who we are. I argue that it would be a really good idea
to incorporate a notion of relevance into the foundations of mathematics
which would reflect our understanding of our own limitations. The
basic idea is that if you tell me that some computational phenomenon
can be observed, you have to be prepared to tell me how much
computation needs to be done in order to observe it. Then, based on
what you tell me, I can determine for myself, according to my own
chosen criteria, just how relevant this observable phenomenon is.
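One way to read this notion of relevance computationally - a toy sketch, with
names of my own choosing - is an observer that only accepts phenomena
demonstrable within a declared step budget:

```python
import itertools

def observe(phenomenon, budget):
    # run a computational "phenomenon" (any iterable) for at most
    # `budget` steps and report what was actually seen; anything
    # beyond the budget is, by this criterion, deemed irrelevant
    return list(itertools.islice(phenomenon, budget))

# e.g. the squares observable within a budget of 5 steps
squares = (n * n for n in itertools.count())
assert observe(squares, 5) == [0, 1, 4, 9, 16]
```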

The upshot is that, yes, within the relevant universe, the successor
axiom holds. I can't imagine a relevant universe in which it doesn't
hold, no matter what reasonable criteria of relevance I may apply. I
(and presumably you too) don't want to talk about the irrelevant
universe.


Curt Welch

unread,
Sep 7, 2007, 4:47:08 PM9/7/07
to

I don't read sci.math so I hadn't seen that. But it's probably another one
of your posts that you cross posted to c.a.p. with similar content I'm
thinking of.

In that article above, you write:

> When I was in graduate school, I thought it would be a
> really cool idea to build a foundation for mathematics
> incorporating as one of the cornerstones, the idea that
> mathematical statements must make predictions about the results of
> computational experiments. I believe it would
> be a straightforward task to teach computers to understand mathematics
> when it is built on such a foundation, and then it would be seen that the
> foundation of mathematics is also the foundation of artificial
> intelligence. After all, mathematics is the tool we use, and the tool
> computers could be using, to understand the world in a precise and
> quantitative way. I believe that making available a solid theoretical
> foundation for artificial intelligence could have profound implications
> for the future of mankind.

I relate a lot to the stuff you write because you are obviously someone who
spends a lot of time thinking for himself and never assumes that common
wisdom is right just because it's currently common wisdom. Far too few
people do that, and the ones that irritate me the most, are the ones who
seem unable to think for themselves and who will stomp on any idea which is
not in line with common wisdom simply because it's not in line with common
wisdom. These are the people that pride themselves on their advanced
education, which is nothing more than years wasted learning to be a parrot
instead of years spent learning to create the next improved common wisdom.
Those of us who prefer to think for ourselves and try to uncover new and
deeper understandings tend to be the first ones called crackpots whenever
we suggest that common wisdom might not be the best way to look at things.

Anyhow, in your ideas about the connections between AI and mathematics I
have a few more thoughts. I'm a strict physicalist and believe that man is
nothing more than a biological machine and that search for understanding
about how humans can do all these interesting things like create math, is
nothing more than a search to understand how this biological machine works.
There are many others that believe man's consciousness is not reducible to
the material, but I choose to reject that belief simply because it's of no
use to me to assume it's true. So, from my foundation of physicalism
(strong materialism) I believe that to solve the AI problem, we need to
find a way to understand what this biological machine is doing. We need to
find the correct language to describe what the brain is doing. If we could
find that language, we could put in a book and anyone who read the book,
would be able to use the knowledge in the words, to build machines which
can duplicate all the mental powers of man.

You suggest above, that the language should be mathematics. I think that's
short sighted.

If I wanted to write a book that described how cars work and to tell people
how to build their own car, would that book be filled with nothing but
math? Of course not. Though the language of math would play an important
role in explaining to people how to build a car, it would be possible to
write the book without making reference to any math. But what it would be
full of, is references to the manipulation of physical objects - the art of
taking metal, and bending it into a given shape for example.

To build a machine which duplicated the power of humans, we are again,
required to take physical objects (maybe silicon) and bend it into the
right shape. That's the ultimate foundation of AI. It's about building
physical things. It's foundation does not lie in math.

Now you also suggested that we create a limited version of math, which was
used to describe the operation of computers, and use that as the foundation
of AI. I happen to believe this would be possible, but I want to point out
it's just not the right way to approach the problem. There's no real
evidence that the brain works like a digital computer, and there's plenty
of evidence suggesting it's a very different type of machine. So, when
trying to figure out how to make a physical machine act, and think, with
the same mental capacity of a human, why would we choose to believe that
the only correct foundation of AI would have to be one based on a
foundation of digital computers as described by math (Turing machines,
etc)?

To make this more clear, and a bit less abstract, let me give some
examples. I believe the brain is nothing but a fancy signal processing
device and that all we have to do to create human level intelligence and
human level consciousness in a machine, is to create the correct type of
signal processing machine. However, much of our signal processing is analog.
And though we use math to understand the basic nature of analog signal
processing, math is not the foundation of analog signal processing. The
hardware is. You can't explain to someone how to build a microphone, by
describing it all with math. You describe it in terms of the physical
material you have to shape into the correct shapes to make a microphone.
We only use math as a tool to help us make predictions about how it will
act once we build it.

So, the bottom line of all this, is that math is just not, and never will
be, the foundation of AI. The physical hardware we build will be the
foundation. Now, I strongly suspect that we will be able to use a digital
signal processing core in such a machine, just like we use digital signal
processing techniques in most of our machines today which once were always
built out of only analog components (radios, record players, TVs, phones,
etc). And in this, our understanding of the operation of Turing machines
in general, will guide us in our construction of these machines.

But just because in the field of math, we may like to play with ideas of
infinity, and that the work of Cantor might have drifted off into fantasy,
doesn't mean that work has anything to do with building digital signal
processing machines which act like humans. Cantor's ideas certainly
don't stop anyone from building interesting robots. Why should they get in
the way of understanding how to build robots that duplicate human level
signal processing ability?

It feels to me that you have a fixation on mathematics (which is fine, most
mathematicians do), and that you feel like in order to understand the mind,
we must somehow make math "perfect", and that once mathematics is "perfect"
it will give us the answer to what the mind is. I just don't connect with
thinking like that. The answer to what the mind is will never be found
inside the field of mathematics. Mathematics is not the answer to what the
mind is, mathematics is just one of many creations of the mind. The mind
also creates works of art, and long rambling Usenet posts, and builds cars.
We won't understand the mind by just studying cars, or by just studying
art, or by just studying mathematics. We will understand it by studying
the behavior of humans, and using all the tools available to us (math
included) to understand that behavior.

Changing the subject slightly, I too have a real problem with how
mathematics has created the concept of multiple infinities. Like you, I
felt the need to tie the concepts to reality, and not being able to tie it
to reality, left me feeling the concept must somehow be invalid. I'm still
not sure if it's valid or not. But, once I realized that mathematics was
not about reality, but was instead, just about what type of descriptions
you could create with language, I lost my desire to figure out if the
concept of multiple types of infinity was really valid or just an error in
thinking. The fact that you can describe some really unintuitive things
(multiple types of infinity) with language just isn't important to the
problem of building physical machines (which AI is a subset of). If
anything, the type of mess Cantor has created is simply to me, a
demonstration of how much of a mess you can create with language if you are
not careful. In engineering, the goal is not to find language which is so
complex and unintuitive you can't understand it. The goal in engineering
is to find the simplest language possible to describe the properties of
what we want to build. I'm very sure that the type of machine we will end
up building to duplicate human level intelligence and consciousness will be
very simple to describe (despite all the common wisdom rejecting this
idea).

On the idea of multiple infinities and trying to tie it to reality, one way
I looked at the issue was to tie the concept of an infinite set to the
concept of a process which never ended. So a program which never
terminates is a real-world example of an infinite set. In other words, when
we talk about infinite sets in mathematics which "exist", what we could
change that to is talking about algorithms which produce infinite output as
"existing". There's nothing wrong with that existing, because algorithms
which never terminate are easy to create. So what we have here are words
"infinite set" which are not actually referencing an infinite set (which
can't exist) but which are making reference to other words which describe a
process which never ends.
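The idea of an "infinite set" as a never-ending process has a direct analogue
in any language with generators; a small sketch:

```python
import itertools

def naturals():
    # a program that never terminates on its own; the "infinite set"
    # exists only as this unending process of production
    n = 0
    while True:
        yield n
        n += 1

# we can only ever observe a finite prefix of the process
assert list(itertools.islice(naturals(), 5)) == [0, 1, 2, 3, 4]
```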

Now, what happens when we talk about a process, for producing the
description of another process? Such as a finite length computer program
which is producing as output, a string of different computer programs, each
of which never terminate. And what if this program generator never
terminated either and produced a never ending string of finite sized
programs each of which produced a different infinite output string? The
only disconnect from these concepts, and reality, is that the programs have
to run on infinite sized computers. In other words, ones which never run
out of memory. And even though infinite computers don't exist, we can
describe a process of constantly making the computer larger and larger as
it runs. We just keep making, and adding, more memory, each time it runs
out. We can even get the human out of the loop by talking about a robot
which can suck in raw material from planets and use it to keep making the
computer larger.
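The picture of a program endlessly producing further non-terminating programs
can likewise be sketched with nested generators (illustrative only; nothing
here settles whether this really captures Cantor's power sets):

```python
import itertools

def counting_by(k):
    # an inner "program" that never terminates: multiples of k
    n = 0
    while True:
        yield n
        n += k

def program_generator():
    # an outer non-terminating "program" that emits a fresh
    # non-terminating program at every step
    for k in itertools.count(1):
        yield counting_by(k)

# sample the first three programs, and the first three outputs of each
streams = itertools.islice(program_generator(), 3)
samples = [list(itertools.islice(s, 3)) for s in streams]
assert samples == [[0, 1, 2], [0, 2, 4], [0, 3, 6]]
```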

This type of thinking I believe, links what Cantor was doing with power
sets, to the physical reality of computers. You need computers, which are
hung in an infinite loop, building other computers, each of which will hang
in an infinite loop once they are started. The only thing that might keep
this from working is that we need to live in a universe with infinite
energy, and infinite material, and infinite time, and infinite space, for
it to "exist". But other than that, we don't need any of the computers to
finish their work for it to be a valid example of infinite sets of infinite
sets.

I'm not sure if that approach is a valid way to tie Cantor's work to
reality, but that's the idea I would explore further if I wanted to try and
figure it out if it could be tied to reality.

Peter_Smith

unread,
Sep 7, 2007, 5:16:39 PM9/7/07
to
On 7 Sep, 21:11, david petry <david_lawrence_pe...@yahoo.com> wrote:

> Here are three additional facts:
>

> 2) Assertions that are extremely likely to be true for probabilistic
> reasons alone tend to be extremely difficult or even impossible to
> prove formally.

By "impossible" do you mean (a) we can't in fact find a formal proof,
or (b) that a formal proof doesn't exist to be found?

If you mean (a), what you say may be true, but it is irrelevant to the
claim you made about there being lots of simple sentences which ARE
(not just are believed to be, but are in fact) independent of PA. If
you mean (b), you are just asserting the claim that you need to argue
for.

Curt Welch

unread,
Sep 7, 2007, 5:33:03 PM9/7/07
to
david petry <david_lawr...@yahoo.com> wrote:

> 1) The world of computation is real.
>
> 2) We (our minds) live within the world of computation.

I don't think this is valid - unless you are a mathematician because we all
know their minds live in a world of computation :).

What exactly counts as a "computation" for you? Does one rock, when it
hits another rock, count as a computation?

When the motion of one electron causes other electrons to move (what we
call the transfer of energy) is that a computation? When one molecule
interacts with another molecule (like what happens when you dissolve
salt in a glass of water) is that a computation? I think it's a real
stretch to talk about these things as computations.

What exists is the physical world with all its properties - some of which
we understand and like to describe with language such as math, and much of
which we don't understand and which we have no language to describe. A lot
of it we just describe with words like "chaos".

The mind exists in the physical world. It creates worlds such as the world
of mathematics, it doesn't exist in that world. It's able to understand
the physical universe in terms of its mathematical models, but the mind
itself is not a computation. It's physical electrons and molecules
interacting with each other in the same way all electrons and molecules
interact with each other in the universe.

Computation is something we describe with language, like when we talk about
1+1=2. Computation only really exists as language. It's the behavior of a
machine which is producing, or processing, language, like an adding machine
which is receiving the language symbols "1", "+" and "1" and producing the
language symbol "2" as output. But the brain, and the mind, is able to do
much more than just process math-like language. We can play throw and
catch with a baseball. We can drink water from a stream. How do you
justify these behaviors as an example of the mind existing in the world of
computation?

The brain is a reaction machine which regulates the motion of the body. It
makes us move our arms and legs in response to what has happened in the
environment. Rocks are reaction machines as well. They react to their
environment. You push them on one side and they react by moving in the
other direction. They are very simple and uninteresting reaction machines.
Humans on the other hand, because of the complexity of all the independent
but connected parts of the brain, produce far more complex and interesting
reactions to our environment. But at the core, we are just a complex
reaction machine.

I just don't see the value in talking about a physical machine with these
billions and billions of moving parts as being something which exists in
the world of computation. I just don't see it as the right paradigm for
gaining further knowledge of the mind.

Curt Welch

unread,
Sep 7, 2007, 5:48:36 PM9/7/07
to
"J.A. Legris" <jale...@sympatico.ca> wrote:
> On Sep 6, 11:19 pm, c...@kcwc.com (Curt Welch) wrote:
> > david petry <david_lawrence_pe...@yahoo.com> wrote:
> > > To be sure, it definitely does make sense to view mathematics as a
> > > science: mathematics may be defined as the science of phenomena
> > > observable in the world of computation.
> >
> > The study of Mathematics is the study of what can be expressed with a
> > formal language.
> >
>
> But *anything* can be expressed with a formal language if you make
> your definitions loose enough. That's how you came up with the above
> sentence.

Well, by "formal language" I mean something to the nature of a language
which has precise bounds set for the meaning of very symbol in the
language. In natural language, there are no formal definitions for most
words and for the meaning of most words as we commonly use them. They are
simply symbols which we tend to produce under the right conditions and
symbols that we tend to react to in special ways.

Mathematics is about creating symbols (1, 2, 3, +, =) and giving them a
formal and precise meaning - normally linking them only to other symbols -
but never grounding them in reality other than in the reality of language
processing.

There is no symbol in math which means water. There is no expectation that
there should be a symbol in math which means water. This is because the
meaning of symbols created in math only refer to other symbols, or to the
actions of symbol processing systems.

My use of the concept of "formal language" is to imply that the definition
and usage of each symbol is formally defined unlike natural language where
the meaning and usage of the words is seldom formally defined.

Curt Welch

unread,
Sep 7, 2007, 5:56:26 PM9/7/07
to

Well, first off, I don't understand Godel's work. I've only looked at it
superficially to get a basic grasp of what type of work it was. As such,
it really doesn't (yet) help me understand much of anything. I would have
to study it more before I could really understand it.

But, what I believe, is that if I did study it to the point of fully
understanding it, all it would help me understand, is a bit more about the
nature of language. It wouldn't tell me anything about the physical world,
or how to build cars, or how to build computers, or how to build AI. It
would only tell me about the limits of what you can, and can't do, with
language. Or maybe more precisely, what a language processing machine can
and can't do with language.

Peter_Smith

unread,
Sep 7, 2007, 6:10:26 PM9/7/07
to
On 7 Sep, 22:48, c...@kcwc.com (Curt Welch) wrote:
> This is because the
> meaning of symbols created in math only refer to other symbols, or to the
> actions of symbol processing systems.

Really??? Let's take two simple examples.

1) The symbol "i" (which I thought referred ambiguously to one of the
square root of minus one). What symbol or action of a symbol
processing system does that refer to?

2) What about "|R" (which I thought referred to the set of real
numbers). What symbol or action of a symbol processing system does
that refer to?

Wolf Kirchmeir

unread,
Sep 7, 2007, 6:20:00 PM9/7/07
to
david petry wrote:
> On Sep 7, 4:52 am, David C. Ullrich <ullr...@math.okstate.edu> wrote:
>> On Thu, 06 Sep 2007 07:50:47 -0700, david petry
>>
>> <david_lawrence_pe...@yahoo.com> wrote:
>
>>> Godel's proof is something of a joke. When we acknowledge that
>>> mathematics has a strong connection to truth and reality, then we can
>>> easily see that Godel's theorem is quite trivial,
>> I'd ask you to explain this, but I see that that's been tried
>> and you declined.
>
> Except that the whole long article I wrote was devoted to explaining
> it, and so I have to conclude that you understood nothing of what I
> wrote. So I really have no idea where to go from there.
>
> I've often wondered why you have so much difficulty understanding the
> things I write. They seem quite obvious to me.

The fact that certain statements seem perfectly obvious to you proves
nothing at all.

> Here are a couple of
> fundamental ideas which I accept as true, which evidently you've never
> even considered. So, if you would, consider them now, and if you think
> they're not valid, try to explain why you think so.

"Valid" is the wrong term to use. Your "ideas" are either assumptions, in
which case they are taken as true for the sake of argument; or else they
are conclusions from some assumptions that you haven't spelled out, so
their validity is indeterminate.

> 1) The world of computation is real.

I don't understand this, so how can I tell whether I dis/agree?

> 2) We (our minds) live within the world of computation.

Something like "The Matrix"? That's science fiction.

Otherwise, same as for 1.

> 3) A very large and important part of mathematics can be accurately
> described as the science of phenomena observable in the world of
> computation.

Same as for 1.

> 4) It's reasonable to ask what assumptions are implicit in our own
> self-awareness, and then to apply those assumptions when reasoning
> about the foundations of mathematics.

Same as for 1.

> 4) If we think of the world of computation as but a small part of a
> larger world of the infinite, we are fantasizing. Likewise, if we
> think of ourselves as somehow living above the world of computation,
> looking down upon it, we are deluding ourselves.

Same as for 1.

Proginoskes

unread,
Sep 7, 2007, 6:33:24 PM9/7/07
to
On Sep 6, 1:13 pm, david petry <david_lawrence_pe...@yahoo.com> wrote:
> On Sep 6, 8:07 am, aatu.koskensi...@xortec.fi wrote:
>
> > It is well known Gödel was something of a jester. Ultimately, life is
> > nothing but a dark absurd joke with no punch-line, as he once put it
> > before starving to death.
>
> That's funny. I was under the impression Godel was rather humorless.
> And I didn't know he starved to death. Oh well.

He was paranoid and thought that people were drugging his food.

--- Christopher Heckman

MoeBlee

unread,
Sep 7, 2007, 6:39:31 PM9/7/07
to
On Sep 7, 1:15 pm, david petry <david_lawrence_pe...@yahoo.com> wrote:

> Then the relevant question is, how does Godel's proof help you
> understand the nature of the universe we exist in?

Easy. (1) It tells me that there exists a certain finite string having
certain properties. That happens to be a fact about the universe
(given a reasonable definition of 'the universe'). (2) Since (1) tells
me a certain fact, which is a gain in knowledge about the universe,
but a trivial gain, more profoundly, the theorem tells me not to waste
my time trying to figure out an algorithm to determine for any given
statement of arithmetic whether the statement is true or not nor to
waste my time trying to find whether anyone else has found such an
algorithm. (3) Numerous corollaries (or other areas of investigation)
along the same lines as (2). (4) The proof techniques themselves
suggest creative methods of reasoning and in general enhance my
reasoning. (5) My disposition toward understanding things (in the
"universe") is increased by any disposition I have toward enjoying the
universe, which includes aesthetic enjoyment, which includes aesthetic
enjoyment of the creativity and rigor of such proofs. (6) Along the
lines of (4) and (5) the proof has historically contributed to
interest in investigations in the field of computability and such
investigations have inspired people who were part of the very
invention of modern physical computers, which contribute to my
understanding. (7) Along the lines of (6), my own study of the proof
and related mathematics contributes to my understanding of computing
itself, as I can see connections among such things as arithmetization
of syntax, representability in a first order theory, Turing machines,
register machines, etc; so I have a more informed perspective of what
a program is, what an algorithm is, and how these are related to
certain formulations as in Godel's proof or as developments that came
out of and were inspired by the proof.

MoeBlee

david petry

unread,
Sep 7, 2007, 9:07:38 PM9/7/07
to
On Sep 7, 2:16 pm, Peter_Smith <ps...@cam.ac.uk> wrote:
> On 7 Sep, 21:11, david petry <david_lawrence_pe...@yahoo.com> wrote:
>
> > Here are three additional facts:
>
> > 2) Assertions that are extremely likely to be true for probabilistic
> > reasons alone tend to be extremely difficult or even impossible to
> > prove formally.
>
> By "impossible" do you mean (a) we can't in fact find a formal proof,
> or (b) that a formal proof doesn't exist to be found?

It would be easy to argue that there is no reason to believe that a
formal proof exists to be found. I don't think I could argue that
there is reason to believe that no formal proof exists to be found in
any particular case, but when we consider the multitude of assertions
we can create, we can probably argue that there is good reason to
believe that some of them have no formal proof. The arguments I'm
talking about would be informal arguments that probably could not be
made formal.

I don't have time to write more at the moment.

Curt Welch

unread,
Sep 7, 2007, 9:48:50 PM9/7/07
to
Peter_Smith <ps...@cam.ac.uk> wrote:
> On 7 Sep, 22:48, c...@kcwc.com (Curt Welch) wrote:
> > This is because the
> > meaning of symbols created in math only refer to other symbols, or to
> > the actions of symbol processing systems.
>
> Really??? Let's take two simple examples.
>
> 1) The symbol "i" (which I thought referred ambiguously to one of the
> square root of minus one).

One of the square root of minus one? I can't parse that. Is that a typo
or are you saying something I just don't understand?

> What symbol or action of a symbol
> processing system does that refer to?

It refers to the symbols: "the square root of minus one".

Those symbols refer to the set of equations which evaluate to minus one
inside a square root.

-1 is a symbol which refers to all subtraction problems of the form A - B
where B is one larger than A.

Subtraction of A from B refers to all addition problems of the form A + X =
B where X is unknown.

Addition is a process of combining two sets of unique objects into one
group and counting them all.

There are lots of ways to say this, but it all boils down to words
describing other words describing some very simple physical processes at
the lowest level.

If the definition of term X is Y, then X refers to Y (in case you didn't
understand what I mean by "refers to"). That is, we use X as a short hand
name of the concept described by the words Y.

> 2) What about "|R" (which I thought referred to the set of real
> numbers). What symbol or action of a symbol processing system does
> that refer to?

I don't know the symbol "|R", but if it means the set of real numbers then
"|R" is a symbol which refers to the symbols: "the set of all real
numbers". I don't know how mathematicians like to define the set of all
real numbers, but one way I assume works is to talk about the set of all
possible strings of digits which represent a real number. So the words
"the set of all real numbers" is a reference to the infinite list of
infinite strings of digits:

1.0000...
1.1234....

etc.

Or you can talk about the set of all real numbers being the set of all
possible descriptions of real numbers (like pi is one description of a
particular real number). And we could talk about algorithms which
produce as output a real number (or an infinite sequence of digits) as one
way to define a real number. So for example, we could talk about pi as a
reference to all algorithms for calculating the value of pi.

Again, each language term we use, like |R or pi is just a reference to
other language terms we are able to produce and understand which tend, if
you keep following the chain of reference down, to typically refer to some
basic simple physical process, like counting of unique objects or motion in
space.
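The "pi as a reference to algorithms that produce its digits" idea is
realizable; one well-known example is Gibbons' unbounded spigot algorithm,
reproduced here as an illustration (it is one such algorithm among many, not
the canonical definition of pi):

```python
import itertools

def pi_digits():
    # Gibbons' streaming spigot: yields the decimal digits of pi
    # one at a time, forever, using only integer arithmetic
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < m * t:
            yield m
            q, r, m = (10 * q, 10 * (r - m * t),
                       (10 * (3 * q + r)) // t - 10 * m)
        else:
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)

assert list(itertools.islice(pi_digits(), 10)) == [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

Like the real numbers it gestures at, the generator never finishes; any
observer only ever sees a finite prefix of the digit stream.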

I'm not attempting to claim any of what I wrote above is valid
mathematically (I don't study math); I'm just trying to demonstrate what I
meant by "symbols created in math only refer to other symbols, or to the
actions of symbol processing systems."

And in case it's not clear, I consider man to be a symbol processing system
(as well as computers and calculators, etc.).

I'm not sure if every symbol in math works like this, but every one I can
currently think of does. I can't imagine how else it could work and still
have a useful meaning to us. But maybe someone can create an example of an
area of math I'm not currently thinking of which introduces some very
different type of concept that wouldn't fit well with what I'm saying here?

abo

unread,
Sep 8, 2007, 12:59:06 AM9/8/07
to
On Sep 7, 10:46 pm, david petry <david_lawrence_pe...@yahoo.com>

So "There is an infinity of primes" *is not* part of mathematics which
is connected to truth, and the Successor Axiom *is* part of
mathematics which is connected to truth?

cbr...@cbrownsystems.com

unread,
Sep 8, 2007, 2:07:18 AM9/8/07
to

I think you miss the point: DP considers the proposition P(a), where a
is some natural, to be a /meaningful/ or /acceptable/ proposition, if
there is a proof that there has been provided an explicit polynomial f
in Q[x] such that P(a) iff f(a) = 0.


cbr...@cbrownsystems.com

unread,
Sep 8, 2007, 2:08:41 AM9/8/07
to
On Sep 7, 11:07 pm, "cbr...@cbrownsystems.com"

Please ignore the above - an ill thought out response I meant to
cancel rather than send.

Cheers - Chas

Nam D. Nguyen

unread,
Sep 8, 2007, 3:12:48 AM9/8/07
to
aatu.kos...@xortec.fi wrote:

> david petry wrote:
>> Well, it's entirely possible that neither Goldbach's conjecture nor
>> its negation is provable.
>
> That's just a rather oblique way of stating that, for all we know,
> Goldbach's conjecture might be true.

How would that contradict what DP stated above?

> Your observation in no way
> provides a simple and concrete sentence S of first-order arithmetic of
> Goldbach type such that neither S nor not-S is provable in first-order
> arithmetic.

His "it's entirely *possible*" means he's not *concretely* providing such
a sentence S, imho.

Peter_Smith

unread,
Sep 8, 2007, 3:22:55 AM9/8/07
to
On 8 Sep, 02:07, david petry <david_lawrence_pe...@yahoo.com> wrote:
> On Sep 7, 2:16 pm, Peter_Smith <ps...@cam.ac.uk> wrote:
>
> > On 7 Sep, 21:11, david petry <david_lawrence_pe...@yahoo.com> wrote:
>
> > > Here are three additional facts:
>
> > > 2) Assertions that are extremely likely to be true for probabilistic
> > > reasons alone tend to be extremely difficult or even impossible to
> > > prove formally.
>
> > BY "impossible" do you mean (a) we can't in fact find a formal proof,
> > or (b) a formal proof doesn't exist to be found.
>
> It would be easy to argue that there is no reason to believe that a
> formal proof exists to be found.

Then we'd certainly like to hear the "easy" argument -- for it has so
far eluded others for 70 years!

Peter_Smith

unread,
Sep 8, 2007, 3:38:54 AM9/8/07
to
On 8 Sep, 02:48, c...@kcwc.com (Curt Welch) wrote:
> Peter_Smith <ps...@cam.ac.uk> wrote:
> > On 7 Sep, 22:48, c...@kcwc.com (Curt Welch) wrote:
> > > This is because the
> > > meaning of symbols created in math only refer to other symbols, or to
> > > the actions of symbol processing systems.
>
> > Really??? Let's take two simple examples.
>
> > 1) The symbol "i" (which I thought referred ambiguously to one of the
> > square root of minus one).
>
> One of the square root of minus one? I can't parse that. Is that a typo
> or are you saying something I just don't understand?

Sorry, it is indeed a typo for "square roots". (But if I'd have said
"i" refers to *the* square root of minus one, some bright spark here
would have immediately pointed out that there are two, one the
negation of the other, but no way to distinguish which is which -- but
let that pass!!!).

> > What symbol or action of a symbol
> > processing system does that refer to?
>
> It refers to the symbols: "the square root of minus one".

Nonsense. If "i" referred to those symbols then we could truly say
things like "i contains 21 letters", "i is only understood by English
speakers" and "i is never mentioned in Spanish mathematics books."

Look, if I introduce "C" as a symbol for Curt Welch (shorthand I can
use in my diary perhaps), then "C" refers to YOU not your name. When I
write "I met C", I'm saying I met YOU, not some symbols!!!

Exactly similarly, when we introduce "i" into mathematician speak as a
short symbol for the square root of minus one, "i" refers to the
square root of minus one, not to symbols. I when I say "i^5 = i" I am
talking about the square root of minus one not some symbols of
English.
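The use/mention distinction here can be put in code (a hypothetical aside, using Python's built-in complex numbers, where the literal 1j plays the role of i):

```python
# "i" the symbol vs. i the number: the identity i^5 = i is a fact about
# the value, and holds no matter what name the value is given.
i = 1j            # the imaginary unit, a value
name = "i"        # a one-character string, a symbol

print(i ** 5 == i)    # True  -- a fact about the number
print(len(name))      # 1     -- a fact about the symbol that names it
```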

> > 2) What about "|R" (which I thought referred to the set of real
> > numbers). What symbol or action of a symbol processing system does
> > that refer to?

> I don't know the symbol "|R",

My best attempt at an ASCII version of "blackboard" R.

> but if it means the set of real numbers then
> "|R" is a symbol which refers to the symbols: "the set of all real
> numbers". ..... So the words
> "the set of all real numbers" is a reference to the infinite list of
> infinite strings of digits:

The first is wrong, while we might buy the second. They are *entirely*
different claims. For obviously the symbols : "the set of all real
numbers" are NOT an infinite list of infinite strings! (They are at
most one list of 21 symbols!!!!)

At root, you are confusing use and mention, as the logicians say.

T.H. Ray

unread,
Sep 8, 2007, 6:35:34 AM9/8/07
to

Not a relevant question at all. Your philosophy, which
leaves no room for knowledge for the sake of knowledge,
is limited to what one can perceive and describe in
language that already exists. However, we know
that language is progressive and that proofs of theorems
one would not have imagined had physical application
are now a common part of our scientific language.
Riemannian geometry is one such example.

As Enlightenment scholars used to say: "Sapere Aude."
(Dare to know.)

Tom

David C. Ullrich

unread,
Sep 8, 2007, 7:25:01 AM9/8/07
to
On Fri, 07 Sep 2007 10:30:50 -0700, david petry
<david_lawr...@yahoo.com> wrote:

>On Sep 7, 4:52 am, David C. Ullrich <ullr...@math.okstate.edu> wrote:
>> On Thu, 06 Sep 2007 07:50:47 -0700, david petry
>>
>> <david_lawrence_pe...@yahoo.com> wrote:
>
>> >Godel's proof is something of a joke. When we acknowledge that
>> >mathematics has a strong connection to truth and reality, then we can
>> >easily see that Godel's theorem is quite trivial,
>>
>> I'd ask you to explain this, but I see that that's been tried
>> and you declined.
>
>Except that the whole long article I wrote was devoted to explaining
>it, and so I have to conclude that you understood nothing of what I
>wrote. So I really have no idea where to go from there.

Right. It seems to me that what you wrote is nonsense, and the
only possible conclusion is I just didn't understand what you
said. And of course the same applies no matter how many people
explain the nonsensical nature of your positions - _nobody_
has even bothered to read what you wrote, right?

>I've often wondered why you have so much difficulty understanding the
>things I write. They seem quite obvious to me.

I questioned a claim you made about the triviality of a certain
_theorem_. What does or does not seem obvious to you has no
relevance to that.

>Here are a couple of
>fundamental ideas which I accept as true, which evidently you've never
>even considered. So, if you would, consider them now, and if you think
>they're not valid, try to explain why you think so.

If I cared about the statements below I'd have a lot to say
about some of them. But one question at a time. _How_ does
anything below show that Godel's _theorem_ is trivial?

>1) The world of computation is real.
>

>2) We (our minds) live within the world of computation.
>

>3) A very large and important part of mathematics can be accurately
>described as the science of phenomena observable in the world of
>computation.
>

>4) It's reasonable to ask what assumptions are implicit in our own
>self-awareness, and then to apply those assumptions when reasoning
>about the foundations of mathematics.
>

>5) If we think of the world of computation as but a small part of a
>larger world of the infinite, we are fantasizing. Likewise, if we
>think of ourselves as somehow living above the world of computation,
>looking down upon it, we are deluding ourselves.


************************

David C. Ullrich

LauLuna

unread,
Sep 8, 2007, 7:47:37 AM9/8/07
to
On Sep 6, 4:50 pm, david petry <david_lawrence_pe...@yahoo.com> wrote:
> What is the nature of mathematics? Does it have a connection to
> reality? Is it part of mankind's quest for truth? Or is it merely a
> game which mathematicians have chosen to play, which may have had its
> origins in mankind's quest for truth but no longer has a connection to
> such a quest? I'm asking the reader to keep such questions in mind as
> he reads the rest of this article.

>
> Godel's proof is something of a joke. When we acknowledge that
> mathematics has a strong connection to truth and reality, then we can
> easily see that Godel's theorem is quite trivial, and the proof has no
> content. And if you deny that mathematics has a strong connection to
> truth and reality, then are you not implicitly claiming that all of
> mathematics, including Godel's proof, is something of a joke?
>
> Godel told us that he had the Liar paradox in mind when he came up
> with his proof. And that's a good place to start this analysis.
>
> When we talk, we want people to believe that we are telling the truth.
> That simply cannot be denied. That is, when we communicate, we are
> implicitly claiming that we are telling the truth. That's one of the
> ground rules of communication. So, likewise, when we try to understand
> what someone is saying, we take it as implicit that that person is
> claiming to be telling the truth. So if we set about to analyze the
> statement "I am lying", we must remember to take into consideration
> the implicit claim to truth. So the statement must be interpreted as
> equivalent to "<implicitly> I am telling the truth; <explicitly> I am
> lying". And that is nothing more than a simple contradiction. There's
> simply nothing paradoxical about it. It doesn't require any further
> analysis.
>
> Now Godel did not understand what is going on with the Liar paradox.
> On his interpretation, the Liar paradox leads to some kind of infinite
> loop of reasoning with no resolution possible. He completely ignored
> the implicit claim to truth we make when we communicate. His proof
> suffers the same defect as his analysis of the Liar paradox.
>
> When mathematicians talk, they want people to believe that they are
> telling the truth. Again, that cannot be denied. When they claim to
> have proven a theorem, they are implicitly claiming that the theorem
> is actually true. That is, they are claiming that proofs are
> compelling arguments. They are claiming that whatever formal proof
> system they are using must always lead to true theorems. So they are
> implicitly claiming that their formal system is consistent! So if the
> mathematicians were to claim that they had a proof that their formal
> system is consistent, we would have to point out to them that they are
> implicitly making the claim that their formal system is consistent the
> moment they claim that they can use the formal system to prove
> theorems, so they are claiming to prove something they claimed to be
> true in the first place; their argument would be circular. And
> circular arguments are not to be accepted as compelling arguments.
>
> So what have we shown? We have shown that once it is acknowledged that
> mathematics does have a strong connection to truth and reality--i.e.
> that mathematicians are implicitly making that claim that their
> theorems are true--then Godel's theorem (the claim that no formal
> system can prove its own consistency) is merely an immediate and
> rather trivial implication of the notion of proof. But we can go
> beyond that. Since Godel's theorem itself is a direct implication of
> the mathematicians' implicit claim that they tell the truth, we have
> to admit that a formal proof of Godel's theorem would be circular. So
> Godel's proof is something of a joke.
>
> Note carefully what I'm saying here: if we accept the claim that
> mathematics has strong connections to truth and reality, and we agree
> that we can use that claim when we reason about mathematics, then
> Godel's proof is something of a joke. If we deny the claim that
> mathematics has strong connections to truth and reality, then
> mathematics itself, including Godel's theorem, is something of a joke.
>
> If we were to take the view that mathematics is a science, then the
> triviality of Godel's theorem and the absurdity of the proof would be
> immediately apparent. It is part of the scientists' view of their
> subject that we must always be open to the possibility that our best
> theories will prove to be quite wrong when we move on to unexplored
> territory. If we apply that to mathematics, it says that we must be
> open to the possibility that our theories (formal systems) may be
> found to be inconsistent when we gain the ability to examine them in
> much greater depth than we are currently able to (i.e. when we create
> an artificial intelligence a million times more intelligent than we
> are, we will ask it to re-examine the question of the consistency of
> our current theories). So when we take the view that mathematics is a
> science, we are forced to acknowledge that we simply cannot prove that
> we will never come across an inconsistency in our theories. And of
> course, any scientist would say that it is absurd to think that we
> need a formal proof to tell us that we should be open to the
> possibility that our theories may ultimately prove to be wrong.

>
> To be sure, it definitely does make sense to view mathematics as a
> science: mathematics may be defined as the science of phenomena
> observable in the world of computation. To clarify this, it helps to
> think of the computer as the mathematicians' microscope which helps us
> peer deeply into the world of computation, and then mathematics
> studies the phenomena we observe when we look through that microscope.
> When we think of mathematics in this way, then our theories must make
> predictions about observable phenomena, and those predictions can be
> tested, and hence mathematics fits the criteria for being a science.
> Certainly all of the mathematics that has strong connections to truth
> and reality, including all of the mathematics used in science and
> technology, falls within the scope of this view.
>
> So why do we believe that mathematics is consistent? And is it even
> sensible to try to reason about the consistency of mathematics? For
> starters, consider this: we simply cannot believe that we have the
> ability to reason consistently about the possibility that we lack the
> ability to reason consistently (and you might want to read that a
> second time). So simply by virtue of the empirical fact that we are
> self-aware, and aware that we can reason, we are compelled to believe
> that we have the ability to reason consistently, but we cannot
> "reason" about our own consistency, since we cannot reason about the
> possibility that we lack the ability to reason consistently! So if
> mathematics were innate--that is, if mathematics were part of our own
> model of who we are--then we would be forced to agree that we must
> believe that mathematics is consistent, and that we cannot reason
> about its consistency. But that's precisely the case! When we reason
> about how our thought processes work, we come to the conclusion that
> every thought process we have can be modelled on a digital computer
> (that's not to say that our brains are digital computers, but there's
> an equivalence between what a computer can do and what we can do). And
> furthermore, when we reason about what computers are, we come to the
> conclusion that our best models of computation must include a model of
> arithmetic. For example, real world computers use arithmetic in a very
> basic and essential way (e.g. for computing memory addresses), and so
> we are forced to believe that the rules (axioms) of arithmetic are
> part of our model of computation (and we would arrive at the same
> conclusion if we use Turing machines as our model for computation).
> So what all this implies is that if we know who we are--that is, if
> our models of who we are are correct--then we are compelled to believe
> that the basic axioms of mathematics are consistent, and that we
> cannot reason about that consistency. And conversely, if we don't
> know who we are (i.e. if our model of who we are is wrong), then we
> would have to admit that we are confused and we really cannot be sure
> that we can reason about anything at all. In other words, the
> consistency of basic mathematics (i.e. the mathematics implicit in our
> models of computation) is implicit in our self-awareness, and trying
> to use reason to gain additional insight about that consistency is a
> rather silly thing to do (and hence, Godel's proof is rather silly).
>
> Clearly I'm making a rather remarkable claim which I can't really
> expect the reader to immediately accept. I'm claiming that a huge
> portion of modern logic (and also of the foundations of mathematics)
> is silly. I'm claiming that logic and the foundations of mathematics
> have lost touch with truth and reality; they've become mere games
> played with symbols, only vaguely resembling an honest search for
> truth. So here are some things to think about which might help the
> reader come to agree with me.
>
> 1) If the purpose of mathematics and logic is to provide tools to help
> us reason about the real world, then shouldn't the most important
> advance in logic in the twentieth century help us reason about the
> real world? That is, after seventy five years, shouldn't some kind of
> practical application in the sciences or in technology have emerged
> for Godel's theorem? Do philosophical arguments to the effect that
> artificial intelligences will never be as intelligent as humans
> qualify as practical applications?
>
> 2) Is Godel's proof falsifiable? If we were to find a reasonably
> powerful consistent formal system that could "prove" its own
> consistency, then that might be a falsification, but how would we
> determine that the formal system is consistent? And if Godel's proof
> is safe from falsification, in what sense does it tell us anything
> about the real world?
>
> 3) Godel believed that an analysis of the Liar paradox leads to a
> vicious circle which cannot be broken using standard logic, and then
> he based his proof on his own understanding of that "paradox". Is it
> plausible that the Liar paradox reveals innate flaws in our ability to
> reason? Shouldn't we be highly suspicious of Godel's proof?
>
> 4) Think about how a sincere seeker of truth would react to criticism,
> and compare that to the way the experts on Godel's theorem react to
> this article. Do sincere seekers of truth call critics "crackpots"?

I think there are some important misunderstandings here. At least the
following:

1. If the Liar implicitly says that he is telling the truth and
explicitly says he is lying, he is able to state a proposition and its
negation by means of the same (non conjunctive) sentence. This is not
the same as a mere contradiction of the form 'p and not p'. It's a
paradox.

2. No one tries to prove the consistency of all of the logical and
mathematical resources one is using, since this would be circular. But
that no consistent axiomatization of PA can DERIVE (better than
'prove') the formula equivalent (under the standard interpretation) to
its own consistency is quite a different fact dealing with formal
systems and formulas. Even if it is related in some way to
circularity, it is not a trivial fact.

3. Popper's falsifiability applies to empirical theories, not to logic
or mathematics.

Regards

LauLuna

unread,
Sep 8, 2007, 8:02:57 AM9/8/07
to
> Regards

I should have added:

4. Telling something about the real world is not a necessary condition
for telling a truth, for there are also truths about the ideal objects
of the ideal realm of mathematics.

Regards

mbstevens

unread,
Sep 8, 2007, 8:56:54 AM9/8/07
to
Curt Welch wrote:

> Well, by "formal language" I mean something to the nature of a language
> which has precise bounds set for the meaning of every symbol in the
> language. In natural language, there are no formal definitions for most
> words and for the meaning of most words as we commonly use them. They are
> simply symbols which we tend to produce under the right conditions and
> symbols that we tend to react to in special ways.
>
> Mathematics is about creating symbols (1, 2, 3, +, =) and giving them a
> formal and precise meaning - normally linking them only to other symbols -
> but never grounding them in reality other than in the reality of language
> processing.

The most basic formal systems are just symbols and rules
for manipulating those symbols. You can optionally add a semantics
to your formal system to _interpret_ it. That is where the
'meanings' come in.

That may have been what you intended to say, but one has to be
careful when using 'meaning'.
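The point that a formal system is just symbols plus manipulation rules, with meaning only arriving via a separate interpretation, can be sketched with a toy rewrite system (a hypothetical example in the spirit of Hofstadter's MIU system; the two rules below are assumptions chosen for illustration):

```python
# A bare formal system: strings of symbols plus rewrite rules, no meanings.
# Rule 1: a string ending in "I" may have "U" appended.
# Rule 2: "Mx" may be rewritten as "Mxx" (double everything after the M).

def derivable(start, steps):
    """Return all strings derivable from `start` in at most `steps` rule applications."""
    seen = {start}
    frontier = {start}
    for _ in range(steps):
        new = set()
        for s in frontier:
            if s.endswith("I"):
                new.add(s + "U")       # rule 1
            if s.startswith("M"):
                new.add(s + s[1:])     # rule 2
        frontier = new - seen
        seen |= new
    return seen

print(sorted(derivable("MI", 2)))
# ['MI', 'MII', 'MIIII', 'MIIU', 'MIU', 'MIUIU']
```

Nothing in the program interprets the strings; a semantics would be a further mapping from strings to claims about something.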

Gonçalo Rodrigues

unread,
Sep 8, 2007, 10:49:05 AM9/8/07
to
On Thu, 06 Sep 2007 07:50:47 -0700, david petry
<david_lawr...@yahoo.com> fed this fish to the penguins:

>1) If the purpose of mathematics and logic is to provide tools to help
>us reason about the real world, then shouldn't the most important
>advance in logic in the twentieth century help us reason about the
>real world? That is, after seventy five years, shouldn't some kind of
>practical application in the sciences or in technology have emerged
>for Godel's theorem? Do philosophical arguments to the effect that
>artificial intelligences will never be as intelligent as humans
>qualify as practical applications?
>

That "the purpose of mathematics and logic is to provide tools to help
us reason about the real world" is *flatly* contradicted by not just
the history of Mathematics but, dare I say, by the experience of every
working Mathematician.

Mathematics is an autonomous discipline of its own, with its own field
of enquiry (let us call it the "Mathematical Universe"). Its
principles cannot be taken ready-made from other disciplines. In
particular, talking about "falsifiable proofs" is either using the
word "falsifiable" in a novel sense or a complete inanity. It
certainly is the case that Mathematics enters into a dialogue with
other disciplines, and a very fruitful one at that (e.g. Physics). It
is also certainly true that Mathematics was born out from the external
world of experience, as a means to impose a human order on it. But
recognizing this is a far cry from affirming external reality as the
centrifugal aim and goal towards which Mathematics spirals. The aim of
Mathematics can only be centripetal; the check on the Mathematician is
not any conformance to truth but conformance to his postulates. Anyone
who quarrels with this should have no business with Mathematics; much
like the people who dislike ghost stories on the argument that "ghosts
do not exist" should have no business with imaginative literature. The
distinction between "Applied" and "Pure Mathematics", however
pedantically it may be expressed, is a reflection of this fact.

Ordinals, say, are no more fictional than the natural number we name
"2" is. Both can be applied to descriptions of reality; these
descriptions may be accurate or not, may be true or not, but this
conformance or truthfulness says nothing about the objects themselves
as elements of the Mathematical Universe. Myself and anyone reading
this, has surely encountered "2 apples" and "2 chattering monkeys" and
"2 idiots posting at sci.math" but if anyone has seen a "2" hopping
around, I would strongly suggest that he should get himself a ticket
into Bedlam. I am being facetious in order to stress the obvious;
mathematical objects are ideas -- and I am not employing the word in
any Platonic sense. Our external world of experience (e.g. "Reality")
should be separated from the systematic study of it (e.g. Physics). It
is not at all clear to me, if there is a separate Mathematical
Universe world distinct from our study of it. In other words, it is
not clear to me if mathematical objects have an independent external
existence, a large part of this non-obviousness certainly being the
result of ignorance and want of any deep thought devoted to the matter.
But whatever their ontological status, the independence of Mathematics
as an intellectual pursuit is a fact. Just as an argument, to be
intellectually honest, must be in accord with the laws of logic, and
that same accordance testifies to the independence of logic itself, so
a mathematical argument must be in accordance with the laws of
mathematics and thus testifies to its independence.

Note: I am well aware that there are arguments not mathematically
sound commonly used (e.g. Feynman integrals), but as it can be readily
recognized, this does not contradict what was said above.

I am not Shelley, so I am not going to make a Defence of Mathematics,
but something of a defence of Mathematics is what is at stake here.
Valuing Mathematics (or for that matter, any other intellectual
discipline, Physics included) on the sole ground of its "practical
applications" can only lead to the barbarization of society. It can
certainly be (and was) tried; e.g., through censorship. Mathematics is
part of liberal education; and "liberal" has the same root as
"liberate" which should remind us of the ethical purpose of education
and culture: in Matthew Arnold's precept, "culture seeks to do away
with classes".

I have already written more than I wanted to, so I will stop here.

Best regards,
G. Rodrigues

david petry

unread,
Sep 8, 2007, 5:16:23 PM9/8/07
to

Simply the fact that for a very large class of assertions, each easily
seen from a probabilistic angle to be almost certainly true, not a
single example of a formal proof has been found, is enough to convince
some of us that it is reasonable to assert that there is no reason to
believe that formal proofs exist for most of those assertions.

david petry

unread,
Sep 8, 2007, 5:37:35 PM9/8/07
to
On Sep 8, 4:25 am, David C. Ullrich <ullr...@math.okstate.edu> wrote:
> On Fri, 07 Sep 2007 10:30:50 -0700, david petry

> If I cared about the statements below I'd have a lot to say
> about some of them. But one question at a time. _How_ does
> anything below show that Godel's _theorem_ is trivial?

Well, as I argued in the article, which apparently you didn't
understand in any way, the statements lead inexorably to the
conclusion that Godel's theorem is trivial. I've done my best to
explain it, and you reject my explanation, and you then demand that I
explain it. Catch 22.

Peter_Smith

unread,
Sep 8, 2007, 5:46:20 PM9/8/07
to
On 8 Sep, 22:16, david petry <david_lawrence_pe...@yahoo.com> wrote:

> Simply the fact that for a very large class of assertions, each easily
> seen from a probabilistic angle to be almost certainly true, not a
> single example of a formal proof has been found, is enough to convince
> some of us that it is reasonable to assert that there is no reason to
> believe that formal proofs exist for most of those assertions.

Of just WHICH "very large class of assertions" is it true both that
each of them is almost certainly true, yet we've actually looked for
formal proofs for many of them and not found such proofs?

david petry

unread,
Sep 8, 2007, 6:01:02 PM9/8/07
to

The statement "2+2=4" must be equivalent to "<implicitly> I am telling
the truth; <explicitly> 2+2=4"


> > 2. No one tries to prove the consistency of all of the logical and
> > mathematical resources one is using, since this would be circular. But
> > that no consistent axiomatization of PA can DERIVE (better than
> > 'prove') the formula equivalent (under the standard interpretation) to
> > its own consistency is quite a different fact dealing with formal
> > systems and formulas. Even if it is related in some way to
> > circularity, it is not a trivial fact.

OK, so you agree that it would be silly to try to prove the
consistency of all the mathematical resources one is using. Fine. I'm
claiming that PA is part of the resources for which it is silly to
attempt a consistency proof, and ultimately the reason for this is
that PA is part of our self-awareness (i.e. part of our model of who
we are as intelligent, thinking beings). That's the essence of the
article I wrote.


> > 3. Popper's falsifiability applies to empirical theories, not to logic
> > or mathematics.

If mathematics is deemed to tell us something about the real world,
then falsifiability can be applied to it. Fermat's Last Theorem does
tell us that if we were to write and run a computer program to search
for counterexamples, it would never halt. That is a falsifiable
assertion.
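The counterexample-search reading of Fermat's Last Theorem can be written down directly. A sketch (the bound is an assumption so the demo terminates; the theorem predicts the unbounded search never halts with a result):

```python
# FLT as a prediction about a program: a search for a, b, c > 0 with
# a^n + b^n == c^n for n = 3. The theorem says this search, run without
# a bound, never finds anything; here it is bounded for illustration.

def fermat_counterexamples(limit, n=3):
    """Return all (a, b, c) with a^n + b^n == c^n, for a, b, c up to limit."""
    nth_powers = {c ** n: c for c in range(1, limit + 1)}
    hits = []
    for a in range(1, limit + 1):
        for b in range(a, limit + 1):
            total = a ** n + b ** n
            if total in nth_powers:
                hits.append((a, b, nth_powers[total]))
    return hits

print(fermat_counterexamples(60))  # [] -- no counterexample below the bound
```

Setting n=2 instead turns up Pythagorean triples, which shows the search itself is sound.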


> I should have added:
>
> 4. Telling something about the real world is not a necessary condition
> for telling a truth, for there are also truths about the ideal objects
> of the ideal realm of mathematics.

That's somewhat fuzzy and possibly not really true.


david petry

unread,
Sep 8, 2007, 6:31:10 PM9/8/07
to
On Sep 8, 7:49 am, Gonçalo Rodrigues <nos...@invalid.mail> wrote:
> On Thu, 06 Sep 2007 07:50:47 -0700, david petry
> <david_lawrence_pe...@yahoo.com> fed this fish to the penguins:

>
> >1) If the purpose of mathematics and logic is to provide tools to help
> >us reason about the real world, then shouldn't the most important
> >advance in logic in the twentieth century help us reason about the
> >real world? That is, after seventy five years, shouldn't some kind of
> >practical application in the sciences or in technology have emerged
> >for Godel's theorem? Do philosophical arguments to the effect that
> >artificial intelligences will never be as intelligent as humans
> >qualify as practical applications?
>
> That "the purpose of mathematics and logic is to provide tools to help
> us reason about the real world" is *flatly* contradicted by not just
> the history of Mathematics but, dare I say, by the experience of every
> working Mathematician.

Dare I say, that many great mathematicians were motivated by the
desire to provide tools to help us reason about the real world!

> ... the check on the Mathematician is
> not any conformance to truth but conformance to his postulates. Anyone
> who quarrels with this should have no business with Mathematics; much
> like the people who dislike ghost stories on the argument that "ghosts
> do not exist" should have no business with imaginative literature.

You got it! Well put.

One quibble here: if those who don't like certain ghost stories (e.g.
the "god" ghost) were to claim that such ghost stories don't belong in
the public universities, I would tend to sympathize with them.

> Valuing Mathematics (or for that matter, any other intellectual
> discipline, Physics included) on the sole ground of its "practical
> applications" can only lead to the barbarization of society. It can
> certainly be (and was) tried; e.g., through censorship. Mathematics is
> part of liberal education; and "liberal" has the same root as
> "liberate" which should remind us of the ethical purpose of education
> and culture: in Matthew Arnold's precept, "culture seeks to do away
> with classes".

There might be a tiny bit of hypocrisy in what you say. The
mathematicians take words out of natural language (e.g. truth, proof,
exists, theory, logic) and give them new meanings which are accessible
only to the sophisticates. The mathematicians speak a different
language than the rest of us (by this I mean that the pure
mathematicians speak a different language than those who actually
apply mathematics). That divides our society. It creates "classes",
rather than doing away with them.

david petry

unread,
Sep 8, 2007, 6:40:18 PM9/8/07
to

For the large class of assertions I had in mind when I wrote that, it
is so blindingly obvious to most mathematicians that there is no
reason to think that formal proofs exist for most of the assertions in
the class, that there has been no effort to search for proofs for most
of them.

Peter_Smith

unread,
Sep 8, 2007, 6:54:46 PM9/8/07
to

WHICH such large class do you have in mind??? I'm still entirely in
the dark (I may have missed something in your many posts, but if so
can I please have a recap??).

david petry

unread,
Sep 8, 2007, 6:55:11 PM9/8/07
to
> mathematics which is connected to truth

The precise statement "there is an infinity of primes" is missing a
key ingredient which is needed to make it meaningful. Informally, that
is not much of a problem, since any mathematician could provide that
missing key ingredient when requested to do so.

There's just something odd about the way you phrase things; "the
Successor Axiom *is* part of the mathematics which is connected to
truth" sounds so odd to my ear that I suspect there's some hidden
meaning I'm missing.


david petry

unread,
Sep 8, 2007, 7:19:11 PM9/8/07
to

We could easily come up with lots of large classes, but here's one in
particular: the class of Goldbach-like assertions.

Recall that Goldbach's conjecture says that every even integer larger
than 4 is the sum of two odd primes. Now, heuristically speaking, the
probability that a given odd number 'N' is prime is 2/log(N). We can
easily create other sets of odd numbers for which the probability that
a given odd number 'N' is an element of the set is also 2/log(N).
Then we could define a Goldbach-like assertion to be an assertion of
the form "Every even number greater than K is the sum of two odd
numbers chosen from the set Q", where 'K' is some positive integer,
and 'Q' is a set of odd numbers with a probability distribution
similar to the distribution of the primes.

To see why such assertions are almost certainly true is an easy
exercise in basic probability; there are lots of prime numbers, and so
the probability that for a large even integer 'M', both 'k' and 'M-k'
are prime for some 'k' is very high, etc.
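[Editorial note: the probability exercise alluded to above can be sketched in a few lines of Python. This is an illustration added by the editor, not code from the thread; it compares the actual number of Goldbach representations of an even number M with the heuristic expectation obtained by summing (2/log k)(2/log(M-k)) over odd k up to M/2.]

```python
import math

def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, math.isqrt(n) + 1))

def goldbach_count(m):
    """Actual number of ways to write even m as a sum of two odd primes
    (each unordered pair counted once, since k runs only up to m/2)."""
    return sum(1 for k in range(3, m // 2 + 1, 2)
               if is_prime(k) and is_prime(m - k))

def heuristic_count(m):
    """Heuristic expectation, using the post's 2/log(N) density for the
    chance that a given odd number N lies in the prime-like set."""
    return sum((2 / math.log(k)) * (2 / math.log(m - k))
               for k in range(3, m // 2 + 1, 2))
```

For large even m the heuristic expectation grows roughly like m/log(m)^2, which is the sense in which such assertions are "almost certainly true" while individual small counterexamples remain possible.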

Of course, it is also obvious that some of the Goldbach-like
assertions are false, if that needs to be pointed out.

Jesse F. Hughes

unread,
Sep 8, 2007, 8:26:07 PM9/8/07
to
david petry <david_lawr...@yahoo.com> writes:

> I've often wondered why you have so much difficulty understanding the
> things I write. They seem quite obvious to me. Here are a couple of
> fundamental ideas which I accept as true, which evidently you've never
> even considered. So, if you would, consider them now, and if you think
> they're not valid, try to explain why you think so.

Think they're not valid?

These statements are far too vague and meaningless to ask whether they
are true (or valid). The first one, for instance: what is it supposed
to mean?

> 1) The world of computation is real.
>
> 2) We (our minds) live within the world of computation.
>
> 3) A very large and important part of mathematics can be accurately
> described as the science of phenomena observable in the world of
> computation.
>
> 4) It's reasonable to ask what assumptions are implicit in our own
> self-awareness, and then to apply those assumptions when reasoning
> about the foundations of mathematics.
>
> 5) If we think of the world of computation as but a small part of a
> larger world of the infinite, we are fantasizing. Likewise, if we
> think of ourselves as somehow living above the world of computation,
> looking down upon it, we are deluding ourselves.

--
Jesse F. Hughes
Me: "Quincy, there's only *one* Truth, isn't there?"
Quincy (age 4): "Yeah, and it's *mine*."
-- A lesson in postmodernism goes awry.

abo

unread,
Sep 9, 2007, 2:13:49 AM9/9/07
to
On Sep 9, 12:55 am, david petry <david_lawrence_pe...@yahoo.com>

Well, you're the one who has written things like, "we accept the claim
that mathematics has strong connections to truth and reality!"

Okay, you have said that the Successor Axiom holds (i.e. is true), so
presumably you hold it is meaningful. And you have said "There is an
infinity of primes" is meaningless.

Anyway, the whole point of the cross-examination is that you have a
wonderful theory about mathematics where there is an importance to
theories being tested and falsifiability, and you have an intuition
about certain mathematical statements which comes from an entirely
different perspective (you "can't imagine" the Successor Axiom not
being true). The problem is that the two just don't match.

You look at "There is an infinity of primes" from the first angle
(it's not okay) and you declare "For every number there is a natural
number which is its successor" (the Successor Axiom) is true looking
at it from the second angle (it's okay). Yet "There is an infinity of
primes" and the Successor Axiom are logically equivalent! (To be
precise, they are equivalent over a theory which is weaker than PA and
which uses axioms which are essentially definitional).

Excuse me if I label this an incoherence and an error. Okay, maybe
it looks like I'm just trying to play "gotcha", but I hope my point
can be more constructive: you *can* develop a coherent view, but you
will need to make a choice, either expanding one angle, or reducing
the other.

Peter_Smith

unread,
Sep 9, 2007, 3:47:49 AM9/9/07
to

You seem to be changing the subject. You were asked

> Would you like to share with us
> just *one* "simple and concrete" sentence S of first-order arithmetic
> of Goldbach type such that neither S nor not-S is provable in first-
> order Peano Arithmetic??

Of Goldbach type here means a \Pi_1 sentence (if that needs to be
pointed out! ;-))

Goldbach's original conjecture is of Goldbach type (of course)! But
your other Goldbach-like assertions seem to essentially mention
infinite sets of numbers with certain characteristics, and hence are
not \Pi_1 sentences of first-order arithmetic on the face of it. So
you owe us a proof that they can be stated by other \Pi_1 sentences
using coding tricks or something. Otherwise they are irrelevant to the
question asked.

David C. Ullrich

unread,
Sep 9, 2007, 6:33:49 AM9/9/07
to
On Sat, 08 Sep 2007 14:37:35 -0700, david petry
<david_lawr...@yahoo.com> wrote:

>On Sep 8, 4:25 am, David C. Ullrich <ullr...@math.okstate.edu> wrote:
>> On Fri, 07 Sep 2007 10:30:50 -0700, david petry
>
>> If I cared about the statements below I'd have a lot to say
>> about some of them. But one question at a time. _How_ does
>> anything below show that Godel's _theorem_ is trivial?
>
>Well, as I argued in the article, which apparently you didn't
>understand in any way, the statements lead inexorably to the
>conclusion that Godel's theorem is trivial. I've done my best to
>explain it, and you reject my explanation, and you then demand that I
>explain it. Catch 22.

Eh, no. You _stated_ that those statements lead to the conclusion
that Godel's theorem is trivial. You haven't explained how, in
spite of repeated requests from me and others.

>
>> >1) The world of computation is real.
>>
>> >2) We (our minds) live within the world of computation.
>>
>> >3) A very large and important part of mathematics can be accurately
>> >described as the science of phenomena observable in the world of
>> >computation.
>>
>> >4) It's reasonable to ask what assumptions are implicit in our own
>> >self-awareness, and then to apply those assumptions when reasoning
>> >about the foundations of mathematics.
>>
>> >5) If we think of the world of computation as but a small part of a
>> >larger world of the infinite, we are fantasizing. Likewise, if we
>> >think of ourselves as somehow living above the world of computation,
>> >looking down upon it, we are deluding ourselves.


************************

David C. Ullrich

toni.l...@gmail.com

unread,
Sep 9, 2007, 8:01:40 AM9/9/07
to
On 9 syys, 03:26, "Jesse F. Hughes" <je...@phiwumbda.org> wrote:

> david petry <david_lawrence_pe...@yahoo.com> writes:
> > I've often wondered why you have so much difficulty understanding the
> > things I write. They seem quite obvious to me. Here are a couple of
> > fundamental ideas which I accept as true, which evidently you've never
> > even considered. So, if you would, consider them now, and if you think
> > they're not valid, try to explain why you think so.
>
> Think they're not valid?
>
> These statements are far too vague and meaningless to ask whether they
> are true (or valid). The first one, for instance: what is it supposed
> to mean?
>
> > 1) The world of computation is real.

You have to remember that David Petry lives in the "world of
computation", where every sentence is either true or false, and
furthermore every sentence is true unless it's uttered by a sinister
hypocritical lying Cantorian.

Wolf Kirchmeir

unread,
Sep 9, 2007, 4:27:50 PM9/9/07
to


Er, care to provide an example? Unfortunately, I'm not a telepath, so I
don't know what you have/had/will have in mind.

david petry

unread,
Sep 10, 2007, 12:04:26 AM9/10/07
to
On Sep 8, 11:13 pm, abo <dkfjd...@yahoo.com> wrote:

> Yet "There is an infinity of
> primes" and the Successor Axiom are logically equivalent! (To be
> precise, they are equivalent over a theory which is weaker than PA and
> which uses axioms which are essentially definitional).

What I've been suggesting is that if you claim that something exists,
you have to be able to tell me how much work I have to do to find it.
The assertion that there are infinitely many primes does not meet that
requirement.

As I have stated, the Successor Axiom may be said to hold within the
"relevant" universe, where the notion of "relevant" puts a bound on
the size of the universe we are considering.

david petry

unread,
Sep 10, 2007, 12:22:44 AM9/10/07
to

I chose to use "Goldbach-like" conjectures precisely so that it would
be obvious that they are of Goldbach type. The infinite sets used in
them can be as concrete as the infinite set of primes.

Here's one example of a set 'Q' (as defined above) that could be used
in a "Goldbach-like" assertion: define Q = {q_n} where q_n = p_n +
2*[sqrt(n)], where p_n is the n'th prime, and [r] is the greatest
integer less than or equal to 'r'.
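[Editorial note: the set Q defined above is concrete enough to compute. The sketch below is an editor's illustration, not code from the thread. One wrinkle: with indexing from n = 1, the first element q_1 = 2 + 2*[sqrt(1)] = 4 is even, so the "set of odd numbers" presumably intends to start from the odd primes; the remaining q_n are all odd.]

```python
import math

def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, math.isqrt(n) + 1))

def first_primes(count):
    """The first `count` primes, in order."""
    out, n = [], 2
    while len(out) < count:
        if is_prime(n):
            out.append(n)
        n += 1
    return out

# q_n = p_n + 2*[sqrt(n)], where [r] is the floor and n starts at 1.
primes = first_primes(200)
q = [p + 2 * math.isqrt(n) for n, p in enumerate(primes, start=1)]
```

Since 2*[sqrt(n)] grows far more slowly than p_n, the density of Q tracks that of the primes, which is the property the Goldbach-like assertion needs.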


abo

unread,
Sep 10, 2007, 1:15:13 AM9/10/07
to
On Sep 10, 6:04 am, david petry <david_lawrence_pe...@yahoo.com>
wrote:

1/ The Successor Axiom holds in the relevant universe.
2/ "There are an infinite number of primes" does not hold in the
relevant universe.
3/ The two are logically equivalent.

Do you disagree with any of these three assertions?

david petry

unread,
Sep 10, 2007, 1:20:06 AM9/10/07
to
On Sep 7, 1:47 pm, c...@kcwc.com (Curt Welch) wrote:

> On the idea of multiple infinities and trying to tie it to reality, one way
> I looked at the issue was to tie the concept of an infinite set to the
> concept of a process which never ended. So a program which never
> terminates is a real world example of an infinite set. [...]

What you are describing is sometimes called the "potential" infinite.

http://groups.google.com/group/sci.math/msg/600ad9693523ba44

You might also be interested in this article:

http://groups.google.com/group/sci.math/msg/859d0f3750a0e9dc


Peter_Smith

unread,
Sep 10, 2007, 3:35:09 AM9/10/07
to

Ah, thanks for the clarification. I thought you meant something more
exotic. But then if you do have in mind assertions so very closely
related to the original Goldbach conjecture, then I'm not at all sure
that they really help your case. For, if we are in the business of
probabilistic considerations, it looks to me a probabilistically good
bet that if there *is* a proof in PA of original-Goldbach, there will
also be variant proofs of the true variant-Goldbachs that are so very
closely related. So we are back more or less exactly where we
started. No one knows whether Goldbach's original (and together with
that, your elementary variants) are provable-in-PA. So it (and they)
can't be given as examples of elementary Pi_1 truths of arithmetic
which definitely are independent of PA.

Herman Jurjus

unread,
Sep 10, 2007, 4:26:04 AM9/10/07
to

The way i understand Petry, the successor axiom is equivalent to
something like 'for every natural number, there is a larger prime'.
Not to 'there is an infinitude of primes'.
If you can't distinguish between these latter two in your system,
then it might be that your system simply is 'not a fair tool' to judge
Petry's theory. That is, perhaps your system already fails to make the
crucial distinction that Petry is looking for. (That's nothing to be
ashamed of; virtually any axiomatic system suffers from the same.)

--
Cheers,
Herman Jurjus

abo

unread,
Sep 10, 2007, 4:35:42 AM9/10/07
to
On Sep 10, 10:26 am, Herman Jurjus <hjur...@hetnet.nl> wrote:
>
> The way i understand Petry, the successor axiom is equivalent to
> something like 'for every natural number, there is a larger prime'.
> Not to 'there is in infinitude of primes'.

As I understand Petry, he is not okay with (A) "for every natural
number, there is a larger prime," but is okay with (B) "for every
natural number, there is a larger prime below some sufficiently large
bound." The former is what is equivalent to the Successor Axiom.
Perhaps he could clarify. If he is okay with (A), is (A) also
falsifiable?

Han de Bruijn

unread,
Sep 10, 2007, 5:36:40 AM9/10/07
to
Curt Welch wrote:

> Humans on the other hand, because of the complexity of all the independent
> but connected parts of the brain, produce far more complex and interesting
> reactions to our environment. But at the core, we are just a complex
> reaction machine.

Objection! Humans are _not_ machines. But machines may be like humans,
because they bear the stamp of their creators.

Han de Bruijn

Han de Bruijn

unread,
Sep 10, 2007, 5:43:44 AM9/10/07
to
David C. Ullrich wrote:

> Right. It seems to me that what you wrote is nonsense, and the
> only possible conclusion is I just didn't understand what you
> said. And of course the same applies no matter how many people
> explain the nonsensical nature of your positions - _nobody_
> has even bothered to read what you wrote, right?

How can you say that, if elementary research reveals that David Petry's
threads are among the most successful in 'sci.math'? Literally ...

Han de Bruijn

David C. Ullrich

unread,
Sep 10, 2007, 6:27:42 AM9/10/07
to

Um. First, how can I say _what_, exactly? Second, how do you define
"successful"? His threads are certainly "successful" in a sense -
in that sense James Harris's threads are even more successful...

>Han de Bruijn


************************

David C. Ullrich

Han de Bruijn

unread,
Sep 10, 2007, 6:28:53 AM9/10/07
to
David C. Ullrich wrote:

> On Mon, 10 Sep 2007 11:43:44 +0200, Han de Bruijn
> <Han.de...@DTO.TUDelft.NL> wrote:
>
>>David C. Ullrich wrote:
>>
>>>Right. It seems to me that what you wrote is nonsense, and the
>>>only possible conclusion is I just didn't understand what you
>>>said. And of course the same applies no matter how many people
>>>explain the nonsensical nature of your positions - _nobody_
>>>has even bothered to read what you wrote, right?
>>
>>How can you say that, if elementary research reveals that David Petry's
>>threads are among the most successful in 'sci.math'? Literally ...
>
> Um. First, how can I say _what_, exactly?

You wrote:

> _nobody_ has even bothered to read what you wrote, right?

That's obviously false.

> Second, how do you define
> "successful"? His threads are certainly "successful" in a sense -
> in that sense James Harris's threads are even more successful...

Full of successors. And yes.

Han de Bruijn

Han de Bruijn

unread,
Sep 10, 2007, 6:35:00 AM9/10/07
to
david petry wrote:
>
> [ ... ] I'm claiming that a huge
> portion of modern logic (and also of the foundations of mathematics)
> is silly. I'm claiming that logic and the foundations of mathematics
> have lost touch with truth and reality; they've become mere games
> played with symbols, only vaguely resembling an honest search for
> truth. [ ... ]

Yes. Two hundred and forty pages, found here:

http://math.boisestate.edu/~holmes/holmes/head.pdf

Han de Bruijn


Han de Bruijn

unread,
Sep 10, 2007, 9:19:30 AM9/10/07
to
Jesse F. Hughes wrote:

> Han de Bruijn <Han.de...@DTO.TUDelft.NL> writes:
>
>>David C. Ullrich wrote:
>>
>>>On Mon, 10 Sep 2007 11:43:44 +0200, Han de Bruijn
>>><Han.de...@DTO.TUDelft.NL> wrote:
>>>
>>>>David C. Ullrich wrote:
>>>>
>>>>>Right. It seems to me that what you wrote is nonsense, and the
>>>>>only possible conclusion is I just didn't understand what you
>>>>>said. And of course the same applies no matter how many people
>>>>>explain the nonsensical nature of your positions - _nobody_
>>>>>has even bothered to read what you wrote, right?
>>>>
>>>>How can you say that, if elementary research reveals that David Petry's
>>>>threads are among the most successful in 'sci.math'? Literally ...
>>>
>>>Um. First, how can I say _what_, exactly?
>>
>>You wrote:
>>
>>>_nobody_ has even bothered to read what you wrote, right?
>>
>>That's obviously false.
>

> David didn't claim that nobody reads what Petry wrote. He claimed
> that this is an apparent implication of Petry's own reaction to
> David's questions.
>
> The fact that the quoted portion ends with the question "right?"
> should have been a clue that it is not an original claim on David's
> part.

In that case, David Ullrich's comments are somewhat too subtle for an
average Dutchman, it seems.

>>>Second, how do you define
>>>"successful"? His threads are certainly "successful" in a sense -
>>>in that sense James Harris's threads are even more successful...
>>
>>Full of successors. And yes.
>

> It's a particularly odd measure of success.

Han de Bruijn

david petry

unread,
Sep 10, 2007, 10:53:50 AM9/10/07
to
On Sep 10, 12:35 am, Peter_Smith <ps...@cam.ac.uk> wrote:
> On 10 Sep, 05:22, david petry <david_lawrence_pe...@yahoo.com> wrote:
>
> > On Sep 9, 12:47 am, Peter_Smith <ps...@cam.ac.uk> wrote:
> > I chose to use "Goldbach-like" conjectures precisely so that it would
> > be obvious that they are of Goldbach type. The infinite sets used in
> > them can be as concrete as the infinite set of primes.
>
> > Here's one example of a set 'Q' (as defined above) that could be used
> > in a "Goldbach-like" assertion: define Q = {q_n} where q_n = p_n +
> > 2*[sqrt(n)], where p_n is the n'th prime, and [r] is the greatest
> > integer less than or equal to 'r'.
>
> Ah, thanks for the clarification. I thought you meant something more
> exotic. But then if you do have in mind assertions so very closely
> related to the original Goldbach conjecture, then I'm not at all sure
> that they really help your case. For, if we are in the business of
> probabilistic considerations, it looks to me a probabilistically good
> bet that if there *is* a proof in PA of original-Goldbach, there will
> also be variant proofs of the true variant-Goldbachs that are so very
> closely related.

No. Perhaps here's a better example: let R(j,k) be a well-defined
pseudo-random number generator that gives an integer between 0 and
'k'. Then let q_n = p_n + R(n, [sqrt(n)]).

Note that there are infinitely many well-defined pseudo-random number
generators.
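[Editorial note: one concrete way to realize such an R(j,k) is with a hash-based map; the specific construction below is the editor's illustration, not Petry's, but any fixed, reproducible generator would do.]

```python
import hashlib
import math

def R(j, k):
    """A fixed, well-defined 'pseudo-random' map into {0, ..., k}.
    Deterministic: the same (j, k) always gives the same value."""
    h = int(hashlib.sha256(str(j).encode()).hexdigest(), 16)
    return h % (k + 1)

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, math.isqrt(n) + 1))

def first_primes(count):
    out, n = [], 2
    while len(out) < count:
        if is_prime(n):
            out.append(n)
        n += 1
    return out

# q_n = p_n + R(n, [sqrt(n)]): a prime-like set perturbed by the fixed map.
q = [p + R(n, math.isqrt(n)) for n, p in enumerate(first_primes(100), start=1)]
```

Swapping in a different hash (or any other well-defined generator) yields a different set Q with the same density, which is the sense in which there are infinitely many such Goldbach-like assertions.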


> So we are back more or less exactly where we
> started.

No.

david petry

unread,
Sep 10, 2007, 11:17:10 AM9/10/07
to

That's not really the point.

Here's a valid statement:

As long as we stay away from numbers that are too big to think about
in any reasonable sense, every integer has a successor.

Here's another valid statement:

As long as we stay away from numbers that are too big to think about
in any reasonable sense, for every prime p, there is another prime p'
< p^p (to be sure, much smaller bounds are possible).

The reason we need the bound (i.e. p' < p^p) is that without it, we
leave open the possibility that the next prime is too large to think
about in any reasonable sense, which would render the assertion
meaningless.
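[Editorial note: the bounded claim is mechanically checkable, which is presumably the point of stating it with a bound. A sketch (editor's illustration, not from the thread); Bertrand's postulate in fact guarantees the far sharper bound p' < 2p for p > 1.]

```python
import math

def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, math.isqrt(n) + 1))

def next_prime(p):
    """Smallest prime strictly greater than p, found by direct search."""
    n = p + 1
    while not is_prime(n):
        n += 1
    return n

# Bertrand's postulate gives next_prime(p) < 2*p for p > 1,
# comfortably inside the generous bound p**p asserted above.
```

Because the search is bounded in advance, the assertion "there is a prime p' with p < p' < p^p" can be settled by a terminating computation for each p, which is what makes it meaningful on the post's own terms.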

In common usage, there is no need to explicitly say "as long as we
stay away from numbers that are too big to think about in any
reasonable sense", but that common sense assumption should always be
taken to be implicit.


david petry

unread,
Sep 10, 2007, 11:49:07 AM9/10/07
to
On Sep 9, 3:33 am, David C. Ullrich <ullr...@math.okstate.edu> wrote:
> On Sat, 08 Sep 2007 14:37:35 -0700, david petry

> >> _How_ does
> >> anything below show that Godel's _theorem_ is trivial?
>
> >Well, as I argued in the article, which apparently you didn't
> >understand in any way, the statements lead inexorably to the
> >conclusion that Godel's theorem is trivial. I've done my best to
> >explain it, and you reject my explanation, and you then demand that I
> >explain it. Catch 22.
>
> Eh, no. You _stated_ that those statements lead to the conclusion
> that Godel's theorem is trivial. You haven't explained how, in
> spite of repeated requests from me and others.


If you want to understand what I'm saying, I'll be happy to help you.
But first, you have to show me that you are making an effort.

If you want to fail to understand what I am saying, I'll be happy to
let you succeed. I've already witnessed the effort you are making in
that direction.


abo

unread,
Sep 10, 2007, 12:26:32 PM9/10/07
to
On Sep 10, 5:17 pm, david petry <david_lawrence_pe...@yahoo.com>

ok but that's not the Successor Axiom, which says that every number has
a successor (which is a number). So perhaps I am incorrect, but it
seems to me that you would like to deny (or at least, refuse to
assert), the Successor Axiom (in its full glory). This is a perfectly
coherent position, if this is in fact your position, and coheres
(imho) much better with your dictum about truth and reality.

As to Godel, you will find, I think, that in a system without the
Successor Axiom, it is not possible to prove that Peano Arithmetic
cannot prove Con(PA).

Jesse F. Hughes

unread,
Sep 10, 2007, 12:43:36 PM9/10/07
to
david petry <david_lawr...@yahoo.com> writes:

>
> If you want to understand what I'm saying, I'll be happy to help you.
> But first, you have to show me that you are making an effort.

Keen.

What does "The world of computation is real" mean?

--
Jesse F. Hughes

"Do not click any hyperlinks that you do not trust. Type them in the
Address bar yourself." -- Microsoft gives security advice.

david petry

unread,
Sep 10, 2007, 12:56:01 PM9/10/07
to
On Sep 10, 9:26 am, abo <dkfjd...@yahoo.com> wrote:
> On Sep 10, 5:17 pm, david petry <david_lawrence_pe...@yahoo.com>
> wrote:

> > Here's a valid statement:
>
> > As long as we stay away from numbers that are too big to think about
> > in any reasonable sense, every integer has a successor.
>

> So perhaps I am incorrect, but it
> seems to me that you would like to deny (or at least, refuse to
> assert), the Successor Axiom (in its full glory). This is a perfectly
> coherent position, if this is in fact your position, and coheres
> (imho) much better with your dictum about truth and reality.


The quote that Aatu includes in every one of his posts, if I translate
it correctly, is what I like to call "the fundamental principle of
reasonableness": if we can't reasonably talk about something, let's
agree to ignore it. Then we don't have to distinguish between the
Successor Axiom in all its glory, and the plain and simple Successor
Axiom. And it's the failure to accept this principle of reasonableness
that pushes mathematics into a fantasy world.

"Wovon man nicht sprechen kann, daruber muss man schweigen"
- Ludwig Wittgenstein, Tractatus Logico-Philosophicus
