The halting problem is straightforward to state, and its undecidability
is straightforward to prove - a simple enumeration and diagonal
argument, much like the proof that the reals are uncountable. Like all
mathematics, it of course relies on axioms.
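To make the diagonal step concrete, here is a minimal sketch in Python,
with a purely hypothetical "halts" decider that we assume exists only
for the sake of contradiction:

    # Hypothetical decider - assumed to exist for the sake of argument.
    def halts(program, arg):
        ...  # supposed to return True iff program(arg) halts

    def diagonal(program):
        # Do the opposite of whatever the decider predicts for a
        # program applied to its own text:
        if halts(program, program):
            while True:     # predicted to halt -> loop forever
                pass
        return None         # predicted to loop -> halt at once

Asking what halts(diagonal, diagonal) should return gives a
contradiction either way, so no such "halts" can exist.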
"Artificial intelligence" is a far more nebulous and abstract concept.
You need a book to define what you mean by it, before trying to prove
anything about it.
And surely you are not suggesting that, because one smart person
published a "proof" that you say turned out to be wrong (I don't know
enough to say whether it really was), this casts doubt on a completely
different proof of a completely different hypothesis by a completely
different person?
>
> But related to the halting problem: Turing's impossibility proof relied
> crucially on the ability to construct a special program that was
> undecidable for a given fixed decider machine that /could be numbered/
> and henceforth referred to via the number.
Yes. For any given computation model, programs can be numbered. (If
you are happy to assume the Church-Turing thesis, then each of these
models is equivalent. If you are not happy with that assumption, then
you can prove the halting problem undecidable for each of them with the
same method.)
If you could find a computation model whose programs cannot be
enumerated, then the proof would not apply. (But neither would the
thesis.) No realisable computation model more powerful than a Turing
machine has been found.
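The numbering itself is elementary. A sketch, assuming nothing beyond
programs being finite strings over a finite alphabet (printable ASCII
here is just a stand-in for whatever alphabet the model uses):

    from itertools import count, product

    # Stand-in alphabet; any finite alphabet works the same way.
    ALPHABET = [chr(c) for c in range(32, 127)]

    def all_programs():
        # Length-then-lexicographic order: every finite string - and
        # hence every program text - appears at one finite index.
        for n in count(1):
            for chars in product(ALPHABET, repeat=n):
                yield "".join(chars)

The index at which a program's text appears in this list is the
"number" that Turing's proof refers to.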
> But if that decider machine
> is a human, well, you can't create a fixed undecidable program relative
> to the human because the human evolves and changes and has no fixed
> completely defining number. And ditto for an artificial intelligence.
Humans are limited. "Mathematician with pen and paper executing a
finite set of algorithms" is one of the enumerable computation models
that is equivalent to a Turing machine. The same would apply to
artificial intelligence (however it is defined).
>
> So not only was Penrose wrong (which is obvious from his silly result)
> but there is necessarily something fishy in Turing's proof (not so
> obvious!), because it can't deal with a human as decider.
>
As I say, I don't know Penrose's argument here well enough to comment -
except that it depends totally on the definition used for "artificial
intelligence".
> Namely, Turing did not account for evolving deciders -- I believe a
> very simple example could be one that is influenced by random numbers.
>
Random deciders can sometimes give you the answer to something that is
uncomputable - but since they are not deterministic, they cannot solve
the problem in general.
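As a toy illustration (my own sketch, not anyone's actual proposal): a
coin-flipping "decider" answers every instance, and is even right about
half the time, but no single answer it gives can be trusted, so it
decides nothing:

    import random

    # A "decider" by coin flip: always answers, sometimes correctly,
    # never reliably.
    def guess_halts(program, arg):
        return random.choice([True, False])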
All realisable computation is limited. There are a number of physical
constraints on calculations - no computation system (human, electronic,
quantum computer, artificial intelligence, etc.) can break these
constraints. (Even as we refine physical theories to go beyond quantum
mechanics, these will not be broken - in the same way that more nuanced
theories of gravity do not allow bricks to float.) There are limits to
calculation speed, information density, computation energy, and so on.
In particular, finite information density and finite size mean that any
realisable program can be described by a finite string - and finite
strings are countable. So programs are countable, regardless of the
way they are computed.
>
>> - if he thinks he has found such an algorithm, he is
>> wrong.
>
> Well I haven't read what Olcott writes, but see above.
>
He thinks that not only has he found an algorithm, but he has a program
that implements it (I think - I haven't read his stuff in detail either).
> It could well be that Olcott, like I, doesn't contest the validity of
> Turing's proof in itself, but contests the assumption that a decider can
> always be referred to by a fixed number -- e.g. a human or AI can't.
>
> Turing's proof is still, of course, valid under the assumptions he used.
>
As I showed above, that assumption is valid for any realisable method of
computation (in particular, on the model used by Olcott - programs on
real computers).
Anyway, the whole point of Turing's proof is that if you take the set of
all possible programs in some countable model of computation, then the
halting decider for those programs is not in that set. It does not say
that there is no such thing as a halting decider - it says that a
halting decider is not a computable function. If you are going to
imagine the possibility of "hyper-computers" that can evaluate
incomputable numbers, then maybe such a machine could be a halting
decider for computable functions. (But you can't create such a machine.)
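Put the other way round (again a sketch, with the candidate decider
passed in as an ordinary Python function - a simplification of Turing's
encoding): the refutation is uniform in the decider, so every candidate
decider in the set has a counterexample in the set.

    def make_counterexample(decider):
        # For any candidate decider, build a program that does the
        # opposite of whatever the decider predicts about it.
        def spite(arg):
            if decider(spite, arg):
                while True:     # predicted to halt - so loop
                    pass
            return None         # predicted to loop - so halt
        return spite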
>
>> He is not the first amateur to think he has done something
>> impossible in mathematics - there are countless people who think they
>> have trisected an angle, squared the circle, counted the reals, and so
>> on.)
>>
>>
>> And since the field of mathematics is so huge, and the halting problem
>> is only one small part of the (relatively) small field of computation
>> theory, saying "nearly every mathematician should be interested in a
>> halting deciding algorithm" is like saying "nearly every sportsperson
>> should be interested in elephant polo".  (If Olcott had /really/ proved
>> Turing wrong, it would be a different matter - and of interest to many
>> people.)
>
> Cheers,
>
> - Alf
>
> PS: Alan Turing might seem to be so great an authority that nothing he
> did can be questioned.
We are talking about mathematics here, not American politics or TV
fashion shows. Authorities are /always/ questioned in mathematics and
science. There is no greater aim for a mathematician or a scientist
than to prove a famous theorem wrong.
And the uncomputability of the halting problem is simple enough that
anyone studying computation theory will prove it to their own
satisfaction in their first year at university. It's not something we
have to take on trust.
> But remember that Turing had a pretty negative
> view of Muslims, that he also had some racist ideas, and that he
> believed in telepathy and other extrasensory phenomena. So we're talking
> about a flawed genius.
Find me a genius (or anyone else) who is not flawed, and perhaps that
argument would not be so laughable.
> But he was crucial for the allied victory in
> WWII, and was then wrongfully persecuted due to his sexuality so that he
> killed himself, and I believe he was therefore later elevated to near
> perfection so that he now can't be questioned -- but actually, flawed.
There is no doubt that Turing was an interesting person, who lived an
interesting life. But his work on computability is famous and important
because of what it is, not because of who Turing was. And his proofs
are correct if and only if they are correct - not because he was gay!