
A non-halting decider is defined


olcott (Oct 28, 2020, 7:43:31 PM)

Intuitively, a decider should be a Turing machine that given an input,
halts and either accepts or rejects, relaying its answer in one of many
equivalent ways, such as halting at an ACCEPT or REJECT state, or
leaving its answer on the output tape. -- Yuval Filmus

https://cs.stackexchange.com/questions/84433/what-is-decider

Yuval Filmus (top 0.04% overall) is an Assistant Professor in the
Department of Computer Science at the Technion.

A non-halting decider that always Accepts NON_HALTING input and Rejects
HALTING input can be defined as follows:

The NON_HALTING decider UTM executes its subordinate UTM one state
transition at a time until it detects non-halting behavior or its
subordinate UTM has terminated normally.

If the halt decider UTM detects non-halting behavior of its subordinate
UTM, it simply stops executing the subordinate and transitions to its own
final state of NON_TERMINATING_BEHAVIOR_DETECTED.

If the subordinate UTM terminates normally, the halt decider UTM
transitions to its own final state of SUBORDINATE_HAS_TERMINATED.
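
As a rough C++ sketch of this simulate-and-watch scheme (not the x86utm
implementation: the toy Machine type, its Step()/Halted() interface, and
the fixed step budget standing in for real non-halting-behavior detection
are illustrative assumptions, and a budget is only a heuristic):

#include <cstdint>
#include <iostream>

// Toy stand-in for a simulated subordinate machine: it counts down and
// halts at zero, or runs forever if started with a negative counter.
struct Machine {
    long long counter;
    bool Halted() const { return counter == 0; }
    void Step() { if (counter > 0) --counter; }  // one state transition
};

enum class Decision {
    SUBORDINATE_HAS_TERMINATED,          // reject: input halts
    NON_TERMINATING_BEHAVIOR_DETECTED    // accept: input does not halt
};

// Execute the subordinate one transition at a time, as described above.
// "Non-halting behavior detected" is approximated here by exhausting a
// fixed step budget, which is only a heuristic, not a genuine test.
Decision DecideByWatching(Machine m, std::uint64_t step_budget) {
    for (std::uint64_t steps = 0; steps < step_budget; ++steps) {
        if (m.Halted())
            return Decision::SUBORDINATE_HAS_TERMINATED;
        m.Step();
    }
    return Decision::NON_TERMINATING_BEHAVIOR_DETECTED;
}

int main() {
    std::cout << static_cast<int>(DecideByWatching(Machine{5}, 1000))
              << "\n";   // 0: subordinate terminated
    std::cout << static_cast<int>(DecideByWatching(Machine{-1}, 1000))
              << "\n";   // 1: non-terminating behavior detected
}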






--
Copyright 2020 Pete Olcott

daniel...@gmail.com (Oct 28, 2020, 9:43:38 PM)

On Wednesday, October 28, 2020 at 7:43:31 PM UTC-4, olcott wrote:
> Intuitively, a decider should be a Turing machine that given an input,
> halts

Also intuitively, olcott should be smart enough to know how to stop
cross posting to comp.lang.c++, but it seems that this is not so ...

Daniel

olcott (Oct 29, 2020, 1:00:59 AM)

My following has tripled because of my cross-posting.

I just solved the halting problem thus changing a basic foundation of
computer science.

I need my work vetted before I can present it at a higher level.
As soon as this work is confirmed or refuted I will quit cross-posting.

David Brown (Oct 29, 2020, 3:37:41 AM)

On 29/10/2020 06:00, olcott wrote:
> On 10/28/2020 8:43 PM, daniel...@gmail.com wrote:
>> On Wednesday, October 28, 2020 at 7:43:31 PM UTC-4, olcott wrote:
>>> Intuitively, a decider should be a Turing machine that given an input,
>>> halts
>>
>> Also intuitively, olcott  should be smart enough to know how to stop
>> cross posting to comp.lang.c++, but it seems that this is not so ...
>>
>> Daniel
>>
>
> My following has tripled because of my cross-posting.

The number of people who get irritated and have to ignore your
ridiculous unwanted and off-topic ramblings has more than tripled since
your cross-posting.

Here's a clue for you and your fantasies about having a "following" -
when the most likely person to respond to your posts is /you/, it's time
to realise you are an antisocial fanatic with an obsession not shared by
others.

There is an appropriate way to handle this. Get a /blog/. It's simple
and free. You can reach a vastly wider audience - not just the old
greybeards that still use Usenet, but everyone that can use Google. You
get the space to expand on your ideas, maybe add diagrams, organise your
thoughts. You don't have to keep repeating yourself on each post.

I look forward to seeing a /single/ thread in the future with the simple
message, "I've got a blog here ... ". Your followers will follow you,
and the rest of Usenet can be a little happier. Everyone wins.

>
> I just solved the halting problem thus changing a basic foundation of
> computer science.
>

That should be worth a good deal of fame and fortune. It would
certainly be worth a doctorate. Contact a university. Write a real
paper on it, and put it on your blog. I for one would read such a paper
- but not your Usenet threads where every thought in your head dribbles
out in a random order. If you have come up with a proof that is so
significant to the entire field, deal with it /properly/.

> I need my work vetted before I can present it at a higher level.

This is not the place to do it. This is a group for C++ programmers
(and comp.lang.c is for C programmers), not theoretical computer
scientists. Even for those with theoretical training and interest
(including me), this is a useless medium for something like this. You
want to talk to academics in an appropriate university department, not
people who'd rather discuss the legality of "void main();".

> As soon as this work is confirmed or refuted I will quit cross-posting.
>

You are making some extraordinary claims. Extraordinary claims require
extraordinary evidence. You can't present such evidence in this medium.

Write a paper - call it a draft. Put it on a blog. Ask for comments in
comp.theory, and work from there.

But /please/ stop cross-posting. It does not help you in any way, and
it most certainly irritates other people.

olcott (Oct 29, 2020, 8:04:45 AM)

It really seems to me that my claim is either so obviously correct that
it can be readily accepted as true or so obviously incorrect that it can
be easily refuted. All it takes is one of these and I will stop.

daniel...@gmail.com (Oct 29, 2020, 9:21:28 AM)

On Thursday, October 29, 2020 at 1:00:59 AM UTC-4, olcott wrote:

> I just solved the halting problem thus changing a basic foundation of
> computer science.
>
I'm skeptical. Given that olcott appears unable to solve the problem of
making the cross posts halt when given input, it seems unlikely that
he could solve other halting problems.

Daniel

David Brown (Oct 29, 2020, 10:01:40 AM)

No one here (or in comp.lang.c, or comp.lang.prolog) actually /cares/
what your claims are. There are some who know what the "halting
problem" is, and can probably sketch out a rough proof without too much
trouble. (No, I am not going to do that - google it, or take some
classes on computability theory.)

People don't want you to stop cross-posting because they disagree with
your claim - they want you to stop because you are an annoying noise
generator producing vast amounts of irrelevant, off-topic nonsense that
is swamping these groups. You are not encouraging debate, comments or
criticism - you are /killing/ it.

I've given you constructive, helpful advice on how to get better results
and more useful discussions on your ideas, while simultaneously making
life better for all the members of these language groups. If you
continue posting as you have been, that is a clear demonstration that
you are not interested in criticism or help with your ideas, you are not
interested in spreading them - you are merely a vandal, a selfish troll
with no motivation other than spoiling things for other people.

olcott (Oct 29, 2020, 12:41:28 PM)

Find a single undecidable input and I will stop cross-posting:
bool Aborted_Because_Non_Halting_Behavior_Detected(u32 P, u32 I)


--
Copyright 2020 Pete Olcott

"Great spirits have always encountered violent opposition from mediocre
minds." Einstein

olcott (Oct 29, 2020, 12:56:03 PM)

On 10/29/2020 2:37 AM, David Brown wrote:
Because I have no experience writing academic-journal-quality papers, and
most poorly written papers tend to be rejected out of hand without
review, I must go through several levels of review.

If I don't do this I simply blow my credibility with the academicians so
that no one ever bothers to pay enough attention to see that I was right
all along.

>> I need my work vetted before I can present it at a higher level.
>
> This is not the place to do it. This is a group for C++ programmers
> (and comp.lang.c is for C programmers), not theoretical computer
> scientists. Even for those with theoretical training and interest
> (including me), this is a useless medium for something like this. You
> want to talk to academics in an appropriate university department, not
> people who'd rather discuss the legality of "void main();".
>
>> As soon as this work is confirmed or refuted I will quit cross-posting.
>>
>
> You are making some extraordinary claims. Extraordinary claims require
> extraordinary evidence. You can't present such evidence in this medium.

For all people that know the material very well and are paying very
close attention I have already totally proved my point.

The key thing that is very easy to understand, even in this medium, is
that I totally circumvented the conventional halting problem trick that
is the basis of proofs of undecidability.

The input to the halt decider cannot possibly do the opposite of
whatever the halt decider decides because the very first step of the
halting decision is to terminate the execution of its input.

> Write a paper - call it a draft. Put it on a blog. Ask for comments in
> comp.theory, and work from there.
>
> But /please/ stop cross-posting. It does not help you in any way, and
> it most certainly irritates other people.
>


--
Copyright 2020 Pete Olcott

Mostowski Collapse (Oct 29, 2020, 2:40:35 PM)

For God's sake, stop cross-posting to 4 newsgroups.

Especially comp.lang.prolog doesn't make much sense.

RAM/RASP is probably best housed on comp.theory.

olcott wrote:
> I am a brainless mud turtle
>

David Brown (Oct 30, 2020, 6:35:08 AM)

You are not getting any reviews here, nor any help towards learning how
to write academic papers. If you had invented a new way of raising
sheep, would you talk to a farmer, or would you insist on taking your
sheep to McDonald's every day? All you achieve is people yelling at you
to get out, and that is /not/ helpful.

>
> If I don't do this I simply blow my credibility with the academicians so
> that no one ever bothers to pay enough attention to see that I was right
> all along.

Certainly any credibility you had in Usenet groups is blown.

Please re-read the advice I gave you. It is /real/ advice - intended to
be useful. I am not suggesting you send your paper to an academic
journal - I am suggesting you talk to some academics at a university.
Maybe you should take a course or two.

>
>>> I need my work vetted before I can present it at a higher level.
>>
>> This is not the place to do it.  This is a group for C++ programmers
>> (and comp.lang.c is for C programmers), not theoretical computer
>> scientists.  Even for those with theoretical training and interest
>> (including me), this is a useless medium for something like this.  You
>> want to talk to academics in an appropriate university department, not
>> people who'd rather discuss the legality of "void main();".
>>
>>> As soon as this work is confirmed or refuted I will quit cross-posting.
>>>
>>
>> You are making some extraordinary claims.  Extraordinary claims require
>> extraordinary evidence.  You can't present such evidence in this medium.
>
> For all people that know the material very well and are paying very
> close attention I have already totally proved my point.
>

That's not the impression I get from people who have tried to give you
replies. But be that as it may - if you have a proof of your point,
write it up and put it on a blog. Let people read and comment on the
paper. Based on that, re-write it as needed until you have something
you are happy to show to academics or journals. That way you make some
progress - rather than just irritating everyone else and frustrating
yourself.

> The key thing that is very easy to understand even in this medium is
> that I totally circumvented the conventional halting problem trick basis
> of proofs of udecidability.
>
> The input to the halt decider cannot possibly do the opposite of
> whatever the halt decider decides because the very first step of the
> halting decision is to terminate the execution of its input.
>

Write a proper paper (to the best of your abilities and experience), and
put it on a blog. Usenet is /not/ the right medium for this kind of
thing. And even if it were, these groups are not the right audience for
it. No C, C++ or Prolog programmer cares about the "halting problem"
or "halt deciders". How is that so hard to comprehend?

ijw wij (Oct 30, 2020, 6:51:05 PM)

On Friday, October 30, 2020 at 6:35:08 PM UTC+8, David Brown wrote:
IMOH, a halting deciding algorithm should interest (nearly)every programmer
and mathematician. The problem with what olcott said is that I, probably along
with others, do not understand what was really claimed, even though the
algorithm should be easy to explain if it is possible. Not to say it could be
implemented in a few days.

I say so because I had just posted a very unpopular question, "1/∞!=0...", much
like olcott's post, and it immediately got enough downvotes to be deleted.

David Brown (Oct 31, 2020, 8:59:23 AM)

On 30/10/2020 23:50, ijw wij wrote:

> IMOH, a halting deciding algorithm should interest (nearly)every programmer
> and mathematician.

Sorry, but no.

I realise that computation theory underlies all programmers. But the
details - halting problem, computability, Turing machines, and all the
rest are irrelevant to virtually all programmers' work. The same
applies to the workings of processors - from the logic design down to
the laws of quantum mechanics that make them work. They are all part of
the chain of processes that make programming work - but they are at a
completely different part of that chain, and therefore of no particular
interest or relevance to most programmers.

Personally, I /am/ somewhat interested in computability and the
mathematics involved, as I studied it at university. But it has no
relevance to my work as a programmer. I am not particularly interested
in Olcott's ideas, however, since I know they are wrong.

(The impossibility of finding a general halting problem decider is
simple to prove - if he thinks he has found such an algorithm, he is
wrong. He is not the first amateur to think he has done something
impossible in mathematics - there are countless people who think they
have trisected an angle, squared the circle, counted the reals, and so on.)


And since the field of mathematics is so huge, and the halting problem
is only one small part of the (relatively) small field of computation
theory, saying "nearly every mathematician should be interested in a
halting deciding algorithm" is like saying "nearly every sportsperson
should be interested in elephant polo".  (If Olcott had /really/ proved
Turing wrong, it would be a different matter - and of interest to many
people.)

Alf P. Steinbach (Oct 31, 2020, 10:03:55 AM)

On 31.10.2020 13:59, David Brown wrote:
> On 30/10/2020 23:50, ijw wij wrote:
>
>> IMOH, a halting deciding algorithm should interest (nearly)every programmer
>> and mathematician.
>
> Sorry, but no.
>
> I realise that computation theory underlies all programmers. But the
> details - halting problem, computability, Turing machines, and all the
> rest are irrelevant to virtually all programmers' work. The same
> applies to the workings of processors - from the logic design down to
> the laws of quantum mechanics that make them work. They are all part of
> the chain of processes that make programming work - but they are at a
> completely different part of that chain, and therefore of no particular
> interest or relevance to most programmers.
>
> Personally, I /am/ somewhat interested in computability and the
> mathematics involved, as I studied it at university. But it has no
> relevance to my work as a programmer. I am not particularly interested
> in Olcott's ideas, however, since I know they are wrong.
>
> (The impossibility of finding a general halting problem decider is
> simple to prove

I would disagree. Because it so happened, I think in the 1990s, that
Roger Penrose (chair of the mathematics department at Oxford University,
England), in either the first or second of his infamous books about
physics and the possibility of artificial intelligence (as I recall the
first was titled "The Emperor's New Mind"), used essentially the same
logic as Turing, but to prove that artificial intelligence is impossible.
Since that's an incorrect conclusion, something had to be wrong in his
derivation.

As I recall several things were wrong, not just one critical assumption
or step.

But related to the halting problem: Turing's impossibility proof relied
crucially on the ability to construct a special program that was
undecidable for a given fixed decider machine that /could be numbered/
and henceforth referred to via the number. But if that decider machine
is a human, well, you can't create a fixed undecidable program relative
to the human because the human evolves and changes and has no fixed
completely defining number. And ditto for an artificial intelligence.

So not only was Penrose wrong (which is obvious from his silly result)
but there is necessarily something fishy in Turing's proof (not so
obvious!), because it can't deal with a human as decider.

Namely, Turing did not account for evolving deciders -- I believe a
very simple example could be one that is influenced by random numbers.


> - if he thinks he has found such an algorithm, he is
> wrong.

Well I haven't read what Olcott writes, but see above.

It could well be that Olcott, like me, doesn't contest the validity of
Turing's proof in itself, but contests the assumption that a decider can
always be referred to by a fixed number -- e.g. a human or AI can't.

Turing's proof is still, of course, valid under the assumptions he used.


> He is not the first amateur to think he has done something
> impossible in mathematics - there are countless people who think they
> have trisected an angle, squared the circle, counted the reals, and so on.)
>
>
> And since the field of mathematics is so huge, and the halting problem
> is only one small part of the (relatively) small field of computation
> theory, saying "nearly every mathematician should be interested in a
> halting deciding algorithm" is like saying "nearly every sportsperson
> should be interested in elephant polo". (If Olott had /really/ proved
> Turing wrong, it would be a different matter - and of interest to many
> people.)

Cheers,

- Alf

PS: Alan Turing might seem to be so great an authority that nothing he
did can be questioned. But remember that Turing had a pretty negative
view of Muslims, that he also had some racist ideas, and that he
believed in telepathy and other extrasensory phenomena. So we're talking
about a flawed genius. But he was crucial for the allied victory in
WWII, and was then wrongfully persecuted due to his sexuality so that he
killed himself, and I believe he was therefore later elevated to near
perfection so that he now can't be questioned -- but actually, flawed.

olcott (Oct 31, 2020, 10:57:41 AM)

bool Aborted_Because_Non_Halting_Behavior_Detected(u32 P, u32 I);
Stops executing and Accepts any and all non-halting inputs and Rejects
any and all halting inputs.

olcott (Oct 31, 2020, 10:58:48 AM)

On 10/31/2020 7:59 AM, David Brown wrote:
> On 30/10/2020 23:50, ijw wij wrote:
>
>> IMOH, a halting deciding algorithm should interest (nearly)every programmer
>> and mathematician.
>
> Sorry, but no.
>
> I realise that computation theory underlies all programmers. But the
> details - halting problem, computability, Turing machines, and all the
> rest are irrelevant to virtually all programmers' work. The same
> applies to the workings of processors - from the logic design down to
> the laws of quantum mechanics that make them work. They are all part of
> the chain of processes that make programming work - but they are at a
> completely different part of that chain, and therefore of no particular
> interest or relevance to most programmers.
>
> Personally, I /am/ somewhat interested in computability and the
> mathematics involved, as I studied it at university. But it has no
> relevance to my work as a programmer. I am not particularly interested
> in Olcott's ideas, however, since I know they are wrong.
>
> (The impossibility of finding a general halting problem decider is
> simple to prove - if he thinks he has found such an algorithm, he is
> wrong. He is not the first amateur to think he has done something
> impossible in mathematics - there are countless people who think they
> have trisected an angle, squared the circle, counted the reals, and so on.)

bool Aborted_Because_Non_Halting_Behavior_Detected(u32 P, u32 I);
Stops executing and Accepts any and all non-halting inputs and Rejects
any and all halting inputs.

> And since the field of mathematics is so huge, and the halting problem
> is only one small part of the (relatively) small field of computation
> theory, saying "nearly every mathematician should be interested in a
> halting deciding algorithm" is like saying "nearly every sportsperson
> should be interested in elephant polo". (If Olott had /really/ proved
> Turing wrong, it would be a different matter - and of interest to many
> people.)
>


olcott (Oct 31, 2020, 11:01:02 AM)

My related work is a proof that Tarski's undefinability theorem is
incorrect thus enabling truth conditional semantics to finally be
anchored in an actual truth predicate.

David Brown (Oct 31, 2020, 12:07:03 PM)

The halting problem is straightforward to express, and straightforward
to prove - it's a simple enumeration and diagonal argument much like the
proof that the reals are uncountable. Like all mathematics, it of
course relies on axioms.
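
For reference, a compressed statement of that diagonal argument, in
LaTeX-style notation (standard textbook material, stated here only as a
sketch):

  Suppose a Turing machine $H$ decides halting, i.e. $H(\langle M \rangle, w)$
  accepts iff $M$ halts on input $w$. Define a machine $D$ by
  \[
    D(\langle M \rangle) =
    \begin{cases}
      \text{loop forever} & \text{if } H(\langle M \rangle, \langle M \rangle) \text{ accepts,}\\
      \text{halt}         & \text{otherwise.}
    \end{cases}
  \]
  Then $D(\langle D \rangle)$ halts iff $H(\langle D \rangle, \langle D \rangle)$
  rejects, i.e. iff $D(\langle D \rangle)$ does not halt. Contradiction, so no
  such $H$ exists.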

"Artificial intelligence" is a far more nebulous and abstract concept.
You need a book to define what you mean by it, before trying to prove
anything about it.

And surely you are not suggesting that because one smart person
published a "proof" that you say turned out to be wrong (I don't know
enough to say if it really was wrong or not), then this casts doubt on a
completely different proof of a completely different hypothesis by a
completely different person?

>
> But related to the halting problem: Turing's impossibility proof relied
> crucially on the ability to construct a special program that was
> undecidable for a given fixed decider machine that /could be numbered/
> and henceforth referred to via the number.

Yes. For any given computation model, programs can be numbered. (If
you are happy to assume the Church-Turing hypothesis, then each of these
models is equivalent. If you are not happy with that assumption, then
you can prove the halting problem for each of them with the same method.)
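
One concrete way to see such a numbering (a toy illustration; the
FirstStrings helper below is made up for this sketch): any program text is
a finite string over a finite alphabet, and those strings can be listed in
length-then-lexicographic order, which gives every program an index.

#include <iostream>
#include <string>
#include <vector>

// List the first n strings over the alphabet {'0','1'} in
// length-then-lexicographic order; program texts over any finite
// alphabet can be numbered the same way.
std::vector<std::string> FirstStrings(std::size_t n) {
    std::vector<std::string> out;
    out.push_back("");                        // index 0: the empty string
    for (std::size_t i = 0; out.size() < n; ++i) {
        out.push_back(out[i] + '0');
        out.push_back(out[i] + '1');
    }
    out.resize(n);
    return out;
}

int main() {
    std::size_t index = 0;
    for (const std::string& s : FirstStrings(15))
        std::cout << index++ << ": " << (s.empty() ? std::string("(empty)") : s) << "\n";
}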

If you could find a computation model that cannot be enumerated, then
the proof would not apply. (But neither does the hypothesis.) No
realisable non-Turing computation model has been found.

> But if that decider machine
> is a human, well, you can't create a fixed undecidable program relative
> to the human because the human evolves and changes and has no fixed
> completely defining number. And ditto for an artifical intelligence.

Humans are limited. "Mathematician with pen and paper executing a
finite set of algorithms" is one of the enumerable computation models
that is equivalent to a Turing machine. The same would apply to
artificial intelligence (however it is defined).

>
> So not only was Penrose wrong (which is obvious from his silly result)
> but there is necessarily something fishy in Turing's proof (not so
> obvious!), because it can't deal with a human as decider.
>

As I say, I don't know Penrose's argument here enough to comment -
except that it depends totally on the definition used for "artificial
intelligence".

> Namely, Turing did not account for evolving deciders  --  I believe a
> very simple example could be one that is influenced by random numbers.
>

Random deciders can sometimes give you the answer for something that is
uncomputable - but since it is not deterministic it cannot solve the
problem in general.

All realisable computation is limited. There are a number of physical
constraints on calculations - no computation system (human, electronic,
quantum computer, artificial intelligence, etc.) can break these
constraints. (Even as we refine physical theories to go beyond quantum
mechanics, these will not be broken - in the same way that more
nuanced theories of gravity do not allow bricks to float.) There are
limits to calculation speed, information density, computation energy,
and so on. This means programs are countable, regardless of the way
they are computed.

>
>> - if he thinks he has found such an algorithm, he is
>> wrong.
>
> Well I haven't read what Olcott writes, but see above.
>

He thinks that not only has he found an algorithm, but he has a program
that implements it (I think - I haven't read his stuff in detail either).

> It could well be that Olcott, like I, doesn't contest the validity of
> Turing's proof in itself, but contests the assumption that a decider can
> always be referred to by a fixed number -- e.g. a human or AI can't.
>
> Turing's proof is still, of course, valid under the assumptions he used.
>

As I showed above, that assumption is valid for any realisable method of
computation (in particular, on the model used by Olcott - programs on
real computers).

Anyway, the whole point of Turing's proof is that if you take the set of
all possible programs in some countable model of computation, then the
halting decider for those programs is not in that set. It does not say
that there is no such thing as a halting decider - it says that a
halting decider is not a computable function. If you are going to
imagine the possibility of "hyper-computers" that can evaluate
incomputable numbers, then maybe it could be a halting decider for
computable functions. (But you can't create such a machine.)

>
>>  He is not the first amateur to think he has done something
>> impossible in mathematics - there are countless people who think they
>> have trisected an angle, squared the circle, counted the reals, and so
>> on.)
>>
>>
>> And since the field of mathematics is so huge, and the halting problem
>> is only one small part of the (relatively) small field of computation
>> theory, saying "nearly every mathematician should be interested in a
>> halting deciding algorithm" is like saying "nearly every sportsperson
>> should be interested in elephant polo".  (If Olott had /really/ proved
>> Turing wrong, it would be a different matter - and of interest to many
>> people.)
>
> Cheers,
>
> - Alf
>
> PS: Alan Turing might seem to be so great an authority that nothing he
> did can be questioned.

We are talking about mathematics here, not American politics or TV
fashion shows. Authorities are /always/ questioned in mathematics and
science. There is no greater aim for a mathematician or a scientist
than to prove a famous theorem wrong.

And the uncomputability of the halting problem is simple enough to prove
that anyone studying computation theory will prove it to their own
satisfaction in their first year at university. It's not something we
have to take on trust.

> But remember that Turing had a pretty negative
> view of Muslims, that he also had some racist ideas, and that he
> believed in telepathy and other extrasensory phenomena. So we're talking
> about a flawed genius.

Find me a genius (or anyone else) that is not flawed, and perhaps that
argument would not be so laughable.

> But he was crucial for the allied victory in
> WWII, and was then wrongfully persecuted due to his sexuality so that he
> killed himself, and I believe he was therefore later elevated to near
> perfection so that he now can't be questioned -- but actually, flawed.

There is no doubt that Turing was an interesting person, who lived an
interesting life. But his work on computability is famous and important
because of what it is, not because of who Turing was. And his proofs
are correct if and only if they are correct - not because he was gay!

olcott (Oct 31, 2020, 12:29:00 PM)

Refutation of halting problem proofs for high school students
https://groups.google.com/d/msg/comp.theory/wjjjtn39rEc/ah0QIayJBQAJ

>> But remember that Turing had a pretty negative
>> view of Muslims, that he also had some racist ideas, and that he
>> believed in telepathy and other extrasensory phenomena. So we're talking
>> about a flawed genius.
>
> Find me a genius (or anyone else) that is not flawed, and perhaps that
> argument would not be so laughable.
>
>> But he was crucial for the allied victory in
>> WWII, and was then wrongfully persecuted due to his sexuality so that he
>> killed himself, and I believe he was therefore later elevated to near
>> perfection so that he now can't be questioned -- but actually, flawed.
>
> There is no doubt that Turing was an interesting person, who lived an
> interesting life. But his work on computability is famous and important
> because of what it is, not because of who Turing was. And his proofs
> are correct if and only if they are correct - not because he was gay!
>


David Brown (Oct 31, 2020, 12:38:28 PM)

On 31/10/2020 17:28, olcott wrote:
> On 10/31/2020 11:06 AM, David Brown wrote:
>>
>> And the uncomputability of the halting problem is simple enough to prove
>> that anyone studying computation theory will prove it to their own
>> satisfaction in their first year at university.  It's not something we
>> have to take on trust.
>
> Refutation of halting problem proofs for high school students
> https://groups.google.com/d/msg/comp.theory/wjjjtn39rEc/ah0QIayJBQAJ
>

If that's the quality of your claims, then I'm glad I haven't wasted
effort on reading your posts. But I'm willing to suppose there is more
to it than that - write a proper paper and publish it on a blog, not here.

olcott (Oct 31, 2020, 12:52:57 PM)

That is the gist of my proof.
I can actually demonstrate this proof on the basis of an operating
system that I created that executes UTM equivalent virtual machines
where the x86 language is the description language of these virtual
machines.

x86 language ≡ von Neumann architecture ≡ UTM ≡ RASP Machine

x86utm shows all of the detailed steps of exactly how a machine that is
equivalent to the Peter Linz H correctly decides halting on a machine
that is equivalent to the Peter Linz Ĥ.

http://www.liarparadox.org/Peter_Linz_HP(Pages_315-320).pdf

The paper that I write will be my very first attempt at writing an
academic-quality paper, so my biggest fear is that it will be rejected
out-of-hand without review entirely on the basis of style versus
substance issues. It is for this reason that I seek preliminary reviews
of this work on USENET.

ijw wij (Oct 31, 2020, 1:01:10 PM)

On Saturday, October 31, 2020 at 10:57:41 PM UTC+8, olcott wrote:
What I saw is just pseudo-code (very suspicious). I need at least full
documentation of each function and variable in a compilable header
file to understand it. The example programs are BAD.

void H_Hat(u32 P)
{
    if (!Halts(P, P))       // If it does not halt then
        HALT                // halt
    else                    // if it halts then
        HERE: goto HERE;    // loop forever
}

void H(u32 P, u32 I)
{
    if (Non_Halting_Detected_While_Running_it(P, I))
    {
        Stop running it.
        Report Non Halting Detected.            // Does not halt
    }
    else
        Report that it already stopped running. // H
}

ijw wij (Oct 31, 2020, 1:35:20 PM)

On Saturday, October 31, 2020 at 8:59:23 PM UTC+8, David Brown wrote:
> On 30/10/2020 23:50, ijw wij wrote:
>
> > IMOH, a halting deciding algorithm should interest (nearly)every programmer
> > and mathematician.
> Sorry, but no.
>
> I realise that computation theory underlies all programmers. But the
> details - halting problem, computability, Turing machines, and all the
> rest are irrelevant to virtually all programmers' work. The same
> applies to the workings of processors - from the logic design down to
> the laws of quantum mechanics that make them work. They are all part of
> the chain of processes that make programming work - but they are at a
> completely different part of that chain, and therefore of no particular
> interest or relevance to most programmers.
>
I am only interested in the halting algorithm, not others.
At least, I know compiler implementors and language designers will
be happy to know of such an algorithm even if it is not entirely functional.

> Personally, I /am/ somewhat interested in computability and the
> mathematics involved, as I studied it at university. But it has no
> relevance to my work as a programmer. I am not particularly interested
> in Olcott's ideas, however, since I know they are wrong.
>
> (The impossibility of finding a general halting problem decider is
> simple to prove - if he thinks he has found such an algorithm, he is
> wrong. He is not the first amateur to think he has done something
> impossible in mathematics - there are countless people who think they
> have trisected an angle, squared the circle, counted the reals, and so on.)
>
Agree.
>
> And since the field of mathematics is so huge, and the halting problem
> is only one small part of the (relatively) small field of computation
> theory, saying "nearly every mathematician should be interested in a
> halting deciding algorithm" is like saying "nearly every sportsperson
> should be interested in elephant polo". (If Olott had /really/ proved
> Turing wrong, it would be a different matter - and of interest to many
> people.)
The halting problem is not a small thing; many things can be derived from it.
I, as an app programmer, do not really calculate O(f), but having the idea
in my head is essential. Without knowing it, lots of time
can be wasted and bad code written without noticing.

olcott (Oct 31, 2020, 1:35:55 PM)

The pseudo-code was written so that high school students could get the
gist of the basic design.

Here is the executable code that runs in my x86utm operating system:

void H_Hat(u32 P)
{
    if (Aborted_Because_Non_Halting_Behavior_Detected(P, P))
        MOV_EAX_1           // Execution of P(P) has been aborted
    else
    {
        MOV_EAX_0           // P(P) has halted
        HERE: goto HERE;
    }
    HALT
}

void H(u32 P, u32 I)
{
    if (Aborted_Because_Non_Halting_Behavior_Detected(P, I))
        MOV_EAX_1           // Execution of P(I) has been aborted
    else
        MOV_EAX_0           // P(I) has halted
    HALT
}

int main()
{
    u32 P = (u32)H_Hat;
    H(P, P);
    HALT
}

It implements virtual machine equivalents to the Peter Linz H and Ĥ.
I have known all of the details of exactly how the above H decides the
above H_Hat() since 2018-12-13 @ 7:00 PM; these details are posted in the
comp.theory USENET group.

Please respond to comp.theory if you can.

David Brown (Oct 31, 2020, 1:44:47 PM)

On 31/10/2020 18:35, ijw wij wrote:
> On Saturday, October 31, 2020 at 8:59:23 PM UTC+8, David Brown wrote:
>> On 30/10/2020 23:50, ijw wij wrote:
>>
>>> IMOH, a halting deciding algorithm should interest (nearly)every programmer
>>> and mathematician.
>> Sorry, but no.
>>
>> I realise that computation theory underlies all programmers. But the
>> details - halting problem, computability, Turing machines, and all the
>> rest are irrelevant to virtually all programmers' work. The same
>> applies to the workings of processors - from the logic design down to
>> the laws of quantum mechanics that make them work. They are all part of
>> the chain of processes that make programming work - but they are at a
>> completely different part of that chain, and therefore of no particular
>> interest or relevance to most programmers.
>>
> I am only interested in the halting algorithm, not others.
> At least, I know compiler implementors and language designers will
> be happy to know such algorithm even it is not entirely functional.

Compiler implementers and language designers would be /very/ interested
in hearing of an algorithm that could tell if a program was going to
hang or not - /if/ such an algorithm existed, and it could be
implemented to run in a sensible time frame. Beyond that, they don't care.

>
>> Personally, I /am/ somewhat interested in computability and the
>> mathematics involved, as I studied it at university. But it has no
>> relevance to my work as a programmer. I am not particularly interested
>> in Olcott's ideas, however, since I know they are wrong.
>>
>> (The impossibility of finding a general halting problem decider is
>> simple to prove - if he thinks he has found such an algorithm, he is
>> wrong. He is not the first amateur to think he has done something
>> impossible in mathematics - there are countless people who think they
>> have trisected an angle, squared the circle, counted the reals, and so on.)
>>
> Agree.
>>
>> And since the field of mathematics is so huge, and the halting problem
>> is only one small part of the (relatively) small field of computation
>> theory, saying "nearly every mathematician should be interested in a
>> halting deciding algorithm" is like saying "nearly every sportsperson
>> should be interested in elephant polo". (If Olott had /really/ proved
>> Turing wrong, it would be a different matter - and of interest to many
>> people.)
> Halting problem is not a small thing, many can be derived from it.
> I, as app. programmer, do not really calculate O(f), but the idea being
> in the head is importantly essential. Without knowing it, lots of time
> can be wasted and unaware of bad codes written.
>

If I understand you correctly, you mean that even though you don't
go through detailed proofs of the efficiency of your code, you still
think a little about the theoretical aspects when you write code.
That's a good thing, yes.

ijw wij (Oct 31, 2020, 2:54:14 PM)

On Sunday, November 1, 2020 at 1:44:47 AM UTC+8, David Brown wrote:
I mean the halting problem (or algorithm) is a key algorithm.
The influence extends to 'design' and software decision making.
Other fields are the same. I can just come up with a recent one that occurred to me.
For example, is 0.333...=1/3? (0.333... is derived from common division method)
0.333... equal to 1/3 or not is a halting problem to me.
I categorize this as a halting problem, since the repeating 3 never terminates,
undecidable-ness is then translated to 'false'.

[My View] In common division methods, 0.333... can repeat forever is because
there exists non-zero remainder. If 0.333... terminating to 1. then remainder=0.
See, like it or not, the influence can be broad.

olcott (Oct 31, 2020, 2:59:34 PM)

Or you could simply represent it internally as an actual fraction and
not as an infinitely repeating decimal.
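
A minimal sketch of that idea (the Fraction type below is illustrative
only; real code would more likely use an existing rational-number type
such as Boost.Rational):

#include <iostream>
#include <numeric>   // std::gcd

// Exact rational number: 1/3 stays 1/3, no repeating-decimal issue.
struct Fraction {
    long long num, den;
    Fraction(long long n, long long d) : num(n), den(d) {
        if (den < 0) { num = -num; den = -den; }   // normalise sign
        long long g = std::gcd(num, den);
        if (g != 0) { num /= g; den /= g; }        // reduce to lowest terms
    }
};

Fraction operator*(const Fraction& a, const Fraction& b) {
    return Fraction(a.num * b.num, a.den * b.den);
}

bool operator==(const Fraction& a, const Fraction& b) {
    return a.num == b.num && a.den == b.den;       // both stored canonically
}

int main() {
    Fraction one_third(1, 3);
    // (1/3) * 3 compares equal to 1 exactly, with no rounding.
    std::cout << (one_third * Fraction(3, 1) == Fraction(1, 1)) << "\n";  // prints 1
}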

ijw wij (Oct 31, 2020, 3:06:01 PM)

On Sunday, November 1, 2020 at 2:54:14 AM UTC+8, ijw wij wrote:
Sorry, some typos:
"undecidable-ness in "0.333...=1/3 could then be translated to 'false' "
"If 0.333... terminates to 1/3"

ijw wij (Oct 31, 2020, 3:16:39 PM)

On Sunday, November 1, 2020 at 2:59:34 AM UTC+8, olcott wrote:
That original problem was to PROVE: lim(n->∞) 1/n ≠ 0
But limit is not defined by me, so I changed it to prove 1/∞ ≠ 0 (∞ is defined)
A related problem is: 0.999...=1?

David Brown (Oct 31, 2020, 3:17:13 PM)

On 31/10/2020 19:53, ijw wij wrote:
> On Sunday, November 1, 2020 at 1:44:47 AM UTC+8, David Brown wrote:
>> On 31/10/2020 18:35, ijw wij wrote:
>>> On Saturday, October 31, 2020 at 8:59:23 PM UTC+8, David Brown wrote:
>>>> On 30/10/2020 23:50, ijw wij wrote:

>>> Halting problem is not a small thing, many can be derived from it.
>>> I, as app. programmer, do not really calculate O(f), but the idea being
>>> in the head is importantly essential. Without knowing it, lots of time
>>> can be wasted and unaware of bad codes written.
>>>
>> If I understand you correctly, you mean that you even though you don't
>> go through detailed proofs of the efficiency of your code, you still
>> think a little about the theoretical aspects when you write code.
>> That's a good thing, yes.
>
> I mean the halting problem (or algorithm) is a key algorithm.

No, it isn't. The halting problem is about finding an algorithm that
can decide if other programs ever stop. Even if such an algorithm
existed, it would have no particular use except as an aid to finding
bugs in code. The halting problem is not about determining if the code
you are writing at the moment will hang.

> The influence extends to 'design' and software decision making.
> Other fields are the same. I can just come up with a recent one occurred to me.
> For example, is 0.333...=1/3? (0.333... is derived from common division method)
> 0.333... equal to 1/3 or not is a halting problem to me.

It is not a "halting problem". That's just mathematics, and the
definition of the notation you are using. Yes, 0.33... equals 1/3.

Let x = 0.33.....

Then 10x = 3.33.....

So 10x - x = 3, i.e., 9x = 3, and so x = 1/3.

> I categorize this as a halting problem, since the repeating 3 never terminates,
> undecidable-ness is then translated to 'false'.

It is not a "halting problem" because the notation has a simple and
well-defined meaning.
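
Spelled out (standard definition, given here only for completeness), the
notation is shorthand for the limit of a geometric series:

  \[
    0.333\ldots \;=\; \sum_{n=1}^{\infty} \frac{3}{10^{n}}
    \;=\; \frac{3/10}{1 - 1/10} \;=\; \frac{1}{3}.
  \]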

You would have been better to pick an irrational number as an example,
such as 1.41421356... (the square root of two), where it not only does
not terminate but there is no obvious pattern to the digits. It would
still not be a "halting problem" - we /know/ it does not terminate, so
there is no problem.

>
> [My View] In common division methods, 0.333... can repeat forever is because
> there exists non-zero remainder. If 0.333... terminating to 1. then remainder=0.
> See, like it or not, the influence can be broad.
>

I'm sorry, I don't think you understand what you are saying.

But I /do/ know it is unrelated to C++, and also unrelated to getting
Olcott to act like a rational human being, so this should be the end of
this thread.

ijw wij (Oct 31, 2020, 3:50:38 PM)

On Sunday, November 1, 2020 at 3:17:13 AM UTC+8, David Brown wrote:
> On 31/10/2020 19:53, ijw wij wrote:
> > On Sunday, November 1, 2020 at 1:44:47 AM UTC+8, David Brown wrote:
> >> On 31/10/2020 18:35, ijw wij wrote:
> >>> On Saturday, October 31, 2020 at 8:59:23 PM UTC+8, David Brown wrote:
> >>>> On 30/10/2020 23:50, ijw wij wrote:
>
> >>> Halting problem is not a small thing, many can be derived from it.
> >>> I, as app. programmer, do not really calculate O(f), but the idea being
> >>> in the head is importantly essential. Without knowing it, lots of time
> >>> can be wasted and unaware of bad codes written.
> >>>
> >> If I understand you correctly, you mean that you even though you don't
> >> go through detailed proofs of the efficiency of your code, you still
> >> think a little about the theoretical aspects when you write code.
> >> That's a good thing, yes.
> >
> > I mean the halting problem (or algorithm) is a key algorithm.
> No, it isn't. The halting problem is about finding an algorithm that
> can decide if other programs ever stop. Even if such an algorithm
> existed, it would have no particular use except as an aid to finding
> bugs in code. The halting problem is not about determining if the code
> you are writing at the moment will hang.

That's only the appearance. The halting problem is a logic problem,
applicable to all decision problems.

> > The influence extends to 'design' and software decision making.
> > Other fields are the same. I can just come up with a recent one occurred to me.
> > For example, is 0.333...=1/3? (0.333... is derived from common division method)
> > 0.333... equal to 1/3 or not is a halting problem to me.
> It is not a "halting problem". That's just mathematics, and the
> definition of the notation you are using. Yes, 0.33... equals 1/3.
>
> Let x = 0.33.....
>
> Then 10x = 3.33.....
>
> So 10x - x = 3, i.e., 9x = 3, and so x = 1/3.

[Snippet from the original proof]
+-------------------+
| Prop4: 0.999⋯≠1 ? |
+-------------------+
A major issue of this proposition should be the interpretation of 0.999⋯
(or "⋯ "). The answer from the understanding of number says that we should look
for the way this number is constructed.

1. From subtracting minute quantity. Such numbers are many.
a= 1-1/∞ = 0.999⋯
b= 1-2/∞ = 0.999⋯
c= 1-1/10^∞ = 0.999⋯
d= 1-2/100^∞= 0.999⋯

Similar proof of Prop3 shows that lim(n→∞) {(1+k/n)^n}= e^k, that is
(0.999⋯^n) is always some number less than 1, never (1^n)=1.
Therefore, 0.999⋯ is not unique to represent a specific number (in the
example above, a≠b≠c≠d≠1. Density property remains valid).

2. From formal recursive construction
Let X= 0.999⋯
<=> 10X= 9.999⋯
<=> 10X= 9 + 0.999⋯
==> 10X= 9 + X (invalid)
<=> 9X= 9
<=> X= 9/9 =1

The focus is the line marked 'invalid', where the right hand side X
refers to the 0.999⋯ multiplied by 10 and subtract 9, which is not the
0.999⋯ in the proposition. This step uses unproven(or yet to prove)
equation is therefore invalid. But wait... if the "0.999⋯" does possess the
x10-9 invariant property, then X=0.999⋯ =1 is correct. But note that this
0.999⋯ is not equal to the 0.999⋯ from subtracting minute quantity from 1.
QED.

> > I categorize this as a halting problem, since the repeating 3 never terminates,
> > undecidable-ness is then translated to 'false'.
> It is not a "halting problem" because the notation has a simple and
> well-defined meaning.
>

I glimpsed at your argument with others (I am not really an English speaker;
reading all the posts is difficult for me).
We have different definitions of "decider" or "the halting problem".

> You would have been better to pick an irrational number as an example,
> such as 1.41421356... (the square root of two), where it not only does
> not terminate but there is no obvious pattern to the digits. It would
> still not be a "halting problem" - we /know/ it does not terminate, so
> there is no problem.
> >
> > [My View] In common division methods, 0.333... can repeat forever is because
> > there exists non-zero remainder. If 0.333... terminating to 1. then remainder=0.
> > See, like it or not, the influence can be broad.
> >
> I'm sorry, I don't think you understand what you are saying.
>
Surely I do. I am just afraid of the opposite.
If only I could post the original text file somewhere.

> But I /do/ know it is unrelated to C++, and also unrelated to getting
> Olcott to act like a rational human being, so this should be the end of
> this thread.

1+1=3 not related to C++? (kidding but truly related)
I expect to learn/teach something from and to olcott

Alf P. Steinbach (Oct 31, 2020, 4:16:23 PM)

Mathematicians don't prove theorems (an endeavor similar to proving
halting or not) by executing a fixed finite set of algorithms.

Penrose assumed something of the sort and ended up with a contradiction,
that intelligence (he added the qualifier "artificial", but define
/that/) is mathematically impossible.

Essentially he proved that he isn't intelligent, just via the kind of
assumption you mention -- which is not how intelligent systems behave.


>> So not only was Penrose wrong (which is obvious from his silly result)
>> but there is necessarily something fishy in Turing's proof (not so
>> obvious!), because it can't deal with a human as decider.
>>
>
> As I say, I don't know Penrose's argument here enough to comment -
> except that it depends totally on the definition used for "artificial
> intelligence".

I don't recall that he ever defined AI. The main assumption was just
that an AI would be a computational process that, when presented with
some halting problem as input, could be assigned a fixed defining
number. And he carefully avoided applying the same proof to humans.
Perhaps at some level he realized the futility of defining a human's
ongoing and evolving thinking process by a single fixed number.

I.e. quite silly, childish, but it did not amount to a scandal; e.g. he
was not fired.

I believe the scientific community treated those books as more like
religious statements, and forgave him. He extrapolated from the (to him)
impossibility of AI that humans, or in particular mathematicians like
himself, had to rely on some special quantum mechanical effects in order
to tap into a non-computability that his envisioned AIs could not
access, for otherwise his AI impossibility proof would apply to humans.
His main candidate for that hidden effect was tied to quantum wave
function collapse, as I recall a "missing half" of the QM math.

Perhaps a bit less mystic than US philosopher John Searle who concluded
that only biological brains can be intelligent, but not much less!

This is a guy that along with two others got the Nobel prize in physics
this year, 2020.

But he's not the first crackpot to get the Nobel.


>> Namely, Turing did not account for evolving deciders  --  I believe a
>> very simple example could be one that is influenced by random numbers.
>>
>
> Random deciders can sometimes give you the answer for something that is
> uncomputable - but since it is not deterministic it cannot solve the
> problem in general.

A proof of that assertion would be interesting when "cannot" is replaced
with the more reasonable (and only meaningful) "cannot with any
probability arbitrarily close to but less than 1".


[snip]

Cheers,

- Alf

daniel...@gmail.com (Oct 31, 2020, 6:23:56 PM)

On Saturday, October 31, 2020 at 4:16:23 PM UTC-4, Alf P. Steinbach wrote:
> On 31.10.2020 17:06, David Brown wrote:
> > On 31/10/2020 15:03, Alf P. Steinbach wrote:
.
> >> I would disagree. Because it so happened, I think that was in the
> >> 1990's, that Roger Penrose (chair of mathematics department at Oxford
> >> university, England), in either the first or second of his infamous
> >> books about physics and the possibility of artifical intelligence (as I
> >> recall the first titled "The emperor's new mind"), used essentially the
> >> same logic as Turing, but to prove that artificial intelligence is
> >> impossible.

No, that wasn't what Penrose was trying to do. Penrose was attempting to
refute the idea of "strong AI" that a computer program could exhibit consciousness.
That was a big topic in the 1990's.

>>> Since that's an incorrect conclusion something had to be
> >> wrong in his derivation.

Daniel Dennett is one of the most well-known proponents of strong AI.
I recall reading his book Consciousness Explained from cover to cover,
including illustrations of the mind/body problem with sketches of Casper
the ghost, hoping to learn something. But I don't think anywhere in the
book that consciousness was actually explained.

In any case, I'm pretty convinced that none of the programs that I
write will ever exhibit consciousness.

> >
> > "Artificial intelligence" is a far more nebulous and abstract concept.
> > You need a book to define what you mean by it, before trying to prove
> > anything about it.

The idea of consciousness and what it would mean for a computer
program to exhibit consciousness is hard to define. Dennett argues
that human consciousness is an illusion, and what it really is can be
realized in a machine. Dennett can be read as denying the existence of
consciousness, but the problem is, even though we don't know
what it is, we can all experience it.

Daniel

olcott (Oct 31, 2020, 6:57:31 PM)

comp.ai.philosophy › Is strong AI possible?
https://groups.google.com/forum/#!msg/comp.ai.philosophy/q3YDGZ9fnIM/Jw22XeHTAjYJ

Six years ago I came up with a possible measure of the functional
equivalent of consciousness.

An AI mind could have a [will of its own] as soon as it has a
sufficiently populated goal hierarchy.

Alf P. Steinbach (Oct 31, 2020, 9:48:35 PM)

On 31.10.2020 23:23, daniel...@gmail.com wrote:
> On Saturday, October 31, 2020 at 4:16:23 PM UTC-4, Alf P. Steinbach wrote:
>> On 31.10.2020 17:06, David Brown wrote:
>>> On 31/10/2020 15:03, Alf P. Steinbach wrote:
> .
>>>> I would disagree. Because it so happened, I think that was in the
>>>> 1990's, that Roger Penrose (chair of mathematics department at Oxford
>>>> university, England), in either the first or second of his infamous
>>>> books about physics and the possibility of artifical intelligence (as I
>>>> recall the first titled "The emperor's new mind"), used essentially the
>>>> same logic as Turing, but to prove that artificial intelligence is
>>>> impossible.
>
> No, that wasn't what Penrose was trying to do. Penrose was attempting to
> refute the idea of "strong AI" that a computer program could exhibit consciousness.
> That was a big topic in the 1990's.

A meaningless distinction. I am very happy that funding for philosophy
departments started being reduced at that time. They're like leeches on
academia and the less money wasted on them, the better.


>>>> Since that's an incorrect conclusion something had to be
>>>> wrong in his derivation.
>
> Daniel Dennett is one of the most well known proponent of strong AI.
> I recall reading his book Consciousness Explained from cover to cover,
> including illustrations of the mind/body problem with sketches of Casper
> the ghost, hoping to learn something. But I don't think anywhere in the
> book that consciousness was actually explained.
>
> In any case, I'm pretty convinced that none of the programs that I
> write will ever exhibit consciousness.

True.

It's a very good point about philosophers, that they are so stupid that
they still take seriously the idea of /writing a program/ that could be
intelligent. It's an utter idiot's idea. Or maybe an infant's idea.


>>> "Artificial intelligence" is a far more nebulous and abstract concept.
>>> You need a book to define what you mean by it, before trying to prove
>>> anything about it.
>
> The idea of consciousness and what it would mean for a computer
> program to exhibit consciousness is hard to define.

You'll know it when you see it.

Alan Turing made a good first attempt at pinning it down with his
Turing test (in the same paper he included derogatory remarks about
Muslims, colored people, and his belief in ESP like telepathy, so that
classic paper would not have been accepted today): he pointed out the
main criterion, that if someone appears intelligent over an extended
period of interaction with you, then that someone is intelligent; that's
what we /mean/ by saying that someone is intelligent.

By this criterion modern philosophers do not exhibit intelligence. Or
for that matter, our classic philosophers were not intelligent. Consider
René Descartes "I think therefore I am" (he came up with that in the
process of trying to figure out how many angels could fit on a pin-head,
concluding that angels, being pure intelligence, had no physical extent)
who three times proved the existence of the Christian deity and failed
to consider that his proofs just as well proved the existence of the
Norse god Odin (much like Penrose amazingly failed to consider that his
proofs of non-intelligence applied to himself and not just robots), and
who argued strongly against the possibility of vacuum...

Or for that matter, consider Blaise Pascal, Descartes' contemporary.
These two quarreled about the existence of vacuum, with Pascal adopting
the more rational stance that as you go up in the air you get closer and
closer to vacuum. But while rational about that, Pascal was in general a
religious nutcase, known for his "Pascal's wager" where he calculated,
erroneously and irrationally, that the best bet was to believe in the
Christian deity (again, not considering that it also applied to Odin).

Or, talking about vacuum, such as inside philosophers' heads, consider
Aristotle, the idiot who thought that an arrow could not move in a
vacuum. Because in vacuum air would not be displaced by the arrow tip to
move back behind the arrow and push it. What an utter idiot.

And so on.


> Dennett argues
> that human consciousness is an illusion, and what it really is can be
> realized in a machine. Dennett can be read as denying the existence of
> consciousness, but the problem is, even though we don't know
> what it is, we can all experience it.

Am happy that I've not wasted time on reading Dennett. :)


Cheers,

- Alf

olcott (Oct 31, 2020, 10:33:36 PM)

This can be improved upon by saying that a machine <is> intelligent to
the same degree that it can achieve the same functional results as a
human with intelligence.

daniel...@gmail.com (Oct 31, 2020, 11:06:13 PM)

On Saturday, October 31, 2020 at 9:48:35 PM UTC-4, Alf P. Steinbach wrote:
> On 31.10.2020 23:23, daniel...@gmail.com wrote:
> >
> > No, that wasn't what Penrose was trying to do. Penrose was attempting to
> > refute the idea of "strong AI" that a computer program could exhibit consciousness.
> > That was a big topic in the 1990's.

> A meaningless distinction. I am very happy that funding for philosophy
> departments started being reduced at that time. They're like leeches on
> academia and the less money wasted on them, the better.

It's not only philosophers, but physicists and neuroscientists who are
interested in the problem of consciousness. They do not know what
consciousness is. It doesn't fit into the physicists' current understanding
of reality. Something is missing.

> >>>> Since that's an incorrect conclusion something had to be
> >>>> wrong in his derivation.
> >
> > Daniel Dennett is one of the most well known proponent of strong AI.
> > I recall reading his book Consciousness Explained from cover to cover,
> > including illustrations of the mind/body problem with sketches of Casper
> > the ghost, hoping to learn something. But I don't think anywhere in the
> > book that consciousness was actually explained.
> >
> > In any case, I'm pretty convinced that none of the programs that I
> > write will ever exhibit consciousness.
> True.
>
> It's a very good point about philosophers, that they are so stupid that
> they still take serious the idea of /writing a program/ that could be
> intelligent. It's an utter idiot's idea. Or maybe an infant's idea.

If consciousness is nothing more than neural processes, which was
proposed in the 1950's, I don't think it would be unreasonable to
conjecture that a computer program of sufficient complexity might
exhibit consciousness. But the problem is that there is nothing in
the behaviour of neurons that makes them different with
regard to consciousness from any other cells in the body. Neuroscientists
have found correlates of conscious experience. You experience a Miles
Davis compilation, and a neuroscientist can tell you which parts of your
brain are active. But correlates are not the same thing as conscious
experience itself.

> >
> > The idea of consciousness and what it would mean for a computer
> > program to exhibit consciousness is hard to define.

> You'll know it when you see it.
>
Right. But if we didn't already know we were self aware and could
experience the world, there is nothing in contemporary physics or
neurophysiology that would allow us to deduce it.

Daniel

Alf P. Steinbach

unread,
Oct 31, 2020, 11:35:25 PM10/31/20
to
You don't need a complex program; the idea that you need one is the
associative, non-thinking philosophers' idiocy intruding again.

You can in principle make do with a simplest possible, quite crude
simulation of neurons, or of atoms.

However, the /data set/ that the simple program operates on is
gargantuan. Distribute the processing power of a PC over 100 billion
neuron simulations and you get a rough idea of the sloooooowness of this
approach with current technology: no intelligence when the sim takes a
year to do a msec of thinking. But we'll be there in about two decades.
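
Roughly this kind of thing, as a minimal sketch (the leak factor, the
firing threshold and the little three-neuron ring below are made-up
illustration values, not anything calibrated against biology):

#include <cstddef>
#include <cstdio>
#include <vector>

// One crude "leaky integrate and fire" neuron: accumulate input, leak a
// little each step, fire when a threshold is crossed.
struct Neuron
{
    float potential = 0.0f;
    bool  fired     = false;
};

// One synchronous time step over the whole net. The per-neuron work is
// trivial; with ~10^11 neurons the cost is entirely in the data set size.
void step( std::vector<Neuron>& net,
           const std::vector<std::vector<std::size_t>>& inputs_of )
{
    std::vector<char> fired_before( net.size() );
    for( std::size_t i = 0; i < net.size(); ++i ) { fired_before[i] = net[i].fired; }

    for( std::size_t i = 0; i < net.size(); ++i )
    {
        float input = 0.0f;
        for( std::size_t src : inputs_of[i] )
        {
            if( fired_before[src] ) { input += 1.0f; }   // sum incoming spikes
        }
        Neuron& n = net[i];
        n.potential = 0.9f*n.potential + input;          // leak + integrate (made-up constants)
        n.fired = (n.potential > 0.5f);                  // fire on crossing a made-up threshold
        if( n.fired ) { n.potential = 0.0f; }            // reset after a spike
    }
}

int main()
{
    // Three neurons wired in a ring; neuron 0 starts charged, so a single
    // spike ends up chasing itself around the ring.
    std::vector<Neuron> net( 3 );
    std::vector<std::vector<std::size_t>> inputs_of = { {2}, {0}, {1} };
    net[0].potential = 5.0f;

    for( int t = 0; t < 10; ++t )
    {
        step( net, inputs_of );
        std::printf( "t=%d fired: %d %d %d\n",
                     t, int(net[0].fired), int(net[1].fired), int(net[2].fired) );
    }
}

The point isn't the neuron model, which is as crude as it gets, but that
scaling this loop up to brain size is purely a data and speed problem,
not a program complexity problem.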

As you may or may not be aware, both the EU and the USA attempted to
create such simulations some years back, in billion-dollar projects.

I've not kept up to date. The last I heard they were beaten by a
Japanese simulation of a tenth of a second of activity in half a rat brain.
It's an ethical minefield (well, it is that for rational people).


> But the problem is that there is nothing in
> the behaviour of neurons that make them different with
> regard to consciousness than any other cells in the body. Neuroscientists
> have found correlates of conscious experience. You experience a Miles
> Davis compilation, a neuroscientist can tell you which parts of your
> brain are active. But correlates are not the same thing as conscious
> experience itself.
>
>>>
>>> The idea of consciousness and what it would mean for a computer
>>> program to exhibit consciousness is hard to define.
>
>> You'll know it when you see it.
>>
> Right. But if we didn't already know we were self aware and could
> experience the world, there is nothing in contemporary physics or
> neurophysiology that would allow us to deduce it.

Deducing chemistry from the laws of quantum mechanics is rather
difficult, even though we know that it can be done in principle: mother
Nature, with her practically infinite computational resources, does it
effortlessly all the time, but we, with current technology, just can't.

That's a corresponding problem, the same sort.

Namely, it's about crossing up through an /abstraction level boundary/.


Cheers,

- Alf

olcott

unread,
Oct 31, 2020, 11:41:37 PM10/31/20
to
Even if we could define consciousness in terms of a verifiable set of
abilities we would still be stuck with distinguishing between actual
consciousness and a perfect simulation of it.

One way to define strong AI apart from consciousness would be the
ability to do anything that a mind can do. If a computer program could
win a Supreme Court case this would be a very good measure.

>>>
>>> The idea of consciousness and what it would mean for a computer
>>> program to exhibit consciousness is hard to define.
>
>> You'll know it when you see it.
>>
> Right. But if we didn't already know we were self aware and could
> experience the world, there is nothing in contemporary physics or
> neurophysiology that would allow us to deduce it.
>
> Daniel
>


daniel...@gmail.com

unread,
Nov 1, 2020, 12:09:17 AM11/1/20
to
Take our entire scientific edifice as it exists today - chemistry, physics,
general relativity, quantum mechanics, DNA, evolution, the Higgs
boson - and it still has no place for consciousness. There is nothing
there that explains why we experience things as opposed to simply
doing things. It's a problem. And from problems follows inquiry. No
doubt our understanding of reality will need to advance again, as it
has several times in the past.

Daniel

Alf P. Steinbach

unread,
Nov 1, 2020, 1:03:28 AM11/1/20
to
On 01.11.2020 05:08, daniel...@gmail.com wrote:
> On Saturday, October 31, 2020 at 11:35:25 PM UTC-4, Alf P. Steinbach wrote:
>> On 01.11.2020 04:05, daniel...@gmail.com wrote:
>
>>> But if we didn't already know we were self aware and could
>>> experience the world, there is nothing in contemporary physics or
>>> neurophysiology that would allow us to deduce it.
>
>> Deducing chemistry from the laws of quantum mechanics is rather
>> difficult, even though we know that it can be done in principle: mother
>> Nature, with her practically infinite computational resources, does it
>> effortlessly all the time, but we, with current technology, just can't.
>>
> Take our entire scientific edifice as it exists today - chemistry, physics,
> evolution, general relativity, quantum mechanics, DNA, evolution, Higgs
> Boson - and it still has no place for consciousness.

Well, the laws of quantum mechanics have nothing resembling chemistry.

If you didn't know that chemistry existed, you couldn't deduce it (with
current tech) from QM.

But you know that chemistry exists, and just like Niels Bohr did, with
that knowledge of existence and some properties you can deduce some
connections down to the more basic abstraction layer. I'm not sure I'm
attributing this right, but as I recall Bohr came up with the idea of
electron shells with a specific number of slots in each (2, 8, 8 ...),
which explained much of how atoms react to form molecules. It's a sort
of intermediate level, an abstraction bridge, not quite 100% anchored in
QM, nor with quite 100% explanatory power for chemistry, but it helps.

Similarly, knowing that intelligence exists, and some properties of
intelligence (e.g., it disappears with brain death, it appears gradually
as one learns, etc.), you can relate it down to pure chemistry and
physics. You can establish some abstraction bridges, such as neural
nets and, at a higher level, the idea of semantic nets, etc. But as with
chemistry, with current tech you just can't /deduce/ or fully explain
intelligence in terms of a lower level of abstraction.

There are lots of clearly purely conventional physical phenomena (e.g.
general protein folding) that currently are similarly beyond our ken, so
there's absolutely no need to get mystical about it.


> There is nothing
> there that explains why we experience things as opposed to simply
> doing things. It's a problem. And from problems, follows inquiry. No
> doubt our understanding of reality will need to advance again, as it
> has several times past.

There is /apparently/ nothing... You don't see chemistry in QM, and you
don't see intelligence in physics. Same thing. There's no need to
postulate any hidden aspects of reality to understand chemistry.


Cheers,

- Alf

daniel...@gmail.com

unread,
Nov 1, 2020, 10:38:41 AM11/1/20
to
On Sunday, November 1, 2020 at 1:03:28 AM UTC-4, Alf P. Steinbach wrote:
> On 01.11.2020 05:08, daniel...@gmail.com wrote:
> >>
> > Take our entire scientific edifice as it exists today - chemistry, physics,
> > evolution, general relativity, quantum mechanics, DNA, evolution, Higgs
> > Boson - and it still has no place for consciousness.
>
> There are lots of clearly purely conventional physical phenomena (e.g.
> general protein folding) that currently are similarly beyond our ken, so
> there's absolutely no need to get mystical about it.

Right, to get mystical is to admit defeat. To talk about souls and spirits, or
a spiritual/physical divide, is to admit defeat. We can be mystified by
things, but that doesn't mean we should seek mystical solutions.

I think it's fair to say that philosophers, physicists and neuroscientists
currently have a better understanding of what consciousness isn't
than what it is. There is the idea that "the hard problem of consciousness"
is really hard. And some of the conjectures made about what
consciousness is are pretty strange. Currently this is mostly
speculative, as cosmology was once entirely speculative, but with
cosmology, we now have evidence that supports some ideas but
not others. No doubt understanding will advance.

Daniel

Leo

unread,
Nov 1, 2020, 3:43:45 PM11/1/20
to
On 10/29/20 8:00 AM, olcott wrote:
> On 10/28/2020 8:43 PM, daniel...@gmail.com wrote:
>> On Wednesday, October 28, 2020 at 7:43:31 PM UTC-4, olcott wrote:
>>> Intuitively, a decider should be a Turing machine that given an input,
>>> halts
>>
>> Also intuitively, olcott  should be smart enough to know how to stop
>> cross posting to comp.lang.c++, but it seems that this is not so ...
>>
>> Daniel
>>
>
> My following has tripled because of my cross-posting.
>
> I just solved the halting problem thus changing a basic foundation of
> computer science.
>
> I need my work vetted before I can present it at a higher level.
> As soon as this work is confirmed or refuted I will quit cross-posting.
>


How can someone be so clueless about CS concepts yet find out about
relatively obscure places like Usenet? Does this happen just because you
get shadowbanned and ignored everywhere else?

Chris Vine

unread,
Nov 2, 2020, 8:53:59 AM11/2/20
to
Let us not confuse artificial intelligence on the one hand, which by
most definitions we already have, in the sense of machines which can
adapt their behaviour according to past events or patterns, and which
can interact sensibly with human beings in an apparently intelligent
way, with artificial consciousness on the other hand, which we do not
appear to have and which in any event we have as yet no means of
testing for.

There are plausible arguments that in humans and the higher animals
consciousness is an emergent property (or illusion, take your pick)
behind which determinism lurks, but that is not to say that the
self-generated states implied by consciousness are always benign, as
witnessed by the behaviour that can arise from the psychoses and other
disorders in human minds. It seems probable that in any conscious
system the possibility of malign disorders will always exist.

That is not to say that complex but non-conscious artificial
intelligence cannot also be capable of malign behaviour.

Tim Rentsch

unread,
Nov 10, 2020, 10:40:59 PM11/10/20
to
"Alf P. Steinbach" <alf.p.stein...@gmail.com> writes:

[...]

> But related to the halting problem: Turing's impossibility proof
> relied crucially on the ability to construct a special program that
> was undecidable for a given fixed decider machine that /could be
> numbered/ and henceforth referred to via the number. But if that
> decider machine is a human, well, you can't create a fixed undecidable
> program relative to the human because the human evolves and changes
> and has no fixed completely defining number. And ditto for an
> artifical intelligence.
>
> So not only was Penrose wrong (which is obvious from his silly result)
> but there is necessarily something fishy in Turing's proof (not so
> obvious!), because it can't deal with a human as decider.

The question is meaningless because what may happen when someone
or something "evolves" is not defined. If one assumes that a
mechanism, whether human or non-human, may "evolve" so as to
solve the halting problem, then the mechanism may solve the
halting problem. For the question to have a meaning though,
there must be some kind of statement about what kind of changes
are possible when the considering agent "evolves".

As far as human agents go, all the evidence available so far is
that human beings are no better at solving the halting problem
than programs (ie, Turing Machines) are, and if anything they are
worse. In any case the idea that a person could accept an
arbitrary Turing Machine as input and say within some finite time
(a thousand years, say) whether it will halt is obviously wrong.
Just for starters, the input Turing Machine could be so big it
would take more than a thousand years just to read it.

> Namely, Turing did not account for evolving deciders -- I believe a
> very simple example could be one that is influenced by random numbers.

Random numbers don't change anything. Any state change based on
a random number can be simulated by a non-deterministic Turing
Machine, and deterministic Turing Machines can compute anything
an NDTM can. No power is gained by introducing randomness.
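
A toy sketch of the branch-covering idea in ordinary code (the
transition rule below is invented purely for illustration; a real
construction would carry whole tape configurations rather than a single
integer): a completely deterministic program covers every possible value
of the random input by walking all branches breadth first, with no need
to predict anything.

#include <cstdio>
#include <queue>

// Toy machine configuration: an integer plus a step count. A real
// construction would store a full tape/state configuration here.
struct Config { int value; int steps; };

// One step of a machine whose behavior depends on a random bit.
// (Invented transition rule, purely for illustration.)
Config step( Config c, int random_bit )
{
    return Config{ random_bit ? c.value + 3 : c.value - 1, c.steps + 1 };
}

bool halted( Config c ) { return c.value == 0; }

int main()
{
    // Deterministically explore every sequence of random bits, breadth first.
    std::queue<Config> frontier;
    frontier.push( Config{ 7, 0 } );

    const int max_steps = 10;
    while( !frontier.empty() )
    {
        Config c = frontier.front();
        frontier.pop();
        if( halted( c ) )
        {
            std::printf( "some sequence of random bits halts after %d steps\n", c.steps );
            return 0;
        }
        if( c.steps < max_steps )
        {
            frontier.push( step( c, 0 ) );   // branch where the random bit came up 0
            frontier.push( step( c, 1 ) );   // branch where the random bit came up 1
        }
    }
    std::printf( "no halting branch within %d steps\n", max_steps );
    return 0;
}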

Tim Woodall

unread,
Nov 11, 2020, 3:10:23 AM11/11/20
to
On 2020-11-11, Tim Rentsch <tr.1...@z991.linuxsc.com> wrote:
> As far as human agents go, all the evidence available so far is
> that human beings are no better at solving the halting problem
> than programs (ie, Turing Machines) are, and if anything they are
> worse.

Agreed!

The value of the maximal shift function S(5) for 5 state 2 symbol TMs
isn't known.

There are (at most) $(2*2*6)^{2*5} = 24^{10}$ TMs to investigate (each
of the $2*5$ (state, symbol) pairs gets a transition chosen from 2
symbols to write, 2 directions to move, and 6 next states including the
halt state). While this is a large number, it's well within the limits
of exhaustive search for other similarly sized problems.

The remaining (unknown if they halt) machines are known to run for
a minimum of (approx) 10^12 steps if they halt at all. The longest run
of any 5-state machine that is known to halt is just under 50,000,000
steps.

AIUI there are only about 100 machines whose halting status is unknown.
If humans were any good at telling whether a machine will halt then
these remaining ones would have been solved "by inspection", but they
haven't been.
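
For concreteness, the brute-force part of such a search looks roughly
like the sketch below. It is only a sketch: the tape handling and the
step bound are simplified, the example machine is a hard-coded toy
rather than the real $24^{10}$ enumeration, and real BB(5) searches add
symmetry reductions and non-halting proofs on top of this.

#include <cstdio>
#include <vector>

// One 2-symbol transition: symbol to write, head move (-1 left, +1 right),
// and next state (a value equal to the number of states means "halt").
struct Rule { int write; int move; int next; };

enum class Verdict { Halts, NoVerdict };

// Run one machine from a blank tape for at most maxSteps steps.
Verdict run( const std::vector<std::vector<Rule>>& delta, long maxSteps )
{
    const int numStates = int( delta.size() );
    std::vector<int> tape( 1, 0 );
    long pos = 0;
    int state = 0;

    for( long step = 0; step < maxSteps; ++step )
    {
        if( state == numStates ) { return Verdict::Halts; }
        const Rule& r = delta[state][ tape[pos] ];
        tape[pos] = r.write;
        pos += r.move;
        if( pos < 0 ) { tape.insert( tape.begin(), 0 ); pos = 0; }      // grow tape to the left
        else if( pos == long( tape.size() ) ) { tape.push_back( 0 ); } // grow to the right
        state = r.next;
    }
    return Verdict::NoVerdict;   // inconclusive: may halt later, may run forever
}

int main()
{
    // A hard-coded 2-state toy instead of the full enumeration: write two 1s,
    // move right, halt. A real search loops over all transition tables.
    std::vector<std::vector<Rule>> toy =
    {
        { { 1, +1, 1 }, { 1, +1, 1 } },   // state 0
        { { 1, +1, 2 }, { 1, +1, 2 } },   // state 1 (next state 2 == halt)
    };
    Verdict v = run( toy, 50000000 );
    std::printf( "%s\n", v == Verdict::Halts ? "halts" : "no verdict within the step bound" );
}

The ~100 holdouts are exactly the machines for which this kind of
bounded run says "no verdict" and for which nobody has found a
non-halting proof either.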

Tim.

Alf P. Steinbach

unread,
Nov 11, 2020, 6:57:36 AM11/11/20
to
I'm sorry, but envisioning humans doing a thing like that without
employing computers is just daft. Like, stupid.


>> Namely, Turing did not account for evolving deciders -- I believe a
>> very simple example could be one that is influenced by random numbers.
>
> Random numbers don't change anything. Any state change based on
> a random number can be simulated by a non-deterministic Turing
> Machine, and deterministic Turing Machines can compute anything
> a NDTM can. No power is gained by introducing randomness.

You can't predict a future random event, so you can't predict the
random-influenced machine, so you can't construct an unsolvable case for it.

So, your assertion that randomness is irrelevant, is, uhm, just an
assertion.

I'm not saying your conclusion is necessarily wrong but there's no
connection from premise to conclusion; that leap of yours is beyond
logic (unless you're postulating a time machine?).


- Alf

olcott

unread,
Nov 11, 2020, 12:06:39 PM11/11/20
to
On 10/28/2020 6:43 PM, olcott wrote:
> Intuitively, a decider should be a Turing machine that given an input,
> halts and either accepts or rejects, relaying its answer in one of many
> equivalent ways, such as halting at an ACCEPT or REJECT state, or
> leaving its answer on the output tape.Yuval Filmus
>
> https://cs.stackexchange.com/questions/84433/what-is-decider
>
> Yuval Filmus top 0.04% overall Assistant Professor in the Department of
> Computer Science at the Technion.
>
> A non-halting decider can be defined that always Accepts NON_HALTING
> input and Rejects HALTING input as follows:
>
> The NON_HALTING decider UTM executes its subordinate UTM one state
> transition at a time until it detects non-halting behavior or its
> subordinate UTM has terminated normally.
>
> If the halt decider UTM detects non-halting behavior of its subordinate
> UTM it simply stops executing the subordinate and transitions to its own
> final state of NON_TERMINATING_BEHAVIOR_DETECTED.
>
> If the subordinate UTM terminates normally the halt decider UTM
> transitions to its own final state of SUBORDINATE_HAS_TERMINATED.

// Simplified version of Linz H_Hat DOES NOT copy itself
01 void H_Hat(u32 P) // (bottom of page 319)
02 {
03   if (H(P, P))
04     HERE: goto HERE;
05   else
06     HALT
07 }

// Linz H with some of the ⊢* states shown
08 bool H(u32 P, u32 I) // Page 318
09 {
10   bool Halted  = false;
11   bool Aborted = false;
12   while (!Halted && !Aborted)
13   {
14     Halted  = DebugStep(P, I);
15     Aborted = Needs_To_Be_Aborted();
16   }
17   return !Aborted;
18 }

http://www.liarparadox.org/Peter_Linz_HP(Pages_315-320).pdf

Linz, Peter 1990. An Introduction to Formal Languages and Automata.
Lexington/Toronto: D. C. Heath and Company.

Whenever a halt decider correctly determines that its input would
never halt unless forced to halt by this halt decider, this halt decider
has made a correct not-halting determination.

Unless H forces its input to halt, the first parameter of H(H_Hat, H_Hat)
specifies infinite recursion at its line 03.

NEW MATERIAL ADDED HERE

int main()
{
  H_Hat(H_Hat);
}

H_Hat(H_Hat) calls H(H_Hat, H_Hat) that decides that its first parameter
(also the parameter to H_Hat) would never halt unless the halt decider
forces it to halt. The execution trace proves that this is true.

H_Hat was intentionally designed to do the opposite of whatever the halt
decider decides to thwart a correct halting decision. The halt decider
got smarter and its halting decision is no longer thwarted.

The decision of every decider can only be correctly measured against its
input. The question is: did the decider meet its own criterion measure
for its input?

When the criterion measure of a halt decider is:
Does the input have to be forced to stop to prevent its infinite execution?

And the input does have to be forced to stop to prevent its infinite
execution.

Then the halt decider necessarily made a correct not-halting decision.

int main()
{
  H_Hat(H_Hat);
  H(H_Hat, H_Hat);
}

In both of the above two cases the input must be forced to halt by the
halt decider or it would never halt.

Mr Flibble

unread,
Nov 12, 2020, 1:36:59 PM11/12/20
to
On 28/10/2020 23:43, olcott wrote:
> Intuitively, a decider should be a Turing machine that given an input, halts and either accepts or rejects, relaying its answer in one of many equivalent ways, such as halting at an ACCEPT or REJECT state, or leaving its answer on the output tape.Yuval Filmus
>
> https://cs.stackexchange.com/questions/84433/what-is-decider
>
> Yuval Filmus top 0.04% overall Assistant Professor in the Department of Computer Science at the Technion.
>
> A non-halting decider can be defined that always Accepts NON_HALTING input and Rejects HALTING input as follows:
>
> The NON_HALTING decider UTM executes its subordinate UTM one state transition at a time until it detects non-halting behavior or its subordinate UTM has terminated normally.
>
> If the halt decider UTM detects non-halting behavior of its subordinate UTM it simply stops executing the subordinate and transitions to its own final state of NON_TERMINATING_BEHAVIOR_DETECTED.
>
> If the subordinate UTM terminates normally the halt decider UTM transitions to its own final state of SUBORDINATE_HAS_TERMINATED.

You still haven't shown how your decider decides that a program won't halt, and until you do this you haven't refuted anything. SHOW. YOUR. WORKING. OR. SHUT. THE. FUCK. UP.

/Flibble

--
¬

Sjouke Burry

unread,
Nov 12, 2020, 3:37:20 PM11/12/20
to
On 12.11.20 19:36, Mr Flibble wrote:
> On 28/10/2020 23:43, olcott wrote:
>> Intuitively, a decider should be a Tur
cut

You still have not killfiled olcott yet? Tsssskkkkk.....

olcott

unread,
Nov 12, 2020, 4:00:16 PM11/12/20
to
I set the Followup-To field to be comp.theory

Tim Rentsch

unread,
Nov 17, 2020, 10:56:47 PM11/17/20
to
Adding computers doesn't help. Even turning the entire physical
universe into computers, the result is still finite. Man and
machine together can deal with, at most, an infinitesimally small
fraction of the set of Turing Machines.

>>> Namely, Turing did not account for evolving deciders -- I believe a
>>> very simple example could be one that is influenced by random numbers.
>>
>> Random numbers don't change anything. Any state change based on
>> a random number can be simulated by a non-deterministic Turing
>> Machine, and deterministic Turing Machines can compute anything
>> a NDTM can. No power is gained by introducing randomness.
>
> You can't predict a future random event, so you can't predict the
> random-influenced machine, so you can't construct an unsolvable case
> for it.

It isn't necessary to predict any future event. NDTMs can in
effect go down all possible branches simultaneously, covering
all possible values of any random input. They don't have to know
which one will happen.

> So, your assertion that randomness is irrelevant, is, uhm, just an
> assertion.

It's a proposition that is easily seen to be correct and
convincing to those who understand how NDTMs work.

> I'm not saying your conclusion is necessarily wrong but there's no
> connection from premise to conclusion; that leap of yours is beyond
> logic (unless you're postulating a time machine?).

Which part of my statement do you think represents a leap?
Is it

(a) Any state change based on a random number can be simulated
by a non-deterministic Turing Machine, or

(b) deterministic Turing Machines can compute anything a NDTM
can?

Alf P. Steinbach

unread,
Nov 18, 2020, 7:48:28 AM11/18/20
to
I'm sorry but that's disconnected from reality.

Your argument, quoted above, was that a Turing Machine could be so big
that reading it would exceed the lifetime of a really long-lived person,
and that this observation somehow supported the argument that humans
cannot discover a halting-or-not answer that a specific, very restricted
TM (the one that the halting problem instance is specifically designed
for) fails to discover.

I chose to address the apparent observation in itself, noting that one
assumption in it is just daft: the idea of a person having to read the
TM specification. That I chose to constrain myself and be succinct
doesn't mean that the other aspects of the idea are intelligent in any
way. Nor does it mean that the alleged support of the hypothesis that
intelligences are simple Turing Machines is intelligent.

By the way, just throwing in one more thing while I'm at it:
intelligences can /communicate/, e.g. results about halting problems.

That's one aspect entirely missing from Turing Machines, but the main
thing, as I noted, just to get this back on track, is that you cannot
(faithfully) model an intelligence with a single number that defines it,
and that severely limits the applicability of Turing's proof.

Again, sorry, but you're presenting essentially humanistic and/or
religious arguments in a technical forum.

It doesn't matter that these humanistic and/or religious arguments are
/cast/ as technical arguments: form doesn't matter here, substance does.

[snipalot]


Cheers,

- Alf

Chris M. Thomasson

unread,
Nov 18, 2020, 10:08:51 PM11/18/20
to
For some reason, this reminds me of trying to predict whether a point in
the complex plane escapes under the Mandelbrot iteration or not.
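
(Sketch of what I mean, with the usual escape-time iteration and an
arbitrary cap; the points that haven't escaped by the cap are exactly
the "no verdict yet" cases:)

#include <complex>
#include <cstdio>

// Classic escape-time test: iterate z = z*z + c. Once |z| exceeds 2 the
// point certainly escapes; if we hit the cap first we simply don't know yet.
int escape_iterations( std::complex<double> c, int max_iter )
{
    std::complex<double> z = 0.0;
    for( int i = 0; i < max_iter; ++i )
    {
        z = z*z + c;
        if( std::abs( z ) > 2.0 ) { return i; }   // definitely escapes
    }
    return -1;                                     // undecided within the cap
}

int main()
{
    std::complex<double> inside( -0.5, 0.0 );   // well inside the main cardioid
    std::complex<double> outside( 1.0, 1.0 );   // escapes almost immediately

    std::printf( "(-0.5, 0): %d\n", escape_iterations( inside, 1000 ) );
    std::printf( "( 1.0, 1): %d\n", escape_iterations( outside, 1000 ) );
}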

Tim Rentsch

unread,
Nov 20, 2020, 12:45:33 PM11/20/20
to
I think you misunderstood my comments. My claim is only that
people (then expanded to any collection of people and computers
in the physical universe) cannot solve the halting problem. It
is always possible to answer the question of halting for a single
Turing Machine input, or indeed any finite collection of inputs.

> I chose to address the apparent observation in itself, noting that one
> assumption in it is just daft, the idea of a person having to read the
> TM specification. That I chose to constrain myself and be succinct
> doesn't mean that the other aspects of the idea are intelligent in any
> way. Nor does it mean that the alleged support of the hypothesis that
> intelligences are simple Turing Machines, is intelligent.
>
> By the way, just throwing in one more thing while I'm at it:
> intelligences can /communicate/, e.g. results about halting problems.
>
> That's one aspect entirely missing from Turing Machines, but the main
> thing, as I noted, just to get this back on track, is that you cannot
> (faithfully) model an intelligence with a single number that defines
> it, and that severely limits the applicability of Turing's proof.

I wasn't addressing any statement regarding intelligence. My
comments were only about things that exist in the physical
universe, whether they are intelligent or not.

> Again, sorry, but you're presenting essentially humanistic and/or
> religious arguments in a technical forum.

My comments are neither humanistic nor religious. The conclusions
rest solely on the property that the universe is finite, which I
think essentially all scientists would agree with.

> It doesn't matter that these humanistic and/or religious arguments are
> /cast/ as technical arguments: form doesn't matter here, substance
> does.

I'm sorry that we seem to be having trouble understanding each
other.

Tim Woodall

unread,
Nov 20, 2020, 4:20:28 PM11/20/20
to
On 2020-11-20, Tim Rentsch <tr.1...@z991.linuxsc.com> wrote:
>
> I think you misunderstood my comments. My claim is only that
> people (then expanded to any collection of people and computers
> in the physical universe) cannot solve the halting problem. It
> is always possible to answer the question of halting for a single
> Turing Machine input, or indeed any finite collection of inputs.
>

A circa 8k [1] state (2 symbol, 1 tape) TM is known that halts if ZFC is
inconsistent.

A proof that it doesn't halt would be the same as proving ZFC is
consistent.

[1] I believe there are much smaller machines known (<1k states)

Tim Rentsch

unread,
Nov 21, 2020, 7:29:01 AM11/21/20
to
Tim Woodall <new...@woodall.me.uk> writes:

> On 2020-11-20, Tim Rentsch <tr.1...@z991.linuxsc.com> wrote:
>
>> I think you misunderstood my comments. My claim is only that
>> people (then expanded to any collection of people and computers
>> in the physical universe) cannot solve the halting problem. It
>> is always possible to answer the question of halting for a single
>> Turing Machine input, or indeed any finite collection of inputs.
>
> A circa 8k state (2 symbol, 1 tape) TM is known that halts if ZFC is
> inconsistent.
>
> A proof that it doesn't halt would be the same as proving ZFC is
> consistent.

This doesn't surprise me. There are other such TMs for many open
questions in mathematics - if the TM halts then a counterexample
was found and the conjecture is false, or if the conjecture is
true then there is no counterexample to be found, and the TM will
run forever.

What do you think that implies, if anything, about my earlier
statement?

Ben Bacarisse

unread,
Nov 21, 2020, 7:43:57 AM11/21/20
to
To me, the surprising thing is the size. And the size has come down.
Another 2 symbol TM with less than 800 states is known with the same
properties. Of course getting the size down is just a matter of
ingenuity, but it's still what we technically refer to as gobsmacking.

> What do you think that implies, if anything, about my earlier
> statement?

Maybe the issue is what you mean by "[it] is always possible to answer
the question". Finite sets are decidable, but I would not have phrased
it as you did (assuming that is all you meant).

--
Ben.

Tim Woodall

unread,
Nov 21, 2020, 10:00:39 AM11/21/20
to
I'm responding to your claim:
>>> It
>>> is always possible to answer the question of halting for a single
>>> Turing Machine input, or indeed any finite collection of inputs.

I believe this to be false. I believe there are (relatively) small
Turing machines for which the halting status is undecidable.

We already know that the question is undecidable in general, which
implies that there exists a TM whose halting status is undecidable in
particular. I'm contending that not only do such TMs exist, but
that they're relatively small and some (whose halting status is
undecidable) are already known.


The consistency of ZFC cannot be proved from within ZFC. So any proof
that this (fairly small) TM doesn't halt on a ZFC inconsistency will
require a proof from outside the realm of computing, if such a proof can
exist at all.

Even some relatively simple things - e.g. counting up to the value of
BB(8000) - must be impossible, because we know that any <8k state TM
must halt within BB(8000) steps or run forever.

And yet it's trivial to construct a TM that, given a unary number N on
its input tape, writes N-1, N-2, N-3 ... 1 after it (each number
separated by a blank) and then terminates.

Therefore BB(8000) must be "unknowable".
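
(What that counting-down machine computes, sketched in ordinary C++ over
a string "tape" just to make the claim concrete; the real thing is a
transition table, and this ignores how the head actually shuttles back
and forth:)

#include <cstdio>
#include <string>

// Given a unary number N (written as N ones), append N-1, N-2, ..., 1,
// each in unary and separated by a blank.
std::string count_down( int n )
{
    std::string tape( n, '1' );
    for( int k = n - 1; k >= 1; --k )
    {
        tape += ' ';
        tape.append( k, '1' );
    }
    return tape;
}

int main()
{
    std::printf( "[%s]\n", count_down( 4 ).c_str() );   // prints [1111 111 11 1]
}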

Tim Woodall

unread,
Nov 21, 2020, 10:20:22 AM11/21/20
to
On 2020-11-21, Tim Woodall <new...@woodall.me.uk> wrote:

> Even some relatively simple things - e.g. counting up to the value of
> BB(8000) - must be impossible because we know that any <8k state TM
> must halt in less than BB(8000) steps or run forever.
>
> And yet it's trivial to construct a TM that given a unary number N on
> its input tape writes N-1, N-2, N-3 ... 1 after it (each number
> separated by a blank) and then terminates.
>
> Therefore BB(8000) must be "unknowable"
>

That was a bit "confused".

The halting problem here is for a machine started with a blank tape,
while the "counting down" machine doesn't start with a blank tape. But
the argument shows that if you know BB(8000) you can count up to it, and
being able to count up to it is the same as being able to decide the
halting status of an 8k state machine that starts with a blank tape. And
being able to decide the halting status of an 8k machine by just running
it for BB(8k) steps is inside ZFC.

Tim Rentsch

unread,
Nov 22, 2020, 6:27:20 AM11/22/20
to
Ben Bacarisse <ben.u...@bsb.me.uk> writes:

> Tim Rentsch <tr.1...@z991.linuxsc.com> writes:
>
>> Tim Woodall <new...@woodall.me.uk> writes:
>>
>>> On 2020-11-20, Tim Rentsch <tr.1...@z991.linuxsc.com> wrote:
>>>
>>>> I think you misunderstood my comments. My claim is only that
>>>> people (then expanded to any collection of people and computers
>>>> in the physical universe) cannot solve the halting problem. It
>>>> is always possible to answer the question of halting for a single
>>>> Turing Machine input, or indeed any finite collection of inputs.
>>>
>>> A circa 8k state (2 symbol, 1 tape) TM is known that halts if ZFC is
>>> inconsistent.
>>>
>>> A proof that it doesn't halt would be the same as proving ZFC is
>>> consistent.
>>
>> This doesn't surprise me. There are other such TMs for many open
>> questions in mathematics - if the TM halts then a counterexample
>> was found and the conjecture is false, or if the conjecture is
>> true then there is no counterexample to be found, and the TM will
>> run forever.
>
> To me, the surprising thing is the size. [...]

I didn't think about that at all. I don't have any sense for
either how big the question (about ZFC) is, or what Turing
Machines with a given number of states might be able to do.
Since I don't have any kind of framework in which to evaluate it,
I really didn't even notice the number.

>> What do you think that implies, if anything, about my earlier
>> statement?
>
> Maybe the issue is what you mean by "[it] is always possible to answer
> the question". Finite sets are decidable, but I would not have phrased
> it as you did (assuming that is all you meant).

I was trying to make a different point, about the nature of
Turing Machines, and which appears to be in need of further
explanation.

Tim Rentsch

unread,
Nov 22, 2020, 7:54:26 AM11/22/20
to
I think you're confusing being decidable with being provable. If
we are concerned with only a single input TM (call it Z), then
there is a TM (call it D) that tells us whether Z halts or not.
Because Z is fixed, either it halts or it doesn't. Hence D can
be one of two simple TMs: one that outputs "yes" and halts, or
one that outputs "no" and halts. We may not know which one is
right, but certainly one of those two answers the question (and
answers it correctly).
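
In code the point is almost embarrassingly small. The two candidate
deciders for the one fixed machine Z are just the following, and one of
them (we may not know which) answers correctly:

#include <cstdio>

// For a single, fixed machine Z a "decider" doesn't even have to look at
// anything: either Z halts or it doesn't, so one of these two constant
// programs answers the question, and answers it correctly.
bool decider_yes() { return true; }    // correct if Z halts
bool decider_no()  { return false; }   // correct if Z runs forever

int main()
{
    // Which one is right is a matter of proof, not of computability;
    // the correct decider exists either way.
    std::printf( "candidate answers: %d and %d\n",
                 int( decider_yes() ), int( decider_no() ) );
}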

> The consistency of ZFC cannot be proved from within ZFC. So any proof
> that this (fairly small) TM doesn't halt on a ZFC inconsistency will
> require a proof from outside the relm of computing if such a proof can
> exist at all.

A TM that tells us whether the machine Z halts isn't obliged to
supply a proof that its answer is correct, only to produce the
right answer. If we want to prove that a particular proposed
machine does indeed produce the right answer then proving that is
up to us. But there isn't any doubt that a TM that does give the
correct answer actually exists, even if we don't know which one
it is.

> Even some relatively simple things - e.g. counting up to the value of
> BB(8000) - must be impossible because we know that any <8k state TM
> must halt in less than BB(8000) steps or run forever.

You mean it's impossible for a machine with only 8000 states, right?
Since BB(8000) is a fixed and finite number, certainly there is a TM
(with more than 8000 states) that can count up to BB(8000).

More generally, for any finite number N, there is a TM H(N) that
can tell us whether a TM that has N (or fewer) states will halt.
The machine H(N) works by simulating its input for BB(N) steps.
If the simulated machine hasn't halted by that time then it never
will. Even though we don't know what the value of BB(N) actually
is, there isn't any question that H(N) exists for every finite N.
Does that all make sense?

> And yet it's trivial to construct a TM that given a unary number N
> on its input tape writes N-1, N-2, N-3 ... 1 after it (each number
> separated by a blank) and then terminates.
>
> Therefore BB(8000) must be "unknowable"

The number BB(8000) is so unimaginably large that I doubt we will
ever know much about it. However, isn't it the case that the
number BB(8000) can be produced by a TM with only 8001 states?
It's been a while since I've looked at this stuff but I'm pretty
sure that's right, at least above some moderately small size.

By the way, if your comments about BB(8000) are based on some
sort of connection to the logic of ZFC then I am missing that,
since I don't have much background in what ZFC implies. But I
think my comments above about TMs are correct and are not
affected by matters dependent on ZFC consistency.