
Clarification of Linz Ĥ Description


olcott

Sep 28, 2021, 11:20:35 PM
The halting theorem counter-examples present infinitely nested
simulation (non-halting) behavior to every simulating halt decider.

-----1-----------2--3----------
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞

If the simulation of the 2nd ⟨Ĥ⟩ applied to
the 3rd ⟨Ĥ⟩ at Ĥ.qx reaches its final state.


-----1-----------2--3----------
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

If the simulation of the 2nd ⟨Ĥ⟩ applied to
the 3rd ⟨Ĥ⟩ at Ĥ.qx never reaches its final state.


https://www.researchgate.net/publication/351947980_Halting_problem_undecidability_and_infinitely_nested_simulation


--
Copyright 2021 Pete Olcott

"Great spirits have always encountered violent opposition from mediocre
minds." Einstein

olcott

Sep 28, 2021, 11:47:52 PM
The following simplifies the syntax for the definition of the Linz
Turing machine Ĥ; it is now a single machine with a single start state.
A simulating halt decider is embedded at Ĥ.qx. It has been annotated so
that it only shows Ĥ applied to ⟨Ĥ⟩, converting the variables to constants.

Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
if the simulated input to Ĥ.qx ⟨Ĥ⟩ applied to ⟨Ĥ⟩ reaches its final
state. (Halts)

Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
if the simulated input to Ĥ.qx ⟨Ĥ⟩ applied to ⟨Ĥ⟩ never reaches its
final state. (Does not halt)

Richard Damon

Sep 29, 2021, 6:26:44 AM

On 9/28/21 11:20 PM, olcott wrote:
> The halting theorem counter-examples present infinitely nested
> simulation (non-halting) behavior to every simulating halt decider.
>
> -----1-----------2--3----------
> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
>
> If the simulation of the 2nd ⟨Ĥ⟩ applied to
> the 3rd ⟨Ĥ⟩ at Ĥ.qx reaches its final state.
>
>
> -----1-----------2--3----------
> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
>
> If the simulation of the 2nd ⟨Ĥ⟩ applied to
> the 3rd ⟨Ĥ⟩ at Ĥ.qx never reaches its final state.
>
>
> https://www.researchgate.net/publication/351947980_Halting_problem_undecidability_and_infinitely_nested_simulation
>
>
>


The Halting Problem NEVER says 'If the Simulation of ...'

It ALWAYS talks about the behavior of the machine whose representation
is given as the input.

The fact you CHANGE this shows you are working on the wrong problem.

In the version you quote it says:

In computability theory, the halting problem is the problem of
determining, from a description of an arbitrary computer program and an
input, whether the program will finish running, or continue to run forever.

Let us look closely at this statement.

What is the object of the problem?

determining whether the program will finish running or continue to run
forever.

Thus, the problem is about the PROGRAM (i.e. the actual machine) not the
representation.

Yes, there is the phrase "from a description of an arbitrary computer
program and an input" which gives us details of how this program is
specified. It does NOT mean that we are being asked a question about
properties of this description.

The description is the MEANS of determining the answer, not the GOAL.

It seems you fail elementary English.

Try Again.

Richard Damon

Sep 29, 2021, 6:28:28 AM
On 9/28/21 11:47 PM, olcott wrote:
> The following simplifies the syntax for the definition of the Linz
> Turing machine Ĥ, it is now a single machine with a single start state.
> A simulating halt decider is embedded at Ĥ.qx. It has been annotated so
> that it only shows Ĥ applied to ⟨Ĥ⟩, converting the variables to constants.
>
> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
> if the simulated input to Ĥ.qx ⟨Ĥ⟩ applied to ⟨Ĥ⟩ reaches its final
> state. (Halts)
>
> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
> if the simulated input to Ĥ.qx ⟨Ĥ⟩ applied to ⟨Ĥ⟩ never reaches its
> final state. (Does not halt)
>
> https://www.researchgate.net/publication/351947980_Halting_problem_undecidability_and_infinitely_nested_simulation
>
>

WRONG.

The problem is NOT "If the simulated ..." but "If the Machine ..."

To change it to 'simulated' says that the only deciders that could exist
are simulators.

FAIL.

Richard Damon

Sep 29, 2021, 7:52:50 AM

On 9/28/21 11:47 PM, olcott wrote:
> The following simplifies the syntax for the definition of the Linz
> Turing machine Ĥ, it is now a single machine with a single start state.
> A simulating halt decider is embedded at Ĥ.qx. It has been annotated so
> that it only shows Ĥ applied to ⟨Ĥ⟩, converting the variables to constants.
>
> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
> if the simulated input to Ĥ.qx ⟨Ĥ⟩ applied to ⟨Ĥ⟩ reaches its final
> state. (Halts)
>
> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
> if the simulated input to Ĥ.qx ⟨Ĥ⟩ applied to ⟨Ĥ⟩ never reaches its
> final state. (Does not halt)
>
> https://www.researchgate.net/publication/351947980_Halting_problem_undecidability_and_infinitely_nested_simulation
>
>

Perhaps a better clarification.

The above describes what your current decider DOES.

What it is SUPPOSED to do, in order to be right, is indicate what the
original machine would do, not just its partial simulation.

That difference is why your decider gets the wrong answer.

THAT is the crux of the issue.

This is perhaps related to your mistaken belief that you can impose the
rule that something can only be true if it can be proved, which might be
a working definition in some toy logic systems, but those systems can NOT
handle the fundamental nature of Mathematics.

Mathematics is based on a logic system that allows statements to have a
Truth value even if that value can not be proven, or even known, within
the laws of the system. It does also allow for statements that are
self-contradictory and thus can't have a truth value, or statements that
are poorly defined so we can't tell exactly what they are asking, but these
are a different class of statements.

The Halting Question is a question of the first type: we KNOW that for
every Machine and input combination, that machine must either Halt or
Never Halt, and thus the question ALWAYS has a right answer. The Halting
Problem Proof (which goes farther than the piece you are looking at)
says that for some machines we can not PROVE what that answer is.

Ben Bacarisse

Sep 29, 2021, 6:48:56 PM
olcott <No...@NoWhere.com> writes:

> The following simplifies the syntax for the definition of the Linz
> Turing machine Ĥ, it is now a single machine

It always was.

> with a single start state.

All TM's have a single start state.

> A simulating halt decider is embedded at Ĥ.qx.

This is not "clarifying" but restricting. Linz uses Ĥ to refer to any
TM constructed in a certain way.

> It has been annotated so that it only shows Ĥ applied to ⟨Ĥ⟩,
> converting the variables to constants.

You don't know what at least two of those words mean. The result
is a sentence that is "not even wrong".

> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
> if the simulated input to Ĥ.qx ⟨Ĥ⟩ applied to ⟨Ĥ⟩ reaches its final
> state. (Halts)

The correct annotation is the one in Linz: "if Ĥ applied to ⟨Ĥ⟩ halts".

> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
> if the simulated input to Ĥ.qx ⟨Ĥ⟩ applied to ⟨Ĥ⟩ never reaches its
> final state. (Does not halt)

The correct annotation is the one in Linz: "if Ĥ applied to ⟨Ĥ⟩ does not
halt".

You must deny what Linz says because you are not talking about halt
deciders. You are trying to pretend that Linz was talking about
deciders for your other problem:

This definition of halting circumvents the pathological self-reference
error for every simulating halt decider:

An input is decided to be halting only if ...

On that basis:
Ĥ(<Ĥ>) ⊢* Ĥ.qn
H(<Ĥ>,<Ĥ>) ⊢* H.qn

You don't get to say what halting is. It is already well-defined. Any
definition that provides a basis for H rejecting a computation that is
finite is garbage.

--
Ben.

olcott

Sep 29, 2021, 7:19:18 PM
1----2-----3-----4--5-----6
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

(A) The Linz proof is about whether or not (4) halts on its input (5).
When the halt decider at Ĥ.qx is a simulating halt decider the Linz
proof becomes about whether or not the simulation of (4) halts on its
input (5).

(B) The Linz proof is not about whether or not (1) halts on its input (2).

Although it may be over your head because it is very, very difficult
to understand, (A) and (B) are distinctly different computations that can
have opposite final states without contradiction.

Richard Damon

Sep 29, 2021, 8:21:02 PM
WRONG.

Linz is SOLELY about does the machine P given input I halt or not.

Note that if H^ is really a computation, and it will be if H is, then it
doesn't matter WHICH of the <H^> <H^> you take.

In EVERY case, the first <H^> represents the machine H^, and the second
is its input, and in every case that MACHINE when given that input will
Halt.

It is NEVER about what the simulation by the PARTIAL simulator H sees,
only what the ACTUAL TURING MACHINE does.

PERIOD.

If you can't understand this, you have FLUNKED reading comprehension.

Ben Bacarisse

Sep 29, 2021, 9:36:14 PM
olcott <No...@NoWhere.com> writes:

> On 9/29/2021 5:48 PM, Ben Bacarisse wrote:

>> You must deny what Linz says because you are not talking about halt
>> deciders. You are trying to pretend that Linz was talking about
>> deciders for your other problem:
>>
>> This definition of halting circumvents the pathological self-reference
>> error for every simulating halt decider:
>> An input is decided to be halting only if ...
>>
>> On that basis:
>> Ĥ(<Ĥ>) ⊢* Ĥ.qn
>> H(<Ĥ>,<Ĥ>) ⊢* H.qn
<<
>> You don't get to say what halting is. It is already well-defined. Any
>> definition that provides a basis for H rejecting a computation that is
>> finite is garbage.
>
> 1----2-----3-----4--5-----6
> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
>
> (A) The Linz proof is about...

... halting. You've redefined "halting" so as to permit

Ĥ(<Ĥ>) ⊢* Ĥ.qn
H(<Ĥ>,<Ĥ>) ⊢* H.qn

Unless you repudiate that, everything you say is just so much noise.
It's certainly not about Linz's H/Ĥ.

--
Ben.

olcott

Sep 29, 2021, 9:39:57 PM
On 9/29/2021 8:36 PM, Ben Bacarisse wrote:
> olcott <No...@NoWhere.com> writes:
>
>> On 9/29/2021 5:48 PM, Ben Bacarisse wrote:
>
>>> You must deny what Linz says because you are not talking about halt
>>> deciders. You are trying to pretend that Linz was talking about
>>> deciders for your other problem:
>>>
>>> This definition of halting circumvents the pathological self-reference
>>> error for every simulating halt decider:
>>> An input is decided to be halting only if ...
>>>
>>> On that basis:
>>> Ĥ(<Ĥ>) ⊢* Ĥ.qn
>>> H(<Ĥ>,<Ĥ>) ⊢* H.qn
> <<
>>> You don't get to say what halting is. It is already well-defined. Any
>>> definition that provides a basis for H rejecting a computation that is
>>> finite is garbage.
>>
>> 1----2-----3-----4--5-----6
>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
>>
>> (A) The Linz proof is about...
>
> ... halting. You've redefined "halting" so as to permit
>
> Ĥ(<Ĥ>) ⊢* Ĥ.qn

The question about (1) on (2) is not what Linz is referring to.

> H(<Ĥ>,<Ĥ>) ⊢* H.qn

My H1(P,P) proves this: H(<Ĥ>,<Ĥ>) ⊢* H.qy

>
> Unless you repudiate that, everything you say is just so much noise.
> It's certainly not about Linz's H/Ĥ.
>


--

Richard Damon

Sep 29, 2021, 9:50:53 PM
On 9/29/21 9:39 PM, olcott wrote:
> On 9/29/2021 8:36 PM, Ben Bacarisse wrote:
>> olcott <No...@NoWhere.com> writes:
>>
>>> On 9/29/2021 5:48 PM, Ben Bacarisse wrote:
>>
>>>> You must deny what Linz says because you are not talking about halt
>>>> deciders.  You are trying to pretend that Linz was talking about
>>>> deciders for your other problem:
>>>>
>>>>     This definition of halting circumvents the pathological
>>>> self-reference
>>>>     error for every simulating halt decider:
>>>>     An input is decided to be halting only if ...
>>>>
>>>>     On that basis:
>>>>     Ĥ(<Ĥ>) ⊢* Ĥ.qn
>>>>     H(<Ĥ>,<Ĥ>) ⊢* H.qn
>> <<
>>>> You don't get to say what halting is.  It is already well-defined.  Any
>>>> definition that provides a basis for H rejecting a computation that is
>>>> finite is garbage.
>>>
>>> 1----2-----3-----4--5-----6
>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
>>>
>>> (A) The Linz proof is about...
>>
>> ... halting.  You've redefined "halting" so as to permit
>>
>>      Ĥ(<Ĥ>) ⊢* Ĥ.qn
>
> The question about (1) on (2) is not what Linz is referring to.

What are you smoking??

Linz says that the answer that H should give is based on M w, what that
ACTUAL machine M does when given the input w.

H.q0 is the machine M
<H^> is the w

What the results of (1) on (2) is EXACTLY what H is supposed to answer
about.

>
>>      H(<Ĥ>,<Ĥ>) ⊢* H.qn
>
> My H1(P,P) proves this: H(<Ĥ>,<Ĥ>) ⊢* H.qy

But H1 is NOT the H that H^ was built on, so it doesn't matter.

FAIL.

Ben Bacarisse

Sep 29, 2021, 9:57:54 PM
olcott <No...@NoWhere.com> writes:

> On 9/29/2021 8:36 PM, Ben Bacarisse wrote:
>> olcott <No...@NoWhere.com> writes:
>>
>>> On 9/29/2021 5:48 PM, Ben Bacarisse wrote:
>>
>>>> You must deny what Linz says because you are not talking about halt
>>>> deciders. You are trying to pretend that Linz was talking about
>>>> deciders for your other problem:
>>>>
>>>> This definition of halting circumvents the pathological self-reference
>>>> error for every simulating halt decider:
>>>> An input is decided to be halting only if ...
>>>>
>>>> On that basis:
>>>> Ĥ(<Ĥ>) ⊢* Ĥ.qn
>>>> H(<Ĥ>,<Ĥ>) ⊢* H.qn
>>>>
>>>> You don't get to say what halting is. It is already well-defined. Any
>>>> definition that provides a basis for H rejecting a computation that is
>>>> finite is garbage.
>>>
>>> 1----2-----3-----4--5-----6
>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
>>>
>>> (A) The Linz proof is about...
>> ... halting. You've redefined "halting" so as to permit
>> Ĥ(<Ĥ>) ⊢* Ĥ.qn
>
> The question about (1) on (2) is not what Linz is referring to.

It is when he refers to it, as he does in the annotation you keep
removing.

>> H(<Ĥ>,<Ĥ>) ⊢* H.qn
>
> My H1(P,P) proves this: H(<Ĥ>,<Ĥ>) ⊢* H.qy

So which is it, qy or qn? Were you wrong to redefine halting so that

"On that basis:
Ĥ(<Ĥ>) ⊢* Ĥ.qn
H(<Ĥ>,<Ĥ>) ⊢* H.qn"

?

--
Ben.

olcott

Sep 29, 2021, 10:11:44 PM
H(<Ĥ>,<Ĥ>) ⊢* H.qy as I just said
H(<Ĥ>,<Ĥ>) ⊢* H.qy as I just said
H(<Ĥ>,<Ĥ>) ⊢* H.qy as I just said
H(<Ĥ>,<Ĥ>) ⊢* H.qy as I just said
H(<Ĥ>,<Ĥ>) ⊢* H.qy as I just said

> "On that basis:
> Ĥ(<Ĥ>) ⊢* Ĥ.qn
> H(<Ĥ>,<Ĥ>) ⊢* H.qn"



Ben Bacarisse

Sep 29, 2021, 10:42:56 PM
But you previously also said that H(<Ĥ>,<Ĥ>) ⊢* H.qn. You can't get mad
because people believe what you write!

Now I need to know how much of the post where you said that your new
basis for halting gives H(<Ĥ>,<Ĥ>) ⊢* H.qn was /also/ wrong. All of it?
It formed what appeared to be your entire justification for having
"solved" the halting problem. Here is the whole thing:

| My current proof simply shows exactly how the exact Peter Linz H would
| correctly decide not halting on the exact Peter Linz Ĥ.
|
| This definition of halting circumvents the pathological self-reference
| error for every simulating halt decider:
|
| An input is decided to be halting only if its simulation never needs
| to be stopped by any simulating halt decider anywhere in its entire
| invocation chain.
|
| On that basis:
| Ĥ(<Ĥ>) ⊢* Ĥ.qn
| H(<Ĥ>,<Ĥ>) ⊢* H.qn

Is it just that one last letter that was wrong, or the whole thing? Are
you no longer redefining "halting"? Have you abandoned your POOH
problem?

But now I've seen a post where you seem to suggest that it's both! Too
crazy for this time of night. I think you need to drop TMs again and go
back to waffling about traces and secret x86 code.

--
Ben.

olcott

Sep 29, 2021, 10:52:46 PM
It seems a little shady that you don't cut-and-paste it with the time
and date stamp as I ALWAYS do.

> Is it just that one last letter that was wrong, or the whole thing? Are
> you no longer redefining "halting"? Have you abandoned your POOH
> problem?
>
> But now I've seen a post where you seem to suggest that it's both! Too
> crazy for this time of night. I think you need to drop TMs again and go
> back to waffling about traces and secret x86 code.
>


--

Richard Damon

Sep 30, 2021, 6:25:21 AM
If H(<H^>, <H^>) goes to H.qy then since H^ has a copy of it, it will go
to H^.qy also, since two copies of the same computation will do the same
thing. If this happens then H has said that H^(<H^>) is Halting but
H^(<H^>) will go to the infinite loop that was attached to the H^.qy state.

H is still wrong.

You can't make the copies of H at H.q0 and H^.qx do different things
without making H not a computation. DEFINITION.

FAIL.

Ben Bacarisse

Sep 30, 2021, 12:36:00 PM
It's Message-ID: <FL2dnU-wypwbXz_9...@giganews.com>

Are you accusing me of misquoting you?

Are you going to address the substantive points or not?

>> Is it just that one last letter that was wrong, or the whole thing? Are
>> you no longer redefining "halting"? Have you abandoned your POOH
>> problem?
>> But now I've seen a post where you seem to suggest that it's both! Too
>> crazy for this time of night. I think you need to drop TMs again and go
>> back to waffling about traces and secret x86 code.

--
Ben.

olcott

Sep 30, 2021, 1:08:30 PM
I am accusing you of not providing proper support for your assertions.
A cut-and-paste of my actual question that includes the time and date
stamp is the standard that I established for myself.

> Are you going to address the substantive points or not?

I already addressed the substantive point several times in related posts.

If the above is an accurate quote (which has not been properly
established) then--->H(<Ĥ>,<Ĥ>) ⊢* H.qn is a typographical error.

Ĥ(<Ĥ>) ⊢* Ĥ.qn and H(<Ĥ>,<Ĥ>) ⊢* H.qy

are entirely different computations thus both final states are correct
even though they disagree.

That you did the dishonest dodge strawman error makes it look like your
rebuttal is correct.

1----2------3----4---5----6
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

When we put the details that you left out back in, then and only then do
we have the very, very difficult basis proving that your strawman error
is a strawman error and not a legitimate rebuttal.

(A) H(<Ĥ>,<Ĥ>) ⊢* H.qy evaluates (1) applied to (2) and assumes that H
is a simulating halt decider.

(B) Ĥ(<Ĥ>) ⊢* Ĥ.qn evaluates (3) applied to (4) and (5) and assumes that
(3) is a simulating halt decider.

(A) and (B) are distinctly different computations even though when you
leave out key details they seem to be identical.

This is very very difficult because intuition really seems to confirm
that these details do not matter.

olcott

Sep 30, 2021, 6:12:11 PM
On 5/17/2021 2:23 PM, olcott wrote:
> I have no dependency on Linz, his coverage of the halting
> problem is superior to all others because it precisely
> defines the problem in terms of the precision of state
> transitions and it does so as simply as possible.
>
> My current proof simply shows exactly how the exact Peter
> Linz H would correctly decide not halting on the exact Peter
> Linz Ĥ.
>
> This definition of halting circumvents the pathological
> self-reference error for every simulating halt decider:
>
> An input is decided to be halting only if its simulation
> never needs to be stopped by any simulating halt decider
> anywhere in its entire invocation chain.
>
> On that basis:
> Ĥ(<Ĥ>) ⊢* Ĥ.qn
> H(<Ĥ>,<Ĥ>) ⊢* H.qn
>

I have addressed the substantive points in my other reply. The above
analysis is not a typo; it has been superseded by updated analysis.

Because of a critique of my work in another forum I have discovered that
the halt status that H1(P,P) reports is consistent with the behavior of
int main() { P(P); }

As long as this behavior is the same when I convert H1 and H to pure
functions, this indicates that H(<Ĥ>,<Ĥ>) ⊢* H.qn must be updated
to H(<Ĥ>,<Ĥ>) ⊢* H.qy.
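
For concreteness, here is a minimal C sketch (hypothetical code, not the
actual H or P under discussion; the stand-in decider below simply always
answers "does not halt") of the Linz-style construction in which P is built
to do the opposite of whatever H reports about P applied to its own
description:

#include <stdio.h>

/* Stand-in "decider": always answers 0 ("does not halt").  It is NOT a
   real halt decider; any fixed verdict suffices for the illustration. */
int H(void (*p)(void), void (*i)(void)) {
    (void)p; (void)i;
    return 0;
}

/* P is built from H in the Linz-hat fashion: it does the opposite of
   whatever H reports about P applied to (a description of) P. */
void P(void) {
    if (H(P, P))        /* verdict "halts"          -> loop forever */
        for (;;) { }
    /* verdict "does not halt" -> return immediately (i.e. halt)    */
}

int main(void) {
    printf("H(P,P) = %d  (0 means \"does not halt\")\n", H(P, P));
    P();   /* yet the actual computation P(P) halts */
    printf("P() halted, contradicting the verdict above.\n");
    return 0;
}

Whatever fixed verdict the stand-in gives, the construction makes the
opposite happen; that tension is what the rest of this thread argues about.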

André G. Isaak

Sep 30, 2021, 6:55:53 PM
On 2021-09-30 16:12, olcott wrote:

> Because of a critique of my work in another forum I have discovered that
> the halt status that H1(P,P) reports is consistent with the behavior of
> int main() { P(P); }

But how does this address the Linz proof then? You've derived your P
from H, not from H1. So all that matters with respect to the Linz proof
is what *H*(P, P) returns. That some other decider H1, from which P is
*not* derived, can correctly decide P(P) has no bearing on Linz's claim.
H1 cannot decide P1(P1) where P1 is derived from H1 in the same way that
P is derived from H.

And your proposed 'best of three' test won't work either, since it will
be possible to derive a BestOfThree_Hat construction from your
BestOfThree routine which your BestOfThree routine won't be able to decide.

You don't seem to grasp that Linz is *not* claiming that there are
computations whose halting status cannot be decided. He is claiming that
there cannot be a *universal* halt decider because for every purported
halt decider one can construct a TM which that specific halt decider
cannot correctly decide.

André


--
To email remove 'invalid' & replace 'gm' with well known Google mail
service.

Ben Bacarisse

Sep 30, 2021, 7:14:41 PM
Good. So you do agree that you said it?

> A cut-and-paste of my actual question that includes the time and date
> stamp is the standard that I established for myself.

I post a message ID if anyone is having trouble finding a post.

You, presumably, knew all along that you have redefined what halting
means so that H and H^ are no longer like Linz's H and H^. You,
presumably, knew that with your definition H can reject a halting
computation. After all, that's what everyone has been objecting to for
the last few years. You can't have only just noticed that this is what
people have been telling you.

> If the above is an accurate quote (which has not been properly
> established) then--->H(<Ĥ>,<Ĥ>) ⊢* H.qn is a typographical error.

This is an appallingly dishonest claim. There is no possibility it is a
typo. Why would you even try to pretend that it is? You say the same
thing again in another post

H(<Ĥ>,<Ĥ>) ⊢* H.qn
Message-ID: <M9Kdnab-FpOgUD_9...@giganews.com>

and again using words this time

H applied to input (Ĥ, Ĥ) transitions to H.qn
Message-ID: <0cudnXyibs7OuDz9...@giganews.com>

and again in words

H(<Ĥ>,<Ĥ>) must abort the simulation of its input ∴ this input <is>
correctly decided as non-halting.
Message-ID: <_LednVXB0N9FGT_9...@giganews.com>

and again using a slightly different notion here

H.q0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* H.qn
Message-ID: <2-adncuMeNUVpY78...@giganews.com>

And then there is the fact that you've been saying exactly this in other
ways for months:

"Halts(Confound_Halts, Confound_Halts) returns false."
Message-ID: <2aidnV9e3qQaOCDC...@giganews.com>

and

"The input to Halts is only finite because..."
Message-ID: <6eSdnYuLBrIuVFzC...@giganews.com>

And H(H_Hat, H_Hat) == false though H_Hat(H_Hat) halts. Post after post
explaining why H(H_Hat, H_Hat) == 0 with H_Hat(H_Hat) halting might
/seem/ wrong but is in fact correct because of how you define halting.

No, it is not a typo. You are lying. You meant what you wrote, and
people have been addressing what you now pretend is a typo for months.
The scale of the dishonesty takes my breath away.

I could explain why you are wrong to say

> Ĥ(<Ĥ>) ⊢* Ĥ.qn and H(<Ĥ>,<Ĥ>) ⊢* H.qy

but what's the point? You'll just claim that this was also a typo in a
few months time. You can keep this up forever if you have no attachment
to honesty or the truth.

--
Ben.

olcott

Sep 30, 2021, 7:22:03 PM
On 9/30/2021 5:55 PM, André G. Isaak wrote:
> On 2021-09-30 16:12, olcott wrote:
>
>> Because of a critique of my work in another forum I have discovered
>> that the halt status that H1(P,P) reports is consistent with the
>> behavior of int main() { P(P); }
>
> But how does this address the Linz proof then? You've derived your P
> from H, not from H1. So all that matter with respect to the Linz proof
> is what *H*(P, P) returns. That some other decider H1, from which P is
> *not* derived, can correctly decide P(P) has no bearing on Linz's claim.
> H1 cannot decide P1(P1) where P1 is derived from H1 in the same way that
> P is derived from H.
>

H(P,P) is correct and H1(P,P) is correct.

> And your proposed 'best of three' test won't work either, since it will
> be possible to derive a BestOfThree_Hat construction from your
> BestOfThree routine which your BestOfThree routine won't be able to decide.
>
> You don't seem to grasp that Linz is *not* claiming that there are
> computations whose halting status cannot be decided.

He is claiming that H does not exist:
The contradiction tells us that our assumption
of the existence of H, and hence the assumption
of the decidability of the halting problem,
must be false. (Linz 1990:320).

https://www.liarparadox.org/Peter_Linz_HP(Pages_315-320).pdf
Not only does the Linz H exist, it gets the correct answer, and this
answer is consistent with the behavior of Ĥ applied to ⟨Ĥ⟩

Linz, Peter 1990. An Introduction to Formal Languages and Automata.
Lexington/Toronto: D. C. Heath and Company. (318-320)

> He is claiming that
> there cannot be a *universal* halt decider because for every purported
> halt decider one can construct a TM which that specific halt decider
> cannot correctly decide.
>
> André
>
>


--

olcott

Sep 30, 2021, 7:58:28 PM
You quoted me from five months ago; my view has changed.
Here is my view now.

If you want to try to form an actual rebuttal and not just some form of
the strawman error that looks like a rebuttal to gullible fools that are
hardly paying attention, you must address the following POINT-BY-POINT.

(A) Ĥ(<Ĥ>) ⊢* Ĥ.qn
(B) Ĥ.qx(<Ĥ>,<Ĥ>) ⊢* H.qn

Your whole basis of rebuttal from the beginning has been that (B) must
be wrong because it disagrees with the actual behavior of (A).

Now that I know that H(<Ĥ>,<Ĥ>) ⊢* H.qy
I have directly contradicted the Linz conclusion.

The contradiction tells us that our assumption
of the existence of H, and hence the assumption
of the decidability of the halting problem, must
be false. (Linz:1990:320)

https://www.liarparadox.org/Peter_Linz_HP(Pages_315-320).pdf

Linz, Peter 1990. An Introduction to Formal Languages and Automata.
Lexington/Toronto: D. C. Heath and Company. (318-320)

Ĥ.qx(<Ĥ>,<Ĥ>) ⊢* H.qn is correct because the simulation of <Ĥ> applied
to <Ĥ> never reaches any final state whether or not the simulating halt
decider at Ĥ.qx aborts its simulation of this input.

You can dishonestly call this another different halting problem yet you
know damn well it is the original halting problem.

That a computation never reaches its final state when this computation
is allowed to continue unabated is the very conventional definition of
not halting.

I don't know the proper way to say this because the definition of a
computation is restricted to a sequence of steps that halts. This makes
non-halting computations a contradiction in terms.

André G. Isaak

Sep 30, 2021, 8:18:16 PM
On 2021-09-30 17:21, olcott wrote:
> On 9/30/2021 5:55 PM, André G. Isaak wrote:
>> On 2021-09-30 16:12, olcott wrote:
>>
>>> Because of a critique of my work in another forum I have discovered
>>> that the halt status that H1(P,P) reports is consistent with the
>>> behavior of int main() { P(P); }
>>
>> But how does this address the Linz proof then? You've derived your P
>> from H, not from H1. So all that matter with respect to the Linz proof
>> is what *H*(P, P) returns. That some other decider H1, from which P is
>> *not* derived, can correctly decide P(P) has no bearing on Linz's
>> claim. H1 cannot decide P1(P1) where P1 is derived from H1 in the same
>> way that P is derived from H.
>>
>
> H(P,P) is correct and H1(P,P) is correct.

So you're saying they both return true? If so, why on earth do you
*need* H1? Why bring it up at all?

>> And your proposed 'best of three' test won't work either, since it
>> will be possible to derive a BestOfThree_Hat construction from your
>> BestOfThree routine which your BestOfThree routine won't be able to
>> decide.
>>
>> You don't seem to grasp that Linz is *not* claiming that there are
>> computations whose halting status cannot be decided.
>
> He is claiming that H does not exist:

Yes, that's what I said immediately below. So why bother with this
quote? Does it have any relevance to the point I was making?

André

>    The contradiction tells us that our assumption
>    of the existence of H, and hence the assumption
>    of the decidability of the halting problem,
>    must be false.  (Linz 1990:320).
>
> https://www.liarparadox.org/Peter_Linz_HP(Pages_315-320).pdf
> Not only does the Linz H exist it gets the correct answer and this
> answer is consistent with the behavior of Ĥ applied to ⟨Ĥ⟩
>
> Linz, Peter 1990. An Introduction to Formal Languages and Automata.
> Lexington/Toronto: D. C. Heath and Company. (318-320)
>
>> He is claiming that there cannot be a *universal* halt decider because
>> for every purported halt decider one can construct a TM which that
>> specific halt decider cannot correctly decide.
>>
>> André
>>
>>
>
>


--

olcott

Sep 30, 2021, 8:27:06 PM
On 9/30/2021 7:18 PM, André G. Isaak wrote:
> On 2021-09-30 17:21, olcott wrote:
>> On 9/30/2021 5:55 PM, André G. Isaak wrote:
>>> On 2021-09-30 16:12, olcott wrote:
>>>
>>>> Because of a critique of my work in another forum I have discovered
>>>> that the halt status that H1(P,P) reports is consistent with the
>>>> behavior of int main() { P(P); }
>>>
>>> But how does this address the Linz proof then? You've derived your P
>>> from H, not from H1. So all that matter with respect to the Linz
>>> proof is what *H*(P, P) returns. That some other decider H1, from
>>> which P is *not* derived, can correctly decide P(P) has no bearing on
>>> Linz's claim. H1 cannot decide P1(P1) where P1 is derived from H1 in
>>> the same way that P is derived from H.
>>>
>>
>> H(P,P) is correct and H1(P,P) is correct.
>
> So you're saying they both return true? If so, why on earth do you
> *need* H1? Why bring it up at all?
>

H(P,P) correctly determines that its input never reaches its final state
whether or not H aborts the simulation of this input.

H1(P,P) correctly determines that its input reaches its final state.

>>> And your proposed 'best of three' test won't work either, since it
>>> will be possible to derive a BestOfThree_Hat construction from your
>>> BestOfThree routine which your BestOfThree routine won't be able to
>>> decide.
>>>
>>> You don't seem to grasp that Linz is *not* claiming that there are
>>> computations whose halting status cannot be decided.
>>
>> He is claiming that H does not exist:
>
> Yes, that's what I said immediately below. So why bother with this
> quote? Does it have any relevance to the point I was making?
>
> André
>
>>     The contradiction tells us that our assumption
>>     of the existence of H, and hence the assumption
>>     of the decidability of the halting problem,
>>     must be false.  (Linz 1990:320).
>>
>> https://www.liarparadox.org/Peter_Linz_HP(Pages_315-320).pdf
>> Not only does the Linz H exist it gets the correct answer and this
>> answer is consistent with the behavior of Ĥ applied to ⟨Ĥ⟩
>>
>> Linz, Peter 1990. An Introduction to Formal Languages and Automata.
>> Lexington/Toronto: D. C. Heath and Company. (318-320)
>>
>>> He is claiming that there cannot be a *universal* halt decider
>>> because for every purported halt decider one can construct a TM which
>>> that specific halt decider cannot correctly decide.
>>>
>>> André
>>>
>>>
>>
>>
>
>


--

André G. Isaak

Sep 30, 2021, 8:39:39 PM
On 2021-09-30 18:26, olcott wrote:
> On 9/30/2021 7:18 PM, André G. Isaak wrote:
>> On 2021-09-30 17:21, olcott wrote:
>>> On 9/30/2021 5:55 PM, André G. Isaak wrote:
>>>> On 2021-09-30 16:12, olcott wrote:
>>>>
>>>>> Because of a critique of my work in another forum I have discovered
>>>>> that the halt status that H1(P,P) reports is consistent with the
>>>>> behavior of int main() { P(P); }
>>>>
>>>> But how does this address the Linz proof then? You've derived your P
>>>> from H, not from H1. So all that matter with respect to the Linz
>>>> proof is what *H*(P, P) returns. That some other decider H1, from
>>>> which P is *not* derived, can correctly decide P(P) has no bearing
>>>> on Linz's claim. H1 cannot decide P1(P1) where P1 is derived from H1
>>>> in the same way that P is derived from H.
>>>>
>>>
>>> H(P,P) is correct and H1(P,P) is correct.
>>
>> So you're saying they both return true? If so, why on earth do you
>> *need* H1? Why bring it up at all?
>>
>
> H(P,P) correctly determines that its input never reaches its final state
> whether or not H aborts the simulation of this input.
>
> H1(P,P) correctly determines that its input reaches its final state.

They have the *same* input. Therefore they cannot both be correct unless
they give the same answer. You can't have one of them claim that P(P)
never reaches its final state and the other say that P(P) does reach its
final state *and* claim they are both correct.

That's Trump Logic.

olcott

Sep 30, 2021, 8:54:55 PM
You have to wait until I finish converting functions H1 and H to pure
functions. They do derive this result now, yet it is possible that this
result is an artifact of their requirement of static local data to
derive these results.

If it works out like preliminary analysis indicates, a pair of functions
with identical machine code and identical inputs really do have
different outputs because the input to H(P,P) calls H and the input to
H1(P,P) calls H (thus does not call H1).

André G. Isaak

Sep 30, 2021, 9:07:25 PM
Your statement is entirely incoherent. If H(P, P) and H1(P, P) give
different results then either:

(A) one is wrong.

or

(B) they are not computing the same function, in which case at most
*one* of them can actually be deciding the halting function.

You understand that the halting function (which maps computations to
halting values) is a mathematical object which exists independently of
any Turing Machine or algorithm and has a fixed mapping from
computations to {true, false}, right?

olcott

Sep 30, 2021, 9:28:09 PM
It will continue to be impossible for you to overcome your false
assumptions until you see that:

(a) H1(P,P) correctly reports that its input reaches its final state.

(b) H(P,P) correctly reports that its input NEVER reaches its final
state (whether or not it aborts the simulation of this input).

(c) Both H1 and H are pure functions and have identical machine code.

In computer programming, a pure function is a function that has the
following properties:

(1) The function return values are identical for identical arguments (no
variation with local static variables, non-local variables, mutable
reference arguments or input streams).

(2) The function application has no side effects (no mutation of local
static variables, non-local variables, mutable reference arguments or
input/output streams).

https://en.wikipedia.org/wiki/Pure_function
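
As a toy illustration of these two properties (hypothetical code, unrelated
to H or P): the first function below violates property (1) by keeping hidden
state in a static local, while the second is pure:

#include <stdio.h>

/* Violates property (1): a static local carries state between calls,
   so identical arguments can yield different results. */
int impure_counter(int x) {
    static int calls = 0;
    calls++;
    return x + calls;
}

/* Pure: identical arguments always yield the identical result, and
   there are no side effects. */
int pure_add_one(int x) {
    return x + 1;
}

int main(void) {
    printf("%d %d\n", impure_counter(5), impure_counter(5)); /* prints 6 7 */
    printf("%d %d\n", pure_add_one(5), pure_add_one(5));     /* prints 6 6 */
    return 0;
}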

Richard Damon

Sep 30, 2021, 9:35:32 PM
On 9/30/21 7:21 PM, olcott wrote:
> On 9/30/2021 5:55 PM, André G. Isaak wrote:
>> On 2021-09-30 16:12, olcott wrote:
>>
>>> Because of a critique of my work in another forum I have discovered
>>> that the halt status that H1(P,P) reports is consistent with the
>>> behavior of int main() { P(P); }
>>
>> But how does this address the Linz proof then? You've derived your P
>> from H, not from H1. So all that matter with respect to the Linz proof
>> is what *H*(P, P) returns. That some other decider H1, from which P is
>> *not* derived, can correctly decide P(P) has no bearing on Linz's
>> claim. H1 cannot decide P1(P1) where P1 is derived from H1 in the same
>> way that P is derived from H.
>>
>
> H(P,P) is correct and H1(P,P) is correct.

So P(P) both Halts and doesn't?

Since the REAL Halting problem is that the decider needs to tell the
caller what it thinks the Computation, whose representation is given as
the decider's input, will do when actually run.

The Inputs are the same, so the Machines they represent are the same, so
they will do the exact same thing.

>
>> And your proposed 'best of three' test won't work either, since it
>> will be possible to derive a BestOfThree_Hat construction from your
>> BestOfThree routine which your BestOfThree routine won't be able to
>> decide.
>>
>> You don't seem to grasp that Linz is *not* claiming that there are
>> computations whose halting status cannot be decided.
>
> He is claiming that H does not exist:
>    The contradiction tells us that our assumption
>    of the existence of H, and hence the assumption
>    of the decidability of the halting problem,
>    must be false.  (Linz 1990:320).

He claims that no H can exist that meets the requirements. And it can't.

>
> https://www.liarparadox.org/Peter_Linz_HP(Pages_315-320).pdf
> Not only does the Linz H exist it gets the correct answer and this
> answer is consistent with the behavior of Ĥ applied to ⟨Ĥ⟩

No, your claimed H says that H^(<H^>) will not halt when it does.

Only your H1, which is NOT the computation that H^ was built on, gets the
right answer, but it is not the right computation to be H in Linz.

Richard Damon

Sep 30, 2021, 9:38:30 PM
So, you are quoting the results of experiments not yet run?

You are telling the answers that programs will give that aren't yet
fully written?

That sounds like a real good use of the scientific method to me. Who
needs any actual proof? We just need to trust the guy who has been dead
wrong for the last many years.

RIGHT?

André G. Isaak

Sep 30, 2021, 9:40:40 PM
If you think I am making false assumptions, please identify those
assumptions.

> (a) H1(P,P) correctly reports that its input reaches its final state.
>
> (b) H(P,P) correctly reports that its input NEVER reaches its final
> state (whether or not it aborts the simulation of this input).
>
> (c) Both H1 and H are pure functions and have identical machine code.
>
> In computer programming, a pure function is a function that has the
> following properties:

There's no need for you to keep providing wikipedia definitions for
things. Everyone here knows what these terms mean (well, everyone other
than you).

Ben Bacarisse

Sep 30, 2021, 9:50:32 PM
So you were just lying when you tried to pretend it was a typographical
error! And when that did not work (because you'd said the same thing
time and time again) you decide to say you've "changed your mind" with
no mention of the attempted lie about it being a typo. Your lack of
commitment to truth is appalling.

> Here is my view now.

I wonder what your view was when you claimed to have a TM H that
correctly decided (H^, H^) in 2018? Were you wrong then as well? That
was nearly three years ago. No wonder you would not say when I asked if
that H accepted or rejected the input in question. You've adamantly
defended both possible answers!

> If you want to try to form an actual rebuttal

You mean the arguments that finally made you see you were wrong? The
rebuttals that caused you to "change your mind"?

> (A) Ĥ(<Ĥ>) ⊢* Ĥ.qn
> (B) Ĥ.qx(<Ĥ>,<Ĥ>) ⊢* H.qn
>
> Your whole basis of rebuttal from the beginning has been that (B) must
> be wrong because it disagrees with the actual behavior of (A).
>
> Now that I know that H(<Ĥ>,<Ĥ>) ⊢* H.qy

How can you know that? You don't have a TM whose behaviour you can
observe. Linz's H denotes a whole class of TMs. Do they all do as you
claim here? Remember: your H is Linz's H.

> I have directly contradicted the Linz conclusion.

Where? Certainly not here.

> The contradiction tells us that our assumption
> of the existence of H, and hence the assumption
> of the decidability of the halting problem, must
> be false. (Linz:1990:320)
>
> https://www.liarparadox.org/Peter_Linz_HP(Pages_315-320).pdf
>
> Linz, Peter 1990. An Introduction to Formal Languages and
> Automata. Lexington/Toronto: D. C. Heath and Company. (318-320)
>
> Ĥ.qx(<Ĥ>,<Ĥ>) ⊢* H.qn is correct because the simulation of <Ĥ> applied
> to <Ĥ> never reaches any final state whether or not the simulating
> halt decider at Ĥ.qx aborts its simulation of this input.

For any Ĥ that behaves as you say, its corresponding H will transition
to H's equivalent state H.qn. But apparently you somehow know that
H(<Ĥ>,<Ĥ>) ⊢* H.qy. Not possible.

> You can dishonestly call this another different halting problem yet
> you know damn well it is the original halting problem.

Now that you've (almost) admitted that you were wrong (for months) to
redefine halting, it would indeed be dishonest for me to use the same
objection. You are wrong now for the much simpler reason given above.

Now that you are not simply declaring that H(H_Hat, H_Hat) == false is
the "correct" answer for the halting computation H_Hat(H_Hat), will you
just throw away all you junk code? Will you apologise for so much
wasted time?

--
Ben.

olcott

Sep 30, 2021, 10:07:50 PM
Because you did not provide the time and date stamp.
I am done with you for the moment you do not address the points that I
raise.

Ben Bacarisse

Sep 30, 2021, 10:22:04 PM
>>> You quoted me from five months ago my view has changed.
>>
>> So you were just lying when you tried to pretend it was as typographical
>> error!
>
> Because you did not provide the time and date stamp

Liar: "If the above is an accurate quote (which has not been properly
established) then--->H(<Ĥ>,<Ĥ>) ⊢* H.qn is a typographical error". It
has been established that you said it so you /were/ claiming it was a
typo. That failed when you saw you'd said it again and again and again
in multiple forms. Quick, change the story!

>>> Now that I know that H(<Ĥ>,<Ĥ>) ⊢* H.qy
<snip>
>>> Ĥ.qx(<Ĥ>,<Ĥ>) ⊢* H.qn is correct because the simulation of <Ĥ> applied
>>> to <Ĥ> never reaches any final state whether or not the simulating
>>> halt decider at Ĥ.qx aborts its simulation of this input.
>>
>> For any Ĥ that behaves as you say, its corresponding H will transition
>> to H's equivalent state H.qn. But apparently you somehow know that
>> H(<Ĥ>,<Ĥ>) ⊢* H.qy. Not possible.
<snip>

> I am done with you for the moment you do not address the points that I
> raise.

As I have always said, not replying to me is your best option as well as
the one I prefer. But right there is the latest reason you are wrong,
directly placed after the incorrect point you raised. Fingers in your
ears, and... la, la, la, la, la!

--
Ben.

Mike Terry

Oct 1, 2021, 10:12:20 AM
Or some other form of cheating such as a function taking its own
address, and varying its processing according to that address.

(I suggested this was all that was going on a couple of weeks ago, but
you neither confirmed nor denied that. Almost as though you actually
don't understand /yourself/ what's going on. (Surely that cannot be...!))

>
> If it works out like preliminary analysis indicates a pair of functions
> with identical machine code and identical inputs really do have
> different output because the input to H(P,P) calls H and the input to
> H1(P,P) calls H (thus does not call H1).

Of course an /unrestricted/ x86 program can do that - all it has to do
is look at its own address and behave differently according to that
address. (Like your H/H1 routines.) Or cheat with global variables etc..
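
For example (hypothetical code, not olcott's routines), a function can read
its own address and let its result depend on it; two byte-for-byte identical
copies loaded at different addresses would then behave differently on the
same argument, so the address acts as a hidden input:

#include <stdio.h>
#include <stdint.h>

/* The result depends on where this function happens to be located:
   its own load address is an extra, hidden input to the computation. */
int quirky(int x) {
    uintptr_t self = (uintptr_t)&quirky;   /* take our own address   */
    return ((self >> 4) & 1) ? x : -x;     /* branch on that address */
}

int main(void) {
    printf("quirky(5) = %d, own-address bits = 0x%lx\n",
           quirky(5), (unsigned long)(uintptr_t)&quirky);
    return 0;
}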

If you are concerned with saying things of relevance to TMs and Linz's
proof, you need to limit your C code to doing things that directly
correspond with what TMs can do. Otherwise the relationship between
your computing model and TMs will become distorted and the conclusions
you want to draw from your program's behaviour to TM behaviour may
simply not apply.

I know that last paragraph is WAAAAAY over your head (computation
models? too abstract!) so you will just ignore it. Or perhaps spray
your "Objection Begone!" Turing-equivalence spray at it, which never
works for you. :)

Anyway, you need to work out /why/ copying an algorithm can produce
different behaviour in the copy, and STOP DOING IT!! (Just like you
seem to have recognised that non-pure functions are to be avoided. I'm
not sure whether using your own machine code address breaks the "pure
function" rules per se, but the effect on your efforts is the same so it
doesn't really matter.)


Mike.

olcott

Oct 1, 2021, 10:36:24 AM
I don't reply to nonsense.


>>
>> If it works out like preliminary analysis indicates a pair of
>> functions with identical machine code and identical inputs really do
>> have different output because the input to H(P,P) calls H and the
>> input to H1(P,P) calls H (thus does not call H1).
>
> Of course an /unrestricted/ x86 program can do that - all it has to do
> is look at its own address and behave differently according to that
> address.  (Like your H/H1 routines.)  Or cheat with global variables etc..
>

The halting theorem counter-examples present infinitely nested
simulation (non-halting) behavior to every simulating halt decider.

When the input P to a simulating halt decider H1 does not call this same
decider and instead calls another different decider H, then infinitely
nested simulation is not specified for H1(P,P) as it is for H(P,P).

> If you are concerned with saying things of relevance to TMs and Linz's
> proof, you need to limit your C code to doing things that directly
> correspond with what TMs can do.

Yes, that was my biggest challenge. I didn't even know the criteria for
that even though Kaz gave me these criteria months ago. After carefully
checking and double checking, these criteria seem sufficient:

In computer programming, a pure function is a function that has the
following properties:[1][2]

(1) The function return values are identical for identical arguments (no
variation with local static variables, non-local variables, mutable
reference arguments or input streams).

(2) The function application has no side effects (no mutation of local
static variables, non-local variables, mutable reference arguments or
input/output streams).
https://en.wikipedia.org/wiki/Pure_function

> Otherwise the relationship between
> your computing model and TMs will become distorted and the conclusions
> you want to draw from your program's behaviour to TM behaviour may
> simply not apply.
>
> I know that last paragraph is WAAAAAY over your head (computation
> models?  too abstract!) so you will just ignore it.  Or perhaps spray
> your "Objection Begone!" Turing-equivalence spray at it, which never
> works for you. :)
>
> Anyway, you need to work out /why/ copying an algorithm can produce
> different behaviour in the copy, and STOP DOING IT!!  (Just like you
> seem to have recognised that non-pure functions are to be avoided.  I'm
> not sure whether using your own machine code address breaks the "pure
> function" rules per se, but the effect on your efforts is the same so it
> doesn't really matter.)
>
>
> Mike.

My redesign meets all of the above pure function criteria.

It really is the case that a pair of pure functions with identical
machine code will have different behavior on the same input when this
input calls one of these functions in pathological self-reference and
not the other. Identical functions, identical input, differing
relationships between the functions and their input.

Richard Damon

Oct 1, 2021, 11:44:36 AM
No, it doesn't, only to Simulating Halt Deciders that can't handle that
problem and thus can't answer those cases.

IF the Halt Decider can actually answer the equivalent of H(P,P), then
P(P) does NOT have infinitely nested simulation behavior. PERIOD.

> When the input P to a simulating halt decider H1 does not call this same
> decider and instead calls another different decider H, then infinitely
> nested simulation is not specified for H1(P,P) as it is for H(P,P).

Which means that H1 is a DIFFERENT computation than H, and thus its
answer is irrelevant to the Halting Problem Theory.

Linz's proof only claims that the Halt Decider that his H^ was built on
would get that decision wrong, or conversely, that given any Halt
Decider, he describes how to make a Computation that it won't be able to
get right.

>
>> If you are concerned with saying things of relevance to TMs and Linz's
>> proof, you need to limit your C code to doing things that directly
>> correspond with what TMs can do. 
>
> Yes that was my biggest challenge. I didn't even know the criteria for
> that even though Kaz gave me this criteria months ago. After carefully
> checking and double checking this criteria seems sufficient:

Yes, as you say, you don't know what you are talking about, and you are
making claims about things being 'obvious' when you don't even know what
they are.

>
> In computer programming, a pure function is a function that has the
> following properties:[1][2]>
> (1) The function return values are identical for identical arguments (no
> variation with local static variables, non-local variables, mutable
> reference arguments or input streams).
>
> (2) The function application has no side effects (no mutation of local
> static variables, non-local variables, mutable reference arguments or
> input/output streams).
> https://en.wikipedia.org/wiki/Pure_function

This comes close, but I am not positive it is truly exhaustive: if
it allows for two copies of the function to return different values
because they have different 'addresses', then it says that the address the
function is run at becomes an 'input' to the function. This makes an
individual copy of the function a pure computation, as it always will
have the same address, but it doesn't make it the direct equivalent of a
Turing Machine, as the 'algorithm' of that machine has an extra 'hidden'
input, the address the code was run at.

>
>> Otherwise the relationship between your computing model and TMs will
>> become distorted and the conclusions you want to draw from your
>> program's behaviour to TM behaviour may simply not apply.
>>
>> I know that last paragraph is WAAAAAY over your head (computation
>> models?  too abstract!) so you will just ignore it.  Or perhaps spray
>> your "Objection Begone!" Turing-equivalence spray at it, which never
>> works for you. :)
>>
>> Anyway, you need to work out /why/ copying an algorithm can produce
>> different behaviour in the copy, and STOP DOING IT!!  (Just like you
>> seem to have recognised that non-pure functions are to be avoided. 
>> I'm not sure whether using your own machine code address breaks the
>> "pure function" rules per se, but the effect on your efforts is the
>> same so it doesn't really matter.)
>>
>>
>> Mike.
>
> My redesign meets all of the above pure function criteria.

But, if it uses the trick I described, it means that the function is not
a computation in the sense that it can be copied and be the same
computation. This breaks Turing Machine Equivalence.

>
> It really is the case that a pair of pure functions with identical
> machine code will have different behavior on the same input when this
> input calls one of these functions in pathological self-reference and
> not the other. Identical functions, identical input, differing
> relationships between the functions and their input.
>

Right, and as I said, if the 'PC' code uses operations that depend on
the address the code is run at, then the 'Computation' that the code
represents contains a 'hidden' input of what address the computation is
run at.

When converted to a Turing Machine, or a real Computation, that 'hidden'
input needs to become an explicit input parameter.

H(addr, P, I) is NOT the proper form for a Halt Decider; it must be just
H(P,I).
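
In other words (a hypothetical signature sketch with toy verdicts, not
anyone's actual decider), the hidden dependence has to surface as an
explicit parameter, and the resulting shape is no longer what the halting
problem asks about:

#include <stdio.h>
#include <stdint.h>

typedef void (*prog)(void);

static void example_program(void) { }   /* placeholder input program */

/* If the answer depends on where the decider's own code sits, that
   address is an input and must be passed explicitly ...             */
static int H_with_hidden_input(uintptr_t own_address, prog p, prog i) {
    (void)p; (void)i;
    return (int)((own_address >> 4) & 1);   /* toy verdict only */
}

/* ... whereas the halting problem asks about a decider of this shape
   only, with no inputs beyond the program and its input.            */
static int H_proper(prog p, prog i) {
    (void)p; (void)i;
    return 0;                                /* toy verdict only */
}

int main(void) {
    printf("%d %d\n",
           H_with_hidden_input((uintptr_t)&H_with_hidden_input,
                               example_program, example_program),
           H_proper(example_program, example_program));
    return 0;
}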

I suspect this 'breaks' your argument. H and H1 are DIFFERENT
Comutations, and thus you can't build H^ off of H and then use H1 to get
your needed correct answer.

wij

Oct 1, 2021, 12:12:30 PM
On Wednesday, 29 September 2021 at 11:20:35 UTC+8, olcott wrote:
> The halting theorem counter-examples present infinitely nested
> simulation (non-halting) behavior to every simulating halt decider.
>
> -----1-----------2--3----------
> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
>
> If the simulation of the 2nd ⟨Ĥ⟩ applied to
> the 3rd ⟨Ĥ⟩ at Ĥ.qx reaches its final state.
>
>
> -----1-----------2--3----------
> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
>
> If the simulation of the 2nd ⟨Ĥ⟩ applied to
> the 3rd ⟨Ĥ⟩ at Ĥ.qx never reaches its final state.
>
>
> https://www.researchgate.net/publication/351947980_Halting_problem_undecidability_and_infinitely_nested_simulation

Apparently, you are trying to solve an undecidable problem:
GUR https://groups.google.com/g/comp.theory/c/_tbCYyMox9M

Your proof is essentially several lines of "C" program, nothing to do with
a TM or the traditional Halting Problem.

The key basis of your refutation is a refutation of your own conception that
no one cares about. You were denying some part of yourself, not the HP.

I find the key issue is that your relevant knowledge is very weak: you read
text books but do not do the exercises.
Do the exercises in the text books on C, Assembly, or Computation theory that you read.
That would take much less time than 7 years (rumor says that you have been
working on those several lines of C code for probably more than 7 years!).

Andy Walker

Oct 1, 2021, 12:27:09 PM
On 01/10/2021 15:12, Mike Terry wrote [to PO, of course]:
> Of course an /unrestricted/ x86 program can do that [...]
> If you are concerned with saying things of relevance to TMs and
> Linz's proof, you need to limit your C code to doing things that
> directly correspond with what TMs can do. [...]

I think it is misleading to talk about what TMs can do as
though they are interestingly different from what x86 programs or
any real computer can do. An x86 computer /is/ an FSM with extra
storage; in other words, a TM. We tend to write rather simple
TMs to illustrate points, but there is no reason why it should
not have 2^N states, where N is rather large [the number of bits
in your computer]. "Taking the address of a function" is not
"difficult" with a TM, just rather tedious.

There is no reason why someone [I'm not volunteering!]
should not write a C compiler the target of which is recognisably
a TM. It's just going to be rather complicated. [Probably easier,
AAMOF, to write a microcode as a TM, and describe real computers
in terms of that microcode. In the 1930s, that would have seemed
like magic, but it is how many/most modern computers work.]

That's why I think people are barking up the wrong tree
when complaining that PO writes C code that is "not a function".
It doesn't matter. It just means that the "hat" construction
is more complicated than PO claims, inc all the states [etc] of
the FSM, inc "hidden" variables. The real errors lie elsewhere.

--
Andy Walker, Nottingham.
Andy's music pages: www.cuboid.me.uk/andy/Music
Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Balakirev

Jeff Barnett

Oct 1, 2021, 12:45:20 PM
On 10/1/2021 8:36 AM, olcott wrote:

>
> I don't reply to nonsense.
>

You do very frequently. You have written messages "in reply to" your own
messages more than any other (sub)human alive today or in the past, for
that matter. Your nonsense and repetitive behavior reminds me of a
neurotic chimpanzee at our local zoo. It hurts to watch. The Guinness
Book of World Records is holding a place for you in this regard, as well
as several other categories. Be proud son/daughter; be proud.
--
Jeff Barnett

olcott

Oct 1, 2021, 12:46:20 PM
It may superficially seem that way.
This is the essence of the basis of my refutation:

The halting theorem counter-examples present infinitely nested
simulation (non-halting) behavior to every simulating halt decider.

I created the x86utm operating system so that every single detail of
theory of computation problems can be analyzed at the much higher level
of abstraction of the C/x86 languages. Without this much higher level of
abstraction, key elements of the HP proof are essentially validated by
hand-waving.

https://www.researchgate.net/publication/351947980_Halting_problem_undecidability_and_infinitely_nested_simulation


A key element that had taken me by complete surprise is that some things
that are C/x86 computable are not TM computable. I had previously
thought that the Church-Turing thesis confirmed that any and all things
that any physically existing computer can do a TM can compute.



https://en.wikipedia.org/wiki/Computable_function#Characteristics_of_computable_functions

It seems that no aspect of the definition of [computable function]
requires it to be a [pure function]; nonetheless most expert opinion
seems to indicate that this requirement exists. I am on the verge of
redoing my work such that the simulating partial halt deciders H1
and H are pure functions and take finite string parameters. To conform
my C/x86 code to the Linz Ĥ template, P will copy its input.


In computer programming, a pure function is a function that has the
following properties:

(1) The function return values are identical for identical arguments (no
variation with local static variables, non-local variables, mutable
reference arguments or input streams).

(2) The function application has no side effects (no mutation of local
static variables, non-local variables, mutable reference arguments or
input/output streams).

https://en.wikipedia.org/wiki/Pure_function
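
As a concrete illustration of these two properties (a minimal sketch with
made-up names, not taken from the x86utm code), the first function below is
pure and the second is not, because it keeps state in a local static variable:

// add() is pure: identical return values for identical arguments,
// and no side effects.
int add(int x, int y) {
    return x + y;
}

// counted_add() is NOT pure: it mutates a local static variable (a side
// effect) and its return value varies for identical arguments.
int counted_add(int x, int y) {
    static int calls = 0;   // hidden state that survives between calls
    calls++;
    return x + y + calls;
}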



Jeff Barnett

unread,
Oct 1, 2021, 12:55:50 PM10/1/21
to
I think what people are trying to say and may be saying badly is that
the set of things that can influence an execution must all be considered
part of that execution. Trivial but important. Said another way, apropos
of the current conversation: when you propose something to play
the role of the Linz H, the definition of that something must include
everything that influences its execution. PO doesn't do that. 'It' also
keeps the piece he calls H from being function- or computation-like. One
of the "rules" of software reasoning is that one must carefully delimit
the boundaries of what one is talking about. This basic guideline has
been ignored into oblivion.
--
Jeff Barnett

olcott

unread,
Oct 1, 2021, 1:04:52 PM10/1/21
to
It is very difficult to confirm that a [computable function]
https://en.wikipedia.org/wiki/Computable_function#Characteristics_of_computable_functions

must be a [pure function]
https://en.wikipedia.org/wiki/Pure_function

when no official source says this.

Nonetheless my partial halt deciders {H1, H} are being transformed
into [pure functions] that take finite string inputs. Also my C/x86
equivalent P of the Linz Ĥ will copy its input so that it more closely
conforms to the Linz Ĥ.

Richard Damon

unread,
Oct 1, 2021, 1:11:32 PM10/1/21
to
The key difference between a Turing Machine and a 'conventional' program
is that Turing Machines are ALWAYS looked at as a 'full operation', and
any sub-piece looked at, is itself a Turing Machine.

When we map that to a conventional computer, the equivalent is the
WHOLE machine and programming in it as a whole.

When we break apart the programming into sub-pieces, because the
computing model is 'more powerful' in some aspects, we can talk about
sub-pieces of the whole system that aren't themselves equivalent to a
whole system.

We have functions that can 'save' data between calls, or pass data
behind the back of the outside world. We have functions that might invoke
pieces that are outside of themselves. All of these are things that a
Turing Machine, or a 'Complete' computer, can't do.

This is the difference between talking about the Mathematical concept of
a 'Computation' and Computer Programming.

Computation Theory and the like deal with the capability of a 'Full
Machine' or a sub-part that acts like another 'Full Machine'.

Computer Programming and Software Engineering focuses more on the
sub-pieces, and often sub-pieces that are the equivalent of a 'Full
Machine' on their own.

One key is that ANY piece of a Computer Program, when you make sure the
'code' consists of ALL the code that is run when we run that piece of
code, and the input consists of ALL the information that piece of code
uses to determine its output, will follow the rules of Computations and
there will be an 'equivalent' Turing Machine.

The issue is that often, when working in the fields of Programming or
Software Engineering, we don't limit ourselves to that view. We think
of a 'sub-routine' that might depend on code outside of itself as part
of its execution, or ignore some of the information that a given
invocation might depend on. Those models do NOT correspond to what
Computation Theory describes as a Computation.

Yes, there are LOTS of interesting things in these non-computation
models, and that is an important part of THOSE theories. They just are
NOT part of Computation Theory.

One big reason Computation Theory focuses on Turing Machines is that by
their structure, what they do is clearly what the Theory wants to talk
about. We don't need to worry if the code uses something outside its
defined bounds, as the limitations of how a Turing Machine is described
make it impossible to create such a condition.

On the other hand, the 'higher' computation models, like the
conventional computer, can easily be made to do things that break these
definitions, so you need to take care to stay within the bounds.

PO has this basic problem, the thing he wants to do CAN'T be done with
Turing Machines, as the Halting Problem Proofs clearly show. He can't
accept that this proof is valid. He thus is trying to create a program
in a conventional computer to show that something like what the Halting
Problem Proof says is impossible is possible on that platform.

The key is that since a Turing Machine can't do it, neither can a
conventional program that actually follows the rules of being the
equivalent of a Computation. His basic goal is to try to find a way to
make a program that LOOKS like it is a Computation, but actually isn't,
that does what is impossible to do as a Computation.

Either he needs to come up with a theory shattering example that shows
that there IS an actual Computation that an x86 can do that doesn't have
a Turing Machine Equivalent (and thus disprove Church-Turing) or he
needs to be deceptive and hide the non-computational aspect of his program.

All his work seems to be about the latter, and this is hampered by the
fact that he seems to refuse to actually study the theory to any depth
to actually understand what he is trying to do, so he keeps making
primitive errors.

This comes out in his attempts to get approval to use a 'wrong
definition' as the equivalent of the right one, such as treating the
Computer Science term 'Pure Function' as fully the same as a 'Computation'.

The key difference here is that the Computer Scientist is normally only
concerned about a single copy of the function always giving the same
input to output mapping, while the Computation Theorist wants ALL copies
of the function to implement the same input to output mapping.

Richard Damon

unread,
Oct 1, 2021, 1:17:19 PM10/1/21
to
Because they are different concepts and don't actually map directly into
each other.

A Computable Function will tend to naturally be a Pure Function, but it
is possible to make a Computable Function out of limited forms of
non-Pure Functions (for example, caching of results to avoid recomputing
already computed results).
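
For example (a minimal sketch with made-up names, not anyone's actual code),
a function that caches its previous result is not a pure function by the
definition being discussed, yet it still computes one fixed input-to-output
mapping in every copy:

// Not pure (it mutates local static variables), but still a correct
// implementation of the computable function n -> n*n: every call and
// every copy returns the same value for the same argument.
unsigned long long square(unsigned int n) {
    static unsigned int last_n;          // cached argument
    static unsigned long long last_r;    // cached result
    static int have_cache = 0;
    if (have_cache && n == last_n)
        return last_r;                   // reuse the cached result
    last_n = n;
    last_r = (unsigned long long)n * n;  // recompute and refill the cache
    have_cache = 1;
    return last_r;
}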

It is also possible for a Pure Function to not be a Computable Function
if the pure function depends in some way on the address that it is being
run from. The concept of Pure Functions deals with a single instance of
it always mapping input to output, while a Computable Function deals
with a UNIVERSAL mapping of input to output of ALL copies of it.

Pure Functions are implementations in physical hardware.

Computable Functions are mathematical abstractions.
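
A minimal sketch of that last point (an invented example, not the code under
discussion): each individual copy of the function below returns the same
value on every call, so it passes the programmer's 'pure function' test, yet
its value depends on where the copy lives in memory, so two copies of the
same logic do not implement one common input-to-output mapping:

#include <stdio.h>

// Each copy of this "return my own address" logic gives the same answer
// every time it is called, but a second copy of the same logic placed at
// a different address gives a different answer: the address is a hidden
// input, so this is not a single well-defined Computation.
void *where_am_i(void) {
    return (void *)&where_am_i;
}

void *where_am_i_copy(void) {        // same logic, different location
    return (void *)&where_am_i_copy;
}

int main(void) {
    printf("%p\n%p\n", where_am_i(), where_am_i_copy());
    return 0;
}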

olcott

unread,
Oct 1, 2021, 1:19:07 PM10/1/21
to
I always glance at your posts to see if they are drivel or not.
This is the first one that is not drivel.
See my reply to Jeff on this.

Richard Damon

unread,
Oct 1, 2021, 1:19:41 PM10/1/21
to
WRONG. You just ignore when people point out your error here.

>
> I created the x86utm operating system so that every single detail of
> theory of computation problems can be analyzed at the much higher level
> of abstraction of the C/x86 languages. Without this much higher level
> abstraction key elements of the HP proof are essentially validated by
> hand-waving.
>
> https://www.researchgate.net/publication/351947980_Halting_problem_undecidability_and_infinitely_nested_simulation
>
>
> A key element that had taken me by complete surprise is that some things
> that are C/x86 computable are not TM computable. I had previously
> thought that the Church-Turing thesis confirmed that any and all things
> that any physically existing computer can do a TM can compute.
>
>

You are just using the wrong definition of 'Computable'.

>
> https://en.wikipedia.org/wiki/Computable_function#Characteristics_of_computable_functions
>
>
> It seems that no aspect of the definition of [computable function]
> requires it to be a [pure function], none-the-less most expert opinion
> seems to indicate that this requirement exists. I am on the verge of
> reproducing my work such that the simulating partial halt deciders H1
> and H are pure functions and take finite string parameters. To conform
> my C/x86 code to the Linz Ĥ template P will copy its input.
>
>
> In computer programming, a pure function is a function that has the
> following properties:
>
> (1) The function return values are identical for identical arguments (no
> variation with local static variables, non-local variables, mutable
> reference arguments or input streams).
>
> (2) The function application has no side effects (no mutation of local
> static variables, non-local variables, mutable reference arguments or
> input/output streams).
>
> https://en.wikipedia.org/wiki/Pure_function

And as I just explained elsewhere, Pure Function is NOT an identical
concept to a Computation.

A Pure Function might not meet ALL the requirements of a Computation.

Richard Damon

unread,
Oct 1, 2021, 1:22:36 PM10/1/21
to
I will admit that I don't always hit the mark of explaining things in a
way that your disturbed mind can understand.

My comments have NOT been 'drivel' (at least for the most part, I will
admit to getting at times a bit snarky).

They just don't make sense to you with your 'filter' on reality.

See my response to your response to him.

olcott

unread,
Oct 1, 2021, 1:30:22 PM10/1/21
to
Some of the stuff that you say is well reasoned and can be confirmed.
Some of it is pure bullshit, simply talking out of your ass.

The key pure bullshit item was either you or André that simply rejected
the whole idea of a simulating halt decider out-of-hand.

This is the key basis of my whole proof:
The halting theorem counter-examples present infinitely nested
simulation (non-halting) behavior to every simulating halt decider.

How a simulating halt decider determines that its input is an instance
of infinitely nested simulation is merely an implementation detail.

As long as a simulating halt decider can determine this by one means or
another, then the conventional halting theorem proofs have been refuted.

olcott

unread,
Oct 1, 2021, 1:41:59 PM10/1/21
to
People merely point out their own confusion with what I am saying.

1----2-----3-----4--5-----6
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

Ben perpetually pointed out his confusion between (3) applied to (4) (5)
and (1) applied to (2) until I numbered the elements making it
impossible to misunderstand.

The input to Ĥ.qx(⟨Ĥ⟩, ⟨Ĥ⟩) never reaches its final state whether or not
the simulating halt decider at Ĥ.qx aborts its simulation.
This conclusively proves that Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn is correct.

All of the rebuttals were mere confused double-talk that never directly
addressed any of these points.
Since you don't actually care about the truth and only seek to win an
argument I am sure that you will rule that whatever I come up with will
be ruled as insufficient.

olcott

unread,
Oct 1, 2021, 1:43:59 PM10/1/21
to
Whenever you reject anything out-of-hand without any supporting
reasoning whatsoever I just skip the whole rest of the post. Most of
your posts are like this.

André G. Isaak

unread,
Oct 1, 2021, 2:26:12 PM10/1/21
to
On 2021-10-01 10:46, olcott wrote:

> A key element that had taken me by complete surprise is that some things
> that are C/x86 computable are not TM computable. I had previously
> thought that the Church-Turing thesis confirmed that any and all things
> that any physically existing computer can do a TM can compute.

As Andy Walker already correctly pointed out this isn't actually true.

What is true, though, is that the formal parameters given to a C
function do not necessarily correspond to the input string of the
corresponding Turing Machine, nor does the return value of that function
necessarily correspond to the 'output' of the corresponding Turing Machine.

When you declare a C function as follows:

int foo(someType x, someOtherType y) { ... }

x and y are not necessarily the only inputs to foo -- they are merely
the input values which are passed on the stack. Any information
retrieved from global or static variables etc. is also part of the
function's input; it's just not explicitly declared as such.

Similarly, while the above function returns an int, that may or may
not be the only information it returns. In many functions the most
important information might be returned as part of a struct to which it
was passed a pointer as one of its formal parameters and the int
returned might simply indicate success or failure.

As a concrete example, consider the following C function:

typedef struct Record {
    int a;
    int b;
    int c;
} Record;

int someGlobal;

int foo(int x, int y, Record *z) {
    /* uses x, y, z->a, z->c and someGlobal; z->b is never read */
    return (x * y + z->a - z->c) * someGlobal;
}

The C function has three formal parameters consisting of two integers and an address.

The corresponding Turing Machine would have a string representing five
integers as its input. Those five integers would correspond to

x, y, z.a, z.c, someGlobal

That is, the input string must include all data which is actually to be
used by the computation. Since z.b is not accessed it needn't be
considered as part of the input. When pointers are used, it will
generally be the data which is pointed to rather than the address itself
which counts as part of the input. If the actual value of the address
plays a critical role in your computation (for example, determining if
two functions are the same by virtue of having the same address), then
that address would constitute a separate piece of information which must
be part of the input string.
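
One way to make that explicit (my own reworking of the example above; the
name foo_explicit is made up for illustration) is to pass everything foo
actually consults as a declared parameter, so that the parameter list
matches the corresponding Turing Machine's input string:

/* The same computation as foo, but with every value it actually uses
   passed explicitly -- mirroring a TM whose input string encodes
   x, y, z.a, z.c and someGlobal (z.b is never read, so it is omitted). */
int foo_explicit(int x, int y, int z_a, int z_c, int some_global) {
    return (x * y + z_a - z_c) * some_global;
}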

So the difference between C and TMs is really that in C, code is
expressed in a way which doesn't make clear what the actual input to any
given function is since declarations only explicitly show data that is
communicated in particular ways. In some cases, the data passed on the
stack may correspond to the actual input to the function, but in many
cases it will only represent a subset of the input.

When people state that your H or P are not implementable as Turing
Machines, what's really intended is that your H and P are not
implementable as Turing Machines of the types required by the Linz
proof, i.e. Turing Machines whose input consists of a string consisting
only of a Turing Machine description and a string representing the input
to that machine.

The Halting problem constrains your C function to only making use of
those two pieces of input.

olcott

unread,
Oct 1, 2021, 2:42:18 PM10/1/21
to
Let me know if the following does not fully address all of your points:

In my update the inputs are finite strings, P copies its input and both
H1 and H are pure functions according to the following criteria:

André G. Isaak

unread,
Oct 1, 2021, 3:07:08 PM10/1/21
to
On 2021-10-01 12:42, olcott wrote:

> Let me know if the following does not fully address all of your points:


It's pretty much impossible to evaluate what you're doing without seeing
actual code.

But the main failing of your current model isn't merely that you've got
sources of input which aren't declared as formal parameters; it's that
your alleged 'Turing Machines' are not self-contained. They have
overlapping code which means you're really only dealing with a single
Machine, not two separate ones.

Turing Machines, as was pointed out already by Richard, are entirely
complete, self-contained objects. They have no operating system. They
make no reference to instructions which are not part of themselves.

When writing in C, different routines can have overlapping code. It's
fine to do this if it's done purely as a matter of convenience, but if
you wanted to translate different C functions into actual Turing
Machines, each machine would have to contain its own copy of all of the
shared code.

In your approach, however, the sharing of code is *not* being done
simply as a matter of convenience. You critically rely on the fact that
code of your H is shared by your P since your H checks the address of
the call inside P to determine that it is calling 'itself'. That means
your H and P cannot be construed as distinct computations but as part of
a single, intertwined computation. If you want to claim your H and P
correspond to Linz's H and H_Hat you simply cannot do that. They must
each be capable of operating entirely independently of one another.

> In my update the inputs are finite strings, P copies its input and both
> H1 and H are pure functions according to the following criteria:
>
> In computer programming, a pure function is a function that has the
> following properties:
>
> (1) The function return values are identical for identical arguments (no
> variation with local static variables, non-local variables, mutable
> reference arguments or input streams).
>
> (2) The function application has no side effects (no mutation of local
> static variables, non-local variables, mutable reference arguments or
> input/output streams).
>
> https://en.wikipedia.org/wiki/Pure_function

You're not going to learn computational theory from reading Wikipedia.
Why not actually study Linz? That means not just looking at chapter
twelve, but also *carefully* going through chapters 1-11 first.

olcott

unread,
Oct 1, 2021, 3:18:51 PM10/1/21
to
On 10/1/2021 2:07 PM, André G. Isaak wrote:
When a halt decider is evaluating whether or not its input halts on its
input by simulating this input, then we have at least two distinctly
different computations.

>> In my update the inputs are finite strings, P copies its input and
>> both H1 and H are pure functions according to the following criteria:
>>
>> In computer programming, a pure function is a function that has the
>> following properties:
>>
>> (1) The function return values are identical for identical arguments
>> (no variation with local static variables, non-local variables,
>> mutable reference arguments or input streams).
>>
>> (2) The function application has no side effects (no mutation of local
>> static variables, non-local variables, mutable reference arguments or
>> input/output streams).
>>
>> https://en.wikipedia.org/wiki/Pure_function
>
> You're not going to learn computational theory from reading Wikipedia.
> Why not actually study Linz? That means not just looking at chapter
> twelve, but also *carefully* going through chapters 1-11 first.
>
> André
>


--

Andy Walker

unread,
Oct 1, 2021, 4:09:48 PM10/1/21
to
On 01/10/2021 17:55, Jeff Barnett wrote:
[I wrote:]
>>      That's why I think people are barking up the wrong tree
>> when complaining that PO writes C code that is "not a function".
>> It doesn't matter.  It just means that the "hat" construction
>> is more complicated than PO claims, inc all the states [etc] of
>> the FSM, inc "hidden" variables.  The real errors lie elsewhere.
> I think what people are trying to say and may be saying badly is that
> the set of things that can influence an execution must all be
> considered part of that execution. Trivial but important. [...]

Absolutely. IOW, the "hat" construction is still possible,
but is not the simplistic version that PO presents.

[FTAOD, although I was replying to one of your posts, my
article wasn't "aimed" at you, but at the whole way this "debate"
has been going. It's too easy to get sucked in to PO's way of
"thinking".]

Richard Damon

unread,
Oct 1, 2021, 4:18:09 PM10/1/21
to
More likely, you THINK stuff is 'pure bullshit' because it doesn't agree
with your own 'bullshit'. You seem to be unable to perform critical
thinking to try to understand stuff that isn't immediately in agreement
with your own preconceptions.

> The key pure bullshit item was either you or André that simply rejected
> the whole idea of a simulating halt decider out-of-hand.

So, you can't even keep who said what straight (maybe I should ask you
to 'prove' this with the quote with time-stamp like you like to do).

I do not out of hand reject a simulating halt decider, but a simulating
halt decider can't solve the problem, because NO decider can.

A simulating Halt decider will ALWAYS be a partial decider, being able
to detect only certain non-halting machines that match a pre-defined set
of 'simple' activity patterns, and should eventually be able to detect
all halting machines unless some of the patterns it uses can generate
'false positives' (like your rules do).

>
> This is the key basis of my whole proof:
> The halting theorem counter-examples present infinitely nested
> simulation (non-halting) behavior to every simulating halt decider.

Except that it doesn't

The Running of P(P) shows that no P gets involved in more than 3 levels
of simulation, because the decider it calls stops after two levels.

>
> How a simulating halt decider determines that its input is an instance
> of infinitely nested simulation is merely an implementation detail.

Which you have wrong, because your decider assumes that all copies of
itself will NEVER abort their simulation, when it will abort the exact
same simulation.

This means either it is just wrong, or it isn't actually a Computation.

>
> As long as a simulating halt decider can determine this by one means or
> another, then the conventional halting theorem proofs have been refuted.
>

Right, if it DID get it right, it might be able to come up with the
right answer. But it doesn't.

Because of the structure of the problem, and the fact that it is
deciding on a machine that knows what decision we will make, the decider
is stuck in a losing game.

olcott

unread,
Oct 1, 2021, 4:23:53 PM10/1/21
to
On 10/1/2021 3:09 PM, Andy Walker wrote:
> On 01/10/2021 17:55, Jeff Barnett wrote:
> [I wrote:]
>>>      That's why I think people are barking up the wrong tree
>>> when complaining that PO writes C code that is "not a function".
>>> It doesn't matter.  It just means that the "hat" construction
>>> is more complicated than PO claims, inc all the states [etc] of
>>> the FSM, inc "hidden" variables.  The real errors lie elsewhere.
>> I think what people are trying to say and may be saying badly is that
>> the set of things that can influence an execution must all be
>> considered part of that execution. Trivial but important. [...]
>
>     Absolutely.  IOW, the "hat" construction is still possible,
> but is not the simplistic version that PO presents.
>
>     [FTAOD, although I was replying to one of your posts, my
> article wasn't "aimed" at you, but at the whole way this "debate"
> has been going.  It's too easy to get sucked in to PO's way of
> "thinking".]
>


The whole basis of my rebuttal of the halting problem proofs:

The halting theorem counter-examples present infinitely nested
simulation (non-halting) behavior to every simulating halt decider.

How the simulating halt decider detects that an input is presenting
infinitely nested simulation is merely an implementation detail.

olcott

unread,
Oct 1, 2021, 4:25:20 PM10/1/21
to
The whole basis of my rebuttal of the halting problem proofs:

The halting theorem counter-examples present infinitely nested
simulation (non-halting) behavior to every simulating halt decider.

How the simulating halt decider detects that an input is presenting
infinitely nested simulation is merely an implementation detail.


Richard Damon

unread,
Oct 1, 2021, 4:26:38 PM10/1/21
to
So, you don't read rebuttals but just reject them out-of-hand without
any supporting evidence.

YOU are the one making a BIG claim. YOU need to prove it, and defend any
and all objections to it.

Please note that if you ever do get to the point of submitting a paper
on this to some reputable journal, it is quite possible that some reviewer
will search for discussions about this and find these conversations, and
ANY argument that has been brought up that is reasonable by the classic
theory will be checked to see that you have answered that point.

You might as well make sure you can handle them now, as once you submit
the paper, you won't really have the option to go back and change what
you have previously written.

How many times have you had to backup and redo core pieces because you
finally got an understanding of something important?

Ben Bacarisse

unread,
Oct 1, 2021, 4:32:27 PM10/1/21
to
Andy Walker <a...@cuboid.co.uk> writes:

> On 01/10/2021 17:55, Jeff Barnett wrote:
> [I wrote:]
>>>      That's why I think people are barking up the wrong tree
>>> when complaining that PO writes C code that is "not a function".
>>> It doesn't matter.  It just means that the "hat" construction
>>> is more complicated than PO claims, inc all the states [etc] of
>>> the FSM, inc "hidden" variables.  The real errors lie elsewhere.
>> I think what people are trying to say and may be saying badly is that
>> the set of things that can influence an execution must all be
>> considered part of that execution. Trivial but important. [...]
>
> Absolutely. IOW, the "hat" construction is still possible,
> but is not the simplistic version that PO presents.
>
> [FTAOD, although I was replying to one of your posts, my
> article wasn't "aimed" at you, but at the whole way this "debate"
> has been going. It's too easy to get sucked in to PO's way of
> "thinking".]

Another pattern I have tried is that of the challenge-response. The
challenge is to write a function with some particular specification (in
an imaginary "unbounded" dialect of C) and then a case the proposed
solution gets wrong is presented as the response. The "challengee" can
then come up with another solution that gets round this case, but then
there will be another case it gets wrong. This way, there is no way for
the challengee to know the "hat" construction in advance. The promise
is simply that there will be some case your code won't get right.

PO now sees the problem with such things and simply declines. (Well,
the last time he told me I'd got the specification wrong!)

--
Ben.

Richard Damon

unread,
Oct 1, 2021, 4:34:06 PM10/1/21
to
(3) applied to (4) (5) is the (copy of the) decider. It going to H^.qn
says that H has decided that its input, <H^> <H^>, is the representation
of a non-halting computation.

That input is the representation of the machine of (1) applied to (2),
aka H^(<H^>), which by this trace ends up at H^.qn which is a Halting
state, so we have that H^(<H^>) is a Halting Computation.

Thus H was WRONG about what 'its input' represents (which you mislabel as what it 'does').

>
> The input to Ĥ.qx(⟨Ĥ⟩, ⟨Ĥ⟩) never reaches its final state whether or not
> the simulating halt decider at Ĥ.qx aborts its simulation.
> This conclusively proves that Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn is correct.

You did it again.

INPUTS DON'T DO ANYTHING.

The machine represented by the input H^(<H^>) DOES reach its terminal state.

Your confusion is you keep thinking of the partial simulation done by H
of this input, NOBODY (except you) cares what that partial simulation
does. The Halting Problem is about the actual machine and its actual
behavior.

>
> All of the rebuttals were mere confused double-talk that never directly
> addressed any of these points.

Please point to the double talk above (other than YOUR double talk of
misusing the term 'input' as if it has its own behavior.)

André G. Isaak

unread,
Oct 1, 2021, 4:35:00 PM10/1/21
to
Yes, but the problem is that in your code H and P are *not* distinct
computations. You have P calling H which means that P is not actually
independent from H because P shares the code of H.

If the input to H is to be construed as a distinct, independent
computation, then it must be *entirely* self-contained. It must contain
*all* of the code which it makes use of within itself.

Richard Damon

unread,
Oct 1, 2021, 4:40:35 PM10/1/21
to
I don't think anyone disagrees that we have two computations in view
here, they are related but different.

We have H^(<H^>), the machine being decided, and we have H(<H^>,<H^>),
the decider whose output is supposed to match the behavior of the first
computation in order to be correct.

The key point is that there is only one computation of a decider in
view, that of H(<H^>,<H^>). There is no space for a second 'H1'. Either
H1 IS the same machine as H, a Computation that takes exactly two
defined inputs (the description of the machine and the input for the
computation to be decided), or it isn't. If it is, then it doesn't matter
which we use anywhere, so H(P,P) == H1(P,P). If it isn't then H1 just
doesn't matter, as Linz makes no claim that some other decider can't
decide on his H^, only that H must get it wrong.

>

Ben Bacarisse

unread,
Oct 1, 2021, 4:44:58 PM10/1/21
to
Oh dear, you've posted about TMs again...

olcott <No...@NoWhere.com> writes:

> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

And the correct annotation for this line is "if (and only if) Ĥ applied
to ⟨Ĥ⟩ does not halt". But in that case, Ĥ applied to ⟨Ĥ⟩ clearly does
halt -- in state Ĥ.qn. No decider, of any sort, can satisfy this part of
Linz's specification.

PO always leaves the annotations off, or replaces them with his own
made-up ones. There is no other way to avoid the obvious conclusion.

--
Ben.

olcott

unread,
Oct 1, 2021, 4:46:45 PM10/1/21
to
I have said this many hundreds of times.
The only case that it is required to get right is the single pattern of
an input doing exactly the opposite of whatever the halt decider decides.

To actually solve the halting problem requires hundreds of
trillions-fold more effort and would thus be the subject of a deep
learning solution.

Richard Damon

unread,
Oct 1, 2021, 4:49:54 PM10/1/21
to
Except that they don't. As I have said before, and you have not replied,
P(P) does NOT generate infinite recursion (if H(P,P) answers non-halting).

From our currently revealed definitions, where H(P,P) aborts its simulation
after 2 'levels' of simulation, the fully simulated P(P) will have
precisely 3 'layers' in it and then it will stop, as the H in the first
layer will abort its simulation, return non-halting, and P will halt.

Thus H was WRONG in its decision that the simulation would have been
infinitely leveled. Yes, if you change H to simulate for N levels, then
the simulated P will be seen to go for precisely N+1 levels, just out of
reach of the outer simulating H. This doesn't make it 'infinite', just
that H will be wrong.

H needs some 'smarter' code to try to deal with this form of 'recursive'
invocation, and the Linz proof shows that it can NEVER be 'smart enough'
to handle all cases. At best, it can be smart enough to know it has been
had (for this case) and resign in dignity.

It does turn out that there will be other cases where it just won't be
able to tell if it has been defeated, but might just keep processing
forever to make a decision.

olcott

unread,
Oct 1, 2021, 4:52:16 PM10/1/21
to
It turns out that for my revised computation it is equivalent to a
monolithic single block of code.

For my revised computation I could embed H directly in P and the results
of the computation would not change.

> If the input to H is to be construed as a distinct, independent
> computation, then it must be *entirely* self-contained. It must contain
> *all* of the code which it makes use of within itself.
>
> André
>


--

Richard Damon

unread,
Oct 1, 2021, 4:55:05 PM10/1/21
to
Which you still haven't been able to do.

Your latest effort (which doesn't seem to have changed for a long time)
is that H(<H^>,<H^>) (or H(P,P)) says that its input represents a
non-halting computation when H^(<H^>) does Halt when run.

Since Halting isn't non-Halting, your H isn't right.

You sometimes try to change the question, to look at the internal partial
simulation of the input that H does, but that isn't the problem.

You also sometimes try to claim that some H1 that is a copy gets the
answer right, but that just shows that either H1 isn't the exact copy
claimed or H isn't a computation of JUST its defined input but has some
other hidden input that matters.

olcott

unread,
Oct 1, 2021, 4:58:56 PM10/1/21
to
On 10/1/2021 3:44 PM, Ben Bacarisse wrote:
> Oh dear, you've posted about TMs again...
>
> olcott <No...@NoWhere.com> writes:
>
---1----2-----3-----4--5-----6
>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
>
> And the correct annotation for this line is "if (and only if) Ĥ applied
> to ⟨Ĥ⟩ does not halt".

Flat out totally incorrect.

Ĥ.qx(⟨Ĥ⟩, ⟨Ĥ⟩) the simulation of the input to Ĥ.qx never reaches its
final state whether or not Ĥ.qx aborts this simulation therefore the
transition of Ĥ.qx to Ĥ.qn is correct.

AKA the simulation of (4) applied to (5) by (3) does not halt.

I have explained all of the details of this a dozen times in the last
day and you simply ignore it rather than directly address it
point-by-point because you know that I am correct.


> But in that case, Ĥ applied ⟨Ĥ⟩ clearly does
> halt -- in state Ĥ.qn. No decider, of any sort, can satisfy this part
> Linz's specification.
>
> PO always leaves the annotations off, or replaces them with his own
> made-up ones. There no other way to avoid the obvious conclusion.
>


--

Richard Damon

unread,
Oct 1, 2021, 5:18:54 PM10/1/21
to
On 10/1/21 4:58 PM, olcott wrote:
> On 10/1/2021 3:44 PM, Ben Bacarisse wrote:
>> Oh dear, you've posted about TMs again...
>>
>> olcott <No...@NoWhere.com> writes:
>>
> ---1----2-----3-----4--5-----6
>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
>>
>> And the correct annotation for this line is "if (and only if) Ĥ applied
>> to ⟨Ĥ⟩ does not halt". 
>
> Flat out totally incorrect.
>
> Ĥ.qx(⟨Ĥ⟩, ⟨Ĥ⟩) the simulation of the input to Ĥ.qx never reaches its
> final state whether or not Ĥ.qx aborts this simulation therefore the
> transition of Ĥ.qx to Ĥ.qn is correct.
>
> AKA the simulation of (4) applied to (5) by (3) does not halt.
>
> I have explained all of the details of this a dozen times in the last
> day and you simply ignore it rather than directly address it
> point-by-point because you know that I am correct.
>
>

That may be the details of what happens, but Ben was quoting the
REQUIREMENTS that are on the behavior of H. You know, how H is supposed
to behave for it to be determined that it is doing what it is supposed to.

Requirements are a fundamental part of Software Engineering, so you
should understand them.

For H to meet its requirements, H is supposed to go to qn if and only if
the computation which is represented in the input to H is a non-halting
computation.

Since that input is (1) applied to (2) i.e. H^(<H^>) and that
computation reached the state H^.qn which is a Halting State, then the
Computation represented by the input to H (as in (3) applied to (4) (5))
is thus shown to be halting, so the decider H (and its copy at H^.qx)
should have gone to the qy state.

But of course if it did, H^ will go to the loop at qy -> qa <-> qb and
not halt and it would have also been wrong, and in that case it should
have gone to qn.

That is the basis of the Linz proof. Whichever state H goes to will
make the H^ machine do the opposite, and so H will be wrong.
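
A minimal, self-contained C sketch of that structure (illustrative only: the
names are made up, and H below is just a stub so the sketch compiles; any
concrete candidate H is defeated in exactly the same way):

#include <stdio.h>

typedef struct program program;
struct program {
    void (*run)(program *self);   /* "the program applied to itself" */
};

/* Stub candidate decider: returns 1 for "halts", 0 for "runs forever". */
int H(program *p, program *input) {
    (void)p; (void)input;
    return 1;                     /* placeholder verdict */
}

/* The "hat" construction: do the opposite of whatever H predicts
   about P applied to P. */
void P_run(program *self) {
    if (H(self, self))            /* H says "halts" ...            */
        for (;;) { }              /* ... so run forever instead    */
    /* H says "runs forever" ... so halt immediately instead */
}

int main(void) {
    program P = { P_run };
    /* With this stub, H claims P(P) halts, yet P(P) would run forever;
       a stub answering 0 would be wrong the other way around. */
    printf("H's verdict on P(P): %d\n", H(&P, &P));
    return 0;
}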

This construction is totally legal by the rules of Turing Machines, and
is one thing that proves that your concept that Truth must be Provable
doesn't work in this area of logic. If you want to hold to that, maybe
you need to abandon all use of mathematics beyond the most trivial
stuff, and the use of things like computers, as they can't exist in a
world where the only thing that is True is what is provable.

olcott

unread,
Oct 1, 2021, 5:28:11 PM10/1/21
to
On 10/1/2021 4:18 PM, Richard Damon wrote:
> On 10/1/21 4:58 PM, olcott wrote:
>> On 10/1/2021 3:44 PM, Ben Bacarisse wrote:
>>> Oh dear, you've posted about TMs again...
>>>
>>> olcott <No...@NoWhere.com> writes:
>>>
>> ---1----2-----3-----4--5-----6
>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
>>>
>>> And the correct annotation for this line is "if (and only if) Ĥ applied
>>> to ⟨Ĥ⟩ does not halt".
>>
>> Flat out totally incorrect.
>>
>> Ĥ.qx(⟨Ĥ⟩, ⟨Ĥ⟩) the simulation of the input to Ĥ.qx never reaches its
>> final state whether or not Ĥ.qx aborts this simulation therefore the
>> transition of Ĥ.qx to Ĥ.qn is correct.
>>
>> AKA the simulation of (4) applied to (5) by (3) does not halt.
>>
>> I have explained all of the details of this a dozen times in the last
>> day and you simply ignore it rather than directly address it
>> point-by-point because you know that I am correct.
>>
>>
>
> That may be the details of what happens, but Ben was quoting the
> REQUIREMENT that are on the behavior or H. You know, how H is supposed

The reason why I quit responding to most of your posts is terribly
careless mistakes like that. None of the above ever mentions H.

Richard Damon

unread,
Oct 1, 2021, 5:59:03 PM10/1/21
to
Nevertheless, H is still there as H^.qx is a copy of H, so it will act
like H will act.

If you don't understand requirement flow down, maybe you shouldn't
consider yourself qualified to be a programmer.

Note there is no machine with an actual name of H^.qx, that is a state
within the larger machine H^. We don't talk about behavior of 'states'
of a Turing Machine (except talking about individual steps).

We only talk about actual Turing Machines, or the extracted sub-machines
taken out of a machine.

The machine that represents the sub-machine at H^.qx IS H, so calling it
H is appropriate.

All statements need to be taken in context, and the context of the above
trace line has the definition of H in view.

The requirement is and always was, a requirement on the machine H, so
that is what should be stated.

To do otherwise is just a dishonest dodge.

yes, maybe I could have said the requirements on H which are copied to
H^.qx, but that is just being needlessly wordy as everyone should know
that H^.qx is where H was copied to.


Richard Damon

unread,
Oct 1, 2021, 6:00:22 PM10/1/21
to
Maybe my fault is I assume people have a reasonable amount of
intelligence, and I forget how DUMB you are on some things.

olcott

unread,
Oct 1, 2021, 6:10:23 PM10/1/21
to
It is difficult enough when you get the terms exactly right. In all
those cases where you refer to Ĥ as H I immediately stop reading and
skip the whole post.


> If you don't understand requirement flow down, maybe you shouldn't
> consider yourself qualified to be a programmer.
>
> Note there is no machine with an actual name of H^.qx, that is a state
> within the larger machine H^. We don't talk about behavior of 'states'
> of a Turing Machine (except talking about individual steps).
>
> We only talk about actual Turing Machines, or the extracted sub-machines
> taken out of a machihe.
>
> The machine that represents the sub-machine at H^.qx IS H, so calling it
> H is appropriate.
>
> All statements need to be taken in context, and the context of the above
> trace line has the definition of H in view.
>
> The requirement is and always was, a requirement on the machine H, so
> that is what should be stated.
>
> To do otherwise is just a dishonest dodge.
>
> yes, maybe I could have said the requirements on H which are copied to
> H^.qx, but that is just being needlessly wordy as everyone should know
> that H^.qx is where H was copied to.
>
>


André G. Isaak

unread,
Oct 1, 2021, 6:17:25 PM10/1/21
to
On 2021-10-01 14:52, olcott wrote:
> On 10/1/2021 3:34 PM, André G. Isaak wrote:
>> On 2021-10-01 13:18, olcott wrote:

>>> When a halt decider is evaluating whether or not its input halts on
>>> its input by simulating this input, then we have at least two
>>> distinctly different computations.
>>
>>
>> Yes, but the problem is that in your code H and P are *not* distinct
>> computations. You have P calling H which means that P is not actually
>> independent from H because P shares the code of H.
>>
>
> It turns out that for my revised computation it is equivalent to a
> monolithic single block of code.
>
> For my revised computation I could embed H directly in P and the results
> of the computation would not change.


Then show us this revised computation. Saying that it will do something
isn't of much value unless you show that you can deliver.

olcott

unread,
Oct 1, 2021, 6:22:00 PM10/1/21
to
Ĥ.qx applied to ⟨Ĥ⟩ ⟨Ĥ⟩ is computationally distinct from
H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ and thus can have opposite final states without
contradiction.

This becomes impossible to understand if you can't even keep track of
the fact that H and Ĥ are not the same machine.

Ben Bacarisse

unread,
Oct 1, 2021, 6:22:25 PM10/1/21
to
olcott <No...@NoWhere.com> writes:

> On 10/1/2021 3:44 PM, Ben Bacarisse wrote:
>> Oh dear, you've posted about TMs again...
>> olcott <No...@NoWhere.com> writes:
>>
> ---1----2-----3-----4--5-----6
>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
>> And the correct annotation for this line is "if (and only if) Ĥ applied
>> to ⟨Ĥ⟩ does not halt".
>
> Flat out totally incorrect.

Linz is not wrong about how he himself specifies the behaviour of Ĥ.
It's his specification. You /could/ reject that specification, but in
fact you insist that you only ever use Ĥ to refer to Linz's Ĥ.

You dishonestly leave off a key part of his specification because you
don't want to keep seeing the problem.

> Ĥ.qx(⟨Ĥ⟩, ⟨Ĥ⟩) the simulation of the input to Ĥ.qx never reaches its
> final state whether or not Ĥ.qx aborts this simulation therefore the
> transition of Ĥ.qx to Ĥ.qn is correct.

What is correct is determined by the annotation you keep dishonestly
omitting. Ĥ(⟨Ĥ⟩) transitions to Ĥ.qn if (and only if) Ĥ(⟨Ĥ⟩) does not
halt.

> AKA the simulation of (4) applied to (5) by (3) does not halt.

There is no TM (simulating or otherwise) that satisfies this clause from
Linz's specification of what Ĥ should do. At least there isn't if you
keep the specification intact, but by dishonestly removing the key words
you are trying to make it appear as if there could be.

> I have explained all of the details of this a dozen times in the last
> day and you simply ignore it rather than directly address it
> point-by-point because you know that I am correct.

It's right there in Linz. His Ĥ has this property:

"q0 w^ ⊢* y1 qn y2, if Ĥ applied to w^ does not halt."

(His q0 is your Ĥ.q0. His w^ is your ⟨Ĥ⟩. His qn is your Ĥ.qn. His y1
and y2 are strings on the tape that you don't bother with.)

You /must/ remove this key part of his specification because you
have no answer for the problem it so obviously results in. If your Ĥ
truly is Linz's Ĥ you should have no objection to stating under what
conditions, according to Linz, Ĥ(⟨Ĥ⟩) transitions to Ĥ.qn.

>> But in that case, Ĥ applied ⟨Ĥ⟩ clearly does
>> halt -- in state Ĥ.qn. No decider, of any sort, can satisfy this part
>> Linz's specification.

--
Ben.

olcott

unread,
Oct 1, 2021, 6:23:22 PM10/1/21
to
It is not done yet. It is going very well.

olcott

unread,
Oct 1, 2021, 6:29:21 PM10/1/21
to
On 10/1/2021 5:22 PM, Ben Bacarisse wrote:
> olcott <No...@NoWhere.com> writes:
>
>> On 10/1/2021 3:44 PM, Ben Bacarisse wrote:
>>> Oh dear, you've posted about TMs again...
>>> olcott <No...@NoWhere.com> writes:
>>>
>> ---1----2-----3-----4--5-----6
>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
>>> And the correct annotation for this line is "if (and only if) Ĥ applied
>>> to ⟨Ĥ⟩ does not halt".
>>
>> Flat out totally incorrect.
>
> Linz is not wrong about how he himself specifies the behaviour of Ĥ.
> It's his specification. You /could/ reject that specification, but in
> fact you insist that you only ever use Ĥ to refer to Linz's Ĥ.
>
> You dishonestly leave off a key part of his specification because you
> don't want to keep seeing the problem.
>
>> Ĥ.qx(⟨Ĥ⟩, ⟨Ĥ⟩) the simulation of the input to Ĥ.qx never reaches its
>> final state whether or not Ĥ.qx aborts this simulation therefore the
>> transition of Ĥ.qx to Ĥ.qn is correct.
>
> What is correct is determined by the annotation you keep dishonestly
> omitting. Ĥ(⟨Ĥ⟩) transitions to Ĥ.qn if (and only if) Ĥ(⟨Ĥ⟩) does not
> halt.
That is flatly false.

(1) does not transition to (6) on the basis that (1) applied to (2) does
not halt.

(3) transitions to (6) on the basis that the simulation of (4) applied
to (5) does not halt.

The simulation of (4) applied to (5) never reaches any final state
whether or not (3) aborts its simulation of (4) applied to (5) or not.

Richard Damon

unread,
Oct 1, 2021, 6:34:31 PM10/1/21
to
If H applied to <H^> <H^> is a different computation than H^.qx applied
to <H^> <H^>, then you built H^ wrong, as the REQUIREMENT for building H^
was to place a copy of H there (with some very minor modifications that
do not affect this path). So yes, H and H^ are different machines, but a
copy of H is a sub-machine within H^.

If you DID follow the instructions, but the result was a 'different
computation', then all that shows is that H wasn't a computation in the
first place.

FAIL.

You have just shown that your setup fails to meet the basic requirements
of the proof you are working on.

Maybe you just don't understand what you are trying to do?

Richard Damon

unread,
Oct 1, 2021, 6:36:09 PM10/1/21
to
On 10/1/21 6:23 PM, olcott wrote:
> On 10/1/2021 5:17 PM, André G. Isaak wrote:
>> On 2021-10-01 14:52, olcott wrote:
>>> On 10/1/2021 3:34 PM, André G. Isaak wrote:
>>>> On 2021-10-01 13:18, olcott wrote:
>>
>>>>> When a halt decider is evaluating whether or not its input halts on
>>>>> its input by simulating this input, then we have at least two
>>>>> distinctly different computations.
>>>>
>>>>
>>>> Yes, but the problem is that in your code H and P are *not* distinct
>>>> computations. You have P calling H which means that P is not
>>>> actually independent from H because P shares the code of H.
>>>>
>>>
>>> It turns out that for my revised computation it is equivalent to a
>>> monolithic single block of code.
>>>
>>> For my revised computation I could embed H directly in P and the
>>> results of the computation would not change.
>>
>>
>> Then show us this revised computation. Saying that it will do
>> something isn't of much value unless you show that you can deliver.
>>
>> André
>
> It is not done yet. It is going very well.
>
>

It isn't done, but you still state with certainty what it does?

Doesn't sound like someone understands what empirical proof actually is.

olcott

unread,
Oct 1, 2021, 6:46:36 PM10/1/21
to
This is where the subject matter becomes too difficult for most people
to understand. I could not understand it myself until I saw that this is
what the code does. Even after I saw that this is what the code does it
took me three days of very careful analysis to see why the code does
this. Until you see that the code is correct you will not be able to
accept this.

olcott

unread,
Oct 1, 2021, 6:48:18 PM10/1/21
to
I am only changing the way that the code achieves its results.
It achieves these results now.
The results are very probably going to be the same.

Richard Damon

unread,
Oct 1, 2021, 7:05:03 PM10/1/21
to
The fact that you get different answers PROVES they are not the same
computation, or maybe even that H is not a Computation at all.

The simple proof would be a REAL instruction-by-instruction trace of the
two supposedly identical Computations, to see at what point they first
diverge. There MUST be such a point, as if they start at the same point,
and supposedly execute the exact same instruction sequence on the exact
same data, but arrive at different results, there WILL be a point of
first separation.

Then, LOOK at what that instruction is and what data it was processing.

It is almost certain that the input to that instruction will be
different in the two machines. Why is it different? It will be the
'hidden' input that wasn't declared.

As I have said, it might be that your H has as one of its 'inputs' the
program counter address it is run from. It might be that you didn't
actually fully copy the code from one to the other (H1 may have a
constant of H1 instead of H in it, that being the 'hidden' input).

Since you haven't provided us with the code, no one can help you, but it
is a FUNDAMENTAL fact that if the two copies are REALLY the same
algorithm, and the inputs are all the same, then the results WILL be
identical. The fact that they are different says you have an error.

The fact that you are keeping the code secret says no one can help you,
and no one will actually believe your claim that identical code
with identical input will give different results, as that is known to be
impossible. People won't just take your word that you have done the
impossible without proof. You have claimed it too many times and then
had to retract it.

If you have read what I have written, you will notice that I HAVE made a
comment that the Software Engineering term 'Pure Function' is NOT
sufficient (or even necessary) for a function to be a Computation, as
the Software Engineering term does NOT require or promise that all
'copies' of a function will return the same values for the same inputs,
but just requires/promises that a given copy will. For instance, a
function that just returns its program counter meets the definition of a
'Pure Function' but not a Computation, as different copies give
different values; as a Computation, it has a 'hidden' input of its
program address.

Richard Damon

unread,
Oct 1, 2021, 7:12:39 PM10/1/21
to
You don't seem to understand the concept of REQUIREMENTS.

(1) transitions to (6) because (3) transitions to (6) because (3) is
part of the machine started at (1).

Yes, (3) of (4) (5) transitions to (6) because its simulation showed that
(4) on (5) did not halt, but (3) was SUPPOSED to return the answer
based on the behavior of (1) applied to (2), which is the machine that
the input to (3), i.e. (4) (5), represents.

The fact that it gave the wrong answer is WHY (1) applied to (2) halts,
but that doesn't really matter.

It doesn't actually MATTER what (3)'s simulation of (4) (5) actually
does, as that isn't part of the problem statement, it is only a detail
of how it works.

Ben Bacarisse

unread,
Oct 1, 2021, 7:13:35 PM10/1/21
to
I know. But despite this admirably narrow focus you have foolishly
accepted my challenge. Twice. It did not go well.

Curiously, in both cases you failed on exactly the case you do claim
to have "solved"! The first time round you just declared the wrong
answer to be the right one:

"Halts(Confound_Halts, Confound_Halts) returns false."
Message-ID: <2aidnV9e3qQaOCDC...@giganews.com>

but

"The input to Halts is only finite because..."
Message-ID: <6eSdnYuLBrIuVFzC...@giganews.com>

(The challenge stated that Halts should return false only for "inputs"
that are not finite computations.)

The second time, you told me:

"You merely stated your incorrect opinion about what the output should
be."

!!! That's not how a challenge works! I get to say what the output
should be.

> To actually solve the halting problem requires hundreds of
> trillions-fold more effort...

So let's not forget what you said in August last year:

"I really do have a halting decider. Your failure to understand the
details of how this is true is not my fault."

--
Ben.

Richard Damon

unread,
Oct 1, 2021, 7:15:20 PM10/1/21
to
You don't understand, do you. The fact that you don't need to see the
actual code has always meant that the actual code has been irrelevant.

The actual code provides no proof that a reasoned argument couldn't
provide, only a smoke screen to try to talk around the actual
requirements, to try to claim that something that isn't actually a
computation gave the right answer in one place, while you hid the fact
that it isn't actually a computation.

olcott

unread,
Oct 1, 2021, 7:20:08 PM10/1/21
to
Yes that is what I have been saying all along.

> or maybe even the H is not a Computaton at all.

When they are transformed into pure functions this would be fully
addressed. It can be reviewed again then.

>
> The simple proof would be a REAL instruction by instruction trace of the
> two supposed identical Computation, and see a what point they first
> diverse. There MUST be such a point, as if they start at the same point,
> and supposedly execute the exact same instruction sequence on the exact
> same data, but arrive at different results, there WILL be a point of
> first seperation.
>

The key difference is that H1 is chained to H preceding it.

H1(P,P) simulates its input then
P calls H(P,P) that simulates its input then
P calls H(P,P) that simulates its input ...

The above is the correct execution trace that is specified by a pair of
functions with identical machine code on the same input.

Ben Bacarisse

unread,
Oct 1, 2021, 8:08:29 PM10/1/21
to
olcott <No...@NoWhere.com> writes:

> On 10/1/2021 5:22 PM, Ben Bacarisse wrote:
>> olcott <No...@NoWhere.com> writes:
>>
>>> On 10/1/2021 3:44 PM, Ben Bacarisse wrote:
>>>> Oh dear, you've posted about TMs again...
>>>> olcott <No...@NoWhere.com> writes:
>>>>
>>> ---1----2-----3-----4--5-----6
>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
>>>> And the correct annotation for this line is "if (and only if) Ĥ applied
>>>> to ⟨Ĥ⟩ does not halt".
>>>
>>> Flat out totally incorrect.
>> Linz is not wrong about how he himself specifies the behaviour of Ĥ.
>> It's his specification. You /could/ reject that specification, but in
>> fact you insist that you only ever use Ĥ to refer to Linz's Ĥ.
>> You dishonestly leave off a key part of his specification because you
>> don't want to keep seeing the problem.
>>
>>> Ĥ.qx(⟨Ĥ⟩, ⟨Ĥ⟩) the simulation of the input to Ĥ.qx never reaches its
>>> final state whether or not Ĥ.qx aborts this simulation therefore the
>>> transition of Ĥ.qx to Ĥ.qn is correct.
>>
>> What is correct is determined by the annotation you keep dishonestly
>> omitting. Ĥ(⟨Ĥ⟩) transitions to Ĥ.qn if (and only if) Ĥ(⟨Ĥ⟩) does not
>> halt.
>
> That is flatly false.

No, it's right there in Linz. It's dishonest to remove the key thing
about the line you keep writing, but I know you have to remove it.
There is no other way for you to avoid Linz's conclusion.

> (1) does not transition to (6) on the basis that (1) applied to (2)
> does not halt.

Indeed not. No one said it did. In fact, we know for sure that no TM
at (1) transitions to (6) (given the input (2)) because the conditions
under which that is permitted by the specification prevent it. Such a
transition is only permitted when such a transition does not occur.
It's called a contradiction. The specification Linz starts with, and
which logically evolves into the specification you keep dishonestly
editing, is contradictory.

--
Ben.

Mike Terry

unread,
Oct 1, 2021, 8:09:17 PM10/1/21
to
On 01/10/2021 15:36, olcott wrote:
> On 10/1/2021 9:12 AM, Mike Terry wrote:
>> On 01/10/2021 01:54, olcott wrote:
>>> On 9/30/2021 7:39 PM, André G. Isaak wrote:
>>>> On 2021-09-30 18:26, olcott wrote:
>>>>> On 9/30/2021 7:18 PM, André G. Isaak wrote:
>>>>>> On 2021-09-30 17:21, olcott wrote:
>>>>>>> On 9/30/2021 5:55 PM, André G. Isaak wrote:
>>>>>>>> On 2021-09-30 16:12, olcott wrote:
>>>>>>>>
>>>>>>>>> Because of a critique of my work in another forum I have
>>>>>>>>> discovered that the halt status that H1(P,P) reports is
>>>>>>>>> consistent with the behavior of int main() { P(P); }
>>>>>>>>
>>>>>>>> But how does this address the Linz proof then? You've derived
>>>>>>>> your P from H, not from H1. So all that matter with respect to
>>>>>>>> the Linz proof is what *H*(P, P) returns. That some other
>>>>>>>> decider H1, from which P is *not* derived, can correctly decide
>>>>>>>> P(P) has no bearing on Linz's claim. H1 cannot decide P1(P1)
>>>>>>>> where P1 is derived from H1 in the same way that P is derived
>>>>>>>> from H.
>>>>>>>>
>>>>>>>
>>>>>>> H(P,P) is correct and H1(P,P) is correct.
>>>>>>
>>>>>> So you're saying they both return true? If so, why on earth do you
>>>>>> *need* H1? Why bring it up at all?
>>>>>>
>>>>>
>>>>> H(P,P) correctly determines that its input never reaches its final
>>>>> state whether or not H aborts the simulation of this input.
>>>>>
>>>>> H1(P,P) correctly determines that its input reaches its final state.
>>>>
>>>> They have the *same* input. Therefore they cannot both be correct
>>>> unless they give the same answer. You can't have one of them claim
>>>> that P(P) never reaches its final state and the other say that P(P)
>>>> does reach its final state *and* claim they are both correct.
>>>>
>>>> That's Trump Logic.
>>>>
>>>> André
>>>>
>>>>
>>>
>>> You have to wait until I finish converting functions H1 and H to pure
>>> functions. They do derive this result now, yet it is possible that
>>> this result is an artifact of their requirement of static local data
>>> to derive these results.
>>
>> Or some other form of cheating such as a function taking its own
>> address, and varying its processing according to that address.
>>
>> (I suggested this was all that was going on a couple of weeks ago, but
>> you neither confirmed nor denied that.  Almost as though you actually
>> don't understand /yourself/ what's going on.  (Surely that cannot
>> be...!))
>
> I don't reply to nonsense.
>
>
>>>
>>> If it works out as preliminary analysis indicates, a pair of
>>> functions with identical machine code and identical inputs really do
>>> have different outputs, because the input to H(P,P) calls H and the
>>> input to H1(P,P) also calls H (thus does not call H1).
>>
>> Of course an /unrestricted/ x86 program can do that - all it has to do
>> is look at its own address and behave differently according to that
>> address.  (Like your H/H1 routines.)  Or cheat with global variables
>> etc..
>>
>
> The halting theorem counter-examples present infinitely nested
> simulation (non-halting) behavior to every simulating halt decider.
>
> When the input P to a simulating halt decider H1 does not call this same
> decider and instead calls a different decider H, then infinitely
> nested simulation is not specified for H1(P,P) as it is for H(P,P).
>
>> If you are concerned with saying things of relevance to TMs and Linz's
>> proof, you need to limit your C code to doing things that directly
>> correspond with what TMs can do.
>
> Yes that was my biggest challenge. I didn't even know the criteria for
> that even though Kaz gave me this criteria months ago. After carefully
> checking and double checking this criteria seems sufficient:
>
> In computer programming, a pure function is a function that has the
> following properties:[1][2]
>
> (1) The function return values are identical for identical arguments (no
> variation with local static variables, non-local variables, mutable
> reference arguments or input streams).
>
> (2) The function application has no side effects (no mutation of local
> static variables, non-local variables, mutable reference arguments or
> input/output streams).
> https://en.wikipedia.org/wiki/Pure_function
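(As an aside, a minimal C sketch of that distinction, with hypothetical
functions that have nothing to do with your actual code:)

    /* Pure: the result depends only on the argument and nothing is mutated. */
    int square(int x) { return x * x; }

    /* Not pure: the result also depends on, and mutates, static local data,
       so identical arguments can give different return values on different
       calls - exactly what properties (1) and (2) above rule out. */
    int counted_square(int x)
    {
        static int calls = 0;   /* hidden state */
        calls = calls + 1;
        return x * x + calls;
    }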

And maybe you should also be adhering to:

(3) The function does not take its own address and use that as a basis
to modify its processing.

I mean, you say "after careful checking this criteria seems sufficient:
...", but we all know you've not done any such "careful checking" or
given any serious thought at all to your underlying computing model and
its relationship to Linz's TM-based proof. Perhaps there are a whole
host of other restrictions you need to keep things workable? How would
you know if there were? You can't analyse the situation yourself - and
you've only settled on "pure functions" because that's what Kaz
mentioned. [btw, "criteria" is plural]

The Linz proof works because TMs have certain features:
[PURE]
- the same TM start state (including tape) always implies exactly
the same follow-on computation step states, and so properties
based on those follow-on step states like
"one of the follow-on step states is a halt state" aka
"computation halts" are well defined.
[COMPOSE]
- it is possible to include one TM's coded algorithm as a sub-part of
a second machine's coded algorithm, and computations running in
the second TM from the start of the incorporated "TM coded
algorithm" will likewise follow exactly the same computation steps
as for the original TM [given the same tape input, of course], and
so produce the same final result.

(COMPOSE is my name for the second feature...)

The COMPOSE feature is what supports Linz's use of a (modified) copy of
H incorporated into his derived H^. For a given input, his
H_Incorporated within H^ is guaranteed to track exactly the same
calculation steps as the H TM with that same input, and so is guaranteed
to arrive at the same qn/qy state as the original H. That is quite
trivial, and would be understood by anyone with a basic understanding of
TMs.
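As a loose C analogy for COMPOSE (hypothetical routines, nothing to do
with your H): if a routine is pure, incorporating it unchanged into a
larger program cannot change what it computes for a given argument.

    /* The "original machine": a self-contained pure routine. */
    int add_one(int x) { return x + 1; }

    /* A "second machine" that incorporates the same coded algorithm: the
       embedded copy runs exactly the same steps for the same argument, so
       the larger program can rely on getting the same result. */
    int bigger_program(int x)
    {
        int y = add_one(x);     /* same steps, same result as stand-alone */
        return 2 * y;
    }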

So your H/H1 give different answers for the same input, yet have "the
same machine code" - what does this imply? One way of looking at it
would be to say that copying the code within your computation model
doesn't satisfy COMPOSE. I think Andy would take that position, and
it's ok but I think not very useful for someone trying to write x86 code
as a more convenient (but broadly equivalent) alternative to discussing
TM-based proofs directly. If this is to be the way of looking at it,
then /what actually is/ the right way of copying an algorithm that
guarantees COMPOSE? Which of your H/H1 is the correct composition for
the original H machine, and why that one and not the other? Strictly
speaking, I suspect the answer is /neither/ of them, but this is all way
too complicated for what you're doing. Don't go there! :)

A better approach (IMO) is to say that the original idea for using C/x86
was maybe reasonable, but needs some restrictions to ensure basic [PURE]
and [COMPOSE] features hold. So you say you want to have pure
functions, which seems reasonable to me because then the correspondence
between your C code and TMs is more direct, making it easier to discuss
the Linz (TM) proof in terms of your C code. (I don't think you
understand those reasons, but hey...)

But for exactly the same reason, you would need to ensure the COMPOSE
feature holds and algorithms are composable, preferably as "equivalent
code" callable functions, within your system. That fails when a
function can take its own address and vary its behaviour based on that
address. The function needs to base all its logic on JUST its input
arguments.
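A crude C illustration of the kind of thing that breaks this (hypothetical
code, not your actual H):

    #include <stdint.h>

    /* The "self-address cheat": the return value depends on where this
       particular copy of the code happens to live in memory, not just on
       the argument x, so a byte-identical copy placed at a different
       address can legitimately behave differently. */
    int decide(int x)
    {
        if ((uintptr_t)&decide < 0x10000000u)   /* branch on own address */
            return x;
        return !x;
    }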

Just to confirm - that is what you're doing in your H/H1, right? (Like
I suggested a couple of weeks ago. You still seem to be avoiding this
question.)

A quick note - I don't see any problem with you using machine addresses
observed from your emulation traces. After all, your code can control
the way it emulates, including assigning machine addresses (in effect
state identifiers) to the code being emulated if required - nothing
there that a TM can't directly do, so that wouldn't break the simple
correspondence you need. The problems come if H/H1 take their /own/
address and then use that when analysing the trace data they receive.

>
>> Otherwise the relationship between your computing model and TMs will
>> become distorted and the conclusions you want to draw from your
>> program's behaviour to TM behaviour may simply not apply.
>>
>> I know that last paragraph is WAAAAAY over your head (computation
>> models?  too abstract!) so you will just ignore it.  Or perhaps spray
>> your "Objection Begone!" Turing-equivalence spray at it, which never
>> works for you. :)
>>
>> Anyway, you need to work out /why/ copying an algorithm can produce
>> different behaviour in the copy, and STOP DOING IT!!  (Just like you
>> seem to have recognised that non-pure functions are to be avoided.
>> I'm not sure whether using your own machine code address breaks the
>> "pure function" rules per se, but the effect on your efforts is the
>> same so it doesn't really matter.)
>>
>>
>> Mike.
>
> My redesign meets all of the above pure function criteria.
>
> It really is the case that a pair of pure functions with identical
> machine code will have different behavior on the same input when this
> input calls one of these functions in pathological self-reference and
> not the other. Identical functions, identical input, differing
> relationships between the functions and their input.
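For concreteness, here is a bare-bones C sketch of the call structure you
are describing - hypothetical stand-ins only, since your actual H/H1 are
claimed to simulate their input rather than just call it:

    #include <stdio.h>

    typedef void proc(void);

    /* Stand-in bodies only: the real H and H1 are claimed to simulate their
       input; here they just return 0 ("does not halt") so the sketch runs. */
    int H (proc *p, proc *i) { (void)p; (void)i; return 0; }
    int H1(proc *p, proc *i) { (void)p; (void)i; return 0; }

    /* Linz-style P, specialised to asking about itself: it does the opposite
       of whatever H reports.  Note that P calls H and never H1 - that is the
       only asymmetry between the two cases exercised in main(). */
    void P(void)
    {
        if (H(P, P))
            for (;;) ;          /* H said "halts": loop forever */
        /* H said "does not halt": return, i.e. halt */
    }

    int main(void)
    {
        printf("H1(P,P) = %d\n", H1(P, P));   /* outer decider is H1 */
        printf("H (P,P) = %d\n", H (P, P));   /* outer decider is H  */
        return 0;
    }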

And what do you think that means for your refutation of the Linz proof?
Linz is about TMs, and in TM-world the TM H, and the embedded (modified)
copy of H within H^ DO NECESSARILY behave identically for the same input.

If you have concocted some programming environment where that doesn't
hold, that wouldn't refute Linz - at most it would just mean that the
Linz proof isn't (directly) applicable for that environment, so you were
never "refuting" anything to start with. And of course that wouldn't
mean HP undecidability didn't hold for that environment, just in case
you're thinking of trying that one! (Proof of HP in that environment
could probably just use reduction to the HP-for-TMs problem.)

And if it does hold, then the Linz proof is going to work, and H will
decide (H^,H^) incorrectly like it's been doing for you so far.

Mike.

Richard Damon

unread,
Oct 1, 2021, 8:28:10 PM10/1/21
to
But, they NEED to be the same. If the computation at H^.qx is not the
exact same computation as H, then H^ was built wrong.

>
>> or maybe even the H is not a Computation at all.
>
> When they are transformed into pure functions this would be fully
> addressed. It can be reviewed again then.

As I have said, 'Pure Function' isn't enough, it needs to be a pure
function that doesn't use something else, like its address, to make
copies of it different from each other.

>
>>
>> The simple proof would be a REAL instruction by instruction trace of the
>> two supposedly identical Computations, and see at what point they first
>> diverge. There MUST be such a point, as if they start at the same point,
>> and supposedly execute the exact same instruction sequence on the exact
>> same data, but arrive at different results, there WILL be a point of
>> first separation.
>>
>
> The key difference is that H1 is chained to H preceding it.

Just shows that either H1 is a different computation (and thus doesn't
matter) or that H and H1 aren't even computations (and thus disqualified)

>
> H1(P,P) simulates its input then
> P calls H(P,P) that simulates its input then
> P calls H(P,P) that simulates its input ...
>
> The above is the correct execution trace that is specified by a pair of
> functions with identical machine code on the same input.
>

And if H1 and H give different answers, they aren't the same
Computation, and thus not the equivalent of the same Turing Machine 'H',
and thus, since clearly PO-H is the one that PO-P was built on, it is
PO-H that matters for Linz, not PO-H1.

Thus, what H1 says is not relevant to the problem, only what H says.

Your argument of 'same code' notwithstanding, all that proves is that
either you are lying or the code actually has another input besides the
formal P and I, and thus is not the right form of a computation to be a
Halt Decider.

olcott

unread,
Oct 1, 2021, 11:38:46 PM10/1/21
to
https://www.liarparadox.org/Peter_Linz_HP(Pages_315-320).pdf
You are misinterpreting Linz notation.
I will go through it step by step, starting with H.

The input to H will be the description (encoded in some form) of M, say
wM, as well as the input w.

Linz has named the Turing Machine Description of M wM.
Linz defines his state transition diagram referring to wM:

---1---2--------3---4---5-------6
q0 wM ⊢* qx wM wM ⊢* qn
if M applied to WM does not halt.

q0 copies (2) to (5)
qx decides halting on (4)(5)

Now we look at Ĥ applied to ⟨Ĥ⟩
---1----2-----3-----4--5-----6
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

Ĥ.q0 copies (2) to (5)
Ĥ.qx decides halting on (4) applied to (5)
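In C-like terms this construction can be sketched as follows (purely
illustrative; embedded_H is a hypothetical stand-in for the copy of H at
Ĥ.qx, and its body here is just a placeholder so the sketch compiles):

    /* Stand-in for the copy of H at H^.qx: answers whether the machine
       described by x, applied to y, halts.  Placeholder answer only. */
    static int embedded_H(const char *x, const char *y)
    {
        (void)x; (void)y;
        return 0;               /* stand-in answer: "does not halt" */
    }

    static void H_hat(const char *wM)   /* H^.q0: input is a machine description */
    {
        /* H^.q0 .. H^.qx: the input is copied so the embedded decider sees (4)(5) */
        if (embedded_H(wM, wM)) /* the copy of H decides (4) applied to (5) */
            for (;;) ;          /* H^.qy followed by the deliberate infinite loop */
        /* otherwise H^ reaches H^.qn, which is a halting state */
    }

    int main(void) { H_hat("<H^>"); return 0; }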


>> (1) does not transition to (6) on the basis that (1) applied to (2)
>> does not halt.
>
> Indeed not. No one said it did. In fact, we know for sure that no TM
> at (1) transitions to (6) (given the input (2)) because the conditions
> under which that is permitted by the specification prevent it. Such a
> transition is only permitted when such a transition does not occur.
> It's called a contradiction. The specification Linz starts with, and
> which logically evolves into the specification you keep dishonestly
> editing, is contradictory.
>



--

olcott

unread,
Oct 1, 2021, 11:42:53 PM10/1/21
to
The whole basis of my rebuttal of the halting problem proofs:
The halting theorem counter-examples present infinitely nested
simulation (non-halting) behavior to every simulating halt decider.

How the simulating halt decider detects that an input is presenting
infinitely nested simulation is merely an implementation detail.

olcott

unread,
Oct 2, 2021, 12:10:38 AM10/2/21
to
Beginning with the Linz H
q0 wM w ⊢* qn
if M applied to w does not halt.

When Linz refers to M applied to wM, there is no M in the Linz template,
so we have to go back to these words to see what M applied to wM means:

The input to H will be the description (encoded in some form) of M, say
wM, as well as the input w.

M applied to wM means: The TM described by wM applied to w.




Now onto the Linz Ĥ
---1---2--------3---4---5-------6
q0 wM ⊢* qx wM wM ⊢* qn

if M applied to wM does not halt
is translated into

The TM described by wM applied to wM does not halt
is translated into

The TM described by ⟨Ĥ⟩ applied to ⟨Ĥ⟩ does not halt

---1----2-----3-----4--5-----6
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

There is only one place in the above where this exists:
The TM described by ⟨Ĥ⟩ applied to ⟨Ĥ⟩

Richard Damon

unread,
Oct 2, 2021, 6:34:56 AM10/2/21
to
NO!!!

Read again:

> The input to H will be the description (encoded in some form) of M, say WM, as well as the input w.


He STARTED with M as the TURING MACHINE that H is supposed to decide
about.

He then encodes it to give to H as WM (with the M being a subscript), the
W being his notation for 'an input', and the M subscript being the
modifier to say what that input is describing.

Why would he need to give WM the name M?  WM is a perfectly fine name for
the input.

>
> ---1---2--------3---4---5-------6
>   q0   wM  ⊢*  qx  wM  wM  ⊢*  qn
> if M applied to WM does not halt.

Right, the condition is if M, the Turing Machine, does not halt.
>
> q0 copies (2) to (5)

and (2) just becomes (4) by being left there

> qx decides halting on (4)(5)
>
> Now we look a Ĥ applied to ⟨Ĥ⟩
> ---1----2-----3-----4--5-----6
>    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
>
> Ĥ.q0  copies (2) to (5)
> Ĥ.qx  decides halting on (4) applied to (5)

Somewhat right, the sub-machine at qx (a copy of H) needs to use its
inputs, (4) and (5), to figure out what the actual machine (1) applied to
(2) would do. H^(<H^>) in other notations.

That is part of what makes it hard.

Its answer isn't based on what it does; the answer it needs to arrive at
is fixed by the problem. The key is to try to design an algorithm that
can figure it out, including the fact that the thing it needs to figure
out gets to know exactly what it is going to do, as it gets a copy of
the decider as part of its design.

Yes, the DESIGN problem has a pathological self reference in it. When we
try to design an H, we need to take into account that the design of H^
will use a copy of us. DESIGN of Turing Machines allows for this sort of
self-reference, even if the final Turing Machines can't actually
'reference' another machine (they just have a copy of them).

And this allowed self-reference is an important part of what makes the
design of an H that solves the problem impossible.

Richard Damon

unread,
Oct 2, 2021, 6:54:45 AM10/2/21
to
NO. Maybe you don't understand what a 'Formal Parameter' is. Let us go
back to what Linz said:

> The input to H will be the description (encoded in some form) of M, say WM, as well as the input w.

So M is the Turing Machine previously mentioned in the description of what
the Halting Problem's Question was:

> Simply stated, the problem is: given the description of a Turing machine M and an input w, does M, when started in the initial configuration q0w, perform a computation that eventually halts?

So M is NOT an 'undefined term that we need to figure out what it
means'; that is a lie.

M is the machine that the decider is supposed to decide; it is, in
Software Engineering terms, a Formal Parameter to the Halting Decider.

Just like when we write the definition of H(P,I) as a function
definition, the P and I are the names of the Formal Parameters, which
get filled in by the values from the calling site when we later call
H(H_hat, H_hat).
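In C terms (hypothetical stand-in definitions, purely to illustrate
formal vs. actual parameters):

    typedef void proc(void);

    /* In the definition of H below, P and I are the FORMAL parameters.
       The body is a stand-in so the sketch compiles. */
    int H(proc *P, proc *I) { (void)P; (void)I; return 0; }

    void H_hat(void) { }        /* some machine we want to ask H about */

    int main(void)
    {
        /* At the call site, both formal parameters P and I get filled in
           with the same actual argument, H_hat. */
        return H(H_hat, H_hat);
    }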

Is that why you suddenly changed the name of the machine to be decided,
because having the formal parameter named P and the passed parameter
named H_hat was too confusing for you?

>
> The input to H will be the description (encoded in some form) of M, say
> wM, as well as the input w.
>
> M applied to wM means: The TM described by wM applied to  w.
>

Right, the question is about the behavior of the ACTUAL Turing Machine.

>
>
>
> Now onto the Linz Ĥ
> ---1---2--------3---4---5-------6
>   q0   wM  ⊢*  qx  wM  wM  ⊢*  qn
>
> if M applied to wM does not halt
> is translated into
>
> The TM described by wM applied to wM does not halt
> is translated into
>
> The TM described by ⟨Ĥ⟩ applied to ⟨Ĥ⟩ does not halt
>
> ---1----2-----3-----4--5-----6
>    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
>
> There is only one place in the above where this exists:
> The TM described by ⟨Ĥ⟩ applied to ⟨Ĥ⟩
>

Right, so the question is: does the TM which is described by <H^>, applied
to <H^>, halt?  And that is exactly your (1) applied to (2), which you show
as halting at (6) when it gets to H^.qn, which is by definition a
Halting state, as it is the unaltered copy of H.qn, which is where H
halted with its answer.

You quote all the right words, and even start off describing them right,
and then turn off the road and change the question from being about the
Turing Machine described to the partial simulation of the machine by H.

Richard Damon

unread,
Oct 2, 2021, 6:58:27 AM10/2/21
to
Except that there never actually are any infinitely nested simulations if
the decider is able to answer the question. If the decider aborts the
simulation after N cycles based on whatever rules it uses, it turns out
that the actual machine will stop after N+1 cycles, because ITS copy of
H will stop after it does N cycles.

If this doesn't happen, then H wasn't a computation.

H gets the wrong answer because it was assuming that all deciders it was
looking at were not going to stop their simulations, but they will, if
it does.

olcott

unread,
Oct 2, 2021, 10:17:01 AM10/2/21
to
>>-----1----2-----3-----4--5-----6
>>    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
>>
>> There is only one place in the above where this exists:
>> The TM described by ⟨Ĥ⟩ applied to ⟨Ĥ⟩
>>
>
> Right, so the question is: does the TM which is described by <H^>, applied
> to <H^>, halt?  And that is exactly your (1) applied to (2), which you show

The TM described by ⟨Ĥ⟩ applied to ⟨Ĥ⟩ does not halt
(1) is not a TM description, proving that you are wrong.

There is only one place where a TM described by ⟨Ĥ⟩ is applied to ⟨Ĥ⟩.

> as halting at (6) when it gets to H^.qn, which is by definition a
> Halting state, as it is the unaltered copy of H.qn, which is where H
> halted with its answer.
>
> You quote all the right words, and even start off describing them right,
> and then turn off the road and change the question from being about the
> Turing Machine described to the partial simulation of the machine by H.
>

