
Simulating (partial) Halt Deciders [updated paper]


olcott

May 10, 2023, 12:04:56 PM

When it is understood that all halt deciders must compute the mapping
from their inputs to their own accept or reject state based on the
actual behavior of this actual input the otherwise "impossible" input
becomes decidable.

When an input has a pathological relationship to its simulating halt
decider (SHD) this changes the behavior of this input.

Unless the SHD bases its halt status decision on the actual behavior of
its actual input rather than the hypothetical behavior of what its input
would be if there was no pathological relationship the SHD itself would
never terminate.

*Simulating (partial) Halt Deciders Defeat the Halting Problem Proofs*

https://www.researchgate.net/publication/369971402_Simulating_partial_Halt_Deciders_Defeat_the_Halting_Problem_Proofs



--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Richard Damon

May 10, 2023, 7:34:37 PM

On 5/10/23 12:02 PM, olcott wrote:
> When it is understood that all halt deciders must compute the mapping
> from their inputs to their own accept or reject state based on the
> actual behavior of this actual input the otherwise "impossible" input
> becomes decidable.
>
> When an input has a pathological relationship to its simulating halt
> decider (SHD) this changes the behavior of this input.

WHY?

The input doesn't "own" a Halt Decider, so it doesn't have a unique halt
decider that needs to decide it.

You

>
> Unless the SHD bases its halt status decision on the actual behavior of
> its actual input rather than the hypothetical behavior of what its input
> would be if there was no pathological relationship the SHD itself would
> never terminate.

So, you don't understand the DEFINITION of "Actual Behavior" for a Halt
Decider.

>
> *Simulating (partial) Halt Deciders Defeat the Halting Problem Proofs*
>
> https://www.researchgate.net/publication/369971402_Simulating_partial_Halt_Deciders_Defeat_the_Halting_Problem_Proofs
>
>

But since the "Actual Behavior" of the input to a Halt Decider is the
Actual Behavior of the Actual Machine describe by the input, and since
the fact that H(D,D) returning 0 (Indicating Non-Halting) means that
D(D) will Halt, it means that H was, by definition, WRONG for this input.


You are just repeating your LIES.

And they are LIES, because you have been informed of your ERRORS enough
times that your claimed "Sincere Belief" has become a reckless disregard
of the truth, and thus not a defense.

YOU ARE JUST SICK.

Richard Damon

May 10, 2023, 8:23:08 PM

On 5/10/23 12:02 PM, olcott wrote:
> When it is understood that all halt deciders must compute the mapping
> from their inputs to their own accept or reject state based on the
> actual behavior of this actual input the otherwise "impossible" input
> becomes decidable.
>
> When an input has a pathological relationship to its simulating halt
> decider (SHD) this changes the behavior of this input.
>
> Unless the SHD bases its halt status decision on the actual behavior of
> its actual input rather than the hypothetical behavior of what its input
> would be if there was no pathological relationship the SHD itself would
> never terminate.
>
> *Simulating (partial) Halt Deciders Defeat the Halting Problem Proofs*
>
> https://www.researchgate.net/publication/369971402_Simulating_partial_Halt_Deciders_Defeat_the_Halting_Problem_Proofs
>
>

A fundamental part of your problem seems to be that you don't actually
understand what you are talking about.

For instance, Halt Deciders are defined not to decide on the "Actual
Behavior of their Actual Input", but on the Behavior of the machine
represented by its input.

The "Input" isn't the key factor, but the machine it represents.

In fact, by definition, we can ask a lot of different deciders about the
same problem, and for a Halt Decider the "Input" to each decider might
well be different, as they may use different methods to "represent" the
machine to be decided on; but they are all being asked the same
question, and if they are correct, they must all give the same answer.

This means that the "Behavior" of the "Input" doesn't depend on the
decider at all, but only on the machine and input that is being decided on.

This breaks your whole argument, and at best just shows that the
problem your H runs into is that it CAN'T do the job it is trying to do,
since it is IMPOSSIBLE for it to have done a "Correct Simulation", or to
"Correctly Decide" what its "Correct Simulation" would be.

Basing an answer on the behavior of something that just doesn't happen
is illogical.

YOU FAIL.

olcott

May 10, 2023, 11:23:11 PM

On 5/10/2023 6:34 PM, Richard Damon wrote:
> On 5/10/23 12:02 PM, olcott wrote:
>> When it is understood that all halt deciders must compute the mapping
>> from their inputs to their own accept or reject state based on the
>> actual behavior of this actual input the otherwise "impossible" input
>> becomes decidable.
>>
>> When an input has a pathological relationship to its simulating halt
>> decider (SHD) this changes the behavior of this input.
>
> WHY?
>

The easily verified fact that the behavior is different is much more
important than why it is different.

*I DON'T UNDERSTAND HOW YOU CAN'T UNDERSTAND THIS TAUTOLOGY*
When simulating halt decider H correctly simulates its input D until H
correctly determines that its simulated D would never stop running
unless aborted then H is necessarily correct to abort its simulation and
reject this input as non-halting.

Richard Damon

May 11, 2023, 7:39:34 AM

On 5/10/23 11:23 PM, olcott wrote:
> On 5/10/2023 6:34 PM, Richard Damon wrote:
>> On 5/10/23 12:02 PM, olcott wrote:
>>> When it is understood that all halt deciders must compute the mapping
>>> from their inputs to their own accept or reject state based on the
>>> actual behavior of this actual input the otherwise "impossible" input
>>> becomes decidable.
>>>
>>> When an input has a pathological relationship to its simulating halt
>>> decider (SHD) this changes the behavior of this input.
>>
>> WHY?
>>
>
> The easily verified fact that the behavior is different is much more
> important than why it is different.
>
> *I DON'T UNDERSTAND HOW YOU CAN'T UNDERSTAND THIS TAUTOLOGY*
> When simulating halt decider H correctly simulates its input D until H
> correctly determines that its simulated D would never stop running
> unless aborted then H is necessarily correct to abort its simulation and
> reject this input as non-halting.
>
>

Because your statement is a contradiction.

The "Behavior of the input", by the definition, is the behavior of the
actual machine, not some partial simulaition.

Your H NEVER DOES a "Correct Simulaition" that actually shows the actual
behavior of the machine, so your criteria is invalid.

You don't seem to understand that H and D are SPECIFIC PROGRAMS with
DEFINED BEHAVIOR, and it is incorrect to talk about some "other version"
of a program, since that other version isn't the program in question.

Your H (that gives the answer) NEVER simulates without halting, and
always aborts its simulation, so talking about what would happen if it
did something different is like talking about what would happen if 1
were equal to 2.

Note, the behavior CAN'T be different, because the behavior of a program
is not dependent on the behavior of a simulation of it. A simulation, to
be correct, must match the behavior of the actual program. All you have
done is proven that H CAN NOT DO a "Correct Simulation", since H can not
create a simulation that matches ALL the behavior of the input (and give
the answer).

Your head seems to be on backwards, and you have the tail wagging the dog.

The actual behavior of D(D) determines the correct answer for H(D,D),
and since D(D) halts when H(D,D) returns 0, that can not be the correct
answer.
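
This can be checked mechanically. Below is a minimal runnable sketch of
that situation; the H here is only a stub that always returns 0, a
stand-in for any H that answers 0 on this input, not a real halt
decider:

#include <stdio.h>

// Stub: a stand-in for any H that returns 0 ("non-halting")
// for the input (D,D).  It is NOT a halt decider.
int H(int (*x)(), int (*y)())
{
    (void)x; (void)y;
    return 0;
}

int D(int (*x)())
{
    int Halt_Status = H(x, x);
    if (Halt_Status)
        HERE: goto HERE;   // would loop forever if H had said "halts"
    return Halt_Status;    // H said 0, so D halts here
}

int main(void)
{
    D(D);  // returns normally: D(D) halts whenever H(D,D) == 0
    printf("D(D) halted, contradicting H(D,D) == 0\n");
    return 0;
}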

The actual behavior of D(D) shows that H(D,D) does not do a correct and
complete simulation of it, which is what is needed to determine if it
halts. Part of the problem is you don't understand what a UTM is, and it
seems that you think that a UTM can affect the behavior of the machine
it is given as an input, but it can't (it can only reveal it, IF the
"simulator" is an actual UTM).

You are just showing your ignorance of the field, and an inability to
learn about it, because it seems you are too stupid and believe your own
lies about it.

olcott

May 11, 2023, 10:12:20 AM

On 5/11/2023 6:37 AM, Richard Damon wrote:
> On 5/10/23 11:23 PM, olcott wrote:
>> On 5/10/2023 6:34 PM, Richard Damon wrote:
>>> On 5/10/23 12:02 PM, olcott wrote:
>>>> When it is understood that all halt deciders must compute the mapping
>>>> from their inputs to their own accept or reject state based on the
>>>> actual behavior of this actual input the otherwise "impossible" input
>>>> becomes decidable.
>>>>
>>>> When an input has a pathological relationship to its simulating halt
>>>> decider (SHD) this changes the behavior of this input.
>>>
>>> WHY?
>>>
>>
>> The easily verified fact that the behavior is different is much more
>> important than why it is different.
>>
>> *I DON'T UNDERSTAND HOW YOU CAN'T UNDERSTAND THIS TAUTOLOGY*
>> When simulating halt decider H correctly simulates its input D until H
>> correctly determines that its simulated D would never stop running
>> unless aborted then H is necessarily correct to abort its simulation and
>> reject this input as non-halting.
>>
>>
>
> Because your statement is a contradiction.
>
> The "Behavior of the input", by the definition, is the behavior of the
> actual machine, not some partial simulaition.
>
> Your H NEVER DOES a "Correct Simulaition" that actually shows the actual
> behavior of the machine, so your criteria is invalid.
*You have already admitted otherwise*

When a simulating halt decider correctly simulates N steps of its input
it derives the exact same N steps that a pure UTM would derive because
it is itself a UTM with extra features.

My reviewers cannot show that any of the extra features added to the UTM
change the behavior of the simulated input for the first N steps of
simulation:
(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it.
(c) Even aborting the simulation after N steps doesn't change the first
N steps.

Because of all this we can know that the first N steps of input D
simulated by simulating halt decider H are the actual behavior that D
presents to H for these same N steps.
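
The uncontested part of (a)-(c), that aborting after N steps cannot
change the N steps already simulated, is easy to illustrate with a toy
interpreter (the two-instruction machine below is invented for the
example and merely stands in for the x86 emulation being discussed):

#include <stdio.h>

typedef enum { OP_INC, OP_JMP, OP_HALT } Op;
typedef struct { Op op; int arg; } Insn;
typedef struct { int pc; long acc; } State;

// Execute one instruction; return 1 on reaching a final (halt) state.
static int step(const Insn *prog, State *s)
{
    const Insn i = prog[s->pc];
    switch (i.op) {
    case OP_INC: s->acc++; s->pc++; return 0;
    case OP_JMP: s->pc = i.arg;     return 0;
    default:                        return 1;  // OP_HALT
    }
}

int main(void)
{
    // A program that never reaches OP_HALT: increment, jump back.
    const Insn prog[] = { { OP_INC, 0 }, { OP_JMP, 0 }, { OP_HALT, 0 } };
    State s = { 0, 0 };
    int n, halted = 0;

    // Simulate at most 1000 steps.  These are, by construction, the
    // same steps an unlimited simulation would execute first; whether
    // they justify any conclusion about the full run is exactly the
    // point in dispute in this thread.
    for (n = 0; n < 1000 && !halted; n++)
        halted = step(prog, &s);

    printf("after %d steps: halted=%d pc=%d acc=%ld\n",
           n, halted, s.pc, s.acc);
    return 0;
}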

A computation that halts: “the Turing machine will halt whenever it
enters a final state” (Linz 1990:234).

When we see (after N steps) that D correctly simulated by H cannot
possibly reach its simulated final state in any finite number of steps
of correct simulation, then we have conclusive proof that D presents
non-halting behavior to H.

Richard Damon

May 11, 2023, 10:29:54 PM

Nope, just proves again that you are a liar.

H does a correct PARTIAL simulation, which is NOT actually a Correct
Simulation by the definition you allude to when invoking the concept of
a UTM: a Correct Simulation is a COMPLETE simulation that EXACTLY
reproduces the results of the actual machine.

>
> When a simulating halt decider correctly simulates N steps of its input
> it derives the exact same N steps that a pure UTM would derive because
> it is itself a UTM with extra features.

Which since it did not reach a final state in those N steps, means that
your SHD's simulation is NOT a "Correct Simulation" by the definition of
a UTM, and thus you can not use the properties of a UTM to make your case.

YOU ARE JUST SHOWING YOUR STUPIDITY.

>
> My reviewers cannot show that any of the extra features added to the UTM
> change the behavior of the simulated input for the first N steps of
> simulation:
> (a) Watching the behavior doesn't change it.
> (b) Matching non-halting behavior patterns doesn't change it
> (c) Even aborting the simulation after N steps doesn't change the first
> N steps.

Since a UTM only shows the answer by COMPLETING the simulation,
correctly doing the first N steps is MEANINGLESS.

>
> Because of all this we can know that the first N steps of input D
> simulated by simulating halt decider H are the actual behavior that D
> presents to H for these same N steps.

Which means nothing.

>
> A computation that halts: “the Turing machine will halt whenever it
> enters a final state” (Linz 1990:234).
>

Right, and since D(D) Halts, as does the actual CORRECT simulation
UTM(D,D), we have that the behavior of the input is Halting, so to be
correct H(D,D) needed to have returned 1, not 0.

> When we see (after N steps) that D correctly simulated by H cannot
> possibly reach its simulated final state in any finite number of steps
> of correct simulation, then we have conclusive proof that D presents
> non-halting behavior to H.
>

No, it reaches that final state when simulated by an actual UTM.

H can't simulate past that point, as the H in question stops there.

You can't change H to be a different machine, as it is part of the
input, which needs to be a constant.

If you change JUST the H that is deciding, but not the H that D calls,
like done with H1, we see that if this H somehow did go on, for THIS
input, it would see the input halt, and thus the correct answer is 1.

Since this matches the actual semantics of the proper model, that is
what needs to be done.
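
A concrete sketch of the H/H1 distinction (both deciders here are
illustrative stubs, not real halt deciders): D stays fixed, still
calling the original H, while a different decider H1 is asked about the
very same input and can simply run it to completion:

#include <stdio.h>

// Stub for the H that D actually calls: always answers 0.
int H(int (*x)(), int (*y)()) { (void)x; (void)y; return 0; }

// D is fixed: it calls THIS H and contradicts its answer.
int D(int (*x)())
{
    if (H(x, x))
        for (;;);  // unreachable while the H inside D returns 0
    return 0;
}

// A different decider asked about the SAME input: it executes D(D),
// which terminates precisely because the H that D calls returns 0.
int H1(int (*x)(int (*)()), int (*y)())
{
    x(y);
    return 1;  // the input halted
}

int main(void)
{
    printf("H(D,D)  = %d\n", H(D, D));   // 0: claims "non-halting"
    printf("H1(D,D) = %d\n", H1(D, D));  // 1: the input actually halts
    return 0;
}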

Yes, it doesn't work in your model, but that is because it isn't an
actual correct model of the needed Turing machine, because you seem to
be too stupid to set it up correctly, or are just too much of a
pathological liar to let yourself do what is actually correct.

olcott

May 12, 2023, 12:04:45 AM

You and I and embedded_H and H can see that the simulation would never
end unless aborted after the N steps that you have agreed have been
simulated correctly. N steps are enough to correctly infer this.


Both embedded_H and H

Richard Damon

May 12, 2023, 10:06:51 AM

No, we see that there is no H that can be designed to simulate an input
built by this template to the end.

That is a DIFFERENT question than what is the behavior of this input, or
its correct simulation.

The latter is clearly Halting, since we are given that H(D,D) will
return 0 and thus D(D) will halt.

All you have shown is that you can't build an H that actually does a
"Correct Simulation" per the definition needed by the rules of a UTM; it
can only approach that by correctly simulating the first N steps, but
that sort of simulation doesn't show non-halting.

You are just arguing a straw man, because you seem to have straw for
a brain.

olcott

May 12, 2023, 10:47:01 AM

*YOU ARE BACK TO DENYING THE SAME TAUTOLOGY*
When simulating halt decider H correctly simulates its input D until H
correctly determines that its simulated D would never stop running
unless aborted then H is necessarily correct to abort its simulation and
reject this input as non-halting.

You and I and simulating halt deciders can see that their input will
never stop running unless aborted after N steps of correct simulation.


> That is a DIFFERENT question than what is the behavior of this input, or
> its correct simulation.
>
> The latter is clearly Halt, since we are given that H(D,D) will return 0
> and thus D(D) will halt.
>
> All you have shown is that you can't build an H that actually does a
> "Correct Simulation" per the definition needed by the rules of a UTM; it
> can only approach that by correctly simulating the first N steps, but
> that sort of simulation doesn't show non-halting.
>
> You are just arguing a straw man, because you seem to have straw for
> a brain.

Richard Damon

May 12, 2023, 11:02:28 AM

But since H DOESN'T do a "Correct Simulation" by the rules that allow
you to use a simulation in place of the behavior of the actual machine,
your H never satisfies the premise of your statement.

Your "Tautology" falls into the trap of the fallacy of assuming the
conclusion.

>
> You and I and simulating halt deciders can see that their input will
> never stop running unless aborted after N steps of correct simulation.

You just don't understand what you are seeing. Remember, the program is
the program as it is written, not as you want it to be, and thus there
is no "unless", as your H ALWAYS aborts its simulation at the point it
does, and thus there is no other possible behavior.

Your replacing H with a set of Hs is just invalid logic and you have
left the domain of Computability theory.

Your logic is based on LYING to yourself and pretending that things can
be the way they aren't. The fact that your logic is based on this sort
of LIE is why you have made yourself so stupid, as you have come to
believe your own lies.

olcott

May 12, 2023, 11:09:45 AM

You already admitted that H does correctly simulate the first N steps of
its input and these first N steps form a mathematical induction proof
that the input never reaches its own final state.

That you refuse to acknowledge the verified fact of this does not negate
that it is a verified fact.

Richard Damon

May 12, 2023, 2:44:06 PM

Right, but correctly simulating the first N steps of the input is NOT
the same as correctly simulating the input to the point that determines
its behavior.

It does not mean that the simulation meets the definition of a "Correct
Simulation" per the application of a UTM.

Therefore, since you actually attempt to USE that simulation for that
purpose, it isn't actually a "CORRECT SIMULATION" by the required
definition.

That is like someone telling you that they are bringing a "cat" home for
a pet and then bringing home a piece of construction machinery. They are
using the wrong definition of "cat", just like you are using the wrong
definition of "correct" here.

This has been pointed out many times and your refusal to learn just
shows that you are stupid and a pathological liar.

olcott

May 12, 2023, 5:53:15 PM

You are flatly incorrect; in this case it is exactly the same, because
the simulating halt decider only stops at N steps when it correctly
predicts (through a form of mathematical induction) what the future
behavior would be if it did not abort its simulation.

The only issue is that you believe that the behavior is what a textbook
says it should be rather than what the actual behavior actually is.

Richard Damon

May 12, 2023, 6:39:13 PM

How can it be correct at deciding, when the actual correct simulation
comes to a Halt?

Your problem is that your H doesn't look at the ACTUAL input, where D
calls THIS H that DOES abort its simulation and returns 0; instead it
presumes that this input is something that it isn't, that D is calling
some other machine, not this H, that it tries to LIE about being H.

Your H falls for its own strawman fallacy of thinking the input is
something that it isn't.

>
> The only issue is that you believe that the behavior is what a textbook
> says it should be rather than what the actual behavior actually is.
>

Nope, I believe what the definition says it is. The ACTUAL question
posed to a Halt Decider is to determine the behavior of the actual
machine described by the input.

Even you admit that machine Halts, so the ONLY correct answer is Halting.

All your arguments are BY DEFINITION incorrect because they are based on
incorrect definitions and strawmen. The fact that you believe in them
just shows your stupidity.

Since you lie to yourself about what the definitions are, you mislead
yourself into believing false ideas.

This is why I call you a pathological liar: you have lost the ability to
understand what is actually true because you have lied to yourself so
much that you have lost touch with reality.

In some of your more lucid moments, you actually admit to not
understanding some things, but then fall back into your fantasy world of
lies by saying that it doesn't really matter because "you know better".

It turns out, you know NOTHING, because you have broken your connection
to any actual basis of truth and are living in an insane fantasy world
which only vaguely has a connection to reality.

That will be your eternal fate: to never really understand what is true,
to keep chasing your lies, and to have everyone you leave behind
laughing at your stupidity.

olcott

May 12, 2023, 9:50:22 PM

So you acknowledge that I was right about Tarski on the basis that my
argument has become irrefutable.

Richard Damon

May 12, 2023, 10:17:51 PM

Nope, all your arguments fail because you don't actually understand what
you are talking about.

The Halting Problem proof has nothing to do with Tarski. He may use some
similar methods, but the Halting Problem has no dependency on it.

You are just proving that you are a total idiot about the material you
are talking about, and are resorting to every fallacy you can get your
hands on to try to defend your indefensible position.

You don't seem to be actually reading what you are replying to, as this
thread had NOTHING about Tarski.

Your simulation is not "correct" because it doesn't meet the required
definition to reach the conclusion you are trying to make, showing you
are just too stupid to work in logic.

olcott

May 13, 2023, 4:55:08 PM

In other words you are 100% totally clueless about the pathological
self-reference error that is shared by the
(a) Halting Problem proofs,
(b) Liar Paradox,
(c) Tarski Undefinability theorem
(d) Isomorphism to 1931 Incompleteness Theorem's G

Only in the first case is an algorithm smart enough to circumvent this.

olcott

May 13, 2023, 5:06:13 PM

*NO MATTER HOW MANY TIMES YOU REJECT A TAUTOLOGY IT REMAINS TRUE*
When simulating halt decider H correctly simulates its input D until H
correctly determines that its simulated D would never stop running
unless aborted then H is necessarily correct to abort its simulation and
reject this input as non-halting.

Anything and everything that rejects a tautology is always necessarily
incorrect.

You acknowledge that the first N steps are simulated correctly and fail
to acknowledge that these N steps derive conclusive (mathematical
induction based) proof that the input would never stop running unless
aborted.

This is not my mistake.

int H(int (*x)(), int (*y)());  // the simulating halt decider, declared elsewhere

int D(int (*x)())
{
  int Halt_Status = H(x, x);  // ask H whether x(x) halts
  if (Halt_Status)
    HERE: goto HERE;          // loop forever if H said "halts"
  return Halt_Status;         // otherwise halt
}

void main()
{
  H(D, D);
}

Four people (two with master's degrees in computer science) have agreed
that the D correctly simulated by H cannot possibly terminate normally.

Richard Damon

May 13, 2023, 5:33:55 PM

On 5/13/23 4:54 PM, olcott wrote:

>> Nope, all your arguments fail because you don't actually understand
>> what you are talking about.
>>
>> The Halting Problem proof has nothing to do with Tarski. He may use
>> some similar methods, but the Halting Problem has no dependency on it.
>
> In other words you are 100% totally clueless about the pathological
> self-reference error that is shared by the
> (a) Halting Problem proofs,
> (b) Liar Paradox,
> (c) Tarski Undefinability theorem
> (d) Isomorphism to 1931 Incompleteness Theorem's G
>
> Only in the first case is an algorithm smart enough to circumvent this.
>

Except that neither (a) nor (c), the actual proofs you are trying to
talk about, actually HAS a "Pathological self-reference".

Please point out the ACTUAL "Self-Reference" in the ACTUAL statements of
these problems.

Remember, in the ACTUAL Halting Problem we have ACTUAL Turing Machines,
which have no way to "reference" another machine. Turing Machine H is
given the description of another Turing Machine P, which includes a COPY
(not a reference) of the decider that is claimed to be correct. NO
"Self-Reference", because the system just doesn't support them.

In fact, they don't have a self-reference in them at all.
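
A rough illustration of copy versus reference (the string "encoding"
below is invented for the example; real Turing machine descriptions are
transition tables, not prose):

#include <stdio.h>

int main(void)
{
    // Pretend a machine description is just a string.
    const char H_desc[] = "<transition table of H>";

    // P's description is built by splicing in a COPY of H's
    // description.  Nothing in P_desc refers back to any decider;
    // the copy is frozen at construction time.
    char P_desc[128];
    snprintf(P_desc, sizeof P_desc,
             "simulate %s on own description, then do the opposite",
             H_desc);

    printf("P = %s\n", P_desc);
    return 0;
}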

Yes, (b) is pathologically self-referential, and thus not a truth bearer.

And your (d) is NOT an actual Isomorphism to Godel's G; it can't be,
because Godel shows his G is actually True, so it can't be a
pathological self-reference. The issue is you remove the essential
nature from the statement in your Isomorphism because you are too stupid
to recognise what it actually is.

You are just proving that YOU are the one being 100% clueless.

Please point out where the actual "Pathological Self-Reference" exists in
the ACTUAL statement of the problem, not just your incorrect alteration
of it.

YOU FAIL.

Richard Damon

May 13, 2023, 5:35:51 PM

But since your H doesn't correctly simulate its input, that is irrelevant.

If H DOES correctly simulate its input, it doesn't answer, NEVER EVER.

Your problem is you don't understand that H is just a single program and
that program always will behave the same with a given input.

And thus, you r arguement is just based on LYING that H can do a correct
simulation and give an answer.

Your own program proves you wrong, as it admits that D(D) will Halt
since H(D,D) returns 0.