
Termination Analyzer H is Not Fooled by Pathological Input D


olcott

Jun 22, 2023, 9:27:15 PM
When the halting problem is construed as requiring a correct yes/no
answer to a contradictory question it cannot be solved. Any input D
defined to do the opposite of whatever Boolean value that its
termination analyzer H returns is a contradictory input relative to H.

When H returns 1 for inputs that it determines do halt and returns 0 for
inputs that either do not halt or do the opposite of whatever Boolean
value that H returns then these pathological inputs are no longer
contradictory and become decidable.

Can D correctly simulated by H terminate normally?

The x86utm operating system is based on an open source x86 emulator. This
system enables one C function to execute another C function in debug
step mode. When H simulates D it creates a separate process context for
D with its own memory, stack and virtual registers. H is able to
simulate D simulating itself, thus the only limit to recursive
simulations is RAM.

// The following is written in C
//
01 typedef int (*ptr)(); // pointer to int function
02 int H(ptr x, ptr y); // uses x86 emulator to simulate its input
03                      // (body of H not shown)
04 int D(ptr x)
05 {
06 int Halt_Status = H(x, x);
07 if (Halt_Status)
08 HERE: goto HERE;
09 return Halt_Status;
10 }
11
12 void main()
13 {
14 H(D,D);
15 }

*Execution Trace*
Line 14: main() invokes H(D,D);

*keeps repeating* (unless aborted)
Line 06: simulated D(D) invokes simulated H(D,D) that simulates D(D)

*Simulation invariant*
D correctly simulated by H cannot possibly reach its own line 09.

H correctly determines that D correctly simulated by H cannot possibly
terminate normally on the basis that H recognizes a dynamic behavior
pattern equivalent to infinite recursion.

H outputs: "H: Infinitely Recursive Simulation Detected Simulation
Stopped" indicating that D has defined a pathological (see above)
relationship to H.

The x86utm operating system (includes several termination analyzers)
https://github.com/plolcott/x86utm

It compiles with Visual Studio 2017 Community Edition:
https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=Community&rel=15




*Termination Analyzer H is Not Fooled by Pathological Input D*

https://www.researchgate.net/publication/369971402_Termination_Analyzer_H_is_Not_Fooled_by_Pathological_Input_D


--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Richard Damon

Jun 22, 2023, 10:25:48 PM
On 6/22/23 9:27 PM, olcott wrote:
> When the halting problem is construed as requiring a correct yes/no
> answer to a contradictory question it cannot be solved. Any input D
> defined to do the opposite of whatever Boolean value that its
> termination analyzer H returns is a contradictory input relative to H.

So, you agree with the Halting Theorem that says that a correct Halting
Decider can't be made?

Then why are you trying to refute it?

>
> When H returns 1 for inputs that it determines do halt and returns 0 for
> inputs that either do not halt or do the opposite of whatever Boolean
> value that H returns then these pathological inputs are no longer
> contradictory and become decidable.

So, you are admitting that your criteria is DIFFERENT from that of the
Halting Problem, so your "Termination Analyzer" is NOT a "Solution to
the Halting Problem"

>
> Can D correctly simulated by H terminate normally?

Which again, isn't the question of the Halting Problem.
So, you are just admitting that none of your work applies to the Halting
Problem, but just your POOP, which you are trying to make smell better by
calling it (incorrectly) a Termination Analyzer.

It isn't actually a "Termination Analyzer", because again, that theory
talks about the behavior of the actual program, and not that of the
decider, and the correct answer is whether the actual program will terminate.

Since D(D) does terminate, you have shown that your POOP still stinks,
and you just can't help but being a liar.

Sorry, you are just showing that your writing is just a mass of errors and
mistakes based on faulty assumptions resulting in erroneous answers.

You can't seem to keep yourself from lying about what you are doing.

olcott

Jun 22, 2023, 11:16:07 PM
On 6/22/2023 9:25 PM, Richard Damon wrote:
> On 6/22/23 9:27 PM, olcott wrote:
>> When the halting problem is construed as requiring a correct yes/no
>> answer to a contradictory question it cannot be solved. Any input D
>> defined to do the opposite of whatever Boolean value that its
>> termination analyzer H returns is a contradictory input relative to H.
>
> So, you agree with the Halting Theorem that says that a correct Halting
> Decider can't be made?
>
> Then why are you trying to refute it?
>

I just refuted it. From the frame-of-reference of H, for an input D that
does the opposite of whatever Boolean value that H returns, the question
"Does D halt on its input?" is a contradictory question.

You can either fail to comprehend this or pretend to fail to
comprehend this yet the actual facts remain unchanged.

>>
>> When H returns 1 for inputs that it determines do halt and returns 0 for
>> inputs that either do not halt or do the opposite of whatever Boolean
>> value that H returns then these pathological inputs are no longer
>> contradictory and become decidable.
>
> So, you are admitting that your criteria is DIFFERENT from that of the
> Halting Problem, so your "Termination Analyzer" is NOT a "Solution to
> the Halting Problem"
>

No I am not. I do not believe that a termination analyzer can be
required to report on different behavior than the behavior that it
actually sees.

So if the halting problem requires its halt decider to report on
different behavior than it actually sees then the halting problem is
incorrect for another different reason.

>>
>> Can D correctly simulated by H terminate normally?
>
> Which again, isn't the question of the Halting Problem.
>

Yet Professor Sipser seems to agree it is equivalent, and several people
on this forum took it to be a tautology, AKA necessarily true.
I am opening my work to the much broader field of termination analysis
where it is dead obvious that a termination analyzer is not allowed to
report on behavior that it can't see.

> It isn't actually a "Termination Analyzer", because again, that theory
> talks about the behavior of the actual program, and not that of the
> decider, and the correct answer is whether the actual program will terminate.
>

No that is not the case with software engineering. With software
engineering it is understood that when D correctly simulated by H cannot
possibly reach its last instruction and terminate normally that D is
correctly determined to be non-halting. It is much more clear in
software engineering that H is not supposed to be clairvoyant.

> Since D(D) does terminate, you have shown that your POOP still stinks,
> and you just can't help but being a liar.
>

If it were absolutely true that D(D) does halt then H would never have to
abort its simulation of D. Because H must abort its simulation of D that
proves from the frame-of-reference of H that D does not halt.

All this becomes moot when we understand that any input D to
termination analyzer H that does the opposite of whatever Boolean value
H returns is a contradictory thus semantically incorrect input.

> Sorry, you are just showing that your writing is just a mass of errors and
> mistakes based on faulty assumptions resulting in erroneous answers.
>
> You can't seem to keep yourself from lying about what you are doing.

If D actually does halt in an absolute sense then H would never need to
abort its simulation. Because H does need to abort its simulation then
from the frame-of-reference of H its input does not halt.

Richard Damon

Jun 23, 2023, 12:32:37 AM
On 6/22/23 11:16 PM, olcott wrote:
> On 6/22/2023 9:25 PM, Richard Damon wrote:
>> On 6/22/23 9:27 PM, olcott wrote:
>>> When the halting problem is construed as requiring a correct yes/no
>>> answer to a contradictory question it cannot be solved. Any input D
>>> defined to do the opposite of whatever Boolean value that its
>>> termination analyzer H returns is a contradictory input relative to H.
>>
>> So, you agree with the Halting Theorem that says that a correct
>> Halting Decider can't be made?
>>
>> Then why are you trying to refute it?
>>
>
> I just refuted it. From the frame-of-reference of H input D that does
> the opposite of whatever Boolean value that H returns the question:
> "Does D halt on its input" is a contradictory question.

No, you confirmed it and refuted a Strawman.

You just said that you can not create an H that gives the correct
answer, which is EXACTLY what the theorem says, that you can not make a
decider that answers the exact question: "Does the machine represented
by the input halt".



>
> You can either fail to comprehend this or pretend to fail to
> comprehend this yet the actual facts remain unchanged.

No, you don't seem to understand what you are saying.

You yourself just said "It can not be solved".

The fact that you think you can change the question and come up with a
solution for that OTHER question (which isn't the actual Halting Problem
that you refer to), doesn't mean you have refuted that you can't
correctly answer the question you agreed can't be correctly answered.

>
>>>
>>> When H returns 1 for inputs that it determines do halt and returns 0 for
>>> inputs that either do not halt or do the opposite of whatever Boolean
>>> value that H returns then these pathological inputs are no longer
>>> contradictory and become decidable.
>>
>> So, you are admitting that your criteria is DIFFERENT from that of the
>> Halting Problem, so your "Termination Analyzer" is NOT a "Solution to
>> the Halting Problem"
>>
>
> No I am not. I do not believe that a termination analyzer can be
> required to report on different behavior than the behavior that it
> actually sees.

So, you don't believe the requirements as stated are the requirements.

I guess that means you believe it is ok to use strawmen instead of the
actual problem, and lie that you are doing the actual requirements.

YOU FAIL.

>
> So if the halting problem requires its halt decider to report on
> different behavior than it actually sees then the halting problem is
> incorrect for another different reason.

If the Halt Decider doesn't see the behavior that the Halting Problem
asks for, then the Decider is the one having the problem. The existence
of the UTM means that the decider has the ability to recreate as much of
that behavior as it wants to see. Thus, the data is theoretically
available to it. It just needs to figure out the right way to process it.

>
>>>
>>> Can D correctly simulated by H terminate normally?
>>
>> Which again, isn't the question of the Halting Problem.
>>
>
> Yet Professor Sipser seems to agree it is equivalent, and several people
> on this forum took it to be a tautology, AKA necessarily true.

Nope, you are just showing that you don't understand the meaning of the
words you use.

To anyone who understands the theory, your reference to "Correct
Simulation" means the simulation by a UTM, i.e. a simulation that exactly
reproduces the behavior of the machine the input describes. If H can
CORRECTLY determine that THAT simulation wouldn't halt (for exactly this
input, which includes the H that does eventually abort its simulation and
return 0) then H would be correct in aborting and returning zero.

Since that doesn't actually happen for THIS H (which is the only one
viewable in the problem) it can't use that excuse to be correct about
aborting and returning 0.

You seem to believe it is ok to reason from false premises, which seems
to be why you lie so much.
But you still try to claim it applies to the Halting Problem, thus you are
just a liar.

And "Termination Analysis" also looks at the behavior of the actual
machine as the standard for decision. It may be that Termination
analysis allows restrictions on the programs it will decide on, but the
correct answer for any machine it does decide on is based on the actual
behavior when run.

>
>> It isn't actually a "Termination Analyzer", because again, that theory
>> talks about the behavior of the actual program, and not that of the
>> decider, and the correct answer is whether the actual program will terminate.
>>
>
> No that is not the case with software engineering. With software
> engineering it is understood that when D correctly simulated by H cannot
> possibly reach its last instruction and terminate normally that D is
> correctly determined to be non-halting. It is much more clear in
> software engineering that H is not supposed to be clairvoyant.

So, you are just admitting again that you aren't working on the actual
Halting Problem of Computation Theory and just lying through your teeth
when you say you have refuted the proof of that theorem.

>
>> Since D(D) does terminate, you have shown that your POOP still stinks,
>> and you just can't help but being a liar.
>>
>
> If it were absolutely true that D(D) does halt then H would never have to
> abort its simulation of D. Because H must abort its simulation of D that
> proves from the frame-of-reference of H that D does not halt.

Which is a statement based on a LIE. Since H DOES abort its simulation,
you can't talk about what happens if H doesn't abort; the program that doesn't abort
is not the H that D is based on, since it isn't the machine claimed to
give the right answer.

Thus, your "proof" is just lies and invalid logic.



>
> All this becomes moot when we understand that any input D to
> termination analyzer H that does the opposite of whatever Boolean value
> H returns is a contradictory thus semantically incorrect input.

But not to a Halt Decider of Computability Theory.

I guess you don't understand what the word ALL means.

>
>> Sorry, you are just showing that your writing is just a mass of errors
>> and mistakes based on faulty assumptions resulting in erroneous answers.
>>
>> You can't seem to keep yourself from lying about what you are doing.
>
> If D actually does halt in an absolute sense then H would never need to
> abort its simulation. Because H does need to abort its simulation then
> from the frame-of-reference of H its input does not halt.
>

Right, H doesn't NEED to abort its simulation except for the fact that
it was programmed to do so in error.

H MUST do as programmed, so the whole idea of acting contrary to its
programming is just invalid logic.

This is what breaks all your logic, you assume the impossible can
happen, and thus your whole system is based on false premises, and is
thus just unsound.

olcott

Jun 23, 2023, 1:06:28 AM
On 6/22/2023 11:32 PM, Richard Damon wrote:
> On 6/22/23 11:16 PM, olcott wrote:
>> On 6/22/2023 9:25 PM, Richard Damon wrote:
>>> On 6/22/23 9:27 PM, olcott wrote:
>>>> When the halting problem is construed as requiring a correct yes/no
>>>> answer to a contradictory question it cannot be solved. Any input D
>>>> defined to do the opposite of whatever Boolean value that its
>>>> termination analyzer H returns is a contradictory input relative to H.
>>>
>>> So, you agree with the Halting Theorem that says that a correct
>>> Halting Decider can't be made?
>>>
>>> Then why are you trying to refute it?
>>>
>>
>> I just refuted it. From the frame-of-reference of H input D that does
>> the opposite of whatever Boolean value that H returns the question:
>> "Does D halt on its input" is a contradictory question.
>
> No, you confirmed it and refuted a Strawman.
>
> You just said that you can not create an H that gives the correct
> answer, which is EXACTLY what the theorem says, that you can not make a
> decider that answers the exact question: "Does the machine represented
> by the input halt".
>
>

That is not the whole question. Ignoring the context really does not
make this context go away.

The whole question is what Boolean value can H return that corresponds
to the behavior of D(D) when D does the opposite of whatever value that
H returns?

>>
>> You can either fail to comprehend this or pretend to fail to
>> comprehend this yet the actual facts remain unchanged.
>
> No, you don't seem to understand what you are saying.
>
> You yourself just said "It can not be solved".
>

When a question is construed as contradictory it cannot have a correct
answer, only because the question itself is contradictory, thus incorrect.

> The fact that you think you can change the question and come up with a
> solution for that OTHER question (which isn't the actual Halting Problem
> that you refer to), doesn't mean you have refuted that you can't
> correctly answer the question you agreed can't be correctly answered.
>

When the halting problem question is understood to be incorrect then it
places no limit on computation and an equivalent question is required.

>>
>>>>
>>>> When H returns 1 for inputs that it determines do halt and returns 0
>>>> for
>>>> inputs that either do not halt or do the opposite of whatever Boolean
>>>> value that H returns then these pathological inputs are no longer
>>>> contradictory and become decidable.
>>>
>>> So, you are admitting that your criteria is DIFFERENT from that of the
>>> Halting Problem, so your "Termination Analyzer" is NOT a "Solution to
>>> the Halting Problem"
>>>
>>
>> No I am not. I do not believe that a termination analyzer can be
>> required to report on different behavior than the behavior that it
>> actually sees.
>
> So, you don't believe the requirements as stated are the requirements.

When I require you to provide a correct (yes or no) answer to the
question "What time is it?" you cannot do so because the question is
incorrect.

If I ask you to tell me whether or not the Liar Paradox
"This sentence is not true" is true or false you cannot answer because
it is a contradictory question.

>
> I guess that means you believe it is ok to use strawmen instead of the
> actual problem, and lie that you are doing the actual requirements.
>

It seems that Professor Sipser and I agree that another criterion is
equivalent. That H would never stop running unless it aborted its
simulation of D proves that D does not halt from the point of view of H.

If H does not abort D then H never halts; this proves that not aborting
D is incorrect.

Richard Damon

Jun 23, 2023, 8:11:47 AM
On 6/23/23 1:06 AM, olcott wrote:
> On 6/22/2023 11:32 PM, Richard Damon wrote:
>> On 6/22/23 11:16 PM, olcott wrote:
>>> On 6/22/2023 9:25 PM, Richard Damon wrote:
>>>> On 6/22/23 9:27 PM, olcott wrote:
>>>>> When the halting problem is construed as requiring a correct yes/no
>>>>> answer to a contradictory question it cannot be solved. Any input D
>>>>> defined to do the opposite of whatever Boolean value that its
>>>>> termination analyzer H returns is a contradictory input relative to H.
>>>>
>>>> So, you agree with the Halting Theorem that says that a correct
>>>> Halting Decider can't be made?
>>>>
>>>> Then why are you trying to refute it?
>>>>
>>>
>>> I just refuted it. From the frame-of-reference of H input D that does
>>> the opposite of whatever Boolean value that H returns the question:
>>> "Does D halt on its input" is a contradictory question.
>>
>> No, you confirmed it and refuted a Strawman.
>>
>> You just said that you can not create an H that gives the correct
>> answer, which is EXACTLY what the theorem says, that you can not make
>> a decider that answers the exact question: "Does the machine
>> represented by the input halt".
>>
>>
>
> That is not the whole question. Ignoring the context really does not
> make this context go away.

No, that IS the whole question. Please show a reliable reference that
makes the question anything like what you are saying it is.

The question is, and only is:

In computability theory, the halting problem is the problem of
determining, from a description of an arbitrary computer program and an
input, whether the program will finish running, or continue to run forever.

Turing Machines don't HAVE "Context", they have an input, and give a
specific output for every specific input.

You don't seem to understand this, and are incorrectly assuming things
that are not true, because you have made yourself IGNORANT of the actual
subject.

>
> The whole question is what Boolean value can H return that corresponds
> to the behavior of D(D) when D does the opposite of whatever value that
> H returns?
>

Nope, you are changing the problem, thus you seem to believe the
Strawman is a valid logic form, which makes your logic system UNSOUND.

>>>
>>> You can either fail to comprehend this or pretend to fail to
>>> comprehend this yet the actual facts remain unchanged.
>>
>> No, you don't seem to understand what you are saying.
>>
>> You yourself just said "It can not be solved".
>>
>
> When a question is construed as contradictory it cannot have a correct
> answer, only because the question itself is contradictory, thus incorrect.

But only your altered question is contradictory, the original question
has a definite answer for all inputs.

You just don't understand what is being talked about and are replacing
computations with some imaginary concept that just doesn't exist.

>
>> The fact that you think you can change the question and come up with a
>> solution for that OTHER question (which isn't the actual Halting
>> Problem that you refer to), doesn't mean you have refuted that you
>> can't correctly answer the question you agreed can't be correctly
>> answered.
>>
>
> When the halting problem question is understood to be incorrect then it
> places no limit on computation and an equivalent question is required.
>

Nope, the problem is the problem. If you think there is something wrong
with the question, then you can try to argue why that question is wrong,
but you don't get to change it. You can try to create an ALTERNATE field
with a different question, but that doesn't say anything about the
behavior of the original.

You just don't understand how things work, and thus you make yourself
into a LIAR.

>>>
>>>>>
>>>>> When H returns 1 for inputs that it determines do halt and returns
>>>>> 0 for
>>>>> inputs that either do not halt or do the opposite of whatever Boolean
>>>>> value that H returns then these pathological inputs are no longer
>>>>> contradictory and become decidable.
>>>>
>>>> So, you are admitting that your criteria is DIFFERENT from that of
>>>> the Halting Problem, so your "Termination Analyzer" is NOT a
>>>> "Solution to the Halting Problem"
>>>>
>>>
>>> No I am not. I do not believe that a termination analyzer can be
>>> required to report on different behavior than the behavior that it
>>> actually sees.
>>
>> So, you don't believe the requirements as stated are the requirements.
>
> When I require you to provide a correct (yes or no) answer to the
> question: What time is it? You can't do this because the question is
> incorrect.

SO? That isn't the question. You are just going off onto Red Herrings.

Your use of Red Herrings just shows that you are getting "desperate" as
your logic is falling apart, so you need a diversion away from the
actual truth.

Since you have started by changing the question, NOTHING You have said
applies to the actual problem, so everything you try to say about that
original problem is just a LIE.

>
> If I ask you to tell me whether or not the Liar Paradox
> "This sentence is not true" is true or false you cannot answer because
> it is a contradictory question.

SO? Again, a Red Herring. The Liar's Paradox is a question that doesn't
have a truth value.

The Halt Question, "Does the machine represented by the input to the
decider Halt" always does, thus your claiming they are equivalent is
just a LIE.

Yes, your alternate question, which is just a Strawman, is very similar
to the Liar's Paradox, which is one reason you can't change the question
to that.


>
>>
>> I guess that means you believe it is ok to use strawmen instead of the
>> actual problem, and lie that you are doing the actual requirements.
>>
>
> It seems that Professor Sipser and I agree that another criterion is
> equivalent. That H would never stop running unless it aborted its
> simulation of D proves that D does not halt from the point of view of H.
>
> If H does not abort D then H never halts; this proves that not aborting
> D is incorrect.

That isn't what he said, so you are just LYING again. He didn't agree to
a different requirement, you provided an example of something you
claimed H could show and asked if it was good enough. He said it was,
but your H doesn't actually prove that condition, because you don't
understand what a "Correct Simulation" means in the field.

YOU used the wrong "Context" to the words, and thus were LYING.

Face it, you need to change the question because you know the original
question proves what it claims, but you just don't understand that once
you do that you are no longer dealing with the "Halting Problem of
Computability Theory", but just with your stinky POOP.

olcott

Jun 23, 2023, 11:39:19 AM
*The halting problem proof counter-example cases*
There is a set of finite string pairs {TMD1, TMD2} such that TMD1
is a decider and TMD2 is its input. TMD2 does the opposite of whatever
Boolean value that TMD1 returns.

For the set of {TMD1, TMD2} finite string pairs, both true and false
return values are the wrong answer for their corresponding input TMD2
because TMD2 does the opposite of whatever Boolean value that TMD1
returns.

> The question is, and only is:
>
> In computability theory, the halting problem is the problem of
> determining, from a description of an arbitrary computer program and an
> input, whether the program will finish running, or continue to run forever.
>
> Turing Machines don't HAVE "Context", they have an input, and give a
> specific output for every specific input.
>
> You don't seem to understand this, and are incorrectly assuming things
> that are not true, because you have made yourself IGNORANT of the actual
> subject.
>
>>
>> The whole question is what Boolean value can H return that corresponds
>> to the behavior of D(D) when D does the opposite of whatever value that
>> H returns?
>>
>
> Nope, you are changing the problem, thus you seem to believe the
> Strawman is a valid logic form, which makes your logic system UNSOUND.
>
>>>>
>>>> You can either fail to comprehend this or pretend to fail to
>>>> comprehend this yet the actual facts remain unchanged.
>>>
>>> No, you don't seem to understand what you are saying.
>>>
>>> You yourself just said "It can not be solved".
>>>
>>
>> When a question is construed as contradictory it cannot have a correct
>> answer, only because the question itself is contradictory, thus incorrect.
>
> But only your altered question is contradictory, the original question
> has a definite answer for all inputs.
>

*The halting problem proof counter-example cases*
For the set of {TMD1, TMD2} finite string pairs, both true and false
return values are the wrong answer for their corresponding input TMD2
because TMD2 does the opposite of whatever Boolean value that TMD1
returns.

> You just don't understand what is being talked about and are replacing
> computations with some imaginary concept that just doesn't exist.
>
>>
>>> The fact that you think you can change the question and come up with
>>> a solution for that OTHER question (which isn't the actual Halting
>>> Problem that you refer to), doesn't mean you have refuted that you
>>> can't correctly answer the question you agreed can't be correctly
>>> answered.
>>>
>>
>> When the halting problem question is understood to be incorrect then
>> it places no limit on computation and an equivalent question is required.
>>
>
> Nope, the problem is the problem. If you think there is something wrong
> with the question, then you can try to argue why that question is wrong,
> but you don't get to change it. You can try to create an ALTERNATE field
> with a different question, but that doesn't say anything about the
> behavior of the original.
>

*The halting problem proof counter-example cases*
For the set of {TMD1, TMD2} finite string pairs, both true and false
return values are the wrong answer for their corresponding input TMD2
because TMD2 does the opposite of whatever Boolean value that TMD1
returns.

When the halting problem question is understood to be incorrect for
a set of finite string pairs then the halting problem proof's
counter-examples (and thus the proof itself) become a mere ruse.

> You just don't understand how things work, and thus you make yourself
> into a LIAR.
>
>>>>
>>>>>>
>>>>>> When H returns 1 for inputs that it determines do halt and returns
>>>>>> 0 for
>>>>>> inputs that either do not halt or do the opposite of whatever Boolean
>>>>>> value that H returns then these pathological inputs are no longer
>>>>>> contradictory and become decidable.
>>>>>
>>>>> So, you are admitting that your criteria is DIFFERENT from that of
>>>>> the Halting Problem, so your "Termination Analyzer" is NOT a
>>>>> "Solution to the Halting Problem"
>>>>>
>>>>
>>>> No I am not. I do not believe that a termination analyzer can be
>>>> required to report on different behavior than the behavior that it
>>>> actually sees.
>>>
>>> So, you don't believe the requirements as stated are the requirements.
>>
>> When I require you to provide a correct (yes or no) answer to the
>> question: What time is it? You can't do this because the question is
>> incorrect.
>
> SO? That isn't the question. You are just going off onto Red Herrings.
>

When the halting problem question "Does this input halt?" is applied to:

*The halting problem proof counter-example cases*
For the set of {TMD1, TMD2} finite string pairs, both true and false
return values are the wrong answer for their corresponding input TMD2
because TMD2 does the opposite of whatever Boolean value that TMD1
returns.

> Your use of Red Herrings just shows that you are getting "desperate" as
> your logic is falling apart, so you need a diversion away from the
> actual truth.
>

It is the case that H divides its inputs up three ways: halting,
non-halting, and incorrect question. H recognizes and rejects D as a
pathological input that does the opposite of whatever Boolean value that
H returns.

> Since you have started by changing the question, NOTHING You have said
> applies to the actual problem, so everything you try to say about that
> original problem is just a LIE.
>

Everything that I said is about the fact that the actual problem is a
mere ruse, like betting someone ten dollars that they can correctly tell
you whether or not this sentence is true or false:
"This sentence is not true"
(1) They must say true or false
(2) They must be correct
(3) Or they lose ten dollars.

>>
>> If I ask you to tell me whether or not the Liar Paradox
>> "This sentence is not true" is true or false you cannot answer because
>> it is a contradictory question.
>
> SO? Again, a Red Herring. The Liar's Paradox is a question that doesn't
> have a truth value.
>

No element of the {TMD1, TMD2} finite string pairs has a correct
Boolean return value for input TMD2 to decider TMD1.

> The Halt Question, "Does the machine represented by the input to the
> decider Halt" always does, thus your claiming they are equivalent is
> just a LIE.
>

It never does for any element of the {TMD1, TMD2} finite string pairs.

> Yes, your alternate question, which is just a Strawman, is very similar
> to the Liar's Paradox, which is one reason you can't change the question
> to that,
>

It is merely the ordinary halting problem question
"Does this input halt?" applied to the

*The halting problem proof counter-example cases*
resulting in both true and false being incorrect return values
from every TMD1 for its corresponding TMD2 input that does the
opposite of whatever Boolean value that TMD1 returns.
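The two-case claim above can be sketched in C in the style of the thread's H/D template. This is only an illustration: `prog`, `H_fixed`, and `D_run` are made-up names, `H_fixed` stands in for whichever fixed Boolean a concrete TMD1 returns on its paired input, and "simulation" is modeled as a plain call.

```c
typedef struct prog { int (*run)(struct prog *self); int verdict; } prog;

/* Stand-in decider: any concrete TMD1 returns some fixed Boolean on its
   paired input; `verdict` models that fixed choice. */
int H_fixed(prog *d) { return d->verdict; }

/* The D template from the thread: do the opposite of the decider's verdict. */
int D_run(prog *self) {
    int halt_status = H_fixed(self);
    if (halt_status) for (;;);  /* verdict "halts"       -> run forever */
    return halt_status;         /* verdict "doesn't halt" -> halt */
}
```

With `prog D0 = { D_run, 0 };` the verdict 0 ("does not halt") is refuted, because `D_run(&D0)` returns; with verdict 1 the call would loop forever, refuting "halts". Each fixed verdict is contradicted by the D built on it, which is exactly the claim being made for the {TMD1, TMD2} pairs.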

>>>
>>> I guess that means you believe it is ok to use strawmen instead of
>>> the actual problem, and lie that you are doing the actual requirements.
>>>
>>
>> It seems that myself and Professor Sipser agree that another criterion
>> is equivalent. That H would never stop running unless H aborted its
>> simulation of D proves that D does not halt from the point of view of H.
>>
>> If H does not abort D then H never halts; this proves that not aborting
>> D is incorrect.
>
> That isn't what he said,


MIT Professor Michael Sipser has agreed that the following verbatim
words are correct (he has not agreed to anything else):

(a) If simulating halt decider H correctly simulates its input D until H
correctly determines that its simulated D would never stop running
unless aborted then

(b) H can abort its simulation of D and correctly report that D
specifies a non-halting sequence of configurations.

It is correct that D correctly simulated by H never reaches its own last
instruction from the point of view of H.
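The "never stop running unless aborted" claim can be sketched in C. A minimal sketch, assuming illustrative names (`machine`, `H_never_abort`, `D_machine`): simulation is modeled as a direct call, and the cap of 100 is purely artificial so the demo terminates; the point is that without it the nesting grows without bound.

```c
typedef struct machine { int (*run)(struct machine *self); } machine;

static int depth = 0;   /* how deeply H(D,D) has re-entered itself */

/* A decider that "simulates" by direct call and never aborts on its own. */
int H_never_abort(machine *m) {
    depth++;
    if (depth > 100) return 0;  /* artificial cutoff, NOT part of the model */
    return m->run(m);           /* "simulate" D(D) by calling it */
}

/* D built on that decider, following the thread's template. */
int D_machine(machine *self) {
    int halt_status = H_never_abort(self);
    if (halt_status) for (;;);
    return halt_status;
}
```

Running `machine d = { D_machine }; D_machine(&d);` drives `depth` to 101 before the cutoff fires, and raising the cap raises the depth with it: that is the sense in which an H that never aborts never returns an answer.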

If it were absolutely true that TMD2 halts then there would be no need
for TMD1 to ever abort its simulation of TMD2. Therefore, that TMD2 halts
is not true from an absolute point of view; it is only true relative to
the direct execution of TMD2(TMD2).

From the frame-of-reference of TMD1, TMD2 must be aborted because it
meets the spec and not aborting it crashes the system.

> so you are just LYING again. He didn't agree to
> a different requirement; you provided an example of something you
> claimed H could show and asked if it was good enough. He said it was,
> but your H doesn't actually prove that condition, because you don't
> understand what a "Correct Simulation" means in the field.
>
> YOU used the wrong "Context" to the words, and thus were LYING.
>
> Face it, you need to change the question because you know the original
> question proves what it claims, but you just don't understand that once
> you do that you are no longer dealing with the "Halting Problem of
> Computability Theory". but just with your stinky POOP.


If we leave the concept of undecidability as it is then the question:
"What time is it (yes or no)?" becomes a correct yet undecidable
decision problem.

That people previously did not pay close enough attention to the
nuances of natural language semantics, and ignored the full context of
the halting problem question, merely proves that people weren't paying
complete attention. It does not prove that the question is correct.

When the halting problem question: "Does the input halt?"
is applied to:

*The halting problem proof counter-example cases*
For the set of {TMD1 TMD2} finite string pairs both true and false
return values are the wrong answer for their corresponding input TMD2
because TMD2 does the opposite of whatever Boolean value that TMD1
returns.

Richard Damon
Jun 23, 2023, 4:47:03 PM
Turing Machines are NOT "Finite Strings".

They can be represented by finite strings.

And, all you are saying is that UTM TMD1 TMD2 TMD2, which should predict
the behavior of UTM TMD2 TMD2 if TMD1 was correct, doesn't do that, thus
TMD1 is incorrect, and since NO TMD1 can be defined (with the TMD2 built
from TMD1 by the template) this means that it is IMPOSSIBLE to define a
TMD1 that is a correct Halt Decider.

Thus, this is a PROOF of the Halting Problem Theorem, not a refutation
of it.

Remember TMD2 is really TMD2(TMD1) as TMD2 is derived from the TMD1 it
is to confound.

>
> When the halting problem question is understood to be incorrect for
> a set of finite string pairs then the halting problem proofs
> counter-examples (and thus the proof itself) becomes a mere ruse.
>

And what is incorrect about it?

Remember, TMD1, as a SPECIFIC string, will always give a specific answer
for a given input. Thus, if we look at your two cases and give them
distinct names, we have TMDT1 and TMDF1 as the machines that return true
and false respectively when applied to TMDT2 and TMDF2 (note, we get
DIFFERENT inputs for the two, since TMD2 is a function of the machine it
is designed for).

So we have UTM TMDT1 TMDT2 TMDT2 returns true, but UTM TMDT2 TMDT2 will
run forever by its design, and UTM TMDF1 TMDF2 TMDF2 returns false, but
UTM TMDF2 TMDF2 will halt.

In both cases TMDx1 is WRONG, and there IS a correct answer for the
question about its input, so there is no "contradiction".

Your claimed contradiction is because you ignore that changing the
decider changes the input that will be given to it in the proof.
The correct answer for the TMDT1 machine is False, and the correct
answer for the TMDF1 machine is True, so both machines are just wrong,
and there is no actual "contradiction" since the machines are given
different inputs.

If you look at UTM TMDT1 TMDF2 TMDF2 then it gets the right answer, but
this isn't the "pathological" case of the proof, so the fact that it can
solve it doesn't show anything, except that you are a LIAR.
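The TMDT/TMDF argument can be checked concretely for the half that terminates. In this sketch (illustrative names only, with deciders as plain C functions) `TMDT1` always answers "halts", `TMDF1` always answers "doesn't halt", and `TMDF2` is the input built to contradict `TMDF1` specifically.

```c
typedef struct tm { int (*run)(struct tm *self); } tm;

int TMDT1(tm *p) { (void)p; return 1; }  /* always answers "halts" */
int TMDF1(tm *p) { (void)p; return 0; }  /* always answers "doesn't halt" */

/* TMDF2 consults TMDF1 (its own decider) and does the opposite. */
int TMDF2_run(tm *self) {
    int s = TMDF1(self);
    if (s) for (;;);
    return s;        /* TMDF1 said 0, so TMDF2 halts */
}

/* TMDT2 would consult TMDT1, get 1, and loop forever; it is therefore
   not executed here, only reasoned about. */
```

Since `TMDF2_run` returns, TMDF1's verdict 0 on its own input is wrong while TMDT1's verdict 1 on that same input is right; symmetrically, TMDT2 would run forever, making TMDT1 wrong and TMDF1 right about it. Every individual input has a correct answer; only the decider it was built against misses it.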

>
>> Your use of Red Herrings just shows that you are getting "desperate"
>> as your logic is falling apart, so you need a diversion away from the
>> actual truth.
>>
>
> It is the case that H does divide its input up three ways into halting
> non-halting and incorrect question. H recognizes and reject D as a
> pathological input that does the opposite of whatever Boolean value that
> H returns.

But there is no "incorrect question", and you are just shown to be a
liar about working on the Halting Problem.

Also, your machine does NOT divide its input into 3 classes, as it only
returns 2 values. The Flibble decider gives 3 answers, but you reject
that; at least it admits it isn't doing the actual halting problem. For
it, the only question would be whether such a 3-way division is actually
useful for something.

>
>> Since you have started by changing the question, NOTHING You have said
>> applies to the actual problem, so everything you try to say about that
>> original problem is just a LIE.
>>
>
> Everything that I said is about the fact that the actual problem is a
> mere ruse, like betting someone ten dollars that they can correctly tell
> you whether or not this sentence is true or false:
> "This sentence is not true"
> (1) They must say true or false
> (2) They must be correct
> (3) Or they lose ten dollars.
>

So, the fact that you don't understand the problem means it is just a ruse.

That is the sign of a weak mind.

Note, no one "loses" anything by the fact that Halting isn't decidable.
Even if we didn't ask that question, the effects of it would still exist;
it just becomes a clear way to show the limits of the power of logic.



>>>
>>> If I ask you to tell me whether or not the Liar Paradox
>>> "This sentence is not true" is true or false you cannot answer
>>> because it is a contradictory question.
>>
>> SO? Again, a Red Herring. The Liar's Paradox is a question that
>> doesn't have a truth value.
>>
>
> No element of the {TMD1, TMD2} finite string pairs has a correct
> Boolean return value for input TMD2 to decider TMD1.

So, that just means that no correct TMD1 exists; that isn't a problem
except in your mind.

You break your whole logic system to try to make an impossible thing
possible, which is much worse than knowing that there are limits to what
can be known.

>
>> The Halt Question, "Does the machine represented by the input to the
>> decider Halt" always does, thus your claiming they are equivalent is
>> just a LIE.
>>
>
> It never does for every element of the {TMD1, TMD2} finite string pairs.

????

Every UTM TMD2 TMD2 execution will either Halt or Not, so there is a
correct answer for EVERY case.

UTM TMD1 TMD2 TMD2 just never produces it.

Maybe your mind is incapable of any real intelligence and is stuck trying
to be artificially intelligent.

>
>> Yes, your alternate question, which is just a Strawman, is very
>> similar to the Liar's Paradox, which is one reason you can't change
>> the question to that,
>>
>
> It is merely the ordinary halting problem question
> "Does this input halt?" applied to
>
> *The halting problem proof counter-example cases*
> resulting in both true and false being incorrect return values
> from every TMD1 for its corresponding TMD2 input that does the
> opposite of whatever Boolean value that TMD1 returns.

No, it isn't.

You ask your question before you have defined TMD1; asking what it
CAN do to be correct means you are asking how to program the machine,
so the program doesn't yet exist.

The actual question is asked AFTER you have actually designed your
machine, and at that point the answers are fixed, so we can see what
ACTUALLY happens for this case.

>
>>>>
>>>> I guess that means you believe it is ok to use strawmen instead of
>>>> the actual problem, and lie that you are doing the actual requirements.
>>>>
>>>
>>> It seems that myself and Professor Sipser agree that another criterion
>>> is equivalent. That H would never stop running unless H aborted its
>>> simulation of D proves that D does not halt from the point of view of H.
>>>
>>> If H does not abort D then H never halts; this proves that not
>>> aborting D is incorrect.
>>
>> That isn't what he said,
>
>
> MIT Professor Michael Sipser has agreed that the following verbatim
> words are correct (he has not agreed to anything else):
>
> (a) If simulating halt decider H correctly simulates its input D until H
> correctly determines that its simulated D would never stop running
> unless aborted then

So, *IF* H can *CORRECTLY* determine that a *CORRECT* simulation would
not halt, then H can abort its simulation. The only *CORRECT* simulation
(in computability theory) is a simulation that never aborts.


>
> (b) H can abort its simulation of D and correctly report that D
> specifies a non-halting sequence of configurations.
>

But if H aborts its simulation, then H never does a correct simulation,
so the precondition was not satisfied. Remember if H aborts its
simulation, then its simulation is NOT correct by the definitions in place.


> It is correct that D correctly simulated by H never reaches its own last
> instruction from the point of view of H.

Except that the statement is a contradiction. If H aborts its
simulation, it didn't correctly simulate its input, and if H correctly
simulates its input, it never aborts its simulation to give an answer.

At best, H can conclude that it can't ever see the final state in its
simulation of the input, but that isn't the same as non-halting.

>
> If it were absolutely true that TMD2 halts then there would be no need
> for TMD1 to ever abort its simulation of TMD2. Therefore, that TMD2 halts
> is not true from an absolute point of view; it is only true relative to
> the direct execution of TMD2(TMD2).

Except that TMD1 has defined behavior that affects the code of TMD2. If
TMD1 is the TMD1N that never aborts, then TMD2N built on it, will in
fact be non-halting.

If TMD1 is TMD1A that tries to use this fact, it is given TMD2A, not
TMD2N, and UTM TMD2A TMD2A will halt, so TMD1A is wrong.

Yes UTM TMD1A TMD2N TMD2N will give a correct answer, but that isn't the
case that needs refuting, UTM TMD1A TMD2A TMD2A is the case, and it gets
that wrong.

You are just gaslighting yourself with your deceptive lies by reusing
program names.

>
> From the frame-of-reference of TMD1, TMD2 must be aborted because it
> meets the spec and not aborting it crashes the system.

Except that it doesn't, because each variation of TMD1 gets a different
TMD2. You keep missing that fact because you believe your own lies.

>
>> so you are just LYING again. He didn't agree to a different
>> requirement; you provided an example of something you claimed H could
>> show and asked if it was good enough. He said it was, but your H
>> doesn't actually prove that condition, because you don't understand
>> what a "Correct Simulation" means in the field.
>>
>> YOU used the wrong "Context" to the words, and thus were LYING.
>>
>> Face it, you need to change the question because you know the original
>> question proves what it claims, but you just don't understand that
>> once you do that you are no longer dealing with the "Halting Problem
>> of Computability Theory". but just with your stinky POOP.
>
>
> If we leave the concept of undecidability as it is then the question:
> "What time is it (yes or no)?" becomes a correct yet undecidable
> decision problem.

Nope. Red Herring.

The Halting Question HAS a valid answer as the input either Halts or not.

The input is just cleverly designed so THIS decider won't get the right
answer.

>
> That people previously simply did not pay close enough attention to the
> nuances of natural language semantics by making sure to ignore the full
> context of the halting problem question merely proves that people
> weren't paying complete attention. It does not prove that the question
> is correct.

Nope, you aren't paying attention.

>
> When the halting problem question: "Does the input halt?"
> is applied to:
>
> *The halting problem proof counter-example cases*
> For the set of {TMD1 TMD2} finite string pairs both true and false
> return values are the wrong answer for their corresponding input TMD2
> because TMD2 does the opposite of whatever Boolean value that TMD1
> returns.
>


You keep repeating that statement, and it is still wrong.

Remember TMD2 is actually a function of TMD1, so you need to keep that
into account. Failure to do so just shows that you are being a LIAR.

olcott
Jun 23, 2023, 5:05:26 PM
I am saying that the question:
"Does input D halt on input D" posed to H
is exactly isomorphic to the question:
"Will Jack's answer to this question be no?" posed to Jack.

Neither H nor Jack can answer their questions only because
from their frame-of-reference their questions are contradictory.

It is very important that this issue is recognized because until it is
recognized we can never have any AI that can reliably distinguish
between truth and falsehoods because the Tarski undefinability theorem
that is isomorphic to the Halting Problem proofs proves that True(L,x)
can never be defined.

If everyone believes that True(L,x) cannot be defined (even though
it can be defined) then no one will work on defining True(L,x) and
AI will be forever in the dark about True(L,x).

Richard Damon
Jun 23, 2023, 5:26:58 PM
You can say it, but it's a lie.

>
> Neither H nor Jack can answer their questions only because
> from their frame-of-reference their questions are contradictory.

But the difference is that when we ask Jack, the answer hasn't been
determined until he actually gives an answer.

When we ask H, the answer was determined the moment H was coded.
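That determinism can be shown with a trivial C stub (illustrative only; `H_stub` and its rule are made up, not anyone's actual decider): a function with no hidden state returns the same value for the same input every time it is asked.

```c
/* A coded decider is just a fixed function of its input: whatever rule
   it was compiled with, asking again cannot change the answer. */
int H_stub(int input) {
    return input % 2;   /* some fixed rule chosen at coding time */
}
```

Unlike Jack, who is free to answer either way when asked, `H_stub(7)` is the same on every call; the answer existed as soon as the rule did.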


>
> It is very important that this issue is recognized because until it is
> recognized we can never have any AI that can reliably distinguish
> between truth and falsehoods because the Tarski undefinability theorem
> that is isomorphic to the Halting Problem proofs proves that True(L,x)
> can never be defined.
>

Except you don't seem to understand that programs don't have free-will
and their behavior is defined by their program, which is fixed.

> If everyone believes that True(L,x) cannot be defined (even though
> it can be defined) then no one will work on defining True(L,x) and
> AI will be forever in the dark about True(L,x).
>

Except you don't understand what was meant by True(L,x), so your
argument is just bogus.


You are just proving that you speak out of stupidity.

olcott
Jun 23, 2023, 5:41:34 PM
This is not true. We know in advance that both of Jack's possible
answers are the wrong answer and we know in advance that both return
values from H will not correspond to the behavior of the directly
executed D(D).

Within the context of who is being asked, Jack's question and the D
input to H have no correct answer / return value, only because the
question / input is contradictory within this context even though it is
not contradictory in other, different contexts.

If I ask you if you are a little girl the correct answer is no. If I ask
a little girl the exact same question it has a different answer because
the context of who is asked changes the meaning of the question.

Richard Damon
Jun 23, 2023, 6:49:02 PM
Note, you are changing the Halting question. It is NOT "What can H
return to be correct?", as what H returns is FIXED by your definition of H.

The Question given to H is "Does the machine represented by your input
Halt?". Since you claim H is returning 0 correctly, it must be returning
0, and thus we know that the input WILL HALT.

Please show how that can be wrong for THIS H. You don't get to change
the input, and thus you don't get to change the H that D is built on.

This shows that you are just a LIAR.


>
> Within the context of who is being asked Jack's question and the D
> input to H have no correct answer / return value only because the
> question / input is contradictory within this context even though it is
> not contradictory in other different contexts.

And since H has to have been defined to ask the question, the answer it
gives is fixed (and is wrong), and the actual question has a definitive
answer.

Since your H answers 0, D(D) Halts, so the correct answer was 1, and
thus H was wrong.

PERIOD.

>
> If I ask you if you are a little girl the correct answer is no. If I ask
> a little girl the exact same question it has a different answer because
> the context of who is asked changes the meaning of the question.
>

Right, because the question has "YOU" in it, and the binding of the
pronoun changed.

The Halting problem doesn't have that same sort of thing happening. Yes,
the way we form the description of the machine might change based on the
decider we are giving it to, but the thing being asked about, the
machine so described, doesn't change.

Note in particular, the question isn't about a machine based on the
decider, but that it must be able to handle ANY input, so one possible
input is a machine based on it. Thus, when we look at the answer to the
question, we are asking ALL the deciders the exact same question, and
many of them can answer it, but we know that one in particular, the one
this particular input was based on, can't.

Note, to try to refute it, you need to show that you can make a particular
decider correctly answer for the particular machine built on it, but the
question is still not self-referential, as the question is about the
input that is given without reference to how it was made; the
"self-reference" is just in the meta-logic.

You are just showing how stupid you are, and how poor you are at logic,
so you keep resorting to red herrings.

I guess you must like fish.

olcott
Jun 23, 2023, 7:09:01 PM
I am showing that the original halting question is contradictory for the
set of halting problem proof instances {TM1, TMD2} where TMD2 does the
opposite of whatever Boolean value that TM1 returns.

When I say that John has a black cat the fact that Harry has a black dog
is no rebuttal.

When I say that every input TMD2 is contradictory for its corresponding
TM1 the fact that it is not contradictory for TM2 is not a rebuttal.

Richard Damon
Jun 23, 2023, 7:42:58 PM
Except that you don't actually show that there is anything wrong with any
particular pair, just that there does not exist any possible TMD1 that
gets the right answer for its TMD2, which just proves the Halting Problem.

Remember every TMD1 gives just a single answer for a given input, and
every different TMD1 generates a different TMD2 that it needs to get
right to be a counter example, so you can't look across elements for help.

>
> When I say that John has a black cat the fact that Harry has a black dog
> is no rebuttal.
>
> When I say that every input TMD2 is contradictory for its corresponding
> TM1 the fact that it is not contradictory for TM2 is not a rebuttal.
>

The problem is that the question of whether TMD2 halts has a definite
answer, so it isn't the same contradiction as Jack's question.

Remember every different TMD1 creates a DIFFERENT TMD2, so you never
have a single TMD2 that generates contradictory answers for the Halting
Question, thus, your claim of contradiction is rebutted.

Maybe the issue is you don't actually know what "contradictory" means.

I recently found a fairly simple explanation of the problem, and maybe
even you could understand what is being said.

https://www.youtube.com/watch?v=sG0obNcgNJM

olcott
Jun 23, 2023, 8:04:02 PM
We can know in advance that any answer that Jack provides and any return
value that TM1 returns on input TMD2 is the wrong answer / return value.

Furthermore we can know it is the wrong answer / return value
specifically because every answer / return value is contradicted.

The new part that I am adding (that you partially agreed to?)
is that any question that contradicts every answer is an incorrect
question.

Richard Damon
Jun 23, 2023, 8:16:47 PM
No, because for the halting Problem, TMD1 is a FIXED MACHINE in any
asking of the question, and there IS a correct answer to the question;
it just isn't the one that TMD1 gives.

That is the difference.

Thus, TMD1 is just WRONG; the question isn't a "Contradiction". TMD2
might have contradicted TMD1, but no contradiction appears in the
question itself.

>
> The new part that I am adding (that you partially agreed to?)
> is that any question that contradicts every answer is an incorrect
> question.
>

Except you don't define "Contradiction" in a proper manner.

Remember, the Halting Question is about a SPECIFIC machine each time it
is asked, and for that SPECIFIC machine, there is a correct answer to
the question, "Does this machine Halt?", and that answer just happens to
be always the opposite of what your claimed correct decider, that the
input is constructed from, gives.

Until you specify that decider, you don't have a question.

Thus, when you try your "set" concept, you need to evaluate the
question for each member of the set, and for each member of the set,
there *IS* a correct answer, so the question itself isn't
"Contradictory" in the manner that makes a question invalid. It HAS a
valid answer; it is just that the decider can never give it, thus the
problem is undecidable, not "invalid".

olcott
Jun 23, 2023, 8:32:14 PM
That I don't define it in a conventional manner does not mean that I am
defining it incorrectly.

> Remember, the Halting Question is about a SPECIFIC machine each time it
> is asked,
No it is not. It is always about every element of the entire set of
{TM1, TMD2} (halting problem proof instance) pairs.

Richard Damon
Jun 23, 2023, 8:55:12 PM
No, but it means you can't use any attribute from the previous definition.

If you are going to make up a term, don't reuse an existing one.

This is just one of the ways you lie about things: you redefine terms
and try to pick and choose what you can import from the original terms
without trying to prove that you can. This is just more of your hypocrisy.

>
>> Remember, the Halting Question is about a SPECIFIC machine each time
>> it is asked,
> No it is not. It is always about every element of the entire set of
> {TM1, TMD2} (halting problem proof instance) pairs.
>

Every element INDIVIDUALLY.

A set is not a program, and you trying to make it one just shows your
stupidity.

You are just admitting you don't have a bit of ground to stand on for
you claims.

You have proved yourself to be a Hypocritical Ignorant Pathological
Lying Insane Idiot.

Your work is in the trash heap.

olcott
Jun 23, 2023, 9:16:30 PM
Yes, and you cannot tell that there is no integer N such that
N > 5 & N < 3 until after you try every element of the infinite
set of integers and can't find one that works.
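The integer example can be checked by brute force over any finite window, though no finite search settles it for all integers; that follows instead from the fact that N > 5 and N < 3 jointly require 5 < N < 3, which is impossible. The helper below is an illustrative sketch, not part of the argument itself.

```c
/* Count witnesses to (N > 5 && N < 3) in a finite range. The count is
   always 0, but the search only covers [lo, hi]; the claim for ALL
   integers follows from 3 <= 5, not from exhausting the range. */
int count_witnesses(int lo, int hi) {
    int count = 0;
    for (int n = lo; n <= hi; n++)
        if (n > 5 && n < 3)
            count++;
    return count;
}
```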

On the other hand I can see that every element of the set of {TM1, TMD2}
where TMD2 does the opposite of the Boolean return value of TM1 does
contradict every TM1 that is intended to report on the behavior of TMD2.

When anyone disagrees with tautologies my first guess is that they are a
liar. The actual case might really be that they are not too bright.

*tautology* in logic, a statement so framed that it cannot be denied
without inconsistency. Thus, “All humans are mammals” is held to assert
with regard to anything whatsoever that either it is not a human or it
is a mammal. But that universal “truth” follows not from any facts noted
about real humans but only from the actual use of human and mammal and
is thus purely a matter of definition.
https://www.britannica.com/topic/tautology

Richard Damon
Jun 23, 2023, 9:32:58 PM
Right, no integer individually meets the requirement.

But there IS an answer to the ACTUAL Halting Question: does the machine
given as a description Halt?

In EVERY case, if UTM TMD1 TMD2 TMD2 returns Halting, then it is an
absolute fact that the answer to the question, which is the behavior of
UTM TMD2 TMD2, is to not halt, and if UTM TMD1 TMD2 TMD2 returns
non-halting, then it is a fact that UTM TMD2 TMD2 will Halt.

Thus there IS an answer for every case, so the question is not a
contradiction.

Yes, no TMD1 gave the right answer, but that just means they all were
wrong, and when you show that this applies to ALL possible TMD1, we can
show that the Halting Question can not be computed. That doesn't make it
an invalid question, it means it is an undecidable question.

>
> On the other hand I can see that every element of the set of {TM1, TMD2}
> where TMD2 does the opposite of the Boolean return value of TM1 does
> contradict every TM1 that is intended to report on the behavior of TMD2.

But that doesn't matter as that isn't the question.

Every TMD2 defines a correct answer, so the question is valid.

>
> When anyone disagrees with tautologies my first guess is that they are a
> liar. The actual case might really be that they are not too bright.

What Tautology? One that doesn't show what you are trying to prove?
Remember, you claim to be refuting the ACTUAL Halting Problem proof of
Computation Theory, so if your argument doesn't apply to that, you are
just lying.

>
> *tautology* in logic, a statement so framed that it cannot be denied
> without inconsistency. Thus, “All humans are mammals” is held to assert
> with regard to anything whatsoever that either it is not a human or it
> is a mammal. But that universal “truth” follows not from any facts noted
> about real humans but only from the actual use of human and mammal and
> is thus purely a matter of definition.
> https://www.britannica.com/topic/tautology
>

Right. So what Tautology THAT MATTERS are you claiming? That you can't
find a TMD1 that gives the answer is actually a proof of the thing you
are trying to refute, so your logic is 180 degrees backwards.

You confuse yourself by redefining terms and then forgetting that you
have done that and thus can't use any of the standard properties of the
term.

Without an explicit redefinition, using an alternate definition is just
a lie, and with an explicit redefinition, using a property that you
haven't proven still applies is a lie.

So, you have been lying.

olcott
Jun 23, 2023, 9:46:24 PM
Thus the question: "Are you a little girl?" must be false for everyone
because the exact same word-for-word question is false for you.

Richard Damon
Jun 23, 2023, 10:14:17 PM
Nope, because THAT question uses a pronoun to reference what it is
talking about, so the question varies based on who it is said to.

The Halting problem identifies the machine, by what input the decider is
given, and what machine it describes.

Your failure to understand THAT just shows you are STUPID.

You are just playing mind games with yourself, and losing.

Just think of what people are going to think of your mental capacity
when they read these conversations; you keep on going back to disproven
arguments, thus showing you are incapable of learning.

Sorry.

olcott
Jun 23, 2023, 10:44:54 PM
On 6/23/2023 9:14 PM, Richard Damon wrote:
> On 6/23/23 9:46 PM, olcott wrote:
>> On 6/23/2023 8:32 PM, Richard Damon wrote:
>>> Every TMD2 defines a correct answer, so the question is valid.
>>
>> Thus the question: "Are you a little girl?" must be false for everyone
>> because the exact same word-for-word question is false for you.
>>
>>
>
> Nope, because THAT question uses a pronoun to reference what it is
> talking about, so the question varies based on who it is said to.
>

Referring to every element of the infinite set of {TM1, TMD2} pairs
such that TMD2 does the opposite of whatever Boolean value that TM1
returns:

The reason why no TM1 element of this set returns a value that
corresponds to the behavior of its TMD2 input is that each TMD2 element
does the opposite of the value that this TM1 element returns.

Richard Damon
Jun 24, 2023, 7:16:55 AM
On 6/23/23 10:44 PM, olcott wrote:
> On 6/23/2023 9:14 PM, Richard Damon wrote:
>> On 6/23/23 9:46 PM, olcott wrote:
>>> On 6/23/2023 8:32 PM, Richard Damon wrote:
>>>> Every TMD2 defines a correct answer, so the question is valid.
>>>
>>> Thus the question: "Are you a little girl?" must be false for
>>> everyone because the exact same word-for-word question is false for you.
>>>
>>>
>>
>> Nooe, because THAT question uses a pronoun to reference what it is
>> talking about, so the question veries based on who it is said to.
>>
>
> Referring to every element of the infinite set of {TM1, TMD2} pairs
> such that TMD2 does the opposite of whatever Boolean value that TM1
> returns:
>
> The reason why no TM1 element of this set returns a value that
> corresponds to the behavior of its TMD2 input is that each TMD2 element
> does the opposite of the value that this TM1 element returns.
>
>
>

Which means that you have proven it is impossible to make a correct Halt
Decider, not that the Halting Question is self-contradictory.

The problem is that since TMD2 changes across the set, there isn't a
single instance of the question in view.

That is exactly the same as your argument about the question "Are you
a girl?". The fact that some people will answer yes and some no doesn't
make it a contradictory question, because each instance of the question
is asking about a different subject.

Thus, you haven't shown an actual problem with the Halting Question
(Does the machine described by the input Halt?), just that it is
impossible to find an answer, which is EXACTLY what the Halting Theorem
states, so you are not refuting its proof.

You just don't seem to understand what you are saying because you have
gaslit yourself with your false ideas.

olcott
Jun 24, 2023, 9:53:24 AM
On 6/24/2023 6:16 AM, Richard Damon wrote:
> On 6/23/23 10:44 PM, olcott wrote:
>> On 6/23/2023 9:14 PM, Richard Damon wrote:
>>> On 6/23/23 9:46 PM, olcott wrote:
>>>> On 6/23/2023 8:32 PM, Richard Damon wrote:
>>>>> Every TMD2 defines a correct answer, so the question is valid.
>>>>
>>>> Thus the question: "Are you a little girl?" must be false for
>>>> everyone because the exact same word-for-word question is false for
>>>> you.
>>>>
>>>>
>>>
>>> Nope, because THAT question uses a pronoun to reference what it is
>>> talking about, so the question varies based on who it is said to.
>>>
>>
>> Referring to every element of the infinite set of {TM1, TMD2} pairs
>> such that TMD2 does the opposite of whatever Boolean value that TM1
>> returns:
>>
>> The reason why no TM1 element of this set returns a value that
>> corresponds to the behavior of its TMD2 input is that each TMD2 element
>> does the opposite of the value that this TM1 element returns.
>>
>>
>>
>
> Which means that you have proven it is impossible to make a correct Halt
> Decider, not that the Halting Question is self-contradictory.
>
> The problem is that since TMD2 changes in the set, there isn't a single
> instance of the question in view.
>

I asked you a tautology and you disagreed.


> That is exactly the same as your argument about the question: "Are you
> a girl?". The fact that some people will answer yes and some no doesn't
> make it a contradictory question, because each instance of the question
> is asking about a different subject.
>
> Thus, you haven't shown an actual problem with the Halting Question
> (Does the machine described by the input Halt?) just that it is
> impossible to find an answer, which is EXACTLY what the Halting Theorem
> states, so you are not refuting its proof.
>
> You just don't seem to understand what you are saying because you have
> gaslit yourself with your false ideas.

Richard Damon
Jun 24, 2023, 11:13:58 AM
On 6/24/23 9:53 AM, olcott wrote:
> On 6/24/2023 6:16 AM, Richard Damon wrote:
>> On 6/23/23 10:44 PM, olcott wrote:
>>> On 6/23/2023 9:14 PM, Richard Damon wrote:
>>>> On 6/23/23 9:46 PM, olcott wrote:
>>>>> On 6/23/2023 8:32 PM, Richard Damon wrote:
>>>>>> Every TMD2 defines a correct answer, so the question is valid.
>>>>>
>>>>> Thus the question: "Are you a little girl?" must be false for
>>>>> everyone because the exact same word-for-word question is false for
>>>>> you.
>>>>>
>>>>>
>>>>
>>>> Nope, because THAT question uses a pronoun to reference what it is
>>>> talking about, so the question varies based on who it is said to.
>>>>
>>>
>>> Referring to every element of the infinite set of {TM1, TMD2} pairs
>>> such that TMD2 does the opposite of whatever Boolean value that TM1
>>> returns:
>>>
>>> The reason why no TM1 element of this set returns a value that
>>> corresponds to the behavior of its TMD2 input is that each TMD2
>>> element does the opposite of the value that this TM1 element returns.
>>>
>>>
>>>
>>
>> Which means that you have proven it is impossible to make a correct
>> Halt Decider, not that the Halting Question is self-contradictory.
>>
>> The problem is that since TMD2 changes in the set, there isn't a
>> single instance of the question in view.
>>
>
> I asked you a tautology and you disagreed.

You asked a Red Herring, and I pointed it out.

You are just proving that you don't understand how correct logic works.

olcott
Jun 24, 2023, 11:57:34 AM
I asked a tautology and you denied it.

Richard Damon
Jun 24, 2023, 12:37:57 PM
WHERE did I say that your statement was factually wrong, versus pointing
out that it doesn't prove what you want it to?

I think you don't understand what you read and write.

Also, how do you "ASK" a tautology? A tautology isn't a QUESTION, but a
STATEMENT.

You seem to have category errors built into your brain.

olcott
Jun 24, 2023, 1:01:19 PM
I asked you if a tautology is true and you denied it.

It is like I asked you if all of the black cats in Australia are black
and you said you don't know you have to check them one at a time.

Richard Damon
Jun 24, 2023, 1:29:40 PM
So, you are still unable to provide a reference to the statements you
claim are just lies.

Note, you didn't give the ACTUAL question you asked, because if you did,
it could be shown that you weren't asking what you are claiming you were
asking.

This is just showing how ingrained lying is into you communication.

If you want to quote my actual lie, it would be appreciated, and if I
was actually incorrect, I will correct my statement.

I suspect, you will find that I didn't actually deny what you thought I
was denying, but the false assumption you were deriving from the statement.

You like saying the equivalent of:

1 + 1 = 2
Therefore, Sam is a big Cat.

Using the quoting of an actually true statement as support for a
statement that is actually unrelated (though maybe it sounds a bit
related).

olcott
Jun 24, 2023, 1:42:58 PM
Are all the black cats in Australia black?

Richard Damon
Jun 24, 2023, 2:19:09 PM
So, what is the "Tautology" there? There is no STATEMENT that is always
true in every situation.

I guess you are just proving your ignorance.

Note, to this I replied:

>>>>>>>> Which means that you have proven it is impossible to make a
>>>>>>>> correct Halt Decider, not that the Halting Question is
>>>>>>>> self-contradictory.
>>>>>>>>
>>>>>>>> The problem is that since TMD2 changes in the set, there isn't a
>>>>>>>> single instance of the question in view.

So I didn't say the statement was FALSE, I said it didn't prove what you
wanted it to prove.

The fact that no TM1 can exist that gives the right answer does NOT
mean the question is contradictory, but that the problem of designing a
halt decider is impossible.

Which EXACT question, including context, showed contradictory results?
Remember, if you change TM1, you change TMD2.

Or, do you think the question "Are you a girl?" is a contradictory
sentence?


GAME - SET - MATCH

You are proved to be a LIAR and an idiot that can't read normal English.

olcott
Jun 24, 2023, 3:22:17 PM
So you disagree that all of the black cats in Australia are black?
Maybe some of the black cats in Australia are white dogs?

Richard Damon
Jun 24, 2023, 3:31:38 PM
That's just bad logic attacking a strawman.

What is ACTUALLY wrong with my statement?

You are just proving yourself to be a pathological lying idiot who
doesn't know how to use any logic.

You think throwing insults actually can prove an argument.

At least I throw my insults BECAUSE I have proven my argument, so it
isn't fallacious reasoning (like you use).

olcott
Jun 24, 2023, 4:10:26 PM
In other words when I am obviously correct you spout out pure ad hominem
because that is all that you have when you know that I am correct.

Richard Damon
Jun 24, 2023, 4:24:26 PM
Nope, and the fact you can't actually point out where you see the error
means you are just spouting out pure LIES. Specify the actual error you
claim, or you are just admitting your lack of basis.

Note, the fact that you don't even know the meaning of ad hominem shows
how stupid you are. I have never said you are wrong because you are
stupid, (that would be ad hominem) I show you are stupid because you
insist on wrong statements (that is just applying definitions)

You show you have no useful knowledge of how to do logic.

To you "Obviously correct" means that you think it must be correct, even
if it is actually wrong. Since you appear to have lost touch with
reality, this doesn't mean a lot.

olcott
Jun 24, 2023, 4:35:20 PM
Every member of set X that has property P and property Q has property P.

Richard Damon
Jun 24, 2023, 4:41:35 PM
So?

I didn't disagree that every element of the set has a TMD2 that does the
opposite of what TM1 says. I disagreed that this means the Halting
Question, i.e., the question of the behavior of TMD2, has a problem. The
Halting Question ALWAYS has a correct answer, it is just that TM1 never
gives it.

Thus, your claim that the Halting Question is just like the Liar's
paradox is a LIE, and your argument is shown to be unsound.

You just keep fighting strawmen, and losing.

olcott
Jun 24, 2023, 4:59:59 PM
Why is it that TM1 cannot provide a Boolean value that corresponds to
the actual behavior of TMD2?

Richard Damon
Jun 24, 2023, 5:08:14 PM
That's a problem with the programmer, or the fact that the function being
asked for isn't computable. It is NOT an indication that the problem is
incorrect.

The fact that you can't create a program to compute the right value is a
perfectly valid case, as computers can't compute everything.

In fact, if you look at the number of possible input -> output mappings
that can exist, and the number that are computable, the fraction that is
computable is infinitesimal, so the fact that a given one isn't
computable isn't a surprise.

Did you look at the video I linked to the other day? It has a nice
simple explanation about these sorts of problems, and how Hilbert found
out that his ideas (which you seem to share a lot of) just didn't work.

Mathematics, once sufficiently complicated, is not:

1) Complete, there are Truths that can not be proven
2) Provably consistent within itself
3) Decidable, there are problems that can not be computed.

If you want these properties, you need to keep your mathematics very simple.

olcott
Jun 24, 2023, 5:39:25 PM
In other words the only reason that halting cannot be solved is
that every programmer in the universe is simply too stupid?

> or the fact that the function being
> asked for isn't computable.

In other words it can't be done simply because it just
can't be done, no circular reasoning here.

> It is NOT an indication that the problem is
> incorrect.
>
Why is halting not computable?

Richard Damon
Jun 24, 2023, 7:02:45 PM
Nope, because it is mathematically shown not to be computable (see below).


>
>> or the fact that the function being asked for isn't computable.
>
> In other words it can't be done simply because it just
> can't be done, no circular reasoning here.

It can't be done because the "pathological" program is a valid program.
Program complexity grows faster than the ability to analyze it.

>
>> It is NOT an indication that the problem is incorrect.
>>
> Why is halting not computable?
>

Because computations are powerful enough to make their halting
uncomputable. It comes in part because one program can include another
within it, so a program can be programmed to act contrary to any one
decider that is thought to be able to handle all cases. Thus every
decider can have an input tailored to defeat it.

You really need to view that video.

Why do you think it SHOULD be computable?

Remember, there are N^N different functions but only N computations to
try to compute them, which makes computing them all impossible.

olcott
Jun 24, 2023, 7:11:40 PM
Syntactically valid is not the same as semantically valid.

Every polar (yes/no) question that contradicts both answers is an
incorrect question. Likewise for inputs to a decider.

Richard Damon
Jun 24, 2023, 7:51:32 PM
There is no "Semantic" limitation in the requirement of ALL PROGRAMS.

>
> Every polar (yes/no) question that contradicts both answers is an
> incorrect question. Likewise for inputs to a decider.
>

But it DOESN'T. You keep on forgetting that before we can ask the
question, we have fixed the H, so its answer is fixed.

Then the question is on the behavior of the machine represented by the
input, and that has a definite answer, so no problem.

Since your H(D,D) that you claim to be correct returns 0, the CORRECT
answer is 1, and H is just wrong, and you are shown to be a liar.

You keep on forgetting that the decider is a program that has been
actually designed, not something with the mystical "Get the Right
Answer" instruction.

olcott
Jun 24, 2023, 11:24:15 PM
Can any decider X possibly correctly return any Boolean value to any
input Y that does the opposite of whatever Boolean value that X returns?
(a) Yes
(b) No
(c) Richard is a Troll

Richard Damon
Jun 25, 2023, 7:33:56 AM
No. If I understand your poorly defined question: decider X is trying
to decide on property P of input Y, and Y is designed to use decider X
and create a value of property P opposite of what X says.

Note, X may be able to decide that property correctly on many other
inputs, but cannot do so for this particular one.

Note, since the decider was defined to take ANY program as input, this
sort of property becomes undecidable.

This is the basis of Rice's Theorem. Note, your configuration, where Y
is made within the address space of X and must directly call the
deciding X, not being able to use another copy of it, violates the
requirement of "Any Program", as you have restricted the class of
programs allowed; so your idea of making the property be (... and not
call X) isn't a valid condition, because it has been shown that you
can't actually detect in finite time that Y is calling some function
that is the equivalent of X.

olcott
Jun 26, 2023, 5:52:26 PM
How would you say the question more clearly?

> Note, X may be able to decide on many other inputs that property
> correctly, but can not do so for this particular one.
>
> Note, since the decider was defined to take ANY program as input, this
> sort of property becomes undecidable.
>

H does correctly determine that its input has a pathological
relationship to H and specifically rejects its input on this basis.

> This is the basis of Rice's Theorem. Note, your configuration where Y is
> made within the address space of X, and must directly call the deciding
> X and not able to use another copy of it

It took me the last two days to solve this issue in a better
way than the way that took me six months to derive. I also
reiterated and simplified my original method.

This effort was not actually required because my simpler
form of the halting problem instance is commonly understood
to be a halting problem instance.

A halting problem instance only requires that an input D do
the opposite of whatever Boolean value that any corresponding
H could possibly return.

> violates the requriement of
> "Any Program" as you have restricted the cass the program can be, so
> your idea of making the property be (... and not call X) isn't a valid
> condition, because it has been shown that you can't actually detect that
> Y is calling some function that is the equivalent of X in finite time.

Richard Damon
Jun 26, 2023, 7:18:55 PM
Actually be clear in what you say. Since you misuse so many words,
clarifying what I mean by them is sometimes needed.

>
>> Note, X may be able to decide on many other inputs that property
>> correctly, but can not do so for this particular one.
>>
>> Note, since the decider was defined to take ANY program as input, this
>> sort of property becomes undecidable.
>>
>
> H does correctly determine that its input has a pathological
> relationship to H and specifically rejects its input on this basis.

Only because it restricts its input to a non-Turing-complete set of
inputs. Remember, you have defined that H cannot be copied into D, for
"reasons", which shows that the input set isn't Turing Complete.

>
>> This is the basis of Rice's Theorem. Note, your configuration where Y
>> is made within the address space of X, and must directly call the
>> deciding X and not able to use another copy of it
>
> It took me the last two days to solve this issue in a better
> way than the way that took me six months to derive. I also
> reiterated and simplified my original method.
>
> This effort was not actually required because my simpler
> form of the halting problem instance commonly understood
> to be a halting problem instance.
>

But it isn't actually one, so it isn't. You are just lying and serving
a strawman.

> A halting problem instance only requires that an input D do
> the opposite of whatever Boolean value that any corresponding
> H could possibly return.

No, a Halting Decider needs to CORRECT answer about the HALTING
PROPERTY, which is about the actual behavior of the machine described by
the input.

Calling anything else a Halt Decider of Computability Theory, or saying
something besides one refutes the Halting Theorem of Computability
Theory's proof, is just a LIE.

olcott
Jun 26, 2023, 8:05:56 PM
So you believe that it is unclear yet have no idea how it could be said
more clearly.

>>
>>> Note, X may be able to decide on many other inputs that property
>>> correctly, but can not do so for this particular one.
>>>
>>> Note, since the decider was defined to take ANY program as input,
>>> this sort of property becomes undecidable.
>>>
>>
>> H does correctly determine that its input has a pathological
>> relationship to H and specifically rejects its input on this basis.
>
> Only because it restricts its input to a non-turing complete set of
> inputs. Remeber, you have defined that H can not be copied into D, for
> "reasons", which shows that the input set isn't Turing Complete.
>

This limitation is not actually required. The alternative
requires inline_H to have very messy inline assembly language
that forces all function calls to be at an absolute rather than
relative machine address.

>>
>>> This is the basis of Rice's Theorem. Note, your configuration where Y
>>> is made within the address space of X, and must directly call the
>>> deciding X and not able to use another copy of it
>>
>> It took me the last two days to solve this issue in a better
>> way than the way that took me six months to derive. I also
>> reiterated and simplified my original method.
>>
>> This effort was not actually required because my simpler
>> form of the halting problem instance commonly understood
>> to be a halting problem instance.
>>
>
> But it isn't actually one, so it isn't. You are just lying and serving
> Strawman.
>
>> A halting problem instance only requires that an input D do
>> the opposite of whatever Boolean value that any corresponding
>> H could possibly return.
>
> No, a Halting Decider

I am defining {halting problem instance}, not {halt decider}.
By defining {halting problem instance} I prove that H/D is a
{halting problem instance}. Thus there is no actual need for
additional, more convoluted cases that copy their input.

> needs to CORRECT answer about the HALTING
> PROPERTY, which is about the actual behavior of the machine described by
> the input.
>

H is a decidability decider for itself with its input.
Rice's theorem says this is impossible.

By making slight changes to H it will report whether
or not it will be able to determine the halt status
of an input.

In this case H returns 1 for halting and not halting
and returns 0 for pathological input.

> Calling anything else a Halt Decider of Computability Theory, or saying
> something besides one refutes the Halting Theory of Computability proof
> is just a LIE.
>
>>
>>> violates the requriement of "Any Program" as you have restricted the
>>> cass the program can be, so your idea of making the property be (...
>>> and not call X) isn't a valid condition, because it has been shown
>>> that you can't actually detect that Y is calling some function that
>>> is the equivalent of X in finite time.
>>
>

Richard Damon
Jun 26, 2023, 8:20:04 PM
The problem is you misuse words so often, the only way you could be
clearer is to stop doing that.

You also "invent" words that don't have accepted meanings without
defining them.

>>>
>>>> Note, X may be able to decide on many other inputs that property
>>>> correctly, but can not do so for this particular one.
>>>>
>>>> Note, since the decider was defined to take ANY program as input,
>>>> this sort of property becomes undecidable.
>>>>
>>>
>>> H does correctly determine that its input has a pathological
>>> relationship to H and specifically rejects its input on this basis.
>>
>> Only because it restricts its input to a non-turing complete set of
>> inputs. Remeber, you have defined that H can not be copied into D, for
>> "reasons", which shows that the input set isn't Turing Complete.
>>
>
> This limitation is not actually required. The alternative
> requires inline_H to have very messy inline assembly language
> that forces all function calls to be at an absolute rather than
> relative machine address.

Nope, the key point is that an actual Decider that accepts inputs that
can define copies of itself can't actually recognize when a program
calls a copy of itself as a "pathological" call to itself.

When I pointed this out to you before, your answer was that it was
impossible to create a "copy" of your H.

Why does the copy of H need some messy inline assembly when the original
one didn't? Why can't we just copy the actual code of H?

>
>>>
>>>> This is the basis of Rice's Theorem. Note, your configuration where
>>>> Y is made within the address space of X, and must directly call the
>>>> deciding X and not able to use another copy of it
>>>
>>> It took me the last two days to solve this issue in a better
>>> way than the way that took me six months to derive. I also
>>> reiterated and simplified my original method.
>>>
>>> This effort was not actually required because my simpler
>>> form of the halting problem instance commonly understood
>>> to be a halting problem instance.
>>>
>>
>> But it isn't actually one, so it isn't. You are just lying and serving
>> Strawman.

Since your H can't take in ALL programs as an input, the partial
solution is just a strawman.

>>
>>> A halting problem instance only requires that an input D do
>>> the opposite of whatever Boolean value that any corresponding
>>> H could possibly return.
>>
>> No, a Halting Decider
>
> I am defining {halting problem instance} not {halt decider}.
> By defining {halting problem instance} I prove that H/D is a
> {halting problem instance}. Thus no actual need for additional
> more convoluted cases that copy their input.

So, either your {Halting Problem Instance} uses an ACTUAL {Halt Decider}
or it is just a strawman.

There is nothing in the Halting Theory that says you can't build a
decider that decides on SOME cases.
>
>> needs to CORRECT answer about the HALTING PROPERTY, which is about the
>> actual behavior of the machine described by the input.
>>
>
> H is a decidability decider for itself with its input.
> Rice's theorem says this is impossible.

But the problem is your input isn't from a Turing Complete programming
environment, so Rice doesn't apply.

Until you show how H can take a truly arbitrary program, including one
that has its own copy of your decider, you haven't met the
requirements to try to invoke Rice.

>
> By making slight changes to H it will report whether
> or not it will be able to determine the halt status
> of an input.

Which means your H is no longer a single program, and thus not "A Program"

>
> In this case H returns 1 for halting and not halting
> and returns 0 for pathological input.

So, show the code that does this. Remember, it needs to handle a Turing
Complete input, so needs to work on programs with their own copy of H,
so the address match trick doesn't work.

You keep on just assuming that impossible tasks can be done (that they
are just difficult), but you just gloss over them.

olcott
Jun 26, 2023, 9:13:41 PM
Any idiot can be a mere naysayer:
Which words would you use to make my question clearer?

>>>>
>>>>> Note, X may be able to decide on many other inputs that property
>>>>> correctly, but can not do so for this particular one.
>>>>>
>>>>> Note, since the decider was defined to take ANY program as input,
>>>>> this sort of property becomes undecidable.
>>>>>
>>>>
>>>> H does correctly determine that its input has a pathological
>>>> relationship to H and specifically rejects its input on this basis.
>>>
>>> Only because it restricts its input to a non-turing complete set of
>>> inputs. Remeber, you have defined that H can not be copied into D,
>>> for "reasons", which shows that the input set isn't Turing Complete.
>>>
>>
>> This limitation is not actually required. The alternative
>> requires inline_H to have very messy inline assembly language
>> that forces all function calls to be at an absolute rather than
>> relative machine address.
>
> Nope, the key point is that an actual Decider that accepts inputs that
> can define copies of itself can't actually recognize when a program
> calls a copy of itself as a "pathological" call to itself.
>
> When I pointed this out to you before, you answer was that it was
> impossible to create a "copy" of your H.
>
> Why does the copy of H need some messy inline assembly when the original
> one didn't? Why can't we just copy the actual code of H?

Even if I made a single ten-page-long function that is both D and H,
it still needs to call other functions that are part of the operating
system; otherwise H cannot do output and such.

Every function call uses relative addressing so a copy of the function
would call into the middle of garbage. I can override this with very
cumbersome embedded assembly language. No sense doing that. If people
can't understand a ten line C function a 600 line function with lots
of embedded assembly language won't help.

>>
>>>>
>>>>> This is the basis of Rice's Theorem. Note, your configuration where
>>>>> Y is made within the address space of X, and must directly call the
>>>>> deciding X and not able to use another copy of it
>>>>
>>>> It took me the last two days to solve this issue in a better
>>>> way than the way that took me six months to derive. I also
>>>> reiterated and simplified my original method.
>>>>
>>>> This effort was not actually required because my simpler
>>>> form of the halting problem instance commonly understood
>>>> to be a halting problem instance.
>>>>
>>>
>>> But it isn't actually one, so it isn't. You are just lying and
>>> serving Strawman.
>
> Since your H can't take in ALL programs as an input, the partial
> solution is just a strawman.
>
>>>
>>>> A halting problem instance only requires that an input D do
>>>> the opposite of whatever Boolean value that any corresponding
>>>> H could possibly return.
>>>
>>> No, a Halting Decider
>>
>> I am defining {halting problem instance} not {halt decider}.
>> By defining {halting problem instance} I prove that H/D is a
>> {halting problem instance}. Thus no actual need for additional
>> more convoluted cases that copy their input.
>
> So, either your {Halting Problem Instance} uses an ACTUAL {Halt Decider}
> or it is just a strawman.

H is a termination analyzer.

> There is nothing in the Halting Theory that says you can't build a
> decider that decides on SOME cases.
>>
>>> needs to CORRECT answer about the HALTING PROPERTY, which is about
>>> the actual behavior of the machine described by the input.
>>>
>>
>> H is a decidability decider for itself with its input.
>> Rice's theorem says this is impossible.
>
> But the problem is your input isn't from a Turing Complete programming
> environmenet, so Rice doesn't apply.
>

Did you know that not every algorithm actually requires unlimited
memory? H need not at all be Turing complete.

> Until you show how H can take a truly arbitrary program, including one
> that has its own copy of your decider, then you haven't met the
> requirements to try to invoke Rice.
>

I will never convince you of anything because your primary goal is
rebuttal.

>>
>> By making slight changes to H it will report whether
>> or not it will be able to determine the halt status
>> of an input.
>
> Which means your H is no longer a single program, and thus not "A Program"
>
>>
>> In this case H returns 1 for halting and not halting
>> and returns 0 for pathological input.
>
> So, show the code that does this. Remember, it needs to handle a Turing
> Complete input,

You are using that term incorrectly.

> so needs to work on programs with their own copy of H,
> so the address match trick doesn't work.
>

If hardly anyone acknowledges that they understand that D
correctly simulated by H cannot possibly terminate normally
(FREAKING ONLY 9 LINES OF FREAKING CODE), there is no chance
they will understand a function with 600 lines and lots of
embedded assembly language.

H/D is a halting problem instance; if you want to lie
about this you are not fooling me.

Richard Damon
Jun 26, 2023, 10:13:57 PM
A world where you never use improper definitions.

Like your later talking about "{Halting Problem Instance}" as something
that is somehow like but not like a "{Halting Decider}", or that you can
change the value of a return statement in a program but still have the
"same" program.

You have burned that bridge.

>
>>>>>
>>>>>> Note, X may be able to decide on many other inputs that property
>>>>>> correctly, but can not do so for this particular one.
>>>>>>
>>>>>> Note, since the decider was defined to take ANY program as input,
>>>>>> this sort of property becomes undecidable.
>>>>>>
>>>>>
>>>>> H does correctly determine that its input has a pathological
>>>>> relationship to H and specifically rejects its input on this basis.
>>>>
>>>> Only because it restricts its input to a non-turing complete set of
>>>> inputs. Remeber, you have defined that H can not be copied into D,
>>>> for "reasons", which shows that the input set isn't Turing Complete.
>>>>
>>>
>>> This limitation is not actually required. The alternative
>>> requires inline_H to have very messy inline assembly language
>>> that forces all function calls to be at an absolute rather than
>>> relative machine address.
>>
>> Nope, the key point is that an actual Decider that accepts inputs that
>> can define copies of itself can't actually recognize when a program
>> calls a copy of itself as a "pathological" call to itself.
>>
>> When I pointed this out to you before, you answer was that it was
>> impossible to create a "copy" of your H.
>>
>> Why does the copy of H need some messy inline assembly when the
>> original one didn't? Why can't we just copy the actual code of H?
>
> Even if I made a single ten page long function that is both D and H
> it still needs to call other functions that are part of the operating
> system otherwise H cannot do output and such.
>

Remember, Turing Machines don't have "operating systems", so that isn't
an issue. Yes, in a Turing Complete language, you might have some "built
ins" that act like "instructions" that are simple calls, to do things
like I/O. The key is that without the OS providing that as a basic
function, the "user" code COULD just do that operation.

A "Halt Decider" isn't such a primitive.

> Every function call uses relative addressing so a copy of the function
> would call into the middle of garbage. I can override this with very
> cumbersome embedded assembly language. No sense doing that. If people
> can't understand a ten line C function a 600 line function with lots
> of embedded assembly language won't help.

But a program can know which functions are "system" functions that have
fixed locations, and don't get relative addressing, so not an issue.

Maybe you haven't had to do PIC before (Position Independent Code).

>
>>>
>>>>>
>>>>>> This is the basis of Rice's Theorem. Note, your configuration
>>>>>> where Y is made within the address space of X, and must directly
>>>>>> call the deciding X and not able to use another copy of it
>>>>>
>>>>> It took me the last two days to solve this issue in a better
>>>>> way than the way that took me six months to derive. I also
>>>>> reiterated and simplified my original method.
>>>>>
>>>>> This effort was not actually required because my simpler
>>>>> form of the halting problem instance commonly understood
>>>>> to be a halting problem instance.
>>>>>
>>>>
>>>> But it isn't actually one, so it isn't. You are just lying and
>>>> serving Strawman.
>>
>> Since your H can't take in ALL programs as an input, the partial
>> solution is just a strawman.
>>
>>>>
>>>>> A halting problem instance only requires that an input D do
>>>>> the opposite of whatever Boolean value that any corresponding
>>>>> H could possibly return.
>>>>
>>>> No, a Halting Decider
>>>
>>> I am defining {halting problem instance} not {halt decider}.
>>> By defining {halting problem instance} I prove that H/D is a
>>> {halting problem instance}. Thus no actual need for additional
>>> more convoluted cases that copy their input.
>>
>> So, either your {Halting Problem Instance} uses an ACTUAL {Halt
>> Decider} or it is just a strawman.
>
> H is a termination analyzer.

So, are you admitting it doesn't meet the requirements of a "Halt
Decider"? (and thus doesn't mean anything to the Halting Theorem)

>
>> There is nothing in the Halting Theory that says you can't build a
>> decider that decides on SOME cases.
>>>
>>>> needs to give a CORRECT answer about the HALTING PROPERTY, which is about
>>>> the actual behavior of the machine described by the input.
>>>>
>>>
>>> H is a decidability decider for itself with its input.
>>> Rice's theorem says this is impossible.
>>
>> But the problem is your input isn't from a Turing Complete programming
>> environment, so Rice doesn't apply.
>>
>
> Did you know that not every algorithm actually requires unlimited
> memory? H need not at all be Turing complete.

Not talking about unlimited memory. I am talking about being able to
give it an arbitrary but finite program. You don't seem to understand that.

>
>> Until you show how H can take a truly arbitrary program, including one
>> that has its own copy of your decider, then you haven't met the
>> requirements to try to invoke Rice.
>>
>
> I will never convince you of anything because your primary goal is
> rebuttal.

No, my primary goal is TRUTH. When you state a falsehood, I correct it.
You don't seem to have such a goal, as you don't try to point out the
error in what I say; you just repeat your ERROR and say it should
be obvious.

The only obvious thing is that you don't actually have a way to really
prove what you are saying, since you bottom out at the level where you
can discuss things, and below that everything just has to be taken as
true without proof; if you do try to be more definitive, the errors
become too obvious to hide.

>
>>>
>>> By making slight changes to H it will report whether
>>> or not it will be able to determine the halt status
>>> of an input.
>>
>> Which means your H is no longer a single program, and thus not "A
>> Program"
>>
>>>
>>> In this case H returns 1 for halting and not halting
>>> and returns 0 for pathological input.
>>
>> So, show the code that does this. Remember, it needs to handle a
>> Turing Complete input,
>
> You are using that term incorrectly.

How? Do you understand what "Turing Complete" means?

>
>> so needs to work on programs with their own copy of H, so the address
>> match trick doesn't work.
>>
>
> If hardly anyone acknowledges that they understand that D
> correctly simulated by H cannot possibly terminate normally
> FREAKING ONLY 9 LINES OF FREAKING CODE

Since H DOESN'T "Correctly Simulate" its input and give an answer, that
statement is illogical. It has the same truth value as the liar's paradox.

>
> There is no chance they will understand a function with 600
> lines and lots of embedded assembly language.

Try me.

>
> H/D is a halting problem instance if you want to lie
> about this you are not fooling me.
>


So, going back to your made up terminology that just shows you aren't
actually working on the Halting Problem.

Either H actually claims to be a Halting Decider, or it doesn't. If it
doesn't, your argument is moot. If it does, then you are a liar, as it
doesn't meet the requirement to accept a Turing Complete system as its
input.

olcott

unread,
Jun 26, 2023, 11:34:09 PM6/26/23
to
I wrote this operating system and the simulated code must call a
function template in its own code in order for the operating system to
intercept this call and forward the call to itself.

> Maybe you haven't had to do PIC before (Position Independent Code).

Is there a C compiler that generates a COFF file of this?
https://en.wikipedia.org/wiki/Turing_completeness
A finite program could require googolplex^googolplex
more bytes than there are atoms in the universe.

>>
>>> Until you show how H can take a truly arbitrary program, including
>>> one that has its own copy of your decider, then you haven't met the
>>> requirements to try to invoke Rice.
>>>
>>
>> I will never convince you of anything because your primary goal is
>> rebuttal.
>
> No, my primary goal is TRUTH. When you state a falsehood, I correct it.
> You don't seem to have such a goal, as you don't try to point out the
> error in what I say; you just repeat your ERROR and say it should
> be obvious.
>

If your primary goal is truth you would agree with the true
things that I say.

> The only obvious thing is that you don't actually have a way to really
> prove what you are saying, since you bottom out at the level where you
> can discuss things, and below that everything just has to be taken as
> true without proof; if you do try to be more definitive, the errors
> become too obvious to hide.
>

It is true that H can be slightly adapted such that it recognizes
and rejects inputs that do the opposite of whatever their termination
analyzer returns and accepts the rest.

To the best of my knowledge this <is> a breakthrough that
no one else has ever had.

When Ĥ is applied to ⟨Ĥ⟩
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

*Another breakthrough that no one else has ever had* is that when
⟨Ĥ⟩ ⟨Ĥ⟩ is simulated by embedded_H it cannot possibly reach its own
simulated ⟨Ĥ.qn⟩ or ⟨Ĥ.qy⟩ in any finite number of simulated steps.

*One more that I had 19 years ago, although I did not word it as well*
When the halting problem is construed as requiring a correct yes/no
answer to a contradictory question it cannot be solved. Any input D
defined to do the opposite of whatever Boolean value that its
termination analyzer H returns is a contradictory input relative to H.

Richard Damon

unread,
Jun 27, 2023, 7:52:19 AM6/27/23
to
No, UTM86 isn't really an operating system in the classic sense, since it
itself runs under an operating system. It provides ZERO hardware support
itself.



>
>> Maybe you haven't had to do PIC before (Position Independent Code).
>
> Is there a C compiler that generates a COFF file of this?

I haven't looked, but I think GCC can generate COFF, and I know it can
generate PIC. I think you can also convert the normal ELF output to
COFF. (And what's so special about COFF except that it is what Microsoft
uses?)

Yes, in ultra-precise usage, full Turing Completeness is impossible to
build, but in practical terms, the memory limit can be waived when
looking at physical machines, as that normally doesn't turn out to be the
actual issue.

If an architecture could theoretically be expanded to any arbitrary
finite amount of memory by upgrading the address space, or allows the
mounting of additional "external" memory, and thus an unbounded amount
of memory could theoretically be presented, then such an architecture is
generally considered "Turing Complete" if it meets the other
requirements, which you don't seem to understand.

So, the C programming language is strictly Turing Complete, as the
language itself doesn't provide an upper bound on the memory that the
program could access (even though any actual implementation will have
one, since the sizes of its variables will be finite).

The x86 assembly language is considered practically Turing Complete, as
the instruction set is powerful enough, and if the direct memory
accessible isn't enough for a given problem, we can, in theory, either
define a new version with wider registers, or extend memory with some
form of external store that we "page" into parts of the memory.

Your system fails this, as for some reason "H" can't be copied.

Note, a "Proper" decider H, should be given as an input the description
of a COMPLETE program, which would be an input which has ALL of its code
(and thus for D, it would include its own copy of H). The H in D needs
to be an independent instance from the instance of the decider.

>
>>>
>>>> Until you show how H can take a truly arbitrary program, including
>>>> one that has its own copy of your decider, then you haven't met the
>>>> requirements to try to invoke Rice.
>>>>
>>>
>>> I will never convince you of anything because your primary goal is
>>> rebuttal.
>>
>> No, my primary goal is TRUTH. When you state a falsehood, I correct
>> it. You don't seem to have such a goal, as you don't try to point out
>> the error in what I say; you just repeat your ERROR and say it
>> should be obvious.
>>
>
> If your primary goal is truth you would agree with the true
> things that I say.

Except you rarely say true things. The issue seems to be that you
fundamentally don't understand what Truth is, or what is actually valid
logic, so you season everything you say with untruth, and just a tiny
bit of untruth makes a statement untrue.


>
>> The only obvious thing is that you don't actually have a way to really
>> prove what you are saying, since you bottom out at the level where you
>> can discuss things, and below that everything just has to be taken as
>> true without proof; if you do try to be more definitive, the errors
>> become too obvious to hide.
>>
>
> It is true that H can be slightly adapted such that it recognizes
> and rejects inputs that do the opposite of whatever their termination
> analyzer returns and accepts the rest.

Then DO IT. Note, a "slightly adapted" program is no longer the same
program by computational analysis criteria.

>
> To the best of my knowledge This <is> a breakthrough that
> no one else has ever had.

Except you can't show what you claim, so even you don't have it. You may
show something that matches part of what you claim, but then when you
apply it to the actual Halting Problem, it falls apart as it was based
on incorrect definitions.

>
> When Ĥ is applied to ⟨Ĥ⟩
> Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
> Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

Except if embedded_H isn't an exact equivalent to H, that result is
meaningless.

If embedded_H IS identical to H, then to be a decider, it MUST get to
one of the output states, and if it goes to the top output state, that
is saying the decider will say the input halts, when it doesn't, and if
it goes to the bottom output state, it says that the input never halts
when it does.

So, it is wrong in all cases.

Only in your imagination where embedded_H is a "copy" of H, but can
behave differently, "because of reasons" (unexplained) do you get the
right answer.

So, you need to show the step in the execution of embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
that differs from the execution of H ⟨Ĥ⟩ ⟨Ĥ⟩ even though they have the
exact same code, based just on the "context" of the usage, when Turing
Machine behavior doesn't have such a concept.

>
> *Another breakthrough that no one else has ever had* is that when
> ⟨Ĥ⟩ ⟨Ĥ⟩ is simulated by embedded_H it cannot possibly reach its own
> simulated ⟨Ĥ.qn⟩ or ⟨Ĥ.qy⟩ in any finite number of simulated steps.

So? The question isn't about a partial simulation done by a decider, but
about the behavior of the actual machine, and for the "proof program",
it uses a copy of the actual decider that is claimed to correctly decide it.

>
> *One more that I had 19 years ago, although I did not word it as well*
> When the halting problem is construed as requiring a correct yes/no
> answer to a contradictory question it cannot be solved. Any input D
> defined to do the opposite of whatever Boolean value that its
> termination analyzer H returns is a contradictory input relative to H.
>

No, HALTING of ANY SPECIFIC PROGRAM is ALWAYS defined. The problem you
run into is you neglect to actually fully define H, so you can't fully
define H^/P/D, so you don't have an actual program to decide on.

You don't seem to understand that H will do what it does and that is
fixed by what it is and its program. There is no "Get the Right Answer"
instruction, so H can't use it.

olcott

unread,
Jun 27, 2023, 12:27:27 PM6/27/23
to
The Peter Linz proof stipulates that embedded_H is a verbatim
identical copy of H.

> If embedded_H IS identical to H, then to be a decider, it MUST get to
> one of the output states, and if it goes to the top output state, that
> is saying the decider will say the input halts, when it doesn't, and if
> it goes to the bottom output state, it says that the input never halts
> when it does.
>

You keep getting confused between Bill and his identical twin brother
Sam. embedded_H is embedded within Bill and Bill halts. His identical
twin brother Sam cannot possibly reach ⟨Ĥ.qy⟩ or ⟨Ĥ.qn⟩, thus Sam does
not halt even though Bill does halt.

It is incorrect to convict Sam of a crime that eyewitnesses saw
Sam do when they were actually seeing Bill do it.

> So, it is wrong in all cases.
>
> Only in your imagination where embedded_H is a "copy" of H, but can
> behave differently, "because of reasons" (unexplained) do you get the
> right answer.
>

It is easy to see that Sam keeps calling embedded_H so that Sam
cannot possibly reach ⟨Ĥ.qy⟩ or ⟨Ĥ.qn⟩. The only way that I can see
that you don't see this is that you do see it and lie.

> So, you need to show the step in the execution of embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
> that differs from the execution of H ⟨Ĥ⟩ ⟨Ĥ⟩ even though they have the
> exact same code, based just on the "context" of the usage, when Turing
> Machine behavior doesn't have such a concept.

The behavior of D simulated by H is different than the behavior of
the directly executed D(D) because these two are out-of-sync by one
invocation. D simulated by H is before H aborts its simulation of D.
The directly executed D(D) is after H has aborted its simulation of D.

It is dead obvious that from the frame-of-reference of H that D does
not halt otherwise H would not have to abort its simulation of D to
prevent its own infinite execution.

>>
>> *Another breakthrough that no one else has ever had* is that when
>> ⟨Ĥ⟩ ⟨Ĥ⟩ is simulated by embedded_H it cannot possibly reach its own
>> simulated ⟨Ĥ.qn⟩ or ⟨Ĥ.qy⟩ in any finite number of simulated steps.
>
> So? The question isn't about a partial simulation done by a decider, but
> about the behavior of the actual machine, and for the "proof program",
> it uses a copy of the actual decider that is claimed to correctly decide
> it.
>
>>
>> *One more that I had 19 years ago, although I did not word it as well*
>> When the halting problem is construed as requiring a correct yes/no
>> answer to a contradictory question it cannot be solved. Any input D
>> defined to do the opposite of whatever Boolean value that its
>> termination analyzer H returns is a contradictory input relative to H.
>>
>
> No, HALTING of ANY SPECIFIC PROGRAM is ALWAYS defined.

Not from the frame-of-reference of some termination analyzer /
input pairs.

*YOU ALREADY ADMITTED (a paraphrase of) THIS*
The question "Does D halt on its input?" is contradictory for every
termination analyzer H on input D where D does the opposite of
whatever Boolean value that H returns.

> The problem you
> run into is you neglect to actually fully defined H, so you can't fully
> define H^/P/D, so you don't have an actual program to decide on.
>

H/D {halting problem instance pairs} are every termination
analyzer H such that input D does the opposite of whatever
Boolean value that H returns.

In this case H can range from simply ignoring its input and
returning either True or False to the most sophisticated
termination analyzer that can possibly exist having encoded
within it the sum total of all human general knowledge about
every subject known to man.

*A limited subset of such a system already agreed*

*ChatGPT*
Therefore, based on the understanding that self-contradictory
questions lack a correct answer and are deemed incorrect, one
could argue that the halting problem's pathological input D can
be categorized as an incorrect question when posed to the halting
decider H.
https://www.liarparadox.org/ChatGPT_HP.pdf

From the frame-of-reference of H (the full context of the question)
input D is merely a contradictory thus incorrect question.
This is true for all of the halting problem instance pairs.

The only counter-argument changes the subject to a different set
of (termination analyzer / input) pairs thus is an example of the
strawman error of reasoning.

> You don't seem to understand that H will do what it does and that is
> fixed by what it is and its program. There is no "Get the Right Answer"
> instruction, so H can't use it.
>

If H is based on deep learning technology then the computation
is not fixed yet can vary from one instance to the next as
H learns more from practice.

As can be seen a much smarter H understands that input D to
termination analyzer H is simply an incorrect question from
the frame-of-reference of H.

Richard Damon

unread,
Jun 27, 2023, 7:02:53 PM6/27/23
to
Right. So it behaves just like H, and thus if H goes to qn, so does
embedded_H when given the same input, so H^ (H^) halts when run, so H
was wrong.



>> If embedded_H IS identical to H, then to be a decider, it MUST get to
>> one of the output states, and if it goes to the top output state, that
>> is saying the decider will say the input halts, when it doesn't, and
>> if it goes to the bottom output state, it says that the input never
>> halts when it does.
>>
>
> You keep getting confused between Bill and his identical twin brother
> Sam. embedded_H is embedded within Bill and Bill halts. His identical
> twin brother Sam cannot possibly reach ⟨Ĥ.qy⟩ or ⟨Ĥ.qn⟩, thus Sam does
> not halt even though Bill does halt.
>

Nope, YOU do. The question is about the actual running of the machine,
not about some simulation of it.

> It is incorrect to convict Sam of a crime that eyewitnesses saw
> Sam do when they were actually seeing Bill do it.
>

So why do you blame H^/P/D for not halting when it does?

You even admit that it does, but you say it is correct to say it doesn't.

>> So, it is wrong in all cases.
>>
>> Only in your imagination where embedded_H is a "copy" of H, but can
>> behave differently, "because of reasons" (unexplained) do you get the
>> right answer.
>>
>
> It is easy to see that Sam keeps calling embedded_H so that Sam
> cannot possibly reach ⟨Ĥ.qy⟩ or ⟨Ĥ.qn⟩. The only way that I can see
> that you don't see this is that you do see it and lie.

Nope, Sam calls embedded_H, which starts a simulation of Sam, sees that
this simulation of Sam will call embedded_H, so embedded_H returns a
result saying it INCORRECTLY thinks that Sam will never halt; then we
can see that Sam actually halts, so embedded_H is guilty of a false
accusation of non-halting.

>
>> So, you need to show the step in the execution of embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
>> that differs from the execution of H ⟨Ĥ⟩ ⟨Ĥ⟩ even though they have the
>> exact same code, based just on the "context" of the usage, when Turing
>> Machine behavior doesn't have such a concept.
>
> The behavior of D simulated by H is different than the behavior of
> the directly executed D(D) because these two are out-of-sync by one
> invocation. D simulated by H is before H aborts its simulation of D.
> The directly executed D(D) is after H has aborted its simulation of D.

Then the simulation of D is not correct. PERIOD.

That is, you need to show HOW a CORRECTLY SIMULATED step of D causes
that difference.

Note, claiming behavior of a simulated call that doesn't match the
actual behavior is NOT "Correctly simulating" it.

>
> It is dead obvious that from the frame-of-reference of H that D does
> not halt otherwise H would not have to abort its simulation of D to
> prevent its own infinite execution.

No, it is obvious that H is just WRONG about the behavior of D, because
it doesn't properly consider its own behavior, and thus is wrong.

>
>>>
>>> *Another breakthrough that no one else has ever had* is that when
>>> ⟨Ĥ⟩ ⟨Ĥ⟩ is simulated by embedded_H it cannot possibly reach its own
>>> simulated ⟨Ĥ.qn⟩ or ⟨Ĥ.qy⟩ in any finite number of simulated steps.
>>
>> So? The question isn't about a partial simulation done by a decider,
>> but about the behavior of the actual machine, and for the "proof
>> program", it uses a copy of the actual decider that is claimed to
>> correctly decide it.
>>
>>>
>>> *One more that I had 19 years ago, although I did not word it as well*
>>> When the halting problem is construed as requiring a correct yes/no
>>> answer to a contradictory question it cannot be solved. Any input D
>>> defined to do the opposite of whatever Boolean value that its
>>> termination analyzer H returns is a contradictory input relative to H.
>>>
>>
>> No, HALTING of ANY SPECIFIC PROGRAM is ALWAYS defined.
>
> Not from the frame-of-reference of some termination analyzer /
> input pairs.

So, your termination analyzer is just broken and not an actual Halt
Decider, as the input DOES have a defined halting status.

>
> *YOU ALREADY ADMITTED (a paraphrase of) THIS*
> The question "Does D halt on its input?" is contradictory for every
> termination analyzer H on input D where D does the opposite of
> whatever Boolean value that H returns.

But your "Paraphrase" is inaccurate, so a LIE.

Yes, D always does the opposite, so D is contrary to H.

The question of "Does D Halt?" isn't contradictory, as for any given D,
there is a given H with given behavior, and that D will have defined
behavior, which will just happen to be the opposite of the given
behavior that given H gives.

Thus, the halting QUESTION isn't contradictory. Your POOP question is.


>
>> The problem you run into is you neglect to actually fully defined H,
>> so you can't fully define H^/P/D, so you don't have an actual program
>> to decide on.
>>
>
> H/D {halting problem instance pairs} are every termination
> analyzer H such that input D does the opposite of whatever
> Boolean value that H returns.

So? In every instance D has defined behavior and H fails to predict it,
so is wrong.

>
> In this case H can range from simply ignoring its input and
> returning either True or False to the most sophisticated
> termination analyzer that can possibly exist having encoded
> within it the sum total of all human general knowledge about
> every subject known to man.

Right, so, since every possible H is wrong, a correct H can not be made,
showing that Halting is not computable.

>
> *A limited subset of such a system already agreed*
>
> *ChatGPT*
>    Therefore, based on the understanding that self-contradictory
>    questions lack a correct answer and are deemed incorrect, one
>    could argue that the halting problem's pathological input D can
>    be categorized as an incorrect question when posed to the halting
>    decider H.
> https://www.liarparadox.org/ChatGPT_HP.pdf
>
> From the frame-of-reference of H (the full context of the question)
> input D is merely a contradictory thus incorrect question.
> This is true for all of the halting problem instance pairs.
>
> The only counter-argument changes the subject to a different set
> of (termination analyzer / input) pairs thus is an example of the
> strawman error of reasoning.

Nope, you are just showing your stupidity and leaning on a strawman.

>
>> You don't seem to understand that H will do what it does and that is
>> fixed by what it is and its program. There is no "Get the Right
>> Answer" instruction, so H can't use it.
>>
>
> If H is based on deep learning technology then the computation
> is not fixed yet can vary from one instance to the the next as
> H learns more from practice.
>

Nope, "Deep Learning" can't help. Either you make a first training run
to build up the learning set, then build D on the trained H (which H
couldn't have learned from, since that trained H didn't exist at the time).

Or, your "Decider" doesn't meet the requirements of a Computation, as it
has something that affects its behavior that isn't part of its input.
When you build the D to give to H, it must be exactly the H that will
decide it, so if it has been learning and storing that info, you need to
build the D on the EXACT H that is going to answer about it, and that
includes any changes it has made to itself by learning.

This seems to be one of the aspects you keep on forgetting about
Computations, or perhaps something you never learned due to your
self-enforced ignorance of the subject.

> As can be seen a much smarter H understands that input D to
> termination analyzer H is simply an incorrect question from
> the frame-of-reference of H.
>

Except that since D is built on the H that is claimed to be correct, D
is smarter than H, and if H changes itself to try to be smarter, D gets
that new smarter H to make itself even smarter still.