# Re: Halting Problem Solved (Black Box Decider Theory) [ Flibble is correct ]

### olcott

Jul 17, 2021, 12:40:09 PM
On 7/17/2021 9:37 AM, David Brown wrote:
> On 16/07/2021 21:56, Mr Flibble wrote:
>> On Fri, 16 Jul 2021 19:46:28 -0000 (UTC)
>> Alan Mackenzie <a...@muc.de> wrote:
>>
>>> Mr Flibble <fli...@reddwarf.jmc> wrote:
>>>> On Fri, 16 Jul 2021 19:28:12 +0100
>>>> Ben Bacarisse <ben.u...@bsb.me.uk> wrote:
>>>
>>>>> Mr Flibble <fli...@reddwarf.jmc> writes:
>>>
>>>>>> A pathological program that executes a black box decider and
>>>>>> returns the opposite result can be detected by the black box
>>>>>> decider.
>>>
>>>>>> So there are THREE possible results the black box decider can
>>>>>> return:
>>>
>>>>>> 1) Program halts
>>>>>> 2) Program does not halt
>>>>>> 3) Program is pathological and can be discarded as invalid.
>>>
>>>>>> Halting problem solved.
>>>
>>>>> This one comes up every year when one teaches this material. Every.
>>>>> Single. Year. In the end, I decided to bring it up myself and set
>>>>> explaining what's wrong with the argument as an exercise. I wonder if
>>>>> any of the students here would like to have a go at that?
>>>
>>>>> (By student, I mean anyone reading this who has not read a
>>>>> textbook on this topic.)
>>>
>>>> I simply don't believe you. You appear to be a pathological liar;
>>>> feel free to provide evidence of what you are claiming.
>>>
>>> That's quite uncalled for. Reading (or browsing) these interminable
>>> threads over the last months, it's quite clear that Ben is an expert.
>>> You, by contrast, are merely exercising your freedom of expression.
>>>
>>> That you, in all apparent seriousness, put forward your 1), 2), 3)
>>> above indicates you are far from an expert. These things aren't a
>>> matter of opinion, they're a matter of settled fact.
>>
>> Expert? I have a CompSci BSc (Hons) degree, dear.
>
> So do lots of people - but some paid attention in class, and some have
> enough mathematical understanding to know how proof by contradiction
> works, and also to know that you cannot "solve" a problem by re-defining
> it to suit yourself.
>

He is much smarter than most here in that he understands the following:
when an input is deliberately defined to do the opposite of whatever its
corresponding halt decider decides, there is pathological communication
between the input and the halt decider.

Every scientist knows that the independent variable and the dependent
variable must be completely isolated so that the only effect on the
dependent variable comes from the independent variable.

By providing a pathological back channel through which the dependent
variable can affect the behavior of the independent variable, the halt
status analysis is tainted.

> AFAIUI, Ben /teaches/ this stuff. Whereas you probably slept through
> the classes and dodged anything involving proof, and have forgotten
> anything you did learn over the decades since university, he has covered
> it in much greater depth, and done so regularly and repeatedly.
>
> Yet you think not only that /you/ are an expert here, but that you know
> so much about it that you can accuse Ben of lying about what his
>
>> Also, "far from an
>> expert" is an attack on the person and not the argument, a logical
>> fallacy, dear.
>>
>
> That is a bit rich, considering your accusations.
>
>>>
>>> When it comes to the halting problem, there is no dispute, except
>>> amongst cranks. It has a rock solid proof, much as do the trisection
>>> of the angle or the squaring of the circle.
>>
>> Cranks? Ad hom logical fallacy (again), dear.
>>
>
> Yes, cranks. The world is full of them.
>
>> Rock solid proof? Prove it, dear.
>>
>
> Look it up, think about it, or go back to school and re-do the classes
> you skipped.
>

--
Copyright 2021 Pete Olcott

"Great spirits have always encountered violent opposition from mediocre
minds." Einstein

### David Brown

Jul 18, 2021, 5:43:29 AM
Mr. Flibble might be smart - I wouldn't put it past him to be trolling
all along here.

>
> Every scientist knows that the independent variable and the dependent
> variable must be completely isolated so that the only effect on the
> dependent variable comes from the independent variable.
>
> By providing a pathological back channel through which the dependent
> variable can affect the behavior of the independent variable, the halt
> status analysis is tainted.

The halting problem concerns finding a program H that takes an
/arbitrary/ program P and an /arbitrary/ input X and determines whether
it will halt or not (giving a yes/no answer).

Now, I agree that the simplest and most common proof that there cannot
be such a program H is based on a "pathological" case from proof by
contradiction:

G() =
if H(G, G) then loop forever
else 0

With that program, H(G, G) is guaranteed to give the wrong answer -
therefore no such H exists.

Since this applies for /any/ conceivable H, there are no "dependent
variables". This (when made rigorous) proves that the halting problem
is undecidable.
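
The G() construction above can be sketched in Python. This is a toy
illustration only: `make_g` and the sample `h` functions are hypothetical
names of mine, and the two sample deciders stand in for any claimed H.

```python
def make_g(h):
    """Build the pathological program G from a claimed halt decider h.

    h(p, x) is assumed to return True iff program p halts on input x.
    """
    def g():
        if h(g, g):          # h predicts that G halts on itself...
            while True:      # ...so G loops forever instead,
                pass
        return 0             # otherwise (h predicts looping) G halts.
    return g

# Whatever h answers about make_g(h), it is wrong:
def h_says_halts(p, x):
    return True

def h_says_loops(p, x):
    return False

g1 = make_g(h_says_halts)    # h says "halts", but calling g1() loops forever
g2 = make_g(h_says_loops)    # h says "loops", but g2() halts at once
assert g2() == 0
```

The sketch shows only the shape of the argument: for every candidate h,
the derived g inverts h's verdict about itself, so no h can be correct
on all inputs.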

But suppose - just for fun - we imagined that you are right - what would
that give us?

It would mean you have a claim (without proof - but we are assuming you
are correct) that you can find an algorithm H that correctly tells if a
given computation will halt or not, but only works for some functions.
If you call H with input (P, X) which it likes, it will correctly give
out yes or no. If it doesn't like (P, X) - the input is "pathological",
then it might say "yes", it might say "no", it might say "neither", it
might never stop with an answer.

Basically, it would be useless.
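
For contrast, here is a hedged sketch of the *reliable* three-outcome
kind of partial decider discussed later in the thread. The modeling of a
program as a step function over explicit states is my own assumption,
not anything anyone in the thread implemented.

```python
def partial_decider(step, start, max_steps=10_000):
    """Three-outcome partial 'decider' for toy programs.

    step(state) returns the next state, or None when the program halts.
    Returns "halts", "loops" (an exact earlier state repeated, which
    provably recurs forever), or "unknown" (budget exhausted).
    """
    seen = {start}
    state = start
    for _ in range(max_steps):
        state = step(state)
        if state is None:
            return "halts"
        if state in seen:       # exact repetition: guaranteed non-halting
            return "loops"
        seen.add(state)
    return "unknown"            # no guarantee either way

# A counter that halts at 5:
print(partial_decider(lambda n: None if n >= 5 else n + 1, 0))  # halts
# A two-state oscillator:
print(partial_decider(lambda n: 1 - n, 0))                      # loops
# A counter that never halts and never repeats within the budget:
print(partial_decider(lambda n: n + 1, 0))                      # unknown
```

The key design point: "halts" and "loops" here are sound whenever they
are produced, and all uncertainty is funneled into the honest "unknown".
A tool that might instead answer "yes" or "no" arbitrarily on inputs it
cannot handle gives no such guarantee, which is the uselessness being
argued above.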

### olcott

Jul 19, 2021, 9:41:19 AM
On 7/18/2021 2:30 PM, Malcolm McLean wrote:
> On Sunday, 18 July 2021 at 19:23:02 UTC+1, olcott wrote:
>> On 7/18/2021 11:12 AM, Jeff Barnett wrote:
>>>
>>> If you check the answer reliably -- possibilities = yes and no -- then
>>> you are done and need nothing else. Correct?
>> Self-contradictory questions have no possible correct answer.
>>
> This is anthropomorphism. H_Hat isn't a "question", it's a machine
> which either halts or does not, depending on how H is written.
>

My notions of an incorrect question and an undecidable decision problem
TM/input pair are isomorphic.

Whenever a yes/no question posed to a person has no correct yes/no
answer, the question itself is incorrect.

Whenever a decision problem applied to a TM/input pair has no correct
Boolean value, the decision problem itself is incorrect for that
TM/input pair.

On Friday, June 25, 2004 at 6:30:39 PM UTC-5, Daryl McCullough wrote:
> You ask someone (we'll call him "Jack") to give a truthful
> yes/no answer to the following question:
> Will Jack's answer to this question be no?
> Jack can't possibly give a correct yes/no answer to the question.
> --
> Daryl McCullough
> Ithaca, NY
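
The Jack question can be checked exhaustively, since there are only two
possible replies. The encoding of the question as a boolean predicate is
my own illustrative choice.

```python
def reply_is_correct(reply_yes: bool) -> bool:
    """Question: "Will Jack's answer to this question be no?"

    The question's true answer is "yes" exactly when Jack replies "no".
    Jack's reply is correct when it matches that true answer.
    """
    true_answer_is_yes = not reply_yes
    return reply_yes == true_answer_is_yes

# Neither possible reply is correct:
print(reply_is_correct(True))    # False: he said "yes", truth was "no"
print(reply_is_correct(False))   # False: he said "no", truth was "yes"
```

Both branches come out incorrect, which is the precise sense in which
the question admits no correct yes/no answer.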

Pathological Input to a halt decider is defined as any input that was
defined to do the opposite of whatever its corresponding halt decider
decides.

### olcott

Jul 19, 2021, 9:43:25 AM
On 7/18/2021 10:02 AM, David Brown wrote:
> On 18/07/2021 15:42, Richard Damon wrote:
>> On 7/18/21 7:15 AM, David Brown wrote:
>>> On 18/07/2021 13:13, Andy Walker wrote:
>>>> On 18/07/2021 10:43, David Brown wrote:
>>>> ["You" == PO, of course]
>>>>> It would mean you have a claim (without proof - but we are assuming you
>>>>> are correct) that you can find an algorithm H that correctly tells if a
>>>>> given computation will halt or not, but only works for some functions.
>>>>> If you call H with input (P, X) which it likes, it will correctly give
>>>>> out yes or no.  If it doesn't like (P, X) - the input is "pathological",
>>>>> then it might say "yes", it might say "no", it might say "neither", it
>>>>> might never stop with an answer.
>>>>
>>>>     Well, /that/ claim is trivially true, eg if "H" replies "don't
>>>> know" to every instance presented to it.  There are also less trivial
>>>> "H"'s, eg one that emulates "P(X)", replies "yes" if the emulation halts,
>>>> "no" if it repeats exactly a previous state, and otherwise either just
>>>> keeps going or else gives up after some large number of steps.
>>>>
>>>>     AFAACT, given the amount of obfuscation around it, PO's
>>>> current emulator is broadly of this type.  It has though, of course,
>>>> nothing to do with the actual Halting Problem or related proofs.
>>>>
>>>>> Basically, it would be useless.
>>>>
>>>>     Although the sentiment is correct, your claim isn't.  There are
>>>> /useful/ programs that correctly reply "yes", correctly reply "no", or
>>>> halt and reply "sorry, can't tell".  [Proof checkers are of this general
>>>> nature.]  Basically, the third response is not [necessarily] caused by
>>>> pathology, but could be thought of as a plea for more information.  As
>>>> a debugging aid, "H" could point out the constructions that prevent it
>>>> from reaching a definite conclusion, such as uncounted loops without
>>>> [eg] corresponding invariants proving that the loop terminates.  This
>>>> serves as a hint to the programmer that [eg] an extra check is needed.
>>>> A lot of relevant analysis is already done by aggressive optimisation,
>>>> which is a closely-related area of program analysis.
>>>>
>>>
>>> If the function could be relied upon to answer "sorry, can't tell" (or
>>> even reliably fail to halt) for inputs when it can't determine the
>>> correct answer, it would be useful. But that's not how these partial
>>> deciders that Olcott and Mr. Flibble like actually work. They say you
>>> are not allowed to give them "pathological" inputs - inputs that the
>>> function can't handle reliably. They don't give any guarantees about
>>> the output with such inputs - their function might say "yes", or might
>>> say "no", or might say "I can't tell", or might not halt, or might
>>> launch nasal daemons.
>>>
>>> This also means that if you give the function some input and it says
>>> "Yes, this will halt", it might be wrong - perhaps the input was
>>> "pathological" and the function is just making stuff up.
>>>
>>> That's why I say it is useless.
>>>
>>
>> A function that sometimes responds 'I don't know' might be practically
>> useful, but I suspect isn't going to be useful for the Theoretical uses
>> that Turing Machines are designed for.
>>
>> A function that is sometimes wrong might still be useful in some limited
>> cases. After all, we do use medical tests that sometimes give wrong
>> answers as long as we have an idea of the likelihood and maybe even the
>> sort of cases they get wrong. They can point us to maybe needing to use
>> a more extensive test. This is especially useful if they are only wrong
>> one way (or at least for a very large percent of the cases only wrong
>> one way).
>>
>
> Agreed. Many quantum computers are machines that might produce the
> right answer, and might produce a wrong answer. The idea is that if
> they give an answer fast enough, and you can check the answer reliably
> and efficiently, then you can always run it again if it has got things
> wrong.
>
> But in this context - proving the existence or non-existence of halting
> deciders - it is certainly useless.
>

No machine can give the correct answer to a self-contradictory question.

### olcott

Jul 19, 2021, 10:16:08 AM
On 7/18/2021 8:15 AM, David Brown wrote:
> On 18/07/2021 13:13, Andy Walker wrote:
>> On 18/07/2021 10:43, David Brown wrote:
>> ["You" == PO, of course]
>>> It would mean you have a claim (without proof - but we are assuming you
>>> are correct) that you can find an algorithm H that correctly tells if a
>>> given computation will halt or not, but only works for some functions.
>>> If you call H with input (P, X) which it likes, it will correctly give
>>> out yes or no.  If it doesn't like (P, X) - the input is "pathological",
>>> then it might say "yes", it might say "no", it might say "neither", it
>>> might never stop with an answer.
>>
>>     Well, /that/ claim is trivially true, eg if "H" replies "don't
>> know" to every instance presented to it.  There are also less trivial
>> "H"'s, eg one that emulates "P(X)", replies "yes" if the emulation halts,
>> "no" if it repeats exactly a previous state, and otherwise either just
>> keeps going or else gives up after some large number of steps.
>>
>>     AFAACT, given the amount of obfuscation around it, PO's
>> current emulator is broadly of this type.  It has though, of course,
>> nothing to do with the actual Halting Problem or related proofs.
>>
>>> Basically, it would be useless.
>>
>>     Although the sentiment is correct, your claim isn't.  There are
>> /useful/ programs that correctly reply "yes", correctly reply "no", or
>> halt and reply "sorry, can't tell".  [Proof checkers are of this general
>> nature.]  Basically, the third response is not [necessarily] caused by
>> pathology, but could be thought of as a plea for more information.  As
>> a debugging aid, "H" could point out the constructions that prevent it
>> from reaching a definite conclusion, such as uncounted loops without
>> [eg] corresponding invariants proving that the loop terminates.  This
>> serves as a hint to the programmer that [eg] an extra check is needed.
>> A lot of relevant analysis is already done by aggressive optimisation,
>> which is a closely-related area of program analysis.
>>
>
> If the function could be relied upon to answer "sorry, can't tell" (or
> even reliably fail to halt) for inputs when it can't determine the
> correct answer, it would be useful. But that's not how these partial
> deciders that Olcott and Mr. Flibble like actually work. They say you
> are not allowed to give them "pathological" inputs - inputs that the

You are wrong on both counts.

Pathological Input to a halt decider is defined as any input that was
defined to do the opposite of whatever its corresponding halt decider
decides.

This question can only be correctly answered after the pathology has
been removed. When a halt decider acts only as a pure simulator of its
input until after its halt status decision is made, there is no
back-channel feedback loop between the halt decider and its input that
could prevent a correct halt status decision.

https://www.researchgate.net/publication/351947980_Halting_problem_undecidability_and_infinitely_nested_simulation

> function can't handle reliably. They don't give any guarantees about
> the output with such inputs - their function might say "yes", or might
> say "no", or might say "I can't tell", or might not halt, or might
> launch nasal daemons.
>
> This also means that if you give the function some input and it says
> "Yes, this will halt", it might be wrong - perhaps the input was
> "pathological" and the function is just making stuff up.
>
> That's why I say it is useless.
>

### olcott

Jul 19, 2021, 10:43:36 AM
On 7/18/2021 6:13 AM, Andy Walker wrote:
> On 18/07/2021 10:43, David Brown wrote:
> ["You" == PO, of course]
>> It would mean you have a claim (without proof - but we are assuming you
>> are correct) that you can find an algorithm H that correctly tells if a
>> given computation will halt or not, but only works for some functions.
>> If you call H with input (P, X) which it likes, it will correctly give
>> out yes or no.  If it doesn't like (P, X) - the input is "pathological",
>> then it might say "yes", it might say "no", it might say "neither", it
>> might never stop with an answer.
>
>     Well, /that/ claim is trivially true, eg if "H" replies "don't
> know" to every instance presented to it.  There are also less trivial
> "H"'s, eg one that emulates "P(X)", replies "yes" if the emulation halts,
> "no" if it repeats exactly a previous state, and otherwise either just
> keeps going or else gives up after some large number of steps.
>
>     AFAACT, given the amount of obfuscation around it, PO's
> current emulator is broadly of this type.  It has though, of course,
> nothing to do with the actual Halting Problem or related proofs.
>
>> Basically, it would be useless.
>
>     Although the sentiment is correct, your claim isn't.  There are
> /useful/ programs that correctly reply "yes", correctly reply "no", or
> halt and reply "sorry, can't tell".  [Proof checkers are of this general
> nature.]  Basically, the third response is not [necessarily] caused by
> pathology, but could be thought of as a plea for more information.  As
> a debugging aid, "H" could point out the constructions that prevent it
> from reaching a definite conclusion, such as uncounted loops without
> [eg] corresponding invariants proving that the loop terminates.  This
> serves as a hint to the programmer that [eg] an extra check is needed.
> A lot of relevant analysis is already done by aggressive optimisation,
> which is a closely-related area of program analysis.
>

Pathological Input to a halt decider is defined as any input that was
defined to do the opposite of whatever its corresponding halt decider
decides.

This question can only be correctly answered after the pathology has
been removed. When a halt decider acts only as a pure simulator of its
input until after its halt status decision is made, there is no
back-channel feedback loop between the halt decider and its input that
could prevent a correct halt status decision.

### olcott

Jul 19, 2021, 10:48:47 AM
If Flibble were trolling he would not have paraphrased the
pathological self-reference (Olcott 2004) error this accurately:

[Olcott's theory]
On Saturday, July 10, 2021 at 12:00:56 PM UTC-5, Mr Flibble wrote:
> I agree with Olcott that a halt decider can NOT be part of that which
> is being decided (see [Strachey 1965]) which, if Olcott is correct,
> falsifies a collection of proofs (which I don't have the time to
> examine) which rely on that mistake.
>
> /Flibble

>>
>> Every scientist knows that the independent variable and the dependent
>> variable must be completely isolated so that the only effect on the
>> dependent variable comes from the independent variable.
>>
>> By providing a pathological back channel through which the dependent
>> variable can affect the behavior of the independent variable, the halt
>> status analysis is tainted.
>
> The halting problem concerns finding a program H that takes an
> /arbitrary/ program P and an /arbitrary/ input X and determines whether
> it will halt or not (giving a yes/no answer).
>

Pathological Input to a halt decider is defined as any input that was
defined to do the opposite of whatever its corresponding halt decider
decides.

This question can only be correctly answered after the pathology has
been removed. When a halt decider acts only as a pure simulator of its
input until after its halt status decision is made, there is no
back-channel feedback loop between the halt decider and its input that
could prevent a correct halt status decision.

https://www.researchgate.net/publication/351947980_Halting_problem_undecidability_and_infinitely_nested_simulation

> Now, I agree that the simplest and most common proof that there cannot
> be such a program H is based on a "pathological" case from proof by
> contradiction:
>
> G() =
> if H(G, G) then loop forever
> else 0
>
> With that program, H(G, G) is guaranteed to give the wrong answer -
> therefore no such H exists.
>
> Since this applies for /any/ conceivable H, there are no "dependent
> variables". This (when made rigorous) proves that the halting problem
> is undecidable.
>
>
> But suppose - just for fun - we imagined that you are right - what would
> that give us?
>
> It would mean you have a claim (without proof - but we are assuming you
> are correct) that you can find an algorithm H that correctly tells if a
> given computation will halt or not, but only works for some functions.
> If you call H with input (P, X) which it likes, it will correctly give
> out yes or no. If it doesn't like (P, X) - the input is "pathological",
> then it might say "yes", it might say "no", it might say "neither", it
> might never stop with an answer.
>
> Basically, it would be useless.
>