On 7/18/2021 10:02 AM, David Brown wrote:
> On 18/07/2021 15:42, Richard Damon wrote:
>> On 7/18/21 7:15 AM, David Brown wrote:
>>> On 18/07/2021 13:13, Andy Walker wrote:
>>>> On 18/07/2021 10:43, David Brown wrote:
>>>> ["You" == PO, of course]
>>>>> It would mean you have a claim (without proof - but we are assuming you
>>>>> are correct) that you can find an algorithm H that correctly tells
>>>>> whether a given computation will halt, but only works for some inputs.
>>>>> If you call H with an input (P, X) that it likes, it will correctly
>>>>> answer yes or no. If it doesn't like (P, X) - the input is
>>>>> "pathological" - then it might say "yes", it might say "no", it might
>>>>> say "neither", or it might never stop with an answer.
>>>> Well, /that/ claim is trivially true, eg if "H" replies "don't
>>>> know" to every instance presented to it. There are also less trivial
>>>> "H"'s, eg one that emulates "P(X)", replies "yes" if the emulation halts,
>>>> "no" if it repeats exactly a previous state , and otherwise either just
>>>> keeps going or else gives up after some large number of steps.
>>>> AFAICT, given the amount of obfuscation around it, PO's
>>>> current emulator is broadly of this type. It has, though, of course,
>>>> nothing to do with the actual Halting Problem or related proofs.
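
A minimal runnable sketch of that "less trivial H", with toy "programs"
modelled as step functions on an unsigned state, state 0 meaning "halted".
The modelling and every name here are purely illustrative, not anyone's
actual emulator:

  #include <stdio.h>

  /* Toy model: a "program" is a step function on an unsigned
     state; reaching state 0 means it has halted. */
  enum verdict { HALTS, LOOPS, UNKNOWN };

  typedef unsigned (*step_fn)(unsigned);     /* one emulation step */

  enum verdict H(step_fn P, unsigned X, unsigned budget)
  {
      unsigned seen[1024];                   /* states observed so far */
      unsigned n = 0, s = X;
      for (unsigned i = 0; i < budget && n < 1024; i++) {
          if (s == 0)
              return HALTS;                  /* the emulation halted */
          for (unsigned j = 0; j < n; j++)
              if (seen[j] == s)
                  return LOOPS;              /* exact repeat of an earlier state */
          seen[n++] = s;
          s = P(s);
      }
      return UNKNOWN;                        /* budget exhausted: give up */
  }

  static unsigned countdown(unsigned s) { return s - 1; }  /* halts */
  static unsigned stuck(unsigned s)     { return s; }      /* never halts */

  int main(void)
  {
      printf("%d %d %d\n", H(countdown, 5, 100),   /* 0: HALTS   */
                           H(stuck, 5, 100),       /* 1: LOOPS   */
                           H(countdown, 50, 10));  /* 2: UNKNOWN */
      return 0;
  }

The UNKNOWN arm is what keeps it honest: when the budget runs out it
admits defeat instead of guessing.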
>>>>> Basically, it would be useless.
>>>> Although the sentiment is correct, your claim isn't. There are
>>>> /useful/ programs that correctly reply "yes", correctly reply "no", or
>>>> halt and reply "sorry, can't tell". [Proof checkers are of this general
>>>> nature.] Basically, the third response is not [necessarily] caused by
>>>> pathology, but could be thought of as a plea for more information. As
>>>> a debugging aid, "H" could point out the constructions that prevent it
>>>> from reaching a definite conclusion, such as uncounted loops without
>>>> [eg] corresponding invariants proving that the loop terminates. This
>>>> serves as a hint to the programmer that [eg] an extra check is needed.
>>>> A lot of the relevant analysis is already done by aggressively
>>>> optimising compilers, which work in a closely-related area of
>>>> program analysis.
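
Two loops of the kind such a checker might distinguish (my own
illustration, attached to no particular tool): the first has an obvious
bound, while the second is exactly the "uncounted loop" case, since
nobody can currently prove that the Collatz iteration halts for every
starting value:

  #include <stdio.h>

  int main(void)
  {
      /* Counted loop: i increases toward a fixed bound, so a
         checker can prove termination mechanically. */
      for (int i = 0; i < 10; i++)
          printf("%d\n", i);

      /* Uncounted loop: progress depends on the data.  Whether
         this Collatz iteration halts for every starting n is a
         famous open problem, so an honest checker would reply
         "sorry, can't tell" and point the programmer here. */
      unsigned n = 27;
      while (n != 1)
          n = (n % 2 == 0) ? n / 2 : 3 * n + 1;

      return 0;
  }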
>>> If the function could be relied upon to answer "sorry, can't tell" (or
>>> even reliably fail to halt) for inputs when it can't determine the
>>> correct answer, it would be useful. But that's not how these partial
>>> deciders that Olcott and Mr. Flibble like actually work. They say you
>>> are not allowed to give them "pathological" inputs - inputs that the
>>> function can't handle reliably. They don't give any guarantees about
>>> the output with such inputs - their function might say "yes", or might
>>> say "no", or might say "I can't tell", or might not halt, or might
>>> launch nasal demons.
>>> This also means that if you give the function some input and it says
>>> "Yes, this will halt", it might be wrong - perhaps the input was
>>> "pathological" and the function is just making stuff up.
>>> That's why I say it is useless.
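
The difference can be put as two contracts (hypothetical names, my
sketch, not anybody's published API). The first may be trusted whatever
you feed it; the second may not, which is the point above:

  enum verdict { HALTS, LOOPS, UNKNOWN };

  /* Total and honest: always terminates, and returns HALTS or
     LOOPS only when that answer is correct; anything it cannot
     settle comes back as UNKNOWN.  Every answer can be trusted. */
  enum verdict honest_partial_decider(const char *program, const char *input);

  /* No guarantees: for "pathological" inputs the result, or
     whether it returns at all, is unspecified.  Even a confident
     HALTS from this one cannot be believed. */
  enum verdict no_guarantee_decider(const char *program, const char *input);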
>> A function that sometimes responds 'I don't know' might be practically
>> useful, but I suspect it isn't going to be useful for the theoretical
>> purposes that Turing machines are designed for.
>> A function that is sometimes wrong might still be useful in some limited
>> cases. After all, we do use medical tests that sometimes give wrong
>> answers, as long as we have an idea of the likelihood of error and maybe
>> even the sort of cases they get wrong. They can point us toward needing
>> a more extensive test. This is especially useful if they are only wrong
>> one way (or at least, in a very large percentage of cases, only wrong
>> one way).
> Agreed. Many quantum computers are machines that might produce the
> right answer, and might produce a wrong answer. The idea is that if
> they give an answer fast enough, and you can check the answer reliably
> and efficiently, then you can always run it again if it has got things
> wrong.
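
That guess-then-verify pattern in miniature (the toy "solver" and all
names are mine): an unreliable routine proposes a factor, a cheap check
verifies it, and a wrong answer just means running it again:

  #include <stdio.h>
  #include <stdlib.h>

  /* Unreliable "solver": guesses a candidate factor of n. */
  static unsigned guess_factor(unsigned n)
  {
      return 2 + (unsigned)rand() % (n - 2);   /* in 2 .. n-1 */
  }

  int main(void)
  {
      unsigned n = 91;                         /* 7 * 13 */
      for (;;) {
          unsigned f = guess_factor(n);
          if (n % f == 0) {                    /* cheap, reliable check */
              printf("%u = %u * %u\n", n, f, n / f);
              break;
          }
          /* Wrong answer: just run it again. */
      }
      return 0;
  }

The scheme only works because the check is far cheaper than the search,
which is the situation described above.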
> But in this context - proving the existence or non-existence of halting
> deciders - it is certainly useless.
No machine can give the correct answer to a self-contradictory question.