On 5/25/2022 8:15 PM, Ben wrote:
> Mike Terry <news.dead.p...@darjeeling.plus.com> writes:
>
>> On 25/05/2022 19:42, Ben wrote:
>>> Mike Terry <news.dead.p...@darjeeling.plus.com> writes:
>>>
>>>> It's true we don't know the details of how PO is doing this, but we
>>>> can see what's effectively going on, I'd say. It is /as though/ there
>>>> is one "master trace" of all the nested simulations maintained by the
>>>> x86utm somewhere in the address space of its virtual x86 processor.
>>> Hmm... If I had to guess I'd put some store in a few phrases he's
>>> uttered that maybe give away more than he intends. Something along the
>>> line of recursion having the same execution pattern as nested
>>> simulations (that's not verbatim -- I'm not reading so much anymore).
>>
>> Well, he certainly argued with me a couple of years back that it
>> didn't make any difference to his rule whether the trace was direct
>> call or emulation recursion. He declined to provide any proof for the
>> soundness of his rule in either scenario (of course), instead
>> suggesting it was my responsibility to provide counter-examples where
>> his rule failed! (If nobody could do that, it would mean his rule was
>> sound, so he believed...) Oh, and we know that pointing out his H as
>> an actual counterexample goes nowhere...
>>
>>> This adds weight to my idea that he has only the top level simulation and
>>> to "speed up the work" and "make the trace simpler" what's being traced
>>> by the top-level H is a different H(X, Y) that just calls X(Y). I
>>> imagine that he thought he could, in principle, eventually make both H's
>>> the same, and that just calling the computation rather than simulating
>>> was just a sort of optimisation.
>>
>> I can't say "definitely not" to that, but my thinking would be that it
>> illustrates PO not appreciating the qualitative difference between
>> recursion in call vs simulation environments, rather than PO not
>> actually using simulation. Maybe a prototype test used direct call
>> and that might have reinforced his confusions.
>
> I'm not saying there is none, just that it's not nested. I posit one
> simulation in some "top-level H" that steps through the execution of the
> specified function call (or otherwise observes it) and stops when the
> magic condition is seen. But rather than build P from this top-level H
> he builds P from a simpler H(X, Y) that just calls X(Y).
>
> I admit it's all guesswork though. I seriously lost interest when all I
> thought it worth doing was pointing out that if H(X,Y) does not report
> on the "halting" of X(Y) then it's not doing what everyone else is
> talking about.
>
There are two key points:
(1) The C function named H correctly determines that the correct
simulation of its x86 machine-code input would never reach its "ret"
instruction. This is a simple verified fact that lying cheating bastards
continue to deny.
That they continue to deny this is of no great concern because even
moderately competent software engineers can easily confirm that I am
correct.
(2) That H(P,P) must compute the mapping from a non-input clearly
violates the definition of a computable function, which must
*given an input of the function ... return the corresponding output*,
and the definition of a decider, which must compute the mapping of its
input to an accept or reject state.
It is *not* that the computer science textbook authors disagree with
this. It is only that they simply assumed that P(P) cannot possibly
specify a different sequence of configurations than the correct
simulation of the input to H(P,P).
These two may only differ in the case of pathological self-reference
(Olcott 2004). Since no one was ever able to execute an input with PSR
previously (they simply assumed it was impossible), they never noticed
this divergence.
The actual correct x86 emulation of the input to H1(P,P) and H(P,P)
conclusively proves that P has different halting behavior in each case.
The alternative is that the x86 language itself is not telling the
truth about their behavior.
Actual computer scientists who know these things at a much deeper level
than mere rote memorization will understand that I am correct. The
alternative is that computer scientists believe textbook authors can
contradict the principles of computer science and not be wrong.
When the H that simulates P calls H(P,P), this H creates a whole new
process context that simulates its input all the way through to the
point where P calls H(P,P) again.
>> I think my description of how it /could/ be coded (using a global
>> trace area etc.) is within PO's coding ability given how long he had
>> to sort it out. Also PO has definitely talked about such a global
>> trace, I think in relation to whether this use of globals broke the
>> "pure function" requirement (as he understood it). So if I had to
>> place a bet, I'd go with it working /something/ like this, rather than
>> the blatant faking of traces otherwise required.
>
> I don't think they are faked, at least not totally faked.
>
It can be verified that they are correct, thus the issue of whether they
are faked or not (they are not faked) is moot. It was very, very
difficult to make H re-entrant.
It was much easier to make x86utm actually able to execute H in
infinitely nested simulation than it was to verify that it was correct
without actual code. I had far too many false starts with imaginary
code. I had to write real code so that, if needed, I could make
adjustments to my analysis.
>> BTW, have you noticed that PO's traces are out-of-step regarding the
>> ESP column? It's like he prints details for the "current instruction"
>> about to be executed, but the ESP column is the ESP /after/
>> instruction execution. Not how traces normally work... (that's just a
>> curiosity, but it makes me wonder about his recent "TM transition
>> function" not working posts...)
>
> No, I'd not noticed that. Curious.
Each instruction is simply shown after it has been executed, thus it is
not out-of-sync at all.
>>>> So the purpose of all the complicated and semi-secret H code is
>>>> ultimately just to give PO some excuse to confuse himself!
>>> The original purpose was to backtrack on the claim, made I think in a
>>> manic delusion, that he had an "actual Turing machine pair", H, Ĥ, "fully
>>> encoded", "exactly and precisely as in Linz" that correctly decides the
>>> H(Ĥ, Ĥ) case.
>>> This claim was walked back step by step. It was "an algorithm", then "a
>>> pair of virtual machines" then "a few lines of C-like pseudo code"
>>> until, finally, the dump truck arrived with the "x86 language code" to
>>> make it too complicated to post. The original claim was then declared
>>> to be using "poetic licence".
>>
>> Right - that's all true! But still I imagine he actually /does/ have
>> some existing H code that he doesn't want to reveal. Right now, I
>> imagine PO's genuine reason for refusing to post H would be a
>> combination of
>>
>> 1) The H code is a total dog's dinner from a C programming
>> perspective, and he's ashamed of the quality of the code!
>>
>> 2) Architecturally it will be rather naff, having obvious breaks with
>> TM capabilities: use of global variables to communicate across
>> emulation levels; use of its own address as a hidden input to the
>> function; hacks that are designed to "just make the right decision he
>> knows he wants to make", rather than general logic that would be
>> required in anything claiming to be a more general halt decider.
>> Bottom line: it won't be at all how we'd have done it! PO thinks
>> (rightly or wrongly) that those things do not affect his claims, and
>> so he wants to avoid months of discussion/argument over whether he's
>> "doing it the right way".
>>
>> 3) If he "published everything" (x86utm and H) like he steadfastly
>> maintained he would for the first couple of years, people would be
>> able to run and post their own tests/traces, easily highlighting why
>> PO's explanations of what's going on are rubbish. PO wants to retain
>> a tight control over allowed discussion paths! Funnily enough, one of
>> PO's original selling points for developing all this was that it would
>> all be published enabling people to run it and see for themselves the
>> indisputable evidence of his claims!
>
> These are all plausible.
>
>> [I think (3) is by far the main reason why PO decided to backpedal
>> from publishing x86utm and H here. His explanation of "when I take my
>> work to a journal, the publishers will only accept if I haven't
>> revealed x86utm + H source code elsewhere on the Internet" seems like
>> one of those retro-explanations cooked up just to excuse his breaking
>> with previous commitments. Well, you would know more about whether
>> there would be such a condition from publishers?
>
> I've never come across that. Publishers used to want a paper that was
> not largely similar to one published elsewhere (by which I mean another
> journal) for copyright reasons. But self-publishing, and making code
> public domain, pose no problems for journals as far as I know. But I
> said "used to" because it's been a while!
>
>> And yes, if PO were
>> serious about publishing he'd have acted years (decades?) ago!
>> Perhaps he can put a clause in his will and testament to publish all
>> on UseNet, just in case...]
>
> I very much doubt anyone will ever see H...