Nope, just proves again that you are a liar.
H does a correct PARTIAL simulation, which is NOT a Correct
Simulation by the definition you allude to when invoking the concept of
a UTM; that definition requires a COMPLETE simulation that EXACTLY
reproduces the results of the actual machine.
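To make the distinction concrete, here is a minimal Python sketch (my own illustration, not anyone's actual decider; "machines" are modeled as a plain step count, which is only an analogy): a partial simulation that runs out of steps tells you nothing, while the complete simulation reaches the final state.

```python
def full_simulation(steps_to_halt):
    """UTM analogue: run the machine until it reaches its final state."""
    step = 0
    while step < steps_to_halt:
        step += 1
    return "halted", step

def partial_simulation(steps_to_halt, n):
    """Simulate at most n steps, then give up (an SHD analogue)."""
    step = 0
    while step < steps_to_halt and step < n:
        step += 1
    if step == steps_to_halt:
        return "halted", step
    return "no final state in n steps", step

# A machine that halts after 1000 steps:
print(full_simulation(1000))           # ('halted', 1000)
print(partial_simulation(1000, 100))   # ('no final state in n steps', 100)
```

The partial simulator's first 100 steps are each individually correct, yet its overall verdict is wrong: the machine halts.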
>
> When a simulating halt decider correctly simulates N steps of its input
> it derives the exact same N steps that a pure UTM would derive because
> it is itself a UTM with extra features.
Which, since it did not reach a final state in those N steps, means that
your SHD's simulation is NOT a "Correct Simulation" by the definition of
a UTM, and thus you cannot use the properties of a UTM to make your case.
YOU ARE JUST SHOWING YOUR STUPIDITY.
>
> My reviewers cannot show that any of the extra features added to the UTM
> change the behavior of the simulated input for the first N steps of
> simulation:
> (a) Watching the behavior doesn't change it.
> (b) Matching non-halting behavior patterns doesn't change it
> (c) Even aborting the simulation after N steps doesn't change the first
> N steps.
Since a UTM only shows the answer by COMPLETING the simulation,
correctly doing the first N steps is MEANINGLESS.
>
> Because of all this we can know that the first N steps of input D
> simulated by simulating halt decider H are the actual behavior that D
> presents to H for these same N steps.
Which means nothing.
>
> computation that halts… “the Turing machine will halt whenever it enters
> a final state” (Linz:1990:234)
>
Right, and since D(D) Halts, as does the actual CORRECT simulation
UTM(D,D), the behavior of the input is Halting, so to be correct
H(D,D) needed to have returned 1, not 0.
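Here is a minimal Python sketch of that diagonal case (my own illustration; H is simply hard-coded to answer 0, standing in for a decider that aborts its simulation): when H(D,D) returns 0, D(D) takes the halting branch, so the 0 answer contradicts the actual behavior.

```python
def H(p, i):
    # Stand-in for a simulating halt decider that aborts
    # and reports "non-halting" (0) for D applied to D.
    return 0

def D(p):
    if H(p, p):      # H says "halts" -> loop forever
        while True:
            pass
    return 0         # H says "doesn't halt" -> halt immediately

result = D(D)        # D(D) halts, precisely BECAUSE H(D, D) returned 0
```

D(D) reaches its final state, so the halting behavior is the actual behavior of the input, and 1 was the correct answer.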
> When we see (after N steps) that D correctly simulated by H cannot
> possibly reach its simulated final state in any finite number of steps
> of correct simulation then we have conclusive proof that D presents non-
> halting behavior to H.
>
No, it reaches that final state when simulated by an actual UTM.
H can't simulate past that point, as the H in question stops there.
You can't change H to be a different machine, as it is part of the
input, which needs to be a constant.
If you change JUST the H that is deciding, but not the H that D calls,
as is done with H1, we see that if this deciding H did go on, for THIS
input, it would see the input halt, and thus the correct answer is 1.
Since this matches the actual semantics of the proper model, that is
what needs to be done.
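A minimal Python sketch of that H1 scenario (again my own illustration, not the x86utm code): D is built on H, and a DIFFERENT decider H1 judges D(D) from the outside. Because D calls H, not H1, H1 is free to run the input to completion and see it halt.

```python
def H(p, i):
    # The H that D calls: aborts and reports "non-halting" (0).
    return 0

def D(p):
    if H(p, p):      # H(D, D) == 0, so this branch is not taken
        while True:
            pass
    return 0         # D(D) halts

def H1(p, i):
    # A different decider: p calls H, not H1, so H1 can simply
    # run p(i) to completion and report what actually happened.
    p(i)
    return 1

print(H1(D, D))      # 1: the correct answer for the halting D(D)
```

The input to H1 is the SAME fixed D, with the SAME embedded H; only the outside decider changed, and the correct answer it reports is 1.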
Yes, it doesn't work in your model, but that is because it isn't an
actual correct model of the needed Turing machine, because you seem to
be too stupid to set it up correctly, or are just too much of a
pathological liar to let yourself do what is actually correct.