No, because all the behavior of that program is encoded in the
representation of the program given to the decider.
The fact that H can't correctly USE that information doesn't mean it
wasn't given to it.
From the complete description of a Turing Machine and its input, the
ENTIRE behavior is defined, though it may take infinite time to
determine that behavior if the machine is non-halting.
Given the complete x86 code of a program, and its input, the behavior of
that program is completely defined.
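To illustrate the point, here is a toy sketch (my own made-up mini machine, not anyone's actual x86 code): given the program text and its input, every step of the run, and therefore its entire behavior, is fixed. Whether it ever halts may still take unbounded time to observe.

```python
# A toy "machine" interpreter. The program plus its input fully
# determines the run; nothing about the behavior is left open.

def run(program, x, max_steps=1000):
    """Execute a tiny counter machine.
    Returns ('halted', acc) if it halts, or ('running', acc) if
    the step budget runs out before we can tell."""
    pc, acc = 0, x
    for _ in range(max_steps):
        if pc >= len(program):
            return ('halted', acc)
        op, arg = program[pc]
        if op == 'add':                  # acc += arg
            acc += arg
            pc += 1
        elif op == 'jnz':                # jump to arg if acc != 0
            pc = arg if acc != 0 else pc + 1
        else:
            raise ValueError(op)
    return ('running', acc)

prog = [('add', -1), ('jnz', 0)]         # count acc down to 0, then halt

print(run(prog, 3))                      # ('halted', 0)
print(run(prog, -1))                     # never reaches 0: still 'running'
```

Note that for the non-halting case, no finite step budget ever settles the question by observation alone; that is exactly why "the behavior is fully defined" does not imply "the behavior is easily decidable."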
Thus, H has been given everything it should need to determine what
it needs to determine, if the result is actually computable.
Your big problem is that you are confusing the phases of the programming
process.
During the DESIGN phase, you get to change your program as you desire
and work on the logic, asking how to get the right answer.
When (and if) you complete that design process, you now have a SPECIFIC
program that you can test to see if it works.
At this stage, the program does what it has been programmed to do, and
you don't look at other possible behaviors of it.
At this point you need to actually DEFINE what your H is; you have used
several different sets of definitions.
One is that it just DOES a complete and correct simulation of its input.
This H has been shown to never return an answer to H(P,P), and it does
make P(P) non-halting, but it can't be your "correct" H, since it
never returns the answer. (And you can't say P is using this H when you
are deciding with a different one.)
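A runnable sketch of this first definition (the names here are mine, not the original C code): H_sim "simulates" its input by running it to completion, and only then answers 1. On P(P) it nests forever and never answers.

```python
# H_sim is the "just simulate" H: it never aborts, so it can only
# answer 1, and only after its input has already halted on its own.

def H_sim(p, x):
    p(x)              # complete and correct "simulation": direct execution
    return 1          # reached only if the simulated input halted

def P(x):
    if H_sim(x, x):   # ask H_sim whether x(x) halts
        while True:   # and do the opposite
            pass

# H_sim(P, P) nests without end: the simulated P calls H_sim(P, P),
# which simulates another P, and so on. Python surfaces the endless
# nesting as a RecursionError rather than literally running forever.
try:
    H_sim(P, P)
    print("unreachable")
except RecursionError:
    print("H_sim(P, P) never returns an answer")
```

This is exactly the point above: such an H does make P(P) non-halting, but it is disqualified as a decider because it never delivers a verdict.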
The next design uses your defined rule that you CLAIM proves
non-halting: if P(P) calls H with the same input that H started
with, then H can presume its input is non-halting.
This is proved incorrect by just running P(P): we see it call H(P,P),
that H(P,P) sees its simulation do the same, so it aborts the
simulation and returns to P, and P(P) halts. This PROVES the rule wrong.
Your claim that the input to H(P,P) is not the same as P(P) is shown
wrong, because it MUST be the same, or your P is defined incorrectly.
Remember, the "impossible" program was defined to ask H about itself
applied to its own input, and then do the opposite. Since P(P) calls
H(P,P) to ask exactly that question, if the call doesn't actually mean
that, you have an error in your problem construction.
You sometimes go more nebulous and just say that H will simulate until
it can actually PROVE that the input is non-halting. While, if it could
actually do that, it would be right, all this is actually saying is
that your Halt Decider will use a Halt Decider to tell it whether its
input is non-halting, and thus your DESIGN recurses and you end up with
an infinite program.
You ASSUME that there is SOME finite pattern that H can detect, since it
is clear that if H doesn't abort its simulation it will never halt. But
the problem is that this doesn't mean that, if H does abort, it is
correct to say the input is non-halting, because the proof was based on
a now-false premise: that H will not abort its simulation.
We can prove (and I have shown you) that there IS no finite pattern in
P(P) that H could detect, because ANY pattern in P(P), if put into H as
a non-halting pattern, causes H(P,P) to return 0 to P(P), and P(P) then
halts. Thus there does NOT exist a pattern that H could use to answer
that this P(P) is non-halting.
It IS possible for H to detect this fact, it just can't act on that
knowledge without breaking the premise it uses to make this proof.
The problem is that the field of Computation Theory requires that
Deciders actually return their answer, and in a way that another program
can get it, to be considered Deciders.
Thus, we see that ANY H you design by your technique will either not
answer or give the wrong answer, which is EXACTLY what the proof you are
trying to refute says will happen.