NO, it is a TRUE statement. H is NOT a correct HALT DECIDER.
It might be a valid POOP decider with your altered criteria, but it
isn't correct as a Halt Decider.
You don't get to change the meaning of words; attempting to do so just
shows that you are a liar.
Halting is a property of the original machine, not of the partial
simulation that H does.
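To make the disagreement concrete, here is a minimal C sketch of the
standard construction under discussion. The loose ptr typedef and the
stub body for H are placeholders of mine, not your actual code; the
stub just hard-codes the answer 0 ("does not halt") that you say
H(P,P) returns:

  #include <stdio.h>

  typedef int (*ptr)();   /* loose function-pointer type, as in the thread */

  /* Stand-in for the claimed decider: always answers 0, "does not halt". */
  int H(ptr P, ptr I) { (void)P; (void)I; return 0; }

  /* The "pathological" input: do the opposite of whatever H reports. */
  int D(ptr P)
  {
      if (H(P, P))        /* H says P(P) halts ...                  */
          for (;;) ;      /* ... so loop forever                    */
      return 0;           /* H says P(P) does not halt, so halt now */
  }

  int main(void)
  {
      D((ptr)D);          /* this run of D(D) reaches its end and halts */
      printf("D(D) halted, yet H(D,D) == %d\n", H((ptr)D, (ptr)D));
      return 0;
  }

Halting is about what that call D((ptr)D) in main actually does, and it
halts; how far H's partial simulation got never enters the definition.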
>
> Because of this I changed the semantic meaning of a return value of 0
> from H to mean either that P(P) does not halt or P(P) specifically
> targets H to do the opposite of whatever Boolean value that H returns.
Which means your H would need to return BOTH 0 and 1 at the same time,
since D(D) DOES Halt and also matches your "do the opposite" clause.
Since this is impossible, your criteria are invalid.
It also means H is no longer a Halt Decider, since it fails to meet the
API definition of one.
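For reference, the contract being appealed to is nothing exotic. As a
C-style declaration (same loose ptr typedef as in the sketch above;
this is the standard specification, not anyone's particular code):

  /* The entire specification of a halt decider:
   *   H(P, I) == 1  exactly when P(I) halts when actually run;
   *   H(P, I) == 0  exactly when P(I) never halts when actually run.
   * There is no third answer and no "unless the input targets H"
   * escape clause. */
  int H(ptr P, ptr I);

Redefining what a return value of 0 means is abandoning that contract,
not satisfying it.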
You don't get to change the definition of a Halt Decider; trying to do
so just makes you a LIAR.
>
> When H(P,P) reports that P correctly simulated by H cannot possibly
> reach its own last instruction this is an easily verified fact, thus
> P(P) does not halt from the point of view of H.
Which isn't the criterion of a Halt Decider, and thus Ben is CORRECT to
say your H isn't a correct Halt Decider. It might be a correct POOP
decider under your new criteria, but that isn't halting, and saying
that it is just makes you a LIAR.
> When H returns 0 for input P means either that P does not halt or
> P specifically targets H to do the opposite of whatever Boolean
> value that H returns not even people with little understanding can
> say that this is contradictory.
Which means you admit that H isn't a Halt Decider, and then LIE when
you claim it is.
>
>> The fact you can't see that just shows your lack of understanding.
> ChatGPT understood that Jack’s question is a self contradictory
> question when posed to Jack.
So? Jack's question isn't the same question that we ask a Halt Decider.
The question we ask a Halt Decider is:
Does the Machine represented by this input Halt when run?
That question has a definite answer, because the machine must be fully
defined before the question can even be asked, and thus H has definite
behavior, which for this input will simply be to give the wrong answer.
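Spelling out the case analysis with the D from the sketch above (again
just illustrative): H's code is fixed, so H(D,D) evaluates to one
definite value, and the actual run of D(D) refutes it either way:

  H(D,D) == 1  ->  D(D) takes the if-branch and loops forever  ->  1 was wrong
  H(D,D) == 0  ->  D(D) skips the loop and returns             ->  0 was wrong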
>
> ChatGPT further understood that this makes Jack’s question posed
> to Jack an incorrect question.
So?
>
> ChatGPT also understood that because D was intentionally defined
> to do the opposite of whatever Boolean value that H returned,
> that D is a self-contradictory input for H.
Only because it doesn't understand that H is already fixed by the time
you ask it, so H(D,D) returns one definite value and D(D) has one
definite behavior.
>
> ChatGPT:
> “Therefore, based on the understanding that self-contradictory
> questions lack a correct answer and are deemed incorrect, one could
> argue that the halting problem's pathological input D can be
> categorized as an incorrect question when posed to the halting
> decider H.”
>
>
> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
> It did not leap to this conclusion it took a lot of convincing.
>
> ChatGPT is not biased towards rebuttal against the truth.
> My human reviewers are biased towards rebuttal against the truth.
>
So, you seem to believe that artificial intelligence must be correct;
maybe that is because you don't have any real intelligence of your own.
ChatGPT is a known liar, and has a tendency to say whatever it thinks
the person communicating with it wants to hear. Thus, it is not a good
source for finding Truth.
People have gotten into serious trouble because they just blindly
believed what ChatGPT said to them. I guess we need to include you
among its victims.