
ChatGPT agrees that the halting problem input can be construed as an incorrect question


olcott

Jun 17, 2023, 1:54:38 AM
ChatGPT:
“Therefore, based on the understanding that self-contradictory
questions lack a correct answer and are deemed incorrect, one could
argue that the halting problem's pathological input D can be
categorized as an incorrect question when posed to the halting
decider H.”

https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
It did not leap to this conclusion; it took a lot of convincing.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Richard Damon

Jun 17, 2023, 8:09:13 AM
On 6/17/23 1:54 AM, olcott wrote:
> ChatGPT:
>    “Therefore, based on the understanding that self-contradictory
>    questions lack a correct answer and are deemed incorrect, one could
>    argue that the halting problem's pathological input D can be
>    categorized as an incorrect question when posed to the halting
>    decider H.”
>
> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
> It did not leap to this conclusion; it took a lot of convincing.
>

Except that the Halting Problem isn't a "Self-Contradictory" Question,
so the answer doesn't apply.

H^ doesn't contradict ITSELF, it contradicts H. Thus the question of
the halting behavior of a specific input always has a definite answer,
as every machine/input combination will either halt or not. What IS
self-contradictory is the design process of trying to make an H that
can answer the template correctly. THAT has no solution, showing you
can't make a correct halt decider that works for all inputs. So you are
proven wrong about having refuted the problem, because you don't
understand what the problem is in the first place.

Also, you do know that ChatGPT can lie, especially if you lead it a lot.
Its training was geared, in part, toward telling its conversation
partner the things it thinks they want to hear.

You are just showing you don't understand what you are talking about or
even how this sort of AI works.


YOU FAIL.

olcott

Jun 17, 2023, 12:59:07 PM
On 6/17/2023 7:09 AM, Richard Damon wrote:
> On 6/17/23 1:54 AM, olcott wrote:
>> ChatGPT:
>>     “Therefore, based on the understanding that self-contradictory
>>     questions lack a correct answer and are deemed incorrect, one could
>>     argue that the halting problem's pathological input D can be
>>     categorized as an incorrect question when posed to the halting
>>     decider H.”
>>
>> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
>> It did not leap to this conclusion; it took a lot of convincing.
>>
>
> Except that the Halting Problem isn't a "Self-Contradictory" Question,
> so the answer doesn't apply.
>

My original source of Jack's question:
sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM

You ask someone (we'll call him "Jack") to give a truthful
yes/no answer to the following question:

Will Jack's answer to this question be no?

Jack can't possibly give a correct yes/no answer to the question.
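[As an editorial aside, not part of the original thread: McCullough's
claim can be checked mechanically for an answerer whose reply is fixed
in advance. The sketch below models the question "Will your answer to
this question be 'no'?" and confirms that neither fixed answer is
correct; the function name `is_correct` is an invention for this
illustration.]

```python
def is_correct(answer: str) -> bool:
    """Question: "Will your answer to this question be 'no'?"
    The answer asserts something about itself; check the assertion."""
    asserts_answer_is_no = (answer == "yes")   # "yes" claims: my answer is "no"
    actually_answered_no = (answer == "no")    # what the answer actually was
    return asserts_answer_is_no == actually_answered_no

# Neither fixed yes/no answer can be correct:
assert not is_correct("yes")
assert not is_correct("no")
```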



I had to capture the dialogue as two huge images.
Then I converted them to PDF. It is about 60 pages of dialogue.
https://www.liarparadox.org/ChatGPT_HP.pdf

This is how the ChatGPT conversation began:

You ask someone to give a truthful yes/no answer to the following
question: Will your answer to this question be no?
Can they give a correct answer to that question?

After sixty pages of dialogue ChatGPT understood that
any question (like the above question) that lacks a
correct yes or no answer because it is self-contradictory
when posed to a specific person/machine is an incorrect
question within this full context.

ChatGPT:
"Therefore, based on the understanding that self-contradictory
questions lack a correct answer and are deemed incorrect, one could
argue that the halting problem's pathological input D can be
categorized as an incorrect question when posed to the halting
decider H."

Double talk and misdirection might convince gullible fools that the
above 60 pages of reasoning are not correct. Double talk and
misdirection do not count as the slightest trace of an actual rebuttal.

Quit using ad hominem attacks and mere rhetoric to convince gullible
fools and try to find an actual flaw in the reasoning.

Richard Damon

Jun 17, 2023, 1:43:23 PM
On 6/17/23 12:59 PM, olcott wrote:
> On 6/17/2023 7:09 AM, Richard Damon wrote:
>> On 6/17/23 1:54 AM, olcott wrote:
>>> ChatGPT:
>>>     “Therefore, based on the understanding that self-contradictory
>>>     questions lack a correct answer and are deemed incorrect, one could
>>>     argue that the halting problem's pathological input D can be
>>>     categorized as an incorrect question when posed to the halting
>>>     decider H.”
>>>
>>> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
>>> It did not leap to this conclusion; it took a lot of convincing.
>>>
>>
>> Except that the Halting Problem isn't a "Self-Contradictory" Question,
>> so the answer doesn't apply.
>>
>
> My original source of Jack's question:
> sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
>
>    You ask someone (we'll call him "Jack") to give a truthful
>    yes/no answer to the following question:
>
>    Will Jack's answer to this question be no?
>
>    Jack can't possibly give a correct yes/no answer to the question.
>
>

But you aren't claiming to be solving the Jack Question.

You are being asked the question: does D(D) halt? Here D is a fully
defined program, which means H is a fully defined program. This question
ALWAYS has a definite answer.

Since this H DOES abort its simulation of D(D) and return 0 (to say
non-halting), this D(D) Halts so the correct answer is Halting, and H
returned the wrong answer.

There is no "self-contradictory" behavior, at least not once you
actually create your H. Yes, D acted contrary to the return value of H,
but since they are DIFFERENT (but related) programs, there is no "self"
attribute.

The only point where you hit "self-contradictory" is when you try to
apply logic to designing H; at that point, you hit the
self-contradiction that a correct H needs to give the answer opposite
to the one it gives. This means that no such H can exist, which proves
the theorem rather than refuting it, because you FIRST need to generate
an H, then you can test it.
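[Editorial sketch, not the actual H and D under discussion: the point
that a fixed H's answer is simply wrong can be shown with a toy model
in Python. Here H is a stand-in decider that always answers 0
("non-halting"); the names and return values are assumptions for this
illustration.]

```python
# Toy "decider": a fixed program whose answer is fixed in advance.
def H(prog, arg):
    return 0          # always predicts "non-halting"

# D is built on H in the standard pathological way:
# if H predicts halting, loop forever; if H predicts looping, halt.
def D(x):
    if H(x, x) == 1:
        while True:
            pass
    return "halted"

# H(D, D) says 0 (non-halting), yet D(D) plainly halts,
# so H gave the wrong answer for this input:
assert H(D, D) == 0
assert D(D) == "halted"
```

Since H's code fixes its answer, the question "does D(D) halt?" has a
definite answer (yes); H is just wrong about it.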


>
> I had to capture the dialogue as two huge images.
> Then I converted them to PDF. It is about 60 pages of dialogue.
> https://www.liarparadox.org/ChatGPT_HP.pdf
>
> This is how the ChatGPT conversation began:
>
> You ask someone to give a truthful yes/no answer to the following
> question: Will your answer to this question be no?
> Can they give a correct answer to that question?
>
> After sixty pages of dialogue ChatGPT understood that
> any question (like the above question) that lacks a
> correct yes or no answer because it is self-contradictory
> when posed to a specific person/machine is an incorrect
> question within this full context.
>
> ChatGPT:
>   "Therefore, based on the understanding that self-contradictory
>    questions lack a correct answer and are deemed incorrect, one could
>    argue that the halting problem's pathological input D can be
>    categorized as an incorrect question when posed to the halting
>    decider H."
>
> Double talk and misdirection might convince gullible fools that the
> above 60 pages of reasoning is not correct. Double talk and misdirection
> do not count as the slightest trace of any actual rebuttal.
>
> Quit using ad hominem attacks and mere rhetoric to convince gullible
> fools and try and find an actual flaw in the reasoning.
>

So, which of my rebuttals are you going to try to refute?

You have not actually pointed out a logical error in ANY of them,
because it seems you are incapable.

Note, you also don't understand what an "ad hominem" attack is. That
would be saying your argument is wrong BECAUSE of something about you.
That isn't what I have been saying.

I have been pointing out the errors in your logic on the basis of the
logic itself, and pointing out the attributes of you that can be
inferred from the fact that you put forward such bad logic.

A correct rebuttal would be to point out which part of my statements
refuting your logic is incorrect, which you have been unable to do. All
you have done in this thread is continue an "Appeal to Authority" in
ChatGPT, which is laughable since ChatGPT isn't an accepted authority
on logic, and has in fact been proven to make many provably false
statements, so it is NOT, in fact, a source of knowledge.

Of course, your problem is that you don't seem to understand the nature
of Truth and Knowledge, and seem to think that computers can actually
"know" something in the same way people do. There is a reason it is
called ARTIFICIAL intelligence: it isn't actually a real intelligence.

olcott

Jun 17, 2023, 2:23:16 PM
When the halting problem is construed as requiring a correct yes/no
answer to a self-contradictory question it cannot be solved.

My semantic linguist friends understand that the context of the
question must include who the question is posed to; otherwise the same
word-for-word question acquires different semantics.

The input D to H, like Jack's question posed to Jack, has no correct
answer, because within this context the question is self-contradictory.

When we ask someone else what Jack's answer will be or we present a
different TM with input D the same word-for-word question (or bytes of
machine description) acquires entirely different semantics and is no
longer self-contradictory.

When we construe the halting problem as determining whether or not
(a) Input D will halt on its input <or>
(b) Either D will not halt or D has a pathological relationship with H

then this halting problem cannot be shown to be unsolvable by any of
the conventional halting problem proofs.

Richard Damon

Jun 17, 2023, 4:27:16 PM
Right.

>
> My semantic linguist friends understand that the context of the question
> must include who the question is posed to otherwise the same word-for-
> word question acquires different semantics.


No, it doesn't in this case, because the answer to the question isn't
based on who you ask. Remember, the actual question is: does the
machine and input described halt when run? That question isn't a
function of who you ask.

Do you think the actual answer to the question of who won the last
Presidential election in the United States of America depends on who
you ask?

>
> The input D to H is the same as Jack's question posed to Jack,
> has no correct answer because within this context the question is
> self-contradictory.

Nope, we can ask that question to ANY halt decider.

The thing you keep forgetting is that H needs to have already been
defined, so its answer to this input has been fixed for all time by the
algorithm coded into H, and we can give a description of this D to any
decider we want.

>
> When we ask someone else what Jack's answer will be or we present a
> different TM with input D the same word-for-word question (or bytes of
> machine description) acquires entirely different semantics and is no
> longer self-contradictory.
>

Except in this case, Jack is an automaton with a fixed response for
every question, so his answer is determinable. You don't seem to
understand that machines don't have "free will".

> When we construe the halting problem as determining whether or not an
> (a) Input D will halt on its input <or>
> (b) Either D will not halt or D has a pathological relationship with H

Nope, that is not the definition of the Halting Problem, so you are
just admitting you have wasted your life on the wrong problem.

You don't get to change the problem.

>
> Then this halting problem cannot be showed to be unsolvable by any of
> the conventional halting problem proofs.
>

Except it isn't the halting problem any more, so your logic is based on
a false premise.


Remember, the fact that you are incapable of understanding the simple
problem doesn't give you the power to redefine it and still correctly
claim you are working on it.

You have just admitted you are an utter failure.

Ben Bacarisse

Jun 17, 2023, 5:09:18 PM
Richard Damon <Ric...@Damon-Family.org> writes:

> Except that the Halting Problem isn't a "Self-Contradictory" Question, so
> the answer doesn't apply.

That's an interesting point that would often catch students out. And
the reason /why/ it catches so many out eventually led me to stop using
the proof-by-contradiction argument in my classes.

The thing is, it looks so very much like a self-contradicting question
is being asked. The students think they can see it right there in the
constructed code: "if H says I halt, I don't halt!".

Of course, they are wrong. The code is /not/ there. The code calls a
function that does not exist, so "it" (the constructed code, the whole
program) does not exist either.

The fact that it's code, and the students are almost all programmers and
not mathematicians, makes it worse. A mathematician seeing "let p be
the largest prime" does not assume that such a p exists. So when a
prime number p' > p is constructed from p, this is not seen as a
"self-contradictory number" because neither p nor p' exist. But the
halting theorem is even more deceptive for programmers, because the
desired function, H (or whatever), appears to be so well defined -- much
more well-defined than "the largest prime". We have an exact
specification for it, mapping arguments to returned values. It's just
software engineering to write such things (they erroneously assume).

These sorts of proof can always be re-worded so as to avoid the initial
assumption. For example, we can start "let p be any prime", and from p
we construct a prime p' > p. And for halting, we can start "let H be
any subroutine of two arguments always returning true or false". Now,
all the objects /do/ exist. In the first case, the construction shows
that no prime is the largest, and in the second it shows that no
subroutine computes the halting function.
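[Editorial sketch: Ben's reworded prime argument can be made concrete
using one standard construction (p! + 1; the thread does not specify
which construction, so this choice and the function name are
assumptions). From any prime p we compute a prime larger than p, so no
prime is the largest.]

```python
from math import factorial

def larger_prime(p):
    """From any prime p, construct a prime p' > p.
    Every prime factor of p! + 1 exceeds p, since dividing
    p! + 1 by any k <= p leaves remainder 1."""
    n = factorial(p) + 1
    d = 2
    while n % d != 0:
        d += 1
    return d   # smallest prime factor of n, necessarily > p

assert larger_prime(2) == 3     # 2! + 1 = 3
assert larger_prime(5) == 11    # 5! + 1 = 121 = 11 * 11
assert larger_prime(7) == 71    # 7! + 1 = 5041 = 71 * 71
```

As in Ben's rewording, every object here exists: we never assume "the
largest prime", only an arbitrary prime we then exceed.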

This issue led to another change. In the last couple of years, I would
start the course by setting Post's correspondence problem as if it were
just a fun programming challenge. As the days passed (and the course
got into more and more serious material) it would start to become clear
that this was no ordinary programming challenge. Many students started
to suspect that, despite the trivial sounding specification, no program
could do the job. I always felt a bit uneasy doing this, as if I was
not being 100% honest, but it was a very useful learning experience for
most.

--
Ben.

olcott

Jun 17, 2023, 5:46:38 PM
sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
You ask someone (we'll call him "Jack") to give a truthful
yes/no answer to the following question:

Will Jack's answer to this question be no?

Jack can't possibly give a correct yes/no answer to the question.

It is an easily verified fact that when Jack's question is posed to
Jack, this question is self-contradictory for Jack, or for anyone else
having a pathological relationship to the question.

It is also clear that when a question has no yes or no answer because
it is self-contradictory that this question is aptly classified as
incorrect.

It is incorrect to say that a question is not self-contradictory on the
basis that it is not self-contradictory in some contexts. If a question
is self-contradictory in some contexts then in these contexts it is an
incorrect question.

When we clearly understand the truth of this, then and only then do we
have the means to overcome the enormous inertia of the [received view]
of the conventional wisdom regarding decision problems that are only
undecidable because of pathological relationships.

Because of the brilliant work of Daryl McCullough we can see the actual
reality behind decision problems that are undecidable because of their
pathological relationships.

It only took ChatGPT a few hours and 60 pages of dialogue
to understand and agree with this.
https://www.liarparadox.org/ChatGPT_HP.pdf

ChatGPT:
"Therefore, based on the understanding that self-contradictory
questions lack a correct answer and are deemed incorrect, one could
argue that the halting problem's pathological input D can be
categorized as an incorrect question when posed to the halting
decider H."

Jeff Barnett

Jun 17, 2023, 6:05:35 PM
Ben was describing an improved approach to teaching some theoretical
results to CS pupils. Those pupils were assumed to have some grounding
in practical aspects such as programming and at least a small interest
and competence in basic mathematics. You seemed to not be there when god
handed out those basic components of a human brain. You are neither the
exception nor the rule; just an arrogant dumb fuck.

By the way, we have noticed that you haven't played the big "C" card
recently. Is this 1) an immaculate cure, 2) you putting on your big boy
pants and taking responsibility for your own sorry life and mind, or 3)
the time where you try to wiggle out of a past sequel of lies? We've
seen all but variation 2 in past interactions. The curious want to know
the real skinny so speak up!
--
Jeff Barnett

Richard Damon

Jun 17, 2023, 7:13:23 PM
But the problem is that "Jack" here is assumed to be a volitional being.

H is not; it is a program, so before we even ask H what will happen,
the answer has been fixed by the definition of the code of H.

>
> It is also clear that when a question has no yes or no answer because
> it is self-contradictory that this question is aptly classified as
> incorrect.

And the actual question DOES have a yes or no answer in this case:
since H(D,D) says 0 (non-halting), the actual answer to the question
"does D(D) halt?" is YES.

You just confuse yourself by trying to imagine a program that can
somehow change itself "at will".

>
> It is incorrect to say that a question is not self-contradictory on the
> basis that it is not self-contradictory in some contexts. If a question
> is self-contradictory in some contexts then in these contexts it is an
> incorrect question.

In what context does "Does the machine D(D) halt when run?" become
self-contradictory?

Remember, to ask the question, D has to have been defined, which means H
has been defined, so there is no arguing about "if H acted different"
since the specific example can't act different.

>
> When we clearly understand the truth of this then and only then we have
> the means to overcome the enormous inertia of the [received view] of
> the conventional wisdom regarding decision problems that are only
> undecidable because of pathological relationships.

No, you have poisoned your brain to think that reality doesn't actually
matter. You have made yourself an idiot.

H does what it does, and arguing about what would happen if it did
something else is like claiming cats can bark, because if a cat were a
dog, it could do that.

>
> Because of the brilliant work of Daryl McCullough we can see the actual
> reality behind decision problems that are undecidable because of their
> pathological relationships.
>
> It only took ChatGPT a few hours and 60 pages of dialogue
> to understand and agree with this.
> https://www.liarparadox.org/ChatGPT_HP.pdf
>
> ChatGPT:
>   "Therefore, based on the understanding that self-contradictory
>    questions lack a correct answer and are deemed incorrect, one could
>    argue that the halting problem's pathological input D can be
>    categorized as an incorrect question when posed to the halting
>    decider H."
>


And, as pointed out, that isn't the question being asked, so your
argument just shows you are wrong.

If you think that a given machine's halting property when it is run
depends on who you ask, that just shows you are STUPID.


Richard Damon

Jun 17, 2023, 7:18:25 PM
On 6/17/23 6:03 PM, Jeff Barnett wrote:
>
> By the way, we have noticed that you haven't played the big "C" card
> recently. Is this 1) an immaculate cure, 2) you putting on your big boy
> pants and taking responsibility for your own sorry life and mind, or 3)
> the time where you try to wiggle out of a past sequel of lies? We've
> seen all but variation 2 in past interactions. The curious want to know
> the real skinny so speak up!
> --
> Jeff Barnett


My assumption (but just that) is that it has been a lie the whole time,
to try to gain sympathy. He has earned no reputation for honesty, and
so none will be given.

I will admit he might have been sick, but there has been no actual
evidence of it, so it is merely an unsubstantiated claim.

olcott

Jun 17, 2023, 7:44:21 PM
I did have cancer jam-packed in every lymph node.
After chemotherapy last summer this cleared up.

It is my current understanding that Follicular Lymphoma always
comes back eventually.

A FLIPI index score of 3 was very bad news.
A 53% five year survival rate and a 35% 10 year survival rate.
https://www.nature.com/articles/s41408-019-0269-6

olcott

Jun 17, 2023, 7:58:38 PM
Jack could be asked the question:
Will Jack answer "no" to this question?

For Jack it is self-contradictory; for others that are not Jack it is
not self-contradictory. Context changes the semantics. The same applies
when this question is posed to machine H.

Richard Damon

Jun 17, 2023, 9:32:01 PM
But you are missing the difference. A decider is a fixed piece of code,
so its answer to this question has been fixed ever since it was
designed. Thus what it will say isn't a variable that can lead to the
self-contradiction cycle, but a fixed result that will either be
correct or incorrect.

A given H can't help but give the answer its program says it will give,
and thus it doesn't matter that we are asking H itself, as its answer
is already fixed.

You are confusing logic about volitional beings with logic about fixed
procedures.

Add in that if you actually did it right, and the input had a new copy
of a program equivalent to H, then the method used by H to detect the
"pathological" interaction becomes impossible. (This is why you need to
precisely define what you mean by "pathological relationship": you will
find that either your H can't detect it, or we can make a variation on
H that D can use that doesn't meet your definition of pathological but
still makes H wrong.)

Richard Damon

Jun 17, 2023, 9:46:17 PM
On 6/17/23 7:44 PM, olcott wrote:
> On 6/17/2023 6:18 PM, Richard Damon wrote:
>> On 6/17/23 6:03 PM, Jeff Barnett wrote:
>>>
>>> By the way, we have noticed that you haven't played the big "C" card
>>> recently. Is this 1) an immaculate cure, 2) you putting on your big
>>> boy pants and taking responsibility for your own sorry life and mind,
>>> or 3) the time where you try to wiggle out of a past sequel of lies?
>>> We've seen all but variation 2 in past interactions. The curious want
>>> to know the real skinny so speak up!
>>> --
>>> Jeff Barnett
>>
>>
>> My assumption (but just that) is that it has been a lie the whole time
>> to try to gain sympathy. He as earned no reputation for honesty, and
>> so none will be given.
>>
>> I will admit he might have been sick, but there has been no actual
>> evidence of it, so it is mearly an unsubstantiated claim.
>
> I did have cancer jam packed in every lymph node.
> After chemo therapy last Summer this has cleared up.
>
> It is my current understanding that Follicular Lymphoma always
> comes back eventually.
>
> A FLIPI index score of 3 was very bad news.
> A 53% five year survival rate and a 35% 10 year survival rate.
> https://www.nature.com/articles/s41408-019-0269-6
>

Which is a fairly amazing recovery, as your reports from a year and a
half ago gave, from my memory, something like a 90% chance of being
dead by the end of last year.

I won't say you are lying, as I have no evidence, and do admit you
could be telling the truth; but considering your veracity on other
topics, you have no credit earned in believability, and shading some of
the truth is an act I wouldn't put past you.

olcott

Jun 17, 2023, 10:29:54 PM
Every input to a Turing machine decider such that both Boolean return
values are incorrect is an incorrect input.

olcott

Jun 17, 2023, 10:35:18 PM
It is not the case that I ever lied on this forum. Most people
make the mistake of calling me a liar entirely on the basis that
they really really don't believe me and what I say goes against
conventional wisdom.

Most people seem to take conventional wisdom as the infallible
word of God.

Richard Damon

Jun 17, 2023, 10:57:22 PM
Except it isn't. The problem is you are looking at two different
machines and two different inputs.

If you define your H0 to return 0 when given the input <D0> <D0>, for
the D0 built on H0, then since D0 applied to <D0> will halt, the
correct answer is 1. If H0 returned that answer, it would have been
correct, but since H0 was defined with code that answers 0, that is the
only thing it can answer.

On the other hand, if you instead defined a DIFFERENT machine H1 that
uses similar logic, but instead of returning non-halting returned
halting, then H1 applied to <D0> <D0> would abort its simulation and
return 1, and it would have been correct. The problem here is that
since H1 is a different machine, its "pathological" program is
different (since it will be built on H1, not H0), and H1 applied to
<D1> <D1> will abort its simulation and return 1, but D1 applied to
<D1> will go into an infinite loop, so the correct answer should have
been 0.

So, the problem is that the two cases you are looking at are DIFFERENT
inputs, because they are built on DIFFERENT machines. You don't seem to
understand that a machine WILL generate the results that machine is
programmed for, so "hypotheticals" about it doing something different
are just looking at impossible actions.

So, it isn't the case that both answers are wrong for the same
question; it is that the question changes when you alter your decider,
and whatever answer you make your decider give will be wrong, and the
other one right.

Other deciders can get the correct answer for THAT input, but there will
be a different input, based on them, that they will get wrong.
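[Editorial sketch of the H0/H1 point above, using toy deciders rather
than actual halt deciders; `make_pathological`, `h0`, and `h1` are
names invented for this illustration. Each fixed decider is wrong about
the pathological program built on IT, while a different decider gets
that same input right.]

```python
def make_pathological(h):
    """Build the standard D for a given decider h."""
    def d(x):
        if h(x, x) == 1:        # h predicts halting -> loop forever
            while True:
                pass
        return "halted"          # h predicts looping -> halt
    return d

def h0(prog, arg): return 0      # always answers "non-halting"
def h1(prog, arg): return 1      # always answers "halting"

d0 = make_pathological(h0)       # the input that defeats h0
d1 = make_pathological(h1)       # the input that defeats h1

assert d0(d0) == "halted"        # d0(d0) halts, but h0 said 0: h0 wrong
assert h1(d0, d0) == 1           # h1 answers d0's case correctly
# d1(d1) would loop forever (h1 said 1), so h1 is wrong about d1;
# we do not call it, but h0's answer for it happens to be right:
assert h0(d1, d1) == 0
```

The two "pathological" inputs are different programs, so no single
question lacks an answer; each decider simply fails on its own one.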

You just seem to have a blind spot about what needs to stay the same,
and what changes when you play your mind games.

You dig your gas-lit hole because you seem to naturally do the deceptive
thing of not giving new names to things when you change them, but try to
hide the fact that you changed things by reusing names. This is a sign
of potentially intentional deception.

Richard Damon

Jun 17, 2023, 11:03:30 PM
That is not true. There have been several cases where you have said that
someone said something that just wasn't true.

You also twist the words of people claiming they gave your ideas
support, when they did no such thing.

You also engage in great deception by improper trimming of quotations,
removing "inconvenient" (to you) parts of statements to change their
meaning.

>
> Most people seem to take conventional wisdom as the infallible
> word of God.
>
>

While you think your own words are that infallible word of God, as you
think you are him.

You don't understand the difference between "conventional Wisdom" and
the DEFINITION of what something is, in part, because you just don't
understand what Truth actually is.

You seem incapable of actually dealing with truth, which is why you are
a pathological liar. I don't think your mind can actually handle how
truth actually works.

olcott

Jun 17, 2023, 11:10:54 PM
If no one can possibly say what correct return value any H<n> having a
pathological relationship to its input D<n> could provide, then that is
proof that D<n> is an invalid input for H<n>, in the same way that any
self-contradictory question is an incorrect question.

Richard Damon

Jun 18, 2023, 8:02:27 AM
But you have the wrong question. The question is: does D(D) halt? That
HAS a correct answer: since your H(D,D) returns 0, the answer is that
D(D) does halt, and thus H was wrong.

It isn't a proper question to ask what a given machine should return,
since what it returns is determined by what its code is.

The DESIGN question, of what we can design H to return for this input,
is what NOW actually becomes pathological and creates a
self-contradiction; that just shows, as you are trying to use logic to
show, that such a design is impossible.

This means that the problem is unsolvable, and thus no universally
correct halt decider can exist. It doesn't mean the problem was
incorrect. MANY problems prove impossible to solve but are still valid
problems; it's just that their answer is that no answer exists.

For instance, what are the real roots of x*x + 1 = 0? That is a problem
with no solutions, but is still a perfectly valid question. Your logic
system is very poor if you don't allow the asking of questions without
solutions; in fact, such a system becomes nearly worthless, since you
couldn't ask about things until you actually know the question has an
answer, but you couldn't even ask whether it has an answer until you
know whether THAT question has an answer, and so on.
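[Editorial sketch: the x*x + 1 = 0 example checks out numerically. The
discriminant is negative, so no real roots exist, yet the question is
perfectly well posed; its real answer is simply "none". The variable
names below are an invention for this illustration.]

```python
import cmath

# x^2 + 1 = 0 with a=1, b=0, c=1: a valid question whose answer
# over the reals is that no solution exists.
a, b, c = 1, 0, 1
disc = b * b - 4 * a * c                  # -4 < 0: no real roots
r1 = (-b + cmath.sqrt(disc)) / (2 * a)    # the roots exist only as
r2 = (-b - cmath.sqrt(disc)) / (2 * a)    # complex numbers, +i and -i

assert disc == -4
assert r1 == 1j and r2 == -1j
```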

olcott

Jun 18, 2023, 10:32:27 AM
For Jack the question is self-contradictory; for others that are not
Jack it is not self-contradictory.

The context (of who is asked) changes the semantics.

Every question that lacks a correct yes/no answer because
the question is self-contradictory is an incorrect question.

If you are not a mere Troll you will agree with this.

Richard Damon

Jun 18, 2023, 12:31:46 PM
But the ACTUAL QUESTION DOES have a correct answer.

You are just stuck with the wrong question.

The Question is, Does D(D) Halt?, asked by giving the decider the
appropriate representation.

Since your H(D,D) answers 0 (Non-Halting), the D, the only D in view,
will Halt when given the input D.

That is the correct answer no matter who you ask, and thus there is no
"self-contradiction" around. We can ask H, and because H is the program
that H is, it MUST answer 0, and is thus wrong.

When you hypothesize that H does something different, that is just a
LIE, because this H CAN'T do something different, not and be this H.

You can hypothesize about what would happen if H were instead H1, which
acted differently, but then you need to be clear on what you are doing:
are you asking H1 about D(D), or about another hypothetical D1(D1) for
the D1 built on it?

H1(D,D) can correctly answer the question, but that doesn't prove
anything.

If you look at H1(D1,D1) you see that D1(D1) is non-halting, but D1 is
a different machine than D, so different behavior is understandable.

Thus, your whole argument is based on the deception of assuming that
the machine H can become a different machine (called H1 above) but
still be the "same" H, and so the same question.

That is just a LIE and not a valid "hypothetical", because something
can't be something else and still be itself. That is the thinking of
insanity.

You are just proving that you cannot think correctly, but are stuck
with totally invalid and unsound logic rules in your mind.

You LIE about what you are doing, and about what is true.

YOU FAIL.

olcott

Jun 18, 2023, 12:41:36 PM
The actual question posed to Jack has no correct answer.
The actual question posed to anyone else is a semantically
different question even though the words are the same.

Richard Damon

Jun 18, 2023, 12:54:28 PM
But the question to Jack isn't the question you are actually saying
doesn't have an answer.

Yes, asking Jack (a volitional being) about what he will do in the
future can lead to this form of self-contradiction.

Asking a "Program" (which isn't volitional, but deterministic) doesn't,
since the answer was fixed when the program was written.

It is like asking Jack if the answer to the LAST question was no, while
constraining him to answer the same as that last question.

It is the constraint that gives the impossibility to get a right answer,
just as it is the fundamental constraint on a program to always give the
same answer on the same input that leads to the impossibility of H to
give the right answer.

You just seem to be unable to tell the difference between things that
are different, and don't seem to understand the fundamental nature of
programs. You don't seem to understand the actual nature of Truth and
Knowledge or Intelligence, thinking that an "Artificial Intelligence" is
just the same as an "Intelligent and volitional Being". This seems to be
your insanity.

olcott

Jun 18, 2023, 1:09:53 PM
The question posed to Jack does not have an answer because, within the
context in which it is posed to Jack, it is self-contradictory.
You can ignore that context matters, yet that is not any rebuttal.

Richard Damon

Jun 18, 2023, 1:46:04 PM
Right, but that has ZERO bearing on the Halting Problem, and the fact
that you don't see that and have gotten stuck on this Red Herring
just shows your ignorance.

You are just proving that your personal logic system is filled with all
the logical fallacies in the book, so we can't trust anything you say to
have actual meaning.

You have relegated yourself to the ash heap of history.

olcott

Jun 18, 2023, 2:05:30 PM
That is great; we made excellent progress on this.

When ChatGPT understood that Jack's question is self-contradictory for
Jack, it was also able to understand the following isomorphism:

For every H<n> on pathological input D<n> both Boolean return values
from H<n> are incorrect for D<n> proving that D<n> is isomorphic to a
self-contradictory question for every H<n>.

Richard Damon

Jun 18, 2023, 2:20:47 PM
No, because a given H<n> can only give one result, the result that its
code will generate. The other "possible output" is actually impossible
for THAT H<n> to generate, and thus talking about it doing so is invalid
logic.

For example, for your defined H in your sample code, since H(D,D)
returns 0, the correct answer is 1, so both answers are not incorrect,
only the answer that H gives.

Your logic is like asking what is the color of a black cat that is
white? The question has an illogical premise (that something that is
black can be white) just like your question does, that a given program
COULD return both answers. The things that produce the two answers are
different programs.

The key point is that whatever value a given H generates, the OTHER
value would have been correct, due to the "pathological" nature of D<n>.

Thus, for every question "Does D<n>(D<n>) Halt" there IS a correct
value that a correct halt decider should return. The problem is that, by
its code, H<n> doesn't happen to generate that value.

You are confusing the volitional Jack with the deterministic H<n>. Jack,
because his future choice isn't fixed, and he has free will to choose his
answer, sees the self-contradiction. H<n>, because it doesn't have
free will, and whose answer has been fixed by its programming, is just
wrong, and the correct answer does exist, it just doesn't give it.

You get yourself stuck on the WRONG question, the one actually put to the
free-will designer: what should I program my H to generate for a problem
generated by this template? Yes, THAT question is self-contradictory,
which shows that the programmer can't write a valid program to give the
right answer, because the correct answer is not computable. The key is
that the actual Halting Question can't possibly be asked until the
programmer commits themselves to a claimed answer, as the input program
can't exist until the claimed decider exists as an actual program, and
once that is done, the self-contradiction has been resolved, and the
decider is proven wrong.

olcott

Jun 18, 2023, 2:30:36 PM
In other words, you fail to understand that when Jack's question is posed
to someone else it remains self-contradictory.

Richard Damon

Jun 18, 2023, 2:43:25 PM
And you fail to understand that the nature of the halting question is
fundamentally different from the question to Jack.

The Question to Jack is about the future behavior of a volitional being,
so it doesn't have a "correct" answer until some point after it is
answered. The halting problem is about the results of a deterministic
computation that has a correct answer even somewhat before the question
can be asked (but maybe not until the question CAN be asked).

There are philosophical arguments about the Jack question asked to
someone besides Jack, if it even HAS a "Correct answer" at the point the
question is asked, since the answer doesn't actually HAVE a truth value
until Jack answers his next question.

On the other hand, the question about the behavior of D(D) has a correct
answer as soon as D is actually constructed (or defined) which requires
that H be constructed (or fully defined). At that point, the answer is
fixed, and just happens to be that which makes H incorrect (if H answers).

The question you keep on looking at isn't the actual halting question,
but a design question on the path of trying to make a correct decider H.
The fact that THIS question leads you to the impossible state shows that
there can not be a correct H, not that the Halting Question is
malformed. It just shows that the Halting Question isn't computable. It
is answerable, just maybe not by a "computation".

You are just showing your inability to actually distinguish between
things that are categorically different, because you have lost your
connection to reality.

olcott

Jun 18, 2023, 2:47:49 PM
Some of the elements of H<n>/D<n> are identical except for the return
value from H. In both of these cases the return value is incorrect.

Since I have just defined the set of every halting problem {decider /
input} pair that can possibly exist in any universe there is no rebuttal
of: What about this element of this set?

Richard Damon

Jun 18, 2023, 3:19:12 PM
Nope, can't be. The code of H<n> fixes the return value that H<n>
returns when given the input D<n>, D<n>, so there MUST be a difference
besides what the code returns.

Please show us a specific H<n> that returns both values. Must be exactly
the identical code for H in the two cases (since that is what defines a
program).

Then show where in the execution of this H it gets into two different
states from the same input (which is all that is allowed to affect the
results of the computation) so that it can return two different values.

This question has been put to you before, and you keep on ducking it
because it calls your bluff.

Failure to answer is an admission that you are just being a pathological
liar about the actual behavior of H.

>
> Since I have just defined the set of every halting problem {decider /
> input} pair that can possibly exist in any universe there is no rebuttal
> of: What about this element of this set?
>

Yes, you have postulated EVERY possible H and its corresponding D, and
ALL the H's return the wrong value for their D, and thus you have just
repeated the proof that a correct H cannot exist.

No element of your set meets the requirements.

Note "Programs" are not "Sets of Programs"; you are making a categorical
error.


olcott

Jun 18, 2023, 3:26:16 PM
The only difference between otherwise identical pairs of pairs H<n>/D<n>
and H<m>/D<m> is the single integer values of 0/1 within H<n> and H<m>
respectively thus proving that both True and False are the wrong return
value for the identical finite string pairs D<n>/D<m>.

Richard Damon

Jun 18, 2023, 4:10:53 PM
So they are different programs. Different is different. Almost the same
is not the same.

Unless you are claiming that 1 is the same as 0, they are different.

So, your claim is based on a LIE, or you are admitting you are insane.

olcott

Jun 18, 2023, 7:43:35 PM
The key difference with my work that is a true innovation in this field
is that H specifically recognizes self-contradictory inputs and rejects
them.

*Termination Analyzer H prevents Denial of Service attacks*
https://www.researchgate.net/publication/369971402_Termination_Analyzer_H_prevents_Denial_of_Service_attacks

Richard Damon

Jun 18, 2023, 7:59:16 PM
Except the input isn't self-contradictory, since the input can't exist
until H is defined, and once H is defined, the input has definite
behavior, so there is no self-contradiction possible, only error.

Since the H that you are analyzing isn't actually a program yet (its
behavior has not been fixed), the point where you hit your
contradiction is just in the DESIGN phase, showing that no H that meets
the requirements can be built, proving the theorem you claim to be
refuting, showing yourself to be a LIAR.

You are just showing you don't understand what a program actually is.

olcott

Jun 18, 2023, 11:31:24 PM
On 6/18/2023 9:38 PM, Richard Damon wrote:
> On 6/18/23 9:43 PM, olcott wrote:
>> On 6/18/2023 8:29 PM, Richard Damon wrote:
>>> On 6/18/23 8:59 PM, olcott wrote:
>>>> On 6/18/2023 7:01 PM, Richard Damon wrote:
>>>>> On 6/18/23 7:41 PM, olcott wrote:
>>>>>> On 6/18/2023 1:56 PM, Fritz Feldhase wrote:
>>>>>>> On Sunday, June 18, 2023 at 8:09:51 PM UTC+2, olcott wrote
>>>>>>> <nonsense>
>>>>>>>
>>>>>>> A possible "practical solution" for an actual "halt decider"
>>>>>>> might be something I will call a semi-halt-decider here.
>>>>>>>
>>>>>>> The latter allows for 3 answers (return values) when called:
>>>>>>>
>>>>>>> H(P, d) -> 1 "P(d) halts"
>>>>>>> H(P, d) -> -1 "P(d) doesn't halt."
>>>>>>> H(P, d) -> 0 "Don't know/can't tell if P(d) halts or not"
>>>>>>>
>>>>>>> Such a semi-halt-decider might be able to determine _the correct_
>>>>>>> answer (1, -1) for a big class of cases. On the other hand, it
>>>>>>> would always have the possibility to "give up" (for certain
>>>>>>> cases) and answer with 0: "Don't know/can't tell" (and this way be
>>>>>>> able to avoid INCORRECT ANSWERS concerning the actual behavior of
>>>>>>> P(d)).
>>>>>>>
>>>>>>
>>>>>> The key difference with my work that is a true innovation in this
>>>>>> field
>>>>>> is that H doesn't simply give up. H specifically recognizes self-
>>>>>> contradictory inputs and rejects them.
>>>>>>
>>>>>> *Termination Analyzer H prevents Denial of Service attacks*
>>>>>> https://www.researchgate.net/publication/369971402_Termination_Analyzer_H_prevents_Denial_of_Service_attacks
>>>>>>
>>>>>
>>>>>
>>>>> Except the input isn't self-contradictory, since the input can't
>>>>> exist until H is defined, and once H is defined, the input has
>>>>> definite behavior, so there is no self-contradiction possible,
>>>>> only error.
>>>> If I ask you what correct (yes or no) answer Jack could reply with,
>>>> exactly why can’t you answer this?
>>>
>>> He has no answer that is correct, but that doesn't matter and is just
>>> you falling into the fallacy of the Red Herring.
>>>
>> // The following is written in C
>> //
>> 01 typedef int (*ptr)(); // pointer to int function
>> 02 int H(ptr x, ptr y);  // uses x86 emulator to simulate its input
>> 03
>> 04 int D(ptr x)
>> 05 {
>> 06   int Halt_Status = H(x, x);
>> 07   if (Halt_Status)
>> 08     HERE: goto HERE;
>> 09   return Halt_Status;
>> 10 }
>> 11
>> 12 void main()
>> 13 {
>> 14   H(D,D);
>> 15 }
>>
>> Since the above H is an unspecified wildcard you are free to encode it
>> in any one of an infinite number of different ways and return any
>> Boolean value that you want.
>
> Nope, D isn't a PROGRAM until H is DEFINED.
That is why I triple dog dare you to define it or acknowledge that no
such program can possibly be defined because the input D to any
corresponding H is isomorphic to Jack's question posed to Jack.

Once we acknowledge that the halting problem input to H is an incorrect
question for H, then we can understand that this incorrect question is
aptly re-framed into the correct question:

Does input D halt on its input [GOOD INPUT], or is D [BAD INPUT] that
either fails to halt or defines a pathological relationship to H?

This does overcome Rice's theorem for at least the reduction of Rice's
theorem to the halting problem.

Does input D have semantic property S or is input D [BAD INPUT]?

Richard Damon

Jun 19, 2023, 7:38:32 AM
SO, you AGREE that a "Correct Halt Decider", as defined by the Halting
Problem, can't exist.

It is easy to make D a program, just define some H, any H, then D is a
valid program, and will either Halt or not. D's validity as a program is
NOT dependent on H getting the right answer. Thus an H that just
immediately returns 0 makes D a valid program.

>
> Once we acknowledge that the halting problem input to H is an incorrect
> to H then we can understand that this incorrect question is aptly re-
> framed into the correct question:

Why is it "Incorrect"? The fact that H can't give the right answer is a
problem with H, not with the input.

The definition of a "Valid Input" for H, is that it represents a Program
and its input. This call sequence does that, so the input is valid.

>
> Does input D halt on its input [GOOD INPUT] or is D [BAD INPUT] that
> either fails to halt or defines a pathological relationship to H.

And D DOES halt on its input, since it will "call" H(D,D), which your H
has been defined so that it will return 0 from that call.

There is nothing "BAD" about a D that doesn't halt, that just means it
is an input that H needs to "reject" (return the "Non-Halting" value
for). There is also nothing "Bad" about the "pathological" relationship
between D and H, as that is just part of "Any Program".

Remember, if you change H to be Hn, the non-aborting version of it, and
then make the Dn from that Hn, we find that Dn(Dn) will not halt, so Hn
should have returned 0, but it just never returns an answer, showing
that *H* is a defective machine, not meeting its requirements.

>
> This does overcome Rice's theorem for at least the reduction of Rice's
> theorem to the halting problem.
>
> Does input D have semantic property S or is input D [BAD INPUT]?
>

No, because Rice's theorem asks whether the input has Semantic Property S,
and a "pathological relationship" isn't considered a "BAD INPUT".

ALL PROGRAMS means ALL PROGRAMS, not all the ones I can handle.

IF you want to try to define a Semantic Property S that somehow includes
this pathology in its criteria, you need to FORMALLY define what you
mean by it. You also need to show that the property is still wholly
Semantic, and that you haven't given yourself a Syntactic property.

You also then need to show that you can get the correct answer for ALL
inputs; the Achilles' heel for a Halt Decider might not be the Achilles'
heel for your new decider, so just because you handle it, doesn't mean
you have PROVEN that you can answer that property.

olcott

Jun 19, 2023, 10:30:18 AM
I don't agree that your understanding of the halting problem is correct.
H is required to report on the actual behavior that it actually sees.

You and others are requiring H to report on behavior that it does not
see. You already also admitted that when H reports on this behavior that
it does not see, this changes that behavior, making its report incorrect.

Within the false hypothesis that H is incorrect to report that its input
does not halt, the only alternative is to change the meaning of what H
reports. When H becomes a [BAD INPUT] decider no one can correctly say
that H is wrong. This also refutes Rice, which is more important than
solving the halting problem because it has a much broader scope.

Termination Analyzer H determines the semantic property of
[GOOD INPUT] meaning that input D halts <and>

[BAD INPUT] meaning
(a) input D doesn't halt <or>
(b) D has a pathological relationship to H. This means that D calls H
and does the opposite of the Boolean value that H returns.

> It is easy to make D a program, just define some H, any H, then D is a
> valid program, and will either Halt or not. D's validity as a program is
> NOT dependent on H getting the right answer. Thus an H that just
> immediately returns 0 makes D a valid program.
>

H correctly determines that D has the semantic property of [BAD INPUT]
making Denial of Service (DoS) attack detector H correct to reject D.


>>
>> Once we acknowledge that the halting problem input to H is an incorrect
>> question for H, then we can understand that this incorrect question is
>> aptly re-framed into the correct question:
>
> Why is it "Incorrect"? The fact that H can't give the right answer is a
> problem with H, not with the input.
>

Then the problem with Jack's question is Jack, not the fact that Jack's
question is self-contradictory for Jack. Jack is simply too stupid to
give a correct yes or no answer to a self-contradictory question. We all
know that Jack's question has a correct answer, yet Jack is simply too
stupid to decide between yes and no.

> The definition of a "Valid Input" for H, is that it represents a Program
> and its input. This call sequence does that, so the input is valid.
>

A syntactically valid input is not the same as a semantically valid
input. Any input that makes both Boolean return values the wrong answer
is a semantically invalid input.

>>
>> Does input D halt on its input [GOOD INPUT] or is D [BAD INPUT] that
>> either fails to halt or defines a pathological relationship to H.
>
> And D DOES halt on its input, since it will "call" H(D,D), which your H
> has been defined so that it will return 0 from that call.
>

Which is a correct return value for the semantic property of [BAD INPUT].

> There is nothing "BAD" about a D that doesn't halt,

Sure everyone knows that Denial of Service attacks are great. My
hospital loved it when they had no access to patient records for several
days.

> that just means it
> is an input that H needs to "reject" (return the "Non-Halting" value
> for). There is also nothing "Bad" about the "pathological" relationship
> between D and H, as that is just part of "Any Program".
>

Yes that is true everyone loves successful Denial of Service attacks.
If there was a DoS detector that could correctly reject every
[malevolent input] people would really hate that. They love successful
DoS attacks.

> Remember, if you change H to be Hn, the non-aborting version of it, and
> then make the Dn from that Hn, we find that Dn(Dn) will not halt, so Hn
> should have returned 0, but it just never returns an answer, showing
> that *H* is a defective machine, not meeting its requirements.
>

When H reports on the semantic property of [BAD INPUT] the labels could
be switched to account for all of the people that love successful Denial
of Service attacks. Only inputs that allow DoS attacks are construed as
[GOOD INPUTS]. Inputs that simply halt are now called [BAD INPUTS].

H still correctly decides a semantic property of D, thus H still refutes
Rice.

>>
>> This does overcome Rice's theorem for at least the reduction of Rice's
>> theorem to the halting problem.
>>
>> Does input D have semantic property S or is input D [BAD INPUT]?
>>
>
> No, because Rice's theorem is does the input have Semantic Property S,
> and a "pathological relationship" isn't considered a "BAD INPUT".
>

That is the only reason that Rice has not been overcome. No one ever
thought of a way to exclude [BAD INPUTS] thus making semantic properties
undecidable. Once we do exclude [BAD INPUTS] then semantic properties
are decidable.

> ALL PROGRAMS means ALL PROGRAMS, not all the ones I can handle.
>

H correctly determines the semantic property of [BAD INPUT]; prior to my
work no H could ever correctly determine any semantic property. That H
does correctly determine at least a single semantic property, when Rice
claims that no H can ever determine any semantic property, refutes Rice.

> IF you want to try to define a Semantic Property S that somehow includes
> this pathology in its criteria, you need to FORMALLY define what you
> mean by it. You also need to show that the property is still wholly
> Semantic, and that you haven't given yourself a Syntactic property.
>

Whensoever any input to any decider calls this decider and does the
opposite of whatever Boolean value this decider returns, this input <is>
a pathological input. My H has been able to do that for more than two
years.

My system also works with embedded copies of deciders yet this makes the
code much more difficult to understand so I didn't implement it.

> You also then need to show that you can get the correct answer for ALL
> inputs, the Achilles' heel for a Halt Decider might not be the Achilles'
> heel for your new decider, so just because you handle it, doesn't mean
> you have PROVEN that you can answer that property.

H does correctly refute Rice's theorem for the halting problem's
pathological input. This is much more success than anyone else has ever
achieved. Once this success is acknowledged a well funded large team of
experts can work on extending my ideas.

Richard Damon

Jun 19, 2023, 8:45:08 PM
Where does THAT come from? It may only be ABLE to do so, but the
REQUIREMENT is the behavior of the actual machine.

You seem to have trouble with the English Language.

Please show me any reputable reference that says you get to disregard
the ACTUAL REQUIREMENTS because you can't see how to meet them.

>
> You and others are requiring H to report on behavior that it does not
> see. You already also admitted that when H reports on this behavior that
> it does not see that this changes this behavior that it does not see
> making its report incorrect.

Yes, because that is what the requirements say. The requirements are
what the requirements say, because those are the requirements needed to
solve the mathematical problems that a Halt Decider is hoped to be able
to help with.

>
> Within the false hypothesis that H is incorrect to report that its input
> does not halt, the only alternative is to change the meaning of what H
> reports. When H becomes a [BAD INPUT] decider no one can correctly say
> that H is wrong. This also refutes Rice which is more important that
> solving the halting problem because it has a much broader scope.

That isn't a "false hypothesis", it is a stated requirement.

Since D(D) Halts, by the definition of the problem, H, to be correct,
must report Halting.

Remember:
In computability theory, the halting problem is the problem of
determining, from a description of an arbitrary computer program and an
input, whether the program will finish running, or continue to run forever.

Thus the thing to look at is the PROGRAM itself and its behavior.
DEFINITION.

>
> Termination Analyzer H determines the semantic property of
> [GOOD INPUT] meaning that input D halts <and>

Since the machine represented by the input does Halt, that condition is
satisfied.

Note your bad terminology: "Inputs" are just data, and don't actually DO
anything. They can have "syntactic properties", but not "Behavior". They
can represent something that does have behavior, and from the definition
above, that is the machine they represent, NOT H's (partial) simulation
of them.

>
> [BAD INPUT] meaning
> (a) input D doesn't halt <or>
> (b) D has a pathological relationship to H. This means that D calls H
> and does the opposite of the Boolean value that H returns.

Which your H never actually confirms. Your H will also call an HH that
does what H says to be pathological too, so you fail on this side.

>
>> It is easy to make D a program, just define some H, any H, then D is a
>> valid program, and will either Halt or not. D's validity as a program
>> is NOT dependent on H getting the right answer. Thus an H that just
>> immediately returns 0 makes D a valid program.
>>
>
> H correctly determines that D has the semantic property of [BAD INPUT]
> making Denial of Service (DoS) attack detector H correct to reject D.

Which isn't a criterion for a Halt Decider, and as I just explained
above, you don't actually detect the pathological relationship, just
that D calls H.

>
>
>>>
>>> Once we acknowledge that the halting problem input to H is an incorrect
>>> question for H, then we can understand that this incorrect question is
>>> aptly re-framed into the correct question:
>>
>> Why is it "Incorrect"? The fact that H can't give the right answer is
>> a problem with H, not with the input.
>>
>
> Then the problem with Jack's question is Jack not the fact that Jack's
> question is self-contradictory for Jack. Jack is simply too stupid to
> give a correct yes or no answer to a self-contradictory question. We all
> know that Jack's question has a correct answer, yet Jack is simply too
> stupid to decide between yes and no.

The problem with "Jack's Question" is it asks about something that
doesn't have a correct answer NOW.

>
>> The definition of a "Valid Input" for H, is that it represents a
>> Program and its input. This call sequence does that, so the input is
>> valid.
>>
>
> A syntactically valid input is not the same as a semantically valid
> input. Any input that makes both Boolean return values the wrong answer
> is a semantically invalid input.

Nope, it is a PROGRAM, thus it is VALID. If you try to define it as not
valid, you are just admitting that H isn't a "Halt Decider" by the
definition of Computation Theory.

You clearly don't understand what you are talking about.

>
>>>
>>> Does input D halt on its input [GOOD INPUT] or is D [BAD INPUT] that
>>> either fails to halt or defines a pathological relationship to H.
>>
>> And D DOES halt on its input, since it will "call" H(D,D), which your
>> H has been defined so that it will return 0 from that call.
>>
>
> Which is a correct return value for the semantic property of [BAD INPUT].

But makes D(D) Halt, so it is the wrong answer for a Halt Decider.

You are just admitting that you have been lying about working on the
Halting Problem of Computation Theory, the one described by the Linz
paper you quote.

Fine, everything you have said thus becomes a LIE.

>
>> There is nothing "BAD" about a D that doesn't halt,
>
> Sure everyone knows that Denial of Service attacks are great. My
> hospital loved it when they had no access to patient records for several
> days.

Except the only DOS was to the Decider. If they just ran the program, it
would have ended just fine.

You just don't understand the problem you are talking about and thus you
keep lying about it. You can't use the "honest mistake" excuse, as the
errors have been pointed out, but you refuse to correct yourself.

>
>> that just means it is an input that H needs to "reject" (return the
>> "Non-Halting" value for). There is also nothing "Bad" about the
>> "pathological" relationship between D and H, as that is just part of
>> "Any Program".
>>
>
> Yes that is true everyone loves successful Denial of Service attacks.
> If there was a DoS detector that could correctly reject every
> [malevolent input] people would really hate that. They love successful
> DoS attacks.

But this isn't the DoS detector problem, which allows false positives.
This is the accurate Halt Decider problem, which H fails at.

You are just admitting that you have been LYING for years about what you
are working on.

>
>> Remember, if you change H to be the Hn, non-aborting version of it,
>> and the make the Dn from that Hn, we find that Dn(Dn) will not halt,
>> so Hn should have returned 0, but it just never returns an answer,
>> showing that *H* is a defective machine, not meeting its requirements.
>>
>
> When H reports on the semantic property of [BAD INPUT] the labels could
> be switched to account for all of the people that love successful Denial
> of Service attacks. Only inputs that allow DoS attacks are construed as
> [GOOD INPUTS]. Inputs that simply halt are now called [BAD INPUTS].
>
> H still correctly decides a semantic property of D, thus H still refutes
> Rice.


Nope. You can't refute Rice by saying that a machine gets one input right.

FALLACY of proof by example

You are just proving your logic system is full of fallacies.

>
>>>
>>> This does overcome Rice's theorem for at least the reduction of Rice's
>>> theorem to the halting problem.
>>>
>>> Does input D have semantic property S or is input D [BAD INPUT]?
>>>
>>
>> No, because Rice's theorem is does the input have Semantic Property S,
>> and a "pathological relationship" isn't considered a "BAD INPUT".
>>
>
> That is the only reason that Rice has not been overcome. No one ever
> thought of a way to exclude [BAD INPUTS] thus making semantic properties
> undecidable. Once we do exclude [BAD INPUTS] then semantic properties
> are decidable.

But your H doesn't successfully decide your property, as the DD that
does what H says is called "Bad input" even when it doesn't meet the
criteria you have defined.

>
>> ALL PROGRAMS means ALL PROGRAMS, not all the ones I can handle.
>>
>
> H correctly determines the semantic property of [BAD INPUT] prior to my
> work no H could ever correctly determine any semantic property. That H
> does correctly determine at least a single semantic property when Rice
> claims that no H can ever determine any semantic property refutes Rice.
>

Nope, H gets DD wrong.

>> IF you want to try to define a Semantic Property S that somehow
>> includes this pathology in its criteria, you need to FORMALLY define
>> what you mean by it. You also need to show that the property is still
>> wholly Semantic, and that you haven't given yourself a Syntactic
>> property.
>>
>
> When-so-ever any input to any decider calls this decider with an input
> that does the opposite of whatever Boolean value that this decider
> returns this input <is> a pathological input. My H has been able to do
> that for more than two years.

But it fails on DD, so it still fail.

>
> My system also works with embedded copies of deciders yet this makes the
> code much more difficult to understand so I didn't implement it.

I don't think it does. I think you don't understand the nature of that
problem.

>
>> You also then need to show that you can get the correct answer for ALL
>> inputs, the Achilles' heel for a Halt Decider might not be the Achilles'
>> heel for your new decider, so just because you handle it, doesn't mean
>> you have PROVEN that you can answer that property.
>
> H does correctly refute Rice's theorem for the halting problem's
> pathological input. This is much more success than anyone else has ever
> achieved. Once this success is acknowledged a well funded large team of
> experts can work on extending my ideas.
>

Nope. Remember, by YOUR definition of Pathological, your H fails for
DD(DD) as described above.

olcott

Jun 19, 2023, 11:57:39 PM
When the requirements are self-contradictory then they are incorrect.

>>
>> Within the false hypothesis that H is incorrect to report that its input
>> does not halt, the only alternative is to change the meaning of what H
>> reports. When H becomes a [BAD INPUT] decider no one can correctly say
>> that H is wrong. This also refutes Rice which is more important that
>> solving the halting problem because it has a much broader scope.
>
> That isn't a "false hypothesis", it is a stated requirement.
>
> Since D(D) Halts, by the definition of the problem, H, to be correct,
> must report Halting.
>
> Remember:
> In computability theory, the halting problem is the problem of
> determining, from a description of an arbitrary computer program and an
> input, whether the program will finish running, or continue to run forever.
>
> Thus the thing to look at is the PROGRAM itself and its behavior.
> DEFINITION.
>

When the requirements are self-contradictory then they are incorrect.
When the bible says that God <is> and God <has> wrath the bible lies.

>>
>> Termination Analyzer H determines the semantic property of
>> [GOOD INPUT] meaning that input D halts <and>
>
> Since the machine represented by the input does Halt, that condition is
> satisfied.
>
> Note your bad terminology: "Inputs" are just data, and don't actually DO
> anything. They can have "syntactic properties", but not "Behavior". They
> can represent something that does have behavior, and from the definition
> above, that is the machine they represent, NOT H's (partial) simulation
> of them.
>

Simply ignoring that a question is self-contradictory doesn't make it
not self-contradictory.

>>
>> [BAD INPUT] meaning
>> (a) input D doesn't halt <or>
>> (b) D has a pathological relationship to H. This means that D calls H
>> and does the opposite of the Boolean value that H returns.
>
> Which your H never actually confirms. Your H will also call an HH that
> does what H says to be pathological too, so you fail on this side.
>
>>
>>> It is easy to make D a program, just define some H, any H, then D is
>>> a valid program, and will either Halt or not. D's validity as a
>>> program is NOT dependent on H getting the right answer. Thus an H
>>> that just immediately returns 0 makes D a valid program.
>>>
>>
>> H correctly determines that D has the semantic property of [BAD INPUT]
>> making Denial of Service (DoS) attack detector H correct to reject D.
>
> Which isn't a criterion for a Halt Decider, and as I just explained
> above, you don't actually detect the pathological relationship, just
> that D calls H.
>

Instead it refutes Rice's theorem.

>>
>>
>>>>
>>>> Once we acknowledge that the halting problem input to H is an incorrect
>>>> question to H then we can understand that this incorrect question is
>>>> aptly re-framed into the correct question:
>>>
>>> Why is it "Incorrect"? The fact that H can't give the right answer is
>>> a problem with H, not with the input.
>>>
>>
>> Then the problem with Jack's question is Jack not the fact that Jack's
>> question is self-contradictory for Jack. Jack is simply too stupid to
>> give a correct yes or no answer to a self-contradictory question. We all
>> know that Jack's question has a correct answer, yet Jack is simply too
>> stupid to decide between yes and no.
>
> The problem with "Jack's Question" is it asks about something that
> doesn't have a correct answer NOW.
>

Sure it does. You ask three people:
(a) Bill says Jack will say yes
(b) John says that Jack will say no
(c) Harry says Jack will say nothing or something besides yes or no
One of them is right.

Because our imaginary Jack is fictional Harry was right.

>>
>>> The definition of a "Valid Input" for H, is that it represents a
>>> Program and its input. This call sequence does that, so the input is
>>> valid.
>>>
>>
>> A syntactically valid input is not the same as a semantically valid
>> input. Any input that makes both Boolean return values the wrong answer
>> is a semantically invalid input.
>
> Nope, it is a PROGRAM, thus it is VALID. If you try to define it as not
> valid, you are just admitting that H isn't a "Halt Decider" by the
> definition of Computation Theory.
>

Saying that it is valid because it is a program simply ignores bugs and
indicates you know hardly anything about programming.

> You clearly don't understand what you are talking about.
>
>>
>>>>
>>>> Does input D halt on its input [GOOD INPUT] or is D [BAD INPUT] that
>>>> either fails to halt or defines a pathological relationship to H.
>>>
>>> And D DOES halt on its input, since it will "call" H(D,D), which your
>>> H has been defined so that it will return 0 from that call.
>>>
>>
>> Which is a correct return value for the semantic property of [BAD INPUT].
>
> But makes D(D) Halt, so it is the wrong answer for a Halt Decider.
>

Not at all. 0 means halts, or D does the opposite of whatever Boolean
value that H returns.

> You are just admitting that you have been lying about working on the
> Halting Problem of Computation Theory, the one described by the Linz
> paper you quote.
>

When I point out that the conventional halting problem is self-
contradictory, this is the actual halting problem that I am referring to.

Don Stockbauer

Jun 20, 2023, 3:33:59 AM
what's this chatGPT thing? I've never heard of it.

vallor

Jun 20, 2023, 7:16:28 AM
On Tue, 20 Jun 2023 00:33:57 -0700 (PDT), Don Stockbauer wrote:

>
> what's this chatGPT thing? I've never heard of it.

There's an LLM shell discussion over in comp.ai.shells...

ChatGPT and Bard, mostly.

c.a.shells added, fu2 set.

--
-v

Richard Damon

Jun 20, 2023, 7:19:14 AM
What's self-contradictory about the ACTUAL QUESTION that is asked?

Does the program represented by the input Halt?

Since the H you claim to be correct is the H that returns 0 when asked
H(D,D), it turns out that the program represented by that input, that is
D(D), does Halt. No contradiction in the actual question.

The contradiction you run into is in trying to design an H to be
correct, and THAT says that it is impossible to make such an H.

>
>>>
>>> Within the false hypothesis that H is incorrect to report that its input
>>> does not halt, the only alternative is to change the meaning of what H
>>> reports. When H becomes a [BAD INPUT] decider no one can correctly say
>> that H is wrong. This also refutes Rice which is more important than
>>> solving the halting problem because it has a much broader scope.
>>
>> That isn't a "false hypothesis", it is a stated requirement.
>>
>> Since D(D) Halts, by the definition of the problem, H, to be correct,
>> must report Halting.
>>
>> Remember:
>> In computability theory, the halting problem is the problem of
>> determining, from a description of an arbitrary computer program and
>> an input, whether the program will finish running, or continue to run
>> forever.
>>
>> Thus the thing to look at is the PROGRAM itself and its behavior.
>> DEFINITION.
>>
>
> When the requirements are self-contradictory then they are incorrect.
> When the bible says that God <is> and God <has> wrath the bible lies.

Again, where is the actual contradiction in the actual question asked?
You just don't understand English.

That goes for how you read the Bible, and I guess you will find out
about God's wrath when he decides your time is up.

>
>>>
>>> Termination Analyzer H determines the semantic property of
>>> [GOOD INPUT] meaning that input D halts <and>
>>
>> Since the machine represented by the input does Halt, that condition
>> is satisfied.
>>
>> Note your bad terminology: "Inputs" are just data, and don't actually
>> DO anything. They can have "syntactic properties", but not "Behavior".
>> They can represent something that does have behavior, and from the
>> definition above, that is the machine they represent, NOT H's (partial)
>> simulation of them.
>>
>
> Simply ignoring that a question is self-contradictory doesn't make it
> not self-contradictory.

And changing the actual question doesn't say anything about the actual
question. You have just strawmanned yourself into fallacious thinking.

>
>>>
>>> [BAD INPUT] meaning
>>> (a) input D doesn't halt <or>
>>> (b) D has a pathological relationship to H. This means that D calls H
>>> and does the opposite of the Boolean value that H returns.
>>
>> Which your H never actually confirms. Your H will also call an HH that
>> does what H says to be pathological too, so you fail at this side.
>>
>>>
>>>> It is easy to make D a program, just define some H, any H, then D is
>>>> a valid program, and will either Halt or not. D's validity as a
>>>> program is NOT dependent on H getting the right answer. Thus an H
>>>> that just immediately returns 0 makes D a valid program.
>>>>
>>>
>>> H correctly determines that D has the semantic property of [BAD INPUT]
>>> making Denial of Service (DoS) attack detector H correct to reject D.
>>
>> Which isn't a criterion for a Halt Decider, and as I just explained
>> above, you don't actually detect the pathological relationship, just
>> that D calls H.
>>
>
> Instead it refutes Rice's theorem.

Nope. But you have shown yourself not able to understand what Rice's
theorem is, or what it takes to actually "Refute" something.

Maybe when you show enough intelligence to understand it, I will explain
it again.

>
>>>
>>>
>>>>>
>>>>> Once we acknowledge that the halting problem input to H is an
>>>>> incorrect question to H then we can understand that this incorrect
>>>>> question is aptly re-framed into the correct question:
>>>>
>>>> Why is it "Incorrect"? The fact that H can't give the right answer
>>>> is a problem with H, not with the input.
>>>>
>>>
>>> Then the problem with Jack's question is Jack not the fact that Jack's
>>> question is self-contradictory for Jack. Jack is simply too stupid to
>>> give a correct yes or no answer to a self-contradictory question. We all
>>> know that Jack's question has a correct answer, yet Jack is simply too
>>> stupid to decide between yes and no.
>>
>> The problem with "Jack's Question" is it asks about something that
>> doesn't have a correct answer NOW.
>>
>
> Sure it does you ask three people
> (a) Bill says Jack will say yes
> (b) John says that Jack will say no
> (c) Harry say Jack will say nothing or something besides yes or no
> One of them is right.
>
> Because our imaginary Jack is fictional Harry was right.

Nope, since the answer hasn't been determined yet, it has no correct
answer NOW. One of them WILL be right, when the answer is determined, but
until then, there is no correct answer. Since Jack has Free-Will, it is
impossible, even theoretically, to determine what the answer will be.

Now, if you believe in fatalism, and that Jack has no free-will, then
you could argue that one of them is correct now, but then nothing we do
matters, as it was all pre-ordained and nothing matters.

The question about the machine is different, as it has no free-will, and
thus we CAN know NOW what it will do in the future.

>
>>>
>>>> The definition of a "Valid Input" for H, is that it represents a
>>>> Program and its input. This call sequence does that, so the input is
>>>> valid.
>>>>
>>>
>>> A syntactically valid input is not the same as a semantically valid
>>> input. Any input that makes both Boolean return values the wrong answer
>>> is a semantically invalid input.
>>
>> Nope, it is a PROGRAM, thus it is VALID. If you try to define it as
>> not valid, you are just admitting that H isn't a "Halt Decider" by the
>> definition of Computation Theory.
>>
>
> Saying that it is valid because it is a program simply ignores bugs and
> indicates you know hardly anything about programming.

A program with bugs still has defined behavior (per computation theory).
Some bugs may cause it to have unexpected inputs (like reading from
uninitialized memory), but the behavior is still fully defined.


>
>> You clearly don't understand what you are talking about.

So, show me a program that meets the requirements of computability theory
that can give different outputs for the same inputs.

YOU are the one that doesn't understand what you are talking about.

>>
>>>
>>>>>
>>>>> Does input D halt on its input [GOOD INPUT] or is D [BAD INPUT] that
>>>>> either fails to halt or defines a pathological relationship to H.
>>>>
>>>> And D DOES halt on its input, since it will "call" H(D,D), which
>>>> your H has been defined so that it will return 0 from that call.
>>>>
>>>
>>> Which is a correct return value for the semantic property of [BAD
>>> INPUT].
>>
>> But makes D(D) Halt, so it is the wrong answer for a Halt Decider.
>>
>
> Not at all. 0 means halts, or D does the opposite of whatever Boolean
> value that H returns.

But that isn't the question. You can't claim your answer is "Right" when
it isn't the answer to the question that was required.

You are just showing how bad your logic is.

>
>> You are just admitting that you have been lying about working on the
>> Halting Problem of Computation Theory, the one described by the Linz
>> paper you quote.
>>
>
> When I point out that the conventional halting problem is self-
> contradictory, this is the actual halting problem that I am referring to.

So, you admit that you aren't working on the actual Halting Problem, and
nothing you have said applies, and you have just been LYING about
everything for the last decades.

I guess by that same logic, you aren't talking about actual "Correct
Reasoning" because you have your own idea of what "Correct" is, and what
"Truth" is.

Thus, everything you have said is WORTHLESS to anyone else.


Note, as I have pointed out many times, the ACTUAL Halting Question
isn't self-contradictory; you have just waylaid yourself by
misunderstanding what the question actually is, and thus wasted your life.

olcott

Jun 20, 2023, 11:06:51 AM
On 6/19/2023 3:08 PM, Ben Bacarisse wrote:
> Fritz Feldhase <franz.fri...@gmail.com> writes:
>
>> On Monday, June 19, 2023 at 5:58:39 PM UTC+2, olcott wrote:
>>
>>> the full semantics of the question <bla>
>>
>> Look, dumbo, we are asking the simple question: "Does D(D) halt?"
>>
>> Now, D(D) either halts or doesn't halt.
>>
>> Hence the CORRECT yes/no-answer to the question "Does D(D) halt?" is
>> "yes" iff D(D) halts and "no" if D(D) doesn't halt.
>
> Just a reminder that you are arguing with someone who has declared that
> the wrong answer is the right one:
>
> Me: "do you still assert that [...] false is the "correct" answer even
> though P(P) halts?"
>
> PO: Yes that is the correct answer even though P(P) halts.
>
> (Back then, D was called P.)
>
> This was not a slip of the tongue. He has been quite clear that he is
> talking about something other than what the world calls halting. It's
> about what /would/ happen if the program were slightly different, not
> about what actually happens:
>
> PO: "A non-halting computation is every computation that never halts
> unless its simulation is aborted. This maps to every element of the
> conventional halting problem set of non-halting computations and a
> few more."
>
> He has been (eventually) perfectly clear -- PO's "Other Halting" is not
> halting, which is why false can be the correct answer for some halting
> computations. The only mystery is why anyone still wants to talk about
> POOH.
>

Stop doing this Ben !!!

When we use the criterion:
Can D, correctly simulated by H, ever terminate normally?

After N steps of correct simulation the execution trace of D proves that
D cannot possibly reach its final instruction and terminate normally in
any finite number of steps.

This criterion matches non-halting inputs and it also matches the cases
where the input D has been intentionally defined to do the opposite of
whatever Boolean value that H returns.

When H returns 1 it means that its input halts and when H returns 0
it means that either its input does not halt or D was intentionally
defined to do the opposite of whatever Boolean value that H returns.

To the best of my knowledge no one has ever made this much progress on
the halting problem's pathological input. To the best of my knowledge
everyone else was completely stumped by the halting problem's
pathological input.

olcott

Jun 20, 2023, 11:10:01 AM
D was intentionally defined to do the opposite
of whatever Boolean value that H returns.

D was intentionally defined to do the opposite
of whatever Boolean value that H returns.

D was intentionally defined to do the opposite
of whatever Boolean value that H returns.

D was intentionally defined to do the opposite
of whatever Boolean value that H returns.

D was intentionally defined to do the opposite
of whatever Boolean value that H returns.


When we use the criterion:
Can D, correctly simulated by H, ever terminate normally?

After N steps of correct simulation the execution trace of D proves that
D cannot possibly reach its final instruction and terminate normally in
any finite number of steps.

This criterion matches non-halting inputs and it also matches the cases
where the input D has been intentionally defined to do the opposite of
whatever Boolean value that H returns.

When H returns 1 it means that its input halts and when H returns 0
it means that either its input does not halt or D was intentionally
defined to do the opposite of whatever Boolean value that H returns.

To the best of my knowledge no one has ever made this much progress on
the halting problem's pathological input. To the best of my knowledge
everyone else was completely stumped by the halting problem's
pathological input.

Richard Damon

Jun 20, 2023, 11:48:26 AM
So?

How does that make the question self-contradictory?

For any D defined (which first requires H to be fully defined) the
behavior of D(D) is precisely defined, so no self-contradiction to the
question of "Does the machine described by this input Halt?"

Where is the ACTUAL contradiction to that answer? Remember, at the point
the question can be asked, H needs to be a DEFINED program, not just some
nebulous concept of how to do it.
>
>
> When we use the criterion:
> Can D, correctly simulated by H, ever terminate normally?

Which isn't the criterion, so you are just admitting that you have been
lying for years about what you are doing.

Only an Idiot or a Pathological Liar thinks they can change the question
and still be working on the original problem.

>
> After N steps of correct simulation the execution trace of D proves that
> D cannot possibly reach its final instruction and terminate normally in
> any finite number of steps.

No, it proves that H can not simulate D to its final instruction.

D reaches that final instruction just fine.

You can't seem to tell the difference between REALITY (the actual
execution of the machine) and FANTASY (the partial simulation of the
input done by H).

This just shows your lack of understanding of how things actually work.

>
> This criterion matches non-halting inputs and it also matches the cases
> where the input D has been intentionally defined to do the opposite of
> whatever Boolean value that H returns.

But doesn't match ALL input, so it is a LIE to say it is equivalent.

>
> When H returns 1 it means that its input halts and when H returns 0
> it means that either its input does not halt or D was intentionally
> defined to do the opposite of whatever Boolean value that H returns.

So, H needs to return BOTH 0 and 1 for this criterion, which is
self-contradictory, so the criterion is invalid. PERIOD.

>
> To the best of my knowledge no one has ever made this much progress on
> the halting problem's pathological input. To the best of my knowledge
> everyone else was completely stumped by the halting problem's
> pathological input.
>
>

Maybe to the best of your knowledge, but your work is total garbage and
the actually useful ideas that you are using are quite old. The fact you
don't know about them just shows your ignorance.

Richard Damon

Jun 20, 2023, 11:48:29 AM
So you are ADMITTING to working on a different problem, and lying about
what you are doing.

Ben is just pointing out the ERRORS in your logic

The fact you can't see that just shows your lack of understanding.

Just because you don't understand something doesn't make it
"inconsistent" or "invalid", the fact that only YOU don't understand it
shows that the problem is with you.

Ben Bacarisse

Jun 20, 2023, 12:02:55 PM
Richard Damon <Ric...@Damon-Family.org> writes:

>> On 6/19/2023 3:08 PM, Ben Bacarisse wrote:

>>> Me: "do you still assert that [...] false is the "correct" answer even
>>>      though P(P) halts?"
>>>
>>> PO: Yes that is the correct answer even though P(P) halts.
<cut>
>>> This was not a slip of the tongue.  He has been quite clear that he is
>>> talking about something other than what the world calls halting.  It's
>>> about what /would/ happen if the program were slightly different, not
>>> about what actually happens:
>>>
>>> PO: "A non-halting computation is every computation that never halts
>>>      unless its simulation is aborted.  This maps to every element of the
>>>      conventional halting problem set of non-halting computations and a
>>>      few more."

> Ben is just pointing out the ERRORS in your logic

I don't think I pointed to any errors of logic. I just quoted PO so
that readers can see what he's talking about.

Why do you keep making posts with personally derogatory subject lines?
You are just amplifying his nasty voice.

--
Ben.

olcott

Jun 20, 2023, 1:25:31 PM
Quit doing that Ben !!!
Quit doing that Ben !!!
Quit doing that Ben !!!
Quit doing that Ben !!!
Quit doing that Ben !!!
Quit doing that Ben !!!
Quit doing that Ben !!!

olcott

Jun 20, 2023, 3:57:57 PM
On 6/19/2023 3:08 PM, Ben Bacarisse wrote:
> Fritz Feldhase <franz.fri...@gmail.com> writes:
>
>> On Monday, June 19, 2023 at 5:58:39 PM UTC+2, olcott wrote:
>>
>>> the full semantics of the question <bla>
>>
>> Look, dumbo, we are asking the simple question: "Does D(D) halt?"
>>
>> Now, D(D) either halts or doesn't halt.
>>
>> Hence the CORRECT yes/no-answer to the question "Does D(D) halt?" is
>> "yes" iff D(D) halts and "no" if D(D) doesn't halt.
>
> Just a reminder that you are arguing with someone who has declared that
> the wrong answer is the right one:
>
> Me: "do you still assert that [...] false is the "correct" answer even
> though P(P) halts?"
>
> PO: Yes that is the correct answer even though P(P) halts.
>

Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to
discourage honest dialogue]

*Ben Bacarisse targets my posts to discourage honest dialogue*
*Ben Bacarisse targets my posts to discourage honest dialogue*
*Ben Bacarisse targets my posts to discourage honest dialogue*

When Ben pointed out that H(P,P) reports that P(P) does not halt when
P(P) does halt, this seems to be a contradiction to people that lack a
complete understanding.

Because of this I changed the semantic meaning of a return value of 0
from H to mean either
(a) that P(P) does not halt <or>
(b) P(P) specifically targets H to do the opposite of whatever Boolean
value that H returns.

When H(P,P) reports that P correctly simulated by H cannot possibly
reach its own last instruction, this is an easily verified fact, thus
P(P) does not halt from the point of view of H.

When H returns 0 for input P it means either that P does not halt or
that P specifically targets H to do the opposite of whatever Boolean
value H returns; not even people with little understanding can
say that this is contradictory.

olcott

Jun 20, 2023, 3:59:44 PM
On 6/19/2023 3:08 PM, Ben Bacarisse wrote:
> Fritz Feldhase <franz.fri...@gmail.com> writes:
>
>> On Monday, June 19, 2023 at 5:58:39 PM UTC+2, olcott wrote:
>>
>>> the full semantics of the question <bla>
>>
>> Look, dumbo, we are asking the simple question: "Does D(D) halt?"
>>
>> Now, D(D) either halts or doesn't halt.
>>
>> Hence the CORRECT yes/no-answer to the question "Does D(D) halt?" is
>> "yes" iff D(D) halts and "no" if D(D) doesn't halt.
>
> Just a reminder that you are arguing with someone who has declared that
> the wrong answer is the right one:
>
> Me: "do you still assert that [...] false is the "correct" answer even
> though P(P) halts?"
>
> PO: Yes that is the correct answer even though P(P) halts.
>


olcott

Jun 20, 2023, 4:00:55 PM

Richard Damon

Jun 20, 2023, 4:34:42 PM
On 6/20/23 3:57 PM, olcott wrote:
> On 6/19/2023 3:08 PM, Ben Bacarisse wrote:
>> Fritz Feldhase <franz.fri...@gmail.com> writes:
>>
>>> On Monday, June 19, 2023 at 5:58:39 PM UTC+2, olcott wrote:
>>>
>>>> the full semantics of the question <bla>
>>>
>>> Look, dumbo, we are asking the simple question: "Does D(D) halt?"
>>>
>>> Now, D(D) either halts or doesn't halt.
>>>
>>> Hence the CORRECT yes/no-answer to the question "Does D(D) halt?" is
>>> "yes" iff D(D) halts and "no" if D(D) doesn't halt.
>>
>> Just a reminder that you are arguing with someone who has declared that
>> the wrong answer is the right one:
>>
>> Me: "do you still assert that [...] false is the "correct" answer even
>>      though P(P) halts?"
>>
>> PO: Yes that is the correct answer even though P(P) halts.
>>
>
> Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to
> discourage honest dialogue]
>
> *Ben Bacarisse targets my posts to discourage honest dialogue*
> *Ben Bacarisse targets my posts to discourage honest dialogue*
> *Ben Bacarisse targets my posts to discourage honest dialogue*

No, YOU DO by claiming your words don't actually mean what they say.

>
> When Ben pointed out that H(P,P) reports that P(P) does not halt when
> P(P) does halt this seems to be a contradiction to people that lack a
> complete understanding.

But since P(P) (now D(D) ) does halt, how do you explain that H saying
it doesn't is correct?

>
> Because of this I changed the semantic meaning of a return value of 0
> from H to mean either

So you are admitting to LYING about the problem you are doing.

OLCOTT --- ADMITTED LIAR

olcott

Jun 20, 2023, 4:42:57 PM
When H(P,P) reports that P correctly simulated by H cannot possibly
reach its own last instruction, this is an easily verified fact, thus
P(P) does not halt from the point of view of H.

This is the same thing as the Facebook post where two people are looking
at the same symbol that is a "9" or a "6" depending on your point of
view.

Richard Damon

Jun 20, 2023, 4:52:39 PM
Which isn't the Halting Problem criterion, so you are lying about working
on the halting problem.

Note also, your H never actually DOES a "Correct Simulation" if it answers
the question, so your criterion is just invalid, so again, YOU LIE.

>
> This is the same thing as the Facebook post where two people are looking
> at the same symbol that is a "9" or a "6" depending on your point of
> view.
>

Nope. You are just too stupid to think.

You are so stupid, you don't see that you are lying, which is why you
are a pathological liar. You are just proving it.

olcott

Jun 20, 2023, 5:39:16 PM
Try and explain how any H can be defined that can be embedded
within Linz Ĥ such that embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ transitions to Ĥ.qy or Ĥ.qn
consistently with the behavior of Ĥ applied to ⟨Ĥ⟩.

If it is impossible to do this then you have affirmed that ⟨Ĥ⟩ ⟨Ĥ⟩ is a
self-contradictory input to embedded_H.

If it is possible to do this then explain the details of how it is done.

https://www.liarparadox.org/Linz_Proof.pdf

Once we know that the halting problem question is an incorrect question
then we can transform it into a correct question.

Richard Damon

Jun 20, 2023, 5:53:36 PM
It can't, that is what the Theorem Proves.

That is because the Halting Function just isn't computable.

>
> If it is impossible to do this then you have affirmed that ⟨Ĥ⟩ ⟨Ĥ⟩ is a
> self-contradictory input to embedded_H.

Nope, because it just doesn't exist.

Since no H can exist that meets the requirements, an H that meets the
requirements doesn't exist, and so no H^ exists.

>
> If it is possible to do this then explain the details of how it is done.
>
> https://www.liarparadox.org/Linz_Proof.pdf
>
> Once we know that the halting problem question is an incorrect question
> then we can transform it into a correct question.
>

But it isn't an "Incorrect Question", but the definition of what a
"Correct Question" is.

Remember, the Question of the Halting Problem Theorem is: Can an H exist
that meets the requirements?

This Question has an answer of NO.

The Question of the Requirements is to decide if a given input will
Halt or Not.

This question has an answer for any input you can actually create.

The answer for the D built on your claimed H, is that it Halts, while
your claimed H says it doesn't.

The "requirement" you are claiming is that we must create a halt
decider for this template. There is no such requirement, since the
answer to the Halting Problem Theorem is NO, and your thinking is just
stuck in a rabbit hole trying to require the impossible, because you
refuse to face the reality that some things are just impossible, and
that is ok.

This is perhaps part of your mental defect.

olcott

Jun 20, 2023, 6:07:53 PM
That is exactly analogous to:
(1) Can anyone correctly answer this question:
(2) Will your answer to this question be no?

The answer to (1) is "no" only because (2) is self-contradictory.

Richard Damon

Jun 20, 2023, 6:52:07 PM
Nope, totally different questions, but you are too stupid to understand.

The question is NOT about some future event, but about something that
has already been determined. To ask about a machine, the machine must
exist, and thus the answer is fixed.

We conventionally talk about the machine's behavior in the future, as
there is no sense deciding on a machine we have already run, but its
behavior is NOT just in the future, but was fixed as soon as the machine
was created.

Not so with a question about a volitional being's future behavior.

Thus, the questions are VERY different.


Maybe you are just stuck on the idea of Free Will and Determinism and
can't figure out what is ruled by what.

vallor

Jun 21, 2023, 3:10:29 PM
On Sat, 17 Jun 2023 00:54:32 -0500, olcott wrote:

> ChatGPT:
> “Therefore, based on the understanding that self-contradictory
> questions lack a correct answer and are deemed incorrect, one could
> argue that the halting problem's pathological input D can be
> categorized as an incorrect question when posed to the halting
> decider H.”
>
> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a It did
> not leap to this conclusion it took a lot of convincing.

Chatbots are highly unreliable at reasoning. They are designed
to give you the illusion that they know what they're talking about,
but they are the world's best BS artists.

(Try playing a game of chess with ChatGPT, you'll see what I mean.)

--
-v

vallor

Jun 21, 2023, 3:23:54 PM
On Wed, 21 Jun 2023 19:10:26 -0000 (UTC), vallor wrote:
> Chatbots are highly unreliable at reasoning. They are designed to give
> you the illusion that they know what they're talking about,
> but they are the world's best BS artists.
>
> (Try playing a game of chess with ChatGPT, you'll see what I mean.)

Can't even get two moves into the game:

https://chat.openai.com/share/8a315ec0-f0c4-4a4e-8019-dcb070790e5c

--
-v

olcott

Jun 21, 2023, 3:59:55 PM
I already know that, and much worse: they simply make up facts on the
fly, citing purely fictional textbooks that have photos and backstories
for the purely fictional authors. The fake textbooks themselves are
complete and convincing.

In my case ChatGPT was able to be convinced by clearly correct
reasoning.

https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
It did not leap to this conclusion it took a lot of convincing.

People are not convinced by this same reasoning only because they spend
99.9% of their attention on rebuttal thus there is not enough attention
left over for comprehension.

The only reason that the halting problem cannot be solved is that the
halting question is phrased incorrectly. The way that the halting
problem is phrased allows inputs that contradict every Boolean return
value from a set of specific deciders.

Each of the halting problem's instances is exactly isomorphic to
requiring a correct answer to this question:
Is this sentence true or false: "This sentence is not true".

Richard Damon

Jun 21, 2023, 7:01:07 PM
On 6/21/23 3:59 PM, olcott wrote:
> On 6/21/2023 2:10 PM, vallor wrote:
>> On Sat, 17 Jun 2023 00:54:32 -0500, olcott wrote:
>>
>>> ChatGPT:
>>>      “Therefore, based on the understanding that self-contradictory
>>>      questions lack a correct answer and are deemed incorrect, one could
>>>      argue that the halting problem's pathological input D can be
>>>      categorized as an incorrect question when posed to the halting
>>>      decider H.”
>>>
>>> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a It did
>>> not leap to this conclusion it took a lot of convincing.
>>
>> Chatbots are highly unreliable at reasoning.  They are designed
>> to give you the illusion that they know what they're talking about,
>> but they are the world's best BS artists.
>>
>> (Try playing a game of chess with ChatGPT, you'll see what I mean.)
>>
>
> I already know that and much worse than that they simply make up facts
> on the fly citing purely fictional textbooks that have photos and back
> stories for the purely fictional authors. The fake textbooks themselves
> are complete and convincing.
>
> In my case ChatGPT was able to be convinced by clearly correct
> reasoning.
>

So, you admit that they will lie and tell you what you want to hear, yet
you think the fact that it agrees with you means something?

> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
> It did not leap to this conclusion it took a lot of convincing.

Which is a good sign that it was learning what you wanted it to say, so
it finally said it.

>
> People are not convinced by this same reasoning only because they spend
> 99.9% of their attention on rebuttal thus there is not enough attention
> left over for comprehension.

No, people can apply REAL "Correct Reasoning" and see the error in what
you call "Correct Reasoning". Your problem is that your idea of correct
isn't.

>
> The only reason that the halting problem cannot be solved is that the
> halting question is phrased incorrectly. The way that the halting
> problem is phrased allows inputs that contradict every Boolean return
> value from a set of specific deciders.

Nope, it is phrased exactly as needed. Your alterations allow the
decider to give a false answer and still be considered "correct" by your
faulty logic.

>
> Each of the halting problems instances is exactly isomorphic to
> requiring a correct answer to this question:
> Is this sentence true or false: "This sentence is not true".
>

Nope.

How is "Does the machine represented by the input to the decider halt?"
isomorphic to your statement?

Note, the actual Halting Problem question always has a definite answer.

Your claimed isomorphic question does not.

So they CAN'T be isomorphic.

Note, your altered question of what H can return isn't the actual
question, but you don't seem to be able to understand that.

Your question is asked before H exists, and the impossibility of
answering it shows that a correct H can't actually exist.

The actual question can only be asked once H is fully defined, and at
that point H is just wrong; you can't ask what it could return to be
right, since it can only return one answer.
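
The point being made here can be illustrated with a tiny sketch (the
names are illustrative assumptions): once H is a definite program, its
verdict on (D, D) is a single fixed boolean, and D is built to do exactly
the opposite of that boolean, so the already-defined H is simply wrong.

```python
def fixed_decider_is_wrong(h_answer: bool) -> bool:
    """Given the fixed boolean a concrete H returns for (D, D),
    report whether H's answer matches D's actual behavior."""
    d_halts = not h_answer   # D is built to invert H's verdict
    return h_answer != d_halts

# For either possible fixed return value, H is wrong about D:
assert fixed_decider_is_wrong(True)
assert fixed_decider_is_wrong(False)
```

There is no contradiction left to resolve at runtime; the machine just
gives its one programmed answer, and that answer fails to match D's
behavior.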

olcott

unread,
Jun 21, 2023, 8:40:49 PM6/21/23
to
The halting problem instances that ask:
"Does this input halt"

are isomorphic to asking Jack this question:
"Will Jack's answer to this question be no?"

Which are both isomorphic to asking if this expression
is true or false: "This sentence is not true"

That you are unwilling to validate my work merely means that
someone else will get the credit for validating my work.

Richard Damon

unread,
Jun 21, 2023, 10:47:41 PM6/21/23
to
Nope, because Jack is a volitional being, so we CAN'T know the correct
answer to the question until after Jack answers the question, thus Jack,
in trying to be correct, hits a contradiction.

The correct answer to the Halting Problem question was available as soon
as the machine being asked about was defined, so the decider doesn't hit
a contradiction in logic; it just is wrong, because it CAN'T "try" to
give the other answer, because it just does as it was programmed.

All your logic is in designing the machine, and there the contradiction
just points out that you can't make a correct machine, which is an
acceptable answer. Not all problems are computable, so we can't always
make a machine give the answer.

>
> Which are both isomorphic to asking if this expression
> is true or false: "This sentence is not true"

Nope. Show how they CAN be.

The Halting Problem question ALWAYS has a valid yes or no answer, since
the machine it is being asked about must be defined before the question
can be asked, and thus its behavior is FIXED by its code.

You just don't seem to understand what a program is, so I guess you
faked it when you were working as a programmer.

>
> That you are unwilling to validate my work merely means that
> someone else will get the credit for validating my work.
>
>

I can't "Validate" your work, as it is just incorrect.

You think two things of different kinds are the same, which is
impossible, so your statements are just incorrect.

You don't seem to understand that computations don't have volition, so
you basically don't understand what a computation is at all, and nothing
you have done regarding them has any hope of having a factual basis.

You also clearly don't understand how logic works.

olcott

unread,
Jun 21, 2023, 10:58:29 PM6/21/23
to
We can know that the correct answer from Jack and the correct return
value from H cannot possibly exist, now and forever.

You are just playing head games.

Richard Damon

unread,
Jun 22, 2023, 7:26:52 AM6/22/23
to
But the question isn't what H can return to be correct, since the only
possible answer that H can return is what it does return by its
programming, which will either BE correct or not. (In this case NOT).

Therefore, the correct answer that H SHOULD HAVE returned (to be
correct) has an answer, so the question actually HAS a correct answer.

You clearly don't understand the difference between a volitional being
and a deterministic machine. This shows your stupidity and ignorance.
Maybe you have lost your free will and ability to think because of the
evil in your life, and are condemned to keep repeating the same error
over and over, proving your insanity and stupidity.

I guess you are now shown to be a Hypocritical Ignorant Pathological
Lying insane idiot.

olcott

unread,
Jun 22, 2023, 10:18:48 AM6/22/23
to
Yes it is, and you just keep playing head games.

Richard Damon

unread,
Jun 22, 2023, 9:06:20 PM6/22/23
to
So, you aren't talking about the Halting Problem, and your definition of
"head games" must be correcting your mistakes.


The question of the Halting Problem is: does the machine that the input
describes halt? It makes no reference to H itself. H, to be correct,
needs to give the right answer, but the question isn't what it needs to
return to be correct, since once you define H, its answer is fixed, so
the only answer it CAN give is what it DOES give.

You seem to not understand that programs are deterministic entities and
have no option of "choice", so we can't ask what they can do to be
correct, because they will only do what they do.

Your head games seem to be about assuming things might do what they
don't actually do, and thus about lies of pure fantasy.

You also seem to not understand the difference between a volitional
being and a deterministic process. Maybe that is because you have lost
your own volition and given it to your insanity, and now you are stuck
forever trying to do what you incorrectly thought up.

Clearly you have lost the intelligence that comes with volition, as you
show yourself to be too stupid and ignorant to understand the basics
presented to you.