
The Psychology of Self-Reference


Daryl McCullough

Jun 25, 2004, 7:30:39 PM
It is becoming increasingly clear that Peter Olcott and Herc have
no coherent mathematical argument for rejecting Godel's theorem
and Turing's proof of the unsolvability of the halting problem.
Their objections are really psychological---they feel that the
proofs are somehow a cheat, but they lack the mathematical ability
to say why.

I'd like to talk about the psychology of why people sometimes feel
that Godel's and Turing's proofs are somehow cheats. Partly, it is
the fault of informal intuitive expositions of the results.

Both Godel's proof and Turing's proof have the flavor of using
self-reference to force someone to make a mistake. Both cases
seem a little like the following paradox (call it the "Gotcha"
paradox).

You ask someone (we'll call him "Jack") to give a truthful
yes/no answer to the following question:

Will Jack's answer to this question be no?

Jack can't possibly give a correct yes/no answer to the question.

While the Gotcha paradox gives some of the flavor of Godel's
proof or Turing's proof, there is one big difference, and this
difference is what makes people feel like there is something
fishy going on: In the case of the Gotcha paradox, it
is possible for Jack to *know* the answer, but to be
prevented by the rules from *saying* the answer.

In other words, there is a gap between what Jack knows
and what he can say. He knows that the answer to the question
is "no", but he can't say that answer, because that would
make the answer incorrect. So this informal paradox doesn't
really reveal any limitations in Jack's knowledge---it's
just a quirk of the rules that prevents Jack from telling
the answer. It's a little like the following one-question
quiz:

-----------------
| 5 5 5 5       |
| How many 5's  |
| appear inside |
| this box?     |
| Answer: ___   |
|               |
-----------------

If you write "5" in the space provided, then the correct answer
is "6", and if you write "6" the correct answer is "5". The fact
that you can't write the correct answer in the space provided
doesn't prove that you have problems counting.

Someone hearing some variant of the Gotcha paradox might be led
to think (as Peter Olcott and Herc do) that Godel's and Turing's
proofs might be cheats in a similar way.

Of course, the difference is that there is no "gap" involved in
Turing's or Godel's proofs. It makes no sense to suppose that
Peano Arithmetic really knows that the Godel statement is true,
but just can't say it, because there is no notion of PA "knowing"
something independently of what it can prove. In the case of Turing's
proof, given a purported solution H to the halting problem,
one comes up with a program Q(x) such that

Q halts on its own input if and only if H(Q,Q) = false

There is no sense in which H "knows" that the answer is true
but is unable to say it.

We could try to modify the Gotcha paradox to eliminate the gap
between what you know and what you can say. Let's consider the
following statement (called "U" for "Unbelievable").

U: Jack will never believe this statement.

Apparently, if Jack believes U, then U is false. So we are left
with two possibilities:

Either (A) Jack believes some false statement, or (B)
there is some true statement that Jack doesn't believe.

This is a lot like Godel's sentence G that shows that PA is
either inconsistent or incomplete. However, it still seems like
a joke, or a trick, rather than something that reveals any
limitations in Jack's knowledge. U doesn't seem to have any
real content, so who cares whether it is true or not, or whether
Jack believes it or not. It isn't a claim about anything tangible,
so who could ever tell if Jack believes it or not, or what it even
*means* for Jack to believe it?

Okay, let's try one more time to get something meaningful that
really reveals a gap in Jack's knowledge akin to Godel's
incompleteness. Suppose that at some future time, the mechanisms
behind the human mind are finally understood. Suppose that it is
possible to insert probes into a person's brain to discover what
the person is thinking, and what he believes.

So we take our subject, Jack, and hook him up with our brain scanning
machine. We give Jack a computer monitor on which we can display
statements for Jack to consider, and we connect his brain scanning
machine to a bell in such a way that if Jack agrees with the statement
on the screen (that is, if the scanning machine determines that Jack
believes the statement) then the bell will ring. Then we display
on the screen the following statement:

The bell will not ring.

Now, there is no way out for Jack. The statement is now a completely
concrete claim---there is no ambiguity about what it means, and there
is no ambiguity about whether it is true or false. There is no "knowledge
gap" possible---either Jack believes that the statement is true, or
he doesn't.

Does Jack believe the statement, or not? It seems to me that in this
circumstance, Jack is forced to doubt his own reasoning ability, or
to doubt the truth of the circumstances (that the brain scanning machine
works as advertised, or that it is connected to the bell as described).
If he *really* believes in the soundness of his own reasoning, and he
really believes in the truth of the claims about the scanning machine,
then it logically follows that the bell will not ring. But as soon as
he makes that inference, the bell will ring, showing that he made a
mistake, somewhere. So the only way for Jack to avoid making a mistake
is if he considers it *possible* that he or his information is mistaken.

--
Daryl McCullough
Ithaca, NY

Charlie-Boo

Jun 26, 2004, 12:19:43 PM
da...@atc-nycorp.com (Daryl McCullough) wrote

> Of course, the difference is that there is no "gap" involved in
> Turing's or Godel's proofs. It makes no sense to suppose that
> Peano Arithmetic really knows that the Godel statement is true,
> but just can't say it, because there is no notion of PA "knowing"
> something independently of what it can prove.

Cool examples (sources?) but I think there is more of a parallel than
that. The equivalent of PA knowing something is the fact that the
question definitely does have an answer (true or false).

Also, consider this parallel:

Liar: "This is not true." has no truth value.
Godel: "This is not provable." does have a truth value.
Conclusion: "True" and "provable" are not the same.
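That parallel can be made precise with the diagonal lemma; the following summary is standard textbook material, added here for reference rather than taken from the post:

```latex
\text{Liar:}\quad L \leftrightarrow \neg\,\mathrm{True}(L)
  \quad\text{(no consistent truth value; Tarski: no such predicate is arithmetically definable)}

\text{G\"odel:}\quad PA \vdash G \leftrightarrow \neg\,\mathrm{Prov}(\ulcorner G\urcorner)
  \quad\text{(PA consistent $\Rightarrow$ $G$ unprovable $\Rightarrow$ $G$ true in the standard model)}

\therefore\quad \mathrm{True} \neq \mathrm{Prov}
```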

Charlie Volkstorf
Cambridge, MA

KRamsay

Jun 27, 2004, 5:00:12 PM

In article <cbici...@drn.newsguy.com>, da...@atc-nycorp.com (Daryl
McCullough) writes:
|Of course, the difference is that there is no "gap" involved in
|Turing's or Godel's proofs. It makes no sense to suppose that
|Peano Arithmetic really knows that the Godel statement is true,
|but just can't say it, because there is no notion of PA "knowing"
|something independently of what it can prove.

I have read that Goedel took great pains to avoid any ingredients
in his proof that could be misinterpreted as a source of paradox,
and hence was scrupulous in keeping all the reasoning
"combinatorial".

I think at least part of the distraction when it comes to Goedel's
theorem is that a formal system is described in the terms one
would use for a creature that "knows" things, and our intuitions
about knowledge run counter to facts about formal systems. If
the provability or knowability of X is written as []X, then for
knowledge we have an intuition saying

[] ([] X -> X),

that for any X, we know that if we know X then it's true. On the
other hand for a standard type of formal system we have the
almost opposite

[] ([] X -> X) -> [] X,

namely that it is a theorem that X is true if X is provable, only in
the case that X is already a theorem in its own right.
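The second schema is Löb's theorem, and instantiating it at falsity yields Gödel's second incompleteness theorem; the derivation below is a standard corollary, added for context rather than taken from the post:

```latex
\text{L\"ob:}\quad \Box(\Box X \to X) \to \Box X \qquad\text{take } X = \bot:

(\Box\bot \to \bot) \;\equiv\; \neg\Box\bot \;\equiv\; \mathrm{Con}(PA),
\quad\text{so L\"ob's schema reads}\quad \Box\,\mathrm{Con}(PA) \to \Box\bot
```

That is: if PA proved its own consistency, PA would prove falsity, i.e. be inconsistent.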

Keith Ramsay

Daryl McCullough

Jun 27, 2004, 6:27:02 PM
KRamsay says...

>If the provability or knowability of X is written as []X, then for
>knowledge we have an intuition saying
>
> [] ([] X -> X),
>
>that for any X, we know that if we know X then it's true. On the
>other hand for a standard type of formal system we have the
>almost opposite
>
> [] ([] X -> X) -> [] X,
>
>namely that it's a theorem that X is true if it's provable, only if
>X is a theorem already independent of provability.

I think that rather than *knowing*, provability is closer to *believing*.
While you can't know something that is false, you can believe something
that is false.

I'm not sure whether Lob's theorem applies to human belief, or not.
I don't think so, because human beliefs are really not deductively
closed. We can believe a bunch of things which, if we worked out
the logical consequences, would be inconsistent. But we don't believe
any out-and-out contradictions. So our beliefs don't necessarily
obey the rule

[](A -> B) -> []A -> []B

|-|erc

Jun 29, 2004, 7:01:00 AM
"Charlie-Boo" <ch...@aol.com> wrote in

2 & 3 are also not the same.

So HOW do you get rid of 2? do you only consider a subset of Godel numbers?

WHY STOP THERE? Why are mathematicians intent on consistency and consistency alone?

Why does Daryl insist this isn't a possible formula?

S = Forall x, isTrueFormula(x) <-> hasProof(x)

S, for SuperConsistent, has a Godel number too, just like the liar statement, just like the Godel statement.

Why can't you just disprove Peter's conjecture?

As Peter originally put it, the output of halt should reasonably be
expected to have 3 options :

1 / the input halts
2 / the input won't halt
3 / the input contains a reference to Halt()


there's some lack of mathematical ability around here alright, and ignoring
a simple mathematical conjecture is its fertiliser.

Herc

ZZBunker

Jun 30, 2004, 4:12:39 AM
da...@atc-nycorp.com (Daryl McCullough) wrote in message news:<cbici...@drn.newsguy.com>...

> It is becoming increasingly clear that Peter Olcott and Herc have
> no coherent mathematical argument for rejecting Godel's theorem
> and Turing's proof of the unsolvability of the halting problem.
> Their objections are really psychological---they feel that the
> proofs are somehow a cheat, but they lack the mathematical ability
> to say why.
>
> I'd like to talk about the psychology of why people sometimes feel
> that Godel's and Turing's proofs are somehow cheats. Partly, it is
> the fault of informal intuitive expositions of the results.
>
> Both Godel's proof and Turing's proof have the flavor of using
> self-reference to force someone to make a mistake. Both cases
> seem a little like the following paradox (call it the "Gotcha"
> paradox).
>
> You ask someone (we'll call him "Jack") to give a truthful
> yes/no answer to the following question:
>
> Will Jack's answer to this question be no?
>
> Jack can't possibly give a correct yes/no answer to the question.
>
> While the Gotcha paradox gives some of the flavor of Godel's
> proof or Turing's proof, there is one big difference, and this
> difference is what makes people feel like there is something
> fishy going on: In the case of the Gotcha paradox, it
> is possible for Jack to *know* the answer, but to be
> prevented by the rules from *saying* the answer.

That's not the objection. The objection is that
Jack's knowledge is not even knowledge, since
Jack is a fictitious character. So Jack's
"knowledge" is nothing more than Jack's
back-stage pass to a play written by
some amateur Shakespearian philosophers,
and published by unemployed
street-walker mathematicians.

Bill Taylor

Jun 30, 2004, 11:19:48 PM
Nice essay Daryl. This little bit specially caught my eye...

da...@atc-nycorp.com (Daryl McCullough) wrote

> Will Jack's answer to this question be no?
>
> Jack can't possibly give a correct yes/no answer to the question.

This appears to be correct, and, given the usual caveats surrounding this
topic, as clear-cut as such a claim can be. That is, Jack CANNOT give a correct
yes/no answer to this question.

So we may ask, is this a "bad thing" ? I doubt it. Just as (as you later
point out) Godel and Turing and Cantor find ways around it, so can we here.
Certainly, Jack can make an effective *response* to the query, as Godel etc do.
Jack might easily give a correct answer, just not a "yes/no" answer.
He might say "Not at all", or suchlike. We might then reword the question
to include such possibilities. He might then observe that "Anyone else
answering the question would correctly say no, so make your deductions
from that"; a response which would find favour among political interviewees.

So could we re-word the question so extensively, that it would cover all
possible attempts by Jack to execute such workarounds? I doubt it.
We might think of trying something like,

"Will Jack's answer to this question be in the negative, either directly,
or indirectly by alluding to other matters that imply a negative answer?"

But I suspect all such attempts are doomed to failure. I suspect they are
doomed for the same old same-old reasons that always tend to apply here;
either they would prove to be too imprecisely worded to be of any use,
or they would admit a sufficiently cunning response by Jack that
could work around the restrictions by jumping up another level.

The whole thing has very much the smell of the attempt, a while ago, to try
to define precisely, "all possible precise definitions of a real number".
Whatever you do, you can jump beyond. To try to circumvent matters by
referring to "all possible negative responses" is a bit like asking about
the set of all sets. It just doesn't go.

We have learned to live with the non-existence of a set of all sets,
or of a definition of all definitions, so it should be no sweat to live
with a question that excludes all answers.

So in sum, I assert, no, it is not a "bad thing", it is not worrying,
it is just another case of what always happens here.

The original question, "Will Jack's answer to this question be no?",
is no more worrying than saying,
"Please respond at length to this inquiry without communicating in any way."

It is not a worry. It doesn't even need responding to, really.
As with all these paradoxes -
- once you have seen the answer, there is no question left.

-----------------------------------------------------------------------------
Bill Taylor W.Ta...@math.canterbury.ac.nz
-----------------------------------------------------------------------------
"The auto-destruct mechanism just blew itself up!" -- Dr Strangelove
-----------------------------------------------------------------------------


Daryl McCullough

Jul 1, 2004, 9:27:30 AM
Bill Taylor says...

>da...@atc-nycorp.com (Daryl McCullough) wrote
>
>> Will Jack's answer to this question be no?
>>
>> Jack can't possibly give a correct yes/no answer to the question.
>
>This appears to be correct, and, given the usual caveats surrounding this
>topic, as clear-cut as it may be. That is, Jack CANNOT give a correct
>yes/no answer to this question.
>

>So we may ask, is this a "bad thing"? ...
>
>Jack might easily give a correct answer, just not a "yes/no" answer.
>He might say "Not at all", or suchlike.

No, it's not a bad thing that Jack is unable (given the rules,
that the answer must be "yes" or "no") to give the correct answer.
If I put duct tape over his mouth, he'll be unable to give a
correct answer, but it doesn't mean anything about limitations
for Jack's *reasoning* ability. It's like the sick joke about
the sadistic scientist who is measuring the correlation between
the number of legs of an animal and how far it can jump: He
tells the test animal, a dog, "Jump, Fido!" and Fido jumps 8 feet.
The scientist writes down "Dog with 4 legs jumps 8 feet." He
amputates one leg, then says "Jump, Fido!" and Fido jumps 6 feet.
The scientist writes down "Dog with 3 legs jumps 6 feet." He
amputates the three remaining legs, recording how far Fido jumps.
Finally, when the poor dog has no legs whatsoever, the scientist
says "Jump, Fido!" Fido doesn't jump at all. He says "Jump, Fido!"
again, same result. The scientist then writes down "Dog with 0
legs cannot hear."

As I said in my post, this failure of Jack to give the correct
answer does *not* indicate any limitation in Jack's ability to
reason---instead, it indicates a "knowledge gap"; there is a
gap between what Jack knows and what Jack is able to convey
(within the rules, given his situation).

So is there any way to eliminate this knowledge gap? Maybe not,
but if we imagine that we have a device that can directly
measure what a person believes, then we could construct a
scenario for humans that was analogous to the incompleteness
theorem---a statement that causes a person to be uncertain
about the correctness of his own reasoning ability or knowledge:

I connect the "belief detector" to your brain and tell you
that I'm going to show you a statement on a computer screen.
I tell you that the detector is connected to a bell so that
if (within the next 60 seconds, say) you come to believe the
statement on the screen, the bell will ring, otherwise the
bell will not ring. Then I show you the statement "The bell
will not ring in the next 60 seconds."

Perhaps this thought experiment shows that there can be
no perfect belief detector. I think that's plausible---even
if we completely understood the way the brain works, there
will still be fuzziness about whether this or that brain
state constitutes being in the state of "believing statement
S". A set-up such as the one above would then be impossible,
since it requires an unambiguous yes/no answer to the question
"Does Jack believe S?"

Bill Taylor

Jul 2, 2004, 12:51:27 AM
da...@atc-nycorp.com (Daryl McCullough) wrote

> Perhaps this thought experiment shows that there can be
> no perfect belief detector.

I agree. In fact, this argument and thought experiment and conclusion
were pretty much the subject of a similar essay by... I forget, it was
either in "Godel Escher Bach" or by Daniel Dennett in (possibly) "The Mind's I"
(great title for this sort of book, BTW!), or some similar work.

Maybe someone else can help me out with this reference? The protagonist was
trying to find out a subject's true beliefs, and invented such a machine,
and found the same paradox, was severely chastised by (the author) for having
been so irresponsible as to invent such a machine, possibly by causing
the universe to collapse in a great explosion of self-contradiction.

Well, maybe I'm embellishing my memory somewhat with that last bit;
but that was the theme.


> I think that's plausible--

Yes indeed! There are a great many reasons to suppose that we can never
precisely encompass anyone's beliefs, including our own. Or even that
the term has any precise meaning. But certainly the thought experiment
is a snazzy way of establishing this.

> -even if we completely understood the way the brain works,

Which is also different from understanding how any *particular* brain works;
but both seem to be permanently beyond the realm of the achievable, for
reasons of technical information storage if nothing else.

Just as the universe is its own fastest simulator, so is the brain its own
best interpreter.


> there will still be fuzziness about whether this or that brain
> state constitutes being in the state of "believing statement S".

Precisely. That is also my exact belief! ;-)

------------------------------------------------------------------------------
Bill Taylor W.Ta...@math.canterbury.ac.nz
------------------------------------------------------------------------------
Free will: the inability to predict what you are going to do next
------------------------------------------------------------------------------

Jeffrey Ketland

Jul 2, 2004, 4:47:08 AM
Bill Taylor wrote in message
<716e06f5.04070...@posting.google.com>...

>da...@atc-nycorp.com (Daryl McCullough) wrote
>
>> Perhaps this thought experiment shows that there can be
>> no perfect belief detector.
>
>I agree. In fact, this argument and thought experiment and conclusion,
>were pretty much the subject of a similar essay by... I forget, it was
>either in "Godel Escher Bach" or by Daniel Dennett in (possibly) "The Mind's I",
>(great title for this sort of book, BTW!), or some similar work.
>
>Maybe someone else can help me out with this reference? The protagonist was
>trying to find out a subject's true beliefs, and invented such a machine,
>and found the same paradox, was severely chastised by (the author) for having
>been so irresponsible as to invent such a machine, possibly by causing
>the universe to collapse in a great explosion of self-contradiction.

I think you mean Raymond Smullyan's "An Epistemological Nightmare", which is
in _The Mind's I_ (Dennett and Hofstadter). Actually, it appears on the web
here:
http://www.rdegraaf.nl/index.asp?sND_ID=708558

--- Jeff


_ Olcott

May 28, 2023, 2:55:17 PM
After nearly two decades of pondering I have derived some resolution. Self-contradictory expressions of language such as the Liar Paradox are not truth bearers and thus have no Boolean value.

When Gödel's g expresses its unprovability within formal system F, the proof of g in F requires a sequence of inference steps proving that no such sequence of inference steps exists in F, and is thus self-contradictory in F. When we examine the same statement in metamathematics we are outside of the scope of self-contradiction.

*The same thing works for the Liar Paradox*
This sentence is not true: "This sentence is not true" is true.

When Jack is asked his question the Jack/Question pair is inside the scope of self-contradiction. When anyone else is asked the same question they are outside of the scope of self-contradiction. When a question is asked within the scope of self-contradiction it is an incorrect question because a correct answer cannot possibly exist.

Richard Damon

May 28, 2023, 3:15:27 PM
On 5/28/23 2:55 PM, _ Olcott wrote:

> After nearly two decades of pondering I have derived some resolution. Self-contradictory expressions of language such as the Liar Paradox are not truth bearers thus have no Boolean value.
>
> When Gödel's g expresses its unprovability within formal system F the proof of g in F requires a sequence of inference steps proving that no such sequence of inference steps exists in F, thus is self-contradictory in F. When we examine the same statement in metamathematics we are outside of the scope of self-contradiction.

So, you just don't understand what you are reading.

He proves that there is no *FINITE* sequence of inference steps that
could form a PROOF of the statement, and at the same time shows that
there *IS* an INFINITE series of steps that shows that the statement is
TRUE.

This has long been one of your problems, that you don't understand the
difference in implication between a finite series of steps and an infinite
series, likely because your mind can't comprehend the infinite.

This is strange for someone who claims to be God, as God, by most
definitions, needs to be infinite himself.

>
> *The same thing works for the Liar Paradox*
> This sentence is not true: "This sentence is not true" is true.
>
> When Jack is asked his question the Jack/Question pair is inside the scope of self-contradiction. When anyone else is asked the same question they are outside of the scope of self-contradiction. When a question is asked within the scope of self-contradiction it is an incorrect question because a correct answer cannot possibly exist.

Which are irrelevant, and your bringing them up just shows how broken
your case is.

Dan Christensen

May 28, 2023, 4:22:34 PM
Once upon a midnight dreary, while I pondered, weak and weary...

(P & ~P) is always false. (P <=> ~P) is always false.

Evermore!

Dan

Download my DC Proof 2.0 freeware at http://www.dcproof.com
Visit my Math Blog at http://www.dcproof.wordpress.com


olcott

May 28, 2023, 4:33:58 PM
It is not merely that P & ~P is false; it is that every expression of
language that is isomorphic to the Liar Paradox cannot be resolved to
true or false, because it is semantically unsound.

?- G = not(provable(F, G)).
G = not(provable(F, G)).

?- unify_with_occurs_check(G, not(provable(F, G))).
false.


--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

_ Olcott

Oct 10, 2023, 7:23:06 PM
On Friday, June 25, 2004 at 6:30:39 PM UTC-5, Daryl McCullough wrote:
> You ask someone (we'll call him "Jack") to give a truthful
> yes/no answer to the following question:
> Will Jack's answer to this question be no?
> Jack can't possibly give a correct yes/no answer to the question.
> --
> Daryl McCullough
> Ithaca, NY

All linguists know that a question with the exact same words
can have an entirely different meaning when context is taken
into account.

In the above case the context of who is asked the question changes
the meaning of the question so that both yes and no are wrong
answers from Jack.

olcott

Oct 10, 2023, 7:40:05 PM
Jack's question is isomorphic to the halting problem counter-example
where the input D to a halt decider H does the opposite of whatever
halt status that H returns.

Fritz Feldhase

Oct 10, 2023, 8:34:26 PM
On Wednesday, October 11, 2023 at 1:23:06 AM UTC+2, _ Olcott wrote:
> On Friday, June 25, 2004 at 6:30:39 PM UTC-5, Daryl McCullough wrote:
> >
> > You ask someone (we'll call him "Jack") to give a truthful
> > yes/no answer to the following question:
> > Will Jack's answer to this question be no?
> > Jack can't possibly give a correct yes/no answer to the question.
> >
> All linguists know that a question with the exact same words
> can have an entirely different meaning when context is taken
> into account.

Sure.

> In the above case the context of who is asked the question changes
> the meaning of the question

Nope. The meaning is still the same.

Hint:

"Will Jack's answer to this question be no?" is true if Jack's answer to this question is /no/
and
"Will Jack's answer to this question be no?" is false if Jack's answer to this question is not /no/.

The problem is just that *Jack* cannot give a/the correct answer.

Fritz Feldhase

Oct 10, 2023, 8:37:25 PM
On Wednesday, October 11, 2023 at 1:40:05 AM UTC+2, olcott wrote:
> On 10/10/2023 6:23 PM, _ Olcott wrote:
>
> [...] is isomorphic to the halting problem counter-example
> where the input D to a halt decider H does the opposite of
> whatever halt status [concerning D] that H returns.

So the alleged "halt decider" cannot give the correct answer concerning D. What a shame!

olcott

Oct 10, 2023, 8:43:47 PM
On 10/10/2023 7:34 PM, Fritz Feldhase wrote:
> On Wednesday, October 11, 2023 at 1:23:06 AM UTC+2, _ Olcott wrote:
>> On Friday, June 25, 2004 at 6:30:39 PM UTC-5, Daryl McCullough wrote:
>>>
>>> You ask someone (we'll call him "Jack") to give a truthful
>>> yes/no answer to the following question:
>>> Will Jack's answer to this question be no?
>>> Jack can't possibly give a correct yes/no answer to the question.
>>>
>> All linguists know that a question with the exact same words
>> can have an entirely different meaning when context is taken
>> into account.
>
> Sure.
>
>> In the above case the context of who is asked the question changes
>> the meaning of the question
>
> Nope. The meaning is still the same.
>

When Jack answers "no" it is the wrong answer.
When anyone else answers "no" it is the right answer.

This conclusively proves that the question has a
different meaning when posed to Jack than when posed
to anyone else.

Richard Damon

Oct 10, 2023, 8:47:28 PM
On 10/10/23 7:40 PM, olcott wrote:
> On 10/10/2023 6:23 PM, _ Olcott wrote:
>> On Friday, June 25, 2004 at 6:30:39 PM UTC-5, Daryl McCullough wrote:
>>> You ask someone (we'll call him "Jack") to give a truthful
>>> yes/no answer to the following question:
>>> Will Jack's answer to this question be no?
>>> Jack can't possibly give a correct yes/no answer to the question.
>>> --
>>> Daryl McCullough
>>> Ithaca, NY
>>
>> All linguists know that a question with the exact same words
>> can have an entirely different meaning when context is taken
>> into account.
>>
>> In the above case the context of who is asked the question changes
>> the meaning of the question so that both yes and no are the wrong
>> answer from Jack.
>
> Jack's question is isomorphic to the halting problem counter-example
> where the input D to a halt decider H does the opposite of whatever
> halt status that H returns.
>

Nope, it just shows your ignorance of the whole problem.

In the counter example, H is given the simple question: does H^ applied
to the description of H^ halt or not? No matter WHO or WHAT you ask,
this will have exactly the same answer, the opposite of what H applied
to the description of H^ applied to the description of H^ returns.

The answer to this question is independent of who you ask it to. The
fact that H^ just happens to be built on a copy of H is irrelevant.

Remember, H is DEFINED to be a definite program, so has definite behavior.

Note, the FUNDAMENTAL difference between Jack and H, Jack is a
volitional being who can take in new data and come up with new behavior,
while H is a FIXED program that will ALWAYS give the same answer when
asked the same question.

You are just PROVING that you don't understand the definition of the
field you are talking about, including things like what is Truth.

olcott

Oct 10, 2023, 8:49:13 PM
// The following is written in C
//
typedef int (*ptr)();  // pointer to int function
int H(ptr x, ptr y);   // uses x86 emulator to simulate its input

int D(ptr x)
{
  int Halt_Status = H(x, x);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

void main()
{
  H(D, D);
}


When my termination analyzer H simulates its input
D, the execution trace of D is different from when
termination analyzer H1 simulates the exact same
input.

Richard Damon

Oct 10, 2023, 8:52:01 PM
On 10/10/23 7:40 PM, olcott wrote:
> On 10/10/2023 6:23 PM, _ Olcott wrote:
>> On Friday, June 25, 2004 at 6:30:39 PM UTC-5, Daryl McCullough wrote:
>>> You ask someone (we'll call him "Jack") to give a truthful
>>> yes/no answer to the following question:
>>> Will Jack's answer to this question be no?
>>> Jack can't possibly give a correct yes/no answer to the question.
>>> --
>>> Daryl McCullough
>>> Ithaca, NY
>>
>> All linguists know that a question with the exact same words
>> can have an entirely different meaning when context is taken
>> into account.
>>
>> In the above case the conext of who is asked the question changes
>> the meaning of the question so that both yes and no are the wrong
>> answer from Jack.
>
> Jack's question is isomorphic to the halting problem counter-example
> where the input D to a halt decider H does the opposite of whatever
> halt status that H returns.
>

I suppose a second comment is that you don't understand the difference
between the behavior of a volitional being and a program.

I don't know if that means you think people have no volition, and are
stuck doing what they are "programmed" to do, possibly based on an
inference that because YOU don't seem able to actually put together
intelligent thought, that no one can.

Or is it that you just don't understand the nature of programs, and
think that somehow there is a form of "magic" that might allow a
computer program to break out of the rigid deterministic mold that is
computing?

Fritz Feldhase

Oct 10, 2023, 10:02:46 PM
On Wednesday, October 11, 2023 at 2:43:47 AM UTC+2, olcott wrote:

> When Jack answers "no" it is a wrong answer.

Exactly. And if he answers "yes" it is a wrong answer too.

> When anyone else answers "no" it is the right answer.

Nope. It depends on what Jack actually answers.

Hint: If Jack answers "no", the correct answer to the question is yes, and if Jack answers "yes", the correct answer to the question is no.

So Jack will NEVER give the correct answer (he simply can't), but others MAY be able to do so.

(Though it seems unlikely that they actually _know_ the correct answer before Jack has answered the question himself - or refused to answer when asked.)

olcott

Oct 10, 2023, 10:16:56 PM
On 10/10/2023 9:00 PM, Fritz Feldhase wrote:
> On Wednesday, October 11, 2023 at 2:43:47 AM UTC+2, olcott wrote:
>
>> When Jack answers "no" it is a wrong answer.
>
> Exactly. And if he answers "yes" it is a wrong answer too.
>
>> When anyone else answers "no" it is the right answer.
>
> Nope. It depends on what Jack actually answers.
>

A PhD computer science professor came up with a much better version that
does not depend on Jack ever answering:

Can Jack correctly answer “no” to this question?

Now we have a question that is incorrect for Jack and correct for
everyone else.

olcott

Jan 28, 2024, 12:19:04 AM
On 6/25/2004 6:30 PM, Daryl McCullough wrote:
> It is becoming increasingly clear that Peter Olcott...
>
> You ask someone (we'll call him "Jack") to give a truthful
> yes/no answer to the following question:
>
> Will Jack's answer to this question be no?
>
> Jack can't possibly give a correct yes/no answer to the question.
>
> Daryl McCullough
> Ithaca, NY
>

After all these years this deserves academic credit
because it forms a perfect isomorphism to the halting
problem's decider / input pair.

*A slightly adapted version is carefully examined in this paper*

Does the halting problem place an actual limit on computation?
https://www.researchgate.net/publication/374806722_Does_the_halting_problem_place_an_actual_limit_on_computation

immibis

Jan 28, 2024, 7:13:33 AM
On 1/28/24 06:18, olcott wrote:
> Does the halting problem place an actual limit on computation?

Is it possible or impossible to make a program that always tells you
whether the direct execution of its input would halt?

Richard Damon

Jan 28, 2024, 7:50:35 AM
On 1/28/24 12:18 AM, olcott wrote:
> On 6/25/2004 6:30 PM, Daryl McCullough wrote:
>> It is becoming increasingly clear that Peter Olcott...
>>
>> You ask someone (we'll call him "Jack") to give a truthful
>> yes/no answer to the following question:
>>
>>        Will Jack's answer to this question be no?
>>
>> Jack can't possibly give a correct yes/no answer to the question.
>>
>> Daryl McCullough
>> Ithaca, NY
>>
>
> After all these years this deserves academic credit
> because it forms a perfect isomorphism to the halting
> problem's decider / input pair.
>
> *A slightly adapted version is carefully examined in this paper*
>
> Does the halting problem place an actual limit on computation?
> https://www.researchgate.net/publication/374806722_Does_the_halting_problem_place_an_actual_limit_on_computation
>

Except that "Programs" don't have a "Psychology" as they don't "Think"
but just do deterministic computations.

Maybe your problem is that YOU don't actually think, but are just
following "your programming".

The conversion of "Does the machine described by the input halt?", to
"what is the correct answer that H could give?", the way you do it,
ignores the fact that H, to exist, has already fixed its answer, so
asking what it could have done differently is like asking what if Jack
were Jill ...

Now, if when posing the question we don't imagine that this "alternate
H" changes all the things that had been done based on what H was, it
becomes valid again: the answer for your H is to change to an H1 that
doesn't abort, showing that a CORRECT question has an answer.




olcott

Jan 28, 2024, 10:20:51 AM
On 1/27/2024 11:18 PM, olcott wrote:
> On 6/25/2004 6:30 PM, Daryl McCullough wrote:
>> It is becoming increasingly clear that Peter Olcott...
>>
>> You ask someone (we'll call him "Jack") to give a truthful
>> yes/no answer to the following question:
>>
>>        Will Jack's answer to this question be no?
>>
>> Jack can't possibly give a correct yes/no answer to the question.
>>
>> Daryl McCullough
>> Ithaca, NY
>>
>
> After all these years this deserves academic credit
> because it forms a perfect isomorphism to the halting
> problem's decider / input pair.
>
> *A slightly adapted version is carefully examined in this paper*
>
> Does the halting problem place an actual limit on computation?
> https://www.researchgate.net/publication/374806722_Does_the_halting_problem_place_an_actual_limit_on_computation
>

This paper contains professor Hehner's 2017 careful analysis of a
decider/input pair that is an isomorphism to the halting problem
(presented to me in 2004), where professor Hehner proves my 2004
claim that the halting problem is an ill-formed question. Two other
professors express concurring opinions.

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius

Richard Damon

Jan 28, 2024, 1:20:26 PM
On 1/28/24 10:20 AM, olcott wrote:
> On 1/27/2024 11:18 PM, olcott wrote:
>> On 6/25/2004 6:30 PM, Daryl McCullough wrote:
>>> It is becoming increasingly clear that Peter Olcott...
>>>
>>> You ask someone (we'll call him "Jack") to give a truthful
>>> yes/no answer to the following question:
>>>
>>>        Will Jack's answer to this question be no?
>>>
>>> Jack can't possibly give a correct yes/no answer to the question.
>>>
>>> Daryl McCullough
>>> Ithaca, NY
>>>
>>
>> After all these years this deserves academic credit
>> because it forms a perfect isomorphism to the halting
>> problem's decider / input pair.
>>
>> *A slightly adapted version is carefully examined in this paper*
>>
>> Does the halting problem place an actual limit on computation?
>> https://www.researchgate.net/publication/374806722_Does_the_halting_problem_place_an_actual_limit_on_computation
>>
>
> This paper contains professor Hehner's 2017 careful analysis
> of an isomorphism to the halting problem (presented to me in 2004)
> decider/input pair where professor Hehner proves my 2004 claim
> that the halting problem is an ill-formed question. Two other
> professors express concurring opinions.
>

Which starts with the ERROR that it thinks that a Computation can be
"Context Dependent"

A computation can NOT be "Context Dependent", as a fundamental property
of a computation is that it generates a definite mapping from its
input to its output.

You make the error of assuming the input to be decided on is a "program"
that acts contrary to whatever decider is trying to decide it. That
isn't the input of the proof, and isn't even a possible program.

The input is an input built to refute ONE PARTICULAR decider (not
whatever decider is trying to decide it).

It is presented as a template that is combined with whatever decider
we might want to try to claim is correct, and it produces an input that
can be shown to make that ONE DECIDER wrong. The "template"
isn't what is given as the input, but the program generated by applying
that template to the particular decider we want to refute, which IS a
program, and whose behavior is not dependent on who we ask about this
particular input, so is Context Independent, as ALL computations must be.

You, and the people you like to say support you, seem not to understand
this fundamental property of Computations, perhaps confusing it with a
more general concept of "Program" from other parts of Computer Science
(yes, you need to look at the field and what definitions and terms it
uses).

olcott

Jan 28, 2024, 1:37:20 PM
On 1/28/2024 12:20 PM, Richard Damon wrote:
> On 1/28/24 10:20 AM, olcott wrote:
>> On 1/27/2024 11:18 PM, olcott wrote:
>>> On 6/25/2004 6:30 PM, Daryl McCullough wrote:
>>>> It is becoming increasingly clear that Peter Olcott...
>>>>
>>>> You ask someone (we'll call him "Jack") to give a truthful
>>>> yes/no answer to the following question:
>>>>
>>>>        Will Jack's answer to this question be no?
>>>>
>>>> Jack can't possibly give a correct yes/no answer to the question.
>>>>
>>>> Daryl McCullough
>>>> Ithaca, NY
>>>>
>>>
>>> After all these years this deserves academic credit
>>> because it forms a perfect isomorphism to the halting
>>> problem's decider / input pair.
>>>
>>> *A slightly adapted version is carefully examined in this paper*
>>>
>>> Does the halting problem place an actual limit on computation?
>>> https://www.researchgate.net/publication/374806722_Does_the_halting_problem_place_an_actual_limit_on_computation
>>>
>>
>> This paper contains professor Hehner's 2017 careful analysis
>> of an isomorphism to the halting problem (presented to me in 2004)
>> decider/input pair where professor Hehner proves my 2004 claim
>> that the halting problem is an ill-formed question. Two other
>> professors express concurring opinions.
>>
>
> Which starts with the ERROR that it thinks that a Computation can be
> "Context Dependent"

Your own lack of comprehension really can't be any basis for a
correct rebuttal. I provide links to the original papers.

Richard Damon

Jan 28, 2024, 1:51:05 PM
Which makes a similar error of thinking that the program is not properly
defined.

By the basic rules of Computation theory, if H(M,d) is a
Computation/Program, then the D(d) Computation/Program can be correctly
defined, as it is built with fundamental steps.

Thus, the complaints that This "Pathological" input might not be a
program just shows their lack of understanding.

Richard Damon

Jan 28, 2024, 1:58:45 PM
On 1/28/24 1:37 PM, olcott wrote:
So, please show me an actual computation built by a finite sequence of
definite deterministic instructions that depend only on the inputs to
the computation and intermediate results (a more formal/structural
description of a Computation) that can be "Context dependent". That is,
show an execution trace of two such identical sequences of instructions,
with the same inputs, that cause a difference in execution path, by
showing the FIRST difference that occurs.

You can't do it, and your failure to show one just proves you lied.

I have explained the error that you made, and that they made. Failure to
use the actual definitions of the field shows a lack of understanding of
the field.

olcott

Jan 28, 2024, 2:25:49 PM
The proof of the halting problem assumes a universal halt test
exists and then provides S as an example of a program that the
test cannot handle. But S is not a program at all. It is not
even a conceptual object, and this is due to inconsistencies
in the specification of the halting function. (Stoddart: 2017)

The clearest way to sum up what these three authors are saying is
that the halting problem is defined with an unsatisfiable specification.

Richard Damon

Jan 28, 2024, 2:55:12 PM
If by "Unsatisfiable" you mean that it is impossible to write a PROGRAM
that produces the results, you are EXACTLY RIGHT, and that is what the
Halting Theorem proves. So you are just admitting that you are wrong to
complain about the Halting Problem.

If by "Unsatisfiable" you mean that the question the prospective Halt
Decider is asked doesn't have an answer, you are wrong.

EVERY Program/Input pair will have a correct answer for the Halting
Question, as the program will either Halt or Not. Thus the question is
"Valid". This template just produces an input that a given decider will
get wrong.

Note, the specification being "Unsatisfiable" in the sense that no
program can be created does NOT make the specification "Inconsistent"
(which means there is either no answer or multiple answers when only one
is allowed to a given question).

Stoddart is just showing his ignorance. His claim that "S is not a
program at all" is just a false statement, or makes an improper nit-pick
between the description (detailed enough to be followed to construct the
program in question) and the actual code of the program that derives
from the specification.

olcott

Jan 28, 2024, 3:01:50 PM
Yes exactly like you cannot correctly answer this question:
What time is it (yes or no)?
Because it was defined to have no correct answer.

What correct Boolean value does H return for input D that has
been defined to do the opposite of whatever value that H returns?

*Is isomorphic to this question*

USENET Message-ID: <uncb5j$npjn$2...@dont-email.me>
On 1/6/2024 1:54 PM, immibis wrote:
> "Does a barber who shaves every man who does not shave himself shave
> himself?" has no correct answer.

Every question that has been defined to have no correct
answer <is> an incorrect question:

Alan Turing's Halting Problem is incorrectly formed (PART-TWO) sci.logic
*On 6/20/2004 11:31 AM, Peter Olcott wrote*
> PREMISES:
> (1) The Halting Problem was specified in such a way that a solution
> was defined to be impossible.
>
> (2) The set of questions that are defined to not have any possible
> correct answer(s) forms a proper subset of all possible questions.
> …
> CONCLUSION:
> Therefore the Halting Problem is an ill-formed question.
>
USENET Message-ID:
<kZiBc.103407$Gx4....@bgtnsc04-news.ops.worldnet.att.net>

Richard Damon

Jan 28, 2024, 3:20:51 PM
Nope. Strawman.

>
> What correct Boolean value does H return for input D that has
> been defined to do the opposite of whatever value that H returns?

Which ISN'T the Halting Question.

>
> *Is isomorphic to this question*
>
> USENET Message-ID: <uncb5j$npjn$2...@dont-email.me>
> On 1/6/2024 1:54 PM, immibis wrote:
> > "Does a barber who shaves every man who does not shave himself shave
> > himself?" has no correct answer.

Yes, because as the Halting Theorem has proven, the machine you are
defining as your H just doesn't exist, just like the Barber doesn't exist.

>
> Every question that has been defined to have no correct
> answer <is> an incorrect question:

But the actual question has a correct answer.

Change your question to: What answer should a correct halt decider
return to be correct for the input designed to do the opposite of what
a particular claimed Halt Decider returns?

And we HAVE a correct answer: whatever is the opposite of what that
decider produced (or non-halting if it doesn't answer).

>
> Alan Turing's Halting Problem is incorrectly formed (PART-TWO)  sci.logic
> *On 6/20/2004 11:31 AM, Peter Olcott wrote*
> > PREMISES:
> > (1) The Halting Problem was specified in such a way that a solution
> > was defined to be impossible.

FALSE. The genesis of the Halting Problem predated the discovery that it
was impossible, and was in fact hoped and even presumed to be possible.

Alan Turing just showed that there was a particular input that could be
created that was impossible for a given machine to answer correctly.

> >
> > (2) The set of questions that are defined to not have any possible
> > correct answer(s) forms a proper subset of all possible questions.

So, you are confusing Problems with Questions.

The Halting Question is: Does the Machine and Input described by your
input Halt when run?

The Halting Problem: Can you make a machine that computes this answer
for every possible input?


The Question clearly has a correct answer for every possible machine /
Input combination, as the Halting Property obeys the principles of the
excluded middle and non-contradiction: it is impossible for a given
machine / input to be BOTH Halting and Non-Halting, or to be neither.
One MUST occur and excludes the other (since any machine that doesn't
Halt is defined to be Non-Halting).

> > …
> > CONCLUSION:
> > Therefore the Halting Problem is an ill-formed question.

UNSOUND & INVALID LOGIC, since it uses a false premise and invalid logic
(Problems are different than Questions).

> >
> USENET Message-ID:
> <kZiBc.103407$Gx4....@bgtnsc04-news.ops.worldnet.att.net>
>
>
>

immibis

Jan 28, 2024, 3:45:28 PM
Can you specify how to execute a Turing machine?

olcott

Jan 28, 2024, 4:22:11 PM
Every decision problem defined to be unsatisfiable <is>
an incorrect question whether you understand this or not.

immibis

Jan 28, 2024, 4:36:56 PM
True or false: Every sequence is either finite or infinite.

Richard Damon

Jan 28, 2024, 4:37:15 PM
Nope, YOU don't understand what that means, because you are just too
ignorant to know the meaning of the words.

A QUESTION is incorrect if it does not have a possible answer. Thus,
"What is the Truth Value of the Liar's Paradox" is an incorrect question.

An UNSATISFIABLE problem in Computation Theory is a Problem that asks if
you can build a Machine that computes the answer to a Question for all
possible inputs.

That doesn't mean the Question doesn't have an answer for all possible
inputs, just that we cannot build a computation structure that gives
that answer in a finite number of steps.

The Halting Question has, as I have explained, a correct answer for
every possible program/input combination, as that computation will
either finish in finite time or not.

The Halting Question is shown to be uncomputable, and thus the Halting
Problem unsatisfiable, because for any machine you might try to claim is
a solution to the problem, there is an input that it gets wrong.

Thus, Halting has a valid question, but is uncomputable.

You just seem unable to distinguish between these separate facts,
because you are just too ignorant about what they actually mean.

You are just proving yourself to be an Insane and Ignorant Hypocritical
Pathological Lying Idiot.

olcott

Jan 28, 2024, 5:20:22 PM
*Then you tell me what you think that means*

Richard Damon

Jan 28, 2024, 5:27:12 PM
A Decision problem is unsatisfiable (and not just incorrect) if there
exists a valid "mathematical" mapping from inputs to outputs (like the
Halting Property definition) but there does not exist a finite
computation that can compute that mapping for all inputs in a finite
number of steps.

Satisfiable (in computation theory) means there exists a program that
computes the answer in finite time for all possible inputs.

Correct Question means there exists a correct answer (even if no program
can compute it).

olcott

Jan 28, 2024, 7:21:10 PM
Yes AND sometimes some inputs are not computable because they
are self-contradictory, thus isomorphic to incorrect questions.

Richard Damon

Jan 28, 2024, 7:50:03 PM
Nope, not in this case.

Inputs are just strings that represent programs, and programs are
self-contained blocks that always have a defined behavior.

No possibility for an actual PROGRAM to be "self-contradictory".

You get into your "Contradiction" by ignoring that H is a PROGRAM, and a
piece of the PROGRAM of D, and thus, must have defined behavior, so
"Unless" or "Must" (as you are trying to use them) don't really have
meaning.

A program does what it is programmed to do, and that result will either
be correct or incorrect.

Please try to show me a program that doesn't have a correct answer to
the question: "Does this program halt when run?"

(Note, Not your non-equivalent variant of correct simulation by H)

It can be a D built on an H, but you have to define the H.

And "Get the right answer" is NOT a programmatic step.

If you want to specify "until such and such a condition occurs", you
need to spell the conditions out, not just "Correct Halting Patterns".

olcott

Jan 28, 2024, 7:59:06 PM
It is a verified fact that some decision problems are undecidable
because their inputs are self-contradictory.

If this proof was not way over your head you might understand this.
https://liarparadox.org/Tarski_275_276.pdf

Richard Damon

Jan 28, 2024, 8:19:48 PM
Input are just symbols. Perhaps a property can be defined in a
self-contradictory way, but Halting is not, as all programs will either
Halt or Not.

So, Halting can not be an "improper" question due to being
"Self-Contradictory"

IF you want to claim it is, show the ACTUAL PROGRAM that shows this.

>
> If this proof was not way over your head you might understand this.
> https://liarparadox.org/Tarski_275_276.pdf
>

And what does Tarski have to do with "Halting" or "Computation Theory"?

(Well there is a connection, but deeper than you seem to understand)

olcott

Jan 28, 2024, 9:04:59 PM
Tarski concluded that a True(L,x) predicate cannot exist
on the basis that this question:
Is this sentence true or false: "this sentence is not true"?
has no correct answer.

When the formalized Liar Paradox is the input to a decider, decision
theory concludes that it is undecidable rather than incorrect.

Richard Damon

Jan 28, 2024, 9:18:25 PM
We were talking about the Halting Problem.

Are you admitting you were wrong and shifting to another topic, or are
you just trapped and throwing up a Red Herring?

I'm goinf

olcott

Jan 28, 2024, 10:10:56 PM
Since you do not understand how deciders work, you cannot
understand how halt deciders work.

Decision theory concludes that undecidable decision problems prove
that a theory is incomplete when it cannot prove or refute syntactically
correct expressions that are semantic nonsense.

"this sentence is not true" is a syntactically correct sentence
that is a semantically incorrect statement.

I owned LiarParadox.org for several years because many undecidable
decision problems are isomorphic to the Liar Paradox.


>
> Are you admitting you were wrong and shifting to another topic, or are
> you just trapped and throwing up a Red Herring?
>
> I'm goinf

Richard Damon

Jan 28, 2024, 10:49:02 PM
In other words, because you don't understand how logic works, you have
come up with cockamamie theories of how it should work.

You say I don't know how Deciders work, but it is you who doesn't know that.

When you tried to write a simple Turing Machine to be a decider, you
totally failed and got hung up on irrelevant details about things like
what character set encoding the tape should be in.

The statement you call "semantic nonsense" (if you are referring to
Godel) is a very semantically meaningful statement (that you just don't
understand, so it has no meaning to you): that there does not exist a
natural number g that satisfies a particular Primitive Recursive
Relationship. That sentence clearly has semantic meaning, at least as
much as ANY mathematical statement has "semantic" meaning.

Note, you keep on saying that people expected the Liar's Paradox to have
a truth value or that the truth predicate was being applied to that
statement, but that was never actually done.

Tarski shows that, given an assumption of a computable predicate for
truth, the rules previously shown (that you clearly don't understand)
allow him to DERIVE that the Liar's Paradox would have a truth value,
and from that he shows that there cannot be such a computable predicate.

It is clear that your understanding of logic just can't handle the
concept of "Meta-Theory", likely because you just don't understand what
a formal logic system is.

I will also note that it seems most of your "Isomorphisms" are based
on the unacceptable assumption that Truth must be provable. While you
can build systems on such a definition, all the theories you have been
looking at have prerequisites that such a system cannot meet (because
it strictly limits what logic you can allow into the system).

In other words, you are just proving to the world that you are just a
stupid crackpot. Maybe you can find some people that you think agree
with enough of your theory to make you happy, but none of that is actual
proof.

olcott

Jan 28, 2024, 11:28:26 PM
Try and show how a decider can correctly decide the truth value
of the formalized version of this: "this sentence is not true".

The problem is not my lack of understanding of logic; the problem
is your lack of understanding of the philosophy of logic.

When I point out incoherence in aspects of logic you construe this
as my error because logic remains just the way that you memorized it.

You have zero deep understanding of the underlying epistemology
of this aspect of logic.

Lawrence D'Oliveiro

Jan 29, 2024, 1:59:37 AM
On Sun, 28 Jan 2024 09:20:46 -0600, olcott wrote:

> ... professor Hehner proves my 2004 claim that the
> halting problem is an ill-formed question.

Doesn’t matter how you phrase it, the fact remains that there is no
logically self-consistent answer to the problem. That’s what Turing
proved, and you have done nothing to change that.

Richard Damon

Jan 29, 2024, 7:25:57 AM
Never said it could.

> The problem is not my lack of understanding of logic the problem
> it your lack of understanding of the philosophy of logic.

No, YOU don't understand logic, or even language.

You think that "This statement is not True" is identical in meaning to
"This statement is not provable", this is false, so you don't understand
logic,

Note also, many aspects of the general philosophy of logic don't
actually apply to Formal Systems of Logic, as the decisions they discuss
have been decided, fixed, and locked down in the formal system. If you
want to change it, you can, but then you are in a DIFFERENT formal
system, and anything done isn't applicable to the original system.

This is something that appears to be foreign to you.

>
> When I point out incoherence in aspects of logic you construe this
> as my error because logic remains just the way that you memorized it.

No, you keep on going to Red Herrings, and never answer the actual
questions asked, probably because you know you can't

>
> You have zero deep understanding of the underlying epistemology
> of the aspect of logic.
>


Nope, YOU do, and are a victim of the Dunning-Kruger effect.

olcott

Jan 29, 2024, 8:53:27 AM
Likewise there is no logically consistent answer to this question:
Is this sentence true or false: "this sentence is not true"?
It is undecidable because the question itself is incorrect.

Every yes/no question defined to have no correct yes/no answer is an
incorrect question.

immibis

Jan 29, 2024, 9:10:36 AM
On 1/29/24 14:53, olcott wrote:
> On 1/29/2024 12:59 AM, Lawrence D'Oliveiro wrote:
>> On Sun, 28 Jan 2024 09:20:46 -0600, olcott wrote:
>>
>>> ... professor Hehner proves my 2004 claim that the
>>> halting problem is an ill-formed question.
>>
>> Doesn’t matter how you phrase it, the fact remains that there is no
>> logically self-consistent answer to the problem. That’s what Turing
>> proved, and you have done nothing to change that.
>
> Likewise there is no logically consistent answer to this question:
> Is this sentence true or false: "this sentence is not true"?
> It is undecidable because the question itself is incorrect.
>
> Every yes/no question defined to have no correct yes/no answer is an
> incorrect question.
>

The question:
Does this question have a correct answer:
Is this sentence true or false:
This sentence is not true.

has an answer.

olcott

Jan 29, 2024, 11:25:50 AM
Alan Turing's Halting Problem is incorrectly formed (PART-TWO) sci.logic
*On 6/20/2004 11:31 AM, Peter Olcott wrote*
> PREMISES:
> (1) The Halting Problem was specified in such a way that a solution
> was defined to be impossible.
>
> (2) The set of questions that are defined to not have any possible
> correct answer(s) forms a proper subset of all possible questions.
> …
> CONCLUSION:
> Therefore the Halting Problem is an ill-formed question.
>

[1] E C R Hehner. *Objective and Subjective Specifications*
WST Workshop on Termination, Oxford. 2018 July 18.
See https://www.cs.toronto.edu/~hehner/OSS.pdf

[2] Nicholas J. Macias. *Context-Dependent Functions*
Narrowing the Realm of Turing’s Halting Problem
13 Nov 2014
https://arxiv.org/abs/1501.03018
arXiv:1501.03018 [cs.LO]

[3] Bill Stoddart. *The Halting Paradox*
20 December 2017
https://arxiv.org/abs/1906.05340
arXiv:1906.05340 [cs.LO]

immibis

Jan 29, 2024, 11:44:28 AM
What about my formulation?
1. An execution point is the current state number and tape contents.
2. An execution sequence is the sequence of execution points a Turing
machine/input pair has, ending when it gets to a final state.
3. The halting problem is to write a Turing machine that takes a Turing
machine/input pair as input, and tells whether that Turing machine/input
pair has an infinite execution sequence.

You have ignored this formulation the last 3 times it was posted.

I suppose you'll reply to this one with a straw man or irrelevant argument.

olcott

Jan 29, 2024, 12:48:02 PM
Every one of the conventional halting problem proofs
has an input D that attempts to do the opposite of whatever
Boolean value H returns.

If it was successful then it would be a self-contradictory question
like this one:

Is this sentence true or false: "This sentence is not true" ?

For a simulating halt decider, D cannot possibly reach the point
in its own execution trace where it derives the contradiction.

olcott

Jan 29, 2024, 1:03:37 PM
On 1/29/2024 3:47 AM, Mikko wrote:
> On 2024-01-28 19:25:42 +0000, olcott said:
>>> Which makes a similar error of thinking that the program is not
>>> properly defined.
>>
>>     The proof of the halting problem assumes a universal halt test
>>     exists and then provides S as an example of a program that the
>>     test cannot handle. But S is not a program at all. It is not
>>     even a conceptual object, and this is due to inconsistencies
>>     in the specification of the halting function. (Stoddart: 2017)
>>
>> The clearest way to sum up what these three author's are saying is
>> that the halting problem is defined with unsatisfiable specification.
>
> That is a reasonable way to say it but only if you accept that there
> is a proof that the specification is unsatisfiable. If you reject all
> proposed proofs you must say that it is an open question whether the
> halting problem is defined with an unsatisfiable specification.
>
> Mikko
>

Self-contradictory questions have been shown to define infinite
structures that cannot be resolved in finite time.

These expressions are undecidable because they are incorrect.

immibis

Jan 29, 2024, 5:15:54 PM
You have now ignored the formulation four (4) times. Nothing in your
reply refers to anything that I said.

immibis

Jan 29, 2024, 5:53:06 PM
On 1/29/24 18:47, olcott wrote:
> On 1/29/2024 10:44 AM, immibis wrote:
>> On 1/29/24 17:25, olcott wrote:
>>>
>>> Alan Turing's Halting Problem is incorrectly formed
>>
>> What about my formulation? >> [formulation]
>> You have ignored this formulation the last 3 times it was posted.
>>
>> I suppose you'll reply to this one with a straw man or irrelevant
>> argument.
>
> [irrelevant stuff]

What you are saying is that my formulation is wrong because you want it
to be wrong because if it is not wrong then you are wrong.

Richard Damon
Jan 29, 2024, 7:48:04 PM
But the Halting Problem is PURELY OBJECTIVE.

Note also, his definition of a "Program" does not match that of a Turing
Machine.

For one thing, his "Halting Analyzer" is not of the same class as the
programs it is to decide on. He limits its inputs to "L-Programs" that
can have no inputs, but it itself has an input.

So, his "answer" to the Halting Problem is to just restrict the inputs
to machines lesser than the deciders, a well-known answer.

And then he makes the determination of whether a question is "Objective"
or "Subjective" NOT based on the actual meaning of the words, but labels
any question that cannot be computed as "Subjective".

This is just FALSE.

>
> [2] Nicholas J. Macias. *Context-Dependent Functions*
> Narrowing the Realm of Turing’s Halting Problem
> 13 Nov 2014
> https://arxiv.org/abs/1501.03018
> arXiv:1501.03018 [cs.LO]

Which just shows that he doesn't understand what a Computation IS in
computation theory. It is, BY DEFINITION, a finite deterministic
algorithm applied to a defined input.

As such, a "function" that depends on things not considered "input" is
not a computation.

Yes, in a non-Turing system, it is possible to define things that might
be called "functions" that are dependent on things besides their formal
parameters.

If you look at his examples, this is EXACTLY what his "CDFS" do.

Such functions can NOT be converted into Turing Machines.

So, his argument is outside the domain of "Computation Theory".

>
> [3] Bill Stoddart. *The Halting Paradox*
> 20 December 2017
> https://arxiv.org/abs/1906.05340
> arXiv:1906.05340 [cs.LO]
>
>

Here, the author says that

S defined as If H(S) then Loop else end.

"Can't be implemented", and the reasoning is that since H can't be made,
the problem is with S (and not the unimplementability of Halting Detection).

He says:

There is no reason, however, why the halt test cannot terminate in other
situations, or why failure to halt cannot be reported via an error
message when the halt test itself cannot halt.

Except that to do so violates the definition of a Decider, being a
program that ALWAYS delivers its answer to its caller/user.

And again, he ignores that the sort of thing H is required to be, a
COMPUTATION, is by DEFINITION only a function of its formal parameters;
thus his talk of H determining whether it is being called by S
invalidates his argument.

So, the common thread in all these papers, as well as your own, is that
they are ignoring the actual definition of what a Computation (commonly
called a "Program" in lay terms) actually is, and thus show that they
are NOT actually working on the Halting Problem of Computation Theory.

Yes, my guess is a lot of people have similar misunderstandings, but
that doesn't make them right.

You are just casting your lot with people who have shown that they don't
know what they are talking about as far as the requirements of
Computation Theory are concerned.

They all refer to being "equivalent" to Turing Machines, but all the
"programs" they propose cannot be converted to Turing Machines, as they
all need "secret" inputs which just do not exist with a Turing machine.
That is one of the powers of the simple Turing Machine architecture:
ANY Turing Machine MUST perform a computation (or be non-halting,
depending on the exact version of the definition of Computation being
used), while many other architectures allow for hidden data paths that
allow "programs" that fail to be computations (but might be a piece of
a larger Computation).

Richard Damon
Jan 29, 2024, 7:48:06 PM
On 1/29/24 8:53 AM, olcott wrote:
> On 1/29/2024 12:59 AM, Lawrence D'Oliveiro wrote:
>> On Sun, 28 Jan 2024 09:20:46 -0600, olcott wrote:
>>
>>> ... professor Hehner proves my 2004 claim that the
>>> halting problem is an ill-formed question.
>>
>> Doesn’t matter how you phrase it, the fact remains that there is no
>> logically self-consistent answer to the problem. That’s what Turing
>> proved, and you have done nothing to change that.
>
> Likewise there is no logically consistent answer to this question:
> Is this sentence true or false: "this sentence is not true"?
> It is undecidable because the question itself is incorrect.
>
> Every yes/no question defined to have no correct yes/no answer is an
> incorrect question.
>

And the question, "Does the Computation defined by this input Halt?"
always has a correct yes/no answer, so is a CORRECT question.

olcott
Jan 29, 2024, 9:12:27 PM
Professor Hehner defines what he means by his terms.

> Note also, His definition of a "Program" does not match that of a Turing
> Machine.
>

Isomorphism

> For one thing, his "Halting Analyzer" is not of the same class as the
> programs it is to decide on. He limits its inputs to "L-Programs" that
> can have no inputs, but it itself has an input.
>

That is a mere simplification that changes nothing.
https://academic.oup.com/comjnl/article/7/4/313/354243
Professor C. Strachey does the same thing.

> So, his "answer" to the Halting Problem is to just restrict the inputs
> to machines lesser than the deciders, an well known answer.
>

The key portion of his answer is anchored in Carol's
question. I told him about the loophole that you found.

> And then he makes the determinatiom of whether a question is "Objective"
> or "Subjective" NOT based on the actual meaning of the words, but makes
> any question that can not be computed as "Subjective".
>

His stipulative definition makes perfect sense as a stipulative definition.

A stipulative definition is a type of definition in which a new or
currently existing term is given a new specific meaning for the purposes
of argument or discussion in a given context.
https://en.wikipedia.org/wiki/Stipulative_definition

> This is just FALSE.
>
>>
>> [2] Nicholas J. Macias. *Context-Dependent Functions*
>> Narrowing the Realm of Turing’s Halting Problem
>> 13 Nov 2014
>> https://arxiv.org/abs/1501.03018
>> arXiv:1501.03018 [cs.LO]
>
> WHich just shows that he doesn't understand what a Compuation IS in
> computation theory. It is BY DEFINITION, a finite deterministic
> algorithm applied to a defined input.
>
> As such, an "function" that depends on things not considered "input" is
> not a computation.
>

Not at all. He, like I and the other two professors, understands
that when D calls H(D,D) the halting problem specifies an
inconsistent, unsatisfiable specification.
All three authors seem to agree on this.

> Yes, in a non-Turing system, it is possible to define things that might
> be called "functions" that are dependent on things besides their formal
> parameters.
>
> If you look at his examples, this is EXACTLY what his "CDFS" do.
>
> Such functions can NOT be converted into Turing Machines.
>

I already proved otherwise when we apply embedded_H to ⟨Ĥ⟩ ⟨Ĥ⟩.

> So, his arguement is outside the domains of "Compuation Theory".
>

The fact that embedded_H is applied to its own code DOES CHANGE THINGS.
This cannot be correctly ignored.

>>
>> [3] Bill Stoddart. *The Halting Paradox*
>> 20 December 2017
>> https://arxiv.org/abs/1906.05340
>> arXiv:1906.05340 [cs.LO]
>>
>>
>
> Here, the author says that
>
> S defined as If H(S) then Loop else end.
>
> "Can't be implemented", and the reasoning is that since H can't be made,
> the problem is with S (and not the unimplementability of Halting
> Detection).
>

Yes, Professor Stoddart did not see that his own criterion measure could
be used as a halting criterion measure. He did see that it could be
used to report bad input.

"Implementation of H1 requires it to determine whether it is being
invoked from within S1"

> He says:
>
> There is no reason, however, why the halt test cannot terminate in other
> situations, or why failure to halt cannot be reported via an error
> message when the halt test itself cannot halt.
>

Yes I just said that second part.

> Except that to do so violates the definition of a Decider, being a
> program that ALWAYS delivers its answer to its caller/use.
>

Hence my independently derived enhancement to my independently derived
"Implementation of H1 requires it to determine whether it is being
invoked from within S1"

> And again, he ignores that the DEFINITION of the sort of thing that H is
> required to be, a COMPUTATION, by DEFINITION is only a function of its
> formal parameters, and thus when he talks about H determining if it is
> being called by S just invalidates his argument.
>

His work is preliminary compared to mine.

> So, the common thread in all these papers, as well as your own, is that
> they are ignoring the actual definition of what a Compuation (commonly
> called a "Program" in lay terms) actually is, and thus show that they
> are NOT actually working on the Halting Problem of Compuation Theory.
>

The key common thread is that the halting problem has
an inconsistent, unsatisfiable specification.

> Yes, my guess is a lot of people have similar misunderstandings, but
> that doesn't make them right.
>

Since I know these things first-hand I know that they are correct.

> You are just putting you lot with people who have shown that they don't
> know what they are talking about as far as the requirements of
> Computation Theory.
>
> They all refer to being "equivalent" to Turing Machines, but all the
> "programs" they propose can not be converted to Turing Machines as they
> all need "secret" inputs which just do not exist with a Turing machine.
> That is one of the powers of the simple Turing Machine architecture, ANY
> Turing Machine MUST perform a computation (or be non-halting depending
> on the exact version of the definition of Computation being used) while
> many other architectures allow for hidden data paths that allow
> "programs" that fail to be compuations (but might be a piece of a large
> Computation).

Some of their ideas may not be Turing computable, yet all of their
ideas do unify around:

The halting problem has an inconsistent, unsatisfiable specification.
AKA the same ill-formed question that I claimed back in 2004.

Alan Turing's Halting Problem is incorrectly formed (PART-TWO) sci.logic
On 6/20/2004 11:31 AM, Peter Olcott wrote:
> PREMISES:
> (1) The Halting Problem was specified in such a way that a solution
> was defined to be impossible.
>
> (2) The set of questions that are defined to not have any possible
> correct answer(s) forms a proper subset of all possible questions.
> …
> CONCLUSION:
> Therefore the Halting Problem is an ill-formed question.
>
USENET Message-ID:
<kZiBc.103407$Gx4....@bgtnsc04-news.ops.worldnet.att.net>

Hehner's Carol's question does a great job of elaborating this.

olcott
Jan 29, 2024, 9:17:23 PM
Yet when H is asked this question it is an entirely different
question because the context of who is asked the question
DOES CHANGE THE MEANING OF THE QUESTION.

"What correct Boolean value does H return when D is defined to do the
opposite of whatever value that H returns?" has no correct answer.

Richard Damon
Jan 29, 2024, 10:01:02 PM

Richard Damon
Jan 29, 2024, 10:01:08 PM
On 1/29/24 9:12 PM, olcott wrote:
And then LIES by not using it (OR MISUSING IT).



>
>> Note also, His definition of a "Program" does not match that of a
>> Turing Machine.
>>
>
> Isomorphism

Only if a bunny rabbit is "Isomorphic" to an office building.

You are just proving you don't know what ANY of these things mean.

Like the joke, "I understand every language but Greek", and then when
someone asks them a question in Spanish, the answer is "That's Greek to
Me!".

>
>> For one thing, his "Halting Analyzer" is not of the same class as the
>> programs it is to decide on. He limits its inputs to "L-Programs" that
>> can have no inputs, but it itself has an input.
>>
>
> That is a mere simplification that changes nothing.
> https://academic.oup.com/comjnl/article/7/4/313/354243
> Professor C. Strachey does the same thing.

Nope, He admits that an L-program decider couldn't decide on an
L-Program input, but then claims that an M-Program decider could, if
L-Programs aren't allowed to use M-Programs.

In other words, With M-Programs around, L-Programs can not be Turing
Complete.

Now, of course, he also argues that any M-Program can be converted to an
L-program (so he can claim that L-Programs are Turing Complete), and thus
the contradictory L-program CAN be built, or his claim is incorrect.


>
>> So, his "answer" to the Halting Problem is to just restrict the inputs
>> to machines lesser than the deciders, an well known answer.
>>
>
> The key portion of his answer is anchored in Carol's
> question. I told him about the loophole that you found.


Except that the issue with "Carol's Question" doesn't apply to questions
put to Machines, as the machine is deterministic.

>
>> And then he makes the determination of whether a question is
>> "Objective" or "Subjective" NOT based on the actual meaning of the
>> words, but labels any question that cannot be computed as "Subjective".
>>
>
> His stipulative definition makes perfect sense as a stipulative definition.

but violates his previous definition.

>
> A stipulative definition is a type of definition in which a new or
> currently existing term is given a new specific meaning for the purposes
> of argument or discussion in a given context.
> https://en.wikipedia.org/wiki/Stipulative_definition

Right, and when you do that, you disconnect your argument from all other
meanings of the word. Thus he can no longer claim that, because he found
the question to be stipulated-subjective, it must be invalid (on the
grounds that questions need to be objective and not subjective), because
his stipulated-subjective definition includes some actually objective
questions.

Thus, his argument is a LIE.

>
>> This is just FALSE.
>>
>>>
>>> [2] Nicholas J. Macias. *Context-Dependent Functions*
>>> Narrowing the Realm of Turing’s Halting Problem
>>> 13 Nov 2014
>>> https://arxiv.org/abs/1501.03018
>>> arXiv:1501.03018 [cs.LO]
>> Which just shows that he doesn't understand what a Computation IS in
>> computation theory. It is, BY DEFINITION, a finite deterministic
>> algorithm applied to a defined input.
>>
>> As such, a "function" that depends on things not considered "input"
>> is not a computation.
>>
>
> Not at all. He, like I and the other two professors, understands
> that when D calls H(D,D) the halting problem specifies an
> inconsistent, unsatisfiable specification.
> All three authors seem to agree on this.

What is inconsistent about the specification?

What is wrong with it being unsatisfiable, which just means the answer
can't be computed by a machine for all possible inputs (but the correct
answer DOES exist)?

All three make the same mistake of forgetting what a COMPUTATION is.


>
>> Yes, in a non-Turing system, it is possible to define things that
>> might be called "functions" that are dependent on things besides their
>> formal parameters.
>>
>> If you look at his examples, this is EXACTLY what his "CDFS" do.
>>
>> Such functions can NOT be converted into Turing Machines.
>>
>
> I already proved otherwise when we apply embedded_H to ⟨Ĥ⟩ ⟨Ĥ⟩.

Nope.

You have CLAIMED it. You have never PROVED it.

Show the ACTUAL TURING MACHINE that did it!!!

(of course you can't; you failed at writing even a simple Turing machine
decider)

>
>> So, his arguement is outside the domains of "Compuation Theory".
>>
>
> The fact that embedded_H is applied to its own code DOES CHANGE THINGS.
> This cannot be correctly ignored.

Nope.

Show an actual example.

ACTUAL CODE.

>
>>>
>>> [3] Bill Stoddart. *The Halting Paradox*
>>> 20 December 2017
>>> https://arxiv.org/abs/1906.05340
>>> arXiv:1906.05340 [cs.LO]
>>>
>>>
>>
>> Here, the author says that
>>
>> S defined as If H(S) then Loop else end.
>>
>> "Can't be implemented", and the reasoning is that since H can't be
>> made, the problem is with S (and not the unimplementability of Halting
>> Detection).
>>
>
> Yes Professor Stoddart did not see that his own criterion measure could
> be used as a halting criterion measure. He did see that it could be
> used to report bad input.

So,

>
> "Implementation of H1 requires it to determine whether it is being
> invoked from within S1"

Which is IMPOSSIBLE for a computation.

>
>> He says:
>>
>> There is no reason, however, why the halt test cannot terminate in
>> other situations, or why failure to halt cannot be reported via an
>> error message when the halt test itself cannot halt.
>>
>
> Yes I just said that second part.

So, you agree he doesn't understand the requirements of a decider.

>
>> Except that to do so violates the definition of a Decider, being a
>> program that ALWAYS delivers its answer to its caller/use.
>>
>
> Hence my independently derived enhancement to my independently derived
> "Implementation of H1 requires it to determine whether it is being
> invoked from within S1"

Which is still IMPOSSIBLE for a COMPUTATION.

>
>> And again, he ignores that the DEFINITION of the sort of thing that H
>> is required to be, a COMPUTATION, by DEFINITION is only a function of
>> its formal parameters, and thus when he talks about H determining if
>> it is being called by S just invalidates his argument.
>>
>
> His work is preliminary compared to mine.

Yours is still POOP.

>
>> So, the common thread in all these papers, as well as your own, is
>> that they are ignoring the actual definition of what a Compuation
>> (commonly called a "Program" in lay terms) actually is, and thus show
>> that they are NOT actually working on the Halting Problem of
>> Compuation Theory.
>>
>
> The key common thread is that the halting problem has
> an inconsistent, unsatisfiable specification.

What is "Inconsistent" about it?

What is wrong with being Unsatisfiable, which just means that the answer
exists but no machine can compute it in a finite number of steps for all
inputs?

>
>> Yes, my guess is a lot of people have similar misunderstandings, but
>> that doesn't make them right.
>>
>
> Since I know these things first-hand I know that they are correct.

Yes, you seem to have a LOT of first-hand knowledge of misconceptions.

Like most of yours.


>
>> You are just casting your lot with people who have shown that they
>> don't know what they are talking about as far as the requirements of
>> Computation Theory.
>>
>> They all refer to being "equivalent" to Turing Machines, but all the
>> "programs" they propose cannot be converted to Turing Machines, as
>> they all need "secret" inputs which just do not exist with a Turing
>> machine. That is one of the powers of the simple Turing Machine
>> architecture: ANY Turing Machine MUST perform a computation (or be
>> non-halting, depending on the exact version of the definition of
>> Computation being used), while many other architectures allow for
>> hidden data paths that allow "programs" that fail to be computations
>> (but might be a piece of a larger Computation).
>
> Some of their ideas may not be Turing computable yet all of their
> ideas do unify around:
>
> The halting problem has an inconsistent, unsatisfiable specification.
> AKA the same ill-formed question that I claimed back in 2004.

But the question is NOT "ill-formed", as every instance of it has an answer.

>
> Alan Turing's Halting Problem is incorrectly formed (PART-TWO)  sci.logic
> On 6/20/2004 11:31 AM, Peter Olcott wrote:
> > PREMISES:
> > (1) The Halting Problem was specified in such a way that a solution
> > was defined to be impossible.
> >
> > (2) The set of questions that are defined to not have any possible
> > correct answer(s) forms a proper subset of all possible questions.
> > …
> > CONCLUSION:
> > Therefore the Halting Problem is an ill-formed question.
> >
> USENET Message-ID:
> <kZiBc.103407$Gx4....@bgtnsc04-news.ops.worldnet.att.net>
>
> Hehner's Carol's question does a great job of elaborating this.
>
>

Which is just a big LIE.

Richard Damon
Jan 29, 2024, 10:01:08 PM
On 1/29/24 9:17 PM, olcott wrote:
> On 1/29/2024 6:48 PM, Richard Damon wrote:
>> On 1/29/24 8:53 AM, olcott wrote:
>>> On 1/29/2024 12:59 AM, Lawrence D'Oliveiro wrote:
>>>> On Sun, 28 Jan 2024 09:20:46 -0600, olcott wrote:
>>>>
>>>>> ... professor Hehner proves my 2004 claim that the
>>>>> halting problem is an ill-formed question.
>>>>
>>>> Doesn’t matter how you phrase it, the fact remains that there is no
>>>> logically self-consistent answer to the problem. That’s what Turing
>>>> proved, and you have done nothing to change that.
>>>
>>> Likewise there is no logically consistent answer to this question:
>>> Is this sentence true or false: "this sentence is not true"?
>>> It is undecidable because the question itself is incorrect.
>>>
>>> Every yes/no question defined to have no correct yes/no answer is an
>>> incorrect question.
>>>
>>
>> And the question, "Does the Computation defined by this input Halt?"
>> always has a correct yes/no answer, so is a CORRECT question.
>
> Yet when H is asked this question it is an entirely different
> question because the context of who is asked the question
> DOES CHANGE THE MEANING OF THE QUESTION.

Why is it different?

Why does the behavior of D change because we ask H about it, since that
H was fully defined before D was created?

(It had to be, due to causality)

>
> What correct Boolean value does H return when D is defined to do the
> opposite of whatever value that H returns?" has no correct answer.
>
>

That is NOT the question, just your POOP.

The real questions we can ask are:

1) What Answer DOES H produce when asked H(D,D)?

(This answer was FIXED when H was created and is unchanging; your
claimed machine returns non-halting)

2) What is the Behavior of the machine described by the input?

(In this case D(D), where, to be the proof case, D was built from the H
created before question 1, which you claim "correctly" returns
non-halting, so by the definition of D's operation, it WILL get that
answer from its copy of H and Halt)

3) Do these answers agree?

(Since non-halting is not the same as Halting, they do not, so H was
incorrect).

You cannot ask what correct answer H returns, because if H doesn't
return that correct answer, the question is incorrect.

olcott
Jan 29, 2024, 11:11:04 PM
That *IS* the question as long as you are not too ignorant
to understand that the context of who is asked a question
*DOES CHANGE THE MEANING OF THE QUESTION*

The key example of this is: Are you a little girl?

Lawrence D'Oliveiro
Jan 30, 2024, 12:41:37 AM
On Mon, 29 Jan 2024 07:53:23 -0600, olcott wrote:

> Every yes/no question defined to have no correct yes/no answer is an
> incorrect question.

Can you prove that?

olcott
Jan 30, 2024, 12:57:48 AM
I created the notion of an incorrect question back in 2015.
(and in 2004)

*The logical law of polar questions*
*Peter Olcott Feb 20, 2015, 11:38:48 AM*

When posed to a man who has never been married,
the question: Have you stopped beating your wife?
is an incorrect polar question because neither yes nor
no is a correct answer.

All polar questions (including incorrect polar questions)
have exactly one answer from the following:
1) No
2) Yes
3) Neither // Only applies to incorrect polar questions

As far as I know I am the original discoverer of the
above logical law, thus copyright 2015 by Peter Olcott.
https://groups.google.com/g/sci.lang/c/AO5Vlupeelo/m/nxJy7N2vULwJ

Mikko
Jan 30, 2024, 5:45:44 AM
Many questions about infinite structures can be answered in finite
time. For example, it is proven that rational numbers are equinumerous
with integer numbers and that real numbers are not.

--
Mikko

Richard Damon
Jan 30, 2024, 7:38:11 AM
Who you ask the question to only matters if the question pertains to you.

The Halting problem does not refer to the decider at all.

You are INCORRECTLY changing a purely objective question, whose answer
is independent of who you ask, into an attempted subjective question
asking about what the decider could do.

You then need to change the actual problem to an improper version, where
the input, rather than being a FIXED string (which means D is built on
exactly one particular H), is a template built on whatever decider tries
to answer it.

You need to do this because the question doesn't make sense if you
don't. If H has already been programmed to do what it will do, you can't
ask what it could do to be correct, only whether it is correct, as its
behavior was fixed at its creation.

Thus, you are just showing fundamental problems with your definitions.

immibis
Jan 30, 2024, 10:38:55 AM
On 1/30/24 03:17, olcott wrote:
> On 1/29/2024 6:48 PM, Richard Damon wrote:
>> On 1/29/24 8:53 AM, olcott wrote:
>>> On 1/29/2024 12:59 AM, Lawrence D'Oliveiro wrote:
>>>> On Sun, 28 Jan 2024 09:20:46 -0600, olcott wrote:
>>>>
>>>>> ... professor Hehner proves my 2004 claim that the
>>>>> halting problem is an ill-formed question.
>>>>
>>>> Doesn’t matter how you phrase it, the fact remains that there is no
>>>> logically self-consistent answer to the problem. That’s what Turing
>>>> proved, and you have done nothing to change that.
>>>
>>> Likewise there is no logically consistent answer to this question:
>>> Is this sentence true or false: "this sentence is not true"?
>>> It is undecidable because the question itself is incorrect.
>>>
>>> Every yes/no question defined to have no correct yes/no answer is an
>>> incorrect question.
>>>
>>
>> And the question, "Does the Computation defined by this input Halt?"
>> always has a correct yes/no answer, so is a CORRECT question.
>
> Yet when H is asked this question it is an entirely different
> question

Wrong

> because the context of who is asked the question
> DOES CHANGE THE MEANING OF THE QUESTION.

Wrong in mathematics

>
> What correct Boolean value does H return when D is defined to do the
> opposite of whatever value that H returns?" has no correct answer.
>

That is the POOP problem, not the halting problem. We are talking about
the halting problem, which asks whether a Turing machine/input pair has
an execution sequence that is infinite.

olcott
Jan 30, 2024, 10:46:06 AM
On 1/30/2024 9:38 AM, immibis wrote:
> On 1/30/24 03:17, olcott wrote:
>> On 1/29/2024 6:48 PM, Richard Damon wrote:
>>> On 1/29/24 8:53 AM, olcott wrote:
>>>> On 1/29/2024 12:59 AM, Lawrence D'Oliveiro wrote:
>>>>> On Sun, 28 Jan 2024 09:20:46 -0600, olcott wrote:
>>>>>
>>>>>> ... professor Hehner proves my 2004 claim that the
>>>>>> halting problem is an ill-formed question.
>>>>>
>>>>> Doesn’t matter how you phrase it, the fact remains that there is no
>>>>> logically self-consistent answer to the problem. That’s what Turing
>>>>> proved, and you have done nothing to change that.
>>>>
>>>> Likewise there is no logically consistent answer to this question:
>>>> Is this sentence true or false: "this sentence is not true"?
>>>> It is undecidable because the question itself is incorrect.
>>>>
>>>> Every yes/no question defined to have no correct yes/no answer is an
>>>> incorrect question.
>>>>
>>>
>>> And the question, "Does the Computation defined by this input Halt?"
>>> always has a correct yes/no answer, so is a CORRECT question.
>>
>> Yet when H is asked this question it is an entirely different
>> question
>
> Wrong
>
>> because the context of who is asked the question
>> DOES CHANGE THE MEANING OF THE QUESTION.
>
> Wrong in mathematics

It is necessarily always right. It is the case that math
guys hardly know any linguistics at all and thus mistake their
own ignorance for knowledge.

The x86 machine code of D proves that it specifies recursive
simulation to H.

_D()
[00001c72] 55 push ebp
[00001c73] 8bec mov ebp,esp
[00001c75] 51 push ecx
[00001c76] 8b4508 mov eax,[ebp+08]
[00001c79] 50 push eax ; push D
[00001c7a] 8b4d08 mov ecx,[ebp+08]
[00001c7d] 51 push ecx ; push D
[00001c7e] e8bff8ffff call 00001542 ; call H
[00001c83] 83c408 add esp,+08
[00001c86] 8945fc mov [ebp-04],eax
[00001c89] 837dfc00 cmp dword [ebp-04],+00
[00001c8d] 7402 jz 00001c91
[00001c8f] ebfe jmp 00001c8f
[00001c91] 8b45fc mov eax,[ebp-04]
[00001c94] 8be5 mov esp,ebp
[00001c96] 5d pop ebp
[00001c97] c3 ret
Size in bytes:(0038) [00001c97]


>>
>> What correct Boolean value does H return when D is defined to do the
>> opposite of whatever value that H returns?" has no correct answer.
>>
>
> That is the POOP problem, not the halting problem. We are talking about
> the halting problem, which asks whether a Turing machine/input pair has
> an execution sequence that is infinite.

immibis
Jan 30, 2024, 11:38:11 AM
You are ignorant because the context of who is asked a mathematical
question *DOES NOT CHANGE THE MEANING OF THE QUESTION*

If I ask Susan whether the sequence [1,2,3,4,...] is infinite the
correct answer is the same as if I ask Joseph whether the sequence
[1,2,3,4,...] is infinite.

immibis
Jan 30, 2024, 12:47:20 PM
On 1/30/24 05:11, olcott wrote:
> the context of who is asked a question
> *DOES CHANGE THE MEANING OF THE QUESTION*
>
> The key example of this is: Are you a little girl?

Please express this question in ZFC

immibis
Jan 30, 2024, 12:51:29 PM
On 1/30/24 16:46, olcott wrote:
> On 1/30/2024 9:38 AM, immibis wrote:
>> On 1/30/24 03:17, olcott wrote:
>>> Yet when H is asked this question it is an entirely different
>>> question
>>
>> Wrong
>>
>>> because the context of who is asked the question
>>> DOES CHANGE THE MEANING OF THE QUESTION.
>>
>> Wrong in mathematics
>
> It is necessarily always right. It is the case that math
> guys hardly know any linguistics at all and thus mistake their
> own ignorance for knowledge.

You hardly know any mathematics at all thus mistake your own ignorance
for knowledge.

immibis
Jan 30, 2024, 12:54:36 PM
On 1/30/24 06:57, olcott wrote:
> On 1/29/2024 11:41 PM, Lawrence D'Oliveiro wrote:
>> On Mon, 29 Jan 2024 07:53:23 -0600, olcott wrote:
>>
>>> Every yes/no question defined to have no correct yes/no answer is an
>>> incorrect question.
>>
>> Can you prove that?
>
> I created the notion of an incorrect question back in 2015.
> (and in 2004)
>
> *The logical law of polar questions*
> *Peter Olcott Feb 20, 2015, 11:38:48 AM*
>
> When posed to a man who has never been married,
> the question: Have you stopped beating your wife?
> is an incorrect polar question because neither yes nor
> no is a correct answer.
>
> All polar questions (including incorrect polar questions)
> have exactly one answer from the following:
> 1) No
> 2) Yes
> 3) Neither // Only applies to incorrect polar questions
>
> As far as I know I am the original discoverer of the
> above logical law, thus copyright 2015 by Peter Olcott.
> https://groups.google.com/g/sci.lang/c/AO5Vlupeelo/m/nxJy7N2vULwJ
>

*The logical law of Olcott statements*
*Pseudonymous user "immibis" Jan 30 2024, 06:52:54 PM*

When posed to a Usenet newsgroup, any statement made by Peter Olcott is
an incorrect statement because it is the opposite of the truth.

All Olcott statements are at least two of the following:
1) Untrue
2) Stupid
3) Dishonest

As far as I know I am the original discoverer of the above logical law,
thus copyright 2024 by pseudonymous user "immibis".

Richard Damon
Jan 30, 2024, 9:21:18 PM
For this H, it specifies FINITE recursive simulation to H, so a HALTING
behavior.

olcott
Jan 30, 2024, 10:53:17 PM
When one understands that H is always correct to abort any
simulation that cannot possibly stop running unless aborted

01 int D(ptr x) // ptr is pointer to int function
02 {
03 int Halt_Status = H(x, x);
04 if (Halt_Status)
05 HERE: goto HERE;
06 return Halt_Status;
07 }
08
09 void main()
10 {
11 H(D,D);
12 }

Since every H specified by the above template must do this, each
and every element of this infinite set is correct to abort
its simulation and reject its input D as non-halting.

Fred. Zwarts

unread,
Jan 31, 2024, 4:38:46 AM
to
Op 31.jan.2024 om 04:53 schreef olcott:
No, Han aborts its simulation, so it is not necessary to abort Dan, which
is based on Han, because it aborts itself already. Then it returns a
non-halting status and Dan continues with line 04.
Han(Dan,Dan) should decide for its input Dan, which aborts itself, not
for its non-input Dss which has an infinite recursion.

immibis

unread,
Jan 31, 2024, 7:25:32 AM
to
When one understands that a non-halting machine has an infinite
execution sequence and a halting machine has a finite execution
sequence, one sees that you are wrong.

Richard Damon

unread,
Jan 31, 2024, 7:30:35 AM
to
No, H is only correct to abort and report non-halting if the exact
same program it was looking at (using the exact same H as that H was)
will not halt when run.

If the code of that H is coded to abort and return non-halting, then
that input will be Halting, and thus that H was wrong.

This goes back to the comments about the "Illusion of Truth": H
isn't looking at the input that it was ACTUALLY given. The
programmer of it was reasoning (not the program, as programs don't
"reason", they only obey their programming) that if he wrote a
different program, one that didn't abort, then the input IT was given
(neglecting that this input would be DIFFERENT, as it is based on a
different H) must have its simulation aborted. But since that is a
different input, you can't migrate that answer to the input it was
actually given.

Your problem is that you just don't understand the fundamental terms
you are using. Halting is about specific inputs that describe specific
programs. "Templates" themselves are NOT valid inputs, only ways to
make valid inputs.
The above is NOT such a valid input; it needs the definition of H
included. Once you define that this is using a specific H, you aren't
allowed to change that for this input, which your logic does.

Thus, you are just proving that all you are talking about is POOP and
not halting.

olcott

unread,
Jan 31, 2024, 10:18:32 AM
to
Below I reference an infinite set of simulating termination
analyzers that each correctly aborts its simulation of D
and correctly rejects D as non-halting.

When one understands that simulating termination analyzer H
is always correct to abort any simulation that cannot possibly
stop running unless aborted:

01 int D(ptr x) // ptr is pointer to int function
02 {
03 int Halt_Status = H(x, x);
04 if (Halt_Status)
05 HERE: goto HERE;
06 return Halt_Status;
07 }
08
09 void main()
10 {
11 H(D,D);
12 }

Then every simulating termination analyzer H specified by
the above template correctly aborts its simulation of D
and correctly rejects D as non-halting.

Pages 661 to 696 of Halt7.c specify the H that does this
https://github.com/plolcott/x86utm/blob/master/Halt7.c

olcott

unread,
Jan 31, 2024, 10:36:25 AM
to
Below I reference an infinite set of simulating termination
analyzers that each correctly aborts its simulation of D
and correctly rejects D as non-halting.

*When one understands that simulating termination analyzer H*
*is always correct to abort any simulation that cannot possibly*
*stop running unless aborted*

01 int D(ptr x) // ptr is pointer to int function
02 {
03 int Halt_Status = H(x, x);
04 if (Halt_Status)
05 HERE: goto HERE;
06 return Halt_Status;
07 }
08
09 void main()
10 {
11 H(D,D);
12 }

Then every simulating termination analyzer H specified by
the above template correctly aborts its simulation of D
and correctly rejects D as non-halting.

Pages 661 to 696 of Halt7.c specify the H that does this
https://github.com/plolcott/x86utm/blob/master/Halt7.c



olcott

unread,
Jan 31, 2024, 10:40:12 AM
to
That you can't seem to fully grasp the concept of a program
template is your own shortcoming and not mine.

Below I reference an infinite set of simulating termination
analyzers that each correctly aborts its simulation of D
and correctly rejects D as non-halting.

*When one understands that simulating termination analyzer H*
*is always correct to abort any simulation that cannot possibly*
*stop running unless aborted*

01 int D(ptr x) // ptr is pointer to int function
02 {
03 int Halt_Status = H(x, x);
04 if (Halt_Status)
05 HERE: goto HERE;
06 return Halt_Status;
07 }
08
09 void main()
10 {
11 H(D,D);
12 }

Then every simulating termination analyzer H specified by
the above template correctly aborts its simulation of D
and correctly rejects D as non-halting.

Pages 661 to 696 of Halt7.c specify the H that does this
https://github.com/plolcott/x86utm/blob/master/Halt7.c

immibis

unread,
Jan 31, 2024, 12:10:40 PM
to
Yeah, because if you referenced just one, it would be easy to prove you
are wrong. By referencing an infinite number at the same time, you make
the proof nonsensical, so it cannot be proven wrong because it doesn't
even make sense, like proving the colour blue wrong.

immibis

unread,
Jan 31, 2024, 12:11:06 PM
to
Halting is about programs, not program templates. A program halts or
doesn't. A program template does neither because it is just a template.

Richard Damon

unread,
Jan 31, 2024, 8:34:17 PM
to
Nope, as the correct simulation of the input for any H that returns
non-halting is Halting (even if H can't do that simulation).

>
> 01 int D(ptr x)  // ptr is pointer to int function
> 02 {
> 03   int Halt_Status = H(x, x);
> 04   if (Halt_Status)
> 05     HERE: goto HERE;
> 06   return Halt_Status;
> 07 }
> 08
> 09 void main()
> 10 {
> 11   H(D,D);
> 12 }
>
> Then every simulating termination analyzer H specified by
> the above template correctly aborts its simulation of D
> and correctly rejects D as non-halting.

Nope, see other detailed post.

Richard Damon

unread,
Jan 31, 2024, 8:34:18 PM
to
Except that Halting isn't about "Program Templates" but "Programs".

And thus you are caught in your LIE that you are actually working on the
Halting Problem.

You are just playing with your POOP.


>
> Below I reference an infinite set of simulating termination
> analyzers that each correctly aborts its simulation of D
> and correctly rejects D as non-halting.

So, you are just talking POOP, not Halting.

>
> *When one understands that simulating termination analyzer H*
> *is always correct to abort any simulation that cannot possibly*
> *stop running unless aborted*
>
> 01 int D(ptr x)  // ptr is pointer to int function
> 02 {
> 03   int Halt_Status = H(x, x);
> 04   if (Halt_Status)
> 05     HERE: goto HERE;
> 06   return Halt_Status;
> 07 }
> 08
> 09 void main()
> 10 {
> 11   H(D,D);
> 12 }
>
> Then every simulating termination analyzer H specified by
> the above template correctly aborts its simulation of D
> and correctly rejects D as non-halting.
>
> Pages 661 to 696 of Halt7.c specify the H that does this
> https://github.com/plolcott/x86utm/blob/master/Halt7.c
>

Just more lies. See details elsewhere.

immibis

unread,
Feb 5, 2024, 4:02:22 PM
to
Olcott was not able to respond to this.