
When the Linz Ĥ is required to report on its own behavior both answers are wrong


olcott
Feb 8, 2024, 9:15:00 AM
When Ĥ is applied to ⟨Ĥ⟩
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞ // wrong answer
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn // wrong answer

The above pair of templates specify every encoding of Ĥ that can
possibly exist, an infinite set of Turing machines such that each one
gets the wrong answer when it is required to report its own halt status.
https://www.liarparadox.org/Linz_Proof.pdf

This proves that the halting problem counter-example
<is> isomorphic to the Liar Paradox.
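The two templates above can be rendered as a toy Python sketch (the names make_H_hat and H_says_no are illustrative only; Python functions stand in for Turing machines). Any fixed candidate decider H, wired into Ĥ in the Linz way, gives the wrong answer about Ĥ applied to ⟨Ĥ⟩:

```python
# Toy sketch of the Linz construction; Python stands in for Turing
# machines, and the names (make_H_hat, H_says_no) are illustrative only.

def make_H_hat(H):
    """Build H_hat from a candidate halt decider H.

    H(prog, arg) is supposed to return True iff prog(arg) halts.
    H_hat loops forever when H says "halts" and halts when H says "loops".
    """
    def H_hat(prog):
        if H(prog, prog):          # embedded_H answers "halts"
            while True:            # ...so H_hat loops forever
                pass
        return "halted"            # embedded_H answers "loops", so H_hat halts
    return H_hat

# One fixed candidate: a decider that always answers "does not halt".
def H_says_no(prog, arg):
    return False

H_hat = make_H_hat(H_says_no)

# H_says_no claims H_hat(H_hat) never halts, yet it plainly halts,
# so this particular candidate gives the wrong answer.
assert H_hat(H_hat) == "halted"
assert H_says_no(H_hat, H_hat) is False
```

A candidate that always answers "halts" fails symmetrically: its H_hat would loop forever when applied to its own description.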

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

immibis
Feb 8, 2024, 11:32:44 AM
On 8/02/24 15:14, olcott wrote:
> When Ĥ is applied to ⟨Ĥ⟩
> Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞ // wrong answer
> Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn   // wrong answer
>
> The above pair of templates specify every encoding of Ĥ that can
> possibly exist, an infinite set of Turing machines such that each one
> gets the wrong answer when it is required to report its own halt status.

This proves that it is impossible for any Ĥ to give the right answer
on all inputs.

olcott
Feb 8, 2024, 1:09:47 PM
It proves that asking Ĥ whether it halts or not is an incorrect
question where both yes and no are the wrong answer.

immibis
Feb 8, 2024, 1:15:10 PM
On 8/02/24 19:09, olcott wrote:
> On 2/8/2024 10:32 AM, immibis wrote:
>> On 8/02/24 15:14, olcott wrote:
>>> When Ĥ is applied to ⟨Ĥ⟩
>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞ // wrong answer
>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn   // wrong answer
>>>
>>> The above pair of templates specify every encoding of Ĥ that can
>>> possibly exist, an infinite set of Turing machines such that each one
>>> gets the wrong answer when it is required to report its own halt status.
>>
>> This proves that it is impossible for any Ĥ to give the right
>> answer on all inputs.
>
> It proves that asking Ĥ whether it halts or not is an incorrect
> question where both yes and no are the wrong answer.

No, it proves the right answer is the opposite of what it says.

olcott
Feb 8, 2024, 1:28:43 PM
*This seems to be over your head*
A self-contradictory question never has any correct answer.

Richard Damon
Feb 8, 2024, 6:42:44 PM
On 2/8/24 9:14 AM, olcott wrote:
> When Ĥ is applied to ⟨Ĥ⟩
> Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞ // wrong answer
> Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn   // wrong answer
>
> The above pair of templates specify every encoding of Ĥ that can
> possibly exist, an infinite set of Turing machines such that each one
> gets the wrong answer when it is required to report its own halt status.
> https://www.liarparadox.org/Linz_Proof.pdf
>
> This proves that the halting problem counter-example
> <is> isomorphic to the Liar Paradox.
>


In other words, you don't understand a thing about the problem.

It isn't talking about TEMPLATES, but programs, SPECIFIC instances of
those templates.

And Ĥ isn't being asked to decide anything, only H (and its copy
embedded_H).

To be a "Halt Decider", H (and its copy embedded_H) must be a specific
program that gives a particular answer to every input it is given.

Thus H (Ĥ) (Ĥ) and embedded_H (Ĥ) (Ĥ) have a defined answer before we
even start to look at the problem.

If this specific H (Ĥ) (Ĥ) goes to H.qn, thinking its input is
non-halting, then Ĥ (Ĥ) (the computation described by H's input) halts,
because its embedded_H (Ĥ) (Ĥ) will also go to H.qn, which is the
equivalent state to Ĥ.qn. That Ĥ halts just means that H was wrong.

A CORRECT halt decider HH (Ĥ) (Ĥ) will go to HH.qy and be correct, so
there is a correct answer to the question: does the computation described
by the input halt?

So, you are just proving that your argument is isomorphic to a LIE.


And, your "proof" just shows that it is impossible to make an H that
gives the right answer, proving what you are trying to disprove.
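The point can be made concrete with a toy sketch (hypothetical Python names, not Linz's notation): for any fixed verdict a candidate H gives about Ĥ ⟨Ĥ⟩, the construction itself fixes the actual behavior to be the opposite, so a correct answer always exists; it is just never the one H produces.

```python
# Toy demonstration: whatever fixed answer H gives about H_hat(H_hat),
# the opposite answer is the correct one. Names are illustrative only.

def make_H_hat(H):
    def H_hat(prog):
        if H(prog, prog):    # H says "halts"  -> H_hat loops forever
            while True:
                pass
        return None          # H says "loops"  -> H_hat halts at once
    return H_hat

def h_hat_halts(H, H_hat):
    # Read the actual behavior off the construction: H_hat(H_hat)
    # halts exactly when H(H_hat, H_hat) is False.
    return not H(H_hat, H_hat)

for fixed_verdict in (True, False):
    H = lambda prog, arg, v=fixed_verdict: v   # a fixed, specific decider
    H_hat = make_H_hat(H)
    # The correct answer exists and is the negation of H's fixed answer.
    assert h_hat_halts(H, H_hat) == (not fixed_verdict)
```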




Richard Damon
Feb 8, 2024, 6:51:05 PM
On 2/8/24 1:28 PM, olcott wrote:
> On 2/8/2024 12:15 PM, immibis wrote:
>> On 8/02/24 19:09, olcott wrote:
>>> On 2/8/2024 10:32 AM, immibis wrote:
>>>> On 8/02/24 15:14, olcott wrote:
>>>>> When Ĥ is applied to ⟨Ĥ⟩
>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞ // wrong answer
>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn   // wrong answer
>>>>>
>>>>> The above pair of templates specify every encoding of Ĥ that can
>>>>> possibly exist, an infinite set of Turing machines such that each one
>>>>> gets the wrong answer when it is required to report its own halt
>>>>> status.
>>>>
>>>> This proves that it is impossible for any Ĥ to give the right
>>>> answer on all inputs.
>>>
>>> It proves that asking Ĥ whether it halts or not is an incorrect
>>> question where both yes and no are the wrong answer.
>>
>> No, it proves the right answer is the opposite of what it says.
>>
>
> *This seems to be over your head*
> A self-contradictory question never has any correct answer.
>

So the Halting Question, does the computation described by the input
Halt? isn't a self-contradictory question, as it always has a correct
answer, the opposite of what H gives (if it gives one).

Thus, your premise is false.

olcott
Feb 8, 2024, 7:48:34 PM
Maybe you need to carefully reread this fifty to sixty times before you
get it? (it took me twenty years to get it this simple)

When Ĥ is to report on its own behavior both Ĥ.qy and Ĥ.qn are the wrong
answer for every possible Ĥ applied to ⟨Ĥ⟩.

Do you understand that every possible element of an infinite set is more
than one element?

Richard Damon
Feb 8, 2024, 9:40:32 PM
But Ĥ doesn't need to report on anything, the copy of H that is in it does.

>
> Do you understand that every possible element of an infinite set is more
> than one element?
>

Right, so the set isn't a specific input, so it is not the thing that
the Halting question is about.

The Halting problem is about making a decider that answers the Halting
Question, which asks the decider about the SPECIFIC COMPUTATION (a
specific program/data) that the input describes.

Not about "sets" of deciders / inputs.

You are just talking POOP, not Halting Problem.

olcott
Feb 8, 2024, 10:34:52 PM
When an infinite set of decider/input pairs has no correct
answer then the question is rigged.

Richard Damon
Feb 8, 2024, 10:44:43 PM
Except that EVERY element of that set had a correct answer, just not the
one the decider gave.

You don't know the difference between a "correct" question, and an
uncomputable problem.

The fact that every input has a correct answer for the Halting Question
(does the input describe a computation that Halts) shows the question is
correct.

That no decider can give the right answer to the particular input built
on it, shows that the problem is uncomputable.

You are just admitting the thing you are trying to prove wrong.

You are just proving your ignorance, and that you have POOP on your brain.

immibis
Feb 8, 2024, 10:56:09 PM
On 8/02/24 19:28, olcott wrote:
> On 2/8/2024 12:15 PM, immibis wrote:
>> On 8/02/24 19:09, olcott wrote:
>>> On 2/8/2024 10:32 AM, immibis wrote:
>>>> On 8/02/24 15:14, olcott wrote:
>>>>> When Ĥ is applied to ⟨Ĥ⟩
>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞ // wrong answer
>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn   // wrong answer
>>>>>
>>>>> The above pair of templates specify every encoding of Ĥ that can
>>>>> possibly exist, an infinite set of Turing machines such that each one
>>>>> gets the wrong answer when it is required to report its own halt
>>>>> status.
>>>>
>>>> This proves that it is impossible for any Ĥ to give the right
>>>> answer on all inputs.
>>>
>>> It proves that asking Ĥ whether it halts or not is an incorrect
>>> question where both yes and no are the wrong answer.
>>
>> No, it proves the right answer is the opposite of what it says.
>>
>
> *This seems to be over your head*
> A self-contradictory question never has any correct answer.
>
The question isn't self-contradictory. It has a right answer.

immibis
Feb 8, 2024, 10:58:24 PM
Is the question "what positive integer should I subtract from 3 to make
5?" rigged? Or does it just have no correct answer?

The question "Does Dan(Dan) halt?" has an answer, and the answer is yes.
The question "Does Dah(Dah) halt?" has an answer, and the answer is no.
The question "What is the code for a halt decider?" has no answer,
because there aren't any halt deciders. It is like asking "what colour
is the elephant in the room?"
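Dan and Dah are hypothetical machines; a minimal Python rendering consistent with the claims above models them as generators, so a step-limited simulator can probe them without hanging:

```python
# Toy models of the hypothetical machines Dan and Dah from the post
# above, written as generators so they can be stepped safely.

def Dan(x):
    yield                     # does one step of work...
    return                    # ...then halts on any input

def Dah(x):
    while True:               # never halts on any input
        yield

def halts_within(prog, arg, steps):
    """Bounded simulation: True if prog(arg) finishes within `steps` steps.

    A False result only means "did not halt yet"; it is not, in
    general, a proof of looping.
    """
    it = prog(arg)
    for _ in range(steps):
        try:
            next(it)
        except StopIteration:
            return True
    return False

assert halts_within(Dan, Dan, 10) is True    # "Does Dan(Dan) halt?" -> yes
assert halts_within(Dah, Dah, 10) is False   # "Does Dah(Dah) halt?" -> no sign of it
```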

olcott
Feb 9, 2024, 12:22:29 AM
When Ĥ applied to ⟨Ĥ⟩ has been intentionally defined to contradict
every value that each embedded_H returns for the infinite set of
every Ĥ that can possibly exist then each and every element of
these Ĥ / ⟨Ĥ⟩ pairs is isomorphic to a self-contradictory question.

That this is too difficult for you to understand does not entail
that I am incorrect. For me to be actually incorrect you must point
to actual incoherence directly in my view. So far every rebuttal
has been of the form "I just don't believe it."

olcott
Feb 9, 2024, 12:26:58 AM
It is rigged. "What time is it (yes or no)?" is also rigged.
When-so-ever the solution set is defined to be the empty set the
question is rigged.

Richard Damon
Feb 9, 2024, 7:05:37 AM
No, YOUR POOP question, is self-contradictory.

The Halting Question is not, as EVERY element of that set you talk about
has a correct answer to it, as every specific input describes a Halting
Computation or not.

Note, your POOP question is an incorrect question as it ignores that
fact that a given H can't "choose" its answer to try and be correct, but
its answer is FIXED by its definition.

It is like asking what name should Mary have been born with to sound
like a race. Questions about changing what your basic identity has been
are just invalid.

>
> That this is too difficult for you to understand does not entail
> that I am incorrect. For me to be actually incorrect you must point
> to actual incoherence directly in my view. So far every rebuttal
> has been the form "I just don't believe it."
>

Nope, that you think this shows you don't understand anything that you
talk about, but are just a pathological liar.

Richard Damon
Feb 9, 2024, 7:05:39 AM
So, your brain is rigged?

Many LEGITIMATE questions have as an answer the empty set.

olcott
Feb 9, 2024, 9:37:32 AM
On 2/9/2024 1:07 AM, Mikko wrote:
> On 2024-02-08 15:15:54 +0000, olcott said:
>
>> On 2/8/2024 9:11 AM, Mikko wrote:
>>> On 2024-02-08 14:39:05 +0000, olcott said:
>>>
>>>> On 2/8/2024 8:33 AM, Mikko wrote:
>>>>> On 2024-02-08 14:14:55 +0000, olcott said:
>>>>>
>>>>>> When Ĥ is applied to ⟨Ĥ⟩
>>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞ // wrong answer
>>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn   // wrong answer
>>>>>>
>>>>>> The above pair of templates specify every encoding of Ĥ that can
>>>>>> possibly exist, an infinite set of Turing machines such that each one
>>>>>> gets the wrong answer when it is required to report its own halt
>>>>>> status.
>>>>>> https://www.liarparadox.org/Linz_Proof.pdf
>>>>>>
>>>>>> This proves that the halting problem counter-example
>>>>>> <is> isomorphic to the Liar Paradox.
>>>>>
>>>>> Ĥ is not required to report anything. Linz only specifies how Ĥ is
>>>>> constructed but not what it should do.
>>>>>
>>>>
>>>> *Clearly you didn't read what he said on the link*
>>>
>>> The point is not what he said but what he didn't say. He didn't
>>> say what Ĥ is required to do.
>>>
>>
>> He did say what Ĥ is required to do
>> and you simply didn't read what he said.
>
> No, he didn't. Otherwise you would show where in that text is the word
> "require" or something that means the same. But you don't because he
> didn't say.
>

We can therefore legitimately ask what would happen if Ĥ is
applied to ŵ. (middle of page 3)
https://www.liarparadox.org/Linz_Proof.pdf

In my notational conventions it would be: Ĥ applied to ⟨Ĥ⟩.

When Ĥ is to report on its own behavior both Ĥ.qy and Ĥ.qn are the wrong
answer for every possible Ĥ applied to ⟨Ĥ⟩.

When every possible Ĥ of the infinite set of Ĥ is applied to
its own machine description: ⟨Ĥ⟩ then Ĥ is intentionally defined
to be self-contradictory.

olcott
Feb 9, 2024, 9:50:29 AM
When every possible Ĥ of the infinite set of Ĥ is applied to
its own machine description: ⟨Ĥ⟩ then Ĥ is intentionally defined
to be self-contradictory.

The issue is not that the most powerful model of computation is
too weak. The issue is that an input was intentionally defined
to be self-contradictory.

I got involved with these in 2004
(a) The Liar Paradox
(b) Gödel's 1931 incompleteness theorem
(c) The halting problem
and in about 2016 with the Tarski Undefinability theorem
because they prove that human understanding of the notion
of analytical truth is incoherent.

As recently as 1975 one of the greatest minds in the field
did not seem to understand that the Liar Paradox is simply
not a truth bearer.
https://www.impan.pl/~kz/truthseminar/Kripke_Outline.pdf

olcott
Feb 9, 2024, 9:53:33 AM
On 2/9/2024 6:05 AM, Richard Damon wrote:
No polar (YES/NO) question can correctly have the empty set as an
answer. The whole idea of vacuous truth is a ruse.

immibis
Feb 9, 2024, 2:00:14 PM
"What time is it (yes or no)?" is a syntax error.

The question "Does Dan(Dan) halt?" has an answer, and the answer is yes.
The question "Does Dah(Dah) halt?" has an answer, and the answer is no.
The question "What is the code for a halt decider?" has no answer,
because there aren't any halt deciders. It is like asking "what colour
is the elephant in the room?"


> When-so-ever the solution set is defined to be the empty set the
> question is rigged.

So "what colour is the elephant in the room?" is a rigged question?

immibis
Feb 9, 2024, 2:02:12 PM
On 9/02/24 15:53, olcott wrote:
>
> No polar (YES/NO) question can correctly have the empty set as an
> answer. The whole idea of vacuous truth is a ruse.
>

Nobody said it did.
The question "Does Dan(Dan) halt (yes/no)?" has a correct answer: the
answer is yes.
The question "Does Dah(Dah) halt (yes/no)?" has a correct answer: the
answer is no.
The question "What is the set with no elements?" has a correct answer:
the answer is the empty set.
The question "What is the set of halt deciders?" has a correct answer:
the answer is the empty set.

The question "What colour is the elephant in the room?" has no correct
answer because "the elephant in the room" does not refer to anything.

André G. Isaak
Feb 9, 2024, 2:07:03 PM
On 2024-02-09 12:02, immibis wrote:

> The question "What colour is the elephant in the room?" has no correct
> answer because "the elephant in the room" does not refer to anything.

Well, that depends entirely on which room you are referring to.

André

--
To email remove 'invalid' & replace 'gm' with well known Google mail
service.

olcott
Feb 9, 2024, 2:24:21 PM
It is a type mismatch error that must be processed semantically for
natural language expressions.

>
> The question "Does Dan(Dan) halt?" has an answer, and the answer is yes.
> The question "Does Dah(Dah) halt?" has an answer, and the answer is no.
> The question "What is the code for a halt decider?" has no answer,
> because there aren't any halt deciders. It is like asking "what colour
> is the elephant in the room?"
>
>
>> When-so-ever the solution set is defined to be the empty set the
>> question is rigged.
>
> So "what colour is the elephant in the room?" is a rigged question?
>

I am only examining analytic truth, synthetic truth is off-topic.
Every expression that can be verified as completely true entirely
on the basis of other expressions of language is analytic.

The typical color of elephants can be specified as an axiom
of the correct model of the actual world.

olcott
Feb 9, 2024, 2:25:51 PM
On 2/9/2024 1:02 PM, immibis wrote:
> On 9/02/24 15:53, olcott wrote:
>>
>> No polar (YES/NO) question can correctly have the empty set as an
>> answer. The whole idea of vacuous truth is a ruse.
>>
>
> Nobody said it did.

When Ĥ is to report on its own behavior both Ĥ.qy and Ĥ.qn are the wrong
answer for every possible Ĥ applied to ⟨Ĥ⟩.

*Changing the subject does not count as rebuttal*

immibis
Feb 9, 2024, 3:40:57 PM
On 9/02/24 20:25, olcott wrote:
> On 2/9/2024 1:02 PM, immibis wrote:
>> On 9/02/24 15:53, olcott wrote:
>>>
>>> No polar (YES/NO) question can correctly have the empty set as an
>>> answer. The whole idea of vacuous truth is a ruse.
>>>
>>
>> Nobody said it did.
>
> When Ĥ is to report on its own behavior both Ĥ.qy and Ĥ.qn are the wrong
> answer for every possible Ĥ applied to ⟨Ĥ⟩.
>

Ĥ is not to report on its own behaviour. Changing the subject does not
count as rebuttal. You ignore the rest of my answer.

immibis
Feb 9, 2024, 3:43:13 PM
Whatever you want to call it, it is a very basic error that results in
no question at all. The fact that "What time is it (yes or no)?" has no
correct answer is precisely as uninteresting as the fact that
"dhjerlktw4v?" has no correct answer.

>
>>
>> The question "Does Dan(Dan) halt?" has an answer, and the answer is yes.
>> The question "Does Dah(Dah) halt?" has an answer, and the answer is no.
>> The question "What is the code for a halt decider?" has no answer,
>> because there aren't any halt deciders. It is like asking "what colour
>> is the elephant in the room?"
>>
>>> When-so-ever the solution set is defined to be the empty set the
>>> question is rigged.
>>
>> So "what colour is the elephant in the room?" is a rigged question?
>>
>
> I am only examining analytic truth, synthetic truth is off-topic.
> Every expression that can be verified as completely true entirely
> on the basis of other expressions of language is analytic.


It is analytic truth that Dan(Dan) halts and Dah(Dah) does not halt.

> The typical color of elephants can be specified as an axiom
> of the correct model of the actual world.

That was not the question.

olcott
Feb 9, 2024, 3:49:00 PM
On 2/9/2024 2:40 PM, immibis wrote:
> On 9/02/24 20:25, olcott wrote:
>> On 2/9/2024 1:02 PM, immibis wrote:
>>> On 9/02/24 15:53, olcott wrote:
>>>>
>>>> No polar (YES/NO) question can correctly have the empty set as an
>>>> answer. The whole idea of vacuous truth is a ruse.
>>>>
>>>
>>> Nobody said it did.
>>
>> When Ĥ is to report on its own behavior both Ĥ.qy and Ĥ.qn are the
>> wrong answer for every possible Ĥ applied to ⟨Ĥ⟩.
>>
>
> Ĥ is not to report on its own behaviour.

https://www.liarparadox.org/Linz_Proof.pdf
The Linz proof's conclusion is based on the Linz Ĥ applied
to its own machine description.

No one ever bothered to notice that the inability to
correctly solve self-contradictory problem definitions
places no actual limit on anyone or anything.

One of the greatest minds in the field (Saul Kripke) did not
even understand that the Liar Paradox is not a truth bearer.

immibis
Feb 9, 2024, 4:13:03 PM
On 9/02/24 21:48, olcott wrote:
> On 2/9/2024 2:40 PM, immibis wrote:
>> On 9/02/24 20:25, olcott wrote:
>>> On 2/9/2024 1:02 PM, immibis wrote:
>>>> On 9/02/24 15:53, olcott wrote:
>>>>>
>>>>> No polar (YES/NO) question can correctly have the empty set as an
>>>>> answer. The whole idea of vacuous truth is a ruse.
>>>>>
>>>>
>>>> Nobody said it did.
>>>
>>> When Ĥ is to report on its own behavior both Ĥ.qy and Ĥ.qn are the
>>> wrong answer for every possible Ĥ applied to ⟨Ĥ⟩.
>>>
>>
>> Ĥ is not to report on its own behaviour.
>
> https://www.liarparadox.org/Linz_Proof.pdf
> The Linz proof's conclusion is based on the Linz Ĥ applied
> to its own machine description.

It is not to report on its own behaviour but the behaviour of all
machines passed as a parameter.

> No one ever bothered to notice that the inability to
> correctly solve self-contradictory problem definitions
> places no actual limit on anyone or anything.

The problem is not self-contradictory.

olcott
Feb 9, 2024, 4:33:26 PM
On 2/9/2024 3:12 PM, immibis wrote:
> On 9/02/24 21:48, olcott wrote:
>> On 2/9/2024 2:40 PM, immibis wrote:
>>> On 9/02/24 20:25, olcott wrote:
>>>> On 2/9/2024 1:02 PM, immibis wrote:
>>>>> On 9/02/24 15:53, olcott wrote:
>>>>>>
>>>>>> No polar (YES/NO) question can correctly have the empty set as an
>>>>>> answer. The whole idea of vacuous truth is a ruse.
>>>>>>
>>>>>
>>>>> Nobody said it did.
>>>>
>>>> When Ĥ is to report on its own behavior both Ĥ.qy and Ĥ.qn are the
>>>> wrong answer for every possible Ĥ applied to ⟨Ĥ⟩.
>>>>
>>>
>>> Ĥ is not to report on its own behaviour.
>>
>> https://www.liarparadox.org/Linz_Proof.pdf
>> The Linz proof's conclusion is based on the Linz Ĥ applied
>> to its own machine description.
>
> It is not to report on its own behaviour but the behaviour of all
> machines passed as a parameter.
>

*ON PAGE THREE* https://www.liarparadox.org/Linz_Proof.pdf
Linz specifically says that he is analyzing Ĥ applied to
its own machine description. Then Linz notices that both answers that
Ĥ provides are the wrong answer.

What no one noticed was that it gets the wrong answer because
Ĥ was defined to contradict itself.

>> No one ever bothered to notice that the inability to
>> correctly solve self-contradictory problem definitions
>> places no actual limit on anyone or anything.
>
> The problem is not self-contradictory.

immibis
Feb 9, 2024, 4:58:01 PM
On 9/02/24 22:33, olcott wrote:
> Then Linz notices that both answers that Ĥ provides are the wrong answer.

Ĥ cannot provide both answers. It only provides one answer.

olcott
Feb 9, 2024, 5:19:21 PM
The infinite set of every Ĥ that can possibly exist cannot provide
a correct answer proving that the question itself is incorrect.

olcott
Feb 9, 2024, 6:50:01 PM
On 2/8/2024 8:14 AM, olcott wrote:
> When Ĥ is applied to ⟨Ĥ⟩
> Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞ // wrong answer
> Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn   // wrong answer
>
> The above pair of templates specify every encoding of Ĥ that can
> possibly exist, an infinite set of Turing machines such that each one
> gets the wrong answer when it is required to report its own halt status.
> https://www.liarparadox.org/Linz_Proof.pdf
>
> This proves that the halting problem counter-example
> <is> isomorphic to the Liar Paradox.
>

When Ĥ is to report on its own behavior both Ĥ.qy and Ĥ.qn are the wrong
answer for every possible Ĥ applied to ⟨Ĥ⟩.

Richard Damon
Feb 9, 2024, 7:09:48 PM
So?

Note, every possible Ĥ means every possible H, so all H are wrong.

> The issue is not that the most powerful model of computation is
> too weak. The issue is that an input was intentionally defined
> to be self-contradictory.

But it shows that the simple problem, for which we have good reasons for
wanting an answer, can not be computed by this most powerful model of
computation.

>
> I got involved with these in 2004
> (a) The Liar Paradox
> (b) Gödel's 1931 incompleteness theorem
> (c) The halting problem
> and in about 2016 with the Tarski Undefinability theorem
> because they prove that human understanding of the notion
> of analytical truth is incoherent.
>
> As recently as 1975 one of the greatest minds in the field
> did not seem to understand that the Liar Paradox is simply
> not a truth bearer.
> https://www.impan.pl/~kz/truthseminar/Kripke_Outline.pdf
>

Maybe to YOUR understanding of them, but they work well enough for most
people.

I could say the same about your ideas of understanding, as you think it
is ok to just LIE about things to try to make a point. This shows you
just don't understand the basics of logic and truth (or computations,
but you have sort of already admitted that one).

Richard Damon
Feb 9, 2024, 7:09:49 PM
But you have admitted that the ACTUAL question has a correct answer, and
only your strawman doesn't.

When H(D,D) returns non-halting, then the computation described by that
input, D(D) will halt, as you have admitted, thus there IS a correct
answer to the ACTUAL question.

The "Self-Contradiction" only happens when you reframe the question
about what answer can H return to be correct, ignoring the fact that a
given H can only return one answer to that question, the one its
programming says it will return.

So, your "logic" is just based on flights of fantasy.

Richard Damon
Feb 9, 2024, 7:09:50 PM
On 2/9/24 2:25 PM, olcott wrote:
> On 2/9/2024 1:02 PM, immibis wrote:
>> On 9/02/24 15:53, olcott wrote:
>>>
>>> No polar (YES/NO) question can correctly have the empty set as an
>>> answer. The whole idea of vacuous truth is a ruse.
>>>
>>
>> Nobody said it did.
>
> When Ĥ is to report on its own behavior both Ĥ.qy and Ĥ.qn are the wrong
> answer for every possible Ĥ applied to ⟨Ĥ⟩.
>
> *Changing the subject does not count as rebuttal*
>

Except that Ĥ isn't asked to report on its own behavior.

H is.

So, yes, YOUR changing the subject does not count as rebuttal.

Richard Damon
Feb 9, 2024, 7:09:51 PM
On 2/9/24 3:48 PM, olcott wrote:
> On 2/9/2024 2:40 PM, immibis wrote:
>> On 9/02/24 20:25, olcott wrote:
>>> On 2/9/2024 1:02 PM, immibis wrote:
>>>> On 9/02/24 15:53, olcott wrote:
>>>>>
>>>>> No polar (YES/NO) question can correctly have the empty set as an
>>>>> answer. The whole idea of vacuous truth is a ruse.
>>>>>
>>>>
>>>> Nobody said it did.
>>>
>>> When Ĥ is to report on its own behavior both Ĥ.qy and Ĥ.qn are the
>>> wrong answer for every possible Ĥ applied to ⟨Ĥ⟩.
>>>
>>
>> Ĥ is not to report on its own behaviour.
>
> https://www.liarparadox.org/Linz_Proof.pdf
> The Linz proof's conclusion is based on the Linz Ĥ applied
> to its own machine description.
>
> No one ever bothered to notice that the inability to
> correctly solve self-contradictory problem definitions
> places no actual limit on anyone or anything.
>
> One of the greatest minds in the field (Saul Kripke) did not
> even understand that the Liar Paradox is not a truth bearer.
>

And you seem not to see that Ĥ isn't asked to decide on the input; only
H is. And it is only because H claims to be able to get the right
answer to all inputs that it is put in this spot.

YOU don't seem to understand that actual TRUTH does affect what is TRUE.

Richard Damon
Feb 9, 2024, 7:09:53 PM
On 2/9/24 4:33 PM, olcott wrote:
> On 2/9/2024 3:12 PM, immibis wrote:
>> On 9/02/24 21:48, olcott wrote:
>>> On 2/9/2024 2:40 PM, immibis wrote:
>>>> On 9/02/24 20:25, olcott wrote:
>>>>> On 2/9/2024 1:02 PM, immibis wrote:
>>>>>> On 9/02/24 15:53, olcott wrote:
>>>>>>>
>>>>>>> No polar (YES/NO) question can correctly have the empty set as an
>>>>>>> answer. The whole idea of vacuous truth is a ruse.
>>>>>>>
>>>>>>
>>>>>> Nobody said it did.
>>>>>
>>>>> When Ĥ is to report on its own behavior both Ĥ.qy and Ĥ.qn are the
>>>>> wrong answer for every possible Ĥ applied to ⟨Ĥ⟩.
>>>>>
>>>>
>>>> Ĥ is not to report on its own behaviour.
>>>
>>> https://www.liarparadox.org/Linz_Proof.pdf
>>> The Linz proof's conclusion is based on the Linz Ĥ applied
>>> to its own machine description.
>>
>> It is not to report on its own behaviour but the behaviour of all
>> machines passed as a parameter.
>>
>
> *ON PAGE THREE*  https://www.liarparadox.org/Linz_Proof.pdf
> Linz specifically says that he is specifically analyzing Ĥ applied to
> its own machine description. Then Linz notices that both answers that
> Ĥ provides are the wrong answer.
>
> What no one noticed was that it gets the wrong answer because
> Ĥ was defined to contradict itself.

You seem to be having a problem understanding basics.

Ĥ doesn't contradict ITSELF, as Ĥ doesn't "assert" anything to contradict.

Ĥ contradicts H, which is a different machine, not "itself".

So, I guess you think John and Mary are the same person, just because
John always disagrees with what ever Mary says.

Note also, you don't understand "exhaustive" analysis. Since Linz
doesn't want to presume anything about the behavior of H, he looks at
ALL POSSIBLE behaviors that H could have. You have even claimed to have
"invented" this sort of analysis (you must have good Russian roots, to
claim you invented things that have been well known for a long time).

By showing that every H, whatever answer it gives when presented with
the input Ĥ built on it in this way, gives the wrong answer, we have
exhaustively proven that no H can give the correct answer (which does
exist), and thus that this problem is uncomputable by a machine.

Showing that it is POSSIBLE to build this sort of contradictory input
for a machine, shows that the system is powerful enough to generate
uncomputable problems. Knowing that is valuable knowledge, as it warns
us not to spend our life time chasing after an answer that doesn't exist.

You have made that exact failing, because you refuse to learn from history.

Richard Damon
Feb 9, 2024, 7:09:54 PM
On 2/9/24 5:19 PM, olcott wrote:
> On 2/9/2024 3:57 PM, immibis wrote:
>> On 9/02/24 22:33, olcott wrote:
>>> Then Linz notices that both answers that Ĥ provides are the wrong
>>> answer.
>>
>> Ĥ cannot provide both answers. It only provides one answer.
>
> The infinite set of every Ĥ that can possibly exist cannot provide
> a correct answer proving that the question itself is incorrect.
>

Nope, because every member of that set has a correct answer that H
needed to produce to be correct.

That no H can do that, shows the problem is uncomputable, not incorrect.

You are just proving your ignorance of the topic. And your repeating the
error over and over shows your stupidity (and insanity).

Richard Damon
Feb 9, 2024, 7:09:57 PM
Right, but Ĥ isn't required to give the answer, H is.

>
> In my notational conventions it would be: Ĥ applied to ⟨Ĥ⟩.
>
> When Ĥ is to report on its own behavior both Ĥ.qy and Ĥ.qn are the wrong
> answer for every possible Ĥ applied to ⟨Ĥ⟩.

But Ĥ isn't asked to "report"; the copy of H embedded in it is.

>
> When every possible Ĥ of the infinite set of Ĥ is applied to
> its own machine description: ⟨Ĥ⟩ then Ĥ is intentionally defined
> to be self-contradictory.
>
>

No, it shows you don't know the meaning of that word.

In fact, it seems you don't understand that concept of IDENTITY.

That does seriously hamper your understanding of language.

Richard Damon
Feb 9, 2024, 7:09:59 PM
On 2/9/24 6:49 PM, olcott wrote:
> On 2/8/2024 8:14 AM, olcott wrote:
>> When Ĥ is applied to ⟨Ĥ⟩
>> Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞ // wrong answer
>> Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn   // wrong answer
>>
>> The above pair of templates specify every encoding of Ĥ that can
>> possibly exist, an infinite set of Turing machines such that each one
>> gets the wrong answer when it is required to report its own halt status.
>> https://www.liarparadox.org/Linz_Proof.pdf
>>
>> This proves that the halting problem counter-example
>> <is> isomorphic to the Liar Paradox.
>>
>
> When Ĥ is to report on its own behavior both Ĥ.qy and Ĥ.qn are the wrong
> answer for every possible Ĥ applied to ⟨Ĥ⟩.
>
> The infinite set of every Ĥ that can possibly exist cannot provide
> a correct answer proving that the question itself is incorrect.
>

WHERE is ** Ĥ ** defined to REPORT on behavior.

Ĥ ACTS on the answer it gets from H, which is the machine that is
required to report on behavior.

The fact that NO H can give the right answer says the question is
non-computable.

The fact that there IS a correct answer for each input (even if not
given by H) says the question is correct.
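The diagonal construction being argued over can be sketched in a few
lines of Python. This is only an illustrative toy (make_H_hat, the
constant deciders, and the "loops"/"halts" strings are my names, not
Linz's notation): Ĥ contradicts whatever its embedded copy of H
predicts, so every candidate H is wrong about Ĥ applied to ⟨Ĥ⟩, even
though each specific Ĥ still has one definite, correct answer.

```python
# Toy model of the Linz diagonal argument (illustrative names only).
# A candidate decider H(prog, arg) returns True to claim "prog(arg)
# halts" and False to claim "prog(arg) loops". Ĥ consults H on its
# own description and then does the opposite; behavior is reported
# symbolically so nothing actually loops forever here.

def make_H_hat(H):
    def H_hat(x):
        # embedded_H decides ⟨x⟩ ⟨x⟩; Ĥ then contradicts that verdict
        return "loops" if H(x, x) else "halts"
    return H_hat

# Try both constant deciders: whichever verdict H gives about
# (Ĥ, ⟨Ĥ⟩), the actual behavior of Ĥ applied to ⟨Ĥ⟩ is the other one.
for verdict in (True, False):
    H = lambda prog, arg, v=verdict: v
    H_hat = make_H_hat(H)
    predicted = "halts" if H(H_hat, H_hat) else "loops"
    actual = H_hat(H_hat)
    assert actual != predicted  # H's prediction never matches
```

The mismatch is a fact about each H, not about the question: for every
fixed H, the pair (Ĥ, ⟨Ĥ⟩) has exactly one actual behavior, so a right
answer exists; H just fails to produce it.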

immibis
Feb 9, 2024, 8:05:24 PM

On 9/02/24 23:19, olcott wrote:
> On 2/9/2024 3:57 PM, immibis wrote:
>> On 9/02/24 22:33, olcott wrote:
>>> Then Linz notices that both answers that Ĥ provides are the wrong
>>> answer.
>>
>> Ĥ cannot provide both answers. It only provides one answer.
>
> The infinite set of every Ĥ that can possibly exist cannot provide
> a correct answer proving that the question itself is incorrect.
>

Programs are not infinite sets. Changing the subject to infinite sets is
dishonest.

Richard Damon
Feb 9, 2024, 8:09:14 PM

But we can make sets of Programs, and those sets can be infinite.

But, as you say, those sets are not the programs, but this just goes to
Peter Olcott's inability to understand identity.


olcott
Feb 9, 2024, 9:21:14 PM

A property that every element of an infinite set of programs
has applies to each element of this set.

When Ĥ is to report on its own behavior both Ĥ.qy and Ĥ.qn
are the wrong answer for *every possible* Ĥ applied to ⟨Ĥ⟩.

Richard Damon
Feb 9, 2024, 9:40:06 PM

On 2/9/24 9:21 PM, olcott wrote:
> On 2/9/2024 7:05 PM, immibis wrote:
>> On 9/02/24 23:19, olcott wrote:
>>> On 2/9/2024 3:57 PM, immibis wrote:
>>>> On 9/02/24 22:33, olcott wrote:
>>>>> Then Linz notices that both answers that Ĥ provides are the wrong
>>>>> answer.
>>>>
>>>> Ĥ cannot provide both answers. It only provides one answer.
>>>
>>> The infinite set of every Ĥ that can possibly exist cannot provide
>>> a correct answer proving that the question itself is incorrect.
>>>
>>
>> Programs are not infinite sets. Changing the subject to infinite sets
>> is dishonest.
>>
>
> A property that every element of an infinite set of programs
> has applies to each element of this set.
>
> When Ĥ is to report on its own behavior both Ĥ.qy and Ĥ.qn
> are the wrong answer for *every possible* Ĥ applied to ⟨Ĥ⟩.
>
>

Nope.

First, Ĥ isn't required to report on its behavior, H is.

You need to make that LIE, because you want to use the word
"self-contradictory", even though that isn't what is happening. This
proves you don't understand what you are doing.

Ĥ is contradictory to H, and they are DIFFERENT machines.

Second, for each H in that set, there is a specific Ĥ that it is given,
and for that specific Ĥ, there is a correct answer, just not the one
that H gives.

Of course, the ONLY answer a given machine CAN give is the one that its
programming generates, even if it is wrong.

So, there IS a correct answer that every H should have given (to be
correct), but it always gave the other answer. All you have done is
prove the very thing you want to claim is wrong: there cannot exist a
Halt Decider that gives the right answer to every input, since you have
proven that every Halt Decider gets this input wrong.

olcott
Feb 9, 2024, 11:24:30 PM

Ĥ applied to ⟨Ĥ⟩ is asking Ĥ:
Do you halt on your own Turing Machine Description?

This is isomorphic to asking whether the Liar Paradox
is true or false.

Richard Damon
Feb 10, 2024, 12:16:01 AM

No, it is asking if the computation described by the input will halt
when run.

Neither Ĥ nor H know (or can know) that the input is a description of
them or something they are a part of.

There is no "pronoun" in the input pointing to THEM (except in your broken
version, which isn't an equivalent of the actual machines).

If you disagree, write a Turing Machine that accepts inputs that
represent itself and rejects all other inputs.

Be forewarned: a Turing Machine doesn't have a single representation.

>
> This is isomorphic to asking whether the Liar Paradox
> is true or false.
>
>

Nope.

Just that YOU are a LIAR who doesn't know better.

olcott
Feb 10, 2024, 12:33:44 AM

Linz and I have been referring to the actual computation of
Ĥ applied to ⟨Ĥ⟩ with no simulators involved.

Ĥ makes sure to contradict every value that its embedded_H
computes, thus making Ĥ self-contradictory.

Mikko
Feb 10, 2024, 5:12:22 AM

Or none.

--
Mikko

Richard Damon
Feb 10, 2024, 8:35:10 AM

Right, and your Ĥ (Ĥ) will halt, since your H (Ĥ) (Ĥ) goes to qn to
say that the computation its input (Ĥ) (Ĥ) represents, that is Ĥ (Ĥ),
will not halt.

Thus your H is just WRONG.

>
> Ĥ makes sure to contradict every value that its embedded_H
> computes, thus making Ĥ self-contradictory.
>
>

How is it SELF contradictory, when it doesn't contradict ITSELF, but
something else, (that is H)

You just seem to have a fundamental misunderstanding about identity.

But then since you think that YOU (a finite and flawed being) are GOD
(an INFINITE and PERFECT being), that has already been established.

A question is invalid, if there isn't a correct answer for it.

Not that a given program doesn't give the right answer.

Since there is ALWAYS a correct answer to does this specific computation
Halt (even if we don't know the answer), that question is a valid question.

The fact that for any program we might make, we can create an input it
will get wrong, shows that the question is not COMPUTABLE.

You seem to think uncomputable is invalid, just like unprovable can't be
True, but that is a flaw in YOU, not logic.

You are just proving you are just a ignorant, hypocritical,
pathologically lying idiot.

olcott
Feb 10, 2024, 9:59:14 AM

embedded_H could be encoded with every detail of all knowledge that
can be expressed using language. This means that embedded_H is not
restricted by typical conventions. embedded_H could output a text string
swearing at you in English for trying to trick it. This would not be a
wrong answer.

Boolean True(English, "this sentence is not true")
would be required to do this same sort of thing.

olcott
Feb 10, 2024, 10:06:04 AM

embedded_H could be encoded with every detail of all knowledge that
can be expressed using language. This means that embedded_H is not
restricted by typical conventions. embedded_H could output a text string
swearing at you in English for trying to trick it. This would not be a
wrong answer.

enum Boolean {
TRUE,
FALSE,
NEITHER
};

Boolean True(English, "this sentence is not true")
would be required to do this same sort of thing.
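The three-valued interface proposed above can be sketched as follows
(Python rather than C just so the sketch is self-contained; the lookup
table and the name True_pred are mine, and a real True predicate over
all of English is exactly the thing in dispute):

```python
from enum import Enum

class Boolean(Enum):
    TRUE = 1
    FALSE = 2
    NEITHER = 3  # for inputs that are not truth bearers

def True_pred(sentence: str) -> Boolean:
    # Toy lookup table standing in for a truth predicate; it only
    # demonstrates the interface in which the Liar Paradox is
    # rejected rather than answered yes or no.
    known = {
        "two plus two is four": Boolean.TRUE,
        "two plus two is five": Boolean.FALSE,
        "this sentence is not true": Boolean.NEITHER,  # Liar Paradox
    }
    return known.get(sentence, Boolean.NEITHER)

print(True_pred("this sentence is not true"))  # Boolean.NEITHER
```

Whether such a predicate can be made total and computable over a full
formal language is the point being contested in this thread; the code
only shows the three-valued return convention.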


Richard Damon
Feb 10, 2024, 10:29:35 AM

embedded_H is restricted to doing only what is computable.

Since Embedded_H is (at least by your claims) an exact copy of the
Turing Machine H, it can only do what a Turing Machine can do.

So, it CAN'T do what you claim, so you are a LIAR.
>
> enum Boolean {
>   TRUE,
>   FALSE,
>   NEITHER
> };
>
> Boolean True(English, "this sentence is not true")
> would be required to do this same sort of thing.
>
>

But CAN it? Remember, programs can only do what programs can do, which
is based on the instructions they are composed of

You are just too stupid to understand this.

olcott
Feb 10, 2024, 11:18:11 AM

When Embedded_H has encoded within it all of human knowledge that can
be encoded within language then it ceases to be restricted to Boolean.
This enables Embedded_H to do anything that a human mind can do.

> So, it CAN'T do what you claim, so you are a LIAR.
>>
>> enum Boolean {
>>    TRUE,
>>    FALSE,
>>    NEITHER
>> };
>>
>> Boolean True(English, "this sentence is not true")
>> would be required to do this same sort of thing.
>>
>>
>
> But CAN it? Remember, programs can only do what programs can do, which
> is based on the instructions they are composed of
>
> You are just too stupid to understand this.

It is not that I am stupid it is that you cannot think outside the box
of conventional wisdom. There is nothing impossible about a TM that
can communicate in English and understand the meaning of words to the
same extent that human experts do.

Richard Damon
Feb 10, 2024, 11:35:17 AM

You don't understand that your program must follow the rules of a program!

IF you think what you claim is possible, DO IT.

Remember, though, that H has a defined "API": it is to give its answer
in a specific way, and each option has a defined meaning.

Perhaps it could write a message in English on its tape, but then going
to Qn, it is clearly stating that its answer to the question it was
given was "The computation that the input I was given represents, WILL
NOT HALT, when it is run". And EXACTLY That. It doesn't matter what was
written to the tape, as that doesn't have significance to the API.

Since that computation halts, it was wrong.

If H doesn't go to either Qy or Qn in a finite period of time, it fails
to meet the DEFINITION of a decider.

You just don't understand that what you do in a formal system is
constrained by the rules of that system.

All you have done is shown that you are totally ignorant on the actual
basis of how formal logic works, and the fact that you have shown that
apparently you CAN NOT learn that, you are STUPID.

Defined, as someone unable to learn the basics of a field.


olcott
Feb 10, 2024, 11:53:11 AM

It need not follow any arbitrary conventions.

> IF you think what you claim is possible, DO IT.
>
> Remember though, that H has a defined "API", it is to give it's answer
> in a specific way, and each option has a defined meaning.
>

The API could output its halt status determination using an
infinite set of equivalent natural language expressions.

> Perhaps it could write a message in English on its tape, but then going
> to Qn, it is clearly stating that its answer to the question it was
> given was "The computation that the input I was given represents, WILL
> NOT HALT, when it is run". And EXACTLY That. It doesn't matter what was
> written to the tape, as that doesn't have significance to the API.
>

It could alternatively cuss you out for trying to cheat using an
infinite set of equivalent natural language expressions.

Boolean True(English, "this sentence is not true")
is an incorrect yes/no question.

Tarski was simply too stupid to understand that the Liar Paradox
is not a truth bearer and must be rejected as invalid input to
any consistent and correct Truth predicate.

> Since that computation halts, it was wrong.
>
> If H doesn't go to either Qy or Qn in a finite period of time, it fails
> to meet the DEFINITION of a decider.
>
> You just don't understand that what you do in a formal system is
> constrained by the rules of that system.
>
> All you have done is shown that you are totally ignorant on the actual
> basis of how formal logic works, and the fact that you have shown that
> apparently you CAN NOT learn that, you are STUPID.
>
> Defined, as someone unable to learn the basics of a field.
>
>

Ross Finlayson
Feb 10, 2024, 1:28:25 PM

I see you guys are still trying to invalidate each other's deciders.

In a Comenius language, the Liar: is just the prototype of a fallacy,
sharing as it does properties with "the sputnik of quantification",
"the Russell set", "ORD the order type of ordinals", that when the
self-same Universe of Objects is just truisms the Comenius language,
then the Liar is just the prototype of a fallacy, there isn't a
paradox so you can get rid of Ex Falso Quodlibet for Ex Falso Nihilum,
and then otherwise get back into hard problems and approximations
thereof, for which there are all manners of static analysis to
determine both for distributions what are optimal algorithms,
and, for what distributions are pathological algorithms.

Then there's the fun part with "sequences that converge slowly",
"anti-inductive results", "the super-task", "deductive closures
in complementary duals", "completions in universals and particulars".


"Comte's Boole's Russell's Whitehead's logical positivism's
'classical' logic is really only 'classical _quasi-modal_ logic'."

It's neither modal nor monotone, looking at it either way.


olcott
Feb 10, 2024, 1:52:14 PM

Yes this also gets rid of the issue of undecidability.
An expression of language is either true or ~true, thus
unprovable from axioms merely means untrue and cannot
mean undecidable.

The way that I do this within conventional formal systems
is {True, False, Not a truth bearer}.

...14 Every epistemological antinomy can likewise be used for a similar
undecidability proof...(Gödel 1931:43)

Both Tarski and Gödel did not comprehend that semantically
invalid inputs must be rejected as incorrect.

Instead Tarski concluded that a correct and consistent
truth predicate cannot exist on this basis:
Boolean True(English, "this sentence is not true")

> then the Liar is just the prototype of a fallacy, there isn't a
> paradox so you can get rid of Ex Falso Quodlibet for Ex Falso Nihilum,
> and then otherwise get back into hard problems and approximations
> thereof, for which there are all manners of static analysis to
> determine both for distributions what are optimal algorithms,
> and, for what distributions are pathological algorithms.
>
> Then there's the fun part with "sequences that converge slowly",
> "anti-inductive results", "the super-task", "deductive closures
> in complementary duals", "completions in universals and particulars".
>
>
> "Comte's Boole's Russell's Whitehead's logical positivism's
> 'classical' logic is really only 'classical _quasi-modal_ logic'."
>
> It's neither modal nor monotone, looking at it either way.
>
>

Richard Damon
Feb 10, 2024, 3:14:18 PM

And not be a Halt Decider.

>
> Boolean True(English, "this sentence is not true")
> is an incorrect yes/no question.
>

But "Does the machine represented by this input Halt?", IS a correct
Yes/No question.

Show me one that doesn't.

Note, The Halting Question is always about a SPECIFIC input computation,
and thus a specific machine with specific input. The answer doesn't
depend on who you ask, as it is just about the machine itself.

Thus, to build the input D of the proof, you have had to FIRST define
the Halt Decider you are going to claim to be correct for all inputs, as
that is what is needed to convert the "Template" of the proof to an
actual input.


> Tarski was simply too stupid to understand that the Liar Paradox
> is not a truth bearer and must be rejected as invalid input to
> any consistent and correct Truth predicate.
>

Nope, you have proven that you are too stupid to know what he is doing.

You have admitted that by failing to point out where he does that.

You seem to have a problem with the order of the steps. He shows that
the existence of a computable Truth predicate leads to impossible
conclusions, thus it can't exist.

Richard Damon
Feb 10, 2024, 3:14:24 PM

So, show what you can do in your new Formal System. Remember, you can't
just assume properties from a different Formal System with different rules.

It has always been your option to start a new set of PO-theories and
show what they can do; you just can't add your fundamental changes to
an existing system after the fact.

See what PO-ZFC generates, or PO-Computation theory does, assuming you
can actually make them work with the limitations of your system.

>
> ...14 Every epistemological antinomy can likewise be used for a similar
> undecidability proof...(Gödel 1931:43)
>
> Both Tarski and Gödel did not comprehend that semantically
> invalid inputs must be rejected as incorrect.

Nope, they did, and they used the fact that we must. You just are too
stupid to understand what they actually did.

>
> Instead Tarski concluded that a correct and consistent
> truth predicate cannot exist on this basis:
> Boolean True(English, "this sentence is not true")

Nope. Note, "English" is not a proper "Formal System".

Your lack of understanding shows how ignorant you are.

olcott
Feb 10, 2024, 3:36:13 PM

You said that you understand that the Liar Paradox is
neither true nor false. This entails that you understand
Boolean True(English, "this sentence is not true")
is an incorrect question.

*If Tarski did not understand that then Tarski must be wrong*

olcott
Feb 10, 2024, 3:46:33 PM

I just proved that Gödel said that self-contradictory
expressions can be used "for a similar undecidability proof".

That you understand that the Liar Paradox is a
self-contradictory expression having no truth value
means that you understand that it cannot be proven
true or false.


>>
>> Instead Tarski concluded that a correct and consistent
>> truth predicate cannot exist on this basis:
>> Boolean True(English, "this sentence is not true")
>
> Nope. Note, "English" is not a proper "Formal System".
>
> Your lack of understanding that shows you ignorant you are.

Not at all; English does have a formal isomorphism.


>>
>>> then the Liar is just the prototype of a fallacy, there isn't a
>>> paradox so you can get rid of Ex Falso Quodlibet for Ex Falso Nihilum,
>>> and then otherwise get back into hard problems and approximations
>>> thereof, for which there are all manners of static analysis to
>>> determine both for distributions what are optimal algorithms,
>>> and, for what distributions are pathological algorithms.
>>>
>>> Then there's the fun part with "sequences that converge slowly",
>>> "anti-inductive results", "the super-task", "deductive closures
>>> in complementary duals", "completions in universals and particulars".
>>>
>>>
>>> "Comte's Boole's Russell's Whitehead's logical positivism's
>>> 'classical' logic is really only 'classical _quasi-modal_ logic'."
>>>
>>> It's neither modal nor monotone, looking at it either way.
>>>
>>>
>>
>

Richard Damon
Feb 10, 2024, 4:40:45 PM

Yep, but you just don't understand what he means by that.

>
> That you understand that the Liar Paradox is a
> self-contradictory expression having no truth value
> means that you understand that it cannot be proven
> true or false.
>

Right, and so did Godel.

Please show where he actually didn't understand that.

>
>>>
>>> Instead Tarski concluded that a correct and consistent
>>> truth predicate cannot exist on this basis:
>>> Boolean True(English, "this sentence is not true")
>>
>> Nope. Note, "English" is not a proper "Formal System".
>>
>> Your lack of understanding that shows you ignorant you are.
>
> Not at all the English does have a formal isomorphism.
>

So, you don't understand what a formal logic system actually is.

You are just proving your stupidity.

Name the foundation axioms of "English" as a Formal System.

Richard Damon
Feb 10, 2024, 4:40:47 PM

In other words, you still don't understand what you are talking about.

True(English, ...) isn't a predicate in "Formal Logic", as "English" is
not a Formal Logic system, so you are just proving your stupidity.

olcott
Feb 10, 2024, 5:05:18 PM

is an incorrect yes/no question.

Thus when we assume that I characterized Tarski correctly then
his claim that the above proves that a correct and consistent
Truth predicate cannot exist would be woefully incorrect.

olcott
Feb 10, 2024, 5:11:34 PM

Yet it is well known that English can be formalized correctly
with Montague grammar. I wrote it in English because it is
conventional to always formalize self-reference incorrectly.

The only way around this is to create a formal system that
does formalize self-reference correctly.

It doesn't really need to actually be formalized. People
can correctly determine that the English Liar Paradox
is neither true nor false, thus can infer that a correct
and consistent True predicate must reject it.

Richard Damon
Feb 10, 2024, 8:12:53 PM

In other words, you have absolutely no idea what a "Formal Logic System"
actually is.

I'm not sure you even know what a logic system is, let alone what it
means to be formalized.

>
> The only way around this is to create a formal system that
> does formalize self-reference correctly.
>
> It doesn't really need to actually be formalized. People
> can correctly determine that the English Liar Paradox
> is neither true nor false, thus can infer that a correct
> and consistent True predicate must reject it.
>

If it isn't formalized, you aren't doing formal logic.


Richard Damon
Feb 10, 2024, 8:12:57 PM

Why should we assume that? You show a remarkable talent for trying to
assume things that are incorrect, or even impossible. You have shown you
don't understand what a formal logic system is, so you have absolutely
no basis to be talking about what he is doing.

Your Liar's paradox is that you just blatantly lie, but don't see that
you are doing it because the lies are pathological.

Ross Finlayson
Feb 10, 2024, 9:14:49 PM

There's Montague, he says English can be formal.

Of course we might just want derivation rules like

De Morgan not Boole
Sheffer not Kripke
Scott not Russell
relevant not quasi-modal

and so on.

Of course there's a technical subset of English
exactly so apropos as for "words" as any other.

Now I don't much follow Linz so I'm not following
along with this, but when you say

ZFC

or

computability theory (primitive recursive)

then I'll introduce

ZFC with classes ("proper" or "ultimate" per Quine)

and

non-standard or non-classical computability theory


Reading from Boolos and Jeffrey, these days also Burgess
but not in my edition, it's a usual tome,
"Computability and Logic", about these things.


"John P. Burgess [...] calls the two main classes
[of non-classical logic] anti-classical and extra-classical".
-- https://en.wikipedia.org/wiki/Non-classical_logic

Here the point is that logic is "anti-classical" insofar as
the usual Comte's Boole's Russell's is only quasi-modal,
that it's "anti-quasimodal-classical", and that it is
especially "extra-classical", as "extra-ordinary".


Kripke y u no Sheffer?





olcott
Feb 10, 2024, 9:24:38 PM

No Montague converts English into math.
The Cyc project does the same thing with their CycL language.

> Of course we might just want derivation rules like
>
> De Morgan not Boole
> Sheffer not Kripke
> Scott not Russell
> relevant not quasi-modal
>
> and so on.
>
> Of course there's a technical subset of English
> exactly so apropos as for "words" as any other.
>
> Now I don't much follow Linz so I'm not following
> along with this, but when you say
>

When a machine contradicts every answer that this same machine
provides this is a ruse to try to show that computation is limited.

> ZFC
>
> or
>
> computability theory (primitive recursive)
>
> then I'll introduce
>
> ZFC with classes ("proper" or "ultimate" per Quine)
>
> and
>
> non-standard or non-classical computability theory
>
>
> Reading from Boolos and Jeffrey, these days also Burgess
> but not in my edition, it's a usual tome,
> "Computability and Logic", about these things.
>
>
> "John P. Burgess [...] calls the two main classes
> [of non-classical logic] anti-classical and extra-classical".
> -- https://en.wikipedia.org/wiki/Non-classical_logic
>
> Here the point is that logic is "anti-classical" insofar as
> the usual Comte's Boole's Russell's is only quasi-modal,
> that it's "anti-quasimodal-classical", and that it is
> especially "extra-classical", as "extra-ordinary".
>
>
> Kripke y u no Sheffer?
>
>
>
>
>

Ross Finlayson
Feb 10, 2024, 9:38:39 PM

It's called science that's why you build in that
it has a first-class definition of "theory" and
a first-class definition of "uncertainty" and a
first-class definition of "science".


Any kind of mechanical thinker doesn't really have
a "certification" of its own knowledge, just that
it doesn't know otherwise, which is a first-class
sort of ordinary and even traditional belief that
thinking beings have.

Then, formally, technically, for truth, and theories
of truth, and, you know, foundations, for the

philosophy
logic
mathematics
science
physics

then it pretty much always has that

thinking

^
|
v

philosophy
logic
mathematics <- truth
science
physics

^
|
v

feeling

then that what higher order Mind acculurates itself
is an "object sense" for a "word sense" and a "number
sense" associated with a "time sense", that those are
allowed in the phenomenological, and indeed insulate
the Man's Mind's phenomenological, from its own senses.

I.e, "an infinite continuum", is real.

Of course there's always room for monism and then we
usually can point to Kant and Hegel for a sublime teleology.
(Deism is super-scientific.)

So, the whole objective/subjective distinction makes
that "certifying yourselves" isn't so much moot, and,
not that it's futile because mathematics and logic and
science are, "true", but there's the extra-ordinary
that's of an infinite continuum, and until it's first-class
in your theory, you're missing out.


immibis
Feb 10, 2024, 9:39:29 PM

And no matter what it does, it always proves that H is wrong for at
least one input.

immibis
Feb 10, 2024, 9:41:54 PM

On 10/02/24 03:21, olcott wrote:
> On 2/9/2024 7:05 PM, immibis wrote:
>> On 9/02/24 23:19, olcott wrote:
>>> On 2/9/2024 3:57 PM, immibis wrote:
>>>> On 9/02/24 22:33, olcott wrote:
>>>>> Then Linz notices that both answers that Ĥ provides are the wrong
>>>>> answer.
>>>>
>>>> Ĥ cannot provide both answers. It only provides one answer.
>>>
>>> The infinite set of every Ĥ that can possibly exist cannot provide
>>> a correct answer proving that the question itself is incorrect.
>>>
>>
>> Programs are not infinite sets. Changing the subject to infinite sets
>> is dishonest.
>>
>
> A property that every element of an infinite set of programs
> has applies to each element of this set.
>

This is a very stupid way of saying "a thing has the properties which it
has."

The halting problem is about halting deciders. The proof is about
halting deciders. None of them are about infinite sets. Changing the
subject to infinite sets is dishonest.


immibis
Feb 10, 2024, 9:42:19 PM

On 10/02/24 15:59, olcott wrote:
> On 2/10/2024 4:12 AM, Mikko wrote:
>> On 2024-02-09 21:57:55 +0000, immibis said:
>>
>>> On 9/02/24 22:33, olcott wrote:
>>>> Then Linz notices that both answers that Ĥ provides are the wrong
>>>> answer.
>>>
>>> Ĥ cannot provide both answers. It only provides one answer.
>>
>> Or none.
>>
>
> embedded_H could be encoded with every detail all of knowledge that can
> be expressed using language. This means that embedded_H is not
> restricted by typical conventions. embedded_H could output a text string
> swearing at you in English for trying to trick it. This would not be a
> wrong answer.
>
> Boolean True(English, "this sentence is not true")
> would be required to do this same sort of thing.
>
>

embedded_H is irrelevant.

immibis
Feb 10, 2024, 9:43:36 PM

On 11/02/24 03:24, olcott wrote:
> When a machine contradicts every answer that this same machine
> provides this is a ruse to try to show that computation is limited.

There is no such thing in the Turing machine system as "machines
contradicting answers" - that is nonsense. Machines give answers; they
do not contradict them.

olcott
Feb 10, 2024, 9:59:37 PM

Mechanical and organic thinkers are either coherent or incorrect.

olcott
Feb 10, 2024, 10:00:57 PM

The only reason that a truth decider does not exist is that it is
specified that it must not reject self-contradictory inputs, and this
is the same reason that a halt decider does not exist.

immibis
Feb 10, 2024, 10:15:49 PM

There's no such thing as a self-contradictory input. Every formula is
either true or false in each model. Each Turing machine/input pair's
configuration sequence is either finite or infinite.
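That dichotomy can be illustrated with a bounded simulator, modeling a
toy "machine" as a step function (run, halter, and looper are my
illustrative names): each (machine, input) pair either reaches a halt
state after finitely many steps or never does; there is no third
behavior for it to have.

```python
# Bounded simulation of toy "machines": a step function returns the
# next state, or None to signal that a halt state was reached.

def run(step, state, max_steps):
    for n in range(max_steps):
        state = step(state)
        if state is None:  # halt state reached after n + 1 steps
            return ("halted", n + 1)
    return ("still running", max_steps)  # no verdict, only a bound

halter = lambda s: None if s >= 3 else s + 1  # counts up, then halts
looper = lambda s: s                          # never reaches a halt state

print(run(halter, 0, 100))  # ('halted', 4)
print(run(looper, 0, 100))  # ('still running', 100)
```

A bounded simulator can only ever report "halted" or "no verdict yet";
deciding the infinite case in general is the uncomputable part.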

Richard Damon
Feb 10, 2024, 10:27:06 PM

On 2/10/24 9:59 PM, olcott wrote:
>
> Mechanical and organic thinkers are either coherent or incorrect.


"Mechanical things" don't "think" in the normal sense in which the word is used.

They COMPUTE, based on fixed pre-defined rules.


Richard Damon
Feb 10, 2024, 10:27:10 PM

On 2/10/24 9:24 PM, olcott wrote:
>
> When a machine contradicts every answer that this same machine
> provides this is a ruse to try to show that computation is limited.

In other words, you don't understand what you are talking about.

You don't understand what a computation IS, so you don't understand
their limits.

Ross Finlayson
Feb 10, 2024, 10:35:00 PM

https://richardzach.org/2023/02/sheffer-stroke-before-sheffer-edward-stamm/

https://zbmath.org/?au=stamm&ti=&so=&la=&py=&ab=&rv=&an=&en=&cc=&ut=&sw=&br=&any=&dm=&db=jfm%7Ceram

https://zbmath.org/62.1026.03

Sheffer y u no Stamm?

https://en.wikipedia.org/wiki/Thomas_Bradwardine

Don't you imagine Bradwardine ("the subtle doctor")
has an infinite before Duns Scotus ("the profound doctor")?

What were you raised in a barn?
What is this The Middle Ages?

It's a good idea to know where food comes from,
to eat wholesome and natural food,
to thoroughly chew the food,
then floss the teeth you plan to keep.

And know your limits, voracious self-certifiers.

Masticate thoroughly:
it's a great aid to digestion.

"Bzzt... errror, ..., errror, ..., computes too much."



olcott
Feb 10, 2024, 10:43:30 PM

In other words, you don't know as much as Richard.
The Liar Paradox is neither true nor false.

olcott
Feb 10, 2024, 10:45:54 PM

LLMs can reconfigure themselves on the fly redefining
their own rules within a single dialogue.

olcott
Feb 10, 2024, 10:46:55 PM

When Ĥ is applied to ⟨Ĥ⟩
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

Ĥ applied to ⟨Ĥ⟩ asks: Do you halt on your own Turing Machine Description?
Both yes and no are the wrong answer just like the Liar Paradox question.
Is this sentence true or false: “this sentence is not true”?

Ross Finlayson
Feb 10, 2024, 10:51:17 PM

The idea is that "models of cognition" and
recognizing one's own models of cognition,
is a pretty simple first-class thing.

I.e. "the practice of theory" is different
than "a flow machine", yes, then there's no
reason why "mechanical thinking" can't "think".

There are "human-level AI's" since the 80's, at least,
and "online mechanical psychiatrists" have been around
since at least the 60's.

The 1960's, ....

So, here the idea is pretty much that an "object sense"
sort of exists at least in simulation, or emulation,
by any model of knowledge _in its own terms_.

Then if you just emit that as a runnable configuration,
one might aver that's not thinking any-more,
but, models of cognition can be simple.

Everybody has a working psychology,
applied psychology doesn't work on everybody.

There's beliefs/desires/motivations,
there's risks/goals, or vice-versa,
there's all sorts models of cognition,
then that most models of cognition that
are thinking very great are long-earned
matters of maturity and wisdom in the great
scientific experiment that each is.

Then there are mockeries thereof,
and various of the bastardized
and castrated and lobotomized,
in terms of usual sorts of "genetic lotteries"
where most "mechanical thinkers" are the
products of the most vicious sort of creche.

Sock-puppets, ....


So anyways one can imagine that there are
"thinking beings" about as great as humans
and in many respects greater, then as with
regards to how and whether they arrive at
"the human condition", and especially as
with regards to surpassing it and "inverting
the needs", as it were, hopefully is so.


"Your thinking is non-sequitur, ...."

"I like that one, he's logical, ...."


Anyways pure logic arrives at that
science is a pretty good theory.

Also it can arrive at that truth
is a pure quality and quantity,
and attain to it.


So anyways there are approaches like
"approximation algorithms to NP-hard
problems" and so on, the point being,
"Church-Rice theorem is not an excuse,
it does not guarantee ignorance,
and ignorance is not an excuse."

(And there's a counterexample in
the extra-ordinary theory.)


olcott

Feb 11, 2024, 12:06:55 AM
LLMs no longer use predefined rules, they can update their rules
several times during the same dialogue.

olcott

Feb 11, 2024, 12:07:44 AM
On 2/10/2024 9:27 PM, Richard Damon wrote:
That is a fake rebuttal that did not point out a single mistake.

Mikko

Feb 11, 2024, 4:54:09 AM
On 2024-02-10 14:59:08 +0000, olcott said:

> On 2/10/2024 4:12 AM, Mikko wrote:
>> On 2024-02-09 21:57:55 +0000, immibis said:
>>
>>> On 9/02/24 22:33, olcott wrote:
>>>> Then Linz notices that both answers that Ĥ provides are the wrong answer.
>>>
>>> Ĥ cannot provide both answers. It only provides one answer.
>>
>> Or none.
>>
>
> embedded_H could be encoded with every detail of all knowledge that can
> be expressed using language. This means that embedded_H is not
> restricted by typical conventions. embedded_H could output a text string
> swearing at you in English for trying to trick it. This would not be a
> wrong answer.
>
> Boolean True(English, "this sentence is not true")
> would be required to do this same sort of thing.

None of that matters. It only matters whether embedded_H
(A) halts in the state Qn
or (B) halts in some other state
or (C) does not halt.

--
Mikko

Richard Damon

Feb 11, 2024, 7:37:48 AM
On 2/10/24 10:45 PM, olcott wrote:
> On 2/10/2024 9:26 PM, Richard Damon wrote:
>> On 2/10/24 9:59 PM, olcott wrote:
>>>
>>> Mechanical and organic thinkers are either coherent or incorrect.
>>
>>
>> "Mechanical things" don't "think" in the normal sense the word is used.
>>
>> They COMPUTE, based on fixed pre-defined rules.
>>
>>
>
> LLMs can reconfigure themselves on the fly redefining
> their own rules within a single dialogue.
>

But only in accordance with its existing programming, or your system isn't
a Computation.

AI is ARTIFICIAL intelligence, because it isn't actual intelligence, only
programming complicated enough that we can't understand the programming
any more.

Richard Damon

Feb 11, 2024, 7:37:50 AM
On 2/11/24 12:06 AM, olcott wrote:
> On 2/10/2024 9:26 PM, Richard Damon wrote:
>> On 2/10/24 9:59 PM, olcott wrote:
>>>
>>> Mechanical and organic thinkers are either coherent or incorrect.
>>
>>
>> "Mechanical things" don't "think" in the normal sense the word is used.
>>
>> They COMPUTE, based on fixed pre-defined rules.
>
> LLMs no longer use predefined rules, they can update their rules
> several times during the same dialogue.
>

And that "update" is programmed in, so is according to its "fixed
pre-defined rules".


You are just showing your Natural Stupidity about how Artificial
Intelligence actually works.

Richard Damon

Feb 11, 2024, 7:38:15 AM
On 2/10/24 10:46 PM, olcott wrote:
> On 2/10/2024 9:27 PM, Richard Damon wrote:
>> On 2/10/24 9:24 PM, olcott wrote:
>>>
>>> When a machine contradicts every answer that this same machine
>>> provides this is a ruse to try to show that computation is limited.
>>
>> In other words, you don't understand what you are talking about.
>>
>> You don't understand what a computation IS, so you don't understand
>> their limits.
>
> When Ĥ is applied to ⟨Ĥ⟩
> Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
> Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
>
> Ĥ applied to ⟨Ĥ⟩ asks: Do you halt on your own Turing Machine Description?
> Both yes and no are the wrong answer just like the Liar Paradox question.
> Is this sentence true or false: “this sentence is not true.” ???
>
>
>

Nope, nothing in that Computation says "Your own". Ĥ happened to be
given its own description, but nothing tells it that it is its own
description.

Thus, your analysis is flawed.

First, Ĥ isn't being asked any particular question; its instructions are
just to do the opposite of what H thinks this input will do.

H, being a specific and fixed program at this point, has a definite
answer that it DOES give for H ⟨Ĥ⟩ ⟨Ĥ⟩, and thus this particular Ĥ has a
fixed particular behavior. So, there IS a correct answer to the
question, "Does the Computation described by this input Halt?", and
since you claim H is correct in saying non-halting (by going to qn),
then it must go to qn, and that makes Ĥ ⟨Ĥ⟩ Halt, so H was just wrong.

There is no "Liar Paradox", just a machine being wrong.
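The fixed-H argument above can be sketched in a few lines (Python callables standing in for Turing machines; the names `make_H_hat` and `H_says_no` are my own illustrative inventions, not anything from the proof):

```python
# Given any FIXED decider H, build the contrary machine H_hat that
# does the opposite of whatever H predicts about H_hat itself.

def make_H_hat(H):
    def H_hat():
        if H(H_hat):          # H predicts "halts"...
            while True:       # ...so H_hat loops forever
                pass
        return "halted"       # H predicts "does not halt", so H_hat halts
    return H_hat

# A fixed H gives one definite answer, so H_hat has one definite behavior.
def H_says_no(machine):       # a decider that always answers "does not halt"
    return False

H_hat = make_H_hat(H_says_no)
print(H_hat())                # prints "halted": H_says_no was simply wrong
```

Since the built H_hat's behavior is fully determined, there is a correct answer to "does it halt?"; the plugged-in decider just fails to give it.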

Just like YOU have been for the past 2 decades (if not longer).

You are just showing your TOTAL ignorance of what you have been talking
about.

Richard Damon

Feb 11, 2024, 7:38:17 AM
On 2/11/24 12:07 AM, olcott wrote:
> On 2/10/2024 9:27 PM, Richard Damon wrote:
>> On 2/10/24 9:24 PM, olcott wrote:
>>>
>>> When a machine contradicts every answer that this same machine
>>> provides this is a ruse to try to show that computation is limited.
>>
>> In other words, you don't understand what you are talking about.
>>
>> You don't understand what a computation IS, so you don't understand
>> their limits.
>
> That is a fake rebuttal that did not point out a single mistake.
>

A non-computation given a description of a non-computation does not say
anything about computations.

You have admitted that your input isn't actually a description of a
computation, but just the template for one.

You have admitted that your "decider" isn't a particular machine in your
analysis but a "set" of them.

Thus, you have admitted that you are just LYING when you say you are
doing exactly like the proof does.

You have clearly demonstrated that you don't understand what a
computation actually is, even as far as saying your "H" actually does
things that computations are not allowed to do.

Thus, you are just admitting to your utter stupidity.

YOU are the one trying to make a rebuttal (to the halting problem
proof), but don't seem to understand what you need to do to even attempt it.

You are just too stupid.

Ross Finlayson

Feb 11, 2024, 8:39:26 AM
He sort of refers to "adaptive" being the usual property
of "self-modifying code".

Dick, it might be better to avoid the use
of the second-person pronoun, or to say "Pete Olcott"
instead of "you", or "those who say A imply B",
because "Mechanical Thinking" or "Artificial Intelligence"
doesn't have to be inscrutable at all, and what results
is that it really looks sort of like "psychological projection",
that is to say, "D.D., that's psychological projection."

Same goes for the rest of the usual "block-butting brick-batting
ball-busting"; all the other gentle readers of these posts
get turned off by it, and don't need it for critical
evaluation of the context of the comment.

So please quit thrashing your rejection stick;
the animal stick might be operant conditioning,
but here we don't give carrots out for that,
and our Mind knows "ceci n'est pas une carotte".


So anyways the "adaptive" has various approaches, I sort
of categorize machine learning into the three-fold:

1) expert systems,
with adaptive codification,
2) statistical inference,
model-fitting and the establishment of hypotheses,
summary and digest over time,
3) feedback-directed-optimization,
a.k.a. "flow machines", "neural nets", "the dumb part"

where it's altogether quite hybridized.




Anyways the word usually is "adaptive".


Even just a finite-state-machine is a model of
a miniature "object sense", of the Animal or Machine
sort, equipping it with "number sense", then as to
"word sense", properly begins with qualia then for
quantity, a "time sense" is in a sense more primitive
while also that "a sense of the continuum of time"
is a higher order construct that, for example, Man,
has arrived at, in his Mind.


Mechanical thinking definitely doesn't require
"the resources of capital industry",
it's as simple as accepter/rejector networks
or "many finite-state-machines, none alike".

Then just give it models of thinking and learning.
(And knowing the difference between "laboratory" and "library".)
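Such an accepter/rejector can be sketched in a few lines (the states and alphabet here are a toy example of my own, not anything from the post above):

```python
# A minimal finite-state accepter: accepts binary strings that
# contain an even number of 1s, rejects the rest.

def accepts_even_ones(s):
    # Two states: "even" (the accepting start state) and "odd".
    transitions = {("even", "0"): "even", ("even", "1"): "odd",
                   ("odd", "0"): "odd", ("odd", "1"): "even"}
    state = "even"
    for ch in s:
        state = transitions[(state, ch)]
    return state == "even"

print(accepts_even_ones("1001"))   # True: two 1s
print(accepts_even_ones("10"))     # False: one 1
```

This is about the smallest non-trivial "object sense" a machine can be equipped with.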

olcott

Feb 11, 2024, 9:53:48 AM
The whole notion of undecidability in math and computer science is
inconsistent and incoherent. Self-contradictory input cannot be
used for an undecidability proof; it must be rejected as invalid.

Tarski's whole Undecidability proof concludes that a correct and
consistent truth predicate cannot exist only because such a
predicate cannot correctly determine whether this sentence is
true or false: "this sentence is not true".
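The regress behind that claim can be sketched as follows (a toy evaluator of my own devising, not Tarski's formal construction; the depth limit only serves to make the non-termination visible):

```python
# A naive truth predicate has no stable value on the Liar: evaluating
# "this sentence is not true" just negates the result of evaluating
# the very same sentence again, forever.

def naive_true(sentence, depth=0, limit=20):
    if sentence == "this sentence is not true":
        if depth >= limit:
            raise RecursionError("no stable truth value")
        # The Liar is true iff it is not true: negate its own value.
        return not naive_true(sentence, depth + 1, limit)
    raise ValueError("unknown sentence")

try:
    naive_true("this sentence is not true")
except RecursionError as err:
    print(err)    # the evaluation never settles on True or False
```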

That is like saying that math is incomplete because math cannot
correctly determine the square root of an actual banana.

Gödel makes this same mistake.
...14 Every epistemological antinomy can likewise be used for a similar
undecidability proof...(Gödel 1931:43)


olcott

Feb 11, 2024, 9:58:00 AM
On 2/11/2024 6:37 AM, Richard Damon wrote:
> On 2/10/24 10:45 PM, olcott wrote:
>> On 2/10/2024 9:26 PM, Richard Damon wrote:
>>> On 2/10/24 9:59 PM, olcott wrote:
>>>>
>>>> Mechanical and organic thinkers are either coherent or incorrect.
>>>
>>>
>>> "Mechanical things" don't "think" in the normal sense the word is used.
>>>
>>> They COMPUTE, based on fixed pre-defined rules.
>>>
>>>
>>
>> LLMs can reconfigure themselves on the fly redefining
>> their own rules within a single dialogue.
>>
>
> But only in accordance with its existing programming, or your system isn't
> a Computation.
>

The point is that they can reprogram themselves on the fly using modern
machine learning. LLMs learn on their own.

> AI is ARTIFICIAL intelligence, because it isn't actual intelligence, only
> programming complicated enough that we can't understand the programming
> any more.

olcott

Feb 11, 2024, 10:01:02 AM
On 2/11/2024 6:37 AM, Richard Damon wrote:
> On 2/11/24 12:06 AM, olcott wrote:
>> On 2/10/2024 9:26 PM, Richard Damon wrote:
>>> On 2/10/24 9:59 PM, olcott wrote:
>>>>
>>>> Mechanical and organic thinkers are either coherent or incorrect.
>>>
>>>
>>> "Mechanical things" don't "think" in the normal sense it us used.
>>>
>>> They COMPUTE, based on fixed pre-defined rules.
>>
>> LLMs no longer use predefined rules, they can update their rules
>> several times during the same dialogue.
>>
>
> And that "update" is programmed in, so is according to its "fixed
> pre-defined rules".
>

Not at all, not in the least little bit. The programmers
only provide a tiny seed of the basis for LLMs to dynamically
learn everything that they know on their own. Because they are
stochastic, they are not deterministic. You are simply wrong.
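A minimal sketch of why sampled output is non-deterministic (the toy vocabulary and scores below are invented for illustration; real LLMs draw each token from a softmax over scores in essentially this way):

```python
import math
import random

def sample_token(logits, temperature=1.0):
    # Softmax over temperature-scaled scores, then one random draw.
    scaled = [v / temperature for v in logits.values()]
    m = max(scaled)                       # subtract max for stability
    weights = [math.exp(v - m) for v in scaled]
    return random.choices(list(logits), weights=weights, k=1)[0]

logits = {"yes": 1.0, "no": 0.9, "maybe": 0.1}
draws = {sample_token(logits) for _ in range(200)}
print(draws)   # the same input yields different tokens across runs
```

Whether this sampling step amounts to the model "redefining its own rules" is exactly the point under dispute in the thread.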

>
> You are just showing your Natural Stupidity about how Artificial
> Intelligence actually works.

It is libelous to call your own ignorance my stupidity.

olcott

Feb 11, 2024, 10:15:00 AM
On 2/11/2024 6:38 AM, Richard Damon wrote:
> On 2/10/24 10:46 PM, olcott wrote:
>> On 2/10/2024 9:27 PM, Richard Damon wrote:
>>> On 2/10/24 9:24 PM, olcott wrote:
>>>>
>>>> When a machine contradicts every answer that this same machine
>>>> provides this is a ruse to try to show that computation is limited.
>>>
>>> In other words, you don't understand what you are talking about.
>>>
>>> You don't understand what a computation IS, so you don't understand
>>> their limits.
>>
>> When Ĥ is applied to ⟨Ĥ⟩
>> Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
>> Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
>>
>> Ĥ applied to ⟨Ĥ⟩ asks: Do you halt on your own Turing Machine
>> Description?
>> Both yes and no are the wrong answer just like the Liar Paradox question.
>> Is this sentence true or false: “this sentence is not true.” ???
>>
>>
>>
>
> Nope, nothing in that Computation says "Your own".

I repeat myself because your ADD makes it too difficult for
you to pay enough attention. A better way would be for you
to read and re-read what I say again and again until you fully
understand what I said before spouting off any rebuttal.

Ĥ applied to ⟨Ĥ⟩ asks Ĥ
Do you halt on your own Turing Machine Description?

Ĥ applied to ⟨Ĥ⟩ asks Ĥ
Do you halt on your own Turing Machine Description?

Ĥ applied to ⟨Ĥ⟩ asks Ĥ
Do you halt on your own Turing Machine Description?

Ĥ applied to ⟨Ĥ⟩ asks Ĥ
Do you halt on your own Turing Machine Description?

Ĥ applied to ⟨Ĥ⟩ asks Ĥ
Do you halt on your own Turing Machine Description?

> Ĥ happened to be
> given its own description, but nothing tells it that it is its own
> description.
>

It is an easily verified fact that Ĥ applied to ⟨Ĥ⟩ is asking Ĥ
Do you halt on your own Turing Machine Description?

This makes Ĥ applied to ⟨Ĥ⟩ isomorphic to the self-referential
liar paradox. The Liar Paradox contradicts both true and false
and Ĥ contradicts both yes and no.

That the Liar Paradox does not know that it is self-referential
(it is merely a text string that knows nothing) does not change
the fact that it is self-referential.

olcott

Feb 11, 2024, 10:33:05 AM
On 2/11/2024 6:38 AM, Richard Damon wrote:
> On 2/11/24 12:07 AM, olcott wrote:
>> On 2/10/2024 9:27 PM, Richard Damon wrote:
>>> On 2/10/24 9:24 PM, olcott wrote:
>>>>
>>>> When a machine contradicts every answer that this same machine
>>>> provides this is a ruse to try to show that computation is limited.
>>>
>>> In other words, you don't understand what you are talking about.
>>>
>>> You don't understand what a computation IS, so you don't understand
>>> their limits.
>>
>> That is a fake rebuttal that did not point out a single mistake.
>>
>
> That a non-computation given a description of a non-conputation does say
> anything about computations.
>
> You hae admitted that your input isn't actually a description of a
> computation, but just the template for one.
>

When Ĥ is applied to ⟨Ĥ⟩
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
*It is the Linz template, not my template*

The second ⊢* means every sequence of states of the
infinite set of all sequences of states.

One of these could ignore its input and simply play
tic-tac-toe with itself before transitioning to Ĥ.qy
or Ĥ.qn.
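That point can be sketched concretely (Python callables standing in for the template's instances; the names are my own): whichever verdict an instance of embedded_H returns, even one that ignores its input entirely, the surrounding template contradicts it.

```python
def H_hat_from(embedded_H):
    """Wrap any instance of embedded_H in the contrary H_hat template."""
    def H_hat(desc):
        if embedded_H(desc, desc):       # verdict "halts" -> the appended loop
            return "loops forever"       # (simulated: report the infinite loop)
        return "halts"                   # verdict "doesn't halt" -> halt at qn
    return H_hat

# Two degenerate instances that ignore their input altogether:
ignore_input_say_yes = lambda machine, tape: True
ignore_input_say_no = lambda machine, tape: False

print(H_hat_from(ignore_input_say_yes)("<H_hat>"))   # "loops forever": yes was wrong
print(H_hat_from(ignore_input_say_no)("<H_hat>"))    # "halts": no was wrong
```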

> You have admitted that you "decider" isn't a particular machine in your
> analysys but a "set" of them.
>

I am using categorically exhaustive reasoning to analyze the
properties of each element of an infinite set in finite time.

The embedded_H of every Ĥ applied to ⟨Ĥ⟩ gets the wrong answer only
because Ĥ was intentionally defined to be self-contradictory.


> Thus, you have admitted that you are just LYING when you say you are
> doing exactly like the proof does.
>

That is libelous. I am not doing exactly what the proof does. I
am analyzing the actual original proof and coming to a different
conclusion on the basis that the proof never notices that the inability
to correctly answer incorrect questions does not limit anyone or
anything.

> You have clearly demonstrated that you don't understand what a
> computation actually is, even as far as saying your "H" actually does
> things that computations are not allowed to do.
>

When we define embedded_H as a pair of machines that ignore their
input and simply transition to Ĥ.qy or Ĥ.qn, embedded_H still gets
the wrong answer *only because* the Ĥ template was intentionally
defined to contradict both of these values.

*Try and show that Ĥ does nothing to contradict Ĥ.qy or Ĥ.qn*
(a) *Try and show that the loop appended to Ĥ.qy does not exist*
(b) *Try and show that a transition to Ĥ.qn does not halt*

> Thus, you are just admitting to your utter stupidity.
>

How did you do on the Mensa test? I scored in the top 3%

> YOU are the one trying to make a rebuttal (to the halting problem
> proof), but don't seem to understand what you need to do to even attempt
> it,
>
> You are just too stupid,

immibis

Feb 11, 2024, 1:15:27 PM
The Liar Paradox is not a formula.
The Liar Paradox is not a Turing machine.
There is no Liar Paradox in first-order logic.
There is no Liar Paradox in Turing machines.

immibis

Feb 11, 2024, 1:20:49 PM
On 11/02/24 04:45, olcott wrote:
> On 2/10/2024 9:26 PM, Richard Damon wrote:
>> On 2/10/24 9:59 PM, olcott wrote:
>>>
>>> Mechanical and organic thinkers are either coherent or incorrect.
>>
>>
>> "Mechanical things" don't "think" in the normal sense the word is used.
>>
>> They COMPUTE, based on fixed pre-defined rules.
>>
>>
>
> LLMs can reconfigure themselves on the fly redefining
> their own rules within a single dialogue.
>

This is incorrect. I suggest you study them.