Google Groups no longer supports new Usenet posts or subscriptions. Historical content remains viewable.

Simulating (partial) Halt Deciders Defeat the Halting Problem Proofs


olcott

Apr 12, 2023, 2:27:47 PM
A simulating halt decider correctly predicts whether or not its
correctly simulated input can possibly reach its own final state and
halt. It does this by correctly recognizing several non-halting behavior
patterns in a finite number of steps of correct simulation. Inputs that
do terminate are simply simulated until they complete.
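The strategy described above can be sketched as a toy in C. Everything here (the state-number machine model, `decide`, and the example inputs) is invented for illustration and is far simpler than any real simulating halt decider; it shows only the general shape: simulate step by step, report non-halting when a known non-halting pattern (here, a repeated configuration) is matched, and otherwise simulate to completion.

```c
/* Toy model: a deterministic machine is a step function from state to
   next state; a negative state is a final (halting) state. */
typedef int (*step_fn)(int state);

#define NSTATES 100   /* states are assumed to lie in [0, NSTATES) */

/* Simulate; return 1 if the input halts, 0 if a non-halting pattern
   (a repeated configuration) is recognized in finitely many steps. */
int decide(step_fn step, int state) {
    int seen[NSTATES] = {0};
    for (;;) {
        if (state < 0) return 1;      /* reached a final state: halts */
        if (seen[state]) return 0;    /* same configuration again: loops */
        seen[state] = 1;
        state = step(state);
    }
}

/* Example inputs: one halts, one cycles forever. */
int counts_down(int s) { return s > 0 ? s - 1 : -1; }
int cycles(int s) { return (s + 1) % 3; }
```

For a deterministic machine with finitely many configurations a repeated configuration genuinely proves non-halting; the dispute in this thread is whether any such pattern argument carries over to the self-referential input.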

When a simulating halt decider correctly simulates N steps of its input
it derives the exact same N steps that a pure UTM would derive because
it is itself a UTM with extra features.

My reviewers cannot show that any of the extra features added to the UTM
change the behavior of the simulated input for the first N steps of
simulation:
-- Watching the behavior doesn't change it.
-- Matching non-halting behavior patterns doesn't change it
-- Even aborting the simulation after N steps doesn't change the first
N steps.

Because of all this we can know that the first N steps of input D
simulated by simulating halt decider H are the actual behavior that D
presents to H for these same N steps.

computation that halts… “the Turing machine will halt whenever it enters
a final state” (Linz:1990:234)

When we see (after N steps) that D correctly simulated by H cannot
possibly reach its simulated final state in any finite number of steps
of correct simulation then we have conclusive proof that D presents non-
halting behavior to H.



*Simulating (partial) Halt Deciders Defeat the Halting Problem Proofs*
https://www.researchgate.net/publication/369971402_Simulating_partial_Halt_Deciders_Defeat_the_Halting_Problem_Proofs


--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

olcott

Apr 18, 2023, 1:00:37 AM
A simulating halt decider correctly predicts whether or not its
correctly simulated input can possibly reach its own final state and
halt. It does this by correctly recognizing several non-halting behavior
patterns in a finite number of steps of correct simulation. Inputs that
do terminate are simply simulated until they complete.

When a simulating halt decider correctly simulates N steps of its input
it derives the exact same N steps that a pure UTM would derive because
it is itself a UTM with extra features.

My reviewers cannot show that any of the extra features added to the UTM
change the behavior of the simulated input for the first N steps of
simulation:
(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it
(c) Even aborting the simulation after N steps doesn't change the first
N steps.

Because of all this we can know that the first N steps of input D
simulated by simulating halt decider H are the actual behavior that D
presents to H for these same N steps.

*computation that halts*… “the Turing machine will halt whenever it
enters a final state” (Linz:1990:234)

Richard Damon

Apr 18, 2023, 7:32:15 AM
On 4/18/23 1:00 AM, olcott wrote:
> A simulating halt decider correctly predicts whether or not its
> correctly simulated input can possibly reach its own final state and
> halt. It does this by correctly recognizing several non-halting behavior
> patterns in a finite number of steps of correct simulation. Inputs that
> do terminate are simply simulated until they complete.
>


Except it doesn't do this for the "pathological" program.

The "Pathological Program", when built on such a Decider that does give
an answer (which you say will be non-halting), and then "Correctly
Simulated" by giving its representation to a UTM, is seen to reach a
final state in that simulation.

Thus, your H was WRONG to give that answer. And the problem is you have
added a pattern that isn't always non-halting.
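Richard's objection can be made concrete with a toy C sketch (the names `H`, `D`, `fptr`, and `d_halted` are all stand-ins invented here, not anyone's actual decider): stub H to answer 0, "non-halting", as is claimed for H(D,D), and the pathological D then does the opposite and halts.

```c
typedef void (*fptr)();   /* generic function pointer so D can take itself */

static int d_halted = 0;

/* Stub standing in for the claimed H: always answers 0, i.e.
   "non-halting" -- the answer asserted for H(D,D). */
int H(fptr p, fptr i) { (void)p; (void)i; return 0; }

/* The pathological program: do the opposite of H's prediction. */
void D(fptr x) {
    if (H(x, x))
        for (;;) { }      /* H said "halts": run forever */
    d_halted = 1;         /* H said "non-halting": halt at once */
}
```

Calling `D((fptr)D)` sets `d_halted`: the directly executed D(D) halts precisely because H(D,D) said it would not, which is the contradiction the halting problem proofs turn on.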

> When a simulating halt decider correctly simulates N steps of its input
> it derives the exact same N steps that a pure UTM would derive because
> it is itself a UTM with extra features.

But it ISN'T a "UTM" any more, because some of the features you added
have removed essential features needed for it to be an actual UTM. That
you make this claim shows you don't actually know what a UTM is.

This is like saying a NASCAR Racing Car is a Street Legal vehicle, since
it started as one and just had some extra features added.

>
> My reviewers cannot show that any of the extra features added to the UTM
> change the behavior of the simulated input for the first N steps of
> simulation:
> (a) Watching the behavior doesn't change it.
> (b) Matching non-halting behavior patterns doesn't change it
> (c) Even aborting the simulation after N steps doesn't change the first
> N steps.

No one claims that it doesn't correctly reproduce the first N steps of
the behavior; that is a Strawman argument.

>
> Because of all this we can know that the first N steps of input D
> simulated by simulating halt decider H are the actual behavior that D
> presents to H for these same N steps.
>
> *computation that halts*… “the Turing machine will halt whenever it
> enters a final state” (Linz:1990:234)

Right, so we are concerned about the behavior of the ACTUAL machine, not
a partial simulation of it.
H(D,D) returns non-halting, but D(D) Halts, so the answer is wrong.

>
> When we see (after N steps) that D correctly simulated by H cannot
> possibly reach its simulated final state in any finite number of steps
> of correct simulation then we have conclusive proof that D presents non-
> halting behavior to H.

But it isn't "Correctly Simulated by H" since this H never does a
correct simulation of the sort that determines halting or not (that of
an ACTUAL UTM, which never aborts until it reaches the end).

Since H DOES abort its simulation, the changing of "H" to be a UTM
instead, is just saying that H doesn't actually process the input that
was given it, and thus it gets the wrong answer.

>
> *Simulating (partial) Halt Deciders Defeat the Halting Problem Proofs*
> https://www.researchgate.net/publication/369971402_Simulating_partial_Halt_Deciders_Defeat_the_Halting_Problem_Proofs
>

Which is full of UNSOUND logic and Strawman arguments.

You aren't allowed to change the input, so you can't change the H that D
uses.

You have been repeated told this, and yet you still repeat it. This
shows you have no capability of learning, and that you are totally
ignorant of the things you are talking about.

You have buried your reputation by all your lies and fabrications.

olcott

Apr 18, 2023, 11:59:01 AM
You agreed that the first N steps are correctly simulated.

It turns out that the non-halting behavior pattern is correctly
recognized in the first N steps.

Mr Flibble

Apr 18, 2023, 5:55:10 PM
Your assumption that a program that calls H is non-halting is erroneous:

void Px(void (*x)())
{
    (void) H(x, x);
    return;
}

Px halts (it discards the result that H returns); your decider thinks
that Px is non-halting which is an obvious error due to a design flaw in
the architecture of your decider. Only the Flibble Signaling Simulating
Halt Decider (SSHD) correctly handles this case.

/Flibble
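Px can be exercised against a stub decider to see Flibble's point (the stub H and the `px_returned` flag are assumptions added for illustration; the outcome is the same for any H that returns at all):

```c
typedef void (*fptr)();

static int px_returned = 0;

/* Stub decider: the value it returns is irrelevant to Px, which
   discards it; all that matters is that H returns in finite time. */
int H(fptr x, fptr y) { (void)x; (void)y; return 0; }

void Px(fptr x)
{
    (void) H(x, x);     /* halt-status result discarded */
    px_returned = 1;    /* reached on every path: Px always halts */
}
```

`Px((fptr)Px)` always sets `px_returned`, whatever H answers, so a decider that classifies Px as non-halting has misreported its caller's behavior.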

Richard Damon

Apr 18, 2023, 6:30:22 PM
Nope, the pattern you detect isn't a "Non-Halting" pattern, as is shown
by the fact that D(D) does halt.

It might show that no possible H could simulate the input to a final
state, but that isn't the definition of Halting. Halting is strictly
about the behavior of the machine itself.

olcott

Apr 18, 2023, 6:39:37 PM
My new paper anchors its ideas in actual Turing machines so it is
unequivocal. The first two pages are only about the Linz Turing
machine based proof.

The H/D material is now on a single page and all reference
to the x86 language has been stripped and replaced with
analysis entirely in C.

With this new paper even Richard admits that the first N steps of
UTM-based simulation by a simulating halt decider are necessarily the
actual behavior of these N steps.
> void Px(void (*x)())
> {
>     (void) H(x, x);
>     return;
> }
>
> Px halts (it discards the result that H returns); your decider thinks
> that Px is non-halting which is an obvious error due to a design flaw in
> the architecture of your decider.  Only the Flibble Signaling Simulating
> Halt Decider (SSHD) correctly handles this case.
>
> /Flibble
>

Richard Damon

Apr 18, 2023, 6:50:36 PM
Right, but not halting in N steps is not the same as not halting ever.
Remember, it is the actual machine described by the input that matters,
not the (partial) simulation done by H.

>
> *Simulating (partial) Halt Deciders Defeat the Halting Problem Proofs*
> https://www.researchgate.net/publication/369971402_Simulating_partial_Halt_Deciders_Defeat_the_Halting_Problem_Proofs

Full of ERRORS.
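The point that "not halting within N steps" differs from "never halting" fits in a few lines of C (the toy machine and budgeted simulator below are invented for illustration): a machine that halts on step K looks non-halting to any watcher whose step budget is below K.

```c
enum { K = 5 };   /* the toy machine halts on its K-th step */

/* one step of the toy machine; returns 1 once it has halted */
int step(int *pc) { return ++*pc >= K; }

/* partial simulation: at most `budget` steps;
   1 = observed the halt, 0 = budget exhausted (NOT proof of looping) */
int simulate(int budget) {
    int pc = 0;
    for (int i = 0; i < budget; i++)
        if (step(&pc)) return 1;
    return 0;
}
```

`simulate(K - 1)` exhausts its budget and would report non-halting, while `simulate(K)` observes the halt; for any fixed budget N there is an input that halts only at step N + 1.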

olcott

Apr 18, 2023, 7:13:09 PM
When Ĥ is applied to ⟨Ĥ⟩
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

computation that halts… “the Turing machine will halt whenever it enters
a final state” (Linz:1990:234)

Non-halting behavior patterns can be matched in N steps
⟨Ĥ⟩ Halting is reaching its simulated final state of ⟨Ĥ.qn⟩ in a finite
number of steps

N steps of ⟨Ĥ⟩ correctly simulated by embedded_H are the actual behavior
of this input:
(a) Ĥ.q0 The input ⟨Ĥ⟩ is copied then transitions to embedded_H
(b) embedded_H is applied to ⟨Ĥ⟩ ⟨Ĥ⟩ (input and copy) which simulates
⟨Ĥ⟩ applied to ⟨Ĥ⟩
(c) *which begins at its own simulated ⟨Ĥ.q0⟩ to repeat the process*

The above N steps proves that ⟨Ĥ⟩ correctly simulated by embedded_H
could not possibly reach the final state of ⟨Ĥ.q0⟩ in any finite number
of steps of correct simulation *because ⟨Ĥ⟩ is defined to have*
*a pathological relationship to embedded_H*

That a UTM applied to ⟨Ĥ⟩ ⟨Ĥ⟩ halts shows an entirely different sequence
*because UTM and ⟨Ĥ⟩ ⟨Ĥ⟩ do not have a pathological relationship*

Richard Damon

Apr 18, 2023, 10:31:58 PM
Right and Ĥ (Ĥ) will reach Ĥ.qn and halt if H (Ĥ) (Ĥ) goes to qn, as it
must to be saying that its input is non-halting.

This is because embedded_H and H must be identical machines, and thus do
exactly the same thing when given the same input.

>
> Non-halting behavior patterns can be matched in N steps
> ⟨Ĥ⟩ Halting is reaching its simulated final state of ⟨Ĥ.qn⟩ in a finite
> number of steps

Nope, Halting is the MACHINE Ĥ (Ĥ) reaching its final state Ĥ.qn in a
finite number of steps.

You can also use UTM (Ĥ) (Ĥ), which also reaches that final state,
because it doesn't stop simulating until it reaches a final state, or it
just keeps simulating.

H / embedded_H are NOT a UTM, as they don't have that necessary property.

>
> N steps of ⟨Ĥ⟩ correctly simulated by embedded_H are the actual behavior
> of this input:
> (a) Ĥ.q0 The input ⟨Ĥ⟩ is copied then transitions to embedded_H
> (b) embedded_H is applied to ⟨Ĥ⟩ ⟨Ĥ⟩ (input and copy) which simulates
> ⟨Ĥ⟩ applied to ⟨Ĥ⟩
> (c) *which begins at its own simulated ⟨Ĥ.q0⟩ to repeat the process*

Except you have defined that H, and thus embedded_H, doesn't do (c):
when it sees the attempt to go into embedded_H with the same input it
actually aborts its simulation and goes to Ĥ.qn, which causes the
machine Ĥ to halt.



>
> The above N steps proves that ⟨Ĥ⟩ correctly simulated by embedded_H
> could not possibly reach the final state of ⟨Ĥ.q0⟩ in any finite number
> of steps of correct simulation *because ⟨Ĥ⟩ is defined to have*
> *a pathological relationship to embedded_H*

Nope, H is presuming INCORRECTLY that embedded_H is a UTM, and not a
copy of itself.

>
> That a UTM applied to ⟨Ĥ⟩ ⟨Ĥ⟩ halts shows an entirely different sequence
> *because UTM and ⟨Ĥ⟩ ⟨Ĥ⟩ do not have a pathological relationship*
>

Nope, the correct simulation of the input is the correct simulation of
the input and matches the actual behavior of the machine the input
represents.

H does NOT do an actual "Correct Simulation" but only a PARTIAL
simulation of only N steps, which doesn't prove non-halting behavior.

Your mind is stuck in a pathological loop, because you don't seem to
understand the actual basics of Turing Machines.

olcott

Apr 18, 2023, 10:58:00 PM
embedded_H could do (c) 10,000 times before aborting which would have to
be the actual behavior of the actual input because embedded_H remains in
pure UTM mode until it aborts.

How many times does it take for you to understand that ⟨Ĥ⟩ can't
possibly reach ⟨Ĥ.qn⟩ because of its pathological relationship to
embedded_H ?

Richard Damon

Apr 18, 2023, 11:10:45 PM
No such thing. UTM isn't a "Mode" but an identity.

if embedded_H aborts its simulation, it NEVER was a UTM. PERIOD.

It might be in "Simulation" mode, but it is incorrect to think of it as
actually being a UTM, since it isn't.

That would be like saying you are in "Immortal" mode, until you die.

An Immortal can't die, just like a UTM won't stop simulating until it
reaches a final state.

>
> How many times does it take for you to understand that ⟨Ĥ⟩ can't
> possibly reach ⟨Ĥ.qn⟩ because of its pathological relationship to
> embedded_H ?
>

Then why does it? Either you are lying that embedded_H is an exact copy
of H, or you are lying that H (Ĥ) (Ĥ) goes to qn, or that Ĥ (Ĥ) goes to Ĥ.qn.

There is no other option.

You don't seem to understand that programs do exactly what they are
programmed to do, but not necessarily what you intend them to do.

embedded_H doesn't simulate until it gets the right answer, but does
exactly what H does.

So, if H aborts the first time it sees Ĥ get into embedded_H, then so
does embedded_H, which IS the behavior you claim for the machine H that
gives the "right" answer.


You are just showing how little you understand about what you are talking.

olcott

Apr 18, 2023, 11:21:37 PM
But that is flat out not the truth. The input simulated by embedded_H
necessarily must have the exact same behavior as when simulated by a
pure UTM until the simulation of this input is aborted, because aborting
the simulation of its input is the only one of the three features added
to a UTM that changes the behavior of its input relative to a pure UTM.

(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it.
(c) Even aborting the simulation after N steps doesn't change the first
N steps.

N steps could be 10,000 recursive simulations.

Richard Damon

Apr 18, 2023, 11:35:58 PM
Which makes it NOT a UTM, so embedded_H doesn't actually act like a UTM.

It MUST act like H, or you have LIED about following the requirement for
building Ĥ.

>
> (a) Watching the behavior doesn't change it.
> (b) Matching non-halting behavior patterns doesn't change it.
> (c) Even aborting the simulation after N steps doesn't change the first
> N steps.
>
> N steps could be 10,000 recursive simulations.
>

Right, and then one more recursive simulation by a REAL UTM past that
point will see the outer embedded_H abort its simulation and go to qn,
and Ĥ will then halt, showing embedded_H was wrong to say it couldn't.

Aborted simulations don't, by themselves, show non-halting behavior.

The only case that this doesn't work is if embedded_H actually never
does abort, but then H can't either, so H doesn't answer, and fails to
be a decider.

olcott

Apr 18, 2023, 11:48:21 PM
*You keep slip sliding with the fallacy of equivocation error*
The actual simulated input: ⟨Ĥ⟩ that embedded_H must compute its mapping
from never reaches its simulated final state of ⟨Ĥ.qn⟩ even after 10,000
necessarily correct recursive simulations because ⟨Ĥ⟩ is defined to have
a pathological relationship to embedded_H.

>
> Aborted simulations don't, by themselves, show non-halting behavior.
>
> The only case that this doesn't work is if embedded_H actually never
> does abort, but then H can't either, so H doesn't answer, and fails to
> be a decider.

Richard Damon

Apr 19, 2023, 7:14:11 AM
On 4/18/23 11:48 PM, olcott wrote:

> *You keep slip sliding with the fallacy of equivocation error*
> The actual simulated input: ⟨Ĥ⟩ that embedded_H must compute its mapping
> from never reaches its simulated final state of ⟨Ĥ.qn⟩ even after 10,000
> necessarily correct recursive simulations because ⟨Ĥ⟩ is defined to have
> a pathological relationship to embedded_H.


And YOU keep on falling into your Strawman error. The question is NOT
what does the "simulation by H" show, but what is the actual behavior of
the actual machine the input represents.


H (Ĥ) (Ĥ) is asking about the behavior of Ĥ (Ĥ)

PERIOD
DEFINITION.

When you are looking at the wrong question, you tend to get the wrong
answer.

Looking at the definition of H:

WM is the description of machine M

H WM w -> qy if M w will halt, and to qn if M w will never halt.


Nothing about "H's simulation of the input", just the actual behavior of
the machine described.

You are stuck in your ignorance and think that because H is defined as a
simulator, that somehow changes the requirements. It doesn't.

olcott

Apr 19, 2023, 11:05:56 AM
On 4/19/2023 6:14 AM, Richard Damon wrote:
> On 4/18/23 11:48 PM, olcott wrote:
>
>> *You keep slip sliding with the fallacy of equivocation error*
>> The actual simulated input: ⟨Ĥ⟩ that embedded_H must compute its mapping
>> from never reaches its simulated final state of ⟨Ĥ.qn⟩ even after 10,000
>> necessarily correct recursive simulations because ⟨Ĥ⟩ is defined to have
>> a pathological relationship to embedded_H.
>
>
> An YOU keep on falling into your Strawman error. The question is NOT
> what does the "simulation by H" show, but what is the actual behavior of
> the actual machine the input represents.
>
>

When a simulating halt decider correctly simulates N steps of its input
it derives the exact same N steps that a pure UTM would derive because
it is itself a UTM with extra features.

My reviewers cannot show that any of the extra features added to the UTM
change the behavior of the simulated input for the first N steps of
simulation:
(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it
(c) Even aborting the simulation after N steps doesn't change the first
N steps.

The actual behavior that the actual input: ⟨Ĥ⟩ represents is the
behavior of the simulation of N steps by embedded_H because embedded_H
has the exact same behavior as a UTM for these first N steps, and you
already agreed with this.

Did you quit believing in UTMs?

Mr Flibble

Apr 19, 2023, 2:47:30 PM
Nope. For H to be a halt decider it must return a halt decision to its
caller in finite time and as Px discards this result and exits, Px
ALWAYS halts. That your H instead returns a result of non-halting for
Px shows us that your halt decider is invalid. Only
the Flibble Signaling Simulating Halt Decider (SSHD) is a solution to
the halting problem.

/Flibble

olcott

Apr 19, 2023, 3:40:03 PM
Although H must always return to some caller H is not allowed to return
to any caller that essentially calls H in infinite recursion.

Mr Flibble

Apr 19, 2023, 4:32:59 PM
The Flibble Signaling Simulating Halt Decider (SSHD) does not have any
infinite recursion, thereby proving that such recursion is not a
necessary feature of SHDs invoked from the program being analyzed; the
infinite recursion in your H is present because your H has a critical
design flaw.

/Flibble

olcott

Apr 19, 2023, 5:10:44 PM
It overrode the behavior that was specified by the machine code for Px.

> such recursion is not a
> necessary feature of SHDs invoked from the program being analyzed, the
> infinite recursion in your H is present because your H has a critical
> design flaw.
>
> /Flibble

Mr Flibble

Apr 19, 2023, 5:14:38 PM
Nope. Your SHD is not a halt decider, as it has a critical design flaw:
it doesn't correctly report that Px halts.

/Flibble.

Richard Damon

Apr 19, 2023, 6:49:23 PM
On 4/19/23 11:05 AM, olcott wrote:
> On 4/19/2023 6:14 AM, Richard Damon wrote:
>> On 4/18/23 11:48 PM, olcott wrote:
>>
>>> *You keep slip sliding with the fallacy of equivocation error*
>>> The actual simulated input: ⟨Ĥ⟩ that embedded_H must compute its mapping
>>> from never reaches its simulated final state of ⟨Ĥ.qn⟩ even after 10,000
>>> necessarily correct recursive simulations because ⟨Ĥ⟩ is defined to have
>>> a pathological relationship to embedded_H.
>>
>>
>> An YOU keep on falling into your Strawman error. The question is NOT
>> what does the "simulation by H" show, but what is the actual behavior
>> of the actual machine the input represents.
>>
>>
>
> When a simulating halt decider correctly simulates N steps of its input
> it derives the exact same N steps that a pure UTM would derive because
> it is itself a UTM with extra features.
>

No, it ISN'T a UTM because it fails to meet the definition of a UTM.

You are just proving that you are a pathological liar that doesn't know
what he is talking about.

> My reviewers cannot show that any of the extra features added to the UTM
> change the behavior of the simulated input for the first N steps of
> simulation:
> (a) Watching the behavior doesn't change it.
> (b) Matching non-halting behavior patterns doesn't change it
> (c) Even aborting the simulation after N steps doesn't change the first
> N steps.

Which don't matter, as the question is about the behavior of the ACTUAL
machine, not the first N steps of a partial simulation.

>
> The actual behavior that the actual input: ⟨Ĥ⟩ represents is the
> behavior of the simulation of N steps by embedded_H because embedded_H
> has the exact same behavior as a UTM for these first N steps, and you
> already agreed with this.

No, the actual behavior of the input is what the MACHINE Ĥ applied to
(Ĥ) does. You are just proving even more that you don't understand what
you are talking about and are just a pathological liar.

Please show an actual credible reference that supports your idea that a
Halt Decider gets to use the fact that it can't simulate the input to a
final state to allow it to say the input is non-halting.

CREDIBLE, not your own words, or words you have tricked someone into
agreeing to without understanding your twisted interpretation of them.


>
> Did you quit believing in UTMs?
>

Nope, are you going to learn what a UTM actually is?

Remember UTM (Ĥ) (Ĥ) shows us the behavior of Ĥ (Ĥ) and that is Halting,
so the actual behavior of a "Correct Simulation" of the input to H is
Halting.

That H gets something different shows that it doesn't actually do a
"Correct Simulation" but only simulated for N steps, and then did some
unsound logic.

YOU FAIL.

Richard Damon

Apr 19, 2023, 6:49:30 PM
H must return an answer to ALL callers, as it can't help but treat
all callers the same.

All you are doing is admitting that your H fails to be an actual computation.

olcott

Apr 19, 2023, 6:52:36 PM
I was not even talking about my SHD, I was talking about how your
program does its simulation incorrectly.

My new write-up proves that my Turing-machine based SHD necessarily must
simulate the first N steps of its input correctly because for the first
N steps embedded_H <is> a pure UTM that can't possibly do any simulation
incorrectly for the first N steps of simulation.

> it has a critical design flaw as
> it doesn't correctly report that Px halts.
>
> /Flibble.
>

olcott

Apr 19, 2023, 7:16:56 PM
Because embedded_H is a UTM that has been augmented with three features
that cannot possibly cause its simulation of its input to diverge from
the simulation of a pure UTM for the first N steps of simulation we know
that it necessarily does provide the actual behavior specified by this
input for these N steps.

Because these N steps can include 10,000 recursive simulations of ⟨Ĥ⟩ by
embedded_H, these recursive simulations <are> the actual behavior
specified by this input.

Richard Damon

Apr 19, 2023, 8:07:41 PM
And it is no longer a UTM, since it fails to meet the requirements of a UTM.

You are just showing you don't understand what a "UTM" actually is.

Note, that UTM isn't just a fancy word for "Simulator", but a very
specific type of simulator, and since some of your "additions" break
those requirements, your H isn't actually a UTM.

You are just proving your stupidity.

>
> Because these N steps can include 10,000 recursive simulations of ⟨Ĥ⟩ by
> embedded_H, these recursive simulations <are> the actual behavior
> specified by this input.
>

And no matter how many steps (N) you design your H / embedded_H to
simulate Ĥ (Ĥ), there will always be a slightly larger (but still
finite) number, which if the same input is given to an ACTUAL UTM, that
simulation will reach the point that the top level embedded_H decides to
abort its simulation, transition to Qn, and Ĥ Halts.

Since you MUST choose your "N" when you design your H, it is a SINGLE
DEFINED VALUE, and always too small to determine the actual behavior of
the Ĥ[n] built on that H[n].

The only Ĥ[n] that is non-halting is when N becomes infinite, but for
that N, H never answers.

In fact, your flawed logic is based on the LIE that embedded_H can have
a different N than H does, which just means you LIED when you said you
built Ĥ by the requirements.


olcott

Apr 19, 2023, 8:31:36 PM
As you already agreed:
The behavior of N steps of ⟨Ĥ⟩ simulated by embedded_H must be the
actual behavior of these N steps because

(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it
(c) Even aborting the simulation after N steps doesn't change the
first N steps.



Richard Damon

Apr 19, 2023, 8:45:42 PM
But a UTM doesn't simulate just "N" steps of its input, but ALL of them.

Anything less and it isn't a UTM. DEFINITION.

You're basically claiming that you're immortal since you haven't died, YET.

You are just PROVING that you don't understand what you are talking about.


Since you don't seem to want to hold to correct definitions, everything
you say needs to be treated as a likely LIE or DECEPTION.

You are just proving your incompetence.

olcott

Apr 19, 2023, 8:52:19 PM
Yet when embedded_H simulates N steps of ⟨Ĥ⟩ this is the actual behavior
of ⟨Ĥ⟩ for these N steps, thus when embedded_H simulates 10,000
recursive simulations these are the actual behavior of ⟨Ĥ⟩.

Richard Damon

Apr 19, 2023, 9:08:03 PM
Yes, but that doesn't actually show the ACTUAL behavior of the input as
defined, since that would be what the ACTUAL MACHINE does, or the
COMPLETE SIMULATION done by a UTM, not the PARTIAL simulation done by H.

You are using a STRAWMAN criteria, so you get the wrong answer.

That is like driving 10 miles on a road, then getting off, and saying it
all looked the same, so this road must go on forever.

Note, embedded_H simulates for the EXACT SAME number of steps as H, so
gets the exact same answer, because it INCORRECTLY aborts at the exact
same point and decides its input is non-halting, returning that answer
to its caller, which makes the ACTUAL BEHAVIOR that input represents to
be Halting.

You are just proving that you don't understand the simple basic of the
theory, and are proving yourself to be a ignorant liar.

olcott

Apr 19, 2023, 9:25:10 PM
There is only one actual behavior of the actual input and this behavior
is correctly demonstrated by N steps of ⟨Ĥ⟩ simulated by embedded_H.

If you simply don't "believe in" UTMs then you might not see this
correctly.

If you fully comprehend UTMs then you understand that 10,000 recursive
simulations of ⟨Ĥ⟩ by embedded_H are the actual behavior of ⟨Ĥ⟩.

Richard Damon

Apr 19, 2023, 9:38:31 PM
Nope, Read the problem definition.

The behavior to be decided by a Halt Decider is the behavior of the
ACTUAL MACHINE which is described by the input.

You are just trying to pass off a DECEITFUL LIE about what your H is
supposed to do, because you just don't understand the theory, because
you are just too ignorant.

>
> If you simply don't "believe in" UTMs then you might not see this
> correctly.

No, you are the one that doesn't seem to "believe" in UTMs. You don't
seem to understand that a UTM ALWAYS recreates the behavior of the
machine it is given the description of, because it NEVER stops until it
is done.

Anything less is NOT a UTM, and you LIE when you claim your H is one.

>
> If you fully comprehend UTMs then you understand that 10,000 recursive
> simulations of ⟨Ĥ⟩ by embedded_H are the actual behavior of ⟨Ĥ⟩.
>
>

Nope, ALL the steps of Ĥ (Ĥ) are the behavior of the input, which is
also what you get when you give that input to a REAL UTM (one that
doesn't stop): UTM (Ĥ) (Ĥ)

You are just stuck in using INCORRECT definitions, so you get wrong
answers. This shows that you really don't understand the very basics of
what Truth is, because Truth is based on using the RIGHT definitions,
the definitions defined by the field.

Thus, YOU LIE when you make your claims, because you are too ignorant.

olcott

Apr 19, 2023, 9:59:54 PM
No matter what the problem definition says the actual behavior of the
actual input must necessarily be the N steps simulated by embedded_H.

The only alternative is to simply disbelieve in UTMs.

Richard Damon

Apr 19, 2023, 10:16:11 PM
NOPE, Since H isn't a UTM, because it doesn't meet the REQUIREMENTS of a
UTM, the statement is meaningless.

A UTM is defined as a simulator whose behavior DOES match the behavior
of the input. MATCHING is the criterion that determines that it IS one,
not a result of just calling a machine one.

By your logic, I could change my cat into a dog by just calling it one.

Calling your machine a UTM, doesn't make it one, you need to make it
meet the requirements to earn the title. Your doesn't.

What happens if you decide to call the color "Red" "Green", and run
through a traffic light when the top light is on? You get a traffic
ticket for running a "Red Light", because calling it a Green Light
doesn't make it one.

YOU FAIL.

You are showing you don't understand the basics of logic, so your ideas
are just failures.

olcott

Apr 19, 2023, 11:29:54 PM
It <is> equivalent to a UTM for the first N steps that can include
10,000 recursive simulations.

Richard Damon

Apr 19, 2023, 11:41:57 PM
Which means it ISN'T the Equivalent of a UTM. PERIOD.

The numbers from 1 to 10 are not the equivalent of the numbers from 1 to
a hundred.

NOT EQUIVALENT is NOT EQUIVALENT.

It might correctly simulate the first N steps, but that doesn't make it
a UTM.

Like I have said before, your statement is like claiming you are
immortal because you haven't died YET.

Being a UTM implies MORE than just correctly simulating part of a
machines behavior.

You are just proving you are not qualified to talk about Turing
Machines, or Logic.

You are just too ignorant, and make up too much stuff, which is the same
as just lying.

Your legacy is that of a kook.

olcott

Apr 20, 2023, 12:04:46 AM
Why are you playing head games with this?

You know and acknowledged that the first N steps of ⟨Ĥ⟩ correctly
simulated by embedded_H are the actual behavior of ⟨Ĥ⟩ for these first N
steps.

Richard Damon

Apr 20, 2023, 7:23:37 AM
Right, but we don't care about that. We care about the TOTAL behavior of
the input, which H never gets to see, because it gives up.

We know that

H (M) w needs to go to qy if M w will halt when actually run (By the
definition of a Halt decider)

H (Ĥ) (Ĥ) goes to qn (by your assertions)

Ĥ (Ĥ) will go to Ĥ.qn and halt when actually run.

THEREFORE, H was just WRONG, BY DEFINITION.

Also UTM (Ĥ) (Ĥ) will halt just like Ĥ (Ĥ)

So, if you want to use the alternate definition, that

H (M) w needs to go to qy if UTM (M) w halts.

Note, it is UTM (M) w, which ALWAYS will have the same behavior for a
given input. Not "the correct simulation done by H".

olcott (20 Apr 2023, 7:56:49 a.m.)

When Ĥ is applied to ⟨Ĥ⟩
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

N steps of ⟨Ĥ⟩ correctly simulated by embedded_H are the actual behavior
of this input:
(a) Ĥ.q0 The input ⟨Ĥ⟩ is copied then transitions to embedded_H
(b) embedded_H is applied to ⟨Ĥ⟩ ⟨Ĥ⟩ (input and copy) which simulates
⟨Ĥ⟩ applied to ⟨Ĥ⟩
(c) *which begins at its own simulated ⟨Ĥ.q0⟩ to repeat the process*

When N steps of ⟨Ĥ⟩ correctly simulated by embedded_H are performed
(unless we are playing head games) we can see that ⟨Ĥ⟩ cannot possibly
reach its own final state of ⟨Ĥ.qn⟩ in any finite number of steps.

N steps could reach (c) once, or N steps could reach (c) 10,000 times.


> We know that
>
> H (M) w needs to go to qy if M w will halt when actually run (By the
> definition of a Halt decider)
>
> H (Ĥ) (Ĥ) goes to qn (by your assertions)
>
> Ĥ (Ĥ) will go to Ĥ.qn and halt when actually run.
>
> THEREFORE, H was just WRONG, BY DEFINITION.
>
> Also UTM (Ĥ) (Ĥ) will halt just like Ĥ (Ĥ)
>
> So, if you want to use the alternate definition, that
>
> H (M) w needs to go to qy if UTM (M) w halts.
>
> Note, it is UTM (M) w, which ALWAYS will have the same behavior for a
> given input. Not "the correct simulation done by H".

Richard Damon (20 Apr 2023, 8:06:39 a.m.)

Until the outer embedded_H used by Ĥ reaches the point where it decides
to stop its simulation; then the whole simulation ends with only partial
results, embedded_H goes to qn, and Ĥ halts.

This MUST happen, as you say this is what H does.

If not, you are just admitting to be a stinking liar.

>
> When N steps of ⟨Ĥ⟩ correctly simulated by embedded_H are performed
> (unless we are playing head games) we can see that ⟨Ĥ⟩ cannot possibly
> reach its own final state of ⟨Ĥ.qn⟩ in any finite number of steps.
>

No, ⟨Ĥ⟩ ⟨Ĥ⟩ CAN reach its own final state when simulated by an ACTUAL
UTM, one that doesn't stop. H / embedded_H isn't such a machine (so it
is not even a UTM). It doesn't matter what H gets; it matters what the
actual machine does.

> N steps could be reach (c) or N steps could be reaching (c) 10,000 times.

And then the top embedded_H aborts its simulation, and goes to qn,
making the COMPLETE simulation of that input see that final end.

You are just exhibiting your God complex. You think H must be God and
able to get the right answer. H isn't God, and neither are you.

H, and you, are restricted by the rules. H, by the rules, needs to
answer about the actual machine represented by the input, or the results
of an actual UTM simulating its input, even though it can't do that itself.

What matters is what ACTUALLY HAPPENS to the ACTUAL MACHINE, not what H
"thinks" is going to happen by its limited senses and simulation.

YOU FAIL.

olcott (20 Apr 2023, 10:59:46 a.m.)

You keep dodging the key truth: when N steps of ⟨Ĥ⟩ are correctly
simulated by embedded_H and N = 30,000, then we know that the actual
behavior of ⟨Ĥ⟩ is 10,000 recursive simulations that have never reached
their final state of ⟨Ĥ.qn⟩.

Mr Flibble (20 Apr 2023, 1:32:14 p.m.)

My SSHD does not do its simulation incorrectly: it does its simulation
just as I have defined it, as evidenced by the fact that it returns a
correct halting decision for Px; something your broken SHD gets wrong.

>
> My new write-up proves that my Turing-machine based SHD necessarily must
> simulate the first N steps of its input correctly because for the first
> N steps embedded_H <is> a pure UTM that can't possibly do any simulation
> incorrectly for the first N steps of simulation.

Again, the mistake you are making is assuming that a program that
invokes the decider whilst it is being decided upon must cause an
infinite recursion when the halt decider is of the simulating type: I
have shown otherwise.

/Flibble

Mr Flibble (20 Apr 2023, 1:36:03 p.m.)

That is a contradiction: either H MUST ALWAYS return to its caller or
MUST NOT; your mistake is in thinking that there is some "get out"
clause for SHDs.

/Flibble

olcott (20 Apr 2023, 1:50:02 p.m.)

In order for you to have Px simulated by H terminate normally you must
change the behavior of Px away from the behavior that its x86 code
specifies.

void Px(void (*x)())
{
  (void) H(x, x);
  return;
}

Px correctly simulated by H cannot possibly reach past its machine
address of: [00001b3d].

_Px()
[00001b32] 55 push ebp
[00001b33] 8bec mov ebp,esp
[00001b35] 8b4508 mov eax,[ebp+08]
[00001b38] 50 push eax // push address of Px
[00001b39] 8b4d08 mov ecx,[ebp+08]
[00001b3c] 51 push ecx // push address of Px
[00001b3d] e800faffff call 00001542 // Call H
[00001b42] 83c408 add esp,+08
[00001b45] 5d pop ebp
[00001b46] c3 ret
Size in bytes:(0021) [00001b46]

What you are doing is the same as recognizing that _Infinite_Loop()
never halts, then forcing it to break out of its infinite loop and jump
to its "ret" instruction.

_Infinite_Loop()
[00001c62] 55 push ebp
[00001c63] 8bec mov ebp,esp
[00001c65] ebfe jmp 00001c65
[00001c67] 5d pop ebp
[00001c68] c3 ret
Size in bytes:(0007) [00001c68]

Your system doesn't merely report on the behavior of its input; it also
interferes with the behavior of its input.

Mr Flibble (20 Apr 2023, 3:08:34 p.m.)

Your "x86 code" has nothing to do with how my halt decider works; I am
using an entirely different simulation method, one that actually works.
No I am not: there is no infinite loop in Px above; forking the
simulation into two branches and returning a different halt decision to
each branch is a perfectly valid SHD design; again a design, unlike
yours, that actually works.

>
> Your system doesn't merely report on the behavior of its input it also
> interferes with the behavior of its input.

No it doesn't; H returns a value to its caller in finite time so
satisfies the requirements of a halt decider unlike your SHD which you
have to "abort" because your decider doesn't satisfy the requirements
because your design is broken.

/Flibble

olcott (20 Apr 2023, 3:20:20 p.m.)

If you say that Px correctly simulated by H ever reaches its own final
"return" statement and halts, you are incorrect.

>>
>> Your system doesn't merely report on the behavior of its input it also
>> interferes with the behavior of its input.
>
> No it doesn't; H returns a value to its caller in finite time so
> satisfies the requirements of a halt decider unlike your SHD which you
> have to "abort" because your decider doesn't satisfy the requirements
> because your design is broken.
>
> /Flibble
>

Richard Damon (20 Apr 2023, 6:40:44 p.m.)

No, it has been shown that if N = 3000, then when a UTM CORRECTLY AND
COMPLETELY simulates this input it will see those 3000 steps and then
see one more iteration simulated; then the top-level embedded_H will
abort its simulation and go to qn (which is also Ĥ.qn) and Ĥ will halt.

Thus, this input represents a Halting Computation.

It doesn't matter that H can't simulate the input to a final state if it
gives up. What matters is what a REAL UTM (which never gives up) will
do, or what the actual machine does.

You are just working in a fantasy world where you close your eyes to
what is actually true, and try to pretend, by lying to yourself, that
things work the way you want.

This just makes all your logic invalid and your results worthless
because you ignore the actual rules and truth of the systems.

You have imprisoned your mind in this fantasy; it seems you have locked
yourself in and can't get out.

It seems that this is likely your fate for all eternity.

olcott (20 Apr 2023, 6:51:06 p.m.)

never reached their final state of ⟨Ĥ.qn⟩ because ⟨Ĥ⟩ is defined to have
a pathological relationship to embedded_H.

Referring to an entirely different sequence where there is no such
pathological relationship is like comparing apples to lemons and
rejecting apples because lemons are too sour.

Why do you continue to believe that you can get away with this?

Richard Damon (20 Apr 2023, 7:14:28 p.m.)

No, because the ACTUAL BEHAVIOR is defined by the machine that the input
describes.

PERIOD.

>
> Referring to an entirely different sequence where there is no such
> pathological relationship is like comparing apples to lemons and
> rejecting apples because lemons are too sour.

So, you just don't understand the meaning of ACTUAL BEHAVIOR

>
> Why do you continue to believe that you can get away with this?
>
>

Why do YOU?

Can you name a reliable source that supports your definition? (NOT YOU)

Not just someone you have "tricked" into agreeing to a poorly worded
statement that you misinterpret to agree with you.

olcott (20 Apr 2023, 7:54:21 p.m.)

Professor Sipser.

Richard Damon (20 Apr 2023, 8:38:37 p.m.)

Nope, he agreed that IF H correctly predicted that a CORRECT SIMULATION
(which by his definition, is the simulation done by an ACTUAL UTM, which
always agrees with the behavior of the ACTUAL MACHINE) would never halt,
then H could abort its simulation.

Thus, he does not actually agree to your definition.

The fact that you had the phrase "its correct simulation" is irrelevant,
because in his mind, the input has a COPY of H, and thus varying H to
simulate longer doesn't affect the input.

That is why I included the line (which you snipped to deceive):

Not just someone you have "tricked" into agreeing to a poorly worded
statement that you misinterpret to agree with you.

Which just shows that something in you understands that your logic
doesn't actually hold. You are interpreting the statement you gave him
differently than he would understand it, because you are just too stupid
to know what things actually mean.

olcott (20 Apr 2023, 10:06:02 p.m.)

MIT Professor Michael Sipser has agreed that the following verbatim
paragraph is correct:

"If simulating halt decider H correctly simulates its input D until H
correctly determines that its simulated D would never stop running
unless aborted then H can abort its simulation of D and correctly report
that D specifies a non-halting sequence of configurations."

He understood that the above paragraph is a tautology. That you do not
understand that it is a tautology provides zero evidence that it is not
a tautology.

You have already agreed that N steps of an input simulated by a
simulating halt decider are the actual behavior for these N steps.

The fact that you agreed with this seems to prove that you will not
disagree with me at the expense of truth and that you do actually care
about the truth.

Richard Damon (20 Apr 2023, 10:20:07 p.m.)

Right, like I said, *IF* the decider correctly simulates its input D
until H *CORRECTLY* determines that its simulated D would never stop
running unless aborted.

NOTE: THAT MEANS THE ACTUAL MACHINE OR A UTM SIMULATION OF THE MACHINE,
NOT JUST A PARTIAL SIMULATION BY H.

then H can abort its simulation of D and correctly report that D
specifies a non-halting sequence of configurations.


THAT is a Tautology, if you can simulate a program to the point that you
can determine that the PROGRAM ITSELF if run (or simulated by a UTM)
would run forever unless aborted.

That your "simulation" by H can't get there doesn't count.

Yes, the N steps of the simulation done by H will match the first N
steps of the simulation done by a UTM.

It is also a fact that if the UTM simulates a bit farther, it will reach
a halting state, so H did not "correctly" determine that that would not
happen.

You have a GIVEN H, and that H simulates for ONLY N steps, but the
correct determination needs to be about what would happen with a CORRECT
SIMULATION of an unlimited number of steps, which this H doesn't do; and
you can't imagine changing H, because the way you do it changes the
input, which violates the requirements of the problem.

Since UTM(D,D) halts, H(D,D) saying that it has "Correctly Determined"
that the correct simulation would not halt unless aborted is just a
FALSE statement, as UTM(D,D) does halt, and never aborted its simulation.

You just don't understand the meaning of the words you are using, so you
lie to yourself and trap yourself in a prison of falsehood.

olcott (20 Apr 2023, 10:43:13 p.m.)

Unless the simulation is from the frame-of-reference of the pathological
relationship it is rejecting apples because lemons are too sour.

Thus when N steps of ⟨Ĥ⟩ correctly simulated by embedded_H conclusively
prove, by a form of mathematical induction, that ⟨Ĥ⟩ ⟨Ĥ⟩ correctly
simulated by embedded_H cannot possibly reach its simulated final state
of ⟨Ĥ.qn⟩ in any finite number of steps, the Sipser-approved criterion
has been met.

Richard Damon (21 Apr 2023, 7:18:11 a.m.)

So, you don't understand the nature of simulation.

Simulation is NOT "From a frame of reference", but is a recreation of
what actually happens.

Remember, the DEFINITION of a Halting Decider is dependent on the actual
behavior of the machine represented, and the replacement of that
criterion with a simulation is based on the fact that the "Simulation"
so defined will ALWAYS reproduce that result.

If you claim that the simulation can create a different result, then you
can't use that simulation as a replacement for the actual behavior that
was required, so you are just admitting that your logic is flawed, and
that you are using a strawman.

>
> Thus when N steps of ⟨Ĥ⟩ correctly simulated by embedded_H conclusively
> proves by a form of mathematical induction that ⟨Ĥ⟩ ⟨Ĥ⟩ correctly
> simulated by embedded_H cannot possibly reach its simulated final state
> of ⟨Ĥ.qn⟩ in any finite number of steps the Sipser approved criteria has
> been met.
>

Nope, Please provide your actual proof by induction.

Note, I am not going to tell you the steps you need to prove to make a
proof by induction, but you need to clearly make the statements about them.

I suspect you don't even know what it actually means to prove something
by induction.

Remember too, the goal is to show that the input machine is actually
non-halting, and that a correct simulation of this exact input will
never reach a halting state.
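
For reference, here is the bare schema that any proof by induction of the claimed non-halting would have to instantiate; this is a neutral statement of the proof obligations (with P(n) a placeholder predicate the prover must define), not a verdict on whether they can be discharged:

```latex
% Let P(n) be, for example: "after n nested levels of simulation by
% embedded_H, the simulated machine has not entered its final state".
% A proof by induction must establish both obligations on the left
% to conclude the statement on the right:
\[
  P(0) \;\wedge\; \bigl(\forall n \ge 0 .\; P(n) \Rightarrow P(n+1)\bigr)
  \;\Longrightarrow\; \forall n .\; P(n)
\]
```

A further, separate step would then be needed to connect the conclusion to the behavior of the machine itself once the simulation is aborted, which is precisely the point in dispute in this thread.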




Mr Flibble (21 Apr 2023, 8:17:46 a.m.)

Px halts if H is (or is part of) a genuine halt decider. Your H is not
a genuine halt decider as it aborts rather than returning a value to its
caller in finite time. Think of it this way: if H was not of the
simulating type then there would be no need to abort any recursion as H
would not be directly invoking Px, i.e., there would be no recursion.
Recursion is a problem for you because your halt decider is based on a
broken design.

/Flibble

olcott (21 Apr 2023, 11:16:24 a.m.)

The simulated Px only halts if it reaches its own final state in a
finite number of steps of correct simulation. It can't possibly do this.

>   Your H is not
> a genuine halt decider as it aborts rather than returning a value to its
> caller in finite time.  Think of it this way: if H was not of the
> simulating type then there would be no need to abort any recursion as H
> would not be directly invoking Px, i.e., there would be no recursion.
> Recursion is a problem for you because your halt decider is based on a
> broken design.
>
> /Flibble

olcott (21 Apr 2023, 11:35:58 a.m.)

(a) If simulating halt decider H correctly simulates its input D until H
correctly determines that its simulated D would never stop running
unless aborted, then

(b) H can abort its simulation of D and correctly report that D
specifies a non-halting sequence of configurations.

Thus it is established that:

The behavior of D correctly simulated by H
is the correct behavior to measure.

The behavior of ⟨Ĥ⟩ correctly simulated by embedded_H
is the correct behavior to measure.

Mr Flibble (21 Apr 2023, 12:36:39 p.m.)

Nope, a correctly simulated Px will allow it to reach its own final
state (termination); your H does NOT perform a correct simulation
because your H is broken.

/Flibble

olcott (21 Apr 2023, 12:41:51 p.m.)

Strawman deception
Px correctly simulated by H will never reach its own simulated final
state of "return" because Px and H have a pathological relationship to
each other.

Measuring the behavior of Px simulated by a simulator that has no such
pathological relationship is the same as rejecting apples because lemons
are too sour. One must compare apples to apples.

Mr Flibble (21 Apr 2023, 1:43:00 p.m.)

Nope, there is no pathological relationship between Px and H because Px
discards the result of H (i.e. it does not try to do the opposite of the
H halting result as per the definition of the Halting Problem).

>
> Measuring the behavior of Px simulated by a simulator have no such
> pathological relationship is the same as rejecting apples because lemons
> are too sour. One must compare apples to apples.

LOLWUT?!

/Flibble

olcott (21 Apr 2023, 2:36:33 p.m.)

It seems that you continue to fail to see the nested simulation:

01 void Px(void (*x)())
02 {
03   (void) H(x, x);
04   return;
05 }
06
07 void main()
08 {
09   H(Px,Px);
10 }

*Execution Trace when H never aborts its simulation*
main() calls H(Px,Px) that simulates Px(Px) at line 09
*keeps repeating*
simulated Px(Px) calls simulated H(Px,Px) that simulates Px(Px) at
line 03 ...


>>
>> Measuring the behavior of Px simulated by a simulator have no such
>> pathological relationship is the same as rejecting apples because lemons
>> are too sour. One must compare apples to apples.
>
> LOLWUT?!
>
> /Flibble

Mr Flibble (21 Apr 2023, 4:02:23 p.m.)

"nested simulation" (recursion) is a property of your broken halt
decider and not a property of the Halting Problem itself; the Flibble
SSHD avoids the problem of nested simulation (recursion) by forking
(branching) the simulation instead.

/Flibble

olcott (21 Apr 2023, 5:06:43 p.m.)

Nested simulation is inherent when any simulating halt decider is
applied to any of the conventional halting problem counter-example
inputs. That you may fail to comprehend this is not my mistake.

Mr Flibble (21 Apr 2023, 5:27:48 p.m.)

I have shown otherwise, dear.

/Flibble

Richard Damon (21 Apr 2023, 6:34:23 p.m.)

So, you're saying that a UTM doesn't do a "Correct Simulation"?

UTM(Px,Px) will see Px call H, then H simulate its copy of Px(Px), then
abort its simulation and return non-halting to Px, and then Px halt.


It is only the PARTIAL simulation by whatever H Px is built on that
can't reach that state. The UTM will ALWAYS reach that state slightly
(one recursion) after your H stops its simulation.



Richard Damon (21 Apr 2023, 6:36:28 p.m.)

*IF* H correctly simulates per the definition of a UTM

It doesn't, so it isn't.

>
> The behavior of ⟨Ĥ⟩ correctly simulated by embedded_H
> is the correct behavior to measure.
>

Since the simulation done by embedded_H does not meet the definition of
"correct simulation" that Professor Sipser uses, your argument is VOID.


You are just PROVING your stupidity.

olcott (21 Apr 2023, 7:18:16 p.m.)

Always with the strawman error.
I am saying that when Px is correctly simulated by H it cannot possibly
reach its own simulated "return" instruction in any finite number of
steps because Px is defined to have a pathological relationship to H.

When we take the behavior of Px simulated by a pure simulator, or even
by another simulating halt decider such as H1 that has no such
pathological relationship, as the basis of the actual behavior of the
input to H, we are comparing apples to lemons and rejecting the apples
because the lemons are too sour.


> UTM(Px,Px) will see Px call H, and then H simulation its copy of Px(Px),
> then aborting its simulaiton and returning non-halting to Px and then Px
> halting
>
>
> It is only the PARTIAL simulation by whatever H Px is built on that
> can't reach that state. The UTM will ALWAYS reach that state slightly
> (one recursion) after your H stops its simulation.
>
>
>

olcott (21 Apr 2023, 7:22:15 p.m.)

Always with the strawman error.
I am saying that when ⟨Ĥ⟩ is correctly simulated by embedded_H it cannot
possibly reach its own simulated final state of ⟨Ĥ.qn⟩ in any finite
number of steps because Ĥ is defined to have a pathological relationship
to embedded_H.

When we take the behavior of ⟨Ĥ⟩ simulated by a pure UTM, or even by
another simulating halt decider such as embedded_H1 that has no such
pathological relationship, as the basis of the actual behavior of the
input to embedded_H, we are comparing apples to lemons and rejecting the
apples because the lemons are too sour.



Richard Damon (21 Apr 2023, 7:33:39 p.m.)

Since H never "Correctly Simulates" the input per the definition that
allows using a simulation instead of the actual machine's behavior, YOUR
method is the STRAWMAN.

>
> When we examine the behavior of Px simulated by a pure simulator or even
> another simulating halt decider such as H1 having no such pathological
> relationship as the basis of the actual behavior of the input to H we
> are comparing apples to lemons and rejecting the apples because lemons
> are too sour.

Maybe, but the question is asking for the oranges that the pure
simulator gives, not the apples that your H gives.

H is just doing the wrong thing.

Your failure to see that just shows how blind you are to the actual
truth of the system.

H MUST answer about the behavior of the actual machine to be a Halt
Decider, since that is what the mapping a Halt Decider is supposed to
compute is based on.

Richard Damon (21 Apr 2023, 7:35:35 p.m.)

Since H never "Correctly Simulates" the input per the definition that
allows using a simulation instead of the actual machine's behavior, YOUR
method is the STRAWMAN.



>
> When we examine the behavior of ⟨Ĥ⟩ simulated by a pure UTM or even
> another simulating halt decider such as embedded_H1 having no such
> pathological relationship as the basis of the actual behavior of the
> input to embedded_H we are comparing apples to lemons and rejecting the
> apples because lemons are too sour.
>
>
Maybe, but the question is asking for the lemons that the pure simulator

olcott (21 Apr 2023, 8:51:11 p.m.)

When a simulating halt decider or even a plain UTM examines the behavior
of its input and the SHD or UTM has a pathological relationship to that
input, then another SHD or UTM that has no pathological relationship to
this input is an incorrect proxy for the actual behavior of this actual
input to the original SHD or UTM.

I used to think that you were simply lying to play head games, I no
longer believe this. Now I believe that you are ensnared by group-think.

Group-think is the way that 40% of the electorate could honestly believe
that significant voter fraud changed the outcome of the 2020 election
even though there has very persistently been zero evidence of this.
https://www.psychologytoday.com/us/basics/groupthink

Hopefully they will not believe that Fox news paid $787 million to trick
people into believing that there was no voter fraud.

Maybe they will believe that tiny space aliens living in the heads of
Fox leadership took control of their brains and forced them to pay.

The actual behavior of the actual input is correctly determined by an
embedded UTM that has been adapted to watch the behavior of its
simulation of its input and match any non-halting behavior patterns.

Richard Damon (21 Apr 2023, 9:02:28 p.m.)

Nope. If an input has your "pathological" relationship to a UTM, then
YES, the UTM will generate an infinite behavior, but so does the machine
itself, and ANY UTM will see that same infinite behavior.

The problem is that your SHD is NOT a UTM, and thus the fact that it
aborts its simulation and returns an answer changes the behavior of the
machine that USED it (compared to a UTM); thus, to be "correct", the
SHD needs to take that into account.

>
> I used to think that you were simply lying to play head games, I no
> longer believe this. Now I believe that you are ensnared by group-think.


Nope, YOU are the one ensnared in your own fantasy world of lies.

>
> Group-think is the way that 40% of the electorate could honestly believe
> that significant voter fraud changed the outcome of the 2020 election
> even though there has very persistently been zero evidence of this.
> https://www.psychologytoday.com/us/basics/groupthink

And your fantasy world is why you think that a Halt Decider, which is
DEFINED so that H(D,D) needs to return the answer "Halting" if D(D)
halts, is correct to give the answer non-halting even though D(D) halts.

You are just believing your own lies.

>
> Hopefully they will not believe that Fox news paid $787 million to trick
> people into believing that there was no voter fraud.

No, they are paying $787 million BECAUSE they tried to gain views by
telling them the lies they wanted to hear.

At least they KNEW they were lying, but didn't care, and had to pay the
price.

You don't seem to understand that you are lying just as bad as they were.

>
> Maybe they will believe that tiny space aliens living in the heads of
> Fox leadership took control of their brains and forced them to pay.
>
> The actual behavior of the actual input is correctly determined by an
> embedded UTM that has been adapted to watch the behavior of its
> simulation of its input and match any non-halting behavior patterns.
>

But embedded_H isn't "embedded_UTM", so you are just living a lie.

You are just too ignorant to understand that a UTM can't be modified to
stop its simulation and still be a UTM.

That is like saying that all racing cars are street legal, because they
are based on the design of cars that were street legal.

olcott (21 Apr 2023, 10:10:56 p.m.)

The point is that the behavior of the input to embedded_H must be
measured relative to the pathological relationship or it is not
measuring the actual behavior of the actual input.

I know that this is totally obvious thus I had to conclude that anyone
denying it must be a liar that is only playing head games for sadistic
pleasure.

I did not take into account the power of groupthink that got at least
100 million Americans to believe that election fraud changed the outcome
of the 2020 election even though there is zero evidence of this
anywhere. Even a huge cash prize offered by the Lt. Governor of Texas
turned up only one Republican who cheated.

Only during the 2022 election did it look like this was starting to turn
around a little bit.

> The problem is that you SHD is NOT a UTM, and thus the fact that it
> aborts its simulation and returns an answer changes the behavior of the
> machine that USED it (compared to a UTM), and thus to be "correct", the
> SHD needs to take that into account.
>
>>
>> I used to think that you were simply lying to play head games, I no
>> longer believe this. Now I believe that you are ensnared by group-think.
>
>
> Nope, YOU are the one ensnared in your own fantasy world of lies.
>
>>
>> Group-think is the way that 40% of the electorate could honestly believe
>> that significant voter fraud changed the outcome of the 2020 election
>> even though there has very persistently been zero evidence of this.
>> https://www.psychologytoday.com/us/basics/groupthink
>
> And you fantasy world is why you think that a Halt Decider, which is
> DEFINIED that H(D,D) needs to return the answer "Halting" if D(D) Halts,
> is correct to give the answer non-halting even though D(D) Ha;ts.
>
> You are just beliving your own lies.
>
>>
>> Hopefully they will not believe that Fox news paid $787 million to trick
>> people into believing that there was no voter fraud.
>
> No, they are paying $787 million BECAUSE they tried to gain views by
> telling them the lies they wanted to hear.
>

Yes, but even now 30% of the electorate may still believe the lies.

> At least they KNEW they were lying, but didn't care, and had to pay the
> price.
>
> You don't seem to understand that you are lying just as bad as they were.
>

I am absolutely not lying. Truth is the most important thing to me,
even more important than love.

All of this work is aimed at formalizing the notion of truth because the
HP, LP, IT and Tarski's Undefinability theorem are all instances of the
same Olcott(2004) pathological self-reference error.

>>
>> Maybe they will believe that tiny space aliens living in the heads of
>> Fox leadership took control of their brains and forced them to pay.
>>
>> The actual behavior of the actual input is correctly determined by an
>> embedded UTM that has been adapted to watch the behavior of its
>> simulation of its input and match any non-halting behavior patterns.
>>
>
> But embedded_H isn't "embedded_UTM", so you are just living a lie.
>

embedded_H is embedded_UTM for the first N steps even when these N steps
include 10,000 recursive simulations.

After 10,000 recursive simulations even an idiot can infer that more
will not cause ⟨Ĥ⟩ simulated by embedded_H to reach its own final state
of ⟨Ĥ.qn⟩ in any finite number of steps.

You and I both know that mathematical induction proves this in far less
than 10,000 recursive simulations. Why you deny it when you should know
this is true is beyond me.

> You are just to ignorant to understand that a UTM can't be modified to
> stop its simulation and still be a UTM.
>
> That is like saying that all racing cars are street legal, because they
> are based on the design of cars that were street legal.

Richard Damon (21 Apr 2023, 10:37:47 p.m.)

No, the behavior measured must be the DEFINED behavior, which IS the
behavior of the ACTUAL MACHINE.

That Halts, so H gets the wrong answer.

>
> I know that this is totally obvious thus I had to conclude that anyone
> denying it must be a liar that is only playing head games for sadistic
> pleasure.

No, the fact that you think what you say shows that you are a TOTAL IDIOT.



>
> I did not take into account the power of group think that got at least
> 100 million Americans to believe the election fraud changed the outcome
> of the 2020 election even though there is zero evidence of this
> anywhere. Even a huge cash prize offered by the Lt. governor of Texas
> only turned up one Republican that cheated.

Nope, you just don't understand the truth. You aren't ready for the
truth, because it shows that you have been wrong, and your fragile ego
can't handle that.

>
> Only during the 2022 election did it look like this was starting to turn
> around a little bit.

You have been wrong a lot longer than that.


>
>> The problem is that you SHD is NOT a UTM, and thus the fact that it
>> aborts its simulation and returns an answer changes the behavior of
>> the machine that USED it (compared to a UTM), and thus to be
>> "correct", the SHD needs to take that into account.
>>
>>>
>>> I used to think that you were simply lying to play head games, I no
>>> longer believe this. Now I believe that you are ensnared by group-think.
>>
>>
>> Nope, YOU are the one ensnared in your own fantasy world of lies.
>>
>>>
>>> Group-think is the way that 40% of the electorate could honestly believe
>>> that significant voter fraud changed the outcome of the 2020 election
>>> even though there has very persistently been zero evidence of this.
>>> https://www.psychologytoday.com/us/basics/groupthink
>>
>> And your fantasy world is why you think that a Halt Decider, which is
>> DEFINED such that H(D,D) needs to return the answer "Halting" if D(D)
>> Halts, is correct to give the answer non-halting even though D(D) Halts.
>>
>> You are just believing your own lies.
>>
>>>
>>> Hopefully they will not believe that Fox news paid $787 million to trick
>>> people into believing that there was no voter fraud.
>>
>> No, they are paying $787 million BECAUSE they tried to gain views by
>> telling them the lies they wanted to hear.
>>
>
> Yes, but even now 30% of the electorate may still believe the lies.

So, you seem to believe in 100% of your lies.

Yes, there is a portion of the population that fails to see what is
true, because, like you, they think their own ideas are more important
than what actually is true. As was philosophized, they ignore the truth,
but listen to what their itching ears want to hear. That fits you to the
T, as you won't see the errors that are pointed out to you, and you make
up more lies to try to hide your errors.

>
>> At least they KNEW they were lying, but didn't care, and had to pay
>> the price.
>>
>> You don't seem to understand that you are lying just as bad as they were.
>>
>
> I am absolutely not lying Truth is the most important thing to me even
> much more important than love.

Then why do you lie so much? Or are you just that stupid?

It is clear you just don't know what you are talking about and are just
making stuff up.

It seems you have lied so much that you have convinced yourself of your
lies, and can no longer bear to let the truth in, so you just deny
anything that goes against your lies.

You have killed your own mind.


>
> All of this work is aimed at formalizing the notion of truth because the
> HP, LP, IT and Tarski's Undefinability theorem are all instances of the
> same Olcott(2004) pathological self-reference error.
>

So, maybe you need to realize that Truth has to match what is actually
true, and you need to work with the definitions that exist, not the
alternate ideas you make up.

A Halt Decider is DEFINED that

H(M,w) needs to answer about the behavior of M(w).

You don't seem to understand that, and it seems to even be a blind spot,
as you like dropping that part when you quote what H is supposed to do.

You seem to "see" self-references where there are no actual
self-references, but where the effect of the "self-reference" is built
from simpler components. It seems you don't even understand what a
"self-reference" actually is, maybe even what a "reference" actually is.

For the halt decider, P is built on a COPY of the claimed decider and
given a representation of that resultant machine. Not a single reference
in sight.
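This copy-based construction can be sketched in a few lines of Python. All
names here (H_claimed, embedded_H, P) are illustrative stand-ins, not
anyone's published code: P contains a verbatim copy of the claimed
decider's code, not a reference to it, and does the opposite of whatever
that copy predicts about P applied to its own description.

```python
def H_claimed(prog, arg):
    """Stand-in for any claimed total halt decider.
    True means 'prog(arg) halts'; False means 'prog(arg) never halts'.
    Here it answers False for everything; any fixed strategy fails."""
    return False

def embedded_H(prog, arg):
    """A verbatim COPY of H_claimed's code -- not a reference to it."""
    return False

def P(arg):
    """The counter-example: invert whatever the copied decider predicts."""
    if embedded_H(arg, arg):   # copy predicts 'halts'...
        while True:            # ...so loop forever
            pass
    return "halted"            # copy predicts 'never halts', so halt

# H_claimed(P, P) says P(P) never halts, yet P(P) plainly halts:
print(H_claimed(P, P))  # False ('never halts')
print(P(P))             # halted
```

Had H_claimed answered True instead, its copy inside P would send P into
the infinite loop, so the verdict is wrong either way; no self-reference
is involved anywhere.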


>>>
>>> Maybe they will believe that tiny space aliens living in the heads of
>>> Fox leadership took control of their brains and forced them to pay.
>>>
>>> The actual behavior of the actual input is correctly determined by an
>>> embedded UTM that has been adapted to watch the behavior of its
>>> simulation of its input and match any non-halting behavior patterns.
>>>
>>
>> But embedded_H isn't "embedded_UTM", so you are just living a lie.
>>
>
> embedded_H is embedded_UTM for the first N steps even when these N steps
> include 10,000 recursive simulations.

Nope. Just your LIES. You clearly don't understand what a UTM is.

>
> After 10,000 recursive simulations even an idiot can infer that more
> will not cause ⟨Ĥ⟩ simulated by embedded_H to reach its own final state
> of ⟨Ĥ.qn⟩ in any finite number of steps.

The fact is that if embedded_H does 10,000 recursive simulations and
then aborts, Ĥ will halt after 10,001.

Your problem is that your logic only works if you can find an N that is
bigger than N+1.
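This N-versus-N+1 effect can be sketched in a toy Python model. Everything
here (H, D, step, BUDGET, active) is an illustrative stand-in, not
anyone's actual implementation: H "simulates" its input by direct call
while charging every simulated step against a fixed budget, aborts when
its own budget runs out, and then reports non-halting. The directly
executed D(D) halts exactly one level beyond that abort.

```python
BUDGET = 5    # abort threshold: steps each simulator will watch
active = []   # stack of [simulator-id, remaining-steps] entries

class Abort(Exception):
    """Raised when some simulator's step budget is exhausted."""
    def __init__(self, owner):
        self.owner = owner

def step():
    """One simulated step: charge every simulator currently watching."""
    for entry in active:
        entry[1] -= 1
        if entry[1] == 0:
            raise Abort(entry[0])

def H(prog, arg):
    """Simulate prog(arg); True = 'halts', False = 'non-halting' (aborted)."""
    entry = [object(), BUDGET]
    active.append(entry)
    try:
        prog(arg)
        return True                 # the simulated input reached its end
    except Abort as a:
        if a.owner is not entry[0]:
            raise                   # an enclosing simulator aborted first
        return False                # my budget ran out: guess non-halting
    finally:
        active.remove(entry)

def D(arg):
    """The pathological input: does the opposite of what H predicts."""
    step()
    if H(arg, arg):
        while True:                 # H said 'halts', so loop forever
            step()
    # H said 'non-halting', so halt

print(H(D, D))   # False: H aborts its simulation and says non-halting...
D(D)             # ...but D(D), run directly, halts
print("D(D) halted")
```

The outermost simulator's budget always runs out first, so H(D, D)
answers False; yet D(D) run directly calls its own fresh copy of H, gets
that same False back, and halts, which is exactly the disputed situation.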

>
> You and I both know that mathematical induction proves this in far less
> than 10,000 recursive simulations. Why you deny it when you should know
> this is true is beyond me.

Nope, you are just proving that you don't even know what mathematical
induction means.

You are just too stupid.

You are just proving you are a liar.

You have met someone who calls you out on that, and you don't have answers.

You have just killed your reputation and any hope that someone might
look at your ideas about truth, as clearly you don't understand what
truth is.

Mr Flibble
22 Apr 2023, 12:46:58 a.m.
When Px is correctly simulated Px will terminate (halt) as there is no
pathological relationship between Px and H because Px discards the
result of H rather than trying to do the opposite of H's result.

/Flibble
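A sketch of this Px under the same hypothetical names (Px and H are
illustrative, not anyone's actual code): Px invokes the decider but
throws its verdict away, so no pathological inversion exists and Px
halts unconditionally.

```python
def H(prog, arg):
    """Stand-in decider; its verdict is irrelevant to Px."""
    return False

def Px(arg):
    H(arg, arg)     # call the decider, then DISCARD the result
    return "done"   # halt unconditionally

print(Px(Px))  # done -- Px halts no matter what H answers
```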

olcott
24 Apr 2023, 10:36:12 a.m.
You know that a halt decider must compute the mapping from its actual
input based on the actual specified behavior of this input, and then you
contradict yourself by insisting that the actual behavior of this actual
input is the wrong behavior to measure.

Richard Damon
24 Apr 2023, 7:35:42 p.m.
Right, and the "Actual Specified Behavior" of the input is DEFINED to be
the ACTUAL BEHAVIOR of the machine that input represents, which will be
identical to the actual behavior of that input processed by an ACTUAL
UTM (which, by definition, doesn't stop until it reaches a final state).

By THAT definition, D(D) Halts since H(D,D) returns non-halting, and
thus is wrong.

olcott
25 Apr 2023, 12:29:30 a.m.
*When you say that P must be ~P instead of P we know that you are wacky*

The actual behavior of ⟨Ĥ⟩ correctly simulated by embedded_H is
necessarily the behavior of the first N steps of ⟨Ĥ⟩ correctly simulated
by embedded_H. From these N steps we can prove by mathematical induction
that ⟨Ĥ⟩ correctly simulated by embedded_H cannot possibly reach its own
final state of ⟨Ĥ.qn⟩ in any finite number of steps.

> which will be
> identical to the actual behavior of that input processed by an ACTUAL
> UTM (which, by definition don't stop until they reach a final step).
>

The verified facts prove otherwise; people that persistently deny
verified facts may be in danger of Hell fire, depending on their
motives.

My motive is to mathematically formalize the notion of True(L,x) thus
refuting Tarski and Gödel.

We really need this now because AI systems are hallucinating:
https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)

> By THAT definition, D(D) Halts since H(D,D) returns non-halting, and
> thus is wrong.

Richard Damon
25 Apr 2023, 7:56:31 a.m.
What ~P?

>
> The actual behavior of ⟨Ĥ⟩ correctly simulated by embedded_H is
> necessarily the behavior of the first N steps of ⟨Ĥ⟩ correctly simulated
> by embedded_H. From these N steps we can prove by mathematical induction
> that ⟨Ĥ⟩ correctly simulated by embedded_H cannot possibly reach its own
> final state of ⟨Ĥ.qn⟩ in any finite number of steps.

But we don't care about the "first N steps of ⟨Ĥ⟩ correctly simulated";
we care about the behavior of the actual machine Ĥ ⟨Ĥ⟩, or the actual
FULL correct simulation of UTM ⟨Ĥ⟩ ⟨Ĥ⟩ [i.e. the input to H].

>
>> which will be identical to the actual behavior of that input processed
>> by an ACTUAL UTM (which, by definition don't stop until they reach a
>> final step).
>>
>
> The verified facts prove otherwise, people that persistently deny
> verified facts may be in danger of Hell fire, depending on their
> motives.

Nope, the actual VERIFIED FACTS prove what I say.

>
> My motive is to mathematically formalize the notion of True(L,x) thus
> refuting Tarski and Gödel.

Except that you don't even seem to understand your own terminology.

olcott
25 Apr 2023, 11:45:48 p.m.
The actual behavior of the input is the behavior of N steps correctly
simulated by embedded_H because embedded_H remains a UTM until it aborts
its simulation.

That these N steps provide a sufficient mathematical induction proof
that ⟨Ĥ⟩ correctly simulated by embedded_H cannot possibly reach its own
final state of ⟨Ĥ.qn⟩ in any finite number of steps is the correct basis
for the halt status decision by embedded_H.

That no textbook ever noticed that the behavior under pathological self-
reference(Olcott 2004) could possibly vary from the behavior when PSR
does not exist is only because everyone rejected the notion of a
simulation as any basis for a halt decider out of hand, without review.

For the whole history of the halting problem everyone simply assumed
that the halt decider must provide a correct yes/no answer when no
correct yes/no answer exists.

No one ever noticed that the pathological input would be trapped in
recursive simulation that never reaches any final state when this
counter-example input is input to a simulating halt decider.

Richard Damon
26 Apr 2023, 8:07:32 a.m.
ILLOGICAL STATEMENT.

Something can't be "a UTM until" as a UTM is a full identity, and
something is or isn't one.

That is like saying you are immortal until you die.

False premise means unsound logic.


Actual Behavior of the input is DEFINED to be the behavior of the actual
machine run on the actual input, which Halts. PERIOD.

> That these N steps provide a sufficient mathematical induction proof
> that ⟨Ĥ⟩ correctly simulated by embedded_H cannot possibly reach its own
> final state of ⟨Ĥ.qn⟩ in any finite number of steps is the correct basis
> for the halt status decision by embedded_H.

Nope. Please show the ACTUAL "induction proof" of your claim.

>
> That no textbook ever noticed that the behavior under pathological self-
> reference(Olcott 2004) could possibly vary from behavior when PSR does
> not exist is only because everyone rejected to notion of a simulation as
> any basis for halt decider out-of-hand without review.

Because you don't understand what actual behavior means, and that it
can't change based on who is looking at it.
>
> For the whole history of the halting problem everyone simply assumed
> that the halt decider must provide a correct yes/no answer when no
> correct yes/no answer exists.

Except that a correct answer does exist: it is whatever is the opposite
of what the decider H gives. The fact that H can't give it doesn't mean
it doesn't exist.
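This "the opposite answer is the correct one" point can be made concrete
with a hypothetical sketch (make_D, H_no, D_no are illustrative names):
build the pathological program from any claimed decider, and the correct
verdict about it is always the negation of what that decider returns.

```python
def make_D(H):
    """Build the pathological program from any claimed decider H."""
    def D(arg):
        if H(arg, arg):        # H predicts 'halts'...
            while True:        # ...so never halt
                pass
        return True            # H predicts 'non-halting', so halt
    return D

# A decider that answers 'non-halting' for everything:
H_no = lambda prog, arg: False
D_no = make_D(H_no)

print(H_no(D_no, D_no))  # False ('non-halting')
print(D_no(D_no))        # True: D_no halts, so the opposite was correct

# For a decider answering 'halts', D would loop forever -- again the
# opposite of the verdict (not executed here, since it never returns).
```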

>
> No one ever noticed that the pathological input would be trapped in
> recursive simulation that never reaches any final state when this
> counter-example input is input to a simulating halt decider.
>

Except that if the recursive simulation is never stopped, then the
decider isn't a decider. So the decider MUST make up its mind, or be
disqualified, and is always wrong.

YOU are just showing you don't understand how programs work.

olcott
26 Apr 2023, 10:34:39 p.m.
The actual behavior of the actual input is not necessarily the behavior
of a non-input as it has been assumed since forever.

Richard Damon
27 Apr 2023, 7:19:26 a.m.
On 4/26/23 10:34 PM, olcott wrote:

> The actual behavior of the actual input is not necessarily the behavior
> of a non-input as it has been assumed since forever.
>
>

But it isn't a "non-input" but is an actual property of the actual
input, and the property DEFINED as what the decider is supposed to decide.

Your inability to understand this simple requirement has made your life
a total waste.

You just don't seem to understand even the simplest of truths, likely
because you are just a pathological liar and truth means nothing to you.

olcott
27 Apr 2023, 9:15:11 p.m.
On 4/27/2023 6:19 AM, Richard Damon wrote:
> On 4/26/23 10:34 PM, olcott wrote:
>
>> The actual behavior of the actual input is not necessarily the
>> behavior of a non-input as it has been assumed since forever.
>>
>>
>
> But it isn't a "non-input" but is an actual property of the actual
> input, and the property DEFINED as what the decider is supposed to decide.
>

The actual behavior of the actual input MUST take into account that
pathological relationship between Ĥ and embedded_H.

> Your inability to understand this simple requirement has made your
> life a total waste.
>
> You just don't seem to understand even the simplest of truths, likely
> because you are just a pathological liar and truth means nothing to you.
I have said that this is my life's one legacy.
Everyone besides you believes that I believe what I say.
I can't be an actual liar if I believe what I say.

Richard Damon
27 Apr 2023, 10:41:29 p.m.
On 4/27/23 9:15 PM, olcott wrote:
> On 4/27/2023 6:19 AM, Richard Damon wrote:
>> On 4/26/23 10:34 PM, olcott wrote:
>>
>>> The actual behavior of the actual input is not necessarily the
>>> behavior of a non-input as it has been assumed since forever.
>>>
>>>
>>
>> But it isn't a "non-input" but is an actual property of the actual
>> input, and the property DEFINED as what the decider is supposed to
>> decide.
>>
>
> The actual behavior of the actual input MUST take into account that
> pathological relationship between Ĥ and embedded_H.
>
>> Your inability to understand this simple requirement has made your
>> life a total waste.
>>
>> You just don't seem to understand even the simplest of truths, likely
>> because you are just a pathological liar and truth means nothing to you.
> I have said that this is my life's one legacy.
> Everyone besides you believes that I believe what I say.
> I can't be an actual liar if I believe what I say.
>

You are just proving yourself to be a liar.

Just because you "believe" it doesn't totally make it not a lie. An
"innocent" mistake is not a lie, but when said with a blatant disregard
for the actual truth, it becomes a lie.

Your "legacy" is that you were an ignorant lying idiot.

If you REALLY actually believe the CRAP that you spew out, then you are
just proving that you are mentally incompetent.

olcott
27 Apr 2023, 11:15:59 p.m.
On 4/27/2023 9:41 PM, Richard Damon wrote:
> On 4/27/23 9:15 PM, olcott wrote:
>> On 4/27/2023 6:19 AM, Richard Damon wrote:
>>> On 4/26/23 10:34 PM, olcott wrote:
>>>
>>>> The actual behavior of the actual input is not necessarily the
>>>> behavior of a non-input as it has been assumed since forever.
>>>>
>>>>
>>>
>>> But it isn't a "non-input" but is an actual property of the actual
>>> input, and the property DEFINED as what the decider is supposed to
>>> decide.
>>>
>>
>> The actual behavior of the actual input MUST take into account that
>> pathological relationship between Ĥ and embedded_H.
>>
>>> Your inability to understand this simple requirement has made your
>>> life a total waste.
>>>
>>> You just don't seem to understand even the simplest of truths, likely
>>> because you are just a pathological liar and truth means nothing to you.
>> I have said that this is my life's one legacy.
>> Everyone besides you believes that I believe what I say.
>> I can't be an actual liar if I believe what I say.
>>
>
> You are just proving yourself to be a liar.
>
> Just because you "believe" it doesn't totally make it not a lie.

YES IT DOES (and you call me stupid) !!!
a false statement made with deliberate intent to deceive; an intentional
untruth. https://www.dictionary.com/browse/lie


Richard Damon
28 Apr 2023, 7:40:39 a.m.
https://www.dictionary.com/browse/lie

3 an inaccurate or untrue statement; falsehood:
When I went to school, history books were full of lies, and I won't
teach lies to kids.

5 to express what is false; convey a false impression.


It does not ALWAYS require actual knowledge that the statement is incorrect.

Again, you fail by the fallacy of attempting proof by example.


For example, note that in the recent defamation suit, it wasn't necessary
to prove that they "knew" the statement to be false for certain, only
that they had a blatant disregard for what is true.


You have been presented ample evidence that your statements are untrue,
and any normal competent person would see it; therefore your repeated
statements are just pathological lies. Lies because they are wrong,
and pathological because you appear to be incapable of actually knowing
the truth.




olcott
28 Apr 2023, 10:59:36 a.m.
Yes it does and you are stupid for saying otherwise.

Richard Damon
28 Apr 2023, 11:14:07 a.m.
Then why does the definition I quoted say otherwise?

That just shows you are the one that is stupid, and a liar.


olcott
28 Apr 2023, 11:21:23 a.m.
In other words you honestly believe that an honest mistake is a lie.
THAT MAKES YOU STUPID !!! (yet not a liar)

olcott
28 Apr 2023, 11:26:48 a.m.
On 4/28/2023 10:14 AM, Richard Damon wrote:
In this case you are proving to be stupid: (yet not a liar)

1. Traditional Definition of Lying
There is no universally accepted definition of lying to others. The
dictionary definition of lying is “to make a false statement with the
intention to deceive” (OED 1989) but there are numerous problems with
this definition. It is both too narrow, since it requires falsity, and
too broad, since it allows for lying about something other than what is
being stated, and lying to someone who is believed to be listening in
but who is not being addressed.

The most widely accepted definition of lying is the following: “A lie is
a statement made by one who does not believe it with the intention that
someone else shall be led to believe it” (Isenberg 1973, 248) (cf.
“[lying is] making a statement believed to be false, with the intention
of getting another to accept it as true” (Primoratz 1984, 54n2)). This
definition does not specify the addressee, however. It may be restated
as follows:

(L1) To lie =df to make a believed-false statement to another person
with the intention that the other person believe that statement to be true.

L1 is the traditional definition of lying. According to L1, there are at
least four necessary conditions for lying.

First, lying requires that a person make a statement (statement condition).

Second, lying requires that the person believe the statement to be
false; that is, lying requires that the statement be untruthful
(untruthfulness condition).

Third, lying requires that the untruthful statement be made to another
person (addressee condition).

Fourth, lying requires that the person intend that that other person
believe the untruthful statement to be true (intention to deceive the
addressee condition).

https://plato.stanford.edu/entries/lying-definition/#TraDefLyi

Richard Damon
28 Apr 2023, 11:44:35 a.m.
So, you are trying to use arguments to justify that you can say "false
statements" and not be considered a liar.

The fact that you seem to have KNOWN that the generally accepted truth
differed from your ideas does not excuse you from claiming that you can
state them as FACT and not be a liar.

The fact that your error has been pointed out an enormous number of
times makes your blatant disregard for the actual truth a suitable
stand-in for your own belief.

If you don't understand from all instruction you have been given that
you are wrong, you are just proved to be totally mentally incapable.

If you want to claim that you are not a liar by reason of insanity, make
that plea, but that just becomes an admission that you are a
pathological liar, a liar because of a mental illness.

Richard Damon
28 Apr 2023, 11:44:48 a.m.
On 4/28/23 11:21 AM, olcott wrote:
> On 4/28/2023 10:14 AM, Richard Damon wrote:
>> On 4/28/23 10:59 AM, olcott wrote:
>>> On 4/28/2023 6:40 AM, Richard Damon wrote:
>>
>>>> https://www.dictionary.com/browse/lie
>>>>
>>>> 3 an inaccurate or untrue statement; falsehood:
>>>>    When I went to school, history books were full of lies, and I
>>>> won't   teach lies to kids.
>>>>
>>>> 5 to express what is false; convey a false impression.
>>>>
>>>>
>>>> It does not ALWAYS require actual knowledge that the statement is
>>>> incorrect.
>>>>
>>>
>>> Yes it does and you are stupid for saying otherwise.
>>>
>>
>> Then why do the definition I quoted say otherwise?
>>
>> That just shows you are the one that is stupid, and a liar.
>
> In other words you honestly believe that an honest mistake is a lie.
> THAT MAKES YOU STUPID !!!  (yet not a liar)
>

So, you ADMIT that your ideas are a "Mistake"?

You ADMIT that your statements are untrue because your ideas, while
sincerely held by you, are admitted to be WRONG?

Note, these definitions point out that statements which are clearly
false can be considered as lies at face value.

Note also, I tend to use the term "Pathological liar", which implies
this sort of error: the speaker, due to mental deficiencies, has lost
the ability to actually know what is true or false. This seems to
describe you to the T.

I also use the term "Ignorant Liar" which means you lie out of a lack of
knowledge of the truth.




olcott
28 Apr 2023, 11:51:00 a.m.
When I say that an idea is a fact I mean that it is a semantic
tautology. That you don't understand things well enough to verify that
it is a semantic tautology does not even make my assertion false.

> The fact that your error has been pointed out an enormous number of
> times, makes you blatant disregard for the actual truth, a suitable
> stand in for your own belief.
>

The fact that no one has understood my semantic tautologies only proves
that no one has understood my semantic tautologies. It does not even
prove that my assertion is incorrect.

> If you don't understand from all instruction you have been given that
> you are wrong, you are just proved to be totally mentally incapable.
>
> If you want to claim that you are not a liar by reason of insanity, make
> that plea, but that just becomes an admission that you are a
> pathological liar, a liar because of a mental illness.
>

That you continue to believe that lies do not require an intention to
deceive after the above has been pointed out makes you willfully
ignorant, yet still not a liar.

olcott
28 Apr 2023, 12:05:53 p.m.
On 4/28/2023 10:44 AM, Richard Damon wrote:
> On 4/28/23 11:21 AM, olcott wrote:
>> On 4/28/2023 10:14 AM, Richard Damon wrote:
>>> On 4/28/23 10:59 AM, olcott wrote:
>>>> On 4/28/2023 6:40 AM, Richard Damon wrote:
>>>
>>>>> https://www.dictionary.com/browse/lie
>>>>>
>>>>> 3 an inaccurate or untrue statement; falsehood:
>>>>>    When I went to school, history books were full of lies, and I
>>>>> won't   teach lies to kids.
>>>>>
>>>>> 5 to express what is false; convey a false impression.
>>>>>
>>>>>
>>>>> It does not ALWAYS require actual knowledge that the statement is
>>>>> incorrect.
>>>>>
>>>>
>>>> Yes it does and you are stupid for saying otherwise.
>>>>
>>>
>>> Then why do the definition I quoted say otherwise?
>>>
>>> That just shows you are the one that is stupid, and a liar.
>>
>> In other words you honestly believe that an honest mistake is a lie.
>> THAT MAKES YOU STUPID !!!  (yet not a liar)
>>
>
> So, you ADMIT that you ideas are a "Mistake"?
>

No, to the best of my knowledge I have correctly proved all of my
assertions are semantic tautologies thus necessarily true.

The fact that few besides me understand that they are semantic
tautologies is not an actual rebuttal at all.

> You ADMIT that your statements are untrue because you ideas, while
> sincerly held by you, are admitted to be WRONG?
>
> Note, these definition point to statements which are made that are
> clearly false can be considered as lies on their face value.
>

I can call you a liar on the basis that when you sleep at night you
probably lie down. This is not what is meant by liar.

> Note also, I tend to use the term "Pathological liar", which implies
> this sort error, the speaker, due to mental deficiencies have lost the
> ability to actual know what is true or false. This seems to describe you
> to the T.
>
> I also use the term "Ignorant Liar" which means you lie out of a lack of
> knowledge of the truth.

I am not a liar in any sense of the commonly accepted definition of
liar, which requires that four conditions be met.

there are at least four necessary conditions for lying:

First, lying requires that a person make a statement (statement
condition).

Second, lying requires that the person believe the statement to be
false; that is, lying requires that the statement be untruthful
(untruthfulness condition).

Third, lying requires that the untruthful statement be made to another
person (addressee condition).

Fourth, lying requires that the person intend that that other person
believe the untruthful statement to be true (intention to deceive the
addressee condition).

https://plato.stanford.edu/entries/lying-definition/#TraDefLyi

That you continue to call me a "liar" while failing to disclose that you
are not referring to what everyone else means by the term meets the
legal definition of "actual malice".

https://www.mtsu.edu/first-amendment/article/889/actual-malice

Richard Damon
28 Apr 2023, 12:41:56 p.m.
On 4/28/23 12:05 PM, olcott wrote:
> On 4/28/2023 10:44 AM, Richard Damon wrote:
>> On 4/28/23 11:21 AM, olcott wrote:
>>> On 4/28/2023 10:14 AM, Richard Damon wrote:
>>>> On 4/28/23 10:59 AM, olcott wrote:
>>>>> On 4/28/2023 6:40 AM, Richard Damon wrote:
>>>>
>>>>>> https://www.dictionary.com/browse/lie
>>>>>>
>>>>>> 3 an inaccurate or untrue statement; falsehood:
>>>>>>    When I went to school, history books were full of lies, and I
>>>>>> won't   teach lies to kids.
>>>>>>
>>>>>> 5 to express what is false; convey a false impression.
>>>>>>
>>>>>>
>>>>>> It does not ALWAYS require actual knowledge that the statement is
>>>>>> incorrect.
>>>>>>
>>>>>
>>>>> Yes it does and you are stupid for saying otherwise.
>>>>>
>>>>
>>>> Then why do the definition I quoted say otherwise?
>>>>
>>>> That just shows you are the one that is stupid, and a liar.
>>>
>>> In other words you honestly believe that an honest mistake is a lie.
>>> THAT MAKES YOU STUPID !!!  (yet not a liar)
>>>
>>
>> So, you ADMIT that you ideas are a "Mistake"?
>>
>
> No, to the best of my knowledge I have correctly proved all of my
> assertions are semantic tautologies thus necessarily true.
>
> The fact that few besides me understand that they are semantic
> tautologies is not actual rebuttal at all.

No, but the fact that you can't rebut the claims against your
arguments, and really haven't tried, implies that you know that your
claims are baseless.


If your counter to the fact that you have made clearly factually
incorrect statements is that "honest mistakes" are not lies, that just
shows what you consider your grounds to defend yourself.

>
>> You ADMIT that your statements are untrue because your ideas, while
>> sincerely held by you, are admitted to be WRONG?
>>
>> Note, these definition point to statements which are made that are
>> clearly false can be considered as lies on their face value.
>>
>
> I can call you a liar on the basis that when you sleep at night you
> probably lie down. This is not what is meant by liar.

So, you admit you don't understand the definition of liar?

>
>> Note also, I tend to use the term "Pathological liar", which implies
>> this sort error, the speaker, due to mental deficiencies have lost the
>> ability to actual know what is true or false. This seems to describe
>> you to the T.
>>
>> I also use the term "Ignorant Liar" which means you lie out of a lack
>> of knowledge of the truth.
>
> I am not a liar in any sense of the common accepted definition of liar
> that requires that four conditions be met.

But you are by MY definition, the one I posted: someone who makes false
or misleading statements.

>
> there are at least four necessary conditions for lying:
>
> First, lying requires that a person make a statement (statement
> condition).
>
> Second, lying requires that the person believe the statement to be
> false; that is, lying requires that the statement be untruthful
> (untruthfulness condition).
>
> Third, lying requires that the untruthful statement be made to another
> person (addressee condition).
>
> Fourth, lying requires that the person intend that that other person
> believe the untruthful statement to be true (intention to deceive the
> addressee condition).
>
> https://plato.stanford.edu/entries/lying-definition/#TraDefLyi
>
> That you continue to call me a "liar" while failing to disclose that you
> are are not referring to what everyone else means by the term meets the
> legal definition of "actual malice"
>
> https://www.mtsu.edu/first-amendment/article/889/actual-malice
>

So, you don't accept definitions 3 or 5 of the reference you cited,
which do NOT require knowledge of the error by the person?

Note, YOU don't get to limit the definition of a word as it is used by
another. That shows YOU don't understand how communication works.

There is a significant difference between an "Honest Mistake" and being
a denier of the truth when presented.

Unless you want to retract all your statements about the "Trump Lie",
since some of the people seem to honestly believe it.

Richard Damon
28 Apr 2023, 12:41:58 p.m.
So, you admit that you don't know the actual meaning of a FACT.


>> The fact that your error has been pointed out an enormous number of
>> times, makes you blatant disregard for the actual truth, a suitable
>> stand in for your own belief.
>>
>
> That fact that no one has understood my semantic tautologies only proves
> that no one has understood my semantic tautologies. It does not even
> prove that my assertion is incorrect.

No, the fact that you ACCEPT that most existing logic is valid, but then
try to change the rules at the far end, without understanding that you
are accepting things your logic likely rejects, shows that you don't
understand how logic actually works.

You present "semantic tautologies" based on FALSE definitions and
results that you cannot prove.

>
>> If you don't understand from all instruction you have been given that
>> you are wrong, you are just proved to be totally mentally incapable.
>>
>> If you want to claim that you are not a liar by reason of insanity,
>> make that plea, but that just becomes an admission that you are a
>> pathological liar, a liar because of a mental illness.
>>
>
> That you continue to believe that lies do not require an intention to
> deceive after the above has been pointed out makes you willfully
> ignorant, yet still not a liar.
>

But, by the definition I use, since it has been made clear to you that
you are wrong, your continuing to spout words that have been proven
incorrect makes YOU a pathological liar.

Also, I am not "ignorant", since that means not having knowledge or
awareness of something; I do understand what you are saying and am
aware of your ideas, AND I POINT OUT YOUR ERRORS. YOU are the ignorant
one, as you don't seem to understand enough to even comment on the
rebuttals to your claims.

THAT shows ignorance, and stupidity.

olcott
28 Apr 2023, 12:58:54 p.m.
The SEP article references the four required conditions for
"The most widely accepted definition of lying"

The most widely accepted definition of lying is the following: “A lie is
a statement made by one who does not believe it with the intention that
someone else shall be led to believe it” (Isenberg 1973, 248) (cf.
“[lying is] making a statement believed to be false, with the intention
of getting another to accept it as true” (Primoratz 1984, 54n2)). This
definition does not specify the addressee, however. It may be restated
as follows:

(L1) To lie =df to make a believed-false statement to another person
with the intention that the other person believe that statement to be true.

L1 is the traditional definition of lying. According to L1, there are at
least four necessary conditions for lying.