
The ultimate foundation of [a priori] Truth


Pete Olcott

Feb 17, 2018, 1:42:55 AM
a Collection is defined one or more things that have one or more properties in common. These operations from set theory are available:  {⊆, ∈}

A BaseFact is an expression X of (natural or formal) language L that has been assigned the semantic property of True. (Similar to a math Axiom).

A Collection T of BaseFacts of language L forms the ultimate foundation of the notion of Truth in language L.

To verify that an expression X of language L is True or False only requires a syntactic logical consequence inference chain (formal proof) from one or more elements of T to X or ~X.

True(L, X) ↔ ∃Γ ⊆ BaseFact(L) Provable(Γ, X)
False(L, X) ↔ ∃Γ ⊆ BaseFact(L) Provable(Γ, ~X)
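For concreteness, the two biconditionals above can be sketched as a toy program: an expression counts as True when the base facts derive it, False when they derive its negation, and neither otherwise. The rule format and the names `provable` and `truth_value` are illustrative assumptions, not part of the post.

```python
# Toy sketch of the quoted definitions over propositional atoms.
# A rule (premises, conclusion) licenses one inference step; provable()
# forward-chains to a fixed point. Since chaining is monotonic, "some
# subset Gamma of the base facts proves X" holds exactly when chaining
# from ALL the base facts reaches X.

def provable(facts, rules, goal):
    """Forward-chain from `facts` using `rules` until nothing new follows."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return goal in known

def truth_value(base_facts, rules, x):
    """'True' / 'False' / 'Undecided' per the quoted biconditionals."""
    if provable(base_facts, rules, x):
        return "True"
    if provable(base_facts, rules, ("not", x)):
        return "False"
    return "Undecided"

rules = [({"rain"}, "wet"), ({"wet"}, ("not", "dry"))]
base_facts = {"rain"}
print(truth_value(base_facts, rules, "wet"))    # True
print(truth_value(base_facts, rules, "dry"))    # False
print(truth_value(base_facts, rules, "sunny"))  # Undecided
```

Note that the third outcome is exactly the gap the rest of the thread argues about: a sentence neither provable nor refutable from the base facts gets no truth value under these definitions.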

Copyright 2018 (and many other years since 1997) Pete Olcott

--
∀L ∈ Formal_Systems
∀X
True(L, X) ↔ ∃Γ ⊆ Axioms(L) Provable(Γ, X)

Jeff Barnett

Feb 17, 2018, 2:35:06 AM
The above is plain silly. Why not write, instead:

True(L, X) <-> Provable(BaseFact(L), X)
False(L, X) <-> Provable(BaseFact(L), X)

You might want to confound truth and provability but it gains little and
loses a lot as has been pointed out to you for years. I believe that you
have set a record for low number of converts per message. Please note
that most responses to your messages are from (wait for it!) you.
Usually with yet another correction.

You claim elsewhere that you've spent 30,000 hours and you have
pointless complications like the above that provide no insight to you or
anybody else; they only serve to confuse you. 30,000 hours is about 15
work years and every attempt to write a proof by you must be corrected a
zillion times and it never quite comes out right in the end. Take a
holiday. Relax. Start again. Quit arguing and try to listen to others so
you might learn something. You have picked a way to waste your life and
destroy a newsgroup.

I don't expect this message to turn you wise. You are too dedicated to
being recognized as a great thinker. Maybe after another 30,000 hours
you will be old and broken but you will not secure your theorem or your
fame. Get some sleep. Read a good book. Assume that the scientific and
philosophical communities are 1) not all nuts and 2) not out to get you.
--
Jeff Barnett

António Marques

Feb 17, 2018, 5:22:00 AM
Jeff Barnett <j...@notatt.com> wrote:
> .... Assume that the scientific and
> philosophical communities are 1) not all nuts and 2) not out to get you.

3) not by any means devoid of people who know all 3 of Math / Computer
Science / Linguistics.

Peter Percival

Feb 17, 2018, 7:00:20 AM
Pete Olcott wrote:
> a Collection is defined one or more things that have one or more
> properties in common. These operations from set theory are available:
> {⊆, ∈}

Those aren't operations, they are relations. Operations are such things
as union and intersection.


Pete Olcott

Feb 17, 2018, 10:23:18 AM
That is very helpful thanks. Those two operators are the only ones
that I really need to make all of my key points. I am not sure if
my system has to be called type theory or not. It is those two
operators that limit relations to specific types.

George Greene

Feb 17, 2018, 11:04:56 AM
On Saturday, February 17, 2018 at 1:42:55 AM UTC-5, Pete Olcott wrote:
> a Collection is defined

You left out the word "as".

> one or more things

The empty collection is a collection,
dumbass.

Pete Olcott

Feb 17, 2018, 3:20:06 PM
On 2/17/2018 1:35 AM, Jeff Barnett wrote:
Pete Olcott wrote on 2/16/2018 11:42 PM:
a Collection is defined one or more things that have one or more properties in common. These operations from set theory are available:  {⊆, ∈}

A BaseFact is an expression X of (natural or formal) language L that has been assigned the semantic property of True. (Similar to a math Axiom).

A Collection T of BaseFacts of language L forms the ultimate foundation of the notion of Truth in language L.

To verify that an expression X of language L is True or False only requires a syntactic logical consequence inference chain (formal proof) from one or more elements of T to X or ~X.

True(L, X) ↔ ∃Γ ⊆ BaseFact(L) Provable(Γ, X)
False(L, X) ↔ ∃Γ ⊆ BaseFact(L) Provable(Γ, ~X)

Copyright 2018 (and many other years since 1997) Pete Olcott

The above is plain silly. Why not write, instead:

True(L, X) <-> Provable(BaseFact(L), X)
False(L, X) <-> Provable(BaseFact(L), X)

That would define True and False to be the same, and we aren't really
proving from all BaseFacts, only from a tiny subset of them.

I am using this as my basis for the correct way to say these things:
https://en.wikipedia.org/wiki/Logical_consequence#Syntactic_consequence
A formula A is a syntactic consequence within some formal system FS
of a set Γ of formulas if there is a formal proof in FS of A from the set Γ.
Γ ⊢FS A (translated to Minimal Type Theory becomes) ∃Γ ⊆ FS Provable(Γ, A)

It is this formula that refutes every otherwise well-formed formula (WFF) of a
formal language that has the semantic error of pathological self-reference (Olcott 2004):

∀L ∈ Formal_Systems
∀X ∈ Closed-WFF(L)
~True(L, X) ∧ ~False(L, X) → Incorrect(L, X)

This applies to:
(1) Liar Paradox
(2) 1931 Incompleteness Theorem
(3) 1936 Halting Problem
(4) Tarski 1936 undefinability theorem.



You might want to confound truth and provability but it gains little and loses a lot as has been pointed out to you for years. I believe that you have set a record for low number of converts per message. Please note that most responses to your messages are from (wait for it!) you. Usually with yet another correction.


According to Mind magazine, that is the way that creative genius works:
THE SCIENCE OF GENIUS,
Scientific American Mind, Nov/Dec 2012

According to one theory I helped to develop, a genius hunts widely
almost blindly for a solution to a problem, exploring dead ends and
backtracking repeatedly before arriving at the ideal answer.

Creative achievement is positively associated both with cognitive
disinhibition and with openness to supposedly extraneous ideas.

According to a theory proposed in 1960 by psychologist Donald
Campbell, creative thought emerges through a process or
procedure he termed blind variation and selective retention
(BVSR). In short, a creator must try out ideas that might fail before
hitting on a breakthrough.

The blindness of BVSR merely means that ideas are produced
without foresight into their eventual utility. The creator must
engage in trial-and-error or generate-and-test procedures to
determine the worth of an idea. Two common phenomena
characterize BVSR thinking: superfluity and backtracking.
Superfluity means that the creator generates a variety of ideas,
one or more of which turn out to be useless. Backtracking
signifies that the creator must often return to an earlier approach
after blindly going off in the wrong direction. Superfluity and
backtracking are often found together in the same creative episode.
Exploring the wrong track obliges a return to options that had
been originally cast aside.

"Talent hits a target no one else can hit; genius hits a target no one else can see."


You claim elsewhere that you've spent 30,000 hours

On various software inventions, including my two US patents:
7,046,848
9,171,207

The time that I spent on the self-reference paradox:
about 1,000 hours from 1997 to 2003 (alt.philosophy)
3,000 hours from 2004 to 2015 (comp.theory and sci.logic)
3,000 hours from 2016 to present (comp.theory and sci.logic)


and you have pointless complications like the above that provide no insight to you or anybody else; they only serve to confuse you. 30,000 hours is about 15 work years and every attempt to write a proof by you must be corrected a zillion times and it never quite comes out right in the end. Take a holiday. Relax. Start again. Quit arguing and try to listen to others so you might learn something. You have picked a way to waste your life and destroy a newsgroup.

I don't expect this message to turn you wise. You are too dedicated to being recognized as a great thinker. Maybe after another 30,000 hours you will be old and broken but you will not secure your theorem or your fame. Get some sleep. Read a good book. Assume that the scientific and philosophical communities are 1) not all nuts and 2) not out to get you.

Here is the same material with key updates.
http://liarparadox.org/index.php/2018/02/17/the-ultimate-foundation-of-a-priori-truth/

I will be increasingly specific about exactly how this
correctly refutes Tarski's 1936 undefinability theorem.


--

Pete Olcott

Feb 17, 2018, 3:33:05 PM

http://liarparadox.org/index.php/2018/02/17/the-ultimate-foundation-of-a-priori-truth/

I got rid of the need to even refer to the term "collection", ignoramus.
Some concept of axiomatic set/class/collection/group theory is implied.

Basically I must divide finite strings into three categories:
∀L ∈ Formal_Systems
∀X ∈ Finite_Strings
Semantically_Correct(L, X) ⊆ WFF(L, X) ⊆ Finite_Strings






--

Pete Olcott

Feb 17, 2018, 5:14:05 PM
As it turns out all three of those were required for my current insights.
My purpose in studying linguistics was formalizing natural language.
Much of what I have said can be found on sci.lang

My purpose in studying math and logic was to provide the infrastructure
for formalizing natural language. It turns out that the easiest way to
do this was to very slightly extend the syntax of FOL such that this
slightly adapted syntax could be used to formalize any predicate of
any finite order of HOL.

I have called it Minimal Type Theory for quite a while because it uses
SUBSET and ELEMENT_OF operators to restrict the domain of discourse.

The FOPL syntax that Minimal Type Theory is based on:
https://groups.google.com/forum/#!original/comp.compilers/Qalayu9h3xw/gaTrXbbRd7AJ

After I found out that only pure Lambda Calculus is Turing Complete I
redesigned the syntax of Minimal Type Theory so that it stays within
conventional Predicate Logic boundaries. This also greatly reduces its
learning curve.

The one guy that I ran into that could understand both linguistics and
computer science had for many years objected to the formalization of
natural language semantics because Frege's Principle of compositionality
did not seem to include context. It seems that I finally won him over by
this reference to Davidsonian Representations:

http://www.cyc.com/documentation/ontologists-handbook/writing-efficient-cycl/cycl-representation-choices/

Peter Percival

Feb 17, 2018, 5:22:36 PM
Pete Olcott wrote:

> My purpose in studying math and logic was to provide the infrastructure

And what have you learned of mathematics and logic?

George Greene

Feb 17, 2018, 5:22:52 PM
On Saturday, February 17, 2018 at 1:42:55 AM UTC-5, Pete Olcott wrote:
> a Collection is

Since we in fact use strings of characters to talk about all these things,
it might be a contribution to stop pretending that we even know how to forget
about order in the first place, and start with
STRINGS
instead of collections.
THE ACTUAL "ultimate" foundation of "a priori" truth
is occurrence of substrings. The subspecies of "a priori"
truth that I am trying to focus on here is "ad oculos" --
it just hits you in the eye, just from looking at it.
I mean things like
"abc" begins with "ab"
or
"abc" ends with "bc".
I could have said "begins with 'a'" or "ends with 'c'"
but that would have introduced character vs. string.
I'd rather stick with 1-character strings instead.
"The empty string is shorter than all other strings"
winds up in there too, somewhere.
Despite the fact that you can't actually look at all other
strings, you STILL CAN tell, JUST from looking at them, that
that universal generalization about strings is true.
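George's "ad oculos" facts are the kind of thing a machine can also check by direct inspection. A minimal sketch restating his examples (Python stand-ins, not anyone's formal system):

```python
# The "hits you in the eye" string truths, restated as direct checks.
assert "abc".startswith("ab")   # "abc" begins with "ab"
assert "abc".endswith("bc")     # "abc" ends with "bc"

# "The empty string is shorter than all other strings": checkable on any
# finite sample, though the universal claim is not itself a finite check.
assert all(len("") < len(s) for s in ["a", "ab", "abc"])
print("ok")
```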

Pete Olcott

Feb 17, 2018, 6:08:12 PM

A continually better way to prove that I was totally right all along.
(Since at least 1997, as far back as my USENET messages go.)

--

Pete Olcott

Feb 17, 2018, 6:15:06 PM
On 2/17/2018 4:22 PM, George Greene wrote:
> On Saturday, February 17, 2018 at 1:42:55 AM UTC-5, Pete Olcott wrote:
>> a Collection is
>
> Since we in fact use strings of characters to talk about all these things,
> it might be a contribution to stop pretending that we even know how to forget
> about order in the first place, and start with
> STRINGS
> instead of collections.
> THE ACTUAL "ultimate" foundation of "a priori" truth
> is occurrence of substrings. The subspecies of "a priori"
> truth that I am trying to focus on here is "ad oculos" --
> it just hits you in the eye, just from looking at it.
> I mean things like
> "abc" begins with "ab"
> or


Yeah, not really George. At least you are not being insulting.
It turns out that strings are only good for specifying relations,
the atomic basis of all knowledge.

The subatomic particles of these atoms of knowledge are
the connections between the constituent parts of a semantic atom.

Spend a few years studying formal semantics of linguistics and you
might see this. Frege's principle of compositionality goes far
deeper than Frege ever knew.

> "abc" ends with "bc".
> I could have said "begins with 'a'" or "ends with 'c'"
> but that would have introduced character vs. string.
> I'd rather stick with 1-character strings instead.
> "The empty string is shorter than all other strings"
> winds up in there too, somewhere.
> Despite the fact that you can't actually look at all other
> strings, you STILL CAN tell, JUST from looking at them, that
> that universal generalization about strings is true.
>


Pete Olcott

Feb 17, 2018, 11:17:01 PM
On 2/17/2018 5:08 PM, Pete Olcott wrote:
On 2/17/2018 4:22 PM, Peter Percival wrote:
Pete Olcott wrote:

My purpose in studying math and logic was to provide the infrastructure

And what have you learned of mathematics and logic?


A continually better way to prove that I was totally right all along.
(Since at least 1997, as far back as my USENET messages go.)


When I say that I have been totally right all along I mean that I am
right that the Liar Paradox, Halting Problem, and 1931 GIT all make
exactly the same mistake.  The first time that I made this claim in
a public forum was April 16, 1998.

Pete Olcott

Feb 18, 2018, 10:27:18 AM
On 2/17/2018 11:16 PM, DKleinecke wrote:
On Saturday, February 17, 2018 at 8:17:01 PM UTC-8, Pete Olcott wrote:
      
∀L ∈ Formal_Systems
∀X True(L, X) ↔ ∃Γ ⊆ Axioms(L) Provable(Γ, X)
Why subset the axioms?

Your definition of "True" works just as well as

    True (L, X) means Provable (Axioms (L), X)


The reason that I am able to do things such as correctly refute
Tarski's undefinability theorem is that I make sure that the syntax
always precisely represents all of the underlying semantics.

Since the proof of True does not use all of the axioms, as your
form says it does, your form is incorrect.

Here is my model for saying these things

https://en.wikipedia.org/wiki/Logical_consequence#Syntactic_consequence
A formula A is a syntactic consequence within some formal system FS
of a set Γ of formulas if there is a formal proof in FS of A from the set Γ.
Γ ⊢FS A (translated to Minimal Type Theory becomes) ∃Γ ⊆ FS Provable(Γ, A)

In the above case Γ ⊆ FS would be more accurately stated as Γ ⊆ WFF(FS)

António Marques

Feb 18, 2018, 1:07:40 PM
Pete Olcott <Pe...@NoEmail.address> wrote:
>
> The one guy that I ran into that could understand both linguistics and
> computer science

But proficiency in both of those subjects and more within the same person
is not by any means so rare.

Ben Bacarisse

Feb 18, 2018, 2:45:50 PM
Indeed. At least one UK university offers (or offered -- I have not
checked recently) a joint honours degree in CS and linguistics.

--
Ben.

Jeff Barnett

Feb 18, 2018, 4:43:53 PM
This used to be called "Computational Linguistics". It probably started
with Chomsky's work in formalizing how a grammar could be (more)
precisely represented. During the 1960s we started to see kids who could
hack putting their professors' treatises on languages put to he test by
programs that generated utterances from the proposed grammars and some
marked up lexicons. A great time was had by all and the age of checking
linguist proposals (with computers) before publishing was ushered in. Of
course, today, some believe that neural nets are the path to all things
to hard for us to understand; so today it is never the linguist's fault,
it's inadequate network technology.
--
Jeff Barnett

Ben Bacarisse

Feb 18, 2018, 6:38:18 PM
Jeff Barnett <j...@notatt.com> writes:

> Ben Bacarisse wrote on 2/18/2018 12:45 PM:
>> António Marques <anton...@sapo.pt> writes:
>>
>>> Pete Olcott <Pe...@NoEmail.address> wrote:
>>>>
>>>> The one guy that I ran into that could understand both linguistics and
>>>> computer science
>>>
>>> But proficiency in both of those subjects and more within the same person
>>> is not by any means so rare.
>>
>> Indeed. At least one UK university offers (or offered -- I have not
>> checked recently) a joint honours degree in CS and linguistics.
>
> This used to be called "Computational Linguistics".

That's one of the subjects in the overlap, but that's not really what I
was talking about. In a joint degree, students would do courses in
linguistics that have nothing to do with computer science, and computer
science courses that have nothing to do with linguistics. The purpose
is to study both subjects as well as the common ground.

--
Ben.

Jeff Barnett

Feb 18, 2018, 7:42:54 PM
But that is mostly true of all dual studies involving computer science:
in order to make progress some deep discipline knowledge is necessary,
and equally, to program you must know quite a bit of CS, e.g., Knuth's
books, computation theory, etc. A degree in Computational Linguistics
used to include everything from comparative grammar methods and
lexicography to an intro to operating systems. I think we are talking about
the same thing.
--
Jeff Barnett

Ben Bacarisse

Feb 18, 2018, 7:54:08 PM
Yes, a degree called "Computational Linguistics" would probably be
pretty similar to the kind of joint degree I was talking about. I had
only encountered the name applied to an individual unit -- so you could
take, for example, "Computational Linguistics" as an option as part of a
CS degree.

--
Ben.

graham...@gmail.com

Feb 18, 2018, 8:16:53 PM
On Saturday, February 17, 2018 at 5:35:06 PM UTC+10, Jeff Barnett wrote:
> Pete Olcott wrote on 2/16/2018 11:42 PM:
> > a Collection is defined one or more things that have one or more
> > properties in common. These operations from set theory are available:
> > {⊆, ∈}
> >
> > A BaseFact is an expression X of (natural or formal) language L that
> > has been assigned the semantic property of True. (Similar to a math Axiom).
> >
> > A Collection T of BaseFacts of language L forms the ultimate foundation
> > of the notion of Truth in language L.
> >
> > To verify that an expression X of language L is True or False only
> > requires a syntactic logical consequence inference chain (formal proof)
> > from one or more elements of T to X or ~X.
> >
> > True(L, X) ↔ ∃Γ ⊆ BaseFact(L) Provable(Γ, X)
> > False(L, X) ↔ ∃Γ ⊆ BaseFact(L) Provable(Γ, ~X)
> >
> > Copyright 2018 (and many other years since 1997) Pete Olcott
>
> The above is plain silly. Why not write, instead:
>
> True(L, X) <-> Provable(BaseFact(L), X)
> False(L, X) <-> Provable(BaseFact(L), X)
>
> You might want to confound truth and provability but it gains little and




44. isProofFigure(x) ⇔ length(x) > 0 ∧
    ∀ 0 < n ≤ length(x): isAxiom(item(n,x)) ∨
        ∃ 0 < p,q < n: immConseq(item(n,x), item(p,x), item(q,x))

x is a proof figure (a finite sequence of formulae, each of which is either an axiom or the immediate consequence of two of the preceding ones).

OK, so what is TRUE(x)?

HINT:

PROOF <==> PROVABLE <==> DERIVABLE <==> IS_THEOREM <==> T() <==> TRU() <==> TRUE()


George Greene

Feb 19, 2018, 1:22:44 AM
On Saturday, February 17, 2018 at 11:17:01 PM UTC-5, Pete Olcott wrote:
> When I say that I have been totally right all along I mean that I am
>
> right that the Liar Paradox, Halting Problem, and 1931 GIT all make
>
> exactly the same mistake.

You really are completely full of shit about that.
The Liar Paradox is transparently self-referential and paradoxical.
In the other two, self-reference really has nothing to do with it.
The self-reference is opaque in the case of GIT, and in the case of
the halting problem, there is no self-reference at all. The easiest
proof is self-referential, but the fundamental reason why no TM can solve
halting for all TMs is basically the same as the reason why there is no
largest number: for every number there is a bigger one, and for every TM,
there is a more complex one.

peteolcott

Feb 19, 2018, 2:21:34 AM
On Monday, February 19, 2018 at 12:22:44 AM UTC-6, George Greene wrote:
> On Saturday, February 17, 2018 at 11:17:01 PM UTC-5, Pete Olcott wrote:
> > When I say that I have been totally right all along I mean that I am
> >
> > right that the Liar Paradox, Halting Problem, and 1931 GIT all make
> >
> > exactly the same mistake.
>
> You really are completely full of shit about that.
> The Liar Paradox is transparently self-referential and paradoxical.
> In the other two, self-reference really has nothing to do with it.
> The self-reference is opaque in the case of GIT, and in the case of

> the halting problem, there is no self-reference at all.

Why do you say things that are so easily verified as false, George?
Top of page 320:

http://liarparadox.org/HP_Infinite_Recursion.pdf

What happens when H^ is applied to an input string w^ of ITSELF ???

You may actually have an MSCS, but your discipline about
getting the facts straight is exceeded by the average 7th grader's.

I don't say this to insult you. (Like you do to me). I say
this so that I can actually get an accurate critique.
You are capable of an accurate critique aren't you?

Pete Olcott

Feb 19, 2018, 1:13:44 PM

The above has been simplified and clarified:
--BaseFact is defined as an expression X of (natural or
--formal) language L that has been assigned the semantic
--property of True. (Similar to a math Axiom).

New Material:
--BaseFacts that contradict other BaseFacts are prohibited.
--BaseFacts must specify Relations between Things. There
--are no other requirements for BaseFacts.

--To verify that an expression X of language L is True
--or False only requires a syntactic logical consequence
--inference chain (formal proof) from one or more BaseFacts
--to X or ~X.    (Backward chaining reverses this order).

--True(L, X) ↔ ∃Γ ⊆ BaseFacts(L) Provable(Γ, X)
--False(L, X) ↔ ∃Γ ⊆ BaseFacts(L) Provable(Γ, ~X)

Defining a generic decidability decider

∀L ∈ Formal_Systems
∀X ∈ Closed-WFF(L)
~True(L, X) ∧ ~False(L, X) → Incorrect(L, X)

Copyright 2018 (and many other years since 1997) Pete Olcott

--

Pete Olcott

Feb 19, 2018, 1:28:23 PM
On 2/19/2018 12:13 PM, Pete Olcott wrote:
On 2/17/2018 12:42 AM, Pete Olcott wrote:
a Collection is defined one or more things that have one or more properties in common. These operations from set theory are available:  {⊆, ∈}

A BaseFact is an expression X of (natural or formal) language L that has been assigned the semantic property of True. (Similar to a math Axiom).

A Collection T of BaseFacts of language L forms the ultimate foundation of the notion of Truth in language L.

To verify that an expression X of language L is True or False only requires a syntactic logical consequence inference chain (formal proof) from one or more elements of T to X or ~X.

True(L, X) ↔ ∃Γ ⊆ BaseFact(L) Provable(Γ, X)
False(L, X) ↔ ∃Γ ⊆ BaseFact(L) Provable(Γ, ~X)

Copyright 2018 (and many other years since 1997) Pete Olcott


---The above has been simplified and clarified:
---BaseFact is defined as an expression X of (natural or
---formal) language L that has been assigned the semantic
---property of True. (Similar to a math Axiom).

---New Material:
---BaseFacts that contradict other BaseFacts are prohibited.
---BaseFacts must specify Relations between Things. There
---are no other requirements for BaseFacts.

---To verify that an expression X of language L is True
---or False only requires a syntactic logical consequence
---inference chain (formal proof) from one or more BaseFacts
---to X or ~X.    (Backward chaining reverses this order).

---True(L, X) ↔ ∃Γ ⊆ BaseFacts(L) Provable(Γ, X)
---False(L, X) ↔ ∃Γ ⊆ BaseFacts(L) Provable(Γ, ~X)

--- Defining a generic decidability decider
--- ∀L ∈ Formal_Systems
--- ∀X ∈ Closed-WFF(L)
--- ~True(L, X) ∧ ~False(L, X) → Incorrect(L, X)

Pete Olcott

Feb 20, 2018, 7:31:08 PM
On 2/19/2018 7:50 PM, Dr. HotSalt wrote:
> On Saturday, February 17, 2018 at 2:14:05 PM UTC-8, Pete Olcott wrote:
>
> (snip)
>
>> My purpose in studying linguistics was formalizing natural language.
> Why on Earth would you want to even try to do that?

Because it is the only possible way for machines to acquire a human
degree of comprehension of natural language.

> Natural languages are not used for the same purposes as formal languages. You're trying to turn a box of quill pens into a set of box-end wrenches.
>
>> My purpose in studying math and logic was to provide the infrastructure
>> for formalizing natural language.
> Wrong tools. I have told you before and will tell you again- humans do not think logically except under extreme duress. We think associatively by default.
>
> You yourself demonstrated that when you began this thread with this sentence:
>
> "a Collection is defined one or more things that have one or more properties in common."
>
> (You will make progress if you acknowledge the fundamental associativity of human thinking and work with it instead of against it. You've made a start despite yourself. Don't abandon it for the illusion of consistent logic.)

I extended logical formalisms to close the semantic gap that allowed
these to slip through as undetected errors:
(1) The Liar Paradox
(2) The 1931 Incompleteness Theorem
(3) The Halting Problem
(4) Tarski's undefinability theorem

I named my new formal language Minimal Type Theory.
I have not described MTT in this forum at all.

You only have to understand what I am saying to prove that it is necessarily true.

>
> This is why Godel's Incompleteness Theorems are "true". We are not capable of constructing a nontrivial, consistent, complete mathematical system.

You only have to bother to understand what I am saying to know that I correctly refuted this.

> You will fail, just as Hilbert failed in his eponymous Program, and for the same reasons:

Hilbert's formalist approach is the key to my success.

Without translating the Halting Problem into a formal mathematical
proof from finite strings there would have been no basis to confirm that
I am definitely correct.

Ben Bacarisse refused to consider that a halt decider must do a
step-by-step debug trace of its input TMD until I framed the task
of a halt decider as a David Hilbert syntactic logical consequence
(formal proof) from finite strings to final states in the formal
language of Turing machine descriptions.

Then it becomes much more obvious that every halt decider must
do a step-by-step debug trace of its input TMD until this TMD
demonstrates halting or non-halting behavior.
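For what it's worth, the "step-by-step debug trace" idea is easy to sketch, and the sketch also shows its limit. Below is a bounded tracer for a toy register machine (all names and the instruction set are illustrative assumptions): it can confirm halting definitively, but it can only ever answer "no verdict" for a non-halting input, which is exactly the gap a real halt decider would have to close.

```python
# Toy register machine: ("inc", r) increments register r; ("dec_jnz", r, t)
# decrements r and jumps to t when the result is nonzero; ("halt",) stops.

def bounded_trace(program, max_steps=1000):
    """Simulate step by step; report ('halts', n) or ('no verdict', max_steps)."""
    regs = {}
    pc = 0
    for step in range(max_steps):
        if pc >= len(program) or program[pc][0] == "halt":
            return ("halts", step)
        op = program[pc]
        if op[0] == "inc":
            regs[op[1]] = regs.get(op[1], 0) + 1
            pc += 1
        else:  # "dec_jnz"
            regs[op[1]] = regs.get(op[1], 0) - 1
            pc = op[2] if regs[op[1]] != 0 else pc + 1
    return ("no verdict", max_steps)

halting = [("inc", "r0"), ("halt",)]
looping = [("inc", "r0"), ("inc", "r0"), ("dec_jnz", "r0", 1)]  # cycles forever
print(bounded_trace(halting))  # ('halts', 1)
print(bounded_trace(looping))  # ('no verdict', 1000)
```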

> https://en.wikipedia.org/wiki/Hilbert%27s_program
>
> I also find it interesting that you deny the Church/Turing proof that the (related) Decision Problem is insoluble. Upon what do you base that rejection? Do you think Turing machines and the lambda calculus are not equivalent, do you dismiss them as equivalently inherently false (and if so, why) or for some other reason?

It is easy to see; read my papers on ResearchGate.

https://www.researchgate.net/publication/323268530_Defining_a_Decidability_Decider

https://www.researchgate.net/publication/323275797_Defining_a_Decidability_Decider_for_the_Halting_Problem

> Alternatively, you could prove that the Mandelbrot set is pathwise connected:
>
> http://mathworld.wolfram.com/MandelbrotSet.html
>
> (I have noticed that people like you hate fractals as much as you do natural languages. Too messy.)
>
> I will also point out that logic may well be the definition of "unnatural". Not only do we find fractals everywhere in Nature, but quantum mechanics is forcing those of us who raise our heads for the occasional experimental breather to re-evaluate our notions of causality itself, the basis of logic.
>
>
> Dr. HotSalt

You are an actual PhD, right? What field?

Pete Olcott

Feb 20, 2018, 7:43:55 PM
On 2/20/2018 7:30 AM, António Marques wrote:
> Dr. HotSalt <alie...@gmail.com> wrote:
>> I will also point out that logic may well be the definition of
>> "unnatural". Not only do we find fractals everywhere in Nature, but
>> quantum mechanics is forcing those of us who raise our heads for the
>> occasional experimental breather to re-evaluate our notions of causality
>> itself, the basis of logic.
>
> Logic is like the classical law of gravity - neat, simple, powerful, and
> limited in what it can model. Folks have a lot of trouble understanding the
> latter part, going to the point of wanting to ‘correct’ reality so that it
> fits logic. A bit like correcting every object to work like a screw.
>

The (big picture) reason why I created Minimal Type Theory is to have
a single mathematical formalism that could fill the role of a Tarski
metalanguage and be capable of exhaustively formalizing every nuance
of semantic meaning of any concept that a mind could ever imagine.

All of Minimal Type Theory fits on less than a single page of YACC BNF.
The syntax of Minimal Type Theory is a very slight augmentation of the
syntax of First Order Predicate Logic. MTT is not limited to FOL. MTT
can easily formalize any finite order of logic from 0 to N.

I recently redesigned MTT so that propositional logic and FOPL are
included as a subset of MTT. The Types of MTT are optional, so MTT
can also represent HOL of any arbitrary finite order.

Ben Bacarisse

Feb 20, 2018, 8:53:55 PM
Pete Olcott <Pe...@NoEmail.address> writes:
<snip>
> Ben Bacarisse refused to consider that a halt decider must do a
> step-by-step debug trace of its input TMD until

You have yet to answer the very simple question I posed to you three
days ago. It's a key question (which is presumably why you are avoiding
it) because the answer is both clear and damaging for your position.
Any answer other than the correct one will make it clear that you have
never really been talking about the halting problem.

(And, to answer your silly assertion here: asserting what a halt decider
must do is the same as asserting what the smell of unicorn shit must be.)

--
Ben.

Pete Olcott

Feb 20, 2018, 9:38:26 PM
Yes, and my lack of psychic ability to read your mind to determine what
question you could possibly be referring to is certainly totally destroying
my credibility!

Quite often I answer questions by providing all of the reasoning behind
the answer. Quite often people mistake this as avoiding the question.

Since it is really not all that hard to repeat the question, I
find statements such as the one that you just made a little dodgy.

I have taken your lack of response in this forum as implicit
acknowledgement that my fundamental position has been correct all along.

My fundamental position (That I first stated publicly on USENET in 1998)
is that the Liar Paradox, The 1931 Incompleteness Theorem, and the Halting
Problem are all based on the exact same semantic error that I first
called pathological self-reference in 2004.

graham...@gmail.com

Feb 20, 2018, 9:43:39 PM
On Wednesday, February 21, 2018 at 11:53:55 AM UTC+10, Ben Bacarisse wrote:
>
> never really been talking about the halting problem.
>
> (And, to answer your silly assertion here, asserting what a halt decider
> must do is the same as asserting what the smell of unicorn shit must be.)
>
> --
> Ben.

Pete Olcott is CORRECT!

Tarski, Godel, Cantor all use DIAGONALISATION INCORRECTLY!


q  1 2 3 4 5
1  x
2    x
3      x
4        ?
5

given f5(5) = !q(5,5)


WE REMOVE THE WHOLE FRIIIIIKING LIST!

TURING, however uses DIAGONALISATION CORRECTLY!

He removes 1 ROW only from the CONTRADICTION

However, TURING made a DIFFERENT MISTAKE!


DETERMINISM + RECURSIVE ENUMERATION OF f + ANTI-DIAGONALISATION = CONTRADICTION


Using a NON_DETERMINISTIC TM that outputs RANDOM VALUES for CONTRADICTORY VALUES eliminates Chaitin's Omega as a Random Real Number!


So Pete Olcott was RIGHT AGAIN.... there is NO UN-COMPUTABILITY


Ben Bacarisse

Feb 20, 2018, 10:09:36 PM
Pete Olcott <Pe...@NoEmail.address> writes:

> On 2/20/2018 7:50 PM, Ben Bacarisse wrote:
>> Pete Olcott <Pe...@NoEmail.address> writes:
>> <snip>
>>> Ben Bacarisse refused to consider that a halt decider must do a
>>> step-by-step debug trace of its input TMD until
>>
>> You have yet to answer the very simple question I posed to you three
>> days ago. It's a key question (which is presumably why you are avoiding
>> it) because the answer is both clear and damaging for your position.
>> Any answer other than the correct one will make it clear that you have
>> never really been talking about the halting problem.
>>
>> (And, to answer your silly assertion here, asserting what a halt decider
>> must do is the same as asserting what the smell of unicorn shit must be.)
>>
>
> Yes and my lack of psychic ability to read your mind to determine what
> question you could possibly be referring is certainly totally destroying
> my credibility !

Message-ID: <87606w1...@bsb.me.uk>

> Quite often I answer questions by providing all of the reasoning behind
> the answer. Quite often people mistake this as avoiding the question.

Unless my news feed is broken you simply abandoned the thread.

> Since it is really not all that hard to repeat the question I
> find statements such as the one that you just made a little dodgy.

Ah, but the context matters. I'll repeat the question here but I
suspect you will use the change of context to repudiate earlier
statements. Here it is again:

You finally agreed that:

| Every TM computation halts or fails to halt.

To which I replied: Great. Do you also accept that every finite string
either encodes a TM computation or it does not?

I went on to give an encoding which I've edited slightly on re-reading:

The encoding has to be agreed, of course, but something simple like the
binary bits of the ASCII characters used to write out the transitions,
the input string, and the number of the final (AKA accepting) state.
Here's a computation:

0,1R1>1;1,0R0>0/1/101

though the actual string is the zeros and ones of those ASCII bytes

There are two states, 0 and 1, the two decimal numbers that appear
before commas, and two transitions separated by semicolons. From state
0 a tape symbol of 1 causes a move right, a write of 1 to the tape, and
the TM enters state 1. From state 1 seeing a 0 causes the head to move
right, a 0 to be written and the machine goes to state 0. The machine's
final state is state 1, and the input for this computation is 101. This
computation halts (and, as it happens, it "accepts" since it halts in
state 1).

This:

Then hate me when thou wilt; if ever, now;

is not a TM computation and neither are

0,1R1>1;1,0R0>01/101

(missing /) nor

0,1R11>1;1,0R0>0/1/101

(only one symbol can be written after a move). Other non-computations
include binary sequences that are not a multiple of 8 long or any that
include anything but the ASCII characters

0 1 2 3 4 5 6 7 8 9 L R > , / ;

So, we have a simple encoding and a simple question: does every finite
string of bits either encode a TM computation or not?
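Whether a string fits this format is a purely syntactic question (the format described above is a regular language), so it can be checked mechanically. A minimal sketch in Python, working on the ASCII text form rather than the underlying bits, and checking syntax only; the regex and the function name are mine, and it deliberately does not check semantic conditions such as whether the final state occurs in the transitions:

```python
import re

# One transition, e.g. "0,1R1>1": current state, comma, tape symbol,
# move (L/R), exactly one written symbol, ">", next state.
TRANSITION = r"\d+,[01][LR][01]>\d+"

# Whole computation: ";"-separated transitions, then "/final-state/input".
COMPUTATION = re.compile(rf"{TRANSITION}(;{TRANSITION})*/\d+/[01]*")

def encodes_computation(s: str) -> bool:
    """Return True iff s is a syntactically well-formed computation string."""
    return COMPUTATION.fullmatch(s) is not None

print(encodes_computation("0,1R1>1;1,0R0>0/1/101"))   # the halting example above
print(encodes_computation("0,1R1>1;1,0R0>01/101"))    # missing "/"
print(encodes_computation("0,1R11>1;1,0R0>0/1/101"))  # two symbols written
```

Since the checker is total, every finite string is classified one way or the other, which is the point of the question.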

> My fundamental position (That I first stated publicly on USENET in 1998)
> is that the Liar Paradox, The 1931 Incompleteness Theorem, and the Halting
> Problem are all based on the exact same semantic error that I first
> called pathological self-reference in 2004.

I know. You have been wrong for 14 years. Every instance of the
halting problem (not your version -- the real one) has a correct yes/no
answer. This is quite unlike questions such as the liar paradox, or
the nonsense questions you sometimes try to suggest as being equivalent.

--
Ben.

Ben Bacarisse

Feb 20, 2018, 10:14:25 PM
graham...@gmail.com writes:
<snip>
> So Pete Olcott was RIGHT AGAIN.... there is NO UN-COMPUTABILITY

There are only countably many Turing machines (deterministic or
otherwise) and uncountably many subsets of any non-trivial alphabet
therefore almost every language is undecidable.
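Spelled out in standard notation, the counting argument is (for any alphabet Σ with at least two symbols):

```latex
\begin{align*}
  |\{\text{Turing machines}\}| &\le |\Sigma^*| = \aleph_0
      && \text{(each TM has a finite description)} \\
  |\{\text{languages over } \Sigma\}| &= |\mathcal{P}(\Sigma^*)| = 2^{\aleph_0}
      && \text{(a language is any subset of } \Sigma^*\text{)} \\
  \aleph_0 &< 2^{\aleph_0}
      && \text{(Cantor's theorem)}
\end{align*}
```

So only countably many languages can be decided by some machine, while uncountably many languages exist.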

--
Ben.

Pete Olcott

Feb 20, 2018, 10:29:53 PM
I have been through Rice's Theorem and Decidability Deciders with Ben
and others very many times in 2012, 2013, 2014, 2015, 2016, 2017 and 2018.

You only have to understand this one idea to understand that I am right:

Defining a Decidability Decider for the Halting Problem
a ∈ WFF(Turing_Machines_Descriptions)
b ∈ WFF(Turing_Machines_Descriptions)
c ∈ Finite_Strings
( ~Halts(a, b, c) ∧ ~Loops(a, b, c) ) → Undecidable(a,b,c)

This does not mean that b neither halts nor loops on its input.

It only means that no formal mathematical proof that decides
whether b halts or loops exists.

Therefore a Decidability Decider is defined that
always decides whether or not (a, b, c) is decidable.
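Read literally, the predicate being defined has the following shape. This is a sketch only: Halts and Loops are treated here as given oracles for "a proof exists", and nothing below shows that they are themselves computable, which is exactly the point in dispute in this thread:

```python
def undecidable(halts_provable: bool, loops_provable: bool) -> bool:
    """Literal transcription of (~Halts ∧ ~Loops) → Undecidable for one (a, b, c).

    halts_provable / loops_provable stand in for the assumed predicates
    Halts(a, b, c) and Loops(a, b, c) from the post above.
    """
    return (not halts_provable) and (not loops_provable)

print(undecidable(True, False))   # a proof of halting exists: decidable
print(undecidable(False, False))  # neither proof exists: "undecidable"
```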

Copyright 2004, 2006, 2012, 2013, 2014, 2015, 2016, 2017 and 2018, Pete Olcott

--

Pete Olcott

Feb 20, 2018, 11:09:34 PM
On 2/20/2018 9:08 PM, Ben Bacarisse wrote:
Pete Olcott <Pe...@NoEmail.address> writes:

On 2/20/2018 7:50 PM, Ben Bacarisse wrote:
Pete Olcott <Pe...@NoEmail.address> writes:
<snip>
Ben Bacarisse refused to consider that a halt decider must do a
step-by-step debug trace of its input TMD until
You have yet to answer the very simple question I posed to you three
days ago.  It's a key question (which is presumably why you are avoiding
it) because the answer is both clear and damaging for your position.
Any answer other than the correct one will make it clear that you have
never really been talking about the halting problem.

(And, to answer your silly assertion here, asserting what a halt decider
must do is the same as asserting what the smell of unicorn shit must be.)

Yes and my lack of psychic ability to read your mind to determine what
question you could possibly be referring is certainly totally destroying
my credibility !
Message-ID: <87606w1...@bsb.me.uk>

Quite often I answer questions by providing all of the reasoning behind
the answer. Quite often people mistake this as avoiding the question.
Unless my news feed is broken you simply abandoned the thread.
Something is screwy; you may have changed your newsreader settings.
I responded in sci.logic yet the post would have only gone to comp.theory.

Since it is really not all that hard to repeat the question I
find statements such as the one that you just made a little dodgy.
Ah, but the context matters.  I'll repeat the question here but I
suspect you will use the change of context to repudiate earlier
statements.  Here it is again:

You finally agreed that:

| Every TM computation halts or fails to halt.

To which I replied: Great.  Do you also accept that every finite string
either encodes a TM computation or it does not?

What if a finite string encodes a syntactically correct TMD yet all that
it does is immediately halt? That might not be considered a computation.

Let's rephrase the question as: every finite string either encodes a syntactically
correct TMD or it does not. Furthermore, whether or not a finite string encodes
a syntactically correct TMD is decidable by a DFA.

I went on give an encoding which I've edited slightly on re-reading:

http://www.lns.mit.edu/~dsw/turing/doc/tm_manual.txt
No need to do that; an encoding that works perfectly
has already been defined as part of this TM interpreter.
Skipping past this moot part.
Three questions actually:
(1) Does a finite string of ASCII text encode a syntactically correct TMD?
(2) Does a formal mathematical proof from this TMD to halting exist?
(3) Does a formal mathematical proof from this TMD to looping exist?
YES, NO, NO defines a decidability decider for (a,b,c)


Defining a Decidability Decider for the Halting Problem
∃a ∈ Turing_Machines_Descriptions
∀b ∈ Turing_Machines_Descriptions
∀c ∈ Finite_Strings
(~Halts(a, b, c) ∧ ~Loops(a, b, c)) → Undecidable(a,b,c)

My fundamental position (That I first stated publicly on USENET in 1998)
is that the Liar Paradox, The 1931 Incompleteness Theorem, and the Halting
Problem are all based on the exact same semantic error that I first
called pathological self-reference in 2004.
I know.  You have been wrong for 14 years.  Every instance of the
halting problem (not your version -- the real one) has a correct yes/no
answer.  This is quite unlike questions such as the liar paradox, or
the nonsense questions you sometimes try to suggest as being equivalent.

You have to pay close attention to the words that I am actually saying.
Several of the people that I responded to in this forum would glance at a couple
of my words before forming their rebuttal.

I usually only skim George's words just in case he makes a good point.  I know
that he is capable of making good points yet most often he does not bother to.

You have done a very good job lately at paying quite close attention and making
sure that every point is responded to with good reasoning and apt questions.

I have strived very hard often spending several hours responding to a single one
of your posts.  It looks like we are finally coming to a mutual understanding.

In order for this to be possible I had to learn to speak much more like a PhD
and much less like a crank.  First impression bias is very hard to overcome.

Copyright 2004, 2006, (2012 through 2018) Pete Olcott



--

Pete Olcott

Feb 20, 2018, 11:39:21 PM
Putting this in the proper terminology of Rice's Theorem:
The above Decidability Decider segregates (a) into two sets:
(1) Has the semantic property of pathological self-reference (Olcott 2004) on (a,b).
(2) Does not have the semantic property of pathological self-reference (Olcott 2004) on (a,b).

Copyright 2004, 2006, (2012 through 2018) Pete Olcott

Pete Olcott

Feb 20, 2018, 11:46:56 PM

Whoops I goofed.

Putting this in the proper terminology of Rice's Theorem:
The above Decidability Decider segregates (a) into two sets:

(1) Decides halting for (a,b)
(2) Does not decide halting for (a,b)

Pete Olcott

Feb 21, 2018, 9:37:20 AM
On 2/21/2018 3:28 AM, Arnaud Fournet wrote:
> Le mercredi 21 février 2018 01:31:08 UTC+1, Pete Olcott a écrit :
>> On 2/19/2018 7:50 PM, Dr. HotSalt wrote:
>>> On Saturday, February 17, 2018 at 2:14:05 PM UTC-8, Pete Olcott wrote:
>>>
>>> (snip)
>>>
>>>> My purpose in studying linguistics was formalizing natural language.
>>> Why on Earth would you want to even try to do that?
>>
>> Because it is the only possible way for machines to acquire a human
>> degree of comprehension of natural language.
>
> Why should machines understand human language?
>


https://www.technologyreview.com/s/600984/an-ai-with-30-years-worth-of-knowledge-finally-goes-to-work/
If for no other reason than I want them to.

Doug Lenat has 700 labor years worth of effort in this task over the last 33 years.

As soon as my work on pathological self-reference is accepted as correct
I want to begin working on the machine learning algorithm to complete the
population of the Cyc knowledge ontology.


> Natural languages are highly ambiguous and allusive and presuppose a vast encyclopedic knowledge of human life.
>
> All that strikes me as completely chimeric, if not dystopic.

Ross A. Finlayson

Feb 21, 2018, 12:42:07 PM
Indeed.

Pete Olcott

Feb 21, 2018, 2:38:59 PM
On 2/17/2018 4:21 AM, António Marques wrote:
> Jeff Barnett <j...@notatt.com> wrote:
>> .... Assume that the scientific and
>> philosophical communities are 1) not all nuts and 2) not out to get you.
>
> 3) not by any means devoid of people who know all 3 of Math / Computer
> Science / Linguistics.
>

I had the computer science. I had to add predicate logic,
formal mathematical proofs, and formal semantics of linguistics.

I don't really care much about math except for David Hilbert formalist
math proofs from finite strings. I don't care much about linguistics
except for formal semantics of linguistics.

graham...@gmail.com

Feb 21, 2018, 10:07:34 PM
to which I REPLIED ... your question is out of context


graham...@gmail.com

Feb 21, 2018, 10:08:40 PM
For every infinite set of alphabet symbols

there is a FINITE ALPHABET which does the same computations / language arithmetic


Pete Olcott

Feb 22, 2018, 12:55:33 AM
On 2/21/2018 9:54 PM, David Kleinecke wrote:
> On Wednesday, February 21, 2018 at 3:36:59 PM UTC-8, Pete Olcott wrote:
>> On 2/21/2018 5:15 PM, David Kleinecke
>> wrote:
>>
>>
>>
>> On Wednesday, February 21, 2018 at 12:40:34 PM UTC-8, peteolcott wrote:
>>
>>
>> On Wednesday, February 21, 2018 at 2:31:26 PM UTC-6, David Kleinecke wrote:
>>
>>
>> On Wednesday, February 21, 2018 at 11:38:59 AM UTC-8, Pete Olcott wrote:
>>
>>
>> On 2/17/2018 4:21 AM, António Marques wrote:
>>
>>
>> Jeff Barnett wrote:
>>
>>
>> .... Assume that the scientific and
>> philosophical communities are 1) not all nuts and 2) not out to get you.
>>
>>
>> 3) not by any means devoid of people who know all 3 of Math / Computer
>> Science / Linguistics.
>>
>>
>>
>> I had the computer science. I had to add predicate logic,
>> formal mathematical proofs, and formal semantics of linguistics.
>>
>> I don't really care much about math except for David Hilbert formalist
>> math proofs from finite strings. I don't care much about linguistics
>> except for formal semantics of linguistics.
>>
>>
>>
>> (1) The way you use mathematics demonstrates to me that
>> you do not understand it.
>>
>> (2) Your formal semantics of language is nonsense.
>>
>>
>> Most of my "critiques" are of this same poor quality.
>> If you are unable to point out any 100% totally specific
>> mistakes then simply admit your ignorance.
>>
>> Feigning knowledge by poor quality commentary does not
>> count as any rebuttal.
>>
>>
>> (1) I have repeatedly pointed out where you use
>> terms and concepts without providing any definitions
>> for the terms you use.
>>
>>
>> I remember no instance where you pointed out anything like this
>> where I did not clarify.
>>
>>
>>
>>
>> (2) Quite recently you called my suggestion that you
>> not subset the axioms in your definition of "True"
>> erroneous.
>>
>>
>>
>> My use of SUBSET is merely an alternative notational convention
>>
>> see yellow highlighted formulas below.
>>
>>
>>
>> https://en.wikipedia.org/wiki/Logical_consequence#Syntactic_consequence
>>
>>
>> A formula A is a syntactic consequence within some formal system FS
>> of a set Γ of formulas if there is a formal proof in FS of A
>> from the set Γ.
>>
>> Γ ⊢FS A   (translated to Minimal Type Theory becomes)
>> ∃Γ ⊆ WFF(FS) Provable(Γ, A)
>>
>>
>>
>>
>>
>> As far as SUBSET of BaseFacts / Axioms goes this is brand new
>> material that has no precedent, thus necessarily unconventional.
>>
>>
>>
>>
>>
>>
>> (3) In linguistic matters you persist in treating
>> Richard Montague's long-discarded theories as valid.
>>
>> (4) You make many other mistakes in linguistic matters
>> but comp.theory is not a suitable place to discuss
>> them.
>>
>>
>>
>>
>>
>>
>>
>> --
>>
>>
>>
>> ∀L ∈ Formal_Systems
>>
>> ∀X
>>
>> True(L, X) ↔ ∃Γ ⊆ Axioms(L) Provable(Γ, X)
>
> OK. Lets consider your sig. You have not defined
> "Formal_Systems". You cannot define that set by
> reference to some authority because your usages
> modify the alleged originals all too often. We
> want to know YOUR definition. You do not say
> whether you consider the third line the definition
> of "True" or a provable proposition and you do not
> define either "Axioms" nor "Provable". Without these
> definitions there is no way anyone can develop any
> comprehension of what you are trying to say.
>

http://liarparadox.org/index.php/2018/02/17/the-ultimate-foundation-of-a-priori-truth/
Axioms are BaseFacts.

Shooting from the hip (it is very late):
A Formal_System is any set of finite strings that specify relations
that have a formal proof to the semantic values of True and False.

Pete Olcott

Feb 22, 2018, 12:56:15 PM
On 2/17/2018 12:42 AM, Pete Olcott wrote:
> a Collection is defined one or more things that have one or more properties in common. These operations from set theory are available:  {⊆, ∈}
>
> An BaseFact is an expression X of (natural or formal) language L that has been assigned the semantic property of True. (Similar to a math Axiom).
>
> A Collection T of BaseFacts of language L forms the ultimate foundation of the notion of Truth in language L.
>
> To verify that an expression X of language L is True or False only requires a syntactic logical consequence inference chain (formal proof) from one or more elements of T to X or ~X.
>
> True(L, X) ↔ ∃Γ ⊆ BaseFact(L) Provable(Γ, X)
> False(L, X) ↔ ∃Γ ⊆ BaseFact(L) Provable(Γ, ~X)
>
> Copyright 2018 (and many other years since 1997) Pete Olcott
>

Truth is the set of interlocking concepts that can be formalized symbolically.

All of formalized Truth is only about relations between finite strings of characters.

This exact same Truth can be equally expressed (tokenized) as relations between integers.
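A minimal sketch of that tokenization claim: give every distinct string an integer id, and a relation between strings becomes the same relation between integers. The names here (token, IS_A) are illustrative, not from the post:

```python
# Table assigning each distinct string a stable integer id on first use.
ids: dict[str, int] = {}

def token(s: str) -> int:
    """Return the integer id for s, assigning a fresh one if needed."""
    return ids.setdefault(s, len(ids))

IS_A = [("cat", "animal"), ("dog", "animal")]          # relation on strings
IS_A_INT = [(token(a), token(b)) for (a, b) in IS_A]   # same relation on integers

print(IS_A_INT)  # [(0, 1), (2, 1)]
```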

Pete Olcott

Feb 22, 2018, 2:11:22 PM
On 2/22/2018 12:20 AM, David Kleinecke wrote:
> Then BaseFacts need to be defined and it needs to be stated
> whether Axioms are all BaseFacts or only some of them.
>

You have to look at the link for that.

BaseFact is defined as an expression X of (natural or formal)
language L that has been assigned the semantic property of True.
(Similar to a math Axiom).

From a David Hilbert formalist POV the semantic property of True
is assigned to elements of a collection of finite strings of
characters.

BaseFacts that contradict other BaseFacts are prohibited.
BaseFacts must specify Relations between Things. There are
no other requirements for BaseFacts.

>> Shooting from the hip (it is very late):
>> A Formal_System is any set of finite strings that specify relations
>> that have a formal proof to the semantic values of True and False.
>
> That is not an effective definition. String of what? What
> do you mean by "relation" and "specify". How do you define
> a "formal proof"? Or by "True" or "False"?
>

A finite string of characters from a defined alphabet; I use Unicode UTF-8.

A Relation is identical to the common notion of a Predicate from Predicate Logic,
essentially a Boolean valued function.

"specify" means that there is a bijective mathematical mapping from finite
strings to directed acyclic graphs. These DAGs encode relations.

A node represents either a Relation or an argument to a Relation.
Relations have directed paths to their arguments.
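A minimal sketch of such a DAG, using the Gave(John, Mary, Book) example that appears later in this thread; the Node class and its field names are illustrative only:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A DAG node: a relation if it has outgoing edges, an argument otherwise."""
    label: str
    args: list["Node"] = field(default_factory=list)  # directed edges to arguments

    def is_relation(self) -> bool:
        return bool(self.args)

# Gave(John, Mary, Book) as a relation node pointing at three argument nodes.
john, mary, book = Node("John"), Node("Mary"), Node("Book")
gave = Node("Gave", [john, mary, book])

print(gave.is_relation(), [a.label for a in gave.args])
```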

A formal proof is the same conventional syntactic logical consequence inference chain
typical of all formal proofs, yet the starting point is finite strings not expressions
of language.

True and False are named collections of finite strings.

Peter Percival

Feb 22, 2018, 2:22:07 PM
Pete Olcott wrote:


> BaseFact is defined as an expression X of (natural or formal)
> language L that has been assigned the semantic property of True.

So a few example from natural languages are:
i) "fish", an expression that I assign the semantic property of True,
ii) "Donald Trump is an SVR officer", ditto.

> (Similar to a math Axiom

I see little similarity.

> ).

Do you think that "fish" and "Donald Trump is an SVR officer" are
BaseFacts? If not, how do they fail to fulfil your definition?


Pete Olcott

Feb 22, 2018, 2:52:49 PM
http://liarparadox.org/index.php/2018/02/17/the-ultimate-foundation-of-a-priori-truth/

...BaseFact is defined as an expression X of (natural or formal)
...language L that has been assigned the semantic property of True.
...(Similar to a math Axiom).

...BaseFacts that contradict other BaseFacts are prohibited.
...BaseFacts must specify Relations between Things. There are
...no other requirement for BaseFacts.

...To verify that an expression X of language L is True or False
...only requires a syntactic logical consequence inference chain
...(formal proof) from one or more BaseFacts to X or ~X.
...(Backward chaining reverses this order).

...True(L, X) ↔ ∃Γ ⊆ BaseFacts(L) Provable(Γ, X)
...False(L, X) ↔ ∃Γ ⊆ BaseFacts(L) Provable(Γ, ~X)

Peter Percival

Feb 22, 2018, 3:22:17 PM
Pete Olcott wrote:
> On 2/22/2018 1:22 PM, Peter Percival wrote:
>> Pete Olcott wrote:
>>
>>
>>> BaseFact is defined as an expression X of (natural or formal)
>>> language L that has been assigned the semantic property of True.
>>
>> So a few example from natural languages are:
>> i) "fish", an expression that I assign the semantic property of True,
>> ii) "Donald Trump is an SVR officer", ditto.
>>
>>> (Similar to a math Axiom
>>
>> I see little similarity.
>>
>>> ).
>>
>> Do you think that "fish" and "Donald Trump is an SVR officer" are
>> BaseFacts?  If not, how do they fail to fulfil your definition?
>>
>>
>
> http://liarparadox.org/index.php/2018/02/17/the-ultimate-foundation-of-a-priori-truth/
>
>
> ...BaseFact is defined as an expression X of (natural or formal)
> ...language L that has been assigned the semantic property of True.
> ...(Similar to a math Axiom).
>
> ...BaseFacts that contradict other BaseFacts are prohibited.

You haven't answered my question, nor do I think there is much prospect
of you doing so, so I'll ask another.

According to your claim

BaseFact is defined as an expression X of (natural or formal)
language L that has been assigned the semantic property of True.

I have equal entitlement to claim that

"Donald Trump is an SVR officer"

and

"Donald Trump is not an SVR officer"

are BaseFacts, but each contradicts the other. Which one is to be
prohibited, and why?

> ...BaseFacts must specify Relations between Things. There are
> ...no other requirements for BaseFacts.

Really? So my offering of "fish" as a BaseFact is prohibited because it
doesn't specify a relation between things (see footnote). You should
have said so initially. But I note that there are no other requirements.

> ...To verify that an expression X of language L is True or False
> ...only requires a syntactic logical consequence inference chain
> ...(formal proof) from one or more BaseFacts to X or ~X.

Good. I note that "Donald Trump is an SVR officer" follows from "Donald
Trump is an SVR officer", and "Donald Trump is not an SVR officer"
follows from "Donald Trump is not an SVR officer".

You allow BaseFacts to be expressed in a natural language. How do you
define syntactic logical consequence on a natural language?

> ...(Backward chaining reverses this order).

(Footnote. I assume that the capital letters in "Relations between
Things" have no great significance, and it's the same as "relations
between things".)

Peter Percival

Feb 22, 2018, 3:35:48 PM
You may wish to say that I don't have equal entitlement to claim such
things because I need _evidence_ or _good reason_ or something, but...

>
>> ...BaseFacts must specify Relations between Things. There are
>> ...no other requirements for BaseFacts.
> Really?  So my offering of "fish" as a BaseFact is prohibited because it
> doesn't specify a relation between things (see footnote).  You should
> have said so initially.  But I note that there are no other requirements.

... you can't do so because (according to you) there are no other

Pete Olcott

Feb 22, 2018, 3:46:45 PM
My system of reasoning is a progressive mathematical interpolation from generalities to step-wise increasing specificity. I simply did not think of this detail initially.

Since I know that I will not ever think of all of the details initially I provide links to where they will be updated.

> But I note that there are no other requirements.
>
>> ...To verify that an expression X of language L is True or False
>> ...only requires a syntactic logical consequence inference chain
>> ...(formal proof) from one or more BaseFacts to X or ~X.
>
> Good.  I note that "Donald Trump is an SVR officer" follows from "Donald Trump is an SVR officer", and "Donald Trump is not an SVR officer" follows from "Donald Trump is not an SVR officer".
>
> You allow BaseFacts to be expressed in a natural language.  How do you define syntactic logical consequence on a natural language?

First translate to Minimal Type Theory formal language.

>
>> ...(Backward chaining reverses this order).
>
> (Footnote.  I assume that the capital letters in "Relations between Things" have no great significance, and it's the same as "relations between things".)
>

Capital letters indicate key terms that form the foundation of the system.

Pete Olcott

Feb 22, 2018, 3:58:06 PM
One is free to define a language such that birds are a kind of fire truck and fire trucks lay eggs. Only contradiction is rejected.

>
>>
>>> ...BaseFacts must specify Relations between Things. There are
>>> ...no other requirement for BaseFacts. >
>> Really?  So my offering of "fish" as a BaseFact is prohibited because it doesn't specify a relation between things (see footnote).  You should have said so initially.  But I note that there are no other requirements.
>
> .... you can't do so because (according to you) there are no other requirements.

ReWritten as:

BaseFact is defined as an expression X of (natural or formal)
language L that has been assigned the semantic property of True.
(Similar to a math Axiom).
(1) BaseFacts that contradict other BaseFacts are prohibited.
(2) BaseFacts must specify Relations between Things.
The above is the complete specification for a BaseFact.

Your feedback was useful. Blog is also updated.

Peter Percival

Feb 22, 2018, 4:07:28 PM
Pete Olcott wrote:

> BaseFact is defined as an expression X of (natural or formal)
> language L that has been assigned the semantic property of True.
> (Similar to a math Axiom).
> (1) BaseFacts that contradict other BaseFacts are prohibited.
> (2) BaseFacts must specify Relations between Things.
> The above is the complete specification for a BaseFact.

I hereby assign the semantic property of True to "Donald Trump is an SVR
officer", and I hereby assign the semantic property of True to "Donald
Trump is not an SVR officer". So they are both putative BaseFacts but
they contradict each other. But one of them (at least) is prohibited.
Which and why?

You say that BaseFacts are similar to mathematical axioms. Euclid's
fifth postulate and its negation have both been used as mathematical
axioms. How does this square with (1) above?


>

Pete Olcott

Feb 22, 2018, 4:34:12 PM
An automated system similar to Prolog would reject adding any Facts
or Rules that contradict the current set of Facts and Rules.
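A minimal sketch of that rejection rule, for ground facts only. Note that standard Prolog's assert does not perform such a consistency check, so this illustrates the proposal rather than existing Prolog behavior; the knowledge-base names are illustrative:

```python
# Knowledge base of ground facts, with "~" marking negation.
kb: set[str] = set()

def negate(fact: str) -> str:
    return fact[1:] if fact.startswith("~") else "~" + fact

def tell(fact: str) -> bool:
    """Add fact to kb unless kb already contains its negation."""
    if negate(fact) in kb:
        return False  # rejected: would contradict an existing fact
    kb.add(fact)
    return True

# Peter Percival's example pair from earlier in the thread:
print(tell("svr_officer(trump)"))   # accepted
print(tell("~svr_officer(trump)"))  # rejected: contradicts the first fact
```

Under this rule the first of two contradictory facts to arrive wins, which answers "which one is prohibited" only by order of assertion.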

George Greene

Feb 22, 2018, 4:38:23 PM
On Wednesday, February 21, 2018 at 2:38:59 PM UTC-5, Pete Olcott wrote:
> On 2/17/2018 4:21 AM, António Marques wrote:
> > Jeff Barnett <j...@notatt.com> wrote:
> >> .... Assume that the scientific and
> >> philosophical communities are 1) not all nuts and 2) not out to get you.
> >
> > 3) not by any means devoid of people who know all 3 of Math / Computer
> > Science / Linguistics.
> >
>
> I had the computer science.

People find that hard to believe, because even after a career of writing
programs with procedures or functions or subroutines that call other routines,
YOU STILL DIDN'T KNOW
that one Turing machine can call another.

Pete Olcott

Feb 22, 2018, 4:51:54 PM
On 2/22/2018 3:08 PM, David Kleinecke wrote:
> I like to make one point per post so I'll address your first
> paragraph:
>
>> BaseFact is defined as an expression X of (natural or formal)
>> language L that has been assigned the semantic property of True.
>> (Similar to a math Axiom).
>
> You haven't defined "expression", "language" or "semantic
> property".
>
> The word "postulate" is available if you want to actually
> differentiate BaseFact from Axiom.
>
>

The concept of BaseFact is to complete the specification of the
insufficiently specified term Axiom. Definitions of Axiom range
anywhere from a mere assumption to a self-evident truth without
ever specifying what {self-evident} or {truth} means.

I have now specified all of those terms that you say I have not
specified. I will reiterate for clarity.

--A language L is a set of finite strings of characters from a
--defined alphabet specifying relations to other finite strings.

--These finite strings could be tokenized as single integer values.

--A Relation is identical to the common notion of a Predicate
--from Predicate Logic, essentially a Boolean valued function.

--an expression X is a finite string expressing a relation R of language L.

--I am not currently sure what a semantic property would be in general,
--I will have to ponder this some more. The specific semantic property
--of True is simply a named collection of expressions of L. The semantic
--property of False is the negation of every element of True.

Copyright 2016, 2017, 2018 Pete Olcott

Pete Olcott

Feb 22, 2018, 5:20:02 PM
This can be correctly stated as a higher level abstraction of the
low level details. In the case of our discussion, however, that high
level abstraction prevents key aspects of the low level details from
ever being communicated.

https://www.researchgate.net/publication/323275797_Defining_a_Decidability_Decider_for_the_Halting_Problem

Unless you examine the debug trace** of [Ĥ] by H in isolation it cannot
possibly be understood that H [Ĥ] [Ĥ] inherently specifies an infinitely
recursive evaluation sequence.

**debug trace defined as:
-- Treating the task of a potential halt decider as deriving a formal
-- mathematical proof from its inputs to its own final states requires
-- the potential halt decider to trace the sequence of state transitions
-- of the input TMD as syntactic logical consequence inference steps of
-- this formal proof.
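A minimal illustration of what such a step-by-step trace involves, using the two-state machine from the encoding example earlier in this thread. The sketch is mine; a step bound is added because an unbounded trace never returns an answer on a non-halting input:

```python
def trace(delta, state, tape, final, limit=1000):
    """Step a TM configuration, recording each transition.

    delta maps (state, symbol) -> (write, move, next_state).
    Returns (halted, steps): halted is False when the bound is exhausted
    without reaching the final state or running out of transitions.
    """
    head, steps = 0, []
    for _ in range(limit):
        if state == final:
            return True, steps
        sym = tape.get(head, "0")
        if (state, sym) not in delta:
            return True, steps  # no applicable transition: machine halts
        w, mv, nxt = delta[(state, sym)]
        steps.append((state, head, sym, w, mv, nxt))
        tape[head] = w
        head += 1 if mv == "R" else -1
        state = nxt
    return False, steps  # bound exhausted: the trace alone gives no verdict

# "0,1R1>1;1,0R0>0/1/101" from Ben Bacarisse's encoding example, on input 101.
delta = {("0", "1"): ("1", "R", "1"), ("1", "0"): ("0", "R", "0")}
halted, steps = trace(delta, "0", {0: "1", 1: "0", 2: "1"}, final="1")
print(halted, len(steps))  # True 1
```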

The non-existence of a formal proof from [Ĥ][Ĥ] to either H.qy or H.qn
allows H to correctly decide undecidability of H [Ĥ][Ĥ].

In 2012, 2013, 2014, 2015, 2016, and 2017 I did not say this correctly.

Pete Olcott

Feb 22, 2018, 6:16:06 PM
On 2/22/2018 4:03 PM, David Kleinecke wrote:
> Much better. But "specifying relations to other finite
> strings" has no meaning as it stands. I think what you mean
> by Predicate is what I would call a set of n-tuples for
> some finite n. But if you want to have a type theory
> matters get more complex. And, once again "expressing a
> relation R of language L" has no apparent meaning.
>

A Relation is identical to the common notion of a Predicate
from Predicate Logic, essentially a Boolean valued function.

Is_Type_Of("cat","animal")
Is_Type_Of("cat","dog")
Is_Type_Of("cat","verb")
Is_Type_Of("cat","ikbfwrtjksdfkjnoiwer")

Numerically_Greater_Than("5", "3")
Numerically_Greater_Than("5", "dead snake")
Numerically_Greater_Than("5", "ikbfwrtjksdfkjnoiwer")
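The examples above can be sketched as Boolean-valued functions over strings. This is a minimal illustrative sketch, not Olcott's implementation: the fact table is assumed, and ill-typed arguments here simply evaluate to False rather than being treated as "meaningless."

```python
# Illustrative sketch: Relations as Boolean-valued predicates
# over strings, backed by a small hand-entered fact table.

TYPE_OF = {
    "cat": {"animal"},   # assumed facts, for illustration only
    "dog": {"animal"},
}

def is_type_of(x: str, t: str) -> bool:
    """True iff x has been recorded as having type t."""
    return t in TYPE_OF.get(x, set())

def numerically_greater_than(a: str, b: str) -> bool:
    """True iff both arguments parse as numbers and a > b."""
    try:
        return float(a) > float(b)
    except ValueError:
        return False  # ill-typed argument, e.g. "dead snake"

print(is_type_of("cat", "animal"))                  # True
print(is_type_of("cat", "dog"))                     # False
print(numerically_greater_than("5", "3"))           # True
print(numerically_greater_than("5", "dead snake"))  # False
```

Note that collapsing "ill-typed" to False is one design choice; the type-theory quote below instead treats such sentences as meaningless, which would call for a third value or an error.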

I am explicitly adding a semantic aspect to the concept of a WFF:

sentences of the form: " a has the property φ ", " b bears the
relation R to c ", etc. are meaningless, if a, b, c, R, φ are
not of types fitting together.

http://liarparadox.org/index.php/2018/02/17/the-ultimate-foundation-of-a-priori-truth/
I added the result of our dialogue at the bottom.

Pete Olcott

unread,
Feb 22, 2018, 8:24:02 PM2/22/18
to
On 2/22/2018 7:03 PM, David Kleinecke wrote:
> So the predicate in one of your relations is always a
> two-place predicate?
>

No, not at all. I began thinking this way more than a year ago.
As soon as I encountered: "John gave Mary a book"
Gave(John, Mary, Book)
I realized that I had to expand my notions.

> If so it is equivalent to a set of ordered pairs.
>
> At this point I remain confused by what you intend. You
> say:
>
>> --A language L is a set of finite strings of characters from a
>> --defined alphabet specifying relations to other finite strings.
>
> As nearly as I can tell the fact that your language members are
> strings in some alphabet plays no part in the rest of the
> construction. That is a language in your sense might just as well
> be a set whose members have no internal structure.
>
I find that it is best to stick with the David Hilbert formalist school.
There are numerous problems that this eliminates.

> Then your construction is a set of sets of n-tuples. As nearly
> as I can tell your type theory takes the form of "a type is a
> set of things"

A collection of things that have one or more properties in common.

> and a a predicate with three variables - x, y
> and z - is in fact a member of the triple set product X x Y x Z
> where x is member of X, y of Y and z of Z. That is, the predicate
> Pred (x, y, z) is a subset of X x Y x Z and has the value True
> if the triple is in the predicate set and False if it isn't.
>
> This is a meaningful construction but it has no force. All we
> have is Basic Facts. Somehow deduction must be introduced
> before Provable has any meaning. How do you propose to do that?
>

In the typical way. No need to get down into these
weeds just yet; we need only understand that numerous ways of
accomplishing this have already been specified in the literature.

Since I am simultaneously refuting several theorems at once my
words must remain generic across all of them.

Pete Olcott

unread,
Feb 22, 2018, 8:28:51 PM2/22/18
to
On 2/22/2018 4:16 PM, DKleinecke wrote:
> How can any intelligence - natural or artificial -
> determine whether an alleged new fact contradicts the
> current set without deriving all possible consequences?
Of the collection of all types of things, each type is related to only a very few other types.

> And what about the case where new data corrects the current
> set of agreed-upon facts?

The database can be updated.

Pete Olcott

unread,
Feb 22, 2018, 10:45:32 PM2/22/18
to
On 2/17/2018 12:42 AM, Pete Olcott wrote:
A Collection is defined as one or more things that have one or more properties in common. These operations from set theory are available: {⊆, ∈}

A BaseFact is an expression X of (natural or formal) language L that has been assigned the semantic property of True. (Similar to a math Axiom.)

A Collection T of BaseFacts of language L forms the ultimate foundation of the notion of Truth in language L.

To verify that an expression X of language L is True or False only requires a syntactic logical consequence inference chain (formal proof) from one or more elements of T to X or ~X.

True(L, X) ↔ ∃Γ ⊆ BaseFact(L) Provable(Γ, X)

Simple English:
For any natural (human) or formal (mathematical) language L we know that an expression X of language L is true if and only if there are expressions Γ of language L that connect X to known facts.


False(L, X) ↔ ∃Γ ⊆ BaseFact(L) Provable(Γ, ~X)
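These two definitions can be sketched as a toy executable check. This is a hedged illustration only: the fact base and rules below are assumed, and Provable is modeled as forward chaining over ground one-premise rules, which is just one of many possible inference relations.

```python
# Toy sketch of True(L, X) and False(L, X) as provability
# from a finite set of BaseFacts (all names are illustrative).

BASE_FACTS = {"cat_is_animal", "dog_is_animal"}

# Ground rules: premise -> conclusion.  "~p" stands for not-p.
RULES = [
    ("cat_is_animal", "cat_is_living_thing"),
    ("cat_is_living_thing", "~cat_is_rock"),
]

def provable(gamma: set, x: str) -> bool:
    """Forward-chain from gamma; True iff x is eventually derived."""
    derived = set(gamma)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in RULES:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return x in derived

def true_in_L(x: str) -> bool:
    return provable(BASE_FACTS, x)

def false_in_L(x: str) -> bool:
    return provable(BASE_FACTS, "~" + x)

print(true_in_L("cat_is_living_thing"))  # True
print(false_in_L("cat_is_rock"))         # True
print(true_in_L("moon_is_cheese"))       # False
```

One consequence visible even in this toy: an expression such as "moon_is_cheese" comes out neither True nor False, since nothing connects it or its negation to the fact base.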

Copyright 2018 (and many other years since 1997) Pete Olcott


--

Peter Percival

unread,
Feb 23, 2018, 12:08:36 PM2/23/18
to
But you don't seek to ensure that current implies correct? If I now
proclaim "Donald Trump is an SVR officer" (and let's suppose no
"current" fact clashes with that) and if next week I proclaim "Donald
Trump is not an SVR officer", is the second rejected because it is not
current, and for no better reason than that?

>

Peter Percival

unread,
Feb 23, 2018, 12:15:59 PM2/23/18
to
Here are two potential facts: "Donald Trump is an SVR officer", "Donald
Trump is not an SVR officer". How does this magic updating of this
magic database occur so that the _fact_ rather than the fiction goes
in/stays in? Since you have twice unwisely claimed to have given the
complete specification for a BaseFact, how can you even answer?

>

Pete Olcott

unread,
Feb 23, 2018, 12:17:21 PM2/23/18
to
If your only source of information of the world is a Liar there is nothing else that you can do.

Pete Olcott

unread,
Feb 23, 2018, 2:23:24 PM2/23/18
to
On 2/23/2018 1:05 PM, David Kleinecke wrote:
> On Thursday, February 22, 2018 at 10:10:55 PM UTC-8, peteolcott wrote:
>> On Thursday, February 22, 2018 at 11:43:15 PM UTC-6, David Kleinecke wrote:
>>> On Thursday, February 22, 2018 at 7:45:32 PM UTC-8, Pete Olcott wrote:
>>>>
>>>> For any natural (human) or formal (mathematical)
>>>> language L we know that an expression X of language L is true if and
>>>> only if there are expressions Γ of language L that connect X to
>>>> known facts.
>>>
>>> Clear enough but unfortunately itself not true.
>>>
>>> In the ordinary sense of "true.
>>
>> The mathematical form is True for (a priori) Truth such
>> that anything contradicting it is necessarily incorrect.
>> I have pondered this for at least three decades and finally
>> have it.
>
> But this is NOT the definition of truth mathematicians use
> (whenever they speak of true which is rarely). I strongly
> suspect everyone reading comp.theory (except you) knows
> this and rejects your definition of "true".
>
> Since you like to argue from authority I will point out
> that I have a PhD in Mathematics and I was for a brief
> time a student of Tarski's.
>

The best that I could find is that mathematics uses the term axiom
to specify Truth, and an Axiom is some vague thing anywhere from
a mere assumption to a self-evident truth, without {self-evident}
or {truth} ever being defined.

In any case my concept of BaseFact is the actual way that (a priori)
Truth really works, thus unequivocally irrefutable.

--
*∀L ∈ Formal_Systems
∀X
True(L, X) ↔ ∃Γ ⊆ Axioms(L) Provable(Γ, X) *

Pete Olcott

unread,
Feb 23, 2018, 2:27:55 PM2/23/18
to
On 2/23/2018 1:05 PM, David Kleinecke wrote:
> On Thursday, February 22, 2018 at 10:10:55 PM UTC-8, peteolcott wrote:
>> On Thursday, February 22, 2018 at 11:43:15 PM UTC-6, David Kleinecke wrote:
>>> On Thursday, February 22, 2018 at 7:45:32 PM UTC-8, Pete Olcott wrote:
>>>>
>>>> For any natural (human) or formal (mathematical)
>>>> language L we know that an expression X of language L is true if and
>>>> only if there are expressions Γ of language L that connect X to
>>>> known facts.
>>>
>>> Clear enough but unfortunately itself not true.
>>>
>>> In the ordinary sense of "true.
>>
>> The mathematical form is True for (a priori) Truth such
>> that anything contradicting it is necessarily incorrect.
>> I have pondered this for at least three decades and finally
>> have it.
>
> But this is NOT the definition of truth mathematicians use
> (whenever they speak of true which is rarely). I strongly
> suspect everyone reading comp.theory (except you) knows
> this and rejects your definition of "true".
>
> Since you like to argue from authority I will point out
> that I have a PhD in Mathematics and I was for a brief
> time a student of Tarski's.
>

https://en.wikipedia.org/wiki/Philosophy_of_mathematics

It is a profound puzzle that on the one hand mathematical truths seem to have a compelling inevitability, but on the other hand the source of their "truthfulness" remains elusive. Investigations into this issue are known as the foundations of mathematics program.

--
*∀L ∈ Formal_Systems
∀X
True(L, X) ↔ ∃Γ ⊆ Axioms(L) Provable(Γ, X) *

George Greene

unread,
Feb 23, 2018, 4:51:41 PM2/23/18
to
On Thursday, February 22, 2018 at 4:34:12 PM UTC-5, Pete Olcott wrote:
> An automated system similar to Prolog would reject adding any Facts
> or Rules that contradict the current set of Facts and Rules.

That is exactly what automated theorem provers try to do, and while
this contradiction is (in zeroth- and in first-order logic) always provable,
in the FOL case the proof may take A VERY long time to find, and Prolog does
NOT always find it.
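The consistency check described above can be sketched in its simplest possible form: a fact store that rejects a new ground fact when its negation is already present. This is illustrative only; as the post notes, detecting a contradiction in full FOL is only semi-decidable and can take arbitrarily long.

```python
# Sketch of the simplest case of the consistency check:
# reject a ground fact whose negation is already stored.

class FactStore:
    def __init__(self):
        self.facts = set()

    @staticmethod
    def negate(f: str) -> str:
        """"~p" <-> "p" (strings stand in for ground literals)."""
        return f[1:] if f.startswith("~") else "~" + f

    def add(self, fact: str) -> bool:
        """Add fact unless it contradicts the store; report success."""
        if self.negate(fact) in self.facts:
            return False  # rejected: contradicts an existing fact
        self.facts.add(fact)
        return True

store = FactStore()
print(store.add("trump_is_svr_officer"))   # True: accepted
print(store.add("~trump_is_svr_officer"))  # False: rejected
```

Note that this store only catches direct negation clashes between literals; detecting contradictions that follow only through chains of rules requires actual theorem proving, which is the hard part the post is pointing at.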


Pete Olcott

unread,
Feb 23, 2018, 5:04:35 PM2/23/18
to
That is why ontology design is so important. Yet before an
ontology can possibly be defined, the notion of Truth must
be unequivocally specified.

One aspect of formalizing the notion of Truth is resolving
every paradoxical statement of language into the specific error
that it truly was all along.

Pete Olcott

unread,
Feb 23, 2018, 5:15:51 PM2/23/18
to
On 2/23/2018 2:59 PM, Dr. HotSalt wrote:
> "An BaseFact is an expression X of (natural or formal) language L that has been assigned the semantic property of True. (Similar to a math Axiom)."
>
> In what way is "assigned the semantic property of True" different from "a merely assumption [sic] to a self-evident truth"?


That is a good question. I answer that on my blog now.
To assign the semantic value of True to an expression of
language only involves making this expression of language
a member of the named collection: BaseFact.

>
> (I really wish you'd pay more attention to your grammar. If you can't properly write "natural" language, why should anyone trust you to free it from "error"?)
>
>> In any case my concept of BaseFact is the actual way that (a priori)
>> Truth really works, thus unequivocally irrefutable.
>
> Are you *trying* to make me snort coffee out of my nose?
>
> Mathematical axioms are also by definition irrefutable. That's why mathematics is not a science.
>
> What process do you use to demonstrate that expressions to which you "assign the semantic property of True" are evident, either by themselves or with other support?
>

It turns out that this is simply the way that Truth works.
The ONLY reason that you "know" that {a cat is not a type of dog}
is that that is what you were told.

Essentially:
{a cat is not a type of dog} has been made a member of BaseFacts.


> When did you define "True"?
>
> Upon what does that definition rest? Upon what *must* it rest?
>
> For someone obsessed with logic, you're not very good at it.
>
>
> Dr. HotSalt

Pete Olcott

unread,
Feb 23, 2018, 11:49:29 PM2/23/18
to
On 2/23/2018 7:28 PM, Dr. HotSalt wrote:
> In other words, no different. "It's true because I say it is."
>
>>> (I really wish you'd pay more attention to your grammar. If you
>>> can't properly write "natural" language, why should anyone trust you
>>> to free it from "error"?)
>>>
>>>> In any case my concept of BaseFact is the actual way that (a priori)
>>>> Truth really works, thus unequivocally irrefutable.
>>>
>>> Are you *trying* to make me snort coffee out of my nose?
>>>
>>> Mathematical axioms are also by definition irrefutable. That's why
>>> mathematics is not a science.
>>>
>>> What process do you use to demonstrate that expressions to which you
>>> "assign the semantic property of True" are evident, either by themselves
>>> or with other support?
>>>
>>
>> It turns out that this is simply the way that Truth works.
>> The ONLY reason that you "know" that {a cat is not a type of dog}
>> is that is what you were told.
>
> Nope.
>
> I know that cats and dogs are definitions assigned to specific objects with certain observable properties. Checking those properties against a list allows one to determine which is which.
>
> That requires reference to huge body of information, each word of which requires referring back to another body of information defining and qualifying each definition.
>> You seem to think that your "base facts" need no foundation.

The set of BaseFacts is actually organized as a complex knowledge
ontology inheritance hierarchy that has all of these detailed
properties included.

https://en.wikipedia.org/wiki/Ontology_(information_science)
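A minimal sketch of such an inheritance hierarchy, with type membership decided by walking ancestor links (the hierarchy below is an assumed toy; real ontologies such as Cyc's are vastly richer):

```python
# Toy knowledge-ontology inheritance hierarchy:
# each type points to its parent type, up to a root.
PARENT = {
    "cat": "feline",
    "feline": "mammal",
    "dog": "canine",
    "canine": "mammal",
    "mammal": "animal",
}

def inherits_type(x: str, t: str) -> bool:
    """True iff t is x itself or one of x's ancestors."""
    while x is not None:
        if x == t:
            return True
        x = PARENT.get(x)
    return False

print(inherits_type("cat", "animal"))  # True: cat -> feline -> mammal -> animal
print(inherits_type("cat", "dog"))     # False: "dog" is on a different branch
```

On this design, {a cat is not a type of dog} need not be stored as a separate BaseFact; it falls out of the shape of the hierarchy.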

For now all that you need to know is that a {dog} is not called a "cat"
simply because this is what you were told.

You were told that a "dog" wags its tail and barks.
You were told that a "cat" meows.

If you were told these words in reverse and everyone used them in
reverse you would have no way of knowing that they have been reversed.

// This is a key element of my Tarski's undefinability theorem refutation.
// Until the semantic notion of True is fully anchored there can be no
// predicate of language that defines Tarski’s ∀x True(x) ↔ φ(x)

Copyright 2016, 2017, 2018 Pete Olcott


>
>> Essentially:
>> {a cat is not a type of dog} has been made a member of BaseFacts.
>
> Sigh.
>
>>> When did you define "True"?
>

I will use simpler terms. True is simply a named collection of expressions of language.

I just fully realized this in 2018 at the age of 62.

It may take some getting used to and pondering, yet it is the actual essence of (a priori) Truth.

> I note no response.
>
>>> Upon what does that definition rest? Upon what *must* it rest?
>
> I note no response.

Pete Olcott

unread,
Feb 23, 2018, 11:53:00 PM2/23/18
to
On 2/22/2018 4:03 PM, David Kleinecke wrote:
> Much better. But "specifying relations to other finite
> strings" has no meaning as it stands. I think you mean
> by Predicate is what I would call a set of n-tuples for
> some finite n. But if you want to have a type theory
> matters get more complex. And, once again "expressing a
> relation R of language L" has no apparent meaning.
>

This dialogue was one of the most productive dialogues
that I have ever had on USENET. Thanks

Pete Olcott

unread,
Feb 24, 2018, 10:53:15 AM2/24/18
to
The way that Truth actually works is that unless there is
some reason to reject a declarative statement of language
it cannot be rejected.

If the second axiom is not entered as an update to the first
axiom it must be rejected.

Pete Olcott

unread,
Feb 28, 2018, 11:17:01 AM2/28/18
to
On 2/17/2018 12:42 AM, Pete Olcott wrote:
> a Collection is defined one or more things that have one or more properties in common. These operations from set theory are available:  {⊆, ∈}
>
> An BaseFact is an expression X of (natural or formal) language L that has been assigned the semantic property of True. (Similar to a math Axiom).
>
> A Collection T of BaseFacts of language L forms the ultimate foundation of the notion of Truth in language L.
>
> To verify that an expression X of language L is True or False only requires a syntactic logical consequence inference chain (formal proof) from one or more elements of T to X or ~X.
>
> True(L, X) ↔ ∃Γ ⊆ BaseFact(L) Provable(Γ, X)
> False(L, X) ↔ ∃Γ ⊆ BaseFact(L) Provable(Γ, ~X)
>
> Copyright 2018 (and many other years since 1997) Pete Olcott
>

BaseFacts (and all (a priori) knowledge) are entirely composed of (HOL Predicate)** Relations and their arguments.

Only by assigning a subset of these Relations to the named Collection of BaseFacts** is the semantic notion of Truth fully specified.

** Making these Relations a member of BaseFacts

** Minimal Type Theory has been defined such that any finite order of logic from 0 to N can be specified in MTT using very slightly augmented FOPL syntax.

https://www.researchgate.net/publication/315367846_Minimal_Type_Theory_MTT

https://www.researchgate.net/publication/317953772_Provability_with_Minimal_Type_Theory

Copyright 2018 Pete Olcott

Pete Olcott

unread,
Mar 1, 2018, 5:34:17 PM3/1/18
to
On 3/1/2018 12:58 PM, David Kleinecke wrote:
> On Wednesday, February 28, 2018 at 10:23:17 PM UTC-8, peteolcott wrote:
>> On Friday, February 23, 2018 at 1:05:50 PM UTC-6, David Kleinecke wrote:
>>> On Thursday, February 22, 2018 at 10:10:55 PM UTC-8, peteolcott wrote:
>>>> On Thursday, February 22, 2018 at 11:43:15 PM UTC-6, David Kleinecke wrote:
>>>>> On Thursday, February 22, 2018 at 7:45:32 PM UTC-8, Pete Olcott wrote:
>>>>>>
>>>>>> For any natural (human) or formal (mathematical)
>>>>>> language L we know that an expression X of language L is true if and
>>>>>> only if there are expressions Γ of language L that connect X to
>>>>>> known facts.
>>>>>
>>>>> Clear enough but unfortunately itself not true.
>>>>>
>>>>> In the ordinary sense of "true.
>>>>
>>>> The mathematical form is True for (a priori) Truth such
>>>> that anything contradicting it is necessarily incorrect.
>>>> I have pondered this for at least three decades and finally
>>>> have it.
>>>
>>> But this is NOT the definition of truth mathematicians use
>>> (whenever they speak of true which is rarely). I strongly
>>> suspect everyone reading comp.theory (except you) knows
>>> this and rejects your definition of "true".
>>>
>>> Since you like to argue from authority I will point out
>>> that I have a PhD in Mathematics and I was for a brief
>>> time a student of Tarski's.
>>
>> So was Richard Montague.
>
> Sure. I knew Dick Montague. Before he showed any interest in
> formal grammar. So what?
>

I would estimate that you must be pretty smart to have studied under Tarski.

I am glad that I could finally show you how the Principle of Compositionality could include context.
http://www.cyc.com/documentation/ontologists-handbook/writing-efficient-cycl/cycl-representation-choices/

I always thought of the POC as including context because I never divided context from semantics.

--
*∀L ∈ Formal_Systems
∀X
True(L, X) ↔ ∃Γ ⊆ Axioms(L) Provable(Γ, X) *