2 ideas after our last meeting (2018.1.24)


alex.shkotin

Jan 25, 2018, 8:27:51 AM
to ontolog-forum
Hi All!

1) Sometimes I think it's better not to use the word "ontology" at all, and to substitute "formal theory" or "finite model" instead. Only if some formal text keeps together elements of a formal theory and a finite model should we use the word "ontology" ;-)

2) Trying to apply category theory to knowledge representation is as old as the hills. Let me refer to http://beniaminov.rsuh.ru/ (as a Facebook friend pointed out to me), but nowadays we should ask the CT enthusiasts: what kind of tools do they give us? Pencil and paper? :-)

Alex

Tom Tinsley

Jan 25, 2018, 11:51:53 AM
to ontolo...@googlegroups.com

Hi All,

 

The discussion on category theory has been excellent. My takeaway is that it has a strong mathematical base but an almost zero level of usage.

 

Some answers may be found at: https://otterserver.com/category/catalog-for-knowledge-documents/

 

Tom

 



Cory Casanave

Jan 25, 2018, 12:05:11 PM
to ontolo...@googlegroups.com, Elisa Kendall, andreas....@fokus.fraunhofer.de, Leo J. Obrst, char...@mitre.org, Cory Casanave, Conrad Bock (conrad.bock@nist.gov), bob...@nomagic.com, Fabian Neuhaus

The discussion and presentation on ontolog sound a lot like what has been done for the “DOL” (Distributed Ontology, Modeling, and Specification Language) standard at OMG. (Not that I understand all the math.)

 

DOL: http://www.omg.org/spec/DOL

Ontolog: http://ontologforum.org/index.php/ConferenceCall_2018_01_24

 

 

-Cory

Pat Hayes

Jan 27, 2018, 3:54:38 PM
to ontolo...@googlegroups.com, alex.shkotin


> On Jan 25, 2018, at 7:27 AM, alex.shkotin <alex.s...@gmail.com> wrote:
>
> Hi All!
>
> 1) Sometimes I think it's better not to use the word "ontology" at all, and to substitute "formal theory" or "finite model" instead. Only if some formal text keeps together elements of a formal theory and a finite model should we use the word "ontology" ;-)

Finite model? In what sense of ‘model’? Many formal theories don’t have finite models in the sense of ‘model theory’, and shouldn’t have them. Arithmetic, for instance. But perhaps (?) you mean some other sense of ‘model’?

Pat



Alex Shkotin

Jan 29, 2018, 8:15:12 AM
to Pat Hayes, ontolog-forum
Pat,

We use a finite model of the rational numbers - something like the algebra of numbers modulo 10^20.
Any DB may be seen as a finite model, though not a very mathematical one.
Some finite categories mentioned at the last meeting are finite models for us.
What is a model? We may ask it questions instead of asking reality and get the same answer. Mathematical models are cheaper than physical ones and more robust than DBs.
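
To make this concrete, here is a toy sketch in plain Python (my own illustration, not any standard tool): the finite structure of numbers modulo 10^20, spot-checked against a couple of algebraic axioms, together with one familiar axiom of the integers that fails because the structure wraps around.

import random

# Toy finite structure: integers modulo 10**20 with + and * (illustration only).
M = 10**20

def add(x, y):
    return (x + y) % M

def mul(x, y):
    return (x * y) % M

random.seed(0)
sample = [random.randrange(M) for _ in range(50)]

# Spot-check commutativity and distributivity on the sample.
assert all(add(x, y) == add(y, x) for x in sample for y in sample)
assert all(mul(x, add(y, z)) == add(mul(x, y), mul(x, z))
           for x, y, z in zip(sample, sample[1:], sample[2:]))

# But the ordering axiom "x < x + 1" of the genuine integers fails here:
assert add(M - 1, 1) == 0    # the structure is finite and wraps around
print("finite structure of size", M, "passed the sampled checks")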

Alex

Pat Hayes

Jan 29, 2018, 1:49:32 PM
to Alex Shkotin, ontolog-forum
Alex

I think we are talking at cross purposes. At any rate, you do not seem to be using “model” in the sense of model theory, i.e., an interpretation which makes a theory true, so I should not comment on your postings any further.

Best wishes

Pat Hayes

Alex Shkotin

Jan 30, 2018, 7:57:36 AM
to Pat Hayes, ontolog-forum
Pat,

I count on finite models of Description Logics.

Alex

Jack Hodges

Jan 30, 2018, 10:24:13 AM
to ontolo...@googlegroups.com
The term ‘model’ has such a broad range of definitions that it seems almost absurd to seek consensus on it. Is model theory the baseline for discussions of semantic models? I would be interested in knowing your views on the relationship between conceptual models and model theory.

Jack


Alex Shkotin

Jan 30, 2018, 11:57:11 AM
to ontolog-forum
Jack,

Pat and I keep the same approach in mind - Model Theory from mathematical logic. I'd like to point out that we need to create formal theories for every science and technology (for example, our group has done a project for geology) and obtain a lot of finite models for them.
But the problem comes from numbers: we use numbers but do not axiomatize them. And as I understand from the previous discussion, Pat points out that there is no finite model if we use numbers.
It's a little bit subtle to show that in practice we use a finite model of Numbers.

Anyway, the main point is that ontology = formal theory + finite model.

Alex



henson graves

Jan 30, 2018, 2:41:07 PM
to ontolo...@googlegroups.com

In my experience, engineers and logicians use the term "model" very differently. Engineers develop models for systems under design or analysis, perhaps in OWL or UML. If formalized, the model becomes an axiom set used to reason about interpretations in the physical or a simulated world. Logicians speak of the interpretations of axiom sets as models. So when this is formalized one has

axiom set <-> engineer's model

logician's model <-> engineer's interpretation. Interpretations include simulations of engineer's models.


- Henson





Jon Awbrey

Jan 30, 2018, 3:56:12 PM
to ontolo...@googlegroups.com, henson graves
Henson, List,

Different senses of “model” usually divide
into the “logical” and the “analogical”.
Here are some thoughts along those lines:

Objects, Models, Theories
1. https://inquiryintoinquiry.com/2013/09/10/objects-models-theories-1/
2. https://inquiryintoinquiry.com/2013/11/20/objects-models-theories-2/
3. https://inquiryintoinquiry.com/2013/11/21/objects-models-theories-3/
4. https://inquiryintoinquiry.com/2013/11/27/objects-models-theories-4/

Regards,

Jon


--

inquiry into inquiry: https://inquiryintoinquiry.com/
academia: https://independent.academia.edu/JonAwbrey
oeiswiki: https://www.oeis.org/wiki/User:Jon_Awbrey
isw: http://intersci.ss.uci.edu/wiki/index.php/JLA
facebook page: https://www.facebook.com/JonnyCache

Jack Hodges

Jan 30, 2018, 7:14:23 PM
to ontolog-forum
Our work is on the engineering side of things, in OWL/RDFS.

Jack


John F Sowa

Jan 30, 2018, 9:15:10 PM
to ontolo...@googlegroups.com, spencer...@nist.gov
Spencer, Cory, Alex, Pat H, and Jack H,

In last week's Ontology Summit, I grew impatient while listening
to a talk by Spencer Breiner about categories.

Cory
> The discussion and presentation on ontolog sounds a lot like what
> has been done for the DOL...

Yes, but there is a difference. Category theory and institutions
are great for mapping formal systems. See the attached dol1.jpg,
which shows the mappings among 24 formal logics used for ontology
and related applications.

But the thousands of independently defined ontologies are just a
miscellaneous selection. They don't form a coherent category with
the kinds of systematic mappings shown in dol1.jpg.

However, I agree with the statement on slide 4 of Spencer's talk:
> Build mixed contexts by inheriting from libraries

When people build ontologies by selecting predefined modules from
a library, they are using a lattice of theories with a lattice of
mappings. Those theories and mappings form a category. In his talk,
I wish that Spencer had emphasized the *libraries*, how to build them,
and how to use them. The fact that they form a category is worth
*one slide* for anyone who happens to know something about categories.

But note what Cory said: "Not that I understand all the math."
One slide would inform him that the ideas are related to DOL.
The details of that relationship would require a short course,
not a half-hour talk.

Alex
> Pat points out that there is no finite model if we use numbers. It's a
> little bit subtle to show that in practice we use a finite model of
> Numbers.

Pat is absolutely correct. And Alex is correct that we can never
use or represent more than a finite subset of numbers. What Pat
meant is that there is no upper bound on that subset. As our
computers become bigger and faster, we use more of them. There
is no natural stopping point: You can always enlarge your subset.
That is the meaning of infinity: there is no end (last number).

Jack H
> The term ‘model’ has such a broad range of definitions that it
> seems almost absurd to seek consensus on it. Is model theory
> the baseline for discussions of semantic models?

For any logic used to specify an ontology, there is a consensus,
and Alfred Tarski stated it: A model M of a formal theory T
expressed in a logic L is a set-theoretic construction for which
every axiom of T is true.

The attached mthworld.gif illustrates that consensus: On the right
is a theory T expressed by five axioms in a first-order logic L.
In the middle is a model M, which consists of a set of nodes
(represented by dots) and a set of relations (represented by
lines that connect the dots). Each axiom has a denotation, True
or False, in terms of the model M.

That abstract model M can be used as an approximation to some aspect
of the world w, which is shown on the left of mthworld.gif. But that
mapping depends on methods of measurement that can never be absolutely
precise. Therefore, the mapping of M to w is only an approximation,
which may be judged as good, fair, or poor for some application.
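
To make that picture concrete, here is a minimal sketch in plain Python (my own toy example; the dots, lines, and axioms are invented for illustration): a finite model M given as a set of dots and a set of connecting lines, with two first-order axioms evaluated over it in the Tarskian way.

# Toy Tarski-style check: a finite model M = (dots, lines) and two axioms.
dots = {1, 2, 3, 4}
lines = {(1, 2), (2, 3), (3, 4)}        # a binary relation on the dots

def connected(x, y):
    return (x, y) in lines or (y, x) in lines

# Axiom 1: every dot is connected to some dot.   (forall x)(exists y) R(x,y)
axiom1 = all(any(connected(x, y) for y in dots) for x in dots)

# Axiom 2: no dot is connected to itself.        (forall x) not R(x,x)
axiom2 = all(not connected(x, x) for x in dots)

print("M is a model of the theory {axiom1, axiom2}:", axiom1 and axiom2)  # True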

John
dol1.jpg
mthworld.gif

Pat Hayes

Jan 30, 2018, 9:49:15 PM
to ontolog-forum


> On Jan 30, 2018, at 8:15 PM, John F Sowa <so...@bestweb.net> wrote:
>
> Spencer, Cory, Alex, Pat H, and Jack H,
>
>
….
> Alex
>> Pat points out that there is no finite model if we use numbers. It's a
>> little bit subtle to show that in practice we use a finite model of
>> Numbers.
>
> Pat is absolutely correct. And Alex is correct that we can never
> use or represent more than a finite subset of numbers. What Pat
> meant is that there is no upper bound on that subset.

Not exactly, though that is indeed true. What I meant was that pretty much any axiomatization of arithmetic does not have finite models. That is the whole point of arithmetic: it's the theory of *the natural numbers*. Obviously, being finite creatures, we can never use more than a finite number of numerals; that is a trivial observation. But what we might call the machinery of arithmetic, which we also use, itself is based on there being infinitely many numbers.

Alex says that he is using a finite model of Numbers. I am not sure what this means, but if he has an axiomatisation of Arithmetic which has finite models, I would love to see it. Just for a start: if 0 is a Number, if every Number has a successor which is different from it, if no two distinct Numbers have the same successor, and if no Number’s successor is 0, then there are infinitely many Numbers. Putting addition and multiplication into the mix just confirms the infinity.
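
Spelled out, the axioms I have in mind are roughly these (s is the successor function; the exact formulation is not important):

  N(0)
  \forall x\, (N(x) \rightarrow N(s(x)))
  \forall x\, (s(x) \neq 0)
  \forall x\,\forall y\, (s(x) = s(y) \rightarrow x = y)

In any model of these, the elements 0, s(0), s(s(0)), ... must all be distinct: a first repeat would either make 0 the successor of something, contradicting the third axiom, or, by injectivity, force an earlier repeat. So no model can be finite.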

Pat



John F Sowa

Jan 31, 2018, 1:26:08 AM
to ontolo...@googlegroups.com
On 1/30/2018 9:49 PM, Pat Hayes wrote:
> Alex says that he is using a finite model of Numbers. I am
> not sure what this means, but if he has an axiomatisation
> of Arithmetic which has finite models, I would love to see it.

There are, of course, no finite models of the integers or the
real numbers. But IEEE floating-point arithmetic, which is
usually used as an approximation to the real numbers, is finite.
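
A two-line illustration in plain Python (just for the record; nothing beyond the standard library):

import sys

# IEEE 754 doubles form a finite set: there is a largest one, and going past it gives inf.
print(sys.float_info.max)               # about 1.7976931348623157e+308
print(sys.float_info.max * 10)          # inf

# Associativity of addition holds for the real numbers but fails for floats:
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))   # False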

On the other hand, all the theoretical work and theorem provers
are based on the real numbers or the integers. Mathematicians,
physicists, and engineers happily use systems such as Mathematica
to do the theorem proving and symbolic computation.

For symbolic computation, infinite models are actually *simpler*
than theorem proving with IEEE arithmetic -- primarily because
you never bump up against the upper bounds.

John

Alex Shkotin

Jan 31, 2018, 8:02:43 AM
to ontolog-forum
Henson,

Interpretation, for logicians as far as I know, is a function (usually a recursive one) that says how to evaluate logical formulas on some mathematical system (first of all algebraic, second categorical). If all the axioms evaluate to True, the system is a model of those axioms.
In Description Logics, they call an axiom anything from a theory statement (like "every human is mortal") to a system statement (like "Socrates is a human").
Keeping theory statements and system statements separate should be very useful, IMHO.
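
Here is my own toy sketch in plain Python (no DL reasoner involved; the names are invented) of keeping the two kinds of statements separate and checking a small finite interpretation against both:

# A tiny finite interpretation: a domain and the extensions of two concepts.
domain = {"socrates", "plato", "fido"}
Human  = {"socrates", "plato"}
Mortal = {"socrates", "plato", "fido"}

# Theory (TBox-style) statement: every Human is Mortal.
theory_ok = Human.issubset(Mortal)

# System (ABox-style) statement: Socrates is a Human.
system_ok = "socrates" in Human

print("theory statement holds in the interpretation:", theory_ok)   # True
print("system statement holds in the interpretation:", system_ok)   # True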

Alex


henson graves

Jan 31, 2018, 10:21:51 AM
to ontolog-forum

Alex,

I agree with what you are saying. One needs to distinguish between a theory and interpreted statements about the system that the theory is intended to describe.

A typical engineering problem is to modify a product to have some new capability. The typical scenario is to build an engineering model of the product and its operating environment in a modeling language such as UML. One then attempts to derive consequences from the combined product-and-operating-context model, and to construct simulation code from the model and execute it, to better determine the modified system's behavior. One may also operate a modified product and collect data for the same purpose. Generally this results in revision and refinement of the engineering model. For an example see https://scholar.google.com/scholar?oi=bibs&cluster=8615220581478398249&btnI=1&hl=en

If you view this engineering activity within a standard logic formalism paradigm, the model is an axiom set, the conclusions derived from the model are statements in the theory of the axiom set, and the simulations, as well as the product operation scenarios are interpretations of the theory.

I am suggesting that the logic paradigm is a good description of the engineering paradigm with of course the change in terminology, e.g., engineering model = axiom set. There are a lot of consequences from adoption of the paradigm. Here are three.

1. The kind of logic used is not given a priori. The kind of logic and the form of the theories are determined by the test and evaluation methods accepted in the domain.

2. Physicists and philosophers often think that they are trying to build an axiom set which has a single unique (categorical) interpretation. For engineering the axiom sets that they build to describe systems generally have more valid interpretations than intended. This simply means that part of the engineer’s job is to determine what assumptions need to be added to an axiom set to constrain the possible interpretations to correspond with the physical world.

3. At some stage the engineer may need a meta theory in which he can represent object theories and their interpretations. This is likely the consequence of your statement.

Henson






Alex Shkotin

Jan 31, 2018, 10:25:36 AM
to ontolog-forum
Pat,

in fact, if an ontology uses numbers for any particular value (since we never have just natural or rational numbers, but always a number with a unit of measure), there should be axioms for the upper and lower boundaries and the accuracy of every value. For example, a percentage value must be in 0..100, but its accuracy depends on the particular science or technology. Or if I say that I have $10^80 in my bank account, nobody will put that into his or her finite system (sorry, ontology).
But of course, as John mentioned, it's easier to be infinite, i.e., to ignore boundary and accuracy matters.
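
For example, for a percentage-valued quantity the extra axioms might look roughly like this (just a sketch of the intent; Percentage and value are illustrative symbols, and one decimal place stands in for the accuracy):

  \forall x\,\bigl(\mathrm{Percentage}(x) \rightarrow 0 \le \mathrm{value}(x) \le 100\bigr)
  \forall x\,\bigl(\mathrm{Percentage}(x) \rightarrow \exists n \in \mathbb{Z}\; \mathrm{value}(x) = n/10\bigr)

Together these admit only 1001 possible values, so this part of the ontology stays inside a finite structure.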

Alex


Alex Shkotin

Jan 31, 2018, 10:58:06 AM
to ontolog-forum
Henson,

for me "the model is an axiom set" is not a good point of view as this should be a very special kind of axioms. It's better for me to think (as Tarski:-) that the model is math-structure satisfying some axioms.
W need a special language to build math-structures by the way.
What kind of math-structures engineers use to model their production it's another matter, but this structures (for example labeled graphs) are finite with numbers:-)

Alex


Cory Casanave

Jan 31, 2018, 11:07:51 AM
to ontolo...@googlegroups.com

Henson,

A good analysis. Also consider the “forward engineering” scenario. For physical items this could be 3D printing a design. For a software system this could be producing software artifacts (A.K.A. Model Driven Architecture – MDA).

 

In both cases there is a “source model” (set of axioms) and a set of “production rules”, which can be thought of as “production axioms”.

 

There is an interesting difference between physical and software production – the 3D printed item is the final “real thing” in the world. Produced software is, of course, a real thing in the world but is also, essentially, a set of axioms describing the data and processes the software will process (this assumes software can be accepted as a set of axioms). So we have the “source model” (set of axioms) transformed by a transformation model/rule (set of axioms) producing software (set of axioms) that act on statements about the world. Those “statements about the world” are what logicians typically call models! Perhaps they are all models.

 

Of course the source model (set of axioms) can also be processed by a different set of axioms for the simulation paradigm you describe. That the same model can be interpreted with different axioms for different purposes points to the need for unifying semantics.

 

-Cory

 


John F Sowa

Jan 31, 2018, 11:55:55 AM
to ontolo...@googlegroups.com
Alex and Henson,

Alex
> if ontology uses numbers for any particular value (as we do not have
> just natural or rational numbers but always number with a unit of
> measure) it should be axioms for upper, low boundaries and accuracy
> for any value.

As Pat and I have been repeating, the word 'finite' does not
simplify anything -- and more often than not, it's false.

Re axioms for ontology: If you want your ontology to be general,
you don't want it to include upper & lower bounds, accuracy, etc.
That info may differ from one application to another.

Pat said that he would not reply to anything more on this topic.
This is also my last note on the subject. We agree that your use
of the word 'finite' is not helpful and usually wrong. But that's
your problem, not ours.

Henson
> the model is an axiom set, the conclusions derived from the model
> are statements in the theory of the axiom set, and the simulations,
> as well as the product operation scenarios are interpretations of
> the theory.

That first sentence is the most confusing. Axioms are always statements
in some theory. The word 'model' has different meanings in different
fields, but no field ever talks about the axioms of a model.

Besides logic, there are two other uses of the word 'model' that
may be used in applications:

1. Engineering models: They may be physical things, such as a scale
model that is smaller than the final product. Or they may be
computer simulations of the physical things.

2. Data models: That term is used for database systems to distinguish
different representations, such as a relational model, a network
model, a hierarchical model, or an object-oriented model.

Suggestion: I believe that the three-way distinction shown in the
attached mthworld.gif could be generalized to cover all these ways
of using the words 'theory' and 'model':

1. I first drew that diagram to illustrate a Tarski-style model of
a theory stated in logic and the relationship between a model
and the world.

2. For engineering models, the theory could be stated by axioms in
logic, but it could also be stated in mathematics (which could
also be mapped to logic). And it could even be stated in ordinary
language supplemented with mathematics.

3. The engineering model could be a simplified or scaled down
physical system or it could be a computer simulation. In either
case, it would conform to the theory as exactly as possible,
and the final product or physical implementation would conform
to the model as exactly as possible.

4. For data models, the tables, networks, or hierarchies are
different ways of representing the same abstract theory (AKA
conceptual schema). In fact, mthworld.gif shows the model
as a network. But a set of tables could satisfy exactly
the same axioms.

In summary, I believe that it would be possible to redraw mthworld.gif
in ways that would be acceptable for logic, engineering, and databases.
In each version, the thing on the left would be physical, the thing
in the middle would be called a model, and the thing on the right
would be called a theory, ontology, or specification.

John
mthworld.gif

henson graves

Jan 31, 2018, 11:58:26 AM
to ontolo...@googlegroups.com

Alex,

The fact that different communities use the same terminology for very different things should not be the show-stopper that it seems to be in this forum. Logicians have one set of terminology, engineers have another. They use the word "model" for different things. Since I occasionally talk to people in both communities, I often use the terms "engineering model" or "descriptive model" for what engineers talk about, and "interpretation of a model" for what logicians mean when they say "model".


I don't see that Cory needs to change his terminology, e.g., MDA. One doesn't have to conflate or confuse the concepts once one understands what is going on. What I have outlined is completely consistent with the Tarskian viewpoint, and with what one finds in books on model theory.

I agree that engineers' models (aka axiom sets) are often somewhat different from the ones that logicians typically deal with, and that the interpretation theory (the logician's model theory) has some interesting aspects that Cory mentions. But that only raises interesting issues for ontology discussion.

Henson



henson graves

Jan 31, 2018, 12:03:12 PM
to ontolo...@googlegroups.com

John,

Of course it doesn't make sense to talk about the axioms of a model in the Tarskian sense. But in the way that engineers use the word "model", such as the design model for an aircraft, the design is an axiom set, and its interpretations are models in the Tarskian sense of the word.


Henson



henson graves

Jan 31, 2018, 12:19:44 PM
to ontolo...@googlegroups.com

I have spent considerable time building design and analysis models in various UML languages for both aircraft and molecules. I have also spent some time using OWL to build the same kind of "models" of aircraft and molecules. Of course these models are called axiom sets in OWL. My purpose was to see if OWL could be used in engineering model development, as OWL of course provides reasoning and the UML languages do not. I was doing the same kind of activity in both cases. Same activity and artifacts, but different names.


The important thing, it seems to me, is to realize that the same activity is going on in both domains under different names. It seems not a point of real interest that the same names are used for different concepts.


I know pretty much what one finds in a book on model theory. It doesn't change what I am saying.


Henson





Pat Hayes

Jan 31, 2018, 3:53:21 PM
to ontolo...@googlegroups.com


On Jan 31, 2018, at 11:19 AM, henson graves <henson...@hotmail.com> wrote:

... It seems not a point of real interest that the same names are used for different concepts.

It is only of interest, or better of importance, when people use such a wildly ambiguous technical word in a multidisciplinary forum like this without clarifying which sense of the word they mean. If one took all the emails to just this forum which have been devoted to clearing up the confusion caused by one person’s use of “model” being mis-read by others as meaning something different from what the writer intended, they would probably fill a fairly large book. The point is not that anyone is right or wrong, only that communication sometimes requires brief amounts of pedantry, to avoid mutual misunderstanding.

Best wishes

Pat Hayes

Pat Hayes

Jan 31, 2018, 8:19:45 PM
to ontolog-forum, John F. Sowa
Just for the record:

> On Jan 31, 2018, at 10:55 AM, John F Sowa <so...@bestweb.net> wrote:
>
> ...
> Suggestion: I believe that the three-way distinction shown in the
> attached mthworld.gif could be generalized to cover all these ways
> of using the words 'theory' and 'model':

I disagree, and think that this diagram is profoundly misleading. But John and I have had this argument in public now at least three times, and we should probably not have it again.

Pat

John F Sowa

Feb 1, 2018, 3:12:04 AM
to ontolo...@googlegroups.com
On 1/31/2018 8:19 PM, Pat Hayes wrote:
>> I believe that the three-way distinction shown in the attached
>> mthworld.gif could be generalized to cover all these ways
>> of using the words 'theory' and 'model'
>
> I disagree, and think that this diagram is profoundly misleading.
> But John and I have had this argument in public now at least three
> times, and we should probably not have it again.

I agree. But I'll summarize the two positions. Anyone who has
heard the arguments before may stop reading *here*.

I thought of Pat when I wrote that note. I have shown
that diagram (copy attached) to many people who have strong
backgrounds in logic and philosophy. Some of them agree with
Pat's position and sometimes object even more violently than Pat.
But others look at the diagram and say it's obvious.

Pat's position, as I understand it, is that the domain of discourse
of a statement in logic may be some abstract set (such as integers
or other mathematical constructs). But it just as well could be
a set of things in the physical world.

An argument for Pat's position is that words of ordinary language
can and do refer to things in the physical world. When you translate
a sentence from some NL to some version of logic, the referents of
the variables in the logical sentence should be the same as the
referents of the corresponding words in the original NL sentence.

One reason for a distinction between the model and the physical world
is that it's more flexible. It enables us to distinguish the way
people think and talk about the world from the way it actually is.
Different people may have had different experiences and ways of
thinking. Some might be more accurate than others. For some, the
model might be a plan for a future that doesn't exist now or ever.

But mthworld.gif does not rule out the possibility that the model
in the diagram has an *exact* mapping to the world. In that case,
you could, if you wish, identify the model with part of the world.

John
mthworld.gif

Alex Shkotin

Feb 1, 2018, 10:44:54 AM
to ontolog-forum
Henson,

I have put my core question in the ontologforum blog: http://ontologforum.org/index.php/Blog:Formal_theory,_finite_model_and_DL_reasoner
Let's continue there in the comments, and perhaps we will get something useful for all.

Alex 

henson graves

Feb 1, 2018, 11:17:37 AM
to ontolo...@googlegroups.com

 John

My view of your diagram is very close to your own. But the inability to recognize how engineering terminology has evolved in the last 50 years, perhaps improperly from your point of view, is causing an immense amount of unnecessary confusion. One wonders if propagating confusion is the point of this thread.

To explain the difference in terminology, how it arose, and how engineers are beginning to employ a standard logic paradigm as depicted in your diagram, consider the following engineering creation myth.

In ancient times, say before 1985, people proceeded pretty much as you describe. They used text and verbal language to describe the things they wanted to build or analyze. They then often built prototypes, often at reduced scale, which most everybody called models. So far everything is exactly as you say. Then something happened over the course of the next several years. The engineers started replacing their verbal and text descriptions with artifacts in languages such as UML and OWL. This didn’t happen overnight and was not successful overnight. The reasons should be of great interest to logicians and KRR folks. But after the year 2000, things changed. Now these artifacts are becoming the authoritative source of information.

Reasoning from the artifacts is used to make design decisions; the artifacts are used to generate simulations which help in understanding the properties of systems under design or analysis. Engineers call these new artifacts “models”, which is certainly not in keeping with the terminology of the Tarskian tradition. Some engineers have realized that these artifacts, which they call models, are axiom sets, or can be embedded as axiom sets in logic. Doing so brings some well-known tools from logic to bear on questions of the correctness of reasoning and the validity of simulations (interpretations in synthetic worlds). This development promises to fundamentally change the way engineering is done on a daily basis.

Unfortunately some people with a logic background get confused by the different uses of terminology, e.g., engineering model as axiom set and logical model as interpretation. Perhaps engineers should be chastised for calling their artifacts models, but they do. Perhaps “model driven analysis”, “model based systems engineering”, and “Unified Modeling Language” should be forced to change their names. Historically the engineering artifacts were as you describe. But now the term model is generally used in the engineering community to mean the diagrams which translate to axioms or are directly axiom sets. As I have said, the interpretations of the engineering models are Tarskian models for the appropriate logic.

So while your diagram is OK, the gloss is incomplete in my opinion. The really unfortunate aspect is that it seems to preclude logicians from understanding how engineering is beginning to become, almost, applied logic. I would like logicians to understand this scenario and contribute to the developments in logic and model theory needed to obtain specific benefits from using this well-known paradigm as expressed in your diagram, as opposed to chasing one’s tail regarding the use of the word “model”.

Henson






Pat Hayes

Feb 1, 2018, 3:09:23 PM
to ontolo...@googlegroups.com


> On Feb 1, 2018, at 2:12 AM, John F Sowa <so...@bestweb.net> wrote:
>
> On 1/31/2018 8:19 PM, Pat Hayes wrote:
>>> I believe that the three-way distinction shown in the attached
>>> mthworld.gif could be generalized to cover all these ways
>>> of using the words 'theory' and 'model'
>> I disagree, and think that this diagram is profoundly misleading.
>> But John and I have had this argument in public now at least three
>> times, and we should probably not have it again.
>
> I agree. But I'll summarize the two positions.

Sigh. I had hoped you would not do this, but here goes.

> Anyone who has
> heard the arguments before may stop reading *here*.
>
> ..
> Pat's position, as I understand it, is that the domain of discourse
> of a statement in logic may be some abstract set (such as integers
> or other mathematical constructs). But it just as well could be
> a set of things in the physical world.

Yes, that is the first observation, which is not a ‘position’, but simply a fact. But it goes beyond this, since of course the domain of discourse *could* also be made of mathematical entities. Your diagram is seriously misleading because it takes the matter of how a ‘model’ (in any sense) can be an approximation to a reality – issues of degrees of precision, tolerance, approximation, accuracy and so forth – outside the semantic framework of formal ontologies and their semantics altogether. If your diagram were accurate, the relationship on the RHS of the diagram would have to be something that cannot *in principle* be described by any ontology on the far left. But (1) such matters *can* be described in formal ontologies; and more seriously (2) if they were outside this scope, as the diagram claims, what theory or framework do you suggest we could use to talk about them? I have never seen anyone, including your good self, explain how we can even begin to talk about the proposed relationship between ‘formal models’ and reality, if our semantic theories – that is, model theories – stop before these matters can even be brought into their scope.

>
> An argument for Pat's position is that words of ordinary language
> can and do refer to things in the physical world. When you translate
> a sentence from some NL to some version of logic, the referents of
> the variables in the logical sentence should be the same as the
> referents of the corresponding words in the original NL sentence.
>
> One reason for a distinction between the model and the physical world
> is that it's more flexible. It enables us to distinguish the way
> people think and talk about the world from the way it actually is.

It makes a distinction, but it does not provide any way to talk about it. In fact, it makes it *impossible* to talk about it, because all talk can only be about what the semantics of that talk says are the referents of the talk, and that only gets us to the models. So how can ANY talk be about the real world?

> Different people may have had different experiences and ways of
> thinking. Some might be more accurate than others. For some, the
> model might be a plan for a future that doesn't exist now or ever.

Of course, but that has nothing to do with the debate here. These are just observations about semantics generally.

>
> But mthworld.gif does not rule out the possibility that the model
> in the diagram has an *exact* mapping to the world. In that case,
> you could, if you wish, identify the model with part of the world.

But you still have this strange distinction between the real things on the right and their intermediary doppelgangers in the middle, even when they are in exact 1:1 correspondence. And this is just plain wrong: that is not how model theory works, nor how the original designers of it were thinking. Tarski’s running example of a sentence was “Snow is white”, and the truth conditions for this are that snow - actual, real, snow - is in fact white - the actual color, white. The language is related to the world; names refer to referents. Semantics is not a three-way relationship, it is a direct mapping between names and the things they denote. That is how model theory works, and how it always has worked. If you want to talk about approximations and so forth, by all means do so, but the Tarskian semantics applies to that talk just like it applies to all other talk, which is why we can have ontologies about things like approximation and measurement tolerances and the difference between a quantity and a measurement of that quantity and so on.

But, as I say, let us not have this argument yet again :-)

Pat




Edward Barkmeyer

Feb 2, 2018, 12:28:07 AM
to ontolo...@googlegroups.com

Cory,

 

I have a real problem with this:

> In both cases there is a “source model” (set of axioms) and a set of “production rules”, which can be thought of as “production axioms”.

 

“Production rules” are transformation rules.  The significant question for transformations is:  What properties of the source do they preserve in the image?  And, to some extent, their preservation capability may be limited by the differences in nature between the source milieu and the target milieu.  A precise mathematical vector maps to an imperfect graphical display and to an even more imperfect physical cut line or deposition line.  And as Pat pointed out, a mapping from an n-ary fact to RDF does not maintain the integral sense when viewed as a set of triples; that sense must be imposed on the RDF graph, but it is not present in the RDF milieu per se.

 

So, in what sense are “production”/”transformation” rules “axioms”?  What is the nature of the logic in which they are “true”?  The truth seems to be only that the target image is a representation of the source.  And those are the kind of axioms we often call “simple facts”.  The interesting axioms are those that enable one to reason about behaviors of the target entities in the target milieu from the facts and axioms that describe the behaviors of the source entities in the source milieu.  That is: the preservation axioms and the mutation axioms (what is predictably different).

 

I have spent a large part of my life developing rule-based transformations in software (which is the nature of 90% of all software) and in machine control.  They are all “algorithmic”, but it is not clear that any of them is “axiomatic”.  And it is really important not to confuse those concepts.

 

-Ed

John F Sowa

Feb 2, 2018, 9:49:37 AM
to ontolo...@googlegroups.com
On 2/2/2018 12:28 AM, Edward Barkmeyer wrote:
> I have spent a large part of my life developing rule-based
> transformations in software (which is the nature of 90% of all
> software) and in machine control. They are all “algorithmic”,
> but it is not clear that any of them is “axiomatic”. And it is
> really important not to confuse those concepts.

Every algorithm can be specified by a set of axioms in FOL,
and those axioms can be automatically translated to algorithms.
Production rules can also be translated to statements in logic,
which can be executed directly by logic-programming systems
or be translated to algorithms.
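
A one-screen illustration of that equivalence (my own sketch in plain Python): the recursive definition of the factorial function, read once as a pair of axioms and once as executable code.

# Axioms (a Goedel-style recursive definition):
#   fact(0) = 1
#   for all n > 0:  fact(n) = n * fact(n - 1)
# Read operationally, the same two lines are an algorithm:

def fact(n: int) -> int:
    return 1 if n == 0 else n * fact(n - 1)

print(fact(5))   # 120, exactly what the axioms entail for n = 5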

Those issues were hashed out and concluded in debates between Gödel,
Church, and Turing at Princeton in the 1930s. Their conclusion was
that recursive functions defined by axioms (Gödel's preferred form),
lambda calculus (by Church), and computers with an infinite tape
(Turing) had exactly the same computational power. They proved
that claim by defining translations from one to the other.

A major part of computer science and practice has been devoted
to implementing those ideas from the 1930s in the latest and
greatest programming languages and systems. The modern debates
are ways of saying "My version of the 1930s is better than yours."

John

Jack Hodges

Feb 2, 2018, 10:16:09 AM
to ontolo...@googlegroups.com
I would be interested in a moderated forum devoted to semantics and engineering. Perhaps a companion forum to ontolog so that these interesting philosophical, theoretical, and historical discussions can take place, here, and topics which are engineering specific can take place there, trying to keep them independent as much as practical.

Jack


John F Sowa

Feb 2, 2018, 11:10:30 AM
to ontolo...@googlegroups.com
Pat and Mary-Anne,

I changed the subject line to include notes by both of you.

PJH
> ["Pat's position"], which is not a ‘position’, but simply a fact.

Whoa! A fact about what? The physical world? Some publication?
If the latter, please cite the source. It was certainly not Tarski.

Note the title of Tarski's original paper (1933): "The concept of
truth in formalized languages." For example, "Schnee ist weiß" is true
if and only if snow is white. But then he said that the issues
about natural languages and the world are too vague and complex.
That paper only addressed the right-hand side (RHS) of mthworld.gif.
For the LHS, see the informal philosophical paper by Tarski (1944):
http://jfsowa.com/logic/tarski.pdf

PJH
> Your diagram is seriously misleading because it takes the matter
> of how a ‘model’ (in any sense) can be an approximation to a reality
> – issues of degrees of precision, tolerance, approximation, accuracy
> and so forth – outside the semantic framework of formal ontologies
> and their semantics altogether.

I believe that the correct term is 'metalevel', not 'outside'.

Some excerpts from Tarski (1944), sections 20, 21, and 22:
> The most natural and promising domain for the applications of
> theoretical semantics is clearly linguistics — the empirical study
> of natural languages...
>
> The relation between theoretical and descriptive semantics is analogous
> to that between pure and applied mathematics, or perhaps to that between
> theoretical and empirical physics... another important domain for
> possible applications of semantics is the methodology of science; this
> term is used here in a broad sense so as to embrace the theory of
> science in general... The semantics of scientific language should be
> simply included as a part in the methodology of science... One of the
> main problems of the methodology of empirical science consists in
> establishing conditions under which an empirical theory or hypothesis
> should be regarded as acceptable...
>
> As regards the applicability of semantics to mathematical sciences and
> their methodology, i.e., to metamathematics, we are in a much more
> favorable position than in the case of empirical sciences.

Formal mathematics is the only field for which Tarski (1944) claimed
that his definition of truth was directly applicable. He didn't deny
that it could be extended, but his discussion implied that extensions
would be in the "methodology" -- i.e., metalevel.

PJH
> I have never seen anyone, including your good self, explain how
> we can even begin to talk about the proposed relationship between
> ‘formal models’ and reality, if our semantic theories – that is,
> model theories – stop before these matters can even be brought
> into their scope.

It's a two-step mapping: In 1933, Tarski specified the RHS of
mthworld.gif. In 1944, he discussed the then current methodologies
for the LHS and admitted that they weren't formal. I doubt that he
would approve of making them formal by magic: Waving your hand and
declaring "Presto-zingo, my domain consists of things in the world."
That statement is OK as a hypothesis, but it's not an observation.

If you want a one-step mapping, look at fuzzy logic. Since I had
made some sympathetic comments about fuzzy systems, I was invited
to contribute to the Festschrift for Lotfi Zadeh. I wrote a 7-page
article on the question "What is the source of fuzziness?" and
included mthworld.gif as Figure 3: http://jfsowa.com/pubs/fuzzy.pdf

Since I didn't want to offend Lotfi, I didn't criticize fuzzy logic
directly. But I implied that it would be better to use a two-stage
mapping with classical logic on the RHS and fuzzy sets (or something
related) on the LHS. I don't believe that the LHS can ever be
completely formal, because no system of measurement can be perfect.

MAW
> a cool formalisation of symbol grounding
> http://www.benjaminjohnston.com.au/papers/formal.pdf

Symbol grounding addresses the LHS of mthworld.gif. Page 5
of the article discusses the informal issues:
> Representation units may or may not have any particular semantic
> interpretation, and may be manipulated by rules (such as interaction
> with the environment or hyper-computational systems) that are beyond
> formal definition.

Yes. For humans, symbols are grounded by what Peirce called
"the gates" of perception and purposive action. Methods of
pattern recognition and robotics address those two gates, but
none of them can be completely formal at the points of contact.

John
mthworld.gif

Rich Cooper

unread,
Feb 2, 2018, 11:44:08 AM2/2/18
to ontolo...@googlegroups.com

Jack,

If you start one, sign me up!

Sincerely,

Rich Cooper,

Chief Technology Officer,

MetaSemantics Corporation

MetaSemantics AT EnglishLogicKernel DOT com

( 9 4 9 ) 5 2 5-5 7 1 2

http://www.EnglishLogicKernel.com



On Jan 30, 2018, at 11:41 AM, henson graves <
henson...@hotmail.com> wrote:

In my experience engineers and logicians use the term "model" very differently. Engineers develop models for systems under design or analysis, perhaps in OWL or UML. If formalized the model becomes an axiom set used to reason about the interpretations in the physical or a simulated world. Logicians speak of the interpretations of axiom sets as models. So when this is formalized one has

axiom set <-> engineer's model

logician's model <-> engineer's interpretation. Interpretations include simulations of engineer's models.

- Henson

Pat Hayes

unread,
Feb 2, 2018, 12:30:44 PM2/2/18
to ontolog-forum, John F. Sowa
Hi John

The various formal models of computation all have the same power in the sense that they all define equivalent notions of computable function. But that is not to say that all computational architectures define the same notion of computation, or that they can all perform the same computations as all the others. For just one thing, there is a lot more to computation than which function gets computed. The computer in my pocket can, for example, respond to a phone call within a couple of seconds, an ability which is not captured by talking about what function it computes.

I agree with Ed that ‘axiomatic’ and ‘algorithmic’ are distinct ideas which we should try to keep distinct. Even when talking about something like Prolog which has both aspects, it makes sense to distinguish these aspects of its design. For example, the model theory of Prolog really has nothing to do with the algorithmic machinery of a Prolog engine, and the use of, say, a linear-time unifier has got nothing much to do with the axiomatics.
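
To make that separation concrete, here is a rough sketch (mine, in Python rather than Prolog, occurs check omitted) of the kind of syntactic unification a Prolog engine performs; nothing in it mentions models or truth, which is the point — the declarative semantics is a separate matter:

# Terms: variables are strings beginning with '?', compound terms are
# tuples (functor, arg, ...), anything else is a constant.  This is the
# naive algorithm, not the linear-time unifier.
def is_var(term):
    return isinstance(term, str) and term.startswith('?')

def walk(term, subst):
    # Follow variable bindings until we reach a non-variable or an unbound variable.
    while is_var(term) and term in subst:
        term = subst[term]
    return term

def unify(x, y, subst=None):
    # Return a substitution extending `subst` that makes x and y equal, or None.
    subst = {} if subst is None else subst
    x, y = walk(x, subst), walk(y, subst)
    if x == y:
        return subst
    if is_var(x):
        return {**subst, x: y}
    if is_var(y):
        return {**subst, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            subst = unify(xi, yi, subst)
            if subst is None:
                return None
        return subst
    return None

# unify(('on', '?X', 'b'), ('on', 'a', '?Y'))  ->  {'?X': 'a', '?Y': 'b'}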

Pat

henson graves

unread,
Feb 2, 2018, 12:45:01 PM2/2/18
to ontolo...@googlegroups.com

Ed,


Cory’s comments about axioms and production rules can be interpreted in the following way.

Engineering is transitioning from constructing artifacts in natural language to artifacts in UML and OWL. However, as axiom sets these artifacts are very weak as they generally have a lot more models (interpretations) than their constructors intended. Only gradually have engineers understood the implicit assumptions needed to constrain the valid models to what is intended. It takes a lot of work to identify and formalize this implicit knowledge. Some of us view these assumptions as the context of the axiom set. This usage of context is consistent with its use in logic and lambda calculus and other places.

What Cory refers to as production rules are the rules used to construct simulation models for engineering and to generate software from “incomplete” artifacts. These rules add information to the artifacts. However, this information should be, and can be, formalized and made part of the axiom set. That is necessary if one wants to reason correctly from the axiom sets to their interpretations, and it is now a concern in engineering.
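
As a toy illustration of "more models than intended" (my own sketch, nothing to do with any particular UML or OWL tool): count the interpretations of a single binary relation over a three-element domain that satisfy an axiom set, and watch the count shrink as implicit assumptions are added explicitly.

from itertools import product

D = ['a', 'b', 'c']
pairs = [(x, y) for x in D for y in D]

def transitive(R):
    return all((x, z) in R for (x, y1) in R for (y2, z) in R if y1 == y2)

def antisymmetric(R):
    return all(not ((x, y) in R and (y, x) in R) for x in D for y in D if x != y)

def irreflexive(R):
    return all((x, x) not in R for x in D)

def count_models(*axioms):
    # Every subset of D x D is a candidate interpretation of the relation.
    return sum(1 for bits in product([False, True], repeat=len(pairs))
               if all(ax({p for p, b in zip(pairs, bits) if b}) for ax in axioms))

print(count_models(transitive))                             # weakest theory: most models
print(count_models(transitive, antisymmetric))              # fewer
print(count_models(transitive, antisymmetric, irreflexive)) # fewer still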

 I don't think this view is really inconsistent with what you say.

Henson






Pat Hayes

unread,
Feb 2, 2018, 6:43:18 PM2/2/18
to ontolog-forum, John F. Sowa

On Feb 2, 2018, at 10:10 AM, John F Sowa <so...@bestweb.net> wrote:

> Pat and Mary-Anne,
> 
> I changed the subject line to include notes by both of you.
> 
> PJH
> ["Pat's position"], which is not a ‘position’, but simply a fact.
> 
> Whoa!  A fact about what?  The physical world?  Some publication?
> If the latter, please cite the source.  It was certainly not Tarski.

Tarski’s account of truth conditions, now usually referred to as ‘model theory’, characterizes the domain of an interpretation as being a non-empty set containing the referents of symbols. The *only* condition imposed by the theory on this set is that it be non-empty. In particular, the theory imposes no conditions whatever on the nature of the entities in this set. Ergo, it applies when the set contains real-world entities. QED.

If we need to argue about whether sets can contain real-world entities (as I recall we once did), I will refer you to Russell, Zermelo, Quine, and a host of other authorities – indeed almost everyone, including writers of textbooks, who has written extensively on the topic – but I hope we won’t need to go there again. 


> Note the title of Tarski's original paper (1933):  "The concept of
> truth in formalized languages."  For example, "Schnee ist weiß"
> is true if and only if snow is white.  But then he said that the issues
> about natural languages and the world are too vague and complex.

No, he said that natural languages were. (And he was right, in spite of valiant subsequent efforts by linguists.) Tarski never said that formal languages could not refer to reality. As we both know, his own running example was ‘snow is white’. 

> That paper only addressed the right-hand side (RHS) of mthworld.gif.
> For the LHS, see the informal philosophical paper by Tarski (1944):
> http://jfsowa.com/logic/tarski.pdf

There is nothing, in any of Tarski’s writings, to suggest that his conception was like your diagram. I challenge you to give an exact citation, if you believe otherwise. That he talked about formal languages and truth in one place, and about measurement and approximation in another place, is not evidence for this claim. 

But in any case, subsequent authors have most certainly allowed formal logics to refer to real-world things, while assuming the Tarskian model theory (or some technical variant of it) without any qualms or hesitation or qualifications. The most obvious being Carnap, but also all the work on axiomatic mereologies, and of course just about all of modern formal ontology-building. 


> PJH
> Your diagram is seriously misleading because it takes the matter
> of how a ‘model’ (in any sense) can be approximation to a reality
> – issues of degrees of precision, tolerance, approximation, accuracy
> and so forth – outside the semantic framework of formal ontologies
> and their semantics altogether.
> 
> I believe that the correct term is 'metalevel', not 'outside’.

No, it takes it outside. The metalevel would be the (hypothetical, mysterious, unrealized) semantic theory of the RHS model-to-real-world mapping. But the image makes clear that this would not be part of model theory, the semantic theory of the LHS formal ontology: that is the LHS mapping, the Tarskian interpretation. So the ontology itself *cannot possibly* talk about the RHS of the diagram. It cannot talk about the real world, because whatever it says can only be interpreted through the middle domain of dry abstractions; so it cannot talk about the relationship between that domain and something else, such as the real-world RHS. 

What is so frustrating to me in these discussions is that you seem to keep missing this obvious point, that your diagram shoots itself in the foot in this way, because we both know that ontologies *can* be created for talking about such matters as the distinction between a real-world quantity and an observable measurement of that quantity, the bounds on possible errors of measurements and so forth: the very stuff of the RHS of your diagram, in fact. (I know this in part because I have done it, as part of an effort to create an ontology of quantities and measures.) But on your account, any such ontology has to be some kind of illusion, since all its terms are obliged to refer to things in the central desert of mere formality, and can never reach across to the real world on the right.

> Some excerpts from Tarski (1944), sections 20, 21, and 22:
> The most natural and promising domain for the applications of
> theoretical semantics is clearly linguistics — the empirical study
> of natural languages...
> The relation between theoretical and descriptive semantics is analogous
> to that between pure and applied mathematics, or perhaps to that between
> theoretical and empirical physics... another important domain for
> possible applications of semantics is the methodology of science; this
> term is used here in a broad sense so as to embrace the theory of
> science in general...  The semantics of scientific language should be
> simply included as a part in the methodology of science... One of the
> main problems of the methodology of empirical science consists in
> establishing conditions under which an empirical theory or hypothesis
> should be regarded as acceptable...
> As regards the applicability of semantics to mathematical sciences and
> their methodology, i.e., to metamathematics, we are in a much more
> favorable position than in the case of empirical sciences.
> 
> Formal mathematics is the only field for which Tarski (1944) claimed
> that his definition of truth was directly applicable.

The first paragraph you cite, above, says the exact opposite.

> He didn't deny
> that it could be extended, but his discussion implied that extensions
> would be in the "methodology" -- i.e., metalevel.

Tarski was indeed much more optimistic about applying logic to mathematics than to more empirical fields. In this he was of course not alone. But nothing you cite here, or indeed you can cite, I believe, suggests that he thought it was impossible in principle, or that his semantic picture needed to be “extended” along the lines of your diagram. Montague, following Tarski, used his semantics directly on natural language without significant modification to the notion of interpretation. Kripke extended the interpretation structures to cover modal logics, but did not interpose any new mappings between the content of interpretations and reality, and all the extensive literature on the nature of possibilia or counterparts in modal logic has clearly assumed that the things in Kripkean universes are parts of the actual world, or of possible worlds. 


> PJH
> I have never seen anyone, including your good self, explain how
> we can even begin to talk about the proposed relationship between
> ‘formal models’ and reality, if our semantic theories – that is,
> model theories – stop before these matters can even be brought
> into their scope.
> 
> It's a two-step mapping:  In 1933, Tarski specified the RHS of
> mthworld.gif.  In 1944, he discussed the then current methodologies
> for the LHS and admitted that they weren't formal.

He did no such thing. As Tarskian scholarship, this is pure fantasy. You have retrofitted Tarski’s ideas onto your misleading diagram.

> I doubt that he
> would approve of making them formal by magic:  Waving your hand and
> declaring "Presto-zingo, my domain consists of things in the world.”

Why do you think that to claim to be talking about reality is to claim some kind of magic power? We all talk about reality much of the time. I daresay that some of the emails in this very discussion group have referred to reality on occasion. If I say that my ontology is about, say, family relationships among gerbils, I am not saying anything fundamentally different from someone who says his ontology is about homomorphisms of finite groups. It’s just a different subject matter.

[**](Note, I do not need to have an axiomatic *definition* of ‘gerbil’ in order to talk about gerbils. Do you feel that any ontological claim has to be justified mathematically, by providing such definitions? Because that misapprehension could account for your strange views on this topic.)

> That statement is OK as a hypothesis, but it's not an observation.

It is neither of these. It is true by fiat. If I am the author of the ontology, it is about whatever I say it is about. Now, it might of course be wrong, or confused, etc. – I am not omniscient when it comes to gerbils, no doubt –- but what it is *referring to* is my decision to make. 

If you feel this is hubris, ask yourself: what makes, say, the EPISTLE framework be about fluids and pipes and so on? Is it because Matthew West managed to reduce the oil and gas industries to a mathematical theory, a kind of Principia Processia?  Or is it about that because the authors said it was, and the users find it useful to use it in that way? 


> If you want a one-step mapping, look at fuzzy logic.

I really would rather not, particularly as it has nothing whatever to do with what we are talking about. 

> ... I don't believe that the LHS can ever be
> completely formal, because no system of measurement can be perfect.

You have said things like this previously, and I really don’t understand why you think this is even remotely relevant. Measurement has got nothing whatever to do with reference. I can refer to Julius Caesar – I just did – without measuring him, indeed without measuring anything. If we could only refer to things that were defined by measurement, we would all still be living on Laputa. 

Pat



> MAW
> a cool formalisation of symbol grounding
> http://www.benjaminjohnston.com.au/papers/formal.pdf
> 
> Symbol grounding addresses the LHS of mthworld.gif.  Page 5
> of the article discusses the informal issues:
> Representation units may or may not have any particular semantic
> interpretation, and may be manipulated by rules (such as interaction
> with the environment or hyper-computational systems) that are beyond
> formal definition.
> 
> Yes.  For humans, symbols are grounded by what Peirce called
> "the gates" of perception and purposive action.  Methods of
> pattern recognition and robotics address those two gates, but
> none of them can be completely formal at the points of contact.

What do you mean by “completely formal”? (See my aside comment, [**] above.)

John F Sowa

unread,
Feb 4, 2018, 10:51:58 AM2/4/18
to Pat Hayes, ontolog-forum
On 2/2/2018 6:43 PM, Pat Hayes wrote:
> Tarski’s account of truth conditions, now usually referred to
> as ‘model theory’, characterizes the domain of an interpretation
> as being a non-empty set containing the referents of symbols.
> The *only* condition imposed by the theory on this set is that
> it be non-empty... Ergo, it applies when the set contains real-
> world entities. QED.

Nothing prevents you from having a set that consists of physical
entities. I'm just saying that such a set could not be the domain
of a Tarski-style model, but it might be isomorphic to the domain.

If you derive a model *for* a set of axioms, that model will consist
of mathematical objects. If you derive it *from* observations of
the world, you get data that you can store in a computer. It might
resemble, represent, or be analogous to something in the world,
but a set of data is not physical.

As a simple example, let's take an axiom for which we don't
need a computer or even pencil and paper:

Axiom: There are three people in Pat Hayes' living room.

If you are one of them, I'm sure that you could verify that the
axiom is true of the current state just by looking. Testing that
axiom would be trivial.
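
For what it's worth, the check can be spelled out as a tiny Tarski-style
evaluation (a sketch of mine with made-up names, not a claim about what
anyone's head does): pick a finite interpretation -- a domain, an
extension for 'person', an extension for 'in' -- and evaluate the
sentence against it.

# Toy interpretation; the individual names are invented for illustration.
domain = {'pat', 'guest1', 'guest2', 'living_room'}
person = {'pat', 'guest1', 'guest2'}              # extension of Person(x)
in_rel = {('pat', 'living_room'),                 # extension of In(x, y)
          ('guest1', 'living_room'),
          ('guest2', 'living_room')}

def axiom_holds():
    # "There are three people in Pat Hayes' living room."
    occupants = {x for x in domain
                 if x in person and (x, 'living_room') in in_rel}
    return len(occupants) == 3

print(axiom_holds())   # True in this interpretation; a two-person room would make it False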

But there is something between that axiom and the physical situation:
the occipital lobes in the back of your head, where a "mental image"
of the room and things in it is formed. There are also the temporal
lobes that recognize some things called people, the parietal lobes
for matching patterns, and the frontal lobes for reasoning, counting,
and saying "Yes!"

As you said above, "the domain of an interpretation [is] a non-empty
set containing the referents of symbols." For that example, the set
of referents are the people and the room. But the image of those
referents and the process of verifying the axiom are performed on
patterns in your head. Psychologists such as Johnson-Laird would
call those patterns a "mental model".

For a more complex theory and situation, you would need a more
elaborate specification, aided by pencil and paper or a computer.
That spec would be more specific than the axioms. It would contain
"data", such as diagrams, measurements, lists of parts and subparts...

Whether you call it a mental model, an engineering model, or a
Tarski-style model, there is always something data-like between
your axioms and the physical things or situation. QED.

John

Pat Hayes

unread,
Feb 4, 2018, 12:26:31 PM2/4/18
to John F. Sowa, ontolog-forum

On Feb 4, 2018, at 9:51 AM, John F Sowa <so...@bestweb.net> wrote:

> On 2/2/2018 6:43 PM, Pat Hayes wrote:
> Tarski’s account of truth conditions, now usually referred to
> as ‘model theory’, characterizes the domain of an interpretation
> as being a non-empty set containing the referents of symbols.
> The *only* condition imposed by the theory on this set is that
> it be non-empty... Ergo, it applies when the set contains real-
> world entities. QED.
> 
> Nothing prevents you from having a set that consists of physical
> entities.  I'm just saying that such a set could not be the domain
> of a Tarski-style model, but it might be isomorphic to the domain.

I know you are saying that, but you are wrong. There is no such restriction anywhere in any account of model theory. In fact, if one structure is isomorphic to another, and one of them is a Tarskian interpretation, then so is the other, by definition of “Tarskian interpretation”. 

(What do you think the universes of a Tarski-style model are restricted to contain? Not real things, so… unreal things? Spots in a diagram? Nodes of a graph? What? )


> If you derive a model *for* a set of axioms, that model will consist
> of mathematical objects.

That would be false if it made sense. What is a “mathematical object”? Mathematics can also describe real things, which is why it is so useful for people who deal with reality, like physicists and engineers. 

> If you derive it *from* observations of
> the world, you get data that you can store in a computer.  It might
> resemble, represent, or be analogous to something in the world,
> but a set of data is not physical.

The observations are not physical, but they are (parts of) descriptions of things. The things they refer to, that they are descriptions of, can be physical. That is often the very point of making the measurements in the first place. When I note, with regret, that I now weigh 15lb more than I once did, it is the increase in girth of my all too, too physical, waist that I am concerned about. 

> As a simple example, let's take an axiom for which we don't
> need a computer or even pencil and paper:
> 
> Axiom:  There are three people in Pat Hayes' living room.
> 
> If you are one of them, I'm sure that you could verify that the
> axiom is true of the current state just by looking.  Testing that
> axiom would be trivial.

Indeed. A good example. But I note in passing that we aren’t talking about *testing* anything, in this argument we are having. 

> But there is something between that axiom and the physical situation:
> the occipital lobes in the back of your head, where a "mental image"
> of the room and things in it is formed.  There are also the temporal
> lobes that recognize some things called people, the parietal lobes
> for matching patterns, and the frontal lobes for reasoning, counting,
> and saying "Yes!”

All true, and all completely irrelevant to what we are talking about. Model theory is not a theory of perception or neurology. It simply talks about the relation between symbols – in this case, parts of English sentences – and whatever those symbols refer to – in this case, three people in my living room. Real people in a real room. 

Now, a more elaborate system of symbols might indeed describe not only my living room but also the goings-on inside my head that result in my visual perceptions, etc.. That would be a different (and far more elaborate) sentence, and the model theory would apply to it just as well. But as I say, it would be a DIFFERENT sentence, so it is not relevant to our discussion of your simple sentence and its interpretation.

As you said above, "the domain of an interpretation [is] a non-empty
set containing the referents of symbols."  For that example, the set
of referents are the people and the room.

Exactly. This is all I have been saying: the referents are parts of reality.

> But the image of those
> referents and the process of verifying the axiom are performed on
> patterns in your head.  Psychologists such as Johnson-Laird would
> call those patterns a "mental model”.

That might be true – I don’t think Johnson-Laird has this right, myself, but whatever – but it has nothing to do with what we are talking about. Even Johnson-Laird would not claim that when we describe what we see, that we are *referring to* the images in our heads. And reference is not a *process* of verifying: it is the mapping that is thereby verified (or not, as the case may be.)

> For a more complex theory and situation, you would need a more
> elaborate specification, aided by pencil and paper or a computer.
> That spec would be more specific than the axioms.  It would contain
> "data", such as diagrams, measurements, lists of parts and subparts...
> 
> Whether you call it a mental model, an engineering model, or a
> Tarski-style model, there is always something data-like between
> your axioms and the physical things or situation.

You make a conceptual mistake here by putting Tarskian models – let me write Tmodels – into the same category as engineering models – Emodels. This is just a (bad) pun on the word “model”. A Tmodel of some sentences is an interpretation of them that makes them true. It is not a model in the sense of being a simplified or scaled-down simulacrum of something, like a model of a bridge. And if we use the word “model” to encompass an Emodel then things get even more terminologically confused, since an Emodel comprises *symbolic descriptions* – as you say, data – of the reality being modelled, so in this case the 

Emodel <—> reality being modeled

relationship is exactly like the 

Description <—> Tmodel 

mapping, and the meaning of the word “model” has almost completely inverted. To say that the (actual) bridge is correctly modeled by the Emodel description – that is, that it is correct when understood as referring to that bridge –  is *exactly the same claim* as saying that the actual bridge is a Tmodel of the assertions which comprise the Emodel data; that is, that those assertions are true under that interpretation. 

When you say ‘between’, you might mean several things. If you mean, there is some kind of measurement process which supports any claim of reference, then I might agree, at least provisionally.[**] Or at any rate, that is something we might discuss in greater depth. But if you mean that any reference mapping must (therefore?) be decomposable into a functional composition of what we might call a reference-1 map to some abstract domain, and a reference-2 map from that abstract domain to the real world, and model theory can talk only about reference-1 – which is what your diagram clearly asserts – then I sharply and firmly disagree. And nothing in these or any other emails has actually made a coherent argument for that second decomposition-of-reference claim. So:

> QED.

Nope. Non demonstratum quod erat demonstrandum.

Pat

[**] Actually on more thought I won’t agree, but that is another discussion. Might be more productive than this one, though :-)


> John



Nicola Guarino

unread,
Feb 5, 2018, 6:02:45 AM2/5/18
to ontolo...@googlegroups.com
Folks,

I can rarely afford the luxury of participating in Ontolog discussions, since they often tend to explode and my time is scarce… However, this time I can’t resist offering my two cents to this debate.

I think that, in a sense, John and Pat are both correct.

Pat is absolutely right in saying that the *domain* of interpretation of Tarski-style models can include real things. No question about that.

On the other hand, John is right in insisting on the distinction between a Tarskian *model* and the physical world.

These two claims concern two different things: a domain and a model. A Tarskian model includes a domain plus an interpretation function. In my view, it is this interpretation function which makes the difference between a model and the physical world.

Consider the example described in the attached slide, which I presented a number of times in formal ontology courses. The simple theory shown in the example is intended to axiomatize the relation “on” holding among blocks. It just says that the relation is asymmetric and anti-transitive, which is correct, so that all the models of this theory are actually intended. However, the theory can’t distinguish between the actual real-world situations shown to the right. This means that the interpretation function constrained by the theory is very different from the way people actually interpret two physical blocks as belonging to the extension of the ‘on’ relation.

As a result, the small ontology shown in the example is precise but not very accurate, since its intended models collapse intended and non-intended real-world situations. There are two ways to make it more accurate:

a) extending the domain of discourse in order to include other entities besides blocks (say, regions of space that may or may not be occupied by blocks).
b) extending the signature of the language in order to be able to talk of other primitive relations (say, topological connection among blocks, or geometrical arrangement of blocks)

In both cases, if we do things properly, the models of the resulting theory will be closer to the physical world, in the sense that the interpretation function constrained by the theory will be closer to the one actually used by competent English speakers using the ‘on’ preposition.
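
To make the accuracy point concrete, here is a small sketch (mine, not the attached slide): over three blocks, the two axioms admit an extension for ‘on’ that no arrangement of physical blocks could realize — a circular stack — so precision alone does not buy accuracy.

from itertools import product

blocks = ['a', 'b', 'c']
pairs = [(x, y) for x in blocks for y in blocks]

def asymmetric(on):
    return all(not ((x, y) in on and (y, x) in on) for x in blocks for y in blocks)

def anti_transitive(on):
    return all(not ((x, y) in on and (y, z) in on and (x, z) in on)
               for x in blocks for y in blocks for z in blocks)

# Enumerate every candidate extension of 'on' and keep those satisfying the theory.
models = [frozenset(p for p, keep in zip(pairs, bits) if keep)
          for bits in product([False, True], repeat=len(pairs))]
models = [on for on in models if asymmetric(on) and anti_transitive(on)]

print(len(models))   # far more models than physically realizable stackings

# One of them is a circular "stack" that no real blocks could form:
cycle = frozenset({('a', 'b'), ('b', 'c'), ('c', 'a')})
print(cycle in models)   # True: the theory is precise, but not accurate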

I hope this helps…

Cheers,

Nicola


Blocks.pdf

John F Sowa

unread,
Feb 5, 2018, 10:12:17 AM2/5/18
to Pat Hayes, ontolog-forum, Nicola Guarino'
Nicola and Pat,

Nicola
> I think that, in a sense, John and Pat are both correct...
> These two claims concern two different things: a domain and a model.
> A Tarskian model includes a domain plus an interpretation function.
> In my view, it is this interpretation function which makes the
> difference between a model and the physical world.

Thanks. That point is consistent with Tarski (excerpts below)
and with the following statements by Pat and me:

>> [John] Nothing prevents you from having a set that consists
>> of physical entities. I'm just saying that such a set could
>> not be the domain of a Tarski-style model, but it might be
>> isomorphic to the domain.
>
> [Pat] I know you are saying that, but you are wrong. There is
> no such restriction anywhere in any account of model theory.

I agree with Pat's last sentence: Nothing in Tarski's definition
restricts the domain of an interpretation. In fact, if a formal
ontology is about the real world, the variables in the logic must
refer to the world.

But the interpretation function, which maps sentences in the theory
to truth values, must have some formal method for accessing the
elements of the domain, their properties, and their relationships
to other elements. (I'm using the word 'relationship' to mean
"one instance of a relation" -- for example, one RDF triple or
one row of a relational DB.)

If those elements, properties, and relationships are represented
by names or other symbols, they can be stored in a database and be
indexed by character strings. But no logician would trudge through
a swamp to evaluate a relationship.

Tarski (1944) mentioned "mathematics and theoretical physics" as
suitable fields for applying formal semantics. (Section 6 below)

In Section 20, he discussed the difference between "empirical research"
and "theoretical semantics". The difference is that the empirical
(experimental) research is "concerned only with natural languages
and that theoretical semantics applies to these languages only with
certain approximation" -- as my mthworld.gif diagram shows.

In fact, experimental physicists have a rule: Never allow a
theoretician to walk into your laboratory. As soon as they do,
everything breaks. Niels Bohr was a very great theoretician.
As proof, when he took a train from Copenhagen to Zurich,
the minute his train passed through Göttingen, an experiment
at the university blew up.

In short, if you don't want your interpretation function to blow up,
restrict the elements of the domain to symbols, for which some
surveyor or experimenter determines the properties and relationships.

John
_______________________________________________________________________

Excerpts from Tarski (1944) http://jfsowa.com/logic/tarski.htm

Section 6

If in specifying the structure of a language we refer exclusively
to the form of the expressions involved, the language is said to be
formalized... the field of application of these languages is rather
comprehensive; we are able, theoretically, to develop in them various
branches of science, for instance, mathematics and theoretical physics.

Section 20

The fact that in empirical research we are concerned only with natural
languages and that theoretical semantics applies to these languages only
with certain approximation, does not affect the problem essentially.
However, it has undoubtedly this effect that progress in semantics will
have but a delayed and somewhat limited influence in this field. The
situation with which we are confronted here does not differ essentially
from that which arises when we apply laws of logic to arguments in
everyday life — or, generally, when we attempt to apply a theoretical
science to empirical problems...

The most natural and promising domain for the applications of
theoretical semantics is clearly linguistics — the empirical study of
natural languages. Certain parts of this science are even referred to as
"semantics," sometimes with an additional qualification...

It is perhaps unnecessary to say that semantics cannot find any direct
applications in natural sciences such as physics, biology, etc.; for in
none of these sciences are we concerned with linguistic phenomena, and
even less with semantic relations between linguistic expressions and
objects to which these expressions refer. We shall see, however, in the
next section that semantics may have a kind of indirect influence even
on those sciences in which semantic notions are not directly involved.

Section 21

Besides linguistics, another important domain for possible applications
of semantics is the methodology of science... The semantics of
scientific language should be simply included as a part in the
methodology of science...

One of the main problems of the methodology of empirical science
consists in establishing conditions under which an empirical theory or
hypothesis should be regarded as acceptable. This notion of
acceptability must be relativized to a given stage of the development of
a science (or to a given amount of presupposed knowledge)...

Section 22

As regards the applicability of semantics to mathematical sciences and
their methodology, i.e., to metamathematics, we are in a much more
favorable position than in the case of empirical sciences...

Pat Hayes

unread,
Feb 5, 2018, 11:11:40 AM2/5/18
to ontolog-forum, Nicola Guarino
Hi Nicola

Yes, of course the interpretation mapping itself is not part of the world being described: it is the semantic mapping from language expressions into that world. I hope nobody interpreted any thing I have said as disagreeing with this.

I believe I can summarize your point as follows: small theories (small ontologies, with only a few axioms) cannot fully capture a complicated world, because they allow non-standard models, i.e. interpretations which satisfy the theory but are not correct, i.e. they are not parts of any intended world, or the names of the theory are not correctly interpreted. This is of course true, and IMO one of the most useful methodological aspects of Tarskian model theory when developing ontologies. As I suggested in the ‘naive physics manifesto’, longer ago than I care to remember, an excellent way to critique a proposed ontology is to deliberately look for non-standard models, as they vividly reveal gaps in what one might call ‘coverage’ of the axioms. (I also used the blocks world as an example, as we all did :-) And then, as you say, the ontology can be improved by filling the gaps in its descriptive powers which are thus revealed, by enriching its expressibility. (As a later example, the ‘time catalog’ noted that all temporal ontologies (that did not explicitly mention durations of intervals) had all their truths preserved when the entire infinite time-line is projected into the unit interval, so they could not possibly account for the future being unbounded. I learned that trick from van Benthem, by the way.)
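
(A quick check of that projection trick, in Python — not van Benthem's construction, just the order-preservation fact it relies on: any axiom whose only temporal primitive is 'before' is preserved when the whole time-line is squashed order-isomorphically into a bounded interval, so such a theory cannot force time to be unbounded.)

import random

def squash(t):
    # Order-preserving bijection from the whole real line onto the unit interval (0, 1).
    return 0.5 * (t / (1 + abs(t)) + 1)

random.seed(0)
for _ in range(10000):
    t1, t2 = random.uniform(-1e6, 1e6), random.uniform(-1e6, 1e6)
    # The bounded image agrees with the original ordering in every sampled case.
    assert (t1 < t2) == (squash(t1) < squash(t2))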

But, in order to do this kind of stuff, we are assuming that our formalism, the language of our ontology sentences, has a Tarskian semantics in the first place. Looking for nonstandard models is a quintessential application of model theory. Without that theory, the whole idea of a model, and hence of a nonstandard model, does not make sense. Which is why I defend it whenever it is attacked. Model theory is not just correct: it is an essential weapon in our meta-theoretic armory.

Best wishes

Pat


John F Sowa

unread,
Feb 5, 2018, 11:27:43 AM2/5/18
to ontolo...@googlegroups.com
On 2/5/2018 11:11 AM, Pat Hayes wrote:
> I believe I can summarize your [Nicola's] point as follows:
> small theories (small ontologies, with only a few axioms)
> cannot fully capture a complicated world, because they allow
> non-standard models...

That's not the point that Tarski meant when he discussed the
difference between theoretical physics and "empirical science".

It's not the point I was talking about, which I believe is
close to Tarski's distinction.

For more detail about my interpretation and its relation
to what Tarski and Nicola wrote, see my previous note
from 10:12 this morning.

John


Pat Hayes

unread,
Feb 5, 2018, 11:50:00 AM2/5/18
to ontolog-forum, John F. Sowa, Nicola Guarino'
OK John, last blast from me on this topic:

> On Feb 5, 2018, at 9:12 AM, John F Sowa <so...@bestweb.net> wrote:
>
> Nicola and Pat,
>
> Nicola
>> I think that, in a sense, John and Pat are both correct...
>> These two claims concern two different things: a domain and a model.
>> A Tarskian model includes a domain plus an interpretation function.
>> In my view, it is this interpretation function which makes the
>> difference between a model and the physical world.
>
> Thanks. That point is consistent with Tarski (excerpts below)
> and with the following statements by Pat and me:
>
>>> [John] Nothing prevents you from having a set that consists
>>> of physical entities. I'm just saying that such a set could
>>> not be the domain of a Tarski-style model, but it might be
>>> isomorphic to the domain.
>> [Pat] I know you are saying that, but you are wrong. There is no such restriction anywhere in any account of model theory.
>
> I agree with Pat's last sentence: Nothing in Tarski's definition
> restricts the domain of an interpretation. In fact, if a formal
> ontology is about the real world, the variables in the logic must
> refer to the world.
>
> But the interpretation function, which maps sentences in the theory
> to truth values, must have some formal method for accessing the
> elements of the domain, their properties, and their relationships
> to other elements.

The interpretation function is a mathematical construct in a semantic metatheory. It does not need to have any method, formal or otherwise, to “do” anything. It need not even be a computable function. We aren’t talking about computer science here: there isn't any “accessing” involved in the semantic theory.

> (I'm using the word 'relationship' to mean
> "one instance of a relation" -- for example, one RDF triple or
> one row of a relational DB.)
>
> If those elements, properties, and relationships are represented
> by names or other symbols, they can be stored in a database and be
> indexed by character strings. But no logician would trudge through
> a swamp to evaluate a relationship.
>
> Tarski (1944) mentioned "mathematics and theoretical physics" as
> suitable fields for applying formal semantics. (Section 6 below)
>
> In Section 20, he discussed the difference between "empirical research"
> and "theoretical semantics". The difference is that the empirical
> (experimental) research is "concerned only with natural languages
> and that theoretical semantics applies to these languages only with
> certain approximation" -- as my mthworld.gif diagram shows.

What Tarski means here is that the natural language involved is itself too complicated for his semantic theory to apply to it exactly. Natural language, even when used in science, has semantic content which cannot be fully captured by simple model theory. He might well have been right about that. But that is not what your diagram shows.
> In fact, experimental physicists have a rule: Never allow a
> theoretician to walk into your laboratory. As soon as they do,
> everything breaks. Neils Bohr was a very great theoretician.
> As proof, when he took a train from Copenhagen to Zurich,
> the minute his train passed through Göttingen, an experiment
> at the university blew up.

:-) But try telling that story, and its moral, to Rutherford or Lise Meitner.

> In short, if you don't want your interpretation function to blow up,
> restrict the elements of the domain to symbols, for which some
> surveyor or experimenter determines the properties and relationships.

Functions don’t blow up. And even if they did, you can just use a different one. After all, there are infinitely many interpretations.

Pat

>
> John
> _______________________________________________________________________
>
> Excerpts from Tarski (1944) http://jfsowa.com/logic/tarski.htm
>
> Section 6
>
> If in specifying the structure of a language we refer exclusively
> to the form of the expressions involved, the language is said to be formalized... the field of application of these languages is rather comprehensive; we are able, theoretically, to develop in them various branches of science, for instance, mathematics and theoretical physics.
>
> Section 20
>
> The fact that in empirical research we are concerned only with natural languages and that theoretical semantics applies to these languages only with certain approximation, does not affect the problem essentially. However, it has undoubtedly this effect that progress in semantics will have but a delayed and somewhat limited influence in this field. The situation with which we are confronted here does not differ essentially from that which arises when we apply laws of logic to arguments in everyday life — or, generally, when we attempt to apply a theoretical science to empirical problems...
>
> The most natural and promising domain for the applications of theoretical semantics is clearly linguistics — the empirical study of natural languages. Certain parts of this science are even referred to as "semantics," sometimes with an additional qualification...
>
> It is perhaps unnecessary to say that semantics cannot find any direct applications in natural sciences such as physics, biology, etc.; for in none of these sciences are we concerned with linguistic phenomena, and even less with semantic relations between linguistic expressions and objects to which these expressions refer. We shall see, however, in the next section that semantics may have a kind of indirect influence even on those sciences in which semantic notions are not directly involved.
>
> Section 21
>
> Besides linguistics, another important domain for possible applications of semantics is the methodology of science... The semantics of scientific language should be simply included as a part in the methodology of science...
>
> One of the main problems of the methodology of empirical science consists in establishing conditions under which an empirical theory or hypothesis should be regarded as acceptable. This notion of acceptability must be relativized to a given stage of the development of a science (or to a given amount of presupposed knowledge)...
>
> Section 22
>
> As regards the applicability of semantics to mathematical sciences and their methodology, i.e., to metamathematics, we are in a much more favorable position than in the case of empirical sciences...
>

Gary Berg-Cross

unread,
Feb 5, 2018, 12:30:23 PM2/5/18
to ontolog-forum, Nicola Guarino
Pat, John, Nicola:

To observe the obvious: from time to time there are phrases used in these discussions ("mathematical entities" is one, I think) where there is substantial potential for lines of argument to diverge, depending on the assumptions they come with.

But I did wonder if a big one is the meaning of the phrase "semantic mapping" in Pat's sentence below:

Pat > Yes, of course the interpretation mapping itself is not part of the world being described: it is the semantic mapping from language expressions into that world. I hope nobody interpreted any thing I have said as disagreeing with this.

Is this a mapping from the formal model or from a mental model?  It seems that John allows for both meanings of what the mapping is from, while Pat is using it more restrictively, for the formal model.

Gary Berg-Cross Ph.D.  
Independent Consultant
Potomac, MD


John F Sowa

unread,
Feb 5, 2018, 11:32:08 PM2/5/18
to ontolo...@googlegroups.com
Pat, Nicola, and Gary,

I believe that the following "blast" is correct.

Pat
> The interpretation function is a mathematical construct in a semantic
> metatheory. It does not need to have any method, formal or otherwise,
> to “do” anything. It need not even be a computable function. We aren’t
> talking about computer science here: there isn't any “accessing”
> involved in the semantic theory.

Yes. A model may be infinite. But it must be formally specified
by the kinds of mathematical methods used to specify possibly infinite
sets and structures -- for example, the method of deriving a model from
a set of axioms. But that derivation cannot give you a physical set.
At best, it will give you a set of symbols that may refer to the
elements in a physical set.

>> [JFS] Tarski (1944) mentioned "mathematics and theoretical physics"
>> as suitable fields for applying formal semantics. (Section 6)
>>
>> In Section 20, he discussed the difference between "empirical research"
>> and "theoretical semantics". The difference is that the empirical
>> (experimental) research is "concerned only with natural languages
>> and that theoretical semantics applies to these languages only with
>> certain approximation" -- as my mthworld.gif diagram shows.
>
> [Pat] What Tarski means here is that the natural language involved is
> itself too complicated for his semantic theory to apply to it exactly.
> Natural language, even when used in science, has semantic content which
> cannot be fully captured by simple model theory.

No. That is not what Tarski said or implied. In fact, Tarski said
that NLs were *easier* to deal with than "natural sciences, such as
physics, biology, etc." Some excerpts:

> [Section 20] The most natural and promising domain for the application
> of theoretical semantics is clearly linguistics...
>
> It is perhaps unnecessary to say that semantics cannot find any direct
> applications in natural sciences such as physics, biology, etc.; for
> in none of these sciences are we concerned with linguistic phenomena...
>
> [Section 21] Besides linguistics, another important domain for
> possible applications of semantics is the methodology of science;
> this term is used here in a broad sense so as to embrace the theory
> of science in general. Independent of whether a science is conceived
> merely as a system of statements or as a totality of certain statements
> and human activities, the study of scientific language constitutes an
> essential part of the methodological discussion of a science...
>
> One of the main problems of the methodology of empirical science
> consists in establishing conditions under which an empirical theory
> or hypothesis should be regarded as acceptable.

That methodology involves metalanguage about how to derive information
by observation and experimentation and using that info to specify the
axioms for which a model of the world may be derived.

Now let me go back to the following claim from a week ago

Pat
> Your diagram is seriously misleading because it takes the matter
> of how a ‘model’ (in any sense) can be approximation to a reality
> – issues of degrees of precision, tolerance, approximation, accuracy
> and so forth – outside the semantic framework of formal ontologies
> and their semantics altogether.

No. See the above points by Tarski (1944): Theoretical physics is
a suitable domain for formal semantics, but "It is perhaps unnecessary
to say that semantics cannot find any direct applications in natural
sciences such as physics, biology, etc."

By "natural science", he means the practice of observing, analyzing,
formulating hypotheses, testing them, and repeat. That is the
"methodology of science". The results of that methodology are the
theories of theoretical physics, which is suitable for formal semantics.

Pat
> [How can we] even begin to talk about the proposed relationship
> between ‘formal models’ and reality, if our semantic theories –
> that is, model theories – stop before these matters can even be
> brought into their scope.

Very simply. Tarski used ordinary language supplemented with GOFFOL
(Good Old Fashioned FOL) to present his publications about model theory.
In his 1944 paper, he talked about these issues in a very informal way.
For related comments by Halmos, Einstein, Polya, Euler, and Laplace,
see slides 3 to 8 of http://jfsowa.com/talks/ppe.pdf

Nicola
> The simple theory shown in the example is intended to axiomatize
> the relation “on” holding among blocks... However, the theory can’t
> distinguish between the actual real-world situations shown to the right.

That example illustrates the points that Tarski made about empirical
science.  At that level of detail, the English description is just as
precise as the FOL formulas. But the real world is a continuum that
allows uncountably many options. The vagueness is not the fault of NLs,
but of any attempt to limit continuity to discrete options.

Gary
> Is this a mapping from the formal model or from a mental model?
> It seems that John allows for both meanings from which the mapping
> is possible and Pat is using this more restrictively to the formal
> model.

I'll let Pat answer for himself. I'll go with the mathematician
Paul Halmos who said, in a passage I quoted on slide 3 of ppe.pdf,
> Mathematics — this may surprise or shock some — is never deductive
> in its creation. The mathematician at work makes vague guesses,
> visualizes broad generalizations... the deductive stage, writing
> the result down, and writing its rigorous proof are relatively
> trivial once the real insight arrives.

I would say that what goes on in the mathematician's head could be
called a "mental model". See the later slides in ppe.pdf.

Re mthworld.gif: After rereading Tarski (1944), I'm more convinced
than ever that my diagram is consistent with that article:
http://jfsowa.com/logic/tarski.htm

Pat, if you're not convinced, please quote any passage by Tarski
that conflicts with the diagram.

John

Pat Hayes

unread,
Feb 6, 2018, 8:15:48 PM2/6/18
to ontolog-forum, Gary Berg-Cross

On Feb 5, 2018, at 11:29 AM, Gary Berg-Cross <gberg...@gmail.com> wrote:

> Pat, John, Nicola:
> 
> To observe the obvious: from time to time there are phrases used in these discussions ("mathematical entities" is one, I think) where there is substantial potential for lines of argument to diverge, depending on the assumptions they come with.

True. 


> But I did wonder if a big one is the meaning of the phrase "semantic mapping" in Pat's sentence below:
> 
> Pat > Yes, of course the interpretation mapping itself is not part of the world being described: it is the semantic mapping from language expressions into that world. I hope nobody interpreted any thing I have said as disagreeing with this.
> 
> Is this a mapping from the formal model or from a mental model?

I have no idea how to answer that question. I really have no idea what the phrase “formal model” means. It is not one I would ever use. And I don’t think that the mentality or otherwise of a model was germane to the debate John and I were engaged in. 

As used in the phrase “model theory” AKA “Tarskian semantics”, it is a mapping from symbols – formal or natural, mental or physical, it does not matter, as long as they are symbols – to whatever those symbols are understood to denote. That is how I was using it in the passage quoted above, and I think how John was using it, also.  (Our debate concerned how ‘direct’ this mapping can or should be, rather than what it was a mapping from.)

Some psychologists, many AI people and others are working within a framework in which mental structures are presumed to be symbolic in this way (the “language of thought”) so the semantics would apply to mental symbols in this case. AI in particular has a strong tradition of ‘knowledge representation’ which uses formal-logical-style notations to encode knowledge, which is of course where the idea of formalized ontologies originated. Other people wish to apply the semantic theory only to external languages or notations, and some of them only to formalized such external notations, still others only to formalized languages of mathematics. But the semantic theory itself, which John and I were arguing about, applies robustly in all these cases. It has been applied to natural language by Montague and his followers, defining an entire sub-field within linguistics; to more elaborate formal notations by Kripke, van Benthem, Dana Scott and many other people, entire libraries-full of them; to diagrams and such pictorial notations as maps by Barwise and others, including myself. Applied to mathematical languages, it has been elaborated into a major field within mathematics itself, and is used as a tool to establish important results in set theory and other fields. It is by far the most robustly successful semantic framework ever devised. In fact, I would claim that it is in effect the only one: all the apparent alternatives are simply variations on the same theme.

> It seems that John allows for both meanings of what the mapping is from, while Pat is using it more restrictively, for the formal model.

I am pretty sure that I am the least restrictivising party in this debate :-)

Pat




Pat Hayes

unread,
Feb 6, 2018, 8:37:20 PM2/6/18
to ontolo...@googlegroups.com


> On Feb 5, 2018, at 10:27 AM, John F Sowa <so...@bestweb.net> wrote:
>
> On 2/5/2018 11:11 AM, Pat Hayes wrote:
>> I believe I can summarize your [Nicola's] point as follows:
>> small theories (small ontologies, with only a few axioms)
>> cannot fully capture a complicated world, because they allow
>> non-standard models...
>
> That's not the point that Tarski meant when he discussed the
> difference between theoretical physics and "empirical science”.

I agree, it is not. I was replying to Nicola.
>
> It's not the point I was talking about, which I believe is
> close to Tarski's distinction.

And I disagree. See my response. But in any case, fine-pointing Tarskian scholarship is not really useful here, since the field that he founded has now been developed and applied in all kinds of ways that he may not have considered.

Pat

>
> For more detail about my interpretation and its relation
> to what Tarski and Nicola wrote, see my previous note
> from 10:12 this morning.
>
> John
>
>

Pat Hayes

unread,
Feb 7, 2018, 1:48:50 AM2/7/18
to ontolog-forum, John F. Sowa

On Feb 5, 2018, at 10:32 PM, John F Sowa <so...@bestweb.net> wrote:

> Pat, Nicola, and Gary,
> 
> I believe that the following “blast"

I prefer the term, “observation” :-)

> is correct.

> Pat
> The interpretation function is a mathematical construct in a semantic
> metatheory. It does not need to have any method, formal or otherwise,
> to “do” anything. It need not even be a computable function. We aren’t
> talking about computer science here: there isn't any “accessing”
> involved in the semantic theory.
> 
> Yes.  A model may be infinite.

? True, but I fail to see what this has to do with what we are talking about. (Did I mention infinity? Is infinity relevant to anything you or I have said so far? I am quite happy to restrict the discussion to finite interpretations, if you prefer.)

> But it must be formally specified
> by the kinds of mathematical methods

No, it does not have to be ‘formally specified’. Take your own example of the three people in my living room. One interpretation of that sentence has a universe comprising me, my wife, and our actual living room, and a relation of containment, and the property of being human, with the obvious mappings. That is enough: this is an interpretation, and it makes your sentence false, because as a matter of fact at that time, we were the only people in here. I didn't use any strange “mathematical methods” to describe this interpretation, but (1) it was one, and (2) it was enough to show that your sample sentence was not logically true (which of course we knew already, but..) 

used to specify possibly infinite
sets and structures -- for example, the method of deriving a model from
a set of axioms.

I presume you mean the Herbrand interpretations.

 But that derivation cannot give you a physical set.

THAT one won’t, indeed. Herbrand interpretations are not physical. So?

At best, it will give you a set of symbols that may refer to the
elements in a physical set.

[JFS] Tarski (1944) mentioned "mathematics and theoretical physics"
as suitable fields for applying formal semantics.  (Section 6)
In Section 20, he discussed the difference between "empirical research"
and "theoretical semantics".  The difference is that the empirical
(experimental) research is "concerned only with natural languages
and that theoretical semantics applies to these languages only with
certain approximation" -- as my mthworld.gif diagram shows.

[Pat] What Tarski means here is that the natural language involved is
itself too complicated for his semantic theory to apply to it exactly.
Natural language, even when used in science, has semantic content which
cannot be fully captured by simple model theory.

No.  That is not what Tarski said or implied.  In fact, Tarski said
that NLs were *easier* to deal with than "natural sciences, such as
physics, biology, etc."  Some excerpts:

These do not say that NL is ‘easier’ than anything, only that it is the most obvious place to apply a theory of semantics. Which is of course obviously correct. 


[Section 20] The most natural and promising domain for the application
of theoretical semantics is clearly linguistics...
It is perhaps unnecessary to say that semantics cannot find any direct
applications in natural sciences such as physics, biology, etc.; for
in none of these sciences are we concerned with linguistic phenomena...
[Section 21] Besides linguistics, another important domain for
possible applications of semantics is the methodology of science;
this term is used here in a broad sense so as to embrace the theory
of science in general. Independent of whether a science is conceived
merely as a system of statements or as a totality of certain statements
and human activities, the study of scientific language constitutes an
essential part of the methodological discussion of a science...
One of the main problems of the methodology of empirical science
consists in establishing conditions under which an empirical theory
or hypothesis should be regarded as acceptable.

That methodology involves metalanguage about how to derive information
by observation and experimentation, and how to use that information to
specify the axioms from which a model of the world may be derived.

I am really not too concerned about Tarski’s positions here, so even though I think you are badly misconstruing him, I will not engage in that argument. But even if I were to concede all this, it still does not make your misleading diagram correct. Yes, of course one can ask, of a proposed Tarskian semantics for some actual language used in some actual human activity, *how* the symbols acquire the fixed meanings that they sometimes (though not always) seem to have. And that is a very legitimate question. What is it that makes “London” refer to the city on the Thames? Model theory itself provides no account of this, of how symbols are grounded. It was never supposed to do that: it simply assumes that they refer, and gives an account of truth-conditions based on that assumption.

But that observation does not lead to your curious distortion of model theory itself. It does not require that “London” refer to no city (or to anything real) *at all*, but instead to some strange mathematical abstraction which is then mapped, by some completely mysterious, non-Tarskian theory (is this also a semantic theory?), to London itself. Rather, it would provide some account of why or how this name came to refer to this city – and I really do mean the actual city, the kind you can take a taxi ride in. Such accounts, of how and why names refer, will be complicated and varied, no doubt: but they will *explain* the relationship of reference, not deny or distort it. “London” does in fact refer to London, not to some mathematical abstraction of, or surrogate for, London. (And it does not refer via some such mathematical surrogate, either.) Reference remains reference, whether it is explained or not.


Now let me go back to the following claim from a week ago

Pat
Your diagram is seriously misleading because it takes the matter
of how a ‘model’ (in any sense) can be approximation to a reality
– issues of degrees of precision, tolerance, approximation, accuracy
and so forth – outside the semantic framework of formal ontologies
and their semantics altogether.

No.  See the above points by Tarski (1944):  Theoretical physics is
a suitable domain for formal semantics, but "It is perhaps unnecessary
to say that semantics cannot find any direct applications in natural
sciences such as physics, biology, etc.”

Because they don’t deal with linguistics. Right, and as Tarski himself says, so obvious that it hardly needs saying. But this obvious remark does not carry the weight that you are putting on it.


By "natural science", he means the practice of observing, analyzing,
formulating hypotheses, testing them, and repeat.  That is the
"methodology of science".  The results of that methodology are the
theories of theoretical physics, which is suitable for formal semantics.

Have you actually read that passage?  

“...another important domain for possible applications of semantics is the methodology of science; this term is used here in a broad sense so as to embrace the theory of science in general.”

The methodology of science *is* an important domain for applying semantics. Not the resulting theories, but science itself, “in general". And why? Because:

"One of the main problems of the methodology of empirical science consists in establishing conditions under which an empirical theory or hypothesis should be regarded as acceptable.”

In other words, how these “empirical” hypotheses – symbolic, language-y things, note – relate to the actual world, the one that science does its empirical testing in. That is *why* semantics is relevant to science.

I rest my case. 


Pat
[How can we] even begin to talk about the proposed relationship
between ‘formal models’ and reality, if our semantic theories –
that is, model theories – stop before these matters can even be
brought into their scope.

Very simply.  Tarski used ordinary language supplemented with GOFFOL
(Good Old Fashioned FOL) to present his publications about model theory.
In his 1944 paper, he talked about these issues in a very informal way.

So all that vitally important stuff, which must be done before we can even apply semantics to any part of the real world, the content of the RHS of your diagram; all this must be done informally, in ordinary language? Before semantics can even be started? Before Montague linguistics can be done, for example? I find this a very unsatisfactory conclusion; or rather I would, if I believed it for a second.

For related comments by Halmos, Einstein, Polya, Euler, and Laplace,
see slides 3 to 8 of http://jfsowa.com/talks/ppe.pdf

I looked, and I fail to see what it has to do with our discussions in this thread. To me it seems like a complete side-track, and has no bearing on the merits of your diagram one way or the other.

(If your point is that diagrams, mental images, what Einstein means by ‘muscular’ intuitions, etc., are inherently non-symbolic in nature – as many have believed, and some have actually argued – then we have yet another, but perhaps more useful, argument on our hands, since I believe that all of these are just as symbolic, and just as amenable to a theory of reference and truth, as GOFFOL or Montagued NL. But I concede, that is a whole other discussion. So for the nonce let us just say that if this is your point here, then we still disagree.)

(If your point is that symbols which seem to refer to things in the world do not do as they seem to, but in fact refer to mental entities which in turn are connected to those apparent referents, then I am not sure how to disabuse you of such a fantastical idea. It is absolutely not what Tarski (or Quine, or Montague, or Kripke, or anyone else I can think of) takes reference to be, and it makes complete nonsense of any idea of mentality which hypothesises mental machinery which is itself symbolic in nature, which is just about every coherent theory so far suggested, including all of the knowledge-representation (KR) work on which ontology is based.)



Nicola
The simple theory shown in the example is intended to axiomatize
the relation “on” holding among blocks... However, the theory can’t
distinguish between the actual real-world situations shown to the right.

That example illustrates the points that Tarski made about empirical
science. At that level of detail, the English description is just as
precise as the FOL formulas.  But the real world is a continuum that
allows uncountably many options.  The vagueness is not the fault of NLs,
but of any attempt to limit continuity to discrete options.

Sigh. I will just note in passing that the real world appears not to be a continuum, if modern physics is to be believed.


Gary
Is this a mapping from the formal model or from a mental model?
It seems that John allows for both meanings from which the mapping
is possible and Pat is using this more restrictively to the formal
model.

I'll let Pat answer for himself.  I'll go with the mathematician
Paul Halmos who said, in a passage I quoted on slide 3 of ppe.pdf,
Mathematics — this may surprise or shock some — is never deductive
in its creation. The mathematician at work makes vague guesses,
visualizes broad generalizations... the deductive stage, writing
the result down, and writing its rigorous proof are relatively
trivial once the real insight arrives.

That looks to me to be completely irrelevant to what we are talking about. Of course, anything that goes on inside anyone’s head is “mental”. That hardly needs a quote from Halmos to establish. 


I would say that what goes on in the mathematician's head could be
called a "mental model".  See the later slides in ppe.pdf.

Re mthworld.gif:  After rereading Tarski (1944), I'm more convinced
than ever that my diagram is consistent with that article:
http://jfsowa.com/logic/tarski.htm

Pat, if you're not convinced, please quote any passage by Tarski
that conflicts with the diagram.

The burden of proof is on you, John; but in fact, you already cited some, yourself. See above. And I will observe that your diagram is a direct denial of Tarski’s 1944 theory of truth. 

Pat


John



John F Sowa

unread,
Feb 8, 2018, 12:18:14 PM2/8/18
to Pat Hayes, ontolog-forum
Pat,

I agree with (1) the definition of model theory and your comments
about it; (2) that names in a logic may refer to aspects of the
world; and (3) that logicians, scientists, engineers, and ontologists
may talk about sets that contain physical entities.

But the issues about symbol grounding and identity conditions
are critical. Unless there is some way of relating a domain D
to the real world, there is no difference between a real D and
a hypothetical, virtual, imaginary, or fictional D.

> Model theory itself provides no account of this, of how symbols are
> grounded. It was never supposed to do that: it simply assumes that
> they refer, and gives an account of truth-conditions based on that
> assumption.

Again, we are in complete agreement.

> But that observation does not lead to your curious distortion
> of model theory itself.

I have not distorted model theory in the slightest. As you said,
model theory "simply assumes that [the symbols in the logic] refer".
I drew that diagram to show what model theory does *not* include.

For an example of the way I discuss that diagram, see the quotation
below. I have never seen a diagram with a shorter, clearer, and more
accurate description. If you (or anybody else) can find or draw a
better one with a better description, I would love to see it.

John
________________________________________________________________________

From page 5 of http://jfsowa.com/pubs/fuzzy.pdf

Figure 3: Relating a theory to the world

To bridge the gap between theories and the world, Figure 3 shows a model
as a Janus-like structure, with an engineering side facing the world and
an abstract side facing a theory.

On the left is a picture of the physical world, which contains more
detail and complexity than any humanly conceivable model or theory can
represent. In the middle is a mathematical model that represents a
domain of individuals D and a set of relations R over individuals in D.

If the world had a unique decomposition into discrete objects and
relations, the world itself would be a universal model, of which all
accurate models would be subsets. But the selection of a domain and its
decomposition into objects depend on the intentions of some agent and
the limitations of the agent’s measuring instruments. Even the best
models are approximations to a limited aspect of the world for a
specific purpose.

Nicola Guarino

unread,
Feb 12, 2018, 10:30:29 AM2/12/18
to ontolo...@googlegroups.com
Hi Pat,

I have just realised that this message of mine didn’t go through, since it was sent from a bad email address. I don’t want to raise a long discussion again, this is just a clarification: looking at non-standard models may not be enough...

> On 5 Feb 2018, at 17:11, Pat Hayes <pha...@ihmc.us> wrote:
>
> I believe I can summarize your point as follows: small theories (small ontologies, with only a few axioms) cannot fully capture a complicated world, because they allow non-standard models, ie interpretations which satisfy the theory but are not correct, i.e. they are not parts of any intended world, or the names of the theory are not correctly interpreted. This is of course true, and IMO one of the most useful methodological aspects of Tarskian model theory when developing ontologies. As I suggested in the ‘naive physics manifesto’, longer ago than I care to remember, an excellent way to critique a proposed ontology is to deliberately look for non-standard models, as they vividly reveal gaps in what one might call ‘coverage’ of the axioms.

Yes, but an interesting feature of the small ontology I discussed is that it has NO non-standard model: ALL the models are intended, since the ontology says nothing wrong about the ‘on' relationship: all the possible domain structures based on the ‘on’ relationship are such that it is asymmetric and anti-transitive, which is fine. In a sense, this is therefore a maximally precise ontology, since all the models are intended. Yet, each intended model is just a mathematical structure, which may collapse intended and non-intended actual arrangements of blocks in the real world.

So, the absence of non-standard models is not enough to conclude that the ontology is accurate. We really have to compare models, which are mathematical structures, with reality itself. This is indeed what you implicitly did, in your (second) naive physics manifesto, when you discussed possible expansions of the toy blocks-world example, considering the necessity of introducing more axioms that in turn introduce other concepts.
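
To make the finite case concrete, here is a minimal Python sketch, with block names and a brute-force enumeration assumed purely for illustration. It keeps exactly the structures satisfying asymmetry and anti-transitivity, i.e. the finite models of the toy theory over three blocks; nothing in the theory, or in the code, says which of those models corresponds to the blocks actually sitting on the table.

    # Enumerate the finite models of the toy 'on' theory over three blocks.
    from itertools import product

    blocks = ("a", "b", "c")
    pairs = [(x, y) for x in blocks for y in blocks]

    def asymmetric(on):
        # on(x, y) implies not on(y, x); taking x = y also rules out on(x, x)
        return all(not ((x, y) in on and (y, x) in on) for (x, y) in pairs)

    def anti_transitive(on):
        # on(x, y) and on(y, z) imply not on(x, z)
        return all(not ((x, y) in on and (y, z) in on and (x, z) in on)
                   for x in blocks for y in blocks for z in blocks)

    models = []
    for bits in product([0, 1], repeat=len(pairs)):
        on = {p for p, b in zip(pairs, bits) if b}
        if asymmetric(on) and anti_transitive(on):
            models.append(frozenset(on))

    print(len(models))   # how many finite models the theory has over three named blocks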

Best,

Nicola

John F Sowa

unread,
Feb 12, 2018, 11:53:53 AM2/12/18
to ontolo...@googlegroups.com
On 2/12/2018 10:30 AM, Nicola Guarino wrote:
> the absence of non-standard models is not enough to conclude
> that the ontology is accurate.

I agree. But sometimes having multiple models can be good.
It just means that a theory is very general. For example, group
theory has a large number of useful specializations.

The question of non-standard (or non-intended) models first arose
with Peano's axioms for the integers. Everybody thought of the
integers as a single sequence with 0 or 1 at the beginning (or
the middle, if you allow negative integers).

But it turns out that the first-order Peano axioms cannot rule out
non-standard models containing infinitely many additional sequences
beyond the standard one, each ordered like the full set of integers,
with no zero-like starting point of its own.

There are two options for non-standard models: (1) accept them
and prove theorems about them; (2) use higher-order logic to
say "There is one and only one sequence of integers."

For the real world, nobody knows how far any theory of physics
applies to any range of phenomena outside areas that have been
tested. Every known law of physics is fallible. Nobody knows
what may happen in extreme cases.

For "commonsense ontologies", the situation is far worse than
physics. That's why we have been talking about contexts.
In the summit talks, speakers have stated various definitions.
But when you analyze them, they all have a similar implication:
"A context is whatever you or somebody else says it is."

John

Mary-Anne Williams

unread,
Feb 12, 2018, 3:47:27 PM2/12/18
to ontolo...@googlegroups.com
This may be of interest


John F Sowa

unread,
Feb 12, 2018, 4:31:14 PM2/12/18
to ontolo...@googlegroups.com
On 2/12/2018 3:47 PM, Mary-Anne Williams wrote:
> *A grounding framework* -
> https://link.springer.com/article/10.1007/s10458-009-9082-0

That link charges money for articles. Do you have a version
on your website or on Researchgate?

Publishers have become a major obstacle to publication.

John

Mary-Anne Williams

unread,
Feb 12, 2018, 4:36:36 PM2/12/18
to ontolo...@googlegroups.com

Rich Cooper

unread,
Feb 12, 2018, 4:55:28 PM2/12/18
to ontolo...@googlegroups.com

John and Mary-Anne,

John wrote:

      For "commonsense ontologies", the situation is far worse than

      physics.  That's why we have been talking about contexts.

      In the summit talks, speakers have stated various definitions.

      But when you analyze them, they all have a similar implication:

      "A context is whatever you or somebody else says it is."

      John

Another way to put this is that the observer is subjective, and the ontologizer is subjective, and the context selected by either is strongly affected by their individual experiences, yet ontologies are expected to be universal, objective, transferrable.  Bad assumption in the first place. 

Sincerely,

Rich Cooper,


Chief Technology Officer,

MetaSemantics Corporation

MetaSemantics AT EnglishLogicKernel DOT com

( 9 4 9 ) 5 2 5-5 7 1 2

http://www.EnglishLogicKernel.com

From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of Mary-Anne Williams
Sent: Monday, February 12, 2018 12:47 PM
To: ontolo...@googlegroups.com


Mary-Anne Williams

unread,
Feb 12, 2018, 5:02:47 PM2/12/18
to ontolo...@googlegroups.com
I agree Rich.

From the paper:

"Grounding can be contextual, and when it is, it should be measured relative to system goals."

"The framework is designed to describe, evaluate and in some cases formally measure the quality of representations and grounding capabilities which can be system specific, domain specific, and context specific. Our framework strongly supports the idea that when it comes to assessing grounding capabilities there are few absolute measures. "




                                          

Mary-Anne Williams FACS, FTSE
Distinguished Research Professor

Innovation and Enterprise Research Lab
Centre of Artificial Intelligence
University of Technology Sydney
PO Box 123 Broadway NSW 2007 Australia

Fellow, Stanford University
Twitter: @SwizzleFish

Australian Research Council Feature on Social Robotics Project with CBA





Rich Cooper

unread,
Feb 12, 2018, 5:17:56 PM2/12/18
to ontolo...@googlegroups.com

Mary-Anne,

 

So, given that we agree on the subjectivity of all participants in ontology, perhaps we should treat ontology as an art, rather than as a scientific, or universal, representation of knowledge.  The influence of an ontology's author is very personal, very much in tune with that person's experience.  The same is true of literature, art, music, and the many expressions of human knowledge. 

 

Sincerely,

Rich Cooper,


 

Chief Technology Officer,

MetaSemantics Corporation

MetaSemantics AT EnglishLogicKernel DOT com

( 9 4 9 ) 5 2 5-5 7 1 2

http://www.EnglishLogicKernel.com

 

From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of Mary-Anne Williams
Sent: Monday, February 12, 2018 2:01 PM
To: ontolo...@googlegroups.com
Subject: Re: [ontolog-forum] Models and symbol grounding

 

I agree Rich.


Mary-Anne Williams

unread,
Feb 12, 2018, 6:04:40 PM2/12/18
to ontolo...@googlegroups.com
art is science and conversely!



Mary-Anne Williams

unread,
Feb 12, 2018, 6:08:32 PM2/12/18
to ontolo...@googlegroups.com
On 13 February 2018 at 10:04, Mary-Anne Williams <Mary...@themagiclab.org> wrote:
art is science and conversely!

Rich Cooper

unread,
Feb 12, 2018, 6:24:35 PM2/12/18
to ontolo...@googlegroups.com

Mary-Anne,

 

That link:

              https://www.lucs.lu.se/spinning/categories/dynamics/Williams/surprise

brings up a blank browser window, but nothing more than that.  Perhaps the URL needs correction?

 

Sincerely,

Rich Cooper,


 

Chief Technology Officer,

MetaSemantics Corporation

MetaSemantics AT EnglishLogicKernel DOT com

( 9 4 9 ) 5 2 5-5 7 1 2

http://www.EnglishLogicKernel.com

 

From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of Mary-Anne Williams
Sent: Monday, February 12, 2018 3:08 PM
To: ontolo...@googlegroups.com
Subject: Re: [ontolog-forum] Models and symbol grounding

 

On 13 February 2018 at 10:04, Mary-Anne Williams <Mary...@themagiclab.org> wrote:

art is science and conversely!


Mary-Anne Williams

unread,
Feb 12, 2018, 6:35:21 PM2/12/18
to ontolo...@googlegroups.com
The server is just struggling to keep up with demand - it seems like a lot of people simultaneously clicked on it :-)

The link points to a Flash rendition, but all the info is at the following link, so if you want to skip Flash, try https://www.lucs.lu.se/spinning/categories/dynamics/Williams/surprise/surprise.html






Rich Cooper

unread,
Feb 12, 2018, 6:45:33 PM2/12/18
to ontolo...@googlegroups.com

Mary-Anne,

 

I got a blank browser window when I went to that URL, but I followed your paper through to another link:

              https://link.springer.com/article/10.1007/s10458-009-9082-0

 

which is informative, from your other writings.  The "grounding" you describe is called "ground truth" in US military systems that represent situations of interest, in areas of interest.  You pointed out the link from information to reality that is so slippery to straighten out. 

 

Sincerely,

Rich Cooper,


 

Chief Technology Officer,

MetaSemantics Corporation

MetaSemantics AT EnglishLogicKernel DOT com

( 9 4 9 ) 5 2 5-5 7 1 2

http://www.EnglishLogicKernel.com

 

From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of Mary-Anne Williams
Sent: Monday, February 12, 2018 3:08 PM
To: ontolo...@googlegroups.com
Subject: Re: [ontolog-forum] Models and symbol grounding

 

On 13 February 2018 at 10:04, Mary-Anne Williams <Mary...@themagiclab.org> wrote:

art is science and conversely!


John F Sowa

unread,
Feb 13, 2018, 12:05:57 PM2/13/18
to ontolo...@googlegroups.com
Mary-Anne and Rich,

MAW
> This may be of interest: A grounding framework
> https://www.researchgate.net/publication/220660856_A_grounding_framework

Yes. And I hope it can end the endless and fruitless debates that
Rich constantly brings up about objective vs subjective.

RC
> Another way to put this is that the observer is subjective, and
> the ontologizer is subjective, and the context selected by either
> is strongly affected by their individual experiences, yet ontologies
> are expected to be universal, objective, transferrable. Bad assumption
> in the first place.

That is an excellent example of the reason why the words 'objective'
and 'subjective' are hopelessly vague, confusing, worthless buzz words.
You can do a global change of 'objective' to 'buzz1' and 'subjective'
to 'buzz2' without making any meaningful difference in that statement.

By avoiding those buzz words, the paper that Mary-Anne cited made
a clear, precise statement that can be used for designing robots:

From "A grounding framework"
> Grounding can be contextual, and when it is, it should be measured
> relative to system goals... The framework is designed to describe,
> evaluate and in some cases formally measure the quality of
> representations and grounding capabilities which can be system specific,
> domain specific, and context specific. Our framework strongly supports
> the idea that when it comes to assessing grounding capabilities there
> are few absolute measures.

Note the word 'goal'. That is something that can be stated in formal
terms (e.g., a sentence in a version of logic or clearly written NL).

Another word that I recommend is 'intention'. That word can be
defined as "A goal that guides the actions by some agent."

A goal can be stated in logic, an agent (such as a human, dog, or
robot) can be identified by pointing to it, and the words 'guide'
and 'action' are sufficiently clear that you can write a program
that controls the robot.
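
As a toy illustration of that last point, with every name below assumed
purely for the sketch: a goal can be written as a predicate on states,
an intention pairs that goal with an agent, and "guiding the actions"
just means selecting an action whose predicted result satisfies the goal.

    # A goal stated as a predicate on states, guiding an agent's choice of action.
    # The states, actions, and effects are illustrative assumptions.

    def goal(state):
        # "the robot is at the charging station"
        return state["position"] == "charger"

    actions = {
        "move_to_door":    lambda s: {**s, "position": "door"},
        "move_to_charger": lambda s: {**s, "position": "charger"},
        "wait":            lambda s: s,
    }

    def choose_action(state):
        """Pick an action whose predicted result satisfies the goal; otherwise wait."""
        for name, effect in actions.items():
            if goal(effect(state)):
                return name
        return "wait"

    print(choose_action({"position": "door"}))   # move_to_charger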

RC
> The "grounding" you describe is called "ground truth" in US military
> systems that represent situations of interest, in areas of interest.
> You pointed out the link from information to reality that is so
> slippery to straighten out.

The words 'reality' and 'truth' are both slippery. But science
and engineering do make progress -- just look at history. Both of
them are based on three assumptions: (1) there is a reality that
is independent of the way we think or talk; (2) we can never have
a perfect understanding of reality; but (3) by systematically
observing, analyzing, and testing our hypotheses we can get close
enough for most practical purposes.

Summary: The words 'objective' and 'subjective' are useless buzz words.
But methods of science and engineering have derived information that
can support complex goals. We can't expect any ontology to be perfect.
But with those methods, we can derive useful ontologies for engineering.

John

Hans Polzer

unread,
Feb 13, 2018, 12:47:59 PM2/13/18
to ontolo...@googlegroups.com
John,

The one thing I would add to this discussion is the role that human institutions play in establishing grounding and associated frames of reference and standards. I rarely see this discussed or pointed out, although it is often alluded to implicitly when people appeal to or cite "authoritative sources". There are many types of such institutions, with varying degrees of formality and sanction, including academic, government, standards bodies/NGOs, corporations, industry associations, and informal interest groups. In essence, these institutions serve to convert the arbitrary and subjective conventions and frames of reference into something that is viewed as at least quasi-objective by individuals and institutional entities who cite or otherwise subscribe to them. Interestingly, the scope of applicable standards, conventions, and frames of reference are usually closely associated with the scope of the sponsoring institutions - and the scope of the contexts important to those institutions.

Where things get really interesting (to me, at least), is when agents/systems try to interact with each other across the context scope boundaries of the relevant institutional groundings. These boundaries are often vague, implicit and difficult to discover because historically there has been no real need or capability to do so in a machine-processable way. That's the root cause of most interoperability issues that arise in our increasingly connected world. Thus we have the rise of many ad-hoc industry associations and government-sponsored initiatives to address specific cross-domain interoperability issues. But the general problem is much harder (or is it softer?) to solve, in part because there is little motivation for creating human institutions to address it.

Hans

-----Original Message-----
From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of John F Sowa
Sent: Tuesday, February 13, 2018 12:06 PM
To: ontolo...@googlegroups.com
Subject: Re: [ontolog-forum] Models and symbol grounding


John F Sowa

unread,
Feb 14, 2018, 9:54:38 AM2/14/18
to ontolo...@googlegroups.com
On 2/13/2018 12:47 PM, Hans Polzer wrote:
> The one thing I would add to this discussion is the role that
> human institutions play in establishing grounding and associated
> frames of reference and standards.

Human institutions or social organizations are extremely important.
They're based on shared intentions among a group of people for some
activity or system of activities.

> There are many types of such institutions, with varying degrees of
> formality and sanction, including academic, government, standards
> bodies/NGOs, corporations, industry associations, and informal interest
> groups.

Yes. And the intentions of the people involved are always indicated
by a sign: a contract, treaty, constitution, bylaws, announcement,
ceremonies... Some institutions, such as the Mafia, have rules
about signs: Don't write what you can say, don't say what you can
wink, and don't wink what you can nod.

> these institutions serve to convert the arbitrary and subjective
> conventions and frames of reference into something that is viewed
> as at least quasi-objective by individuals and institutional entities
> who cite or otherwise subscribe to them.

It's the publicly observable sign and the accompanying action that makes
them objectively known. That action may be as informal as a handshake,
or it may be a formal ceremony: a wedding, swearing on a Bible, or
smoking a peace pipe. A handshake with witnesses can be used as
evidence in a court of law.

> These boundaries are often vague, implicit and difficult to discover
> because historically there has been no real need or capability to do
> so in a machine-processable way.

People recognized the need thousands of years ago. The Sumerians
invented cuneiform around 4000 BC to list the goods carried by
their caravans and record what was exchanged for those goods.

> the root cause of most interoperability issues that arise in our
> increasingly connected world.

People and their institutions have always been connected to other
tribes with different institutions. The need for cooperation,
trade, and resolving conflicts required shared conventions and
methods of communication. The WWW is just our latest version
of cuneiform and camel caravans.

John

John Bottoms

unread,
Feb 14, 2018, 10:42:57 AM2/14/18
to ontolo...@googlegroups.com
Many times I find that you can get better performance by discussing
"cohorts". These are handles of diverse and overlapping groups that are
otherwise difficult to unravel. Individuals and agents are better left
once a smaller context or situation has been identified.

Recently I was asked to respond to a situation in which a robot walked
into a library. He (it) knew that libraries are keys to knowledge, and
that increased knowledge was a subgoal, so, "in he went". The issue for
us was that the library was a church library, and we had to explain to
the robot that he is not a member of that cohort. It's all good now.

-John Bottoms

Paul Tyson

unread,
Feb 15, 2018, 8:24:57 PM2/15/18
to ontolo...@googlegroups.com
On Wed, 2018-02-14 at 10:42 -0500, John Bottoms wrote:
> Many times I find that you can get better performance by discussing
> "cohorts".
> These are handles of diverse and overlapping groups that are otherwise
> difficult to unravel.
> Individuals and agents are better left once a smaller context or
> situation has been identified.
> Recently I was asked to respond to a situation in which a robot walked
> into a library. He (it) knew
> that libraries are keys to knowledge, and that increased knowledge was a
> subgoal, so, "in he went".
> The issue for us was that the library was a church library and we had to
> explain to the robot that
> he is not a member of that cohort.
> It's all good now.

Umm... so "cohort" == "echo chamber"?


hpo...@verizon.net

unread,
Feb 15, 2018, 9:42:32 PM2/15/18
to ontolo...@googlegroups.com
Cute! Some other common terms for the same general concept are community of interest, operational domain, team, organization, coterie, in-group, stovepipe, silo, functional area, bloc, clique, guild, union, party, trade, swim lane, language, argot, lingo, industry sector, department, division, business area, program, project, etc., etc. All of these have a variety of mostly scope nuances that distinguish among them to some degree and for certain purposes.

People band together for mutual benefit in the context of working towards some common purpose, but also with some assumptions regarding the larger environment within which that purpose is to be achieved - and constrained. This is the foreground-background problem, further complicated by having many such overlapping foregrounds, some of whom contribute to the background environment for other foregrounds. I like to use the lava lamp analogy to capture the somewhat dynamic (chaotic?) nature of this problem.

Of course, most echo chambers are not entirely soundproofed, have entrances/exits, and do exist adjacent to other rooms/buildings/environments/universes.

The other analogy I like to use is living in a college dorm with translucent walls and floors, changing roommates, and communal kitchens and bathrooms. You still get cliques, groups, blocs, etc., but they may be based on a somewhat different set of commonalities/differences. Inter-disciplinary groups often result in novel solutions to operational and social problems, but they don't guarantee them. There are benefits to focused perspectives and efforts, as well as costs and risks, just as there are benefits to taking a broader and more diverse view of both the problem at hand and the larger environment in which that problem exists, but also costs and risks. As that famous philosopher, Kenny Rogers, observed: " you gotta know when to hold them and when to fold them".

A key challenge is that it's easier to get a small group together and gain consensus among them regarding a given problem space representation than it is to get a usually much larger group together and gain consensus for representing the intersection among a multitude of problem spaces.

Hans



-----Original Message-----
From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of Paul Tyson
Sent: Thursday, February 15, 2018 8:25 PM
To: ontolo...@googlegroups.com
Subject: Re: [ontolog-forum] Models and symbol grounding

Jon Awbrey

unread,
Feb 16, 2018, 9:26:41 AM2/16/18
to ontolo...@googlegroups.com, hpo...@verizon.net
Ontologgers,

In the Peirce universe one speaks of “communities of inquiry”
and “communities of interpretation”. Those concepts give us
slightly better ways of handling “contexts of interpretation”.

All three constructs are compact ways of talking about triadic sign relations.

Over the years I have found the hardest thing to convey about
sign relations has been what it's like to think and work within
an extended sign relational environment. A “setting” like that
consists of a large number of individual sign-relational triples
called “elementary sign relations”, all having the form (o, s, i),
where o is the object, s is the sign, and i is the interpretant sign
of the triple.

This means that any given sign relation L is a subset of a cartesian
product O×S×I, where O is the “object domain”, S is the “sign domain”,
and I is the “interpretant sign domain” of the sign relation L in view.
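
In concrete terms, and with made-up triples chosen only for illustration,
a sign relation is simply a set of (o, s, i) triples together with its
three projections:

    # A sign relation L as a finite subset of O x S x I; the triples are illustrative.
    L = {
        ("London", '"London"', '"the city on the Thames"'),
        ("London", '"the city on the Thames"', '"London"'),
    }

    O = {o for (o, s, i) in L}   # object domain
    S = {s for (o, s, i) in L}   # sign domain
    I = {i for (o, s, i) in L}   # interpretant sign domain

    # By construction, L is a subset of the cartesian product O x S x I.
    assert all(o in O and s in S and i in I for (o, s, i) in L)
    print(O, S, I)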

Taking this point of view on sign relations makes a big difference in
the conjoined theories of inquiry and interpretation that develop from
this point on.

Will continue as I, no, the other I, get time ...

But here's an article I wrote, or started,
originally for Wikipedia, some years back:

Sign Relations
http://intersci.ss.uci.edu/wiki/index.php/Sign_relation

Jon
> Umm ... so "cohort" == "echo chamber"?
>

--

inquiry into inquiry: https://inquiryintoinquiry.com/
academia: https://independent.academia.edu/JonAwbrey
oeiswiki: https://www.oeis.org/wiki/User:Jon_Awbrey
isw: http://intersci.ss.uci.edu/wiki/index.php/JLA
facebook page: https://www.facebook.com/JonnyCache

Richard H. McCullough

unread,
Feb 16, 2018, 12:18:56 PM2/16/18
to ontolog forum



John, you said


Note the word 'goal'.  That is something that can be stated in formal
terms (e.g., a sentence in a version of logic or clearly written NL).

Another word that I recommend is 'intention'.  That word can be
defined as "A goal that guides the actions by some agent."


In mKR, I refer to the purpose of an action, e.g.

     I do go to the store with purpose = { I do buy od food; };

The word ‘purpose’ can be defined as “the reason that something is done”.

Purpose is a property of an action.

It is a word that is used in everyday English.



Richard H. McCullough
http://ContextKnowledgeSystems.org
What is your context?

John F Sowa

unread,
Mar 28, 2018, 3:11:53 AM3/28/18
to ontolog-forum
Hans and Ravi,

I strongly agree with Hans. See below for my response to his
note from February 12th of this year.

HP
> You may recall our email exchanges from several years ago about
> what I referred to as "conceptual realities". Some examples are
> things like school districts, police or voting precincts, campuses,
> air space restricted areas, and the like. There are typically no
> physical manifestations of such entities...

The point I would add is that there are very important physical
manifestations. They're called signs. Every sign is interpreted
by minds (possibly with some mental aid, such as languages, pictures,
diagrams, paper and pencil, computers...). Peirce used the term
'quasi-mind' for non-human agents, which could include robots.

But every sign has a perceptible "mark". When it's interpreted
by some agent (human, animal, or robot) the result is some physical
action, which may generate further signs.

RS
> my question is what part of Logic or any other test can we
> perform to possibly separate ontologies that pertain to known
> or real domains (I can't find better words) vs some illogical
> non-plausible (based on current knowledge) ontologies that
> might be purely imaginary

Ravi, I'll continue to be polite, so I won't say what I think of
that paper. But "delusional" is mild compared to what I would say.

Those authors used the word 'unicorn' to deprecate the work by
scientists and engineers who are doing advanced R & D (Higgs boson
and Mars missions, for example). They used the term 'unicorn
delusions'. But they were not talking about unicorns. They were
implying that that you, I, and all the scientists, engineers,
mathematicians, and programmers are deluded in thinking that we're
doing something real.

But they have no understanding of logic. For example, the modal
logics based on possible worlds often use axiom S5, which implies
that every possible world has *exactly* the same ontology as the
real world. Axiom S4 would allow some variation in the laws of
different worlds, but most of the laws would be the same.
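
For reference, the characteristic axiom schemas usually labelled S4 and
S5 are, in standard notation,

    \mathbf{S4}:\quad \Box p \rightarrow \Box\Box p
    \qquad\qquad
    \mathbf{S5}:\quad \Diamond p \rightarrow \Box\Diamond p

added on top of the basic system K and the reflexivity axiom T. In
Kripke semantics, S4 corresponds to a reflexive and transitive
accessibility relation, and S5 to an equivalence relation, so every
world in an S5 model has access to exactly the worlds in its own
equivalence class.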

Their philosophy is nominalist, *not* realist. It implies that the
laws of science are nothing but summaries of data. That reduces
science to the study of meter readings.

Alonzo Church, a logician and philosopher I highly respect, gave a
lecture at Harvard, where he ridiculed the nominalists, such as Quine
who was in the audience: http://jfsowa.com/ontology/church.pdf

For an article about the limitations of nominalism, see Signs,
processes, and language games: http://jfsowa.com/pubs/signs.pdf

I'll say more when I get back from San Diego.

John

-------- Forwarded Message --------
Subject: Re: [ontolog-forum] Models and symbol grounding
Date: Wed, 14 Feb 2018 09:54:35 -0500
From: John F Sowa

On 2/13/2018 12:47 PM, Hans Polzer wrote:
> The one thing I would add to this discussion is the role that
> human institutions play in establishing grounding and associated
> frames of reference and standards.

Human institutions or social organizations are extremely important.
They're based on shared intentions among a group of people for some
activity or system of activities.

> There are many types of such institutions, with varying degrees of
> formality and sanction, including academic, government, standards
> bodies/NGOs, corporations, industry associations, and informal
> interest groups.

Yes. And the intentions of the people involved are always indicated
by a sign: a contract, treaty, constitution, bylaws, announcement,
ceremonies... Some institutions, such as the Mafia, have rules
about signs: Don't write what you can say, don't say what you can
wink, and don't wink what you can nod. Vinnie "The Chin", for
example, would stroke his chin as a sign for "Do it".

Marcel Fröhlich

Mar 28, 2018, 4:50:46 AM
to ontolo...@googlegroups.com
John, Hans, Ravi,

While not advocating for the distinction made in the cited paper, I recently found a fruitful distinction for describing systems.
Atmanspacher and Kronz (1998) differentiate ontic states from epistemic states of a system.

Many Realisms, Acta Polytechnica Scandinavica Ma-91, 31–43 (1998)

"
Ontic states describe all properties of a physical system exhaustively. (Exhaustive in this context means that an ontic state is precisely the way it is, without any reference to epistemic knowledge or ignorance.) Ontic states are the referents of individual descriptions, the properties of the system are formalized by intrinsic observables. Their temporal evolution (dynamics) follows universal, deterministic laws given by a Hamiltonian one-parameter group. As a rule, ontic states in this sense are empirically inaccessible. Epistemic states describe our (usually inexhaustive) knowledge of the properties of a physical system, i.e. based on a finite partition of the relevant state space. The referents of statistical descriptions are epistemic states, the properties of the system are formalized by contextual observables.
"

The approach is explicitly referring to the tradition of Quine. A broader and more recent description can be found here:

Let me cite the last section of the Scholarpedia article:
"

Relative onticity

Contextual emergence has been originally conceived as a relation between levels of descriptions, not levels of nature: It addresses questions of epistemology rather than ontology. In agreement with Esfeld (2009), who advocated that ontology needs to regain more significance in science, it would be desirable to know how ontological considerations might be added to the picture that contextual emergence provides.

A network of descriptive levels of varying degrees of granularity raises the question of whether descriptions with finer grains are more fundamental than those with coarser grains. The majority of scientists and philosophers of science in the past tended to answer this question affirmatively. As a consequence, there would be one fundamental ontology, preferentially that of elementary particle physics, to which the terms at all other descriptive levels can be reduced.

But this reductive credo also produced critical assessments and alternative proposals. A philosophical precursor of trends against a fundamental ontology is Quine's (1969) ontological relativity. Quine argued that if there is one ontology that fulfills a given descriptive theory, then there is more than one. It makes no sense to say what the objects of a theory are, beyond saying how to interpret or reinterpret that theory in another theory. Putnam (1981, 1987) later developed a related kind of ontological relativity, first called internal realism, later sometimes modified to pragmatic realism.

On the basis of these philosophical approaches, Atmanspacher and Kronz (1999) suggested how to apply Quine's ideas to concrete scientific descriptions, their relationships with one another, and with their referents. One and the same descriptive framework can be construed as either ontic or epistemic, depending on which other framework it is related to: bricks and tables will be regarded as ontic by an architect, but they will be considered highly epistemic from the perspective of a solid-state physicist.

Coupled with the implementation of relevance criteria due to contextual emergence (Atmanspacher 2016), the relativity of ontology must not be confused with dropping ontology altogether. The "tyranny of relativism" (as some have called it) can be avoided by identifying relevance criteria to distinguish proper context-specific descriptions from less proper ones. The resulting picture is more subtle and more flexible than an overly bold reductive fundamentalism, and yet it is more restrictive and specific than a patchwork of arbitrarily connected model fragments.

"

Marcel
(@FroehlichMarcel)





Mathias Brochhausen

Mar 28, 2018, 5:15:53 AM
to ontolo...@googlegroups.com
John,

while I should know better than to feed the troll, I wanted to point out that the paper clearly states that the problem the authors try to address is a specific problem in DL (and even more specifically for users of Basic Formal Ontology, but that isn't relevant to my remark).

Those so eager to judge others on their understanding of logic, or its absence, should know that DL is a subset of First Order Logic. Hence, considerations of modal logic are not relevant, and the solutions that modal logic provides to those problems are not accessible to those working with DL.

I would love to chat more about this, but I really think I have spent enough time in this honorable echo chamber.

Farewell,
Mathias





John F Sowa

Mar 28, 2018, 11:03:46 AM
to ontolo...@googlegroups.com, Doug Skuce
Dear Mathias,

I apologize for doubting your knowledge of logic. But I was annoyed
by the word 'delusional' in the title of your paper and the suggestion
that the work of the best scientists and engineers is at the same level
as talk about mythical beasts.

Furthermore, I was responding to a note by Ravi, who was misled by
that title and some of the content. He asked how we can "separate
ontologies that pertain to known or real domains ... vs some illogical
non-plausible ... ontologies that might be purely imaginary."

In my response, I wanted to emphasize that the category "imaginary"
in your proposal includes all the R & D that he and other scientists
and engineers have been doing all their lives.

> DL is a subset of First Order Logic. Hence, considerations of modal
> logic are not relevant and the solutions that modal logic provides
> to those problems are not accessible to those working with DL.

That is a critical issue that has many implications, for both
logic and ontology:

1. The words 'real' and 'imaginary' are closely related to the
terms 'actual' and 'possible' in modal logics. They raise
similar issues, which cannot be completely resolved in FOL
or any subset, such as DLs. Points #2, #3, and #4 show why.

2. Many DL experts, starting with Ron Brachman, have observed that
DLs have a modal effect when used as a T-Box in conjunction
with an A-Box (which may be any source of assertions, such as
a database or the WWW). The reason for this modal effect is
that the T-Box is assumed to have a higher priority or
entrenchment with respect to other sources of information.
This method has been successfully used for years.

3. Ontologies are often used in design stages where modal terms occur,
such as obligatory, mandatory, optional, required, prohibited....
In those cases, the same ontology should be used in the design
stage, where the modal terms are used, and in the finished product,
where everything is actual. Since nearly every product goes through
many stages of design and revision, constantly switching from one
ontology to another would cause confusion and introduce bugs.

4. Finally, I used the example of Kripke semantics. S4 and S5 are
two widely used versions of modal logic. S5 implies that every
world, real or possible, would have exactly the same ontology.
S4 implies that any possible world that is accessible from the
real world would have mostly the same ontology, but perhaps with
some updates or revisions.

In short, specifications in the design stage typically use modal
terms, and everything in a finished product is actual. Ideally,
the same ontology should be used in both stages.

I don't know whether Barry Smith approves of your proposal. I hope
not, but I fear that he might. In any case, Barry and I are scheduled
for a debate in the Ontology Symposium on May 1. This would be a good
topic to discuss.

John

hpo...@verizon.net

Mar 28, 2018, 12:20:07 PM
to ontolo...@googlegroups.com
John,

I agree that there are signs and marks that are physical manifestations of these types of conceptual/social/political entities. However, those physical manifestations are typically not physically associated with the physical entities to which they refer/signify. The association is only knowable to those who have previously been educated/apprised of the mark and what it signifies. If you lack that knowledge, no physical sensor will inform you of the existence of such entities. One may try to infer their existence by observing behaviors of others who are suspected of having such knowledge, but this is a probabilistic process with usually significant error margins.

BTW, I often harp on this point because of past dealings with people in domains where such knowledge is hard to come by and yet they tend to expect to obtain perfect knowledge about such entities from physical sensors (e.g., radars, infrared, sonic, radio emissions, etc.). I also often highlight the Internet revolution as making more of such signs and marks broadly accessible, but not necessarily making the marks understandable to those who haven't been clued in to their significance. In part, this is due to a lack of explicit context representation associated with such Internet-accessible marks.

Hans

John F Sowa

Mar 28, 2018, 4:17:21 PM
to ontolo...@googlegroups.com, Doug Skuce
Hans and Michael,

HP
> I agree that there are signs and marks that are physical manifestations
> of these types of conceptual/social/political entities. However, those
> physical manifestations are typically not physically associated with
> the physical entities to which they refer/signify.


That is why you need Peirce's complete semiotic system.
An icon resembles its referent in some way.
An index points to it in some way.
And a symbol refers to something by convention.

As Peirce said, symbols evolve from icons and other symbols.
That's true of all alphabets and characters in the world.
The letter M came from the Egyptian hieroglyph for water (waves).
From that to simplified Egyptian to Phoenician to Greek to Latin.

See "Signs and Reality": http://jfsowa.com/pubs/signs.pdf

HP
> yet they tend to expect to obtain perfect knowledge...
> from physical sensors (e.g., radars, infrared, sonic, radio
> emissions, etc.).

Physical measurements can never be perfect, and observations
are always fallible. That gets into epistemology (study of
how and what we can know) as opposed to ontology (study of
what exists). Since epistemology can never be perfect,
every ontology is at best a useful approximation for some
purpose or range of purposes.

MDB
> this distinction between "real" and "not real" is not some well
> defined rigorous notion. I.e., it's not something we should expect
> that logic could inherently distinguish. Like many things in human
> language it's highly context dependent.

That opens up many more cans of worms. Three more articles
about related issues:

Five questions on epistemic logic:
http://jfsowa.com/pubs/5qelogic.pdf

What is the source of fuzziness?
http://jfsowa.com/pubs/fuzzy.pdf

The challenge of knowledge soup:
http://jfsowa.com/pubs/challenge.pdf

Brief summary: logic, epistemology, ontology, semiotics, and
linguistics are all involved (or entangled) in these issues.
You can't expect to get a complete solution from any one of them.
If you try to mash them all in one package, you get knowledge soup.

John

hpo...@verizon.net

Mar 28, 2018, 5:49:54 PM
to ontolo...@googlegroups.com
John,

I think you are missing my point. The issue is not the accuracy of physical measurements of some physical entity in space-time. The issue is determining whether said physical entity is indeed a member of some larger conceptual entity - without the benefit of access to human-generated information sources - i.e., the signs and marks you mentioned in your email. For example, how would I know which people in some public assembly are members of the faculty of some university, simply by taking physical measurements?

Sure, they might be wearing some sort of externally visible ID, or maybe I could do 3-D tomography and read the ID cards in their wallet or purse (assuming they are carrying one on their person). But even that requires me to have some kind of knowledge about what types of IDs individual faculty members of that university possess. Or I could observe that certain individuals in the crowd eventually find their way to a building that I happen to know is a classroom for said university, and if I could see inside that classroom and detect the individual is at some sort of lectern position, I might infer that the person is a faculty member. But I could also be quite wrong about that, regardless of the accuracy of my physical observations - or the observations of the students in the classroom. I could also try to do something like facial recognition - but that would require access to some "authoritative" source of facial images of that university's faculty, not just physical measurement of facial images. Maybe the tweed jacket and elbow patches would be a dead giveaway??

The trend to bar-code, RFID-tag, and GPS-enable just about everything is in fact an attempt to address this very prevalent issue in the government and business IT world. But the general problem I am referring to still exists and is very problematic in domains where the physical entities making up the larger composite/conceptual entities either don't have any externally detectable marks signifying their membership relationship, for pragmatic reasons, or deliberately don't want that relationship to be detectable by others, for privacy or nefarious reasons (as in your Mafia example).

Hans


-----Original Message-----
From: ontolo...@googlegroups.com <ontolo...@googlegroups.com> On Behalf Of John F Sowa

John F Sowa

Mar 28, 2018, 9:56:33 PM
to ontolo...@googlegroups.com
Mathias, Hans, Michael, Mike, and Ravi,

I reread the paper with the above title, and I have to admit that
the authors went to a huge amount of work to make their system work.
I admit that it's possible to represent what they want to represent
and to make appropriate inferences from it.

It is really a tour de force. But I also believe that they could have
made their own lives easier and spared their readers a great deal of
effort if they had designed an ontology along the following lines:

1. OWL is very complex and quirky. Common Logic is much simpler,
and it offers a more systematic set of primitives with which to
specify the categories in the ontology.

2. With those categories defined in CL, they would have a framework
with which to specify equivalents of everything they need to specify
for their applications. And they could do so without going outside
the expressive power of OWL DL.

3. But the simpler and more expressive semantics of Common Logic allows
them to define a clean, elegant ontology. There is no need to worry
whether some universals (AKA functions and relations) do or do not
happen to have instances in the actual world or some possible
world. The presence of instances is a contingent issue that is
independent of the definitions.

For a quick overview of such an ontology, consider Common Logic
(or something like it) as the base logic. The following paper about
Peirce's semiotic discusses ways to use such a logic to define the
primitives: http://jfsowa.com/pubs/signs.pdf .

When the primitives are defined in CL, they can be used in OWL DL
to specify everything needed for the paper with the above title.
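As a toy illustration of point 3 (my own example, not taken from that paper), in CLIF one can define a universal by its defining condition alone and treat instantiation as a separate, contingent claim:

    (forall (x) (iff (Unicorn x)
                     (and (Horse x) (exists (h) (and (Horn h) (hasPart x h))))))
    (not (exists (x) (Unicorn x)))

The definition and the non-existence claim are jointly consistent: negating or dropping the second sentence changes what happens to exist, but it leaves the definition untouched.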

And Hans, Michael, Mike, and Ravi, I mostly agree with your points.
But I have to fly to San Diego tomorrow morning.

Until next week,
John

Chris Mungall

Mar 29, 2018, 2:32:35 AM
to ontolo...@googlegroups.com

Hi Mathias,

Interesting paper, but I'm confused about equation 12. I encoded this in the attached ontology. I also added an axiom to state that delusions must be about something, otherwise equations 11 and 12 don't carry any force (a delusion that is not about anything would be equivalent to a unicorn delusion, since it's not about anything that isn't a unicorn).

Using this ontology, if I state that horned horses don't exist, then unicorn delusions become unsatisfiable. I don't think that's your intent, or the intent of even the kind of weak-tea realism I subscribe to, in which the world of our scientific ontologies is buffered from any kind of phantasmagoric ontology (which may not even be logically consistent). When you try to put these things on the same footing, axioms and unsatisfiability 'leak' from one 'world' to another.

This is the ontology in Manchester syntax:

Prefix: : <http://unicorn.org/>
Ontology: <http://unicorn.org>

ObjectProperty: isAbout
ObjectProperty: hasPart
Class: Horn
Class: Delusion SubClassOf: isAbout some owl:Thing
Class: UnicornDelusion EquivalentTo: Delusion and isAbout only (Horse and hasPart some Horn)
Class: Horse DisjointWith: hasPart some Horn

This is the explanation of unsatisfiability reported by HermiT (attached as PastedImage.png). In outline: every Delusion must be about something, a UnicornDelusion may only be about members of (Horse and hasPart some Horn), and the disjointness axiom makes that class empty, so UnicornDelusion can have no instances.

Note that it gets worse if you add instances, e.g. my own delusion about my pink unicorn:

Individual: MyUnicornDelusion Types: Delusion and isAbout only (Horse and hasPart some Horn)

This results in the whole ontology being inconsistent. i.e. the only consistent worlds are ones in which our delusions are about real things.

Apologies if I'm missing something, but was this your intent?

FWIW I'm not sure there is a use case for reasoning about fantasy entities. The most straightforward thing is to keep direct representations of unicorns and imaginary homeopathic processes out of scientific ontologies, and to have lightweight, minimally axiomatized ontologies of delusions, fiction, etc. if they are required (e.g. for a psychiatric ontology).
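A minimal sketch of such a lightweight pattern (purely illustrative; the names and IRIs are made up):

    Prefix: : <http://example.org/delusions#>
    Ontology: <http://example.org/delusions>

    AnnotationProperty: :description
    Class: :Delusion
    Class: :UnicornDelusion
        SubClassOf: :Delusion
        Annotations: :description "a delusion whose content concerns a horned horse"

Because the 'aboutness' is recorded as an annotation rather than an object-property restriction, no axiom about real horses can make :UnicornDelusion unsatisfiable.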

I enjoyed the paper, though I'm glad the bio-ontology community has moved on from unicorns and onto more pragmatic concerns.

Attachments: PastedImage.png, unicorn.owl

John F Sowa

Apr 2, 2018, 6:12:08 PM
to ontolo...@googlegroups.com
On 3/29/2018 2:32 AM, Chris Mungall wrote:
> I'm not sure there is a use case for reasoning about fantasy entities.

All invention and discovery in science, mathematics, and engineering
is based on imagination and fantasy. Just consider Einstein's Gedanken
experiment about a train traveling at the speed of light. It was and
is pure fantasy. But it led to a revolution in physics and astronomy.

As another example, Elon Musk reminded hard-nosed investors that
his fantasies about the space business and the electric car business
now have a considerable amount of reality to back them up.

Even the unicorn example isn't delusional. The narwhal is a whale
that has a single "horn". Like all cetaceans, it belongs to the
order Artiodactyla (even-toed ungulates), which includes deer.
In searching for new species, biologists often speculate about the
options. There's nothing unreasonable about a one-horned deer.

Fundamental principle: There is a continuum between fantasy and
reality. The difference between them is *not* a matter of ontology.
It's a practical matter of experiment, exploration, design, testing,
development, and marketing.

Note: I apologized to Mathias for deprecating that paper. The
authors did some good, hard work in writing it. But the distinction
they make is not an issue for ontology.

John

Pat Hayes

Apr 2, 2018, 10:45:17 PM
to Chris Mungall, ontolo...@googlegroups.com
Hi Chris

The classical, textbook, Quinean perspective is that the use of a
name, or an existentially quantified variable, is to refer to
some thing that exists. This is why we read the quantifier as
"exists...". This is certainly a coherent view, and it may be
what the founders of logic intended, but it has some problems,
which I think you are encountering here. The chief one is, it is
then logically incoherent to use names which don't refer to real
things. So you can't say that SherlockHolmes is fictional by writing

(forall (x)(not (= x SherlockHolmes) ))

because that sentence is logically inconsistent: simply
by using the name we have implicitly asserted that its referent
does exist.

The textbook solution is to modify the logic so that the inference

(P A) |= (exists (x)(P x))

is no longer valid, giving a different logic where some names
have no referent. But (1) changing logics is expensive and (2)
the resulting logics aren't usually expressive enough.

But there is a different strategy, which is to just reinterpret
the quantifiers as not implying actual existence. The 'universe'
then contains real things but also fictional things, and you can
quantify over these possibilia and give them names and so on, and
the logic works the same as it always has done and everything is
just fine, except you cannot now automatically assume that just
because you are talking about something, that it is real. In
particular, the tautology

(exists (x) (= x A))

does not mean that A exists, only that A is one of the things we
are talking and reasoning about. If you want to say that something
is real, you have to actually SAY it is real: in other words,
existence is a predicate. Which many philosophers find quite
awful, because Kant said that existence is not a predicate. But
if we allow fictional (or otherwise not-real) things into the
universe of discourse, existence has to be a predicate. And I
think we have to allow them, in many ontologies. Maybe not
unicorns, but states of affairs that you want to prevent (in
planning) so you hope will not exist, but they have to be
reasoned about. Or stresses in girders of bridges that haven't been
built yet; or people rumored to have been seen in Afghanistan
last week, or many other examples of things that we need to talk
and reason about, and give names to, even though we know they
don't exist or hope they will never exist or simply are in doubt
about their existence. But the LOGIC of how we reason about them
is unchanged through all these comings-into and goings-out-of
existence.
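To make that concrete, here is a minimal sketch in the same notation as
above (the predicates Real, Fictional, and Detective are illustrative,
not part of any standard vocabulary):

    (forall (x) (if (Fictional x) (not (Real x))))
    (Fictional SherlockHolmes)
    (exists (x) (and (= x SherlockHolmes) (Detective x)))

All three sentences are jointly satisfiable in a universe that includes
possibilia: the quantifier only says that SherlockHolmes is among the
things being talked about, while (Real SherlockHolmes) would be a
separate, contingent assertion that these sentences deliberately do
not make.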

Pat


On 3/29/18 1:32 AM, Chris Mungall wrote:
> Hi Mathias,
>
> Interesting paper, but I'm confused about equation 12. I encoded
> this in the attached ontology. I also added an axiom to state
> that delusions must be about something, otherwise equations 11
> and 12 don't carry any force (a delusion that is not about
> anything would be equivalent to a unicorn delusion, since it's
> not about anything that isn't a unicorn).
>
> Using this ontology, if I state that horned horses don't exist,
> then unicorn delusions become unsatisfiable. I don't think that's
> your intent or the intent of even the kind of weak-tea realism I
> subscribe to, in which the world of our scientific ontologies are
> buffered from any kind of phantasmagoric ontology (which may not
> even be logically consistent). When you try and put these things
> on the same footing, axioms and unsatisfiability 'leaks' from one
> 'world' to another.
>
> This is the ontology in manchester syntax:
>
> Prefix: : <http://unicorn.org/>
> Ontology: <http://unicorn.org>
> ObjectProperty: isAbout
> ObjectProperty: hasPart
> Class: Horn
> Class: Delusion SubClassOf: isAbout some owl:Thing
> Class: UnicornDelusion EquivalentTo: Delusion and isAbout only (Horse and hasPart some Horn)
> Class: Horse DisjointWith: hasPart some Horn
>
> This is the explanation of unsatisfiability in HermiT:

--
-----------------------------------
call or text to 850 291 0667
www.ihmc.us/groups/phayes/
www.facebook.com/the.pat.hayes


John F Sowa

Apr 3, 2018, 12:27:16 AM
to ontolo...@googlegroups.com
On 4/2/2018 10:45 PM, Pat Hayes wrote:
> just reinterpret the quantifiers as not implying actual existence. The
> 'universe' then contains real things but also fictional things, and you
> can quantify over these possibilia and give them names and so on, and
> the logic works the same as it always has done and everything is just
> fine, except you cannot now automatically assume that just because you
> are talking about something, that it is real.

I agree. Since ontologies are often used in design and development,
we should be able to use the same ontology during the design stage
(when the product does not exist), during the development stage (when
a prototype is being built), and after the product ships (when the
thing is really real).

That is a very important use case for an imaginary domain that
eventually becomes a real domain.
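A minimal sketch of that use case, continuing the existence-as-a-predicate
strategy (the product name and lifecycle predicates are invented for
illustration):

    (Product widget42)
    (Designed widget42)
    (not (Manufactured widget42))

When the prototype is built, only the contingent facts change --
(Manufactured widget42) replaces its negation -- while the ontology of
Product, Designed, and Manufactured stays exactly the same.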

Re Sherlock Holmes: Those stories were imaginary, but they were
more effective than any textbook in teaching police departments
how to collect, examine, and interpret evidence.

That's another important use case.

John

Mike

Apr 3, 2018, 4:00:21 PM
to ontolo...@googlegroups.com
John,

If one is to distinguish “imaginary” versus “real” parsimoniously, there seems little left for ontology. The concrete portion of reality obviously has no need for the distinction as nothing there is imaginary. The remaining anthropocentric (social and mental) portion of reality is reasonably, aside from their physiological underpinnings, all cognition, emotion and behavior. Human consciousness of these phenomena may well take the same form regardless of whether they concern a concrete entity or a hypothetical or fantastical one.

So, strictly taken, anthropocentric (social and mental) reality is only imaginary, aside from the ensuing human actions and artifacts that enter concrete reality, and once there, of course, are no longer imaginary. Any “thought” experienced in the human brain is basically that – a thought – whether its propositions are believed to be true or to be false. This internalized state of affairs can carry over to the concrete world as well. Common discourse, for example, may attribute intention and cause to its referents, yet these interpretations are often simply imagined or intuited.

Humans will act concretely as if their social constructs are real but, to anything other than god’s eye, their behavior – individual and collective – is only that which alters the things around them. For explanatory purposes, we may speak of social and mental constructs as real in regard to their impact on human behavior, but that does not seem to differentiate consensual ones from imagined ones in any substantial way. History, to consider one of your examples, is often subject to revision and, outside of humans, exists only to the extent that consequential changes to the physical world endure.

Mike

-----Original Message-----
From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of John F Sowa
Sent: Tuesday, April 3, 2018 12:27 AM
To: ontolo...@googlegroups.com
Subject: Re: [ontolog-forum] Higgs bosons, Mars missions, and unicorn delusions


John F Sowa

Apr 3, 2018, 10:05:51 PM
to ontolo...@googlegroups.com
Mike,

Your note summarizes some of the many deeply entangled issues.
I'll just comment on your last point:

> History, to consider one of your examples, is often subject to
> revision and, outside of humans, exists only to the extent that
> consequential changes to the physical world endure.

Yes. And history is a system of signs that is derived by people
who do three kinds of research: (1) study records left by people
who are long gone; (2) do the same kind of research as geologists,
archaeologists, and forensic investigators; and (3) compare, cross-
check, and evaluate all available evidence for each claim.

But the best we can say about historical research is the same
as we can say about any branch of science: The results are
as accurate as we are capable of producing with our current
resources, tools, and methodologies. But we can never be
absolutely certain.

For any field that develops and stores information -- science,
history, engineering, banking, bookkeeping, journalism, or motor
vehicle registration -- all we can say is that the referents of
the signs are intended to be as accurate as possible. But anyone
who works in those fields knows that errors are inevitable.

Fundamental principle: An ontology -- as a classification and
specification of the kinds of entities in some subject --
is independent of whether the referents of any statement that
uses that ontology happen to exist in the real world.

Implication: There is no difference in principle between an
ontology of a Sherlock Holmes story and an ontology of a report
by Scotland Yard. If anything, the ontology of the imaginary
story may be more useful for a forensic investigation than an
ontology of the actual practices in the late 19th century.

John

Patrick Cassidy

Apr 4, 2018, 12:31:43 PM
to ontolo...@googlegroups.com
+1

Patrick Cassidy
MICRA Inc.
cas...@micra.com
1-908-561-3416


>-----Original Message-----
>From: ontolo...@googlegroups.com [mailto:ontolog-
>fo...@googlegroups.com] On Behalf Of John F Sowa
>Sent: Tuesday, April 03, 2018 10:06 PM
>To: ontolo...@googlegroups.com
>Subject: Re: [ontolog-forum] Higgs bosons, Mars missions, and unicorn
>delusions
>

Mike

Apr 4, 2018, 5:27:36 PM
to ontolo...@googlegroups.com
John,

Yes, as well, these are entangled perspectives. My point was not to separate physical reality from social reality so much as to point out that there are orthogonal perspectives -- none of which (abstract vs concrete, ideal vs existing, occurrent vs continuant, imaginary vs real, situational vs canonical, et cetera) is necessarily important to the application of ontology. What is more likely important is attention to circumscribing the intended domain and the intersection of perspectives that a particular ontology is built to address. Earlier in this chain, Michael DeBellis succinctly identified the futility of defining one reality for all applications. However, if a domain of discourse uses vocabulary to describe both physical and conceptual things, for example, it may still be helpful to ontology users to distinguish which is which.

Mike

-----Original Message-----
From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of John F Sowa
Sent: Tuesday, April 3, 2018 10:06 PM
To: ontolo...@googlegroups.com
Subject: Re: [ontolog-forum] Higgs bosons, Mars missions, and unicorn delusions

John F Sowa

Apr 4, 2018, 9:43:03 PM
to ontolo...@googlegroups.com
On 4/4/2018 5:27 PM, Mike wrote:
> if a domain of discourse uses vocabulary to describe both physical
> and conceptual things, for example, it may still be helpful to ontology
> users to distinguish which is which.


Certainly. That's the point of my article about "Signs and Reality":
http://jfsowa.com/pubs/signs.pdf

The computer is a giant semiotic processor -- something like
Peirce's notion of a quasi-mind. It's physical, but everything
stored and processed in it is a sign. The virtual reality it
creates is also physical, but it's generated from signs.

The word 'imaginary' is misleading, both about the way people
think and about the way computers process signs. And by the way,
Peirce was a pioneer in AI. 63 years before Alan Turing wrote
his article "Thinking Machines", Peirce published an article in
vol. 1 of the _American Journal of Psychology_ (1887) about
"Logical Machines": http://history-computer.com/Library/Peirce.pdf

Marvin Minsky (1963) cited that article in his bibliography of AI.

In 1886, Peirce wrote a letter to his former student, Allan Marquand,
who was building a mechanical logic machine, to suggest electricity
as a better basis than mechanical linkages. In it, he included a
diagram of switches in series to represent AND, and switches in
parallel for OR. For an article about Marquand and his design,
see http://history-computer.com/ModernComputer/thinkers/Peirce.html

John