Wikipedia on upper ontology


Bruce Schuman

Jan 1, 2016, 4:11:20 PM
to ontolog-forum

This holiday I’ve been getting the hang of the Office 2016 Word program, and I’m suddenly finding myself with a navigation tool that lets me drag and drop hundreds of text blocks pretty easily.  So my little vision for the Quixotic “universal upper-level ontology” has been percolating a bit, and I’m excited about it.  I’m feasting on Wikipedia.

 

My ideas have to do with defining all conceptual form in terms of a universal algebraic primitive – an approach which, it looks to me, can take us past the problem of trying to patch boxy category systems together when reality is continuously variable.  Put people in the wrong box, and in ten seconds the sparks are flying.  We need continuous variation in digital category structure.

 

Along the way – I ran into what looks like an authoritative and credible overview/survey of the major issues surrounding upper (high-level) ontologies on Wikipedia.  People will complain about anything – but I like this guy, and I think his five-page essay/overview is very solid and illuminating.

 

Anybody got a comment on this?

 

https://en.wikipedia.org/wiki/Upper_ontology

 

 

Arguments for the infeasibility of an upper ontology

Historically, many attempts in many societies have been made to impose or define a single set of concepts as more primal, basic, foundational, authoritative, true or rational than others.

In the kind of modern societies that have computers at all, the existence of academic and political freedoms implies that many ontologies will simultaneously exist and compete for adherents. The differences between them may be narrow and appear petty to those not deeply involved in the process, but so too did many of the theological debates of medieval Europe, and those still led to schisms or wars, or were used as excuses for the same. The tyranny of small differences, which standard ontologies seek to end, may continue simply because other forms of tyranny are even less desirable. So private efforts to create competitive ontologies that win adherents by virtue of better communication may proceed, but they tend not to result in long-standing monopolies.

A deeper objection derives from ontological constraints that philosophers have found historically inescapable. Some argue that even searching for a general-purpose ontology implies a transcendent perspective or omniscience – see God's eye view. Since any such ontology is a social or cultural artifact, there is no purely objective perspective from which to observe the whole terrain of concepts and derive any one standard.

A narrower and much more widely held objection is implicature: the more general a concept and the more useful it is for semantic interoperability, the less likely it is to be reducible to symbolic concepts or logic, and the more likely it is to be simply accepted by the complex beings and cultures relying on it. In the same sense that a fish doesn't perceive water, we don't see how complex and involved the process of understanding basic concepts is.

- There is no self-evident way of dividing the world up into concepts, and certainly no non-controversial one.

- There is no neutral ground that can serve as a means of translating between specialized (or "lower" or "application-specific") ontologies.

- Human language itself is already an arbitrary approximation of just one among many possible conceptual maps. To draw any necessary correlation between English words and any number of intellectual concepts we might like to represent in our ontologies is just asking for trouble. (WordNet, for instance, is successful and useful precisely because it does not pretend to be a general-purpose upper ontology; rather, it is a tool for semantic/syntactic/linguistic disambiguation, richly embedded in the particulars and peculiarities of the English language.)

- Any hierarchical or topological representation of concepts must begin from some ontological, epistemological, linguistic, cultural, and ultimately pragmatic perspective. Such pragmatism does not allow for the exclusion of politics between persons or groups; indeed, it requires that they be considered as perhaps more basic primitives than any that are represented.

Those who doubt the feasibility of general-purpose ontologies are more inclined to ask “what specific purpose do we have in mind for this conceptual map of entities, and what practical difference will this ontology make?” This pragmatic philosophical position surrenders all hope of devising the encoded ontology version of “everything that is the case” (Wittgenstein, Tractatus Logico-Philosophicus).

According to Barry Smith in The Blackwell Guide to the Philosophy of Computing and Information (2004), "the initial project of building one single ontology, even one single top-level ontology, which would be at the same time non-trivial and also readily adopted by a broad population of different information systems communities, has largely been abandoned." (p. 159)

Finally, there are objections similar to those against artificial intelligence. Technically, the complexity of concept acquisition and of the social/linguistic interactions of human beings suggests that any axiomatic foundation of "most basic" concepts must be cognitive, biological, or otherwise difficult to characterize, since we don't have axioms for such systems. Ethically, any general-purpose ontology could quickly become an actual tyranny by recruiting adherents into a political program designed to propagate it and its funding means, and possibly defend it by violence. Historically, inconsistent and irrational belief systems have proven capable of commanding obedience to the detriment or harm of persons both inside and outside a society that accepts them. How much more harmful would a consistent, rational one be, were it to contain even one or two basic assumptions incompatible with human life?

Arguments for the feasibility of an upper ontology

Many of those who doubt the possibility of developing wide agreement on a common upper ontology fall into one of two traps:

1. They assert that there is no possibility of universal agreement on any conceptual scheme; but they ignore the fact that a practical common ontology does not need universal agreement. It needs only a user community large enough to make it profitable for developers to use it as a means to general interoperability, and for third-party developers to build utilities that make it easier to use; and

2. They point out that developers of data schemes find different representations congenial for their local purposes; but they do not demonstrate that these different representations are in fact logically inconsistent.

In fact, different representations of assertions about the real world (though not philosophical models), if they accurately reflect the world, must be logically consistent, even if they focus on different aspects of the same physical object or phenomenon. If any two assertions about the real world are logically inconsistent, one or both must be wrong, and that is a topic for experimental investigation, not for ontological representation. In practice, representations of the real world are created as, and known to be, approximations to the basic reality, and their use is circumscribed by the limits of error of measurements in any given practical application. Ontologies are entirely capable of representing approximations, and are also capable of representing situations in which different approximations have different utility. Objections based on the different ways people perceive things attack a simplistic, impoverished view of ontology.

The objection that there are logically incompatible models of the world is true, but in an upper ontology those different models can be represented as different theories, and the adherents of those theories can use them in preference to other theories, while preserving the logical consistency of the necessary assumptions of the upper ontology. The necessary assumptions provide the logical vocabulary with which to specify the meanings of all of the incompatible models. It has never been demonstrated that incompatible models cannot be properly specified with a common, more basic set of concepts, while there are examples of incompatible theories that can be logically specified with only a few basic concepts.

 

******************

 

PS – thanks for the James Martin link!  Fabulous graphic on the cover of this PDF:

 

http://www.oxfordmartin.ox.ac.uk/downloads/reports/ideas-into-action.pdf

 

I used to have about 8 James Martin books – Databases, distributed processes, “Telematic Society”, etc.  I ate that stuff up…

 

Martin, James (1981). An End-User's Guide to Data Base. Prentice-Hall, Englewood Cliffs, NJ.

_____ (1977). Computer Data-Base Organization, 2nd Ed. Prentice-Hall, Englewood Cliffs, NJ.

_____ (1981). Computer Networks and Distributed Processing: Software, Techniques, and Architecture. Prentice-Hall, Englewood Cliffs, NJ.

_____ (1981). Design and Strategy for Distributed Data Processing. Prentice-Hall, Englewood Cliffs, NJ.

_____ (1977). Future Developments in Telecommunications, 2nd Ed. Prentice-Hall, Englewood Cliffs, NJ.

_____ (1990). Information Engineering, Vols. 1-3. Prentice-Hall, Englewood Cliffs, NJ.

_____ (1981). Telematic Society: A Challenge for Tomorrow. Prentice-Hall, Englewood Cliffs, NJ.

 

Bruce Schuman, Santa Barbara CA USA

 

 


Michael Brunnbauer

Jan 3, 2016, 3:25:13 PM
to Bruce Schuman, ontolog-forum

Happy New Year!

On Fri, Jan 01, 2016 at 01:11:08PM -0800, Bruce Schuman wrote:
> Anybody got a comment on this?
> https://en.wikipedia.org/wiki/Upper_ontology

That article lists 17 different upper ontologies. A point for the opponents
of those, it seems.

I wonder if the "semantic interoperability" - which seems to be the main reason
behind this - is actually deliverable in practice?

Wouldn't additional primitives and/or axioms in lower level ontologies be
problematic?

And even without: How often can you define the same structure in different
ways using the same primitives (e.g. the real numbers in set theory)?
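Michael's closing question can be made concrete with the two standard set-theoretic constructions of the natural numbers. A small Python sketch (added purely for illustration; the constructions are the classic von Neumann and Zermelo encodings, nothing specific to upper ontologies) shows the same structure built two different ways from the same primitive:

```python
def von_neumann(n):
    """Von Neumann naturals: 0 = {}, succ(x) = x | {x}."""
    s = frozenset()
    for _ in range(n):
        s = s | frozenset([s])
    return s

def zermelo(n):
    """Zermelo naturals: 0 = {}, succ(x) = {x}."""
    s = frozenset()
    for _ in range(n):
        s = frozenset([s])
    return s

# Both encode "3" from the single primitive "set", but as objects of the
# theory they are not equal - the encodings are isomorphic, not identical.
print(von_neumann(3) == zermelo(3))  # False
```

Both encodings support the same arithmetic, yet no statement internal to set theory makes them the same object, which is exactly the interoperability worry.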

Regards,

Michael Brunnbauer

--
++ Michael Brunnbauer
++ netEstate GmbH
++ Geisenhausener Straße 11a
++ 81379 München
++ Tel +49 89 32 19 77 80
++ Fax +49 89 32 19 77 89
++ E-Mail bru...@netestate.de
++ http://www.netestate.de/
++
++ Sitz: München, HRB Nr.142452 (Handelsregister B München)
++ USt-IdNr. DE221033342
++ Geschäftsführer: Michael Brunnbauer, Franz Brunnbauer
++ Prokurist: Dipl. Kfm. (Univ.) Markus Hendel

signature.asc

Hans Teijgeler

Jan 3, 2016, 4:40:02 PM
to Michael Brunnbauer, Bruce Schuman, ontolog-forum

Michael,

I added another upper ontology: ISO 15926-2.

We defined some 180 templates, each of which represents a data-driven semantic unit, such as ClassOfIndividualHasIndirectPropertyWithMaximumValue.
Each template has a "signature"; for the above template it is:

Role No   Role Name                  Role Object Type
1         hasPossessorType           dm:ClassOfIndividual
2         hasIndirectPropertyType    dm:ClassOfIndirectProperty
3         valMaximumValue            dm:ExpressReal
4         hasScale                   dm:Scale

Role 1 refers to the URI of the applicable instance of ClassOfIndividual (for example, a requirements class for pressure vessel V121),
Role 2 refers to the URI of the applicable instance of ClassOfIndirectProperty (standardized in a Reference Data Library),
Role 3 lists the applicable numeric value, and
Role 4 refers to the URI of the applicable instance of Scale (standardized in a Reference Data Library).

All templates are defined here in OWL (when opened with a text editor, check line 9252 and following for the above template).

This results in a very large number of possible uses of the same structure. The semantics are defined, in the background and in FOL, by a small application model that exclusively uses ISO 15926-2 entity types.

One such usage, in a Semantic Web setting, is shown below in Turtle format:

:T29600495D7B94512B8FA1F73959FEEB2 rdf:type tpl:ClassOfIndividualHasIndirectPropertyWithMaximumValue ;
    tpl:hasPossessorType :CO_V121 ;              # requirements class for vessel V121
    tpl:hasIndirectPropertyType rdl:RDS1470835011 ; # Upper Limit Design Pressure
    tpl:valMaximumValue "15"^^xsd:decimal ;
    tpl:hasScale rdl:RDS1348874 ;                # barg
    meta:valEffectiveDate "2014-06-22T00:00:00Z"^^xsd:dateTime . # if no longer valid, we add a meta:valDeprecationDate

NOTE: I made one shortcut above in order not to be too verbose. The website http://15926.org gives the whole story.

Regards,
Hans


Hans Teijgeler,
OntoConsult,
Netherlands
15926.org



--
All contributions to this forum by its members are made under an open content license, open publication license, open source or free software license. Unless otherwise specified, all Ontolog Forum content shall be subject to the Creative Commons CC-BY-SA 4.0 License or its successors.
---
You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ontolog-foru...@googlegroups.com.
To post to this group, send email to ontolo...@googlegroups.com.
Visit this group at https://groups.google.com/group/ontolog-forum.
To view this discussion on the web visit https://groups.google.com/d/msgid/ontolog-forum/20160103202508.GA16488%40netestate.de.
For more options, visit https://groups.google.com/d/optout.

Bruce Schuman

Jan 3, 2016, 6:32:01 PM
to ontolog-forum

Dear Michael, dear List --

 

Thanks for this comment, Michael -- and also thanks to MW, who sent me a detailed response to particular points in the article, which I am still considering.  Thanks to anyone else with thoughts and suggestions on this theme.

 

THE SEARCH FOR SIMPLIFICATION

 

What I personally want to see emerge is an ontology based on a theory of concepts where the entire structure is 100% linear and recursive and essentially built from "one algebraic primitive" -- which, yes, does map the real numbers, but builds up from there across the entire range of conceptual abstraction in the same terms (i.e., using the same primitive).  I think this is possible -- and if so, I think it might (?) emerge as an amazing and awesome simplification.

My "one primitive" idea is this: the proper starting point is the concept of "distinction" (or "cut") -- which we can then understand as algebraically isomorphic to "dimension" -- which we can then understand as algebraically isomorphic to "ordered class" (perhaps by way of the concept "list" or "taxon").  The entire structure of abstraction (and all related models, such as taxonomies, kind-of hierarchies, part-of hierarchies, etc.) can be described in this way.

A general theory of concepts can be built up from basic measurement theory (following dimensional analysis and basic common units of measure) -- and one writer who has done this (cited, perhaps surprisingly, by world-class object-oriented programmer Grady Booch) is the philosopher Ayn Rand in her book "Introduction to Objectivist Epistemology".  Rand describes abstraction as "measurement omission".  She sees these things very clearly.
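The chain "distinction -> dimension -> ordered class" can be sketched in a few lines of Python (a toy illustration only, not Bruce's formalism; the temperature cuts are hypothetical):

```python
# A "distinction" is modeled as a cut on a continuum; the cuts in order
# form a "dimension"; the intervals between adjacent cuts form an
# "ordered class" of taxa.
cuts = [30.0, 0.0, 20.0, 10.0]              # distinctions on a temperature line
dimension = sorted(cuts)                    # a dimension: the cuts in order
taxa = list(zip(dimension, dimension[1:]))  # ordered class: the intervals

def classify(value):
    """Assign a value to the taxon (interval) that contains it."""
    for lo, hi in taxa:
        if lo <= value < hi:
            return (lo, hi)
    return None  # the value falls outside every taxon

print(classify(15.0))  # (10.0, 20.0)
```

Because the cuts are just numbers, refining a category system is merely adding a cut, which is one way to read the earlier plea for "continuous variation in digital category structure."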

 

Computer hardware is organized in a hierarchy of layers; computer languages are organized in a hierarchy of layers; and classification schemes and taxonomies (and indeed, ontologies) are organized as a hierarchy of layers.  Could there be one ideal linearly recursive straight line integration of this entire framework across all these levels??  Rockets go off in my brain sometimes on this subject.....

 

***

 

Here's a list of operating principles that is emerging for me:

 

1) Reality is an undifferentiated continuum, with no boundaries, no objects, no categories, no properties, and no measurements.  “Distinctions arise” in the human mind because it is convenient, not because those distinctions exist “in reality” (though this question is open for discussion).  Distinctions are mental constructions that humans compile into complex abstractions, maps of reality, scientific theories, etc.

2) We are discussing the properties of models of reality, not of reality itself.  In this system, there are no “objects in the real world”.  All objects in this framework are abstract symbolic constructions similar to maps, represented in a cognitive or computing medium by some interpretable state of that medium.  The objective in any description, or in science, is to establish a correlation between “experience in the real world” and a “model of reality” developed in abstract symbolic terms.  “Empirical confirmation” by scientific testing refines and confirms the validity and accuracy of the model, establishing the correlation of the model (map) with reality.

3) All semantic objects are constructed in this medium as composite cascades of the fundamental information structure “bit”, defined as “off/on” or “0/1”, and understood as physically represented in some medium as the “state” of that medium.  Every element – every word, every number, every term, every concept – is explicitly defined as a composite linear cascade of bits, combined into “bytes” and higher-level composite elements.  In every case, every item or object developed in the logic has a 100% unambiguous decomposition to its “absolutely grounded” definition as a hierarchy of bits.  This approach presumes that ignoring the explicit definition of this cascade introduces significant uncertainty and ambiguity into any logic developed on the basis of higher-level abstract objects and symbols.

4) The emergence of distinctions is a process that is “motivated”.  “There is a reason” the human mind makes a distinction, and that reason influences or controls the attributes of the distinction.

5) All human categories, classes, and abstractions can be understood as “constructed from distinctions”.  The broad objective of this review is to show how this is true, and to generalize the basic logical processes that can exist within this framework.

6) There is no strict, rigid, or universal meaning for the terms “class”, “category”, “set”, or “type”.  Such terms are always defined in an ad hoc or stipulative way in the context of some particular system of definitions, and they have a particular meaning assigned by that system and valid within that system.
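Principle 3 can be sketched in a couple of lines (my example; UTF-8 is just one concrete choice of "interpretable state", which the principle itself leaves open):

```python
def to_bits(symbol):
    """Decompose a symbolic element into its linear cascade of bits,
    grounded here via the UTF-8 byte encoding."""
    return "".join(f"{byte:08b}" for byte in symbol.encode("utf-8"))

print(to_bits("A"))  # 01000001  (one byte, eight bits)
```

Every string thus has exactly one decomposition to bits under a fixed encoding, which is the "100% unambiguous" grounding the principle asks for.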

 

Bruce Schuman, Santa Barbara CA USA

 

 

 


Patrick Cassidy

Jan 3, 2016, 7:08:59 PM
to Michael Brunnbauer, Bruce Schuman, ontolog-forum
Answers to 2 questions from Michael B:

>
>Wouldn't additional primitives and/or axioms in lower level ontologies be
>problematic?
>
Not generally. When one has a set of domain ontologies that can interoperate by translation, using the common foundation ontology as an interlingua, the foundation ontology will have all the semantic primitives necessary to logically specify the meanings of all of the domain terms in those ('legacy') ontologies. If a new domain ontology requires a new primitive element, it can be added to the foundation ontology. This will not disturb the interoperability of the legacy domain ontologies, because the preexisting ontology applications will never need and never reference the new primitive. If the new primitive, or terms in the new domain ontology that use it, are of interest to legacy programs, they will be able to properly interpret those new terms, since the meaning of the new primitive will now be specified in relation to all the other terms. In other words, new primitives will not break existing applications.

>
>And even without: How often can you define the same structure in different
>ways using the same primitives (e.g. the real numbers in set theory)?
>
As often as is useful or desired - why not? An important point is that, regardless of how many different ways of specifying a term are used, if they are logically equivalent, then they can be converted into each other, satisfying the usage preferences of any number of communities. It is assumed that any domain using a common foundation ontology for local purposes will incorporate only those terms required for the local application, to avoid unnecessary complexity. Multiple logically consistent ways to refer to the same thing can exist side by side peacefully, but they add complexity and will be pruned to the minimum necessary for local applications. For example, SUMO did not have a class for 'Mother', but it had a relation 'mother' that relates a woman to someone who is her child. An ontology with a class 'Mother' will then include in that class any woman who is related to another person by that relation - and vice versa. Indeed, the concern about building an ontology too large has inhibited some from looking to a general ontology such as Cyc, it would seem, because those ontologies were not designed as an inventory of semantic primitives, and it was not obvious how to extract only what is needed for local usage. The Cyc microtheories were designed to ameliorate that problem, but paradoxically made Cyc seem even more complex and difficult to use.
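The SUMO example (a class 'Mother' interdefinable with a relation 'mother') can be sketched in a few lines; the data here is hypothetical and the code is my illustration, not SUMO's actual axioms:

```python
# (woman, child) pairs standing in the 'mother' relation - hypothetical data.
mother = {("Alice", "Bob"), ("Carol", "Dave"), ("Alice", "Eve")}

# The class 'Mother' derived from the relation: every woman related to
# some person by 'mother' is a member of the class.
Mother = {w for (w, c) in mother}

def children_of(woman):
    """Going the other way: recover the relation for a member of the class."""
    return {c for (w, c) in mother if w == woman}

print(sorted(Mother))  # ['Alice', 'Carol']
```

The two representations carry the same information, so either community's preferred vocabulary can be mechanically converted to the other - which is the logical-equivalence point above.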


Factual assertions in modern human languages can be translated very well into each other. It gets trickier for statements with emotional content, like poetry. But interoperability is not intended for poetic applications, and a common foundation ontology can serve at least as well as translation between natural languages (or local idioms), and probably better, because the terms are more precisely defined. Keep in mind that a common language is in fact precisely what the scientific community has - namely, English. Even educated Chinese make a point of including their TOEFL (Test of English as a Foreign Language) scores on their resumes. This doesn't stop the Chinese from speaking Chinese to each other, but when they go to an international conference they talk in English (sort of). People will learn a common language if there is adequate motivation.

If there is a widely used foundation ontology, it will be useful even for local teams that have no need to interoperate: having a properly tested set of base concepts makes ontology development easier and minimizes unnecessary duplication. No one will need the full foundation ontology for a domain application, but it is fairly simple to extract the more general concepts pertaining to a particular domain, along with some additional classes and relations that can be useful locally.

Pat


Patrick Cassidy
MICRA Inc.
cas...@micra.com
1-908-561-3416

>-----Original Message-----
>From: ontolo...@googlegroups.com [mailto:ontolog-
>fo...@googlegroups.com] On Behalf Of Michael Brunnbauer
>Sent: Sunday, January 03, 2016 3:25 PM
>To: Bruce Schuman
>Cc: 'ontolog-forum'
>Subject: Re: [ontolog-forum] Wikipedia on upper ontology
>
>
>Happy New Year!
>
>On Fri, Jan 01, 2016 at 01:11:08PM -0800, Bruce Schuman wrote:
>> Anybody got a comment on this?
>> https://en.wikipedia.org/wiki/Upper_ontology
>
>That article lists 17 different upper ontologies. A point for the opponents of
>those, it seems.
>
>I wonder if the "semantic interoperability" - which seems to be the main
>reason behind this - is actually deliverable in practice?
>
>Wouldn't additional primitives and/or axioms in lower level ontologies be
>problematic?
>
>And even without: How often can you define the same structure in different
>ways using the same primitives (e.g. the real numbers in set theory)?
>
>Regards,
>
>Michael Brunnbauer
>
>--
>++ Michael Brunnbauer
>++ netEstate GmbH
>++ Geisenhausener Straße 11a
>++ 81379 München
>++ Tel +49 89 32 19 77 80
>++ Fax +49 89 32 19 77 89
>++ E-Mail bru...@netestate.de
>++ http://www.netestate.de/
>++
>++ Sitz: München, HRB Nr.142452 (Handelsregister B München) USt-IdNr.
>++ DE221033342
>++ Geschäftsführer: Michael Brunnbauer, Franz Brunnbauer
>++ Prokurist: Dipl. Kfm. (Univ.) Markus Hendel
>

Bruce Schuman

Jan 3, 2016, 7:37:34 PM
to ontolog-forum
Just to tack this link/article into this conversation -- I am finding this article on "Semantic Interoperability" very helpful and clarifying, and would say that it addresses some of the issues mentioned by Pat:

https://en.wikipedia.org/wiki/Semantic_interoperability

Bruce Schuman, Santa Barbara CA USA
http://networknation.net/matrix.cfm





Matthew West

Jan 4, 2016, 4:14:27 AM
to Michael Brunnbauer, Bruce Schuman, ontolog-forum
Dear Michael,

You wrote:
I wonder if the "semantic interoperability" - which seems to be the main reason behind this - is actually deliverable in practice?

Wouldn't additional primitives and/or axioms in lower level ontologies be problematic?

[MW>] Pat C believes that there is some finite set of primitives, so this is not a problem. I believe (I'm not sure if it can be proved or not) that you can always add a new primitive, so your issue is relevant.
For me the consequence is only that your integrating ontology (which is not the same as an upper level ontology) is capable of extension to incorporate new primitives as they become relevant. This has consequences for your upper ontology to be able to cope with that. But I'm comfortable that is doable (and I've set out how in my book).

Regards

Matthew West
Information Junction
Mobile: +44 750 3385279
Skype: dr.matthew.west
matthe...@informationjunction.co.uk
http://www.informationjunction.co.uk/
https://www.matthew-west.org.uk/
This email originates from Information Junction Ltd. Registered in England and Wales No. 6632177.
Registered office: 8 Ennismore Close, Letchworth Garden City, Hertfordshire, SG6 2SU.



Michael Brunnbauer

unread,
Jan 4, 2016, 11:17:54 AM1/4/16
to Patrick Cassidy, ontolog-forum

Hello Patrick,

On Sun, Jan 03, 2016 at 07:08:53PM -0500, Patrick Cassidy wrote:
> An important point is that, regardless of how many different ways of specifying a term are used, if they are logically equivalent, then they can be converted into each other, satisfying the usage preferences of any number of communities.

Is it not the point of "semantic interoperability" that this can be done
automatically? I doubt this is possible. If I construct two isomorphic
structures within a theory, those will usually not be logically equivalent
within that theory.
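One standard illustration of this point (my gloss, not Michael's own example): the von Neumann and Zermelo encodings of the natural numbers in set theory are isomorphic as models of arithmetic, yet sentences about them differ:

```latex
% von Neumann: n+1 = n \cup \{n\},  so  2_{\mathrm{vN}} = \{\emptyset,\{\emptyset\}\}
% Zermelo:     n+1 = \{n\},         so  2_{\mathrm{Z}}  = \{\{\emptyset\}\}
\emptyset \in 2_{\mathrm{vN}} \qquad\text{but}\qquad \emptyset \notin 2_{\mathrm{Z}}
```

So converting between the two encodings requires the isomorphism to be supplied explicitly; interchangeability does not come for free from isomorphism.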

I suspect there are better reasons for using upper ontologies than
interoperability.

Regards,

Michael Brunnbauer

--
++ Michael Brunnbauer
++ netEstate GmbH
++ Geisenhausener Straße 11a
++ 81379 München
++ Tel +49 89 32 19 77 80
++ Fax +49 89 32 19 77 89
++ E-Mail bru...@netestate.de
++ http://www.netestate.de/
++
++ Sitz: München, HRB Nr.142452 (Handelsregister B München)
++ USt-IdNr. DE221033342

Michael Brunnbauer

unread,
Jan 4, 2016, 11:28:09 AM1/4/16
to Hans Teijgeler, ontolog-forum

Hello Hans,

thanks for giving a concrete example. So those templates would be what the
lower-level ontologies using the ISO 15926 upper ontology would look like -
a bunch of iff definitions?

How confident are you that people who understand the stuff would come up with
the same definitions for the same concepts?

Regards,

Michael Brunnbauer

On Sun, Jan 03, 2016 at 10:39:49PM +0100, Hans Teijgeler wrote:
> Michael,
>
> I added another upper ontology: ISO
> <http://www.15926.org/topics/data-model/index.htm> 15926-2.
>
> We defined some 180 templates
> (http://www.15926.org/15926_template_specs.php), each of them represented
> in OWL at
> http://www.15926.org/15926_template_specs.php?sid=3ab3b197c2bdcdf92de67b3169693a7f&mode=owl
> (when opened with a text editor, check line 9252 and following for the
> above template).
>
> This results in a very large number of possible uses of the same structure.
> The semantics are defined, in the background and in FOL, by a small
> application model that exclusively uses ISO 15926-2 entity types.
>
> One such usage, in a Semantic Web setting, is shown below in Turtle format:
>
> :T29600495D7B94512B8FA1F73959FEEB2 rdf:type
> tpl:ClassOfIndividualHasIndirectPropertyWithMaximumValue ;
>
> tpl:hasPossessorType :CO_V121 ; # Requirements class for vessel V121
>
> tpl:hasIndirectPropertyType rdl:RDS1470835011 ;
> # Upper Limit Design Pressure (http://posccaesar.org/rdl/page/RDS1470835011)
>
> tpl:valMaximumValue "15"^^xsd:decimal ;
>
> tpl:hasScale rdl:RDS1348874 ; # barg (http://posccaesar.org/rdl/page/RDS1348874)
>
> meta:valEffectiveDate "2014-06-22T00:00:00Z"^^xsd:dateTime . # if no
> longer valid, we add a meta:valDeprecationDate
>
> NOTE - Above I made one shortcut in order not to make it too verbose. The
> website http://15926.org gives the whole story.
>
> Regards,
> Hans
>
> Hans Teijgeler,
> OntoConsult,
> Netherlands
> <http://15926.org> 15926.org
>
>

--
++ Michael Brunnbauer
++ netEstate GmbH
++ Geisenhausener Straße 11a
++ 81379 München
++ Tel +49 89 32 19 77 80
++ Fax +49 89 32 19 77 89
++ E-Mail bru...@netestate.de
++ http://www.netestate.de/
++
++ Sitz: München, HRB Nr.142452 (Handelsregister B München)
++ USt-IdNr. DE221033342

John Bottoms

unread,
Jan 4, 2016, 11:51:52 AM1/4/16
to ontolo...@googlegroups.com
Matthew,

This is another case of "Those Old Guys Stole All My Ideas!".

On the first statement: yes, there needs to be "semantic
interoperability" in some form. However, the existing definitions are
too narrow to encompass all applications.

On the second statement: no, primitives or axioms in the lower-level
ontology are why we came to this arena. The upper ontology universals
should be sufficiently abstract that they don't care about what's in the
lower levels. That responsibility belongs to the predicates of each element.

Here is the proof (reductio ad absurdum): if a new primitive effectively
ripples up in a well-formed ontology, then we must assume it will ripple up
to the level necessary to encompass the requirements of the new primitive.
This entails rippling up to <entity>, which we can see is meaningless.

It is left to discussion what a "well-formed ontology" means. But
it is clear that basic security rules are required: if a system starts
in a secure (meaningful) state and each transaction is secure
(meaningful), then the new system state (the ontology) is secure
(meaningful). It is acknowledged that misunderstanding and bad data can
corrupt this process, which is the next discussion. Bad data added to an
ontology can cause a semantic ripple both vertically and horizontally
through the ontology. These may echo for some time.

In 1985, as Charles Goldfarb (ex-IBMer) worked to move the GML
specification from IBM to ISO SGML, he discussed the extensibility of a
grammar (so, 30 years ago). There are three basic practices for extending
a grammar using a metagrammar:

1. There must be within the metagrammar a grammar that supports new
entities.

2. There must be a definition of the new entity that is to be added.

3. There must be a system test that tests whether the new entity has
been added correctly. Further tests may be required to verify that the
resulting application performs as intended.
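A minimal sketch of those three practices in code (a made-up registry standing in for a real metagrammar; nothing here is actual SGML machinery, and all names are invented):

```python
# Hypothetical mini-"metagrammar": a registry supporting the three
# extensibility practices named above. Purely illustrative, not SGML.

class Grammar:
    def __init__(self):
        self.entities = {}

    def declare(self, name, definition):
        """Practices 1 and 2: the grammar supports adding a new entity,
        and the new entity must come with a definition."""
        if not definition:
            raise ValueError(f"entity {name!r} requires a definition")
        self.entities[name] = definition

    def verify(self, name):
        """Practice 3: a system test that the entity was added correctly."""
        return name in self.entities and bool(self.entities[name])

g = Grammar()
g.declare("figure", "block element wrapping an image and a caption")
print(g.verify("figure"))   # True
print(g.verify("sidebar"))  # False: never declared
```

Further application-level tests (Goldfarb's "resulting application performs as intended") would sit on top of `verify`.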

I look on these as the universals of extensibility; other components
can be added. Further, we should note that these are missing from W3C
specifications since HTML5, which abandons SGML. This precludes the user
from adding new entities to a conforming application.

-John Bottoms
FirstStar Systems
Concord, MA USA

Patrick Cassidy

unread,
Jan 4, 2016, 12:22:23 PM1/4/16
to Michael Brunnbauer, ontolog-forum
Conversions of data from one terminology to another can and should be done automatically through a common foundation ontology.
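As a toy sketch of what conversion through a common foundation ontology could mean mechanically (every name here — the vocabularies, the `fo:` concepts — is invented for illustration, not taken from any real foundation ontology):

```python
# Hub-and-spoke term conversion, hypothetical throughout.
# Each community declares how its local terms map to foundation concepts.
VOCAB_A = {"heat_exchanger": "fo:HeatTransferEquipment",
           "pump": "fo:FluidMover"}
VOCAB_B = {"HX_unit": "fo:HeatTransferEquipment"}

def convert(term, source_vocab, target_vocab):
    """Translate a local term into another community's term by going
    through the shared foundation concept. Succeeds only when both
    communities mapped onto the same (logically equivalent) concept."""
    concept = source_vocab.get(term)
    if concept is None:
        return None  # term has no declared foundation mapping
    for target_term, target_concept in target_vocab.items():
        if target_concept == concept:
            return target_term
    return None  # no shared foundation concept: conversion fails

print(convert("heat_exchanger", VOCAB_A, VOCAB_B))  # HX_unit
print(convert("pump", VOCAB_A, VOCAB_B))            # None
```

The `None` branch is where Michael's objection bites: when the two communities did not map onto logically equivalent concepts, no automatic conversion exists.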

Demonstrating that functionality on a non-trivial level would require linking several significant ontology-based applications that can profitably interoperate. One needs to find such applications and then find the funding for the demo -- much more difficult than just building a good foundation ontology.

For the task of just finding good modules to include in one's domain ontology, a repository of modules such as that overseen by Mike Gruninger (http://www.cs.toronto.edu/~torsten/publications/MGruninger_AO-12.pdf) would also be quite useful.

Pat

Patrick Cassidy
MICRA Inc.
cas...@micra.com
1-908-561-3416


Hans Teijgeler

unread,
Jan 4, 2016, 6:50:38 PM1/4/16
to Michael Brunnbauer, ontolog-forum

Hi Michael,

[MB] So those templates would be how the lower level ontologies using the ISO 15926 upper ontology would look like - a bunch of iff definitions?
[HT] Forgive my ignorance, but I don't know what you mean by "iff definitions". Google couldn't tell me either. If I may translate that to "application models", the answer is yes.

[MB] How confident are you that people understanding the stuff would come up with the same definitions for the same concepts?
[HT] The definitions of the core concepts are given in the Reference Data Library, just as everybody does with vocabularies etc. The mapping of data in some data store to template instances isn't yet everybody's cup of tea, but that holds for any other way of representing information.

Our community has been, and still is, working on a way to map "engineering language" directly to a specialized template or a set of interrelated templates. Sometimes that gets pretty complex, but once it is done and published it can be used without much more effort.

A nice example is this one: When we measure fluid flow rate with an orifice plate (a plate with a hole, inserted in a pipe, where the fluid flow creates a differential pressure across that plate) we use the property "Beta Ratio", being the ratio between the cross section area of that hole and the cross section area inside the pipe. It is not a property of that orifice plate, but engineers often believe it is.

That's where the confusion starts, unless you do the modeling work for them, so that they can fill in the blanks.

Regards,
Hans

PS I am always puzzled why a forum like this doesn't use the HTML format as default. Images like above get lost in plain text format.

Hans Teijgeler,
OntoConsult,
Netherlands
http://15926.org



beta-ratio.png

Pat Hayes

unread,
Jan 5, 2016, 2:08:19 AM1/5/16
to Hans Teijgeler, Michael Brunnbauer, ontolog-forum

On Jan 4, 2016, at 3:50 PM, Hans Teijgeler <hans.te...@quicknet.nl> wrote:

> Hi Michael,
>
> [MB] So those templates would be how the lower level ontologies using the ISO 15926 upper ontology would look like - a bunch of iff definitions?
> [HT] Forgive me my ignorance, but I don't know what you mean with "iff definitions". Google couldn't tell me either. If I may translate that to "application models" the answer is Yes.

IFF is mathspeak for "If and only if", and an "IFF definition" means something like "x is a P if, and only if, Q", where Q states the exact and precise conditions which are sufficient, and exactly sufficient, for x to be a P. In other words, a very strict, exact definition of P which admits no exceptions, ambiguity or slackness of any kind.

As many people will tell you, such definitions are quite rare outside mathematics.
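For instance, the textbook iff definition (my example, with all its well-known brittleness):

```latex
\forall x\,\bigl(\mathrm{Bachelor}(x) \;\leftrightarrow\; \mathrm{Man}(x) \wedge \neg\,\mathrm{Married}(x)\bigr)
```

Here the right-hand side states conditions meant to be exactly sufficient, which is precisely what is so hard to achieve for non-mathematical concepts.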

Best wishes

Pat Hayes

>
> [MB] How confident are you that people understanding the stuff would come up with the same definitions for the same concepts?
> [HT] The definitions of the core concepts are given in the Reference Data Library, just as everybody does with vocabularies etc. The mapping of data in some data store to template instances isn't yet everybody's cup of tea, but that holds for any other way of representing information.
>
> Our community has been, and still is, working on a way to map "engineering language" directly to a specialized template or a set of interrelated templates. Sometimes that gets pretty complex, but once it is done and published it can be used without much more effort.
>
> A nice example is this one: When we measure fluid flow rate with an orifice plate (a plate with a hole, inserted in a pipe, where the fluid flow creates a differential pressure across that plate) we use the property "Beta Ratio", being the ratio between the cross section area of that hole and the cross section area inside the pipe. It is not a property of that orifice plate, but engineers often believe it is.
>
> <beta-ratio.png>
> That's where the confusion starts, unless you do the modeling work for them, so that they can fill in the blanks.
>
> Regards,
> Hans
>
> PS I am always puzzled why a forum like this doesn't use the HTML format as default.

+1 from me.

------------------------------------------------------------
IHMC (850)434 8903 home
40 South Alcaniz St. (850)202 4416 office
Pensacola (850)202 4440 fax
FL 32502 (850)291 0667 mobile (preferred)
pha...@ihmc.us http://www.ihmc.us/users/phayes






Hans Teijgeler

unread,
Jan 5, 2016, 4:17:26 AM1/5/16
to Pat Hayes, Michael Brunnbauer, ontolog-forum
Thanks Pat,

I knew what 'iff' means, but not 'iff definition'. Is this official logician
speak?

We come pretty close with our definitions, although, as always: garbage in,
garbage out.
In general, software geeks underestimate the importance of proper modeling.
For the most part, for performance reasons, they have to introduce a lot of
implicit information (shortcuts), which ISO 15926 intends to turn into
explicit information.
In the nineties Matthew taught us always to ask ourselves the question: what
IS it really that I am looking at? Often there is a mix-up between essence
and role.

Regards,
Hans

Hans Teijgeler,
OntoConsult,
Netherlands
http://15926.org


Michael Brunnbauer

unread,
Jan 5, 2016, 6:39:06 AM1/5/16
to Hans Teijgeler, ontolog-forum

Hello Hans,

On Tue, Jan 05, 2016 at 10:17:19AM +0100, Hans Teijgeler wrote:
> I knew what 'iff' means, but not 'iff definition'. Is this official logician
> speak?

Probably not - but logicians should know what I meant. Sorry for being unclear.

> > PS I am always puzzled why a forum like this doesn't use the HTML format
> as default.

It is up to the sending e-mail application and its user to use text, HTML or
text + HTML. In case of text + HTML, it is up to the receiving e-mail
application to choose what is displayed.

Your Outlook for example sends both formats and probably displays HTML when
available.

I use mutt - a text console based client. If someone sends a HTML only mail,
mutt will display the HTML source - which is usually so bloated that finding
the message text is too tedious for me to bother.

Regards,

Michael Brunnbauer

--
++ Michael Brunnbauer
++ netEstate GmbH
++ Geisenhausener Straße 11a
++ 81379 München
++ Tel +49 89 32 19 77 80
++ Fax +49 89 32 19 77 89
++ E-Mail bru...@netestate.de
++ http://www.netestate.de/
++
++ Sitz: München, HRB Nr.142452 (Handelsregister B München)
++ USt-IdNr. DE221033342

Bruce Schuman

unread,
Jan 5, 2016, 12:52:27 PM1/5/16
to ontolog-forum, Matthew West, Michael Brunnbauer

[MW>] Pat C believes that there is some finite set of primitives, so this is not a problem. I believe (I'm not sure if it can be proved or not) that you can always add a new primitive, so your issue is relevant.

 

[MW>] For me the consequence is only that your integrating ontology (which is not the same as an upper level ontology) is capable of extension to incorporate new primitives as they become relevant. This has consequences for your upper ontology to be able to cope with that. But I'm comfortable that is doable (and I've set out how in my book).

 

**

 

Couple thoughts on this --

 

1)     Thanks for comments on this thread – it’s very helpful for me that we are surveying this entire industry or business from a high/broad perspective, reviewing the wide range of options or kinds of projects it includes.

2)     Interesting to see the distinction between “integrating ontology” and “upper-level ontology” – since I might tend to see a “really upper-level” ontology as very broadly or maybe absolutely integrating (i.e., inclusive of every type).

3)      I’ve been studying a couple of articles linked from the Wikipedia Upper Ontology piece – one on “Semantic Interoperability” at https://en.wikipedia.org/wiki/Semantic_interoperability and a second on “Conceptual Interoperability” at https://en.wikipedia.org/wiki/Conceptual_interoperability. I’m expecting to write some review or response to these three articles, maybe in the form of a single outline that considers the alternatives mentioned and defines what might (?) be a coherent collaborative research agenda – at least as I see it.

4)     I am instinctively leery of any notion of “primitive” that is not absolutely crunched down to absolutely minimal information structure – probably a “bit” at the machine level, and something like “Dedekind cut” at the level of interpretation.  I say we are combining and manipulating “information structures” to which we assign meaning, and any tendency to “assume” absolutely anything introduces the potential for error or ambiguity.  Everything must be under exact and fully explicit and “transparent” mechanical definition.  According to me, any tendencies to the contrary introduce the potential for craziness.  So, as per this doctrine – don’t introduce miscellaneous not-actually-primitive primitives.  Get down to bedrock and build “everything” under explicit definition.  I’d say this is essential for overcoming incommensurate definitions at higher levels.  To not do this is to further populate the swamp…
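For reference, the Dedekind cut Bruce mentions is itself a model of the fully explicit kind of definition he is asking for: a real number is identified with a set $A \subset \mathbb{Q}$ such that

```latex
A \neq \emptyset, \qquad A \neq \mathbb{Q}, \qquad
(q < p \ \wedge\ p \in A) \Rightarrow q \in A, \qquad
\forall p \in A\ \exists r \in A:\ p < r
```

that is: non-empty, proper, downward closed, and with no greatest element.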

 

And on motivation –

 

The Wikipedia article on Semantic Interoperability says that our collective incapacity to solve this problem is costing the US economy alone $100 billion per year.  That ought to be enough to get a few analysts out of bed in the morning.

 

Secondly – this fabulous resource on technical philosophy mentioned here recently, at http://philpapers.org/, lists at least a million articles – many thousands of which are relevant to this subject. That brilliant place is a very rich resource – but it too is a swamp.  Why are there thousands of articles going over intimately-related and overlapping and critically important subjects with what seems to be such mind-numbing redundancy?  Can’t we figure out some collaborative standards and come up with some answers that actually work?   In the context of what too often looks like global political meltdown around a very long list of interdependent issues, maybe somebody ought to figure this out.

 

Thanks.

 

Bruce Schuman, Santa Barbara CA USA

http://networknation.net/matrix.cfm

 

 


Matthew West

unread,
Jan 5, 2016, 2:09:45 PM1/5/16
to ontolog-forum

Dear Bruce,

 

[MW>] Pat C believes that there is some finite set of primitives, so this is not a problem. I believe (I'm not sure if it can be proved or not) that you can always add a new primitive, so your issue is relevant.

 

[MW>] For me the consequence is only that your integrating ontology (which is not the same as an upper level ontology) is capable of extension to incorporate new primitives as they become relevant. This has consequences for your upper ontology to be able to cope with that. But I'm comfortable that is doable (and I've set out how in my book).

 

**

 

Couple thoughts on this --

 

1)     Thanks for comments on this thread – it’s very helpful for me that we are surveying this entire industry or business from a high/broad perspective, reviewing the wide range of options or kinds of projects it includes.

2)     Interesting to see the distinction between “integrating ontology” and “upper-level ontology” – since I might tend to see a “really upper-level” ontology as very broadly or maybe absolutely integrating (i.e., inclusive of every type).

[MW>] That could only be possible if all the ontologies you were integrating were already consistent and e.g. used the same terms for the same things, admitted the same kinds of objects, and had the same constraints. I have never found two independently developed ontologies that were consistent in this sense. In practice an integrating ontology needs to be a mediating ontology, so that for each ontology that it integrates there is a mapping to and from the integrated ontology. This means that an integrating ontology must have terms that comprehensively cover each domain that it seeks to integrate. An integrating ontology will have an upper ontology as a part. It is that part that ensures a consistent approach to the way the domains are analysed and integrated into the integrating ontology, so the parts are consistent. It consists of abstract patterns of which all domain ontologies can be expressed as specializations.
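A thumbnail of the abstract-patterns-specialized-by-domains idea, with invented class names (not from any actual upper ontology):

```python
# Illustrative only: an upper-ontology pattern that two domain
# ontologies specialize, giving the integrating ontology a common hook.

class PhysicalObject:
    """Abstract upper-ontology pattern: anything with spatio-temporal extent."""

class Pump(PhysicalObject):
    """Specialization from an engineering domain ontology."""

class Organism(PhysicalObject):
    """Specialization from a biology domain ontology."""

# Mappings in the integrating ontology can be stated once against the
# shared pattern rather than separately per domain:
print(issubclass(Pump, PhysicalObject), issubclass(Organism, PhysicalObject))
```

The point is only structural: the upper-ontology part supplies the patterns; each domain contributes specializations of them.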

3)      I’ve been studying a couple of articles linked from the Wikipedia Upper Ontology piece – one on “Semantic Interoperability” at https://en.wikipedia.org/wiki/Semantic_interoperability and a second on “Conceptual Interoperability” at https://en.wikipedia.org/wiki/Conceptual_interoperability. I’m expecting to write some review or response to these three articles, maybe in the form of a single outline that considers the alternatives mentioned and defines what might (?) be a coherent collaborative research agenda – at least as I see it.

[MW>] I’m not sure there is much left to do in terms of how to do integration. Actually doing it, now there is an ocean to boil.

4)     I am instinctively leery of any notion of “primitive” that is not absolutely crunched down to absolutely minimal information structure – probably a “bit” at the machine level, and something like “Dedekind cut” at the level of interpretation.  I say we are combining and manipulating “information structures” to which we assign meaning, and any tendency to “assume” absolutely anything introduces the potential for error or ambiguity.  Everything must be under exact and fully explicit and “transparent” mechanical definition.  According to me, any tendencies to the contrary introduce the potential for craziness.  So, as per this doctrine – don’t introduce miscellaneous not-actually-primitive primitives.  Get down to bedrock and build “everything” under explicit definition.  I’d say this is essential for overcoming incommensurate definitions at higher levels.  To not do this is to further populate the swamp…

[MW>] Good luck with that. As Pat H said, there are generally only formal definitions for mathematical objects. In practice most people have only a vague idea what they mean by a particular term, as you will discover if you press a few of them. Also, across people there will be a range of meanings in mind, and there is no way you can make them all take the same meaning. Any practical solution needs to take account of this.

 

And on motivation –

 

The Wikipedia article on Semantic Interoperability says that our collective incapacity to solve this problem is costing the US economy alone $100 billion per year.  That ought to be enough to get a few analysts out of bed in the morning.

 

Secondly – this fabulous resource on technical philosophy mentioned here recently, at http://philpapers.org/, lists at least a million articles – many thousands of which are relevant to this subject. That brilliant place is a very rich resource – but it too is a swamp.  Why are there thousands of articles going over intimately-related and overlapping and critically important subjects with what seems to be such mind-numbing redundancy?  Can’t we figure out some collaborative standards and come up with some answers that actually work?   In the context of what too often looks like global political meltdown around a very long list of interdependent issues, maybe somebody ought to figure this out.

[MW>] Well I will claim to know a way this can be done. (I’m confident there is more than one). So will others here. As I mentioned above, the problem is actually doing it. The devil is definitely in the detail.

Azamat Abdoullaev

unread,
Jan 5, 2016, 3:15:35 PM1/5/16
to Matthew West, ontolog-forum
Matthew West wrote:
“An integrating ontology will have an upper ontology as a part. It is that part that ensures a consistent approach to the way the domains are analysed and integrated into the integrating ontology so the parts are consistent. It consists of abstract patterns of which all domain ontologies can be expressed as specializations”.

I agree with Matthew, just adding that the universal ontology is the one acting as a true “integrating ontology”:

http://ontolog.cim3.net/forum/ontolog-forum/2007-07/pdfbPPssy0C24.pdf

http://www.slideshare.net/ashabook/philosophy-science-arts-technology-grand-unification


Singer, John

unread,
Jan 5, 2016, 3:46:58 PM1/5/16
to Matthew West, ontolog-forum

Forgive me for jumping on this thread, but I have a somewhat related question. I am looking at building a system that integrates data from a number of sources at a high level and will also serve as a pointer to the underlying source systems that expose interfaces for supplying additional data. I was looking at the Topic Maps standard (ISO 13250) as an approach for organizing the combined data, but there doesn't appear to have been any work on this approach for years. Is there another better/newer approach?


Ed - 0x1b, Inc.

unread,
Jan 5, 2016, 4:10:54 PM1/5/16
to ontolog-forum, Singer, John
John, you might find this W3C WG useful. As a financial firm, you should
ask EarlyWarning.com (i.e., find an internal contact) what they use, as
you're likely an affiliate and will end up integrating with their
semantics anyway.
http://www.w3.org/TR/2013/NOTE-prov-overview-20130430/

Cory Casanave

unread,
Jan 5, 2016, 4:17:59 PM1/5/16
to Singer, John, Matthew West, ontolog-forum

John,

Are you aware of Financial Industry Business Ontology (FIBO)™?

http://www.edmcouncil.org/financialbusiness

 

In OMG (omg.org) there is also work on standards for mapping between conceptual/mediating and operational models/ontologies. Let me know if interested.

 

-Cory Casanave

Ed Lowry

unread,
Jan 5, 2016, 4:43:08 PM1/5/16
to Bruce Schuman, ontolog-forum
Bruce

I would like to encourage your search for simplification.
I suggest you reconsider your 3rd operating principle of building all semantic objects from bits.
Bits are excellent building blocks of information -- inside machines -- where they have become
increasingly fast, reliable, and inexpensive.  But they do not serve well to provide for conceptual
simplicity across rich semantic data structures.

If language subject matter is restricted to computer applications and extraneous complexity
is eliminated in a sufficiently thorough way, there is a convergence  of design in the underlying
language semantics.  In particular the design of data primitives appears to converge toward
an enduring practical optimum.  See "Toward Perfect Information Microstructures" on my web site.

I would be interested in any reasons there may be for departing from the proposed optimum
of "needles" described there, either for computer language or the possibly broader needs
of writing ontologies.  You speak of your primitive as algebraic so it may be worth noticing that
needles serving in the role of "pegs" can serve as algebraic variables.
If this direction seems unsuited to your goal, I would be interested in what other directions might
seem more attractive.

Ed Lowry
http://users.rcn.com/eslowry 
 
                ------

Bruce Schuman

Jan 5, 2016, 4:49:44 PM
to ontolog-forum, Azamat Abdoullaev, Matthew West

Thanks for the comments – and my apologies for attempting to describe such a complex and ambitious thesis in just a few words.  If it were feasible, it would probably be more appropriate for me to write a book that goes through all these elements and explains them in detail.  Indeed, an even better approach would involve having a working system that demonstrates these principles in action.  However, I am making progress on a longer and more comprehensive Word document (.docx) that begins to review many of these issues.

 

It’s true that I am talking about a slightly different concept of “semantic ontology” than what  I understand to be the prevailing norm for the industry – and probably for most people on this list.

 

 

THE INTERPRETATION OF INTENDED MEANING

 

I am not attempting to create a series of stable word definitions that everybody working in some professional domain agrees to accept.  What I want to do is build a model of semantic structure that describes what people are actually doing in their everyday acts of communication – which is, in a word, stipulating their intended meaning – meaning something in particular even though their words might be inherently non-specific.  People do very often mean different things by the words they use, and ambiguity is a constant source of misunderstanding.  We might say that “every abstraction is a Rorschach test,” and to understand – especially in high-speed conversations – listeners are forced to project uncertain psychological interpretations into vague abstractions.  It’s my guess and assumption that if this issue could be accurately understood – and I don’t believe it currently is – it should then be possible to transpose a more general solution for usage in more narrowly-defined professional contexts.

 

I am saying that we need a clear algebraic model of conceptual form – the general form of conceptual abstraction – perhaps in a form akin to what is called “the principle of compositionality.” 

 

https://en.wikipedia.org/wiki/Principle_of_compositionality   

 

Abstract words have a complex implicit (unstated but intended and inherent) structure that is intended by a speaker or user of the word or concept – but because it is left implicit (not explicitly stated) in the context of actual communication, the interpretation of the intended meaning is often inherently ambiguous and highly context-dependent.  Psychologists, semanticists and ontologists alike all need a theory of implicit meaning that shows the exact inherent structure of this implicit cascade across levels of abstraction.
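Bruce's appeal to compositionality can be sketched as a toy interpreter: the meaning of a compound expression is computed from the meanings of its parts plus the combinator that joins them. This is only an illustrative sketch – the `meaning` function and the toy lexicon are invented here, not taken from any standard library or from Bruce's own system:

```python
# Toy sketch of the principle of compositionality: the meaning of a
# compound is a function of the meanings of its parts and the way
# they are combined. All names here are illustrative.

def meaning(expr, lexicon):
    """Recursively compose a meaning from parts; leaves look up the lexicon."""
    if isinstance(expr, str):                 # atomic word
        return lexicon[expr]
    combinator, *parts = expr                 # e.g. (and-combinator, p, q)
    sub = [meaning(p, lexicon) for p in parts]
    return combinator(*sub)

# A toy lexicon mapping words to truth values in one context.
lexicon = {"sky_is_blue": True, "grass_is_red": False}

both = (lambda a, b: a and b, "sky_is_blue", "grass_is_red")
print(meaning(both, lexicon))   # False
```

The sketch makes Bruce's point concrete: the compound's interpretation is fixed only once every implicit part is resolved against a particular context (the lexicon); change the context and the same expression means something else.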

 

In this industry, as I understand it – the work-around is to build a rigid dictionary and get people to agree what the words in it mean.  If what we are talking about are mechanical elements or fairly simple and recurrent objects, this might not be so complicated.  But I am talking about communication as it actually happens in the real world – where words are often highly abstract and spoken very quickly (think politics or religion), and this ambiguity becomes a major cause of misunderstanding and failure in communication.  I’d say that happens regularly in our discussions here on ontolog.

 

 

DIMENSIONAL DRILL-DOWN TO SPECIFICS

 

For a clear example of what I mean – consider the process of contract negotiation where there is a lot of money involved.  The lawyers negotiate the implicit details inherent in the abstractions.

 

“Corporation X agrees to build an aircraft carrier for Government Y for $10 billion.”

 

That’s a very broad abstraction with almost no explicit detail. 

 

So, the negotiation process involves absolutely detailed drill-down on every facet of that very simple high-level description.  Through that process, the meaning of the phrase “aircraft carrier” becomes very exact indeed.  How long, how high, what quality, what part, done how, by whom, with what – in millions of little details grounded in exact measurement or cost estimation.

 

Every other abstract term in common human conversation can be “dimensioned” in this same way – drilling down across levels of abstraction or “whole/part relationships” so that every detail is highly specified and the parties feel safe to sign the contract: “proceed: we understand each other”.
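The drill-down described above can be modelled as a whole/part tree whose leaves are fully specified details. A minimal sketch, where all the figures and field names are invented purely for illustration:

```python
# Toy illustration of "dimensioning" an abstract term: the term is the
# root of a whole/part tree, and each level specifies the one above it.
# Figures and field names are invented for illustration only.
contract = {
    "aircraft carrier": {
        "hull": {"length_m": 333, "material": "steel"},
        "flight deck": {"area_m2": 18200},
        "cost_usd": 10_000_000_000,
    }
}

def leaves(node, path=()):
    """Drill down recursively, yielding each fully specified detail."""
    if isinstance(node, dict):
        for key, value in node.items():
            yield from leaves(value, path + (key,))
    else:
        yield path, node

for path, value in leaves(contract):
    print(" / ".join(path), "=", value)
```

Each leaf is a point where negotiation bottoms out in a measurement or cost; the abstract phrase at the root means exactly the tree hanging under it.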

 

As Matthew writes:  “In practice most people only have a vague idea what they mean by a particular term, as you will discover if you press a few of them. Also between people there will be a range of meaning people will have in mind, and there is no way you can make them all take the same meaning. Any practical solution needs to take account of this.”

 

Yes – this is absolutely the point.  This is WHY we need to understand with high precision how meaning is intended – since word meaning absolutely is highly flexible and adaptive and creative and metaphorical in just about every actual context of usage – except for situations with highly regulated industry standards.  This is the point of what I am doing: addressing this concern.

 

**

 

Some years ago, I did quite a bit of writing along this general theme.  Here’s a link to an essay that describes what I call “ad hoc top-down stipulation” – the way abstract meaning is actually intended in human conversations – and indeed, in contract negotiations.  This is a problem I think the ontology community (or somebody) needs to face head-on, build an accurate simple model, and perhaps learn to program this kind of context into machine learning systems (as is probably already going on).

 

http://originresearch.com/sd/sd2.cfm#part6

 

Azamat Abdoullaev, I did take a look at your “grand unification” ontology when you introduced it here a while ago.  I will take another look at this PDF.

 

http://ontolog.cim3.net/forum/ontolog-forum/2007-07/pdfbPPssy0C24.pdf

 

Thanks.

 

Bruce Schuman, Santa Barbara CA USA

http://networknation.net/matrix.cfm

 

From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of Azamat Abdoullaev


Sent: Tuesday, January 5, 2016 12:16 PM
To: Matthew West <dr.matt...@gmail.com>
Cc: ontolog-forum <ontolo...@googlegroups.com>

Subject: Re: [ontolog-forum] Wikipedia on upper ontology

 

Matthew West wrote:

“An integrating ontology will have an upper ontology as a part. It is that part that ensures a consistent approach to the way the domains are analysed and integrated into the integrating ontology so the parts are consistent. It consists of abstract patterns that all domain ontologies can be expressed as specializations”.

 

I agree with Matthew, just adding that the universal ontology is the one acting as a true “integrating ontology”:

http://ontolog.cim3.net/forum/ontolog-forum/2007-07/pdfbPPssy0C24.pdf

http://www.slideshare.net/ashabook/philosophy-science-arts-technology-grand-unification

On Tue, Jan 5, 2016 at 9:09 PM, Matthew West <dr.matt...@gmail.com> wrote:


 


John F Sowa

Jan 5, 2016, 11:57:25 PM
to ontolo...@googlegroups.com
Dear Matthew, Michael B, Hans T, Bruce S, and Pat C,

Before getting to the details of your notes, I'd like to make
three general points:

1. Digital systems have been interoperating successfully since
the first punched-card systems in the 1890s. Those areas that
had successful punched-card methods of interoperability --
mostly in science, engineering, bookkeeping, and finance --
moved them to digital computers very early (1950s). But
interoperability for those areas that did not have workable
punched-card methods is still a major research issue today.

2. I gathered more than 100 references to documents that address various
issues of semantic interoperability since the 1980s. This is just
the tip of a huge volume of R & D that is still very active. For
an overview with URLs, see http://www.jfsowa.com/ikl

3. Linguists and lexicographers have come to the conclusion that
there is no possibility of a fixed set of word senses for any
natural language, much less for all natural languages. For an
an article about those issues by people who had been working
on them for decades, see "I don't believe in word senses":
https://www.kilgarriff.co.uk/Publications/1997-K-CHum-believe.pdf

Matthew responding to Michael
>> I wonder if the "semantic interoperability" - which seems to be the
>> main reason behind this - is actually deliverable in practice?
>>
>> Wouldn't additional primitives and/or axioms in lower level ontologies
>> be problematic?
>
> [MW>] Pat C believes that there is some finite set of primitives, so
> this is not a problem. I believe (I'm not sure if it can be proved or
> not) that you can always add a new primitive, so your issue is relevant.

We have to distinguish two kinds of interoperability: shallow and deep.
Shallow interoperability (sharing data such as names and addresses)
among independently developed systems has been successful for over
a century. But deep interoperability is only possible among systems
that have been designed from the beginning to use a fixed set of
predefined conventions.

Michael
> I suspect there are better reasons for using upper ontologies than
> interoperability.

I would say that interoperability is one of many reasons for having
an ontology. I'd add that there is *no* magic solution that guarantees
perfect interoperability. But there are many ways of using ontologies
to enable deeper interoperability than Schema.org can support.

Hans
> The website http://15926.org gives the whole story.

I followed that link to the website, which cites Wikipedia for an
overview, which cites the following criticism by Barry Smith:
http://ontology.buffalo.edu/bfo/west.pdf

I would agree that ISO 15926 is very good of its kind and that it
can support deeper sharing than the shallower Schema.org. But I
also agree with many of Barry's criticisms.

In particular, the fundamental framework of ISO 15926 is not adequate
as a general-purpose ontology of everything. Adding more primitives
to it will add more special cases. But a collection of special cases
can never become more general, no matter how big it may become.

Bruce
> What I personally want to see emerge is an ontology based on a theory
> of concepts where the entire structure is 100% linear and recursive
> and essentially built from "one algebraic primitive"

That would be wonderful if it were possible. But everybody who has
attempted anything similar has failed. That includes some brilliant
logicians and scientists who founded the "unified science" movement
in the 1930s. That was a good try, but they never designed anything
that achieved the goals they (and many ontologists today) hoped for.
People are still searching, but there is no consensus. See
http://plato.stanford.edu/entries/scientific-unity/

Bruce
> All semantic objects are constructed in this medium as composite
> cascades of the fundamental information structure “bit”

More precisely, all finite notations can be mapped to finite
strings of bits. But that says nothing about their meaning.
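John's point that any finite notation reduces to a finite bit string, while the encoding by itself carries no meaning, can be illustrated with a UTF-8 example (the helper name `to_bits` is invented here for illustration):

```python
# Any finite notation can be serialized to a finite string of bits,
# but the bits say nothing about meaning: "cat" the word and its
# encoding are related only by an arbitrary convention (UTF-8 here).

def to_bits(text):
    """Encode text as a string of 0s and 1s via its UTF-8 bytes."""
    return "".join(f"{byte:08b}" for byte in text.encode("utf-8"))

print(to_bits("cat"))   # 011000110110000101110100
```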

Pat
> When one has a set of domain ontologies that can interoperate by
> translation using the common foundation ontology as an interlingua,
> the foundation ontology will have all the semantic primitives necessary
> to logically specify the meanings of all of the domain terms in those
> ('legacy') ontologies.

That claim would only be true for those domains that had been designed
around a common ontology (e.g. ISO 15926). But note that OWL is the
notation used for that standard. That is a very weak version of logic.
It can support sharing that is deeper than Schema.org. But OWL, by
itself, cannot be used to specify mathematics, science, or any of the
projects required for a unified science. And any fundamental ontology
must provide the foundation for all of science -- physical and social.

> If a new domain ontology requires a new primitive element, it can be
> added to the foundation ontology... new primitives will not break
> existing applications.

Such additions are possible with a very weak logic, such as OWL.
But when you attempt to define terms in sufficient detail to
specify a computer program, you need at least FOL. And the
interactions between the new items and the old items become
vastly more complex.

For example, just look at Microsoft Windows, which has evolved
from a foundation that is almost 30 years old. All the terms
(with numerous updates and revisions) are still being used today.
But those terms get redefined with every weekly update.

Every revision causes some programs to break. In fact, the most
common revisions are *intended* to cause many programs to break
-- those are the ones called "malware". Major revisions cause
so many programs to break that people say "Never adopt version
X.0 of any Microsoft product."

> An important point is that, regardless of how many different ways
> of specifying a term are used, if they are logically equivalent,
> then they can be converted into each other, satisfying the usage
> preferences of any number of communities.

That "if" is an extremely big ***IF***. Proving that two terms
are logically equivalent can be undecidable. But even worse,
most definitions are either undefined or so partially defined
(e.g., OWL definitions that put troublesome details in comments)
that it's impossible to determine whether they're equivalent.
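One way to see why that "if" is big: equivalence of *propositional* formulas is decidable by brute-force truth tables (though exponential in the number of variables), while equivalence in first-order logic is undecidable in general, which is John's point. A toy checker for the decidable case (names are illustrative):

```python
from itertools import product

def equivalent(f, g, n_vars):
    """True iff f and g agree on every assignment of n_vars booleans."""
    return all(f(*vs) == g(*vs)
               for vs in product([False, True], repeat=n_vars))

# Two syntactically different but logically equivalent definitions
# (De Morgan): not (p and q)  ==  (not p) or (not q)
print(equivalent(lambda p, q: not (p and q),
                 lambda p, q: (not p) or (not q), 2))   # True
```

Even here the check costs 2^n evaluations; once quantifiers over infinite domains enter (as in any serious ontology language above OWL), no such exhaustive procedure exists.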

> The CYC microtheories were designed to ameliorate that problem,
> but paradoxically made the CYC seemingly more complex and difficult
> to use.

Don't knock Cyc. It's the largest and most complete formal ontology
ever developed. Many people (including me) have criticized Cyc, but
none of the critics have developed a formal ontology that addresses
as many of the very complex issues.

On the other hand, the failure of Cyc to build the HAL 9000 or
even to win the Jeopardy! challenge is very strong evidence
for Adam Kilgarriff's article. (Please read or reread it.)

For a comparison of Cyc to IBM Watson, see slides 25 to 29
of http://www.jfsowa.com/talks/nlu.pdf . For explanation of
the related issues, see the earlier and later slides.

Bruce
> The Wikipedia article on Semantic Interoperability says that
> our collective incapacity to solve this problem is costing
> the US economy alone $100 billion per year.

That claim is meaningless. The "problem" of semantic interoperability
has been partially "solved" at the shallow end since the punched-card
systems of the 1890s. At the deep end(s), there is no consensus about
how to define the problem, what a solution might look like, how to
implement the solution even if somebody magically discovered it, what
it would cost, how much profit it might make, or -- most difficult of
all -- how to convince funding agencies that a solution had been found.

John

Rich Cooper

Jan 6, 2016, 2:12:45 AM
to ontolo...@googlegroups.com

Previous conversations…:

> The Wikipedia article on Semantic Interoperability says that our

> collective incapacity to solve this problem is costing the US economy

> alone $100 billion per year.

 

That claim is meaningless.  The "problem" of semantic interoperability has been partially "solved" at the shallow end since the punched-card systems of the 1890s.  At the deep end(s), there is no consensus about how to define the problem, what a solution might look like, how to implement the solution even if somebody magically discovered it, what it would cost, how much profit it might make, or -- most difficult of all -- how to convince funding agencies that a solution had been found.

                                                 

John

 

+1

 

 

Sincerely,

Rich Cooper,

 

Chief Technology Officer,

MetaSemantics Corporation

MetaSemantics AT EnglishLogicKernel DOT com

( 9 4 9 ) 5 2 5-5 7 1 2

http://www.EnglishLogicKernel.com

 

-----Original Message-----
From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of John F Sowa
Sent: Tuesday, January 05, 2016 8:57 PM
To: ontolo...@googlegroups.com
Subject: Re: [ontolog-forum] Wikipedia on upper ontology

 

Dear Matthew, Michael B, Hans T, Bruce S, and Pat C,


Matthew West

Jan 6, 2016, 2:41:51 AM
to ontolo...@googlegroups.com
Dear John,

Matthew responding to Michael
>> I wonder if the "semantic interoperability" - which seems to be the
>> main reason behind this - is actually deliverable in practice?
>>
>> Wouldn't additional primitives and/or axioms in lower level
>> ontologies be problematic?
>
> [MW>] Pat C believes that there is some finite set of primitives, so
> this is not a problem. I believe (I'm not sure if it can be proved or
> not) that you can always add a new primitive, so your issue is relevant.

We have to distinguish two kinds of interoperability: shallow and deep.
Shallow interoperability (sharing data such as names and addresses) among independently developed systems has been successful for over a century. But deep interoperability is only possible among systems that have been designed from the beginning to use a fixed set of predefined conventions.
[MW>] I disagree. Designing systems to be interoperable ground up is certainly the easiest way to do it. However, it is quite possible to achieve interoperability between independently developed systems, at least where such interoperability is actually valid: i.e. mappings can be constructed between systems to translate between the systems.

Hans
> The website http://15926.org gives the whole story.

I followed that link to the website, which cites Wikipedia for an overview, which cites the following criticism by Barry Smith:
http://ontology.buffalo.edu/bfo/west.pdf
[MW>] This was actually a write up of an exchange that took place between us on the Ontolog Forum many years ago, and my rebuttal is here:
http://www.matthew-west.org.uk/publications/ResponseToBarrySmithCommentsOnISO15926.pdf
I notice reference to it had been removed from the Wikipedia article, so I have added it back.
It is worth noting that ISO 15926-2 (which is specifically what he was commenting on) has not been found to need any changes over the last 12 years (which to be honest surprises me)

I would agree that ISO 15926 is very good of its kind and that it can support deeper sharing than the shallower Schema.org. But I also agree with many of Barry's criticisms.

In particular, the fundamental framework of ISO 15926 is not adequate as a general-purpose ontology of everything.
[MW>] Please be precise. If it is not adequate exactly what is lacking? The only thing I can find in Barry's critique is that we do not use a modal logic. In fact we use a Possible Worlds approach to modal logic (which he claims we made up on the fly!)

Adding more primitives to it will add more special cases.
[MW>] The ability to add more primitives to support the detail of particular domains is built in to ISO 15926, it is in part at least a meta-ontology in that sense. This seems to have been one of the things that confused Barry.

But a collection of special cases can never become more general, no matter how big it may become.
[MW>] No. So I repeat, what is missing?

Pat
> When one has a set of domain ontologies that can interoperate by
> translation using the common foundation ontology as an interlingua,
> the foundation ontology will have all the semantic primitives
> necessary to logically specify the meanings of all of the domain terms
> in those
> ('legacy') ontologies.

That claim would only be true for those domains that had been designed around a common ontology (e.g. ISO 15926).
[MW>] That is not true. It can provide interoperability for any domains for which the "interlingua" is sufficiently expressive that it can provide formal definitions (mappings) of the terms in the domain ontologies.

> An important point is that, regardless of how many different ways of
> specifying a term are used, if they are logically equivalent, then
> they can be converted into each other, satisfying the usage
> preferences of any number of communities.

That "if" is an extremely big ***IF***. Proving that two terms are logically equivalent can be undecidable. But even worse, most definitions are either undefined or so partially defined (e.g., OWL definitions that put troublesome details in comments) that it's impossible to determine whether they're equivalent.
[MW>] If I understand you correctly, this is true for automatic identification of equivalence. I suspect that is an unusual case since I agree most definitions are partial. I expect that the definition of equivalence needs to be hand crafted as a formal definition of a term in one ontology in terms of another (usually the integrating ontology).

John F Sowa

Jan 6, 2016, 7:25:12 AM
to ontolo...@googlegroups.com
Dear Matthew,

As I said, there is a continuum between shallow interoperability
and the deeper kinds of interoperability. At the shallow end,
digital systems have been interoperating since the punched-card days.
But at the deeper ends, different versions of the "same system"
are not fully interoperable.

> Designing systems to be interoperable ground up is certainly the
> easiest way to do it. However, it is quite possible to achieve
> interoperability between independently developed systems, at least
> where such interoperability is actually valid: i.e. mappings can be
> constructed between systems to translate between the systems.

The degree of interoperability depends on the subject matter and
the kinds of industry standards that are being used. Banks, for
example, have standards for Electronic Funds Transfer (EFT) that
enable a high degree of interoperability.

When two banks merge, they have many similar services for
checking, savings, etc. But they *never* merge the software
systems for the two banks. Instead, they continue to operate
all the software of each bank indefinitely. Internal transfers
continue to interoperate with the same EFT standards they used
before the merger -- until some system(s) are shut down.

> [The article by Barry Smith] was actually a write up of an exchange
> that took place between us on the Ontolog Forum many years ago,
> and my rebuttal is here:
http://www.matthew-west.org.uk/publications/ResponseToBarrySmithCommentsOnISO15926.pdf

Thanks for the URL. I agree with many of your points, but Barry
also makes many points that I agree with. In any case, that
exchange confirms my major claim: there is a continuum of levels
of interoperability.

> If [ISO 15926] is not adequate exactly what is lacking?

I never said it was inadequate. In fact, I believe that it is
about as good as anyone can get for interoperability between
independently developed systems.

But I believe that the issues that apply to bank mergers would
apply to mergers between two companies whose software systems
complied with ISO 15926. Do you know of any exceptions?

> Adding more primitives to it will add more special cases.
> [MW] The ability to add more primitives to support the detail
> of particular domains is built in to ISO 15926, it is in part
> at least a meta-ontology in that sense.

I agree with your point. But the point I was making is that
adding special cases (or primitives) to any system will always
make it more specialized.

> [MW] No. So I repeat, what is missing?

Nothing is missing. It's a fundamental principle of logic:
Given two propositions p and q, the conjunction (p & q)
is more specialized than either one -- except when one
of them implies the other.
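John's principle can be checked mechanically for the propositional case: the models satisfying (p & q) are always a subset of the models satisfying p alone, so conjoining axioms can only specialize a theory, never generalize it. A small sketch (function names are illustrative):

```python
from itertools import product

def models(formula, n_vars):
    """All boolean assignments under which `formula` is true."""
    return {vs for vs in product([False, True], repeat=n_vars)
            if formula(*vs)}

p = lambda a, b: a            # one axiom
p_and_q = lambda a, b: a and b  # the same axiom plus a conjunct

# Conjunction is more specialized: it admits fewer models.
assert models(p_and_q, 2) <= models(p, 2)
print(len(models(p, 2)), len(models(p_and_q, 2)))   # 2 1
```

This is the set-theoretic reading of John's point: adding primitives or special cases shrinks (or at best preserves) the class of situations the theory describes.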

There is, however, the possibility of taking two independent
systems of so-called primitives and defining both in terms
of more primitive "primitives". But in practice, that is
rare. For anything other than a toy system, the total number
of "primitives" keeps growing indefinitely.

The only systems for which the number of primitives does not
increase are those that have become "functionally stabilized"
-- that's IBM's euphemism for 'obsolete'.

John

Matthew West

Jan 6, 2016, 8:29:40 AM
to ontolo...@googlegroups.com
Dear John,

As I said, there is a continuum between shallow interoperability and the deeper kinds of interoperability. At the shallow end, digital systems have been interoperating since the punched-card days.
But at the deeper ends, different versions of the "same system"
are not fully interoperable.

> Designing systems to be interoperable ground up is certainly the
> easiest way to do it. However, it is quite possible to achieve
> interoperability between independently developed systems, at least
> where such interoperability is actually valid: i.e. mappings can be
> constructed between systems to translate between the systems.

The degree of interoperability depends on the subject matter and the kinds of industry standards that are being used. Banks, for example, have standards for Electronic Funds Transfer (EFT) that enable a high degree of interoperability.

When two banks merge, they have many similar services for checking, savings, etc. But they *never* merge the software systems for the two banks. Instead, they continue to operate all the software of each bank indefinitely. Internal transfers continue to interoperate with the same EFT standards they used before the merger -- until some system(s) are shut down.
[MW>] What you have described *is* the two systems interoperating. However, the different systems might well have different restrictions that mean that one of them cannot take over the functions of another, but that is not a requirement for interoperability, only that they can do what they do, and share what they need to. Now if one of them was much superior to the other and had all the capabilities of the other, then it should be possible to transfer the data of the inferior system to the superior one, though some mapping between terms would still probably be necessary.

> [The article by Barry Smith] was actually a write up of an exchange
> that took place between us on the Ontolog Forum many years ago, and my
> rebuttal is here:
http://www.matthew-west.org.uk/publications/ResponseToBarrySmithCommentsOnISO15926.pdf

Thanks for the URL. I agree with many of your points, but Barry also makes many points that I agree with.
[MW>] In general I agree with many of the points Barry makes in the abstract. They just don't apply to ISO 15926 in the way he claims.

> If [ISO 15926] is not adequate exactly what is lacking?

I never said it was inadequate. In fact, I believe that it is about as good as anyone can get for interoperability between independently developed systems.

[MW>] You said in your previous post:
"In particular, the fundamental framework of ISO 15926 is *not adequate* as a general-purpose ontology of everything."
I'll take your response above as a retraction.

[JS] But I believe that the issues that apply to bank mergers would apply to mergers between two companies whose software systems complied with ISO 15926. Do you know of any exceptions?
[MW>] If two companies had rigorously used ISO 15926 for their systems, then it should be relatively straightforward to merge the software systems involved. This would be by moving the data from one system to the other, and switching the other off. Although this is not what ISO 15926 is designed for. It, like the financial systems, is intended for interoperability amongst companies and systems along the process plant lifecycle (in the first place). However, whereas what is being exchanged in a financial system is at the level of an amount of money between two accounts, at the engineering level we are talking about the exchange of the design of e.g. a pump, or an off-shore Oil Rig, so a rather more complex "transaction", and indeed, sharing the data live is a likely scenario.
I will also say that I am not aware of any companies that have implemented ISO 15926 with sufficient rigour that a trivial merging of data would be possible. However, one of the things I have been pleased to find is that there is graceful degradation. So as you use ISO 15926 with increasing rigour, so the costs of the data migration would reduce. It is not an all or nothing thing.

> Adding more primitives to it will add more special cases.
> [MW] The ability to add more primitives to support the detail of
> particular domains is built in to ISO 15926, it is in part at least a
> meta-ontology in that sense.

[JS] I agree with your point. But the point I was making is that adding special cases (or primitives) to any system will always make it more specialized.
[MW>] I'm not sure what you mean by specialized here. It sounds like it means more restricted. Actually, adding primitives means that there is a greater range of domains that it can integrate and map to. An upper ontology does not integrate domain ontologies by just sucking them up, different references to the same thing in different domains have to be resolved as well. Therefore if you do not have the primitives that enable this, you are not able to do the integration.

> [MW] No. So I repeat, what is missing?

Nothing is missing. It's a fundamental principle of logic:
Given two propositions p and q, the conjunction (p & q) is more specialized than either one -- except when one of them implies the other.

There is, however, the possibility of taking two independent systems of so-called primitives and defining both in terms of more primitive "primitives". But in practice, that is rare. For anything other than a toy system, the total number of "primitives" keeps growing indefinitely.
[MW>] This is the approach that ISO 15926 takes. And yes the total number of primitives does keep growing, so you need to have a method for adding new ones. There is a whole part of ISO 15926 dedicated to that.

Matthew West

unread,
Jan 6, 2016, 8:46:34 AM1/6/16
to ontolo...@googlegroups.com
Dear John,

As I said, there is a continuum between shallow interoperability and the
deeper kinds of interoperability. At the shallow end, digital systems have
been interoperating since the punched-card days.
But at the deeper ends, different versions of the "same system"
are not fully interoperable.

> Designing systems to be interoperable ground up is certainly the
> easiest way to do it. However, it is quite possible to achieve
> interoperability between independently developed systems, at least
> where such interoperability is actually valid: i.e. mappings can be
> constructed between systems to translate between the systems.

The degree of interoperability depends on the subject matter and the kinds
of industry standards that are being used. Banks, for example, have
standards for Electronic Funds Transfer (EFT) that enable a high degree of
interoperability.

When two banks merge, they have many similar services for checking, savings,
etc. But they *never* merge the software systems for the two banks.
Instead, they continue to operate all the software of each bank
indefinitely. Internal transfers continue to interoperate with the same EFT
standards they used before the merger -- until some system(s) are shut down.
[MW>] What you have described *is* the two systems interoperating. However,
the different systems might well have different restrictions that mean that
one of them cannot take over the functions of another, but that is not a
requirement for interoperability, only that they can do what they do, and
share what they need to. Now if one of them was much superior to the other
and had all the capabilities of the other, then it should be possible to
transfer the data of the inferior system to the superior one, some mapping
between terms would still probably be necessary.
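To make the kind of mapping Matthew mentions concrete, here is a minimal sketch (all field names, formats, and conversion rules are invented for illustration, not taken from any real banking system) of translating records from one system's schema into another's:

```python
# Hypothetical sketch: moving data from an "inferior" system's schema into a
# "superior" one.  A mapping table pairs each source field with a target
# field plus a conversion function.

def convert_pence(amount_str):
    # source stores "250.07" as a string of pounds; target wants integer pence
    pounds, pence = amount_str.split(".")
    return int(pounds) * 100 + int(pence)

FIELD_MAP = {
    "acct_no": ("account_id", str),
    "bal":     ("balance_pence", convert_pence),
    "cust_nm": ("customer_name", str.title),
}

def translate(record):
    """Apply the field-by-field mapping to one source record."""
    return {tgt: fn(record[src]) for src, (tgt, fn) in FIELD_MAP.items()}

legacy = {"acct_no": 99124, "bal": "250.07", "cust_nm": "j. smith"}
print(translate(legacy))
# {'account_id': '99124', 'balance_pence': 25007, 'customer_name': 'J. Smith'}
```

The mapping table is exactly the "mapping between terms" Matthew says would still be necessary even when one system subsumes the other's capabilities.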

> [The article by Barry Smith] was actually a write up of an exchange
> that took place between us on the Ontolog Forum many years ago, and my
> rebuttal is here:
http://www.matthew-west.org.uk/publications/ResponseToBarrySmithCommentsOnISO15926.pdf

Thanks for the URL. I agree with many of your points, but Barry also makes
many points that I agree with.
[MW>] In general I agree with many of the points Barry makes in the
abstract. They just don't apply to ISO 15926 in the way he claims.

> If [ISO 15926] is not adequate exactly what is lacking?

I never said it was inadequate. In fact, I believe that it is about as good
as anyone can get for interoperability between independently developed
systems.

[MW>] You said in your previous post:
"In particular, the fundamental framework of ISO 15926 is *not adequate* as
a general-purpose ontology of everything."
I'll take your response above as a retraction.

[JS] But I believe that the issues that apply to bank mergers would apply to
mergers between two companies whose software systems complied with ISO
15926. Do you know of any exceptions?
[MW>] If two companies had rigorously used ISO 15926 for their systems, then
it should be relatively straightforward to merge the software systems
involved. This would be by moving the data from one system to the other, and
switching the other off. Although this is not what ISO 15926 is designed
for. It, like the financial systems, is intended for interoperability
amongst companies and systems along the process plant lifecycle (in the
first place). However, whereas what is being exchanged in a financial system
is at the level of an amount of money between two accounts, at the
engineering level we are talking about the exchange of the design of e.g. a
pump, or an off-shore Oil Rig, so a rather more complex "transaction", and
indeed, sharing the data live is a likely scenario.
I will also say that I am not aware of any companies that have implemented
ISO 15926 with sufficient rigour that a trivial merging of data would be
possible. However, one of the things I have been pleased to find is that
there is graceful degradation. So as you use ISO 15926 with increasing
rigour, so the costs of the merger would reduce. It is not an all or nothing
thing.

> Adding more primitives to it will add more special cases.
> [MW] The ability to add more primitives to support the detail of
> particular domains is built in to ISO 15926, it is in part at least a
> meta-ontology in that sense.

I agree with your point. But the point I was making is that adding special
cases (or primitives) to any system will always make it more specialized.
[MW>] I'm not sure what you mean by specialized here. It sounds like it
means more restricted. Actually, it means that there is a greater range of
domains that it can integrate and map to. An upper ontology does not
integrate domain ontologies by just sucking them up, different references to
the same thing in different domains have to be resolved as well. Therefore
if you do not have the primitives that enable this, you are not able to do
the integration.
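A toy illustration of the resolution step Matthew describes (the identifiers, properties, and the SAME_AS table below are all invented): merging two domain vocabularies only integrates them if local names are first resolved to a shared reference; simply concatenating the vocabularies would leave the two local identifiers as unrelated individuals.

```python
# Two domains refer to the same physical artefact under different local IDs.
PROCESS_DOMAIN = {"P-101": {"type": "CentrifugalPump", "duty_kw": 55}}
MAINTENANCE_DOMAIN = {"PUMP_7": {"last_service": "2015-11-02"}}

# The resolution table is the extra knowledge the integration needs.
SAME_AS = {"PUMP_7": "P-101"}

def integrate(*domains):
    """Merge domain vocabularies, resolving co-referring identifiers."""
    merged = {}
    for dom in domains:
        for local_id, props in dom.items():
            canonical = SAME_AS.get(local_id, local_id)
            merged.setdefault(canonical, {}).update(props)
    return merged

print(integrate(PROCESS_DOMAIN, MAINTENANCE_DOMAIN))
# {'P-101': {'type': 'CentrifugalPump', 'duty_kw': 55,
#            'last_service': '2015-11-02'}}
```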

> [MW] No. So I repeat, what is missing?

Nothing is missing. It's a fundamental principle of logic:
Given two propositions p and q, the conjunction (p & q) is more specialized
than either one -- except when one of them implies the other.
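The principle can be checked mechanically; a brute-force truth-table sketch (my illustration, not John's): the models satisfying (p & q) are a subset of those satisfying p alone, which is what "more specialized" means here.

```python
from itertools import product

# Enumerate all four assignments to (p, q).
models = list(product([False, True], repeat=2))
conj_models = [(p, q) for p, q in models if p and q]   # models of (p & q)
p_models = [(p, q) for p, q in models if p]            # models of p

# The conjunction's models are a subset of either conjunct's models.
assert set(conj_models) <= set(p_models)
print(len(conj_models), len(p_models))  # 1 2
```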

There is, however, the possibility of taking two independent systems of
so-called primitives and defining both in terms of more primitive
"primitives". But in practice, that is rare. For anything other than a toy
system, the total number of "primitives" keeps growing indefinitely.
[MW>] This is the approach that ISO 15926 takes. And yes the total number of
primitives does keep growing, so you need to have a method for adding new
ones. There is a whole part of ISO 15926 dedicated to that.

Michael Brunnbauer

unread,
Jan 6, 2016, 9:19:06 AM1/6/16
to Patrick Cassidy, ontolog-forum

Hello Patrick,

On Mon, Jan 04, 2016 at 12:22:19PM -0500, Patrick Cassidy wrote:
> Conversions of data from one terminology to another can and should be done automatically through a common foundation ontology.

I disagree. My suspicion is that suitable conversions may not exist, are
usually hard to find if they exist and cannot be canonicalized.

We are talking about what happens when people build ontologies from the same
upper ontology without central coordination, don't we?

> Demonstrating that functionality on a non-trivial level would require linking several significant ontology-based applications that can profitably interoperate. One needs to find such applications and then find the funding for the demo -- Much more difficult than just building a good foundation ontology.
>
> For the task of just finding good modules to include in one's domain ontology, a repository of modules such as that overseen by Mike Gruninger (http://www.cs.toronto.edu/~torsten/publications/MGruninger_AO-12.pdf) would also be quite useful.

Let me quote from that paper:

"It is important to stress that Decomp and FindTheory are procedures and not algorithms, both because of the undecidable theorem proving and consistency-checking steps (Line 9 in FindTheory and Lines 5 and 15 in Decomp), but also because of the user intervention required for the specification of translation definitions (Line 2 of FindTheory and Line 4 of Decomp). In this sense, the procedures give practical guidance for the designers of modular ontologies."

The crucial parts are not automatic here, e.g.

"a user is required to provide the translation definitions for the core theories S1 ... into T".

"the entailment problem in step 11 typically requires human guidance even with an automated theorem prover"

And some parts are not even semi-decidable - if I interpret "consistency-checking" correctly. Even the most basic properties of theories defined in
that paper look like they are Π₂ at first glance, so one has to be prudent.
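For reference (my gloss, not Michael's): a Π₂ statement has the logical form

```latex
% \Pi_2 form: for every x there exists a y such that R(x,y),
% with R a decidable relation.
\forall x \, \exists y \; R(x, y)
```

Such claims can in general be neither verified nor refuted by a finite search, which is why they are harder than merely semi-decidable (Σ₁) properties.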

Regards,

Michael Brunnbauer

--
++ Michael Brunnbauer
++ netEstate GmbH
++ Geisenhausener Straße 11a
++ 81379 München
++ Tel +49 89 32 19 77 80
++ Fax +49 89 32 19 77 89
++ E-Mail bru...@netestate.de
++ http://www.netestate.de/
++
++ Sitz: München, HRB Nr.142452 (Handelsregister B München)
++ USt-IdNr. DE221033342

Hans Teijgeler

unread,
Jan 6, 2016, 9:35:01 AM1/6/16
to ontolo...@googlegroups.com
Dear Matthew,

You wrote: Although this is not what ISO 15926 is designed for. It, like the
financial systems, is intended for interoperability amongst companies and
systems along the process plant lifecycle (in the first place).

I agree, but I'd like to phrase it differently:
ISO 15926 is designed for the integration of lifecycle information and that,
as a spin-off, allows for high-quality interoperability. Non-compliance with
the targets of the former will negatively influence the quality of the
latter.

On purpose I omitted reference to process plants, since in practice the
applicability of ISO 15926 is much wider, provided the proper reference data
are made available.
Some suggestions:
* air planes
* personal health records
* real estate
* etc, integrated lifecycle information about any other thing(s).

Regards,
Hans

Hans Teijgeler,
OntoConsult,
http://15926.org

-----Original Message-----
From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com]
--
All contributions to this forum by its members are made under an open
content license, open publication license, open source or free software
license. Unless otherwise specified, all Ontolog Forum content shall be
subject to the Creative Commons CC-BY-SA 4.0 License or its successors.
---
You received this message because you are subscribed to the Google Groups
"ontolog-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to ontolog-foru...@googlegroups.com.
To post to this group, send email to ontolo...@googlegroups.com.
Visit this group at https://groups.google.com/group/ontolog-forum.
To view this discussion on the web visit
https://groups.google.com/d/msgid/ontolog-forum/003601d14886%244b0f5a90%24e1
2e0fb0%24%40gmail.com.

Hans Teijgeler

unread,
Jan 6, 2016, 11:40:43 AM1/6/16
to ontolo...@googlegroups.com
Cory,
 
Please note that there are a number of dead links on that front page, like for instance https://github.com/edmcouncil/fibo/wiki/FIBO-Business-Entities
 
Regards,
Hans
 
Hans Teijgeler,
Laanweg 28,
1871 BJ Schoorl,
Netherlands
 


From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of Cory Casanave
Sent: dinsdag 5 januari 2016 22:18
To: Singer, John; Matthew West; 'ontolog-forum'

Azamat Abdoullaev

unread,
Jan 6, 2016, 11:55:42 AM1/6/16
to ontolog-forum
While the forum is contending to be or not to be for universal ontology,



Geographers are working out how to model geographic space and semantically enrich spatial data and territorial planning: http://www.igi-global.com/book/universal-ontology-geographic-space/59742, etc.

 

Below my introduction to the volume on GLOBAL INTEROPERABILITY

 

Interoperability is a critical idea needing depth and breadth and a common foundation framework. Its scope is as wide as railways, public safety, government, telecommunications, the medical industry, business, and software. Its depth ranges across physical interoperability, business process interoperability, computing interoperability, information interoperability, syntactic interoperability, semantic interoperability, and conceptual interoperability; and across industrial, national, international and global interoperability.

In general, interoperability implies common standards, formats, categorizations and integration, unifying models and schemas, as with software interoperability: the same data formats, the same communication protocols, and the same binary codes.

The General Interoperability Framework, GIF, is closely connected with a world/domain reference model as a common foundation ontology. Ideally, an all-purpose world model/schema provides the foundation for specialized domains and supports the various forms and levels of interoperability: technical, semantic, or ontological.

Thus any thing, product, system, agent, service, network, or technology, to be interoperable, must be compatible with the same standard, ideally with a standard ontology reference framework.
For example, for information exchange interoperability there are nation-level programs such as the EU Interoperability Framework, the US NIEM, and the UK e-GIF.

Take the US National Information Exchange Model: "It is designed to develop, disseminate and support enterprise-wide information exchange standards and processes that can enable jurisdictions to effectively share critical information in emergency situations, as well as support the day-to-day operations of agencies throughout the nation."
Its syntactic interoperability is to be achieved by using the XML Schema data model, constructs, and methods, thus supporting existing "legacy systems" across all levels of government: federal, state and local.
However, the issue of issues is how to achieve computable semantic interoperability among any and all communicating entities, legacy or not. Seemingly, that requires developing the GIF: a standard system of entities and relationships providing the semantic basis (meaning exchange/interpretation standards and processes) for more specialized domains, fields and applications.
Given that, to obtain the General Semantic Interoperability standard, costing hundreds of billions per year, means to develop a single world reference model, of which the global geo ontology is the foundational part.

Some additional fresh on the universal ontology and its applications:

http://www.slideshare.net/ashabook/universal-standard-entity-classification-system-usecs

http://www.slideshare.net/ashabook/total-encyclopedia

 http://iworldx.wix.com/smart-world  


I have the impression our ontology forum will be the last community to accept the global ontology :)


Neil McNaughton

unread,
Jan 6, 2016, 12:14:24 PM1/6/16
to ontolo...@googlegroups.com

Azamat,

 

Are you saying that the psychologists, IT experts and geographers are all using the same upper ontology?

 

Neil

 

From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of Azamat Abdoullaev


Sent: Wednesday, January 06, 2016 5:56 PM
To: ontolog-forum <ontolo...@googlegroups.com>

Azamat Abdoullaev

unread,
Jan 6, 2016, 12:40:10 PM1/6/16
to ontolog-forum

Please don't confuse the universal ontology, or the integrating ontology as Matthew stressed, with upper ontologies, which come in many kinds and sorts, as one can see from the Wikipedia article.

Matthew West

unread,
Jan 6, 2016, 2:47:15 PM1/6/16
to ontolo...@googlegroups.com
Dear Hans,
That's OK with me.
Regards
Matthew

Bruce Schuman

unread,
Jan 6, 2016, 6:36:22 PM1/6/16
to ontolo...@googlegroups.com

Bruce

> What I personally want to see emerge is an ontology based on a theory

> of concepts where the entire structure is 100% linear and recursive

> and essentially built from "one algebraic primitive"

 

John

That would be wonderful if it were possible.  But everybody who has attempted anything similar has failed.  That includes some brilliant logicians and scientists who founded the "unified science" movement in the 1930s.  That was a good try, but they never designed anything that achieved the goals they (and many ontologists today) hoped for.

People are still searching, but there is no consensus.  See http://plato.stanford.edu/entries/scientific-unity/

 

Bruce

Thanks.  That “dream of a unified science” is something I have looked at – but of course you are so right.  Could something new or unprecedented emerge?  Well – maybe.

 

For some mysterious compulsive reason I got into this stuff many years ago, with a relentless drive to compile and refine a comprehensive epistemological dictionary.  This project started off with something like 300 concepts I picked up from various texts and survey books, and once I had these things loaded into a simple outline processor, I just went over and over and over the definitions, trying to define these concepts in terms of one another, looking for simplifications and eliminating redundancy – and I ended up with the concept “dimension”.  It was like this huge grinding multi-year calculation finally terminated with the conclusion “everything is built out of dimensions”.
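A toy reconstruction of that reduction process (the definition graph below is invented for illustration, not Bruce's actual dictionary): each concept is defined in terms of other concepts, and repeatedly following the definitions shows what everything ultimately bottoms out in.

```python
# Invented definition graph: concept -> list of concepts used to define it.
DEFINITIONS = {
    "category":    ["boundary", "dimension"],
    "boundary":    ["dimension"],
    "concept":     ["category", "distinction"],
    "distinction": ["boundary"],
    "dimension":   [],  # primitive: defined in terms of nothing else
}

def primitives(term, defs):
    """Follow definitions recursively; return the primitives term rests on."""
    used = defs[term]
    if not used:
        return {term}
    out = set()
    for t in used:
        out |= primitives(t, defs)
    return out

for t in DEFINITIONS:
    print(t, "->", sorted(primitives(t, DEFINITIONS)))
```

In this invented graph every concept reduces to the single primitive "dimension", mirroring the conclusion Bruce describes reaching; a real dictionary of 300 concepts would of course be far messier, and cycles would need handling.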

 

I’ll enclose a couple of graphics that emerged for me on this.  The claim is – the prime integrating dimension of all conceptual structure is “level of abstraction”.  It’s the backbone of all conceptual form – and is seen explicitly in many computer science and taxonomic-type models.  So, this little graphic purports to show major features of logic and science and philosophy and cognition – all mapped to one graphic polarized across this spectrum.  All these big issues (reductionism vs. holism, or analysis vs. synthesis, or deduction vs. induction) are all seen as organized across the same one-dimensional spectrum.  All these elements are “parts of one whole” – various facets of the thinking-human enterprise.

 

Is this a gross over-simplification?  Just plain wrong-headed?  Flat-out doesn’t work?   I dunno.  I keep staring at this thing, and nobody has talked me out of it yet.

 

In the last week, I’ve put together a 250-page word.docx file with a review of these questions at the front and a big load of Wikipedia articles pasted in behind that.  I think I will just go back to extreme basics, and start with something really simple, like the Wikipedia definition of “concept” – which is just about generalization and abstraction.  Very basic. 

 

https://en.wikipedia.org/wiki/Concept

 

 

 

 

 

 

 

 


Rich Cooper

unread,
Jan 7, 2016, 11:53:05 AM1/7/16
to ontolo...@googlegroups.com

Dear Azamat,

 

Here is an article on smart cities that might be of interest to you, possibly others.

 

http://www.cio.com/video/60786/atandt-and-smart-cities

 

Sincerely,

Rich Cooper,

Rich Cooper,

 

Chief Technology Officer,

MetaSemantics Corporation

MetaSemantics AT EnglishLogicKernel DOT com

( 9 4 9 ) 5 2 5-5 7 1 2

http://www.EnglishLogicKernel.com

 

From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of Bruce Schuman
Sent: Wednesday, January 06, 2016 3:36 PM
To: ontolo...@googlegroups.com
Subject: RE: [ontolog-forum] Wikipedia on upper ontology

 

Bruce



Azamat Abdoullaev

unread,
Jan 7, 2016, 1:26:16 PM1/7/16
to ontolog-forum, metase...@englishlogickernel.com
Rich Cooper wrote:

"Dear Azamat,

 

Here is an article on smart cities that might be of interest to you, possibly others". http://www.cio.com/video/60786/atandt-and-smart-cities


Thank you, Rich

It is actually about what is not a smart city :)

Many multinationals are jumping on the bandwagon with no integrated i-city strategy, policy and implementation planning, mostly pushing their narrow commercial agendas, as in China (2 trillion yuan in smart-city developments), India (100 smart cities), etc.
This business strategy is corrupting the whole concept, seemingly one of the greatest ideas for next decade.







Rich Cooper

unread,
Jan 7, 2016, 3:04:33 PM1/7/16
to Azamat Abdoullaev, ontolog-forum

Azamat wrote:

 

AA:>It is actually about what is not a smart city :)

http://eu-smartcities.eu/blog/what-not-smart-city

 

RC:>I viewed that page (skipped the URL slides) and reached the text:

 

AA:>It’s when the sustainable world’s intelligent urbanism is synergistically driven by natural capital, social capital and digital capital, like as the Internet/Web of Things, Knowledge and Social Intelligence and Renewable Energy Sources.  A genuine sustainable community is consistently defined as digitally smart, socially intelligent, and ecologically sustainable.

 

RC:>That includes all the inputs I can think of; well done.  But I am not so sure that capital is what should be emphasized instead of capabilities.  But you are absolutely right that capital has to be available to make innovation happen.  Instead, I think the emphasis ought to be on picking qualified early startups that can fulfill one or more of the set of goals for said smart city within the smart city's budget for achieving that goal.  Managing all the threads and streams of talent and capital in an efficient way is extremely difficult to do.  But it can be approximated. 

 

RC:>I am completely unfamiliar with the Russian entrepreneurial system.  Here in the US, entrepreneurs with startup ideas first finance themselves, but run out of home equity and savings, friends and banks pretty quickly.  Then they go to an "Angel" investor who buys stock and provides the next capital infusion so the company can grow to the SMB stage - watch the "Sharks" programs on CNBC to see that kind of negotiation here.  Finally, if sales and history show strong growth rates, Venture Capital investors review the business plan and buy into more stock, at higher prices.  Eventually, the entrepreneurs are bought out of control, and they retire or just surf with the kids, start research institutes, invest it, or spend it all foolishly. 

 

RC:>Government sources vary all over the space, so I will leave that capital source and sink to your imagination.  But in any case, a smart city should be able to identify goals, select sources to satisfy the goals, and weight the values and costs of doing so in some equivalent kind of business plan template.  Allow for manageable growth and risk.  Then do the things that make sense, and cancel or delay the ones that don't make sense, yet. 

 

RC:>Some kind of financial and intelligence waterfall needs to be used for planning the totality of flow and measurement in said smart cities, IMHO.  There are probably a dozen or two template business plans that could be promulgated among the technocrats and MBAs to stimulate startup ideas.  Probably there will be numerous government related templates also.  But I have yet to see any such information on the web, not that I have been looking for it either.  I haven't as yet visited your URLs though.  I still have to do that. 

 

RC:>In the US, the Small Business Innovation Research (SBIR) program was a first attempt to do the equivalent for specific gov't agency needs.  It was the first attempt to give government and small businesses access to each other, but there have been no evolutionary improvements since it started, so this kind of outcome should be avoided if you can find a way to ensure an evolutionary learning process in advanced planning.  Evolutionary processes are what make businesses create knowledge, jobs, wealth, innovations, health and new concepts. 

 

AA:>Many multinationals are jumping on the bandwagon having no integrated I-city strategy, policy and implementation planning, mostly pushing their narrow commercial agenda, as in China (2 trillion Yuan smart cities developments, India (100 smart cities), etc.

This business strategy is corrupting the whole concept, seemingly one of the greatest ideas for next decade.

 

RC:>How else can it be done?  Every human worker has to pay his costs, save a little, pay off his bills, raise his kids, and many other requirements.  Each human has to have an income of some kind to encourage him or her to work instead of sleep, surf or schmooze.  So does the human's boss, and his boss, and the stockholders, and on up the chain.  Everybody who works expects pay.  If companies don't do it, where does the managing take place?  How can planning, measuring or adapting be accomplished?  An active mind focused on the individual's economic situation is essential. 

 

AA:>Something more on how to develop true intelligent cities of the future:

RC:>In summary, I think a diversity of business plan templates ought to be thought out, reconciled with the spectrum of candidate startup populations, capital sources and sinks, and tuned for max growth that way.  Same with the plan templates for gov't/business interactions. 

 

Sincerely,

Rich Cooper,

Rich Cooper,

 

Chief Technology Officer,

MetaSemantics Corporation

MetaSemantics AT EnglishLogicKernel DOT com

( 9 4 9 ) 5 2 5-5 7 1 2

http://www.EnglishLogicKernel.com

 


Azamat Abdoullaev

unread,
Jan 7, 2016, 4:24:11 PM1/7/16
to Rich Cooper, ontolog-forum

What should keep big corporations from applying the same business philosophy to smart cities as to smart phones? In fact, i-phones differ from i-cities as much as the sky from the earth.

If briefly, any city, small, mid-sized or global, needs to overcome at least three smart city challenges: 1) how to pay for it, 2) how to take an integrated approach and 3) how to handle the smart technology challenges.
Always keeping in mind the grand aim: to solve the social, economic and environmental issues for better citizen life and social services, intelligent business environment, and smart city administration.
So, the business models, conception, implementation and technical architecture must be more innovative than for I-phone products.
In all, there is a consistent methodology and strict algorithm for building the i-city of the future:
  • Understanding the (ontological) fundamentals that underpin the building of smart cities

  • Creating a smart city blueprint, an integrated policy and holistic planning

  • Enabling the smart city ecosystem, as the Public-Private-Citizen Partnership (government, business, citizen)

  • Enabling the i-city platform to run the whole urban areas its assets and resources, flows and processes

  • Engaging citizens and designing citizen-centric smart services

  • Harnessing ICT, as the IoT and big data analytics, to automate the urban life of citizens

  • Building safe and secure, inclusive and resilient, liveable and workable, intelligent and interconnected, innovative and digital cities

I have to note that a harmful partiality of smart city developers is coming from the lack of ontological knowledge as well.

Bruce Schuman

unread,
Jan 7, 2016, 10:11:49 PM1/7/16
to ontolo...@googlegroups.com

JS:

Everybody who has attempted anything similar has failed.  That includes some brilliant logicians and scientists who founded the "unified science" movement in the 1930s.  That was a good try, but they never designed anything that achieved the goals they (and many ontologists today) hoped for.

People are still searching, but there is no consensus.  See http://plato.stanford.edu/entries/scientific-unity/

 

BRS:

Last night, I was looking into “the grounding problem”, so I went through the Wikipedia article on that subject, which mentions C.S. Peirce’s approach: https://en.wikipedia.org/wiki/Symbol_grounding_problem

 

Then I remembered the Vienna Circle, which was a big influence in my world years ago, and set me on this path of insisting that all terms and categories be rigorously grounded.  I took that part of their program very seriously (I had been very impressed when Husserl said “philosophy must become a science”), but I only embraced half their thesis: I never accepted the other side of the equation -- the idea that “non-grounded terms are therefore meaningless”.  It seemed pretty clear to me that you can’t just kick out metaphysics and holism because they have a weakness: you have to fix the weakness.  There IS content.  You have to devise a grounding.

 

But this was a good connection – because the Vienna Circle played a big role in the Unified Science movement, as the Wikipedia article reviews:

 

https://en.wikipedia.org/wiki/Vienna_Circle

 

This business of grounding is the essence of those diagrams I posted.  The right side is holism, broadly inclusive highly abstract (and metaphysical or theological) categories – the big “container” categories at the top of the common “upper ontology of the human race” -- but very weakly grounded, if at all.  And the left side is “the empirical ground” – where the rubber meets the road – where things can be quantitatively measured.

 

But the Vienna Circle was absolutely right.  The lack of grounding in the intuitive or holistic disciplines is what breaks the chain – and is why the world is divided into “the sciences” and “the humanities”. 

 

I pasted the Stanford article on Unified Science into a word.docx and printed it – 28 pages not including the long bibliography.  It’s a very good article – and raises some important themes that might be very interesting to ontology people – and I counted the word “hierarchy” appearing 22 times.

 

Hierarchy just seems to be a prime integrator of conceptual structure, across all disciplines and levels – perhaps taking a different form every time – perhaps always locally purpose-driven, ad hoc and context-specific -- but always conforming to a common underlying invariant.

 

 

“The base of the hierarchy – the foundation of all quantitative measurement”

 

The logical connections were provided by biconditional statements, or constitution sentences (these changed to conditionals, or reduction sentences, when Carnap encountered the problem of dispositional predicates). Different constitutive systems or logical constructions would serve different purposes. In one system of unified science the construction connects concepts and laws of the different sciences at different levels, with physics—with its genuine laws—as fundamental, lying at the base of the hierarchy. Because of the emphasis on the formal and structural properties of our representations, the individuality of concepts, like that of nodes in a railway network, was determined by their place in the whole structure, and hence, presupposed connective unity. Objectivity and unity went hand in hand. The formal emphasis developed further in Logical Syntax of Language (1934).
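For readers unfamiliar with the terminology: Carnap's bilateral reduction sentence for a dispositional predicate such as "soluble" (from Testability and Meaning, 1936) has the form

```latex
% W(x,t): x is placed in water at time t
% S(x):   x is soluble
% D(x,t): x dissolves at time t
\forall x \, \forall t \; \bigl( W(x,t) \rightarrow ( S(x) \leftrightarrow D(x,t) ) \bigr)
```

Unlike a biconditional definition, this only partially fixes the meaning of S: nothing is said about objects that are never placed in water, which is the "problem of dispositional predicates" the passage mentions.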

 

Alternatively, all scientific concepts could be constituted or constructed in a different system in the protocol language out of classes of elementary, experiential concepts. The basic experiences do not provide a reductive analysis of theoretical concepts; nor are the basic empirical concepts (red, etc) the outcome of an analysis of experience. They are not atomic in the Machian sense, but derived from the field of experience as a complex whole in the manner proposed by Gestalt psychology. This construction of scientific knowledge took into account the possibility of empirical grounding of theoretical concepts and testability of theoretical claims. Unity of science in this context was an epistemological project.

 

Carnap was influenced by the phenomenological tradition (through Husserl himself) and empiricist tradition (especially Russell and Mach) and the ideals of simplicity and reductive logical analysis in the early works of Russell and Wittgenstein. From the formalist point of view of the logicist and neo-Kantian traditions, Carnap's models of unity expressed his concern with the possibility of objectivity in scientific knowledge. The same concern was expressed in the subsequent idea of unity of science in the form of a physicalist language, the intersubjective language that translates the subjective language of experience into an objective and universal language. Carnap's pragmatic pluralism would extend to logic with his Principle of Tolerance—in Logical Syntax of Language (1934)—and subsequently—in “Empiricism, Semantics, and Ontology” (1950)—to the plurality of possible “linguistic frameworks”.

 

 

 

 

 

 

 

 

 

[attached images: image001.png, polaropposites4.jpg]

Azamat Abdoullaev

unread,
Jan 8, 2016, 5:53:32 AM1/8/16
to ontolog-forum

JS:

"Everybody who has attempted anything similar has failed.  That includes some brilliant logicians and scientists who founded the "unified science" movement in the 1930s.  That was a good try, but they never designed anything that achieved the goals they (and many ontologists today) hoped for.

People are still searching, but there is no consensus.  See http://plato.stanford.edu/entries/scientific-unity/"

Agree.

Moreover, today there is even less chance of success for the project of the unity of things and the unification of world knowledge.

"Unification of disciplines", whether interdisciplinary, multidisciplinary, crossdisciplinary or transdisciplinary, is mostly empty talk, for all the knowledge infrastructure is made for experts, specialists, and narrow expertise. Any good interdisciplinary project group has no real prospects for awards, grants, subsidies, etc., so long as it is reviewed by the same "experts".
Pluralism today is associated with multiple values, views, concepts, theories, explanations, virtues, goals, methods, models, representations, religions, cultures, political systems, lifestyles, etc.
The disorder of the world, the disunity of things, the plurality of cultures and the deep fragmentation of knowledge are the norm rather than the exception today.

I see one key reason for all this: human mind laziness.

Unity, fundamentality, unification, universality, holism, harmony, integration, complexity or totality all demand hard mental work and high intellectual capacity. It is easier to stay narrowly specialized, closed to the open dynamic world of knowledge, collecting all the small but sure benefits: salaries, grants, awards, etc.
It is plain that all the disorder of the world comes from its unlimited "pluralism", fragmentation, specialization, division, disunity and disintegration.

Meanwhile, unity and integration are the only way for humanity to survive and progress.
The whole course of technological development, from the ancient axle and wheel to the internet, smart phones and intelligent cities, serves integration, unification and connection.

We need unified philosophies, sciences, arts and technologies if humanity seeks SMART Globalization: the global movement of people, goods, capital, finance, ideas and knowledge without social, economic, demographic and environmental problems such as mass unemployment, hunger and poverty, mass migration, climate change, soil, water and air pollution, over-fishing of the oceans and other ecological crimes.

USECS Universal Standard Entity Classification System: Integrating the World's Knowledge, Intelligence and Data




--
All contributions to this forum by its members are made under an open content license, open publication license, open source or free software license. Unless otherwise specified, all Ontolog Forum content shall be subject to the Creative Commons CC-BY-SA 4.0 License or its successors.
---
You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ontolog-foru...@googlegroups.com.
To post to this group, send email to ontolo...@googlegroups.com.
Visit this group at https://groups.google.com/group/ontolog-forum.

John F Sowa

unread,
Jan 11, 2016, 1:17:43 AM1/11/16
to ontolo...@googlegroups.com
Dear Matthew, Pat C, Michael B, Azamat, Bruce, and Rich,

I'll summarize three basic points:

1. There is a continuum from a shallow ontology based on a terminology
with a few statements in OWL to the deepest ontology of an artifact:
the complete information needed to implement a replica and specify
all possible operations -- for example, the specifications for
a Boeing 787. In between are large formal ontologies such as Cyc.

2. Interoperability based on a shallow ontology has been successful
since the first punched-card systems in the 1890s. Those systems
were based on terminologies (for bookkeeping, finance, geography,
science, engineering, medicine...) that were sufficiently precise
to be implemented on digital computers.

3. Adding OWL to the terminologies of #2 is important for automating
the process. The coverage can be much broader, but not much
deeper. OWL reasoning is shallow compared to Cyc.

>> When two banks merge, they have many similar services for checking,
>> savings, etc. But they *never* merge the software systems for the
>> two banks...
> [MW] What you have described *is* the two systems interoperating...
> Now if one of them was much superior to the other and had all the
> capabilities of the other, then it should be possible to transfer
> the data of the inferior system to the superior one, some mapping
> between terms would still probably be necessary.

It's not a question of inferior or superior. The EFT categories can
be used to relate many of the names, but not the details and other
info that OWL cannot express. There is also the question of how
they're organized in the DB. The conceptual schema (proposed in
the 1970s) is the equivalent of an ontology. But there has been
40+ years of debate about these issues without a consensus.

>> For anything other than a toy system, the total number of "primitives"
>> keeps growing indefinitely.
> [MW] This is the approach that ISO 15926 takes. And yes the total
> number of primitives does keep growing, so you need to have a method
> for adding new ones.

Yes. And any general method is directly or indirectly justified
by the lattice of all possible theories expressible in the given
logic. Every logically sound method for adding new primitives
can be specified as a map for walking through that lattice.

Pat
> Conversions of data from one terminology to another can and should
> be done automatically through a common foundation ontology.

Michael
> My suspicion is that suitable conversions may not exist, are
> usually hard to find if they exist and cannot be canonicalized.

I agree with both of you. The lattice of theories is the framework
that can systematize and relate all possible operations. For many
important cases, the conversions are possible, and consistency can
be proved. But there are many cases for which the proofs are
undecidable or extremely difficult to find.

Those principles are true of every branch of science and engineering.
Ontology is *identical* to the totality of all the sciences (for all
things in nature) and all branches of engineering (for all possible
artifacts). Those two observations imply that a complete ontology
of everything would require

1. Every possible scientific question to be answered, and

2. Every possible invention to be invented.

To accomplish both points completely would probably take longer
than the age of the universe. But science and engineering do make
progress. Although the process is never finished, the many steps
along the way can be very useful.

Azamat
> The title "Universal ontology" of that article is a major exaggeration.

Azamat
> Given that, to obtain the General Semantic Interoperability standard,
> costing hundreds billions per year, means to develop a single world
> reference model, of which the global geo ontology is the foundational
> part.

If you're satisfied with shallow interoperability, we have had that
for over a century. Current methods are somewhat better. But we
won't have a completed, perfect ontology for a long, long time.

Bruce
> Could something new or unprecedented emerge? Well – maybe.

I would say *inevitably*. Every year some surprises come up.
Every 10 years, the progress is obvious. Every 20 years,
major new or unprecedented things appear. In 1996, how many
people imagined that we could not only watch television on our
mobile phones, we could also make movies with them and send
them around the world?

I doubt that Steve Jobs had imagined that in 1996.
AT&T, Microsoft, and Nokia certainly didn't.

Azamat
> The General Interoperability Framework, GIF, looks closely connected
> with a world/domain reference model as common foundation ontology.

To see what people in 1900 imagined about the year 2000, see
slide 3 of http://www.jfsowa.com/talks/nlu.pdf . Click the URL
at the bottom of the slide for more examples.

However, some general principles will still be true. A detailed
ontology in 1900 would be obsolete. But Husserl's book, which
was published in 1900, is very much worth reading today -- and
Peirce's writings from the same era are, in many ways, more
advanced than most of the ontologies written today.

Note that human thinking hasn't changed very much. Slide 3
is similar to what some people think today about NLP. They
think that you can just dump a book into a computer, which
will magically process it. But the task is much, much harder.

Bruce
> I remembered the Vienna Circle, which was a big influence in my world
> years ago... I had been very impressed when Husserl said “philosophy
> must become a science.”

Unfortunately, the Vienna Circlers took a very narrow view of what
it means for philosophy to become a science. Husserl had a broader
view of the requirements than Carnap, for example. In his
_Logical Investigations_, Husserl followed Brentano in
making intentionality a major part. But most of the analytic
philosophers dumped intentionality because it was "anthropomorphic".

However, you can't understand anything about life -- from bacteria
on up -- without intentionality. The biologist Lynn Margulis,
for example, said that a bacterium swimming upstream in a glucose
gradient exhibits the beginnings of intentionality that is
continuous with human experience, planning, and understanding.

Summary: Intentionality is *not* anthropomorphic -- it's biomorphic.
It's impossible to understand living things -- all animals, plants,
and even bacteria -- without understanding intentionality.

Rich
> The axiological issues that are significant for the evolution of
> communication theory are whether research can be truly free of value
> and whether the end for the administered research should be designed
> to expand knowledge or to change society.

Value judgments are essential for determining everything we do or say.
Or for understanding anything that anybody else -- human or animal
-- does or says. The term "change society" sounds like something
a do-gooder would say. But everything that anybody does or says
-- for better or worse -- makes some change.

Rich
> So how can a value be something objective, abstract, that every
> human being agrees on?

There will *never* be total agreement on everything. But it's
*extremely* important to understand the process. If you don't
have some understanding, you're totally lost -- clueless.

John

segun alayande

unread,
Jan 11, 2016, 6:10:38 AM1/11/16
to ontolo...@googlegroups.com, bo...@rockportsoft.com
Dear John,
 
"Human Mind Laziness". Great summary of my experience implementing enterprise architecture programmes in a number of organisations and industries. 
 
Some detractors (in IT leadership) claim it is all theoretical and often claim that the average business manager does not want the holistic approach. They forget that best practice is great theory. Yet in their presentation slide decks they talk about requirements for enterprise IT simplification and integration to reduce lifecycle maintenance costs.
 
On the podium, they also talk about IT strategies that enable business partner collaboration based on information sharing and integration, but are keen to knock the development of ontology frameworks that integrate disparate specialist functional vocabularies in their respective organisations.
 
In the Aviation industry, many Airlines and Airports developed mobile apps for passengers. A cost-sensitive Passenger would need to install many apps to get services across Airports and Airlines.
 
The organisation Airports Council International (ACI), realising the pain that Passengers are experiencing, is now working with IATA, the Airline industry's representative body, to develop a standards-based Application Programming Interface (API) that would enable all mobile apps to connect across Airline and Airport systems. This API is based on a holistic ontology of sorts that integrates aviation knowledge, known as the ACRIS Semantic Model. It has also taken a few years to sell the concept of this holistic aviation ontology within ACI and IATA.
 
It has required many organisations to go through the cycle of putting out the apps, realising the impact on their customers, appreciating the benefits of a holistic approach, and then committing to the ontology initiative.
 
Thank you for the link.

Best regards

Segun
07932651840



Matthew West

unread,
Jan 11, 2016, 8:09:22 AM1/11/16
to ontolo...@googlegroups.com
Dear John,

>> When two banks merge, they have many similar services for checking,
>> savings, etc. But they *never* merge the software systems for the two
>> banks...
> [MW] What you have described *is* the two systems interoperating...
> Now if one of them was much superior to the other and had all the
> capabilities of the other, then it should be possible to transfer the
> data of the inferior system to the superior one, some mapping between
> terms would still probably be necessary.

[JS] It's not a question of inferior or superior. The EFT categories can be used to relate many of the names, but not the details and other info that OWL cannot express. There is also the question of how they're organized in the DB. The conceptual schema (proposed in the 1970s) is the equivalent of an ontology. But there has been
40+ years of debate about these issues without a consensus.
[MW>] I'm not aware of anyone using OWL for this. My experience is that mapping requires First Order Logic to be able to support the range of requirements that mapping can throw up, though I can neither prove that it is necessary nor that it is sufficient.

>> For anything other than a toy system, the total number of "primitives"
>> keeps growing indefinitely.
> [MW] This is the approach that ISO 15926 takes. And yes the total
> number of primitives does keep growing, so you need to have a method
> for adding new ones.

[JS] Yes. And any general method is directly or indirectly justified by the lattice of all possible theories expressible in the given logic. Every logically sound method for adding new primitives can be specified as a map for walking through that lattice.
[MW>] I was not thinking of a logical method for generating additional primitives, just a pragmatic one. Consider an ontology that has pump as a primitive, and someone comes along and points out that there are centrifugal pumps, and reciprocating pumps and so on. Then I need to introduce some new primitives so that I can make these distinctions.
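A minimal sketch of this pragmatic method: an ontology keeps a set of primitives plus subtype links, and finer primitives are registered under an existing one as distinctions are discovered. (The class and type names are illustrative; this is not how ISO 15926 itself encodes types.)

```python
# Sketch: primitives plus subtype links; new, finer primitives hang
# under an existing one.  Names like "Pump" are illustrative only.

class Ontology:
    def __init__(self):
        self.primitives = set()
        self.parent = {}               # subtype -> supertype

    def add_primitive(self, name, supertype=None):
        # A new primitive may optionally be placed under an existing one.
        if supertype is not None and supertype not in self.primitives:
            raise ValueError(f"unknown supertype: {supertype}")
        self.primitives.add(name)
        if supertype is not None:
            self.parent[name] = supertype

    def is_subtype_of(self, name, ancestor):
        # Walk the parent chain upward (reflexive at the top).
        while name in self.parent:
            name = self.parent[name]
            if name == ancestor:
                return True
        return name == ancestor

onto = Ontology()
onto.add_primitive("Pump")
# Someone points out finer distinctions, so new primitives are added:
onto.add_primitive("CentrifugalPump", supertype="Pump")
onto.add_primitive("ReciprocatingPump", supertype="Pump")
print(onto.is_subtype_of("CentrifugalPump", "Pump"))   # True
```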

Rich Cooper

unread,
Jan 11, 2016, 11:08:46 AM1/11/16
to ontolo...@googlegroups.com

Folks wrote:

        >> For anything other than a toy system, the total number of "primitives"

        >> keeps growing indefinitely.

        > [MW] This is the approach that ISO 15926 takes. And yes the total

        > number of primitives does keep growing, so you need to have a method

        > for adding new ones.

        JFS:> Yes.  And any general method is directly or indirectly justified by the lattice of all possible theories expressible in the given logic.  Every logically sound method for adding new primitives can be specified as a map for walking through that lattice.

        Pat

        > Conversions of data from one terminology to another can and should be

        > done automatically through a common foundation ontology.

        Michael

        > My suspicion is that suitable conversions may not exist, are usually

        > hard to find if they exist and cannot be canonicalized.

        JFS:>I agree with both of you.  The lattice of theories is the framework that can systematize and relate all possible operations.  For many important cases, the conversions are possible, and consistency can be proved.  But there are many cases for which the proofs are undecidable or extremely difficult to find.

        JFS:>Those principles are true of every branch of science and engineering.

        Ontology is *identical* to the totality of all the sciences (for all things in nature) and all branches of engineering (for all possible artifacts).  Those two observations imply that a complete ontology of everything would require

          1. Every possible scientific question to be answered, and

          2. Every possible invention to be invented.

        JFS:> To accomplish both points completely would probably take longer than the age of the universe.  But science and engineering do make progress.  Although the process is never finished, the many steps along the way can be very useful.

    But truly novel designs, new technology concepts, improvements in newer methods, all result in distortions to the lattice.

    Here for example is a wind turbine generator with no moving parts - no propeller to kill California Condors or American Eagles as the current designs appear to do.

    So how EXACTLY would you put this new concept into the lattice without breaking it?  Especially in ISO 15926, which has nothing to do with wind, Condors or Eagles?  How would you describe the benefits and costs of the new element?  A very specific example is this wind turbine without propellers, which you can read about here:

    http://www.patent2pdf.com/pdf/09222465.pdf

    Sincerely,

    Rich Cooper,

    Chief Technology Officer,

    MetaSemantics Corporation

    MetaSemantics AT EnglishLogicKernel DOT com

    ( 9 4 9 ) 5 2 5-5 7 1 2

    http://www.EnglishLogicKernel.com



    Rich Cooper

    unread,
    Jan 11, 2016, 11:33:43 AM1/11/16
    to ontolo...@googlegroups.com

    Here is a description of said propeller-less wind turbine:

     

    How exactly could this invention be placed into the lattice, given what is ALREADY in ISO 15926?

     

    Sincerely,

    Rich Cooper,

     

    Chief Technology Officer,

    MetaSemantics Corporation

    MetaSemantics AT EnglishLogicKernel DOT com

    ( 9 4 9 ) 5 2 5-5 7 1 2

    http://www.EnglishLogicKernel.com

     

    [attached image image003.jpg: the propeller-less wind turbine described above]

    Azamat Abdoullaev

    unread,
    Jan 11, 2016, 11:34:24 AM1/11/16
    to ontolog-forum

    John wrote: "Ontology is *identical* to the totality of all the sciences (for all things in nature) and all branches of engineering (for all possible artifacts)."

    John,

    This is too broad a definition.

    In fact, Ontology is *identical* to the foundations of "all the sciences (for all things in nature) and all branches of engineering (for all possible artifacts)".  This correction implies that a complete ontology of everything wouldn't require

      1. "Every possible scientific question to be answered, and

      2. Every possible invention to be invented".

    John F Sowa

    unread,
    Jan 11, 2016, 1:01:25 PM1/11/16
    to ontolo...@googlegroups.com
    Dear Azamat and Rich,

    Azamat
    > John wrote: "Ontology is *identical* to the totality of all the
    > sciences (for all things in nature) and all branches of engineering
    > (for all possible artifacts).
    >
    > This is too broad a definition.

    I was defining the philosophical term 'ontology', which is a study
    of everything that exists. Nobody has discovered the total ontology,
    but the infinite lattice contains it (even if you can't find it).

    > In fact, Ontology is *identical* to the foundations "of all
    > the sciences...

    That perfect foundation might be *more* difficult to find than
    the very broad version that I stated -- because you need to
    examine all the existing things to check whether there is
    anything missing.

    If a perfect foundation is possible (and that's still an
    unproved assumption), then it will also be in that infinite
    lattice, and the more detailed ontology will be another
    theory in the lattice that is a specialization of it.

    Please remember that the infinite lattice is truly infinite.
    You can think of it as a subset of the mind of God.

    Rich
    > But truly novel designs, new technology concepts, improvements
    > in newer methods, all result in distortions to the lattice.

    The infinite lattice contains *all possible* theories that may
    be stated in whatever version of logic is used for specifying
    ontologies. It describes all the inventions and theories (good,
    bad, or indifferent) that anyone in any galaxy might ever imagine.
    Nothing can distort it.

    > So how EXACTLY would you put this new concept into the lattice
    > without breaking it?

    The infinite lattice already contains all possible theories.
    Nothing can break it. The worst that can happen is that some
    theory that you state might be inconsistent. That just means
    that it's equivalent to the absurd theory, which is already
    located at the bottom of the lattice.

    > Especially, in ISO 15926, which has nothing to do with wind,
    > Condors or Eagles?

    If you are talking about something totally different, then
    that greatly *simplifies* the addition. See my previous note
    about Matthew's method:

    1. If you start with a consistent theory T1, you need to ensure
    that the statements of the new theory T2 are consistent with T1.

    2. If T2 talks about totally different topics, then there is no
    possibility of an inconsistency with T1.

    3. Therefore, you can just add the axioms of T2 to T1 to form T3.

    > A very specific example of this wind turbine without propellers

    If T1 states that all wind turbines have propellers, that could
    create an inconsistency if you call the new things wind turbines.
    Simple solution:

    1. Generate a new name "PropellerLessTurbine" or "PLT".

    2. Introduce "GeneralTurbine" as a supertype of "WindTurbine"
    and "PropellerlessTurbine".

    3. Rename the conflicting terms of T2, but leave T1 unchanged.

    The new theory that combines the old T1 (unchanged), the renamed T2,
    and the statement about GeneralTurbine and its subtypes, will be in
    the lattice as a specialization of T1 and the renamed T2.
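The three renaming steps above can be sketched by treating a theory as a set of axiom strings over type names. (The axiom syntax is made up for illustration; any logic with subtype statements would do.)

```python
# Sketch of John's renaming method; the axiom notation is invented.

T1 = {"subtype(WindTurbine, Turbine)",
      "all(WindTurbine, hasPart, Propeller)"}

# T2 describes the new propeller-less device but reuses the name
# "WindTurbine", which would contradict T1's propeller axiom.
T2 = {"subtype(WindTurbine, Turbine)",
      "none(WindTurbine, hasPart, Propeller)"}

# Step 1: rename the conflicting term of T2, leaving T1 unchanged.
T2_renamed = {ax.replace("WindTurbine", "PropellerlessTurbine") for ax in T2}

# Step 2: introduce a common supertype for both kinds of turbine.
bridge = {"subtype(WindTurbine, GeneralTurbine)",
          "subtype(PropellerlessTurbine, GeneralTurbine)"}

# Step 3: the combined theory is a specialization of T1 and T2_renamed.
T3 = T1 | T2_renamed | bridge
print(len(T3))   # 6 axioms, no name clash
```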

    John

    Rich Cooper

    unread,
    Jan 11, 2016, 1:04:23 PM1/11/16
    to ontolo...@googlegroups.com

    Azamat wrote:

    AA:> In fact, Ontology is *identical* to the foundations of "all the sciences (for all things in nature) and all branches of engineering (for all possible artifacts)".  This correction implies that a complete ontology of everything wouldn't require

      1. "Every possible scientific question to be answered, and

      2. Every possible invention to be invented".

    I agree with that modification, but even that is still not sufficient.  Among the other sources of knowledge, in addition to science, there is process knowledge from Hawaiian cooking, wheat farming, Eskimo construction methods, cold fusion, metrology, and then we can work on the larger parts of science that are based on empirical knowledge, such as axiology - see the recent few emails on that, plus the Wikipedia article on it.  Knowledge of any field starts with empirical knowledge, theoretical knowledge, knowledge of how to make use of objects, and so on ad infinitum.

     

    Sincerely,

    Rich Cooper,

     

    Chief Technology Officer,

    MetaSemantics Corporation

    MetaSemantics AT EnglishLogicKernel DOT com

    ( 9 4 9 ) 5 2 5-5 7 1 2

    http://www.EnglishLogicKernel.com

     

    Azamat Abdoullaev

    unread,
    Jan 11, 2016, 2:38:10 PM1/11/16
    to ontolog-forum
    Rich,

    These are all good points, and they are covered by the applied sciences and engineering.
    Re axiology as value theory: it is a pillar of philosophy, together with logic, epistemology and praxeology, and the foundation of ethics, economics and the social sciences.
    Universal Ontology is the ultimate foundation, the main roots of the whole Knowledge Tree.

    Rich Cooper

    unread,
    Jan 11, 2016, 8:11:14 PM1/11/16
    to ontolo...@googlegroups.com

    Dear Azamat and John

     

    You wrote:

    AA:> These are all good points, and they are covered by the applied sciences and engineering.

     

    RC:>No, applied sciences and engineering are insufficient.  There is also context, customary practices, sociology, all things in the entire universe; and IN ADDITION, all REPRESENTATIONS of said things, and then all REPRESENTATIONS of those immediately aforesaid things, ad infinitum.  Things evolve.  Common patterns merge into classes, common classes into concepts, common concepts into arguments. 

     

    In other words, there is no THERE there - there is no magical infinite source of (choose any one): knowledge, values, wisdom, practices, customs, art, …  .

     

    John, especially, I am surprised at!  You write with true reverence in your text choices about infinity as being like the mind of God!  That is far too Pat an answer!  We all know how to represent infinity and reason about infinities by saying "everything exists that exists, that has ever existed, or that ever will exist, …", but that gets too far into the religious side for me.  I, personally, have never met an infinity, nor spoken to one, nor tried to discover truth in the lattice with one.  So I am out of this one.  But stay tuned for the more real ones.

     

    AA:> Re axiology as value theory, it makes a pillar of philosophy, together with logic and epistemology and praxeology, the foundation of ethics, economics and social sciences.

    Universal Ontology is the ultimate foundations, or the main roots for all Knowledge Tree.

    See slides 4-6: http://www.slideshare.net/ashabook/philosophy-science-arts-technology-grand-unification

     

    But Axiology, so far as I see it explained in Wikipedia's entry on it, has some relatively shallow substance - take all inputs to an And as necessary, take at least one input to an Or as necessary, etc.  Or do you have a deeper reference than the Wiki article?

     

    In any case, if the Universe is intended to be what exists, and then you state that what exists is all of science and math, that is one thing.  But when you add ideas, you begin adding representations.  Every idea has to be represented in some fashion that establishes its context: the system, design, or other of those Things in the Universe.  But then you have to add more REPRESENTATIONS of Things, including said representations, and it descends into infinite recursion from there.  Which we know is a bad thing.

     

    So it is a recursive argument without terminating merit. 

    John F Sowa

    unread,
    Jan 11, 2016, 10:39:34 PM1/11/16
    to ontolo...@googlegroups.com
    Dear Matthew,

    Some additions, not disagreements:

    >> The EFT categories can be used to relate many of the names,
    >> but not the details and other info that OWL cannot express.
    >
    > [MW] I'm not aware of anyone using OWL for this. My experience is
    > that mapping requires First Order Logic to be able to support the
    > range of requirements that mapping can throw up, though I can neither
    > prove that it is necessary nor that it is sufficient.

    EFT, as far as I know, does not use OWL. But I just mentioned OWL as
    an example of a limited logic that would be sufficient to represent
    the limited formats of EFT specifications.

    But I agree that a richer logic may be necessary to specify a mapping
    between two OWL ontologies. You could probably get by with the Horn-
    clause subset of FOL (which is used in most rule-based systems).
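A toy illustration of the Horn-clause point: facts in one bank's vocabulary are rewritten into another's by forward-chaining simple if-then rules. (All predicate and account names here are invented for the example; real mappings are far richer.)

```python
# Naive forward chaining over Horn rules.  Each rule is (body, head);
# terms beginning with "?" are variables.  Predicate names are invented.

def match(pattern, fact, binding):
    # Try to unify one body atom with one fact under current bindings.
    if len(pattern) != len(fact):
        return None
    b = dict(binding)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if b.get(p, f) != f:
                return None
            b[p] = f
        elif p != f:
            return None
    return b

def solve(body, facts, binding):
    # Yield every binding that satisfies all body atoms.
    if not body:
        yield binding
        return
    for fact in facts:
        b = match(body[0], fact, binding)
        if b is not None:
            yield from solve(body[1:], facts, b)

def chain(facts, rules):
    # Apply rules until no new facts appear.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            for b in list(solve(body, facts, {})):
                new = tuple(b.get(t, t) for t in head)
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

# BankA's "checking account" records map onto BankB's "demand deposit"
# vocabulary via a single Horn rule.
facts = {("checkingAccount", "acct42")}
rules = [([("checkingAccount", "?x")], ("demandDeposit", "?x"))]
mapped = chain(facts, rules)
print(sorted(mapped))
```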

    >> Any general method for adding new primitives is directly or indirectly
    >> justified by the lattice of all possible theories expressible in the
    >> given logic. Every logically sound method can be specified as a map
    >> for walking through that lattice.
    >
    > [MW] I was not thinking of a logical method for generating additional
    > primitives, just a pragmatic one. Consider an ontology that has pump
    > as a primitive, and someone comes along and points out that there are
    > centrifugal pumps, and reciprocating pumps and so on. Then I need to
    > introduce some new primitives so that I can make these distinctions.

    I am definitely in favor of good, simple pragmatic methods. But it's
    also true that any *logically sound* method can be justified by showing
    how it is related to the lattice of theories. For your method,

    1. Start with the theory T1, which represents some current ontology.
    Since the lattice contains all possible theories, T1 is in it.

    2. Define a theory T2, which states some axioms that use the
    new primitives. T2 is also in the lattice.

    3. Combine all axioms of T1 and T2 to form a new theory T3.
    Then T3 is a common specialization of T1 and T2.

    Since T2 makes statements about relations that are not used in T1,
    you can follow some simple guidelines to ensure that every statement
    in T2 is consistent with T1.

    If T1 and T2 are consistent and T2 is consistent with T1, then T3
    will also be a consistent theory that is located below T1 and T2
    in the lattice.

    QED -- Quite Easily Done. (But I admit that there are more complex
    issues about combining ontologies that aren't quite so easy.)
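    The three steps can be sketched as a toy model -- theories as sets of
    axiom strings, with axiom-set containment standing in for entailment
    (the real lattice ordering). The axiom strings and names are purely
    illustrative, not from any actual ontology:

```python
# Toy model of the lattice walk: a theory is a frozenset of axioms, and
# "T specializes S" (T is at or below S in the lattice) is approximated
# by T carrying every axiom of S.

def specializes(t, s):
    """True if theory t is a specialization of (at or below) theory s."""
    return s <= t  # t contains all of s's axioms (and possibly more)

def combine(t1, t2):
    """Step 3: T3 = all axioms of T1 and T2 -- a common specialization."""
    return t1 | t2

# Step 1: the current ontology T1 is somewhere in the lattice.
T1 = frozenset({"pump(x) -> device(x)"})

# Step 2: new axioms that use the new primitives (relations not in T1).
T2 = frozenset({"centrifugal_pump(x) -> pump(x)",
                "reciprocating_pump(x) -> pump(x)"})

# Step 3: T3 sits below both T1 and T2 in the lattice.
T3 = combine(T1, T2)
assert specializes(T3, T1) and specializes(T3, T2)
```

    (Checking that T2 is actually consistent with T1 is the part this
    sketch omits; containment only models the ordering, not entailment.)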

    John

    John F Sowa

    unread,
    Jan 11, 2016, 10:59:19 PM1/11/16
    to ontolo...@googlegroups.com
    Rich,

    I'm trying to make a few points:

    1. The infinite lattice of all possible theories expressible
    in a given logic (AKA the Lindenbaum lattice) is very useful
    for organizing and relating ontologies.

    2. When talking about ontologies, it's useful to distinguish
    things that occur in nature from those that are designed
    by humans (AKA artifacts).

    3. Any proposed ontology (or foundation for ontologies) is
    closely related to science (for natural things) and to
    engineering (for humanly designed or invented things).

    > Among the other sources of knowledge, in addition to science,
    > there is process knowledge from Hawaiian cooking, wheat farming,
    > Eskimo construction methods, cold fusion, metrology....

    Re processes: I was using the word 'thing' in a broad sense.
    I agree with Whitehead that processes are more fundamental
    than things that are called objects. Methods of cooking,
    farming, construction, measurement, etc., are processes
    that I would include with the human inventions or designs.

    But whether you call them processes, objects, or methods,
    the theories about them are somewhere in the infinite lattice.

    > Knowledge of any field starts with empirical knowledge,

    Fine. I discuss topics of learning, etc., in the following

    http://www.jfsowa.com/talks/cogcycle.pdf
    The cognitive cycle

    > there is no magical infinite source of (choose any one): knowledge,
    > values, wisdom, practices, customs, art, …

    No, but any theory or hypothesis you prefer is somewhere
    in that lattice. How to find it and how to determine whether
    it's true, useful, or whatever depends on the cognitive cycle.

    > You write with true reverence in your text choices about infinity
    > as being like the mind of God!

    I didn't say that infinity was like the mind of God. But people
    sometimes talk about "a God's eye view" or "the mind of God"
    as a metaphor or convenient way of expressing some totality
    of thought that is far bigger than anything we may know.

    I said that the lattice of all possible theories is like that.
    It's a totality of all possible ways of thinking, talking, and
    reasoning about the world.

    > I, personally, have never met an infinity

    Nobody has. But mathematical methods can postulate them and
    reason about them. That's all I was trying to say.

    > do you have a deeper reference than the WikiArticle?

    You might start with the Stanford Encyclopedia of Philosophy:
    http://plato.stanford.edu/entries/value-theory/

    In any case, we make value judgments every time we make a choice
    or a decision of any kind on any subject of any kind.

    If we're trying to design NLP systems that can understand
    a story or understand why people do or say anything, we need
    an ontology that can recognize value judgments and can use
    them to understand speech and other forms of behavior.

    John

    Rich Cooper

    unread,
    Jan 12, 2016, 12:03:37 AM1/12/16
    to ontolo...@googlegroups.com

    Thanks John,

    That explanation makes things clearer to imagine.  I have to revise my perception of this godhead lattice to more of an addressability of things, rather than an existence of things, I guess.  I can see an infinitely large repeat (think beeswax honeycomb lattice repeated forever in n dimensions) with more fundamental ideas in each beeswax honeycomb cell as the conceptual structure of reality, in that sense.  Much like Goedel used iterating the primes in his model.

    Infinity is better organized into an iterated structure, as Goedel described his array of objects.  But by definition, nobody will ever know all that empirical information, which is, by definition, uniterable.  Or more accurately, only iterable in one dimension.  That makes it irrational information, like art, or law, or philosophy, or cooking, … .  So you can't perceive, in one human head, all the details that are symbolic of that reality, much less find the object that matches your query x. 

    If it isn't iterable, how do you find your cell with just partial matching information?  For example, to solve an equation like d = r*T, you have to apply the knowledge of how multiplication and assignment work.  Then you have to use division to invert r or T to solve as a single unknown.  That works fine with numbers because they operate on the lattice concept ever since Cartesian graphs were developed and used with N dimensions. 
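    The algebraic step described here -- inverting d = r*T by division to
    isolate the single unknown -- is mechanical enough to write down. A
    minimal sketch, with the helper name invented for illustration:

```python
# Solve d = r*T for whichever one of the three variables is unknown.

def solve_drt(d=None, r=None, T=None):
    """Exactly one argument must be None; return its value."""
    if d is None:
        return r * T      # d = r*T   (multiplication)
    if r is None:
        return d / T      # r = d/T   (divide to invert T)
    return d / r          # T = d/r   (divide to invert r)

assert solve_drt(r=60, T=2) == 120
assert solve_drt(d=120, T=2) == 60
assert solve_drt(d=120, r=60) == 2
```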

    But I don't see the same thing with empirical information.  That information contains all those things we predict, but aren't really sure about, whether now, in the past, or yet to be.  It's too complex for visual acuity.  And lots of information has been found before any theoretical explanations began to be considered.  So it may as well be beyond the causal cone.  And if you can't foresee it, you can't prevent it, or even worry effectively about it.  So kiss it off.  Surf.  Watch a movie. 

    But I can believe in a Goedelian beeswax comb model with prediction of ONLY SIMPLE, REPEATABLE patterns like math or FOL along the axes.  That is a representation of all the lovely theories and combinations of theories that cannot disprove each other, or prove each other, and therefore are for the moment the composition we now call science.  But that information is not empirical.  It's provable, demonstrable, experience-able in various sensory and perceptive ways.  So it can be wrapped around minds. 

    Empirical information is more like the evidence waiting to be understood, yet and perhaps evermore without deeper explanation.  If it never stops changing in unpredictable ways, as empirical information does by its very definition, it is irrational, non-experiential, utterly unstructured. 

    But, I can especially see no reason to believe that such an information object, containing both rational and empirical information, can actually exist, with all the local texture and uniqueness (Pat's grains of sand) that can be iterated by selectors in any way represented there, much less that we will ever (even infinitely) get to use it for any purpose.  It just doesn't feel right.  It isn't math.  It isn't art.  It isn't science.  So what is it?

    But I am trying to keep my mind open.  I still prefer nil as the upper level ontology because absolutely any Thing can be derived from it.  

    Sincerely,

    Rich Cooper,

    Chief Technology Officer,

    MetaSemantics Corporation

    MetaSemantics AT EnglishLogicKernel DOT com

    ( 9 4 9 ) 5 2 5-5 7 1 2

    http://www.EnglishLogicKernel.com

    -----Original Message-----
    From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of John F Sowa
    Sent: Monday, January 11, 2016 7:59 PM
    To: ontolo...@googlegroups.com
    Subject: Re: [ontolog-forum] Wikipedia on upper ontology

    --

    All contributions to this forum by its members are made under an open content license, open publication license, open source or free software license. Unless otherwise specified, all Ontolog Forum content shall be subject to the Creative Commons CC-BY-SA 4.0 License or its successors.

    ---

    You received this message because you are subscribed to the Google Groups "ontolog-forum" group.

    To unsubscribe from this group and stop receiving emails from it, send an email to ontolog-foru...@googlegroups.com.

    To post to this group, send email to ontolo...@googlegroups.com.

    Visit this group at https://groups.google.com/group/ontolog-forum.

    Azamat Abdoullaev

    unread,
    Jan 12, 2016, 8:57:43 AM1/12/16
    to ontolog-forum
    Rich,
    Indeed, the real world is much different from how it's presented by philosophy, science, engineering, the arts and literature. In a sense, we live in the Big Data World, full of nonsense, chances and contingencies, to be transformed into the Big Knowledge World, marked by sense, laws and necessities. Then you are able to value things in the CONTEXT of sociology, politics or economics.
    Re. Axiology, you raised a very significant issue, as far as it involves such life-critical “realms” of value: morality, religion, art, science, economics, politics, law, and custom.
    The key issues here are establishing the ontological status of value, its full classification, and the ontological interconnection of value and fact as the subjective and objective states of the world.
    Ontology is focusing on what things should be valued in the first place, what is really good and what is really bad: knowledge or ignorance, health or disease, safety or risk, peace or war, science or religion, independence or subjection, representative democracy or direct democracy, poverty or wealth, equality or inequality, justice or injustice.
    For example, there are a lot of neoliberal politicians and economists, not to mention business folks, who much value inequality, considering it a contribution to economic growth. As a result of such corrupted values we have the current form of globalization: https://www.linkedin.com/pulse/smart-globalization-vs-neoliberal-azamat-abdoullaev
     Today, most of the young generation identifies values with economic value. Go and ask people "what do they value first?" or "what is fundamentally good?"
    A good introduction is the Britannica article: http://www.britannica.com/topic/axiology.

    Bruce Schuman

    unread,
    Jan 12, 2016, 12:35:48 PM1/12/16
    to ontolo...@googlegroups.com

    Thanks for this continuing and very interesting/fertile discussion.

     

    These last two weeks, I’ve been building a library of Wikipedia articles having anything to do with this subject, ranging from computer architecture and languages to the theory of concepts and measurement to anything to do with the “actual construction” of a comprehensive model of conceptual structure.  I’m just pasting all this material into a single Word .docx with a hierarchical table of contents, plus writing my own interpretive doctrine at the front end.  I’m up to almost 700 pages now, and it’s been a fascinating creative experience.

     

    COMPOSITIONAL SEMANTICS

     

    This morning, I’m thinking it would be interesting to introduce the subject of compositionality – or “compositional semantics” – since I am personally persuaded that any “answers” or “real solutions” to this very broad problem of a universal ontology simply must be addressed through a very robust algebraic theory of compositionality.

     

    Putting it very simply – it seems like the choice is – build a vast bottom-up glossary/vocabulary of concepts and terms that are found in actual usage, and do everything possible to schmooze out agreement on the meaning of these terms – or figure out how meaning is actually created from the ground up, and then build every interpretation in those terms.  Maybe this is what Rich is talking about in his below message.  I’d say it is.

     

    From my point of view – though this great bottom-up approach to ontology certainly has its merits and successes – it looks to me like this method is inherently flawed, and will not and cannot lead to the most desirable kind of solution, that I would argue can only come from a powerful and succinct theory of compositional semantics, grounded in a general algebraic theory of concepts.

     

    I would say that the oft-cited “I don’t believe in word senses” article points in exactly this direction.  We just can’t list all the “word senses” in play in the world today.  The idea is a violation of human creativity and spontaneity – and does not reflect sufficiently deep understanding of how language actually works.  The most powerful and authentic way to understand human meaning is to interpret language in the actual context of usage, as intended by a communicator in the act, and develop inferential methods capable of understanding it.  https://www.kilgarriff.co.uk/Publications/1997-K-CHum-believe.pdf

     

    So yes – until something brilliant and clarifying and succinct and transparently persuasive comes along, we’ll probably be wading through innovative versions of the same arguments.

     

    And yes – it’s true, as both Wikipedia and the Stanford Encyclopedia of Philosophy article on Compositionality state – this is a controversial subject.

     

    http://plato.stanford.edu/entries/compositionality/  (Szabó, 2012)

    https://en.wikipedia.org/wiki/Principle_of_compositionality

     

    The principle of compositionality has been the subject of intense debate. Indeed, there is no general agreement as to how the principle is to be interpreted, although there have been several attempts to provide formal definitions of it. (Szabó, 2012). Scholars are also divided as to whether the principle should be regarded as a factual claim, open to empirical testing; an analytic truth, obvious from the nature of language and meaning; or a methodological principle to guide the development of theories of syntax and semantics. The Principle of Compositionality has been attacked in all three spheres, although so far none of the criticisms brought against it have been generally regarded as compelling.

     

    These articles list some reasons for the controversy – but perhaps the core reason is that nobody has really shown how to do this in a way that actually works, and because the “semantics industry” has a meaningful investment in other approaches.

     

    For me – the very notion of compositionality itself is not entirely well understood or appreciated – at least not in the articles available to my very quick survey.  As far as I can see, these existing approaches don’t drive the analysis to ground.  As I see it, we gotta work with “truly primitive primitives” – and if we try to build on a foundation that is already inherently abstract (and implicitly compositional in ways we don’t understand or even see), we’re building on sand.  We gotta get to bedrock or we are not safe.

     

    From my point of view, the “computer age” basically blows away any approach that does not respect what has been learned about languages in computer system development.  I can highly admire and respect and learn from the great voices of philosophy from the past, and especially the last 100 years, and indeed, they laid the foundation for what is emerging now.  But let’s move on.  We’ve “learned lots of stuff”.  Let’s assimilate it, somehow.  Punch hard.

     

    What I want to see emerge is a “principle of compositionality” that is 100% grounded in very fundamentalist algebra, where every single facet of conception and symbolic representation in a medium is cleanly represented in a general model of conceptual structure, that fully explains the process of abstraction and generalization and symbolic representation, and can absolutely map in an unbroken way the chain of grounding from any abstraction to its basis in empirical measurement.  Nothing else is good enough.  Everything else is fallible.

     

    *******

     

    Rich: Thanks John, That explanation makes things clearer to image.  I have to revise my perception of this godhead lattice to more of an addressability of things, rather than an existence of things, I guess.  I can see an infinitely large repeat (think beeswax honeycomb lattice repeated forever far in n dimensions) with more fundamental ideas in each beeswax honeycomb cell as the conceptual structure of reality, in that sense.  Much like Goedel used iterating the primes in his model. 

     

    Bruce: That sounds right to me.  Kind of a “hierarchy of fractals” approach.   I’m out there looking at the Dedekind cut as the limit for measurement of “absolutely anything”, and just now saw that the “unit interval” is defined as an example of a “completely distributive lattice”:  https://en.wikipedia.org/wiki/Completely_distributive_lattice  I see the unit interval (0 to 1) as a kind of universal archetype – and if it is defined as “the next but unknowable decimal place” in an extended decimal series, that looks to me like as close to “everywhere continuous” as we’re going to get if we want to build “an infinitely fine-grained model of absolutely anything”.
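    The “next decimal place” picture can be made concrete: each additional
    digit replaces the current cell of the unit interval with a ten-times-finer
    nested cell.  A minimal sketch, with the function name invented for
    illustration:

```python
import math

# Nested decimal intervals converging on a point of [0, 1]: after
# `places` digits, x is pinned inside a cell of width 10**-places.

def decimal_cut(x, places):
    """Return the (low, high) decimal cell of width 10**-places containing x."""
    scaled = int(x * 10**places)          # truncate x to `places` digits
    return scaled / 10**places, (scaled + 1) / 10**places

x = math.sqrt(2) - 1                      # 0.41421356..., irrational
lo, hi = decimal_cut(x, 4)
assert lo <= x < hi                       # the cut brackets x, never names it
```

    No finite number of digits ever names the irrational point; the cut
    only brackets it ever more tightly, which is the sense in which the
    next decimal place stays “unknowable”.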

     

    Rich: Infinity is better organized into an iterated structure, as Goedel described his array of objects.  But by definition, nobody will ever know all that empirical information which is, by definition, uniterable.  Or more accurately, only iterable in one dimension. 

     

    Bruce: Well, we chase that one dimension down to the very bottom of its rabbit hole in the Dedekind cut.  How much time do you have?  How strong is your computer….

     

    Rich: That makes it irrational information, like art, or law, or philosophy, or cooking, … .  So you can't perceive, in one human head, all the details that are symbolic of that reality, much less find the object that matches your query x. 

     

    Bruce: Yes, that would be common doctrine on the commonly ungrounded abstractions you mention (art, law, philosophy, cooking).  But I personally don’t see that lack of grounding as inevitable or inherent.  I see it as a fallible property of our weak and short-sighted methods – a problem that all the followers of Husserl’s injunction that “philosophy must become a science” should challenge aggressively.  So “let’s break that code” and get past this confusing mystery of supposedly inherent incommensurateness.  It’s a prevailing and disabling myth, and one that should be de-throned…

     

    Rich:  If it isn't iterable, how do you find your cell with just partial matching information?  For example, to solve an equation like d = r*T, you have to apply the knowledge of how multiplication and assignment work.  Then you have to use division to invert r or T to solve as a single unknown.  That works fine with numbers because they operate on the lattice concept ever since Cartesian graphs were developed and used with N dimensions. 

    But I don't see the same thing with empirical information.  That information contains all those things we predict, but aren't really sure about, whether now, in the past, or yet to be.  It's too complex for visual acuity.  And lots of information has been found before any theoretical explanations began to be considered.  So it may as well be beyond the causal cone.  And if you can't foresee it, you can't prevent it, or even worry effectively about it.  So kiss it off.  Surf.  Watch a movie. 

    Bruce: (laugh)   Well, that makes sense.  But Don Quixote is still out there with his micrometer.  He’s worried about the Syrian refugees and whether they’re getting a fair deal – so he’s taking very fine measurements in a very high-dimensional model…

    Rich: But I can believe in a Goedelian beeswax comb model with prediction of ONLY SIMPLE, REPEATABLE patterns like math or FOL along the axes.  That is a representation of all the lovely theories and combinations of theories that cannot disprove each other, or prove each other, and therefore are for the moment the composition we now call science.  But that information is not empirical.  It's provable, demonstrable, experience-able in various sensory and perceptive ways.  So it can be wrapped around minds. 

    Bruce: My point would be – yes, it might be possible to build up any abstraction whatsoever across levels, 100% grounded in solid empiricism – because the logic chain of abstraction is newly sound in unprecedented ways, no longer limited to empiricism only.

    Rich: Empirical information is more like the evidence waiting to be understood, yet and perhaps evermore without deeper explanation.  If it never stops changing in unpredictable ways, as empirical information does by its very definition, it is irrational, non-experiential, utterly unstructured. 

    Bruce: But if it matters, it can be detected.  If it has an effect somebody cares enough about, it can be influenced.

    Rich: But, I can especially see no reason to believe that such an information object, containing both rational and empirical information, can actually exist, with all the local texture and uniqueness (Pat's grains of sand) that can be iterated by selectors in any way represented there, much less that we will ever (even infinitely) get to use it for any purpose.  It just doesn't feel right.  It isn't math.  It isn't art.  It isn't science.  So what is it?

    Bruce: It isn’t math YET.  But it’s sneaking up on it from a pretty deep level.  Motivation meets measurement at the lowest decimal place we can handle.  Radiate this kind of thing in every potential direction, and we’re going to get a pretty good map of this place (and oh, yeah, right, Google has already taken a picture of (almost) every address on planet earth.  That’s a start.  And it works for people every day.)

    Rich: But I am trying to keep my mind open.  I still prefer nil as the upper level ontology because absolutely any Thing can be derived from it.  

    Bruce: Perfect.  Nil is the open door to the mystery, the cornucopia of exhaustless creativity.  Let’s meet there for some coffee…

    Bruce Schuman, Santa Barbara CA USA

    http://networknation.net/matrix.cfm

     

     

     

    -----Original Message-----
    From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of John F Sowa
    Sent: Monday, January 11, 2016 7:59 PM
    To: ontolo...@googlegroups.com
    Subject: Re: [ontolog-forum] Wikipedia on upper ontology

    Rich Cooper

    unread,
    Jan 12, 2016, 1:43:13 PM1/12/16
    to ontolo...@googlegroups.com

    Dear Bruce,

     

    You wrote:

     

    Putting it very simply – it seems like the choice is – build a vast bottom-up glossary/vocabulary of concepts and terms that are found in actual usage, and do everything possible to schmooze out agreement on the meaning of these terms – or figure out how meaning is actually created from the ground up, and then build every interpretation in those terms.  Maybe this is what Rich is talking about in his below message.  I’d say it is.

     

    Then you would say right.  But I would want to make a few changes in your assumptions.  The yellow annotation above is one I agree with quite well.  But the blue term is based on an assumption that people can agree, to any needed level of precision, on any essential terms at all. 

     

    I think the current process used by search engines could (and, IMHO, will) become more and more linguistic.  I think the cause of that trend is that words and phrases have different meanings at different moments in different events related to different people.

     

    So even the bottom up attempt to structure "universal meaning" is doomed to failure by attempting to derive any universal meaning from real language use. 

     

    The same thing is true of the top down dual for this logical progression.  Starting at the top is like "drawing a box before you type the text inside", as an old friend used to complain about the relationship between requirements and design.

     

    Sincerely,

    Rich Cooper,

     

    Chief Technology Officer,

    MetaSemantics Corporation

    MetaSemantics AT EnglishLogicKernel DOT com

    ( 9 4 9 ) 5 2 5-5 7 1 2

    http://www.EnglishLogicKernel.com

     

    From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of Bruce Schuman
    Sent: Tuesday, January 12, 2016 9:36 AM
    To: ontolo...@googlegroups.com


    Rich Cooper

    unread,
    Jan 12, 2016, 2:29:54 PM1/12/16
    to ontolo...@googlegroups.com

    Bruce, one more item:

     

    You wrote:

    The most powerful and authentic way to understand human meaning is to interpret language in the actual context of usage, as intended by a communicator in the act, and

     

    From that word on, I disagree in stating that I think you left out the hearer, or observer, or other sensory receiver of the actual communications mediae, and the way that receiver interprets the said receiver's projections of the said receiver's interpretation as compared (by said receiver) to said actual communications mediae. 

     

    It's IMHO important to remember to distinguish among different receivers, for whom the same message may have different meanings. 

     

    It's also IMHO important to note that each receiver, confronted with the physical layer of sensory experience, first projects all of said each receiver's theories of the world, whether or not actually present in the sensed experience at the time said experience is received.

     

    Sorry for getting so specific, but it had to be done to convey the meaning which I, at least, intended to get across to you and other reader receivers.  Now watch how different receivers interpret differently in responses. 

     

    But otherwise we are close. 

     

    Sincerely,

    Rich Cooper,

     

    Chief Technology Officer,

    MetaSemantics Corporation

    MetaSemantics AT EnglishLogicKernel DOT com

    ( 9 4 9 ) 5 2 5-5 7 1 2

    http://www.EnglishLogicKernel.com

     

    From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of Bruce Schuman
    Sent: Tuesday, January 12, 2016 9:36 AM
    To: ontolo...@googlegroups.com


    Bruce Schuman

    unread,
    Jan 12, 2016, 3:06:21 PM1/12/16
    to ontolo...@googlegroups.com

    Thanks, Rich.  Your idea about search engines is an approach I had not considered.

     

    I do have this sense that a “dimensional model of semantic space” as it emerges from actual human intention and speech-acts could begin to expand throughout any local or specialized domain – the natural habitat, as I understand it, of a paid professional semantic ontologist.  Maybe the search engine idea – with its global reach and ubiquity – could begin to compile “the dimensionality of context in specific (sub) domains” in such a way as to provide strong inferential insight into “what is probably intended in this context”.  No doubt, a lot like this is already happening.  I was thinking that the right kind of listening engine could figure out “word senses” in a particular corporate environment, or based on “the way the boss sees it”.  “What does Steve Jobs think about it?”  Get that dimensionality generally defined – and maybe that kind of logic could be merged with similar frameworks emerging anywhere.  Maybe that is exactly what IS going on…

     

    But yes, I very much agree that any attempt to “define a universal meaning for an abstract term” is a wrong-headed or at the least a flawed and fallible thing to be doing.  It’s almost “fascist”, if you want to see it in a paranoid way.  Who are THESE guys to tell me what *I* mean???

     

    > words and phrases have different meanings at different moments in different events related to different people.   

     

    Yes, that’s the deal.  Absolute across-the-board context-specific local-moment definition – without exception if we can somehow pull it off.  Correlate the supposed generalization somewhere else.  But I think this is the point of the “I don’t believe in word senses” article.

     

    It might be true that simpler words with hard correspondences to “real objects” can be pretty unambiguous.  Don’t tell me that concrete wall over there isn’t real.  And how many ways do I need to say it?   But the more abstract and generalized a word becomes, the more “implicit (and potentially ambiguous) dimensionality” is nested within it – an implicit dimensionality that only the user of the word can explicitly define in the context of their own intention – at that particular moment, communicating to that particular listener or audience.

     

    > So even the bottom up attempt to structure "universal meaning" is doomed to failure by attempting to derive any universal meaning from real language use. 

     

    > The same thing is true of the top down dual for this logical progression.  Starting at the top is like "drawing a box before you type the text inside", as an old friend used to complain about the relationship between requirements and design.

     

    I’d say what can be generalized is the process or method of developing an algebraic model of abstraction, grounded in measurement, and defined across an ascending cascade of “levels of abstraction” where categories or terms are defined by boundary values – such that “an object is IN the category if its dimensional specification is within the n-dimensional envelope of that category”.  That can be done in a hard-core way, I think.  And this IS essentially a bottom-up process of defining a hard abstract object, then dimensioning its attributes in an ascending cascade where the logic remains hard and well-grounded to empirical measurement.
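    The “n-dimensional envelope” test is easy to state operationally.  A minimal sketch, where the category bounds and attribute names are hypothetical:

```python
# A category as an n-dimensional envelope: per-attribute (low, high)
# boundary values. An object is IN the category iff every bounded
# attribute of its measured specification falls inside the envelope.

def in_category(obj, envelope):
    """obj: attribute -> measured value; envelope: attribute -> (low, high)."""
    return all(lo <= obj.get(attr, float("nan")) <= hi
               for attr, (lo, hi) in envelope.items())

# Hypothetical coarse category with two measured dimensions.
pump_category = {"flow_gpm": (10.0, 500.0), "head_ft": (5.0, 300.0)}

assert in_category({"flow_gpm": 120.0, "head_ft": 40.0}, pump_category)
assert not in_category({"flow_gpm": 900.0, "head_ft": 40.0}, pump_category)
```

    A missing attribute compares as NaN, which fails every bound, so an underspecified object falls outside the category rather than sneaking in.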

     

    Bruce Schuman, Santa Barbara CA USA

    http://networknation.net/matrix.cfm

     

    From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of Rich Cooper
    Sent: Tuesday, January 12, 2016 10:43 AM
    To: ontolo...@googlegroups.com
    Subject: RE: [ontolog-forum] Wikipedia on upper ontology

     

    Dear Bruce,

     

    You wrote:

     

    Putting it very simply – it seems like the choice is – build a vast bottom-up glossary/vocabulary of concepts and terms that are found in actual usage, and do everything possible to schmooze out agreement on the meaning of these terms – or figure out how meaning is actually created from the ground up, and then build every interpretation in those terms.  Maybe this is what Rich is talking about in his below message.  I’d say it is.

     

    Then you would say right.  But I would want to make a few changes in your assumptions.  The yellow annotation above is one I agree with quite well.  But the blue term is based on an assumption that people can agree, to any needed level of precision, on any essential terms at all. 

     

    I think the current process used by search engines could (and, IMHO, will) become more and more linguistic.  I think the cause of that trend is that words and phrases have different meanings at different moments in different events related to different people.

     

    So even the bottom up attempt to structure "universal meaning" is doomed to failure by attempting to derive any universal meaning from real language use. 

     

    The same thing is true of the top down dual for this logical progression.  Starting at the top is like "drawing a box before you type the text inside", as an old friend used to complain about the relationship between requirements and design.

     

    Sincerely,

    Rich Cooper,

     

    Chief Technology Officer,

    MetaSemantics Corporation

    MetaSemantics AT EnglishLogicKernel DOT com

    ( 9 4 9 ) 5 2 5-5 7 1 2

    http://www.EnglishLogicKernel.com



    --

    All contributions to this forum by its members are made under an open content license, open publication license, open source or free software license. Unless otherwise specified, all Ontolog Forum content shall be subject to the Creative Commons CC-BY-SA 4.0 License or its successors.
    ---
    You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
    To unsubscribe from this group and stop receiving emails from it, send an email to ontolog-foru...@googlegroups.com.
    To post to this group, send email to ontolo...@googlegroups.com.
    Visit this group at https://groups.google.com/group/ontolog-forum.


    For more options, visit https://groups.google.com/d/optout.


    Bruce Schuman

    unread,
    Jan 12, 2016, 3:38:17 PM1/12/16
    to ontolo...@googlegroups.com

    Yes, and sorry I didn’t see this comment before posting my last reply – but I have recently noticed that there does seem to be some precedent in semantic literature for understanding communication just as you are describing it – as a relationship between someone we might describe as “speaker” (or “communicator” or “sender”) and someone we might describe as “listener” (or “receiver”).  Those roles could bounce back and forth quickly in conversation as people take turns.

     

    The way I would put this is – it is the responsibility of the speaker to assess the probable interpretation of a communication by the receiver – based on every cue and clue the speaker might be aware of.  Who is this recipient, what are their assumptions, what influence emerges from this immediate context, what is the prevailing understanding of this concept in society today, etc.  So, if I want to successfully communicate with you – I have to have some model of how I expect you to hear what I say.  I’m a fool if I don’t.

     

    There’s a book I have here by political pollster and focus group leader Frank Luntz – entitled “Words That Work: It’s Not What You Say, It’s What People Hear”.  He’s right.

     

    And I agree with everything you said below, and agree that it is necessary to explicitly include this point.

     

    Bruce Schuman, Santa Barbara CA USA

    http://networknation.net/matrix.cfm

     

    From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of Rich Cooper


    Sent: Tuesday, January 12, 2016 11:30 AM
    To: ontolo...@googlegroups.com

    Subject: RE: [ontolog-forum] Wikipedia on upper ontology

     

    Bruce, one more item:

     

    You wrote:

    The most powerful and authentic way to understand human meaning is to interpret language in the actual context of usage, as intended by a communicator in the act, and

     

    From that word on, I disagree in stating that I think you left out the hearer, or observer, or other sensory receiver of the actual communications mediae, and the way that receiver interprets the said receiver's projections of the said receiver's interpretation as compared (by said receiver) to said actual communications mediae. 

     

    It's IMHO important to remember to distinguish among different receivers, who may each attach a different meaning to the same message. 

     

    It's also IMHO important to note that each receiver, confronted with the physical layer of sensory experience, first projects all of said each receiver's theories of the world, whether or not actually present in the sensed experience at the time said experience is received.

     

    Sorry for getting so specific, but it had to be done to convey the meaning which I, at least, intended to get across to you and other reader receivers.  Now watch how different receivers interpret differently in responses. 

     

    But otherwise we are close. 

     

    Sincerely,

    Rich Cooper,

     

    Chief Technology Officer,

    MetaSemantics Corporation

    MetaSemantics AT EnglishLogicKernel DOT com

    ( 9 4 9 ) 5 2 5-5 7 1 2

    http://www.EnglishLogicKernel.com



    Rich Cooper

    unread,
    Jan 12, 2016, 4:37:17 PM1/12/16
    to ontolo...@googlegroups.com

    Bruce,                 

     

    So we agree to a fairly good level then on the necessary subjectivity of sender and receiver.  This process of our finding a compromise position, such as we just did, requires one or likely both of each pair to adjust his belief system(s).   T0 and T1 belief bases must have intercourse, and exchange Q&A message DNAs, which helps each one model the other, at least as a goal for Interoperability.   

     

    So let's you and I communicate in a Gedankenexperiment and act out a procedural updating through that intercourse - words, phrases, sentences, questions, answers, yes.  But the energy of pursuing any line of discourse depends also on our individual beliefs, which differ at various stages through this conversation. 

     

    But being subjective, projective entities, they each also have a life of the self, the knowledge base (RNA) of the individual's Self[I] being their belief system at any instant of evaluation.  Using the Solomonoff model as suggested by Whosis (sorry, can't remember his real name offhand) . 

     

    Good old Whosis described a linear model using Solomonoff's ideas about coding and compression in an n dimensional space.  Whosis used that idea in a feedback loop to construct a math model of two agents, back to back, outputs of each to the inputs of the other, etc. 

     

    The old game theory ideas apply to that structure, and Whosis was able to derive a general optimal control solution for a learning system with that architecture.  That is, the system that can learn fastest from the given inputs, outputs, controls, mechanisms  and state, step by step through a lifetime of actions, new inputs outputs and state.      NOTE: apologies for inserting my IDEF0 view of the world - yours may differ

                                                                                                                                                             

    One belief system, perhaps plural T[I], may ordinarily have to change some existing {T[I].beliefs} to preserve logical consistency in each T[I], which is what a belief system is developed to do, and each T[I].belief in {T[I].beliefs} may have to revise some less valid or less predicted to be useful hypotheses to any new beliefs about the said entities and relationships which may be required.  Putting a max size on each belief based on a system architecture would balance demand. 

     

    Over enough plural intercourses, all T[I] have to make progress in that each must understand more about other T[J] and about its {T[J].Self}.  They all have opportunities to learn different things about their differing inputs, control, outputs and mechanisms (add your own objects, s.v.p.). 

     

    That act of reaching a tolerant level of agreement, even if it requires new postulates in plural T[I], means that newly agreed upon ideas become available for communication, but only for communication among the T[I] participants. 

     

    You may know how to interpret X and I also know how, but our knows are different ones, you and I, even given the same text message.  Dually, each T[I] stores its {Self.know's current Self.value} in different knowledge bases, perhaps using different methods for query and retrieval of different cognitive variant knows. 

     

    NOTE: Apologies for oversimplifying it dramatically, but it helps convey the content, so please be patient. 

     

    So I conclude there is a missing ingredient.  That would be precise knowledge of each individual agent's every state variable vector x at all times.  That's an infinity and a half of storage, at least, so I suggest it is impractical, and its value is accordingly low, IMHO.  It's infeasible in current situations. 

     

    Then you wrote:

    RC:> words and phrases have different meanings at different moments in different events related to different people.  

     

    BS:>Yes, that’s the deal.  Absolute across-the-board context-specific local-moment definition – without exception if we can somehow pull it off.  Correlate the supposed generalization somewhere else.  But I think this is the point of the “I don’t believe in word senses” article.

     

    Yes, I agree it's the point of Adam Kilgarriff's article.  He had vast experience with those kinds of issues and learned over long decades ways to resolve them. 

    Hans Teijgeler

    unread,
    Jan 12, 2016, 6:06:34 PM1/12/16
    to ontolo...@googlegroups.com
    Rich,
     
    [RC] So we agree to a fairly good level then on the necessary subjectivity of sender and receiver.  This process of our finding a compromise position, such as we just did, requires one or likely both of each pair to adjust his belief system(s).   T0 and T1 belief bases must have intercourse, and exchange Q&A message DNAs, which helps each one model the other, at least as a goal for Interoperability.   
    [HT] To remain practical: any EPC contractor of some size uses some 600 applications, and different EPC contractors may have some in common, although differently configured. That means a hell of a lot of such "intercourses", maintaining which is very costly and hard to manage. What you preach, if I understand you well, is a bottom-up approach to reach the Holy Grail of the Universal Upper Ontology. The problem is aggravated by the fact that we have a few thousand languages in the world. And a few billion "belief systems" (before secularization hit the Netherlands we used to say jokingly that we have as many churches as there are Dutchmen).
     
    I read a nice article about Yupik, a language spoken by the Inuit. They combine the who, what, where, when and how with the verb. For example the sentence: "Actually I have no desire to go" translates to "Ayagyuurmiitqapiartua". No wonder that Google Translate still produces substandard results, even between German and English. And they have quite a few bucks to burn...
     
    Regards,
    Hans
     
    Hans Teijgeler,
    Laanweg 28,
    1871 BJ Schoorl,
    Netherlands


    From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of Rich Cooper
    Sent: dinsdag 12 januari 2016 22:37

    Pat Hayes

    unread,
    Jan 12, 2016, 11:33:14 PM1/12/16
    to ontolog-forum, Azamat Abdoullaev

    On Jan 12, 2016, at 5:57 AM, Azamat Abdoullaev <ontop...@gmail.com> wrote:

    > Rich,
    > Indeed, the real world is much different than it's presented by philosophy, science, engineering, the arts and literature.

    That is a pretty comprehensive rejection of a large part of human thought. Could you give us an idea of which discipline, if any, might present us with a more accurate account (picture? description?) of the real world, in your opinion? How are we to approach this real world if philosophy, science, engineering and literature are prohibited to us?

    Pat Hayes
    ------------------------------------------------------------
    IHMC (850)434 8903 home
    40 South Alcaniz St. (850)202 4416 office
    Pensacola (850)202 4440 fax
    FL 32502 (850)291 0667 mobile (preferred)
    pha...@ihmc.us http://www.ihmc.us/users/phayes






    Pat Hayes

    unread,
    Jan 12, 2016, 11:43:36 PM1/12/16
    to ontolog-forum, Rich Cooper

    On Jan 11, 2016, at 5:11 PM, Rich Cooper <metase...@englishlogickernel.com> wrote:

    > ...We all know how to represent infinity and reason about infinities, by saying "everything exists that exists, that has ever existed, or that ever will exist, … " ........ I, personally, have never met an infinity,

    There are much more straightforward ways to meet infinity. For example, try this:

    1. For every number N, there is a number M larger than N.
    2. 0 is a number and every other number is larger than it.
    3. If N is larger than M, then M is not larger than N.
    4. If N is larger than M and M is larger than P, then N is larger than P.

    It follows from these axioms - which are pretty easy to understand, and I would claim intuitively obviously true, given the usual meanings of words like 'number' and 'larger' - that there are infinitely many numbers. So if you understand these axioms, and if you can intuit them all being true at once, then you have met an infinity.
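Pat's four axioms can even be checked mechanically. The sketch below (my illustration, not from the thread) tests them over a finite candidate domain {0, ..., k} with the usual "larger than": axioms 2 through 4 hold, but axiom 1 necessarily fails at the maximum element, which is exactly why any structure satisfying all four must be infinite.

```python
# Model-check Pat's four axioms over a FINITE domain {0, ..., k} with the
# usual "larger than". Axiom 1 (every number has a larger one) must fail at
# the maximum element of any finite domain, so no finite model satisfies
# all four axioms at once -- every model is infinite.

def check_axioms(domain):
    domain = list(domain)
    larger = lambda n, m: n > m
    ax1 = all(any(larger(m, n) for m in domain) for n in domain)
    ax2 = 0 in domain and all(larger(n, 0) for n in domain if n != 0)
    ax3 = all(not larger(m, n)
              for n in domain for m in domain if larger(n, m))
    ax4 = all(larger(n, p)
              for n in domain for m in domain for p in domain
              if larger(n, m) and larger(m, p))
    return ax1, ax2, ax3, ax4

print(check_axioms(range(10)))  # (False, True, True, True): axiom 1 fails
```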

    John F Sowa

    unread,
    Jan 12, 2016, 11:55:42 PM1/12/16
    to ontolo...@googlegroups.com
    I agree with the comments by Pat Hayes in his two notes above.

    John

    John F Sowa

    unread,
    Jan 13, 2016, 10:35:00 AM1/13/16
    to ontolo...@googlegroups.com
    On 1/12/2016 12:03 AM, Rich Cooper wrote:
    > I have to revise my perception of this godhead lattice to more of
    > an addressability of things, rather than an existence of things,
    > I guess. I can see an infinitely large repeat (think beeswax
    > honeycomb repeated forever far in n dimensions) with more fundamental
    > ideas in each beeswax honeycomb cell as the conceptual structure
    > of reality, in that sense.

    Jorge Luis Borges wrote a story about a huge library with books of
    random sequences of symbols, in which readers would wander forever.
    See https://en.wikipedia.org/wiki/The_Library_of_Babel

    But unlike that Babel, the lattice of all possible theories
    is beautifully organized, and it's easy to navigate.

    For any logic L, there exists such a lattice. In fact, it's called
    the Lindenbaum lattice for L -- named after the logician Adolf
    Lindenbaum, who was a student of Tarski's:

    1. At the top is the empty theory, which is also called the
    universal theory because it's true of everything. It has
    no axioms, and it contains every sentence in L that can be
    proved from an empty set of axioms.

    2. At the bottom is the contradictory theory -- also called
    the absurd theory because it's true of nothing. For every
    sentence s in L, it contains both s and its negation ~s.
    As an axiom for the absurd theory, you can use (p & ~p).

    3. Between the top and bottom are all the consistent theories
    that can be derived from one or more consistent axioms stated
    in the language L.

    4. For any theory T, if you delete an axiom, you move up the
    lattice to a more general theory. If you add an axiom,
    you move down the lattice to a more specialized theory.

    > I can especially see no reason to believe that such an information
    > object, containing both rational and empirical information, can
    > actually exist

    Since it's infinite, it cannot be written down in its entirety.
    But that's also true of the integers. You just compute as many
    as you need for whatever problem you're trying to solve.

    > It isn't math. It isn't art. It isn't science. So what is it?

    It's all of the above. It most definitely is mathematical.
    There's an art to using it well. And it contains every possible
    scientific theory or hypothesis -- both true and false.

    > I still prefer nil as the upper level ontology because absolutely
    > any Thing can be derived from it.

    You got it! That's the top theory. To derive any other theory
    you can start with the empty set of axioms, and add one axiom
    at a time. The complete theory for any set of axioms consists
    of every sentence provable from those axioms.

    Warning: But if you happen to add an axiom that is inconsistent
    with any other(s) in the current theory, the theory degenerates
    to the absurd theory at the bottom.

    John

    Burkett, William [USA]

    unread,
    Jan 13, 2016, 11:14:17 AM1/13/16
    to ontolo...@googlegroups.com

    One impression I get from ontolog discussions is that they seem to be about static things, e.g., “an ontology”, “a theory”, etc., and very little discussion about the role processes play in all of this.  For example, consider the following exchange between Bruce and Rich:

     

    ==============

    Dear Bruce,

    You wrote:

    Putting it very simply – it seems like the choice is – build a vast bottom-up glossary/vocabulary of concepts and terms that are found in actual usage, and do everything possible to schmooze out agreement on the meaning of these terms – or figure out how meaning is actually created from the ground up, and then build every interpretation in those terms.  Maybe this is what Rich is talking about in his below message.  I’d say it is.

    Then you would say right.  But I would want to make a few changes in your assumptions.  The yellow annotation above is one I agree with quite well.  But the blue term is based on an assumption that people can agree to any needed level of precision to any essential terms at all. 

    ===============

     

    I think the “bottom up” observation is the right starting point, because it echoes the Wittgenstein quote that Rich shared at the end of December

     

    “For a large class of cases of the employment of the word ‘meaning’—though not for all—this way can be explained in this way: the meaning of a word is its use in the language” (PI 43).

     

    The key is that real meaning is in the “actual usage” bit – that’s what we’re most interested in and this bit often seems left out of the “ontology picture.”

     

    If you couple John’s lattice framework idea to this – building interrelationships among usages - you get another structural piece of the puzzle.

     

    To address Rich’s concern about “schmoozing out agreement”, suppose you let actual use – i.e., actual communication events - drive out the commonality and “evolve” the semantic interoperability of the communicating parties.    We’ve all seen the phenomenon of meeting a bunch of new people in a conference room and spending the first half of the meeting learning how others in the room use (for example) English – and then once the level-setting is done, more effective communication (i.e., “semantic interoperability”) can take place.  I contend the same thing happens with interoperating software systems and that the adaptability to do this is not usually a software design requirement.

     

    So instead of trying to find a single semantic model (or set of models) that encompasses everything, how about a set of localized semantic models that are coupled with mappings (to form a lattice) and evolutionary protocols to self-correct (the local models and the mappings) through actual use.   I think that is how “semantic interoperability” works – I think it is a very organic, adaptive, and evolutionary process.

     

    Bill

    John Bottoms

    unread,
    Jan 13, 2016, 11:27:20 AM1/13/16
    to ontolo...@googlegroups.com
    Bill,

    You beat me to it by a few minutes. I was just about to post that.

    Let me add that in addition to having access to the processes there must be a set of guideline (meta) processes that explain when, how, on-what a particular process is used. All of these exist if the lattice is complete for an individual. If the lattice is a collection of uncodified knowledge of many individuals or instances, then we are back to the library search/detective investigation scenario.

    -John Bottoms

    John F Sowa

    unread,
    Jan 13, 2016, 1:33:09 PM1/13/16
    to ontolo...@googlegroups.com
    Bill and John B,

    WB
    > One impression I get from ontolog discussions is that they seem to be
    > about static things, e.g., “an ontology”, “a theory”, etc., and very
    > little discussion about the role processes play in all of this.

    JB
    > Let me add that in addition to having access to the processes
    > there must be a set of guideline (meta) processes that explain
    > when, how, on-what a particular process is used.

    I completely agree. That is the reason why I have been emphasizing
    legacy systems, shallow vs deep ontologies, and the lattice of all
    *possible* theories -- not just the published standards.

    Before we can make progress, we have to get rid of the notion that
    some ideal ontology, by itself, can ever be the final solution.

    JB
    > All of these exist if the lattice is complete for an individual.
    > If the lattice is a collection of uncodified knowledge of many
    > individuals or instances, then we are back to the library
    > search/detective investigation scenario.

    Since the lattice is infinite, it can never be complete. The amount
    that has been completed (any specific ontology, no matter how large)
    is always infinitesimally *tiny* compared with the possibilities.

    That puts the emphasis, as Bill said, on process, guidelines,
    discovery, and metalevel discovery:

    WB
    > it is a very organic, adaptive, and evolutionary process.

    John

    Christopher Menzel

    unread,
    Jan 13, 2016, 2:11:01 PM1/13/16
    to ontolo...@googlegroups.com
    On 11 Jan 2016, at 11:03 PM, Rich Cooper <metase...@englishlogickernel.com> wrote:
    That explanation makes things clearer to image.  I have to revise my perception of this godhead lattice to more of an addressability of things, rather than an existence of things, I guess.  I can see an infinitely large repeat (think beeswax honeycomb lattice repeated forever far in n dimensions) with more fundamental ideas in each beeswax honeycomb cell as the conceptual structure of reality, in that sense.

    I can't make the least sense of this.

    Much like Goedel used iterating the primes in his model.
    Infinity is better organized into an iterated structure, as Goedel described his array of objects.
    ... But I can believe in a Goedelian beeswax comb model with prediction of ONLY SIMPLE, REPEATABLE patterns like math or FOL along the axes.

    I'd thought you'd responded positively to earlier suggestions that you stay away from talking about Gödel. This talk of Gödel "iterating the primes in his model" is simply nonsense, as is the idea that he "described an array of objects" that "organized infinity into an iterated structure." Nothing in Gödel's work bears the remotest resemblance to these descriptions. And what does it mean to say that first-order logic is a "repeatable pattern"? What things form the pattern? And what is it that is repeated? Theorems? Proofs? Structures? And what pattern do you have in mind? Nothing I know of in first-order logic answers to this idea.

    I think it is well and good for this list to be used for high-level speculation and conjecture but it does nothing to enhance its credibility (let alone its readability) to appropriate well-understood concepts and well-known technical results and use them in ways that have nothing to do with their actual content.

    -chris

    Rich Cooper

    unread,
    Jan 13, 2016, 2:11:44 PM1/13/16
    to ontolo...@googlegroups.com

    Hans,

     

    Yes, and the Lakoff book "Women, Fire and Dangerous Things" identifies that title phrase as the way the Dyirbal language of Australia groups those three items into one noun class.  So yes, there is no way that there can be a universal meaning lattice. 

     

    It could work for a very limited set of math and physics, such as Newtonian and perhaps relativistic, but that's about as far as that goal goes, IMHO. 

     

    Sincerely,

    Rich Cooper,

     

    Chief Technology Officer,

    MetaSemantics Corporation

    MetaSemantics AT EnglishLogicKernel DOT com

    ( 9 4 9 ) 5 2 5-5 7 1 2

    http://www.EnglishLogicKernel.com

     

    Rich Cooper

    unread,
    Jan 13, 2016, 2:22:43 PM1/13/16
    to ontolo...@googlegroups.com, Azamat Abdoullaev

    Dear Pat,

    You wrote:

        > Rich,

        > Indeed, the real world is much different than it's presented by philosophy, science, engineering, the arts and literature.

        That is a pretty comprehensive rejection of a large part of human thought. Could you give us an idea of which discipline, if any, might present us with a more accurate account (picture? description?) of the real world, in your opinion? How are we to approach this real world if philosophy, science, engineering and literature are prohibited to us?

        Pat Hayes

        ------------------------------------------------------------

    I am not rejecting science, math, engineering or other hard sciences that have a long history of discourse among many players working on the same problem and having a reality that is testable to prove or disprove their theories. 

    But I do reject the notion of a single upper ontology that can represent in its derivations any possible soft science, empirical science, or early science such as LENR (low-energy nuclear reactions), which is not yet understood, only observed in several labs around the world as of now.  But we have no clue about how LENR will develop, or why it produces measurable results in such odd processes. 

    Who was it who said "reality is more complex than you can even imagine, Freddie" or some such statement indicating that the universe is enormously more complicated than we understand, is even more complicated than we CAN understand.

    So to suppose that knowledge will somehow converge into a single upper ontology as time approaches infinity is unrealistic at best.  That is why I like nil as the upper ontology.  Anything expressible in logic can be derived from it.  But nothing that is not expressible in logic can be so derived. 

    -Rich

    Sincerely,

    Rich Cooper,

    Chief Technology Officer,

    MetaSemantics Corporation

    MetaSemantics AT EnglishLogicKernel DOT com

    ( 9 4 9 ) 5 2 5-5 7 1 2

    http://www.EnglishLogicKernel.com

    -----Original Message-----
    From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of Pat Hayes
    Sent: Tuesday, January 12, 2016 8:33 PM
    To: ontolog-forum; Azamat Abdoullaev
    Subject: Re: [ontolog-forum] Wikipedia on upper ontology

    On Jan 12, 2016, at 5:57 AM, Azamat Abdoullaev <ontop...@gmail.com> wrote:

    IHMC                                     (850)434 8903 home

    40 South Alcaniz St.            (850)202 4416   office

    Pensacola                            (850)202 4440   fax

    FL 32502                              (850)291 0667   mobile (preferred)

    pha...@ihmc.us       http://www.ihmc.us/users/phayes







    Rich Cooper

    unread,
    Jan 13, 2016, 2:32:39 PM1/13/16
    to Pat Hayes, ontolog-forum

    Pat Hayes wrote:

          > ...We all know how to represent infinity and reason about infinities, by saying "everything exists that exists, that has ever existed, or that ever will exist, … "  ........   I, personally, have never met an infinity,

          There are much more straightforward ways to meet infinity. For example, try this:

          1. For every number N, there is a number M larger than N.

          2. 0 is a number and every other number is larger than it.

          3. If N is larger than M, then M is not larger than N.

          4. If N is larger than M and M is larger than P, then N is larger than P.

          It follows from these axioms - which are pretty easy to understand, and I would claim intuitively obviously true, given the usual meanings of words like 'number' and 'larger' -  that there are infinitely many numbers. So if you understand these axioms, and if you can intuit them all being true at once, then you have met an infinity.

          Pat Hayes

        Yes, those rules HYPOTHESIZE an infinity of numbers.  There are other rules, such as taking the limit of 1/N as N approaches infinity, which HYPOTHESIZES that N can actually REACH infinity, or close enough for horseshoes and hand grenades purposes.  But no mathematician has ever seen N reach infinity, or even close to it.  That is what I mean by "I have never met an infinity."  It's not that we can do math with it, it's that we can't EXPERIENCE it.  We can only conjecture.

        We have demonstrated evidence to support relativity and quantum physics to some degree, even gravity waves, all dependent on infinities.  But we have not experienced infinity, and by our definition of infinity, nobody ever will. 

        So there are plenty of us technologists who believe in infinity as a symbol, and base our conclusions on the existence of infinity, but we can never validate it, nor verify it, with the perfection that math imposes on its projections to reality. 

        We just don't know enough. 

        Sincerely,

        Rich Cooper,

        Chief Technology Officer,

        MetaSemantics Corporation

        MetaSemantics AT EnglishLogicKernel DOT com

        ( 9 4 9 ) 5 2 5-5 7 1 2

        http://www.EnglishLogicKernel.com

        -----Original Message-----
        From: Pat Hayes [mailto:pha...@ihmc.us]
        Sent: Tuesday, January 12, 2016 8:43 PM
        To: ontolog-forum; Rich Cooper
        Subject: Re: [ontolog-forum] Wikipedia on upper ontology

        Rich Cooper

        Jan 13, 2016, 2:52:30 PM
        to ontolo...@googlegroups.com

        Dear John,

        Thanks for taking the time to write:

            JS:> But unlike that Babel, the lattice of all possible theories is beautifully organized, and it's easy to navigate.

            For any logic L, there exists such a lattice.  In fact, it's called the Lindenbaum lattice for L -- named after the logician Adolf Lindenbaum, who was a student of Tarski's:

              1. At the top is the empty theory, which is also called the

                 universal theory because it's true of everything.  It has

                 no axioms, and it contains every sentence in L that can be

                 proved from an empty set of axioms.

              2. At the bottom is the contradictory theory -- also called

                 the absurd theory because it's true of nothing.  For every

                 sentence s in L, it contains both s and its negation ~s.

                 As an axiom for the absurd theory, you can use (p & ~p).

              3. Between the top and bottom are all the consistent theories

                 that can be derived from one or more consistent axioms stated

                 in the language L.

              4. For any theory T, if you delete an axiom, you move up the

                 lattice to a more general theory.  If you delete an axiom

                 you move down the lattice to a more specialized theory.

          RC:> Although you wrote point 4 that way, I think you mean that the second delete should have been an insert.
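          John's four points can be illustrated with a toy, purely set-theoretic sketch (mine, not John's): treat a "theory" as just its axiom set, ignore deductive closure, and let the subset order stand in for generality. The function names are illustrative assumptions:

```python
from itertools import chain, combinations

def powerset(axioms):
    """All subsets of a finite axiom set: every node of a toy lattice."""
    s = list(axioms)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def specialize(theory, axiom):
    """Adding an axiom moves DOWN the lattice, to a more specialized theory."""
    return theory | {axiom}

def generalize(theory, axiom):
    """Deleting an axiom moves UP the lattice, to a more general theory."""
    return theory - {axiom}

axioms = {"p", "q", "r"}
lattice = powerset(axioms)
top = frozenset()              # the empty (universal) theory at the top
t1 = specialize(top, "p")
t2 = specialize(t1, "q")
assert generalize(t2, "q") == t1   # delete/insert are inverse moves
assert top <= t1 <= t2             # subset order mirrors generality
print(len(lattice))                # 2**3 = 8 nodes
```

          The real Lindenbaum lattice closes each node under logical consequence and is infinite for any interesting language; this finite subset lattice only mimics the up/down moves of point 4.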

              > I can especially see no reason to believe that such an information

              > object, containing both rational and empirical information, can

              > actually exist

              Since it's infinite, it cannot be written down in its entirety.

              But that's also true of the integers.  You just compute as many as you need for whatever problem you're trying to solve.

          Yes, so I can mathematize all I want, forever and ever.  That is exactly my point; mathematization only implies adding more axioms as the system gets larger, but so what?  It's still just math, still just a theory, and not possible to validate.  You can believe it if you want, but there is a lot more out there you have no experience with, and you can only mathematize what you DO KNOW from evidence.  The stuff you DON'T KNOW can't all be mathematized. 

              > It isn't math. It isn't art. It isn't science. So what is it?

              It's all of the above.  It most definitely is mathematical.

              There's an art to using it well.  And it contains every possible scientific theory or hypothesis -- both true and false.

          I don't agree.  I can certainly believe that the rules of math can lead to all kinds of insights, but it will produce no insights that can be validated about infinities.  Only mathematical deductive sequences, like the four rules above. 

          I accept all that math can do for us.  You need to accept that there are some things for which math is not an answer adequate for our experiential nature. 

              > I still prefer nil as the upper level ontology because absolutely any

              > Thing can be derived from it.

              You got it!  That's the top theory.  To derive any other theory you can start with the empty set of axioms, and add one axiom at a time.  The complete theory for any set of axioms consists of every sentence provable from those axioms.

          Yes, nil has always been close to my heart.  I can experience nothing by meditating, but I can't experience Everything by meditating. 

              Warning:  But if you happen to add an axiom that is inconsistent with any other(s) in the current theory, the theory degenerates to the absurd theory at the bottom.

              John

          Yes, for example, theories about which people disagree.  There is no way to validate infinitely large deductions other than by thought experiments, such as terminating recursions.  And thought is not mathematics.

          So you remain convinced that EVERYTHING is math, and I remain convinced that math is a significant part of EVERYTHING, but not enough for ALL THINGS. 

          Sincerely,

          Rich Cooper,

          Chief Technology Officer,

          MetaSemantics Corporation

          MetaSemantics AT EnglishLogicKernel DOT com

          ( 9 4 9 ) 5 2 5-5 7 1 2

          http://www.EnglishLogicKernel.com

          -----Original Message-----
          From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of John F Sowa


          Christopher Menzel

          Jan 13, 2016, 3:51:54 PM
          to ontolo...@googlegroups.com, Pat Hayes
          On 13 Jan 2016, at 1:32 PM, Rich Cooper <metase...@englishlogickernel.com> wrote:
          > Pat Hayes wrote:
          > > ...We all know how to represent infinity and reason about infinities, by saying "everything exists that exists, that has ever existed, or that ever will exist, … " ........ I, personally, have never met an infinity,
          > There are much more straightforward ways to meet infinity. For example, try this:
          > 1. For every number N, there is a number M larger than N.
          > 2. 0 is a number and every other number is larger than it.
          > 3. If N is larger than M, then M is not larger than N.
          > 4. If N is larger than M and M is larger than P, then N is larger than P.
          > It follows from these axioms - which are pretty easy to understand, and I would claim intuitively obviously true, given the usual meanings of words like 'number' and 'larger' - that there are infinitely many numbers. So if you understand these axioms, and if you can intuit them all being true at once, then you have met an infinity.
          > Pat Hayes
          >
          > Yes, those rules HYPOTHESIZE an infinity of numbers.

          They don't "hypothesize" anything. They are just propositions, expressible in a formal language. They are true if, and only if, there are infinitely many numbers (whatever you take numbers to be).

          > There are other rules, such as taking the limit of 1/N as N approaches infinity which HYPOTHESIZES that N can actually REACH infinity,

          Aargh, false, wrong, deeply and profoundly wrong; you do not understand the concept of a limit. Please, just stop talking about mathematics.
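          For reference, the standard epsilon-N definition of this limit quantifies over finite N only and involves no "reaching" of infinity:

```latex
\lim_{N\to\infty} \frac{1}{N} = 0
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0\;\; \exists N_0 \in \mathbb{N}\;\; \forall N > N_0:\ \left|\frac{1}{N} - 0\right| < \varepsilon
```

          The symbol for infinity in the limit notation is shorthand for this quantifier pattern; no variable ever takes an infinite value.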

          > or close enough for horseshoes and hand grenades purposes. But no mathematician has ever seen N reach infinity, or even close to it. That is what I mean by "I have never met an infinity." It's not that we can do math with it, it's that we can't EXPERIENCE it. We can only conjecture.
          > We have demonstrated evidence to support relativity and quantum physics to some degree, even gravity waves, all dependent on infinities. But we have not experienced infinity, and by our definition of infinity, nobody ever will.

          We haven't experienced the objects posited by quantum physics any more than we've experienced the real numbers. But the reals are as essential to physics as subatomic particles are. In both cases we posit their existence because they are essential to our best theories. As far as your experience goes, you've got as much reason to believe in the existence of the reals as you do the existence of subatomic particles.

          -chris

          Rich Cooper

          Jan 13, 2016, 4:03:35 PM
          to ontolo...@googlegroups.com

          Dear Bill

           

          You wrote:

          So instead of trying to find a single semantic model (or set of models) that encompasses everything, how about a set of localized semantic models that are coupled with mappings (to form a lattice) and evolutionary protocols to self-correct (the local models and the mappings) through actual use.   I think that is how “semantic interoperability” works – I think it is a very organic, adaptive, and evolutionary process.

           

          Bill

           

          I agree.  Math is good.  But there are other goods which cannot be mathematized.  Ontologies, as conceived to be ONLY made of FOL structures, cannot be everything, and cannot even represent everything.  Your opinion may differ, but Bill and I agree on that issue. 

           

          Sincerely,

          Rich Cooper,

           

          Chief Technology Officer,

          MetaSemantics Corporation

          MetaSemantics AT EnglishLogicKernel DOT com

          ( 9 4 9 ) 5 2 5-5 7 1 2

          http://www.EnglishLogicKernel.com

           

          From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of Burkett, William [USA]


          Sent: Wednesday, January 13, 2016 8:14 AM
          To: ontolo...@googlegroups.com


          Rich Cooper

          Jan 13, 2016, 4:52:24 PM
          to ontolo...@googlegroups.com

          Chris,

           

          If you would be so kind as to provide an explanation of Goedel's work so that even us unwashed can read it, that would be a valuable contribution to the list. 

           

          Sincerely,

          Rich Cooper,

           

          Chief Technology Officer,

          MetaSemantics Corporation

          MetaSemantics AT EnglishLogicKernel DOT com

          ( 9 4 9 ) 5 2 5-5 7 1 2

          http://www.EnglishLogicKernel.com

           

          From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of Christopher Menzel
          Sent: Wednesday, January 13, 2016 11:11 AM
          To: ontolo...@googlegroups.com
          Subject: Re: [ontolog-forum] Wikipedia on upper ontology

           

          On 11 Jan 2016, at 11:03 PM, Rich Cooper <metase...@englishlogickernel.com> wrote:


          Christopher Menzel

          Jan 13, 2016, 5:02:03 PM
          to ontolo...@googlegroups.com
          On 13 Jan 2016, at 3:03 PM, Rich Cooper <metase...@englishlogickernel.com> wrote:
          ...Ontologies, as conceived to be ONLY made of FOL structures,

          Ontologies are not first-order structures, they are first-order (or higher-order) theories, i.e., sets of sentences in a first-order (or higher-order) language that are closed under logical consequence. A structure is a model theoretic entity (a set of one sort or another, depending on the exact definition) that serves as an interpretation of a given formal language. A structure for a language L is a model of an ontology written in L just in case every sentence of the ontology is true in the structure (where truth-in-a-structure is a very precisely defined notion from model theory).
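          The notion of truth-in-a-structure can be illustrated by brute force on a finite example. This is a toy sketch of mine, not Chris's: a three-element structure interpreting one binary relation, against which Pat's axioms 1, 3, and 4 from earlier in the thread are evaluated:

```python
from itertools import product

# A tiny finite structure: a domain plus an interpretation of "larger than".
domain = [0, 1, 2]
larger = {(n, m) for n, m in product(domain, repeat=2) if n > m}

def asymmetric():
    # Axiom 3: if N is larger than M, then M is not larger than N.
    return all((m, n) not in larger for (n, m) in larger)

def transitive():
    # Axiom 4: if N is larger than M and M is larger than P, then N is larger than P.
    return all((n, p) in larger
               for n, m in larger for m2, p in larger if m == m2)

def unbounded():
    # Axiom 1: for every N there is an M larger than N.  This fails in ANY
    # finite structure, which is one way to see why Pat's axioms together
    # can only be modeled on an infinite domain.
    return all(any((m, n) in larger for m in domain) for n in domain)

print(asymmetric(), transitive(), unbounded())
```

          The structure is a model of axioms 3 and 4 (every instance checks out) but not of axiom 1, so it is not a model of the full theory; a structure models a theory just in case every sentence passes this kind of check.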

          Folks who wish to have a thorough understanding of these fundamental concepts — without which one simply cannot pretend to know what ontological engineering is — can find a wealth of material for self-paced learning at the Open Logic Project, headed by the excellent logician, philosopher, and historian of logic Richard Zach of the University of Calgary. One can freely download the excellent ~300 page Open Logic Text here. The text provides a comprehensive introduction to basic set theory, first-order languages, first-order model theory, proof theory, metatheory (soundness, completeness, compactness, etc), and computability theory through Gödel's incompleteness theorems.

          Chris Menzel

          Bruce Schuman

          Jan 13, 2016, 5:27:10 PM
          to ontolo...@googlegroups.com, Azamat Abdoullaev
          I was just wondering if this issue might appear differently from various cultural perspectives....

          For example -- I think it's widely accepted that most institutions have internal pressures and dynamics that cause them to favor one perspective rather than another. One school of thinking thrives at a particular academic department, and is deprecated elsewhere. Interdisciplinary work has to fight for status and funding -- and all of this is often for sound reasons.

          So, yes, a wide condemnation of all these categories seems a little off-base - but maybe through a certain cultural lens, it's a defensible concern.

          My small thought on this is -- hey, if somebody or something is putting their ideological torque wrench on your view of science or the world of ideas -- get a fast internet connection and hang out on Wikipedia. They too might suffer from the ideological torments of mere mortals -- but there is SO much there, and most of it quite good and highly linked. It's a university in a box -- all those "portals", etc. And I just discovered they have a feature that will now compile an instant "book" in PDF -- all the files and articles in one section.

          I like the article on "Interdisciplinarity" that goes to much of this concern -- the pressure of competing "silos" and narrow specialized departments, versus the wide-open connect-all-the-dots kind of thing I tend to find so fascinating...

          https://en.wikipedia.org/wiki/Interdisciplinarity

          Bruce Schuman, Santa Barbara CA USA
          http://networknation.net/matrix.cfm

          -----Original Message-----
          From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of Pat Hayes
          Sent: Tuesday, January 12, 2016 8:33 PM
          To: ontolog-forum <ontolo...@googlegroups.com>; Azamat Abdoullaev <ontop...@gmail.com>
          Subject: Re: [ontolog-forum] Wikipedia on upper ontology


          On Jan 12, 2016, at 5:57 AM, Azamat Abdoullaev <ontop...@gmail.com> wrote:

          > Rich,
          > Indeed, the real world is much different than it's presented by philosophy, science, engineering, the arts and literature.

          That is a pretty comprehensive rejection of a large part of human thought. Could you give us an idea of which discipline, if any, might present us with a more accurate account (picture? description?) of the real world, in your opinion? How are we to approach this real world if philosophy, science, engineering and literature are prohibited to us?

          Pat Hayes
          ------------------------------------------------------------
          IHMC (850)434 8903 home
          40 South Alcaniz St. (850)202 4416 office
          Pensacola (850)202 4440 fax
          FL 32502 (850)291 0667 mobile (preferred)
          pha...@ihmc.us http://www.ihmc.us/users/phayes







          Rich Cooper

          Jan 13, 2016, 5:29:45 PM
          to ontolo...@googlegroups.com

          Dear Chris,

           

          You wrote:

          Ontologies are not first-order structures, they are first-order (or higher-order) theories, i.e., sets of sentences in a first-order (or higher-order) language that are closed under logical consequence.

           

          But any higher order language can be expressed in a strictly first order language, so that part has no consequence.  So far you have said:

           

          "(they) are sets of sentences in FOL closed under consequence."

           

          You continued:

          A structure is a model theoretic entity (a set of one sort or another, depending on the exact definition) that serves as an interpretation of a given formal language.

           

          Whose interpretation?  You can't have just "an interpretation" without an interpreter. 

           

          But you continued:

           

          A structure for a language L is a model of an ontology written in L just in case every sentence of the ontology is true in the structure (where truth-in-a-structure is a very precisely defined notion from model theory).

           

          I disagree.  That is a structure for a purely mathematical concept of language; it's not the language that people speak.  Which is again tangential to my point: Ontologies are about math, but they are not grounded in human language.  There is no concept in ontologies about the subjective properties and the spectra of the population along those property axes. 

           

          You have biological senses, drives, goals, and many things that can't be expressed in math.  Yet you have a faith in your religion that mathematics is pure and perfect. 

           

          The Goedel gedankenexperiment showed that even an FOL system as simple as arithmetic contains inconsistencies such that there are theorems which, though true, can neither be proven nor disproven.  But people can decide effectively on them all day.

           

          Subjectivity works at times when perfect projections don't work - Q.E.D.

           

          Sincerely,

          Rich Cooper,

           

          Chief Technology Officer,

          MetaSemantics Corporation

          MetaSemantics AT EnglishLogicKernel DOT com

          ( 9 4 9 ) 5 2 5-5 7 1 2

          http://www.EnglishLogicKernel.com

           

          From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of Christopher Menzel


          Sent: Wednesday, January 13, 2016 2:02 PM
          To: ontolo...@googlegroups.com


          Bruce Schuman

          Jan 13, 2016, 5:35:41 PM
          to ontolo...@googlegroups.com

          And I just wanted to laugh at this -- it's so true.  I think my whole life is about this issue….

           

          >if you can intuit them all being true at once

           

          That's a huge issue, certainly in my world.  There is a LOT of stuff in the world that is (mind-blowingly) "all true at once" -- but our little brains just can't handle it, so most of us pick some modest corner and try to stay sane and get the bills paid.  Maybe that's why we need an upper-level ontology, or something like one.

           

          https://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus_or_Minus_Two

           

          "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information" is one of the most highly cited papers in psychology. It was published in 1956 by the cognitive psychologist George A. Miller of Princeton University's Department of Psychology in Psychological Review. It is often interpreted to argue that the number of objects an average human can hold in working memory is 7 ± 2. This is frequently referred to as Miller's Law.

           

          PS, the author of this famous article is one of the founders of WordNet

           

          https://en.wikipedia.org/wiki/George_Armitage_Miller

           

          Bruce Schuman, Santa Barbara CA USA

          http://networknation.net/matrix.cfm

           

           

          -----Original Message-----
          From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of Pat Hayes

          Sent: Tuesday, January 12, 2016 8:44 PM
          To: ontolog-forum <ontolo...@googlegroups.com>; Rich Cooper <metase...@englishlogickernel.com>
          Subject: Re: [ontolog-forum] Wikipedia on upper ontology

           

           

          On Jan 11, 2016, at 5:11 PM, Rich Cooper <metase...@englishlogickernel.com> wrote:

           

          > ...We all know how to represent infinity and reason about infinities, by saying "everything exists that exists, that has ever existed, or that ever will exist, … "  ........   I, personally, have never met an infinity,

           

          There are much more straightforward ways to meet infinity. For example, try this:

           

          1. For every number N, there is a number M larger than N.

          2. 0 is a number and every other number is larger than it.

          3. If N is larger than M, then M is not larger than N.

          4. If N is larger than M and M is larger than P, then N is larger than P.

           

          It follows from these axioms - which are pretty easy to understand, and I would claim intuitively obviously true, given the usual meanings of words like 'number' and 'larger' -  that there are infinitely many numbers. So if you understand these axioms, and if you can intuit them all being true at once, then you have met an infinity.

           

          Pat Hayes

           

          ------------------------------------------------------------

          IHMC                                     (850)434 8903 home

          40 South Alcaniz St.            (850)202 4416   office

          Pensacola                            (850)202 4440   fax

          FL 32502                              (850)291 0667   mobile (preferred)

          pha...@ihmc.us       http://www.ihmc.us/users/phayes

           

           

           

           

           

           


          joseph simpson

          Jan 13, 2016, 5:42:03 PM
          to ontolo...@googlegroups.com
          Some people may find the book Gödel's Theorem: An Incomplete Guide to Its Use and Abuse informative.

          See:


          Take care, be good to yourself and have fun,

          Joe


          For more options, visit https://groups.google.com/d/optout.



          --
          Joe Simpson

          “Reasonable people adapt themselves to the world. 

          Unreasonable people attempt to adapt the world to themselves. 

          All progress, therefore, depends on unreasonable people.”

          George Bernard Shaw

          joseph simpson

          Jan 13, 2016, 5:43:07 PM
          to ontolo...@googlegroups.com

          Christopher Menzel

          Jan 13, 2016, 5:54:45 PM
          to ontolo...@googlegroups.com
          On 13 Jan 2016, at 4:29 PM, Rich Cooper <metase...@englishlogickernel.com> wrote:
          You wrote:
          Ontologies are not first-order structures, they are first-order (or higher-order) theories, i.e., sets of sentences in a first-order (or higher-order) language that are closed under logical consequence. 
           
          But any higher order language can be expressed in a strictly first order language, so that part has no consequence. 

          Really, seriously, you don't know what you're talking about. (The theorem you are alluding to concerns higher-order languages under what is known as General (or Henkin) Semantics, which is not a true higher-order semantics. See, e.g., Enderton's A Mathematical Introduction to Logic, ch 4 for details.)

          So far you have said:
           
          "(they) are sets of sentences in FOL closed under consequence."
           
          You continued:
          A structure is a model theoretic entity (a set of one sort or another, depending on the exact definition) that serves as an interpretation of a given formal language. 
           
          Whose interpretation?  You can't have just "an interpretation" without an interpreter.

          If you were to learn some basic mathematical logic you wouldn't even ask this question.

          But you continued:
           
          A structure for a language L is a model of an ontology written in L just in case every sentence of the ontology is true in the structure (where truth-in-a-structure is a very precisely defined notion from model theory).

          I disagree. 

          You are just saying silly things born of ignorance. Saying that you disagree with the definition of a structure in mathematical logic is like saying you disagree with the definition of a prime number in arithmetic.

          The Goedel gedankenexperiment

          Gödel's theorems are exactly that, rigorously proved theorems of mathematics. They are not flights of speculative fancy.

          showed that even an FOL system as simple as arithmetic contains inconsistencies such that there are theorems which, though true, can neither be proven nor disproven. 

          Really, Rich, for God's sake, stop. First, a definitional point — theorems are by definition statements that are proved. Hence it is an oxymoron to talk of theorems that cannot be proved. Second, and more importantly, Gödel showed absolutely no such thing as you suggest; he most certainly did not show that arithmetic contains any inconsistencies. What he did show is that, for any consistent system containing a certain bit of arithmetic, there are sentences in the language of the system that are neither provable nor disprovable in that system. It then follows (if you're a realist about the natural numbers) that there are sentences of the system that are true in the natural number structure but unprovable in the system.

          But people can decide effectively on them all day.

          Actually, no. One of the consequences of Gödel's theorem is that there is no decision procedure for arithmetical truth.

          Just do some homework, man, for your own good and the good of this forum. In the meantime, stick to what you know, whatever that is exactly.

          -chris

          Christopher Menzel

          Jan 13, 2016, 6:01:32 PM
          to ontolo...@googlegroups.com
          On 13 Jan 2016, at 4:42 PM, joseph simpson <jjs...@gmail.com> wrote:
          Some people may find the book Gödel's Theorem: An Incomplete Guide to Its Use and Abuse informative.

          Yes, this is a marvelous book written by a marvelous, and sadly missed, human being, Torkel Franzen. I cannot recommend it more highly both as an exposition of Gödel's theorem and as a bulwark against misinterpreting it.

          -chris

          ps: The link in Joe's email seems to be broken. This one works: http://goo.gl/D2zgCr

          Rich Cooper

          Jan 13, 2016, 6:34:52 PM
          to ontolo...@googlegroups.com

          Bruce,

           

          Sanity is overrated.  Remember Hemingway, Marilyn Monroe, Aristotle, among other pain sufferers, and Turing, among other socially disapproved, took the simplest way out; what a loss the human races have suffered since with such social incontinence.  Let's be more interoperable in our text, but we need not be semantically interoperable since there is so little acceptance.  Instead, let us agree on any interpretation we choose individually.  That is the effective meaning of "intercourse" on this list. 

           

          Viva nil!

           

          Sincerely,

          Rich Cooper,


           

          Chief Technology Officer,

          MetaSemantics Corporation

          MetaSemantics AT EnglishLogicKernel DOT com

          ( 9 4 9 ) 5 2 5-5 7 1 2

          http://www.EnglishLogicKernel.com

           

          From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of Bruce Schuman


          Sent: Wednesday, January 13, 2016 2:36 PM
          To: ontolo...@googlegroups.com

          Rich Cooper

          unread,
          Jan 13, 2016, 6:48:45 PM1/13/16
          to ontolo...@googlegroups.com

          Thanks Joe,

           

          But I am content with the way I learned it about forty-five years ago.  I find these revisionist discussions, based on conjectures about conjectures, interesting, but I don't base my conclusions on them. 

           

          Thanks much for providing the link!

           

          Sincerely,

          Rich Cooper,


           

          Chief Technology Officer,

          MetaSemantics Corporation

          MetaSemantics AT EnglishLogicKernel DOT com

          ( 9 4 9 ) 5 2 5-5 7 1 2

          http://www.EnglishLogicKernel.com

           

          Rich Cooper

          unread,
          Jan 13, 2016, 6:51:11 PM1/13/16
          to ontolo...@googlegroups.com

          Thanks for your reference supporting the book, Chris.  I'll put it on my Christmas wish list so everybody knows what they can send me. 

           

          Sincerely,

          Rich Cooper,


           

          Chief Technology Officer,

          MetaSemantics Corporation

          MetaSemantics AT EnglishLogicKernel DOT com

          ( 9 4 9 ) 5 2 5-5 7 1 2

          http://www.EnglishLogicKernel.com

           

          From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of Christopher Menzel
          Sent: Wednesday, January 13, 2016 3:01 PM
          To: ontolo...@googlegroups.com
          Subject: Re: [ontolog-forum] Wikipedia on upper ontology

           

          On 13 Jan 2016, at 4:42 PM, joseph simpson <jjs...@gmail.com> wrote:

          --

          All contributions to this forum by its members are made under an open content license, open publication license, open source or free software license. Unless otherwise specified, all Ontolog Forum content shall be subject to the Creative Commons CC-BY-SA 4.0 License or its successors.
          ---
          You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
          To unsubscribe from this group and stop receiving emails from it, send an email to ontolog-foru...@googlegroups.com.
          To post to this group, send email to ontolo...@googlegroups.com.
          Visit this group at https://groups.google.com/group/ontolog-forum.

          Gregg Reynolds

          unread,
          Jan 13, 2016, 8:03:12 PM1/13/16
          to ontolo...@googlegroups.com


          On Jan 13, 2016 5:34 PM, "Rich Cooper" <metase...@englishlogickernel.com> wrote:
          >
          > Bruce,
          >
          >  
          >
          > Sanity is overrated.

          I can see why you might say that, Rich.

          (For the record, that's intended as an affectionate joke, not a hostile attack.  When somebody lobs such an easy pitch I can't resist taking a swing!)

          Rich Cooper

          unread,
          Jan 13, 2016, 8:21:39 PM1/13/16
          to ontolo...@googlegroups.com

          Easy swing accepted; just be careful in future (grin)

           

          Sincerely,

          Rich Cooper,


           

          Chief Technology Officer,

          MetaSemantics Corporation

          MetaSemantics AT EnglishLogicKernel DOT com

          ( 9 4 9 ) 5 2 5-5 7 1 2

          http://www.EnglishLogicKernel.com

           

          From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of Gregg Reynolds
          Sent: Wednesday, January 13, 2016 5:03 PM
          To: ontolo...@googlegroups.com
          Subject: RE: [ontolog-forum] Wikipedia on upper ontology

           


          Rich Cooper

          unread,
          Jan 14, 2016, 11:29:50 AM1/14/16
          to ontolo...@googlegroups.com

          Here is what I find on Goedel's Theorem in Wikipedia:

          https://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_theorems

           

          Gödel's incompleteness theorems are two theorems of mathematical logic that establish inherent limitations of all but the most trivial axiomatic systems capable of doing arithmetic. The theorems, proven by Kurt Gödel in 1931, are important both in mathematical logic and in the philosophy of mathematics. The two results are widely, but not universally, interpreted as showing that Hilbert's program to find a complete and consistent set of axioms for all mathematics is impossible, giving a negative answer to Hilbert's second problem.

           

          Let me abstract that:

          showing that Hilbert's program to find a complete and consistent set of axioms for all mathematics is impossible

           

          I.e., there are theorems which, though true, cannot be proven, and there are theorems which, though false, cannot be disproven.

           

          It is a very simple sentence.  What do you consider erroneous about it?

           

          Sincerely,

          Rich Cooper,


           

          Chief Technology Officer,

          MetaSemantics Corporation

          MetaSemantics AT EnglishLogicKernel DOT com

          ( 9 4 9 ) 5 2 5-5 7 1 2

          http://www.EnglishLogicKernel.com

           

          From: ontolo...@googlegroups.com [mailto:ontolo...@googlegroups.com] On Behalf Of Rich Cooper


          Sent: Wednesday, January 13, 2016 3:49 PM
          To: ontolo...@googlegroups.com

          Chris Partridge

          unread,
          Jan 14, 2016, 11:35:17 AM1/14/16
          to ontolo...@googlegroups.com
          try a little more searching ...

          "In mathematics, a theorem is a statement that has been proven on the basis of previously established statements, such as other theorems—and generally accepted statements, such as axioms. A theorem is a logical consequence of the axioms. "

          Rich Cooper

          unread,
          Jan 14, 2016, 11:42:21 AM1/14/16
          to ontolo...@googlegroups.com

          Thanks Chris,

          try a little more searching ...

           

          https://en.wikipedia.org/wiki/Theorem

          "In mathematics, a theorem is a statement that has been proven on the basis of previously established statements, such as other theorems—and generally accepted statements, such as axioms. A theorem is a logical consequence of the axioms. "

           

          I get that, but how does that apply to Goedel's Theorems? 

           

          Sincerely,

          Rich Cooper,


           

          Chief Technology Officer,

          MetaSemantics Corporation

          MetaSemantics AT EnglishLogicKernel DOT com

          ( 9 4 9 ) 5 2 5-5 7 1 2

          http://www.EnglishLogicKernel.com

           

          Chris Partridge

          unread,
          Jan 14, 2016, 11:52:55 AM1/14/16
          to ontolo...@googlegroups.com
          You said:
          "I.e., there are theorems which, though true, cannot be proven, and there are theorems which, though false, cannot be disproven."
          wiki says:
          " a theorem is a statement that has been proven"

          One of you is wrong.
          Probably wiki - you cannot trust it :)

          Rich Cooper

          unread,
          Jan 14, 2016, 12:01:21 PM1/14/16
          to ontolo...@googlegroups.com

          Chris,

           

          If you take that Wiki over the other Wiki, then theorems are all proven true.  I think that is too literal, given that Goedel showed that there are unprovable theorems.  So if mathematicians want such precision, they are doomed to failure, according to Goedel. 

           

          If anyone can prove that conclusion wrong, please do so. 

          joseph simpson

          unread,
          Jan 14, 2016, 12:47:15 PM1/14/16
          to ontolo...@googlegroups.com
          Some people on this list may find the following publication of interest:

          http://www.amazon.com/Alan-Turings-Systems-Logic-Princeton/dp/0691155747

          Which includes the following material:

          "The well-known theorem of Godel (1931) shows that every system of logic is in a certain sense incomplete, but at the same time it indicates means whereby from a system L of logic a more complete system L' may be obtained."

          Take care and have fun,

          Joe




          Rich Cooper

          unread,
          Jan 14, 2016, 1:28:48 PM1/14/16
          to ontolo...@googlegroups.com

          Thanks Joe,

           

          I also found a 162 page PDF at:

          http://www.karlin.mff.cuni.cz/~krajicek/smith.pdf

           

          which I am starting to read, if anyone wants to share discovering this material.

          Rich Cooper

          unread,
          Jan 14, 2016, 1:42:06 PM1/14/16
          to ontolo...@googlegroups.com

          That was quick; the paper has the following statement:

           

          1.2  Incompleteness

          But now, in headline terms, what Gödel's First Incompleteness Theorem shows is that the entirely natural idea that we can axiomatize basic arithmetic is wrong.  Suppose we try to specify a suitable axiomatic theory T that seems to capture the structure of the natural number sequence and pin down addition and multiplication (and maybe a lot more besides).  Then Gödel gives us a recipe for coming up with a corresponding sentence G[T], couched in the language of basic arithmetic, such that (i) we can show (on very modest assumptions) that neither G[T] nor ¬G[T] can be proved in T, and yet (ii) we can also recognize that G[T] will be true so long as T is consistent.
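[Smith's headline claim can be put in the standard modern form. This restatement — including the choice of Robinson arithmetic Q as the base theory and the Rosser strengthening that lets plain consistency suffice — is an editorial summary, not a quotation from the paper.]

```latex
% First Incompleteness Theorem (Goedel--Rosser form):
% for any consistent, recursively axiomatizable theory T extending
% Robinson arithmetic Q, there is an arithmetic sentence G_T with
T \nvdash G_T, \qquad T \nvdash \neg G_T, \qquad \text{and yet} \quad \mathbb{N} \models G_T .
```

Note that the last clause ("true in the natural numbers") is a claim made in the metatheory, not in T itself — which is exactly the provable/true distinction argued over earlier in this thread.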
