Gary,
It was an interesting report and a nice discussion. As always, it will take some time to study the terminology of Giancarlo and Nicola's approach.
For now, it looks like they are bringing their own terminology into the field of developing and constructing scientific and engineering theories, which should be justified, since there are already many standards and guidelines (aka methodologies).
So, a mini theory is most likely a sub-theory: we take some of the axioms of a full theory, and sometimes some of the primary terms, and nevertheless obtain conclusions that will also hold in the full theory of the subject area.
Background knowledge is the full theory of the subject area that we have at the moment.
Unpacking some concepts is most likely a way of defining concepts or deriving more of their properties and connections.
In technological areas (and medicine), we are faced with the fact that theoretical knowledge turns out to be specific to each organization, with each one adding its own nuances to the standard definitions. In this case, general-purpose terms ("participation") will have different definitions in different theories, and the template is just a hint to start with.
The main question is, after all, what theories do we already have in various sciences and technologies? What do they look like? Where are they?
We go to the clinics and, starting from the schema of their database and the structure of their documents, we build an axiomatic theory of their theoretical knowledge and how they use it in practice.
Alex
--
All contributions to this forum are covered by an open-source license.
For information about the wiki, the license, and how to subscribe or
unsubscribe to the forum, see http://ontologforum.org/info
---
You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ontolog-foru...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/ontolog-forum/CAMhe4f0_DBbZ5ae8wRsui%2BvAXYoZyTNnKoTLuKcP97kn_i0Zwg%40mail.gmail.com.
Hi Giancarlo,
That is a great description of the separation of concerns that is vital in any domain of engineering, including ontology engineering and computer applications.
In the Semantic Shed community we have been working towards a crisp explanation of this distinction for some years, and have been looking at a range of techniques for deriving use case specific OWL ontologies, and conventional data models, from an over-arching application-independent ontology (or explanatory ontology).
Some people out there still seem to labor under the illusion that because “ontology” deals with meaning, you only need to do it once – and they then end up in all sorts of muddles trying to make the same ontology do different jobs, often relaxing the constraints to fit the data until there is little left of the original meanings.
One interesting thing that’s arisen out of the Semantic Shed work is a descriptive framework (articulated by Jim Logan, Max Gillmore and Cory Casanave, with the help of others), in which we talk about the notion of “Direction of Instantiation”.
Picture a model such as a UML Object Oriented design model or an OWL ontology. Put this on the left of your mental screen. To the right is what it is a model of. The model is used to create the data, like the OO analogy of a cookie cutter. It stamps out what the shape of the data should be. The direction of travel is from left to right – from the model to the implementation.
Now picture another kind of model, the explanatory ontology. I’m going to pop this on the left of your mental screen again. On the right I will put the things in the world. Here, the direction of travel is reversed, from real-world things to the model that aims to be a model of those things. The real things precede the model.
This reversal of direction reflects two potential interpretations of the word “instantiate”.
In the first case, “instantiates” means “creates an instance of”. In the second case you are not creating anything: to be an instance of something in the ontology is to be something that has traits that meet all the conditions in the intension of that model construct.
These two usages are in opposite directions: One direction is for knowledge (i.e. a theory) of what exists, while the other direction is for recording observations in (a) formation (i.e. “in-formation”). A suitable “formation” for putting that data into is the data schema or, in OWL, what we call a “data ontology”.
An observation in a formation is not the same thing as knowledge.
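The two directions can be sketched in OWL. The class and property names below are invented for illustration, not taken from any particular ontology. In the "things to model" direction, membership is inferred from traits; in the "model to data" direction, the class merely stamps out the expected shape of records.

```turtle
@prefix :     <http://example.org/sketch#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# Direction "things -> model" (classification): anything that has a
# child *is* a Parent; the reasoner recognizes instances, it does not
# create them.
:Parent owl:equivalentClass [
    a owl:Restriction ;
    owl:onProperty :hasChild ;
    owl:someValuesFrom :Person ] .

# Direction "model -> data" (cookie cutter): the class declares what
# records should look like; nothing here says what a parent *is*.
:ParentRecord a owl:Class ;
    rdfs:comment "Data-ontology class: expected fields only." .
:childName rdfs:domain :ParentRecord .
```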
We can now line these up and see a flow from the real world things to the explanatory ontology, then we can apply some transformation from the explanatory ontology to some simplifying, data-focused ontology, and from that to some implementation.
The transformation may be as simple as identifying appropriate datatypes for data that represent real world things and relationships. It may add data surrogates for real-world truth-makers (for example, many legal capacities and capabilities may have a data surrogate in the form of a government license).
An enterprise-wide knowledge graph may also chop off some of the higher-level abstractions or collapse them into fewer levels of subsumption. Since the Explanatory Ontology has already dealt with those concerns, there is no need to replicate them in the knowledge graph. For narrower, use case-specific applications, you might collapse and compress parts of the explanatory ontology, for example replacing complex patterns and constructions (such as Relators and Roles in UFO) with ontology design patterns of simple object properties for the context of that use case.
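As a hedged sketch of that kind of compression (names invented for illustration, loosely in the style of UFO's Relator pattern): the explanatory ontology reifies the employment relationship, while a use case-specific data ontology collapses it into a single object property.

```turtle
@prefix :     <http://example.org/sketch#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# Explanatory ontology: the relator :Employment mediates the two
# parties, so it can carry dates, terms, and truth-makers.
:Employment a owl:Class ;
    rdfs:subClassOf :Relator .
:mediatesEmployee rdfs:domain :Employment ; rdfs:range :Person .
:mediatesEmployer rdfs:domain :Employment ; rdfs:range :Organization .

# Data ontology for a narrow use case: the relator pattern is
# compressed into one object property; the nuance stays in the
# explanatory model.
:worksFor a owl:ObjectProperty ;
    rdfs:domain :Person ;
    rdfs:range  :Organization .
```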
There’s a whole lot more to that, and we’ve done some studies.
But I think there is also a third kind of thing in play.
When I look at various LinkedIn posts on knowledge graphs and linked data, with some exceptions the focus seems to be on understanding what classes of thing and relationships exist in the data, rather than necessarily the meanings of the things. Modelers are thinking about the shape of the data – indeed, the data shapes standard SHACL was introduced to enable modelers to say more at this data level.
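A minimal SHACL sketch of that data-level focus (shape and property names are assumptions): the shape says what the data must look like, and is silent on what an employee is.

```turtle
@prefix :    <http://example.org/sketch#> .
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

# Constrains the shape of the data, not the meaning of :Employee.
:EmployeeShape a sh:NodeShape ;
    sh:targetClass :Employee ;
    sh:property [
        sh:path :worksFor ;
        sh:minCount 1 ;       # every employee record names an employer
    ] ;
    sh:property [
        sh:path :hireDate ;
        sh:datatype xsd:date ;
        sh:maxCount 1 ;
    ] .
```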
The presence (and cardinalities, restrictions etc.) of relationships between things in the available data may well be coextensive with an account of the meanings of those things, most of the time. But even when these are coextensive, they are not the same thing.
Truth is not meaning.
The simplest difference is that a data ontology need not stand up all the properties that make up the necessary and sufficient conditions for class membership. They need not even stand up data surrogates for those things. They need only stand up the data that the application cares about, and that exists in the data domain. So the relationships in the linked data need not represent the meanings of things, even if they reflect it in part.
Also, and for good reasons, an explanatory ontology teases things apart whereas a data ontology conflates them back together again. It also need not have the higher levels of logic needed to capture some aspects of the definitions of things in reality.
But the requirements for representing the classes and relationships in the data may go beyond these differences, in interesting ways.
Some exceptions will make this clearer.
When I was doing an ontology for loans, I was asked not to include loans that were unsecured. It may be that you don’t want or don’t expect the data to handle unsecured loans, but it is part of the definition of a loan that it either is or is not secured. That’s part of its meaning.
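In OWL terms (a sketch with invented names), the explanatory ontology would keep that covering axiom even if the application's data ontology only ever sees secured loans:

```turtle
@prefix :    <http://example.org/sketch#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

# Part of the meaning of Loan: every loan is secured or unsecured.
:Loan owl:equivalentClass [
    a owl:Class ;
    owl:unionOf ( :SecuredLoan :UnsecuredLoan ) ] .
:SecuredLoan owl:disjointWith :UnsecuredLoan .

# A data ontology for this application may mention only :SecuredLoan;
# dropping :UnsecuredLoan there is a scoping choice, not a definition.
```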
Similarly, it may be tempting in an ontology to constrain the allowable range of the property “owns” so as to exclude “Person” from the range of things that can be owned. After all, you can’t own people.
However, that is a deontic limitation not a semantic one. It is legally forbidden, and morally reprehensible, to own people. However, you can’t define the term “slave” without that relationship.
Meaning is not truth.
This is not a criticism of the data ontology. In fact, it is a good thing. Being able to add a layer of deontic limitations to what can be expressed would be a very good thing to add to the data ontology, for many use cases.
Not all use cases will require the same additional deontic features, for example some compliance use cases will need to dig deeper into the meanings of things, in order to detect or prevent those things.
This is another good reason for this engineering separation of concerns. Because we have decoupled the data ontologies from the explanatory ontology, those data ontologies may not only remove assertions that have nothing to do with the data or the current application, but may add restrictions or other information to reflect what’s expected in the data, or as a means of detecting when something in the data is illegal or impermissible. You can use this new freedom to represent what Ronald Ross defines as “business rules”, for example "Nobody should be swimming in this pool at this time". Or to use data-oriented logic standards like SWRL or RIF to implement logical data rules, which are not the same thing. And of course you can layer on SHACL shapes here.
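One way such a deontic layer could be sketched in SHACL (property and message are assumptions): the shape flags any data in which a person appears as an owned thing, without pretending that the restriction is part of the meaning of “owns”.

```turtle
@prefix :   <http://example.org/sketch#> .
@prefix sh: <http://www.w3.org/ns/shacl#> .

# Deontic rule over the data, not a semantic range restriction.
:OwnsRangeShape a sh:NodeShape ;
    sh:targetObjectsOf :owns ;
    sh:not [ sh:class :Person ] ;
    sh:message "A person may not appear as an owned thing (a legal rule, not part of the meaning of 'owns')." .
```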
By decoupling the data ontology from the business-facing explanatory ontology, you can not only have less of the original semantics in that model but you can do more with the shapes of the data itself.
It seems to me that a lot of folks in the data ontology world resist this kind of separation of concerns because they perceive that it will be more work to create two or more different ontologies. But in fact, having this separation of concerns reduces not only the workload but also the complexity of the ontologies used in individual applications. In the case of these deontic edge cases, it also frees up the application ontologies to do things that the concept ontology was never intended for.
Mike
On Feb 18, 2025, at 9:58 AM, Jim Logan <jll...@gmail.com> wrote:
Mike,