Opinion: The Best Reuse comes from Use


Michael DeBellis

May 5, 2023, 5:46:46 PM
to ontolog-forum
What I mean is that in my experience with OOP, the libraries that tend to have the most reuse potential are the ones where a team took some functionality they developed for a real system and generalized it to be reusable on other projects. Often, code that was written just with the idea of being reusable ended up being more work to reuse than creating from scratch. One example was Accenture's project Eagle. They spent an amazing number of non-billable hours to build it, and it was in many ways an excellent set of libraries. However, it was almost never used, because the people who developed it didn't have to deal with the realities of a system in a real client environment with real legacy system requirements, and so they designed it to be elegant but too difficult to use for client work. 

I'm asking because I'm writing a paper and I want to make that point. I'm not sure how much this is my opinion vs. something that most in the OOP community believe. I've done a preliminary search on Google Scholar and there seem to be several papers with the message I was expecting. Also, of the ontologies I have experience using, the ones that seem to be the most reusable are those, such as SKOS and Dublin Core, that are based on use in content/document management. 

But I wonder what people here think?  I realize that modeling with ontologies is different from OOP modeling so perhaps the same isn't true for ontologies. Interested in what others think and any references on the topic. 

Michael

John F Sowa

May 5, 2023, 11:27:57 PM
to ontolo...@googlegroups.com
Michael,

Some observations and suggestions.

MDB>   I realize that modeling with ontologies is different from OOP modeling so perhaps the same isn't true for ontologies. Interested in what others think and any references on the topic. 

An ontology of some subject (of any size, small or huge) is a semantic representation.  OOP is a structural (syntactic) method for organizing and using the semantics for some purpose.  The kinds of data structures used to represent the ontologies may be different from the kinds of structures  used in OOP.  But there should be some systematic methods for mapping information from one kind of structure to another.

That is one reason why I keep emphasizing the importance of using very general logics, such as Common Logic, and very systematic methods of mapping from one version of logic to another, such as the OMG standard for DOL.

 All formally (mathematically) specified information can be stated in some version of logic.  DOL shows how  to do the mappings.  Therefore, mappings of ontology formats to and from OOP formats must be possible.  As soon as you show that a mapping is possible, you can look for a way of simplifying the mapping to make it more efficient.
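John's claim that mappings between ontology formats and OOP formats must be possible can be made concrete with a toy sketch. This is illustrative only: the class and property names are invented, and real mappings via Common Logic or DOL are far richer than a dictionary round trip.

```python
# Toy "ontology" as subject-predicate-object triples (names are invented).
triples = [
    ("Pump", "subClassOf", "Equipment"),
    ("Pump", "hasProperty", "flowRate"),
    ("Equipment", "hasProperty", "serialNumber"),
]

def triples_to_classes(triples):
    """Map triple-style declarations into OOP-style class descriptors:
    {name: {"parent": ..., "properties": [...]}}."""
    classes = {}
    for s, p, o in triples:
        cls = classes.setdefault(s, {"parent": None, "properties": []})
        if p == "subClassOf":
            cls["parent"] = o
            classes.setdefault(o, {"parent": None, "properties": []})
        elif p == "hasProperty":
            cls["properties"].append(o)
    return classes

def classes_to_triples(classes):
    """Inverse mapping, showing the round trip loses nothing."""
    out = []
    for name, cls in classes.items():
        if cls["parent"]:
            out.append((name, "subClassOf", cls["parent"]))
        for prop in cls["properties"]:
            out.append((name, "hasProperty", prop))
    return out
```

Once such a mapping is shown to exist, one can then look for ways of simplifying it to make it more efficient, which is the point above.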

John
 



doug foxvog

May 6, 2023, 1:21:10 AM
to ontolo...@googlegroups.com
Michael,

Cyc was built as a general purpose ontology at the upper and mid-level,
such that it could be drilled down in specific areas for project work. I
worked at Cycorp 1996-2003, developing many design techniques and
significant sections of the mid-level as well as some of the upper
ontology. In later jobs, at DERI, NIST, and a couple of start-ups, i
found Cyc very reusable. This may be because i was totally familiar with
it, but i used it in creating ontology silos for business messaging;
medical records; materials science; food recipes and allergies; human
cell types; and pharmaceutical indications, contraindications, and
interactions for projects at those later positions. Cycorp partnered for
a number of years with the Cleveland Clinic, developing ontology silos for
a number of medical projects. They haven't publicized many other
(probably smaller) projects they have performed for paying customers.

The point of all these ontologies was to allow "reasoning" with data in
the various fields. Being based on well-designed existing mid-level
ontologies greatly reduced what was necessary to add to allow for
reasoning about the detailed field-specific concepts in the various
projects.

I also found the NL system, including the encoding of thousands of words
and phrases, good for re-use. Sometimes new phrases or additional
denotations of words needed to be added and the priority of some existing
denotations needed to be reduced, but there was a great amount of re-use.
Of course, most of the mid-level ontology and NL encodings were irrelevant
for any particular project, but different projects had different sets of
irrelevant ontology and NL encodings.

-- doug foxvog

"Michael DeBellis" <mdebe...@gmail.com> wrote:

> ... in my experience with OOP the libraries that tend to
> have the most reuse potential are the ones where a team took some
> functionality they developed for a real system and generalized it to be
> reusable on other projects. ...
>
> I'm asking because I'm writing a paper and I want to make that point. ...

dr.matt...@gmail.com

May 6, 2023, 4:55:07 AM
to ontolo...@googlegroups.com

Dear Michael,

Unless you have specific requirements in mind, your chances of coming up with something useful are remote, because it will just be plucked out of the air rather than grounded in reality. However, even if things are grounded in real requirements, you can still have challenges in reuse with large reusable resources.

ISO 15926 is a top- and mid-level data model/ontology, extensible to detailed ontological content using Reference Data, that was developed by the oil industry in the late nineties and early noughties. This has proved capable in the hands of those who were familiar with it (a parallel with Doug’s comments on Cyc), but there is a decent overhead in becoming familiar with it and the way it is intended to be used. As a result, periodically some have looked at it and said, “This is too complicated, there must be a better/simpler way to do what I need.” And they have gone off to try to do that, only to find (eventually – often years later) that it was not that simple, and then a few years after that the good bits of the work done have been reincorporated into ISO 15926.

Of course not everyone has taken that approach, and those that have done the initial homework have generally had good results.

 

Regards

Matthew West

--
All contributions to this forum are covered by an open-source license.
For information about the wiki, the license, and how to subscribe or
unsubscribe to the forum, see http://ontologforum.org/info/
---
You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ontolog-foru...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/ontolog-forum/3a5141ab-6412-4820-9a6c-d8837b03777en%40googlegroups.com.

Chris Partridge

May 7, 2023, 5:03:54 AM
to ontolo...@googlegroups.com
Hi Michael.

WRT: "What I mean is that in my experience with OOP the libraries that tend to  have the most reuse potential are the ones where a team took some functionality they developed for a real system and generalized it to be reusable on other projects. "
In our experience, there is an even more stringent requirement. If you want to build usable (i.e. reusable)  ontologies for operational systems, then you should extract the ontologies from the data in existing working systems. The more reliable that data, the better the ontology.
(The converse is also true, so if you start with unstructured data, then your ontologies are unlikely to be good enough for operational use.)
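As a rough illustration of what "extracting an ontology from the data of a working system" can mean at its simplest, here is a hedged Python sketch. The record layout and the keep-only-fully-populated-attributes rule are invented for the example; real extraction processes are far more involved.

```python
def extract_candidate_ontology(records):
    """Given records tagged with a 'type' field (as pulled from a working
    system), collect candidate classes and, per class, the attributes that
    are reliably populated -- a first rough cut at an extracted ontology."""
    classes = {}
    for rec in records:
        cls = classes.setdefault(rec["type"], {"count": 0, "attrs": {}})
        cls["count"] += 1
        for key, value in rec.items():
            if key != "type" and value is not None:
                cls["attrs"][key] = cls["attrs"].get(key, 0) + 1
    # Keep only attributes populated in every record of the class:
    # sparsely filled fields are weaker evidence.
    return {
        name: sorted(a for a, n in cls["attrs"].items() if n == cls["count"])
        for name, cls in classes.items()
    }

# Invented sample records standing in for data from a live system.
records = [
    {"type": "Order", "id": 1, "customer": "ACME", "notes": None},
    {"type": "Order", "id": 2, "customer": "Globex", "notes": "rush"},
]
```

The point of grounding in reliable data shows up directly here: the sparser or noisier the records, the weaker the extracted candidate classes.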

Given the difficulty of building workable operational systems of any size this should not be a surprise. What is more of a surprise is how challenging the extraction of the ontology can be. That's why we focus much of our time on building processes for high quality extraction of ontologies. We mention this as an aside in a number of papers - e.g. https://www.academia.edu/101235440 or https://www.academia.edu/95095494/ or  https://www.academia.edu/95095504/.

However, I suspect this position is a bit of an outlier from a community perspective.

In these discussions, I'm reminded of Pat Churchland's 'ass' comment: “Also known as “armchair philosophy”, conceptual analysis as a way of finding out about the nature of things seemed to me marginally worthwhile, and basically a dead end. Increasingly, I saw much of conceptual analysis as unconstrained speculation that ignored facts. Or, as our students sometimes put it, as just pulling it out of your ass.“ https://patriciachurchland.com/story-2/ --- replace "conceptual analysis" with desktop domain ontology building. In many ways this is old hat; Francis Bacon made the same point in 1620, less colloquially, with ants, spiders, and bees.

Chris

--

Anatoly Levenchuk

May 7, 2023, 9:01:05 AM
to ontolo...@googlegroups.com

Hi Chris,

 > However, I suspect this position is a bit of an outlier from a community perspective.

 

 Not all of the community is in the armchair philosophy epicenter ))) 

 We teach our information technology students ontology engineering through this chain of ideas:

 -- concept theories (prototype theory, theory theory, etc.), with emphasis on theory theory (https://plato.stanford.edu/entries/concepts/), i.e., objects and relationships

 -- semantics (the relation between signs, concepts, and the physical world), based on theory theory and 4D extensionalism (your BORO book is still practical. Here is the most recent student summary of it; use Google Translate to get it in English: https://blog.system-school.ru/2023/05/04/moi-zametki-po-prochtenii-knigi-boro/ -- it was written by a space engineering entrepreneur who is our student, on the 4th of May 2023). 

 -- top-level systems ontology, based on 3rd generation of systems thinking, https://ailev.livejournal.com/1657040.html 

-- our small middle-level ontology of methodology (how to think about agents), systems engineering practices, and management practices (enterprise ontology), incorporated in our online courses with a universal table modeler. We specifically teach students to use three levels of ontologies while dealing with tables: the column name is the meta-model of an application domain; the type of the column name must be explicitly borrowed from our meta-meta-model, that is, the top-level systems ontology and our middle ontology (students are explicitly banned from mentioning it to colleagues, but it is mandatory for our disciples); and the table rows are models of the modeled real-world objects. It was a huge success! Students of organizational management courses started using coda.io, notion.so, and even Excel as universal ontology modelers. It worked! The key practice here was the “industrial use” of a TLO in the ISO 15926 community (use type assignment only for a couple of hundred types in the TLO, not accurate typing in a hundred thousand types of middle ontologies). The same practice was used in the IBM Watson project: a minimal ontology for their Jeopardy!-winning application.

 -- then we adopted the DDD (domain-driven design, https://en.wikipedia.org/wiki/Domain-driven_design) approach for ontology engineering in software engineering. We required DDD “event storming” (https://en.wikipedia.org/wiki/Event_storming) to be augmented with our meta-meta-model (the TLO and the engineering+management middle ontology) from our courses, in the same way that it is used in organization modeling with the tables of universal modelers. Huge success! E.g., this approach was used by the chief software architect at Pandadoc, a unicorn startup company, and they have succeeded. 
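A minimal sketch of the three-level table discipline described above, in Python. All names and types here are invented for illustration (the real TLO has a couple of hundred types), mirroring annotations like "airplane::system flying::function".

```python
# Hypothetical top-level ontology (meta-meta-model): a few types, not thousands.
TLO_TYPES = {"system", "function", "role"}

# Meta-model of an application domain: each column name is typed by a TLO type.
columns = {"airplane": "system", "flying": "function"}

def check_table(columns, rows):
    """Validate a three-level table: column names (the meta-model) must be
    typed by the TLO (the meta-meta-model); each row models a real-world
    object and must conform to the meta-model's columns."""
    for name, tlo_type in columns.items():
        assert tlo_type in TLO_TYPES, f"{name} has unknown TLO type {tlo_type}"
    for row in rows:
        assert set(row) == set(columns), "row does not match the meta-model"
    return True

# Rows are models of concrete real-world objects (invented examples).
rows = [{"airplane": "Boeing 747 SN 21441", "flying": "cargo flight LH8400"}]
```

The same discipline works in coda.io, notion.so, or Excel; the code only makes the three levels (TLO type, column meta-model, object rows) explicit.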

 

 Therefore thank you for your work. It is helpful and is about ontology engineering. But this ontology engineering is not about OWL, formal, conceptual modeling (a level of pseudocode, not code with rigor and logical reasoners as provers), this is another type of ontology practice, and we prefer epistemology as a word for discussing all of this. Ontology (not very formal) is a result of the epistemology process, ontology engineering is an epistemology discipline, and only partly this belongs to ontology.

 

Best regards,

 Anatoly

 

 

Chris Partridge

May 7, 2023, 12:24:56 PM
to ontolo...@googlegroups.com
Hi Anatoly,

It is always good to hear from you.

Epistemology --- ummmm. I would agree that the initial process of formalisation has a historical context, so at least at that level it is epistemic. 
But from an architectural perspective, I think one can layer epistemology on top of ontology. With a limit: the essential indexicals. You are familiar with my papers on agentology? E.g. https://www.academia.edu/97809282/ and https://www.academia.edu/95095498/

WRT: "But this ontology engineering is not about OWL, formal, conceptual modeling (a level of pseudocode, not code with rigor and logical reasoners as provers), this is another type of ontology practice, and we prefer epistemology as a word for discussing all of this."
I'm not sure of the distinction here; it looks like you are suggesting 'ontological engineering' is not formal? Or rigorous? Is this right? Are you aware of the constructional papers? https://www.academia.edu/83919753/
Also, an interesting question what 'formal' means - see https://philpapers.org/rec/DUTFLI-2

Chris

Alex Shkotin

May 8, 2023, 4:44:00 AM
to ontolo...@googlegroups.com

Chris,


I can't help but notice that your quote from Pat Churchland contains the beginnings of a conceptual analysis of conceptual analysis ;-) Conceptual analysis is just one of the techniques, and it comes down to the analysis of the concepts that practitioners of a particular field of activity have. Very rarely are their concepts pulled out of their asses before being put into practice.

Usually these concepts are systematized up to theoretical knowledge.

Extracting an ontology from data is another term for constructing a theory of the subject area, even if it is the theory of the life of one particular enterprise.


Alex



Sun, May 7, 2023 at 12:03, Chris Partridge <partri...@gmail.com>:

Alex Shkotin

May 8, 2023, 5:21:43 AM
to ontolo...@googlegroups.com

Hi Anatoly!


As always, it is very interesting. The end of the text of the student who read BORO is a little confusing: 

eng:"In my path as a student, I climbed the next step: "I see a meta-meta-model in the projects of others, I make mistakes in my own." But let's move on."

rus:"В своём пути ученика я поднялся на следующую ступеньку: «Вижу мета-мета-модель в проектов других, делаю ошибки в своей». Но будем двигаться дальше."

What is good about this or that ontology, as it is understood, I hope, by our community, is that it can be presented. Is there a chance to get a link to any ontology created by your students that works in an information system?

You yourself know that a working example is better than a bunch of words, because it can be conceptually analyzed :-)


You know my point: an ontology, formal or informal, is a way to represent the theoretical knowledge we need for some application area; i.e., it's a theory :-)

And this is the source of schemas for the data of the subject area, which is discussed in the theory.


To use your terminology: the result of the "epistemological process" is a theory of this or that. We have been creating theories for 2-3 thousand years. With more and more math.


Alex



Sun, May 7, 2023 at 16:01, Anatoly Levenchuk <ai...@asmp.msk.su>:

Anatoly Levenchuk

May 8, 2023, 9:10:14 AM
to ontolo...@googlegroups.com

Hi Alex
1. «"I see a meta-meta-model in the projects of others, I make mistakes in my own." But let's move on.» -- it is a reference to the levels of mastering theory application: at the first stage, you are not aware of the existence of the theory or of the apparent errors in its application; at the second, you are aware of the theory and can see errors in its application by other people, but cannot see the same errors in your own activity; at the third and last stage, you can see the errors in your own thinking. The student is telling us he is ready to transition to the third stage.

 

 2. Our courses present our meta-meta-model as plain text with occasional type annotations, e.g., "airplane::system flying::function". We also have multiple (about a hundred) examples of table models in modeling exercises that you do with your work project data (not learning projects). Sorry, it is all in Russian with this table modeling, but you can get an idea from the previous version of our courses. We have an amateur English translation of it. Register here: https://aisystant.eem.institute/ and then go to «Systems Engineering (prerelease)» and take a look at «modeling. …» in the course content menu (we have translated only one part of the course, but it is in «almost English»). There are tables there for students to fill in; the columns of the tables are defined in the course's text.

 

 By the way, you can also find the course "Ontologics," which is intended for engineers and managers; the book by Chris Partridge is supplementary reading specifically for it. Sorry, but this course also has only an amateur translation into English (provided by non-native speakers). Moreover, it is a translation of a rather old version of the course. This course is about "not so formal ontology engineering," as I described in my previous letter. We have a better version of the course (with added modeling for capturing domain ontics that roughly correspond to micro-theories with no references to types from a general TLO) in Russian and are still working on it. The author of this "Ontologics" course (Prapion Medvedeva) is a reader of the ONTOLOG Forum, but she is only lurking and is still afraid to participate. And we have renamed this part of the course from "Ontologics" (ontology and logic) to simply «Modeling». 

 

 In Russian, it is all available now at https://aisystant.system-school.ru (also free registration for a textbook access). The overall prescribed sequence of courses is here: https://system-school.ru/

 

3. We have multiple examples of the success of this methodology in the business environment that our students presented at our yearly conference. But (a) at the conference they speak Russian, and (b) they have no «learning case projects»; they have only «actual business work projects» for modeling, so their models are not open to the public. E.g., you can see two examples of the results of such table modeling work in talks by our students here (both students are top managers of their companies): https://www.youtube.com/watch?v=u08rhUX661A&t=21517s and https://www.youtube.com/watch?v=u08rhUX661A&t=23189s, and the already mentioned DDD talk by the Pandadoc chief architect here: https://www.youtube.com/watch?v=u08rhUX661A&t=1968s. All talks are fresh, Apr 23, 2023, but all in Russian. In the first talk, you can hear that about 75% of the tables from our courses go directly into production organizational modeling without adaptation. But we teach our students ontology engineering to create the remaining 25%. We are preparing to repeat this ontology-engineering-based (meta-modeling-based) education with a top-level systems ontology for English-speaking students, but we are not quite ready yet; «work in progress». Stay tuned )))

Best regards,
Anatoly Levenchuk

 

 

Michael DeBellis

May 8, 2023, 11:09:53 AM
to ontolo...@googlegroups.com
very systematic methods of mapping from one version of logic to another, such as the OMG standard for DOL: 

Have you ever seen the Knowledge Interchange Format (KIF) or the Rule Interchange Format (RIF)? I have no experience with either, but I've read about them because I find the DARPA Knowledge Sharing Initiative so interesting. My understanding is that RIF is actually a W3C standard, but I have never seen it used on any project, R&D or especially deployed. Based on what you said, I think they are similar to the OMG's DOL.

In general I was always skeptical of KIF and RIF, because in my experience reuse is much harder than most people realize, and when you have deadlines it can be so much faster to just build something from scratch. I find it hard enough to do reuse even when the library is already in OWL and I'm building an OWL ontology. To have to also worry about some lowest-common-denominator knowledge representation (which I think is what languages like KIF and RIF force you to do) makes it even harder. This is also a good example of what I think your point was about syntactic vs. semantic reuse, although I would put it a bit differently. When I think about the messaging models (e.g., HL7) that achieved excellent reuse, in those cases you want a lowest-common-denominator interchange format, because you don't want systems that communicate via messaging to be as tightly coupled as common data that is modeled by a database or ontology. 

I think that is another reason that OOP reuse is much easier than ontology reuse. If there is a Python (or, in the past, CLOS or Java or C++) library to do X, I virtually always use it, but I can think of many examples where I started using some vocabulary and then realized that what little I was reusing I could easily do myself, and the extra cost of having the paradigm of the vocabulary developers imposed on me just wasn't worth it. I'm actually in the process right now of transforming an ontology to reuse Prov-O, and it's not trivial.  

A good (i.e., bad) example, and this may ruffle some feathers but I've seen it several times, is the domain of measurements. There are several vocabularies out there for doing measurements, and I've tried using two of them, one of them more than once, and each time gave up and just used a standard design pattern from OOP (adapted for OWL). This is another issue I have with a lot of vocabularies: not W3C ones like Prov-O, but others. Often, because people try to "model the world," they end up with an ontology that models every possible thing one could say about something like measurement. You end up with the ontology you want to reuse being twice as big as, or bigger than, the whole ontology you are trying to build. IMO, this is an issue with the ontology community in general. Too many people don't understand good software engineering practices and how to construct reusable libraries. 
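For what it's worth, the kind of small OOP measurement pattern described above (a quantity kind, a numeric value, and a unit) can be sketched in a few lines. This is illustrative only: the class name, unit table, and conversion rule are made up, and a production version would handle many more units and error cases.

```python
from dataclasses import dataclass

# Conversion factors to a base unit per quantity kind (invented sample).
UNIT_FACTORS = {
    ("length", "m"): 1.0,
    ("length", "cm"): 0.01,
    ("mass", "kg"): 1.0,
    ("mass", "g"): 0.001,
}

@dataclass(frozen=True)
class Measurement:
    """Minimal measurement pattern: quantity kind + value + unit."""
    quantity_kind: str
    value: float
    unit: str

    def to(self, unit):
        """Convert to another unit of the same quantity kind."""
        factor = (UNIT_FACTORS[(self.quantity_kind, self.unit)]
                  / UNIT_FACTORS[(self.quantity_kind, unit)])
        return Measurement(self.quantity_kind, self.value * factor, unit)
```

The trade-off is exactly the one at issue: a pattern this small covers what one project needs, where a world-modeling measurement vocabulary would dwarf the ontology importing it.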

But I agree with you that there is something fundamentally different about OOP and ontology reuse. In addition to semantic vs. syntactic, I think part of it is the modular nature of OOP vs. the open nature of ontologies. OOP really lends itself to reuse because it puts boundaries around what the reuser should care about, whereas with ontologies you need to understand most of the entire model to really reuse it. 

Of course there are other reasons besides speed to do reuse, i.e., eliminating data silos. 

Michael


Michael DeBellis

May 8, 2023, 11:18:53 AM
to ontolo...@googlegroups.com
I worked at Cycorp 1996-2003, developing many design techniques and
significant sections of the mid-level as well as some of the upper
ontology.  In later jobs, at DERI, NIST, and a couple of start-ups, i
found Cyc very reusable

I think that's the key question: how much of it was because you already knew all the design decisions that went into those models? I actually think this highlights that we really need some actual empirical studies of ontology reuse. If anyone knows of any, please let me know. In general this is something that computer science and software engineering are really bad at. There was a guy at MCC, I think named Bill Curtis, who did some excellent work on this, and Walt Scacchi at USC also did great work on this, but in general, for a "science," computer science does very little actual empirical research. I was thinking of writing a paper on that at some point, something like "The Reproducibility Crisis in Software Engineering," where the crisis isn't (as in the social sciences) that our experiments aren't reproducible from one researcher to another, but rather that for the most part we don't even HAVE results that could be tested for reproducibility. I think upper models would be a great case study: give different teams of computer science students the same problem, then give them various upper models or no upper model, and measure the differences in the time and quality of the models they produce. I find in ontologies especially that people make various claims that X or Y leads to "better" ontologies without much actual evidence. 

Michael


Michael DeBellis

May 8, 2023, 11:28:53 AM
to ontolo...@googlegroups.com
Unless you have specific requirements in mind, your chances of coming up with something useful are remote, because it will just be plucked out of the air rather than grounded in reality.

Exactly. I think too many vocabularies tend to be "plucked out of the air."

 ISO 15926 is a  top and mid level data model/ontology that is extensible to detailed ontological content using Reference Data that was developed by the Oil Industry... 

...As a result, periodically some have looked at it and said “This is too complicated, there must be a better/simpler way to do what I need.” And they have gone off to try to do that, only to find (eventually – often years later) that it was not that simple

Matthew, thanks, I'll check out ISO 15926. I mentioned above that I'm reworking an ontology right now to reuse Prov-O. I had the same experience: I looked at Prov-O briefly several months ago and thought "ugh, too complicated," but then, as I was getting into the weeds of a similar problem, I realized "a lot of what I'm doing seems like Prov-O," and when I looked at Prov-O again I realized I was essentially duplicating it almost class by class, with almost the exact same names, and it was foolish to do that. 

Michael


John F Sowa

May 8, 2023, 5:06:33 PM
to ontolo...@googlegroups.com
Michael,

KIF (Knowledge Interchange Format) is a version of logic.  It is not an ontology.  It was developed in the 1990s, and it was proposed as an ANSI standard.  It was merged with Conceptual Graphs and developed as an ISO standard, which evolved into Common Logic.  

Basic point:  There is nothing you can do with any SW notation (RDF, RDFS, SKOS, OWL...) that you cannot do in a more concise and efficient way with Common Logic.   In 2000, Tim B-L proposed a Semantic Web Logic Language (SWeLL), which was very close to KIF and CL.  Pat Hayes and Guha proposed a "simplified KIF", which became CL.  Unfortunately, Tim B-L wanted to be "democratic", and he allowed the OWL gang to stuff the ballot box by getting people who did not understand the issues to vote for the hodge-podge of kludges that became the SW mess of 2005.

Ontology is a separate issue.   I agree to a large extent with Matthew, Chris P., Doug F, Doug L., and the developers of DOL and Hets.  They are all very knowledgeable people.  I'm sure that we could sit down together and reach an agreement in principle on the first day, and an agreement in detail after a week.

Since the SW tools have been used for years, we can't ignore them.  But the DOL standard shows how to relate them, and there are already quite a few tools that are built to support DOL.   Developing a strategy for the future would take more time, but DOL should be at the center of any proposal.

Re OOP:   I agree that there were good tools and methodologies for supporting OOP.    But the major differences are in the methodology for specifying ontologies.  CL and DOL are sufficiently general that they can support  OOP methods and define systematic ways of relating ontologies defined by those methods to ontologies defined by other methods.

That would take some time and effort, but the generality of the DOL methods should aid in specifying the mapping methods. When those methods are defined, it should be possible to implement automated methods for doing most of the mappings. There may be some need for semi-automated methods that would require human experts to make some adjustments. But the semi-automated tools would alert the humans to the issues that need assistance.

John
 



Paul Tyson

May 8, 2023, 10:04:17 PM
to ontolo...@googlegroups.com
On 5/8/23 16:06, John F Sowa wrote:

> Basic point:  There is nothing you can do with any SW notation (RDF,
> RDFS, SKOS, OWL...) that you cannot do in a more concise and efficient
> way with Common Logic.

For many practical use cases this is beside the point. Speed and ease of
implementation, as well as robustness and adaptability, are more important.

Counterpoint: You can make breakthrough improvements quickly and cheaply
using the W3C SW standards and commodity, open-source tooling. This has
been true for at least 10 years, since RIF 2nd edition [1] and SPARQL
1.1 [2]. It is only getting better with development of RDF shapes
standards [3], [4] and continuing improvements in web platform standards
and technologies.

Regards,
--Paul

[1] https://www.w3.org/TR/2013/NOTE-rif-primer-20130205/
[2] https://www.w3.org/TR/2013/REC-sparql11-overview-20130321/
[3] https://www.w3.org/TR/shacl/
[4] http://shex.io/shex-primer/

John F Sowa

May 8, 2023, 10:53:36 PM
to ontolo...@googlegroups.com
Paul,

I agree.  When I need to get a quick & dirty result immediately, I hold my nose and take whatever garbage is on my plate.  But when I'm working on a project for the future, I clean house.  I throw out the old garbage and start with a new, more powerful foundation.

Right now, we are in a transitional period when a large amount of the old ways of doing business are going to die.  It's time to start fresh -- except when you need to put yet another quick and dirty patch on that old clunker.

And by the way, those slides I cited were from my talk at the 2020 European Semantic Web conference:  https://jfsowa.com/talks/eswc.pdf

Even though the Semantic Web is in its title, none of the talks mentioned updates to the old 2005 SW tools.  When the advanced projects have abandoned the old tools, you can apply IBM's synonym "functionally stabilized" for "obsolete."  That means they will fix glaring bugs, but there will be no new versions.

John
 



Alex Shkotin

unread,
May 9, 2023, 4:49:02 AM5/9/23
to ontolo...@googlegroups.com

Anatoly,


I'll definitely check out the reports. But in advance: you teach smart people how to become even smarter, including through various modeling techniques, whose results are recorded in various documents for the development and operation of systems of computers and people. That is certainly interesting to discuss.

But there is a narrower, albeit non-programming, task: the creation of formal (and, so far, semi-formal) ontologies that can be processed by universal algorithms, not just by clever people. That, I hope, is the narrow theme of our community.

Well, the broad one extends all the way to metaphysics, in which infinity certainly exists.

So let me invite Prapion Medvedeva to share her ideas :-)


Alex



Mon, May 8, 2023 at 16:10, Anatoly Levenchuk <ai...@asmp.msk.su>:

Paul Tyson

unread,
May 10, 2023, 8:37:15 PM5/10/23
to ontolo...@googlegroups.com

John,

On 5/8/23 21:53, John F Sowa wrote:
Paul,

I agree.  When I need to get a quick & dirty result immediately, I hold my nose and take whatever garbage is on my plate.  But when I'm working on a project for the future, I clean house.  I throw out the old garbage and start with a new, more powerful foundation.

Sure, if by "quick and dirty ... garbage" you mean an enterprise scale application, running for near on 10 years, providing visualization, validation, and navigation through product structures comprising 1.7 million parts, represented by 1.3 billion RDF triples pulled from engineering, planning, shop floor, and logistics databases. I say, not bad for a handful of worthless W3C SW standards. Let's have some more of that worthlessness.
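
The navigation Paul describes boils down to transitive traversal of partOf triples, which SPARQL 1.1 expresses directly with property paths (e.g. `ex:partOf+`). A toy sketch with made-up part names, not Paul's actual system:

```python
# Toy sketch of navigating a product structure stored as partOf pairs
# (illustrative data; a real deployment would use SPARQL 1.1 property
# paths over an indexed triple store).

part_of = {
    ("ex:bolt", "ex:bracket"),
    ("ex:bracket", "ex:wing"),
    ("ex:wing", "ex:aircraft"),
}

def assemblies_containing(part):
    """Transitive closure of partOf starting from one part."""
    found, frontier = set(), {part}
    while frontier:
        frontier = {whole for (p, whole) in part_of if p in frontier} - found
        found |= frontier
    return found

print(sorted(assemblies_containing("ex:bolt")))
# ['ex:aircraft', 'ex:bracket', 'ex:wing']
```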

Regards,
--Paul






--
All contributions to this forum are covered by an open-source license.
For information about the wiki, the license, and how to subscribe or
unsubscribe to the forum, see http://ontologforum.org/info/
---
You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ontolog-foru...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/ontolog-forum/ada518bbfdb748b8a6e819c1cd4c3c35%40bestweb.net.

Michael DeBellis

unread,
May 10, 2023, 9:14:39 PM5/10/23
to ontolo...@googlegroups.com
I'm not sure of the distinction here; it looks like you are suggesting 'ontological engineering' is not formal? Or rigorous? Is this right? Are you aware of the constructional papers? https://www.academia.edu/83919753/
Also, an interesting question is what 'formal' means; see https://philpapers.org/rec/DUTFLI-2

I've been busy, so I haven't followed this thread for a while, but I wanted to clarify what I mean in response to this question. What I'm denying is that there is some distinction in kind between building an ontology and building a database, an object model, an assembler program, or a machine learning model. Obviously they all require different skills, and the details of how one develops each artifact differ, but fundamentally, for all artifacts, what should drive the choices a software engineer makes are user requirements, not notions about what a formal model is supposed to look like. Actually, that's a bit too strong, because of course if your users wanted you to mix a kindOf and a partOf hierarchy, you would educate them. But that's no different for ontologies than for object models; in my experience it almost never happens, and contrary to what some people in the ontology community say, users aren't ignorant fools who don't understand the complexity of building an ontology. What users (quite rightly) don't appreciate is being lectured by ontology designers about how they don't understand how to model their own domain. I recently read a paper about OntoClean, an ontology methodology, in which the authors quoted someone as saying that the benefit of OntoClean was that they "didn't have to spend so much time arguing with doctors about how to model their domain". To me that's the height of folly. You shouldn't be arguing with your users; you should be listening to them. That's one of the most essential principles of Agile; it's why all the best software development organizations now use some version of Agile, and why anyone who has seen Agile and Waterfall used on similar projects, as I have, will tell you that Agile results in systems that are developed faster and cheaper and are much more likely to delight their users. And that is what matters.

One other point: I find it ironic that people advocate top-down ontology design methods where you are supposed to get the ontology perfectly right (and complete, as if that were ever possible) before you start importing data or writing code. One of the things I like about using ontologies on real problems is that, unlike with traditional databases, it is far easier to change the design of an ontology than a data model, so ontologies naturally lend themselves to Agile development. Software engineers (and Buddhists) learned this a long time ago: you don't search for the perfect design that will never have to change; you give yourself tools and techniques to embrace change, because change is inevitable.
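
That flexibility claim can be made concrete with a toy triple set (illustrative names, not a real triple store): in a triple model, a new requirement means asserting more facts, with no migration step.

```python
# Sketch (illustrative names): in a triple model, extending the "schema"
# is just asserting more triples; no ALTER TABLE, no migration script.

triples = {
    ("ex:patient1", "rdf:type", "ex:Patient"),
    ("ex:patient1", "ex:name", "Alice"),
}

# Requirement change: record allergies. In a relational model this would
# mean a schema change plus a data migration; here we just add facts.
triples.add(("ex:patient1", "ex:allergy", "ex:penicillin"))

allergies = [o for s, p, o in triples if p == "ex:allergy"]
print(allergies)  # ['ex:penicillin']
```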

Michael



John F Sowa

unread,
May 11, 2023, 12:08:20 AM5/11/23
to ontolo...@googlegroups.com
Paul,

The world of computer systems is polluted with a huge amount of garbage that has lasted 40 or 50 years or more and has wasted untold amounts of dollars, programming effort, engineering effort, and greenhouse gas.  I don't have time to itemize it all in this note, so I'll just mention a few cases.

As we all know, Intel CPUs dominate a huge percentage of computer systems today.  Their first CPU chip was the Intel 4004, with a four-bit data path, designed to support the circuitry for a calculator.  That data path was doubled for the 8008, which was followed by the 8080, 8086, 8088...  By 1980, Intel knew that that technology was ugly, kludgy, inefficient, obsolescent, and hard to program.  They were designing a far better machine for the future.

By then, Motorola had a much superior design called the 6800 and later the 68000.  That design was far cleaner and a far, far better design for the future.  IBM was planning what became the IBM PC.  They looked at the Intel design and the Motorola design and knew that the Motorola version was a far better platform.  (Apple, by the way, adopted the Motorola chip.)

Intel panicked and slashed the prices for the 8088 and its supporting chips.  Intel knew that the 8088 hardware was the end of the line, since they were already working on a far better design.  But they did not want IBM to choose the superior design from Motorola.  The guys at IBM Boca Raton were engineers, and they knew nothing about software.  They chose Intel.

When the Boca Raton guys chose the Intel design, the guys at IBM Research groaned in disgust.  But some of them had worked on excellent designs for System/360, and they scaled one down for the Intel 8088.  They got a prototype working in a month and could have had a complete system working in a few more months.

But the idiots at Boca Raton went to Seattle to talk to Bill Gates, who had been designing some software for them.  Gates lied to the IBM engineers and told them that he had an operating system for the Intel 8080, which would run on the 8088.  IBM signed a contract with Gates.  But then Gates turned around and bought a system called QDOS (literally, Quick and Dirty Operating System) from a one-man shop called Seattle Computer Products, whose owner had designed it by himself and had sold copies to Boeing.  But QDOS had some serious limitations and was already obsolete.  The guy who sold it to Gates had already designed a superior version for Boeing, which did not have the limitations of QDOS (AKA MS-DOS and PC DOS).

There is much more to this story, and many others that I could list.  Just one more case: Ted Codd introduced relational databases as a sound theoretical platform, and Ingres implemented an excellent system based on Codd's theory.  But some lesser lights at IBM designed SQL, which was far worse than Ingres.  There is far more to this story too, but the short summary is that this was another case where the garbage won.  Cost: multiple billions of dollars' worth of inferior software, and a worse design that was later replaced by other DB designs.  The Ingres design was far superior to SQL and to the designs for the Semantic Web.  By now, I would bet that the cost of choosing SQL instead of Ingres could be a trillion dollars or more.

By the way, I had always called SQL the worst version of logic ever designed.  But that was before I saw OWL.  By comparison, I have almost become fond of SQL.  As a logic, it isn't as bad as OWL.

Fundamental principle: garbage is never a good idea.  When your customer is stuck with garbage, you may have to provide a Q & D fix for them.  But it's always better in the long term to go with a cleaner and more powerful path to the future.  The path that looks OK as a quick fix will always cost more, create more bugs and flaws, and lose out in the long run, and those long runs can be 20 or 30 years or more.

Just note that we have been stuck with the SW garbage for almost 20 years.  And as I said, all of the speakers at the European SW conference in 2020 were talking about future directions that did ***not*** include the SW designs of 2005.  Following is my keynote speech from their 2020 conference:  https://jfsowa.com/talks/eswc.pdf

John
 



Kingsley Idehen

unread,
May 11, 2023, 8:39:13 AM5/11/23
to ontolo...@googlegroups.com

Hi John,

On 5/11/23 12:07 AM, John F Sowa wrote:
Just note that we have been stuck with the SW garbage for almost 20 years.  And as I  said, all of the speakers at the European SW conference in 2020 were talking about future directions that did ***not*** include the SW designs of 2005.  Following is my keynote speech for their 2020 conference:  https://jfsowa.com/talks/eswc.pdf

From my perspective, the Semantic Web, as you criticize it, never really took off; it died around 2007.

What did gain momentum, however, were the Linked Data Principles outlined by TimBL. These principles provided guidelines for structured representation, leveraging the fundamental essence of the World Wide Web. They emphasized the use of hyperlinks for unambiguous naming of entities, entity types, and entity relationship types. This "deceptively simple" approach provides the foundation for machine-computable relationship-type semantics, which also works symbiotically with advancements in language processing facilitated by LLM-based solutions such as ChatGPT, providing conversational interfaces.
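
One concrete payoff of those principles can be sketched in a few lines (the `example.org` URIs are made up; `foaf:knows` and `foaf:name` are real FOAF property URIs): because independently published graphs name shared things with the same global URIs, merging them is plain set union, with no coordination or schema-alignment step.

```python
# Two independently published graphs merge by simple set union, because
# shared entities and relationship types are named by the same global,
# dereferenceable URIs (illustrative example.org identifiers).

g1 = {("http://example.org/id/alice",
       "http://xmlns.com/foaf/0.1/knows",
       "http://example.org/id/bob")}

g2 = {("http://example.org/id/bob",
       "http://xmlns.com/foaf/0.1/name",
       "Bob")}

merged = g1 | g2          # no migration, no key remapping
print(len(merged))  # 2
```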

Today, we have language processing and structured data representation that are both informed by machine-computable semantics. In my opinion, this is a significant advancement.

I will be sharing additional demonstrations related to these matters.
-- 
Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software
Home Page: http://www.openlinksw.com
Community Support: https://community.openlinksw.com

Prapion Medvedeva

unread,
May 11, 2023, 1:59:25 PM5/11/23
to ontolo...@googlegroups.com
Hello, Alex and all! 
My background is in formal philosophy, so the remarks about metaphysics in particular made me smirk.
I think that ontologies will in the future be processed by roughly the same computing architecture as human brains, so there is no need to try to formalize them too much now. Rather, we need to move towards a variety of computing architectures without losing any properties that we do not want to lose, and strive not to narrow further the already narrow spectrum of uses where ontologies remain.
At the same time, typing and model layering (to avoid contradictions and so on) make any type of computer (both living and hybrid :)) faster to train, as far as I can tell from the literature and my work with students.
Pion Medvedeva

On May 9, 2023, at 11:48, Alex Shkotin <alex.s...@gmail.com> wrote:

Alex Shkotin

unread,
May 12, 2023, 5:04:56 AM5/12/23
to ontolog-forum, pi...@transhuman.ru
Hi Pion!

I am happy we have a formal philosopher here.  It was my dream that somebody from [1] would join us. Are you from another community?
And look, I have changed the subject line, but in the Google group it will still appear under the nice topic "Opinion: The Best Reuse comes from Use".
So please use new threads for new ideas.
Who is your master philosopher (Eric's term), or do you by any chance have your own philosophical doctrine? Just for example, John F. Sowa's master is C.S. Peirce, G.W.F. Hegel is mine, and Eric BEAUSSART has mentioned one more modern one.
Just a little bit about our community of practice [2]: we have been creating formal ontologies for many years, and still are.

Welcome,

Alex


Thu, May 11, 2023 at 20:59, Prapion Medvedeva <pi...@transhuman.ru>:

Anatoly Levenchuk

unread,
May 12, 2023, 12:08:13 PM5/12/23
to ontolo...@googlegroups.com

Hi Chris,

For me, epistemology has ontology description (knowledge) as its main artifact. It certainly has a direct relation to formalization (you need some means of getting from a neural/connectionist representation to some kind of symbolic representation of knowledge). Yes, formalization is about representing objects in physical reality by their mathematical twins in mental reality, and there can be chains of neural/connectionist and symbolic representations (e.g. as in constructional ontology). Your link to Catarina Dutilh Novaes's book concerns the mathematical nature of formalisms. Yes, we teach this in our courses.

Sure, we are aware of constructive ontology and especially of constructive mereology (many thanks to Matthew West, who pointed us to the works of Kit Fine). We have already included this approach in our courses, but with an addition: we have an agent-as-a-computer to perform inference on a model. Therefore we have not a constructor-as-operation but a constructor-as-physical-device (a Turing-complete computer, robot, etc.) that performs inference on constructive-ontology description data as an algorithm for constructing entities. So we have two physical systems in constructive ontology: the system of interest and the constructor system. The constructor ontology describes the operations of the constructor system that build the constructive description of the system of interest. Therefore we can go from "constructive mathematical world mereology" to what becomes somewhat of a physical mereology of the constructor system. (Yes, a mathematician is a physical system, and Turing complete as a computer!) We use the extensive writings of David Deutsch (father of the quantum computer) on these topics.

 

When we speak not of the rigid 0/1 formalism of Boolean logic but of a semi-formal "pseudocode level", it simply means referring to another type of mathematical object for the description, one more suited to quantum-like inference (not quantum as in quantum physics, but quantum-like as in the mathematics suited to quantum physics) and to neural-type inference as in deep neural networks (which is very close to quantum-like inference; see the works of Vanchurin).

 

If I want to go to more formality than the current set-theoretic formalism, I go for univalent foundations of mathematics and the computer systems for it (Agda, Maude, etc.). Thanks to Alex Shkotin, who pointed me to ontology-related works at this new level of "classical formality" (Andrei Rodin, "Venus Homotopically", https://philomatica.org/wp-content/uploads/2013/01/vh3.pdf -- this is from 2016. You may be interested in his talk about the ontology of the formal foundations of mathematics, the program pursued by Vladimir Voevodsky to provide a foundation for applied mathematics as a modeling vehicle for the physical world -- https://philomatica.org/2022/12/univalent-foundations-and-applied-mathematics/).

We (Pion Medvedeva and I as representatives of this "we") therefore declare that there should be a full spectrum of formality levels (every point on it mathematical in nature, and each needing a computer to provide inference at its level of formality), both more formal than set-theoretic formality (e.g. a constructive theory such as univalent foundations of mathematics, which constructs sets) and less formal, down to connectionist neural representations. All of these are needed for real-life work with ontology descriptions.


All the mentioned literature (including references to your work, as [44]) can be found in the references to a more elaborate description of an ontology that can be used to describe the world in terms of "systems". (I struggle here with English; thanks to Ken Baclawski, who pointed me to the multiple nuances of English usage around all these terms -- system, systems approach, systems thinking, systems engineering, and other terms with "systems" -- and I am now afraid to use "systems ontology" for "an ontological description of the world in terms of interrelated systems". But I still do not know what to call it.) Here: https://ailev.livejournal.com/1657040.html

Thank you for the references to your work on constructive BORO and to the work of Catarina Novaes. I will use these in the next version of our courses.

Especially interesting is the work on the per re and per se distinction. We have three accounts of this "agentology" in our systems thinking course:
-- Active inference (you have at least two generative models: a model of Self and a model of World). This is needed when you decide what to change with the next action (all or any of: the model of Self, the model of World, Self, World), and you are never confident where the boundary between Self and World lies (so you have additional activity to explore it). By the way, the active inference community has an ontology engineering effort and provides a form of ontology description for their objects and relations: https://coda.io/@active-inference-institute/active-inference-ontology-website/actinf-ontology-definitons-3
-- Internal and external "perspective", e.g. "soma is the internal perspective on a body description; body is the external perspective". We need this when teaching body control to agents (including humans! e.g. body control for dancing or boxing): https://en.wikipedia.org/wiki/Somatics
-- Theory of Mind, which is at the center of the discussion about LLMs now. E.g., most recently, https://arxiv.org/abs/2304.11490. In tests, humans account for another human's perspective 87% of the time, but GPT-4, with appropriate prompting, shows 100% )))

Best regards,
Anatoly

 

Anatoly Levenchuk

unread,
May 12, 2023, 1:33:49 PM5/12/23
to ontolo...@googlegroups.com

Alex,
by the way, a trained neural network is an algorithm. Moreover, in computer science the line between software and hardware is blurred: you can imagine quantum computer hardware and quantum computer software, and neural computer hardware and software (there are theorems that these are all Turing-complete computers). E.g., on neural networks as universal approximators of any function (including the function of a mathematician as a computing device, including a Turing machine), see section 2.3.1 of https://arxiv.org/abs/2301.00942 (Deep Learning and Computational Physics (Lecture Notes)), with mathematical results of recent years.
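
A minimal concrete instance of the universal-approximation point (weights hand-picked for illustration, not learned): a one-hidden-layer network with step activations computes XOR, a function no single linear threshold unit can represent.

```python
# A fixed one-hidden-layer network computing XOR with step activations:
# a toy instance of the idea that a single hidden layer suffices for
# non-linear functions. Weights are hand-chosen, not trained.

def step(x):
    return 1 if x > 0 else 0

def xor_net(a, b):
    h1 = step(a + b - 0.5)      # fires if a OR b
    h2 = step(a + b - 1.5)      # fires if a AND b
    return step(h1 - h2 - 0.5)  # OR and not AND == XOR

print([xor_net(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```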

Therefore my notion of ontology is also about mathematics, but it is simply another mathematics. When you speak about computer interpretability, I have in my head a picture of CYC with multiple "accelerators", or Toolformer, which is roughly the same (https://arxiv.org/abs/2302.04761). E.g., Toolformer can be one of the accelerators for CYC and vice versa. You can add a human as a CYC accelerator and a Toolformer tool, and CYC and Toolformer as "inference tools" for a human. And I will discuss epistemology and ontology for all such systems.

Prapion Medvedeva has already given her opinion.

Best regards,
Anatoly

 

Alex Shkotin

unread,
May 14, 2023, 4:03:35 AM5/14/23
to ontolog-forum, Anatoly Levenchuk
Hi Anatoly,

Sorry, I did not catch your answer earlier because of the depth of this nice thread.
We may discuss any technology and "accelerators", and I hope we agree that a formal ontology can be represented as a special artifact, separately from the system it is used in. Well, it may be secret, like, for example, the newest version of Azamat's super-ontology, but it exists.
You gave me examples of your ontologies, and I am happy.

Alex


Fri, May 12, 2023 at 20:33, Anatoly Levenchuk <ai...@asmp.msk.su>:

Prapion Medvedeva

unread,
May 14, 2023, 1:42:38 PM5/14/23
to ontolo...@googlegroups.com, Анатолий Левенчук
Hi Alex!

Thank you for your hospitality! Yes, I was in this group when I studied.
I respect Peirce very much; in many ways I build my views on his. I was also influenced by philosophers of language and analytic philosophers: Frege, Wittgenstein, Quine, Russell. I also heavily immersed myself in fields such as modal logic and game theory (van Benthem). But I would name David Deutsch my champion :)

Pion

On May 14, 2023, at 11:03, Alex Shkotin <alex.s...@gmail.com> wrote:

Alex Shkotin

unread,
May 15, 2023, 4:04:27 AM5/15/23
to ontolo...@googlegroups.com, Анатолий Левенчук
Hi Pion,

Super! Nice to meet a member of the amazing FormPhil community! I'm sure you can help us create some great formal ontologies if you turn your attention to them. There are many subtle problems in this area of technology.
By the way, James Overton (http://james.overton.ca/) from the OBO foundry initiative is also a philosopher:-)

Alex


Sun, May 14, 2023 at 20:42, Prapion Medvedeva <pi...@transhuman.ru>: