FYI: Gartner is always interesting


Alex Shkotin

Sep 5, 2023, 4:32:31 AM
to ontolog-forum

John F Sowa

Sep 5, 2023, 10:40:30 AM
to ontolo...@googlegroups.com
Alex,

On the position in the hype cycle, I agree with Gartner (and their AI advisers).   Generative AI is at the "Peak of inflated expectations" and it's ready to plunge into the "Trough of disillusionment".  But many AI people are already looking at the long-range plateau, where they consider it just one more tool in the AI toolkit -- important, but not the only pony in the race.  See the attached AIhype.gif.

And by the way, investors (people with Big Money) are beginning to agree with that point.  NVIDIA makes the chips that are used in huge quantities to process the raw data (huge volumes of texts) that are scanned to derive the LLMs.  Their stock price rose rapidly when investors thought that more companies were going to be processing large volumes of texts.  But recently, NVIDIA's stock price crashed.  That's a clue.

I also noticed the Gartner timeline for AGI: coming close to its peak, but the yellow triangle is a warning that it will take more than 10 years to reach the plateau.  I believe that the time is more than 50 years.  In any case, long-term investors ignore anything that will take more than 10 years.

I noticed that neuro-symbolic is very low in Gartner's chart.   It is far from a peak, and it has a yellow triangle.  That is the kiss of death for investors.   They consider it a loser.

But I believe that looking at evidence from neuroscience is important for AI research.  However, investors don't want to pour money into research because it's not profitable.  If you want to get funding for research, it's important to find or invent good buzz words.

John


AIhype.gif

Alex Shkotin

Sep 5, 2023, 12:28:46 PM
to ontolo...@googlegroups.com
John, 

I somehow missed that they already have their own foundational models.

Alex

Tue, Sep 5, 2023 at 17:40, John F Sowa <so...@bestweb.net>:
--
All contributions to this forum are covered by an open-source license.
For information about the wiki, the license, and how to subscribe or
unsubscribe to the forum, see http://ontologforum.org/info/
---
You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ontolog-foru...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/ontolog-forum/67dea8ccf8814a40ac8514c7f66cfd46%40bestweb.net.

John F Sowa

Sep 5, 2023, 3:09:28 PM
to ontolo...@googlegroups.com
Alex,

I don't understand the following sentence:  "I somehow missed that they already have their own foundational models."

What are you missing?   And how is that related to my note below?

Who is "they"?   The Gartner group?  The AI people that the Gartner authors have interviewed?  Somebody who uses LLMs?  Everybody who uses LLMs?

I don't believe that any two people who fall into any of those groups have the same "foundational model".

And I don't know what you mean by a "foundational model".  Is it an ontology of some kind?  Is it a theory of some kind?  Is it a structure of some kind that could be described or specified by some set of axioms?  Is it something that was generated by some computer program?  How? or Why?

John
 



Alex Shkotin

Sep 6, 2023, 3:34:08 AM
to ontolo...@googlegroups.com
John,

I am talking about this part of Gartner's picture that you sent as an attachment:
[image.png]
I had not known that the people in AI technology have their own ideas for the term "foundation models" [1] (just an example).

Alex



John F Sowa

Sep 6, 2023, 10:25:37 AM
to ontolo...@googlegroups.com, Peirce List, CG
Alex.

I read the web page you cited.  What Google calls "foundation models" I would call "mappings based on specialized ontologies".  They include three kinds: (1) text to image, (2) text to code, and (3) speech to text.

I believe they are making a serious mistake by using English text in their foundation.  The article I'm writing, which puts Peirce's diagrammatic reasoning at the center, is more general, flexible, and powerful.  It also avoids a huge number of complex issues that differ from one natural language to another -- even worse, the words differ from one kind of application to another, even in the same language.

Thanks for citing that article.  I am now finishing the final Section 7 of my article, and this method by Google gives me a clear target to shoot at.  I'm actually glad to see that Google is making that mistake -- because it makes it easier to compete with them.

That diagram by Gartner puts foundation models at the top of the hype cycle.  That means they are about to plunge into the trough of disillusionment.  I would enjoy giving them a little push.

John
 



Gary Berg-Cross

Sep 6, 2023, 10:58:56 AM
to ontolog-forum
John,

You said and asked:
>And I don't know what you mean by a "foundational model".  Is it an ontology of some kind?  Is it a theory of some kind?  Is it a structure of some kind that could be described or specified by some set of axioms?  Is it something that was generated by some computer program?  How? or Why?

I take their use of the term "foundational models" to simply mean:
'you can start here with what we have built (say their speech model) and develop some
speech application that you are interested in.'


Gary Berg-Cross 
Potomac, MD



Alex Shkotin

Sep 6, 2023, 12:30:41 PM
to ontolo...@googlegroups.com
John, 

This Google page seems to refer to a situation where something new comes up and the big corporations say, yes, we have it. I gave the example of Google because they are pragmatists.
If we start to discuss this term, then perhaps we should start with the following:
"The Stanford Institute for Human-Centered Artificial Intelligence's (HAI) Center for Research on Foundation Models (CRFM) coined the term "foundation model" in August 2021, tentatively referring to "any model that is trained on broad data (generally using self-supervision at scale) that can be adapted (e.g., fine-tuned) to a wide range of downstream tasks".[14]" [1]

Alex


Wed, Sep 6, 2023 at 17:25, John F Sowa <so...@bestweb.net>:

John F Sowa

Sep 6, 2023, 11:02:36 PM
to ontolo...@googlegroups.com
Alex,

I changed the title to emphasize a point that I keep telling people again and again:   "Keep a good dictionary on your desk or shelf, and check any word you intend to adopt for any purpose."  

Both the Stanford gang and the Google gang violated that principle and created confusion -- for themselves and everybody else.  The term 'foundation model' has two poorly chosen words, both of which are misleading.

The worst choice is the word 'model'.  A far better choice would have been 'pattern'.  For all GPT systems, the LLMs should be called "Large Language Patterns".  That would be far more accurate and far less confusing than calling them models.

As for the word 'foundational', the Merriam-Webster dictionary defines that word as "serving as a basis supporting existence or determining essential structure or function". 

The last two words of that definition are exactly what we need, instead of the confusing phrase "foundation models".  The Stanford gang and the Google gang should have chosen the word 'pattern' as the noun and the adjectives 'functional' or 'structural' as appropriate.

A functional pattern would be a pattern for doing something, and a structural pattern would specify the kinds of things that are being processed or generated by some functional pattern. 

For the Google application, they had four kinds of structures: text, speech, images, and code.  Then they had three kinds of functions: (1) mapping text to images, (2) mapping text to code, and (3) mapping speech to text.
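[Editor's note: a minimal sketch of that naming proposal, purely illustrative -- the names Structure, FUNCTIONAL_PATTERNS, and signature are my own inventions, not anyone's actual API.  Structural patterns name the kinds of things processed, and each functional pattern is typed by its input and output structural kinds.]

```python
from enum import Enum

class Structure(Enum):
    """Structural patterns: the kinds of things processed or generated."""
    TEXT = "text"
    SPEECH = "speech"
    IMAGE = "image"
    CODE = "code"

# Functional patterns: each one maps one structural kind to another.
FUNCTIONAL_PATTERNS = {
    "text_to_image":  (Structure.TEXT,   Structure.IMAGE),
    "text_to_code":   (Structure.TEXT,   Structure.CODE),
    "speech_to_text": (Structure.SPEECH, Structure.TEXT),
}

def signature(name):
    """Return the input -> output structural kinds of a functional pattern."""
    src, dst = FUNCTIONAL_PATTERNS[name]
    return f"{src.value} -> {dst.value}"

print(signature("speech_to_text"))  # → speech -> text
```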

The phrase 'foundation model' sounds like a technical term that means something.  But it has no meaning of any kind.  It just creates confusion.

John
 



Alex Shkotin

Sep 7, 2023, 4:01:21 AM
to ontolo...@googlegroups.com

John,


In practice, the inventor chooses the name "as God puts on his soul" (rus:"как Бог на душу положит") for his invention. And we have to immerse ourselves in the terminology of a particular science or technology to humbly master their "bird language". It is almost impossible to convince them to replace the term, because it is already part of the life of this community.

Why is the theory of directed graphs with composition of arrows called category theory?

Why did the DBMS guys call their company Oracle?

And so on.

Therefore, I came, I learned, I use it.


Alex



Thu, Sep 7, 2023 at 06:02, John F Sowa <so...@bestweb.net>:

Dan Brickley

Sep 7, 2023, 4:33:13 AM
to ontolo...@googlegroups.com
Yes. Meanwhile, “pattern” is also in widespread use for characterizing the hard-to-define worldly regularities these systems are so good at matching.

In general, ML-AI terminology is a mess. Eg Labelled/unlabelled data, unsupervised/supervised learning, giving way (thankfully) to the otherwise wordy “self-supervised”. And the word “inference” is used in ways that might make some ontolog-forum readers splutter their coffee.

How long until Merriam-Webster updates to include these?  Desk dictionaries lag technical community usage by years, sometimes decades.

Dan

Alex Shkotin

Sep 7, 2023, 5:05:09 AM
to ontolo...@googlegroups.com
Yes. This is why we need special glossaries. For example, the Google query "genomics terminology" returns a list of glossaries to be harmonized (following Gary Berg-Cross) during formalization :-)
This is just the beginning, etc.
And by the way, we have the GENO formal ontology http://obofoundry.org/ontology/geno.html
Is there a chance to have one worldwide dictionary for every science and technology?
AI is first of all summa technologiae, each technology with its own glossary.

Alex

Thu, Sep 7, 2023 at 11:33, Dan Brickley <dan...@danbri.org>:

Gary Berg-Cross

Sep 7, 2023, 9:22:35 AM
to ontolo...@googlegroups.com
This discussion of developing properly defined terms for related concepts reminds me of some of the system-level ideas involved in John's Knowledge Soup.  Within a fluid, soupy conceptual space of related concepts, some of us may pick an area and create by definition a little island, a solid area of knowledge.  This may overlap with other conceptual islands, so we get conceptual territory arguments.
Good knowledge engineering needs to consider these defined areas as knowledge systems and look around more widely to rationalize a conceptual structure that holds up under scrutiny.  One may leverage results from prior efforts with best practices, but often we don't have the vision or time or temperament to do this.

Gary Berg-Cross 
Potomac, MD

John F Sowa

Sep 7, 2023, 3:46:37 PM
to ontolo...@googlegroups.com, ontolog...@googlegroups.com, CG
Alex, Gary, Dan B.

Before writing any detailed comments, I want to emphasize three points: (1) Major software systems survive in one form or another for 40 years or more.  Few, if any, precise definitions from the early days remain unchanged for more than a tiny fraction of that time.  As an example, IBM developed the first airline reservation system for American Airlines in the 1960s to run on the IBM 7094.  An updated version of that became IBM's airline reservation system running on System/360.  The ontology and terminology of that system became the industry-wide basis for all reservations for hotels, cars, and any kind of services that travelers might need.  The ontology and choice of word definitions that IBM adopted in collaboration with American Airlines have become the universal worldwide standard.  The formal definitions change with every update, but the choice of words and their translations from English to other languages do not change.

(2) The researchers and programmers working on the details of any system may understand the formal details, but the top-level managers, the great majority of the users, and the investors who have money will never see or understand the details of those definitions.  They will interpret the terminology according to the way those words are used in everyday life.   If the formal definitions diverge too far from common usage, the result will be confusion and repeated errors.

(3) Any attempt to decree an official, precise definition for all terms will guarantee that whatever system uses those terms exactly as defined will become obsolete within a few years.  Please note that every product -- from a refrigerator to a programming language -- gets a new manual with new definitions of key terms for every update.

IBM used the term 'functionally stabilized' for any hardware or software system whose terminology would never change.  That term was a synonym for "obsolete".  IBM would continue to sell those obsolete systems to customers who could not afford to update their systems to accommodate the new products.  Microsoft, for example, only recently stopped producing and delivering updates for Windows 95 (which was introduced in 1995).

Alex> Is there a chance to have one world wide dictionary for every science and technology? 

You can define it, if you like, but it is guaranteed to become obsolete with the first new discovery in science or new development in engineering.  And even if you define it, 99.999% of the people in the world would never use more than a tiny percentage of the words as defined.

Alex> AI is first of all summa technologiae, each with its own glossary. 

There is no universal glossary of AI.  New terms are constantly being defined by people who never read or understood similar terms that had been defined and published before.  AI terminology changes very rapidly because many AI people never read anything that is more than five years old. 

Alex> Why is the theory of directed graphs with composition of arrows called category theory? 

For historical reasons.  Mathematicians, unlike AI people, cite publications of any date and make updates compatible with the original definitions.

Alex> Why did the DBMS guys call their company Oracle? 

Because it answered questions, like an oracle.  There are many horror stories about compatibility in DB systems, but they developed in different ways than AI for different reasons -- mostly bad:  preserving incompatibility.  Preserving incompatibility was also one of the worst reasons for Windows 95.  But that is another story.

Dan> In general, ML-AI terminology is a mess. Eg Labelled/unlabelled data, unsupervised/supervised learning, giving way (thankfully) to the otherwise wordy “self-supervised”. And the word “inference” is used in ways that might make some ontolog-forum readers splutter their coffee. 

That's a good answer to Alex's questions.

Gary> One may leverage results from prior efforts with  best practices but often we don't have the vision or time or temperament to do this. 

That's a good explanation for the points by Alex and Dan.   

In summary, most people who need to know something about AI technology (users and funding agencies, for example) will not know or remember the details of a formal definition.  Even if they read the definition, it will be easier to understand and remember if the words are used in ways that are consistent with common usage -- as codified in common dictionaries.

An example of a bad choice is the term 'foundation model'.  Both words are commonly used, but that combination does not give any hint of what the term means.  But the terms 'functional pattern' and 'structural pattern' use common words that give an approximate idea of the meaning.  That makes them easier to learn, easier to remember, and easier to use by everybody -- programmers, managers, funding agencies, and intelligent outsiders who want to know what is happening.

John   

Nadin, Mihai

Sep 7, 2023, 3:52:06 PM
to ontolo...@googlegroups.com

Dear and respected colleagues,

Just an illustration (an excellent article) of what John Sowa describes:

Transformers Revolutionized AI. What Will Replace Them? (forbes.com)

Mihai Nadin


Alex Shkotin

Sep 8, 2023, 7:15:50 AM
to ontolo...@googlegroups.com, ontolog...@googlegroups.com, CG

John et al., 


We have come close to my favorite topic: framework for theory (theoretical knowledge), which I propose to consider using the example of genomics.

Give me the weekend to think carefully about the answer.


Alex



Thu, Sep 7, 2023 at 22:46, John F Sowa <so...@bestweb.net>:

John F Sowa

Sep 8, 2023, 11:22:09 AM
to ontolog...@googlegroups.com, ontolo...@googlegroups.com, CG
Alex, Eric, and Jack,

Jack Park cited an article that discusses Doug Lenat's career and ends with a link to a Ted Talk that he presented ten years ago.  That article and Lenat's talk clarify the issues that Alex and Eric mentioned.   I suggest that people listen to it before continuing with my comments below:  https://www.sciencetimes.com/articles/45824/20230906/douglas-lenat-dead-ai-researcher-spent-40-years-building-computer.htm

The last paragraph in that article, just before the link to the Ted Talk, is "According to cognitive scientist Gary Marcus, he undertook an endeavor no one else dared to do. Although he failed, he had at least shown a portion of the route for those exploring the same path."

Instead of saying that Lenat failed, it's better to say that he succeeded in one important goal: Developing a huge ontology and reasoning system that covers a large part of human knowledge.  That is, in fact, the explicit goal of the ISO standard for a top-level ontology that is sufficient to support a very large number of practical applications.

Lenat failed for the same reason why I have been making negative comments about that ISO standard: it's impossible to have a consistent formal ontology of everything that people do or say or think.  If we all agree that Lenat failed, then it's time to declare that the ISO standard is a dead end.

If anybody thinks that the ISO standard is still worthwhile, then they have to face an uphill battle:  Explain what Lenat might have done in the past 40 years that could have enabled the Cyc project to succeed.  Since I doubt that anybody can do that, I believe that we should switch our attention to two more achievable goals:  (1) Emphasize the DOL standard for supporting interoperability among multiple independently developed ontologies.  (2) Adopt ideas from the recent work on generative AI to develop natural language interfaces that can use and relate the ontologies supported by point #1.

Alex>  We have come close to my favorite topic: framework for theory (theoretical knowledge), which I propose to consider using the example of genomics. 

Good luck.   But I doubt that any framework based on any methodology can do what Lenat failed to do in 40 years with a large number of very good programmers, logicians, linguists, ontologists, and specialists in various subfields.  After the first 20 years, they had devoted one person-millennium of effort to the project (an average of 50 people for 20 years).

Eric> Mathematicians, unlike "Industrials" (for computers ... and so on) cannot admit "obsolescence" ! 

Yes.  A mathematical theorem, once it is proved, is true forever.  Further progress can build on, simplify, unify, or generalize the statement of the proof.  Some results may be forgotten or ignored, but nothing that has been proved can become false.

Goal:  Future developments must use the accuracy and reliability of mathematics and mathematical logic to evaluate and build on the often flaky results of the GPT-like systems.

John

Alex Shkotin

Sep 9, 2023, 5:33:53 AM
to ontolo...@googlegroups.com, ontolog...@googlegroups.com, CG

John,


I naively thought of writing something useful over the weekend. I made a mistake. Writing message No. 1 about the framework of the theory is postponed to weekdays.

I will briefly answer your letter for now.

JFS: "If we all agree that Lenat failed"

I disagree. I need to see what will happen to the project next.

JFS: "(1) Emphasize the DOL standard for supporting interoperability among multiple independently developed ontologies."

The essentials of the various ontologies will be collected in the framework of the theory. There we can formalize them in any language, not necessarily one of those selected by DOL. It could be any of the languages collected at hets.eu, or in general any language at all.

JFS: "But I doubt that any framework based on any methodology can do what Lenat failed to do in 40 years with a large number of very good programmers, logicians, linguists, ontologists, and specialists in various subfields."

The task is posed completely differently, i.e. the other way around: specialists in the various subfields, attracting as usual good programmers, logicians, linguists, and ontologists, will create the frameworks for their theories.


Alex



Fri, Sep 8, 2023 at 18:22, John F Sowa <so...@bestweb.net>:

Alex Shkotin

Sep 9, 2023, 5:44:01 AM
to ontolo...@googlegroups.com, ontolog...@googlegroups.com, CG

John,


Very briefly about formal definitions. A formal definition should be compared with an engineering drawing.

Everyone uses various devices, but only a few people can read engineering drawings, and only those few need to be able to.

The construction of formal definitions is important, for example, because they can be transferred to robots.


Alex



Thu, Sep 7, 2023 at 22:46, John F Sowa <so...@bestweb.net>:

John F Sowa

Sep 9, 2023, 6:39:23 PM
to ontolo...@googlegroups.com, ontolog...@googlegroups.com, CG
Alex,

I very strongly agree with your comment below:  The diagrams are fundamental, and the words are secondary.  Whenever there is any dispute -- start with the diagrams.  Formalisms, such as mathematical notations, always have a more direct mapping to diagrams than to words.  Euclidean geometry is the best example.  But any book that uses algebraic notations can always map the algebra more clearly and precisely to a diagram than to any words in any natural language.

Re engineering diagrams: Anybody who can't read the engineering diagram can't understand a precise explanation written in their native language.  Any simple explanation that they can understand is guaranteed to be an oversimplification.  But if the engineering diagram is carefully explained to them, then they can and do understand the subject.

I know that point very well -- because I've done it.  I also know that people who claim they understand a  simple explanation, but cannot understand the diagram don't know what they're talking about.  If you ask them some simple questions about how the thing works, their answers are hopelessly confused.  I know that because I've met such people.

If you doubt that point, try that exercise with people who claim that they understand the simple explanation.

The mapping to diagrams is especially important for robots.  Every action by a robot has a direct mapping to and from some kind of diagram.  But the explanation in a natural language is more complex, more unreadable, and more prone to misreading and misunderstandings.

John
 



Alex Shkotin

Sep 10, 2023, 6:02:33 AM
to ontolo...@googlegroups.com, ontolog...@googlegroups.com, CG

John,


I should remember that you call some structures diagrams. So when you have a science of diagrams, I just need to plug in the science of structures.

And I completely agree that the derivations on the structures are structural.

For example, in [1] examples of English sentence structures are given, and an example of logical inference on these structures.

"Whenever you have schemas `((every X) is Y)` and `(Z is X)` applied, i.e. X, Y, Z have a specific value, applying the first schema to the second gives `(Z is Y) `:

((every X) is Y), (Z is X) |= (Z is Y) --algorithm: {substitute Z instead of (every X)}"
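[Editor's note: a minimal Python sketch of that substitution schema, purely illustrative -- the function name apply_schema and the tuple encoding of sentences are my own assumptions, not taken from [1].]

```python
def apply_schema(universal, instance):
    """Apply the substitution schema:
        ((every X) is Y), (Z is X) |= (Z is Y)
    Sentences are nested tuples; 'every' and 'is' are fixed tokens,
    while X, Y, Z are whatever terms the tuples contain."""
    (quant, x), copula_u, y = universal   # (('every', X), 'is', Y)
    z, copula_i, x2 = instance            # (Z, 'is', X)
    if quant == "every" and copula_u == copula_i == "is" and x == x2:
        return (z, "is", y)               # substitute Z for (every X)
    return None                           # schema does not apply

# Example: ((every man) is mortal), (Socrates is man) |= (Socrates is mortal)
conclusion = apply_schema((("every", "man"), "is", "mortal"),
                          ("Socrates", "is", "man"))
print(conclusion)  # → ('Socrates', 'is', 'mortal')
```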


Below I have given an example of substituting “structure” for “diagram” in your last letter. 


Alex


[1] https://www.researchgate.net/publication/366216531_English_is_a_HOL_language_message_1X 


The structures are fundamental, and the words are secondary.  Whenever there is any dispute -- start with the structures.  Formalisms, such as mathematical notations, always have a more direct mapping to structures than to words.  Euclidean geometry is the best example.  But any book that uses algebraic notations can always map the algebra more clearly and precisely to a structure than to any words in any natural language.


Re engineering structures:  Anybody who can't read the engineering structure, can't understand a precise explanation written in their native language.  Any simple explanation that they can understand is guaranteed  to be an oversimplification.  But if the engineering structure is carefully explained to them then they can and do understand the subject.


I know that point very well -- because I've done it.  I also know that people who claim they understand a  simple explanation, but cannot understand the structure don't know what they're talking about.  If you ask them some simple questions about how the thing works, their answers are hopelessly confused.  I know that because I've met such people.


If you doubt that point, try that exercise with people who claim that they understand the simple explanation.


The mapping to structures is especially important for robots.  Every action by a robot has a direct mapping to and from some kind of structure.



Sun, Sep 10, 2023 at 01:39, John F Sowa <so...@bestweb.net>: