
Kinds of Ontologies


Sergio Navega

Apr 28, 1998

Kinds of Ontologies (somewhat long post)

Ontologies are one of the hallmarks of the symbolicist approach to AI.
Even with lots of (valid) criticism of purely symbolic approaches, we
have a lot to learn from these attempts. I still believe we can
successfully use symbolic methods (at least for "partial useful
intelligence"), if we are careful to integrate them with suitable
competing techniques. This post is part of this effort.

I plan to divide this material into two parts. In this first post, I
will list the main aspects of some of the existing ontologies (there are
many more that I will not consider here). My intention is to present
examples of what has been thought so far, to contrast with my
forthcoming proposal.

In the second part (under the name "Proposal of New Ontology") I will
give some details of my (humble) proposition for a novel approach. My
primary intention in writing these posts is not to show a finished work,
nor a detailed account of the rationale of each ontology, but just to
raise discussion, because we all learn a lot from it. So, prepare your
fingers for the talking.

// WordNet ---------------------------------------
Here is the upper level of WordNet:

[thing, entity]
[living thing, organism]
[plant, flora]
[animal, fauna]
[person, human being]
[non-living thing, object]
[natural object]
[artifact]
[substance]
[food]

// CYC ---------------------------------------
Certainly the most comprehensive ontology produced so far. There are
links joining items to more than one category; here we will list just one
of them (for example, IntangibleStuff links to both Stuff and
IntangibleObject simultaneously). A small code sketch of this kind of
multiple linking follows the listing.

[thing]
[individual object]
[event]
[process]
[something existing]
[intelligence]
[tangible object]
[tangible stuff]
[composite tangible and intangible object]
[stuff]
[intangible stuff]
[intangible]
[intangible object]
[internal machine thing]
[attribute value]
[represented thing]
[collection]
[relationship]
[slot]
[attribute]
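
To make the structure concrete, here is a minimal sketch (in C++, purely
illustrative; the type and field names are my own, not CYC's) of a
taxonomy whose nodes may carry more than one hypernym link, with a
transitive is-a test over those links:

#include <iostream>
#include <string>
#include <vector>

struct Concept {
    std::string name;
    std::vector<const Concept*> hypernyms; // is-a parents; may be several
};

// True if c is (transitively) a kind of ancestor.
bool isa(const Concept* c, const Concept* ancestor) {
    if (c == ancestor) return true;
    for (const Concept* h : c->hypernyms)
        if (isa(h, ancestor)) return true;
    return false;
}

int main() {
    Concept thing{"thing", {}};
    Concept stuff{"stuff", {&thing}};
    Concept intangible{"intangible", {&thing}};
    // Multiple links, as in the IntangibleStuff example above:
    Concept intangibleStuff{"intangible stuff", {&stuff, &intangible}};
    std::cout << isa(&intangibleStuff, &stuff)      << "\n"; // prints 1
    std::cout << isa(&intangibleStuff, &intangible) << "\n"; // prints 1
}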

// UMLS ---------------------------------------
Unified Medical Language System is one example of a domain-specific
ontology.

[entity]
[physical object]
[organism]
[substance]
[anatomical structure]
[manufactured object]
[conceptual entity]
[language]
[occupation or discipline]
[organization]
[group attribute]
[group]
[intellectual product]
[organism attribute]
[finding]
[idea or concept]

// TOVE ---------------------------------------
Toronto Virtual Enterprise Project is another domain-specific ontology
(enterprise modeling).

[organization-entity]
[organization-individual]
[employee]
[contractor]
[organization-group]
[board of directors]
[department]
[division]

// Mikrokosmos ---------------------------------------
Part of a project that supports knowledge acquisition and machine
translation.

[all]
[object]
[physical-object]
[material]
[separable-entity]
[place]
[mental-object]
[representational]
[abstract-object]
[social-object]
[organization]
[geopolitical-entity]
[social-role]
[event]
[physical-event]
[perceptual-event]
[mental-event]
[perceptual-event]
[cognitive-event]
[emotional-event]
[communicative-event]
[social-event]
[communicative-event]
[property]
[attribute]
[scalar-attribute]
[literal-attribute]
[relation]
[event-relation]
[object-relation]
[event-object relation]

// Penman ---------------------------------------
Designed to support natural language understanding, part of Penman Upper
Model.

[ob-thing]
[object]
[decomposable object]
[set]
[ordered object]
[non-decomposable object]
[space-point]
[substance]
[time-point]
[process]
[material-p]
[directed action]
[nondirected action]
[mental-p]
[M-active]
[M-inactive]
[relational-p]
[one place-r]
[two place-r]
[verbal-p]
[quality]
[material-world]
[dynamic]
[stative]
[logical quality]
[modal quality]

// Generalized Upper Model ---------------------------------------
Work of Bateman et. al., an outgrowth of the Penman Upper Model,
considered an "interface ontology".

[um-thing]
[sequence]
[expanding-configuration]
[projecting-configuration]
[configuration]
[doing & happening]
[meteorological]
[raining]
[snowing]
[saying & sensing]
[internal-processing]
[mental-active]
[being & having]
[existence]
[relating]
[generalized-possession]
[generalized-possession inverse]
[part of]
[element of]
[owned-by]
[intensive]
[ascription]
[identity]
[symbolization]
[generalized-positioning]
[element]
[simple-quality]
[material-world quality]
[logical quality]
[modal quality]
[simple-thing]
[decomposable object]
[nondecomposable object]
[conscious being]
[person]
[male]
[female]
[non-conscious being]

As you can see, most of these ontologies can be mapped into one another.
After all, they are trying to represent the same world. My proposal, too,
can be mapped into several existing ontologies. But the essential
difference between ontologies is the emphasis given to the upper-level
concepts and their ramifications for the lower ones. This can make a
great difference at inference time.

In a couple of days I shall post my suggestion. Don't expect too much,
though. All the works listed above are the result of years (sometimes
decades) of thinking done by several top-quality researchers. I had only
one head thinking over a few days :-)

Sergio Navega.

Sunil Mishra

Apr 28, 1998

In article <354633...@ibm.net> Sergio Navega <sna...@ibm.net> writes:

> Kinds of Ontologies (somewhat long post)
>
> Ontologies are one of the hallmarks of the symbolicist approach to AI.
> Even with lots of (valid) criticism of purely symbolic approaches, we
> have a lot to learn from these attempts. I still believe we can
> successfully use symbolic methods (at least for "partial useful
> intelligence"), if we are careful to integrate them with suitable
> competing techniques. This post is part of this effort.
>
> I plan to divide this material into two parts. In this first post, I
> will list the main aspects of some of the existing ontologies (there are
> many more that I will not consider here). My intention is to present
> examples of what has been thought so far, to contrast with my
> forthcoming proposal.

One of the major problems with writing The One True Ontology is that
supporting a variety of reasoning using the same ontology is more or less
an unsolved problem. Take two tasks that are an integral part of being a
programmer or an engineer - design and debugging. As far as classical AI
implementations go (I don't know of any non-classical implementations of
such systems), the ontologies used in routine design are fairly different
from those used for debugging existing designs. There are a whole range of
differences, the most obvious being that for debugging you need to reason
about different failure modes of the components of the design, while for
design you need to know how to put those components together. Knowing how
to build a car does not necessarily tell you what the rattling sound is
every time you start it up.

In short, if you come up with an ontology to efficiently do debugging,
applying that ontology to design directly will result in a relatively
inefficient system. The same holds in the other direction. I don't think
anyone has figured out yet how you would go about focusing on relevant
parts of a knowledge base general enough to handle both design and
debugging. This ability to focus on relevant details of a problem is one of
the major differences that set experts apart from novices.
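To make that concrete, here is a rough sketch (in C++; all the names are
invented for illustration, not taken from any real system) of a single
knowledge base that holds both composition links for design and failure
links for debugging, with a task-specific "focus" that projects out only
the relevant part:

#include <iostream>
#include <string>
#include <vector>

enum class LinkKind { ConnectsTo, FailureMode };

struct Link {
    LinkKind kind;
    std::string from, to; // e.g. "starter-motor" -> "rattling-noise"
};

// Project the subset of the knowledge base relevant to the task at hand.
std::vector<Link> focus(const std::vector<Link>& kb, LinkKind task) {
    std::vector<Link> view;
    for (const Link& l : kb)
        if (l.kind == task) view.push_back(l);
    return view;
}

int main() {
    std::vector<Link> kb = {
        {LinkKind::ConnectsTo,  "starter-motor", "flywheel"},
        {LinkKind::FailureMode, "starter-motor", "rattling-noise"},
    };
    for (const Link& l : focus(kb, LinkKind::FailureMode))
        std::cout << l.from << " can fail as: " << l.to << "\n";
}

The open problem, of course, is choosing the projection automatically;
the filter above has to be told which kind of task it is serving.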

In other words, as admirable as your effort to devise another ontology is,
it will not get very far unless you can figure out what this focus
mechanism will look like, or how it might develop in any given ontology. A
fixed ontology will only lead to an inflexible system.

Good luck!

Sunil

Sergio Navega

Apr 28, 1998

Sunil Mishra wrote:
>
> In article <354633...@ibm.net> Sergio Navega <sna...@ibm.net> writes:
>
> Kinds of Ontologies (somewhat long post)
>
> Ontologies are one of the hallmarks of the symbolicist approach to AI.
> Even with lots of (valid) criticism of purely symbolic approaches, we
> have a lot to learn from these attempts. I still believe we can
> successfully use symbolic methods (at least for "partial useful
> intelligence"), if we are careful to integrate them with suitable
> competing techniques. This post is part of this effort.
>
> I plan to divide this material into two parts. In this first post, I
> will list the main aspects of some of the existing ontologies (there are
> many more that I will not consider here). My intention is to present
> examples of what has been thought so far, to contrast with my
> forthcoming proposal.
>
> One of the major problems with writing The One True Ontology is that
> supporting a variety of reasoning using the same ontology is more or less
> an unsolved problem. Take two tasks that are an integral part of being a
> programmer or an engineer - design and debugging. As far as classical AI
> implementations go (I don't know of any non-classical implementations of
> such systems), the ontologies used in routine design are fairly different
> from those used for debugging existing designs. There are a whole range of
> differences, the most obvious being that for debugging you need to reason
> about different failure modes of the components of the design, while for
> design you need to know how to put those components together. Knowing how
> to build a car does not necessarily tell you what the rattling sound is
> every time you start it up.
>
> In short, if you come up with an ontology to efficiently do debugging,
> applying that ontology to design directly will result in a relatively
> inefficient system. The same holds in the other direction. I don't think
> anyone has figured out yet how you would go about focusing on relevant
> parts of a knowledge base general enough to handle both design and
> debugging. This ability to focus on relevant details of a problem is one of
> the major differences that set experts apart from novices.
>
> In other words, as admirable as your effort to devise another ontology is,
> it will not get very far unless you can figure out what this focus
> mechanism will look like, or how it might develop in any given ontology. A
> fixed ontology will only lead to an inflexible system.
>

Dear Sunil,
Thanks for your message. I agree with your ponderings. You brought new
viewpoints, and that's exactly what my original intent was.

There's a point which makes me wonder whether this "universal" ontology
exists or not. If it exists, and given that we (humans) can exercise
both activities you mentioned (debugging and design), then our "internal"
ontology should closely reflect the way we (humans) perceive the world.
My proposal will try to reflect this as much as possible.

If it does not exist (which can be interpreted as a failure of the
"representational" wish of classical AI), then the only other way would be
to let the ontology (or something like it) grow by itself, from what the
agent perceives of the world. My proposal is certainly closer to this
than the others. Soon I should be disclosing my suggestion.

Regards,
Sergio Navega.

Sergio Navega

Apr 28, 1998

Jorn Barger pointed me to his contribution, the Fractal Thicket
Indexing Theory, accessible through:

http://www.mcs.net/~jorn/html/ai/thicketfaq.html

Here's a sample of his proposition:

element
motive
hunger
safety
sex
esteem
family
self-expression
thing
food
tool
container
vehicle
clothing
weapon
bodypart
waste
place
inside
home
livingroom
kitchen (etc)
school
office
outside
yard
street (etc)
person
gender
age
species
role?
modality
emotion
belief
etc

Thanks, Jorn.

Sunil Mishra

Apr 29, 1998

In article <35464C...@ibm.net> Sergio Navega <sna...@ibm.net> writes:

> Dear Sunil,
> Thanks for your message. I agree with your ponderings. You brought new
> viewpoints, and that's exactly what my original intent was.
>
> There's a point which makes me wonder whether this "universal" ontology
> exists or not. If it exists, and given that we (humans) can exercise
> both activities you mentioned (debugging and design), then our "internal"
> ontology should closely reflect the way we (humans) perceive the world.
> My proposal will try to reflect this as much as possible.
>
> If it does not exist (which can be interpreted as a failure of the
> "representational" wish of classical AI), then the only other way would be
> to let the ontology (or something like it) grow by itself, from what the
> agent perceives of the world. My proposal is certainly closer to this
> than the others. Soon I should be disclosing my suggestion.

I'm not saying it's impossible or it doesn't exist. We do manage to deal
with a variety of situations quite well. But that in part is learned
behavior.

At some point in time you must have had this experience: you are trying to
help someone, and tell them how to do something. But even though they pick
up this task, they can't handle something just a little different. Think of
trying to teach someone 40+ how to use a computer if they have never used
one before. Being able to make these cross-task connections is *difficult*,
and an ontology that does not explain both the difficulty and the
learnability will probably be inadequate.

I don't think this can be done with an ontology alone. You need a process
model as well that works with the ontology. Representation is only half the
story.

Sunil

Sergio Navega

Apr 29, 1998

Sunil Mishra wrote:
>
> In article <35464C...@ibm.net> Sergio Navega <sna...@ibm.net> writes:
>
> Dear Sunil,
> Thanks for your message. I agree with your ponderings. You brought new
> viewpoints, and that's exactly what my original intent was.
>
> There's a point which makes me wonder whether this "universal" ontology
> exists or not. If it exists, and given that we (humans) can exercise
> both activities you mentioned (debugging and design), then our "internal"
> ontology should closely reflect the way we (humans) perceive the world.
> My proposal will try to reflect this as much as possible.
>
> If it does not exist (which can be interpreted as a failure of the
> "representational" wish of classical AI), then the only other way would be
> to let the ontology (or something like it) grow by itself, from what the
> agent perceives of the world. My proposal is certainly closer to this
> than the others. Soon I should be disclosing my suggestion.
>
> I'm not saying it's impossible or it doesn't exist. We do manage to deal
> with a variety of situations quite well. But that in part is learned
> behavior.
>

I agree, learning is really the main issue here.

> At some point in time you must have had this experience: you are trying to
> help someone, and tell them how to do something. But even though they pick
> up this task, they can't handle something just a little different. Think of
> trying to teach someone 40+ how to use a computer if they have never used
> one before. Being able to make these cross-task connections is *difficult*,
> and an ontology that does not explain both the difficulty and the
> learnability will probably be inadequate.
>

You are absolutely right. This is a good example. As a matter of fact, this is
my most recent concern. The "ontology" of the teacher may help him with
his natural language generation process, during his explanations to
his student. The student, in turn, will use the received utterances
to "map" the concepts received onto his own internal ontology. This process
is done with reasonably good efficiency. However, it's clear that the
student is not able to perform as well as the teacher, even after receiving
the same amount of "knowledge". And that's because a well-designed ontology
is not enough. One has to develop something else, by oneself, to achieve this
level of excellence. This is the main argument that Hubert Dreyfus raised in his
criticism of classical AI. I believe that if we understand how this process of
additional learning works, we will be able to complement ontologies and increase
the level of competence of AI systems. And I believe that this additional "thing"
is related to patterns of occurrences.

Regards,
Sergio Navega.

Ewan

Apr 29, 1998

>
>There's a point which makes me wonder whether this "universal" ontology
>exists or not. If it exists, and given that we (humans) can exercise
>both activities you mentioned (debugging and design), then our "internal"
>ontology should closely reflect the way we (humans) perceive the world.


I hope you don't take my post the wrong way here, and I apologise in advance
if it makes no sense; my mind went for a walk last week and hasn't come back
yet (I'm only a student, please don't shoot me!). But to my mind humans
generally don't perform these two tasks well at the same time - we seem much
better at designing, then later coming back and debugging the software, than
debugging as we go.

I know this is a very specific example, but we all have had times where we
perform one task very well, but just can't possibly seem to do a different
one, for no apparent reason. I don't know about anyone else, but I often have
times where I'm just in the 'wrong frame of mind' to perform a task
(programming or debugging being the usual task at hand :).

To me these troubles we have with our frame of mind suggest that perhaps we
don't have a 'universal' ontology, but rather a huge collection of individual
ones, which we sometimes access well and other times badly, to which we add
every day of our lives, and from which we have to use the correct one for the
correct task, or we end up crashing our car, or missing full stops from
letters. When we make these errors, we're still using _an_ ontology, but
perhaps not the correct one, or not in the correct way.

Apologies again for any nonsense I've produced (hey, I'm living proof of my
theory :)

Ewan Leith
ew...@automata.nildram.co.uk

Sergio Navega

Apr 29, 1998

Ewan wrote:

> Sergio Navega wrote:
> >
> >There's a point which makes me wonder whether this "universal" ontology
> >exists or not. If it exists, and given that we (humans) can exercise
> >both activities you mentioned (debugging and design), then our "internal"
> >ontology should closely reflect the way we (humans) perceive the world.
>
> I hope you don't take my post the wrong way here, and I apologise in advance
> if it makes no sense; my mind went for a walk last week and hasn't come back
> yet (I'm only a student, please don't shoot me!). But to my mind humans
> generally don't perform these two tasks well at the same time - we seem much
> better at designing, then later coming back and debugging the software, than
> debugging as we go.
>

No need to apologize, Mr. Leith. I too am fond of rambling a bit, and you seem
to have raised a useful point of view.

> I know this is a very specific example, but we all have had times where we
> perform one task very well, but just can't possibly seem to do a different
> one, for no apparent reason. I don't know about anyone else, but I often have
> times where I'm just in the 'wrong frame of mind' to perform a task
> (programming or debugging being the usual task at hand :).
>
> To me these troubles we have with our frame of mind suggest that perhaps we
> don't have a 'universal' ontology, but rather a huge collection of individual
> ones, which we sometimes access well and other times badly, to which we add
> every day of our lives, and from which we have to use the correct one for the
> correct task, or we end up crashing our car, or missing full stops from
> letters. When we make these errors, we're still using _an_ ontology, but
> perhaps not the correct one, or not in the correct way.
>
> Apologies again for any nonsense I've produced (hey, I'm living proof of my
> theory :)
>

In fact, this is similar to Sunil's considerations. We rarely have more than
one talent simultaneously. This can be an indication of our incapacity, or it
can be an indication of "specialized" ontologies that don't map well to
other domains.

But this can also be seen as an exciting prospect for intelligent agents,
if we manage to build them WITH these simultaneous capabilities. Better than
building one Artificial Intelligence that's equivalent to humans is building
one that is superior to them.

Regards,
Sergio Navega.

Ken Ewell

Apr 30, 1998

Sergio Navega wrote in message <354633...@ibm.net>...


>Kinds of Ontologies (somewhat long post)
>
>Ontologies are one of the hallmarks of the symbolicist approach to AI.
>Even with lots of (valid) criticism of purely symbolic approaches, we
>have a lot to learn from these attempts. I still believe we can
>successfully use symbolic methods (at least for "partial useful
>intelligence"), if we are careful to integrate them with suitable
>competing techniques. This post is part of this effort.
>

That is pretty contentious on the face of it, as human intelligence
and the way we measure it pretty much rely on representative
symbols and numbers. Criticism usually arises at points of
indeterminate information; not much more sustains it. This is the
case when we engage in talk of ontology or knowledge
representation. That, of course, is scientific talk that means we
don't really know what we are talking about.

>I plan to divide this material into two parts. In this first post, I
>will list the main aspects of some of the existing ontologies (there are
>many more that I will not consider here). My intention is to present
>examples of what has been thought so far, to contrast with my
>forthcoming proposal.
>

And this is pretty pretentious, as these slight taxonomies that you
provide are hardly "the main aspects" of these existing models.

>In the second part (under the name "Proposal of New Ontology") I will
>give some details of my (humble) proposition for a novel approach. My
>primary intention in writing these posts is not to show a finished work,
>nor a detailed account of the rationale of each ontology, but just to
>raise discussion, because we all learn a lot from it. So, prepare your
>fingers for the talking.
>

I suggest that what you are calling an ontology is merely another
classification scheme. Another way of putting it would be to say that
a taxonomy of concepts is not all there is to an (AI) ontology.

To be sure that we are talking about the same thing: an ontology, as
it is used in the AI literature, means "a specification of a
conceptualization", whereas in philosophy an ontology is a study of being
(in the world). The famous phrase "a cow is a cow because it is a cow"
comes to mind.

I can tell you this with some assurance of correctness because I have
nearly twenty years of practical research experience in this field. Any
taxonomy, like those you listed, is a part, but it is not the entire
specification or the entire conceptualization in any of these ontological
models you present, or in others with which I am familiar. It was either
incomplete or untruthful for you to claim otherwise.

If you want to contend whether "ontologies" are of much use at all in
representing knowledge and being aware of the essence of a thing, I
am with you. But just because you can so easily count out these
artificial, incomplete and somewhat unnatural so-called ontologies, you
cannot count out a natural universal ontology so easily.

You say there is no universal ontology (no universal (essential) way of
being in the world) --no universal set of taxonomic circumstances. I say
that you and I would not understand one another were it not for a
universal ontology on which human languages depend and cling and
upon which our thoughts might rest in memory.

I know whereof I speak because I have published a system that
not only provides the ontological model but fills it with the terminology
of all known subject domains. Not just in English, but also in French
and German too. Theoretically in all languages.

I can tell you also what we have in mind: ancient languages such
as Arabic, Hebrew, Armenian, Sanskrit, Hindi and others all had
a clear concept, for instance, for "right things." I say clear because,
in those days, and in the days of the Greek philosophers who gained
their knowledge from the Phoenicians, it was altogether a less
complex and sophisticated affair. May I ask that it be enough to say
that there were few synonyms and even fewer renowned examples
to take the place of the actual experience of "right things."

The Greek philosophers had this concept in mind when they debated
the ideals of beauty, honor and truth.

So, because it was ordinal, the super(concept) of "right things" secured
a place in the roots of human language and has survived until this day.

Thus, this concept of "right things" is a universal concept to all humans;
that is, all people have a concept of right things, as well as wrong things
and things in general. I am obviously making a long story very short, so
you will please forgive my rush.

Even though we have lost sight of them or forgotten them, or even if we
cannot recognize them when we see them, this has no bearing on the fact.
Just because we express the ideas in different languages does not
make them any less universal. Just because we express an idea using
different words does not make it any less apparent as a naturally
occurring concept in all cultures.

All the ancient languages have a root for right things:
if you knew Arabic or Hebrew, this would become quite clear to you.
It can be represented with the symbols hqq. We call this ancient
root word a universal superconcept. And this concept organizes a
number of other natural language concepts that we signify using
ordinary modern language terms, namely: truth, veracity, fact,
correctness, reality, realism, truthful, honesty, verify, certitude,
certify, rights, duties, and so on.
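
As a minimal sketch (in C++; the sample terms and structure are merely
illustrative, not our actual Readware data), one superconcept organizing
modern terms across languages might be encoded like this:

#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
    // The universal superconcept hqq ("right things") mapped to some of
    // the ordinary modern terms it organizes, per language.
    std::map<std::string, std::vector<std::string>> hqq = {
        {"English", {"truth", "veracity", "fact", "correctness",
                     "honesty", "certify", "rights", "duties"}},
        {"French",  {"vérité", "fait", "exactitude", "honnêteté",
                     "droits"}},
    };
    for (const auto& lang : hqq) {
        std::cout << lang.first << ":";
        for (const std::string& term : lang.second)
            std::cout << " " << term;
        std::cout << "\n";
    }
}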

These kinds of concepts and ideas are universal. They are
universally recognized by all human beings and every culture.
Every language, spoken or written, has one or more terms
supposed and intended to denote and signify "right things."

So, my point is, a taxonomy is an organizing mechanism but it is
not the complete organizing principle. A taxonomy alone does not
account for semantic processes and their internal taxonomic
symmetry, consistency, or truth, or the truth of the matters they
purportedly organize.

Thus these so-called ontological models are merely classification
apparatus. They are not the measures and they do not measure
real processes, or rather, the logico-symbolic combinations and
logical operations on representations of real processes. In reality
they do not even resemble our ordinary classification processes.

Besides, this is not what is most important. What is most important
is how to grasp the subject, object and concepts being represented;
how to identify and isolate interesting topics and focus on the
impeding issues; and how to condense and summarize that
information so that the significant knowledge might be plainly
visualized and clearly discerned.

Anyhow: while everyone seems to want to debate the modeling of
the binary representations and logic and the computational veracity
of the conceptual vehicles, driving processes and approaches to the
intersections of the highway of digital awareness, we invite you for
a test drive of our functional software intelligence, which will assess any
source document and determine the subject matter, discern the topics,
and locate and isolate the issues that might be raised in the text. That
is a sentence for the linguists among us.

You can find the system at http://ww.readware.com

We are just now organizing our intelligence functions into programmable
object class libraries. There will be four class libraries called:

RWConceptBase, RWCompileSpace, RWSearchSpace, RWQuery

Programmers will be able to implement the objects in order to use this
intelligence (the capacity to acquire and apply information) in their
business intelligence applications.

1) RWConceptBase *cb = new RWConceptBase(SystemPath, ViewerYesOrNo);

2) RWCompileSpace *cs = new RWCompileSpace(INITFILE, IniFile,
       ShadowRoot, UserPath, IndexType, cb);

   OR: RWCompileSpace *cs = new RWCompileSpace(FOLDERSTOREAD, 0,
       ShadowRoot, UserPath, IndexType, cb);

3) cs->UpdateSpace(UPDATEALL);
   OR: cs->UpdateSpace(REBUILDINDEX);
   OR: cs->UpdateSpace(REBUILDALL);

   delete cs; // done compiling

4) RWSearchSpace *ss = new RWSearchSpace(INITFILE, IniFile,
       ShadowRoot, UserPath, IndexType, DoLoadIndex, cb);

   OR: RWSearchSpace *ss = new RWSearchSpace(FOLDERSTOREAD, 0,
       ShadowRoot, UserPath, IndexType, DoLoadIndex, cb);

   OR (single-area search spaces):
   RWSearchSpace *ss1 = new RWSearchSpace(SearchPath1, PathName1,
       ShadowRoot, UserPath, IndexType, DoLoadIndex, cb);

   RWSearchSpace *ss2 = new RWSearchSpace(SearchPath2, PathName2,
       ShadowRoot, UserPath, IndexType, DoLoadIndex, cb);

5) ss->LoadSearchArea(SearchAreaNumber);
   OR: ss->LoadSearchArea(searchpath);
   (Single-area search spaces are loaded automatically.)

6) RWQuery *q = new RWQuery(cb);
   q->SpecifyQuery(QueryString, Topics, Issues, Subjects,
                   Strategy, Language); // select concepts/synonyms/relatives
   q->Attach(ss1);                      // connect query to search area
   q->SetBatchSize(20);                 // 20 hit documents at a time
   NumberHitDocuments = q->GetFirstBatch();
   while (NumberHitDocuments) {
       hits = q->GetHits();             // get hit info structures
       // display hit summary and hits
       NumberHitDocuments = q->GetNextBatch();
   }

7) delete q; delete cb;

Ken Ewell/MITi

Chaumont Devin

May 1, 1998

On Thu, 30 Apr 1998 22:53:53 -0400, "Ken Ewell" <mit...@readware.com> wrote:

> I say
> that you and I would not understand one another were it not for a
> universal ontology on which human languages depend and cling and
> upon which our thoughts might rest in memory.

I believe you and I are communicating not by means of a UNIVERSAL ontology,
but only by means of an ENGLISH ontology. The English language and its
culture holds that certain semantic relations exist. For example:

Fresh air is a good thing.

So if I say, "Your insights were like fresh air," I mean I like them

But it may well be that in the ontology of some other language/culture we
might have:

Fresh air is a deadly thing.

in which case, "Your insights are like fresh air," would be an insult.

> What is most important
> is how to grasp the subject, object and concepts being represented;
> how to identify and isolate interesting topics and focus on the
> impeding issues, and; how to condense and summarize that
> information so that the significant knowledge might be plainly
> visualized and clearly discerned.

Yes, these things are important. Now can you tell us how to accomplish them
with a computer?

Your friend,
Chaumont Devin.


Ken Ewell

May 1, 1998

Chaumont Devin wrote in message <6ibo5f$a...@mochi.lava.net>...


>On Thu, 30 Apr 1998 22:53:53 -0400, "Ken Ewell" <mit...@readware.com> wrote:
>
>> I say
>> that you and I would not understand one another were it not for a
>> universal ontology on which human languages depend and cling and
>> upon which our thoughts might rest in memory.
>
>I believe you and I are communicating not by means of a UNIVERSAL ontology,
>but only by means of an ENGLISH ontology. The English language and its
>culture holds that certain semantic relations exist. For example:
>
>Fresh air is a good thing.
>
>So if I say, "Your insights were like fresh air," I mean I like them
>
>But it may well be that in the ontology of some other language/culture we
>might have:
>
>Fresh air is a deadly thing.


Thank you, Chaumont, for your disillusi wisdomata. I am sorry, Chaumont; I was
not talking about fresh air and I made no such assertions. And that is not
semantics but some kinda angelo-englesse mentalisticautomata
pseudosemantics. I don't know, see: I cannot even find any notes suitable
for framing such an idea.

My previous post used symbols and words in such combinations that certain
ideas about the subject of programming intelligence were conveyed. Therein,
opinions on some topics of ontology, knowledge, concepts, taxonomy, language,
words, memory, culture and a system were rendered (more or less) plain and
clear for readers of English. There may also be subtopics and sub-themes
depending on what turns up under any particular method of literary or
semiotic analysis. There were also significant issues raised in that post,
but surely there was nothing about fresh air.

I said:
Just because we express the ideas in different languages does not
make them any less universal.

That was intended to mean that the concepts and ideas being discussed (here,
in English) can just as easily be rendered using German words, French words
or the words of any other language. I must hasten to add that it is our
experience that (like programming languages) some languages are better
suited to some explanations than others. My friends, the words and
language we use may differ in very substantial ways. This is irrelevant.
The possible combinations of sounds and syllables and words and sentences
are infinitely diverse, completely arbitrary, and wholly a matter of chance
and of choice. You can find some answers by organizing some words and some
conventional grammar, but you will not get far. I think everyone here knows
that.

It is not the words, Chaumont, Sergio; it is the facts, perceptions, or
'standardized knowings' that the words stand for, and that they often subtly
replace. As a famous lurker and sometimes poster in this newsgroup has
convinced me of the power of metaphor, let me try to gently coax your
thoughts in this direction:

Think of words and names as the notes in a composition that are intended and
supposed to represent a particular tonal position in a scale. Now may I say
that it is customary to arrange the notes in pleasing ways. Some people do
prefer to put the notes in less than pleasing ways but then people who want
to be pleased will not listen or pay any attention. In this sense, the
composition will not be very attractive. There are other qualities that
mark what may be considered a good composition. Some of them are brevity,
order, economy, harmony. And there are qualities that mark a bad
composition as well.

Notwithstanding the particular quality, when you put notes together in a
composition they may sound good or bad; they may make sense or not.
Sometimes a single note will stand alone and be repeated. Whatever is
sufficient is what is called for and recognized as a general rule. Also, we
have all the notes that are necessary. Afterwards, if you attempt to take
one or two of the notes out of the composition in which they are intended
and supposed to be necessary, the composition may no longer be sensible. The
composition might fall apart and collapse; lose all sensibility or capacity
for attraction. If you change the notes, you change the composition.
Generally, it might sound wrong or be wrong if you try.

Now, as you probably know, a note is a representation of a tone and a tone
is the vibration of a certain frequency spectrum. Consequently, and here is
the really good part, one can validate any instance of a note with its
representation *and* with its actualization because the use, re-enactment,
or playing of the note, always resonates with what it is intended and
supposed to irrespective of one's personal aspirations. That means, simply:
one cannot point or refer to a horse and call it an egg, in sane, reasonable
and pleasant company. Well, one can if one chooses, however, the
consequences may be neither satisfying, pleasing nor attractive.

We do not change words when the ideas we present no longer make sense. We
change our choice of words. We change our spin, delivery, or our
pitch --the words remain as they were, unchanged, unaffected, uncomposed.
This distinction may be banal but it is an important one.

Let us say, because not all words or compositions are complete, or composed
of honest notes, or intended to be sincere compositions, there are cases
where words do not resonate with that which they are supposed or intended
to. I think a lot of people here can recognize those cases when that
type appears and occurs.

In the end: Reasonable, sane people can discern that which resonates and
that which does not; herein lies the groundwork of semantics and of truth.
The switch from vibration to vibrancy is a subtle one. In fact: words
denote experience, concepts, ideas and perceptions, in the same way as
musical notes on a scale represent tones. Words and names are applications
of experience, previous experience, perceptions and personal conceptions,
which are also the individual personal "knowings," we collectively call
knowledge. Nonetheless, words denote something special, distinctive,
familiar and discriminatory in the field, and according to the scale, in
which they are rendered.

So one can say that the meaning of words is to represent and stand in for
the actualities they denote. To 'understand' the meaning of words, then, can
only mean to be aware of the actualities (and ideas, perceptions, etc.)
that the words stand for and are used in stead of. This might also
explain why one may be steadfast in one's manner and speech.

Ken Ewell/MITi


Sergio Navega

May 2, 1998

Chaumont Devin wrote:
>
> On Tue, 28 Apr 1998 18:40:37 -0300, Sergio Navega <sna...@ibm.net> wrote:
>
> Come on boys, there can be no universal ontology, and the reason is clear.
> For example, in one person's ontology:
>
> God is the supreme being of the universe.
>
> while in another's ontology:
>
> God is a mythical being.
>
> And many other relations are even more contradictory than this. How can there
> be a universal ontology when the links in one ontology differ from the links
> in another so as to create cross links if the two were to be combined? This
> has got to be impossible!

I agree with you, it is not possible to have one universal ontology. Each person
builds his own, and all ontologies may be similar but certainly differ in
important points. That's one explanation for the common misunderstandings that
abound between people. Of course, all this makes sense only if we can really
associate ontologies with something people have inside their own brains.
I'm starting to doubt it.

Best Regards,
Sergio Navega.

Chaumont Devin

May 2, 1998

On Fri, 1 May 1998 18:45:38 -0400, "Ken Ewell" <mit...@readware.com> wrote:

> I said:
> Just because we express the ideas in different languages does not
> make them any less universal.

> That was intended to mean that the concepts and ideas being discussed (here,
> in English) can just as easily be rendered using German words, French words
> or the words of any other language.

Not so. In fact many thoughts are easy to express in one language and
difficult to express in another.

For example, in the Malay ontology there is apparently no generic basket. All
baskets are named by the particular type of basket they are. So the hypernym
for this kind of basket or that kind of basket is evidently not "basket," but
something like "container." So when translating the word "basket" from
English to Malay one may encounter serious difficulties.

Or, to borrow your musical notation metaphor, in some cultures the diatonic
scale is unknown: they employ a pentatonic scale, which has only a part of the
tones of the diatonic scale. It is often impossible to play a melody composed
for an instrument using a diatonic scale on a pentatonic scale, because
certain tones just aren't there.

Correct me if I am wrong, but a universal musical instrument would have to be
able to sound every tone known to man from the pentatonic, diatonic,
chromatic, or any other scale. So, likewise, a universal ontology would have
to include every semantic node employed in the thoughts of men from all the
cultures of the world. This in itself would not seem to be impossible. For
example, a semantic node for basket might exist but just never be used in
Malay. English, which probably employs more semnods than most other
languages, only has about half a million words, which means that the number of
semnods it employs may be only a quarter million or less. So it might not
choke a modern computer to hold all the semantic nodes ever employed by man.

The problem will arise when contradictory semlinks are involved. For example,
most human beings would probably agree that an apple is a fruit (although some
botanists insist that it is really a flower). But in many other cases sharp
contradictions will occur. Here are some examples:

Christian: Paul was an apostle of God.
Muslim: Paul was an imposter.
Christian: Jesus was God.
Muslim: Jesus was a prophet.
European: Premarital-sex is bad.
Certain Pacific and African cultures: Premarital sex is okay.
Traditional Arab: Fire is an element.
Modern European: Fire is not an element.
Brazilian settler: A capybara is a fish.
Biologist: A capybara is a rodent.
Malay: A porpoise is a fish.
European: A porpoise is a mammal.
Certain cultures: Burping is impolite.
Certain others: Burping is polite.
Malay: A bat is a bird.
European: A bat is a mammal.
European: A picnic-basket is a kind of basket.
A basket is a kind of container.
Malay: A picnic basket is a kind of container (skips basket).
Malay: Fresh air is dangerous.
European: Fresh air is healthy.
Vietnamese: Sunshine is bad.
American: Sunshine is good.
Malay: A green light is a blue light.
A blue light is a green light (synonymous).
English: no synonymy.
Etc., etc., etc.

And to make matters even worse, holonymy is wired differently in
different languages. For example, in English a foot is not part of a leg
but part of a body, whereas in Malay a foot is part of a leg. In English a
toe is part of a foot, whereas in Malay a toe is also a part of a leg, etc.

Thus although it may be possible to assemble sufficient semantic nodes to
cover all such nodes known to man, it may never be possible to create a
universal ontology, because an ontology is not only semantic nodes but also
the links between such nodes, and these links are wired differently for
different languages.
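
As a minimal sketch of that last point (in C++; "semnod" and "semlink" are
my terms from above, and everything else is invented for illustration), one
can pool the nodes while tagging each link with the language whose ontology
asserts it:

#include <iostream>
#include <string>
#include <vector>

struct Semlink {
    std::string lang;               // which ontology asserts this link
    std::string hyponym, hypernym;  // e.g. "bat" is-a "bird"
};

// Look up the hypernym of a word within one language's ontology.
std::string hypernymOf(const std::vector<Semlink>& links,
                       const std::string& word, const std::string& lang) {
    for (const Semlink& l : links)
        if (l.lang == lang && l.hyponym == word) return l.hypernym;
    return "(none)";
}

int main() {
    std::vector<Semlink> links = {
        {"Malay",   "bat", "bird"},
        {"English", "bat", "mammal"},
    };
    std::cout << hypernymOf(links, "bat", "Malay")   << "\n"; // bird
    std::cout << hypernymOf(links, "bat", "English") << "\n"; // mammal
}

The nodes can be shared, but any reasoning that follows the links has to
commit to one language's wiring at a time.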

With best regards,
Chaumont Devin.


Patrick Juola

May 2, 1998

In article <6ifdph$f...@mochi.lava.net> Chaumont Devin <de...@lava.net> writes:
>On Fri, 1 May 1998 18:45:38 -0400, "Ken Ewell" <mit...@readware.com> wrote:
>
>> I said:
>> Just because we express the ideas in different languages does not
>> make them any less universal.
>
>> That was intended to mean that the concepts and ideas being discussed (here,
>> in English) can just as easily be rendered using German words, French words
>> or the words of any other language.
>
>Not so. In fact many thoughts are easy to express in one language and
>difficult to express in another.
>
>For example, in the Malay ontology there is apparently no generic basket. All
>baskets are named by the particular type of basket they are. So the hypernym
>for this kind of basket or that kind of basket is evidently not "basket," but
>something like "container." So when translating the word "basket" from
>English to Malay one may encounter serious difficulties.

Unfortunately, this doesn't prove the assertion that you make two
paragraphs above. All it shows is that if a Malay speaker wants to talk
about "baskets," he needs to use a circumlocution equivalent to
"something like <sort-of-basket>." And if it were something that he
needed to talk about, he could simply define a new word or phrase and
use it as appropriate. People do this all the time; for instance,
the political correctness police would love it if every time I wrote about
a generic human being, I used the phrase "he or she" as my pronoun.
There's no exact equivalent in French for the English "cousin" (French
specifies gender on cousins), any more than there's a generic English
word for "aunt or uncle" -- but this doesn't keep people from thinking,
describing, or talking about "aunts and uncles."

>Or, to borrow your musical notation metaphor, in some cultures the diatonic
>scale is unknown: they employ a pentatonic scale, which has only a part of the
>tones of the diatonic scale. It is often impossible to play a melody composed
>for an instrument using a diatonic scale on a pentatonic scale, because
>certain tones just aren't there.
>
>Correct me if I am wrong, but a universal musical instrument would have to be
>able to sound every tone known to man from the pentatonic, diatonic,
>chromatic, or any other scale.

Or be tunable. For example, a violin (in the hands of a skilled violinist)
can play every tone known to man simply because it can, literally, play
any (fundamental) frequency [within its range], depending on how the
musician places his or her fingers. This isn't possible with a piano,
as the strings are tuned to discrete notes.

But because human language is extensible, it's more like the violin
than the piano. And for this reason, it's not clear that a "universal
ontology" makes sense, any more than a universal set of notes does for
violin music: if you have a need or a use for something outside your
current set of notes, you simply put your finger in the right spot.

-kitten

Oh, yes. P.s.


>Malay: A green light is a blue light.
> A blue light is a green light (synonymous).

These are *not* synonymous. Again, the Malay speaker extends the
language as needed. One classic that I'm rather fond of is a
description of a color as "red like a banana." This was from
a speaker of a language with three basic colors : black, white, and
red.... and as subtle shades were needed, they were described with
derived terms. Of course, English speakers are too sophisticated to
do anything of the sort, and I just have to assume that the car
manufacturer will not put olive seats in my British-racing-green
Jaguar (as there's no way for me to specify the difference between
shades of green).

-kitten

CMoel888

May 3, 1998

In article <6ifdph$f...@mochi.lava.net>, Chaumont Devin <de...@lava.net> writes:

>Thus although it may be possible to assemble sufficient semantic nodes to
>cover all such nodes known to man, it may never be possible to create a
>universal ontology, because an ontology is not only semantic nodes but also
>the links between such nodes, and these links are wired differently for
>different languages.
>

It may be possible to build a universal ontology if it could encompass all of
the (finite) possible alternatives as conditional options. It would
necessarily be interactive.

Regards,
Charles Moeller

Chaumont Devin

May 3, 1998

On Sat, 02 May 1998 09:26:17 -0300, Sergio Navega <sna...@ibm.net> wrote:

> Of course, all this makes sense only if we can really
> associate ontologies with something people have inside their own brains.

I believe we will likely prove or disprove the existence of the personal
ontology by means of deduction.

> I'm starting to doubt it.

But why would you doubt this when the evidence weighs so heavily in the
opposite direction? By this I mean that if we have found that it is
impossible to build a language machine that works without an ontology, then
why should we now assume that humans don't have one? I should think we should
take this evidence and apply it in the exact opposite direction, namely:
Because we have determined that it is impossible to build a language machine
without an ontology, therefore the human linguistic apparatus MUST include an
ontology in order to work.

I would predict that a person with a damaged ontology would speak gibberish,
and perhaps suffer from tremendous distress and sudden fears.

For example, what if your ontology failed and created a semlink of the
hypernym type from "rose" to "something-terrifying"? You might immediately
start suffering from a rare new disease called "rhodophobia"! You might be
sitting over coffee with a friend, acting in a perfectly normal manner, when
your wife came in from the garden with a rose. At the sight of this flower
you might then totally "flip out," and the little men in the white jackets
might have to come for you and take you away in a straitjacket! No big
deal, really, just ONE little semlink to the wrong place, if it is strong
enough, and, poof! There goes your whole life for one innocent rose!

So you see, dear Sergio, my theory keeps explaining more and more things every
time you turn round, while the best competition keeps on spouting highfalutin
jargon without explaining anything. Don't you think it's about time you came
round for a second, closer examination? It's all right there in black and
white. The only reason you aren't catching on is because you aren't really
reading it. Try it again, slowly, and you will see what I mean. Revelation!
Kodzillions of things falling into place! Wow! The human ontology!

And when you see that, you will also see what I said about your quest for the
kinds of knowledge. In a world of links and nodes, then, basically there are
two: (1) knowledge consisting of binary relations that can be represented in
an ontology, and (2) knowledge consisting of complex relations that can only
be represented using Panlingua. In these two types I have not included the
kinds of "body knowledge" you have been discussing in recent postings, such as
knowing how to walk, because these are not important to language. It seems to
be true that language is somehow used in creating the device drivers for these
things (walking, picking things up, brushing our teeth, etc.), but once this
software is in place it can work below the level of language, and thus does
not constitute knowledge in the logical/philosophical sense--at least as I
understand it right now.

The upshot of all this is a system that:

1. Has an ontology at the base of all knowledge.

2. Represents all higher knowledge in Panlingua.

3. Uses event codes in Panlingua or some other device to trigger lower,
automatic functions.

It's that simple.

Chaumont Devin

May 3, 1998

On 3 May 1998 01:40:27 GMT, cmoe...@aol.com (CMoel888) wrote:

> It may be possible to build a universal ontology if it could encompass all of
> the (finite) possible alternatives as conditional options. It would
> necessarily be interactive.

Yes. This is correct, but I didn't mention it because it entails such
complexity that for all intents and purposes it would be impossible at this
time. Nevertheless there may well be shortcuts that haven't come to light
that might enable us to reach this goal.

We show some potential for this larger ontology whenever we choose our words
tactfully in order to avoid misunderstanding. For example, suppose you are
talking to a heavily armed savage who worships this man called the "Holy Mama"
and you know that the "Holy Mama" is nothing but a conniving old bastard who
loves young female flesh. To this man, the "Holy Mama" is God, so in order to
avoid getting speared for nothing, you kinda play along with his ontology and
conceal yours. Thus while one ontology is at play inside our heads, we
sometimes model at least a part of the ontologies of others at the same time,
and are able to navigate these alternative ontologies quite nimbly.

I once knew a lady who had three distinct personalities. One was the
down-home all-American girl. The second was the business lady. And the third
(she was born in the Philippines) was the yakety Filipina. I used to marvel at
the changes that would come over her depending on who was at the other end
when she picked up the phone. She was one of the smartest people I ever met,
so this leads me to believe that the ability to be at home in more than one
ontological reference frame must go hand-in-hand with psychological
sophistication.

But most people do not speak more than two or three languages very well, and so
it is probably safe to assume that we are not designed to handle more than a
dozen or so different ontological reference frames at the same time.

And because the number of ways in which the nodes (semantic nodes or semnods)
in an ontology can be linked may be very large, it may never be possible to
build a machine capable of handling all cases.

Sergio Navega

May 4, 1998

Ken Ewell wrote:
>
> It is not the words, Chaumont, Sergio; it is the facts, perceptions, or
> 'standardized knowings' that the words stand for, and that they often subtly
> replace. As a famous lurker and sometimes poster in this newsgroup has
> convinced me of the power of metaphor, let me try to gently coax your
> thoughts in this direction:
>
> Think of words and names as the notes in a composition that are intended and
> supposed to represent a particular tonal position in a scale. Now may I say
> that it is customary to arrange the notes in pleasing ways. Some people do
> prefer to put the notes in less than pleasing ways but then people who want
> to be pleased will not listen or pay any attention. In this sense, the
> composition will not be very attractive. There are other qualities that
> mark what may be considered a good composition. Some of them are brevity,
> order, economy, harmony. And there are qualities that mark a bad
> composition as well.
>

I mostly agree with your ponderings. Your analogy with music is appropriate and
reminds me of my current concern, which is the importance that patterns have
in our intelligence. This is a view that's gaining momentum within my beliefs.

Regards,
Sergio Navega.

Sergio Navega

May 4, 1998

Chaumont Devin wrote:

>
> Sergio Navega <sna...@ibm.net> wrote:
> > I'm starting to doubt it.
>
> But why would you doubt this when the evidence weighs so heavily in the
> opposite direction? By this I mean that if we have found that it is
> impossible to build a language machine that works without an ontology, then
> why should we now assume that humans don't have one? I should think we should
> take this evidence and apply it in the exact opposite direction, namely:
> Because we have determined that it is impossible to build a language machine
> without an ontology, therefore the human linguistic apparatus MUST include an
> ontology in order to work.
>

Dear Chaumont,
First of all, sorry for the delay in responding to this. I've been through a
shock wave of work lately. My doubts about the ontological approach
(notwithstanding my posts about it) are deep, because I'm finding a lot of
considerations pushing me toward other methods. I regret that I cannot yet
sustain my arguments for these other views, but the time will come. Let's keep
in touch; I will certainly need your valuable insights to validate (or not)
all my future ramblings.

Best Regards,
Sergio Navega.
