Best textbook (most relevant to OpenCog Node and Link Types) in knowledge representation


Alex

Apr 14, 2017, 7:07:08 AM
to opencog
Hi!

What is the best textbook (most relevant to OpenCog Node and Link types) on knowledge representation? I am aware of the books about PLN and engineering AGI (I am reading them, and they are relevant to the probabilistic-reasoning side of knowledge representation), but I feel that e.g. the concepts of inheritance (extensional and intensional) as adopted by the OpenCog AtomSpace come from earlier work - so from what work? I would like to see that work, to place it in a broader context. I am used to UML, ER and OO design, and I am still struggling to model knowledge using OpenCog nodes and links. That is why I am seeking more books to dive into this line of thinking.

I am reading now:

Linas Vepstas

Apr 14, 2017, 10:38:58 AM
to opencog
I don't know that book but it's probably adequate.

OpenCog contains a potpourri of ideas from logic, Prolog, the lambda calculus, and relational algebra. There are usually several ways to solve any problem in OpenCog. Many or most of the link types in OpenCog are not used for basic KRR, but are there to solve assorted domain-specific problems, usually related to pattern matching.

--linas


Alex

Apr 16, 2017, 7:34:05 PM
to opencog
Well, the mentioned book has a chapter about inheritance, but it is in no way connected with the terms intensional and extensional inheritance. So this book is not usable.

Now I am looking at Non-Axiomatic Logic: A Model of Intelligent Reasoning by P. Wang, and it should be a fairly good book, because OpenCog uses term logic, and the idea of using term logic again in the present day (more than 2000 years after Aristotle) comes from P. Wang (as described in the PLN book) - but I have no access to this book, so.....

It would be nice to know where these notions of extensional and intensional inheritance come from. The OpenCog wiki says that intensional inheritance (class inheritance) can be understood as a subset relationship, but from my programming experience I can say that this is a wrong perception. A subset relationship is a subset relationship, but a class is something more: a class has class functions, a class as a concept exists even without members, a class has factory methods/constructors/destructors - means of creating new instances - and so on. A class (in some languages) can have multiple inheritance. So I am still afraid of adopting the OpenCog notions.

To be honest, I am a bit afraid to include in my thesis a project that uses term logic - it is just a fragment of monadic predicate logic, and it was decided some 150 years ago that more expressive logics (full predicate logic) are necessary...

Linas Vepstas

Apr 16, 2017, 8:52:03 PM
to opencog
On Sun, Apr 16, 2017 at 6:34 PM, Alex <alexand...@gmail.com> wrote:
Well, the mentioned book has a chapter about inheritance, but it is in no way connected with the terms intensional and extensional inheritance. So this book is not usable.

Sure, it's usable. OpenCog inheritance is still inheritance. The difference between extensional and intensional is this:

You can define a set by listing all the members of the set, for example "fluffy is a dog", "fido is a dog", "satan is a dog", etc. This is the extensional definition of a set: it describes the extent.

The intensional definition is this: "dogs are all those things that have fur, four legs and bark". One lists all the properties that the members of the set must possess - the "intension".

That's all.
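In Atomese, that contrast comes out roughly as below (a minimal sketch for the guile shell; truth values are left off, and IntensionalInheritanceLink follows the PLN naming, so check that your build defines it):

(use-modules (opencog))

; Extensional: define "dog" by listing its members.
(MemberLink (ConceptNode "fluffy") (ConceptNode "dog"))
(MemberLink (ConceptNode "fido")   (ConceptNode "dog"))
(MemberLink (ConceptNode "satan")  (ConceptNode "dog"))

; Intensional: define "dog" by the properties its members possess.
(EvaluationLink (PredicateNode "has-fur")   (ListLink (ConceptNode "dog")))
(EvaluationLink (PredicateNode "four-legs") (ListLink (ConceptNode "dog")))
(EvaluationLink (PredicateNode "barks")     (ListLink (ConceptNode "dog")))

; Inheritance between concepts can then be read either extensionally
; (shared members) or intensionally (shared properties):
(SubsetLink (ConceptNode "dog") (ConceptNode "animal"))
(IntensionalInheritanceLink (ConceptNode "dog") (ConceptNode "animal"))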


 

Now I am looking at Non-Axiomatic Logic: A Model of Intelligent Reasoning by P. Wang, and it should be a fairly good book, because OpenCog uses term logic, and the idea of using term logic again in the present day (more than 2000 years after Aristotle) comes from P. Wang (as described in the PLN book) - but I have no access to this book, so.....

There's a PDF of the PLN book online somewhere. Someone can send this to you.

It would be nice to know where these notions of extensional and intensional inheritance come from.

Aristotle, I believe.  Seriously, the idea is thousands of years old. Thomas Aquinas. The Scholastics.
 
The OpenCog wiki says that intensional inheritance (class inheritance) can be understood as a subset relationship, but from my programming experience I can say that this is a wrong perception. A subset relationship is a subset relationship, but a class is something more: a class has class functions, a class as a concept exists even without members, a class has factory methods/constructors/destructors - means of creating new instances - and so on. A class (in some languages) can have multiple inheritance. So I am still afraid of adopting the OpenCog notions.

"Class" has many meanings besides the one used in programming/C++. OpenCog uses the more general meaning. Even C++ originally used the word "class" in the general sense; only later were the notions of type theory formalized.

The problem is that class membership, in type theory, is binary; the standard example is the functor from the opposite category into the category Set - what is called the "presheaf". That's the high-falutin way of understanding it. There's more, lots more, in this area.

I'm guessing that you don't really know JavaScript very well, or have never taken any formal courses in Lisp or Scheme. Read the first 3-5 chapters of SICP; it will expand your mind a lot.

To be honest, I am a bit afraid to include in my thesis a project that uses term logic - it is just a fragment of monadic predicate logic, and it was decided some 150 years ago that more expressive logics (full predicate logic) are necessary...

Huh?
What's term logic got to do with this discussion?


Ben Goertzel

Apr 17, 2017, 5:24:12 AM
to opencog
On Mon, Apr 17, 2017 at 1:34 AM, Alex <alexand...@gmail.com> wrote:
> To be honest, I am a bit afraid to include in my thesis a project that
> uses term logic - it is just a fragment of monadic predicate logic, and it
> was decided some 150 years ago that more expressive logics (full predicate
> logic) are necessary...


Yes, term logic in its simple form does not have adequate expressivity
for everything an AGI needs to do

However, for a lot of simple reasoning that an AGI needs to do, term
logic provides a concise and effective approach...

As for Pei Wang's book, a moment's search shows that it is easily downloadable from libgen if you want to read it...

ben

--
Ben Goertzel, PhD
http://goertzel.org

"I am God! I am nothing, I'm play, I am freedom, I am life. I am the
boundary, I am the peak." -- Alexander Scriabin

Alex

Apr 17, 2017, 6:52:20 AM
to opencog
Thanks for the clarification and suggestions!

I had to search for the book as a set of articles - and there it is!

Is it hard/possible to extend PLN from term logic to full predicate logic? As far as I understand, I would only need to rewrite the formulas for the truth-value vectors/matrices?

Linas Vepstas

Apr 17, 2017, 5:48:58 PM
to opencog
FYI, if you know what a closure is: I just found a nice, simple explanation of how to create objects, if you have closures.

Here:
http://www.erights.org/elib/capability/ode/ode-objects.html

It's pretty nifty.
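For anyone who wants the flavour without reading that page: the closure's captured variable plays the role of a private field, and the returned dispatch procedure plays the role of the object's method table. A minimal Scheme sketch (in the spirit of the classic SICP account example, not the E-language code on that page):

; A "bank account object" built from nothing but a closure.
(define (make-account balance)
  (define (deposit amount)
    (set! balance (+ balance amount))
    balance)
  (define (withdraw amount)
    (if (>= balance amount)
        (begin (set! balance (- balance amount)) balance)
        (error "Insufficient funds" balance)))
  ; The returned dispatcher *is* the object; 'balance' is its private state.
  (lambda (msg . args)
    (case msg
      ((deposit)  (apply deposit args))
      ((withdraw) (apply withdraw args))
      ((balance)  balance)
      (else (error "Unknown message" msg)))))

(define acc (make-account 100))
(acc 'deposit 50)   ; => 150
(acc 'withdraw 30)  ; => 120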

--linas



Alex

Apr 18, 2017, 4:22:35 PM
to opencog
Maybe we can solve the problem of modelling classes (and of using OO and UML notions for knowledge representation) with the following (pseudo)code:

- We can define a ConceptNode "Object" that consists of a set of properties and functions.

- We can require that any class, e.g. Invoice, is inherited from Object:
  IntensionalInheritanceLink
    Invoice
    Object

- We can require that any more specific class, e.g. VATInvoice, is inherited from the more general class:
  IntensionalInheritanceLink
    VATInvoice
    Invoice

- We can require that any instance is inherited from the concrete class:
  ExtensionalInheritanceLinks
    invoice_no_2314
    VATInvoice

But I don't know yet what can and what cannot be the parent for extensional and intensional inheritance. Can an entity extensionally inherit from a more complex object, or can it extensionally inherit only from an empty set-placeholder? And when we introduce the notion of a set, a further question always arises - does OpenCog make a distinction between sets and proper classes?

There is a second problem as well - there is only one, mixed, InheritanceLink. One can use SubsetLink for extensional inheritance (it still feels strange), but some syntactic sugar is certainly necessary for intensional inheritance, because it is hard to write and read SubsetLinks of property sets again and again (http://wiki.opencog.org/w/InheritanceLink).

Ben Goertzel

Apr 18, 2017, 4:31:22 PM
to opencog
Hmmm...

Instead of

***
- We can require that any instance is inherited from the concrete class:
ExtensionalInheritanceLinks
invoice_no_2314
VATInvoice
***

I would think to say

MemberLink
invoice_no_2314
VATInvoice

DefiniteLink
invoice_no_2314

(where the latter indicates that invoice_no_2314 is being considered
as a "specific entity")

.. ben



Linas Vepstas

Apr 18, 2017, 8:40:47 PM
to opencog
On Tue, Apr 18, 2017 at 3:22 PM, Alex <alexand...@gmail.com> wrote:
Maybe we can solve the problem of modelling classes (and of using OO and UML notions for knowledge representation) with the following (pseudo)code:

- We can define a ConceptNode "Object" that consists of a set of properties and functions.

- We can require that any class, e.g. Invoice, is inherited from Object:
  IntensionalInheritanceLink
    Invoice
    Object

- We can require that any more specific class, e.g. VATInvoice, is inherited from the more general class:
  IntensionalInheritanceLink
    VATInvoice
    Invoice

- We can require that any instance is inherited from the concrete class:
  ExtensionalInheritanceLinks
    invoice_no_2314
    VATInvoice

If you wish, you can do stuff like that. OpenCog per se is agnostic about how you do this; you can do it however you want. The proper way to do this is discussed in many places, for example here: https://en.wikipedia.org/wiki/Upper_ontology

I'm not particularly excited about building ontologies by hand; it's much more interesting (to me) to understand how they can be learned automatically, from raw data.

But I don't know yet what can and what cannot be the parent for extensional and intensional inheritance. Can an entity extensionally inherit from a more complex object, or can it extensionally inherit only from an empty set-placeholder? And when we introduce the notion of a set, a further question always arises - does OpenCog make a distinction between sets and proper classes?

Why? This "distinction" only matters if you want to implement set theory. My pre-emptive strike to halt this train of thought is this: why would you want to implement set theory, instead of, say, model theory, or universal algebra, or category theory, or topos theory? Why the heck would distinguishing a set-theoretic set from a set-theoretic proper class matter? (Which, by the way, is similar to but not the same thing as a category-theoretic proper class...)

You've got multiple ideas going here, at once: the best way to hand-craft some ontology; the best theoretical framework to do it in; the philosophy of knowledge representation in general... and, my personal favorite: how do I get the machine to do this automatically, without manual intervention?
 

There is a second problem as well - there is only one, mixed, InheritanceLink. One can use SubsetLink for extensional inheritance (it still feels strange), but some syntactic sugar is certainly necessary for intensional inheritance, because it is hard to write and read SubsetLinks of property sets again and again (http://wiki.opencog.org/w/InheritanceLink).

If the machine has learned an ontology with a million subset links in it, no human being is ever going to read or want to read that network. It'll be like looking at a bundle of neurons: the best you can do is say "oh wow, a bundle of neurons!"

--linas


Daniel Gross

Apr 19, 2017, 1:23:16 AM
to opencog, linasv...@gmail.com
Hi Linas, 

How do you propose to learn an ontology from the data? Also, what purpose would the learned ontology serve, in your opinion? Or, stated differently, in what way are you thinking of engendering higher-level cognitive capabilities via machine-learned bundles of neurons (and implicit ontologies, perhaps)?

thank you,

Daniel

Ben Goertzel

Apr 19, 2017, 2:16:42 AM
to opencog, Linas Vepstas
We have a probabilistic logic engine (PLN) which works on (optionally
probabilistically labeled) logic expressions.... This logic engine
can also help with extracting semantic information from natural
language or perceptual observations. However, it's best used together
with other methods that carry out "lower levels" of processing in
feedback and cooperation with it...

In the case of vision, Ralf Mayet is leading an effort to use a
modified InfoGAN deep NN to extract semantic information from
images/videos/sounds to pass into PLN, the Pattern Miner, and so forth

In the case of textual language, Linas is leading an effort to extract
a first pass of semantic and syntactic information from unannotated
text corpora via this general approach

https://arxiv.org/abs/1401.3372

The same approach should work when non-textual groundings are included
in the corpus, or when the learning is real-time experiential rather
than batch-based.... but there's plenty of nitty-gritty work here...

ben goertzel



Daniel Gross

Apr 19, 2017, 8:23:45 PM
to opencog, linasv...@gmail.com
Hi Ben, 

Thank you for your response. I started reading the paper and was wondering if you could help me clarify a confusion I apparently have when it comes to the meaning of meaning:

How is linguistic meaning connected to the human embodied meaning that we would call human (or AGI) understanding?

Linguistic meaning seems to be about the linguistic meta-language that shows how a human would parse a sentence unambiguously, so that a human can, in principle, understand the meaning of the sentence; although what is, say, instructed by a sentence, as understood by a human, seems not to be captured, and would require more machinery.

In this sense, linguistic machinery seems to embody (as a theory of mind) how humans understand (in a cognitively economical manner), rather than what humans understand - at least this is what confuses me ...

any thought would be much appreciated ...

Ben Goertzel

Apr 20, 2017, 4:17:31 AM
to opencog, Linas Vepstas
As I see it, the meaning of a word can be understood as the fuzzy set
of patterns in which that word is involved...

Some of these will be purely language-internal patterns (as
highlighted by Saussure and other structuralist linguists way back
when), others will be patterns associating the word with nonlinguistic
data (as in symbol groundings)

One way of thinking about the relationships between these different
aspects of meaning is given here:

https://arxiv.org/abs/1703.04368

-- Ben

Linas Vepstas

Apr 20, 2017, 9:59:38 AM
to Daniel Gross, opencog
Semantics and syntax are two different things. Syntax allows you to parse sentences. Semantics is more about how concepts inter-relate with each other - a network. A sentence tends to be a quasi-linearized walk through such a network. For example, take a look at the "deep" and "surface" structures in meaning-text theory. From there, one asks "what kinds of speech acts are there?" and "why do people talk?", and this would be the "next level", beyond the homework exercise I mentioned in the previous email.

--linas 

Linas Vepstas

Apr 20, 2017, 9:59:51 AM
to Daniel Gross, opencog
On Wed, Apr 19, 2017 at 12:23 AM, Daniel Gross <gros...@gmail.com> wrote:
Hi Linas, 

How do you propose to learn an ontology from the data?

The simplest approach is to simply read English-language sentences that encode an ontology: for example, an early version of MIT ConceptNet contained the sentence "a violin is an instrument". This is really quite straightforward to ingest into your favorite internal format. More complex sentences require more sophistication.
 
Also, what purpose would the learned ontology serve, in your opinion?

I dunno. You could ask questions, like "what is an instrument?", and it could respond "violin, thermometer". People seem to think that "semantic triples", e.g. the triplestore, are useful. What do they get used for?
 
Or, stated differently, in what way are you thinking of engendering higher-level cognitive capabilities via machine-learned bundles of neurons (and implicit ontologies, perhaps)?

Myself, I envision something more sophisticated than the above; but the above is an OK starting point.

FWIW, some 5-6 years ago there was a version of opencog that was able to read ConceptNet, and it did answer the above question in the above manner. It was never attached to any reasoning system, but still it was entertaining and instructive to build. Kind of like a big homework exercise: I learned a lot, but it's not really "useful" in the real world.

the current "homework exercise" for opencog is to learn by reading, use reasoning on the learned content, be aware of external-world object/events, and use attention allocation to control generated verbal responses.   This seems doable, and we are assembling this system now.   The next-level homework exercise, after this, have not thought about it much.

--linas


Daniel Gross

Apr 20, 2017, 12:19:33 PM
to opencog, gros...@gmail.com, linasv...@gmail.com
Hi Linas, 

Thank you for your responses, and the pointer. 

It seems to me that your example further pinpoints my question:

A quasi-linear walk through a semantic network is essentially a structure (or path) constructed through the use of grammar, to get at a possible reading of a sentence that would make sense to a person within a "semantic space", without however capturing meaning per se. (A lexicon, say, "merely" captures the rules of construction of particular verbs and nouns, based on their human-interpreted meaning.)

Hence, grammar's purpose really seems to be "only" to construct a meaningful path, rather than to tell us what the meaning of the knowledge embodied in that path is. The latter seems to require another "kind" of semantics/meaning (and perhaps some might say that there are turtles all the way down - or at least down to some grounding).

Does my intuition make sense?

thank you,

Daniel

Ivan Vodišek

Apr 20, 2017, 1:25:58 PM
to ope...@googlegroups.com
Hi all :)

May I say a few words about semantics? In my work on describing knowledge, I've concluded that the semantics (meaning) of an expression is merely an abstract concept of thought that relates the expression to its interpretation in another (or the same) language whose interpretation we already know. Let's say we have an unknown language A and an already-known language B in which language A can be expressed. To know the semantics of our language A (in terms of B) is to know how to translate language A into language B, under the assumption that we already know the semantics of B.

If we think about it in a natural way, how do we explain to someone the meaning of some expression? What we do in this situation is translate the expression unknown to that person into a form that is known to that person. For example, how do we explain in a known language what some word from another language means? We simply show how to translate it. As simple as that.

So one might pose the question: "If semantics is always relative to the next member in the chain, what is the semantics of the final chain member?" On this, all I have right now are indications that the final chain member is the Universe itself. If this is true, then every conceivable thought has its interpretation as a system inside the Universe, with all of its static or dynamic states. Once we can picture how to translate an expression into a Universe system, we can say that we know the semantics of that expression. And the meaning of the Universe system itself? Sorry, don't ask me - I didn't create it; the thing is rolling on on its own :)

Moving further with this train of thought, where do we stand with logical conclusions? We translate a set of logic formulas into new sets of logic formulas, which could be interpreted as giving meanings to the starting formulas. Take a look at this sentence: "If it rains, it means that the streets are wet." We used the word "means". So, with a proper set of translation rules, we can give meanings to languages, we can draw logical conclusions, and, from what I've seen so far in my research, we can build whole imaginary systems that can emulate real Universe systems. In short, a set of translation rules can be seen as a knowledge base about some situation that can possibly exist inside the Universe.
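In OpenCog terms, a "translation rule" of this sort is roughly what the pattern matcher's BindLink expresses: a pattern to recognize plus a rewrite to produce. A minimal sketch for the guile shell - the predicate names and the Paris fact are invented for illustration:

(use-modules (opencog) (opencog exec))

; "If it rains, the streets are wet", as a rewrite rule over the atomspace.
(define rain-rule
  (BindLink
    (VariableNode "$place")
    ; Pattern: some place where it is raining ...
    (EvaluationLink
      (PredicateNode "raining-at")
      (ListLink (VariableNode "$place")))
    ; Rewrite: ... gets wet streets asserted for it.
    (EvaluationLink
      (PredicateNode "streets-wet-at")
      (ListLink (VariableNode "$place")))))

; A ground fact to match against.
(EvaluationLink (PredicateNode "raining-at") (ListLink (ConceptNode "Paris")))

; Running the rule adds the conclusion to the atomspace and returns it.
(cog-execute! rain-rule)
; => (SetLink (EvaluationLink (PredicateNode "streets-wet-at")
;                             (ListLink (ConceptNode "Paris"))))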

Not to stay just with words: I am developing a programming language whose main purpose is to make development in the field of artificial intelligence easier. As it is all about states in the Universe, and real-life situations are about those states, the language could be used for programming regular applications as well. The language is a working embodiment of a universal rewrite system and has some cute properties, complete enough for programming and for concluding new knowledge, whether by system transformation, abduction, deduction or induction. I'll try to keep the OpenCog community informed about the progress of my work, because I believe the AGI world could benefit from such an investigation in the field of knowledge representation. I hope, at least, that the language will be an inspiration for some lucid AGI developer.

- ivan -




Linas Vepstas

Apr 20, 2017, 2:15:01 PM
to Daniel Gross, opencog
On Thu, Apr 20, 2017 at 11:19 AM, Daniel Gross <gros...@gmail.com> wrote:
Hi Linas, 

Thank you for your responses, and the pointer. 

It seems to me that your example further pinpoints my question:

A quasi-linear walk through a semantic network is essentially a structure (or path) constructed through the use of grammar, to get at a possible reading of a sentence that would make sense to a person within a "semantic space", without however capturing meaning per se. (A lexicon, say, "merely" captures the rules of construction of particular verbs and nouns, based on their human-interpreted meaning.)

Hence, grammar's purpose really seems to be "only" to construct a meaningful path, rather than to tell us what the meaning of the knowledge embodied in that path is. The latter seems to require another "kind" of semantics/meaning (and perhaps some might say that there are turtles all the way down - or at least down to some grounding).

Does my intuition make sense?

Yes, sort of.  If one is a structuralist (and I guess I am one), then *all* knowledge is encoded as structure.   Therefore, anything that contains structure does in fact encode some amount of knowledge; and the question is "how much knowledge"?

To illustrate: what do we know about "turtles"? They have four feet, or maybe flippers. They have a hard shell, except when they don't. Most people bring up one or more "mental images" (literally, photo-like representations) when they think of turtles, although the precise images are highly variable from individual to individual.  What more can one say about turtles? The layman, not much, the specialist, a whole lot more.

OK, so what about four feet? what can one say about feet? ... flippers ...? well those are just more networks of knowledge: just more inter-related facts.

Is the meaning of "turtle" anything more than just this network of mostly-factual beliefs? I claim that it isn't. If you disagree, it might be because you don't understand what I mean by a "network of mostly-factual beliefs", and we can try to clarify this further. That network is very much tied into my sense of self, into my core belief-system and world-view, which will be different from yours, and that is, in turn, different from that of Wikipedia. In particular, Wikipedia does not eat, sleep or breathe, and most people would argue that it's not even alive.

Where do we go from here? Well, "turtle" is associated with images, so an AGI needs an image-processing subsystem. And one can hold a turtle in one's hands, so sensory-motor capabilities are needed. Humans get emotional when they do things, so if an AGI wants to understand how humans feel when they hold turtles in their hands, it may want to have some model of the hormones and emotional outbursts typical of humans: it's all part of understanding turtle-ness. Again, turtle-ness for humans is different from turtle-ness for Wikipedia, since Wikipedia knows more than any human, and yet Wikipedia has no hormones or circulatory system and doesn't express emotional outbursts. Heck, Wikipedia cannot hold a turtle in its hands, because it has no hands. Heck, Wikipedia has no mechanical attachments whatsoever, and if it did, it would be incapable of moving them. Or vision, or smell, or touch. It would need all of that to understand turtle-ness more fully.

Wikipedia has no dynamical system; it cannot alter itself. It almost sort-of can: it has "bots" which alter it, but those bots are under human guidance, and are very, very weak. Imagine a bot that could observe and read, and then edit Wikipedia articles based on the new knowledge it obtained. That would be a pretty large step towards what we call "AGI".

Gosh, I make it sound so simple. What's the problem? What's taking so long? How come no one has done this yet?

--linas

Linas Vepstas

Apr 20, 2017, 2:31:08 PM
to opencog
Ivan, I mostly agree (superficially) with most of what you are saying, but I notice you avoid or over-simplify the issues mentioned in the Wikipedia article "upper ontology". The points are several: different human beings have subtly different "upper ontologies"; they tend to change over time; they are often logically inconsistent; and they are strongly tied to mood, alertness, volubility, life experiences, culture, language. The way that Russians, Americans and Chinese think about "outer space" is different: not only is there no direct word-for-word translation for this concept, but it's worse: different people put different emphasis on what is important about space, on what its important defining characteristics are. For some people "space is infinite", for other people "space is where Star Trek happens", for others still "space is boring; inner space is what we should explore". So the "meaning" of the word "space" depends on the individual, on their identity, their "value system" (what they consider to be important) and their *political* perspective. Overtly political, even: "space should be conquered, and the conqueror gets to put their national flag on it, and claim all economic extractive rights". So what is "space", really?

--linas

Ivan Vodišek

Apr 20, 2017, 2:53:56 PM
to ope...@googlegroups.com
Yes, Linas, thank you for the response. That is why there is no exclusively definite interpretation of any expression. The expression "space" can be translated into numerous meanings, with each meaning having its own, slightly different interpretation in its own language. If we think about a "Multiverse", the notion of "space" could look different in each Universe sourced from the Multiverse (I hope my imagination doesn't spoil my arguments). If we tie semantics not only to the starting expression, but also provide a second parameter (the target), we have the opportunity to define semantics as a function of two parameters: source Universe and target Universe. And even when we reach the target Universe, there are still ways for a single target expression to remain ambiguous, I agree.

So the question we have to ask when we seek the semantics of an expression should have the following form: what does expression X in language A mean in language B, C or, maybe, D?

Ivan


Ivan Vodišek

Apr 20, 2017, 3:12:13 PM
to ope...@googlegroups.com
Not to forget: languages A, B, C and D from the previous post could all be different domains of the same language.

Daniel Gross

Apr 20, 2017, 3:37:53 PM
to opencog
Hi Ivan, 

Your work sounds very exciting ... would be great to hear more about it. 

I think one issue with the approach you are describing is that you have to assume the knowledge of a second language and a mapping, in principle, from the first to the second. 

I think systems that aim to self-learn (unsupervised) try to omit such an a-priori mapping because it would (presumably) make the knowledge capture process non-scalable. 

So you end up with a system that tries to self-learn the meaning of system A on its own terms (and via "meta-cognitive" strategies derived from the machine-learning approach at hand - which are by definition meaning-agnostic) ... so I wonder where the meaning is in this kind of machine - if the semantic graph is actually constructed out of the machine-learned parse of natural-language text without a predefined mapping to a semantic graph (which is what one wants to build in the first place).

I think this is essentially what confuses me - if I managed to explain it correctly ...

Daniel

Ivan Vodišek

Apr 20, 2017, 4:40:05 PM
to ope...@googlegroups.com
Hey Daniel, great to see someone interested in AGI :)

What about us humans - I mean, how do we think? I'm not trying to imitate our neural networks; I took another, more top-down approach, somewhere in between. But let's take ourselves as a thinking example. Do we see how our thoughts are formed? I think we don't see the math behind it (correct me if I'm wrong). All we see in our minds is input sensory data, or memories of it. From what we see in the input, we try to adjust our output to reach the input we care about. If we fail, we remember that we failed. If we succeed, we remember the output actions so we can repeat them wherever we find it appropriate. In this process, we can see only through our sensory input, yet we don't see the math behind it. From an AGI programming perspective, this math would be the invisible part, the part made of notions that programmers would type into the machine. The machine (at run time) doesn't need to see how it really functions behind the curtain, just to perform actions based on its input. The analogy is that an application user doesn't need to know how the application is programmed in order to use it. She enters some data, observes the output, and she can do wonderful stuff without ever seeing a line of code behind the application. In that sense, it is possible for us to change the world without knowing how we really do it. So, I assume, a machine could do it in a similar fashion.

Let's extrapolate this to our imaginary programming language: how would code in this language work? The code reads some input, does some math invisible to users, and outputs something back to users - but what is this output, really? If we say that the output is really just replicated input from the past, then even the programmer doesn't have to know the exact shape of the output. All the programmer needs to know is that the user entered something back there and that we want to replicate it in our output at a given moment, based again on similarities between input data, without knowing what the data actually is. And here we come to the essence of the problem: similarity. We need a method to compare inputs without knowing their actual values: we need to test whether input I1 equals input I2. And I believe (with some testing behind it) that that's all we need to do tasks as complex as solving mathematical equations or concluding new knowledge. My belief comes from the existence of a mechanism called pattern matching. We pattern-match a set of rules against some input and produce the relevant rule output. Remember that all these rule inputs (causes) and outputs (consequences) came about simply by remembering and replicating other inputs from past runs of the same process. From what I've seen in my work, with this pattern matching we can do pretty mean stuff - even comparing numbers by their positive or negative distance from zero, or branching through different decisions - and all we need is a test of whether two inputs are equal. We don't even have to know what these inputs represent - numbers, letters, colors, cats or mice - to do something nice with them, making the world a better place to live in.

I hope I didn't scare you with this philosophy message; things are a lot simpler when it comes to burning in the rules by which the machine does this or that, be it changing the lights at a traffic signal, or deciding the moment at which it has to stop the lip motors and speaker so as not to offend the person who asked "how do I look?" in the morning :) It could all be about input, equality matching and output. I am pretty sure about that by now.

Tx for asking interesting questions :)

ivan



Daniel Gross

Apr 20, 2017, 4:52:27 PM
to opencog
Hi Ivan, 

thank you for your response. 

Pattern matching is a very general-purpose mechanism - in my mind the key questions are:

- what governs the language for pattern description, and the semantics of how patterns match against inputs
- what governs the language of transformational rules triggered by patterns
- and finally, what mechanism creates the patterns and the associated transformational rules, so that the inputs and outputs are correlated meaningfully, are relevant (semantically, temporally), and are accurate enough in relation to the cognitive support they are intended (i.e. teleologically) to provide


Daniel

Ivan Vodišek

Apr 20, 2017, 5:25:19 PM
to ope...@googlegroups.com
Mr. Daniel Gross,

I'm afraid I'm going to leave the juicy AGI details to AGI developers (not to say it is an easy part - far from it). I decided to be just a technical guy, in case anyone is interested in my low-level solution: a programming language that handles application development and knowledge inference with equal ease (or difficulty).

If you are interested in some of my unfinished work: I assembled a paper to show off to some of my academic friends, but it is not yet ready for a broader public. The public version is missing some examples and their thorough explanations (as a proof of concept), but I decided first to program the language, build a user community, and then show my face to the real AGI researchers, if any of them would want to consider another solution.

The project is conceptually defined, I'm in the process of implementing the language in JavaScript, and things look good to me. Wish me luck :)

ivan




AT

Apr 20, 2017, 5:34:03 PM
to opencog
Daniel/Ivan,

It is quite obvious we are not really in OpenCog territory here, but what your discussion is hinting at is that you will need your own theory of meaning, or theory of the meaning of meaning. At the conceptual level my approach begins where Linas left off, i.e. there is no meaning independent of agents. And yes, signals between agents, even if they are "notes to oneself", attempt to "compress" information about the universe(s); something like "dad's back" means "I don't know if a supernova exploded, but hopefully it won't matter, while dad's return matters in so many ways". A "universe" with explicitly defined agents and their internal states will come as close as possible to intelligently interacting with and understanding humans.

AT

Daniel Gross

Apr 20, 2017, 7:28:06 PM
to opencog
Hi Ivan, 

I think it would be best if you could spend a bit of time working on a few representative examples that show what you can do with your embedded language. AI discussions tend to get very abstract very quickly :-), so to ground ourselves as "engineers" it's best to talk by way of examples. This helps highlight what one really means :-) by what one does.

thank you,

Daniel




Ed Pell

Apr 20, 2017, 10:30:42 PM
to opencog

DARPA has just requested proposals for a system that will read everything, listen to everything, look at everything and then figure out what is going on in the world. It is a stretch goal, but DARPA does those.

Ivan Vodišek

Apr 21, 2017, 8:04:56 AM
to ope...@googlegroups.com
Hi Ivan, 
I think it would be best if you could spend a bit of time working on a few representative examples that show what you can do with your embedded language. AI discussions tend to get very abstract very quickly :-), so to ground ourselves as "engineers" it's best to talk by way of examples. This helps highlight what one really means :-) by what one does.
thank you,
Daniel

Tx for the useful advice. It makes sense that a theory without practical solutions doesn't catch anyone's ear. I'll work on it a bit more soon. In the meanwhile, let's return to OpenCog.

ivan

Linas Vepstas

Apr 21, 2017, 11:02:03 AM
to opencog
Ivan,
What I wanted to say is that meaning depends not only on the language, but also on the person, and it changes over time. Most people agree on the meanings of most words, most of the time, but not always. The best example is the slang of some subculture. If the subculture is a gang, there might be only 10 or 20 people in the world who understand and mostly agree on the meanings of some of the words that the gang uses. And even then, they might not agree, due to some confusion.

Yes, meaning might be a morphism (a morphism being an arrow between two things, what you called "source" and "target"). Meaning might be more than just a morphism between concepts or words; it may be a morphism between structures. In the context of "rowing", the word "catch" corresponds to a specific set of physical movements. In the context of "swimming", it means something conceptually similar, but physically not the same. In the context of "baseball", it's completely different.

Daniel Gross

Apr 21, 2017, 11:12:43 AM
to opencog, linasv...@gmail.com
Hi Linas, 

I think you "morphism" example is very interesting and just to emphasize a key insight -- context. 

In context A one morphism may hold, in context B another - and you indicated two kinds of contexts: domains (swimming, rowing) and a human-introspective, value-laden interpretive context.

thank you,

Daniel


Linas Vepstas

Apr 21, 2017, 11:14:44 AM
to opencog
On Thu, Apr 20, 2017 at 2:37 PM, Daniel Gross <gros...@gmail.com> wrote:
so I wonder where the meaning is in this kind of machine - if the semantic graph is actually constructed out of the machine-learned parse of natural-language text without a predefined mapping to a semantic graph (which is what one wants to build in the first place).

I think this is essentially what confuses me - if I managed to explain it correctly ...

I claim that there is nothing more to meaning than the semantic graph. It's all there is, and that's that.

The turtle example: does the word "turtle" mean anything more than what you could ever say about it? Where, by "saying", I mean: twittering, blogging, writing a book, showing a photo, singing a song, dancing, or creating an architectural work?  What more could "turtle" mean, if it is not definable in one of these expressions?
 
Post-modern literary critics have already deconstructed "meaning" for holy saints and seers: yes, you can go up into the mountains, hallucinate, have an epiphany, talk to god, and carve two stone tablets with ten rules that attempt to describe "turtleness" and utterly fail to capture the true core nature of that epiphany.  So, yes, "true knowledge" is locked up inside of us, and ultimately, we have no practical technology by which we can express our individual, personal, inner understanding of "turtle" to outsiders.  Bummer. My understanding of "turtle" is forever locked away in my brain, at least until we have better MRI machines. But I accept that a narrative of words and pictures is a reasonable facsimile, expression thereof.


--linas

Linas Vepstas

Apr 21, 2017, 11:27:51 AM
to opencog

On Thu, Apr 20, 2017 at 4:34 PM, AT <sokra...@gmail.com> wrote:
It is quite obvious we are not really in OpenCog territory here


Why would you say that? The current coding task, in opencog, is to write the code that can perform the things that you describe, that I talk about. Although all of my examples may seem abstract, in the back of my mind I have some very concrete ideas about how to implement them, and this is an active topic for me. We have not yet begun to talk about which lines of code in which files will do this stuff, but we are not that far from it.

--linas

Daniel Gross

Apr 21, 2017, 11:32:06 AM
to opencog, linasv...@gmail.com
Hi Linas, 

I think my question here is not about the graph per se, but about the mechanism to employ to generate and evolve the graph.

Where does this graph come from - what is its genesis?

If it is through a notion of linguistic meaning that is itself auto-learned (via unsupervised ML) from text, with the grammar self-learned, then, as we indicated earlier, linguistic meaning is not the meaning embodied in the graph but merely a path construct - a path that, when read by humans, is meaningfully interpreted.

That is what confuses me, I think, and it is the basis for my question about meaning, linguistic and otherwise.

thank you,

Daniel 

Linas Vepstas

Apr 21, 2017, 11:50:33 AM
to Daniel Gross, opencog

On Fri, Apr 21, 2017 at 10:12 AM, Daniel Gross <gros...@gmail.com> wrote:
In context A one morphism may hold, in context B another - and you indicated two kinds of contexts: domains (swimming, rowing) and a human-introspective, value-laden interpretive context.


To return to Alex's original question, there was a question of how to represent knowledge in a computer. So, for opencog, a very minuscule subset of the knowledge graph might be:

ContextLink
     ConceptNode "swimming"
      EvaluationLink
           PredicateNode "catch"
           PhysicalMotorMovementLink
                 PositionLink...
                 VelocityLink....

That's the general idea. The above is actually a rather poor design for representing that knowledge: instead of position and velocity, it should be about hand and wrist. Instead of PredicateNode "catch" it should be PredicateNode "catch as taught by Mark", with additional links to Mark and to why his technique differs from the catch as taught by coach Ted. So this simplistic graph representation blows up out of control very rapidly - which is why it cannot be hand-authored; it's why the system must automatically discern and learn such structures.
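For anyone who wants to type something in: a version of the same idea that sticks to stock atom types would look roughly like the sketch below (the concept and predicate names are still invented for illustration, just like the link types above):

(use-modules (opencog))

(ContextLink
  (ConceptNode "swimming")
  (EvaluationLink
    (PredicateNode "catch as taught by Mark")
    (ListLink
      (ConceptNode "hand position")
      (ConceptNode "wrist angle"))))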

BTW, in opencog, any two-element link is a "morphism"

   SomeLink
        SomeNode "source"
        OtherNode "target"

It's OK to think of that as an arrow from source to target. But it's also OK to think of it as a binary tree, with "SomeLink" being the root and the two nodes being the leaves. So there are multiple ways to diagram these things.

--linas

Alex

Apr 21, 2017, 6:42:30 PM
to opencog
I didn't read this discussion in its entirety, but I just wanted to suggest a nice article, https://link.springer.com/chapter/10.1007%2F978-3-319-41649-6_11, in which a pragmatic definition of understanding and meaning is given. A process understands a phenomenon F if this process can 1) explain F; 2) predict F; 3) produce plans regarding F; 4) recreate F. The phenomenon F has meaning for some process if F is somehow related to the goals which the process tries to achieve. This is a good starting point for measuring understanding and for creating processes that maximize understanding.

So - my approach (https://groups.google.com/forum/#!topic/opencog/z_Uy5NYwjt4) is to create an initial process with implemented understanding and learning capabilities and let this process improve itself. So - this is a step-by-step approach with human involvement. Not as fancy an approach as waiting for full intelligence to emerge from large corpora, but, I guess, still a good alternative path to AGI.

Ed Guy

Apr 22, 2017, 10:53:44 AM
to opencog


Group:


If you’re not a Springer subscriber, you can find the content of the ‘About Understanding’ paper here: http://people.idsia.ch/~steunebrink/Publications/AGI16_understanding.pdf


Happy Reading

/ed

 

 

=====================

Edward T Guy, III, Ph.D.

ed...@eguy.org

Daniel Gross

Apr 22, 2017, 3:24:57 PM
to opencog, gros...@gmail.com, linasv...@gmail.com
Hi Linas, 

Thank you for the example:

I think this again helps visualize my further questions:

How is this additional conceptual knowledge harvested in a way that, on the one hand, mimics human thinking (i.e. it deploys context adequately, establishes relevant abstractions, and generally creates a structure that is parsimonious and conceptual - i.e. thing, not string) and, on the other hand, supports effective and efficient autonomous, goal-directed reasoning over it?

And, thinking out loud some more - what if a lot of common-sense knowledge is implicit and not observable? We can observe what people do, but it's much harder to know why they do it, and to know the connective (socio-psychological-cultural and value-laden - a now-favorite word of mine) tissue (personal experiences) that holds it all together and gives it explanatory meaning.

thank you,

Daniel 