Pros and cons


Nageen Naeem

unread,
Apr 26, 2017, 6:10:56 AM4/26/17
to opencog
Hey,
I'm searching for the pros and cons of using the AtomSpace for knowledge representation but haven't found a full-fledged answer. What are the pros and cons of using the AtomSpace? And why did OpenCog shift to Java from C++ — what are the reasons behind it?

Ben Goertzel

unread,
Apr 26, 2017, 12:36:04 PM4/26/17
to opencog
OpenCog did not shift from C++ to Java; it was always C++.

The advantage of the Atomspace is that it allows fine-grained semantic
representations of all forms of knowledge in a common framework. The
disadvantage is that this makes things complicated. The other advantage
is that this fine-grained representation makes data amenable to multiple
AI algorithms, including ones that can work together synergistically.

ben



--
Ben Goertzel, PhD
http://goertzel.org

"I am God! I am nothing, I'm play, I am freedom, I am life. I am the
boundary, I am the peak." -- Alexander Scriabin

Nageen Naeem

unread,
Apr 26, 2017, 2:41:21 PM4/26/17
to opencog
OpenCog didn't shift to Java from C++?
Thanks for explaining the pros and cons. If there is any paper comparing it with other architectures, kindly recommend it to me.

Linas Vepstas

unread,
Apr 26, 2017, 2:54:11 PM4/26/17
to opencog
On Wed, Apr 26, 2017 at 1:41 PM, Nageen Naeem <nage...@gmail.com> wrote:
OpenCog didn't shift to Java from C++?

You are welcome to study https://github.com/opencog for the source languages used.
 
Thanks for explaining the pros and cons. If there is any paper comparing it with other architectures, kindly recommend it to me.

Ben has written multiple books on the architecture in general.  The wiki describes particular choices.

I am not aware of any other (knowledge-representation) architectures that can do what the atomspace can do.  So I'm not sure what you want to compare against. A triplestore? Various action scripts? Prolog?

--linas
 

Nageen Naeem

unread,
Apr 26, 2017, 3:02:16 PM4/26/17
to opencog, linasv...@gmail.com
Basically, I want to compare knowledge representation techniques — specifically, knowledge representation in OpenCog and in CLARION. Any description, please?

Nageen Naeem

unread,
Apr 26, 2017, 3:06:55 PM4/26/17
to opencog, linasv...@gmail.com
How can I differentiate knowledge representation in OpenCog from traditional knowledge representation techniques?

Nageen Naeem

unread,
Apr 26, 2017, 3:11:22 PM4/26/17
to opencog
Ben, please respond.



Daniel Gross

unread,
Apr 26, 2017, 10:13:34 PM4/26/17
to opencog, linasv...@gmail.com
Hi Linas, 

I guess it would be good to differentiate between the KR architecture and the language. It would be great if there existed some kind of comparison of the OpenCog language to other comparable KR languages.

Then there are the cognitive architectures, which can be compared. I think Ben has a number of architectures compared in his book.

I guess one then needs a kind of "composite" -- what an architecture+language can do, since an architecture likely takes advantage of the language features.

Daniel

Linas Vepstas

unread,
Apr 26, 2017, 10:22:02 PM4/26/17
to Nageen Naeem, opencog
On Wed, Apr 26, 2017 at 2:02 PM, Nageen Naeem <nage...@gmail.com> wrote:
Basically, I want to compare knowledge representation techniques — specifically, knowledge representation in OpenCog and in CLARION. Any description, please?

There's a Wikipedia article on CLARION, and a bit of poking makes it clear that the project went defunct in 2007. The source is not available. From the WP article, and other sources, it appears to implement a hand-crafted neural net that performs psychological modelling. It might be possible to compare CLARION to MicroPsi, but there seems to be little in common with OpenCog. Well, OpenCog does have a psychological model, but it remains experimental and in a primitive state.

--linas  

Linas Vepstas

unread,
Apr 26, 2017, 10:27:45 PM4/26/17
to Nageen Naeem, opencog
On Wed, Apr 26, 2017 at 2:06 PM, Nageen Naeem <nage...@gmail.com> wrote:
How can I differentiate knowledge representation in OpenCog from traditional knowledge representation techniques?

OpenCog is really pretty traditional in its representational form. There are whizzy bits: the ability to assign arbitrary valuations to the KR (e.g. floating-point probabilities). Maybe I should say that OpenCog allows you to "design your own KR", although it provides a reasonable one, based on the PLN books.

There's a pile of tools not available in other KR systems, including a sophisticated pattern matcher, a prototype pattern miner, a learning subsystem, and an NLP subsystem.  It's an active project, and it's open source — these last two distinguishing it from pretty much everything else.

--linas

Linas Vepstas

unread,
Apr 26, 2017, 10:42:02 PM4/26/17
to Daniel Gross, opencog
On Wed, Apr 26, 2017 at 9:13 PM, Daniel Gross <gros...@gmail.com> wrote:
Hi Linas, 

I guess it would be good to differentiate between the KR architecture and the language. It would be great if there existed some kind of comparison of the OpenCog language to other comparable KR languages.

I don't quite understand. However, let me take a guess at the intent.

OpenCog allows you to design your own KR language; it doesn't much care. It provides a set of tools: a data store, a rule engine with backward and forward chainers, a pattern matcher, and a pattern miner.

OpenCog does come with a default "KR language", PLN -- it's described in multiple PLN books.  But if you don't like PLN, you can create your own KR language. All the parts are there.

The "cognitive architecture" is something you'd layer on top of the KR language (and/or on top of various neural nets, and/or on top of various learning algorithms, etc.).

OpenCog does not have a particularly firm "architecture" per se; we experiment, try to make things work, and learn from that. Ben would say that there is an architecture, it just hasn't been implemented yet.  There's a lot to do; we're only getting started.

--linas

Daniel Gross

unread,
Apr 27, 2017, 12:43:07 AM4/27/17
to opencog, gros...@gmail.com, linasv...@gmail.com
Hi Linas,

Yes your intuition is right. 

Thank you for your clarification. 

What is the core meta-language of OpenCog into which PLN can be loaded?

Daniel

Daniel Gross

unread,
Apr 28, 2017, 12:47:45 AM4/28/17
to opencog, nage...@gmail.com, linasv...@gmail.com
Hi Linas, 

I guess i should further ask:

What determines the expressiveness of OpenCog's representation — the one that is built into its inference?

thank you,

Daniel

Nageen Naeem

unread,
Apr 28, 2017, 2:34:54 AM4/28/17
to opencog, nage...@gmail.com, linasv...@gmail.com
Is the OpenCog knowledge representation language able to learn things?

Nageen Naeem

unread,
Apr 28, 2017, 2:42:05 AM4/28/17
to opencog, nage...@gmail.com, linasv...@gmail.com
Can we say that knowledge representation in OpenCog is somehow equal to human-level knowledge representation? If you can say so, then please give reasons.
Thanks
Regards 
Nageen

Linas Vepstas

unread,
Apr 28, 2017, 10:47:16 AM4/28/17
to Daniel Gross, opencog
On Wed, Apr 26, 2017 at 11:43 PM, Daniel Gross <gros...@gmail.com> wrote:
Hi Linas,

Yes your intuition is right. 

Thank you for your clarification. 

What is the core meta-language of OpenCog into which PLN can be loaded?

It's the system of typed Atoms and Values.  http://wiki.opencog.org/w/Atom    http://wiki.opencog.org/w/Value

You can add new types if you wish (you can remove them too, but stuff will then likely break), with the new types defining the new kinds of knowledge you want to represent.

There is a rich set of pre-defined types, which encode pretty much everything that is generically useful across the multiple projects that people have done.  We call this "language" "Atomese": http://wiki.opencog.org/w/Atomese

We've gone through a lot of different atom types, by trial and error; the current ones are the ones that seem to work OK.  There are over a hundred of them.

PLN uses only about a dozen of them, such as ImplicationLink, InheritanceLink, and, most importantly, EvaluationLink.

Using EvaluationLink is kind of like inventing a new type. So most users are told to use that, and nothing else.  Some types seem to deserve a short-hand notation, and so these get hard-coded for various reasons (usually performance).

--linas
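[Editor's note: to make the flavor of the above concrete, here is a toy sketch in plain Python of how typed Atoms and an EvaluationLink nest. This is not the real AtomSpace API (which is a C++ hypergraph store with Scheme and Python bindings); the "eats" predicate and concept names are purely illustrative.]

```python
# Toy model of Atomese-style typed atoms as nested Python tuples.
# Illustrative sketch only, not the actual AtomSpace implementation.

def node(atom_type, name):
    """A Node is a (type, name) leaf."""
    return (atom_type, name)

def link(atom_type, *outgoing):
    """A Link is a (type, (outgoing atoms...)) pair."""
    return (atom_type, tuple(outgoing))

# EvaluationLink acts like "inventing a new relation type":
# here, a hypothetical "eats" predicate applied to two concepts.
eats = link("EvaluationLink",
            node("PredicateNode", "eats"),
            link("ListLink",
                 node("ConceptNode", "cat"),
                 node("ConceptNode", "fish")))

def atom_type_of(atom):
    """Every atom carries its type in the first slot."""
    return atom[0]

print(atom_type_of(eats))        # EvaluationLink
print(atom_type_of(eats[1][0]))  # PredicateNode
```

The point of the sketch is only the shape: a generic EvaluationLink over a PredicateNode lets you express any new relation without minting a new hard-coded atom type.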

Daniel Gross

unread,
Apr 28, 2017, 11:09:53 AM4/28/17
to opencog, gros...@gmail.com, linasv...@gmail.com
Hi Linas, 

Thank you. 

What is the mechanism to endow new language elements in Atomese with (custom) inference semantics?

thank you,

Daniel

Ben Goertzel

unread,
Apr 28, 2017, 11:12:00 AM4/28/17
to opencog, gros...@gmail.com, Linas Vepstas
to implement new inference rules, you code new ImplicationLinks,
wrapped with LambdaLinks etc. ...

new inference rules coded as such Atoms, can be executed perfectly
well by the URE rule engine...

quantitative truth value formulas associated with new inference rules
can be coded in Scheme or python and wrapped in GroundedSchemaNodes

easy peasy...

Linas Vepstas

unread,
Apr 28, 2017, 11:39:44 AM4/28/17
to Daniel Gross, opencog, Nageen Naeem
On Thu, Apr 27, 2017 at 11:47 PM, Daniel Gross <gros...@gmail.com> wrote:
Hi Linas, 

I guess i should further ask:

What determines the expressiveness of OpenCog's representation — the one that is built into its inference?

That's three different questions.

Inference is done by PLN, and it uses only a dozen atom types or so.

You can design your own inference system by using the provided backward and forward chainers.  These can work with any atom types.

The default set of atom types is quite rich (over a hundred), but you can add more if you wish.  http://wiki.opencog.org/w/Atom_types

The "expressiveness" isn't limited, per se: the OpenCog atom types define a typed lambda calculus. Thus, you can compute anything a Turing machine can compute, and also more: anything a quantum computer could compute, or a generic machine on a homogeneous space (a quotient of Lie groups) -- the latter generalizing both the Turing machine and the quantum computer.  Whoop-de-doo. But "expressiveness" is not a hard problem to solve.  That's not where the action is.

--linas

Linas Vepstas

unread,
Apr 28, 2017, 11:50:20 AM4/28/17
to Nageen Naeem, opencog
On Fri, Apr 28, 2017 at 1:34 AM, Nageen Naeem <nage...@gmail.com> wrote:
Is the OpenCog knowledge representation language able to learn things?

Yes, but that is a topic of current, active research.  There are four ways to do this:
1) use MOSES
2) use the pattern miner
3) use the language-learning subsystem
4) use the neural-net subsystem. Ralf is working on that; it's a kind of generalization of the earlier "DeSTIN", using TensorFlow under the covers.  So far, it's been used to create facial expressions (for use in humanoid robots).

I'm currently working on language learning and have vague plans to port it over to the pattern miner, someday.  I haven't looked at the pattern miner yet; I'm guessing that it remains at a rather primitive, low level, for now.

Basically, MOSES is "mature"; the other three are not — they're in very active development.

--linas

Linas Vepstas

unread,
Apr 28, 2017, 12:11:06 PM4/28/17
to Nageen Naeem, opencog
On Fri, Apr 28, 2017 at 1:42 AM, Nageen Naeem <nage...@gmail.com> wrote:
Can we say that knowledge representation in OpenCog is somehow equal to human-level knowledge representation? If you can say so, then please give reasons.

I'm very sorry, but that question does not make sense.  Knowledge representation in humans is done floating in a chemical soup of hormones in the bloodstream, neurotransmitters in junctions, and high-speed electrical signals that "teleport" (think "stargate") neurotransmitters across long distances at high speeds. (We call these teleportation devices "neurons"; they are up to two meters long.)

This two-meter limit can be overcome by using printed books, television media, social media, and e-mail. Basically, my neurons talk to your neurons through email.  Email is a weird idea-teleportation device.  Our two brains are part of a much bigger brain.

Currently, the largest such "global brains" are capitalist corporations and nations.  Think of a corporation, or a nation, as a super-sophisticated "meme".  Like a cat meme, but much, much more complex.

Although corporations are enabled by humans, they run automatically.  Some machines are created by accident; for example, World War One was a knowledge representation system that happened "by accident", and it only stopped when it ran out of feed-stock (young men to kill).

AGI is the gradual migration of various portions of corporations onto silicon computers.

Going in the other direction, Roger Penrose and Stuart Hameroff think that individual living biological cells are quantum computers. Maybe they are right.

Although OpenCog can be used to represent a quantum computer, it would be silly to use this representation and then try to create a fully automated, no-humans-involved capitalist corporation.  In principle, OpenCog can do this.  In practice, not so much.

--linas

Linas Vepstas

unread,
Apr 28, 2017, 12:13:39 PM4/28/17
to Daniel Gross, opencog
On Fri, Apr 28, 2017 at 10:09 AM, Daniel Gross <gros...@gmail.com> wrote:
Hi Linas, 

Thank you. 

What is the mechanism to endow new language elements in Atomese with (custom) inference semantics?

What Ben said.  Create your own inference engine by assembling it out of the chainers. Create new language elements by defining new atom types. Give them any semantics you want.

--linas

Nageen Naeem

unread,
Apr 29, 2017, 3:20:07 AM4/29/17
to opencog, nage...@gmail.com, linasv...@gmail.com
Thanks, Linas.
Now my question is: what is knowledge? How can you explain knowledge in a cognitive agent? Is knowledge basically a data structure, information, or what?

Alex

unread,
Apr 29, 2017, 7:16:43 PM4/29/17
to opencog, nage...@gmail.com, linasv...@gmail.com
Pretty basic questions. Modal logics serve to formalize such notions as 'agent knows that...' and 'agent believes that...'. So the question is about the representation of modalities in the Atomspace, and this question has already been discussed here (the result is that special predicates can be used for the expression of modal statements). Regarding the content of the knowledge (everything that sits under the modal operators): this content can be more or less any logic — propositional logic, predicate logic, modal logic (then we have nested and mixed modalities), action logic and so on, up to universal logic (there is a good Springer journal and book series, Logica Universalis).

And there is another question: what can an agent do with her knowledge? How does an agent know that she knows something, and whether her knowledge is sufficient, or whether she should create the goal "go out and seek more knowledge"? Such a perspective leads us to the questions "what is understanding", "what are tests for understanding", "how can we measure understanding", and "what is meaning". Those questions are more or less answered in https://link.springer.com/chapter/10.1007%2F978-3-319-41649-6_11

BTW, there is an Artificial General Intelligence conference series (LNCS) and a de Gruyter Journal of Artificial General Intelligence.

Nageen Naeem

unread,
Apr 30, 2017, 3:30:30 PM4/30/17
to opencog, nage...@gmail.com, linasv...@gmail.com
Thanks.
What features are still missing in OpenCog, and what is the current state of OpenCog?

Nil Geisweiller

unread,
May 2, 2017, 1:37:11 AM5/2/17
to ope...@googlegroups.com


On 04/28/2017 06:49 PM, Linas Vepstas wrote:
> On Fri, Apr 28, 2017 at 1:34 AM, Nageen Naeem wrote:
>> Is the OpenCog knowledge representation language able to learn things?
>
> Yes, but that is a topic of current active research.  [Linas lists
> MOSES, the pattern miner, the language-learning subsystem, and the
> neural-net subsystem.]

Reasoning can be used too. You could, for instance, query

  Implication
    <go-to-the-store>
    Variable "$X"

via the backward chainer, and it would fill in the blanks with the $X values
that directly and indirectly match. That is an inefficient form of learning,
but still.

Nil
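[Editor's note: a rough sketch of that fill-in-the-blank behavior, written as plain Python over a made-up set of implication facts. This is not the actual URE backward chainer; the transitive walk merely imitates the "directly and indirectly match" behavior Nil describes.]

```python
# Toy sketch of "query with a variable, fill in the blanks".
# The fact set is hypothetical; a real backward chainer would also
# chain through rules, which this sketch imitates with a transitive walk.

facts = {
    ("go-to-the-store", "buy-food"),
    ("buy-food", "eat"),
    ("eat", "be-satisfied"),
}

def query_implications(antecedent):
    """Return everything directly or indirectly implied by `antecedent`."""
    results, frontier = set(), {antecedent}
    while frontier:
        nxt = {c for (a, c) in facts if a in frontier} - results
        results |= nxt
        frontier = nxt
    return results

print(sorted(query_implications("go-to-the-store")))
# ['be-satisfied', 'buy-food', 'eat']
```

The direct match is "buy-food"; "eat" and "be-satisfied" are the indirect (chained) matches.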

Nil Geisweiller

unread,
May 2, 2017, 1:45:34 AM5/2/17
to ope...@googlegroups.com, gros...@gmail.com, Linas Vepstas
On 04/28/2017 06:11 PM, Ben Goertzel wrote:
> to implement new inference rules, you code new ImplicationLinks,
> wrapped with LambdaLinks etc. ...

Some precision: you can encode rules as data using, for instance,
ImplicationLinks, then use PLN or any custom deduction, modus ponens,
etc. rules defined as BindLinks to reason over them. Or you can directly
encode your rules as BindLinks. The following example demonstrates the two ways:

https://github.com/opencog/atomspace/tree/master/examples/rule-engine/frog

Nil
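[Editor's note: in the same spirit as that frog example, here is a toy Python analogue of "rule encoded as data, applied by a generic procedure". Dictionaries and tuples stand in for ImplicationLinks and BindLinks; the facts and rule are hypothetical, not taken from the linked example's actual Atomese.]

```python
# Toy analogue of encoding a rule as data and applying it with a
# generic procedure (the real version uses BindLinks and the URE).

facts = {("croaks", "Fritz"), ("eats-flies", "Fritz"), ("croaks", "Rex")}

# Rule as data: if all premises hold for some X, assert the conclusion.
frog_rule = {
    "premises": ["croaks", "eats-flies"],
    "conclusion": "is-frog",
}

def forward_step(rule, facts):
    """Generic modus-ponens-ish step: instantiate the rule on every
    individual that satisfies all of its premises."""
    individuals = {x for (_, x) in facts}
    return {(rule["conclusion"], x)
            for x in individuals
            if all((p, x) in facts for p in rule["premises"])}

print(forward_step(frog_rule, facts))  # {('is-frog', 'Fritz')}
```

Only Fritz satisfies both premises, so only Fritz is concluded to be a frog; Rex croaks but is not known to eat flies.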

nageenn18

unread,
May 2, 2017, 10:41:19 AM5/2/17
to ope...@googlegroups.com, gros...@gmail.com, Linas Vepstas
Dear all, 
Can anyone here explain in detail the concept of a truth value:
- strength
- confidence
- count
What is the concept of an attention value?
Please explain with an example.




nageenn18

unread,
May 2, 2017, 10:46:13 AM5/2/17
to ope...@googlegroups.com
Another question is: how is the Atomspace different from the chunking technique?
Can we merge chunking with the atoms concept?
Is there any representation of features, like chunks with dimension/value pairs?






Matthew Ikle

unread,
May 2, 2017, 11:00:27 AM5/2/17
to ope...@googlegroups.com
This is straightforward: strength is a measure of likelihood (it can be thought of as a probability), while confidence is a measure of how confident one is in the strength value. Confidence is related to the value of count: the more pieces of evidence upon which the strength is determined, the higher the confidence in the strength value.

The attention value is determined by what the system is working on at the moment. It is a measure of the importance of an Atom to the system at a point in time. As I write this, for example, "Atoms" in my mind related to the attention-allocation system (Economic Attention Networks) would have a high attention (or importance) value.

—matt
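[Editor's note: the count-to-confidence relationship Matt describes can be sketched as follows. The formula c = n/(n+k) is the usual PLN-style mapping from evidence count to confidence; the particular value of k (a "lookahead" constant) used here is an illustrative assumption, not a system default.]

```python
# Toy truth-value arithmetic: confidence grows with evidence count.
# c = n / (n + k) is a standard PLN-style mapping; k = 800.0 below
# is an arbitrary illustrative choice.

K = 800.0

def confidence_from_count(count):
    """More evidence -> confidence asymptotically approaches 1."""
    return count / (count + K)

def strength_from_evidence(positive, total):
    """Strength as the observed frequency of positive evidence."""
    return positive / total if total else 0.0

# 90 of 100 observations positive: high strength, but only modest
# confidence, since 100 observations is small relative to k.
s = strength_from_evidence(90, 100)
c = confidence_from_count(100)
print(round(s, 2), round(c, 3))  # 0.9 0.111
```

This illustrates Matt's point: strength and confidence are independent axes, and a confidently known low-probability fact is different from a weakly supported high-probability guess.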

Ben Goertzel

unread,
May 2, 2017, 11:01:56 AM5/2/17
to opencog
A chunk can be represented as an Atom if one wishes... "chunking" in
this context would be a particular process for creating new Atoms from
sets of previously existing ones...

Vishnu Priya

unread,
May 2, 2017, 11:17:47 AM5/2/17
to opencog, gros...@gmail.com, linasv...@gmail.com
Hi,


InheritanceLink Nageen human <strength, confidence>

Strength represents the degree to which the relation is true or false.
Confidence expresses the degree of belief in the strength; it expresses how certain or uncertain the strength is.

InheritanceLink Nageen human <.9, .9>

InheritanceLink Nageen monster <.9, .1>
This indicates that there exists only very small evidence that Nageen is a monster.

Atoms are usually annotated with attention values. They are of the following types:

STI: This value indicates how relevant this atom is to the currently running process/context.
LTI: This value indicates how relevant this atom might be to future processes/contexts. (Atoms with low LTI have no future use and get deleted if the AtomSpace gets too big.)
VLTI: This is a simple boolean that indicates that this atom should never be deleted. (Useful for system components that are written in Atomese.)

-Cheers,
Vishnu
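[Editor's note: those three attention components can be sketched as a small Python structure. The field names mirror Vishnu's description; this is not the exact AtomSpace API, and the threshold logic is a simplification of the real forgetting mechanism.]

```python
from dataclasses import dataclass

# Toy attention value: STI (current relevance), LTI (long-term
# usefulness), VLTI (never-forget flag), as described above.

@dataclass
class AttentionValue:
    sti: float = 0.0
    lti: float = 0.0
    vlti: bool = False

def forgettable(av, lti_threshold):
    """An atom may be dropped when its LTI is low, unless VLTI is set."""
    return (not av.vlti) and av.lti < lti_threshold

core = AttentionValue(sti=5.0, lti=0.1, vlti=True)   # system component
stale = AttentionValue(sti=0.0, lti=0.05)            # unused knowledge

print(forgettable(core, 1.0), forgettable(stale, 1.0))  # False True
```

The VLTI flag overrides the LTI check entirely, which is the behavior Vishnu describes for system components written in Atomese.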

Nageen Naeem

unread,
May 2, 2017, 11:44:11 AM5/2/17
to opencog, gros...@gmail.com, linasv...@gmail.com
Nil, I didn't get your answer clearly; actually, I'm new to OpenCog and many of the concepts are not clear to me. Kindly explain in a simple way.


Nageen Naeem

unread,
May 2, 2017, 11:45:38 AM5/2/17
to opencog, gros...@gmail.com, linasv...@gmail.com
Thanks Vishnu for such a simple explanation.

Nil Geisweiller

unread,
May 2, 2017, 3:52:39 PM5/2/17
to ope...@googlegroups.com, gros...@gmail.com, linasv...@gmail.com


On 05/02/2017 06:44 PM, Nageen Naeem wrote:
> Nil, i didn't get your answer clear, actually im new to opencog and many
> of the concepts are not clear to me kindly explain in simple way.

Do you still need help? (I think the others' answers are pretty good.)

Nil

Daniel Gross

unread,
May 3, 2017, 12:21:35 AM5/3/17
to opencog, gros...@gmail.com, linasv...@gmail.com
Hi,

Thank you for the example. 

Perhaps i can ask a follow up question:

How are the STI values set (how do we know what is relevant now)? At what time, and which processes in OpenCog are responsible for them? I assume that STI values are set for whole groups of atoms.

When, and for what purpose, are STI values changed?

Also, how are LTI values arrived at? It seems to me that low LTI values are like forgetting -- once it's gone, it's gone, unless re-learned.

thank you,

Daniel

Ben Goertzel

unread,
May 3, 2017, 12:36:19 AM5/3/17
to opencog, Daniel Gross, Linas Vepstas
When an Atom is used by a cognitive process, its STI gets boosted
("stimulated"), along with (to a much lesser degree) its LTI value.

The ECAN module has an importance-spreading agent that spreads STI and
LTI values around along the links in the Atomspace.

That's what's happening now; fancier methods of adjusting STI and LTI
using predictive modeling have been thought through but not
implemented/tested...

Low LTI can cause something to get saved to disk, not necessarily deleted
forever... knowledge can be retrieved from disk without being
relearned.
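[Editor's note: the stimulate-then-spread dynamic Ben describes can be sketched in plain Python. The graph and numbers are toys; real ECAN agents use rents, wages, and more elaborate update rules than this single diffusion step.]

```python
# Toy ECAN-style importance spreading: stimulate an atom, then
# diffuse a fraction of its STI along links to its neighbors.

links = {"dog": ["animal", "bark"], "animal": ["dog"], "bark": ["dog"]}
sti = {"dog": 0.0, "animal": 0.0, "bark": 0.0}

def stimulate(atom, amount):
    """A cognitive process using an atom boosts its STI."""
    sti[atom] += amount

def spread(fraction):
    """Each atom sends `fraction` of its STI, split among its neighbors."""
    deltas = {a: 0.0 for a in sti}
    for atom, neighbors in links.items():
        if not neighbors:
            continue
        share = sti[atom] * fraction / len(neighbors)
        for n in neighbors:
            deltas[n] += share
        deltas[atom] -= sti[atom] * fraction
    for a, d in deltas.items():
        sti[a] += d

stimulate("dog", 10.0)   # "dog" was just used by some process
spread(0.2)              # importance leaks to linked atoms
print({a: round(v, 2) for a, v in sti.items()})
```

After one spreading step, "dog" retains most of its importance while "animal" and "bark" each receive a share, which is the qualitative behavior of importance diffusion along Atomspace links.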