OpenPsi review


Linas Vepstas

Jun 9, 2016, 1:19:53 PM
to opencog, Nil Geisweiller, Amen Belayneh, Ben Goertzel
So -- after using OpenPsi in practice, I have some comments, questions and critiques. I'd like to have Nil pay attention and make comments, as his name came up next to the words "perform reasoning on OpenPsi rules using PLN".  I'm also thinking of applying moses-like mutations and genetic crossovers as well, and so, for all this to work well, the OpenPsi rule format should be firmed up a bit more.  So ...

Currently, all of the clauses that make up the context are duped into the same AndLink as the action to be performed. This is kind-of annoying, as there is no direct way of picking out the action from the jumble of clauses. Can we fix that?

Currently, all of the context clauses have to be EvaluationLinks/PredicateNodes of some kind, so that their conjunction can be easily obtained from the truth values.  If these are not crisp truth values, do we have some formula for merging these? I am interested in doing some "fuzzy" matching on the contexts.
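For merging non-crisp context strengths, the standard fuzzy-logic starting points are the min and product t-norms. The sketch below is illustrative Python, not the actual PLN conjunction formula; the function names are made up for this example:

```python
# Two standard t-norms for merging non-crisp context strengths.
# Illustrative only -- NOT the PLN formula, and not OpenCog API.

def fuzzy_and_min(strengths):
    """Goedel t-norm: the conjunction is only as true as its weakest conjunct."""
    return min(strengths)

def fuzzy_and_product(strengths):
    """Product t-norm: treats the conjuncts as independent."""
    result = 1.0
    for s in strengths:
        result *= s
    return result

context = [0.9, 0.7, 0.8]   # strengths of three context clauses
print(fuzzy_and_min(context))      # 0.7
print(fuzzy_and_product(context))  # 0.504
```

Either choice gives a graded "how well does this context match" number, which is what a fuzzy rule-selector would rank on.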

The action is currently defined as a Schema of some kind -- this causes two problems: first, everything else in the AndLink is a predicate, as mentioned above, whereas if the action is a schema, then ... it doesn't fit, like the rest.

Second problem with Schemas -- most actions need to be a sequence of steps to take, and thus are naturally expressed as steps in a SequentialAnd ... which means that most "actions" are naturally predicates. There seem to be several ways of converting a sequence of steps into a schema-valued action, but these all seem klunky. So ... ??  (Also, think about trying to apply mutations or genetic cross-overs to sequences of actions, and/or learning new sequences of actions based on what went well before -- learning "habits", or learning "skills" or "techniques" as semi-rigid sequences of actions applied to fairly sophisticated contexts ...)

It seems that all this could be firmed up and clarified a bit more.

--------
Next, I am interested in splitting the context into two parts: one part that specifies where to look for context, and a second part that actually checks to see if the context part of a rule applies.

This is best explained by example. Consider, for example, rules that are used to apply to input sentences. To find out if a rule applies to the current input sentence, I don't search the entire atomspace to find the "current input sentence" -- I already know what it is -- it just came in from the auditory and speech-to-text subsystem. Thus it is straight-forward to see which rules apply to the input sentence, because I already know what it is.

I'd like to apply this insight/paradigm to ALL parts of a psi-rule context -- for example, if a rule applies only when the discussion topic is "astronauts", then I don't have to search all of the atomspace to find where this topic is -- I can find it instantly (probably because there is a StateLink hooked to a PredicateNode called "current discussion topic").

Thus, I'm thinking that every part of a Psi rule context should have associated with it some method, some means of localizing where this context is stored, or provide some way of fetching it -- this can then be used to cut down on the search for applicable Psi rules.

This is both of theoretical interest, and also practical: the chat subsystem has almost 200K rules in it -- it is impractical to evaluate all 200K of them, to see which ones are to be applied. A quick rule-finder is needed.  I've already mostly implemented one, but I am thinking that it could be pretty generic, and was wondering just what exact form it could take.

(one way or another, I plan to make it "mostly generic" in the upcoming weeks/months. I'm just thinking out loud, in the early stages here.)

(How this could interact with PLN and/or moses is unclear)

--linas

Ben Goertzel

Jun 9, 2016, 1:39:55 PM
to Linas Vepstas, opencog, Nil Geisweiller, Amen Belayneh
Hi,

> So -- after using OpenPsi in practice, I have some comments questions and
> critiques. I'd like to have Nil pay attention and make comments, as his name
> came up next to the words "perform reasoning on OpenPsi rules using PLN".
> I'm also thinking of applying moses-like mutations and genetic crossovers,
> as well, and so, for all this to work well, the OpenPsi rule format should
> be firmed up a bit more. So ...

Makes sense...

> Currently, all of the clauses that make up the context are duped into the
> same AndLink as the action to be performed. This is kind-of annoying, as
> there is no direct way of picking out the action from the jumble of clauses.
> Can we fix that?

We could just write the first part of the Psi Implication as

AND
    AND
        context-pred1
        context-pred2
        ...
    SequentialAND
        schema-1
        schema-2
        ...

I guess... or are you thinking of something fancier?

> Currently, all of the context clauses have to be
> EvaluationLinks/PredicateNodes of some kind, so that their conjunction can
> be easily obtained from the truth values. If these are not crisp truth
> values, do we have some formula for merging these? I am interested in doing
> some "fuzzy" matching on the contexts.

PLN has a heuristic rule for this, yeah... eventually it will be
replaced with a more sophisticated rule that tries to account for all
the various k-ary dependencies among the conjuncts, but we don't need
that right now...

> The action is currently defined as a Schema of some kind -- this causes two
> problems: first, everything else in the AndLink is a predicate, as
> mentioned above, whereas if the action is a schema, then ... it doesn't
> fit, like the rest.
>
> Second problem with Schemas -- most actions need to be a sequence of steps
> to take, and thus are naturally expressed as steps in a SequentialAnd ...
> which means that most "actions" are naturally predicates. There seem to be
> several klunky ways of converting a sequence of steps into a schema-valued
> action, but these all seem klunky. So ... ??

It seems there is a semantic subtlety here. When we apply
SequentialAnd to a sequence of ExecutionOutputLinks, we are really
doing something imperative, but making it look like something
logic-based.... I don't think there is any inconsistency here, but
some subtleties (familiar from functional programming) would be needed
to formally model what's going on...

As your above comments indirectly suggest, a SequentialAND of a series
of ExecutionOutputLinks isn't really a predicate in the
straightforward sense; I think it's got to be treated as effectively a
"macro execution output link" right?



>(Also, think about trying to
> apply mutations or genetic cross-overs to sequences of actions, and/or
> learning new sequences of actions based on what went well, before --
> learning "habits", or learning "skills" or "techniques" as semi-rigid
> sequences of actions applied to fairly sophisticated contexts ...

cool ;D

> Next, I am interested in splitting the context into two parts: one part that
> specifies where to look for context, and a second part that actually checks
> to see if the context part of a rule applies.
>
> This is best explained by example. Consider, for example, rules that are
> used to apply to input sentences. To find out if a rule applies to the
> current input sentence, I don't search the entire atomspace to find the
> "current input sentence" -- I already know what it is -- it just came in
> from the auditory and speech-to-text subsystem. Thus it is straight-forward
> to see which rules apply to the input sentence, because I already know what
> it is.

Hmmm... but can't you just pack this into the Atoms defining the context...

I.e., is this a suggestion for an extension of the Psi-Implication
format, or a suggestion for what kinds of contexts we should
habitually be using in our Psi Implications?

If you want the rule to look in a certain place for a certain Atom,
can't you just specify the Atom's location explicitly in the predicate
constructs used in the context part of the Psi Implication rule?

Or are you looking for some sort of Atomese "library function" that
makes this concise and elegant, since it has to be done over and over
again... I guess? ...

> Thus, I'm thinking that every part of a Psi rule context should have
> associated with it some method, some means of localizing where this context
> is stored, or provide some way of fetching it -- this can then be used to
> cut down on the search for applicable Psi rules.

I see the need, but I wonder if the localization can just be done
explicitly within the context predicates .. so that the PM will then
take account of it automatically...


> (How this could interact with PLN and/or moses is unclear)
>

Hmm, well if the localization is just some logical Atoms, then PLN can
leverage it explicitly...

ben

Linas Vepstas

Jun 10, 2016, 1:14:14 AM
to Ben Goertzel, opencog, Nil Geisweiller, Amen Belayneh
On Thu, Jun 9, 2016 at 8:39 PM, Ben Goertzel <b...@goertzel.org> wrote:
Hi,

> So -- after using OpenPsi in practice, I have some comments questions and
> critiques. I'd like to have Nil pay attention and make comments, as his name
> came up next to the words "perform reasoning on OpenPsi rules using PLN".
> I'm also thinking of applying moses-like mutations and genetic crossovers,
> as well, and so, for all this to work well, the OpenPsi rule format should
> be firmed up a bit more.  So ...

Makes sense...

> Currently, all of the clauses that make up the context are duped into the
> same AndLink as the action to be performed. This is kind-of annoying, as
> there is no direct way of picking out the action from the jumble of clauses.
> Can we fix that?

We could just write the first part of the Psi Implication as

AND
    AND
       context-pred1
       context-pred2
       ...
    SequentialAND
       schema-1
       schema-2
       ...

I guess... or are you thinking of something fancier?

Or something simpler... The point is that AndLink is not ordered. So when looking at it, one is just as likely to see this:

 AND
    SequentialAND
       schema-1
       schema-2
       ...
    AND
       context-pred6
       context-pred3
       ...
Which part is the context, now, and which part is the action?  How can one know which is which?

A lesser issue is that SequentialAnd proceeds to the next step only if the previous step returned "true". What is the truth value of a schema?


> Currently, all of the context clauses have to be
> EvaluationLinks/PredicateNodes of some kind, so that their conjunction can
> be easily obtained from the truth values.  If these are not crisp truth
> values, do we have some formula for merging these? I am interested in doing
> some "fuzzy" matching on the contexts.

PLN has a heuristic rule for this, yeah... eventually it will be
replaced with a more sophisticated rule that tries to account for all
the various k-ary dependencies among the conjuncts, but we don't need
that right now...

Well, but I sort of do, already. The robot can understand some direct English commands, like "turn right".  I'd like to set it up so that if STT messes up, or something complicated is said, and "gargle blarb turn right" is the input text, that would still be acted on.  I'm not ready to experiment with this next week, but it is beginning to come up already.

> The action is currently defined as a Schema of some kind -- this causes two
> problems: first, everything else in the AndLink is a predicate, as
> mentioned above, whereas if the action is a schema, then ... it doesn't
> fit, like the rest.
>
> Second problem with Schemas -- most actions need to be a sequence of steps
> to take, and thus are naturally expressed as steps in a SequentialAnd ...
> which means that most "actions" are naturally predicates. There seem to be
> several klunky ways of converting a sequence of steps into a schema-valued
> action, but these all seem klunky. So ... ??

It seems there is a semantic subtlety here.   When we apply
SequentialAnd to a sequence of ExecutionOutputLinks, we are really
doing something imperative, but making it look like something
logic-based....   I don't think there is any inconsistency here, but
some subtleties (familiar from functional programming) would be needed
to formally model what's going on...

Yes. There's a blurry boundary between imperative code written in python, to send ROS messages, and imperative sequences that more naturally fit in the atomspace.  For example: if someone left the room, then glance at where they were last seen, then clear the face-visibility flag, and then update the room state. These three are currently done in atomese, and they are "naturally" atomese, since visibility and room state are in the atomspace.

As your above comments indirectly suggest, a SequentialAND of a series
of ExecutionOutputLinks isn't really a predicate in the
straightforward sense; I think it's got to be treated as effectively a
"macro execution output link" right?

Yes, exactly. That's the narrow technical issue -- should a new MacroExecutionOutputLink be invented, or can we find some other way that does not require inventing yet another link type?

A related, more general issue is how to get the system to learn new behaviors, and then remember them -- i.e. how to represent them. For example: "if someone says something nice, then raise arm, wave, and say 'you're welcome'". This can't really be two or three psi rules, it has to be one. The talking and the arm movement can be done in parallel, but the raising of the arm must occur before the waving.

Clearly, we need such compound actions, but it's not clear how to represent them in such a way that either PLN or moses could do something useful.  A slightly unrealistic example: should one wave hello, while brandishing a knife? How, exactly, would that work?
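One way to make "one rule, compound action" concrete is to represent the whole action as a single nested sequence/parallel structure. The sketch below is purely illustrative Python (none of these names are OpenPsi API): a `seq` node enforces ordering, so the arm is raised before the wave, while the top-level `par` node lets the utterance overlap with the arm movement.

```python
# Illustrative compound-action structure: nested ("seq", ...) / ("par", ...)
# nodes over primitive ("do", callable) leaves. Not OpenPsi's representation.
import threading

def run(plan):
    kind, steps = plan[0], plan[1:]
    if kind == "seq":
        # run each sub-plan in order
        for step in steps:
            run(step)
    elif kind == "par":
        # run sub-plans concurrently, wait for all to finish
        threads = [threading.Thread(target=run, args=(s,)) for s in steps]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
    elif kind == "do":
        steps[0]()  # a primitive action: just a callable in this sketch

log = []
plan = ("par",
        ("seq",
         ("do", lambda: log.append("raise-arm")),
         ("do", lambda: log.append("wave"))),
        ("do", lambda: log.append("say 'you're welcome'")))
run(plan)
# "raise-arm" always precedes "wave"; the utterance may land anywhere.
```

Because the whole plan is one value, a mutation or crossover operator could swap, reorder, or splice sub-plans -- which is exactly what makes this shape attractive for moses-style learning.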



>(Also, think about trying to
> apply mutations or genetic cross-overs to sequences of actions, and/or
> learning new sequences of actions based on what went well, before --
> learning "habits", or learning "skills" or "techniques" as semi-rigid
> sequences of actions applied to fairly sophisticated contexts ...

cool ;D

> Next, I am interested in splitting the context into two parts: one part that
> specifies where to look for context, and a second part that actually checks
> to see if the context part of a rule applies.
>
> This is best explained by example. Consider, for example, rules that are
> used to apply to input sentences. To find out if a rule applies to the
> current input sentence, I don't search the entire atomspace to find the
> "current input sentence" -- I already know what it is -- it just came in
> from the auditory and speech-to-text subsystem. Thus it is straight-forward
> to see which rules apply to the input sentence, because I already know what
> it is.

Hmmm... but can't you just pack this into the Atoms defining the context...

so it seems ...

I.e., is this a suggestion for an extension of the Psi-Implication
format, or a suggestion for what kinds of contexts we should
habitually be using in our Psi Implications?

It's a statement of an issue. Not sure how best to solve it.

If you want the rule to look in a certain place for a certain Atom,
can't you just specify the Atom's location explicitly in the predicate
constructs used in the context part of the Psi Implication rule?

Sure, but it's computationally infeasible to run 200K rule evaluations per conversational turn.  Based on recent experience, those 200K evaluations took 15 minutes on my admittedly under-powered cheapo laptop.  You can't have a conversation if each turn takes 15 minutes.

Or are you looking for some sort of Atomese "library function" that
makes this concise and elegant, since it has to be done over and over
again... I guess? ...

No, this is purely a computational performance thing. The goal is to cut down the number of psi rules by many orders of magnitude, before one even begins to evaluate them.  In the good old days, this was called the "frame problem".  In modern times, all AIML engines solve this by using a trie, and OpenCog solves this by using a DualLink.  But in either case, this can be applied only to a single chunk of context: the current sentence.  It's now time to generalize this, i.e. to handle not just the current input sentence, but state in general.
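The trie trick used by AIML engines can be sketched in a few lines. This is a toy illustration of the idea only (not OpenCog's DualLink, and real AIML tries also handle wildcards): rules are indexed by their context pattern, so a lookup touches only the handful of rules that could possibly match, instead of all 200K.

```python
# Toy trie-based rule index: walk the input through the trie and collect
# only the rules whose pattern is a prefix of the input. Illustrative names.

class TrieNode:
    def __init__(self):
        self.children = {}
        self.rules = []   # rules whose full pattern ends at this node

def insert(root, pattern_words, rule):
    node = root
    for w in pattern_words:
        node = node.children.setdefault(w, TrieNode())
    node.rules.append(rule)

def lookup(root, input_words):
    """Return rules whose pattern is a prefix of the input words."""
    found, node = [], root
    for w in input_words:
        node = node.children.get(w)
        if node is None:
            break
        found.extend(node.rules)
    return found

root = TrieNode()
insert(root, ["turn", "right"], "rule-turn-right")
insert(root, ["turn", "left"], "rule-turn-left")
insert(root, ["wave"], "rule-wave")
print(lookup(root, ["turn", "right", "now"]))  # ['rule-turn-right']
```

Lookup cost is proportional to the input length, not the rule count -- which is the whole point when the rulebase has 200K entries. Generalizing "state in general" would mean indexing on things like the current-topic StateLink as well, not just word sequences.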
 

> Thus, I'm thinking that every part of a Psi rule context should have
> associated with it some method, some means of localizing where this context
> is stored, or provide some way of fetching it -- this can then be used to
> cut down on the search for applicable Psi rules.

I see the need, but I wonder if the localization can just be done
explicitly within the context predicates .. so that the PM will then
take account of it automatically...

Well, some sort of explicit syntax is needed that allows this to happen.

--linas

Nil Geisweiller

Jun 10, 2016, 1:44:05 AM
to Ben Goertzel, Linas Vepstas, opencog, Nil Geisweiller, Amen Belayneh
Hi,

On 06/09/2016 08:39 PM, Ben Goertzel wrote:
> We could just write the first part of the Psi Implication as
>
> AND
>     AND
>         context-pred1
>         context-pred2
>         ...
>     SequentialAND
>         schema-1
>         schema-2
>         ...
>
> I guess... or are you thinking of something fancier?
>
>> The action is currently defined as a Schema of some kind -- this causes two
>> problems: first, everything else in the AndLink is a predicate, as
>> mentioned above, whereas if the action is a schema, then ... it doesn't
>> fit, like the rest.
>>
>> Second problem with Schemas -- most actions need to be a sequence of steps
>> to take, and thus are naturally expressed as steps in a SequentialAnd ...
>> which means that most "actions" are naturally predicates. There seem to be
>> several klunky ways of converting a sequence of steps into a schema-valued
>> action, but these all seem klunky. So ... ??
>
> It seems there is a semantic subtlety here. When we apply
> SequentialAnd to a sequence of ExecutionOutputLinks, we are really
> doing something imperative, but making it look like something
> logic-based.... I don't think there is any inconsistency here, but
> some subtleties (familiar from functional programming) would be needed
> to formally model what's going on...
>
> As your above comments indirectly suggest, a SequentialAND of a series
> of ExecutionOutputLinks isn't really a predicate in the
> straightforward sense; I think it's got to be treated as effectively a
> "macro execution output link" right?

This is resolved by using an ExecutionLink

Execution
    <action>
    <arguments>
    <output>

which is really just equivalent to

Evaluation
    Predicate "Execution"
    List
        <action>
        <arguments>
        <output>

So in the end it's all predicates.

Nil

Nil Geisweiller

Jun 10, 2016, 2:03:54 AM
to Ben Goertzel, Linas Vepstas, opencog, Nil Geisweiller, Amen Belayneh
Regarding evaluating the context of a rule, you'd use either the backward chainer or, if the context has already been evaluated, the pattern matcher (but the BC does that too). For instance, if you have the rule

PredictiveImplication
    Variable X    <-- sugar syntax for defining scope of X
    And
        Evaluation
            Predicate "user-do"
            X
        Execution
            GSN "do"
            X
    Evaluation
        Predicate "mimic-user"

Then you'd query

AtTime
    <current-time>
    Evaluation
        Predicate "user-do"
        X

Given that, you can then use reasoning again to estimate the likelihood of the goal (mimic-user) being achieved in the future (you can specify the temporal window in the PredictiveImplication) by triggering action do(X) for a certain X. But for now the action selection is simpler.

Nil


Nil Geisweiller

Jun 10, 2016, 2:45:48 AM
to linasv...@gmail.com, Ben Goertzel, opencog, Nil Geisweiller, Amen Belayneh


On 06/10/2016 08:13 AM, Linas Vepstas wrote:
> Or something simpler... The point is that AndLink is not ordered. So
> when looking at it, one is just as likely to see this:
>
> AND
>    SequentialAND
>       schema-1
>       schema-2
>       ...
>    AND
>       context-pred6
>       context-pred3
>       ...
>
>
> Which part is the context, now, and which part is the action? How can
> one know which is which?

Indeed, And is unordered. The action part starts with an ExecutionLink;
that's how to tell the difference.

>
> A lesser issue is that SequentialAnd proceeds to the next step only if
> the previous step returned "true". What is the truth value of a schema?

The truth value of a schema would be how much the universe inherits from
it. However, the truth value of a particular ExecutionLink is whether such
inputs return such outputs.

> Yes,. There's a blurry boundary between imperative code written in
> python, to send ROS messages, and imperative sequences that more
> naturally fit in in the atomspace. For example: if someone left the
> room, then glance at where they were last seen, then clear the
> face-visibility flag, and then update the room state. These three are
> currently done in atomese, and they are "naturally" atomese, since
> visibility and room state are in the atomspace.

I don't know these parts, but as far as Atomese is concerned the
boundary between imperative and declarative is pretty clear, use
ExecutionLink for declarative, use ExecutionOutputLink for imperative.

>
>
> As your above comments indirectly suggest, a SequentialAND of a series
> of ExecutionOutputLinks isn't really a predicate in the
> straightforward sense; I think it's got to be treated as effectively a
> "macro execution output link" right?

SequentialAnd has a pretty clear definition with respect to temporal
reasoning (SeqAnd A B === A occurs, then B occurs). The fact that this
link is used to construct imperative action sequences I guess is OK.

>
>
> Yes, exactly. That's the narrow technical issue -- should a new
> MacroExecutionOutputLink be invented, or can we find some other way that
> does not require inventing yet another link type?

Why would you need a MacroExecutionOutputLink? How would you use it
exactly? Like a Lambda? Like a DefinedSchemaNode?

>
> A related, more general issue is how to get the system to learn new
> behaviors, and then remember them -- i.e. how to represent them For
> example: "if someone says something nice, then raise arm, wave, and say
> 'you're welcome' ". This can't really be two or three psi rules, it has
> to be one. The talking and the arm movement can be done in parallel, but
> the raising of the arm must occur before the waving.

We're not there yet, but yes, definitely PLN, MOSES, and the pattern miner
would do that.

>
> Clearly, we need such compound actions, but its not clear how to
> represent them in such a way that either PLN or moses could do something
> useful. A slightly unrealistic example: should one wave hello, while
> brandishing a knife? How, exactly, would that work?

If you have rules expressing stuff about waving a knife, then PLN can use
that to estimate the likelihood of having something bad happen if such a
composite action is taken. But again we're not there yet (the BC needs to
be completed first; I'm experimenting with FC inference on NLP, so I'm not
working on the BC ATM).

> If you want the rule to look in a certain place for a certain Atom,
> can't you just specify the Atom's location explicitly in the predicate
> constructs used in the context part of the Psi Implication rule?
>
>
> Sure, but its computationally infeasible to run 200K rule evaluations
> per conversational turn. Based on recent experience, those 200K
> evaluations took 15 minutes on my admittedly under-powered cheapo
> laptop. You can't have a conversation if each turn takes 15 minutes.

But wouldn't that be the job of ECAN? Only rules in the attentional span
(getting there via Hebbian or semantic relationships with contexts,
etc.) would need to be taken into account.

Nil

Linas Vepstas

Jun 10, 2016, 4:02:52 AM
to Nil Geisweiller, Ben Goertzel, opencog, Amen Belayneh
OK.  Maybe that issue is not that big of a deal; I think I can work with it.

Next topic: How, exactly, should AtTimeLink work? Events have a characteristic time scale: for sensory events, I want to react to sounds and sights from the last 1 or 2 seconds. I'm not sure how to specify and handle this kind of FuzzyAtTime.  What's the characteristic time scale for fuzzy time matching?

Note that currently, this problem is solved by using StateLink, but that doesn't work for a historical record of events.  Also, the current handling of time in the behavior code is hacky and messy: actions like "if you haven't done X for roughly 10 seconds or longer, then do Y", where the "roughly 10 seconds" is a nasty, tortured construction of RandomNumberLinks, GetLinks to get the 10-second delta-time, MinusLinks to subtract time, and some more math to compute a standard deviation (i.e. "roughly 10 seconds" really means "10 +/- 2 seconds", where 2 is a deviation).  It would be nice to replace this crazy-quilt of Atomese computation with something that PLN (or other rule systems) could understand.
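One declarative alternative to the RandomNumberLink construction would be to express "roughly 10 seconds" as a single fuzzy membership function over elapsed time. The Gaussian choice and the names below are assumptions for illustration, not anything OpenPsi currently defines:

```python
# Fuzzy reading of "roughly 10 +/- 2 seconds" as a Gaussian membership
# function over elapsed time. Assumed model, not OpenPsi behaviour.
import math

def roughly(elapsed, target=10.0, sigma=2.0):
    """Degree to which `elapsed` counts as 'roughly `target` seconds'."""
    return math.exp(-((elapsed - target) ** 2) / (2 * sigma ** 2))

def overdue(elapsed, target=10.0, sigma=2.0):
    """Degree to which 'it has been roughly `target` seconds or longer'."""
    return 1.0 if elapsed >= target else roughly(elapsed, target, sigma)

print(round(overdue(10.0), 3))  # 1.0
print(round(overdue(8.0), 3))   # 0.607
print(round(overdue(4.0), 3))   # 0.011
```

The point is that a single declarative function like this is something a reasoner can inspect and manipulate, whereas a pile of RandomNumberLink arithmetic is opaque to PLN.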

--linas

Nil Geisweiller

Jun 10, 2016, 4:40:43 AM
to linasv...@gmail.com, Nil Geisweiller, Ben Goertzel, opencog, Amen Belayneh


On 06/10/2016 11:02 AM, Linas Vepstas wrote:
> OK. Maybe that issue is not that big of a deal; I think I can work with it.
>
> Next topic: How, exactly, should AtTimeLink work? Events have a
> characteristic time scale: for sensory events, I want to react to
> sounds, sights for the last 1 or 2 seconds. I'm not sure how to specify
> and handle this kind of FuzzyAtTime. What's the characteristic time
> scale for fuzzy time matching?

At the moment no temporal reasoning is taking place, no rules have been
coded, etc., but in the future it should use constructs such as
InitiatedAt, etc., as described here:
http://wiki.opencog.org/wikihome/index.php/Temporal_Reasoning

Regarding time scale, I wrote a thread about that a while ago
https://groups.google.com/forum/#!msg/opencog/hp2cdYaWwGU/7uqAgHK0sskJ

Regarding AtTimeLink record, see
http://wiki.opencog.org/wikihome/index.php/AtTimeLink

So for instance you may have recorded

AtTime <1 1>
    TimeNode "1:30:23pm"
    Evaluation
        Predicate "Sensor"
        Concept "Pins and needles"

And if it's currently 1:30:25pm, then you need to use reasoning to estimate
how true this thing still is, depending on your assumptions about the
temporal persistence of that sort of thing. Once you have that

AtTime <1 .9>
    TimeNode "1:30:25pm"
    Evaluation
        Predicate "Sensor"
        Concept "Pins and needles"

you can use this as a context to trigger some action according to some
openpsi rule. IOW, you often kinda have to make a leap of faith that by
the time you trigger the action the context is still gonna be true; we
humans often do that...
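This "leap of faith" can be given a simple quantitative reading: decay the observed strength by the age of the observation, under some assumed persistence half-life. The exponential model and the numbers below are my assumption for illustration, not the actual PLN temporal rule:

```python
# Decay an observation's strength with age, assuming the sensed fact
# persists with a given half-life. Assumed model, not PLN.

def decayed_strength(strength, age_seconds, half_life=5.0):
    return strength * 0.5 ** (age_seconds / half_life)

# Observed at 1:30:23pm with strength 1.0; it is now 1:30:25pm (age 2 s).
print(round(decayed_strength(1.0, 2.0), 2))  # 0.76
```

A rule's context check could then compare the decayed strength against a threshold, rather than requiring the observation to be exactly current.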

Nil

Ben Goertzel

Jun 10, 2016, 4:48:44 AM
to Linas Vepstas, Nil Geisweiller, opencog, Amen Belayneh
Regarding the first topic of how to do efficient pattern matching that
takes into account restrictions such as topic restrictions ... Amen
and I were just discussing this....

First of all, it seemed to us that one could *sometimes* use
PredicateNode truth value strengths to direct the PM in this regard...

It's similar to the case of doing a search in a large document
database for documents containing the keywords "Belayneh Hong Kong
software" .... If doing the search using a tool that only allows
searching on a single keyword in some defined collection of documents,
one definitely wants to start with "Belayneh" -- because it's the
least frequent of the keywords generally, so it's going to narrow down
the scope most. One can then search for the other terms in the set of
documents containing "Belayneh". This is going to work a lot better
than searching for "software" first (because there will be so many
documents containing "software")

Similarly if one has a context

AND
    A(x)
    B(x)
    C(x)

then if the predicate B has a much lower strength than A or C, one
probably wants the PM to start with looking for x so that B(x) holds,
and then check among these to see if A(x) and C(x) hold...

This strategy is actually not restricted to AND-clauses, it's
generally applicable (though it's especially useful for AND, it would
seem...)

On the other hand, the strength value is not the only relevant
quantity. It may happen that A(x) is faster to evaluate than B(x) or
C(x), so that even though B has a lower strength, it may be easier to
first find x satisfying A. Or it may be that there is some clever
trick, like use of DualLink, that allows us to rapidly figure out the
set of x for which A(x) is true.... Based on this sort of
consideration, it may be better for the PM to explore A first...
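The ordering heuristic described above can be sketched as a one-line scoring rule: evaluate first the conjunct with the lowest product of strength (how common it is) and evaluation cost. The scoring formula is an assumption for illustration; a real pattern matcher would measure both quantities:

```python
# Order conjuncts by estimated (selectivity * cost), cheapest-to-narrow first.
# Illustrative heuristic, not the PM's actual strategy.

def order_conjuncts(conjuncts):
    """conjuncts: list of (name, strength, eval_cost). Returns eval order."""
    return sorted(conjuncts, key=lambda c: c[1] * c[2])

conjuncts = [
    ("A", 0.50, 1.0),   # common, cheap to evaluate
    ("B", 0.01, 20.0),  # rare ("Belayneh"), but expensive to evaluate
    ("C", 0.40, 1.0),
]
print([name for name, _, _ in order_conjuncts(conjuncts)])  # ['B', 'C', 'A']
```

Here the rare-but-expensive B still wins; tuning the trade-off between rarity and cost is exactly the judgment call discussed above.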

Given these complexities, if we want to sometimes just tell the PM
which term to evaluate first, we could do that in a number of ways,
e.g.

AND
    MatchingPriorityLink
        A(x)
    B(x)
    C(x)

AND
    MatchingPriorityLink <.6>
        A(x)
    MatchingPriorityLink <.8>
        B(x)
    C(x)

with the additional rule

Implication
    AND
        s > 0
        MatchingPriorityLink <s>
            A
    A <1>


This is vaguely analogous to "cut" in Prolog -- it's an explicit
language command telling the PM where to search first....

Ideally, the system would keep a record of how long it spent searching
for satisfiers for each predicate, and would then be able to optimize
PM internal search paths accordingly. But obviously we're not there
yet...

-- ben
--
Ben Goertzel, PhD
http://goertzel.org

"When in the body of a donkey, enjoy the taste of grass." -- Tibetan saying

Ben Goertzel

Jun 10, 2016, 4:59:23 AM
to Nil Geisweiller, Linas Vepstas, opencog, Amen Belayneh
Regarding time scale ... I guess we all understand that it's better to
represent time using intervals than points, right?

But if we have a time interval I = (L, U), then to express that x is
in fuzzy proximity to I, we could use

near(x, I, s(I) )

where s(I) is a default scale associated with I, e.g. s(I) = U - L

If one wants to do something weird, one can then deviate from this and
say something like

near(x, the year 2001, scale=one minute)

which represent nearness to the year 2001 on the scale of minutes....
But the default would be

near(x, the year 2001, scale = one year)

We then would have a specific mathematical function defined for

near(x,I,s)

as is typically done in fuzzy theory...
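A concrete candidate for near(x, I, s), with the default scale s(I) = U - L, might look like the following; the triangular falloff is one conventional fuzzy choice, since the exact function is left open above:

```python
# Fuzzy nearness of a point x to an interval I = (lo, hi) on scale s:
# full membership inside I, linear falloff outside. One possible choice.

def near(x, interval, scale=None):
    lo, hi = interval
    if scale is None:
        scale = hi - lo          # default scale s(I) = U - L
    if lo <= x <= hi:
        return 1.0
    dist = (lo - x) if x < lo else (x - hi)
    return max(0.0, 1.0 - dist / scale)

year_2001 = (2001.0, 2002.0)                       # times in years
print(near(2001.5, year_2001))                     # 1.0 (inside the year)
print(near(2002.5, year_2001))                     # 0.5 (half a year past, year scale)
print(near(2002.5, year_2001, scale=1 / 525600))   # 0.0 on a one-minute scale
```

Swapping in a Gaussian or other falloff changes only the last line of the function; the interval-plus-scale interface is the part that matters for the representation.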

-- Ben

Linas Vepstas

Jun 10, 2016, 5:10:33 AM
to Nil Geisweiller, Ben Goertzel, opencog, Amen Belayneh
Hi Nil,

On Fri, Jun 10, 2016 at 9:45 AM, Nil Geisweiller <ngei...@googlemail.com> wrote:


On 06/10/2016 08:13 AM, Linas Vepstas wrote:
Or something simpler... The point is that AndLink is not ordered. So
when looking at it, one is just as likely to see this:

  AND
     SequentialAND
        schema-1
        schema-2
        ...
     AND
        context-pred6
        context-pred3
        ...


Which part is the context, now, and which part is the action?  How can
one know which is which?

Indeed, And is unordered. The action part starts with an ExecutionLink; that's how to tell the difference.

Yeah, but that means I still have to examine every atom in the AndLink, look at its type, and see if it's an ExecutionLink.  This is no better than what is done now, and is less general. (Currently, we look at each atom, and then look to see if it is a Member of Concept "OpenPsi: Action".)

It's OK, just inelegant.

A lesser issue is that SequentialAnd proceeds to the next step only if
the previous step returned "true". What is the truth value of a schema?

The truth value of a schema would be how much the universe inherits from it. However, the truth value of a particular ExecutionLink is whether such inputs return such outputs.

Sure. Everything in OpenCog is a Markov process, according to the theory. But in practice, such general advice isn't useful.  Here are some explicit examples of where things are, today.  For example:  https://github.com/opencog/ros-behavior-scripting/blob/master/src/behavior.scm#L251

Yes, the above should be split up into multiple, distinct OpenPsi rules, but even after that happens, there will still need to be short sequences of imperatives written in atomese.  Everything in that file is an example of the general issue -- currently, actions are de facto implemented as predicates, and NOT as schema, and changing them to Schema is confusing and opaque.  It breaks C++ code. It's not clear how to fix the C++ code so that it will work with Schema, and not break other things (e.g. the pattern matcher, which treats schema and predicates very differently).

I mean, we can change things, but let's not be flip: it's a lot of work, it impacts a lot of subsystems, and it's not easy work.
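To make the issue concrete, here is a simplified sketch in the style of behavior.scm (the predicate and callback names here are illustrative, not the actual definitions in that file):

```scheme
; A de-facto "action" written as a predicate: a SequentialAnd of steps.
; Each step is an EvaluationLink, i.e. a predicate, not a schema.
(DefineLink
   (DefinedPredicate "Handle departed face")
   (SequentialAnd
      (Evaluation (GroundedPredicate "scm: glance-at-last-seen") (ListLink))
      (Evaluation (GroundedPredicate "scm: clear-face-visibility-flag") (ListLink))
      (Evaluation (GroundedPredicate "scm: update-room-state") (ListLink))))
```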
 

Yes. There's a blurry boundary between imperative code written in
python, to send ROS messages, and imperative sequences that more
naturally fit in the atomspace.  For example: if someone left the
room, then glance at where they were last seen, then clear the
face-visibility flag, and then update the room state. These three are
currently done in atomese, and they are "naturally" atomese, since
visibility and room state are in the atomspace.

I don't know these parts, but as far as Atomese is concerned the boundary between imperative and declarative is pretty clear, use ExecutionLink for declarative, use ExecutionOutputLink for imperative.

Again, ponder the contents of https://github.com/opencog/ros-behavior-scripting/blob/master/src/behavior.scm and you'll see what the issue is.



    As your above comments indirectly suggest, a SequentialAND of a series
    of ExecutionOutputLinks isn't really a predicate in the
    straightforward sense; I think it's got to be treated as effectively a
    "macro execution output link" right?

SequentialAnd has a pretty clear definition with respect to temporal reasoning (SeqAnd A B === A occurs, then B occurs). The fact that this link is used to construct imperative action sequences I guess is OK.

What if the execution of A fails? Then what?  Do we proceed to B?  How can we even know that the execution of A failed?
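As far as I can tell, the only signal available is the truth value each step returns; a sketch (the callback names are made up):

```scheme
; A step that can fail: the scheme callback returns a TruthValue, and
; SequentialAnd stops at the first step that evaluates to false.
(define (try-raise-arm)
  (if (arm-motor-ok?)   ; hypothetical status check
      (stv 1 1)         ; true: SequentialAnd proceeds to the next step
      (stv 0 1)))       ; false: SequentialAnd aborts here

(SequentialAnd
   (Evaluation (GroundedPredicate "scm: try-raise-arm") (ListLink))
   (Evaluation (GroundedPredicate "scm: wave") (ListLink)))
```

A schema has no such return value, which is the point of the question.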



Yes, exactly. That's the narrow technical issue -- should a new
MacroExecutionOutputLink be invented, or can we find some other way that
does not require inventing yet another link type?

Why would you need a MacroExecutionOutputLink? How would you use it exactly? Like a Lambda? Like a DefinedSchemaNode?

Well ... again, look at behavior.scm and ponder how to rewrite it so that it uses only schema, and never predicates.
 


A related, more general issue is how to get the system to learn new
behaviors, and then remember them -- i.e. how to represent them. For
example:  "if someone says something nice, then raise arm, wave, and say
'you're welcome' ". This can't really be two or three psi rules, it has
to be one. The talking and the arm movement can be done in parallel, but
the raising of the arm must occur before the waving.
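For concreteness, such a compound action might be sketched as follows (Parallel to run the talking alongside the arm movement; the callback names are illustrative):

```scheme
; One compound action: speech runs in parallel with a strictly
; ordered arm sequence (raise, then wave).
(Parallel
   (Evaluation (GroundedPredicate "scm: say")
      (ListLink (Node "you're welcome")))
   (SequentialAnd
      (Evaluation (GroundedPredicate "scm: raise-arm") (ListLink))
      (Evaluation (GroundedPredicate "scm: wave") (ListLink))))
```

But it's unclear how PLN or MOSES is supposed to mutate or reason over a structure like this.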

We're not there yet, but yes definitely PLN, MOSES, pattern miner would do that.

?? We're definitely there, and have been for not quite a year now.  Look at behavior.scm -- it's rife with this kind of stuff.  It's only non-verbal behavior, but we're now adding verbal behaviors.   This is not an academic imponderable, it's a current, real-world issue, and has been for a while.


Clearly, we need such compound actions, but it's not clear how to
represent them in such a way that either PLN or MOSES could do something
useful.   A slightly unrealistic example: should one wave hello, while
brandishing a knife? How, exactly, would that work?

If you have rules expressing stuff about waving a knife, then PLN can use that to estimate the likelihood of having something bad happen if such a composite action is taken. But again, we're not there yet (BC needs to be completed first; I'm experimenting with FC inference on NLP, so I'm not working on the BC ATM).

OK, so this can be deferred for a while.

    If you want the rule to look in a certain place for a certain Atom,
    can't you just specify the Atom's location explicitly in the predicate
    constructs used in the context part of the Psi Implication rule?


Sure, but its computationally infeasible to run 200K rule evaluations
per conversational turn.  Based on recent experience, those 200K
evaluations took 15 minutes on my admittedly under-powered cheapo
laptop.  You can't have a conversation if each turn takes 15 minutes.

But wouldn't that be the job of ECAN? Only rules in the attentional span (getting there via hebbian or semantic relationships with contexts, etc.) would need to be taken into account.

Ehh? How can ECAN possibly know which of the 200K rules should be in the attentional span? Below is an example, from AIML; there are 200K more of these, roughly similar, many with variables in them.  The fact that they're AIML is a red herring -- they could be any kind of rules.  How can ECAN know that this is the rule, as opposed to some other one?

(psi-rule-nocheck
   ; context
   (list (AndLink
      (Evaluation
         (Predicate "*-AIML-pattern-*")
         (ListLink
            (Word "i")
            (Word "am")
            (Word "an")
            (Word "astronaut")
         ))
      ; Context with topic!
      (Evaluation
         (Predicate "*-AIML-topic-*")
         (ListLink
            (Word "astronaut")
         ))
   )) ;TEMPLATECODE

   ; action
   (ListLink
      (Word "what")
      (Word "missions")
      (Word "have")
      (Word "you")
      (Word "been")
      (Word "on?")
      (ExecutionOutput
         (DefinedSchema "AIML-tag think")
         (ListLink
            (ListLink
               (ExecutionOutput
                  (DefinedSchema "AIML-tag set")
                  (ListLink
                     (Concept "job")
                     (ListLink
                        (ExecutionOutput
                           (DefinedSchema "AIML-tag set")
                           (ListLink
                              (Concept "topic")
                              (ListLink
                                 (Word "astronaut")
                           )))
                  )))
         )))
   )
   (Concept "AIML chat subsystem goal")
   (stv 1 0.555555555555555)
   (psi-demand "AIML chat demand" 0.97)
)
 

--linas

Nil Geisweiller

unread,
Jun 10, 2016, 5:11:00 AM
to Ben Goertzel, Nil Geisweiller, Linas Vepstas, opencog, Amen Belayneh


On 06/10/2016 11:59 AM, Ben Goertzel wrote:
> Regarding time scale ... I guess we all understand that it's better to
> represent time using intervals than points, right?

Indeed. I've recently added the following link specially for that

http://wiki.opencog.org/wikihome/index.php/TimeIntervalLink

It's already in the opencog repo.

Before, time intervals were supposed to be encoded in TimeNodes, but I
realized that that would make temporal pattern-matcher queries more
difficult, and I think it's clearer this way.
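For example, a sketch (the TimeNode string format shown here is illustrative):

```scheme
; A time interval as a first-class link, rather than a bare TimeNode:
(TimeInterval
   (TimeNode "2016-06-10 11:00:00")
   (TimeNode "2016-06-10 11:20:00"))
```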

Nil

Nil Geisweiller

unread,
Jun 10, 2016, 5:39:11 AM
to linasv...@gmail.com, Nil Geisweiller, Ben Goertzel, opencog, Amen Belayneh
Hi Linas,

On 06/10/2016 12:10 PM, Linas Vepstas wrote:
> A lesser issue is that SequentialAnd proceeds to the next step
> only if
> the previous step returned "true". What is the truth value of a
> schema?
>
>
> The truth value of a schema would be how much the universe inherit
> from it. However the truth value of a certain ExecutionLink is
> whether such inputs return such outputs.
>
>
> Sure Everything in OpenCog is a Markov process, according to the theory.
> But in practice, such general advice isn't useful. Here are some
> explicit examples of where things are, today: For example:
> https://github.com/opencog/ros-behavior-scripting/blob/master/src/behavior.scm#L251
>
> Yes, the above should be split up into multiple, distinct OpenPsi rules,
> but even after that happens, there will still need to be short sequences
> of imperatives written in atomese. Everything in that file is an
> example of the general issue -- currently, actions are defacto
> implemented as predicates, and NOT as schema, and changing them to
> Schema is confusing and opaque. It breaks C++ code. Its not clear how
> to fix the C++ code so that it will work with Schema, and not break
> other things (e.g. the pattern matcher, which treats schema and
> predicates very differently)

It's OK that they are predicates and not schema, and it's OK that they
occur over a certain duration and have conditionals, etc.

What ExecutionLink could mean in that case would be that you trigger
the action asynchronously; the return value would indicate that you
successfully launched the action, independently of the end result. Then
the time lag between the action being triggered and the goal would be
indicated in the OpenPsi rule. For instance, you may have the following
OpenPsi rule

PredictiveImplication <0.7 0.8>
    TypedVariable
        Variable X
        Type "ConceptNode"
    TimeInterval
        TimeNode "1s"
        TimeNode "20s"
    And
        Evaluation
            Predicate "face-in-front-of-me"
            Concept "X"
        Execution
            DefinedPredicate "Interact with face"
            Concept "X"
            Concept "async-action-successfully-launched"
    <goal>

Maybe we could wrap that predicate into some

ASyncLaunchLink or something...

Nil

Linas Vepstas

unread,
Jul 7, 2016, 6:12:30 PM
to Nil Geisweiller, Ben Goertzel, opencog, Amen Belayneh
Hi Nil, sorry for the late response; I'm drowning in email.

AsyncLaunchLink already exists; it's called ParallelLink and JoinLink.
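I.e. something like this sketch (the exact wrapping is from memory, so double-check; the predicate name is Nil's example from above):

```scheme
; Parallel launches each of its children in its own thread and
; returns immediately; Join blocks until all children finish.
(Parallel
   (Evaluation (DefinedPredicate "Interact with face")
      (ListLink (Concept "X"))))

(Join
   (Evaluation (DefinedPredicate "Interact with face")
      (ListLink (Concept "X"))))
```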


--linas
