operational semantics


YKY (Yan King Yin, 甄景贤)

Oct 13, 2009, 12:25:09 AM
to one-...@googlegroups.com
My approach so far is basically a kind of "operational" semantics:
truth values are defined by operations or algorithms instead of formal
semantics. I think that is much more practical for AGI. In
particular, my logic relies on constructing (grounded) Bayesian
networks from which probabilistic truth values associated with
propositions can be obtained.
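A minimal sketch of this idea in Python (the network is a made-up toy, not YKY's actual P(Z) logic): the probabilistic "truth value" of the proposition WetGrass is obtained operationally, by brute-force enumeration over a small grounded net.

```python
import itertools

# A toy "grounded" Bayes net: Rain and Sprinkler are parents of WetGrass.
# CPTs are plain dictionaries; inference is brute-force enumeration.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}
# P(wet | rain, sprinkler)
P_wet = {
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.05,
}

def prob_wet():
    """Probabilistic 'truth value' of the proposition WetGrass,
    computed by summing over all joint states of the parents."""
    total = 0.0
    for r, s in itertools.product([True, False], repeat=2):
        total += P_rain[r] * P_sprinkler[s] * P_wet[(r, s)]
    return total

print(round(prob_wet(), 4))  # 0.2818
```

The point is that the truth value is defined by what this procedure computes, not by a separate model-theoretic definition.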

In theory, this approach is very sound because Bayes nets are the
correct way to deal with probabilistic conditionals in general. In
practice however I found that my P(Z) logic has to extend beyond
traditional Bayes nets in order to handle:
1. continuous variables (the theory already exists and some existing
BN software can do that)
2. non-linear functions of such continuous variables (extremely recent
development, and no software as far as I know)
So, I cannot simply output a BN and tell another software to "go
evaluate it". I'm currently trying to figure out some sort of crude
approximation (fine approximation is a very advanced thing that
requires a good understanding of the exact cases).

I'm wondering if you have other options available, when it comes to
combining logic with probabilities (and/or fuzziness)?

YKY

Abram Demski

Oct 13, 2009, 1:17:15 AM
to one-...@googlegroups.com
YKY,

There are interesting points to grapple with when combining an
operational semantics viewpoint with an approximation-based
implementation... If a term's meaning is simply
what-the-system-does-with-it (a desirable feature to my eye), then
what does it mean for it to be an approximation? If we've got Bayes nets
that get approximate inference done on them, then... is their
semantics given by probability theory, or by the *actual*
manipulations done on them?

In principle, I'd say that the way to clear this up is to require that
the approximation get closer as the system spends more effort on the
particular term... and more specifically, I'd add that it seems
reasonable to also require that the system's *certainty* about the
result monotonically increase as effort is spent. The first
requirement loosens things to a "convergence semantics," one might
say: the meaning of a term isn't what we actually end up doing with
limited time, but what we *would* do if we had time to think... what
we'd converge to. The second requirement is simply a result of wanting
to reflect uncertainty about a result that's been approximated.
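A toy illustration of the two requirements, assuming a simple Monte Carlo estimator stands in for "the system": the estimate converges toward the true value as effort grows, and the reported standard error (a crude proxy for certainty) shrinks, on average, as samples accumulate.

```python
import random
import math

def anytime_estimate(event, sample, budgets=(100, 1000, 10000)):
    """Return (n, estimate, std_error) after each effort level.
    The expected interval width shrinks like 1/sqrt(n), so certainty
    about the result grows (on average) monotonically with effort."""
    random.seed(0)  # deterministic for illustration
    results = []
    hits, n = 0, 0
    for budget in budgets:
        while n < budget:
            hits += event(sample())
            n += 1
        p = hits / n
        se = math.sqrt(p * (1 - p) / n)
        results.append((n, p, se))
    return results

# Event: X + Y > 1 with X, Y ~ Uniform(0,1); the true probability is 0.5.
for n, p, se in anytime_estimate(lambda xy: xy[0] + xy[1] > 1,
                                 lambda: (random.random(), random.random())):
    print(n, round(p, 3), round(se, 4))
```

The "convergence semantics" reading would be that the meaning of the query is the limit of this process, while the standard error tracks how far the system believes it is from that limit at any given effort level.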

In practice, such rigor might be too costly. Unfortunately. (Yet, when
combined with other ideas that would benefit from such a rigorous
approach, maybe it would be worth it.)

An operational semantics is also interesting in other ways...

Combining probability with logic is also interesting in other ways...

But I'll have to stop here for tonight. :)

--Abram

2009/10/13 YKY (Yan King Yin, 甄景贤) <generic.in...@gmail.com>:
--
Abram Demski
http://dragonlogic-ai.blogspot.com/
http://groups.google.com/group/one-logic

YKY (Yan King Yin, 甄景贤)

Oct 14, 2009, 6:32:57 AM
to one-...@googlegroups.com
Another way to look at the question is: we can base the semantics of
our logic on either 1) binary logic or 2) Bayesian networks. Both ways
are theoretically sound but in practice have major difficulties:

1. In binary logic it is hard to describe the BN algorithm. For
instance, how would one interpret a sentence that is a probabilistic
conditional involving higher-order quantifiers?

2. If BN is taken to be foundational, then there may be probabilistic
relations that are not expressible in the BN framework, such as
nonlinear relations, or relations between complex probability
representations (eg unions of convex sets of probabilities).

YKY

YKY (Yan King Yin, 甄景贤)

Oct 14, 2009, 9:44:23 AM
to one-...@googlegroups.com
Hi Abram,

After talking with Russell, we sort of concluded that we should have a
logical reasoner coupled to a Bayes net evaluator, so for each query
the Reasoner would generate a BN and the Evaluator would evaluate that
BN to give the answer.
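A minimal sketch of that split, with hypothetical class names (Reasoner, Evaluator) and a hard-coded toy net standing in for real rule grounding:

```python
# Hypothetical sketch of the Reasoner/Evaluator split described above.
# The names and the toy network are illustrative, not from any real system.

class Reasoner:
    """Turns a query into a grounded Bayes net (here: a fixed toy net)."""
    def build_network(self, query):
        # A real system would instantiate the rules relevant to `query`;
        # here we just hand back a single-parent net for "wet".
        return {
            "priors": {"rain": 0.2},
            "cpts": {"wet": {True: 0.9, False: 0.05}},  # P(wet | rain)
            "query": query,
        }

class Evaluator:
    """Evaluates whatever network the Reasoner hands over."""
    def evaluate(self, net):
        p_rain = net["priors"]["rain"]
        cpt = net["cpts"]["wet"]
        # Marginalize out rain: P(wet) = P(wet|r) P(r) + P(wet|~r) P(~r)
        return cpt[True] * p_rain + cpt[False] * (1 - p_rain)

net = Reasoner().build_network("wet")
print(Evaluator().evaluate(net))  # 0.9*0.2 + 0.05*0.8 = 0.22
```

One advantage of this interface is exactly the one debated below: in principle either side can be swapped out, e.g. replacing the Evaluator with an off-the-shelf BN package, if one exists that meets the requirements.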

So, would you be interested in implementing such an evaluator? =)

It'd probably be harder than implementing a standard BN solver because it needs
to deal with continuous variables etc. One paper that I find relevant
to this issue is:
http://web.ku.edu/~pshenoy/Papers/WP310.pdf

YKY

Abram Demski

Oct 14, 2009, 4:29:19 PM
to one-...@googlegroups.com
YKY,

I am still focusing on writing my compression software when I have
time. But, have you concluded that existing open-source Bayes Net
evaluators are totally useless to you? Because you want continuous
variables? I don't know what your conversation with Russell was, but it
seems to me that a big advantage of splitting things up the way you're
proposing is that one can plug in existing Bayes net evaluators.

I think I should also say, this list is intended more towards
theoretical talk than implementation. Perhaps this conversation would
be better had on the AGI list, or via private email.

--Abram

2009/10/14 YKY (Yan King Yin, 甄景贤) <generic.in...@gmail.com>:

YKY (Yan King Yin, 甄景贤)

Oct 14, 2009, 4:55:15 PM
to one-...@googlegroups.com
On Thu, Oct 15, 2009 at 4:29 AM, Abram Demski <abram...@gmail.com> wrote:

> I am still focusing on writing my compression software when I have
> time. But, have you concluded that existing open-source Bayes Net
> evaluators are totally useless to you? Because you want continuous
> variables? I don't know what your conversation with Russell was, but it
> seems to me that a big advantage of splitting things up the way you're
> proposing is that one can plug in existing Bayes net evaluators.

Yes, continuous variables are part of the problem; nonlinear functions
of continuous variables are another. No existing BN software that I
know of can deal with both issues (I've looked at this quite-large
list of BN software:
http://people.cs.ubc.ca/~murphyk/Bayes/bnsoft.html ).

The problem is that we're trying to base the entire semantics of a
logic on BNs. So, the requirements on the BN would be quite
demanding, and it's not surprising that existing BN software cannot
meet them.

Unless, you have a better solution to the problem of "probabilistic semantics"?

> I think I should also say, this list is intended more towards
> theoretical talk than implementation. Perhaps this conversation would
> be better had on the AGI list, or via private email.

OK, we can move it to your private e-mail...

YKY

YKY (Yan King Yin, 甄景贤)

Oct 14, 2009, 7:27:30 PM
to one-...@googlegroups.com
Perhaps I have not explained my idea sufficiently:

1. A probabilistic logic would contain statements that are
probabilistic conditionals.

2. The correct way to evaluate the value of a probabilistic node,
given a network of probabilistic conditionals, is the BN algorithm.

3. Therefore, it seems that a probabilistic logic would have BN-style
semantics, "nolens volens".

4. If we give the logic an operational BN semantics, we'd need to
expand the BN evaluator's capabilities, otherwise there would be a lot
of probabilistic relations that cannot be expressed (eg continuous).

That's my argument....

YKY

Abram Demski

Oct 15, 2009, 6:10:47 PM
to one-...@googlegroups.com
YKY,

Yea, what I'm saying is, it seems like it would save work to extend an
existing implementation rather than start from scratch. I could be
wrong. I think we've already had this discussion though, so I won't
press the issue.

As it happens, I found some relevant things that are also possibly of
interest to this list (which is why I'm not making this a private
message).

There's a paper out there, "Some Puzzles About Probability and
Probabilistic Conditionals," which I can't get a copy of w/o paying
$45... What it does, according to the abstract, is show that
probabilistic conditionals are "almost monotonic"... that the
well-known nonmonotonicity is inconsequential in practice. This result
seems... quite surprising. Does it mean that, contrary to Judea
Pearl's intuition, probabilistic reasoning is not a good explanation
of nonmonotonicity in reasoning? Or is nonmonotonicity in reasoning
"inconsequential" in the same sense? Obviously I can't judge very well
without reading the paper...

--Abram

2009/10/14 YKY (Yan King Yin, 甄景贤) <generic.in...@gmail.com>:

YKY (Yan King Yin, 甄景贤)

Oct 16, 2009, 9:34:43 AM
to one-...@googlegroups.com
On Fri, Oct 16, 2009 at 6:10 AM, Abram Demski <abram...@gmail.com> wrote:

> Yea, what I'm saying is, it seems like it would save work to extend an
> existing implementation rather than start from scratch. I could be
> wrong. I think we've already had this discussion though, so I won't
> press the issue.

Yeah, I've been looking at existing software such as JavaBayes, which
uses a version of the variable elimination algorithm. But I need more
understanding of why continuous variables are difficult to handle, how
they affect the joint distribution, etc.
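One way to see the difficulty: eliminating a discrete variable is a finite sum, but eliminating a continuous one turns that sum into an integral, which for nonlinear parent-child relationships generally has no closed form. A toy comparison (standard-normal parent, nonlinear child Y = X**2, crude Riemann quadrature — all details made up for illustration):

```python
import math

# Summing out a discrete parent A is a finite sum:
#   P(B) = sum_a P(B|a) P(a)
p_a = {True: 0.3, False: 0.7}
p_b_given_a = {True: 0.8, False: 0.1}
p_b = sum(p_b_given_a[a] * p_a[a] for a in p_a)

# With a continuous parent X ~ N(0,1) and a nonlinear child Y = X**2,
# the sum becomes an integral. P(Y < 1) = P(-1 < X < 1) is the integral
# of the Gaussian density over (-1, 1), which has no elementary
# antiderivative, so we approximate it with a crude Riemann sum.
def gauss(x, mu=0.0, sigma=1.0):
    """Standard normal density (by default)."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

dx = 0.001
p_y_lt_1 = sum(gauss(-1 + i * dx) * dx for i in range(int(2 / dx)))

print(round(p_b, 2), round(p_y_lt_1, 3))  # 0.31 0.683
```

Variable elimination multiplies and marginalizes many such factors, so once a net mixes continuous variables and nonlinear functions, every elimination step needs some numeric or parametric approximation of this kind, which is presumably why off-the-shelf discrete solvers like JavaBayes don't cover the case.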

> There's a paper out there, "Some Puzzles About Probability and
> Probabilistic Conditionals," which I can't get a copy of w/o paying
> $45... What it does, according to the abstract, is show that
> probabilistic conditionals are "almost monotonic"... that the
> well-known nonmonotonicity is inconsequential in practice. This result
> seems... quite surprising. Does it mean that, contrary to Judea
> Pearl's intuition, probabilistic reasoning is not a good explanation
> of nonmonotonicity in reasoning? Or is nonmonotonicity in reasoning
> "inconsequential" in the same sense? Obviously I can't judge very well
> without reading the paper...

I'll go to the library tomorrow to see if I can get a copy of it.
This certainly looks interesting -- understanding the origin of
nonmonotonicity is extremely important for AGI.

YKY
