In theory, this approach is very sound, because Bayes nets are the
correct way to deal with probabilistic conditionals in general. In
practice, however, I've found that my P(Z) logic has to extend beyond
traditional Bayes nets in order to handle:
1. continuous variables (the theory already exists, and some existing
BN software can handle them);
2. nonlinear functions of such continuous variables (an extremely
recent development, with no software support as far as I know).
So I cannot simply output a BN and tell some other piece of software
to "go evaluate it". I'm currently trying to figure out some sort of
crude approximation (a fine approximation is a very advanced thing
that requires a good understanding of the exact cases).
I'm wondering whether you have other options available when it comes
to combining logic with probabilities (and/or fuzziness)?
YKY
1. In binary logic it is hard to describe the BN algorithm. For
instance, how would one interpret a sentence that is a probabilistic
conditional involving higher-order quantifiers?
2. If BN is taken to be foundational, then there may be probabilistic
relations that are not expressible in the BN framework, such as
nonlinear relations, or relations between complex probability
representations (e.g., unions of convex sets of probabilities).
YKY
After talking with Russell, we sort of concluded that we should have a
logical reasoner coupled to a Bayes net evaluator: for each query, the
Reasoner would generate a BN, and the Evaluator would evaluate that BN
to give the answer.
So, would you be interested in implementing such an evaluator? =)
It'd probably be harder than implementing a standard discrete BN
solver, because it needs to deal with continuous variables, etc. One
paper that I find relevant to this issue is:
http://web.ku.edu/~pshenoy/Papers/WP310.pdf
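To make the division of labor concrete, here is a toy sketch (invented names and numbers, brute-force enumeration rather than a serious algorithm) of what the Evaluator's job would be once the Reasoner has emitted a purely discrete BN:

```python
from itertools import product

def evaluate(bn, query_var, evidence):
    """Exact inference over a boolean BN by enumerating all worlds.
    bn: dict var -> (parents, cpt), where cpt maps a tuple of parent
    values to P(var = True | parents).
    Returns P(query_var = True | evidence)."""
    vars_ = list(bn)
    weights = {True: 0.0, False: 0.0}
    for values in product([True, False], repeat=len(vars_)):
        world = dict(zip(vars_, values))
        if any(world[v] != val for v, val in evidence.items()):
            continue  # inconsistent with the evidence
        p = 1.0
        for v, (parents, cpt) in bn.items():
            prob_true = cpt[tuple(world[q] for q in parents)]
            p *= prob_true if world[v] else 1.0 - prob_true
        weights[world[query_var]] += p
    return weights[True] / (weights[True] + weights[False])

# The kind of BN a Reasoner might emit for one query (invented numbers):
bn = {
    "Rain":      ((), {(): 0.2}),
    "Sprinkler": ((), {(): 0.1}),
    "WetGrass":  (("Rain", "Sprinkler"),
                  {(True, True): 0.99, (True, False): 0.9,
                   (False, True): 0.8, (False, False): 0.05}),
}
p = evaluate(bn, "Rain", {"WetGrass": True})  # roughly 0.645
```

The point of the exercise is the interface, not the algorithm: the Evaluator you'd actually want replaces this enumeration loop with something that also handles continuous and nonlinear CPDs.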
YKY
> I am still focusing on writing my compression software when I have
> time. But, have you concluded that existing open-source Bayes Net
> evaluators are totally useless to you? Because you want continuous
> variables? I don't know what your conversation with Russell was, but it
> seems to me that a big advantage of splitting things up the way you're
> proposing is that one can plug in existing Bayes net evaluators.
Yes, continuous variables are part of the problem; nonlinear functions
of continuous variables are another. No existing BN software that I
know of can deal with both issues (I've looked at this quite large
list of BN software:
http://people.cs.ubc.ca/~murphyk/Bayes/bnsoft.html ).
The problem is that we're trying to base the entire semantics of a
logic on BNs. So the requirements on the BN would be quite demanding,
and it's not surprising that existing BN packages cannot deal with
them.
Unless you have a better solution to the problem of "probabilistic semantics"?
> I think I should also say, this list is intended more towards
> theoretical talk than implementation. Perhaps this conversation would
> be better had on the AGI list, or via private email.
OK, we can move it to your private e-mail...
YKY
1. A probabilistic logic would contain statements that are
probabilistic conditionals.
2. The correct way to evaluate the value of a probabilistic node,
given a network of probabilistic conditionals, is the BN algorithm.
3. Therefore, it seems that a probabilistic logic would have BN-style
semantics, "nolens volens".
4. If we give the logic an operational BN semantics, we'd need to
expand the BN evaluator's capabilities; otherwise there would be a lot
of probabilistic relations that cannot be expressed (e.g., continuous
ones).
That's my argument....
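As a minimal instance of points 1-2 (numbers invented for illustration): given the conditionals P(B|A) and P(B|~A) plus a prior P(A), the "BN answer" for the node B is just marginalization over its parent.

```python
# Evaluating a probabilistic node from probabilistic conditionals,
# the way a BN evaluator would. All numbers are invented.
p_a = 0.4              # prior P(A)
p_b_given_a = 0.7      # conditional P(B | A)
p_b_given_not_a = 0.1  # conditional P(B | ~A)

# Marginalize out the parent A:
p_b = p_b_given_a * p_a + p_b_given_not_a * (1.0 - p_a)
# p_b == 0.34
```

When A is continuous, this sum becomes an integral over P(A), which is exactly where point 4's expanded evaluator comes in.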
YKY
> Yeah, what I'm saying is, it seems like it would save work to extend an
> existing implementation rather than start from scratch. I could be
> wrong. I think we've already had this discussion though, so I won't
> press the issue.
Yeah, I've been looking at existing software such as JavaBayes, which
uses a version of the variable-elimination algorithm. But I need a
better understanding of why continuous variables are difficult to
handle, how they affect the joint distribution, etc.
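For reference, the elimination step itself is simple in the discrete case. A toy sketch (invented numbers, not JavaBayes code): multiply the factors mentioning a variable, then sum the variable out. With a continuous variable that sum becomes an integral, and after a nonlinear CPD the intermediate factor may have no closed form, which is one source of the difficulty.

```python
# The core step of variable elimination over a binary variable X.
def multiply_factors(f1, f2):
    """Pointwise product of two factors over the same variable."""
    return {v: f1[v] * f2[v] for v in f1}

def sum_out(factor):
    """Eliminate a discrete variable by summing over its values.
    For a continuous variable this sum would be an integral."""
    return sum(factor.values())

# Two factors over X (invented numbers):
phi1 = {True: 0.2, False: 0.8}   # e.g., P(X)
phi2 = {True: 0.9, False: 0.1}   # e.g., P(e | X)
p_e = sum_out(multiply_factors(phi1, phi2))
# p_e == 0.2*0.9 + 0.8*0.1 == 0.26
```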
> There's a paper out there, "Some Puzzles About Probability and
> Probabilistic Conditionals," which I can't get a copy of w/o paying
> $45... What it does, according to the abstract, is show that
> probabilistic conditionals are "almost monotonic"... that the
> well-known nonmonotonicity is inconsequential in practice. This result
> seems... quite surprising. Does it mean that, contrary to Judea
> Pearl's intuition, probabilistic reasoning is not a good explanation
> of nonmonotonicity in reasoning? Or is nonmonotonicity in reasoning
> "inconsequential" in the same sense? Obviously I can't judge very well
> without reading the paper...
I'll go to the library tomorrow to see if I can get a copy of it.
This certainly looks interesting -- understanding the origin of
nonmonotonicity is extremely important for AGI.
YKY