YKY
> I have mixed feelings about this. I might agree that nonmonotonic
> logic provides a faster approximation of basic probabilistic
> reasoning. However, nonmonotonic systems get quite complicated, and
> subtle questions of conflict resolution arise. My understanding is
> that probability theory clears up all these questions, leaving a
> simple, elegant base theory where we'd have a mess if we tried to use
> nonmonotonic logic. However, this impression is not thanks to an
> in-depth knowledge of the field, but rather is gleaned from the
> commentary of others.
There are two problems with this.
First, probability theory is great stuff where applicable, but not
everything is a matter of probability. Consider:
Is the sun a big star?
Is there life on Mars?
Is the millionth digit of pi odd?
If pressed to give an answer in the range 0..1, I might answer each of
these with 0.5, but I would not mean the same thing as I mean when I
say there is a 0.5 probability that a coin will land heads.
Second, there is the issue of time, and of updating beliefs when new
information arrives. Even if an AI program operates purely in batch
mode, it will need to reason about processes that operate online.
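To make the updating point concrete, here is a minimal sketch of sequential Bayesian belief updating as observations arrive over time. The coin-bias example and the small hypothesis grid are my own illustrative assumptions, not anything proposed in this thread:

```python
# Illustrative sketch: online Bayesian updating of a belief about a
# coin's bias. The discretized hypothesis grid is an assumption made
# purely for simplicity.

def update(prior, likelihood):
    """One Bayes step: posterior is prior times likelihood, renormalized."""
    post = [p * l for p, l in zip(prior, likelihood)]
    z = sum(post)
    return [p / z for p in post]

# A few candidate biases for the coin, with a uniform prior over them.
biases = [0.1, 0.3, 0.5, 0.7, 0.9]
belief = [0.2] * 5

# Observations arrive one at a time; each heads/tails reweights the belief.
for obs in ["H", "H", "T", "H"]:
    likelihood = [b if obs == "H" else 1 - b for b in biases]
    belief = update(belief, likelihood)

# After mostly-heads evidence, probability mass shifts toward higher biases.
```

The point is only that the belief state is revised incrementally as new information arrives, which is the online behavior the paragraph above is concerned with.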
> I have mixed feelings about this. I might agree that nonmonotonic
> logic provides a faster approximation of basic probabilistic
> reasoning. However, nonmonotonic systems get quite complicated, and
> subtle questions of conflict resolution arise. My understanding is
> that probability theory clears up all these questions, leaving a
> simple, elegant base theory where we'd have a mess if we tried to use
> nonmonotonic logic. However, this impression is not thanks to an
> in-depth knowledge of the field, but rather is gleaned from the
> commentary of others.
I don't like binary nonmonotonic logics either, but my intention is
to suggest that classical binary logic should be abandoned because it
cannot handle *nonmonotonicity*.
> Perhaps I should clarify what I mean. My thinking is that for any
> nonmonotonic reasoning system, we *in fact* would want to place more
> confidence on an assertion if it goes longer without being refuted. So
> just having an on/off asserted-or-not should be thought of as a rough
> approximation, where in fact we want degrees of belief. (And, of
> course, at present I take "degree of belief" to be best formalized by
> probability theory.)
I think "degree of belief" cannot be formalized by probability theory
alone. NARS has advocated the use of more than one number, and both
PLN and my PZ logic have followed suit. But I agree that the
formalization had better be compatible with probability theory.
In particular, belief revision can be handled easily with NARS-style
confidence, but it seems much harder to solve with single-number
probability theory alone.
I think nonmonotonicity should be handled by combining probability
theory and classical logic -- but that may change the proof theory of
classical logic. So I suggest moving away from classical logic to
probabilistic logic...
YKY
> Is the sun a big star?
This one is a matter of fuzziness. My point has long been that we need both
fuzziness and probability for AGI...
> Is the millionth digit of pi odd?
You seem to have some conceptual problems with subjective
probabilities. We form subjective probabilities in our minds because
we do not know everything. But we can *update* those probabilities
when given more information. So there is no problem here.
YKY
Yep.
>> Is the millionth digit of pi odd?
>
> You seem to have some conceptual problems with subjective
> probabilities. We form subjective probabilities in our minds because
> we do not know everything. But we can *update* those probabilities
> when given more information. So there is no problem here.
I would say that what I have a problem with is the conflating of
subjective and objective probability (the latter ultimately boils down
to indexical uncertainty). We can assign a subjective probability of
0.5 to an arbitrary bit of pi, and our reasoning about this need not
-- must not -- violate the laws of probability theory, but it must
also, by whatever method, keep track of the fact that this is not an
irreducible indexical uncertainty; it is a logically determined result
that we have not computed. If we do keep track of this, and if our
inference machinery correctly updates when required, that is no
problem; does anyone have an outline design for such machinery that is
computationally tractable?
Of course, this is what NARS and PLN and P(Z)C are concerned with, ie,
probabilistic belief revision.
But I suspect that this aspect is not easily captured by classical /
binary logic. So there is a real need to abandon that approach...
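For what it's worth, here is a toy sketch of the kind of machinery asked about above: a belief that carries a subjective probability together with a flag marking it as logically determined but not yet computed. Everything here (the class, the use of a digit of 2**100 as a cheap stand-in for a digit of pi) is an illustrative assumption, not a description of NARS, PLN, or P(Z)C:

```python
# Toy sketch: a belief object that distinguishes irreducible uncertainty
# from ignorance about a computation we have not yet performed.

class Belief:
    def __init__(self, prob, determined, compute=None):
        self.prob = prob              # current subjective probability
        self.determined = determined  # logically fixed, just uncomputed?
        self.compute = compute        # thunk that settles the question

    def resolve(self):
        """When we pay the computational cost, collapse to certainty."""
        if self.determined and self.compute is not None:
            self.prob = 1.0 if self.compute() else 0.0
        return self.prob

# "Is the 10th digit of 2**100 odd?" -- logically determined, so the 0.5
# expresses ignorance about a computation, not irreducible chance.
b = Belief(0.5, determined=True,
           compute=lambda: int(str(2 ** 100)[9]) % 2 == 1)
b.resolve()  # collapses b.prob to 0.0 or 1.0
```

The sketch only shows the bookkeeping; a serious design would also have to bound how much computation `resolve` is allowed to spend, which is where the tractability question really bites.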
YKY