Science : Engineering :: Math : Computer Programming
Science is a methodology for study that is intimately anchored
to the natural world. Physicists, chemists, biologists, etc. may
form their hypotheses, but these hypotheses are not interesting
until their usefulness is checked against the actual world we
live in.
Mathematics, in contrast, is much the same kind of methodology
as the other sciences, but it is not anchored to the natural world.
A mathematical idea may be useful all by itself, without needing
empirical verification of any kind. Thus we may derive use
from hyperbolic geometry without ever going out into the
natural world and testing whether two parallel lines ever meet
or not. Indeed, we would not be able to locate parallel lines
in the natural world, because none exist there.
Mechanical engineering, chemical engineering, etc., all are
effective because they build on scientific principles, and from
truths discovered via science. The laws of the natural world
are the laws of engineering, and the engineer who understands
this is a successful engineer.
Software engineering, or simply programming, is the engineering
discipline of Mathematics, in the exact same way that chemical
engineering is the engineering discipline of the science of chemistry.
And because of this, we should regard a computer programmer
who believes that mathematics has nothing to offer him the
same exact way we would regard a chemical engineer who
believed that chemistry had nothing to offer him, or a mechanical
engineer who didn't think that physics mattered to his job.
Fortunately for the chemical engineers of the world, and
for the people who rely on their work, the idea would be
immediately rejected as ridiculous by anyone in that
discipline. It is unfortunate that the same is not true for
us programmers.
I lament the amount of time I spent studying OOP when I
could have been studying math. At the time, I still regarded
programming as a fuzzy discipline, not amenable to
formalisms. I see now that I was simply surrounded by
fuzziness as a cultural norm. The fuzziness was a social
disease, not an attribute of the discipline of programming.
Let us hope that our profession may one day recover from
this love of fuzziness. The "Law" of Demeter is as hard to
master as a hula hoop, and about as useful. Let us instead
study abstract algebra, set theory, type theory, or what
have you. Though these are all much sterner mistresses
than Demeter, the rewards are proportionally greater.
-------------------------
From an interview with Alexander Stepanov, author of the STL:
http://www.stepanovpapers.com/CPCW_Interview.pdf
Q: What do you think of OO? Is it a good style of programming?
Is there a necessary and useful tool for learning OO?
Stepanov: I try not to think of OO. I am not impressed with their
approach to programming. Quoting from my interview to an
Italian journal: "I find OOP technically unsound. It attempts to
decompose the world in terms of interfaces that vary on a single
type. To deal with the real problems you need multisorted
algebras - families of interfaces that span multiple types. I find
OOP philosophically unsound. It claims that everything is an object.
Even if it is true it is not very interesting - saying that everything
is an object is saying nothing at all. I find OOP methodologically
wrong. It starts with classes. It is as if mathematicians would start
with axioms. You do not start with axioms - you start with proofs.
Only when you have found a bunch of related proofs, can you
come up with axioms. You end with axioms. The same thing is
true in programming: you have to start with interesting algorithms.
Only when you understand them well, can you come up with an
interface that will let them work." I repeat: programming is
about algorithms and data structures, not about inheritance
and polymorphism.
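Stepanov's algorithms-first point can be sketched in a few lines (my illustration of the idea, not code from the interview): the algorithm is written against the one requirement it actually needs, and the interface falls out of the algorithm rather than out of a class hierarchy.

```python
# A sketch (mine, not from the interview) of the algorithms-first style
# Stepanov advocates: the algorithm requires only that elements
# support "<" -- the interface comes from the algorithm, not from a
# class hierarchy.

def minimum(items):
    """Return the least element; requires only that elements support <."""
    it = iter(items)
    best = next(it)        # raises StopIteration on empty input
    for x in it:
        if x < best:       # the sole interface requirement
            best = x
    return best

# The same algorithm spans many types, with no inheritance involved:
print(minimum([3, 1, 4, 1, 5]))           # 1
print(minimum(["pear", "apple", "fig"]))  # apple
print(minimum([2.5, -0.5, 7.0]))          # -0.5
```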
Q: What is the relationship between Mathematics and Computer Science?
Stepanov: Computer Science is a mathematical discipline. Quoting
from Dijkstra: "As soon as programming emerges as a battle against
unmastered complexity, it is quite natural that one turns to that mental
discipline whose main purpose has been for centuries to apply effective
structuring to otherwise unmastered complexity. That mental discipline
is more or less familiar to all of us, it is called Mathematics. If we
take the existence of the impressive body of Mathematics as the
experimental evidence of the opinion that for the human mind the
mathematical method is indeed the most effective way to come to
grips with complexity, we have no choice any longer: we should
reshape our field of programming in such a way that the
mathematician's methods become equally applicable to our
programming problems, for there are no other means."
Marshall
Hear! Hear!
I feel cheated that predicate logic was never taught to me in
high school. I see no reason to exclude it from the curriculum, and it
would much better prepare students. I also feel cheated that my
education failed to teach me enough useful statistics.
> "Marshall" <marshal...@gmail.com> wrote in message
> news:1149554468....@i39g2000cwa.googlegroups.com...
> I think you need another glass of wine. Take a breath. Math is a language
> in which we ask questions and answer those questions. Math gets real hard at
> some point, and that's why many of us became computer science majors. All we
> needed was calculus 2.
That is the stupidest thing I have ever seen posted in c.d.t, and I have
seen some very stupid things posted here. Plonk!
"[The duality between source and channel coding] can be
pursued further and is related to a duality between past and
future and the notions of control and knowledge. Thus we may
have knowledge of the past but cannot control it; we may
control the future but have no knowledge of it."
Science employs control to pursue knowledge.
Engineering employs knowledge to pursue control.
-- Keith --
Well, see, the thing about that is, when I take a glass of wine
I can't do *any* math. I can't do any programming either.
It's not like I don't drink. But if I'm planning on doing any
programming I stay dry.
Marshall
PS. I can recommend a fine champagne if you like:
http://www.veuve-clicquot.com/
> Well, see, the thing about that is, when I take a glass of wine
> I can't do *any* math. I can't do any programming either.
> It's not like I don't drink. But if I'm planning on doing any
> programming I stay dry.
Fax machines watch out!
One day I tried to find the bug in a program and couldn't find it for hours.
After a glass of alcohol, the Brownian motion accelerated and I instantly
found the error. It was right under my nose. A glass of alcohol might be
just what you need to get out of a loop. :-)
One interesting error I once encountered was in a numerical program, due to
too much copy/paste. The program produced correct answers, but too slowly.
The error was just one character that needed to be changed from + to - :-)
Another interesting error was due to a far call with a near return (or
something like that). The program entered an infinite loop because it
returned to a place above a call to the same routine. The program was
written in Pascal and used no pointers. :-)
> One day I tried to find the bug in a program and couldn't find it for
> hours.
> After a glass of alcohol, the Brownian motion accelerated and I instantly
> found the error. It was right under my nose. A glass of alcohol might be
> just what you need to get out of a loop. :-)
It is said that you need another person to find the bugs in your code.
But anyone is another person after drinking a glass of wine.
And each person has the right to drink one glass of wine.
I once wrote an assembler routine for copying the character matrices to
the display memory. Looking at the code, I observed some constants. I
replaced the constants with parameters and experimented with various values.
To my surprise, I discovered that the text was displayed in various
interesting ways (skewed, right to left, top to bottom, etc). Math is really
useful sometimes.
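For illustration only, a hypothetical reconstruction of the trick in that anecdote (the routine, names, and step values here are invented): make the copy loop's per-row and per-column steps parameters instead of constants, and mirroring and similar effects fall out of the arithmetic.

```python
# Hypothetical reconstruction (names and values invented) of the idea
# above: a character "blit" whose per-row and per-column steps are
# parameters instead of constants. Varying the steps mirrors, rotates,
# or otherwise rearranges the displayed text.

def blit(glyph, dx, dy, width=8, height=8):
    """Copy an 8x8 glyph (list of 8-character strings), placing the
    pixel at (row, col) at position (row*dy mod height, col*dx mod width)."""
    out = [[' '] * width for _ in range(height)]
    for row in range(height):
        for col in range(width):
            out[(row * dy) % height][(col * dx) % width] = glyph[row][col]
    return [''.join(r) for r in out]

glyph = ["01234567"] * 8
print(blit(glyph, 1, 1) == glyph)  # True: dx = dy = 1 is a plain copy
print(blit(glyph, -1, 1)[0])       # "07654321": columns mirrored
```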
You may use math to generate new axioms, but you are not forced to.
Careful observation of nature may also be a source of axioms (nature
meaning whatever is around you). OTOH, you must use math to prove
your theorems. Mathematical theory is therefore a necessary step in
validating new computing theories.
x wrote:
What's the difference between Alan Kay's vision for OOP, and the vision as
carried forward by such people as the "thinking in Java" crowd?
By way of analogy, Grace Hopper's vision for COBOL was quite different from
what COBOL eventually became.
I noticed that, in a seminar on "Croquet", Alan said that he used Smalltalk
rather than some later languages, because all the later languages got too
many things wrong. He also mentioned Carl Hewitt, inventor of "Planner" in
his talk. There's some math behind Planner, but it's not as apparent to
someone like me as the math behind the relational data model.
OTOH, Alan Kay says that he "doesn't like data structures". I think he's
made a big mistake there. The details about data structures are tedious and
boring. But data structures often shape the direction of a programming
paradigm much more than the great thinkers like to believe.
If the math gets "real hard," I don't think computer science is the
place for you, though programming is populated by many who are
uninterested or unskilled in math. I am working hard to educate myself;
while my undergraduate computer major was "more mathematical" than
most, after graduation a year of COBOL followed by years of Oracle,
Microsoft technologies (Visual Basic), and now Java have eroded
mathematical faculties I'm struggling to rebuild. My interest is
strong, and my skills growing, but it's a lifetime commitment.
I don't think math is about "tough problems" per se. It's a state of
mind, a determination to eradicate complexity as much as possible,
requiring aesthetic judgment and hard-nosed pragmatism (to separate
wheat from chaff when investigating technologies). You must have
interest in more than just "making sh*t work" - that's the lowest
denominator, not nearly enough for the intellectual task at hand,
though enough to keep one employed, I suppose.
- Eric
I think the predicate logic is more important. Predicates link the
database to the real world, while normalization keeps the predicates
manageable. This is what I like best about relational: to me, it hits a
sweet spot between assertions about the business, and those about
solutions and technology, all the while pinning them solidly to logic
to solve problems.
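The point that predicates link the database to the real world can be sketched concretely (a toy example; the table and names here are invented): a relation's body is exactly the set of tuples for which its predicate is true.

```python
# A toy sketch (table and names invented) of the relational idea above:
# the body of a relation is the set of tuples for which its predicate
# -- here, "employee E works in department D" -- is true.

works_in = {("alice", "shipping"), ("bob", "accounts")}

def predicate(employee, department):
    """True exactly when (employee, department) appears in the relation."""
    return (employee, department) in works_in

print(predicate("alice", "shipping"))  # True: the fact is asserted
print(predicate("alice", "accounts"))  # False: absent, hence false
```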
> I believe that in unknown areas that remain to be explored,
> eclectic approaches should be guided and validated by math, not
> buried by it.
I think you might be underestimating math. Math is not just arithmetic,
nor are most software development projects primarily arithmetical. Nor
are science and math devoid of exploration, hypothesis, and creativity.
- Eric
Regards
Alfredo
> > I believe that in unknown areas that remain to be explored,
> > eclectic approaches should be guided and validated by math, not
> > buried by it.
>
> I think you might be underestimating math. Math is not just arithmetic,
> nor are most software development projects primarily arithmetical. Nor
> are science and math devoid of exploration, hypothesis, and creativity.
I would say that I try to avoid overestimating mathematicians, or
people who do math to prove a point. Math can prove false things
because its *admitted* axioms are false.
> - Eric
> I would say that I try to avoid overestimating mathematicians, or
> people who do math to prove a point. Math can prove false things
> because its *admitted* axioms are false.
I advocate not overestimating mathematicians, as opposed to
mathematics...
I didn't intend to make this distinction here; I thought you were. If
not, never mind. I'd say your question is reversed, though - the better
statement might be that logic validates math, rather than vice versa.
> I would say that I try to avoid overestimating mathematicians, or
> people who do math to prove a point. Math can prove false things
> because its *admitted* axioms are false.
Axioms are just statements assumed to be true. It's when you try to
correlate them with some other system (e.g. some perception of the
"real world") that true/false appears. Axioms can be inconsistent with
one another, which is why they're normally developed to be orthogonal
to one another.
- erk
Marshall wrote:
> Elsewhere, I commented that:
>
> Science : Engineering :: Math : Computer Programming
>
>
> Science is a methodology for study that is intimately anchored
> to the natural world. Physicists, chemists, biologists, etc. may
> form their hypotheses, but these hypotheses are not interesting
> until their usefulness is checked against the actual world we
> live in.
If I follow your reasoning... you imply that hypotheses are interesting
(relevant) if useful. *Usefulness* is a consequence of sociological
and environmental context (what is considered *useful* in one context
(cultural, geographic, historic...) may *not* be considered *useful* in
another). Following your reasoning leads to accepting the principle
that the interest (and relevance) of a hypothesis is determined by
sociological and environmental context, and is therefore not universal.
Don't you think this is a paradox?
> Mathematics, in contrast, is much the same kind of methodology
> as the other sciences, but it is not anchored to the natural world.
> A mathematical idea may be useful all by itself, without needing
> empirical verification of any kind. Thus we may derive use
> from hyperbolic geometry without ever going out into the
> natural world and testing whether two parallel lines ever meet
> or not. Indeed, we would not be able to locate parallel lines
> in the natural world, because none exist there.
// Mathematics, in contrast, is much the same kind of methodology
as the other sciences, but it is not anchored to the natural world.//
Vague. Please clarify *anchored in the natural world*.
//A mathematical idea may be useful all by itself, without needing
empirical verification of any kind.//
Mathematics can hardly be defined by its usefulness, as usefulness is
context-defined and mathematics is not... (relevance would be a better
term, don't you think?). Could you give an example of a mathematical
idea that meets these criteria of definition?
> Mechanical engineering, chemical engineering, etc., all are
> effective because they build on scientific principles, and from
> truths discovered via science. The laws of the natural world
> are the laws of engineering, and the engineer who understands
> this is a successful engineer.
> Software engineering, or simply programming, is the engineering
> discipline of Mathematics, in the exact same way that chemical
> engineering is the engineering discipline of the science of chemistry.
Risky. Several other domains of engineering may claim the same
relationship to math.
> And because of this, we should regard a computer programmer
> who believes that mathematics has nothing to offer him the
> same exact way we would regard a chemical engineer who
> believed that chemistry had nothing to offer him, or a mechanical
> engineer who didn't think that physics mattered to his job.
> Fortunately for the chemical engineers of the world, and
> for the people who rely on their work, the idea would be
> immediately rejected as ridiculous by anyone in that
> discipline. It is unfortunate that the same is not true for
> us programmers.
True. Math programs in the Computer Science curriculum are deeply
insufficient and low-level in America. Coming from a European academic
background and having studied in an American university through a
computer science program, the most difficult level of math I found
was European high-school equivalent. I was so bored in math classes
that I would take the chance to work on programming homework during
math class time.
> I lament the amount of time I spent studying OOP when I
> could have been studying math. At the time, I still regarded
> programming as a fuzzy discipline, not amenable to
> formalisms. I see now that I was simply surrounded by
> fuzziness as a cultural norm. The fuzziness was a social
> disease, not an attribute of the discipline of programming.
Indeed a very good point.
> Let us hope that our profession may one day recover from
> this love of fuzziness.
I hope to be wrong but I am pessimistic. Things are getting worse and
worse every day.
Thank you.
Marshall
Hrm, well, I wrestled with what word to use there, and settled
on the bland "useful". My understanding (I'm not a scientist) is
that one determines the utility of a hypothesis by testing its
predictive ability. Hypotheses with strong predictive ability
give us information about how the universe works, which I
would propose is interesting and useful regardless of social
context. I did not intend a narrow meaning such as "what will
make our stock price go up."
> > Mathematics, in contrast, is much the same kind of methodology
> > as the other sciences, but it is not anchored to the natural world.
> > A mathematical idea may be useful all by itself, without needing
> > empirical verification of any kind. Thus we may derive use
> > from hyperbolic geometry without ever going out in to the
> > natural world and testing whether two parallel lines ever meet
> > or not. Indeed, we would not be able to locate parallel lines
> > in the natural world, because none exist there.
>
> // Mathematics, in contrast, is much the same kind of methodology
> as the other sciences, but it is not anchored to the natural world.//
> Vague. Please clarify *anchored in the natural world*.
One never tests a mathematical idea by conducting an
experiment. One tests a mathematical idea by doing more
math. It is self-contained in a way that chemistry is not.
Chemistry has beakers and flasks and huge vats of
bubbling chemicals, and also symbols on the chalkboard.
Math has the symbols on the chalkboard, but no beakers
or anything like them.
Above I noted the example of hyperbolic geometry. Can
one conduct an experiment to determine whether hyperbolic
or Euclidean geometry is more "true?"
> //A mathematical idea may be useful all by itself, without needing
> empirical verification of any kind.//
> Mathematics can hardly be defined by its usefulness, as usefulness is
> context-defined and mathematics is not... (relevance would be a better
> term, don't you think?). Could you give an example of a mathematical
> idea that meets these criteria of definition?
I was not attempting to define mathematics, merely to describe it.
If you like, you can substitute "soundness". The thing is, all the
good terms for this are specific to math, and if I used math-specific
terms, it would defeat my purpose, which was to show the
structural relationships between science, math, engineering, and
programming.
Marshall
I see... you mean *useful* as a characteristic of hypotheses that can
represent nature in a trustworthy and reasonable manner. Don't you
think *reasonable* would be a good substitute for *useful*, which would
then become a consequence and not a characteristic of science?
> > > Mathematics, in contrast, is much the same kind of methodology
> > > as the other sciences, but it is not anchored to the natural world.
> > > A mathematical idea may be useful all by itself, without needing
> > > empirical verification of any kind. Thus we may derive use
> > > from hyperbolic geometry without ever going out in to the
> > > natural world and testing whether two parallel lines ever meet
> > > or not. Indeed, we would not be able to locate parallel lines
> > > in the natural world, because none exist there.
> >
> > // Mathematics, in contrast, is much the same kind of methodology
> > as the other sciences, but it is not anchored to the natural world.//
> > Vague. Please clarify *anchored in the natural world*.
>
> One never tests a mathematical idea by conducting an
> experiment. One tests a mathematical idea by doing more
> math. It is self-contained in a way that chemistry is not.
> Chemistry has beakers and flasks and huge vats of
> bubbling chemicals, and also symbols on the chalkboard.
> Math has the symbols on the chalkboard, but no beakers
> or anything like them.
I see...
What about mathematical ideas that are generated or invalidated by
observation of computing?
Several conjectures about integer series, supported by recurrence
(inductive-style) reasoning over small observable values, have been
proven wrong as a consequence of observing that they hold for small
values but fail when the values become large enough. These conjectures
were refuted thanks to the observation of large enough values and
sufficient computing power. As a result they were reconsidered as false
and new *correcting* conjectures emerged.
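The post names no specific result, so purely as an illustration of the pattern being described (small cases suggesting a general rule that fails later), here is the classic example of Euler's polynomial n^2 + n + 41, which yields primes for every n from 0 to 39 and then fails at n = 40:

```python
# Illustration only (no specific theorem is named in the post): Euler's
# polynomial n^2 + n + 41 is prime for n = 0..39 -- which looks like
# overwhelming small-case evidence -- yet fails at n = 40, since
# 40^2 + 40 + 41 = 1681 = 41 * 41.

def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

values = [n * n + n + 41 for n in range(41)]
print(all(is_prime(v) for v in values[:40]))  # True: holds for n = 0..39
print(is_prime(values[40]))                   # False: fails at n = 40
```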
> Above I noted the example of hyperbolic geometry. Can
> one conduct an experiment to determine whether hyperbolic
> or Euclidean geometry is more "true?"
I do not know. But the above is an example that demonstrates the influence
of observation on math. I think there is at least a bidirectional
relationship between math and nature.
> > //A mathematical idea may be useful all by itself, without needing
> > empirical verification of any kind.//
> > Mathematics can hardly be defined by its usefulness, as usefulness is
> > context-defined and mathematics is not... (relevance would be a better
> > term, don't you think?). Could you give an example of a mathematical
> > idea that meets these criteria of definition?
>
> I was not attempting to define mathematics, merely to describe it.
> If you like, you can substitute "soundness". The thing is, all the
> good terms for this are specific to math, and if I used math-specific
> terms, it would defeat my purpose, which was to show the
> structural relationships between science, math, engineering, and
> programming.
Very interesting indeed... But don't you think that this relationship
is not as unidirectional as you imply?
>
> Marshall
Well, I want a term that's as generic as possible. "Reasonable"
seems less generic than "useful." Also, the connotation of
reasonable is that it would appeal to the intuition of an
ordinary man, and in that sense, I would not say that
e.g., relativity is reasonable.
> > One never tests a mathematical idea by conducting an
> > experiment. One tests a mathematical idea by doing more
> > math. It is self-contained in a way that chemistry is not.
> > Chemistry has beakers and flasks and huge vats of
> > bubbling chemicals, and also symbols on the chalkboard.
> > Math has the symbols on the chalkboard, but no beakers
> > or anything like them.
> I see...
> What about mathematical ideas that are generated or invalidated from
> observation of computing?
Hmmm. I see your point: since the computer exists in the natural
world, we could sort of call something we did with the computer
an experiment. It's not entirely clear to me, though; the computer
merely moves around symbols; it is our interpretation of these
that gives the process meaning, is it not? Can math be reduced
to purely syntactic issues?
> > Above I noted the example of hyperbolic geometry. Can
> > one conduct an experiment to determine whether hyperbolic
> > or Euclidean geometry is more "true?"
>
> I do not know. But the above is an example that demonstrates the influence
> of observation over math. I think there is at least a bidirectional
> relationship between math and nature.
> [...]
> Very interesting indeed... But don't you think that this relationship
> is not as unidirectional as you imply?
I'm not sure. I'm not sure it's any-directional. I think of math's
relevance to the real world as being by analogy. I have never
seen any mathematical construct in the real world, although
I *can* use math to make predictions about the real world.
(Likewise, I have never seen any real-world object in math.)
Is 3 a real thing? I used to wonder about that. And in fact I
have put the question directly to a few of the best minds in
computer science. They mostly shrug. I now see why: it's
not that interesting a question.
Marshall
PS. For some reason, the canonical generic example value
lately is 3. In college it was always 7.
I see your point, *reasonable* seems to be insufficient. *Useful*
bothers me as it is a consequence rather than a describing
characteristic of what may be a good hypothesis.
Your example about relativity is interesting. Is the hypothesis behind
relativity useful? I prefer *sound*. I understand now why this
question bothered you.
> > > One never tests a mathematical idea by conducting an
> > > experiment. One tests a mathematical idea by doing more
> > > math. It is self-contained in a way that chemistry is not.
> > > Chemistry has beakers and flasks and huge vats of
> > > bubbling chemicals, and also symbols on the chalkboard.
> > > Math has the symbols on the chalkboard, but no beakers
> > > or anything like them.
> > I see...
> > What about mathematical ideas that are generated or invalidated from
> > observation of computing?
>
> Hmmm. I see your point: since the computer exists in the natural
> world, we could sort of call something we did with the computer
> an experiment. It's not entirely clear to me, though; the computer
> merely moves around symbols; it is our interpretation of these
> that gives the process meaning, is it not? Can math be reduced
> to purely syntactic issues?
No, but math can be reduced to numbers, and math is based on a direct
equation of meaning between numbers and the symbols that represent them.
Otherwise it is not math.
>
> > > Above I noted the example of hyperbolic geometry. Can
> > > one conduct an experiment to determine whether hyperbolic
> > > or Euclidean geometry is more "true?"
> >
> > I do not know. But the above is an example that demonstrates the influence
> > of observation over math. I think there is at least a bidirectional
> > relationship between math and nature.
> > [...]
> > Very interesting indeed... But don't you think that this relationship
> > is not as unidirectional as you imply?
>
//I'm not sure. I'm not sure it's any-directional. I think of math's
relevance to the real world as being by analogy.//
I think the structural relationship between math and nature is
bidirectional, as math is a dynamic concept.
The example provided shows that mathematics is subject to
observation of nature.
//I have never seen any mathematical construct in the real world//
Well, here is one for you: take a microscope and observe a snowflake.
You will see a perfect example of fractal mathematics.
//, although
I *can* use math to make predictions about the real world.
(Likewise, I have never seen any real-world object in math.)//
//Is 3 a real thing? I used to wonder about that. And in fact I
have put the question directly to a few of the best minds in
computer science. They mostly shrug. I now see why: it's
not that interesting a question.//
I do not quite understand what you are getting at... 3 is the
mathematical symbol representing a number value drawn from the set of
integers (a subset of the reals). Maybe in 27th-century mathematics it
may mean something else, but for the moment it means precisely one
thing: well, '3'. Otherwise it is not mathematics, at least not known
mathematics. This illustrates the dynamic nature of math once again.
> Marshall
>
// PS. For some reason, the canonical generic example value
lately is 3. In college it was always 7.//
That's because some numbers are more important to some cultures than
others.
For instance, 3 (trinity-based heritage) and 7 have strong meaning in
Christian cultures.
On the other hand, 1 and 5 have strong meaning in Islamic cultures.
It's clear I don't have le mot juste yet.
I agree that there are many things that are well-described by
mathematics in the natural world. I disagree that there are
mathematical objects directly present in the real world.
I might have a basket with 3 oranges, but I can never
have a basket with just 3 in it.
I might be able to accurately predict the volume of a
soccer ball using equations about spheres, but I'll
never directly see a sphere in the real world;
a sphere is made up of points, and the real world
has no points in it. At least, not with the instruments
I've used to go looking for them.
> , although
> I *can* use math to make predictions about the real world.
> (Likewise, I have never seen any real-world object in math.)//
>
> //Is 3 a real thing? I used to wonder about that. And in fact I
> have put the question directly to a few of the best minds in
> computer science. They mostly shrug. I now see why: it's
> not that interesting a question.//
>
> I do not quite understand what you are getting at... 3 is the
> mathematical symbol representing a number value drawn from the set of
> integers (a subset of the reals).
In strict English grammar, we would say "3" is the mathematical
symbol representing a number value. But 3 is not the same thing
as "3". 3 is the actual number; the successor to 2, *not* the
glyph. "3" is a real thing, because symbols *do* appear in
the natural world. You see them on pages of math books
all the time. But is 3 a real thing, the way horses are real
and unicorns are not real?
I rather think 3 has more in common with unicorns than
with horses. (This is of course a metaphoric statement
rather than a literal one.) But I find 3 to be ... useful
nonetheless.
Marshall
PS. 5 is right out.
Yeah it was a little unclear what he was saying there. Did
he mean mathematics is an example of the application of
logic and thus validates the usefulness of logic? Or did he
literally mean mathematics is used to validate logic?
If the latter, then as you point out he is at odds with
current dogma which views formal logic + axiomatic set
theory as the foundation of mathematics. And interestingly,
one view of logic is as a specialization of conditional
probability theory. One that deals only with certainty (1)
and impossibility (0) rather than a range of probability.
Probability theory as a generalization of logic is useful
because in addition to the logically valid modus ponens and
modus tollens, it also gives a foundation for applying the
weak syllogisms which in the limiting case of logic are
treated as fallacies (affirming the consequent and denying
the antecedent).
So ... perhaps conditional probability theory is the
foundation of all :-)
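Keith's point about the weak syllogisms can be shown numerically (a sketch with invented probabilities): if H implies E, observing E does not prove H — that would be affirming the consequent — but under Bayes' rule it does raise H's plausibility.

```python
# A numeric sketch (probabilities invented) of the weak syllogism
# described above: H implies E, so P(E | H) = 1; observing E then
# raises the plausibility of H without ever making it certain.

def posterior(p_h, p_e_given_h, p_e_given_not_h):
    """P(H | E) by Bayes' rule."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

p_h = 0.1                     # prior plausibility of H
p = posterior(p_h, 1.0, 0.5)  # H implies E, so P(E | H) = 1
print(p > p_h)                # True: E makes H more plausible ...
print(p < 1.0)                # True: ... but never certain
```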
-- Keith --
> I might be able to accurately predict the volume of a
> soccer ball using equations about spheres, but I'll
> never directly see a sphere in the real world;
> a sphere is made up of points, and the real world
> has no points in it. At least, not with the instruments
> I've used to go looking for them.
> > , although
> > I *can* use math to make predictions about the real world.
> > (Likewise, I have never seen any real-world object in math.)//
> >
> > //Is 3 a real thing? I used to wonder about that. And in fact I
> > have put the question directly to a few of the best minds in
> > computer science. They mostly shrug. I now see why: it's
> > not that interesting a question.//
> >
> > I do not quite understand what you are getting at... 3 is the
> > mathematical symbol representing a number value drawn from the set of
> > integers (a subset of the reals).
>
> In strict English grammar, we would say "3" is the mathematical
> symbol representing a number value.
Yes we agree on that.
> But 3 is not the same thing
> as "3". 3 is the actual number; the successor to 2, *not* the
> glyph. "3" is a real thing, because symbols *do* appear in
> the natural world. You see them on pages of math books
> all the time. But is 3 a real thing, the way horses are real
> and unicorns are not real?
Interesting theory... Just do not tell that to mathematicians (they
would burn you in a public place for heresy) ;).
> I rather think 3 has more in common with unicorns than
> with horses. (This is of course a metaphoric statement
> rather than a literal one.) But I find 3 to be ... useful
> nonetheless.
I guess we have reached the end of discussing your theory... I cannot
really follow you onto unicorns and horses starting from the
relationship between math and science.. Nice exchanging with you though...
> Marshall
>
> PS. 5 is right out.
Just do not bring 0, just do not bring 0 , just do not bring 0 LOL
So you say that math is an invention ?
> I agree that there are many things that are well-described by
> mathematics in the natural world. I disagree that there are
> mathematical objects directly present in the real world.
> I might have a basket with 3 oranges, but I can never
> have a basket with just 3 in it.
Yes. You also have that basket. :-)
I can move 3 around by moving you for example.
3 is as natural as you are. :-)
> I might be able to accurately predict the volume of a
> soccer ball using equations about spheres, but I'll
> never directly see a sphere in the real world;
> a sphere is made up of points, and the real world
> has no points in it. At least, not with the instruments
> I've used to go looking for them.
Then you have not used your head, I suppose ... :-)
> I rather think 3 has more in common with unicorns than
> with horses. (This is of course a metaphoric statement
> rather than a literal one.) But I find 3 to be ... useful
> nonetheless.
Well, horses might not be that different from unicorns after all ...
Different carrying device.
> Marshall
> PS. 5 is right out.
What is with this 1,3,5,7 ?
You skipped 2 ?
I don't see how e.g. knowledge of trigonometry could be helpful in the
engineering discipline of programming. Or knowledge of how to resolve
a set of linear equations. Or knowledge of differentials and integrals.
I do not know either, but I do not see any reason to deprive ourselves
of using them in the future to help make computing more efficient.
Not yet. but why not?
If Codd had not thought of applying mathematical relations and
set theory to computing, the RM would not have been invented. It's
true that not all mathematics has been applied and may not be applicable
to computing, but mathematics has already inspired and helped the
formulation of computing models.
> If Codd had not thought of applying mathematical relations and
> set theory to computing, the RM would not have been invented. It's
> true that not all mathematics has been applied and may not be applicable
> to computing, but mathematics has already inspired and helped the
> formulation of computing models.
>
Actually, this isn't true. If you go back to Codd's 1970 paper, you'll
see that he makes reference to the relational model of data having been used
in prior work. That prior work did not involve databases, admittedly.
> Probability theory as a generalization of logic is useful
PT cannot be 'a generalization of logic' because PT 'connectives'
(+/*) are not truth functional.
> because in addition to the logically valid modus ponens and
> modus tollens, it also gives a foundation for applying the
> weak syllogisms
It does not -- see above.
First, if you don't believe that PT can be seen as a
generalization of logic, then I have a simple question. In the
limit of all probabilities being either 0 or 1, what does PT
reduce to?
Second, do you understand what "generalization" means? Would
you claim that the gamma function is /not/ a generalization
of the factorial because it is not limited to naturals?
Third, +/* are not the connectives of PT. PT uses the same
connectives as logic: conjunction, disjunction, and negation
(whatever symbol you decide to give them).
Fourth, these connectives (same as logic remember) ARE truth
functional in PT. That is when you apply the connectives to
truth-valued statements you get truth-valued statements
whose truth depends only on the constituent truth-values.
(If you don't agree to this then provide a counter-example.)
Just as when you apply the gamma function to natural numbers
you get natural numbers (no zero quibbles please).
When you apply the connectives to probability-valued
statements you get probability-valued statements whose
probability depends only on the constituent probabilities.
Just as when you apply the gamma function to real numbers
you get real numbers.
This is why PT is a /generalization/ of logic. It reduces to
logic when applied to truth-valued statements. Just as gamma
reduces to factorial for natural arguments. (Again no quibbles
about offset by 1 etc).
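To see what the gamma analogy is claiming, here is a small sketch of my own using Python's standard math module (not code from the thread): gamma agrees with factorial on the naturals but is also defined off them.

```python
# Sketch of the gamma/factorial analogy: math.gamma extends factorial
# beyond the naturals, just as (the claim goes) PT extends logic
# beyond the truth values {0, 1}.
import math

for n in range(1, 7):
    # gamma(n+1) == n!  (the usual offset by one)
    assert math.gamma(n + 1) == math.factorial(n)

print(math.gamma(0.5) ** 2)  # ~3.14159: gamma is defined off the naturals
```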
> > because in addition to the logically valid modus ponens
> > and modus tollens, it also gives a foundation for
> > applying the weak syllogisms
>
> It does not -- see above.
It does. Don't take my word for it, educate yourself. I
suggest starting with:
"Probability Theory: The Logic of Science" - ET Jaynes
-- Keith --
You mean like :
AND (p1, p2) === p1*p2
NOT (p1) === 1 - p1
OR (p1 , p2) === 1 - (1 - p1) * (1 - p2)
and all the rest of the equivalences ?
Wouldn't this lead to a form of logic that is at least as hard to talk
about as 3- or more-valued logics ? And thus unattractive as a
foundation to build our data management mechanisms on ?
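The quoted equivalences assume independent statements; a quick sketch (my own, not from the posts) shows that at the endpoints 0 and 1 they collapse to the ordinary boolean truth tables.

```python
# The equivalences as quoted (they assume independence, as the
# product form does):
def AND(p1, p2):
    return p1 * p2

def NOT(p1):
    return 1 - p1

def OR(p1, p2):
    return 1 - (1 - p1) * (1 - p2)

# At probabilities 0 and 1 these reduce to the classical connectives.
for a in (0, 1):
    for b in (0, 1):
        assert AND(a, b) == (a and b)
        assert OR(a, b) == (a or b)
    assert NOT(a) == (not a)
print("reduces to classical logic at 0/1")
```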
I was NOT disagreeing with the assertion that RM derives from math.
I was disagreeing with the idea that, without Codd, the RM would not have
been invented.
Codd himself made no such claim, and his reference to prior work shows it.
Let P(A and B) = P(A)*P(B) where P stands for the probabilities of the
respective events. Please 'reduce' the above and tell when A and B is
true, i.e. when P(A and B) = 1.
>
> Second, do you understand what "generalization" means? Would
> you claim that the gamma function is /not/ a generalization
> of the factorial because it is not limited to naturals?
>
Irrelevant.
> Third, +/* are not the connectives of PT. PT uses the same
> connectives as logic: conjunction, disjunction, and negation
> (whatever symbol you decide to give them).
Cool, so what are the truth tables for those connectives in PT, or
alternatively what are the derivation rules ?
>
> Fourth, these connectives (same as logic remember) ARE truth
> functional in PT. That is when you apply the connectives to
> truth-valued statements you get truth-valued statements
> whose truth depends only on the constituent truth-values.
> (If you don't agree to this then provide a counter-example.)
Consider the real interval [0..1] with two subintervals A and B whose
respective lengths are say 1/3 and 1/8. Assuming the uniform
distribution, a randomly chosen point would have the probability P(A)
= 1/3 to be in subinterval A and the probability P(B)=1/8 to be in
subinterval B. What is the probability P(A and B), the probability of
the randomly chosen point being both in A and B ?
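The point of this counterexample can be sketched in a few lines (my own illustration, not from the post): hold P(A) = 1/3 and P(B) = 1/8 fixed and slide the subintervals around; P(A and B) changes, so it is not a function of P(A) and P(B) alone.

```python
# Same marginals, different overlap: P(A and B) depends on where the
# subintervals sit, not just on their lengths 1/3 and 1/8.
from fractions import Fraction

third, eighth = Fraction(1, 3), Fraction(1, 8)

def overlap(a_start, b_start):
    # Length of the intersection of A = [a_start, a_start + 1/3]
    # and B = [b_start, b_start + 1/8] under the uniform distribution.
    lo = max(a_start, b_start)
    hi = min(a_start + third, b_start + eighth)
    return max(hi - lo, 0)

print(overlap(0, 0))               # B inside A: P(A and B) = 1/8
print(overlap(0, Fraction(1, 2)))  # disjoint: P(A and B) = 0
```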
> Just as when you apply the gamma function to natural numbers
> you get natural numbers (no zero quibbles please).
>
> When you apply the connectives to probability-valued
> statements you get probability-valued statements whose
> probability depends only on the constituent probabilities.
No.
> Just as when you apply the gamma function to real numbers
> you get real numbers.
Forget the gamma, it's truly irrelevant.
>
> This is why PT is a /generalization/ of logic. It reduces to
> logic when applied to truth-valued statements.
So how about reducing the example I've given above ?
>Just as gamma
> reduces to factorial for natural arguments. (Again no quibbles
> about offset by 1 etc).
>
> > > because in addition to the logically valid modus ponens
> > > and modus tollens, it also gives a foundation for
> > > applying the weak syllogisms
> >
> > It does not -- see above.
>
> It does. Don't take my word for it, educate yourself. I
> suggest starting with:
>
> "Probability Theory: The Logic of Science" - ET Jaynes
That's unfortunate that Jaynes included a chapter on similarities
between logic and probabilistic reasoning in his otherwise interesting
book. If he did not, there would have been much fewer confused
readers.
>
> -- Keith --
P(p1 and p2) is not equal to P(p1)*P(p2) in general, so no such
'generalization' is possible.
> NOT (p1) === 1 - p1
> OR (p1 , p2) === 1 - (1 - p1) * (1 - p2)
> and all the rest of the equivalences ?
>
> Wouldn't this lead to a form of logic that is at least as hard to talk
> about as 3- or more-value logics ?
Multivalued logics are truth functional although their utility for data
management is not completely clear.
I assume by "and" you mean conjunction? Also, do you realize
that the "*" you wrote above is /not/ a connective? (which,
it now seems, you claimed before). Here is the reduction:
A : B : A and B
0 : 0 : 0
0 : 1 : 0
1 : 0 : 0
1 : 1 : 1
> > Second, do you understand what "generalization" means?
> > Would you claim that the gamma function is /not/ a
> > generalization of the factorial because it is not
> > limited to naturals?
> >
>
> Irrelevant.
It's relevant to making sure we agree on what generalization
means. So perhaps you can answer the question next time
around? Is gamma a generalization of factorial?
> > Third, +/* are not the connectives of PT. PT uses the
> > same connectives as logic: conjunction, disjunction, and
> > negation (whatever symbol you decide to give them).
>
> Cool, so what are the truth tables for those connectives
> in PT, or alternatively what are the derivation rules?
For truth-valued statements they are exactly the same as
those for logic! I wrote the one for conjunction above, I'm
sure you know the others.
> > Fourth, these connectives (same as logic remember) ARE truth
> > functional in PT. That is when you apply the connectives to
> > truth-valued statements you get truth-valued statements
> > whose truth depends only on the constituent truth-values.
> > (If you don't agree to this then provide a counter-example.)
>
> Consider the real interval [0..1] with two subintervals A
> and B whose respective lengths are say 1/3 and 1/8.
> Assuming the uniform distribution, a randomly chosen point
> would have the probability P(A) = 1/3 to be in subinterval
> A and the probability P(B)=1/8 to be in subinterval B.
> What is the probability P(A and B), the probability of the
> randomly chosen point being both in A and B ?
What part of "apply the connectives to truth-valued
statements" did you miss? Furthermore, I have no idea what
/statements/ A and B are supposed to be. In PT just as in
logic the connectives apply to /statements/ not lengths,
points, subintervals, etc. Regardless, you assigned
probability rather than truth values. Try again with
truth-valued statements.
> > Just as when you apply the gamma function to natural
> > numbers you get natural numbers (no zero quibbles
> > please).
> >
> > When you apply the connectives to probability-valued
> > statements you get probability-valued statements whose
> > probability depends only on the constituent
> > probabilities.
>
> No.
How informative.
> > Just as when you apply the gamma function to real
> > numbers you get real numbers.
>
> Forget the gamma, it's truly irrelevant.
Once I know we agree on what "generalization" is.
> > This is why PT is a /generalization/ of logic. It
> > reduces to logic when applied to truth-valued
> > statements.
>
> So how about reducing the example I've given above ?
I did, and it was warm and tart and trivial on my tongue.
> > "Probability Theory: The Logic of Science" - ET Jaynes
>
> That's unfortunate that Jaynes included a chapter on
> similarities between logic and probabilistic reasoning in
> his otherwise interesting book. If he did not, there
> would have been much fewer confused readers.
LOL. Amusing dismissal. I guess the title of his book is
also confusing? Have you actually read the book or did you
just look at a TOC after I mentioned it? Because, if you have
read it then you must realize that almost everything I have
said here and the responses to your trivial challenges are
all explained with much greater care in that book and
other sources. So can you point to flaws in his reasoning
then? Since you are adamant that PT is not a generalization
of logic perhaps you can point me to one of the surely
numerous resources demonstrating this?
You are confused. In any standard exposition of PT, the probability
of two independent events is the product of the probabilities of those
two events. In math, the product is designated by '*', not by 'and'.
>Here is the reduction:
>
> A : B : A and B
> 0 : 0 : 0
> 0 : 1 : 0
> 1 : 0 : 0
> 1 : 1 : 1
So your 'reduction' assigns the probability of one to both P(A) and
P(B). Cute. So what kind of events are A and B if the probability of
each of them is one ? Going in the opposite direction what would be
the probability of 'A or B' ? Is it two ? If it's not two, what is
it ?
To counter your possible answer that '1' is a truth value rather than
the probability, recall my original question "Please 'reduce' the
above and tell when A and B is true, i.e. when P(A and B) = 1."
> > > Third, +/* are not the connectives of PT. PT uses the
> > > same connectives as logic: conjunction, disjunction, and
> > > negation (whatever symbol you decide to give them).
> >
> > Cool, so what are the truth tables for those connectives
> > in PT, or alternatively what are the derivation rules?
>
> For truth-valued statements they are exactly the same as
> those for logic! I wrote the one for conjunction above, I'm
> sure you know the others.
The probability statements are not truth-valued (true/false, 1/0),
they have probability valuations. So what would be the probability
valuation table for let's say implication ?
>
> > > Fourth, these connectives (same as logic remember) ARE truth
> > > functional in PT. That is when you apply the connectives to
> > > truth-valued statements you get truth-valued statements
> > > whose truth depends only on the constituent truth-values.
> > > (If you don't agree to this then provide a counter-example.)
> >
> > Consider the real interval [0..1] with two subintervals A
> > and B whose respective lengths are say 1/3 and 1/8.
> > Assuming the uniform distribution, a randomly chosen point
> > would have the probability P(A) = 1/3 to be in subinterval
> > A and the probability P(B)=1/8 to be in subinterval B.
> > What is the probability P(A and B), the probability of the
> > randomly chosen point being both in A and B ?
>
> What part of "apply the connectives to truth-valued
> statements" did you miss? Furthermore, I have no idea what
> /statements/ A and B are supposed to be.
It's obvious that A is 'the randomly chosen point is in the subinterval
A whose length is 1/3' and B is 'the randomly chosen point is in the
subinterval B whose length is 1/8'.
> In PT just as in
> logic the connectives apply to /statements/ not lengths,
> points, subintervals, etc.
I asked you what is the probability of the conjunction (logical 'and')
of two statements A and B, namely 'the randomly chosen point is in the
subinterval A' and 'the randomly chosen point is in the subinterval B'.
> Regardless, you assigned
> probability rather than truth values. Try again with
> truth-valued statements.
I do not want to assign truth values because your original statement
was: "Fourth, these connectives (same as logic remember) ARE truth
functional in PT." Apparently, you've made a statement that the
connectives are truth functional with respect to probability
assignments. Did you make your assertion with respect to
probability assignments? Yes or no? If yes, do you still insist that
P(A and B) depends just on P(A) and P(B) ? If you made your statement
with respect to truth value assignments, you did not say anything
relevant to PT.
>
> > > Just as when you apply the gamma function to natural
> > > numbers you get a natural numbers (no zero quibbles
> > > please).
> > >
> > > When you apply the connectives to probability-valued
> > > statements you get probability-valued statements whose
> > > probability depends only on the constituent
> > > probabilities.
> >
> > No.
>
> How informative.
So what is the probability of the event ('the randomly chosen point is
in the subinterval A' and 'the randomly chosen point is in the
subinterval B'), assuming you know the 'constituent' probabilities
and relying on your assertion that "probability depends only on the
constituent probabilities" ?
>
> > > Just as when you apply the gamma function to real
> > > numbers you get real numbers.
> >
> > Forget the gamma, it's truly irrelevant.
>
> Once I know we agree on what "generalization" is.
>
> > > This is why PT is a /generalization/ of logic. It
> > > reduces to logic when applied to truth-valued
> > > statements.
> >
> > So how about reducing the example I've given above ?
>
> I did, and it was warm and tart and trivial on my tongue.
You did not, see above.
>
> > > "Probability Theory: The Logic of Science" - ET Jaynes
> >
> > That's unfortunate that Jaynes included a chapter on
> > similarities between logic and probabilistic reasoning in
> > his otherwise interesting book. If he did not, there
> > would have been much fewer confused readers.
>
> LOL. Amusing dismissal. I guess the title of his point is
> also confusing? Have you actually read the book or did you
> just look at a TOC after I mentioned it? Cause, if you have
> read it then you must realize that almost everything I have
> said here and the responses to your trivial challenges are
> all explained with much greater care in the that book and
> other sources. So can you point to flaws in his reasoning
> then? Since you are adamant that PT is not a generalization
> of logic perhaps you can point me to one of the surely
> numerous resources demonstrating this?
How about a) proving your statement that "probability depends only on
the constituent probabilities" using my trivial challenge; b)
explaining what kind of events A and B are if P(A) = 1, P(B) = 1, and
P(A and B) = 1, as well as what would be the probability of 'A or B'
assuming your probability assignments of one.
When you've dealt successfully with (a) and (b), we can graduate to the
book discussion.
Wow! vc is going off the VI deep end at the moment. "no such
'generalization' is possible"? Saying that PT is not a
generalization is one thing; but, none possible??
Erwin, what vc was referring to is that
P(AB) = P(A|B)P(B) -or-
P(AB) = P(B|A)P(A)
where | means given and AB is short for "A and B". This is
called the product rule. Something that vc seems not to know
(given his questions in the other post) is that in the limit
of true (1) and false (0) the conditional probability
product rule reduces to the logical conjunction truth
table. Here is the proof
g : P(A) = 0
p : P(AB) = P(B|A)P(A)
u : P(AB) = 0
g : P(B) = 0
p : P(AB) = P(A|B)P(B)
u : P(AB) = 0
g : P(A) = 1
g : P(B) = 1
s : P(~B) = 0
m : P(A) = P(AB) + P(A~B)
p : P(A) = P(AB) + P(A|~B)P(~B)
u : P(A) = P(AB)
c : P(AB) = P(A)
u : P(AB) = 1
thus
P(A) : P(B) : P(AB)
0 : 0 : 0
0 : 1 : 0
1 : 0 : 0
1 : 1 : 1
descriptions
g : given
p : product rule
s : sum rule
m : marginalization (derived from sum rule)
u : substitution
c : commutation of equality
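The reduction derived above can also be cross-checked by brute force. Here is a sketch of my own (not from the post) that enumerates joint distributions over A and B on a coarse grid and confirms that whenever both marginals are 0 or 1, P(AB) is forced onto the conjunction table, with no independence assumption.

```python
# Brute-force check of the limiting-case table: when the marginals
# P(A) and P(B) are each 0 or 1, the joint cell P(AB) is forced to
# the logical AND of the marginals, whatever the dependence.
from itertools import product

def forced_joint_cases():
    """Enumerate joints (p11, p10, p01, p00) on a coarse grid; return
    (P(A), P(B), P(AB)) for every joint whose marginals are both 0 or 1."""
    cases = []
    grid = [i / 4 for i in range(5)]  # 0, 1/4, 1/2, 3/4, 1 (exact in binary)
    for p11, p10, p01 in product(grid, repeat=3):
        p00 = 1 - p11 - p10 - p01
        if p00 < 0:
            continue  # not a probability distribution
        pa, pb = p11 + p10, p11 + p01  # the marginals
        if pa in (0, 1) and pb in (0, 1):
            cases.append((pa, pb, p11))  # P(AB) is the p11 cell
    return cases

# Every extreme case lands on the conjunction truth table above.
assert all(pab == (1 if pa == 1 and pb == 1 else 0)
           for pa, pb, pab in forced_joint_cases())
print("conjunction table confirmed in the 0/1 limit")
```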
-- Keith --
You are confused. Connectives have /nothing/ to do with real
multiplication (probabilities are real numbers and * is real
multiplication). Connectives are used to build compound
/statements/ they are not operators on real numbers. Do you
understand? Let me try to put it more simply. The "*" you
wrote above is the real operator "multiplication". It is NOT
a connective. You are confusing statements with numbers (as
you later confuse statements with events).
> >Here is the reduction:
> >
> > A : B : A and B
> > 0 : 0 : 0
> > 0 : 1 : 0
> > 1 : 0 : 0
> > 1 : 1 : 1
>
>
> So your 'reduction' assigns the probability of one to both
> P(A) and P(B). Cute.
Huh? What is cute about taking the limit? You asked for a
truth table, remember? If you want more then look at my
other post that provides a proof of the above reduction for
the general case ie not just when A and B are independent as
you gave in your challenge.
> So what kind of events are A and B if the probability of
> each of them is one?
A and B are NOT EVENTS! They are STATEMENTS! Just like logic,
PT as a generalization of logic deals with /statements/ not
/events/. So you are talking nonsense at the moment.
> To counter your possible answer that '1' is a truth value
> rather than the probability, recall my original question
> "Please 'reduce' the above and tell when A and B is true,
> i.e. when P(A and B) = 1."
LOL. 1 is a truth-value and 1 is a probability. 0.5 is not a
truth-value and 0.5 is a probability. 0 is a truth-value and
0 is a probability. Does this clear it up?
> > > > Third, +/* are not the connectives of PT. PT uses
> > > > the same connectives as logic: conjunction,
> > > > disjunction, and negation (whatever symbol you
> > > > decide to give them).
> > >
> > > Cool, so what are the truth tables for those
> > > connectives in PT, or alternatively what are the
> > > derivation rules?
> >
> > For truth-valued statements they are exactly the same as
> > those for logic! I wrote the one for conjunction above,
> > I'm sure you know the others.
>
> The probability statements are not truth-valued
> (true/false, 1/0), they have probability valuations.
Probability statements CAN be truth-valued. Hopefully my
last statement helped you see this.
> > > > Fourth, these connectives (same as logic remember)
> > > > ARE truth functional in PT. That is when you apply
> > > > the connectives to truth-valued statements you get
> > > > truth-valued statements whose truth depends only on
> > > > the constituent truth-values. (If you don't agree
> > > > to this then provide a counter-example.)
>
> I do not want to assign truth values because your original
> statement was: "Fourth, these connectives (same as logic
> remember) ARE truth functional in PT." Apparently, you've
> made a statement that the connectives are truth functional
> with respect to probability assignments.
No I didn't. My statement is very clear and says nothing of
the kind. "when you apply the connectives to truth-valued
statements you get truth-valued statements ..."
> Did you make your assertion with respect to
> probability assignments? Yes or no?
It's clear that the assertion is with respect to truth-value
assignments which is a special case ie specialization ie
subset of probability assignments. In other words, every
truth-value assignment is a probability assignment. NOT
every probability assignment is a truth-value assignment.
This is very simple do you understand?
> If yes, do you still insist that P(A and B) depends just
> on P(A) and P(B)? If you made your statement with
> respect to truth value assignments, you did not say
> anything relevant to PT.
I said exactly what I meant to say: PT is a /generalization/!
Look do you understand that every integer value is also a
real value (no quibbling about representations in set theory
please)? That every truth-value (only two 0 and 1) is a
probability value? That every truth-value assignment is a
probability assignment?
> So what is the probability of the event ('the randomly
> chosen point is in the subinterval A' and 'the randomly
> chosen point is in the subinterval B') assuming you know
> the 'constituent' probabilities and relying on your
> assertion that "probability depends only on the
> constituent probabilities". ?
> Going in the opposite direction what would be the
> probability of 'A or B' ? Is it two? If it's not two,
> what is it?
> So what would be the probability valuation table for let's
> say implication ?
> How about a) proving your statement that "probability
> depends only on the constituent probabilities" using my
> trival challenge; b) explaining what kind of events A and
> B are if P(A) = 1, P(B) = 1, and P(A and B) = 1 as well as
> what would be the probability of 'A or B' assuming your
> probability assignments of one.
> When you've dealt successfully with (a) and (b), we can
> gradute to the book discussion.
Why are you obsessed with "events" when we are talking about
logic and its generalization which applies to /statements/?
Why are you asking me to spoon-feed you truth tables and
basic probability theory? This is not the proper forum to
teach you such fundamentals. If you /still/ require more
lessons we will have to move to another group.
Let's recap.
You confused real multiplication with a connective.
You seemingly failed to understand that 1 and 0 are BOTH
truth-values and probabilities (part of the entire point
that probability theory is a /generalization/ of logic).
You refused to answer (twice now) a simple question (gamma)
designed simply to gauge whether you and I have the same
understanding of the word /generalization/ and thus whether
we are even speaking the same language.
You demanded a conjunction truth table for independent
statements. I gave it to you. Your only response was to
dismiss it as "cute".
In another post I even gave a /proof/ for the general (ie
no independence assumption) reduction of conjunction in the
logical limit. Let's see if you will respond or can even
understand the proof.
You refused to provide an example showing that the usual
logical connectives in PT are no longer truth-functional in
PT. The very definition of truth-functional of course
requiring truth-values which you "do not want to assign".
Seemingly because you wrongly believe truth-values are not
(also) probabilities.
You jumped to irrelevant "random" "event" terminology when
we are talking about logic and a generalization of logic
both of which concern themselves with /statements/ not
"events".
You comment on a book that you do not even seem to have
comprehended.
And finally you decided to be a condescending jerk with your
"graduate" comment (among others). (Hence my change of tone
in this post). I've done enough graduating in my life,
probably more than you.
Your arguments have been demolished and your responses are
starting to seem like VI commentary. Go educate yourself.
Start by comprehending the book you claim to have read.
-- Keith --
Recall that the OP claimed that PT is a logic 'generalization' in the
sense that 'probability depends only on the constituent probabilities'.
He failed to prove the bizarre assertion and refused or was unable to
solve the trivial puzzle that disproves his statement.
However, below, instead of talking about such 'generalization', he
attempts to demonstrate 'reducing' probabilistic statements to their
logical truth-valued counterparts by conjuring up, in vain, the spirit
of conditional probability:
> Erwin, what vc was referring to is that
>
> P(AB) = P(A|B)P(B) -or-
> P(AB) = P(B|A)P(A)
>
Note, that I said nothing about conditional probabilities. I merely
requested to compute the P(A and B) probability in terms of P(A) and
P(B) which was promised by the OP (see above).
> where | means given and AB is short for "A and B". This is
> called the product rule. Something that vc seems not to know
> (given his questions in the other post) is that in the limit
> of true (1) and false (0) the conditional probability
> product rule reduces to the logical conjunction truth
> table. Here is the proof
>
> g : P(A) = 0
> p : P(AB) = P(B|A)P(A)
> u : P(AB) = 0
Unfortunately, it's no proof but just mindless playing with formulas.
The conditional probability is *defined* as
P(B|A) def P(A and B)/P(A)
the requirement for such a definition being that P(A) <> 0, naturally.
The definition can be found in any introductory PT textbook.
Even if P(A) were <> 0, the step 'p' is invalid since P(B|A) is
unknown; only P(A) and P(B) are given and P(A and B) has to be
computed. It's, like, secondary school algebra.
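A minimal sketch of that textbook definition, with the P(A) <> 0 restriction made explicit (my own illustration, not code from the thread), using the earlier interval example's numbers:

```python
# The definition as quoted: P(B|A) = P(A and B) / P(A), defined only
# when P(A) is nonzero.
from fractions import Fraction

def cond_prob(p_a_and_b, p_a):
    if p_a == 0:
        raise ValueError("P(B|A) is undefined when P(A) = 0")
    return p_a_and_b / p_a

# With P(A) = 1/3 and the point uniform on [0,1], if B happens to lie
# inside A (so P(A and B) = 1/8), then P(B|A) = (1/8) / (1/3) = 3/8.
print(cond_prob(Fraction(1, 8), Fraction(1, 3)))  # 3/8
```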
>
> g : P(B) = 0
> p : P(AB) = P(A|B)P(B)
> u : P(AB) = 0
>
See above.
> g : P(A) = 1
> g : P(B) = 1
> s : P(~B) = 0
> m : P(A) = P(AB) + P(A~B)
> p : P(A) = P(AB) + P(A|~B)P(~B)
> u : P(A) = P(AB)
> c : P(AB) = P(A)
> u : P(AB) = 1
This is even funnier. First, we do not know what P(A|~B) is (see
above), and second, the question is what kind of events might A and B
be if the probability of either is one ? What about P(A or B) given
that the respective probabilities are one ? Is it two by any chance ?
(I asked the same question in another message).
>
> thus
>
> P(A) : P(B) : P(AB)
> 0 : 0 : 0
> 0 : 1 : 0
> 1 : 0 : 0
> 1 : 1 : 1
>
Unfortunately, there can be no 'thus'.
Any probability book will tell you that there is a notion of sample
space, which is defined as the set of all possible outcomes of some
experiment (e.g. throwing a die). Each subset of such a space is called
an event. However, the terminology does not matter; you can call an
event a 'sentence' describing such an event, it really changes nothing.
[...]
> LOL. 1 is a truth-value and 1 is a probability. 0.5 is not a
> truth-value and 0.5 is a probability. 0 is a truth-value and
> 0 is a probability. Does this clear it up?
No, it does not. What kind of sentences do you have in mind whose
probability is one ? Please provide a meaningful example. What would
be the probability of 'A or B' given that probabilities of A and B are
one ? I asked the question before but you did not answer. So what is
it, P(A or B) assuming P(A)=P(B)=1 ?
>
> > > > > Third, +/* are not the connectives of PT. PT uses
> > > > > the same connectives as logic: conjunction,
> > > > > disjunction, and negation (whatever symbol you
> > > > > decide to give them).
> > > >
> > > > Cool, so what are the truth tables for those
> > > > connectives in PT, or alternatively what are the
> > > > derivation rules?
> > >
> > > For truth-valued statements they are exactly the same as
> > > those for logic! I wrote the one for conjunction above,
> > > I'm sure you know the others.
> >
> > The probability statements are not truth-valued
> > (true/false, 1/0), they have probability valuations.
>
> Probability statements CAN be truth-valued. Hopefully my
> last statement helped you see this.
You earlier claimed (correctly) that a probability statement is real
valued. Are you changing your mind ? If a statement is truth valued,
it's a logical statement. OK, one can say that the probability of one
(a real number) is *interpreted* as a truth value (true), but a question
arises as to what two (or more) statements with probability one might
mean. You've failed so far to provide a clear explanation and
examples of such statements, in particular what the probability of the
disjunction of those statements might be.
>
[...]
Irrelevant stuff skipped.
> You seemingly failed to understand that 1 and 0 are BOTH
> truth-values and probabilities (part of the entire point
> that probability theory is a /generalization/ of logic).
If the real number 1 is interpreted as a probability, then see above.
[...]
> You demanded a conjunction truth table for independent
> statements. I gave it to you. Your only response was to
> dismiss it as "cute".
It is 'cute' for the reasons I described above, namely it's unclear
what kind of 'sentences' you can imagine, all with probabilities of
one. So far, you've failed to provide an answer. It's not such a hard
question, is it ?
> You refused to provide an example showing that the usual
> logical connectives in PT are no longer truth-functional in
> PT.
I did provide a trivial example/puzzle that you refused or were unable
to explain.
>
> You jumped to irrelevant "random" "event" terminology when
> we are talking about logic and a generalization of logic
> both of which concern themselves with /statements/ not
> "events".
That's the standard PT terminology.
[...]
To sum up, the OP claimed two things:
1. PT is a logic generalization. My response was that PT is not truth
functional in the sense that propositional logic is, namely that the
compound sentence's truth, in logic, is determined by its constituents'
truth values, whereas in PT (as a purported generalization) the
probability of the compound statement is *not* determined by the
probabilities of its constituent statements. To which you responded:
"When you apply the connectives to probability-valued
statements you get probability-valued statements whose
probability depends only on the constituent probabilities.
"
I've provided an example, trivial to anyone who's read an introduction
to PT, and asked to compute P(A and B) given P(A) and P(B). There has
been no answer yet. Are you unable to answer the question?
2. In some cases, namely when probabilities are 0 and 1, probabilistic
statements 'reduce' to logical statements. I asked you to
provide two or more statements whose probability would be one and show
what the probability of the disjunction of such statements might be.
There has been no answer. Are you unable to answer the question?
Let's start from scratch, forget the talk about valuations and what is
and what is not a connective, and try answering (1) and (2), especially
(1) in the light of your assertion that 'you get probability-valued
statements whose probability depends only on the constituent
probabilities'.
This confirms that you have not comprehended Jaynes' book or
any other material relevant to probability theory as a
generalization of logic. Had you read and comprehended that
book then you would know that notions of sample space and
events are not needed to formulate probability theory. You
would have known that there are at least two formulations:
Kolmogorov's (the one you refer to) and the arguably more
general Cox formulation. And, finally, you would have known
that anyone referring to probability theory as generalized
logic is without question referring to the Cox formulation.
> Each subset of such space is called an event. However, the
> terminology does not matter, you can call an event a
> 'sentence' describing such event, it really changes
> nothing.
Viewing PT from a Cox formulation versus Kolmogorov changes
A LOT. Since you are ignorant of the Cox formulation you
simply do not understand what changes.
> > LOL. 1 is a truth-value and 1 is a probability. 0.5 is
> > not a truth-value and 0.5 is a probability. 0 is a
> > truth-value and 0 is a probability. Does this clear it
> > up?
>
> No, it does not.
Please tell me you are joking? You do not understand that 1
is both a truth-value and a probability?
> > Probability statements CAN be truth-valued. Hopefully my
> > last statement helped you see this.
>
> You earlier claimed (correctly) that a probability
> statement is real valued. Are you changing your mind? If a
> statement is truth valued, it's a logical statement. OK,
> one can say that the probability of one (a real number) is
> *interpreted* as a truth value (true), but a question
> arises as to what two (or more statements) with
> probability one might mean.
Finally you see that a probability of 1 can at least be
*interpreted* as a truth value. Of course, who knows what
you had in mind when adding the word *interpreted*. And no,
I haven't "changed my mind". I have consistently maintained
that a probability of 1 *is* a truth-value. Remember, this is
the Cox formulation. If you were more knowledgeable you would
have immediately known this when reading my original post:
"And interestingly, one view of logic is as a
specialization of conditional probability theory. One that
deals only with certainty (1) and impossibility (0) rather
than a range of probability." -- KHD
Here "one view" = "Cox formulation".
And by the way, no "question arises as to what ...
probability one might mean" in Cox. You really should learn
more about this formulation. It's quite interesting although
getting a copy of Cox's paper may be somewhat difficult for
some given its age.
> > You refused to provide an example showing that the usual
> > logical connectives in PT are no longer truth-functional
> > in PT.
>
> I did provide a trivial example/puzzle that you refused or
> were unable to explain.
No, you didn't! Stop snipping relevant context. Here is the
context restored:
> > You refused to provide an example showing that the usual
> > logical connectives in PT are no longer truth-functional
> > in PT. The very definition of truth-functional of course
> > requiring truth-values which you "do not want to assign".
> > Seemingly because you wrongly believe truth-values are not
> > (also) probabilities.
Your example was not an example of non-truth-functional
connectives because you applied them to non-truth-values!
> > You jumped to irrelevant "random" "event" terminology
> > when we are talking about logic and a generalization of
> > logic both of which concern themselves with /statements/
> > not "events".
>
> That's the standard PT terminology.
No, that is standard Kolmogorov terminology, not standard in
the "one view of probability theory", i.e. the Cox formulation, which
is clearly what is being described. This is the nature of VI.
You are clearly ignorant of any other formulation of
probability theory (even though you implied you had read a
book steeped in the Cox formulation) besides Kolmogorov.
Being thus ignorant you decided to vociferously attack a
remark made by someone that knew about alternatives.
> 1. PT is a logic generalization.
No, I, the OP, claimed, quote:
"interestingly, one view of logic is as a specialization
of conditional probability theory. One that deals only
with certainty (1) and impossibility (0) rather than a
range of probability." -- KHD
In other words, to paraphrase myself and Cox:
"interestingly, one view of probability theory, the Cox
formulation, is as a generalization of logic. One that
deals with a range of probability corresponding to a
degree of rational belief bounded in the extremes by
certainty (1) and impossibility (0)."
> My response was that PT is not truth functional in the
> sense that propositional logic is, namely that the truth of
> a compound sentence, in logic, is determined by its
> constituents' truth values
No, your response was, quote:
"PT cannot be 'a generalization of logic' because PT
'connectives' (+/*) are not truth functional." -- vc
Which, I tried to explain to you is wrong for two reasons.
First, (+/*) are NOT connectives in PT they are the real
operators addition and multiplication. Second, PT uses the
SAME connectives as logic. The connectives haven't changed
they are still truth-functional as well as probability-
functional.
> whereas in PT (as a purported generalization) the
> probability of the compound statement is *not* determined
> by the probabilities of its constituent statements. To
> which you responded:
>
> "When you apply the connectives to a probability-valued
> statements you get probability-valued statements whose
> probability depends only on the constituent probabilities."
No, you have completely screwed up the context. That statement
of mine was a response to your
"PT cannot be 'a generalization of logic' because PT
'connectives' (+/*) are not truth functional." -- vc
You then responded TO ME with the simple "No". I have never
responded to your claim above as you seem to recognize
below.
> I've provided an example, trivial to anyone who's read an
> introduction to PT, and asked to compute P(A and B) given
> P(A) and P(B). There has been no answer yet. Are you
> unable to answer the question ?
Rest assured there is an answer and I am able to answer
it. I choose not to answer for two reasons. First, you are
acting like a VI and I'm not here to teach you basic theory,
in this case the Cox formulation. Second, responding to your
challenge is not needed to refute your nonsense claim that
"PT 'connectives' (+/*) are not truth functional," as I have
done many times now.
["what does P(A) = 1 mean" nagging]
> > You demanded a conjunction truth table for independent
> > statements. I gave it to you. Your only response was to
> > dismiss it as "cute".
>
> It is 'cute' for the reasons I described above, namely
> it's unclear what kind of 'sentences' you can imagine, all
> with probabilities of one. So far, you've failed to
> provide an answer. It's not such a hard question, is it ?
> What kind of sentences do you have in mind whose
> probability is one?
> Please provide a meaningful example.
What are you talking about? It's not hard. It's trivial and
irrelevant! For proving probability theorems or that
probability is a generalization of logic it does not matter
what A, B, C etc stand for! "My first name is Keith" "My
last name is Duggar" "My first name is Keith or my last name
is Duggar". Does that satisfy you? I hope so because it is
TOTALLY irrelevant (and somewhat VI honestly).
[disjunction nagging]
> 2. In some cases, namely when probabilities are 0 and 1,
> the probabilistic statements 'reduce' to logical
> statements. I asked to provide two or more statements
> whose probability would be one and show what the
> probability of the disjunction of such statements might
> be. There has been no answer. Are you unable to answer the
> question ?
> What would be the probability of 'A or B' given that
> probabilities of A and B are one ? I asked the question
> before but you did not answer. So what is it, P(A or B)
> assuming P(A)=P(B)=1 ?
> You've failed so far to provide a clear explanation and
> and examples of such statements, in particular what the
> probability of the disjunction of those statements might
> be.
Dude, did you bother to look at my conjunction proof in
another post? Given that what makes you think I cannot prove
the same (ie reduction to logic) for disjunction? Why can't
you attempt it yourself? Why can't you comprehend Jaynes'
book? Why can't you admit that before this thread you were
ignorant of Cox probability? Why can't you admit that this
means you have been acting like a VI so far?
One last time I will provide one of these basic proofs. And
I will provide more than you ask for, that is below is a
proof for the complete reduction to logic in the limit of
certainty (1) and impossibility (0).
P(A) = 1
P(A or B) = P(~(~A and ~B))
P(A or B) = 1 - P(~A and ~B)
P(A or B) = 1 - P(~B|~A)P(~A)
P(A or B) = 1 - P(~B|~A)(1-P(A))
P(A or B) = 1 - P(~B|~A)(0)
P(A or B) = 1 - 0
P(A or B) = 1
P(B) = 1
(same as above just swap A and B)
P(A) = 0
P(B) = 0
P(~A) = 1
P(~B) = 1
P(~A and ~B) = 1 [conjunction proved in previous post]
P(A or B) = P(~(~A and ~B))
P(A or B) = 1 - P(~A and ~B)
P(A or B) = 1 - 1
P(A or B) = 0
thus
A : B : A or B
0 : 0 : 0
0 : 1 : 1
1 : 0 : 1
1 : 1 : 1
Again back to your "what does the sentence mean" nagging, it
does not matter what A and B mean. Just as their meaning (ie
their interpretation under some model) does not matter when
proving logical theorems.
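The corner-case reduction above can be checked mechanically. Below is a minimal Python sketch (the function name is my own, not from the thread): it evaluates P(A or B) by inclusion-exclusion at the four 0/1 corners, where P(A and B) is forced to min(P(A), P(B)) regardless of any dependence between A and B, and recovers the disjunction truth table.

```python
from itertools import product

def p_or(pa, pb, pab):
    """Inclusion-exclusion: P(A or B) = P(A) + P(B) - P(A and B)."""
    return pa + pb - pab

# At the 0/1 corners P(A and B) is forced regardless of dependence:
# it is 0 if either marginal is 0, and 1 if both are 1, i.e. min(pa, pb).
for pa, pb in product([0, 1], repeat=2):
    pab = min(pa, pb)  # the only value consistent with the corner marginals
    print(pa, pb, p_or(pa, pb, pab))
```

Running this prints the same table as the proof: 0 0 0, 0 1 1, 1 0 1, 1 1 1.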
> Let's start from scratch, forget the talk about valuations,
> what is and what is not a connective and try answering (1)
> and (2), especially (1) in the light of your assertion that
> 'you get probability-valued statements whose probability
> depends only on the constituent probabilities'.
No, I don't want to start over. I've said and /proven/ all
that was needed. I think it's time for you to admit that
"PT cannot be 'a generalization of logic' because PT
'connectives' (+/*) are not truth functional."
is nonsense, that you were entirely ignorant of the Cox
formulation of probability theory, that this formulation is
a generalization of logic, and that you did not comprehend
(or even read) Jaynes' book even though you commented on it.
If you cannot admit any of these, then it is time for you to
provide a reference to a respected source that echoes your
claims that
"PT cannot be 'a generalization of logic' because PT
'connectives' (+/*) are not truth functional."
and that PT "does not" provide a foundation for applying the
weak syllogisms. Just as I provided Jaynes' book and Cox's
work/papers that more thoroughly demonstrate the contrary.
-- Keith --
No, idiot. That is not the sense in which I claimed "one
view" of probability theory is as a generalization of logic.
The "bizarre" assertion was part of an elaboration in the
refutation of your idiotic claim
"PT 'connectives' (+/*) are not truth functional." -- vc
In which you confused addition and multiplication with
connectives and failed to realize that PT uses the SAME
connectives as logic.
> However, below, instead of talking about such
> 'generalization, he attempts to demonstrate 'reducing'
> probabilistic statements to their logical truth valued
> counterparts by conjuring up, in vain, the spirit of
> conditional probability:
Mindless drivel. "conjuring"? "spirit"? Absolutely senseless.
> > Erwin, what vc was referring to is that
> >
> > P(AB) = P(A|B)P(B) -or-
> > P(AB) = P(B|A)P(A)
> >
>
> Note, that I said nothing about conditional
> probabilities.
No but you claimed "P(p1 and p2) is not equal P(p1)*P(p1) in
general" which is of course pointing to the possibility of
p1 and p2 being dependent which is of course equivalent to a
conditional statement P(p1|p2) = P(p1).
> I merely requested to compute the P(A and
> B) probability in terms of P(A) and P(B) which was
> promised by the OP (see above).
Which the proof you called "mindless playing with formulas"
provides. How about this: since you believe I'm wrong, why
don't YOU provide a proof? I would love to see what you
consider is /not/ "mindless playing with formulas".
> > where | means given and AB is short for "A and B". This
> > is called the product rule. Something that vc seems not
> > to know (given his questions in the other post) is that
> > in the limit of true (1) and false (0) the conditional
> > probability product rule reduces to the logical
> > conjunction truth table. Here is the proof
> >
> > g : P(A) = 0
> > p : P(AB) = P(B|A)P(A)
> > u : P(AB) = 0
>
> Unfortunately, it's no proof but just mindless playing
> with formulas. The conditional probability is *defined*
> as
>
> P(B|A) def P(A and B)/P(A)
>
> the requirement for such definition being that P(A) <>0,
> naturally. The definition can be found in any
> introductory PT textbook.
LMAO. "mindless playing with formulas"? You, vc, are a
mindless idiot who has an extremely limited understanding of
probability theory. You clearly know nothing about the Cox
formulation or his derivation of the product rule IN THE
FORM I gave above as p) from functional requirements
ALONE. Your total knowledge is apparently limited to a
(small) part of the Kolmogorov formulation. In Cox's
formulation there are NO such <>0 requirements. You are a
VI. Go educate yourself and stop spouting nonsense. Go pick
up your copy of Jaynes' book, turn to Chapter 2 and start
reading. After you get through the derivation of an equation
called "the product rule" (equation 2.18 in my copy), stare
at it, think about the derivation, realize that you are
wrong, then come back here and apologize, and finally STFU.
> Even, if P(A) were <> 0, the step 'p' is invalid since
> P(B|A) is unknown, only P(A) and P(B) are given and P(A
> and B) has to be computed. It's, like, secondary school
> algebra.
It's called the product rule, idiot. It is ALWAYS valid
regardless of the values. Just like modus ponens or any
other /rule/ is always valid. It's, like, duh, the same
reason 2x = x + x is valid even though x is unknown.
You are turning out to be an excellent example of VI.
> > g : P(A) = 1
> > g : P(B) = 1
> > s : P(~B) = 0
> > m : P(A) = P(AB) + P(A~B)
> > p : P(A) = P(AB) + P(A|~B)P(~B)
> > u : P(A) = P(AB)
> > c : P(AB) = P(A)
> > u : P(AB) = 1
>
> This is even funnier. First, we do not know what P(A|~B) is
> (see above)
It doesn't matter, idiot. It's a real number between 0 and 1
and it's multiplied by 0. Which means? Duh, the product is 0
REGARDLESS of its value. Do you understand this? It's like
elementary school arithmetic.
> and second the question is what kind of events might A and
> B be if the probability of either is one ? What about P(A
> or B) given the respective probabilities are one ? Is it
> two by any chance ? (I asked the same question in another
> message).
It doesn't matter what A and B are, idiot. I explained this
in another post AND answered your ignorant disjunction
question.
> > thus
> >
> > P(A) : P(B) : P(AB)
> > 0 : 0 : 0
> > 0 : 1 : 0
> > 1 : 0 : 0
> > 1 : 1 : 1
> >
>
> Unfortunately, there can be no 'thus'.
Unfortunately, you are acting like a total VI moron at this
point. Read Jaynes' book or Cox's paper, comprehend them,
educate yourself. After that come here and apologize for
being a VI. Before that STFU.
-- Keith --
In order to give vc a specific equation reference to stare
at, I had to pull out my copy of
"Probability Theory: The Logic of Science" by E. T. Jaynes
Reminded of what an excellent book it is (even though sadly
Jaynes died before completing it), I started reading the
first few chapters again. I ran across this statement by
Jaynes:
"Aristotelian deductive logic is the limiting form of our
rules for plausible reasoning, as the robot becomes more
certain of its conclusions."
This is nearly an exact paraphrase of my original comment
which vc vociferously and ignorantly attacked. Chapter 2 of
the book also contains derivations nearly identical to those
I have posted here that vc called "mindless playing with
formulas". So at least I'm in good company, vc, you
vociferous ignoramus.
Bob Badour, you mentioned in this thread that you "feel
cheated that [your] education failed to teach [you] enough
useful statistics". I don't know how much time you have to
study these days but I would like to recommend Jaynes' book
and also for an introduction focusing on practical use:
"Data Analysis: A Bayesian Tutorial" D. S. Sivia
I think you will find that learning probability theory from
the Cox perspective will radically improve and simplify your
understanding of both statistics and probability theory. The
same advice goes for anyone out there in a similar situation.
Reading the first few chapters of both Sivia and Jaynes as
well as Jaynes' appendix "Other Approaches To Probability
Theory" makes for a great start.
After reading those you will think "why the frak didn't they
teach me this in school."
-- Keith --
PS. Let me also take this opportunity to correct a typo in
my previous post before the VI has a mental orgasm.
Keith H Duggar wrote:
> No but you claimed "P(p1 and p2) is not equal P(p1)*P(p1)
> in general" which is of course pointing to the possibility
> of p1 and p2 being dependent which is of course equivalent
> to a conditional statement P(p1|p2) = P(p1).
that should have been P(p1|p2) != P(p1).
(Quote, with point numbers added and asterisks for footnotes)
Some Generalizations
JOIN, UNION, and INTERSECTION were all defined originally as dyadic
operators (i.e., each took exactly two relations as operands);* as we
have seen, however, they can be unambiguously generalized to become
n-adic operators for arbitrary n > 1. But what about n = 1? Or n = 0? It
turns out to be desirable, at least from a conceptual point of view, to
be able to perform "joins", "unions", and "intersections" of (a) just a
single relation and (b) no relations at all (even though Tutorial D
provides no direct syntactic support for any such operations). Here are
the definitions. Let s be a set of relations (all of the same relation
type RT in the case of union and intersection). Then:
(1) If s contains just one relation r, then the join, union, and
intersection of all relations in s are all defined to be simply r.
(2) If s contains no relations at all, then:
(2.1) The join of all relations in s is defined to be TABLE_DEE (the
identity with respect to join).
(2.2) The union of all relations in s is defined to be the empty
relation of type RT.
(2.3) The intersection of all relations in s is defined to be the
"universal" relation of type RT - that is, that unique relation of type
RT that contains all possible tuples with heading H, where H is the
heading of type RT.**
* MINUS is dyadic, too. By contrast, restrict and project are monadic
operators.
** We note in passing that the term universal relation is usually used
in the literature with a very different meaning. ...
(End of quote)
In the exercises for that chapter (#7.10), Date asks "Given that
intersect is a special case of join, why do not both operators give the
same result when applied to no relations at all?"
My guess at the motivation for all this is that it is at least partly to
define things completely enough to allow prefix notation, e.g.
JOIN/INTERSECT/UNION (r1, r2, ..., rn). If that's so, then I can see
that we would want JOIN() (by this I mean the JOIN of nothing) to give
TABLE_DEE, if we wanted (JOIN())JOIN(r1) to give r1. Similarly for
INTERSECT() and other combinations of the three operators.
That seems a practical motivation. In terms of relations and/or set
theory/predicate calculus, can anybody give a more theoretical one?
p
He simply defined them as the identity elements for the specific
operations just as one defines any aggregate/fold of zero elements using
the identity element for the base operation.
In other words, I'm guessing, "to make it all work/to guarantee the ops
always return relations", i.e., the motivation is practical only,
analogous to notions such as the "empty product" (such as in
http://en.wikipedia.org/wiki/Empty_product)? If so, I think I'm content
with that, e.g., thinking of the empty set as just a gizmo to enable
various operations we desire.
p
I'm easily confused when I try to compare the two, maybe because of
thinking that the set of all possible intersections is a subset of all
possible joins. Red herring maybe due to introducing def'ns of
competing operators, same phenomenon that makes contract law such a mess.
(One practical reason I like this "universal" relation, and maybe it's the
same reason I like <OR> and <NOT> in TTM, is that it is a 'defined' way
to produce the extension of a domain that is faster than typing it in!)
p
Exactly so. And exactly the same way Graham, Knuth and Patashnik define
aggregation in _Concrete Mathematics_. We define the iteration over zero
items as the identity element of the base operation. Sum() = 0,
Product() = 1, Min() = max_value_of_type, Max() = min_value_of_type, etc.
Mathematicians define aggregations that way on utilitarian grounds.
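The identity-element convention described here is easy to mirror in code. A minimal Python sketch (the `fold` helper is my own name, not from the thread): an aggregate over zero items returns the base operation's identity, which is the arithmetic and set analogue of Date's TABLE_DEE, empty-relation, and "universal"-relation choices.

```python
from functools import reduce

def fold(op, items, identity):
    """Aggregate items with op; an empty fold yields op's identity."""
    return reduce(op, items, identity)

assert fold(lambda x, y: x + y, [], 0) == 0           # Sum()     = 0
assert fold(lambda x, y: x * y, [], 1) == 1           # Product() = 1
assert fold(min, [], float('inf')) == float('inf')    # Min() = max value of type
assert fold(max, [], float('-inf')) == float('-inf')  # Max() = min value of type

# Set analogue of UNION/INTERSECTION over zero relations, with a known
# "universe" U standing in for the universal relation of the type:
U = {1, 2, 3}
assert fold(set.union, [], set()) == set()   # union of nothing: empty
assert fold(set.intersection, [], U) == U    # intersection of nothing: universe
```

The initializer argument of `reduce` plays exactly the role of TABLE_DEE for join: it is what makes `fold(op, [r1]) == op(identity, r1) == r1` come out right.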
OK, good, so you continue to claim that P(A and B)
is determined solely by P(A) and P(B), just as you did before:
"When you apply the connectives to a probability-valued
statements you get probability-valued statements whose
probability depends only on the constituent probabilities."
Now, I'll rephrase my puzzle in Jaynes/Bayesian terms:
Let [0..1] be a real line interval from which a point is chosen
randomly. Let a and b be two sub-intervals of the [0..1] interval
whose lengths are 1/3 and 1/8 respectively. Let A and B be two
propositions 'the random point is chosen from a' and 'the random point
is chosen from b' respectively. Then, by the Indifference Principle
(see the Jaynes book), we can assign prior probabilities P(A)=1/3 and
P(B)=1/8. Show how to derive P(A and B) given P(A) and P(B). If you
can show that, you can claim that "probability depends only on the
constituent probabilities". Are you unable to do that?
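The puzzle above can be made concrete with a short Python sketch (the specific interval placements below are my own illustration, not part of the puzzle): holding P(A) = 1/3 and P(B) = 1/8 fixed while placing the sub-interval b either inside a or disjoint from a yields different values of P(A and B), so the marginals alone do not determine the conjunction's probability.

```python
from fractions import Fraction as F

def p_and(a, b):
    """P(A and B) in the uniform-point model: the length of the overlap
    of intervals a = (lo, hi) and b = (lo, hi) inside [0, 1]."""
    lo = max(a[0], b[0])
    hi = min(a[1], b[1])
    return max(hi - lo, F(0))

a = (F(0), F(1, 3))                        # P(A) = 1/3

b_inside = (F(0), F(1, 8))                 # b nested inside a, P(B) = 1/8
b_outside = (F(1, 2), F(1, 2) + F(1, 8))   # b disjoint from a, P(B) = 1/8

assert p_and(a, b_inside) == F(1, 8)       # overlap is all of b
assert p_and(a, b_outside) == 0            # no overlap at all
# Same P(A) and P(B) in both cases, different P(A and B): the joint
# model, not the marginals, fixes the probability of the conjunction.
```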
>
>
> > I've provided an example, trivial to anyone who's read an
> > introduction to PT, and asked to compute P(A and B) given
> > P(A) and P(B). There has been no answer yet. Are you
> > unable to answer the question ?
>
> Rest assured there is an answer and I am able to answer
> it.
So what is the answer?
> > What kind of sentences do you have in mind whose
> > probability is one?
>
> > Please provide a meaningful example.
>
> What are you talking about? It's not hard. It's trivial and
> irrelevant! For proving probability theorems or that
> probability is a generalization of logic it does not matter
> what A, B, C etc stand for! "My first name is Keith" "My
> last name is Duggar" "My first name is Keith or my last name
> is Duggar". Does that satisfy you? I hope so because it is
> TOTALLY irrelevant (and somewhat VI honestly).
>
> [disjunction nagging]
>
> > 2. In some case, namely when probabilities are 0 and 1,
> > the probabilistic statements 'reduce' to logical
> > statements. I asked to provide two or more statements
> > whose probability would be one and show what the
> > probability of the disjunction of such statements might
> > be. There has been no answer. Are you unable to answer the
> > question ?
What you've provided is not a meaningful example, but just two
mutually irrelevant true propositions (similarly to introducing
irrelevancy with penguins and the leaking roof in the Jaynes book). To
provide a meaningful example of reducing probabilities, take for
example two relevant statements A='It will rain today' and B='The roof
will leak' (see the same book), assign priors and show how P(A and B)
can be equal to one.
Also, your 'basic proof' is quite meaningless (as I pointed out in
another message):
> One last time I will provide one of these basic proofs. And
> I will provide more than you ask for, that is below is a
> proof for the complete reduction to logic in the limit of
> certainty (1) and impossibility (0).
>
> P(A) = 1
> P(A or B) = P(~(~A and ~B))
> P(A or B) = 1 - P(~A and ~B)
> P(A or B) = 1 - P(~B|~A)P(~A)
Since P(~A) equals zero, the above statement does not make sense.
We'll try the argument from authority (since you appear impervious to
simple logic):
(the Jaynes book)
"In our formal probability symbols (those with a capital P)
P(A|B)
....
We repeat the warning that a probability symbol is undefined and
meaningless if the conditioning statement B happens to have zero
probability in the context of our problem ...
"
> P(A or B) = 1 - P(~B|~A)(1-P(A))
meaningless
> P(A or B) = 1 - P(~B|~A)(0)
meaningless
> P(A or B) = 1 - 0
> P(A or B) = 1
>
> P(B) = 1
> (same as above just swap A and B)
meaningless, see above.
>
> P(A) = 0
> P(B) = 0
> P(~A) = 1
> P(~B) = 1
> P(~A and ~B) = 1 [conjunction proved in previous post]
> P(A or B) = P(~(~A and ~B))
> P(A or B) = 1 - P(~A and ~B)
> P(A or B) = 1 - 1
> P(A or B) = 0
>
> thus
>
> A : B : A or B
> 0 : 0 : 0
> 0 : 1 : 1
> 1 : 0 : 1
> 1 : 1 : 1
Naturally, there can be no "thus" (see above).
[ Irrelevant chaff and the refusal to answer (1) and (2) skipped ]
> > > Erwin, what vc was referring to is that
> > >
> > > P(AB) = P(A|B)P(B) -or-
> > > P(AB) = P(B|A)P(A)
> > >
> >
> > Note, that I said nothing about conditional
> > probabilities.
>
> No but you claimed "P(p1 and p2) is not equal P(p1)*P(p1) in
> general" which is of course pointing to the possibility of
> p1 and p2 being dependent which is of course equivalent to a
> conditional statement P(p1|p2) = P(p1).
It's getting funnier. Are you claiming now that, given two propositions p1
and p2 with probabilities P(p1) and P(p2), "the possibility of p1 and
p2 being dependent [is] equivalent to a conditional statement P(p1|p2)
= P(p1)" ? You've got it backwards, amigo.
Assuming Bayesian treatment (which was not specified originally, mind
you), the derivation is still meaningless. Let's try some argument
from authority:
"
In our formal probability symbols (those with a capital P)
P(A|B)
....
We repeat the warning that a probability symbol is undefined and
meaningless if the conditioning statement B happens to have zero
probability in the context of our problem ...
"
[chaff skipped]
> > Unfortunately, there can be no 'thus'.
>
> Unfortunately, you are acting like a total VI moron at this
> point. Read Jaynes' book or Cox's paper, comprehend them,
> educate yourself. After that come here and apologize for
> being a VI. Before that STFU.
Unfortunately, there still cannot be any 'thus'. See the argument
from authority above.
>
> -- Keith --
The informal statement above should be read metaphorically rather than
literally. PT ain't no logic due to lack of truth functionality, and
the 'reduction' to logic, which you've failed to prove by the way, is
possible only in trivial and uninteresting cases.
> This is nearly an exact paraphrase of my original comment
> which vc vociferously and ignorantly attacked.
>Chapter 2 of
> the book also contains derivations nearly identical to those
> I have posted here that vc called "mindless playing with
> formulas". So at least I'm in good company, vc, you
> vociferous ignoramus.
What Jaynes did in his derivation of the sum/product rules has got
nothing to do with your mindless playing with formulas. See the
argument from authority in my previous messages.
>
> Bob Badour, you mentioned in this thread that you "feel
> cheated that [your] education failed to teach [you] enough
> useful statistics". I don't know how much time you have to
> study these days but I would like to recommend Jaynes' book
> and also for an introduction focusing on practical use:
>
> "Data Analysis: A Bayesian Tutorial" D. S. Sivia
>
> I think you will find that learning probability theory from
> the Cox perspective will radically improve and simplify your
> understanding of both statistics and probability theory. The
> same advice goes for anyone out there in a similar situation.
> Reading the first few chapters of both Sivia and Jaynes as
> well as Jaynes' appendix "Other Approaches To Probability
> Theory" makes for a great start.
>
> After reading those you will think "why the frak didn't they
> teach me this in school."
>
> -- Keith --
>
> PS. Let me also take this opportunity to correct a typo in
> my previous post before the VI has a mental orgasm.
>
> Keith H Duggar wrote:
> > No but you claimed "P(p1 and p2) is not equal P(p1)*P(p1)
> > in general" which is of course pointing to the possibility
> > of p1 and p2 being dependent which is of course equivalent
> > to a conditional statement P(p1|p2) = P(p1).
>
> that should have been P(p1|p2) != P(p1).
That's assuming that P(p1|p2) even makes sense. A more general
formulation of such independence is just P(p1 and p2) = P(p1)*P(p2).
> Keith H Duggar wrote:
>
>>>vc wrote:
>>
>>[snip VI crap]
>>
>>In order to give vc a specific equation reference to stare
>>at, I had to pull out my copy of
>>
>>"Probability Theory: The Logic of Science" by E. T. Jaynes
>>
>>Reminded of what an excellent book it is (even though sadly
>>Jaynes died before completing it), I started reading the
>>first few chapters again. I ran across this statement by
>>Jaynes:
>>
>> "Aristotelian deductive logic is the limiting form of our
>> rules for plausible reasoning, as the robot becomes more
>> certain of its conclusions."
>
> The informal statement above should be read metaphorically rather than
> literally. PT ain't no logic due to lack of truth functionality, and
> the 'reduction' to logic, which you've failed to prove by the way, is
> possible only in trivial and uninteresting cases.
Are you seriously suggesting that true and false are trivial and
uninteresting? Should we all pack up and go home?
>>This is nearly an exact paraphrase of my original comment
>>which vc vociferously and ignorantly attacked.
>>Chapter 2 of
>>the book also contains derivations nearly identical to those
>>I have posted here that vc called "mindless playing with
>>formulas". So at least I'm in good company, vc, you
>>vociferous ignoramus.
>
> What Jaynes did in his derivation of the sum/product rules has got
> nothing to do with your mindless playing with formulas. See the
> argument from authority in my previous messages.
Your argument from authority was flawed. I will reply in the other thread.
The formulation is neither more general nor less general. It is, in
fact, a simple substitution of the equation describing independence:
(1) P(p1|p2) = P(p1)
into the formula for conditional probability:
(2) P(p1 and p2) = P(p1|p2)*P(p2)
Substituting (1) into (2) gives P(p1 and p2) = P(p1)*P(p2).
> Keith H Duggar wrote:
> [Irrelevant stuff skipped]
>
> Assuming Bayesian treatment (which was not specified originally, mind
> you), the derivation is still meaningless. Let's try some argument
> from authority:
[snip]
Your whole dismissal, as I recall, depends on your observation:
> P(B|A) def P(A and B)/P(A)
>
> the requirement for such definition being that P(A) <>0, naturally.
Keith used the equivalent definition:
P(A and B) = P(B|A)P(A), which places no requirements on P(A) because
one does not divide by P(A).
In the case of P(A) = 0, P(A and B) = 0 and P(B|A) is indeterminate,
which is to say, we don't care what its value might be and it could be
any real number; although, as a probability, we restrict it to real
numbers in the range [0...1].
Thus, both of Keith's proofs were entirely valid because he neither
inferred nor concluded using the indeterminate P(B|A). He made the valid
conclusion that P(A and B) = 0 when P(A) = 0.
>
> Are you seriously suggesting that true and false are trivial and
> uninteresting? Should we all pack up and go home?
>
I am suggesting that a meaningful reduction of a PT statement to a
propositional logic statement is trivial and uninteresting. Also,
there is a problem with the conditional probability not being equal
to the probability of the conditional (see Lewis's result), which makes
conditional probability untranslatable to modus ponens in principle.
Going in the opposite direction, generalization, PT is not truth
functional; that is, the probability of a compound statement is not
determined solely by its component probabilities (see my trivial
puzzle). Also, importantly, it appears impossible to find an axiom
system for any known probabilistic logic that would be sound and
complete (except in some special cases). Obviously, the lack of such an
axiom system makes formal derivation (a hallmark of any logic) impossible.
Apparently, despite obvious similarities and profound connections,
each had better be used for what it is best at, and attempts to merge
them do not seem very productive (see the abundant literature on
probabilistic logics).
> >
> > What Jaynes did in his derivation of the sum/product rules has got
> > nothing to do with your mindless playing with formulas. See the
> > argument from authority in my previous messages.
>
> Your argument from authority was flawed. I will reply in the other thread.
The argument from authority was a quote from Jaynes' book, not mine.
> >>
> >>that should have been P(p1|p2) != P(p1).
> >
> > That's assuming that P(p1|p2) even makes sense. More general
> > formulation of such independence is just P(p1 and p2) = P(p1)* P(p2).
>
> The formulation is neither more general nor less general. It is, in
> fact, a simple substitution of the equation describing independence:
>
> (1) P(p1|p2) = P(p1)
Again, the substitution is possible only when P(p2) > 0. (See the
Jaynes book).
It does in the frequentist probability interpretation, yes.
> >
> > the requirement for such definition being that P(A) <>0, naturally.
>
> Keith used the equivalent definition:
In the Bayesian interpretation the product rule is a derivation from
Cox's postulates, but even there P(B|A)P(A) is meaningful only when
P(A) > 0:
From the Jaynes book:
"
In our formal probability symbols (those with a capital P)
P(A|B)
....
We repeat the warning that a probability symbol is undefined and
meaningless if the conditioning statement B happens to have zero
probability in the context of our problem ...
"
Please see the book for details.
>
> P(A and B) = P(B|A)P(A), which places no requirements on P(A) because
> one does not divide by P(A).
Please see above or the book.
And since Keith never relied on any meaningful value for P(A|B) in his
proof, I wonder what point you are trying to make.
Consider a partial function f(x) defined on the set N of natural
numbers as:
if x > 10, f(x) = 2*x
Now, what would be the value of x*f(x) given x = 0? It's not zero;
it's undefined. It simply does not exist. There is no such thing as
an 'unmeaningful' value of f(x) for x outside the function's domain.
Likewise, P(A|B) is defined as the probability of proposition A given
that proposition B is true, so if B is false, P(A|B) is undefined (see
Jaynes for details).
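The partial-function point can be made concrete in code; a minimal Python sketch (the function name and the choice of exception are mine, purely illustrative):

```python
# Partial function from the example: f is defined on the naturals
# only for x > 10; outside that domain f(x) simply does not exist.
def f(x):
    if x > 10:
        return 2 * x
    raise ValueError("f(x) is undefined for x <= 10")

assert f(12) == 24  # inside the domain

# x * f(x) at x = 0: the product is not zero -- it never comes into
# existence, because evaluating f(0) fails before any multiplication.
try:
    0 * f(0)
    reached = True
except ValueError:
    reached = False
assert reached is False
```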
Then why did you bring it up in the first place?
> Also,
> there is a problem with the conditional probability not being equal
> the probability of the conditional (see Lewis's result) which makes
> conditional probabily untranslatable to modus ponens in principle.
Modus ponens requires a conditional probability of 1 and says nothing
about situations with other conditional probabilities which is why the
conditional "If A then B" says nothing about B when A is false.
For modus ponens, one must start with the premise "If A then B", which
is synonymous with P(B|A) = 1. To have a sound argument, one must not
only have a valid argument, but the premise "If A then B" must be true.
I think your argument might cause more problems for modus tollens than
for modus ponens.
> Going in the opposite direction, generalization, PT is not truth
> functional , that is the probability of a compound statement is not
> determined solely by its components probabilities (see my trivial
> puzzle).
I fail to see what an indeterminate problem demonstrates. If I give you
the lengths of two sides of a triangle and ask you for the length of the
third side, you cannot answer unless you know the angle where the given
sides meet.
Such a challenge would neither disprove Pythagoras nor disprove the law
of cosines. Neither would it disprove that the law of cosines
generalizes the Pythagorean theorem.
> Also, importantly, it appears impossible to find an axiom
> system for any known probabilistic logic that would be sound and
> complete (except some special cases). Obviously, lack of such axiom
> system makes a formal derivation (a hallmark of any logic) impossible.
That precludes it from deductive logic but not from inductive logic.
> Apparently, despite obvious similarities and profound connections,
> both had better be used what they are best at and attempts to merge
> them do not seem very productive (see abundant literature on
> probabilistic logics).
Are you suggesting that they need merging? Classical mechanics is a
special case of relativity. Do they need merging? The Pythagorean
theorem is a special case of the cosine law. Do they need merging?
Deductive logic is a special case of inductive logic. Do they need merging?
Each of the special cases has properties that do not generalize and each
of the generalizations considers factors irrelevant to the special case.
The real question is whether inductive logic is appropriate for data
management, and I do not think it is.
With a deductive formalism, one knows that the answers one gets are as
valid as the operations and as sound as the premises. With inductive
formalisms, one has no such knowledge.
Inductive logic, on the other hand, seems to have utility in science for
gauging the validity of an hypothesis when one lacks any clearly
contradictory evidence.
>>>What Jaynes did in his derivation of the sum/product rules has got
>>>nothing to do with your mindless playing with formulas. See the
>>>argument from authority in my previous messages.
>>
>>Your argument from authority was flawed. I will reply in the other thread.
>
> The argument from authority was a quote from Jaynes' book , not mine.
The source of the quote is irrelevant because the flaw itself was a lack
of relevance. Keith never relied on the meaning of any meaningless
value. The fact that the value of x might be indeterminate is
unimportant to the conclusion that x times zero is zero because the
conclusion holds for all x.
>>>>that should have been P(p1|p2) != P(p1).
>>>
>>>That's assuming that P(p1|p2) even makes sense. More general
>>>formulation of such independence is just P(p1 and p2) = P(p1)* P(p2).
>>
>>The formulation is neither more general nor less general. It is, in
>>fact, a simple substitution of the equation describing independence:
>>
>>(1) P(p1|p2) = P(p1)
>
> Again, the substitution is possible only when P(p2) > 0. (See the
> Jaynes book).
Okay, the formula for independence only applies when P(p2) > 0. And how,
exactly, does that support your assertion that one formulation is more
or less general than the other?
Actually, during the course of his recent wikipeducation
(yes it was that obvious, VC), he has variously attacked
the reduction as (in order more or less):
false
cute
false
impossible
funny
funnier
"mindless playing with equations"
flawed
and now finally
trivial and uninteresting
and throughout VC has never once been able to admit his
mistakes nor has he been able to admit that he learned
something. Typical pathetic VI behavior.
-- Keith --
Are you quibbling that Keith should have expressed his proof in the
limit as P(A) approaches zero? Would that overcome your objection?
0 = lim P(B|A)P(A) as P(A) -> 0
Is the above not true regardless of P(B|A) ?
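A minimal numeric sketch of that squeeze (Python; the sample values are mine, purely illustrative):

```python
# Whatever fixed value c in [0, 1] one assigns to P(B|A), the product
# c * P(A) is squeezed to 0 as P(A) -> 0, since 0 <= c * P(A) <= P(A).
for c in (0.0, 0.3, 1.0):
    for p_a in (1e-3, 1e-6, 1e-9):
        assert 0.0 <= c * p_a <= p_a
```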
Not having immediate access to Jaynes' book, I had to admit that my own
knowledge of this comes from a quick wikipeducation. I would not be
quite so quick to dismiss VC as a VI.
However, it seems his objections are philosophical in nature, which I
generally find boring and irrelevant. My wikipeducation tells me that
while there are plenty of frequentists and subjectivist bayesians who do
not fully agree with Jaynes, there are plenty of objectivist bayesians
who do agree with him.
As an engineer, I tend to the pragmatic and find myself much more
sympathetic to the eclectic philosophy.
Had you been arguing to replace the deductive relational model with an
inductive model, I would have to reject the idea at this time. However,
I don't recall you ever making any such suggestion.
While I can imagine some fields benefiting from conditional probability
and while I am aware of some very useful applications of it, I think a
deductive formalism better suits data management. The relational model
certainly does not prevent anyone from recording statements about
probabilities.
Probability theory is an interesting topic to me, and it is a topic I
would like to learn a lot more about. Thank you for the suggestions.
While it's true that some VI frequently try to confuse issues by making
useless philosophical arguments, I am not yet convinced that VC is a VI.
Joe, for instance, is infamous for making obscure references to ancient
fallacious arguments or to irrelevant quirks related only to infinite
sets. However, based on my very limited understanding, one could view
VC's posts as challenging the idea that probability theory makes a good
generalization of deductive logic. Thus, it is possible in my mind that
you and he are simply talking past each other.
You might be in a much better position to judge, though, and again thank
you for the suggestions.
I don't see how you draw the inference you draw. Clearly, P(B|A) and
P(A|B) are constituent probabilities as well.
Show how to derive P(A and B) given P(A) and P(B). If you
> can show that, you can claim that "probability depends only on the
> constituent probabilities". Are you unable to do that ?
P(A and B) = P(B|A)P(A)
P(A and B) = P(A|B)P(B)
To derive P(A and B) one must know either P(B|A) or P(A|B) just as one
must know the angle between two sides to use the cosine law. You fail to
demonstrate anything useful or meaningful by your challenge.
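The dependence on P(B|A) can also be exhibited directly: two joint distributions with identical marginals but different conjunctions. A minimal Python sketch (the numbers are mine, purely illustrative):

```python
# Two hypothetical joint distributions over binary A, B with the SAME
# marginals P(A) = P(B) = 0.5 but DIFFERENT P(A and B).
# joint1: A, B independent; joint2: A, B perfectly correlated.
# Cells map (a, b) in {0,1} x {0,1} to P(A=a, B=b).
joint1 = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
joint2 = {(0, 0): 0.50, (0, 1): 0.00, (1, 0): 0.00, (1, 1): 0.50}

def marginal_a(j):  # P(A = 1)
    return j[(1, 0)] + j[(1, 1)]

def marginal_b(j):  # P(B = 1)
    return j[(0, 1)] + j[(1, 1)]

for j in (joint1, joint2):
    assert marginal_a(j) == 0.5 and marginal_b(j) == 0.5

# Same marginals, yet P(A and B) differs: 0.25 vs 0.5 -- so P(A) and
# P(B) alone cannot determine P(A and B).
assert joint1[(1, 1)] != joint2[(1, 1)]
```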
[light bulb goes on]
Keith already specified the argument applies to a limit.
>>P(A) = 1
>>P(A or B) = P(~(~A and ~B))
>>P(A or B) = 1 - P(~A and ~B)
>>P(A or B) = 1 - P(~B|~A)P(~A)
>
>
> Since P(~A) equals zero, the above statement does not make sense.
But Keith stated "in the limit of". Thus, one could read the first line
of what he wrote as
lim P(A) as P(A) -> 1
which makes the other factor
lim P(~A) as P(~A) -> 0.
Thus, in the limit as P(A) approaches 1,
P(A or B) = 1 - P(~B|~A)(1-P(A)) = 1
If he rewrote the proof using explicit limit notation, would you still
object to his proof?
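In explicit form, the limit claim can be sketched numerically (Python; the sample values are mine, purely illustrative):

```python
# Keith's chain with the limit made explicit: for any fixed
# c = P(~B|~A) in [0, 1],
#     P(A or B) = 1 - c * (1 - P(A))  ->  1   as P(A) -> 1.
for c in (0.0, 0.5, 1.0):
    for p_a in (0.9, 0.999, 1 - 1e-9):
        p_a_or_b = 1 - c * (1 - p_a)
        # The gap to certainty shrinks at least as fast as 1 - P(A),
        # regardless of the value chosen for c.
        assert 1 - p_a_or_b <= (1 - p_a) + 1e-12
```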
If you read the whole thread carefully, you'll realize that it was the
OP who brought the 'reduction' issue, not me:
<OP>
"
This is why PT is a /generalization/ of logic. It reduces to
logic when applied to truth-valued statements. Just as gamma
reduces to factorial for natural arguments. (Again no quibbles
about offset by 1 etc).
"
My very first message just modestly reminded that probability is not
truth functional, that was all.
>
>
> > Also,
> > there is a problem with the conditional probability not being equal
> > the probability of the conditional (see Lewis's result) which makes
> > conditional probabily untranslatable to modus ponens in principle.
>
> Modus ponens requires a conditional probability of 1 and says nothing
> about situations with other conditional probabilities which is why the
> conditional "If A then B" says nothing about B when A is false.
>
> For modus ponens, one must start with the premise "If A then B", which
> is synonymous with P(B|A) = 1. To have a sound argument, one must not
> only have a valid argument, but the premise "If A then B" must be true.
>
> I think your argument might cause more problems for modus tollens than
> for modus ponens.
The proponents of the probabilistic logics hoped that P(A|B) = P(B->A).
Lewis showed that the conditional probability cannot be the
probability of the implication (the truth-functional conditional), thus
making truth functionality impossible through the conditional
probability as well.
>
>
> > Going in the opposite direction, generalization, PT is not truth
> > functional , that is the probability of a compound statement is not
> > determined solely by its components probabilities (see my trivial
> > puzzle).
>
> I fail to see what an indeterminate problem demonstrates. If I give you
> the lengths of two sides of a triangle and ask you for the length of the
> third side, you cannot answer unless you know the angle where the given
> sides meet.
The problem merely demonstrates that PT does not possess truth
functionality in the way propositional logic does. In order to
derive a compound statement's probability, additional information must
be taken into account, while in PL the truth of a compound statement
depends only on the truth values of its sub-propositions.
>
> Such a challenge would neither disprove pythagoras nor disprove the law
> of cosines. Neither would it disprove that the law of cosines
> generalizes the pythagorean theorem.
That's an irrelevant remark. See above.
>
>
> > Also, importantly, it appears impossible to find an axiom
> > system for any known probabilistic logic that would be sound and
> > complete (except some special cases). Obviously, lack of such axiom
> > system makes a formal derivation (a hallmark of any logic) impossible.
>
> That precludes it from deductive logic but not from inductive logic.
That very well may be the case depending on what you mean by "inductive
logic", but the original statement appears to be that PT somehow
subsumes/"generalizes" propositional logic, or any other
truth-functional/deductive logic, which is a bizarre statement indeed.
If some other, non-truth-functional logic was implied, then the
context should have been clearly stated, the usual assumption being
that the '[default] logic' = 'propositional logic/predicate calculus'.
>
>
> > Apparently, despite obvious similarities and profound connections,
> > both had better be used what they are best at and attempts to merge
> > them do not seem very productive (see abundant literature on
> > probabilistic logics).
>
> Are you suggesting that they need merging?
You are confused, the OP does (see his talk about "generalizing"),
not me.
>Classical mechanics is a
> special case of relativity. Do they need merging? The pythagorean
> theorem is a special case of the cosine law. Do they need merging?
> Deductive logic is a special case of inductive logic. Do they need merging?
>
You are preaching to the choir.
> Each of the special cases has properties that do not generalize and each
> of the generalizations considers factors irrelevant to the special case.
>
> The real question is whether inductive logic is appropriate for data
> management, and I do not think it is.
Neither do I.
>
> >>>What Jaynes did in his derivation of the sum/product rules has got
> >>>nothing to do with your mindless playing with formulas. See the
> >>>argument from authority in my previous messages.
> >>
> >>Your argument from authority was flawed. I will reply in the other thread.
> >
> > The argument from authority was a quote from Jaynes' book , not mine.
>
> The source of the quote is irrelevant because the flaw itself was a lack
> of relevance. Keith never relied on the meaning of any meaningless
> value. The fact that the value of x might be indeterminate is
> unimportant to the conclusion that x times zero is zero because the
> conclusion holds for all x.
He relied on the non-existing value, not an existing but
unknown/indeterminate value. There is no x*zero simply because x does
not exist. Again, the Bayesian conditional probability *exists* only
if the respective conditioning proposition is true. The very
derivation of the product and sum rules from the Cox postulates relies
substantially on B being true in P(A|B). For a rigorous exposition see
for example Aczel's Lectures on Functional Equations.
>
>
> >>>>that should have been P(p1|p2) != P(p1).
> >>>
> >>>That's assuming that P(p1|p2) even makes sense. More general
> >>>formulation of such independence is just P(p1 and p2) = P(p1)* P(p2).
> >>
> >>The formulation is neither more general nor less general. It is, in
> >>fact, a simple substitution of the equation describing independence:
> >>
> >>(1) P(p1|p2) = P(p1)
> >
> > Again, the substitution is possible only when P(p2) > 0. (See the
> > Jaynes book).
>
> Okay, the formula for independence only applies when P(p2) > 0. And how,
> exactly, does that support your assertion that one formulation is more
> or less general than the other?
Independence can be (and is) defined as P(A and B) = P(A)*P(B). It
works for any values of P(A) and P(B), as opposed to the
conditional probability formulation.
There is a much simpler argument for P(false and B) = 0. First, you
prove that P(false) = 0 (you have to prove that because you do not
have any rules yet), and then the answer is obvious.
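One such derivation can be sketched from the sum rule alone (assuming only P(true) = 1, P(A) + P(~A) = 1, and the extended sum rule, as in Jaynes):

```latex
% Put B = \neg A in the sum rule
%   P(A \lor B) = P(A) + P(B) - P(A \land B),
% noting A \lor \neg A = \mathrm{true} and A \land \neg A = \mathrm{false}:
P(\mathrm{true}) = P(A) + P(\neg A) - P(\mathrm{false})
\quad\Longrightarrow\quad
1 = 1 - P(\mathrm{false})
\quad\Longrightarrow\quad
P(\mathrm{false}) = 0.
```

Then P(false and B) = P(false) = 0 follows immediately, since (false and B) is the proposition false itself.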
>
> Is the above not true regardless of P(B|A) ?
P(B|A) does not exist if P(A) = 0, and we need not lim P(A and B)
(even if it exists) but just P(A and B), which does exist (see above).
Stop claiming that relevant material you are too limited to
recognize as relevant is irrelevant. Dishonest idiot.
> > Are you suggesting that they need merging?
>
> You are confused, the OP does (see his talk about
> "generalizing"), not me.
You are confused and spouting meaningless gibberish.
"merging" != generalize, idiot. You seem to gravitate to
these meaningless comments like your prior "conjuring"
"spirit" crap. Probably because you have nothing concrete
and original to offer.
> > Classical mechanics is a special case of relativity. Do
> > they need merging? The pythagorean theorem is a special
> > case of the cosine law. Do they need merging? Deductive
> > logic is a special case of inductive logic. Do they need
> > merging?
> >
>
> You are preaching to the choir.
Idiot, do you understand you are contradicting yourself?
Previously you thought "merging" = generalization, now you
claim to agree to a comment that "merging" != generalize.
> He relied on the non-existing value, not an existing but
> unknown/indeterminate value. There is no x*zero simply
> because x does not exist.
Now you think undefined = "not exist"? Idiot.
And stop trying to regurgitate wikipeducation. Try
developing some original thoughts and analysis instead.
> Independence can be (and is) defined as P(A and B) =
> P(A)*P(B). It works with whatever values of P(A) and P(B)
> as opposed to the conditional probability formulation.
Only in your limited understanding.
-- Keith --
> Bob Badour wrote:
>
>>vc wrote:
>>
>>>Bob Badour wrote:
>>>
>>>
>>>>Are you seriously suggesting that true and false are trivial and
>>>>uninteresting? Should we all pack up and go home?
>>>
>>>I am suggesting that a meaningful PT statement reduction to a
>>>propositional logic statement is trivial and uninteresting.
>>
>>Then why did you bring it up in the first place?
>
> If you read the whole thread carefully, you'll realize that it was the
> OP who brought the 'reduction' issue, not me:
>
> <OP>
> "
> This is why PT is a /generalization/ of logic. It reduces to
> logic when applied to truth-valued statements. Just as gamma
> reduces to factorial for natural arguments. (Again no quibbles
> about offset by 1 etc).
> "
>
> My very first message just modestly reminded that probability is not
> truth functional, that was all.
With all due respect, whether it is truth functional is a philosophical
matter that applies equally to conditionals in logic. See
http://plato.stanford.edu/entries/conditionals/
Some philosophers treat both probability and logic as truth functional
and some don't.
>>>Also,
>>
>>>there is a problem with the conditional probability not being equal
>>>the probability of the conditional (see Lewis's result) which makes
>>>conditional probabily untranslatable to modus ponens in principle.
>>
>>Modus ponens requires a conditional probability of 1 and says nothing
>>about situations with other conditional probabilities which is why the
>>conditional "If A then B" says nothing about B when A is false.
>>
>>For modus ponens, one must start with the premise "If A then B", which
>>is synonymous with P(B|A) = 1. To have a sound argument, one must not
>>only have a valid argument, but the premise "If A then B" must be true.
>>
>>I think your argument might cause more problems for modus tollens than
>>for modus ponens.
>
> The proponents of the probabilistic logics hoped that P(A|B) = P(B->A).
> Lewis showed that the conditional probability cannot be the
> probability of the implication (the truth-functional conditional), thus
> making truth functionality impossible through the conditional
> probability as well.
Unless one accepts Stalnaker's philosophy instead of Lewis'.
>>>Going in the opposite direction, generalization, PT is not truth
>>>functional , that is the probability of a compound statement is not
>>>determined solely by its components probabilities (see my trivial
>>>puzzle).
>>
>>I fail to see what an indeterminate problem demonstrates. If I give you
>>the lengths of two sides of a triangle and ask you for the length of the
>>third side, you cannot answer unless you know the angle where the given
>>sides meet.
>
> The problem merely demonstrates that PT does not possess truth
> functionality in the same way the propositional logic does.
I disagree. It demonstrates only that absent P(A|B) and absent P(B|A)
one cannot calculate P(A and B) just as absent the angle between two
sides of a triangle with known lengths, one cannot calculate the length
of the third side.
> In order to
> derive the compound statement probability additional information must
> be taken into account while with PL the compound statement truth
> depends only on truth values of its sub-propositions.
Just as relativity depends on the speed of a frame of reference relative
to the speed of light whereas classical mechanics does not but only
really holds in some limit of that speed, and just as the cosine law
depends on the angle between two sides of a triangle whereas the
Pythagorean theorem applies only at one specific angle.
Are you suggesting that relativity is not a generalization of classical
mechanics to relativistic speeds? Are you suggesting that the cosine law
is not a generalization of the Pythagorean theorem to acute and obtuse
angles?
You seem to be arguing that the requirement for additional information
in the general case makes a generalization invalid whereas all other
generalizations seem to have similar requirements.
>>Such a challenge would neither disprove pythagoras nor disprove the law
>>of cosines. Neither would it disprove that the law of cosines
>>generalizes the pythagorean theorem.
>
> That's an irrelevant remark. See above.
With all due respect, your dismissal reeks of evasion. We are discussing
generalizations and specializations. You apparently argue that
probability theory is not a generalization of logic, and you allege that
your challenge demonstrates your argument. It does not.
Your argument is a non sequitur.
>>
>>>Also, importantly, it appears impossible to find an axiom
>>>system for any known probabilistic logic that would be sound and
>>>complete (except some special cases). Obviously, lack of such axiom
>>>system makes a formal derivation (a hallmark of any logic) impossible.
>>
>>That precludes it from deductive logic but not from inductive logic.
>
> That very well may be the case depending on what you mean by "inductive
> logic", but the original statement appears to be that PT somehow
> subsumes/"generalizes" propositional logic, or any other
> truth-functional/deductive logic, which is a bizzare statement indeed.
Your dismissal seems unsubstantive to me. A system of mechanics that
alters the sum of the angles of a triangle depending on the speed of the
frame of reference seems rather bizarre from the perspective of
classical mechanics and Euclidean geometry. However, that doesn't make
relativity any less a generalization of classical mechanics, nor does it
make Euclidean geometry any less a special case of the Riemannian
manifolds underpinning relativity.
It is entirely possible for an inductive logic to generalize deductive
logic. Special cases have special properties that do not apply to the
general case. The general case often depends on information considered
unimportant in the special case. Deductive logic has special properties
that do not apply to inductive logic. Probability theory requires
conditional probabilities that propositional logic ignores. Big deal.
What I find strange is your assumption that deductive logic must treat
indicative conditionals as truth functional for modus ponens and modus
tollens or for propositional logic. Since propositional logic does not
require truth functionality for modus ponens, I don't see how predicate
logic requires it either. However, I am unfamiliar with any arguments
either way.
> If some other, non-truth functional logic was implied, then the
> context should have been clearly stated, the usual assumption being
> that the '[default]logic' = 'propositional logic/predicate calculus'.
I don't see any evidence that Keith assumed any different context
requiring a statement. Instead, I question your assumption that
indicative conditionals are necessarily truth functional in deductive logic.
>>>Apparently, despite obvious similarities and profound connections,
>>>both had better be used what they are best at and attempts to merge
>>>them do not seem very productive (see abundant literature on
>>>probabilistic logics).
>>
>>Are you suggesting that they need merging?
>
> You are confused, the OP does (see his talk about "generalizing"),
> not me.
We obviously disagree regarding who is confused.
>>Classical mechanics is a
>>special case of relativity. Do they need merging? The pythagorean
>>theorem is a special case of the cosine law. Do they need merging?
>>Deductive logic is a special case of inductive logic. Do they need merging?
>
> You are preaching to the choir.
You did not answer my questions. If you are not confused on the issue of
generalization, are you suggesting that classical mechanics and
relativity require merging?
>>Each of the special cases has properties that do not generalize and each
>>of the generalizations considers factors irrelevant to the special case.
>>
>>The real question is whether inductive logic is appropriate for data
>>management, and I do not think it is.
>
> Neither do I.
If that was your main point, why didn't you simply state it instead of
making philosophical arguments regarding truth-functionality and
generalization?
>>>>>What Jaynes did in his derivation of the sum/product rules has got
>>>>>nothing to do with your mindless playing with formulas. See the
>>>>>argument from authority in my previous messages.
>>>>
>>>>Your argument from authority was flawed. I will reply in the other thread.
>>>
>>>The argument from authority was a quote from Jaynes' book , not mine.
>>
>>The source of the quote is irrelevant because the flaw itself was a lack
>>of relevance. Keith never relied on the meaning of any meaningless
>>value. The fact that the value of x might be indeterminate is
>>unimportant to the conclusion that x times zero is zero because the
>>conclusion holds for all x.
>
> He relied on the non-existing value, not an existing but
> unknown/indeterminate value. There is no x*zero simply because x does
> not exist.
But Keith did qualify that his argument applied in the limit. Would you
have accepted his argument had he presented it in explicit limit
notation? P(B|A) is defined in the limit as P(A) approaches 0 after all.
If you would accept the proof using limit notation, then your argument
seems more of a quibble over notation to me.
> Again, the Bayesian conditional probability *exists* only
> if the respective conditioning proposition is true. The very
> derivation of the product and sum rules from the Cox postulates relies
> substantially on B being true in P(A|B). For a rigorous exposition see
> for example Aczel's Lectures on Functional Equations.
Would the rigorous exposition invalidate the proof if the proof were
given explicitly in limit notation?
>>>>>>that should have been P(p1|p2) != P(p1).
>>>>>
>>>>>That's assuming that P(p1|p2) even makes sense. More general
>>>>>formulation of such independence is just P(p1 and p2) = P(p1)* P(p2).
>>>>
>>>>The formulation is neither more general nor less general. It is, in
>>>>fact, a simple substitution of the equation describing independence:
>>>>
>>>>(1) P(p1|p2) = P(p1)
>>>
>>>Again, the substitution is possible only when P(p2) > 0. (See the
>>>Jaynes book).
>>
>>Okay, the formula for independence only applies when P(p2) > 0. And how,
>>exactly, does that support your assertion that one formulation is more
>>or less general than the other?
>
> Independence can be (and is) defined as P(A and B) = P(A)*P(B). It
> works with whatever values of P(A) and P(B) as opposed to the
> conditional probability formulation.
Okay. Fair enough.
Did I? You have information P(A) and P(B), just as you might have had
truth values for similar propositions in logic. However, in PT, be
it frequentist or Bayesian, knowing P(A) and P(B) alone is not
sufficient to derive P(A and B). Your referring to P(A|B) is not
really an answer because you cannot determine P(A|B) based on P(A) and
P(B) alone and thus cannot solve the problem. You need additional
information so that you can compute P(A|B) or just P(A and B)
directly, whichever is easier.
>
> Keith already specified the argument applies to a limit.
>
And this is an incorrect answer because the question was not what the
limit of P(A and B) is, but what the exact value of P(A and B) is given
P(A) = 0 (or P(B) = 0). There is an easy way to derive P(false) from
Cox's axioms directly or from the sum/product rules.
>
> >>P(A) = 1
> >>P(A or B) = P(~(~A and ~B))
> >>P(A or B) = 1 - P(~A and ~B)
> >>P(A or B) = 1 - P(~B|~A)P(~A)
> >
> >
> > Since P(~A) equals zero, the above statement does not make sense.
>
> But Keith stated "in the limit of". Thus, one could read the first line
> of what he wrote as
>
> lim P(A) as P(A) -> 1
'Limit' is not just a magical word, "hey, presto". One has to show
that such a limit indeed exists and in what sense it exists, and even
then it would be a useless exercise because there is a simple and
direct answer (see above).
>
> which makes the other factor
>
> lim P(~A) as P(~A) -> 0.
>
> Thus, in the limit as P(A) approaches 1,
> P(A or B) = 1 - P(~B|~A)(1-P(A)) = 1
>
> If he rewrote the proof using explicit limit notation, would you still
> object to his proof?
See above.
> Bob Badour wrote:
> [..]
>
>>I don't see how you draw the inference you draw. Clearly, P(B|A) and
>>P(A|B) are constituent probabilities as well.
>>
>>
>> Show how to derive P(A and B) given P(A) and P(B). If you
>>
>>>can show that, you can claim that "probability depends only on the
>>>constituent probabilities". Are you unable to do that ?
>>
>>P(A and B) = P(B|A)P(A)
>>P(A and B) = P(A|B)P(B)
>>
>>To derive P(A and B) one must know either P(B|A) or P(A|B) just as one
>>must know the angle between two sides to use the cosine law. You fail to
>>demonstrate anything useful or meaningful by your challenge.
>
> Did I ? You have information P(A) and P(B), just as you might have had
> truth values for similar propositions in logic. However, in PT, be
> it frequentist or Bayesian, knowing P(A) and P(B) alone is not
> sufficient to derive P(A and B).
So? In relativistic mechanics, one must know not only the mass of an
object but the speed of the object's frame of reference relative to the
speed of light to calculate the effects of an applied force. In
classical mechanics, one need only know the force and the mass.
Are you suggesting that makes relativity any less a generalization of
classical mechanics?
If you tried to impeach probability theory's generalization of logic,
you failed. You merely demonstrated its similarity to other
generalizations.
Your referring to P(A|B) is not
> really an answer because you cannot determine P(A|B) based on P(A) and
> P(B) alone and thus cannot solve the problem.
One cannot determine the speed of an object relative to the speed of
light by its mass alone. Thus, you cannot solve the problem of the
effects of a force applied to the object. Big deal. Unless your point is
that relativity is not a generalization of classical mechanics, your
observation is pointless.
If you continue to repeat a non sequitur even after the non sequitur is
explained to you, I will have to agree with Keith's assessment that you
are a VI.
You need additional
> information so that you could compute P(A|B) or just P(A and B)
> directly, whichever is easier.
Generalizations require additional information. Big deal. Relativity
requires the speed of the frame of reference relative to the speed of
light. The cosine law needs the angle between the two sides of known
length. Unless your point is that relativity is not a generalization of
classical mechanics and that the cosine law is not a generalization of
the Pythagorean theorem, your statements are pointless.
>>Keith already specified the argument applies to a limit.
>
> And this is an incorrect answer because the question was not what the
> limit of P(A and B) is but what the exact value of P(A and B) is given P(A) =
> 0 (or P(B)=0). There is an easy way to derive P(false) from Cox's
> axioms directly or from the sum/product rules.
The exact value of the sum of 1/2^i for i in [0..infinity] is 2. The
exact value of the sum of 3 times 10^(-k) for k in [1..infinity] is 1/3.
The exact value of P(A and B) is 0 given P(A) = 0 or P(B) = 0.
One uses limits to ascertain the truth of all three of the above statements.
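Numerically, all three can be checked with partial sums, and the third with a shrinking P(A) via the product rule (the value P(B|A) = 0.7 below is an arbitrary choice):

```python
# Partial sums of the two series above, and P(A and B) = P(B|A) * P(A)
# for a shrinking P(A).  P(B|A) = 0.7 is an arbitrary illustrative value.
s1 = sum(1 / 2**i for i in range(60))           # approaches 2
s2 = sum(3 * 10**(-k) for k in range(1, 18))    # approaches 1/3
p_ab = [0.7 * 10**(-n) for n in range(1, 10)]   # P(A) = 10^-n shrinking

assert abs(s1 - 2) < 1e-12
assert abs(s2 - 1/3) < 1e-12
assert p_ab[-1] < 1e-8                          # P(A and B) tends to 0
```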
>>>>P(A) = 1
>>>>P(A or B) = P(~(~A and ~B))
>>>>P(A or B) = 1 - P(~A and ~B)
>>>>P(A or B) = 1 - P(~B|~A)P(~A)
>>>
>>>
>>>Since P(~A) equals zero, the above statement does not make sense.
>>
>>But Keith stated "in the limit of". Thus, one could read the first line
>>of what he wrote as
>>
>>lim P(A) as P(A) -> 1
>
> 'Limit' is not just a magical word, "hey, presto". One has to show
> that such a limit indeed exists, in what sense it exists, and even then
> it would be a useless exercise because there is a simple and direct
> answer (see above).
The only simple and direct answer I see above is the one Keith already
gave that you refuse to accept no matter how valid it is.
The limit exists. It exists in the sense that, over the entire valid
range of P(B|A), P(A and B) approaches 0 when P(A) approaches 0 or when
P(B) approaches 0 or both.
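In sketch form, using the product rule and the bound 0 <= P(B|A) <= 1:

```latex
0 \;\le\; P(A \wedge B) \;=\; P(B \mid A)\, P(A) \;\le\; P(A)
\quad\Longrightarrow\quad
\lim_{P(A) \to 0^{+}} P(A \wedge B) \;=\; 0 .
```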
>>which makes the other factor
>>
>>lim P(~A) as P(~A) -> 0.
>>
>>Thus, in the limit as P(A) approaches 1,
>>P(A or B) = 1 - P(~B|~A)(1-P(A)) = 1
>>
>>If he rewrote the proof using explicit limit notation, would you still
>>object to his proof?
>
> See above.
Your obstinate evasiveness is just going to get you ignored for
intellectual dishonesty.
You are assuming that the literal, false, is substitutable for A. While
that's a valid assumption in logic, I am unsure whether it is valid in
probability theory. I have seen nothing to indicate the substitution is
valid and the method seems to beg the question.
>>Is the above not true regardless of P(B|A) ?
>
> P(B|A) does not exist if P(A) = 0, and we need not lim P(A and B)
> (even if it exists) but just P(A and B), which does exist (see above).
In other words, you are now changing your position. Instead of arguing
that Keith cannot prove what he proved, you are now saying that of all
the perfectly valid proofs, you prefer a different one. That seems like
a real waste of everybody's time.
I'm not trying to knock wikipeducation. It's unquestionably
a valuable resource. In fact, it's hard for me to imagine
returning to pre-net days. What I object to is the somewhat
recent and growing phenomenon of people substituting google
for their brain. Nothing wrong with using /any resource/ to
learn, gather information, check your thinking, etc. But, we
should contribute at least some original thinking or
analysis when we post (which you BB have done IMO).
My impression was that VC was simply rushing to regurgitate
the nearest web-statement that he "thought" "proved me
wrong". That is without having actually thought or tried to
comprehend my points or any of the resources I referenced.
Like he was trying to protect some precious "math ego".
Further, it seems that when this thread began VC was largely
ignorant of much of the material now being discussed. Again,
nothing wrong with starting off ignorant and learning as you
go. At least he is no longer dismissing proofs as "mindless
playing with equations" (one of the dumbest things I've
heard in a while by the way). No, it's the vociferous way
in which he learned, and his inability to admit error and
ignorance that was annoying.
> I would not be quite so quick to dismiss VC as a VI.
I didn't think I was quick to do so. It took a few rounds at
least :-) And to me every post is a new post as every day is
a new day. At any moment, VC could simply admit his mistakes
and prior ignorance, apologize for his vociferous learning,
stop the dishonest snipping and outright fabrication of
false context, and it would be a clean slate as far as I'm
concerned. But, I'm not holding my breath and until he does
it's draining to continue communication with him.
> However, it seems his objections are philosophical in
> nature, which I generally find boring and irrelevant. My
> wikipeducation tells me that while there are plenty of
> frequentists and subjectivist bayesians who do not fully
> agree with Jaynes, there are plenty of objectivist
> bayesians who do agree with him.
>
> As an engineer, I tend to the pragmatic and find myself
> much more sympathetic to the eclectic philosophy.
Then as a fellow engineer, I think you will find Sivia's
book most enjoyable and enlightening. It belongs in the
family of LSD (Acid) books; being that it is thin, purple,
and will expand your mind :-)
(That's not to knock Jaynes' book by the way. His brilliant
clear thinking is a delight to explore.)
> Had you been arguing to replace the deductive relational
> model with an inductive model, I would have to reject the
> idea at this time. However, I don't recall you ever making
> any such suggestion.
You are right I didn't suggest that. Having little knowledge
and experience with RM I would not presume to make such a
suggestion. And from what little I do know of RM, I really
don't see myself reaching that conclusion anyhow.
It's strange, my comment was meant simply as an innocuous
note of something I personally find interesting.
"And interestingly, one view of logic is ...
So ... perhaps conditional probability theory is the
foundation of all :-)"
In light of the whole "one view" and smiley thing, the VC
attack was ... surprising. I guess it was trying to stroke
its "math ego" and thought it had found a low hanging juicy
fruit.
> While I can imagine some fields benefiting from
> conditional probability and while I am aware of some very
> useful applications of it, I think a deductive formalism
> better suits data management. The relational model
> certainly does not prevent anyone from recording
> statements about probabilities.
Absolutely. It's interesting you mention such recording.
There are numerous scientific databases that would be far
more useful had the designers included uncertainty data. One
current example is the Gene Ontology (GO), a database of
gene - function - location associations. Unfortunately,
there isn't even a simple confidence metric assigned to the
associations. Such uncertainty information would be
invaluable when analyzing various gene networks etc.
> Probability theory is an interesting topic to me, and it
> is a topic I would like to learn a lot more about. Thank
> you for the suggestions.
I too find probability theory fascinating. At some point,
I'd love to hear your impressions of those books.
> However, based on my very limited understanding, one could
> view VC's posts as challenging the idea that probability
> theory makes a good generalization of deductive logic.
If that was the case, it would have been great to hear his
thoughts about /better/ generalizations. (Note VC I said
/your/ thoughts not the /Internet's/ "thoughts".)
> Thus, it is possible in my mind that you and he are simply
> talking past each other.
Once I thought perhaps we were talking past each other due
to different notions of /generalization/. To explore this,
I asked VC the simple question: is the gamma function a
generalization of the factorial? He dismissed the question
as "irrelevant" twice and never answered it.
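For reference, the sense in which the gamma function generalizes the factorial is easy to check: it reproduces n! on the integers via Gamma(n + 1) = n!, while also being defined between them:

```python
import math

# Gamma(n + 1) = n! on the non-negative integers...
for n in range(10):
    assert math.isclose(math.gamma(n + 1), math.factorial(n))

# ...and Gamma is also defined where the factorial is not:
# Gamma(1/2) = sqrt(pi).
assert math.isclose(math.gamma(0.5), math.sqrt(math.pi))
```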
-- Keith --
I am afraid you misunderstood the article you are referring to. The
article discusses various interpretations of the conditional and
speculates on whether the conditional can be regarded as
truth-preserving. However, the propositional logic has it quite
clear: the implication *is* truth-preserving (as is the entire
predicate logic). There is no ambiguity or opinions with respect to
predicate logic.
> > The proponents of the probabilistic logics hoped that P(A|B) = P(B->A).
> > Lewis showed that the conditional probability cannot be the
> > probability of implication (the truth functional conditional) thus
> > making truth-functionality impossible through the conditional
> > probability either.
>
> Unless one accepts Stalnaker's philosophy instead of Lewis'.
The cornerstone of Stalnaker's ideas was the conjecture that the
probability of a conditional is the same as the conditional probability
(ca 1968). Lewis showed that it's not the case. I am not aware of any
subsequent work of Stalnaker's where he would repair the assumptions
allowing his hypothesis to hold. His 2005 article referenced in the
"Conditionals" does not provide a formal account that would clearly
state assumptions, derive results, and provide a way to numerically
apply his informal thoughts just as PT (or some form of probabilistic
logic) does. More importantly, the ordinary PT is not exempt from
Lewis's proof so whatever fixes Stalnaker may have had in mind do not
apply to PT.
>
> > In order to
> > derive the compound statement probability additional information must
> > be taken into account while with PL the compound statement truth
> > depends only on truth values of its sub-propositions.
>
> Just as relativity depends on the speed of a frame of reference relative
> to the speed of light whereas classical mechanics does not but only
> really holds in some limit of that speed, and just as the cosine law
> depends on the angle between two sides of a triangle whereas the
> Pythagorean theorem applies only at one specific angle.
>
> Are you suggesting that relativity is not a generalization of classical
> mechanics to relativistic speeds? Are you suggesting that the cosine law
> is not a generalization of the pythagorean theorem to acute and obtuse
> angles?
>
> You seem to be arguing that the requirement for additional information
> in the general case makes a generalization invalid whereas all other
> generalizations seem to have similar requirements.
>
My very simple point was that PT is not truth functional and therefore
cannot qualify as a generalization of propositional logic or any other
logic possessing truth-functionality. Your extrapolating this simple
statement to the relativity theory vs. classical mechanics, cosine
laws, cabbages and kings is truly strange. As I said before, there
are profound connections between PT and propositional calculus, and PT
can be regarded, metaphorically, as a generalization of purely
logical reasoning about uncertainty. However, their areas of
applicability are quite different and the generalization talk serves
nothing but to increase confusion. Since I have nothing more to add
except PT lacking truth-functionality and a system of derivation axioms
(Abadi/Halpern), this will be my last comment on propositional logic
vs. PT.
>
> With all due respect, your dismissal reeks of evasion. We are discussing
> generalizations and specializations. You apparently argue that
> probability theory is not a generalization of logic, and you allege that
> your challenge demonstrates your argument. It does not.
My challenge merely demonstrates that P(A and B) cannot be derived from
P(A) and P(B) alone. I do not know why it is so hard to understand and
compare PT non-truth-functionality to the propositional logic
truth-functionality -- it's a simple statement of fact. Also, I'd
like to remind you that the OP made the claim, in so many words, that P(A
and B) can be calculated from P(A) and P(B) alone but failed to back up
his claim.
> What I find strange is your assumption that deductive logic must treat
> indicative conditionals as truth functional for modus ponens and modus
> tollens or for propositional logic. Since propositional logic does not
> require truth functionality for modus ponens, I don't see how predicate
> logic requires it either. However, I am unfamiliar with any arguments
> either way.
To be precise, modus ponens is a derivation rule (possibly the only
one) in propositional/predicate logic. In propositional logic,
material implication/conditional tautologies have the same syntactical
pattern as modus ponens proofs, which can serve as a sort of
justification for modus ponens' truth-preservation feature. So the correct statement
should be that modus ponens is truth preserving rather than truth
functional.
>
>
> > If some other, non-truth functional logic was implied, then the
> > context should have been clearly stated, the usual assumption being
> > that the '[default]logic' = 'propositional logic/predicate calculus'.
>
> I don't see any evidence that Keith assumed any different context
> requiring a statement. Instead, I question your assumption that
> indicative conditionals are necessarily truth functional in deductive logic.
That the propositional/predicate logic conditional is truth preserving
is a trivial mathematical fact, not an assumption.
> You did not answer my questions. If you are not confused on the issue of
> generalization, are you suggesting that classical mechanics and
> relativity require merging?
In the sense of truth-preservation and/or formal derivability, PT
cannot be called logic generalization by any stretch of imagination.
In the sense of being a system of reasoning, yes, PT can be regarded as
a propositional logic generalization.
>
> If that was your main point, why didn't you simply state it instead of
> making philosophical arguments regarding truth-functionality and
> generalization?
Truth functionality or its absence is a simple mathematical fact when
applied to PT or PL/PC so my argument was purely mathematical.
>
>
> >>>>>What Jaynes did in his derivation of the sum/product rules has got
> >>>>>nothing to do with your mindless playing with formulas. See the
> >>>>>argument from authority in my previous messages.
> >>>>
> >>>>Your argument from authority was flawed. I will reply in the other thread.
> >>>
> >>>The argument from authority was a quote from Jaynes' book , not mine.
> >>
> >>The source of the quote is irrelevant because the flaw itself was a lack
> >>of relevance. Keith never relied on the meaning of any meaningless
> >>value. The fact that the value of x might be indeterminate is
> >>unimportant to the conclusion that x times zero is zero because the
> >>conclusion holds for all x.
> >
> > He relied on the non-existing value, not an existing but
> > unknown/indeterminate value. There is no x*zero simply because x does
> > not exist.
>
> But Keith did qualify that his argument applied in the limit. Would you
> have accepted his argument had he presented it in explicit limit
> notation? P(B|A) is defined in the limit as P(A) approaches 0 after all.
It can be so defined, yes, but there are at least two problems with
such a definition. (1) P(false) = 0 is usually derived directly from
Cox's postulates or from the sum/product rules without relying on the
limit, so any departure from the usual derivation should be clearly
stated along with possible problems such approach may have; (2) more
importantly the limit idea is inapplicable in finite domains (e.g.
picking a ball from an urn, or some such) where probability values are
taken from a finite domain of possibilities.
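Point (2) can be made concrete with a toy finite model (the urn size n = 5 is an arbitrary choice): every event probability is a ratio k/n, so the attainable values form a finite set, and there is no continuum along which P(A) could approach 0:

```python
from fractions import Fraction

# Drawing one ball from an urn of n distinguishable balls: every
# event has probability k/n for some integer 0 <= k <= n, so the
# attainable probabilities form a finite set.  n = 5 is arbitrary.
n = 5
attainable = sorted(Fraction(k, n) for k in range(n + 1))

assert len(attainable) == n + 1            # only n + 1 possible values
assert Fraction(1, 2) not in attainable    # e.g. 1/2 is unattainable
# The smallest nonzero probability is 1/n; nothing lies between it and 0.
assert min(p for p in attainable if p > 0) == Fraction(1, n)
```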
>
> If you would accept the proof using limit notation, then your argument
> seems more of a quibble over notation to me.
I do not accept it because of (1) and (2).
I've replied to the limit attempt elsewhere.
>
>
> >>>>P(A) = 1
> >>>>P(A or B) = P(~(~A and ~B))
> >>>>P(A or B) = 1 - P(~A and ~B)
> >>>>P(A or B) = 1 - P(~B|~A)P(~A)
> >>>
> >>>
> >>>Since P(~A) equals zero, the above statement does not make sense.
> >>
> >>But Keith stated "in the limit of". Thus, one could read the first line
> >>of what he wrote as
> >>
> >>lim P(A) as P(A) -> 1
> >
> > 'Limit' is not just a magical word, "hey, presto". One has to show
> > that such a limit indeed exists, in what sense it exists, and even then
> > it would be a useless exercise because there is a simple and direct
> > answer (see above).
>
> The only simple and direct answer I see above is the one Keith already
> gave that you refuse to accept no matter how valid it is.
It's not valid in finite models.
> Bob Badour wrote:
>
>>vc wrote:
>
> [...]
>
>>>My very first message just modestly reminded that probability is not
>>>truth functional, that was all.
>>
>>With all due respect, whether it is truth functional is a philosophical
>>matter that applies equally to conditionals in logic. See
>>http://plato.stanford.edu/entries/conditionals/
>>
>>Some philosophers treat both probability and logic as truth functional
>>and some don't.
>
> I am afraid you misunderstood the article you are referring to. The
> article discusses various interpretations of the conditional and
> speculates on whether the conditional can be regarded as
> truth-preserving. However, the propositional logic has it quite
> clear: the implication *is* truth-preserving (as is the entire
> predicate logic). There is no ambiguity or opinions with respect to
> predicate logic.
Why the sudden switch from truth-functional to truth-preserving?
>>>The proponents of the probabilistic logics hoped that P(A|B) = P(B->A).
>>> Lewis showed that the conditional probability cannot be the
>>>probability of implication (the truth functional conditional) thus
>>>making truth-functionality impossible through the conditional
>>>probability either.
>>
>>Unless one accepts Stalnaker's philosophy instead of Lewis'.
>
> The cornerstone of Stalnaker's ideas was the conjecture that the
> probability of a conditional is the same as the conditional probability
> (ca 1968). Lewis showed that it's not the case.
That's odd, because the Stalnaker work I referred to was done subsequent
to Lewis' work.
>>> In order to
>>>derive the compound statement probability additional information must
>>>be taken into account while with PL the compound statement truth
>>>depends only on truth values of its sub-propositions.
>>
>>Just as relativity depends on the speed of a frame of reference relative
>>to the speed of light whereas classical mechanics does not but only
>>really holds in some limit of that speed, and just as the cosine law
>>depends on the angle between two sides of a triangle whereas the
>>Pythagorean theorem applies only at one specific angle.
>>
>>Are you suggesting that relativity is not a generalization of classical
>>mechanics to relativistic speeds? Are you suggesting that the cosine law
>>is not a generalization of the pythagorean theorem to acute and obtuse
>>angles?
>>
>>You seem to be arguing that the requirement for additional information
>>in the general case makes a generalization invalid whereas all other
>>generalizations seem to have similar requirements.
>
> My very simple point was that PT is not truth functional and therefore
> cannot qualify as a generalization of propositional logic or any other
> logic possessing truth-functionality.
Your argument rests on the axiom that conditional statements in
predicate logic are necessarily truth-functional. I have already shown
that the axiom is false, which makes your argument unsound.
>>With all due respect, your dismissal reeks of evasion. We are discussing
>>generalizations and specializations. You apparently argue that
>>probability theory is not a generalization of logic, and you allege that
>>your challenge demonstrates your argument. It does not.
>
> My challenge merely demonstrates that P(A and B) cannot be derived from
> P(A) and P(B) alone.
Which is pointless and uninteresting. Anything is indeterminate without
sufficient knowledge. Such as the acceleration due to a known force
applied to a known mass moving at an unknown fraction of the speed of light.
I do not know why it is so hard to understand and
> compare PT non-truth-functionality to the propositional logic
> truth-functionality -- it's a simple statement of fact.
And as in the case of all statements of fact, the statement can be
false. In this case, the statement is false. The truth-functionality of
the indicative conditional is not a prerequisite for propositional logic either.
Also, I'd
> like to remind you that the OP made the claim, in so many words, that P(A
> and B) can be calculated from P(A) and P(B) alone but failed to back up
> his claim.
I have followed this thread. I don't recall where he stated that at all.
Could you perhaps find the relevant quote?
>>What I find strange is your assumption that deductive logic must treat
>>indicative conditionals as truth functional for modus ponens and modus
>>tollens or for propositional logic. Since propositional logic does not
>>require truth functionality for modus ponens, I don't see how predicate
>>logic requires it either. However, I am unfamiliar with any arguments
>>either way.
>
> To be precise, modus ponens is a derivation rule (possibly the only
> one) in propositional/predicate logic. In propositional logic,
> material implication/conditional tautologies have the same syntactical
> pattern as modus ponens proofs, which can serve as a sort of
> justification for modus ponens' truth-preservation feature. So the correct statement
> should be that modus ponens is truth preserving rather than truth
> functional.
>
>
>>>If some other, non-truth functional logic was implied, then the
>>>context should have been clearly stated, the usual assumption being
>>>that the '[default]logic' = 'propositional logic/predicate calculus'.
>>
>>I don't see any evidence that Keith assumed any different context
>>requiring a statement. Instead, I question your assumption that
>>indicative conditionals are necessarily truth functional in deductive logic.
>
> That the propositional/predicate logic conditional is truth preserving
> is a trivial mathematical fact, not an assumption.
Why the sudden switch in terminology? Are you saying that you wasted
everybody's time with sloppy terminology and now you want us to
reinterpret everything you wrote previously?
>>You did not answer my questions. If you are not confused on the issue of
>>generalization, are you suggesting that classical mechanics and
>>relativity require merging?
>
> In the sense of truth-preservation and/or formal derivability, PT
> cannot be called logic generalization by any stretch of imagination.
It's inductive logic and not deductive logic. Big deal. I don't see how
that makes it any less a generalization. Deduction is a property of the
special case that does not apply to the general case. Conservation of
mass is a property that applies to the special case but not the general
case too. Are you suggesting we should reject quantum and relativistic
mechanics because they lack this familiar and useful property?
> In the sense of being a system of reasoning, yes, PT can be regarded as
> a propositional logic generalization.
In other words, you agree with Keith and your whole pretense at argument
has been a waste of time.
>>If that was your main point, why didn't you simply state it instead of
>>making philosophical arguments regarding truth-functionality and
>>generalization?
>
> Truth functionality or its absence is a simple mathematical fact when
> applied to PT or PL/PC so my argument was purely mathematical.
It is an axiom that is required for neither PT nor PL/PC. Neither is it
entirely excluded from either. Whether one assumes indicative
conditionals are truth-functional is a matter of philosophy and not a
matter of fact.
Since truth-functionality is equally optional in both the general case
and the special case, I fail to see any relevance.
>>>>>>>What Jaynes did in his derivation of the sum/product rules has got
>>>>>>>nothing to do with your mindless playing with formulas. See the
>>>>>>>argument from authority in my previous messages.
>>>>>>
>>>>>>Your argument from authority was flawed. I will reply in the other thread.
>>>>>
>>>>>The argument from authority was a quote from Jaynes' book , not mine.
>>>>
>>>>The source of the quote is irrelevant because the flaw itself was a lack
>>>>of relevance. Keith never relied on the meaning of any meaningless
>>>>value. The fact that the value of x might be indeterminate is
>>>>unimportant to the conclusion that x times zero is zero because the
>>>>conclusion holds for all x.
>>>
>>>He relied on the non-existing value, not an existing but
>>>unknown/indeterminate value. There is no x*zero simply because x does
>>>not exist.
>>
>>But Keith did qualify that his argument applied in the limit. Would you
>>have accepted his argument had he presented it in explicit limit
>>notation? P(B|A) is defined in the limit as P(A) approaches 0 after all.
>
> It can be so defined, yes, but there are at least two problems with
> such a definition. (1) P(false) = 0 is usually derived directly from
> Cox's postulates
Since the probability that a statement is false is not necessarily 0, I
fail to see how one can derive it. Or are you suggesting that false is
somehow a meaningful statement on its own?
or from sum/product rules without relying on the
> limit, so any departure from the usual derivation should be clearly
> stated along with possible problems such approach may have; (2) more
> importantly the limit idea is inapplicable in finite domains (e.g.
> picking a ball from an urn, or some such) where probability values are
> taken from a finite domain of possibilities.
If Keith were talking about the Kolmogorov formulation, you would have a
point. He has been very explicit about using the Cox formulation.
>>If you would accept the proof using limit notation, then your argument
>>seems more of a quibble over notation to me.
>
> I do not accept it because of (1) and (2).
I conclude you lack intellectual honesty and are arguing simply for the
sake of having a pissing contest. Life is too short to waste it on
idiots like you. Plonk.
Because the PL conditional is truth-functional while modus ponens is
truth-preserving. You might (or might not) greatly benefit from a trip
to the nearest library, it seems. There, you can find an abundant
supply of books on propositional logic, derivation, etc. that might
put you straight.
> >>You seem to be arguing that the requirement for additional information
> >>in the general case makes a generalization invalid whereas all other
> >>generalizations seem to have similar requirements.
> >
> > My very simple point was that PT is not truth functional and therefore
> > cannot qualify as a generalization of propositional logic or any other
> > logic possessing truth-functionality.
>
> Your argument rests on the axiom that conditional statements in
> predicate logic are necessarily truth-functional. I have already shown
> that the axiom is false, which makes your argument unsound.
Could you provide an example of a statement in propositional logic
that would *not* be truth-functional (assuming the standard
interpretation)? Or will you be as forthcoming with an answer as the OP
who claimed that PT was truth-functional?
> > I do not know why it is so hard to understand and
> > compare PB non-truth-functionality to the propositional logic
> > truth-functionality -- it's a simple statement of fact.
>
> And as in the case of all statements of fact, the statement can be
> false. In this case, the statement is false. The truth-functionality of
> indicative conditional is not a prerequisite for propositional logic either.
Care to oblige us with an example of such a non-truth-preserving
statement in propositional logic?
> Also, I'd
> > like to remind that the OP made the claim, in so many words, that P(A
> > and B) can be calculated from P(A) and P(B) alone but failed to back up
> > his claim.
>
> I have followed this thread. I don't recall where he stated that at all.
> Could you perhaps find the relevant quote?
Sure thing, bubba. Enjoy:
"When you apply the connectives to a probability-valued
statements you get probability-valued statements whose
probability depends only on the constituent probabilities.
"
> >
> > That the propositional/predicate logic conditional is truth preserving
> > is a trivial mathematical fact, not an assumption.
>
> Why the sudden switch in terminology? Are you saying that you wasted
> everybody's time with sloppy terminology and now you want us to
> reinterpret everything you wrote previously?
That was a typo for which I apologize. The propositional logic
conditional/implication, like the rest of the propositional logic
connectives, is most certainly truth-functional. The derivation rule
of modus ponens is of course truth-preserving.
> >>You did not answer my questions. If you are not confused on the issue of
> >>generalization, are you suggesting that classical mechanics and
> >>relativity require merging?
> >
> > In the sense of truth-preservation and/or formal derivability, PT
> > cannot be called logic generalization by any stretch of imagination.
>
> It's inductive logic and not deductive logic. Big deal. I don't see how
> that makes it any less a generalization. Deduction is a property of the
> special case that does not apply to the general case. Conservation of
> mass is a property that applies to the special case but not the general
> case too. Are you suggesting we should reject quantum and relativistic
> mechanics because they lack this familiar and useful property?
>
>
> > In the sense of being a system of reasoning, yes, PT can be regarded as
> > a propositional logic generalization.
>
> In other words, you agree with Keith and your whole pretense at argument
> has been a waste of time.
Only for people who have no clue what they are talking about.
Apparently, you belong to the group.
> >>If that was your main point, why didn't you simply state it instead of
> >>making philosophical arguments regarding truth-functionality and
> >>generalization?
> >
> > Truth functionality or its absence is a simple mathematical fact when
> > applied to PT or PL/PC so my argument was purely mathematical.
>
> It is an axiom that is required for neither PT nor PL/PC. Neither is it
> entirely excluded from either. Whether one assumes indicative
> conditionals are truth-functional is a matter of philosophy and not a
> matter of fact.
So how about an example of such a non-truth-functional statement in
propositional logic?
>
> Since truth-functionality is equally optional in both the general case
> and the special case, I fail to see any relevance.
Sure you do.
> >>But Keith did qualify that his argument applied in the limit. Would you
> >>have accepted his argument had he presented it in explicit limit
> >>notation? P(B|A) is defined in the limit as P(A) approaches 0 after all.
> >
> > It can be so defined, yes, but there are at least two problems with
> > such definition. (1)P(false) = 0 is usually derived directly from
> > Cox's postulates
>
> Since the probability that a statement is false is not necessarily 0, I
> fail to see how one can derive it. Or are you suggesting that false is
> somehow a meaningful statement on its own?
'False' denotes an impossible proposition, the opposite of a certain
one (in Jaynes's terminology), not your strawman of 'the probability
that a statement is false'; P(false) is certainly zero. The fact is
derivable both from the sum/product rules and from the Cox
postulates, and is quite obvious intuitively.
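For reference, a one-line sketch of that derivation from the product rule, writing 'false' as the contradiction A and ~A (the notation here is mine, not from the thread):

```latex
P(\text{false}) = P(A \land \lnot A) = P(A)\,P(\lnot A \mid A) = P(A) \cdot 0 = 0
```

Since ~A is impossible given A, P(~A | A) = 0, so the product vanishes regardless of the value of P(A).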
>
>
> > or from sum/product rules without relying on the
> > limit, so any departure from the usual derivation should be clearly
> > stated along with possible problems such approach may have; (2) more
> > importantly the limit idea is inapplicable in finite domains (e.g.
> > picking a ball from an urn, or some such) where probability values are
> > taken from a finite domain of possibilities.
>
> If Keith were talking about the Kolmogorov formulation, you would have a
> point. He has been very explicit about using the Cox formulation.
Who's talking about the 'Kolmogorov formulation'? I meant some finite
Bayesian model.
Are you trying to say that Bayesian probability is inapplicable to
finite models like population dynamics? If yes, how come Jaynes
somehow manages to discuss such models (see the sampling chapter)? If
not, then how do you go about finding a limit of a non-real-valued
function (P(A) obviously takes values from a finite set if the model
is finite), with your engineering education and all?
>
>
> >>If you would accept the proof using limit notation, then your argument
> >>seems more of a quibble over notation to me.
> >
> > I do not accept it because of (1) and (2).
>
> I conclude you lack intellectual honesty and are arguing simply for the
> sake of having a pissing contest. Life is too short to waste it on
> idiots like you. Plonk.
Indeed.
A couple of minor points.
I thought that it wasn't possible to have a "speed of a frame of
reference relative to the speed of light" as the speed of light is the
same to every observer.
While I know the triangle argument is restricted to Euclidean
geometry, in the 'real world' we are on the surface of a sphere
(roughly). The lengths of your sides must be small enough that you
can _assume_ you are in a Euclidean space.
Ahhh ... FINALLY. I congratulate you on finally having the
honesty and courage to admit this. If only it hadn't taken
so much arduous vociferous ignorant complaining then we
could have had a far more interesting discourse.
> My very simple point was that PT is not truth functional
> and therefore cannot qualify as a generalization of
> propositional logic or any other logic possessing
> truth-functionality.
It has been stated and proven here several times that /in
the logical limit/, i.e. when all values are either certain
(1) or impossible (0), probability theory IS truth-functional.
Nobody here (myself included) has EVER claimed that PT is
"truth-functional" in your understanding of the phrase, i.e.
beyond this limit. For one, because I don't even apply the
concept of "truth-functional" beyond the limit, that is, to
degrees of /belief/ (Cox formulation). Partly because I have
never been talking about degrees of /truth/, which you later
migrated to when you started whining about probabilistic
logics etc. It should have been clear to you when I
introduced a different phrase, "probability-functional", that
we were talking about different things. (Well, /now/ you are
talking about different concepts, but originally I don't
think you knew what you were talking about.)
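The "logical limit" claim can be checked with a minimal Python sketch. It assumes the sum rule P(~A) = 1 - P(A), the extended sum rule for disjunction, and a product rule whose conditional degenerates at the endpoints (conditioning on a certain event is vacuous, and a conjunction with an impossible event is impossible); the helper names are mine.

```python
# Sketch: in the "logical limit", where every probability is either
# certain (1) or impossible (0), the sum and product rules reproduce
# the classical NOT/AND/OR truth tables.

from itertools import product

def p_not(pa):
    # sum rule: P(~A) = 1 - P(A)
    return 1 - pa

def p_and(pa, pb):
    # product rule P(A and B) = P(A) P(B|A) in the limit:
    # if A is impossible the conjunction is impossible; if A is
    # certain, conditioning on it is vacuous and P(B|A) = P(B).
    return 0 if pa == 0 else pa * pb

def p_or(pa, pb):
    # extended sum rule: P(A or B) = P(A) + P(B) - P(A and B)
    return pa + pb - p_and(pa, pb)

# Check every 0/1 assignment against Python's boolean connectives.
for pa, pb in product((0, 1), repeat=2):
    assert p_and(pa, pb) == (pa and pb)
    assert p_or(pa, pb) == (pa or pb)
    assert p_not(pa) == (not pa)

print("0/1 probabilities reproduce the NOT/AND/OR truth tables")
```

Outside the 0/1 endpoints the sketch's `p_and` is of course no longer valid in general, which is exactly the point under dispute: P(B|A) is then an independent degree of belief, not a function of P(B).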
> As I said before, there are profound connections between
> PT and propositional calculus, and PT can be regarded,
> metaphorically, as a generalization of purely logical
> reasoning about uncertainty. However, their areas of
> applicability are quite different, and the generalization
> talk serves nothing but to increase confusion.
It is more than "metaphorical", that's just more of your
"conjure" "spirit" crap. And stop projecting /YOUR/
confusion and ignorance on others.
> Since I have nothing more to add except PT lacking
> truth-functionality and a system of derivation axioms
> (Abadi/Halpern), this will be my last comment on
> propositional logic vs. PT.
More of your "Internet Mind" fiction, and I believe this is a
Celko as well (i.e. a reference that does not claim what you
claim it claims). Abadi and Halpern argue NO SUCH THING. And
"derivation axioms" is senseless. If you still want to make
this claim, provide a precise reference (page, theorem, etc.)
or modify your claim of what A/H argue.
> To be precise, modus ponens is a derivation rule (possibly
> single) in the propositional/predicate logic.
At least now you appear to know that modus ponens is a
derivation /rule/ and not a "derivation axiom".
> My challenge merely demonstrates that P(A and B) cannot be
> derived from P(A) and P(B) alone.
In the /logical limit/ YES THEY CAN. This has been proven
here, and you eventually agreed (though you bitched about the
proof being "wrong" and tried to hide your backtracking) that
the /truth/ tables derived using PT are /exactly the same/ as
the usual logical truth tables. Here, remember, by /truth/ I
mean certainty (1) and impossibility (0), not some "degrees
of truth" you want to swindle in so you can attack
probabilistic or fuzzy logic strawmen.
> I do not know why it is so hard to understand and compare
> PT non-truth-functionality to the propositional logic
> truth-functionality -- it's a simple statement of fact.
I do not know why you can't understand what someone means by
"generalization" "specialization" "in the limit" etc.
> Also, I'd like to remind you that the OP made the claim, in
> so many words, that P(A and B) can be calculated from P(A)
> and P(B) alone but failed to back up his claim.
I claimed that ONLY IN THE LIMIT OF CERTAINTY AND
IMPOSSIBILITY. Why is this so hard for you to understand?
vc wrote:
> Bob Badour wrote:
> > vc wrote:
> > > Also, I'd like to remind you that the OP made the
> > > claim, in so many words, that P(A and B) can be
> > > calculated from P(A) and P(B) alone but failed to back
> > > up his claim.
> >
> > I have followed this thread. I don't recall where he
> > stated that at all. Could you perhaps find the relevant
> > quote?
>
> Sure thing, bubba. Enjoy:
>
> "When you apply the connectives to probability-valued
> statements you get probability-valued statements whose
> probability depends only on the constituent
> probabilities."
It's /your/ limited understanding and ignorance that leads
you to believe those statements are equivalent, bubba. Given
that I have employed and discussed the product rule several
times, along with the Cox derivation of same, you are blind
if you believe that I claimed P(A and B) = f(P(A),P(B)).
Hell, one need only look at the functional requirements used
to derive the product rule to realize this is not generally
the case. HOWEVER, to repeat, it /IS TRUE/ in the limit of
certainty and impossibility since, AGAIN, the truth tables
are the same in both logic and PT.
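Why P(A and B) is not a function of P(A) and P(B) alone outside that limit can be made concrete with a small counterexample (the joint tables below are my own toy illustration, not from the thread): two joint distributions with identical marginals but different conjunction probabilities.

```python
# Two joint distributions over binary propositions A, B. Joint tables
# are dicts mapping (a, b) -> probability.

def marginal_a(joint):
    return sum(p for (a, _), p in joint.items() if a)

def marginal_b(joint):
    return sum(p for (_, b), p in joint.items() if b)

def p_and(joint):
    return joint[(True, True)]

# Scenario 1: B is the same event as A (total dependence).
dependent   = {(True, True): 0.5,  (True, False): 0.0,
               (False, True): 0.0, (False, False): 0.5}

# Scenario 2: B is independent of A.
independent = {(True, True): 0.25,  (True, False): 0.25,
               (False, True): 0.25, (False, False): 0.25}

# Identical marginals in both scenarios: P(A) = P(B) = 0.5 ...
for joint in (dependent, independent):
    assert marginal_a(joint) == 0.5 and marginal_b(joint) == 0.5

# ... yet different conjunctions, so no function f(P(A), P(B))
# can yield both values.
assert p_and(dependent) == 0.5
assert p_and(independent) == 0.25
```

At the 0/1 endpoints this ambiguity disappears, which is why the claim holds in the limit of certainty and impossibility but not in general.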
> In the sense of being a system of reasoning, yes, PT can
> be regarded as a propositional logic generalization.
Ahhh ... FINALLY. I congratulate you on finally having the
honesty and courage to admit this. If only it hadn't taken
so much arduous vociferous ignorant complaining then we
could have had a far more interesting discourse.
> > If that was your main point, why didn't you simply state
> > it instead of making philosophical arguments regarding
> > truth-functionality and generalization?
>
> Truth functionality or its absence is a simple
> mathematical fact when applied to PT or PL/PC so my
> argument was purely mathematical.
Only recently have you started to offer anything remotely
mathematical or even logical. Most of your postings did
nothing but "conjure" vociferous "spiritual" ignorant
"mindless" attacks. Two of which you have now, to your
credit, retracted. You still need to retract a few more and
apologize.
> It can be so defined, yes, but there are at least two
> problems with such definition. (1)P(false) = 0
What is P(false)? "false" is not a statement in ANY logic or
probability I have ever seen. P(false) seems like nonsense
as much as P(0), P(1), or P(4.2) are. Perhaps you are using
"false" to represent a contradiction. Very strange notation.
Something like P(A and ~A) perhaps where "false" = "A and
~A". Bizarre. I'd love some references to material that uses
this "P(false)" notation.
> is usually derived directly from Cox's postulates or from
> sum/product rules without relying on the limit, so any
> departure from the usual derivation should be clearly
> stated along with possible problems such an approach may
> have;
Interesting that you claim there is a "usual derivation" for
a seemingly nonsense statement like P(false). Anyhow, please
present it. I eagerly await your "usual derivation".
-- Keith --