
Probability Theory Is Inconsistent


msadk...@yahoo.com

Sep 11, 2005, 8:12:18 PM
What does it mean to say that the probability of an event having some
outcome is 1/2? It doesn't merely mean that there are two possible
outcomes and that only one of them will occur. Probability theory is
an attempt to quantify the behavior of "sufficiently large" numbers of
events.

For example, if a "random" event has a probability, for one of two
possible outcomes, of 1/2, then given 1,000 such events the ratio of
the number of events having one outcome to the number of events having
the other outcome, is predicted to approach 0.5 within some margin of
variance (plus or minus).

However, inasmuch as it is theoretically possible for 1,000 random
events to have any distribution whatsoever, including a 1,000 event
long run of a single outcome, it is necessary to give the expected
distribution a probability of its own. For example, if 1,000 sets of
1,000 events each set are recorded, it is expected that most of them
will obey the regular distribution, and both this and divergences from
it are assigned probabilities.
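The setup described above is easy to check numerically; here is a minimal simulation sketch, assuming a fair two-outcome event modelled with Python's random module (the seed and the 0.05 margin are illustrative assumptions):

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

# Record 1,000 sets of 1,000 two-outcome events each, keeping the
# proportion of one outcome ("heads") in every set.
proportions = [
    sum(random.random() < 0.5 for _ in range(1000)) / 1000
    for _ in range(1000)
]

# Most sets land near 0.5; large divergences are possible but rare.
near_half = sum(abs(p - 0.5) <= 0.05 for p in proportions)
print(f"sets with proportion in [0.45, 0.55]: {near_half} of 1000")
```

In a typical run nearly every set falls inside the margin, which is exactly the "most of them will obey the regular distribution" behaviour described.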

But what is true of 1,000 random events is true of 1,000 x 1,000 =
1,000,000 random events: any distribution whatsoever is theoretically
possible. The probabilities for such sets must then, in turn, be based
on yet another probability assignment; for example, 1,000 groups of
1,000 sets of 1,000 events.

By now it has become obvious that a definition of "the probability of
an event" leads to an infinite definitional regress. And this is fatal
to probability theory because it deals with finite quantities; after
all, what numerical meaning could be assigned to the expression "1/2 of
infinity", and how would it differ from "1/3 of infinity" or "1/4 of
infinity", etc.?

So, "probability" cannot be defined with reference to finite numbers of
random events; nor can it be defined with reference to an "infinite"
number of such events; therefore it cannot be defined at all with
respect to such events; whereas the "probability" of a "determined"
event is 1.

Mark Adkins
msadk...@yahoo.com

Tron

Sep 11, 2005, 8:39:04 PM

<msadk...@yahoo.com> wrote in message
news:1126483938....@g43g2000cwa.googlegroups.com...

You're probably right, but who knows?
T


quasi

Sep 11, 2005, 11:44:46 PM

You can't invalidate a theory by first defining your own version
of it -- you have to use the actual definition. If you take the
trouble to learn about sample spaces, events, etc. from a respected
text on mathematical probability, then you will see that probability
is an abstract theory, a kind of algebra. The tie to real world events
is at heart, experimental, and not a part of the theory itself. In
that sense, probability theory is immune from the type of
inconsistency you allude to. Of course, you can argue that the tie to
real world events is not proven, fine -- the theory itself doesn't try
to prove that tie. On the other hand, if you repeatedly bet lots of
money against what the laws of probability predict, the tie to the
real world will soon become apparent.

quasi

Herman Rubin

Sep 11, 2005, 9:42:35 PM
In article <1126483938....@g43g2000cwa.googlegroups.com>,

<msadk...@yahoo.com> wrote:
>What does it mean to say that the probability of an event having some
>outcome is 1/2? It doesn't merely mean that there are two possible
>outcomes and that only one of them will occur. Probability theory is
>an attempt to quantify the behavior of "sufficiently large" numbers of
>events.

This is a common misconception due to the teaching of
probability starting with combinatorics and "equally
likely." Nothing like this is the case, but far too
many people cannot get over it. It is not the case
that the probability that a coin from the mint will
fall heads is exactly 1/2.

>For example, if a "random" event has a probability, for one of two
>possible outcomes, of 1/2, then given 1,000 such events the ratio of
>the number of events having one outcome to the number of events having
>the other outcome, is predicted to approach 0.5 within some margin of
>variance (plus or minus).

I suggest you read some good undergraduate theoretical
probability books to get your terminology correct.

................

>By now it has become obvious that a definition of "the probability of
>an event" leads to an infinite definitional regress.

This is only if one assumes that probability is linguistically
definable. We can take the physical approach that probability
exists, and satisfies the laws of measure theory. Calculating
probabilities by repeated trials converges slowly.

>And this is fatal
>to probability theory because it deals with finite quantities; after
>all, what numerical meaning could be assigned to the expression "1/2 of
>infinity", and how would it differ from "1/3 of infinity" or "1/4 of
>infinity", etc.?

One can do this with measures, and this is needed. Possibly
you need to get your mathematics up to par.

--
This address is for information only. I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Department of Statistics, Purdue University
hru...@stat.purdue.edu Phone: (765)494-6054 FAX: (765)494-0558

Jonathan Hoyle

Sep 11, 2005, 10:00:49 PM
>> What does it mean to say that the probability of
>> an event having some outcome is 1/2? It doesn't
>> merely mean that there are two possible
>> outcomes and that only one of them will occur.

Of course not. There could be ten, twenty, or a hundred events in
total. Saying that one has a probability of 1/2 does not imply
anything about the others, other than that their aggregate sum is
1 - 1/2 = 1/2.

>> By now it has become obvious that a definition
>> of "the probability of an event" leads to an infinite
>> definitional regress.

Only by the way you've defined it. A mathematically rigorous approach
suffers from none of your straw man attacks.

>> And this is fatal to probability theory because it
>> deals with finite quantities; after all, what
>> numerical meaning could be assigned to the
>> expression "1/2 of infinity", and how would it
>> differ from "1/3 of infinity" or "1/4 of infinity", etc.?

Untrue. You merely need to see where the limits tend.

For example, consider a simple coin game in which Players A and B each
take turns to flip a coin (A going first), and the first one to flip a
heads wins. What is the probability of A winning? Since A flips
first, he could win with a first flip of heads, with probability 1/2.
If he flips tails first, he could still win if Player B flips tails as
well, and then on A's next flip he flips heads (probability 1/2*1/2*1/2
= 1/8). And this goes on infinitely. Thus the probability of A
winning is equal to: 1/2 + 1/8 + 1/32 + 1/128 + ... = 2/3. This
infinite sum yields the correct answer, as you can verify by doing a
large number of trials and seeing that A wins about 2/3 of the time,
while B wins 1/3 of the time.
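That game can be simulated directly; a sketch, where the fair coin is modelled as `random.random() < 0.5` and the trial count is an assumption for illustration:

```python
import random
from fractions import Fraction

random.seed(0)  # fixed seed for reproducibility

def a_wins() -> bool:
    """Play one game: A and B alternate flips; first heads wins."""
    while True:
        if random.random() < 0.5:   # A flips heads: A wins
            return True
        if random.random() < 0.5:   # B flips heads: B wins
            return False

trials = 100_000
wins = sum(a_wins() for _ in range(trials))
print(f"A's win rate: {wins / trials:.3f}")  # close to 2/3

# The exact value of the geometric series 1/2 + 1/8 + 1/32 + ...
exact = Fraction(1, 2) / (1 - Fraction(1, 4))
print(exact)  # 2/3
```

The simulated rate and the closed-form sum of the series agree, as the post says.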

>> So, "probability" cannot be defined with reference
>> to finite numbers of random events; nor can it be
>> defined with reference to an "infinite" number of
>> such events; therefore it cannot be defined at all
>> with respect to such events; whereas the
>> "probability" of a "determined" event is 1.

This little "proof" of yours is little more than the mathematical
equivalent of a bad pun. Probability Theory can be and is used for
analyzing both discrete and continuous processes (a better way of
describing the matter than you have).

Pardon my saying so, but your accusations of inconsistency betray both
your lack of knowledge of Mathematics as well as your arrogant
carelessness of speaking about such things without the courtesy of
simple inquiry.

David C. Ullrich

Sep 12, 2005, 6:33:47 AM
On 11 Sep 2005 17:12:18 -0700, msadk...@yahoo.com wrote:

>What does it mean to say that the probability of an event having some
>outcome is 1/2?

In actual modern mathematical probability theory it doesn't mean
a thing, in a sense. Probability theory starts by _assuming_ a
sample space, with events that have _given_ probabilities.
The mathematicians gave up on defining what P(E) = 1/2 really
means long ago, and just axiomatized everything, taking
P(E) = 1/2 as undefined.

>It doesn't merely mean that there are two possible
>outcomes and that only one of them will occur. Probability theory is
>an attempt to quantify the behavior of "sufficiently large" numbers of
>events.
>
>For example, if a "random" event has a probability, for one of two
>possible outcomes, of 1/2, then given 1,000 such events the ratio of
>the number of events having one outcome to the number of events having
>the other outcome, is predicted to approach 0.5 within some margin of
>variance (plus or minus).
>
>However, inasmuch as it is theoretically possible for 1,000 random
>events to have any distribution whatsoever, including a 1,000 event
>long run of a single outcome, it is necessary to give the expected
>distribution a probability of its own. For example, if 1,000 sets of
>1,000 events each set are recorded, it is expected that most of them
>will obey the regular distribution, and both this and divergences from
>it are assigned probabilities.
>
>But what is true of 1,000 random events is true of 1,000 x 1,000 =
>1,000,000 random events: any distribution whatsoever is theoretically
>possible. The probabilities for such sets must then, in turn, be based
>on yet another probability assignment; for example, 1,000 groups of
>1,000 sets of 1,000 events.
>
>By now it has become obvious that a definition of "the probability of
>an event" leads to an infinite definitional regress.

An attempt to define it the way you do above does not work, that's
true. This was noticed a _long_ time ago. It does not show that
actual probability theory is inconsistent, because your "definition"
is simply _not_ part of probability theory.

>And this is fatal
>to probability theory because it deals with finite quantities; after
>all, what numerical meaning could be assigned to the expression "1/2 of
>infinity", and how would it differ from "1/3 of infinity" or "1/4 of
>infinity", etc.?
>
>So, "probability" cannot be defined with reference to finite numbers of
>random events; nor can it be defined with reference to an "infinite"
>number of such events; therefore it cannot be defined at all with
>respect to such events;

Exactly right, which is exactly why probability is _not_ defined
in a rigorous mathematical treatment.

>whereas the "probability" of a "determined"
>event is 1.
>
>Mark Adkins
>msadk...@yahoo.com


************************

David C. Ullrich

G. A. Edgar

Sep 12, 2005, 7:33:57 AM
Littlewood has an essay something like this.
[Find it in his Miscellany. It is worth reading.]
"Probability" as a mathematical field is perfectly consistent.
But whether (or to what extent) it applies outside mathematics is a
question of philosophy, not of mathematics. In particular, it doesn't
work to "define" probability using laws of large numbers.

--
G. A. Edgar http://www.math.ohio-state.edu/~edgar/

msadk...@yahoo.com

Sep 12, 2005, 2:23:29 PM
quasi wrote:

>
> You can't invalidate a theory by first defining your own version
> of it -- you have to use the actual definition. If you take the
> trouble to learn about sample spaces, events, etc. from a respected
> text on mathematical probability, then you will see that probability
> is an abstract theory, a kind of algebra. The tie to real world events
> is at heart, experimental, and not a part of the theory itself.

I said nothing about real world events, and spoke completely within the
context of theory. What I said is rigorously true and none of the
replies (quite vaguely referring to sample spaces, etc.) have
contradicted my premises, reasoning, or conclusions in the slightest.
What I have encountered by way of response is merely obfuscation and
error -- vague (and false) claims that what I said is wrong, without
demonstrating that this is so, or how it is so. The academic models
cited all contain the very same fundamental, fatal flaw described in my
exposition; the fact that they employ varying language to conceal it
does not change this. Please follow your own advice and do not
misrepresent my comments in order to "disprove" them.

Mark Adkins
msadk...@yahoo.com

msadk...@yahoo.com

Sep 12, 2005, 2:57:45 PM
Herman Rubin wrote:
> In article <1126483938....@g43g2000cwa.googlegroups.com>,
> <msadk...@yahoo.com> wrote:
> >What does it mean to say that the probability of an event having some
> >outcome is 1/2? It doesn't merely mean that there are two possible
> >outcomes and that only one of them will occur. Probability theory is
> >an attempt to quantify the behavior of "sufficiently large" numbers of
> >events.
>
> This is a common misconception due to the teaching of
> probability starting with combinatorics and "equally
> likely." Nothing like this is the case, but far too
> many people cannot get over it. It is not the case
> that the probability that a coin from the mint will
> fall heads is exactly 1/2.

I said nothing about coin tosses. I said that probability theory,
specifically academic statistical theory, involves abstract "random
events" and that in this context probability is so defined. What
you're doing is attempting to conceal a glaring theoretical defect with
smoke and mirrors (i.e., redefinitions using obscurantism and
misrepresentation).

>
> >For example, if a "random" event has a probability, for one of two
> >possible outcomes, of 1/2, then given 1,000 such events the ratio of
> >the number of events having one outcome to the number of events having
> >the other outcome, is predicted to approach 0.5 within some margin of
> >variance (plus or minus).
>
> I suggest you read some good undergraduate theoretical
> probability books to get your terminology correct.

There is nothing wrong with what I wrote above, factually speaking.
That IS in fact the case. If you find the language insufficiently
technical, too bad: it's substantively correct.

>
> ................
>
> >By now it has become obvious that a definition of "the probability of
> >an event" leads to an infinite definitional regress.
>
> This is only if one assumes that probability is linguistically
> definable. We can take the physical approach that probability
> exists, and satisfies the laws of measure theory. Calculating
> probabilities by repeated trials converges slowly.

Surely you aren't asserting that the *theoretical* concept of
"probability" cannot be defined except by reference to real-world
empiricism? And what do you mean about convergence? Convergence to
what? The expected distributions! And those distributions are
NECESSARILY based upon an infinite regress, and are therefore
insupportable. I have explained exactly why this is the case, and my
explanation remains accurate, regardless of attempts to obscure and
deny this.

Mark Adkins
msadk...@yahoo.com

msadk...@yahoo.com

Sep 12, 2005, 3:19:49 PM
Jonathan Hoyle wrote:

>
> Mark Adkins wrote:
> >> By now it has become obvious that a definition
> >> of "the probability of an event" leads to an infinite
> >> definitional regress.
> >> And this is fatal to probability theory because it
> >> deals with finite quantities; after all, what
> >> numerical meaning could be assigned to the
> >> expression "1/2 of infinity", and how would it
> >> differ from "1/3 of infinity" or "1/4 of infinity", etc.?
>
> Untrue. You merely need to see where the limits tend.
>

<snip>

Stick to my example, please. I've shown that no convergence need
occur, theoretically, since the events are random: and I've shown that
statistical definitions of probability as a ratio toward which large
numbers of events tend to occur, must therefore accommodate the
possibility of a continued failure to converge; that the only way to do
this is by means of an infinite regress of probabilities which attempt
to define continued divergence as more and more unlikely; and that such
an infinite regress cannot support a definition of convergence toward a
ratio because convergence is not theoretically required, and because
use of a term like "1/2 of infinity" has no numerical meaning, whereas
statistical probability theory attempts to found itself upon numerical
ratios called probabilities.

> Pardon my saying so, but your accusations of inconsistency betray both
> your lack of knowledge of Mathematics as well as your arrogant
> carelessness of speaking about such things without the courtesy of
> simple inquiry.

Pardon my saying so, but you're merely a defective pseudo-sentient
abusing the truth with sophistry and obscurantism and pseudo-academic
claptrap based upon yet another unacknowledged revision of "reality".
The last time this argument came up, I was specifically assured by a
similar individual that no professional statistician worth his salt
would attempt to employ a definition of probability based upon an
infinite set of events; but I'm having difficulty finding this in the
archives now, though possibly it will turn up. In any case, my
comments remain, in full, true.

Mark Adkins
msadk...@yahoo.com

Herman Rubin

Sep 12, 2005, 3:35:42 PM
In article <1126551465.2...@g49g2000cwa.googlegroups.com>,

<msadk...@yahoo.com> wrote:
>Herman Rubin wrote:
>> In article <1126483938....@g43g2000cwa.googlegroups.com>,
>> <msadk...@yahoo.com> wrote:
>> >What does it mean to say that the probability of an event having some
>> >outcome is 1/2? It doesn't merely mean that there are two possible
>> >outcomes and that only one of them will occur. Probability theory is
>> >an attempt to quantify the behavior of "sufficiently large" numbers of
>> >events.

>> This is a common misconception due to the teaching of
>> probability starting with combinatorics and "equally
>> likely." Nothing like this is the case, but far too
>> many people cannot get over it. It is not the case
>> that the probability that a coin from the mint will
>> fall heads is exactly 1/2.

>I said nothing about coin tosses. I said that probability theory,
>specifically academic statistical theory, involves abstract "random
>events" and that in this context probability is so defined. What
>you're doing is attempting to conceal a glaring theoretical defect with
>smoke and mirrors (i.e., redefinitions using obscurantism and
>misrepresentation).

There is NO reason to assume that outcomes are equally
likely in theoretical probability. Until it is realized
that the axioms of theoretical probability say nothing
about "equally likely", no understanding is possible.


................

It is a theorem of probability that IF certain assumptions
are made, and physical events are subject to the laws of
probability, THEN any particular relative frequency will
approach the probability with probability one. There are even
linguistic models where this can be strengthened.

This cannot be used as a definition of probability except
in those linguistic models. Probability theory is consistent
if mathematics is consistent; your problem is that you are
assuming that certain things are equally likely which are not.
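The theorem referred to (relative frequency approaching the probability with probability one) can be illustrated by simulation; a sketch, where the bias p = 0.3 and the trial count are assumptions chosen for illustration:

```python
import random

random.seed(42)  # fixed seed for reproducibility

# Running relative frequency for an event of probability p = 0.3.
# Under the strong law of large numbers it converges to p with
# probability one -- though no finite prefix guarantees closeness.
p = 0.3
trials = 100_000
heads = sum(random.random() < p for _ in range(trials))
freq = heads / trials
print(f"relative frequency after {trials} trials: {freq:.3f}")
```

Note the deliberately non-symmetric p: nothing here assumes "equally likely" outcomes, which is the point of the post.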

msadk...@yahoo.com

Sep 12, 2005, 3:48:20 PM
David C. Ullrich wrote:
> On 11 Sep 2005 17:12:18 -0700, msadk...@yahoo.com wrote:
>
> >What does it mean to say that the probability of an event having some
> >outcome is 1/2?
>
> In actual modern mathematical probability theory it doesn't mean
> a thing, in a sense. Probability theory starts by _assuming_ a
> sample space, with events that have _given_ probabilities.
> The mathematicians gave up on defining what P(E) = 1/2 really
> means long ago, and just axiomatized everything, taking
> P(E) = 1/2 as undefined.

Heh-heh. Probability theory: A theory which says nothing because its
very subject is undefined. Heh-heh. Marvelous. Nebulous, but
marvelous. "We don't know what it is, but it fills textbooks. Just a
lot of symbol pushing, I expect." Heh-heh. Most amusing.

Alright: does *your* "probability theory" envision things called
"random events"? Does its, er, non-definition of "probability" make
statements about the behavior of such events? Because every
undergraduate-level probability theory textbook I've ever seen includes
them, usually using a coin-toss by way of illustration, though it is
understood that this is an abstract, ideal event rather than an actual
toss using an actual coin. Does that statement describe the behavior,
or tendencies, of groups of such events? Does it describe it in terms
of a ratio which such groups of events are said to tend to approach? If
so, you can't get away from the problems I outlined. If not, then it
is jejune.

Probability IS, at rock bottom, a statement about the distribution of
events, whether those events are completely abstract or not (and they
are, within pure theory). The distribution of random events need not
comply with any ratio, or tend toward it: that is also a fact,
definitionally, because it is a necessary part of any definition of
"random". A series of random events may, by definition, or axiomatic
premise, adopt any distribution whatsoever. Now, what can probability
say about such a series? Either nothing, in which case it does not
apply at all to them, or else it can claim, as it does in statistical
probability theory, that such series "tend to converge" toward a
numerical ratio, called a probability, according to definite
mathematical rules; yet, these rules must always be qualified to
accommodate further exceptions, since the series IS random, and this
leads to an infinite DEFINITIONAL regress which is fatal to theory.
And as I've also said, non-random events, which is to say, in this
context, "determined" events, have a probability of 1, so probability
theory has nothing to say about them either.

Incidentally, this analysis of mine HASN'T been around for a "long"
time, because it was NEVER RAISED OR CONSIDERED in treatments of the
subject during the time I went to school. This is just another
ham-handed attempt by the contrarian pseudo-sentients to steal my
thunder. I developed it entirely independently, on the basis of my own
original considerations of the subject, and no revision of "reality"
will convince me otherwise. I've noticed, when I point out the silly,
obvious flaws in the pseudo-scientific theories you base your fraud of
a culture upon, that you can't stand to admit truth, that you always
try to deny it, by obfuscation and obscurantism, or else you ignore it
and go on as usual; and where you can't deny it, you usually attempt to
*falsely* claim previous authorship.

Mark Adkins
msadk...@yahoo.com



Jonathan Hoyle

Sep 12, 2005, 4:01:35 PM
>> Stick to my example, please. I've shown that no convergence
>> need occur, theoretically, since the events are random.

What do you mean "no convergence need occur"? In my example,
convergence does occur. You have an example where it does not? I have
not seen it. The fact that it is random doesn't change the
probability. A fair coin comes up heads with probability 0.5, and
tails with probability 0.5 (barring a landing on its edge). That
someone could theoretically flip heads indefinitely does not change
these probabilities.

>> and I've shown that statistical definitions of probability as a
>> ratio toward which large numbers of events tend to occur,
>> must therefore accommodate the possibility of a continued
>> failure to converge

"Tend to occur" is the key phrase here. "Accomodating the possibility
of continued failure " does not contradict what it tends to do.
Someone flipping 10 heads in a row does not violate the theory. In
fact, the theory states that this tends to occur once every 1024 times.
There is no contradiction here. Word games notwithstanding.
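The "once every 1024 times" figure is just (1/2)**10 = 1/1024, which a quick simulation over disjoint 10-flip blocks bears out (the block count and seed are assumptions for illustration):

```python
import random

random.seed(7)  # fixed seed for reproducibility

# Probability of 10 heads in a row in a block of 10 fair flips is
# (1/2)**10 = 1/1024, so roughly one block in 1024 is all heads.
blocks = 200_000
all_heads = sum(
    all(random.random() < 0.5 for _ in range(10))
    for _ in range(blocks)
)
print(f"all-heads blocks: {all_heads} of {blocks}")
```

The count comes out near 200,000/1024, or about 195: runs of heads happen, at just the rate the theory predicts.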

>> that the only way to do this is by means of an infinite regress
>> of probabilities which attempt to define continued divergence as
>> more and more unlikely

Again, this is a straw man argument. It is completely untrue that "the
only way to do this is with infinite regress". I gave you a specific
example where no infinite regress was required; all you need be able to
do is compute a limit.

<Unintelligible ranting snipped>

>> The last time this argument came up, I was specifically assured
>> by a similar individual that no professional statistician worth his
>> salt would attempt to employ a definition of probability based
>> upon an infinite set of events

I don't know who told you this, but the example I gave of two people
flipping a coin can be solved only by summing up an infinite number of
events. This example was a fairly simple one at that. However, I'm
not sure why a professional statistician would weigh in on this
example, as this is an example of Probability Theory, not Statistics.
That you confuse the two could be forgiven, if not for your arrogance.

>> In any case, my comments remain, in full, true.

Your comments remain, in full, a testament to ignorance. If you were a
sincere person, I'd recommend an introductory course on Probability.
But as you do not appear to want to learn, I'm sure you will
simply continue to post your uneducated remarks.

I'm always amazed by people who are proud of their ignorance.

Anon.

Sep 12, 2005, 4:28:44 PM
Jonathan Hoyle wrote:
>>>Stick to my example, please. I've shown that no convergence
>>>need occur, theoretically, since the events are random.
>
<snip>

>>>In any case, my comments remain, in full, true.
>
>
> Your comments remain, in full, a testament to ignorance. If you were a
> sincere person, I'd recommend an introductory course on Probability.
> But as you do not appear to want to learn, I'm sure you will
> simply continue to post your uneducated remarks.
>
> I'm always amazed by people who are proud of their ignorance.
>
Don't be:
<http://www.apa.org/journals/features/psp7761121.pdf>

Bob

--
Bob O'Hara
Department of Mathematics and Statistics
P.O. Box 68 (Gustaf Hällströmin katu 2b)
FIN-00014 University of Helsinki
Finland

Telephone: +358-9-191 51479
Mobile: +358 50 599 0540
Fax: +358-9-191 51400
WWW: http://www.RNI.Helsinki.FI/~boh/
Journal of Negative Results - EEB: www.jnr-eeb.org

Kees

Sep 12, 2005, 4:24:40 PM

I fail to see what the problem is with the following definition of probability. A probability space is a measure space X such that the measure of X is 1. The probability of an event is then its measure as an element of the \sigma-algebra on X. This is the modern definition of probability.
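That definition can be exercised on a toy example; a sketch of a three-point probability space, where the point weights are an illustrative assumption:

```python
from itertools import chain, combinations
from fractions import Fraction

# A tiny finite probability space: sample space X carrying a measure
# of total mass 1; every subset is an event (the sigma-algebra is
# the power set). The weights below are an illustrative assumption.
X = ("a", "b", "c")
weight = {"a": Fraction(1, 2), "b": Fraction(1, 3), "c": Fraction(1, 6)}

def P(event):
    """Measure of an event = sum of the weights of its points."""
    return sum(weight[x] for x in event)

# Enumerate all events (all subsets of X).
events = list(
    chain.from_iterable(combinations(X, r) for r in range(len(X) + 1))
)

# Kolmogorov's conditions on this finite space: P(X) = 1,
# non-negativity, and additivity on disjoint events.
assert P(X) == 1
assert all(P(e) >= 0 for e in events)
assert P(("a", "b")) == P(("a",)) + P(("b",))
print("axioms hold on this finite space")
```

No appeal to limiting frequencies appears anywhere: probability here is simply a normalized measure, which is the thread's recurring point.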

Jonathan Hoyle

Sep 12, 2005, 6:02:22 PM
Hi Bob,

Fascinating link! I'll have to read it through, as it has an
interesting psychological perspective.

Thanks,

Jonathan

Reef Fish

Sep 12, 2005, 6:57:59 PM

Jonathan Hoyle wrote:
> >> Stick to my example, please. I've shown that no convergence
> >> need occur, theoretically, since the events are random.
>
> What do you mean "no convergence need occur"? In my example,
> convergence does occur. You have an example where it does not?

< snip >

> >> that the only way to do this is by means of an infinite regress
> >> of probabilities which attempt to define continued divergence as
> >> more and more unlikely
>
> Again, this is a straw man argument. It is completely untrue that "the
> only way to do this is with infinite regress". I gave you a specific
> example where no infinite regress was required; all you need be able to
> do is compute a limit.
>
> <Unintelligible ranting snipped>

LOL!

I've read posts from Alice in Wonderland from some regular posters
in sci.stat.math, but you two sound like mathematicians twisted
into pretzels from having gone through the Klein Bottle.

Then I realized you must have been from the OTHER groups cross-
posted.


If you are a STATISTICIAN, rather than a mathematician who is
ill-trained and ill-equipped to think STATISTICALLY, you should
have known that the meaning and definition of "probability" is
far from unique, and that there are different valid ways of
DEFINING the meaning of "probability".

In particular, NO limit, convergence, or any of the mathematical
gobbledygook is NECESSARY to make coherent and CONSISTENT sense
out of PROBABILITY.


It'll help you untie the pretzel and the chains on your mathematical
ankles if I cite a couple of references which contain hundreds
of other references to help you think in terms of SUBJECTIVE
probabilities assessed by the indifference between two sets of
gambles.

In an article (1962) titled "Bayesian Statistics", L. J. Savage
wrote, "Personal probability is a certain kind of numerical
measure of the opinions of somebody about something. It will be
the right approach for each of you to think that you are the
person under discussion; for this reason Good (*) introduced
"you" as the technical term for this person."

and proceeded to introduce the notion of how to assess personal
probabilities.

(*) Good, I. J., "Probability and the Weighing of Evidence".
New York: Hafner Publishing Company, 1950.

Savage's article cited 205 references, most of which are foreign
to mathematicians and even most non-Bayesian statisticians.


Another article in "The Writings of Leonard Jimmie Savage --
A Memorial Selection", American Statistical Association 1981,
was dated 1970, and was titled, "Reading Suggestions for the
Foundation of Statistics".

It had this preface: The following was recently handed out
in a graduate class on the foundations of statistics. It is
offered here mainly to teachers of statistics as an example
of pedagogic effort, but others too may find some interest in
its content. It contained 71 references, most of which are
well-worth reading, by anyone seriously interested in the
FOUNDATIONS of statistics (which of course includes as an
integral part the meaning of SUBJECTIVE and PERSONAL
probabilities).

I was in that graduate class, in 1967.


There are other "objective" meanings of probability that
can be defined merely as the basis for the mathematics of
probability, which may or may not have any relevance to reality.

The moral of the story is: Don't be so sure that YOUR view is
the only valid, and scientifically accepted view of "probability".

-- Bob.

msadk...@yahoo.com

Sep 12, 2005, 11:13:41 PM
Reef Fish wrote:

> There are other "objective" meanings of probability, that
> can be defined merely as the basis for the mathematics of
> probability, which may or may not have nay relevance to reality.
>
> The moral of the story is: Don't be so sure that YOUR view is
> the only valid, and scientifically accepted view of "probability".
>
> -- Bob.

ANY theory of probability must describe the distribution of outcomes
from a series of "random" events, because that is what probability is:
an attempt to describe the "likelihood" of particular outcomes on the
basis of this distribution. It must deal with random events because
determined events are, by definition, not probabilistic: there is no
point in talking about the "probability" of a determined event because
it is by definition a certainty. Yet, by definition each event in a
random series is independent of the others in the series, and each one
may result in any possible outcome (in my example, either of two
possible outcomes): therefore there can be no guarantee of any
particular distribution of outcomes no matter how long the series is.

Statistical probability theory is essentially, then, the juxtaposition
of two inconsistent premises: (i) That a series of random events need
not conform to any distribution of outcomes; (ii) That a series of
random events must conform to a distribution of outcomes, and this
distribution of outcomes is a numerical ratio called the probability
(in the case of my example, 1/2).

It attempts to reconcile these irreconcilable premises by asserting
that the "laws of probability" (requiring conformity to the expected
distribution) apply only to "sufficiently large" series, within a
margin of variance; the idea is that, as series grow larger, the amount
(as a percentage of series length) by which the distribution of their
outcomes diverges from the probability ratio "tends to decrease"
according to certain mathematical formulae. The larger the divergence
is, the more "improbable" it is said to be, and this is quantified
statistically using probability ratios. Thus, for 1,000 events where
the probability of an event is 1/2, the series is expected to have a
distribution of 0.5 plus or minus x, where x represents the margin of
variance; that is, half of the outcomes, plus or minus x, are expected
to be of one of the two types. Divergences greater than x are possible
-- as mentioned, in a random series ANY distribution of outcomes is by
definition possible -- but are quantified as less and less probable,
the larger the divergence.
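The behavior described in this paragraph can be simulated directly. Below is a minimal sketch, assuming Python; the margin of variance x = 0.03, the trial counts, and the seed are illustrative choices of mine, not values from the post:

```python
# Simulate many series of 1,000 fair-coin events and count how often
# the observed ratio of heads falls within 0.5 plus or minus a chosen
# margin of variance x. Larger divergences occur, but rarely.
import random

random.seed(0)  # fixed seed for reproducibility

n_events = 1000   # events per series
n_trials = 1000   # number of series simulated
margin = 0.03     # illustrative margin of variance x

within = 0
for _ in range(n_trials):
    heads = sum(random.random() < 0.5 for _ in range(n_events))
    if abs(heads / n_events - 0.5) <= margin:
        within += 1

print(f"fraction of series within 0.5 +/- {margin}: {within / n_trials:.3f}")
```

Running it shows most, but not all, series landing inside the margin, matching the text's claim that any distribution is possible while large divergences are rare.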

The problem is that each of these lesser probability ratios is itself
dependent upon still larger series of events in order to justify
itself; and those larger series generate their own possibilities of
exception. For example, while admitting the possibility that, say, 90
percent of the 1,000 event series may be of a single outcome type,
statistical probability theory attempts to circumvent the problem by
removing it one step, defining this "improbability" in terms of a still
larger series of events in which the 90 percent run is a mere blip
which does not affect the distribution of the larger series. Of
course, that larger series faces the same problem, so that in order to
define one probability, statistical probability theory must use an
infinite regress of probabilities, each involving a larger group of
events, and each dependent upon the next. The process is never ending
because the same problem exists at each step, and so the initial
probability of 1/2 remains undefined. Note that this is a
*theoretical* rather than an empirical problem! The problem is
*defining* the very concept of "probability". Note also that the
expression "1/2 of infinity" has no numerical meaning, and
probabilities are numerical ratios; note also that different "fractions
of infinity" (e.g., 1/2 of infinity, 1/3 of it, etc.) are numerically
indistinguishable.
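The regress the paragraph describes can be made concrete: treat the level-one distribution claim as a new random outcome and measure it across many sets. A sketch, assuming Python, with set sizes scaled down from the post's 1,000 x 1,000 for speed; all numbers are illustrative:

```python
# Level 1: a set of fair-coin events either does or does not have a
# heads ratio within 0.5 +/- margin. Level 2: the fraction of sets
# that do. Repeating the level-2 experiment shows that this fraction
# is itself a variable outcome, which is what the regress argument
# rests on.
import random

random.seed(1)

def fraction_within(n_sets, n_events, margin):
    """Fraction of sets whose heads ratio lies within 0.5 +/- margin."""
    ok = 0
    for _ in range(n_sets):
        heads = sum(random.random() < 0.5 for _ in range(n_events))
        if abs(heads / n_events - 0.5) <= margin:
            ok += 1
    return ok / n_sets

fractions = [fraction_within(200, 1000, 0.03) for _ in range(3)]
for run, f in enumerate(fractions):
    print(f"run {run}: fraction of sets within margin = {f:.3f}")
```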

All of this involves quite a standard description of the fundamentals
of statistical probability theory: attempts in this newsgroup by "other
users" to represent it as outmoded and inaccurate are deceitful
attempts to obscure the truth and to avoid admitting that their silly
system is built on quicksand.

Mark Adkins
msadk...@yahoo.com

Richard Ulrich

Sep 13, 2005, 12:17:13 AM
On 12 Sep 2005 20:13:41 -0700, msadk...@yahoo.com wrote:
[snip, a bunch]

>
> All of this involves quite a standard description of the fundamentals
> of statistical probability theory: attempts in this newsgroup by "other
> users" to represent it as outmoded and inaccurate are deceitful
> attempts to obscure the truth and to avoid admitting that their silly
> system is built on quicksand.

That quicksand metaphor....
- If it is quicksand, who is sinking?
I still do my computations of gravity using Newton's
approximation instead of the correct, Einsteinian equations.

- The system seems to stand as useful, which is all that
a research program needs. The Bayesians have not ousted
the Frequentists -- Bayesians claim better logic, but they have
to admit that the process can be tougher, and the answers
are often the same.

Do you have a more useful system? or are you just
giving advice that we should feel some existential despair?
I don't get frantic at that Bayesian critique; I do try to pay
attention, that they might (for instance) have better ideas
for analyzing micro-array data, which raise a big problem
of multiplicity.


I hope I did not miss something earlier, but I admit that
I have not been paying close attention.

--
Rich Ulrich, wpi...@pitt.edu
http://www.pitt.edu/~wpilib/index.html

David C. Ullrich

Sep 13, 2005, 8:50:36 AM
On 12 Sep 2005 12:48:20 -0700, msadk...@yahoo.com wrote:

>David C. Ullrich wrote:
>> On 11 Sep 2005 17:12:18 -0700, msadk...@yahoo.com wrote:
>>
>> >What does it mean to say that the probability of an event having some
>> >outcome is 1/2?
>>
>> In actual modern mathematical probability theory it doesn't mean
>> a thing, in a sense. Probability theory starts by _assuming_ a
>> sample space, with events that have _given_ probabilities.
>> The mathematicians gave up on defining what P(E) = 1/2 really
>> means long ago, and just axiomatized everything, taking
>> P(E) = 1/2 as undefined.
>
>Heh-heh. Probability theory: A theory which says nothing because its
>very subject is undefined.

Exactly the same as any other branch of mathematics: It's obviously
impossible to define _everything_ in terms of previously defined
things - any branch of math has undefined somethings at the bottom.

>Heh-heh. Marvelous. Nebulous, but
>marvelous. "We don't know what it is, but it fills textbooks. Just a
>lot of symbol pushing, I expect." Heh-heh. Most amusing.
>

>[...]


>
>Incidentally, this analysis of mine HASN'T been around for a "long"
>time, because it was NEVER RAISED OR CONSIDERED in treatments of the
>subject during the time I went to school.

Fascinating. You took a class, and the fact that something was
not mentioned in that class proves that it was not known at
that time?

We can check whether the treatment you took in school was
actual mathematical probability theory or just an elementary
treatment (of the sort that does indeed often suffer from
the sort of flaws you point out). Did the textbook mention
a thing called "measure theory"?

In particular, does the textbook give a definition something
like the following? "A probability space is a measure space
(X, P), where P is a measure on X and P(X) = 1." If not then
the course you took did not include anything about actual
mathematical probability. And unless you took the course
more than maybe 60 or 70 years ago, if there was no such
definition then the course did not include everything
that was known about probability at the time.
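The axiomatic definition quoted here can be illustrated with a toy finite space; a sketch assuming Python, where the two-outcome space and all names are mine:

```python
# A finite probability space: a set of outcomes X and a measure P with
# P(X) = 1. Note that P(E) = 1/2 is simply *given*, not derived from
# anything, which is exactly the point being made about the axioms.
from itertools import chain, combinations

X = {"H", "T"}
P_point = {"H": 0.5, "T": 0.5}  # the measure on points: given, not defined

def P(event):
    """Measure of an event (a subset of X), by additivity over outcomes."""
    return sum(P_point[x] for x in event)

# Enumerate every event (every subset of X) and check the axioms.
events = [set(s) for s in chain.from_iterable(combinations(X, r)
                                              for r in range(len(X) + 1))]
assert abs(P(X) - 1.0) < 1e-12                  # normalization: P(X) = 1
assert all(P(E) >= 0 for E in events)           # non-negativity
assert all(abs(P(E) + P(X - E) - 1.0) < 1e-12   # additivity over each
           for E in events)                     # event and its complement

print("Kolmogorov axioms hold on this toy space")
```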

>This is just another
>ham-handed attempt by the contrarian pseudo-sentients to steal my
>thunder. I developed it entirely independently, on the basis of my own
>original considerations of the subject, and no revision of "reality"
>will convince me otherwise. I've noticed, when I point out the silly,
>obvious flaws in the pseudo-scientific theories you base your fraud of
>a culture upon, that you can't stand to admit truth, that you always
>try to deny it, by obfuscation and obscurantism, or else you ignore it
>and go on as usual; and where you can't deny it, you usually attempt to
>*falsely* claim previous authorship.

Yeah, that's probably it. Guffaw.

>Mark Adkins
>msadk...@yahoo.com
>
>
>Mark Adkins
>msadk...@yahoo.com


************************

David C. Ullrich

David C. Ullrich

Sep 13, 2005, 9:11:20 AM
On 12 Sep 2005 12:48:20 -0700, msadk...@yahoo.com wrote:

>[...]


>
>Incidentally, this analysis of mine HASN'T been around for a "long"
>time, because it was NEVER RAISED OR CONSIDERED in treatments of the
>subject during the time I went to school. This is just another
>ham-handed attempt by the contrarian pseudo-sentients to steal my
>thunder. I developed it entirely independently, on the basis of my own
>original considerations of the subject, and no revision of "reality"
>will convince me otherwise. I've noticed, when I point out the silly,
>obvious flaws in the pseudo-scientific theories you base your fraud of
>a culture upon, that you can't stand to admit truth, that you always
>try to deny it, by obfuscation and obscurantism, or else you ignore it
>and go on as usual; and where you can't deny it, you usually attempt to
>*falsely* claim previous authorship.

I just noticed something interesting in this regard.

I see six replies to your original post. You've replied to
five of them. The one that you seem to have ignored is
exactly the one that gives a _reference_ for the fact
that people have been aware of the sort of issues you
raise for a long time.

Curious.

>Mark Adkins
>msadk...@yahoo.com
>
>
>Mark Adkins
>msadk...@yahoo.com


************************

David C. Ullrich

Russell...@wdn.com

Sep 13, 2005, 9:20:54 AM

Anon. wrote:
> Jonathan Hoyle wrote:
> >>>Stick to my example, please. I've shown that no convergence
> >>>need occur, theoretically, since the events are random.
> >
> <snip>
> >>>In any case, my comments remain, in full, true.
> >
> >
> > Your comments remain, in full, a testament to ignorance. If you were a
> > sincere person, I'd recommend an introductory course on Probability.
> > That being unlikely, as you do not appear to want to learn, I'm sure you
> > will simply continue to post your uneducated remarks.
> >
> > I'm always amazed by people who are proud of their ignorance.
> >
> Don't be:
> <http://www.apa.org/journals/features/psp7761121.pdf>
>
> Bob
>

Along those lines there is also the book _The Logic of Failure_
by Dietrich Dorner.

Cheers,
Russell

Ronald Bruck

Sep 13, 2005, 4:18:52 PM
In article <dg4lae$4r...@odds.stat.purdue.edu>, Herman Rubin
<hru...@odds.stat.purdue.edu> wrote:
...

>
> It is a theorem of probability that IF certain assumptions
> are made, and physical events are subject to the laws of
> probability, that any particular relative frequency will
> approach probability with probability one. There are even
> linguistic models where this can be strengthened.
>
> This cannot be used as a definition of probability except
> in those linguistic models. Probability theory is consistent
> if mathematics is consistent; your problem is that you are
> assuming that certain things are equally likely which are not.

This is fascinating. To what "linguistic models" are you referring?
Could you elaborate on this?

Of course, ALL applications of deductive mathematics are subject to the
little proviso that "certain assumptions are made" and that "physical
events are subject to the laws of ..." Certainly people didn't used to
think like this; magic has been prevalent as explanation for far more
thousands of years than logic. When I roll the dice, what's to prevent
Zeus from thinking I've won enough already, and cause me to crap out?
I suspect the belief in such "magic" is still widespread (else WHY
would ANYONE go to Las Vegas?).

I don't know whether the Iliad truly represents the way the Greeks once
thought. Did they REALLY believe in all those gods and goddesses, and
their intervention in human affairs? I doubt Homer did; he apparently
INVENTED some of the legends about the gods, which would be a dangerous
activity if you believed in them. But probably the "ordinary people"
did, and whatever the beliefs were in prehistory, they were almost
certainly magical.

And if your belief system is right, its foundations are unfalsifiable
by observation, because you don't allow observation. There are many
such people in the world today. I guess it's a miracle we HAVE
mathematics.

--Ron Bruck

Gerry Myerson

Sep 13, 2005, 10:21:12 PM
In article <130920051318527521%br...@math.usc.edu>,
Ronald Bruck <br...@math.usc.edu> wrote:

> When I roll the dice, what's to prevent
> Zeus from thinking I've won enough already, and cause me to crap out?
> I suspect the belief in such "magic" is still widespread (else WHY
> would ANYONE go to Las Vegas?).

I went to Las Vegas because the 2004 Western Number Theory meeting
was there last December.

The place also has some nice art galleries.

--
Gerry Myerson (ge...@maths.mq.edi.ai) (i -> u for email)

Herman Rubin

Sep 14, 2005, 12:14:38 PM
>In article <dg4lae$4r...@odds.stat.purdue.edu>, Herman Rubin
><hru...@odds.stat.purdue.edu> wrote:
>...

>> It is a theorem of probability that IF certain assumptions
>> are made, and physical events are subject to the laws of
>> probability, that any particular relative frequency will
>> approach probability with probability one. There are even
>> linguistic models where this can be strengthened.

>> This cannot be used as a definition of probability except
>> in those linguistic models. Probability theory is consistent
>> if mathematics is consistent; your problem is that you are
>> assuming that certain things are equally likely which are not.

>This is fascinating. To what "linguistic models" are you referring?
>Could you elaborate on this?

These are the ones which state that something is meaningful
essentially only if it is constructive. This avoids some
paradoxes; in probability theory, given an arbitrary stopping
rule (one can use any rule which only involves all digits
before the n-th to decide when one stops at the n-th), the
resulting number is "random". If one allows only rules
which are well-defined within the countable language, such
numbers exist, but are obviously not constructible. However,
as the digits of pi are computable by a rule in the minimal
language of logic, pi is obviously not "random". That there
are only a countable number of allowable rules is important.

>Of course, ALL applications of deductive mathematics are subject to the
>little proviso that "certain assumptions are made" and that "physical
>events are subject to the laws of ..." Certainly people didn't used to
>think like this; magic has been prevalent as explanation for far more
>thousands of years than logic. When I roll the dice, what's to prevent
>Zeus from thinking I've won enough already, and cause me to crap out?
>I suspect the belief in such "magic" is still widespread (else WHY
>would ANYONE go to Las Vegas?).

There are legitimate reasons, but I will not go into them.

>I don't know whether the Iliad truly represents the way the Greeks once
>thought. Did they REALLY believe in all those gods and goddesses, and
>their intervention in human affairs?

VERY definitely.

>I doubt Homer did; he apparently
>INVENTED some of the legends about the gods, which would be a dangerous
>activity if you believed in them. But probably the "ordinary people"
>did, and whatever the beliefs were in prehistory, they were almost
>certainly magical.

If any Greeks did NOT believe in them, they were few before
the INTRODUCTION of the idea that the universe was lawful,
that man could understand the laws, and that supernatural
forces were at least highly limited in the extent they
could intervene.

I do not know of any ancient religions which did not have
the belief that their divinities could so intervene, if
they had divinities at all. I am using "divinities" in
the sense of any supernatural beings.

There are quite a few people, quite possibly a majority,
who believe Hurricane Katrina was a Divine punishment;
they have many causes for this punishment, some opposing
others.

>And if your belief system is right, its foundations are unfalsifiable
>by observation, because you don't allow observation. There are many
>such people in the world today. I guess it's a miracle we HAVE
>mathematics.

Mathematics is falsifiable only by leading to contradictions.

Acme Diagnostics

Sep 14, 2005, 5:36:02 PM

"Anon." <bob....@NOSPAMhelsinki.fi> wrote:

>Jonathan Hoyle wrote:
>>
>> I'm always amazed by people who are proud of their ignorance.
>>
>Don't be:
><http://www.apa.org/journals/features/psp7761121.pdf>

Thanks for a useful article on a crucial aspect of reasoning. I
have a few comments for discussion, but which are off-topic in
these groups. Instead I've posted them in alt.philosophy (etc.)
under the title "Study on self-judgment of competence" in case
anyone is interested.

Larry

Kees

Sep 14, 2005, 6:44:54 PM

Maybe in the US there is an actual majority that thinks Katrina was a divine punishment. Luckily, people in Europe have more brains, and there is a big majority in Europe who know better.

Ben Rudiak-Gould

Sep 15, 2005, 10:00:53 AM
G. A. Edgar wrote:
> Littlewood has an essay something like this.
> [Find it in his Miscellany. It is worth reading.]

Leonard Susskind also wrote a short piece on this subject that I quite like:

http://www.edge.org/q2005/q05_8.html#susskind

-- Ben

monia9PL

Sep 15, 2005, 11:24:37 AM
As I understand it, you say that probability theory is bad because it
includes concepts that are abstract. OK. But have you ever seen a "1"?
What makes you believe that 1+1 is 2? Also some assumptions. 1, 2 and
other natural numbers are also "arbitrarily" defined by mathematicians.
So why should we believe in them?

Reef Fish

Sep 15, 2005, 9:54:26 PM

This thread proved only ONE thing -- that a bunch of supposedly
educated grown-ups in mathematics, statistics, and probability
get all tangled up on two ENTIRELY DIFFERENT questions:

1. The ASSESSMENT of an uncertain number p, called probability.

2. The consistency of probability LAWS governing assessed p's.

Take a timely question in which it is meaningful to ask a
probability question:

Q. What's the probability that Delta airlines will be able
to avoid liquidation under the Chapter 11 filing, to
emerge as a non-bankrupt operating airline out of
Chapter 11, say, within two years?

Obviously there is no FREQUENTISTIC interpretation of this p,
because nothing is repeatable, for one thing. It is a one-
time affair that has a measure of UNCERTAINTY, and everyone
has his/her OPINION (personal assessment) of that probability,
and there is no "right" or "wrong" answer until after the
event is finished, checking only with hindsight after the one-time trial.


What about probability THEORY and probability LAWS? That's
an entirely separate and different question altogether.

For example, if the probability of "success" is assessed to be p,
then the probability of "failure or no success" is (1 - p), no
matter WHAT the assessed value of p is. It's inconsistent only
if Joe BLow says his p is 0.25 and his assessment of the
complement of that event is 0.65, then Joe Blow has a "defective"
probability space, which is not consistent in the sense of the
word being tossed around in the "subject" and the discussion
which is off the mark. There are numerous other ways for the
consistent set of probability laws to be violated, but NONE of
it is because someone gave an "incorrect" assessment of a
single probability, in and by itself, such as the probability
of flipping coin and getting a HEAD.
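The "defective probability space" example here is easy to mechanize; a sketch assuming Python, with the function name and tolerance my own:

```python
# Coherence check for a pair of assessments: an assessed p for an event
# and an assessed q for its complement are consistent only if p + q = 1.
# A single p by itself can never fail this; only combinations can.
def coherent(p_event, p_complement, tol=1e-9):
    """True if the assessments satisfy P(A) + P(not A) = 1."""
    return abs(p_event + p_complement - 1.0) <= tol

print(coherent(0.25, 0.75))  # a consistent pair of assessments -> True
print(coherent(0.25, 0.65))  # Joe Blow's "defective" space -> False
```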

You know the other probability LAWS. They are consistent,
coherent, and have perfectly good OPERATIONAL meanings even though
some events have very different assessments of p by various experts.
That's what makes one investor better than another, in all walks
of BUSINESS.

So, forget about the meaning of probability from a frequentist
(and repeatable event) point of view and forget about all that
convergence and other technical irrelevance on probability theory
itself, and concentrate on the subjective meaning of probability,
such as "What's the PROBABILITY that a share of DAL (Delta stock)
will rise above $1 by the end of this month?"

To understand the process of probability elicitation and assessment,
let's consider the following game (an obvious adaptation of
serious probability assessment ideas, such as those found in the
many references contained in my post in this thread, to which
no one seemed to have paid due attention because of their own
confusion about items (1) and (2) at the beginning of this post).

Read the meaning of "subjective probability" and the hundreds of
references embedded in the two references in the post from a few days ago:

http://tinyurl.com/8mdsv


Below is one of the many different ways of assessing one's
PERSONAL probability about any event. For the sake of simplicity,
I'll use that as an illustration of how anyone can assess his
own probability of the probability of a HEAD on one toss of a
fair (or unfair, for that matter) coin.

You can assess YOUR probability of ANY future event
using the same (conceptual, if not game-show-like) method.

Have you heard of the TV show "Who Wants to be a Millionaire?"

Here's a version of the game that can be understood by anyone with
some "sense" rather than the physicists and mathematicians who make
imaginary booby traps to trip themselves all over the place.

Here's how the game is played. Instead of a series of questions
you have to give the correct answers to advance toward the $M, you
have a series of CHOICES between two alternatives; you pick the one
you think has the better "chance" or "probability" of winning the $M, for YOU.

Instead of the three "life lines" when the contestant is not sure
of the answer to the questions, to get help from others, you have
three opportunities to say "I am not sure which I prefer; or I
am indifferent between the two choices". The game ends when you
have used up your three life lines.

Here are the two choices in each question:

A. You win $1,000,000 if you flip a coin and it comes up HEADS.

B. You win $1,000,000 if you draw a bead from a bag of 1,000,000
beads of X red ones and (1,000,000 - X) black ones, thoroughly
mixed at random, and it is a red one.

The game show host, Monte Philbin, tells you what X is each time,
and you decide whether you prefer A to B, or vice versa.

1st question. X = 1000.

Contestant has no problem choosing "A" as the "final answer".
The audience cheers on his wise choice.


The questions get progressively harder -- in this case, to make
the choice between "A" and "B".

17th question: X = 495,000.

The Contestant had to think harder ... looked up at the ceiling for
inspiration, scratched his head as if it would help, and then
chose "A".

On the 18th, 19th, and 20th questions, the X's were 499,900, 499,850,
and 499,950, and the Contestant could not make a clear choice
between "A" and "B", and the game was OVER.

That means the Contestant has assessed his SUBJECTIVE
probability of the probability of Heads of the coin to be
between .499850 and .499950.

The statistically untrained audience groans! They thought he
had come so close to the "right" answer and lost. But THEY were
wrong.

The Contestant CANNOT be wrong about his PERSONAL
If he has to pick a single number as his probability of HEADS
in the toss of a single coin, that probability in the chosen
interval of "indifference" would be CORRECT.

That's the ASSESSMENT of the probability of an event that has
a frequentistic "right" answer, by reason of symmetry or other
necessarian views.

The Contestant WON a "door prize" from Monte Philbin for playing
the game and got the "right" answer. Every Contestant wins in
this show. Just because the show is named "Who Wants to Be a
Millionaire" doesn't mean any contestant ever wins more than
$10,000.

In short, you can assess YOUR probability of any event to as
many decimal places of accuracy as you are able to distinguish
between two close choices.

THEN, let the probability laws take over the operation on
those assessed probabilities. They will be consistent.
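The game described above is, in effect, a bisection search over the bead ratio X/1,000,000. A sketch assuming Python, with a simulated contestant whose hidden personal probability drives his choices; all names and the indifference tolerance are mine:

```python
# Each question halves the interval that can contain the contestant's
# personal probability of HEADS; "I am indifferent" ends the game, and
# the remaining interval brackets that personal probability.
def elicit(prefers_coin, lo=0.0, hi=1.0, indifference=1e-4,
           max_questions=30):
    """Bracket a personal probability by bisection on the bead ratio."""
    for _ in range(max_questions):
        if hi - lo <= indifference:
            break  # contestant can no longer distinguish the choices
        x = (lo + hi) / 2  # the offered bead ratio X / 1,000,000
        if prefers_coin(x):
            lo = x  # coin preferred: personal probability lies above x
        else:
            hi = x  # beads preferred: personal probability lies at or below x
    return lo, hi

# Simulate a contestant whose hidden personal probability of HEADS is 0.4999.
hidden_p = 0.4999
lo, hi = elicit(lambda x: hidden_p > x)
print(f"assessed interval: [{lo:.6f}, {hi:.6f}]")
```

The returned interval plays the role of the "interval of indifference" in the post: any single number inside it is a correct statement of the contestant's personal probability.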

-- Bob.

Ben Rudiak-Gould

Sep 16, 2005, 9:55:56 AM
Ronald Bruck wrote:
> Of course, ALL applications of deductive mathematics are subject to the
> little proviso that "certain assumptions are made" and that "physical
> events are subject to the laws of ..." Certainly people didn't used to
> think like this; magic has been prevalent as explanation for far more
> thousands of years than logic.

But "physical events are subject to the laws of probability" is not logic.
It's only an empirical law which is subject to modification in the face of
new evidence. An apparently purposeful world is just as reasonable a priori
as the apparently purposeless (random) one we find empirically. This needn't
even involve anything resembling human intelligence: for example, there are
models with closed causal loops in which events conspire in a seemingly
purposeful way to prevent inconsistency. This happens for much the same
reason that Deep Blue seems to understand chess. A magical world might still
be amenable to logical analysis.

> I suspect the belief in such "magic" is still widespread (else WHY
> would ANYONE go to Las Vegas?).

Because I have relatives there. But seriously, why does anyone go to movies?
You pay $10 to get in the building, and a few hours later you walk out with
nothing but memories. People who gamble pay money in exchange for a fantasy
of striking it rich. It's not fundamentally different from any other form of
entertainment. If you think that gambling is irrational, you're using the
wrong utility function. (Of course, there are many gamblers who exhibit
addictive behavior, but that's a separate issue.)

-- Ben

Jonathan Hoyle

Sep 16, 2005, 11:56:17 AM
>> Here are the two choices in each question:
>>
>> A. You win $1,000,000 if you flip a coin and it comes up
>> HEADS.
>>
>> B. You win $1,000,000 if you draw a bead from a bag of
>> 1,000,000 beads of X red ones and (1,000,000 - X) black
>> ones, thoroughly mixed at random, and it is a red one.
>>
>> The game show host, Monte Philbin, tells you what X is
>> each time, and you decide whether you prefer A to B,
>> or vice versa.

I guess I am having trouble following your point here. It seems to me
that the best chance of success is choosing A when X < 500,000 and
choosing B when X > 500,000 (assuming a fair coin and a bead chosen
with replacement).

<snip>

>> 18th, 19th, and 20th questions, the X's were 499,900,
>> 499,850, and 499,950, and the Contestant could not
>> make a clear choice between "A" and "B", and the game
>> was OVER.

Why again does the contestant not have a clear choice here?

>> That means the Contestant has assessed his
>> SUBJECTIVE probability of the probability of Heads
>> of the coin to be between .499850 and .499950.

Huh? I am definitely not following this. The phrase "the probability
of the probability" is confusing to me. In a fair coin (barring that
it lands on its side), the probability of it landing heads is 0.5. And
I don't know what you mean by the "subjective probability".

>> The statistically untrained audience groans! They
>> thought he had come so close to the "right" answer
>> and lost. But THEY were wrong.

Wrong about which? What are you assuming they are groaning about?

>> The Contestant CANNOT be wrong about his PERSONAL
>> probability. If he has to pick a single number as his
>> probability of HEADS in the toss of a single coin, that
>> probability in the chosen interval of "indifference" would be
>> CORRECT.

I am not following your concept of "personal probability" either. A
fair coin is going to come up heads with probability 0.5, irrespective
of what his personal thoughts are (again, assuming a fair coin that
doesn't land on edge).

Perhaps there are other rules to this game that I did not notice?

Jonathan Hoyle

Ben Rudiak-Gould

Sep 16, 2005, 12:25:46 PM
Reef Fish wrote:
>Ben Rudiak-Gould wrote:
>>G. A. Edgar wrote:
>>>Littlewood has an essay something like this.
>>Leonard Susskind also wrote a short piece on this subject [...]

>
>This thread proved only ONE thing -- that a bunch of supposedly
>educated grown-ups in mathematics, statistics, and probability
>get all tangled up on two ENTIRELY DIFFERENT questions:
>
>1. The ASSESSMENT of an uncertain number p, called probability.
>
>2. The consistency of probability LAWS governing assessed p's.

I don't think I'm confusing these two questions, and I don't think Susskind
or Littlewood were either. I think you're making the same mistake as the OP,
which is assuming that people are unaware of something just because they
happen not to mention it. I'm also not convinced that you understand the
particular metaphysical difficulty that Susskind is writing about (and also
Littlewood I assume, though I haven't read his essay yet).

> You know the other probability LAWS. They are consistent,
> coherent, and have perfectly good OPERATIONAL meanings even though
> some events have very different assessments of p by various experts.
> That's what makes one investor a better one another, in all walks
> of BUSINESS.

Careful: it sounds like you're claiming here that investors who use a good
probability model will outperform investors who make investment decisions by
consulting an astrologer. If so, the question is, will they outperform them
in every investment, or just most of the time? Hint: don't say "most of the
time".

Let me try to explain the problem in detail. As a Bayesian, you follow a
certain procedure and obtain a number representing the subjective
probability you assign to some event. Now, what do you actually do with
these numbers? Presumably you act on them in some way -- for example, you
work out a subjective projected profit from investing in stock A, and a
subjective projected profit from investing in stock B, and invest in A or B
according to which number is higher.
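The decision procedure sketched in this paragraph is just a comparison of subjectively expected profits. A minimal illustration, assuming Python; the probabilities and payoffs are invented numbers, and nothing here settles the justification question being raised:

```python
# Combine subjective probabilities with payoffs into expected profits
# and invest in whichever alternative scores higher.
def expected_profit(outcomes):
    """outcomes: list of (subjective_probability, profit) pairs."""
    return sum(p * profit for p, profit in outcomes)

stock_a = [(0.6, 100.0), (0.4, -50.0)]   # subjective view of stock A
stock_b = [(0.3, 300.0), (0.7, -40.0)]   # subjective view of stock B

ev_a = expected_profit(stock_a)  # 0.6*100 + 0.4*(-50) = 40
ev_b = expected_profit(stock_b)  # 0.3*300 + 0.7*(-40) = 62
choice = "A" if ev_a > ev_b else "B"
print(f"EV(A) = {ev_a:.1f}, EV(B) = {ev_b:.1f}; invest in {choice}")
```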

Thus far I (playing the skeptic) have no problem with this. You're free to
live your life in any way you choose, as long as it doesn't impinge on my
freedoms, yada yada yada. The problem shows up if you try to convince *me*
to become a Bayesian. I don't mean convince me to use your subjective
probabilities, just to believe in the whole notion of subjective probability
and Bayesian inference as a good way to make decisions. There are basically
two ways to do this:

1. Appeal to prejudice: "It obviously makes sense."
2. Appeal to evidence: "It works."

The problem is that argument 2 is frequentist (or circular), and argument 1
is not scientific.

Possibly you believe that the internal self-consistency of Bayesian
reasoning is enough to justify its application in practice. That is easily
disposed of. Since any outcome of a probabilistic event is possible (i.e.
obtains in some possible world), the actual world could be one in which
probabilistic methods work badly. It could even be one in which
probabilistic methods work well for people named Bob, and badly for everyone
else. But it isn't; why not? Not only is the answer to this question
unclear, it isn't even clear how to formulate the question in a rigorous
way. That's the difficulty that Susskind is talking about.

-- Ben

Reef Fish

Sep 16, 2005, 2:26:00 PM

Ben Rudiak-Gould wrote:
> Reef Fish wrote:
> >Ben Rudiak-Gould wrote:
> >>G. A. Edgar wrote:
> >>>Littlewood has an essay something like this.
> >>Leonard Susskind also wrote a short piece on this subject [...]
> >
> >This thread proved only ONE thing -- that a bunch of supposedly
> >educated grown-ups in mathematics, statistics, and probability
> >get all tangled up on two ENTIRELY DIFFERENT questions:
> >
> >1. The ASSESSMENT of an uncertain number p, called probability.
> >
> >2. The consistency of probability LAWS governing assessed p's.
>
> I don't think I'm confusing these two questions, and I don't think Susskind
> or Littlewood were either.

If not, then they wouldn't be arguing about the question of
INCONSISTENCY in Probability Theory on only one of the many meanings
of Probability. None of the convergence, correctness of p, etc.
apply to (1).

> I think you're making the same mistake as the OP,
> which is assuming that people are unaware of something just because they
> happen not to mention it.

Read my statement again, more carefully this time. I didn't assume
anything about whether people are "aware" or "unaware" of subjective
probabilities; only that they were CONFUSED about (1) and (2), which
are different issues no matter how (1) is defined or treated:
classical, frequentist, Bayesian, or neo-Bayesian.

(1) and (2) are SEPARATE issues, period!


> I'm also not convinced that you understand the
> particular metaphysical difficulty that Susskind is writing about (and also
> Littlewood I assume, though I haven't read his essay yet).

I got the full subtlety of Susskind's humor; call it metaphysical if
you will. That is a horse of a THIRD different color -- humor,
sarcasm, satire, parody, and other forms of prose about technical
subjects.


> > You know the other probability LAWS. They are consistent,
> > coherent, and have perfectly good OPERATIONAL meanings even though
> > some events have very different assessments of p by various experts.
> > That's what makes one investor a better one than another, in all walks
> > of BUSINESS.
>
> Careful: it sounds like you're claiming here that investors who use a good
> probability model will outperform investors who make investment decisions by
> consulting an astrologer. If so, the question is, will they outperform them
> in every investment, or just most of the time? Hint: don't say "most of the
> time".

Your bad. :-) It has nothing to do with any probability MODEL. It
has to do with how good the ASSESSMENT of p is, as in (1).

If you think an investment decision based on the assessment of the
probability of success is like consulting an astrologer, you have
much more to learn than just the difference between (1) and (2).

When Yahoo stock was first issued, I knew enough about it to think
the probability of its success (say, doubling within a year) was
better than 0.5, i.e., better than flipping a coin and getting
HEADS. That turned out to be a good "assessment" of p and a good
investment when I bought a bunch on the 2nd day of issue -- the price
doubled several times within the first year. :-)

There was no MODEL of any kind.

All business investments are GAMBLES. It was a good assessment of
p and a good gamble that paid off.
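The arithmetic behind that kind of gamble can be sketched in a few lines. The payoff assumptions below (double your money on success, lose half on failure) are purely illustrative -- they are not in the original post -- but they show why an assessed p above 0.5 can turn a coin-flip proposition into a favorable bet:

```python
# Hypothetical sketch: why an assessed p > 0.5 of "doubling within a year"
# can justify a gamble. Payoff assumptions (gain 100% on success, lose 50%
# on failure) are illustrative only.

def expected_return(p_double, gain=1.0, loss=0.5):
    """Expected fractional return given a subjective probability of doubling."""
    return p_double * gain - (1 - p_double) * loss

coin_flip = expected_return(0.5)   # 0.5*1.0 - 0.5*0.5 = 0.25
optimist  = expected_return(0.7)   # 0.7*1.0 - 0.3*0.5 = 0.55

print(coin_flip, optimist)
```

Under these (assumed) payoffs even p = 0.5 is favorable; the point is that a better assessment of p changes how much of a gamble it is.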


> Let me try to explain the problem in detail. As a Bayesian, you follow a
> certain procedure and obtain a number representing the subjective
> probability you assign to some event. Now, what do you actually do with
> these numbers? Presumably you act on them in some way -- for example, you
> work out a subjective projected profit from investing in stock A, and a
> subjective projected profit from investing in stock B, and invest in A or B
> according to which number is higher.

That's acceptable.
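
The decision rule Ben describes can be made concrete with a small sketch. All probabilities and profit figures below are hypothetical placeholders, not anything from the thread; the point is only the mechanics of comparing subjective expected profits:

```python
# Minimal sketch of the Bayesian decision procedure described above:
# assign subjective probabilities to each stock's outcomes, compute the
# subjective expected profit, and invest in whichever is higher.
# All numbers are hypothetical.

def expected_profit(outcomes):
    """outcomes: list of (subjective_probability, profit) pairs."""
    total_p = sum(p for p, _ in outcomes)
    assert abs(total_p - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * profit for p, profit in outcomes)

stock_a = [(0.6, 100.0), (0.4, -50.0)]   # subjective assessment for A
stock_b = [(0.3, 300.0), (0.7, -40.0)]   # subjective assessment for B

choice = "A" if expected_profit(stock_a) > expected_profit(stock_b) else "B"
print(choice)
```

Nothing in the sketch says where the probabilities come from -- which is exactly the point of contention in the thread.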


>
> Thus far I (playing the skeptic) have no problem with this. You're free to
> live your life in any way you choose, as long as it doesn't impinge on my
> freedoms, yada yada yada. The problem shows up if you try to convince *me*
> to become a Bayesian.

I did no such thing! You can bring a horse to water, but you can't
make it do a backstroke ... or something like that.

You are free to do whatever you do, including consulting your
astrologer. You can also choose to believe all the frequentist and
convergence stuff is NECESSARY before you can think coherently about
probability and its applications.

That ITSELF is your SUBJECTIVE probability assigned to the correctness
of one over the other.

You are a Bayesian whether you like it or not!

> I don't mean convince me to use your subjective
> probabilities, just to believe in the whole notion of subjective probability
> and Bayesian inference as a good way to make decisions. There are basically
> two ways to do this:
>
> 1. Appeal to prejudice: "It obviously makes sense."
> 2. Appeal to evidence: "It works."
>
> The problem is that argument 2 is frequentist (or circular), and argument 1
> is not scientific.

That's your own misguided notion about what's scientific and what's
not, and the rest of your own muddled thinking about the subject of
UNCERTAINTY, how to assess it, and how to apply what you assessed,
coherently and consistently.

>
> Possibly you believe that the internal self-consistency of Bayesian
> reasoning is enough to justify its application in practice.

Not at all! Read some of those references I gave about Bayesian
probability and statistics.

Being able to elicit your OWN probability assessment of whether
Tennessee will beat Florida in the game tomorrow has nothing to do
with whether Bayesian statistics is internally consistent or not.


> That is easily
> disposed of. Since any outcome of a probabilistic event is possible (i.e.
> obtains in some possible world), the actual world could be one in which
> probabilistic methods work badly. It could even be one in which
> probabilistic methods work well for people named Bob, and badly for everyone
> else. But it isn't; why not? Not only is the answer to this question
> unclear, it isn't even clear how to formulate the question in a rigorous
> way. That's the difficulty that Susskind is talking about.
>
> -- Ben

After all that "mouth-dancing", are you saying it is NOT meaningful
for anyone to assess the probability p that Florida will beat
Tennessee in the football game tomorrow? (If you're foreign <no pun
intended>, excuse the USA-specific example.)

How does a frequentist (or any other brand of probabilist) assess
that probability p?

-- Bob.
