
Frequentist probability confusion


Bartosz Milewski

Mar 22, 2004, 5:36:22 AM


I was trying to figure out if the frequentist interpretation could be used
as the foundation of the probabilistic interpretation of QM. As I understand
(correct me if I'm wrong) the basis of the frequentist interpretation can be
summarized in the following statement:
We have a repeatable experiment with M possible outcomes, each with
probability P_m. Repeat the experiment N times, and let X_m be the number
of times we get the m'th outcome. The claim is that,
when N goes to infinity, X_m/N -> P_m.
You can take this as a definition of probability P_m.
But how is this limit defined? Does it mean that for every epsilon there is
an N(epsilon) such that for any n > N(epsilon) we have |X_m / n - P_m| <
epsilon?
Can this N(epsilon) be calculated? Does it depend on the details of the
experiment?
This doesn't make sense to me, so I'm wondering if my intuition is
completely wrong.
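
To see what the question asks in concrete terms, here is a minimal sketch
in Python (the three-outcome distribution P is an arbitrary assumption,
chosen only for illustration): it estimates each P_m by the relative
frequency X_m/N for growing N.

import random

# A toy "experiment" with three outcomes and assumed probabilities P_m.
P = [0.5, 0.3, 0.2]

def relative_frequencies(N, seed=0):
    rng = random.Random(seed)
    counts = [0, 0, 0]
    for _ in range(N):
        u = rng.random()
        m = 0 if u < P[0] else (1 if u < P[0] + P[1] else 2)
        counts[m] += 1
    return [c / N for c in counts]

for N in (100, 10_000, 1_000_000):
    print(N, [round(f, 4) for f in relative_frequencies(N)])

The ratios X_m/N drift toward P_m, but nothing in such a run guarantees
|X_m/n - P_m| < epsilon for every n beyond some N(epsilon); that hard
guarantee is exactly what is being asked about above.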


Arnold Neumaier

Mar 22, 2004, 6:05:15 AM

In experimental terms, this limit is undefinable since during the
sum of lifetimes of all physicists ever, only a finite number of
experiments have been performed. And I fear this will be the case
in the near future, too. Thus a limit makes sense only on the
theoretical level. But there is no problem with probabilities anyway.

You can find the justification of a relative frequency interpretation
in any textbook of probability under the heading of the weak law
of large numbers. The limit is 'in probability', which means that
the probability of violating |X_m / n - P_m| < epsilon goes to zero
as n gets large. How large n must be at a given confidence level
can be calculated, if one is careful in the argument leading to the
proof. Unfortunately there is nothing that excludes the unlikely
remaining probability...
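
To make "how large n must be" concrete, here is a minimal sketch in Python,
assuming nothing beyond independent trials with success probability p; it
evaluates two textbook bounds (Chebyshev and Hoeffding) for the n needed so
that |X/n - p| < eps except with probability at most delta. The values of
eps and delta are assumed for illustration.

import math

def n_chebyshev(eps, delta):
    # P(|X/n - p| >= eps) <= p(1-p)/(n*eps^2) <= 1/(4*n*eps^2)
    return math.ceil(1.0 / (4.0 * eps**2 * delta))

def n_hoeffding(eps, delta):
    # P(|X/n - p| >= eps) <= 2*exp(-2*n*eps^2)
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps**2))

eps, delta = 0.01, 1e-6
print(n_chebyshev(eps, delta))   # ~2.5e9 trials
print(n_hoeffding(eps, delta))   # ~7.3e4 trials

Either bound gives a concrete n at a given confidence level, but the
residual probability delta is only pushed down, never removed.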


Arnold Neumaier

r...@maths.tcd.ie

Mar 23, 2004, 12:42:14 PM
Arnold Neumaier <Arnold....@univie.ac.at> writes:


>Bartosz Milewski wrote:
>> I was trying to figure out if the frequentist interpretation could be used

>> as the foundation of the probabilistic interpretation of QM. ...

> ...

>You can find the justification of a relative frequency interpretation
>in any textbook of probability under the heading of the weak law
>of large numbers. The limit is 'in probability', which means that
>the probability of violating |X_m / n - P_m| < epsilon goes to zero
>as n gets large. How large n must be at a given confidence level
>can be calculated, if one is careful in the argument leading to the
>proof. Unfortunately there is nothing that excludes the unlikely
>remaining probability...

Right, so actually, the frequentist interpretation of probability
suffers from the same disease that the many-worlds interpretation
does, or at least the non-Bayesian one. In many worlds, the problem
is that there's no way to justify dismissing worlds with a small
quantum amplitude as being rare, and in the frequentist
version of probability theory, there's no way to justify dismissing
outcomes with small probability as being rare.

The frequentist interpretation of probability suffers from worse
diseases as well. For example, you'll find in many probability
books and hear from the mouths of top probability theorists the
claim that no process can produce random, uniformly distributed
positive integers, but that processes can produce random uniformly
distributed real numbers between zero and one (e.g. toss a fair
coin exactly aleph_0 times to get the binary expansion).

In fact, a process which produces uniformly distributed random real
numbers between zero and one can be modified so that it produces
uniformly distributed random positive integers in the following
way: Consider [0,1) as an additive group of reals modulo 1. Then
it has a subgroup, S, consisting of rational numbers in [0,1). Form
a set X by choosing one element from each coset of S in [0,1). Then
define X_r = {a+r mod 1 | a \in X}, for each r in S. The X_r are
pairwise disjoint, pairwise congruent sets, with congruent meaning
they are related to each other by isometries of the group [0,1).
In that sense, they are as equiprobable as can be. Now if q is a
random number between 0 and 1, then it falls into exactly one X_r,
so there is a unique rational number, r, associated with that real
number, and since the rationals are countable, there is also a
unique positive integer associated with that real number. Since the
X_r's are congruent, no one can be any more or less likely than any
other, so no positive integer is any more or less likely than any
other to result from this process. Voila, we have a way to get a
"random" positive integer from a "random" real in [0,1).

The problem is that if you define probabilities in terms of
outcomes of repeated processes or experiments, then you might get
led astray when you find that certain probability distributions
don't exist (e.g. a uniform distribution over the positive
integers). You might start imagining that your probability theory
is telling you something about what kind of random-number-generation
processes are or aren't possible. As the example above shows, this
is incorrect.

R.

Ps. Yes, I know I used the axiom of choice and rely on axioms of
infinity, but that's not a problem. Nobody would actually
drop these axioms in order to save the frequentist interpretation,
or at least, nobody worth mentioning.

Bartosz Milewski

Mar 24, 2004, 9:45:31 PM
"Arnold Neumaier" <Arnold....@univie.ac.at> wrote in message
news:405EC787...@univie.ac.at...

> You can find the justification of a relative frequency interpretation
> in any textbook of probability under the heading of the weak law
> of large numbers. The limit is 'in probability', which means that
> the probability of violating |X_m / n - P_m| < epsilon goes to zero
> as n gets large. How large n must be at a given confidence level
> can be calculated, if one is careful in the argument leading to the
> proof. Unfortunately there is nothing that excludes the unlikely
> remaining probability...

Doesn't this justification suffer from the same problem? What's the precise
meaning of "probability goes to zero as n gets large"? I don't know if
statements like this have a meaning at all. In mathematics "goes to" or "has
a limit" are very well defined notions (i.e., for any epsilon there exist...
etc.).

I can think of an unorthodox way of dealing with limits in probability.
Introduce the "random number generator," just like in computer simulations
(in practice one uses a pseudo-random generator). The N(epsilon) such that
for each n > N(epsilon) |X_m / n - P_m| < epsilon could then be calculated
based on the properties of the "random" number stream that generates values
X_m.
Here's some handwaving: Imagine that a stream of "random" numbers has the
following property: If you generate heads and tails using this stream, then
there is a certain N_1 such that, for any N > N_1, the first N
tosses could not have been all heads or all tails (there is at least one
result that differs from the rest). I don't know how much this would break
randomness properties, but it would make the definition of limits meaningful.
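
A toy version of the proposal, in Python, may make it concrete; the value
N_1 and the exact "cutoff" rule are hypothetical choices made only for this
sketch, not a standard generator.

import random

N1 = 20  # hypothetical cutoff

def cutoff_bit_stream(seed=0):
    rng = random.Random(seed)
    history = []
    while True:
        b = rng.randint(0, 1)
        history.append(b)
        # Enforce the proposed property: beyond N1 tosses, the prefix may
        # never consist of identical results, so force one differing toss.
        if len(history) > N1 and len(set(history)) == 1:
            b = 1 - b
            history[-1] = b
        yield b

gen = cutoff_bit_stream()
first = [next(gen) for _ in range(100)]
print(sum(first), "heads out of 100")

Such a stream makes "the first n tosses are never all heads" literally true
for n > N_1, at the price of the tosses no longer being independent and
fair - which is the trade-off acknowledged above.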

By the way, has anyone tested experimentally the randomness of quantum
experiments? Is a quantum random number generator perfectly random?

J. J. Lodder

Mar 25, 2004, 5:04:00 AM


Bartosz Milewski <bar...@nospam.relisoft.com> wrote:

> By the way, has anyone tested experimentally the randomness of quantum
> experiments? Is a quantum random number generator perfectly random?

Of course not, perfection doesn't exist.
There are always instrumental effects that spoil it.

A well known, and already very old example
is the dead time of Geiger counters.
This spoils the ideal Poisson distribution of the famous clicks.

For a practical implementation of a hardware random number generator
one often uses Zener diodes biased in the reverse direction.
Since the current is caused by tunnelling of electrons through a barrier,
this may well be considered to be a quantum mechanical noise source.

Suitable electronics can transform the variable current
into (very nearly) random bits, hence numbers.
Some small bias remains though.
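
If one wants to reduce such residual bias in software, one classic
post-processing trick is von Neumann's extractor; the sketch below (Python)
assumes the raw bits are independent though possibly biased, which real
hardware only approximates.

import random

def von_neumann_extract(raw_bits):
    # Pairs (0,1) -> 0 and (1,0) -> 1; equal pairs are discarded.
    # This removes bias (not correlations) and discards part of the input.
    out = []
    it = iter(raw_bits)
    for a, b in zip(it, it):
        if a != b:
            out.append(a)
    return out

rng = random.Random(1)
raw = [1 if rng.random() < 0.7 else 0 for _ in range(10_000)]  # ~70% ones
clean = von_neumann_extract(raw)
print(sum(raw) / len(raw), sum(clean) / len(clean))  # ~0.70 vs ~0.50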

Jan

Patrick Van Esch

Mar 25, 2004, 8:25:50 PM
"Bartosz Milewski" <bar...@nospam.relisoft.com> wrote in message news:<c3qnae$scv$1...@brokaw.wa.com>...
>

> Here's some handwaving: Imagine that a stream of "random" numbers has the
> following property: If you generate heads and tails using this stream, then
> there is a certain N_1 above which you are guaranteed that the first N
> tosses could not have been all heads or all tails (there is at least 1 odd
> result).

But this is exactly what the original question was all about:
the frequentist application of probability theory doesn't make sense
if there is no "lower cutoff" below which we consider that the event
will never happen and which is NOT 0.
Indeed, naively said, the law of LARGE numbers cannot be replaced by
the law of INFINITE numbers because then probability theory makes NO
statement at all about finite statistics.
If you toss a coin N times, the probability to have all heads is
1/2^N. So if you toss a coin N times, to estimate the probability of
having "heads" on one go (which you expect to be 1/2), there is a
probability of 1/2^N that you will actually find 1 (and also a
probability of 1/2^N that you will find 0).
If N = 10000, then that's a small probability indeed, but according to
orthodox probability theory, it CAN occur.
Now let us say that we repeat this tossing of N coins M times, you'd
then expect that finding an estimate of the probability of one tossing
of the coin = 1 will only occur, on average, 1/2^N times M. But there
is again, a very small probability that we will find for ALL of these
M experiments (each with N coins), an average of 1 (namely 1/2^(NxM)
is that probability).
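
Putting rough numbers to the probabilities above (M is an assumed
illustrative value):

import math

N, M = 10_000, 1_000
print(f"P(all heads in {N} tosses)        ~ 10^{-N * math.log10(2):.0f}")
print(f"P(all heads in all {M} such runs) ~ 10^{-N * M * math.log10(2):.0f}")

These come out around 10^-3010 and 10^-3010000: absurdly small, yet still
nonzero in the theory, which is why a purely probabilistic "will probably
not occur" stays tautological.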
So the statement that "events with an extremely small probability
associated to it will probably not occur" is an empty statement
because tautological. It is only when we say that "events with an
extremely small probability will NOT occur" that suddenly, all of the
frequentist interpretation of probability theory makes sense. That
is a strong statement; however, in practical and experimental life we
always make it. Most, if not all, experimental claims come with a small
probability (10 sigma, for example) that the result is a statistical
fluctuation, but beyond a certain threshold people take it as a hard fact.

cheers,
Patrick.

J. J. Lodder

Mar 25, 2004, 8:26:24 PM
Bartosz Milewski <bar...@nospam.relisoft.com> wrote:

Your conceptual problem has nothing to do with quantum mechanics.
It arises in precisely the same form when you want to verify by
experiment that a coin being thrown repeatedly is fair
(That is, has exactly 50% probability of coming up heads or tails)

For the resolution see any textbook on probability:
you can never verify such a thing, you can only give confidence limits.
It has nothing to do with frequentist versus Bayesianism either:
a Bayesian can do no better.

Best,

Jan

Matt Leifer

Mar 26, 2004, 2:37:42 AM
> You can find the justification of a relative frequency interpretation
> in any textbook of probability under the heading of the weak law
> of large numbers. The limit is 'in probability', which means that
> the probability of violating |X_m / n - P_m| < epsilon goes to zero
> as n gets large. How large n must be at a given confidence level
> can be calculated, if one is careful in the argument leading to the
> proof. Unfortunately there is nothing that excludes the unlikely
> remaining probability...

Notice that the argument is circular, i.e. one uses a concept of
probability in order to define the concept of probability. This
doesn't cause problems for most applications of probability theory,
but it is the main reason to be a Bayesian from the conceptual point
of view.

Matthew Donald

Mar 26, 2004, 4:49:59 AM


Bartosz Milewski wrote
> has anyone tested experimentally the randomness of quantum
> experiments?

Try the e-print physics/0304013. Here's the abstract:

*******

physics/0304013

From: Dana J. Berkeland
Date: Fri, 4 Apr 2003 18:16:19 GMT (143kb)

Tests for non-randomness in quantum jumps

Authors: D.J. Berkeland, D.A. Raymondson, V.M. Tassin
Comments: 4 pages, 5 figures
Subj-class: Atomic Physics

In a fundamental test of quantum mechanics, we have collected
over 250,000 quantum jumps from single trapped and cooled
88Sr+ ions, and have tested their statistics using a
comprehensive set of measures designed to detect non-random
behavior. Furthermore, we analyze 238,000 quantum jumps from
two simultaneously confined ions and find that the number of
apparently coincidental transitions is as expected. Similarly, we
observe 8400 spontaneous decays of two simultaneously trapped
ions and find that the number of apparently coincidental decays
agrees with expected value. We find no evidence for short- or
long-term correlations in the intervals of the quantum jumps or
in the decay of the quantum states, in agreement with quantum
theory.

*****
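
For readers who want to try something in this spirit on their own data,
here is a minimal sketch in Python (only a bias check and a lag-1
autocorrelation, nothing like the comprehensive set of measures used in the
paper):

import random

def basic_checks(bits):
    n = len(bits)
    mean = sum(bits) / n
    # lag-1 autocorrelation of the centred sequence
    num = sum((bits[i] - mean) * (bits[i + 1] - mean) for i in range(n - 1))
    den = sum((b - mean) ** 2 for b in bits)
    return mean, (num / den if den else 0.0)

rng = random.Random(42)
bits = [rng.randint(0, 1) for _ in range(100_000)]
print(basic_checks(bits))   # mean ~0.5, autocorrelation ~0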

Matthew Donald (matthew...@phy.cam.ac.uk)
web site:
http://www.poco.phy.cam.ac.uk/~mjd1014
``a many-minds interpretation of quantum theory''
*****************************************

Arnold Neumaier

Mar 26, 2004, 4:50:02 AM


Bartosz Milewski wrote:
> "Arnold Neumaier" <Arnold....@univie.ac.at> wrote in message
> news:405EC787...@univie.ac.at...
>
>>You can find the justification of a relative frequency interpretation
>>in any textbook of probability under the heading of the weak law
>>of large numbers. The limit is 'in probability', which means that
>>the probability of violating |X_m / n - P_m| < epsilon goes to zero
>>as n gets large. How large n must be at a given confidence level
>>can be calculated, if one is careful in the argument leading to the
>>proof. Unfortunately there is nothing that excludes the unlikely
>>remaining probability...
>
> Doesn't this justification suffer from the same problem? What's the precise
> meaning of "probability goes to zero as n gets large"?

It has a precise mathematical meaning. It has no precise physical meaning.
But nothing at all has a precise physical meaning. Only theory can be
exact - you can say exactly what an electron is in QED; you cannot
say exactly what it is in reality.

The interface between theory and the real world is never exact.
This interface must just be clear enough to guarantee working protocols
for the execution of science in practice.

I think it is not reasonable to require of probability a more stringent
meaning than for an electron. It suffices that there are recipes that
give, in practice, acceptable approximations.


Arnold Neumaier

Arnold Neumaier

Mar 26, 2004, 5:49:48 AM


Matt Leifer wrote:
>>You can find the justification of a relative frequency interpretation
>>in any textbook of probability under the heading of the weak law
>>of large numbers. The limit is 'in probability', which means that
>>the probability of violating |X_m / n - P_m| < epsilon goes to zero
>>as n gets large. How large n must be at a given confidence level
>>can be calculated, if one is careful in the argument leading to the
>>proof. Unfortunately there is nothing that excludes the unlikely
>>remaining probability...
>
>
> Notice that the argument is circular, i.e. one uses a concept of
> probability in order to define the concept of probability.

No. It uses the concept of probability to explain why, within this
framework, relative frequencies are valid approximations to
probabilities. Probabilities themselves are defined axiomatically,
and not justified at all.

Any theory needs to start somewhere with some basic, unexplained
terms that get their meaning from the consequences of the theory,
and not from anything outside.


Arnold Neumaier

Arnold Neumaier

Mar 26, 2004, 5:49:51 AM


Patrick Van Esch wrote:

> So the statement that "events with an extremely small probability
> associated to it will probably not occur" is an empty statement
> because tautological.

Yes.

> It is only when we say that "events with an
> extremely small probability will NOT occur"

but this is wrong. If you randomly draw a real number x from a uniform
distribution in [0,1], and get as a result s, the probability that
you obtained exactly this number is zero, but it was the one you got.

> that suddenly, all of the
> frequentist interpretation of probability theory makes sense.

This has nothing to do with the frequentist interpretation; the
problem of unlikely things happening is also present in a Bayesian
interpretation.

The problem stems from the attempt to assign 100% precise meaning
to concepts that have an inherent uncertainty. Once one realizes
that any concept applied to reality is limited since we cannot
say too precisely what is meant by it on the operational level
(100% clarity is available only in theoretical models), the
difficulty disappears. That's why it has become standard practice
to distinguish between reality and our models of it.


Arnold Neumaier


Bartosz Milewski

Mar 26, 2004, 5:24:30 PM
"Patrick Van Esch" <van...@ill.fr> wrote in message
news:c23e597b.04032...@posting.google.com...

> So the statement that "events with an extremely small probability
> associated to it will probably not occur" is an empty statement
> because tautological. It is only when we say that "events with an
> extremely small probability will NOT occur" that suddenly, all of the
> frequentist interpretation of probability theory makes sense.

Yes, that's exactly my point. I was trying to make the cutoff somewhat
better defined making it a property of a "random" number generator. Let me
clarify my point: In mathematics, probability theory describes properties of
"measures." It's a self-consistent theory (modulo Goedel) and that's it. In
physics we are trying to interpret these measures as probabilities, so we
have to provide a framework. The frequentist framework doesn't seem to be
consistent. I propose to extend the frequentist interpretation by
abstracting the random number generator part of it. Probability can then be
defined formally as a limit of frequencies, provided the frequencies fulfill
some additional properties--the cutoff properties. In essence, they must
behave as if they were generated by a computer program using a random number
generator with some well defined cutoff property. This random number
generator is necessary so that all the frequencies exhibit the same cutoff
property (i.e., the frequency of [quantum] heads has the same cutoffs as the
frequency of tails).

Bartosz Milewski

Mar 26, 2004, 5:21:56 PM
"J. J. Lodder" <nos...@de-ster.demon.nl> wrote in message
news:1gb70ve.44...@de-ster.xs4all.nl...

> Bartosz Milewski <bar...@nospam.relisoft.com> wrote:
>
> Your conceptual problem has nothing to do with quantum mechanics.
> It arises in precisely the same form when you want to verify by
> experiment that a coin being thrown repeatedly is fair
> (That is, has exactly 50% probability of coming up heads or tails)

There is a huge difference between quantum probability and classical
probability. Coin tosses are not "really" random. They are chaotic, which
means we can't predict the results because (a) we never know the initial
conditions _exactly_ (the butterfly effect) and (b) because we don't have
computers powerful enough to model a coin toss. So coin tossing is for all
"practical" purposes random, but theoreticall it's not! In QM, on the other
hand, randomness is inherent. If you can prepare a system in pure state, you
know the initial conditions _exactly_. And yet, the results of experiments
are only predicted probabilistically. Moreover, there are no hidden
variables (this approach has been tried), whose knowledge could specify the
initial conditions more accurately and maybe let you predict the exact
outcomes.

I have no problems with coin tosses as long as you don't use a quantum coin.

Bartosz Milewski

Mar 27, 2004, 6:15:11 AM


"Arnold Neumaier" <Arnold....@univie.ac.at> wrote in message

news:40640905...@univie.ac.at...


> but this is wrong. If you randomly draw a real number x from a uniform
> distribution in [0,1], and get as a result s, the probability that
> you obtained exactly this number is zero, but it was the one you got.

This argument is also circular. How do you draw a number randomly? It's not
a facetious question.


Patrick Van Esch

Mar 29, 2004, 2:36:38 AM
Arnold Neumaier <Arnold....@univie.ac.at> wrote in message news:<40640905...@univie.ac.at>...

> Patrick Van Esch wrote:
>
> > So the statement that "events with an extremely small probability
> > associated to it will probably not occur" is an empty statement
> > because tautological.
>
> Yes.
>
> > It is only when we say that "events with an
> > extremely small probability will NOT occur"
>
> but this is wrong. If you randomly draw a real number x from a uniform
> distribution in [0,1], and get as a result s, the probability that
> you obtained exactly this number is zero, but it was the one you got.

I'm not talking about the mathematical theory of probabilities (a la
Kolmogorov) which is a nice mathematical theory. I'm talking about
the application of this mathematical theory to the physical sciences,
which maps events (potential experimental outcomes) to numbers (called
probabilities, but we could call it "gnorck"). Saying that measuring
the number of neutrons scattered under an angle of 12 to 13 degrees
has gnorck = 10^(-6) doesn't mean much as such. So this assignment of
probabilities to experimental outcomes has only a meaning when it
eventually turns into "hard" statements, and that can only happen when
we apply a lower cutoff to probabilities.
Your argument of drawing a real number out of [0,1] doesn't apply
here, because the outcome of an experiment is never a true real number
(most of which cannot even be written down !). There are always a
finite number of possibilities in the outcome of an experiment
(otherwise it couldn't be written onto a hard disk!).

cheers,
Patrick.

Italo Vecchi

Mar 29, 2004, 4:22:03 AM

"Bartosz Milewski" <bar...@nospam.relisoft.com> wrote in message news:<c426a4$bvd$1...@brokaw.wa.com>...


> "J. J. Lodder" <nos...@de-ster.demon.nl> wrote in message
> news:1gb70ve.44...@de-ster.xs4all.nl...
> > Bartosz Milewski <bar...@nospam.relisoft.com> wrote:
> >
> > Your conceptual problem has nothing to do with quantum mechanics.
> > It arises in precisely the same form when you want to verify by
> > experiment that a coin being thrown repeatedly is fair
> > (That is, has exactly 50% probability of coming up heads or tails)
>
> There is a huge difference between quantum probability and classical
> probability. Coin tosses are not "really" random. They are chaotic, which
> means we can't predict the results because (a) we never know the initial
> conditions _exactly_ (the butterfly effect) and (b) because we don't have
> computers powerful enough to model a coin toss.

That "we can't predict the results" IS randomness. There is nothing
more to randomness than that.

Regards,

IV

Arnold Neumaier

Mar 29, 2004, 5:55:17 AM

Bartosz Milewski wrote:

> There is a huge difference between quantum probability and classical
> probability. Coin tosses are not "really" random.

What is "really" random??? the term "random" has no precise meaning
outside theory. But in the theory of stochastic processes, there is
"real" classical randomness.


> In QM, on the other
> hand, randomness is inherent. If you can prepare a system in pure state, you

But no one can. Pure states in quantum mechanics are as much idealizations
as classical random processes.


Arnold Neumaier


Arnold Neumaier

Mar 29, 2004, 2:38:36 PM
Patrick Van Esch wrote:
> Arnold Neumaier <Arnold....@univie.ac.at> wrote in message news:<40640905...@univie.ac.at>...
>
>>Patrick Van Esch wrote:

>>>It is only when we say that "events with an
>>>extremely small probability will NOT occur"
>>
>>but this is wrong. If you randomly draw a real number x from a uniform
>>distribution in [0,1], and get as a result s, the probability that
>>you obtained exactly this number is zero, but it was the one you got.
>
> I'm not talking about the mathematical theory of probabilities (a la
> Kolmogorov) which is a nice mathematical theory. I'm talking about
> the application of this mathematical theory to the physical sciences,
> which maps events (potential experimental outcomes) to numbers (called
> probabilities, but we could call it "gnorck").

In that case, all statements are approximate statements only,
and probability has no exact meaning. Saying p=1e-12
in a practical situation where you can never collect more than
a few thousand cases is as meaningless as saying that the
Earth-Moon distance is 384001.1283564032984930201549807 km.


> Saying that measuring
> the number of neutrons scattered under an angle of 12 to 13 degrees
> has gnorck = 10^(-6) doesn't mean much as such. So this assignment of
> probabilities to experimental outcomes has only a meaning when it
> eventually turns into "hard" statements, and that can only happen when
> we apply a lower cutoff to probabilities.

But where exactly would you take it? If there is an eps such that
p<eps means 'it will not occur', what is the supremum of all such eps?
It would have to be a fundamental constant of nature.

The nonexistence of such a constant implies that your proposal cannot
be right.


> Your argument of drawing a real number out of [0,1] doesn't apply
> here, because the outcome of an experiment is never a true real number
> (most of which cannot even be written down !). There are always a
> finite number of possibilities in the outcome of an experiment
> (otherwise it couldn't be written onto a hard disk!).

This does not really help. Let eps be the "extremely small probability"
according to your proposal. Pick N >> - log eps / log 2, and
run a series of N coin tosses. You get the result x_1 ... x_N, say.
Although you really obtained exactly this result, the probability of
obtaining it was only 2^{-N} << eps. Thus your proposal amounts to
proving the impossibility of tossing coins more than a fairly small
number of times... Do you really want to claim that???
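
Putting assumed numbers to this (eps here is only a hypothetical cutoff
value):

import math

eps = 1e-6
print(-math.log(eps) / math.log(2))   # ~19.9: the N beyond which 2^-N < eps
N = 100
print(2.0 ** -N)                      # ~7.9e-31, far below eps

So any particular outcome of 100 ordinary coin tosses already has a
probability far below such a cutoff, yet some outcome certainly occurs.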


The truth is that probabilities, just like real numbers,
are concepts of theory, and as such apply only approximately
to reality, with a context-dependent and user-dependent accuracy
with fuzzy boundaries.


Arnold Neumaier

Patrick Powers

Mar 30, 2004, 3:47:24 AM
r...@maths.tcd.ie wrote in message news:<c3njd0$295l$1...@lanczos.maths.tcd.ie>...

> Arnold Neumaier <Arnold....@univie.ac.at> writes:
>
>
> >Bartosz Milewski wrote:
> >> I was trying to figure out if the frequentist interpretation could be used
> >> as the foundation of the probabilistic interpretation of QM. ...
>
> > ...
>
> >You can find the justification of a relative frequency interpretation
> >in any textbook of probability under the heading of the weak law
> >of large numbers. The limit is 'in probability', which means that
> >the probability of violating |X_m / n - P_m| < epsilon goes to zero
> >as n gets large. How large n must be at a given confidence level
> >can be calculated, if one is careful in the argument leading to the
> >proof. Unfortunately there is nothing that excludes the unlikely
> >remaining probability...
The idea is that the probability may be made as small as one likes.
So it can be made so small that the event is for all practical
purposes impossible.

>
> Right, so actually, the frequentist interpretation of probability
> suffers from the same disease that the many-worlds interpretation
> does, or at least the non-Bayesian one. In many worlds, the problem
> is that there's no way to justify dismissing worlds with a small
> quantum amplitude as being rare, and in the frequentist
> version of probability theory, there's no way to justify dismissing
> outcomes with small probability as being rare.
>
Quantum theory is a probabilistic theory and extremely unlikely events
are not excluded, nor should they be. So this is a property of the
theory, not the interpretation. It seems to me that an interpretation
that excluded such events absolutely would be in error.

> The frequentist interpretation of probability suffers from worse
> diseases as well. For example, you'll find in many probability
> books and hear from the mouths of top probability theorists the
> claim that no process can produce random, uniformly distributed
> positive integers, but that processes can produce random uniformly
> distributed real numbers between zero and one (e.g. toss a fair
> coin exactly aleph_0 times to get the binary expansion).

Yes these claims as stated are contradictory. I suspect that the
definitions you are using are imprecise. The word "process" implies
computability, that the process is finite. A real number is cleverly
defined as a limit of a finite process. So a real number is
computable in this sense, that it can be approximated as closely as
one likes in finite time. The problem with your proof is that, as the
real number is computed, the choice of cosets changes with each step, so
the process does not converge to an integer.

Using the axioms of choice and infinity then one can indeed choose a
natural number at random. There are some rather strange consequences.
It is then possible to prove that each number chosen in this way will
be greater than all such previously chosen numbers with probability
one. Let N be the greatest such number chosen so far. Then there are
finitely many natural numbers less than or equal to N but infinitely
many greater than N. So the next number chosen will be greater than N
with probability one. Note that our ostensibly random sequence is
strictly increasing with probability one. This is not the only
bizarre consequence of the axiom of choice: see the well-known
Banach-Tarski sphere paradox. So I should think a physicist would do
well to be wary of the axiom of choice as tending to produce
non-physical results.

The frequentist approach does not assume the axiom of choice and makes
no use of transfinite mathematics or completed limits. If it did, the
problems you mention would in fact arise.

Arnold Neumaier

Mar 30, 2004, 11:32:30 AM

> r...@maths.tcd.ie wrote in message news:<c3njd0$295l$1...@lanczos.maths.tcd.ie>...
>

>>For example, you'll find in many probability
>>books and hear from the mouths of top probability theorists the
>>claim that no process can produce random, uniformly distributed
>>positive integers, but that processes can produce random uniformly
>>distributed real numbers between zero and one (e.g. toss a fair
>>coin exactly aleph_0 times to get the binary expansion).

This has a very simple reason: There is no consistent definition of
random, uniformly distributed positive integers, while there is
one for random uniformly distributed real numbers between zero and one.
This is a purely mathematical statement independent of any
interpretation!

And of course, when people say 'produce' they mean
'produce in theory', or if they mean 'produce in practice' they
have in mind that it is produced only approximately.


Arnold Neumaier


Russell Blackadar

Mar 30, 2004, 12:42:53 PM

Patrick Powers wrote:
>
> r...@maths.tcd.ie wrote in message news:<c3njd0$295l$1...@lanczos.maths.tcd.ie>...
> > Arnold Neumaier <Arnold....@univie.ac.at> writes:

[snip]

> Using the axioms of choice and infinity then one can indeed choose a
> natural number at random. There are some rather strange consequences.
> It is then possible to prove that each number chosen in this way will
> be greater than all such previously chosen numbers with probability
> one. Let N be the greatest such number chosen so far. Then there are
> finitely many natural numbers less than or equal to N but infinitely
> many greater than N. So the next number chosen will be greater than N
> with probability one. Note that our ostensibly random sequence is
> strictly increasing with probability one.

Hmm, interesting post, thanks.

> This is not the only
> bizarre consequence of the axiom of choice: see the well-known
> Banach-Tarski sphere paradox. So I should think a physicist would do
> well to be wary of the axiom of choice as tending to produce
> non-physical results.

But this only happens if you use it in a scenario that is *already*
unphysical, e.g. if you claim it's possible to draw a lottery ball
from a cage containing aleph_0 ping-pong balls. Here the physical
problem is not in the drawing, but in the setting up of the cage to
begin with. As for Banach-Tarski, it also requires an unphysical
scenario; you can't make it work by pulverizing and reassembling a
ball made of any physical material.

So I don't see any problem with a physicist accepting AC and its
useful consequences as applied to the mathematics of continua.

J. J. Lodder

Mar 30, 2004, 12:29:07 PM
Bartosz Milewski <bar...@nospam.relisoft.com> wrote:

> "J. J. Lodder" <nos...@de-ster.demon.nl> wrote in message
> news:1gb70ve.44...@de-ster.xs4all.nl...
> > Bartosz Milewski <bar...@nospam.relisoft.com> wrote:
> >
> > Your conceptual problem has nothing to do with quantum mechanics.
> > It arises in precisely the same form when you want to verify by
> > experiment that a coin being thrown repeatedly is fair
> > (That is, has exactly 50% probability of coming up heads or tails)
>
> There is a huge difference between quantum probability and classical
> probability.

Not at all, from the point of view of probability theory.

> Coin tosses are not "really" random. They are chaotic, which
> means we can't predict the results because (a) we never know the initial
> conditions _exactly_ (the butterfly effect) and (b) because we don't have
> computers powerful enough to model a coin toss. So coin tossing is for all
> "practical" purposes random, but theoreticall it's not!

We use probability theory when we don't know, or don't want to know,
about underlying causes.
Whether or not such causes are actually present is irrelevant.
Probability just deals with 'something' that produces heads or tails,
and determines properties of the sequence of them,
(like confidence in being fair)
using the means of probability theory.

> In QM, on the other
> hand, randomness is inherent. If you can prepare a system in pure state, you
> know the initial conditions _exactly_. And yet, the results of experiments
> are only predicted probabilistically. Moreover, there are no hidden
> variables (this approach has been tried), whose knowledge could specify the
> initial conditions more accurately and maybe let you predict the exact
> outcomes.
>
> I have no problems with coin tosses as long as you don't use a quantum coin.

All coins are quantum coins, for we live in a quantum world.
In practice it may be quite hard to say whether or not
a coin throw may be considered to be 'classical'.
Quantum mechanics may come in in the precise timing
of the twitching of your fingers, on the molecular level,
when flipping the coin.

Not that it matters,

Jan

Patrick Van Esch

Mar 30, 2004, 12:31:45 PM
"Bartosz Milewski" <bar...@nospam.relisoft.com> wrote in message news:<c428jd$cvv$1...@brokaw.wa.com>...

> "Patrick Van Esch" <van...@ill.fr> wrote in message
> news:c23e597b.04032...@posting.google.com...
> > So the statement that "events with an extremely small probability
> > associated to it will probably not occur" is an empty statement
> > because tautological. It is only when we say that "events with an
> > extremely small probability will NOT occur" that suddenly, all of the
> > frequentist interpretation of probability theory makes sense.
>
> Yes, that's exactly my point. I was trying to make the cutoff somewhat
> better defined making it a property of a "random" number generator.

In fact, this point is something that has bothered me since I learned
about probability theory (20 years ago) and most people didn't seem to
even understand what my problem was (depends probably on the people
you talk to).
The fact that others here seem to struggle with the same problem
indicates that this is somehow a problem :-)
However, I have no idea if you can make a consistent probability
interpretation with such a cutoff. I don't know if ever some work in
that direction has been undertaken.
The Bayesian interpretation of probabilities is a nice information
theoretical construct of course, I'm not expert enough in it to see if
the problem also exists there. But I have difficulties with people
who deny the frequentist interpretation: after all, this is - to me -
the only way to make a connection to experimental results ! How do
you verify differential cross sections ? You do a number of
experiments, and then you make the HISTOGRAM (counting the number of
occurrences) of the outcomes, which you compare with your calculated
probability density. That's nothing else but applying the frequentist
interpretation, no ?
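
A bare-bones version of that workflow, as a sketch in Python (the discrete
"cross section" p_calc is entirely made up for illustration):

import random

p_calc = [0.1, 0.2, 0.4, 0.2, 0.1]   # hypothetical probabilities per angular bin

rng = random.Random(7)
N = 100_000
counts = [0] * len(p_calc)
for _ in range(N):
    u, acc = rng.random(), 0.0
    for i, p in enumerate(p_calc):   # sample a bin according to p_calc
        acc += p
        if u < acc:
            counts[i] += 1
            break

# The frequentist step: compare the histogram with the expected N*p per bin.
for i, c in enumerate(counts):
    print(i, c, round(N * p_calc[i]))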

cheers,
Patrick.

Arnold Neumaier

Mar 30, 2004, 12:44:24 PM

My argument is not circular, since I didn't attempt to give a definition of
probability but simply assumed it in order to refute a claim that was made.


If randomness has any meaning at all in practice, one must be able to
draw numbers randomly; if not from the uniform distribution, then by
whatever process is assumed.

If you accept that there is something like a 'fair coin toss'
which gives independent events with probability 1/2, you can
easily get arbitrary small probabilities without invoking real numbers,
as I showed in another reply in this thread.

If you can't accept a fair coin toss, I wonder whether you have
any place at all for probabilistic models in your world view.


The problem in all these issues related to probability is the silent
switch between theory and practice at some place, which different
people take at a different place, which makes communication difficult
and invites paradoxes. The interface between theory and reality is
always a little vague, and one has to be careful not to make statements
which are meaningless.

Raw reality has no concepts; it simply is. But to do science,
and indeed already to live intelligently, one needs to sort reality
into various conceptual bags that allow one to understand and predict.
Because of our incomplete access to reality, we can do this only in an
imperfect, somewhat fuzzy way. Respecting this in one's thinking
avoids all paradoxes; coming across a paradox means that one moved
somewhere across the border of what was permitted - though it is not
always easy to see where and why.

Full clarity can be obtained only on the logical level, this is
necessarily one level away from reality. In the foundations of
logic, one builds within an intuitive logic a complete model of
everything logic is about, and then is able to clarify the limits
of logical reasoning. This is the closest one can get to a clear
understanding of the foundations. To do the same in physics amounts
to building within mathematics a complete model with all the features
external reality is believed to have, and to discuss within this model
all the concepts and activities physicists work with. If this can be
done in a consistent way, it is as close as we can get to ascertaining
that the model is indeed faithful to external reality.

Therefore, when _I_ discuss probability, I choose such a model
as the background on the basis of which I can speak of well-defined
probabilities. In such a mathematical world, one can draw numbers
by decree (even though one cannot know what was drawn).

In reality, one needs to substitute pseudo-random number generators,
which makes drawing random numbers a practical activity. But of
course their properties only approximate the theoretical thing,
as always when a formal concept is implemented in nature.
Not even the Peano axioms for natural numbers can be realized in
nature - still less more subtle concepts like probability.


Arnold Neumaier


Bartosz Milewski

Mar 30, 2004, 12:48:40 PM

"Arnold Neumaier" <Arnold....@univie.ac.at> wrote in message
news:4067FA65...@univie.ac.at...

> What is "really" random??? the term "random" has no precise meaning
> outside theory. But in the theory of stochastic processes, there is
> "real" classical randomness.

But stochastic theory is NOT a fundamental theory. It's an idealization of a
chaotic process. Can we interpret QM as an idealization of some more
fundamental theory? A theory where events are theoretically 100%
predictable, but so chaotic that in practice we can only make stochastic
predictions? I'm afraid this path has been tried (hidden variables) and
refuted.

Patrick Powers

Mar 30, 2004, 2:28:14 PM
vec...@weirdtech.com (Italo Vecchi) wrote in message news:<61789046.04032...@posting.google.com>...

>
> That "we can't predict the results" IS randomness. There is nothing
> more to randomness than that.
>

True. But some posters are saying that quantum processes are
essentially random in that assuming that the results can be predicted
leads to a contradiction. In other words, they say that it has been
proved impossible that the events will ever be predicted.

I know about Bell's theorem, but don't know that such a thing has been
proved in general. Can someone please provide a reference?

Patrick Van Esch

Mar 30, 2004, 2:28:20 PM
Arnold Neumaier <Arnold....@univie.ac.at> wrote in message news:<406803CA...@univie.ac.at>...

> Patrick Van Esch wrote:
> > Arnold Neumaier <Arnold....@univie.ac.at> wrote in message news:<40640905...@univie.ac.at>...
> >
>
> But where exactly would you take it? If there is an eps such that
> p<eps means 'it will not occur', what is the supremum of all such eps?
> It would have to be a fundamental constant of nature.
>
> The nonexistence of such a constant implies that your proposal cannot
> be right.
>

I agree that it should be a constant of this universe, if ever there
was such a thing. It is not unthinkable that there IS such a constant
(such as the inverse of the number of spacetime events, which could be
finite). But I do realise how speculative that statement is.
Nevertheless, without this vague idea, I cannot make any sense of what
it experimentally means to have an event with a probability p. Or
better, why every time we do statistics on real data, it works out !


>
> > Your argument of drawing a real number out of [0,1] doesn't apply
> > here, because the outcome of an experiment is never a true real number
> > (most of which cannot even be written down !). There are always a
> > finite number of possibilities in the outcome of an experiment
> > (otherwise it couldn't be written onto a hard disk!).
>
> This does not really help. Let eps be the "extremely small probability"
> according to your proposal. Pick N >> - log eps / log 2, and
> run a series of N coin tosses. You get the result x_1 ... x_N, say.
> Although you really obtained exactly this result, the probability of
> obtaining it was only 2^{-N} << eps. Thus your proposal amounts to
> proving the impossibility of tossing coins more than a fairly small
> number of times

Maybe that "fairly small number of times" is in fact a very big
number, and in our universe there's not enough matter and time to do
all this tossing around!

A very small cutoff can save ALL of frequentist interpretations of
probability, because you are allowed to consider combined, independent
events (what's the probability of tossing 100 times a coin and finding
100 heads in a row AND seeing the moon go supernova etc...).

cheers,
Patrick.

Arnold Neumaier

Mar 31, 2004, 2:30:17 AM

What is 'fundamental'? Stochastic processes are mathematically sound,
well-founded, and consistent, unlike current quantum field theory,
say. So they'd make a better foundation.

Chaotic processes are also idealizations, no less than stochastic
processes. And who can tell what is more basic? You can get one
from the other in suitable approximations...


Arnold Neumaier

Arnold Neumaier

Mar 31, 2004, 5:36:20 PM
Patrick Van Esch wrote:
> Arnold Neumaier <Arnold....@univie.ac.at> wrote in message news:<406803CA...@univie.ac.at>...

>
>>But where exactly would you take it? If there is an eps such that
>>p<eps means 'it will not occur', what is the supremum of all such eps?
>>It would have to be a fundamental constant of nature.
>>
>>The nonexistence of such a constant implies that your proposal cannot
>>be right.
>>
>
>
> I agree that it should be a constant of this universe, if ever there
> was such a thing. It is not unthinkable that there IS such a constant
> (such as the inverse of the number of spacetime events, which could be
> finite).

But the constant, to be meaningful in our real life, would have to be
not extremely small; and then it can be refuted easily.

> But I do realise how speculative that statement is.
> Nevertheless, without this vague idea, I cannot make any sense of what
> it experimentally means to have an event with a probability p.

For a single event, it means almost nothing.
For a large number of events, it means roughly the
relative frequency, but with the possibility of deviating by a not
precisely specified amount.

> Or
> better, why every time we do statistics on real data, it works out !

The sense it makes is the following: If you have a sound probabilistic
model of a multitude of independent events e_i with assigned
probability p you'd be surprised if the frequency of events is not
close to p within a small multiple of sqrt(p(1-p)/N). And you'd probably
rather try to explain away a rare occurrence (a brick going upwards due
to fluctuations) by assuming a hidden, unobserved cause (someone throwing
it) rather than just accept it as something within your probabilistic
model. The way probabilities are used in practice is always as rough guides
of what to expect, but not as statements with a 100% exact meaning.
I wrote a paper on surprise:
A. Neumaier,
Fuzzy modeling in terms of surprise,
Fuzzy Sets and Systems 135 (2003), 21-38.
http://www.mat.univie.ac.at/~neum/papers.html#fuzzy
that helps understand the fuzziness inherent in our concepts of reality.
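
Numerically, the scale sqrt(p(1-p)/N) mentioned above is already small for
modest N; the values below are assumed only for illustration.

import math

p, N = 0.5, 10_000
sigma = math.sqrt(p * (1 - p) / N)
print(sigma)                          # 0.005
print(p - 3 * sigma, p + 3 * sigma)   # 0.485 .. 0.515

Observing a frequency outside a few such sigmas is "surprising", but the
theory never declares it impossible.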

>>>Your argument of drawing a real number out of [0,1] doesn't apply
>>>here, because the outcome of an experiment is never a true real number
>>>(most of which cannot even be written down !). There are always a
>>>finite number of possibilities in the outcome of an experiment
>>>(otherwise it couldn't be written onto a hard disk!).
>>
>>This does not really help. Let eps be the "extremely small probability"
>>according to your proposal. Pick N >> - log eps / log 2, and
>>run a series of N coin tosses. You get the result x_1 ... x_N, say.
>>Although you really obtained exactly this result, the probability of
>>obtaining it was only 2^{-N} << eps. Thus your proposal amounts to
>>proving the impossibility of tossing coins more than a fairly small
>>number of times
>
>
> Maybe that "fairly small number of times" is in fact a very big
> number, and in our universe there's not enough matter and time to do
> all this tossing around!

Oh, this is possible only if the universal eps is so tiny that
almost everything is possible - against the use you wanted to make
of it in real life! An ordinary person would take the eps to justify
their unconcious probabilistic models for assessing ordinary reality
quite high, for events that are not very repetitive probably at
1e-6 or so. (Even engineers who are responsible for the safety
of buildings, airplanes, etc.) This can only be justified with
being prepared to take the risk, but not with an objective cutoff.


Arnold Neumaier


Patrick Van Esch

Apr 1, 2004, 5:17:21 AM

Arnold Neumaier <Arnold....@univie.ac.at> wrote in message news:<c4fh54$uur$1...@lfa222122.richmond.edu>...

>
> For a single event, it means almost nothing.
> For a large number of events, it means roughly the
> relative frequency, but with a possibility of deviating to a not
> precisely specified amount.

I know, and that's what one usually does, and the funny thing is that
the deviation is not very high! When experimental results follow the
predictions of probability theory in this sense, I will call the
experiment a "statistically correct" one.
But you realize the problems here:
first of all, any observation, even 10000 flips of a coin, taken as a
whole is a *single event*, which has a high
probability when the event is "statistically correct" and a low
probability when it is not. We observe that experimental results of
this single event which are not statistically correct *never* occur.
This, to me, is a kind of miracle if there's not some kind of "law"
stating exactly this. Now I know the statistical physics explanation
of course: the high and low probabilities of these combined events
just reflect the fact that we deal with subsets of events with
different sizes: the subset of all sequences of 10000 heads and tails
which have an average around 5000 heads and 5000 tails is much bigger
than the subset with 10000 heads, which essentially contains just one
sequence. So if you hit one sequence "blindly" you'd probably hit one
of the biggest subsets. In fact, when you are in such a case, you
don't really need probability theory as such, it is just a matter of
*counting* equivalent points.
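
Putting numbers to that counting argument (for N = 10000 tosses):

import math

N = 10_000
log10_balanced = (math.lgamma(N + 1) - 2 * math.lgamma(N // 2 + 1)) / math.log(10)
print(log10_balanced)      # ~3008: about 10^3008 sequences with exactly 5000 heads
print(N * math.log10(2))   # ~3010: log10 of the total number of sequences

The single all-heads sequence is one point against roughly 10^3008 balanced
ones, which is why "picking blindly" essentially never lands on it.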
However, I have much more difficulty with quantum mechanical
probability predictions. After all, these are more fundamental
predictions, because they are not the result of "picking blindly into a big
set of possibilities". So in order to make sense of the QM probability
predictions, we need a better understanding of exactly what is meant
by the frequentist interpretation of events with probability p. Now I
know that the frequentist interpretation is not the favourite one
amongst theoreticians, but I don't see how, as an experimentalist, you
can get around it.


cheers,
Patrick.

Aaron Denney

Apr 1, 2004, 10:15:23 AM
On 2004-03-30, Patrick Van Esch <van...@ill.fr> wrote:
> The Bayesian interpretation of probabilities is a nice information
> theoretical construct of course, I'm not expert enough in it to see if
> the problem also exists there. But I have difficulties with people
> who deny the frequentist interpretation: after all, this is - to me -
> the only way to make a connection to experimental results ! How do
> you verify differential cross sections ? You do a number of
> experiments, and then you make the HISTOGRAM (counting the number of
> occurences) of the outcomes which you compare with your calculated
> probability density. That's nothing else but applying the frequentist
> interpretation, no ?

You can view it that way, but it comes nicely out of the Bayesian
interpretation too. Basically, as the number of samples goes up, the
probability of getting something markedly different than having the
right frequencies gets incredibly small.
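
As a minimal sketch of that Bayesian reading (Python; a flat Beta(1,1)
prior and the observed counts are assumed purely for illustration):

import math

a, b = 1.0, 1.0                   # flat prior on the unknown probability p
heads, tails = 5217, 4783         # assumed data from 10,000 tosses
a, b = a + heads, b + tails       # Beta-Bernoulli posterior update

post_mean = a / (a + b)
post_std = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
print(round(post_mean, 4), round(post_std, 4))   # ~0.5217 +/- ~0.005

As the sample grows, the posterior concentrates around the observed
frequency - the Bayesian counterpart of the frequentist histogram check.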

--
Aaron Denney
-><-

Jerzy Karczmarczuk

Apr 1, 2004, 10:45:09 AM
Patrick Van Esch wrote:
> We observe that experimental results of [[ 10000 coin flip example]]

> this single event which are not statistically correct *never* occur.
> This, to me, is a kind of miracle if there's not some kind of "law"
> stating exactly this. Now I know the statistical physics explanation
> of course: the high and low probabilities of these combined events
> just reflect the fact that we deal with subsets of events with
> different sizes: the subset of all sequences of 10000 heads and tails
> which have an average around 5000 heads and 5000 tails is much bigger
> than the subset with 10000 heads, which essentially contains just one
> sequence. So if you hit one sequence "blindly" you'd probably hit one
> of the biggest subsets. In fact, when you are in such a case, you
> don't really need probability theory as such, it is just a matter of
> *counting* equivalent points.
> However, I have much more difficulty with quantum mechanical
> probability predictions. After all, these are more fundamental
> predictions, because they are not the result of "picking blindly into a big
> set of possibilities". So in order to make sense of the QM probability
> predictions, we need a better understanding of exactly what is meant
> by the frequentist interpretation of events with probability p. Now I
> know that the frequentist interpretation is not the favourite one
> amongst theoreticians, but I don't see how, as an experimentalist, you
> can get around it.

The "miracle" is not only the cardinality of contributing sets of events,
but also the *ergodicity*, which makes the statistics actually work in
practice.

Now, let's speculate (without selling our multi-souls to Multi-Devil...)
about a variant of many-world model of the Quantum Reality. Imagine that
there ARE effectively many worlds, each of them forming a fibre upon the
configuration substrate. When an electron may pass through a double slit,
in one subset of worlds it passes by one, in the other - through the other.
I am *NOT* speaking about the decoherence. No "splitting" takes place, the
fibres *are there*, and as in many other fibrous space you may choose your
fibres as you wish. Since this picture is embedded within a normal quantum
evolution picture, you may imagine also the "fusion"; you follow two different
fibres, but which finally end-up as one, since this is just drawing lines
in space, not physics.

And now, the dynamics in this fibrous space is *ERGODIC*. A kind of chaos in
multi-space... Following one fibre long enough to be able to repeat one
experiment (with identical preparation) many times, should give you the
distribution obtained from many fibres, from a "statistical ensemble" of them.

And you get a probabilistic model for the quantum reality. Of course, I didn't
really explain anything, I just shifted the focus from one set of words to
another. But such a "model" might rise its head again, when in some unspecified
future the experimentalists making very, very delicate measurements discover
that there are non-linear disturbances of the linear superposition principle.

I believe that one day the actual quantum theory will be replaced by something
else. Of course we won't get back to classical physics. If we discover some
non-linearities, then we will probably have to change our
probabilistic/frequentist or whatever interpretation, but let's wait...

Jerzy Karczmarczuk


Patrick Powers

Apr 4, 2004, 8:37:02 AM

van...@ill.fr (Patrick Van Esch) wrote in message news:<c23e597b.04033...@posting.google.com>...


> Now I
> know that the frequentist interpretation is not the favourite one
> amongst theoreticians, but I don't see how, as an experimentalist, you
> can get around it.
>
>
> cheers,
> Patrick.

Actually, I think experimentalists use the Bayesian approach. Usually
an experiment is undertaken with the expectation of some result. If
the results do not match this expectation, the equipment is tweaked
until the expected result is obtained. If this doesn't work either
the experiment is dropped or (rarely) some other explanation is found.

It is also true that physics experiments and games of chance are
deliberately constructed so that the frequentist model holds. Other
applications of the frequentist model, such as predictions of the
weather, are a looser application of statistics. In many sciences the
application of statistics falls in the "better than nothing" category.

What problem do theoretical physicists have with the frequentist
approach? I don't see how else QED could be interpreted. For
cosmology it is somewhat questionable.

Italo Vecchi

Apr 4, 2004, 8:37:00 AM


nos...@de-ster.demon.nl (J. J. Lodder) wrote in message news:<1gba5ru.1be...@de-ster.xs4all.nl>...

> All coins are quantum coins, for we live in a quantum world.

Well said.

> In practice it may be quite hard to say whether or not
> a coin throw may be considered to be 'classical'.
> Quantum mechanics may come in in the precise timing
> of the twitching of your fingers, on the molecular level,
> when flipping the coin.
>

Quantum mechanics also comes into coins through the impossibility of
fixing/determining the initial conditions.
It would be interesting to have estimates for the growth of initial
"quantum scale" indeterminacies in classical models of chaotic
physical systems.
Consider for example a set of macroscopic balls bouncing in a box. One
may assume that deviations grow by a factor 10 between bounces (this
is based on my experience as a billiard player) in a reasonable
position-momentum norm. If the above assumption is realistic the
uncertainty principle prevents macroscopically accurate* deterministic
forecasts spanning more than a few dozen bounces.
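
A rough back-of-the-envelope check of this estimate (a sketch only; the
ball mass, the ~1 s time scale and the per-bounce amplification factor are
assumptions, not measurements):

import math

hbar = 1.05e-34          # J*s
m = 0.2                  # kg, assumed billiard-ball mass
growth_per_bounce = 10.0 # assumed amplification factor per bounce
target = 1e-3            # 1 mm: a "macroscopically relevant" deviation

# Splitting the Heisenberg uncertainty optimally between position and
# velocity over a time of order 1 s gives a minimal quantum-limited spread
# of about sqrt(hbar * t / m) ~ 2e-17 m for these numbers.
delta_x0 = math.sqrt(hbar * 1.0 / m)

n_bounces = math.log(target / delta_x0) / math.log(growth_per_bounce)
print(round(n_bounces))  # about 14

With a growth factor of 10 per bounce, a quantum-limited initial spread of
~1e-17 m reaches millimetre scale after roughly a dozen bounces, which is in
line with the estimate above.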

IV

* of the kind that's relevant in actual billiard games.

Bartosz Milewski

unread,
Apr 4, 2004, 8:37:04 AM4/4/04
to

"Arnold Neumaier" <Arnold....@univie.ac.at> wrote in message

news:c4fh54$uur$1...@lfa222122.richmond.edu...


> The sense it makes is the following: If you have a sound probabilistic
> model of a multitude of independent events e_i with assigned
> probability p you'd be surprised if the frequency of events is not
> close to p within a small multiple of sqrt(p(1-p)/N). And you'd probably
> rather try to explain away a rare occurrence (a brick going upwards due
> to fluctuations) by assuming a hidden, unobserved cause (someone throwing
> it) rather than just accept it as something within your probabilistic
> model. The way probabilities are used in practice is always as rough guides
> of what to expect, but not as statements with a 100% exact meaning.
> I wrote a paper on surprise:
> A. Neumaier,
> Fuzzy modeling in terms of surprise,
> Fuzzy Sets and Systems 135 (2003), 21-38.
> http://www.mat.univie.ac.at/~neum/papers.html#fuzzy
> that helps understand the fuzziness inherent in our concepts of reality.

This brings about an interesting possibility that the cutoff is anthropic.
Things that are statistically improbable (from the point of the theory we
are testing), even if they happen, are rejected. Conversely, if too many
improbable things happen, we reject the theory. So there is no correct or
incorrect theory (as long as it's self-consistent), only the currently
accepted one. Moreover, a theory once believed to be experimentally
confirmed (within a very good margin of error) might at some point be
refuted by another set of identical experiments.

There is a very good example of this phenomenon--the evolution of the measured
speed of light since 1935. After the first publication of the Michelson
measurement, up till 1947, all the measurements were lower than the currently
accepted value by more than the (admitted) experimental error (see the diagram
at http://www.sigma-engineering.co.uk/light/lightindex.shtml). This was
probably caused by the experimenters rejecting data points that were too
far from the then-accepted Michelson number. Notice that even here I'm not
considering the possibility that there was a 12-year-long statistical
fluctuation ;-)
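
As a numerical aside on the sqrt(p(1-p)/N) yardstick quoted above (a quick
sketch with made-up numbers, not part of the argument): frequencies
essentially never stray from p by more than a few such standard errors,
which is exactly why a twelve-year run of measurements all on the same side
of the true value, by more than the stated error, is so hard to read as a
fluctuation.

import math, random

# For N independent events of probability p, the observed frequency should
# lie within a small multiple of sqrt(p*(1-p)/N) of p almost all the time.
p, N, trials = 0.3, 10_000, 1_000
sigma = math.sqrt(p * (1 - p) / N)

outside = 0
for _ in range(trials):
    freq = sum(random.random() < p for _ in range(N)) / N
    if abs(freq - p) > 3 * sigma:
        outside += 1

print(sigma)             # ~0.0046
print(outside / trials)  # ~0.003: 3-sigma excursions are rare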


Danny Ross Lunsford

unread,
Apr 5, 2004, 3:02:06 PM4/5/04
to
Patrick Powers wrote:

> Actually, I think experimentalists use the Bayesian approach. Usually
> an experiment is undertaken with the expectation of some result. If
> the results do not match this expectation, the equipment is tweaked
> until the expected result is obtained. If this doesn't work either
> the experiment is dropped or (rarely) some other explanation is found.

I've often worried that the vaunted accuracy of QED is illusory, that
is, data are used to "tune" the equipment.

-drl

Italo Vecchi

unread,
Apr 6, 2004, 1:55:52 PM4/6/04
to
nos...@de-ster.demon.nl (J. J. Lodder) wrote in message news:<1gba5ru.1be...@de-ster.xs4all.nl>...

> All coins are quantum coins, for we live in a quantum world.

Well said.

> In practice it may be quite hard to say whether or not
> a coin throw may be considered to be 'classical'.
> Quantum mechanics may come in in the precise timing
> of the twitching of your fingers, on the molecular level,
> when flipping the coin.
>

Quantum mechanics also enters coin tossing through the impossibility of
fixing/determining the initial conditions exactly.
In a chaotic system an initial "quantum scale" indeterminacy will
quickly grow macroscopic, as highlighted in "Newtonian Chaos +
Heisenberg Uncertainty = macroscopic indeterminacy" by Barone, S.R.,
Kunhardt, E.E., Bentson, J., and Syljuasen, A., American Journal of
Physics, Vol. 61, No. 5, May 1993.
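
The exponential blow-up is easy to see in any toy chaotic map (an
illustration only, not the model used in the cited paper; the 1e-17 figure
just stands in for a "quantum scale" separation):

from decimal import Decimal, getcontext

getcontext().prec = 60   # enough digits to resolve a 1e-17 separation

# Fully chaotic logistic map x -> 4x(1-x); two trajectories that start a
# "quantum scale" distance apart.
r = Decimal(4)
x = Decimal("0.3")
y = Decimal("0.3") + Decimal("1e-17")

steps = 0
while abs(x - y) < Decimal("0.1"):   # stop once the separation is macroscopic
    x, y = r * x * (1 - x), r * y * (1 - y)
    steps += 1

print(steps)   # roughly 50-60 iterations

The separation grows roughly like 2^n for this map, so a 1e-17 initial
indeterminacy becomes of order one after a few dozen steps.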

Cheers,

IV

r...@maths.tcd.ie

unread,
Apr 6, 2004, 1:57:27 PM4/6/04
to
Arnold Neumaier <Arnold....@univie.ac.at> writes:


>> r...@maths.tcd.ie wrote in message news:<c3njd0$295l$1...@lanczos.maths.tcd.ie>...

>>>For example, you'll find in many probability
>>>books and hear from the mouths of top probability theorists the
>>>claim that no process can produce random, uniformly distributed
>>>positive integers, but that processes can produce random uniformly
>>>distributed real numbers between zero and one (e.g. toss a fair
>>>coin exactly aleph_0 times to get the binary expansion).

>This has a very simple reason: There is no consistent definition of
>random, uniformly distributed positive integers, while there is
>one for random uniformly distributed real numbers between zero and one.

Please give the definition you claim exists.

>This is a purely mathematical statement independent of any
>interpretation!

Wow.

>And of course, when people say 'produce' they mean
>'produce in theory', or if they mean 'produce in practice' they
>have in mind that it is produced only approximately.

My point was that probability distributions and methods of
generating random numbers are not in one-to-one correspondence, and
I gave an example of a method of generating integers which had
no corresponding probability distribution. I too meant "produce
in theory", since obviously we can't use the axiom of choice in
real life, but if we want to understand what probability theory
is and isn't about (in theory), then we shouldn't make mistakes
on this fundamental point.

R.

r...@maths.tcd.ie

unread,
Apr 6, 2004, 5:52:47 PM4/6/04
to
frisbie...@yahoo.com (Patrick Powers) writes:

>r...@maths.tcd.ie wrote in message news:<c3njd0$295l$1...@lanczos.maths.tcd.ie>...
>>

>> Right, so actually, the frequentist interpretation of probability
>> suffers from the same disease that the many-worlds interpretation
>> does, or at least the non-Bayesian one. In many worlds, the problem
>> is that there's no way to justify dismissing worlds with a small
>> quantum amplitude as being rare, and in the frequentist
>> version of probability theory, there's no way to justify dismissing
>> outcomes with small probability as being rare.
>>
>Quantum theory is a probabilistic theory and extremely unlikely events
>are not excluded, nor should they be. So this is a property of the
>theory, not the interpretation. It seems to me that an interpretation
>that excluded such events absolutely would be in error.

I'm not saying that they should be excluded; as a good Bayesian
I would merely say that the information available to me leads
me to expect that they won't happen, although it's not impossible.

>> The frequentist interpretation of probability suffers from worse
>> diseases as well. For example, you'll find in many probability
>> books and hear from the mouths of top probability theorists the
>> claim that no process can produce random, uniformly distributed
>> positive integers, but that processes can produce random uniformly
>> distributed real numbers between zero and one (e.g. toss a fair
>> coin exactly aleph_0 times to get the binary expansion).

>Yes these claims as stated are contradictory. I suspect that the
>definitions you are using are imprecise. The word "process" implies
>computability, that the process is finite. A real number is cleverly
>defined as a limit of a finite process. So a real number is
>computable in this sense, that it can be approximated as closely as
>one likes in finite time. The problem with your proof is that as the
>real number is computed the choice of cosets changes with each step so
>the process does not converge to an integer.

You are right; there is no convergence and there's no way to
actually compute such an integer or any approximation to it
in a finite number of operations. Modern mathematics, however,
allows us to deal with infinite sets without having to always
consider what can and can not be done in a finite number of
operations. The set of (not necessarily continuous) functions
from R to R has a cardinality greater than R itself, for example,
although this fact is of no relevance to finite creatures like
us. We don't *need* to define reals in terms of limits (for example,
we can define them in terms of Dedekind cuts).

So, rather than considering what's happening with the
cosets as being something which happens while the number is being
generated, I suppose that some acquaintance of mine can merely
give me a random number between 0 and 1, and then I convert
it into an integer. You, for example, might tell me 0.5 which
you can do in finite time, having generated it by your own
algorithm, which might be "just pick the middle number". If my
choice function is independent of your choice of number, then the
integer corresponding to 0.5 will be as random as the integer
corresponding to any other real number.

On the other hand, your point is well taken; generating reals
between 0 and 1 is itself impossible in practice.

>Using the axioms of choice and infinity then one can indeed choose a
>natural number at random. There are some rather strange consequences.
> It is then possible to prove that each number chosen in this way will
>be greater than all such previously chosen numbers with probability
>one. Let N be the greatest such number chosen so far. Then there are
>finitely many natural numbers less than or equal to N but infinitely
>many greater than N. So the next number chosen will be greater than N
>with probability one. Note that our ostensibly random sequence is
>strictly increasing with probability one.

If we consider infinite processes, then the notion that probabilities of
one or zero mean anything goes out the window. Note that if I
tell you the third "random" integer first, then with probability
one the second is bigger than it, so with probability one the sequence
is not strictly increasing.

>This is not the only
>bizarre consequence of the axiom of choice: see the well-known
>Banach-Tarski sphere paradox. So I should think a physicist would do
>well to be wary of the axiom of choice as tending to produce
>non-physical results.

Indeed; these are more mathematical facts than physical ones.
Also, you don't need the axiom of choice to produce things like
Banach-Tarski. Take the set of complex numbers
A = {\sum a_n exp(i*n) : a_n in N, only finitely many nonzero},
where N is the set of non-negative integers. It can be broken into a
disjoint union A = B union C, where B is A+1 (the sums with a_0 >= 1)
and C is exp(i)*A (the sums with a_0 = 0); the union is disjoint because
exp(i) is transcendental, so each element of A has a unique coefficient
sequence. Both B and C are exactly the same shape and size as A, B being
a translated version of A and C being a rotated version of A, so A can
be broken into two parts, each as "big" as A itself.

What I'm saying is that physicists don't need the axiom of choice
to get "unphysical" results. Infinite sets, which we use all the
time, are sufficient.

>The frequentist approach does not assume the axiom of choice and makes
>no use of transfinite mathematics or completed limits. If it did, the
>problems you mention would in fact arise.

Well, the problems I mentioned arise if we believe the axiom
of choice, the "experiment generating random numbers" version of
probability theory, and the idea that we can deal with infinite
sets (an axiom of infinity, eg "there exist infinite sets"), all
at the same time.

R.

Arnold Neumaier

unread,
Apr 6, 2004, 5:53:30 PM4/6/04
to
Bartosz Milewski wrote:
> "Arnold Neumaier" <Arnold....@univie.ac.at> wrote in message
> news:c4fh54$uur$1...@lfa222122.richmond.edu...
>
>>The sense it makes is the following: If you have a sound probabilistic
>>model of a multitude of independent events e_i with assigned
>>probability p you'd be surprised if the frequency of events is not
>>close to p within a small multiple of sqrt(p(1-p)/N). And you'd probably
>>rather try to explain away a rare occurrence (a brick going upwards due
>>to fluctuations) by assuming a hidden, unobserved cause (someone throwing
>>it) rather than just accept it as something within your probabilistic
>>model. The way probabilities are used in practice is always as rough guides
>>of what to expect, but not as statements with a 100% exact meaning.
>>I wrote a paper on surprise:
>> A. Neumaier,
>> Fuzzy modeling in terms of surprise,
>> Fuzzy Sets and Systems 135 (2003), 21-38.
>> http://www.mat.univie.ac.at/~neum/papers.html#fuzzy
>>that helps understand the fuzziness inherent in our concepts of reality.
>
>
> This brings about an interesting possibility that the cutoff is anthropic.

Not only anthropic, but subjective. Different people have different
views on the matter and are prepared to take different risks.

> Things that are statistically improbable (from the point of the theory we
> are testing), even if they happen, are rejected. Conversely, if too many
> improbable things happen, we reject the theory. So there is no correct or
> incorrect theory (as long as it's self-consistent), only the currently
> accepted one.

Positrons were observed before they were predicted by theory, but the
observers didn't believe the phenomenon was real. Rather than face
ridicule with a premature publication they ignored their evidence.
On the other hand, cold fusion had a different story...

We take a small probability p seriously only if the associated phenomena
are repeatable frequently enough that an approximate frequentist
interpretation makes sense.


Arnold Neumaier


eb...@lfa221051.richmond.edu

unread,
Apr 7, 2004, 6:45:05 AM4/7/04
to

In article <9511688f.04032...@posting.google.com>,
Patrick Powers <frisbie...@yahoo.com> wrote:

>Using the axioms of choice and infinity then one can indeed choose a
>natural number at random

From context, let me add "with a uniform distribution" -- that is,
with all natural numbers equally probable.

Is this statement meant to be obvious? It's not at all clear to me
how the axiom of choice says anything about probabilities.

If it's not meant to be obvious, but is nonetheless true, can someone
point me to an appropriate place to read more on this?

-Ted

--
[E-mail me at na...@domain.edu, as opposed to na...@machine.domain.edu.]

Daryl McCullough

unread,
Apr 8, 2004, 2:26:49 PM4/8/04
to
eb...@lfa221051.richmond.edu says...

>In article <9511688f.04032...@posting.google.com>,
>Patrick Powers <frisbie...@yahoo.com> wrote:
>
>>Using the axioms of choice and infinity then one can indeed choose a
>>natural number at random
>
>From context, let me add "with a uniform distribution" -- that is,
>with all natural numbers equally probable.
>
>Is this statement meant to be obvious? It's not at all clear to me
>how the axiom of choice says anything about probabilities.
>
>If it's not meant to be obvious, but is nonetheless true, can someone
>point me to an appropriate place to read more on this?

I'm cross-posting to sci.math, because maybe a mathematician
has something to add. Patrick's point is not complicated
to prove, but it's hard to understand how to interpret it.

1. Pick an enumeration of all rational numbers in [0,1).
For example, 0, 1/2, 1/3, 2/3, 1/4, 3/4, 1/5, 2/5,
3/5, 4/5, ... Let q_n be the nth rational number, with q_0 = 0.

2. Define an equivalence relation on real numbers between
0 and 1: x ~~ y if and only if |x-y| is rational.

3. Using the axiom of choice, construct a set S by picking
one element out of every equivalence class.

4. Define S_n to be the translate of S by q_n modulo 1, that is,
S_n = { x in (0,1) | x - q_n is in S, or x - q_n + 1 is in S }.

Note that the S_n are pairwise disjoint (S contains exactly one point of
each equivalence class) and S_0 union S_1 union S_2 union ... = (0,1).

5. So here's how you generate a random nonnegative integer: Generate
a random real x in (0,1), and let your random integer be that n such
that x is an element of S_n.

There is no probability distribution on the possible outcomes of this
process, so it isn't a "uniform distribution on the integers" in a
measure-theoretic sense. But you can argue by symmetry that in some
sense every n is "equally likely", because each of the sets S_n is
identical to S, except for a translation (mod 1).
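
To spell out the "no probability distribution" claim (this is the standard
Vitali argument, restated here for completeness): the probability of
obtaining n would have to be the Lebesgue measure of S_n, and the S_n are
all translates (mod 1) of the same set S, so countable additivity and
translation invariance would force

1 = \lambda\bigl((0,1)\bigr)
  = \lambda\Bigl(\bigcup_{n} S_n\Bigr)
  = \sum_{n} \lambda(S_n)
  = \sum_{n} \lambda(S),

which is impossible: the sum is 0 if \lambda(S) = 0 and infinite if
\lambda(S) > 0. Hence S is not Lebesgue measurable, and "the probability
that the random integer equals n" is undefined for every n.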

--
Daryl McCullough
Ithaca, NY

Arnold Neumaier

unread,
Apr 8, 2004, 2:26:54 PM4/8/04
to
r...@maths.tcd.ie wrote:
> Arnold Neumaier <Arnold....@univie.ac.at> writes:

>>There is no consistent definition of
>>random, uniformly distributed positive integers, while there is
>>one for random uniformly distributed real numbers between zero and one.
>
> Please give the definition you claim exists.

Just to know what kind of answer you'd be prepared to accept,
please let me know what you regard as the definition of random
binary numbers with equal probabilities of 0 and 1. Then I'll
be able to answer your more difficult question satisfactorily.


Arnold Neumaier

Arnold Neumaier

unread,
Apr 8, 2004, 6:38:28 PM4/8/04
to
eb...@lfa221051.richmond.edu wrote:
> In article <9511688f.04032...@posting.google.com>,
> Patrick Powers <frisbie...@yahoo.com> wrote:
>
>
>>Using the axioms of choice and infinity then one can indeed choose a
>>natural number at random
>
>
> From context, let me add "with a uniform distribution" -- that is,
> with all natural numbers equally probable.

There is no uniform distribution on natural numbers.
There is no way to make formal sense of the statement
'all natural numbers equally probable'.
Thus this 'context' is logically meaningless.
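
For completeness, the one-line argument: a countably additive probability P
with P({n}) = c for every natural number n would have to satisfy

1 = P(\mathbb{N}) = \sum_{n=1}^{\infty} P(\{n\}) = \sum_{n=1}^{\infty} c,

which fails both for c = 0 (the sum is 0) and for c > 0 (the sum diverges).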

The natural least informative distribution on natural numbers
is a Poisson distribution, but here one has to be at least informed
about the mean.


Arnold Neumaier


Phillip Helbig---remove CLOTHES to reply

unread,
Apr 8, 2004, 6:40:54 PM4/8/04