
Half-life question


ch...@uwpg02.uwinnipeg.ca

May 16, 1992, 12:24:03 AM
Hi. I'm a high school chem teacher. My colleague, a Physics
teacher, thought the following test question was a little
too vague to answer. What do you think?

Question: A single atom of a substance known to be unstable
with a half-life of 1 s is produced. After an interval of
1 s the atom has not decayed. Is it now more likely to decay
sometime before the end of the second interval (between
1 s and 2 s) ? Explain.


Thanks for any info. Marsh

Internet: CH...@UWPG02.UWINNIPEG.CA

Random Scurve

May 17, 1992, 1:59:35 AM
ch...@uwpg02.uwinnipeg.ca writes:

>Internet: CH...@UWPG02.UWINNIPEG.CA

I don't see what problem your physics teacher friend had with the
question (though as a physicist myself I appreciate your
capitalization of the word Physics. :D ) The question seems to be
probing the student's knowledge of independent assortment, which is
rather fundamental in many fields, including nuclear physics. If this
is what you were trying to test the student on, then keep the
question. If a student has a problem with the question, encourage
him to write his own, more clearly stated version, and then use it. A
little feedback never hurt anyone.
ma...@ugcs.caltech.edu


SCOTT I CHASE

May 17, 1992, 3:59:23 AM
In article <maxim.706082375@molest>, ma...@molest.ugcs.caltech.edu (Random Scurve) writes...

>ch...@uwpg02.uwinnipeg.ca writes:
>
>>Hi. I'm a high school chem teacher. My colleague, a Physics
>>teacher, thought the following test question was a little
>>too vague to answer. What do you think?
>
>>Question: A single atom of a substance known to be unstable
>> with a half-life of 1 s is produced. After an interval of
>> 1 s the atom has not decayed. Is it now more likely to decay
>> sometime before the end of the second interval (between
>> 1 s and 2 s) ? Explain.
>
>I don't see what problem your physics teacher friend had with the
>question (though as a physicist myself I appreciate your

I do. How can an atom be more or less likely to decay in the second
time interval depending on whether or not it decayed in the first time
interval? If it decayed in the first time interval, the experiment
is over. There is no answer to the question "what is the probability
of an atom decaying in the second time interval after it decayed in the
first time interval?", so there is no meaning to the question of whether
the probability is more or less if the atom did not decay.

Nevertheless, the *intended* question is a good one. If you flip
a coin N times, and get N heads, what is the probability of getting
tails on the next flip? This question addresses very common misconceptions
about probability that inexperienced students tend to have.
So I suggest that the question be recast into usable form.
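
For what it's worth, the coin version is easy to check by brute force.
Here is a small Python sketch (the fair-coin assumption is, of course, the
whole point of the exercise, and N = 5 is an arbitrary choice):

from itertools import product

# Enumerate every equally likely sequence of N+1 fair-coin flips, keep the
# ones that start with N heads, and see how often the last flip is tails.
N = 5
sequences = list(product("HT", repeat=N + 1))
all_heads_so_far = [s for s in sequences if s[:N] == ("H",) * N]
p_tails_next = sum(s[N] == "T" for s in all_heads_so_far) / len(all_heads_so_far)
print(p_tails_next)   # 0.5, no matter what N is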

-Scott

--------------------
Scott I. Chase "The question seems to be of such a character
SIC...@CSA2.LBL.GOV that if I should come to life after my death
and some mathematician were to tell me that it
Do you like my new quote? -> had been definitely settled, I think I would
immediately drop dead again." - Vandiver

John Whitmore

May 18, 1992, 6:03:05 PM

>Question: A single atom of a substance known to be unstable
> with a half-life of 1 s is produced. After an interval of
> 1 s the atom has not decayed. Is it now more likely to decay
> sometime before the end of the second interval (between
> 1 s and 2 s) ? Explain.

First, let me congratulate the teacher in question for seeking
out second opinions on test question clarity.

Second, the answer to this question MIGHT be YES.
Probability (initially) that the particle decays between
1 s and 2 s is 0.25; after observing the first second and
seeing that it has not yet decayed, the probability THEN that
the particle will decay between 1 s and 2 s is 0.5.

If, however, it is meant to find whether the probability
of decay before the end of the second second (given that it did
not decay during the first second) is greater than the probability
of the particle decaying during THE FIRST SECOND, then the answer
is NO.

As stated, the question admits both possible interpretations.
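
For concreteness, both readings can be put side by side with a few lines of
Python (nothing but the usual exponential decay law, k = ln 2 / half-life,
is assumed here; the simulation size is arbitrary):

import math
import random

half_life = 1.0                     # seconds
k = math.log(2) / half_life         # so P(atom survives to time t) = exp(-k*t)

p_first = 1 - math.exp(-k * 1.0)                         # decay in [0 s, 1 s]: 0.50
p_second = math.exp(-k * 1.0) - math.exp(-k * 2.0)       # decay in [1 s, 2 s], judged at t = 0: 0.25
p_second_given_survival = p_second / math.exp(-k * 1.0)  # decay in [1 s, 2 s], given it lasted 1 s: 0.50
print(p_first, p_second, p_second_given_survival)

# The same numbers from a crude simulation of many hypothetical atoms.
n = 100000
lifetimes = [random.expovariate(k) for _ in range(n)]
in_second_interval = sum(1.0 <= t < 2.0 for t in lifetimes)
survived_first = sum(t >= 1.0 for t in lifetimes)
print(in_second_interval / n, in_second_interval / survived_first)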

John Whitmore

Richard M. Mathews

May 18, 1992, 5:05:59 PM
ch...@uwpg02.uwinnipeg.ca writes:
>Question: A single atom of a substance known to be unstable
> with a half-life of 1 s is produced. After an interval of
> 1 s the atom has not decayed. Is it now more likely to decay
> sometime before the end of the second interval (between
> 1 s and 2 s) ? Explain.

"More likely" than what? Than the probability determined at the
beginning of the experiment for it to decay in the [0s,1s] interval?
Than the probability determined at the beginning of the experiment
for it to decay in the [1s,2s] interval? Than the probability that it
will decay in the [1s,2s] interval if it *had* decayed in the [0s,1s]
interval (which was apparently Scott Chase's interpretation)? Than the
probability that a student will pass the test? You can't ask a "more"
question without specifying "more than what".

Richard M. Mathews Lietuva laisva = Free Lithuania
Internet: ric...@locus.com Brivu Latviju = Free Latvia
UUCP: ...!uunet!lcc!richard Eesti vabaks = Free Estonia
MIL/BITNET: richard%l...@UUNET.UU.NET WE DID IT!!!

bal...@ohstpy.mps.ohio-state.edu

May 19, 1992, 8:55:25 AM
In article <1992May18.2...@u.washington.edu>, wh...@milton.u.washington.edu (John Whitmore) writes:
> In article <15MAY92....@uwpg02.uwinnipeg.ca> ch...@uwpg02.uwinnipeg.ca writes:
>
>>Question: A single atom of a substance known to be unstable
>> with a half-life of 1 s is produced. After an interval of
>> 1 s the atom has not decayed. Is it now more likely to decay
>> sometime before the end of the second interval (between
>> 1 s and 2 s) ? Explain.
>
> First, let me congratulate the teacher in question for seeking
> out second opinions on test question clarity.
>
> Second, the answer to this question MIGHT be YES.
> Probability (initially) that the particle decays between
> 1 s and 2 s, is 0.25; after observing the first second and
> seeing that it has not yet decayed, the probability THEN that
> the particle will decay between 1 s and 2 s is 0.5.
>
> [stuff deleted]
> John Whitmore


Hmm. Took me a few minutes to figure out your point but I think
you are right. Your probability of .25 arises because decays in the
first second are not counted. I.e. starting off with 100 particles,
50 decay in the first second, then 25 in the second.
Talking about probabilities with one particle is not intuitive
to me. Either the particle decays or it doesn't. You can't tell when,
where, or why until after it happens. It's only with a large group
of particles that probabilities become useful. Let's face it, if
it were your child's Schroedinger cat in the box you wouldn't describe
the cat's wave function to the child. You'd look in the box and see
if it was time to buy a new cat.
As for this question being given to high school students, it's
a good question to motivate discussion in the classroom and perhaps
as homework, but it doesn't strike me as a very fair test question
simply because of the different interpretations of its meaning that
are possible, i.e. which probability are you talking about? Versus
the whole ensemble of particles you started out with, or versus only those
particles that survived the first second? (no pun intended)
After all, in physics measurements we don't sit around waiting
for one particle to decay. We always deal with multiple events (or
at least we hope to). So probabilities take on a more concrete meaning.
I'd be willing to bet that if you asked the question in the
following way, you'd get more kids understanding it and answering it correctly.
"Given 100 identical particles with a half-life of 1 second, how many
decays would occur during the 2nd second."
Sorry for the tirade.

Mark Balbes
Dept. of Physics
Ohio State University

p.s. One more thing... With my phrasing of the question, you could
even get into a discussion on the errors of the measurements, thus
teaching the students that probabilities are not certainties (hence
the name :-) )
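
A little Monte Carlo along the lines of that rephrased question makes both
points at once -- the expected count and the run-to-run scatter (just an
illustrative Python sketch; the 1000 repeated classroom "runs" are an
arbitrary choice):

import math
import random

def decays_in_second_second(n_atoms=100, half_life=1.0):
    # Count how many of n_atoms decay between t = 1 s and t = 2 s.
    k = math.log(2) / half_life
    lifetimes = [random.expovariate(k) for _ in range(n_atoms)]
    return sum(1.0 <= t < 2.0 for t in lifetimes)

runs = [decays_in_second_second() for _ in range(1000)]
mean = sum(runs) / len(runs)
spread = (sum((r - mean) ** 2 for r in runs) / len(runs)) ** 0.5
print(mean, spread)   # about 25 decays on average, give or take 4 or so per run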

Jim Carr

May 19, 1992, 10:26:47 AM
I sent my original comments on the question directly to the requestor. My
first thoughts were that the primary way of reading the question made it
fair, but I now see the alternative interpretation. I guess it depends a
lot on something we do not know -- which is what was actually discussed
in class on this subject. My own opinion was that it was not really a
suitable question for HS unless the issues related to probability had
been discussed adequately.

I also suggested that the class do some experiments on decay with coin
flips that explore correlations such as whether a coin coming up heads
is more likely to come up tails the next time.
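
One cheap way to run that experiment before bringing in real coins is in
software (a Python sketch of the usual coin-flip decay model; treating
"heads" as "decays this second" is just my convention here):

import random

# Each surviving "atom" flips a fair coin once per second and decays on heads.
# Watch the fraction of survivors that decays in each successive second: it
# stays near 1/2, generation after generation, with no memory of the past.
atoms = 10000
for second in range(1, 6):
    decayed = sum(random.random() < 0.5 for _ in range(atoms))
    print(second, atoms, decayed / atoms)
    atoms -= decayed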

Perhaps the most productive thing to do would be to discuss the question
in class, and have the students explain why they gave the answers they
chose -- which would make it easy to decide if the student deserves credit --
or assign a short essay on the problem to those who missed it and thought
it was unfair. Then do the suitable set of experiments to eliminate any
misconceptions identified from the discussion or essays.

--
J. A. Carr | "The New Frontier of which I
j...@gw.scri.fsu.edu | speak is not a set of promises
Florida State University B-186 | -- it is a set of challenges."
Supercomputer Computations Research Institute | John F. Kennedy (15 July 60)

Hal Lillywhite

May 19, 1992, 2:10:02 PM
>Hi. I'm a high school chem teacher. My colleague, a Physics
>teacher, thought the following test question was a little
>too vague to answer. What do you think?

>Question: A single atom of a substance known to be unstable
> with a half-life of 1 s is produced. After an interval of
> 1 s the atom has not decayed. Is it now more likely to decay
> sometime before the end of the second interval (between
> 1 s and 2 s) ? Explain.

I think it's a very good question. In fact it gets at a common
misconception about probability, namely the unfounded belief that
probability somehow changes with previous outcomes (the dice are
getting "hot"). The real question is, "Is it fair to require a
particular class to answer it and have their grades depend on
the answer?"

If the teacher has adequately addressed this principle, by all means
ask the question. Otherwise I would like to see the teaching
upgraded. Too many people think that if you toss 3 heads in a row
with a coin, the chance of tails on the next toss increases. (And
professional "gamblers" who know better use this to separate people
from their money.)

Blair P. Houghton

May 19, 1992, 4:24:01 PM
>>[...is this a fair question...]

>>Question: A single atom of a substance known to be unstable
>> with a half-life of 1 s is produced. After an interval of
>> 1 s the atom has not decayed. Is it now more likely to decay
>> sometime before the end of the second interval (between
>> 1 s and 2 s) ? Explain.
>
>I think it's a very good question. In fact it gets at a common
>misconception about probability, namely the unfounded belief that
>probability somehow changes with previous outcomes (the dice are
>getting "hot"). The real question is, "Is it fair to require a
>particular class to answer it and have their grades depend on
>the answer?"

Actually, it's a difficult question, since it's dependent
on whether atomic decay is or is not a stationary stochastic
process. (A stationary process is one in which the probability
does not depend on time, or whatever parameter one uses as
the metric; a non-stationary process is one in which the
probability does depend on the metric--e.g., what is the
probability that a balloon will pop at a certain diameter,
as a function of the largest observed diameter).

The difficulty is that atomic decay may or may not be a
stationary process. (Of course, this is a difficulty only
to me, since someone probably knows with smug certainty
that it is or is not. Such is the nature of physical
discovery--wondering just what it is that nature already
knows).

The asker, however, is probably testing knowledge, here,
meaning that the class has been told (probably through
example rather than mathematical definition) that atomic
decay is a stationary process. I.e., nobody's said
"atomic decay is a stationary process," it's probably
more like "it doesn't matter how long it's been, the
probability doesn't change."

My answer, given my detailed ignorance, would be "I don't
know; I don't remember that it's ever been made clear to me
whether atomic decay is or is not a stationary process;"
and I would hope for half-credit for understanding the
principle but lacking the result.

--Blair
"Government is a stationary
process, but that's appropriate
only by dint of the thesaurus."

P.S. Haha. Fooled me. "Half-life" indicates a stationary
process. One-quarter credit.

Leigh Palmer

May 19, 1992, 6:02:14 PM

I think you mean: "Thanks for any opinion." In this case I'll support you.
I think any student who has been exposed to "the gambler's fallacy" ought
to cough up the answer (no) to this question in reflex fashion. If
scientists have to be more careful with language than you have been in this
context then there are too many nitpickers around. Your question is clear
to me.

Leigh

Leigh Palmer

May 19, 1992, 6:11:31 PM
Now that I have read the first six responses to this request I have not
changed my opinion, initially expressed before reading anyone else's.
My overall impression is that this teacher is unlikely ever again to
seek counsel from us! With the pool of knowledge available here it is a
shame that more reasonable use is not made of it all.

Leigh Palmer

Leigh Palmer

May 19, 1992, 7:49:41 PM
In article <96...@vice.ICO.TEK.COM> ha...@vice.ICO.TEK.COM (Hal Lillywhite)
writes:

>Too many people think that if you toss 3 heads in a row
>with a coin, the chance of tails on the next toss increases. (And
>professional "gamblers" who know better use this to separate people
>from their money.)

Actually the probability of *heads* increases as the string of heads
increases in length. Without the explicit hypothesis of a fair coin
one must admit the possibility of one which is unfair. Anyone who
bets tails after *ten* consecutive heads is irretrievably idealistic!

Leigh Palmer

Ray Trent

May 19, 1992, 8:53:42 PM
In the referenced article, pal...@sfu.ca (Leigh Palmer) writes:
>>Question: A single atom of a substance known to be unstable
>> with a half-life of 1 s is produced. After an interval of
>> 1 s the atom has not decayed. Is it now more likely to decay
>> sometime before the end of the second interval (between
>> 1 s and 2 s) ? Explain.
>
>I think any student who has been exposed to "the gambler's fallacy" ought
>to cough up the answer (no) to this question in reflex fashion. If scien-

Of course, you always have to remember the "gambler's retort" to the
"gambler's fallacy", but that's probably picking nits (see below).

I'd say that the question is somewhat ambiguous. I understand
perfectly well what the question is probably getting at. However,
there is a way of thinking that says that questions are better when
they have fewer *possible* interpretations.

I prefer questions that lend themselves to that greatest of test
taking aphorisms: Read the question, and then answer it. Therefore,
I would prefer to see the question reworded something like:

Question: A single atom of an isotope known to be unstable
with a half-life of 1s is produced. What is the probability that
it will decay during the interval 0s to 1s? If it does not
decay during the interval 0s to 1s, what is the probability
that it will decay during the interval 1s to 2s?

I wouldn't hesitate to ask about the actual probabilities. If they
don't understand the term "half-life" in the first place, they aren't
going to understand the concept you are getting at either, except by
rote or guessing.

If that's too pedantic sounding, how about:

Question: A single atom of a substance known to be unstable
with a half-life of 1s is produced. What is the chance that
it will decay during the first second? If it does not
decay during the first second, what is the chance
that it will decay during the second second? (If you don't
know the answers, are the 2 chances the same or different? Explain
for partial credit.)

Aside: For those that haven't heard the "gambler's retort", it is
basically summed up by the following story:

An average person, a mathematician, and a gambler see a coin tossed 10
times. Each time, it comes up heads. The three people are then asked
to bet on the outcome of the 11th toss. The average person will
usually bet on tails, because "it's got to come up soon". The
mathematician will bet on either, because the probabilities are the
same. The gambler will bet on heads every time, because at worst the
odds are the same, and you never know, the coin might be fixed.

--
"When you're down, it's a long way up
When you're up, it's a long way down
It's all the same thing
And it's no new tale to tell" ../ray\..

John C. Baez

May 20, 1992, 11:31:46 AM
In article <11...@inews.intel.com> bhou...@hopi.intel.com (Blair P. Houghton) writes:
>In article <96...@vice.ICO.TEK.COM> ha...@vice.ICO.TEK.COM (Hal Lillywhite) writes:
>>In article <15MAY92....@uwpg02.uwinnipeg.ca> ch...@uwpg02.uwinnipeg.ca writes:

>>>Question: A single atom of a substance known to be unstable
>>> with a half-life of 1 s is produced. After an interval of
>>> 1 s the atom has not decayed. Is it now more likely to decay
>>> sometime before the end of the second interval (between
>>> 1 s and 2 s) ? Explain.

>Actually, it's a difficult question, since it's dependent
>on whether atomic decay is or is not a stationary stochastic
>process. (A stationary process is one in which the probability
>does not depend on time[....])

>The difficulty is that atomic decay may or may not be a
>stationary process.

Houghton has hit the nail on the head here. The question has
been hotly disputed, and I've seen lots of articles on it.
(Unfortunately, I don't know any references off the top of my head.) In
a sloppy way of speaking, the question is, "does a radioactive nucleus
know how old it is?" That is, is its chance of decaying in the next
second, assuming it hasn't decayed yet, a function of how old it is, or
just a constant? Now, I realize that "everybody knows" that radioactive decay
is a stationary process, and that the term "half-life" implicitly
assumes this. On the other hand, if you do the calculations carefully,
you will find that radioactive decay is NOT a stationary process,
although it is VERY CLOSE to being one. I don't know if anyone has
actually observed this effect - it might be very hard.

If people want to flame me for this heresy I suggest that they dig up
the literature first. Here let me just sketch why it can't be a
stationary process. Let psi be the wavefunction of a "new-born
radioactive nucleus", and let P be the projection onto the space of
states in which the nucleus has already decayed. The probability of
decaying after time t is

||P exp(-itH) psi||^2

where H is the Hamiltonian. The claim is that various natural
assumptions imply that this function cannot equal exp(-kt) for t > 0,
where k is positive constant. There are different ways to proceed and
people love to argue about them...

Argument 1: Put the nucleus in a box and assume that only states below
a certain energy matter. By physics, H is bounded below and has pure
point spectrum (i.e. no continuous spectrum). Then we can write psi as
a finite linear combination of eigenstates and calculate


||P exp(-itH) psi||^2

Go ahead, do it - you get a bunch of sines and cosines, not anything
like exp(-kt).

People who know quantum will say, "Okay, sure, you're right, all motions
are quasiperiodic in such a system, what goes around comes around, but
if the box is big the answer might be so close to exp(-kt) that it's
ridiculous to worry about the discrepancy." Fine.
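
For anyone who wants to see Argument 1 in action, here is a toy numerical
version (a Python/numpy sketch; the random 50-state Hamiltonian and the
10-dimensional "undecayed" subspace are purely illustrative, not a model of
any real nucleus):

import numpy as np

rng = np.random.default_rng(0)

n, n_undecayed = 50, 10
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2                 # a random Hermitian "Hamiltonian"

psi = np.zeros(n, dtype=complex)         # start entirely in the undecayed subspace
psi[:n_undecayed] = rng.normal(size=n_undecayed)
psi /= np.linalg.norm(psi)

evals, evecs = np.linalg.eigh(H)
coeffs = evecs.conj().T @ psi

for t in [0.0, 0.5, 1.0, 5.0, 50.0, 500.0]:
    psi_t = evecs @ (np.exp(-1j * evals * t) * coeffs)
    p_decayed = np.linalg.norm(psi_t[n_undecayed:]) ** 2   # ||P exp(-itH) psi||^2
    print(t, p_decayed)
# The output oscillates (finite sums of sines and cosines) instead of rising
# smoothly toward 1 the way 1 - exp(-k*t) would.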

Argument 2: Just assume that psi is an analytic vector for H.
(Mathematicians can explain to physicists why this is reasonable.) Then

||P exp(-itH) psi||^2

is a real-analytic function. If it equals exp(-kt) for t > 0, it has to
equal exp(-kt) for t < 0, which is impossible, because then exp(-kt) >
1!!!

This is a somewhat better argument.

Even better is to do the calculation in a particular case, making as few
approximations as possible, and see what you get. This is done in the
literature.

------
I realize this is getting technical but I think it's worth posting to
sci.math and sci.edu because radioactive decay is often used to
illustrate exponential decay, and, while this is an excellent
approximation and there's certainly nothing to be ashamed of about using
it, it's interesting to note that there are controversies about this.

David Moore

May 20, 1992, 12:28:14 PM
pal...@sfu.ca (Leigh Palmer) writes:

>Actually the probability of *heads* increases as the string of heads
>increases in length. Without the explicit hypothesis of a fair coin
>one must admit the possibility of one which is unfair. Anyone who
>bets tails after *ten* consecutive heads is irretrievably idealistic!

This, of course, is one of the classical problems of Bayes theory: if you
suspect the coin is two-headed with probability p, then what probability
should you ascribe to the coin being two-headed after it comes up heads?
A simple application of Bayes' theorem gives the result. (The Monty Hall
problem, which seems to bemuse people periodically, is another simple
application.)
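
That application really is a one-liner. A quick Python sketch (the 1% prior
suspicion of a two-headed coin is just a number to plug in):

def p_two_headed(prior, n_heads):
    # Bayes' theorem: a two-headed coin shows n heads with probability 1,
    # a fair coin with probability (1/2)**n.
    num = prior * 1.0
    return num / (num + (1 - prior) * 0.5 ** n_heads)

for n in range(11):
    print(n, p_two_headed(prior=0.01, n_heads=n))
# Even a 1% prior suspicion climbs past 90% after ten straight heads, which
# is Palmer's point about betting tails after ten heads.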

Chris Colby

May 20, 1992, 3:14:58 PM

I have recently (on the three boards I posted this to)
offered to email my file explaining the basics of evolutionary
biology to anyone who wanted to read it. I've sent many copies out,
but I have had mail bounce on a few requests. If you mailed
me and did not receive a reply, send another message and include a
valid path in the body of your message. Alternately, I recently
posted the file on the newsgroup talk.origins. It is in two parts
titled "the theory of evolution" and "the theory of evolution (II)".
Here is the outline of the file:

----------------------------------------------------------
EVOLUTIONARY BIOLOGY
Introduction
What is evolution?
What isn't evolution?
What evolution isn't
Genetic variation
How is genetic variation described?
How much genetic variation is there?
Evolution within a lineage -- anagenesis
Mechanisms that decrease genetic variation
Natural selection
Sexual selection
Genetic drift
Mechanisms that increase genetic variation
Mutation
Recombination
Gene flow
Overview of anagenesis
Evolution among lineages -- cladogenesis
Speciation -- increasing biological diversity
Modes of speciation
Observed speciations
Macroevolution vs. microevolution
Extinction -- decreasing biological diversity
"Ordinary" extinctions
Mass extinctions
Conclusion
-------------------------------------------------------------

Chris Colby --- email: co...@bu-bio.bu.edu ---
"'My boy,' he said, 'you are descended from a long line of determined,
resourceful, microscopic tadpoles--champions every one.'"
--Kurt Vonnegut from "Galapagos"

Mark North

May 20, 1992, 6:48:04 PM
pal...@sfu.ca (Leigh Palmer) writes:

>>Question: A single atom of a substance known to be unstable
>> with a half-life of 1 s is produced. After an interval of
>> 1 s the atom has not decayed. Is it now more likely to decay
>> sometime before the end of the second interval (between
>> 1 s and 2 s) ? Explain.
>>
>>Thanks for any info. Marsh

>I think you mean: "Thanks for any opinion." In this case I'll support you.
>I think any student who has been exposed to "the gambler's fallacy" ought
>to cough up the answer (no) to this question in reflex fashion. If scien-
>tists have to be more careful with language than you have been in this
>context then there are too many nitpickers around. Your question is clear
>to me.

The fact that the question is clear to you may be due to the fact that you
know too much 8^). Let me point out a couple of things. The question is
misleading in bringing up 'half-life'. Half-life or mean lifetime is a
statement about a sample of identical nuclei; it has no relevance to a
single nucleus. One has absolutely NO idea when a particular nucleus will
decay, so all one-second time intervals are equally likely. Unless the
question was devised to expose the gambler's fallacy, let me suggest a better
one.

Given N identical nuclei with mean lifetime = 1 sec, what percentage decay in
the first second? Of those remaining after one second, what percentage
decay in the next second?

The answer is surprising and enlightening. At least it was to me the first
time I encountered it.

Mark

Mark North

May 20, 1992, 7:07:51 PM
pal...@sfu.ca (Leigh Palmer) writes:

No. Ten heads in a row is not *that* rare. I like Martingales up to five
sometimes -- you win often enough to keep it interesting. (Can't go
higher than that because of house limits).

Mark

Robert Parson

May 20, 1992, 3:44:08 PM
In article <1992May20.1...@galois.mit.edu>, jb...@zermelo.mit.edu (John C. Baez) writes:
>
> Houghton has hit the nail on the head here. The question has
> been hotly disputed, and I've seen lots of articles on it.
> (Unfortunately, I don't know any references off the top of my head.) In
> a sloppy way of speaking, the question is, "does a radioactive nucleus
> know how old it is?" That is, is its chance of decaying in the next
> second, assuming it hasn't decayed yet, a function of how old it is, or
> just a constant? Now, I realize that "everybody knows" that radioactive decay
> is a stationary process, and that the term "half-life" implicitly
> assumes this. On the other hand, if you do the calculations carefully,
> you will find that radioactive decay is NOT a stationary process,
> although it is VERY CLOSE to being one. I don't know if anyone has
> actually observed this effect - it might be very hard.
>
Yes, in principle NO DECAY IS EXPONENTIAL - there's always a power law
at very long times. There was a summary article in _Nature_ about 2-3 years
ago, I think. According to that article the power-law tail has never been
observed for radiative or radioactive decay, because it sets in at about
40-50 half-lives, which leads to S/N problems...
This is a classic example of "results that are very well known by a very
small group", but somehow have never diffused into the mainstream. The
theoretical arguments go back at least to the 1950's.
For those of you who think of resonances as poles of the S-Matrix - there's
always a branch cut, and exponential decay arises only when you ignore it.

Robert

Robert B. Israel

May 20, 1992, 12:55:58 PM

> ||P exp(-itH) psi||^2

Stated more precisely, this is the probability of observing a decayed
state at time t. Note also that the "wavefunction" must include
any fields involved in the decay, e.g. electromagnetic and
neutrino in many cases.

>where H is the Hamiltonian. The claim is that various natural
>assumptions imply that this function cannot equal exp(-kt) for t > 0,
>where k is positive constant. There are different ways to proceed and
>people love to argue about them...

> [ Argument 1 deleted ]

>Argument 2: Just assume that psi is an analytic vector for H.
>(Mathematicians can explain to physicists why this is reasonable.) Then

> ||P exp(-itH) psi||^2

>is a real-analytic function. If it equals exp(-kt) for t > 0, it has to
>equal exp(-kt) for t < 0, which is impossible, because then exp(-kt) >
>1!!!

>This is a somewhat better argument.

Actually, what is meant is, if it equals 1 - exp(-kt) for t > 0, it has to
equal 1 - exp(-kt) < 0 for t < 0.

You don't have to assume analyticity, just that P psi is in the domain of H.
If f(t) = ||P exp(-itH) psi||^2, then f'(0) = i <H psi, P psi> - i <P psi, H psi>
= 0 since P psi = 0. Thus f(t) can't be 1 - exp(-kt).

This argument gives you a little more than the others, since it describes
one way in which the decay is not exponential: because of the initial condition,
i.e. the assumption that the particle has definitely not decayed at t = 0,
the decay rate must start at 0.

>Even better is to do the calculation in a particular case, making as few
>approximations as possible, and see what you get. This is done in the
>literature.

>------
>I realize this is getting technical but I think it's worth posting to
>sci.math and sci.edu because radioactive decay is often used to
>illustrate exponential decay, and, while this is an excellent
>approximation and there's certainly nothing to be ashamed of about using
>it, it's interesting to note that there are controversies about this.

--
Robert Israel isr...@math.ubc.ca
Department of Mathematics or isr...@unixg.ubc.ca
University of British Columbia
Vancouver, BC, Canada V6T 1Y4

Leigh Palmer

May 21, 1992, 7:08:35 PM
In article <north.706402084@watop> no...@watop.nosc.mil (Mark North) writes:
>pal...@sfu.ca (Leigh Palmer) writes:
>
>>In article <15MAY92....@uwpg02.uwinnipeg.ca> ch...@uwpg02.uwinnipeg.ca
>>writes:
>>>
>>>Question: A single atom of a substance known to be unstable
>>> with a half-life of 1 s is produced. After an interval of
>>> 1 s the atom has not decayed. Is it now more likely to decay
>>> sometime before the end of the second interval (between
>>> 1 s and 2 s) ? Explain.
>>>
>>>Thanks for any info. Marsh
>
>>I think you mean: "Thanks for any opinion." In this case I'll support you.
>>I think any student who has been exposed to "the gambler's fallacy" ought
>>to cough up the answer (no) to this question in reflex fashion. If scien-
>>tists have to be more careful with language than you have been in this
>>context then there are too many nitpickers around. Your question is clear
>>to me.
>
>The fact that the question is clear to you may be due to the fact that you
>know too much 8^). Let me point out a couple of things. The question is
>misleading in bringing up 'half-life'. Half-life or mean lifetime is a
>statement about a sample of identical nuclei, it has no relevance to a
>single nuclei. One has absolutely NO idea when a particular nuclei will
>decay so all one-second time intervals are equally likely. Unless the
>question was devised to expose the gambler's fallacy let me suggest a better
>one.

I think that is the intent of the question. Of course you are correct about
the misapplication of "half-life" to a single nucleus. I read the dependent
clause in the sentence above to be "of a substance known to be unstable
with a half-life of 1 s". The half-life pertains to the substance. That
potential ambiguity is easily resolved by the student who knows too much.

>Given N identical nuclei with mean lifetime = 1 sec what percentage decay in
>the first second. Of those remaining after one second, what percentage
>decay in the next second?

I like your modification, with the exception of the ugly neologism
"percentage". What's wrong with "how many" or "what fraction"? I still
consider the original question to be utterly kosher.

>The answer is surprising and enlightening.. At least it was to me the first
>time I encountered it.

I agree whole-heartedly. The other shocker is the fact that the mean free
path of a gas molecule is the same number as the mean distance that a molecule
will travel before its next collision, and is also the mean distance it has
traveled since its last collision. The apparent factor of two discrepancy can
keep students actively involved in the intellectual process as well as any
conundrum I know.

A clear understanding of the rudiments of probability is essential to the
understanding of physical theory as it is cast in our culture. I believe the
concepts to be simple, but they can be deceptively hard to teach. They do
yield lots of curious, often counterintuitive, results which can be used to
stimulate enlightening discussion.

Leigh Hunt Palmer

Leigh Palmer

May 21, 1992, 7:33:54 PM
In article <1992May20...@cubldr.colorado.edu>

Well, my esteemed colleagues, both of you theoreticians: I believe that you
are philosophically challenged! The exponential radioactive decay "law" is
an *empirical* one, like Hooke's Law. The fact that the result is readily
derived on the basis of a poorly founded simple hypothesis has seduced you.
In fact the decay of a sample of radioactive material is not exponential.
It is a stochastic process consisting of discrete events. The exponential
"law" describes the behaviour of a hypothetical ensemble of samples, and if
that is flawed it can probably never be experimentally demonstrated, so
where's the beef?

But I *am* sincerely glad you exposed me to this arcanum, John. On behalf of
all of us who were formerly unaware of this, I thank you.

Leigh

John C. Baez

May 22, 1992, 1:34:45 AM
In article <1992May21....@sfu.ca> pal...@sfu.ca (Leigh Palmer) writes:

>Well, my esteemed colleagues, both of you theoreticians: I believe that you
>are philosophically challenged! The exponential radioactive decay "law" is
>an *empirical* one, like Hooke's Law. The fact that the result is readily
>derived on the basis of a poorly founded simple hypothesis has seduced you.
>In fact the decay of a sample of radioactive material is not exponential.
>It is a stochastic process consisting of discrete events. The exponential
>"law" describes the behaviour of a hypothetical ensemble of samples, and if
>that is flawed it can probably never be experimentally demonstrated, so
>where's the beef?

I'm not quite sure what you mean. My point was simply that the usual
quantum-mechanical "derivations" of the exponential decay law for
excited states (radioactive nuclei, unstable particles, atoms with
excited electrons, etc..) are sloppy, that these calculations are
approximate, and that it's easy to see that they can't be exact.
Moreover it should be possible to experimentally falsify the exponential
law, but I don't think anyone has done so. This is where the beef
is for an experimentalist.

The exponential law, like Hooke's law, could be viewed simply as a case
of fitting a curve with the first function that comes to mind. "Going
to zero ever more slowly? Let's try an exponential!" (Did they ever try
a Gaussian?) But ideologies tend to get built up supporting these
simple laws. We all know how physics was wedded to the paradigm of
linearity and how it's struggling to free itself from that now. The
paradigm of stationary processes and exponential decay is a similar sort
of thing.

>But I *am* sincerely glad you exposed me to this arcanum, John. On behalf of
>all of us who were formerly unaware of this, I thank you.
>
>Leigh

Quite welcome!

Mark North

May 22, 1992, 10:47:51 AM
pal...@sfu.ca (Leigh Palmer) writes:

[north writes]


>>Given N identical nuclei with mean lifetime = 1 sec what percentage decay in
>>the first second. Of those remaining after one second, what percentage
>>decay in the next second?

>I like your modification, with the exception of the ugly neologism
>"percentage".
>What's wrong with "how many" or "what fraction"? I still consider the original
>question to be utterly kosher.

Nothing. I *hate* it when I do that.

Mark

Blair P. Houghton

May 22, 1992, 2:18:37 PM
In article <1992May22....@galois.mit.edu> jb...@nevanlinna.mit.edu (John C. Baez) writes:
>The exponential law, like Hooke's law, could be viewed simply a case
>of fitting a curve with the first function that comes to mind. "Going
>to zero ever more slowly? Let's try an exponential!" (Did they ever try
>a Gaussian?) But ideologies tend to get built up supporting these
>simple laws. We all know how physics was wedded to the paradigm of
>linearity and how it's struggling to free itself from that now. The
>paradigm of stationary processes and exponential decay is a similar sort
>of thing.

Hmm.

I thought the exponential decay for atoms was at least
theoretical, and the curve-fitting served to verify
the theory, viz, that the rate of decay is directly
proportional to the number of undecayed atoms. The papers
should be about 75 years old, by now.

This feature is fairly obvious, in light of the law of
large numbers. What's also fairly obvious is that for
small numbers the theory will no longer hold, and that
certain configurations of atoms (not all necessarily the
same kind of atoms) can preserve the number of undecayed
atoms, and adding energy or particles can control the
number of undecayed atoms.

--Blair
"Heisenberg's a piker when
it comes to uncertainty;
at least, next to me..."

Carlo Graziani

unread,
May 22, 1992, 3:57:06 PM5/22/92
to
In article <11...@inews.intel.com> bhou...@hopi.intel.com (Blair P. Houghton) writes:
>In article <1992May22....@galois.mit.edu> jb...@nevanlinna.mit.edu (John C. Baez) writes:
>>The exponential law, like Hooke's law, could be viewed simply a case
>>of fitting a curve with the first function that comes to mind. "Going
>>to zero ever more slowly? Let's try an exponential!" (Did they ever try
>>a Gaussian?) But ideologies tend to get built up supporting these
>>simple laws. We all know how physics was wedded to the paradigm of
>>linearity and how it's struggling to free itself from that now. The
>>paradigm of stationary processes and exponential decay is a similar sort
>>of thing.
>
>Hmm.
>
>I thought the exponential decay for atoms was at least
>theoretical, and the curve-fitting served to verify
>the theory, viz, that the rate of decay is directly
>proportional to the number of undecayed atoms. The papers
>should be about 75 years old, by now.
>

Exponential decay is in fact a theoretical necessity. It is a generic
quantum mechanical feature of problems in which you have a discrete
state (e.g. an excited atomic or nuclear state) coupled to a continuum
of states (e.g. the atomic or nuclear system in the ground state and
an emitted photon flitting around somewhere). There is nothing ad hoc
about it. The original paper is Weisskopf & Wigner, 1930, Z. Physik,
63, 54. If you can't get a translation from German, (or don't speak
German), see Gasiorowicz, "Quantum Physics" 1974 (Wiley), pp 473-480,
or Cohen-Tannoudji, Diu, & Laloe, "Quantum Mechanics" 1977 (Wiley),
vol 2, pp 1343-1355.

The essence of the result is the effective modification of the energy of
the excited state by a small complex perturbation, E --> E + (dE - i*R/2)
where dE is the small radiative energy correction (Lamb shift) and R
is the decay rate. The time dependent phase factor is thus also modified:
exp(-i*E*t) --> exp[-i*(E+dE)*t]exp[-R*t/2]. This is the source of the decay;
probabilities, which go as the square of the amplitudes, will exhibit a time
dependence exp[-R*t].
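
That last step is easy to check numerically if the complex-energy
bookkeeping looks suspicious (a throwaway Python sketch; E, dE and R below
are arbitrary numbers chosen only to make the comparison visible):

import cmath
import math

E, dE, R = 2.0, 0.01, 0.3            # arbitrary illustrative values

for t in [0.0, 1.0, 2.0, 5.0]:
    amplitude = cmath.exp(-1j * (E + dE) * t) * math.exp(-R * t / 2)
    print(t, abs(amplitude) ** 2, math.exp(-R * t))   # the two columns agree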

Carlo

-----------------------------------------------------------------------------
Carlo Graziani | Warning: some of the above may not be a fully |
Dept. of Physics | correct representation of profound cosmic truth.|
University of Chicago | |
| "It's fun to have fun, but you have to know how!"|
ca...@nu.uchicago.edu | -The Cat in the Hat |
-----------------------------------------------------------------------------

Cheri A Anaclerio

May 22, 1992, 6:34:47 PM
In article <north.706402084@watop> no...@watop.nosc.mil (Mark North) writes:
>pal...@sfu.ca (Leigh Palmer) writes:
>
>>In article <15MAY92....@uwpg02.uwinnipeg.ca> ch...@uwpg02.uwinnipeg.ca
>>writes:
>>>
>>>Question: A single atom of a substance known to be unstable
>>> with a half-life of 1 s is produced. After an interval of
>>> 1 s the atom has not decayed. Is it now more likely to decay
>>> sometime before the end of the second interval (between
>>> 1 s and 2 s) ? Explain.
>>>
>
>The fact that the question is clear to you may be due to the fact that you
>know too much 8^). Let me point out a couple of things. The question is
>misleading in bringing up 'half-life'. Half-life or mean lifetime is a
>statement about a sample of identical nuclei, it has no relevance to a
>single nuclei. One has absolutely NO idea when a particular nuclei will
>decay so all one-second time intervals are equally likely. Unless the
>question was devised to expose the gambler's fallacy let me suggest a better
>one.
>
>Given N identical nuclei with mean lifetime = 1 sec what percentage decay in
>the first second. Of those remaining after one second, what percentage
>decay in the next second?
>
>The answer is surprising and enlightening.. At least it was to me the first
>time I encountered it.
>
>Mark


So what is the answer??!!!!!!

Andrew - Palfreyman

May 23, 1992, 4:59:19 AM
>>>Question: A single atom of a substance known to be unstable
>>> with a half-life of 1 s is produced. After an interval of
>>> 1 s the atom has not decayed. Is it now more likely to decay
>>> sometime before the end of the second interval (between
>>> 1 s and 2 s) ? Explain.

Blair P. Houghton writes:
>Actually, it's a difficult question, since it's dependent
>on whether atomic decay is or is not a stationary stochastic
>process. (A stationary process is one in which the probability
>does not depend on time[....])

>The difficulty is that atomic decay may or may not be a
>stationary process.

John Baez writes:
: Houghton has hit the nail on the head here. The question has
: been hotly disputed, and I've seen lots of articles on it.
: (Unfortunately, I don't know any references off the top of my head.) In
: a sloppy way of speaking, the question is, "does a radioactive nucleus
: know how old it is?"

Does a tossed coin know how many times it previously came up heads/ tails?
(Well, I hope not anyway.)
Does a man who was struck by lightning fear for his life more during the
next thunderstorm?
Of course. Should he?
(Well, I hope not anyway.)
--------------------------------------------------------------------------
| lord snooty @the giant | wolfpack, silver, down to the water |
| poisoned electric head | andrew palfr...@cup.portal.com |
--------------------------------------------------------------------------

Andrew - Palfreyman

May 23, 1992, 6:36:56 AM
Carlo Graziani writes:
: The time dependent phase factor is thus also modified:

: exp(-i*E*t) --> exp[-i*(E+dE)*t]exp[-R*t/2]. This is the source of the decay;
: probabilities, which go as the square of the amplitudes, will exhibit a time
: dependence exp[-R*t].

O Ho. Looks like a non-stationary process to me now. What happened? :-)

John C. Baez

May 23, 1992, 10:03:47 AM
In article <59...@cup.portal.com> lordS...@cup.portal.com (Andrew - Palfreyman) writes:
>John Baez writes:
>: Houghton has hit the nail on the head here. The question has
>: been hotly disputed, and I've seen lots of articles on it.
>: (Unfortunately, I don't know any references off the top of my head.) In
>: a sloppy way of speaking, the question is, "does a radioactive nucleus
>: know how old it is?"
>
>Does a tossed coin know how many times it previously came up heads/ tails?
>(Well, I hope not anyway.)

I hope this response isn't a fancy way of saying that the answer to my
question is obviously NO, because I went on to show the answer was most
likely YES, although the effect is minute.

As for your coin tossing question, here too the answer is likely to be
YES, although again the effect is minute and no doubt utterly swamped by
other effects. If a newly minted coin is thrown and lands heads up, the
"tails" side is probably worn away a little bit from hitting the surface
it landed on. This will affect the probabilities in the future.

In case anyone doubts my sanity, let me repeat that this effect is
obviously utterly negligible. My point is just that stationarity is
almost always an approximation, sometimes an excellent one, sometimes
not so hot. There is a whole subject of failure in which one studies
the rate at which lightbulbs burn out, car parts break, etc., and this
provides lots of interesting examples of nonstationary stochastic
processes.

To digress...

In Knuth's book Concrete Mathematics it says that if you SPIN a brand-new penny
it is noticeably more likely to land heads up than tails up. It needs
to be new. Supposedly this is a way to make money in bars. I urge
anyone to verify this before quitting school.

John C. Baez

May 23, 1992, 11:39:17 AM

>Exponential decay is in fact a theoretical necessity. It is a generic
>quantum mechanical feature of problems in which you have a discrete
>state (e.g. an excited atomic or nuclear state) coupled to a continuum
>of states (e.g. the atomic or nuclear system in the ground state and
>an emitted photon flitting around somewhere). There is nothing ad hoc
>about it. The original paper is Weisskopf & Wigner, 1930, Z. Physik,
>63, 54. If you can't get a translation from German, (or don't speak
>German), see Gasiorowicz, "Quantum Physics" 1974 (Wiley), pp 473-480,
>or Cohen-Tannoudji, Diu, & Laloe, "Quantum Mechanics" 1977 (Wiley),
>vol 2, pp 1343-1355.
>
>The essence of the result is the effective modification of the energy of
>the excited state by a small complex perturbation, E --> E + (dE - i*R/2)
>where dE is the small radiative energy correction (Lamb shift) and R
>is the decay rate. The time dependent phase factor is thus also modified:
>exp(-i*E*t) --> exp[-i*(E+dE)*t]exp[-R*t/2]. This is the source of the decay;
>probabilities, which go as the square of the amplitudes, will exhibit a time
>dependence exp[-R*t].

This is indeed the conventional wisdom. Let me begin by saying:

1) I agree that the exponential decay law is backed up by theory in this
sort of situation and is far from an ad hoc "curve fitting" sort of
thing.

2) The exponential law is apparently an excellent approximation, and as
far as I know no deviations from it have ever been observed. Here I
am not talking about the (necessary) deviations due to finite sample
size. I am talking about deviations present in the limit as the sample
size approaches infinity.

3) If you ever wanted someone to actually calculate a decay rate for
you, I'm sure Graziani would do a whole lot better job than I would.
What follows has nothing to do with the important job of getting an
answer that's good enough for all practical purposes. It is a matter of
principle (my specialty). There's no real conflict.

Okay. So, Graziani has offered the conventional wisdom, what everyone
knows about radioactive decay, that it is a "theoretical necessity".
It's precisely because this is so well-entrenched that I thought I
should point out that one can easily prove that quantum-mechanical decay
processes cannot be EXACTLY exponential. There are approximations in
all of the arguments Graziani cites.

Let me just repeat the proof that decay processes aren't exactly
exponential. It uses one mild assumption, but if the going gets rough I
imagine someone will raise questions about this assumption. It'd be
nice to get a proof with even weaker assumptions; I vaguely recall that
one could use the fact that the Hamiltonian is bounded below to do so.

This is just the proof that Robert Israel gave a while ago (an improved
version of mine).

Let psi be the wavefunction of a "new-born radioactive nucleus",
together with whatever fields are involved in the decay. Let P be
the projection onto the space of states in which the nucleus has NOT
decayed. Let H be the Hamiltonian, a self-adjoint operator. The
probability that at time t the system will be observed to have NOT
decayed is

||P exp(-itH) psi||^2

The claim is that this function cannot be of the form exp(-kt) for all
t>0, where k is some positive constant.

Just differentiate this function with respect to t and set t = 0.
First, rewrite the function as

<exp(-itH) psi, P exp(-itH) psi>,

and then differentiate to get

<-iH exp(-itH) psi, P exp(-itH) psi> +
<exp(-itH) psi, -iPH exp(-itH) psi>

and set t = 0 to get

<-iH psi, psi> + <psi, -iH psi> = 0

Here we are using P psi = psi. Since we get zero, the function could
not have been equal to exp(-kt) for k nonzero.

That should satisfy any physicist. A mathematician will worry about why
we can differentiate the function. This is a simple issue if you know
about unbounded self-adjoint operators. (Try Reed and Simon's Methods
of Modern Mathematical Physics vol. I: Functional Analysis, and vol. II:
Fourier Analysis and Self-Adjointness.) For the function to be
differentiable it suffices that psi is in the domain of H. For
physicists, this condition means that ||H psi|| < infinity.

[Let me put in a digression only to be read by the most nitpicky of
nitpickers, e.g. myself. An excited state psi, while presumably an
eigenvector for some "free" Hamiltonian which neglects the interactions
causing the decay, is not an eigenvector for the true Hamiltonian H,
which of course is why it doesn't just sit there. One might worry,
then, that the eigenvector psi of the "free" Hamiltonian is not in the
domain of the true Hamiltonian H. This is a standard issue in
perturbation theory and the answer depends on how singular the
perturbation is. Certainly for perturbations that can be treated by
Kato-Rellich perturbation theory any eigenvector of the free Hamiltonian
is in the domain of the true Hamiltonian H, cf. Thm X.13 vol. II R&S.
But I claim that this issue is a red herring, the real point being
that any state we can *actually prepare* has ||H psi|| < infinity.
Instead of arguing about this, I would hope that any mathematical
physicists would just come up with a theorem with weaker hypotheses.]

As Israel pointed out, this argument shows what's going on: when you are
SURE the nucleus has not decayed yet (i.e., it's "new-born"), the decay
rate must be zero; the decay rate then can "ramp up" very rapidly to the
value obtained by the usual approximate calculations.
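
A toy numerical model shows that flat start quite clearly (again a
Python/numpy sketch; the random 60-state Hamiltonian is a stand-in, not a
model of an actual decay):

import numpy as np

rng = np.random.default_rng(1)

n, n_undecayed = 60, 12
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2

psi = np.zeros(n, dtype=complex)     # P psi = psi: definitely undecayed at t = 0
psi[:n_undecayed] = rng.normal(size=n_undecayed)
psi /= np.linalg.norm(psi)

evals, evecs = np.linalg.eigh(H)
coeffs = evecs.conj().T @ psi

def survival(t):
    psi_t = evecs @ (np.exp(-1j * evals * t) * coeffs)
    return np.linalg.norm(psi_t[:n_undecayed]) ** 2   # ||P exp(-itH) psi||^2

for t in [1e-3, 1e-2, 1e-1]:
    print(t, (1 - survival(t)) / t**2)
# (1 - survival)/t^2 approaches a constant as t -> 0: the decay starts out
# quadratically in t, i.e. the decay rate itself starts at zero, as claimed.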

Physicists occasionally mistrust mathematicians on matters such as
these. Arcane considerations about the domains of unbounded
self-adjoint operators probably only serve to enhance this mistrust,
which is ironic, of course, since the mathematicians are simply trying
to avoid sloppiness. In any event, just to show that this isn't
something only mathematicians believe in, let me cite the paper:

Time scale of short time deviations from exponential decay, Grotz and
Klapdor, Phys Rev. C 30 (1984), 2098-2100.

"In this Brief Report we discuss critically whether such quantum
mechanically rigorously demanded deviations from the usual decay
formulas may lead to observable effects and give estimates using the
Heisenberg uncertainty relation.

It is easily seen that the exponential decay law following from a
statistical ansatz is only an approximation in a quantum mechanical
description. [Gives essentially the above argument.] So for very small
times, the decay rate is not constant as characteristic for an
exponential decay law, but varies proportional to t. [....]
Equations (2) and (3) tell us that for sufficiently short times, the
decay rate is whatever [arbitrarily - these guys are German] small.
However, to make any quantitative estimate is extremely difficult.
Peres uses the threshold effect to get a quantitative estimate for the
onset of the exponential decay [...] Applying this estimate to double
beta decay yields approximately 10^{-21} sec, which is much too small to
give any measurable effect. [They then go on to argue with Peres.]"

This is all I want to say about this, unless someone has some nice theorems
about the allowed behavior of the function

||P exp(-itH) psi||^2

when H is bounded below and psi is not necessarily in the domain of H.
(This would probably involve extending t to a complex half-plane.)

Cheri A Anaclerio

May 23, 1992, 12:58:21 PM
In article <59...@cup.portal.com> lordS...@cup.portal.com (Andrew - Palfreyman) writes:
>>>>Question: A single atom of a substance known to be unstable
>>>> with a half-life of 1 s is produced. After an interval of
>>>> 1 s the atom has not decayed. Is it now more likely to decay
>>>> sometime before the end of the second interval (between
>>>> 1 s and 2 s) ? Explain.
>

Much too big of a deal has been made out of this problem. In order to solve it, first find the decay constant, which is equal to the probability for each nucleus to decay (in this case there is only one).

1/2A(0) = A(0)exp[-kt], t= 1 sec
kt=ln2=0.693
k =0.693 = decay constant

The probability for the atom to decay in the second interval if it does not decay in the first interval is therefore

(1 - 0.693)(0.693) = 0.213

Therefore it is less likely to decay in the second interval since it did not decay in the first interval. Mind you, these are only probabilities- of course the atom does not "know" how old it is.

Robert Parson

May 23, 1992, 3:10:59 PM
In article <1992May22.1...@midway.uchicago.edu>, jf...@quads.uchicago.edu (Carlo Graziani) writes:
>>
>
> Exponential decay is in fact a theoretical necessity. It is a generic
> quantum mechanical feature of problems in which you have a discrete
> state (e.g. an excited atomic or nuclear state) coupled to a continuum
> of states (e.g. the atomic or nuclear system in the ground state and
> an emitted photon flitting around somewhere). There is nothing ad hoc
> about it. The original paper is Weisskopf & Wigner, 1930, Z. Physik,
> 63, 54. If you can't get a translation from German, (or don't speak
> German), see Gasiorowicz, "Quantum Physics" 1974 (Wiley), pp 473-480,
> or Cohen-Tannoudji, Diu, & Laloe, "Quantum Mechanics" 1977 (Wiley),
> vol 2, pp 1343-1355.

No, exponential decay is only an extremely accurate approximation.
The Weisskopf-Wigner model is
unphysical because it assumes that the energy spectrum of the continuum
is unbounded. As long as there is a lower bound to the spectrum - an energy
zero - the exponential will turn into a power law on a timescale
3*tau*Ln[E*tau/hbar], where tau is the lifetime and E is the energy of
the state, measured from the physical energy zero.
See:
L. Khalfin, JETP _6_, 1053 (1958).
R. G. Winter, Phys. Rev. _123_, 1503 (1961).
P. T. Greenland, _Nature_ (News and Views) _335_, 298 (1988).
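
To put a number on that crossover formula (a back-of-the-envelope Python
sketch; the 1 eV resonance energy and 1 ns lifetime are invented
illustrative values, not taken from the papers):

import math

hbar = 1.054571817e-34    # J*s
eV = 1.602176634e-19      # J

E = 1.0 * eV              # made-up resonance energy above threshold
tau = 1.0e-9              # made-up lifetime

t_cross = 3 * tau * math.log(E * tau / hbar)
print(t_cross / tau)      # crossover after roughly forty-odd lifetimes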

The last article points out an interesting twist, which Blair Houghton
might enjoy: in order to get the nonexponential decay in the laboratory,
you must set up the experiment so that the decay products can recombine.
So if the crossover takes place on a 3 week timescale, you'll need an
awfully big chamber! (And don't watch the pot...) The result _sensitively_
depends upon the "innocent" assumption that the quantum system is isolated.
However, it is not totally irrelevant to the real world - one can
study decay of resonances very close to threshold, so that E is small.
There have been experiments (see the _Nature_ article) but no luck so far.

Robert

Blair P. Houghton

May 23, 1992, 4:30:04 PM
In article <1992May23.1...@galois.mit.edu> jb...@littlewood.mit.edu (John C. Baez) writes:
>In article <59...@cup.portal.com> lordS...@cup.portal.com (Andrew - Palfreyman) writes:
>>John Baez writes:
>>: Houghton has hit the nail on the head here. The question has

Reminds me of my days as a stage carpenter. I think I'll
install a garage-door opener tomorrow afternoon...

>>: a sloppy way of speaking, the question is, "does a radioactive nucleus
>>: know how old it is?"
>>
>>Does a tossed coin know how many times it previously came up heads/ tails?
>>(Well, I hope not anyway.)

In the case given, in which ten trials provided ten heads
events, a statistical analysis gives no reason even to believe
that the coin possesses a tails side...

Bet heads.

--Blair
"Y' move six-teen tons and whaddaya get?"

Andrew - Palfreyman

May 23, 1992, 5:34:30 PM
Cheri Anaclerio writes:
: Much to big of a deal has been made out of this problem. In order to solve

: it, first find the decay constant which is equal to the probability for each
: nucleus to decay (in this case there is only one).

:
: 1/2A(0) = A(0)exp[-kt], t= 1 sec
: kt=ln2=0.693
: k =0.693 = decay constant
:
: The probability for the atom to decay in the second interval if it does not
: decay in the first interval is therefore
:
: (1 - 0.693)(0.693) = 0.213
:
: Therefore it is less likely to decay in the second interval since it did not
: decay in the first interval. Mind you, these are only probabilities- of
: course the atom does not "know" how old it is.

There are two errors in here (well, three if you count >75 chars/line :).

1. In the second interval, the probability of decay (given no decay in
the first interval) is also <k>. You have to divide your k(1-k) by the
total probability in that state. The total probability (given no
first decay) is {decays-in-2nd + does-not-decay-in-2nd} = k(1-k) + (1-k)^2,
which is (1-k), so the conditional probability is k(1-k)/(1-k) = k.
2. Your conclusion does not follow from your maths: if
"it is less likely to decay in the second" were true, then you *could*
conclude that the atom "knows" how old it is.
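
For anyone who would rather check the memorylessness numerically than argue
about the normalisation, here is a minimal Monte Carlo sketch in Python (the
1 s half-life is from the original question; the sample size is arbitrary):

import math, random

HALF_LIFE = 1.0                      # s, from the original question
LAMBDA = math.log(2) / HALF_LIFE     # decay constant, 0.693 per second

N = 1000000
decayed_first = survived_first = decayed_second = 0

for _ in range(N):
    t = random.expovariate(LAMBDA)   # decay time of one simulated atom
    if t < 1.0:
        decayed_first += 1
    else:
        survived_first += 1
        if t < 2.0:
            decayed_second += 1

print("P(decay in 1st second)                = %.3f" % (decayed_first / N))
print("P(decay in 2nd second | survived 1st) = %.3f" % (decayed_second / survived_first))
# Both come out ~0.500: exponential decay is memoryless, so the surviving
# atom is neither more nor less likely to decay in the second interval.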

John C. Baez

unread,
May 24, 1992, 11:35:43 AM5/24/92
to

> No, exponential decay is only an extremely accurate approximation.
> The Weisskopf-Wigner model is
> unphysical because it assumes that the energy spectrum of the continuum
> is unbounded. As long as there is a lower bound to the spectrum - an energy
> zero - the exponential will turn into a power law on a timescale
> 3*tau*Ln[E*tau/hbar], where tau is the lifetime and E is the energy of
> the state, measured from the physical energy zero.
> See:
> L. Khalfin, JETP 6, 1053 (1958)
> R. G. Winter, Phys. REv. _123_, 1503 (1961).
> P. T. Greenland, _Nature_ (News and Views) _335_, 298 (1988).

Thanks immensely. This sounds like the kind of thing I was hoping for,
a proof of the nonexistence of exact exponential decay using the fact that the
Hamiltonian is bounded below. It's neat how here we have the
*long-time* departures from the exponential law, as opposed to
*short-time* departures as in the work I was discussing. These may have
a chance of being seen?

> (And don't watch the pot...)

Heh. Actually, the article I cited did consider the question of
stopping radioactive decay via the quantum Zeno effect, and concluded it
was impractical. It's amusing that the (German) authors called it the
"zenon" effect, and the reference in Reviews of Physics (or whatever
it's called) corrected that to the "xenon" effect.

> There have been experiments (see the _Nature_ article) but no luck so far.

Too bad!


Mcirvin

unread,
May 24, 1992, 12:16:57 PM5/24/92
to
jb...@littlewood.mit.edu (John C. Baez) writes:

>> No, exponential decay is only an extremely accurate approximation.

[...]


>> See:
>> L. Khalfin, JETP 6, 1053 (1958)
>> R. G. Winter, Phys. REv. _123_, 1503 (1961).
>> P. T. Greenland, _Nature_ (News and Views) _335_, 298 (1988).

>Thanks immensely. This sounds like the kind of thing I was hoping for,
>a proof of the nonexistence of exponential decay using the fact that the
>Hamiltonian is bounded below. It's neat how here we have the
>*long-time* departures from the exponential law, as opposed to
>*short-time* departures as in the work I was discussing. These may have
>a chance of being seen?

Rolf Winter taught the third semester of undergrad quantum mechanics
at William and Mary, and he discussed this phenomenon once, mentioning
that he likes to ask all kinds of experimentalists whether they think
they might be able to observe the long-time deviations from exponential
decay. He said that (as of 1990) he hadn't yet gotten an affirmative
answer.

Sidney Coleman claimed in his quantum field theory course at Harvard that
some statements about long-time deviations from exponential decay were
either wrong or tautological, but I couldn't follow his argument, and I
don't know whether he was actually referring to this work.

>> There have been experiments (see the _Nature_ article) but no luck so far.

>Too bad!


--
Matt McIrvin mci...@husc.harvard.edu Grad student, Department of
Clean .sig regularly with a lint-free cloth. Physics, Harvard University

Cheri A Anaclerio

unread,
May 24, 1992, 8:54:30 PM5/24/92
to
_____________________________________________________________________________

There seems to be an error in your answer (well, two if you count "maths"):

1. According to your line of reasoning, if you have a fair coin and want
to know the probability of tossing a heads on the first throw and a
: tails on the second throw, that probability would be (1 - 1/2)(1/2)
divided by the total probability in the second state (throw a tails
on the second + don't throw a tails on the second: (1 - 1/2)(1/2) +
(1 - 1/2)(1 - 1/2)) which would give you a final answer of 1/2.
Anyone who has taken an undergraduate course in probability knows
that this answer is incorrect. The probability of throwing a heads
and then a tails is (1/2)(1/2) = 1/4. Likewise, in the problem at
hand, the probability that the atom does not decay in the first
interval but does decay in the second interval is the probability
that it does not decay in the first interval * the probability of
decaying in the second interval, which is still k. The
answer, .213, reflects that the atom has a greater likelihood of
decaying in the first interval than not.

Andrew - Palfreyman

unread,
May 25, 1992, 12:21:49 AM5/25/92
to
Cheri Anaclerio:
: There seems to be an error in your answer (well, two if you count "maths"):
British usage. My apologies!

: 1. According to your line of reasoning, if you have a fair coin and want
: to know the probability of tossing a heads on the first throw and a
: tails on the second throw, that probability would be (1 - 1/2)(1/2)
: divided by the total probability in the second state (throw a tails
: on the second + don't throw a tails on the second: (1 - 1/2)(1/2) +
: (1 - 1/2)(1 - 1/2)) which would give you a final answer of 1/2.

Sorry, but it would not. You are calculating the joint probability of
two independent events and therefore don't need the normalisation.
If you calculate the probability of an event in isolation ("k decays in the
second interval") then the sum of this probability and the other possible
outcome(s) in the second interval have to sum to 1. This is what we want to
do when we need to answer the question "does the atom know how old it is?".

With your simple model, you should find a stationary process - the
answer should be "no". Since you don't, there must be a flaw.
So your two errors are:
1. Finding a time-dependence in the decay rate (there is none for exponential decay).
2. Interpreting the result as a stationary process anyway (which, by definition, it then is not).

As we've seen earlier in the thread, the real process is in fact (very
slightly) non-stationary, but it takes physics beyond simple exponential
decay to evince this property.

: Likewise, in the problem at
: hand, the probability that the atom does not decay in the first
: interval but does decay in the second interval is the probability
: that it does not decay in the first interval * the probability of
: decaying in the second interval, which is still k. The
: answer, .213, reflects that the atom has a greater likelihood of
: decaying in the first interval than not.

Yes indeed, but only if the atom has a memory! - the point at hand.
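
To keep the two quantities being argued about separate, here is a tiny
enumeration in Python of the fair-coin example (an illustration only, not
either poster's calculation):

from itertools import product

outcomes = list(product("HT", repeat=2))     # HH, HT, TH, TT - all equally likely

# Joint probability: heads on the 1st throw AND tails on the 2nd.
joint = sum(1 for a, b in outcomes if a == "H" and b == "T") / len(outcomes)

# Conditional probability: tails on the 2nd, GIVEN heads on the 1st.
first_heads = [(a, b) for a, b in outcomes if a == "H"]
conditional = sum(1 for a, b in first_heads if b == "T") / len(first_heads)

print("P(H then T)            =", joint)         # 0.25
print("P(T on 2nd | H on 1st) =", conditional)   # 0.5
# 0.25 answers "what fraction of all runs go heads-then-tails?";
# 0.5 answers "the first throw already came up heads - now what?".
# The half-life question, like the coin question, is of the second kind.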

Kurt Sonnenmoser

unread,
May 25, 1992, 5:27:30 AM5/25/92
to
In article <1992May24.1...@galois.mit.edu>

jb...@littlewood.mit.edu (John C. Baez) writes:

>> (And don't watch the pot...)
>
>Heh. Actually, the article I cited did consider the question of
>stopping radioactive decay via the quantum Zeno effect, and concluded it

>was impractical. [...]

>> There have been experiments (see the _Nature_ article) but no luck so far.
>
>Too bad!

It has been assumed implicitly in the above, but I think that it should
be made explicit: the assumed Zeno effect involves some sort of belief
in the (controversial) Collapse Postulate. If the frequency f with which
your detector reports "not decayed ... not decayed ... " is high enough,
the decay should become slower (the decay rate goes to zero near t=0),
since the system is assumed to be projected onto the initial state with
the frequency f. (Also assume time translational invariance of the
Hamiltonian.)

(I tend to believe that such a sophisticated measuring device would
noticeably modify the whole interaction dynamics.)
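
For what it's worth, the projection argument can be put into a short Python
sketch using the standard two-level caricature (a toy Rabi oscillation, not a
model of real radioactive decay; the frequency and time are arbitrary):

import math

omega = 1.0        # Rabi frequency of the toy two-level system, rad/s
T = 1.0            # total observation time, s

for n in (1, 10, 100, 1000):
    # Project back onto the initial state n times at equal intervals;
    # between projections the survival probability is cos^2(omega * t).
    p_survive = math.cos(omega * T / n) ** (2 * n)
    print("n = %4d measurements: P(still undecayed at T) = %.4f" % (n, p_survive))
# As n grows the survival probability tends to 1: the short-time decay is
# quadratic in t, so frequent projections never let it get going. That is
# the Zeno effect in the collapse picture described above.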

John C. Baez

unread,
May 25, 1992, 9:16:12 AM5/25/92
to
In article <1992May23.1...@galois.mit.edu> jb...@littlewood.mit.edu (John C. Baez) writes:
>In Knuth's book Concrete Mathematics it says that if you SPIN a
>brand-new penny it is noticeably more likely to land heads up than tails
>up. It needs to be new. Supposedly this is a way to make money in
>bars. I urge anyone to verify this before quitting school.

A couple of people said via email that I got this backward. It's
supposed to be more likely to land tails up, they say. I just spun a
1991 penny nine times and it landed tails up only 3 times, which doesn't
prove much. One of them also said if you simply stand it on edge and
let it fall over, it tends to land HEADS up. I have trouble doing this
without feeling like I can make it fall over either way I want. In any
event, I am not planning to earn a living making bets on this trick in bars.

John C. Baez

unread,
May 25, 1992, 10:11:08 AM5/25/92
to

In article <22...@forty2.physik.unizh.ch> k...@forty2.physik.unizh.ch (Kurt Sonnenmoser) writes:

>It has been assumed implicitly in the above, but I think that it should
>be made explicit: the assumed Zeno effect involves some sort of belief
>in the (controversial) Collapse Postulate.

I don't think so. I believe in the Zeno effect but not in wavefunction
collapse. I darn well BETTER believe in the Zeno effect, by the way,
because it has been observed.

>(I tend to believe that such a sophisticated measuring device would
>noticeably modify the whole interaction dynamics.)

In the experiment which found the Zeno effect I believe this is the
case. An electron in an atom could be in either a high or low energy
state. Actually, the high energy state really consisted of two very
close energy levels. By bouncing back and forth between these the
electron emitted (and absorbed???) visible light, so one could actually
see it glowing when it was in the high energy state. I am vague about
the details of this, but I guess you have to shine visible light at the
atom to "look" and see which state it's in. By "looking" frequently one
can arrest its decay from the high energy state to the low energy state.
If I have the story right, it does seem that "looking" is noticeably
modifying the interaction dynamics.

I wouldn't mind becoming a lot clearer about this experiment than I am
now. I will not field questions like "so why do YOU think the Zeno
effect is happening if it's not wavefunction collapse" until someone
tells me the details of the experiment.

One thing I think is a dangerously murky aspect of the way people talk
about QM is the business of "measurement must disturb the system". To
my mind, the old "every action causes an equal and opposite reaction,"
or "if A affects B, B affects A," even combined with quantized action,
is rather different than the uncertainty principle, which follows from
noncommutativity. (By the way, "if A affects B, B affects A" was on a
famous list of heretical propositions issued in Paris in the 1200's.)

For example, we can imagine a toy system with an electron spin up or
down, ^ or v, and a detector either ^, v, or "blank," o. We can imagine
a scattering operator as follows. States will be written with the state
of the electron followed by the state of the detector. The scattering
operator is:

^ x o -> ^ x ^
v x o -> v x v
^ x ^ -> ^ x o
v x ^ -> v x ^
^ x v -> ^ x v
v x v -> v x o

Note that the tensor product system has a basis of 2x3=6 states and I
have just described a UNITARY operator on this space. The point is
this: 1) if one wants to measure the electron's spin, put the detector
in the o state and wait; then the detector will indicate the electron's
spin, 2) the initial state of the detector does NOT affect the final
state of the electron - the spin of the electron does not change. To be
quite pedantic, we may say that the electron spin operator S_z commutes
with the above scattering operator (though S_x and S_y do not). So, at
least for the measured observable, there is no kind of Newtonian "if A
affects B, B affects A" stuff going on here: the electron affects the
detector but not vice versa. Of
course, this is a mathematical toy model and I don't know if it could be
implemented. If it couldn't, I'd like to know why.

Let's assume for the moment that it can be implemented. Okay, now
prepare an electron in the state (^ + v)/sqrt(2) and run it through the
detector, and see what you get... :-)

I don't think there's any paradox. All I wanted to note was the logical
distinction between "if A affects B, B affects A" and the weirdnesses
specific to QM.
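
A quick numerical check of this toy model in Python/numpy, reading the six
rules above as a permutation of basis states (the basis ordering and the
final superposition run are additions, not part of the original post):

import numpy as np

UP, DOWN = 0, 1                     # electron states ^ and v
BLANK, D_UP, D_DOWN = 0, 1, 2       # detector states o, ^ and v

def idx(e, d):                      # combined basis index, 0..5
    return 3 * e + d

rules = {                           # the six scattering rules from the post
    (UP, BLANK):    (UP, D_UP),
    (DOWN, BLANK):  (DOWN, D_DOWN),
    (UP, D_UP):     (UP, BLANK),
    (DOWN, D_UP):   (DOWN, D_UP),
    (UP, D_DOWN):   (UP, D_DOWN),
    (DOWN, D_DOWN): (DOWN, BLANK),
}

U = np.zeros((6, 6))
for (e, d), (e2, d2) in rules.items():
    U[idx(e2, d2), idx(e, d)] = 1.0

print("unitary:", np.allclose(U @ U.T, np.eye(6)))          # True
Sz = np.kron(np.diag([0.5, -0.5]), np.eye(3))               # electron S_z, hbar = 1
print("S_z commutes with U:", np.allclose(U @ Sz, Sz @ U))  # True

# The suggested run: electron (^ + v)/sqrt(2), detector blank.
psi = np.zeros(6)
psi[idx(UP, BLANK)] = psi[idx(DOWN, BLANK)] = 1 / np.sqrt(2)
print("output amplitudes:", np.round(U @ psi, 3))
# Only the (^,^) and (v,v) components survive: the output is the entangled
# state (|^,^> + |v,v>)/sqrt(2). The electron's z-spin is untouched, but its
# superposition is now correlated with the detector.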


Jan Willem Nienhuys

unread,
May 25, 1992, 10:47:47 AM5/25/92
to


In the Netherlands, one-guilder coins with Queen Juliana on them
fall with "tails" (the value) up much more often than 50/50. I just
completed a run of 16 tails, 4 heads. The condition is that
you spin the coin fast in a vertical position on a smooth table.
Remarkably, the effect is just the opposite with old-style 2.50 guilder
coins (they come up heads more often).
New-style Dutch coins (with the flatter "Beatrix" design) don't
show the phenomenon.
It is said that with practice the percentage of "tails" with old-style
guilders can be lowered to 10%, but I did not succeed in getting
significantly below 30%.

JWN

John Burnette

unread,
May 25, 1992, 11:49:47 AM5/25/92
to

Here's a fun question I give to my high school students who are starting out
with probability and/or statistics. Take two pairs of pliers and put a 20
degree bend in a penny. Give it to a student with instructions to explore
how the probability of heads/tails changes.

One gets lots of interesting questions regarding the design of the
experiment (how to flip, etc.) and how much evidence one "needs".

Interestingly, very few of the students I have worked with have remembered
to test a "regular" penny. There seems to be a blanket assumption that
the US government makes "fair" coins.

--
John Burnette // The Opinions expressed above are shareware, they //
Choate Rosemary Hall // are <<not>> freeware. If you like them and use //
203-949-0914 // them please send $5 for documentation. //
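
On the "how much evidence do you need" question, a rough sample-size sketch in
Python (the 60/40 bias is an assumed example for illustration, not a measured
property of bent pennies):

import math

p0, p1 = 0.5, 0.6            # fair coin vs. an assumed 60/40 bias
for n in (25, 50, 100, 200, 400):
    sigma = math.sqrt(p0 * (1 - p0) / n)   # std. dev. of the observed fraction
    z = abs(p1 - p0) / sigma               # how many sigma the bias sticks out
    print("n = %3d flips: a bias of %.2f shows up at %.1f sigma" % (n, p1 - p0, z))
# A 60/40 coin needs on the order of a hundred flips to stand out at the
# two-sigma level; a subtler bias needs correspondingly more flips.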

Donald H. Locker

unread,
May 25, 1992, 11:53:23 AM5/25/92
to
In article <1992May23.1...@galois.mit.edu> jb...@littlewood.mit.edu (John C. Baez) writes:
>In article <1992May22.1...@midway.uchicago.edu> jf...@midway.uchicago.edu writes:
>
>>Exponential decay is in fact a theoretical necessity. It is a generic

[majority of discussion elided]

>
>As Israel pointed out, this argument shows what's going on: when you are
>SURE the nucleus has not decayed yet (i.e., it's "new-born"), the decay
>rate must be zero; the decay rate then can "ramp up" very rapidly to the
>value obtained by the usual approximate calculations.
>

It strikes me that the ``new-born'' nucleus does not exhibit a zero
decay rate; in fact the probability of the nucleus being generated
exhibits its non-zero decay rate. The new-born nucleus is merely one
non-decaying member of the population. Its decay rate, in turn, is
the same as that of the rest of the [perhaps already decayed]
population. Comment?

[remainder of discussion also elided]
--

Donald.
``Great minds run in the same gutter.'' me, ca. 1973
Speaking for myself.

Steve Parrott

unread,
May 26, 1992, 1:39:52 AM5/26/92
to
In article <1992May23.1...@galois.mit.edu> jb...@littlewood.mit.edu
(John C. Baez) writes:
>In article <1992May22.1...@midway.uchicago.edu> jf...@midway.uchicago.edu writes:
>
>>Exponential decay is in fact a theoretical necessity. It is a generic
>>quantum mechanical feature of problems in which you have a discrete
>>state (e.g. an excited atomic or nuclear state) coupled to a continuum
>>of states (e.g. the atomic or nuclear system in the ground state and
>>an emitted photon flitting around somewhere).

[Rest of Carlo Graziani quote omitted]

John Baez replies:

>[Some lines omitted]

>Okay. So, Graziani has offered the conventional wisdom, what everyone
>knows about radioactive decay, that it is a "theoretical necessity".
>It's precisely because this is so well-entrenched that I thought I
>should point out that one can easily prove that quantum-mechanical decay
>processes cannot be EXACTLY exponential. There are approximations in
>all of the arguments Graziani cites.

[Much omitted, including a mathematical statement of the problem
and a simple argument that if a subspace of states (the "non-decayed"
states) undergoes an exponential decay law to an orthogonal subspace
(the "decayed" states), then states psi in the "undecayed" subspace
cannot be in the domain of the Hamiltonian H: ||H psi|| = infinity.]

>But I claim that this issue is a red herring, the real point being
>that any state we can *actually prepare* has ||H psi|| < infinity.
>Instead of arguing about this, I would hope that any mathematical
>physicists would just come up with a theorem with weaker hypotheses.]

_____________________________________________________________________

To state my conclusion first, I appreciate John Baez's
mathematical argument (with improvements by Robert Israel),
but it doesn't physically convince me for reasons which I'll now state.

Around 1960 B. Sz-Nagy proved that every strongly continuous
contraction semi-group has a "unitary dilation".
To define "unitary dilation" in general
would be too complicated for this medium,
so I'll just describe what this theorem means for John's problem, using
his notation.

Let P be the projection on the "undecayed" subspace,
and consider the semigroup t --> exp(-t)I , where I is the identity
operator on the undecayed subspace. Sz.-Nagy's theorem implies that
there exists a self-adjoint operator H on the space of all states
such that for all undecayed states f,

P exp(-iHt) f = exp(-t) f        (t >= 0)

The unitary group t --> exp(-iHt) is called a >>unitary dilation<< of
the given contraction semigroup, and the theorem applies to all
strongly continuous contraction semigroups, not just the given one.

John's argument shows that vectors f in the "undecayed" subspace
cannot be in the domain of H. But to my mind, it's quite a leap
(a quantum leap, in fact :-) )
from this observation to his conclusion that

"one can easily prove that quantum-mechanical decay
processes cannot be EXACTLY exponential."

The word "prove" bothers me.

First of all, the condition ||H f|| = infinity is not the same
as saying that the state f has infinite energy. The expectation of
the energy of f is <Hf,f>, whereas ||Hf||^2 = <Hf,Hf> = <H^2f,f>,
the expectation of the square of the energy.
I don't find it hard to believe that states which can be physically
observed might have average infinite squared-energy.
(Average energy infinite would bother me a little more, but only a little.)

For a more far-out (but serious) objection, suppose
that time were discrete rather than continuous. Suppose that time proceeded
in unimaginably small steps, say 10^-99 sec. One would expect to be able
to construct a discrete-time quantum theory which would approximate
arbitrarily closely the usual continuous-time theory.
In the discrete theory, Sz.-Nagy's result applies without caveat,
and EXACT exponential decay is mathematically possible.

If exact exponential decay can be proved impossible for continuous
time, I would be more inclined to wonder what is wrong with
the mathematical set-up of the continuous-time theory
than to look for physical examples of violation of exponential decay.

s...@cs.umb.edu

Stephen Parrott
Department of Mathematics and Computer Science
University of Massachusetts at Boston
100 Morrissey Boulevard
Boston, MA 02125

Jon J Thaler

unread,
May 26, 1992, 12:11:04 PM5/26/92
to
d...@mrdog.msl.com (Donald H. Locker) says:

> jb...@littlewood.mit.edu (John C. Baez) writes:

>> As Israel pointed out, this argument shows what's going on: when you are
>> SURE the nucleus has not decayed yet (i.e., it's "new-born"), the decay
>> rate must be zero; the decay rate then can "ramp up" very rapidly to the
>> value obtained by the usual approximate calculations.

> It strikes me that the ``new-born'' nucleus does not exhibit a zero
> decay rate; in fact the probability of the nucleus being generated
> exhibits its non-zero decay rate. The new-born nucleus is merely one
> non-decaying member of the population. Its decay rate, in turn, is
> the same as that of the rest of the [perhaps already decayed]
> population. Comment?

The effect an observer has on the time evolution of a system is called
the "watchdog effect." Two papers on this topic are:

CONTINUOUS MEASUREMENT: WATCHDOG EFFECT VERSUS GOLDEN RULE.
By E. Joos (Heidelberg U., ITP), 1984.
Phys. Rev. D29 (1984) 1626-1633.

MEASURING PROCESSES IN QUANTUM MECHANICS. 1. CONTINUOUS OBSERVATION
AND THE WATCHDOG EFFECT.
By K. Kraus (Texas U.), DOE-ER-03992-390, Apr 1980. 59pp.

I got these off the SPIRES database (haven't read them).

John C. Baez

unread,
May 26, 1992, 12:31:14 PM5/26/92
to
In article <1992May25.1...@mrdog.msl.com> d...@mrdog.msl.com (Donald H. Locker) writes:

>>As Israel pointed out, this argument shows what's going on: when you are
>>SURE the nucleus has not decayed yet (i.e., it's "new-born"), the decay
>>rate must be zero; the decay rate then can "ramp up" very rapidly to the
>>value obtained by the usual approximate calculations.
>>
>
>It strikes me that the ``new-born'' nucleus does not exhibit a zero
>decay rate; in fact the probability of the nucleus being generated
>exhibits its non-zero decay rate. The new-born nucleus is merely one
>non-decaying member of the population. Its decay rate, in turn, is
>the same as that of the rest of the [perhaps already decayed]
>population. Comment?

Well, what can I say? Reread the proof that it DOES exhibit a zero decay
rate at t = 0. Perhaps you're getting mixed up because the "new-born"
radioactive nucleus is *itself* typically the product of the radioactive
decay of a *different* species of nucleus. The issue at hand is,
however, not how it got there, but what it's going to do now.
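
A minimal numerical illustration of that zero-initial-decay-rate claim, in
Python/numpy, with an arbitrary two-level toy Hamiltonian (nothing nuclear
about it; it only shows the generic quadratic short-time behaviour):

import numpy as np

H = np.array([[0.0, 1.0], [1.0, 0.0]])   # toy Hamiltonian
psi0 = np.array([1.0, 0.0])              # the "new-born" state

def survival(t):
    # P(t) = |<psi0| exp(-iHt) |psi0>|^2, computed by diagonalizing H
    w, V = np.linalg.eigh(H)
    Ut = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
    return abs(psi0 @ (Ut @ psi0)) ** 2

for t in (1e-1, 1e-2, 1e-3, 1e-4):
    P = survival(t)
    print("t = %g:  1 - P(t) = %.3e,  (1 - P)/t = %.3e" % (t, 1 - P, (1 - P) / t))
# 1 - P(t) falls off like t^2, so the instantaneous decay rate (1 - P)/t
# goes to zero as t -> 0: the "ramp up from zero" described above, valid
# whenever ||H psi|| is finite.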

Leigh Palmer

unread,
May 26, 1992, 5:06:36 PM5/26/92
to
In article <1992May25....@galois.mit.edu> jb...@littlewood.mit.edu

Wrong experiment, and not only because of the lack of sufficient
trials. To test Knuth's claim you must obtain several rolls of brand new
pennies and spin each of them. It would further help if the experiment were
repeated with rolls obtained from different mints.

It is my expectation that there will be a systematic asymmetry found in
such an experiment, and some static and dynamic mechanical experiments
on the pennies should shed light on the effect responsible, whether it
is displacement of the center of gravity in the stamping process or the
systematic "coning" of the penny's rim to favor one face.

I'm not going to do the experiment, but I will keep it in mind as a
student exercise. I've colleagues who would do it if they thought it
would yield a quick publication, especially in PRL.

Leigh

John C. Baez

unread,
May 26, 1992, 7:51:44 PM5/26/92
to
In article <1992May26.2...@sfu.ca> pal...@sfu.ca (Leigh Palmer) writes:

>I'm not going to do the experiment, but I will keep it in mind as a
>student exercise. I've colleagues who would do it if they thought it
>would yield a quick publication, especially in PRL.

I know people who would sit on an anthill covered with honey if they
thought it would yield a quick publication, especially in PRL. :-)

I'm amazed, though, that you think the pennies have to be BRAND BRAND
BRAND new to make the experiment work.

Kenneth Tolman

unread,
May 26, 1992, 7:47:05 PM5/26/92
to
I just spun a 1991 D penny and got the following results:

Heads up: 12
Tails up: 37

I supported the coin from above with my left hand, and struck it sharply
with my right. Every spin reached a "fair" spin where the axis of rotation
was clearly visible as a single spot. The surface was very smooth.
If a bunch of other people tried it, maybe we could reach some sort of
statistical significance....

I'll see you in the bar.
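
Taking up the statistical-significance invitation, here is an exact binomial
check in Python of the 12-heads-in-49-spins result against a fair coin
(doubling the one-sided tail is only a rough two-sided p-value):

from math import comb

heads, tails = 12, 37
n = heads + tails

# P(X <= 12) for X ~ Binomial(49, 1/2): the chance a fair coin does this
# badly or worse on the low side.
p_one_sided = sum(comb(n, k) for k in range(heads + 1)) / 2 ** n

print("one-sided tail probability = %.2e" % p_one_sided)
print("rough two-sided p-value    = %.2e" % (2 * p_one_sided))
# A few parts in 10^4 - already hard to reconcile with a fair spin, though
# pinning down the actual bias would take many more spins (and more coins).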


Leigh Palmer

unread,
May 27, 1992, 4:52:55 PM5/27/92
to
In article <1992May26....@galois.mit.edu> jb...@riesz.mit.edu (John C.
Baez) writes:

>I'm amazed, though, that you think the pennies have to be BRAND BRAND
>BRAND new to make the experiment work.

I don't. I just like to eliminate complications which are readily avoided.
I do feel that a representative sample should be used rather than a single
coin, however.

Kenneth Tolman's result looks highly significant to me and is even, ah,
suggestive:

In article <1992May26....@hellgate.utah.edu>

Someone should do this with a 1991 S penny. I want to know if it's easier
to get tail in Denver than in San Francisco.

Leigh
