
Uncertainty and Positivism


john smith

Aug 20, 1998
Not being a physicist, my interest is of a philosophical nature. I
understand that an uncertainty is inherent in the ability to
simultaneously measure certain pairs of observables, such as position
and momentum, and that this uncertainty is apparently a consequence of
the effect of observing on that which is observed. There is a plethora
of information on this phenomenon and all the many ways it manifests
itself.

Additionally, the Copenhagen Interpretation takes a Logical Positivist
position by asserting not only that it is impossible to measure certain
observables with certainty, but that the uncertainty is an inherent
property of the particle regardless of whether it is measured or not.
This is one of the main tenets of the positivist school to which Bohr
subscribed: if it can't be observed, it's meaningless to contemplate its
existence. In other words, it is meaningless to even speculate as to
where, for example, an electron is when measuring its momentum. This is
important. It implies, among other things, that the position of the
electron does not exist prior to a position measurement.

So what I'm wondering is how one can make the leap from an uncertainty
in measurement to an inherent uncertainty? Has the entire scientific
community adopted the positivist doctrine, or is there a scientific
basis for this? Despite all the text on the Uncertainty Principle, I've
yet to find the postulate that elevates a philosophical belief to a
scientific principle.
And is it not impossible to use the same logical positivism that
asserts inherent uncertainty to predict specific entities which cannot
be empirically observed, such as virtual particles? This would be
self-contradictory.
And specifically, what experimental evidence is there for virtual
particles? I am familiar with the Casimir effect. But is it conclusively
a result of the pressure of virtual particles? Are there credible
physicists who dispute this, and virtual particles in general? If so,
who might they be?
I'm not taking a position as to the existence of virtual particles.
Mere consensus alone suggests to me that they must exist. But I need
more than a philosophical position to warrant their acceptance. I need
proof. But overall I just need to know how the uncertainty of
measurement translates to an inherent uncertainty. Any help on this
would be very greatly appreciated.


robin motz

Aug 22, 1998
In his note, John Smith asks about "proof" for the existence of virtual
particles, separate from the Casimir effect.
One way to "demonstrate" their existence is to point out that
scattering calculations that agree with particle experiments generally
use higher-order Feynman diagrams that incorporate the creation and
annihilation of virtual particles.


torqu...@my-dejanews.com

Aug 22, 1998
In article <35DBC8F3...@notmail.com>,
john smith <sm...@notmail.com> wrote:

> So what I'm wondering is how one can make the leap from an
> uncertainty in measurement to an inherent uncertainty? Has the entire
> scientific community adopted the positivist doctrine or is there a
> scientific basis for this.

A particle in a state corresponding to an uncertainty in position is in a
superposition of states each of which corresponds to it being in a different
position. Each of the superimposed pieces acts just like it is in a definite
position, but the actual particle state is the sum of these pieces. When
physics is like this it's hard for me to see why there is any motivation at
all to talk about the particle having one definite position. Surely the onus
is on whoever would like to talk about a definite position to justify their
need.

I have a hunch (correct me please if I'm wrong!) that your knowledge of
quantum mechanics is coming from philosophical writings and the popular
physics press. There is absolutely no substitute for actually getting your
hands dirty and doing some real QM if you want to make sense of these issues.

The mathematical model which one uses to do quantum physics contains no
concept of definite position for particles not in a position eigenstate. The
model works well. In this sense a physicist can declare categorically that
the uncertainty is inherent - it is inherent in the model. Whether that
translates into a statement about reality is a metaphysical question - we can
only describe the world through our models of it. Is that logical positivism?

I suppose that begs the question: "Why do we use a model without a concept of
definite position?". Because the model physicists use is, in some sense, the
minimal model that works. You could, if you wanted to, add in a concept of
definite position. But it would serve absolutely no purpose whatsoever. It
would be completely decoupled from the rest of the theory. Why should anyone
use such a model? Again the onus is on the person who would like such a model
to explain their stand.
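
As a toy illustration (this is just a sketch of my own; the grid, the
Gaussian weights and the use of numpy are arbitrary choices, not anything
canonical), one can write such a superposition down explicitly and see
that the spread is a property of the state itself, with nothing in the
model singling out "the" position:

import numpy as np

# A toy particle on a grid: the state is a superposition of position
# eigenstates |x_i>, here with Gaussian weights centred on x = 0.
x = np.linspace(-5.0, 5.0, 201)                  # possible positions
amp = np.exp(-x**2 / 2.0)                        # unnormalised amplitudes
amp = amp / np.sqrt(np.sum(np.abs(amp)**2))      # normalise the state

prob = np.abs(amp)**2                            # Born-rule probabilities
mean_x = np.sum(prob * x)                        # <x>
spread = np.sqrt(np.sum(prob * x**2) - mean_x**2)

print("total probability:", prob.sum())          # ~1.0
print("<x> =", mean_x, " delta x =", spread)
# Nothing in this description assigns the particle one definite grid
# point; the spread delta x belongs to the state, not to any act of
# measurement.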

> And is it not impossible to use the same logical positivism that
> asserts inherent uncertainty
> to predict specific entities which cannot be empirically observed such
> as virtual particles?

This is a sci.physics FAQ. There are no such things as virtual particles
except as intermediate terms in calculations and as a guide to intuition that
needs to be used with great care. Even then they are only an artifact of
using one particular approach to QM - perturbation theory.

> ...[Casimir effect]... But is it conclusively a result of the pressure of
> virtual particles?

Virtual particles are frequently invoked in descriptions of physical
processes for laypeople (Hawking radiation, Casimir effect etc.). As the
disclaimer goes: "This is for entertainment purposes only". In my experience
these descriptions bear little relation to the real physics. In fact if I
ever meet the person who started the story about Hawking radiation being
about virtual particles escaping a gravity well I'd personally like to
strangle them... :-)

-- Torque
http://www.grin.net/~tanelorn

-----== Posted via Deja News, The Leader in Internet Discussion ==-----
http://www.dejanews.com/rg_mkgrp.xp Create Your Own Free Member Forum


Joel Davis

Aug 22, 1998
In article <35DBC8F3...@notmail.com>, john smith says...

> Not being a physicist, my interest is of a philosophical nature. I
> understand that an
> uncertainty is inherent in the ability to simultaneously measure certain
> pairs of observables
> such as position and momentum. And that this uncertainty is apparently a
> consequence of
> the affect of observing on that which is observed.

I don't believe that this is quite accurate. As you note further on--

> Additionally, the Copenhagen Interpretation takes a Logical
> Positivist position by
> asserting that, not only is it impossible to measure certain observables
> with certainty, but that the
> uncertainty is an inherent property of the particle regardless of it's
> being measured or not.

My admittedly non-scientist-but-interested-science-writer understanding
of QM is that this latter statement is in fact correct--that the
uncertainty in measuring certain pairs of properties of subatomic
entities such as electrons is NOT a product of the act of measuring, but
is an inherent property of "reality" itself. Note that I put "reality" in
quotes deliberately....

> In other
> words, it is meaningless to
> even speculate as to where, for example, an electron is when measuring
> it's momentum. This is
> important. It implies, among other things, that the position of the
> electron does not exist prior
> to a position measurement.

No. It does not imply that. It seems to me that what the Copenhagen
Interpretation is saying, is simply that it is meaningless to speculate
on the position of an electron when one precisely measures its momentum.
Nothing more, nothing less. To say that "the position of the electron
does not exist" prior to a measurement requires knowledge--i.e.,
information about the electron. You must have SOME information in order
to say that the electron's position "exists" or "does not exist." But
that information is not available until a measurement is made. So all we
can say is that it is meaningless to talk about it until a measurement is
made. That's it.

> So what I'm wondering is how one can make the leap from an
> uncertainty in measurement to an inherent uncertainty? Has the entire

We don't. See comment above.

> Has the entire
> scientific community adopted the positivist doctrine or is there a
> scientific basis for this. Despite all the text on the Uncertainty

Aha. Now this is a different question. Science as we understand it and
practice it today is most certainly based on the philosophical position
called Logical Positivism. That's well-known to both scientists who
are interested in this kind of thing, and philosophers of science.

> And is it not impossible to use the same logical positivism that
> asserts inherent uncertainty
> to predict specific entities which cannot be empirically observed such
> as virtual particles? This would be self-contradictory.

Nope. I'm in too deep myself already. I'm not gonna try talking about
virtual particles.... Except to say that in physics, "virtual" does not
equal "not-real".

I think.... I am a little uncertain about this... ;-)
--

_________________________________________________________________________
Joel Davis * jda...@memes.com * Latest book: "Alternate Realities"!
Check with your local bookstore or: http://www.plenum.com,
http://www.bookstore.washington.edu, or http://www.villagebooks.com.
"Spend the afternoon. You can't take it with you." --Annie Dillard


Word...@rocketmail.com

Aug 22, 1998
In article <6rkba8$tvk$1...@nnrp1.dejanews.com>,
torqu...@my-dejanews.com wrote:
> john smith <sm...@notmail.com> wrote:
>
> > So what I'm wondering is how one can make the leap from an
> > uncertainty in measurement to an inherent uncertainty? Has the entire
> > scientific community adopted the positivist doctrine or is there a
> > scientific basis for this?

No, though many physicists will adopt a positivistic stance when pressed on
these issues. When at work, most physicists speak as though they think of
electrons and such as 'real', existing 'out there', independently of us. Which
is interesting, since that was Einstein's view, whereas Einstein is commonly
supposed to have lost the EPR debate.

> A particle in a state corresponding to an uncertainty in position is in a
> superposition of states each of which corresponds to it being in a different
> position. Each of the superimposed pieces acts just like it is in definite
> position but the actual particle state is the sum of these pieces. When
> physics is like this it's hard for me to see why there is any motivation at
> all to talk about the particle having one definite position. Surely the onus
> is on whoever would like to talk about a definite position to justify their
> need.

There is the difficulty that, when we observe things in the world around us,
they typically occupy a given place at a given time--and this is made explicit
in Minkowski's definition of 'event', though of course he was discussing
relativity.

> I have a hunch (correct me please if I'm wrong!) that your knowledge of
> quantum mechanics is coming from philosophical writings and the popular
> physics press. There is absolutely no substitute for actually getting your
> hands dirty and doing some real QM if you want to make sense of these issues.

But your concerns are echoed by many of the foremost thinkers in this area.
See Bell's *Speakable and Unspeakable in QM* or Chris Isham's excellent
recent book on the subject--the title of which escapes me at the moment.


> The mathematical model which one uses to do quantum physics contains no
> concept of definite position for particles not in a position eigenstate. The
> model works well. In this sense a physicist can declare categorically that
> the uncertainty is inherent - it is inherent in the model. Whether that
> translates into a statement about reality is a metaphysical question - we can
> only describe the world through our models of it. Is that logical positivism?
>
> I suppose that begs the question: "Why do we use a model without a concept of
> definite position?". Because the model physicists use is, in some sense, the
> minimal model that works. You could, if you wanted to, add in a concept of
> definite position. But it would serve absolutely no purpose whatsoever. It
> would be completely decoupled from the rest of the theory. Why should anyone
> use such a model? Again the onus is on the person who would like such a model
> to explain their stand.

See, e.g., *Undivided Universe* by Bohm & Hiley.

> > And is it not impossible to use the same logical positivism that
> > asserts inherent uncertainty
> > to predict specific entities which cannot be empirically observed such
> > as virtual particles?

Logical positivism per se cannot provide an answer as to whether unobserved
entities are inherently uncertain, or not. The answer is contingent upon both
observation and physical theory.

Johnathan Smith

Aug 23, 1998
> I have a hunch (correct me please if I'm wrong!) that your knowledge of
> quantum mechanics is coming from philosophical writings and the popular
> physics press. There is absolutely no substitute for actually getting your
> hands dirty and doing some real QM if you want to make sense of these issues.

You are incorrect.

> The mathematical model which one uses to do quantum physics contains no
> concept of definite position for particles not in a position eigenstate. The
> model works well. In this sense a physicist can declare categorically that
> the uncertainty is inherent - it is inherent in the model. Whether that
> translates into a statement about reality is a metaphysical question - we can
> only describe the world through our models of it. Is that logical positivism?

Your contention is that, since uncertainty is inherent in the model, and the
model works well, then uncertainty is inherent in nature as well. This is not
positivism. This would be Idealism, if not downright Anthropocentrism: the
model is the reality, etc.
I have a couple of physics friends who take this approach. They use QM every
day in practical ways. But when they encounter one of the many enigmas, such
as simultaneously live and dead cats, their eyes glaze over and they fall back
on the "QM is the most accurate theory ever so who cares?" mantra.
It is a natural inclination, though, to seek causative relationships to
physical phenomena, metaphysicist or not.


> This is a sci.physics FAQ. There are no such things as virtual particles
> except as intermediate terms in calculations and as a guide to intuition that
> needs to be used with great care. Even then they are only an artifact of
> using one particular approach to QM - perturbation theory.

I'm not sure if this is a semantical argument. I'm certainly not an expert
in gauge theories, but don't they all have "virtual particles" as carriers of
force? And if they carry force, does this not suggest a strong footing in reality?

> Virtual particles are frequently invoked in descriptions of physical
> processes for laypeople (Hawking radiation, Casimir effect etc.). As the
> disclaimer goes: "This is for entertainment purposes only". In my experience
> these descriptions bear little relation to the real physics. In fact if I
> ever meet the person who started the story about Hawking radiation being
> about virtual particles escaping a gravity well I'd personally like to
> strangle them...

Whoa... it wasn't me!

Again, what empirical evidence is there to support the assertion of
inherent uncertainty? Unlike the Copenhagen Interpretation, you assert the
wavefunction corresponds to a superposition, which, incidentally, makes
sense to me. But what evidence is there to support this? Is this just inferred?
And, more importantly, what empirical evidence is there to support quantum
vacuum fluctuations? Is that better?

Cheers, John


John Forkosh

Aug 23, 1998
john smith (sm...@notmail.com) wrote:
<snip>
: So what I'm wondering is how one can make the leap from an
: uncertainty in measurement to an inherent uncertainty?
<snip>
Let me refer you to the excellent (in my opinion) undergraduate
textbook "An Introduction to Hilbert Space and Quantum Logic,"
David W. Cohen, Springer-Verlag 1989, ISBN 0-387-96870-9.
On pages 39, 41-42, and 65-66 Cohen distinguishes between
epistemic uncertainty (_our_ lack of knowledge about which pure
state a system is in) versus ontological uncertainty (the inherent
uncertainty you're concerned about).
Mathematically, ontological uncertainty arises when a _pure_
state exhibits dispersion. When a mixture exhibits dispersion,
that's just epistemic uncertainty.
Cohen's page 65-66 discussion contains a detailed example,
with the pages preceding that introducing the necessary
mathematical concepts. I think his discussion may help you
understand what _people_ mean when they refer to this
distinction (what _nature_ means by exhibiting the observed
behavior is, of course, open to question).
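
To make the pure-state/mixture contrast concrete, here is a small
numerical sketch of my own (a spin-1/2 toy example; the matrices and the
numpy code are my choices, not something taken from Cohen's book). A pure
superposition already shows dispersion in sigma_z, yet it has a
dispersion-free observable (sigma_x); the maximally mixed state shows
dispersion for every spin component, part of which is just our ignorance
of how it was prepared.

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # |+> = (|0>+|1>)/sqrt(2)
rho_pure = np.outer(plus, plus.conj())                # pure state
rho_mixed = np.eye(2, dtype=complex) / 2              # 50/50 classical mixture

def dispersion(rho, A):
    # Variance of observable A in state rho: tr(rho A^2) - tr(rho A)^2
    mean = np.trace(rho @ A).real
    return np.trace(rho @ A @ A).real - mean**2

print("pure  |+>: var(sz) =", dispersion(rho_pure, sz))   # 1.0
print("pure  |+>: var(sx) =", dispersion(rho_pure, sx))   # 0.0
print("mixed I/2: var(sz) =", dispersion(rho_mixed, sz))  # 1.0
print("mixed I/2: var(sx) =", dispersion(rho_mixed, sx))  # 1.0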
John (for...@panix.com)


Harry Johnston

Aug 24, 1998
john smith <sm...@notmail.com> wrote:

> And specifically, what experimental evidence is there for virtual
> particles?

The mathematical models of Quantum Electrodynamics, etc., are very
very successful in explaining experimental results.

Virtual particles - as found in Feynman diagrams - are a natural way
of expressing the mathematics found in these models. They express the
mathematics in a way which can be visualised by the human mind.

As such, they don't have to actually exist; in fact, I don't think
there is any meaningful way of asking the question.

Incidentally, have you looked up the word "virtual" in a dictionary?
;-)

[Moderator's note: As Niels Bohr said, the problem is not in the
language---the problem IS the language. :) -P.H.]

Harry.

---
Harry Johnston, om...@ihug.co.nz
One Earth, One World, One People


torqu...@my-dejanews.com

Aug 25, 1998
In article <35DFB1E9...@notmail.com>,
Johnathan Smith <sm...@notmail.com> wrote:

> > I have a hunch...that your knowledge of quantum mechanics is coming from
> > philosophical writings and the popular physics press.
>
> You are incorrect.

Good!

> > The...
> > model works well. In this sense a physicist can declare categorically that
> > the uncertainty is inherent - it is inherent in the model. Whether that
> > translates into a statement about reality is a metaphysical question - we can
> > only describe the world through our models of it. Is that logical positivism?
>
> Your contention is that, since uncertainty is inherent in the model, and the
> model works well, then uncertainty is inherent in nature as well.

No. In fact I'm trying (obviously unclearly!) to deny that in the very
statement you are commenting on :-) I'm saying that we can only talk about
nature through our models of it. One day a new model may become fashionable
in which there is no uncertainty. (Although such a model will necessarily be
quite weird.) All physicists have to live with the possibility that their
models will be overthrown. Nonetheless it seems to me acceptable for
physicists to say today that the uncertainty is inherent. What do you expect
physicists to do? Preface all their statements with "According to our
models..."? What a waste of air!

> But when they encounter one of the many enigmas such
> as simultaneous live and dead cat's their eyes glaze over and they fall back
> on the "QM is the most accurate theory ever so who cares?" mantra.

And why not? What are you personally looking for in a *physical* theory? (And
what's enigmatic about a superposition of dead and alive cats anyway?)

> I'm not sure if this is a semantical argument. I'm certainly not an expert
> inGauge Theories, but don't they all have "virtual particles" as carriers of
> force? And if they carry force, does this not suggest a strong footing in
> reality?

But they only occur as intermediate terms in calculations. Rearrange the
calculation and you can use a different set of virtual particles to evaluate
the same thing. This suggests no footing in reality to me. If you're lucky
enough to have a physical system that you can solve using nonperturbative
methods then there are no virtual particles at all. Unlike your other question,
this, IMHO, is a complete red herring. It's like arguing over whether the
individual terms in the power series of sin and cos have a physical presence
in a system containing an object moving in circles! (Nonetheless you may use
such terms to make good predictions.)
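
To push the analogy one step further, here is a trivial sketch of my own
(plain Python, nothing specific to QFT): the partial sums of the sine
series predict the motion perfectly well, but no individual term in the
series corresponds to anything physical on its own.

import math

def sin_partial_sum(x, n_terms):
    # Approximate sin(x) by the first n_terms terms of its Taylor series.
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(n_terms))

x = 2.0
for n in (1, 3, 6, 10):
    print(n, "terms:", sin_partial_sum(x, n), " exact:", math.sin(x))
# The individual terms are calculational intermediates; only their sum is
# ever compared with observation - much like the virtual-particle terms
# of a perturbative expansion.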

> Again, what empirical evidence is there to support the assertion of
> inherent uncertainty. Unlike the Copenhagen Int. you assert the wavefuntion
> corresponds to a superposition which, incidentally, makes sense to me.
> But what evidence is there to support this? Is this just inferred?

It's inferred. What do you believe that isn't inferred? Is inference somehow,
in your opinion, weaker than another way to acquire knowledge so that you
need to qualify the word 'inferred' with 'just'?

> And, more importantly, what empirical evidence to support quantum vacuum
> fluctuations? Is that better?

Er...the Casimir effect? But of course that's inference and you don't like
inference :-)

But again a lot of talk about quantum 'fluctuations' shouldn't be taken too
seriously. There's a standard picture presented to laymen that starts "Not
only is there uncertainty in momentum and position, there's also one in
energy and time. This means that for short periods of time fluctuations in
energy can come into existence...". In the years I spent doing QFT I never
heard of anyone *really* using this argument except in the introductory pages
of their books and as a vague guide to intuition (to be followed up by real
calculation).

--
http://www.grin.net/~tanelorn

Johnathan Smith

Aug 26, 1998
torqu...@my-dejanews.com wrote:

[Moderator's note: Lines reformatted to make them shorter. Although 80
is a maximum to enable nice display on almost all terminals, it's better
to keep your own, original, unquoted text in posts at 72 characters per
line or less, to enable it to be quoted a couple of times and still keep
things at less than 80 characters per line total. Also, don't use more
than 2 characters (including space) for a quote symbol. I also trimmed
some quoted text. -P.H.]

> > I'm not sure if this is a semantical argument. I'm certainly not an expert
> > inGauge Theories, but don't they all have "virtual particles" as carriers of
> > force? And if they carry force, does this not suggest a strong footing in
> > reality?
>
> But they only occur as intermediate terms in calculations. Rearrange the
> calculation and you can use a different set of virtual particles to evaluate
> the same thing. This suggests no footing in reality to me. If you're lucky
> enough to have a physical system that you can solve using nonperturbative
> methods then there are no virtual particles at all. Unlike you other question
> this, IMHO, is a complete red herring. It's like arguing over whether the
> individual terms in the power series of sin and cos have a physical presence
> in a system containing an object moving in circles! (Nonetheless you may use
> such terms to make good predictions.)

Is there an imperative for a theory to not only make accurate
predictions, but to reflect objective reality as well? It would appear
many physicists think so. On a tip from another post, I ran a keyword
search in xxx.lanl for the words ontological and objective. There was a
plethora of papers expressing dissatisfaction with the current theories
for their lack thereof. As is well known, de Broglie, Einstein, Bohm,
Bell, and Schrodinger all expressed similar concerns. So it's not just us
lay-people.

Is there a value in a physically objective theoretical basis other than
metaphysical? Again, I'm not a physicist, but it seems obvious to me
that there is. If not just for the simple fact that objectively
describing nature is one of the primary goals of science, it seems to me
that any theory that is not physically justifiable, and ontologically
valid, will be far less fruitful at illuminating new and unforeseen
phenomena. In other words, reality is far more inspiring.


> What do you believe that isn't inferred? Is inference somehow,
> in your opinion, weaker than another way to acquire knowledge so that you
> need to qualify the word 'inferred' with 'just'?

You're correct. I have no problem accepting the curvature of space-time
in GR, but as far as I know there is no way to directly observe it. So I
retract the word "just".

> But again a lot of talk about quantum 'fluctuations' shouldn't be taken too
> seriously. There's a standard picture presented to laymen that starts "Not
> only is there uncertainty in momentum and position, there's also one in
> energy and time. This means that for short periods of time fluctuations in
> energy can come into existence...". In the years I spent doing QFT I never
> heard of anyone *really* using this argument except in the introductory pages
> of their books and as a vague guide to intuition (to be followed up by real
> calculation).

"As far as the laws of mathematics refer to reality, they are not
certain, and as far as they are certain,they do not refer to reality. "
Albert Einstein


torqu...@my-dejanews.com

Aug 27, 1998
In article <35E3615C...@notmail.com>,
Johnathan Smith <sm...@notmail.com> wrote:

> Is there an imperative for a theory to, not only make accurate
> predictions, butto reflect objective reality as well? It would appear
> many physicist think so...So it's not just us
> lay-people.

I've tried to keep your queries separated into two distinct parts. A question
about 'inherent uncertainty' and a question about the existence of 'virtual
particles' (and 'fluctuations').

I think your first question is deep and I don't want to trivialise it (though
sci.physics.research possibly isn't the right place to discuss it for too long
as it's a philosophical question).

The second question is completely separate and (IMHO) a red herring partly
confused by writings for the general public. Are there any papers about
whether virtual particles really exist at xxx.lanl.gov? I hope not too many!
Virtual particles exist only as a very elegant intermediate stage in certain
QFT *calculations*.

> I have no problem accepting the curvature of space-time
> inGR but as far as I know there is no way to directly observe it.

That's a debatable point.

I think you may need to clarify what you mean by 'directly observe'.
Everyone's observations are theory-laden. If you watch a freefalling apple on
its hyperbolic path as a Newtonian observer you will say that you observe
directly the action of gravitational force. A General Relativist, on the
other hand, is not unlikely to say that they are directly observing the
curvature of spacetime. (I do! :-)

--
http://www.grin.net/~tanelorn

torqu...@my-dejanews.com

Aug 27, 1998
In article <1WGE1.175$vA1.7...@news.san.rr.com>,
Todd Desiato <tde...@san.rr.com> wrote:

> I doubt there will ever be a model where there is no uncertainty. Consider
> dropping a rock into a flat pond and creating waves.
>
> Do any of the waves have an exact position? No...

He he! I think you've actually given a very eloquent argument for why
uncertainty in physics might actually go away! In your examples you have
explained apparent uncertainty in terms of something more fundamental that is
deterministic.

> It depends on what phase of the wave you consider.

In other words you're saying that there isn't really uncertainty at all but a
problem with definitions. We should be careful when trying to apply properties
of particles (eg. position and momentum) to waves.

Johnathan Smith

Sep 1, 1998
torqu...@my-dejanews.com wrote:

> I've tried to keep your queries separated into two distinct parts. A question
> about 'inherent uncertainty' and a question about the existence of 'virtual
> particles' (and 'fluctuations').
>
> I think your first question is deep and I don't want to trivialise it (though
> sci.physics.research possibly isn't the right place to discuss it for too long
> as it's a philosophical question).
>
> The second question is completely separate and (IMHO) a red herring partly
> confused by writings for the general public. Are there any papers about
> whether virtual particles really exist at xxx.lanl.gov? I hope not too many!
> Virtual particles exist only as a very elegant intermediate stage in certain
> QFT *calculations*.

Thank you for acknowledging the validity of the question. Though my
motive for posing the question may be philosophical, I don't think the
discussion entirely is. But regardless, the reason I chose sci.research,
aside from the quality of the responses, is that I am looking for any
experiment, thought experiment, or just any paper that claims to address
the objective nature of the wavefunction.

I did run a search in lanl with the word 'virtual' and it found nothing.
And though your point on 'virtual particles' is interesting, it seems to
be rooted in the same issue: is there any objective basis for their
existence? And you apparently say there is not.

I have run across one experiment that may put objectivity into the
wavefunction. This is the double slit experiment where both slits are
open, but only one particle is made to pass at a time. And yet after
enough particles have passed, an interference pattern begins to emerge,
in spite of the fact that only one particle is going through at a time.

I've read papers attributing this phenomenon to the knowledge of an
observer or the lack thereof. However, it seems far more reasonable to
me to attribute the interference to each individual particle going
through both holes and each particle's wavefunction interfering with
itself. As bizarre as that sounds, it certainly is less bizarre than
entering some parapsychological effect into the theory. And if the
wavefunction of a single particle can interfere with itself, does this
not imply that the wavefunction is a real physical "thing", and not
just probability?

There's another thought experiment that places a detector at only one
of the holes, say hole B, so we may conclude the particle went through
hole A if we don't detect it at hole B. It is argued that the knowledge
of which hole the particle went through is enough to destroy
interference, because the wavefunction of the particles that go through
hole A is altered even though the detector only interacts with particles
that go through hole B. But if you consider that each particle can in
fact go through both holes simultaneously, then the detector must
interact with all particles, thus eliminating the role of sentient
beings other than choosing what to observe, and possibly offering
empirical evidence for the objective reality of the wavefunction. Just
a thought.

> > I have no problem accepting the curvature of space-time
> > inGR but as far as I know there is no way to directly observe it.
>
> That's a debatable point.
>
> I think you may need to clarify what you mean by 'directly observe'.
> Everyone's observations are theory-laden. If you a freefalling apple on its
> hyperbolic path as a Newtonian observer you will say that you observe
> directly the action of gravitational force. A General Relativist, on the
> other hand, is not unlikely to say that they are directly observing the
> curvature of spacetime. (I do! :-)

I was just saying that you can't directly observe empty space because
it lacks any observable "substance". And though we may directly observe
effects of space-time curvature, the actual geometrical characteristics
are only inferred. It is interesting to note that Einstein found the
idea of something without substance having geometrical properties
intellectually revolting, which led him to suppose informally the
existence of some kind of ether. I think the quote I'm referring to was
in some late interview. I just find it ironic that SR eliminated the
ether and GR kind of requires it, at least to Einstein.

Cheers, John


Jim Carr

Sep 3, 1998
someone (from dejanews) wrote:
}
} I have a hunch (correct me please if I'm wrong!) that your knowledge of
} quantum mechanics is coming from philosophical writings and the popular
} physics press. There is absolutely no substitute for actually getting your
} hands dirty and doing some real QM if you want to make sense of these issues.

Johnathan Smith <sm...@notmail.com> writes:
>
> You are incorrect.

On which part? I think there is no substitute for getting your hands
dirty and calculating something. The width of an unbound level is
my choice for this sort of discussion.

} The mathematical model which one uses to do quantum physics contains no
} concept of definite position for particles not in a position eigenstate. The
} model works well. In this sense a physicist can declare categorically that
} the uncertainty is inherent - it is inherent in the model. Whether that
} translates into a statement about reality is a metaphysical question - we can
} only describe the world through our models of it. Is that logical positivism?

> Your contention is that, since uncertainty is inherent in the model, and the
>model works well, then uncertainty is inherent in nature as well.

That is not what I read in the quoted text.

I saw a statement that was carefully crafted to avoid saying that a
particular model's success means anything about "reality".

>But when they encounter one of the many enigmas such
>as simultaneous live and dead cat's their eyes glaze over and they fall back
>on the "QM is the most accurate theory ever so who cares?" mantra.

Then they know that it can be pointless to ask questions of quantum
mechanics that it does not claim to be able to answer, just as they
know it is pointless to use classical physics to answer certain
questions because it gives the wrong answer.

>It is a natural inclination though, to seek causative relationships to
>physical phenomena, metaphysicist or not.

Certainly. There is no guarantee that you will find a relationship
more causative than what QM supplies, however. You do realize that
there _is_ a cause for the death of the cat, right? The decay of
some radioactive nucleus.

} This is a sci.physics FAQ. There are no such things as virtual particles
} except as intermediate terms in calculations and as a guide to intuition that
} needs to be used with great care. Even then they are only an artifact of
} using one particular approach to QM - perturbation theory.

>I'm not sure if this is a semantical argument.

What makes you guess that the above statement is a semantical argument?
It appears to be an accurate statement about quantum field theories.

>I'm certainly not an expert
>inGauge Theories, but don't they all have "virtual particles" as carriers of
>force?

As the reification of a certain mathematical procedure, useful for
drawing cartoons of the process, yes.

>And if they carry force, does this not suggest a strong footing in reality?

Remember what you said up above (now snipped) about making claims
that a model's success tells you about "reality"? Only in this
case it is not the model but a cartoon of the model that is being
talked about.

--
James A. Carr <j...@scri.fsu.edu> | Commercial e-mail is _NOT_
http://www.scri.fsu.edu/~jac/ | desired to this or any address
Supercomputer Computations Res. Inst. | that resolves to my account
Florida State, Tallahassee FL 32306 | for any reason at any time.


Johnathan Smith

Sep 5, 1998
Jim Carr wrote:

> That is not what I read in the quoted text.
>
> I saw a statement that was carefully crafted to avoid saying that a
> particular model's success means anything about "reality".

This is the statement that prompted my response:

> The model works well. In this sense a physicist
> can declare categorically that the uncertainty is inherent
> - it is inherent in the model.

If you'll refer to the original discussion, I was inquiring into the
possibility and/or evidence that uncertainty is an ontologically,
objectively real phenomenon. The poster's response implies
that it is real because it is real in the model. I don't know if
you followed the whole thread, but in context that was the
gist as I understood it.

The point is though, I wasn't asking about the model, I was
asking about a property of nature. The problem with QM
is that the model and the ontological foundation on which
that model is based, are often disconnected. If someone
in Newton's day had presented a version of the law of
gravitation that made exactly the same predictions but
included say, invisible rubber bands, the poor bloke
may have been committed.

I'm not that critical of QT. I understand how
difficult it is to probe the scale of microphysics.
But to someone trying to understand the workings
of the "real" physical world, the thought of models
that describe natural phenomena with fictitious
entities is a bit disturbing. I used to make a living as a
sound engineer, and before I even understood how
electronics and sound works, people would call me
because I could make 'things sound good'. So I
understand how insignificant the ontological foundations
of quantum physics may be to someone trying to build
lasers. Which is why I prefaced my question with,
'of a philosophical nature'.

My interest is this, if a particle, say an electron, really
is in a pure state of superposition, as in inherently
spread out in space-time, then the observable position
is not inherent and we may just as well say that we
create the observable position by measuring for it.
I find that interesting.

It seems the same reasoning would apply to all
observables. Including even the existence of
particles. This would mean we have a whole branch
of physics based, not on an objective reality, but on
a projected reality. This would be fascinating.

Humbly, John


Kevin Brown

Sep 6, 1998
torqu...@my-dejanews.com wrote:
> Virtual particles exist only as a very elegant intermediate stage in
> certain QFT *calculations*.

On Sun, 30 Aug 1998 mmci...@world.std.com (Matthew J. McIrvin) wrote:
> Absolutely correct. At the same time, one can describe *all* particles
> as being somewhat virtual, since no experiment lasts an infinite time.

There's an interesting similarity between today's discussions of
the "existence status" of virtual particles (as opposed to the
existence status of non-virtual particles which, as Matt points
out, is not entirely unproblematical) and historical discussions of
the "existence" of imaginary numbers versus real numbers. In both
cases the entities in question first appeared only as very elegant
intermediate stages of calculations. For example, Cardano noted that
the cubic equation x^3 = 15x + 4 has three real roots, including x=4,
but if we simply apply the formula for the solution of a cubic we get

x = (2 + sqrt(-121))^(1/3) + (2 - sqrt(-121))^(1/3)

It was believed that the square root of a negative number does not
"exist", but Cardano found that it IS possible to reason consistently
with these non-existent quantities. Evaluating the cube roots (by
methods not available to Cardano), the above expression reduces to

x = (2 + sqrt(-1)) + (2 - sqrt(-1))

which of course gives the real root x=4. The interesting point is
that to extract this real root from the equation it's hard to avoid
operating with "non-existent" numbers, which Cardano called either
"sophisticated" or "sophistic", depending on how you interpret
his Latin. I suppose those "sqrt(-1)'s" are analogous to virtual
particles, artifacts of our formal reasoning process, flashing in
and out of "existence" just long enough to bridge the conceptual
gap between the equation and the root.
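
For what it's worth, the arithmetic is easy to check with modern complex
numbers (a quick sketch of my own; using Python's principal cube root is
an arbitrary but convenient branch choice):

# Cardano's "sophistic" quantities checked with ordinary complex arithmetic.
# The principal cube roots of 2 + 11i and 2 - 11i are 2 + i and 2 - i,
# so the formula really does return the real root x = 4 of x^3 = 15x + 4.
a = (2 + 11j) ** (1/3)      # cube root of 2 + sqrt(-121)
b = (2 - 11j) ** (1/3)      # cube root of 2 - sqrt(-121)
x = a + b

print(a, b)                 # approximately (2+1j) and (2-1j)
print(x)                    # approximately (4+0j)
print(x**3 - 15*x - 4)      # approximately 0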

Today we might take a different view of the relative "existence"
of real and imaginary numbers (not to mention quaternions, vectors,
matrices, etc), possibly even regarding the distinction as
meaningless. On the other hand, it's still true that we require
our observables in QM to have simple (real) values, even though
the theory seems to unavoidably involve manipulations with more
complex "numbers". It isn't clear to me if this is an absolute
characteristic of the external world, or of our preferred mode
of conceptualizing our experiences.


Doug Sweetser

Sep 6, 1998
Let me make a technical point: an electron has an exact location in spacetime. An
electron also has an exact momentum. There is no uncertainty in either
measurement. The uncertainty principle comes into play only when both types of
data are requested. Knowing position exactly means nothing can be known about
momentum. In classical physics, knowledge of one measurement does not affect the
other.

The only way I can approach a philosophical issue is to sprinkle it with some math,
and it doesn't have to be difficult math :-) Let the position be x. Momentum
involves the change in position, or the difference between two positions. Quantum
mechanics with huge numbers of particles looks like classical physics. Quantum
gets its neat properties when there are very few particles around. Quantum
strangeness is clearest when we think of the difference between nothing and only
one or two particles (science is always the study of differences). So we are
considering the difference between absolutely nothing, and only one or two things.
What is the position of those things, and the change in those things, particularly
if we have a lot of trouble finding them using only a few things? It will be
difficult to get all this information! Quantum mechanics gives a technical
answer. I was shown a proof of the uncertainty principle which relies on the
properties of the complex numbers (John Baez outlined this proof a while ago in
this newsgroup). I love that proof because it shows that nature is really playing
number games, independent of philosophy.

doug
swee...@world.com

Give nature a full deck of numbers.
Deal in events as quaternions.


Aaron Bergman

Sep 7, 1998
In article <35F15AA7...@world.std.com>, Doug Sweetser
<swee...@world.std.com> wrote:

:Let me make a technical point: an electron has an exact location in spacetime. An
:electron also has an exact momentum. There is no uncertainty in either
:measurement. The uncertainty principle comes into play only when both types of
:data are requested. Knowing position exactly means nothing can be known about
:momentum.

I don't buy this. If you accept QM at face value (and interpretation
issues aside, I can't think of a reason not to) the state vector is the
fundamental thing and one cannot simultaneously be in a position and
momentum eigenstate.

Aaron

--
Aaron Bergman
A permanent e-mail address would be @aya.yale.edu


Johnathan Smith

Sep 7, 1998
[Moderator's note: Quoted text trimmed and rewrapped. -P.H.]

Doug Sweetser wrote:

> Let me make a technical point: an electron has an exact location in spacetime.

You should read the whole thread (it's on Dejanews). The point of discussion is
whether the electron has a definite position before a position
measurement, or: is there an uncertainty of knowledge vs a real
physically objective uncertainty, or both?


> An electron also has an exact momentum. There is no uncertainty in
> either measurement. The uncertainty principle comes into play only when
> both types of data are requested. Knowing position exactly means
> nothing can be known about momentum. In classical physics, knowledge of
> one measurement does not effect the other.
>
> The only way I can approach a philosophical issue is to sprinkle it with
> some math, and it doesn't have to be difficult math :-)

With all respect, I have no idea what your point is. No one is
disputing the uncertainty principle. The question is what's going on
when we're not taking measurements. The positivist school says this is
a meaningless question. I disagree. As Einstein said, "I like to think
the moon is still there when I'm not looking at it."

torqu...@my-dejanews.com

Sep 7, 1998
In article <35EF5C98...@notmail.com>,
Johnathan Smith <sm...@notmail.com> wrote:

> It seems the same reasoning would apply to all
> observables. Including even the existence of
> particles.

Yes. Particle number (ie. how many particles there are in a system) is an
observable like any other. And like other observables different observers in
different frames can measure different values for this number. One predicted
result of this is that a region of space which an observer at rest thinks is
vacuum will look like it contains particles to an accelerating observer
(that's *real* particles BTW, not virtual ones). It's called Unruh radiation
- and it's very closely related to Hawking radiation.
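
(As an aside, quoted only for a sense of scale: the usual textbook
expression for the Unruh temperature seen by an observer with constant
proper acceleration a is T = hbar a / (2 pi c k_B), which is fantastically
small for any everyday acceleration.)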

> This would mean we have a whole branch
> of physics based, not on an objective reality

What's not objective? Physicists long ago realised that viewers in different
frames or performing different measurements see different things. But there
was no loss of objectivity because they've also learnt how to reconcile these
different viewpoints with transformation equations and suchlike.

--
http://travel.to/tanelorn

Doug Sweetser

Sep 8, 1998
Hello John:

I am trying to make a simple technical point. The uncertainty principle
for the measurements of x position and momentum in the x direction looks
something like this in ASCII

dx dPx >= hbar/2 "d" is a standard deviation

Everyone here agrees to this equation, even if they don't write it
down. I was trying to point out how many measurements are involved
here. This equation does not involve just one measurement. This is a
statement about standard deviations (those are only meaningful after 3
measurements, correct?). Every additional measurement contributes to
the average position and/or average momentum, and the standard deviation
of the respective measurement. The product of these two standard
deviations is greater than hbar/2.

Notice how unexciting this all sounds! That is statistics at work.
Statistics can be exceptionally reliable when it involves 10^40 atoms
like the moon does (is that guess even within a factor of ten?). I have
much confidence in such a guess. Determining the average position and
momentum, and the corresponding standard deviations in position and
momentum, for a hydrogen atom in deep space is a more delicate operation.
I realize philosophical questions sound much cooler than discussions of
pairs of standard deviation measurements, but that is the only content of
dx dPx >= hbar/2.
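
Here is a concrete sketch of what those two standard deviations look like
in practice. It is my own toy model, with everything about it assumed: I
simply take many identically prepared minimum-uncertainty Gaussian states,
so position outcomes scatter with spread sx and momentum outcomes with
spread hbar/(2 sx).

import numpy as np

rng = np.random.default_rng(0)
hbar = 1.0545718e-34          # J*s
sx = 1e-10                    # assumed position spread of the state (m)
sp = hbar / (2 * sx)          # momentum spread of a minimum-uncertainty state

# Repeated measurements on identically prepared systems: one half of the
# ensemble gets a position measurement, the other half a momentum one.
x_meas = rng.normal(0.0, sx, size=10000)
p_meas = rng.normal(0.0, sp, size=10000)

print("dx * dPx =", x_meas.std() * p_meas.std())
print("hbar / 2 =", hbar / 2)
# With many measurements the product of the two sample standard deviations
# settles right at hbar/2 for this state - the statistical content I am
# describing, nothing more.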

A scientist can only speak as loudly as the data, or become an adman.
When you decide to not collect data, be as quiet as the data.

Doug
http://world.com/~sweetser


john baez

Sep 9, 1998
In article <35f19f65...@news.seanet.com>,

Kevin Brown <ksb...@seanet.com> wrote:
>On the other hand, it's still true that we require
>our observables in QM to have simple (real) values, even though
>the theory seems to unavoidably involve manipulations with more
>complex "numbers".

It depends who "we" are. Bad introductory textbooks claim that
observable quantities must be real-valued, infer that only
operators with real eigenvalues can be observables, and
conclude that only self-adjoint operators can be observables --
ignoring the fact that plenty of operators with all real
eigenvalues aren't self-adjoint! Good introductory textbooks
don't make the last mistake, but they still tend to claim that
only self-adjoint operators can be observables, since only these
have an *orthonormal* basis of eigenvectors with real eigenvalues. [1]

However, writers of advanced books usually realize this insistence
on *real* eigenvalues is silly. After all, it's easy enough to make
a machine that measures some complex-valued *function* of position
and displays the answer --- say 2+3i --- on a digital readout or
two dials. What really matters is that the operator have an
orthonormal basis of eigenvectors --- or equivalently, that it
commute with its adjoint. Such operators are called "normal". [2]

Normal operators include self-adjoint and unitary operators as
special cases. There's a way to apply any complex function to
a normal operator and get a normal operator, thanks to the "spectral
theorem" - one of many theorems about self-adjoint operators
that easily generalizes to normal operators. In short: normal
operators are perfectly good observables!
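
Here is a small numerical sanity check of that claim (a sketch only; the
particular unitary and the complex eigenvalues below are arbitrary): build
an operator with complex eigenvalues but an orthonormal eigenbasis, and
confirm that it commutes with its adjoint even though it is not
self-adjoint.

import numpy as np

rng = np.random.default_rng(1)

# Columns of a random unitary give an orthonormal basis; the eigenvalues
# are complex "measured values", like the 2+3i read off two dials above.
U, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
eigenvalues = np.array([2 + 3j, -1 + 0.5j, 4.0, 1j])
A = U @ np.diag(eigenvalues) @ U.conj().T

print(np.allclose(A @ A.conj().T, A.conj().T @ A))   # True: A is normal
print(np.allclose(A, A.conj().T))                    # False: not self-adjoint
# A has an orthonormal basis of eigenvectors (the columns of U) with
# complex eigenvalues, so it is a perfectly sensible observable in the
# sense above, despite not being self-adjoint.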

Let's face up to reality and admit that the universe is complex!

-------------------------------------------------------------------
[1] I confine myself to the case of finite-dimensional Hilbert
spaces here to avoid subtleties involving continuous spectrum.

[2] In the infinite-dimensional case the former condition implies
the latter, but the latter condition turns out to be more general,
thanks to the possibility of continuous spectrum, so people define
an operator to be "normal" if it commutes with its adjoint.


john baez

Sep 9, 1998
In article <6t6meo$j...@gap.cco.caltech.edu>,
Toby Bartels <to...@ugcs.caltech.edu> wrote:

>Standard deviations exist after only 1 measurement.
>The standard deviation of 1 measurement is always 0.
>The standard deviation of 2 measurements a and b is |a-b|/2.

The thread is digressing from the topic of "uncertainty and
postivism". Good! It was never going to make much progress
on that topic, anyway.

To digress a bit further... sometimes people define the standard
deviation of a list of numbers x_1,...,x_n to be

sqrt((1/n) sum (x_i - X)^2)

where X is the mean

X = (1/n) sum x_i

But I think sometimes I've seen the standard deviation defined as

sqrt((1/(n-1)) sum (x_i - X)^2)

or something like that. Is this true? Or did I just dream this?
If it's true, why do people sometimes use this alternate definition?
Is there a good reason or just a silly reason (like disliking the
very idea of the standard deviation of a single measurement)?

Toby Bartels

Sep 10, 1998
John Baez <ba...@galaxy.ucr.edu> wrote for the most part:

>Sometimes people define the standard
>deviation of a list of numbers x_1,...,x_n to be
>
>sqrt((1/n) sum (x_i - X)^2)
>
>where X is the mean X = (1/n) sum x_i.

This is, I believe, the standard standard deviation,
which is what I was using when I mentioned the SD of 2 measurements.

>But I think sometimes I've seen the standard deviation defined as
>sqrt((1/(n-1)) sum (x_i - X)^2)
>or something like that. Is this true? Or did I just dream this?

It's true, unfortunately.
This quantity is useful in statistical inference,
so it's a good thing that people talk about it,
but I don't think it deserves the name "standard deviation".
Nevertheless, we are pretty much stuck with it.
Sometimes people say "n-deviation" and "(n-1)-deviation",
or something like that, to distinguish them.

>If it's true, why do people sometimes use this alternate definition?
>Is there a good reason or just a silly reason (like disliking the
>very idea of the standard deviation of a single measurement)?

It's not so much the standard deviations that matter here
but the variances (the squares of the standard deviations).
Suppose you have a random variable Y.
Y has some variance sigma^2 (defined, if Y is discrete, by division by n;
the n - 1 version is used only in analysing samples).
You measure Y n times, getting the values y_0 through y_{n-1}.
Let S^2 be the (n-1)-variance of these measurements.
Then S^2, not (n-1)S^2/n, is an unbiased estimate of sigma^2.

This makes some sense if you look at small n.
If n = 1, the n-variance of the sample is 0,
but you have no reason to suspect that sigma^2 is 0.
But S^2 is undefined, so you won't make this mistake.
So, in a sense, this is an aversion to the SD of a single measurement,
because the only possible SD (0) gives you a poor idea of the RV's SD (sigma).
If n = 2, the n-variance of the sample is (y_1 - y_0)^2/4.
Nevertheless, you would expect the two measurements
to be more than half a standard deviation away.

So the n-variance is a poor estimate of the true variance.
Still, it seems strange that the (n-1)-variance would work.
But calculate it, and you'll see that it works.
That is, calculate the expectation of S^2,
and you'll see that it is exactly sigma^2.
(Note that the expectation of S is not sigma;
that's why this is really about variances, not SDs.)
BTW, if you do this calculation, you'll see that
the measurements must be independent;
I forgot to mention that before.
Luckily, I had the good sense to do the calculation I was talking about.

Here is another, heuristic way to look at this.
When you calculate the variance of a sample,
you must calculate the mean, in effect if not in fact.
The mean of the sample is an unbiased estimate of the true mean,
so, by using the mean, you're taking away a degree of freedom.
Therefore, you should divide by n-1, not n.
This explanation doesn't make much sense to me,
but you can kind of see where it comes in in the calculation.
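
If anyone would rather see this than do the algebra, a quick simulation
(my own sketch; the normal distribution, the sample size, and the number
of trials are arbitrary) shows the divide-by-(n-1) variance averaging to
sigma^2 while the divide-by-n version comes out low:

import numpy as np

rng = np.random.default_rng(42)
sigma = 2.0        # true SD, so the true variance is 4.0
n = 5              # a small sample makes the bias easy to see
trials = 200000

samples = rng.normal(0.0, sigma, size=(trials, n))
var_n  = samples.var(axis=1, ddof=0)    # divide by n
var_n1 = samples.var(axis=1, ddof=1)    # divide by n-1

print("mean of divide-by-n variances:    ", var_n.mean())   # ~ sigma^2 (n-1)/n = 3.2
print("mean of divide-by-(n-1) variances:", var_n1.mean())  # ~ sigma^2 = 4.0
print("mean of sqrt(divide-by-(n-1)):    ", np.sqrt(var_n1).mean())
# The last number sits noticeably below sigma: the *variance* estimate is
# unbiased, but its square root is still a biased estimate of sigma, which
# is why this is really about variances and not standard deviations.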


-- Toby
to...@ugcs.caltech.edu


Patrick Bakx

Sep 10, 1998
john baez wrote in message <6t6per$1f2$1...@pravda.ucr.edu>...

>In article <6t6meo$j...@gap.cco.caltech.edu>,
>Toby Bartels <to...@ugcs.caltech.edu> wrote:

>>Standard deviations exist after only 1 measurement.
>>The standard deviation of 1 measurement is always 0.
>>The standard deviation of 2 measurements a and b is |a-b|/2.

>The thread is digressing from the topic of "uncertainty and
>postivism". Good! It was never going to make much progress
>on that topic, anyway.

>To digress a bit further... sometimes people define the standard
>deviation of a list of numbers x_1,...,x_n to be
>
>sqrt((1/n) sum (x_i - X)^2)
>
>where X is the mean
>
>X = (1/n) sum x_i
>
>But I think sometimes I've seen the standard deviation defined as
>
>sqrt((1/(n-1)) sum (x_i - X)^2)

>or something like that. Is this true? Or did I just dream this?

>If it's true, why do people sometimes use this alternate definition?
>Is there a good reason or just a silly reason (like disliking the
>very idea of the standard deviation of a single measurement)?

There is indeed a very good reason. It has something to do with the way you
estimate the parameters of the distribution of your measurements. If you
assume they are normally distributed you can try to estimate the parameters
(mean and standard deviation) of this distribution from your measurements.
One way of doing this is to determine those values of the mean and standard
deviation that maximise the probability that you actually measure the given
sequence of measurements (just multiply all probabilities - measurements are
assumed independent).
If you do this, you will find the mean m_estimated = 1/n sum (x_i) and
sigma_estimated = sqrt (1/n sum(x_i - m_estimated)^2).
This is not the end of the story. You can determine that the estimate for
sigma is biased, that is, its expected value differs from the actual value
for any finite number of measurements (forgot how that was actually
calculated, should not be too hard though). If you correct the estimate on
sigma for this bias you obtain the latter expression.
BTW: the estimate of the mean is not biased.
Greetings,

Patrick Bakx


Doug Sweetser

Sep 10, 1998
Hello Toby:

Your definition does not look self-consistent to me.

> Standard deviations exist after only 1 measurement.
> The standard deviation of 1 measurement is always 0.
> The standard deviation of 2 measurements a and b is |a-b|/2.

...
> Anyway, from the relation xp - px = i hbar,
> you can deduce (<x^2> - <x>^2)(<p^2> - <p>^2) >= hbar^2/4,
> which is the square of the uncertainty relation that you quoted
> (where <A> is the expectation of the operator A).

If one measurement is made of position, and another for momentum, then

(0)(0) >= hbar^2/4

is not a valid statement. I don't know if physicists use a different
basic definition of a standard deviation to avoid this logic error.

I think this statement may also be wrong:

> It's quite possible, albeit unlikely, to measure x and p
> and find that the standard deviations of your measurements
> have a product which is less than hbar/2
> (especially if you only measure them 3 times).

I don't think it is possible; that's the job of the ">=" comparison
relation. If it could be less, then it could be zero sometimes, but we
know in quantum the standard deviation is never zero at any time.
Quantum mechanics is fuzzy by law!

On a more abstract level, I personally avoid thinking anything is more
fundamental than anything else. True, it is quite common to go from the
commutator relations to the uncertainty relations. I know, I have done
that myself in this newsgroup using quaternions :-) However, it is just
as possible to start from the uncertainty relations and end up at the
commutators. The commutator of the operators x and p on a state psi
equals i hbar, and the product of the standard deviation of the
operators is greater than or equal to hbar/2. The former statement is
not statistics, but the latter is, at least to my ear.

Doug
http://world.com/~sweetser


Herman Bruyninckx

unread,
Sep 10, 1998, 3:00:00 AM9/10/98
to

> To digress a bit further... sometimes people define the standard
> deviation of a list of numbers x_1,...,x_n to be
>
> sqrt((1/n) sum (x_i - X)^2)
>
> where X is the mean
>
> X = (1/n) sum x_i
>
> But I think sometimes I've seen the standard deviation defined as
>
> sqrt((1/n-1) sum (x_i - X)^2)
>
> or something like that. Is this true? Or did I just dream this?
> If it's true, why do people sometimes use this alternate definition?

It is true. The reason is that the latter definition gives a so-called
*unbiased* estimator. This means that its expected value equals the
`real' value.... However, statistics has had a very unfortunate history
this last century, forgetting all the good things that physicists (like
Laplace) had developed and going instead for a *very* ad hoc body of theories
following the `example' of Fisher. For a very interesting account of how
statistics went wrong, take a look at the unfinished book by the physicist
Ed T. Jaynes (<http://bayes.wustl.edu/>). This book is by far the greatest
scientific discovery I've ever done in my career as a scientist; it turns
out that our statistics departments have been telling us lies and still
continue to do so (although some universities have already adapted
themselves). This remark sounds too crazy to be true, but it is ;-(

> Is there a good reason or just a silly reason (like disliking the
> very idea of the standard deviation of a single measurement)?

This last idea is not silly: *any* estimate of parameters in a system
depends on a priori knowledge about the system (Fisher and his followers
tried to deny this fact, but all our everyday reasoning relies on it.) This
means that one single measurement is interpreted in the context of this a
priori knowledge, and this already has a certain probability density
and hence a standard deviation.

See the above-mentioned book by Jaynes; Jaynes himself (who died last
April) made enormous contributions to statistics, using principles that
look normal to a physicist (i.e., invariance under transformations of
coordinates) but that most statisticians still don't understand or
appreciate.

Herman

--
Herman.B...@mech.kuleuven.ac.be (Ph.D.) Fax: +32-(0)16-32 29 87
Dept. Mechanical Eng., Div. PMA, Katholieke Universiteit Leuven, Belgium
<http://www.mech.kuleuven.ac.be/~bruyninc>

torqu...@my-dejanews.com

unread,
Sep 11, 1998, 3:00:00 AM9/11/98
to
In article <6t6per$1f2$1...@pravda.ucr.edu>,
ba...@galaxy.ucr.edu (john baez) wrote:

> sqrt((1/n) sum (x_i - X)^2) ...

> But I think sometimes I've seen the standard deviation defined as
> sqrt((1/n-1) sum (x_i - X)^2)

The second formula isn't the standard deviation of the sample but the usual
estimator of the standard deviation of the population from which the sample
was derived. If you consider its square as a random variable, its expected
value is the underlying population variance. --

Wilbert Dijkhof

unread,
Sep 11, 1998, 3:00:00 AM9/11/98
to
john baez wrote:

> To digress a bit further... sometimes people define the standard
> deviation of a list of numbers x_1,...,x_n to be
>

> sqrt((1/n) sum (x_i - X)^2)
>

> where X is the mean
>
> X = (1/n) sum x_i
>

> But I think sometimes I've seen the standard deviation defined as
>
> sqrt((1/n-1) sum (x_i - X)^2)
>

> or something like that. Is this true? Or did I just dream this?
> If it's true, why do people sometimes use this alternate definition?

> Is there a good reason or just a silly reason (like disliking the
> very idea of the standard deviation of a single measurement)?

Neither of them is quite correct.

Suppose X_1,...,X_n is a random sample of Y with mean mu; then the
sample variance with given mu is defined by:

S^2_mu := 1/n*sum((X_i-mu)^2,i=1..n).

The following holds: E[S^2_mu] = Var(Y), with Var(Y) = E[(Y-mu)^2].

In general, mu is not known. We then define the sample function S^2 by:

S^2 := 1/(n-1)*sum((X_i-X)^2,i=1..n), where X is the sample mean.

Again we have E[S^2] = Var(Y).

We have the following important theorem:
P[S^2 -> Var(Y)] = 1 for n->oo
and also:
P[S^2_mu -> Var(Y)] = 1 for n->oo

Summarized:
E[S^2_mu] = E[S^2] = Var(Y)
Var(S^2_mu) <= Var(S^2)
Therefore, if E[Y] is known, we prefer S^2_mu as an
approximation of Var(Y).

NB:
The standard deviation of Y is defined by
SD(Y) := sqrt(Var(Y))
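A small simulation of the summary above (a sketch, assuming numpy; both
estimators average to Var(Y), but S^2_mu, which uses the known mu, has the
smaller spread):

import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 0.0, 1.0        # known parameters of Y
n, trials = 10, 100_000

X = rng.normal(mu, sigma, size=(trials, n))

S2_mu = np.mean((X - mu) ** 2, axis=1)              # divide by n, true mu
S2 = np.sum((X - X.mean(axis=1, keepdims=True)) ** 2,
            axis=1) / (n - 1)                       # divide by n-1, sample mean

print("E[S^2_mu] ~", S2_mu.mean(), " E[S^2] ~", S2.mean())      # both ~ sigma^2
print("Var(S^2_mu) ~", S2_mu.var(), " <= Var(S^2) ~", S2.var())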

Wilbert


john baez

unread,
Sep 11, 1998, 3:00:00 AM9/11/98
to
In article <Pine.HPP.3.91.98091...@joe.mech.kuleuven.ac.be>,
Herman Bruyninckx <Herman.B...@mech.kuleuven.ac.be> wrote:

>For a very interesting account of how
>statistics went wrong, take a look at the unfinished book by the physicist
>Ed T. Jaynes (<http://bayes.wustl.edu/>). This book is by far the greatest
>scientific discovery I've ever done in my career as a scientist; it turns
>out that our statistics departments have been telling us lies and still
>continue to do so (although some universities have already adapted
>themselves). This remark sounds too crazy to be true, but it is ;-(

I'm a great fan of Jaynes myself. In addition to the wonderful book
available electronically above, people should look at these:

"Maximum Entropy and Bayesian Methods", ed. J. Skilling, Kluwer, 1988
[this is one of a series of conference proceedings on maximum entropy]

"Physics and Probability", Essays in honor of E. T. Jaynes, ed: W. T. Grandy,
and P. W. Milonni, Cambridge, 1993.

For anyone who doesn't know Jaynes' work, perhaps I should give the
one-sentence summary: "to wisely choose a probability distribution,
choose it to maximize entropy subject to the constraints provided by
whatever information you happen to have."
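To make the recipe concrete, here is a rough numerical sketch (assuming
Python with numpy and scipy) of Jaynes' well-known die example: maximize
the entropy of a distribution on the faces {1,...,6} subject only to a
prescribed mean.

import numpy as np
from scipy.optimize import minimize

faces = np.arange(1, 7)
target_mean = 4.5            # the only information we have about the die

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return np.sum(p * np.log(p))   # minimizing this maximizes entropy

constraints = (
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},
    {"type": "eq", "fun": lambda p: p @ faces - target_mean},
)
result = minimize(neg_entropy, x0=np.full(6, 1 / 6),
                  bounds=[(0, 1)] * 6, constraints=constraints)

print(result.x)           # weights rise roughly exponentially with face value
print(result.x @ faces)   # ~ 4.5

With no mean constraint the answer is the uniform distribution; each added
constraint tilts it, which is the whole content of the one-sentence summary.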

Gossip: E. T. Jaynes was the thesis advisor of a now-retired physics
professor here at Riverside, Fred Cummings. Together they came up
with a model of lasers called the Jaynes-Cummings model. Solving this
equation numerically, they found the laser output could fluctuate
very erratically when the laser was pumped hard. [1] In modern
terminology, the solutions were chaotic! However, this was before
1963, when E. N. Lorenz discovered his marvelous equation and "chaos"
as a subject was born, so they didn't make a big deal out of it.
Thus they joined many other people in discovering chaos but not
fully recognizing its importance.

----------------------------------------------------------------------
[1] I think "pumping" just refers to feeding the laser energy, e.g.
by shining light into it - but I'm a bit hazy about all this.


Greg Kuperberg

unread,
Sep 11, 1998, 3:00:00 AM9/11/98
to
In article <35f744db...@news.seanet.com>,
>That's sort of what I hoped would be inferred (by anyone who cared)
>from my tilt away from the word "real" to "simple", recognizing that
>the choice of the "real" basis is not unique. On the other hand, I
>wonder how severe the "restriction" to real eigenvalues is for
>practical purposes.

Formally, it doesn't really make any difference. But I consider it a
conceptual error. In reality a measurement can take values in any set
whatsoever, such as {apple,banana,orange}. Such a measurement can be
expressed by a Hilbert space decomposition such as:

H = H_apple + H_banana + H_orange

For computations it is convenient to express all such measurements
in a form in which you can take linear combinations of the results.
For this purpose the measurement can take values in any real vector space
(including the complex numbers, a two-dimensional real vector space).
One-dimensional real vector spaces are a bit more convenient than
higher-dimensional real vector spaces, because the n components of an
operator taking values in R^n have to commute in order to constitute
a single measurement. These computational conveniences motivate the
overstated mantra that real-valued measurements are the only important
ones.
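A toy numerical illustration of this (a sketch, assuming numpy; the fruit
labels are just placeholders for arbitrary outcome values): decompose a
small Hilbert space into three orthogonal subspaces and read outcome
probabilities off the projectors, with no real eigenvalues anywhere in
sight.

import numpy as np

# H = H_apple + H_banana + H_orange, realized as orthogonal projectors
# on C^4: "apple" spans e0, "banana" spans e1 and e2, "orange" spans e3.
P = {
    "apple":  np.diag([1, 0, 0, 0]).astype(complex),
    "banana": np.diag([0, 1, 1, 0]).astype(complex),
    "orange": np.diag([0, 0, 0, 1]).astype(complex),
}
assert np.allclose(sum(P.values()), np.eye(4))   # projectors resolve the identity

psi = np.array([1, 1, 1, 1], dtype=complex) / 2  # a normalized state

for outcome, proj in P.items():
    prob = np.vdot(psi, proj @ psi).real         # <psi|P|psi>
    print(outcome, prob)                         # 0.25, 0.5, 0.25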

In quantum field theory there are eventually good reasons to consider
complex-valued fields, and hence complex-valued measurement operators.
--
/\ Greg Kuperberg (UC Davis)
/ \
\ / Visit the xxx Math Archive Front at http://front.math.ucdavis.edu/
\/ * 5926+841 e-prints and counting! *


Kevin Brown

unread,
Sep 13, 1998, 3:00:00 AM9/13/98
to
On 10 Sep 1998 22:54:07 GMT, ba...@math.ucr.edu (john baez) wrote:
>So we can alternatively define a normal operator to be one whose
>real and imaginary parts commute. Because they commute, measuring
>one doesn't interfere with measuring the other, so T=Re(T)+iIm(T)
>is a perfectly fine observable. Note that Re(T) and Im(T) are
>self-adjoint, so we don't lose anything by restricting ourselves
>to self-adjoint operators, as long as we recognize that we can
>simultaneously measure a *bunch* of self-adjoint operators that
>commute.

I obviously can't say for sure, but I suspect Dirac was cognizant of
what you've expressed there, and it doesn't totally invalidate the
point he was making. We need to distinguish between analytical
decomposition and what Paul Teller calls mereological decomposition
(e.g., the roof and walls of a house are mereological "components" of
the house, whereas the sine and cosine terms of the Fourier expansion
of a function are analytical "components" of the function.) When
Dirac says measured dynamical variables are always real he is (I
guess) asserting a mereological decomposition into the individual
constituents of what you call a "bunch" of self-adjoint operators.

Obviously we can always juxtapose multiple real measurements and
label them the analytical components of some complex quantity, but
what Dirac had in mind was decomposing phenomena to the maximum extent
that is mereologically possible, and treating the physics on that
level - which is sort of the classical reductionist way of doing
physics. Also, the fact that Dirac's recommended mereological
decomposition is sufficient to capture all the phenomena of quantum
mechanics suggests (to me) that there is some non-trivial consonance
between his idea and "the way things are". On the other hand, I
certainly don't deny the possibility that a less reductionist
formalization of QM could prove to be more fecund.


[The following is a copy of my earlier reply to Greg Kuperberg
on this same subject, that never showed up on my server. Not
sure if it was judged too speculative, or just got lost.]


Kevin Brown wrote:
> I wonder how severe the "restriction" to real eigenvalues is for
> practical purposes.

On Fri, 11 Sep 1998 gr...@math.ucdavis.edu (Greg Kuperberg) wrote:
> Formally, it doesn't really make any difference.

Good, that's what I thought. So, if anything, I was being overly
punctilious to even briefly allude to the non-uniqueness of the
"real" basis in my original discussion, where the uniqueness was
not at issue.


Greg Kuperberg wrote:
> But I consider it a conceptual error.

I guess I do too, although I'm a little uneasy talking about
conceptual errors that don't really make any difference.


Greg Kuperberg wrote:
> In reality a measurement can take values in any set whatsoever,

> such as {apple,banana,orange}....

When you say "in reality", wouldn't it be more conceptually correct
to say "in complexity"? The question isn't totally facetious,
because I think our critique of Dirac's "conceptual error" is
somewhat of this same syntactical nature. There's a sense in
which every statement (or measurement) establishes or implies its
own "orthonormal basis" of "reality", and at some point in our
deliberations we need to make a choice of bases in order to make
contact with experience (don't we?). It seems a bit like general
relativity, where we can reason in coordinate-free terms up to a
point, but eventually we need to assign a set of coordinates and
relate those to quantifiable experiences and observations.

In view of this, I'm tempted to wonder if perhaps Dirac, far from
being unaware of these "basis issues", was actually SO well aware of
them, and considered them so obvious, and saw so clearly that (as
you said) "formally it doesn't really make any difference", that
his discussion was actually more (rather than less) sophisticated
than the syntactical objections we might raise against it. But
maybe I'm giving him too much credit.


Greg Kuperberg wrote:
> In quantum field theory there are eventually good reasons to consider
> complex-valued fields, and hence complex-valued measurement operators.

So you're saying that, at this point (admittedly outside the scope of
the original discussion), it DOES begin to "make a difference", i.e.,
the convention of interpreting all measured dynamical variables as
real-valued begins to unavoidably conflict with experience. I'd be
interested to learn which measurements treated by quantum field
theory can't be performed on a real-valued basis.


[Moderator's note: The first article alluded to above arrived in my inbox,
but I didn't process articles before the second one arrived. Sorry about
that-- it's best to allow a couple of days before assuming that an
article's lost, since there's unavoidable lag involved in articles
propagating to and from moderators, even if the moderators process articles
several times a day, which isn't always possible.

If an article's rejected, you should get a rejection notice, unless
there's some problem sending e-mail to your address. -MM]

Greg Kuperberg

unread,
Sep 13, 1998, 3:00:00 AM9/13/98
to
In article <6s9laf$dk6$1...@pravda.ucr.edu>, john baez <ba...@math.ucr.edu> wrote:
>This is an example of "textbook degenerative disease". It usually
>happens when somebody learns something from a textbook, doesn't
>think about it too hard, writes their own textbook, then someone
>else reads that, doesn't think about it too hard, writes their
>*own* book, and so on.

I wouldn't necessarily blame the author's understanding of the material.
Quantum mechanics is a reasonably popular subject; I estimate that maybe
20,000 students in the United States take it each year. When enough
pressure builds to encourage or force weak students to study a given
subject, a contest emerges among textbook authors (as well as departments
and teachers) to see who can best help the ocean of weak students cram
and muddle their way through. The good students are then outnumbered
and don't matter much. Undoubtedly deep understanding is much better
than superficial understanding, but superficial understanding is much
better than failure and alienation. In any case whenever the educators
do figure out a way for weak students to "really learn" the material,
the incentive for students to take the course ratchets up another notch
and even weaker students enroll en masse within a few years.

I call this the "McDonalds syndrome" of math and science education.
Calculus is in an advanced stage of the syndrome. (Calculus has an annual
enrollment of about a million students in the US.) Another aspect of
the syndrome is convergence: The menus of McDonalds and Burger King may
well be written by people with very different personalities and opinions,
but the end result is almost the same. It has to be, because people
know a great deal about what people want to eat at a fast food stand.
The same is true for calculus books. The introductions go to great
lengths to try to make the books seem different, but they are remarkably
close to the same book.

If calculus is any example, you should feel lucky if the quantum
textbooks, whoever writes them these days, aren't littered with colored
boxes and red margin notes that say things like:

+---------------------------------------------+
|A Hermitian operator is called a measurement.|
+---------------------------------------------+

+--------------------------------+
|x and p are Hermitian operators.|
+--------------------------------+

If you take only the colored boxes, the worked examples, and the
exercises, they make a sub-book that has everything seriously intended for
most of the students. The book might well have your favorite argument
for why measurements are all or are not all real-valued to give you,
the professor who chooses the book, something to chew on. But rest
assured that the justification won't be in a colored box!


--
/\ Greg Kuperberg (UC Davis)
/ \
\ / Visit the xxx Math Archive Front at http://front.math.ucdavis.edu/

\/ * Tell every mathematician your new ideas in one day *


Toby Bartels

unread,
Sep 13, 1998, 3:00:00 AM9/13/98
to
Doug Sweetser <swee...@world.std.com> wrote:

>Toby Bartels <to...@ugcs.caltech.edu> wrote:

>>Standard deviations exist after only 1 measurement.
>>The standard deviation of 1 measurement is always 0.
>>The standard deviation of 2 measurements a and b is |a-b|/2.

>>Anyway, from the relation xp - px = i hbar,
>>you can deduce (<x^2> - <x>^2)(<p^2> - <p>^2) >= hbar^2/4,
>>which is the square of the uncertainty relation that you quoted
>>(where <A> is the expectation of the operator A).

>If one measurement is made of position, and another for momentum, then
>(0)(0) >= hbar^2/4
>is not a valid statement. I don't know if physicists use a different
>basic definition of a standard deviation to avoid this logic error.

That is exactly my *point*.
The law (Dx)(Dp) >= hbar/2 is *not* about measurements.
If you measure something once, the SD of the measurements is 0;
even if you measure something 3 or more times,
the SD of the measurements might be 0,
because you might happen, by chance, to measure the same value each time.
This doesn't violate the uncertainty relation, however,
because that relation simply isn't talking about measurements at all.

>>It's quite possible, albeit unlikely, to measure x and p
>>and find that the standard deviations of your measurements
>>have a product which is less than hbar/2
>>(especially if you only measure them 3 times).

>I don't think it is possible; that's the job of the ">=" comparison
>relation. If it could be less, then it could be zero sometimes, but we
>know in quantum the standard deviation is never zero at any time.
>Quantum mechanics is fuzzy by law!

The standard deviation *of the measurements* may very well be zero.
Start with an electron in a pure spin up state.
Measure the spin in the z direction once: it's hbar/2.
Measure the spin in the z direction again: it's hbar/2.
Measure the spin in the z direction a third time: it's hbar/2.
The ">=" comparison doesn't talk about this.
The ">=" comparison talks about the probability distributions
which an arbitrary state assigns to complementary observables,
*not* to the measurements actually made of those observables.

>On a more abstract level, I personally avoid thinking anything is more
>fundamental than anything else. True, it is quite common to go from the
>commutator relations to the uncertainty relations. I know, I have done
>that myself in this newsgroup using quaternions :-) However, it is just
>as possible to start from the uncertainty relations and end up at the
>commutators.

No, it isn't.
If the commutator were -i hbar instead of i hbar,
the ">=" comparison would be the same.

>The commutator of the operators x and p on a state psi
>equals i hbar, and the product of the standard deviation of the
>operators is greater than or equal to hbar/2. The former statement is
>not statistics, but the latter is, at least to my ear.

It's statistics in that it deals with probability distributions,
but still it doesn't deal with the statistics of measurements.


-- Toby
to...@ugcs.caltech.edu


Chris Martin

unread,
Sep 13, 1998, 3:00:00 AM9/13/98
to
I have not read the entire thread, but I will say this. The question about the nature
of the uncertainty is DEFINITELY a matter of interpretation, no matter how you cut it.
There are interpretations of QM which say that the uncertainty is ultimately
epistemic, and there are those that take it to be ontic. Now, certainly there are
gripes and moans to be had about every interpretation, *including the standard one* we
all learn in undergraduate physics. Also, it is not clear how much of the
interpretational squabbles can be carried over to full-blown quantum field theory.
Regardless, the questions you are asking have been thought about long and hard since
the birth of QM and are currently discussed in both the physics and the philosophy
(foundations) of physics literature. In any event, there will be those who say it
doesn't matter how the question is answered as it is physically insignificant, and they
may be right in the long run. But, as some of the posters have said, there are plenty
of people who want more than that at some level---I think most good physicists do in
fact---and matters of interpretation are therefore important. These latter types see
that there is room to flesh out our QM world view via a more robust interpretative
apparatus; this may involve positing additional, and perhaps not directly observable
structures, or it may just involve a reinterpretation of QM fundamentals strictly in
terms of experimental outcomes.

Moreover, it would seem to me that matters of interpretation are quite important when
physicists (or, in general scientists) face some related area, say cosmology, or
quantum gravity, where our observational abilities have been, in a sense, outpaced by
theory, and the path in front of the physicist is especially untrodden and dark. Of
course, one would expect, and, in fact, demand that empirical data ultimately light the
true path; but neither the world, nor our theories thereof stand entirely
uninterpreted.

Chris


Toby Bartels

unread,
Sep 13, 1998, 3:00:00 AM9/13/98
to
Patrick Bakx <patric...@spamblock.tractebel.spamblock.be> wrote in part:

>You can determine that the estimate for
>sigma is biased, that is, the estimate does not converge to the actual value
>if the number of measurements becomes infinite (forgot how that was actually
>calculated, should not be too hard though).

That's not what bias is.
After all, the difference between n and n-1 disappears as n becomes infinite,
so it wouldn't matter which version of the SD you used!
Rather, an estimate X for the parametre x is unbiased
iff x is the expectation value for X.
BTW, the (n-1) estimate for sigma is also biased;
it is the (n-1) estimate for sigma^2 which is unbiased.
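To see the difference numerically (a quick sketch, assuming numpy): the
(n-1) estimate of sigma^2 averages to the true variance, while its square
root systematically underestimates sigma.

import numpy as np

rng = np.random.default_rng(3)
sigma = 2.0
n, trials = 5, 200_000

x = rng.normal(0.0, sigma, size=(trials, n))
s2 = np.sum((x - x.mean(axis=1, keepdims=True)) ** 2, axis=1) / (n - 1)

print("E[s^2]       ~", s2.mean())           # ~ 4.0 = sigma^2 (unbiased)
print("E[sqrt(s^2)] ~", np.sqrt(s2).mean())  # noticeably below 2.0 = sigma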


-- Toby
to...@ugcs.caltech.edu


PS: Who was it who call(ed|s) John Baez "John Bias"?
I don't remember --- now that I only read moderated newsgroups in sci.*.


Warren G. Anderson

unread,
Sep 14, 1998, 3:00:00 AM9/14/98
to
john smith wrote:

> So what I'm wondering is how one can make the leap from an
> uncertainty in measurement to an inherent uncertainty? Has the entire
> scientific community adopted the positivist doctrine or is there a
> scientific basis for this. Despite all the text on the Uncertainty
> Principle, I've yet to find the postulate that elevates a philosophical
> belief to a scientific principle.

The leap is made because the uncertainty is inherent to the theory which we
use to describe the measurement. There is, of
course, no knowledge of whether such uncertainty is inherent to
nature. In fact, there is no knowledge of whether any theory
accurately describes the underlying mechanisms of nature; there
is only knowledge of whether they give results in accordance
with the observed measurements.

> And is it not impossible to use the same logical positivism that
> asserts inherent uncertainty
> to predict specific entities which cannot be empirically observed such
> as virtual particles?

Clearly it is possible, since we do it. A better question to ask
is "is it logically consistent?". I can see no logical contradiction.
One might also ask "is it useful?". Clearly the answer is yes, since
we use it.

> This would be self-contradictory.

In what sense? The argument goes like this: "due to the
uncertainty principle, there is a regime where we cannot test the validity
of our cherished beliefs (known in physics as "laws"). So we conclude
that in these regimes the cherished beliefs might fail (eg. there
might exist particles that violate these beliefs called "virtual
particles"). Let us add these violating particles to our theory and
see if they help our theory to explain and predict our observations.
By George, they do! Well, I guess we'll leave them in there then."
At what stage have I contradicted myself?

> And specifically, what experimental evidence is there for virtual
> particles? I am familiar with the Casimir effect. But is it conclusively
> a result of the pressure of virtual particles?

There is no direct evidence for virtual particles. That is why they are
virtual!

> Are there credible physicist that dispute this and

> virtual particles in general? If so, who might they be.
> I'm not taking a position as to the existense of virtual particles.

Well, feel free to or not to. The whole point is that there is no reason to
decide the question of their existence. It's a bit like
saying "I'm not taking a stand on the existence of other universes".
Any position or lack thereof is equally vacuous.

> Mere consensus alone suggest to me that they must exist. But I need more
> than a philosophical position to warrant their acceptance.
> I need proof. But overall I just need to know how the uncertainty of
> measurement translates to an inherent uncertainty. Any help on this
> would be very greatly appreciated.

I think you would do best to stop considering theories in physics as
describing the underlying mechanisms in nature. They are simply
models which may or may not describe the reality of the situation.
They are useful, and often colourful and beautiful, but they are
not objective truth and never can be.

--
+-----------------------+-----------+-------------------------------------+
| HOME | Warren G. | OFFICE |
+-----------------------+ Anderson +-------------------------------------+
| 2965 N. Bartlett, #34 +-----------+ 419, Physics Building |
| Milwaukee, WI, 53211 | ``"''/ | Department of Physics |
| Phone: (414) 963-1929 | |@ @ | | University of Wisconsin - Milwaukee |
| | ( ^ ) | Phone: (414) 229-6082 |
| | \O_/\ | Fax : (414) 229-5589 |
+---------------- war...@ricci.phys.uwm.edu ------------------------------+


Kevin Brown

unread,
Sep 14, 1998, 3:00:00 AM9/14/98
to
On 13 Sep 1998 gr...@math.ucdavis.edu (Greg Kuperberg) wrote:
> What John and I are criticizing is pronouncements in introductory
> quantum mechanics that amount to saying, "Real-valued measurements
> actually exist, all others are artificial formalisms." ...I suppose
> people will always confuse form with content...

I think the content that Dirac and others have tried to express can be
described in terms of the superposition of two states, A and B, such
that there exists an observation which, when made on the system in
state A, is certain to lead to the result "a", and when made on the
system in state B is certain to lead to the result "b". When an
observation is made on the system in a superposed state, the result is
sometimes "a" and sometimes "b", but it is never different from both
"a" and "b". As Dirac said (with my emphasis)

"The *intermediate character* of the state formed by superposition
thus expresses itself through the probability of a particular
result for an observation being intermediate between the
corresponding probabilities for the original states, *not*
through the result itself being intermediate between the
corresponding results for the original states."

In other words, the "state" of the system possesses a degree of
complexity that embodies not just the character of the eventual
observed result but also the distribution of probabilities for
the set of possible observed results with respect to any possible
measurement. When the observation is made, the higher order of
complexity of the state is reduced (projected) down onto the
particular basis of measurement. This reduction, Dirac and
others contend, represents actual content of the theory, not
mere form.

Of course, over the years many people have expressed the sentiment
that it would be formalistically very nice if things only SEEM to
behave as Dirac described, and if "in reality" no state reduction
ever occurs (i.e., the no-collapse interpretations of QM). Without
rehearsing all the well-known problems with what I like to call the
"decoherent interpretations" of QM, I think it should not be suggested
that failure to endorse such interpretations is unambiguous evidence
of naivete and conceptual error, particularly as long as people go
on SEEMING to make definite observations and are seemingly able to
communicate those observations to each other on a seemingly common
basis.


Doug Sweetser

unread,
Sep 15, 1998, 3:00:00 AM9/15/98
to
Hello Toby:

Toby Bartels wrote:

> The standard deviation *of the measurements* may very well be zero.
> Start with an electron in a pure spin up state.
> Measure the spin in the z direction once: it's hbar/2.
> Measure the spin in the z direction again: it's hbar/2.
> Measure the spin in the z direction a third time: it's hbar/2.

Thus the standard deviation of spin in the z direction is zero. As you
note:

> The ">=" comparison doesn't talk about this.

Correct, you must also be measuring a complementary observable. Without
mentioning the complementary observable, this example does not apply.
(unfortunately, I don't know what the complementary observable to spin
in the z direction is, but its standard deviation if measured would
approach infinity).

> The ">=" comparison talks about the probability distributions
> which an arbitrary state assigns to complementary observables,
> *not* to the measurements actually made of those observables.

I still think this has to apply to actual measurements of complementary
observables, not measurements of any one observable as in your example.
How has the uncertainty principle been proven? In the lab, collecting
data on complementary observables. Only by collecting a lot of data may
the limit of hbar/2 for the product of the two standard deviations be
approached.


> No, it isn't.
> If the commutator were -i hbar instead of i hbar,
> the ">=" comparison would be the same.

Isn't this just a convention? People like positive signs, so they went
that way around the complex plane.


I hope I'm not being too irritating or stupid here, but I just think the
numbers collected straight from the bench must behave like the equations
say. It seems all right to me that several data points from
complementary observables must be collected before the uncertainty
principle can be used. More numbers, better confirmation for sure, but
never a contradiction.


doug
http://world.com/~sweetser


Toby Bartels

unread,
Sep 15, 1998, 3:00:00 AM9/15/98
to
Frank Wappler <fw7...@csc.albany.edu> wrote:

>Suppose that two operators L and M commute, and that for some
>common eigenstate they have individually the complex eigenvalues
>l and m, such that l =/= l* and m =/= m*.
>("*" denotes complex conjugation.)

>Which is the eigenvalue of the operator (L M) for this eigenstate:
>l m, l* m, l m*, ...?

Why would complex conjugation enter into this at all?

(LM)psi = L(M psi) = L(m psi) = m(L psi) = m(l psi) = (ml)psi = (lm)psi,
so the eigenvalue for LM is lm.
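A trivial numerical check of that chain of equalities (a sketch, assuming
numpy; the particular matrices are made up): two commuting operators with
complex eigenvalues share an eigenvector, on which the product operator has
eigenvalue lm.

import numpy as np

l, m = 2 + 3j, -1 + 1j                       # complex eigenvalues
L = np.diag([l, 5j])                         # diagonal, hence commuting, operators
M = np.diag([m, 7.0])
psi = np.array([1.0, 0.0], dtype=complex)    # common eigenvector

assert np.allclose(L @ M, M @ L)             # [L, M] = 0
print((L @ M) @ psi)                         # equals (l*m) * psi
print(l * m * psi)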


-- Toby
to...@ugcs.caltech.edu


Frank Wappler

unread,
Sep 15, 1998, 3:00:00 AM9/15/98
to

Frank Wappler wrote:

> john baez wrote:
> > [...] insistence on *real* eigenvalues is silly.

> Suppose that two operators L and M commute, and that for some
> common eigenstate they have individually the complex eigenvalues
> l and m, such that l =/= l* and m =/= m*.
> ("*" denotes complex conjugation.)

> Which is the eigenvalue of the operator (L M) for this eigenstate:
> l m, l* m, l m*, ...?

Well - obviously if L state> = l state> and M state> = m state>
then, formally:

L M state> = L m state> = m L state> = m l state> = l m state>.

The troublesome point is only that if L and l are given in terms of
real numbers and the operator i' which satisfies i' i' = -1,
and M and m are given in terms of real numbers and the operator
i" which satisfies i" i" = -1, then in order to express l m
(in terms of real numbers and the operator i satisfying i i = -1),
how should one determine whether i i' = 1 or i i' = -1, and
whether i i" = 1 or i i" = -1?

An even more substantial point to question complex eigenvalues:

Should non-hermitean operators, i.e. operators of the form (a~b)
with a =/= b, such that (a~b)~ = (b~a) =/= (a~b), have non-trivial
eigenstates and eigenvalues at all?

AFAIK, such operators can be considered _transition operators_
between different states namely between the different states b> and a>,
since (a~a)~(b~b) > 0 while any given state s> could be represented
in terms of the real eigenvalues ("levels") of the corresponding
hermitean _counting operator_ N == (b~a)(a~b) = (a~b)~(a~b).

Obviously application of the transition operator (a~b) to _any_ state
results in a state with a _different highest_ occupied "level", therefore
in a _different_ state, which is consequently not an eigenstate,
except (perhaps) when applying a down-transition to the vacuum -
the result is zero (can one therefore consider the vacuum an eigenstate
of the lowering operator with eigenvalue zero?), and when operating
on _one particular infinite_ combination of levels (there should be
precisely one specific such "coherent" state per non-hermitean operator).

If indeed such a necessarily _infinite_ combination can exist at all then -
sure, it is an eigenvector with complex eigenvalue (see Liboff, IIRC).
But that's _one_ (the only one, besides perhaps the vacuum), not a
_complete set of eigenstates_ as is common for hermitean operators
which have real eigenvalues.

So, I agree that it is silly to _insist_ on operators that have real
eigenvalues, because eigenvalue problems for other (non-hermitean)
operators just don't seem interesting at all.
(But I'd appreciate pointers to literature that argues the same,
or that argues the opposite. :)
(Smilie inserted in reference to non-hermitean operators.)

Of course those considerations leave out the "normal" operators
which commute with their adjoints (such that the corresponding
"counting operator" doesn't count anything):
characterizing (loosely?) stuff that cannot be counted nor created
nor destroyed, and (due to the complex eigenvalue) of which one has
to guess a phase direction. What could be their physical relevance?


> > [What really matters is that the operator have an
> > orthonormal basis of eigenvectors --- or equivalently, that it
> > commute with its adjoint. Such operators are called "normal".]

Could you please show (give a reference for) that proof?

In order to prove orthogonality of eigenvectors of an operator I have
to assume that its adjoint has a _complete set_ of (not necessarily
orthogonal) eigenvectors. Is the requirement of completeness
stronger/equivalent/weaker than being "normal"?


Best regards, Frank W ~@) R

Frank Wappler

unread,
Sep 15, 1998, 3:00:00 AM9/15/98
to
Toby Bartels wrote:

> Frank Wappler <fw7...@csc.albany.edu> wrote:

> > Suppose that two operators L and M commute, and that for some
> > common eigenstate they have individually the complex eigenvalues
> > l and m, such that l =/= l* and m =/= m*.
> > ("*" denotes complex conjugation.)

> > Which is the eigenvalue of the operator (L M) for this eigenstate:
> > l m, l* m, l m*, ...?

> Why would complex conjugation enter into this at all?

My post was in reply to John Baez' implied claim that complex eigenvalues
were not silly. Therefore I was considering operators whose eigenvalues
differ from their complex conjugates.


> (LM)psi = L(M psi) = L(m psi) = m(L psi) = m(l psi) = (ml)psi = (lm)psi,
> so the eigenvalue for LM is lm.

This I understand.
(See my follow-up, which was posted 19 seconds after your comment.)

However, what sort of mathematical object does (lm) symbolize?

If the complex numbers l and m are defined _independently_ in terms
of real numbers and the formally _independent_ imaginary units
i' (which is defined through i' i' = -1) and
i" (which is defined through i" i" = -1),
then - which complex number is (lm), if any?

Is i' = i", or is i' = -i",
or does (lm) have to be expressed in terms of _two different_
imaginary units?

Btw., note that a similar problem wrt. real numbers does _not_ arise:

If observers can recognize a "vacuum" state, then each can distinguish
their creation operators from their destruction operators (applying
the destruction operator to the own state results in "vacuum", applying
its adjoint to the own state results in a state different from "vacuum"),
i.e. each can define the unit "+1" individually and _all_ observers can
_reproduce_ this individual unit by conducting that same procedure
wrt. that individual observer's state.

(Oops, I just realize that this leaves out states whose
creation/destruction operators are non-hermitean and "normal" ... %)

AFAIU, the same procedure can of course be used to decide the positive
definiteness of hermitean operators, i.e. (v~v) > 0 vs. (v~v) < 0.
How else could one choose (v~v) > 0 as an axiom?

john baez

unread,
Sep 15, 1998, 3:00:00 AM9/15/98
to
In article <35FC9188...@world.std.com>,
Doug Sweetser <swee...@world.std.com> wrote:

>I hope I'm not being too irritating or stupid here, but I just think the
>numbers collected straight from the bench must behave like the equations
>say. It seems all right to me that several data points from
>complementary observables must be collected before the uncertainty
>principle can be used. More numbers, better confirmation for sure, but
>never a contradiction.

Here's what the uncertainty principle says, *experimentally* speaking.

Say you take a bunch of copies of a particle, all prepared to be
in the same state - any state you like. For some copies you measure
the position, so you get a bunch of position measurements q_1,...q_n.
For other copies you measure the momentum, so you get a bunch of
momentum measurements p_1,...,p_n. You use these measurements to
compute a mean position and mean momentum:

Q = 1/n Sum q_i
P = 1/n Sum p_i

and then compute the variances

(Delta q)^2 = 1/(n-1) Sum (q_i - Q)^2
(Delta p)^2 = 1/(n-1) Sum (p_i - P)^2

Now, THE HEISENBERG UNCERTAINTY PRINCIPLE DOES *NOT* GUARANTEE THAT

(Delta p) (Delta q) >= hbar/2

It says that if we do this game over and over again, and compute the
AVERAGE of the values of Delta q and Delta p that we obtain, these
AVERAGES will, with probability one, converge to values satisfying

(Delta p) (Delta q) >= hbar/2

Or, alternatively, if we take the limit as n -> infinity, then with
probability one the variances Delta q and Delta p will converge
to (possibly infinite) values satisfying

(Delta p) (Delta q) >= hbar/2

In short, the Heisenberg uncertainty principle only makes a claim
about what happens with probability one in the limit where we repeat
the experiment more and more times!

It is just like when we say "this coin is fair" - this doesn't mean
that whenever we flip it 6 times that it comes up heads 3 times and
tails 3 times! It only means that in the limit as n -> infinity,
with probability one the coin will come up heads a fraction of times
approaching 1/2.
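Here is a rough simulation of exactly this game (a sketch, assuming numpy
and working in units with hbar = 1): sample positions and momenta from the
distributions of a minimum-uncertainty Gaussian wavepacket, for which the
true product of spreads is exactly hbar/2, and watch the sample estimate
settle down as n grows.

import numpy as np

rng = np.random.default_rng(4)
hbar = 1.0
sigma_q = 0.7                      # position spread of a Gaussian wavepacket
sigma_p = hbar / (2 * sigma_q)     # momentum spread of the same state

for n in (3, 10, 100, 10_000):
    q = rng.normal(0.0, sigma_q, n)     # position measurements on n copies
    p = rng.normal(0.0, sigma_p, n)     # momentum measurements on n other copies
    dq = np.sqrt(np.sum((q - q.mean()) ** 2) / (n - 1))
    dp = np.sqrt(np.sum((p - p.mean()) ** 2) / (n - 1))
    print(n, dq * dp, "  (hbar/2 =", hbar / 2, ")")

# For n = 3 the product is frequently below hbar/2; for large n it hovers
# right around hbar/2, never converging to anything smaller.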

What I'm really doing for you here, you see, is showing you how
to interpret *any* probabilistic theory. And I'm giving you a kind
of frequentist interpretation. And if you don't like this - i.e.,
if you don't like this "in the limit" and "with probability one"
stuff - then you will have to learn how to become a Bayesian!
Check out

http://math.ucr.edu/home/baez/bayes.html


john baez

unread,
Sep 17, 1998, 3:00:00 AM9/17/98
to
In article <6tk95c$e72$1...@pravda.ucr.edu>,
john baez <ba...@galaxy.ucr.edu> wrote:

>It [the uncertainty principle] says that if we do this game over
>and over again, and compute the AVERAGE of the values of Delta q
>and Delta p that we obtain, these AVERAGES will, with probability
>one, converge to values satisfying
>
>(Delta p) (Delta q) >= hbar/2

Sorry, I slipped here! As we learned elsewhere on this thread,
we need to compute variances, not standard deviations, to get an
unbiased indicator:

So I should have said:

The uncertainty principle says that if we do this game over
and over again, and compute the AVERAGE of the values of (Delta q)^2
and (Delta p)^2 that we obtain, these AVERAGES will, with probability
one, converge to values satisfying

(Delta p)^2 (Delta q)^2 >= hbar^2/4

Toby Bartels

unread,
Sep 18, 1998, 3:00:00 AM9/18/98
to
Doug Sweetser <swee...@world.std.com> wrote:

>Toby Bartels <to...@ugcs.caltech.edu> wrote:

>>The standard deviation *of the measurements* may very well be zero.

You accepted this,
at least when the SD of the complementary observable is infinite.
That's a start. Now let's go back to this:

>>Doug Sweetser <swee...@world.std.com> wrote:

>>>If one measurement is made of position, and another for momentum, then
>>>(0)(0) >= hbar^2/4
>>>is not a valid statement. I don't know if physicists use a different
>>>basic definition of a standard deviation to avoid this logic error.

>>That is exactly my *point*.
>>The law (Dx)(Dp) >= hbar/2 is *not* about measurements.
>>If you measure something once, the SD of the measurements is 0;
>>even if you measure something 3 or more times,
>>the SD of the measurements might be 0,
>>because you might happen, by chance, to measure the same value each time.
>>This doesn't violate the uncertainty relation, however,
>>because that relation simply isn't talking about measurements at all.

Here we do have complementary observables,
yet the SD is still 0 for each of them.
And this doesn't violate the HUP,
because the HUP isn't directly about measurements.

>How has the uncertainty principle been proven? In the lab, collecting
>data on complementary observables. Only by collecting a lot of data may
>the limit of hbar/2 for the product of the two standard deviations be
>approached.

Yes, exactly; "Only by collecting a lot of data ....".
If you measure only once, the product will be smaller than hbar/2.
If you measure only twice, the product may well be smaller than hbar/2.
If you measure a billion times, there's hardly any chance it will be smaller.
This is because the HUP doesn't talk about the measurements;
it talks about the probability distributions of the observables.
To test what the probability distributions are,
you must make a lot of measurements.
A few measurements may, by chance, appear (but only *appear*) to violate.
(BTW, in testing the HUP by considering the SDs of your measurements,
you'll have to use the (n-1) variance, not the n variance,
as discussed elsewhere on this thread.)


>>If the commutator were -i hbar instead of i hbar,
>>the ">=" comparison would be the same.

>Isn't this just a convention? People like positive signs, so they went
>that way around the complex plane.

However, [x,px] and [y,py] must be the same -- that is no convention.
You may make them both i if you like or -i if you like,
but there's no way to tell if they are the same
by considering only standard deviations.


-- Toby
to...@ugcs.caltech.edu


Frank Wappler

unread,
Sep 18, 1998, 3:00:00 AM9/18/98
to
Frank Wappler wrote:

> If observers can recognize a "vacuum" state [...]

Let me define what I mean by "vacuum",
_how_ observers can recognize a "vacuum" state:

Observer "a_state>" determines two operators L and M from the
two conditions

L M a_state> = 1 a_state>, and

M L a_state> = 0 a_state>.

Using L and M, a_state> defines "vacuum" through

L M vacuum = M L vacuum.

In (other) words:
Vacuum (wrt. some observer) is what doesn't have any quantum numbers
which that observer could come up with.


Best regards, Frank W ~@) R

Doug Sweetser

unread,
Sep 19, 1998, 3:00:00 AM9/19/98
to
Thanks Toby and John:

Now I realize this is a deep issue, one that is still fought between
frequentists and Bayesians. I have only started to think about this
issue, but the notion of prior probabilities is attractive to me. Is
there a notion of the "size" of a prior probability? Perhaps the reason
the path of the Earth around the Sun is so darn dependable is because it
has an enormous prior probability. Still, the knowledge of that state
is finite, so the path does have a chaotic element at a high enough
resolution. For a single electron, it is a greater struggle to know its
prior probability, and that contributes to the wonderful logic of
quantum mechanics.


> However, [x,px] and [y,py] must be the same -- that is no convention.
> You may make them both i if you like or -i if you like,
> but there's no way to tell if they are the same
> by considering only standard deviations.

Space is isotropic. Imagine that someone decides to go from the
standard derivation to [x,px] = - i hbar. By the isotropic nature of
space, I can rerun the derivation by swapping every instance of x and px
with y and py. That would give me the same sign, which as you point
out, is not a convention.


Doug
http://world.com/~sweetser


Alexander Y. Vlasov

unread,
Sep 19, 1998, 3:00:00 AM9/19/98
to
>Greg Kuperberg <gr...@math.ucdavis.edu> wrote in article
><6tgosv$v67$1...@manifold.math.ucdavis.edu>:

>
>>If calculus is any example, you should feel lucky if the quantum
>>textbooks, whoever writes them these days, aren't littered with colored
>>boxes and red margin notes that say things like:
>>
>>+---------------------------------------------+
>>|A Hermitian operator is called a measurement.|
>>+---------------------------------------------+

I really would not feel lucky about such a box, because a Hermitian operator
cannot be called a measurement --- everyday experience shows that
by measuring something like length, momentum, etc., we find a number, not
an operator. A measurement is either a "quantity found by measuring" or
a "process of measuring". In the first case it can be represented by some
point of the spectrum of the Hermitian operator, in the second by a
projection operator.

I'd rather expect to find something like:
+---------------------------------------------+
|A Hermitian operator is called an observable.|
+---------------------------------------------+
But even in such a case "is called" here sounds strange, because the Hermitian
operator is named after the French mathematician Hermite, who lived 1822-1901
and so could not have called the operators "an observable" or "a measurement".
I think he rather called them "selfadjoint".

Frank Wappler

unread,
Sep 19, 1998, 3:00:00 AM9/19/98
to
Toby Bartels wrote:

> Frank Wappler wrote:
> > what sort of mathematical object does (lm) symbolize?

> It seems to me that i' and i'' have to be the same i
> because they act the same way on psi.

Given

L psi = l psi, l( i' )* = l( -i' ) =/= l( i' ); and

M psi = m psi, m( i" )* = m( -i" ) =/= m( i" ),

what "same way" do you mean?

Frank Wappler

unread,
Sep 20, 1998, 3:00:00 AM9/20/98
to
Frank Wappler wrote:

> If observers can recognize a "vacuum" state [...]

Let me sketch what I mean by "vacuum",
_how_ observers may recognize a "vacuum" state:

Observer "a_state>" determines two operators L and M from the
two conditions

L M a_state> = 1 a_state> = a_state>

(this requires an observer to determine whether or not
an own state and an operation on an own state are same), and

M L a_state> = 0 a_state>

(which requires additionally the notion of "0" as a number;
e.g. the number for which "0 + 0 = 0").

Using L and M, a_state> defines "vacuum" through

L M vacuum = M L vacuum.

(The notion of "0" may be checked by conducting the same procedure
on the _system_ "vacuum_and_a_state>". It should reproduce the same
operators L and M, and the same vacuum.)

In (other) words:
Vacuum (wrt. some observer) is what doesn't have any quantum numbers
which that observer could come up with.

Best regards, Frank W ~@) R

Toby Bartels

unread,
Sep 20, 1998, 3:00:00 AM9/20/98
to
Doug Sweetser <swee...@world.std.com> wrote in part:

>Toby Bartels <to...@ugcs.caltech.edu> wrote:

>>However, [x,px] and [y,py] must be the same -- that is no convention.
>>You may make them both i if you like or -i if you like,
>>but there's no way to tell if they are the same
>>by considering only standard deviations.

>Space is isotropic. Imagine that someone decides to go from the
>standard derivation to [x,px] = - i hbar. By the isotropic nature of
>space, I can rerun the derivation by swapping every instance of x and px
>with y and py. That would give me the same sign, which as you point
>out, is not a convention.

But there is no derivation to rerun.
You cannot derive [x,px] = - i hbar from the SD formula.
You can derive [x,px] = +- i hbar,
and you can rerun the derivation to derive [y,py] = +- i hbar,
and you can combine these facts to derive [x,px] = +- [y,py],
but there is no way to derive [x,px] = [y,py].


-- Toby
to...@ugcs.caltech.edu


john baez

unread,
Sep 22, 1998, 3:00:00 AM9/22/98
to
In article <6u27ls$h...@gap.cco.caltech.edu>,
Toby Bartels <to...@ugcs.caltech.edu> wrote:
>Doug Sweetser <swee...@world.std.com> wrote:

>>Space is isotropic. Imagine that someone decides to go from the
>>standard derivation to [x,px] = - i hbar. By the isotropic nature of
>>space, I can rerun the derivation by swapping every instance of x and px
>>with y and py. That would give me the same sign, which as you point
>>out, is not a convention.
>
>But there is no derivation to rerun.
>You cannot derive [x,px] = - i hbar from the SD formula.
>You can derive [x,px] = +- i hbar,
>and you can rerun the derivation to derive [y,py] = +- i hbar,
>and you can combine these facts to derive [x,px] = +- [y,py],
>but there is no way to derive [x,px] = [y,py].

I think we actually *can* derive [x,px] = [y,py] from the isotropy
of space together with some very reasonable assumptions. By the
isotropy of space we expect to have a unitary representation of the
rotation group on the Hilbert space of states. [1] Thus for
every rotation g we get a unitary operator U(g).

Now let g be a rotation that carries the x axis to the y axis.
It's reasonable to assume

U(g) x U(g)^{-1} = y
U(g) px U(g)^{-1} = py

since this says that rotating the y axis to the x axis,
measuring the x coordinate of position (or momentum), and
then rotating back, has the same effect as measuring the
y coordinate of position (or momentum).

From this we get

[y,py] = U(g) [x,px] U(g)^{-1}

so if [x,px] is a scalar, like - i hbar, then [y,py] is the
*same* scalar.
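The algebraic step used here, that conjugating by a single unitary turns
[x,px] into [y,py] while leaving a scalar untouched, is just the identity
U [A,B] U^{-1} = [U A U^{-1}, U B U^{-1}], which is easy to check
numerically in finite dimensions (a sketch with random matrices, assuming
numpy; it verifies only this identity, not the canonical commutation
relation itself, which has no finite-dimensional representation).

import numpy as np

rng = np.random.default_rng(5)
d = 4
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

def comm(X, Y):
    return X @ Y - Y @ X

Uinv = U.conj().T                   # U^{-1} = U^dagger for unitary U

lhs = U @ comm(A, B) @ Uinv
rhs = comm(U @ A @ Uinv, U @ B @ Uinv)
print(np.allclose(lhs, rhs))        # True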

By the way, the usual convention is [x,px] = i hbar, not
- i hbar.

------------------------------------------------------------

[1] Or maybe just a representation of the double cover, SU(2) - but
this doesn't significantly change the argument that follows.


(Greg Weeks)

unread,
Sep 23, 1998, 3:00:00 AM9/23/98
to
john baez (ba...@math.ucr.edu) wrote:
: [2] And as Greg Kuperberg has already pointed out, there's nothing to stop
: us from studying observables that take values in an arbitrary SET.

This is fine as far as spectral decomposition is concerned and raises no
problem if all your observables commute. But I don't see how this fits
into the complete algebra of observables. The algebra forces a field on
you, eg, "The spectrum of an element A is the set of all v such that A - v
has no inverse."


Regards,
Greg


Frank Wappler

unread,
Sep 23, 1998, 3:00:00 AM9/23/98
to
Toby Bartels wrote:

> Frank Wappler wrote:
> > Given
> > L psi = l psi, l( i' )* = l( -i' ) =/= l( i' ); and

> > M psi = m psi, m( i" )* = m( -i" ) =/= m( i" ) [... what is (lm)?]

> psi belongs to a Hilbert space [...]

> When we say the evalue of L is l = Re l + i Im l,
> we're not merely saying l = Re l + i' Im l
> for some i' such that (i')^2 = -1.
> This information isn't enough to specify the evalue.

One can consider it sufficient for one individual operator by itself,
but - yes - it is apparently insufficient to determine relations
between different operators and their eigenvalues (as sketched earlier).
_That's why_ I argue against calling operators with complex eigenvalues
"not silly".

> i'' is defined in the same way. Thus, i' and i'' are equal.

You may make explicit how the operators act on states and how the
eigenvalues are expressed by writing

L psi( i' ) = l( i' ) psi( i' ), and
M psi( i" ) = m( i" ) psi( i" );

perhaps together with

L psi( -i' ) = l( -i' ) psi( -i' ), and
M psi( -i" ) = m( -i" ) psi( -i" ).

But if you wanted to fix relations between pairs of such operators and
their eigenvalues by using the condition in our example that psi is a
simultaneous ("same") eigenfunction, then you have to establish a relation
between the individual Hilbert spaces psi( i' ) and psi( i" ) in the first
place; that's an equivalent problem.


> Perhaps you can describe a physical situation
> in which we would have trouble telling if i' = i'' or i' = -i''.

I cannot think of a physical situation in which such a decision
were possible! Earlier in this thread John Baez gave an example
involving two counters. Each pair of counts, as obtained in any trial,
constitutes an ordered set of two real numbers, not a particular
complex number (and _not_ its complex conjugate).


> I feel that I don't really understand where you're coming from on this.

I'm starting from the assumption that
what I (or others) can count, others (or myself) can reproduce.

Frank Wappler

unread,
Sep 23, 1998, 3:00:00 AM9/23/98
to
john baez wrote:

> By the isotropy of space we expect to have a unitary representation
> of the rotation group on the Hilbert space of states. [...] Thus for
> every rotation g we get a unitary operator U(g).

> Now let g be a rotation that carries the x axis to the y axis.
> It's reasonable to assume

> U(g) x U(g)^{-1} = y
> U(g) px U(g)^{-1} = py

Why should both equations contain the same unitary operator U( g )?

Is not for every (unitary) solution T of the operator equation
T x T^{ -1 } = y the composition "y^r T x^s" a (unitary) solution as well?

Is not for every solution V of the equation V px V^{ -1 } = py
the composition "py^w V px^q" a solution as well?

(Since [x,px] == x px - px x is defined, I assume that the compositions
x x ... x == x^{ index } etc. are defined as well.)

How would you determine whether or not those two classes of operators
(indexed by the real number pairs <r, s> and <w, q>) contain at least
one common solution U( g )?

Thanks, Frank W ~@) R

john baez

unread,
Sep 24, 1998, 3:00:00 AM9/24/98
to
I'm not sure I understand what Toby is saying, but if he's saying
this, I agree with him:

You can start with the canonical commutation relations:

[p, q] = - i hbar

and derive the Heisenberg uncertainty principle:

(Delta p) (Delta q) >= hbar/2.

But you can't start with the Heisenberg uncertainty principle
and derive the canonical commutation relations - not even up
to a sign. There are obvious counterexamples: pairs of observables
A, B satisfying

(Delta A) (Delta B) >= hbar/2

in every state, but not satisfying

[A,B] = - i hbar.

For a really dumb counterexample, just let A = p, B = 2q.
There are less dumb counterexamples, too.

Perhaps there is a way to add some extra conditions to the
Heisenberg uncertainty principle to allow one to derive the
canonical commutation relations, but I can't think of any
*interesting* way to do it.

My other remark - that you can use the isotropy of space to
guarantee [p_x, q_x] = [p_y, q_y] as soon as you know both
these quantities are scalars - was not meant to be relevant
to the question "which is more fundamental, the canonical
commutation relations or the Heisenberg uncertainty principle?"

jw...@pacbell.net

unread,
Sep 24, 1998, 3:00:00 AM9/24/98
to
Hi.

I have cross-posted to physics, which is one of my subscribed groups.

I've been reading this thread back to Sept 9th with great interest;
please excuse me if my comments repeat something previously said.

Someone made the point about the Uncertainty Principle in the
form,

delta-x * delta-p >= hbar/2.

There then was some discussion of which statistical construct should
be used to define the "delta": sd, sample sd, etc.

Here's an example of a nonchaotic problem: Consider the repeated
measurement of the mean of an incremental count of the integers:
{ 1, 2, 3, ... }.

The mean is undefined: it constantly increases from 1 to 1.5 to 2,
etc. Likewise the variance is undefined. In a sense, each measurement
changes the state, so there actually is no meaningful repetition
possible.

Is it useful to claim that, because a quantum-level measurement
also changes the state, then the "mean" actually is not a "measurement"
in some sense? Here is how I understood this claim:

If I replace delta with a reference to just one
(state-altering) observation, the Uncertainty
Principle holds as above.

However, if I average many measurements with "true"
repetition, the mean will converge and my precision
will increase. So, replacing "delta" above seems
to violate the principle.

I think the solution is simple: Each time I repeat the measurement,
I divide the RIGHT side by sqrt(N), where N is the sample size.
So, the Uncertainty constraint decreases toward zero at the same
rate as the standard deviation (which has the dimension
of the mean or the measurement).

Actually, isn't this a way of estimating the value of hbar?

So, why shouldn't a mean be considered a measurement? It seems
consistent with Heisenberg's Principle.

--
John
Submarine Physicist

Ralph Frost

unread,
Oct 3, 1998, 3:00:00 AM10/3/98
to
Toby Bartels wrote:

[Moderator's note: Quoted text trimmed. -P.H.]

> Of course, this derivation has nothing to do with the HUP.
> So we see that isotropy does give us [x,px] = [y,py],
> but this doesn't really show that the commutator
> isn't a more fundamental way to write the HUP than the SD relation,
> because there may be other conjugate pairs of observables
> which are not related through isotropy considerations.
> So isotropy won't show that they have the same commutators.
> Nevertheless, they must, and the SD relation won't allow you to show it.

One minor point here. It might be helpful to minimize lurker confusion
if you would spell out what you mean by the abbreviation "SD". I
assume you mean "standard deviation" from the thread last week, but
other SD words might fit there just as well.


Thanks

Ralph

