
A critique of textbook quantum mechanics


Arnold Neumaier

May 29, 2004, 12:55:04 PM
A critique of textbook quantum mechanics
----------------------------------------

Textbook quantum mechanics almost invariably gives an overly
misleading, simplified view of the measurement problem.
For example, the well-written and often recommended book
J.J. Sakurai,
Modern Quantum Mechanics (2nd Edition)
Addison-Wesley, 1993.
represents things as if there were no possible doubts at all about the
interpretation; a student learning exclusively from this book
(and without previous exposure to the still heated foundational debate)
would be very surprised if he afterwards encounters the bewildering
variety of interpretations and the lack of agreement on the foundations
of quantum mechanics.

In the following, I look at Chapter 1 of Sakurai's book and expose some
of the hidden (and questionable) assumptions that go into the
discussion.

In Section 1.1, Sakurai introduces the Stern-Gerlach experiment,
where a single (!) beam of silver atoms (with spin 1/2) generated by a
thermal source, passes a strong magnet and leaves two (!) spots on
a screen. This well-known nonclassical behavior requires explanation;
a classical beam would either produce a single spot (unspinning
particles) or a strip (randomly spinning particles). The explanation
given is quite common, but nevertheless far from convincing once
attention turns to foundational issues, namely to checking whether the
interpretative assumptions going into the analysis are impeccable.

1. Sakurai reasons after (1.1.2),
''Because the atom as a whole is very heavy,
we expect that the classical concept of trajectory can be
legitimately applied.'' This may be the case, but making this
approximation is already a departure from unitary quantum physics.
It is well-known that all foundational problems center around the
difficulty of harmonizing a linear, unitary, deterministic dynamics
for the wave function with the observed 'collapse' of the wave
function.

However, once a classical variable is allowed, the dynamics
necessarily becomes nonlinear (and most likely stochastic, too),
since there is no linear way to combine quantum and classical dynamics
in such a way that they interact. But once one allows a nonlinear
quantum-classical dynamics, the foundational problems disappear:
L.L. Bonilla and F. Guinea,
Collapse of the wave packet and chaos in a model with classical
and quantum degrees of freedom,
Phys. Rev. A 45 (1992), 7718-7728
show that Hamiltonian quantum-classical dynamics easily accommodates
collapse with the associated probabilities.

Thus already the beginning of the analysis is tainted by assumptions
that one should avoid when looking at the foundations. Indeed, in view
of the nonlocality intrinsic in quantum mechanics (as Aspect's
experiments showed convincingly), it would be completely consistent
to argue instead that each (!) silver atom in this experiment has
two (!) entangled trajectories.

2. Then it is argued that the location of the spot on the screen is
(up to a calibration factor) a 'measurement' of the z-component of
the spin S. The argument just relies on the behavior of a classical
spinning particle in this situation. This may have been acceptable
at the time the experiment was done (before the intrinsic spin of
the electron was postulated), but in the meantime it is well-known
that naive classical analogy is often misleading in quantum situations
and should not be trusted.

3. Then it is claimed that ''the atoms in the oven are randomly
oriented'', a very questionable assumption in view of the fact that
the Copenhagen interpretation expounded in the book explicitly denies
objective statements about microscopic objects when they are not
measured. Moreover, the assumption is not at all verifiable.

Judging from the classical analogy of polarization (expounded by
Sakurai a few pages later), where spin up and down correspond to
right and left polarized light, the thermal state in the oven should
rather be regarded as being analogous to thermal light which is
unpolarized, and hence be considered as consisting of unoriented
atoms. Note that this analogy is 100 percent isomorphic on the
mathematical level (Bloch vectors - though Sakurai does not use this
useful terminology). Classical partially polarized light shows all
the typical quantum effects related to superposition and mixing.
Even the nonlocal aspects can be reproduced by using beam splitters.
Unfortunately, Sakurai stops discussing the analogy halfway, and does
not discuss unpolarized light.
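The mathematical isomorphism can be checked in a few lines. The following
sketch (my own illustration in Python/numpy; the function names are mine,
not Sakurai's) shows that the maximally mixed spin-1/2 state - the Bloch
vector r = 0, the analogue of unpolarized light - gives 50/50 outcomes
along every measurement axis, i.e. it is the rotationally symmetric state:

```python
import numpy as np

# A spin-1/2 (or polarization) state is rho = (I + r.sigma)/2 with
# Bloch vector r, |r| <= 1; |r| = 1 is pure (fully polarized),
# r = 0 is maximally mixed (unpolarized light / unoriented atoms).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def rho_from_bloch(r):
    """Density matrix for Bloch vector r = (rx, ry, rz)."""
    return 0.5 * (I2 + r[0] * sx + r[1] * sy + r[2] * sz)

def prob_up_along(rho, n):
    """Probability of the 'up' outcome when measuring along unit vector n."""
    proj = 0.5 * (I2 + n[0] * sx + n[1] * sy + n[2] * sz)  # projector on +n
    return np.trace(rho @ proj).real

rho_mixed = rho_from_bloch([0.0, 0.0, 0.0])   # unpolarized / unoriented
# The maximally mixed state gives 50/50 along *every* axis:
s = 1 / np.sqrt(2)
for n in ([0, 0, 1], [1, 0, 0], [s, 0, s]):
    assert abs(prob_up_along(rho_mixed, n) - 0.5) < 1e-12
```

The same code with |r| = 1 describes the pure (fully polarized) case, so
both halves of the analogy live in the same two-by-two formalism.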

4. Then it is concluded from the experimental finding of the two
spots that ''the SG apparatus splits the original silver beam from
the oven into _two_distinct_ (original italic) components''.
It is well-known that such conclusions are completely unfounded,
and that assuming here two beams contradicts other experiments that
can be done on the system. The Copenhagen interpretation rather
insists that we cannot say anything about the situation between
magnet and detector. If one wants to associate a visual intuition
to the situation it can only be that of intrinsically
nonlocal states of silver atoms, localized at two different places.
Calling this two beams requires at least a careful delineation of
what is meant by a beam.

5. From 1-4 it is concluded that ''only two possible values of
the z-component of S are observed to be possible.''
This is one of the universally agreed features of the experiment,
but as we can see, it rests on shallow foundations.

6. Considering a sequence of SG experiments, Sakurai says a few lines
before (1.1.4), ''the selection of the S_z+ beam by the second
apparatus (SGx) completely destroys any _previous_ (original italic)
information about S_z''. Judging from the analogy to classical
polarization, one should rather think that the SG magnet serves as
a filter that transforms the state of the quantum system and reduces
the intensity of the beam.
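The filter picture is easy to make quantitative. In this sketch (my own
notation, not Sakurai's), an S_z+ beam sent through an SGx filter selecting
S_x+ loses half its intensity, and a subsequent S_z measurement on what
survives is 50/50 - the previous S_z information is indeed gone, but via a
state transformation plus intensity reduction, not an unanalyzed 'destruction':

```python
import numpy as np

# |z+>, |z->, and the S_x eigenstate |x+> = (|z+> + |z->)/sqrt(2).
z_plus  = np.array([1, 0], dtype=complex)
z_minus = np.array([0, 1], dtype=complex)
x_plus  = (z_plus + z_minus) / np.sqrt(2)

def filter_through(state, kept):
    """Project 'state' onto the kept eigenstate; return the filtered
    state and the transmitted intensity fraction |<kept|state>|^2."""
    amp = np.vdot(kept, state)          # amplitude <kept|state>
    intensity = abs(amp) ** 2
    return kept.copy(), intensity       # filtered beam is in |kept>

# A beam prepared as S_z+ passes an SGx filter selecting S_x+:
state, frac = filter_through(z_plus, x_plus)   # frac ~ 0.5: half survives
# A further S_z measurement on the filtered beam is now 50/50:
p_z_plus = abs(np.vdot(z_plus, state)) ** 2    # ~ 0.5
```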

7. After (1.1.14), Sakurai emphasizes that ''we have deliberately [...]
ignored the quantum aspect of light'' and worked out ''the analogy [...]
with the polarization vectors of the
_classical_electromagnetic_field_ (original italic)''. That the quantum
nature of light never enters the analogy makes polarization a very
valuable analogy to spin since it shows that quantum phenomena
are not qualitatively different from certain classical phenomena
known since Stokes (1849).

8. After introducing the standard Dirac formalism for quantum mechanics,
Sakurai goes on in section 1.4 to discuss measurements. He starts off
with an argument from authority, ''we first turn to the words of the
great master, P.A.M. Dirac, for guidance'', quoting his statement
''A measurement always causes the system to jump into an eigenstate
of the dynamical variable that is being measured.'' But it is just
one of the controversial questions whether or to which extent this is
the case!

Wigner, in his thorough analysis published in 1983,
E.P. Wigner,
Interpretation of quantum mechanics,
pp. 260-314 in:
J.A. Wheeler and W. H. Zurek (eds.),
Quantum theory and measurement.
Princeton Univ. Press, Princeton 1983.
gives a much more cautious, carefully qualified picture of what is
known. Since then, decoherence has become prominent, but does not
make Sakurai's statement less controversial. Moreover, most real
measurements are quite different and require modelling through positive
operator valued measures (POVMs) instead of Dirac's simple
quantum jump picture.

9. After (1.4.3) comes the definition of measurement: ''When the
measurement causes |alpha> to change into |a'>, it is said that
A is measured to be a'.'' Compared to measurements in practice,
this is a mock measurement: You kill a mouse (measurement),
then you know that it is dead (measurement result), and say
'I measured that the mouse is dead'. If a measurement does not
reveal anything about the state of a system before the measurement
began, it does not deserve to be called a measurement of the system!
In _all_ applications of science, the goal of measurement is
to find out what happened to the intact system, not to the part that
was destroyed by the measurement.

Rather, what Sakurai describes as a measurement would usually be
considered as a preparation, or a filter; cf. the optical analogy
for partially polarized light. Sakurai discusses preparation a little
later, after (1.4.6), under the name of 'selective measurement'.
But what he describes there can in no way be regarded as a measurement
since (unless a final screen follows which really performs a measurement)
no information would become available at the filter - one would not
even know that anything passes the filter (except by preparation).

10. So much about Sakurai. Other textbooks differ, of course, in
their detailed approach, but they do not fare much better.
Unfortunately, foundational aspects are poorly represented in
the textbook literature. For those who look for a more positive
perspective, I can recommend the book
A. Peres,
Quantum theory: Concepts and methods,
Kluwer, Dordrecht 1993.
which at least is careful in its use of concepts, though it avoids
the real controversies.


Arnold Neumaier

Frank Hellmann

May 31, 2004, 5:20:27 PM
Arnold Neumaier <Arnold....@univie.ac.at> wrote in message news:<40B4A012...@univie.ac.at>...


Does it matter though?

The Copenhagen interpretation is massively deficient in many ways but
works as a "working interpretation".
The average student will have trouble learning about Hilbert spaces
and self-adjoint operators first. The lack of anything approaching
clean maths in many textbooks was far more worrying to me back then.
(I had one Prof who during a QM lecture actually proved something by
Taylor expanding the delta function...) Then one can introduce POVMs,
von Neumann's theory of measurement, Many Worlds, Bohmian trajectories
and so on...

If there is a deficiency IMO it's primarily in explaining how this
totally unsatisfactory situation does not lead to any observable
problems, i.e. decoherence.

I think the textbooks actually reflect the attitude of a majority of
physicists anyways: "Shut up and calculate".
I've yet to find a Prof who would mention measurement in the context
of QFT at all, and textbooks mention it wrt causality if at all, and
when I asked some local string theorists about the situation of the
measurement problem in String theory I only got blank looks.


---
frank

Cl.Massé

May 31, 2004, 5:20:57 PM
"Arnold Neumaier" <Arnold....@univie.ac.at> a écrit dans le message
de news:40B4A012...@univie.ac.at...

> 1. Sakurai reasons after (1.1.2),
> ''Because the atom as a whole is very heavy,
> we expect that the classical concept of trajectory can be
> legitimately applied.'' This may be the case, but making this
> approximation is already a departure from unitary quantum physics.
> It is well-known that all foundational problems center around the
> difficulty of harmonizing a linear, unitary, deterministic dynamics
> for the wave function with the observed 'collapse' of the wave
> function.

That is irrelevant to the result and interpretation of the experiment.

> 3. Then it is claimed that ''the atoms in the oven are randomly
> oriented'', a very questionable assumption in view of the fact that
> the Copenhagen interpretation expounded in the book explicitly denies
> objective statements about microscopic objects when they are not
> measured. Moreover, the assumption is not at all verifiable.

It is merely a symmetry condition. The contrary would be incompatible
with the rotational symmetry of space.

> 4. Then it is concluded from the experimental finding of the two
> spots that ''the SG apparatus splits the original silver beam from
> the oven into _two_distinct_ (original italic) components''.
> It is well-known that such conclusions are completely unfounded,
> and that assuming here two beams contradicts other experiments that
> can be done on the system. The Copenhagen interpretation rather
> insists that we cannot say anything about the situation between
> magnet and detector. If one wants to associate a visual intuition
> to the situation it can only be that of intrinsically
> nonlocal states of silver atoms, localized at two different places.
> Calling this two beams requires at least a careful delineation of
> what is meant by a beam.

Yes, that's only an issue of definition. The standard definition of a
beam - a location of maximum presence probability and constant momentum -
will do.

> 8. After introducing the standard Dirac formalism for quantum
> mechanics, Sakurai goes on in section 1.4 to discuss measurements. He
> starts off with an argument from authority, ''we first turn to the
> words of the great master, P.A.M. Dirac, for guidance'', quoting his
> statement ''A measurement always causes the system to jump into an
> eigenstate of the dynamical variable that is being measured.'' But it
> is just one of the controversial questions whether or to which extent
> this is the case!

That is postulated in axiomatic QM. There may be other axiom systems,
but the book studies that one.

> Wigner, in his thorough analysis published in 1983,
> E.P. Wigner,
> Interpretation of quantum mechanics,
> pp. 260-314 in:
> J.A. Wheeler and W. H. Zurek (eds.),
> Quantum theory and measurement.
> Princeton Univ. Press, Princeton 1983.
> gives a much more cautious, carefully qualified picture of what is
> known. Since then, decoherence has become prominent, but does not
> make Sakurai's statement less controversial. Moreover, most real
> measurements are quite different and require modelling through
> positive operator valued measures (POVMs) instead of Dirac's
> simple quantum jump picture.

Decoherence doesn't solve anything. That's just more smoke and mirrors.

> 9. After (1.4.3) comes the definition of measurement: ''When the
> measurement causes |alpha> to change into |a'>, it is said that
> A is measured to be a'.'' Compared to measurements in practice,
> this is a mock measurement: You kill a mouse (measurement),
> then you know that it is dead (measurement result), and say
> 'I measured that the mouse is dead'. If a measurement does not
> reveal anything about the state of a system before the measurement
> began, it does not deserve to be called a measurement of the system!
> In _all_ applications of science, the goal of measurement is
> to find out what happened to the intact system, not to the part that
> was destroyed by the measurement.

A quantum measurement perturbs the measured system, there is no other
way.

> Rather, what Sakurai describes as a measurement would usually be
> considered as a preparation, or a filter; cf. the optical analogy
> for partially polarized light. Sakurai discusses preparation a little
> later, after (1.4.6), under the name of 'selective measurement'.
> But what he describes there can in no way be regarded as a measurement
> since (unless a final screen follows which really performs a measurement)
> no information would become available at the filter - one would not
> even know that anything passes the filter (except by preparation).

Preparation is nothing else than a special case of measurement.

> 10. So much about Sakurai. Other textbooks differ, of course, in
> their detailed approach, but they do not fare much better.
> Unfortunately, foundational aspects are poorly represented in
> the textbook literature.

I don't see anything to criticize about that. This textbook seems to
expound orthodox QM as it should be.

--
~~~~ clmasse at free dot fr
Liberty, Equality, Profitability.

Arkadiusz Jadczyk

Jun 1, 2004, 5:42:59 AM

On Mon, 31 May 2004 21:20:27 +0000 (UTC), C....@gmx.net (Frank Hellmann)
wrote:

>I think the textbooks actually reflect the attitude of a majority of
>physicists anyways: "Shut up and calculate".

I would add: "And do not ask difficult questions."

An example of a difficult question: what is the mechanism that causes
the detector to click at a given time rather than sooner or later?

See e.g. Mielnik, B., The Screen Problem, Found. Phys. 24 (1994),
1113-1129.

ark
--

Arkadiusz Jadczyk
http://www.cassiopaea.org/quantum_future/homepage.htm


Italo Vecchi

Jun 2, 2004, 4:56:35 AM

"Cl.Massé" <in...@optinbig.com> wrote in message news:<40ba1d63$0$18308$626a...@news.free.fr>...

> Decoherence doesn't solve anything. That's just more smoke and mirrors.
>

Indeed, as a solution to the measurement problem decoherence theory is
a joke, but it's currently very fashionable. I spotted some implicit
shrugs about its validity (see [1] for an explicit critique), but your
bluntness is rare.

Regards,

IV


[1] A. Bassi, G. Ghirardi "A General Argument Against the Universal
Validity of the Superposition Principle" at
http://www.arxiv.org/abs/quant-ph/0009020

----------------------------

"The King was naked and disappointingly so."

Arnold Neumaier

Jun 4, 2004, 8:52:49 AM

Cl.Massé wrote:
> "Arnold Neumaier" <Arnold....@univie.ac.at> wrote in message
> news:40B4A012...@univie.ac.at...

[On Sakurai's treatment of the Stern-Gerlach experiment]


>>1. Sakurai reasons after (1.1.2),
>>''Because the atom as a whole is very heavy,
>>we expect that the classical concept of trajectory can be
>>legitimately applied.'' This may be the case, but making this
>>approximation is already a departure from unitary quantum physics.
>>It is well-known that all foundational problems center around the
>>difficulty of harmonizing a linear, unitary, deterministic dynamics
>>for the wave function with the observed 'collapse' of the wave
>>function.



> That is irrelevant to the result and interpretation of the experiment.

No. In a classical treatment of trajectories you can never get the
entanglement between spin and position that is the basis of the
paradoxical features of the Stern-Gerlach experiment.

>>3. Then it is claimed that ''the atoms in the oven are randomly
>>oriented'', a very questionable assumption in view of the fact that
>>the Copenhagen interpretation expounded in the book explicitly denies
>>objective statements about microscopic objects when they are not
>>measured. Moreover, the assumption is not at all verifiable.



> It is merely a symmetry condition. The contrary would be incompatible
> with the rotational symmetry of space.

A typical thermal source already breaks the rotational symmetry.

Moreover, unoriented spin 1/2 atoms would also respect the symmetry;
just like unpolarized light is the rotationally symmetric version
of spin 1 light.

>>4. Then it is concluded from the experimental finding of the two
>>spots that ''the SG apparatus splits the original silver beam from
>>the oven into _two_distinct_ (original italic) components''.
>>It is well-known that such conclusions are completely unfounded,
>>and that assuming here two beams contradicts other experiments that
>>can be done on the system. The Copenhagen interpretation rather
>>insists that we cannot say anything about the situation between
>>magnet and detector. If one wants to associate a visual intuition
>>to the situation it can only be that of intrinsically
>>nonlocal states of silver atoms, localized at two different places.
>>Calling this two beams requires at least a careful delineation of
>>what is meant by a beam.



> Yes, that's only an issue of definition. The standard definition of a
> beam - a location of maximum presence probability and constant momentum -
> will do.

Is this standard? I'd like to see a reference. People usually use the
term quite loosely.

If a single atom passes the source, it is apparently in both beams!?


>>9. After (1.4.3) comes the definition of measurement: ''When the
>>measurement causes |alpha> to change into |a'>, it is said that
>>A is measured to be a'.'' Compared to measurements in practice,
>>this is a mock measurement: You kill a mouse (measurement),
>>then you know that it is dead (measurement result), and say
>>'I measured that the mouse is dead'. If a measurement does not
>>reveal anything about the state of a system before the measurement
>>began, it does not deserve to be called a measurement of the system!
>>In _all_ applications of science, the goal of measurement is
>>to find out what happened to the intact system, not to the part that
>>was destroyed by the measurement.



> A quantum measurement perturbs the measured system, there is no other
> way.

Many classical measurements do, too; e.g., if we want to find out the
chemical composition of a substance, we must destroy some of it.
But the analysis of the destroyed part yields a measurement of what
it was before destruction. This is the hallmark of any real measurement,
whether classical or quantum.


> Preparation is nothing else than a special case of measurement.

No; preparation and measurement are two completely disjoint aspects
of an experimental setting.

Preparations assume information based on properties of the equipment
and the experimental arrangement. They cannot yield anything new,
or provide any checks on the assumptions made.

Measurements are supposed to yield information about unknown states
or check information about putative known states. On the other hand,
they may destroy the system (as happens, e.g., in the Stern-Gerlach
setting), hence it is ridiculous to say that a measurement causes
the state to change into an eigenstate. In many cases (measurement
of collision events) the measurement result becomes available only
long after the system has ceased to exist...


>>10. So much about Sakurai. Other textbooks differ, of course, in
>>their detailed approach, but they do not fare much better.
>>Unfortunately, foundational aspects are poorly represented in
>>the textbook literature.



> I don't see anything to criticize about that. This textbook seems to
> expound orthodox QM as it should be.

Orthodoxy is not always a virtue.

Two problems with orthodox QM are that

1. it does not apply to measurements of single quantum systems.
For how these are modelled, see, e.g., the survey article
MB Plenio and PL Knight,
The quantum-jump approach to dissipative dynamics in quantum optics,
Rev. Mod. Phys. 70 (1998), 101-144.
This was not a real problem 50 years ago, but it is one now.

2. it requires classical detectors.
But we all know that detectors are just large quantum objects.
Thus the collapse of the wave function (required for accounting for
the behaviour of quantum systems when they pass screens, slits,
and other filters) clashes with the superposition principle.
This was always a problem, since it shows the inconsistency of
orthodox quantum mechanics in the large.

Even an introductory textbook should be able to spend a few paragraphs
on such issues.


Arnold Neumaier


Patrick Van Esch

Jun 7, 2004, 1:37:15 PM
Arnold Neumaier <Arnold....@univie.ac.at> wrote in message news:<40B4A012...@univie.ac.at>...
> A critique of textbook quantum mechanics
> ----------------------------------------
>
> Textbook quantum mechanics almost invariably gives an overly
> misleading, simplified view of the measurement problem.
> For example, the well-written and often recommended book
> J.J. Sakurai,
> Modern Quantum Mechanics (2nd Edition)
> Addison-Wesley, 1993.
> represents things as if there were no possible doubts at all about the
> interpretation; a student learning exclusively from this book
> (and without previous exposure to the still heated foundational debate)
> would be very surprised if he afterwards encounters the bewildering
> variety of interpretations and the lack of agreement on the foundations
> of quantum mechanics.

[snip]

In defense of Sakurai, I would say that, although all you write is
correct, one cannot justify that on a pedagogical level! Maybe a
footnote should be added to the book, saying that there are a few
subtle issues here, but I think you'll agree with me that a student
for whom it is instructive to read the first chapter of Sakurai
shouldn't be exposed to all these (for him/her incomprehensible)
subtleties. The student is not even very well aware of the
superposition idea, which this very chapter tries to introduce.
In physics teaching, trying to be too rigorous is just as
dangerous as the opposite (also very common!): being too sloppy.
For instance, I had a professor in theoretical physics who didn't want
to teach quantum field theory because of the mathematical difficulties
in its foundations. But in that case, you don't get anywhere either.
The aim of books like Sakurai's (and I think they do well at it) is to
teach the student problem solving skills in QM. You cannot tackle
foundational problems without mastering that. The opposite approach
would be, for the average student, much more dramatic: he would be
well aware of all subtleties of (mostly unobservable) aspects of the
quantum measurement problem, but wouldn't be able to solve a simple
problem. So his/her knowledge of QM would be absolutely useless for the
"working physicist".

cheers,
Patrick.

Arnold Neumaier

Jun 8, 2004, 5:03:08 AM

Patrick Van Esch wrote:
> Arnold Neumaier <Arnold....@univie.ac.at> wrote in message news:<40B4A012...@univie.ac.at>...
>>
>>Textbook quantum mechanics almost invariably gives an overly
>>misleading, simplified view of the measurement problem.
>>For example, the well-written and often recommended book
>> J.J. Sakurai,
>> Modern Quantum Mechanics (2nd Edition)
>> Addison-Wesley, 1993.
>>represents things as if there were no possible doubts at all about the
>>interpretation; a student learning exclusively from this book
>>(and without previous exposure to the still heated foundational debate)
>>would be very surprised if he afterwards encounters the bewildering
>>variety of interpretations and the lack of agreement on the foundations
>>of quantum mechanics.
>
> In defense of Sakurai, I would say that, although all you write is
> correct, one cannot justify that on a pedagogical level! Maybe a
> footnote should be added to the book, saying that there are a few
> subtle issues here, but I think you'll agree with me that a student
> for whom it is instructive to read the first chapter of Sakurai
> shouldn't be exposed to all these (for him/her incomprehensible)
> subtleties. The student is not even very well aware of the
> superposition idea, which this very chapter tries to introduce.

Superposition in itself is not difficult to grasp; it is valid for all
linear differential equations. And probably students know this already
before learning QM.

What is new, however, in QM is entanglement, which, in Stern-Gerlach
experiments, results in nonlocal states.

But here, Sakurai does a poor job. Without much more effort, namely
just an informal introduction to coherent states (which are very
useful anyway) the student could gain a much deeper and more realistic
perspective for QM.

Because what really happens in the Stern-Gerlach experiment is that
the silver atom is approximately in a superposition of the states
|up> tensor |x_+(t),p_+(t)>
and
|down> tensor |x_-(t),p_-(t)>,
where |x,p> denotes a coherent state with position x and momentum p,
and x_+(t),p_+(t) (resp. x_-(t),p_-(t)) are the phase space trajectories
of a classical particle with the magnetic interaction corresponding
to spin up (resp. spin down). This is the proper version of the
'classical' handling of the trajectories. (The way Sakurai proceeds,
this remains completely obscure - in his account, it looks as if a
single silver atom gets two classical trajectories but ends up at
only one place.)
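To make this concrete, here is a small numerical sketch (my own, with
invented parameters in arbitrary units; the constant-force model of the
magnet is a simplification) of the two phase-space branches described
above, and of Born-rule sampling of single detection events:

```python
import numpy as np

# Two classical phase-space branches (z_+/-(t), p_+/-(t)) for the
# coherent-state centers: constant force +/-F from mu * grad(B) inside
# the magnet.  All parameters are illustrative.
F, m, T = 1.0, 1.0, 1.0        # force scale, mass, time spent in magnet

def branch(sign, t):
    """Phase-space center (z, p) of the spin-up (sign=+1) or
    spin-down (sign=-1) branch at time t, starting from z=0, p=0."""
    a = sign * F / m
    return 0.5 * a * t ** 2, m * a * t

z_up, p_up = branch(+1, T)
z_dn, p_dn = branch(-1, T)
# The atom is then (approximately) in the entangled superposition
#   (|up> (x) |z_up, p_up> + |down> (x) |z_dn, p_dn>) / sqrt(2);
# each single detection event 'collapses' to one branch with Born weights:
rng = np.random.default_rng(0)
spots = rng.choice([z_up, z_dn], size=10_000, p=[0.5, 0.5])
# Two spots of (here) equal intensity, one hit per atom:
fraction_up = np.mean(spots == z_up)   # ~ 0.5
```

The single classical trajectory of Sakurai's account corresponds to keeping
only one of the two branches, which is exactly what a single silver atom
does not do before detection.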

Thus the Stern-Gerlach experiment would be an ideal opportunity for
introducing entanglement between spin states and position states,
which is the main thing responsible for nonclassical behavior.

One could also give here an excellent illustration for the necessity
of the collapse. It ensures that instead of a superposition of
|up> tensor |up-spot> and |down> tensor |down-spot> (or the
resulting mixture of |up-spot> and |down-spot> when projecting away
the invisible spin), we observe (in a single event) exactly one of
|up-spot> or |down-spot>, no matter how these look in detail.

With such a discussion, much of the confusion about the meaning of QM
could be avoided.


Arnold Neumaier

Andrew Resnick

Jun 8, 2004, 5:03:22 AM

In <c23e597b.04060...@posting.google.com> Patrick Van Esch
wrote:

> Arnold Neumaier <Arnold....@univie.ac.at> wrote in message news:
> <40B4A012...@univie.ac.at>...
>> A critique of textbook quantum mechanics
>> ----------------------------------------
>>
>> Textbook quantum mechanics almost invariably gives an overly
>> misleading, simplified view of the measurement problem.
>> For example, the well-written and often recommended book
>> J.J. Sakurai,

<snip>


>
> In defense of Sakurai, I would say that, although all you write is
> correct, one cannot justify that on a pedagogical level! Maybe a
> footnote should be added to the book, saying that there are a few
> subtle issues here, but I think you'll agree with me that a student
> for whom it is instructive to read the first chapter of Sakurai
> shouldn't be exposed to all these (for him/her incomprehensible)
> subtleties. The student is not even very well aware of the

<snip>

With all due respect, this is an example of the "shut up and calculate"
approach to QM. Agreed, that's an acceptable step along the road to a
fuller understanding, but since QM is an incomplete theory, foundational
issues MUST be highlighted at some point (IMO) in order to give aspiring
theorists directions in which to proceed.

Classical mechanics was only axiomatized in the last half of the 20th
century, mostly by Truesdell and Noll. Although I would be hard-pressed
to enumerate specific benefits that have accrued as a result, I can
state that the mechanics of deformable media is a complete and
consistent theory.

--
Andrew Resnick, Ph. D.
National Center for Microgravity Research
NASA Glenn Research Center

Patrick Van Esch

Jun 8, 2004, 3:23:08 PM

Arnold Neumaier <Arnold....@univie.ac.at> wrote in message news:<40C5714F...@univie.ac.at>...


> Patrick Van Esch wrote:
> > The student is even not very well aware of the
> > superposition idea, which this very chapter tries to introduce.
>
> Superposition in itself is not difficult to grasp; it is valid for all
> linear differential equations. And probably students know this already
> before learning QM.

The mathematical notion of superposition is not difficult.
But the physical notion of "superposition of states" is !
It is this very idea which Sakurai tries to convey, in the
mathematically simplest situation, namely a 2-state system.
Classically, there would be only two discrete possibilities,
but quantum mechanically we end up with C^2 (/ R+).
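The jump from two classical alternatives to C^2 fits in a few lines;
a minimal sketch (my own illustration, using numpy) of what "superposition
of states" adds to the two discrete possibilities:

```python
import numpy as np

# A two-state quantum system: a normalized vector in C^2, with physical
# states identified up to a global phase.
up   = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)

def superpose(a, b):
    """Normalized superposition a|up> + b|down>, for complex a, b."""
    v = a * up + b * down
    return v / np.linalg.norm(v)

# Classically only the two discrete alternatives exist; quantum
# mechanically there is a continuum of superpositions, with physically
# relevant relative phases, e.g. the S_x and S_y eigenstates:
psi_x = superpose(1, 1)       # |x+>
psi_y = superpose(1, 1j)      # |y+>
p_up  = abs(np.vdot(up, psi_x)) ** 2   # ~ 0.5: Born probability for 'up'
```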

>
> What is new, however, in QM is entanglement, which, in Stern-Gerlach
> experiments, results in nonlocal states.
>
> But here, Sakurai does a poor job. Without much more effort, namely
> just an informal introduction to coherent states (which are very
> useful anyway) the student could gain a much deeper and more realistic
> perspective for QM.

Coherent states are introduced only in chapter 3!

>
> Because what really happens in the Stern-Gerlach experiment is that
> the silver atom is approximately in a superposition of the states
> |up> tensor |x_+(t),p_+(t)>
> and
> |down> tensor |x_-(t),p_-(t)>,
> where |x,p> denotes a coherent state with position x and momentum p,
> and x_+(t),p_+(t) (resp. x_-(t),p_-(t)) are the phase space trajectories
> of a classical particle with the magnetic interaction corresponding
> to spin up (resp. spin down). This is the proper version of the
> 'classical' handling of the trajectories. (The way Sakurai proceeds,
> this remains completely obscure - in his account, it looks as if a
> single silver atom gets two classical trajectories but ends up at
> only one place.)

Honestly, all you write is interesting, but one has to master QM at
the level of Sakurai before one can understand what you say. At this
point he is just trying to introduce C^2 as the state space for a
2-state system.
After that, at the end of chapter 1, the idea that position is
described quantum mechanically is introduced. I do not say that it
wouldn't be interesting, at the END of the book, to have a chapter on
interpretational issues, and come back to what you are writing above.
But in the beginning of chapter 1, you simply should consider a SG
machine as a spin measurement along a certain direction. You can say
so, in the language of decoherence, that the very interaction with the
non-uniform B field is the "act of measurement" where the effective
projection on the up or down state takes place, and from there on, you
can calculate a trajectory classically.

cheers,
patrick.

Italo Vecchi

Jun 11, 2004, 7:29:06 AM

Arnold Neumaier <Arnold....@univie.ac.at> wrote in message news:<40BF417D...@univie.ac.at>...

>
> Measurements are supposed to yield information about unknown states
> or check information about putative known states. On the other hand,
> they may destroy the system (as happens, e.g., in the Stern-Gerlach
> setting), hence it is ridiculous to say that a measurement causes
> the state to change into an eigenstate.


In Copenhagen the wave function does not represent the state of the
system. It encodes the observer's knowledge about past and future
measurement outcomes. What is cast in one of the measurement
operator's eigenvalues by the act of measurement is not the state of
the system, but the state of the observer's knowledge.

A friend sends me a letter; then he is killed, say, in a motorcycle
accident. The day after his death I receive and read his letter. What
I read changes my knowledge of him and of his story, although he no
longer exists. Now, that's a measurement.


>2. it requires classical detectors.
>But we all know that detectors are just large quantum objects.
>Thus the collapse of the wave function (required for accounting for
>the behaviour of quantum systems when they pass screens, slits,
>and other filters) clashes with the superposition principle.
>This was always a problem, since it shows the inconsistency of
>orthodox quantum mechanics in the large.


The superposition principle holds as long as the system's evolution is
unitary. Collapse corresponds to the intrinsically non-unitary act of
perception/measurement.

IV

--------------------

"... nicht allein diese Tropfen sind blosse Erscheinungen, sondern
selbst ihre runde Gestalt, ja sogar der Raum, in welchen sie fallen,
sind nichts an sich selbst, sondern blosse Modifikationen, oder
Grundlagen unserer sinnlichen Anschauung, das transzendentale Objekt
aber bleibt uns unbekannt.


" ...not only are the drops of rain mere appearances, but even their
round shape, and even the space in which they fall, are nothing in
themselves, but merely modifications of fundamental forms of our
sensible intuition, and the transcendental object remains unknown to
us"

Kant, I. Critique of Pure Reason.

Arnold Neumaier

Jun 11, 2004, 1:35:42 PM

Italo Vecchi wrote:
> Arnold Neumaier <Arnold....@univie.ac.at> wrote in message news:<40BF417D...@univie.ac.at>...
>
>>Measurements are supposed to yield information about unknown states
>>or check information about putative known states. On the other hand,
>>they may destroy the system (as happens, e.g., in the Stern-Gerlach
>>setting), hence it is ridiculous to say that a measurement causes
>>the state to change into an eigenstate.
>
> In Copenhagen the wave function does not represent the state of the
> system. It encodes the observer's knowledge about past and future
> measurement outcomes.

No. Nature operates independently of what observers know about it.
Quantum mechanics was a valid description of Nature already before the
first observer existed. Otherwise time-honored dating methods such as
the carbon-14 method would be meaningless.


[Moderator's note: Italo Vecchi might have meant that the _maximally possible_
observer's knowledge about measurement outcomes is encoded in the wave function.
-usc]


The study of knowledge is a matter of psychology and not of physics.
If the observer gets a heart attack and loses some knowledge because of
that, the process in the lab does not change. Thus knowledge does not need
to follow either the Schroedinger equation or the collapse postulates.

Probabilities depend on the preparation of the system only, and not on
what the observer knows about this preparation; indeed, often the
measurements serve to find out about the details of how a system was
prepared.


>>2. it requires classical detectors.
>>But we all know that detectors are just large quantum objects.
>>Thus the collapse of the wave function (required for accounting for
>>the behaviour of quantum systems when they pass screens, slits,
>>and other filters) clashes with the superposition principle.
>>This was always a problem, since it shows the inconsistency of
>>orthodox quantum mechanics in the large.
>

> The superposition principle holds as long as the system's evolution is
> unitary. Collapse corresponds to the intrinsically non-unitary act of
> perception/measurement.

Perception/measurement is itself a quantum process and hence should
be described by QM if QM is a consistent description of nature.
All the great writers on the subject were aware of this inconsistency,
which you try to sweep under the carpet. The enigma was and is to
specify in a rational way where and how unitarity breaks down.


Arnold Neumaier

Alfred Einstead

Jun 12, 2004, 8:10:28 AM
vec...@weirdtech.com (Italo Vecchi) wrote:
> Indeed, as a solution to the measurement problem decoherence theory is
> a joke,

What's humorous about it?

Its essential statement is that relative to 3 systems (as opposed
to only 2) one *does* indeed have basis uniqueness. That is
true as a mathematical theorem. Since the problem of basis
uniqueness is much of the essence of the measurement problem,
much of that burden has been relieved by this "joke".
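The two-system half of this statement is easy to check numerically (a sketch of my own, not part of the theorem): for a maximally entangled state the Schmidt coefficients are degenerate, so the biorthogonal decomposition is not unique.

```python
import numpy as np

up = np.array([1.0, 0.0])
dn = np.array([0.0, 1.0])
plus = (up + dn) / np.sqrt(2)
minus = (up - dn) / np.sqrt(2)

# The same two-system state written in two different product bases:
psi_z = (np.kron(up, up) + np.kron(dn, dn)) / np.sqrt(2)
psi_x = (np.kron(plus, plus) + np.kron(minus, minus)) / np.sqrt(2)

print(np.allclose(psi_z, psi_x))  # True: with 2 systems the basis is ambiguous
```

The tridecompositional theorem says this ambiguity disappears once a third system is brought in.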

Thomas Dent

Jun 12, 2004, 8:15:17 AM
vec...@weirdtech.com (Italo Vecchi) wrote

> > Decoherence doesn't solve anything. That's some other sand in the eyes.
> >

But it exists and has been measured (in experiments with Be ions:
http://www.nature.com/nsu/000120/000120-10.html,
http://www.nature.com/cgi-taf/DynaPage.taf?file=/nature/journal/v403/n6767/abs/403269a0_fs.html).
So it solves one aspect of the problem of how QM corresponds with
reality, since without it such experiments would not make sense.

> Indeed, as a solution to the measurement problem decoherence theory is
> a joke

Why?

> but it's currently very fashionable.

Which proves nothing either way.

> I spotted some implicit shrugs about its validity (...)


> but your bluntness is rare.

So what? "Sand in the eyes", shrugs, "jokes", bluntness and fashions
prove nothing.

> (see [1] for an explicit critique)

> [1] A. Bassi, G. Ghirardi "A General Argument Against the Universal
> Validity of the Superposition Principle" at
> http://www.arxiv.org/abs/quant-ph/0009020

See also C. Anastopoulos, http://www.arxiv.org/abs/quant-ph/0011123
"Frequently Asked Questions about Decoherence" - esp. pp. 15-16 for
intrinsic decoherence.

Italo Vecchi

Jun 13, 2004, 10:39:05 AM

whop...@csd.uwm.edu (Alfred Einstead) wrote in message news:<e58d56ae.0406...@posting.google.com>...

Who picks those 3 systems?

IV

Cl.Massé

Jun 13, 2004, 10:39:17 AM

I wrote:

> > That is irrelevant to the result and interpretation of the
> > experiment.

"Arnold Neumaier" <Arnold....@univie.ac.at> wrote in message
news:40BF417D...@univie.ac.at...

> No. In a classical treatment of trajectories you can never get the
> entanglement between spin and position that is the basis of the
> paradoxical features of the Stern-Gerlach experiment.

I mean, the description of the trajectory is irrelevant as long as the
correlation between spin and trajectory is included in the whole
description. In practice, that is realized through a tensor product of
two distinct spaces, independently of the real nature of each space.
The trajectory can be described in QM terms, but the details added wrt
classical terms are irrelevant.

> A typical thermal source already breaks the rotational symmetry.

It wouldn't be thermal in the first place! "Thermal" means
randomization of all the degrees of freedom, including spin. That is
literally contrary to any fundamental symmetry breaking.

> > Yes, that's only an issue of definition. The standard definition of
> > a beam, a location of maximum presence probability and constant
> > momentum, will do.

> Is this standard? I'd like to see a reference. People usually use the
> term quite loosely.

It need not be used precisely. Nothing depends on its meaning, the
intuitive one that fits the explanation is sufficient.

> If a single atom passes the source, it is apparently in both beams!?

Yes, that's how the Copenhagen interpretation sees it, until the atom is
detected. Then, as trajectory and spin are correlated, the spin state
collapses onto the corresponding eigenstate, along with the position
state.

> > A quantum measurement perturbs the measured system, there is no
> > other way.

> Many classical measurements do, too; e.g., if we want to find out the
> chemical composition of a substance, we must destroy some of it.
> But the analysis of the destroyed part yields a measurement of what
> it was before destruction. This is the hallmark of any real
> measurement, whether classical or quantum.

Not quantum, precisely. Two particles in the same state may be detected
with different spins, for example, therefore there is no means of
getting to know the initial state through measurement outcomes alone.
The explanation of the uncertainty by a classical perturbation is
obsolete, or at best merely intuitive.


> > Preparation is nothing else than a special case of measurement.

> No; preparation and measurement are two completely disjoint aspects
> of an experimental setting.
>
> Preparations assume information based on properties of the equipment
> and the experimental arrangement. They cannot yield anything new,
> or provide any checks on the assumptions made.
>
> Measurements are supposed to yield information about unknown states
> or check information about putative known states. On the other hand,
> they may destroy the system (as happens, e.g., in the Stern-Gerlach
> setting), hence it is ridiculous to say that a measurement causes
> the state to change into an eigenstate.

An eigenstate of what? That is the question. Actually, an eigenstate
of the combined particle-apparatus system, whether or not the particle
is destroyed. It is easy to see that a measuring apparatus measuring
discrete states can prepare a particle in any of those states just by
a measurement and a coincidence setup, or a mechanism able to destroy
the outgoing particle if it is not in the desired state. Any preparing
apparatus without exception works like this. A simple example is a
beam generator that measures the momentum and eliminates particles
whose momentum direction is outside the allowed range by means of a
screen.

> In many cases (measurement
> of collision events) the measurement becomes available only long after
> the system stopped existing...
> Preparations assume information based on properties of equipment,
> and cannot yield anything new, or provide any checks on the
> assumptions made.

The properties of the equipment determine which particles will be
eliminated, as in the example of the screen. Preparation and
production aren't necessarily identical. A preparation puts the
particle in a pure state, which is impossible without a measurement.

> Two problems with orthodox QM are that
>
> 1. it does not apply to measurements of single quantum systems.
>    For how these are modelled, see, e.g., the survey article
>       M.B. Plenio and P.L. Knight,
>       The quantum-jump approach to dissipative dynamics in quantum optics,
>       Rev. Mod. Phys. 70 (1998), 101-144.
>    This was not a real problem 50 years ago, but it is one now.
>
> 2. it requires classical detectors.
> But we all know that detectors are just large quantum objects.
> Thus the collapse of the wave function (required for accounting for
> the behaviour of quantum systems when they pass screens, slits,
> and other filters) clashes with the superposition principle.
> This was always a problem, since it shows the inconsistency of
> orthodox quantum mechanics in the large.

Those problems aren't solved yet. However, an imperfect but still
useful theory exists: "orthodox" QM. Just as "classical" mechanics was
imperfect but allowed the planetary orbits to be calculated with
incredible precision. Both are worth being taught in their own right.
There isn't and will never be a perfect theory.

> Even an introductory textbook should be able to spend a few paragraphs
> on such issues.

On that, I agree, but it all depends on the purpose of the book. There
is a fashion of saying quantum mysteries are but early wonderings caused
by an unusual formalism. They are not.

Arnold Neumaier

Jun 14, 2004, 4:08:14 PM
Thomas Dent wrote:
> vec...@weirdtech.com (Italo Vecchi) wrote
>
>>>Decoherence doesn't solve anything. That's some other sand in the eyes.
>
>>Indeed, as a solution to the measurement problem decoherence theory is
>>a joke
>
> Why?

The paper you quoted:

> See also C. Anastopoulos, http://www.arxiv.org/abs/quant-ph/0011123
> "Frequently Asked Questions about Decoherence" - esp. pp. 15-16 for
> intrinsic decoherence.

says on p.13/14:

''it is often claimed that environment induced decoherence provides
by itself a solution of the macroobjectification problem [...]
One concludes therefore that environment induced decoherence
cannot by itself explain the appearance of macroscopic definite
properties and a realist interpretation of quantum theory still needs
an additional postulate to account for macroobjectification,''

''a sufficiently classical behaviour for the environment seems to be
necessary if it is to act as a decohering agent and we can ask what
has brought the environment into such a state ad-infinitum.''

and as concluding sentence on p.16:

''At this stage, however, it is fair to say that there is not
conclusive evidence about how the classical world appears and it
is likely that a special initial condition is needed in order to
guarantee it.''

Thus decoherence assumes on some scale what it purports to derive
at other scales. This implies that while it describes something real
it does not help in the foundations.

Also, decoherence does not say anything about an individual system.
If the wave function is assumed to be an objective property of a single
quantum system, it must collapse upon measurement to a single eigenstate,
and not to a mixture as decoherence predicts.

The measurement problem is the problem of explaining why we observe facts
and not superpositions or mixtures of all possible facts, and decoherence
is completely silent about this.


Arnold Neumaier

Arnold Neumaier

Jun 14, 2004, 4:11:07 PM
Cl.Massé wrote:

> It wouldn't be thermal in the first place! "Thermal" means
> randomization of all the degrees of freedom, including spin.

No. Thermal means generated by a stationary process which gives a
canonical ensemble
   rho = Z^{-1} exp(-beta H),   beta = 1/kT
for some Hamiltonian H.


> That is literally contrary to any fundamental symmetry breaking

Since angular momentum is a conserved quantity, a density matrix of
the form
   rho = Z^{-1} exp(-beta (omega dot J))
with some fixed angular velocity omega is perfectly admissible as a
thermal equilibrium state. Clearly, this breaks rotational symmetry
unless omega = 0.

The rotationally symmetric case omega = 0 corresponds to the assumption
in Sakurai, and gives the 2x2 density matrix
   rho = 1/2 * identity.
This is an unoriented spin. Talking here about randomly oriented spins
has no observable basis and is (in Sakurai's analogy) as misleading
as calling classical unpolarized light 'randomly polarized'.

rho can be written in many ways as a mixture of two or more pure states.
For example, one can write it in the form
   rho = 1/2 |up><up| + 1/2 |down><down|,
with any two preferred orthogonal basis vectors |up> and |down>.
But this (or any other) decomposition is unphysical since it has no
observable consequences. 'Randomly oriented' would even mean a mixture
of pure states in all possible directions...
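A small numerical sketch of these statements (my own illustration; the beta and omega values are arbitrary):

```python
import numpy as np

# spin-1/2 operators J = sigma/2 (hbar = 1)
sx = np.array([[0, 1], [1, 0]], complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], complex) / 2

def thermal_rho(beta, omega):
    """rho = Z^{-1} exp(-beta omega.J), via eigendecomposition of the 2x2 H."""
    H = omega[0] * sx + omega[1] * sy + omega[2] * sz
    w, V = np.linalg.eigh(H)
    A = (V * np.exp(-beta * w)) @ V.conj().T
    return A / np.trace(A)

# omega = 0: unoriented spin, rho = identity/2
print(thermal_rho(1.0, [0, 0, 0]))

# omega != 0 (illustrative value): rotational symmetry is broken
print(thermal_rho(1.0, [0, 0, 2.0]))

# two different decompositions of the same unpolarized rho:
up, dn = np.array([[1], [0]], complex), np.array([[0], [1]], complex)
plus, minus = (up + dn) / np.sqrt(2), (up - dn) / np.sqrt(2)
mix_z = 0.5 * up @ up.conj().T + 0.5 * dn @ dn.conj().T
mix_x = 0.5 * plus @ plus.conj().T + 0.5 * minus @ minus.conj().T
print(np.allclose(mix_z, mix_x))  # True: the decomposition has no observable meaning
```

Both mixtures give exactly the same density matrix I/2, which is the point: only rho itself is observable.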


>>>Preparation is nothing else than a special case of measurement.

>>No; preparation and measurement are two completely disjoint aspects
>>of an experimental setting.
>>
>>Preparations assume information based on properties of the equipment
>>and the experimental arrangement. They cannot yield anything new,
>>or provide any checks on the assumptions made.
>>
>>Measurements are supposed to yield information about unknown states
>>or check information about putative known states. On the other hand,
>>they may destroy the system (as happens, e.g., in the Stern-Gerlach
>>setting), hence it is ridiculous to say that a measurement causes
>>the state to change into an eigenstate.

> An eigenstate of what? That is the question. Actually, an eigenstate
> of the combined particle-apparatus system, whether or not the
> particle is destroyed.

This is nonsense. The apparatus is in a metastable thermal equilibrium
state both before and after the measurement, and since this state is
mixed, it cannot be an eigenstate of the Hamiltonian of the system
particle-apparatus.


> It is easy to see that a measuring apparatus measuring discrete
> states can prepare a particle in any of those states just by a
> measurement and a coincidence setup, or a mechanism able to destroy
> the outgoing particle if it is not in the desired state. Any
> preparing apparatus without exception works like this. A simple
> example is a beam generator that measures the momentum and
> eliminates particles whose momentum direction is outside the allowed
> range by means of a screen.

I agree that this is a preparation by selection, but it is not a
measurement. You know from the setup the momentum of the particles
going through the screen, but you have not measured it.
The result of passing through a small, long hole can be idealized
as a superposition (or mixture) of a family of close coherent states,
but this cannot be represented as applying a projection operator.


> The properties of the equipment determine which particles will be
> eliminated, as in the example of the screen. Preparation and
> production aren't necessarily identical. A preparation puts the
> particle in a pure state, which is impossible without a measurement.

Most real preparations put the particle in an impure state; preparing
pure states (e.g. a pure 1-photon state) is an art. Pure spin 1/2
states are an exception since spins can be easily separated by magnets,
so that the unwanted part can be blocked.


>>Two problems with orthodox QM are that
>>
>>1. it does not apply to measurements of single quantum systems.
>>
>>2. it requires classical detectors.

> Those problems aren't solved yet. However, an imperfect but still
> useful theory exists: "orthodox" QM. Just as "classical" mechanics
> was imperfect but allowed the planetary orbits to be calculated with
> incredible precision. Both are worth being taught in their own right.

Yes, and it should be taught in a way that minimizes future confusion.
When people learn classical mechanics they are told that this is an
approximation and that there is a better theory, QM, needed when things
go microscopic. Similarly, when teaching QM, one should point out its
limitations, on a superficial but understandable level.


> There isn't and will never be a perfect theory.

How do you know?


>>Even an introductory textbook should be able to spend a few paragraphs
>>on such issues.

> On that, I agree, but it all depends on the purpose of the book. There
> is a fashion of saying quantum mysteries are but early wonderings
> caused by an unusual formalism. They are not.

Many of them are. It took me a long time to separate fact from fiction
in the traditional story of quantum mysteries...


Arnold Neumaier

r...@maths.tcd.ie

Jun 14, 2004, 4:09:16 PM
vec...@weirdtech.com (Italo Vecchi) writes:

They correspond to the additional structure that must be added into
any no-collapse theory to compensate for the uncomfortable but
true fact that the state vector and Hamiltonian alone don't have
nearly enough structure to encode the world we see around us.

Decoherence advocates who claim that the classical world somehow
automatically emerges from the quantum fog through quantum dynamics
have never addressed this - they are in denial. Only by introducing
additional structure can the theory be made sensible, and the
introduction of that additional structure defeats the purpose of
claiming that the quantum fog is all there is.

Examples of additional structure which can be added include (but
are not necessarily limited to):

Bohmian particle positions.

Preferred bases.

Observable operators (the introduction of which automatically makes
the interpretation into the Copenhagen interpretation, as I explained
in my most recent post in the "Many Worlds" thread; for those who
are too lazy to read it, the gist is that the observables are defined
in terms of measurements, so introducing them (rather than defining
them in terms of the Hamiltonian and state vector, which can't be
done) necessarily makes measurement a fundamental part of the
interpretation.)

The identification of subspaces of the Hilbert space with subsystems
of the classical system. This is one of the most frequent techniques
of attempting to sneak additional structure in without the reader
noticing. It's also one of the most ridiculous mistakes for decoherence
advocates to make, since it very obviously assumes in advance that
which they are claiming to deduce - namely the existence of the
classical structures. The "3 systems" idea falls clearly into this
category.


R.

Arnold Neumaier

Jun 16, 2004, 4:42:32 PM
Patrick Van Esch wrote:
> Arnold Neumaier <Arnold....@univie.ac.at> wrote in message news:<40C5714F...@univie.ac.at>...
>
>>Patrick Van Esch wrote:
>>
>>> The student is even not very well aware of the
>>>superposition idea, which this very chapter tries to introduce.
>>
>>Superposition in itself is not difficult to grasp; it is valid for all
>>linear differential equations. And probably students know this already
>>before learning QM.
>
> The mathematical notion of superposition is not difficult.
> But the physical notion of "superposition of states" is !

It is _made_ difficult by the traditional treatment, not difficult
in itself. One can set up all of orthodox QM without mentioning
superpositions at all.
See quant-ph/0303047 = Int. J. Mod. Phys. B 17 (2003), 2937-2980.


> It is this very idea which Sakurai tries to convey, in the
> mathematically simplest situation, namely a 2 state system.
> Classically, there would only be 2 discrete possibilities,
> but quantum mechanically we end up with C^2 (/ R+).

Both classically (polarization of light) and in QM, the spin is
described by a 3-dimensional Bloch vector, which contains the relevant
state information. This description is the common one in quantum optics,
and is much more useful than the description in terms of up and down.
See, e.g., http://minty.caltech.edu/Ph195/wednesday9.pdf
Interpreting spin in terms of two discrete possibilities causes all
sorts of avoidable complications.
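A minimal sketch (mine, not taken from the linked notes) of reading the Bloch vector off a 2x2 density matrix rho = (1/2)(identity + r dot sigma):

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], complex)]

def bloch_vector(rho):
    """r_k = Tr(rho sigma_k); |r| = 1 for pure states, |r| < 1 for mixtures."""
    return np.array([np.trace(rho @ s).real for s in sigma])

up = np.array([[1, 0], [0, 0]], complex)   # pure |up><up|
unpolarized = np.eye(2) / 2                # rho = identity/2

print(bloch_vector(up))           # [0. 0. 1.]
print(bloch_vector(unpolarized))  # [0. 0. 0.]
```

The unoriented spin sits at the centre of the Bloch ball, with no preferred direction at all, rather than 'pointing randomly up or down'.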


>>What is new, however, in QM is entanglement, which, in Stern-Gerlach
>>experiments, results in nonlocal states.
>>
>>But here, Sakurai does a poor job. Without much more effort, namely
>>just an informal introduction to coherent states (which are very
>>useful anyway) the student could gain a much deeper and more realistic
>>perspective for QM.
>
> Coherent states are introduced only in chapter 3 !

which is a mistake. It is a wonderful tool to explore the
Stern-Gerlach setting. In Chapter 1 it would suffice to say
that the Hilbert space contains semiclassical states |x,p>
called 'coherent states' which behave almost like classical phase
space states, and where, for heavy particles, the Schroedinger equation
has approximate solutions of the form |x(t),p(t)>. Details of how to
construct these states, and verification of the claims made could be
left to Chapter 3. Thus only one extra paragraph would be needed.


> Honestly, all you write is interesting, but you should master QM on
> the level of Sakurai before understanding what you say. He's now just
> trying to introduce C^2 as the state space for a 2-state system.
> After that, at the end of chapter 1, the idea that position is
> described quantum mechanically is introduced.

If I'd write the book, I'd proceed just in the opposite way - first
introduce the classical view of QM by introducing coherent states
without spin. This shows that QM is not weird at all but just an
extension of classical mechanics to the microscopic world.
Then make things more interesting by introducing spin, showing that
indeed new features arise. The current practice of letting QM appear
to be something completely different from classical mechanics does
a lot of damage and helps no one.


> You can say
> so, in the language of decoherence, that the very interaction with the
> non-uniform B field is the "act of measurement" where the effective
> projection on the up or down state takes place, and from there on, you
> can calculate a trajectory classically.

This is simply wrong. The interaction with B is a completely unitary
process and hence not a measurement. B is just there to define the
Hamiltonian.

The measurement is in the split moment between the silver atom sensing
the screen and hitting it. This interaction (of the atom with a
macroscopic system in a thermodynamic equilibrium state)
causes the collapse (or rather what is there in place of it when the
detection is destructive, as in the present case)
and produces an observable record.

Without the screen, nothing would be measured, no matter how many
atoms pass the magnet.

Decoherence has nothing to do with the whole process.
It is not needed to discuss what happens, and it explains nothing.

Decoherence in general only claims that the final state of
system+detector is approximately a mixed state of the possible
experimental results, weighted by their probability.
This can be deduced in this simple case without decoherence;
that we observe a mixed state of an up-spot and a down-spot
follows easily from the approximate orthogonality of coherent
states at sufficiently distant positions.
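This approximate orthogonality, and the resulting mixed spin state, can be exhibited in a small numerical sketch (my own, in a truncated Fock basis; the displacement alpha plays the role of the separation between the two beams):

```python
import numpy as np
from math import factorial

def coherent(alpha, nmax=40):
    """Coherent state |alpha> truncated to the first nmax Fock levels."""
    n = np.arange(nmax)
    fact = np.array([float(factorial(k)) for k in n])
    return (np.exp(-abs(alpha)**2 / 2) * alpha**n / np.sqrt(fact)).astype(complex)

def reduced_spin(alpha):
    """Spin density matrix of (|up>|alpha> + |down>|-alpha>)/sqrt(2)."""
    up, dn = np.array([1, 0], complex), np.array([0, 1], complex)
    psi = (np.kron(up, coherent(alpha)) + np.kron(dn, coherent(-alpha))) / np.sqrt(2)
    M = psi.reshape(2, -1)
    return M @ M.conj().T   # partial trace over the position-like mode

# overlapping beams: large off-diagonal term, i.e. a genuine superposition
print(abs(reduced_spin(0.1)[0, 1]))   # ~ exp(-0.02)/2, about 0.49

# well-separated beams: |<-alpha|alpha>|/2 = exp(-2 alpha^2)/2 is negligible,
# and the spin state is indistinguishable from the 50/50 mixture
print(abs(reduced_spin(3.0)[0, 1]))
```

This reproduces the mixed up/down state quantitatively, but of course says nothing about why each single atom leaves only one spot.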

But what is observed is a single spot per atom, and this defies
unitary explanations. (Actually, the standard S/G experiment
does not allow one to observe a spot created by a single silver atom.
But simple variations of the experimental arrangement allow the
observation of single events.)


Arnold Neumaier


Italo Vecchi

Jun 16, 2004, 4:47:07 PM
td...@auth.gr (Thomas Dent) wrote in message news:<cb504c2c.04060...@posting.google.com>...

> vec...@weirdtech.com (Italo Vecchi) wrote
>
> > > Decoherence doesn't solve anything. That's some other sand in the eyes.
> > >
>
> But it exists and has been measured (in experiments with Be ions:
> http://www.nature.com/nsu/000120/000120-10.html,
>
http://www.nature.com/cgi-taf/DynaPage.taf?file=/nature/journal/v403/n6767/abs/403269a0_fs.html).
> So it solves one aspect of the problem of how QM corresponds with
> reality, since without it such experiments would not make sense.
>

What exists is the obvious fact that, since relative phases are hard
to track, interference patterns may get quickly blurred if you let
the environment perturb the system. Decoherence theory pumps up this
triviality and adds some circular argument to claim "emergence of the
classical world". All the "proofs" of decoherence theory I have
inspected rely on arbitrary cut-offs, or on unphysical assumptions such
as the no-recoil assumption, or on misunderstanding entanglement to
"demonstrate" that physical systems have preferred reference
frames. If you believe that you will believe anything.

By the way, I've said this before ([1], [2]).

IV

[1] http://www.lns.cornell.edu/spr/2001-12/msg0037849.html
[2] http://listserv.arizona.edu/cgi-bin/wa?A2=ind0003&L=quantum-mind&F=&S=&P=2566

Italo Vecchi

Jun 16, 2004, 4:48:31 PM
Arnold Neumaier <Arnold....@univie.ac.at> wrote in message news:<40C9EC1B...@univie.ac.at>...

> Italo Vecchi wrote:
> > Arnold Neumaier <Arnold....@univie.ac.at> wrote in message news:<40BF417D...@univie.ac.at>...

>

> No. Nature operates independent of what observers know about it.
> Quantum mechanics was a valid description of Nature already before the
> first observer existed. Otherwise time-honored dating methods such as
> Carbon 14 method would be meaningless.

> ...

Meaning is in the eye of the beholder.

>
> The study of knowledge is a matter of psychology and not of physics.

Knowledge of experimental outcomes is what physics is all about.

> If the observer gets a heart attack and loses some knowledge because of
> that, the process in the lab does not change. Thus knowledge does not need
> to follow either the Schroedinger equation or the collapse postulates.

The observer's perceptions do. Obviously the very idea of "nature",
i.e. of something that is out there and independent of our
perceptions, is a cultural construct, as pointed out by my previous
Kant quote. Such a prejudice is quite widespread, but it is not
necessary for scientific discourse, so that it constitutes an
arbitrary and possibly erroneous assumption. Physics posits a locus of
intersubjective agreement, but it does not require a metaphysical
external entity which you may name "nature" or whatever. An effective
quantum model may actually require, as in Rovelli's relational QM [1],
that "different observers may give different accounts of the same
sequence of events".

> Probabilities depend on the preparation of the system only, and not on
> what the observer knows about this preparation;

After playing cards with a card-sharper you'll know better.
In the de Finetti interpretation, probabilities (any probability)
encode the observer's knowledge. You may not agree with it, but it's
quite popular with probabilists (see [2] for a survey). The epistemic
approach à la de Finetti has recently and belatedly been applied to
the quantum measurement problem (see e.g. [3]). I have several posts
on this topic ([4]).

>indeed, often the
> measurements serve to find out about the details of how a system was
> prepared.

Obviously measurement is about an observer acquiring information.

>...


>
> Perception/measurement is itself a quantum process and hence should
> be described by QM if it is a consistent description of nature.

All I ask QM to describe, as it does, are perception\measurements
outcomes.
My bet, which may or may not be confirmed by experiments, is that the
act of perception corresponds to state-vector reduction of the system
with respect to the observer. When Alice observes an electron in a
1/sqrt(2)(|spin up> + |spin down>) superposition, she enters a
superposition 1/sqrt(2)(|Alice up>|spin up> + |Alice down>|spin down>)
(Alice measuring spin up and Alice measuring spin down, respectively). If
such a superposition can be detected by other observers,
that would be a big step in tackling the "measurement problem".
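As a sketch of why such an observer superposition is so hard for others to detect: tracing Alice out of the entangled state leaves the spin in a mixture with no off-diagonal coherence, so only a joint measurement of Alice and the spin could reveal the superposition. A minimal NumPy illustration (the basis labels and construction are mine, not from the thread):

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Post-observation state: (|Alice up>|spin up> + |Alice down>|spin down>)/sqrt(2)
psi = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)
rho = np.outer(psi, psi.conj())                  # 4x4 joint density matrix

# Tracing out Alice (the first tensor factor) leaves the spin's reduced state.
rho_spin = rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

print(rho_spin)   # diag(1/2, 1/2): no off-diagonal coherence survives
```

Any observer with access to the spin alone sees this maximally mixed state, indistinguishable from a classical coin flip.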

> All the great writers on the subject had been aware of this inconsistency
> which you try to hide under the carpet. The enigma was and is to specify
> in a rational way where and how unitarity breaks down.

I suppose that here "great" is a shorthand for "whom Arnold Neumaier
approves of".
Let me quote from Heisenberg's 'Physics and Philosophy':

"Therefore, the transition from the 'possible' to the 'actual' takes
place during the act of observation. If we want to describe what
happens in an atomic event, we have to realize that the word 'happens'
can apply only to the observation, not to the state of affairs between
two observations.
...
The discontinuous change in the probability function ... takes place
with the act of registration, because it is the discontinuous change
of our knowledge in the instant of registration that has its image in
the discontinuous change of the probability function."


By the way, I do believe that perception/measurement is described by
QM, but I think that there is an intrinsic completeness problem, which
is somehow implicit in Kant's remarks. Reference systems are picked
by observers, and in measurement information is extracted along a
pointer basis, so that the observer/reference frame relationship is
essentially tautological. There is no such thing as a reference frame
without an observer, although mystics in ecstasy tell of being rid of
any reference frame. Alas, they can't share that with non-believers,
and so it's not science.

Cheers,

IV

[1] http://arxiv.org/abs/quant-ph/9609002

[2] http://philsci-archive.pitt.edu/archive/00000789/00/guttmann.pdf

[3] http://arxiv.org/abs/quant-ph/0205039

[4] http://groups.google.com/groups?selm=61789046.0403041110.2145316d%40posting.google.com
http://groups.google.com/groups?selm=61789046.0405150054.68bbe627%40posting.google.com&rnum=6
http://groups.google.com/groups?selm=61789046.0312070029.5e0fc3f9%40posting.google.com
http://groups.google.com/groups?selm=61789046.0404020407.627838e3%40posting.google.com&

Arnold Neumaier
Jun 17, 2004, 6:52:21 AM

Thomas Dent wrote:
> Arnold Neumaier <Arnold....@univie.ac.at> wrote
>
>> ''a sufficiently classical behaviour for the environment seems to be
>> necessary if it is to act as a decohering agent and we can ask what
>> has brought the environment into such a state ad-infinitum.''
>>
>>Thus decoherence assumes on some scale what it purports to derive
>>at other scales. This implies that while it describes something real
>>it does not help in the foundations.
>
> To have a complete answer, as you say, one should ask what brought the
> environment into being. But this is really quantum cosmology, if you
> ask the question on the largest possible scale. Without quantum
> gravity, one can only present toy models of closed systems whose parts
> interact with one another and ask whether they approach a
> "quasi-classical" or thermal state: i.e. intrinsic decoherence.

There is an excellent toy universe to play with, namely quantized
nonrelativistic mechanics (N particles with Coulomb and Newton
interaction). This does not need cosmology. The foundational problems
are independent of relativity and cosmology.


>>Also, decoherence does not say anything about an individual system.
>>If the wave function is assumed to be an objective property of a single
>>quantum system, it must collapse upon measurement to a single eigenstate,
>>and not to a mixture as decoherence predicts.
>

> Copenhagen does not say anything about a single system either.

It does, at least in the common version of von Neumann and Wigner.
It says the wave function collapses at each individual measurement.


>>The measurement problem is the problem of explaining why we observe facts
>>and not superpositions or mixtures of all possible facts, and decoherence
>>is completely silent about this.
>

> I don't understand what you mean by "observe mixtures of all possible
> facts". A mixture is just a statement of probability, which one cannot
> get away from in QM. There is a probability that the observer will
> observe |a> and a probability that the observer will observe |b>. The
> statement that an observer could observe a "mixture" is nonsensical.

Yes, I formulated poorly. It depends on what is regarded as objective.
If only probabilities, why are there facts at all, about which we can agree?
If the state of a single system, regarded as a wave function, why don't
we observe superpositions of macroscopic bodies? If the state of a single
system, regarded as a density matrix, why don't we observe density
matrices modeling mixtures of macroscopic bodies? No matter which
hypothesis you find most agreeable, the problem does not go away.


>
> As for why the observer does not see superpositions, this is a problem
> for all interpretations, since the choice of basis is arbitrary, so
> what one calls a "superposition" or a "fact" is also a priori
> arbitrary.

But not in nature. The probabilistic interpretation is valid only for
the preferred basis determined by the experimental setting. There is no
choice.


Arnold Neumaier

Cl.Massé
Jun 17, 2004, 7:14:29 PM
I wrote:

> > It wouldn't be thermal in the first place! "Thermal" means
> > randomization of all the degrees of freedom, including spin.

"Arnold Neumaier" <Arnold....@univie.ac.at> wrote in message
news:40CD9ADD...@univie.ac.at...

> No. Thermal means generated by a stationary process which gives a
> canonical ensemble
> rho = Z^{-1} exp(-beta H), beta = 1/kT
> for some Hamiltonian H.

There is a weight depending on the energy, but it amounts to a
randomization. Any fundamental symmetry goes with a level degeneracy
and a population equality, from that very formula, since only the
Hamiltonian, which is invariant under those symmetries, appears in it.
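The degeneracy point can be checked directly: with a spin-independent Hamiltonian, the canonical weights Z^{-1} exp(-beta H) give equal populations to degenerate spin levels. A toy numerical check (the Hamiltonian and temperature are illustrative choices, not from the thread):

```python
import numpy as np

beta = 2.0                            # inverse temperature, arbitrary units
# Toy spin-independent Hamiltonian: two orbital levels, each doubly
# degenerate in spin. Index order: (orb0,up), (orb0,dn), (orb1,up), (orb1,dn).
E = np.array([0.0, 0.0, 1.0, 1.0])

p = np.exp(-beta * E)
p /= p.sum()                          # canonical weights Z^{-1} exp(-beta H)

p_up, p_down = p[0] + p[2], p[1] + p[3]
print(p_up, p_down)                   # 0.5 0.5: the spin is fully randomized
```

The energy weighting only redistributes population between orbital levels; within each degenerate pair, the spin comes out uniformly random.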

> I agree that this is a preparation by selection, but it is not a
> measurement.

A selection entails a measurement, thus a projection.

> You know by the set up the momentum of the particles
> going through the screen but you have not measured it.

A measurement is precisely a process whereby one gains knowledge about
the state of a system. There is even an operator corresponding to this
measurement, of which the final state is an eigenstate.

> > There isn't and will never be a perfect theory.

> How do you know?

How do you know whether a theory is perfect?

Thomas Dent
Jun 17, 2004, 7:15:42 PM
vec...@weirdtech.com (Italo Vecchi) wrote

> What exists is the obvious fact that, since relative phases are hard
> to track, interference patterns may get quickly blurred if you let
> the environment perturb the system. (...)
>
> All the "proofs" of decoherence theory I have
> inspected rely on arbitrary cut-offs or unphysical assumptions such
> as the no-recoil assumption or on misunderstanding entanglement to
> "demonstrate" that physical systems have a preferred reference
> frame. (...)

I have removed your sarcasm, which does not add to the content of the
argument.

By reference frame, I suppose you mean basis in Hilbert space. Then
you want to say that the basis of cat states |alive> and |dead> is
equivalent to the basis (|alive> + |dead>)/sqrt(2) and
(|alive> - |dead>)/sqrt(2), so why should decoherence operate in one
basis rather than the other?

Indeed, why should traditional collapse which throws away all the
states in Hilbert space except one happen in one basis rather than the
other? It is inconsistent to assert that physical systems have no
preferred basis, then to say that an observer does:

> "(...) physical systems do not choose reference frames, i.e. bases. Observers do."

This is the old, incoherent Copenhagen view of attributing some
mystical quality to "the observer" as if he or she were not a QM
physical system like any other. If this satisfies you, then so be it.

I agree that the problem of preferred bases is a serious problem. I
don't agree that it necessarily implies something beyond unitary
evolution, for the simple reason that one *can* distinguish physically
between different bases.

For the cat, note that the superposition (|alive> + |dead>)/sqrt(2), if
we imagine it 'selected' by decoherence, i.e. uncollapsed and
persisting for several seconds in conjunction with a given pure state
of the observer, is very much physically distinguishable from either
|alive> or |dead>. The expectation value of feline energy density
would have two disjoint regions with half the value of a
non-superposed cat. Similarly the probability of scattering of
cat-coloured light as a function of direction would have two disjoint
peaks at half the value appropriate to the original cat.

Similarly, if decoherence were to 'select' a basis of states strongly
localized in position, that would be physically distinguishable from
'selecting' in a basis delocalized in position but with sharp
momentum.

Thus, the statement that there can be no physical distinction between
bases appears dubious. The problem can be restated: why do we commonly
find systems in macroscopically localized states, rather than in
smeared-out superpositions?

I do not claim that this shows that the arguments of decoherence
theorists are correct, no more than the fact that some decoherence
theorists may have made some unjustified assumptions means they are
all wrong. But it suggests that an attempt to relate the common
occurrence and persistence of macroscopically localized states to
their physical properties is not automatically doomed to fail, because
localized quasi-classical states *do have* special physical
properties.

If you are determined to dismiss decoherence, you can study W. Zurek's
latest papers instead on "Envariance", which is his attempt to solve
the question of the preferred basis and of the emergence of
probability. He claims not to need decoherence any more.

http://arxiv.org/abs/quant-ph/0307229
http://arxiv.org/abs/quant-ph/0308163
http://arxiv.org/abs/quant-ph/0405161

"We study the emergence of objective properties of quantum systems
caused by their interaction with an environment. In our analysis, the
environment is promoted from a passive role of a reservoir selectively
destroying quantum coherence to a communication channel through which
observers find out about the Universe. We show that only preferred
pointer states of the system can leave a redundant and easily
detectable imprint on the environment. Observers who -- as it is
almost always the case -- discover the state of the system indirectly
(by monitoring a fraction of its environment) find out only about the
corresponding pointer observable, and can do that without perturbing
the system. Many observers acting independently in this fashion will
agree about its state: in this operational sense, preferred pointer
states exist objectively."

> [2] http://listserv.arizona.edu/cgi-bin/wa?A2=ind0003&L=quantum-mind&F=&S=&P=2566

While it is amusing, I doubt that the story of the wine-taster has any
direct relevance.

Thomas Dent
Jun 17, 2004, 7:15:50 PM
Arnold Neumaier <Arnold....@univie.ac.at> wrote

> > one can only present toy models of closed systems whose parts
> > interact with one another and ask whether they approach a
> > "quasi-classical" or thermal state: i.e. intrinsic decoherence.
>
> There is an excellent toy universe to play with, namely quantized
> nonrelativistic mechanics (N particles with Coulomb and Newton
> interaction). This does not need cosmology. The foundational problems
> are independent of relativity and cosmology.

Fine. Or one could look at the quantum mechanics of hydrodynamic
flows. This would require some expertise in the literature of
intrinsic decoherence, which I do not have.


> > Copenhagen does not say anything about a single system either.
>
> It does, at least in the common version of von Neumann and Wigner.
> It says the wave function collapses at each individual measurement.

Sure, but this is a quite empty statement, because one cannot derive
from any physical principle the basis in which the collapse is to occur,
and the physical consequences are the same as those of many-worlds.


> Yes, I formulated poorly. It depends on what is regarded as objective.
> If only probabilities, why are there facts at all, about which we can agree?

In other words, why does observer B agree with observer A after A's
first measurement? Or, why do most things live most of the time in
states which are not macroscopic superpositions? (This is a similar
question, since if there were a persistent superposition, B would have
a large probability of disagreeing with A).

See http://arxiv.org/abs/quant-ph/0307229 for one approach, using
theory of quantum information. They require that a state should be
capable of leaving redundant (i.e. observable by many people) records
of itself in the environment. Or rather, some states can do so and
some cannot.


> > what one calls a "superposition" or a "fact" is also a priori
> > arbitrary.
>
> But not in nature. The probabilistic interpretation is valid only for
> the preferred basis determined by the experimental setting. There is no
> choice.

This, like the former statement of Copenhagen, is content-free unless
there is some method to find out from first principles which basis the
"experimental setting" prefers. One says that a fluorescent screen
"prefers" position eigenstates, only because previous experiments with
such screens have produced signals localized in position. It is just
induction with no principle behind it. (As such, it works quite well!)
Likewise spectrometers "prefer" wavelength eigenstates. But if you
were given a piece of apparatus which you had never seen before, how
could you deduce which basis it preferred? To put it another way, what
is it about us observers (or the world we live in) that causes the
things we see to be macroscopically localized?

I have no answer to this, but neither does anyone else as far as I
know.

Arnold Neumaier
Jun 18, 2004, 8:11:47 AM

Thomas Dent wrote:
> Arnold Neumaier <Arnold....@univie.ac.at> wrote

>>Yes, I formulated poorly. It depends on what is regarded as objective.


>>If only probabilities, why are there facts at all, about which we can agree?
>
> In other words, why does observer B agree with observer A after A's
> first measurement? Or, why do most things live most of the time in
> states which are not macroscopic superpositions? (This is a similar
> question, since if there were a persistent superposition, B would have
> a large probability of disagreeing with A).
>
> See http://arxiv.org/abs/quant-ph/0307229 for one approach, using
> theory of quantum information. They require that a state should be
> capable of leaving redundant (i.e. observable by many people) records
> of itself in the environment. Or rather, some states can do so and
> some cannot.

Yes, but how is this requirement made possible in a unitary dynamics?
Giving the desiderata names does not construct them; one cannot
solve foundational problems by casting them into words which lack
foundations themselves.


>>>what one calls a "superposition" or a "fact" is also a priori
>>>arbitrary.
>>
>>But not in nature. The probabilistic interpretation is valid only for
>>the preferred basis determined by the experimental setting. There is no
>>choice.
>
> This, like the former statement of Copenhagen, is content-free unless
> there is some method to find out from first principles which basis the
> "experimental setting" prefers.

These first principles are called 'experimentation' and are part of every
good physics education.


> One says that a fluorescent screen
> "prefers" position eigenstates, only because previous experiments with
> such screens have produced signals localized in position. It is just
> induction with no principle behind it. (As such, it works quite well!)

Isn't this the case with all we know? One says that the world is governed
by QM, only because previous experiments have produced results in agreement
with it. If this is not a valid starting point then nothing is.


> Likewise spectrometers "prefer" wavelength eigenstates. But if you
> were given a piece of apparatus which you had never seen before, how
> could you deduce which basis it preferred?

> I have no answer to this, but neither does anyone else as far as I
> know.

There is a simple answer: One finds out by experimenting with the new
piece, letting it interact with systems of known properties, and matching
the collected data to trial models until one fits. This is how things
are indeed done in practice. The process is called model calibration
(or parameter estimation).

At first, one never knows a new instrument precisely, and has to check
out its properties. After sufficient experience with enough instruments,
one knows reasonably well what to expect of the next, similar one.
Then only fine-tuning is needed, which saves time. And this knowledge
can be used to create new instruments which are likely to behave a certain
way; but one still has to check to which extent they actually do,
since no theoretical design is realized exactly in practice.
Not even in the classical, macroscopic domain!


To repeat, the important thing is: there is no choice.

The preferred basis is fully determined by Nature, and that's why we can
find it out. And Nature's choice is systematic, hence after having
seen that a number of screens have a preferred position basis,
we conclude that this is the case generally. As for a spectrometer,
if it is built with a prism to analyze light, it is reduced by theory
to the observation of light or heat at certain positions of the screen,
which is done in the preferred position basis. So once one knows
some of Nature's preferences and the general laws, one can deduce other
preferences.


Of course, the theoretical challenge beyond Copenhagen is to deduce
from first principles that a screen made of quantum matter,
with two slits in it, actually has a preferred position basis and
projects the system to the part determined by the slits, etc.

None of the current paradigms beyond Copenhagen (many worlds,
decoherence, consistent histories, Bohmian mechanics, etc.)
handles this challenge.


Arnold Neumaier


Urs Schreiber
Jun 18, 2004, 8:28:38 AM
"Arnold Neumaier" <Arnold....@univie.ac.at> wrote in the newsgroup message
news:40D2D2B8...@univie.ac.at...

> Of course, the theoretical challenge beyond Copenhagen is to deduce
> from first principles that a screen made of quantum matter,
> with two slits in it, actually has a preferred position basis and
> projects the system to the part determined by the slits, etc.
>
> None of the current paradigms beyond Copenhagen (many worlds,
> decoherence, consistent histories, Bohmian mechanics, etc.)
> handles this challenge.

As far as I understand, the basis problem is solved by noting that the
environment of a system singles out a certain set of so-called "robust
states" which are approximately stable under the time evolution subject to
interaction with the environment. In section 8 "Decoherence and robust state
dynamics" of

http://tqd1.physik.uni-freiburg.de/~walter/forschung/habil.ps
(or .pdf)

explanations, technical details and computer simulations concerning robust
state dynamics are presented.

I had once summarized this here:
http://groups.google.de/groups?selm=3E1C9025.A2D5A6CB%40uni-essen.de :

-------

The mere fact alone that the
density operator becomes diagonal does not yet yield the
classical world we observe. I believe Penrose was among the first
to point this out in some of his lectures, where he essentially
remarks that a matrix is diagonal in more than one basis. For
instance

|0><0| + |1><1| = ( (|0> + |1>)(<0| + <1|) + (|0> - |1>)(<0| - <1|) )/2 .

Hence it is not enough to argue that we observe a system in
either of the diagonal entries of the density operator: We
need to know in which basis!
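This non-uniqueness is easy to verify numerically: the operator |0><0| + |1><1| is the identity, so it is "diagonal" in the {(|0>+|1>)/sqrt(2), (|0>-|1>)/sqrt(2)} basis just as well. A minimal check:

```python
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

rho_z = np.outer(ket0, ket0) + np.outer(ket1, ket1)    # |0><0| + |1><1|
rho_x = np.outer(plus, plus) + np.outer(minus, minus)  # |+><+| + |-><-|

# Both decompositions give the same operator (the identity), so "which
# mixture" the density matrix represents is not determined by diagonality.
print(np.allclose(rho_z, rho_x))   # True
```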

This is resolved by understanding "robust states", which is
conceptually very simple. The point is that in addition to
"tracing out" the environment one needs to look at the
temporal evolution of the remaining reduced density operator.
This is governed by the precise nature of the coupling of the
system to its environment. Schematically the coupling term I
generically looks like

I = S (x) B ,

where S is an operator of the system and B an operator of the
bath. For instance consider a system consisting of a single
harmonic oscillator coupled to a bath of other harmonic
oscillators. Then the coupling is usually of the form

I = a (x) a+ + h.c.,

where the left "a" is the annihilation operator of the system
oscillator and the right a+ the creation operator of a bath
oscillator (details suppressed). This means the system and the
bath interact by exchanging quanta of energy - no surprise.

Now consider an initial superposition of the system:

(|0> + |1>) (x) |bath initially> .

Obviously when this state evolves it will generically lead to
something like

|0(t)> (x) |bath finally having interacted with 0>
+|1(t)> (x) |bath finally having interacted with 1>

When you now take the trace over the bath, the modification of
the system's density matrix is the larger, the smaller the
"overlap" between the final bath states. But the final bath
states will only be appreciably different when their interaction
with the system's states was, too. This means that a measure
for the decoherence of the initial superposition is the
difference between the eigenvalues alpha_0, alpha_1 of the
interaction operator a on the states |0> and |1>. The decay rate
of the coherence is proportional to

D ~ |alpha_0 - alpha_1|.

For general interaction operators s with eigenvalues sigma
this of course generalizes to

D ~ |sigma_0 - sigma_1|.

This law holds quite universally for arbitrary setups of
system and bath (e.g. quant-ph/0204129).

When you look at a classical object you do not see it
decohering. Hence classical states are states which have D=0.
These are also called robust states. They are obviously
characterized as being eigenstates of the system's interaction
term!

Hence for the oscillator-bath setup the robust states are
the eigenstates of the annihilator which are just the coherent
states. Note that these states are only "robust" as opposed to
being stable. They evolve in time as

|alpha(t)> = |alpha e^((-i omega - gamma)t)>

i.e. they narrow in on the classical ground state |0>, but
while doing so they do not decohere.

This effect has been experimentally confirmed in 1996, or so,
in a famous experiment by some group in Paris (I forget the
details), which used as the system a single mode of an EM
field in a resonator.

So the upshot is that the trace over the environment explains
how a superposition becomes a mixture, while the coupling to
the environment determines which states we actually observe
when measuring this mixture, namely the robust states.
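The decay law D ~ |sigma_0 - sigma_1| and the robustness of interaction eigenstates can be illustrated with a minimal two-qubit toy model (one system qubit coupled to one bath qubit; the Hamiltonian and parameters are my own choices, not from the cited papers):

```python
import numpy as np

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
g, t = 1.0, np.pi / 4                    # coupling strength and time (toy values)

# H = g * sz (x) sx couples the system's sz to the bath. Since (sz(x)sx)^2 = 1,
# the propagator expm(-i H t) has the closed form cos(gt) - i sin(gt) sz(x)sx.
K = np.kron(sz, sx)
U = np.cos(g * t) * np.eye(4) - 1j * np.sin(g * t) * K

def reduced_system_state(sys_ket):
    """Evolve sys (x) |0>_bath under U and trace out the bath qubit."""
    psi = U @ np.kron(sys_ket, np.array([1.0, 0.0]))
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    return rho.trace(axis1=1, axis2=3)   # sum over the bath index

sup = np.array([1.0, 1.0]) / np.sqrt(2)  # superposition of sz eigenstates
eig = np.array([1.0, 0.0])               # sz eigenstate: a robust state (D = 0)

print(abs(reduced_system_state(sup)[0, 1]))  # ~0: coherence has decayed
print(abs(reduced_system_state(eig)[0, 1]))  # 0: the eigenstate stays pure
```

The superposition of sz eigenvalues +1 and -1 loses its off-diagonal coherence, while an eigenstate of the interaction operator is untouched by the bath, exactly the robust-state criterion described above.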

Cl.Massé
Jun 18, 2004, 3:05:10 PM

"Arnold Neumaier" <Arnold....@univie.ac.at> wrote in message
news:40C9EC1B...@univie.ac.at...

> Perception/measurement is itself a quantum process and hence should
> be described by QM if it is a consistent description of nature.
> All the great writers on the subject had been aware of this
> inconsistency which you try to hide under the carpet. The enigma was
> and is to specify in a rational way where and how unitarity breaks
> down.

Evolution and measurement in QM are two distinct processes. Evolution
describes the behavior of an infinity of particles, that is, the
evolution of probabilities through the state vector. A measurement
applies to a single particle. The measurement of an infinity of
particles in the same state is a unitary transformation. There is
therefore no fundamental inconsistency, even with great writers. The
enigma is elsewhere.

Arnold Neumaier
Jun 18, 2004, 4:11:01 PM

Cl.Massé wrote:
> "Arnold Neumaier" <Arnold....@univie.ac.at> wrote in message
> news:40C9EC1B...@univie.ac.at...
>

>>Perception/measurement is itself a quantum process and hence should
>>be described by QM if it is a consistent description of nature.
>>All the great writers on the subject had been aware of this
>>inconsistency which you try to hide under the carpet. The enigma was
>>and is to specify in a rational way where and how unitarity breaks
>>down.
>

> Evolution and measurement in QM are two distinct processes. Evolution
> describes the behavior of an infinity of particles, that is, the
> evolution of probabilities through the state vector. A measurement
> applies to a single particle. The measurement of an infinity of
> particles in the same state is a unitary transformation.

No. In the single case, a measurement reduces a superposition to an
eigenstate; averaged over an infinity of cases, a measurement still
reduces a superposition to a mixture. Both cases defy unitarity,
though the latter is easier to justify approximately by decoherence.
(However, as the derivation of decoherence assumes the Born rule,
it is not really a first-principles justification.)

Moreover, the statement that a measurement changes the state of a
system does not make much sense if the state is associated only with
an infinity of particles.


> There is
> therefore no fundamental inconsistency, even with great writers. The
> enigma is elsewhere.

One may shift the enigma to a variety of places, depending on the
emphasis. If one considers QM as a purely statistical technique,
unitary evolution is alright but measurement requires a classical
description of the apparatus. The enigma is then in the unspecified
coexistence of a classical and a quantum world.

If one considers states as an objective property of a single system
(as the Copenhagen interpretation does) one can discuss measurement
as a quantum process but needs the collapse, i.e., a breakdown of
unitary evolution. This was the context for what I wrote.

If one wants to consider the universe as a _single_ quantum system
(which it most likely is, since there is no indication of a
classical-quantum border, and we cannot have any evidence of more than
one universe in the sense of a completely isolated quantum system),
only the second possibility is feasible.


Arnold Neumaier


Thomas Dent
Jun 18, 2004, 4:24:39 PM
Arnold Neumaier <Arnold....@univie.ac.at> wrote

> > See http://arxiv.org/abs/quant-ph/0307229 for one approach, using
> > theory of quantum information. They require that a state should be
> > capable of leaving redundant (i.e. observable by many people) records
> > of itself in the environment. Or rather, some states can do so and
> > some cannot.
>
> Yes, but how is this requirement made possible in a unitary dynamics?
> Giving the desiderata names does not construct them; one cannot
> solve foundational problems by casting them into words which lack
> foundations themselves.

Well, you are very confident at judging long and complicated papers,
if you have already decided that the words used in Zurek's papers
"lack foundations". You think Zurek does not have a definite idea what
he means by redundant records in the environment? He is just blowing
smoke?

I did not discuss the equations they use to back up their claims
because I am not an expert. You are free to read the papers and point
out whether and why they fail to do what is claimed for them.


> > This, like the former statement of Copenhagen, is content-free unless
> > there is some method to find out from first principles which basis the
> > "experimental setting" prefers.
>
> These first principles are called 'experimentation' and are part of every
> good physics education.

I meant, theoretical first principles. Otherwise, there is no way to
compare theory with experiment. Copenhagen says "We will observe
signals localized in position because this is what happened in the
past with similar apparatus". It is not a theory, it is a fudge which
for some reason happens to work and enables the (hopefully) real
theory, the Schrodinger equation, to be compared with experiment.


> > It is just
> > induction with no principle behind it. (As such, it works quite well!)
>
> Isn't this the case with all we know? One says that the world is governed
> by QM, only because previous experiments have produced results in agreement
> with it. If this is not a valid starting point then nothing is.

No, this is not a good analogy, because QM is (we hope) a theory with
an exact and transparent mathematical base, from which many quantities
can be calculated and compared with experiment, given assumptions
about the nature of quantum measurement. Whereas Copenhagen cannot be
compared with experiment, because it's nothing but the statement
"experiments seem to produce this sort of result when you have this
sort of apparatus".

Schematically, we have:

Schrodinger (exact equation with infinite number of predictions) +
Copenhagen (ugly, ad hoc set of assumptions justified only by success
in conjunction with Schrodinger) = Experiment

We can test Schrodinger by doing all sorts of experiments (keeping
Copenhagen of course), but we cannot test Copenhagen or find (so far)
any fundamental principle that substitutes for it. The hope of
decoherence or its successors (einselection, envariance) is to find
that Copenhagen was only an approximation - as anything so incoherent
must be - and that it is not needed when you take Schrodinger to cover
the observer and the environment. I.e. Schrodinger never really needed
Copenhagen.


> to deduce
> from first principles that a screen made of quantum matter,
> with two slits in it, actually has a preferred position basis and
> projects the system to the part determined by the slits, etc.

Finally, this is what I meant!

> None of the current paradigms beyond Copenhagen (many worlds,
> decoherence, consistent histories, Bohmian mechanics, etc.)
> handles this challenge.


This is just what we should be debating. (I agree that many-worlds
doesn't answer the question, it equally has the preferred basis
problem. It is just Copenhagen without throwing away orthogonal parts
of the wavefunction.)

Zurek claims he has a framework which could in principle tell us why a
certain basis is preferred. Of course, he only works with toy models
of qubits and suchlike. So one must look carefully to see if his
framework could work.

Arnold Neumaier
Jun 18, 2004, 4:21:43 PM
Urs Schreiber wrote:
> "Arnold Neumaier" <Arnold....@univie.ac.at> wrote in the newsgroup message
> news:40D2D2B8...@univie.ac.at...
>
>>Of course, the theoretical challenge beyond Copenhagen is to deduce
>>from first principles that a screen made of quantum matter,
>>with two slits in it, actually has a preferred position basis and
>>projects the system to the part determined by the slits, etc.
>>
>>None of the current paradigms beyond Copenhagen (many worlds,
>>decoherence, consistent histories, Bohmian mechanics, etc.)
>>handles this challenge.
>
> As far as I understand the basis problem is solved by noting that the
> environment of a system singles out a certain set of so called "robust
> states" which are approximately stable under the time evolution subject to
> interaction with the environment. In section 8 "Decoherence and robust state
> dynamics" of
> http://tqd1.physik.uni-freiburg.de/~walter/forschung/habil.ps
> (or .pdf)
> explanations, technical details and computer simulations concerning robust
> state dynamics are presented.

The thesis is no longer available at this address.

As always, the devil appears to lie in the details.
Note that the challenge I posed was about a screen with two slits,
not a general plausible but somewhat vague scenario that defies
specialization to at least toy models of a macroscopic screen.

Since you seem to have looked at this in some detail, maybe you could
outline how the abstract robust state formalism applies to a particle
passing a screen with two slits. I have never seen it discussed,
although the double slit experiment is one of the most fundamental
quantum experiments.

Standard wave function arguments for purely unitary quantum mechanics
predict (at best) that the effect of the screen is to turn a particle
in a pure state psi into a superposition of three terms, one for each
of the two beams passing the slits, and a third for the particle
being stuck somewhere on the screen.

I don't see how decoherence and robust states could ever change this
conclusion, arrived at as a simple consequence of linearity of the
Schroedinger equation.
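The linearity argument can be made concrete in a toy calculation. The sketch below is illustrative only: the three basis labels and the particular orthogonal matrix standing in for the screen interaction are assumptions, not a derivation from any real Hamiltonian.

```python
import math

# Toy basis (an illustrative assumption): 0 = |at the screen / absorbed>,
# 1 = |beam behind slit 1>, 2 = |beam behind slit 2>.
s = 1 / math.sqrt(2)

# An orthogonal (hence unitary) matrix standing in for the screen interaction.
U = [[s,   -s,   0.0],
     [0.5,  0.5, s],
     [0.5,  0.5, -s]]

def apply(M, v):
    """One step of Schroedinger (unitary) evolution: matrix-vector product."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

psi_in = [1.0, 0.0, 0.0]      # particle heading into the screen region
psi_out = apply(U, psi_in)    # all THREE amplitudes are nonzero

norm = sum(a * a for a in psi_out)   # unitarity preserves the norm
```

Every run of the same unitary on the same input yields this same three-term superposition (stuck, beam 1, beam 2); linearity leaves no room for the evolution itself to pick one branch.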

But it is generally believed - and assumed in _all_ discussions of
interference - that a double slit screen projects a particle with
incoming wave function psi with the correct Born probability to a
particle in a superposition of the two beams that pass the slits.

The challenge is to derive this from a quantum model of the situation,
without invoking explicit collapse anywhere in the derivation.
Until this can be done convincingly, I don't consider the
measurement problem to be solved.


Arnold Neumaier


Oz

Jun 21, 2004, 3:51:26 AM

Arnold Neumaier <Arnold....@univie.ac.at> writes

>But it is generally believed - and assumed in _all_ discussions of
>interference - that a double slit screen projects a particle with
>incoming wave function psi with the correct Born probability to a
>particle in a superposition of the two beams that pass the slits.
>
>The challenge is to derive this from a quantum model of the situation,
>without invoking explicit collapse anywhere in the derivation.
>Until this can be done convincingly, I don't consider the
>measurement problem to be solved.

I'm surprised you say this, since in a discussion some while ago you
provided all the information required to do this. I doubt it could be
calculated precisely, or anything near, but a plausible mechanism that
could in theory be done numerically is to me quite evident.

I don't think anyone would argue other than that, within the apparatus,
the wavefunction is that corresponding to a diffraction pattern.

It's the process of absorption that seems to cause a problem.
That is, broadly, equivalent to the destruction of a photon.
Further, that destruction should be irreversible, otherwise you are
going to get scattering at best, or no effective interaction at worst.

I think that is enough of a description of a 'measurement' as is
required. That is: the irreversible destruction of a photon.

Now we must examine how this can be achieved. For irreversibility we
must have a system where the probability of the reverse reaction, that
is the re-emission of the photon, is very low, while the probability
that the detector's change of state is maintained is very high.

There are as many examples of this as there are absorbers of photons.
Probably the best studied would be photon absorbance by chlorophyll, and
I suspect most, possibly all, absorbers operate on a similar concept.
The chlorophyll molecule absorbs EM radiation and becomes increasingly
excited. You may, if you wish, consider this as a superposition of
excited and unexcited states, but at some point it becomes excited
enough to transfer an electron in another part of the molecule. One must
note in particular that this is a quantised event. A precise amount of
energy is required to do this, and the energy difference is provided by
the difference between the excited and unexcited states of the
chlorophyll molecule (these too, represent a quantised step). The
electron is whisked away from the immediate environs of the light
absorbing part of the molecule and surplus energy dissipated as heat.

The thermodynamics of this are close to irreversibility. That is, the
probability of the electron returning to the absorbing part of the
molecule, and just enough energy as heat being added to allow the
absorber to emit a photon is very low. A fraction of a second later the
electron thus produced is reprocessed via more strongly irreversible
processes (mostly chemical, with heat irreversibly dumped at each
stage), further increasing the improbability of the reverse reaction.

Now, the point to focus on is how the absorbing molecule collects a
photonsworth of energy. It doesn't need to take it all in at one
instant of time. A fractional photonsworth of energy can
be handled as a fractional superposition of the two states. Of course
you can never measure anything but one state or the other, but that is not the
issue here. The issue is whether this superposition can last long enough
for a 'complete photonsworth' of energy to be absorbed. If so, then the
probability of catching a photon becomes high, which is to say that the
change of state within the superposed molecule becomes probable enough
for that electron to be emitted. The quantised nature is NOT that of the photon
beam, because we can describe that in a non-quantised manner (ie
maxwellian). The quantum nature is in the energy step of the *detector*,
which can ONLY be described with a quantised energy step.
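Oz's picture of gradual, 'fractional' absorption matches the textbook dynamics of a driven two-level system. The sketch below is a bare on-resonance Rabi model (the coupling constant and time values are illustrative assumptions): the excited-state amplitude grows continuously, so a fractional photonsworth is simply a superposition with a small excited component.

```python
import math

omega = 1.0   # coupling strength to the field (arbitrary toy units)

def amplitudes(t):
    """Exact solution of i d/dt (cg, ce) = omega * (ce, cg):
    ground and excited amplitudes of the absorber at time t."""
    return math.cos(omega * t), complex(0.0, -math.sin(omega * t))

def p_excited(t):
    """Probability of finding the molecule excited at time t."""
    cg, ce = amplitudes(t)
    return abs(ce) ** 2

# The excitation sweeps up continuously; at t = pi/(2*omega) a full
# photonsworth has been absorbed.
probs = [p_excited(t) for t in (0.2, 0.7, 1.2, math.pi / 2)]
```

Note this only reproduces the continuous build-up of the superposition; it says nothing by itself about why a measurement finds either one state or the other, which is the point under dispute in the thread.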

Ok, I know this has no numbers, and I make no mathematical description
capable of falsification. You all know this is far beyond my capability.
That said, I can see no effective counterargument.

There are a few other points I should make.

If the absorbing, 'partially excited' molecule does not initiate an
irreversible change of state, then it will (after a period of time) dump
its energy in a scattering event. Typically this will be a non-
absorption and I suspect typically it will augment the incident em wave
(a bit of lasering in effect). If a leaf were illuminated, and the
incoming beam then abruptly switched off, I would expect a pulse of
fluorescence as the 'partially excited' molecules emit a photon by using
heat energy to 'balance the books', whilst others would dump their
excited energy to heat. Statistically, this will all balance up fine,
after all, the number of photons in a light beam is typically
indeterminate anyway, so who's counting?

The other is that the absorption characteristic of an individual
isolated stationary (ie cold) *atom* is very narrow. In essence (for
each orbital jump) it requires a photon of *precisely* the right energy
to obtain excitation and (temporary) absorption of a photon. For a large
molecule like chlorophyll the linewidth is wide (even if cold and
isolated), because there are very many vibratory modes of excitation
within the molecule (ie between each of its constituent atoms). These
are *all quantised*. They typically span a large range of energies,
resulting in a complex comb-like near-continuum of lines either side of
the main absorption peak, drastically (typically) broadening it. Now
throw in the thermal energy, and the complex interactions of the many
adjacent atoms surrounding the chlorophyll molecule in the leaf and you
get something indistinguishable from a continuous absorption band.
However note that ALL of these are quantised. Although I have only given
the main absorption mechanism, in reality it is a complex interplay of
quantised activity within the molecule, all of which are reversible.

So, really there is no 'collapse of the wavefunction'. It's just that we
can never see any of the intermediate effects directly. A measurement is
nothing more than an absorption that is thermodynamically irreversible.

I've said this before, perhaps never quite so clearly.
Nobody ever refutes the concept, but they don't reply either.
Am I now widely killfiled?

--
Oz
This post is worth absolutely nothing and is probably fallacious.

BTOPENWORLD address about to cease. DEMON address no longer in use.
>>Use o...@farmeroz.port995.com (whitelist check on first posting)<<

Cl.MassX

Jun 22, 2004, 5:47:50 PM
"Arnold Neumaier" <Arnold....@univie.ac.at> wrote in message
news:40D34B4A...@univie.ac.at...

> No. In the single case, a measurement reduces a superposition to an
> eigenstate, averaged over an infinity of cases, a measurement still
> reduced a superposition to a mixture.

The mixture isn't the consequence of the quantum measurement itself, but
of the macroscopic, thermodynamic properties of the apparatus. One can
imagine a measurement made by a microscopic, zero temperature, device.
It is what actually happens, since there is no absolute way to
distinguish an interaction from a measurement.
Let's take an example, the two slits experiment. The measurement is
made by the screen, but the screen has some thermal agitation, and it is
never the same between two measurements. But an experiment could be
performed by a superconducting screen.
And still one step further. Suppose the position of a particle is
measured at each point of space, and there is an infinity of particles
in the beam. A simple quantum calculation should show that the
propagation is unchanged, apart from a gauge transformation. We see
that the interaction enters the picture by itself.

> Both cases defy unitarity,
> though the latter is easier to justify approximately by decoherence.
> (However, as the derivation of decoherence assumes the Born rule,
> it is not really a first principle justification.)
>
> Moreover, the statement that a measurement changes the state of a
> system does not make much sense if the state is associated only with
> an infinity of particles.

It is. One could say: "a coherent ensemble". The wave function
describes quite correctly an infinity of particles, without any enigma
or mystery, just as thermodynamics does. It is only the description
of a single particle that is problematic, and needs a totally different
approach.
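The disputed distinction (a superposition reduced to an eigenstate vs. to a mixture) can be checked in a toy 2x2 calculation. This is a sketch only; the states and the Hadamard-like unitary are illustrative assumptions. It shows why neither reduction can come from unitary evolution alone: rho -> U rho U^dagger preserves the purity Tr(rho^2), while the 50/50 mixture has purity 1/2.

```python
import math

def mat_mul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    """Conjugate transpose of a 2x2 matrix."""
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def purity(rho):
    """Tr(rho^2): equals 1 for a pure state, 1/2 for the 50/50 mixture."""
    rho2 = mat_mul(rho, rho)
    return (rho2[0][0] + rho2[1][1]).real

s = 1 / math.sqrt(2)
psi = [s, s]                                    # pure superposition (|0>+|1>)/sqrt(2)
rho_pure = [[a * b for b in psi] for a in psi]  # its density matrix |psi><psi|
rho_mixed = [[0.5, 0.0], [0.0, 0.5]]            # the mixture obtained after "collapse"

U = [[s, s], [s, -s]]                           # a Hadamard-like unitary
rho_evolved = mat_mul(mat_mul(U, rho_pure), dagger(U))
```

purity(rho_pure) and purity(rho_evolved) both equal 1, while purity(rho_mixed) equals 1/2; so passing from the pure state to the mixture requires something beyond unitary dynamics (tracing out an environment, or a collapse postulate).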

Patrick Van Esch

Jun 25, 2004, 4:45:07 PM
Arnold Neumaier <Arnold....@univie.ac.at> wrote in message news:<caqbbo$fon$1...@lfa222122.richmond.edu>...

>
> If I'd write the book, I'd proceed just in the opposite way - first
> introduce the classical view of QM by introducing coherent states
> without spin.

I'm sure it would be a good idea for you to write a book!


> > You can say
> > so, in the language of decoherence, that the very interaction with the
> > non-uniform B field is the "act of measurement" where the effective
> > projection on the up or down state takes place, and from there on, you
> > can calculate a trajectory classically.
>
> This is simply wrong. The interaction with B is a completely unitary
> process and hence not a measurement. B is just there to define the
> Hamiltonian.

I don't think that this is true. I think that the coupling to the
B-field (or in all generality, the coupling of the magnetic moment of
the Ag atom to the EM field) is "irreversible" in the sense of
decoherence theory. Otherwise it should be possible to recombine the
two split states and recover the original state, which I think is
impossible.
Look at the following gedanken experiment:
a first SG system selects the x+ states. These x+ states are then
analysed by a second SG system, splitting the beam into y+ and y-
states. If the "measurement" didn't take place until a screen, it
should be possible to recombine these two split beams into a new beam,
which should be pure x+. I don't believe that. After recombining the
beams, I'm sure you get a statistical mixture of x+ and x-. I know
this is NOT the case if we had photons, but, because of the magnetic
moment of the Ag atom, there are probably "traces left" in the EM
field of the passage of an Ag atom, which could maybe be detected with
a squid or so, once the separation of the beams is macroscopic (say, 2
cm).
This would be a slight variant of figure 3c, where, in the last step,
the Sx- beam is NOT blocked.



> Without the screen, nothing would be measured, no matter how many
> atoms pass the magnet.
>
> Decoherence has nothing to do with the whole process.
> It is not needed to discuss what happens, and it explains nothing.

As explained above, I don't think that you can say that, in the
current case.

If my explanation is correct, it is completely justified to consider
the *classical* trajectories after the SG system.

cheers,
Patrick.

Italo Vecchi

Jun 27, 2004, 6:56:28 PM
td...@auth.gr (Thomas Dent) wrote in message news:<cb504c2c.04061...@posting.google.com>...

> I have removed your sarcasm, which does not add to the content of the
> argument.
>

Indeed. Thanks for your polite reply.

>
> Thus, the statement that there can be no physical distinction between
> bases appears dubious. The problem can be restated: why do we
> commonly find systems in macroscopically localized states, rather than
> smeared-out superpositions?

We do not "find" them, we OBSERVE them.

> I do not claim that this shows that the arguments of decoherence
> theorists are correct, no more than the fact that some decoherence
> theorists may have made some unjustified assumptions means they are
> all wrong. But it suggests that an attempt to relate the common
> occurrence and persistence of macroscopically localized states to
> their physical properties is not automatically doomed to fail, because
> localized quasi-classical states *do have* special physical
> properties.

It should be clear that in your argument you are implicitly referring
to a "local observer", i.e., someone bound to position measurement and
who scratches his head about EPR. Even in such a setting however, I am
skeptical about the diagonalisation of the density matrix, which IMO
is unphysical and/or based on a misunderstanding of the relevant
formalism.

> If you are determined to dismiss decoherence, you can study W. Zurek's
> latest papers instead on "Envariance", which is his attempt to solve
> the question of the preferred basis and of the emergence of
> probability. He claims not to need decoherence any more.

If Zurek can do without decoherence I guess I can too.

>>[2]http://listserv.arizona.edu/cgi-bin/wa?A2=ind0003&L=quantum-mind&F=&S=&P=2566
>
> While it is amusing, I doubt that the story of the wine-taster has any
> direct relevance.

The Burgundy example is a joke, but it's related to a valid
counterexample to the predictability sieve, i.e. DNA computing ([1]).

Regards,

IV

[1] http://www.lns.cornell.edu/spr/2000-05/msg0024503.html

Arnold Neumaier

Jun 27, 2004, 6:57:50 PM


A collapse challenge
____________________

June 20, 2004

Arnold Neumaier


A single photon is prepared in a superposition of two beams.
A photosensitive screen blocks one of the two beams but has a big hole
where the other beam can pass without significant interference.
At twice the distance of the first screen, a second photosensitive
screen without hole is placed.

The experimental observation is that the photon is observed at exactly
one of the two screens, at the position where the corresponding beam
ends.

The challenge is to build from first principles and your preferred
interpretation a complete, observer-free, quantum model of this
experiment (photon, two screens, and an environment),
together with a formal analysis that completely explains the
experimental result.


Comments.

1. The experimental result has the natural interpretation that the
photon was either stopped by the first screen,
or passed that screen successfully. This property is essential for the
analysis of any quantum experiment which uses screens with holes to
create or select beams of particles. Thus reproducing this experiment
correctly is a basic requirement for any theoretical model claiming
to provide complete foundations for quantum mechanics.

2. Clearly, the experimental result is something completely objective,
about which all observers agree. Thus the analysis is not permitted to
have any dependence on hypothetical observers.

3. Memory, records, etc. are permitted only if they are modelled as
quantum objects, too, and the properties assumed about them (such as
permanence or copyability) are derived from first principles, too.

4. Position, momentum, and time are required to be modelled explicitly;
apart from that, appropriate simplifications are permitted.
For example, it is ok to treat the photon as a scalar particle,
and to restrict to a single space dimension.

5. Unitary dynamics demands that the system photon+screen1+screen2,
characterized, say, by basis states of the form
|photon number, first screen count, second screen count>
(after tracing out all other degrees of freedom),
evolves from a pure initial state |1,0,0> into a superposition
of |0,1,0> and |0,0,1>, while agreement with experiment demands that
the final state is either |0,1,0> or |0,0,1>.
This disagreement is the measurement problem in its most basic form.
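Comment 5 can be checked numerically in the three-state toy space. The particular unitary below is an illustrative assumption (any unitary taking |1,0,0> to an equal-weight superposition would do): the final state is a fixed superposition of |0,1,0> and |0,0,1>, and equals neither experimental outcome.

```python
import math

# Basis (from the challenge): index 0 = |1,0,0>, 1 = |0,1,0>, 2 = |0,0,1>,
# i.e. |photon number, first screen count, second screen count>.
s = 1 / math.sqrt(2)
U = [[0.0, 1.0,  0.0],   # an orthogonal (unitary) matrix sending
     [s,   0.0,  s],     # |1,0,0> to (|0,1,0> + |0,0,1>)/sqrt(2)
     [s,   0.0, -s]]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

final = apply(U, [1.0, 0.0, 0.0])      # the unitary outcome

def dist(u, v):
    """Euclidean distance between two state vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# The unitary outcome differs from BOTH observed outcomes:
d1 = dist(final, [0.0, 1.0, 0.0])      # distance to "stopped at screen 1"
d2 = dist(final, [0.0, 0.0, 1.0])      # distance to "arrived at screen 2"
```

No choice of unitary can instead send |1,0,0> sometimes to |0,1,0> and sometimes to |0,0,1>: a unitary is a fixed linear map, so the same input always yields the same output.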


The web page http://www.mat.univie.ac.at/~neum/collapse.html
currently contains a copy of the challenge, and an analysis of the
challenge for the Copenhagen interpretation. I plan to update the
web page as more information about the challenge becomes available.


Arnold Neumaier

Arnold Neumaier

Jun 27, 2004, 6:58:01 PM
Thomas Dent wrote:
> Arnold Neumaier <Arnold....@univie.ac.at> wrote
>
>
>>>See http://arxiv.org/abs/quant-ph/0307229 for one approach, using
>>>theory of quantum information. They require that a state should be
>>>capable of leaving redundant (i.e. observable by many people) records
>>>of itself in the environment. Or rather, some states can do so and
>>>some cannot.
>>
>>Yes, but how is this requirement made possible in a unitary dynamics?
>>Giving the desiderata names does not construct them; one cannot
>>solve foundational problems by casting them into words which lack
>>foundations themselves.
>
> Well, you are very confident at judging long and complicated papers,
> if you have already decided that the words used in Zurek's papers
> "lack foundations". You think Zurek does not have a definite idea what
> he means by redundant records in the environment? He is just blowing
> smoke?

Not _just_ smoke. Decoherence is a real, measurable effect with much
impact on potential quantum computers. This is why it is an important
development.

But the smoke is when they claim it solves the measurement problem
or makes the collapse superfluous. Their smoke screen is the lack of
precision in their concepts. The devil lies in the details.

They cannot define all their concepts in terms of first principles.
They need handwaving at precisely the points where dissipation slips in
(and hence unitarity is lost).

That this is inevitable is clear from Wigner's old arguments - to
consider system+detector+environment (what characterizes the decoherence
approach) is nothing else than to consider in Wigner's fashion
system+observer+observer's_friend. Wigner's arguments are very clean,
short, logically immaculate, and without illegitimate assumptions.
See: Chapter II.2 in:
J.A. Wheeler and W. H. Zurek (eds.),
Quantum theory and measurement.
Princeton Univ. Press, Princeton 1983.
One cannot escape his conclusion that unrestricted unitarity,
unrestricted superposition, and effective collapse are inconsistent.
Here collapse is considered as an _observable_ phenomenon, entailing,
e.g., that a single photon prepared in a superposition of two beams
is counted in exactly one of two detectors placed at the end of the
two beams, and not found in a superposition of |left detector activated>
and |right detector activated>.

Adherents of decoherence in its foundational mission claim to deduce
this, but Wigner's analysis implies that it cannot be deduced. Hence
there must have been a sloppiness in their reasoning. And it can usually
be found out by looking for the place where unitarity is lost,
though different authors hide it in different places.

For example, in a brilliant paper gr-qc/9310032, Max Tegmark derives
decoherence by analyzing scattering of photons from a macroscopic
detector. Everything is neatly argued, and would give a correct
derivation of decoherence if discussed in a Copenhagen framework;
but Tegmark negates the latter. However, nothing follows when viewed
from a fundamental point of view since he argues on p.7 with input
from experiment, which is an instance of the collapse.

Assuming permanent records is another way of assuming the collapse,
since observing a record is impossible in a world where the collapse
does not happen (unless one adds a many worlds hypothesis, which indeed
is often done, with its own unresolved difficulties).

Treating part of the experimental setting as effectively classical
without saying exactly how is still another way of hiding the collapse.


In their moments of truth, the champions of decoherence admit what
they cannot do:

Zurek, quant-ph/0105127 (with perhaps the most affirmative view of
decoherence solving most fundamental problems) writes p.52 bottom:
''many of the remaining gaps in our understanding of quantum physics and
its relation to the classical domain - such as the definition of systems,
or the still mysterious details of the "collapse"''

Halliwell, gr-qc/9407040, p.3:
''The decoherent histories approach is not designed to answer the
question held by some to be the most important problem of quantum
measurement theory: why one particular history for the universe
"actually happens" whilst the other potential histories allowed by
quantum mechanics fade away.''

Zeh, quant-ph/9610014, p.8:
''How has the fundamental probability interpretation, used in Eq. (1),
then to be understood? And where precisely does it have to be applied
in view of the problem of where to position the Heisenberg cut?
The greatly differing opinions of physicists on this point are surprising,
and proof that the answer does not matter FAPP. Obviously, decoherence
cannot give an answer either, although it may help to recognize some
interpretations as being essentially motivated by prejudice or tradition.''

Zeh, quant-ph/9506020, p.24:
''Even real decoherence in the sense of above must be distinguished
from a genuine collapse, which is defined as the disappearance of all
but one components from reality (thus representing an irreversible law).''

http://www.decoherence.de/ (maintained by Joos) states at the bottom
of its entry page (Last change: March 30, 2004):
''Decoherence can not explain quantum probabilities without
(a) introducing a novel definition of observer systems in quantum
mechanical terms (this is usually done tacitly in classical terms),
and (b) postulating the required probability measure (according to the
Hilbert space norm).''
The catch lies in working 'tacitly in classical terms', which is not
permissible in foundations that claim (same page)
''There is but ONE basic framework for all physical theories:
quantum theory''


>>>This, like the former statement of Copenhagen, is content-free unless
>>>there is some method to find out from first principles which basis the
>>>"experimental setting" prefers.
>>
>>These first principles are called 'experimentation' and are part of every
>>good physics education.
>
> I meant, theoretical first principles. Otherwise, there is no way to
> compare theory with experiment. Copenhagen says "We will observe
> signals localized in position because this is what happened in the
> past with similar apparatus". It is not a theory, it is a fudge which
> for some reason happens to work and enables the (hopefully) real
> theory, the Schrodinger equation, to be compared with experiment.

No; you are doing injustice to the Copenhagen interpretation.
They explicitly demand a classical description of the experimental
setting, which entails saying what the instruments are supposed to do.

If one doesn't specify that, nothing is left to apply the theory to.
If you only have theoretical first principles, you can't do anything
since nothing connects the theory to experiment. Even a fully
quantum mechanical explanation of an experiment (should it be possible
in some interpretation) requires experimentation (or knowledge from
the past about the behavior of the instrument's constituents)
to be able to say that an instrument is indeed characterized by the
theoretical model used in the quantum description. Nothing else is needed
in the Copenhagen interpretation.


>>>It is just
>>>induction with no principle behind it. (As such, it works quite well!)
>>
>>Isn't this the case with all we know? One says that the world is governed
>>by QM, only because previous experiments have produced results in agreement
>>with it. If this is not a valid starting point then nothing is.
>
> No, this is not a good analogy, because QM is (we hope) a theory with
> an exact and transparent mathematical base, from which many quantities
> can be calculated and compared with experiment, given assumptions
> about the nature of quantum measurement.

And these assumptions say, for example, in a Stern-Gerlach experiment,
that the source indeed produces silver atoms, that the magnet indeed
provides a magnetic field of the assumed sort, that the screen indeed
is impenetrable for the beams, etc.. All this is needed as input to
interpret QM; and this knowledge is purely classical, outside the QM
treatment. QM without the experimental principles is powerless.

> Whereas Copenhagen cannot be
> compared with experiment, because it's nothing but the statement
> "experiments seem to produce this sort of result when you have this
> sort of apparatus".

Again, you are doing injustice to Copenhagen. Bohr wasn't such a poor
fellow as you want to have him appear. People successfully compared QM
in the Copenhagen interpretation with experiment, long before decoherence
even made its first appearance.


>>to deduce
>>from first principles that a screen made of quantum matter,
>>with two slits in it, actually has a preferred position basis and
>>projects the system to the part determined by the slits, etc.
>
> Finally, this is what I meant!

But it is not what unitarity + decoherence provide.
That's my whole point.


>>None of the current paradigms beyond Copenhagen (many worlds,
>>decoherence, consistent histories, Bohmian mechanics, etc.)
>>handles this challenge.

> This is just what we should be debating. (I agree that many-worlds
> doesn't answer the question, it equally has the preferred basis
> problem. It is just Copenhagen without throwing away orthogonal parts
> of the wavefunction.)
>
> Zurek claims he has a framework which could in principle tell us why a
> certain basis is preferred. Of course, he only works with toy models
> of qubits and suchlike. So one must look carefully to see if his
> framework could work.

Well, you are welcome to have a try. Note that two things are required by
my challenge, not only the preferred basis, but also the projection.
To be completely specific, I'll pose a slightly different, but more
revealing challenge, in a separate post with the subject

Collapse challenge for interpretations of QM.

Trying to model the situation in this challenge makes a good exercise
in disentangling fact from fiction in decoherence or any other
interpretation.


Arnold Neumaier

Arnold Neumaier

Jun 27, 2004, 6:58:33 PM
Oz wrote:
> Arnold Neumaier <Arnold....@univie.ac.at> writes
>
>>But it is generally believed - and assumed in _all_ discussions of
>>interference - that a double slit screen projects a particle with
>>incoming wave function psi with the correct Born probability to a
>>particle in a superposition of the two beams that pass the slits.
>>
>>The challenge is to derive this from a quantum model of the situation,
>>without invoking explicit collapse anywhere in the derivation.
>>Until this can be done convincingly, I don't consider the
>>measurement problem to be solved.
>
> I'm surprised you say this, since in a discussion some while ago you
> provided all the information required to do this. I doubt it could be
> calculated precisely, or anything near, but a plausible mechanism that
> could in theory be done numerically is to me quite evident.

The problems are in the details. On a handwaving level, the measurement
problem has been claimed solved many times, but on closer inspection
a well-concealed gap or a circular argument was always involved.


> Its the process of absorption that seems to cause a problem.
> That is, broadly, equivalent to the destruction of a photon.
> Further, that destruction should be irreversible, otherwise you are
> going to get scattering at best, or no effective interaction at worst.

Yes. Since Boltzmann's time, irreversibility and conservative evolution
clash since the latter has no distinguished direction of time.
This is usually handled by assuming special low entropy initial
conditions. But the quantum case has an additional collapse problem.


> I think that is enough of a description of a 'measurement' as is
> required. That is: the irreversible destruction of a photon.

Yes; this is a way to state my request in informal terms.

>
> Now we must examine how this can be achieved. For irreversibility we
> must have a system where the probability of the reverse reaction, that
> is the re-emission of the photon, is very low but the maintenance of the
> change of state of the detector is very high.
>
> There are as many examples of this as there are absorbers of photons.
> Probably the best studied would be photon absorbance by chlorophyll, and
> I suspect most, possibly all, absorbers operate on a similar concept.
> The chlorophyll molecule absorbs EM radiation and becomes increasingly
> excited. You may, if you wish, consider this as a superposition of
> excited and unexcited states, but at some point it becomes excited
> enough to transfer an electron in another part of the molecule.

It is here that, in your description, the collapse - namely an actual
transfer instead of a superposition of transfer and no-transfer -
happens, and this is the part that cannot be modelled correctly using
unitary QM without explicit collapse.

Informally, it looks so intuitive what happens that most people use
somewhere in their solution of the measurement problem an
argument of this sort to cover the missing formal details. The difficulty
is that this step has no formal basis in standard QM if one deletes
the collapse postulate.

[...]


> The issue is whether this superposition can last long enough
> for a 'complete photonsworth' of energy to be absorbed.

The superposition lasts forever. In a unitary framework one stays
always in a superposition of emitted and non-emitted, although in
thinking about it intuitively one is always tempted to smuggle in
a definite fact (emitted) at some point.

> If so then the
> probability of catching a photon becomes high which is that the change
> of state within the superposed molecule becomes probable enough for that
> electron to be emitted.

> So, really there is no 'collapse of the wavefunction'. It's just that we
> can never see any of the intermediate effects directly. A measurement is
> nothing more than an absorption that is thermodynamically irreversible.

The collapse is in this situation nothing less than an absorption
that is thermodynamically irreversible, and it is precisely this which
defies modeling by a unitary evolution of the universe.


Arnold Neumaier

Arnold Neumaier

Jun 28, 2004, 1:10:27 PM

Patrick Van Esch wrote:
> Arnold Neumaier <Arnold....@univie.ac.at> wrote in message news:<caqbbo$fon$1...@lfa222122.richmond.edu>...
>
>>If I'd write the book, I'd proceed just in the opposite way - first
>>introduce the classical view of QM by introducing coherent states
>>without spin.
>
> I'm sure it would be a good idea that you write a book !

Sooner or later, I indeed hope to find the time for it...


>>>You can say
>>>so, in the language of decoherence, that the very interaction with the
>>>non-uniform B field is the "act of measurement" where the effective
>>>projection on the up or down state takes place, and from there on, you
>>>can calculate a trajectory classically.
>>
>>This is simply wrong. The interaction with B is a completely unitary
>>process and hence not a measurement. B is just there to define the
>>Hamiltonian.
>
> I don't think that this is true. I think that the coupling to the
> B-field (or in all generality, the coupling of the magnetic moment of
> the Ag atom to the EM field) is "irreversible" in the sense of
> decoherence theory. Otherwise it should be possible to recombine the
> two split states and recover the original state, which I think is
> impossible.

No. Just place a second magnet completely symmetrically to the first,
but with a field in the opposite direction.
It will completely undo everything and recover the original state,
at least according to current wisdom. I have not seen this done for
Stern-Gerlach, but for beam-splitting experiments in quantum optics
this is routine.
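The undoing claim can be sketched with the standard 50/50 beam-splitter matrix from quantum optics (a minimal toy model of my own, not from the post): applying the inverse element recovers the input state exactly, as long as nothing irreversible happens in between.

```python
import numpy as np

# 50/50 beam splitter acting on the two path amplitudes (Hadamard form)
BS = np.array([[1.0, 1.0],
               [1.0, -1.0]]) / np.sqrt(2)

psi0 = np.array([1.0, 0.0])        # photon enters entirely in path 1
split = BS @ psi0                  # equal superposition of the two paths
restored = BS.conj().T @ split     # the inverse element undoes the split

print(np.allclose(restored, psi0))  # -> True: the original state is recovered
```

The evolution is unitary, hence invertible; this is exactly what routine recombination experiments exploit.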


> Look at the following gedanken experiment:
> a first SG system selects the x+ states. These x+ states are then
> analysed by a second SG system, splitting the beam into y+ and y-
> states. If the "measurement" didn't take place until a screen, it
> should be possible to recombine these two split beams into a new beam,
> which should be pure x+.

Yes, this is what QM predicts; the back reaction of the silver atom
on the magnetic field can be completely neglected. (Accounting for it
would still give something almost pure x+, since there is none of the
fast decoherence that is typical for contact with macroscopic matter.)
It is the screen where the first chance for irreversibility arises.


> I don't believe that. After recombining the
> beams, I'm sure you get a statistical mixture of x+ and x-.

Sure because of what? Belief is a poor argument in physics.
You are simply guessing, without any objective basis for your guess.
Did you learn this in your physics courses???

Maybe you can convince someone to perform the experiment!
If your guess is right, you'll become famous...


> I know
> this is NOT the case if we had photons, but, because of the magnetic
> moment of the Ag atom, there are probably "traces left" in the EM
> field of the passage of an Ag atom, which could maybe be detected with
> a squid or so, once the separation of the beams is macroscopic (say, 2
> cm).

Photons have spin 1, and hence a magnetic moment.
They can also be entangled at macroscopic distance.
In situations where mass and statistics do not make a qualitative
difference, there is very little difference in the qualitative behavior
of a photon and a silver atom. Even buckyballs then behave like photons...


Arnold Neumaier


Italo Vecchi

unread,
Jun 28, 2004, 1:10:35 PM6/28/04
to

One week after the first went missing I am posting this revised reply.

td...@auth.gr (Thomas Dent) wrote in message news:<cb504c2c.04061...@posting.google.com>...

> I have removed your sarcasm, which does not add to the content of the
> argument.
>

Indeed. Thanks for your polite reply.

>

> Thus, the statement that there can be no physical distinction between
> bases appears dubious. The problem can be restated: why do we commonly
> find systems in macroscopically localized states, rather than
> smeared-out superpositions?

We do not "find" them, we OBSERVE them. Actually systems such as
Maris' electrino bubbles have been detected in what you'd call
"smeared out" states (see [2]). It depends on what you measure and how
you measure/observe it.

...

>This is the old, incoherent Copenhagen view of attributing some
>mystical quality to "the observer" as if he or she were not a QM
>physical system like any other. If this satisfies you, then so be it.


If stating that our image of the world is shaped by the
energy/position measurements taking place between our retina and our
brain amounts to mysticism for you, so be it. I wonder whether
circular "proofs" make for better physics.

> I do not claim that this shows that the arguments of decoherence
> theorists are correct, no more than the fact that some decoherence
> theorists may have made some unjustified assumptions means they are
> all wrong. But it suggests that an attempt to relate the common
> occurrence and persistence of macroscopically localized states to
> their physical properties is not automatically doomed to fail, because
> localized quasi-classical states *do have* special physical
> properties.

It should be clear that in your argument you are implicitly referring
to a "local observer", i.e., someone bound to position measurement and
who scratches his head about EPR. Even in such a setting, however, I am
skeptical about the diagonalisation of the density matrix, which IMO
is unphysical and/or based on a misunderstanding of the relevant
formalism.

> If you are determined to dismiss decoherence, you can study W. Zurek's
> latest papers instead on "Envariance", which is his attempt to solve
> the question of the preferred basis and of the emergence of
> probability. He claims not to need decoherence any more.

If Zurek can do without decoherence I guess I can too.

>>[2]http://listserv.arizona.edu/cgi-bin/wa?A2=ind0003&L=quantum-mind&F=&S=&P=2566
>
> While it is amusing, I doubt that the story of the wine-taster has any
> direct relevance.

The Burgundy example is a joke, but it's related to a valid
counterexample to the predictability sieve, i.e. DNA computing ([1]).

Regards,

IV

[1] http://xxx.lanl.gov/abs/cond-mat/0012370
[2] http://www.lns.cornell.edu/spr/2000-05/msg0024503.html

Bob Day

unread,
Jun 29, 2004, 6:42:43 PM6/29/04
to

"Arnold Neumaier" <Arnold....@univie.ac.at> wrote in message
news:40D5BC1D...@univie.ac.at...

>
> A single photon is prepared in a superposition of two beams.
< snip >

Please provide a diagram that shows the setup and the beams.

-- Bob Day


Oz

unread,
Jun 29, 2004, 6:44:48 PM6/29/04
to
Arnold Neumaier <Arnold....@univie.ac.at> writes

>Oz wrote:
>>
>> I'm surprised you say this, since in a discussion some while ago you
>> provided all the information required to do this. I doubt it could be
>> calculated precisely, or anything near, but a plausible mechanism that
>> could in theory be done numerically is to me quite evident.
>
>The problems are in the details. On a handwaving level, the measurement
>problem has been claimed solved many times, but on closer inspection a
>well concealed gap or a circular argument was always involved.

By their very nature useful detectors are complex devices that cannot
yet be exactly described on the quantum level. Given that an exact
quantum description of a ground state helium atom is still at the
numerical stage (probably quite accurately done, I will admit), I am not
expecting a precise quantum description of carbon dioxide, let alone a
chlorophyll molecule.

So it seems to me that a certain amount of handwaving will be required
for some time. Indeed, as far as I can tell, 'collapse' is itself a
handwavy description at an even cruder level than the mechanism(s) I
am proposing.

So the question is NOT whether mechanism "A" describes "the destruction
of a photon", but whether it's a better, more complete or more plausible
description than 'collapse', which is no description at all. It's merely
a statement that brushes the mechanism under the carpet.

>> Its the process of absorption that seems to cause a problem.
>> That is, broadly, equivalent to the destruction of a photon.
>> Further, that destruction should be irreversible, otherwise you are
>> going to get scattering at best, or no effective interaction at worst.
>
>Yes. Since Boltzmann's time, irreversibility and conservative evolution
>clash since the latter has no distinguished direction of time.

Eh? I see no clash whatsoever. The gas laws derive from a completely
time-reversible, ballistic Newtonian mechanics and yet show definitive
irreversibility. Interestingly, it's often described using the word
'states': although totally reversible, we never see rare states, but
the system always drives itself inexorably to one of the myriad near-
identical, very highly probable states. That's why it is to all intents
and purposes irreversible. Shake a trayful of ball bearings and the
mechanics of this is completely reversible, but we do not expect to see
the ball bearings become still and the tray shake at a later time,
even if the system is completely frictionless.
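The point that exactly reversible dynamics can still relax to overwhelmingly probable macrostates is often illustrated with the Kac ring model (my choice of toy model; it is not mentioned in the post). Each step is strictly invertible, yet a special low-entropy coloring relaxes toward the generic 50/50 macrostate.

```python
import random

def step(balls, markers):
    # Each ball moves one site clockwise; crossing a marked edge flips its color.
    n = len(balls)
    return [balls[(i - 1) % n] ^ markers[(i - 1) % n] for i in range(n)]

def unstep(balls, markers):
    # Exact inverse of step: move counterclockwise, flipping on the same edges.
    n = len(balls)
    return [balls[(i + 1) % n] ^ markers[i] for i in range(n)]

random.seed(0)
n = 5000
markers = [random.random() < 0.1 for _ in range(n)]  # ~10% of edges marked
state = [0] * n                 # all white: a very special, low-entropy state

forward = state
for _ in range(200):
    forward = step(forward, markers)
# After 200 steps the coloring looks thoroughly mixed (about half black)...
print(sum(forward) / n)

# ...yet the dynamics is perfectly reversible: undoing 200 steps
# recovers the initial all-white state exactly.
back = forward
for _ in range(200):
    back = unstep(back, markers)
print(back == state)   # -> True
```

Reversibility is intact at every step; irreversibility "for all practical purposes" comes only from the overwhelming statistical weight of the mixed macrostates, which is exactly the point about the tray of ball bearings.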

I briefly discussed the spectra of chlorophyll, and showed (OK,
handwavy, but good) that the sharp peak of absorption mediated by a
small and precise area of the molecule becomes a broad band. The broad
band implies a wide range of states that the excited molecule will exist
in. That is, (within a short period) it will become a superposition of
all these states. Each of these states is coupled with vibrational
modes of the molecule and these are themselves coupled to the water
molecules the chlorophyll is in contact with. Now we have a
superposition of states that not only includes a specific part of the
cl. molecule but also the surrounding environment. So (and I'm sure I
have seen work describing the time-evolution of excited chlorophyll) the
number of states starts off as (effectively) a single excitation, but
rapidly time-evolves to a myriad of states (and ultimately heat) making
the system irreversible.

>This is usually handled by assuming special low entropy initial
>conditions. But the quantum case has an additional collapse problem.

I don't see collapse as a 'problem', merely a mechanism to avoid the
physically and mathematically indescribable process that a 'measurement'
or 'detection' *necessarily* requires. It's a common methodology in
physics to describe a bulk process, sweeping the details under the
carpet. Nobody would contemplate describing a collision of two steel
balls by considering the quantum mechanical properties of every atom in
each ball.

>> I think that is enough of a description of a 'measurement' as is
>> required. That is: the irreversible destruction of a photon.
>
>Yes; this is a way to state my request in informal terms.

You didn't expect me to do 'formal', did you?

>> There are as many examples of this as there are absorbers of photons.
>> Probably the best studied would be photon absorbence by chlorophyll, and
>> I suspect most, possibly all, absorbers operate on a similar concept.
>> The chlorophyll molecule absorbs EM radiation and becomes increasingly
>> excited. You may, if you wish, consider this as a superposition of
>> excited and unexcited states, but at some point it becomes excited
>> enough to transfer an electron in another part of the molecule.
>
>It is here where in your description the collapse - namely an actual
>transfer instead of a superposition of transfer and no-transfer -
>happens, and this is the part that cannot be modelled correctly using
>unitary QM without explicit collapse.

I am specifically NOT saying that the process is one of a superposition
of 'transfer' and 'no transfer'. The process is a chain of increasingly
involved superpositions within a complex molecule in a complex
environment where 10^10 (or whatever) terminate in electron transfer but
only 5 terminate in re-emission of the photon.

Although I have forgotten most of it, the examination of the absorption
of a photon by a chlorophyll molecule has been intensively investigated.
The time evolution and many of the key vibrational and excited modes are
well understood.

>Informally, it looks so intuitive what happens that most people use
>somewhere in their solution of the measurement problem an
>argument of this sort to cover the missing formal details. The difficulty
>is that this step has no formal basis in standard QM if one deletes
>the collapse postulate.

Fine, I am willing to have my handwavy argument destroyed.
Where have I postulated a specific collapse?

> > The issue is whether this superposition can last long enough
>> for a 'complete photonsworth' of energy to be absorbed.
>
>The superposition lasts forever. In a unitary framework one always
>stays in a superposition of emitted and non-emitted, although in
>thinking about it intuitively one is always tempted to smuggle in
>a definite fact (emitted) at some place.

I have no problem with superposition lasting 'forever'. At some point
the probability of an unlikely event can be neglected; it just drops
well below the noise in the system. Fortunately QM comes with its own
noise level, mediated by h. I have absolutely no problem with the idea
that a sugar molecule is in superposition with a photon emitted by the
sun and a *whole bunch* of other molecules and even with an (or rather a
myriad of) elementary particle 10^-27 secs after the big bang. That's a
theoretical viewpoint that it's perfectly possible to hold (and sometimes
I consider that just as a wonder of the universe). However none of this
has any detectable effect on the reality of what is going to happen in
the future since it is all so improbable it cannot be detected *even in
principle*. It is a straw man.

>> If so then the
>> probability of catching a photon becomes high, which is to say that the
>> change of state within the superposed molecule becomes probable enough
>> for that electron to be emitted.
>
>> So, really there is no 'collapse of the wavefunction'. It's just that we
>> can never see any of the intermediate effects directly. A measurement is
>> nothing more than an absorption that is thermodynamically irreversible.
>
>The collapse is in this situation nothing less than an absorption
>that is thermodynamically irreversible, and it is precisely this which
>defies modeling by a unitary evolution of the universe.

Quite. That's what I am saying - it doesn't defy it.
At least, I can't see why it should *in a real complex system*.

Arnold Neumaier

unread,
Jun 30, 2004, 6:40:15 PM6/30/04
to

It is difficult to do nicely in ascii; here is my attempt
(Needs typewriter font to come out unscrambled):

                         |    |
                        /|    |
                       / |    |
_ _ _|beam splitter|  /  |    |
                    \    |    |
                     \   |    |
                      \       |
          big hole     \      |
                        \     |
                         \    |
                         |\   |
              screen 1   | \  |
                         |  \ |
                         |   \|  screen 2
                         |    |

This is a very simple setting.


Arnold Neumaier

Arnold Neumaier

unread,
Jun 30, 2004, 6:40:27 PM6/30/04
to
Arkadiusz Jadczyk wrote:

> As [you] certainly know photons are not described by quantum mechanics
> at all.

I never heard of such a statement, but have a lot of evidence to
the contrary. For example, there is a book,

Molecular Quantum Electrodynamics:
An Introduction to Radiation-Molecule Interactions
by D. P. Craig, T. Thirunamachandran
Dover Publications 1998

that discusses interactions of photons with molecules without ever
mentioning quantum fields, purely in terms of standard quantum mechanics.

People in quantum optics do nothing else than QM.

Noninteracting photons are modeled in QM as particles with spin 1 and
Hamiltonian H=sqrt(p^2), and it is not difficult to write phenomenological
interactions with nonrelativistic matter that are in reasonable agreement
with experiment. Any such setting will suffice to discuss my challenge.


Photon nonlocality is a completely different issue.
QM does not require that the Hilbert space be realized by wave functions
in a position variable. In a momentum representation,
there are no problems at all; and this is sufficient to describe a photon
beam.

Moreover, I explicitly allowed spin to be ignored, and spinless particles
are localizable.

Arnold Neumaier

Charles Francis

unread,
Jun 30, 2004, 6:41:37 PM6/30/04
to
In message <0n40e0th646tdaou7...@4ax.com>, Arkadiusz
Jadczyk <arkREM...@ANDTHIScassiopaea.org> writes

>On Sun, 27 Jun 2004 22:57:50 +0000 (UTC), Arnold Neumaier
><Arnold....@univie.ac.at> wrote:
>
>>The challenge is to build from first principles and your preferred
>>interpretation a complete, observer-free, quantum model of this
>>experiment (photon, two screens, and an environment),
>>together with a formal analysis that completely explains the
>>experimental result.
>
>Arnold,
>As [you] certainly know photons are not described by quantum mechanics at all.
>They are not even described by relativistic quantum mechanics - as there
>are problems with interpreting photons localization.

I don't think one should go that far. What is true is that it is not
possible to put a photon into an eigenstate of position, and that the
photon wave function is not a probability amplitude for finding the
photon at a given position. Instead it is the amplitude for the position
(typically of an electron) where the photon will be absorbed.
>
>So, you are putting together two difficulties:
>
>a) collapse

Not really. Once one has reinterpreted the photon wave function like
this collapse is just the same as for any other wave function.

>b) relativistic quantum mechanics which, as we are told, is not quite
>consistent and needs to be replaced by QFT which, again, leads to
>divergences and all kind of problems.

And the mathematical problems which exist in QFT do not seem to have any
real bearing on the situations Arnold is considering. I think he has
chosen a very reasonable example.


--
Charles Francis

Patrick Van Esch

unread,
Jul 2, 2004, 5:31:32 AM7/2/04
to

Arnold Neumaier <Arnold....@univie.ac.at> wrote in message news:<40DF28DF...@univie.ac.at>...


> > Look at the following gedanken experiment:
> > a first SG system selects the x+ states. These x+ states are then
> > analysed by a second SG system, splitting the beam into y+ and y-
> > states. If the "measurement" didn't take place until a screen, it
> > should be possible to recombine these two split beams into a new beam,
> > which should be pure x+.
>
> Yes, this is what QM predicts; the back reaction of the silver atom
> on the magnetic field can be completely neglected. (Accounting for it
> would still give something almost pure x+, since there is none of the
> fast decoherence that is typical for contact with macroscopic matter.)
> It is the screen where the first chance for irreversibility arises.

Maybe you're right, but I was guessing that there is already enough
information sent out (that's the fast decoherence you're talking
about) by the magnetic dipole moment of the Ag atoms. I'm not talking
about the probably very difficult measurement of the change in current
in the main magnet, but once the x+ and the x- beams are separated
macroscopically, and they evolve through "field-free" space, at, say,
5 cm separation, I'd guess you could pick up their magnetic dipole -
at least in principle.

>
>
> > I don't believe that. After recombining the
> > beams, I'm sure you get a statistical mixture of x+ and x-.
>
> Sure because of what? Belief is a poor argument in physics.
> You are simply guessing, without any objective basis for your guess.
> Did you learn this in your physics courses???

I'm indeed guessing the orders of magnitude, but I would be tempted to
think that at least in principle it should be possible to measure the
presence of the magnetic dipole of a silver atom, by a SQUID or
another very sensitive detector of magnetic fields. This is what
makes this experiment fundamentally different from the analogue
experiment with photons, which do not leave such a trace.

How to estimate this? Would you agree with me that if, at a reasonable
distance, the integrated magnetic flux is comparable to a quantum of
flux, then a measurement is possible?

So let us see. We take the magnetic dipole of the silver atom to be
that of an electron, say a Bohr magneton (9.27 x 10^(-24) A m^2).
A dipole field, integrated over the upper half sphere at distance R,
gives an integrated flux of mu_0 M / (4 R). A quantum of flux is
given by 2.07 x 10^(-15) T m^2.

So in order to have a quantum of flux, one needs to be at a distance
of

1.26 x 10^(-6) Tm/A x 9.27 x 10^(-24) A m^2 / (4 x 2.07 x 10^(-15) T m^2) =
1.4 x 10^(-15) m

Hum, indeed, you'd need to place the magnetic field detector VERY
close to the Ag atom.
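The arithmetic can be checked numerically (a sketch of the estimate above, using the standard textbook values of the constants):

```python
import math

mu_0 = 4e-7 * math.pi   # vacuum permeability, T*m/A
M = 9.27e-24            # Bohr magneton, A*m^2 (dipole moment of the Ag atom)
Phi_0 = 2.07e-15        # magnetic flux quantum h/(2e), T*m^2

# Flux of a dipole integrated over a half sphere of radius R: Phi = mu_0*M/(4*R).
# Setting Phi = Phi_0 and solving for R gives the distance at which one
# flux quantum threads the detector:
R = mu_0 * M / (4 * Phi_0)
print(f"R = {R:.2e} m")   # on the order of 1.4e-15 m, i.e. femtometres
```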


So you're probably right and I was wrong: the magnetic dipole of an Ag
atom doesn't seem to couple macroscopically... I have to say that I'm
surprised!

cheers,
patrick.

Arnold Neumaier

unread,
Jul 2, 2004, 5:32:39 AM7/2/04
to

Arkadiusz Jadczyk wrote:
> On Sun, 27 Jun 2004 22:57:50 +0000 (UTC), Arnold Neumaier
> <Arnold....@univie.ac.at> wrote:
>
>>The challenge is to build from first principles and your preferred
>>interpretation a complete, observer-free, quantum model of this
>>experiment (photon, two screens, and an environment),
>>together with a formal analysis that completely explains the
>>experimental result.
>
> If you are simply interested in collapse, you do not need photons.
> But then your question is answered by EEQT - it provides an observer-free
> model, with a well defined algorithm for detecting and for collapses
> (even repeated).

Could you please write down the relevant details (html or pdf) so that
I can put it on my collapse web page? There I intend to collect relevant
information as it comes up.


Arnold Neumaier

Arkadiusz Jadczyk

unread,
Jul 2, 2004, 5:34:34 AM7/2/04
to

On Wed, 30 Jun 2004 22:40:27 +0000 (UTC), Arnold Neumaier
<Arnold....@univie.ac.at> wrote:

>Arkadiusz Jadczyk wrote:
>
>> As [you] certainly know photons are not described by quantum mechanics
>> at all.
>
>I never heard of such a statement, but have a lot of evidence to
>the contrary.


Perhaps you have a different definition of a photon and a different
definition of quantum mechanics.

My definition is: photon states are described by irreducible, massless,
helicity one, representations of the Poincare group.

Due to problems with localization (which position variables to use), we
have problems with the probability current, and we do not know how to
describe local interactions with matter, nor how to describe localized
beams and localized detections.

Of course we can cheat (everybody does it) and replace the problem at
hand by another problem (a phenomenological or FAPP description, as is
done, for example, in quantum optics), but then what you describe are not
photons, they are other creatures that may have certain features
resembling photons.

To discuss fundamental problems of quantum mechanics we need to be
precise and try not to cheat the same way textbooks on phenomenology do.

Or so I think,

ark
--

Arkadiusz Jadczyk
http://www.cassiopaea.org/quantum_future/homepage.htm


Arnold Neumaier

unread,
Jul 2, 2004, 11:54:37 AM7/2/04
to

Arkadiusz Jadczyk wrote:
> On Wed, 30 Jun 2004 22:40:27 +0000 (UTC), Arnold Neumaier
> <Arnold....@univie.ac.at> wrote:
>
> Perhaps you have a different definition of a photon and a different
> definition of quantum mechanics.

No. Only a different view of what physicists do when they analyze
experiments involving photons.


> My definition is: photon states are described by irreducible, massless,
> helicity one, representations of the Poincare group.

Yes, and this makes it an object described by a Fock space and the
Hamiltonian (after selecting a time coordinate) H=|p| (p the 3-momentum).
The one-particle sector of the Fock space is isomorphic to the Hilbert
space of functions psi from R^3 to C^3 with
   p dot psi(p) = 0 for all p
and standard inner product.


> Due to problems with localization (which position variables to use), we
> have problems with probability current and you do not know how to
> describe local interactions with matter. You do not know how to describe
> localized beams and localized detections.

It is well-known how to describe real beams; see, e.g., the quantum
optics book by Mandel and Wolf. One does not need pointlike idealizations
to do that. The difficulties with locality only show that the notion of
pointlikeness is a fiction.


> Of course we can cheat (everybody does it)

There is no cheating at all in the description of a single photon beam
the way Mandel and Wolf do it. It conforms to all fundamental principles
of QM. And interactions in QM have always been phenomenological, but
are nevertheless the basis of all we know about the foundations.
The phenomenological interactions employed by practitioners are fully
adequate to address my challenge.


> To discuss fundamental problems of quantum mechanics we need to be
> precise and try not to cheat the same way textbooks on phenomenology do.

If this were necessary, one could not do any foundational work without
starting with the standard model (or even lower)...


> If you are simply interested in collapse, you do not need photons.
> But then your question is answered by EEQT - it provides an observer-free
> model, with a well defined algorithm for detecting and for collapses
> (even repeated).

How does EEQT model the interaction of a scalar particle with a
screen made of particles?


Arnold Neumaier


Thomas Dent

unread,
Jul 9, 2004, 4:49:18 AM7/9/04
to

Arnold Neumaier <Arnold....@univie.ac.at> wrote

> They cannot define all their concepts in terms of first principles.
> They need handwaving at precisely the points where dissipation slips in
> (and hence unitarity is lost).

As far as I can see there are some papers which do *not* require
anything beyond unitarity. Until you have read the papers, you should
not prejudge whether or not the loss of unitarity is assumed.


> That this is inevitable is clear from Wigner's old arguments - to
> consider system+detector+environment (what characterizes the decoherence
> approach) is nothing else than to consider in Wigner's fashion
> system+observer+observer's_friend. Wigner's arguments are very clean,
> short, logically immaculate, and without illegitimate assumptions.
> See: Chapter II.2 in:
> J.A. Wheeler and W. H. Zurek (eds.),
> Quantum theory and measurement.
> Princeton Univ. Press, Princeton 1983,

Unfortunately I do not have the luxury of easy access to this book. Is
there a clean, short and logically immaculate summary of Wigner's
argument available on the web? You write such long messages that it
could not be too much to ask for a reproduction of this argument.


> One cannot escape his conclusions that unrestricted unitarity,
> unrestricted superposition, and effective collapse are inconsistent.
> Here collapse is considered as an _observable_ phenomenon, entailing,
> e.g., that a single photon prepared in a superposition of two beams
> is counted in exactly one of two detectors placed at the end of the
> two beams, and not found in a superposition of |left detector activated>
> and |right detector activated>.

Ah, but this is not what Copenhagen means by collapse. What Wigner
means by effective collapse is precisely the question of a preferred
basis, or alternatively macro-objectification. Still, I would be
interested to read the argument that leads to this conclusion.

You go on to quote at length in a rather polemical fashion some rather
old papers of the decoherence school. Of course, what Zurek is doing
nowadays is rather different, so your polemic may not apply to him in
the same way.

Anyway, I can't proceed further until I know what Wigner's argument
was.

Thomas

Thomas Dent

unread,
Jul 9, 2004, 4:49:19 AM7/9/04
to

vec...@weirdtech.com (Italo Vecchi) wrote

> td...@auth.gr (Thomas Dent) wrote

> > Thus, the statement that there can be no physical distinction between
> > bases appears dubious. The problem can be restated: why do we commonly
> > find systems in macroscopically localized states, rather than
> > smeared-out superpositions?
>
> We do not "find" them, we OBSERVE them. Actually systems such as
> Maris' electrino bubbles have been detected in what you'd call
> "smeared out" states (see [2]). It depends on what you measure and how
> you measure/observe it.

So are you making distinctions between "finding", "observing" and
"detecting"? I quite agree that what one finds should depend on how
the detection/observation/looking is done.


> >This is the old, incoherent Copenhagen view of attributing some
> >mystical quality to "the observer" as if he or she were not a QM
> >physical system like any other. If this satisfies you, then so be it.
>
> If stating that our image of the world is shaped by the
> energy/position measurements taking place between our retina and our
> brain amounts to mysticism for you, so be it. I wonder whether
> circular "proofs" make for better physics.

I don't know what is being referred to as a 'circular "proof"' here.
But the statement that energy/position measurements are taking place
between the retina and brain is something which appears arbitrary and
difficult to understand, hence quite deserving of being called
mysticism. At least, it is not an explanation of anything from
physical principles. Perhaps I will not call Copenhagen "incoherent",
but only "empirical", since it seems to work without anyone having
much of an idea why.

Is the area between the retina and the brain the *only* place in the
Universe which has the privilege of being able to carry out
energy/position measurements (i.e. to collapse superpositions in
particular bases)? If so, what special physical qualities enable it to
do so? If you or I are asleep, does the area between retina and brain
continue to perform energy/position measurements, i.e. to collapse
wavefunctions in a particular basis? And what determines whether the
collapse is in the energy or position basis, or in some other basis?

Or perhaps I am not understanding the thrust of your statement.
Perhaps you mean that the result of an experiment, or of the
Schrodinger-cat situation, depends essentially on what is doing the
observing. And since in the real world, a person is doing the
observing (in conjunction with various bits of apparatus), we should
take into account the physics of people when thinking of experimental
observations.

But this is different from Copenhagen, which simply imposes a cutoff
beyond which classicality is declared to have been recovered. Perhaps
you mean that classicality is recovered in a particular part of the
brain. But I don't see any explanation of how, physically, this might
occur. How do you see the question of Wigner's friend? Can the brain
be in a superposition of states?


> > If you are determined to dismiss decoherence, you can study W. Zurek's
> > latest papers instead on "Envariance", which is his attempt to solve
> > the question of the preferred basis and of the emergence of
> > probability. He claims not to need decoherence any more.
>
> If Zurek can do without decoherence I guess I can too.


Ah, but that does not leave you agreeing with Zurek. He would say that
the measurement has been effectively fixed long before any humans with
their retinas and brains walk onto the scene - i.e. the experimental
apparatus, etc., determines what states can be observed with
non-negligible probabilities. He also (I think) would say that nothing
beyond unitary evolution is absolutely required.

Italo Vecchi

unread,
Jul 11, 2004, 3:58:03 AM7/11/04
to

td...@auth.gr (Thomas Dent) wrote in message news:<cb504c2c.04070...@posting.google.com>...
...


> >
> > If stating that our image of the world is shaped by the
> > energy/position measurements taking place between our retina and our
> > brain amounts to mysticism for you, so be it. I wonder whether
> > circular "proofs" make for better physics.
>
> I don't know what is being referred to as a 'circular "proof"' here.

I have already expressed my opinion about the soundness of decoherence
arguments. Unitary evolution won't pick a basis in a Hilbert space any
more than, say, incompressible flow picks a basis in Cartesian space.
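One way to make this concrete (my sketch, not from the post): unitary evolution of a density matrix preserves its spectrum, so every basis-independent property is untouched, and nothing in the unitary dynamics by itself singles out a preferred basis.

```python
import numpy as np

rng = np.random.default_rng(42)

# A random density matrix: Hermitian, positive, unit trace.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# A random unitary (Q factor of a complex QR decomposition).
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
rho_evolved = Q @ rho @ Q.conj().T

# The spectrum -- and with it every basis-independent quantity -- is unchanged.
print(np.allclose(np.linalg.eigvalsh(rho), np.linalg.eigvalsh(rho_evolved)))
```

Picking out a basis requires additional input (an interaction, an environment, a choice of what to measure); the unitary map alone only relabels bases.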

> But the statement that energy/position measurements are taking place
> between the retina and brain is something which appears arbitrary and
> difficult to understand, hence quite deserving of being called
> mysticism.

You may call it what you like. Indeed I prefer mysticism to flawed math.

...

>
> Or perhaps I am not understanding the thrust of your statement.
> Perhaps you mean that the result of an experiment, or of the
> Schrodinger-cat situation, depends essentially on what is doing the
> observing. And since in the real world, a person is doing the
> observing (in conjunction with various bits of apparatus), we should
> take into account the physics of people when thinking of experimental
> observations.
>
> But this is different from Copenhagen, which simply imposes a cutoff
> beyond which classicality is declared to have been recovered. Perhaps
> you mean that classicality is recovered in a particular part of the
> brain.

Not really. You can draw the "Heisenberg cut" all the way back to the
observer. The idea goes back to von Neumann [1].
Copenhagen does not need the illusory notion of a "classical world":
just a basis along which information is extracted.

> But I don't see any explanation of how, physically, this might
> occur.

Any such "explanation" would be ultimately circular. Physics is not
about explaining stuff, but about predicting perceptions/measurement
outcomes.
Our sensory apparatus is the ultimate barrier behind which the notion
of observer, on which the scientific paradigm is implicitly based,
breaks down.
Without some form of verification based on intersubjective agreement
there can be no scientific discourse. We can talk about
"interdependent arising" ("pratitya samutpada") but that's not
science, at least not in its current form and (almost) certainly not
in the moderator's highly relevant opinion.



> > > If you are determined to dismiss decoherence, you can study W. Zurek's
> > > latest papers instead on "Envariance", which is his attempt to solve
> > > the question of the preferred basis and of the emergence of
> > > probability. He claims not to need decoherence any more.
> >
> > If Zurek can do without decoherence I guess I can too.
>
>
> Ah, but that does not leave you agreeing with Zurek.

Right.

> He would say that
> the measurement has been effectively fixed long before any humans with
> their retinas and brains walk onto the scene - i.e. the experimental
> apparatus, etc., determines what states can be observed with
> non-negligible probabilities. He also (I think) would say that nothing
> beyond unitary evolution is absolutely required.

Sure. That's what he says.

IV

p.ki...@imperial.ac.uk

unread,
Jul 12, 2004, 2:00:43 PM7/12/04
to
Arnold Neumaier <Arnold....@univie.ac.at> wrote:
> A collapse challenge

> A single photon is prepared in a superposition of two beams.
> A photosensitive screen blocks one of the two beams but has a big hole
> where the other beam can pass without significant interference.
> At twice the distance of the first screen, a second photosensitive
> screen without hole is placed.

> The experimental observation is that the photon is observed at exactly
> one of the two screens, at the position where the corresponding beam
> ends.

> The challenge is to build from first principles and your preferred
> interpretation a complete, observer-free, quantum model of this
> experiment (photon, two screens, and an environment),
> together with a formal analysis that completely explains the
> experimental result.

Stop right there. You want a screen, and force me to describe
it quantum mechanically, but such a thing will be either:

(1) too complex to model without approximations you would likely
try to disqualify me from using, or

(2) not behave like any screen-like apparatus you would likely
accept as being the sort of screen you intended.

If I were to use nice simple quantum objects like two-level atoms (TLA)
as "screens", the system would go into a state involving superpositions
of all three parts (photon, screen1, screen2), but that isn't the
"experimental result" you claim, so you'd disallow it -- even though I
might well have accurately calculated the experimental outcome for a
photon-TLA-TLA version of your setup.
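For concreteness, a photon-TLA-TLA calculation of this kind can be sketched in the single-excitation subspace. The equal coupling strength g and the Taylor-series propagator below are my own illustrative assumptions, not part of the original challenge; the point is only that unitary evolution ends in a coherent superposition of "screen 1 fired" and "screen 2 fired":

```python
import math

# Single-excitation toy model: basis states |1,0,0> (photon present),
# |0,1,0> (screen 1 excited), |0,0,1> (screen 2 excited).  Each
# "screen" is a two-level atom coupled to the photon with equal
# strength g (an illustrative assumption, not a model of a real screen).
g = 1.0
H = [[0, g, g],
     [g, 0, 0],
     [g, 0, 0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def evolve(v, t, terms=40):
    """Apply exp(-i H t) to v via a Taylor series (fine for this tiny H)."""
    out = list(v)
    term = list(v)
    for k in range(1, terms):
        term = [(-1j * t / k) * x for x in matvec(H, term)]
        out = [a + b for a, b in zip(out, term)]
    return out

# Start with the photon present and both screens unexcited: |1,0,0>.
# Evolve until the photon is fully absorbed (t = pi / (2*sqrt(2)*g)).
t = math.pi / (2 * math.sqrt(2) * g)
psi = evolve([1, 0, 0], t)
probs = [round(abs(a) ** 2, 3) for a in psi]
print(probs)  # -> [0.0, 0.5, 0.5]
```

The final state is the coherent superposition (|0,1,0> + |0,0,1>) up to phases, with probability 1/2 for each screen -- never "either one screen or the other", which is exactly the outcome the challenge text disallows.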

I could try to put a gas of TLA's for each screen, but I'd struggle to
get any sort of intelligible result; I'd just get an even more smeared
out and featureless version of the photon-TLA-TLA superposition. It
would be even worse than assuming each screen was a zero-temperature
reservoir-like collection of modes coupled to our incoming photon, but
omitting the trace over that reservoir (and associated approximations)
to stay "quantum enough" for your conditions.
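To illustrate what such a trace does in the simplest possible case, here is a toy dephasing sketch (the environment states and the overlap parameter theta are my own illustrative assumptions): tracing out the environment kills the off-diagonal coherence of the system's density matrix while leaving the populations, and hence the "either/or" question, untouched.

```python
import math

def reduced_rho(n, theta):
    """System qubit starts in (|0> + |1>)/sqrt(2).  Each of n environment
    qubits is driven into |e0> or |e1> depending on the system state,
    with single-qubit overlap <e0|e1> = cos(theta).  Tracing out the
    environment multiplies the off-diagonal element of the system's
    2x2 density matrix by the total overlap cos(theta)**n."""
    c = math.cos(theta) ** n  # total environment overlap
    return [[0.5, 0.5 * c],
            [0.5 * c, 0.5]]

for n in (0, 10, 100, 1000):
    print(n, round(reduced_rho(n, 0.1)[0][1], 4))
# The populations (diagonal) never change; only the coherence decays.
# Nothing here has "collapsed" -- the joint state is still one big
# superposition -- which is the gap the challenge is asking about.
```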

It would be easy to get a real, useful result which explained your
"experiment", but I'd have to do the trace and approximations that you
won't let me do. Your restriction is particularly perverse in that I
would only do them in order to get an answer. I'm not interested in
sticking collapse-like features into my models because I think that's
what they should have -- but collapse-like features might happen to
turn up as a side effect of some approximations.

> 5. Unitary dynamics demands that the system photon+screen1+screen2,
> characterized, say, by basis states of the form
> |photon number, first screen count, second screen count>
> (after tracing out all other degrees of freedom),
> evolves from a pure initial state |1,0,0> into a superposition
> of |0,1,0> and |0,0,1>, while agreement with experiment demands that
> the final state is either |0,1,0> or |0,0,1>.
> This disagreement is the measurement problem in its most basic form.

The only measurement problem of any utility is: does my model agree
with experiment and enable me to make useful predictions? It is rare
that insisting that a model follow unitary dynamics is very
instructive. Especially when it is clearly pointlessly restrictive
-- e.g. like, er, a photosensitive screen in a photon experiment.


Your challenge might be a clever trick to make people think -- but it
is set up so as to forbid people from using the perfectly sensible, well
known tools that would enable them to get a physically satisfactory
answer.

--
---------------------------------+---------------------------------
Dr. Paul Kinsler
Blackett Laboratory (QOLS) (ph) +44-20-759-47520 (fax) 47714
Imperial College London, Dr.Paul...@physics.org
SW7 2BW, United Kingdom. http://www.qols.ph.ic.ac.uk/~kinsle/

gptejms

unread,
Jul 13, 2004, 3:37:21 AM7/13/04
to

> Arnold Neumaier

Maybe I'm missing your point, but how is this experiment qualitatively
different from the usual one, i.e.: a half-silvered mirror is placed at
45 degrees to the direction of an incident photon. There
is a probability half of the photon being transmitted or
reflected. So it's in a superposition of two states until the photon
interacts with one of the two screens and the wavefunction collapses.
I'm not commenting on the validity of the Copenhagen interpretation or
otherwise---my question is 'how's your experimental set up
qualitatively different from this set up'.

Jagmeet

------------------------------------------------------------------------
This post submitted through the LaTeX-enabled physicsforums.com
To view this post with LaTeX images:
http://www.physicsforums.com/showthread.php?t=32831#post254047

Thomas Dent

unread,
Jul 13, 2004, 10:43:56 AM7/13/04
to

vec...@weirdtech.com (Italo Vecchi) wrote

> > I don't know what is being referred to as a 'circular "proof"' here.
>
> I have already expressed my opinion about the soundness of decoherence
> arguments.

I'm not interested in your opinion (for the 5th time) on decoherence,
I'm interested in what (if any) definite thing you have to say about
the preferred basis problem, and what sort of position you have, which
is far from clear to me.


> > But this is different from Copenhagen, which simply imposes a cutoff
> > beyond which classicality is declared to have been recovered. Perhaps
> > you mean that classicality is recovered in a particular part of the
> > brain.
>
> Not really. You can draw the "Heisenberg cut" all the way back to the
> observer, The idea goes back to Von Neumann [1].

I don't see any reference [1], and I'm not familiar with the
"Heisenberg cut". And why can't you draw the "Heisenberg cut",
whatever it is, past the observer?


> > But I don't see any explanation of how, physically, this (basis selection)
> > might occur.
>
> Any such "explanation" would be ultimately circular. Physics is not
> about explaining stuff, but about predicting perceptions/measurement
> outcomes.

This is a highly debatable philosophy - the basis is chosen somehow by
a mechanism which can never be properly explained or elucidated? You
are saying that there *cannot* be any meaningful reason why one basis
is chosen, and we should just accept the empirical fact that certain
configurations of observers and apparatus pick out certain bases, and
"shut up and calculate"? In other words, wavefunctions collapse in
particular bases because they do?

Why not go back to Kepler and say, planets travel in elliptical orbits
due to an unknown reason which can never be elucidated, and we should
shut up and calculate?


> Our sensory apparatus is the ultimate barrier behind which the notion
> of observer, on which the scientific paradigm is implicitly based,
> breaks down.
> Without some form of verification based on intersubjective agreement
> there can be no scientific discourse.

I don't see a barrier, in principle, to studying our sensory apparatus
(etc.) as a quantum system. One could even study the quantum systems
of an experimental apparatus and *two* observers to see if they had
"intersubjective agreement". Whatever this sensory apparatus is, I
suppose the laws of physics apply to it.

Italo Vecchi

unread,
Jul 15, 2004, 4:57:22 AM7/15/04
to

td...@auth.gr (Thomas Dent) wrote in message news:<cb504c2c.04071...@posting.google.com>...


> vec...@weirdtech.com (Italo Vecchi) wrote
>
> > > I don't know what is being referred to as a 'circular "proof"' here.
> >
> > I have already expressed my opinion about the soundness of decoherence
> > arguments.
>
> I'm not interested in your opinion (for the 5th time) on decoherence,
> I'm interested in what (if any) definite thing you have to say about
> the preferred basis problem, and what sort of position you have, which
> is far from clear to me.
>

As I have said before, "all the proofs of decoherence theory I have
inspected rely on arbitrary cut-offs or unphysical assumptions such
as the no-recoil assumption or on misunderstanding entanglement".
This applies also to the paper you kindly indicated [2].

Decoherence theory circulates around statements such as this:
"... microscopic systems tend to decohere into energy eigenstates, since
they interact with their environment mainly through their decay
products" ([3]).
In my opinion the above statement by Zeh may be regarded as either
meaningless or false (it can't be both). That's the stuff decoherence
theory is made of.

As for the current "experimental" confirmations of decoherence theory,
they boil down to the fact that random phase perturbations, as may be
induced by the environment, fudge interference patterns. If you insert
a random phase changer in one arm of a Mach-Zehnder interferometer it
will destroy any interference effect for an observer who has access
only to the detectors. It will look as if the photons are being fired
up one arm or the other. That's basically all there is to decoherence
theory, beside bull.

The error at the core of decoherence arguments is actually (slightly)
older than QM ([4]).

...

> I don't see any reference [1], and I'm not familiar with the
> "Heisenberg cut". And why can't you draw the "Heisenberg cut",
> whatever it is, past the observer?

Because then nothing is left. Zeh refers to the "Heisenberg cut" in
[3], which btw is a nice paper.

> > > But I don't see any explanation of how, physically, this (basis selection)
> > > might occur.
> >
> > Any such "explanation" would be ultimately circular. Physics is not
> > about explaining stuff, but about predicting perceptions/measurement
> > outcomes.
>
> This is a highly debatable philosophy - the basis is chosen somehow by
> a mechanism which can never be properly explained or elucidated?

"Explained" is a nice word, makes me think of children nodding.

>You
> are saying that there *cannot* be any meaningful reason why one basis
> is chosen, and we should just accept the empirical fact that certain
> configurations of observers and apparatus pick out certain bases, and
> "shut up and calculate"? In other words, wavefunctions collapse in
> particular bases because they do?
>

We'll certainly learn more about this. I think one can tilt the basis,
but it's not straightforward (see [5]).

> Why not go back to Kepler and say, planets travel in elliptical orbits
> due to an unknown reason which can never be elucidated, and we should
> shut up and calculate?
>

When the "Principia" came out, Galileo's followers (the master was
long dead) yelled that Newton's action-at-distance smelled of alchemy.
Newton was indeed a keen alchemist. He replied "Hypotheses non fingo".

I am a great admirer of Kepler.

> > Our sensory apparatus is the ultimate barrier behind which the
notion
> > of observer, on which the scientific paradigm is implicitly based,
> > breaks down,
> > Without some form of verification based on intersubjective agreement
> > there can be no scientific discourse.
>
> I don't see a barrier, in principle, to studying our sensory apparatus
> (etc.) as a quantum system. One could even study the quantum systems
> of an experimental apparatus and *two* observers to see if they had
> "intersubjective agreement". Whatever this sensory apparatus is, I
> suppose the laws of physics apply to it.

I have discussed this issue in another post nearby [6].

Cheers,

IV

[1] J. von Neumann, Mathematical Foundations of Quantum Mechanics
[2] http://www.arxiv.org/abs/quant-ph/0011123
[3] H. D. Zeh, Physics Letters A 172 (1993) 189-192
[4] "Planck followed Hertz in representing a physical resonator by
means of boundary conditions on a surface exterior to it. Under those
circumstances, reversibility was a characteristic to be investigated,
not one to be discovered at a glance. ... When a small term
representing [radiation damping] was introduced ... the equation ceased
to be reversible in time ... His treatment involved approximations,
of course, and their elimination would have restored reversibility,
not to his resonator equation which is still in use, but to the
equations for resonator plus field. But Planck was not to know that
for a number of years, by which time it was inconsequential. He had
long since abandoned hope that irreversible processes might be
generated by conservative effects alone."
T. Kuhn, "Black-Body Theory and the Quantum Discontinuity, 1894-1912",
Ch. 1, "Planck's Route to the Black-Body Problem"
[5] http://olympus.het.brown.edu/pipermail/spr/Week-of-Mon-20040628/021940.html
[6] http://physicsforums.com/showpost.php?p=235981&postcount=23

Arnold Neumaier

unread,
Jul 29, 2004, 1:03:26 PM7/29/04
to

p.ki...@imperial.ac.uk wrote:
> Arnold Neumaier <Arnold....@univie.ac.at> wrote:
>
>> A collapse challenge
>
>>A single photon is prepared in a superposition of two beams.
>>A photosensitive screen blocks one of the two beams but has a big hole
>>where the other beam can pass without significant interference.
>>At twice the distance of the first screen, a second photosensitive
>>screen without hole is placed.
>
>>The experimental observation is that the photon is observed at exactly
>>one of the two screens, at the position where the corresponding beam
>>ends.
>
>>The challenge is to build from first principles and your preferred
>>interpretation a complete, observer-free, quantum model of this
>>experiment (photon, two screens, and an environment),
>>together with a formal analysis that completely explains the
>>experimental result.
>
> Stop right there. You want a screen, and force me to describe
> it quantum mechanically, but such a thing will be either:
>
> (1) too complex to model without approximations you would likely
> try to disqualify me from using, or

No. I accept meaningful approximations of all sorts; the only condition
is that they do not already smuggle in a collapse at some place.


> (2) not behave like any screen-like apparatus you would likely
> accept as being the sort of screen you intended.
>
> If I were to use nice simple quantum objects like two-level atoms (TLA)
> as "screens", the system would go into a state involving superpositions
> all three parts (photon, screen1, screen2), but that isn't the
> "experimental result" you claim, so you'd disallow it -- even though I
> might well have accurately calculated the experimental outcome for a
> photon-TLA-TLA version of your setup.

One can calculate probabilities by calculating interactions with
a single electron in the screen, which is fine (and explains everything
if the collapse is assumed). But it does not help to solve the collapse
problem itself. Calculating S-matrix elements only means that one then
knows the superposition into which a state develops;
but the challenge is about how this superposition of the possible
outcomes with their associated probabilities collapses into one of the
observed states. Clearly the collapse is a thermodynamic phenomenon
which requires a multibody setting in which dissipation is possible.


> It would be easy to get a real, useful result which explained your
> "experiment", but I'd have to do the trace and approximations that you
> won't let me do.

I do not forbid approximations, such as those common in statistical
mechanics, that simplify the mathematics to get tractable results.
But I'll analyse them to check whether they can be justified only by
assuming already the collapse somewhere. (The latter is the case, e.g.,
for Tegmark's arguments that decoherence solves the measurement problem.)


> I'm not interested in
> sticking collapse-like features into my models because I think that's
> what they should have -- but collapse-like features might happen to
> turn up as a side effect of some approximations.

In that case, analyzing the answer you give will clarify these side
effects, and illuminate the problem.


>>5. Unitary dynamics demands that the system photon+screen1+screen2,
>>characterized, say, by basis states of the form
>> |photon number, first screen count, second screen count>
>>(after tracing out all other degrees of freedom),
>>evolves from a pure initial state |1,0,0> into a superposition
>>of |0,1,0> and |0,0,1>, while agreement with experiment demands that
>>the final state is either |0,1,0> or |0,0,1>.
>>This disagreement is the measurement problem in its most basic form.

> The only measurement problem of any utility is: does my model agree
> with experiment and enable me to make useful predictions?

In this sense there has never been a measurement problem in QM.
Why is it then still a hotly debated issue?


> It is rare
> that insisting that a model follow unitary dynamics is very
> instructive. Especially when it is clearly pointlessly restrictive
> -- e.g. like, er, a photosensitive screen in a photon experiment.


>
> Your challenge might be a clever trick to make people think -- but it
> is set up so as forbid people from using the perfectly sensible, well
> known tools that would enable them to get a physically satisfactory
> answer.

No. I really want to know the answer. I spent a lot of time reading and
analyzing most of the literature on the foundations of QM - none of
the answers given were fully convincing, but the questions that need
to be answered became more and more clear. I think my challenge
focuses on the unsettled issues as well as it can.

I have no problem at all with the minimal interpretation that suffices
to describe highly repeatable stochastic events that can be modelled by
ensembles. But I have problems reconciling the fact that in our
(unique) universe, putatively described by a quantum state, certain
things actually happen objectively, with the intrinsically undetermined
outcome in the statistical interpretation.

Worse, I have problems understanding what an ensemble should be in
many cases of practical interest, e.g., in ion traps, where the
traditional view of ensembles as independent realizations of a process
is clearly inadequate. These experiments (as well as the QM of the
whole universe) require that the state is a property of the individual
system, not of an ensemble, and then the collapse can no longer be viewed
as taking conditional expectation.

The challenge is my attempt to isolate the essential features of the
collapse in a simple, frequently occurring setting.


(I added some of the above explanations about permitted approximations
on my challenge web site http://www.mat.univie.ac.at/~neum/collapse.html
to make the conditions more clear.)


Arnold Neumaier

Arnold Neumaier

unread,
Jul 29, 2004, 1:42:35 PM7/29/04
to

gptejms wrote:

> Arnold Neumaier Wrote:
>
>>A single photon is prepared in a superposition of two beams.
>>A photosensitive screen blocks one of the two beams but has a big
>>hole
>>where the other beam can pass without significant interference.
>>At twice the distance of the first screen, a second photosensitive
>>screen without hole is placed.
>>
>>The experimental observation is that the photon is observed at
>>exactly
>>one of the two screens, at the position where the corresponding beam
>>ends.
>>
>>The challenge is to build from first principles and your preferred
>>interpretation a complete, observer-free, quantum model of this
>>experiment (photon, two screens, and an environment),
>>together with a formal analysis that completely explains the
>>experimental result.

> Maybe I'm missing your point, but how is this experiment qualitatively
> different from the usual one, i.e.: a half-silvered mirror is placed at
> 45 degrees to the direction of an incident photon. There
> is a probability half of the photon being transmitted or
> reflected. So it's in a superposition of two states until the photon
> interacts with one of the two screens and the wavefunction collapses.
> I'm not commenting on the validity of the Copenhagen interpretation or
> otherwise---my question is 'how's your experimental set up
> qualitatively different from this set up'.

The half-silvered mirror is just a way to achieve the preparation
mentioned in line 1 of my challenge. The challenge is to explain
what happens at the screens, when the wave function collapses.
Note that the Copenhagen interpretation does not meet my challenge;
see the discussion in http://www.mat.univie.ac.at/~neum/collapse.html


Arnold Neumaier
