
Re: Is State Vector Reduction a 'Process'?


Arnold Neumaier

May 16, 2005, 3:20:49 AM
Souvik wrote:
> Is state-vector reduction (or collapse of the wavefunction) a physical
> process? Or is it really an artefact of our procedure of canonical
> quantisation?

It is an artifact of the description of a quantum system by
a limited number of observables rather than by the state of
the whole universe. Once one projects to a subsystem, one
must ignore the details of the interaction with the unmodelled
environment, and one gets some dissipative effects. These are
responsible for the collapse.

But since we cannot know the full state of the whole universe,
we are doomed to such reduced descriptions and hence to the
collapse. Actually it is a big help in using QM...


Arnold Neumaier

Aaron Bergman

May 16, 2005, 3:20:46 AM
In article <1116090071.2...@g14g2000cwa.googlegroups.com>,
"Souvik" <souvi...@gmail.com> wrote:

> Is state-vector reduction (or collapse of the wavefunction) a physical
> process?

No one knows.

> Or is it really an artefact of our procedure of canonical
> quantisation?

It has nothing to do with canonical quantization, path integrals, or
whatever. Quantum mechanics does not include state-vector reduction as a
physical process.

It does include, however, something called decoherence which makes the
observation of macroscopic superpositions occur with only infinitesimal
probability. Thus, even if state reduction were a physical process,
nature has conspired to make it incredibly difficult to observe it.

But that doesn't mean that it's impossible. It's not pointed out enough,
I think, that collapse vs. no-collapse scenarios for quantum mechanics
is *not* an issue of interpretation; it's a real physical question.
Experiments have been done to entangle and then disentangle increasingly
complex objects, so it might be possible to eventually observe state
reduction if it does exist.

Aaron

r...@maths.tcd.ie

May 20, 2005, 4:55:40 AM
Aaron Bergman <aber...@physics.utexas.edu> writes:

>> Is state-vector reduction (or collapse of the wavefunction) a physical
>> process?

>No one knows.

Indeed, although there are a lot of people who claim they do. Quantum
mechanics has a psychological effect similar to metaphysics - when
otherwise honest people talk about it, they omit, distort and twist
the truth to promote their own interpretation. They'll give you
a firm but confusing answer, without informing you that a significant
fraction of physicists (say, over 50%) disagree with it. They'll
say that those who disagree with them are unreasonable, illogical.

Even very intelligent people do this. It's quite a bizarre social
phenomenon to have physicists turning into politicians like this,
attempting to manipulate and confuse others into sharing their
view. Part of it may stem from the fact that there's a lot of
dogma (belief taught as fact) in quantum mechanics textbooks,
and in undergraduate quantum mechanics classes. It's also
possible that this is a result and not a cause of the problem.
Physicists are aware that there's something odd going on;
a lot of them won't want to discuss it.

If you're interested in my opinion, which I don't suggest you
should be, it seems to me that quantum mechanics is like a piece
of alien technology that fell from space. We have no idea
why it works; many say they do, but it's rare that two people
agree on a reason. Nobody has ever deduced the mathematical
formalism by showing that, based on their interpretation,
the formalism that we do use is the formalism that we should
use. Rather, people look at the mathematical formalism
and then invent stories about what it means, some involving
parallel worlds, some involving consciousness.

This is different from the "shut up and calculate" approach,
which is appropriate for somebody who wants to follow a
career in physics but who has no motivation to actually
understand why the calculations he is doing are the right
ones to do. I advocate looking for a derivation of the
formalism of quantum mechanics, without assuming it in
advance. I haven't found one, but neither has anybody
else.

You may hear it said from various physicists that the
problem of reconciling quantum mechanics with gravity
does not require any better understanding of why
quantum mechanics works. This is dogma.

>... even if state reduction were a physical process,


>nature has conspired to make it incredibly difficult to observe it.

>But that doesn't mean that it's impossible. It's not pointed out enough,
>I think, that collapse vs. no-collapse scenarios for quantum mechanics
>is *not* an issue of interpretation; it's a real physical question.
>Experiments have been done to entangle and then disentangle increasingly
>complex objects, so it might be possible to eventually observe state
>reduction if it does exist.

There are some interpretations involving collapse, such as the GRW
spontaneous-collapse interpretation, or Penrose's gravitational-collapse
interpretation, which give predictions different from those of
"orthodox" quantum mechanics, i.e. the Hilbert space recipe for
calculating the probabilities of various experimental results.
These are testable in principle, and an experiment to test Penrose's
is currently being constructed.

The no-collapse interpretations (many worlds, Bohmian mechanics, and
so on) typically agree with orthodox QM on every prediction.

Orthodox QM itself, or the Copenhagen interpretation, features
collapse but doesn't consider it physical. From the Copenhagen point
of view, the wavefunction encodes knowledge about the system, and
it collapses when a measurement is performed; that is, when we
acquire new knowledge, we have to update the mathematical object
which we use to represent knowledge. Hence different observers will
use different wavefunctions to describe the same system. The
Copenhagen view is still the officially recognised majority view,
but I doubt there are many physicists today who would agree that,
for example, the ground state orbital of an electron in a hydrogen
atom represents knowledge. Physicists dislike knowledge because
knowledge is subjective, and subjective things are bad.

R.

Baugh

May 20, 2005, 4:56:25 AM
Souvik wrote:
> Is state-vector reduction (or collapse of the wavefunction) a physical
> process? Or is it really an artefact of our procedure of canonical
> quantisation?
>
> In the Feynman path integral approach to Quantum Theory, state-vector
> reduction doesn't seem to be a process any more than asking the
> question: What is the amplitude of going from a certain initial state
> to a final state?
>

As you see, different opinions arise. My take on it is this.
When you view the wave function as a statistical description
(analogous to a probability distribution), then the assimilation
of new information changes your description.

Thus even classical probability distributions "collapse".
E.g., your probability of winning the raffle can jump from
a small positive value to either certainty or zero
upon the drawing. This occurs "instantaneously for all tickets".
It happens "non-locally" because the
state "winning the raffle" does not describe the state
of your ticket alone but the correlation between your ticket and
the state of a ticket drawn in the raffle office.

So I would argue that collapsing wave functions are not physical
processes but rather occur "on paper".
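The raffle analogy can be written out as a few lines of Python (a toy
sketch with hypothetical numbers): the classical "collapse" is just
Bayesian conditioning of a probability distribution, an update performed
on paper, not anything done to the tickets.

```python
# Toy sketch (hypothetical numbers): classical "collapse" is just
# Bayesian conditioning of a probability distribution, done on paper.

def condition(dist, event):
    """Restrict a distribution to an event and renormalize."""
    total = sum(p for outcome, p in dist.items() if outcome in event)
    return {outcome: p / total
            for outcome, p in dist.items() if outcome in event}

# 1000 raffle tickets, all equally likely to win; we hold ticket 7.
tickets = {i: 1 / 1000 for i in range(1000)}
p_win_before = tickets[7]          # 0.001

# The drawing announces "ticket 7 wins": the description of every
# ticket updates at once -- nothing physical happens to the tickets.
after = condition(tickets, {7})
p_win_after = after[7]             # 1.0
```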


Bill Hobba posted a link:
http://quantum.phys.cmu.edu/quest.html
to another "interpretation" of QM called "Consistent Histories"
which makes this point.

It may help in parsing the various thought experiments to remember
that when you are considering a "one particle experiment" there
is a global non-causal constraint being applied to the system definition
which excludes cases where a second or third particle enters the
experimental domain. So for example when you measure one electron
in position x you are in this act measuring also zero electrons
at every other position. The measurement action is non-local,
and you toss out instances where you get more than one electron in total.
This "tossing out of instances" tells you that the collapse
is not occurring "to the particle" but rather "to your description".

This you may relate also to Arnold Neumaier's point about restriction
to a small part of "the whole universe". However, I think this point
is misleading, in that "the wave function of the whole universe"
is not operationally meaningful. There is only one instance of "the
whole universe", so probabilities are meaningless and you cannot
interpret such a "wave function" quantum mechanically. That is,
unless you use a 1-dimensional Hilbert space, with one physical
observable, "does it exist", which has expectation value 1.


Regards,
James Baugh


Arnold Neumaier

May 21, 2005, 4:42:57 AM
Baugh wrote:

> This you may relate also to Arnold Neumaier's point about restriction
> to a small part of "the whole universe". However I think this point
> misleading in that "the wave function of the whole universe"
> is not operationally meaningful. There is only one instance of "the
> whole universe" so probabilities are meaningless and you cannot
> interpret such a "wave function" quantum mechanically,

This is not true. Quantum mechanics not only provides probabilities
but also expectations. And these are _not_ meaningless,
even though only one instance of the universe exists.

For example, if we measure the mass of a piece of metal, we actually
measure the expectation of the mass operator <M> (M is the sum of
all the particle masses). Similarly, as any discussion of the
grand canonical ensemble can convince you, all the quantities discussed
in thermodynamics are either expectations, or numbers computed from
such expectations. And these are measured routinely for single objects.
Quantum mechanics just asserts that the accuracy one can obtain is limited.

Going deeper into statistical mechanics shows that the quantities
handled by hydrodynamics (and this is much of our everyday life!)
are also expectations and functions of expectations, and apply to
single objects, such as the motor of the car you are driving.


Only the probabilistic interpretation of QM goes down the drain when
the whole universe is considered. But essentially everything else
remains intact. Those who can read German can find more discussion
of this in
http://www.mat.univie.ac.at/~neum/physik-faq.tex


Arnold Neumaier

Arnold Neumaier

May 21, 2005, 4:43:03 AM
r...@maths.tcd.ie wrote:

> Aaron Bergman <aber...@physics.utexas.edu> writes:
>
>>In article <1116090071.2...@g14g2000cwa.googlegroups.com>,
>>"Souvik" <souvi...@gmail.com> wrote:
>
>>>Is state-vector reduction (or collapse of the wavefunction) a physical
>>>process?
>
>>No one knows.
>
> Indeed, although there are a lot of people who claim they do. Quantum
> mechanics has a psychological effect similar to metaphysics - when
> otherwise honest people talk about it, they omit, distort and twist
> the truth to promote their own interpretation. They'll give you
> a firm but confusing answer, without informing you that a significant
> fraction of physicists (say, over 50%) disagree with it.

Do you think truth is a matter of majority votes???
Truth is rather a matter of listening to the different sides
of a controversy and then choosing the best.


It is obvious that whatever a person claims is first and foremost
his or her personal opinion, and not a fact. Whoever takes it for a
fact is simply misleading himself or herself. Thus there is no
need to qualify each of one's statements by clumsy phrases like
'in my opinion', or 'according to what I have read/understood', or
'as far as I am informed', or 'since this makes most sense to me'.
These phrases silently accompany any statement by anyone.

It is also obvious that an opinion doesn't become a fact because
it is believed by half the number of people from a particular
ensemble; truth would otherwise become dependent on the choice
of this ensemble.

Honesty therefore only requires that one asserts what one thinks
is true, and gives one's reasons upon request. This is the scientific
approach, since it lets others check upon the trustworthiness of
a claim.


> They'll
> say that those who disagree with them are unreasonable, illogical.
>
> Even very intelligent people do this. It's quite a bizarre social
> phenomenon to have physicists turning into politicians like this,
> attempting to manipulate and confuse others into sharing their
> view.

I don't think this is a fair assessment. Everything called
knowledge is in fact a set of beliefs of the person claiming it.
And this set of beliefs is more or less close to the objective
truth, depending on the standards of that person.

Telling others what one thinks is true in no way manipulates
others any more than feeding others what one thinks is nourishing.
But as we shouldn't accept being fed by those with poor judgment
about food, we shouldn't accept an opinion for the truth if offered
by someone with poor judgment about the relevant areas.

Thus one needs to check the claims, to listen to different sides
of a controversy, to ask for sources or justification of an opinion.
In this way, anyone who wants to get a clear picture soon notices
which claims are trustworthy, which ones are tenable but somewhat
shaky, and which ones are poorly founded.


Arnold Neumaier

Seratend

May 21, 2005, 12:48:20 PM

r...@maths.tcd.ie wrote:

> Orthodox QM itself, or the Copenhagen interpretation, features
> collapse but doesn't consider it physical. From the Copenhagen point
> of view, the wavefunction encodes knowledge about the system, and
> it collapses when a measurement is performed; that is, when we
> acquire new knowledge, we have to update the mathematical object
> which we use to represent knowledge. Hence different observers will
> use different wavefunctions to describe the same system. The
> Copenhagen view is still the officially recognised majority view,
> but I doubt there are many physicists today who would agree that,
> for example, the ground state orbital of an electron in a hydrogen
> atom represents knowledge. Physicists dislike knowledge because
> knowledge is subjective, and subjective things are bad.
>
> R.

Yes, this is the formal point of view on quantum measurement results.
However, it does not explain the selection of the eigenbasis used to
match the computed results with the experimental results.
In other words, for the collapse postulate, every eigenbasis is ok to
express statistical results (the Born rule), but in the lab, we just
have one basis where the statistics apply. Why?

Seratend.

(and please do not use the decoherence as the way to solve this issue)
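The basis question raised here can be made concrete with a toy
Born-rule calculation (a NumPy sketch with standard textbook states,
not anything taken from the thread): the rule happily assigns
statistics in any orthonormal basis, so the formalism by itself does
not single out the basis observed in the lab.

```python
import numpy as np

# The Born rule p(k) = |<e_k|psi>|^2 works in any orthonormal basis.
# For the state |0> (spin-up along z), the z basis gives deterministic
# statistics while the x basis gives 50/50 -- the rule itself does not
# say which basis the lab "uses".

psi = np.array([1.0, 0.0])                       # |0>

z_basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
x_basis = [np.array([1.0,  1.0]) / np.sqrt(2),   # |+>
           np.array([1.0, -1.0]) / np.sqrt(2)]   # |->

def born_probabilities(state, basis):
    return [float(abs(np.vdot(e, state)) ** 2) for e in basis]

p_z = born_probabilities(psi, z_basis)   # [1.0, 0.0]
p_x = born_probabilities(psi, x_basis)   # ~ [0.5, 0.5]
```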


Seratend

May 21, 2005, 12:48:20 PM

Arnold Neumaier wrote:

>
> For example, if we measure the mass of a piece of metal, we actually
> measure the expectation of the mass operator <M> (M is the sum of
> all the particle masses).
>

This is strictly true only if the piece of metal is made of an
infinite number of particles (convergence in law). What we really
measure is the value of M = sum_i Mi, and not <M>.

Seratend.


Aaron Bergman

May 21, 2005, 7:05:09 PM
In article <1116673523....@g49g2000cwa.googlegroups.com>,
Seratend <ser_m...@yahoo.fr> wrote:

> In other words, for the collapse postulate, every eigenbasis is ok to
> express statistical results (the born rules), but in the lab, we just
> have one basis where the statistics apply. Why?

Because we're entangling with a specific classical observable.

> (and please do not use the decoherence as the way to solve this issue)

Decoherence really does solve this issue up until you start to ask
questions about the human brain. At that point, I advocate throwing up
one's arms and being happy that we seem to get the right answer.

Aaron

Baugh

May 21, 2005, 9:37:43 PM
Thought this should begin a new thread.

In the thread "Is State Vector Reduction a 'Process'?"


Arnold Neumaier wrote:
> Baugh wrote:
>
>
>>This you may relate also to Arnold Neumaier's point about restriction
>>to a small part of "the whole universe". However I think this point
>>misleading in that "the wave function of the whole universe"
>>is not operationally meaningful. There is only one instance of "the
>>whole universe" so probabilities are meaningless and you cannot
>>interpret such a "wave function" quantum mechanically,
>
>
> This is not true. Quantum mechanics not only provides probabilities
> but also expectations. And these are _not_ meaningless,
> even though only one instance of the universe exists.

Yes, that's all very true for given systems. But given that an
expectation value is simply a sum of probabilities times potentially
observed values, could you please define for me an operationally
meaningful observable of the "universe as a whole" besides the
"it exists" one I provided, which of course has expectation value 1?

> For example, if we measure the mass of a piece of metal, we actually
> measure the expectation of the mass operator <M> (M is the sum of
> all the particle masses). Similarly, as any discussion of the
> grand canonical ensemble can convince you, all the quantities discussed
> in thermodynamics are either expectations, or numbers computed from
> such expectations. And these are measured routinely for single objects.
> Quantum mechanics just asserts that the accuraccy one can obtain is limited.

It asserts a bit more than that, but be that as it may. One of the most
important aspects of quantum theory is the observer-system cut. Though
it may be relativized, this cut is still necessary to give operational
meaning to the act of observing a quantity such as a particle's mass
or spin or total angular momentum.

> Only the probabilistic interpretation of QM goes down the drain when
> the whole universe is considered.

I would say it remains intact but becomes singular (unit
probability/expectation value for the one implicit observable).

> But essentially everything else remains intact.

Not the case. Let me point out one specific example. Consider rather
than "the universe as a whole" a single composite system which we may
subdivide into two subsystems A and B. Recall that the entropy
of A alone is determined by its density operator rho_A obtained
by the partial trace of the composite over B. Likewise with rho_B.

Thus while the whole system may have zero entropy:
    rho_{AxB} = psi (x) psi^*     [(x) = tensor product]

each subsystem may have non-zero entropy due to entanglement, i.e.
    rho_{AxB} not equal to rho_A (x) rho_B

We may alternatively factor the system into another pair of subsystems
A' and B'. Then the component entropies will have no relation to the
prior factorization except that the entropy of the whole composite
provides a lower bound on the sums of partial entropies.

Now with this in mind tell me how you can possibly determine
the entropy of the "universe as a whole". You can posit that it is
zero. But you cannot operationally determine its entropy.
I can posit any other value.

A similar factorization dependency will occur for expectation values
of observables.
An expectation value for the whole cannot be determined by analysis of
expectation values for a particular factorization into parts.

Trace[ rho_{AxB} H] is not equal to Trace( rho_A H_A) + Trace(rho_B H_B)

As an example consider the total spin of a singlet pair of electrons
which is zero vs the total spin of its component halves which are each
1/2. Compare this with that of a triplet pair which has total spin 1.

Again, you cannot reduce the properties of "the universe as a whole"
to properties of its components. Not in quantum theory, where the
factorization into components is not unique.
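The partial-trace argument above can be checked numerically. The
following sketch (NumPy, standard textbook states only, not code from
the thread) computes the von Neumann entropy of a singlet pair and of
one of its halves: the composite is pure, with zero entropy, while each
subsystem comes out maximally mixed, with one bit of entropy.

```python
import numpy as np

# Singlet state |psi> = (|01> - |10>) / sqrt(2), written in the
# product basis |00>, |01>, |10>, |11>.
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
rho_AB = np.outer(psi, psi.conj())       # pure-state density matrix

def partial_trace_B(rho):
    """Trace out the second qubit of a two-qubit density matrix."""
    r = rho.reshape(2, 2, 2, 2)          # indices a, b, a', b'
    return np.einsum('abcb->ac', r)      # sum over b = b'

def von_neumann_entropy(rho):
    """S(rho) = -Tr rho log2 rho, computed via the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]         # discard (numerically) zero eigenvalues
    return float(-np.sum(evals * np.log2(evals)))

S_AB = von_neumann_entropy(rho_AB)       # 0.0: the whole is pure
rho_A = partial_trace_B(rho_AB)          # equals I/2
S_A = von_neumann_entropy(rho_A)         # 1.0: the part is maximally mixed
```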

Most especially, you cannot define an observer "outside the universe
as a whole" who can then give meaning to the specific observables you
wish to associate with "the universe".

--
Regards,
James Baugh

Arnold Neumaier

May 22, 2005, 11:34:02 AM
Seratend wrote:

No. What we measure is an approximation of
<integral_Omega dx a^*(x)Ma(x)> over the region Omega of interest.
This is the expression that figures in statistical mechanics
derivations of macroscopic elasticity theory.

Statistical mechanics must use the grand canonical ensemble
for calculations in local equilibrium such as elasticity theory
or hydrodynamics, since the cell boundaries in the coarse graining
are vague and hence the number of particles cannot be assumed
constant in each cell. Hence all fields are expectations.

Please resume discussion of this in the new thread
''Wave Function of the Universe?'' opened by Baugh,
where I copied this answer.


Arnold Neumaier


Frank Hellmann

May 23, 2005, 4:58:16 PM
Arnold Neumaier wrote:

> Souvik wrote:
> > Is state-vector reduction (or collapse of the wavefunction) a physical
> > process? Or is it really an artefact of our procedure of canonical
> > quantisation?
>
> It is an artifact of the description of a quantum system by
> a limited number of observables rather than by the state of
> the whole universe. Once one projects to a subsystem, one
> must ignore the details of the interaction with the unmodelled
> environment and gets some dissipative effects. These are
> responsible for the collapse.
>

Unless I'm mistaken you are describing decoherence, which is not
equivalent to collapse.

Frank.

Seratend

May 23, 2005, 4:59:26 PM
Aaron Bergman wrote:

> In article <1116673523....@g49g2000cwa.googlegroups.com>,
> Seratend <ser_m...@yahoo.fr> wrote:
>
> > In other words, for the collapse postulate, every eigenbasis is ok to
> > express statistical results (the born rules), but in the lab, we just
> > have one basis where the statistics apply. Why?
>
> Because we're entangling with a specific classical observable.
>
Why? I mean the entanglement does not choose a basis. Every basis is ok
to express the statistics of the entanglement.
For example, in classical statistical mechanics we choose by
default A(p,q) as an observable (in the formulation à la von Neumann,
[q,p]=0). What prevents us from choosing another basis and other
observables to express the statistics? From this point, why in
classical mechanics do we seem to have the superselection rule
(preferred basis = p,q) for the experiments?

> > (and please do not use the decoherence as the way to solve this issue)
>
> Decoherence really does solve this issue up until you start to ask
> questions about the human brain. At that point, I advocate throwing up
> one's arms and being happy that we seem to get the right answer.
>
> Aaron

You can remove the human brain and just stay with events and outcomes
(in order to avoid philosophical questions). Your point of view seems
to be the selection of the basis (to express the results) after the
experiment: an a posteriori selection (no prediction).
In this case, you are implicitly saying (in my understanding) that the
QM framework does not give an answer (a prediction) for the preferred
basis of experiment results. We must act before the experiment to
know/learn what the preferred basis is. (We must have/construct a
collection of experiments where we know the preferred basis of the
statistics, in order to build other experiments for predictions:
statistics on a given basis.)

Seratend.

Frank Hellmann

May 23, 2005, 4:57:58 PM
Seratend wrote:
>
> Yes this is the formal point of view of the quantum results
> measurements. However, this does not explain the selection of the
> eingenbasis used to match the computed results with the experimental
> results.
> In other words, for the collapse postulate, every eigenbasis is ok to
> express statistical results (the born rules), but in the lab, we just
> have one basis where the statistics apply. Why?
>
> Seratend.

>
> (and please do not use the decoherence as the way to solve this issue)

I have seen this argument quite a few times; do you have any good
concise review papers on it?
A measurement is represented by a projection operator, and you can
develop the whole apparatus of the statistical interpretation of QM
without ever referring to a basis, so I really don't see where this
problem comes into things.

Frank.

r...@maths.tcd.ie

unread,
May 23, 2005, 5:00:33 PM5/23/05
to
Seratend <ser_m...@yahoo.fr> writes:

>r...@maths.tcd.ie wrote:


>> From the Copenhagen point
>> of view, the wavefunction encodes knowledge about the system, and
>> it collapses when a measurement is performed; that is, when we
>> acquire new knowledge, we have to update the mathematical object
>> which we use to represent knowledge.

>Yes this is the formal point of view of the quantum results
>measurements. However, this does not explain the selection of the
>eingenbasis used to match the computed results with the experimental
>results.
>In other words, for the collapse postulate, every eigenbasis is ok to
>express statistical results (the born rules), but in the lab, we just
>have one basis where the statistics apply. Why?

>(and please do not use the decoherence as the way to solve this issue)

I'm not sure I understand the question very clearly, but I'll try
to answer it. There is a special basis - the position basis, although
this isn't an entire basis for the Hilbert space, since it doesn't
span the spin part of the Hilbert space, for example. Anyway, all
measurements are ultimately measurements of position, as Bell
was fond of saying, for example the positions of instrument
pointers. To measure spin we separate the spin-up part of the beam
from the spin-down part and then measure position. Because
all interactions which can serve as measurements are local,
meaning that if system A wants to exchange information with system
B it has to be in the same region of space, interactions with
neighboring objects tend to act as position measurements.

I hope this was what you were asking.

R.

r...@maths.tcd.ie

May 23, 2005, 5:02:06 PM
Arnold Neumaier <Arnold....@univie.ac.at> writes:

>r...@maths.tcd.ie wrote:

>> Aaron Bergman <aber...@physics.utexas.edu> writes:
>>
>>>In article <1116090071.2...@g14g2000cwa.googlegroups.com>,
>>>"Souvik" <souvi...@gmail.com> wrote:
>>
>>>>Is state-vector reduction (or collapse of the wavefunction) a physical
>>>>process?
>>
>>>No one knows.
>>
>> Indeed, although there are a lot of people who claim they do. Quantum
>> mechanics has a psychological effect similar to metaphysics - when
>> otherwise honest people talk about it, they omit, distort and twist
>> the truth to promote their own interpretation. They'll give you
>> a firm but confusing answer, without informing you that a significant
>> fraction of physicists (say, over 50%) disagree with it.

>Do you think truth is a matter of majority votes???
>Truth is rather a matter of listening to the different sides
>of a controversy and then choosing the best.

There are some things, like the canon of mathematics, classical
mechanics, and the formalism of quantum mechanics which are
well-established. There are other things, like the interpretation
of quantum mechanics, which aren't. It is difficult for a non-expert
to know in advance which areas are well-established, where there is no
controversy, and in which areas there is controversy among experts. If
such a person asks a question like "Is state-vector reduction a
physical process?", then a physicist who responds by saying "No it
isn't," without adding that this answer is merely his own opinion,
is doing the inquirer a disservice.

Most questions about physics have a clear, well-established
answer which can be found simply by asking a physicist, and only
the expert can be expected to know which questions are
exceptions to this general rule. A physicist who gives an
apparently straightforward, if slightly confusing, answer
to a question about physics, without making it clear that
this question has an unusual status in physics, that,
unlike most questions in physics, this one has no
well-established answer, is implicitly telling the
person that this question is just like other questions
in physics, that it has a well-established answer, and
that in fact the answer being given is the well-established
one.

Now this is what you did, and you interpreted my post as an attack
on you, became angry, and treated me to three question marks and a
lecture about how everything is mere opinion and belief:

>Everything called
>knowledge is in fact a set of beliefs of the person claiming it.

Readers of this post will be very well aware that certain kinds of
knowledge, for example knowledge of definitions, of mathematical
theorems of which one has seen the proof, and of many statements about
physics, are not merely beliefs.
that action and reaction are equal in magnitude, I am not merely
expressing my personal belief, for that is exactly what the third
law says. When I say that a Banach space is a normed, complete
vector space, I am not merely giving my opinion on the matter.
Claiming that everything is opinion and nothing is well-established
is a practice of those who oppose science. "Evolution is a theory
not a fact," they say.

This is, of course, all beside the point, but serves to illustrate
my original point, which was that, when it comes to interpreting
quantum mechanics, otherwise honest people become less honest. I
noticed this first in myself and then in others. Only by explicitly
acknowledging this and trying to overcome it are we likely to make
progress. Pretending that it's not true and trying to promote our
own views through fighting against others with aggressive punctuation
only makes the situation worse.

R.

Seratend

May 23, 2005, 5:01:12 PM
Arnold Neumaier wrote:
> Seratend wrote:
>
> > Arnold Neumaier wrote:
> >
> >>For example, if we measure the mass of a piece of metal, we actually
> >>measure the expectation of the mass operator <M> (M is the sum of
> >>all the particle masses).
> >>
> > This is only true, in the absolute, if the piece of metal is made of an
> > infinite number of particle masses (convergence in law). What we really
> > measure is the value of M= sum_i Mi and not <M>.

>
> No. What we measure is an approximation of
> <integral_Omega dx a^*(x)Ma(x)> over the region Omega of interest.
> This is the expression that figures in statistical mechanics
> derivations of macroscopic elasticity theory.
>

We can apply the statistics to small independent areas. This does not
change the fact that in the end what we really measure on a local
area is M_area = sum_{i in the area} Mi and not <M>_{area} (what you
call the approximation when the area is not well defined). An instance
of the object has well-defined (though perhaps unknown to the observer)
values of Mi in all areas (independence of the Mi random variables, or
commuting observables). However, in the end one sums over all these
areas (the extensive property of M) to obtain the value we really
measure.

In my opinion, this is simply an application of the law of large
numbers, and the grand and micro canonical ensembles just use this
property with additional hypotheses.
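The law-of-large-numbers point can be illustrated with a toy simulation
(hypothetical particle "masses", Python standard library only, nothing
from the thread): a single measured instance of the sum fluctuates
around n times the mean, with a relative deviation that shrinks roughly
like 1/sqrt(n), which is why a single measurement approximates the
expectation so well for macroscopic n.

```python
import random

# Toy simulation (hypothetical masses): one measured instance of
# M = sum_i M_i fluctuates around n*<m>, and the relative deviation
# shrinks roughly like 1/sqrt(n) -- the law of large numbers.

random.seed(0)   # deterministic, for reproducibility

def measured_mass(n, mean=1.0, spread=0.1):
    """One instance: the sum of n independent particle 'masses'."""
    return sum(random.gauss(mean, spread) for _ in range(n))

for n in (100, 10_000, 1_000_000):
    m = measured_mass(n)
    rel_dev = abs(m - n * 1.0) / (n * 1.0)
    print(n, rel_dev)   # the relative deviation shrinks as n grows
```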


Seratend.

Arnold Neumaier

May 23, 2005, 4:59:52 PM
Baugh wrote:

> In the thread "Is State Vector Reduction a 'Process'?"
> Arnold Neumaier wrote:
>
>> Baugh wrote:
>>
>>> This you may relate also to Arnold Neumaier's point about restriction
>>> to a small part of "the whole universe". However I think this point
>>> misleading in that "the wave function of the whole universe"
>>> is not operationally meaningful. There is only one instance of "the
>>> whole universe" so probabilities are meaningless and you cannot
>>> interpret such a "wave function" quantum mechanically,
>>
>> This is not true. Quantum mechanics not only provides probabilities
>> but also expectations. And these are _not_ meaningless,
>> even though only one instance of the universe exists.
>
> Yes, that's all very true for given systems, but given that an expectation
> value is simply a sum of probabilities times potentially observed
> values, could you please define for me an operationally meaningful
> observable of the "universe as a whole" besides the "it exists" one I
> provided, which of course has expectation value 1.

Every observable of a subsystem of the universe is also an observable
of the universe. For example ''the mass of the solar system'',
or ''the number of asteroids in the solar system with diameter
more than 1 km''.


>> For example, if we measure the mass of a piece of metal, we actually
>> measure the expectation of the mass operator <M> (M is the sum of
>> all the particle masses).

[Seratend replied here in the other thread:]

> This is only true, in the absolute, if the piece of metal is made of an
> infinite number of particle masses (convergence in law). What we really
> measure is the value of M= sum_i Mi and not <M>.

No. What we measure is an approximation of
<integral_Omega dx a^*(x)Ma(x)> over the region Omega of interest.
This is the expression that figures in statistical mechanics
derivations of macroscopic elasticity theory.

Statistical mechanics must use the grand canonical ensemble
for calculations in local equilibrium such as elasticity theory
or hydrodynamics, since the cell boundaries in the coarse graining
are vague and hence the number of particles cannot be assumed
constant in each cell. Hence all fields are expectations.

>> Similarly, as any discussion of the
>> grand canonical ensemble can convince you, all the quantities discussed
>> in thermodynamics are either expectations, or numbers computed from
>> such expectations. And these are measured routinely for single objects.
>> Quantum mechanics just asserts that the accuracy one can obtain is
>> limited.
>
> It asserts a bit more than that but be that as it may.

Of course, it asserts also the values of correlations, from which
one can get probabilities, but this is not the point here.


> One of the most
> important aspects of quantum theory is the observer-system cut. This
> though it may be relativized is still necessary to give operational
> meaning to the act of observing a quantity such as a particle's mass
> or spin or total angular momentum.

The universe exists even without observers.

Observers in the sense of physics exist only since Galilei.
But no one doubts that quantities such as the ''distance of the sun
from the earth'' had values (within the accuracy limited by the
size of the sun) already before the first physicist measured it.

Thus the observer-system cut must be independent of the foundations.


>> Only the probabilistic interpretation of QM goes down the drain when
>> the whole universe is considered.
>
> I would say it remains intact but becomes singular (unit
> probability/expectation value for the one implicit observable).

There are zillions of observables in the one universe!!!


>> But essentially everything else remains intact.
>
> Not the case. Let me point out one specific example. Consider rather
> than "the universe as a whole" a single composite system which we may
> subdivide into two subsystems A and B. Recall that the entropy
> of A alone is determined by its density operator rho_A obtained
> by the partial trace of the composite over B. Likewise with rho_B.
>
> Thus while the whole system may have zero entropy:
>
> rho_{AxB} = psi (x) psi^* [ (x) = tensor prod.] (**)


>
> Each subsystem may have non-zero entropy due to entanglement, i.e.
> rho_{AxB} not equal to rho_A (x) rho_B

Yes. This is actually the typical situation.
For example, macroscopic systems A in equilibrium have a
grand canonical density operator rho_A with lots of entropy
at typical temperatures.
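
Baugh's situation, a zero-entropy composite with entropic parts as in
(**), can be checked directly for a Bell singlet (a minimal numerical
sketch, not part of the original posts): the composite state is pure,
yet each partial trace is maximally mixed.

```python
import numpy as np

# Bell singlet |psi> = (|01> - |10>)/sqrt(2) on C^2 (x) C^2
psi = np.array([0, 1, -1, 0], dtype=float) / np.sqrt(2)
rho = np.outer(psi, psi)              # rho_{AxB} = psi (x) psi^*, pure

def entropy(r):
    """von Neumann entropy S = -Tr(rho log rho), dropping zero eigenvalues."""
    w = np.linalg.eigvalsh(r)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

# Partial trace over B: rho_A[i,j] = sum_k rho[(i,k),(j,k)]
rho4 = rho.reshape(2, 2, 2, 2)
rho_A = np.einsum('ikjk->ij', rho4)

print(entropy(rho))    # ~0: the composite carries no entropy
print(entropy(rho_A))  # ~log 2: the subsystem is maximally mixed
```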


> We may alternatively factor the system into another pair of subsystems
> A' and B'. Then the component entropies will have no relation to the
> prior factorization except that the entropy of the whole composite
> provides a lower bound on the sums of partial entropies.

This only says that values of quantities depend on their definition.
Nobody expects the properties of the system A' to be identical with
those of A.

> Now with this in mind tell me how you can possibly determine
> the entropy of the "universe as a whole".

Under your assumption (**) above (A = a system, B = everything else)
it is zero. But maybe it is very large; we cannot know.
Since we measure only little bits of the universe we can know only
about the entropy of these little bits.

> You can posit that it is
> zero. But you cannot operationally determine its entropy.
> I can posit any other value.

About things one cannot check, every posit is equally acceptable.
Physics only requires consistency with what we can test.
If one day, we'd be able to measure the entropy of the universe,
one could find out. Until then, this value is simply one of
the many things we don't know.


> A similar factorization dependency will occur for expectation values
> of observables.
> An expectation value for the whole cannot be determined by analysis of
> expectation values for a particular factorization into parts.
>
> Trace[ rho_{AxB} H] is not equal to Trace( rho_A H_A) + Trace(rho_B H_B)

Yes. This is what makes the whole thing work, and is not inconsistent
with what I wrote.


> As an example consider the total spin of a singlet pair of electrons
> which is zero vs the total spin of its component halves which are each
> 1/2. Compare this with that of a triplet pair which has total spin 1.
>
> Again you cannot reduce the properties of "the universe as a whole" down
> to properties of its components.

I don't understand. If this is part of a bigger universe then it is no
surprise that there are two different subsystems with different
total spin. And if these are isolated systems they are obviously two
different universes that cannot know of each other. I don't see
why this should conflict with my statements.


> Most especially you cannot define an observer "outside the universe as a
> whole" who can give then give meaning to the specific observables you
> wish to associate with "the universe".

Observers are not needed to give expectations a meaning.


Arnold Neumaier

Aaron Bergman

May 24, 2005, 12:12:40 AM
In article <1116754314.9...@z14g2000cwz.googlegroups.com>,
"Seratend" <ser_m...@yahoo.fr> wrote:

> Aaron Bergman wrote:
> > In article <1116673523....@g49g2000cwa.googlegroups.com>,
> > Seratend <ser_m...@yahoo.fr> wrote:
> >
> > > In other words, for the collapse postulate, every eigenbasis is ok
> to
> > > express statistical results (the born rules), but in the lab, we
> just
> > > have one basis where the statistics apply. Why?
> >
> > Because we're entangling with a specific classical observable.
> >
> Why? I mean the entanglement does not choose a basis. Every basis is ok
> to express the statistics of the entanglement.

This isn't just entanglement; it's entanglement with a classical
observable. That selects a basis, the one in which the classical
observable is diagonal.

[...]

> You can remove the human brain and just stay with events and outcomes
> (in order to avoid philosophical questions). Your point of view seems
> to be the selection of the basis (to express the results) after the
> experiment: an a posteriori selection (no prediction).
> In this case, you are implicitly saying (in my understanding) that the QM
> framework does not give an answer (a prediction) for the preferred basis
> of experiment results. We must experiment beforehand to know/learn
> what the preferred basis is. (We must have/construct a collection of
> experiments where we know the preferred basis of the statistics in
> order to build other experiments for predictions - statistics on a
> given basis.)

Not at all. Decoherence shows us how the basis is selected.

Aaron

r...@maths.tcd.ie

May 24, 2005, 2:26:14 PM
Aaron Bergman <aber...@physics.utexas.edu> writes:

>> Aaron Bergman wrote:
>> > In article <1116673523....@g49g2000cwa.googlegroups.com>,
>> > Seratend <ser_m...@yahoo.fr> wrote:
>> >
>> > > In other words, for the collapse postulate, every eigenbasis is ok
>> to
>> > > express statistical results (the born rules), but in the lab, we
>> just
>> > > have one basis where the statistics apply. Why?
>> >
>> > Because we're entangling with a specific classical observable.
>> >
>> Why? I mean the entanglement does not choose a basis. Every basis is ok
>> to express the statistics of the entanglement.

>This isn't just entanglement; it's entanglement with a classical
>observable. That selects a basis, the one in which the classical
>observable is diagonal.

You don't even need the observable to be classical to get a
preferred basis; a decomposition of the Hilbert space, H, into
two factors, H_a and H_b, corresponding to "system" and "environment",
or "system" and "measuring device" will be enough to select
a special basis, in the sense that an overall state |psi>
of H can be decomposed into \sum_i l_i |a_i>|b_i>, where
the |a_i> are an orthonormal basis for H_a and |b_i> are an
orthonormal basis for H_b. The bases selected by the decomposition
depend on |psi>.

It's called the Schmidt decomposition, and the l_i are called
the Schmidt coefficients. If there's an initial state
which factorises into the state of the system and
the state of the measuring device, say, |psi_0>=|a_0>|b_0>,
then after an interaction between them, that state of
the system, |psi> can be decomposed according to the
Schmidt decomposition, which gives a preferred basis
for that interaction (there are pathological cases
where the decomposition isn't unique, but these are
"of measure zero"). Because of the locality of information
transfer, this tends to pick out basis states which
are fairly localised (presuming that spatial degrees
of freedom are included in the system and that all
interactions are, in fact, local), or rather, were
localised at the time of the interaction.
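
Concretely, the Schmidt decomposition described above is the singular
value decomposition of the coefficient matrix of |psi>; a minimal
sketch for an illustrative two-qubit state (the numbers are made up):

```python
import numpy as np

# |psi> on H_a (x) H_b, written as a coefficient matrix C with
# psi = sum_{ij} C[i,j] |i>_a |j>_b
C = np.array([[0.6, 0.0],
              [0.0, 0.8]])            # already normalized: 0.36 + 0.64 = 1

# SVD: C = U diag(l) V^dagger, so psi = sum_i l_i |a_i>|b_i>,
# with |a_i> the columns of U and |b_i> the columns of V.
U, l, Vh = np.linalg.svd(C)

print(l)                              # the Schmidt coefficients l_i
# Rebuild C from its Schmidt form to verify the decomposition
C_rebuilt = (U * l) @ Vh
assert np.allclose(C, C_rebuilt)
# The reduced density matrix of either factor has eigenvalues l_i^2
rho_a = C @ C.conj().T
assert np.allclose(np.sort(np.linalg.eigvalsh(rho_a)), np.sort(l**2))
```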

>> You can remove the human brain and just stay with events and outcomes
>> (in order to avoid philosophical questions). Your point of view seems
>> to be the selection of the basis (to express the results) after the
>> experiment: an a posteriori selection (no prediction).
>> In this case, you are implicitly saying (in my understanding) that the QM
>> framework does not give an answer (a prediction) for the preferred basis
>> of experiment results. We must experiment beforehand to know/learn
>> what the preferred basis is. (We must have/construct a collection of
>> experiments where we know the preferred basis of the statistics in
>> order to build other experiments for predictions - statistics on a
>> given basis.)

>Not at all. Decoherence shows us how the basis is selected.

Pretty much; the above process, repeated many times, with
local interactions of many systems with one another, gives
you decoherence, with the various systems "telling each other"
where they are with respect to one another, leading to
a preferred basis which is approximately the position basis,
although it's not exactly a complete basis.

R.

Arnold Neumaier

May 24, 2005, 2:26:12 PM
r...@maths.tcd.ie wrote:

You seem to be projecting _your_ anger onto me.

That you find three question marks and a short essay on learning
truth in controversial matters an aggressive behavior in a context
where controversial things are discussed is a sign of your emotional
state rather than a property of my contribution.

I wasn't angry at all when I wrote the mail you find so objectionable,
but simply thought that your assessment of the situation (namely,
characterizing otherwise honest people as people who 'omit, distort
and twist the truth') was far off the mark.

To say that ''Everything called knowledge is in fact a set
of beliefs of the person claiming it.'' does not contradict the
objectivity of mathematical definitions. When I say that a Banach
space is a normed, complete vector space, I both state my belief
and happen to coincide with the social consensus of the guild of
mathematicians. And when I say that state reduction is a
physical process, I both state my belief and happen to coincide with
famous physicists like von Neumann and many others, and this is good
enough to make this statement honestly.

It is ridiculous to require a percentage of people in a field
to agree with you before you utter a statement without adding
a qualification like 'I believe' or 'Some physicists believe'.
There would never be an agreement on the percentage required
to do so.

In any case, such a requirement is _not_ part of the social agreement
of what constitutes ethical behavior of scientists. Thus you have
no right to accuse them of omitting, distorting and twisting
the truth.


Arnold Neumaier

mark...@yahoo.com

May 24, 2005, 9:02:16 PM
Arnold Neumaier wrote:
> But since we cannot know the full state of the whole universe,
> we are doomed to such reduced descriptions and hence to the
> collapse.

Or more strongly: as Smolin has pointed out, there may not even *be*
such a thing as a state for the whole universe, not even a
configuration space! This necessarily forces one into a local
description and relative states cease to be merely an expedient, and
become a fundamental part of the whole enterprise.

In that case, collapse is there from the outset because reduced states
are all you have.

To put it more clearly: in the absence of a universal state or even a
universal configuration space, the universe in effect becomes an open
system.

A second account, not mutually exclusive (and in fact closely related),
for the collapse comes from the fact that the most general quantum
theory [see note 1] admits both quantum AND classical degrees of
freedom; the latter playing the role of superselection parameters.

In the most general case, a relative state obtained by tracing out the
environmental modes will cut across the superselection boundaries and
give you a state which is a mixture with respect to the classical
degrees.

States that lie in different superselection sectors combine only
classically, never by superposition. By itself, a system can't evolve
in such a way as to transmit information from the quantum to the
classical degrees (i.e. a Schroedinger or automorphic evolution
automatically precludes collapse-through-superselection). That's what
would be required to get a "measurement" type event. But when combined
with the feature of there being a cut-off at the boundary of the
environment, you get the desired result of
collapse-through-superselection.

Ben Rudiak-Gould

May 25, 2005, 1:26:55 AM
Baugh wrote:
> One of the most
> important aspects of quantum theory is the observer-system cut.

I don't think this is true.

We can formulate classical dynamics in terms of the time evolution of a
probability function on the phase space, and the result looks a bit like
quantum mechanics. You start by representing (your knowledge of) the initial
state of your system as a probability distribution, and let it evolve for a
while. Then, given any property of the final state that you want to measure,
you can read off the probability that the result will be such-and-such. The
probability function even "collapses" if you incorporate the knowledge you
get from measuring that property in real life.

There's a second way of extracting predictions from this formalism, harder
in practice but equally valid in principle. That is to choose the property
to be measured beforehand, design an apparatus to measure it (with a big,
easily-recognized dial to show the result), and add a model of that
apparatus to your model of the system. You let the augmented system evolve
until the measurement apparatus reaches an equilibrium state; then you look
at the distribution of dial positions, and you should get the same answer as
before.
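
The first of the two extraction procedures above, with its
probability-function "collapse", can be sketched in a discrete toy
model (hypothetical, for illustration): a distribution over
phase-space cells evolves deterministically, and conditioning on a
coarse measurement outcome collapses it by Bayes' rule.

```python
import numpy as np

# Probability distribution over 4 phase-space cells
p = np.array([0.25, 0.25, 0.25, 0.25])

# Deterministic evolution: a permutation of the cells (a toy Liouville map)
T = np.array([[0, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]], dtype=float)
p = T @ p

# Measurement: the apparatus reports whether the system is in the
# left half {0,1} or the right half {2,3}.  Learning "left" collapses
# p by conditioning and renormalizing.
left = np.array([1.0, 1.0, 0.0, 0.0])
p_post = p * left
p_post /= p_post.sum()

print(p_post)   # all probability now sits in cells 0 and 1
```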

Both approaches still work if you replace the classical phase space with a
quantum phase space, and the probabilities with complex amplitudes. The
first is the way quantum mechanics is usually done. But the second is
perfectly valid. In the quantum case, you start with a superposition of
finitely many DeWitt worlds (probably just one), and end up with finitely
many DeWitt worlds (more than before). Each world encodes a measurement
outcome, and the probability of that outcome is equal to the squared modulus
of the amplitude of that world in the wave function.

Historically, the quantum wave function described only the system being
measured, and the measurement apparatus appeared only in the discrete
measurement/collapse rule. But I don't think there's any technical reason
that it has to be that way. In my opinion it would be much more
philosophically satisfying to let the wave function model everything in the
universe, and let the measurement/collapse rule apply to DeWitt worlds.

-- Ben

Arnold Neumaier

May 25, 2005, 12:21:36 PM
mark...@yahoo.com wrote:
> Arnold Neumaier wrote:
>
>>But since we cannot know the full state of the whole universe,
>>we are doomed to such reduced descriptions and hence to the
>>collapse.
>
>
> Or more strongly: as Smolin has pointed out, there may not even *be*
> such a thing as a state for the whole universe, not even a
> configuration space! This necessarily forces one into a local
> description and relative states cease to be merely an expedient, and
> become a fundamental part of the whole enterprise.
>
> In that case, collapse is there from the outset because reduced states
> are all you have.
>
> To put it more clearly: in the absence of a universal state or even a
> universal configuration space, the universe in effect becomes an open
> system.

I prefer to view this situation as an indication of missing degrees of
freedom.


> A second account, not mutually exclusive (and in fact closely related),
> for the collapse comes from the fact that the most general quantum
> theory [see note 1] admits both quantum AND classical degrees of
> freedom; the latter playing the role of superselection parameters.

Where is note 1?


> In the most general case, a relative state obtained by tracing out the
> environmental modes will cut across the superselection boundaries and
> give you a state which is a mixture with respect to the classical
> degrees.

How can this be? Traditionally, superselection rules refer to
operators that are always in an eigenstate. A relative trace is
then in the same state with respect to these operators.
Thus the classical degrees of freedom would not mix.
Or do you have something different in mind?


> States that lie in different superselection sectors combine only
> classically, never by superposition. By itself, a system can't evolve
> in such a way as to transmit information from the quantum to the
> classical degrees (i.e. a Schroedinger or automorphic evolution
> automatically precludes collapse-through-superselection). That's what
> would be required to get a "measurement" type event.

This happens in nonlinear quantum dynamics.

> But when combined
> with the feature of there being a cut-off at the boundary of the
> environment, you get the desired result of
> collapse-through-superselection.

Could you please provide a good reference?


Arnold Neumaier

Seratend

May 25, 2005, 12:21:35 PM
Aaron Bergman wrote:

> > Aaron Bergman wrote:
> > > In article <1116673523....@g49g2000cwa.googlegroups.com>,
> > > Seratend <ser_m...@yahoo.fr> wrote:
> > > > In other words, for the collapse postulate, every eigenbasis is ok to
> > > > express statistical results (the born rules), but in the lab, we just
> > > > have one basis where the statistics apply. Why?
> > >
> > > Because we're entangling with a specific classical observable.
> > >
> > Why? I mean the entanglement does not choose a basis. Every basis is ok
> > to express the statistics of the entanglement.
>
> This isn't just entanglement; it's entanglement with a classical
> observable. That selects a basis, the one in which the classical
> observable is diagonal.
>
Well, if your answer is simply that the preferred eigenbasis is the
eigenbasis of the classical observable, you are just saying that we
only know the preferred basis after the experiment, as QM does not
describe what the classical observable is, and the Born rules do not
select a preferred basis (every basis is ok for the Born rules).
Therefore, you are saying that QM theory does not give a prediction of
the preferred basis in an experiment (what the classical observable is
for this experiment). Therefore, we need to make at least one experiment
to learn the preferred basis of this classical observable.

> Not at all. Decoherence shows us how the basis is selected.

No, as you've said above, in this case it is the classical observable
that gives the preferred basis of the experiment (the one where we
really measure the Born rule statistics).
As long as QM does not predict the preferred observable of an
experiment (the Born rules), I don't know how you can say that
decoherence shows us how the basis is selected.

Seratend


Aaron Bergman

May 25, 2005, 5:17:54 PM
In article <1117014495.9...@z14g2000cwz.googlegroups.com>,
Seratend <ser_m...@yahoo.fr> wrote:

> Aaron Bergman wrote:
> > Aaron Bergman wrote:
> > > > In article <1116673523....@g49g2000cwa.googlegroups.com>,
> > > > Seratend <ser_m...@yahoo.fr> wrote:
> > > > > In other words, for the collapse postulate, every eigenbasis is ok to
> > > > > express statistical results (the born rules), but in the lab, we just
> > > > > have one basis where the statistics apply. Why?
> > > >
> > > > Because we're entangling with a specific classical observable.
> > > >
> > > Why? I mean the entanglement does not choose a basis. Every basis is ok
> > > to express the statistics of the entanglement.
> >
> > This isn't just entanglement; it's entanglement with a classical
> > observable. That selects a basis, the one in which the classical
> > observable is diagonal.
> >
> Well, if your answer is simply that the preferred eigenbasis is the
> eigenbasis of the classical observable, you are just saying that we
> only know the preferred basis after the experiment, as QM does not
> describe what the classical observable is, and the Born rules do not
> select a preferred basis (every basis is ok for the Born rules).
> Therefore, you are saying that QM theory does not give a prediction of
> the preferred basis in an experiment (what the classical observable is
> for this experiment). Therefore, we need to make at least one experiment
> to learn the preferred basis of this classical observable.

When you describe an experiment in quantum mechanics, you separate out
the world into classical and quantum components. The classical component
comes with a preferred basis given by the macrostates of the measurement
apparatus. When you entangle the measuring device with the quantum
system, decoherence tells you that the entangled states essentially no
longer interfere. Now, why we only perceive one branch of the decoherent
wavefunction is a complete mystery, but the basis is completely
determined by the experimental setup.
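
A minimal sketch of this basis selection (an illustrative toy model,
with the measuring apparatus reduced to a single "pointer" qubit):
entangling the system with the pointer via a CNOT leaves the system's
reduced density matrix diagonal exactly in the basis the interaction
copies.

```python
import numpy as np

# System starts in the superposition (|0> + |1>)/sqrt(2); pointer in |0>.
plus = np.array([1, 1]) / np.sqrt(2)
ket0 = np.array([1, 0])
psi = np.kron(plus, ket0)

# Pre-measurement interaction: a CNOT copies the pointer basis into the
# apparatus, giving (|00> + |11>)/sqrt(2).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
psi = CNOT @ psi

# Reduced density matrix of the system: trace out the pointer.
rho = np.outer(psi, psi).reshape(2, 2, 2, 2)
rho_sys = np.einsum('ikjk->ij', rho)

print(rho_sys)
# Before the interaction the system was pure, with off-diagonal
# (interference) terms 1/2; after entanglement they vanish, and rho_sys
# is diagonal precisely in the basis the interaction singled out.
```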

Aaron

Seratend

May 26, 2005, 2:34:12 PM

Aaron Bergman wrote:
> In article <1117014495.9...@z14g2000cwz.googlegroups.com>,

>
> When you describe an experiment in quantum mechanics, you separate out
> the world into classical and quantum components. The classical component
> comes with a preferred basis given by the macrostates of the measurement
> apparatus. When you entangle the measuring device with the quantum
> system, decoherence tells you that the entangled states essentially no
> longer interfere.

I understand you seem to adopt the Copenhagen interpretation. If this
is the case, once again, CI does not explain the preferred basis; it
is external data compatible with the postulates of QM, but not
explained by QM theory (the classical apparatus and its eigenbasis in
CI are not explained). I understand decoherence tells us that there is
a local basis where there is no interference (the projected density
matrix is diagonal in this eigenbasis), provided there is sufficient
interaction with the environment. However, nothing in QM prevents one
from choosing another eigenbasis where we have interference; otherwise,
it would be impossible to observe interference in quantum experiments
(e.g. double-slit interference, squeezed states, etc.). Therefore, I
really do not understand how you can say that the basis where the local
density matrix is diagonal is the preferred eigenbasis of the
experiment, and whether or not this basis is the one chosen by the
classical apparatus (what in QM's postulates infers this result?).

> Now, why we only perceive one branch of the decoherent
> wavefunction is a complete mystery, but the basis is completely
> deteremined by the experimental setup.
>
> Aaron

I understand what you call one branch of the decoherent wavefunction as
an outcome of the experiment (the state after the experiment given by
the outcome). If this is the case, I have no problem with this part as
QM theory deals explicitly with statistical predictions of experiment
outcomes and not with the prediction of the outcomes (the Born rules).
It is mainly a matter of choice (statistical versus deterministic
description) in a theory.

My question regarding this theory remains: does QM theory explain the
preferred basis or not?

Seratend.


Arnold Neumaier

May 26, 2005, 3:03:41 PM
Frank Hellmann wrote:

The two are not equivalent but related. Decoherence is the
apparent collapse due to entanglement with the environment.
It does not lead to state reduction and does not solve the
measurement problem. See my paper
A. Neumaier,
Collapse challenge for interpretations of quantum mechanics
quant-ph/0505172
and the recent survey
M. Schlosshauer,
Decoherence, the Measurement Problem, and Interpretations of
Quantum Mechanics
Rev. Mod. Phys. 76 (2004), 1267--1305.
quant-ph/0312059.

Collapse is the result of approximating the entangled dynamics
by a Markov approximation, resulting in a dissipative master
equation of Lindblad type. The latter have built in collapse.
The validity of the Markov approximation is an additional
assumption beyond decoherence. It is responsible for the
collapse.

Quantum optics and hence all high quality experiments for
the foundations of quantum mechanics are unthinkable without
the Markov approximation.
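
As a minimal instance of such a Lindblad-type master equation (an
illustrative sketch, not taken from the cited papers): pure dephasing
with jump operator L = sigma_z and vanishing Hamiltonian damps the
off-diagonal elements exponentially while leaving the populations
fixed.

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)

def lindblad_rhs(rho, gamma):
    """d rho/dt = gamma (L rho L^+ - (1/2){L^+ L, rho}) with L = sigma_z
    and zero Hamiltonian: pure dephasing."""
    return gamma * (sz @ rho @ sz.conj().T
                    - 0.5 * (sz @ sz @ rho + rho @ sz @ sz))

# Start in a pure superposition state: coherences are maximal.
rho = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)

gamma, dt = 1.0, 0.001
for _ in range(5000):                 # Euler steps up to t = 5
    rho = rho + dt * lindblad_rhs(rho, gamma)

print(np.round(rho, 4))
# The diagonal entries stay 1/2; the off-diagonals decay like
# exp(-2 gamma t) -- the "collapse" onto the sigma_z eigenbasis that is
# built into the Markovian master equation.
```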


Arnold Neumaier

Arnold Neumaier

May 26, 2005, 3:04:02 PM
Seratend wrote:

> Arnold Neumaier wrote:
>
>>Seratend wrote:
>>
>>>Arnold Neumaier wrote:
>>>
>>>>For example, if we measure the mass of a piece of metal, we actually
>>>>measure the expectation of the mass operator <M> (M is the sum of
>>>>all the particle masses).
>>>>
>>>This is only true, in the absolute, if the piece of metal is made of an
>>>infinite number of particle masses (convergence in law). What we really
>>>measure is the value of M= sum_i Mi and not <M>.
>>
>>No. What we measure is an approximation of
>><integral_Omega dx a^*(x)Ma(x)> over the region Omega of interest.
>>This is the expression that figures in statistical mechanics
>>derivations of macroscopic elasticity theory.
>>
> We can apply the statistics to small independent areas. This does not
> change the fact that at the end what we really measure on this local
> area is M_area= sum_{i in the area} Mi and not <M>_{area} (what you call
> the approximation when the area is not well defined).

What we measure in practice is a weighted spacetime average
m_exp(x) = integral dy rho(x-y) m(y)
of a mass density field m(x) whose definition is given by
elasticity theory. (Because the latter tells us the meaning
of the measurements.) If one looks at the way elasticity theory
derives from statistical mechanics, one realizes that m(x)
is the ensemble mean of some microscopic field operator M(x)
constructed from the basic observables of the underlying
quantum system. Thus we have
m_exp(x) = integral dy rho(x-y) <M(y)> = <M_exp(x)>,
the expectation of the operator
M_exp(x) = integral dy rho(x-y) M(y).
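
The weighted average m_exp(x) = integral dy rho(x-y) m(y) is a
convolution of the microscopic field with the instrument's resolution
kernel; a numerical sketch with made-up numbers and a Gaussian kernel:

```python
import numpy as np

x = np.linspace(-5, 5, 1001)
dx = x[1] - x[0]

# A microscopic mass density with fine-grained structure
m = 1.0 + 0.3 * np.sin(40 * x)

# Instrument resolution kernel rho, normalized so integral rho = 1
sigma = 0.5
rho = np.exp(-x**2 / (2 * sigma**2))
rho /= rho.sum() * dx

# m_exp(x) = integral dy rho(x - y) m(y): the measured, smoothed field
m_exp = np.convolve(m, rho, mode='same') * dx

# The rapid oscillations average out: away from the boundary, the
# measured field is close to the expectation value 1.0 everywhere.
print(m_exp[400:600].round(3))
```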

Arnold Neumaier

r...@maths.tcd.ie

May 26, 2005, 3:06:33 PM
Arnold Neumaier <Arnold....@univie.ac.at> writes:

>r...@maths.tcd.ie wrote:

>> A physicist who gives an
>> apparently straightforward, if slightly confusing, answer
>> to a question about physics, without making it clear that
>> this question has an unusual status in physics, that,
>> unlike most questions in physics, this one has no
>> well-established answer, is implicitly telling the
>> person that this question is just like other questions
>> in physics, that is has a well-established answer, and
>> that in fact the answer being given is the well-established
>> one.
>>
>> Now this is what you did, and you interpreted my post as an attack
>> on you, became angry, and treated me to three question marks and a
>> lecture about how everything is mere opinion and belief:
>>
>>>Everything called
>>>knowledge is in fact a set of beliefs of the person claiming it.

>You seem to be projecting _your_ anger onto me.

Perhaps it seems that way to you; I assure you that I'm not.

>That you find three question marks and a short essay on learning
>truth in controversial matters an aggressive behavior in a context
>where controversial things are discussed is a sign of your emotional
>state rather than a property of my contribution.

Perhaps you use three question marks and assertions that everything
is mere opinion and belief all the time, but I read quite a lot
of your posts, and often enjoy reading them, and it seems that
you rarely do that. Your behaviour in this case seemed to be
an exception to your normal tone.

>To say that ''Everything called knowledge is in fact a set
>of beliefs of the person claiming it.'' does not contradict the
>objectivity of mathematical definitions. When I say that a Banach
>space is a normed, complete vector space, I both state my belief
>and happen to coincide with the social consensus of the guild of
>mathematicians.

Indeed, but the question is how it appears to the person for whom
your reply was intended. You are saying that one can adopt a
particular point of view, namely that everything anybody ever
says is their opinion and must be considered that way, and that it
might or might not coincide with what is well-established, and that
the onus is on the reader to determine whether what is said
is well-established or not. From this point of view, you weren't
being dishonest; I agree.

>And when I say that state reduction is a
>physical process, I both state my belief and happen to coincide with
>famous physicists like von Neumann and many others, and this is good
>enough to make this statement honestly.

Well, von Neumann was actually of the opinion that state reduction
wasn't a physical process, as far as I can determine from reading
his papers. In your post, you also said (more or less) that it
wasn't a physical process, so I presume you left out a "not"
above.

I happen to also think it's unlikely that state vector collapse is
a physical process, although I wouldn't presume to dogmatically
state that it isn't if asked by somebody who wasn't already familiar
with the subject. If I did that, I would be presenting what
is merely my opinion as though I were certain that it was
true.

Consider, for example, somebody who liked Penrose's gravitational
collapse interpretation. According to your criteria of honesty,
that person could say "Yes, collapse is a physical process,"
while being perfectly honest, since his opinion coincides
with that of a famous physicist. The poor person who asked
the question in the first place would have gotten two "honest"
answers to his question, one saying no (from you) and one
saying yes. Neither of the answerers would have given any
indication that their answer was merely their opinion,
and so the questioner would be left confused, and would
have to distrust future answers that he got from supposedly
respectable physicists.

You may very well say that this is a harsh lesson that he needs to
learn. I would say that it would be better if people clearly
distinguished between what was merely their opinion and
what is well-established, and then those who ask questions
would be able to trust the answers that physicists give them.

As another example, if somebody asks "Is the Riemann hypothesis true?",
most knowledgeable people would reply that it isn't known whether
or not it is true, although it is widely believed that it is.
Somebody who simply says "Yes, it's true," would be being honest
by your criteria, but not by mine.

>It is ridiculous to require a percentage of people in a field
>to agree with you before you utter a statement without adding
>a qualification like 'I believe' or 'Some physicists believe'.
>There would never be an agreement on the percentage required
>to do so.

I agree. I never suggested that one should require a
specific percentage of physicists to agree with one before
saying something. I do think, however, that if one knows
that a statement is merely an opinion, and that more than
50% of physicists hold the opposite opinion, one can
say that it is controversial, and that it shouldn't
be stated as though it were a fact. I gave 50% as an example
of a figure which would indicate a controversy, not as
a boundary between controversy and non-controversy.

R.

Aaron Bergman

unread,
May 26, 2005, 3:06:40 PM5/26/05
to
In article <1116962439.6...@g14g2000cwa.googlegroups.com>,
mark...@yahoo.com wrote:

> Arnold Neumaier wrote:
> > But since we cannot know the full state of the whole universe,
> > we are doomed to such reduced descriptions and hence to the
> > collapse.
>
> Or more strongly: as Smolin has pointed out, there may not even *be*
> such a thing as a state for the whole universe, not even a
> configuration space! This necessarily forces one into a local
> description and relative states cease to be merely an expedient, and
> become a fundamental part of the whole enterprise.

Of course, it would be nice to have a theory that could describe the
whole universe.

> In that case, collapse is there from the outset because reduced states
> are all you have.

I don't agree with this formulation. If you have unitary evolution, you
don't have collapse. Or, put another way, pure states never evolve into
mixed states.

> To put it more clearly: in the absence of a universal state or even a
> universal configuration space, the universe in effect becomes an open
> system.
>
> A second account, not mutually exclusive (and in fact closely related),
> for the collapse comes from the fact that the most general quantum
> theory [see note 1]

Note one appears to be missing.

> admits both quantum AND classical degrees of
> freedom; the latter playing the role of superselection parameters.

I'd find such a thing very difficult to implement.

> In the most general case, a relative state obtained by tracing out the
> environmental modes will cut across the superselection boundaries and
> give you a state which is a mixture with respect to the classical
> degrees.
>
> States that lie in different superselection sectors combine only
> classically, never by superposition. By itself, a system can't evolve
> in such a way as to transmit information from the quantum to the
> classical degrees (i.e. a Schroedinger or automorphic evolution
> automatically precludes collapse-through-superselection). That's what
> would be required to get a "measurement" type event. But when combined
> with the feature of there being a cut-off at the boundary of the
> environment, you get the desired result of
> collapse-through-superselection.

I'd prefer to ditch the superselection. If the world was how I wanted it
to be, there'd be an actual physical nonunitary collapse process that
makes things classical at large scales. Unfortunately, the world has no
obligation to conform to my needs. But I can hope.

Aaron

Seratend

unread,
May 26, 2005, 3:07:17 PM5/26/05
to
r...@maths.tcd.ie wrote:

> Seratend <ser_m...@yahoo.fr> writes:
>
>r...@maths.tcd.ie wrote:

> >In other words, for the collapse postulate, every eigenbasis is ok to
> >express statistical results (the Born rules), but in the lab, we just
> >have one basis where the statistics apply. Why?
>
> >(and please do not use decoherence as the way to solve this issue)
>
> I'm not sure I understand the question very clearly, but I'll try
> to answer it. There is a special basis - the position basis, although
> this isn't an entire basis for the Hilbert space, since it doesn't
> span the spin part of the Hilbert space, for example. Anyway, all
> measurements are ultimately measurements of position, as Bell
> was fond of saying, for example the positions of instrument
> pointers.

I do not know how you can come to such a conclusion (for the position).
The postulates of QM are clear (formally). Born rules may be applied to
any eigenbasis (i.e. any observable). If position becomes the preferred
basis for the observation, this postulate has to be changed (with
additional postulates or anything else). If this is the case, I would
like to know it explicitly.


> Because all interactions which can serve as measurements are local,
> meaning that if system A wants to exchange information with system
> B it has to be in the same region of space, interactions with
> neighboring objects tend to act as position measurements.
>

I think you are mixing the unitary time evolution of the global system,
which is impacted by the type of interaction (always local), with the
Born rules, which may be applied to any eigenbasis of this system
(locally or globally).
For example, in the classical hydrogen atom, I just have a position
interaction (the Coulomb interaction); however, what I usually measure
is an energy (the radiation induced by transitions between two energy
eigenstates) and not a position. How do you explain that?

Seratend.

Seratend

unread,
May 26, 2005, 3:07:08 PM5/26/05
to
Arnold Neumaier wrote:

> [Seratend replied here in the other thread:]
>
> > This is only true, in the absolute, if the piece of metal is made of an
> > infinite number of particle masses (convergence in law). What we really
> > measure is the value of M = sum_i Mi and not <M>.
>
> No. What we measure is an approximation of
> <integral_Omega dx a^*(x)Ma(x)> over the region Omega of interest.
> This is the expression that figures in statistical mechanics
> derivations of macroscopic elasticity theory.
>
We can apply the statistics to small independent areas. This does not
change the fact that in the end what we really measure on this local
area is M_area = sum_{i in the area} Mi and not <M>_{area} (what you call
the approximation when the area is not well defined).
An instance (an outcome) of the object has well-defined (maybe unknown
to the observer) values of Mi in all areas (independence of the Mi random
variables, or commuting observables). However, in the end one sums over
all these areas (the extensive property of M) to obtain the value we
really measure (the approximation of <M>).

In my opinion, this is simply the application of the law of large
numbers, and the grand or micro canonical ensembles just use this
property with additional hypotheses.
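[The law-of-large-numbers point can be checked numerically. A minimal sketch of my own, not from the thread; the exponential distribution, the seed, and the sample sizes are arbitrary illustrative choices:]

```python
import numpy as np

rng = np.random.default_rng(0)

# n particle masses Mi, i.i.d. with mean 1: the measured total
# M = sum_i Mi concentrates around n * <Mi>, so for macroscopic n the
# value M and the expectation n * <Mi> become indistinguishable.
errors = {}
for n in (100, 10_000, 1_000_000):
    masses = rng.exponential(1.0, size=n)     # arbitrary choice of law
    errors[n] = abs(masses.sum() / n - 1.0)   # relative gap, ~ 1/sqrt(n)
    print(n, errors[n])
```

[The relative gap between M and n*<Mi> shrinks like 1/sqrt(n), which is why both descriptions agree in the "large numbers domain" mentioned below.]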

Seratend.

Seratend

unread,
May 26, 2005, 3:07:26 PM5/26/05
to
Frank Hellmann wrote:

>
> I have seen this argument quite a few times; do you have any good,
> concise review papers on this?
> A measurement is represented by a projection operator, and you can
> develop the whole apparatus of the statistical interpretation of QM
> without ever referring to a basis, so I really don't see where this
> problem comes into things.
>
> Frank.

We may also view the measurement problem as the quest for a generic
superselection rule (the selection of the preferred basis): the
prediction of the preferred basis of a given experiment (which does not
seem to be given by the current QM postulates). However, I do not know
if anyone has ever proved that such a rule exists (either by tests or by
a theorem).

I may propose the following papers:

* Elements of Environmental Decoherence, Joos 1999,
arXiv:quant-ph/9908008 (short one)
* Decoherence, the Measurement Problem, and Interpretations of Quantum
Mechanics, Schlosshauer 2003, arXiv:quant-ph/0312059 (extensive review
of the problem end of 2003, very complete and lot of pointers to other
papers)

If someone has other recent papers, do not hesitate to post them : ).

Seratend.

Aaron Bergman

unread,
May 26, 2005, 8:22:40 PM5/26/05
to
In article <1117095659.1...@o13g2000cwo.googlegroups.com>,
Seratend <ser_m...@yahoo.fr> wrote:

> Aaron Bergman wrote:
> > In article <1117014495.9...@z14g2000cwz.googlegroups.com>,
> >
> > When you describe an experiment in quantum mechanics, you separate out
> > the world into classical and quantum components. The classical component
> > comes with a preferred basis given by the macrostates of the measurement
> > apparatus. When you entangle the measuring device with the quantum
> > system, decoherence tells you that the entangled states essentially no
> > longer interfere.
>
I understand you seem to adopt the Copenhagen interpretation.

I don't believe in any 'interpretation' of quantum mechanics. I'm just
confused by all of it. As I said elsewhere, in my ideal world, there
would be a physical collapse process leading to the emergence of a
classical world. Unfortunately, I'm not sure I believe that's likely.

> If this
> is the case, once again, CI does not explain the preferred basis; this
> is external data compatible with the postulates of QM, but not
> explained by QM theory (the classical apparatus and its eigenbasis in
> CI are not explained). I understand decoherence tells us that there is
> a local basis where there is no interference (the projected density
> matrix is diagonal in this eigenbasis), provided there is sufficient
> interaction with the environment. However, nothing in QM prevents one
> from choosing another eigenbasis where we have interference; otherwise,
> it would be impossible to observe interference in quantum experiments
> (e.g. double-slit interference, squeezed states, etc.). Therefore, I
> really do not understand how you can say that the basis where the local
> density matrix is diagonal is the preferred eigenbasis of the
> experiment, and that this basis may or may not be the one chosen by the
> classical apparatus (what in the QM postulates infers this result).

I'm not sure I understand what you're asking. Let me try to answer
something, then. The question of why we observe what we observe is
completely unanswered by quantum mechanics. This, however, is a
different issue from the existence of the basis for observation. As you
seem to agree, the diagonalization of the density matrix in a given
experimental setup determines a preferred basis. The connection between
this basis and our perception of reality is too hard a question for me,
but it seems to be right.

[...]

> My question regarding this theory remains: does QM theory explain the
> preferred basis or not?

QM (i.e., decoherence) explains how preferred bases arise in experiments.
It does not explain why we perceive what we perceive.

Aaron

Seratend

unread,
May 27, 2005, 8:41:58 AM5/27/05
to
Arnold Neumaier wrote:

>
> What we measure in practice is a weighted spacetime average
> m_exp(x) = integral dy rho(x-y) m(y)
> of a mass density field m(x) whose definition is given by
> elasticity theory. (Because the latter tells us the meaning
> of the measurements.) If one looks at the way elasticity theory
> derives from statistical mechanics, one realizes that m(x)
> is the ensemble mean of some microscopic field operator M(x)
> constructed from the basic observables of the underlying
> quantum system. Thus we have
> m_exp(x) = integral dy rho(x-y) <M(y)> = <M_exp(x)>,
> the expectation of the operator
> M_exp(x) = integral dy rho(x-y) M(y).
>
> Arnold Neumaier

You seem to be definitely a lover of the deterministic description
(of the integral)! ; ). It depends mainly on the point of view: using
the Monte Carlo method or the small-area method (I do not remember the
correct term) to compute the expectation (the integral) of a given
variable.
I prefer the law of large numbers. It gives, for me, the most correct
view of finite-size materials (M = sum_i mi): what we usually find in
real conditions, though this method is not practical for formal
calculations.
The density field approach requires the definition of a hypothetical
mean field that gives the right values as long as we do not look at too
small scales; in other words, mi/M is an approximation of the mean
field density.

Both approaches give the same value (in the large-numbers domain).
However, if we believe that the material is made of a huge, but finite,
discrete number of components, I think M = sum_i mi is the correct one
to describe what we measure. If you think that the material is made of
continuous components, then it is your method that describes better what
it is (i.e. the 19th-century point of view, a material is made of
"continuous" components, versus the 20th-century point of view, a
material is "discrete").
Seratend.


Arnold Neumaier

unread,
May 27, 2005, 2:05:02 PM5/27/05
to
Seratend wrote:

> We can apply the statistics to small independent areas. This does not
> change the fact that at the end what we really measure on this local
> area is M_area = sum_{i in the area} Mi and not <M>_{area} (what you call
> the approximation when the area is not well defined).

How do you know what we 'really' measure? Measurement is a complicated
process...


> An instance (an outcome) of the object has well-defined (maybe unknown
> to the observer) values of Mi in all areas (independence of Mi random
> variables or commuting observables). However, at the end one sums all
> over these areas (the extensive property of M) to obtain the value we
> really measure (the approximation of <M>).

But these areas themselves are only vaguely defined, and then shrunk to
zero volume to get the hydrodynamic limit. Your description cannot even
imitate this limit, and thus does not make sense for the final
measurements, which are done under a thermodynamical interpretation and
not in terms of atoms.


Arnold Neumaier

Seratend

unread,
May 28, 2005, 5:14:29 AM5/28/05
to
Aaron Bergman wrote:

> In article <1117095659.1...@o13g2000cwo.googlegroups.com>,
> Seratend <ser_m...@yahoo.fr> wrote:
>
> >
> > I understand you seem to adopt the copenhagen interpretation.
>
> I don't believe in any 'interpretation' of quantum mechanics. I'm just
> confused by all of it. As I said elsewhere, in my ideal world, there
> would be a physical collapse process leading to the emergence of a
> classical world. Unfortunately, I'm not sure I believe that's likely.
>
Ok, I also prefer to leave interpretation to philosophy : ).
I have another question: what do you call a classical world? Frankly, I
do not understand that. QM deals only with statistics of outcomes
and, in my opinion, outcomes are the "classical world" (what we "see").
Therefore, it is relatively difficult for me to understand people who
want to demonstrate that there is a physical collapse leading to the
outcomes.
I can only understand this sentence as the quest for a deterministic
(causal) description of outcomes compatible with the statistics of QM.
If this is the case, we already have such a description: Bohmian
mechanics for the position eigenbasis (and equivalent formulations in
different eigenbases).

As in classical probability, I can work with the statistical
description it provides whenever its results are more practical than
the ones of a deterministic description explaining the outcomes. I have
a mathematical separation in the description of outcomes: statistical or
deterministic. Both are ok. Choosing one versus the other is only a
matter of taste rather than a necessity.
Hence my non-understanding of the quest for a physical collapse behind
the outcomes of QM.

This subject, as you have already noticed, is different from the problem
of the preferred eigenbasis and decoherence (the analog of the law
of large numbers in QM).

> I'm not sure I understand what you're asking. Let me try to answer
> something, then. The question of why we observe what we observe is
> completely unanswered by quantum mechanics.

So you are saying that QM theory does not explain the preferred basis.

> This, however, is a different issue from the existence of the basis for
> observation.

I do not understand you. To describe the observation of something I
need a basis. For example, in the deterministic world, a signal s(t) is
well described by s(t) or by its Fourier transform ^s(w). However, I
usually see the values of the signal at every time (s(t)), and not the
values at a given frequency.

> As you seem to agree, the diagonalization of the density matrix in a given
> experimental setup determines a preferred basis.

No, I just say that we obtain a basis. I just question whether it is
another postulate, since if I take the QM postulates, I may apply the
collapse postulate to this local state in any basis I want (especially
one different from the eigenbasis):

For any diagonal density matrix I have: rho = sum_i pi |ai><ai|.
Choosing another basis:
|ai> = sum_j cij |bj>

=> rho = sum_{i,j,k} pi.cij.cik* |bj><bk|

rho is no longer diagonal. Both eigenbases are possible for the
measurement outcomes as expressed by the QM postulates.
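[The two-line basis-change computation above can be checked directly. A small NumPy sketch of my own; a 2x2 rotation stands in for the coefficients cij:]

```python
import numpy as np

# Diagonal density matrix in the {|a_i>} basis: rho = sum_i p_i |a_i><a_i|
p = np.array([0.7, 0.3])
rho = np.diag(p)

# Change of basis |a_i> = sum_j c_ij |b_j>, with C unitary; here a
# rotation by 30 degrees.
theta = np.pi / 6
C = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Components of the same state in the {|b_j>} basis:
# rho'_{jk} = sum_i c_ij p_i c_ik*  =  (C^T rho C^*)_{jk}
rho_b = C.T @ rho @ C.conj()

print(np.round(rho_b, 3))   # diagonal 0.6 and 0.4, off-diagonals -0.173
```

[The off-diagonal entries are nonzero: the very same state, diagonal in one eigenbasis, shows interference terms in a rotated one, exactly as the formula says.]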

> The connection between this basis and our perception of reality is too
> hard a question for me,
> but it seems to be right.
>

Well, if you accept decoherence, you may accept situations where the
interaction between the environment and the system is so weak that, to
a first approximation, no entanglement has occurred. In this case, what
eigenbasis do we select?

> [...]
>
> > My question regarding this theory remains: does QM theory explain the
> > preferred basis or not?
>
> QM (ie, decoherence) explains how preferred bases arise in experiments.
> It does not explain why we perceive what we perceive.
>

If I take two experiments, the observation of double-slit interference
and the emission of radiation by a hydrogen gas: in one case I see
the position (an interference pattern), and in the second case the energy
eigenstate of the H atoms. Frankly, I see no common preferred basis, and
the last observer is the human (the projector) in this example.
How is decoherence able to explain/resolve this difference in the
preferred eigenbasis observed?

Seratend.

Arnold Neumaier

unread,
May 28, 2005, 5:15:15 AM5/28/05
to
Aaron Bergman wrote:

> In article <1116962439.6...@g14g2000cwa.googlegroups.com>,
> mark...@yahoo.com wrote:
>
>>Arnold Neumaier wrote:
>>
>>>But since we cannot know the full state of the whole universe,
>>>we are doomed to such reduced descriptions and hence to the
>>>collapse.
>>
>>Or more strongly: as Smolin has pointed out, there may not even *be*
>>such a thing as a state for the whole universe, not even a
>>configuration space! This necessarily forces one into a local
>>description and relative states cease to be merely an expedient, and
>>become a fundamental part of the whole enterprise.
>
> Of course, it would be nice to have a theory that could describe the
> whole universe.

The standard model claims to be a theory of the whole universe
in a flat, gravitationless spacetime. It specifies expectations of
all fields and correlations at all possible combinations of
positions and times, and hence describes the world anywhere.
(That it is not in complete agreement with observation because it
does not treat gravity correctly does not invalidate the general
observation.)


>>In that case, collapse is there from the outset because reduced states
>>are all you have.
>
> I don't agree with this formulation. If you have unitary evolution, you
> don't have collapse. Or, put another way, pure states never evolve into
> mixed states.

But if you have unitary evolution, you have the whole universe.
For in that case nothing outside that system can interact with it
(by unitarity), so all observations are restricted to this system
itself. Which means, it contains us and everything we interact with,
hence the whole universe.


> I'd prefer to ditch the superselection. If the world was how I wanted it
> to be, there'd be an actual physical nonunitary collapse process that
> makes things classical at large scales. Unfortunately, the world has no
> obligation to conform to my needs. But I can hope.

There are some such scenarios that are not yet contradicted by
experiment....


Arnold Neumaier

Arnold Neumaier

unread,
May 28, 2005, 5:15:06 AM5/28/05
to
Seratend wrote:
> Arnold Neumaier wrote:
>
>>What we measure in practice is a weighted spacetime average
>> m_exp(x) = integral dy rho(x-y) m(y)
>>of a mass density field m(x) whose definition is given by
>>elasticity theory. (Because the latter tells us the meaning
>>of the measurements.) If one looks at the way elasticity theory
>>derives from statistical mechanics, one realizes that m(x)
>>is the ensemble mean of some microscopic field operator M(x)
>>constructed from the basic observables of the underlying
>>quantum system. Thus we have
>> m_exp(x) = integral dy rho(x-y) <M(y)> = <M_exp(x)>,
>>the expectation of the operator
>> M_exp(x) = integral dy rho(x-y) M(y).

>>
>>Arnold Neumaier
>
>
> You seem to be definitely a lover of the deterministic description

Yes. I believe that there is something deterministic about the universe!


> The density field approach requires the definition of an hypothetic
> mean field that gives the good values as long as we do not look at too
> small scales, in other word, mi/M is an approximation of the mean
> field density.

This is what the various approximation schemes to derive hydrodynamics
assume. Thus I assume it, too.


> Both approaches give the same value (in the large-numbers domain).
> However, if we believe that the material is made of a huge, but finite,
> discrete number of components, I think M = sum_i mi is the correct one
> to describe what we measure. If you think that the material is made of
> continuous components, then it is your method that describes better what
> it is (i.e. the 19th-century point of view, a material is made of
> "continuous" components, versus the 20th-century point of view, a
> material is "discrete").

In the 1925 view of physics, it is discrete.
In post-1975 physics, it is again continuous, since the basic
objects in the universe are quantum _fields_, and particles
only arise through some ill-understood process, unless the field is
free and the two descriptions are equivalent.

Arnold

Arnold Neumaier

unread,
May 28, 2005, 5:15:23 AM5/28/05
to
r...@maths.tcd.ie wrote:

> Arnold Neumaier <Arnold....@univie.ac.at> writes:
>
>>r...@maths.tcd.ie wrote:
>

>>>Now this is what you did, and you interpreted my post as an attack
>>>on you, became angry, and treated me to three question marks and a
>>>lecture about how everything is mere opinion and belief:
>>>
>>>
>>>>Everything called
>>>>knowledge is in fact a set of beliefs of the person claiming it.
>
>>You seem to be projecting _your_ anger onto me.
>
> Perhaps it seems that way to you; I assure you that I'm not.

Then it must have been an artifact of the medium Usenet.
It seems to make statements look more emotional than they
are meant, which occasionally (and in unmoderated groups often)
leads to an involuntary rise in aggression.


> Perhaps you use three question marks and assertions that everything
> is mere opinion and belief all the time, but I read quite a lot
> of your posts, and often enjoy reading them,

Thanks for the compliment. I try to be readable, informative,
and polite, though sometimes I am quite explicit about what I
think of a poor contribution.


> and it seems that
> you rarely do that. Your behaviour in this case seemed to be
> an exception to your normal tone.

I raise three question marks when I find something really unbelievable,
though (because it is not about something formal) one cannot describe it
as wrong.


>>To say that ''Everything called knowledge is in fact a set
>>of beliefs of the person claiming it.'' does not contradict the
>>objectivity of mathematical definitions. When I say that a Banach
>>space is a normed, complete vector space, I both state my belief
>>and happen to coincide with the social consensus of the guild of
>>mathematicians.
>
> Indeed, but the question is how it appears to the person for whom
> your reply was intended. You are saying that one can adopt a
> particular point of view, namely that everything anybody ever
> says is their opinion and must be considered that way,

At least this is the way I take what others say. It is a very
efficient way of looking at communication. Then I sieve through
what nourishes my hunger for truth and understanding - independent
of whether the speaker spoke the truth or a prejudice. From an
interesting statement I sometimes learn even when the speaker does
not recognize its faults, and a true statement may fail to interest
me if it is phrased in a way that I cannot recognize its relevance.


> and that it
> might or might not coincide with what is well-established, and that
> the onus is on the reader to determine whether what is said
> is well-established or not. From this point of view, you weren't
> being dishonest; I agree.
>
>
>>And when I say that state reduction is a
>>physical process, I both state my belief and happen to coincide with
>>famous physicists like von Neumann and many others, and this is good
>>enough to make this statement honestly.
>
> Well, von Neumann was actually of the opinion that state reduction
> wasn't a physical process, as far as I can determine from reading
> his papers. In your post, you also said (more or less) that it
> wasn't a physical process, so I presume you left out a "not"
> above.

No. I meant ''state reduction is a physical process'' since this is
what I said and what physicists observe. See


A. Neumaier,
Collapse challenge for interpretations of quantum mechanics
quant-ph/0505172

(see also http://www.mat.univie.ac.at/~neum/collapse.html).
Von Neumann takes the collapse as an axiom, hence also testifies to its
reality. I'd appreciate getting a clear reference where he states
the contrary (if he does so).


> Consider, for example, somebody who liked Penrose's gravitational
> collapse interpretation. According to your criteria of honesty,
> that person could say "Yes, collapse is a physical process,"
> while being perfectly honest, since his opinion coincides
> with that of a famous physicist.

Yes.

> The poor person who asked
> the question in the first place would have gotten two "honest"
> answers to his question, one saying no (from you) and one
> saying yes.

This is the typical situation one finds when controversy prevails.
Indeed, in some sense, controversy _is_ the coexistence of
disagreeing honest statements.


> Neither of the answerers would have given any
> indication that their answer was merely their opinion,
> and so the questioner would be left confused,

No. If the questioner is only a little intelligent, he would
be left with the impression that either at least one of the
speakers was incompetent, or that the topic is controversial.


> and would
> have to distrust future answers that he got from supposedly
> respectable physicists.

This is indeed healthy. One should not trust a statement without
good reason, independently of whether it carries the label
'this is the truth' or 'this is my personal opinion'. In fact,
the first may be a lie and the second the truth.


> You may very well say that this is a harsh lesson that he needs to
> learn. I would say that it would be better if people clearly
> distinguished between what was merely their opinion and
> what is well-established, and then those who ask questions
> would be able to trust the answers that physicists give them.

Only if they have no prejudice, and if the questioner recognizes that he
speaks with a person without prejudice. But both requirements are very
rarely met. So he is right to be cautious. Indeed, we learn from the
earliest age not to trust too early.


> As another example, if somebody asks "Is Riemann hypothesis true?",
> most knowledgeable people would reply that it isn't known whether
> or not it is true, although it is widely believed that it is.
> Somebody who simply says "Yes, it's true," would be being honest
> by your criteria,

Only if he really thinks it is true, according to the standards
of mathematics. For example, I think that Louis de Branges
can say it with honesty.
http://www.math.columbia.edu/~woit/blog/archives/000035.html

>
>>It is ridiculous to require a percentage of people in a field
>>to agree with you before you utter a statement without adding
>>a qualification like 'I believe' or 'Some physicists believe'.
>>There would never be an agreement on the percentage required
>>to do so.
>
>
> I agree. I never suggested that one should require a
> specific percentage of physicists to agree with one before
> saying something.

You suggested that one should require 50% in the mail which
caused my three question marks.


> I do think, however, that if one knows
> that a statement is merely an opinion, and that more than
> 50% of physicists hold the opposite opinion, one can
> say that it is controversial,

Should Einstein have declared his theory controversial
until he convinced half of the physicists? (Or of the
theoretical physicists only? Or of the astrophysicists
only? ...)

No. He was convinced it was right, and he was right to
have asserted it without scruples. It is part of the
scientific process that finding out the truth takes time.
But often the truth is found by bold people who know what
they know, even when in a minority.


Arnold Neumaier

Seratend

unread,
May 28, 2005, 2:44:53 PM5/28/05
to
r...@maths.tcd.ie wrote:

>
> It's called the Schmidt decomposition, and the l_i are called
> the Schmidt coefficients. If there's an initial state
> which factorises into the state of the system and
> the state of the measuring device, say, |psi_0>=|a_0>|b_0>,
> then after an interaction between them, that state of
> the system, |psi> can be decomposed according to the
> Schmidt decomposition, which gives a preferred basis
> for that interaction (there are pathological cases
> where the decomposition isn't unique, but these are
> "of measure zero").

Oh, yes? So you are claiming that degenerate states are rare,
especially in the system?
If this is the case, observing the unpolarized radiation of an atom in
a given energy eigenstate, EPR, etc. would not be possible.

The Schmidt decomposition principle is a nice "trick" to quickly say
you have a preferred eigenbasis in a measurement. However, each time I
look deeper into this decomposition, I see not so much difference
between claiming, a posteriori, the preferred basis of the measurement
(we know the basis when we do the measurement) and choosing a posteriori
an environment basis and interactions such that the density matrix is
diagonal in the basis of the measurement.
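[Concretely, the Schmidt decomposition being argued about here is just the singular value decomposition of the coefficient matrix psi_ij of the bipartite state. A minimal sketch of my own, with a random state:]

```python
import numpy as np

rng = np.random.default_rng(2)

# A random pure state of a bipartite system A (dim 3) x B (dim 4),
# |psi> = sum_{ij} psi_ij |i>_A |j>_B, stored as the matrix psi_ij.
psi = rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4))
psi /= np.linalg.norm(psi)        # Frobenius norm: normalize the state

# Schmidt decomposition = SVD of the coefficient matrix:
# |psi> = sum_k l_k |u_k>_A |v_k>_B, with l_k the Schmidt coefficients.
U, l, Vh = np.linalg.svd(psi, full_matrices=False)
print(l)                          # l_k >= 0, and sum_k l_k**2 = 1

# The reduced density matrix of A is diagonal in the Schmidt basis {|u_k>}:
rho_A = psi @ psi.conj().T
print(np.allclose(U.conj().T @ rho_A @ U, np.diag(l**2)))   # True
```

[When two or more l_k coincide (the degenerate case Seratend raises), the SVD basis is indeed not unique, which is the "measure zero" caveat in the quoted post.]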


> Because of the locality of information
> transfer, this tends to pick out basis states which
> are fairly localised (presuming that spatial degrees
> of freedom are included in the system and that all
> interactions are, in fact, local), or rather, were
> localised at the time of the interaction.
>

Please, if you use the word information, try to connect it (or replace it)
with the formal objects of QM theory. There is so much confusion (at
least for me) around this word, especially when applied to QM and
measurement, that for me it has no well-defined meaning.

>Not at all. Decoherence shows us how the basis is selected.
>
> Pretty much; the above process, repeated many times, with
> local interactions of many systems with one another, gives
> you decoherence, with the various systems "telling each other"
> where they are with respect to one another, leading to
> a preferred basis which is approximately the position basis,
> although it's not exactly a complete basis.
>
> R.

Decoherence explains why we do not easily see interference in the
classical world (the unitary evolution of the system with the
environment through interactions). However, I do not see how
decoherence can explain the preferred basis (and surely not the ad hoc
Schmidt decomposition). If this is the case, I think the collapse
postulate must be completed by another postulate. Or maybe QM
does not explain the preferred basis at all (it is outside the scope of
the theory)?
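What decoherence does and does not buy can at least be made quantitative; a toy sketch (the model and overlap value are my own assumptions, not taken from the thread):

```python
import numpy as np

# Toy model: a system qubit prepared in (|0> + |1>)/sqrt(2), with each
# of N environment qubits scattering into |e0> or |e1> depending on the
# system state, and single-qubit overlap <e0|e1> = c. Tracing out the
# environment multiplies the off-diagonal element of the reduced density
# matrix by c**N: interference is suppressed exponentially in N, but
# the diagonal (the candidate "preferred basis") is put in by hand.
c = 0.9                                  # assumed overlap per qubit
for N in (0, 10, 50):
    off = 0.5 * c**N
    rho = np.array([[0.5, off],
                    [off, 0.5]])         # reduced density matrix
    print(N, rho[0, 1])                  # off-diagonal shrinks with N
```

The sketch only shows the suppression of the off-diagonal terms; it says nothing about why this particular basis was chosen, which is the poster's point.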

Seratend.


Aaron Bergman

May 29, 2005, 2:26:34 PM
In article <1117183732....@z14g2000cwz.googlegroups.com>,
"Seratend" <ser_m...@yahoo.fr> wrote:

> Aaron Bergman wrote:
> > In article <1117095659.1...@o13g2000cwo.googlegroups.com>,
> > Seratend <ser_m...@yahoo.fr> wrote:
> >
> > >
> > > I understand you seem to adopt the copenhagen interpretation.
> >
> > I don't believe in any 'interpretation' of quantum mechanics. I'm just
> > confused by all of it. As I said elsewhere, in my ideal world, there
> > would be a physical collapse process leading to the emergence of a
> > classical world. Unfortunately, I'm not sure I believe that's likely.
> >
> Ok, I also prefer to leave interpretation to philosophy : ).
> I have another question: what do you call a classical world? Frankly I
> do not understand that. QM deals only with statistics of outcomes
> and, in my opinion, outcomes are the "classical world" (what we "see").
> Therefore, it is relatively difficult for me to understand people who
> want to demonstrate that there is a physical collapse leading to the
> outcomes.

Why do we observe outcomes with probability |<a|psi>|^2? QM has no
answer for this question.
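For concreteness, the rule being referred to, P(a) = |&lt;a|psi&gt;|^2, computed for an assumed example qubit state:

```python
import numpy as np

# Born rule P(a) = |<a|psi>|^2, for the assumed state
# |psi> = (|0> + i|1>)/sqrt(2), measured in the {|0>, |1>} basis.
psi = np.array([1.0, 1.0j]) / np.sqrt(2)
bras = np.eye(2, dtype=complex)     # rows <0|, <1| (real, so no conj needed)

probs = np.abs(bras @ psi) ** 2     # [0.5, 0.5]
assert np.isclose(probs.sum(), 1.0) # probabilities sum to one
print(probs)
```

The formalism produces these numbers mechanically; why outcomes occur with these frequencies is the unanswered question in the post above.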

> I can only understand this sentence as the quest for a deterministic
> (causal) description of outcomes compatible with the statistics of QM.

QM is deterministic and causal. That's the problem.

> If this is the case, we already have such a description: Bohmian
> mechanics for the position eigenbasis (and equivalent formulations in
> other eigenbases).

Bohmian mechanics has no relativistic generalization that I know of.

[...]

> > I'm not sure I understand what you're asking. Let me try to answer
> > something, then. The question of why we observe what we observe is
> > completely unanswered by quantum mechanics.
>
> So you are saying that QM theory does not explain the preferred basis.

You'll have to communicate to me better what you mean by 'preferred
basis'. Given an experimental setup with a classical measuring device
(where classical means large numbers of microstates per macrostate), I
can describe to you a preferred basis in that setup.

[...]

Aaron

Seratend

May 30, 2005, 1:18:06 AM
Arnold Neumaier wrote:

> Seratend wrote:
>
> > We can apply the statistics to small independent areas. This does not
> > change the fact that at the end what we really measure on this local
> > area is M_area = sum_{i in the area} Mi and not <M>_{area} (what you call
> > the approximation when the area is not well defined).
>
> How do you know what we 'really' measure? Measurement is a complicated
> process...
>
I do not know what I really measure; I just have a model and a mapping
with the/my "reality" (the measurement "results"). It is the eternal
problem of mathematical models and their association with the
elements of "reality". Therefore, I have the choice to choose the model I
think is more adequate to reflect the results (in this case the
discrete model versus the continuous one).
>
> > An instance (an outcome) of the object has well defined (maybe unknown
> > to the observer) values of Mi in all areas (independence of the Mi random
> > variables, or commuting observables). However, at the end one sums
> > over all these areas (the extensive property of M) to obtain the value we
> > really measure (the approximation of <M>).
>
> But these areas themselves are only vaguely defined and then shrunk to
> zero volume to get the hydrodynamic limit.

This is important for your continuous model (the one used mainly in
statistical physics). The discrete description does not need this limit
(to compute the <M> value), as I assume the object is made of a finite
(but huge) number of variables (a choice). This method does not require
dividing the system into small areas, since M is the sum of all the
discrete mi (that need is specific to the continuous model).
However, huge sums of numbers are not very practical in formal calculus.
We prefer to approximate them by continuous models having the same
limit.
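The claim that the extensive sum and the mean value agree for a huge number of variables can be checked numerically; a sketch under the assumption of i.i.d. microscopic variables (the toy numbers are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: N independent microscopic variables m_i with mean 1.0.
# The measured extensive quantity is the discrete sum M = sum_i m_i;
# for huge N it concentrates around N * <m>, which is why the discrete
# and continuous (mean-value) descriptions agree at the limit.
N = 10**6
m = rng.normal(loc=1.0, scale=0.5, size=N)

M = m.sum()
rel_fluct = abs(M - N * 1.0) / (N * 1.0)

# Relative fluctuations scale like 1/sqrt(N), of order 5e-4 here.
assert rel_fluct < 1e-2
print(rel_fluct)
```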

> Your description cannot even
> imitate this limit, and thus does not make sense in the final
> measurements, which are done on a thermodynamical interpretation and
> not in terms of atoms.
>

I do not understand; I may extend the discrete model towards a
continuous model in order to get the same result (<M> ~ M) at the limit.
For example, dividing the system into smaller areas, each one
containing a huge number of "particles", in order to get
sum_area mi = <m_area> dx ~ m_exp(x) dx = [integral dy rho(x-y) m(y)] dx

(if I have understood your notation). The area does not have to be well
defined, as you have already said.

Then we may add several hypotheses in order to recover at the limit
what we "really" measure (where the discrete and continuous models give
the same result - e.g. the convergence of <m_area> towards a density =>
a countable number of particles towards an uncountable one). These are
the usual additional hypotheses of statistical physics associated
with the continuous model.

Seratend.

Arnold Neumaier

May 30, 2005, 1:18:53 AM
Seratend wrote:

> QM deals only with statistics of outcomes


> and, in my opinion, outcomes are the "classical world" (what we "see").

In my opinion, the "classical world" (what we "see") is the world
as seen after irreversible effects have set in, i.e., the world
as described by nonequilibrium thermodynamics (including hydromechanics
and kinetic theory). Everything in thermodynamics and kinetic theory
is real, objective, without any of the dubious features that characterize
the traditional interpretations of the quantum world.


> Therefore, it is relatively difficult for me to understand people who
> want to demonstrate that there is a physical collapse leading to the
> outcomes.

The quest is to show that the interaction of a quantum system with
a macroscopic detector describable by thermodynamics (and hence,
through statistical mechanics, by quantum theory) gives rise to
macroscopic, observable effects in the detector that can be regarded
as the physical equivalent of an objective record of measurements.

I gave a concise formulation of a specific case of this quest in
my recent paper quant-ph/0505172.


Arnold Neumaier

r...@maths.tcd.ie

May 30, 2005, 1:21:29 AM
Arnold Neumaier <Arnold....@univie.ac.at> writes:

>r...@maths.tcd.ie wrote:

>> Arnold Neumaier <Arnold....@univie.ac.at> writes:
>>
>>>r...@maths.tcd.ie wrote:
>>

>>>You seem to be projecting _your_ anger onto me.
>>
>> Perhaps it seems that way to you; I assure you that I'm not.

>Then it must have been an artifact of the medium usenet.
>It seems to make statements to look more emotional than they
>are meant, which occasionally (and in unmoderated groups often)
>leads to an involuntary rise in aggression.

Indeed; this happens far too often. In diplomacy, people
have developed formalized rules to avoid involuntary
rises in aggression like this, and refer to them as
protocol. Usenet hasn't got anything similar yet, except
for the vague rule that one should be polite.

>>>And when I say that state reduction is a
>>>physical process, I both state my belief and happen to coincide with
>>>famous physicists like von Neumann and many others, and this is good
>>>enough to make this statement honestly.
>>
>> Well, von Neumann was actually of the opinion that state reduction
>> wasn't a physical process, as far as I can determine from reading
>> his papers. In your post, you also said (more or less) that it
>> wasn't a physical process, so I presume you left out a "not"
>> above.

>No. I meant ''state reduction is a physical process'' since this is
>what I said and what physicists observe.

Perhaps you are using the word "physical" in a way with which I'm
not familiar. You referred, in your original post to collapse
as "an artifact of the description of a quantum system by
a limited number of observables". To me, that sounds very
much like saying that collapse isn't a physical process.

>See
> A. Neumaier,
> Collapse challenge for interpretations of quantum mechanics
> quant-ph/0505172
> (see also http://www.mat.univie.ac.at/~neum/collapse.html).

The latter link appears to be broken. Your treatment of the Copenhagen
interpretation in the article claims that the "unresolved
quantum-classical interface issue (including the missing definition
of which situations constitute a measurement) is a serious defect
of the Copenhagen interpretation when viewed as a fundamental
interpretation of quantum mechanics."

This is slightly unfair to the Copenhagen interpretation, in
which the wavefunction is understood to represent knowledge
about the system, rather than the system itself. A definition
of measurement isn't missing because measurement is the
acquisition of new knowledge. State vector reduction happens
because the observer acquires new knowledge and then updates
the mathematical representation of his knowledge to reflect
the new knowledge that he has.

It is only if we ignore this, and suppose that the Copenhagen
interpretation asserts the opposite, namely that the wavefunction
doesn't represent knowledge, but represents the state of the
system, that the discontinuous change in the wavefunction
looks problematic, since that would mean that the system
itself changes discontinuously.

>Von Neumann takes the collapse as an axiom, hence also testifies to its
>reality.

He uses it as an axiom, but that doesn't mean that he claimed that
the wavefunction didn't represent knowledge.

>I'd appreciate getting a clear reference where he states
>the contrary (if he does so).

He is less clear about it than Bohr or Heisenberg, but, for
example, in his 1936 paper with Birkhoff, "The Logic of
Quantum Mechanics", he expresses the view
that the formalism of quantum mechanics is the way it
is because the algebra of Hilbert-space subspaces is
that of a non-distributive orthomodular lattice, which
matches the structure of the collection of experimentally
verifiable propositions about a system. This seems to
me to be an indication that he considered rays of Hilbert
space to be associated with propositions (knowledge), rather
than with the actual configuration of the system.

More concretely, in chapter 4 of his "Mathematical
Foundations of Quantum Mechanics", he says:

"Let us assume that we do not know the state of a
system, S, but that we have made certain measurements
about the state of S and know their results. In reality,
it always happens this way, because we can learn something
about the state of S only from the results of measurements.
More precisely, the states are only a theoretical construction,
only the results of measurements are actually available, and
the problem of physics is to furnish relationships between
the results of past and future measurements." p. 337

In addition, he credits Bohr on page 420 with the insight
that quantum mechanics can only be understood in terms
of the relationship between the physical and the psychical,
which seems to me to be a direct indication that he
understood and agreed with the idea that the mathematical
representations that quantum mechanics uses refer to
knowledge about the system and not to the system itself.

He devotes chapter 6 to explaining that it doesn't
matter where the boundary between the system and
the observer is placed, whether at the pointer
on the measuring device or at the eye of the
observer. The reason that he does this is that, as
he says, "the danger lies in the fact that the
principle of psycho-physical parallelism is
violated, so long as it is not shown that the
boundary between the observed system and the observer
can be displaced arbitrarily..." (p. 421).

Now, the principle of psycho-physical parallelism is
understood by Von Neumann to be "that it must
be possible to describe the subjective experience
as if it were in reality in the physical world", and
that "that [the] boundary can be pushed arbitrarily
into the body of the actual observer is the content
of the principle of psycho-physical parallelism" (p. 420).

What this means (as I understand it) is, firstly,
that the ray of the Hilbert space in quantum
mechanics represents knowledge, and the question
"Knowledge about what?" can be given many answers,
such as "knowledge about the position of the
instrument pointer", "knowledge about the momentum
of the particle", or "knowledge about the conditions
inside of my body." The principle of psycho-physical
parallelism tells us that, whatever we claim to know
about the physical world, what we actually know about
is what's going on inside our body, and Von Neumann
is observing that pushing the boundary between the
observer and the observed inside the body of the
observer works just fine with quantum mechanics.

I'd be interested to hear any conflicting interpretations
of the above quotes regarding psycho-physical parallelism
and pushing the boundary inside the body of the observer.

You might also want to read the paper by Lon Becker:
"That von Neumann Did Not Believe in a Physical Collapse",
http://bjps.oupjournals.org/cgi/content/abstract/55/1/121

>> You may very well say that this is a harsh lesson that he needs to
>> learn. I would say that it would be better if people clearly
>> distinguished between what was merely their opinion and
>> what is well-established, and then those who ask questions
>> would be able to trust the answers that physicists give them.

>Only if they have no prejudice, and if the asker recognizes that he
>speaks with a person without prejudice. But both requirements are very
>rarely met. So he is right to be cautious. Indeed, we learn from the
>earliest age not to trust too early.

Well, there is a distinction to be made between the role
of a teacher and the role of a physicist debating matters
with another physicist. We expect our teachers to honestly
tell us which things they are teaching are well established
and which are their opinions. Perhaps not all teachers
meet this high standard, but I think it's important to
keep that standard in place.

I would also think that, when approached by a non-expert
who has a relatively simple question to ask, the physicist
who answers implicitly adopts the role of a teacher.

>> As another example, if somebody asks "Is the Riemann hypothesis true?",


>> most knowledgeable people would reply that it isn't known whether
>> or not it is true, although it is widely believed that it is.
>> Somebody who simply says "Yes, it's true," would be being honest
>> by your criteria,

>Only if he really thinks it is true, according to the standards
>of mathematics. For example, I think that Louis de Branges
>can say it with honesty.
>http://www.math.columbia.edu/~woit/blog/archives/000035.html

I await the results of the scrutiny of his proof with interest.

Do you, incidentally, think that mathematicians should hold
themselves to higher standards than physicists when telling
others that a particular statement is true?

>>>It is ridiculous to require a percentage of people in a field
>>>to agree with you before you utter a statement without adding
>>>a qualification like 'I believe' or 'Some physisicts believe'.
>>>There would never be an agreement on the percentage required
>>>to do so.
>>
>> I agree. I never suggested that one should require a
>> specific percentage of physicists to agree with one before
>> saying something.

>You suggested that one should require 50% in the mail which
>caused my three question marks.

I gave an example of 50% as a figure that would indicate
controversy. I would not and do not suggest that one
should ever go to the bother of checking whether it
is 49% or 51% of physicists who agree with an opinion.

R.

Arnold Neumaier

May 30, 2005, 12:49:56 PM
Seratend wrote:

> Arnold Neumaier wrote:
>
>>Seratend wrote:
>>
>>>We can apply the statistics to small independent areas. This does not
>>>change the fact that at the end what we really measure on this local
>>>area is M_area = sum_{i in the area} Mi and not <M>_{area} (what you call
>>>the approximation when the area is not well defined).
>>
>>How do you know what we 'really' measure? Measurement is a complicated
>>process...
>>
> I do not know what I really measure,

but then it is difficult to understand how you can make the factual
statements about it that you made.

> I just have a model and a mapping
> with the/my "reality" (the measurement "results"). It is the eternal
> problem between mathematical models and their association with the
> "reality" elements. Therefore, I have the choice to choose the model I
> think is the more adequate to reflect the results(in this case the
> discrete model versus the continuous one).
>

>>>An instance (an outcome) of the object has well defined (may be unknown


>>>to the observer) values of Mi in all areas (independence of Mi random
>>>variables or commuting observables). However, at the end one sums all
>>>over these areas (the extensive property of M) to obtain the value we
>>>really measure (the approximation of <M>).
>>
>>But these areas themselves are only vaguely defined and then shrunk to
>>zero volume to get the hydrodynamic limit.
>
> This is important for your continuous model (the ones used mainly in
> statistical physics).

And these continuous models are the ones that govern macroscopic
observations and hence all real raw measurements. Hydrodynamics is
well-defined only in the continuum limit, _not_ as a discrete theory.

> The discrete description does not need this limit
> [to compute the <M>) value as I assume the object is made of a finite
> (but huge number) of variables (choice). This method does not require
> dividing the system into small areas as M is the sum of all discrete mi
> (this need is specific to this continuous model).
> However, huge sum of number is not very practical in formal calculus.
> We prefer to approximate them by continuous models having the same
> limit.

We _define_ them in this way, since only then do we have a good theory.


>>Your description cannot even
>>imitate this limit, and thus does not make sense in the final
>>measurements, which are done on a thermodynamical interpretation and
>>not in terms of atoms.
>>
> I do not understand, I may extend the discrete model towards a
> continuous model in order to get the same result (<M>~M) at the limit.

If you shrink the domains far enough, you are left with less than
one atom per cell, and cannot maintain your formulas meaningfully.


Arnold Neumaier

Seratend

May 30, 2005, 12:49:57 PM
Arnold Neumaier wrote:

> Seratend wrote:
>
> > QM deals only with statistics of outcomes
> > and, in my opinion, outcomes are the "classical world" (what we "see").
>
> In my opinion, the "classical world" (what we "see") is the world
> as seen after irreversible effects have set in, i.e., the world
> as described by nonequilibrium thermodynamics (including hydromechanics
> and kinetic theory).

Interesting.
You seem to view the measurement results exclusively through the mean
value filter, in my view (like the interference pattern: a single
photon's screen-impact event versus the interference pattern built from
many independent photons).
How do you explain the observed state of a single photon event? (I
understand you can explain the mean value of the measurement apparatus,
which is almost equal to the outcome under suitable hypotheses, but not
that of the photon or electron itself.)

What do you intend by irreversible effects?

> Everything in thermodynamics and kinetic theory
> is real, objective, without any of the dubious features that characterize
> the traditional interpretations of the quantum world.
>

Frankly, I have a real problem seeing reality behind pressure, volume
and energy/temperature. All are macroscopic random variables seen through
the mean value filter (or the law of large numbers, if you prefer), and
this seems to be a restrictive choice on what can be observed,
especially when we consider the observation of a microscopic phenomenon
through a macroscopic one: most of the usual quantum cases of measuring
single particles.

>
> > Therefore, it is relatively difficult for me to understand people who
> > want to demonstrate that there is a physical collapse leading to the
> > outcomes.
>
> The quest is to show that the interaction of a quantum system with
> a macroscopic detector describable by thermodynamics (and hence,
> through statistical mechanics, by quantum theory)

Statistical classical mechanics?

> gives rise to
> macroscopic, observable effects in the detector that can be regarded
> as the physical equivalent of an objective record of measurements.
>

I am not sure I understand what you are saying. In the QM description, I
just have statistics of outcomes. For a macroscopic detector, I have a
macroscopic observable A = sum_i Ai, where the Ai are the microscopic
observables of the apparatus (a huge number). The result of the
measurement will be, for example, the value of this observable, which is
highly degenerate at the limit. Except for the preferred-basis
problem, I do not understand what you are looking for.

> I gave a concise formulation of a specific case of this quest in
> my recent paper quant-ph/0505172.
>

I have quickly read your paper. I have not found the original thread, so
I have some questions:
a) What is the initial state of the photon (assuming a wave packet):
|psi> = |path1> + |path2> with <path1|path2> = 0?
b) If yes, are |path1> and |path2> for example two parallel paths, where
|path1> is 100% stopped by the first screen and |path2> 100% not?
c) What do you want to say?

I mean, I have a system that is well described through unitary
evolution (superposition of states). At the end, I must apply the Born
rule to get the statistics (what I see in the experiment). QM does not
explain the preferred basis (here the paths of the particles), nor
why we have a particular outcome (the physical collapse?) in a given
trial.

Are you just searching for a predictive description of a particular
outcome in a given QM experiment?


Seratend.


Arnold Neumaier

May 30, 2005, 12:49:54 PM
r...@maths.tcd.ie wrote:

> Arnold Neumaier <Arnold....@univie.ac.at> writes:
>
>>r...@maths.tcd.ie wrote:
>

>>>Arnold Neumaier <Arnold....@univie.ac.at> writes:
>>>
>>No. I meant ''state reduction is a physical process'' since this is
>>what I said and what physicists observe.
>
> Perhaps you are using the word "physical" in a way with which I'm
> not familiar. You referred, in your original post to collapse
> as "an artifact of the description of a quantum system by
> a limited number of observables". To me, that sounds very
> much like saying that collapse isn't a physical process.

It _is_ an observable (hence physical) process since all our
descriptions useful for prediction and quantitative analysis
are necessarily reduced, if only because just part of the
universe is accessible to our observations.

Under the usual handwaving inherent in Markov approximations,
the collapse is also deducible from the unitary evolution of
the universe as a whole, though a rigorous mathematical basis
is missing.

>
>
>>See
>> A. Neumaier,
>> Collapse challenge for interpretations of quantum mechanics
>> quant-ph/0505172
>> (see also http://www.mat.univie.ac.at/~neum/collapse.html).
>
>
> The latter link appears to be broken.

We had a server failure on the weekend. Things are repaired now,
and I checked that the link works.


> Your treatment of the Copenhagen
> interpretation in the article claims that the "unresolved
> quantum-classical interface issue (including the missing definition
> of which situations constitute a measurement) is a serious defect
> of the Copenhagen interpretation when viewed as a fundamental
> interpretation of quantum mechanics."
>
> This is slightly unfair to the Copenhagen interpretation, in
> which the wavefunction is understood to represent knowledge
> about the system, rather than the system itself.

No. As far as I can tell, the first mention of the claim that
''the wavefunction is understood to represent knowledge'' is by
Jaynes in the 1950s, long after the establishment of the
Copenhagen interpretation.


> A definition
> of measurement isn't missing because measurement is the
> acquisition of new knowledge.

This is not a good definition since it is never specified what
constitutes acquisition of knowledge. The theory of knowledge
acquisition is a branch of psychology, not of physics.


> State vector reduction happens
> because the observer acquires new knowledge and then updates
> the mathematical representation of his knowledge to reflect
> the new knowledge that he has.

I doubt whether any observer updates his or her knowledge according
to Bayesian reasoning. Field studies probably show large deviations
from this supposedly universal behavior.

Furthermore, knowledge depends on subjective decisions to trust
a measurement. If we discard one as an artifact, there is no
collapse. How can the collapse depend on such subjective issues?

At the time of Bohr, von Neumann and Wigner, the collapse meant
something objective, though it might have been related to the mind
in some unspecified way.

>>Von Neumann takes the collapse as an axiom, hence also testifies to its
>>reality.
>
> He uses it as an axiom, but that doesn't mean that he claimed that
> the wavefunction didn't represent knowledge.

But he certainly didn't claim that the wavefunction does represent
knowledge.

> in his 1936 paper with Birkhoff, "The Logic of
> Quantum Mechanics", for example, he expresses the view
> that the formalism of quantum mechanics is the way it
> is because the algebra of Hilbert-space subspaces is
> that of a non-distributive orthomodular lattice, which
> matches the structure of the collection of experimentally
> verifiable propositions about a system. This seems to
> me to be an indication that he considered rays of Hilbert
> space to be associated with propositions (knowledge), rather
> than with the actual configuration of the system.

No. A proposition is a statement that is true or false,
or undecidable. It has nothing to do with whether or not
anyone knows (or claims to know) its truth or falsehood.


> "Let us assume that we do not know the state of a
> system, S,

This assumption already shows that the state of the system
must exist independent of our knowledge.


> but that we have made certain measurements
> about the state of S and know their results. In reality,
> it always happens this way, because we can learn something
> about the state of S only from the results of measurements.
> More precisely, the states are only a theoretical construction,
> only the results of measurements are actually available, and
> the problem of physics is to furnish relationships between
> the results of past and future measurements." p. 337

> The principle of psycho-physical
> parallelism tells us that, whatever we claim to know
> about the physical world, what we actually know about
> is what's going on inside our body,

I don't buy this. What we know is some platonic extract
extrapolated from sense data. And much of it is mistaken
in detail, but still we think we know and act accordingly.
It has nothing to do with physics as understood pragmatically.


> You might also want to read the paper by Lon Becker:
> "That von Neumann Did Not Believe in a Physical Collapse",
> http://bjps.oupjournals.org/cgi/content/abstract/55/1/121

I'll read it and comment later, if I have more to say than
what I said already.


> Well, there is a distinction to be made between the role
> of a teacher and the role of a physicist debating matters
> with another physicist. We expect our teachers to honestly

Yes, but expectations are not always borne out in practice.
The hallmark of a good scientist is his or her scepticism.


> tell us which things they are teaching are well established
> and which are their opinions. Perhaps not all teachers
> meet this high standard, but I think it's important to
> keep that standard in place.

I agree that this is important. But equally important is
to teach students not to take their teachers for infallible,
but to check for themselves whatever they find dubious.

>>Only if he really thinks it is true, according to the standards
>>of mathematics. For example, I think that Louis de Branges
>>can say it with honesty.
>>http://www.math.columbia.edu/~woit/blog/archives/000035.html
>
>
> I await the results of the scrutiny of his proof with interest.

Of course. But our discussion was about honesty, not truth.


> Do you, incidentally, think that mathematicians should hold
> themselves to higher standards than physicists when telling
> others that a particular statement is true?

When they discuss mathematical results (only), and upon request
(only), yes. For ordinary conversations (such as
those on Usenet), the ordinary level of honesty is enough.

But their standard of truth is higher than that of physicists,
since their subject matter is completely standardized and
subject to logic rather than experiment.


Arnold Neumaier

Arnold Neumaier

May 31, 2005, 2:34:38 AM
Seratend wrote:

> Arnold Neumaier wrote:
>


>>Seratend wrote:
>>
>>>QM deals only with statistics of outcomes
>>>and, in my opinion, outcomes are the "classical world" (what we "see").
>>
>>In my opinion, the "classical world" (what we "see") is the world
>>as seen after irreversible effects have set in, i.e., the world
>>as described by nonequilibrium thermodynamics (including hydromechanics
>>and kinetic theory).
>
> Interesting.
> You seem to view the measurement results exclusively through the mean
> value filter

Yes. Mean values of thermodynamic origin are the raw observables
in all experiments; everything else is derived from these by theory
or speculation.

I call this the 'consistent experiment interpretation', following
first steps in this direction taken in Section 10 of
quant-ph/0303047 = Int. J. Mod. Phys. B 17 (2003), 2937-2980.
Since I wrote this, my view has considerably gained in strength.
If you read German, you can find much more about it at
http://www.mat.univie.ac.at/~neum/physik-faq.tex
I am working on a paper describing everything more formally
and in English, expecting (because of other work to do)
to have it finished by the end of the summer. In the meantime,
I am happy to feed the main qualitative arguments into this
discussion, if you are interested.


> in my point of view (like the interference pattern: single
> photon screen impact event versus multiple independent photons
> interference pattern event).
> How do you explain the observed state of a single photon event?

It is only a sloppy way of speaking, not a real physical event.
What actually happens is the following:

The light ray of a laser is an electromagnetic field localized in a
small region along the ray that begins in the laser and ends at the
photodetector. A ray of intensity I is described by a coherent state
|I>> = |0> + I|1> + I^2/2|2> + I^3/6|3> + ...
If I is tiny then, from time to time, an electron responds (in some
loose way of speaking that itself would need correction) to the
energy continuously transmitted by the ray by going into an excited
state, an event which is magnified in the detector and recorded.
These occasional events form a Poisson process, with a rate proportional
to the intensity I. This, no more and no less, is the experimental
observation. It is precisely what is predicted by quantum mechanics.
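The statistical claim here — occasional detection events forming a Poisson process with rate proportional to the intensity I — can be sketched numerically. A toy illustration only; the efficiency constant k and the units are assumptions, not part of the post:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model (assumed units): detection events form a Poisson process
# whose rate is proportional to the beam intensity I.
I = 0.3                  # intensity (arbitrary units)
k = 10.0                 # hypothetical detector efficiency constant
rate = k * I             # events per unit time
T = 1000.0               # total observation time

# Number of recorded events in time T, and the waiting times between
# events (exponential with mean 1/rate for a Poisson process).
n_events = rng.poisson(rate * T)
waits = rng.exponential(1.0 / rate, size=n_events)
print(n_events, waits.mean())
```

Halving I halves the event rate but leaves the exponential shape of the waiting times unchanged, which is the signature of a Poisson process.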

The traditional sloppy way of picturing this in an intuitive way is to
say that, from time to time, a photon arrives at the screen and kicks
an electron out of its orbit. This is a nice picture, especially for
the newcomer or the layman, but it cannot be taken any more seriously
than Bohr's picture of an atom, in which electrons orbit a nucleus in
certain quantum orbits. For nothing of this can be checked by experiment
- it is empty talk intended to serve intuition, but in fact causing more
damage than understanding.

Another way to see that is that the photo effect also happens for
fermionic matter in a classical external field. (See, e.g., the
quantum optics book by Mandel and Wolf.) Thus the observed
Poisson process cannot be a consequence of quantized light, but
rather is an indication of quantized detectors.

> What do you intend by irreversible effects?

Dissipation, introduced by the Markov approximation necessary to get
a sensible dynamics of a system smaller than the whole universe.


>>Everything in thermodynamics and kinetic theory
>>is real, objective, without any of the dubiosities that characterize
>>the traditional interpretations of the quantum world.
>>
> Frankly, I have a real problem to see reality behind pressure, volume
> and energy/temperature.

Ask any engineer. They know what is real. I understand reality in the
engineering sense. They can determine the pressure, to within the
accuracy allowed by statistical mechanics. A single measurement on a
single large quantum system (such as a cup of tea) is usually sufficient
to get a reasonable objective value.

If this is not real, there is no reality at all, and we are all dreaming.


> All are macroscopic random variables through
> the mean value filter (or the law of large numbers if you prefer) and
> this seems to be a restrictive choice on what can be observed,
> especially when we consider the observation of a microscopic phenomenon
> through a macroscopic one: most of the usual quantum cases measuring
> single particles.

How can you measure a microscopic object without measuring something
macroscopic? You need the macroscopic, thermodynamic state of something
to assert that indeed some definite, objective event happened.
Take away objectivity and you lose all of physics.


>>>Therefore, it is relatively difficult for me to understand people who
>>>want to demonstrate that there is a physical collapse leading to the
>>>outcomes.
>>
>>The quest is to show that the interaction of a quantum system with
>>a macroscopic detector describable by thermodynamics (and hence,
>>through statistical mechanics, by quantum theory)
>
> Statistical classical mechanics?

No. Statistical mechanics as taught in textbooks. Which includes
(and on the deepest level is only) quantum mechanics.


>>gives rise to
>>macroscopic, observable effects in the detector that can be regarded
>>as the physical equivalent of an objective record of measurements.
>>
> I am not sure I understand what you say. In the QM description, I just
> have statistics of outcomes. I have for a macroscopic detector, a
> macroscopic observable A = sum_i Ai

No. This is not what statistical mechanics teaches. The gurus there say
that the quantities thermodynamics is about are expectations of
microscopic operators, not their eigenvalues!


> where Ai are the microscopic
> observables of the apparatus (huge number). The result of the
> measurement will be for example the value of this observable that is
> highly degenerated at the limit. Except for the preferred basis
> problem, I do not understand what you are looking for.
>
>
>>I gave a concise formulation of a specific case of this quest in
>>my recent paper quant-ph/0505172.
>>
I have quickly read your paper. I have not found the original thread. So
> I have some questions:

My time is up; will respond to these another time.


Arnold Neumaier

scerir

May 31, 2005, 2:35:51 AM5/31/05
Arnold Neumaier wrote:
> At the time of Bohr, von Neumann and Wigner, the collapse meant
> something objective,[...].

It seems, perhaps, interesting to point out that the
first definition was "reduction of probability packet",
sometimes "reduction of wave packet."

Quoting from "Electrons et Photons - Rapports et Discussions
du Cinquieme Conseil de Physique de l'Institut International
de Physique Solvay", [Paris, Gauthier-Villars, 1928, p. 250].
M. Born speaking (translated from the French): "[...] In other words:
how can the corpuscular character of the phenomenon be reconciled
here with the representation by waves? To do so, one must appeal
to the notion of the 'reduction of the probability packet'
developed by Heisenberg."

Actually Heisenberg gave a physical picture in 1930.
"There is then a definite probability for finding the photon
either in one part or in the other part of the divided wave packet.
After a sufficient time the two parts will be separated by any
distance desired; now if an experiment yields the result that
the photon is, say, in the reflected part of the packet, then
the probability of finding the photon in the other part of the
packet immediately becomes zero. The experiment at the position
of the reflected packet thus exerts a kind of action (reduction
of the wave packet) at the distant point occupied by the transmitted
packet, and one sees that this action is propagated with a velocity
greater than that of light. However, it is also obvious that
this kind of action can never be utilized for the transmission
of signals so that it is not in conflict with the postulates
of the theory of relativity." ('The Physical Principles of the
Quantum Theory', University of Chicago Press, Chicago, 1930).

(Following the above reasoning we expect, for instance, that the
information about the probability of a particle being at
a distance x comes to us with a signal velocity c.
Thus |wavefunction(x,t - r/c)|^2 should represent
the probability that a particle is at x, as seen from
the origin. Or am I wrong?)

Unfortunately H.Kragh ("Dirac: a Scientific Biography", Cambridge
U.P., 1990) describes a (1927) discussion between Dirac, Heisenberg
and Born, about what, actually, gives rise to a "collapse".
Dirac said that it is 'Nature' that makes the choice (of the
measurement outcome). Born agreed. Heisenberg however maintained that,
behind the collapse, and the choice of which 'branch' of the
wavefunction would be followed, there was "the free-will of the
human observer".

And later, in "Physics and Philosophy" (Harper and Row, 1958, New York)
Heisenberg writes "The observation itself changes the probability
function discontinuously; it selects of all possible events
the actual one that has taken place [...] The discontinuous change
in the probability function, however, takes place with the act
of registration, because it is the discontinuous change
of our knowledge in the instant of registration that has its
image in the discontinuous change of the probability function."

According to Jan Faye "Bohr accepted the Born statistical
interpretation because he believed that the psi-function
has only a symbolic meaning and does not represent anything real.
It makes sense to talk about a collapse of the wave function
only if, as Bohr put it, the psi-function can be given a pictorial
representation, something he strongly denied."

It is really not so easy to find a definition (of the 'reduction')
by Niels Bohr. In a letter to Pauli (March 2, 1955) he wrote "Thus,
when speaking of the physical interpretation of the formalism,
I consider such details of procedure like "reduction of the wave
packets" as integral parts of a consistent scheme conforming
with the indivisibility of the phenomena and the essential
irreversibility involved in the very concept of observation."
(Niels Bohr Collected Works, vol. 10, Elsevier 1999, page 568).

Even in Max Born it is possible to find many (very) different
interpretations of the 'reduction' (and of the wave-function).
For example: "The question of whether the waves are something
"real" or a function to describe and predict phenomena in
a convenient way is a matter of taste. I personally like
to regard a probability wave, even in 3N-dimensional space,
as a real thing, certainly as more than a tool for mathematical
calculations ... Quite generally, how could we rely on
probability predictions if by this notion we do not refer
to something real and objective?" [Max Born, Dover publ., 1964,
"Natural Philosophy of Cause and Chance", p. 107.]

-serafino

Seratend

May 31, 2005, 2:37:41 AM5/31/05
Aaron Bergman wrote:
> In article <1117183732....@z14g2000cwz.googlegroups.com>,

>
> Why do we observe outcomes with probability |<a|psi>|^2? QM has no
> answer for this question.
>
This is the Born rule, a postulate of QM. I think we are at the centre
of our minor problem of comprehension. The QM formulation provides a
very formal statistical description of the system, as classical
statistical mechanics does. This is a formal choice: we describe the
statistics of outcomes rather than the evolution of individual
outcomes.

For me at least this is clear. I am not sure if it is clear for you
too. In my comprehension, your question is just a mathematical
question: why the probability of an event (P(A=a)) is the frequency of
independent outcomes with identical probability (the statistics of
outcomes). This is simply the law of large numbers. Formally, I may
always make such an affirmation under the hypotheses of the experiment
where I have this equality. This is not different from the coin tossing
experiment. (You can always question the validity of the independence,
but this is a problem of the experiment's realization and not of the
validity of the results.)
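The law-of-large-numbers point can be illustrated directly: simulate independent measurements on a hypothetical qubit state and watch the empirical frequency approach the Born probability. The state and its amplitudes below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical qubit state |psi> = a|0> + b|1> (real amplitudes for
# simplicity); the Born rule gives P(outcome 1) = |b|^2.
a, b = 0.6, 0.8
p1 = b**2

# Empirical frequency of outcome 1 over n independent "measurements"
# approaches the Born probability (law of large numbers).
for n in (100, 10_000, 1_000_000):
    outcomes = rng.random(n) < p1
    freq = outcomes.mean()
    print(n, freq)
```

As with coin tossing, nothing here says anything about an individual outcome; only the statistics are constrained.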
In QM, we choose the observables (the set of outcomes is the
eigenvalues of the observable) and states (or generalized states) to
describe, formally, a probability law of the outcomes of the observable
(the random variable). The choice of observables and states to describe
the probability law is more related to the time evolution of the
probability law where we have a simple expression (e.g. the difference
between the observable expression of QM and the random variable
expression in Bohmian mechanics).

> > I can only understand this sentence as the quest for a deterministic
> > (causal) description of outcomes compatible with the statistics of QM.
>
> QM is deterministic and causal. That's the problem.
>

I do not understand where the problem is. I have a deterministic time
evolution of the probability law. I may also have a deterministic
evolution of the set of eigenvalues (the Heisenberg representation).
Where is the problem?

> > If this is the case, we already have such a description, bohmian

> > mechanics for the position eigenbasis (and equivalent formulations in
> > different eigenbases).
>
> Bohmian mechanics has no relativistic generalization that I know of.
>

I think you should rather say Bohmian mechanics has no simple
relativistic generalization (the mathematical expression becomes more
difficult as the particle number is not conserved).
See for example Trajectories and Particle Creation and Annihilation in
Quantum Field Theory, quant-ph/0208072, 2002.
With bohmian mechanics, we must separate the equivalent formulation of
QM with contextual random variables from the interpretation. As long as
we stay with mathematical expressions (and knowing their hypothesis of
validity), there is no problem.


> [...]
>
> > > I'm not sure I understand what you're asking. Let me try to answer

> > > something, then. The question of why we observe what we observe is
> > > completely unanswered by quantum mechanics.
> >
> > So you are saying that QM theory does not explain the preferred basis.
>
> You'll have to communicate to me better what you mean by 'preferred
> basis'.

Ok, I will try to explain what I mean. I have no problem with the
preferred basis (the basis where I have the experiment outcomes). I
just have a problem with the prediction of such a basis by QM. I mean,
where in the QM theory do I have results inferring the preferred basis?
Up to now, I just know the preferred basis after I have done the
experiments. For example the interference pattern of double slit
experiments observed on the screen. I know, from this experiment, if I
place a screen, I will do a position measurement. But I have not seen
anywhere in QM where the mathematics can predict such a basis, as every
basis is possible from the theory's point of view.

I can live with such a result. However, I prefer if everybody can
acknowledge such a result: the QM formulation does not predict the
basis of outcomes in an experiment (out of scope). It is important (at
least for me) to understand the scope of a physical theory (what it may
predict and what it does not predict).

> Given an experimental setup with a classical measuring device
> (where classical means large numbers of microstates per macrostate), I
> can describe to you a preferred basis in that setup.

Ok, let's play with a simple toy model and you will try to tell me
the preferred basis of this experimental setup.
Let's take the double slit experiment with electrons (formally
simpler in terms of the interaction description at the slit plate, but
we can change and choose photons, as outside the local interaction the
free propagator of the photon and the electron is equivalent).

a) Let's assume that the plate with the slits is a reflective plate
such that we can describe it through an interaction Hint_plate =
|plate><plate| (x) V(r),
where |plate> is a state of the free plate Hamiltonian (no interaction
with the environment: we neglect it during the period of observation).
V(r) is null outside the plate and at the slits and infinite within the
plate (hypothesis model of energy conservation). We just model the
plate as a quantum wall (with a non-null thickness).
b) At a distance from the plate (z direction), we place a screen. Let's
describe this screen by another Hamiltonian: Hint_screen =
|Screen><Screen| (x) Vdiff(x), where Vdiff is a scattering potential
null outside the screen (such that photons or electrons arriving at the
screen are deflected into another direction: this is how we may see
the interference pattern externally from another direction). |Screen>
is an eigenstate of the Hamiltonian.

We have for this system: H = Ho + Ho_screen + Hint_plate + Hint_screen.
We also have [Ho_screen, Hint_plate] = [Hint_screen, Hint_plate] =
[Ho_plate, Hint_plate] = 0.

Assume that the initial state is |state(o)> = |psi(o)>|plate>|screen>,
where |psi(o)> is the state of the electron or photon. Before the
plate, |psi(t)> may be described by a wave packet centered on a
momentum |p_z> parallel to the z axis (free propagator of the electron
or the photon).

We may add or remove the properties we want to this toy model. The
important fact is the ability, by thought, to decrease the interaction
with the environment and to check whether we are still able to predict
an eigenbasis, or whether the Schmidt eigenbasis is always correct for
this experiment (depending on the different hypotheses).

Now how can you deduce (from QM theory) the preferred basis and what we
"really" measure in this experiment?


Seratend.

Seratend

May 31, 2005, 2:37:54 AM5/31/05
Arnold Neumaier wrote:

> >
> > You seem to be definitively a lover of the deterministic description
>
> Yes. I believe that there is something deterministic about the universe!
>
: )
I prefer to say that deterministic and statistical descriptions are just
two equivalent ways of giving predictive results: I accept both.
Describing a function by its points or by its induced probability law is
somewhat equivalent (two points of view).

> In the 1925 physics views, it is discrete.
> In the post 1975 physics, it is again continuous, since the basic
> objects in the universe are quantum _fields_, and particles
> only arise through some ill-understood process, unless the field is
> free and the two descriptions are equivalent.
>

Well, as I have a finite capacity brain, I can only understand/see
discrete and finite quantities (I have never handled an infinite
quantity except in the math and physics models). Therefore it seems
that I prefer to view (choose) the world closer to the 1925 view (woaw!
I didn't think I had so obsolete a point of view ; ).
Even if the fields are continuous, the observables, or at least what we
[can] measure, are discrete. Therefore, the unanswerable question is
whether the set of all possible measurable values of a given "real"
observable is countable or uncountable.
However, in any case, choosing a discrete or continuous model will give
the same results at the limit (huge number/small size).
And I must admit, making predictions when the particle number is not
conserved is already difficult in the QFT formalism. I can only
imagine it would be a nightmare with a pure discrete approach (and may
be a waste of time and not very interesting from a calculus point of
view ; ).

However, I prefer to view the universe as finite and discrete
(~epistemic/practical view): all the mathematical problems of
infinities disappear and I may question mathematical results on
infinities whenever the limit is not continuous. (Note that most of
the time I prefer the continuous model for mathematical predictions ; ).

Seratend.

Aaron Bergman

May 31, 2005, 5:39:44 AM5/31/05
In article <1117448074....@f14g2000cwb.googlegroups.com>,
"Seratend" <ser_m...@yahoo.fr> wrote:

> Aaron Bergman wrote:
> > In article <1117183732....@z14g2000cwz.googlegroups.com>,
> >
> > Why do we observe outcomes with probability |<a|psi>|^2? QM has no
> > answer for this question.
> >
> This is the born rules, a postulate of QM.

Not so much. It works as a good effective description, but if it refers
to an explicit nonunitary collapse process, it fails to give a
description of it. If it does not refer to collapse, then it fails to
explain our perception of (one part of) the reduced density matrix
(say), rather than the full coherent wavefunction.

[...]

> > > I can only understand this sentence as the quest for a deterministic
> > > (causal) description of outcomes compatible with the statistics of QM.
> >
> > QM is deterministic and causal. That's the problem.
> >
> I do not understand where the problem is. I have a deterministic time
> evolution of the probability law. I may also have a deterministic
> evolution of the set of eigenvalues (the Heisenberg representation).
> Where is the problem?

You'll get the wrong answer if you apply such a prescription. If you
apply the Born rules without nondeterministic state reduction you get
nonsense. If you lack an explicit state reduction, you are forced to
explain why, as I said above, we only perceive one 'branch' of the
wavefunction (given that decoherence effectively shields us from
macroscopic interferences).

> > > If this is the case, we already have such a description, bohmian
> > > mechanics for the position eigenbasis (and equivalent formulations in
> > > different eigenbases).
> >
> > Bohmian mechanics has no relativistic generalization that I know of.
>
> I think you should rather say Bohmian mechanics has no simple
> relativistic generalization (the mathematical expression becomes more
> difficult as the particle number is not conserved).
> See for example Trajectories and Particle Creation and Annihilation in
> Quantum Field Theory, quant-ph/0208072, 2002.
> With bohmian mechanics, we must separate the equivalent formulation of
> QM with contextual random variables from the interpretation. As long as
> we stay with mathematical expressions (and knowing their hypothesis of
> validity), there is no problem.

I hadn't known of this attempt at a relativistic generalization.
Nonetheless, I believe that Bohmian mechanics still has a problem with
the reproduction of classical trajectories (although perhaps decoherence
can go a long way towards solving that).

> > [...]
> >
> > > > I'm not sure I understand what you're asking. Let me try to answer
> > > > something, then. The question of why we observe what we observe is
> > > > completely unanswered by quantum mechanics.
> > >
> > > So you are saying that QM theory does not explain the preferred basis.
> >
> > You'll have to communicate to me better what you mean by 'preferred
> > basis'.
>
> Ok, I will try to explain what I mean. I have no problem with the
> preferred basis (the basis where I have the experiment outcomes). I
> just have a problem with the prediction of such a basis by QM. I mean,
> where in the QM theory do I have results inferring the preferred basis?
> Up to now, I just know the preferred basis after I have done the
> experiments. For example the interference pattern of double slit
> experiments observed on the screen. I know, from this experiment, if I
> place a screen, I will do a position measurement. But I have not seen
> anywhere in QM, where the mathematics can predict such a basis as every
> basis is possible from the theory point of view.

No, you know ahead of time that, if you place the screen, you do a
position measurement. You in no way had to do the experiment to figure
this out. Inherent in any experimental design, you know which
macroscopic observable you are entangling your system with, and that
determines, via decoherence, a preferred basis. (Up to technicalities, I
suppose, like overcompleteness and the like.)

[...snip experiment...]

> We have for this system: H = Ho + Ho_screen + Hint_plate + Hint_screen.
> We also have [Ho_screen, Hint_plate] = [Hint_screen, Hint_plate] =
> [Ho_plate, Hint_plate] = 0.
>
> Assume that the initial state is |state(o)> = |psi(o)>|plate>|screen>,
> where |psi(o)> is the state of the electron or photon. Before the
> plate, |psi(t)> may be described by a wave packet centered on a
> momentum |p_z> parallel to the z axis (free propagator of the electron
> or the photon).
>
> We may add or remove the properties we want to this toy model. The
> important fact is the ability, by thought, to decrease the interaction
> with the environment and to check if we still are able to predict an
> eigenbasis or if the Schmidt eigenbasis is always correct for this
> experiment (depending on the different hypotheses).
>
> Now how can you deduce (from QM theory) the preferred basis and what we
> "really" measure in this experiment?

You need to describe to me the macroscopic degrees of freedom in your
experiment, i.e., the macrostates by which you are performing your
observation. With that, simply wait a period equal to a couple times
the decoherence time and pick the basis in which the reduced density
matrix is diagonal. You don't have to worry about uniqueness of Schmidt
bases or whatever.
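This prescription — trace out the pointer/environment and diagonalize the reduced density matrix — can be sketched for a minimal system-plus-pointer state. The state, amplitudes, and basis labels below are hypothetical, and decoherence is idealized as perfect entanglement with the pointer:

```python
import numpy as np

# System perfectly entangled with a pointer:
# |Psi> = a|0>|up> + b|1>|down>, written on the basis
# {|0,up>, |0,down>, |1,up>, |1,down>}.
a, b = 0.6, 0.8
psi = np.zeros(4)
psi[0] = a      # |0,up>
psi[3] = b      # |1,down>

# Full density matrix, then partial trace over the pointer index:
# rho_sys[s, s'] = sum_p rho[s, p, s', p].
rho = np.outer(psi, psi.conj())
rho_sys = np.zeros((2, 2), dtype=complex)
for p in range(2):
    rho_sys += rho.reshape(2, 2, 2, 2)[:, p, :, p]

# The reduced matrix is diagonal in the pointer-selected basis,
# with Born-rule weights |a|^2 and |b|^2 on the diagonal.
evals, evecs = np.linalg.eigh(rho_sys)
print(evals)
```

The off-diagonal (interference) terms of the system alone have vanished under the partial trace, which is the decoherence mechanism referred to above.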

Aaron

Seratend

May 31, 2005, 12:07:20 PM5/31/05
Arnold Neumaier wrote:

> And these continuous models are the ones that govern macroscopic
> observations and hence all real raw measurements. Hydrodynamics is
> well-defined only in the continuum limit, _not_ as a discrete theory.
>

I prefer to say these continuous models (in the absolute) are one of
the models that give results (approximations at the limit) compatible
with experiments. However, discrete models can give the same
approximation at the limit where they become continuous (or if you
prefer uncountable) (otherwise it would be very difficult to make
simulation models on computers).
Hydrodynamics is well-defined with continuous variables and practical.
However, this does not imply that continuous models govern macroscopic
observation, just that their results are compatible with experiment.

Infinities (big or small) are good mathematical objects to make
approximations. However, I have never seen an infinite object in "real"
experiments. Therefore, I always try to see if finite models are
sufficient to explain the results even if they are not useful for
explicit calculation. This is why I prefer the finite formulation of
what I may "see" (even if I appear to be old-fashioned with such an
idea : ). Still, I accept both mathematical formulations (knowing their
domain of validity).

> > I do not understand, I may extend the discrete model towards a
> > continuous model in order to get the same result (<M>~M) at the limit.
>
> If you shrink the domains sufficiently much you are left with less than
> one atom per cell, and cannot maintain your formulas meaningfully.
>

If you assume the continuous model, you accept that I may shrink the
domain in order to keep an infinite countable number of random
variables in this domain (I have access to all the infinites I want as
long as I am coherent). If you prefer, I need to increase the number of
random variables to keep the same limit and continuous distribution.

At the continuous limit, the total number of random variables in all
the domains becomes uncountable (uncountable number of domains).
However, we may construct other "discrete" models where we recover the
same limit.
(Probability theory is full of such examples where, whenever n becomes
infinite, the model becomes uncountable: e.g. the sample space of an
infinite sequence of random variables with more than one value, i.e.
{0,1}^N, is uncountable while {0,1}^n is countable if n is finite).
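The claim that a discrete model recovers the continuous limit can be checked numerically: the centred, scaled sum of n {0,1}-valued coin tosses approaches a Gaussian. This is a standard central-limit illustration, not an argument from the thread itself:

```python
import numpy as np

rng = np.random.default_rng(2)

# Sum of n Bernoulli(1/2) variables, centred and scaled: by the
# central limit theorem the distribution approaches N(0, 1) as n grows.
n, trials = 10_000, 50_000
sums = rng.binomial(n, 0.5, size=trials)
z = (sums - n * 0.5) / np.sqrt(n * 0.25)

# Compare empirical moments with the continuous (Gaussian) limit.
print(round(z.mean(), 2), round(z.std(), 2))
```

Each sample of `z` is built from finitely many discrete variables, yet its distribution is indistinguishable in practice from the continuous limit.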

Another advantage of this model is to show the possibility of choosing,
for any description, either the deterministic or the statistical
description (no preferred bias): we are used to viewing probability as
the lack of knowledge of deterministic values. The method of introducing
random variables behind deterministic values allows one to view
deterministic values as defined by probability rules (hence, I am
interested in your mean value filter point of view in another post).

Seratend.

Arnold Neumaier

May 31, 2005, 12:07:20 PM5/31/05
Seratend wrote:

> Arnold Neumaier wrote:
>
>>Seratend wrote:
>>
>>>Therefore, it is relatively difficult for me to understand people who
>>>want to demonstrate that there is a physical collapse leading to the
>>>outcomes.
>>
>>The quest is to show that the interaction of a quantum system with
>>a macroscopic detector describable by thermodynamics (and hence,
>>through statistical mechanics, by quantum theory) gives rise to
>>macroscopic, observable effects in the detector that can be regarded
>>as the physical equivalent of an objective record of measurements.
>>
> I am not sure I understand what you say. In the QM description, I just
> have statistics of outcomes. I have for a macroscopic detector, a
> macroscopic observable A= sum_i Ai where Ai are the microscopic
> observables of the apparatus (huge number). The result of the
> measurement will be for example the value of this observable that is
> highly degenerated at the limit. Except for the preferred basis
> problem, I do not understand what you are looking for.

I am looking for an explanation why a particular detector coupled
to a particular quantum system produces the observed erratic but
objective record of individual results that can be analyzed
statistically and quoted in a physics journal.

If you want to claim more than that these outcomes are just the
results of changes of belief (aka 'knowledge') in an observer's mind
- and I think physics does and should claim more than that -
you need to explain why the observed record is objective,
for each individual observation, before any statistical analysis
is done.


>>I gave a concise formulation of a specific case of this quest in
>>my recent paper quant-ph/0505172.
>>
> I have quickly read your paper. I have not found the original thread.

Type "collapse challenge" into
http://groups-beta.google.com/groups?q=%22collapse+challenge%22&qt_s=Search


> So I have some questions:
> a) what is the initial state of the photon (assuming a wave packet) :
> |psi>= |path1>+|path2> with <path1|path2>=0?

Not quite. Roughly,
|psi(t)> = |path1(t)> tensor |1> + |path2(t)> tensor |1>
with spatial coherent states |pathi(t)> (i=1,2) moving at the
velocity of light and monochromatic 1-Photon Fock states |1>, say.
The actual situation would be more complicated since single
photon states are electromagnetic waves (solutions of the free
Maxwell equations) approximately localized along some direction.
The challenge allows, however, any specific setting (even
idealized, or with massive particles, etc.) that matches the
informal description in a reasonable way.


> b) if yes, |path1> and |path2> are for example 2 parallel paths, where
> |path1> is 100% stopped by the first screen and |path2> 100% not?

Yes. This is an example that can be prepared by half-silvered mirrors.


> c) what do you want to say?
>
> I mean, I have a system that is well described through unitary
> evolution (superposition of states).

Absorption by a screen is an irreversible macroscopic process
accompanied by a minute increase of temperature. The claim that
it is described by unitary evolution requires proof, which,
if successful, would be part of an answer of the challenge.

If there is unitary dynamics only then the final result is not
the state |0,1,1> or |0,0,1> as observed, but a superposition
of the two. Invoking Born's rule is _assuming_ the collapse
rather than explaining it.

That something remains to be explained even from the Copenhagen
point of view (some version of which you seem to adhere to)
is discussed in Section 3.

> At the end, I must apply the born
> rules to get the statistics (what I see in the experiment).

This is the informal prescription that is used to apply single-particle
reasoning to a complex multiparticle experiment. It successfully
avoids looking at the physics happening at the screen, replacing it
by simply assuming the collapse, i.e., the emergence of an objective
record according to the probabilities from the Born rule.
While this is an acceptable attitude it is obviously not the whole
story.

The challenge is to _explain_ the emergence of the objective record
as a multiparticle phenomenon.


> Are you just searching for a predictive description of a particular
> outcome in a given QM experiment?

Just an explanation for how particular outcomes arise through
measurement. Leaving something as complex as 'measurement' as
an uninterpreted, vague fundamental concept, while practical
measurement is a whole science in itself, seems to me too gross
a simplification to be tolerable, and one of the reasons why the
foundations of QM are in their poor present state.


Arnold Neumaier

Arnold Neumaier

May 31, 2005, 12:07:21 PM5/31/05
Seratend wrote:

> Arnold Neumaier wrote:
>
>>>You seem to be definitively a lover of the deterministic description
>>
>>Yes. I believe that there is something deterministic about the universe!
>
> : )
> I prefer to say that deterministic and statistical descriptions are just
> two equivalent ways of giving predictive results: I accept both.
> Describing a function by its points or by its induced probability law is
> somewhat equivalent (two points of view).

But there is a difference between asserting that the die shows a three
and asserting that the probability of getting a three is 1/6.

It is a fact that there are many individual facts in Nature.
The foundations should reflect that.


>>In the 1925 physics views, it is discrete.
>>In the post 1975 physics, it is again continuous, since the basic
>>objects in the universe are quantum _fields_, and particles
>>only arise through some ill-understood process, unless the field is
>>free and the two descriptions are equivalent.
>>
> Well as I have a finite capacity brain, I just can understand/see
> discrete and finite quantities (I have never handled an infinite
> quantity except in the math and physics models).

With your finite brain capacity, it is much easier to understand or see
continuous quantities (such as a straight line) rather than highly
discretized quantities (a long line of equispaced dots). Much of
brain processing is indeed concerned with producing simple continuous
models of a messy reality.


> Therefore It seems
> that I prefer to view (choice) the world closer to the 1925 view (woaw!
> I didn't think I have a so obsolete point of view ; ).

It would be very difficult to describe Nature with only lattice field
theories. And it is very unlikely that _if_ nature is discrete,
it is based on a lattice. But with an irregular point set to start,
physics would be next to impossible.

The most important fundamental results, such as Noether's theorem,
rest on the assumption of continuity. The fact that there are nice
laws of physics when modelled as a continuum but none when modelled
discretely strongly hints at the continuous nature of Nature.


> Even if the fields are continuous, the observables, at least what we
> [can] measure is discrete.

By convention only. In fact what we can measure is fuzzy, not discrete.
Borderline cases are simply forced into a Procrustean bed to make
them fit a fixed scheme. In view of the inevitable measurement error
this does not harm things, but in a quest for understanding (and that's
what the foundations of QM are) one should not use the same Procrustean
techniques. http://www.mythweb.com/encyc/entries/procrustes.html


> Therefore, the impossible answer is to know
> if the set of all possible measurable values of a given "real"
> observable is countable or uncountable.
> However, in any case, choosing a discrete or continuous model will give
> the same results at the limit (huge number/small size).
> And I must admit, making predictions when the particle number is not
> conserved is already difficult in the QFT formalism. I just can
> imagine, it would be a nightmare with a pure discrete approach (and may
> be, a waste of time and not very interesting from a calculus point of
> view ; ).

So why propagate the nightmare if there are nice dreams?


> However, I prefer to view the universe as finite and discrete
> (~epistemic/practical view): all the mathematical problems of
> infinities disappear

... and together with them, all deep insights into physics,
all differential equations basic to all sciences, the calculus
that made Newton famous and physics the 'hard' science it is today.

This sort of magic is completely against my taste...
Instead of solving the problems it provides a carpet of
intractability under which to sweep every challenge that
is left in the foundations.


Arnold Neumaier

I.Vecchi

May 31, 2005, 6:21:50 PM
to
Aaron Bergman wrote:
> In article <1117448074....@f14g2000cwb.googlegroups.com>,
> "Seratend" <ser_m...@yahoo.fr> wrote:
..

> > Now how can you deduce (from QM theory) the preferred basis and what we
> > "really" measure in this experiment?
>
> You need to describe to me the macroscopic degrees of freedom in your
> experiment, ie, the macrostates by which you are performing your
> observation.

Isn't this obviously circular? Aren't the "the macrostates by which you
are performing your observation" precisely what decoherence is supposed
to derive from a purely quantum description of the process?

IV

Seratend

May 31, 2005, 6:21:52 PM
to
Aaron Bergman wrote:
> In article <1117448074....@f14g2000cwb.googlegroups.com>,

> > This is the born rules, a postulate of QM.


>
> Not so much. It works as a good effective description, but if it refers
> to an explicit nonunitary collapse process, it fails to give a
> description of it. If it does not refer to collapse, then it fails to
> explain our perception of (one part of) the reduced density matrix
> (say), rather than the full coherent wavefunction.
>

In my opinion, you are trying to say more than QM theory says.
You seem to be an adept of wave-function reality, hence you try to
define something outside the scope of the current formulation of QM
(an explanation of the collapse), while I simply take the Born rules
as statistics of outcomes and the collapse postulate as the assertion
that a given outcome value of a system is true ("Outcome A=a" is true).
If I compute the frequency of identical independent measurement
results, I get the probability law of the event. I do not explain how
to find these identical independent systems (out of scope), just as I
do not explain how I get a particle at the place and time (qo,to) in
classical mechanics (the initial condition).
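As a toy illustration of this frequency view (the state and sample size
below are invented for the sketch), one can sample outcomes with the Born
probabilities and recover the probability law as empirical frequencies:

```python
import numpy as np

rng = np.random.default_rng(0)

# A qubit state |psi> and the measurement basis {|0>, |1>}.
psi = np.array([0.6, 0.8])          # normalized: 0.36 + 0.64 = 1
born = np.abs(psi) ** 2             # Born-rule probabilities [0.36, 0.64]

# Many identical independent systems; each measurement yields one outcome.
outcomes = rng.choice([0, 1], size=100_000, p=born)
freq = np.bincount(outcomes, minlength=2) / outcomes.size

print(born)   # [0.36 0.64]
print(freq)   # empirical frequencies, approaching the Born probabilities
```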

> [...]
>
> > > > I can only understand this sentence as the quest for a deterministic
> > > > (causal) description of outcomes compatible with the statistics of QM.
> > >
> > > QM is deterministic and causal. That's the problem.
> > >
> > I do not understand where the problem is. I have a deterministic time
> > evolution of the probability law. I may also have a deterministic
> > evolution of the set of eigenvalues (the Heisenberg representation).
> > Where is the problem?
>
> You'll get the wrong answer if you apply such a prescription. If you
> apply the Born rules without nondeterministic state reduction you get
> nonsense. If you lack an explicit state reduction, you are forced to
> explain why, as I said above, we only perceive one 'branch' of the
> wavefunction (given that decoherence effectively shields us from
> macroscopic interferences).
>

No, I do not need to explain why I have a state reduction if I do not
attach any reality to the state and the collapse. It is simply a formal
view, like Kolmogorov probability theory.
In this formal point of view, the collapse postulate is just the
logical assertion that a given property of a given system is true (e.g.
"Outcome A=a at time t"). We use it to select the systems (what you
may call a 'branch' of the global unitary evolution) where it is
true. Note that there is no time significance to this formal property:
it is either true or false, once and for all, for a given system.
In practical experiments, we have a detector; when this detector
triggers, we know that we have a system instance with a given property.

>
> I hadn't known of this attempt at a relativistic generalization.
> Nonetheless, I believe that Bohmian mechanics still has a problem with
> the reproduction of classical trajectories (although perhaps decoherence
> can go a long way towards solving that).
>

Please do not mix the mathematical formulation of Bohmian
mechanics with its interpretation (the reality of the Bohmian paths).
There is no problem with the mathematical formulation, while the
interpretation of the Bohmian paths is subject to problems.

> >
> > Ok, I will try to explain what I mean. I have no problem with the
> > preferred basis (the basis where I have the experiment outcomes). I
> > just have a problem with the prediction of such a basis by QM. I mean,
> > where in the QM theory do I have results inferring the preferred basis?
> > Up to now, I just know the preferred basis after I have done the
> > experiments. For example the interference pattern of double slit
> > experiments observed on the screen. I know, from this experiment, if I
> > place a screen, I will do a position measurement. But I have not seen
> > anywhere in QM, where the mathematics can predict such a basis as every
> > basis is possible from the theory point of view.
>
> No, you know ahead of time that, if you place the screen, you do a
> position measurement.

Because I have already learned it from a previous experiment: I see an
interference pattern when I place a screen, hence the position
measurement.

> Inherent in any experimental design, you know which
> macroscopic observable you are entangling you system with and that
> determines, via decoherence, a preferred basis. (Up to technicalities, I
> suppose, like overcompleteness and the like).
>

Therefore, your view is equivalent to the one saying QM theory does not
predict the preferred basis. In this case, decoherence just explains
why we have stable outcome values for this basis.

> [...snip experiment...]

> > Now how can you deduce (from QM theory) the preferred basis and what we
> > "really" measure in this experiment?
>
> You need to describe to me the macroscopic degrees of freedom in your
> experiment, ie, the macrostates by which you are performing your
> observation. With that, simply wait a period equal to a couple times
> the decoherence time and pick the basis in which the reduced density
> matrix is diagonal. You don't have to worry about uniqueness of Schmidt
> bases or whatever.
>
> Aaron

In the previous post I forgot to include, in the Hamiltonian of the
system, the free Hamiltonian of the slit plate:

H = Ho + Ho_plate + Ho_screen + Hint_plate + Hint_screen

Hint_plate = |plate><plate| (x) V(r)

Hint_screen = |screen><screen| (x) Vdiff(r)

My model supposes (see above) that the free Hamiltonians of the plate
and the screen commute with the different interaction Hamiltonians:

a) [Ho_screen,Hint_plate]=0,
b) [Hint_screen,Hint_plate]=0,
c) [Ho_plate,Hint_plate]=0,
d) [Ho_screen,Hint_screen]=0

Therefore if I have at time 0:

|state(o)>=|psi(o)>|plate>|screen>

I will have at *any* time t (whatever the form of the local interaction
V(r) and Vdiff(r) is):

|state(t)>=|psi(t)>|plate>|screen>

=> No entanglement occurs between the state of the photons |psi(t)> and
the states of the plate and screen.

(|plate> and |screen> are the eigenvectors of the free Hamiltonians of
the plate and the screen)

For a sufficiently long time, the initial wave packet has been partly
reflected by the slit plate and partly transmitted before it interacts
with the screen (due to the local interaction, which does not entangle
the photon/electron state with the slit plate):

|psi(t)>= |psi_reflected(t)>+ |psi_slit1(t)>+ |psi_slit2(t)>

Where |<x|(|psi_slit1(t)>+ |psi_slit2(t)>)|^2= interference pattern for
a sufficient time.

Then the part |psi_slit1(t)>+ |psi_slit2(t)> is scattered later by the
screen interaction potential without entanglement with the screen
state.

Therefore, where is the preferred basis, given that in this model there
is no entanglement between the photons and the plate or the screen (the
states of the plate and the screen are not changed during the
experiment)?
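In a small finite-dimensional toy model of this setup (dimensions,
operators, and the evolution time below are invented for the sketch), one
can check numerically that when the interaction has the form
V (x) |plate><plate| with |plate> an eigenstate of the free plate
Hamiltonian, an initial product state stays a product state:

```python
import numpy as np

rng = np.random.default_rng(1)

dA, dB = 4, 3                                  # "photon" and "plate" dimensions

def herm(d):
    """A random Hermitian matrix, standing in for a generic operator."""
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (M + M.conj().T) / 2

H_A  = herm(dA)                                # free photon Hamiltonian Ho
Ho_B = np.diag(np.arange(dB, dtype=float))     # free plate Hamiltonian, diagonal
P_b0 = np.zeros((dB, dB)); P_b0[0, 0] = 1.0    # |plate><plate|, commutes with Ho_B
V_A  = herm(dA)                                # local potential V(r) on the photon

H = (np.kron(H_A, np.eye(dB)) + np.kron(np.eye(dA), Ho_B)
     + np.kron(V_A, P_b0))                     # Ho + Ho_plate + Hint_plate

# Initial product state |psi(0)>|plate>.
psi = rng.normal(size=dA) + 1j * rng.normal(size=dA)
psi /= np.linalg.norm(psi)
b0 = np.zeros(dB); b0[0] = 1.0
state = np.kron(psi, b0)

# Exact time evolution via the eigendecomposition of H.
w, U = np.linalg.eigh(H)
state_t = U @ (np.exp(-1j * w * 2.7) * (U.conj().T @ state))

# Schmidt coefficients: a single singular value ~1 means no entanglement.
schmidt = np.linalg.svd(state_t.reshape(dA, dB), compute_uv=False)
print(schmidt)
```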

Seratend

Eugene Stefanovich

May 31, 2005, 6:21:54 PM
to

Arnold Neumaier wrote:

>
> The light ray of a laser is an electromagnetic field localized in a
> small region along the ray that begins in the laser and ends at the
> photodetector. A ray of intensity I is described by a coherent state
> |I>> = |0> + I|1> + I^2/2|2> + I^3/6|3> + ...
> If I is tiny then, from time to time, an electron responds (in some
> loose way of speaking that itself would need correction) to the
> energy continuously transmitted by the ray by going into an excited
> state, an event which is magnified in the detector and recorded.
> These occasional events form a Poisson process, with a rate proportional
> to the intensity I. This, no more and no less, is the experimental
> observation. It is precisely what is predicted by quantum mechanics.
>
> The traditional sloppy way of picturing this in an intuitive way is to
> say that, from time to time, a photon arrives at the screen and kicks
> an electron out of its orbit. This is a nice picture, especially for
> the newcomer or the layman, but it cannot be taken any more seriously
> than Bohr's picture of an atom, in which electrons orbit a nucleus in
> certain quantum orbits. For nothing of this can be checked by experiment
> - it is empty talk intended to serve intuition, but in fact causing more
> damage than understanding.


A laser ray is a complex phenomenon involving a large number of photons.
How does your coherent state/Poisson process picture describe the
interaction of a single photon with the screen or atom? I guess you
do not dispute the fact that single photons can be routinely prepared
in a laboratory, and their "arrivals at the screen" can be observed.
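As a small check of the coherent-state picture quoted above (the
intensity value is an arbitrary choice): for a normalized coherent state
with |alpha|^2 = I, the photon-number probabilities are exactly Poisson
with mean I:

```python
import math

I = 0.3                       # beam intensity |alpha|^2 (an arbitrary choice)
alpha = math.sqrt(I)

def c(n):
    """Photon-number amplitude <n|alpha> of the normalized coherent state."""
    return math.exp(-I / 2) * alpha ** n / math.sqrt(math.factorial(n))

for n in range(5):
    poisson = math.exp(-I) * I ** n / math.factorial(n)
    print(n, c(n) ** 2, poisson)    # the two columns agree for every n
```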

Eugene Stefanovich.

r...@maths.tcd.ie

May 31, 2005, 6:21:53 PM
to
Arnold Neumaier <Arnold....@univie.ac.at> writes:

>r...@maths.tcd.ie wrote:

>> Arnold Neumaier <Arnold....@univie.ac.at> writes:
>>
>> Your treatment of the Copenhagen
>> interpretation in the article claims that the "unresolved
>> quantum-classical interface issue (including the missing definition
>> of which situations constitute a measurement) is a serious defect
>> of the Copenhagen interpretation when viewed as a fundamental
>> interpretation of quantum mechanics."
>>
>> This is slightly unfair to the Copenhagen interpretation, in
>> which the wavefunction is understood to represent knowledge
>> about the system, rather than the system itself.

>No. As far as I can tell, the first mention of the claim that
>''the wavefunction is understood to represent knowledge'' is by
>Jaynes in the 1950ies, long after the establishment of the
>Copenhagen interpretation.

I may have confused the official Copenhagen interpretation with
what Bohr, Heisenberg, von Neumann and so on believed. As Scerir
pointed out in this thread, Heisenberg said "The discontinuous change
in the probability function, however, takes place with the act
of registration, because it is the discontinuous change
of our knowledge in the instant of registration that has its
image in the discontinuous change of the probability function."
(Heisenberg, "Physics and Philosophy", 1958)

It may be that Heisenberg changed his interpretation of quantum
mechanics before he wrote that. It's even possible that Jaynes
influenced him for all I know. From what you have said about
your own interpretation, I take it that you claim that
Heisenberg was completely wrong when he wrote the
sentence quoted above.

With due respect, and I sincerely mean no offense, I believe
that you have been infected with the mental disease that I
ranted about in an earlier post:
http://groups-beta.google.com/group/sci.physics.research/msg/69ca190957f25c12?dmode=source
My understanding is that this is why you react so negatively to the
suggestion that the wavefunction describes knowledge. There is
nothing inherently absurd about that idea, but physicists and
mathematicians who believe that their subject is "noble", and that
investigation of the mind is "dirty" will reject it immediately and
without very good arguments, although they will do a great deal of
sneering and insulting to make up for the lack of good counter-arguments
(this is not to suggest that you have done so, or that you should,
but it is the way people in general react when their mental diseases
are attacked).

This idea of nobility is, quite frankly, medieval savagery, but
people in general are quite willing to adopt such an idea if it
helps their self-esteem. Everybody wants to be noble. Of course,
that doesn't mean that people are fully aware of what is happening.
The thought "I'm superior to that other group of people over there?
I like that idea," probably doesn't explicitly pass through their
mind, but some low-level mental processing registers exactly that.

>> A definition
>> of measurement isn't missing because measurement is the
>> acquisition of new knowledge.

>This is not a good definition since it is never specified what
>constitutes acquisition of knowledge.

Acquisition of knowledge is what happens when you look at the
measuring device and see where the pointer is pointing. That's
perfectly precise for a normal person, but it seems insufficient
to somebody who wants to know about "the real objective world".

>The theory of knowledge
>acquisition is a branch of psychology, not of physics.

The theory of knowledge acquisition is actually part of
philosophy, and is called epistemology. It is what metaphysics
should be, but unfortunately, in their rush to find out
what "really exists", most people who think about metaphysics
end up doing ontology instead. Ontology is a false hope -
you can never know "what really really exists"; we learn this
if we study epistemology, which is the theory of knowledge
acquisition, and which is qualified to address questions
about what we can know. Those who rush into ontology,
however, never learn this. They take it for granted that
they can know the truth about what's real, and rush off
in search of it. "I'm not interested in knowledge," they
say, "I want to know what's real." When approached by
an epistemologist who wants to help them understand
the situation, they react with scorn, declaring that
the theory of knowledge acquisition is a branch of
psychology, implying that it is therefore unworthy
of study, less noble than the quest for what's "real".

Of course, if this is pointed out to them, they
deny it, saying "Why, not at all - I am the most
reasonable of fellows. I have carefully deliberated
and decided that what I am doing is the most sensible
thing to do. Let me refute your arguments," and then
they produce arguments which are unconvincing even to
themselves.

>> State vector reduction happens
>> because the observer acquires new knowledge and then updates
>> the mathematical representation of his knowledge to reflect
>> the new knowledge that he has.

>I doubt whether any observer updates his or her knowledge according
>to Bayesian reasoning. Field studies probably show large deviations
>from this supposedly universal behavior.

I have seen this kind of argument many times, and have never
understood why anybody would think it was valid. The general
form of the argument goes like this:

A: X is the proper way to do task Y.
B: That's wrong because X is not the way the average man on the street
attempts to do task Y.

Now, it seems to me abundantly clear that what the man on the
street does is of little relevance to the question of whether
X is the proper way to do task Y. The assertion that Bayesian
reasoning is the correct way to proceed when one has incomplete
information is not an assertion about how people behave, and
cannot be disproved by field studies.

>Furthermore, knowledge depends on subjective decisions to trust
>a measurement. If we discard one as an artifact, there is no
>collapse. How can the collapse depend on such subjective issues?

In the "wavefunction represents knowledge" interpretation, the
wavefunction is not an objective thing, but different observers
will use different wavefunctions, depending on what knowledge they
have about the system. The "collapse" is what happens when the
observer receives new knowledge, and updates his mathematical
representation of his knowledge to reflect the new knowledge that
he has. This is a subjective thing, because a different observer,
who has not received any new knowledge, will continue to use his
original wavefunction, and so the collapse is not objective. So
the answer to "How can the collapse depend on such subjective
issues?" is that the collapse itself is subjective. This is evident
from what Heisenberg said above about the "discontinuous change in
our knowledge."
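This has a loose classical analogue (a sketch only; it is not claimed
that quantum collapse is literally classical conditioning): two observers
describe the same die with probability distributions, and only the one
who looks updates his:

```python
# Prior knowledge of both observers: a fair six-sided die.
prior = {face: 1 / 6 for face in range(1, 7)}

def update(dist, observed):
    """Condition a distribution on an observed outcome (the 'collapse')."""
    return {face: (1.0 if face == observed else 0.0) for face in dist}

alice = dict(prior)   # Alice looks at the die and sees a 3
bob   = dict(prior)   # Bob has not looked yet

alice = update(alice, 3)

print(alice[3], alice[5])   # 1.0 0.0   -- Alice's description has "collapsed"
print(bob[3])               # Bob still uses his original distribution
```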

Recall that subjective doesn't mean simply bad. It means that
the thing in question is particular to a single observer,
and is not common to all observers.

>At the time of Bohr, von Neumann and Wigner, the collapse meant
>something objective, though it might have been related to the mind
>in some unspecified way.

I have to disagree with that, although I do not mean it in an
adversarial way. The relation to the mind was perfectly clear and
very specific for these people, at least by the '50s. Also, since
they understood that the wavefunction represented knowledge, the
collapse wasn't an objective thing for them.

>>>Von Neumann takes the collapse as an axiom, hence also testifies to its
>>>reality.
>>
>> He uses it as an axiom, but that doesn't mean that he claimed that
>> the wavefunction didn't represent knowledge.

>But he certainly didn't claim that the wavefunction does represent
>knowledge.

As I quoted before,

"Let us assume that we do not know the state of a system, S, but
that we have made certain measurements about the state of S and
know their results. In reality, it always happens this way, because
we can learn something about the state of S only from the results
of measurements. More precisely, the states are only a theoretical
construction, only the results of measurements are actually available,
and the problem of physics is to furnish relationships between the
results of past and future measurements." p. 337

This is exactly a claim that the wavefunction represents
knowledge. "The states are a theoretical construction,
only the results of measurements are actually available" refers
to the fact that the results of measurements are the knowledge
available, and that the states are a theoretical construction
which encode that knowledge.

>> in his 1936 paper with Birkhoff, "The Logic of
>> Quantum Mechanics", for example, he expresses the view
>> that the formalism of quantum mechanics is the way it
>> is because the algebra of Hilbert-space subspaces is
>> that of a non-distributive orthomodular lattice, which
>> matches the structure of the collection of experimentally
>> verifiable propositions about a system. This seems to
>> me to be an indication that he considered rays of Hilbert
>> space to be associated with propositions (knowledge), rather
>> than with the actual configuration of the system.

>No. A proposition is a statement that is true or false,
>or undecidable. It has nothing to do with whether or not
>anyone knows (or claims to know) its truth or falsehood.

Logic, which includes the propositional calculus, is the formal
science of inference, and inference can only be done by the mind.
An inference is what allows one to derive new knowledge from
knowledge that one already has. Knowledge is always of the
form "I know that proposition X is true", so propositions
certainly have a lot to do with knowledge.

What you have done is to suggest that I said that what
a proposition is depends on whether somebody knows or
claims to know its truth or falsehood. I never said
that, and I don't claim it now.

The desire to assert that logic has nothing to do with the mind is,
I believe, rooted in the primitive notion of nobility, since logic
is clearly part of the foundation of mathematics and therefore
worthy of respect, while the mind is the province of philosophers
and psychologists, who are not worthy of a physicist's respect. The
assertion that logic has nothing to do with the mind, however, is
evidently incorrect. My dictionary defines logic as "the science
of reasoning, proof, thinking, or inference", which means that it
is the science of certain mental acts.

I have to anticipate how somebody could reject something as
simple as this. The only thing I can think of is that somebody
might claim that, since computers can be programmed to do
symbolic manipulation, logic has nothing to do with thinking.

The problem with this argument is that the fact that computers
can do the symbolic manipulation associated with formal logic
indicates only that logic can be represented by symbolic
manipulations, but establishes nothing about what those
symbolic manipulations describe. Logic was established
in its present form because those symbolic manipulations
describe certain rules of correct thinking.

>> "Let us assume that we do not know the state of a
>> system, S,

>This assumption already shows that the state of the system
>must exist independent of our knowledge.

Now, as far as you were aware when you read that, nobody
had claimed that the state of the system didn't exist.

I was not asserting that the state of the system, considered as a
separate thing from our knowledge of it, doesn't exist. I was
asserting that von Neumann was aware that we only know the results
of measurements, and so these are what the mathematical symbols
that we write down represent. He goes on to say:

"More precisely, the states are only a theoretical construction,
only the results of measurements are actually available, and the
problem of physics is to furnish relationships between the results
of past and future measurements. To be sure, this is always
accomplished through the introduction of the auxiliary concept
"state", but the physical theory must then tell us on the one hand
how to make from past measurements inferences about the present
state, and on the other hand, how to go from the present state to
the results of future measurements." p. 337

What he is saying is that, in quantum mechanics, what we call
a "state" is actually a theoretical construction which incorporates
information about the results of past measurements on the system.
That is why the wavefunction represents knowledge. We are free,
of course, to say that the actual system is in a state which
is distinct from our knowledge of it, and that the measurements
tell us information about the "real" state, but the "state"
in quantum mechanics incorporates only whatever information
is available from the results of past measurements, and the
concept of the "real" state is an auxiliary concept. To say
that it is an auxiliary concept does not denigrate it in
any way, or insult its nobility, or deny the existence
of an actual state of the system, but it means that the concept
carries with it no information which is relevant for predicting
future measurement results based on past ones.

>> The principle of psycho-physical
>> parallelism tells us that, whatever we claim to know
>> about the physical world, what we actually know about
>> is what's going on inside our body,

>I don't buy this. What we know is some platonic extract
>extrapolated from sense data. And much of it is mistaken
>in detail, but still we think we know and act accordingly.
>It has nothing to do with physics as understood pragmatically.

Basically, you are saying that knowledge is a dirty thing,
not worth investigating for a noble physicist. You are
right that it has nothing to do with physics as understood
pragmatically, which means that it is not relevant for
the practical purposes of turning on the measuring device
and pressing the buttons on it. On the other hand, it
is relevant for a proper understanding of physics.

Let me try to address your concern that knowledge is
human and therefore fallible. It might be said that
inference is human and therefore fallible. That is
why we develop logic as a formal science - to
relieve us of the labour of making inferences ourselves.
The fallibility of humans doesn't mean that logic
is somehow flawed; it means that people make mistakes in
their application of it.

With knowledge, there is also a way to think and process
knowledge without making mistakes, although people
might not always succeed in using it properly. That
is why we look for a symbolic formalism; if we develop
one, mistakes will be easier to spot, as they are in
logic.

Also, when you say "I don't buy this," are you saying that
you don't believe that von Neumann held this opinion,
namely that the principle of psycho-physical parallelism
tells us that we can consider what we are observing
to be within our own bodies? Because he did:

"We wish to measure a temperature. ... [we can] say: this
temperature is measured by the thermometer. ... we can
calculate the resultant length of the mercury column,
and then say: this length is seen by the observer. Going
still further, and taking the light source into consideration ...
we would say: this image is registered by the retina of the
observer. And were our physiological knowledge more precise
than it is today, we could go still further, tracing the
chemical reactions which produce the impression of this image on
the retina, in the optic nerve tract and in the brain, and then in
the end say: these chemical changes of his brain cells are
perceived by the observer." p.419

"That this boundary can be pushed arbitrarily into the interior
of the body of the observer is the content of the principle
of the psycho-physical parallelism." p.420

>> You might also want to read the paper by Lon Becker:
>> "That von Neumann Did Not Believe in a Physical Collapse",
>> http://bjps.oupjournals.org/cgi/content/abstract/55/1/121

>I'll read it and comment later, if I have more to say than
>what I said already.

Lon seems to be trying to present the view that von Neumann
believed in a relative-state interpretation, which is
presumably his own favourite interpretation, and I further
presume that he believed that he could garner support
for his interpretation by claiming that von Neumann
believed it.

Similarly, you are trying to claim that von Neumann shared your
"collapse is a physical process" interpretation, and I assert that
he believed the wavefunction represented knowledge.

He also didn't have the "subjective means bad" attitude of modern
physicists, and was aware that what we deal with in physics is
not "the real world", but rather with subjective observations:
"Indeed experience only makes statements of this type: an observer
has made a certain (subjective) observation; and never any like
this: a physical quantity has a certain value." p.420

For him, the distinction between the observer and the observed
was of fundamental importance in quantum mechanics; this is
the so-called quantum/classical boundary:
"That is, we must always divide the world into two parts,
the one being the observed system, the other the observer. ...
The boundary between the two is arbitrary to a large extent. ...
but this does not change the fact that in each method of description
the boundary must be placed somewhere, if the method is not to
proceed vacuously, i.e., if a comparison with experiment is to be
possible." p.420

So, from von Neumann's point of view, to use a "wavefunction of the
universe" would be to proceed vacuously.

R.

Eugene Stefanovich

May 31, 2005, 6:21:53 PM
to

Aaron Bergman wrote:

> Why do we observe outcomes with probability |<a|psi>|^2? QM has no
> answer for this question.

Let me add my two cents to this debate.

1. Results of measurements performed on micro-systems are
inherently statistical/unpredictable. If you prepare twice the same
system in the same state, and measure the same observable,
you may get two different measurement results.

2. Quantum mechanics does not explain the origin of these probabilities.
All QM can do is to calculate these probabilities.
In textbook QM, the formula |<a|psi>|^2 is a postulate, but this
formula can be derived from a more fundamental "quantum logic" approach.
(see chapter 4 in physics/0504062)
If you know the rules of quantum mechanics, you can describe the
state of your system by a vector |psi> in the Hilbert space,
and the measurement by another vector |a>, and calculate/predict
the probability of finding value a in the state |psi> by using the
above formula.

3. Quantum mechanics cannot "explain" why each time you measure
observable A in the state |psi> you obtain different values
a_1, a_2, a_3...
QM cannot predict exactly which value will occur next.
It can only predict the probability for each possible outcome.

4. The probabilistic behavior of micro-systems will either be explained
by a theory that goes beyond quantum mechanics (there is no
such theory, to the best of my knowledge) or, most likely,
never explained.
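The calculational content of point 2 can be shown in a minimal example
(a spin-1/2 system chosen for illustration): diagonalize the observable,
project the state onto each eigenvector |a>, and square:

```python
import numpy as np

# Measure S_x on a spin-1/2 prepared in the S_z "up" state (hbar = 1).
psi = np.array([1.0, 0.0])                 # |up_z>
Sx = 0.5 * np.array([[0.0, 1.0], [1.0, 0.0]])

vals, vecs = np.linalg.eigh(Sx)            # possible outcomes a and vectors |a>
probs = np.abs(vecs.conj().T @ psi) ** 2   # Born rule |<a|psi>|^2

for a, p in zip(vals, probs):
    print(f"outcome {a:+.1f} with probability {p:.2f}")   # each 0.50
```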

Eugene Stefanovich.

Arnold Neumaier

May 31, 2005, 6:21:51 PM
to
Seratend wrote:

> Arnold Neumaier wrote:
>
>>And these continuous models are the ones that govern macroscopic
>>observations and hence all real raw measurements. Hydrodynamics is
>>well-defined only in the continuum limit, _not_ as a discrete theory.
>>
> I prefer to say these continuous models (in the absolute) are one of
> the models that give results (approximations at the limit) compatible
> with experiments. However, discrete models can give the same
> approximation at the limit where they become continuous (or if you
> prefer uncountable) (otherwise it would be very difficult to make
> simulation models on computers).

Of course any continuous model can be approximated by a plethora of
discrete ones. But there are _many_ approximations and only one _nice_
theory. The nice one is the one chosen by the overwhelming majority
of physicists.

> Hydrodynamics is well-defined with continuous variables and practical.
> However, this does not imply that continuous models govern macroscopic
> observation just their results are compatible with experiment.

Macroscopic observations are _defined_ in terms of observables
which get their meaning from the traditional theories of mechanics.
These use continuous models.

>
> Infinities (big or small) are good mathematical objects to make
> approximations. However, I have never seen an infinity object in "real"
> experiments. Therefore, I always try to see if finite models are
> sufficient to explain the results even if they are not useful for
> explicit calculation. This is why I prefer the finite formulation of
> what I may "see" (even if I appear to be old-fashioned with such an
> idea : ).

Of course you are free to prefer whatever you like best.
But few people will want to imitate you since your preference
is very cumbersome.

>>If you shrink the domains sufficiently much you are left with less than
>>one atom per cell, and cannot maintain your formulas meaningfully.
>>
>
> If you assume the continuous model, you accept that I may shrink the
> domain in order to keep an infinite countable number of random
> variables in this domain (I have access to all the infinites I want as
> long as I am coherent).

Yes, but these random variables are no longer the 'number of particles
in the cell' which you employed, since these random variables cease
to be meaningful. One cannot split particles...
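Neumaier's point about shrinking cells can be illustrated with a toy simulation (all numbers are my choices): place particles uniformly in a one-dimensional box and count how many fall in a cell of shrinking width. Once the cell is smaller than the mean inter-particle spacing, the count is almost always 0 or 1, and the empirical "density in the cell" fluctuates wildly instead of approaching a smooth field value.

```python
import random

random.seed(1)
N, L = 100_000, 100.0                           # 100000 particles, box length 100
xs = [random.uniform(0, L) for _ in range(N)]   # mean density rho = N/L = 1000

def count_in_cell(width):
    """Number of particles in the cell [0, width)."""
    return sum(1 for x in xs if x < width)

for w in (1.0, 1e-2, 1e-4):                     # shrinking cells
    n = count_in_cell(w)
    print(w, n, n / w)                          # empirical density n/w
```

At width 1e-4 the expected count is about 0.1, so most cells are empty and "number of particles in the cell" stops behaving like a continuous density.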


Arnold Neumaier

Aaron Bergman

Jun 1, 2005, 12:42:34 AM
In article <1117568337....@g14g2000cwa.googlegroups.com>,
Seratend <ser_m...@yahoo.fr> wrote:

> Aaron Bergman wrote:
> > In article <1117448074....@f14g2000cwb.googlegroups.com>,
>
> > > This is the born rules, a postulate of QM.
> >
> > Not so much. It works as a good effective description, but if it refers
> > to an explicit nonunitary collapse process, it fails to give a
> > description of it. If it does not refer to collapse, then it fails to
> > explain our perception of (one part of) the reduced density matrix
> > (say), rather than the full coherent wavefunction.
> >
> In my opinion, I think you are trying to say more than QM theory says.

Funny, I feel the same way.

> You seem to be an adept of wave function reality, hence you try to
> define something out of the scope of the current QM theory formulation
> (an explication of the collapse) while I simply take the born rules as
> statistics of outcomes and the collapse postulate the property where a
> given outcome value of a system is true ("Outcome A=a" true).

I don't consider the Born rule part of QM. QM is unitary evolution.
Everything else is what we do to make sense of the wavefunction. The
Born rules are a pragmatic procedure, but they lack a microscopic
mechanism.

[...]

Are you advocating a sort of consistent histories approach? That seems
to me to be a language in which to describe quantum outcomes, but
nothing like an interpretation. Regardless, when you make a real-world
measurement, you better collapse the wavefunction, whether through
decoherence or some other manner, or you will get the wrong answer. For
future measurements.

[...]

> > > Ok, I will try to explain what I mean. I have no problem with the
> > > preferred basis (the basis where I have the experiment outcomes). I
> > > just have a problem with the prediction of such a basis by QM. I mean,
> > > where in the QM theory do I have results inferring the preferred basis?
> > > Up to now, I just know the preferred basis after I have done the
> > > experiments. For example the interference pattern of double slit
> > > experiments observed on the screen. I know, from this experiment, if I
> > > place a screen, I will do a position measurement. But I have not seen
> > > anywhere in QM, where the mathematics can predict such a basis as every
> > > basis is possible from the theory point of view.
> >
> > No, you know ahead of time that, if you place the screen, you do a
> > position measurement.
>
> Because I have already learned it in an experiment before: I see an
> interference pattern when I place a screen, hence the position
> measurement.

No. You understand your measurement apparatus. It's not a mystery. There
are macroscopic degrees of freedom -- work backwards from there and you
know what your 'preferred basis' is.

[snip to end]

I'm sorry, but I can't figure out what you're talking about in your
experiment.

Aaron

Arnold Neumaier

Jun 1, 2005, 9:27:44 AM
Eugene Stefanovich wrote:
>
> Arnold Neumaier wrote:
>
>> The light ray of a laser is an electromagnetic field localized in a
>> small region along the ray that begins in the laser and ends at the
>> photodetector. A ray of intensity I is described by a coherent state
>> |I>> = |0> + I|1> + I^2/2|2> + I^3/6|3> + ...
>> If I is tiny then, from time to time, an electron responds (in some
>> loose way of speaking that itself would need correction) to the
>> energy continuously transmitted by the ray by going into an excited
>> state, an event which is magnified in the detector and recorded.
>> These occasional events form a Poisson process, with a rate proportional
>> to the intensity I. This, no more and no less, is the experimental
>> observation. It is precisely what is predicted by quantum mechanics.
>>
>> The traditional sloppy way of picturing this in an intuitive way is to
>> say that, from time to time, a photon arrives at the screen and kicks
>> an electron out of its orbit. This is a nice picture, especially for
>> the newcomer or the layman, but it cannot be taken any more seriously
>> than Bohr's picture of an atom, in which electrons orbit a nucleus in
>> certain quantum orbits. For nothing of this can be checked by experiment
>> - it is empty talk intended to serve intuition, but in fact causing more
>> damage than understanding.
>
> A laser ray is a complex phenomenon involving a large number of photons.

Not complex in the typically used models.
The large number of photons is described by a single coherent state.
I recommend that you read some thorough quantum optics books such as
the comprehensive
L. Mandel and E. Wolf,
Optical Coherence and Quantum Optics,
Cambridge University Press, 1995.
or the lighter
U. Leonhardt,
Measuring the Quantum State of Light,
Cambridge, 1997.


> How does your coherent state/Poisson process picture describe the
> interaction of a single photon with the screen or atom?

This is described in the book by Mandel and Wolf just quoted.


Arnold Neumaier

Arnold Neumaier

Jun 1, 2005, 9:27:44 AM
r...@maths.tcd.ie wrote:

I commented on that already. The 'act of registration' happens on the
photographic plate or in the eye, not in the mind, and is simply
the irreversible magnification due to dissipation by interaction with
a macroscopic detector. It is objective and has no connection to
any 'knowledge'.


> It may be that Heisenberg changed his interpretation of quantum
> mechanics before he wrote that. It's even possible that Jaynes
> influenced him for all I know. From what you have said about
> your own interpretation, I take it that you claim that
> Heisenberg was completely wrong when he wrote the
> sentence quoted above.

No; only that your reading of what he said in terms of
knowledge is a postmodern interpretation, and neither the
Copenhagen interpretation nor Heisenberg's intention.


> With due respect, and I sincerely mean no offense, I believe
> that you have been infected

Whatever I am infected with, I hope it is highly infectious
and incurable, so that it spreads and has a lasting effect.


> with the mental disease that I
> ranted about in an earlier post:
> http://groups-beta.google.com/group/sci.physics.research/msg/69ca190957f25c12?dmode=source

This is a long post; I cannot recognize myself reflected in it.
Neither do I recognize signs of a mental disease in my behavior.


> My understanding is that this is why you react so negatively to the
> suggestion that the wavefunction describes knowledge.

I followed the historical development of the interpretations of QM
quite closely, reading hundreds of papers, to be able to make up my
own mind of how _I_ should interpret QM (and other physics).
In the discussions on s.p.r., I share my insights for those who might
wish to learn from it. I simply think that phrasing objective
descriptions in a psychological language, making them dependent on
mental processes, is neither necessary to understanding nor does it
serve any useful purpose. There is nothing inherently absurd about
this assessment.


> There is
> nothing inherently absurd about that idea, but physicists and
> mathematicians who believe that their subject is "noble", and that
> investigation of the mind is "dirty" will reject it immediately and
> without very good arguments,

I have considered many arguments from all sides of the discussion, and
believe I have excellent arguments for my point of view, ones that can
compete well with other arguments. They have nothing to do with
nobility, but with realism, intelligibility, and easy visualization.
I believe that certain things really exist and can be described
objectively, and that even the subjective, observer-dependent
aspects can be described objectively. This makes for clear foundations,
which is my supreme objective in my quest about physics.


>>>A definition
>>>of measurement isn't missing because measurement is the
>>>acquisition of new knowledge.
>
>>This is not a good definition since it is never specified what
>>constitutes acquisition of knowledge.
>
> Acquisition of knowledge is what happens when you look at the
> measuring device and see where the pointer is pointing. That's
> perfectly precise for a normal person, but it seems insufficient
> to somebody who wants to know about "the real objective world".

It seems that it is sufficient for you. But it is insufficient for me.

I want a mathematical model of reality within which one can clearly
say what exists, what is an experiment, an observer, a measurement,
a record, etc., in such a way that one can predict in principle which
experiments give outcomes with which accuracy.
Such arguments are common in quantum mechanical foundations (e.g.
discussions of the Heisenberg microscope) but are currently based
only on informal notions of experiment, observer, measurement, and
record.

My goal is to put the foundations of physics on a basis similar to
the foundations of mathematics, where the whole logical process of
coherent deduction can be modelled on a metalevel, giving the
foundations of mathematics a clarity that is missing in physics.

And I think that such foundations are possible and will provide the
same clarity for physics.


>>The theory of knowledge
>>acquisition is a branch of psychology, not of physics.
>
> The theory of knowledge acquisition is actually part of
> philosophy, and is called epistemology.

Philosophy can only discuss what should be, not how knowledge
acquisition actually happens. The latter is an experimental question,
not a purely deductive one, and hence belongs to psychology,
not to philosophy.


> When approached by
> an epistemologist who wants to help them understand
> the situation, they react with scorn, declaring that
> the theory of knowledge acquisition is a branch of
> psychology, implying that it is therefore unworthy
> of study, less noble than the quest for what's "real".

No. First, I didn't react with scorn (again you put guessed
emotions of your choice into my statements), but simply observed
what to me is a fact. Second, I don't think that psychology
is not worth studying, quite on the contrary, it is a very
interesting science. I only assert that psychology is a poor
foundation for physics.


> Of course, if this is pointed out to them, they
> deny it, saying "Why not at all - I am the most
> reasonable of fellows.

Everyone who has a sensible point of view argues that way,
including you. What I point out to you does not seem sensible
to you, and conversely. This is the natural situation in
topics of controversy, and does not prove that you are right.


>>>State vector reduction happens
>>>because the observer acquires new knowledge and then updates
>>>the mathematical representation of his knowledge to reflect
>>>the new knowledge that he has.
>
>>I doubt whether any observer updates his or her knowledge according
>>to Bayesian reasoning. Field studies probably show large deviations
>>from this supposedly universal behavior.

> The assertion that Bayesian reasoning is the correct way to proceed
> when one has incomplete information is not an assertion about how
> people behave, and cannot be disproved by field studies.

If you assert that the wave function is about knowledge then knowledge
resides somewhere - according to you in some mind. But the claimed
behavior of these minds is something that can be studied by field
studies.

Your argument sounds as if you are not claiming that the wave function
collapse is about the change of real knowledge in real minds,
but about how knowledge should change if someone observes something
and acts completely rationally. But then it becomes a moral statement
completely outside science.

However, the collapse was formulated by the founders as a necessity to
make sense of quantum mechanics, and not as a postulate about moral
standards for maintaining knowledge in minds.
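Whether or not one accepts the "knowledge" reading, the formal analogy under dispute is easy to state side by side (all numbers below are my choices, not from the thread): classical conditioning replaces a prior p(H) by p(H | D), while the projection postulate replaces |psi> by P|psi> / ||P|psi>||.

```python
import math

# Classical Bayes update on a datum D:
prior = {'H1': 0.5, 'H2': 0.5}
like = {'H1': 0.9, 'H2': 0.2}               # P(D | H)
Z = sum(prior[h] * like[h] for h in prior)
post = {h: prior[h] * like[h] / Z for h in prior}

# Quantum update after observing outcome a1 (in the measurement basis):
psi = [0.6, 0.8]                            # amplitudes <a1|psi>, <a2|psi>
proj = [psi[0], 0.0]                        # project onto |a1>
norm = math.sqrt(sum(c * c for c in proj))
collapsed = [c / norm for c in proj]        # ~ [1.0, 0.0] up to rounding
print(post, collapsed)
```

The formal parallel is what the "knowledge update" reading leans on; whether the quantum step is only bookkeeping, as in the classical case, is exactly the point the two posters disagree about.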


>>Furthermore, knowledge depends on subjective decisions to trust
>>a measurement. If we discard one as an artifact, there is no
>>collapse. How can the collapse depend on such subjective issues?
>
> In the "wavefunction represents knowledge" interpretation, the
> wavefunction is not an objective thing,

How then can a non-objective thing change in time in an objective way???
(Please don't be offended by the three ?s!)


> but different observers
> will use different wavefunctions, depending on what knowledge they
> have about the system.

If I know nothing about an experiment, which wave function should I use?
Should I use, instead of a pure state, the microcanonical ensemble
suggested by many statistical mechanics treatments as a noninformative
prior? Then I make observations and find that they are not in accordance
with the predictions of my ensemble, since it is born of ignorance
rather than knowledge...


> The "collapse" is what happens when the
> observer receives new knowledge, and updates his mathematical
> representation of his knowledge to reflect the new knowledge that
> he has.

This must be a fictitious observer invented to suit your interpretation.

A real observer with a real mind has no wave function in his mind
that changes unitarily according to a differential equation whose
solution requires a computing capacity far beyond the mind's power, and
that, once he sees a measurement (any look out of the window, or only a
careful look at the detector needle to be sure of the third decimal?),
computes the solution of the corresponding eigenvalue problem to
find out how the wave functions must be collapsed to be consistent.

At least you won't find that when interrogating the most competent
experimental physicists who know how they update their knowledge.


> Recall that subjective doesn't mean simply bad.

I never assumed that. But subjective means outside the realm of science,
unless that subjectivity can be explained and predicted by models of how
it arises from something objective, such as the subjective
observer-dependence in special and general relativity.


>>At the time of Bohr, von Neumann and Wigner, the collapse meant
>>something objective, though it might have been related to the mind
>>in some unspecified way.
>
> I have to disagree with that, although I do not mean it in an
> adversarial way. The relation to the mind was perfectly clear and
> very specific for these people, at least by the '50s. Also, since
> they understood that the wavefunction represented knowledge, the
> collapse wasn't an objective thing for them.

Please support your claims by solid evidence!


>>>>Von Neumann takes the collapse as an axiom, hence also testifies to its
>>>>reality.
>>>
>>>He uses it as an axiom, but that doesn't mean that he claimed that
>>>the wavefunction didn't represent knowledge.
>
>>But he certainly didn't claim that the wavefunction does represent
>>knowledge.
>
> As I quoted before,
>
> "Let us assume that we do not know the state of a system, S, but
> that we have made certain measurements about the state of S and
> know their results. In reality, it always happens this way, because
> we can learn something about the state of S only from the results
> of measurements. More precisely, the states are only a theoretical
> construction, only the results of measurements are actually available,
> and the problem of physics is to furnish relationships between the
> results of past and future measurements." p. 337
>
> This is exactly a claim that the wavefunction represents
> knowledge.

I cannot understand how you can possibly arrive at this statement.
If your claim were true, what von Neumann actually said (first sentence)
would mean: ''Let us assume that we do not know what we know (the state
of S)'', and then he deduces correctly from this (obviously false)
premise everything he likes.


> "The states are a theoretical construction,
> only the results of measurements are actually available" refers
> to the fact that the results of measurements are the knowledge
> available, and that the states are a theoretical construction
> which encode that knowledge.

No; it refers to the fact that the state is something that exists
(since we can learn something about it) but we _don't_ know,
hence must infer by a theoretical construction, while we _do_ know
the results of the measurements, and can infer from them partial
information about the state.

This is a much more coherent interpretation of his statement,
and conforms quite well with the form of knowledge experimenters
actually have, and with the practice of state estimation in
high quality quantum experiments.


>>No. A proposition is a statement that is true or false,
>>or undecidable. It has nothing to do with whether or not
>>anyone knows (or claims to know) its truth or falsehood.
>
> Logic, which includes the propositional calculus, is the formal
> science of inference, and inference can only be done by the mind.

No. It is routinely (and more reliably) done by computers.

We accept (usually without doubt) the inferences that a
system like Mathematica performs upon our requests.
And if we doubt, we usually doubt first _our_ abilities to
make the right requests (resulting in a process called 'debugging')
rather than the inference abilities of the computer.


> An inference is what allows one to derive new knowledge from
> knowledge that one already has. Knowledge is always of the
> form "I know that proposition X is true", so propositions
> certainly have a lot to do with knowledge.

Of course knowledge is about propositions, but propositions
are not about knowledge.

People discuss the consequences of the proposition
'The Riemann hypothesis is true' in the absence
of any knowledge about the truth of this statement.
The same happens routinely in proofs by contradiction,
where we assume some proposition although we know (and want
to demonstrate to someone else) that it is _not_ true.


> The desire to assert that logic has nothing to do with the mind is,
> I believe, rooted in the primitive notion of nobility,

No. For example, it can be rooted in the fact that logic can
be performed by microchips, which have little to do with mind as
commonly understood.

> I was
> asserting that von Neumann was aware that we only know the results
> of measurements,

I agree with this assertion. It is in flat contradiction with your claim
that the wave function represents our knowledge. For a wave function
needs infinitely many bits to specify, while the results of measurements
(according to what you just stated, the _only_ thing we know about the
system) can be coded in the finite number of bits making up a protocol.

> "More precisely, the states are only a theoretical construction,
> only the results of measurements are actually available, and the
> problem of physics is to furnish relationships between the results
> of past and future measurements. To be sure, this is always
> accomplished through the introduction of the auxilliary concept
> "state", but the physical theory must then tell us on the one hand
> how to make from past measurements inferences about the present
> state, and on the other hand, how to go from the present state to
> the results of future measurements." p. 337
>
> What he is saying is that, in quantum mechanics, what we call
> a "state" is actually a theoretical construction

but with the same objective status as mass, temperature, momentum,
charge distribution, etc. of an object. These are also theoretical
constructs used to organize our observations.

And with the same objective status as the galaxy as an assembly of
myriads of hot and heavy stars, a theoretical construction used to
organize the information we can gather about certain light dots in
the sky.

All of physics is theoretical construction based on past measurements.
Even the measurement results ('the spin of this particle was up')
themselves are theoretical constructions, indirectly derived from
the raw observations.

> which incorporates
> information about the results of past measurements on the system.

just as we infer the temperature field in a room from
information about the results of past measurements of a thermometer.


> That is why the wavefunction represents knowledge.

In this sense, it is a tautology. But this is not what von Neumann
could have had in mind.

>>>The principle of psycho-physical
>>>parallelism tells us that, whatever we claim to know
>>>about the physical world, what we actually know about
>>>is what's going on inside our body,
>
>>I don't buy this. What we know is some platonic extract
>>extrapolated from sense data. And much of it is mistaken
>>in detail, but still we think we know and act accordingly.
>>It has nothing to do with physics as understood pragmatically.
>
> Basically, you are saying that knowledge is a dirty thing,

No. You read this into my statements. Knowledge has nothing to
do with cleanliness. Dirty things can be washed; I wouldn't know
how to wash knowledge.


Knowledge is what we (think we) know. This may be a number of
experimental results to within some accuracy, an approximate
description of a quantum mechanical state, the rough
temperature distribution in a room, the behavior of a piece of
equipment according to the manufacturer's manual (perhaps
corrected by our own calibration experiments), the weight,
length and age of the persons working in a room, etc.
It is (in some idealization) something describable in a finite
string of symbols.

On the other hand, fundamental physics is about the mathematical
model of Nature resulting from such information. This model
(von Neumann's ''theoretical construction'') is inferred from
observations and contains more accurate parts, less accurate parts,
probably a few mistaken parts, and completely unknown parts -
it is like a 17th century world map, but for the
physical phenomenon under study instead. The objective state of the
system is one of the plethora of states compatible with the
available information - which one, we don't know. But if we know
sufficiently much, all compatible states are approximately the same,
so working with any particular one of them will give good predictions.

Nothing here prevents one from taking the system to be the whole universe.
The state of the universe must simply be compatible with all details
we observed in the parts of the universe accessible to our experiments.


> Also, when you say "I don't buy this," are you saying that
> you don't believe that von Neumann held this opinion,
> namely that the principle of psycho-physical parallelism
> tells us that we can consider what we are observing
> to be within our own bodies? Because he did:
>
> "We wish to measure a temperature. ... [we can] say: this
> temperature is measured by the thermometer. ... we can
> calculate the resultant length of the mercury column,
> and then say: this length is seen by the observer. Going
> still further, and taking the light source into consideration ...
> we would say: this image is registered by the retina of the
> observer. And were our physiological knowledge more precise
> than it is today, we could go still further, tracing the
> chemical reactions which produce the impression of this image on
> the retina, in the optic nerve tract and in the brain, and then in
> the end say: these chemical changes of his brain cells are
> perceived by the observer." p.419
>
> "That this boundary can be pushed arbitrarily into the interior
> of the body of the observer is the content of the principle
> of the psycho-physical parallelism." p.420

Von Neumann says that collapse happens in each particular physical
system (defined by its boundary), but that consistency requires that
if we regard a particular system as part of a bigger system then
the collapse of the larger system must give, for the smaller system,
results compatible with the collapse of the smaller system considered
by itself. This is nothing more than an obvious compatibility
condition. It has nothing to do with the nature of the two systems.
You might care to notice that von Neumann carefully avoids invoking
either the 'mind' or the observer's 'knowledge'.

Von Neumann simply argues that the collapse is consistent with the
psycho-physical parallelism (to the extent that one can define the
latter by the assertion that the ''boundary can be pushed arbitrarily
into the interior of the body of the observer''). But his general
argument does not require a body or a brain; it is true wherever
the boundary is placed, for example when the boundary is placed
between the exposed photographic plate and the process developing
the plate to see the picture.

Thus the psycho-physical parallelism is completely inessential for
the interpretation of the collapse.


>>>You might also want to read the paper by Lon Becker:
>>>"That von Neumann Did Not Believe in a Physical Collapse",
>>>http://bjps.oupjournals.org/cgi/content/abstract/55/1/121
>
>>I'll read it and comment later, if I have more to say than
>>what I said already.

I read it and found it wanting. It projects a particular
prejudice into his statements.


> He also didn't have the "subjective means bad" attitude of modern
> physicists, and was aware that what we deal with in physics is
> not "the real world", but rather with subjective observations:
> "Indeed experience only makes statements of this type: an observer
> has made a certain (subjective) observation; and never any like
> this: a physical quantity has a certain value." p.420

Von Neumann is more careful in his use of language than you in your
interpretation of his words.

There is a difference between 'experience' and 'experiment'.
The former is a psychological concept; the latter is a concept
of physics.

An experience produces subjective sensory perceptions;
an experiment produces recorded values of physical quantities.


> For him, the distinction between the observer and the observed
> was of fundamental importance in quantum mechanics; this is
> the so-called quantum/classical boundary:
> "That is, we must always divide the world into two parts,
> the one being the observed system, the other the observer. ...
> The boundary between the two is arbitrary to a large extent. ...

... to such an extent that his observer can be an inanimate object
like a camera or a thermometer.


> but this does not change the fact that in each method of description
> the boundary must be placed somewhere, if the method is not to
> proceed vacuously, i.e., if a comparison with experiment is to be
> possible." p.420
>
> So, from von Neumann's point of view, to use a "wavefunction of the
> universe" would be to proceed vacuously.

Only in this last statement do I agree with your interpretation of
his position.

At this point my view of quantum mechanics differs from his.
And with good grounds.


Arnold Neumaier

Arnold Neumaier

Jun 2, 2005, 1:28:44 AM
scerir wrote:

> Arnold Neumaier
>
>>At the time of Bohr, von Neumann and Wigner, the collapse meant
>>something objective,[...].
>
> It seems, perhaps, interesting to point out that the
> first definition was "reduction of probability packet",
> sometimes "reduction of wave packet."

Yes. This is generally taken as synonymous with the collapse.


> Actually Heisenberg gave a physical picture in 1930.
> "There is then a definite probability for finding the photon
> either in one part or in the other part of the divided wave packet.
> After a sufficient time the two parts will be separated by any
> distance desired; now if an experiment yields the result that
> the photon is, say, in the reflected part of the packet, then
> the probability of finding the photon in the other part of the
> packet immediately becomes zero. The experiment at the position
> of the reflected packet thus exerts a kind of action (reduction
> of the wave packet) at the distant point occupied by the transmitted
> packet, and one sees that this action is propagated with a velocity
> greater than that of light. However, it is also obvious that
> this kind of action can never be utilized for the transmission
> of signals so that it is not in conflict with the postulates
> of the theory of relativity." ('The Physical Principles of the
> Quantum Theory', University of Chicago Press, Chicago, 1930).

Here (''The experiment at the position of the reflected packet
thus exerts a kind of action'') the collapse is described as
an objective event, unrelated to an 'observer', but related to
an 'experiment'.
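Heisenberg's divided wave packet can be written as a two-branch toy model (the 50/50 split is my choice): the "reduction" is a discontinuous jump of the probability assigned to the distant branch once the experiment at the other branch has a result.

```python
import math, random

# |psi> = r|reflected> + t|transmitted>, equal-amplitude split:
r = t = 1 / math.sqrt(2)
p_reflected = r * r                          # ~0.5 before any detection

random.seed(3)
found_in_reflected = random.random() < p_reflected

# "Reduction of the wave packet": after the experiment at the reflected
# packet, the probability at the distant transmitted packet jumps
# discontinuously to 0 or 1 -- an update, not a usable signal.
p_transmitted = 0.0 if found_in_reflected else 1.0
print(found_in_reflected, p_transmitted)
```

Note that nothing in the update can be exploited to transmit a message, which is Heisenberg's point about compatibility with relativity.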


> (Following the above reasoning we expect that, i.e., the
> information about the probability of a particle being at
> a distance x comes to us with a signal velocity c.
> Thus the |wavefunction(x,t - r/c)|^2 should represent
> the probability that a particle is at x, as seen at
> the origin. Or am I wrong?)

In nonrelativistic QM (and there was no consistent relativistic QM
before around 1947), superluminal signals would be nothing to worry
about. But the collapse is not a signal, so there is even less to worry about.


> Unfortunately H.Kragh ("Dirac: a Scientific Biography", Cambridge
> U.P., 1990) describes a (1927) discussion between Dirac, Heisenberg
> and Born, about what, actually, gives rise to a "collapse".
> Dirac said that it is 'Nature' that makes the choice (of the
> measurement outcome). Born agreed. Heisenberg however maintained that,
> behind the collapse, and the choice of which 'branch' the wavefunction
> would be followed, there was "the free-will of the human observer".

Interesting but not conclusive. The free will of the observer
just means the freedom to arrange a certain experiment (and hence to
select the preferred basis with respect to which the collapse happens),
but not the freedom to choose the branch of the wave function.

In any case, in a typical experiment at CERN, it is clearly Nature that
makes the choice about which particles to produce in a wire chamber
experiment.


> And later, in "Physics and Philosophy" (Harper and Row, 1958, New York)
> Heisenberg writes "The observation itself changes the probability
> function discontinuously; it selects of all possible events
> the actual one that has taken place [...] The discontinuous change
> in the probability function, however, takes place with the act
> of registration, because it is the discontinuous change
> of our knowledge in the instant of registration that has its
> image in the discontinuous change of the probability function."

Again, the act of registration has nothing to do with the mind but
with the photographic plate, the bubble chamber, the Geiger counter,
or whatever detector is being used to register the phenomenon.


> According to Jan Faye "Bohr accepted the Born statistical
> interpretation because he believed that the psi-function
> has only a symbolic meaning and does not represent anything real.
> It makes sense to talk about a collapse of the wave function
> only if, as Bohr put it, the psi-function can be given a pictorial
> representation, something he strongly denied."
>
> It is really not so easy to find a definition (of the 'reduction')
> by Niels Bohr. In a letter to Pauli (March 2, 1955) he wrote "Thus,
> when speaking of the physical interpretation of the formalism,
> I consider such details of procedure like "reduction of the wave
> packets" as integral parts of a consistent scheme conforming
> with the indivisibility of the phenomena and the essential
> irreversibility involved in the very concept of observation."
> (Niels Bohr Collected Works, vol. 10, Elsevier 1999, page 568).

Thus the collapse is an integral part of the description of
an experiment, accounting for the irreversibility (i.e. nonunitarity)
of the system considered (since it is not isolated, but coupled to
the detector), and not a process of the mind.

Without irreversibility no experimental record, hence no measurement,
hence no collapse. Experimental records are therefore intimately tied
to irreversibility, hence to thermodynamics. Ultimately, the second
law is responsible for the observability of Nature.


> Even in Max Born it is possible to find many (very) different
> interpretations of the 'reduction' (and of the wave-function).
> For example: "The question of whether the waves are something
> "real" or a function to describe and predict phenomena in
> a convenient way is a matter of taste. I personally like
> to regard a probability wave, even in 3N-dimensional space,
> as a real thing, certainly as more than a tool for mathematical
> calculations ... Quite generally, how could we rely on
> probability predictions if by this notion we do not refer
> to something real and objective?" [Max Born, Dover publ., 1964,
> "Natural Philosophy of Cause and Chance", p. 107.]

I share this assessment of Born.

Thanks for the quotes!


Arnold Neumaier

Aaron Bergman
Jun 2, 2005, 1:38:57 AM
In article <1117557478.8...@g14g2000cwa.googlegroups.com>,
"I.Vecchi" <vec...@weirdtech.com> wrote:

I don't see how. The macrostates are your pointer states.
Decoherence is the process wherein the zillions of degrees of freedom in
your pointer conspire to diagonalize the reduced density matrix in the
pointer basis.
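
For what it's worth, the effect Aaron describes is easy to see numerically
in a scaled-down toy model (a sketch, not a model of any real pointer; the
environment size N and the random environment states are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 12  # environment qubits ("zillions", scaled way down)

# System starts in an equal superposition of the two pointer states.
sys0 = np.array([1.0, 1.0]) / np.sqrt(2)

# Each pointer state drags the environment into a different random state;
# the overlap <E0|E1> shrinks roughly exponentially with N.
env0 = np.ones(1, dtype=complex)
env1 = np.ones(1, dtype=complex)
for _ in range(N):
    a = rng.normal(size=2) + 1j * rng.normal(size=2)
    b = rng.normal(size=2) + 1j * rng.normal(size=2)
    env0 = np.kron(env0, a / np.linalg.norm(a))
    env1 = np.kron(env1, b / np.linalg.norm(b))

# Total state |psi> = (|0>|E0> + |1>|E1>)/sqrt(2).
psi = np.concatenate([sys0[0] * env0, sys0[1] * env1])

# Reduced density matrix of the system: trace out the environment.
psi2 = psi.reshape(2, -1)
rho = psi2 @ psi2.conj().T
print(np.round(np.abs(rho), 4))  # diagonals 0.5; off-diagonals tiny
```

The off-diagonal ("coherence") entries are suppressed by the overlap
<E0|E1>, which falls off with the number of environment degrees of freedom;
the reduced density matrix ends up nearly diagonal in the pointer basis.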

Aaron

Aaron Bergman
Jun 2, 2005, 1:38:55 AM
In article <429CC172...@synopsys.com>,
Eugene Stefanovich <eug...@synopsys.com> wrote:

> 2. Quantum mechanics does not explain the origin of these probabilities.
> All QM can do is to calculate these probabilities.
> In textbook QM, the formula |<a|psi>|^2 is a postulate, but this
> formula can be derived from a more fundamental "quantum logic" approach.
> (see chapter 4 in physics/0504062)
> If you know the rules of quantum mechanics, you can describe the
> state of your system by a vector |psi> in the Hilbert space,
> and the measurement by another vector |a>, and calculate/predict
> the probability of finding value a in the state |psi> by using above
> formula.

Outside of this 'envariance' stuff that I don't really understand, I
know of no way to derive the probability rules from QM -- in particular,
the use of the reduced density matrix really already assumes the Born
rule. Even envariance assumes, a priori, that these probabilities exist,
which seems to be avoiding the central question to me.
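
As a reminder of what the quoted prescription actually computes, the Born
rule |<a|psi>|^2 is a one-line inner product (a minimal sketch; the
particular state and measurement vector are made up for illustration):

```python
import numpy as np

# Assumed example state |psi> and outcome vector |a>, both normalized.
psi = np.array([3, 4j]) / 5.0        # |psi> = (3|0> + 4i|1>)/5
a = np.array([1, 1]) / np.sqrt(2)    # |a>   = (|0> + |1>)/sqrt(2)

# Born rule: probability of registering outcome a in state psi.
p = abs(np.vdot(a, psi)) ** 2        # np.vdot conjugates its first argument
print(p)  # -> 0.5 (up to rounding)
```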

Aaron

Seratend
Jun 2, 2005, 1:41:23 AM
Arnold Neumaier wrote:
> Seratend wrote:
>
> > Arnold Neumaier wrote:
> >
> >>Seratend wrote:
> >>
> >>>QM deals only with statistics of outcomes
> >>>and, in my opinion, outcomes are the "classical world" (what we "see").
> >>
> >>In my opinion, the "classical world" (what we "see") is the world
> >>as seen after irreversible effects have set in, i.e., the world
> >>as described by nonequilibrium thermodynamics (including hydromechanics
> >>and kinetic theory).
> >
> > Interesting.
> > You seem to view the measurement results exclusively through the mean
> > value filter
>
> Yes. Mean values of thermodynamic origin are the raw observables
> in all experiments; everything else is derived from these by theory
> or speculation.
> I call this the 'consistent experiment interpretation', following
> first steps in this direction taken in Section 10 of
> quant-ph/0303047 = Int. J. Mod. Phys. B 17 (2003), 2937-2980.
> Since I wrote this, my view has considerably gained in strength.
> If you read German, you can find much more about it at
> http://www.mat.univie.ac.at/~neum/physik-faq.tex

Unfortunately, I do not read German (and I regret it today : ).
However, I am greatly interested in the mean value filter and hope that
you will be able to post the English version soon.
I have also looked at your paper, section 10, but it is not easy to
understand the section alone (as the document is consistent : ) and,
more precisely, what you intend by consistent experiment. Currently,
I would like to understand what part of thermodynamics you want to use
to derive some results.
> In the mean time,
> I am happy to feed the main qualitative arguments into this
> discussion, if you are interested.
>
Please do, I will be also very happy to understand your point of view.


>
> > in my point of view (like the interference pattern: single
> > photon screen impact event versus multiple independent photons
> > interference pattern event).
> > How do you explain the observed state of a single photon event?
>
> It is only a sloppy way of speaking, not a real physical event.
> What actually happens is the following:
>
> The light ray of a laser is an electromagnetic field localized in a
> small region along the ray that begins in the laser and ends at the
> photodetector. A ray of intensity I is described by a coherent state
>
> |I>> = |0> + I|1> + I^2/2|2> + I^3/6|3> + ...
>
> If I is tiny then, from time to time, an electron responds (in some
> loose way of speaking that itself would need correction) to the
> energy continuously transmitted by the ray by going into an excited
> state, an event which is magnified in the detector and recorded.
> These occasional events form a Poisson process, with a rate proportional
> to the intensity I. This, no more and no less, is the experimental
> observation. It is precisely what is predicted by quantum mechanics.
>
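
The detection record described in the quote (occasional clicks forming a
Poisson process with rate proportional to I) can be simulated directly; a
minimal sketch, where the intensity, rate constant and observation time
are arbitrary assumed values:

```python
import numpy as np

rng = np.random.default_rng(1)
I = 0.2      # beam intensity (arbitrary units, assumed)
k = 1.0      # detector rate constant (assumed); click rate = k * I
T = 10000.0  # total observation time

# Poisson process: exponentially distributed waiting times between clicks.
t, clicks = 0.0, 0
while True:
    t += rng.exponential(1.0 / (k * I))
    if t > T:
        break
    clicks += 1

print(clicks, k * I * T)  # observed count vs. expected mean (2000)
```

The observed count fluctuates around k*I*T with standard deviation
sqrt(k*I*T), exactly the erratic-but-statistically-lawful record under
discussion.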

Yes, but we have 2 possible observations for this experiment assuming
the independence of the triggering events of the detectors (e.g. CCD to
cover the space of the interference pattern): Single events or the
complete set of events (the complete interference pattern).
By single events, I mean an experiment where the intensity is so weak
that we have just one click per experiment trial (the electron case).

For the second experiment trial, there is sufficient intensity to
trigger the whole pattern (multiple independent electron case in time).

> The traditional sloppy way of picturing this in an intuitive way is to
> say that, from time to time, a photon arrives at the screen and kicks
> an electron out of its orbit. This is a nice picture, especially for
> the newcomer or the layman, but it cannot be taken any more seriously
> than Bohr's picture of an atom, in which electrons orbit a nucleus in
> certain quantum orbits. For nothing of this can be checked by experiment
> - it is empty talk intended to serve intuition, but in fact causing more
> damage than understanding.
>

I agree, I do not care about the reality of the photon. I just want
that the generic mathematical model may be applied to every experiment
and in some experiments, this model may be compatible with the particle
view.

> Another way to see that is that the photo effect also happens for
> fermionic matter in a classical external field. (See, e.g., the
> quantum optics book by Mandel and Wolf.) Thus the observed
> Poisson process cannot be a consequence of quantized light, but
> rather is an indication of quantized detectors.
>

Yes. However, in the case of single events (of the detector), we are
just able to apply the mean value statistics to the detector (huge set
of random variables/observables). In this case, I think the mean value
filter does not apply to the "particle" but only to the single
triggered detector: we may explain its "deterministic" triggering
value but not the cause of its triggering (except for the peculiar case
where the triggering value is equal to the photon state).


>
>
> > What do you intend by irreversible effects?
>
> Dissipation, introduced by the Markov approximation necessary to get
> a sensible dynamics of a system smaller than the whole universe.
>

Dissipation means energy exchange, or does it also include other types
of exchange (such as momentum, assuming energy conservation)?

>
> >>Everything in thermodynamics and kinetic theory
> >>is real, objective, without any of the dubiosities that characterize
> >>the traditional interpretations of the quantum world.
> >>
> > Frankly, I have a real problem to see reality behind pressure, volume
> > and energy/temperature.
>
> Ask any engineer. They know what is real. I understand reality in the
> engineering sense. They can determine the pressure, to within the
> accuracy allowed by statistical mechanics. A single measurement on a
> single large quantum system (such as a cup of tee) is usually sufficient
> to get a reasonable objective value.
> If this is not real, there is no reality at all, and we are all dreaming.
>

Ok, I begin to understand better what you may mean when you use the
word reality. You are close to the epistemic view of physics, aren't
you? (the engineering sense).
If this is the case, it is ok for me: you are not trying to say more than
it is: what "we" can "see".

>
> How can you measure a microscopic object without measuring something
> macroscopic. You need the macroscopic, thermodynamic state of something
> to assert that indeed some definite, objective event happened.
> Take away objectivity and you lose all of physics.
>

Ok, for the macroscopic interface. See my previous answer with the
photons as I think there is a misunderstanding. I just question how you
can describe the trigger of such a macroscopic device by a single
particle event (e.g. an electron in a given quantum state).
If you describe it only through statistics (hence requiring multiple
outcomes), is your description able to predict a preferred basis of the
quantum state of the particle?

>
> >>>Therefore, it is relatively difficult for me to understand people who
> >>>want to demonstrate that there is a physical collapse leading to the
> >>>outcomes.
> >>
> >>The quest is to show that the interaction of a quantum system with
> >>a macroscopic detector describable by thermodynamics (and hence,
> >>through statistical mechanics, by quantum theory)
> >

> > Statistical classical mechanics?
>
> No. Statistical mechanics as taught in textbooks. Which includes
> (and on the deepest level is only) quantum mechanics.
>

(you mean modern statistical mechanics, thus based on QM and not the
old Gibbs statistical mechanics based on classical mechanics. I always
try to separate them as I seem to be an old-fashioned man : ).

Ok, I think I begin to understand what you are trying to say (tell me
if I am wrong).
You are using the mean value filter to try to get deterministic results
of macroscopic systems (on a given basis of this system: e.g. pressure,
energy, position, volume, etc ...). If this is correct, all these
macroscopic observables will commute between themselves (simultaneous
measurement possible). If you have a theorem stating that all the
observables of a macroscopic system (at the infinite number limit)
commute, you solve the preferred basis problem of the measurement.
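
The hoped-for commutation property can at least be illustrated
numerically: averaged collective observables of N spins have a commutator
whose norm falls off like 1/N. This is only a toy check with single-spin
operators, not the theorem asked about:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def collective(op, n):
    """(1/n) * sum_i (op acting on spin i) for an n-spin system."""
    total = np.zeros((2 ** n, 2 ** n), dtype=complex)
    for i in range(n):
        term = np.ones((1, 1), dtype=complex)
        for j in range(n):
            term = np.kron(term, op if j == i else I2)
        total += term
    return total / n

for n in (2, 4, 8):
    A, B = collective(sx, n), collective(sz, n)
    comm = A @ B - B @ A
    print(n, np.linalg.norm(comm, 2))  # spectral norm = 2/n here
```

Here [Sx/n, Sz/n] = (1/n^2) sum_i [sx_i, sz_i], so its norm is exactly
2/n: the "macroscopic" averages commute better and better as n grows.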

Therefore, once we define the quantum interaction between the quantum
particle and the macroscopic system, we are able to know the states of
the quantum particle through the values of the commuting observables of
the macroscopic system (e.g. pressure, energy, position, volume, etc
..) if the decoherence results apply.

Very interesting. May you confirm? (and develop)
And the extra question, do you know such a theorem?

However, now, I think if such a theorem is true (exists) for a
macroscopic observable, I think we are only able to infer the local
state of the quantum particle associated to the measurement macroscopic
value (local state= partial projection of the global state/density on
the highly degenerated basis of the macroscopic system associated to
the macroscopic value). While the global state for the given
macroscopic value (projection postulate) is the completely entangled
state of the quantum particle with the degenerated basis of the
macroscopic measurement value => no preferred basis of the particle in
the global state (only in the local projected state).
We seem to recover the no-preferred-basis property of the collapse
postulate required for coherence with this hypothetical
theorem. Very interesting, as I have not considered this aspect before.

> >>
> > I am not sure I understand what you say. In the QM description, I just
> > have statistics of outcomes. I have, for a macroscopic detector, a
> > macroscopic observable A = sum_i A_i
>
> No. This is not what statistical mechanics teaches. The gurus there say
> that the quantities thermodynamics is about are expectations of
> microscopic operators, not their eigenvalues!
>

Ok, this seems to be the partial trace results. See my comment above. I
think if you have such a theorem about the commutation of macroscopic
observables of a macroscopic system, we can easily find this
result (entanglement with the degenerated basis of the macroscopic
observable and partial trace). And I think most of my questions may be
answered concerning the measurement by macroscopic systems : ).


Seratend.

Seratend
Jun 2, 2005, 1:41:31 AM
Arnold Neumaier wrote:

> I am looking for an explanation why a particular detector coupled
> to a particular quantum system produces the observed erratic but
> objective record of individual results that can be analyzed
> statistically and quoted in a physics journal.
>

Me too. I just consider mathematics and experiment results to build
logical deductions.

> If you want to claim more than that these outcomes are just the
> results of changes of belief (aka 'knowledge') in an observer's mind
> - and I think physics does and should claim more than that -

I agree. I prefer to have a logical view (mathematics) of what we
describe: the only thing I know. In my sentences, the "we see" may
be anybody: a machine, a particle, etc. (a very weak signification),
and should not be considered as a mind or anything else outside the
subject.
There are no minds, just logical propositions (or properties, if you
prefer) that are true in each considered case (logical view). Everything
else is interpretation. We have to map these properties to the
"reality" to verify the results, that's all (what you may call
the objective record in some cases), like the mathematical circle
object and the drawing of a circle. The mapping may be falsified by
the experiments, not by the mathematical theory (supposed to be
consistent).
Hence, I may choose to describe a system by statistical or
deterministic tools. For quantum systems, I have no simple results with
the deterministic tools (e.g. Bohmian mechanics, as it requires the
creation of a specific unmeasurable object, the path of the Bohmian
particle), while the statistical results are simpler.


>
>
> >>I gave a concise formulation of a specific case of this quest in
> >>my recent paper quant-ph/0505172.
> >>
> > I have read quickly you paper. I have not found the original thread.
>
> Type "collapse challenge" into
> http://groups-beta.google.com/groups?q=%22collapse+challenge%22&qt_s=Search
>
>
> > So I have some questions:
> > a) what is the initial state of the photon (assuming a wave packet) :
> > |psi>= |path1>+|path2> with <path1|path2>=0?
>
> Not quite. Roughly,
> |psi(t)> = |path1(t)> tensor |1> + |path2(t)> tensor |1>
> with spatial coherent states |pathi(t)> (i=1,2) moving at the
> velocity of light and monochromatic 1-Photon Fock states |1>, say.

Ok, usually when I write a state |path1>, this state may be the tensor
product of whatever we want (we may expand it when it is required).
Therefore, you seem to require the detail of this state:

|psi(t)>= [|path1(t)>+|path2(t)>](x)|1> with <path1|path2>=0?

Where <x|path1(t)>= <x|(|path1(t)>+|path2(t)>) for x in a given
transversal area we may call it A
And <x|path2(t)>= <x|(|path1(t)>+|path2(t)>) for x in a given
transversal area we may call it B

Such that {A} intersection {B} is empty.
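
The orthogonality condition <path1|path2> = 0 for packets with disjoint
supports is easy to check on a grid (a toy sketch with assumed Gaussian
packets standing in for the two paths):

```python
import numpy as np

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

# Two wave packets with (numerically) disjoint supports: regions A and B.
path1 = np.exp(-(x + 5) ** 2) * np.exp(3j * x)   # packet in region A
path2 = np.exp(-(x - 5) ** 2) * np.exp(3j * x)   # packet in region B
path1 = path1 / np.sqrt(np.sum(np.abs(path1) ** 2) * dx)
path2 = path2 / np.sqrt(np.sum(np.abs(path2) ** 2) * dx)

overlap = np.sum(path1.conj() * path2) * dx      # <path1|path2>
print(abs(overlap))  # ~ e^{-50}: zero for all practical purposes
```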

> The actual situation would be more complicated since single
> photon states are electromagnetic waves (solutions of the free
> Maxwell equations) approximately localized along some direction.

Ok, we may approximate these states as free moving wave packets outside
the area of interaction of the screens.

> The challenge allows, however, any specific setting (even
> idealized, or with massive particles, etc.) that matches the
> informal description in a reasonable way.
>

In other words we are free to choose the interactions and the
Hamiltonians, as I have done in another post in this thread for the
interference toy model.


>
> > b) if yes, |path1> and |path2> are for example 2 parallel paths, where
> > |path1> is 100% stopped by the first screen and |path2> 100% not?
>
> Yes. This is an example that can be prepared by half-silvered mirrors.
>
>
> > c) what do you want to say?
> >
> > I mean, I have a system that is well described through unitary
> > evolution (superposition of states).
>
> Absorption by a screen is an irreversible macroscopic process
> accompanied by a minute increase of temperature. The claim that
> it is described by unitary evolution requires proof, which,
> if successful, would be part of an answer of the challenge.
>

See my description of the interference pattern toy model. I have
chosen Hamiltonians and interactions such that we have no entanglement
between the photons and the screens (formal choice): H_screen=
|screen><screen|(x)V(r)
I usually prefer to replace photons by electrons, whenever it does not
change the global result as the free propagator of photons and
electrons are the same. In the case of photons, V(r) is the effective
potential giving the source of the reflection or the transmission.
This model supposes the energy conservation between photons and the
screens (choice), and it is easy to see that everything evolves unitarily,
just by taking the wave packet.

I may expand my explanation if required.

> If there is unitary dynamics only then the final result is not
> the state |0,1,1> or |0,0,1> as observed, but a superposition
> of the two. Invoking Born's rule is _assuming_ the collapse
> rather than explaining it.
>
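
The point of the quote can be seen in a two-line linear-algebra toy model
(a Hadamard matrix standing in, by assumption, for the beam-splitter or
screen unitary):

```python
import numpy as np

# Hadamard unitary standing in for the beam splitter / screen (assumption).
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi_in = np.array([1.0, 0.0])   # definite initial state
psi_out = U @ psi_in

# Unitary evolution alone yields a superposition, never a single outcome;
# picking one branch with probability |amplitude|^2 is the Born rule,
# added by hand.
print(psi_out)                                        # both amplitudes nonzero
print(np.allclose(np.abs(psi_out) ** 2, [0.5, 0.5]))  # -> True
```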

I like this toy model, where we force no entanglement between the
photon states and the screens and where we have the simple unitary
evolution of the initial state. It reflects perfectly what we do in an
experiment that realizes this unitary evolution:
a) we have to choose between all the photons, the one with the initial
state (hence an initial measurement result)
b) we simultaneously measure the reflected photons by either the first
screen or the second one outside the area of the local interaction of
the screens (here I suppose the plane of the reflecting screens are not
orthogonal to the beam direction in order to put the "real"
detector outside the incoming beam. We have only a single detector that
clicks at a time assuming the good energy trigger level on a restricted
area.

We can put the detectors or not, they do not change the result of the
transformation of the wave function by the two screens before it is
detected by the detectors (we assume a local interaction of the
detectors: a choice). If we develop more, we see that the projector of
the detectors (the spatial location) commutes with the interaction of
the screens.

Assuming this, we can say (interpretation) that the screens do not
collapse the wave function if we have no detectors, while if we put the
detectors we can say that the screen collapses the wave function.
=> the collapse is only an acknowledgement of the results (a property
of the system is true). Or if you prefer a view of a particular branch
of the unitary evolution of all possible states.

The only possibility to get coherent results is to assume that a
collapse is the acknowledgement of a given property of a given system. This
is only mathematics: we have a set of experimental trials where
2 properties are true: the initial measurement to prepare the initial
state and the final measurement result. If you prefer, we can see it as
contextual random variables outcomes.


> That something remains to be explained even from the Copenhagen
> point of view (some version of which you seem to adhere to)
> is discussed in Section 3.
>

Copenhagen interpretation does not assume the "reality" of the
wavefunction. What it says is very analogous to what I say. There is
most of the time, with CI, in my opinion, a misunderstanding on the
meaning of "before" or "after" the measurement. Just replace
the word "before" by "there is no measurement" and "after"
by "there is a measurement". Therefore, each instance of a system
has a single property: either there is a measurement or not (with its
associated definite single results).
There is no system where we have no measurement before and "after"
a measurement appears from nowhere. We have a measured system (of a
given value) or not (no time reference).


>
>
> > At the end, I must apply the born
> > rules to get the statistics (what I see in the experiment).
>
> This is the informal prescription that is used to apply single-particle
> reasoning to a complex multiparticle experiment. It successfully
> avoids looking at the physics happening at the screen, replacing it
> by simply assuming the collapse, i.e., the emergence of an objective
> record according to the probabilities from the Born rule.
> While this is an acceptable attitude it is obviously not the whole
> story.
>

This is what I call the statistical description of the physical
phenomena (we do not explain the outcomes, we just measure their
frequency and their evolution in the space time).
The description of the outcomes production is the deterministic view.
It thus requires a description compatible with the statistical one. I
would say: why not use Bohmian mechanics?

>
> > Are you just searching for a predictive description of a particular
> > outcome in a given QM experiment?
>
> Just an explanation for how particular outcomes arise through
> measurement. Leaving something as complex as 'measurement' as
> an uninterpreted, vague fundamental concept, while practical
> measurement is a whole science in itself seem to me too gross
> a simplification to be tolerable, and one of the reasons why the
> foundations of QM are in the poor present state.
>

Do you reject the deterministic Bohmian formulation (at least in the
non-relativistic case, where it is most fully developed)?
(I mean the mathematical formulation connection predicting the outcomes
and not the interpretations).

Seratend

Seratend
Jun 2, 2005, 1:43:11 AM
Aaron Bergman wrote:
> In article <1117568337....@g14g2000cwa.googlegroups.com>,

> > >
> > In my opinion, I think you are trying to say more than QM theory says.
>
> Funny, I feel the same way.
>
: ) . I prefer that.

> > You seem to be an adept of the wave function reality, hence you try to
> > define something out of the scope of the current QM theory formulation
> > (an explanation of the collapse) while I simply take the Born rules as
> > statistics of outcomes and the collapse postulate the property where a
> > given outcome value of a system is true ("Outcome A=a" true).
>
> I don't consider the Born rule part of QM. QM is unitary evolution.
> Everything else is what we do to make sense of the wavefunction. The
> Born rules are a pragmatic procedure, but they lack a microscopic
> mechanism.
>

I hope you also consider the statistics of statistical classical
mechanics as pragmatic procedures. If this is the case, you are simply
looking for the deterministic evolution of individual outcomes from a
given initial condition:
Outcome_i(t) = f(outcome_1(t0), ..., outcome_n(t0), t).
If that is what you are looking for, general unitary evolution tells you
that it is most of the time impossible: we have at most a functional
relation: Outcome_i = f(outcome_1, ..., outcome_n, t) (the function
Outcome_i(t) depends on the functions outcome_1...n(t) and not on their
values).

>
> Are you advocating a sort of consistent histories approach?

Well, I promote the shut up and calculate approach I think: just
mathematics, where we map formally the values of the mathematical
objects/values to the experiments. All of the interpretations of QM are
very similar, if you do not try to attach a too strong reality to the
used words. Consistent, many worlds, many minds, my mind, my leg etc
.. : ) all are flavours of the same mathematical theory, they are ok
if they do not change the predictive results of the formal theory.
Interpretation, for me is rather the domain of philosophy. Adding or
removing it does not change the results of the mapping.

> That seems to me to be a language in which to describe quantum outcomes, but
> nothing like an interpretation.

I think my point of view is closer to the epistemic view. I do not
require any "reality", whatever it is, just that the mapping of the
mathematical predictions agree with experiment. In this sense, the
collapse is a property of the system and not a transformation of the
system ( a given system is "collapsed" or not - the collapse identity -
but it cannot become collapsed: no meaning).

> Regardless, when you make a real-world
> measurement, you better collapse the wavefunction, whether through
> decoherence or some other manner, or you will get the wrong answer for
> future measurements.
>

You really get a classical deterministic point of view. You seem to
think that the only possible deterministic relation is Outcome_i(t) =
f(outcome_1(t0), ..., outcome_n(t0), t),
while you may have the deterministic functional relation Outcome_i =
f(outcome_1, ..., outcome_n, t).
The latter tells you that you may not have a relation between future
outcomes and past outcomes, just between the functions (function of
sets rather than of points). Therefore, the only way to connect 2
outcomes in an experiment is the direct "observation" (the formal
labelling of the outcomes): the collapse.

>
> No. You understand your measurement apparatus. It's not a mystery. There
> are macroscopic degrees of freedom -- work backwards from there and you
> know what your 'preferred basis' is.
>
> [snip to end]
>
> I'm sorry, but I can't figure out what you're talking about in your
> experiment.
>

Well, I have simply constructed a thought experiment (the well-known
double slit) where I have a unitary evolution of the photon/electron
state, the slit plate and the screen without *entanglement*.
I may say that this experiment does not realize any measurement. While
if I am looking (or a detector) at the light scattered by the screen, I
may say the screen has performed a position measurement (the
interference pattern) without entanglement with the photons.

How can you place your decoherence procedure in such a situation to
find the preferred basis?

Do you really understand that this simple toy model does not provide
any entanglement?

The interaction between the screen and the electrons/photon is
described by the Hamiltonian H=|screen><screen|(x)Vdiff(r). It is well
defined (formally). You may consider it, if you like, as a "strange
particle" with one degree of freedom as a spin 0 particle. The
interaction potential does not entangle the screen with the
electron/photon.
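
The formal claim, that an interaction of the product form
H = |screen><screen| (x) V(r) never entangles the screen with the
particle, can be checked numerically; a toy sketch with assumed small
dimensions and a random Hermitian V:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy spaces: 2-dim "screen" factor, 4-dim particle factor (assumed sizes).
s = np.array([1.0, 1.0]) / np.sqrt(2)          # |screen>
P = np.outer(s, s)                             # |screen><screen|
V = rng.normal(size=(4, 4))
V = (V + V.T) / 2                              # some Hermitian V(r)
H = np.kron(P, V)                              # product-form interaction

psi = rng.normal(size=4)
psi = psi / np.linalg.norm(psi)
state0 = np.kron(s, psi)                       # product initial state

# Evolve by exp(-iHt) via eigendecomposition (H is Hermitian).
w, Q = np.linalg.eigh(H)
state_t = Q @ (np.exp(-1j * w * 1.7) * (Q.conj().T @ state0))

# Entanglement check: Schmidt (singular) values of the 2x4 reshape.
sv = np.linalg.svd(state_t.reshape(2, 4), compute_uv=False)
print(np.round(sv, 8))  # one nonzero Schmidt coefficient: still a product
```

Since P|screen> = |screen>, the evolution acts only on the particle
factor and the Schmidt rank stays 1: no entanglement is generated, as
claimed above.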


Seratend.

Seratend
Jun 2, 2005, 1:43:23 AM
Arnold Neumaier wrote:
>
> Of course any continuous model can be approximated by a plethora of
> discrete ones. But there are _many_ approximations and only one _nice_
> theory. The nice one is the one chosen by the overwhelming majority
> of physicists.
>
You mean one nice approximation : ).
However, I think the "nice" theory (or approximation) label is more
related to our ability to derive results with our limited
knowledge, our schooling, and a lot of other subjective
properties.
I wish I had sufficient brain capacity to study a fully
discrete formulation of mechanics (quantum or classical). : )

Maybe in the future, thanks to Matlab, Intel, Linux and Windows at
the very beginning of school, the "nice" theory label will
be changed.

> > Hydrodynamics is well-defined with continuous variables and practical.
> > However, this does not imply that continuous models govern macroscopic
> > observation just their results are compatible with experiment.
>
> Macroscopic observations are _defined_ in terms of observables
> which get their meaning from the traditional theories of mechanics.
> These use continuous models.
>

This is our legacy from the past : ).
We are able to define discrete observables, and I think it is what we
implicitly do when we take into account measurement uncertainties.
I just mean, I think we have no clue to see if something is continuous
or not. I don't even know if this question may be falsifiable by an
experiment.


>
> Of course you are free to prefer whatever you like best.
> But few people will want to imitate you since your preference
> is very cumbersome.
>

Like keeping in fashion : ))))

>
> >>If you shrink the domains sufficiently much you are left with less than
> >>one atom per cell, and cannot maintain your formulas meaningfully.
> >>
> >
> > If you assume the continuous model, you accept that I may shrink the
domain in order to keep a countably infinite number of random
variables in this domain (I have access to all the infinities I want as
> > long as I am coherent).
>
> Yes, but these random variables are no longer the 'number of particles
> in the cell' which you employed, since these random variables cease
> to be meaningful. One cannot split particles...
>

As the local meaning of the continuous distributions...
We recover the same problems: we only know how to measure finite
quantities.

I am not against the continuous model, I just know that I need an
additional mapping between the result values of this model and the
experiments (usually the continuous requirement on the functions).
Seratend.

Eugene Stefanovich
Jun 2, 2005, 1:44:31 AM
Arnold Neumaier wrote:

> I am looking for an explanation why a particular detector coupled
> to a particular quantum system produces the observed erratic but
> objective record of individual results that can be analyzed
> statistically and quoted in a physics journal.


That's a noble goal, but it has nothing to do with quantum mechanics.
Quantum mechanics does not explain why the results of measurements are
erratic, and it cannot predict the exact sequence of those erratic
results. All it can do is predict, with very high accuracy, the
probabilities of the different outcomes.

The "explanation" of apparently unpredictable behavior of quantum
systems has been promised by the "hidden variable" theory, but never
delivered. The answers to these questions should be given by a
theory more fundamental than quantum mechanics. My personal belief
is that such a theory does not exist, and the random behavior
of microsystems will remain unexplained.

Eugene Stefanovich.

Aaron Bergman

unread,
Jun 2, 2005, 3:11:27 PM
to
In article <1117635044....@g49g2000cwa.googlegroups.com>,
"Seratend" <ser_m...@yahoo.fr> wrote:

> Aaron Bergman wrote:
> > In article <1117568337....@g14g2000cwa.googlegroups.com>,
> > > >
> > > In my opinion, I think you are trying to say more than QM theory says.
> >
> > Funny, I feel the same way.
> >
> : ) . I prefer that.
>
> > > You seem to be a adept of the wave function reality hence you try to
> > > define something out of the scope of the current QM theory formulation
> > > (an explication of the collapse) while I simply take the born rules as
> > > statistics of outcomes and the collapse postulate the property where a
> > > given outcome value of a system is true ("Outcome A=a" true).
> >
> > I don't consider the Born rule part of QM. QM is unitary evolution.
> > Everything is else is what we do to make sense of the wavefunction. The
> > Born rules are a pragmatic procedure, but they lack a microscopic
> > mechanism.

> I hope you also consider the statistics of statistical classical
> mechanics as pragmatic procedures.

No. You can pretty much derive all of them from fundamental principles.

> If this is the case, you are simply
> looking for the deterministic evolution of individual outcomes from a
> given initial condition:
> Outcome_i(t)= f(outcome_1(to), outcome_n(to),t).
> If it is what you are looking for, general unitary evolution tells you
> that it is most of the time impossible: we have at most a functional
> relation: Outcome_i= f(outcome_1, outcome_n,t) (the function
> Outcome_i(t) depends on the functions outcome_1...n(t) and not on their
> values).
>
> >
> > Are you advocating a sort of consistent histories approach?
>
> Well, I promote the shut up and calculate approach I think: just
> mathematics, where we map formally the values of the mathematical
> objects/values to the experiments. All of the interpretations of QM are
> very similar, if you do not try to attach a too strong reality to the
> used words. Consistent, many worlds, many minds, my mind, my leg etc
> .. : ) all are flavours of the same mathematical theory, they are ok
> if they do not change the predictive results of the formal theory.
> Interpretation, for me is rather the domain of philosophy. Adding or
> removing it does not change the results of the mapping.

This is where I disagree. There's a fundamental _physical_ question:
whether or not the wavefunction collapses. This is (in principle)
experimentally verifiable. It's not philosophy; it's a question about
the real world.

Now, a more philosophical question that is, I think, informed by all of
this is: why do we not perceive superpositions (which, even in the
presence of decoherence, still exist)? Or, in other words, why do we
only perceive one branch of the wavefunction? I'd like to think that
this has some real, physical answer, but maybe it's all just ephemeral.
Beats me.

> > That seems to me to be a language in which to describe quantum outcomes, but
> > nothing like an interpretation.

> I think my point of view is closer to the epistemic view. I do not
> require any "reality", whatever it is, just that the mapping of the
> mathematical predictions agree with experiment. In this sense, the
> collapse is a property of the system and not a transformation of the
> system ( a given system is "collapsed" or not - the collapse identity -
> but it cannot become collapsed: no meaning).

Either the wavefunction evolves unitarily or it doesn't. That's a
question amenable to experiment (although the experiments quickly get
exponentially difficult).

> > Regardless, when you make a real-world
> > measurement, you better collapse the wavefunction, whether through
> > decoherence or some other manner, or you will get the wrong answer. For
> > future measurements.
> >
> You really get a classical deterministic point of view. You seem to
> think that the only possible determinisitc relation is Outcome_i(t)=
> f(outcome_1(to), outcome_n(to),t).
> While you may have the deterministic functional relation Outcome_i=
> f(outcome_1, outcome_n,t).
> The latter tells you that you may not have a relation between future
> outcomes and past outcomes, just between the functions (function of
> sets rather than of points). Therefore, the only way to connect 2
> outcomes in an experiment is the direct "observation" (the formal
> labelling of the outcomes): the collapse.

I can't decipher this.

And I still can't figure out what you mean by your experiment. If you
have a detector and a measured object, it's practically a definition
that when you do a measurement, you entangle the two, i.e., the state
ends up (schematically) in

sum_x c_x |detector measures x>|object has state x>

In other words, the detector and the object have to be entangled. (In
reality, the state is much more complicated, of course, but you can
define the reduced density matrix, put it in a macroscopic basis and
watch it diagonalize.)

Aaron

Eugene Stefanovich

unread,
Jun 2, 2005, 3:11:30 PM
to

Let me see if I understand your question.
There are two different sides of probabilities in QM. One side is
"fundamental", i.e., the question WHY repeated measurements of the
same observable in the same condition yield different results.
Is it possible to predict from theory the exact sequence of measurements
rather than their probabilities?
Another side is "technical": HOW to calculate the probabilities
of these random outcomes of measurements.

Quantum mechanics has nothing to say about the "fundamental" question.
The erratic random character of individual measured data is a postulate
of QM. However, QM can tell you everything about the "technical" side:
probabilities can be calculated with astonishing precision.
The formulas for calculating probabilities, e.g. |<a|psi>|^2, are not
postulated. They can be derived from more general statements in
the "quantum logic" approach to QM.
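The split between the "fundamental" and the "technical" side can be illustrated with a small made-up example (the state vector and sample size are arbitrary): the Born rule fixes the probabilities, while the particular sequence of outcomes stays random.

```python
import numpy as np

# Hedged sketch: QM gives the probabilities |<a|psi>|^2 for the
# outcomes a, never the particular erratic sequence.
rng = np.random.default_rng(0)

psi = np.array([3.0, 4.0j]) / 5.0   # normalized state in a 2d Hilbert space
probs = np.abs(psi) ** 2            # Born probabilities: 0.36 and 0.64
print(probs)

# Repeated measurements: each run of this sampler gives a different
# erratic sequence, but the frequencies converge to the Born weights.
outcomes = rng.choice(len(psi), size=10000, p=probs)
freq = np.bincount(outcomes) / outcomes.size
print(freq)
```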

Eugene Stefanovich.


Joe Rongen

unread,
Jun 2, 2005, 3:11:27 PM
to

> Arnold Neumaier wrote:
>
> > I am looking for an explanation why a particular detector coupled
> > to a particular quantum system produces the observed erratic but
> > objective record of individual results that can be analyzed
> > statistically and quoted in a physics journal.


Some detector systems employ photomultiplier tube(s).

The ideal photomultiplier tube is a detector that absorbs one
photon (photo-electric effect) and, through an internal
electron-cascade amplification, produces one measurable event.**

** Lawrence and Beams showed in 1928 that photo-electrons are
sometimes emitted less than 3*10^(-9) s after initial illumination.

Best regards Joe




Seratend

unread,
Jun 3, 2005, 2:02:40 AM
to
Aaron Bergman wrote:
>
> Outside of this 'envariance' stuff that I don't really understand, I
> know of no way to derive the probability rules from QM -- in particular,
> the use of the reduced density matrix really already assumes the Born
> rule. Even envariance assumes, a priori, that these probabilities exist
> which seems to be avoiding the central question to me.
>
> Aaron

Because the probabilities are external, as in classical mechanics (we
filter the initial set of experimental trials to get the initial
probability law through the frequency; nothing provides that, it is an
external selection, what we call the preparation).
The only "real predictive information" QM theory gives is the unitary
evolution (a functional evolution, what you may call a deterministic
evolution).

Once we have a functional evolution, we may formally deduce the new
probability law from the previous one: P_new(t)(A) = P_old(f^-1(A)).
This is the choice of description.
Here A is any event (a set of possible values of the observable),
and f^-1 is the inverse function (on sets, it always exists) associated
to the "deterministic" unitary evolution (or, if you prefer, the
functional evolution of the operators in the Heisenberg picture, given
by i hbar dP/dt = [P,H] and i hbar dQ/dt = [Q,H]).

QM does not explain why we have a probability; rather, starting
from a selection of systems with a given probability law, we obtain a
new probability law through the unitary evolution: it is a choice of
description (statistics). We are free to choose other ones (except
that they may not be practical to use).
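A minimal sketch of the pushforward P_new(A) = P_old(f^-1(A)) on a finite toy trial space (both the map f and the initial law are invented for illustration, and f is deliberately not invertible pointwise):

```python
from collections import Counter

# Pushforward of a probability law under a deterministic evolution f:
# all the mass sitting on f^-1({y}) accumulates on y.
p_old = {0: 0.5, 1: 0.25, 2: 0.25}      # initial law (from "preparation")
f = {0: 'a', 1: 'a', 2: 'b'}            # deterministic map, not invertible

p_new = Counter()
for x, p in p_old.items():
    p_new[f[x]] += p                    # P_new({y}) = P_old(f^-1({y}))

print(dict(p_new))                      # {'a': 0.75, 'b': 0.25}
```

Note that even though f loses information about individual points, the new law is fully determined by the old one; nothing probabilistic is added by the evolution itself.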

Seratend.

Seratend

unread,
Jun 3, 2005, 2:03:15 AM
to
Arnold Neumaier wrote:
> Seratend wrote:

> > I prefer to say that deterministic and statistical description are just
> > two equivalent ways of giving predictive results: I accept both.
> > Decribing a function by its points or by its induced probability law is
> > somewhat equivalent (2 point of views).
>
> But there is a difference between asserting that the die shows a three
> and asserting that the probability of getting a three is 1/6.
>

The die is a random variable, or a function if you prefer. We have the
deterministic results (e, f(e)), where e is a point of the trial space.
What you say is f(e)=3 for a given e. However, saying that does not
define the function f. In fact you are assuming you have the complete
set {e, f(e)} in order to be able to make such a logical affirmation
(i.e. you throw the die, you get one result: this is the logic behind
the explanation of the experiment given by the function, where e is the
label of the trial).

Now, I say that on the set E = {collection of e}, I may define,
formally, all the probability laws P I want in order to give a
characterisation of the function f. Let us choose for the family of
probability laws P_e the Dirac distribution delta(x-e), so that
<f>_{P_e} = Int f(x) delta(x-e) dx = f(e).
Therefore, choosing this probability family and calculating the mean
values, I have completely defined the function f.

=> So there is no fundamental difference between the descriptions; we
just give a characterisation of the function f (the logic behind the
experiment).
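The claim that the family of Dirac (point-mass) laws recovers f can be checked on a made-up finite trial space (the trial labels and die outcomes below are invented for illustration):

```python
# Sketch: on a finite trial space E, the mean values of f under the
# family of point-mass ("Dirac") laws P_e determine f completely.
E = range(6)
f = {e: (5 * e + 3) % 6 + 1 for e in E}     # some die face per trial label

def mean(g, law):
    # expectation of g under a probability law given as {point: mass}
    return sum(law[e] * g[e] for e in law)

for e in E:
    P_e = {e: 1.0}                          # Dirac mass at trial e
    assert mean(f, P_e) == f[e]             # <f>_{P_e} = f(e)

print("f recovered from its point-mass means")
```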


> > Well as I have a finite capacity brain, I just can understand/see
> > dicrete and finite quantities (I have never handled an infinite
> > quantity except in the math and physics models).
>
> With your finite brain capacity, it is much easier to understand or see
> continuous quantities (such as a straight line) rather than highly
> discretized quantities (a long line of equispaced dots). Much of
> brain processing is indeed concerned with producing simple continuous
> models of a messy reality.
>

My brain does not understand what a line really is. My brain
understands a line as a single black box (hence a single object:
discretization) with properties. Each time I try to examine the
properties of the line object, I fall back on discrete procedures:
points, etc.; and thereafter I use limits to infinity (e.g. countable,
uncountable, etc.) to link my discrete analysis to continuous
objects. I don't even know whether my brain can handle at all
objects defined through uncountable properties.

I do not reject (and thanks) the amazing properties we have with the
infinities (the continuous) models. However, I just try to separate
these mathematical properties from the physical ones. It allows me to
see (at least try) in the physical model/theory, what information it
really provides.

One important example is the restrictive property we need to
introduce in continuous probability or integration models: sigma
additivity, since unconditional additivity does not work for
general sigma algebras. When we use these models on "reality",
it is important to understand that sigma additivity should not have
any impact on the real properties of the studied system, as we always
measure the frequencies of finitely many trials.
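A hedged numerical illustration of that last point (distribution, cell layout and sample size all made up): finite data only ever give frequencies over finitely many cells, for which plain finite additivity holds exactly; sigma additivity lives in the continuous model used for the predictions.

```python
import math
import numpy as np

rng = np.random.default_rng(1)
samples = rng.normal(size=200000)          # "trials" from a continuous model

def model_prob(a, b):
    # standard normal probability of the cell [a, b) from the model
    Phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    return Phi(b) - Phi(a)

edges = np.linspace(-3, 3, 7)              # finitely many disjoint cells
freqs, _ = np.histogram(samples, bins=edges)
freqs = freqs / samples.size

for (a, b), fr in zip(zip(edges, edges[1:]), freqs):
    print(f"[{a:+.0f},{b:+.0f}) freq={fr:.4f} model={model_prob(a, b):.4f}")

# Finite additivity of the observed frequencies holds exactly:
assert abs(freqs.sum() - ((samples >= -3) & (samples <= 3)).mean()) < 1e-9
```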


>
> > Therefore It seems
> > that I prefer to view (choice) the world closer to the 1925 view (woaw!
> > I didn't think I have a so obsolete point of view ; ).
>
> It would be very difficult to describe Nature with only lattice field
> theories. And it is very unlikely that _if_ nature is discrete,
> it is based on a lattice.

Well, this is where mathematics has to develop discrete models with
the same continuous limit. In physics, we already use these
approximations (in fact a mix of continuous and discrete models), for
example in crystals (the semiconductor industry): we use the continuous
limit of "real" discrete models.

> But with an irregular point set to start,
> physics would be next to impossible.
>

This is almost analogous to the case of separable versus non-separable
Hilbert spaces. Both have a basis (countable or not), while all the
vectors are countable sums in both cases. Still, we may deduce
some properties of non-separable Hilbert spaces and better understand
the differences between them.

> The most important fundamental results, such as Noether's theorem,
> rests on the assumption of continuity. The fact that there are nice
> laws of physics when modelled as continuum but none when modelled
> discretely strongly hints at the continuous nature of Nature.
>

I would say that we need to know which mathematical properties depend
on the "continuity" of nature. Frankly, I have no preference. I just
want to avoid false conclusions.

Up to now, in almost all the physical models I know, we always
encounter a finiteness principle on the "physical" values. This
finiteness principle tends to say that the continuous model allows too
many properties. We need an extra, ad hoc property to reduce these
possibilities, hence my preference for considering finite models in
order to check the validity of the continuous ones.

>
> > Even if the fields are continuous, the observables, at least what we
> > [can] measure is discrete.
>
> By convention only. In fact what we can measure is fuzzy, not discrete.
> Borderline cases are simply forced into a Procrustean bed to make
> them fit a fixed scheme. In view of the inevitable measurement error
> this does not harm things, but in a quest for understanding (and that's
> what the foundations of QM are) one should not use the same Procrustean
> techniques. http://www.mythweb.com/encyc/entries/procrustes.html
>

By convention, or simply due to the physical incapacity to really
handle continuous variables?
The uncertainty of measurement allows one to introduce, formally, the
Procrustean beds and to question the continuity of nature.
However, the foundations of QM are not impacted by such a problem. It
is just a matter of choice: use a continuous probability distribution
or a discrete one, as long as it is able to reflect the results.


>
> > Therefore, the impossible anwser is to know
> > if the set of all possible measurable values of a given "real"
> > observable is countable or uncountable.
> > However, in any case, choosing a discrete or continuous model will give
> > the same results at the limit (huge number/small size).
> > And I must admit, making predictions when the particle number is not
> > conserved is already difficult in the QFT formalism. I just can
> > imagine, it would be a nightmare with a pure discrete approach (and may
> > be, a waste of time and not very interesting from a calculus point of
> > view ; ).
>
> So why propagate the nightmare if there are nice dreams?
>

: ))).
(it depends on your own preferences: nightmares or nice dreams : ))).
Human beings are so complicated!

Just to understand which properties really belong to the physical
system (versus the ones that belong only to the infinities).

>
> > However, I prefer to view the universe as finite and discrete
> > (~epistemic/practical view): all the mathematical problems of
> > infinities dissappear
>
> .. and together with it, all deep insights into physics,
> all differential equations basic to all sciences, the calculus
> that made Newton famous and physics the 'hard' science it is today.
>
> This sort of magic is completely against my taste...
> Instead of solving the problems it provides a carpet of
> intractability under which to sweep every challenge that
> is left in the foundations.
>

Interesting, I force myself to consider the discrete models in order to
remove the magic of the infinities from continuous models.

Seratend.

I.Vecchi

unread,
Jun 3, 2005, 2:03:35 AM
to
Aaron Bergman wrote:
> In article <1117557478.8...@g14g2000cwa.googlegroups.com>,
> "I.Vecchi" <vec...@weirdtech.com> wrote:
> > Isn't this obviously circular? Aren't the "the macrostates by which you
> > are performing your observation" precisely what decoherence is supposed
> > to derive from a purely quantum description the process?
>
> I don't think see how. The macrostates are your pointer states.

What determines the pointer states?

> Decoherence is the process wherein the zillions of degrees of freedom in
> your pointer conspire to diagonalize the reduced density matrix in the
> pointer basis.

Conspire?
As a conspiracy, it's pretty lame. All DT proofs I have inspected rely
on some unphysical "no-recoil" assumption, either hidden or explicit,
in order to achieve that diagonalisation.
Indeed, DT arguments follow your outline. First, the "right" pointer
basis is selected by the author. Then an "ad hoc", basis-dependent
dissipative mechanism is introduced to wipe away the off-diagonal
elements.

Regards,

IV

I.Vecchi

unread,
Jun 3, 2005, 2:05:32 AM
to
r...@maths.tcd.ie wrote:
..
> Orthodox QM itself, or the Copenhagen interpretation, features
> collapse but doesn't consider it physical. From the Copenhagen point
> of view, the wavefunction encodes knowledge about the system, and
> it collapses when a measurement is performed; that is, when we
> acquire new knowledge, we have to update the mathematical object
> which we use to represent knowledge. Hence different observers will
> use different wavefunctions to describe the same system. The
> Copenhagen view is still the officially recognised majority view,
> but I doubt there are many physicists today who would agree that,
> for example, the ground state orbital of an electron in a hydrogen
> atom represents knowledge.

Some (pretty good ones, imo) apparently do. Take this example: "It is
helpful to remember that the quantum state is just an expectation
catalog. Its purpose is to make predictions about possible measurement
results a specific observer does not know yet" ([1]).

> Physicists dislike knowledge because
> knowledge is subjective, and subjective things are bad.

The point is that the belief in an objective, inherently existing
universe is unfounded. Such superstition is unnecessary and actually
harmful for scientific discourse, which requires only intersubjective
agreement on measurement outcomes (aka reproducibility).
Whether they proclaim themselves raelians or "physicists",
superstitious people are loath to see their beliefs undermined.
That may be the root of the "dislike" you mention.

IV

[1] Thomas Jennewein, Gregor Weihs, Jian-Wei Pan, Anton Zeilinger
"Reply to Ryff's comment ..." http://arxiv.org/abs/quant-ph/0303104

Hendrik van Hees

unread,
Jun 3, 2005, 2:55:31 AM
to
Aaron Bergman wrote:

> This is where I disagree. There's a fundamental _physical_ question:
> whether or not the wavefunction collapses or not. This is (in
> principle) experimentally verifiable. It's not philosophy; it's a
> question about the real world.

It is interesting how quickly proponents of more "philosophy" in
physics rush past tough physics questions ;-)!

I do not understand how you can distinguish experimentally between a
collapse a la Copenhagen (i.e., the state interpreted as a physical
entity of a single quantum system) and the minimal statistical
interpretation (i.e., the state describes only our (objective)
statistical knowledge about ensembles of similarly prepared quantum
systems). I am not aware of any real experiment so far which can
verify or disprove something like state collapse for a single quantum
system.


>
> Now, a more philosophical question that is, I think, informed by all
> of this is why do we not perceive superpositions (which, even in the
> presence of decoherence, still exist)? Or, in other words, why do we
> only perceive one branch of the wavefunction. I'd like to think that
> this has some real, physical answer, but maybe it's all just
> ephemeral. Beats me.

The point is that you call only those devices measurement devices
which really measure something (like the position of an electron). This
means you have some macroscopic object, like a particle detector at a
certain place, which makes a "click" if it is "hit by an electron".
Then the observable "position of the electron" becomes an objective
reality (within a certain finite detector resolution, of course!).

--
Hendrik van Hees Texas A&M University
Phone: +1 979/845-1411 Cyclotron Institute, MS-3366
Fax: +1 979/845-1899 College Station, TX 77843-3366
http://theory.gsi.de/~vanhees/ mailto:he...@comp.tamu.edu

Seratend

unread,
Jun 3, 2005, 11:47:30 AM
to
I.Vecchi wrote:
> All DT proofs I have inspected relie
> on some unphysical "no-recoil" assumption, either hidden or explicit,
> in order to achieve that diagonalisation.
> Indeed DT arguments follow your outline. First, the "right" pointer
> basis is selected by the author. Then an "ad hoc", basis-dependent
> dissipative mechanism is introduced to wipe away the off-diagonal
> elements.

That's also my understanding, but I need to understand the people who
think decoherence solves the problem as I am not sure I have understood
the whole problem.
Have you got some data concerning the "no-recoil" assumption?

Seratend.

Arnold Neumaier

unread,
Jun 3, 2005, 11:47:33 AM
to
Seratend wrote:

> Arnold Neumaier wrote:
>
>>Of couse any continuous model can be approximated by a plethora of
>>discrete ones. But there are _many_ approximations and only one _nice_
>>theory. The nice one is the one chosen by the overwhelming majority
>>of physicists.
>>
> You mean one nice approximation : ).

Of course the theory is also an approximation.
But it is nice as a theory!


> However, I think the "nice" theory (or approximation) label is more
> related to our possibilities to derive results with our limited
> knowledge, our school learning and a lot of other subjectives
> properties.
> I wish I would have a sufficient brain capacity to study a full
> discrete formulation of mechanics (quantum or classical). : )

Grrr. Think of how large you would have to be then.
Surely bigger than a dinosaur...


> May be in the future, thanks to mathlab, intel, linux and windows at
> the very beginning in the school, the "nice" theory property will
> be changed.

No. It will always be needed to gain understanding.
In a discrete world, there are no concepts such as velocity,
no circles, straight lines, or ellipses, no canonical commutation
relation, no complex numbers, no e or pi, etc.; in short,
no basis on which to do physics.


>>>Hydrodynamics is well-defined with continuous variables and practical.
>>>However, this does not imply that continuous models govern macroscopic
>>>observation just their results are compatible with experiment.
>>
>>Macroscopic observations are _defined_ in terms of observables
>>which get their meaning from the traditional theories of mechanics.
>>These use continuous models.
>>
> This is our legacy from the past : ).

and will be our most important scientific legacy to the future.


> We are able to define discrete observables and I think it what we
> implicitly do when we take into account the measure uncertainty errors.

No, since we never know the exact errors. Uncertainty errors are
vague, not discrete.

> We recover the same problems: we can (know) only measure finite
> quantities.

No. We can only know fuzzy quantities.
We'll never know that a pointer position is exactly 1.345 or whatever
discrete value you wish to posit. But we can know that it deviates from
1.345 by less than 0.001. This specifies an interval, not a discrete
object.


Arnold Neumaier

Aaron Bergman

unread,
Jun 3, 2005, 11:47:28 AM
to
In article <7e6dna0joO6...@pghconnect.com>,

Hendrik van Hees <he...@comp.tamu.edu> wrote:

> Aaron Bergman wrote:
>
> > This is where I disagree. There's a fundamental _physical_ question:
> > whether or not the wavefunction collapses or not. This is (in
> > principle) experimentally verifiable. It's not philosophy; it's a
> > question about the real world.
>
> It is interesting how quickly proponents for more "philosophy" in
> physics rash over tough physics questions ;-)!
>
> I do not understand, how you can distinguish experimentally between a
> collaps a la Kopenhagen (i.e., state interpreted as a physical entity
> of a single quantum system) and the minimal statistical interpretation
> (i.e., state describes only our (objective) statistical knowledge about
> ensembles of similarly prepared quantum systems).

I'm not sure what you're referring to by a 'minimal statistical
interpretation'. It seems to me that what you're indicating is a hidden
variables theory and that is experimentally ruled out (assuming
locality).

Aaron

Arnold Neumaier

unread,
Jun 3, 2005, 11:47:32 AM
to
Seratend wrote:

OK; then we agree on the basic assumptions of what the model building
is about. This makes mutual understanding possible without having to
talk about the reality issue all the time.


>>>a) what is the initial state of the photon (assuming a wave packet) :
>>>|psi>= |path1>+|path2> with <path1|path2>=0?
>>
>>Not quite. Roughly,
>> |psi(t)> = |path1(t)> tensor |1> + |path2(t)> tensor |1>
>>with spatial coherent states |pathi(t)> (i=1,2) moving at the
>>velocity of light and monochromatic 1-Photon Fock states |1>, say.
>
> Ok, usually when I write a state |path1>, this state may be the tensor
> product of whatever we want (we may expand it when it is required).
> Therefore, you seem to require the detail of this state:
>
> |psi(t)>= [|path1(t)>+|path2(t)>](x)|1> with <path1|path2>=0?

No; there is no need for orthogonality. Indeed, coherent states are
not quite orthogonal, although their overlap is small if the paths are
far apart.
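A hedged numerical aside on that overlap remark: for coherent states the overlap obeys |<alpha|beta>|^2 = exp(-|alpha - beta|^2), so it is never exactly zero but falls off very fast with separation. The Fock-space truncation dimension below is an arbitrary choice, not anything from the thread.

```python
import numpy as np

def coherent(alpha, dim=60):
    # Fock-basis amplitudes of a coherent state, truncated at `dim`:
    #   <n|alpha> = exp(-|alpha|^2/2) * alpha^n / sqrt(n!)
    n = np.arange(dim)
    logfact = np.cumsum(np.log(np.maximum(n, 1)))   # log(n!)
    return np.exp(-abs(alpha)**2 / 2) * alpha**n / np.exp(logfact / 2)

# Overlap of |0> with |d> for increasing separation d: the numerical
# value tracks the closed form exp(-d^2) and never vanishes exactly.
for d in (0.5, 1.0, 2.0, 4.0):
    a, b = coherent(0.0), coherent(d)
    print(d, abs(np.vdot(a, b))**2, np.exp(-d**2))
```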


>
> Where <x|path1(t)>= <x|(|path1(t)>+|path2(t)>) for x in a given
> transversal area we may call it A
> And <x|path2(t)>= <x|(|path1(t)>+|path2(t)>) for x in a given
> transversal area we may call it B
>
> Such that {A} intersection {B} is empty.

Again, this is only an approximation, since as mathematical entities,
coherent states extend everywhere. Working with coherent states is
much easier than with truly local wave functions. Moreover, truly
local solutions of the free Maxwell equations do not exist.


>>The challenge allows, however, any specific setting (even
>>idealized, or with massive particles, etc.) that matches the
>>informal description in a reasonable way.
>>
> In other words we are free to choose the interactions and the
> Hamiltonians

Yes; as long as it resembles the informal description.


> as I have done it in another post in this thread for the
> interference toy model.

I haven't seen that (lack of time to read all postings...)


>>>I mean, I have a system that is well described through unitary
>>>evolution (superposition of states).
>>
>>Absorption by a screen is an irreversible macroscopic process
>>accompanied by a minute increase of temperature. The claim that
>>it is described by unitary evolution requires proof, which,
>>if successful, would be part of an answer of the challenge.
>>
> See my description of the interference pattern toy model. I have

Please copy the relevant part for easy reference.


> choosen Hamiltonians and interactions such that we have no entanglement
> between the photons and the screens (formal choice): H_screen=
> |screen><screen|(x)V(r)
> I usually prefer to replace photons by electrons, whenever it does not
> change the global result as the free propagator of photons and
> electrons are the same.

So you ignore spin and assume a mass. But then it is simpler to
take spin 0 (rather than electrons), and simply talk about a 'particle'.


> In the case of photons, V(r) is the effective
> potential giving the source of the reflection or the transmission.
> This model supposes the energy conservation between photons and the
> screens (choice) and it is easy to see that everything evolves unitary,
> just by taking the wave packet.
>
> I may expand my explanation if required.

Yes please. I haven't read the initial description of your setting
(and my remarks below might reflect misunderstanding because of that).

If the dynamics is unitary, how do you get the permanent record (the
definite click or macroscopic spot) that constitutes a measurement?


>>If there is unitary dynamics only then the final result is not
>>the state |0,1,1> or |0,0,1> as observed, but a superposition
>>of the two. Invoking Born's rule is _assuming_ the collapse
>>rather than explaining it.
>>
> I like this toy model where we force no entanglement between the
> photons states and the screens and where we have the simple unitary
> evolution of the initial state. It reflects perfectly what we do on an
> experiment that reflects this unitary evolution:
> a) we have to choose between all the photons, the one with the initial
> state (hence an initial measurement result)

How do we choose that?

In my terminology, this would be a preparation, not a measurement,
since measurement is _acquiring_ new information or _confirming/testing_
old information, while preparation is _assuming_ information based on
past experience with one's equipment.


> b) we simultaneously measure the reflected photons by either the first
> screen or the second one outside the area of the local interaction of
> the screens (here I suppose the plane of the reflecting screens are not
> orthogonal to the beam direction in order to put the "real"
> detector outside the incoming beam. We have only a single detector that
> clicks at a time assuming the good energy trigger level on a restricted
> area.
>
> We can put the detectors or not,

But they change the system under consideration and hence the analysis
needed to get correct predictions.


> they do not change the result of the
> transformation of the wave function by the two screens before it is
> detected by the detectors (we assume a local interaction of the
> detectors: a choice). If we develop more, we see that the projector of
> the detectors (the spatial location) commutes with the interaction of
> the screens.


>
> Assuming this, we can say (interpretation) that the screens do not
> collapse the wave function if we have no detectors, while if we put the
> detectors we can say that the screen collapses the wave function.

Anything which is part of the unitarily modelled system does not
produce collapse, while anything that isn't modelled in full detail
but whose interaction with the unmodelled degrees of freedom is
nontrivial does.


>>That something remains to be explained even from the Copenhagen
>>point of view (some version of which you seem to adhere to)
>>is discussed in Section 3.
>>
>
> Copenhagen interpretation does not assume the "reality" of the
> wavefunction.

But it assumes the reality of the classical equipment, which
therefore gives an N-particle system with large N an ontological
status different from one with small N. It forgets to say at which
value of N one is entitled to switch from one status to the other.


> What it says is very analogue to what I say. There is
> most of the time, with CI, in my opinion, a misunderstanding on the
> meaning of "before" or "after" the measurement. Just replace
> the word "before" by "there is no measurement" and "after"
> by "there is a measurement". Therefore, each instance of a system
> has a single property: either there is a measurement or not (with its
> associated definite single results).
> There is no system where we have no measurement before and "after"
> a measurement appears from nowhere. We have a measured system (of a
> given value) or not (no time reference).

This is neither Copenhagen nor true. The descriptions in
the authoritative treatises by von Neumann, London and Bauer,
or Wigner tell me quite a different story.


>>>At the end, I must apply the born
>>>rules to get the statistics (what I see in the experiment).
>>
>>This is the informal prescription that is used to apply single-particle
>>reasoning to a complex multiparticle experiment. It successfully
>>avoids looking at the physics happening at the screen, replacing it
>>by simply assuming the collapse, i.e., the emergence of an objective
>>record according to the probabilities from the Born rule.
>>While this is an acceptable attitude it is obviously not the whole
>>story.
>>
> This is what I call the statistical description of the physical
> phenomena (we do not explain the outcomes, we just measure their
> frequency and their evolution in the space time).

Yes.


>>>Are you just searching for a predictive description of a particular
>>>outcome in a given QM experiment?
>>
>>Just an explanation for how particular outcomes arise through
>>measurement. Leaving something as complex as 'measurement' as
>>an uninterpreted, vague fundamental concept, while practical
>>measurement is a whole science in itself seem to me too gross
>>a simplification to be tolerable, and one of the reasons why the
>>foundations of QM are in the poor present state.
>>
> Do you reject the deterministic bohmian formulation (at least in the
> non relativistic case where it is the best achieved)?
> (I mean the mathematical formulation connection predicting the outcomes
> and not the interpretations).

Yes. They have to postulate the probabilities in an unconvincing
prestabilized way to get the equivalence to the standard formalism.
They _don't_ follow from the equations of motion. An isolated
hydrogen atom in the ground state consists of an electron at rest
at some distance from the proton, and to explain the observations
they need to _assume_ that there is an averaging according to a
particular concocted distribution...
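Neumaier's criticism can be made concrete with a small numerical sketch (Python with NumPy; the unit choice a0 = 1 and the sample size are illustrative, not from the post). In the Bohmian ground state each electron simply sits at rest, and the familiar quantum statistics appear only after averaging over a *postulated* Born-distributed ensemble of initial positions:

```python
import numpy as np

rng = np.random.default_rng(0)
a0 = 1.0  # Bohr radius, in atomic units (illustrative choice)

# Radial density of the hydrogen ground state:
# p(r) dr ~ r^2 |psi_100(r)|^2 dr ~ r^2 exp(-2 r / a0) dr,
# i.e. a Gamma distribution with shape 3 and scale a0 / 2.
r = rng.gamma(shape=3.0, scale=a0 / 2.0, size=200_000)

# Each Bohmian ground-state electron is at rest at its radius r; the
# familiar statistics (e.g. <r> = 1.5 a0) emerge only after averaging
# over this postulated ensemble, not from the equations of motion.
print(r.mean())  # close to 1.5 a0
```

Nothing in the dynamics singles out this distribution; it has to be assumed, which is exactly the point being made.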

And their trajectories are too bizarre to appeal to me.

But I don't want to discuss Bohmian mechanics. I want to discuss the
best interpretation of the traditional quantum formalism without any
enhancement.


Arnold Neumaier


Arnold Neumaier

unread,
Jun 3, 2005, 11:47:33 AM
to
Joe Rongen wrote:

>>Arnold Neumaier wrote:
>>
>>>I am looking for an explanation why a particular detector coupled
>>>to a particular quantum system produces the observed erratic but
>>>objective record of individual results that can be analyzed
>>>statistically and quoted in a physics journal.
>
> Some detector systems employ photomultiplier tube(s).
>
> The ideal photomultiplier tube is a detector that basically
> absorbs (photo-electric effect) one photon and internally
> converts/produces** due to an electron cascade/amplifier
> effect, one measurable event.
>
> ** Lawrence and Beams showed in 1928 that photo-electrons are
> sometimes emitted less than 3 *10^(-9) sec after initial illumination.

Could you please explain how this relates to my statement?
Even a photomultiplier tube will trigger an erratic response
following a Poisson process when fed with a low intensity coherent
laser beam.
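The claimed Poisson statistics of the clicks are easy to simulate. A minimal sketch (Python; the intensity, time step, and window size are made-up illustration values):

```python
import numpy as np

rng = np.random.default_rng(1)

intensity = 0.2   # illustrative beam intensity (arbitrary units)
rate = intensity  # click rate proportional to the intensity
dt = 0.01         # time step, chosen so that rate * dt << 1
steps = 1_000_000

# In each short interval the detector clicks with probability rate*dt,
# independently of every other interval: a Poisson process.
clicks = rng.random(steps) < rate * dt

# The record itself is erratic, but counts in equal windows are
# Poisson-distributed with mean rate * window_duration.
window = 100_000  # steps per window (1000 time units each)
counts = clicks.reshape(-1, window).sum(axis=1)
print(counts)        # erratic integers scattered around 200
print(counts.mean()) # near rate * window * dt = 200
```

The individual click record carries no regularity; only the window statistics recover the intensity, as described above.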


Arnold Neumaier


Seratend

unread,
Jun 3, 2005, 11:47:30 AM
to
Aaron Bergman wrote:
>
> > I hope you also consider the statistics of statistical classical
> > mechanics as pragmatic procedures.
>
> No. You can pretty much derive all of them from fundamental principles.
>
I do not follow you. Do you think you are able to explain the
probabilities of statistical classical mechanics?
Or do you think it fundamental each time you have a functional relation of
the type q(t)=f(qo, d/dtqo, t) where you apply probabilities?

> > >
> > > Are you advocating a sort of consistent histories approach?
> >
> > Well, I promote the shut up and calculate approach I think: just
> > mathematics, where we map formally the values of the mathematical
> > objects/values to the experiments. All of the interpretations of QM are
> > very similar, if you do not try to attach a too strong reality to the
> > used words. Consistent, many worlds, many minds, my mind, my leg etc
> > .. : ) all are flavours of the same mathematical theory, they are ok
> > if they do not change the predictive results of the formal theory.
> > Interpretation, for me is rather the domain of philosophy. Adding or
> > removing it does not change the results of the mapping.
>
> This is where I disagree. There's a fundamental _physical_ question:
> whether or not the wavefunction collapses or not. This is (in principle)
> experimentally verifiable. It's not philosophy; it's a question about
> the real world.
>
How can it be experimentally verifiable? You seem to work with a kind
of absolute states/probability laws. Let me try to explain:
In classical mechanics, when you write the position of a particle, q,
you agree you are implicitly referring to a reference point. You have
no way to escape this fundamental way of description (because we just
describe modifications - choice).

This is the same thing for probabilities. We have no absolute
probabilities. We always assume a probability law reference (in QM
case, the preparation of the system) and then we may deduce the
probability of a given measurement event with respect to the system
preparation.

Once you accept the preparation of a system is also an outcome, and
hence a collapse, how can you break this logical chain of assertions
(the collapses) in order to verify experimentally an absolute collapse
of an absolute wave function?

> Now, a more philosophical question that is, I think, informed by all of
> this is why do we not perceive superpositions (which, even in the
> presence of decoherence, still exist)? Or, in other words, why do we
> only perceive one branch of the wavefunction. I'd like to think that
> this has some real, physical answer, but maybe it's all just ephemeral.
> Beats me.
>

: )
Now, you are trying to move towards the perception of superpositions:
this is the same initial question: does QM allow to predict the
preferred basis of a measurement?

A superposition of vectors is only defined relatively to a given basis.
We define outcomes on a given basis, by defining the measurement. Any
vector has a unique decomposition on a given basis: the possible
single outcomes of the measurement associated with this basis. Hence, we
can see the superposition of states (i.e. |psi>= sum_i ai |i>), only by
studying the statistics of outcomes (of multiple identically prepared
system) and not by direct observation of a single outcome. There is no
problem with superposition, even with classical probability, each time
a random variable produces different outcomes, we may see it, formally,
in a superposition of states. The only problem remains the prediction
of the experiment basis.
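The point that a superposition |psi> = sum_i ai |i> shows up only in the outcome statistics of many identically prepared systems, never in a single outcome, can be sketched by sampling Born-rule outcomes (Python; the amplitudes 0.6 and 0.8 are an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(2)

# An illustrative superposition in a chosen measurement basis {|0>, |1>}:
# |psi> = 0.6 |0> + 0.8 |1>
amplitudes = np.array([0.6, 0.8])
probs = np.abs(amplitudes) ** 2  # Born rule: [0.36, 0.64]

# A single trial yields one definite outcome; nothing about the
# superposition is visible in it.
single = rng.choice([0, 1], p=probs)
print(single)  # either 0 or 1, never "both"

# Only the statistics of many identically prepared systems reveal the
# weights |a_i|^2 of the superposition in this basis.
many = rng.choice([0, 1], p=probs, size=100_000)
print(many.mean())  # near 0.64
```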

> > > Regardless, when you make a real-world
> > > measurement, you better collapse the wavefunction, whether through
> > > decoherence or some other manner, or you will get the wrong answer. For
> > > future measurements.
> > >
> > You really get a classical deterministic point of view. You seem to
> > think that the only possible deterministic relation is Outcome_i(t)=
> > f(outcome_1(to), outcome_n(to),t).
> > While you may have the deterministic functional relation Outcome_i=
> > f(outcome_1, outcome_n,t).
> > The latter tells you that you may not have a relation between future
> > outcomes and past outcomes, just between the functions (function of
> > sets rather than of points). Therefore, the only way to connect 2
> > outcomes in an experiment is the direct "observation" (the formal
> > labelling of the outcomes): the collapse.
>
> I can't decipher this.
>

Replace outcome_i(t) by q(t).
f in this case is the path of the particle solving the Newton equation
(the deterministic evolution).

I think you have a classical point of view on how to describe a system.
I understand you to think that, given initial conditions (the preparation
of the system), those initial conditions determine the future measurement
outcomes, i.e. q(t)= f(qo, d/dtqo, t).

You have to enlarge your possibilities: you also have the possibility
where, the function q(t) (i.e. the position of the particle) does not
depend on the initial conditions, but rather on other functions (i.e.
position of other particles) and not their values (the initial
conditions): q(t)= f(q1, ... qn, t)=/= f(q1(t), ... qn(t), t)

This is just an example, on how we can describe the physical world.
This last description includes the classical deterministic evolution,
but it allows plenty of other possibilities. One important result of
this description is that you no longer have q(t)= f(qo, d/dtqo, t): the
position of a particle at time t is not a function of the initial
conditions (e.g. qo, d/dtqo).

If you are able to understand this simple example, I think you can
better understand the mathematical meaning of collapse when applied to
this description (identical to the quantum case).

> And I still can't figure out what you mean by your experiment. If you
> have a detector and a measured object, it's practically a definition
> that when you do a measurement, you entangle the two, ie, the state ends
> up in (schematically)
>
> |detector measures x>|object has state x>
>
> In other words, the detector and the object have to be entangled. (In
> reality, the state is much more complicated, of course, but you can
> define the reduced density matrix, put it in a macroscopic basis and
> watch it diagonalize.)
>

No, you do not see what I want to show you. All the states remain
simple, except for the detector I have not described. But the detector
is just the "thought" in this experiment. We can add or remove it,
just to verify that the screen is a measurement of the interference
pattern (we cannot make the distinction):

a) I may say that the screen is a measurement apparatus or not (it
collapses the wavefunction): this does not change the result of the
detector in this toy model (because the projector associated to the
collapse of the screen commutes with the projector of the detector).

b) Therefore in this toy model, I may have 2 measurement apparatuses:
the screen and the detector. I can remove or add the measurement
detector: it does not change the behaviour of the screen (no interaction).
The detector is just a virtual detector (we add or remove it by
thought).
Hence, I may say that the screen is a measurement apparatus that does
not entangle the photons with itself (I have chosen the interaction
such that no entanglement occurs).

In this case, I ask you how decoherence solves the problem as we have
no decoherence at all. This example is really close to the collapse
meaning. Assuming the presence of the detector outcome or assuming that
one instance of this experiment without the detector gives the result
does not change the system behaviour.

Seratend.

Aaron Bergman

unread,
Jun 3, 2005, 11:47:28 AM
to
In article <1117700311.1...@g47g2000cwa.googlegroups.com>,
"Seratend" <ser_m...@yahoo.fr> wrote:

> Aaron Bergman wrote:
> >
> > Outside of this 'envariance' stuff that I don't really understand, I
> > know of no way to derive the probability rules from QM -- in particular,
> > the use of the reduced density matrix really already assumes the Born
> > rule. Even envariance assumes, a priori, that these probabilities exist
> > which seems to be avoiding the central question to me.
> >
> > Aaron
>
> Because the probabilities are external as in classical mechanics (we
> filter the initial set of experiment trials to get the initial
> probability law through the frequency, nothing provides that, it is an
> external selection, what we call the preparation).

I have no idea what this means.

Aaron

Arnold Neumaier

unread,
Jun 3, 2005, 11:47:34 AM
to
Seratend wrote:

> Arnold Neumaier wrote:
>
>>Seratend wrote:
>
>>>I prefer to say that deterministic and statistical description are just
>>>two equivalent ways of giving predictive results: I accept both.
>>>Decribing a function by its points or by its induced probability law is
>>>somewhat equivalent (2 point of views).
>>
>>But there is a difference between asserting that the die shows a three
>>and asserting that the probability of getting a three is 1/6.
>>
> The die is a random variable or a function if you prefer. We have the
> determinist results (e,f(e)), where e is a point of the trial space.

Of course I know the standard interpretations. I just wanted to point
out that there is a difference between a random variable and a
realization, and not two equivalent ways of expressing the same.


>>>Well as I have a finite capacity brain, I just can understand/see
>>>dicrete and finite quantities (I have never handled an infinite
>>>quantity except in the math and physics models).
>>
>>With your finite brain capacity, it is much easier to understand or see
>>continuous quantities (such as a straight line) rather than highly
>>discretized quantities (a long line of equispaced dots). Much of
>>brain processing is indeed concerned with producing simple continuous
>>models of a messy reality.
>>
>
> My brain does not understand what a line really is.

If it understands what a large number of discrete objects is,
it should also understand what a line is. The brain works on
concepts, which are few-bit summaries of complex phenomena.
Thus it finds it much easier to remember a line than to remember
10 points. The line needs no infinite limit to be grasped,
though the mathematical definition might need it.


>>>Even if the fields are continuous, the observables, at least what we
>>>[can] measure is discrete.
>>
>>By convention only. In fact what we can measure is fuzzy, not discrete.
>>Borderline cases are simply forced into a Procrustean bed to make
>>them fit a fixed scheme. In view of the inevitable measurement error
>>this does not harm things, but in a quest for understanding (and that's
>>what the foundations of QM are) one should not use the same Procrustean
>>techniques. http://www.mythweb.com/encyc/entries/procrustes.html
>>
> By convention or simply due to the physical incapacity to really handle
> continuous variables?

By convention. Since the definition of a position, say, involves a
coordinate system and yields real values, which we measure to some
approximation.


>>>However, I prefer to view the universe as finite and discrete
>>>(~epistemic/practical view): all the mathematical problems of
>>>infinities disappear
>>

>>... and together with it, all deep insights into physics,


>>all differential equations basic to all sciences, the calculus
>>that made Newton famous and physics the 'hard' science it is today.
>>
>>This sort of magic is completely against my taste...
>>Instead of solving the problems it provides a carpet of
>>intractability under which to sweep every challenge that
>>is left in the foundations.
>>
> Interesting, I force myself to consider the discrete models in order to
> remove the magic of the infinities in continuous models.

Well, I see that we'll not reach agreement here with such diametrically
opposite views on this matter...


Arnold Neumaier


Aaron Bergman

unread,
Jun 3, 2005, 3:55:30 PM
to
In article <1117794648....@g43g2000cwa.googlegroups.com>,
Seratend <ser_m...@yahoo.fr> wrote:

> Aaron Bergman wrote:
> >
> > > I hope you also consider the statistics of statistical classical
> > > mechanics as pragmatic procedures.
> >
> > No. You can pretty much derive all of them from fundamental principles.
> >
> I do not follow you. Do you think you are able to explain the
> probabilities of statistical classical mechanics?

It's not completely rigorous, but generally, yeah. That's what
statistical mechanics is.

> Or do you think it fundamental each time you have a functional relation of
> the type q(t)=f(qo, d/dtqo, t) where you apply probabilities?

Sorry. I don't know what you mean.


>
> > > > Are you advocating a sort of consistent histories approach?
> > >
> > > Well, I promote the shut up and calculate approach I think: just
> > > mathematics, where we map formally the values of the mathematical
> > > objects/values to the experiments. All of the interpretations of QM are
> > > very similar, if you do not try to attach a too strong reality to the
> > > used words. Consistent, many worlds, many minds, my mind, my leg etc
> > > .. : ) all are flavours of the same mathematical theory, they are ok
> > > if they do not change the predictive results of the formal theory.
> > > Interpretation, for me is rather the domain of philosophy. Adding or
> > > removing it does not change the results of the mapping.
> >
> > This is where I disagree. There's a fundamental _physical_ question:
> > whether or not the wavefunction collapses or not. This is (in principle)
> > experimentally verifiable. It's not philosophy; it's a question about
> > the real world.
> >
> How can it be experimentally verifiable?

You entangle your system with a large measuring device. Then you try to
disentangle it and see if the superposition remains. Such experiments
have been done for small systems.

[...]

> Now, you are trying to move towards the perception of superpositions:
> this is the same initial question: does QM allow one to predict the
> preferred basis of a measurement?
>
> A superposition of vectors is only defined relatively to a given basis.
> We define outcomes on a given basis, by defining the measurement. Any
> vector has a unique decomposition on a given basis: the possible
> single outcomes of the measurement associated with this basis. Hence, we
> can see the superposition of states (i.e. |psi>= sum_i ai |i>), only by
> studying the statistics of outcomes (of multiple identically prepared
> system) and not by direct observation of a single outcome. There is no
> problem with superposition, even with classical probability, each time
> a random variable produces different outcomes, we may see it, formally,
> in a superposition of states. The only problem remains the prediction
> of the experiment basis.

I don't go for this instrumentalist viewpoint (if I understand you). It
seems to be essentially the old Copenhagen interpretation where there
are simply questions we are not allowed to ask. I'm not sure it's
completely consistent, for one.

[...]

> > And I still can't figure out what you mean by your experiment. If you
> > have a detector and a measured object, it's practically a definition
> > that when you do a measurement, you entangle the two, ie, the state ends
> > up in (schematically)
> >
> > |detector measures x>|object has state x>
> >
> > In other words, the detector and the object have to be entangled. (In
> > reality, the state is much more complicated, of course, but you can
> > define the reduced density matrix, put it in a macroscopic basis and
> > watch it diagonalize.)
> >
> No, you do not see what I want to show you. All the states remain
> simple, except for the detector I have not described. But the detector
> is just the "thought" in this experiment.

I don't know what this means.

> We can add or remove it,
> just to verify that the screen is a measurement of the interference
> pattern (we cannot make the distinction):
>
> a) I may say that the screen is a measurement apparatus or not (it
> collapses the wavefunction): this does not change the result of the
> detector in this toy model (because the projector associated to the
> collapse of the screen commutes with the projector of the detector).

Are you discussing an example with commuting observables, then?

> b) Therefore in this toy model, I may have 2 measurement apparatuses:
> the screen and the detector. I can remove or add the measurement
> detector: it does not change the behaviour of the screen (no interaction).
> The detector is just a virtual detector (we add or remove it by
> thought).
> Hence, I may say that the screen is a measurement apparatus that does
> not entangle the photons with itself (I have chosen the interaction
> such that no entanglement occurs).

If the screen does anything that allows us to make an observation, it is
entangled with the photons. I don't see how you can simply render this
false by fiat.

Aaron

Aaron Bergman

unread,
Jun 5, 2005, 2:24:19 AM
to
In article <1117721240.6...@g43g2000cwa.googlegroups.com>,
"I.Vecchi" <vec...@weirdtech.com> wrote:

> Aaron Bergman wrote:
> > In article <1117557478.8...@g14g2000cwa.googlegroups.com>,
> > "I.Vecchi" <vec...@weirdtech.com> wrote:
> > > Isn't this obviously circular? Aren't the "the macrostates by which you
> > > are performing your observation" precisely what decoherence is supposed
> > > to derive from a purely quantum description the process?
> >
> > I don't think see how. The macrostates are your pointer states.
>
> What determines the pointer states?
>
> > Decoherence is the process wherein the zillions of degrees of freedom in
> > your pointer conspire to diagonalize the reduced density matrix in the
> > pointer basis.
>
> Conspire?
> As a conspiracy, it's pretty lame. All DT proofs I have inspected rely
> on some unphysical "no-recoil" assumption, either hidden or explicit,
> in order to achieve that diagonalisation.

Proving decoherence in general is, as I understand it, hard (but this
isn't really my field.) The thing is, we can experimentally observe it
happening.
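The mechanism under discussion — the off-diagonal elements of the reduced density matrix in the pointer basis being suppressed as environment degrees of freedom are traced out — can be illustrated with a toy model (a sketch, not a general proof; the overlap distribution and all names are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# System qubit a|0> + b|1> coupled to n environment spins.  In branch
# k the environment ends up in |e0_k> or |e1_k>, with a random overlap
# c_k = <e0_k|e1_k>, |c_k| < 1.
a = b = 1 / np.sqrt(2)

def off_diagonal(n):
    # Tracing out the environment multiplies the coherence a * conj(b)
    # of the reduced density matrix by the product of branch overlaps.
    overlaps = rng.uniform(-0.9, 0.9, size=n)
    return a * np.conj(b) * np.prod(overlaps)

# The diagonal entries |a|^2, |b|^2 are untouched, while the
# off-diagonal element shrinks roughly exponentially with the number
# of traced-out environment spins: the reduced density matrix
# diagonalizes in the pointer basis.
for n in (1, 10, 100):
    print(n, abs(off_diagonal(n)))
```

The exponential suppression is what makes macroscopic superpositions so hard to observe, which is the experimental fact referred to above.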

Aaron

Arnold Neumaier

unread,
Jun 5, 2005, 2:25:04 AM
to
Seratend wrote:

> Arnold Neumaier wrote:
>
>>Seratend wrote:
>>
>>>Interesting.
>>>You seem to view the measurement results exclusively through the mean
>>>value filter
>>
>>Yes. Mean values of thermodynamic origin are the raw observables
>>in all experiments; everything else is derived from these by theory
>>or speculation.
>>I call this the 'consistent experiment interpretation', following
>>first steps in this direction taken in Section 10 of
>> quant-ph/0303047 = Int. J. Mod. Phys. B 17 (2003), 2937-2980.
>>Since I wrote this, my view has considerably gained in strength.
>>If you read German, you can find much more about it at
>> http://www.mat.univie.ac.at/~neum/physik-faq.tex
>
> Unfortunately, I do not read German (and I regret it today : ).
> However, I am greatly interested by the mean value filter and hope that
> you will be able to post soon the English version.
> I have also looked at your paper, section 10, but it is not easy to
> understand the section alone (as the document is consistent : ) and
> more precisely what do you intend by consistent experiment. Currently,
> I would like to understand what part of thermodynamics you want to use
> to derive some results.

Nonequilibrium statistical mechanics shows the meaning of _single_
macroscopic observations of thermodynamic observables as
approximations to the expectations of corresponding microscopic
operators, and their attainable accuracy as the corresponding
standard deviation. In this sense, macroscopic expectations are
approximately measurable, independent of whether the underlying
microscopic theory is classical or quantum.

This defines realism for me, in accordance with the engineering point
of view. My interpretation extends this realist view of expectations
(rather than eigenvalues - which never figure in thermodynamics)
down to the microworld. Measurements of expectations simply become
more inaccurate, up to the point where they must be inferred by
averaging from very inaccurate raw measurements.

To analyze the measurement process and to recover Born's rule,
one needs the projection operator formalism. The most useful
exposition I found in the literature is in:
H Grabert,
Projection Operator Techniques in Nonequilibrium
Statistical Mechanics,
Springer Tracts in Modern Physics, 1982.


>>>in my point of view (like the interference pattern: single
>>>photon screen impact event versus multiple independent photons
>>>interference pattern event).
>>>How do you explain the observed state of a single photon event?
>>
>>It is only a sloppy way of speaking, not a real physical event.
>>What actually happens is the following:
>>
>>The light ray of a laser is an electromagnetic field localized in a
>>small region along the ray that begins in the laser and ends at the
>>photodetector. A ray of intensity I is described by a coherent state
>> |I>> = |0> + I|1> + I^2/2|2> + I^3/6|3> + ...
>>If I is tiny then, from time to time, an electron responds (in some
>>loose way of speaking that itself would need correction) to the
>>energy continuously transmitted by the ray by going into an excited
>>state, an event which is magnified in the detector and recorded.
>>These occasional events form a Poisson process, with a rate proportional
>>to the intensity I. This, no more and no less, is the experimental
>>observation. It is precisely what is predicted by quantum mechanics.
>>
> Yes, but we have 2 possible observations for this experiment assuming
> the independence of the triggering events of the detectors (e.g. CCD to
> cover the space of the interference pattern): Single events or the
> complete set of events (the complete interference pattern).
> By single events, I mean an experiment where the intensity is so weak
> as we just have one click for one experiment trial (the electron case).

In this case, the click just reveals the presence of the beam (which
I think of being present all the time, though not always transmitting
enough energy to cause a click), but not the presence of a photon.

The single event is, however, simply not enough to estimate the
intensity of the beam. The intensity is the objective contents in the
sense of my interpretation, while the clicks are the rough
approximations to it that can be read off the detecting equipment.
Since the response is so irregular, one needs a long observation time
to get a reliable estimate of the real thing, the intensity.

In a similar way, much of the observable information about distant
stars is collected by astronomers. Reliable measurements sometimes
simply take a lot of time, independent of whether the system observed is
classical or quantum. What decides the number of repetitions
needed is the predicted accuracy, given by the standard deviation.
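The role of the standard deviation can be sketched for the click-counting example: the error of the estimated rate falls off like 1/sqrt(N) with the number N of observation windows (Python; the rate is an arbitrary illustrative value):

```python
import numpy as np

rng = np.random.default_rng(4)

true_rate = 5.0  # mean clicks per observation window (illustrative)

def estimate(n_windows):
    # Click counts in independent windows are Poisson(true_rate); the
    # estimator is the sample mean, whose standard deviation is
    # sqrt(true_rate / n_windows).
    counts = rng.poisson(true_rate, size=n_windows)
    return counts.mean()

# One window says little about the intensity; the accuracy of the
# estimate improves like 1/sqrt(N) as windows accumulate.
for n in (1, 100, 10_000):
    print(n, estimate(n))
```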

> For the second experiment trial, there is sufficient intensity to
> trigger the whole pattern (multiple independent electron case in time).

This has no interpretation problem.


>>The traditional sloppy way of picturing this in an intuitive way is to
>>say that, from time to time, a photon arrives at the screen and kicks
>>an electron out of its orbit. This is a nice picture, especially for
>>the newcomer or the lay man, but it cannot be taken any more seriously
>>than Bohr's picture of an atom, in which electrons orbit a nucleus in
>>certain quantum orbits. For nothing of this can be checked by experiment
>>- it is empty talk intended to serve intuition, but in fact causing more
>>damage than understanding.
>>
> I agree, I do not care about the reality of the photon. I just want
> that the generic mathematical model may be applied to every experiment
> and in some experiments, this model may be compatible with the particle
> view.

It can, in the consistent experiment interpretation.


>>Another way to see that is that the photo effect also happens for
>>fermionic matter in a classical external field. (See, e.g., the
>>quantum optics book by Mandel and Wolf.) Thus the observed
>>Poisson process cannot be a consequence of quantized light, but
>>rather is an indication of quantized detectors.
>>
> Yes. However, in the case of single events (of the detector), we are
> just able to apply the mean value statistics to the detector (huge set
> of random variables/observables).

Yes. And this gives the click - an observable, macroscopic state of
the air hitting our ear (which hears the click). We then can
speculate about its origin. It proves the presence of the weak beam
only if repeatable; otherwise it might as well be discounted as an artifact.


> In this case, I think the mean value
> filter does not apply to the "particle" but only to the single
> triggered detector: we may explain its "deterministic" triggering
> value but not the cause of its triggering (except for the peculiar case
> where the triggering value is equal to the photon state).

Yes. A single click tells very little about the state of the beam.
Quantum optics experts use sophisticated and long series of raw
measurements to measure the state of a beam. See, e.g., the nice
booklet
U. Leonhardt,
Measuring the Quantum State of Light,
Cambridge, 1997.

>>>What do you intend by irreversible effects?
>>
>>Dissipation, introduced by the Markov approximation necessary to get
>>a sensible dynamics of a system smaller than the whole universe.
>>
> Dissipation means energy exchange, or does it also include other types
> of exchange (such as momentum, assuming energy conservation)?

Dissipation means lack of unitarity and the presence of a Lyapunov
function to reveal that. This implies a second law. Depending on the
context, it may mean energy loss, momentum loss, entropy increase, etc..


>>>>Everything in thermodynamics and kinetic theory
>>>>is real, objective, without any of the dubiosities that characterize
>>>>the traditional interpretations of the quantum world.
>>>>
>>>Frankly, I have a real problem to see reality behind pressure, volume
>>>and energy/temperature.
>>
>>Ask any engineer. They know what is real. I understand reality in the
>>engineering sense. They can determine the pressure, to within the
>>accuracy allowed by statistical mechanics. A single measurement on a
>>single large quantum system (such as a cup of tea) is usually sufficient
>>to get a reasonable objective value.
>>If this is not real, there is no reality at all, and we are all dreaming.
>>
>
> Ok, I begin to understand better what you may mean when you use the
> word reality. You are close to the epistemic view of physics, aren't
> you? (the engineering sense).
> If this is the case, it is ok for me: you are not trying to say more than
> it is: what "we" can "see".

No. What any inanimate but reliable recording device can permanently
record (with respect to the time scale of interest), and what we could
check if we'd bother to do so. Generally we only check a small summary of
it in which we happen to be interested.


>>How can you measure a microscopic object without measuring something
>>macroscopic. You need the macroscopic, thermodynamic state of something
>>to assert that indeed some definite, objective event happened.
>>Take away objectivity and you lose all of physics.
>>
> Ok, for the macroscopic interface. See my previous answer with the
> photons as I think there is a misunderstanding. I just question how you
> can describe the trigger of such a macroscopic device by a single
> particle event (e.g. an electron in a given quantum state).

I don't understand what precisely you want.
It is described by an increase in the electron density at the place
where the current is measured and turned into a record.


> If you only describe it through statistics (hence requiring multiple
> outcomes), is your description able to predict a preferred basis of the
> quantum state of the particle?

Every single click is a macroscopic event. A single click lets us infer
only an upper bound on the intensity of the quantum system (the beam).
A multitude of clicks gives more and more information about it.


>>>>The quest is to show that the interaction of a quantum system with
>>>>a macroscopic detector describable by thermodynamics (and hence,
>>>>through statistical mechanics, by quantum theory)
>>>
>>>Statistical classical mechanics?
>>
>>No. Statistical mechanics as taught in textbooks. Which includes
>>(and on the deepest level is only) quantum mechanics.
>>
(you mean modern statistical mechanics, thus based on QM and not the
old Gibbs statistical mechanics based on classical mechanics. I always
try to separate them, as I seem to be an old-fashioned man : ).

They are essentially the same thing. Formally they are very close; only
the details of the dynamics differ.

But quantum statistical mechanics is OK for our discussion.


> Ok, I think I begin to understand what you are trying to say (tell me
> If I am wrong).
> You are using the mean value filter to try to get deterministic results
> of macroscopic systems (on a given basis of this system: e.g. pressure,
> energy, position, volume, etc ...).

Yes, since this is the universal agreement of physicists and engineers.
It is the best approximation to a concept of reality that physics ever
developed.

> If this is correct, all these
> macroscopic observables will commute between themselves (simultaneous
> measurement possible).

No. There are no eigenvalues involved. Expectations are the real things,
not eigenvalues. This is a complete renunciation of the Copenhagen
interpretation! For example, spin is a continuous variable,
not a discrete one, though it is measured by collecting and
averaging discrete events.

Some macroscopic observables, however, commute, when taken at a fixed
time. Others nearly commute.


> If you have a theorem stating that all the
> observables of a macroscopic system (at the infinite number limit)
> commute, you solve the preferred basis problem of the measurement.

The preferred basis problem is solved in a different way, using the
projection operator formalism. (This is still unpublished work, that
I hope to write up during the summer.)


> Therefore, once we define the quantum interaction between the quantum
> particle and the macroscopic system, we are able to know the states of
> the quantum particle through the values of the commuting observables of
> the macroscopic system (e.g. pressure, energy, position, volume, etc
> ..) if the decoherence results apply.

No. Please try again, given the new information expounded above.


Arnold Neumaier

Arnold Neumaier

unread,
Jun 5, 2005, 2:24:52 AM6/5/05
to
Seratend wrote:

> I hope you also consider the statistics of statistical classical
> mechanics as pragmatic procedures. If this is the case, you are simply
> looking for the deterministic evolution of individual outcomes from a
> given initial condition:
> Outcome_i(t) = f(outcome_1(t_o), ..., outcome_n(t_o), t).
> If it is what you are looking for, general unitary evolution tells you
> that it is most of the time impossible:

No. Only a closed system evolves unitarily, and of course it cannot
be observed and hence does not produce outcomes.

Any system that can be observed from the outside is open, however,
and for an open system, modern quantum physics prescribes _nonunitary_
evolution.

This even holds in the traditional Copenhagen interpretation.
The view is that the system is closed most of the time and then
evolves unitarily. At certain very short moments, it is assumed
to be in contact with a detector for measurement - then the
system is open and evolves nonunitarily, by collapse.

Although not very clearly separated in many discussions,
these two processes never happen simultaneously but are context
dependent, and are of course only approximations to more
realistic measurement situations.

For example, in a Stern-Gerlach experiment, the system (silver atom)
moves from the source along the magnet towards the screen with very
good accuracy in a unitary (and indeed reversible) way. But a split
second before it hits the screen it feels the interaction,
and describing it as a closed system becomes hopelessly inaccurate.
Instead, since the interaction time is very short, it can be
described very accurately by an instantaneous collapse.
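This two-regime description can be sketched in a few lines. The following is a hypothetical minimal toy model (the spatial dynamics, the magnet, and the random seed are all omissions or arbitrary choices for illustration, not part of the discussion above): the spin part is left untouched by the unitary leg, and the collapse at the screen is modeled as a random projection onto the S_z eigenbasis with Born-rule probabilities.

```python
import numpy as np

up = np.array([1.0, 0.0])                # S_z eigenstates
down = np.array([0.0, 1.0])
psi = (up + down) / np.sqrt(2)           # spin prepared along +x

# Unitary leg: free propagation from source to screen leaves the spin
# state unchanged here (the spatial part is not modeled in this sketch).

# Collapse leg: Born-rule probabilities for the two spots on the screen.
p_up = abs(np.vdot(up, psi)) ** 2
p_down = abs(np.vdot(down, psi)) ** 2

# One run of the experiment produces one definite macroscopic record.
rng = np.random.default_rng(0)
outcome = "up" if rng.random() < p_up else "down"
print(round(p_up, 3), round(p_down, 3), outcome)
```

The probabilistic element enters only in the collapse leg, exactly as described above: the environment (screen) is not modeled, so only probabilities for its effect remain.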

The probabilistic nature comes about because modeling
the system without its interaction partner leaves one lacking the
deterministic information about the environment that would be
needed for an accurate prediction.

Qualitatively, the situation would be the same even in classical
mechanics; one couldn't predict the outcome of a classical particle
interacting with a classical screen without knowing the details
of the screen.

I'd find it strange if one would expect more from quantum mechanics.


Arnold Neumaier

Hendrik van Hees

unread,
Jun 5, 2005, 2:26:11 AM6/5/05
to
Aaron Bergman wrote:

> I'm not sure what you're referring to by a 'minimal statistical
> interpretation'. It seems to me that what you're indicating is a
> hidden variables theory and that is experimentally ruled out (assuming
> locality).

The minimal statistical interpretation is not a hidden-variable theory,
which indeed is ruled out experimentally assuming locality.

It is simply quantum mechanics, taking the probabilistic physical
content of the states seriously and associating the state not with
single systems but only with ensembles. With this interpretation all
the conceptual troubles of quantum theory vanish, at the price that one
admits that quantum theory cannot describe single systems.

Nevertheless it is sufficient to apply quantum theory to all
experimental facts known so far. A very nice introduction to this point of view can
be found in

L. E. Ballentine, The Statistical Interpretation of Quantum Mechanics,
Rev. Mod. Phys. *42* (1970) 358
http://link.aps.org/abstract/RMP/v42/p358

or in the marvelous textbook by the same author

L.E. Ballentine, Quantum Mechanics: A Modern Development, World
Scientific

I.Vecchi

unread,
Jun 5, 2005, 2:28:51 AM6/5/05
to
Seratend wrote:
..

> That's also my understanding, but I need to understand the people who
> think decoherence solves the problem as I am not sure I have understood
> the whole problem.

Now that's a labyrinthine psychological problem. I guess some people
hate the idea that their reference frames (pointer states or whatever)
are just that, and not the universe's fundamental structure.

> Have you got some data concerning the "no-recoil" assumption?

I am not sure I understand the question. Do you expect me to provide
data falsifying Newton's third law?
If you need a reference where the "no-recoil" assumption is introduced
explicitly I suggest Joos' article in the classic "Decoherence and the
Appearance of a Classical World in Quantum Theory". Joos is a believer
in decoherence, but he writes and argues with remarkable clarity,
making it easy to spot DT's hidden assumptions.

Cheers, or even better: please accept, Monsieur Seratend, the sincere
expression of my most distinguished sentiments.

IV

Seratend

unread,
Jun 5, 2005, 2:29:39 AM6/5/05
to
Aaron Bergman wrote:
> > I do not follow you. Do you think you are able to explain the
> > probabilities of statistical classical mechanics?
>
> It's not completely rigorous, but generally, yeah. That's what
> statistical mechanics is.
>
If you say that, I really think you have not understood the
probabilities of classical statistical mechanics, and I understand now
why you do not understand my explanations in the previous posts (e.g.
the function q(t)=f(q_o, d/dt q_o, t), the probability law and
determinism, ...).
Do you really understand (formally) probabilities and how we use them
in the prediction of statistics of a system?
Do you understand that in order to predict statistics, we first need
an externally given probability law? (e.g. the coin-flipping
experiment: to say that the frequency of heads/tails is the probability
law, we must assume the independence of the trials => a given
probability law on the space of all trials).
Do you understand that nothing (logic) predicts this probability,
except if it is the result of another experiment? (and hence, we have
another probability that is not predicted).
We need it in order to make a logical prediction of the statistics,
just as we need a reference point to give the logical position of a
particle.

I have just written these questions in order to show you that if you
are not able to see how we construct (logically) the statistical
description of experiments (quantum or classical: it does not matter),
it will be difficult for you to understand what the measurement and the
collapse are: the logic (and not its multiple interpretations) of the
experiments. Therefore, it will be difficult to separate the
non-mathematical results from the mathematical ones (the logical
explanation).
It will also be difficult for me to explain to you, and to understand
what you are trying to say, if you do not have a clear view of
statistics (the logic and not the belief). I only have the logic to
build a common ground of understanding.

Usual statistics are a very pragmatic description choice (nothing
magic). With QM or with a basic coin flipping experiment we always do
the same thing: we label the experimental trials and compute the
frequencies of the outcomes. This is a choice of description and not a
mysterious physical process:
a) you have a random variable, a function, that expresses the logic of
your experiment: for the trial labelled e, you have the result a (logic
true). The experiment trials implicitly define the function: the set
{(e, a)} is equivalent to a function a=f(e) (i.e. the proposition "the
result of the experiment labelled e is a" is true).
This is the formal meaning of the random variable in the experiment.
The function f defines the true propositions of the experimental
trials.
Do you understand that the function does not have to exist if the
experiment trials do not exist? (logical meaning).
Do you understand that without this function we are not able to
associate the frequencies to the outcomes of the experiment?
Do you understand that there is no time reference, just propositions
that we define by the realization of the experimental trials? (such as
a=f(e))

b) Each time you want (logically) to say that you observe a given
statistics [of a random variable] in a given experiment, you need a
probability law on the abstract set of the considered experiment
trials. This is what we call the preparation of the system (or of the
coin). This is the induced probability law of the function f defined by
the experimental trials, combined with the law of large numbers theorem
(the logic behind the sentence/proposition "I observe a given
statistics"):
Probability law of the statistics: P_f = P o f^-1.
Here P is the probability law on the set of all experiment trials
(E={e}) and P_f is the induced probability law on the set A={a}.
Without that, you cannot give a logical meaning to the frequency of the
outcomes.
Do you understand that nothing explains why we have the probability P
in this description choice? (Externally given). And this is not
mysterious, just a logical choice.
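The construction in (a) and (b) can be made concrete in a few lines. This is a minimal sketch (the trial labels, the outcome set, and the numbers in P are made up for the illustration): once the law P is externally given on the trials E, the induced law P_f = P o f^-1 on the outcomes is fixed, and empirical frequencies converge to it by the law of large numbers.

```python
import random
from collections import Counter

# Externally given probability law P on the abstract set E of trials
# (hypothetical numbers, chosen only for illustration).
E = ["e1", "e2", "e3", "e4"]
P = {"e1": 0.1, "e2": 0.2, "e3": 0.3, "e4": 0.4}

# The random variable: f maps each trial label e to its outcome a.
f = {"e1": "heads", "e2": "tails", "e3": "heads", "e4": "tails"}

# Induced law on the outcome set A: P_f(a) = sum of P(e) over all e
# with f(e) = a, i.e. P_f = P o f^-1.
P_f = Counter()
for e in E:
    P_f[f[e]] += P[e]

# Law of large numbers: outcome frequencies over many independent
# trials drawn from P converge to P_f -- but only because P was given.
random.seed(0)
n = 200_000
draws = random.choices(E, weights=[P[e] for e in E], k=n)
freq = Counter(f[e] for e in draws)
for a in P_f:
    print(a, P_f[a], freq[a] / n)
```

Nothing in this construction explains where P itself comes from; it is the externally given ingredient, exactly as stated above.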

Do you understand that this description is independent of the theory
(QM or Classical mechanics)?

Are you able to see the connection between the function f and the
observables of QM and the collapse postulate?

Now, given this explanation of the description choice, are you able to
understand that we are not able to explain the probabilities of a given
experiment, just the probabilities induced by an external probability
law? Do you understand that QM and classical statistical mechanics just
describe the induced probability law (hence the "preparation" of a set
of systems is required to compute the induced probability)?

I really think that if you are not able to understand this simple,
basic logical choice of description (the statistics), it will be very
difficult for me to see the logical ground of your affirmations.


> > Or do you think fundamental each time you have a functional relation of
> > the type q(t)=f(qo,d/dtqo, t) where you apply probabilities?
>
> Sorry. I don't know what you mean.
>

Cf. above: P_t = P_{t=0} o f^-1.

> [...]

> > We can add or remove it,
> > just to verify that the screen is a measurement of the interference
> > pattern (we cannot make the distinction):
> >
> > a) I may say that the screen is a measurement apparatus or not (it
> > collapses the wavefunction): this does not change the result of the
> > detector in this toy model (because the projector associated to the
> > collapse of the screen commutes with the projector of the detector).
>
> Are you discussing an example with commuting observables, then?

Yes (the observable of the screen and the observable of the detector).

>
> > b) Therefore in this toy model, I may have 2 measurement apparatus:
> > the screen and the detector. I can remove or add the measurement
> > detector: it does not change the behaviour of screen (no interaction).
> > The detector is just a virtual detector (we add or remove it by
> > thought).
> > Hence, I may say that the screen is a measurement apparatus that does
> > not entangle the photons with itself (I have chosen the interaction
> > such that no entanglement occurs).
>
> If the screen does anything that allows us to make an observation, it is
> entangled with the photons. I don't see how you can simply render this
> false by fiat.
>

Once again: I gave you a very simple interaction Hamiltonian to
describe the interaction of the screen with the photons. I assume you
are able to see that such a basic Hamiltonian does not entangle the
state of the photons/electrons with the state of the screen:
H = |screen><screen| (x) V(r).
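This no-entanglement claim is easy to check numerically. Below is a minimal numpy sketch (the dimensions, the random Hermitian V, and the evolution time are arbitrary choices for illustration, not part of the argument): evolving the product state |screen> (x) |psi> under H = |screen><screen| (x) V leaves the Schmidt rank at 1, i.e. the state stays a product state.

```python
import numpy as np

def propagator(h, t):
    """U(t) = exp(-i h t) for a Hermitian matrix h, via eigendecomposition."""
    w, v = np.linalg.eigh(h)
    return v @ np.diag(np.exp(-1j * w * t)) @ v.conj().T

rng = np.random.default_rng(1)

# Toy dimensions: 2-level "screen", 3-level particle (arbitrary choices).
s = np.array([1.0, 0.0])                 # |screen>
proj_s = np.outer(s, s)                  # |screen><screen|
V = rng.normal(size=(3, 3))
V = (V + V.T) / 2                        # an arbitrary Hermitian V(r)
H = np.kron(proj_s, V)                   # H = |screen><screen| (x) V(r)

psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)
state = np.kron(s, psi)                  # product state |screen> (x) |psi>

state_t = propagator(H, 0.7) @ state

# Schmidt rank = number of nonzero singular values of the 2x3
# coefficient matrix; rank 1 means the state is still a product state.
singvals = np.linalg.svd(state_t.reshape(2, 3), compute_uv=False)
schmidt_rank = int(np.sum(singvals > 1e-10))
print("Schmidt rank:", schmidt_rank)     # stays 1: no entanglement
```

The reason is that |screen> is an eigenstate of the projector, so exp(-iHt) acts as |screen> (x) exp(-iVt) on this state; only the particle factor evolves.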

I use the detector to prove that nothing (logically) prevents you, in
QM theory, from saying that the screen is a measurement apparatus
(producing the interference pattern).

Now, you are free to give your own definition of a measurement
apparatus requiring an entangling interaction, and to choose an ad hoc
interaction such that the Schmidt basis is the one in which we get the
observations. However, in this case this is completely out of the scope
of QM theory, and it is much simpler to say that we simply know the
preferred basis by doing experiments (we use our knowledge of known
experiment results to deduce other experiment results and bases): no
prediction by QM theory.

Seratend

I.Vecchi

unread,
Jun 5, 2005, 11:55:04 AM6/5/05
to

Aaron Bergman wrote:
..


> Proving decoherence in general is, as I understand it, hard (but this
> isn't really my field.)
> The thing is, we can experimentally observe it
> happening.

What we can experimentally observe is that interference patterns that
may reveal macroscopic superpositions are in general hard to track.
That's hardly surprising and does not require grotesque unphysical
assumptions to be explained.

Anyways, in current QM there are far more interesting perspectives than
DT's conceptual ratholes. Macroscopic superpositions are already being
detected ([1],[2]) and I am looking forward to experiments detecting
superposed observers.

IV

[1] http://physicsweb.org/article/world/13/8/3
[2] www.nobel.se/physics/symposia/ncs-2001-1/leggett.pdf


Aaron Bergman

unread,
Jun 5, 2005, 8:19:13 PM6/5/05
to
In article <1117984312....@o13g2000cwo.googlegroups.com>,
"I.Vecchi" <vec...@weirdtech.com> wrote:

> Aaron Bergman wrote:
> ..
> > Proving decoherence in general is, as I understand it, hard (but this
> > isn't really my field.)
> > The thing is, we can experimentally observe it
> > happening.
>
> What we can experimentally observe is that interference patterns that
> may reveal macroscopic superpositions are in general hard to track.
> That's hardly surprising and does not require grotesque unphysical
> assumptions to be explained.

As I understand it, it's more than just not seeing macroscopic
superpositions; the actual process of decoherence has been observed.

Aaron
