
photoelectric effect : hypothetical experiment


gans...@rediffmail.com
Sep 7, 2005, 7:29:34 PM

Hi,
Suppose I have a photon source S which can produce photons of
frequency f. Now, there are two identical and parallel metal plates,
each at a distance d from S. Basically, S is in between the two
metal plates. The metal plates have a work function = hf. So,
photons from S can eject electrons from either metal plate.

The experiment: Now, the input energy to S is E = hf exactly.
So, S can emit only a single photon. Let's now look at the
picture in terms of electromagnetic waves.
The electromagnetic wave (which has energy = hf) travels
from S and reaches the two metal plates at the same time.
The amplitude of the electromagnetic wave is identical at
the two plates (because they are equidistant).
The electron in metal plate 1 (e1) sees an electromagnetic
wave with E = hf = its work function and decides to get ejected.
The electron in metal plate 2 (e2) decides the same thing.

So, do both the electrons get ejected?? What is the flaw in
the above argument??
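
In symbols, the energy bookkeeping behind the puzzle:

  input to S:                E = hf
  one ejection costs:        W = hf = E
  two ejections would cost:  2hf > E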

Eugene Stefanovich
Sep 7, 2005, 9:42:05 PM

The flaw is that classical electromagnetic wave theory
cannot explain the photoelectric effect, which is a quantum
phenomenon. Einstein gave a consistent quantum explanation in 1905.

Eugene.


Souvik
Sep 8, 2005, 1:36:51 PM

gans...@rediffmail.com wrote:
> The electromagnetic wave (which has energy = hf) travels
> from S and reaches the two metal plates at the same time.
> The amplitude of the electromagnetic wave is identical at
> the two plates (because they are equidistant).
> The electron in metal plate 1 (e1) sees an electromagnetic
> wave with E = hf = its work function and decides to get ejected.
> The electron in metal plate 2 (e2) decides the same thing.
>
> So, do both the electrons get ejected?? What is the flaw in
> the above argument??

What a wonderful question!

I suppose only one of the plates will fire at a time, the reason being
that the wavefunction in momentum space of the emitted photon will be
roughly spherical. When it encounters the plate and 'actualises' the
emission of an electron, it'll do so either at plate A or B to conserve
energy.

The meaning of the word 'actualises' in the previous paragraph is
unclear to me.

And I have no idea *how* one electron knows to pop out but not both,
in spite of encountering the same influence locally. (Well, if you're
considering QFT you can.) There is probably a directed piece of
information carried by the photon, our lack of knowledge of which
prompts us to assume a roughly spherical wavefunction that collapses on
measurable events like the popping of an electron.

This seems like a good pedagogical gedanken to illustrate the non-local
influences of quantum mechanics. Especially if one aggravates the
situation by boarding different inertial reference frames while the
experiment is running.

-Souvik

Igor Khavkine
Sep 8, 2005, 1:36:55 PM

I don't see a flaw. What matters is the frequency of the radiation,
because that's what determines the energy (E = hf) of the photons that
will interact with the electrons. The number of photons in an
electromagnetic field is very large and sufficient to eject one, two,
three, or even a great many electrons.

What made you uncertain of your conclusion?

Igor

gans...@rediffmail.com
Sep 8, 2005, 5:59:47 PM

Igor, check my post. I said a single photon and a frequency f such
that the work function = hf.

Souvik
Sep 8, 2005, 5:59:48 PM

Igor Khavkine wrote:
> I don't see a flaw. What matters is the frequency of the radiation,
> because that's what determines the energy (E = hf) of the photons that
> will interact with the electrons. The number of photons in an
> electromagnetic field is very large and sufficient to eject one, two,
> three, or even a great many electrons.

He is assuming the source S to be emitting exactly one photon at a
time.

| S |

There are two plates equidistant from S whose work functions are just so
that an electron will pop out if it is struck by a photon from S. Given
that S emits one photon at a time, I think his question is: How does an
electron at plate 1 'know' (through local interactions) that it must or
must not pop depending on what electrons in the other plate did?

-Souvik

Oz
Sep 9, 2005, 11:41:43 AM

gans...@rediffmail.com writes

> The electron in metal plate 1 (e1) sees an electromagnetic
> wave with E = hf = its work function and decides to get ejected.
> The electron in metal plate 2 (e2) decides the same thing.
>
> So, do both the electrons get ejected?? What is the flaw in
> the above argument??

I posted this here just now on another thread.


Precisely the same concern would pertain to why it excites a single atom
in a diffuse gas, although one can to some extent call on superposition
to cloud the issue. Actually, this might be a better way for me to
explain the mechanism than the silver halide one.

**crank warning**
In this area (and possibly others) I am considered a crank.

Let's consider a 'real' photon.
[I did this once before, but I can't find the posting.]

http://www.astrobio.nau.edu/~koerner/ast301/lecture/lecture23/lec23.html
Tutorial -- The lifetime of an electron in the first and second excited
states of hydrogen is about 10^-8 seconds. What, then, is the
approximate natural broadening of the H-alpha line at 6563 angstroms?
(about 0.00046 angstroms)

Let's convert to meters and get a feel for the interaction.
1 A = 10^-10 m

A 'real' photon with a wavelength of 6x10^-7 m is about 3 m 'long'.
[That's quite a long transition, I must admit.]

Now let's convert to 'atom sizes' (OK, OK, angstroms and atom sizes
match quite well) at about 2x10^-10 m.

That's a wavelength of about 3000 atoms, and a length of 1.5x10^10
atoms. It's a BIG beastie. It's going to interact with millions of atoms.
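
Spelling out the arithmetic:

  photon 'length'  ~ c * tau = (3x10^8 m/s) * (10^-8 s)  = 3 m
  wavelength/atom  ~ (6x10^-7 m) / (2x10^-10 m)          = 3000 atoms
  length/atom      ~ (3 m) / (2x10^-10 m)                = 1.5x10^10 atoms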

Now this means there will be an entangled state between the incoming
packet of EM radiation and a vast number of atoms. In effect, they
all become one single co-evolving wavefunction.

Of course we cannot begin to calculate the consequences from first
principles, even if we knew the initial states of each and every atom
perfectly and their interactions with their neighbours (their
wavefunctions will overlap in space, remember). That's before we
introduce the photon ferchristssake!

We can consider likely outcomes statistically though.
There are two possible generalised outcomes:

1) The photon is absorbed by the gas.
2) It's not absorbed by the gas.

We might conclude that the gas is opaque and that the probability of
absorption is very high.

We would also conclude that absorption would mean the excitation of a
single atom and the annihilation of the incoming photon.

Given our complete inability to model the wavefunctions of a million
interacting atoms (all simultaneously interacting with the incoming
single-photon EM wave), we have absolutely no way of predicting which
atom will become excited.

But one of the million probably will be.

We note that during the interaction period (here some 10^-8 sec) the
entangled wavefunction has evolved so that only a single atom has become
excited. You can use the same argument for a dot of silver on a
photographic plate.

One can easily extend this concept to more complex examples, but in
each case it's the emitter and/or the absorber that carries the
quantumness.

Note that this problem only applies to the EM wave.
Electrons and other particles seem to be inherently internally
quantised, although I still consider them to be entirely wavelike.

--
Oz
This post is worth absolutely nothing and is probably fallacious.


nightlight
Sep 10, 2005, 10:57:27 AM

Semiclassical EM (the Schrodinger/Dirac equation for electrons +
classical Maxwell EM) explains the photoeffect without any problem.
You can check any quantum optics textbook (e.g. Mandel & Wolf [1],
which includes both semiclassical & QED approaches) for the
derivations (these derivations have been around since the late 1920s).

It is classical mechanics (even with Bohr's quantization rules
included) + Maxwell EM that can't explain the photoeffect (or the
Compton effect). You are confusing, following the usual introductory
QM textbooks and popular expositions, the "classical impossibility"
conclusion of the so-called "Old QM" (the pre-Schrodinger QM of
1900-1926) with that of the "New QM" (post-Schrodinger), where the
photoeffect is modeled just fine without quantizing the EM field.

The only type of (presumed) QM predictions which cannot be reproduced
by the semiclassical theory are Bell inequality violations and
(subpoissonian) photon anticorrelations (that's why those experiments
were devised in the first place -- precisely because nothing else, not
the photoeffect, not the Compton effect,... can distinguish the two
theories; cf. the Introduction in [2], which surveys the status of
classical models). Linguistic gimmicks and euphemisms aside (such as
"loophole-free"), no experiment has reproduced those imagined effects
either [3], so far (even though the anticorrelations have been pursued
experimentally since the 1950s, and the Bell inequalities since the
1970s).

Note that the radiative corrections (which are outside of QM or the
old Dirac-Heisenberg-Jordan QED) have been reproduced via Barut's
Self-field ED (the semiclassical theory which takes into account the
self-interaction of the classical Dirac matter field) up to alpha^5
order (i.e. as far as he and his students had computed the SFED
effects, which is the same precision that the high-precision QED
experiments of the time, the late 1980s, had achieved). These
higher-order QED effects, though, are not relevant to the theory of
photodetection or to Quantum Optics (or to Bell inequality violations
& photon anticorrelations).

-- References

1. L. Mandel, E. Wolf "Optical Coherence and Quantum Optics"
Cambridge Univ. Press., Cambridge (1995)

2. J.J. Thorn, M.S. Neel, V.W. Donato, G.S. Bergreen, R.E. Davies, M.
Beck
"Observing the quantum behavior of light in an undergraduate
laboratory"
Am. J. Phys., Vol. 72, No. 9, 1210-1219 (2004).
http://marcus.whitman.edu/~beckmk/QM/grangier/Thorn_ajp.pdf
Experiment Home Page: http://marcus.whitman.edu/~beckmk/QM/

3. E. Santos "Bell's theorem and the experiments: Increasing empirical
support to local realism"
quant-ph/0410193 http://cul.arxiv.org/abs/quant-ph/0410193

E. Santos "Optical tests of Bell's inequalities not resting upon the
absurd fair sampling assumption"
quant-ph/0401003 http://cul.arxiv.org/abs/quant-ph/0401003

nightlight
Sep 10, 2005, 10:57:34 AM

> So, do both the electrons get ejected?? What is the flaw
> in the above argument??

This experiment is not so hypothetical; it is done in undergraduate
physics labs. Check, for example, the page by Professor M. Beck
(Whitman College) on this very experiment:

http://marcus.whitman.edu/~beckmk/QM/grangier/grangier.html

which also includes their AJP 2004 paper:

http://marcus.whitman.edu/~beckmk/QM/grangier/Thorn_ajp.pdf

The paper claims that the experiment shows that only one detector (or
metal plate in your setup) will trigger. It turns out that the
experiment cheats (by tweaking the coincidence electronics to blatantly
ignore the double triggers), as explained in detail in the recent
Physics Forum discussion:

http://www.physicsforums.com/showthread.php?t=71297

What the actual experiment shows (once you disable their circuit
tweaking cheat), in agreement with the similar experiments of Grangier
et al (from 1986), is that each EM packet fragment triggers (or doesn't
trigger) its detector (metal plate) _independently_ of what the other
detector does. Note that if the trigger probability for a given
sampling time window is "small", e.g. 10 percent (p=0.10), then the
probability p2 of both detectors triggering in the same sampling window
is p2=p*p=0.01, i.e. both detectors will trigger in just 1 percent of
tries, but this type of (poissonian) "exclusivity" is perfectly
classical. Other than outright cheating with the coincidence circuits
(as done in the AJP 2004 experiment), you can never get p2 below p*p
on actual counts.

The Quantum Optics terminology obscures this fact by calling
"(pair-)counts" not the actual pair counts but the filtered counts,
which exclude the tries (sampling windows) in which neither detector
triggers. This rejection of sampling windows is done _after_ the
results of the triggers (or non-triggers) are known, which is analogous
to conducting a poll and removing the responses which the pollster
doesn't like. It is only these QO-counts (the post-filtered "counts")
which show the "non-classical" exclusivity (anticorrelations or
sub-poissonian statistics).

Consider the example p=0.1 given above, and say you had 1000 tries
(sampling windows). Each detector will have (on average) 100 triggers
and 900 non-triggers. Regarding the pair trigger counts, there will be
(on average) 10=p*p*1000 cases when both detectors trigger in the same
try, 180=2*p*(1-p)*1000 cases when only one of the two detectors
triggers, and 810=(1-p)*(1-p)*1000 cases when neither detector triggers
(10 + 180 + 810 = 1000). So far, this is exactly what you expect
classically, and that is what the experimental counts will show you.
Now, the Quantum Opticians throw away the 810 no-trigger samples and
end up with 190 samples, of which there are 180 cases of exclusive
(single) trigger and 10 cases of double trigger, and report these as
"pair counts".
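
For concreteness, here is a small illustrative Python script (not part
of the original argument; p = 0.1 and the 1000 windows are taken from
the example above) that simulates the raw counts and then applies the
described post-selection:

  import random

  p, trials = 0.10, 1000
  random.seed(0)

  # Two detectors triggering independently, each with probability p per window.
  windows = [(random.random() < p, random.random() < p) for _ in range(trials)]

  both    = sum(a and b for a, b in windows)        # ~ p*p*1000       = 10
  single  = sum(a != b for a, b in windows)         # ~ 2*p*(1-p)*1000 = 180
  neither = sum(not (a or b) for a, b in windows)   # ~ (1-p)^2*1000   = 810
  print(both, single, neither)

  # QO-style post-selection: discard windows where neither detector fired.
  kept  = [w for w in windows if w[0] or w[1]]      # ~ 190 windows
  p_eff = sum(a for a, b in kept) / len(kept)       # ~ 100/190 = 0.526
  print(len(kept), p_eff, p_eff**2 * len(kept))     # ~ 53 "expected" doubles vs ~ 10 actual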

Now, if a student or a reader uninitiated in Glauber's QO terminology
reads that the experiment shows "pair counts" having 180 exclusive
triggers and only 10 double triggers, and imagines that the "pair
counts" talked about by the Quantum Opticians are the actual counts
obtained (the mistake which the popular and pedagogical literature
leads one into and encourages), the result will appear highly
non-classical. Namely, suddenly, for these 190 pairs each detector
triggers 100 times, i.e. it appears as if p=100/190=0.526, and the
expected classical result for the double counts would be p2=p*p=0.277,
i.e. the classical model would predict p*p*190 = 53 double counts,
while the experiment shows only 10 double counts; thus it _appears_ to
show much higher trigger exclusivity. The mystery disappears
completely once you get to the bottom of the QO terminology and
realize that their "pair counts" are not what physicists and others
outside of QO understand as 'pair counts'.

Even though this type of experiment has been pursued by Quantum
Opticians since the 1950s, they have yet to show that the _actual_
pair counts break the classical inequality p2 >= p*p and show
p2 < p*p. Only the QO pair counts, the 190 post-filtered cases from
the example, "violate" the inequality, i.e. the QO violation of
classicality is a mere terminological convention of Quantum Optics (I
guess they use it since it makes their results appear more
funding-worthy). The phenomenon, though, is perfectly classical (in
the conventional sense of "classicality") -- both detectors trigger
with probabilities that a 19th-century physicist would have predicted.

Note that the Bell inequality "violations" claimed by Quantum Opticians
are of exactly this same "terminological convention" kind -- they
post-select the data based on the pair results, discard the no-trigger
cases (and also the triple and quadruple triggers), and then declare
that e.g. the sample of 190 pairs is a "fair" sample (despite being
post-selected based on the results!) for all 1000 sampling windows.
That is, they assume that the same statistics they see on the 190
pairs would hold on the missing 810 pairs, if only they had an "ideal
detector" -- which they claim will be achieved soon, although no one
has any design as yet, not even a conceptual one. And voila, the
Quantum Magic has been experimentally "confirmed". For a detailed
debunking of these kinds of QO claims you can check the preprints
quant-ph/0410193 and quant-ph/0401003 (which summarize the numerous
peer-reviewed references challenging these QO claims over the last
three decades):

http://arxiv.org/find/quant-ph/1/au:+Santos_E/0/1/0/all/0/1

You can also follow up the discussion in the mentioned PhysicsForums
thread, which also discusses the 1988 Bell inequality tests of Ou &
Mandel and the QED prediction (which is not the same thing as the QM
toy prediction) for these types of tests.

gans...@rediffmail.com
Sep 10, 2005, 11:07:30 AM

Can you give an idea of how one can explain this in QFT?
Is it because even the electron is now a field??

Eugene Stefanovich
Sep 10, 2005, 11:08:47 AM

The author considered the case when the photon source has only enough
energy to emit one photon. So, this is the very-low-intensity
source we discussed elsewhere. As I pointed out, this situation is
not described by the classical wave theory. Classical waves are
appropriate only for a huge number of photons.

This case should be discussed in terms of quantum mechanics.
The emitted photon has a spherical wave function.
The wave function unpredictably "collapses", and one electron
gets ejected either from the left or from the right metal plate.
There is absolutely no way to predict which one of the two plates
will be excited.

Eugene.

nightlight
Sep 10, 2005, 11:09:25 AM

> I said a single photon and a frequency f such that the work
> function = hf.

A single photon has infinite extent in time and space. Single-photon
states are thus unsuitable for coincidence measurements (which you
would need to establish whether there is any additional/nonclassical
anticorrelation or exclusivity in the counts/ionisations on the two
plates). An easy way to see the problem is to consider the Heisenberg
uncertainty relation for single-photon states of frequency f. The
time-energy uncertainty relation dE*dT >= hbar/2 implies that for a
fixed-frequency photon (dE=0) you will have dT=infinity, making any
timestamping of ionization events meaningless. The states with minimum
uncertainty (with the equality dE*dT = hbar/2) correspond to the
coherent states of the EM field, which are perfectly classical states
(for coherent states the photon number observable has a Poissonian
distribution of photocounts, which is what classical EM predicts as
well). More generally, the sharp photon number states (which form a
basis within the sharp-energy eigen-subspaces) cannot be used for
Quantum Optical coincidence measurement due to dT=infinity, and the
optimum coincidence measurements (the minimum uncertainty) are
obtained for coherent states, thus their count correlations will be
classical.
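
In symbols, with dE = h*df:

  dE*dT >= hbar/2   =>   df*dT >= 1/(4*pi)

so a perfectly sharp frequency (df -> 0) forces dT -> infinity.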

Note also that the coherent states can approach the limit dE=0 with
arbitrary precision (in which case dT --> infinity), which means that
for any measurement with non-coherent states of arbitrarily small
(but finite) dE, there is a corresponding coherent state with exactly
the same dE but a smaller dT uncertainty, i.e. it will give a sharper
coincidence. Hence, the best any actual (finite-dT) photon
coincidence measurement can give you is the classical (Poissonian
source) correlations.

gans...@rediffmail.com
Sep 10, 2005, 11:10:35 AM

>
> And I have no idea *how* one electron knows to pop out but not both,
> in spite of encountering the same influence locally. (Well, if you're
> considering QFT you can.) There is probably a directed piece of

> information carried by the photon, our lack of knowledge of which
> prompts us to assume a roughly spherical wavefunction that collapses on
> measurable events like the popping of an electron.
>
> This seems like a good pedagogical gedanken to illustrate the non-local

> influences of quantum mechanics. Especially if one aggravates the
> situation by boarding different inertial reference frames while the
> experiment is running.
>
> -Souvik

Let me aggravate it by boarding different inertial frames :-)
In the rest frame, let the system end up in the superposition of
three states |ny>, |yn> and |nn>, where
|n> = electron NOT popping out
|y> = electron popping out.
|yy> is ruled out due to conservation of energy as you had mentioned.
In the rest frame, the two events (the photon interacting with
the electrons) are simultaneous. But they need not be in another
reference frame. Let source S be at the origin, e1 at x, and e2 at -x,
and let the reference frame have velocity v parallel to the x-axis.
The time difference would then be delta_t = 2*gamma*v*x (taking c=1).

So, let's say e1 interacts with the photon before e2. Again there can
be no local interaction between the two electrons since they are
outside each other's light cone. Let t_int be the time for an electron
to interact with the photon and evolve to its excited state.
Now, by making x large I can increase delta_t to anything I like.
Let me make delta_t > t_int.

Now, even though there is no local interaction, how does entanglement
work here?? It seems that e1 should get entangled with e2 at delta_t
time in the future for the results to be consistent with the
results of the experiment in the "rest frame". Again, what am I
missing??
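
For reference, the boost arithmetic behind delta_t (c = 1 throughout):

  events: (t, +x) and (t, -x), simultaneous in the rest frame
  boost with velocity v along x:  t' = gamma*(t - v*x)
  t1' = gamma*(t - v*x),  t2' = gamma*(t + v*x)
  delta_t = |t1' - t2'| = 2*gamma*v*x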

ganesh

Eugene Stefanovich
Sep 11, 2005, 4:33:47 AM

<gans...@rediffmail.com> wrote in message
news:1126336030.9...@g49g2000cwa.googlegroups.com...

> Let me aggravate it by boarding different inertial frames :-)
> In the rest frame, let the system end up in the superposition of
> three states |ny>, |yn> and |nn>, where
> |n> = electron NOT popping out
> |y> = electron popping out.
> |yy> is ruled out due to conservation of energy as you had mentioned.
> In the rest frame, the two events (the photon interacting with
> the electrons) are simultaneous. But they need not be in another
> reference frame. Let source S be at the origin, e1 at x, and e2 at -x,
> and let the reference frame have velocity v parallel to the x-axis.
> The time difference would then be delta_t = 2*gamma*v*x (taking c=1).
>
> So, let's say e1 interacts with the photon before e2. Again there can
> be no local interaction between the two electrons since they are
> outside each other's light cone. Let t_int be the time for an electron
> to interact with the photon and evolve to its excited state.
> Now, by making x large I can increase delta_t to anything I like.
> Let me make delta_t > t_int.
>
> Now, even though there is no local interaction, how does entanglement
> work here?? It seems that e1 should get entangled with e2 at delta_t
> time in the future for the results to be consistent with the
> results of the experiment in the "rest frame". Again, what am I
> missing??

Your problem is that you accept the Copenhagen interpretation of QM
too literally. You assume that if the wavefunction of the photon is
spherical, then it means that each individual photon moves
simultaneously in all directions. At some point in time this "photonic
wave" collapses to a point: the photon interacts either with electron
e1 or with electron e2.

Description of an INDIVIDUAL photon by a wave function is the source
of numerous misleading statements about quantum theory. These
statements go all the way to the claim that the human brain
participates in the development of the universe.

If you want to avoid such nonsense, then you should admit that the
wave function description is applicable only to an ENSEMBLE of
photons. The spherical wave function does not say that each individual
photon is a "spherical cloud". It simply tells us that the probability
of the photon emission does not depend on the angle. We simply don't
know in which direction each photon will go. But we can be sure that
each photon will go in only one direction (either to the left or to
the right). Therefore, only one electron will be ejected (though we
cannot predict whether it will be e1 or e2). There is no
"communication" of any kind (superluminal or otherwise) between the
two electrons.

Eugene.

Carlos L
Sep 11, 2005, 4:34:22 AM

gans...@rediffmail.com wrote:
> [...]
> So, do both the electrons get ejected?? ...

Hi ganesh

Let me rephrase your questions a bit:

Let's suppose a light source S and two detectors (plate 1 and plate 2,
as you call them). Suppose that the optical setup is such that in
every experiment in which the source emits a "big" amount of energy,
one finds that plate 1 collects half the emitted energy and plate 2
the other half. (No radiation is lost in other directions or optical
elements.) This setup can be implemented with a beam-splitter.
Let's now suppose that the source emits a "photon's worth of energy"
in the form of radiation of frequency f. It therefore emits an energy
E=hf.

According to a purely corpuscular (photonic) interpretation, if the
source loses (in the form of radiation) an energy hf, it must have
emitted only one photon (with the whole energy hf) that will end up
either in plate 1 or in plate 2. It is a matter of probability whether
the photon takes path 1 or path 2 at the beam-splitter. With this
"particle interpretation" I don't see any "entanglement" or "wave
function collapse" problems. At most one could ask questions like "is
there a deterministic explanation (or mechanism) for the fact that the
photon takes a specific path at the beam-splitter and not the other?"
But the big problem with a purely corpuscular interpretation of light
is that it does not explain interference, diffraction, refraction,
etc.


According to the wave-function description of QM, the
probability-wave-function of the photon splits at the beam-splitter and
is non-zero in both paths. But since, according to the official
interpretation of the experiments, (e.g. Grangier et al [1]) only one
of the two plates finally collects the hf energy emitted by the source,
there is here a weird entanglement and/or wave function collapse
mystery.

I think that is why "mainstream physics" sticks to the ambiguous
concept of wave-particle duality, which is a euphemistic way of
concealing that it does not know what a photon is.

But there are also semi-classical (non-orthodox) interpretations of
the problem that you pose. Here is mine:
Light is a real wave. The original disturbance emitted by the source
splits into two weaker and equal disturbances that travel to their
corresponding plates (detectors). Half the emitted energy (i.e. hf/2)
can be "assigned" to each of these "waves". If the detectors were 100%
efficient, then each of these waves would have a (classical)
probability 1/2 of ejecting an electron at its detector. Therefore,
repeating your experiment many times (trials) with 100% efficient
detectors, it can be expected that on average (as in the toss of two
coins):
In 1/4 of the trials neither detector is triggered.
In 1/4 of the trials detector 1 ejects a photoelectron with energy hf
while detector 2 is not triggered.
In 1/4 of the trials detector 2 ejects a photoelectron with energy hf
while detector 1 is not triggered.
In 1/4 of the trials both detectors eject a photoelectron with energy
hf *each*.
Therefore energy is conserved only "on the average".
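
Spelled out, with per-detector ejection probability q = 1/2:

  P(neither) = (1-q)^2 = 1/4    P(only 1) = q*(1-q) = 1/4
  P(only 2)  = (1-q)*q = 1/4    P(both)   = q^2     = 1/4

  average energy ejected per trial = (0 + hf + hf + 2hf)/4 = hf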

Real detectors have very low efficiency, and in the majority of
trials neither detector is triggered, so the experimental outcomes
are in practice very different from my 1/4, 1/4, 1/4, 1/4 example. But
IMHO, if the efficiencies of the detectors were accurately known, the
experiments would still show a classical probabilistic behaviour
consistent with the real-wave description. Many orthodox interpreters
of these kinds of experiments disagree with this classical behaviour,
due mainly to the reasons that nightlight has very clearly explained
in one of his posts in this thread. I think that the controversy
remains because the efficiencies of the detectors at low intensities
are highly overestimated (not, of course, because of incompetence of
the manufacturers, but because their calibration is theory-dependent
and the theory is questionable).

But you should also read, for example, the orthodox explanations given
by Steve Carlip in the sci.physics thread "What evidence for photons?"

Best regards.
Carlos L

[1] Grangier, P.; Roger, G.; Aspect, A.
"Europhysics Letters", Vol. 1, p.173 (1986)

nightlight
Sep 13, 2005, 2:16:43 AM

> Here is mine:
> Light is a real wave. The original disturbance
> emitted by the source splits in two weaker and
> equal disturbances that travel to their
> corresponding plate (detector).
> ...

> 1/4 of the trials neither of the detectors
> is triggered,
> ...

> Therefore energy is conserved only "on the average".
>

There is no such implication from the 'real wave' interpretation. The
combined energy of the EM field plus the energy of the Dirac matter
fields (of electrons, protons,...) is strictly conserved by Noether's
theorem, since the equations of motion are invariant with respect to
time translations. But the detection process (which is an amplified
photoionization, i.e. the scattering of the EM field on the bound
electrons) is subject to the unavoidable vacuum fluctuations (which
amount to 1/2 hv energy per mode, as you can see from the QED
Hamiltonian for the quantized EM field). In the semiclassical models of
photodetection (which yield empirical predictions identical to those
of the QED models), the QED vacuum fluctuations are considered a
property of the real degrees of freedom capable of absorbing and
emitting energy, as required by energy conservation. In these models,
the vacuum fluctuations have been accounted for in two equivalent
(within Quantum Optics) ways:

a) A stochastic real EM field (zero point field, ZPF) is included as
the initial and boundary conditions for the classical Maxwell
equations. This approach is called Stochastic Electrodynamics (SED) and
it is capable of replicating all Quantum Optics experiments. This
theory is equivalent to the Old QED (of Dirac-Heisenberg-Jordan from
late 1920s), which is the level of QED used in Quantum Optics (the QO
also includes the Old QM heuristic of Einstein's lightquantum from the
Bohr atom era). The 1/2 hv ZPF was first proposed by Planck in 1911 (in
his nearly forgotten second quantum theory of black-body radiation;
it has been rediscovered or revived numerous times since). The SED has
a similar problem to QED - the infinite vacuum EM energy (which is
subtracted in ad hoc ways via additional postulates, e.g. via the
normal operator ordering rule in QED/QO or via the detection model in
SED). The SED, like the Old QED, cannot reproduce the radiative
corrections of the New QED (of Feynman-Schwinger-Dyson). You can read
more on the recent
advances in SED applied to Quantum Optics phenomena at arXiv:


http://arxiv.org/find/quant-ph/1/OR+au:+Marshall_T+au:+Santos_E/0/1/0/all/0/1?per_page=50


b) The ZPF of SED can be deduced (without unnatural additional
postulates on the initial & boundary conditions) as the average EM
field of the self-interacting Dirac matter field in the system of
coupled Maxwell-Dirac fields (these form a set of non-linear PDEs,
provided no external field or external currents approximations are
used). This approach goes back to Schrodinger's (and Lorentz's)
original interpretation of QM, which was meant to be an alternative to
the Old QED (it was used in that role by Schrodinger and Fermi). The
computational difficulties of solving the nonlinear PDEs gave practical
advantage to the linearized theory, QED, until Schrodinger's approach
was revived in the 1970s as Neoclassical ED by E.T. Jaynes, and was
subsequently worked out in great detail by Asim Barut from 1983-1993
as "Self-field ED" (SFED), in which the current QED arises as one
possible linearization approximation of SFED. Within SFED the ZPF is
an approximate average EM field generated by the Dirac matter field
(which interacts via the EM field with itself). Unlike the SED and
QED, the
SFED doesn't have infinite EM (or Dirac) field energy, thus it needs no
additional postulates to make the infinity go away. It reproduces not
only the Old QED, but all the radiative corrections, the crown jewels
of the New QED, as far as Barut and his students had computed it (to
the alpha^5 order, which was sufficient to replicate all high precision
QED measurements known at the time). You can find a brief intro to SFED
along with key references in an earlier sci.physics.research thread:


http://groups.google.com/group/sci.physics.research/browse_frm/thread/1e3ae3b3697948db?scoring=d&

In either 'real wave' (i.e. semiclassical) approach, (a) or (b), that
has been worked out so far, the total energy is perfectly conserved
throughout. It merely redistributes among different degrees of freedom.
Within the SED, the energy of the incident "signal" EM field is
transferred to the ZPF, allowing for the no-detection case and with
total EM energy conserved. Within the SFED, for no-detection case the
"signal" energy is transferred to Dirac matter field and its EM
self-field, again conserving the total (EM plus Dirac matter field)
energy.

The suggestion you made, that quantum phenomena involving EM
fields, such as the photo-effect, imply energy 'conservation on
average only', was originally made within the Old QM (which was an
ill-fitting combination of Bohr's mechanics with the classical EM
fields) by Bohr and Kramers in 1924. That was a very short-lived idea
(it lasted about one year), quickly obsoleted by the discovery of the
Compton effect and the advent of Heisenberg-Schrodinger QM, which had
no problem explaining the photo-effect and the Compton effect without
any violation of energy conservation. Unfortunately, an odd mishmash
of the old lightquantum
and Bohr's QM imagery is still used in pedagogical and popular
expositions as the physical picture of QM (since the formal Hilbert
space QM with its Measurement "Theory" metaphysical patches, such as
'observer consciousness' and various equivalents, offers no
self-sufficient physical model). A common sense built on that kind of
imagery then naturally leads one to rediscover the long forgotten
Bohr-Kramers proposal of approximate energy conservation.

Paul Danaher
Sep 13, 2005, 2:19:12 AM

Eugene Stefanovich wrote:
>
> Your problem is that you accept the Copenhagen interpretation of QM
> too literally. You assume that if the wavefunction of the photon is
> spherical,
> then it means that each individual photon moves simultaneously in all
> directions.

What is the *precise* difference between considering a single photon (low
intensity) and a collection of photons which can be treated statistically?
At what point (number of photons) do the rules change?

Consider a single photon which is detected. The source is known. The
observation is known. There are no probabilities left in the system
following the detection.

What happens if we take a coherent very low intensity source of photons?
A lased photon doesn't "move simultaneously in all directions", so
presumably doesn't have a spherical wavefunction.

Igor Khavkine
Sep 13, 2005, 2:19:22 AM

Ah, yes. I apologize for my hasty interpretation of the OP. Here's what
a standard QED treatment of the setup would say.

First, you've got to realize that photons characterize the energy
eigenstates of the electromagnetic field when its interactions with
matter can be neglected. Keeping this in mind, photon states are
completely delocalized, and it is a bit hard to make statements of the
form "a photon gets emitted at time t=0 and travels toward one of the
plates" without considerably raising the level of sophistication of
the explanation.

Let me, nonetheless, make an attempt. In the absence of detector plates,
any stationary state can be described by counting the number of photons
in each field mode, where the modes are labeled by wave vectors. Now
if we consider a state that describes a photon emitted at a certain
time and propagating either to the left or to the right, this state
is definitely not stationary. However, we can assume that in its
expansion in terms of photon states, its greatest overlap is with a
photon state with wave vector k moving to the left. Let's call this
state |k,L>, and the corresponding right-moving state |k,R>.

So what does it mean that the source S emits only one photon at a time,
which travels from the source in a spherically symmetric fashion? For
the sake of simplicity, let me drop spherical symmetry and just keep
the left-right symmetry that remains after restricting to a single
dimension. Since photons describe plane-wave states, it's hard to find
a plane wave that travels in both directions at the same time. However,
fear not: quantum mechanics allows superpositions of states, so the
symmetry is restored simply by describing the EM field state after
emission by the linear combination (|k,L>+|k,R>)/sqrt(2).

When we introduce interaction between the EM field and the detector
plates, it can be modeled by simple terms in the Hamiltonian, such as

H_I = |on,L><off,k,L| + |off,k,L><on,L| + (L <-> R),

which can be interpreted as absorption of a photon by the detector,
which goes from the 'off' to the 'on' state. Using this interaction
Hamiltonian we can calculate the probability of the state |off,k>
(left or right) evolving into the state |on> (left or right), or
remaining in the state |off,k>, where the photon is not absorbed. In
other words, after a given time has elapsed, the system will be in a
superposition of states, one of which has the photon absorbed and the
detector on, while the other has the photon still propagating and the
detector off. Incidentally, this model assumes that left-moving
photons can only trigger the left detector, while right-moving photons
can only trigger the right detector.

We can apply this calculation to the initial state
(|off,k,L>+|off,k,R>)/sqrt(2). The result is that, after some time, the
state of the system will be in the superposition

A_1 (left-moving photon, detectors off)
+ A_2 (right-moving photon, detectors off)
+ A_3 (left detector on)
+ A_4 (right detector on)

Each of the amplitude coefficients A_i is calculable from this
simple model. From here you can take your favorite interpretation of
quantum mechanics and deduce (all interpretations should agree on
this) that the four different states in the superposition describe
different alternative outcomes of the experiment (since these states
are orthogonal), and that the moduli squared |A_i|^2 of the amplitude
coefficients give you the probability of each alternative outcome. At
this point the standard story ends; it does not attempt to provide a
"mechanism" for the selection of different alternatives during
repetitions of the experiment.
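
For anyone who wants to play with this, here is a minimal numerical
sketch of the toy model above (the coupling g, the elapsed time t, and
the basis ordering are illustrative choices, not values from the model
itself):

  import numpy as np
  from scipy.linalg import expm

  # Basis ordering: 0 = |off,k,L>, 1 = |off,k,R>, 2 = |on,L>, 3 = |on,R>
  g = 1.0                              # assumed detector-field coupling
  H = np.zeros((4, 4), dtype=complex)
  H[2, 0] = H[0, 2] = g                # |on,L><off,k,L| + h.c.
  H[3, 1] = H[1, 3] = g                # the (L <-> R) terms

  # Initial state (|off,k,L> + |off,k,R>)/sqrt(2)
  psi0 = np.array([1, 1, 0, 0], dtype=complex) / np.sqrt(2)

  t = 0.6                              # arbitrary elapsed time
  psi = expm(-1j * H * t) @ psi0       # unitary evolution exp(-iHt)|psi0>

  print(np.abs(psi) ** 2)              # |A_1|^2 ... |A_4|^2; L and R stay symmetric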

Igor

Eugene Stefanovich
Sep 13, 2005, 4:56:17 PM

Paul Danaher wrote:
> Eugene Stefanovich wrote:
>
>>Your problem is that you accept the Copenhagen interpretation of QM
>>too literally. You assume that if the wavefunction of the photon is
>>spherical,
>>then it means that each individual photon moves simultaneously in all
>>directions.
>
>
> What is the *precise* difference between considering a single photon (low
> intensity) and a collection of photons which can be treated statistically?
> At what point (number of photons) do the rules change?

In my view, the rules do not change at all. There was a discussion
thread a couple of weeks back in which Igor Khavkine suggested that
the physics of radiation changes from being quantum in the low-intensity
limit to being "classical field" in the high-intensity limit.
I do not agree with that. The interaction between photons is virtually
zero. So, each photon in the radiation field behaves independently of
the others. Each photon can be described by its wave function. This wave
function takes into account all wave effects, like diffraction and
interference. This may sound heretical, but in my view Maxwell's
wave theory of radiation is an early attempt to describe a purely
quantum effect.


> Consider a single photon which is detected. The source is known. The
> observation is known. There are no probabilities left in the system
> following the detection.

I am not sure what your point is. Let me rephrase what you said:
"Consider a single die thrown on the table. The source is known.
The observation is known. There are no probabilities left in the
system following the detection." What is the difference between
detecting a single photon and throwing a die?


> What happens if we take a coherent very low intensity source of photons?
> A lased photon doesn't "move simultaneously in all directions", so
> presumably doesn't have a spherical wavefunction.

Yes, in the laser beam all photons move in one direction. Their
wavefunction is not spherical.

Eugene.


p.ki...@imperial.ac.uk
Sep 15, 2005, 10:03:33 AM

nightlight <night...@omegapoint.com> wrote:
> a) A stochastic real EM field (zero point field, ZPF) is included as
> the initial and boundary conditions for the classical Maxwell
> equations. This approach is called Stochastic Electrodynamics (SED) and
> it is capable of replicating all Quantum Optics experiments.

There are loopholes in quantum optics experiments which
can let local realists off the hook. Usually these result from
QO people adjusting for their known detector efficiencies
in order to show (e.g.) a violation of Bell's inequalities
that would otherwise be hidden by our lack of perfect
detectors.

But just because you can torture an SED theory into producing
the same prediction that a standard QO approach does quite
naturally does not make SED a better theory, even though
some find it philosophically (or intuitively) preferable
to work with.

I'm not an expert on the detection loopholes used (and ably
defended) by QO researchers, all of whom (apart from me, if
I can still call myself that) have better things to do than
post here. Consequently, I won't be drawn into deconstructing
each one of Mr Nightlight's extensive range of recent posts
in a variety of threads.

However, I can point out at least one theoretical case where
SED utterly fails to give the same prediction as QO/QM:

Phys. Rev. A 53, 2000 (1996)
http://prola.aps.org/abstract/PRA/v53/i4/p2000_1

Those without PR subscriptions can retrieve a PDF here

http://www.qols.ph.ic.ac.uk/~kinsle/e-docs/kinsler-1996-arXiv.pdf

And I'll also say this: in SED, there is a real energy contained
in the statistical vacuum fluctuations. In QO/QM, there is no
energy in the (1/2)hbar omega ... it's just a shifted origin
compared to classical theories. The only way I could extract
the QM (1/2)hbar omega is by rewriting the physics of the universe
and making it classical.

--
Dr. Paul Kinsler
Blackett Laboratory (QOLS), Imperial College London

nightlight
Sep 19, 2005, 5:54:59 PM

>
> There are loopholes in quantum optics experiments
> which can let local realists off the hook. Usually
> these result from QO people adjusting for their known
> detector efficiencies in order to show (e.g.) a violation
> of Bell's inequalities that would otherwise be hidden
> by our lack of perfect detectors.
>

An inventor of a perpetuum mobile could have said the same when arguing
that the friction of his device lets "energy conservationists" off the
hook. Your "hidden violation" is a new advance in the art of QO
euphemisms for what physicists would, in plain language, call
non-violations. In any case, there is no disagreement on the bare facts
of the experiments so far -- there are simply no experimental
violations of the Bell inequality to date. The only room for
disagreement is on whether there will be such violations in the future.

Before you invoke a "perfect detector" to support your belief in
_future_ violations, you need to establish what a "perfect detector"
would detect. Free EM field quanta (i.e. Dirac photons)? On what
principles would such detection work, and does it violate the QED
cross-sections for electron-photon interactions? Do you know of any
design, even merely at the conceptual level (but still respecting QED
and the known physical constants)? And that is just the beginning of
your problem. Your "perfect detector" would not only need to count
D-photons (which depend on the Fock basis & on the reference frame)
with sufficient efficiency, but it would need to know what the other
three detectors of the Bell EPR setup have shown in order to decide
whether it has detected the D-photon. Check, for example, the Ou &
Mandel 1988 experiment [1], which includes a derivation of the QO
"prediction" of Bell inequality violations. In eq. (4) of [1], they
restrict the correlations to a particular 2nd order Glauber function
G2(x1,x2) -- that means your "perfect detector" (which would show
violations before any data filtering or "fair sampling"
extrapolations) would need to know what the other three detectors have
decided, so that, for example, it can report a non-trigger when the
other detector on the same side has reported a trigger.

Namely, the G2(x1,x2) of their eq. (4) doesn't represent all the
scattering events (and their amplifications observed as photocurrents)
that are possible with their PDC source, i.e. the eq. (4) is _not a
prediction of what will happen_ (even just statistically) in the
experiment, but merely states that some post-selected subset of the
events (selected based on the knowledge of all 4 detector results!)
will have certain properties, such as the particular dependency on the
experimental parameters (e.g. the angle theta) that their eq. (4) has.
That kind of selection and observed dependency on the parameter theta
has nothing to do with the Bell inequality. The B.I. is a purely
enumerative constraint on the _entire_ data set (cf. [2] for one
example of an enumerative formulation of Bell inequalities, which puts
it into its most striking and pure form), of the same kind as the
pigeonhole principle, by which if you have, say, 10 pigeons and 9
holes, there will necessarily be one hole with two pigeons.
Post-selecting a proper subset of these 9 holes, say 5 holes, and
pointing out that the 5 holes in the subset each hold a single pigeon,
doesn't in any way violate the previous conclusion of the pigeonhole
principle that in the 10+9 setup there will be a hole with two
pigeons. The existence of a subset of 5 holes, each with a single
pigeon, is irrelevant as an experimental demonstration of a
"violation" of the pigeonhole principle.

To get the full picture of what magic powers this "perfect detector"
would have to have, you need to read Glauber's derivation of his
n-detector correlation functions Gn(x1,x2,...) in [3], in particular
his leap from eq. (5.2b) to (5.5), where he starts by predicting the
dynamics of n detectors, then gives up the prediction and simply
defines his Gn() to retain only a small subset (approx. a 1/e^n
fraction) of the dynamical terms "we are interested in" ([3], p. 85).
A "perfect detector", the G-detector, which has to show raw counts
that correlate as Gn(), thus showing the QO non-classical effects,
would have to subtract not only the dark rates for itself (a single
detector), but it would have to know how to subtract 'accidental
coincidences' and all instances of m-triggers, for any m <> n. In
other words, the G-detector must be networked with all the other
detectors, and it needs input from the database of other experiments
on the same setup (e.g. so it can subtract 'accidentals', which are
measured separately, with the signal turned off). That's the kind of
"perfect detector" you need to design. The non-local
subtraction/dropping of the terms in Glauber's eq. (5.5) at the
theoretical level is accomplished on the operational level by the
standard QO subtraction & filtering of the experimental data.

The classical theories, such as SED, don't have these (often
implicit) QO filtering conventions built into their predictions.
Therefore, the SED predictions refer operationally to a different kind
of counts than the G-counts and G-correlations of QO. All the claimed
QO "violations" of the classical predictions are based on the gimmick
of extracting (non-locally!) the G-counts from the raw counts and
comparing them to a classical model of the setup which doesn't include
a model of the QO subtractions (the Glauber conventions, which are
normally implicit in the QO publications). In SED, the only
subtractions applied by default to the detectors are local
subtractions of the ZPF contributions (which, unlike the Glauber
subtractions, require no knowledge of the remote results or of the
results of supplementary experiments). Therefore the "predictions" of
SED will appear different from the QO "predictions", simply because
they refer to different kinds of "counts". Until you normalize the
predictions of the two theories to refer to the same kind of "counts"
(a distinction which your paper doesn't even acknowledge, in contrast
to [4], [5]), you can't meaningfully claim to have shown a QO
prediction which excludes SED. Once you include Bell inequalities &
sub-poissonian counts, that kind of (straw man) "violation claim" is a
dime a dozen.

You should also note that in your paper & the comments given here, you
are melding together the original SED (e.g. your ref [1] cites T.
Marshall's 1963 SED paper) with its variant Stochastic Optics (SO),
used to describe the QO phenomena (cf. [4] and references there). The
original SED (which goes back to Planck & Nernst) is a much more
ambitious project, trying to show that the Schrodinger & Dirac matter
field equations are deducible from a stochastic process (which
involves only classical charged particles interacting with a classical
EM field, with ZPF initial & boundary conditions). This SED project
can be considered a failure. The much less ambitious project of SO
takes the matter field equations for granted and simply uses the ZPF
as initial & boundary conditions for the EM fields. In order to
replicate the QO predictions (which it does for all PDC based
sources), it also models the QO subtraction procedures, as illustrated
in [5]. While it can't replicate the QED radiative corrections, the SO
does replicate all the phenomena of Quantum Optics to which it has
been applied (e.g. any-order coincidence experiments using PDC,
cascade, coherent and chaotic sources). Even in the well regarded QO
textbook of Yariv [6], the last chapter is dedicated to showing how
various 'non-classical' QO results obtained earlier via QED methods
can be replicated by inclusion of the ZPF in the classical EM theory.
In the introduction to this chapter (p. 703), Yariv notes: "Somewhat
to my surprise, I found that by asking the student to accept just
_one_ result from quantum mechanics [vacuum fluctuations as classical
ZPF], it is possible to treat all the above-mentioned phenomena
classically and obtain results that agree with those of quantum
optics."


1. Z.Y. Ou, L. Mandel
"Violation of Bell's Inequality and Classical Probability in a
Two-Photon Correlation Experiment" Phys. Rev. Lett. 61(1) pp 50-53
(1988).
http://prola.aps.org/abstract/PRL/v61/i1/p50_1

http://puhep1.princeton.edu/~mcdonald/examples/QM/ou_prl_61_50_88.pdf

2. Louis Sica "Bell's inequalities I: An explanation for their
experimental violation"
quant-ph/0101087 http://cul.arxiv.org/abs/quant-ph/0101087
http://xxx.arxiv.cornell.edu/find/quant-ph/1/au:+sica/0/1/0/all/0/1

3. R. J. Glauber "Optical coherence and photon statistics"
in Quantum Optics and Electronics, ed. C. de Witt-Morett, A.
Blandin, and C. Cohen-Tannoudji
(Gordon and Breach, New York, 1965), pp. 63-185.
For discussion & objections see:
http://www.physicsforums.com/showpost.php?p=529314&postcount=16
http://www.physicsforums.com/showpost.php?p=535516&postcount=61
http://www.physicsforums.com/showpost.php?p=538215&postcount=73

4. T. Marshall, E. Santos "The myth of the photon"
http://cul.arxiv.org/abs/quant-ph/9711046

5. A. Casado, T. Marshall, R. Risco-Delgado, E. Santos
"A Local Hidden Variables Model for Experiments involving Photon
Pairs Produced in Parametric Down Conversion"
http://cul.arxiv.org/abs/quant-ph/0202097
See also:
E. Santos "How photon detectors remove vacuum fluctuations"
http://cul.arxiv.org/abs/quant-ph/0207073

6. A. Yariv "Optical Electronics in Modern Communications"
5th Ed, 1997 Oxford Univ. Press.


p.ki...@imperial.ac.uk
Sep 23, 2005, 3:55:09 PM

nightlight <night...@omegapoint.com> wrote:
> >
> > There are loopholes in quantum optics experiments
> > which can let local realists off the hook. Usually
> > these result from QO people adjusting for their known
> > detector efficiencies in order to show (e.g.) a violation
> > of Bell's inequalities that would otherwise be hidden
> > by our lack of perfect detectors.
> >

> An inventor of a perpetuum mobile could have said the same
> when arguing that the friction of his device lets "energy
> conservationists" off the hook.

The analogy would be that of an inventor adjusting for the
measurable energy loss in his device using well understood
physics. This would, we expect, get him (or her) back to a value
close to energy conservation.

QO people adjust for well understood physics (lossy detectors)
in order to see how well the rest of their experiment matches
the predictions of QM. Since the agreement from following
this procedure is good, this is taken as evidence that QM
is a reliable theory.

So maybe some SED theory is better. So, to convince people we need
a way to test cases where QM and SED differ, and show that SED
gives better predictions. Most SED proponents, in my experience,
prefer instead to go on about loopholes in the Bell test
experiments, in order to show that SED has not been completely
ruled out.

But "not ruled out" is not a subsitute for evidence that SED gives
better predictions.


> [...]


> Even in the well regarded QO textbook of Yariv [6], the last
> chapter is dedicated to showing how various 'non-classical' QO results
> obtained earlier via QED methods, can be replicated by inclusion of ZPF
> in the classical EM theory.

Adding ZPF to the classical EM theory might give a model capable
of replicating predictions made by QM, but it doesn't guarantee a
better or more convenient way of making predictions. Further,
the Yariv chapter is all about how the SED-ish theory can play
catch-up to a QM model, which is hardly the most convincing case
for SED.

Make an SED prediction that differs from QM, and propose a loophole-
free experiment to test it.

nightlight

unread,
Sep 24, 2005, 12:10:24 PM9/24/05
to
> QO people adjust for well understood physics (lossy
> detectors) in order to see how well the rest of their
> experiment matches the predictions of QM. Since the
> agreement from following this procedure is good,
> this is taken as evidence that QM is a reliable theory.

Thanks for offering one more illustration of a typical 'QO sleight of
hand' -- pretend that the fundamental QO subtractions (which are built
into the very definition of Glauber's filtered correlation functions
Gn()) are due to some kind of minor and temporary technological
imperfection, to be overcome soon. As if the actual Gn()'s are meant to
predict the actual counts & their correlations (the kinds that Bell
inequalities talk about). They are not. The QO "correlations" (the
Gn()'s) are by definition "filtered signal" functions, with
subtractions built into their definition (which in turn makes them
irrelevant, however useful they may otherwise be in optical
engineering, for the B.I. violations). They don't predict (nor are they
meant or derived as a prediction of) what the _full set of detection
and non-detection events_ (the stuff that matters for the B.I.) is
supposed to be. That limitation of the coincidence correlations Gn() to
subtracted counts is not some transient technological artifact but
their very definition. I just posted another longer comment on this
in sci.physics.research:

1.
http://groups.google.com/group/sci.physics.research/msg/7ee770990a31bd3f

which explains it in more detail and points out exactly how this 'QO
sleight of hand' works, with a concrete illustration from the
well-known 1988 Ou & Mandel "observation" of Bell inequality violations
(the mother of all modern PDC based B.I. "violations" experiments).
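
To make the role of the discarded events concrete, here is a minimal
Monte Carlo sketch (a toy model for illustration only, in Python; it is
not taken from the cited post or papers, and the detection-window rule
and its parameter are arbitrary). Each analyzer registers an event only
when the shared hidden polarization lambda falls close enough to its
axis; conditioning on joint detection then gives |S| > 2 for the
surviving subensemble, even though the model is purely local and the
full event set (non-detections counted) satisfies |S| <= 2:

import numpy as np

rng = np.random.default_rng(0)
N = 200_000
eps = np.deg2rad(20.0)             # detection half-window (illustrative)
gamma = np.cos(2 * eps)            # threshold on |cos 2(theta - lambda)|

def analyzer(theta, lam, source_sign):
    # local outcome +/-1 and a detection flag for one polarizer
    c = np.cos(2 * (theta - lam))
    detected = np.abs(c) >= gamma  # event registered only inside the window
    return source_sign * np.where(c >= 0, 1, -1), detected

def corr(a, b, lam):
    A, dA = analyzer(a, lam, +1)
    B, dB = analyzer(b, lam, -1)   # anticorrelated pair source
    keep = dA & dB                 # post-selection on joint detection
    return np.mean(A[keep] * B[keep])

lam = rng.uniform(0.0, np.pi, N)   # shared local hidden variable
a, a2, b, b2 = 0.0, np.pi/4, np.pi/8, 3*np.pi/8   # standard CHSH angles
S = corr(a, b, lam) - corr(a, b2, lam) + corr(a2, b, lam) + corr(a2, b2, lam)
print(f"post-selected CHSH |S| = {abs(S):.3f}")   # ~4.0 here, far above 2

The filtering step, not any non-locality, does all the work: the
unconditioned correlations of the same model respect the inequality.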

> So maybe some SED theory is better. So, to convince people
> we need a way to test cases where QM and SED differ, and
> show that SED gives better predictions. Most SED proponents,
> in my experience, prefer instead to go on about loopholes in

> the Bells test experiments, in order to show that SED has not
> been completely ruled out.

The SED, in its practical Stochastic Optics (SO) form, is a limited-scope
effective theory (it is an approximation to A. Barut's Self-Field
ED, useful mostly for optical-photon phenomena and unable to
model, among others, the QED radiative corrections), which in various
cases is more or less convenient than Glauber's QO formalism (which is
a subset of the 1920s Old QED of Dirac, Heisenberg & Jordan, minus the
2nd quantization of the Dirac fields, plus the Einstein's lightquantum
heuristics and imagery). That issue (which is better, the SED/SO or the
QO) wasn't the point at all, though.

Neither Glauber's QO nor the Marshall-Santos SO are fundamental
theories. But the SO at least doesn't pretend that it is and it doesn't
put out the pretentious (and fake) QM magic shows, as a particular
group of Quantum Opticians has been doing for the last half a century.
The SO and QO are applied physics/engineering computational techniques
covering roughly equal domains of phenomena. There are no fundamental
effects there, be it in their computational algorithms or the
observations within their domain, such as genuine observations or
predictions of Bell inequality violations or photon anticorrelations,
that would constrain the more fundamental theories (QFTs, Self-Field
ED, gravity or string theories, etc.). As explained in more detail in
[1], these "magic" effects don't exist at any level -- there are
neither observations nor QO/QED predictions of such effects.

Note also that the typical QM "predictions" of Bell inequality
violations that one finds in pedagogical & popular "proofs" are merely
suggestions/heuristics on how one might deduce genuine predictions
within the theories closer and more appropriate to a given domain (such
as QED/QO or SO for optical photons). Namely, the general QM postulates
about the existence and formal properties of observables cannot, due to
their generality, specify the concrete operational rules which map the
readouts of particular instruments to the values of the corresponding
formal observables.

A particularly critical missing piece of these operational mappings
(which the general QM postulates could at best only assert to exist)
for the B.I. violations is the question on what constitutes, in terms
of the readouts on instruments, a realization of valid measurement of a
value of some observable S (such as spin or polarization, or their
products for composite systems) in a given setup, i.e. what are the
"valid tries" (in terms of readouts) yielding eigenvalues of S and what
are the rejections (failed measurements). The general QM postulates
merely say that the operational rules _exist_ which perform such
mapping between the eigenvalues of S and the readouts of the
instruments, and make the decisions on what is a valid and what is a
failed measurement, but they cannot say how exactly these mappings
work, and most importantly for the B.I. violations, what kind and what
proportion of the readouts are rejected as invalid realizations of the
"measurement" of the formal observable S.

To predict the B.I. violations it is critical (due to the enumerative
nature of the inequalities) to predict that the rejections of the
readouts are below approximately 17% of all actual Bell EPR pairs
considered for violation. And to make that kind of prediction, which is
the very essence of the B.I. constraints, one needs much more specific
theory for the particular experimental domain. In the case of optical
photon experiments, that more specific theory, providing the concrete
operational rules for mapping between the formal observables and the
instrument readouts, is Glauber's photo-detection theory (model),
specifically his model for the n-detector correlations for optical
photons (cf. pp 84-88 in ref [5] of post [1]). As explained in [1]
regarding the Ou & Mandel QO "prediction" based on Glauber's
n-detector theory, the QO doesn't predict any B.I. violation for
optical photons (neither does the SO, but the SO doesn't pretend to be
doing it). The corresponding experiments agree with the QO (and SO)
predictions and don't show any violations either.
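
As a back-of-the-envelope check of the ~17% figure (assuming the
standard CHSH detection/retention bound for maximally entangled pairs,
eta > 2(sqrt(2)-1), due to Garg and Mermin; Python):

from math import sqrt

eta_min = 2 * (sqrt(2) - 1)   # minimum retained fraction for a CHSH violation
print(f"retain at least {eta_min:.1%}, reject at most {1 - eta_min:.1%}")
# -> retain at least 82.8%, reject at most 17.2%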

One has to take one's hat off to Clauser, Aspect, Grangier, Mandel,
Zeilinger, Chiao, ... and other masters of the QO magic shows, for
their chutzpah and a kind of twisted genius to misrepresent to the
physicists this kind of double failure to violate Bell inequalities,
the complete theoretical and experimental debacle, as the double
success, based merely on the fact that the theory and the experiment
indeed agree (that there is no violations, cf. Ou & Mandel paper
discussion in [1]). The deepest bow, though, goes to the founder of
modern QO, the great (in a twisted genius kind of way) Roy Glauber, for
the masterful design and construction of the verbal and formal props,
the gear behind the curtains of the QO magic shows (in his two 1963
Phys Rev papers & his 1964 Les Houches Lectures), which had
single-handedly turned around the greatest embarrassment and defeat for
the "magic show" branch of Quantum Optics, the Hanbury Brown & Twiss
"controversy" of 1956, into the greatest stretch of PR victories and
glory (however transient and hollow it all will eventually appear in
the eyes of the history) for the QO magicians, going on to this day.
The show probably has a few more years to go, provided some cocky
youngsters (such as the guys of ref [4] in [1], with their ridiculous
377 standard deviations) don't get overly greedy and careless and yank
the curtain down before its time.

> Adding ZPF to the classical EM theory might give a model capable
> of replicating predictions made by QM, but it doesn't guarantee
> a better or more convenient way of making predictions. Further,
> the Yariv chapter is all about how the SED-ish theory can play
> catch-up to a QM model, which is hardly the most convincing
> case for SED.

The existence of the SED/SO models for the alleged experimental B.I.
violations and for the photon anticorrelations is simply another, more
constructive way to show that the QO non-classicality claims are
bogus. In that role, as a direct counter-example, SED/SO doesn't need
to be "better" or more practical than QO (just as Bohm's QM, although
not any better than regular QM, was a counter-example for the von
Neumann's HV impossibility "proof", which only later Bell dismantled
directly). It just needs to exist as a "natural" (non-contrived; it's
the theory Planck & Nernst proposed way back in 1911-1916) local theory
capable of, among others, modelling the allegedly "non-classical" QO
experimental data classically. Careful reading of Glauber's
founding papers of QO and of their operational interpretations by the
QO experimenters (such as Ou & Mandel's 1988 paper, as sketched in [1])
leads more directly to exactly the same conclusion -- the complete
failure of QO, on the theoretical and the experimental levels, to
demonstrate existence of any such non-classical effects in the optical
photon domain.

> Make an SED prediction that differs from QM, and propose
> a loophole-free experiment to test it.

The SED/SO prediction (see for example the papers of T. Marshall & E.
Santos on quant-ph) is that there is no B.I. violation with optical
photons. That is precisely what the experiments have shown so far.
There are no loopholes in those experiments. They show exactly what the
applicable theories, be it SED/SO or QED/QO, predict for the setup (cf.
[1] on the distinction between the facts from the 'QO sleight of hand'
kind of depiction of the experimental & theoretical facts). As
explained above the general QM doesn't predict anything specific enough
to say whether B.I. will or will not be violated in these experiments.
The general QM postulates simply lack the sufficient quantitative
specificity of the operational rules, the level of specificity required
by the enumerative nature of Bell inequalities (such as the 83% or more
retained pairs), to say anything definitive about the B.I. violations.

The best that the general QM postulates can do (which is what Bell did
in his papers) is to point in the general direction of some types of
phenomena, such as the Bell EPR pairs, which are candidates for
sufficiently specific quantitative predictions. But for any specific
realization of such Bell EPR pairs you need domain specific theory of
the phenomena, especially of the instruments, which has sufficient
quantitative precision to address the critical questions essential for
the B.I. constraints. With the optical photons and the detectors with
the Poissonian photo-electron counts (the best type of detection you
can get for the optical photons), you cannot ever violate Bell
inequalities, not even using the photo-detectors with 100% "Quantum
Efficiency" (note the Q.E. number doesn't count the 'dark counts' which
are relevant for the B.I. rejection limits of max 17% pairs). The
largest fraction of Bell EPR pairs you can retain with the Poissonian
p-e detectors is 1-1/e = 63.21%, which is just below the limit of the
natural SED/SO models for the Bell EPR setup, which work provided at
most 2/Pi = 63.66% of the pairs are retained. See the related
PhysicsForum post for more
details & references:

2. http://www.physicsforums.com/showpost.php?p=540771&postcount=99
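
The two percentages quoted above can be checked in a couple of lines
(reading 1 - 1/e as P(at least one photoelectron) for a Poissonian
count with mean 1, i.e. unit "Quantum Efficiency"; Python):

from math import e, pi

print(f"1 - 1/e = {1 - 1/e:.4f}")   # 0.6321, Poissonian retention ceiling
print(f"2/pi    = {2 / pi:.4f}")    # 0.6366, the SED/SO model bound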

{ Submitted to sci.phys.research on Sept 23, 2005, 22:08 EST }

Message has been deleted

nightlight

unread,
Oct 7, 2005, 12:52:26 AM10/7/05
to
> The deepest bow, though, goes to the founder of
> modern QO, the great (in a twisted genius kind of way) Roy Glauber,
> for the masterful design and construction of the verbal and formal props,
> the gear behind the curtains of the QO magic shows (in his two 1963
> Phys Rev papers & his 1964 Les Houches Lectures), which had
> single-handedly turned around the greatest embarrassment and defeat
> for the "magic show" branch of Quantum Optics, the Hanbury Brown & Twiss
> "controversy" of 1956, into the greatest stretch of PR victories and
> glory (however transient and hollow it all will eventually appear in
> the eyes of the history) for the QO magicians, going on to this day.

It seems I had picked interesting timing for the critique of Roy
Glauber. A few days later he gets the Nobel Prize.

http://nobelprize.org/physics/laureates/2005/index.html

Message has been deleted

van...@ill.fr

unread,
Oct 8, 2005, 8:38:36 AM10/8/05
to
I would like to ask the following question of SED proponents. Even if
they manage to mimic QO results, can they get out NR QM results in
multi-particle systems?
The point is the following. In SED, there are only classical fields,
if I understand correctly: a Dirac field for electrons and classical EM
(plus some background noise).
Now the equations to solve are the coupled Maxwell-Dirac equations
which are of course non-linear PDE.
Does this system allow for the solution of, say, the argon atom? I
would have said helium, but I think it could manage that, there being
"enough room" in the Dirac field for two spin states.
The point is that I have a hard time imagining that the multi-particle
Schroedinger equation can simply be replaced by a single field in space
(the Dirac field). Because if that is the case, in molecules (even not
very complicated ones), it would be way simpler to solve (numerically,
using a kind of finite element technique) for the 4-dimensional Dirac
field in 3+1 dimensions than it is to expand the multi-electron wave
function, say, on a restricted basis set of hydrogen-like orbitals
centered on each atom, for each individual electron.
In fact, I have serious doubts that such a classical-field calculation
can yield the same answers (because of the cross terms between
different electrons) but if it *is* the case it would be vastly
interesting for quantum chemistry.

cheers,
Patrick.

nightlight

unread,
Oct 8, 2005, 8:45:45 AM10/8/05
to
> Note that Bell's inequalities have been violated in experiments that do
> not suffer from this detector efficiency problem.

The Rowe et al. experiment reported in the Nature paper [1] is even
farther from the B.I. violations than the optical experiments. As
pointed out by Lev Vaidman ([2] pp 242-243):

-----------------------------------------------------------------------------
Now I will discuss the latest experiment by Rowe
et al. [4] who claimed to close the detection efficiency
loophole. In this experiment the quantum correlations
were observed between results of measurements performed
on two ions few micrometers apart. The detection
efficiency was very high. It was admitted that
the locality loophole was not closed, but the situation
was worse than that. Contrary to other experiments
[3], not only the information about the choice, but also
about the results of local measurements could reach
other sites before completion of measurements there.

The reading of the results was based on observing
numerous photons emitted by the ions. This process
takes time which is a few orders magnitude larger than
the time it takes for the light to go from one ion to
the other. Thus, one can construct a very simple LHV
theory which arranges quantum correlations by "communication"
between the ions during the process of
measurement. It is much simpler to construct a LHV
theory which employs also "outcome dependence" instead
of only "parameter dependence" [8].

The purpose of closing the detection efficiency
loophole was to rule out the set of LHV theories in
which the particle carries, among others, instructions
of the type: "if the measuring device has particular parameters,
do not be detected". Such hidden variables
cannot explain the correlations of the Rowe et al. experiment
and this is an important achievement. However,
the task of performing an experiment closing
the detection efficiency loophole without opening new
loopholes (the possibility for "outcome dependence"
LHV in Rowe et al.) is still open.
--------------------------------------------------------------------

Therefore, the bare facts as agreed by everyone (sufficiently informed)
remain unchanged: no one has ever observed any B.I. violations (even
after three decades of trying). The only topic left for
disagreements is, as was before, about the prospects of some potential
future experiments which somehow might produce B.I. violations some
day.

Concluding, as the ever-hopefuls have done, from the observation that
this experiment fails for different reasons than the optical
experiments, that this variety of ways to fail somehow increases the
chances of future success (the absence of failures) is as absurd as it
would be to conclude from the observation that different people die
from different causes, that this variety of ways to die somehow
increases the chances of future immortality (the absence of death). It
seems to me that in both situations, these facts suggest exactly the
opposite -- the more ways to fail (or die) exist in a given scenario,
the greater the chances of failure (or death) in the scenario.

Anyone can, of course, believe and hope in what may happen some day as
they wish. As long as they don't mistake the future for the past, and
start claiming that what they hope will happen (the violations) has
already happened, there is no problem with their "optimism" (however
misguided it may be).

References

1. M. A. Rowe et al. "Experimental violation of a Bell's inequality
with efficient detection" Nature 409 (2001) 791.

2. L. Vaidman "Tests of Bell inequalities"
Phys. Lett. A, Vol. 286, No. 4, 30 July 2001, pp. 241-244
quant-ph/0107057 http://cul.arxiv.org/abs/quant-ph/0107057

nightlight

unread,
Oct 12, 2005, 4:16:28 AM10/12/05
to
> Even if they manage to mimick QO results, can they get
> out NR QM results in multi-particle systems ?
> The point is the following. In SED, there are only
> classical fields if I understand well: a Dirac field
> for electrons and classical EM (plus some background noise).
> Now the equations to solve are the coupled Maxwell-Dirac
> equations which are of course non-linear PDE.
>
> Does this system allow for the solution of, say, the
> Argon atom ?

Several classical theories are mixed up above. The SED is an ambitious
project started by Planck in his second quantum theory of blackbody
radiation (from 1911). The objective of SED was to show that classical
(Newtonian) charged particles subject to the ZPF (zero point field)
distribution of the EM field can reproduce 'quantum' results
(originally just the blackbody & photoeffect, later the full
Schrodinger/Dirac equations). While Planck and Nernst managed to
reproduce this way the blackbody formula and the ground state of
Hydrogen atom, after nearly a century this project has been reluctantly
judged a failure in producing a more fundamental theory even by its
long time proponents (despite advances, mostly at the formal level,
such as Nelson's stochastic mechanics and vast quantities of math on
stochastic processes). A much less ambitious sub-project of SED, the
Stochastic Optics (SO) was initiated by Marshall & Santos in 1980s
(there is a nice, long review in [1]). The SO takes as a given the
matter fields QM equations, in single & multiparticle forms, and merely
aims to reproduce the effects of the EM field quantization via the ZPF
initial/boundary conditions on the classical EM field. The SO has been
successful in this objective (especially in providing natural,
non-contrived counter-examples for various experimental
non-classicality claims by Quantum Opticians) and even some QO
authorities, such as Yariv's QO textbook, recognize this fact.

On the separate track, Schrodinger (following the basic plan proposed
earlier by Lorentz) attempted in 1926 to solve the coupled
Maxwell-Schrodinger/KG equations, which are the nonlinear PDEs (that
you were talking about above). Although he failed to obtain this way
even the correct H spectrum at the time (mostly due to the use of KG
instead of Dirac equation and some crude approximations cf. [B.3a]),
Fermi and others did find at the time useful application of the
Schrodinger's self-consistent field method as a practical alternative
to Dirac's QED, although using different approximations than
Schrodinger (such as the classical radiation reaction force for point
particles). This approach was revived by Jaynes in the 1970s (as
Neoclassical ED) and fully developed for the coupled Maxwell-Dirac
equations (cMD) by Barut in the 1980s under the name Self-Field ED (SFED).
A summary of some SFED features & key references was given in [B].

In ref. [B.2a], Dowling reviews the relation between the SFED and the
SO/SED (citing earlier Phys Rev papers with Barut). The SO/SED is an
approximation to the SFED, where the ZPF of the SO/SED is the
first-order (in alpha) approximation of the Dirac self-field in the form
of an _external_ EM field. Therefore, the SO is still a _linear_ field theory
with non-trivial initial/boundary conditions.

The path from the cMD equations, which are nonlinear PDEs in 3
dimensions, to the 3N-dimensional configuration space linear
(integro-)PDEs of MPQM (Multi-Particle QM, the coordinate
representation of the N-particle QM dynamics in the direct product of N
Hilbert spaces), which was the puzzle Schrodinger struggled with
throughout his life (cf. [2]), was shown by Barut in the 1980s, as reviewed
in [B.1], [B.2] and sketched in [B] and [B.4] (Barut and his students
appear to have been unaware of the relation of their method to the
Carleman Linearization, e.g. that his key ansatz, eq. (11) in [B.1], is
a special case of the CL ansatz, eq. (2) in [B.6]). In order to obtain
MPQM as a linearized approximation of the cMD system, Barut does
introduce an additional postulate -- the charge quantization (with the
state antisymmetrization for identical Dirac particles). He had
attempted to deduce the charge quantization and compute, among others,
the value of alpha directly from the cMD (cf. [B.3h],[B.3j], where he
obtains alpha=1/136.7572 and later 1/136.03, albeit using in key steps
heuristic/physical arguments instead of a clean mathematical deduction
from the cMD equations alone). The regular QM/QED have no choice but to
postulate the charge quantization and the identical particle Hilbert
space reductions, since the linear equations don't constrain the charge
value. The cMD, by virtue of nonlinearity, constrain the charge and the
particle composition states: e.g. it is easy to see that if Psi(x) is
a solution of cMD, then no other function of the form L*Psi(x), where
|L| <> 1, is a solution (which in the case L=2 implies that two
electrons, which in cMD is merely a solution with twice the charge
integral, cannot have the same single-particle Psi, i.e. the cMD hint
of the Pauli exclusion principle). The existence of soliton solutions
for the cMD equations (and for the Einstein-Maxwell-Dirac eqns) was
shown by mathematicians (e.g. Esteban, Sere, Lisi, Flato and others,
cf. [3]) only in the 1990s.
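
Since the Carleman linearization may be unfamiliar, here is its
simplest instance (a toy example in Python, unrelated to the cMD system
itself): for dx/dt = -x^2 one sets y_n = x^n, so dy_n/dt = -n*y_{n+1},
an infinite linear chain; truncating at order M gives a finite linear
system whose first component approximates the exact solution
x(t) = x0/(1 + x0*t):

import numpy as np

M = 12                           # truncation order
A = np.zeros((M, M))
for n in range(1, M):            # dy_n/dt = -n y_{n+1}; last row truncated
    A[n - 1, n] = -n

x0, t = 0.5, 1.0
y = x0 ** np.arange(1, M + 1)    # initial monomials y_n(0) = x0^n
term, result = y.copy(), y.copy()
for k in range(1, M):            # exp(A t) y as a finite series (A is nilpotent)
    term = (A @ term) * (t / k)
    result += term
print(f"Carleman (order {M}): {result[0]:.6f}   exact: {x0/(1 + x0*t):.6f}")

On this view the 2nd-quantized Fock space plays the role of the
infinite tower of monomials y_n.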

> it would be way simpler to solve (numerically, using a kind of finite
> element technique) for the 4-dimensional dirac field in 3+1 dimensions
> than it is to expand the multi-electron wave function, say, on a
> restricted basis set of hydrogen-like orbitals centered on each atom,
> for each individual electron.

The numeric and analytic approximations of the coupled classical field
equations (which are not the same thing as the linear 'classical limit'
of QFT) are being done, especially within non-perturbative methods of
QED and QCD. The linearizing approximation of the coupled MD system
achieved by the 2nd quantization itself, is one such analytic
approximation (a CL rediscovered by the physicists), which, due to
historical accident, is among the most worked-out approximate algorithms
for such PDE systems. In fact, mathematicians and engineers are
importing the QED/QFT Fock space methods to solve nonlinear systems via
the Carleman linearization and then using the QFT techniques (such as
phase space formalism) to deal with the infinite sets of linear PDEs
generated by the CL (cf. [B.6]-[B.8]). Lattice methods are another form
of approximate numeric algorithms for the classical nonlinear
equations. One of my former advisors, Gerry Guralnik, has come up with
the "Source Galerkin" method (a variant of a long-known approximation
technique in mathematics) for the coupled classical fields
corresponding to QED and QCD (cf. [4]).

This seemingly accidental drift in recent years toward the more correct
theory (which was explicitly foreseen by, among others, Lorentz, Planck,
Einstein, de Broglie, Schrodinger, Jaynes and finally nearly
constructed by Barut) is another example of the seemingly strange
phenomenon which Wigner labeled the "unreasonable effectiveness of
mathematics". The formalism itself, being a vast network with highly
adaptable links (adapting under punishments & rewards), is a
realization in the abstract realm of a distributed computer of the same
kind as neural networks or the human brain. Such networks study and
model their environment and intelligently pursue their own "happiness"
(the optimum of punishments/rewards). The individual nodes and the
substratum (the brains of physicists and mathematicians) of this vast
network are generally entirely unaware of the intelligent activity
pattern in the higher realm of which their little actions, thoughts and
life-works are a part.

References

[B] Summary of Barut's SFED on sci.physics.research:
http://groups.google.com/group/sci.physics.research/msg/386f48731520d145

[B.x] Reference [x] from [B]

1. T. Marshall, E. Santos
"Stochastic Optics: A Reaffirmation of the Wave Nature of Light"
Found. Phys, 18.2, 185-223 (1988).

2. J. Dorling
"Schrodinger's original interpretation of Schrodinger equation: a
rescue attempt"
in "Schrodinger: Centenary selebration of polymath"
ed. C.W. Kilmister, Cambridge Univ. Press 1987

3.
http://scholar.google.com/scholar?q=maxwell-dirac+soliton&btnG=Search

4. G. Guralnik on "Source Galerkin" method on arXiv:

http://arxiv.org/find/grp_physics,grp_math,grp_nlin/1/AND+au:+guralnik+abs:+galerkin/0/1/0/all/0/1

Caroline Thompson

unread,
Oct 14, 2005, 3:09:14 AM10/14/05
to
Hi nightlight

"nightlight" <night...@omegapoint.com> wrote in message
news:di8f1p$2bon$1...@fiasco.xenopsyche.net...


>> Note that Bell's inequalities have been violated in experiments that do
>> not suffer from this detector efficiency problem.

Could you please tell me who wrote the above? I've come into the thread
in the middle and can't work it out.

> The Rowe et al. experiment reported in the Nature paper [1] is even
> farther from the B.I. violations than the optical experiments. As
> pointed out by Lev Vaidman ([2] pp 242-243):
>
> -----------------------------------------------------------------------------
> Now I will discuss the latest experiment by Rowe
> et al. [4]

nor, I'm afraid, can I work out what this ref [4] is!

> who claimed to close the detection efficiency
> loophole. In this experiment the quantum correlations
> were observed between results of measurements performed
> on two ions few micrometers apart. The detection
> efficiency was very high. It was admitted that
> the locality loophole was not closed, but the situation
> was worse than that. Contrary to other experiments
> [3], not only the information about the choice, but also
> about the results of local measurements could reach
> other sites before completion of measurements there.

This all sounds very similar to the original Rowe et al. experiment,
which I read when it was first published. Though I agree that, due to the close
proximity of the ions and the fact that the measurement took a
considerable time, there was plenty of room for exchanges of
information, I've got a different idea as to the actual cause of the
Bell test violation.

If you look at the way in which the "detector settings" were
manipulated, you find that the settings for both sides were controlled
by the same laser and the situation was not fully under the
experimenters' control. It is possible, I think, that the errors in
achieving the desired settings were correlated. I haven't checked fully
but wouldn't a correlated error of this kind increase the apparent
correlation?

See http://freespace.virgin.net/ch.thompson1/Critiques/intro.htm
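
A quick toy check of this conjecture (the idealized -cos correlation
form and the error sizes are illustrative only, not taken from the
Rowe et al. setup; Python): a shared setting error cancels in the
difference a - b and leaves the correlation at full strength, while
independent errors of the same size shrink it by roughly
exp(-4 sigma^2), so data corrected on the assumption of independent
errors would come out too strongly correlated.

import numpy as np

rng = np.random.default_rng(1)
N, sigma = 500_000, 0.15            # illustrative rms setting error (radians)
a, b = np.pi / 8, 0.0               # nominal analyzer settings

def E(da, db):
    # idealized correlation -cos 2(a - b), with per-trial setting errors
    return np.mean(-np.cos(2 * ((a + da) - (b + db))))

s = rng.normal(0, sigma, N)             # fully correlated: one shared error
ia, ib = rng.normal(0, sigma, (2, N))   # independent error on each side
print(f"ideal       E = {-np.cos(2 * (a - b)):+.4f}")
print(f"correlated  E = {E(s, s):+.4f}   (common shift cancels)")
print(f"independent E = {E(ia, ib):+.4f}   (shrunk by ~{np.exp(-4 * sigma**2):.3f})")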

> The reading of the results was based on observing
> numerous photons emitted by the ions. This process
> takes time which is a few orders magnitude larger than
> the time it takes for the light to go from one ion to
> the other. Thus, one can construct a very simple LHV
> theory which arranges quantum correlations by "communication"
> between the ions during the process of
> measurement. It is much simpler to construct a LHV
> theory which employs also "outcome dependence" instead
> of only "parameter dependence" [8].

[What is ref [8]?]

> The purpose of closing the detection efficiency
> loophole was to rule out the set of LHV theories in
> which the particle carries, among others, instructions
> of the type: "if the measuring device has particular parameters,
> do not be detected". Such hidden variables
> cannot explain the correlations of the Rowe et al. experiment
> and this is an important achievement. However,
> the task of performing an experiment closing
> the detection efficiency loophole without opening new
> loopholes (the possibility for "outcome dependence"
> LHV in Rowe et al.) is still open.

Agreed -- and I think nobody actually disputes this. The failure to
achieve a "loophole-free" experiment is still motivating proposals that
will, hopefully, do just this. One I have been looking at is:

R. García-Patrón, J. Fiurášek, N. J. Cerf, J. Wenger, R.
Tualle-Brouri, and Ph. Grangier, "Proposal for a Loophole-Free Bell
Test Using Homodyne Detection", Phys. Rev. Lett. 93, 130409 (2004)
http://arxiv.org/abs/quant-ph/0403191

I hope the experimenters will go ahead and actually do this, as I can't
see any actual loophole and am confident that the Bell inequality chosen
(the CHSH inequality conducted with the "event-ready detectors" that
Bell would have liked) will not be violated. I can see no good reason
why they have not yet done it, since the preliminary work seems to have
been completed.

> Therefore, the bare facts as agreed by everyone (sufficiently informed)
> remain unchanged: no one has ever observed any B.I. violations (well
> after the three decades of tries). The only topic left for
> disagreements is, as was before, about the prospects of some potential
> future experiments which somehow might produce B.I. violations some
> day.
>
> Concluding, as the ever-hopefuls have done, from the observation that
> this experiment fails for different reasons than the optical
> experiments, that this variety of ways to fail somehow increases the
> chances of future success (the absence of failures) is as absurd as
> would be to conclude from the observation that different people die
> from different causes, that this variety of ways to die somehow
> increases the chances of future immortality (the absence of death). It
> seems to me that in both situations, these facts suggest exactly the
> opposite -- the more ways to fail (or die) exist in a given scenario,
> the greater the chances of failure (or death) in the scenario.

Well said!

Caroline
http://freespace.virgin.net/ch.thompson1/

nightlight

unread,
Oct 15, 2005, 8:03:11 AM10/15/05
to
>>> Note that Bell's inequalities have been violated in experiments that do
>>> not suffer from this detector efficiency problem.

> Could you please tell me who wrote the above?

Sorry, I didn't include the poster's name. It was an earlier message by
"s1r.h3...@gmail.com" at the link:

http://groups.google.com/group/sci.physics.research/msg/aa075eaccef0db45?hl=en&

>> Now I will discuss the latest experiment by Rowe
>> et al. [4]

> nor, I'm afraid, can I work out what this ref [4] is!

> [What is ref [8]?]

The quotation from Lev Vaidman's paper (my ref [2]) must not be showing
in your news viewer as the three indented paragraphs. In that whole
section I was quoting Vaidman and these references are from his paper.

> If you look at the way in which the "detector settings" were
> manipulated, you find that the settings for both sides were controlled
> by the same laser and the situation was not fully under the
> experimenters' control. It is possible, I think, that the errors in
> achieving the desired settings were correlated. I haven't checked
> fully but wouldn't a correlated error of this kind increase the
> apparent correlation?

Yes, thanks for reminding me (I hadn't looked at the Rowe et al. paper
for several years): the settings on "both" sides (if one can call their
actual apparatus, as opposed to the schematic depiction in Nature, "two"
sides at all) were controlled by the same laser. The correlated errors
would indeed increase the apparent correlation. Unfortunately, the
original paper didn't have enough detail on their raw data processing
and I didn't follow up the whole series of preprints about the setup
they have on their website & on arXiv (where one could probably dig it
out):

The NIST experiment web site:
http://tf.nist.gov/ion/qucomp/papers.htm

>> and this is an important achievement. However,
>> the task of performing an experiment closing
>> the detection efficiency loophole without opening new
>> loopholes (the possibility for "outcome dependence"
>> LHV in Rowe et al.) is still open.

> Agreed -- and I think nobody actually disputes this.

That was still from the Vaidman quote. I agree, too, except I don't
call them "loopholes". There are no "loopholes" in these experiments --
they show exactly what the actual and full (including apparatus & actual
source properties) theoretical models of the experiments predict for
the _full_ detection & non-detection rates: the absence of violations.
The only other topic in physics where one hears similar persistent
"loophole" jabber and fast-talk is the claims on various unlimited
free-energy devices (which are always a minor technological "loophole"
away from working) peddled to potential investors in recent years.

The general postulates of QM lack sufficient quantitative precision
(required for the Bell inequality violations, such as maximum
rejections allowed to claim a violation) in specifying how the
experimental procedures and the readouts on the instruments map into
valid and invalid "measurement" realization of a formal QM observable
(or of the state "preparation"). The QM postulates can only assert the
bare existence of this operational mapping (along with all its decision
rules for valid/invalid realization), but they can't tell you what the
procedure is and more importantly (for the B.I. violation claims) what
are the bounds on the rejection (as being invalid realization of
'measurement' or state preparation) rates. The Bell inequalities
violations require much more quantitatively specific kind of prediction
than what the mathematical properties of the formal QM observables can
offer.

Bell himself never called his QM toy model results a "theorem" (proving
existence of QM violations). He was merely pointing at the possible
phenomena which might show violations if one were to work out the
detailed enough model for a specific domain (such as optical photons &
photo-detectors). One cannot make a "prediction" of B.I. violations
based solely on the QM postulates and formal properties of abstract QM
observables. It was the Bell popularizers, pedagogues and experimenters
who puffed up Bell's heuristic hint on where to look, rebranding it
into the "theorem".

van...@ill.fr

unread,
Oct 15, 2005, 8:05:40 AM10/15/05
to
nightlight wrote:

> The path from the cMD equations, which are nonlinear PDEs in 3
> dimensions, to the 3N dimensional configuartion space linear
> (integro-)PDEs of MPQM (Multi Particle QM, the coordinate
> representation of the N-particle QM dynamics in the direct product of N
> Hilbert spaces), which was the puzzle Schrodinger struggled with
> throughout his life (cf. [2]), was shown by Barut in 1980s, as reviewed
> in [B.1]

Ok, I'm slowly exploring this...

In:

1. A.O. Barut "Quantum Electrodynamics based on self-energy"
IC1987248: http://library.ictp.trieste.it/DOCS/P/87/248.pdf

What I don't understand is his equation (7). He introduces TWO Dirac
fields! Normally, the TWO "electrons" are supposed to be "bumps" in
one and the SAME, single classical "electron" field, no? Otherwise
you're not really "deriving" MPQM from the classical SINGLE Dirac field
coupled to the SINGLE EM field?

cheers,
Patrick.

nightlight

unread,
Oct 16, 2005, 6:08:24 PM10/16/05
to
> ... [Ref B.1] ...

> What I don't understand is his equation (7). He introduces
> TWO Dirac fields!

The two-fermion action in eq. (7) is for two _distinct_ fermion fields
and the EM field (such as an electron and a proton, which they used to
derive a closed-form equivalent of the Bethe-Salpeter expansion). That is the
standard classical action for such system of coupled fields. You can
find the derivation sketched out in [B.1], with details written out
explicitly, in [1] (which looks like transcribed notes taken by his
grad students).

> Normally, the TWO "electrons" are supposed to be "bumps" in one
> and the SAME, single classical "electron" field, no ?

That's right, the SFED has a single Psi(x) field for all electrons. The
N electron case will merely have the charge integral -N*e. Note first
that to produce a linearized approximation (the two-particle QM) for
the coupled Maxwell-Dirac equations (cMD) in the two-distinct-fermion case,
he introduces the ansatz eq. (11) with the corresponding _weaker_
action variation in terms of fields Phi(x1,x2) (which will, curiously,
always work as described provided the Lagrangians have local gauge
symmetry) leading to the eq. (12). This two-distinct-particle Barut
ansatz eq. (11) is a special case of the Carleman Linearization ansatz
for PDEs (eq. (2), p. 100 in [B.6]). Note that the mapping (11) is not
1-to-1, e.g. the Phi(x1,x2) is insensitive to random phase factors of
Psi1 and Psi2 that cancel out; hence the ansatz (11), plus the weaker
variation via Phi, represents the formal transition from the
single-system form of dynamical description (of Psi1 & Psi2 in (10)) to the
ensemble (of all Psi1 & Psi2 solutions of (10) yielding the same
stationary Phi solutions of (12)) form of description of the MPQM, i.e.
the Barut ansatz is the SFED explanation of the transition to the
mandatory statistical description of MPQM (where the single particle
states within the composite system are generally statistical operators
instead of the pure states within the single particle system; roughly
speaking, the statistical averaging over the random phases of Psi1 and
Psi2 leads to the linearization of the dynamical evolution for the
ensemble properties).
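
In compact form, the structure just described (a schematic rendering
only, not a verbatim copy of eq. (11) in [B.1]):

  \Phi(x_1, x_2) = \Psi_1(x_1) \Psi_2(x_2),  with  \Phi  unchanged under
  \Psi_1 -> e^{i\alpha} \Psi_1,  \Psi_2 -> e^{-i\alpha} \Psi_2,

so the map (\Psi_1, \Psi_2) -> \Phi is many-to-one, which is the formal
root of the ensemble reading sketched above.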

The resulting eqs (12) are still not linear, though, since they contain
the Green function integrals of Phi(x1,x2) (as quadratic polynomials
of the 64 Phi components) buried inside the label A_self (the
self-field part of the EM field). These equations are the closed form
SFED equivalents of the QED's Bethe-Salpeter expansion (cf. [1],
[B.2]). The regular MPQM linear two-particle equations for Phi(x1,x2)
follow if one further drops the self-interaction terms from these
integrals in (12) (cf. [B.2] pp 355-356), which is a truncated form of
CL. The first order (in alpha) approximation for these self-interaction
terms in the form of the external EM fields (hence retaining linearity
of the equations) is precisely the ZPF of SED/SO (cf. Dowling chapter
[B.2a]).

Another way to obtain a better approximation is to formally split the
Psi(x) into a sum of functions (interpreted in retrospect as the one
electron/positron contributions) Psi1(x) + Psi2(x) +..., with the
corresponding split in the rewritten Lagrangian (cf. eq. (53) p. 358 in
[B.2], after the antisymmetrized form of the ansatz), so that the
self-interaction contributions which get dropped from (12) become
smaller relative to the mutual interaction terms (now the interactions
between these formal one-electron components) which are kept after the
ansatz (11). The standard linear MPQM for N electrons treated as
identical particles is obtained if one uses the antisymmetrized form of
the product in (11), with the N factors corresponding to these one
electron functions (cf. [B.2] pp 358-361 for N=2 case). One can view
the whole procedure as a purely formal mathematical trick to get an
approximation of the nonlinear equations from (10) "that works" (to the
extent that MPQM works, which is up to radiative corrections), and not
as any additional postulate, since any solutions of such systems are
merely linearized approximations of the solutions of the nonlinear
equations (12) (which in turn are a 'weak variation' approximation of
the exact equations of (10), i.e. (12) already contains non-physical
solutions absent in (10)).

There was a lame attempt to criticize Barut's SFED by Iwo
Bialynicki-Birula [2], based on vague suspicions that Barut had somehow
snuck the QED results into the SFED. That turned out to be a groundless
accusation easily refuted by Barut & Dowling [3]. The questions &
critique similar to yours was also expressed by Jack Sarfatti on
sci.physics [4] in the context of Mendel Sachs derivation of QM from
gravity (where Sachs rediscovers the Barut's linearizing ansatz leading
to MPQM). The problem with Sachs and Sarfatti is that, unlike Barut
(and few others), they both have the unfortunate misconception on the
empirical facts of QM non-locality phenomenon (they assume that the QM
non-locality, such as the B.I. violations, was experimentally
established; Barut knew better, of course; the Quantum Opticians such
as Glauber, Mandel, Clauser, Aspect, Grangier, ... knew better, too
[5]). That bogs them down in a tar pit of speculations and struggles
to somehow fit in the imagined non-local phenomena (which is as sad as
watching Don Quixote battling the windmills).

References

[B] Barut SFED summary in the earlier sci.physics.research post:

http://groups.google.com/group/sci.physics.research/msg/386f48731520d145

B.1. A.O. Barut "Quantum Electrodynamics based on self-energy"
IC1987248: http://library.ictp.trieste.it/DOCS/P/87/248.pdf

B.2. A.O. Barut "Foundations of self-field electrodynamics"
"New Frontiers in QED and Quantum Optics" pp 345-371
NATO ASI Series B, Vol. 232, Plenum 1990

a) In the same volume, pp 371-389
J.P. Dowling "QED Based on Self-Fields: Cavity Effects"

B.6. K. Kowalski, W. Steeb
"Nonlinear Dynamical Systems and Carleman Linearization"
World Scientific, 1991.
http://www.worldscibooks.com/mathematics/1347.html

1. A.T. Alan, Z.Z. Aydin, N. Karagoz, A.U. Yilmazer
"Covariant Two Fermion Equations with Anomalous Magnetic Moments"
Turk J Phys 25 (2001) , 423 - 429

http://journals.tubitak.gov.tr/physics/issues/fiz-01-25-5/fiz-25-5-5-0011-7.pdf

2. I. Bialynicki-Birula Comment on "Quantum electrodynamics based on
self-energy: Lamb shift and spontaneous emission without field
quantization" Phys. Rev. A 34, 3500-3501 (1986)
http://prola.aps.org/abstract/PRA/v34/i4/p3500_1

Online: http://www.cft.edu.pl/~birula/publ/CommBarut.pdf
I.B-B publ: http://www.cft.edu.pl/~birula/publ.html

3. A. O. Barut and J. P. Dowling "Quantum electrodynamics based on
self-energy: Spontaneous emission in cavities"
Phys. Rev. A 36, 649-654 (1987)
http://prola.aps.org/abstract/PRA/v36/i2/p649_1

A. O. Barut "Quantum electrodynamics based on self-energy versus
quantization of fields: Illustration by a simple model"
Phys. Rev. A 34, 3502-3503 (1986)
http://prola.aps.org/abstract/PRA/v34/i4/p3502_1

4. Sach's linearizing ansatz criticized by Sarfatti:
http://groups.google.com/group/sci.physics/browse_thread/thread/527a2475c297e60f/78f0877645271163

Mendel Sachs, Home page with preprints, book reviews & discussions
http://www.compukol.com/mendel/articles/articles.html

5. QO 'sleight of hand' illustrated by Ou & Mandel 1988 experiment:

http://groups.google.com/group/sci.physics.research/msg/7ee770990a31bd3f

On kids variant of QO 'sleight of hand', so-called Bell's "Theorem"

http://groups.google.com/group/sci.physics.research/msg/fbd9858ee710e27a

which Bell himself never called a "theorem" or a QM "prediction"
of the B.I. violation (but understood it as a heuristic hint
pointing at the kind of phenomena to study for actual proofs &
experimental tests):

Emilio Santos "Bell's theorem and the experiments: Increasing
empirical support to local realism"
quant-ph/0410193 http://cul.arxiv.org/abs/quant-ph/0410193 (p.20)

{ Submitted to sci.physics.research on Oct 15 2005, 15:43 EST }

Caroline Thompson

unread,
Oct 16, 2005, 6:09:00 PM10/16/05
to
"nightlight" <night...@omegapoint.com> wrote in message
news:1129281784.2...@g49g2000cwa.googlegroups.com...

> The quotation from Lev Vaidman's paper (my ref [2]) must not be showing
> in your news viewer as the three indentated paragraps. In that whole
> section I was quoting Vaidman and these references are from his paper.

Thanks. Yes, the indentation got lost, though I should have realised
what the lines meant. I read Vaidman's paper at the time ...

>> ... It is possible, I think, that the errors in
>> achieving the desired settings were correlated. I haven't checked
>> fully but wouldn't a correlated error of this kind increase the
>> apparent correlation?
>
> Yes, thanks for reminding me (I didn't look at Rowe et al. paper for
> several years), the settings on "both" sides (if one can call their
> actual aparatus, as opposed to the schematic depiction in Nature, "two"
> sides at all) were controled by the same laser. The correlated errors
> would indeed increase the apparent correlation. Unfortunately, the
> original paper didn't have enough detail on their raw data processing

True, nor did all versions give a realistic picture of the experimental
setup!

> and I didn't follow up the whole series of preprints about the setup
> they have on their website & on arXiv (where one could probably dig it
> out):
>
> The NIST experiment web site:
> http://tf.nist.gov/ion/qucomp/papers.htm
>

I just read a couple of papers:

Kielpinski, David et al, "Recent Results in Trapped-Ion Quantum Computing",
http://arxiv.org/abs/quant-ph/0102086, and
M Rowe et al, Nature 409, 791 (2001)

> ... I don't
> call them "loopholes". There are no "loopholes" in these experiments --

Agreed, and some years ago I entered into an argument on this with the
editors of PRL. They assured me the term was customary, which, of
course, it now is. I felt that to call the flaws "loopholes" was
somehow derogatory to local realists, who could provide, as you say,
explanations for all the actual observations.

> The general postulates of QM lack sufficient quantitative precision
> (required for the Bell inequality violations, such as maximum
> rejections allowed to claim a violation) in specifying how the
> experimental procedures and the readouts on the instruments map into
> valid and invalid "measurement" realization of a formal QM observable
> (or of the state "preparaion").

Hmmm ... to put it a completely different way, Bell never tried in his
first paper to say just how the experiment was to be conducted or what
was to be done if not all particles were detected. And he certainly did
not look into other matters such as dark counts or accidentals.

> The QM postulates can only assert the
> bare existence of this operational mapping (along with all its decision
> rules for valid/invalid realization), but they can't tell you what the
> procedure is and more importantly (for the B.I. violation claims) what
> are the bounds on the rejection (as being invalid realization of
> 'measurement' or state preparation) rates. The Bell inequalities
> violations require much more quantitatively specific kind of prediction
> than what the mathematical properties of the formal QM observables can
> offer.

True. The QM formalism simply does not seem to give a sufficiently
complete model of any real experimental situation.

> Bell himself never called his QM toy model results a "theorem" (proving
> existence of QM violations). He was merely pointing at the possible
> phenomena which might show violations if one were to work out the
> detailed enough model for a specific domain (such as optical photons &
> photo-detectors). One cannot make a "prediction" of B.I. violations
> based solely on the QM postulates and formal properties of abstract QM
> observables. It was the Bell popularizers, pedagogues and experimenters
> who puffed up the Bell's heuristic hint on where to look, rebranding it
> into the "theorem".

Well said! I wish a few more people realised this, instead of trying to
tell us there is experimental proof that local realism doesn't work.
Bell himself, however, doesn't seem entirely blame-free here. In 1976
he was saying:

"The authors [of Bell test experiments] in general make some more or
less ad hoc extrapolation to connect the results of the practical with
the result of the ideal experiment. It is in this sense that the
entirely unauthorised 'Bell limit' sometimes plotted along with
experimental points has to be understood." [Speakable and Unspeakable,
p 60]

and (page 88)

" . there is no question of actually realising a system which violates the
locality inequality."

but later he seems to have confused the issue, getting involved in
Bohm's "nonlocal" pilot wave ideas. It's a mystery to me, too, why he
backed Aspect's time-varying experiment. This was surely from the start
a "no-go" as far as any reasonable physical explanation was concerned?
What physical cause could make one detector send signals to the other in
such a way as to lead to the exact QM prediction? Perhaps if he had
realised that experiments such as the Geneva ones would later
demonstrate BI violations over many kilometers he would not have been so
keen. Over all this distance of fibre cable, up hill and down dale, the
idea of one station somehow influencing the other is inconceivable!

Caroline

http://freespace.virgin.net/ch.thompson1/

p.ki...@imperial.ac.uk

unread,
Oct 18, 2005, 8:57:32 AM10/18/05
to
Caroline Thompson <ch.tho...@virgin.net> wrote:
> What physical cause could make one detector send signals
> to the other in such a way as to lead to the exact QM
> prediction?

A universe in which QM was a good description of
the laws of nature would seem to be sufficient
"physical cause".

If you want to sink QM, you need to do an experiment
which (without loopholes) gives a result that QM can't
explain. Now, as you may have noticed, it's quite hard
for QM proponents to satisfy the Local Realists that
Local Realism has been ruled out -- so expect to find it
equally hard in reverse.
