
The time it takes to emit one photon


Paul Danaher

Jul 27, 2005, 5:32:50 PM
We have ("Mossbauer effect and retarded interactions") a fairly subtle
debate going on which an outsider with some mathematics but not nearly
enough physics feels should be capable of a direct answer. One of the
questions posed in this is, how long does it take to emit one photon. I
would really welcome a "simple" answer.

Eugene Stefanovich

Jul 27, 2005, 8:19:28 PM

That's a good question, and I don't have a good answer for it.

In QM we describe the photon emission as a decay process
A -> B + C

The system is initially prepared in the (metastable) state A.
This is an excited state of the radioactive nucleus in our case.
At time t>0 we measure whether the system remained in A or
changed to B+C (the ground state of the nucleus plus a photon).
Then we repeat this preparation-measurement process many times for
each value of t, and finally get the time-dependent "decay law"
omega_A(t) which is the probability of finding the system in A at time
t if it was prepared in A at time t=0.

Quantum mechanics allows us to calculate the function
omega_A(t). In most cases this is an
exponential function of t with the decay parameter called
"the lifetime".
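
The prepare-measure-repeat procedure described above is easy to sketch
numerically. A minimal Monte Carlo illustration (the lifetime value is
arbitrary and chosen only for the example):

```python
import math
import random

def decay_law(t, tau):
    """omega_A(t): probability of still finding the system in state A at time t."""
    return math.exp(-t / tau)

def monte_carlo_survival(t, tau, n_trials=100_000, seed=1):
    """Estimate omega_A(t) by repeated prepare-then-measure runs: each trial
    draws a random decay time and makes a single measurement at time t."""
    rng = random.Random(seed)
    survived = sum(rng.expovariate(1.0 / tau) > t for _ in range(n_trials))
    return survived / n_trials

tau = 2.0  # lifetime in ns (arbitrary, for illustration only)
for t in (0.0, 1.0, 2.0, 4.0):
    print(f"t = {t} ns: exact = {decay_law(t, tau):.3f}, "
          f"MC = {monte_carlo_survival(t, tau):.3f}")
```

Note that each trial measures a freshly prepared system exactly once, which
is the ensemble procedure Eugene describes, not repeated measurement of the
same system.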

This parameter has nothing to do with the question you asked, i.e.,
"how long does it take to emit one photon", and the above approaches
(both experimental and theoretical) do not offer any answer.

To answer this question experimentally, one needs to perform
repetitive measurements on _the same_ metastable system. E.g.,
prepare system in A at time t=0, measure the state of the system at
t=1ns, then measure the same system at t=2ns, etc. The result of one
such experimental run may look
like this (note that in experiment we can find the system only
in one of the two states A or B+C, and not somewhere in between):

time state
0ns A
1ns A
2ns A
3ns B+C
4ns B+C
....

This may suggest that the photon emission time is less than 1ns.
However, this "repetitive measurement" experiment is not
covered well by the standard QM formalism: you need to worry
about how the wave function changes after each measurement (collapse),
you meet some weird things like "Zeno effect", etc.
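
The single-system record above can be caricatured with a toy simulation.
It assumes the naive idealization that each measurement finding A simply
resets the state, so every interval decays independently; that idealization
is exactly where the Zeno-type corrections mentioned above would enter, so
treat this as a cartoon rather than a model:

```python
import math
import random

def measurement_record(tau, dt=1.0, t_max=6.0, seed=3):
    """One run of repeated measurements on the same system.
    Idealization: a measurement finding A resets the state to A, so each
    interval decays independently with probability 1 - exp(-dt/tau)."""
    rng = random.Random(seed)
    state, t, record = "A", 0.0, []
    while t <= t_max:
        record.append((t, state))
        if state == "A" and rng.random() < 1.0 - math.exp(-dt / tau):
            state = "B+C"
        t += dt
    return record

for t, s in measurement_record(tau=3.0):
    print(f"{t:3.0f} ns   {s}")
```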

I think that your question boils down to the following:
"how fast is the collapse of the wave function?" My best guess is that
it happens instantaneously.

Eugene.

Oz

Jul 28, 2005, 12:00:07 PM
Eugene Stefanovich <eug...@synopsys.com> writes

>
>
>In QM we describe the photon emission as a decay process
>A -> B + C
>
>Quantum mechanics allows us to calculate the function
>omega_A(t). In most cases this is an
>exponential function of t with the decay parameter called
>"the lifetime".
>
>This parameter has nothing to do with the question you asked, i.e,
>"how long does it take to emit one photon", and the above approaches
>(both experimental and theoretical) do not offer any answer.

NB I am an ignorant amateur.

I am not happy with the assumption that there are two states, excited
and emitted and nothing in between.

We can have a superposition of 'emitted' and 'excited' which seems to me
to be as good a description of 'partly emitted' as one could wish to
hope for.

>To answer this question experimentally, one needs to perform
>repetitive measurements on _the same_ metastable system. E.g.,
>prepare system in A at time t=0, measure the state of the system at
>t=1ns, then measure the same system at t=2ns, etc. The result of one
>such experimental run may look
>like this


>(note that in experiment we can find the system only
>in one of the two states A or B+C, and not somewhere in between):

Note carefully the above sentence. In essence one has FORCED the system
into state A or (B+C). Not surprisingly one sees ONLY A or (B+C). This
says nothing about the state of the system before you enforced a
measurement. It says a LOT about the measuring equipment's interaction
with the state.

>time state
>0ns A
>1ns A
>2ns A
>3ns B+C
>4ns B+C
>....
>
>This may suggest that the photon emission time is less than 1ns.
>However, this "repetitive measurement" experiment is not
>covered well by the standard QM formalism: you need to worry
>about how the wave function changes after each measurement (collapse),
>you meet some weird things like "Zeno effect", etc.

The above says more than that.
It says that the measuring apparatus can force the system into one state
or the other in less than 1ns. It says absolutely nothing about how fast
A -> (B+C) happens in the absence of the measuring apparatus.

Now how about the following scenario.
Here we measure 100,000 changes of A -> (B+C) ensuring A is always in
the same state before we start. This apparatus can force the system to
be in either A or (B+C) in 1/10ns. We get:

%A %(B+C)
2.0ns 100 0
2.1 99 1
2.2 97 3
2.3 85 15
2.4 60 40
2.5 50 50 (OK its convenient...)
2.6 40 60
2.7 15 85
2.8 3 97
2.9 1 99
3.0 0 100

Now you can decide what you like, but I would say that this system takes
about 0.6 ns to emit a photon to first order. Mind you IMHO it would be
a slightly odd system that behaved like the above.
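
For what it's worth, a table like the hypothetical one above can be reduced
to a number by interpolating the width of the transition; reading it as the
97% -> 3% interval reproduces the ~0.6 ns first-order figure quoted above
(a sketch of the reading-off procedure only, not a claim about real systems):

```python
# Oz's hypothetical data: time (ns) vs percentage of runs still found in A.
data = [(2.0, 100), (2.1, 99), (2.2, 97), (2.3, 85), (2.4, 60), (2.5, 50),
        (2.6, 40), (2.7, 15), (2.8, 3), (2.9, 1), (3.0, 0)]

def crossing_time(table, level):
    """Linearly interpolate the time at which %A first falls to `level`."""
    for (t0, p0), (t1, p1) in zip(table, table[1:]):
        if p0 >= level >= p1:
            return t0 + (p0 - level) * (t1 - t0) / (p0 - p1)
    raise ValueError("level never crossed")

# Width of the 97% -> 3% transition, one way to define an "emission time":
width = crossing_time(data, 3) - crossing_time(data, 97)
print(f"transition width ~ {width:.1f} ns")
```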

>I think that your question boils down to the following:
>"how fast is the collapse of the wave function?" My best guess is that
>it happens instantaneously.

My guess is that it happens typically rather slowly and isn't a collapse
at all but a gradual change via complex superposition of states.

Of course if you measure it very fast, using an apparatus that enforces
very fast transitions, then the superposition includes that of the
wavefunction of the very fast detector. Not surprisingly the
transition is as fast as the measuring equipment will deliver.

--
Oz
This post is worth absolutely nothing and is probably fallacious.

Use o...@farmeroz.port995.com [ozac...@despammed.com functions].
BTOPENWORLD address has ceased. DEMON address has ceased.

DM Roberts

Jul 28, 2005, 12:00:09 PM
Since we cannot measure the state partway through the process of
emitting a photon (i.e. some half-extruded photon still attached to the
nucleus - I speak as a fool), there is really not much meaning to the
question "how long does it take?"

One factor which may influence the answer, however, is the possibility
of virtual particles (read on, I'm not proposing anything weird) -
unmeasurable, but "existing" for a time less than the upper bound set
by Heisenberg uncertainty (the more energy, incl mass, the particle
has, the less time it could "exist").

We might have to consider QFT effects, although if we are dealing with
a nucleus the corrections due to higher order diagrams will be small
(in the sense that we can design lasers properly using QM only).

Also I agree with Eugene about the measuring of the same state
repeatedly to see at which time it collapses - How precisely does one
measure the energy of the nucleus (or whatever) without disturbing it
from the metastable state?

More rigorously, the state consisting of the metastable nucleus evolves
into a mixture of states, one of which is the undecayed nucleus, the
other is the decayed nucleus + a photon. Measurement projects the mixed
state vector onto one or the other. If we _never_ measure the system,
we can never actually say that the decay has taken place! Note that the
evolution of the state into the mixture starts immediately, so the
decay could "take place" immediately. I say "take place", as all we can
really tell is that the decay took place sometime before the
measurement, unless we look for extra data like how far the photon has
travelled.

[ Mod. note: I believe the poster meant "superposition" rather than
"mixture". "Mixture" has a different meaning in quantum statistical
mechanics. -ik ]

Just a question - what's the Zeno effect? I've moved off into pure
maths and so have lost that little reality-connecting thread I once
had.

David

Marcel LeBel

Jul 28, 2005, 12:00:07 PM

How long does it take for a photon to be absorbed ? This amount of time
for a light photon is very short, and consequently, usually rounded up
to "instantaneous". But if you try to apply this idea to a long wave
radio photon of 100km wavelength .. it does not strike me as too
convincing. The radio photon is absorbed by a radio antenna by inducing
electron oscillation in the antenna....not by collapsing on it
instantaneously. The photon is a quantum of action h packaged in a fixed
amount of time, the period. The photon is something that can happen, but
happen only in the time of the package. The photon is a fixed specific
power. Once it is absorbed, we factor/integrate the total work done and
call it Energy. And this is the problem. We lose sight of what we
study. What is real?

*-The calculation, or the event? The event is real and our calculations
are not. Everything we put on paper is one order of time
over-integration. We integrate and measure energy, but the dimension of
the real event happening is "power". We integrate and measure time, but
the dimension of the real event happening is "the passage of time". We
integrate and measure the fall distance of an object, but the real event
is an object falling.. or its existence vs time. Take a light photon
absorbed by chlorophyll in plants. It eventually degrades down to
infra-red heat photons. The quantum of action h is still the same. The
only difference is that the time package is now bigger. The power of
the heat photon is lower than the power of a light photon.-*

The photon is a quantum of action h packaged in a specific amount of
time. It will be absorbed by oscillating structures tuned to receiving
the quantum just at the same rate. The same happens for emission. How
long does it take for a 100km long radio photon to be emitted? Exactly
one period T, the event time. Same for light photons and all other
photons of the EM spectrum.

le...@muontailpig.com remove particle

Eugene Stefanovich

Jul 29, 2005, 3:03:49 AM

DM Roberts wrote:

> More rigorously, the state consisting of the metastable nucleus evolves
> into a mixture of states, one of which is the undecayed nucleus, the
> other is the decayed nucleus + a photon. Measurement projects the mixed
> state vector onto one or the other. If we _never_ measure the system,
> we can never actually say that the decay has taken place! Note that the
> evolution of the state into the mixture starts immediately, so the
> decay could "take place" immediately. I say "take place", as all we can
> really tell is that the decay took place sometime before the
> measurement, unless we look for extra data like how far the photon has
> travelled.
>
> [ Mod. note: I believe the poster meant "superposition" rather than
> "mixture". "Mixture" has a different meaning in quantum statistical
> mechanics. -ik ]

Thank you -ik. This is an important correction. We are not talking about
statistical mixed states described by density matrices (operators).
We should use words "superposition" or "linear combination" instead
of "mixture" everywhere.

Eugene.

David Park

Jul 29, 2005, 3:03:50 AM

"Paul Danaher" <paul.d...@watwinc.com> wrote in message
news:vuEFe.52883$rb6.296@lakeread07...

Writing as a student and not as an expert:

I thought that one of the lessons of QM was that there is no such thing as a
trajectory? You can calculate the probability that a particle in a certain
state at point A later appears at point B in another state. But you can't
say how it got from A to B. You can't say that it took some particular path.
(In fact in Feynman's formulation of QM you would add up all the possible
paths.)

So I would suppose that in the same vein it is meaningless to talk about the
'process' of emitting a photon - as if it were some drawn out detailed
process that could be followed in a step by step continuous manner. All you
can calculate is the probability that a system in one energy state is later
a system in a lower energy state plus a photon.

So it is not a question that QM gives an answer to, and probably not even a
proper question.

David Park
dj...@earthlink.net
http://home.earthlink.net/~djmp/

Igor Khavkine

Jul 29, 2005, 3:03:49 AM

To be really safe, the answer has to be infinity. Of course, we see
photons emitted all the time, and these events happen rather more
quickly than that. This just means that the photon emission
time scale is very long compared to some other characteristic time,
yet still very short compared to the time scales we are used to. To
identify what each of these time scales is, we need to look at how
photon emission is observed and theoretically analyzed.

First let's look at how photons appear theoretically. Take Maxwell's
equations. They form a system of linear translation invariant
equations. A Fourier transform decouples the fields into independently
vibrating modes, each with a characteristic frequency. In this form,
quantization is trivial. Each mode is treated as an independent
harmonic oscillator.

The spectrum of stationary states (those with definite energy) of a
harmonic oscillator is well known. It is discrete and equally spaced,
with the spacing given by the frequency. Each state is labeled by an
integer n. The energy of the state is basically a multiple of this
number. We like to say that the state n contains n quanta, each with
energy proportional to the frequency of the oscillator. These quanta
are called photons. Each stationary state of the electromagnetic field
can be described by a linear superposition of states with definite
finite photon number in each vibration mode.
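
A one-line numerical restatement of the oscillator spectrum just described
(the mode frequency is an arbitrary illustrative choice):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def mode_energy(n, omega):
    """Energy of the n-quantum state of one field mode treated as a
    harmonic oscillator: E_n = hbar * omega * (n + 1/2)."""
    return HBAR * omega * (n + 0.5)

omega = 2 * math.pi * 5e14  # an optical mode near 500 THz (illustrative)
for n in range(3):
    print(f"n = {n}: E = {mode_energy(n, omega):.3e} J")
# The equal spacing E_{n+1} - E_n = hbar*omega is the energy of one photon.
```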

So far we have only described stationary states of the EM field and
given them an interpretation in terms of photons. Obviously this does
not cover non-stationary states, which describe photon emission for
example. If we know all the stationary states and their energies, we
can in principle describe all time dependent states of the EM field as
well, but *only* of the EM field. To model interesting physical
situations, we have to introduce other matter or fields and
interactions with them. Unfortunately, for most interacting systems, we
can't write down a closed form solution for its state spectrum.
Calculations are usually done in perturbation theory. The crucial
assumption is that we can approximate the state at the beginning and
end of the experiment by stationary states of the non-interacting EM
field + matter system. The interaction can then be turned on and off in
between. By necessity, for the initial and final states to be well
approximated by stationary ones, the on/off switching of the
interaction must be slow, i.e. adiabatic. That is why we formally place
the initial state at time -oo and the final state at time +oo.

So, with this set up, how do we characterize processes in which photons
are emitted? It's rather simple really. Count the photons in the
initial state and count the photons in the final state. If the latter
number is larger than the former, then we say that some photons were
emitted. Note however, that in the only answer we can give to how long
it to took for the emission to take place is infinity, which is the
elapsed time between the initial and final states.

Ok, now we know what theorists mean when they speak of photons. How
about experimentalists? If you look at the experimental evidence that
is pointed to when speaking of photons, it always comes back to
quantization of energy, for example the photoelectric effect and
Planck's radiation spectrum. But to measure energy we need to observe
the system for a sufficiently long time. In other words, the
experimental setup is such that the system's state can be approximated
by a stationary one. But that's exactly where the theoretical
description of photons comes in. Everyone is on the same page here.

Finally, we come to observation of emission. The experimental procedure
is to prepare the system in some known state (again most likely
approximated by some stationary state), wait for the emission event and
for the detector to register the emission (wait for the detector to
also settle close to a stationary state). This matches the theoretical
description as well. But then we face the same uncertainty about the
time needed for emission.

However, now we have at least some bound on the needed time, given by the
amount of time needed to set up the experiment and wait for the results.
With better technology, the time needed to perform the experiment can
be shortened. Can this be done until the least upper bound on the
emission duration is reached? Here, quantum mechanics says that you can
get close, but not quite. If you don't allow enough time for the system
to come close to a stationary state, the theoretical approximation and
the photon description break down. You will still measure photons
absorbed or emitted. But the results of the experiment will become more
and more erratic. In other words, the uncertainty in the number of
photons absorbed or emitted grows the less time you devote to setup of
the experiment and to observation. This is nothing but the energy time
uncertainty relation.

So, at the end of this perhaps overly verbose explanation, the simple
answer is given by the Heisenberg uncertainty principle. If the energy
of the photon is E, the time needed to emit a photon of this energy is
greater than hbar/E. The longer you wait, the more sure you are how
many quanta were emitted. The less you wait, the less certainty there
is about the number of photons emitted, which could also be zero.
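
Plugging numbers into the hbar/E bound (the two example energies are
illustrative; 14.4 keV is the value usually quoted for the Fe-57 Mossbauer
line):

```python
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
EV = 1.602176634e-19    # J per eV

def min_emission_time(energy_ev):
    """Lower bound hbar/E on the time needed to emit a photon of energy E,
    per the uncertainty argument above."""
    return HBAR / (energy_ev * EV)

for label, e_ev in [("optical photon, ~2 eV", 2.0),
                    ("Fe-57 Mossbauer gamma, 14.4 keV", 14.4e3)]:
    print(f"{label}: t > {min_emission_time(e_ev):.1e} s")
```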

Hope this helps.

Igor

Eugene Stefanovich

Jul 29, 2005, 3:03:48 AM

DM Roberts wrote:

> Just a question - what's the Zeno effect?

My quick web search produced the following:

"The so called "Quantum Zeno Effect" was first discussed in detail by
E. C. George Sudarshan and Baidyanath Misra in 1977, and first
observed experimentally by Fischer et al. in 2001. The idea is often
paraphrased as "a watched pot never boils". Fischer el al. observed the
escaping time of trapped sodium atoms, and found that repeated
measurements can suppress or enhance the decay of the system, depending
on the frequency of measurements. When the observation frequency is
smaller than the characteristic relaxation frequency of the trapped
atoms, the decay is suppressed, we have quantum Zeno effect. When the
observation frequency is larger, the decay is enhanced, thus we have
quantum anti-Zeno effect."

I don't know much about this stuff, so I prefer to discuss quantum effects
that can be observed without involving repetitive measurements,
i.e., system prepared -> system measured -> system discarded.

Eugene.


Eugene Stefanovich

Jul 29, 2005, 3:42:42 AM
Oz wrote:

>>In QM we describe the photon emission as a decay process
>>A -> B + C

> I am not happy with the assumption that there are two states, excited
> and emitted and nothing in between.
>
> We can have a superposition of 'emitted' and 'excited' which seems to me
> to be as good a description of 'partly emitted' as one could wish to
> hope for.

In QM the two states A and B+C are described by orthogonal subspaces
in the Hilbert space. Of course, the state vector of the system may
wander somewhere in between these subspaces: it may have non-zero
projections on both A and B+C. However, our measuring devices cannot
reach into that territory. All we can measure is either A or B+C.
Not every projection in the Hilbert space corresponds to a measurement.

If you ever saw tracks of particles in bubble chambers, you probably
noticed that at each point in time there is a well-defined collection
of particles. E.g., at some points you definitely see a muon, at later
points you definitely see an electron (plus two neutrinos). You never
see a "mixture" or "linear combination" of these two states.

>
> Now how about the following scenario.
> Here we measure 100,000 changes of A -> (B+C) ensuring A is always in
> the same state before we start. This apparatus can force the system to
> be in either A or (B+C) in 1/10ns. We get:
>
> %A %(B+C)
> 2.0ns 100 0
> 2.1 99 1
> 2.2 97 3
> 2.3 85 15
> 2.4 60 40
> 2.5 50 50 (OK its convenient...)
> 2.6 40 60
> 2.7 15 85
> 2.8 3 97
> 2.9 1 99
> 3.0 0 100
>
> Now you can decide what you like, but I would say that this system takes
> about 0.6 ns to emit a photon to first order. Mind you IMHO it would be
> a slightly odd system that behaved like the above.

This is a completely hypothetical system. Real unstable systems do not
behave in this way. They have exponential decay laws. My point was to
show that the decay constant of the exponential has nothing to do with
"the time in which the system decays" or "the time it takes to
emit one photon". This time is always very short,
but the decay rate could be in millions of years in some radioactive
nuclei.


>>I think that your question boils down to the following:
>>"how fast is the collapse of the wave function?" My best guess is that
>>it happens instantaneously.
>
>
> My guess is that it happens typically rather slowly and isn't a collapse
> at all but a gradual change via complex superposition of states.

If it were so, then we would be able to describe this gradual collapse
by some sort of evolution equation (like Schroedinger equation). I've
never heard of such a thing. The Schroedinger equation describes the
decay law, i.e., the gradual change of probabilities in time, but it has
nothing to do with the random (and instantaneous)
quantum jump (or collapse) between two states.

So, even if quantum collapse takes a measurable time, quantum mechanics
is silent about that. From the point of view of QM, this collapse is
instantaneous. Maybe there is a theory better than QM? I doubt it.

Eugene.

Kit Adams

Jul 29, 2005, 10:24:02 PM

"Igor Khavkine" <igo...@gmail.com> wrote in message
news:1122574188.2...@g14g2000cwa.googlegroups.com...

snip...

> So, at the end of this perhaps overly verbose explanation, the simple
> answer is given by the Heisenberg uncertainty principle. If the energy
> of the photon is E, the time needed to emit a photon of this energy is
> greater than hbar/E. The longer you wait, the more sure you are how
> many quanta were emitted. The less you wait, the less certainty there
> is about the number of photons emitted, which could also be zero.

I like your point about photon number only being a meaningful description
for the steady state of the field - after all the creation and annihilation
operators are the Fourier coefficients of monochromatic field modes.

However I would say that the time needed to create a field excitation (emit
a photon) is proportional to the inverse of the uncertainty in the energy
(i.e. ~ hbar/delta E) of the excitation rather than the inverse of the absolute
energy. This follows directly from Fourier analysis and ties in nicely with
the lifetime of excited states of atoms and nuclei being the inverse of the
linewidth i.e. the idea that the excited state radiates throughout its
lifetime.

For the Mossbauer effect using iron-57 that gives an emission (and
absorption) time of 10^-7 s, giving plenty of time for the recoil to
propagate through the 200000 nuclei required to allow resonance absorption
(see http://hyperphysics.phy-astr.gsu.edu/hbase/nuclear/mossfe.html#c1
)
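
Kit's hbar/delta E estimate can be checked with the usually quoted Fe-57
natural linewidth of about 4.7e-9 eV (treat that input value as an
assumption of this sketch; it reproduces the 10^-7 s figure above):

```python
HBAR_EV_S = 6.582119569e-16  # reduced Planck constant in eV*s

def emission_time(delta_e_ev):
    """Kit's estimate: emission time ~ hbar / delta_E, i.e. the inverse
    linewidth, matching the lifetime of the excited state."""
    return HBAR_EV_S / delta_e_ev

# Natural width of the 14.4 keV Fe-57 Mossbauer line, ~4.7e-9 eV
# (commonly quoted value; an input assumption here, not a derivation).
print(f"Fe-57 emission time ~ {emission_time(4.7e-9):.1e} s")
```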

Regards,
Kit

Eugene Stefanovich

Jul 29, 2005, 10:24:00 PM

David Park wrote:

> So I would suppose that in the same vein it is meaningless to talk about the
> 'process' of emitting a photon - as if it were some drawn out detailed
> process that could be followed in a step by step continuous manner. All you
> can calculate is the probability that a system in one energy state is later
> a system in a lower energy state plus a photon.
>
> So it is not a question that QM gives an answer to, and probably not even a
> proper question.

I tend to agree with you. At least the time of photon emission cannot
be understood by following the time evolution of the wave function of
the emitting system (e.g., atom).

Eugene.

Oz

Jul 29, 2005, 10:23:57 PM
Eugene Stefanovich <eug...@synopsys.com> writes

>Oz wrote:
>
>>>In QM we describe the photon emission as a decay process
>>>A -> B + C
>
>
>> I am not happy with the assumption that there are two states, excited
>> and emitted and nothing in between.
>>
>> We can have a superposition of 'emitted' and 'excited' which seems to me
>> to be as good a description of 'partly emitted' as one could wish to
>> hope for.
>
>In QM the two states A and B+C are described by orthogonal subspaces
>in the Hilbert space. Of course, the state vector of the system may
>wander somewhere in between these subspaces: it may have non-zero
>projections on both A and B+C.

Precisely.

>However, our measuring devices cannot
>reach into that territory.

Exactly.
It's a very limited quantised detector that cannot detect superpositions.

>All we can measure is either A or B+C.

Exactly.

>If you ever saw tracks of particles in bubble chambers, you probably
>noticed that at each point in time there is a well-defined collection
>of particles. E.g., at some points you definitely see a muon, at later
>points you definitely see an electron (plus two neutrinos). You never
>see a "mixture" or "linear combination" of these two states.

Of course not. The energy of the interaction is such that the chamber
could not resolve a superposition even if one were present. By definition
the chamber can only detect either a muon or an electron (see above).

>> Now how about the following scenario.
>> Here we measure 100,000 changes of A -> (B+C) ensuring A is always in
>> the same state before we start. This apparatus can force the system to
>> be in either A or (B+C) in 1/10ns. We get:
>>
>> %A %(B+C)
>> 2.0ns 100 0
>> 2.1 99 1
>> 2.2 97 3
>> 2.3 85 15
>> 2.4 60 40
>> 2.5 50 50 (OK its convenient...)
>> 2.6 40 60
>> 2.7 15 85
>> 2.8 3 97
>> 2.9 1 99
>> 3.0 0 100
>>
>> Now you can decide what you like, but I would say that this system takes
>> about 0.6 ns to emit a photon to first order. Mind you IMHO it would be
>> a slightly odd system that behaved like the above.
>
>This is a completely hypothetical system.

Yes.

>Real unstable systems do not
>behave in this way. They have exponential decay laws.

Simple ones, yes.

More complex (as in requiring many steps, in effect) ones, such as the
decay of an unstable nucleus, might well have this sort of shape: a very
long decay, measured in years, followed by the ejection of a gamma in
a very short time. These are characterised by (typically) being
irreversible, that is you cannot reverse the process by hitting the
stable atom with the right gamma.

You can, of course, readily do this for simpler systems because they are
typically reversible.

>My point was to
>show that the decay rate of the exponent has nothing to do with
>"the time in which the system decays" or "the time it takes to
>emit one photon". This time is always very short,

Well, it's not for most atomic (as against nuclear) transitions.
Ted has quoted 'forbidden' transitions measured in megayears.

Kinsler has told me that exact (or close to it) time-evolution of an
excited H atom has been done and is modelled by a superposition of
states with the two ends (ie excited H to e- + h+ + v) oscillating at a
particular frequency. To me this provides a theoretical expression of
'time of emission of a photon'.

>but the decay rate could be in millions of years in some radioactive
>nuclei.

That's actually the decay of a complex nuclear reaction.
The emission of the gamma is merely the final step of the decay of some
metastable internal state.

>>>I think that your question boils down to the following:
>>>"how fast is the collapse of the wave function?" My best guess is that
>>>it happens instantaneously.
>>
>> My guess is that it happens typically rather slowly and isn't a collapse
>> at all but a gradual change via complex superposition of states.
>
>If it were so, then we would be able to describe this gradual collapse
>by some sort of evolution equation (like Schroedinger equation). I've
>never heard of such a thing.

Kinsler assures me it has been done for the isolated excited H atom.
I believe anything more complex is beyond the reach of QM algorithms.
Of course reasonable models can probably be had that are somewhat less
exact.

>The Schroedinger equation describes the
>decay law, i.e., the gradual change of probabilities in time, but it has
>nothing to do with the random (and instantaneous)
>quantum jump (or collapse) between two states.

We may have to agree to differ here.
IMHO it is the measuring device which enforces this apparent effect.

>So, even if quantum collapse takes a measurable time, quantum mechanics
>is silent about that.

In essence yes. This is because detectors are hugely too complex for the
transition to be modelled. The solution is to concentrate on those
things we can say about in the interaction. This is absolutely normal
and usually gives excellent results. After all no scientist attempts to
analyse the precise atomic/crystal/mechanical properties when
considering the clash of two billiard balls.

>From the point of view of QM, this collapse is
>instantaneous.

I'm not sure that's actually true.
Schroedinger does not need to assume such a thing and nor do many
optical analyses.

>Maybe there is a theory better than QM? I doubt it.

I'm absolutely certain there is a better theory.
I am astonished that you are not hoping to find such a thing.

Eugene Stefanovich

Jul 30, 2005, 3:46:43 AM
Oz wrote:

> Well, its not for most atomic (as against nuclear) transitions.
> Ted has quoted 'forbidden' transitions measured in megayears.

There are two different times associated with decays. One is
the lifetime of the metastable system. Another is the time during
which each particular system disintegrates. Let's take for example
a nucleus which emits an alpha-particle, and whose lifetime is
1 million years.

If you prepare such a nucleus, it will sit there for hundreds
of thousands or millions of years doing absolutely nothing.
Then one day it will emit the long-awaited alpha-particle.
So, the actual time of emission (i.e., the time during which the
alpha-particle leaves the nucleus) is very short. It could be
even instantaneous. However the time of wait (the lifetime) is very
long.

>>Maybe there is a theory better than QM? I doubt it.
>
>
> I'm absolutely certain there is a better theory.
> I am astonished that you are not hoping to find such a thing.

This is a matter of personal belief. I think that
quantum mechanics is safe. I am hoping to find interesting new
physics in other places. In the web of physics, there are links much
weaker than QM.

Eugene.

Eugene Stefanovich

Jul 30, 2005, 3:46:21 AM

I did a little bit of thinking on this subject (which is always a good
thing to do), and now, I believe, I am able to formulate a coherent
solution.

First, I still think that questions like "how long does it take to emit
one photon" or "how much time the collapse of the wavefunction takes"
are beyond quantum mechanics. They cannot be answered from solutions of
the Schroedinger equation.

However, in the spirit of quantum mechanics, we can formulate a
description of the emission-recoil process averaged over ensemble,
i.e., provided by the time-dependent wave function of the system.

The initial state of the system is A="crystal + radioactive nucleus in the
excited state at rest". The final state of the system is B + C
where B="crystal + radioactive nucleus in the ground state moving at a
constant speed" and C="photon". Since we are considering the Mossbauer
effect, the state B does not involve any phonons.
The state of the system develops in time from A at t=0 to something
very close to B+C after T="the lifetime of the radioactive nucleus".
Accordingly, the expectation value of the velocity of the
crystal changes from zero at t=0 to a non-zero (but very small) value
after time
T. A typical lifetime of Mossbauer nuclei is about 10^{-7} s. So, we can
say that the entire process takes about 10^{-7} s. During this time
the photon is emitted with high probability, and the crystal picks up
the recoil speed.

If we accept this description, then the recoil is certainly superluminal
if the size of the crystal is greater than the distance traveled by light
during 10^{-7} s. This distance is about 30 m. I haven't seen such big
crystals, so the whole idea looks worthless.

I think one can find radioactive nuclei whose decay time is much
shorter than 10^{-7} s (these nuclei are not useful for Mossbauer
spectroscopy, but they could be useful for the superluminal argument
I am developing; for example, a lifetime of 10^{-11} s corresponds to
the reasonable distance of only 3 mm). Although I believe that in such
cases the recoil momentum is transferred superluminally, its value
divided among all the atoms in the crystal is so small that there is
no chance it can be observed.
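The distances quoted above follow from d = c*tau; a quick sanity check in code (c rounded to 3e8 m/s):

```python
# Distance light travels during one lifetime: d = c * tau.
c = 3.0e8  # speed of light in m/s (rounded)

for tau in (1e-7, 1e-11):  # the two lifetimes discussed, in seconds
    d = c * tau
    print(f"tau = {tau:.0e} s  ->  d = {d:.0e} m")
# tau = 1e-07 s gives 30 m; tau = 1e-11 s gives 3 mm, as stated above.
```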

Eugene.

RP

Jul 30, 2005, 8:07:15 AM

Paul Danaher wrote:

1/freq

Richard Perry

Oz

Jul 30, 2005, 8:07:17 AM
Eugene Stefanovich <eug...@synopsys.com> writes

>Oz wrote:
>
>> Well, its not for most atomic (as against nuclear) transitions.
>> Ted has quoted 'forbidden' transitions measured in megayears.
>
>There are two different times associated with decays. One is
>the lifetime of the metastable system. Another is the time during
>which each particular system disintegrates. Let's take for example
>a nucleus which emits an alpha-particle, and whose lifetime is
>1 million years.
>
>If you prepare such a nucleus, it will sit there for hundreds
>of thousands or millions of years doing absolutely nothing.

I would suggest that the waveforms within the nucleus are doing plenty.
There is a huge amount of interaction.

>Then one day it will emit the long-awaited alpha-particle.
>So, the actual time of emission (i.e., the time during which the
>alpha-particle leaves the nucleus) is very short. It could even be
>instantaneous. However, the waiting time (the lifetime) is very
>long.

Yes. However your example is very bad because you cannot easily disturb
the emitting body (the nucleus) to investigate the details. All you can
do is investigate the linewidth of the emitted gamma.

Better is an electron-atomic transition. This is rather easy to
manipulate; you can physically hit it with another atom, for example. You
can even take 'snapshots' using much higher energy particles and deduce
even more (eventually).

So, for the forbidden transitions, you can completely stop them by
making the intervals between interactions with the atom shorter than the
inverse linewidth. That's why they are only seen in extreme intergalactic
environments. Now, if the emission were 'instant' this would not happen.
Much better is to consider the transition as an analogue of the H atom
emission, that is, a complex evolving wavefunction that takes 1 MY (or
whatever) to complete a cycle.

>>>Maybe there is a theory better than QM? I doubt it.
>>
>>
>> I'm absolutely certain there is a better theory.
>> I am astonished that you are not hoping to find such a thing.
>
>This is a matter of personal belief. I think that
>quantum mechanics is safe.

Maybe. I think the way it is formulated and modelled is not exactly
ideal. Whilst many things are answerable to remarkable precision, many
other things (like exact modelling of chemical and even most excited
atoms) are pretty handwavy.

>I am hoping to find interesting new
>physics in other places.

Helps if you actually have some experimental evidence.

>In the web of physics, there are links much
>weaker than QM.

In the nature of things, cosmology is our best 'laboratory'. Explaining
the Pioneer anomaly, galactic age behaviour, galactic rotation curves
and the detailed expansion of the universe seems to me more practical,
more useful, and backed by some experimental evidence. The other end is
to explain the actual masses and charges of the elementary particles.

IMHO none of these have been satisfactorily answered.

Oz

Jul 30, 2005, 8:07:16 AM
Kit Adams <kit....@anti.spam.net> writes

>However I would say that the time needed to create a field excitation (emit
>a photon) is proportional to the inverse of the uncertainty in the energy
>(i.e. ~ hbar/delta E) of the excitation rather than the inverse of the absolute
>energy. This follows directly from Fourier analysis and ties in nicely with
>the lifetime of excited states of atoms and nuclei being the inverse of the
>linewidth i.e. the idea that the excited state radiates throughout its
>lifetime.
>
>For the Mossbauer effect using iron-57 that gives an emission (and
>absorption) time of 10^-7 s, giving plenty of time for the recoil to
>propagate through the 200000 nuclei required to allow resonance absorption
>(see http://hyperphysics.phy-astr.gsu.edu/hbase/nuclear/mossfe.html#c1
>)

I agree. However this sets up three possible physically different
systems.

1) The emitted gamma is a superposition of many pure (sinewave) states
which is unknown until measured (presumably at some distance). However
in this scenario the gamma is entangled with the emitting atom(s) and
the detector 'until detection' and the collapse is equivalent to the FTL
destruction of the entanglement. This selects out the precise energy and
entanglement which has now spread over some 200k atoms. Here the
'collapse' is FTL (entanglement: instantaneous) but the momentum is
still transferred to 200k atoms.

2) The emitted gamma is just an EM pulse with a well-defined waveform
about 10^-7 s long, recoiling the emitting atom throughout that period
and allowing it to interact with some 200k atoms. This waveform contains
one photon's worth of energy (one atomic transition) and is absorbed by a
detector tuned to absorb one photon's worth of the appropriate (range of)
frequencies.

3) Whatever it was that the original poser of the question thought which
seems implausible.

The problem I have with (1) is that sinewaves exist from -oo to +oo
whilst the gamma most certainly does not. There is thus a problem when
the gamma goes to the Andromeda galaxy, being redshifted as it goes.

I have no problem with (2) since waveform, energy and momentum are
defined at emission.
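Kit's hbar/delta E estimate quoted above can be checked numerically for the 14.4 keV line of iron-57; the natural linewidth used below (~4.7e-9 eV) is an assumed, commonly quoted value:

```python
# Emission time ~ hbar / (energy uncertainty), i.e. the inverse linewidth.
hbar = 6.582e-16     # reduced Planck constant in eV*s
linewidth = 4.7e-9   # eV; assumed natural width of the 14.4 keV Fe-57 line

tau = hbar / linewidth
print(f"emission time ~ {tau:.1e} s")  # ~1.4e-7 s, matching the 10^-7 s figure
```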

Oz

Jul 30, 2005, 8:07:18 AM
Eugene Stefanovich <eug...@synopsys.com> writes

>
>First, I still think that the questions like "how long does it take
>to emit
>one photon" or "how much time the collapse of the wavefunction takes"
>are beyond quantum mechanics. They cannot be answered from solutions of
>the Schroedinger equation.

They can for the hydrogen atom.

Anything else is too complex.

>I think, one can find radioactive nuclei whose decay time is much
>shorter than 10^{-7}s (these nuclei are not useful for the Mossbauer
>spectroscopy,
>but they could be useful for the superluminal argument I am developing;
>for example, the lifetime of 10^{-11}s corresponds to the reasonable
>distance of only 3mm). Although, I believe that in such cases the recoil
>momentum
>transfers superluminally,

You may even believe in a god, or that the moon is made of green cheese.

Personally I need experimental evidence.
What evidence have you for this belief?

>its value divided by all atoms in the crystal
>is so small that there is no chance it can be observed.

I would be quite sure that equivalent things have been measured.
More massive decays (fission) can emit particles with enough energy to
disrupt adjacent atoms. I would guess that if you examined metallurgical
effects of radiation (that is, atoms in a crystal made radioactive by,
say, neutron capture, then decaying and damaging the crystal), you would
find these effects are rather well known and thus imaged. The work is
probably decades old.

IMHO Emission of a gamma is in principle no different from emission of
an alpha or even a carbon nucleus.

Clearly these emissions are NOT transferring their momentum to the
entire crystal lattice instantaneously, they are bashing the next door
atom out of the way.

I think your main problem (a rather common one amongst physicists) is
that you keep thinking of particles as infinitely small points when they
are not. Whilst it's true that very energetic particles, typically
discussed in nuclear-type physics, are good approximations to points,
this is just a matter of scale.

A more general view sees them as waves. Electrons can have wavelengths
measured in mm, and photons in millions of meters, and pretty well the
same physics will apply to both.

IMHO the complex superposition of many states for a single particle (eg
a photon) to replicate the behaviour and even the 'size' really amounts
to modelling a simple wave of definable waveform following the laws of
the universe. Exactly why electrons have quantised charge and mass is
unanswered, but IMHO they are soliton-like waves surfing the spacetimes
of the universe. At least this seems to be consistent with physics as I
know it, even if it's utterly unquantifiable and cannot be considered any
sort of theory at all.

John F

Jul 31, 2005, 3:14:09 AM
Kit Adams <kit....@anti.spam.net> wrote:
: "Igor Khavkine" <igo...@gmail.com> wrote:
: snip...

Feshbach and Weisskopf make a very similar point in the Reference Frame
column "Ask A Foolish Question..." of the October 1988 issue of Physics
Today. Paraphrasing from the second paragraph starting on page 11:
"It is often said that an atom jumps abruptly from an excited
state to a lower state [typically emitting a photon in the process],
but the time of the jump is distributed according to a probability law.
This is a response to the inappropriate question: When did the electron
change its state? The energy of the excited state is reasonably well
defined, and therefore the time of the quantum jump is correspondingly
undefined within the lifetime of the state.
"The quantum state can be given by the state function
|u(t)> = a(t)|a> + b(t)|b>
where |a> is the state function of the upper state and |b> of the
lower state. |u> changes continuously; there is no jump. a(t) is
a decreasing and b(t) an increasing function of time. The probability
of finding the atom in state |a> or |b> is |a(t)|^2 and |b(t)|^2.
"Asking for the exact time of the transition [hence for the time
required to emit the photon] is an inappropriate question. Quantum
mechanics distinguishes questions that are appropriate for a given
experimental situation from those that are not. The former have
an exact answer; the latter have a probability distribution.
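Assuming the usual Weisskopf-Wigner exponential form for the amplitudes (an assumption, not something stated in the column being paraphrased), the continuous change of |a(t)|^2 and |b(t)|^2 can be sketched as:

```python
import math

# Weisskopf-Wigner form (assumed): |a(t)|^2 = exp(-t/tau), |b(t)|^2 = 1 - exp(-t/tau).
tau = 1.0  # lifetime of the upper state, arbitrary units

for t in (0.0, 0.5, 1.0, 2.0, 5.0):
    p_upper = math.exp(-t / tau)   # probability of still finding the atom in |a>
    p_lower = 1.0 - p_upper        # probability of finding it in |b> (photon emitted)
    print(f"t = {t:4.1f}   P(a) = {p_upper:.3f}   P(b) = {p_lower:.3f}")
# P(a) decreases smoothly from 1 toward 0: a continuous change, no jump.
```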
--
John Forkosh ( mailto: j...@f.com where j=john and f=forkosh )

Paul Danaher

Jul 31, 2005, 3:14:51 AM

Is it meaningful to talk about a change in the state of the atom
(transition from one energy level to a lower energy level) and hence the
emission of a photon as a physical process (rather than just an event)?
When I asked the question, I had an underlying idea that the time taken
to emit one photon would be the same as the time required for the
transition, with a vague idea that some limits could be set by looking
at the behaviour of lasers, where the time to emit a photon = half the
interval between the arrival of the last excitation and the first laser
emission. In such a scenario, your answer would seem to imply slower
low-energy transitions.

Eugene Stefanovich

Jul 31, 2005, 3:16:43 AM
"Oz" <O...@farmeroz.port995.com> wrote in message
news:IhidAHGo...@farmeroz.port995.com...

> Eugene Stefanovich <eug...@synopsys.com> writes
> >
> >First, I still think that the questions like "how long does it take
> >to emit
> >one photon" or "how much time the collapse of the wavefunction takes"
> >are beyond quantum mechanics. They cannot be answered from solutions of
> >the Schroedinger equation.
>
> They can for the hydrogen atom.

You can calculate the lifetime of an excited state of H, but this is not
the same as the time of the photon emission. I explained the difference
in my previous post, using a (somewhat extreme) example of a radioactive
nucleus with a lifetime of 1M years. Fundamentally, there is not much
difference between the QM description of nuclear decay and photon
emission. QM can only predict the lifetime. It says nothing about
"how long does it take to emit one photon".

> >its value divided by all atoms in the crystal
> >is so small that there is no chance it can be observed.
>
> I would be quite sure that equivalent things have been measured.
> More massive decays (fission) can emit particles with enough energy to
> disrupt adjacent atoms. I would guess that if you examined metallurgical
> effects of radiation (that is atoms in a crystal made radioactive by,
> say, neutron capture then decaying and damaging the crystal) these are
> rather well known and thus imaged. The work is probably decades old.
>
> IMHO Emission of a gamma is in principle no different from emission of
> an alpha or even a carbon nucleus.
>
> Clearly these emissions are NOT transferring their momentum to the
> entire crystal lattice instantaneously, they are bashing the next door
> atom out of the way.


You could be right that generally the decay of radioactive nuclei
in a material produces local damage that can be observed. However, I
was talking about a specific situation known as the Mossbauer effect.
To have this effect, you need a specific gamma-radioactive nucleus.
Just a handful of isotopes satisfy all the necessary conditions. You
also need a very low temperature to ensure that a significant portion
of decays is coupled to the "zero-phonon" mode of the crystal. In this
case, the crystal recoils as a whole; no phonons are created, i.e.,
there are no local vibrations.

> I think your main problem (a rather common one amongst physicists) is
> that you keep thinking of particles as infinitely small points when they
> are not. Whilst its true that very energetic particles, typically
> discussed in nuclear-type physics, are good approximations to points
> this is just a matter of scale.
>
> A more general view sees them as waves. Electrons can have wavelengths
> measured in mm, and photons in millions of meters, and pretty well the
> same physics will apply to both.
>

> Exactly why electrons have quantised charge and mass is
> unanswered, but IMHO they are soliton-like waves surfing the spacetimes
> of the universe.

I don't know why you find the wave picture so attractive. If you just
look around yourself, you'll not find any waves (unless you are on the
beach). The objects around you are made of particles (molecules, atoms,
protons, electrons, photons, etc.). "Soliton-like waves surfing the
spacetimes of the universe" may sound very poetic, but it does not have
any experimental or theoretical support. Even quantum field theory can
be reformulated in a way which does not use fields as primary objects.
The particle-based formulation of QED is presented in my book.

The wave properties of particles (photons or electrons) arise as a
consequence of their quantum nature.
This has been explained very well by quantum mechanics.

Eugene.

Eugene Stefanovich

Jul 31, 2005, 3:16:18 AM
"Oz" <O...@farmeroz.port995.com> wrote in message
news:CBGfI6F6...@farmeroz.port995.com...

> >I think that
> >quantum mechanics is safe.
>
> Maybe. I think the way it is formulated and modelled is not exactly
> ideal. Whilst many things are answerable to remarkable precision many
> other things (like exact modelling of chemical and even most excited
> atoms) are pretty handwavy.

I haven't seen any problems in applying QM to atomic or chemical
phenomena. There could be some handwaving because these are
multiparticle systems and the exact solution of the Schroedinger
equation is not possible even with modern-day computers. So one needs
to introduce simplifications and approximations. Most of quantum
chemistry is about how to simplify the problem without losing accuracy.
Apart from these technical problems, I think nobody expects any
surprises in the application of QM to chemistry.

Eugene.

Oz

Jul 31, 2005, 3:49:59 PM
Eugene Stefanovich <eugene_st...@usa.net> writes

>You can calculate the lifetime of an excited state of H, but this is not
>the same as the time of the photon emission.

Note that it's an oscillation. The time of photon emission will be
approximately the inverse linewidth. That's how many wavelengths long
the EM waveform is, pretty well by definition.

Linewidths can be very narrow or very broad, even for individual-atom
emission. The wave trains are typically thousands of wavelengths long.
I did the calc and posted it here a year or so ago for the yellow
sodium line. IIRC it was hundreds of thousands of wavelengths. That's
vastly longer than 'instantaneous'.
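That sodium calculation can be redone in a couple of lines; the 16 ns lifetime of the sodium 3p state used below is an assumed textbook value:

```python
# Number of wavelengths in the emitted wave train: N = c * tau / lambda.
c = 3.0e8            # m/s
tau = 16e-9          # s; assumed textbook lifetime of the sodium 3p state
wavelength = 589e-9  # m; sodium D line

n_wavelengths = c * tau / wavelength
print(f"wave train ~ {n_wavelengths:.1e} wavelengths long")
# Comes out at ~8e6 wavelengths -- a very long way from 'instantaneous'.
```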

>I explained the difference
>in my previous post. I used (somewhat extreme) example of the radioactive
>nucleus with the lifetime of 1M years. Fundamentally, there is no much
>difference
>between QM description of the nuclear decay and photon emission.
>QM can only predict the lifetime. It says nothing about "how long does it
>take
>to emit one photon"

That's because the two are not strongly correlated, as I explained.

>> >its value divided by all atoms in the crystal
>> >is so small that there is no chance it can be observed.
>>
>> I would be quite sure that equivalent things have been measured.
>> More massive decays (fission) can emit particles with enough energy to
>> disrupt adjacent atoms. I would guess that if you examined metallurgical
>> effects of radiation (that is atoms in a crystal made radioactive by,
>> say, neutron capture then decaying and damaging the crystal) these are
>> rather well known and thus imaged. The work is probably decades old.
>>
>> IMHO Emission of a gamma is in principle no different from emission of
>> an alpha or even a carbon nucleus.
>>
>> Clearly these emissions are NOT transferring their momentum to the
>> entire crystal lattice instantaneously, they are bashing the next door
>> atom out of the way.
>
>
>You could be right that generally the decomposition of radioactive nuclei
>in the material produces a local damage that can be observed.

Precisely.

>However, I
>was talking about a specific situation known as Mossbauer effect. To have
>this effect, you need to have a specific gamma-radioactive nucleus.

There is no difference in concept between emitting a gamma, alpha, beta
or chunk of nucleus. They are chunks of energy-momentum.

>Just a handful of isotopes satisfy all necessary conditions. You also need
>to have a
>very low temperature to ensure that a significant portion of decays is
>coupled to the "zero-phonon" mode of the crystal. In this case, the crystal
>recoils
>as a whole, no phonons are created, i.e., no local vibrations

That may or may not be so. However, I do not believe gammas and recoil
are connected to the phonons in one single step. If you are in practice
saying that the crystal structure is in effect an entangled boson
condensate, then I would be astonished if a single nucleus (as against
atoms) was that perfectly entangled with the whole crystal structure.

>I don't know why you find the wave picture so attractive. If you just
>look around yourself, you'll not find any waves (unless you are on the
>beach). The objects around you are made of particles (molecules, atoms,
>protons, electrons, photons, etc.).

Eh? Of course they aren't. Mostly I see atoms made of waves, in
particular electrons as waves encircling nuclei which are also waves.

>"Soliton-like waves surfing the
>spacetimes of the universe." may sound very poetic, but does not have
>any experimental or theoretical support.

Absolutely so. Very much like your concepts.

>Even quantum field theory can
>be reformulated in the way which does not use fields as primary objects.

Yes. A few problems with infinities though.

>The wave properties of particles (photons or electrons) arise as a
>consequence of their quantum nature.
>This has been explained very well by quantum mechanics.

I consider it to be the other way round.

Oz

Jul 31, 2005, 3:49:58 PM
Eugene Stefanovich <eugene_st...@usa.net> writes

>"Oz" <O...@farmeroz.port995.com> wrote in message
>news:CBGfI6F6...@farmeroz.port995.com...
>
>> >I think that
>> >quantum mechanics is safe.
>>
>> Maybe. I think the way it is formulated and modelled is not exactly
>> ideal. Whilst many things are answerable to remarkable precision many
>> other things (like exact modelling of chemical and even most excited
>> atoms) are pretty handwavy.
>
>I haven't seen any problems in applying QM to atomic or chemical
>phenomena.

One can apply principles and make excellent predictions and even new
devices. This is very far, though, from saying we have exact, or in
many cases even good, figures direct from QM. Heck, even the SM is
packed full of empirical (i.e. measured) constants.

>There could be some handwavings because these are
>multiparticle systems and the exact solution of the Schroedinger
>equation is not possible even with modern day computers.

Quite. I don't think they have even managed the helium atom.
Which is my point.

>So, one needs
>to introduce simplifications and approximations. Most of quantum
>chemistry is about how to simplify the problem without losing accuracy.

Indeed. Of course knowing the answer helps here.

>Apart from these technical problems, I think, nobody expects any
>surprises in application of QM to chemistry.

See above.

Pierre Asselin

Aug 1, 2005, 12:15:03 AM
Oz <O...@farmeroz.port995.com> wrote:
> Eugene Stefanovich <eugene_st...@usa.net> writes

> >There could be some handwavings because these are
> >multiparticle systems and the exact solution of the Schroedinger
> >equation is not possible even with modern day computers.

> Quite. I don't think they have even managed the helium atom.
> Which is my point.

With only two electrons it is not too hard to come up with good,
fully correlated, variational wavefunctions. The ground state is
known very accurately. Probably some of the excited states, too.
Google finds this:
http://theor.jinr.ru/~korobov/papers/He_ground_24_digits_PRA02.pdf
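The simplest member of that variational family, a product of two hydrogenic orbitals with one screened-charge parameter, already gets within about 2% of the accurate ground-state energy. A sketch of that standard textbook calculation (energies in hartrees):

```python
# Variational estimate of the helium ground state with the trial function
# psi(r1, r2) = exp(-Z*r1) * exp(-Z*r2), Z being the variational parameter.
# Standard result: <H>(Z) = Z^2 - 2*Z*Z_nuc + (5/8)*Z in hartrees.

Z_nuc = 2.0  # nuclear charge of helium

def energy(Z):
    # Expectation value of the Hamiltonian in the trial state, hartrees.
    return Z**2 - 2.0 * Z * Z_nuc + (5.0 / 8.0) * Z

Z_opt = Z_nuc - 5.0 / 16.0  # dE/dZ = 0 gives the optimal screened charge
print(f"Z_opt = {Z_opt}")                  # 1.6875
print(f"E = {energy(Z_opt):.4f} hartree")  # -2.8477, vs. the accurate -2.9037
```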


--
pa at panix dot com

Eugene Stefanovich

Aug 1, 2005, 12:15:03 AM

"Oz" <O...@farmeroz.port995.com> wrote in message
news:CpMfiQBX...@farmeroz.port995.com...

> I would be astonished if a single nucleus (as against
> atoms) was that perfectly entangled with the whole crystal structure.

That's exactly what happens in the Mossbauer effect. That's why this
effect is so cool.

> >Even quantum field theory can
> >be reformulated in the way which does not use fields as primary objects.
>
> Yes. A few problems with infinites though.

There are no ultraviolet infinities in the RQD formulation of quantum
electrodynamics.
Please read chapter 12 of the book physics/0504062 or the paper
E.V. Stefanovich, "Quantum field theory without infinities",
Ann. Phys. 292 (2001), 139.

Eugene.

Oz

Aug 1, 2005, 11:44:55 PM
Pierre Asselin <p...@see.signature.invalid> writes

That is the ground state.

I was talking about excited states with an emitted/absorbed photon.

p.ki...@imperial.ac.uk

Aug 3, 2005, 11:33:05 AM
David Park <dj...@earthlink.net> wrote:
> So I would suppose that in the same vein it is meaningless to talk about the
> 'process' of emitting a photon - as if it were some drawn out detailed
> process that could be followed in a step by step continuous manner. All you
> can calculate is the probability that a system in one energy state is later
> a system in a lower energy state plus a photon.

You can construct the description using quantum monte-carlo methods,
in which you get an "ensemble of detailed processes that could be
followed in a step by step continuous manner".

You still need to average over the ensemble to get meaningful
predictions; but nevertheless the individual trajectories may
still provide some insight.
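For pure spontaneous emission from a two-level atom, the quantum-jump flavour of such a Monte Carlo unravelling collapses to sampling exponential waiting times; a toy sketch (not a production method, and the rate gamma below is arbitrary):

```python
import math
import random

# Quantum-jump unravelling of spontaneous emission from a two-level atom:
# each trajectory stays excited until a random jump (the photon emission);
# averaging over trajectories recovers the exponential decay law exp(-gamma*t).

def jump_time(gamma, rng):
    # Inverse-transform sampling of an exponential waiting time.
    return -math.log(1.0 - rng.random()) / gamma

gamma = 1.0  # arbitrary decay rate
rng = random.Random(0)
times = [jump_time(gamma, rng) for _ in range(200_000)]

mean_t = sum(times) / len(times)
survival = sum(t > 1.0 for t in times) / len(times)
print(f"mean emission time ~ {mean_t:.3f}  (expect 1/gamma = 1.0)")
print(f"P(still excited at t=1) ~ {survival:.3f}  (expect {math.exp(-1.0):.3f})")
```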


--
---------------------------------+---------------------------------
Dr. Paul Kinsler
Blackett Laboratory (QOLS) (ph) +44-20-759-47520 (fax) 47714
Imperial College London, Dr.Paul...@physics.org
SW7 2BW, United Kingdom. http://www.qols.ph.ic.ac.uk/~kinsle/

p.ki...@imperial.ac.uk

Aug 3, 2005, 11:33:06 AM
Kit Adams <kit....@anti.spam.net> wrote:

> "Igor Khavkine" <igo...@gmail.com> wrote:
> snip...
> > So, at the end of this perhaps overly verbose explanation, the simple
> > answer is given by the Heisenberg uncertainty principle. If the energy
> > of the photon is E, the time needed to emit a photon of this energy is
> > greater than hbar/E. The longer you wait, the more sure you are how
> > many quanta were emitted. The less you wait, the less certainty there
> > is about the number of photons emitted, which could also be zero.

> I like your point about photon number only being a meaningful description
> for the steady state of the field
> - after all the creation and annihilation
> operators are the Fourier coefficients of monochromatic field modes.

Well, photon number is one sort of basis on which to describe the
field, and is indeed useful in cases where you want to do photon-
count-like things. However, it's not the only sort -- if I have a
state with 10^12 photons, for example, I'm going to get pretty tired
of all the counting, and even my computer will struggle to deal with
the 10^48 or so elements in my density matrix evolution.

The other obvious basis is that of coherent states, which have many
nice properties.

BUT IT DOESN'T MATTER WHICH BASIS YOU PICK (as long as it's complete),
I can still make a meaningful description. There is no a-priori
reason to privilege photon-number over other choices.

OK, so this thread is entitled "to emit one photon", so describing
the states in terms of number states is likely to be sensible. But
saying "photon number only being a meaningful description for the
steady state of the field" is not right.

Paul Danaher

Aug 3, 2005, 8:52:02 PM
Eugene Stefanovich wrote:
..

> I think that your question boils down to the following:
> "how fast is the collapse of the wave function?" My best guess is that
> it happens instantaneously.

In another thread, Jarek Korbicz wrote:
"Slightly related to that, there has been experiments by N. Gisin & co.
aimed at measuring the hypothetical speed of wavefunction collapse.
IIRC, they got some bounds like >10^7*c !"

I've tried googling this, but I haven't succeeded in identifying the
specific article. This is, however, at variance with another answer
(1/freq), which would come out at around 1/3 sec for an ELF photon ...

Eugene Stefanovich

Aug 3, 2005, 11:36:37 PM

I have a few references:

Scarani, V.; Tittel, W.; Zbinden, H.; Gisin, N., The speed of quantum
information and the preferred frame: analysis of experimental data,
quant-ph/0007008

Zbinden, H.; Brendel, J.; Gisin, N.; Tittel, W., Experimental test of
non-local quantum correlation in relativistic configurations,
quant-ph/0007009

Zbinden, H.; Brendel, J.; Tittel, W.; Gisin, N., Experimental test of
relativistic quantum state collapse with moving reference frames,
quant-ph/0002031

Try searching arxiv.org or scholar.google.com by author names, and
I'm sure you'll find more.

There is no mystery in the instantaneous "wavefunction collapse".
The wavefunction is just a probability density amplitude. Its collapse
is not associated with any physical process. If you lost your key and
don't know where it is, there is a (small but non-zero) chance that
your key is on alpha Centauri. As soon as you find your key in the
pocket, this probability instantaneously reduces to zero.
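The key example is just classical conditioning of a probability distribution; a sketch with made-up numbers:

```python
# 'Collapse' as a classical update of knowledge: conditioning a probability
# distribution on an observation. All numbers are made up for illustration.
prior = {"pocket": 0.9, "desk": 0.0999999, "alpha Centauri": 1e-7}

# Observation: the key turns up in the pocket.
posterior = {place: (1.0 if place == "pocket" else 0.0) for place in prior}

print(posterior["alpha Centauri"])  # 0.0 -- reduced 'instantaneously',
# yet nothing physical propagated to alpha Centauri.
```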

Eugene.

Paul Danaher

Aug 4, 2005, 4:27:00 PM
Eugene Stefanovich wrote:
> ... There is no mystery in the instantaneous "wavefunction collapse".

> Wavefunction is just a probability density
> amplitude. Its collapse is not associated with any physical process.
> If you lost your key and don't know where it is, there
> is (a small but non-zero) chance that your key is on alpha Centauri.
>As soon as you find your key in the pocket, this probability
> instantaneously reduces to zero.

(Thank you for the references and pointers.)

This seems to be a strong version of the Copenhagen interpretation - we
can't talk about a process, only about events (i.e. observations). Doesn't
this logically lead to Vecchi's position in the "No new Einstein" thread,
which seems to be at variance with your own position there?

Experiments like the micromaser experiments at the Max-Planck-Institut seem
to me to imply the possibility of getting a distribution for the interval
between excitation and emission, where half the minimum interval would be an
upper bound for "the time taken to emit one photon", no?

Eugene Stefanovich

Aug 4, 2005, 11:04:47 PM

Paul Danaher wrote:
> Eugene Stefanovich wrote:
>
>>... There is no mystery in the instantaneous "wavefunction collapse".
>>Wavefunction is just a probability density
>>amplitude. Its collapse is not associated with any physical process.
>>If you lost your key and don't know where it is, there
>>is (a small but non-zero) chance that your key is on alpha Centauri.
>>As soon as you find your key in the pocket, this probability
>>instantaneously reduces to zero.

>

> This seems to be a strong version of the Copenhagen interpretation - we
> can't talk about a process, only about events (i.e. observations).

Let me stress that my interpretation of QM is different from the
Copenhagen interpretation.

Copenhagen says that before the measurement
the system is "really" in the state which is a linear combination of
different possibilities. The state of one individual system is described
by the wave function. When the measurement is made, the wavefunction
"collapses". The observable didn't have a certain value before the
measurement. The definite value of the observable "emerges" as a result
of the measurement.

In my interpretation, the system did have a certain value of observable
before the measurement was done. We simply don't know what this value
is, and we have no means to predict this value. The wavefunction does
not describe the individual system. It simply describes our knowledge
(or lack of it) about the system. When the measurement is done, we
simply observe the value of the observable which was already there.
There is no "collapse" of probabilities. At least, there is no more
"collapse" than in my above example for the probability of finding the
key on Alpha Centauri.


> Doesn't
> this logically lead to Vecchi's position in the "No new Einstein" thread,
> which seems to be at variance with your own position there?

As far as I can understand, Vecchi's position is different from mine.

>
> Experiments like the micromaser experiments at the Max-Planck-Institut seem
> to me to imply the possibility of getting a distribution for the interval
> between excitation and emission, where half the minimum interval would be an
> upper bound for "the time taken to emit one photon", no?

I am not sure which experiment you are talking about.
Could you give some references?

Eugene.


nightlight

Aug 5, 2005, 6:44:59 AM
> There is no mystery in the instantaneous "wavefunction collapse".
> Wavefunction is just a probability density amplitude. Its collapse is not
> associated with any physical process. If you lost your key and don't
> know where it is, there is (a small but non-zero) chance that your key
> is on alpha Centauri. As soon as you find you key in the pocket, this
> probability instantaneously reduces to zero.

That is precisely _the_ problem. If Psi is just a construct "in your
head", then indeed it can "collapse" when you "know" the result. Yet,
this entity "in your head" somehow senses and changes based on all the
objects, phase shifters, generally all interactions outside of "your
head". One may ask: at what point, and how, does "your head" override the
other "heads" or other interactions? What are the formal criteria and
dynamics for this switch? You can't even define what this magical
"knowing" is that makes the Psi switch from one mode of evolution to
another, or when and how it happens.

Basically, what you're suggesting is a euphemistic acknowledgment that
the linear evolution formalism cannot (not even in principle) model
the most elemental experimental fact -- the single outcome of a single
measurement. Therefore, the QM Measurement "Theory" reaches outside of
the formalism to patch this gap by injecting the non-physical /
psychological / verbal gimmicks, such as "knowledge" or "consciousness"
or "splitting universe" or "decoherence" ...etc. As Einstein said, the
QM theory is plainly incomplete.

I posted here a few weeks ago some references and a brief intro to
Asim Barut's clear explanation of this incompleteness. Barut has
demonstrated that the standard multiparticle QM formalism (the product
Hilbert space on which the QM Measurement "Theory" is based) is a
purely mathematical linearization algorithm, a poor man's variant of
Carleman PDE linearization scheme (from 1930s), of the interacting
Maxwell-Dirac/Schrodinger fields (which is formally a set of coupled,
nonlinear PDEs). Thus, neither the Hilbert space product of QM nor the
Fock space formalism of QFT (which is the same linear approximation,
only carried out formally to infinite order) add any new physics which
is not already present in the original nonlinear classical
Maxwell-Dirac fields. In Barut's Maxwell-Dirac (or self-field)
formalism, the QM entanglement, quantum non-locality, quantum
"computing" with its apparent exponential parallelism, and other
"quantum magic" phenomena arise as simple artifacts of the
approximation, bringing in no new physical content (just as
approximating an integral describing a falling object with a sum of
trapezoid areas doesn't imply "trapezoidal structure of space-time") .
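The linearization claim can be made concrete with a toy example (my own illustration, not Barut's or Kowalski's actual construction): the nonlinear ODE dx/dt = -x^2 turns into an infinite *linear* system for the moments y_n = x^n, which one then truncates at a finite order -- the same move, in miniature, that the product/Fock-space construction performs on the field equations.

```python
# Toy Carleman linearization (illustrative only, not Barut's derivation).
# Nonlinear ODE: dx/dt = -x^2, exact solution x(t) = x0 / (1 + x0*t).
# Introduce moments y_n = x^n; then dy_n/dt = n*x^(n-1)*dx/dt = -n*y_{n+1},
# an infinite linear system, truncated here by setting y_{N+1} = 0.

def carleman_x(x0, t, N=12, steps=10000):
    """Integrate the truncated linear moment system with forward Euler."""
    y = [x0 ** n for n in range(1, N + 1)]           # y_n(0) = x0^n
    dt = t / steps
    for _ in range(steps):
        # y[n] holds the moment y_{n+1}, so dy_{n+1}/dt = -(n+1)*y_{n+2}
        dy = [-(n + 1) * (y[n + 1] if n + 1 < N else 0.0)
              for n in range(N)]
        y = [yi + dt * di for yi, di in zip(y, dy)]
    return y[0]                                      # y_1 approximates x(t)

x0, t = 1.0, 0.5
exact = x0 / (1.0 + x0 * t)
print(exact, carleman_x(x0, t))   # the two agree closely
```

Pushing N up drives the truncated linear answer toward the exact nonlinear one; no "new physics" enters at any order, which is exactly the point being made about second quantization.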
For your convenience, I am copying below the relevant section with the
links I posted earlier:
------------------

http://groups-beta.google.com/group/sci.physics.research/browse_thread/thread/db9e0b7bbff18141/bab76a5967a0a132?lnk=st&q=nightlight+barut&rnum=1&hl=en#bab76a5967a0a132

> It is a mystery why math theories describe electrons. But why
> do math theories also apply so well to Wall Street?

The second quantization formalism of QFTs is a generic linearization
algorithm for nonlinear systems of PDEs and integro-differential
equations. It contains no more intrinsic physics in it than, say, the
Runge-Kutta numeric integration algorithm does. The N-point Green
functions (propagators) computed via Feynman diagrams approximate
nonlinear dynamics evolution of the classical interacting fields (e.g.
Maxwell-Dirac coupled PDEs) with piecewise linear evolution between the
N scattering events (the "interactions"). That is the mathematical
basis of their usefulness, in QFT and elsewhere. Check, for example,
papers by K. Kowalski on arXiv:

http://arxiv.org/find/grp_nlin/1/au:+kowalski_k/0/1/0/all/0/1
http://arxiv.org/abs/hep-th/9212031

where the linearization aspect (of 2nd quantization) is explicit and
(unlike the typical QED/QFT textbooks) entirely non-mysterious. He uses
Fock space methods to solve a variety of nonlinear differential equations,
recurrences (difference equations), kinetic processes, etc. The late
Asim Barut developed similar results in the 1980s. The ICTP site has
around 150 of his papers online (you need JavaScript enabled; then
type Barut into the Author field):

http://library.ictp.it/pages/psearch/prep.php?PAGE=0
http://library.ictp.it/pages/psearch/prep.php?PAGE=7&NEXT=/ARCHIVE/preprint/SDW?W%3DAUTHOR+PH+WORDS+%27barut%27+ORDER+BY+EVERY+ICNUM/Ascend%26M%3D1%26R%3DY

On the linearization aspect of Maxwell-Dirac (treated as classical
interacting fields without 2nd quantization of either EM or Dirac
matter field), check especially his paper "QUANTUM-ELECTRODYNAMICS
BASED ON SELF-ENERGY" (sect. 4, from page 7) at:

http://library.ictp.trieste.it/DOCS/P/87/248.pdf

Additional discussion on 2nd quantization is in his paper "COMBINING
RELATIVITY AND QUANTUM MECHANICS: SCHRODINGER'S INTERPRETATION OF PSI"
at:

http://library.ictp.trieste.it/DOCS/P/87/157.pdf

which describes physical motivation for his approach. He and his PhD
students had replicated QED radiative corrections up to order alpha^5
(including Lamb shift and g-2 which require loop diagrams; note that
Barut's self-fields are not the "semiclassical" tree-level
approximation of QED -- rather, QED is a piecewise linearized
approximation of Barut's self-fields, i.e. of the coupled
Maxwell-Dirac system). You may also check a recent discussion on this
topic in the PhysicsForum:

http://www.physicsforums.com/showpost.php?p=540794&postcount=100
http://www.physicsforums.com/showpost.php?p=541484&postcount=114
http://www.physicsforums.com/showpost.php?p=541708&postcount=118
http://www.physicsforums.com/showthread.php?t=71297&page=3&pp=40

Paul Danaher
Aug 5, 2005, 6:44:58 AM

I'm afraid I don't know what you could mean by "the system did have a
certain value ... we simply don't know what this value is, and we have no
means to predict this value". Again, you talked about "Wavefunction is just
a probability density amplitude" - now, if the wavefunction collapses,
what's the difference between this and a "collapse" of probabilities?
(Again, I don't accept that there's a nonzero probability that my key might
be on alpha Centauri - I saw it ten minutes ago, so if it *is* there, it's
been transported by a previously unknown macroscopic physical process, and
is almost certainly surrounded by several million right socks ...)

>> Doesn't
>> this logically lead to Vecchi's position in the "No new Einstein"
>> thread, which seems to be at variance with your own position there?
>
> As far as I can understand, Vecchi's position is different from mine.

I can't see why, but perhaps I'll come to understand this ...

>> Experiments like the micromaser experiments at the
>> Max-Planck-Institut seem to me to imply the possibility of getting a
>> distribution for the interval between excitation and emission, where
>> half the minimum interval would be an upper bound for "the time
>> taken to emit one photon", no?
>
> I am not sure which experiment you are talking about.
> Could you give some references?

I beg your pardon - a good starting point (for English) is
http://www.mpq.mpg.de/micromaser.html

Eugene Stefanovich
Aug 7, 2005, 2:25:21 PM

nightlight wrote:
>>There is no mystery in the instantaneous "wavefunction collapse".
>>Wavefunction is just a probability density amplitude. Its collapse is not
>>associated with any physical process. If you lost your key and don't
>>know where it is, there is (a small but non-zero) chance that your key
>>is on alpha Centauri. As soon as you find your key in the pocket, this
>>probability instantaneously reduces to zero.
>
>
> That is precisely _the_ problem. If Psi is just a construct "in your
> head", then indeed it can "collapse" when you "know" the result. Yet,
> this entity "in your head" somehow senses and changes based on all the
> objects, phase shifters, generally all interactions outside of "your
> head". One may ask, at what point and how does "your head" override the
> other "heads" or other interactions. What is the formal criteria and
> dynamics for this switch? You can't even define what is this magical
> "knowing" that makes the Psi switch from one mode of evolution to
> another, when and how does it happen.

I don't see any magic here. In my previous post I used an analogy with a
classical die with 6 faces. Before you throw the die, the probability
is evenly distributed (1/6) among all faces. After the throw the
probability "collapses" to just one face. The quantum-mechanical
collapse is no different. I wouldn't say that the collapse happens in
my head. The number that comes out is quite objective. All observers
should agree on what this number is. Otherwise the gambling industry
couldn't exist.


> Basically, what you're suggesting is a euphemistic acknowledgment that
> the linear evolution formalism cannot (not even in principle) model
> the most elemental experimental fact -- the single outcome of a single
> measurement. Therefore, the QM Measurement "Theory" reaches outside of
> the formalism to patch this gap by injecting the non-physical /
> psychological / verbal gimmicks, such as "knowledge" or "consciousness"
> or "splitting universe" or "decoherence" ...etc.

No, nothing of that sort.

> As Einstein said, the QM theory is plainly incomplete.

Einstein is right, in a sense. Quantum mechanics is incomplete.
But I doubt that a more complete theory will be found.
Quantum mechanics does not allow us
(even in principle) to predict results of measurements on quantum
systems. In QM we can predict only probabilities.
For classical dice we can (in principle) follow exactly the
movement of the hand, interactions of the die with the air, table, etc,
and predict the result of the throw. For quantum systems we cannot do
such a prediction. For example, we cannot predict exactly at what
time a radioactive nucleus will decay.

I have a favorite quote from Einstein which explains my position
much better than I can do it myself:

I now imagine a quantum theoretician who may even admit that the
quantum-theoretical description refers to ensembles of systems and not
to individual systems, but who, nevertheless, clings to the idea that
the type of description of the statistical quantum theory will, in its
essential features, be retained in the future. He may argue as
follows: True, I admit that the quantum-theoretical description is an
incomplete description of the individual system. I even admit that a
complete theoretical description is, in principle, thinkable. But I
consider it proven that the search for such a complete description
would be aimless. For the lawfulness of nature is thus constructed
that the laws can be completely and suitably formulated within the
framework of our incomplete description. To this I can only reply as
follows: Your point of view - taken as theoretical possibility - is
incontestable.

Eugene.

Eugene Stefanovich
Aug 7, 2005, 2:25:21 PM

Paul Danaher wrote:

>>In my interpretation, the system did have a certain value of
>>observable before the measurement was done. We simply don't know what
>>this value is, and we have no means to predict this value. The
>>wavefunction does not describe the individual system. It simply
>>describes our knowledge (or lack of it) about the system. When the
>>measurement is done, we simply observe the value of observable which
>>was already there. There is no "collapse" of probabilities. At least,
>>there is no more
>>"collapse" than in my above example for the probability of finding the
>>key on Alpha Centauri.
>
>
> I'm afraid I don't know what you could mean by "the system did have a
> certain value ... we simply don't know what this value is, and we have no
> means to predict this value". Again, you talked about "Wavefunction is just
> a probability density amplitude" - now, if the wavefunction collapses,
> what's the difference between this and a "collapse" of probabilities?

My point was to demonstrate that there is nothing mysterious in the
instantaneous wavefunction (or probability) "collapse". We deal with it
every day. Suppose you are throwing a die. Before the throw, all 6
numbers have equal probabilities to show up. So, you may say that the
die is described by a probability function (1/6 for each face). After
the throw (measurement) just one number shows up. The probability
"collapses" from 1/6 for each face to 1 for one face and 0 for others.

This was in classical physics. Quantum physics is not much different.
The main difference is that in classical physics we can (if we try hard
enough) predict which number will come out. In quantum physics
this is not possible.
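The die analogy can even be run as a two-line simulation (my own toy illustration, not anything from the posts): before the throw our description is a uniform distribution; after the throw it "collapses" to certainty about the observed face, with no physical process involved -- only an update of our knowledge.

```python
import random

random.seed(1)  # fixed seed just so reruns are reproducible

# "state of knowledge" before the throw: uniform over the six faces
prior = {face: 1 / 6 for face in range(1, 7)}

# the throw (the "measurement"): one definite face comes up
outcome = random.randint(1, 6)

# the "collapse" is just conditioning on the observed outcome
posterior = {face: (1.0 if face == outcome else 0.0) for face in prior}

print(outcome, posterior[outcome])   # e.g. some face, with probability 1.0
```

Nothing propagates anywhere when `posterior` replaces `prior`; the update is in the description, not in the die.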

> (Again, I don't accept that there's a nonzero probability that my key might
> be on alpha Centauri - I saw it ten minutes ago, so if it *is* there, it's
> been transported by a previously unknown macroscopic physical process, and
> is almost certainly surrounded by several million right socks ...)

If you saw your key 10 minutes ago, then Alpha Centauri is a bit too
far. I then say that there is a non-zero probability to find your key
on the Sun (at the speed of light, the Sun can be reached in about 8 minutes).
It is not unreasonable to suspect that some mad scientist could
steal your key and send it on a fast rocket toward the Sun.

Eugene.

Paul Danaher
Aug 7, 2005, 6:04:41 PM
Eugene Stefanovich wrote:
> Paul Danaher wrote:
>
..

>> I'm afraid I don't know what you could mean by "the system did have a
>> certain value ... we simply don't know what this value is, and we
>> have no means to predict this value". Again, you talked about
>> "Wavefunction is just a probability density amplitude" - now, if the
>> wavefunction collapses, what's the difference between this and a
>> "collapse" of probabilities?
>
> My point was to demonstrate that there is nothing mysterious in the
> instantaneous wavefunction (or probability) "collapse". We deal with
> it every day. Suppose you are throwing a die. Before the throw, all 6
> numbers have equal probabilities to show up. So, you may say that the
> die is described by a probability function (1/6 for each face). After
> the throw (measurement) just one number shows up. The probability
> "collapses" from 1/6 for each face to 1 for one face and 0 for others.

Ah - so what you're saying is something like "the system had a specific
value within a range - but we don't know which value within this range and
have no way of predicting it"? I can live with this quite easily; what I
can't understand is the notion of "instantaneously". Presumably, for example,
you are not saying that the state of the detector changes "instantaneously"
(and we can know this)?

>> (Again, I don't accept that there's a nonzero probability that my
>> key might be on alpha Centauri - I saw it ten minutes ago, so if it
>> *is* there, it's been transported by a previously unknown
>> macroscopic physical process, and is almost certainly surrounded by
>> several million right socks ...)
>
> If you saw your key 10 minutes ago, then Alpha Centauri is a bit too
> far. I then say that there is a non-zero probability to find your key
> on the Sun (you can reach the Sun in 8 minutes with the speed of
> light). It is not unreasonable to suspect that some mad scientist
> could steal your key and send it on a fast rocket toward the Sun.

Okay, we can agree that there are constraints imposed by the speed of light
and other physical limits.

Eugene Stefanovich
Aug 7, 2005, 10:19:26 PM

Paul Danaher wrote:
> Eugene Stefanovich wrote:
>
>>Paul Danaher wrote:
>>
>
> ..
>
>>>I'm afraid I don't know what you could mean by "the system did have a
>>>certain value ... we simply don't know what this value is, and we
>>>have no means to predict this value". Again, you talked about
>>>"Wavefunction is just a probability density amplitude" - now, if the
>>>wavefunction collapses, what's the difference between this and a
>>>"collapse" of probabilities?
>>
>>My point was to demonstrate that there is nothing mysterious in the
>>instantaneous wavefunction (or probability) "collapse". We deal with
>>it every day. Suppose you are throwing a die. Before the throw, all 6
>>numbers have equal probabilities to show up. So, you may say that the
>>die is described by a probability function (1/6 for each face). After
>>the throw (measurement) just one number shows up. The probability
>>"collapses" from 1/6 for each face to 1 for one face and 0 for others.
>
>
> Ah - so is what you're saying is something like "the system had a specific
> value within a range - but we don't know which value within this range and
> have no way of predicting it"?

I am saying that when you have a quantum system (or die), then even
before the measurement is done the system is in a well-defined state
(for example, the state with number one (one dot) up). The measurement
just reveals this state of the system. Note that in the Copenhagen
interpretation, the quantum system is assumed to be in the state which
is a linear combination of all six possibilities. Which possibility
is actually realised is not known until the measurement is done.

Eugene.

nightlight
Aug 8, 2005, 10:59:28 AM
> I don't see any magic here. In my previous post I used an analogy
> with a classical die with 6 faces. ...

You use "magic" (the unspecified non-physical collapse) to get around
the _mutual exclusivity_ of the two forms of evolution/change of Psi in
time. The "magic" part is the switchover between the two (which, being
incompatible / mutually exclusive, cannot both specify the evolution
simultaneously). When and how does Psi (of the _whole_ system, the
object plus apparatus and anything interacting with either) stop
evolving via the linear unitary evolution and start evolving via the
projection postulate? When and how does it then somehow stop evolving
via the projection postulate and resume the unitary evolution? What is
the formal counterpart in the QM that describes (in time and space
parameters) this switchover between the two modes of evolution?

Von Neumann's answer is as complete as any here -- he simply said
that the observer's consciousness performs the collapse. At least he
recognized that the unitary evolution cannot accomplish the required
transition. You can hand-wave it in many other ways tried since,
without in essence getting beyond von Neumann's euphemistic
admission of the inadequacy of the unitary formalism to account for the
most basic empirical fact -- the occurrence of a single result. His
solution is, of course, nothing but a magic trick -- a psychological /
didactic device, entirely outside of the formalism, creating an
appearance (at the verbal, handwaving level) of overcoming the core
shortcoming of the linear formalism. But, as the many decades of
persistent confusion among the physics students testify, it never
answers it at the formal level, within the theory or physical model
itself.

Your level of argument doesn't appear to even recognize the mutual
exclusivity of the two modes of evolution, carrying on as if the two
modes can guide the change of Psi simultaneously and it is a mere
matter of convention and convenience which description one chooses to
use (as your dice example illustrates). Once you recognize the mutual
exclusivity of the two (they cannot both guide the change of Psi
simultaneously), then the questions posed at the top (and the von
Neumann's euphemistic answer 'I don't know') make sense and remain
unanswered within the QM orthodoxy. Note also that von Neumann's
proof of the movability of the cut (the transition point between the two
modes) does not answer (within the formalism) where and when does the
projection mode occur -- he puts that part outside of the formalism
("consciousness" performs the collapse; what is "consciousness" in the
formalism? when, where and how "it" does it? the other conventional
answers merely substitute "consciousness" and its magic with other
equally vague and magical work-alikes).

One legitimate semi-answer, which at least clearly recognizes the
mutual exclusivity of the two modes, is the GRW spontaneous collapse
extension of QM -- here the switch between the two modes is a part of
the formalism, entering as an additional formal and genuine QM
postulate. Unfortunately the GRW spontaneous collapse solution is
grafted in superficially, as a quick & dirty patch of the gaping hole
in the formalism, but without any connection to other physics, with no
dynamical basis or empirical tests (other than providing, by
declaring it, the 'single result'). Thus while GR & W do lay out the
problem correctly and cleanly, without having to call upon deus ex
machina from outside the formalism (consciousness, splitting of
universe, irreversible macroscopic measurement, decoherence), this
semi-answer still doesn't really add any useful physics (beyond von
Neumann's answer) to the QM.

Einstein did state what the genuine answer ought to be (which was
worked out in detail decades later by Barut and his students [see the
ref's cited earlier] confirming in full Einstein's intuition on this
problem):

" At the present time the opinion prevails that a field theory must
first, by "quantization," be transformed into a statistical theory of
field probabilities according to more or less established rules. I see
in this method only an attempt to describe relationships of an
essentially nonlinear character by linear methods." ["The Meaning of
Relativity", 5th ed, Dover 1956, pp 165].

This is precisely what Barut has demonstrated -- he constructs the
explicit linearization procedure, a variant of Carleman linearization
(without referring to it, being apparently unaware of its existence and
use in applied math), of the coupled nonlinear PDE system
(Maxwell-Dirac classical fields) of the very type suggested by
Einstein. (A similar view of QM and QFT was shared by de Broglie,
Schrodinger, Fermi, Jaynes, etc.)

Paul Danaher
Aug 8, 2005, 10:59:29 AM

Okay, this does appear to be a reaffirmation of your concept of "objective
reality" in the "No new Einstein" thread - I'd taken your "the system did
have a certain value ... we simply don't know what this value is" too
literally. So, the cat is objectively either alive (and capable of emitting
a miaow) or dead in this view, and when we open the box we know which it is,
but can't know when it died?
Is this equivalent to the statement "We know that the detector changed state
by a specific time, showing that a photon had been emitted and reabsorbed,
but we cannot know when (and how long?) all this took to happen"? Given the
existence of optical systems with pulse/switching times of the order of a
picosecond, what (else?) sets the observational limits to this uncertainty?

Paul Danaher
Aug 8, 2005, 12:31:15 PM
Eugene Stefanovich wrote:
..

> I am saying that when you have a quantum system (or die), then even
> before the measurement is done the system is in a well-defined state
> (for example, the state with number one (one dot) up). The measurement
> just reveals this state of the system. Note that in the Copenhagen
> interpretation, the quantum system is assumed to be in the state which
> is a linear combination of all six possibilities. Which possibility
> is actually realised is not known until the measurement is done.

Apart from early computational convenience, is there any formal reason to
believe the mathematics should be linear?

Eugene Stefanovich
Aug 8, 2005, 3:03:09 PM

Oh yes! There is a very strong reason to believe in that.
I find this reason in "quantum logic". This is a theory that derives
quantum mechanics as a generalization of classical logical
relationships. In this approach, the entire (linear) formalism of QM
is based on a few simple postulates (similar to postulates of
classical logic). Any modification of the present QM formalism
(like making QM laws non-linear as in
S. Weinberg, "Testing quantum mechanics", KEK preprint 89-5-562)
would lead to violation of one or more of these postulates, which
I find very undesirable.

There are many good books on quantum logic. I wrote my own short
chapter on the subject (chapter 4 in
http://arxiv.org/abs/physics/0504062) where more references can be
found.

Eugene.

Eugene Stefanovich
Aug 8, 2005, 3:03:09 PM

Paul Danaher wrote:
> Eugene Stefanovich wrote:
>
>>>>Paul Danaher wrote:
>>>
>>>Ah - so is what you're saying is something like "the system had a
>>>specific value within a range - but we don't know which value within
>>>this range and have no way of predicting it"?
>>
>>I am saying that when you have a quantum system (or die), then even
>>before the measurement is done the system is in a well-defined state
>>(for example, the state with number one (one dot) up). The measurement
>>just reveals this state of the system. Note that in the Copenhagen
>>interpretation, the quantum system is assumed to be in the state which
>>is a linear combination of all six possibilities. Which possibility
>>is actually realised is not known until the measurement is done.
>
>
> Okay, this does appear to be a reaffirmation of your concept of "objective
> reality" in the "No new Einstein" thread - I'd taken your "the system did
> have a certain value ... we simply don't know what this value is" too
> literally. So, the cat is objectively either alive (and capable of emitting
> a miaow) or dead in this view, and when we open the box we know which it is,
> but can't know when it died?

Yes, something of that sort. However, see some explanation below.

> Is this equivalent to the statement "We know that the detector changed state
> by a specific time, showing that a photon had been emitted and reabsorbed,
> but we cannot know when (and how long?) all this took to happen"? Given the
> existence of optical systems with pulse/switching times of the order of a
> picosecond, what (else?) sets the observational limits to this uncertainty?

I can repeat and strengthen my statements from the previous post:
It is pointless to ask what happened to the system in the time interval
between the preparation of the system and measurement.
For example, it is pointless to ask how
long it took the wavefunction to collapse.
One is only allowed to ask questions about results of physical
measurements. In the time interval between the preparation and
measurement there were no measurements performed (by definition),
so please do not ask me what happened to the system in this interval.

We have only a mathematical model of what went on in this
time interval: the wave function psi(0) is prepared at time 0;
the wave function unitarily evolves psi(0) -> psi(t); the wavefunction
collapses at time of the measurement t. These steps have nothing to do
with what "really" happened to the system between times 0 and t.
These are just mathematical manipulations that allow us to accurately
predict the results of measurements at time t. The results of these
manipulations agree with experimentally observed data, however the
manipulations themselves (wave function, time evolution operator,
collapse) have no relationship to the real physical world.

Eugene.


Eugene Stefanovich
Aug 8, 2005, 3:03:09 PM

nightlight wrote:
>>I don't see any magic here. In my previous post I used an analogy
>>with a classical die with 6 faces. ...
>
>
> You use "magic" (the unspecified non-physical collapse) to get around
> the _mutual exclusivity_ of the two forms of evolution/change of Psi in
> time. The "magic" part is the switchover between the two (which, being
> incompatible / mutually exclusive, cannot both specify the evolution
> simultaneously). When and how does Psi (of the _whole_ system, the
> object plus apparatus and anything interacting with either) stop
> evolving via the linear unitary evolution and start evolving via the
> projection postulate? When and how does it then somehow stop evolving
> via the projection postulate and resume the unitary evolution? What is
> the formal counterpart in the QM that describes (in time and space
> parameters) this switchover between the two modes of evolution?

One important lesson we should take from quantum mechanics is the
following: "do not ask questions that cannot be answered
experimentally". Your questions about two modes of evolution are
exactly of this sort. There is no experiment that can answer those
questions. So, I refuse to answer them as well.

Let me give you a short version of how I understand measurements and
their description in QM (a longer version can be found in chapter 3 of
http://arxiv.org/abs/physics/0504062 ).
Suppose we want to measure observable F of the physical system A
prepared in a state S at time t. To do that we need two devices:
the preparation device P and the measuring apparatus M. One act of
measurement consists of three steps:

1. Prepare the system in the state S using P.
2. wait time t.
3. Perform measurement by M.

As a result of this procedure we obtain a definite value of observable F
(let's call it f_1). Now, we repeat this procedure
preparation-measurement many times. Each time the result of
our measurement is different (f_i), even if we control the preparation
process as well as we can. This is the main mystery of nature. Nobody
has an answer why microsystems behave unpredictably. To answer this
question is not a job of QM. QM can only tell us the following:
"Having the most complete description of the preparation process,
predict the probabilities of measurements f_i at time t."

The answer to this question requires involvement of the math
apparatus of QM:
1) We describe the preparation process at time t=0
by some wave function psi(0).
2) We describe the evolution of the
state from time 0 to time t by a unitary evolution operator
psi(0) -> psi(t).
3) We calculate the probabilities to measure f_i at time t
by taking projections of the wave function psi(t) on
eigensubspaces of the Hermitian operator F.
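For a two-level system, steps 1) - 3) fit in a few lines (a minimal sketch of my own; the Hamiltonian and the numbers are illustrative choices, not anything specific from the discussion):

```python
import math

# 1) preparation: psi(0) = "spin up", as a 2-component complex vector
psi0 = [1.0 + 0j, 0.0 + 0j]

# 2) unitary evolution psi(0) -> psi(t) under H = (omega/2)*sigma_x,
#    for which U(t) = exp(-i*H*t) works out to an explicit rotation
def evolve(psi, omega, t):
    c = math.cos(omega * t / 2)
    s = math.sin(omega * t / 2)
    return [c * psi[0] - 1j * s * psi[1],
            -1j * s * psi[0] + c * psi[1]]

# 3) probabilities of the outcomes of measuring sigma_z: squared
#    projections of psi(t) on the eigenvectors (here, the basis vectors)
def born_probabilities(psi):
    return [abs(amp) ** 2 for amp in psi]

psi_t = evolve(psi0, omega=math.pi, t=0.5)
p_up, p_down = born_probabilities(psi_t)
print(p_up, p_down)   # ~0.5 each at this particular t
```

All three steps are calculations on paper (or in a computer); only the statistics of the recorded outcomes is compared with experiment.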

I hope we both understand that steps 1) - 2) - 3) happen
in our heads only. They are not real. They are just a mathematical
description of reality. So, the question "why does the wave function
first evolve continuously and then suddenly collapse?" is not
a question about a real physical process. It is a question about
our mathematical model of the process.

The question about the real physical process between the
preparation and measurement
of the system remains unanswered. I believe there is simply
no answer to this question. Remember: "do not ask questions that cannot
be answered experimentally". We just can't say what happened to
the system between preparation (P) and measurement (M), because
in order to see that we need to perform another measurement M'
between P and M, and therefore completely change the original problem.

Eugene.

nightlight
Aug 9, 2005, 4:06:52 AM
> One important lesson we should take from quantum mechanics is the
> following: "do not ask questions that cannot be answered experimentally".
> Your questions about two modes of evolution are exactly of this sort.
> There is no experiment that can answer those questions.

Thanks for bringing out one more characteristic of the present QM
Measurement "Theory" shared with the magical, pre-scientific modes of
thinking (like those fairy tales, where the hero is told that he can go
through the whole castle, except for that one room, and he should
never, ever open that one door). If your theory has nothing to say
about the space-time properties of the collapse mode of state change
and its relation to the unitary evolution, that does not mean that "no
experiment can answer those questions." In fact, sticking to such an
assertion will corner you quickly into having to issue outright
religious kinds of capricious edicts and arbitrary prohibitions:

(A1) The two modes/rules of change of Psi, the mode U: unitary
(dynamical) and mode P: collapse (projection postulate, non-dynamical)
are part of the standard QM axiomatics (by necessity, since the U mode
cannot model the occurrence of single results in the experiment on
arbitrary superposition).

(A2) The modes U and P are mutually exclusive (except in the trivial
case of 'no state change', U = P = Identity), i.e. they predict
different new states from the same old state Psi0: U Psi0 = Psi_u
<different from> P Psi0 = Psi_p. Therefore, at any one time, only one
of the modes U and P controls the change of the state Psi: whenever the
state does change, U Psi <> P Psi.

(A3) From (A2) it follows that for the change of Psi in time there is
a sequence of times T1 < T2 < T3 < ... such that Psi changes by rule U
for t<T1, by rule P for t>=T1, then by rule U for t>=T2, etc.

The question is, why are the times T1, T2,... absolutely outside of
experimental and theoretical reach, including any future experiments?
QM has no formal counterpart that can model and yield T1, T2,... as the
result of some computation within the formalism, yet it asserts that T1,
T2,... exist. Namely, if you treat the 'object' and the 'apparatus' as a
single physical system evolving via U (for the _full_ system, including
the 'environment'), you won't obtain the single results from a superposed
state, but only the superpositions; thus there is no prediction of T1,
T2,... Yet the existence of these times T1, T2,... is implied (e.g.
via A1-A3) by the QM Measurement "Theory" itself.
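The exclusivity claimed in (A2) is easy to exhibit concretely. Here is a
minimal sketch (a hypothetical two-state system in plain Python, my own
toy example, nothing specific to the setups discussed in this thread):
from the same superposed Psi0, the unitary mode U yields another
superposition, while the projection mode P yields a single renormalized
term, so the two modes generically disagree.

```python
import math

# Toy two-state system (|0>, |1>); a state is a pair of amplitudes (a, b).
# Illustrates (A2): U Psi0 and P Psi0 are generically different states.

def apply_U(psi, t):
    """Unitary mode U: a rotation by angle t in the |0>,|1> plane."""
    a, b = psi
    c, s = math.cos(t), math.sin(t)
    return (c * a - s * b, s * a + c * b)

def apply_P(psi, outcome):
    """Projection mode P: collapse onto basis state `outcome`,
    renormalized (the non-dynamical projection postulate)."""
    a, b = psi
    kept = a if outcome == 0 else b
    if kept == 0:
        raise ValueError("zero-probability outcome")
    kept /= abs(kept)  # renormalize the surviving amplitude
    return (kept, 0.0) if outcome == 0 else (0.0, kept)

psi0 = (1 / math.sqrt(2), 1 / math.sqrt(2))  # a superposition
psi_u = apply_U(psi0, 0.3)   # still a superposition: both terms nonzero
psi_p = apply_P(psi0, 0)     # a single term of the superposition
```

Both modes preserve the norm, yet from the same Psi0 they land on
different states, which is what forces the alternating-regime picture of
(A3) in the first place.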

By declaring that "There is no experiment that can answer those
questions. So, I refuse to answer them as well" you are placing
yourself into the role of that wizard in a fairy tale, showing a young
hero the castle, bringing him finally to a door (the claim of existence
of T1, T2,..) saying: and this is a door to the room for which I won't
tell you what is inside and you are never, ever to open this door.

You (or the QM Measurement "Theory") have no rational basis for claiming
an experimental taboo regarding the times T1, T2,... whose existence the
QM Measurement Theory itself asserts. What does the taboo follow from?

The Barut's work mentioned earlier clarifies why there is that kind of
irrational refusal to answer those questions -- the taboo is needed
because there are no two modes of evolution U and P at all, and thus
there are no times T1, T2,... for the switchovers between the two.

The actual evolution of Psi (which is a classical matter field, such as
Dirac or Schrodinger field) is not linear at all (since Psi is always
coupled with the EM field A into the nonlinear system of PDEs), but it
is a nonlinear evolution N. Barut and his students have demonstrated
(see the papers cited earlier) how one can approximate N via piecewise
linear (and unitary) evolution U, such that distinct sections of
evolution governed by U are stitched together by the 'breaks' (or
jumps) modeled by P. The U and P modes and the distinct sections are
simply artifacts of the particular linearization scheme of N (analogous
to arbitrary ways of selecting straight line segments to approximate a
curve). There is no new physics revealed by the U and P scheme that is
not already in N.
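The curve-segment analogy can be sketched numerically (an illustrative
toy of mine, with arbitrarily chosen breakpoints): two different
piecewise-linear partitions approximate the integral of the same
nonlinear curve equally well, so the particular breakpoints carry no
content of their own.

```python
import math

# Trapezoid analogy: approximate the integral of a nonlinear function
# by piecewise-linear segments. Two arbitrary choices of breakpoints
# both converge to the same answer; the breakpoints are artifacts of
# the scheme, not features of the function being approximated.

def trapezoid(f, a, b, interior):
    """Trapezoid rule over the partition a < interior points < b."""
    xs = [a] + sorted(interior) + [b]
    return sum((xs[i + 1] - xs[i]) * (f(xs[i]) + f(xs[i + 1])) / 2
               for i in range(len(xs) - 1))

exact = 1 - math.cos(2.0)  # integral of sin(x) over [0, 2]

uniform = [2.0 * k / 1000 for k in range(1, 1000)]        # one partition
skewed = [2.0 * (k / 1000) ** 2 for k in range(1, 1000)]  # a different one

err_uniform = abs(trapezoid(math.sin, 0.0, 2.0, uniform) - exact)
err_skewed = abs(trapezoid(math.sin, 0.0, 2.0, skewed) - exact)
```

Both errors come out tiny and comparable; nothing "physical"
distinguishes one set of breakpoints from the other.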

In other words, the linear approximation U is accurate enough until the
interaction occurs (e.g. with the apparatus), at which time U becomes
increasingly inaccurate and the evolution of the field Psi (the state of
the total system) via U diverges substantially from the actual evolution
via N (the exact evolution of the full coupled system). One example of
such drastic divergence is the interaction with the 'apparatus', where U
predicts a superposed final Psi while the actual Psi is just a single
term of the superposition (the actual single result). The QM
Measurement Theory solves the problem of the inaccuracy of the linear
approximation U by introducing corrective jumps via the evolution mode P,
which, whenever necessary, bring the incorrectly predicted Psi (the
superposition) back in sync with the actual Psi (the single term).

Since one can piecewise-linearize N (via the U and P modes of evolution)
in infinitely many different ways, all equally valid as approximations
of N, it is indeed meaningless to ask when the actual system performs
the switch between U and P -- meaningless not because "QM says you
can't talk about it, even though they do exist" (or any other
equivalent magical "explanation"), but because the actual system never
switches between the two, since it doesn't evolve via these two modes.
It evolves via the single nonlinear evolution N, which is always valid
and which never switches to something else. The two modes, and the
implied transitions between the two, are arbitrary artifacts of a
particular linearization scheme with no physical content of their own
(just as trapezoids approximating an integral, being arbitrarily chosen
in the first place, don't carry any physical content of their own).

> Nobody has an answer why microsystems behave unpredictably.
> To answer this question is not a job of QM.

Unless you treat 'apparatus1' and 'object1' together as a new 'object2'.
After all, the division into 'object' and 'apparatus' is a matter of
convention. Then QM _does_ have the job of answering how that full
system 'object2' evolves. And it gives the wrong answer, as von Neumann
found out -- it predicts a superposed state of the composite system,
contrary to the experiment, which shows the final state to be a single
term of the superposition.

And the "fix" offered by von Neumann was to simply define some operator
P which maps the wrong answer into the right answer and then postulate
that the observer's "consciousness" performs that change of state
defined as P.

In other words, say you're a teacher and I am a student, and I proclaim
that the position of a free-falling body is described by the evolution
U(t) = k*t. Then, since that quickly gets too far away from the
experimental result E(t), I "solve" the problem by defining a mapping P
which maps the wrong answer U(t) into the experimentally known correct
answer E(t), i.e. I define P U(t) = E(t), and proclaim a great new
"discovery" that it is the observer's consciousness which performs the P
mapping. How would you rate that as a "solution"? Say you object to
the vague term "consciousness", so I change the story and make a new
grand "discovery" that it is really the "environment" which performs
the P mapping via "decoherence" -- does that help my "solution" very
much? Would it help my U(t) "theory" if, instead, I declare that it is
really the branching of the universes that performs the P mapping?
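The free-fall analogy can be spelled out in a few lines (all numbers
made up for illustration; k is picked so the linear "law" happens to
agree with the actual result at t = 1):

```python
# Free-fall analogy: a linear "evolution law" U(t) = k*t vs. the actual
# result E(t) = g*t^2/2, plus the "corrective" map P that simply replaces
# the wrong answer by the experimentally known one.

g = 9.8  # illustrative gravitational acceleration
k = 4.9  # chosen so U(1) = E(1); everywhere else U diverges from E

def U(t):
    """The proclaimed linear law."""
    return k * t

def E(t):
    """The actual free-fall distance."""
    return 0.5 * g * t ** 2

def P(t):
    """The 'fix': P U(t) = E(t). It adds no physics beyond E itself."""
    return E(t)

errors = [abs(U(t) - E(t)) for t in (1.0, 2.0, 4.0)]  # 0.0, 9.8, 58.8
```

The divergence grows without bound, and the "corrective" map P is,
by construction, just the right answer pasted over the wrong one.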

Or, would you just say that U(t) = k*t is plainly the wrong evolution law
for a free fall, and is at best a crude linear approximation for
some small section of the path? That's exactly what Einstein believed
about QM (see the earlier quotation) and what Barut has shown to be
the case. Among others, the magical "entanglement" (with its imagined
exponential amplification of computing capabilities) is an entirely
physically contentless artifact of the approximation scheme (see the
PhysicsForum links given earlier for discussion of "entanglement" and
its alleged computational "powers"). Barut's approach to QM/QFT (which
is an elaboration of the original Schrodinger & Fermi approach) provides
a rational explanation for the decades of failed attempts to
experimentally demonstrate "entanglement" via the tests of Bell's
inequalities (the fact of the unbroken chain of gross failures is
officially recognized and referred to only via the mandatory
euphemism: the "loopholes" in the Bell inequality experiments) --
trying to experimentally find "entanglement" is like trying to
experimentally find the mysterious underlying trapezoids arising in the
numeric approximation of the integrals describing some phenomenon.

Eugene Stefanovich

unread,
Aug 11, 2005, 6:31:37 AM8/11/05
to
nightlight wrote:
>> One important lesson we should take from quantum mechanics is the
>> following: "do not ask questions that cannot be answered experimentally".
>> Your questions about two modes of evolution are exactly of this sort.
>> There is no experiment that can answer those questions.
>
>
> Thanks for bringing out one more characteristic of the present QM
> Measurement "Theory" that it shares with magical, pre-scientific modes
> of thinking (like those fairy tales where the hero is told that he can
> go through the whole castle, except for that one room, and he should
> never, ever open that one door). If your theory has nothing to say
> about the space-time properties of the collapse mode of state change
> and its relation to the unitary evolution, that does not mean that "no
> experiment can answer those questions." In fact, sticking to such an
> assertion will quickly corner you into having to issue outright
> religious kinds of capricious edicts and arbitrary prohibitions:

Just to make it clear: my point was that QM can predict (the
probabilities of outcomes of) measurements. However, QM cannot say
a single thing about what happens to the system while measurements
are not done. Is it a problem? No, because in physics we are obliged to
explain only the results of measurements. If experiment doesn't care
to measure a thing, then theory doesn't care to predict it.

Of course nobody can forbid us to make a new measurement in a time
interval not explored before. Then QM will faithfully predict (the
probabilities of) its outcomes.

I have a problem when somebody asks what happens to the system during
the time it is not observed (is it unitary evolution or collapse?).
Why should we care? This is not the job of theoretical physics.
Theoretical physics must predict the results of measurements. Period.

Eugene.


Eugene Stefanovich

unread,
Aug 11, 2005, 9:15:04 PM8/11/05
to

nightlight wrote:

>>However QM cannot say a single thing about what happens to
>>the system while measurements are not done.
>
>

> Of course it does say quite a bit. [...]

Thank you for presenting your interpretation of QM.
It seems that there are as many interpretations as there are people
who thought about the subject. I am not going to argue with your
interpretation, and I am not going to force my interpretation on
you. I just want to make my point of view clear so that (hopefully)
there is no misunderstanding.

QM tells us: describe your system (e.g., an electron), describe how
it is prepared (e.g., emitted by a hot wire and passed through two
slits), describe what is measured (e.g., the electron's position
registered by a fluorescent screen), and I'll tell you how to predict
the results of your measurements (probabilities) to a very high
precision.

What else can we ask for? We have a theory that describes results of
all possible measurements. Problems begin when we start to ask
for something more, i.e., when we start to ask about the "mechanism":
what was the electron actually doing while it travelled from the hot
wire to the screen? Did it pass through one slit or through two slits?
What happened to the system while we weren't measuring?

In my view, the only honest answer is "I don't know" or "I don't
care". It could be that the electron on its way to the screen
turned into the Cookie Monster and then back into the electron again.
Who cares?

QM does not tell you what the electron looked like before the
measurement.
It throws at you a bunch of formulas: wave functions, Hilbert space,
Schroedinger equation, projection postulate, etc. These formulas
(if properly applied) tell you a lot about what you measure
(the distribution of spots on the fluorescent screen), but they tell
you exactly zero about how electrons got there.

You may consider this a radical form of positivism.
Einstein once said: "I believe that the Moon is still there
even if nobody is looking". I would like to rephrase it:
"The Moon will be there when you look at it".

Eugene.


nightlight

unread,
Aug 12, 2005, 4:43:05 AM8/12/05
to
> QM does not tell you what the electron looked like before
> the measurement.
> It throws at you a bunch of formulas: wave functions, Hilbert
> space, Schroedinger equation, projection postulate, etc. These
> formulas (if properly applied) tell you a lot about what you
> measure (the distribution of spots on the fluorescent screen), but
> they tell you exactly zero about how electrons got there.

That line of argument is a red herring. The point I was making was
not about the location of the 'electron' or 'photon' before detection,
or generally about values whose existence QM says nothing about. In
contrast, the steps (A1)-(A3) from the earlier post show that there
are values T1, T2,... which the QM Measurement Theory says _do exist_,
but for which the QM formalism lacks any counterpart capable of
modeling / computing them. That is the hole in QM that von Neumann's
"observer's consciousness", Everett's "universe branching", and GRW's
"spontaneous collapse" were meant to close.

> You may consider this a radical form of positivism.

That would be a very kind, euphemistic way to put it. The basic
difference between the QM "realists" (such as Einstein, Schrodinger,
de Broglie, E.T. Jaynes, A. Barut, T. Marshall, E. Santos,...) and the
QM "anti-realists" (such as Bohr, von Neumann, Wigner, Shimony, Zeh,...)
is that the realists insist that our physical model be _self-sufficient_
-- it should be able to formally simulate any quantity/value that it
claims to exist, without invoking a deus ex machina from outside of the
formalism. Otherwise, the realists consider the model/theory incomplete.

The anti-realists are much less picky -- they don't mind using
strings and duct tape i.e. any verbal or psychological crutches
from well _outside of the formalism_ (e.g. consciousness), to
hold the model together, as long as these devices help maintain
an illusion in a superficial listener (especially if that listener
is the one deciding on research funding) that the 'experts are in
full control' and their model/theory is complete, with only minor
details left to refine.

In short, the QM "realists" are focused on the nature and its laws,
while the QM "anti-realists" are focused on themselves, their relation
to the nature and how to optimize it in a given social context. The
realists are about substance, the anti-realists about appearance.
Or, in the academia-speak, the realists do physics, while the
anti-realists do gender neutral physical science studies.

nightlight

unread,
Aug 12, 2005, 4:43:47 AM8/12/05
to
> My point was that QM can predict (the probabilities of outcomes of)
> measurements.

It does that, too. But QM was already quite useful well before
probabilities entered the theory (via the Born postulate). The
probabilistic view of Psi, originally introduced into QM as an erroneous
footnote and its later errata by Born, is useful mostly for scattering
applications of QM. For chemistry, molecular & atomic bound states,
structures and spectra, solid state,... it doesn't do anything that
Schrodinger's or Fermi's view of |Psi|^2 as a density of matter/charge
doesn't do more clearly (e.g. via Hartree-Fock self-consistent matter
fields). Basically, the direct product of single-particle Hilbert spaces
(the central formal element of the QM Measurement "Theory", entanglement
and other QM magical aspects) is a useful formal & computational method
for systems of a few distinct (types of) particles, primarily in
scattering setups. The pedagogical literature makes it, unfortunately,
the centerpiece of QM (although not of QFT).

Barut has shown how this multiparticle product-space formalism, which
is the formal basis of all the QM magic claims ("predictions"), is
nothing more than a linearized approximation (obtained through the
under-constrained/weaker variation of the action, Barut's ansatz, cf.
[3], which simultaneously yields the statistical aspects and the
linearized form of the QM formalism for composite systems) of the
classical Maxwell field interacting with the classical
Dirac/Schrodinger matter field. No new physics is added by the
composite-system QM formalism (or by second quantization, which is
equivalent to the infinite product case with anti/symmetrization). The
QM magic (entanglement, Bell nonlocality, QC, etc.) is based solely on
the non-physical (thus non-existent in nature) artifacts of that
particular linearization/approximation computational scheme, which
became dominant through historical accidents, but which is otherwise
arbitrary (this becomes evident when one recognizes it in its more
general form, developed independently in mathematics as the Carleman
linearization).

> However QM cannot say a single thing about what happens to
> the system while measurements are not done.

Of course it does say quite a bit: the dynamical evolution equations
yielding interference effects, the energy spectra of stationary
states, molecular and atomic structures, etc. These phenomena are all
computed without any reference to "measurement" or any need for the
measurement "theory" postulates.

In relation to measurement, QM does have to answer how the
combined system 'apparatus1' plus 'object1' evolves (through the QM
unitary evolution of a composite system). It has to show how the
"measurement1" of the 'object1' by the 'apparatus1' is reflected in the
formalism when the composite system 'object1' + 'apparatus1' is treated
as a single 'object2'. That is precisely how von Neumann arrived at the
conclusion that the unitary evolution of the composite system is
inadequate and requires an additional form of non-unitary
evolution, which he introduced via his projection postulate (and
attributed to the consciousness of the observer).

Therefore, the self-consistency and the requirement of the independence
of the results from the (arbitrarily chosen) boundary between the
'object' and the 'apparatus' carry implications which were first followed
through, as far as they can go, by von Neumann.

Your arguments so far are the standard "pedagogical" song and dance
(with all its capricious dictums and taboos, slippery twists and turns,
pulled out of thin air), the usual little initiation rite played to the
perplexed physics students to entrance them just enough so they can
"shut up and calculate." My master's thesis was on the topic of the QM
measurement problem, and after hundreds of papers, preprints,
monographs, discussions with my professors ... etc, I was more
perplexed than before I started. It wasn't until some years later,
after leaving academia to work in industry, where I got a chance to see
how the quantum optics experiments were really done (unlike the magic
tricks shown to students e.g. see [1] for a recent shameless example),
that I realized that what I thought all along were the basic
experimental facts which needed to be explained by QM (such as double
slit or beam splitter anticorrelation "mystery", or Bell's inequality
violations) weren't the facts at all. They turned out to be simple and
entirely non-mysterious experimental phenomena that a 19th century
physicist would have no problem with, but merely wrapped in a peculiar
Quantum Optical jargon, in which the common terms such as "counts" and
"correlations" are quietly redefined (in references going back,
ultimately all to the Glauber's 1964 Les Houches Lectures [2], the
master document of the Quantum Optics) so that the QO-counts, which
they coincidentally also call just "counts", can be negative (via the
mostly unmentioned standard QO subtractions), where the
QO-correlations, which they also call "correlations", don't correlate
any events at all (Glauber's functions), where the QO-probabilities and
QO-information, which they also call "probabilities" and "information"
can be negative, etc.

------------------------------------------------------------
1. J.J. Thorn, M.S. Neel, V.W. Donato, G.S. Bergreen, R.E. Davies,
M. Beck, "Observing the quantum behavior of light in an undergraduate
laboratory," Am. J. Phys., Vol. 72, No. 9, 1210-1219 (2004).
Preprint: http://marcus.whitman.edu/~beckmk/QM/grangier/Thorn_ajp.pdf
Experiment Home Page: http://marcus.whitman.edu/~beckmk/QM/

2. R. J. Glauber, "Optical coherence and photon statistics"
in Quantum Optics and Electronics, ed. C. de Witt-Morett,
A. Blandin, and C. Cohen-Tannoudji.
Gordon and Breach, New York, 1965, pp. 63-185.


3. See also a recent Physics Forum thread with more detailed
discussion of [1] & [2] and additional references:

http://www.physicsforums.com/showthread.php?t=71297

For a discussion of the implication of Barut's self-field ED for
QM measurement theory see:

Eugene Stefanovich

unread,
Aug 12, 2005, 7:16:44 PM8/12/05
to

nightlight wrote:
>>QM does not tell you what the electron looked like before
>>the measurement.
>>It throws at you a bunch of formulas: wave functions, Hilbert
>>space, Schroedinger equation, projection postulate, etc. These
>>formulas (if properly applied) tell you a lot about what you
>>measure (the distribution of spots on the fluorescent screen), but
>>they tell you exactly zero about how electrons got there.
>
>
> That line of argument is a red herring. The point I was making was
> not about the location of the 'electron' or 'photon' before detection,
> or generally about values whose existence QM says nothing about. In
> contrast, the steps (A1)-(A3) from the earlier post show that there
> are values T1, T2,... which the QM Measurement Theory says _do exist_,
> but for which the QM formalism lacks any counterpart capable of
> modeling / computing them. That is the hole in QM that von Neumann's
> "observer's consciousness", Everett's "universe branching", and GRW's
> "spontaneous collapse" were meant to close.

The steps you suggested

(A1) The two modes/rules of change of Psi, the mode U: unitary
(dynamical) and mode P: collapse (projection postulate, non-dynamical)
are part of the standard QM axiomatics (by necessity, since the U mode
cannot model the occurrence of single results in the experiment on
arbitrary superposition).

(A2) The modes U and P are mutually exclusive (except in the trivial
case of 'no state change', U = P = Identity), i.e. they predict
different new states from the same old state Psi0: U Psi0 = Psi_u
<different from> P Psi0 = Psi_p. Therefore, at any one time, only one
of the U and P modes controls the change of the state Psi: whenever
the state does change, U Psi <> P Psi.

(A3) From (A2) it follows that for the change of Psi in time there is
a sequence of times T1 < T2 < T3 < ... such that Psi changes by rule U
for t<T1, by rule P for t>=T1, then by rule U for t>=T2, etc.

are purely mathematical manipulations. I think you are making a mistake
by trying to assign some physical meaning to formal mathematical
objects. These objects have no physical meaning. There are no
wavefunctions flying around us as little clouds and "collapsing" from
time to time. Wavefunctions and state vectors live in the mathematical
world. The only connection between this mathematical world and the real
physical world is established when quantum mathematical formulas arrive
at some probability distribution or expectation value of an observable.
Only these numbers can be directly compared to what is measured.

Eugene.

nightlight

unread,
Aug 13, 2005, 2:23:12 AM8/13/05
to
> I think you are making a mistake by trying to assign some physical
> meaning to formal mathematical objects. These objects have no
> physical meaning. There are no wavefunctions flying around us
> as little clouds ...

Psi(x,t) describes exactly how the wavefunction 'flies around'. For
example, the quantized EM field (in the Heisenberg picture) evolves via
the plain Maxwell equations through vacuum or through linear optical
elements.
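As an illustrative sketch of that point (my own toy example, not taken
from any of the cited papers): for a free Gaussian wavepacket the
Schrodinger equation gives a closed-form width, so Psi(x,t) literally
tells you how the packet moves and spreads. Units are chosen so that
hbar = m = 1, and sigma0 is an arbitrary initial width.

```python
import math

# Width of a free Gaussian wavepacket under the Schrodinger equation:
# sigma(t) = sigma0 * sqrt(1 + (hbar*t / (2*m*sigma0**2))**2)
# (a standard closed-form result; all numbers here are illustrative).

hbar, m, sigma0 = 1.0, 1.0, 0.5

def width(t):
    """RMS width of the packet at time t."""
    return sigma0 * math.sqrt(1.0 + (hbar * t / (2.0 * m * sigma0 ** 2)) ** 2)

widths = [width(t) for t in (0.0, 1.0, 2.0)]  # monotonically spreading
```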

> and "collapsing" from time to time.

Apparently you haven't read von Neumann's discussion in his 1932
monograph, where the need for the two forms of state evolution within
the linear QM is explained (which is probably still the best discussion
of that topic). Within the linear unitary theory, the collapse is
necessary to maintain the self-consistency of the theory. Only if you
abandon the linearized QM/QFT, as some physicists have done, from de
Broglie and Schrodinger in the 1920s, through E.T. Jaynes in the 1970s
and A. Barut in the 1980s (he died, unfortunately, in 1994, while his
self-field ED developments were in full swing:
http://phys.lsu.edu/~jdowling/barut.html ), can you avoid von Neumann's
conclusion on the necessity of the collapse.

> The only connection between this mathematical world and the real
> physical world is established when quantum mathematical formulas
> arrive at some probability distribution or expectation value of an
> observable. Only these numbers can be directly compared to what
> is measured.

The part that you're missing, and which was von Neumann's starting
point, is that you can apply these same operational rules (the Born
probability postulate you're talking about) in different ways, depending
on how you choose to define apparatus and object. Then the requirement
of self-consistency of the theory, specifically the independence of the
final results from the choice/convention of the object-apparatus
boundary, carries implications that go beyond the simplistic
one-move-ahead conclusions you seem to be stuck on. To continue with the
chess analogy, von Neumann follows up variations all the way to
checkmate, while you're talking only about how much material your next
legal move can capture, insisting that this one-move capture value is
all there is to be said about the position.

Returning to (A1)-(A3) -- that whole argument is in the model
space-time, using the rules of the model (the QM, the unitary evolution
and the collapse). The model (the QM formalism) has a time parameter,
too, and that is what T1, T2,... refer to. The point of that
argument is to show that QM (via the Measurement Theory, the projection
postulate) implies the existence (we're in the model realm here) of
values T1, T2,... which cannot be computed, not even in principle, by
the theory. Unlike the 'electron position' before the measurement
(which you keep bringing up), about whose existence QM has nothing
to say, here QM says that there are values T1, T2,... yet it can't
compute them, because it has no algorithm within the formalism to do so.
All this is about the properties of the model itself (where T1, T2,...
belong); there is no confusion between the 'reality' realm (e.g. how to
measure T1, T2,...) and the 'model' realm (how to compute, at least in
principle, T1, T2,...).

The reason for pointing out this absence of an algorithm within the QM
formalism for computing T1, T2,... was to indicate why von Neumann
(and numerous other 'anti-realists' since) had to reach beyond the
formalism (outside of the model) for a deus ex machina, e.g. the
observer's consciousness, to execute the missing algorithm. The linear
QM/QFT with its Measurement Theory is either a defective theory or an
inconsistent theory (the latter if you abandon von Neumann's
self-consistency requirement, which leads to the existence of the defect
and requires the deus ex machina as a temporary patch until a genuine
solution is found), and that is the source of over seven decades of
unsettled arguments. (I am using the term "defect" instead of the usual
characterization of the problem as "incompleteness", since the latter
implies one can merely add the missing piece without abandoning the rest
[retaining it only as one among possible computational approximations].
The position of the present QM/QED vs. Barut's self-field ED is
analogous to the position of Newtonian physics with respect to Special &
General Relativity -- the older theory is fundamentally defective in
both cases.)

Einstein, along with Schrodinger, de Broglie, ... through Jaynes and
Barut in recent times, offers a much more acceptable (in the long run)
way out -- drop the linearity and the measurement problem goes away. It
was only in Barut's work that Einstein's program, at least as far as
QM/QED goes, clicked together. Barut has shown explicitly what
Einstein (and a few others since) had conjectured five decades earlier:
the linear QM/QFT is merely a linearized approximation of the coupled
Maxwell-Dirac/Schrodinger classical fields. The linearization is
achieved by switching from the deterministic but nonlinear evolution of
the original Maxwell-Dirac fields to the indeterministic/ensemble
evolution (obtained via the under-constrained variation of the action
performed by Barut's ansatz, which is a special case of Carleman
linearization in variational form) of the QM/QED.

While these developments may appear obscure at present (especially if
one doesn't know what question is being answered), I think the beauty
and conceptual clarity of this approach, along with the many new
possibilities it opens (e.g. a genuine computation of the fundamental
constants, such as alpha, which Barut sketched but never completed and
which with today's computers should be within reach), will eventually
win out over the 'quantum magic' dominating the current zeitgeist.

Eugene Stefanovich

unread,
Aug 14, 2005, 1:28:45 PM8/14/05
to
"nightlight" <night...@omegapoint.com> wrote in message
news:1123904389....@g47g2000cwa.googlegroups.com...

> > I think you are making a mistake by trying to assign some physical
> > meaning to formal mathematical objects. These objects have no
> > physical meaning. There are no wavefunctions flying around us
> > as little clouds ...
>
> Psi(x,t) describes exactly how the wavefunction 'flies around'. For
> example, the quantized EM field (in the Heisenberg picture) evolves via
> the plain Maxwell equations through vacuum or through linear optical
> elements.

The correct mathematical description of the quantum dynamics of
particles (and fields) is provided by vectors and operators in
infinite-dimensional Hilbert (or Fock) spaces. Only on rare occasions
(e.g., the position-space wave function of a single particle) can this
description be visualized as a "cloud of probability" in real 3D space.
I think that your attempts to visualize Hilbert-space creatures as
living in our real world bring more harm than good. Please leave them
where they belong - in their mathematical world.

> > and "collapsing" from time to time.
>
> Apparently you haven't read von Neumann's discussion in his 1932
> monograph, where the need for the two forms of state evolution within
> the linear QM is explained (which is probably still the best discussion
> of that topic). Within the linear unitary theory, the collapse is
> necessary to maintain the self-consistency of the theory. Only if you
> abandon the linearized QM/QFT, as some physicists have done, from de
> Broglie and Schrodinger in the 1920s, through E.T. Jaynes in the 1970s
> and A. Barut in the 1980s (he died, unfortunately, in 1994, while his
> self-field ED developments were in full swing:
> http://phys.lsu.edu/~jdowling/barut.html ), can you avoid von Neumann's
> conclusion on the necessity of the collapse.

What's wrong with the collapse? The collapse is a part of the
mathematical model which, as I am trying to explain here, has very
little to do with reality (only the final numbers calculated by this
model can be compared with experiment). The collapse in the mathematical
world of wave functions does not imply that real physical systems evolve
by jumps. You can arrive at this (wrong) conclusion only if you identify
the wave function with the physical system itself. I refuse to do that.
I say that these two beasts live in different worlds.

> > The only connection between this mathematical world and the real
> > physical world is established when quantum mathematical formulas
> > arrive at some probability distribution or expectation value of an
> > observable. Only these numbers can be directly compared to what
> > is measured.
>
> The part that you're missing, and which was von Neumann's starting
> point, is that you can apply these same operational rules (the Born
> probability postulate you're talking about) in different ways, depending
> on how you choose to define apparatus and object. Then the requirement
> of self-consistency of the theory, specifically the independence of the
> final results from the choice/convention of the object-apparatus
> boundary, carries implications that go beyond the simplistic
> one-move-ahead conclusions you seem to be stuck on.

If you accept that wave functions and their collapses belong to the abstract
mathematical world, then there is no problem in selecting the boundary
between the physical system and the measuring apparatus. No matter how
you choose this boundary, the predictions of quantum mechanics will agree
with experiment.
You should only remember that the word "predictions" refers only to
experimentally observable effects. I agree that with different choices
of the boundary the QM descriptions may look strikingly different. But
these differences are related only to the non-observable mathematical
world.

> Returning back to (A1)-(A3) -- that whole argument is in the model
> space-time using the rules of the model (the QM, the unitary evolution
> and the collapse). The model (the QM formalism) has time parameter,
> too, and that is what the T1, T2,... refer to. The point of that
> argument is to show that QM (via Measurement Theory, the projection
> postulate) implies the existence (we're in the model realm here) of
> values T1, T2,... which cannot be computed, not even in principle, by
> the theory. Unlike the 'electron position' before the measurement
> (which you keep bringing up) about whose existence the QM has nothing
> to say, here the QM says that there are values T1, T2,... yet it can't
> compute them because it has no algorithm within the formalism to do so.

Are these times T1, T2,... related to some observable events? I guess not.
Then I don't care if QM cannot predict them. If QM couldn't predict
the lifetime of a radioactive nucleus, then I would worry very much.
Luckily, this doesn't happen. QM can predict (the probabilities of)
everything that can be measured.

> Einstein, along with Schrodinger, de Broglie, ... through Jaynes and
> Barut in recent times, offer a much more acceptable (in the long run)
> way out -- drop the linearity and the measurement problem goes away.

I am not sure if you can abandon the linearity of quantum mechanics
(the existence of linear superposition of states) so easily. The rules of
quantum mechanics follow from the postulates of quantum logic. These
postulates have very precise and (in principle) experimentally verifiable
meaning.
Any non-linear "generalization" of QM must violate some of these postulates,
i.e., violate some fundamental properties of measurements.

Eugene.

nightlight

Aug 15, 2005, 9:14:28 AM
> Correct mathematical description of quantum dynamics of particles
> (and fields) is provided by vectors and operators in infinite-dimensional
> Hilbert (or Fock) spaces.

The abstract Hilbert space aspects of the QM/QED are only a part of the
story. To connect with the experiment, which is accomplished by the
operational rules of QM/QED, you do need, among other steps, to go to
the space-time representation of the abstract Hilbert space elements.
The particular separation line in the QM/QED between the formalism and
the operational rules is an arbitrary historical convention (due mostly
to von Neumann's 1932 formalization), with no more fundamental content
than that of an author's choice on how to separate the course material
into chapters or lectures.

So, yes, in the _full_ model of physical phenomena, you can draw a line
so that on the one side of the line you have formalism with no explicit
3D space and on the other side all the rest (called operational rules),
where the explicit 3D space representation has to be used in order to
map the predictions of the full model to the readouts of the
instruments.

A different convention (such as Bohm's QM) may draw this line
differently, so that some of the space-time description is on the
formalism side of the line, as well as in the operational rules side.

Hence, your argument above is about conventions, not about substance.
De gustibus non est disputandum.


> Only on rare occasions (e.g., the position-space wave-function
> of a single particle) can this description be visualized as a "cloud
> of probability" in the real 3D space.

The EM field operators in Heisenberg picture evolve in regular 3D space
(e.g. via Maxwell equations, for vacuum or for linear optical
elements).

The evolution in 3N dimensional (N>1) configuration space of regular
multiparticle QM can be obtained, as shown by Barut, as a linearization
approximation of the 3D evolution of coupled classical Maxwell-Dirac
fields. The 3N dimensional approximation (which is the N particle QM
formalism) does differ experimentally from the exact Maxwell-Dirac
solutions, and the experiment matches the self-fields ED predictions,
not those of the N particle QM. More specifically:

> I am not sure if you can abandon the linearity of quantum mechanics
> (the existence of linear superposition of states) so easily. The rules
> of quantum mechanics follow from the postulates of quantum logic.
> These postulates have very precise and (in principle) experimentally
> verifiable meaning.
>
> Any non-linear "generalization" of QM must violate some of these
> postulates, i.e., violate some fundamental properties of measurements.

Indeed, Barut's self-fields predictions do deviate experimentally
from the predictions of the multiparticle QM (which is obtained as a
truncated linearization approximation of the Maxwell-Dirac coupled
fields, hence they are certainly not equivalent). This difference,
though, turns out to be precisely the radiative corrections, where the
QM is wrong and Barut's self-fields ED is right, and where QM needs
to be superseded by QED to obtain the experimentally correct results.
Barut's self-fields ED agree with the experiments (and QED) here as far
as he had carried out the computations (equivalent to the QED's alpha^5
order). So, the QM "works" in the sense of getting the numbers right,
but only to the extent that the truncated linear approximation of
Maxwell-Dirac dynamics (which is what the multiparticle QM formalism
is) works.

>> Returning back to (A1)-(A3) -- that whole argument is in the model
>> space-time using the rules of the model (the QM, the unitary evolution
>> and the collapse). The model (the QM formalism) has time parameter,

>> too, and that is what the T1, T2,... refer to. ....


>
> Are these times T1, T2,... related to some observable events? I guess not.

A wrong guess. Of course, they are related to observable events. The
T1, T2... operationally map to the most elemental experimental fact,
the times of occurrence of the "single results" on apparatus1 (which are
here, in A1-A3, treated dynamically as the object2 consisting of
object1 + apparatus1). While it is true that you can treat apparatus1 as
the QM 'observer', in which case the "single result" on apparatus1 is
the result of the Born postulate, you can also treat the combined system
object1+apparatus1 as an object2, evolving via unitary evolution. The
Born postulate in this new object2-convention could be applied only to
some hypothetical new apparatus2 (which could be a 2nd observer
observing the 1st observer, apparatus1), but not to the apparatus1 any
more.

Hence in the object2-convention apparatus1 cannot use the Born postulate
to allow you to declare that apparatus1 yields a "single result" in a
single try. Yet, if you treat the experiment in the object1-convention,
the QM Measurement Theory tells you that there is a sequence of "single
results", thus there is a member of that sequence, the "single result"
for a given try (which is, obviously, also what the experiment shows).

{ Note that the "single result" discussed here, besides being the most
elemental experimental fact, is an absolutely essential ingredient of
the QM Measurement Theory, since without it you can't even begin to
define the operational mapping for the QM probabilities occurring in
the Born postulate (the probabilities are operationally mapped to the
normalized counts of the "single results"). For example, you can't
coherently claim that there is no "single result", while claiming there
is a count of single results, or ratios of limits of such counts.}

Now, the unitary (call it type-1) evolution of object1+apparatus1 cannot
produce anything that, in the object2-convention, operationally maps to
the "single result" at T1 on apparatus1, which the object1-convention
claims (and the experiment shows) to exist. Therefore, von Neumann had
to introduce the type-2 evolution of state, the collapse of state,
which allows the object2-convention to have the operational mapping for
the "single result" on apparatus1. The time T1 corresponds to the time
when the "single result" occurs in a given try (these are experimentally
accessible values, at least approximately, e.g. by wiring a timer to a
photo-detector, where timer+detector are components of apparatus1), i.e.
to the time when type-1 evolution is replaced by the type-2 evolution.
The time T2 (where T2>=T1) corresponds to the time when the type-2
evolution yields control of the state back to the type-1 evolution. As
explained in A1-A3, if you have two _mutually exclusive_ types of state
change/evolution, then these times T1 and T2 must exist. The fundamental
QM defect is that it has no algorithm to compute the numbers T1 or T2
(not even in principle), which the logical consistency requirement
implies to exist.

Of course, if the logical consistency isn't a requirement (as in the
"pedagogical" expositions, where the teacher's authority combined with
the students' confusion will allow any number of incoherent components
of a theory to coexist "harmoniously"), you have "no problem." For
example, the teacher can make our first step above go away by declaring:
in the object2-convention you're not allowed (`cause I say so) to demand
an operational mapping for the "single result" on apparatus1. What then
is the time T1, the student asks, which the attached timer within
apparatus1 has recorded? Well, the teacher says, the T1 number isn't
there until you, which is the apparatus2, read the apparatus1. Yes,
professor, but what if I was a part of that apparatus1 and I saw the T1
on the timer before you (the new apparatus2) had asked me what the
result was. Listen kid, you think you saw it, but are you going to
believe your lying eyes or what I am telling you is going on? You didn't
see it, period. There was no T1 on that timer until I asked you, got it
kid? Ok, well, you are right and I was wrong,... but what if you are
also part of the apparatus1, and you just said T1 was there only after
you asked me? Doesn't that mean that apparatus1 had T1 before some other
apparatus2 measured it? Don't be dense kid, I said "after" but didn't
say when this "after" is - if I am part of the apparatus1, then this
"after" means (because I say so) that there is no T1 until after a third
person asks me what I heard from you. Thank you very much professor, I
get it now, but what if there is no other person, thus no apparatus2,
let's say if apparatus1 is defined to be the entire universe? Well, then
we get in some more universes... period. Anything goes and "works" just
fine in this kind of word game.

Why go to all that trouble? What is the gain when a perfectly coherent
theory exists, in which the current QM composite system formalism (the
formal basis of the Measurement Theory, entanglement, etc) arises as
merely a particular kind of linear approximation of the coupled
classical fields, and where there is just one, non-linear, kind of
evolution (thus there is no collapse and no superposition of the
object1+apparatus1 state, and there is no entanglement of object1 and
apparatus1)?

It is not as if there was ever an experiment which had excluded a
purely local nonlinear field theory. Go ask experimenters -- there never
was any such experiment. (Note that Barut's self-field ED predicts
correctly all the radiative corrections, all of the crown jewels of the
legendary QED accuracy, as far as his calculations were carried out, to
the alpha^5 order.) There is only a wishful conjecture, a pipe dream
with no actual design, that such an experiment (so-called "loophole
free" Bell test, as this 'pipe dream' is euphemistically labeled) will
be done some day when the technology has advanced enough. How that will
be done, no one knows (note that the so-called "ideal detector" isn't a
design, or any kind of how-to operational recipe, but just a pair of
words written one after the other). I can see why someone selling
investment opportunities in his Quantum Computing company would prefer
to advocate "magical version" of QM. But why should the rest of us buy
into all the nonsense and jump all these silly hoops? Some day, when
our 'gender neutral physical science studies' become plain old
'physics' again, kids will laugh at our silly verbal acrobatics,
wondering: Why? What possessed them?

Eugene Stefanovich

Aug 15, 2005, 12:57:57 PM

nightlight wrote:

> Indeed, the Barut's self-fields predictions do deviate experimentally
> from the predictions of the multiparticle QM (which is obtained as a
> truncated linearization approximation of the Maxwell-Dirac coupled
> fields, hence they are certainly not equivalent). This difference,
> though, turns out to be precisely the radiative corrections, where the
> QM is wrong and Barut's self-fields ED is right, and where QM needs
> to be superseded by QED to obtain the experimentally correct results.
> Barut's self-fields ED agree with the experiments (and QED) here as far
> as he had carried out the computations (equivalent to the QED's alpha^5
> order). So, the QM "works" in the sense of getting the numbers right,
> but only to the extent that the truncated linear approximation of
> Maxwell-Dirac dynamics (which is what the multiparticle QM formalism
> is) works.

> It is not as if there was ever an experiment which had excluded a
> purely local nonlinear field theory. Go ask experimenters -- there never
> was any such experiment. (Note that Barut's self-field ED predicts
> correctly all the radiative corrections, all of the crown jewels of the
> legendary QED accuracy, as far as his calculations were carried out, to
> the alpha^5 order.) There is only a wishful conjecture, a pipe dream
> with no actual design, that such an experiment (so-called "loophole
> free" Bell test, as this 'pipe dream' is euphemistically labeled) will
> be done some day when the technology has advanced enough. How that will
> be done, no one knows (note that the so-called "ideal detector" isn't a
> design, or any kind of how-to operational recipe, but just a pair of
> words written one after the other).


So, if I understand it right, you are saying that there are two
alternative theories: Barut's self-field ED and the traditional QM/QED.
Is there an experiment for which these two theories predict different
outcomes? Can this dispute be decided once and for all by measuring
something?

Eugene.

nightlight

Aug 15, 2005, 4:51:09 PM
> So, if I understand it right, you are saying that there are two
> alternative theories: Barut's self-field ED and the traditional QM/QED.
> Is there an experiment for which these two theories predict different
> outcomes? Can this dispute be decided once and for all by
> measuring something?

A "loophole free" Bell test (the genuine violation of Bell
inequalities) would experimentally eliminate Barut's self-field ED.
Similarly, genuine sub-poissonian photo-counts (without the QO
subtractions) would also eliminate it. Neither of these two critical
tests has worked as yet, even though the sub-poissonian tests have been
pursued for over five decades and Bell tests for over three decades.
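For reference, here is the standard CHSH arithmetic behind such a test
(my own sketch of textbook material, not anything from Barut's papers):
QM predicts the singlet correlation E(a,b) = -cos(a-b), which at the
usual analyzer angles gives |S| = 2*sqrt(2), while any local
hidden-variable model obeys |S| <= 2. A "loophole free" test would have
to exhibit this violation on raw, unsubtracted counts:

```python
import numpy as np

# Singlet-state correlation predicted by QM for analyzer angles a, b:
def E(a, b):
    return -np.cos(a - b)

# Standard CHSH angle choices (a, a' on one side; b, b' on the other):
a, ap = 0.0, np.pi / 2
b, bp = np.pi / 4, 3 * np.pi / 4

# CHSH combination; local hidden-variable models satisfy |S| <= 2.
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(S)   # -2*sqrt(2), i.e. |S| = 2*sqrt(2) > 2
```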

There are views (e.g. Barut's, plus many of the Stochastic ED
proponents) which are getting stronger as decades of experimental
failures pass, that the QM prediction which violates Bell inequalities
(or produces sub-poissonian correlations) is an operational
misinterpretation of QM and not a genuine prediction. Indeed, Bell's
own papers treat this QM "prediction" more as a heuristic, a hint as to
what kind of phenomena to look for, not a genuine QM prediction. The
physics forum thread mentioned earlier discusses similarly the QO
"prediction" of sub-poissonian light -- a careful reading of the QO
founding papers (to get to the bottom of what is the stuff the Quantum
Opticians call "counts" and "correlations") and examination of their
"proofs" show that there is no genuine sub-poissonian prediction on
_actual_ photodetector counts at all, but only on the reconstructed
(after standard QO subtractions) counts, Glauber's "counts" (these
are "counts" of free EM field "photons", or Dirac QED photons), i.e. the
QO "nonclassicality", "counts", or "correlations" are terms of art
which don't mean what physicists outside QO understand by these terms.
This QO-speak has an effect of obfuscating the actual failures of all
the experiments to show any genuine non-classicality (as understood
outside of QO).
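As a toy numerical aside (my own illustration with made-up numbers, not
a calculation from the QO papers under discussion), here is why raw
counts from a low-efficiency detector look nearly Poissonian even for an
ideally sub-Poissonian source, which is where the disputed
subtraction/reconstruction step enters:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_photons, eta, trials = 5, 0.1, 200_000   # Fock state |5>, 10% efficiency

# The ideal source delivers exactly n_photons per trial: variance 0,
# Fano factor Var/Mean = 0 (maximally sub-Poissonian). An inefficient
# detector thins the counts binomially, raising the Fano factor of the
# *raw* counts to 1 - eta, i.e. nearly Poissonian.
counts = rng.binomial(n_photons, eta, size=trials)
fano = counts.var() / counts.mean()
print(fano)   # close to 1 - eta = 0.9
```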

From Barut's self-field ED perspective, the regular QM, which does
differ from the self-field ED experimentally (QM being a truncated
linearization of self-field ED), is already experimentally falsified by
the observed radiative corrections which regular QM doesn't predict.
These same experiments had also falsified the old QED of Dirac,
Heisenberg and Jordan, which was a direct extension of QM to EM fields.


The second QED (of Feynman, Schwinger, Dyson) was developed precisely
to fit these very experiments, thus it obviously "predicts" the numbers
it was fit to, which are the same numbers predicted by the self-field
ED (without a need for any data fitting; if Schrodinger had been able to
carry out the computations of his first attempt at this approach, the
radiative corrections would have been known in the 1920s).

Being a linearized approximation of infinite order, QED should yield
the same radiative corrections as self-field ED to all orders for which
the present QED recipe can give finite numbers (Barut's work was cut
short before he could demonstrate this). The main difference is that
self-field ED is a fundamental theory, while the actual QED that yields
testable numbers is an effective theory, being defined by its
perturbative mathematical-like computational recipes (which are just a
subset of consequences of the self-field ED), which were constructed to
fit the already known experimental numbers. The self-field ED has no
infinities or any other dubious procedures or interpretational
problems. Also, the self-field ED could, at least in principle, produce
fundamental constants (such as electron charge, or alpha) if the
non-linear equations could be solved accurately enough to obtain
explicit soliton solutions (whose existence has so far only been shown
for both the Maxwell-Dirac and the Maxwell-Dirac-Einstein equations).

The principal value (in my view) of the self-fields ED, which extends
to any QFT, is the fundamental explanation of what the field (or 2nd)
quantization is -- it is merely a linearization algorithm (Carleman
linearization in disguise). Hence the field quantization adds no new
physics to the coupled Maxwell-Dirac system. Once this realization
becomes mainstream (as it will), the QM non-locality and the rest of QM
"magic" (such as entanglement and Quantum Computing) will be forgotten
as a historical fluke.
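The Carleman linearization mentioned above can be illustrated on a toy
model (my own sketch, not Barut's actual Maxwell-Dirac construction).
For the nonlinear ODE dx/dt = -x^2, the monomials y_n = x^n satisfy the
infinite *linear* system dy_n/dt = -n*y_(n+1); truncating at order N
gives a finite linear system whose first component approximates x(t):

```python
import numpy as np

def carleman_x(x0, t, N):
    """Approximate x(t) for dx/dt = -x^2 by Carleman linearization.

    The variables y_n = x^n (n = 1..N) obey the linear system
    dy_n/dt = -n * y_(n+1); truncating at n = N gives y' = A y with
    A nilpotent, so exp(A t) is a finite (hence exact) Taylor sum.
    """
    A = np.zeros((N, N))
    for n in range(1, N):        # superdiagonal entries A[n-1, n] = -n
        A[n - 1, n] = -n
    y0 = np.array([x0 ** n for n in range(1, N + 1)])
    expAt = np.eye(N)
    term = np.eye(N)
    for k in range(1, N):        # A**N = 0, so the sum terminates
        term = term @ (A * t) / k
        expAt = expAt + term
    return (expAt @ y0)[0]       # y_1(t) approximates x(t)

x0, t = 1.0, 0.5
exact = x0 / (1.0 + x0 * t)      # closed-form solution of dx/dt = -x^2
approx = carleman_x(x0, t, N=20)
print(approx, exact)             # agree to about 1e-6 here
```

The truncated linear system reproduces the exact solution better and
better as N grows (for |x0*t| < 1), which is the sense in which an
infinite linear system can stand in for a nonlinear one.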

Igor Khavkine

Aug 16, 2005, 12:19:37 AM
On 2005-08-15, nightlight <night...@omegapoint.com> wrote:

> Indeed, the Barut's self-fields predictions do deviate experimentally
> from the predictions of the multiparticle QM (which is obtained as a
> truncated linearization approximation of the Maxwell-Dirac coupled
> fields, hence they are certainly not equivalent). This difference,
> though, turns out to be precisely the radiative corrections, where the
> QM is wrong and Barut's self-fields ED is right, and where QM needs
> to be superseded by QED to obtain the experimentally correct results.
> Barut's self-fields ED agree with the experiments (and QED) here as far
> as he had carried out the computations (equivalent to the QED's alpha^5
> order). So, the QM "works" in the sense of getting the numbers right,
> but only to the extent that the truncated linear approximation of
> Maxwell-Dirac dynamics (which is what the multiparticle QM formalism
> is) works.

I'm curious about Barut's approach. So, perhaps you can help me
understand better the differences and similarities between it and
standard QFT.

First, let me say how I see the connection between non-linear classical
field theory, QFT, many particle quantum mechanics, and classical
particle mechanics.

                         Quantization
Classical Field Theory <---------------> QFT
                        Classical Limit   ^
                                          |
              Fock Space                  | Wave Function
              Construction                | Representation
                                          |
Classical                Quantization     v
Particle Mechanics <---------------> Many Particle QM
                        Classical Limit

Let me clarify the two steps that are less known than they should be.
In many particle QM (MPQM), states are represented by wave functions of
3N variables, where N represents the number of particles and may take on
different values. Before taking the classical limit, we fix N and, as
hbar -> 0, we obtain Hamiltonian mechanics of N particles. Fock space
construction relates the space of all symmetric (resp. antisymmetric)
wave functions to the modes of a number of bosonic (resp. fermionic)
quantum harmonic oscillators, one for each mode of the single particle
Hilbert space in MPQM. On the other hand, given a QFT with a Fock space
and an algebra of field operators, each N-particle state |N> can be
expressed as a wavefunction using the formula <N|psi(x)psi(y)...|0> and
other similar ones, with psi(x) being field operators and |0> the vacuum
state. Note that, in the classical limit of a QFT, a fermionic field
must reduce to a Grassmann-valued classical field.

Both Fock space construction and wave function representation are exact
equivalences, no approximation here. Not so for the other steps. The
classical limit is an approximation, hbar -> 0. And quantization is not
always unique, although it is reasonably so for important examples.
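The Fock-space/wave-function dictionary described above can be written
out explicitly (a standard textbook form for bosons; this is my
paraphrase, and normalization conventions vary):

```latex
% N-particle wave function extracted from a Fock-space state |Psi>:
\Psi_N(x_1,\dots,x_N) \;=\; \frac{1}{\sqrt{N!}}\,
    \langle 0 |\, \hat\psi(x_1)\cdots\hat\psi(x_N) \,| \Psi \rangle ,
% and, conversely, the Fock state rebuilt from its wave functions:
| \Psi \rangle \;=\; \sum_N \frac{1}{\sqrt{N!}} \int d^3x_1 \cdots d^3x_N\,
    \Psi_N(x_1,\dots,x_N)\,
    \hat\psi^\dagger(x_1)\cdots\hat\psi^\dagger(x_N)\, | 0 \rangle .
```

With \Psi_N symmetric and normalized, these two formulas are mutually
consistent and the correspondence is exact, as stated in the post.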

My first question is whether Self-Field ED fits into any of the above
theory categories, or does it have to be considered separately? If
applicable, which one of the arrows does "Carleman linearization"
correspond to?

I would also like to know exactly where standard QED and Self-Field ED
agree or part ways on experimental predictions. I don't want to discuss
measurement theory here. For me, a theory is a black box that takes
input parameters and spits out numbers that can be checked with existing
apparatus. If a theory does not make a prediction for a measurement that
we can't make, that doesn't bother me much. But it will if the
measurement can in fact be made.

You mentioned that, since Barut's theory is non-linear, state
superposition goes out the window. In that case, does it account for
Stern-Gerlach and interference-type experiments? If superpositions are
possible for weak fields (in the linear approximation) at what field
strength should non-linear effects become visible?

The quantization of electromagnetic excitations (photons) as well as
excitations of other fields (electrons, protons, etc.) is intimately
related to the Fock space structure of QFT and is seen all around us.
Does Barut's theory account for that as well?

QED takes the particle masses and the fine structure constant as input.
It outputs a great deal of predictions, including cross sections, some
of which show some of the best known agreement with experiment. Two
examples are the electron's gyromagnetic ratio and the Lamb shift. Does
Barut's theory agree with experiments to the same precision as QED? And
if written as power series in alpha, do the coefficients of these
calculated quantities agree between QED and Self-Field ED? Does the
agreement break at some power of alpha?

You also mentioned your doubts about the results of Bell's inequality
tests. Can Barut's theory produce a prediction for the correlations
measured in these experiments (however these correlations are defined)?
If so, how does the prediction compare to the experimental data? Better
or worse than QM?

Thanks.

Igor

Eugene Stefanovich

Aug 18, 2005, 11:17:51 PM

Igor Khavkine wrote:

> First, let me say how I see the connection between non-linear classical
> field theory, QFT, many particle quantum mechanics, and classical
> particle mechanics.
>
>                          Quantization
> Classical Field Theory <---------------> QFT
>                         Classical Limit   ^
>                                           |
>               Fock Space                  | Wave Function
>               Construction                | Representation
>                                           |
> Classical                Quantization     v
> Particle Mechanics <---------------> Many Particle QM
>                         Classical Limit
>


Dear Igor,

I don't have much hope that you'll accept my position, because
it flies in the face of everything you've learned at school.
I understand it well. However, if there is a chance that somebody
will be encouraged to take a fresh look at things, I'll take this
chance...

I would like to modify your diagram in the following way:

Classical                Quantization
Particle Mechanics <---------------> Variable # of Particles QM
                        Classical Limit

You may note that I dropped QFT from that picture. In my view,
QFT is not a complete physical theory. It becomes complete only
after making the unitary transformation to dressed particles.
This transforms QED to the relativistic QM formulated in the Fock space
(i.e., with variable number of particles) and, incidentally,
removes all QFT problems related to infinities and bare particles.

There is no classical field theory in my picture either.
For example, I would place Maxwell's theory somewhere between
QM and classical particle mechanics. In my view, Maxwell's theory
is a partial classical limit of QM in which heavy particles
(e.g., electrons) are treated classically, but the treatment of
photons remains quantum. The electric and magnetic
fields in Maxwell's theory are just attempts to describe wave
functions of (a very large number of) photons. In the presence of
charged particles, these fields also incorporate instantaneous
interparticle potentials (e.g., the Coulomb potential).

Eugene.

Igor Khavkine

Aug 19, 2005, 1:47:06 AM
In article <43051258...@synopsys.com>, Eugene Stefanovich wrote:
>
>
> Igor Khavkine wrote:
>
>> First, let me say how I see the connection between non-linear classical
>> field theory, QFT, many particle quantum mechanics, and classical
>> particle mechanics.
>>
>>                          Quantization
>> Classical Field Theory <---------------> QFT
>>                         Classical Limit   ^
>>                                           |
>>               Fock Space                  | Wave Function
>>               Construction                | Representation
>>                                           |
>> Classical                Quantization     v
>> Particle Mechanics <---------------> Many Particle QM
>>                         Classical Limit
>>
> Dear Igor,
>
> I don't have much hope that you'll accept my position, because
> it flies in the face of everything you've learned at school.

I have many problems with your position, but that is not one of them.

> I would like to modify your diagram in the following way:
>
> Classical                Quantization
> Particle Mechanics <---------------> Variable # of Particles QM
>                         Classical Limit
>
> You may note that I dropped QFT from that picture. In my view,
> QFT is not a complete physical theory. It becomes complete only
> after making the unitary transformation to dressed particles.
> This transforms QED to the relativistic QM formulated in the Fock space
> (i.e., with variable number of particles) and, incidentally,
> removes all QFT problems related to infinities and bare particles.

Whatever your opinion of QFT, it is equivalent to a quantum theory of a
variable number of particles. You are free not to make use of this
mathematical equivalence; I myself and many others choose to use it.

Moreover, your diagram is incomplete. Particle mechanics is the
classical limit of a quantum theory with a fixed number of particles. I
explained this in the paragraphs below the diagram in my previous post.
When you allow the number of particles to vary (use Fock space),
strictly speaking you have a different quantum theory. This different
quantum theory also has a different classical limit, which happens to be
a field theory.

> There is no classical field theory in my picture either.

It's always there. All you've said is that you choose not to use it.

> For example, I would place Maxwell's theory somewhere between
> QM and classical particle mechanics. In my view, Maxwell's theory
> is a partial classical limit of QM in which heavy particles
> (e.g., electrons) are treated classically, but the treatment of
> photons remains quantum.

There is no need to place anything half way between quantum and
classical. Maxwell theory is purely classical. Photon number is not
conserved. So, as per above, the classical limit of the photon sector is
a field theory. Strictly speaking, electron number is not conserved
either, but under low energy conditions the approximation that it is can
be made. Moreover, electrons are massive and hence localizable.
Therefore the classical limit of the electron sector is a particle
theory.

It is true that the single photon wave function and the photon sector
field theory agree on their linear parts (Maxwell's equations). But this
is true by construction, and no longer holds when more than one photon
is present.

> The electric and magnetic
> fields in Maxwell's theory are just attempts to describe wave
> functions of (a very large number of) photons.

True. Large number of photons => hbar -> 0 => Maxwell field theory.
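The chain "large number of photons => hbar -> 0 => Maxwell" can be made
quantitative in a few lines (a generic textbook fact, my own sketch, not
from the thread): a coherent state has Poissonian photon statistics, so
its relative photon-number fluctuations scale as 1/sqrt(nbar) and vanish
in the bright-field limit:

```python
import math

# Coherent state with mean photon number nbar: Delta n = sqrt(nbar),
# so the relative fluctuation Delta n / nbar = 1 / sqrt(nbar) -> 0
# as the field gets bright, i.e. the amplitude becomes classical.
for nbar in (10.0, 1e4, 1e8):
    rel = 1.0 / math.sqrt(nbar)
    print(f"nbar={nbar:g}  delta_n/nbar={rel:.1e}")
```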

> In the presence of
> charged particles, these fields also incorporate instantaneous
> interparticle potentials (e.g., the Coulomb potential).

Only in special choices of gauge. Can of worms. I suggest not opening it
until the Poincare noninvariance "proof" thread is exhausted. My last
post to that thread is stuck somewhere in the moderation queue.

Igor

Eugene Stefanovich

Aug 20, 2005, 6:47:57 AM
nightlight wrote:
>>So, if I understand it right, you are saying that there are two
>> alternative theories: Barut's self-field ED and the traditional QM/QED.
>> Is there an experiment for which these two theories predict different
>> outcomes? Can this dispute be decided once and for all by
>> measuring something?

> Being a linearized approximation of infinite order, QED should yield
> the same radiative corrections as self-field ED to all orders for which
> the present QED recipe can give finite numbers (Barut's work was cut
> short before he could demonstrate this).

So, am I right that the short answer to my question is "no"?
All experimental predictions of both theories coincide so far?

> The self-field ED has no
> infinities or any other dubious procedures or interpretational
> problems.

I agree that QED has serious difficulties related to infinities,
bare particles, etc. In my book http://arxiv.org/abs/physics/0504062
I was able to solve these problems without changing the postulates of
quantum mechanics. So, I am not convinced that this is a strong argument
in favor of Barut's theory.


> Also, the self-field ED could, at least in principle, produce
> fundamental constants (such as electron charge, or alpha) if the
> non-linear equations could be solved accurately enough to obtain
> explicit solition solutions (which were so far only shown to exist for
> both, the Maxwell-Dirac and the Maxwell-Dirac-Einstein equations).

I don't count this as a valid argument until such solutions are found.
You are probably going to express electron charge and alpha
through some "more fundamental" constants. What are they?
The number pi? The golden ratio?

Eugene.

Eugene Stefanovich

Aug 20, 2005, 7:17:45 AM
"Igor Khavkine" <igo...@gmail.com> wrote in message
news:slrndgasck....@bigbang.richmond.edu...

> Whatever your opinion of QFT, it is equivalent to a quantum theory of a
> variable number of particles. You are free not to make use of this
> mathematical equivalence, I myself and many others choose to use it.

I agree completely: QFT is equivalent to a quantum theory of a variable
number of particles. And the best way to see it explicitly is to use the
"dressing transformation" which eliminates bare and virtual particles
and reduces QFT to a theory of real particles interacting at a distance.

> Particle mechanics is the
> classical limit of a quantum theory with a fixed number of particles. I
> explained this in the paragraphs below the diagram in my previous post.

You are right. Traditional classical particle mechanics conserves the
number of particles. But this is only because classical mechanics was
formulated long before E = mc^2 was discovered and the possibility of
converting energy to mass was understood. Nothing forbids us from
formulating a classical theory in which particles move along
well-defined trajectories and creation/annihilation processes are
allowed (trajectories may start and terminate at certain points).

> When you allow the number of particles to vary (use Fock space),
> strictly speaking you have a different quantum theory. This different
> quantum theory also has a different clasical limit, which happens to be
> a field theory.

I see a logical gap in your statement. I think if I take the classical
limit of a quantum theory with a variable (but finite) number of
particles, I should obtain a classical theory with a variable (but
finite) number of particles. Nobody has constructed such a theory (as
far as I know), but this is not a reason to believe that such a theory
cannot exist.

> > For example, I would place Maxwell's theory somewhere between
> > QM and classical particle mechanics. In my view, Maxwell's theory
> > is a partial classical limit of QM in which heavy particles
> > (e.g., electrons) are treated classically, but the treatment of
> > photons remains quantum.
>
> There is no need to place anything half way between quantum and
> classical. Maxwell theory is purely classical. Photon number is not
> conserved. So, as per above, the classical limit of the photon sector is
> a field theory. Strictly speaking, electron number is not conserved
> either, but under low energy conditions the approximation that it is can
> be made. Moreover, electrons are massive and hence localizable.
> Therefore the clasical limit of the electron sector is a particle
> theory.

I fully agree that electrons can and should be treated classically.
However, photons are a different matter. They have very peculiar properties:
1) Photons can be easily emitted and absorbed.
2) There is a huge number of them. Billions and billions of photons
are emitted by an ordinary lightbulb, so any attempt to describe this
situation in the language of particles (either quantum or classical)
would be suicidal.
3) Photons have zero mass, so quantum effects (such as diffraction
and interference) for photons could be easily seen hundreds of years
before the invention of QM.

So, in my view, Maxwell's theory is actually a hybrid in which massive
charges are treated classically while the quantum behavior of billions
of photons is approximated by two vector functions E(x,t) and B(x,t).
Maxwell fields are just approximations (quite successful, I admit) to
multiphoton wavefunctions. In addition to photons, Maxwell lumped
interparticle forces (Coulomb and Biot-Savart) into his E(x,t) and
B(x,t). This created a lot of confusion. Of course, Maxwell did not
know that he was doing QM when he wrote his equations. But now, 150
years later, we can understand that.

In the approximation that leads from QED to Maxwell's field theory,
the limit hbar -> 0 does not play any significant role.
The most important conditions are that
1) individual photon energies are small (as in visible light),
so that individual particles cannot be easily observed;
2) the number of photons is huge, so that they appear as a single
continuous field.
Condition 2) can be violated in radiation fields of very low intensity,
where even photons of visible light are emitted and registered
one by one. This low-intensity radiation is not described by
Maxwell's theory at all.

> > The electric and magnetic
> > fields in Maxwell's theory are just attempts to describe wave
> > functions of (a very large number of) photons.
>
> True. Large number of photons => hbar -> 0 => Maxwell field theory.

I agree about "Large number of photons".
I disagree about "hbar -> 0". I think in this limit you should obtain
Newton's corpuscular ray optics, i.e., photons moving along trajectories.


> > In the presence of
> > charged particles, these fields also incorporate instantaneous
> > interparticle potentials (e.g., the Coulomb potential).
>
> Only in special choices of gauge. Can of worms. I suggest not opening it
> until the Poincare noninvariance "proof" thread is exhausted.

That's a big can of fat worms. I totally agree not to open it now.

Eugene.

nightlight

Aug 20, 2005, 12:06:29 PM
> My first question is whether Self-Field ED fits into any of the
> above theory categories, or does it have to be considered
> separately? If applicable, which one of the arrows does
> "Carleman linearization" correspond to?

The SFED is a separate node not shown on your diagram. Its
various approximations correspond to the 4 nodes on the
diagram. The Carleman linearization represents the arrows
from SFED to MPQM and QED/QFT. In Barut's work, the arrow
SFED to MPQM (which I call Barut's ansatz, cf. [1], eq. (11)
pp 8) is a special case, a fixed-N truncated form, of the
Carleman's linearizing ansatz (cf. [6], eq. (2), pp 100,
which also covers the unlimited N for Fock space).

More precisely, Barut's ansatz (in SFED) for a fixed number
N of particles, plus dropping of the self-interaction integrals
in the resulting Maxwell-Dirac action (cf. [1] eq. (10)-(12)
pp 8, more details in [2] pp 353-356), yields the N-particle
direct-product Hilbert space formalism of MPQM (Dirac or
Schrodinger QM particles with Coulomb & Lorentz interactions
and in any external fields). Without dropping the
self-interaction terms, but with the linearizing Barut ansatz
(of infinite order), Barut obtains an SFED approximation
corresponding to QED with all radiative corrections.
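
For readers new to the technique, here is a minimal sketch of a truncated
Carleman linearization on a toy scalar ODE (my own illustration, not taken
from Barut's or Kowalski's texts; the toy equation, the function name, and
the Euler integrator are all invented for this post):

```python
# Carleman linearization of dx/dt = -x^2, whose exact solution is
# x(t) = x0 / (1 + x0*t).  The ansatz y_n = x^n turns the nonlinear ODE
# into the infinite *linear* system dy_n/dt = -n * y_{n+1}; truncating
# at order N (setting y_{N+1} := 0) gives a finite linear approximation,
# loosely analogous to a fixed-N truncation of the linearizing ansatz
# discussed above.

def carleman_solve(x0, t_end, order=8, steps=20000):
    """Integrate the order-N truncated Carleman system with forward Euler."""
    y = [x0 ** n for n in range(1, order + 1)]   # y_n = x0^n at t = 0
    dt = t_end / steps
    for _ in range(steps):
        # dy_n/dt = -n * y_{n+1}, with y_{order+1} truncated to zero
        dy = [-n * (y[n] if n < order else 0.0)   # y[n] holds y_{n+1}
              for n in range(1, order + 1)]
        y = [yi + dt * di for yi, di in zip(y, dy)]
    return y[0]   # y_1 approximates x(t_end)

x0, t = 0.5, 1.0
approx = carleman_solve(x0, t)
exact = x0 / (1 + x0 * t)
print(abs(approx - exact))   # small truncation + discretization error
```

For this equation the order-N truncation reproduces the first N terms of
the geometric series x0 * sum_k (-x0*t)^k for the exact solution, so the
truncation error shrinks like (x0*t)^N whenever |x0*t| < 1.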

The expansions in powers of alpha are not identical between
the linearized SFED and QED since Barut uses an iterative
method (which expands in alpha) only on parts of the action
(the fermion self-interaction terms), while using the exact
closed form for other terms (e.g. for EM vacuum fluctuations).
Also, unlike the QED recipe for subtractions of infinities
(and at best the asymptotic expansion), Barut uses only the
legitimate mathematics (finite quantities, convergent series).
The first theoretical differences between the linearized SFED
and QED show up at the alpha^5 order corrections, although
both formalisms fall within the experimental figure ranges
for all high precision QED measurements (as of late 1980s).

A detailed review of experimental & theoretical comparisons
is in the monograph [2], which also includes separate chapters
from Barut's students & coworkers. Much of that material
is available online in scanned ICTP preprints, ref [1],[3].
For a brief intro into the relation of SFED to QM entanglement,
measurement theory, and Carleman linearization (CL), see my
Physics Forum posts [4]. The main reference for CL is the
Kowalski & Steeb monograph [6], with brief sketches given
in Kowalski's arXiv preprints [7] and a tutorial [9] with
Mathematica code (for engineering applications of CL).
Kowalski has also utilized the relation of CL to QFT in
reverse direction, by porting the QFT phase space methods
(Wigner & Glauber pseudo-distributions, coherent states)
to solve CL problems arising in nonlinear dynamics and
general nonlinear ODE/PDE (cf. [7],[8]).


> You mentioned that, since Barut's theory is non-linear,
> state superposition goes out the window. In that case,
> does it account for Stern-Gerlach and interference-type
> experiments? If superpositions are possible for weak
> fields (in the linear approximation) at what field
> strength should non-linear effects become visible?

The non-linear effects of SFED are visible -- they manifest as
radiative corrections (in higher orders) and the absence of
QM nonlocality, such as Bell's inequality violations or various
QO non-classicalities (in the first order). Note that the
regular multiparticle QM is experimentally wrong on both of
these tests (and QO/QED on nonlocality only, assuming the
orthodox QM interpretation transplanted to QED; QED itself
doesn't make such nonlocal predictions from dynamics but
only via the QM composite system projection postulate, its
strong form interpretation as proposed by von Neumann &
Dirac). In SFED the entangled states are non-physical
artifacts of the particular (Carleman) linearization
scheme -- they correspond to solutions of variation of
action obtained by using a weakened variational principle
(a subset of the full variation, constrained via Barut's
ansatz) i.e. they are not true stationary/extremal solutions.

Although Barut has mostly tried to stay away from the tar-pit of
QM nonlocality debates and experiments, he did analyze the
classical-field Stern-Gerlach setup (e.g. see [1]-b,c; [3]-i). For
this case, which is a low order phenomenon, he didn't even need the
full SFED to replicate what the Stern-Gerlach experiments show. There
were no claims of experimental violations of classicality (Bell
inequalities) in Stern-Gerlach experiments.

The main non-classical effects were claimed in Quantum Optics,
both for Bell tests and for non-local state collapse (e.g.
sub-Poissonian photocounts; cf [5]). Although Barut hasn't
bothered with QO experiments and non-classicality "predictions",
the SFED replicates the vacuum fluctuation effects of QED
via the self-field component of EM (cf [1]-a, [3]), which
turns out to have an effective field approximation identical
to the Zero Point Field (ZPF) of Stochastic Electrodynamics
(SED). The SED proponents (such as T. Marshall and E. Santos)
have been battling the QO non-locality magic (experimental
and theoretical claims) since Aspect's 1980 claim. They
have refuted the non-locality claim of every QO experiment
by replicating it via purely local EM model which includes
ZPF initial & boundary conditions. Note that in QO-speak
"classical" means EM model with zero fields boundary
conditions, so QO-non-classical is very limited. If you
allow ZPF boundary conditions, all the QO-non-classical
phenomena are classical (see [5] for discussion & ref's).

> The quantization of electromagnetic excitations (photons)
> as well as excitations of other fields (electrons, protons,
> etc.) is intimately related to the Fock space structure
> of QFT and is seen all around us. Does Barut's theory
> account for that as well?

Barut, like regular QED/QFT, postulates quantization of
charge, even though he does recognize (cf. [3]-h) that, unlike
in QED where it has to be postulated, the charge quantization
should follow from the SFED first principles. All other
observed quantization effects then follow. Although
it is not essential for the SFED approach, Barut uses
source-form ED (analogous to Schwinger's source QED),
where the EM field is eliminated from the SFED action
via current integrals, so there are no photons in SFED.
Of course, SFED has none of the non-local (non-classical)
effects which the presently dominant QM interpretation
suggests exist (but which no experiment has demonstrated
as yet).


> You also mentioned your doubts about the results
> of Bell's inequality tests.

Neither I, nor the experimenters, nor anyone who has studied
the experiments in sufficient detail, have any doubts about
the results of the Bell tests -- no test has as yet violated
the inequalities. That is the plain factual situation (as
opposed to the pedagogical & popular accounts).

The only differences are in the expectations about future
experiments -- I doubt (along with other "heretics", from
Einstein & Schrodinger, through Barut, Jaynes, Marshall,
Santos...) that the inequalities will ever be violated,
while the believers claim it is only a matter of technology
until they obtain violations (the hope euphemistically
referred to as the "loophole free" violations, which is
trying to say 'violations which do violate' as opposed,
I guess, to the present "violations" which don't violate).


> Can Barut's theory produce a prediction for the
> correlations measured in these experiments (however
> these correlations are defined)? If so, how does
> the prediction compare to the experimental data? Better
> or worse than QM?

As suggested above, SFED, via its effective field form
suitable for Quantum Optics (the SED), replicates all
QO-non-classical effects (Bell inequality "violations",
the photon "anticorrelations", etc.; see refs in [5]).
The only difference is in how the two sides label the
absence of violations -- in SFED/SED the absence of
violations is fundamental since the theory is local.
In QM/QO the absence of experimental violations is
introduced as a correction to the "ideal detector"
case (expected to be feasible in the future).

Considering that:

(a) SFED reproduces the high order effects of QED (to at
least alpha^5),

(b) SFED can deduce the multiparticle QM (MPQM) formalism,
the formal basis of entanglement and Bell-violation
predictions & von Neumann's QM measurement theory,
as a linearization approximation of SFED,

(c) MPQM is itself a low order approximation to QED,
failing to match the experiments of (a),

it seems hopeless to expect with MPQM that future detector
technology will somehow make SFED fail in the low order
QED phenomena (make it fail in the leading digit of
the experimental data) while matching the remaining 8+
digits known already (which it does match).

References
----------

1. A.O. Barut "Quantum Electrodynamics based on self-energy"
IC1987248: http://library.ictp.trieste.it/DOCS/P/87/248.pdf

2. A.O. Barut "Foundations of self-field electrodynamics"
"New Frontiers in QED and Quantum Optics" pp 345-371
NATO ASI Series B, Vol. 232, Plenum 1990

a) See also in the same volume, pp 371-389
J.P. Dowling "QED Based on Self-Fields: Cavity Effects"

b) A.O. Barut "QED - The Unfinished Business" pp 493-503
in "IIIrd Reg. Conf. on Mathematical Physics" 1989
World Scientific 1990

c) A.O. Barut "Fundamental Problems in Quantum Physics"
http://redshift.vif.com/JournalFiles/Pre2001/V02NO4PDF/V02N4FUN.PDF

3. Scanned ICTP preprints (enable Javascript & enter "Barut")
http://library.ictp.it/pages/psearch/prep.php?PAGE=0
Self-fields start (Barut's preprints 61-148)

http://library.ictp.it/pages/psearch/prep.php?PAGE=7&NEXT=/ARCHIVE/preprint/SDW?W%3DAUTHOR+PH+WORDS+%27barut%27+ORDER+BY+EVERY+ICNUM/Ascend%26M%3D61%26R%3DY

a. COMBINING RELATIVITY AND QUANTUM MECHANICS: SCHRODINGER'S
INTERPRETATION OF PSI
IC1987157: http://library.ictp.trieste.it/DOCS/P/87/157.pdf

b. ON THE COVARIANCE OF TWO-FERMION EQUATION FOR QUANTUM
ELECTRODYNAMICS
IC1986162: http://library.ictp.trieste.it/DOCS/P/86/162.pdf

c. NON-PERTURBATIVE QUANTUM ELECTRODYNAMICS WITHOUT INFINITIES
IC1986228: http://library.ictp.trieste.it/DOCS/P/86/228.pdf

d. QUANTUM ELECTRODYNAMICS BASED ON SELF-ENERGY: SPONTANEOUS
EMISSION IN CAVITIES
IC1986330: http://library.ictp.trieste.it/DOCS/P/86/330.pdf

e. RELATIVISTIC THEORY OF THE LAMB SHIFT BASED ON SELF ENERGY
IC1987210: http://library.ictp.trieste.it/DOCS/P/87/210.pdf

f. QUANTUM ELECTRODYNAMICS BASED ON SELF-ENERGY WITHOUT
SECOND QUANTIZATION: THE LAMB SHIFT AND LONG-RANGE CASIMIR-POLDER
VAN DER WAALS FORCES NEAR BOUNDARIES
IC1986341: http://library.ictp.trieste.it/DOCS/P/86/341.pdf

g. SELFFIELD QUANTUM ELECTRODYNAMICS WITHOUT INFINITIES. A NEW
CALCULATION OF VACUUM POLARIZATION
IC1993105: http://library.ictp.trieste.it/DOCS/P/93/105.pdf

h. CAN WE CALCULATE THE FUNDAMENTAL DIMENSIONLESS
CONSTANTS OF PHYSICS?
IC1987187: http://library.ictp.trieste.it/DOCS/P/87/187.pdf

i. EXPLICIT CALCULATIONS WITH A HIDDEN VARIABLE SPIN MODEL
IC1986367: http://library.ictp.trieste.it/DOCS/P/86/367.pdf

j. QUANTUM THEORY OF SINGLE EVENTS: LOCALIZED DE BROGLIE-WAVELETS,
SCHRODINGER WAVES AND CLASSICAL TRAJECTORIES
IC1990099: http://library.ictp.trieste.it/DOCS/P/90/099.pdf

4. Physics Forum posts on Barut's self-field & QM entanglement

5. PhysicsForum: Photon "Wave Collapse" Experiment...
http://www.physicsforums.com/showthread.php?t=71297

References:
http://www.physicsforums.com/showpost.php?p=544829&postcount=122

On SED analysis of the QO experiments:
http://arxiv.org/find/quant-ph/1/au:+Santos_E/0/1/0/all/0/

6. K. Kowalski, W. Steeb
"Nonlinear Dynamical Systems and Carleman Linearization"
World Scientific, 1991.
http://www.worldscibooks.com/mathematics/1347.html

7. K. Kowalski's arXiv preprints on Carleman linearization:
http://arxiv.org/find/grp_nlin/1/au:+kowalski_k/0/1/0/all/0/1
http://arxiv.org/abs/hep-th/9212031

8. K. Kowalski
"Methods of Hilbert Spaces in the Theory of Nonlinear
Dynamical Systems" World Scientific, 1994.
http://www.worldscibooks.com/chaos/2345.html

9. B.W. Gaude "Solving Nonlinear Aeronautical Problems Using the
Carleman Linearization Method"
Sandia National Labs, SAND2001-3064, OSTI ID: 787644

http://www.prod.sandia.gov/cgi-bin/techlib/access-control.pl/2001/013064.pdf
http://www.osti.gov/energycitations/product.biblio.jsp?osti_id=787644&query_id=0

Igor Khavkine

Aug 21, 2005, 2:00:01 AM
On 2005-08-20, Eugene Stefanovich <eugene_st...@usa.net> wrote:
> "Igor Khavkine" <igo...@gmail.com> wrote in message
> news:slrndgasck....@bigbang.richmond.edu...
>
>> Whatever your opinion of QFT, it is equivalent to a quantum theory of a
>> variable number of particles. You are free not to make use of this
>> mathematical equivalence, I myself and many others choose to use it.
>
> I agree completely: QFT is equivalent to a quantum theory of a variable
> number of particles.

So far so good.

> And the best way to see it explicitly is to use the
> "dressing transformation" which eliminates bare and virtual particles
> and reduces QFT to a theory of real particles interacting at a distance.

Non sequitur. The way to see it is psi(x,y,..) = <psi|phi(x)phi(y)...|0>.

>> Particle mechanics is the
>> classical limit of a quantum theory with a fixed number of particles. I
>> explained this in the paragraphs below the diagram in my previous post.
>
> You are right. Traditional classical particle mechanics conserves the
> number of particles. But this is only because classical mechanics was
> formulated long before E = mc^2 was discovered and the possibility of
> converting energy to mass was understood. Nothing forbids us from
> formulating a classical theory in which particles move along
> well-defined trajectories and creation/annihilation processes are
> allowed (trajectories may start and terminate at certain points).

True, no one forbids it. But it is also true that no one has done it
yet.

>> When you allow the number of particles to vary (use Fock space),
>> strictly speaking you have a different quantum theory. This different
>> quantum theory also has a different clasical limit, which happens to be
>> a field theory.
>
> I see a logical gap in your statement. I think if I take the classical
> limit of a quantum theory with a variable (but finite) number of
> particles, I should obtain a classical theory with a variable (but
> finite) number of particles. Nobody has constructed such a theory (as
> far as I know), but this is not a reason to believe that such a theory
> cannot exist.

What you see is not a logical gap, but something that you wish were
true. However, once you put wishful thinking and what you think should
be true aside, you'll see that a quantum theory with finitely many
particles (wave functions of as many arguments) has classical particle
dynamics as its classical limit, while quantum field theory (Fock space
with field operators) has classical field theory as its classical
limit.

This has been known for 70+ years. And when I say "known", I don't mean
in the sense of folklore. The calculations are there for anyone to see,
check any book on QM or QFT.

> I fully agree that electrons can and should be treated classically.
> However, photons are a different matter. They have very peculiar properties:
> 1) Photons can be easily emitted and absorbed.
> 2) There is a huge number of them. Billions and billions of photons
> are emitted by an ordinary lightbulb, so any attempt to describe this
> situation in the language of particles (either quantum or classical)
> would be suicidal.
> 3) Photons have zero mass, so quantum effects (such as diffraction
> and interference) for photons could be easily seen hundreds of years
> before the invention of QM.

Again, so far so good.

> So, in my view, Maxwell's theory is, actually, a hybrid in which
> massive charges are treated classically while the quantum behavior of
> billions of photons is approximated by two vector functions E(x,t) and
> B(x,t). Maxwell fields are just approximations (quite successful, I
> admit) to multiphoton wavefunctions.

Yes, they are approximations in the classical limit, hbar -> 0.

> In addition to photons, Maxwell
> lumped
> interparticle forces (Coulomb and Biot-Savart) into his E(x,t) and B(x,t).

Non sequitur.

> This created a lot of confusion.
> Of course, Maxwell did not know that he was doing QM when
> he wrote his equations. But now, 150 years later we can understand that.

Hmm, confusion indeed. However, the confusion is on a different side of
this computer screen than you think.

> In the approximation that leads from QED to Maxwell's field theory,
> the limit hbar -> 0 does not play any significant role.

So the fact that hbar does not appear in any classical formulas is a
coincidence? How about the fact that hbar's value is so small in
macroscopic (say cgs) units? Rhetorical questions aside, what we see
around us every day is described by the hbar -> 0 limit of the quantum
theories that we believe to be fundamental. This simple observation
gives the hbar -> 0 limit a very significant role and its name, "the
classical limit".

> The most important conditions are that
> 1) individual photon energies are small (as in visible light),
> so that individual particles cannot be easily observed;
> 2) the number of photons is huge, so that they appear as a single
> continuous field.
> Condition 2) can be violated in radiation fields of very low intensity,
> where even photons of visible light are emitted and registered
> one by one. This low-intensity radiation is not described by Maxwell's
> theory at all.
>
>> > The electric and magnetic
>> > fields in Maxwell's theory are just attempts to describe wave
>> > functions of (a very large number of) photons.

>> True. Large number of photons => hbar -> 0 => Maxwell field theory.
>
> I agree about "Large number of photons". I disagree about "hbar ->
> 0". I think in this limit you should obtain Newton's corpuscular ray
> optics, i.e., photons moving along trajectories.

Again, belief is no substitute for calculation. See Sakurai's _Advanced
Quantum Mechanics_. There he explicitly relates the strong field limit
(many photons) to the classical limit (hbar -> 0). Unfortunately, I
don't have the book handy, so I can't give a more precise reference.

Igor

Eugene Stefanovich

Aug 21, 2005, 7:18:40 PM

Igor Khavkine wrote:

>>And the best way to see it explicitly is to use the
>>"dressing transformation" which eliminates bare and virtual particles
>>and reduces QFT to a theory of real particles interacting at a distance.
>
>
> Non sequitur. The way to see it is psi(x,y,..) = <psi|phi(x)phi(y)...|0>.

1. I am not sure how your formula challenges what I wrote about the
"dressing transformation". Could you please elaborate?

2. I am not sure where you get this definition of the wave function.
Most likely from Weinberg's eqs. (14.1.4) - (14.1.5), however he
doesn't provide any justification that thus defined psi(x,y,..) is
the probability density amplitude for measuring particle positions
at x,y,...

3. I have a different definition for the position-space wave function
in the Fock space. This involves

a) construction of single-particle operators in each N-particle
subspace H_N using the tensor product theorem (see subsection 8.1.1 in
my book). In the case of position-space wave functions, the
Newton-Wigner position operators r_1, r_2, ... should be used;

b) construction of the common basis of eigenstates of the operators
r_1, r_2, ... in H_N;

c) repeating steps a) and b) for each N (see subsection 9.1.1, after
eq. (9.9));

d) taking the scalar product of the state vector |psi> with the basis
vectors constructed above.


>>I see a logical gap in your statement. I think if I take the classical
>>limit of a quantum theory with a variable (but finite) number of
>>particles, I should obtain a classical theory with a variable (but
>>finite) number of particles. Nobody has constructed such a theory (as
>>far as I know), but this is not a reason to believe that such a theory
>>cannot exist.
>
>
> What you see is not a logical gap, but something that you wish were
> true. However, once you put wishful thinking and what you think should
> be true aside, you'll see that a quantum theory with finitely many
> particles (wave functions of as many arguments) has classical particle
> dynamics as its classical limit, while quantum field theory (Fock
> space with field operators) has classical field theory as its
> classical limit.


Now, I fail to see the difference between "quantum theory with
finitely many particles (wave functions of as many arguments)" and
"quantum field theory (Fock space with field operators)". I thought we
agreed that they are equivalent. Let me remind you:

I wrote: "I agree completely: QFT is equivalent to a quantum theory of a
variable number of particles."

You wrote: "So far so good."

> This has been known for 70+ years. And when I say "known", I don't
> mean in the sense of folklore. The calculations are there for anyone
> to see, check any book on QM or QFT.

If you want to substitute discussion by pointing to books,
then I would like to draw your attention to my book
physics/0504062 which tells a different story.

>>So, in my view, Maxwell's theory is, actually, a hybrid in which
>>massive charges are treated classically while the quantum behavior of
>>billions of photons is approximated by two vector functions E(x,t) and
>>B(x,t). Maxwell fields are just approximations (quite successful, I
>>admit) to multiphoton wavefunctions.
>
>
> Yes, they are approximations in the classical limit, hbar -> 0.

I thought that if we take the limit hbar -> 0, then probabilistic
wave functions are replaced by trajectories. In this limit, photons
should be described in terms of Newtonian light rays.
No diffraction, no interference.

>
>
>>In addition to photons, Maxwell
>>lumped
>>interparticle forces (Coulomb and Biot-Savart) into his E(x,t) and B(x,t).
>
>
> Non sequitur.

?? Do you dispute the fact that in Maxwell's theory fields
E(x,t) and B(x,t) play two roles:

1) they describe the intensity distribution
in the radiation field.
2) they determine the forces acting on charged particles
(via the Lorentz force law)?


>>In the approximation that leads from QED to Maxwell's field theory,
>>the limit hbar -> 0 does not play any significant role.
>
>
> So the fact that hbar does not appear in any classical formulas is a
> coincidence? How about the fact that hbar's value is so small in
> macroscopic (say cgs) units? Rhetorical questions aside, what we see
> around us every day is described by the hbar -> 0 limit of the quantum
> theories that we believe to be fundamental. This simple observation
> gives the hbar -> 0 limit a very significant role and its name, "the
> classical limit".

I think that the hbar -> 0 limit does not apply to the description of
light in Maxwell's theory. If you think otherwise, then you should
come to the conclusion that there are two different sources of
the interference effect. One source is purely quantum (as in Feynman's
double-slit experiment), another source is due to "classical waves"
(as in Young's experiment). Can these two contributions to the
interference be distinguished experimentally? I don't think so.
In my view, interference is a quantum effect, and Young observed this
quantum effect 100 years before Planck's quantum theory.

In the true classical limit hbar -> 0 all wave properties of photons
should disappear.
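
A quick numerical sketch of the indistinguishability point (my own
illustration; the slit spacing and wavelength are assumed values): the
two-slit fringe pattern is the same cos^2 formula whether one reads it
as a classical-wave intensity or as a single-photon detection
probability, so the data alone cannot separate the two "sources" of
interference:

```python
# Two narrow slits separated by d, illuminated at wavelength lambda:
#   I(theta) ~ cos^2(pi * d * sin(theta) / lambda)
# The same curve is a classical intensity or a one-photon probability.
import math

def two_slit_intensity(theta, d, wavelength):
    """Relative two-slit fringe intensity at angle theta."""
    phase = math.pi * d * math.sin(theta) / wavelength
    return math.cos(phase) ** 2

d, lam = 10e-6, 500e-9                       # assumed slit spacing, wavelength
print(two_slit_intensity(0.0, d, lam))       # central maximum: 1.0
theta_min = math.asin(lam / (2 * d))         # first dark fringe
print(round(two_slit_intensity(theta_min, d, lam), 12))   # ~0.0
```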

It is not difficult to explain why hbar never appeared in
Maxwell's electrodynamics. Take, for example, Einstein's formula
connecting the photon energy E with the frequency v of its wavefunction

E = hv

In Maxwell's theory the wavefunctions of a large collection of
photons are modeled, of course, by the fields E(x,t) and B(x,t).
The quantity v is macroscopic and is a part of Maxwell's theory.
However, E is a microscopic quantity, and individual photons were
not observed in the 19th century. So, E is not needed in
Maxwell's theory, and neither is h.
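
To put numbers behind both E = hv and the earlier "billions and
billions of photons" remark (my own back-of-envelope sketch; the 10 W
of visible output is an assumed figure, not from the thread):

```python
# Photon energy E = h*nu for green light, and the photon emission rate
# of an idealized source radiating 10 W in the visible.
h = 6.626e-34          # Planck's constant, J*s
c = 2.998e8            # speed of light, m/s
wavelength = 550e-9    # green light, m
nu = c / wavelength    # frequency, Hz
E_photon = h * nu      # energy per photon, roughly 3.6e-19 J

power = 10.0           # W of visible output (assumed, for illustration)
rate = power / E_photon   # on the order of 1e19 photons per second
print(f"E = {E_photon:.2e} J, rate = {rate:.1e} photons/s")
```

The per-photon energy is so far below any 19th-century sensitivity, and
the rate so enormous, that a continuous-field description was the only
practical one.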


>>>True. Large number of photons => hbar -> 0 => Maxwell field theory.
>>
>>I agree about "Large number of photons". I disagree about "hbar ->
>>0". I think in this limit you should obtain Newton's corpuscular ray
>>optics, i.e., photons moving along trajectories.
>
>
> Again, belief is no substitute for calculation. See Sakurai's _Advanced
> Quantum Mechanics_. There he explicitly relates the strong field limit
> (many photons) to the classical limit (hbar -> 0). Unfortunately, I
> don't have the book handy, so I can't give a more precise reference.

Thanks for the reference. I'll check that out. It looks suspicious to me
that in the weak field limit (when individual photons can be discerned)
Maxwell's theory gives continuous predictions incompatible with
experiment. This forces me to believe that Maxwell's fields are
some surrogates for multi-photon wavefunctions, rather than their proper
hbar -> 0 limits.

Eugene.


p.ki...@imperial.ac.uk

Aug 25, 2005, 4:43:25 AM
Eugene Stefanovich <eugene_st...@usa.net> wrote:
> So, in my view, Maxwell's theory is, actually, a hybrid in
> which massive charges are treated classically while the quantum
> behavior of billions of photons is approximated by two vector
> functions E(x,t) and B(x,t).

Er, so you are saying Maxwell's theory is a quantum-classical
hybrid in which massive charges are treated classically,
and where the quantum behavior of billions of photons is
reduced to a classical behaviour? In what sense does that
retain ANY kind of quantum properties?

> Maxwell fields are just approximations (quite successful,
> I admit) to multiphoton wavefunctions.

No, they are not. Solutions for Maxwell fields make up
the MODE functions used by a quantum theory, they are not
(in any physical sense) wave functions, and calling them
wave functions is perverse and misleading.

The wavefunctions of the photons living inside these mode
functions are completely independent of the mode functions.
For a start, the Maxwell solutions (mode functions) vary
over time and space; (crudely) the wavefunctions vary over
the excitation level of the quantum harmonic oscillator inside
the mode function.


--
---------------------------------+---------------------------------
Dr. Paul Kinsler
Blackett Laboratory (QOLS) (ph) +44-20-759-47520 (fax) 47714
Imperial College London, Dr.Paul...@physics.org
SW7 2BW, United Kingdom. http://www.qols.ph.ic.ac.uk/~kinsle/

Igor Khavkine

Aug 25, 2005, 4:44:31 AM
On 2005-08-21, Eugene Stefanovich <eug...@synopsys.com> wrote:
>
>
> Igor Khavkine wrote:
>
>>>And the best way to see it explicitly is to use the
>>>"dressing transformation" which eliminates bare and virtual particles
>>>and reduces QFT to a theory of real particles interacting at a distance.
>>
>>
>> Non sequitur. The way to see it is psi(x,y,..) = <psi|phi(x)phi(y)...|0>.
>
> 1. I am not sure how your formula challenges what I wrote about the
> "dressing transformation". Could you please elaborate?

This formula is a direct reversal of the procedure of second
quantization, which can be checked by direct calculation. It is
mentioned not only in Weinberg but in other books as well, mostly in
older treatments of second quantization; Dirac is a
prominent example. Also, this formula is used in disguise in the Green
function or correlation function formalism. What I challenge is your
introduction of unnecessary steps into the equivalence, such as
"dressing". QFT is applicable to many theories, some of which don't
require renormalization.
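As a sanity check, the reversal can be verified by direct matrix computation in a toy setting (this is a sketch under simplifying assumptions: two discrete bosonic modes, occupation numbers truncated at 2, and arbitrary illustrative coefficients c_ij). Building |psi> = sum_ij c_ij a_i† a_j† |0> and then computing <0| a_i a_j |psi> returns the symmetrized two-particle amplitude:

```python
import numpy as np

def ladder(cutoff):
    """Single-mode annihilation operator truncated at occupation `cutoff`."""
    return np.diag(np.sqrt(np.arange(1, cutoff)), k=1)

cut = 3                      # allow occupations 0, 1, 2 per mode
a = ladder(cut)
I = np.eye(cut)
a1 = np.kron(a, I)           # annihilation operator for mode 1
a2 = np.kron(I, a)           # annihilation operator for mode 2
ops = [a1, a2]

vac = np.zeros(cut * cut)
vac[0] = 1.0                 # the vacuum |0>

# Build a two-particle state |psi> = sum_ij c_ij a_i+ a_j+ |0>
# with arbitrary illustrative coefficients:
c = np.array([[0.3, 0.5],
              [0.2, 0.4]])
psi = np.zeros(cut * cut)
for i in range(2):
    for j in range(2):
        psi += c[i, j] * (ops[i].T @ ops[j].T @ vac)

# "Second quantization in reverse": <0| a_i a_j |psi> gives the
# symmetrized two-particle amplitude c_ij + c_ji (bosonic statistics).
for i in range(2):
    for j in range(2):
        amp = vac @ (ops[i] @ ops[j] @ psi)
        print(i, j, amp, c[i, j] + c[j, i])
```

The truncation is harmless here because no operator product ever pushes an occupation past the cutoff; the same construction extends to more modes and more particles at the cost of larger Kronecker products.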

> Now, I fail to see what is the difference between "quantum theory with
> finitely many particles (wave functions of as many arguments)" and
> "quantum field theory (Fock space with field operators)". I though we
> agreed that they are equivalent. Let me remind you:
>
> I wrote: "I agree completely: QFT is equivalent to a quantum theory of a
> variable number of particles."
>
> You wrote: "So far so good."

Notice the important adjective "variable". If I fix the number of
particles to N, my Hilbert space is composed of wave functions of N
arguments, psi(x_1,...,x_N). If I allow the number of particles to vary,
my Hilbert space is composed of linear combinations of wave functions
with different numbers of arguments, 1, psi(x), phi(x,y), chi(x,y,z),
...

Many, but fixed, number of particles is not the same as a variable
number of particles. It is only when the number of particles is allowed
to vary that the theory can be made equivalent to a field theory (Fock
space + field operators). The equivalence is through second
quantization.

This is an important, but perhaps subtle difference.

> > This has been known for 70+ years. And when I say "known", I don't
> > mean in the sense of folklore. The calculations are there for anyone
> > to see, check any book on QM or QFT.
>
> If you want to substitute discussion by pointing to books,
> then I would like to draw your attention to my book
> physics/0504062 which tells a different story.

My comment was regarding the construction of the classical limit. I've
outlined this construction several times previously. If a citation is
not sufficient, you're welcome to ask specific questions. As to book
thumping, one would have to establish the credibility of the author
before doing so.

> I thought that if we take the limit hbar -> 0, then indeterministic
> wave functions are replaced by trajectories. In this limit, photons
> should be described in terms of Newtonian light rays.
> No diffraction, no interference.

Again, trajectories arise in the classical limit of QM with a *fixed*
number of particles. Photon number is not conserved. When that happens,
the classical limit does not yield trajectories, it yields fields.

> ?? Do you dispute the fact that in Maxwell's theory fields
> E(x,t) and B(x,t) play two roles:
>
> 1) they describe the intensity distribution
> in the radiation field.
> 2) they determine the forces acting on charged particles
> (via the Lorentz force law)?

The E(x,t) and B(x,t) fields play a *single* role, to determine the
force on a test charge at any point in space and time. The fact that
they carry energy and momentum (radiation) stems from their equations of
motion and their coupling to matter. Radiation and the Lorentz force law
are two manifestations of the same phenomenon.

> I think that hbar -> 0 limit does not apply to the description of
> light in Maxwell's theory. If you think otherwise, then you should
> come to the conclusion that there are two different sources of
> the interference effect. One source is purely quantum (as in Feynman's
> double-slit experiment), another source is due to "classical waves"
> (as in Young's experiment). Can these two contributions to the
> interference be distinguished experimentally? I don't think so.

Yes, there are and yes they can. But Feynman's double slit example does
not describe a quantum effect here. What we call interference fringes
are a generic phenomenon common to all linear wave equations. This
includes both Schroedinger's and Maxwell's equations. So what's quantum
about it? A state with one electron is described by a wave function (of
one argument) obeying the Schroedinger equation. When we consider the
hbar -> 0 limit, only a single particle trajectory remains. The single
argument of the wave function becomes a dynamical variable of the
particle and no interference is seen. This limit can be reversed by
simultaneous consideration of many trajectories for the same particle
and by assigning phases to these trajectories. This is Feynman's
description of the double slit experiment, it shows how to recover the
wave function and the interference fringes with it.

For the sake of argument, suppose we want to apply the same treatment to
a state with a single photon. Everything is fine, the same argument
applies, with a few differences. We have to use Maxwell's equations
instead of Schroedinger's, and we have to use a multi-component wave
function. There will be one distinct kind of photon for each independent
polarization. Each independent polarization of the wave function will
describe the probability amplitude for the corresponding photon
particle. This implies that the values of the photon wave function are
not the same as the electric and magnetic field (or rather the vector
potential) amplitudes.

But that is not what we see when it comes to light. In most situations,
the photon number is very large. This implies that we have to use the
classical hbar -> 0 limit. But unlike the electron case, we do not
recover trajectories, rather we recover the electric and magnetic
fields. The big surprise is that they satisfy the same Maxwell equations
as the single photon wave function! Coincidence? No, it is a consequence
of second quantization. Note, however, that the single particle (photon)
equation is always linear, while the classical field equation may be
non-linear. Any field non-linearities get translated into
interactions when multi-particle (multi-photon) wave functions are
considered. So, simply because Maxwell's equations are linear, we
automatically get interference fringes completely within the classical
regime.
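The claim that linearity alone produces fringes is easy to sketch numerically with purely classical fields (all parameters below are arbitrary illustrative numbers, not taken from any experiment): superposing the fields of two coherent point sources, the "slits", gives an oscillating |E1 + E2|^2 on a distant screen, while either source alone gives a smooth intensity.

```python
import numpy as np

# Two coherent point sources observed on a distant screen.
wavelength = 0.5           # arbitrary units
k = 2 * np.pi / wavelength
d = 25.0                   # slit separation
L = 1000.0                 # slit-to-screen distance
x = np.linspace(-40, 40, 2001)      # positions on the screen

r1 = np.sqrt(L**2 + (x - d/2)**2)   # path length from slit 1
r2 = np.sqrt(L**2 + (x + d/2)**2)   # path length from slit 2
E1 = np.exp(1j * k * r1) / r1       # outgoing wave from slit 1
E2 = np.exp(1j * k * r2) / r2       # outgoing wave from slit 2

I_both = np.abs(E1 + E2)**2         # linear superposition: fringes
I_single = np.abs(E1)**2            # one slit: smooth intensity

def visibility(I):
    """Fringe visibility (Imax - Imin) / (Imax + Imin)."""
    return (I.max() - I.min()) / (I.max() + I.min())

print(visibility(I_both), visibility(I_single))
```

With both sources the visibility is close to 1 (fringe spacing lambda*L/d = 20 units here); with a single source it is close to 0. Nothing quantum entered the calculation, only the linearity of the wave equation.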

But what if we increase hbar? Do we get any actual quantum effects? The
answer is yes, but the detection is more subtle. When we increase hbar,
we now have to consider multiple *field configurations* at the same time
and assign phases to each of them (cf. "multiple particle trajectories
at the same time"). One example of this is a cavity containing a QED
state of the form a|1 photon> + b|2 photons>. Each state |n photons>
(more or less) describes a classical field mode, with the number n
parametrizing the amplitude of the classical field. Depending on the
geometry of the cavity, the classical field mode may already exhibit
interference fringes. But at the same time, quantum mechanically the
state is in a superposition of two different classical field
configurations (different intensities). Wherever we see superposition,
we will see interference. However, in this case, it will not be as
visual as in the electron double slit experiment. So, yes, there are two
kinds of detectable interference here.

>> Again, belief is no substitute for calculation. See Sakurai's _Advanced
>> Quantum Mechanics_. There he explicitly relates the strong field limit
>> (many photons) to the classical limit (hbar -> 0). Unfortunately, I
>> don't have the book handy, so I can't give a more precise reference.
>
> Thanks for the reference. I'll check that out. It looks suspicious to me
> that in the weak field limit (when individual photons can be discerned)
> Maxwell's theory gives continuous predictions incompatible with
> experiment. This forces me to believe that Maxwell's fields are
> some surrogates for multi-photon wavefunctions, rather than their proper
> hbar -> 0 limits.

Take |psi> to be a several electron state. <x,y,...|psi> = psi(x,y,...)
is the corresponding several electron wave function. X = <psi|x|psi>, Y
= <psi|y|psi>, ..., are the "classical" expectation values of the
individual position operators x, y, .... The wave function psi(x,y,...)
satisfies the multi-electron Schroedinger equation. The expectation
values X, Y, ... satisfy Hamilton's equations of motion, this is
Ehrenfest's theorem.

Take |phi> to be a several photon state. <0|e(x)e(y)...|phi> =
phi(x,y,...) is the corresponding several photon wave function, with
e(x), e(y), ... being the field operators (which are also decorated with
polarization indices). E(x) = <phi|e(x)|phi> is the expectation value of
the classical field amplitude. The wave function phi(x,y,...) satisfies
the multi-photon "Maxwell equations". The expectation values E(x)
satisfy Maxwell's equations, in the usual sense of the term, which is
also a consequence of Ehrenfest's theorem. The fact that the single
photon wave equation is the same as the linear part of the classical
field equations is a theorem of second quantization.
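Ehrenfest's theorem for the fixed-particle case can be sketched numerically with a free-particle toy model (illustrative parameters; free evolution is exact as a phase in momentum space, so no time-stepping error enters): the Gaussian packet spreads, yet its expectation value <x>(t) tracks the classical straight line x0 + (p0/m) t.

```python
import numpy as np

hbar, m = 1.0, 1.0
N, Lbox = 4096, 400.0
x = np.linspace(-Lbox/2, Lbox/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)   # angular wavenumbers of the grid

# Normalized Gaussian packet at x0 with mean momentum hbar*k0:
x0, k0, sigma = -50.0, 2.0, 5.0
psi = np.exp(-(x - x0)**2 / (4 * sigma**2)) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

def evolve(psi, t):
    """Exact free-particle evolution: phase exp(-i hbar k^2 t / 2m) in k-space."""
    return np.fft.ifft(np.fft.fft(psi) * np.exp(-1j * hbar * k**2 * t / (2 * m)))

def mean_x(psi):
    """Expectation value <x> for a normalized grid wave function."""
    return np.real(np.sum(np.conj(psi) * x * psi) * dx)

for t in (0.0, 10.0, 20.0):
    print(t, mean_x(evolve(psi, t)), x0 + (hbar * k0 / m) * t)
```

The packet's width grows with t, but the two columns printed for <x>(t) and the classical trajectory agree, which is exactly Ehrenfest's statement for this Hamiltonian.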

Igor

Eugene Stefanovich

Aug 25, 2005, 1:42:47 PM
to

p.ki...@imperial.ac.uk wrote:
> Eugene Stefanovich <eugene_st...@usa.net> wrote:
>
>>So, in my view, Maxwell's theory is, actually, a hybrid in
>>which massive charges are treated classically while quantum
>>behavior of billions of photons is approximated by 2 vector
>>functions E(x,t) and B(x,t).
>
>
> Er, so you are saying Maxwell's theory is a quantum classical
> hybrid in which massive charges are treated classically,
> and where the quantum behavior of billions of photons is
> reduced to a classical behaviour? In what sense does that
> retain ANY kind of quantum properties?

The behavior of photons in Maxwell's theory is described
by fields E and B. In my view these fields are surrogates of photon
wavefunctions. By substituting wavefunctions of billions of
photons by fields E and B the description is simplified, however
one important purely quantum aspect is preserved: Maxwell's fields
E and B can describe interference.

In a true classical hbar -> 0 limit, the photons would be
described by trajectories, as in Newton's corpuscular theory
of light. There would be no diffraction and interference effects.

>>Maxwell fields are just approximations (quite successful,
>>I admit) to multiphoton wavefunctions.
>
>
> No, they are not. Solutions for Maxwell fields make up
> the MODE functions used by a quantum theory, they are not
> (in any physical sense) wave functions, and calling them
> wave functions is perverse and misleading.

Surely, they are not wavefunctions. I call them "approximations
to multiphoton wavefunctions". Maxwell's fields retain one
important property of wavefunctions: the additivity of the
amplitude. The intensity of light is given by the square of the
field which is analogous to the QM formula for the probability of
finding a photon (which is a square of the wave function).
This leads to similar descriptions of diffraction and interference
in the wave theory of light and in QM.

I think, there is only one source of interference - the quantum
nature of photons. The fact that "classical" Maxwell's theory
describes the interference of light tells me that this theory is
not completely classical. It still retains a very important quantum
aspect.

Maxwell's theory is a heuristic approach, and I don't think there is
a rigorous procedure which leads from photon wave functions of
QED to Maxwell's fields E and B.


> The wavefunctions of the photons living inside these mode
> functions are completely independent of the mode functions.
> For a start, the Maxwell solutions (mode functions) wary
> over time and space; (crudely) the wavefunctions vary over
> excitation level of the quantum SH oscillator inside
> the mode function.

Could you please explain what a "mode function" is and what a
"quantum SH oscillator" is?

Thanks.
Eugene


Eugene Stefanovich

Aug 25, 2005, 10:00:08 PM
to

Igor Khavkine wrote:

>>Now, I fail to see what is the difference between "quantum theory with
>>finitely many particles (wave functions of as many arguments)" and
>>"quantum field theory (Fock space with field operators)". I though we
>>agreed that they are equivalent. Let me remind you:
>>
>>I wrote: "I agree completely: QFT is equivalent to a quantum theory of a
>> variable number of particles."
>>
>>You wrote: "So far so good."
>
>
> Notice the important adjective "variable". If I fix the number of
> particles to N, my Hilbert space is composed of wave functions of N
> arguments, psi(x_1,...,x_N). If I allow the number of particles to vary,
> my Hilbert space is composed of linear combinations of wave functions
> with different numbers of arguments, 1, psi(x), phi(x,y), chi(x,y,z),
> ...
>
> Many, but fixed, number of particles is not the same as a variable
> number of particles. It is only when the number of particles is allowed
> to vary that the theory can be made equivalent to a field theory (Fock
> space + field operators). The equivalence is through second
> quantization.

So far so good. The only thing is that quantum fields are not necessary
for describing systems with a variable number of particles in
Fock space. Such a description can be formulated entirely in the
language of "composite" wavefunctions, where each fixed-particle-number
function 1, psi(x), phi(x,y), chi(x,y,z) enters with its own
coefficient, and the sum of squares of all such coefficients is 1.
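Such a composite description can be sketched directly (the grid wave functions below are illustrative placeholders and the coefficients are arbitrary): a map of fixed-particle-number sectors, each carrying a coefficient and an already-normalized n-argument wave function, with the state normalized when the squared coefficients sum to 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalized(shape):
    """Random normalized n-particle wave function on a small grid."""
    f = rng.normal(size=shape) + 1j * rng.normal(size=shape)
    return f / np.linalg.norm(f)

M = 8    # grid points per particle coordinate
state = {             # n -> (coefficient, n-particle wave function)
    0: (0.6, np.array(1.0 + 0j)),    # vacuum component
    1: (0.5, normalized((M,))),       # psi(x)
    2: (0.4, normalized((M, M))),     # phi(x, y)
    3: (np.sqrt(1 - 0.6**2 - 0.5**2 - 0.4**2), normalized((M, M, M))),
}

# Total norm: sum over sectors of |c_n|^2 times the sector's norm (= 1).
norm_sq = sum(abs(c)**2 * np.linalg.norm(f)**2 for c, f in state.values())
# Probability of observing n particles is |c_n|^2:
prob_n = {n: abs(c)**2 for n, (c, f) in state.items()}
print(norm_sq, prob_n)
```

Nothing here required field operators; whether this bookkeeping or the field formulation is more convenient is exactly the point under debate in the thread.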

Such a description is not fundamentally different from the particle
description in ordinary (fixed particle number) quantum mechanics.
The only difference is that the particle number is not fixed.
In my view, this minor difference does not warrant the complete
change of the paradigm suggested by the field theory.

In my view, quantum fields are fine if you interpret them as
formal technical constructs that aid your calculations.
They are not fine when you start to consider them as
"basic ingredients of the universe" and particles as
"bundles of energy and momentum of the fields" (Weinberg's words).

This is not a purely philosophical debate. Particle picture is
essential to make the "dressing transformation" in QFT and to
eliminate "bare particles" and "ultraviolet infinities" for good.


>
>>I thought that if we take the limit hbar -> 0, then indeterministic
>>wave functions are replaced by trajectories. In this limit, photons
>>should be described in terms of Newtonian light rays.
>>No diffraction, no interference.
>
>
> Again, trajectories arise in the classical limit of QM with a *fixed*
> number of particles. Photon number is not conserved. When that happens,
> the classical limit does not yield trajectories, it yields fields.

I thought that in a classical theory with a variable number of
particles there should be trajectories that can start and terminate
at some points. I don't see how you can jump from a variable
(but still finite) number of degrees of freedom in the particle
theory to the plainly infinite number of degrees of freedom in the field.

The number of particles (including photons) in the Universe is finite.
Field theories seem to disregard this important fact. They use an
infinite number of degrees of freedom to describe even one electron
with its field.


>> It looks suspicious to me
>>that in the weak field limit (when individual photons can be discerned)
>>Maxwell's theory gives continuous predictions incompatible with
>>experiment. This forces me to believe that Maxwell's fields are
>>some surrogates for multi-photon wavefunctions, rather than their proper
>>hbar -> 0 limits.
>
>
> Take |psi> to be a several electron state. <x,y,...|psi> = psi(x,y,...)
> is the corresponding several electron wave function. X = <psi|x|psi>, Y
> = <psi|y|psi>, ..., are the "classical" expectation values of the
> individual position operators x, y, .... The wave function psi(x,y,...)
> satisfies the multi-electron Schroedinger equation. The expectation
> values X, Y, ... satisfy Hamilton's equations of motion, this is
> Ehrenfest's theorem.
>
> Take |phi> to be a several photon state. <0|e(x)e(y)...|phi> =
> phi(x,y,...) is the corresponding several photon wave function, with
> e(x), e(y), ... being the field operators (which are also decorated with
> polarization indices). E(x) = <phi|e(x)|phi> is the expectation value of
> the classical field amplitude. The wave function phi(x,y,...) satisfies
> the multi-photon "Maxwell equations". The expectation values E(x)
> satisfy Maxwell's equations, in the usual sense of the term, which is
> also a consequence of Ehrenfest's theorem. The fact that the single
> photon wave equation is the same as the linear part of the classical
> field equations is a theorem of second quantization.

Do I understand you right? Are you saying that Maxwell's theory can be
applied to the weak-field regime? I don't think so.
Take Young's double-slit experiment. Maxwell's wave theory describes
the light intensity on the screen by continuous functions E(x) and
B(x). This is all fine while the intensity of light is high: there are
many photons, and the light intensity appears continuous on the screen.
At low intensities, when we can distinguish individual
photons on the screen, the field description doesn't work anymore.
The light intensity produced by one photon is more like a
delta-function. One can reconcile these two contradictory
descriptions in the tradition
of quantum mechanics. One can say that E(x) and B(x) are "sort of"
photon wave functions, and when the photon reaches the screen these
wavefunctions collapse to produce a single observable dot.
This is my interpretation of Maxwell's theory: the fields E(x) and B(x)
there are some surrogates of multi-photon wavefunctions that remained
after we took the (incomplete) classical limit from QED to the theory in
which electrons are treated classically, while photons (due to their
zero mass) are treated in a "sort of" quantum way.

Eugene.

Eugene Stefanovich

Aug 25, 2005, 10:00:09 PM
to

Igor Khavkine wrote:

>>I think that hbar -> 0 limit does not apply to the description of
>>light in Maxwell's theory. If you think otherwise, then you should
>>come to the conclusion that there are two different sources of
>>the interference effect. One source is purely quantum (as in Feynman's
>>double-slit experiment), another source is due to "classical waves"
>>(as in Young's experiment). Can these two contributions to the
>>interference be distinguished experimentally? I don't think so.

> So, yes, there are two
> kinds of detectable interference here.

This is very strange.
Take Young's double-slit experiment. When light has a high intensity
(many photons) you see a continuous interference pattern. This
is perfectly described by Maxwell's wave theory. The intensity of
the image on the screen is proportional to E^2(x). Now turn down
the intensity of the light source until you can see individual
photons one-by-one.
The form of the interference pattern does not change. However,
now it is clear that each photon produces a tiny dot on the screen.
Maxwell's E^2(x) intensity now corresponds to the number
(or frequency) of photons hitting the screen in the vicinity of
each point x.
The behavior of photons is clearly quantum-mechanical.

You are saying that the interference patterns in the high-intensity
and low-intensity regimes must be interpreted as manifestations of
two different physical laws. I am saying that there is no difference.
In both cases the interference appears due to the quantum law
of addition of particle quantum amplitudes.
In the high-intensity case the
particle nature of light is hidden by the huge number of particles
involved.
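This picture is easy to simulate (a sketch with an idealized cos^2 fringe profile and arbitrary parameters, standing in for Maxwell's E^2(x)): draw individual dot positions from a probability density proportional to the classical intensity and watch the histogram of hits converge to the fringe pattern as the photon count grows.

```python
import numpy as np

rng = np.random.default_rng(1)

# Screen coordinates and an idealized double-slit intensity profile:
x = np.linspace(-1, 1, 400)
intensity = np.cos(np.pi * x * 3)**2      # a few fringe periods, arbitrary
p = intensity / intensity.sum()           # discretized probability density

def detect(n_photons):
    """Histogram of n individual photon dots landing on the screen."""
    hits = rng.choice(x, size=n_photons, p=p)
    counts, _ = np.histogram(hits, bins=40, range=(-1, 1))
    return counts

few, many = detect(50), detect(200_000)

# Compare the many-dot histogram with the classical intensity profile:
ref, _ = np.histogram(x, bins=40, range=(-1, 1), weights=p)
corr = np.corrcoef(many / many.sum(), ref)[0, 1]
print(corr)
```

With 50 dots the histogram looks ragged; with 200,000 the correlation with E^2(x) is essentially 1. The same sampling code would serve for single-electron diffraction, which is the analogy drawn above.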

Eugene.

Igor Khavkine

Aug 26, 2005, 5:23:56 AM
to
On 2005-08-26, Eugene Stefanovich <eug...@synopsys.com> wrote:
> Igor Khavkine wrote:

There is no difference between the diffraction patterns because they are
predicted by the same set of linear equations. What is different is what
these equations are applied to. In one case they are applied to
classical fields (continuous intensity, no quantum effects), while in
the other case they are applied to the wave function (the interference
pattern is built up from individual dots exactly the same way as for single
electron diffraction, obvious quantum effects). The "thing" that
satisfies the equations in each case cannot be the same.

Igor

Igor Khavkine

Aug 27, 2005, 3:23:12 AM
to
On 2005-08-26, Eugene Stefanovich <eug...@synopsys.com> wrote:
>
>

Necessity is a subjective notion. Necessary to whom? To yourself, who
does not use the field formulation? Or to the hundreds (thousands?) of
physicists who do? Equivalence is what it is. Either you
contradict it or you don't. If you do, I suggest you avail yourself of
the references I cited numerous times and follow the proof yourself. If
you don't, you've said nothing to change other people's opinions of QFT.

> Such a description is not fundamentally different from the particle
> description in ordinary (fixed particle number) quantum mechanics.
> The only difference is that the particle number is not fixed.
> In my view, this minor difference does not warrant the complete
> change of the paradigm suggested by the field theory.
>
> In my view, quantum fields are fine if you interpret them as
> formal technical constructs that aid your calculations.
> They are not fine when you start to consider them as
> "basic ingredients of the universe" and particles as
> "bundles of energy and momentum of the fields" (Weinberg's words).
>
> This is not a purely philosophical debate. Particle picture is
> essential to make the "dressing transformation" in QFT and to
> eliminate "bare particles" and "ultraviolet infinities" for good.

Yes it is. The ultraviolet infinities have been eliminated long before
your philosophy or "dressing transformation" existed. Old news. We've
been there.

>> Again, trajectories arise in the classical limit of QM with a *fixed*
>> number of particles. Photon number is not conserved. When that happens,
>> the classical limit does not yield trajectories, it yields fields.
>
> I thought that in a classical theory with a variable number of
> particles there should be trajectories that can start and terminate at
> some points. I don't see how you can jump from a variable (but still
> finite) number of degrees of freedom in the particle theory to the
> plain infinite number of degrees of freedom in the field.

That's the same statement that I've already answered above. You may think
what you like, but that's not what actually happens. Read the following
carefully:

THERE IS PROOF THAT WHAT YOU THINK IS WRONG.

> The number of particles (including photons) in the Universe is finite.
> Field theories seem to disregard this important fact. They use an
> infinite number of degrees of freedom to describe even one electron
> with its field.

Your objection is void. Any normalizable state in Fock space has a
finite expectation value for the number of particles. Just like,
classically, any physical field configuration has finite energy.
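A standard illustration of this point is a single-mode coherent state (the amplitude below is an arbitrary illustrative number): every particle-number sector is occupied, yet the Poisson weights |c_n|^2 = e^{-|a|^2} |a|^{2n} / n! give a finite expected particle number <N> = |alpha|^2.

```python
import numpy as np
from math import lgamma

alpha = 3.0
n = np.arange(200)   # particle-number sectors (truncated; the tail is ~ 0)

# Poisson weights |c_n|^2, computed in log space to avoid overflow:
log_p = -alpha**2 + 2 * n * np.log(alpha) - np.array([lgamma(k + 1) for k in n])
p = np.exp(log_p)

# Norm should be ~ 1, and <N> = sum_n n |c_n|^2 should be alpha^2:
print(p.sum(), (n * p).sum())
```

So "infinitely many sectors" does not mean "infinitely many particles": normalizability forces the weights to fall off fast enough that <N> stays finite, which is the content of the objection's rebuttal.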

I don't think so. I don't think you understand what I'm saying.

> Take Young's double-slit experiment. Maxwell's wave theory describes
> the light intensity on the screen by continuous functions E(x) and
> B(x). This is all fine while the intensity of light is high: there are
> many photons, and the light intensity appears continuous on the screen.
> At low intensities, when we can distinguish individual
> photons on the screen, the field description doesn't work anymore.
> The light intensity produced by one photon is more like a
> delta-function. One can reconcile these two contradictory
> descriptions in the tradition
> of quantum mechanics.

This situation is handled no differently than single electron
diffraction. The distribution pattern of detections is predicted by the
amplitude of the single photon wave function. As I illustrated above,
this wave function satisfies the wave equation for a relativistic vector
particle. There aren't that many wave equations that have this property.
In fact, there is only one, we call it "Maxwell's equations".

> One can say that E(x) and B(x) are "sort of"
> photon wave functions, and when the photon reaches the screen these
> wavefunctions collapse to produce a single observable dot.
> This is my interpretation of Maxwell's theory: the fields E(x) and B(x)
> there are some surrogates of multi-photon wavefunctions that remained
> after we took the (incomplete) classical limit from QED to the theory in
> which electrons are treated classically, while photons (due to their
> zero mass) are treated in a "sort of" quantum way.

What would be really nice is if you could give an even "sort of" precise
and quantitative statement of this correspondence. Something like a
formula relating these many-photon states to the electric and magnetic
fields, perhaps?

Igor

Eugene Stefanovich

Aug 27, 2005, 3:24:29 AM
to
Igor Khavkine wrote:

>>You are saying that the interference patterns in the high-intensity
>>and low-intensity regimes must be interpreted as manifestations of
>>two different physical laws. I am saying that there is no difference.
>>In both cases the interference appears due to the quantum law
>>of addition of particle quantum amplitudes.
>>In the high-intensity case the
>>particle nature of light is hidden by the huge number of particles
>>involved.
>
>
> There is no difference between the diffraction patterns because they are
> predicted by the same set of linear equations. What is different is what
> these equations are applied to. In one case they are applied to
> classical fields (continuous intensity, no quantum effects), while in
> the other case they are applied to the wave function (the interference
> pattern is built up from individual dots exactly the same way as for single
> electron diffraction, obvious quantum effects). The "thing" that
> satisfies the equations in each case cannot be the same.

Let us then use the analogy between photon diffraction/interference
and electron diffraction/interference. I.e., instead of Young's
experiment with light use Feynman's double-slit experiment with
electrons. There are lot of similarities. By adjusting electrons
energies one can get the diffraction pattern almost the same as in
the case with photons. I would expect that theoretical descriptions
should be very similar for photons and electrons.

If we follow your logic, then electron diffraction in the case
of a weak electron source (electrons emitted one-by-one) should be
described in terms of particle quantum mechanics. I agree
with you here. However, by your logic, if we use high intensity
electron gun (individual particles cannot be distinguished), then
instead of QM description we need to use some kind of "classical
field" description (similar to Maxwell's equations). I don't think
such a classical wave theory of electrons even exists.

In my view, for both photons and electrons and in both low-intensity
and high-intensity cases one should use good old quantum mechanics
in order to describe the diffraction and interference effects.
When intensity of the source goes up, nothing changes in the quantum
properties of individual particles (photons or electrons).
So, Maxwell's equations used for photons in the high intensity regime
are, actually, a simplified way of doing quantum mechanics.
Maxwell just found a clever way to substitute the wave function of
billions of photons by two vector functions E(x,t) and B(x,t).

Eugene.

Eugene Stefanovich

Aug 28, 2005, 3:30:46 AM
to
"Igor Khavkine" <igo...@gmail.com> wrote in message
news:slrndgti7m....@corum.multiverse.ca...

> > So far so good. The only thing is that quantum fields are not necessary
> > for describing the systems with a variable number of particles in the
> > Fock space. Such a description can be formulated entirely in the
> > language of "composite" wavefunctions, where each fixed-particle-number
> > function 1, psi(x), phi(x,y), chi(x,y,z) enters with its own
> > coefficient, and the sum of squares of all such coefficients is 1.
>
> Necessity is a subjective notion. Necessary to whom? To yourself, who
> does not use the field formulation? Or to the hundreds (thousands?) of
> physicists who do? Equivalence is what it is. Either you
> contradict it or you don't. If you do, I suggest you avail yourself of
> the references I cited numerous times and follow the proof yourself. If
> you don't, you've said nothing to change other people's opinions of QFT.

Thank you for acknowledging that my particle-based approach is equivalent
to the traditional field-based approach. I may agree with you that both
approaches lead to the same numerical results. However, I hope you'd
agree that they offer two different perspectives. One approach says:
"Fields are basic ingredients. Particles are excitations of fields."
The other approach says: "Particles are basic ingredients. Fields are
just formal mathematical constructs."

I hope you'd also agree that having more than one perspective, or
equivalent formulations of the theory, is a very useful thing. Take,
for example, quantum mechanics. Heisenberg's matrix mechanics,
Schroedinger's wave mechanics, and Feynman's path integrals are three
different perspectives that enhance and enrich each other. Some
property that may look obscure in one formulation may be completely
transparent in another formulation.

Another example is the old debate about the center of the universe.
Now we know that the choice of the frame of reference (connected
either to the Sun or to the Earth) is completely arbitrary. We can
write all equations in both of these frames. However, it appears that
the equations governing the motion of planets take an especially
simple form in the heliocentric system. This was crucial for Newton's
formulation of the law of gravitation.

> > This is not a purely philosophical debate. Particle picture is
> > essential to make the "dressing transformation" in QFT and to
> > eliminate "bare particles" and "ultraviolet infinities" for good.
>
> Yes it is. The ultraviolet infinities have been eliminated long before
> your philosophy or "dressing transformation" existed. Old news. We've
> been there.

Feynman-Schwinger-Tomonaga theory "swept infinities under the rug".
True, one can have a completely finite formulation in terms of
Glazek-Wilson "similarity renormalization". However, this approach
requires unphysical "bare particles". The only approach to QFT that
can be formulated from the beginning to the end without encountering
a single divergent integral or bare particles is RQD.

> > The number of particles (including photons) in the Universe is finite.
> > Field theories seem to disregard this important fact. They use an
> > infinite number of degrees of freedom to describe even one electron
> > with its field.
>
> Your objection is void. Any normalizable state in Fock space has a
> finite expectation value for the number of particles. Just like,
> classically, any physical field configuration has finite energy.

I was talking about the number of degrees of freedom, which is infinite
for fields in any finite volume. Please understand me, I am not saying
that field theories are wrong. I am saying that there exists an
alternative particle-based approach that seems to be simpler and more
intuitive.

> > Take Young's double-slit experiment. Maxwell's wave theory describes
> > the light intensity on the screen by continuous functions E(x) and
> > B(x). This is all fine while the intensity of light is high: there are
> > many photons, and the light intensity appears continuous on the screen.
> > At low intensities, when we can distinguish individual
> > photons on the screen, the field description doesn't work anymore.
> > The light intensity produced by one photon is more like a
> > delta-function. One can reconcile these two contradictory
> > descriptions in the tradition
> > of quantum mechanics.
>
> This situation is handled no differently than single electron
> diffraction. The distribution pattern of detections is predicted by the
> amplitude of the single photon wave function. As I illustrated above,
> this wave function satisfies the wave equation for a relativistic vector
> particle. There aren't that many wave equations that have this property.
> In fact, there is only one, we call it "Maxwell's equations".

Let me rephrase what you said to see if I understood it correctly.
You are saying:
1. In the case of high intensities, the light diffraction is a classical
phenomenon described by Maxwell's wave equation.
2. In the case of low intensity, the diffraction pattern has quantum
origin, but individual photons are still described by the same Maxwell's
equation, so the diffraction pattern does not change.

What I cannot understand is how the switch is assured (physically, not
formally) between quantum and classical mechanisms when we simply change
the light intensity (the number of photons) without changing anything else.
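[For concreteness, the point under debate can be put numerically. With made-up illustrative parameters (the slit separation d, wavelength lam, and screen distance L below are arbitrary), the classical two-slit fringe intensity and the normalized single-photon detection probability are the same function of screen position, differing only by normalization — a sketch, not a claim from either post:]

```python
import numpy as np

# Illustrative, made-up parameters: slit separation d, wavelength lam,
# slit-to-screen distance L (arbitrary units chosen for the sketch).
d, lam, L = 1e-3, 500e-9, 1.0
x = np.linspace(-5e-3, 5e-3, 1001)   # detector positions on the screen
dx = x[1] - x[0]

# Far-field two-slit fringe pattern (single-slit envelope omitted).
phase = np.pi * d * x / (lam * L)
intensity = np.cos(phase) ** 2       # classical Maxwell intensity ~ |E(x)|^2

# Single-photon case: the detection probability density is the squared
# amplitude of the one-photon wave function -- the same function of x,
# normalized to unit total probability.
prob = intensity / (intensity.sum() * dx)

# The two descriptions predict the same fringe shape.
assert np.allclose(prob * (intensity.sum() * dx), intensity)
assert abs(prob.sum() * dx - 1.0) < 1e-9
```

So the diffraction pattern itself does not distinguish the two regimes; only the statistics of detection events does.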

> > One can say that E(x) and B(x) are "sort of"
> > photon wave functions, and when the photon reaches the screen these
> > wavefunctions collapse to produce a single observable dot.
> > This is my interpretation of Maxwell's theory: the fields E(x) and B(x)
> > there are some surrogates of multi-photon wavefunctions that remained
> > after we took the (incomplete) classical limit from QED to the theory in
> > which electrons are treated classically, while photons (due to their
> > zero mass) are treated in a "sort of" quantum way.
>
> What would be really nice is if you could give an even "sort of" precise
> and quantitative statement of this correspondence. Something like a
> formula relating these many-photon states to the electric and magnetic
> fields, perhaps?

First, I don't think that the task is to reproduce Maxwell's fields E(x)
and B(x) and the related equations. I think these fields and equations
are phenomenological constructs. They were designed to fit Faraday's
empirical observations, and I am not sure that Maxwell's theory will
follow in its entirety as a "classical" limit of the more general QED.

My goal is to have a simplified formulation of QED in which electrons
are treated in the classical (hbar -> 0) limit, while (some simplified)
quantum description is used for photons. I started to do that in my
book, but this task is not completed. In the case of low accelerations,
when radiation can be neglected, I have a theory of charged particles
interacting at a distance. Taking into account the emission and
absorption of photons is more tricky. One needs to find a way to
approximate multi-photon wavefunctions by functions with a few
arguments. It has not been done yet.

Eugene.

Igor Khavkine

Aug 29, 2005, 1:27:04 AM
On 2005-08-28, Eugene Stefanovich <eugene_st...@usa.net> wrote:
> "Igor Khavkine" <igo...@gmail.com> wrote in message
> news:slrndgti7m....@corum.multiverse.ca...
>
>> > So far so good. The only thing is that quantum fields are not necessary
>> > for describing the systems with a variable number of particles in the
>> > Fock space. Such a description can be formulated entirely in the
>> > language of "composite" wavefunctions, where each fixed-particle-number
>> > function 1, psi(x), phi(x,y), chi(x,y,z) enters with its own
>> > coefficient, and the sum of squares of all such coefficients is 1.
>>
>> Necessity is a subjective notion. Necessary to whom? To yourself, who
>> does not use the field formulation? Or to the hundreds (thousands?) of
>> physicists who do? Equivalence is what it is. You either
>> contradict it or you don't. If you do, I suggest you avail yourself of
>> the references I cited numerous times and follow the proof yourself. If
>> you don't, you've said nothing to change other people's opinions of QFT.
>
> Thank you for acknowledging that my particle-based approach is
> equivalent to the traditional field-based approach.

Before you go giddy with joy, let me point out that both approaches are
traditional, that even the equivalence between them is traditional, and
that you have no priority claim to either of them. The Hilbert space
consisting of (anti)symmetrized wave functions with a variable number of
arguments lies at the very core of second quantization. It is explicitly
used, for example, in F.A. Berezin, _Method of Second Quantization_
(1966), and many other places.

>> > This is not a purely philosophical debate. Particle picture is
>> > essential to make the "dressing transformation" in QFT and to
>> > eliminate "bare particles" and "ultraviolet infinities" for good.
>>
>> Yes it is. The ultraviolet infinities have been eliminated long
>> before your philosophy or "dressing transformation" existed. Old
>> news. We've been there.
>
> Feynman-Schwinger-Tomonaga theory "swept infinities under the rug".
> True, one can have a completely finite formulation in terms of
> Glazek-Wilson "similarity renormalization". However, this approach
> requires unphysical "bare particles". The only approach to QFT that
> can be formulated from the beginning to the end without encountering a
> single divergent integral or bare particles is RQD.

Are you not forgetting something? Previous discussion has made it clear
that you use the same renormalization procedures, that you so deplore,
to construct the coefficients in the Hamiltonian of your theory. This
voids any claims of superiority that your theory can make. I also seem
to recall that this point has been made on half a dozen separate
occasions. Do you intend to repeat the above claim again?

> Please understand me, I am
> not saying that field theories are wrong. I am saying that there
> exists an alternative particle-based approach that seems to be simpler
> and more intuitive.

Does a particle based approach exist? Yes. Does it always exist? No.
The cases where it fails have already been discussed in the thread
news:TmTje.9720$Db6.6575@okepread05 . Simpler and more intuitive? That
entirely depends on your personal preference. Anyone is free to make up
their own mind, especially since both approaches are described in
standard texts. What is unfortunate is that their equivalence is not
made as explicit as it could be.

> Let me rephrase what you said to see if I understood it correctly.
> You are saying: 1. In the case of high intensities, the light
> diffraction is a classical phenomenon described by Maxwell's wave
> equation. 2. In the case of low intensity, the diffraction pattern
> has quantum origin, but individual photons are still described by the
> same Maxwell's equation, so the diffraction pattern does not change.
>
> What I cannot understand is how the switch is assured (physically, not
> formally) between quantum and classical mechanisms when we simply
> change the light intensity (the number of photons) without changing
> anything else.

In either case, the underlying description is quantum. One way to obtain
the classical limit is to look at a particular set of states that
reproduce the classical results, through expectation values, to high
fidelity. These are so-called coherent states.

Let N be the particle number operator. The particle description is
appropriate when its expectation value has a small variance,
<N^2>-<N>^2. When this quantity is large, the field description is more
appropriate. For coherent states <N^2>-<N>^2 ~ <N>. On the other hand,
field intensity is proportional to <N>. So if you change the intensity
from high to low, you are changing <N> from high to low, and hence
changing <N^2>-<N>^2 from high to low. In other words, you are smoothly
going from the (classical) field description to the (classical) particle
description. The underlying quantum formalism does not change.
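[The relation <N^2> - <N>^2 ~ <N> quoted above is in fact exact for a coherent state, whose photon-number distribution is Poissonian, P(n) = exp(-nbar) nbar^n / n!. A minimal numerical check (the nbar values are arbitrary illustrative intensities):]

```python
import math
import numpy as np

def number_statistics(nbar, nmax=400):
    """Mean and variance of the photon number N for a coherent state
    with <N> = nbar; the distribution is Poissonian:
    P(n) = exp(-nbar) * nbar**n / n!."""
    n = np.arange(nmax)
    # Work with log-probabilities for numerical stability at large nbar.
    logp = -nbar + n * np.log(nbar) - np.array([math.lgamma(k + 1) for k in n])
    p = np.exp(logp)
    mean = float(np.sum(n * p))
    var = float(np.sum(n**2 * p) - mean**2)
    return mean, var

# <N^2> - <N>^2 equals <N> across three orders of magnitude of intensity.
for nbar in (0.5, 5.0, 50.0):
    mean, var = number_statistics(nbar)
    assert abs(mean - nbar) < 1e-6
    assert abs(var - mean) < 1e-6
```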

> First, I don't think that the task is to reproduce Maxwell's fields
> E(x) and B(x) and related equations. I think, these fields and
> equations are phenomenological constructs. They were designed to fit
> Faraday's empirical observations,

There is no higher goal of theoretical physics than fitting empirical
observations. Maxwell's equations do so admirably and any theory that
claims to supersede them must reproduce them in the limits where they
are known to be valid.

> and I am not sure that Maxwell's
> theory will follow in its entirety as a "classical" limit of the more
> general QED.

Maxwell's equations do follow from QED. This has been known (read
demonstrated) since the time of Pauli, Dirac and Fermi.

> My goal is to have a simplified formulation of QED in which electrons
> are treated in the classical (hbar -> 0) limit, while (some
> simplified) quantum desription is used for photons. I started to do
> that in my book, but this task is not completed. In the case of low
> accelerations, when radiation can be neglected, I have a theory of
> charged particles interacting at a distance. Taking into account the
> emission and absorption of photons is more tricky. One needs to find a
> way to approximate multi-photon wavefunctions by functions with a few
> arguments. It has not been done yet.

It's an admirable goal, and I wish you luck with it. However, it would
greatly help your theory to be taken seriously if you avoid
premature/ill-informed claims of success, superiority, priority, or
deficiencies of existing theories.

Igor

Eugene Stefanovich

Aug 29, 2005, 10:49:41 AM

Igor Khavkine wrote:

>>Thank you for acknowledging that my particle-based approach is
>>equivalent to the traditional field-based approach.
>
>
> Before you go giddy with joy, let me point out that both approaches are
> traditional, that even the equivalence between them is traditional, and
> that you have no priority claim to either of them. The Hilbert space
> consisting of (anti)symmetrized wave functions with a variable number of
> arguments lies at the very core of second quantization. It is explicitly
> used, for example, in F.A. Berezin, _Method of Second Quantization_
> (1966), and many other places.

Unfortunately, I do not have this book with me. Does this book have
an explicit expression for the interacting Hamiltonian in the Fock
space through creation and annihilation operators of real particles,
where each term is finite, and this Hamiltonian can be used for
calculations of both the S-matrix and the time evolution without
encountering divergent integrals? If it does, then I certainly missed
a lot in my education. If it doesn't, then you and I are talking about
two different "particle-based approaches".

>>>>This is not a purely philosophical debate. Particle picture is
>>>>essential to make the "dressing transformation" in QFT and to
>>>>eliminate "bare particles" and "ultraviolet infinities" for good.
>>>
>>>Yes it is. The ultraviolet infinities have been eliminated long
>>>before your philosophy or "dressing transformation" existed. Old
>>>news. We've been there.
>>
>>Feynman-Schwinger-Tomonaga theory "swept infinities under the rug".
>>True, one can have a completely finite formulation in terms of
>>Glazek-Wilson "similarity renormalization". However, this approach
>>requires unphysical "bare particles". The only approach to QFT that
>>can be formulated from the beginning to the end without encountering a
>>single divergent integral or bare particles is RQD.
>
>
> Are you not forgetting something? Previous discussion has made it clear
> that you use the same renormalization procedures, that you so deplore,
> to construct the coefficients in the Hamiltonian of your theory. This
> voids any claims of superiority that your theory can make. I also seem
> to recall that this point has been made on half a dozen separate
> occasions. Do you intend to repeat the above claim again?

Sure I do, because this claim is correct. You are right that I still
need to use the renormalization procedure in order to write down the
"dressed particle" Hamiltonian. But this doesn't invalidate my claim
that my approach is free of divergences "from the beginning to the end".
The point is to understand where is "the beginning" of a physical
theory. This is not hairsplitting; this is a very important point.

In my view, the beginning of a realistic theory is when you have
constructed the Fock space and defined the 10 generators of the Poincare
group there. At this point you have completely fixed the physical
content of the theory and the interactions acting between particles.
From this point you should be able to get physical results and
predictions by a mechanical application of mathematical rules. From this
point on you do not expect to see any surprises, like divergences. All
calculations must be straightforward.

Everything happening before this point I call pre-theory. I don't care
much which weird speculations or tricks you may employ in the pre-theory
phase. You can just pull the Fock space and 10 generators out of your
pocket. That's fine too.

It is good to recall the major pre-theory steps involved in the
standard QFT:

1. Gauge invariance principle
2. Action and Lagrangian
3. Canonical quantization
4. Noether theorems
5. Renormalization counterterms

After these steps are done, the standard QFT finally has a definition
of the Fock space and 10 Poincare generators there. That's the point
which I mark as the "beginning". Now, let's see what we have at this
starting point. There are quite a few problems:

a) The Fock space is built out of bare particles rather than real ones
b) The Hamiltonian is infinite (masses and coupling constants are
infinite)
c) If you exercise extreme care in cancelling all divergences you can
get good results for the S-matrix in this theory, but
d) you cannot calculate the time evolution. At least, the standard
formula exp(iHt) doesn't work at all.

Glazek-Wilson approach alleviates problems b) and c), but it doesn't
solve a) and d).

Now, let's see what the pre-theory steps in RQD are. I make the same
steps 1.-5. as above (you are right that step 5. involves divergences,
but we haven't reached the beginning of the theory yet, so no harm is
done). In addition, I perform one more step:

6. Unitary dressing transformation.

Now I reached the beginning of my theory which has:

A) Fock space built out of real particles
B) Explicit finite expressions for the Hamiltonian and 9 other
Poincare generators as functions of creation/annihilation
operators for real particles
C) The S-matrix and all related properties can be calculated
from here by standard quantum-mechanical formulas without
encountering any divergences
D) The same is true for calculations of the time evolution by
means of the standard formula exp(iHt)
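[Whatever one thinks of claims (C) and (D), the formula invoked in (D) is elementary for any finite Hermitian Hamiltonian (written here with the common sign convention U(t) = exp(-iHt), hbar = 1). A toy sketch — the 2x2 matrix is made up purely for the demonstration — showing that the evolution it generates is unitary:]

```python
import numpy as np

# A made-up 2x2 Hermitian Hamiltonian, hbar = 1 (illustration only).
H = np.array([[1.0, 0.3],
              [0.3, -0.5]])

def evolve(psi0, H, t):
    """Apply U(t) = exp(-i H t) built from the eigendecomposition
    H = V diag(E) V^dagger, so U = V diag(exp(-i E t)) V^dagger."""
    E, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T
    return U @ psi0

psi0 = np.array([1.0, 0.0], dtype=complex)
psi_t = evolve(psi0, H, t=2.7)

# Unitary evolution conserves the norm of the state.
assert np.isclose(np.linalg.norm(psi_t), 1.0)
```

The disputed question in the thread is not this formula but whether a finite, well-defined H exists to put into it.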


> Does a particle based approach exist? Yes. Does it always exist? No.
> The cases where it fails have already been discussed in the thread
> news:TmTje.9720$Db6.6575@okepread05 .

Could you please remind me when the particle based approach does not
exist? I am not so proficient in newsgroups. How can I reach the
thread you quoted?


>>What I cannot understand is how the switch is assured (physically, not
>>formally) between quantum and classical mechanisms when we simply
>>change the light intensity (the number of photons) without changing
>>anything else.
>
>
> In either case, the underlying description is quantum. One way to obtain
> the classical limit is to look at a particular set of states that
> reproduce the classical results, through expectation values, to high
> fidelity. These are so-called coherent states.
>
> Let N be the particle number operator. The particle description is
> appropriate when its expectation value has a small variance,
> <N^2>-<N>^2. When this quantity is large, the field description is more
> appropriate. For coherent states <N^2>-<N>^2 ~ <N>. On the other hand,
> field intensity is proportional to <N>. So if you change the intensity
> from high to low, you are changing <N> from high to low, and hence
> changing <N^2>-<N>^2 from high to low. In other words, you are smoothly
> going from the (classical) field description to the (classical) particle
> description. The underlying quantum formalism does not change.

Now, you got me totally confused.
You are saying that at high <N> photons
are described by classical field (Maxwell's) theory. I disagree with
you, because in my view Maxwell's theory belongs to the quantum domain.
This disagreement is fine, at least I understand your point.
Now you said that at low <N> we get a classical particle description.
That I cannot understand at all.
How can classical particle mechanics explain
diffraction and interference? I hope you are not disputing the fact that
even when few photons are present, the diffraction and interference
effects are still there.


>>and I am not sure that Maxwell's
>>theory will follow in its entirety as a "classical" limit of the more
>>general QED.
>
>
> Maxwell's equations do follow from QED. This has been known (read
> demonstrated) since the time of Pauli, Dirac and Fermi.

Maxwell's equations may follow from QED at a formal superficial level.
However, I doubt that classical electrodynamics is a proper
classical limit of QED. The Pauli-Dirac-Fermi QED predicted
infinite scattering cross-sections. If your statement is correct,
then Maxwell's theory should do the same, which is not true.

I am also not sure that Maxwell's theory is a proper classical limit of
the Tomonaga-Schwinger-Feynman QED. In this theory, the mass and charge
of the electron are infinite, which is not true in Maxwell's
electrodynamics. I doubt that these discrepancies can be somehow fixed
by taking the classical limit. Apparently, we attach a different meaning to
the word "demonstrated".

Eugene.

Igor Khavkine

Aug 29, 2005, 10:58:34 PM
On 2005-08-29, Eugene Stefanovich <eug...@synopsys.com> wrote:
> Igor Khavkine wrote:
>
>>>Thank you for acknowledging that my particle-based approach is
>>>equivalent to the traditional field-based approach.
>>
>> Before you go giddy with joy, let me point out that both approaches are
>> traditional, that even the equivalence between them is traditional, and
>> that you have no priority claim to either of them. The Hilbert space
>> consisting of (anti)symmetrized wave functions with a variable number of
>> arguments lies at the very core of second quantization. It is explicitly
>> used, for example, in F.A. Berezin, _Method of Second Quantization_
>> (1966), and many other places.
>
> Unfortunately, I do not have this book with me. Does this book have
> an explicit expression for the interacting Hamiltonian in the Fock
> space through creation and annihilation operators of real particles,
> where each term is finite, and this Hamiltonian can be used for
> calculations of both the S-matrix and the time evolution without
> encountering divergent integrals? If it does, then I certainly missed
> a lot in my education.

This formalism, like quantum mechanics in general, is dynamics
independent. The question of sensible dynamics is separate. Problems of
constructing sensible dynamics are common to both the field and
variable-particle-number formalisms (by equivalence) and are solved in
field theory as well as in particle formalisms (again through
equivalence) by renormalization.

> If it doesn't, then you and I are talking about two different
> "particle-based approaches".

Evidently we are. As I suspected, your gratitude was premature.

>>>>>This is not a purely philosophical debate. Particle picture is
>>>>>essential to make the "dressing transformation" in QFT and to
>>>>>eliminate "bare particles" and "ultraviolet infinities" for good.
>>>>
>>>>Yes it is. The ultraviolet infinities have been eliminated long
>>>>before your philosophy or "dressing transformation" existed. Old
>>>>news. We've been there.
>>>
>>>Feynman-Schwinger-Tomonaga theory "swept infinities under the rug".
>>>True, one can have a completely finite formulation in terms of
>>>Glazek-Wilson "similarity renormalization". However, this approach
>>>requires unphysical "bare particles". The only approach to QFT that
>>>can be formulated from the beginning to the end without encountering a
>>>single divergent integral or bare particles is RQD.
>>
>>
>> Are you not forgetting something? Previous discussion has made it clear
>> that you use the same renormalization procedures, that you so deplore,
>> to construct the coefficients in the Hamiltonian of your theory. This
>> voids any claims of superiority that your theory can make. I also seem
>> to recall that this point has been made on half a dozen separate
>> occasions. Do you intend to repeat the above claim again?
>
> Sure I do, because this claim is correct. You are right that I still
> need to use the renormalization procedure in order to write down the
> "dressed particle" Hamiltonian. But this doesn't invalidate my claim
> that my approach is free of divergences "from the beginning to the end".
> The point is to understand where is "the beginning" of a physical
> theory. This is not hairsplitting; this is a very important point.

No, it is hairsplitting. If someone gives me an effective field theory
obtained through QED with all divergences eliminated, then I never see
anything other than finite numbers in the calculations. The problem is
that if I want to consider higher order effects in the coupling
constant, I'll have to go back to the original QED theory and evaluate
and renormalize more loop graphs. As long as you rely on perturbation,
the job of constructing a finite theory is never finished. Again, this
voids your claims of superiority. Thus your claim falls squarely into
the "not even wrong" category so aptly named by Pauli. The sooner you
realize it, the better.


>
>> Does a particle based approach exist? Yes. Does it always exist? No.
>> The cases where it fails have already been discussed in the thread
>> news:TmTje.9720$Db6.6575@okepread05 .
>
> Could you please remind me when the particle based approach does not
> exist? I am not so proficient in newsgroups. How can I reach the
> thread you quoted?

http://groups.google.ca/group/sci.physics.research/browse_frm/thread/51b0fb0978abf419


>>>What I cannot understand is how the switch is assured (physically, not
>>>formally) between quantum and classical mechanisms when we simply
>>>change the light intensity (the number of photons) without changing
>>>anything else.
>>
>> In either case, the underlying description is quantum. One way to obtain
>> the classical limit is to look at a particular set of states that
>> reproduce the classical results, through expectation values, to high
>> fidelity. These are so-called coherent states.
>>
>> Let N be the particle number operator. The particle description is
>> appropriate when its expectation value has a small variance,
>> <N^2>-<N>^2. When this quantity is large, the field description is more
>> appropriate. For coherent states <N^2>-<N>^2 ~ <N>. On the other hand,
>> field intensity is proportional to <N>. So if you change the intensity
>> from high to low, you are changing <N> from high to low, and hence
>> changing <N^2>-<N>^2 from high to low. In other words, you are smoothly
>> going from the (classical) field description to the (classical) particle
>> description. The underlying quantum formalism does not change.
>
> Now, you got me totally confused.
> You are saying that at high <N> photons
> are described by classical field (Maxwell's) theory. I disagree with
> you, because in my view Maxwell's theory belongs to the quantum domain.

I hope you realize that, for your opinion to carry any weight, you have
to offer justification. I have given you conditions under which the
classical field formulation approximates the quantum description. So
far, I have yet to hear any concrete objections or justifications for
your differing opinion.

> This disagreement is fine, at least I understand your point.
> Now you said that at low <N> we get a classical particle description.
> That I cannot understand at all.
> How can classical particle mechanics explain
> diffraction and interference? I hope you are not disputing the fact that
> even when few photons are present, the diffraction and interference
> effects are still there.

Sorry, I misspoke in this case. Under low <N> conditions, the behavior
is well approximated by a quantum particle theory; a fixed-photon-number
truncation of the Fock space is justified. That's why low-intensity
photon interference effects would be similar to low electron number
interference effects (like the video that I linked to in another
thread). To get an actual classical particle description, the further
condition of hbar -> 0 has to be satisfied. I think this is done in the
case of gamma rays, whose existence indicated a corpuscular aspect of
light.
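[The truncation argument can be made quantitative with the coherent-state number weights P(n) = exp(-nbar) nbar^n / n!. In this sketch the cutoff (two photons) and the <N> values are arbitrary illustrative choices:]

```python
from math import exp, factorial

def norm_captured(nbar, nmax):
    """Fraction of a coherent state's norm carried by Fock components
    with n <= nmax photons: sum_{n<=nmax} exp(-nbar) * nbar**n / n!."""
    return sum(exp(-nbar) * nbar**n / factorial(n) for n in range(nmax + 1))

# Low intensity, <N> = 0.1: keeping only the 0-, 1- and 2-photon
# sectors already captures more than 99.9% of the norm, so a
# fixed-photon-number truncation is a good approximation.
assert norm_captured(0.1, 2) > 0.999

# High intensity, <N> = 100: the same truncation captures essentially
# nothing, and the field (coherent-state) description takes over.
assert norm_captured(100.0, 2) < 1e-30
```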

>>>and I am not sure that Maxwell's
>>>theory will follow in its entirety as a "classical" limit of the more
>>>general QED.
>>
>> Maxwell's equations do follow from QED. This has been known (read
>> demonstrated) since the time of Pauli, Dirac and Fermi.
>
> Maxwell's equations may follow from QED at a formal superficial level.
> However, I doubt that classical electrodynamics is a proper
> classical limit of QED. The Pauli-Dirac-Fermi QED predicted
> infinite scattering cross-sections. If your statement is correct,
> then Maxwell's theory should do the same, which is not true.
>
> I am also not sure that Maxwell's theory is a proper classical limit of
> the Tomonaga-Schwinger-Feynman QED. In this theory, the mass and charge
> of the electron are infinite, which is not true in Maxwell's
> electrodynamics. I doubt that these discrepancies can be somehow
> fixed by
> taking the classical limit. Apparently, we attach a different meaning to
> the word "demonstrated".

This formal superficial level has been with us for a long time, but it
has gradually become less formal and less superficial. If you want to
see how it also follows from the renormalized theory of
Tomonaga-Schwinger-Feynman, you should familiarize yourself with the
technique of effective actions. When I say "demonstrated", I mean that
the demonstrations exist and have been accepted by experts in the field.
What I think you mean by "demonstrated" is actually "demonstrated and
understood by me", which I admit are two different things. However, the
two can be brought into agreement, but only if you expend some effort to
study and understand the literature on this subject.

Igor

Igor Khavkine

Aug 30, 2005, 4:08:40 PM
On 2005-08-27, Eugene Stefanovich <eug...@synopsys.com> wrote:

> Let us then use the analogy between photon diffraction/interference
> and electron diffraction/interference. I.e., instead of Young's
> experiment with light use Feynman's double-slit experiment with
> electrons. There are a lot of similarities. By adjusting electron
> energies one can get the diffraction pattern almost the same as in
> the case with photons. I would expect that theoretical descriptions
> should be very similar for photons and electrons.
>
> If we follow your logic, then electron diffraction in the case
> of a weak electron source (electrons emitted one-by-one) should be
> described in terms of particle quantum mechanics. I agree
> with you here. However, by your logic, if we use high intensity
> electron gun (individual particles cannot be distinguished), then
> instead of QM description we need to use some kind of "classical
> field" description (similar to Maxwell's equations). I don't think
> such a classical wave theory of electrons even exists.

This classical field would be a Grassmann valued field psi, whose
equations of motion are the same as the single particle Schroedinger
equation. That's if interactions between electrons are not taken into
account. Interactions would correspond either to non-linearities in the
Grassmann field equations or their coupling to other field equations,
such as E&M. Grassmann fields are covered in any text that discusses
path integrals for Fermions. So, yes, such a theory does exist and it is
used with very large electron guns and with very good success. Its
predictions correspond to tree-level calculations in QED, which
reproduce the cross sections measured in high-energy particle
accelerators to good accuracy.

The operative difference here is not just the electron beam intensity,
but also the fact that in these experiments electron number is not
conserved. As I've been saying, conservation of particle number is what
distinguishes particle mechanics from field theory.

> In my view, for both photons and electrons and in both low-intensity
> and high-intensity cases one should use good old quantum mechanics
> in order to describe the diffraction and interference effects.
> When intensity of the source goes up, nothing changes in the quantum
> properties of individual particles (photons or electrons).
> So, Maxwell's equations used for photons in the high intensity regime
> are, actually, a simplified way of doing quantum mechanics.
> Maxwell just found a clever way to substitute the wave function of
> billions of photons by two vector functions E(x,t) and B(x,t).

You are entitled to your opinions. What is unfortunate is that these
opinions are ill-informed and are not backed up by any mathematical
formalism to make them precise. They also seem impervious to change,
even under overwhelming evidence to the contrary.

Igor

Eugene Stefanovich

Aug 30, 2005, 4:10:27 PM

"Igor Khavkine" <igo...@gmail.com> wrote in message
news:slrndh7hf0....@corum.multiverse.ca...

> >> Are you not forgetting something? Previous discussion has made it clear
> >> that you use the same renormalization procedures, that you so deplore,
> >> to construct the coefficients in the Hamiltonian of your theory. This
> >> voids any claims of superiority that your theory can make. I also seem
> >> to recall that this point has been made on half a dozen separate
> >> occasions. Do you intend to repeat the above claim again?
> >
> > Sure I do, because this claim is correct. You are right that I still
> > need to use the renormalization procedure in order to write down the
> > "dressed particle" Hamiltonian. But this doesn't invalidate my claim
> > that my approach is free of divergences "from the beginning to the end".
> > The point is to understand where is "the beginning" of a physical
> > theory. This is not hairsplitting; this is a very important point.
>
> No it is hair splitting. If someone gives me an effective field theory
> obtained through QED with all divergences eliminated, then I never see
> anything other than finite numbers in the calculations. The problem is
> that if I want to consider higher order effects in the coupling
> constant, I'll have to go back to the original QED theory and evaluate
> and renormalize more loop graphs. As long as you rely on perturbation theory,
> the job of constructing a finite theory is never finished. Again, this
> voids your claims of superiority. Thus your claim falls squarely into
> the "not even wrong" category so aptly named by Pauli. The sooner you
> realize it, the better.

Congratulations on knocking down another strawman! RQD is not an
effective-field-theory approximation to QED. In contrast to effective
field theories, RQD produces the same S-matrix as QED in each
perturbation order for all particle energies. There is a proof of this
statement in my book. So, RQD and QED are exactly scattering-equivalent.
RQD can also calculate the time evolution, which is beyond the capacity
of QED.

True, both theories rely on the perturbation expansion. So, you may say
that both of them are "never finished". I accept this criticism.
Can you offer anything better?

> >> Does a particle based approach exist? Yes. Does it always exist? No.
> >> The cases where it fails have already been discussed in the thread
> >> news:TmTje.9720$Db6.6575@okepread05 .
> >
> > Could you please remind me when the particle based approach does not
> > exist? I am not so proficient in newsgroups. How can I reach the
> > thread you quoted?
>
>
http://groups.google.ca/group/sci.physics.research/browse_frm/thread/51b0fb0978abf419

I don't see in this thread any example of the failure of the particle
approach. To the contrary, I see a failure of the field approach to
describe the time evolution of interacting systems.


> >> Let N be the particle number operator. The particle description is
> >> appropriate when its expectation value has a small variance,
> >> <N^2>-<N>^2. When this quantity is large, the field description is more
> >> appropriate. For coherent states <N^2>-<N>^2 ~ <N>. On the other hand,
> >> field intensity is proportional to <N>. So if you change the intensity
> >> from high to low, you are changing <N> from high to low, and hence
> >> changing <N^2>-<N>^2 from high to low. In other words, you are smoothly
> >> going from the (classical) field description to the (classical)
particle
> >> description. The underlying quantum formalism does not change.
> >
> > Now, you got me totally confused.
> > You are saying that at high <N> photons
> > are described by classical field (Maxwell's) theory. I disagree with
> > you, because in my view Maxwell's theory belongs to the quantum domain.
>
> I hope you realize that, for your opinion to carry any weight, you have
> to offer justification. I have given you conditions under which the
> classical field formulation approximates the quantum description. So
> far, I have yet to hear any concrete objections or justifications for
> your differing opinion.

Let me tell you my fundamental position. I cannot justify it, because
this is my postulate. It says: "The world is made of particles
interacting with each other. Everything you attribute to fields or
waves (like diffraction and interference) is just a manifestation of
the quantum properties of the particles. Fields (either classical or
quantum) are not needed. They are redundant." I believe that
everything we know in physics can be explained as manifestations of
the properties of particles. If there are experimental observations
that contradict my position, please let me know.

This postulate makes me believe that Maxwell's wave theory of light is
nothing but a (rather successful, I admit) attempt to approximate the
quantum behavior of billions of photons by two functions E(x,t) and B(x,t).

You may say that I have no right to criticize Maxwell's electrodynamics,
because I haven't constructed my own particle-based electrodynamics
which involves light emission and absorption and, at least, reproduces
all the successes of Maxwell's theory. I may agree with that.
So, you may (dis)regard my criticism as mere speculation. However, let me
remind you that classical electrodynamics has a very poor record when it
comes to radiation reaction and related things.

> > This disagreement is fine, at least I understand your point.
> > Now you said that at low <N> we get a classical particle description.
> > That I cannot understand at all.
> > How classical particle mechanics can explain
> > diffraction and interference? I hope you are not disputing the fact that
> > even when few photons are present, the diffraction and interference
> > effects are still there.
>
> Sorry, I misspoke in this case. Under low <N> conditions, the behavior
> is well approximated by a quantum particle theory; a fixed photon number
> truncation of the Fock space is justified.
> photon interference effects would be similar to low electron number
> interference effects (like the video that I linked to in another
> thread). To get an actual classical particle description, the further
> condition of hbar -> 0 has to be satisfied. I think this is done in the
> case of gamma rays, whose existence indicated a corpuscular aspect of
> light.

Now you should see my point. When you take the low-<N> limit
of Maxwell's theory, you obtain a quantum theory of a few photons.
Therefore, Maxwell's high-<N> theory is itself a quantum theory.
In order to get to the classical picture, you need to perform
(in addition to the small-<N> limit) another limiting procedure,
hbar -> 0. Only then do you obtain a classical theory of photons.
This corpuscular theory can describe gamma rays, for which quantum
effects (diffraction and interference) can be neglected.


> > I am also not sure that Maxwell's theory is a proper classical limit of
> > the Tomonaga-Schwinger-Feynman QED. In this theory, the mass and charge
> > of the electron are infinite, which is not true in Maxwell's
> > electrodynamics. I doubt that these discrepancies can be somehow
> > fixed by
> > taking the classical limit. Apparently, we attach a different meaning to
> > the word "demonstrated".
>
> This formal superficial level has been with us for a long time, but it
> has gradually become less formal and less superficial. If you want to
> see how it also follows from the renormalized theory of
> Tomonaga-Schwinger-Feynman, you should familiarize yourself with the
> technique of effective actions. When I say "demonstrated", I mean that
> the demonstrations exist and have been accepted by experts in the field.
> What I think you mean by "demonstrated" is actually "demonstrated and
> understood by me", which I admit are two different things. However the
> two can be brought in agreement, but only if you expend some effort to
> study and understand the literature on this subject.

Could you please explain how "effective actions" can produce a finite
mass and charge of the electron by means of the classical limit of a
theory (QED) in which m and e are infinite? Do you have any references,
by any chance? Or should we wait until some "expert in the field"
intervenes?

Eugene.

Arnold Neumaier

unread,
Aug 30, 2005, 4:10:34 PM8/30/05
to
Igor Khavkine wrote:

> On 2005-08-29, Eugene Stefanovich <eug...@synopsys.com> wrote:
>
>>Igor Khavkine wrote:
>>
>>>Let N be the particle number operator. The particle description is
>>>appropriate when its expectation value has a small variance,
>>><N^2>-<N>^2. When this quantity is large, the field description is more
>>>appropriate. For coherent states <N^2>-<N>^2 ~ <N>. On the other hand,
>>>field intensity is proportional to <N>. So if you change the intensity
>>>from high to low, you are changing <N> from high to low, and hence
>>>changing <N^2>-<N>^2 from high to low. In other words, you are smoothly
>>>going from the (classical) field description to the (classical) particle
>>>description. The underlying quantum formalism does not change.
>>

> Under low <N> conditions, the behavior
> is well approximated by a quantum particle theory; a fixed photon number
> truncation of the Fock space is justified. That's why low intensity
> photon interference effects would be similar to low electron number
> interference effects (like the video that I linked to in another
> thread). To get an actual classical particle description, the further
> condition of hbar -> 0 has to be satisfied.

What one really needs is just that hbar/<N> be small enough. This can be
achieved by a formal limit
hbar -> 0
or by an experimental limit
<N> -> inf.
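As a numerical illustration of this limit (a sketch only: coherent-state
photon counts follow a Poisson distribution, so the relative fluctuation
Delta N/<N> = 1/sqrt(<N>) shrinks as <N> grows; the sample sizes and mean
photon numbers below are illustrative assumptions):

```python
import numpy as np

# Coherent-state photon counts are Poisson distributed, so the variance
# equals the mean:  <N^2> - <N>^2 = <N>.  The relative fluctuation
# Delta N / <N> = 1/sqrt(<N>) therefore vanishes as <N> -> inf, which
# is the "classical field" regime discussed above.
rng = np.random.default_rng(0)

def relative_fluctuation(mean_n, samples=200_000):
    """Sampled Delta N / <N> for a coherent state with <N> = mean_n."""
    counts = rng.poisson(mean_n, size=samples)
    return counts.std() / counts.mean()

for mean_n in (4, 100, 10_000):
    print(mean_n, relative_fluctuation(mean_n))
# Delta N/<N> drops roughly as 1/sqrt(<N>): ~0.5, ~0.1, ~0.01
```

So "hbar/<N> small" can indeed be reached either by the formal hbar -> 0
limit or simply by turning the intensity up.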


Arnold Neumaier

Souvik

unread,
Aug 30, 2005, 4:13:12 PM8/30/05
to
Eugene Stefanovich wrote:
> One approach says: "Fields are basic
> ingredients.
> Particles are excitations of fields". Another approach says: "Particles are
> basic
> ingredients. Fields are just formal mathematical constructs"

Most physicists have trodden that path. However, the particle viewpoint
just doesn't stand up to it. For example, it is difficult to explain
phenomena that involve a change in the number of particles in terms of
a particle theory -- without invoking a bunch of ad-hoc rules which are
more numerous than for a field theory.

Can you explain why the wavefunction of two electrons flips sign when
they're interchanged, using the particle theory? Can you explain why
particles of the same species are identical (in the QM sense)?

Also, there is no simple interpretation of gauge symmetry in a particle
viewpoint as far as I can see. Ideas of unification would be horribly
convoluted in a particle viewpoint if it existed.

You see, a path integral over the space of all fields is a very
*different* theory than a path integral over the trajectories of all
particles. The two statements that you say above should be equivalent
are really not.

-Souvik

Eugene Stefanovich

unread,
Aug 31, 2005, 12:27:22 AM8/31/05
to

"Arnold Neumaier" <Arnold....@univie.ac.at> wrote in message
news:43142B25...@univie.ac.at...

> > Under low <N> conditions, the behavior
> > is well approximated by a quantum particle theory; a fixed photon number
> > truncation of the Fock space is justified. That's why low intensity
> > photon interference effects would be similar to low electron number
> > interference effects (like the video that I linked to in another
> > thread). To get an actual classical particle description, the further
> > condition of hbar -> 0 has to be satisfied.
>
> What one really needs is just that hbar/<N> small enough. This can be
> achieved by a formal limit
> hbar -> 0
> or by an experimental limit
> <N> -> inf.

This notion is new to me. Are you saying that all quantum effects
disappear as the number of particles <N> becomes big?

Isn't it true that the interference pattern remains the same whether
you shoot photons (or electrons) one-by-one or use a
high-intensity source?

Eugene.

p.ki...@imperial.ac.uk

unread,
Aug 31, 2005, 10:20:18 AM8/31/05
to
Eugene Stefanovich <eug...@synopsys.com> wrote:
> >>Maxwell fields are just approximatons (quite successful,
> >>I admit) to multiphoton wavefunctions.
> >
> >
> > No, they are not. Solutions for Maxwell fields make up
> > the MODE functions used by a quantum theory, they are not
> > (in any physical sense) wave functions, and calling them
> > wave functions is perverse and misleading.

> Surely, they are not wavefunctions. I call them "approximations
> to multiphoton wavefunctions". Maxwell's fields retain one
> important property of wavefunctions: the additivity of the
> amplitude. The intensity of light is given by the square of the
> field which is analogous to the QM formula for the probability of
> finding a photon (which is a square of the wave function).
> This leads to similar descriptions of diffraction and interference
> in the wave theory of light and in QM.

You, of course, may call them whatever you like. But describing
the mode functions (in whatever limit or approximation) using
the word "wavefunctions" is perverse and misleading. Perverse,
because there is no good reason for it (they already have a name);
and misleading because it misleads readers as to where the
quantum behaviour exists.


> I think, there is only one source of interference - the quantum
> nature of photons. The fact that "classical" Maxwell's theory
> describes the interference of light tells me that this theory is
> not completely classical. It still retains a very important quantum
> aspect.

> Maxwell's theory is a heuristic approach, and I don't think there is
> a rigorous procedure which leads from photon wave functions of
> QED to Maxwell's fields E and B.


> > The wavefunctions of the photons living inside these mode
> > functions are completely independent of the mode functions.
> > For a start, the Maxwell solutions (mode functions) wary
> > over time and space; (crudely) the wavefunctions vary over
> > excitation level of the quantum SH oscillator inside
> > the mode function.

> Could you please explain what is "mode function" and what is
> "quantum SH oscillator"?

Any good quantum optics textbook will give you a better answer,
but a "mode function" is some convenient solution of the
classical Maxwells equations. Generally you want a nice
orthonormal set of these; often a useful set of modes is
just plane waves. If you look at the Hamiltonian for EM you
will see it is quadratic in field strength; it follows therefore
that each mode is a bit like a simple harmonic (SH) oscillator.
Quantize, and each mode is a quantum SH oscillator. We get
a pair of annihilation and creation operators for each mode
of the field.

The upshot is that the mode function(s) describes how some
component of the field is shaped in spacetime; and that all
the quantum properties are determined by the oscillator
"living" in this mode.

The mode function is a purely classical wave-like object over
space and time. The wavefunction is that of the QSHO, and
extends over the "excitation" of the oscillator of the
relevant mode; all space and time dependence is given by the
mode function.
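The "each mode is a quantum SH oscillator" statement above can be checked
in a few lines (a sketch only: the Fock-space truncation level and the
mode frequency are arbitrary assumptions, and hbar = 1):

```python
import numpy as np

# One field mode, truncated to `dim` Fock states |0>..|dim-1>.
# The annihilation operator acts as a|n> = sqrt(n)|n-1>; the mode
# Hamiltonian H = omega*(a^dag a + 1/2) (hbar = 1) then has the equally
# spaced simple-harmonic-oscillator spectrum omega*(n + 1/2).
dim = 8        # Fock-space truncation -- an assumption of this sketch
omega = 2.5    # mode frequency, arbitrary units

a = np.diag(np.sqrt(np.arange(1, dim)), k=1)        # annihilation operator
H = omega * (a.conj().T @ a + 0.5 * np.eye(dim))    # mode Hamiltonian

levels = np.linalg.eigvalsh(H)
print(levels)   # omega*(n + 1/2) for n = 0 .. dim-1
```

The mode function fixes the shape in spacetime; this ladder-operator
algebra is where all the quantum behaviour lives.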

This isn't intended as a rigorous description, and indeed I
am not the person to ask if you want such a thing. But it
corresponds to what is actually done by people in quantum
optics (in my experience) when they describe photons.

When it comes to doing a particular calculation, and I
need to put my photon description into some kind of overlap
integral, for example, I'll use the product of the mode
function and the wavefunction.

Eugene Stefanovich

unread,
Aug 31, 2005, 10:55:33 AM8/31/05
to

Souvik wrote:
> Eugene Stefanovich wrote:
>
>>One approach says: "Fields are basic
>>ingredients.
>>Particles are excitations of fields". Another approach says: "Particles are
>>basic
>>ingredients. Fields are just formal mathematical constructs"
>
>
> Most physicists have tread that path. However, the particle viewpoint
> just doesn't stand up to it. For example, it is difficult to explain
> phenomena that involve a change in the number of particles in terms of
> a particle theory -- without invoking a bunch of ad-hoc rules which are
> larger in number than for a field theory.

I disagree. No ad hoc rules are needed. In my book physics/0504062
I derive the Hamiltonian of interacting particles (electrons, photons,
etc.), which explicitly describes the creation and annihilation of
particles. I base my derivation on the field-based QED, but QED is
just a temporary crutch there. There is no reference to fields in
the final Hamiltonian.


> Can you explain why the wavefunction of two electrons flip sign when
> they're interchanged by the particle theory?

No, I don't have a proof of the spin-statistics theorem in my approach.
However, I do have something which is not available in QED:
a divergence-free Hamiltonian description of particle dynamics.

> Can you explain why
> particles of the same species are identical (in the QM sense)?

I always thought this is a postulate in QM.
We assume that two electrons are identical. Then QM says that
their wavefunction should be either symmetric or antisymmetric
wrt permutations. Then QFT says that they actually must be
antisymmetric.


> Also, there is no simple interpretation of gauge symmetry in a particle
> viewpoint as far as I can see. Ideas of unification would be horribly
> convoluted in a particle viewpoint if it existed.

You are right. Gauge symmetry is not applicable to particle theories.
However, despite the huge successes of unified gauge field theories
during the last decades, I hold the belief that there is no deep physical
principle behind gauge symmetry. I believe (and you are totally free to
disagree with me) that gauge symmetry is phenomenological, and it
will not survive in the final comprehensive theory.

I haven't applied my theory to weak and strong nuclear forces, and
to gravity yet. I believe that it is possible. Though this is just one
person's hope. Nothing tangible.


> You see, a path integral over the space of all fields is a very
> *different* theory than a path integral over the trajectories of all
> particles. The two statements that you say above should be equivalent
> are really not.

I have a particle-based theory of electromagnetic interactions (RQD).
I have a proof that it predicts exactly the same S-matrix (in each
order) as perturbative renormalized QED. Therefore, I believe that
path integral formulations should be equivalent too. Though I haven't
thought about that.

Eugene.

Eugene Stefanovich

unread,
Aug 31, 2005, 2:47:42 PM8/31/05
to

p.ki...@imperial.ac.uk wrote:

>> Could you please explain what is "mode function" and what is
>> "quantum SH oscillator"?

> The upshot is that the mode function(s) describes how some
> component of the field is shaped in spacetime; and that all
> the quantum properties are determined by the oscillator
> "living" in this mode.
>
> The mode function is a purely classical wave-like object over
> space and time. The wavefunction is that of the QSHO, and
> extends over the "excitation" of the oscillator of the
> relevant mode; all space and time dependence is given by the
> mode function.

What role do the mode functions play in the diffraction and
interference of light? Can I understand your words to mean that
there are two distinct contributions to the interference: one coming
from classical "mode functions", the other coming from quantum
wave functions?

Eugene.

Igor Khavkine

unread,
Sep 1, 2005, 2:42:03 AM9/1/05
to

I don't know what you mean.

> RQD can also calculate the time evolution. This is beyond capacity of QED.

False. QED is a QFT. QFT is QM applied to field theory. If QM can
describe time evolution, then so can QFT, and then so can QED. If the
issue is renormalization, there are examples where explicit time
evolution has been constructed in renormalized theories. Application
of the same techniques to QED is left as an exercise.

> True, both theories rely on the perturbation expansion. So, you may say
> that both of them are "never finished". I accept this criticism.
> Can you offer anything better?

I have no need to do better. This argument is sufficient to discredit
any claims of superiority that your theory might have. It's as finite as
QED is after renormalization. In either, at any given perturbative
order, all calculations can be made explicitly finite. But to go to any
higher order, recourse to renormalization is still necessary.

>> >> Does a particle based approach exist? Yes. Does it always exist? No.
>> >> The cases where it fails have already been discussed in the thread
>> >> news:TmTje.9720$Db6.6575@okepread05 .
>> >
>> > Could you please remind me when the particle based approach does not
>> > exist? I am not so proficient in newsgroups. How can I reach the
>> > thread you quoted?
>>
> http://groups.google.ca/group/sci.physics.research/browse_frm/thread/51b0fb0978abf419
>
> I don't see in this thread any example of the failure of the particle
> approach.

You should give that thread a better read. Several examples where the
particle picture fails were given. These include curved space-times,
extended field configurations, and symmetry breaking.

> To the contrary, I see a failure of the field approach to
> describe the time evolution of interacting systems.

Ah, you must have read only your own posts.

>> I hope you realize that, for your opinion to carry any weight, you have
>> to offer justification. I have given you conditions under which the
>> classical field formulation approximates the quantum description. So
>> far, I have yet to hear any concrete objections or justifications for
>> your differing opinion.
>
> Let me tell you my fundamental position. I cannot justify it, because
> this is my postulate. It says: "World is made of particles interacting
> with each other. Everything you attribute to fields or waves (like
> diffraction and interference) are just manifestations of quantum
> properties of the particles. Fields (either classical or quantum) are
> not needed. They are redundant." I believe that everything we know in
> physics can be explained as manifestations of properties of particles.
> If there are experimental observations that contradict my position,
> please let me know.

Postulates are a dime a dozen. Diffraction and interference are generic
features of hyperbolic linear PDEs. Fields come up naturally as the
classical limit of a quantum theory of a *variable* number of particles.
The particle view point breaks down in many cases, e.g. see above. All
of this renders your postulate untenable, barring a miracle where you
find a particle alternative to every single use of a field in a physical
theory (and I mean a mathematically formulated, predictive alternative).

> You may say that I have no right to criticize Maxwell's electrodynamics,
> because I haven't constructed my own particle-based electrodynamics
> which involves light emission and absorption and, at least, reproduces all
> successes of Maxwell's theory. I may agree with that.
> So, you may (dis)regard my criticism as mere speculation.

And I do.

> However, let me remind you that classical electrodynamics has very
> poor record when it comes to radiation reaction and related things.

Only for point charged particles, which are known not to exist. Plasma
physicists and those doing magnetohydrodynamics have yet to hang up
their hats in despair.

>> > This disagreement is fine, at least I understand your point.
>> > Now you said that at low <N> we get a classical particle description.
>> > That I cannot understand at all.
>> > How classical particle mechanics can explain
>> > diffraction and interference? I hope you are not disputing the fact that
>> > even when few photons are present, the diffraction and interference
>> > effects are still there.
>>
>> Sorry, I misspoke in this case. Under low <N> conditions, the behavior
>> is well approximated by a quantum particle theory; a fixed photon number
>> truncation of the Fock space is justified. That's why low intensity
>> photon interference effects would be similar to low electron number
>> interference effects (like the video that I linked to in another
>> thread). To get an actual classical particle description, the further
>> condition of hbar -> 0 has to be satisfied. I think this is done in the
>> case of gamma rays, whose existence indicated a corpuscular aspect of
>> light.

> Now you should see my point. When you take the low <N> limit
> of Maxwell's theory, you obtain a quantum theory of few photons.
> Therefore, Maxwell's high <N> theory is itself a quantum theory.

This is a clear non sequitur. High-<N> and low-<N> limits are completely
different and are described by different approximations. For low <N> the
approximation is still quantum, while for high <N> it no longer is.

> Could you explain please how "effective actions" can produce finite
> mass and charge of the electron by means of the classical limit of a
> theory (QED) in which m and e are infinite. Do you have any
> references, by any chance? Or we should wait until some "expert in the
> field" intervenes?

Bare cut-off theory  --- loop calculations --->  Effective theory
        |                + renormalization       with modified lagrangian
        |                                                |
  restore cut-off                                take classical limit
        |                                                |
        V                                                V
Infinite bare coupling constants                 Finite classical theory
because of renormalization                       with cutoff
        |                                                |
  classical limit                                  restore cutoff
        |                                                |
        V                                                V
Unphysical classical theory                      Physically reasonable
                                                 classical theory

Weinberg, QFT, vol.2

Burgess, GR as an Effective Field Theory
http://relativity.livingreviews.org/Articles/lrr-2004-5/index.html

Polchinski, Effective Field Theory and the Fermi Surface
http://arxiv.org/abs/hep-th/9210046

Google, "effective field theory".

Igor

Arnold Neumaier

unread,
Sep 1, 2005, 6:28:16 PM9/1/05
to
Eugene Stefanovich wrote:
> "Arnold Neumaier" <Arnold....@univie.ac.at> wrote in message
> news:43142B25...@univie.ac.at...
>
>
>>>Under low <N> conditions, the behavior
>>>is well approximated by a quantum particle theory; a fixed photon number
>>>truncation of the Fock space is justified. That's why low intensity
>>>photon interference effects would be similar to low electron number
>>>interference effects (like the video that I linked to in another
>>>thread). To get an actual classical particle description, the further
>>>condition of hbar -> 0 has to be satisfied.
>>
>>What one really needs is just that hbar/<N> small enough. This can be
>>achieved by a formal limit
>> hbar -> 0
>>or by an experimental limit
>> <N> -> inf.
>
>
> This notion is new to me. Are you saying that all quantum effects
> disappear as the number of particles <N> becomes big?

In many cases yes, but not always. There are collective phenomena
such as superconductivity or bound states that leave their quantum trace
in the classical limit. But a classical description is nothing but the
limit of quantum theory in the case of large quantum numbers.


> Isn't it true that the interference pattern remains the same whether
> you shoot photons (or electrons) one-by-one or use a
> high-intensity source?

Yes. But interference is also an effect of classical electromagnetism.
So this does not contradict my statement.
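The point that the same fringes appear in both descriptions can be
sketched numerically: the classical two-slit intensity |E1+E2|^2 and the
normalized one-photon probability |psi1+psi2|^2 are built from the same
added amplitudes, so they differ only by an overall scale. All geometry
values below (wavelength, slit separation, screen distance) are
illustrative assumptions in arbitrary units:

```python
import numpy as np

# Two-slit fringes computed twice from the same added amplitudes:
# classically, E = E1 + E2 and the intensity is |E|^2; for one photon,
# the probability density is |psi1 + psi2|^2 after normalization.
wavelength = 0.5
d = 5.0            # slit separation
L = 1000.0         # slit-to-screen distance
x = np.linspace(-50.0, 50.0, 1001)   # screen coordinate
dx = x[1] - x[0]

k = 2 * np.pi / wavelength
r1 = np.hypot(L, x - d / 2)          # path length from slit 1
r2 = np.hypot(L, x + d / 2)          # path length from slit 2
amp = np.exp(1j * k * r1) + np.exp(1j * k * r2)   # E1+E2, or psi1+psi2

intensity = np.abs(amp) ** 2                        # classical fringes
probability = intensity / (intensity.sum() * dx)    # one-photon density

# Normalized to peak height, the two patterns coincide point by point:
same_shape = np.allclose(intensity / intensity.max(),
                         probability / probability.max())
print(same_shape)
```

This is why shooting photons one-by-one builds up the same pattern as a
high-intensity source.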


Arnold Neumaier

Igor Khavkine

unread,
Sep 1, 2005, 6:29:54 PM9/1/05
to
On 2005-08-31, Eugene Stefanovich <eug...@synopsys.com> wrote:
> Souvik wrote:
>> Eugene Stefanovich wrote:
>>
>>>One approach says: "Fields are basic ingredients. Particles are
>>>excitations of fields". Another approach says: "Particles are basic
>>>ingredients. Fields are just formal mathematical constructs"
>>
>> Most physicists have tread that path. However, the particle viewpoint
>> just doesn't stand up to it. For example, it is difficult to explain
>> phenomena that involve a change in the number of particles in terms of
>> a particle theory -- without invoking a bunch of ad-hoc rules which are
>> larger in number than for a field theory.
>
> I disagree. No ad hoc rules needed.

>> Can you explain why


>> particles of the same species are identical (in the QM sense)?
>
> I always thought this is a postulate in QM.

That's one ad-hoc rule right there.

> We assume that two electrons are identical. Then QM says that
> their wavefunction should be either symmetic or antisymmetric
> wrt permutations. Then QFT says that they actually must be
> antisymmetric.

Quantization of a regular field theory yields a bosonic quantum field
theory, whose excitations translate to identical particles with
symmetrized wave functions. Quantization of a Grassmann field theory
yields a fermionic quantum field theory, whose excitations translate to
identical particles with antisymmetrized wave functions.
Field approach: 1, particle approach: 0.
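A minimal sketch of the (anti)symmetrized two-particle wavefunctions that
the two quantizations produce (the single-particle orbitals phi_a and
phi_b below are toy assumptions, not taken from any particular theory):

```python
import math

# Toy single-particle orbitals (assumptions of this sketch):
def phi_a(x):
    return math.exp(-x * x)

def phi_b(x):
    return x * math.exp(-x * x)

def psi(x1, x2, sign):
    """Two-particle amplitude: sign=+1 bosonic (symmetrized product),
    sign=-1 fermionic (antisymmetrized product)."""
    return phi_a(x1) * phi_b(x2) + sign * phi_a(x2) * phi_b(x1)

print(psi(0.3, 1.1, +1) == psi(1.1, 0.3, +1))    # True: symmetric
print(psi(0.3, 1.1, -1) == -psi(1.1, 0.3, -1))   # True: flips sign
print(psi(0.5, 0.5, -1))                          # 0.0: Pauli exclusion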

>> Can you explain why the wavefunction of two electrons flip sign when
>> they're interchanged by the particle theory?
>
> No, I don't have a proof of the spin-statistics theorem in my approach.
> However, I do have something which is not avaliable in QED:
> Divergence-free Hamiltonian description of particle dynamics.

False attack on QED. Its Hamiltonian is there to calculate for anyone
who cares/dares.

>> Also, there is no simple interpretation of gauge symmetry in a particle
>> viewpoint as far as I can see. Ideas of unification would be horribly
>> convoluted in a particle viewpoint if it existed.
>
> You are right. Gauge symmetry is not applicable to particle theories.
> However, despite huge successes of unified gauge field theories
> during last decades, I hold a belief that there is no deep physical
> principle behind gauge symmetry. I believe (and you are totally free to
> disagree with me) that gauge symmetry is phenomenological, and it
> will not survive in the final comprehensive theory.

If beliefs come into play, you lose to several Nobel prize winners (Lee
& Yang, Gross & David Politzer & Wilczek, Feynman, Weinberg, and
probably others). Do you have anything more concrete to offer?

Igor

Eugene Stefanovich

unread,
Sep 1, 2005, 6:31:03 PM9/1/05
to
Igor Khavkine wrote:

>>RQD is not an
>>effective field theory approximation to QED.
>
>
> I don't know what you mean.

1. RQD is formulated in terms of particles, not fields.
2. RQD is not an approximation. It is just as exact as QED
(within the perturbative approach, of course).


>>RQD can also calculate the time evolution. This is beyond capacity of QED.
>
>
> False. QED is a QFT. QFT is QM applied to field theory. If QM can describe
> time evolution, then so can QFT, and then so can QED.

In QM we normally have a finite Hamiltonian expressed in terms of
observables (positions and momenta) of real particles, so the time
evolution is evaluated simply as exp(iHt). In local QFT or QED, the
Hamiltonian is expressed in terms of fictitious bare particles, and
H is formally infinite. The formula exp(iHt) doesn't work.
I am not saying that this is a dead end. It is not. Time evolution
CAN be described in QFT (or QED). But in order to do that, you need
to perform a unitary transformation to dressed particles.
That's the only rigorous approach known to me.
This has been done in RQD. RQD is not an alternative to QED.
It is just an extension of QED, one step beyond the existing theory.
This step is required to establish the similarity between QED
and ordinary QM and to make the formula exp(iHt) work again.

I don't know how one can make this step in the field-based formalism.
However, the dressing becomes simple and transparent when particle
representation is used. This is why I believe that the particle picture
is more fundamental.

> If the issue is
> renormalization, examples where explicit time evolution has been
> constructed in a renormalized theory. Application of the same techniques
> to QED is left as an exercise.

As far as I know, renormalization in QED has been applied only
to obtain a finite S-matrix. One further step toward renormalized
time evolution was made by Glazek and Wilson. They obtained a
finite "renormalized Hamiltonian". However, their Hamiltonian still
contains tri-linear terms, i.e., it is still expressed in terms of
bare (rather than physical) particles. I think that's the reason I
haven't seen any discussion of time evolution in their papers.
They seem to be focused on the S-matrix or energies of bound states.
Did I miss anything?
If you know of examples in which renormalized time evolution is
derived from the full QED Hamiltonian without ad hoc assumptions,
I would like to see them.

I think that Glazek-Wilson "renormalized Hamiltonian" can be made
suitable for time evolution calculations if additional "similarity
transformation" to dressed particles is applied. But then their
theory would become exactly the same as RQD.

>>True, both theories rely on the perturbation expansion. So, you may say
>>that both of them are "never finished". I accept this criticism.
>>Can you offer anything better?
>
>
> I have no need to do better. This argument is sufficient to discredit
> any claims of superiority that your theory might have. It's as finite as
> QED is after renormalization. In either, at any given perturbative
> order, all calculations can be made explicitly finite.

The Hamiltonian H and the time evolution operator exp(iHt) are not
explicitly finite in QED. You are right, they "can be made explicitly
finite", but the "dressing transformation" is the only way (known to
me) to achieve that. Do you know any other way?


>>http://groups.google.ca/group/sci.physics.research/browse_frm/thread/51b0fb0978abf419
>>
>>I don't see in this thread any example of the failure of the particle
>>approach.
>
>
> You should give that thread a better read. Several examples where the
> particle picture fails were given. These include curved space-times,
> extended field configurations, and symmetry breaking.

That's the reason I never talk about gravity, GR, electro-weak theory,
or QCD. So, I avoid discussions of curved space-times, extended field
configurations, and symmetry breaking. All I said is applicable
only to electromagnetic interactions. The particle picture works
perfectly well there. It may happen that this picture will become
insufficient for the more complex phenomena you are talking about,
and the field approach will be proven indispensable.
I don't know about that.

>> "World is made of particles interacting
>>with each other. Everything you attribute to fields or waves (like
>>diffraction and interference) are just manifestations of quantum
>>properties of the particles. Fields (either classical or quantum) are
>>not needed. They are redundant."

> All of this renders your postulate untenable, barring a miracle where you
> find a particle alternative to every single use of a field in a physical
> theory (and I mean a mathematically formulated, predictive alternative).

First, as I said above, I consider only electrodynamics.
My time and brainpower are rather limited. The particle picture is doing
well there:

1. the S-matrix of renormalized QED is exactly reproduced.
2. the energies and wavefunctions of bound states can be rigorously
obtained by the usual diagonalization of the Hamiltonian.
3. The time evolution can be calculated by the formula exp(iHt).
4. Known interaction potentials (e.g., the Darwin potential)
between charged particles are obtained in the classical limit.
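Points 2 and 3 can be illustrated in miniature: once a finite Hermitian Hamiltonian is in hand, diagonalization yields both the bound-state energies and the time evolution. A minimal sketch with an arbitrary toy 2x2 matrix (the numbers are illustrative, not any actual RQD Hamiltonian; hbar = 1 and the common sign convention exp(-iHt) are assumed):

```python
import numpy as np

# Toy two-level Hamiltonian (illustrative numbers only, hbar = 1).
H = np.array([[1.0, 0.3],
              [0.3, 2.0]])

# Diagonalize: for real symmetric H, H = V @ diag(E) @ V.T.
E, V = np.linalg.eigh(H)   # E are the "bound state" energies

def evolve(psi0, t):
    """Apply exp(-iHt) to psi0 via the spectral decomposition."""
    return V @ (np.exp(-1j * E * t) * (V.conj().T @ psi0))

psi0 = np.array([1.0, 0.0])      # start in the first basis state
psi = evolve(psi0, t=5.0)

# Unitarity check: the norm of the state is preserved under exp(-iHt).
print(abs(np.vdot(psi, psi) - 1.0) < 1e-12)
```

The same recipe works for any finite Hermitian matrix; the debate in this thread is precisely about whether QED's H can be brought to such an explicitly finite form.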

The missing part (as we discussed in another thread) is construction
of an effective theory (similar to Maxwell's electrodynamics) in
which few massive charged particles are described in the classical
limit, and (a large number of) quantum photons are modeled in an
approximate way by something resembling Maxwell's fields
E and B. This has not been done yet.

>>However, let me remind you that classical electrodynamics has very
>>poor record when it comes to radiation reaction and related things.
>
>
> Only for point charged particles, which are known not to exist.

Isn't the electron a point particle?

>>Could you explain please how "effective actions" can produce finite
>>mass and charge of the electron by means of the classical limit of a
>>theory (QED) in which m and e are infinite. Do you have any
>>references, by any chance? Or we should wait until some "expert in the
>>field" intervenes?
>
>
> Bare cut-off theory -- loop calculations + renormalization -->
>                                 Effective theory with modified lagrangian
>
>        |                                       |
>   restore cut-off                        take classical limit
>        |                                       |
>        V                                       V
>   Infinite bare coupling constants       Finite classical theory
>   because of renormalization             with cutoff
>        |                                       |
>   classical limit                        restore cutoff
>        |                                       |
>        V                                       V
>   Unphysical classical theory            Physically reasonable
>                                          classical theory
>
> Weinberg, QFT, vol.2
>
> Burgess, GR as an Effective Field Theory
> http://relativity.livingreviews.org/Articles/lrr-2004-5/index.html
>
> Polchinski, Effective Field Theory and the Fermi Surface
> http://arxiv.org/abs/hep-th/9210046

Thanks for the references. I found Polchinski's paper
quite educational.

"Effective field theory" is an approximation to the full
"Bare cut-off theory". This approximation is usually valid
only for low energies.
For example, the "Bare cut-off theory" could be the theory of
atoms in the crystal lattice, and the "Effective field theory"
could be the theory of elasticity. In the theory of elasticity
we are not allowed to consider waves whose period is comparable
or smaller than the interatomic distance in the lattice. We are
not allowed to consider excitation energies comparable with
the energy required to extract an atom from its lattice site.
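The elasticity analogy can be made quantitative. A 1-D monatomic chain has the exact dispersion w(k) = 2*sqrt(K/m)*|sin(ka/2)|, while the effective elastic theory keeps only w = c|k| with c = a*sqrt(K/m); the two agree for long waves and disagree near the lattice cutoff k ~ pi/a. A rough numerical check (units and parameter values are purely illustrative):

```python
import numpy as np

# 1-D monatomic chain: spring constant K, atomic mass m, lattice spacing a.
K, m, a = 1.0, 1.0, 1.0            # illustrative units
c = a * np.sqrt(K / m)             # sound speed of the effective elastic theory

def omega_lattice(k):
    """Exact dispersion of the discrete chain."""
    return 2.0 * np.sqrt(K / m) * np.abs(np.sin(k * a / 2.0))

def omega_elastic(k):
    """Effective (continuum) dispersion, valid only for k*a << 1."""
    return c * np.abs(k)

k_small = 0.01 * np.pi / a         # long wavelength
k_large = 0.9 * np.pi / a          # near the zone boundary (the "cutoff")

err_small = abs(omega_lattice(k_small) - omega_elastic(k_small)) / omega_lattice(k_small)
err_large = abs(omega_lattice(k_large) - omega_elastic(k_large)) / omega_lattice(k_large)

print(err_small < 1e-3)            # effective theory accurate for long waves
print(err_large > 0.1)             # and badly wrong near the cutoff
```

This is exactly the sense in which an effective theory "hides" the high-energy degrees of freedom of the underlying cut-off theory.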
So, in your step

> Bare cut-off theory -- loop calculations + renormalization -->
>                                 Effective theory with modified lagrangian

you are neglecting some
high-energy degrees of freedom. So, your path from
the "Bare cut-off theory" to the "Physically reasonable classical
theory" involves not just the classical limit hbar -> 0, but also
other approximations. By hiding high-energy degrees of freedom,
you can allow yourself to forget about renormalization difficulties
persistent in the "Bare cut-off theory".

This is not what I was talking about. When I spoke about classical limit
I had in mind just pure classical limit without dropping any degree
of freedom or limiting the energy range. "Effective field theory" cannot
do that. RQD suggests a solution:

Bare cut-off theory -->
-- dressing + renormalization --> Dressed particle theory -->
-- hbar -> 0 --> Physically reasonable classical theory

Eugene.

Eugene Stefanovich
Sep 2, 2005, 11:58:15 AM

Igor Khavkine wrote:

>>>Can you explain why
>>>particles of the same species are identical (in the QM sense)?
>>
>>I always thought this is a postulate in QM.
>
>
> That's one ad-hoc rule right there.
>
>
>>We assume that two electrons are identical. Then QM says that
>>their wavefunction should be either symmetric or antisymmetric
>>wrt permutations. Then QFT says that they actually must be
>>antisymmetric.
>
>
> Quantization of a regular field theory yields a bosonic quantum field
> theory, whose excitations translate to identical particles with
> symmetrized wave functions. Quantization of a Grassmann field theory
> yields a fermionic quantum field theory, whose excitations translate to
> identical particles with antisymmetrized wave functions.
> Field approach: 1, particle approach: 0.

You may be right here. The identity of electrons looks
more natural in the field approach.
But the QM postulate that two electrons are
identical doesn't seem so unreasonable to me. Why in the world
should they be different?
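For concreteness, the symmetrization postulate under discussion can be written out for two identical particles in single-particle states phi and chi: psi_±(x1, x2) = [phi(x1)chi(x2) ± chi(x1)phi(x2)] / sqrt(2). A sketch on a discretized grid verifying the swap behavior (the particular states are arbitrary choices):

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 101)
phi = np.exp(-x**2 / 2)                 # two arbitrary single-particle states
chi = x * np.exp(-x**2 / 2)

# Two-particle wavefunctions on the grid: psi[i, j] = psi(x_i, x_j).
product  = np.outer(phi, chi)
psi_sym  = (product + product.T) / np.sqrt(2)   # bosonic (symmetric)
psi_anti = (product - product.T) / np.sqrt(2)   # fermionic (antisymmetric)

# Swapping the two particles (transposing) gives +psi or -psi respectively.
print(np.allclose(psi_sym.T, psi_sym))
print(np.allclose(psi_anti.T, -psi_anti))

# Pauli exclusion: the antisymmetric state vanishes on the diagonal x1 = x2.
print(np.allclose(np.diag(psi_anti), 0.0))
```

Both signs are internally consistent in a pure particle theory; as the thread notes, selecting between them (and forbidding mixed symmetry) is where the postulate, or the field quantization, comes in.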


>>No, I don't have a proof of the spin-statistics theorem in my approach.
>>However, I do have something which is not available in QED:
>>Divergence-free Hamiltonian description of particle dynamics.
>
>
> False attack on QED. Its Hamiltonian is there to calculate for anyone
> who cares/dares.

This "anyone" would have a hard time calculating the time evolution of
real particles using a Hamiltonian written in terms of bare particles
and with infinite coefficients. Good luck!


> If beliefs come into play, you lose to several Nobel prize winners (Lee
> & Yang, Gross & David Politzer & Wilczek, Feynman, Weinberg, and
> probably others). Do you have anything more concrete to offer?

If opinion polls come into play, then I am losing 0:2 already.
The game continues.

Eugene.


Igor Khavkine
Sep 7, 2005, 2:26:44 AM
On 2005-09-02, Eugene Stefanovich <eug...@synopsys.com> wrote:
> Igor Khavkine wrote:

>> Quantization of a regular field theory yields a bosonic quantum field
>> theory, whose excitations translate to identical particles with
>> symmetrized wave functions. Quantization of a Grassmann field theory
>> yields a fermionic quantum field theory, whose excitations translate to
>> identical particles with antisymmetrized wave functions.
>> Field approach: 1, particle approach: 0.
>
> You may be right here. The identity of electrons looks
> more natural in the field approach.
> But the QM postulate that two electrons are
> identical doesn't seem so unreasonable to me. Why in the world
> should they be different?

But why in the world should they be the same? The fact of the matter is
that a particle theory can handle both possibilities. The identical particle
principle is empirically selected by fitting experiments (cf. Gibbs
paradox). The field formulation is more predictive, since it is only
consistent with a particle formulation supplemented by the identical
particle principle. If you dislike ad-hoc rules, then field theory is
coming out a clear winner.

>> False attack on QED. Its Hamiltonian is there to calculate for anyone
>> who cares/dares.
>
> This "anyone" would have a hard time calculating the time evolution of
> real particles using a Hamiltonian written in terms of bare particles
> and with infinite coefficients. Good luck!

Luck has nothing to do with this, and neither do infinite bare
coefficients. Let me supplement my earlier statement. "QED's Hamiltonian
is there to calculate for anyone competent who cares/dares."

Igor

Igor Khavkine
Sep 7, 2005, 2:26:54 AM
On 2005-09-01, Eugene Stefanovich <eug...@synopsys.com> wrote:
> Igor Khavkine wrote:
>
>>>RQD is not an
>>>effective field theory approximation to QED.
>>
>> I don't know what you mean.
>
> 1. RQD is formulated in terms of particles, not fields
> 2. RQD is not an approximation. It is just as exact as QED
> (within perturbative approach, of course)

Hmm. Let me try again.

> Igor Khavkine wrote:
>
>> Eugene Stefanovich wrote:
>>> Congratulations on knocking down another strawman!
>>
>> I don't know what you mean.
>>>RQD can also calculate the time evolution. This is beyond capacity of QED.
>>
>>
>> False. QED is a QFT. QFT is QM applied to field theory. If QM can describe
>> time evolution, then so can QFT, and then so can QED.
>
> In QM we normally have a finite Hamiltonian expressed in terms of
> observables (positions and momenta) of real particles,

No, in terms of canonical degrees of freedom, which happen to be
positions and momenta of particles.

> so the time
> evolution is evaluated simply as exp(iHt). In local QFT or QED, the
> Hamiltonian is expressed in terms of fictitious bare particles, and

No, it's expressed in terms of canonical degrees of freedom, which
happen to be the amplitudes of field modes and associated momenta.

> H is formally infinite.

H is never infinite.

> The formula exp(iHt) doesn't work.

Yes it does.

> I am not saying that this is a dead end. It is not. Time evolution
> CAN be described in QFT (or QED). But in order to do that, you need
> to perform a unitary transformation to dressed particles.

A change of basis is a change of basis is a change of basis.

> That's the only rigorous approach known to me.

Your level of rigour is no higher than standard QED. I've previously
discussed how time evolution can be obtained in standard QED. I've also
shown all the steps (save perhaps one) that are necessary in such a
calculation. Perhaps you still don't know how to use this approach. But
you can't say you haven't seen it.

>> If the issue is
>> renormalization, examples where explicit time evolution has been
>> constructed in a renormalized theory. Application of the same techniques
>> to QED is left as an exercise.
>
> As far as I know, renormalization in QED has been applied only
> to obtain a finite S-matrix.

That's why the above is still left as an exercise. Nobody said it was an
easy one.

>>>True, both theories rely on the perturbation expansion. So, you may say
>>>that both of them are "never finished". I accept this criticism.
>>>Can you offer anything better?
>>
>> I have no need to do better. This argument is sufficient to discredit
>> any claims of superiority that your theory might have. It's as finite as
>> QED is after renormalization. In either, at any given perturbative
>> order, all calculations can be made explicitly finite.
>
> Hamiltonian H and the time evolution operator exp(iHt) are not
> explicitly finite in QED. You are right, they "can be made explicitly
> finite", but the "dressing transformation" is the only way (known to me)
> to achieve that. Do you know any other way?
>
>
>>>http://groups.google.ca/group/sci.physics.research/browse_frm/thread/51b0fb0978abf419
>>>
>>>I don't see in this thread any example of the failure of the particle
>>>approach.

>> Several examples where the particle picture fails were given. These
>> include curved space-times, extended field configurations, and
>> symmetry breaking.
>
> That's the reason I never talk about gravity, GR, electro-weak theory,
> or QCD. So, I avoid discussions of curved space-times, extended field
> configurations, and symmetry breaking.

And that's why you are not seeing the big picture.

> All I said is applicable
> only to electromagnetic interactions. Particle picture works perfectly
> well there. It may happen that this picture will become
> insufficient for more complex phenomena you are talking about,
> and the field approach will be proven indispensable.
> I don't know about that.

Funny, this goes against your wish to express *everything* in terms of
particles if you are ready to abandon it for phenomena outside the
scope of QED. What makes QED so special?

>>>However, let me remind you that classical electrodynamics has very
>>>poor record when it comes to radiation reaction and related things.
>>
>> Only for point charged particles, which are known not to exist.
>
> Isn't the electron a point particle?

An individual electron is described by a Dirac wavefunction. According
to quantum mechanics it cannot be perfectly localized.

Yes it can if the underlying theory is renormalizable (such as QED).
That's precisely the situation where the cutoff can be completely
removed.

> RQD suggests a solution: [...]

A solution implies there was a problem. Except that there wasn't.

Igor

p.ki...@imperial.ac.uk
Sep 7, 2005, 2:28:05 AM
Eugene Stefanovich <eug...@synopsys.com> wrote:
> > The mode function is a purely classical wave-like object over
> > space and time. The wavefunction is that of the QSHO, and
> > extends over the "excitation" of the oscillator of the
> > relevant mode; all space and time dependence is given by the
> > mode function.

> What role the mode functions are playing in the diffraction and
> interference of light? Can I understand your words in the way that
> there are two distinct contributions to the interference: one coming
> from classical "mode functions", the other coming from quantum
> wave functions?

Yes -- in any given case, the mode part of the overlap between states
can exhibit interference, as can the quantum part.

Diffraction could be built into the mode functions, unless you
want to turn the description into a propagation problem.

Eugene Stefanovich
Sep 7, 2005, 2:31:03 AM

Eugene Stefanovich wrote:

>>> We assume that two electrons are identical. Then QM says that
>>> their wavefunction should be either symmetric or antisymmetric
>>> wrt permutations. Then QFT says that they actually must be
>>> antisymmetric.
>>
>>
>>
>> Quantization of a regular field theory yields a bosonic quantum field
>> theory, whose excitations translate to identical particles with
>> symmetrized wave functions. Quantization of a Grassmann field theory
>> yields a fermionic quantum field theory, whose excitations translate to
>> identical particles with antisymmetrized wave functions.
>> Field approach: 1, particle approach: 0.
>
>
> You may be right here. The identity of electrons looks
> more natural in the field approach.
> But the QM postulate that two electrons are
> identical doesn't seem so unreasonable to me. Why in the world
> should they be different?

On second thought, I wouldn't grant the field theory a point
for the prediction that all electrons are identical. Logically,
QFT is built on the basis of QM. You must assume the identity of
electrons already in QM, e.g., even before you do field quantization.
QFT does not predict the identity of different electrons.
This idea was introduced there at an earlier stage.

Things could be different if QFT could be formulated
independently of QM, and QM derived as a consequence of QFT,
but that's not the way QFT is normally presented.

Eugene.


Eugene Stefanovich
Sep 7, 2005, 3:11:17 PM

Igor Khavkine wrote:
> On 2005-09-01, Eugene Stefanovich <eug...@synopsys.com> wrote:
>
>>Igor Khavkine wrote:
>>
>>
>>>>RQD is not an
>>>>effective field theory approximation to QED.
>>>
>>>I don't know what you mean.
>>
>>1. RQD is formulated in terms of particles, not fields
>>2. RQD is not an approximation. It is just as exact as QED
>> (within perturbative approach, of course)
>
>
> Hmm. Let me try again.
>
>
>>Igor Khavkine wrote:


>> Time evolution
>>CAN be described in QFT (or QED). But in order to do that, you need
>>to perform a unitary transformation to dressed particles.
>
>
> A change of basis is a change of basis is a change of basis.

Take ordinary QM. Take two Hamiltonians: H and H' = exp(iF)Hexp(-iF),
where F is an Hermitian operator not commuting with H.
For most F, Hamiltonians H and H' have the same energy spectrum of
bound states and the same S-matrix. However, it is not correct
to say that H and H' describe exactly the same physics. For example,
the shape of the stationary wave functions is different with H and
H'. The time evolution is different with H and H'.
The physics is different.

The change of basis is exactly trivial if you change both state
vectors and operators together. If you change only operators
(as I do in RQD) and leave the state vectors intact, the physics
is different.
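This point is easy to check in a finite-dimensional toy model: H and H' = exp(iF) H exp(-iF) share a spectrum, yet evolving the same fixed state vector with each gives different results. A sketch (the matrices are illustrative, hbar = 1, sign convention exp(-iHt)):

```python
import numpy as np

def expm_herm(A, factor):
    """exp(factor * A) for Hermitian A, via the spectral theorem."""
    w, U = np.linalg.eigh(A)
    return U @ np.diag(np.exp(factor * w)) @ U.conj().T

# Toy Hamiltonian H and a Hermitian generator F not commuting with H.
H = np.diag([0.0, 1.0, 3.0])
F = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])

U  = expm_herm(F, 1j)                 # exp(iF), unitary
Hp = U @ H @ U.conj().T               # H' = exp(iF) H exp(-iF)

# Same energy spectrum ...
print(np.allclose(np.linalg.eigvalsh(H), np.linalg.eigvalsh(Hp)))

# ... but different time evolution of the SAME state vector,
# because only the operator was transformed, not the states.
psi0 = np.array([1.0, 0.0, 0.0], dtype=complex)
t = 2.0
psi  = expm_herm(H,  -1j * t) @ psi0
psip = expm_herm(Hp, -1j * t) @ psi0
print(np.allclose(psi, psip))
```

Transforming states and operators together would of course undo the difference; the disagreement in the thread is over which quantities (spectrum and S-matrix versus full time evolution) are physically meaningful.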


>>All I said is applicable
>>only to electromagnetic interactions. Particle picture works perfectly
>>well there. It may happen that this picture will become
>>insufficient for more complex phenomena you are talking about,
>>and the field approach will be proven indispensable.
>>I don't know about that.
>
>
> Funny, this goes against your wish to express *everything* in terms of
> particles if you are ready to abandon it for phenomena outside the
> scope of QED. What makes QED so special?

I am saying that the particle picture can be definitely applied to QED,
because I spent some time and effort in this field, and I have proofs.
This is just a beginning. I haven't spent
as much time and effort on thinking about strong forces or gravity.
So, I cannot say for sure if the idea of particles as fundamental
entities works there as well. I hope it does, but I don't have a proof.
So, I prefer not to talk about non-electromagnetic interactions yet.


>>>Only for point charged particles, which are known not to exist.
>>
>>Isn't the electron a point particle?
>
>
> An individual electron is described by a Dirac wavefunction. According
> to quantum mechanics it cannot be perfectly localized.

Where does QM say that the electron cannot be localized?
There is a position operator R (three commuting components of the
Newton-Wigner operator) describing
the electron. There are eigenvectors of R. The electron is perfectly
localized when its state is described by such an eigenvector.
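In a discretized one-particle model this is immediate: a position operator is diagonal in the position basis, and its eigenvectors are perfectly localized delta states. A sketch of that elementary point (ordinary lattice QM; this illustrates only the eigenvector argument, not the Newton-Wigner construction itself):

```python
import numpy as np

# Position operator on a 5-point grid: diagonal in the position basis.
grid = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
R = np.diag(grid)

vals, vecs = np.linalg.eigh(R)

# Each eigenvector is a delta state: all probability at one grid point.
probs = np.abs(vecs) ** 2
print(np.allclose(probs.max(axis=0), 1.0))   # perfectly localized
print(np.allclose(vals, grid))               # eigenvalues are the positions
```

Whether such localized states survive in the relativistic setting (and whether Newton-Wigner localization is frame-independent) is precisely the subtlety being argued over.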

Eugene.


Eugene Stefanovich
Sep 7, 2005, 7:30:50 PM
p.ki...@imperial.ac.uk wrote:
> Eugene Stefanovich <eug...@synopsys.com> wrote:
>
>>>The mode function is a purely classical wave-like object over
>>>space and time. The wavefunction is that of the QSHO, and
>>>extends over the "excitation" of the oscillator of the
>>>relevant mode; all space and time dependence is given by the
>>>mode function.
>>
>
>>What role the mode functions are playing in the diffraction and
>>interference of light? Can I understand your words in the way that
>>there are two distinct contributions to the interference: one coming
>>from cassical "mode functions", the other coming from quantum
>>wave functions?
>
>
> Yes -- in any given case, the mode part of the overlap between state
> can exhibit interference, as can the quantum part.
>
> Diffraction could be built into the mode functions, unless you
> want to turn the description into a propagation problem.
>

Can these two contributions (from quantum wave functions and from
classical mode functions) be distinguished experimentally?
If not, then I prefer having a single explanation (=quantum)
for a single physical effect.

Eugene.
