Well if you want to be too clever for your own good:
Quantum field theory on flat spacetime can be formulated in terms
of Feynman diagrams. Statistical field theory is just the same
thing after the substitution t -> it, so it too can be formulated
in terms of Feynman diagrams.
Classical field theory is the hbar -> 0 limit of quantum field
theory, so you can use the Feynman diagram formulation of quantum
field theory to formulate classical field theory in terms of Feynman
diagrams. This is sometimes called the "tree approximation", since
letting hbar -> 0 more or less corresponds to considering only
Feynman diagrams without loops. (I'm oversimplifying here - we
had an excellent discussion of this earlier on sci.physics.research!)
Quantum mechanics is isomorphic to quantum field theory in 0+1
dimensions, so you can use the Feynman diagram formulation of
quantum field theory to formulate quantum mechanics in terms
of Feynman diagrams.
Finally, classical mechanics is the hbar -> 0 limit of quantum mechanics,
so you can use the Feynman diagram formulation of quantum mechanics
to formulate classical mechanics in terms of Feynman diagrams.
>I still don't understand terribly well how Feynman diagrams arise;
>what do you need other than a variational principle to get them?
Let's get practical! Except in certain highbrow contexts like
topological quantum field theory, Feynman diagrams are just
tricks for computing the integral of a polynomial times a
Gaussian. Like this:
integral P(x) exp(-|x|^2) dx
where x lies in R^n and P(x) is a polynomial function on R^n.
It's easy to do integrals of this form using integration by parts.
But you get a lot of terms. Feynman diagrams are a nice way of
drawing pictures that represent these terms. They help you keep
track of what you're doing.
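For instance, here's the sort of thing I mean, as a quick sketch in
Python for the n = 1 case (the polynomial is just something I made up;
sympy does the calculus):

    import sympy as sp

    x = sp.symbols('x')
    P = 1 + 2*x**2 + 5*x**3 + 3*x**4          # any old polynomial on R^1
    print(sp.simplify(sp.integrate(P * sp.exp(-x**2), (x, -sp.oo, sp.oo))))
    # -> 17*sqrt(pi)/4: the odd term integrates to zero, and each even
    #    term contributes a multiple of sqrt(pi)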
It's all PITIFULLY simple, and the n = 1 case could easily be
taught right after freshman calculus. If I were in charge of the
world, all physics students would learn how to do Feynman diagram
calculations as college freshmen, while their brains are still fully
functioning. But don't count on finding a nice simple introduction to
this stuff in quantum field theory textbooks! Textbooks usually jump
right into complicated examples where n = infinity. In other words:
instead of R^n, they integrate over an infinite-dimensional vector
space, like the space of all solutions of Maxwell's equations. This
complicates things without affecting the basic idea.
It's much easier to learn this stuff in an example where n is
finite. That's why I was trying to get you to learn about matrix
models. Remember what I said? We let X be the space of N x N
complex matrices. We can think of this as R^n for some n, so the
above remarks apply. Then we take some "action" S: X -> R like
S(x) = tr(xx* + cx^3).
Note: this is a polynomial function on X. Then we compute the partition
function
Z = integral_X exp(-S(x)) dx.
We do it perturbatively as follows:
Z = integral_X exp(-tr(xx* + cx^3)) dx
= integral_X exp(-tr(cx^3)) exp(-tr(xx*)) dx
= integral_X (1 - c tr(x^3) + (c tr(x^3))^2/2! + ... ) exp(-tr(xx*)) dx
Each term in this series is the integral of a polynomial times
the Gaussian exp(-tr(xx*)). This is exactly the sort of integral that
we can do using Feynman diagrams!!! See what I mean???
Of course I haven't actually said how you do such integrals using
Feynman diagrams, but I'm serious when I say it's pitifully simple.
When you do it, the nth term in the series above gives us a bunch of
terms which we keep track of by drawing trivalent graphs with n vertices.
Trivalent because tr(x^3) is a *cubic* polynomial on X. n vertices
because we are integrating the nth power of tr(x^3) times a Gaussian.
I would explain how you do the integral of a polynomial times a
Gaussian using Feynman diagrams, but quantum field theorists worldwide
would hate me - they want people to think they're doing something
really difficult and complicated!
A sum over paths? A scattering amplitude is a sum over all paths from A to B.
This space of paths decomposes into a union of connected pieces, so the
sum can be rewritten as a sum over pieces, with an integral for each piece.
Each piece in the sum corresponds to a Feynman diagram, which represents the
shared topology of all the paths in that piece. Because classical
mechanics is about single paths I'd guess it's unlikely that there is a path
integral formulation. -- Torque http://travel.to/tanelorn
> In article <m10ObKr...@crib.corepower.com>,
> Nathan Urban <nur...@vt.edu> wrote:
> >Can theories other than QFT and stat mech be formulated purely in terms
> >of Feynman diagrams? (Like, say, classical mechanics or something.)
> Well if you want to be too clever for your own good:
> [...]
> Classical field theory is the hbar -> 0 limit of quantum field
> theory, [... tree approximation] (I'm oversimplifying here - we
> had an excellent discussion of this earlier on sci.physics.research!)
Which thread was that?
> Quantum mechanics is isomorphic to quantum field theory in 0+1
> dimensions,
Really? How's that?
> so you can use the Feynman diagram formulation of
> quantum field theory to formulate quantum mechanics in terms
> of Feynman diagrams.
> Finally, classical mechanics is the hbar -> 0 limit of quantum mechanics,
> so you can use the Feynman diagram formulation of quantum mechanics
> to formulate classical mechanics in terms of Feynman diagrams.
I think that was too clever for my own good. :) I mean, I understand the
principle and all (just reduce the QFT formulation to a classical limit),
but that doesn't really help me see how you can do some dumb textbook
classical mechanics problems with Feynman diagrams, particularly since
I'm not too expert on how they're used in QFT to begin with.
> Let's get practical! Except in certain highbrow contexts like
> topological quantum field theory, Feynman diagrams are just
> tricks for computing the integral of a polynomial times a
> Gaussian. [...]
> It's all PITIFULLY simple, and the n = 1 case could easily be
> taught right after freshman calculus. If I were in charge of the
> world, all physics students would learn how to do Feynman diagram
> calculations as college freshmen, while their brains are still fully
> functioning.
What would you teach physics students about with it? QFT, a la Feynman's
_QED_? Or QM or classical mechanics or something else? Or "learn this
math stuff, it'll be good for you later"? :)
> It's much easier to learn this stuff in an example where n is
> finite. That's why I was trying to get you to learn about matrix
> models.
You were? I didn't see what they were used for other than M-theory and
quantum gravity, so after I digested it I just filed all that away to
revisit later. I didn't know what questions to ask.
So what kinds of theories with finite degrees of freedom in their
Lagrangian can be rewritten in terms of matrix models?
> Remember what I said? We let X be the space of N x N
> complex matrices. We can think of this as R^n for some n, so the
> above remarks apply. Then we take some "action" S: X -> R like
> S(x) = tr(xx* + cx^3).
Is that used in any real physics or is it just a toy to give you nice
simple trivalent diagrams?
> Note: this is a polynomial function on X. Then we compute the partition
> function [...] We do it perturbatively as follows: [...]
> Each term in this series is the integral of a polynomial times
> the Gaussian exp(-tr(xx*)). This is exactly the sort of integral that
> we can do using Feynman diagrams!!! See what I mean???
Yup -- if I take your word that the sort of integral that we can do
using Feynman diagrams is one that has a polynomial times an exponential.
> I would explain how you do the integral of a polynomial times a
> Gaussian using Feynman diagrams, but quantum field theorists worldwide
> would hate me - they want people to think they're doing something
> really difficult and complicated!
Well, _I_ don't mind if they hate you. :)
>In article <m10ObKr...@crib.corepower.com>,
>Nathan Urban <nur...@vt.edu> wrote:
>>Can theories other than QFT and stat mech be formulated purely in terms
>>of Feynman diagrams? (Like, say, classical mechanics or something.)
>
>Well if you want to be too clever for your own good:
>
>Quantum field theory on flat spacetime can be formulated in terms
>of Feynman diagrams.
Do you mean this seriously? Or do you just mean computed in terms of
Feynman diagrams.
[...]
>I would explain how you do the integral of a polynomial times a
>Gaussian using Feynman diagrams, but quantum field theorists worldwide
>would hate me - they want people to think they're doing something
>really difficult and complicated!
We did it in my senior year mathematical methods class, actually, in the
context of the saddle point approximation. Anyways, I thought the story
was that Schwinger hated that Feynman invented them because they
essentially opened QFT to the masses. Schwinger wanted the calculations to
remain difficult and painful. Of course, like many a physics story, this
passed through many people before reaching my ears.
Aaron
--
Aaron Bergman
<http://www.princeton.edu/~abergman/>
> Can theories other than QFT and stat mech be formulated purely in terms
> of Feynman diagrams? (Like, say, classical mechanics or something..)
> I still don't understand terribly well how Feynman diagrams arise;
> what do you need other than a variational principle to get them?
I've always considered Feynman diagrams to be the ultimate application of
one of the primary rules in physics: When in doubt, expand in a power
series. And that's exactly what a Feynman diagram allows you to do. Each
vertex of a Feynman diagram contributes a factor to the scattering
amplitude that is on the order of the coupling constant (strength of the
interaction). Which is why only terms with the same number of vertices are
combined. It's something like taking two polynomials and multiplying them
together and then collecting all of the terms in x^0, x, x^2, x^3, etc.
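A toy sketch of that term-collecting, in Python (the series coefficients
here are made up):

    def multiply_series(a, b, order):
        """a[n] and b[n] are coefficients of g^n; collect the product up to g^order."""
        out = [0.0] * (order + 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                if i + j <= order:
                    out[i + j] += ai * bj
        return out

    # (1 + g + g^2/2) * (1 - g), collected order by order in the coupling g:
    print(multiply_series([1, 1, 0.5], [1, -1], 2))   # [1.0, 0.0, -0.5]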
When the coupling constant is low, such as with the electromagnetic or weak
interactions, you can often get by with only the second order process (two
vertices). With the strong interaction, the coupling constant is large
(~1), and you need many diagrams with many more vertices to make the
expansion in the coupling constant converge.
Richard
>>Quantum field theory on flat spacetime can be formulated in terms
>>of Feynman diagrams.
>Do you mean this seriously? Or do you just mean computed in terms of
>Feynman diagrams.
Well, one could argue that "computed" is a better word. However,
due to the sad state of our understanding of interacting quantum
field theories in 3+1 dimensions, it's not as if there is some quantity
that we already know *exists* which we are merely using Feynman diagrams
to *compute*. Instead, we start with an ill-defined path integral, do a
bunch of nonrigorous manipulations to it, and get a sum over Feynman
diagrams. Then we renormalize these. Only at the *end* of this game
do we have something that we know makes mathematical sense!
So a hardcore mathematician obsessed with rigor might say that the theory
is "formulated" in terms of Feynman diagrams. What the physicists call
"computation" might instead be regarded as a kind of abracadabra - a
magical incantation one utters before formulating the theory in precise
terms.
The same sort of thing happens in string theory - see
http://math.ucr.edu/home/baez/week127.html
for the magical incantation one uses to get from the ill-defined path
integral to something that makes rigorous sense.
Now don't get me wrong - I am not arguing that the hardcore mathematician
obsessed with rigor is "right" and the physicist is "wrong". When I was
a youth I was very upset at the lack of rigor in quantum field theory but
now I am older and wiser. What is nonrigorous now may eventually become
rigorous, and physicists shouldn't wait until they have a completely
rigorous formalism before they start testing their theory against experiment.
Even nonrigorous "computations" can be extremely useful.
>>I would explain how you do the integral of a polynomial times a
>>Gaussian using Feynman diagrams, but quantum field theorists worldwide
>>would hate me - they want people to think they're doing something
>>really difficult and complicated!
>We did it in my senior year mathematical methods class, actually, in the
>context of the saddle point approximation. Anyways, I thought the story
>was that Schwinger hated that Feynman invented them because they
>essentially opened QFT to the masses.
Yes, I've heard that too. Now you can just doodle a few diagrams
and do a few integrals and go around calling yourself a quantum field
theorist! Shocking!
>> Classical field theory is the hbar -> 0 limit of quantum field
>> theory, [... tree approximation] (I'm oversimplifying here - we
>> had an excellent discussion of this earlier on sci.physics.research!)
>Which thread was that?
You could have just used DejaNews and looked under "tree approximation",
but heck, I'll do it for you... grumble grumble....
.... Here it is: "Help me believe in Coulomb's Law". It was started
by Greg Weeks.
>> Quantum mechanics is isomorphic to quantum field theory in 0+1
>> dimensions,
>Really? How's that?
Well, what's *classical* field theory in 0+1 dimensions?
>> Let's get practical! Except in certain highbrow contexts like
>> topological quantum field theory, Feynman diagrams are just
>> tricks for computing the integral of a polynomial times a
>> Gaussian. [...]
>> It's all PITIFULLY simple, and the n = 1 case could easily be
>> taught right after freshman calculus. If I were in charge of the
>> world, all physics students would learn how to do Feynman diagram
>> calculations as college freshmen, while their brains are still fully
>> functioning.
>What would you teach physics students about with it? QFT, a la Feynman's
>_QED_? Or QM or classical mechanics or something else? Or "learn this
>math stuff, it'll be good for you later"? :)
I guess I'd either use it for quantum mechanics, like the quantum
anharmonic oscillator, or statistical mechanics, like the statistical
mechanics of a classical anharmonic oscillator. These two problems
are almost the same; they differ only by the usual t -> it trick.
I hope you know what I mean by "anharmonic oscillator": it's a particle
in the potential
V(x) = x^2 + cx^4
When c = 0 this is the usual harmonic oscillator and you compute
the partition function by doing an integral of a Gaussian - it's a
snap. When c is not equal to zero you expand the partition function
as a power series in c, get a bunch of integrals of polynomials times
Gaussians, and keep track of these using Feynman diagrams.
This is a great warmup for quantum field theory.
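Here's a rough numerical sketch of that procedure (my own toy value of c,
and just the heart of it, the integral of exp(-x^2 - cx^4)):

    import math
    from scipy.integrate import quad

    def moment(n):
        """integral of x^n exp(-x^2) dx: (n-1)!! sqrt(pi) / 2^(n/2) for even n."""
        return math.prod(range(1, n, 2)) * math.sqrt(math.pi) / 2 ** (n // 2)

    c = 0.05
    # expand exp(-c x^4) = sum_k (-c)^k x^(4k) / k! and integrate term by term:
    series = sum((-c)**k / math.factorial(k) * moment(4 * k) for k in range(4))
    exact, _ = quad(lambda x: math.exp(-x**2 - c * x**4), -math.inf, math.inf)
    print(series, exact)   # close for small c (the series is only asymptotic)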
>> It's much easier to learn this stuff in an example where n is
>> finite. That's why I was trying to get you to learn about matrix
>> models.
>You were? I didn't see what they were used for other than M-theory and
>quantum gravity, so after I digested it I just filed all that away to
>revisit later. I didn't know what questions to ask.
I explained to you how computing the partition function in a matrix
model gives you a bunch of integrals of polynomials times Gaussians.
And I said you could keep track of these using Feynman diagrams. The
only thing I didn't tell you was how you take an integral of a polynomial
times a Gaussian and draw a Feynman diagram picture of it. Once you know
that, you know quantum field theory! Well, sort of.
Matrix models are very much like the anharmonic oscillator example
above, but they're even simpler.
>So what kinds of theories with finite degrees of freedom in their
>Lagrangian can be rewritten in terms of matrix models?
Well, the one that immediately comes to my mind is the dynamical
triangulations approach to quantum gravity in 1+1 dimensions, but
there are a lot more. Perhaps the most popular example these days
is the matrix model that might have M theory as its large-N limit.
>> Remember what I said? We let X be the space of N x N
>> complex matrices. We can think of this as R^n for some n, so the
>> above remarks apply. Then we take some "action" S: X -> R like
>
>> S(x) = tr(xx* + cx^3).
>Is that used in any real physics or is it just a toy to give you nice
>simple trivalent diagrams?
"Real" physics? I don't do "real" physics, I do quantum gravity! As
someone once said after a talk I gave on knots and quantum gravity,
"That's *knot* physics."
Anyway, the above theory is one formulation of quantum gravity in 1+1
dimensions. Didn't I say that already? Whoops! I meant to. Maybe
that thread on matrix models fizzled out before I got around to it.
>> Note: this is a polynomial function on X. Then we compute the partition
>> function [...] We do it perturbatively as follows: [...]
>
>> Each term in this series is the integral of a polynomial times
>> the Gaussian exp(-tr(xx*)). This is exactly the sort of integral that
>> we can do using Feynman diagrams!!! See what I mean???
>Yup -- if I take your word that the sort of integral that we can do
>using Feynman diagrams is one that has a polynomial times an exponential.
Not any old exponential: a Gaussian. Gaussians have magic properties -
that's why harmonic oscillators work so nice, because their partition
functions are just integrals of Gaussians. And free quantum fields are
just harmonic oscillators with lots of degrees of freedom, so they give
us Gaussians in lots of variables. Then we treat interacting quantum
fields using that perturbative trick I described above, and get integrals
of polynomials times Gaussians.
Do you know the integral from -infinity to +infinity of
x^n exp(-x^2) ?
Once you know this, the rest of quantum field theory is just bookkeeping!
>> I would explain how you do the integral of a polynomial times a
>> Gaussian using Feynman diagrams, but quantum field theorists worldwide
>> would hate me - they want people to think they're doing something
>> really difficult and complicated!
>Well, _I_ don't mind if they hate you. :)
Okay, well, as long as they're not looking, I'll explain it to you.
But first you gotta tell me how *you* would calculate the above integral.
Until you do some of these calculations yourself you can't really appreciate
the virtues of a nice bookkeeping scheme like Feynman diagrams.
>In article <abergman-240...@abergman.student.princeton.edu>,
>Aaron Bergman <aber...@Princeton.EDU> wrote:
>>In article <7d97jo$ur6$1...@pravda.ucr.edu>, ba...@galaxy.ucr.edu
>>(john baez) wrote:
>
>>>Quantum field theory on flat spacetime can be formulated in terms
>>>of Feynman diagrams.
>
>>Do you mean this seriously? Or do you just mean computed in terms of
>>Feynman diagrams.
>
>Well, one could argue that "computed" is a better word. However,
>due to the sad state of our understanding of interacting quantum
>field theories in 3+1 dimensions, it's not as if there is some quantity
>that we already know *exists* which we are merely using Feynman diagrams
>to *compute*. Instead, we start with an ill-defined path integral, do a
>bunch of nonrigorous manipulations to it, and get a sum over Feynman
>diagrams. Then we renormalize these. Only at the *end* of this game
>do we have something that we know makes mathematical sense!
But there are a few nonperturbative results in QED. They have to fit in
somewhere, right?
> >> Let's get practical! Except in certain highbrow contexts like
> >> topological quantum field theory, Feynman diagrams are just
> >> tricks for computing the integral of a polynomial times a
> >> Gaussian. [...]
>
> >> It's all PITIFULLY simple, and the n = 1 case could easily be
> >> taught right after freshman calculus. If I were in charge of the
> >> world, all physics students would learn how to do Feynman diagram
> >> calculations as college freshmen, while their brains are still fully
> >> functioning.
...
> When c = 0 this is the usual harmonic oscillator and you compute
> the partition function by doing an integral of a Gaussian - it's a
> snap. When c is not equal to zero you expand the partition function
> as a power series in c, get a bunch of integrals of polynomials times
> Gaussians, and keep track of these using Feynman diagrams.
>
> This is a great warmup for quantum field theory.
>
> >> It's much easier to learn this stuff in an example where n is
> >> finite. That's why I was trying to get you to learn about matrix
> >> models.
>
> >You were? I didn't see what they were used for other than M-theory and
> >quantum gravity, so after I digested it I just filed all that away to
> >revisit later. I didn't know what questions to ask.
>
> I explained to you how computing the partition function in a matrix
> model gives you a bunch of integrals of polynomials times Gaussians.
> And I said you could keep track of these using Feynman diagrams. The
> only thing I didn't tell you was how you take an integral of a polynomial
> times a Gaussian and draw a Feynman diagram picture of it. Once you know
> that, you know quantum field theory! Well, sort of.
...
ok, i just took another shot at trying to understand this feynman
diagram stuff, and there's _got_ to be more (or if not more then at
least different) to it than what you're saying here, because
calculating gaussian integrals is far too trivial a task to justify
all by itself such an elaborate bookkeeping scheme as feynman
diagrams. thus, i already know how to calculate all of the gaussian
integrals in the world, in a very direct and explicit way, and i
certainly don't use any feynman diagrams in my calculations, nor would
there even be any room to introduce such diagrams into such a trivial
calculation, nor do i yet have really the slightest idea what a
"feynman diagram" is, even.
let v be the vector space of functions of one variable of the form a
polynomial function times the gaussian function "x |-> exp(-x^2/2)".
differentiation of functions is a linear operator d:v->v. integration
of functions over the real number line is a linear functional
l:v->reals. a straightforward calculus exercise using the fundamental
theorem of calculus shows that for any function f in v, l(d(f)) = 0.
this equation allows us to recursively evaluate l(a_n) where a_n is
the monomial function "x^n" times the gaussian function "exp(-x^2/2)",
obtaining a more or less "closed form" result:
l(a_0) = some constant k
l(a_1) = l(-d(a_0)) = 0
l(a_2) = l(-d(a_1)+a_0) = l(a_0) = k
l(a_3) = l(-d(a_2)+2*a_1) = 2*l(a_1) = 0
l(a_4) = l(-d(a_3)+3*a_2) = 3*l(a_2) = 3*k
l(a_5) = l(-d(a_4)+4*a_3) = 4*l(a_3) = 0
l(a_6) = l(-d(a_5)+5*a_4) = 5*l(a_4) = 5*3*k
...
thus:
l(a_[2*j]) = (2*j-1) * (2*j-3) * ... * 3 * k
l(a_[2*j+1]) = 0
the same calculation could be explained in many other ways as well
("integration by parts" for example). whatever, we now know how to
calculate l on v and we didn't need any "feynman diagrams", whatever
they are. note that essentially the same method of calculation works
for calculating the m-fold iterated integral of a polynomial function
of m variables times the "diagonalized" gaussian function
"exp(-[[x_1]^2 + ... + [x_m]^2]/2)", because the product of a
multi-variable monomial and a multi-variable gaussian factorizes as a
product of products of single-variable monomials and single variable
gaussians.
so evidently the true motivation for feynman diagrams must be
something other than just "the calculation of the integral of a
polynomial function times a gaussian function". you still haven't
made it clear to me what that true motivation is.
does it have something essentially to do with gaussian functions of
infinitely many variables, or with gaussian functions that aren't
explicitly diagonalized, or with some phenomenon that you don't tend
to see unless you deal with families of these gaussian integral
problems parameterized in some specific (perhaps "perturbative"?) way?
or what??
i've told you all this before, of course, but i'm hoping to provoke
you into saying something different this time.
note added later: hmm, maybe you _did_ already give somewhat of an
answer to this question, in that paragraph i already quoted above:
> When c = 0 this is the usual harmonic oscillator and you compute
> the partition function by doing an integral of a Gaussian - it's a
> snap. When c is not equal to zero you expand the partition function
> as a power series in c, get a bunch of integrals of polynomials times
> Gaussians, and keep track of these using Feynman diagrams.
so maybe what you're saying here is that feynman diagrams are _not_
really "tricks for computing the integral of a polynomial times a
gaussian" (no tricks being necessary for such an untricky task), but
rather "tricks for _remembering which_ polynomial-times-gaussian
functions to integrate" in the context of... doing some larger
calculation that i'm not sure you've specified in the right generality
yet... _is_ that anything like what you're trying to say?? if so i
think you should try to say it more clearly somehow; you've really
gotten me confused about this and i'm not un-confused yet.
> Do you know the integral from -infinity to +infinity of
>
> x^n exp(-x^2) ?
Yes, I think so:
if n=2k+1 it is ZERO; otherwise use Feynman's way: differentiate with
respect to a parameter.
> Once you know this, the rest of quantum field theory is just bookkeeping!
So you need to know how to do bookkeeping!
> In article <m10Q301...@crib.corepower.com>,
> Nathan Urban <nur...@vt.edu> wrote:
> >In article <7d97jo$ur6$1...@pravda.ucr.edu>, ba...@galaxy.ucr.edu (john baez)
> >wrote:
> >> Classical field theory is the hbar -> 0 limit of quantum field
> >> theory, [... tree approximation] (I'm oversimplifying here - we
> >> had an excellent discussion of this earlier on sci.physics.research!)
> >Which thread was that?
> You could have just used DejaNews and looked under "tree approximation",
> but heck, I'll do it for you... grumble grumble....
You'll have to pardon my brain seizure. For some reason, I didn't think
to search under "tree approximation" (even though I summed everything up
with that phrase), and was trying things like "classical quantum limit"
and got too many articles.
> >> Quantum mechanics is isomorphic to quantum field theory in 0+1
> >> dimensions,
> >Really? How's that?
> Well, what's *classical* field theory in 0+1 dimensions?
From the way you said that, I'd have to answer "classical mechanics".
But I'm not sure if I see this. Classical field theory in 0+1 is
just a field defined at a point evolving in time. That looks nice and
worldline-like so maybe it's like the classical mechanics of a point
particle, but how can that be isomorphic to the full classical mechanics
of a point particle which is formulated in terms of a 2p+1-dimensional
phase space?
> I hope you know what I mean by "anharmonic oscillator": it's a particle
> in the potential
> V(x) = x^2 + cx^4
> When c = 0 this is the usual harmonic oscillator and you compute
> the partition function by doing an integral of a Gaussian - it's a
> snap. When c is not equal to zero you expand the partition function
> as a power series in c, get a bunch of integrals of polynomials times
> Gaussians, and keep track of these using Feynman diagrams.
What's the partition function in this case? It's still not quite clear
to me how you define it in general.
> >> S(x) = tr(xx* + cx^3).
> >Is that used in any real physics or is it just a toy to give you nice
> >simple trivalent diagrams?
> "Real" physics? I don't do "real" physics, I do quantum gravity! As
> someone once said after a talk I gave on knots and quantum gravity,
> "That's *knot* physics."
> Anyway, the above theory is one formulation of quantum gravity in 1+1
> dimensions. Didn't I say that already?
Oh, you did. Sorry, another brain seizure. You put it in right at the
end of the post after you did all the derivations, and somehow I got the
impression that you were saying that it was like quantum gravity in that
it had a sum-over-surfaces, not that it _was_ quantum gravity.
> >> Each term in this series is the integral of a polynomial times
> >> the Gaussian exp(-tr(xx*)). This is exactly the sort of integral that
> >> we can do using Feynman diagrams!!! See what I mean???
> >Yup -- if I take your word that the sort of integral that we can do
> >using Feynman diagrams is one that has a polynomial times an exponential.
> Not any old exponential: a Gaussian. Gaussians have magic properties -
> that's why harmonic oscillators work so nice, because their partition
> functions are just integrals of Gaussians.
Okay, a magic exponential.
> Do you know the integral from -infinity to +infinity of
> x^n exp(-x^2) ?
Well, I didn't, but I figured it out.
> Once you know this, the rest of quantum field theory is just bookkeeping!
That's something that physicists say to piss off laymen, isn't it?
> >> I would explain how you do the integral of a polynomial times a
> >> Gaussian using Feynman diagrams, but quantum field theorists worldwide
> >> would hate me - they want people to think they're doing something
> >> really difficult and complicated!
> >Well, _I_ don't mind if they hate you. :)
> Okay, well, as long as they're not looking, I'll explain it to you.
> But first you gotta tell me how *you* would calculate the above integral.
Well, for odd n the integrand is odd so the integral vanishes.
For even n it's even so it's twice the integral from 0 to infinity of
the same integrand. Then I did a substitution u=x^2 and ended up with
Gamma((n+1)/2), which has a messy but tractable expansion in terms
of factorials; in the standard ellipsis "and so on" notation, it's
1 * 3 * ... * (n-1) / 2^(n/2) sqrt(pi).
-let v be the vector space of functions of one variable of the form a
-polynomial function times the gaussian function "x |-> exp(-x^2/2)".
-differentiation of functions is a linear operator d:v->v. integration
-of functions over the real number line is a linear functional
-l:v->reals. a straightforward calculus exercise using the fundamental
-theorem of calculus shows that for any function f in v, l(d(f)) = 0.
-this equation allows us to recursively evaluate l(a_n) where a_n is
-the monomial function "x^n" times the gaussian function "exp(-x^2/2)",
-obtaining a more or less "closed form" result:
-
-l(a_0) = some constant k
-l(a_1) = l(-d(a_0)) = 0
-l(a_2) = l(-d(a_1)+a_0) = l(a_0) = k
-l(a_3) = l(-d(a_2)+2*a_1) = 2*l(a_1) = 0
-l(a_4) = l(-d(a_3)+3*a_2) = 3*l(a_2) = 3*k
-l(a_5) = l(-d(a_4)+4*a_3) = 4*l(a_3) = 0
-l(a_6) = l(-d(a_5)+5*a_4) = 5*l(a_4) = 5*3*k
-...
-
-thus:
-
-l(a_[2*j]) = (2*j-1) * (2*j-3) * ... * 3 * k
-l(a_[2*j+1]) = 0
-
-the same calculation could be explained in many other ways as well
-("integration by parts" for example). whatever, we now know how to
-calculate l on v and we didn't need any "feynman diagrams", whatever
-they are. note that essentially the same method of calculation works
-for calculating the m-fold iterated integral of a polynomial function
-of m variables times the "diagonalized" gaussian function
-"exp(-[[x_1]^2 + ... + [x_m]^2]/2)", because the product of a
-multi-variable monomial and a multi-variable gaussian factorizes as a
-product of products of single-variable monomials and single variable
-gaussians.
-
-so evidently the true motivation for feynman diagrams must be
-something other than just "the calculation of the integral of a
-polynomial function times a gaussian function".
i may as well describe a further idea about this stuff that i stumbled
across after posting my previous message. it _is_ possible to give a
"combinatorial" interpretation to the numerical values of the gaussian
integrals given above, and this combinatorial interpretation does even
have something to do with "feynman diagrams" of a sort (though
seemingly not a very interesting sort).
the function f(n) := l(a_n) (normalized by k := 1) can be thought of
as the number of "fixed-point-free involutions" of a set with n
elements. a fixed-point-free involution of a set with n elements can
be drawn as a "feynman diagram" with n "external edges" and no
"vertexes", the only information in the diagram that's considered
significant being which pairs of the n external edges form the n/2
actual "edges". thus for example the 3 diagrams with 4 external edges
are (warning: ascii pictures):
| | |
| | |
__ \__ __/ __ ___|___
\ / |
| | |
| | |
more generally the integral of a multi-variable monomial x_1^b_1 *
... * x_j^b_j with respect to the gaussian measure "exp(-[x_1^2 +
... + x_j^2]/2) dx_1 ... dx_j" is (again up to a multiplicative
constant) the number of diagrams of this sort with b_h external edges
of a distinctive particle type corresponding to x_h for h := 1 to j.
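(a quick python sketch of mine, checking the single-variable version by
brute force: enumerate the fixed-point-free involutions and compare with
the recursion from my previous post.)

    def pairings(elems):
        """yield all fixed-point-free involutions (perfect matchings) of elems."""
        if not elems:
            yield []
            return
        first, rest = elems[0], elems[1:]
        for i, partner in enumerate(rest):
            for sub in pairings(rest[:i] + rest[i+1:]):
                yield [(first, partner)] + sub

    def f(n):
        """the normalized gaussian moment, via the recursion f(n) = (n-1)*f(n-2)."""
        return 1 if n == 0 else 0 if n == 1 else (n - 1) * f(n - 2)

    for n in range(9):
        print(n, sum(1 for _ in pairings(list(range(n)))), f(n))
        # the two columns agree: 1, 0, 1, 0, 3, 0, 15, 0, 105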
nevertheless i doubt that all this is really what john baez was trying
to get us to invent. it seems all backwards somehow; i thought that
monomials of degree d were supposed to have something to do with
feynman diagrams containing vertexes of "valence" d, whereas here we
seem to be relating monomials of degree d to feynman diagrams with d
external edges, as though blowing up individual vertexes into feynman
diagrams themselves.
OK, let's do this, since I'm not seeing where I could possibly
incorporate Feynman diagrams in this calculation. So we have some
polynomial
P(x) = Sum(0,infinity) c_n x^n
and we want to compute
integral P(x) exp(-x^2) dx.
This just comes down to computing
integral x^n exp(-x^2) dx.
This is obviously zero if n is odd and it is sqrt(pi) for n=0. For
the others, the slick way to get them is to insert a dummy parameter a
and set a = 1 at the end:
integral x^2n exp(-a x^2) dx = (-1)^n d^n/da^n integral exp(-a x^2) dx
= (-1)^n d^n/da^n sqrt(pi/a)
= ((2n-1)!! / 2^n a^n) sqrt(pi/a)
= (2n-1)!! sqrt(pi) / 2^n
or, as you say, you can do this by integration by parts. But, there
aren't lots of terms, rather it's very simple and you get the same
answer quite easily
integral x^2n exp(-x^2) dx = (2n-1)/2 integral x^2(n-1) exp(-x^2) dx
I don't see what there is to keep track of with Feynman diagrams.
Similarly for any finite n,
P(x) = sum({m_i}) c_{m_1,m_2,...} prod(i=1,n) (x_i)^(2 m_i)
(again, odd powers will integrate to zero)
integral P(x) exp(-|x|^2) dx
= sum({m_i}) c_{m_1,m_2,...}
x prod(i=1,n) integral (x_i)^(2 m_i) exp(-(x_i)^2) dx
= sum({m_i}) c_{m_1,m_2,...}
x prod(i=1,n) ((2 m_i - 1)!! sqrt(pi) / 2^(m_i))
Again, I don't seem to need anything to help keep track of things.
Basically, I've just shown that you can calculate the gaussian
integral of any polynomial by dropping all terms with odd powers and
then making the substitution
x^2n -> (2n-1)!! sqrt(pi) / 2^n
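(A quick sanity check of that substitution rule against the Gamma-function
form Nathan got - my own snippet:)

    import math

    for n in range(6):
        rule = math.prod(range(1, 2 * n, 2)) * math.sqrt(math.pi) / 2 ** n
        direct = math.gamma(n + 0.5)   # integral x^(2n) exp(-x^2) dx = Gamma(n + 1/2)
        print(n, rule, direct)         # the two columns agree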
>It's much easier to learn this stuff in an example where n is
>finite. That's why I was trying to get you to learn about matrix
>models. Remember what I said? We let X be the space of N x N
>complex matrices. We can think of this as R^n for some n, so the
>above remarks apply. Then we take some "action" S: X -> R like
>
>S(x) = tr(xx* + cx^3).
>
>Note: this is a polynomial function on X. Then we compute the partition
>function
>
>Z = integral_X exp(-S(x)) dx.
>
>We do it perturbatively as follows:
>
>Z = integral_X exp(-tr(xx* + cx^3)) dx
> = integral_X exp(-tr(cx^3)) exp(-tr(xx*)) dx
> = integral_X (1 - c tr(x^3) + (c tr(x^3))^2/2! + ... ) exp(-tr(xx*)) dx
Ahh...now here I can start to see how a diagrammatic scheme might help
you out. Perhaps you really meant that Feynman diagrams are tricks
for computing the gaussian integral of _the_exponential_ of a
polynomial. That I'm happy with. I'll even flesh this out a little
bit, but with an action I like a little better; those traces bug me.
S(x) = x_i x_i + c_ijk x_i x_j x_k
then
Z = integral (1 + (c_ijk x_i x_j x_k)^2 / 2! + ...) exp(- x_i x_i) dx
(the odd terms in the expansion integrate to zero, as usual). Now,
you can easily see that I could start drawing pictures of trivalent
vertices. Label the lines coming out of a vertex i,j,k and the vertex
then corresponds to a factor of c_ijk. Now start piecing these
together. Zero vertices is easy, just draw nothing. With one vertex,
I've got to have a free line - so I don't get any contribution. With
two vertices, I can get a football looking diagram and a barbell
looking diagram. These correspond to all the terms in the quadratic
piece that have only even powers of the x_i. And, so on; we can keep
going until we are bored senseless.
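Here's a rough Python sketch (mine; the leg labels are made up) that
enumerates the fifteen pairings of the six x's and sorts them into those
two shapes:

    def pairings(elems):
        """yield all ways of pairing up the elements."""
        if not elems:
            yield []
            return
        first, rest = elems[0], elems[1:]
        for i, partner in enumerate(rest):
            for sub in pairings(rest[:i] + rest[i+1:]):
                yield [(first, partner)] + sub

    vertex = lambda leg: 0 if leg < 3 else 1   # legs 0,1,2 on one vertex; 3,4,5 on the other
    football = barbell = 0
    for m in pairings([0, 1, 2, 3, 4, 5]):
        if any(vertex(a) == vertex(b) for a, b in m):
            barbell += 1    # a self-loop on each vertex plus one connecting line
        else:
            football += 1   # all three lines run between the two vertices
    print(football, barbell)   # 6 9 -- fifteen pairings in all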
However,
>I would explain how you do the integral of a polynomial times a
>Gaussian using Feynman diagrams, but quantum field theorists worldwide
>would hate me - they want people to think they're doing something
>really difficult and complicated!
and in a later post
> Do you know the integral from -infinity to +infinity of
>
> x^n exp(-x^2) ?
>
> Once you know this, the rest of quantum field theory is just
> bookkeeping!
The problem is that the rest of quantum field theory isn't just
bookkeeping and it is difficult and often complicated.
The action in every theory I know of is an integral over all of
spacetime. So, once you do your easy gaussian integrals, you still
have more integrals left. And those integrals are _hard_, at least if
you want to go beyond one-loop.
--
======================================================================
Kevin Scaldeferri Calif. Institute of Technology
The INTJ's Prayer:
Lord keep me open to others' ideas, WRONG though they may be.
>it _is_ possible to give a
>"combinatorial" interpretation to the numerical values of the gaussian
>integrals given above, and this combinatorial interpretation does even
>have something to do with "feynman diagrams" of a sort (though
>seemingly not a very interesting sort).
It's actually VERY interesting. You're on exactly the right track!
You just need a slight change of viewpoint. Fixing up the numerical
fudge factors a little bit, you have computed
(1/sqrt(2pi)) integral x^n exp(-x^2/2) dx
and shown that it equals the number of Feynman diagrams with n
1-valent vertices and no external edges. Of course you didn't
say this - but what you said is equivalent to this!
For example, when n = 4 the above integral equals 3. And there
are 3 Feynman diagrams with 4 1-valent vertices and no external
edges:
| | |
| | |
__ \__ __/ __ ___|___
\ / |
| | |
| | |
(Here of course the crossing in the third one is not a vertex;
it's just an artifact of drawing these diagrams on 2-dimensional paper.)
How do we generalize this? We invent a polynomial of degree n
called the nth "Wick power" or "normal-ordered power" of x, and
denoted :x^n:, with the following marvelous property.
(1/sqrt(2pi)) integral :x^{n_1}: ... :x^{n_k}: exp(-x^2/2) dx
equals the number of Feynman diagrams with one vertex of valence
n_1, one of valence n_2, ... and one of valence n_k.
Your calculation says we should let
:x: = x
It's also true that
:1: = 1
These two cases are perhaps deceptively simple. In general the nth
Wick power of x is not x^n. But it's something very beautiful and
important. You might enjoy figuring out what it is.
Oversimplifying indeed!
The hbar-expansion is not the loop expansion.
The tree approximation is *not* classical field theory.
The so-called "hbar-expansion" is the actual hbar-expansion with factors of
hbar^(-N) artificially inserted in N-point functions. In the genuine
hbar expansion, in the limit as hbar->0, you get
<0| A(x1) ... A(xN) |0> = A^N
where the c-number A is the classical "vacuum" value. (In theories without
spontaneous symmetry breakdown, A is usually taken to be zero.)
Feynman diagrams give you vacuum expectation values (and related
quantities). These define the whole theory in the quantum case, but only
give you the "vacuum" state in the classical case.
Greg
[Calculation of integral of Gaussian times polynomial....]
>so evidently the true motivation for feynman diagrams must be
>something other than just "the calculation of the integral of a
>polynomial function times a gaussian function". you still haven't
>made it clear to me what that true motivation is.
That's true. So far in this thread I've mainly been trying to get
Nathan Urban *motivated* to learn quantum field theory - by making
it sound so trivial that he becomes really pissed off that he doesn't
already know it. What could be more trivial than the integral of
a polynomial times a Gaussian? Is that really all there is to it???
Well, yes and no. But one certainly needs to know how to do this
integral to understand Feynman diagrams, so it's an okay place to
start.
>does it have something essentially to do with gaussian functions of
>infinitely many variables, or with gaussian functions that aren't
>explicitly diagonalized, or with some phenomenon that you don't tend
>to see unless you deal with families of these gaussian integral
>problems parameterized in some specific (perhaps "perturbative"?) way?
>or what??
"Infinitely many variables" is not really the point here. The calculations
physicists do usually involve polynomials times Gaussians of infinitely
many variables. This is why their integrals are sometimes ill-defined
until they fix them by "imposing a cutoff" or something. But all the
basic technology of Feynman diagrams can be illustrated very nicely in
the finite-dimensional case, where these complications don't intrude.
This is why I was trying to get Nathan interested in matrix models.
"Gaussians that aren't explicitly diagonalized" is slightly more to the
point. Suppose you have a real vector space X with an inner product,
a positive self-adjoint operator A on X, and a bunch of vectors
v_1, ..., v_k in X. Then you want to be able to do the integral
integral_X <v_1,x> ... <v_k,x> exp(-<x,Ax>) dx
in a nice pretty way without diagonalizing A and breaking the integral
up into a product of 1-dimensional integrals. If you can do this
integral you can integrate any polynomial times the Gaussian exp(-<x,Ax>).
But there is a simple dirty trick that physicists use here, which is
not unlike diagonalizing A. They write A = B^2 for some self-adjoint
B and then they do a change of variables
x = B^{-1} y
Perhaps someone trying to learn this stuff would like to say what
happens when we do this change of variables in the integral.... If
you do, you will be well on the road to understanding what field
theorists call "propagators".
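Here's a numerical sketch of the kind of answer that comes out - my own toy
example, with a made-up A and made-up v's. I believe the inverse matrix
sitting in the answer is essentially what gets called the propagator:

    import numpy as np

    rng = np.random.default_rng(0)
    A = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.5, 0.3],
                  [0.0, 0.3, 1.0]])    # a made-up positive self-adjoint A
    v1 = np.array([1.0, 0.0, 2.0])
    v2 = np.array([0.0, 1.0, -1.0])

    # exp(-<x,Ax>) is an unnormalized Gaussian whose covariance is A^{-1}/2,
    # so the normalized integral of <v1,x><v2,x> is just an expectation value:
    samples = rng.multivariate_normal(np.zeros(3), np.linalg.inv(A) / 2, size=200_000)
    print(np.mean((samples @ v1) * (samples @ v2)))   # Monte Carlo estimate
    print(0.5 * v1 @ np.linalg.inv(A) @ v2)           # (1/2) <v1, A^{-1} v2>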
But much more to the point is your guess about "perturbation theory",
and the following remark:
>so maybe what you're saying here is that feynman diagrams are _not_
>really "tricks for computing the integral of a polynomial times a
>gaussian" (no tricks being necessary for such an untricky task), but
>rather "tricks for _remembering which_ polynomial-times-gaussian
>functions to integrate" in the context of... doing some larger
>calculation that i'm not sure you've specified in the right generality
>yet... _is_ that anything like what you're trying to say??
YES! The task is to compute partition functions and more generally
n-point functions. I don't really want to say what the latter are
right now, so let me just talk about the former. Let me not give any
physics motivation for why we want to do the task - that's a separate
issue. Instead, let me just say what we want to compute that makes
us want Feynman diagrams.
Suppose we have a physical system whose space of "histories" is some
real vector space X equipped with an inner product. Suppose we know
the "action" - a function on the space of histories
S: X -> R
And suppose the action has the following special form
S(x) = <x,Ax> + cP(x)
where A is a positive self-adjoint operator, P is a polynomial on
X, and c is a real number called the "coupling constant".
In the Euclidean approach to quantum field theory (or quantum physics
in general), the partition function is then
Z = integral_X exp(-S(x)) dx
and this is what we want to compute.
More precisely, we want to compute it "perturbatively" as a power
series in c. So we take
Z = integral_X exp(-<x,Ax> - cP(x)) dx
= integral_X (1 - cP(x) + (cP(x))^2/2 - ...) exp(-<x,Ax>) dx
and we develop a nice way of calculating the term proportional to c^n
as a sum over graphs with n vertices.
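Here's the tiniest possible example of that statement, as a sketch with my
own made-up choices: X = R, A = 1, P(x) = x^4, and we look at the term
proportional to c.

    import math
    from scipy.integrate import quad

    # The order-c term, normalized by the free (c = 0) integral, is
    #   - integral x^4 exp(-x^2) dx / integral exp(-x^2) dx:
    norm, _ = quad(lambda x: math.exp(-x**2), -math.inf, math.inf)
    term, _ = quad(lambda x: x**4 * math.exp(-x**2), -math.inf, math.inf)
    print(term / norm)        # 0.75, by brute force

    # Graph version: one 4-valent vertex, 3 ways to pair up its four legs,
    # each pairing contributing one factor of <x x> = 1/2:
    print(3 * (1 / 2) ** 2)   # 0.75 again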
A nice example of this is the matrix model I was trying to explain
to Nathan.
The terrible thing about this post is that it doesn't explain the physics
that makes us want to do calculations of this sort! I'm assuming you know
that or can fake it. But if I ever explain quantum field theory to
Nathan I will need to say a lot more about the physics.
>>>> Quantum mechanics is isomorphic to quantum field theory in 0+1
>>>> dimensions,
>>>Really? How's that?
>> Well, what's *classical* field theory in 0+1 dimensions?
>From the way you said that, I'd have to answer "classical mechanics".
Right!
>But I'm not sure if I see this. Classical field theory in 0+1 is
>just a field defined at a point evolving in time. That looks nice and
>worldline-like so maybe it's like the classical mechanics of a point
>particle [...]
Right! In general a classical field is a section of some bundle
over spacetime, where spacetime is some n+1-dimensional manifold M.
When the bundle is trivial, our classical field is really just a map
from M to some fixed space X. In the case of 0+1 dimensions the
manifold M is just the real line, R. In other words, space is just
a point, so spacetime is just the "time line", R. Then our classical
field is just a map f: R -> X. But this is the same as a point particle
moving around in X.
In short, classical field theory in 0+1 dimensions has classical mechanics
as a special case. Note that we're not restricted to the classical
mechanics of a *single* point particle, either. By making X have lots
of dimensions, a point moving around in X can represent the motion of
*lots* of point particles.
This way of looking at things may seem silly but it's actually extremely
enlightening! In particular, it's a lot easier to learn *quantum* field
theory when you realize that in 0+1 dimensions it reduces to quantum
mechanics.
>but how can that be isomorphic to the full classical mechanics
>of a point particle which is formulated in terms of a 2p+1-dimensional
>phase space?
I'm not quite sure what you're puzzled about here. The following examples
might help:
Exercise - Take the wave equation in n+1 dimensions and restrict it to
the case n = 0. What classical mechanics problem does it become?
Exercise - Take the Klein-Gordon equation in n+1 dimensions and restrict
it to the case n = 0. What classical mechanics problem does it become?
(If you don't know what I mean by "the wave equation" or "the Klein-
Gordon equation" just let me know and I'll tell you.)
By the way, I don't know why you're making phase space be odd-dimensional.
Normally in classical mechanics people use "phase space" to mean the space
of positions and momenta, so it works out to be even-dimensional. There
are reasons to consider odd-dimensional "phase spaces" too, of course... is
that what you're doing? Anyway, I don't think it matters much; I'm just
curious.
>> I hope you know what I mean by "anharmonic oscillator": it's a particle
>> in the potential
>
>> V(x) = x^2 + cx^4
>
>> When c = 0 this is the usual harmonic oscillator and you compute
>> the partition function by doing an integral of a Gaussian - it's a
>> snap. When c is not equal to zero you expand the partition function
>> as a power series in c, get a bunch of integrals of polynomials times
>> Gaussians, and keep track of these using Feynman diagrams.
>
>What's the partition function in this case? It's still not quite clear
>to me how you define it in general.
Okay, let me define "partition function" in general. It means two
different things depending on whether we're talking about quantum
theory or statistical mechanics. These two things are closely related,
though....
1) STATISTICAL MECHANICS. In classical mechanics the energy is a
function H: X -> R where X is the space of "states" of our theory.
In statistical mechanics, the probability for the system to be in
the state x is proportional to
exp(-H(x)/kT)
where kT is Boltzmann's constant times the temperature. To get the
constant of proportionality right we need to divide by the
"partition function", which is
Z = integral_X exp(-H(x)/kT) dx.
2) QUANTUM THEORY. In classical mechanics the action is a
function S: X -> R where X is the space of "histories" of our theory.
In quantum theory, the amplitude for the system to undergo
the history x is proportional to
exp(iS(x)/hbar)
where hbar is Planck's constant. To get the constant of proportionality
right we need to divide by the "partition function", which is
Z = integral_X exp(iS(x)/hbar) dx.
I hope you see that the only difference apart from some *words* is
that in quantum theory we have a factor of i in the exponential
whereas in statistical mechanics we have a minus sign! Of course
the difference in words is incredibly important for the physical
meaning of what we're doing. But mathematically, the difference is
not so big. This is why statistical mechanics and quantum theory
are deeply related.
Okay, so now suppose you have an anharmonic oscillator - a particle
on the line in a potential
V(x) = x^2 + cx^4
We assume that as usual the kinetic energy is given by
K(p) = p^2/2m
where p is the momentum of the particle.
Exercise -
1) If we are studying the statistical mechanics of this system,
what is its partition function?
2) If we are studying the quantum theory of this system, what is
its partition function?
>> Do you know the integral from -infinity to +infinity of
>
>> x^n exp(-x^2) ?
>
>Well, I didn't, but I figured it out.
Good.
>> Once you know this, the rest of quantum field theory is just bookkeeping!
>
>That's something that physicists say to piss off laymen, isn't it?
Actually it's just something I said to piss you off; I've never heard
it before. But in a certain sense it's true. My main reason for trying to
piss you off was to get you to do the integral and then learn enough
quantum field theory to see what I'm talking about.
>Well, for odd n the integrand is odd so the integral vanishes.
>For even n it's even so it's twice the integral from 0 to infinity of
>the same integrand. Then I did a substitution u=x^2 and ended up with
>Gamma((n+1)/2), which has a messy but tractable expansion in terms
>of factorials; in the standard ellipsis "and so on" notation, it's
>1 * 3 * ... * (n-1) / 2^(n/2) sqrt(pi).
Okay, good, that sounds about right to me. Now try this:
Exercise - Draw n dots and count the number of ways of drawing edges
between them to connect them up in pairs. (These are Feynman diagrams
with only 1-valent vertices.) What do you get?
James Dolan already did this one so you can probably read the answer in
a forthcoming post of his if you don't feel like doing it yourself, but
it's actually pretty fun.
> In article <m10RTcX...@crib.corepower.com>, Nathan Urban <nur...@vt.edu> wrote:
> >> Well, what's *classical* field theory in 0+1 dimensions?
> >From the way you said that, I'd have to answer "classical mechanics".
> Right!
> >But I'm not sure if I see this. Classical field theory in 0+1 is
> >just a field defined at a point evolving in time. That looks nice and
> >worldline-like so maybe it's like the classical mechanics of a point
> >particle [...]
> Right! In general a classical field is a section of some bundle
> over spacetime, where spacetime is some n+1-dimensional manifold M.
> When the bundle is trivial, our classical field is really just a map
> from M to some fixed space X. In the case of 0+1 dimensions the
> manifold M is just the real line, R. In other words, space is just
> a point, so spacetime is just the "time line", R. Then our classical
> field is just a map f: R -> X. But this is the same as a point particle
> moving around in X.
And X, the fiber, is the configuration space of the point particle?
> >but how can that be isomorphic to the full classical mechanics
> >of a point particle which is formulated in terms of a 2p+1-dimensional
> >phase space?
> I'm not quite sure what you're puzzled about here.
I was thinking that if you wanted your particle to move around in a
multi-dimensional space, you'd have to make the _base space_ of the
bundle multi-dimensional. That's pretty much because I was thinking
of the particle moving around in a multi-dimensional spacetime as the
configuration space and I'm used to spacetime being the base space of
the bundle. I did the same thing when I thought about the phase space.
> The following examples might help:
> Exercise - Take the wave equation in n+1 dimensions and restrict it to
> the case n = 0. What classical mechanics problem does it become?
Umm, I forgot the wave equation and I can't find my classical mechanics
book around here.. but I think it's d^2 f/dt^2 = k^2 del^2 f.
So that would become d^2 f/dt^2 = 0 in 0-d space, I guess. Which,
if you interpret f as taking values in the configuration space R of a
classical particle moving in 1+1 dimensions, is the equation of motion
for a free particle.
> Exercise - Take the Klein-Gordon equation in n+1 dimensions and restrict
> it to the case n = 0. What classical mechanics problem does it become?
That's d^2 f/dt^2 - del^2 f + m^2 f = 0. In 0-d space, it becomes
d^2 f/dt^2 + m^2 f = 0. That's the equation of motion for a
harmonic oscillator - a particle in a quadratic potential.
> By the way, I don't know why you're making phase space be odd-dimensional.
Oh, I was trying to glom on a time dimension, even though I really do
know that phase space doesn't include a time dimension.
> Normally in classical mechanics people use "phase space" to mean the space
> of positions and momenta, so it works out to be even-dimensional. There
> are reasons to consider odd-dimensional "phase spaces" too, of course... is
> that what you're doing?
No. What's the physical significance of an odd-dimensional "phase space"?
> 1) STATISTICAL MECHANICS. In classical mechanics the energy is a
> function H: X -> R where X is the space of "states" of our theory.
> [...]
> Z = integral_X exp(-H(x)/kT) dx.
Vaguely familiar; it's been six years since I took stat mech.
> 2) QUANTUM THEORY. In classical mechanics the action is a
> function S: X -> R where X is the space of "histories" of our theory.
> [...]
> Z = integral_X exp(iS(x)/hbar) dx.
> Okay, so now suppose you have an anharmonic oscillator
> [V(x) = x^2 + cx^4, K(p) = p^2/2m]
> 1) If we are studying the statistical mechanics of this system,
> what is its partition function?
So... X is R^2, the particle's phase space? That's what "the state of the
system" means to me. If so, then H(x,p) = p^2/2m + x^2 + cx^4. We get:
Z = integral exp(-(p^2/2m + x^2 + cx^4)/kT) dx dp
which I could expand in terms of a power series in c like before, but
there's not much point in doing that yet.
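(As a numerical aside - my own sketch, with m = kT = 1 and a made-up c -
the p-integral is a pure Gaussian and factors right out:)

    import math
    from scipy.integrate import quad

    m = kT = 1.0
    c = 0.1
    p_part = math.sqrt(2 * math.pi * m * kT)   # integral exp(-p^2/(2 m kT)) dp
    x_part, _ = quad(lambda x: math.exp(-(x**2 + c * x**4) / kT), -math.inf, math.inf)
    print(p_part * x_part)    # Z for the classical anharmonic oscillator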
> 2) If we are studying the quantum theory of this system, what is
> its partition function?
I'm not sure what X is here. The space of all trajectories in R^2 x
R that solve the Hamiltonian equations of motion for the anharmonic
oscillator? That's what the "history" of a particle means to me. Then
that's going to have to be a path integral over the infinite-dimensional
space of paths.
> Exercise - Draw n dots and count the number of ways of drawing edges
> between them to connect them up in pairs. (These are Feynman diagrams
> with only 1-valent vertices.) What do you get?
I'm not sure which graphs you're allowing. How many points can a
given point be paired with? Greater than zero? One and only one?
A fixed number? A variable number?
>But there are a few nonperturbative results in QED. They have to fit in
>somewhere, right?
Yeah, but nobody is sure exactly where. The stuff about the Landau
pole (see old threads on sci.physics.research!) is nonperturbative in a
certain sense, yet it actually comes from messing with Feynman diagrams
with no more than a fixed number of loops, hoping that this is a "good
approximation", and extrapolating the results to ridiculously small length
scales. People also do nonperturbative simulations of QED and other
gauge theories on a lattice - i.e., approximating spacetime by something like
a 16 x 16 x 16 x 16 grid. But nobody knows if these simulations would really
converge to any answer whatsoever if we took the continuum limit. In short,
QED, like all interacting quantum field theories in 3+1 dimensions, is a
bunch of techniques for doing "approximate calculations", but we don't
know if what we're "approximating" really exists in any mathematically
precise sense.
>integral x^2n exp(-x^2) dx = (2n-1)!! sqrt(pi) / 2^n
Great. It looks like you and Nathan are getting the same
answer, so it must be right. By the way, I don't like that
factor of 2^n, so from now on I'm gonna talk about a different
integral:
(1/sqrt(2pi)) integral x^2n exp(-x^2/2) dx = (2n-1)!!
There are three reasons for this. First, I want to show you
that this sort of integral is *counting* something, so I want
it to equal an integer. Second, in physics we often *do* have
a factor of 1/2 in the Gaussian, coming from the factor of
1/2 we have in the formula for kinetic energy. Third, the
Gaussian
(1/sqrt(2pi)) exp(-x^2/2)
is very nice: it's a probability distribution with mean zero
and standard deviation equal to 1.
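If you want to see those double factorials come out of the integral
numerically, here is a tiny Python sketch (the cutoff and the helper name are
just for illustration):

    # Check (1/sqrt(2pi)) integral x^(2n) exp(-x^2/2) dx = (2n-1)!! for small n.
    from math import sqrt, pi, exp
    from scipy.integrate import quad

    def odd_double_factorial(k):          # 1*3*5*...*k, with (-1)!! = 1
        result = 1
        while k > 1:
            result *= k
            k -= 2
        return result

    for n in range(5):
        moment = quad(lambda x: x**(2*n) * exp(-x**2/2), -20, 20)[0] / sqrt(2*pi)
        print(2*n, round(moment, 6), odd_double_factorial(2*n - 1))
    # prints 0 1.0 1,  2 1.0 1,  4 3.0 3,  6 15.0 15,  8 105.0 105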
>I don't see what there is to keep track of with Feynman diagrams.
Well, you're probably just too smart! Remember, Schwinger didn't need
Feynman diagrams to do quantum field theory, and he actually got
annoyed when Feynman introduced them, because they "brought quantum
field theory to the masses".
I guess to reinvent Feynman diagrams we need someone who is not
so good at doing integrals. Like Jim Dolan. In fact, Jim just
figured out a relation between that strange number (2n-1)!! and some
funny-looking diagrams where we take 2n points and connect them
up in pairs. This is on exactly the right track. In my reply
to his post I'll explain what it means.
Turning to matrix models...
>>We do it perturbatively as follows:
>>
>>Z = integral_X exp(-tr(xx* + cx^3)) dx
>> = integral_X exp(-tr(cx^3)) exp(-tr(xx*)) dx
>> = integral_X (1 - c tr(x^3) + (c tr(x^3))^2/2! + ... ) exp(-tr(xx*)) dx
>Ahh...now here I can start to see how a diagrammatic scheme might help
>you out.
See, only when it gets hard do you start feeling the need for
Feynman diagrams. :-)
>Perhaps you really meant that Feynman diagrams are tricks
>for computing the gaussian integral of _the_exponential_ of a
>polynomial. That I'm happy with.
That may be a better way to put it. That's what we actually have
to do in physics! But of course the way we do it is expand the
exponential in a power series. Then we get a bunch of polynomials
times Gaussians to integrate. And each of these integrals has a
Feynman diagram interpretation.
>I'll even flesh this out a little bit, but with an action I like
>a little better; those traces bug me.
Great, thanks! Now you've shown how matrix models work. And you've
shown that a lot of the fun has nothing to do with matrices: any
old polynomial action S: X -> R defined on a finite-dimensional vector
space X will do. Like this:
>S(x) = x_i x_i + c_ijk x_i x_j x_k
And about my claim...
>> Once you know this, the rest of quantum field theory is just
>> bookkeeping!
>The problem is that the rest of quantum field theory isn't just
>bookkeeping and it is difficult and often complicated.
I could make quantum field theory sound difficult too if I felt like
it. I prefer to make it sound easy. So I'm emphasizing the fact
that the basic principles are very easy, rather than glorying in the
complicated ramifications of these basic principles.
>The action in every theory I know of is an integral over all of
>spacetime.
It's amusing that matrix models, which are currently hot candidates
for the Theory of Everything, are much simpler. They have to be
simpler, because they work in a context where there is no preexisting
spacetime to integrate over!
>So, once you do your easy gaussian integrals, you still
>have more integrals left. And those integrals are _hard_, at least if
>you want to go beyond one-loop.
Yes, you get some hard integrals to do, but every theory, no matter
how beautiful, must have a little nasty calculation built into it if
it's going to explain complex phenomena. The equations of general
relativity are simple and beautiful. Finding solutions can be a real
pain in the butt. But I prefer to emphasize the former aspect.
>> In general a classical field is a section of some bundle
>> over spacetime, where spacetime is some n+1-dimensional manifold M.
>> When the bundle is trivial, our classical field is really just a map
>> from M to some fixed space X. In the case of 0+1 dimensions the
>> manifold M is just the real line, R. [....] Then our classical
>> field is just a map f: R -> X. But this is the same as a point particle
>> moving around in X.
>And X, the fiber, is the configuration space of the point particle?
Right!
>I was thinking that if you wanted your particle to move around in a
>multi-dimensional space, you'd have to make the _base space_ of the
>bundle multi-dimensional.
No, that's the funny thing: a classical particle moving around in
a 469-dimensional configuration space is still an example of a 1-
dimensional classical field theory.
>> The following examples might help:
>
>> Exercise - Take the wave equation in n+1 dimensions and restrict it to
>> the case n = 0. What classical mechanics problem does it become?
>
>Umm, I forgot the wave equation and I can't find my classical mechanics
>book around here... but I think it's d^2 f/dt^2 = k^2 del^2 f.
Yeah. Of course, in applications to relativistic field theory that
annoying constant "k" is just the speed of light, which I always set
equal to 1.
>So that would become d^2 f/dt^2 = 0 in 0-d space, I guess.
Right. The Laplacian on a 0-dimensional manifold is zero, since
there are no directions to differentiate in!
>Which,
>if you interpret f as taking values in the configuration space R of a
>classical particle moving in 1+1 dimensions, is the equation of motion
>for a free particle.
Right. Ain't it cool? The wave equation, this incredibly important
equation that describes massless scalar fields, reduces to the equation
for a free particle when we make spacetime 1-dimensional!!!
>> Exercise - Take the Klein-Gordon equation in n+1 dimensions and restrict
>> it to the case n = 0. What classical mechanics problem does it become?
>
>That's d^2 f/dt^2 - del^2 f + m^2 f = 0.
Right. I see you've come to your senses now and set that yucky constant
"k" equal to 1.
>In 0-d space, it becomes d^2 f/dt^2 + m^2 f = 0.
Right! Again, the Laplacian goes away.
>That's the equation of motion for a particle in a linear potential.
Eh? Are you sure? That would be awful - a particle in a linear
potential is a very unstable sort of system, it just keeps rolling
downhill endlessly, while the Klein-Gordon equation is supposed to describe
a massive scalar field that wiggles back and forth in a nice way!
>What's the physical significance of an odd-dimensional "phase space"?
You don't need to know this now so I'm not going to tell you. If
I told you all the obscure crap I know, the signal I'm trying to
transmit to you would be completely drowned in noise. Let's focus
on quantization and quantum field theory....
>> 1) STATISTICAL MECHANICS. In classical mechanics the energy is a
>> function H: X -> R where X is the space of "states" of our theory.
>>
>> Z = integral_X exp(-H(x)/kT) dx.
>Vaguely familiar; it's been six years since I took stat mech.
Well, there's not much to it. The chance of a system being in a state
of energy E dies out exponentially as E gets big - and the colder it
is, the faster this exponential curve dies out! The partition function
Z is just the normalization factor you use to turn this insight into
a probability distribution on the set of states.
>> 2) QUANTUM THEORY. In classical mechanics the action is a
>> function S: X -> R where X is the space of "histories" of our theory.
>>
>> Z = integral_X exp(iS(x)/hbar) dx.
Oh, by the way, I need to tell you what the heck a "history" is!
In classical mechanics, it's *any* function from R to the configuration
space - not just one that satisfies the classical equations of motion.
In classical field theory, it's *any* field, not just one that satisfies
the classical field equations. In both cases, the action is a real
function on the space of histories, S: X -> R. The points in X
where the derivative of S is zero are the solutions of the equations
of motion. This is the "variational" or "Lagrangian" approach to
classical physics!
In quantum mechanics, all possible classical histories happen! The
history x happens with an amplitude exp(iS(x)/hbar). But in regions
of X where the action S is rapidly varying, the phase exp(iS(x)/hbar)
swings around like mad so it tends to cancel out when we integrate over
all possible histories. At points where the derivative of S is zero,
we get constructive rather than destructive interference of the phase.
And these are just the histories that solve the classical equations of
motion! (This paragraph summarizes why Feynman is famous - he thought
of all this stuff and used it to invent stuff like Feynman diagrams.)
Okay, so much for that quick review of partition functions....
>> ... so now suppose you have an anharmonic oscillator
>>
>> [V(x) = x^2 + cx^4, K(p) = p^2/2m]
>>
>> 1) If we are studying the statistical mechanics of this system,
>> what is its partition function?
>
>So... X is R^2, the particle's phase space? That's what "the state of the
>system" means to me.
Right. In classical mechanics, the word "state" is synonymous with
"point in phase space".
> If so, then H(x,p) = p^2/2m + x^2 + cx^4. We get:
>
> Z = integral exp(-(p^2/2m + x^2 + cx^4)/kT) dx dp
Right! Exactly!!!
>which I could expand in terms of a power series in c like before, but
>there's not much point in doing that yet.
Right. For now, the main point is that you *could* expand it in a
power series in c, and you'd get a bunch of integrals that *you know
how to do*. And I'm telling you that you can keep track of these
integrals using Feynman diagrams... but I have barely begun to suggest
how this actually works.
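Just to make the "you could expand it" concrete, here is a rough Python
sketch comparing the full integral with a few terms of the series in c (the
values of m, kT and c are made up, and the cutoffs are arbitrary):

    from math import sqrt, pi, exp, factorial
    from scipy.integrate import quad

    m, kT, c = 1.0, 1.0, 0.02             # illustrative numbers only

    # The p-integral is a pure Gaussian and factors out:
    Z_p = sqrt(2*pi*m*kT)

    # The x-integral, done numerically...
    Z_x = quad(lambda x: exp(-(x**2 + c*x**4)/kT), -10, 10)[0]

    # ...and done by expanding exp(-c x^4/kT) term by term, so that each term
    # is a polynomial times a Gaussian:
    def x_moment(k):                       # integral x^(4k) exp(-x^2/kT) dx
        return quad(lambda x: x**(4*k) * exp(-x**2/kT), -10, 10)[0]

    Z_x_series = sum((-c/kT)**k / factorial(k) * x_moment(k) for k in range(6))

    print("numerical:   ", Z_p * Z_x)
    print("perturbative:", Z_p * Z_x_series)   # agree closely for small c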
>> 2) If we are studying the quantum theory of this system, what is
>> its partition function?
>
>I'm not sure what X is here. The space of all trajectories in R^2 x
>R that solve the Hamiltonian equations of motion for the anharmonic
>oscillator? That's what the "history" of a particle means to me.
No, in quantum mechanics "history" means something else - see my
remarks above. According to that definition what's the space of
histories in this particular problem? And what's the action S as
a function on this space X of histories?
>Then
>that's going to have to be a path integral over the infinite-dimensional
>space of paths.
Hmm. The space of *solutions* of the equations of motion for the
anharmonic oscillator is just *2-dimensional*, no? So by *your*
definition of "histories", we'd just integrate over a 2-dimensional
space. But according to *my* definition the space of histories is
indeed infinite-dimensional. And I want you to tell me what it is.....
>> Exercise - Draw n dots and count the number of ways of drawing edges
>> between them to connect them up in pairs. (These are Feynman diagrams
>> with only 1-valent vertices.) What do you get?
>
>I'm not sure which graphs you're allowing. How many points can a
>given point be paired with? Greater than zero? One and only one?
One and only one. That's why I used the word "pairs" - each point
has a unique mate (which is not itself). See Jim Dolan's post and
my reply for more details. Sorry this exercise was a bit vague.
Grrr....
If you can easily see this, you failed to think it through enough,
just like me. Jim Dolan has basically explained how the diagrams come
up. So, if you just have one variable, the Gaussian integral of x^n is
basically just the number of ways you can link up pairs of n dots.
If you have more than one variable, x^n y^m ... it's just the number
of ways to pair up n dots, times the number of ways to pair up m dots,
times ....
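For what it's worth, the "connect the dots" claim is easy to check by brute
force; a minimal Python sketch (names purely illustrative):

    # Count the ways to link 2n labelled dots up in pairs and compare with the
    # Gaussian moment (2n-1)!! = 1*3*5*...*(2n-1).
    def count_pairings(points):
        points = list(points)
        if not points:
            return 1
        first, rest = points[0], points[1:]
        return sum(count_pairings([p for p in rest if p != mate]) for mate in rest)

    for n in range(1, 6):
        print(2*n, count_pairings(range(2*n)))    # 1, 3, 15, 105, 945
    # With several variables x^n y^m ... you just multiply these counts together.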
I've got to say, I'm still not seeing how this is useful. Basically,
it seems like we're just using up a lot of paper playing connect the
dots when we could just make that substitution I gave before
>
>x^2n -> (2n-1)!! sqrt(pi) / 2^n
>
--
>In article <abergman-270...@abergman.student.princeton.edu>,
>Aaron Bergman <aber...@Princeton.EDU> wrote:
>
>>But there are a few nonperturbative results in QED. They have to fit in
>>somewhere, right?
>
>Yeah, but nobody is sure exactly where. The stuff about the Landau
>pole (see old threads on sci.physics.research!) is nonperturbative in a
>certain sense, yet it actually comes from messing with Feynman diagrams
>with no more than a fixed number of loops, hoping that this is a "good
>approximation", and extrapolating the results to ridiculously small length
>scales.
But there's all sorts of other fun stuff, like Schwinger's calculation,
renormalization group-type stuff, and stuff whose implications I don't have
a tremendously good grasp on, like solitons and instantons. There has to be
something, somewhere, other than just
perturbation theory. Maybe it's all mathematically ill-defined, but one
gets the feeling that something is hiding under the theoretical morass and
we're just not grasping at it in quite the proper way.
> People also do nonperturbative simulations of QED and other
>gauge theories on a lattice - i.e., approximating spacetime by something like
>a 16 x 16 x 16 x 16 grid. But nobody knows if these simulations would really
>converge to any answer whatsoever if we took the continuum limit. In short,
>QED, like all interacting quantum field theories in 3+1 dimensions, is a
>bunch of techniques for doing "approximate calculations", but we don't
>know if what we're "approximating" really exists in any mathematically
>precise sense.
I get the impression when doing renormalization calculations, adding
counterterms and the like, that somewhere, deep down, the theory is fine.
It's just that the tools are flawed. Maybe Lagrangians and the like just lead
to trouble. There should be some way of looking at all this, it seems, in
which renormalization seems natural and necessary rather than ad hoc and
grotesquely funny. We're doing Wilson's stuff right now, so perhaps all
will become, if not clearer, then at least a tad less opaque.
>>(1/sqrt(2pi)) integral x^n exp(-x^2/2) dx
>>
>>...equals the number of Feynman diagrams with n
>>1-valent vertices and no external edges...
>This result can kind of be explained using
>harmonic oscillators and raising and lowering
>operators.
Right! My only objection is to your qualifier "kind
of" - it seems to me that the explanation you gave is
the very BEST possible explanation of this result!
It illuminates the physics of what is really going
on and explains why the above integral is giving us
integers 1, 0, 1, 0, 3, 0, 15, ... when n = 0, 1,
2, 3, 4, 5, 6 .... What more could we want?
To quote you loosely:
integral x^n exp(-x^2/2) dx =
2^{-n/2} integral x^n exp(-x^2) dx =
2^{-n/2} integral exp(-x^2/2) x^n exp(-x^2/2) dx =
C 2^{-n/2} <0|x^n|0> =
C <0|(a + a*)^n |0>
and the number <0|(a + a*)^n |0> is the number of 1-valent Feynman
diagrams with n vertices.
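Here is a quick numerical sanity check of that last statement (a sketch only),
using truncated matrices with the usual normalization a|k> = sqrt(k)|k-1>, so
that the "loose" constants in the chain above drop out:

    import numpy as np

    N = 12                                 # truncation; harmless as long as N > n
    a = np.zeros((N, N))
    for k in range(1, N):
        a[k-1, k] = np.sqrt(k)             # a|k> = sqrt(k)|k-1>
    x_op = a + a.T                         # a + a*

    vac = np.zeros(N); vac[0] = 1.0
    for n in range(8):
        value = vac @ np.linalg.matrix_power(x_op, n) @ vac
        print(n, round(float(value), 6))
    # prints 1, 0, 1, 0, 3, 0, 15, 0 -- the pairing counts again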
So we see that starting from the task of computing a simple
integral, we've been led inexorably into thinking about harmonic
oscillators, raising and lowering operators, and Feynman diagrams.
Of course, what we could want is a clearer picture of what's
really going on here. The integral came from trying to compute
a partition function. What is the physical significance of
that partition function? And why should that partition function
be related to all this stuff like raising and lowering operators?
The math is simple. The physics cries for further explanation.
>I've got to say, I'm still not seeing how this is useful. Basically,
>it seems like we're just using up a lot of paper playing connect the
>dots when we could just make that substitution I gave before.
I think that's probably what Schwinger felt about Feynman diagrams.
"Why draw all these damn pictures when you can just do the integrals?"
Of course you don't need to draw the pictures. The pictures merely
give you *insight* into what the integrals *mean*. Of course, I
haven't begun to talk about that yet - Daryl McCullough has just
broached this topic.
Perhaps I should explain a bit more about what I'm up to. In real
full-fledged quantum field theory, Feynman diagrams are used to
compute integrals like
integral_X P(x) exp(-||x||^2) dx
where P is a polynomial on an *infinite-dimensional* real Hilbert
space X. I'm teaching people the technique in the case where X is
a *one-dimensional* real Hilbert space - the real line. The technique
works exactly the same no matter what the dimension of X. But it
looks much more impressive in the case when X is infinite-dimensional.
Normally quantum field theory textbooks try to make Feynman diagrams
seem impressive by jumping directly into the infinite-dimensional
case. I'm trying to make them seem unimpressive, by doing the one-
dimensional case first.
Your question above shows that I'm succeeding: I'm making them so
unimpressive that you think they're unnecessary, at least in the
one-dimensional case. Good! That's true! They are equally
unnecessary in the infinite-dimensional case! But of course most
quantum field theory textbooks don't point that out....
Actually this reminds me a bit of when I teach my calculus students how
to compute the area under the curve y = x from x = 0 to x = 1 using
a Riemann sum. It seems like an incredibly complicated and stupid
way to do it, compared to just using the formula A = bh/2. And yet
it illustrates ideas which go much further.
>evidently, you're suggesting that for some reason we may be interested
>in calculating the integral from x := -infinity to +infinity of:
>
>f(x,c_1,...,c_j) := exp(-x^2/2 + c_1*:x^[n_1]: + ... + c_j*:x^[n_j]:)
>
>perturbatively with respect to the "coupling constants" c_1,...,c_j.
Yup. This is the "partition function" of a simple 1-dimensional
system. Higher-dimensional analogs of this integral arise as
partition functions in quantum field theory (or statistical mechanics)
when we consider systems whose action (or Hamiltonian) is given by
a quadratic function plus a polynomial "interaction term".
>thus the c_1^[m_1]*...*c_j^[m_j] coefficient at c_1:=0,...,c_j:=0 of
>the taylor series of this integral expression with respect to the
>variables c_1,...,c_j is the integral from x := -infinity to +infinity
>of:
>
> :x^[n_1]:^[m_1] * ... * :x^[n_j]:^[m_j] * exp(-x^2/2)
>
>if i did the algebra correctly, and this coefficient is (apart from
>some silly multiplicative constant) the number of feynman diagrams
>with m_1 vertexes of valence n_1, ..., and m_j vertexes of valence
>n_j.
>
>is that more or less right?
Right. Of course, the part of your sentence before "and this" is
true no matter how we define the Wick powers :x^n:, while the part
after "and this" will only be true once you find the correct definition
of the Wick powers.
Also, we have to be careful when we count those Feynman diagrams.
We think of the vertices as being labelled, and we don't count diagrams
where an edge has both ends at the same vertex. Should we think of
the edges as being labelled? I think so. But you can fiddle around
and see what works best. The Wick powers are so beautiful that you
should be led to them just by seeking something elegant that does more
or less what we're talking about here.....
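Here is a little Python sketch of that "fiddling around" for small cases. It
assumes the convention that vertices and half-edges are labelled and that no
edge has both ends at one vertex, and it uses for :x^n: the polynomials that
come out later in this thread (x, x^2 - 1, x^3 - 3x, ..., which numpy calls
the HermiteE polynomials):

    import numpy as np
    from numpy.polynomial import hermite_e as He
    from scipy.integrate import quad

    def count_diagrams(valences):
        """Pair off labelled half-edges; forbid both ends of an edge at one vertex."""
        half_edges = [(v, i) for v, n in enumerate(valences) for i in range(n)]
        def pairings(items):
            if not items:
                return 1
            first, rest = items[0], items[1:]
            return sum(pairings(rest[:j] + rest[j+1:])
                       for j, mate in enumerate(rest) if mate[0] != first[0])
        return pairings(half_edges)

    def wick_integral(valences):   # (1/sqrt(2pi)) integral prod :x^n_i: exp(-x^2/2) dx
        def integrand(x):
            value = np.exp(-x**2/2) / np.sqrt(2*np.pi)
            for n in valences:
                value *= He.hermeval(x, [0]*n + [1])   # :x^n:
            return value
        return quad(integrand, -15, 15)[0]

    for valences in [(1, 1), (2, 2), (3, 3), (2, 2, 2), (1, 1, 2)]:
        print(valences, count_diagrams(valences), round(wick_integral(valences), 4))
    # both columns come out 1, 2, 6, 8, 2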
-Since, to get something nonzero, each a must
-pair up with a corresponding a*,
-
- <0|(a + a*)^n |0>
-
-= the number of ways that the a's can pair
-up with the a*'s.
after talking it over with john baez the other day, i think i
understand most of what daryl said except for the sentence above,
which seems to me to involve a somewhat creative leap of logic
(perhaps in the spirit of the discussion so far).
let me try to act out an example here to a certain level of detail to
see if i can at least fake the mechanics of it.
i'll try the case n=4 again. (a+a*)^4 =
aaaa + a*aaa + aa*aa + a*a*aa +
aaa*a + a*aa*a + aa*a*a + a*a*a*a +
aaaa* + a*aaa* + aa*aa* + a*a*aa* +
aaa*a* + a*aa*a* + aa*a*a* + a*a*a*a*
in expanded form. then sufficient contemplation of daryl's facts #1-3
allegedly shows that each of the 16 terms above, when sandwiched
between "<0|" and "|0>", yields the number of feynman diagrams with
"chronological vertex profile" given by that term. that is,
interpreting "a" as standing for an annihilation vertex and "a*" for a
creation vertex, read the term from right to left (as daryl's
conventions seem to require) to obtain a sequence of four vertex
types, and then see how many feynman diagrams you can make from that
sequence. for example from the term "aaaa" in the upper left-hand
corner i get the sequence (annihilation, annihilation, annihilation,
annihilation), from which i can build a total of zero feynman
diagrams. in fact for all but two of the 16 terms, the sequence of
vertex types obtained from it admits no feynman diagrams. the two
terms that admit feynman diagrams are "aaa*a*" which admits two
feynman diagrams:
creation----------------------------------------annihilation
creation--------annihilation
and:
creation------------------------annihilation
creation------------------------annihilation
and "aa*aa*" which admits one feynman diagram:
creation--------annihilation creation--------annihilation
thus yielding a total of three feynman diagrams for all 16 terms put
together, which agrees with our previous calculations.
(in general, as each feynman diagram of the sort under consideration
has its unique sequence of vertex types, the number of feynman
diagrams you get from all the terms put together is equal to the total
number of feynman diagrams of that sort.)
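if anyone wants to let the computer do the "sufficient contemplation", here's
a small sketch (illustrative only): read each term right to left, let a* add a
particle and let a knock out any one of the particles present, and count ways.

    from itertools import product

    def vacuum_expectation(word):          # <0| word |0> as a pairing count
        ways, particles = 1, 0
        for letter in reversed(word):      # rightmost operator acts first
            if letter == 'C':              # 'C' stands for a* (creation)
                particles += 1
            else:                          # 'A' stands for a (annihilation)
                ways *= particles
                particles -= 1
                if ways == 0:
                    return 0
        return ways if particles == 0 else 0

    total = 0
    for letters in product('AC', repeat=4):
        word = ''.join(letters)
        value = vacuum_expectation(word)
        if value:
            print(word, value)             # AACC 2 (= aaa*a*),  ACAC 1 (= aa*aa*)
        total += value
    print('total:', total)                 # 3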
that's about as much detail as i understand so far. it's not as close
yet as i'd like it to be to the aesthetic ideal of the attitude toward
combinatorics which prizes the giving of explicit and "robust"
bijections as explanations of equations between positive integers.
one more thing. while playing around i noticed that the equation:
integral("x^[2*n]*exp(-x^2/2)")/integral("exp(-x^2/2)")
= 1*3*...*(2*n-1)
is actually just a special case of the more general fact:
integral("x^[2*m*n]*exp(-x^[2*m]/(2*m))")/integral("exp(-x^[2*m]/(2*m))")
= 1*(2*m+1)*...*(2*m*(n-1)+1)
is there anything interesting to say about this more general fact (or
some even more general one) along the lines of what daryl and john and
others here have said about its special case? perhaps involving
combinatorics or feynman diagrams or quantum mechanics or integration
by parts or the gamma function or whatever?
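here, for what it's worth, is a quick numerical spot-check of that more
general identity (cutoffs arbitrary, just a sketch):

    from math import exp
    from scipy.integrate import quad

    def moment_ratio(m, n):
        num = quad(lambda x: x**(2*m*n) * exp(-x**(2*m)/(2*m)), -10, 10)[0]
        den = quad(lambda x: exp(-x**(2*m)/(2*m)), -10, 10)[0]
        return num / den

    def product_formula(m, n):             # 1*(2m+1)*(4m+1)*...*(2m(n-1)+1)
        result = 1
        for k in range(n):
            result *= 2*m*k + 1
        return result

    for m in range(1, 4):
        for n in range(1, 4):
            print(m, n, round(moment_ratio(m, n), 3), product_formula(m, n))
    # e.g. m=1: 1, 3, 15;  m=2: 1, 5, 45;  m=3: 1, 7, 91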
-James Dolan wrote:
-
->it _is_ possible to give a
->"combinatorial" interpretation to the numerical values of the gaussian
->integrals given above, and this combinatorial interpretation does even
->have something to do with "feynman diagrams" of a sort (though
->seemingly not a very interesting sort).
-
-It's actually VERY interesting. You're on exactly the right track!
-You just need a slight change of viewpoint. Fixing up the numerical
-fudge factors a little bit, you have computed
-
-(1/sqrt(2pi)) integral x^n exp(-x^2/2) dx
-
-and shown that it equals the number of Feynman diagrams with n
-1-valent vertices and no external edges. Of course you didn't
-say this - but what you said is equivalent to this!
...
-How do we generalize this? We invent a polynomial of degree n
-called the nth "Wick power" or "normal-ordered power" of x, and
-denoted :x^n:, with the following marvelous property.
-
-(1/sqrt(2pi)) integral :x^{n_1}: ... :x^{n_k}: exp(-x^2/2) dx
-
-equals the number of Feynman diagrams with one vertex of valence
-n_1, one of valence n_2, ... and one of valence n_k.
-
-Your calculation says we should let
-
-:x: = x
-
-It's also true that
-
-:1: = 1
-
-These two cases are perhaps deceptively simple. In general the nth
-Wick power of x is not x^n. But it's something very beautiful and
-important. You might enjoy figuring out what it is.
i'll try to describe here the progress that i've made on this so far.
first, i noticed that, if you're not sure about the precise definition
of "feynman diagram" here, then it is plausible to guess that these
"wick powers" of x might in fact be just the ordinary powers of x.
thus, in a later post in this thread you emphasize a clause in the
definition of "feynman diagram" that prohibits an edge from having
both of its ends terminate at the same vertex, but if you omit that
clause from the definition then the number of feynman diagrams with m
given vertexes, each of respective valence v_j, j=1,...,m, depends
only on the sum v_1+...+v_m, which means that the "marvelous property"
of wick powers that you describe above could be achieved by defining
:x^n: to be x^n (at the price of using the modified version of the
definition of "feynman diagram").
i'm not particularly sure what to make of that, though it raises some
questions about the conceptual and historical relationship between
wick powers and ordinary powers.
next, i was unsure at first as to whether the "marvelous property" of
wick powers was not enough, or just the right amount of, or too much
information to characterize the wick powers. it certainly _seemed_
like an awful lot of information, but it wasn't immediately obvious to
me how to use all that information to completely characterize the
objects possessing them. i think i see it now, though, at least in
principle.
:x^n: is characterized from the "marvelous property" (together with
the requirement that :x: = x, if you insist on being careful) by the
fact that the measure with density function
k * :x^n: * exp(-x^2/2)
(k being a certain constant that i'll ignore) has its mth "moment"
equal to the number of feynman diagrams with one vertex of valence n
plus m vertexes of valence 1.
(the mth "moment" of a measure d on the real numbers is the integral
with respect to d of the function x^m. thus if f is a density
function for d, the mth moment of d is the integral of x^m * f(x) with
respect to ordinary lebesgue measure on the reals. nice measures tend
to be characterized by their moments. putting all this together with
:x: = x and the "marvelous property" implies the relationship between
the moments of a wick power times a gaussian measure and feynman
diagrams with all vertexes but one being unary.)
as it happens it seems straightforward to calculate the number of
feynman diagrams with one vertex of valence n plus m vertexes of
valence 1 rather explicitly by naive combinatorial intuition. if m+n
is odd then it's zero, otherwise it's m*(m-1)*...*(m-(n-1)) times
(m-n-1)*(m-n-3)*...*1.
(the naive intuition here is that "the first edge originating at the
valence n edge has m choices as to which valence 1 vertex to terminate
at; the next edge has only m-1 choices, and so on down the line; then
the edge originating at the first remaining valence 1 vertex has only
m-n-1 choices because edges can't terminate where they originate, and
so on down the line".)
thus hopefully the wick power :x^n: is now characterized by knowing
the moments of the measure with density function :x^n: * exp(-x^2/2).
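a brute-force check of that count of diagrams with one valence n vertex plus
m valence 1 vertexes against the naive formula above, just as a sketch:

    def count(n, m):            # one valence-n vertex plus m valence-1 vertexes
        half_edges = ['big'] * n + list(range(m))
        def pairings(items):
            if not items:
                return 1
            first, rest = items[0], items[1:]
            return sum(pairings(rest[:j] + rest[j+1:])
                       for j, mate in enumerate(rest)
                       if not (first == 'big' and mate == 'big'))   # no self-loops
        return pairings(half_edges)

    def naive_formula(n, m):
        if (n + m) % 2:
            return 0
        result = 1
        for i in range(n):
            result *= m - i                 # m*(m-1)*...*(m-(n-1))
        k = m - n - 1
        while k > 1:
            result *= k                     # (m-n-1)*(m-n-3)*...*1
            k -= 2
        return result

    for n in range(5):
        for m in range(7):
            assert count(n, m) == naive_formula(n, m), (n, m)
    print("formula agrees with brute force for n < 5, m < 7")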
[since it's apparently becoming somewhat of a chore to finish writing
this i'll break off here now and try to continue later.]
using this terminology, the result of my last post can be expressed as
determining the gaussian moments of the wick powers of x. thus
whereas from previous calculations we have the sequence of gaussian
moments of the nth ordinary power x^n of x equal to the nth row in the
following table #1:
1   0   1   0   3   0   3*5   0   3*5*7   ...
                        =15       =105
0   1   0   3   0   3*5   0   3*5*7   0
                    =15       =105
1   0   3   0   3*5   0   3*5*7   0   3*5*7*9
                =15       =105        =945
0   3   0   3*5   0   3*5*7   0   3*5*7*9   0
            =15       =105        =945
3   0   3*5   0   3*5*7   0   3*5*7*9   0   3*5*7*9*11
        =15       =105        =945          =10395
.
.
.
the sequence of gaussian moments of the nth wick power :x^n: of x is
equal to the nth row in table #2:
1   0   1   0   3   0   3*5   0   3*5*7   ...
                        =15       =105
0   1   0   3   0   3*5   0   3*5*7   0
                    =15       =105
0   0   2   0   3*4   0   3*5*6   0   3*5*7*8
                =12       =90         =840
0   0   0   2*3   0   3*4*5   0   3*5*6*7   0
            =6        =60         =630
0   0   0   0   2*3*4   0   3*4*5*6   0   3*5*6*7*8
                =24         =360          =5040
.
.
.
now to try to express the wick powers of x as explicit linear
combinations of the ordinary powers of x. :1: = 1 and :x: = x, as
john baez already noted. it's almost as obvious, from glancing at
tables #1 and #2, that :x^2: = x^2 - 1. but let's see if we can tell
a combinatorial story to make sense out of this and prepare for
dealing with the higher wick powers. the nth gaussian moment of :x^2:
is the number of feynman diagrams with one valence 2 vertex plus n
valence 1 vertexes, or in other words the number of pairing-offs of
2+n "edge terminuses" with (due to the clause stating that edges never
have both ends at the same vertex) the first 2 of the 2+n not forming
one of the pairs. thus it's the number of pairing-offs of 2+n things,
minus the number of "bad" such pairing-offs, "bad" here meaning that
the first 2 of the 2+n form one of the pairs. but then it's clear
that these bad pairing-offs are in canonical one-to-one correspondence
with the pairing-offs of just n things (namely the remaining n of the
2+n things that are left after the first 2 have been nailed down).
thus the nth gaussian moment of :x^2: is the number of pairing-offs of
n+2 things minus the number of pairing-offs of n things, or in other
words the nth gaussian moment of x^2 minus the nth gaussian moment of
1.
next, to embellish this combinatorial story to handle the higher wick
powers.
[another break is called.]
>in fact for all but two of the 16 terms, the sequence of
>vertex types obtained from it admits no feynman diagrams.
This has tantalizing almost-connections with the theory of random walks.
For once ascii-art is up to the task. Let's use an upward-slanting / for
'a', and a downward-slanting \ for 'a*'; then here are a few sample terms:
 /\        aaa*a*
/  \
/\/\       aa*aa*
\/\/       a*aa*a
Assume our random walk begins at the left-hand end of one of these
mountain-scapes, at coordinate (0,0).
So the first walk takes us on this itinerary: (0,0) -> (1,1) -> (2,2) ->
(3,1) -> (4,0). The last walk goes like this: (0,0) -> (1,-1) -> (2,0) ->
(3,-1) -> (4,0).
The only walks we need to consider are those that stay in the upper
half-plane, i.e., those for which all the y-coordinates are non-negative.
More later, gotta go get dogfood.
  /\       aaaa*a*a*
 /  \
/    \
or again:
 /\/\
/    \     aaa*aa*a*
We can discard any terms that don't stay in the upper half plane (including
the x-axis). In other words, all the y-coordinates must be non-negative.
Now, how much does each term contribute to the final result? If we said
'one', then this would be a classic problem in probability theory, just a
slight variation on the so-called ballot problem---see, e.g., Feller volume
I. But in fact almost all terms contribute *more* than one.
There are two ways to compute the contribution of a term, say aaa*aa*a* to
pick a concrete example. Both are interesting.
First method: pair up the a's and a*'s following an "innermost out" nesting
rule. I.e., treat 'a' like a left parenthesis and 'a*' like a right
parenthesis, and pair up accordingly. Here's how it looks if we indicate
each pair with a connecting horizontal line, in the random-walk picture.
(View this post with a fixed-width font! Unfortunately, the peaks of the
mountain-tops don't quite meet anymore--- the curse of ascii.)
 /-\/-\      aaa*aa*a*
/------\     123 45 6       pairing: (16,23,45)
We've labelled the factors. It will be convenient to refer to the factors
just by number; for example, the pair 23 is the second 'a' and the first
'a*'.
Let's say that the pair 16 has height 1, and the pairs 23 and 45 have
height 2. Then the value of <0|aaa*aa*a*|0> is just the product of the
heights of the pairs: 1x2x2 = 4. This is easy to see, once you recall that
a*|k-1> = sqrt(k)|k>, and a|k> = sqrt(k)|k-1>.
Second method, more "combinatorial". The value of <0|aaa*aa*a*|0> is the
number of _valid pairings_ for the term aaa*aa*a*. What's a valid pairing?
Well, we pair up a's with a*'s, as before; but this time, the only
condition we impose is that each 'a' must be matched up with an 'a*' to its
RIGHT. So (13,25,46) is a valid pairing, but (34,25,16) is not.
So the claim is that <0|aaa*aa*a*|0> is equal to the cardinality of the set
of valid pairing:
<0|aaa*aa*a*|0> <==> (23,45,16) (23,15,46)
   123 45 6          (13,25,46) (13,45,26)
How can we verify this? How do we know we didn't miss any valid pairing on
the right? And how can we tell that the value on the left equals 4---
without relying on our previous computation?
We pull out an old trick from the combinatorialist's bag: divide and
conquer! For starters, we can divide the valid pairings for aaa*aa*a* into
two groups: those that pair up 2 with 3, and those that don't! Those that
DON'T are valid pairings for aa*aaa*a*:
aa*aaa*a* <==> (13,25,46) (13,45,26)
13 245 6
where we've switched 2 and 3 (because now 2 can't match up with 3, which is
to its left). And those that DO are valid pairings for the term we get by
omitting factors 2 and 3:
aaa*a* <==> (23,45,16) (23,15,46)
145 6
On the other hand, aa* - a*a = 1, so aa* = a*a + 1, so we have this
equation:
aaa*aa*a* = a(a*a+1)aa*a* = aa*aaa*a* + aaa*a*
123 45 6 1 3 2 45 6 13 245 6 145 6
So we have reduced the verification for aaa*aa*a* (with 4 valid pairings)
to verifications for aa*aaa*a* and aaa*a* (2 valid pairing each).
The rest is just an exercise in proof by induction. Let's carry it a
couple of steps further, just for the flavor. We'll do aaa*a*. So factors
2 and 3 have already been paired off. Continuing:
aaa*a* = a(a*a+1)a* = aa*aa* + aa*
145 6 1 5 4 6 15 46 16
and in the term aa*aa*, we can be sure that 4 will not pair off with 5; in
the term aa*, 4 and 5 have paired off. So:
aa*aa* <==> (23,15,46)
15 46
aa* <==> (23,45,16)
16
If we wanted to, we could even apply the same divide and conquer method to
aa*aa*:
aa*aa* = (a*a+1)aa* = a*aaa* + aa*
15 46 5 1 46 5 146 46
But we can tell right away that a*aaa* has no valid pairings, for that
leftmost a* can't possibly find a mate! And of course <0|a*aaa*|0> = 0,
since <0|a* = 0.
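Here is a small Python sketch (nothing canonical about it) that checks both
readings of <0|aaa*aa*a*|0> against each other:

    import numpy as np
    from itertools import permutations

    term = ['a', 'a', 'a*', 'a', 'a*', 'a*']        # aaa*aa*a*, left to right

    # The matrix element itself, with truncated matrices (a|k> = sqrt(k)|k-1>),
    # which is what the product-of-heights rule computes: 1 x 2 x 2 = 4.
    N = 8
    a = np.zeros((N, N))
    for k in range(1, N):
        a[k-1, k] = np.sqrt(k)
    op = {'a': a, 'a*': a.T}
    state = np.zeros(N); state[0] = 1.0
    for symbol in reversed(term):                   # rightmost factor acts first
        state = op[symbol] @ state
    print("matrix element:", round(float(state[0]), 6))

    # The count of valid pairings: every 'a' matched to an 'a*' on its RIGHT.
    annihilators = [i for i, s in enumerate(term) if s == 'a']
    creators     = [i for i, s in enumerate(term) if s == 'a*']
    valid = sum(all(c > a_pos for a_pos, c in zip(annihilators, perm))
                for perm in permutations(creators))
    print("valid pairings:", valid)                 # both print 4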
Pretty mathematics! Now can we have some pretty physics to go with it?
Hmmm, Mr. Wizard?
> In article <m10SY3W...@crib.corepower.com>, Nathan Urban <nur...@vt.edu> wrote:
> >Which, if you interpret f as taking values in the configuration space R of a
> >classical particle moving in 1+1 dimensions, is the equation of motion
> >for a free particle.
> Right. Ain't it cool? The wave equation, this incredibly important
> equation that describes massless scalar fields, reduces to the equation
> for a free particle when we make spacetime 1-dimensional!!!
Is there some really deep significance to this?
> >> Exercise - Take the Klein-Gordon equation in n+1 dimensions and restrict
> >> it to the case n = 0. What classical mechanics problem does it become?
> >That's d^2 f/dt^2 - del^2 f + m^2 f = 0.
> Right. I see you've come to your senses now and set that yucky constant
> "k" equal to 1.
I just copied it out of a book that was apparently using sane units.
> >In 0-d space, it becomes d^2 f/dt^2 + m^2 f = 0.
> Right! Again, the Laplacian goes away.
> >That's the equation of motion for a particle in a linear potential.
> Eh? Are you sure?
Oops. Linear FORCE. Quadratic potential. Good old Hooke's law.
> That would be awful - a particle in a linear
> potential is a very unstable sort of system, it just keeps rolling
> downhill endlessly, while the Klein-Gordon equation is supposed to describe
> a massive scalar field that wiggles back and forth in a nice way!
I guess a simple harmonic oscillator is wiggly enough.
> >What's the physical significance of an odd-dimensional "phase space"?
> You don't need to know this now so I'm not going to tell you.
I guess I'll always wonder then. Someday, when I'm lying on my deathbed,
rotting away, I'll surely wonder "What's the physical significance of
an odd-dimensional phase space?", but it'll be too late and then won't
you be sorry.
> >> 2) QUANTUM THEORY. In classical mechanics the action is a
> >> function S: X -> R where X is the space of "histories" of our theory.
> >> Z = integral_X exp(iS(x)/hbar) dx.
> Oh, by the way, I need to tell you what the heck a "history" is!
That would be helpful.
> In classical mechanics, it's *any* function from R to the configuration
> space - not just one that satisfies the classical equations of motion.
> In classical field theory, it's *any* field, not just one that satisfies
> the classical field equations.
> In quantum mechanics, all possible classical histories happen! The
> history x happens with an amplitude exp(iS(x)/hbar).
> >> ... so now suppose you have an anharmonic oscillator
> >> [V(x) = x^2 + cx^4, K(p) = p^2/2m]
> >I'm not sure what X is here. The space of all trajectories in R^2 x
> >R that solve the Hamiltonian equations of motion for the anharmonic
> >oscillator? That's what the "history" of a particle means to me.
> No, in quantum mechanics "history" means something else - see my
> remarks above. According to that definition what's the space of
> histories in this particular problem?
Um.. the space of all (classical) trajectories (maps from R to R^2),
regardless of whether or not they solve the equations of motion for the
anharmonic oscillator?
> And what's the action S as a function on this space X of histories?
Is it the _classical_ action for that path? Given an R^2-valued map
x(t) |-> (q,p) in X,
S(x) = integral K(p) - V(q) dt = integral [p^2/2m - x^2 - cx^4] dt ?
> >> Exercise - Draw n dots and count the number of ways of drawing edges
> >> between them to connect them up in pairs. (These are Feynman diagrams
> >> with only 1-valent vertices.) What do you get?
> >I'm not sure which graphs you're allowing. How many points can a
> >given point be paired with? Greater than zero? One and only one?
> One and only one. That's why I used the word "pairs" - each point
> has a unique mate (which is not itself).
Well, that's only going to be possible for even n. I'm going to go
ahead and assume that such pairings as:
*--*      *  *
     and  |  |
*--*      *  *
are distinct. In that case, I get (n-1)*(n-3)*...*1.
The best book I know about these questions is
M. Veltman, Diagrammatica
where the Feynman rules are derived carefully from quantum field theory.
The only pity is the somewhat unusual convention concerning the
Minkowski product for spacetime used in this book.
--
Hendrik van Hees Phone: ++49 06159 71-2755
c/o GSI-Darmstadt SB 3.162 Fax: ++49 06159 71-2990
Planckstr. 1 mailto:h.va...@gsi.de
D-64291 Darmstadt http://theory.gsi.de/~vanhees/vanhees.html
among the set of all pairing-offs of n+m things, we have the "good"
pairing-offs (in which none of the first n things is paired with any
other of them) and the "bad" ones (in which at least one of the pairs
consists of two of the first n things).
we can think of the good pairing-offs as those with zero
"disqualifications", a "disqualification" being a specific unordered
pair among the first n things which is one of the pairs included in
the pairing-off.
thus we can take some specific unordered pair among the first n
things, say for example the unordered pair {2,5} in the case where n =
10, and consider the set of all those pairing-offs of the n+m things
which include the disqualification {2,5}.
the number of good pairing-offs of the n+m things =
the number of pairing-offs with zero disqualifications =
the total number of pairing-offs
-
the sum over the set of all disqualifications of the number of
pairing-offs suffering from that disqualification
+
the sum over the set of all unordered pairs of disqualifications of
the number of pairing-offs suffering from both of those
disqualifications
-
the sum over the set of all unordered triples of disqualifications of
the number of pairing-offs suffering from all three of those
disqualifications
+
.
.
.
the combinatorial technique being used here must i think be some
version of what combinatorialists call "the inclusion-exclusion
method". in this method we first over-count the things that we're
interested in, getting too large a total because we included some
things that should have been disqualified; we then (over-)compensate
for the over-counting by subtracting off all the things that should
have been disqualified, but then we've under-counted, getting too
small a total because some things got disqualified twice; we then
compensate for that by adding a term dealing with the
twice-disqualified; which turns out to need further correction if some
things got disqualified three times; and so on forever. forever can
be achieved in a finite number of stages, however, if we can ascertain
that no things get disqualified more than a certain finite number of
times.
the total effect of all this iterated over- and under-counting is that:
things with 0 disqualifications get counted a total of 1 time;
things with 1 disqualification get counted a total of 1-1=0 times;
things with 2 disqualifications get counted a total of 1-2+1=0 times;
things with 3 disqualifications get counted a total of 1-3+3-1=0 times;
things with 4 disqualifications get counted a total of 1-4+6-4+1=0 times;
...
and so forth; thus we compute the integral of the characteristic
function of the subset of good things, which is just the number of
good things.
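a tiny sketch of this inclusion-exclusion bookkeeping in python, checked
against a direct count of the good pairing-offs (illustrative only):

    from itertools import combinations

    def pairings(things, forbidden=frozenset()):
        """count pairing-offs, never pairing two members of 'forbidden' together."""
        things = list(things)
        if not things:
            return 1
        first, rest = things[0], things[1:]
        return sum(pairings([t for t in rest if t != mate], forbidden)
                   for mate in rest
                   if not (first in forbidden and mate in forbidden))

    def inclusion_exclusion(n, m):
        disqualifications = list(combinations(range(n), 2))
        total = 0
        for r in range(len(disqualifications) + 1):
            for chosen in combinations(disqualifications, r):
                members = [x for pair in chosen for x in pair]
                if len(members) == len(set(members)):   # disjoint disqualifications
                    total += (-1)**r * pairings(range(n + m - 2*r))
        return total

    for n in range(5):
        for m in range(6):
            assert pairings(range(n + m), frozenset(range(n))) == inclusion_exclusion(n, m)
    print("inclusion-exclusion agrees with the direct count")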
thus for example: the 3rd wick power :x^3: of x is x^3 - 3*x, as
follows: n=3, and there are 3 possible disqualifying pairs a
pairing-off of {1,2,3}+m can include: {1,2}, {1,3}, and {2,3}. no
pairing-off can suffer from more than one of the 3 possible
disqualifications, so the inclusion-exclusion series halts after the
single-disqualification term.
the 4th wick power :x^4: of x is x^4 - 6*x^2 + 3. here n=4; the 6
possible disqualifying pairs are {1,2}, {1,3}, {1,4}, {2,3}, {2,4},
{3,4}; and the 3 possible unordered pairs of disqualifying pairs are
{{1,2},{3,4}}, {{1,3},{2,4}}, and {{1,4},{2,3}}. no pairing-off can
suffer from any larger set of disqualifications, so the
inclusion-exclusion series halts after the 2-disqualification term.
is this more or less correct so far?
>> Ain't it cool? The wave equation, this incredibly important
>> equation that describes massless scalar fields, reduces to the equation
>> for a free particle when we make spacetime 1-dimensional!!!
>Is there some really deep significance to this?
I'd say it's more of a "ain't it cool" kind of thing than a
"dude, this is deep" kind of thing. But it's related to
some pretty important stuff....
Think about Hamiltonians that are quadratic polynomials in
p and q. When you do that exercise I gave you, you'll
see there's a really nice way to quantize any quadratic
polynomial in p and q - a way that has very nice properties.
As soon as you go to higher-degree polynomials, it doesn't
work so well.
The *positive* quadratic polynomials in p and q form a cone
in the vector space of quadratic polynomials. A typical
inhabitant of the interior of this cone is the harmonic
oscillator Hamiltonian:
(p^2 + aq^2)/2
where a > 0. As a -> 0 we approach the boundary of this
cone. When a = 0 we're right at the boundary and we get the
free particle Hamiltonian:
p^2/2
When a becomes negative we leave the cone.
All quadratic polynomials in the interior of the positive
cone work basically the same way. They are all very nice!
When we quantize them we get operators with a discrete
spectrum of positive eigenvalues. These operators make
nice Hamiltonians. You can hop from one eigenstate to
another using suitable "creation" and "annihilation"
operators. And so on. It's a physicist's dream: an
exactly solvable system with lots of nice math associated
to it.
As we approach the boundary of the positive cone, the
eigenvalues of our Hamiltonian crowd closer and closer
together, and when we reach the boundary they have
merged to form a *continuous* spectrum. For example,
the quantized free particle Hamiltonian has a continuous
spectrum. Operators with continuous spectrum are a bit
more of a pain to deal with. So this is not quite as
nice....
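You can watch this happen numerically. Here's a rough sketch (truncated
matrices in a fixed oscillator basis, hbar = 1, sizes chosen arbitrarily): as
a shrinks, the low eigenvalues of (p^2 + aq^2)/2 crowd together like sqrt(a).

    import numpy as np

    N = 400                                  # basis truncation (generous)
    b = np.zeros((N, N))
    for k in range(1, N):
        b[k-1, k] = np.sqrt(k)               # reference lowering operator
    q = (b + b.T) / np.sqrt(2)
    p = 1j * (b.T - b) / np.sqrt(2)

    for a in (1.0, 0.25, 0.04, 0.01):
        H = (p @ p + a * q @ q) / 2
        lowest = np.linalg.eigvalsh(H)[:5]
        print(f"a = {a:5.2f}   gaps between lowest levels:", np.round(np.diff(lowest), 4))
    # the gaps come out close to sqrt(a): 1.0, 0.5, 0.2, 0.1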
When we leave the positive cone, things get much worse.
Have you ever thought about what happens when you do
quantum mechanics with the Hamiltonian
(p^2 + aq^2)/2
with a < 0? It's nasty, man. I don't even wanna talk
about it!
All this is true not just for polynomials in *one* momentum
variable p and *one* position variable q, but also for
polynomials in *lots* of p's and q's. Something similar,
but a bit more complicated, also happens for polynomials
in *infinitely* many p's and q's. And these are what we
use as Hamiltonians for bosonic free field theories!
So just as the quantized harmonic oscillator Hamiltonian
is very nice, so is the Hamiltonian for the quantized
Klein-Gordon equation. All the same math applies! And
just as the free particle can be thought of as a limit
of the harmonic oscillator in which the spring gets
very weak, so can the wave equation be thought of
as a limit of the Klein-Gordon equation in which the
mass goes to zero.
But of course this makes even *more* sense when you
realize that the free particle really *is* the wave
equation in 0+1 dimensional spacetime, and the harmonic
oscillator really *is* the Klein-Gordon equation in 0+1
dimensions:
>>>> Exercise - Take the Klein-Gordon equation in n+1 dimensions and restrict
>>>> it to the case n = 0. What classical mechanics problem does it become?
>>>That's d^2 f/dt^2 - del^2 f + m^2 f = 0.
>>>
>>>In 0-d space, it becomes d^2 f/dt^2 + m^2 f = 0.
>>>
>>>That's the equation of motion for a particle in a linear potential.
>> Eh? Are you sure?
>Oops. Linear FORCE. Quadratic potential. Good old Hooke's law.
Right! The harmonic oscillator!!!
>>>What's the physical significance of an odd-dimensional "phase space"?
>> You don't need to know this now so I'm not going to tell you.
>I guess I'll always wonder then. Someday, when I'm lying on my deathbed,
>rotting away, I'll surely wonder "What's the physical significance of
>an odd-dimensional phase space?", but it'll be too late and then won't
>you be sorry.
What a morbid fantasy! Well, if this ever happens, you can always
give me a call....
"Hi, John, this is Nathan?"
"Nathan who?"
"Nathan Urban? Remember me? Once I asked you what was the physical
significance of an odd-dimensional phase space and you - sniff - never
told me. Now I'm - sniff - rotting away on my death bed and I'm still
wondering about this. I only have 10 minutes more to live at most.
Won't you take pity on a dying man and tell me?"
"Why don't you just use DejaNews and reread those threads on Poisson
groups and quantization procedures? I distinctly remember mentioning
TWO examples of odd-dimensional phase spaces: the phase space for a
classical spinning particle, namely R^3, and the group SU(2) regarded
as a Poisson group! There are lots more examples of odd-dimensional
Poisson manifolds.... and then, under a different interpretation of
the phrase `odd-dimensional phase space', there are also contact
manifolds! If you want to know about these, you should probably ask
Jim Dolan, since he really likes contact geometry. You can think of
contact manifolds as being like ordinary even-dimensional phase spaces
but with an extra dimension thrown in to keep track of the classical
analog of the quantum-mechanical `phase'. Thus in a sense they
deserve the name `phase space' even more than the ordinary even-
dimensional phase spaces you know and love - "
"grchglllch..."
"Nathan? Nathan? Are you there? I was just getting started!"
Hmm. You should probably learn this stuff before you have 10
minutes left to live, but not now.
Okay, more exercises:
>> No, in quantum mechanics "history" means something else - see my
>> remarks above. According to that definition what's the space of
>> histories in this particular problem?
>
>Um.. the space of all (classical) trajectories (maps from R to R^2),
>regardless of whether or not they solve the equations of motion for the
>anharmonic oscillator?
Hmm. Here was my definition of a history:
>> In classical mechanics, it's *any* function from R to the configuration
>> space - not just one that satisfies the classical equations of motion.
Can you see what's wrong about your answer given this definition?
>> And what's the action S as a function on this space X of histories?
>
>Is it the _classical_ action for that path? Given an R^2-valued map
>x(t) |-> (q,p) in X,
>
> S(x) = integral K(p) - V(q) dt = integral [p^2/2m - x^2 - cx^4] dt ?
Close - can you see how to fix this to correct for the error you
made above? Extra hint: the Lagrangian is not a function of q and
p. It's a function of q and qdot.
>> >> Exercise - Draw n dots and count the number of ways of drawing edges
>> >> between them to connect them up in pairs. (These are Feynman diagrams
>> >> with only 1-valent vertices.) What do you get?
>I'm going to go ahead and assume that such pairings as:
>
> *--*      *  *
>      and  |  |
> *--*      *  *
>
>are distinct. In that case, I get (n-1)*(n-3)*...*1.
Right - and see, up to a few little normalization factors this
is the same as that integral you did. More precisely - assuming
I'm not making a little slip somewhere - we have
1/sqrt(2pi) integral x^n exp(-x^2/2) dx = (n-1)*(n-3)*...*1
when n is even, and 0 when n is odd. So there is a connection
between this sort of integral and the combinatorics of Feynman
diagrams. The interesting part is to understand this connection
more deeply! Daryl McCullough, James Dolan and Michael Weiss
have been posting a bit about that. When you really understand it,
you'll understand why partition functions in quantum field theory
and statistical mechanics can be computed using Feynman diagrams,
and the relation of these diagrams to annihilation and creation
operators! Understanding the math at a formal level is not enough:
you want to really understand at a gut level why the integrals and
the combinatorics are different ways of viewing the same physics!
I should post more about this.....
Heh. Jim Dolan and I were just drawing a bunch of pictures like this
the other day! It's odd that I've never seen them in textbooks; they
seem so natural.
>Now, how much does each term contribute to the final result?
Jim and I came up with both methods you described! Here is
a nice way to describe your second, more combinatorial method.
Given any "mountainscape" of the sort you described:
/\
/ \
/ \
think of it as a graph of the number of employees at a small company,
starting and ending with zero. At each step either one employee is
hired or one is fired. How many ways can this happen? But beware:
there is a small subtlety: the employees are identical bosons! They
have no distinguishing features other than when they were hired! So
for example, in the above situation, first "employee number 1" is hired,
then "employee number 2", and then "employee number 3". That's all
we can say about them! But when the first employee is fired it could
be either "number 1", "number 2", or "number 3", so there are 3 ways
this can happen. When the second is fired there are two ways that can
happen. When the third is fired (or commits suicide) there is only
one way that can happen, since there is only one remaining employee
at the time. So there are 6 possibilities.
Of course this is just a feeble attempt to dramatize the combinatorics
of annihilation and creation operators for bosonic quantum systems. It
may be more misleading than enlightening. You said it more precisely:
>Second method, more "combinatorial". The value of <0|aaa*aa*a*|0> is the
>number of _valid pairings_ for the term aaa*aa*a*. What's a valid pairing?
>Well, we pair up a's with a*'s, as before; but this time, the only
>condition we impose is that each 'a' must be matched up with an 'a*' to its
>RIGHT.
In my dramatization, this rule corresponds to the fact that no employee
can be fired before being hired.
>Pretty mathematics! Now can we have some pretty physics to go with it?
>Hmmm, Mr. Wizard?
It's a long story but let me just say this for now. We are computing
the moments of a Gaussian in a very combinatorial way. We can think of
this Gaussian as a "free bosonic quantum field in 0-dimensional spacetime".
Spacetime is just a point, and the quantum field takes a random value at
this point given by a Gaussian distribution. We study the moments of the
Gaussian by "creating particles", "letting them evolve", and then
"annihilating them", getting a vacuum-vacuum transition amplitude. Of
course since spacetime is 0-dimensional the business about "letting them
evolve" is completely trivial. It gets more interesting in higher
dimensions. We can also have more fun, quite cheaply, by considering
*interacting* bosonic quantum fields in 0-dimensional spacetime. Now
instead of a Gaussian, our probability distribution is something like
exp(-x^2/2 - cP(x))
for some polynomial P that's bounded below. We call x^2/2 the
"free Lagrangian", P(x) the "interaction Lagrangian", and c the
"coupling constant". Of course we need to normalize the above
function to get a probability distribution; the normalization
factor is called the "partition function". We can work out the
partition function and all the moments of the above distribution
perturbatively, as a power series in c, using Feynman diagrams.
We already have all the techniques to do it! --- i.e., they've
been discussed already in this thread.
But the really fun thing to think about is why determining the
moments of a probability distribution that's a perturbation
of a Gaussian can be phrased in the language of "creation and
annihilation of particles". This is where the physical insight
abides.
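To make that concrete in the crudest possible way, here is a Python sketch
for P(x) = x^4 with a small made-up coupling, comparing the honest numerical
answers with a few terms of the expansion in c (i.e. Gaussian moments
weighted by powers of the coupling constant):

    from math import exp, factorial, sqrt, pi
    from scipy.integrate import quad

    c = 0.002                                  # small illustrative coupling

    def weight(x):
        return exp(-x**2/2 - c*x**4)

    Z = quad(weight, -10, 10)[0]
    x2 = quad(lambda x: x**2 * weight(x), -10, 10)[0] / Z

    def gaussian_moment(k):        # (1/sqrt(2pi)) integral x^k exp(-x^2/2) dx
        if k % 2:
            return 0
        result = 1
        for j in range(k - 1, 1, -2):
            result *= j
        return result

    terms = 5                      # the series is only asymptotic, so stop early
    Z_pert  = sqrt(2*pi) * sum((-c)**k / factorial(k) * gaussian_moment(4*k)
                               for k in range(terms))
    x2_pert = sqrt(2*pi) * sum((-c)**k / factorial(k) * gaussian_moment(4*k + 2)
                               for k in range(terms)) / Z_pert

    print("Z:    ", round(Z, 5), "vs", round(Z_pert, 5))
    print("<x^2>:", round(x2, 5), "vs", round(x2_pert, 5))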
Nota bene: don't get mixed up! The Gaussian in one variable
is both the ground state of a field theory in 0+1-dimensional
spacetime - i.e., the harmonic oscillator - and the probability
distribution of field values for a field theory in 0-dimensional
spacetime. In the Photons Schmotons thread we've been talking
about the former, but above I'm talking about the latter. The
former is Hamiltonian field theory; the latter is Lagrangian
field theory.
(If you feel confused, good - this is also something worth
thinking about.)
>thus for example: the 3rd wick power :x^3: of x is x^3 - 3*x, as
>follows: n=3, and there are 3 possible disqualifying pairs a
>pairing-off of {1,2,3}+m can include: {1,2}, {1,3}, and {2,3}. no
>pairing-off can suffer from more than one of the 3 possible
>disqualifications, so the inclusion-exclusion series halts after the
>single-disqualification term.
>
>the 4th wick power :x^4: of x is x^4 - 6*x^2 + 3. here n=4; the 6
>possible disqualifying pairs are {1,2}, {1,3}, {1,4}, {2,3}, {2,4},
>{3,4}; and the 3 possible unordered pairs of disqualifying pairs are
>{{1,2},{3,4}}, {{1,3},{2,4}}, and {{1,4},{2,3}}. no pairing-off can
>suffer from any larger set of disqualifications, so the
>inclusion-exclusion series halts after the 2-disqualification term.
>
>is this more or less correct so far?
Let me calculate :x^3: another way to check your results. Let me
pull a formula out of my big grey wizard's hat:
exp(tx)/<0|exp(tx)|0> = :1: + t :x: + t^2 :x^2:/2! + t^3 :x^3:/3! + ...
Using this we can grind out the Wick powers to our heart's content.
Let me just go up to the third one:
exp(tx)/<0|exp(tx)|0> =
(1 + tx + (tx)^2/2! + (tx)^3/3!)/<0| (1 + tx + (tx)^2/2! + (tx)^3/3!) |0>
+ higher order terms
But we've seen
<0|x^n|0> = (n-1)!! if n even
= 0 if n odd
so
exp(tx)/<0|exp(tx)|0> =
= (1 + tx + (tx)^2/2! + (tx)^3/3!)/(1 + t^2/2 + t^4/8) + higher order terms
= (1 + tx + (tx)^2/2! + (tx)^3/3!)(1 - t^2/2 + t^4/8) + higher order terms
= 1 + tx + t^2(x^2 - 1)/2! + t^3(x^3 - 3x)/3! + higher order terms
so
:1: = 1
:x: = x
:x^2: = x^2 - 1
:x^3: = x^3 - 3x
So yeah, it looks like it's working.
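If you have a computer algebra system handy, you can let it do the grinding.
Here's a sketch in Python/sympy, feeding in only the moments <0|x^n|0> =
(n-1)!! for even n (0 for odd n) that we found above:

    import sympy as sp

    x, t = sp.symbols('x t')
    order = 7
    numerator = sum(t**n * x**n / sp.factorial(n) for n in range(order))
    denominator = sum(t**(2*k) * sp.factorial2(2*k - 1) / sp.factorial(2*k)
                      for k in range(order // 2 + 1))
    expansion = sp.series(numerator / denominator, t, 0, order).removeO().expand()

    for n in range(order):
        print(f":x^{n}: =", sp.expand(expansion.coeff(t, n) * sp.factorial(n)))
    # 1, x, x**2 - 1, x**3 - 3*x, x**4 - 6*x**2 + 3, x**5 - 10*x**3 + 15*x, ...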
(I would probably never have understood your clever method for
calculating the Wick powers without talking to you about it in
person. There is a lot of interesting combinatorics going on
here. We need to understand what it all means! Why does this
combinatorics of diagrams wind up being exactly the stuff needed
to understand perturbative quantum field theory? The new work
by Kreimer on renormalization, Feynman diagrams and Hopf algebras
suggests that this business goes very deep. He's using Hopf
algebras for all the usual things combinatorists do - but applying
it to quantum field theory!)
(Can you see how the division by <0|exp(tx)|0> does the same thing
that the inclusion-exclusion does? I actually used the formula
1/(1 + x) = 1 - x + x^2 - x^3 + ... to calculate 1/<0|exp(tx)|0>
above. This is very reminiscent of inclusion-exclusion. The
inclusion-exclusion "correction terms" match the "correction terms"
I get by dividing by <0|exp(tx)|0>. But I don't see why yet.)
In case anyone is trying to learn about Feynman diagrams by reading
this thread, don't be upset if James Dolan's remarks seem mysterious
and complicated. I barely understand them myself! He has a way of
learning things that involves listening to what I'm trying to
explain but then going off and figuring out some other approach that
makes it seem completely new and different. As he gradually learns
more and more about the subject, I gradually feel I know less and less.
Then we both join in and try to figure out what the hell is going on.
-ba...@galaxy.ucr.edu (John Baez) says...
-
->James Dolan wrote:
-
->(1/sqrt(2pi)) integral x^n exp(-x^2/2) dx
->
->...equals the number of Feynman diagrams with n
->1-valent vertices and no external edges...
-
->For example, when n = 4 the above integral equals 3. And there
->are 3 Feynman diagrams with 4 1-valent vertices and no external
->edges:
->
-> | | |
-> | | |
->__ \__ __/ __ ___|___
-> \ / |
-> | | |
-> | | |
->
-
-This result can kind of be explained using
-harmonic oscillators and raising and lowering
-operators.
-
-integral x^n exp(-x^2/2) dx
-= 2^{(n+1)/2} integral x^n exp(-x^2) dx
-= 2^{(n+1)/2} integral exp(-x^2/2) x^n exp(-x^2/2) dx
-
-If we recognize that exp(-x^2/2) is just the ground state of
-the harmonic oscillator with H = 1/2 (x^2 - (d/dx)^2), we can
-write this integral as:
-
- C 2^{n/2} <0|x^n|0>
-
-(For some number C independent of n).
-Now, letting a = (x + d/dx) and a* = (x - d/dx), we convert
-this to:
-
- C 2^{-n/2} <0|(a + a*)^n |0>
-
...
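here's a quick brute-force check in python of the claim that the gaussian
moment counts these diagrams (a sketch of my own; the names moment and
pairings are just made up for it).  the recursive count enumerates the
complete pairings, which is what diagrams with n 1-valent vertices and no
external edges amount to:

import sympy as sp

x = sp.symbols('x')

def moment(n):
    # (1/sqrt(2 pi)) integral x^n exp(-x^2/2) dx
    return sp.integrate(x**n * sp.exp(-x**2/2), (x, -sp.oo, sp.oo)) / sp.sqrt(2*sp.pi)

def pairings(points):
    # number of ways to pair off the given points; each complete pairing
    # is one feynman diagram with 1-valent vertices and no external edges
    if not points:
        return 1
    rest = points[1:]
    return sum(pairings(rest[:i] + rest[i+1:]) for i in range(len(rest)))

for n in range(7):
    print(n, sp.simplify(moment(n)), pairings(list(range(n))))
# both columns read 1, 0, 1, 0, 3, 0, 15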
continuing my attempts to understand what's going on here, i tried
writing the operators a, a*, and a+a* as matrixes with respect to the
basis v, a*(v), a*^2(v), ... where v is the above-mentioned ground
state of the harmonic oscillator. according to my calculations these
matrixes are as follows:
a:
0 2 0 0 0 0 0 0 . . .
0 0 4 0 0 0 0 0
0 0 0 6 0 0 0 0
0 0 0 0 8 0 0 0
0 0 0 0 0 10 0 0
0 0 0 0 0 0 12 0
0 0 0 0 0 0 0 14
0 0 0 0 0 0 0 0
. .
. .
. .
a*:
0 0 0 0 0 0 0 0 . . .
1 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0
0 0 1 0 0 0 0 0
0 0 0 1 0 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 0 1 0 0
0 0 0 0 0 0 1 0
. .
. .
. .
a+a*:
0 2 0 0 0 0 0 0 . . .
1 0 4 0 0 0 0 0
0 1 0 6 0 0 0 0
0 0 1 0 8 0 0 0
0 0 0 1 0 10 0 0
0 0 0 0 1 0 12 0
0 0 0 0 0 1 0 14
0 0 0 0 0 0 1 0
. .
. .
. .
i then noticed that the powers of the matrix for a+a* come
tantalizingly close to having an interesting combinatorial
interpretation. in general, given a square matrix m of natural
numbers, we can interpret it as describing a "directed multi-graph"
g_m by taking the (i,j)^th entry of m to be the number of arrows in
g_m from the i^th vertex to the j^th vertex; g_[m^n] is then the
directed multi-graph with the same vertex set as g_m, but with the
arrows in g_[m^n] corresponding to the arrow-paths of length n in g_m.
thus g_[a+a*] looks as follows:
-> -> -> -> -> -> -> ->
0 1 2 3 4 5 6 7 . . .
<- <- <- <- <- <- <- <-
<- <- <- <- <- <- <- <-
<- <- <- <- <- <- <-
<- <- <- <- <- <- <-
<- <- <- <- <- <-
<- <- <- <- <- <-
<- <- <- <- <-
<- <- <- <- <-
<- <- <- <-
<- <- <- <-
<- <- <-
<- <- <-
<- <-
<- <-
<-
<-
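here's a quick numerical check of that arrow-path reading, truncating
everything to a finite matrix (my own sketch, using the 2, 4, 6, ...
entries from the matrixes above).  the (0,0) entry of the n-th power of
the a+a* matrix counts the length-n round trips at vertex 0, where between
vertexes k and k+1 there is a single arrow in one direction and 2(k+1)
parallel arrows in the other; that makes the total 2^(n/2) times the
number of pairings (n-1)!!:

import numpy as np
from math import prod

N = 12                        # truncation size; plenty for powers up to 8
A = np.zeros((N, N), dtype=int)
for k in range(N - 1):
    A[k, k+1] = 2 * (k + 1)   # the 2, 4, 6, ... arrows one way
    A[k+1, k] = 1             # the single arrows the other way

def double_factorial(m):
    return prod(range(m, 0, -2)) if m > 0 else 1

for n in range(0, 9, 2):
    entry = np.linalg.matrix_power(A, n)[0, 0]
    print(n, entry, 2**(n//2) * double_factorial(n - 1))
# the last two columns agree: 1, 2, 12, 120, 1680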
if you then think about what arrow-paths in g_[a+a*] are like, they
might remind you of something john baez said in response to one of
michael weiss's posts in this thread:
-think of it as a graph of the number of employees at a small company,
-starting and ending with zero. At each step either one employee is
-hired or one is fired. How many ways can this happen? But beware:
-there is a small subtlety: the employees are identical bosons! They
-have no distinguishing features other than when they were hired! So
-for example, in the above situation, first "employee number 1" is
-hired, then "employee number 2", and then "employee number 3". That's
-all we can say about them! But when the first employee is fired it
-could be either "number 1", "number 2", or "number 3", so there are 3
-ways this can happen. When the second is fired there are two ways
-that can happen. When the third is fired (or commits suicide) there
-is only one way that can happen, since there is only one remaining
-employee at the time. So there are 6 possibilities.
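the boson bookkeeping in that story is also easy to simulate.  a tiny
python sketch of my own (the name total_weight is just made up): walk
through all hire/fire sequences that start and end with zero employees and
never go negative, weight each firing by the number of employees present
at that moment, and add up the weights.  the single
hire-hire-hire-fire-fire-fire history carries weight 3*2*1 = 6, and the
totals come out to the double factorials (n-1)!! again:

from itertools import product

def total_weight(n):
    # sum over all hire(+1)/fire(-1) sequences of length n that start and
    # end with zero employees and never go negative; each firing is
    # weighted by the current headcount (you hire strangers, but it
    # matters whom you fire)
    total = 0
    for steps in product((+1, -1), repeat=n):
        count, weight, ok = 0, 1, True
        for s in steps:
            if s == -1:
                if count == 0:
                    ok = False
                    break
                weight *= count
            count += s
        if ok and count == 0:
            total += weight
    return total

print([total_weight(n) for n in range(7)])
# [1, 0, 1, 0, 3, 0, 15]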
evidently there's a few slight differences though between our
situation and whatever john was talking about. first, we seem to have
gotten beyond the restriction that the number of employees at the
start and at the finish must be equal to zero; now we're interested in
arrow-paths starting and finishing at arbitrary vertexes. second,
although we're still following the general principle that "it doesn't
matter who you hire, but firing is from the heart" (because you hire
strangers whereas you fire acquaintances), for some strange reason our
situation seems to require distinguishing two distinct methods of
termination of an employee's tenure. perhaps there is some
idiosyncrasy of normalization that i omitted or introduced somewhere
that controls the occurrence of this complication. anyway, john's
mention of suicide doesn't seem to have been an allusion to this.
>continuing my attempts to understand what's going on here, i tried
>writing the operators a, a*, and a+a* as matrixes with respect to the
>basis v, a*(v), a*^2(v), ... where v is the above-mentioned ground
>state of the harmonic oscillator.
Ah, so you're following in Heisenberg's footsteps! This is one of
the first things he did when inventing "matrix mechanics". Good!
Hmm, they look roughly right - but a bit funny. I know the rows of
your matrices are everyone else's columns and vice versa, so I won't
worry about *that*. But I'm used to something a bit more like this:
a:
0 1 0 0 0 0 0 0 . . .
0 0 sqrt(2) 0 0 0 0 0
0 0 0 sqrt(3) 0 0 0 0
0 0 0 0 sqrt(4) 0 0 0
. .
. .
. .
a* :
0 0 0 0 0 0 0 0 . . .
1 0 0 0 0 0 0 0
0 sqrt(2) 0 0 0 0 0 0
0 0 sqrt(3) 0 0 0 0 0
0 0 0 sqrt(4) 0 0 0 0
. .
. .
. .
Note that my matrices are transposes of each other while yours
are not. So I bet you're not writing your matrices using an
*orthonormal* basis. Yeah, that's right: your basis a*^n(v)
is orthogonal but not normal. You'd have to divide a*^n(v) by
something like sqrt(2^n n!) to normalize it.
Anyway, why don't you compute aa* with your conventions. In the
usual conventions this is the "number operator" and it has
eigenvalues 0,1,2,.... I don't think you'll get those numbers.
There seems to be an annoying factor of 2 lurking around.
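For concreteness, here's what that product looks like with truncated
matrices - a quick numpy check of my own.  With the 2, 4, 6, ... matrix
for a and the 1-below-the-diagonal matrix for a*, one order of
multiplication gives a diagonal 2, 4, 6, ... and the other gives
0, 2, 4, 6, ... - twice the numbers you'd want, either way:

import numpy as np

N = 8
a = np.zeros((N, N), dtype=int)
astar = np.zeros((N, N), dtype=int)
for k in range(N - 1):
    a[k, k+1] = 2 * (k + 1)   # the matrix for a given above
    astar[k+1, k] = 1         # the matrix for a* given above

print(np.diag(a @ astar))     # [ 2  4  6  8 10 12 14  0]  (last 0 is truncation junk)
print(np.diag(astar @ a))     # [ 0  2  4  6  8 10 12 14]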
The advantage of your conventions - or conventions like yours -
is that they avoid nasty square roots and thus allow more direct
combinatorial interpretations of what's going on. This is great;
it fits in with our overall nefarious plan to give combinatorial
interpretations to vast hunks of math. But I think you can come
up with conventions where aa* has eigenvalues 0,1,2,..., without
losing this other advantage. If you can, you probably should.
>evidently there's a few slight differences though between our
>situation and whatever john was talking about. first, we seem to have
>gotten beyond the restriction that the number of employees at the
>start and at the finish must be equal to zero; now we're interested in
>arrow-paths starting and finishing at arbitrary vertexes.
Well, that's good.
>second,
>although we're still following the general principle that "it doesn't
>matter who you hire, but firing is from the heart" (because you hire
>strangers whereas you fire acquaintances), for some strange reason our
>situation seems to require distinguishing two distinct methods of
>termination of an employee's tenure. perhaps there is some
>idiosyncrasy of normalization that i omitted or introduced somewhere
>that controls the occurrence of this complication.
Yeah, I think you've got a nasty factor of two somewhere that you
don't really want.
By the way, I find the principle "it doesn't matter who you hire,
but it matters who you fire" appealing but also somewhat worrisome,
just because I've never *heard* anyone talk about Bose-Einstein
statistics this way. If the principle is right - if you should
think of bosons as having no "distinct identity" when created but
having one when annihilated - you'd think someone would have said
so by now.
And also by the way: who's your favorite, Schrodinger or Heisenberg?
I know you like to cast certain mathematicians as "good guys" and
others as "bad guys". Do you have an opinion here? Of course
Heisenberg helped the Nazis in WWII while Schrodinger ran away and
dallied with his damsels, but I'm mainly talking about the difference
in scientific styles here, not personalities: they seem to represent
opposite poles, scientifically.
James Dolan writes:
> since we seem to be getting into a situation where we'd like to be in
> a coordinate system where the creation operator corresponds to
> "multiplication by x" and the annihilation operator corresponds to
> "differentiation with respect to x" (do you agree with me that the
> goal of categorification seems to be pushing us in that direction?),
> in contrast to our original coordinate system in which the position
> operator was "multiplication by x" and some scalar multiple of the
> momentum operator was "differentiation with respect to x", i'm led to
> ask the following question:
>
> does the desire to make a change of coordinate systems along the lines
> described above somehow motivate the introduction of the concept of
> "wick powers"? or something like that?? i don't see the calculations
> clearly enough to tell whether this makes any sense yet.
I think this is the way it makes sense:
The Fock space on C, i.e. the symmetric tensor algebra over C, is
isomorphic to the algebra of polynomials in one variable X. The
monomial X^n is the "n-particle state". Annihilation and creation
operators act as differentiation and multiplication by X, respectively.
It's also nice to describe these states as "wavefunctions" psi
a la Schrodinger. But the n-particle state does not correspond to
psi(x) = x^n or even to psi(x) = x^n exp(-x^2/2). It corresponds to
psi(x) = :x^n: exp(-x^2/2).
So when we go from the Fock space description to the Schrodinger
description we use the map
X^n -> :x^n:
I think this is the "change of coordinates" you want.
Note: there are some annoying constant factors which I'm ignoring
here. At some point it's good to get them straightened out (in any
of various different ways), but not now.
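Here's a concrete check of this dictionary in Python/sympy - my own
sketch, ignoring the same annoying constants, with the name wick made up
for it.  On the polynomial side annihilation is d/dX and creation is
multiplication by X; under the map X^n -> :x^n: the annihilation operator
still acts as d/dx, while multiplication by X gets carried to the
combination x - d/dx, because the Wick powers obey
d/dx :x^n: = n :x^(n-1): and :x^(n+1): = x :x^n: - n :x^(n-1):.  A quick
check:

import sympy as sp

x, t = sp.symbols('x t')

def wick(n):
    # :x^n: read off from the generating function exp(tx - t^2/2)
    expansion = sp.expand(sp.series(sp.exp(t*x - t**2/2), t, 0, n + 1).removeO())
    return sp.expand(sp.factorial(n) * expansion.coeff(t, n))

for n in range(1, 6):
    lowering = sp.expand(sp.diff(wick(n), x) - n*wick(n - 1))
    raising = sp.expand(x*wick(n) - sp.diff(wick(n), x) - wick(n + 1))
    print(n, lowering == 0, raising == 0)   # both True for every n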
>Could you elaborate a bit? Why are Heisenberg and Schroedinger
>scientific opposites?
I don't know if they *really* are, but they often seem to be.
Heisenberg developed an algebraic approach to quantum mechanics
which emphasized operators with discrete spectrum and wrote general
operators as matrices representing transitions between these discretely
labelled states. Schroedinger developed a geometrical approach to
quantum mechanics which emphasized operators with continuous
spectrum and wrote general operators as differential operators on
the space of these continuously labelled states. Heisenberg wrote
down an equation for the time evolution of operators - the "Heisenberg
picture". Schroedinger wrote down an equation for the time evolution
of states - the "Schroedinger picture". It just seems like whenever
there was a choice to make, they made the opposite choice - even
though the two choices were secretly isomorphic.
After quantum mechanics was born, Heisenberg drifted into nuclear
and particle physics. Schroedinger drifted into general relativity
and biology.