
Why is renormalization necessary?


Ralph E. Frost

Apr 2, 2002, 9:05:55 AM
Why is renormalization necessary?

I understand (okay, maybe I don't understand but I remember reading) that
infinities arise and they have to be sealed or healed, but I am wondering
WHY they arise in the first place?

Is it just a symbolic difficulty or does it arise more out of folks beginning
with the wrong POV and then trying to "think backwards" from a flawed model?

Also, "once" any of the improved, more unified
models/representations/approachs is groked and enough people have adapted to
the new view, is it true that renormalization will no longer be necessary
or a valid part of the model? I seem to remember reading something like
that in a mechanics handbook

--
Best regards,
Ralph Frost
http://www.refrost.com
Seek a thought worthy of speech.

"...Love one another..." John 15:12

[Moderator's note: The subject of renormalization has been rather
extensively discussed in this newsgroup in the past. A trip to the
archives might be called for. -- KS]

Jacques Distler

Apr 3, 2002, 1:02:32 PM
In article <uai6d17...@corp.supernews.com>, Ralph E. Frost
<ref...@dcwi.com> wrote:

>Why is renormalization necessary?
>
>I understand (okay, maybe I don't understand but I remember reading) that
>infinities arise and they have to be sealed or healed, but I am wondering
>WHY they arise in the first place?
>
>Is it just a symbolic difficulty or does it arise more out of folks beginning
>with the wrong POV and then trying to "think backwards" from a flawed model?
>

>[snip]


>
>[Moderator's note: The subject of renormalization has been rather
>extensively discussed in this newsgroup in the past. A trip to the
>archives might be called for. -- KS]

Well, if the moderators were to impose an 'originality' requirement on
posts to this newsgroup, that would certainly change the character of
the group.

Anyway, to answer the question (no, there is *nothing* original in this
response):

"Curing infinities" is the wrong way to think about renormalization.

The right way is to realize that quantum field theories come equipped
with a cutoff, not to cure some sickness, but as a way of codifying our
ignorance about the details of physics at short distances.

Now, it might be that Ralph is a little less ignorant about physics at
short distances than I am. So his short-distance cutoff might be
different than mine. Nonetheless, we should calculate the same
long-distance physics.

In order to ensure that changing the short-distance cutoff a little bit
does not change the calculated results for long-distance physics, we
need to make the coupling constants of the theory cutoff-dependent in a
very particular way. This is called Renormalization. The equation which
expresses how the coupling constants change under an infinitesimal
change of cutoff is called the Renormalization Group Equation.

In comparing Ralph's calculations with mine, it proves convenient to
reexpress our answers, not in terms of the cutoff-dependent "bare"
couplings, but in terms of some renormalized or "physical" parameters.

E.g., in QED, we might wish to trade the gauge coupling constant and
mass parameter in the Lagrangian for some physical definition of charge
and the physical electron mass.
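
[As a concrete one-loop illustration -- standard textbook QED, not part
of the original post: demanding that the physical coupling alpha(mu),
measured at a long-distance scale mu, be independent of the cutoff Lambda
forces the bare coupling to depend on the cutoff as

    1/alpha_bare(Lambda) = 1/alpha(mu) - (2/(3 pi)) ln(Lambda/mu),

so that mu d(alpha)/d(mu) = (2/(3 pi)) alpha^2 + O(alpha^3), independent
of Lambda. This last equation is the Renormalization Group Equation
referred to above.]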

That procedure was worked out in the '50s, long before people
understood *why* renormalization was an integral part of quantum field
theory.

JD

--
PGP public key: http://golem.ph.utexas.edu/~distler/distler.asc

Moataz Emam

Apr 3, 2002, 1:08:10 PM
"Ralph E. Frost" wrote:
> infinities arise and they have to be sealed or healed, but I am wondering
> WHY they arise in the first place?

The problem has its roots in classical physics. Consider Coulomb's law:
if you try to calculate the total potential energy of a charge at rest,
the integral diverges, BECAUSE you are including the point r=0, which
means you are assuming zero size for the particle. That is also the source
of the infinities of quantum field theory: assuming point particles makes
the integrals diverge, roughly speaking. Once you assume that the particle
is no longer a point, as in string theory, these divergences go away. So
renormalization assumes that these infinities are canceled by other
infinities, as a technique to avoid asking the question "what does an
electron really look like?"
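
[For concreteness -- the standard textbook computation, not part of the
original post: the energy stored in the Coulomb field outside a radius a
is

    U(a) = integral_a^infinity (epsilon_0 E^2 / 2) 4 pi r^2 dr
         = e^2 / (8 pi epsilon_0 a),

which blows up as a -> 0. The divergence comes entirely from the lower
limit of the integral, i.e., from letting the charge be a true point.]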

--
Moataz H. Emam

J. J. Lodder

Apr 3, 2002, 1:13:08 PM
Ralph E. Frost <ref...@dcwi.com> wrote:

> Why is renormalization necessary?
>
> I understand (okay, maybe I don't understand but I remember reading) that
> infinities arise and they have to be sealed or healed, but I am wondering
> WHY they arise in the first place?

Because we want to have theories describing point particles.

> Is it just a symbolic difficulty or does it arise more out of folks beginning
> with the wrong POV and then trying to "think backwards" from a flawed model?

Point particles may or may not be a flawed model.
We just don't know yet.
However, even if electrons and the like turn out not to be point
particles after all, we would still want to have a good theory of point
particles as an idealization.
Point particles are certainly a lot simpler than extended particles,
which need structure functions of some kind.

> Also, "once" any of the improved, more unified
> models/representations/approachs is groked and enough people have adapted to
> the new view, is it true that renormalization will no longer be necessary
> or a valid part of the model? I seem to remember reading something like
> that in a mechanics handbook

Truth will be seen only after there is a working theory that actually
predicts something.

> [Moderator's note: The subject of renormalization has been rather
> extensively discussed in this newsgroup in the past. A trip to the
> archives might be called for. -- KS]

That is about the technicalities.
As I understand Ralph, he wants to discuss the why.

Jan

Ed Gibbs

Apr 3, 2002, 10:42:55 PM
"Ralph E. Frost" <ref...@dcwi.com> wrote in message news:<uai6d17...@corp.supernews.com>...
> Why is renormalization necessary?

snip....



> Also, "once" any of the improved, more unified
> models/representations/approachs is groked and enough people have adapted to
> the new view, is it true that renormalization will no longer be necessary
> or a valid part of the model? I seem to remember reading something like
> that in a mechanics handbook

Ralph,

I like to think that infinities point to places where a given theory
is incomplete.

New theories may be more complete, or may be incomplete in other
places. But the incompleteness theorem makes a pretty good argument
that a single, truly complete theory cannot exist.

It seems that we will always need to describe the world through a
collection of theories that all individually break down at various
points, but in aggregate do a pretty good job. Renormalization is
just shifting from one of those theories to another to avoid the
places where madness lurks.

Ed Gibbs

Saco

Apr 4, 2002, 1:21:12 PM
> Why is renormalization necessary?

Because we are expanding a function around a point where mathematical
expansion in power series is not possible. This has nothing to do with
dealing with point particles.


Kevin A. Scaldeferri

Apr 4, 2002, 1:22:18 PM
In article <uai6d17...@corp.supernews.com>,

Ralph E. Frost <ref...@dcwi.com> wrote:
>Why is renormalization necessary?

I don't really have much interest in saying much more on this topic
than I have in the past (archives, hint, hint), but after a couple of
the other responses, I want to address a slightly different question
of why renormalization isn't necessary.

Specifically, renormalization is not necessary because we have point
particles.

There are finite theories with point particles. This is a fairly
basic fact that everyone ought to learn in their QFT class.

Sophisticated readers will note that this does not mean that
renormalization is not necessary in those theories, and I want to go
in two directions with that.

First, there are also theories of point particles with vanishing beta
function. Those theories really don't require renormalization. (Note
that renormalization is not about getting rid of infinities -- that's
what regularization is for. Renormalization is about what is left
after you get rid of the infinities, i.e., the beta function.)

Second, just because a theory has extended objects instead of point
particles doesn't guarantee that it has vanishing beta function.
These theories require renormalization to make effective calculations
just like theories of point particles.


I said I wouldn't, but it seems natural at this point to actually
answer the original question. So, here goes:

Renormalization is necessary because we want to use perturbation
theory to make calculations at energy scales that aren't the natural
energy scale of the theory.
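
[A rough numerical illustration of that last point -- one-loop QCD
running with standard textbook inputs, ignoring quark thresholds; an
editorial sketch, not part of the original post:

    import math

    def alpha_s(mu, alpha_ref=0.118, mu_ref=91.19, nf=5):
        """One-loop running: mu d(alpha)/d(mu) = -(b0 / (2 pi)) * alpha^2."""
        b0 = 11.0 - 2.0 * nf / 3.0
        return alpha_ref / (1.0 + alpha_ref * b0 / (2.0 * math.pi)
                            * math.log(mu / mu_ref))

    for mu in (2.0, 10.0, 91.19, 1000.0):   # scales in GeV
        print("alpha_s(%7.2f GeV) = %.4f" % (mu, alpha_s(mu)))

Resumming the logarithms ln(mu/mu_ref) this way is what lets a coupling
measured at one scale be used for perturbative calculations at a very
different scale.]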


--
======================================================================
Kevin Scaldeferri Calif. Institute of Technology
The INTJ's Prayer:
Lord keep me open to others' ideas, WRONG though they may be.

A.J. Tolland

Apr 4, 2002, 6:44:35 PM
On Thu, 4 Apr 2002, Saco wrote:
> > Ralph wrote:
> > Why is renormalization necessary?
>
> Because we are expanding a function around a point where mathematical
> expansion in power series is not possible.

This is flat out wrong. Renormalization is necessary because we
are expressing the physics in terms of approximate variables, choosing to
average out "microscopic" degrees of freedom to get "macroscopic" degrees
of freedom. The definitions of microscopic and macroscopic depend, of
course, on the resolution (i.e. the energy) with which we can probe the
system. Different resolutions demand different sets of approximate
variables. For instance, if you are studying a hydrogen atom with an
energy resolution of .3 eV, you can see the gross energy levels, but you
can't see the fine corrections due to electron spin. So you need the spin
degrees of freedom at higher energy, but it's not really necessary or
appropriate to use them at lower energy.
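
[To put numbers on this -- standard hydrogen scales, not part of the
original post: the gross levels sit at E_n = -13.6 eV / n^2, so the n=1
to n=2 spacing is about 10.2 eV, while fine-structure corrections are of
order alpha^2 x 13.6 eV ~ 7 x 10^-4 eV. A 0.3 eV probe resolves the
former and is completely blind to the latter.]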
Renormalization is the process of selecting new variables from an
old set of variables. It has absolutely nothing to do with convergence of
power series. In fact, it does not solve the problems you refer to! The
power series about zero coupling we used to describe renormalized QED is
still divergent!

> [renormalization] has nothing to do with dealing with point particles.

This is true. Point particles have a lot to do with the
infinities which crop up when you assume that your approximate variables
are valid at ludicrously high resolution. But these infinities are not
why we need renormalization. We need renormalization because we chose to
describe physics in terms of these approximate variables in the first
place.
If I had three wishes, I might use the third one to expunge from
people's minds the notion that renormalization is needed as a cure for
infinities. The infinities are a result of a stoopid extrapolation.
They are not physical. The connection between renormalization and
infinities only persists because the techniques people developed for
hiding these infinities, properly reinterpreted, remain the easiest way of
actually calculating the relations between different descriptions of the
physics.
</RANT>
--A.J.


Matt McIrvin

Apr 5, 2002, 6:29:54 PM
In article <a8fsnt$ipe$1...@blinky.its.caltech.edu>,

ke...@blinky.its.caltech.edu (Kevin A. Scaldeferri) wrote:

> (Note
> that renormalization is not about getting rid of infinities -- that's
> what regularization is for. Renormalization is about what is left
> after you get rid of the infinities, i.e., the beta function.)

This is one of those cases in which the popular and elementary
treatments of the subject mostly echo a point of view that is decades
out of date. Into the 1960s, regularization and renormalization were
thought of together, even by their inventors, as a disreputable trick
you did in QED to get rid of infinities. Wilson's work, the use of the
renormalization group in condensed matter physics, and the notion of
effective field theory changed considerably how physicists think about
renormalization. But you still hear it referred to as a silly way of
subtracting infinity minus infinity to make a nonsensical theory make
sense.

--
Matt McIrvin http://world.std.com/~mmcirvin/

Alfred Einstead

Apr 8, 2002, 11:58:25 PM
Moataz Emam <em...@physics.umass.edu> wrote:
> The problem has its roots in classical physics. Consider Coulomb's law:
> If you try to calculate the total potential energy of a charge at rest,
> the integral diverges, BECAUSE you are including the point r=0...

Not quite. More accurately, it's this: if you naively extrapolate
the vacuum free-field relations D = epsilon_0 E; B = mu_0 H to the
actual mass points, you get infinity. In reality, the E field AT
a mass point would be the sum of all the D/epsilon_0's from all
the other mass points; plus the free field D/epsilon_0, plus an
undetermined self-force.

The force law F = rho E + J x B and power law P = J.E both yield
well-defined *finite* values, keeping these points in mind.

The classical source of renormalization is the removal of the
D/epsilon_0 term corresponding to the mass point R from the
total value of E at R.

The ambiguity of the undetermined self-force likewise corresponds
to the ambiguity in the definition of time-ordered products T[].
The ambiguity issue is actually what renormalization is about,
not so-called infinities. There are no infinities.

Saco

Apr 9, 2002, 12:45:00 PM
"A.J. Tolland" <a...@hep.uchicago.edu> wrote in message news:<Pine.SGI.4.40.0204041...@hep.uchicago.edu>...

> Renormalization is the process of selecting new variables from an
> old set of variables. It has absolutely nothing to do with convergence of
> power series. In fact, it does not solve the problems you refer to! The
> power series about zero coupling we used to describe renormalized QED is
> still divergent!

This is flat out wrong. The power series about zero coupling is not
divergent in renormalizable theories if renormalization is carried out
to arbitrary order in the coupling constant.

Saco

Apr 9, 2002, 3:49:56 PM
"A.J. Tolland" <a...@hep.uchicago.edu> wrote in message news:<Pine.SGI.4.40.0204041...@hep.uchicago.edu>...
> On Thu, 4 Apr 2002, Saco wrote:
> > > Ralph wrote:
> > > Why is renormalization necessary?
> >
> > Because we are expanding a function around a point where mathematical
> > expansion in power series is not possible.
>
> This is flat out wrong. Renormalization is necessary because we
> are expressing the physics in terms of approximate variables, choosing to
> average out "microscopic" degrees of freedom to get "macroscopic" degrees
> of freedom. The definitions of microscopic and macroscopic depend, of
> course, on the resolution (i.e. the energy) with which we can probe the
> system. Different resolutions demand different sets of approximate
> variables. For instance, if you are studying a hydrogen atom with an
> energy resolution of .3 eV, you can see the gross energy levels, but you
> can't see the fine corrections due to electron spin. So you need the spin
> degrees of freedom at higher energy, but it's not really necessary or
> appropriate to use them at lower energy.
> Renormalization is the process of selecting new variables from an
> old set of variables. It has absolutely nothing to do with convergence of
> power series. In fact, it does not solve the problems you refer to! The
> power series about zero coupling we used to describe renormalized QED is
> still divergent!

I apologize for my previous message; the power series about zero coupling
might still be divergent even in renormalizable theories, but since I
was falsely accused of being "flat out wrong" I'd like to clarify
that.

First of all, the question was "why is renormalization necessary". There
was no question about whether renormalization is sufficient to deal
with the convergence of the power series. Second, the author only explains
how renormalization is done in statistical mechanics. The example of the
sudden reappearance of electron spin due to a change of scale is not
correct. Electrons have spin at whatever scale we look.

Buzurg Shagird

Apr 10, 2002, 11:39:40 PM

Hi! I have a couple of questions, not exactly what the thread says,
but the people participating in this thread have probably thought
about these questions already. I apologize that the questions are
not well-posed -- please be gentle.

A. Is there any theory which has divergences that need to be
regularized, but no further renormalization is needed?

Is such a question meaningful? Probably not, since any scheme for
removing divergences would have a scheme-dependent `regularized'
quantity for each coupling and mass parameter of the theory, and
getting scheme-independent quantities would need renormalization.

I guess I am trying to understand why so many experts talk about
divergences and renormalization in the same breath. Could it be
because whenever one encounters a divergence in the mass and
coupling parameters, renormalization is necessary?


B. Another question about the idea of renormalization relating
theories at different scales.

Suppose I have a theory which is a good approximation to physics at
some scale, but a bad approximation to physics at some other scale.
Renormalization tells me how to change the masses and couplings so
that the modified theory is a good approximation at the new scale.
Is that a reasonable statement?

Suppose the `correct' theory at some energy scale M has non-local
states and operators. Is it still meaningful to use renormalization
to get the theory at some other scale? Or are these ideas relevant
only to local operators in local theories?

This question is related to the idea that QCD at low energies has
string-like objects, while also being a renormalizable theory at high
energy. Is it possible, even in principle, to run the couplings
down to low energy and get non-local operators? Or is it incorrect
to apply renormalization to these operators?

-S.



Josh Willis

Apr 10, 2002, 11:40:03 PM
In article <85160b48.02040...@posting.google.com>,
sa...@moon.yerphi.am (Saco) wrote:

> The power series about zero coupling is not divergent in renormalizable
> theories if renormalization is carried out to arbitrary order in the
> coupling constant.

No, it isn't. The power series about zero coupling constant *is* divergent,
and for a physical reason: the model described by the Lagrangian is unstable
for negative values of the coupling constant.

This is discussed in many places; see Chapter 23 of "Quantum Physics" by Glimm
and Jaffe for a brief discussion and pointers to the references on lower
dimensional QFTs, where more is known. In particular they discuss that while
the Schwinger functions for phi^{4}_{2} models have divergent expansion in the
coupling constant about lambda = 0, they are Borel summable. For other models,
including other two-dimensional models, I don't think even this much is known.
The best you can generally hope for is that your perturbation expansion be
asymptotic.
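
[A toy numerical demonstration of such a divergent-but-asymptotic
expansion -- a zero-dimensional "phi^4 path integral"; an editorial
sketch, not taken from Glimm & Jaffe. The exact integral is perfectly
finite for lambda > 0 but undefined for any lambda < 0, so the series in
lambda has zero radius of convergence: the partial sums hover near the
exact answer at low orders and then blow up.

    import math

    lam = 0.02

    # "Exact" value by brute-force quadrature (the integrand is sharply
    # localized, so a plain Riemann sum over [-8, 8] is plenty).
    N, L = 20000, 8.0
    dx = 2 * L / N
    exact = sum(math.exp(-x * x / 2 - lam * x**4)
                for x in (-L + i * dx for i in range(N))) * dx

    def double_factorial(k):
        # k!!, with the convention (-1)!! = 1
        return math.prod(range(k, 0, -2)) if k > 0 else 1

    # n-th term of the series: ((-lam)^n / n!) * integral x^(4n) e^(-x^2/2) dx
    #                        = ((-lam)^n / n!) * sqrt(2 pi) * (4n-1)!!
    partial = 0.0
    for n in range(12):
        partial += ((-lam)**n / math.factorial(n)
                    * math.sqrt(2 * math.pi) * double_factorial(4 * n - 1))
        print("order %2d: partial sum = %12.5f   (exact = %.5f)" %
              (n, partial, exact))

The terms grow like (4n-1)!!/n!, i.e. factorially, which is the same
behavior the instability argument predicts in real QFTs.]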

Josh

Aaron Bergman

Apr 10, 2002, 11:41:23 PM
In article <85160b48.02040...@posting.google.com>,
sa...@moon.yerphi.am (Saco) wrote:

> "A.J. Tolland" <a...@hep.uchicago.edu> wrote in message
> news:<Pine.SGI.4.40.0204041...@hep.uchicago.edu>...
>
> > Renormalization is the process of selecting new variables from an
> > old set of variables. It has absolutely nothing to do with convergence of
> > power series. In fact, it does not solve the problems you refer to! The
> > power series about zero coupling we used to describe renormalized QED is
> > still divergent!
>
> This is flat out wrong.

No, it's flat out right.

> The power series about zero coupling is not
> divergent in renormalizable theories if renormalization is carried out
> to arbitrary order in the coupling constant.

Sure it is. The standard argument is that if you have negative coupling,
the theory is unstable so the power series doesn't converge for negative
coupling. Thus, the radius of convergence around zero must be zero.

Aaron
--
Aaron Bergman
<http://www.princeton.edu/~abergman/>

A.J. Tolland

Apr 10, 2002, 11:42:43 PM
On Tue, 9 Apr 2002, Saco wrote:

> I apologize for my previous message; the power series about zero coupling
> might still be divergent even in renormalizable theories, but since I
> was falsely accused of being "flat out wrong" I'd like to clarify
> that.

Yeah, sorry for the inflammatory language. I guess I was feeling
grouchy when I posted that. But my comment stands: the non-convergence
of the power series in question is _not_ the reason we need
renormalization. To claim otherwise is misleading.

> First of all, the question was "why is renormalization necessary". There
> was no question about whether renormalization is sufficient to deal
> with the convergence of the power series.

> Second, the author only explains how renormalization is done in
> statistical mechanics.

This is how it's done in field theory, too.

> The example of the sudden reappearance of electron spin due to a change
> of scale is not correct. Electrons have spin at whatever scale we look.

Yes, the example is not correct as stated. Spin is still visible
if you're counting states. I should have specified that I was restricting
my attention to the effective Hamiltonian describing hydrogen. In this
case, if there are no strong external EM fields, you don't need the spin
variables, and it's appropriate to integrate them out.

--A.J.


Kevin A. Scaldeferri

Apr 12, 2002, 4:28:48 PM
In article <F148pJdTvaYM1...@hotmail.com>,

Buzurg Shagird <b_sh...@hotmail.com> wrote:
>
>Hi! I have a couple of questions, not exactly what the thread says,
>but the people participating in this thread have probably thought
>about these questions already. I apologize that the questions are
>not well-posed -- please be gentle.
>
>A. Is there any theory which has divergences that need to be
>regularized, but no further renormalization is needed?

One way that this can happen is if the theory is at a "UV fixed
point". This means the coupling constant is such that the beta
function is zero.

This probably isn't what you meant, though. Instead, I think that you
want to know if there are any theories which aren't finite but where
the beta function is identically zero. Peskin & Schroeder give a
heuristic explanation of why this is impossible on p. 425, but I don't
know if this is rigorously known to be impossible.

Alfred Einstead

Apr 12, 2002, 4:32:31 PM
ke...@blinky.its.caltech.edu (Kevin A. Scaldeferri) wrote:

> Specifically, renormalization is not necessary because we have point
> particles.
>
> There are finite theories with point particles. This is a fairly
> basic fact that everyone ought to learn in their QFT class.

Very good. In fact, you can actually write out a clear and simple
recursive formula for the generating function for T[exp(A)], where
A is any Wick polynomial and T[] any time-ordered product operator
satisfying suitable conditions:

T: n-point distributions -> n-point distributions
(1) T[A] = A, for Wick monomials and 0-point terms
(2) Symmetry: T[AB] = T[BA], for any Wick polynomials A, B
(3) Causality: T[A(x) B(y)] = T[A(x)] T[B(y)],

where x = (x1,...,xm), y = (y1,...,yn); x's lie on future side of
some Cauchy surface from y's.

A generalized evolution operator, in Perturbation theory, is
represented in terms of an interaction Lagrangian operator LI(x) as:

S(g) = T[LI(g)],

where the polynomial LI is smeared over by a vector of coupling
functions g. This yields a transfer function

S(g): psi(S1) |-> psi(S2)

for states psi on Cauchy surfaces that enclose between them the
region supp(g). In the limit, as supp(g) -> R^4 and g -> constants,
this gives you the usual S-matrix (if the limit exists).

Anyway, the upshot of renormalization methods is to define a suitable
T[] operator. Traditionally this is done by defining it in terms of
the more familiar Wick TW[] or Hadamard TH[] operator by an "infinity
subtraction" method.

Other methods (Epstein & Glaser; differential renormalization) try
to define T[] recursively without having to resort to TW[] or TH[].

What's little-known is that a definition can be made directly for T[]
IN TERMS OF TW[] or TH[] without involving infinities, which is
unique modulo the "impulse operator"

S(x) = T[exp(i LI(x))].

This is best illustrated by looking at orders 2 and 3.

At order 2, causality tells you that

T[A(x) B(y)] = A(x) B(y), if (x) > (y)
= B(y) A(x), if (x) < (y)

so that, as a distribution over the space S'(R^2-Delta_2), this yields
a unique well-defined solution:

T[A(x) B(y)] = TW[A(x) B(y)]
where
Delta_n = main diagonal = { (x,x,x,...,x) in R^n: x in R }

The behavior of T[A(x) A(y)] in the vicinity of x = y is given by
the second order term in the formal expansion of the impulse
operator T[exp(A(x))]. There's no loss of generality in considering
perfect-square products, since

T[AB] = 1/4 (T[(A+B)(A+B)] - T[(A-B)(A-B)])

At order 3, causality only gives you T[A(x) B(y) C(z)] recursively
as a combination of the form:

T[A(x) B(y) C(z)]
= TW[
a (T[A(x)B(y)] C(z) + T[B(y)C(z)] A(x) + T[C(z)A(x)] B(y)) +
b (A(x) B(y) C(z))
]

as a distribution in S'(R^3 - Delta_3). The behavior in the vicinity
of the sub-diagonals (x=y), (y=z), (z=x) and the requirement that you
get a well-defined Wick (or Hadamard) product uniquely determine what
a and b are. In the vicinity of x=y, if (x,y) < (z), you must have,
by Causality:

C(z) T[A(x)B(y)] =
TW[a C(z) T[A(x)B(y)] + 2a C(z) A(x)B(y) + b C(z) A(x)B(y)]

which implies that a = 1, and b = -2. Thus,

T[A(x)B(y)C(z)] =
TW[ 1! (T[A(x)B(y)]C(z) + T[B(y)C(z)]A(x) + T[C(z)A(x)]B(y))
- 2! A(x)B(y)C(z) ]

with the implied generalization illustrated in the expression above.

After smearing over by a function f, you can write:

T[A(f)^1] = A(f)
T[A(f)^2] = TW[A(f)^2], up to Delta_2
T[A(f)^3] = TW[3 A(f) T[A(f)^2] - 2 A(f)^3], up to Delta_3.
T[A(f)^4] = TW[4 A(f) T[A(f)^3] + 3 T[A(f)^2]^2 - 12 A(f)^2 T[A(f)^2]
+ 6 A(f)^4], up to Delta_4
etc.

Collecting these terms, you get the recursive formula for the formal
series expansion:
S(f) = T[exp(A(f))] = sum T[A(f)^n/n!]
= 1 + A(f) + TW[S(f)-ln(S(f))-1]
= TW[A(f) + S(f) - ln(S(f))], up to main diagonal

Given the requirement S(f) = TW[S(f)], this yields the recursive formula:

TW[ln(S(f))] = A(f), when S(f) = T[exp(A(f))].

A unique series expansion at each order for the ALREADY RENORMALIZED
T[] operator (and, consequently, S-matrix) is thus arrived at by
the conditions:
(1) TW[ln(S(f))] = i LI(f)
(2) TW[S(f)] = S(f)
given the impulse operator
(3) S(x) = T[exp(i LI(x))].

This is generally true for all theories, renormalizable or not.
The impulse operator, describing the behavior of the generating
function S(f) near the main diagonal Delta_n at each order n,
parametrizes the entire set of solutions for possible T[]'s.

So, in effect, renormalization amounts to specifying the impulse
operator.

A.J. Tolland

Apr 13, 2002, 7:30:56 PM
Hi,

On Thu, 11 Apr 2002, Buzurg Shagird wrote:

> Is there any theory which has divergences that need to be regularized,
> but no further renormalization is needed?

I can think of one example which you probably won't like: a
non-interacting scalar field. If you define the Hamiltonian in the naive
way in these theories, you end up with a divergent vacuum energy. One
typically regularizes this to zero. Now, you actually _can_ renormalize
the free scalar field, and maybe you actually should. However, we usually
skip this step, because it doesn't change anything. The same set of
variables works perfectly at all energy scales.
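
[Concretely -- the standard sketch, not part of the original post: the
naive free-field Hamiltonian is

    H = integral d^3k omega_k ( a^+_k a_k + (1/2) delta^3(0) ),

and the zero-point piece, (1/2) integral d^3k omega_k delta^3(0),
diverges; "regularizing it to zero" amounts to normal ordering, H -> :H:.]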
You might argue that this is a dumb example because vacuum
energy is unobservable. I might respond by mumbling about general
relativity and cosmological constants.

> I guess I am trying to understand why so many experts talk about
> divergences and renormalization in the same breath. Could it be
> because whenever one encounters a divergence in the mass and
> coupling parameters, renormalization is necessary?

You can think of it that way. I tend to take the effective field
theory point of view on everything [*], so I prefer to see infinities as a
sign that you're doing something wrong. As I've ranted about elsewhere,
renormalization is not some technique for assigning values to divergent
quantities. You don't encounter a divergence, and say, "oh crud, where's
my renormalizer?" You set things up at the beginning, and renormalize
every time you make a change in scale. If you encounter divergences, your
original scheme wasn't a good description of the physics.

* = exact theories are just really good effective theories! :)

> Suppose I have a theory which is a good approximation to physics at
> some scale, but a bad approximation to physics at some other scale.
> Renormalization tells me how to change the masses and couplings so
> that the modified theory is a good approximation at the new scale.
> Is that a reasonable statement?

Mostly, but not entirely. The renormalization group isn't a group
at all; it's a semigroup. Renormalization flows in only one direction.
What you describe above is true if the new energy scale is smaller than
the old one. If you want to go the other way, you can demand that your
set of variables "look the same" at higher energy, and then demanding that
the coupling constants of this higher energy theory are such that they
give rise to your original theory at lower energy. This sometimes gets
you into trouble: Theories like Fermi's theory of the weak interaction,
which are non-renormalizable in the old sense (i.e. their coupling
constants have negative mass dimension), get more and more broken at
higher energies. And even nicer theories like QED occasionally suffer
from Landau poles.
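
[For concreteness -- the standard one-loop estimate, not part of the
original post: running the QED coupling upward via

    1/alpha(mu) = 1/alpha(m_e) - (2/(3 pi)) ln(mu/m_e)

gives 1/alpha(mu) = 0 at mu = m_e exp(3 pi / (2 alpha)) ~ 10^286 eV --
absurdly far above any physical scale, but a sign that QED on its own
cannot be the whole story at arbitrarily short distances.]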
Possible in principle doesn't always equate to possible in
practice, particularly when we can only do perturbation theory. We know
that quarks and gluons are good degrees of freedom for strong interaction
physics at high energies, and we know that baryons and mesons are good
degrees of freedom at lower energies. But I don't think anyone's fully
succeeded yet
in getting these low energy degrees of freedom to emerge from
renormalization, an achievement which would be tantamount to describing
hadrons in terms of quarks and gluons. Others on this newsgroup know more
about this than I do.

--A.J.


Neil

Apr 13, 2002, 11:35:13 PM
"Ralph E. Frost" <ref...@dcwi.com> wrote in message news:<uai6d17...@corp.supernews.com>...
> Why is renormalization necessary?
>
> I understand (okay, maybe I don't understand but I remember reading) that
> infinities arise and they have to be sealed or healed, but I am wondering
> WHY they arise in the first place?
<snip>

There have been plenty of answers to this question already, but they
are typically geared to the quantum mechanical end of the problem. I
just want to remind everyone that there is another (if more or less
equivalent) reason why infinities come up and need dealing with, and
it is based on classical physics. Even static electric fields have
energy per unit volume, which I'll quickly just say is proportional to
|E|^2. Suppose we integrate the total field energy of a particle such
as an electron, going from infinite distance and working down to the
particle. This field energy becomes equal to the entire rest energy of
the electron at a certain radius, called "the classical electron radius."
If electrons are smaller than that, it is more than their entire mass,
and if they are points, it is infinite. We know from scattering
experiments that electrons are not as big as the CER, and actually
much smaller (not billiard bounce but pure inverse-square scattering
behavior within the distance the experiment allows.) Whether they're
literal "points" or not is debatable I suppose.

I know, fields are not really (?) classical at that level, but this is
a starting point for seeing how infinities arise and why one has to correct
for them to get known values. What I wonder is, looking at the whole picture:
what actually "runs" renormalization physically and makes it work like
it does, not just as a mathematical rescue scheme based on the wish
for something to fix things up?

Neil Bates

Mark

Apr 15, 2002, 1:09:28 PM
In article <uai6d17...@corp.supernews.com> "Ralph E. Frost" <ref...@dcwi.com> writes:
>Why is renormalization necessary?
>
>I understand (okay, maybe I don't understand but I remember reading) that
>infinities arise and they have to be sealed or healed, but I am wondering
>WHY they arise in the first place?

They don't. They're entirely an artifact of a particular, naive, approach
to perturbation theory. In fact, in the extreme case, there are theories
known where an exact model is available, where the (naive) perturbation
method gives you infinities that are sealed up by renormalization, but
where the theory itself involves no infinities whatsoever.

You can by-pass the infinities entirely and directly arrive at the
already-renormalized expressions via the general formula and theorem:

Let T[] be a time-ordered product operator satisfying certain general
conditions.

If S[A] is the generating function of T[]; S[A] = T[exp(A)], where A
is a Wick polynomial, then as a formal power series the following
identities uniquely define S[A] up to "point-order" terms:

TW[S[A]] = S[A]; TW[log(S[A])] = A

where TW[] is the (naive) Wick time-ordered product operator.

In particular, the terms in the expansion of log(S[A]) inside the TW[]
are free of infinities at each order!

The ambiguity of and resolution of point-order terms is actually what
renormalization is concerned with.

It might help for you to see the account Bogoliubov gave of the occurrence
of "infinities". It is, in fact, directly related to the expressions above
and it shows how the notion of "effective Lagrangian counter terms
offsetting infinities" arises.

A suitably defined T[] operator should satisfy certain general conditions.
In particular, if A(x) B(y) C(z) are operators, then

T[A(x) B(y) C(z)] = T[A(x) B(y)] C(z)

whenever the points x, y lie on the future side of a Cauchy surface from z.
Analogous conditions apply for more complex combinations of operators.

For 2-point terms, T[] will agree with TW[] for non-coincident points:

T[A(x) B(y)] = A(x) B(y) if x is outside of y's past lightcone
T[A(x) B(y)] = B(y) A(x) if x is outside of y's future lightcone

which also means that A(x) B(y) = B(y) A(x) if x and y are outside of
each other's light cones. For all NON-COINCIDENT points x, y, this
means:
T[A(x)B(y)] = TW[A(x)B(y)].

However, TW[] may be ill-defined in the vicinity of coincident points.
Hence, the use of TW[] leads to the artificial appearance of infinities
as an artifact.

The primary focus of perturbation theory is the S matrix, which is the
generating function evaluated on the interaction Lagrangian:

S = T[exp(i LI(e))]

where

LI(g) = integral (LI(x) g(x) dx)

is the smearing of LI(x) by a coupling function g(x); the S matrix proper
is obtained at the constant value g(x) = e.

When computed naively as S = TW[], you get the infinities described above.

Bogoliubov's take on this is that at the second order, one can write
formally:
T[A(x)B(y)] = TW[A(x)B(y)] + i D[AB](x) delta(x-y)

where the counter-term D[AB](x) has the general form of a polynomial in
d/dx, and the coefficients of the polynomial are, formally, infinite at
degrees 0,...,n if TW[A(x)B(y)] diverges as 1/(x-y)^n when y->x.

For the term
T[i LI(g) i LI(g)] = integral T[i LI(x) i LI(y)] g(x) g(y) dx dy

this gives you a formal counter-term:

T[(i LI(g))^2] = TW[(i LI(g))^2] + i L2(g,g)
L2(g,g) = integral L2(x,y) g(x) g(y) dx dy

with L2(x,y) a multiple of delta(x-y) as above.

At the third order, the most you can say about T[] in relation to TW[]
at points x,y,z not all coincident is that:

T[A(x)B(y)C(z)] = TW[
(A(x) T[B(y)C(z)] + B(y) T[C(z)A(x)] + C(z) T[A(x)B(y)])
- 2 A(x)B(y)C(z)
] + D[ABC](x) delta(x-y) delta(x-z)

This particular combination (and only this) gives you the correct
values when 2 out of the 3 points are coincident. So the net result
is that T[] and TW[] formally differ by a point-local term

D[ABC](x) delta(x-y) delta(x-z)
D[ABC](x) = polynomial in d/dx.

So, at the third order of the generating function for i LI(g), you get:

T[(i LI(g))^3] = TW[3 iLI(g) T[(iLI(g))^2] - 2 (iLI(g))^3] + i L3(g,g,g)

with L3(x,y,z) localized to x=y=z, a multiple of delta(x-y)delta(x-z).

Similarly, at the 4th and higher orders you get a unique decomposition:

T[(iLI(g))^n] = TW[An] + i Ln(g,g,...,g)
where
An = iLI(g)^n - n! nth order of log(T[exp(iLI(g))])

with Ln a (formally infinite) counter term.

If all the T's are replaced by TW's, with the (formally infinite) counter
terms L2,L3,... included, you get the expansion:

T[exp(iLI(g))] = TW[exp(iLI(g) + iL2(g,g) + iL3(g,g,g) + ...)]

thereby effecting the introduction of (formally infinite) Lagrangian
counter terms L2,L3,... to offset the (naive, but wrong) use of TW[].

So, the entire reason for the so-called infinities and the appearance
of the counter terms is the naive, but wrong, use of TW[], the Wick
time-ordered product operator, in the expansion of S.

The infinities are an artifact, nothing more.

Charles Francis

May 13, 2002, 10:11:46 PM

In article <f7aeccdb.02041...@posting.google.com>, Neil
<para...@mailcity.com> writes:

>"Ralph E. Frost" <ref...@dcwi.com> wrote in message
>news:<uai6d17...@corp.supernews.com>...

>> Why is renormalization necessary?
>>
>> I understand (okay, maybe I don't understand but I remember reading) that
>> infinities arise and they have to be sealed or healed, but I am wondering
>> WHY they arise in the first place?

Actually I think at least one answer was given:

In article <020420020922466602%dis...@golem.ph.utexas.edu>, Jacques
Distler <dis...@golem.ph.utexas.edu> writes

>"Curing infinities" is the wrong way to think about renormalization.
>
>The right way is to realize that quantum field theories come equiped
>with a cutoff, not to cure some sickness, but as a way of codifying our
>ignorance about the details of physics at short distances.

Which rather surprised me, because that is precisely what I say and with
which I had thought Jacques Distler strongly disagreed. Still there is
no accounting for misunderstanding.

I think the answer to your question is that physically there really is a
cut-off, which I generally summarise as saying that a particle cannot
interact again in the instant of its creation.

Mathematically it doesn't make any difference how the cut-off is
applied. Discrete QED uses a sharp cut-off, but the value of the cut-off
makes no difference (at least in flat space). The method of Epstein and
Glaser applies a cut-off using a continuous switching function. So we
are ignorant of the exact manner of the cut-off, but in my view there is
certainly some form of physical cut-off.
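
[A toy version of that scheme-independence, not part of the original
post: regulate a log-divergent integral with a sharp cutoff,

    integral_mu^Lambda dk/k = ln(Lambda/mu),

or with a smooth switching function f(k/Lambda), equal to 1 at small
argument and falling to 0 at large argument,

    integral_mu^infinity (dk/k) f(k/Lambda) = ln(Lambda/mu) + C_f,

where C_f is a finite constant fixed by the shape of f alone. The two
schemes differ only by a constant, which drops out of any
cutoff-independent prediction.]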


Regards

--
Charles Francis

Jacques Distler

May 14, 2002, 5:09:36 PM
In article <abprp2$d8k$1...@news.state.mn.us>, Charles Francis
<cha...@clef.demon.co.uk> wrote:

>Actually I think at least one answer was given:
>
>In article <020420020922466602%dis...@golem.ph.utexas.edu>, Jacques
>Distler <dis...@golem.ph.utexas.edu> writes
>
>>"Curing infinities" is the wrong way to think about renormalization.
>>
>>The right way is to realize that quantum field theories come equipped
>>with a cutoff, not to cure some sickness, but as a way of codifying our
>>ignorance about the details of physics at short distances.
>
>Which rather surprised me, because that is precisely what I say and with
>which I had thought Jacques Distler strongly disagreed. Still there is
>no accounting for misunderstanding.
>
>I think the answer to your question is that physically there really is a
>cut-off, which I generally summarise as saying that a particle cannot
>interact again in the instant of its creation.

By snipping the *next* part of my post, you totally distort its meaning.
I went on to say:

In order to ensure that changing the short-distance cutoff a
little bit does not change the calculated results for
long-distance physics, we need to make the coupling constants of
the theory cutoff-dependent in a very particular way. This is
called Renormalization. The equation which expresses how the
coupling constants change under an infinitesimal change of
cutoff is called the Renormalization Group Equation.

In comparing Ralph's calculations with mine, it proves convenient
to reexpress our answers, not in terms of the cutoff-dependent
"bare" couplings, but in terms of some renormalized or
"physical" parameters.

E.g., in QED, we might wish to trade the gauge coupling constant
and mass parameter in the Lagrangian for some physical definition
of charge and the physical electron mass.

That procedure was worked out in the '50s, long before people
understood *why* renormalization was an integral part of quantum
field theory.

*You* want to attach fundamental physical significance to the cutoff.

"I" (which is to say, modern quantum field theorists) want the cutoff
to be completely and utterly physically irrelevant.

To achieve that, we need to introduce the whole machinery of the
Renormalization Group, and all of the interesting physics that stems
from it.

The RG was (last I heard) notably lacking from your approach.

Charles Francis

May 23, 2002, 1:35:48 AM

In article <130520022147099655%dis...@golem.ph.utexas.edu>, Jacques
Distler <dis...@golem.ph.utexas.edu> writes:

>In article <abprp2$d8k$1...@news.state.mn.us>, Charles Francis
><cha...@clef.demon.co.uk> wrote:

>>>"Curing infinities" is the wrong way to think about renormalization.
>>>
>>>The right way is to realize that quantum field theories come equipped
>>>with a cutoff, not to cure some sickness, but as a way of codifying our
>>>ignorance about the details of physics at short distances.

>>Which rather surprised me, because that is precisely what I say and with
>>which I had thought Jacques Distler strongly disagreed. Still there is
>>no accounting for misunderstanding.
>>
>>I think the answer to your question is that physically there really is a
>>cut-off, which I generally summarise as saying that a particle cannot
>>interact again in the instant of its creation.

>By snipping the *next* part of my post, you totally distort its meaning.
>I went on to say:
>
> In order to ensure that changing the short-distance cutoff a
> little bit does not change the calculated results for
> long-distance physics, we need to make the coupling constants of
> the theory cutoff-dependent in a very particular way. This is
> called Renormalization.

Neither I nor Scharf would choose to call it that. In fact, Scharf
does not need to specify the switching function, but only the general
properties of the switching function.

>*You* want to attach fundamental physical significance to the cutoff.
>"I" (which is to say, modern quantum field theorists) want the cutoff
>to be completely and utterly physically irrelevant.

In the flat space approximation in which we usually do field theory, the
value of the cut-off is completely irrelevant. Its only relevance is
that, when the limit is taken after including the cut-off, things remain
finite, but that is what you know anyway. It matters that there is a
physical cut-off, since without it the theory is not defined. I believe
the value of the cut-off is related to gravity, but that is irrelevant
in the flat space approximation.

>To achieve that, we need to introduce the whole machinery of the
>Renormalization Group, and all of the interesting physics that stems
>from it.
>
>The RG was (last I heard) notably lacking from your approach.

Please refer to the account given in Scharf. The renormalisation group
is present, just untangled from the infinite renormalisation arguments
one sometimes sees.

Regards

--
Charles Francis
