I have an integral with a delta-function squared as an integrand:

S = integral_a_b [delta(x - x0)]^2 dx over an interval (a,b) including
the point x0 completely. I always considered it to be infinity. But I
see that it is often called an "ill-defined" integral (as if it could
be defined differently). Why not call it simply divergent, infinite?
There is provably no way to give the term [delta(x - x0)]^2 an
unambiguous mathematical meaning, assuming natural rules for the
behavior under addition and multiplication. Therefore it, and every
expression containing it, is called ill-defined.
Is infinity that ambiguous? In my opinion, it is not.
I understand the term "ill-defined" as "not-yet-defined-and-sensitive-
to-definition". In my case it is already defined as infinity or a very
big number, if you like.
Another matter is that this integral is a part of a perturbative
correction to an eigenvalue. The exact eigenvalue is finite, and one
may be tempted to "define" this expression "from physical
requirements" in such a way as to get a finite answer. For example, with
the help of the delta-function argument splitting: [delta(x - x0)]^2 =def=
delta(x - x0)*delta(x - x1), x1 =/= x0. Thus the obvious infinity
might be "defined" as zero.
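(For concreteness, a small numerical sketch of what that splitting does, with
Gaussian stand-ins of width a for the deltas; the widths and the offset 0.05
below are arbitrary choices of mine. The overlap integral vanishes for
x1 =/= x0 but diverges like 1/(a*sqrt(2*pi)) for x1 = x0.)

# Overlap of two Gaussian nascent deltas of width a, centered at x0 and x1.
import numpy as np
from scipy.integrate import quad

def g(x, c, a):
    # Gaussian nascent delta centered at c: unit area for every a, width ~ a
    return np.exp(-(x - c)**2 / a**2) / (np.sqrt(np.pi) * a)

x0 = 0.0
for a in [0.1, 0.01, 0.001]:
    coincident, _ = quad(lambda x: g(x, x0, a) * g(x, x0, a), -1, 1, points=[x0])
    split, _ = quad(lambda x: g(x, x0, a) * g(x, x0 + 0.05, a), -1, 1,
                    points=[x0, x0 + 0.05])
    print(a, coincident, split)
# coincident grows like 1/(a*sqrt(2*pi)); split tends to zero, which is how
# the "argument splitting" trades an infinity for a zero.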
On the other hand, I know that this infinity must appear because my
series is a series in powers of a _formally_ small parameter put to
infinity, like computing exp(-z) from its first-order truncation 1 - z
as z -> infinity. The exact function is finite, but its truncated
series is not obliged to be finite in this limit. So the argument
splitting is not a correct definition but a mathematical flaw. This
integral plays the role of z tending to infinity, so it is infinite, as
follows from its analytical expression. No need to "define" it further.
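(To make the exp(-z) analogy concrete with a trivial check of mine: the exact
function stays finite for large z, while its first-order truncation does not.)

# exp(-z) stays finite (indeed -> 0) as z grows, but the first-order
# truncation 1 - z of its power series diverges to -infinity: truncating
# in a "formally small" parameter and then letting that parameter grow
# is what produces the spurious divergence.
import math
for z in [1.0, 10.0, 100.0, 1000.0]:
    print(z, math.exp(-z), 1.0 - z)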
In QFT they say even such a thing that the theory is "defined
perturbatively". So the values of : perturbative corrections are
"defined" from such "physical requirements" like I mentioned above:
the results "must" be finite and correspond to the observable values.
So they "define" obviously divergent terms by replacing them with
appropriately fitted finite terms. The terminology "ill-defined"
serves as a pretext to "well define" apparently divergent terms. In my
opinion, it is just fooling ourselves.
In terms of this, what one is looking for (using the Wikipedia
article's a, not Vladimir's) is:
S = integral [delta(x - x0)]^2 dx
  = integral [lim a->0 (1/(pi a^2)) exp(-2(x-x0)^2/a^2)] dx    (1)
One may then want to try the same thing for some of the other
distributions (34)-(40) at
http://mathworld.wolfram.com/DeltaFunction.html and see what one gets.
Thinking about this further, the key thing that makes the delta
*defined* is that, e.g.,
integral [(1/(sqrt(pi) a)) exp(-(x-x0)^2/a^2)] dx = 1    (2)
no matter what the value of a. Thus, as a->0, the area under the delta
is still 1. I would have to suppose that in particular, it is this
feature of the delta which allows mathematicians to say that this is
"defined," because even though the pulse is of infinite magnitude, its
integral stays controlled and finite.
Now, the exponential in (2) can be mapped into the exponential in (1) by
rescaling a -> a/sqrt(2). However, under such a rescaling, the entire
term of interest in (2) becomes:
[(sqrt(2)/(sqrt(pi) a)) exp(-2(x-x0)^2/a^2)]    (3)
The key difference between the expression in (3) and the expression
in (1) is that the coefficient goes from ~1/a to ~1/a^2. This means
that the distribution with an area of 1 is multiplied by an overall
factor of ~1/a. So, as a->0, this multiplier becomes infinite, and by
the time this distribution becomes a delta, you lose the salient
property of the delta that its integral is equal to 1. Instead, its
integral becomes infinite.
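(A quick numerical check of this with the Gaussian of (2), width parameter a
as in (1)-(3); the integration range [-1, 1] below is my arbitrary choice: the
area under the nascent delta stays 1, while the area under its square grows
like 1/(a*sqrt(2*pi)).)

# Area under a Gaussian nascent delta vs. the area under its square.
import numpy as np
from scipy.integrate import quad

def g(x, a):
    # the Gaussian of (2), centered at x0 = 0: unit area for every a
    return np.exp(-x**2 / a**2) / (np.sqrt(np.pi) * a)

for a in [0.1, 0.01, 0.001]:
    area, _ = quad(lambda x: g(x, a), -1, 1, points=[0.0])
    area_sq, _ = quad(lambda x: g(x, a)**2, -1, 1, points=[0.0])
    print(a, area, area_sq, 1.0 / (a * np.sqrt(2.0 * np.pi)))
# area stays ~1; area_sq matches the last column, 1/(a*sqrt(2*pi)),
# and blows up as a -> 0.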
Arnold said there is "provably" no way to give delta^2 an
unambiguous mathematical meaning. This is perhaps not a general proof,
but it does demonstrate that the area under delta^2 becomes infinite
rather than unity, at least when the limit taken is of a Gaussian. One
would think that the same sort of thing will happen for other
distributions that may also be used to define the delta in the limiting
case.
Jay
Because it is neither.
The delta function - and its products - have integrals that are well
defined, convergent, and finite.
Assigning a value of infinity is useless. Usually several similar terms
occur, where some naively evaluate to infinity, and others to minus
infinity, and one wants to know their sum. Your definition does not
help in the slightest to give meaning to the sum.
You should study the successful ways renormalizations are done,
and the reasoning behind them, rather than spend your time trying to
invent shortcuts which, if they existed, would have been found long,
long ago by those more experienced than you.
The role of renormalization is well understood from very different
points of view. After you understand some of them thoroughly, you may
still go back to your original dreams, and see whether they can be
turned into reality. But it is good to have studied in detail the
explorers who attempted the same journey, as preparation for the journey
one wants to take.
Arnold Neumaier
Because you probably got the incorrect idea it is infinite from one of
the properties of the delta
integral delta(x - x0) f(x) dx = f(x0) [1]
then if you incorrectly assume that f(x) = delta(x-x0), you get
integral delta(x - x0) delta(x - x0) dx = delta(x0 - x0) = infinity
But delta is *not* a function and thus you cannot use [1].
The integral of the square of a delta is ill-defined in ordinary space.
I believe that mathematicians try to give sense to integrals like that
using extended (rigged) spaces, but maybe I am completely wrong.
--
http://www.canonicalscience.org/
BLOG:
http://www.canonicalscience.org/publications/canonicalsciencetoday/canonicalsciencetoday.html
This may or may not be helpful, but you might want to think about
this problem by considering the delta as the limit in the sense of
distributions of a sequence of Gaussians, see the graph on the right at
the top of http://en.wikipedia.org/wiki/Dirac_delta_function.
In terms of this, what you are looking for (using their a, not
yours) is:
S = integral [delta(x - x0)]^2 dx
= integral [lim a->0 (1/(pi a^2)) exp(-2(x-x0)^2/a^2)] dx (1)
You may then want to try the same thing for some of the other
distributions (34)-(40) at
http://mathworld.wolfram.com/DeltaFunction.html and see what you get.
Perhaps from all of that, you can discern a general pattern that
will enable you to answer your question. Looking at (1), however, I
don't see why the integral of delta^2 is any better or worse defined
than the integral of delta^1.
Keep in mind also that deltas such as:
delta^(4) (x) == delta(t)delta(x)delta(y)delta(z) (2)
often show up in many situations, which may also help you to think about
this.
Jay
> I have an integral with a delta-function squared as an integrand:
>
> S = integral_a_b [delta(x - x0)]^2 dx over an interval (a,b) including
> the point x0 completely. I always considered it to be infinity.
No, it is just undefined.
And calling it 'infinity' doesn't define it,
for 'infinity' is not a real number.
> But I
> see that it is often called an "ill-defined" integral (as if it could
> be defined differently). Why not call it simply divergent, infinite?
Since (by Fourier) it formally equals \int_{-infty}^{+infty} dx,
calling it divergent does make sense.
It doesn't matter though what you call it,
it is undefined by any other name.
However, on physical grounds we know
that all such integrals are to be taken as zero,
since all vacuum fluctuation diagrams
(having one delta too many)
are proportional to them.
There is no mathematically rigorous way
to derive the physically necessary answer, though.
Jan
defining the F.T. pair as

x(t) = integral_{-inf}^{+inf} { X(f) e^(i*2*pi*f*t) df }

X(f) = integral_{-inf}^{+inf} { x(t) e^(-i*2*pi*f*t) dt }

and convolution as

(x (*) y)(t) = integral_{-inf}^{+inf} { x(u)*y(t-u) du }
* means multiply, (*) means convolve.
we know that the F.T. of x(t) (*) y(t) is X(f)*Y(f).
we also know that the F.T. of delta(t) = 1 for all f.
now, wouldn't that mean that if both x(t) and y(t) are delta(t), then
the F.T. of the two convolved against each other is the same as the
F.T. of just one? so two dirac deltas convolved against each other is
an identical dirac delta. at t=0, of course, the dirac delta is ill-
defined (i s'pose it's inf) and so is the convolution integral. but
we know what the convolution integral is for other t (it should be
zero).
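(A discrete sanity check of that statement, with Gaussian stand-ins for the
deltas on a uniform grid; the grid spacing and widths are arbitrary choices of
mine. The convolution keeps unit area and stays delta-like, while the
pointwise product's area blows up as the width shrinks.)

# Convolution of two nascent deltas vs. their pointwise product.
import numpy as np

dt = 2e-4
t = np.arange(-0.5, 0.5, dt)

def nascent_delta(t, a):
    # Gaussian of width ~ a with unit area
    return np.exp(-t**2 / a**2) / (np.sqrt(np.pi) * a)

for a in [0.05, 0.01, 0.002]:
    d = nascent_delta(t, a)
    conv = np.convolve(d, d, mode='same') * dt   # approximates (d (*) d)(t)
    prod = d * d                                 # pointwise square
    print(a, np.sum(conv) * dt, np.sum(prod) * dt)
# the convolution's area stays ~1 (it is just a slightly wider nascent
# delta), while the area of the pointwise square grows like 1/(a*sqrt(2*pi)).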
we neanderthal electrical engineers are often more careless with the
dirac delta than mathematicians. mathematicians insist that a
function that is zero almost everywhere has integral of zero. EEs
think that the integral of a dirac delta (which we treat as a
"function") is 1 and that the dirac delta is zero almost everywhere.
r b-j
Unlike physicists and engineers, mathematicians insist that a function
that is zero almost everywhere has an integral of zero. This is indeed
a consequence of the definition of what mathematicians call a function
and an integral. It ensures that functional analysis is not plagued
by the same problems as quantum field theory.
Therefore mathematicians call the ''Dirac function'' delta(x),
which vanishes for nonzero x but integrates to 1, not a function
but a distribution. As a distribution (= a linear mapping that assigns
to nice functions a number), the delta distribution is well-defined:
delta(f) = f(0), (1)
and the notation
\int dx delta(x) f(x) = f(0)
is just considered to be a formal way of writing (1).
But no such distributional meaning (nor any other) can be attached
to the square of delta(x), whence mathematicians call delta(x)^2
ill-defined.
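(One way to see this concretely is to model a distribution directly as "a
linear map from test functions to numbers". The toy sketch below is mine, not
anyone's library: delta, its derivative, and linear combinations are immediate
at this level, but there is simply no slot where "delta squared" could enter,
since a functional has no pointwise values to multiply.)

# Distributions modelled as linear functionals on test functions.
import math

delta = lambda f: f(0.0)                       # delta(f) = f(0)

def delta_prime(f, h=1e-6):
    # delta'(f) = -f'(0), here via a central finite difference
    return -(f(h) - f(-h)) / (2.0 * h)

def add(G, H):                                 # (G + H)(f) = G(f) + H(f)
    return lambda f: G(f) + H(f)

def scale(c, G):                               # (c*G)(f) = c*G(f)
    return lambda f: c * G(f)

test = lambda x: math.exp(-x**2) * math.cos(x)
print(delta(test))                             # 1.0, i.e. test(0)
print(add(delta, scale(2.0, delta))(test))     # 3.0
print(delta_prime(test))                       # ~0.0, i.e. -test'(0)
# No analogous line can be written for "delta * delta": the functional
# takes a whole test function to a number and has no values at points.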
Of course, one can give an ad hoc meaning to \int dx delta(x)^2;
one can define it to be infinite, zero, or whatever else one finds
convenient.
But one cannot attach consistently a meaning to
\int dx delta(x)^2 f(x)
for arbitrary nice functions f, in a way that one can handle the
integral according to the usual rules without getting contradictions.
Therefore, delta(x)^2 is mathematically as ill-defined as 0/0.
The latter can also be defined to be 1 or infinity, or NaN,
but any such definition is inconsistent with the rules for handling
numbers.
See also the section ''The square of the delta function'' in Chapter
B4: ''Divergences in quantum field theory'' of my theoretical physics
FAQ at http://www.mat.univie.ac.at/~neum/physfaq/physics-faq.html .
Arnold Neumaier
============= Moderator's note ========================================

It is clear that one cannot give the square of a delta-distribution a
definite meaning. Why do you think delta is "positive" (at best you
need to define what you mean by positive, but I cannot imagine that
it makes much sense to ascribe a one-form a sign)? Even, if you think of
the delta distribution as a certain equivalence class of weak limits, it
doesn't make sense. You can of course take specific examples of \delta
sequences of functions that all are positive (e.g., take a series of
Gaussians which become narrower and narrower around the origin), but
there are as well sequences of functions with no specific sign which are
proper delta-sequences like the sequence of sinc functions which is
an important example for a delta-sequence in time-dependent perturbation
theory of quantum mechanics.
=======================================================================
On 9 Apr, 13:24, Arnold Neumaier <Arnold.Neuma...@univie.ac.at> wrote:
...
> Of course, one can give an ad hoc meaning to \int dx delta(x)^2;
> one can define it to be infinite, zero, or whatever else one finds
> convenient.
If the integrand is positive and big, the integral cannot be zero or
negative, can it?
> But one cannot attach consistently a meaning to
> \int dx delta(x)^2 f(x)
> for arbitrary nice functions f, in a way that one can handle the
> integral according to the usual rules without getting contradictions.
If f(0) is finite and has a certain sign, this integral is also big
and of the same sign.
In my example, delta(x)^2 appeared from the derivative of a fast-changing
physical function (heat conduction) in a two-layer system. I can
always consider this derivative to be finite but extremely big, so the
integral has a big, positive numerical value.
The suggestive way of writing a distribution [1] G acting on a test
function f, G(f) = "int G(x) f(x) dx", is sometimes too suggestive.
If we are given f(x), then what is f(x)^2? The answer to that question,
and even its well-posedness, depends on what f(x) represents. If f is
a function, then f(x) is the value of that function at a point x (a real
or complex number, or something similar), and f(x)^2 is the same value
squared.
On the other hand, trying to apply the same logic to G(x), in the
notation introduced above, fails. The reason is that G is not a
function, but a distribution. And G(x) is not the value of G at a
point x (since a distribution need not have a value at a given point).
The notation G(x) only makes sense as a symbolic expression in the
larger expression "int G(x) f(x) dx", which is defined to be G(f). As
such, G(x) is not a number and hence cannot be squared.
A naive way of trying to get around this obstacle is to consider a
sequence of functions G_n(x) which converge to G as n->oo. Then
G_n(x)^2 makes perfect sense. One may then attempt to define,
G^2(f) = limit_(n->oo) int G_n(x)^2 f(x) dx .
However, it is easy to see for simple examples of G_n and f that this
limit will result in infinity, which is not a very useful way to
define G^2.
Even worse is the fact that distributions cannot be identified with
particular sequences of functions; rather, they can only be identified
with equivalence classes of sequences that converge to the same
distribution. If one wishes to generalize the above approach to define
the product of distributions as the limit of products of
representative sequences of functions from the respective equivalence
classes, it is easy to see that this definition will not be
independent of the choice of representative, demonstrating that
consistently defining products in this way is impossible. Finding a
simple illustration of the last statement is left as an exercise to
the reader.
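(Both points can be checked numerically with two different representative
sequences for the same delta -- a Gaussian and a box, my choice of examples:
int G_n(x)^2 f(x) dx diverges in both cases, and with different prefactors, so
the would-be limit depends on the representative.)

# Two delta sequences with the same weak limit delta(x), but different G_n^2.
import numpy as np
from scipy.integrate import quad

def gauss(x, n):
    return n * np.exp(-(n * x)**2) / np.sqrt(np.pi)   # unit area, -> delta

def box(x, n):
    return n if abs(x) < 0.5 / n else 0.0             # unit area, -> delta

f = lambda x: 1.0 / (1.0 + x**2)                      # a nice test function, f(0) = 1

for n in [10.0, 100.0, 1000.0]:
    Ig, _ = quad(lambda x: gauss(x, n)**2 * f(x), -1, 1, points=[0.0])
    Ib, _ = quad(lambda x: box(x, n)**2 * f(x), -1, 1, points=[-0.5 / n, 0.5 / n])
    print(n, Ig / n, Ib / n)
# both integrals grow like n*f(0), but with different prefactors:
# Gaussian ~ n*f(0)/sqrt(2*pi), box ~ n*f(0), so there is no
# representative-independent limit.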
[1] Just to make sure we are all on the same page, a distribution is a
continuous linear functional on the linear space of functions (called
test functions) with a given topology.
Hope this helps.
Igor
Of course, one can declare that any ill-defined object is infinity.
But this doesn't solve the problem. The point is that well-defined
objects are defined not just as single instances (where any ad hoc
definition could be used) but as special cases of a general,
meaningful conceptual basis. There is no such basis for the product
of general distributions. (A possible way out is provided by generalized
functions in the Colombeau sense, but there infinitely many generalized functions
correspond to the same delta(x-x_0), so that already the latter is
ambiguous.)
> I understand the term "ill-defined" as "not-yet-defined-and-sensitive-
> to-definition".
But the common understanding is ''not defined in the standard background
context''. Of course, authors are entitled to add their personal
definitions to the standard background and thus make previous
ill-defined things defined. But others are unlikely to follow unless
there is a convincing conceptual advance behind such a definition.
Simply defining an ill-defined expression to be infinite does not
help anyone.
> In my case it is already defined as infinity or a very
> big number, if you like.
No, it is not defined, according to standard practice.
> Another matter is that this integral is a part of a perturbative
> correction to an eigenvalue.
If it occurs in such an expansion it just means that perturbation theory
has been applied incorrectly. Perturbation theory is well-defined only
for corrections that are small. Infinity is never small.
> On the other hand, I know that this infinity must appear because my
> series is a series in powers of a _formally_ small parameter
Formal stuff is often ill-defined.
> In QFT they say even such a thing that the theory is "defined
> perturbatively".
This just means: defined as a formal power series f(alpha).
The latter is a respectable, well-defined object. Only when
trying to evaluate a formal power series at a nonzero value of
alpha does something ill-defined occur.
But physicists never insert a finite value into the formal power
series, only into a finite truncation, where it is harmless.
(The quality of the approximation remains, of course, in doubt,
but this is resolved on the basis of comparisons with experiments.)
> In my opinion, it is just fooling ourselves.
In my opinion, you are fooling yourself.
Those who understand renormalization theory are not fools.
Arnold Neumaier
I agree that the integral of the square of the Dirac delta is ill-defined, but
there can be situations in which it appears incorrectly.
Specifically, the Dirac delta is often used as a projection operator. A
fundamental property of a projection operator P is:
P P = P
If one applies this first, before writing P as an integral over a Dirac delta,
then the problem disappears.
This is a rather general approach: if you obtain an expression that is
mathematically ill-defined for a computation that makes sense on physical
grounds, think more carefully about what you are doing. There may be an
alternate approach that is not ill-defined.
Tom Roberts
> Bob_for_short wrote:
>
>> I have an integral with a delta-function squared as an integrand:
>>
>> S = integral_a_b [delta(x - x0)]^2 dx over an interval (a,b)
>> including the point x0 completely. I always considered it to be
>> infinity. But I see that it is often called an "ill-defined"
>> integral (as if it could be defined differently). Why not call it
>> simply divergent, infinite?
>
> Because it is neither.
This is wrong, as has been pointed out in this thread by at least three
different authors.
> The delta function - and its products - have integrals that are well
> defined, convergent, and finite.
First, the delta is not a function. Second, the integral given by Bob is
neither well-defined nor finite.
i've always called those "functionals". a mapping of a metric space
of some sort to a real or complex number.
> the delta distribution is well-defined:
> delta(f) = f(0), (1)
> and the notation
> \int dx delta(x) f(x) = f(0)
> is just considered to be a formal expansion of (1).
those of us in signal processing call that a "sampling operator".
> But no such distributional meaning (nor any other) can be attached
> to the square of delta(x), whence mathematicians call delta(x)^2
> ill-defined.
and maybe i do, too. but what about the convolution of two deltas?
doesn't that become just another dirac delta function?
> Of course, one can give an ad hoc meaning to \int dx delta(x)^2;
> one can define it to be infinite, zero, or whatever else one finds
> convenient.
i am not sure how defining it to be "infinite" is very well defined, but
it says something to engineers. i do not think defining it as
anything else (zero or some finite value) can make any sense at all,
even to neanderthal engineers.
> But one cannot attach consistently a meaning to
> \int dx delta(x)^2 f(x)
> for arbitrary nice functions f, in a way that one can handle the
> integral according to the usual rules without getting contradictions.
if one defines it as a square of nascent deltas (like the Gaussian,
but it could be a simpler one), it is clear that the integral of
delta^2 goes to infinity as the little delta_x goes to zero. i can't
think of a single nascent delta where that is not true.
> Therefore, delta(x)^2 is mathematically as ill-defined as 0/0,
i don't see a 0/0 in this integral of (delta(t))^2. i see a finite/0
coming out of this limit. it becomes infinite.
> The latter can also be defined to be 1 or infinity, or NaN,
> but any such definition is inconsistent with the rules for handling
> numbers.
>
> See also the section ''The square of the delta function'' in Chapter
> B4: ''Divergences in quantum field theory'' of my theoretical physics
> FAQ at http://www.mat.univie.ac.at/~neum/physfaq/physics-faq.html.
i'll do that. thanx.
r b-j
I do not consider anybody to be a fool, of course. Fooling oneself means
making a conceptual mistake, in my opinion. I was just repeating
Feynman's well-known words.
I know the definition of distributions and all the subtleties connected
with their applications. In my case, which originated from a concrete,
nearly engineering calculation, I think I can consider delta(x) as a
"rectangular" function of width a and height 1/a, so of unit area, with
a -> 0. With this delta representation the integral of the square
diverges as 1/a. That is why I called it a positive infinity.
I think everyone has expressed himself and we can stop here.
With kind regards,
Vladimir.
> But no such distributional meaning (nor any other) can be attached
> to the square of delta(x), whence mathematicians call delta(x)^2
> ill-defined.
My initial thought was "well the delta function has a nice consistent
definition, and we have integration by parts.." but I neglected to actually
_do_ the integration to check.
\int dx delta(x) f(x), using integration by parts, expands into delta(x)f(0)
- f(0) \int [d/dx delta(x)] dx
On one hand, if I evaluate the integral directly I find the second integral
is equal to zero if I integrate the derivative of the delta function over a
symmetric interval due to the function being odd. On the other, if I simply
allow the differentials to cancel I find that the _entire_ integral is equal
to zero.
The detailed analysis in this thread has once again reminded me why I did
not like my real analysis courses. :P
[...]
> Arnold Neumaier wrote:
>> [...]
>
> I agree that the integral of the square of the Dirac delta is
> ill-defined, but there can be situations in which it appears
> incorrectly.
>
> Specifically, the Dirac delta is often used as a projection operator.
> A fundamental property of a projection operator P is:
> P P = P
> If one applies this first, before writing P as an integral over a
> Dirac delta, then the problem disappears.
This all looks like mathematical funambulism...
The Dirac delta, which is neither a function nor a projection operator,
is defined by
\int delta(x) f(x) dx = f(0) [*]
where f(x) is a function which must satisfy certain properties.
Sometimes it is said that f(x) is a test function for the distribution
delta(x). The meaning of the distribution is only obtained by using an
adequate functional space for the test functions.
The product of the distribution with the test function is well-defined,
and thus the integral [*] is too.
The problem here is that the product of distributions is not
well-defined
delta(x) delta(x) = ?
Therefore the following integral is ill-defined
\int delta(x) delta(x) dx = ?
(...)
P= delta(E-H) is only a distributional projection operator,
and does _not_ satisfy P^2=P, which is a property of normal
projection operators.
The spectrum of a P satisfying P^2=P is {0,1}, while that of
P= delta(E-H) is, in some sense, {0,inf}.
The square of P is as ill-defined as the square of delta(x).
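(A finite-dimensional caricature of this point, entirely a toy of mine:
regularize delta(E - H) by a Gaussian of width a for a toy discrete spectrum.
Squaring then multiplies the on-shell part by the regularized "delta(0)"
~ 1/(a*sqrt(pi)) instead of reproducing P, so P^2 = P fails by a factor that
diverges as a -> 0.)

# A Gaussian-regularized "P = delta(E - H)" is not idempotent.
import numpy as np

E = 2.0
levels = np.array([1.0, 2.0, 3.0])     # toy spectrum; the level 2.0 is "on shell"

def P_reg(a):
    return np.diag(np.exp(-((levels - E) / a)**2) / (np.sqrt(np.pi) * a))

for a in [0.1, 0.01, 0.001]:
    P = P_reg(a)
    ratio = (P @ P)[1, 1] / P[1, 1]    # on-shell entry of P^2 relative to P
    print(a, P[1, 1], ratio)
# for a true projector the ratio would be 1; here it equals 1/(sqrt(pi)*a)
# and diverges, which is the delta(0) hiding in "P^2".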
> ============= Moderator's note ========================================
>
> It is clear that one cannot give the square of a delta-distribution a
> definite meaning. Why do you think delta is "positive" (at best you
> need to define what you mean by positive, but I cannot imagine that
> it makes much sense to ascribe a one-form a sign)? Even, if you think of
> the delta distribution as a certain equivalence class of weak limits, it
> doesn't make sense. You can of course take specific examples of \delta
> sequences of functions that all are positive (e.g., take a series of
> Gaussians which become narrower and narrower around the origin), but
> there are as well sequences of functions with no specific sign which are
> proper delta-sequences like the sequence of sinc functions which is
> an important example for a delta-sequence in time-dependent perturbation
> theory of quantum mechanics.
>
> =======================================================================
It is not difficult to define a sequence f_n(x) such that
f_n(x) -> \delta(x) (in the usual weak sense)
and
\int dx f_n(x) \delta(x) tends to a well defined limit,
any desired one, -37 for example.
Jan
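(For concreteness, here is one sequence with exactly this property; the
particular construction is mine and is only a sketch. Take

f_n(x) = g_n(x) + (c - g_n(0)) cos(n^2 x) exp(-x^2),
with g_n(x) = (n/sqrt(pi)) exp(-n^2 x^2) and c = -37.

Each f_n is smooth and rapidly decreasing, and f_n(0) = c by construction, so
the pairing \int dx f_n(x) delta(x) = f_n(0) equals -37 for every n. On the
other hand, for any Schwartz test function phi the oscillatory term contributes
(c - g_n(0)) \int dx cos(n^2 x) exp(-x^2) phi(x), which decays faster than any
power of n, so \int dx f_n(x) phi(x) -> phi(0); that is, f_n -> delta in the
usual weak sense.)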
The term ''linear functional'' is the generic term for linear mappings
from an arbitrary vector space to its defining field. One talks
about distributions when the space is the space of real- or complex-valued
functions f with the property that x^m d^n/dx^n f(x) tends to zero
as |x| tends to infinity.
>> But no such distributional meaning (nor any other) can be attached
>> to the square of delta(x), whence mathematicians call delta(x)^2
>> ill-defined.
>
> and maybe i do, too. but what about the convolution of two deltas?
> doesn't that become just another dirac delta function?
Yes, that's unproblematic.
>> Of course, one can give an ad hoc meaning to \int dx delta(x)^2;
>> one can define it to be infinite, zero, or whatever else one finds
>> convenient.
>
> i am not sure how defining it to "infinite" is very well defined, but
> it says something to engineers. i do not think defining it as
> anything else (zero or some finite value) can make any sense at all,
> even to neanderthal engineers.
In some renormalization contexts, one can safely regard it as zero.
For nonzero values, see J.J. Lodder's response.
>> But one cannot attach consistently a meaning to
>> \int dx delta(x)^2 f(x)
>> for arbitrary nice functions f, in a way that one can handle the
>> integral according to the usual rules without getting contradictions.
>
> if one defines it as a square of nascent deltas (like the Gaussian,
> but it could be a simpler one), it is clear that the integral of
> delta^2 goes to infinity as the little delta_x goes to zero. i can't
> think of a single nascent delta where that is not true.
>
>> Therefore, delta(x)^2 is mathematically as ill-defined as 0/0,
>
> i don't see a 0/0 in this integral of (delta(t))^2. i see a finite/0
> coming out of this limit. it becomes infinite.
By the above calculation, delta(x)^2 is mathematically as ill-defined as
0/0 := lim a_n/b_n for a_n, b_n -> 0,
though the latter is otherwise totally unrelated.
So talking about \int dx delta(x)^2 has no mathematical content.
It may convey some intuitive information, but this may turn out to
be correct or misleading, depending on the circumstances.
Mathematicians prefer arguments that are reliable.
Arnold Neumaier
The rigged Hilbert space is just a formalization of the good old Hilbert
space theory with distributions lying in the dual space of a dense subspace,
where operators with continuous spectrum (like position or momentum
operators in quantum mechanics) are defined. The square of a Dirac Delta
(as of many other distributions too) stays undefined, and there is also
no need for it in quantum mechanics as long as one doesn't make
a mistake. This becomes clear when deriving Fermi's golden rule, where
the meaning of |S_{fi}|^2 with the energy-momentum-conserving delta-
distribution is analysed by noting that the momentum eigenstates of
asymptotically free particles are not Hilbert-space vectors but distributions,
and one has to use "wave packets" rather than these distributions to
define the states of asymptotically free scattering particles. The same physical
approach proves successful in the real-time perturbation theory in finite-
temperature field theory, where the Keldysh Green's functions of free
particles contain on-shell delta-distributions, and when one resums Dyson's
series this seems to be trouble, but this is resolved as soon as one puts
in the correct "causal regularization" of these delta-distributions and takes
the (weak) limit afterwards. I guess there are many more examples in
(quantum) theory, where one has to use the physicist's common sense to
define such expressions in the right way. An alternative is the mathematician's
approach of formulating the theory in a stricter way from the very beginning. :-)
HvH.
=====================================================================
Isn't this question and many more addressed by using Rigged Hilbert
Spaces? I seem to recall a thread like this not too long ago. Maybe
someone already mentioned it and I missed that, but during the past
thread I found this paper which offers a pretty nice explanation for
handling eigenfunctions of momentum and position operators and other
"divergent" quantities in quantum mechanics,
The role of the rigged Hilbert space in Quantum Mechanics, R. de la
Madrid, http://arxiv.org/abs/quant-ph/0502053v1.
--
-- Lou Pecora
Neumaier's remark pertains to a no-go theorem that lies at the root of
all problems with trying to extend distributions into the non-linear
domain.
For this reason Schwartz's theory is linear.
However -- as Grisse points out -- one can give meaning (and rigorous
background) to non-linear combinations of distributions.
The simplest approach -- the one I was first familiar with back in the
early 1980's -- is simply to take the infinity at face value and use
non-standard analysis. Luxembourg, in his general treatment of non-standard
analysis and its applications, discussed this in some detail
back then. So, I always thought the issue was resolved long ago.
The generic approach (the one taken as the "standard" in the
Mathematical community) is closely related to non-standard analysis --
namely, the approach grounded in Colombeau theory (and Neumaier's "no
meaning" remark -- given discussions in the past -- has that
specifically in mind). Colombeau is needed or is used in non-linear
dynamics (e.g. fluid dynamics) and in General Relativity (e.g. in
addressing the issue of signature change). The original intent of
Colombeau (and Rosenberg) was to use this specifically for quantum
field theory.
The two places where quadratic or higher order combinations of
distributions arise are also the cases where the classical theory,
which the quantum theory is a quantization of, has problems: the "self-
force" problem (which manifests itself in quantum theory by the
appearance of non-linear combinations of singular propagators whose
singular points overlap); and the "self-energy" problem (which
manifests itself in quantum theory by the appearance of non-linear
terms of a similar nature in the energy integral).
The specialized approach (for quantum theory) rests on various
incarnations of the same underlying idea, which go under various names
such as the Epstein-Glaser (or "causal") approach -- along with its
extension to curved-space settings, which rests on "microlocal
analysis" -- or differential renormalization.
To date, nobody I'm aware of has explicitly linked the generalities of
Colombeau theory to the specialized application seen in differential
renormalization or microlocal analysis. But I've seen enough of both
to clearly see that Colombeau and microlocal analysis have a large
degree of overlap.
All approaches have the same problem: they treat the tumor of
colliding infinities by trying to band-aid over it, and they ignore the
question of digging up and removing the root of the problem.
In the case of quantum theory, the root of the problem is that the
fields themselves are treated as distributional. This can never be
anything more than an idealization of something that is more
realistic, more messy and involves ordinary functions. But just what
that "something" is, is still unresolved.
The key to the singular nature of the field quantities is that they
are singular by virtue of the fact that propagators are singular on
the light-cone. In turn, this is the relativization of what in
ordinary 3-dimensional space (in a non-relativistic setting) is the
singularity of the field source. In a relativistic context, however,
this singularity propagates along light cones.
In order for the field quantities not to be singular, one of two
things must occur. Either (a) the light cone must be "smoothed over"
or "smeared out". This is what people are ACTUALLY referring to by the
problem of "small scales" (there are no scales, large or small, since
scale is not an invariant concept -- at least when the term "scale" is
taken in its usually-visualized meaning of "spatial size"). The
invariant quantities in relativity that go under the misnomer of
"scale" are proper length and proper time. What is ACTUALLY being
measured by these invariants is more properly thought of as "proximity
to the light cone", not as "scale".
Otherwise, (b) we have to revoke Lorentz (who "shut off" the vacuum by
making the (D,H) fields linear in (E,B) with CONSTANT coefficients)
and go back to Maxwell (who specifically intended for the coefficients
to be non-trivial so as to eliminate the problem of self-force
infinity; one of the sections in his treatise specifically being given
a title to call attention to this). More generally in a Lagrangian
field theory, the roles of (E,B) are played by the field configuration
variables and their gradients, while the roles of (D,H) are played by
the derivatives of the Lagrangian with respect to the configuration
variables and their gradients.
So, what (b) calls for is revoking the notion of Lagrangians that are
quadratic in the field gradients with constant coefficients. For
fermions, the corresponding fix would be to revoke the linear-with-
constant-coefficient nature of the Lagrangian.
In my opinion, there is nothing bad in fields being distributions. If
we invent a different coupling, the perturbation operator may become
well defined instead of being an overly singular product of distributions
("colliding infinities").
> So, what (b) calls for is revoking the notion of Lagrangians that are
> quadratic in the field gradients with constant coefficients. For
> fermions, the corresponding fix would be to revoke the linear-with-
> constant-coefficient nature of the Lagrangian.
I think we can safely preserve the free-motion equations for fields -
they may describe the separated variables of a compound system, namely
the center-of-inertia and the relative (internal) motion variables (i.e.,
quasi-particles). Understood this way, the interaction of compound
charged systems becomes rather smeared, i.e., naturally regularized (or
cut off). Inelastic channels in the scattering of compound systems give
all the richness of observable events.
On Apr 9, 6:03 pm, Tom Roberts <tjrob...@sbcglobal.net> wrote:
> ...
>
> > This is a rather general approach: if you obtain an expression that
> > is mathematically ill-defined for a computation that makes sense on
> > physical grounds, think more carefully about what you are doing.
> > There may be an alternate approach that is not ill-defined.
> >
> > Tom Roberts
>
Excellent point, Tom! This is what I practice and promote: finding a
better physical formulation and obtaining simple finite perturbative
series by virtue of a correct exact formulation (see
http://vladimirkalitvianski.wordpress.com and references therein).