
Distributions in integrals


Kaba

Mar 22, 2011, 9:16:49 PM
Hi,

It is somewhat common to see an expression such as

int_RR f(x) delta_y dx

where delta_y is the "delta distribution at y". This is then supposed to
evaluate to f(y).

Given a knowledge of Lebesgue integration, measure theory, and
distributions, how are such integrals formalized?

--
http://kaba.hilvi.org

Kaba

Mar 22, 2011, 10:52:30 PM

Oh, I see. The point is to take delta_y as a measure: it assigns 1 to any
set that contains y, and 0 otherwise. Then you have

int_RR f(x) d(delta_y) = f(y)
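This interpretation can be made concrete in a few lines (a sketch of my own, with sets represented as membership predicates; `dirac_measure` and `integrate_against_dirac` are names I made up for illustration):

```python
def dirac_measure(y):
    """The Dirac measure at y: a set function A |-> delta_y(A),
    where the set A is given as a membership predicate."""
    return lambda contains: 1 if contains(y) else 0

def integrate_against_dirac(f, y):
    # int_RR f(x) d(delta_y)(x) = f(y), since all the mass sits at y
    return f(y)

d = dirac_measure(0.5)
print(d(lambda x: 0 <= x <= 1))                        # [0,1] contains 0.5 -> 1
print(d(lambda x: x > 2))                              # (2,oo) misses 0.5  -> 0
print(integrate_against_dirac(lambda x: x * x, 3.0))   # f(y) = 9.0
```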

--
http://kaba.hilvi.org

Rob Johnson

Mar 23, 2011, 8:37:13 AM
In article <MPG.27f3767f5...@news.cc.tut.fi>,
Kaba <no...@here.com> wrote:

Tempered distributions are defined as linear functionals on their
test class, the Schwartz functions (infinitely differentiable
functions which, along with all of their derivatives, decay faster
than the reciprocal of any polynomial near infinity). See

<http://en.wikipedia.org/wiki/Distribution_%28mathematics%29#Tempered_distributions_and_Fourier_transform>

The delta distribution evaluates a test function at 0. This can be
translated to other points, too, but let's keep things simple.
Since all test functions are continuous, this is a well-defined
operation. There is no function that when integrated against all
test functions gives their value at 0, but there are approximations
to the delta distribution: Let g be a Lebesgue integrable function
so that

|\oo
| g(x) dx = 1
\|-oo

Then for any test function f,

        |\oo
 lim    |  f(x) t g(tx) dx = f(0)
t->oo  \|-oo

In this sense, t g(tx) is an approximation to the delta distribution.
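A quick numerical check of this limit (my own sketch, not from the post): take g to be a normalized Gaussian and a smooth bounded f, and evaluate the integral with a midpoint Riemann sum for growing t.

```python
import math

def g(x):
    return math.exp(-x * x) / math.sqrt(math.pi)   # int_RR g(x) dx = 1

def approx_delta(f, t, a=-10.0, b=10.0, n=100000):
    # midpoint Riemann sum for int_a^b f(x) t g(t x) dx
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) * t * g(t * (a + (k + 0.5) * h))
                   for k in range(n))

f = math.cos                       # smooth, bounded, with f(0) = 1
for t in (1.0, 10.0, 100.0):
    print(t, approx_delta(f, t))   # tends to f(0) = 1 as t grows
```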

The delta distribution can be realized as a singular measure with
weight 1 supported at 0. However, there are distributions that
cannot be realized by measures.

As singular and discontinuous as the delta distribution is, we can
take its derivative. The derivative of the delta distribution gives
the negative of the derivative of the test function at 0. We can,
however, find an approximation for the derivative of the delta
distribution. Let g be as before except let it be differentiable,
also, and let its derivative be Lebesgue integrable. Then,

        |\oo
 lim    |  f(x) t^2 g'(tx) dx = -f'(0)
t->oo  \|-oo

Thus, t^2 g'(tx) = (t g(tx))' is an approximation of the derivative
of the delta distribution.
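The same numerical check works here (again my own sketch): with g the normalized Gaussian, the sum should approach -f'(0).

```python
import math

def g_prime(x):
    # derivative of the normalized Gaussian g(x) = exp(-x^2)/sqrt(pi)
    return -2.0 * x * math.exp(-x * x) / math.sqrt(math.pi)

def approx_delta_prime(f, t, a=-10.0, b=10.0, n=100000):
    # midpoint Riemann sum for int_a^b f(x) t^2 g'(t x) dx
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) * t * t * g_prime(t * (a + (k + 0.5) * h))
                   for k in range(n))

f = math.sin                        # f'(0) = 1, so the limit should be -1
print(approx_delta_prime(f, 50.0))
```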

Perhaps I've missed your question, but I hope this helps.

Rob Johnson <r...@trash.whim.org>
take out the trash before replying
to view any ASCII art, display article in a monospaced font

Kaba

Mar 23, 2011, 6:02:58 PM
Rob Johnson wrote:
> In article <MPG.27f3767f5...@news.cc.tut.fi>,
> Kaba <no...@here.com> wrote:
> >It is somewhat common to see an expression such as
> >
> >int_RR f(x) delta_y dx
> >
> >where delta_y is the "delta distribution at y". This is then supposed to
> >evaluate to f(y).
> >
> >Given a knowledge of Lebesgue integration, measure theory, and
> >distributions, how are such integrals formalized?
>
> Tempered distributions are defined as linear functionals on their
> test class, the Schwartz functions (infinitely differentiable
> functions which, along with all of their derivatives, decay faster
> than the reciprocal of any polynomial near infinity).

Thanks, that "approximation" is an interesting viewpoint. Any reason why
you took the tempered distributions, rather than the general
distributions?

Well, my question actually is whether integration of distributions makes
sense. The distribution most people ever come in contact with is the
delta distribution, and I could get by in its uses by interpreting it as
a measure.

I guess we can define an anti-derivative of a distribution D as a
distribution E whose distributional derivative is D. But does it make
sense somehow to do (some generalized) Lebesgue integral over a general
distribution?

--
http://kaba.hilvi.org

Rob Johnson

Mar 24, 2011, 6:57:35 PM
In article <MPG.27f49a8f3...@news.cc.tut.fi>,
Kaba <no...@here.com> wrote:
>Rob Johnson wrote:
>> In article <MPG.27f3767f5...@news.cc.tut.fi>,
>> Kaba <no...@here.com> wrote:
>> >It is somewhat common to see an expression such as
>> >
>> >int_RR f(x) delta_y dx
>> >
>> >where delta_y is the "delta distribution at y". This is then supposed to
>> >evaluate to f(y).
>> >
>> >Given a knowledge of Lebesgue integration, measure theory, and
>> >distributions, how are such integrals formalized?
>>
>> Tempered distributions are defined as linear functionals on their
>> test class, the Schwartz functions (infinitely differentiable
>> functions which, along with all of their derivatives, decay faster
>> than the reciprocal of any polynomial near infinity).
>
>Thanks, that "approximation" is an interesting viewpoint. Any reason why
>you took the tempered distributions, rather than the general
>distributions?

The delta distribution is a tempered distribution, and, having studied
Fourier Analysis, I am better acquainted with tempered distributions.
I didn't see a reason to go beyond tempered distributions.

>Well, my question actually is if integration of distributions makes
>sense. The distribution most people ever come in contact with is the
>delta distribution. I could get away with its uses by interpreting it as
>a measure.
>
>I guess we can define an anti-derivative of a distribution D as a
>distribution E whose distributional derivative is D. But does it make
>sense somehow to do (some generalized) Lebesgue integral over a general
>distribution?

Remember that the integral representation for a distribution does not
mean that there is an actual function or even a measure that would
give that distribution when integrated against the test functions.

The derivative of a distribution is defined as if one could apply
integration by parts. For a distribution u, u' is defined as

u'(f) = -u(f')

or written in the pseudo-integral fashion

 |\oo                   |\oo
 |  f(x) u'(x) dx  =  - |  f'(x) u(x) dx
\|-oo                  \|-oo

for each test function f. If u can be realized as an actual function
or measure, then the right side of the pseudo-integral becomes a real
integral.
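This definition is easy to encode directly (my own sketch, not code from the post): a distribution is just a functional on test functions, and its derivative applies the functional to minus the derivative of the test function, here taken by a central difference.

```python
def delta(f):
    return f(0.0)                     # the delta distribution: evaluate at 0

def derivative(u, eps=1e-5):
    # distributional derivative: u'(f) = -u(f')
    def diff(f):
        return lambda x: (f(x + eps) - f(x - eps)) / (2.0 * eps)
    return lambda f: -u(diff(f))

delta_prime = derivative(delta)
print(delta_prime(lambda x: x ** 3 + 2.0 * x))   # -f'(0) = -2
```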

If f is a Schwartz function and

|\oo
| f(x) dx = 0 [1]
\|-oo

then
        |\t
 F(t) = |  f(x) dx
       \|-oo

is also a Schwartz function. Thus, we could similarly define the
integral, U, of a distribution, u, to be

U(f) = -u(F)

for each test function f which satisfies [1]. Since constant functions
pair to zero against any f satisfying [1], the integral of a
distribution is determined only up to an additive constant. For
example, the integral of the Dirac distribution would be the usual
Heaviside function, H(x):

<http://en.wikipedia.org/wiki/Heaviside_step_function>

However, against the test functions satisfying [1], H(x) - 1 = -H(-x)
yields the same functional as does H(x), but these two do not act the
same on general test functions. So, how to extend the integral of a
distribution to a general test function is not well-defined.
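A numerical check of the delta/Heaviside case (my own sketch): for a zero-mean f, the functional U(f) = -u(F) with u = delta, i.e. -F(0) = -int_{-oo}^0 f, should agree with integrating f against H.

```python
import math

def riemann(f, a, b, n=100000):
    # midpoint Riemann sum for int_a^b f(x) dx
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

def f(x):
    return x * math.exp(-x * x)      # odd, so int_RR f = 0: satisfies [1]

U_f = -riemann(f, -10.0, 0.0)        # U(f) = -delta(F) = -F(0)
H_f = riemann(f, 0.0, 10.0)          # int_RR H(x) f(x) dx = int_0^oo f
print(U_f, H_f)                      # both equal int_0^oo x e^{-x^2} dx = 1/2
```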
