
Normal distribution function with skew and kurtosis


Phil Sherrod

Jun 8, 2006, 8:04:46 PM
The probability density function for the normal distribution with mean 'm'
and standard deviation 's' is:

1/(s*sqrt(2*Pi)) * exp(-(X-m)^2 / (2*s^2))

How can this formula be generalized to include skew and kurtosis?
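
As a quick numerical check (a minimal sketch assuming numpy and scipy, which the post itself does not use), the density above matches scipy.stats.norm, and the normal family has zero skewness and zero excess kurtosis for every choice of m and s:

import numpy as np
from scipy import stats

# Hypothetical example values for the mean m and standard deviation s.
m, s = 1.5, 2.0
x = np.linspace(m - 4 * s, m + 4 * s, 9)

# The density exactly as written above, with explicit parentheses.
pdf_formula = 1.0 / (s * np.sqrt(2 * np.pi)) * np.exp(-(x - m) ** 2 / (2 * s ** 2))
assert np.allclose(pdf_formula, stats.norm.pdf(x, loc=m, scale=s))

# Skewness and excess kurtosis of the normal family are both zero.
print(stats.norm.stats(loc=m, scale=s, moments="sk"))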

--
Phil Sherrod
(phil.sherrod 'at' sandh.com)
http://www.dtreg.com
http://www.nlreg.com

David A. Heiser

Jun 8, 2006, 9:24:51 PM

"Phil Sherrod" <phil.s...@REMOVETHISsandh.com> wrote in message
news:wsmdnY4j0OumJhXZ...@giganews.com...
+++++++++++++++++++++++++++++++++++++++++++++++++++++
The normal distribution has no skewness or excess kurtosis. Also,
skewness has so many definitions and equations that there is no single,
truly accepted measure; kurtosis likewise has many different equations.
There is no single parameter that explicitly measures or defines the
skewness or symmetry of a data set. For a normal distribution the
Pearson kurtosis measure (beta2 = mu4/sigma^4) is exactly 3;
consequently, excess kurtosis (Fisher's g2) refers to differences from 3.
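
To make the conventions concrete, a minimal sketch (assuming scipy, which the post does not mention) comparing Fisher's g1 and g2, which are near 0 for normal data, with the Pearson-style kurtosis, which is near 3:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)           # hypothetical large normal sample

g1 = stats.skew(x)                     # Fisher's g1: roughly 0
g2 = stats.kurtosis(x, fisher=True)    # excess kurtosis (beta2 - 3): roughly 0
b2 = stats.kurtosis(x, fisher=False)   # Pearson-style kurtosis: roughly 3
print(g1, g2, b2)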

David Heiser


Robert Israel

Jun 9, 2006, 1:18:02 AM
In article <wsmdnY4j0OumJhXZ...@giganews.com>,

Phil Sherrod <phil.s...@REMOVETHISsandh.com> wrote:
>The probability density function for the normal distribution with mean 'm'
>and standard deviation 's' is:
>
> 1/(s*sqrt(2*Pi)) * exp(-(X-m)^2 / (2*s^2))
>
>How can this formula be generalized to include skew and kurtosis?

There are lots of generalizations. For example, you might try
the Pearson distributions. See e.g.
<http://eom.springer.de/P/p071920.htm>
and references there.
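
One concrete entry point, as a sketch assuming scipy's pearson3 (the Pearson type III member of that system, whose shape parameter is the skewness):

from scipy import stats

# Hypothetical target mean, standard deviation and skewness;
# skew = 0 reduces pearson3 to the normal distribution.
m, s, skew = 0.0, 1.0, 0.8
dist = stats.pearson3(skew, loc=m, scale=s)

mean, var, g1, g2 = dist.stats(moments="mvsk")
print(mean, var, g1, g2)   # skewness 0.8, excess kurtosis 1.5*skew**2 = 0.96

# In type III the kurtosis is tied to the skewness, so matching an
# arbitrary (skewness, kurtosis) pair needs other types in the system.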

Robert Israel isr...@math.ubc.ca
Department of Mathematics http://www.math.ubc.ca/~israel
University of British Columbia Vancouver, BC, Canada

Greg Heath

Jun 9, 2006, 9:44:57 AM

Phil Sherrod wrote:
> The probability density function for the normal distribution with mean 'm'
> and standard deviation 's' is:
>
> 1/(s*sqrt(2*Pi)) * exp(-(X-m)^2 / (2*s^2))
>
> How can this formula be generalized to include skew and kurtosis?

Go to Google Groups and search on

johnson-transformations

Hope this helps.

Greg

Reef Fish

Jun 9, 2006, 11:48:04 AM

Robert Israel wrote:
> In article <wsmdnY4j0OumJhXZ...@giganews.com>,
> Phil Sherrod <phil.s...@REMOVETHISsandh.com> wrote:
> >The probability density function for the normal distribution with mean 'm'
> >and standard deviation 's' is:
> >
> > 1/(s*sqrt(2*Pi)) * exp(-(X-m)^2 / (2*s^2))
> >
> >How can this formula be generalized to include skew and kurtosis?

A strange question that has drawn many different strange answers.

The normal FAMILY of distributions is parametrized only by its first
two moments (mean and variance); it is a member of the stable family
of symmetric distributions.

Many other families MAY be considered "generalizations", but strictly
speaking they really aren't.

One might say that the Family of Student-T distributions is already
a generalization for each finite degree of freedom -- it converges
to the normal distribution as the degrees of freedom go to infinity.
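
That convergence is easy to check numerically; a minimal sketch, assuming scipy (which the post does not use): the excess kurtosis of the t family is 6/(df - 4) for df > 4 and shrinks toward 0 as the degrees of freedom grow.

from scipy import stats

# Skewness stays 0; excess kurtosis 6/(df - 4) tends to 0 as df grows.
for df in (5, 10, 30, 100, 1000):
    skew, exkurt = stats.t.stats(df, moments="sk")
    print(df, float(skew), float(exkurt))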

> There are lots of generalizations. For example, you might try
> the Pearson distributions.

One might say the N.L.Johnson family of distributions is a
generalization.

http://www.stat.auburn.edu/fsdd2006/papers/Alderman.pdf
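
As an illustration, a minimal sketch of the S_U member of that family, assuming scipy's johnsonsu (nothing in the post depends on it): it is a sinh transform of a standard normal variate, and its two shape parameters move the skewness and the tail weight.

import numpy as np
from scipy import stats

# If Z ~ N(0,1), then X = sinh((Z - a) / b) follows johnsonsu(a, b).
a, b = -1.0, 2.0                       # hypothetical shape parameters
dist = stats.johnsonsu(a, b)

mean, var, skew, exkurt = dist.stats(moments="mvsk")
print(float(mean), float(var), float(skew), float(exkurt))

# Cross-check by transforming standard normal draws directly.
rng = np.random.default_rng(1)
x = np.sinh((rng.standard_normal(200_000) - a) / b)
print(stats.skew(x), stats.kurtosis(x))   # close to the values above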

I recall a paper in JASA by Mark E. Johnson, on a family of
distributions for simulation of DEPARTURES from the normal
distribution, parametrized by the skewness and kurtosis coefficients.
I don't have the JASA reference, but the one below might be the
pre-JASA version of the paper:

Mark E. Johnson, "Distribution selection in statistical simulation
studies," Proceedings of the 18th Conference on Winter Simulation,
pp. 253-259, December 8-10, 1986, Washington, D.C., United States.

Beyond those, I suppose anything unimodal that bears some resemblance
to a Gaussian distribution may be considered a generalization of it.

Among the ones mentioned above and by others, I think the Mark Johnson
distribution (parametrized by the skewness and kurtosis coefficients)
seems closest to what the OP might be looking for, because it is one
distribution for which you can pre-assign values of the 3rd and 4th
moments directly.

-- Bob.


beli...@aol.com

Jun 15, 2006, 8:49:18 AM

Replacing (x-m)^2 with |x-m|^p gives the exponential power
distribution, also known as the generalized error distribution, which
has positive excess kurtosis for p < 2. The Student t distribution also
has positive excess kurtosis and approaches the normal as the degrees
of freedom approach infinity. Skewness can be introduced by allowing
the "s" scale parameter to differ on the two sides of x = m, as
discussed in

http://greywww.kub.nl:2080/greyfiles/center/1996/doc/58.pdf
On Bayesian modelling of fat tails and skewness
Fernandez, C. and Steel, M.F.J. (Tilburg University, Center for Economic Research)
Abstract
We consider a Bayesian analysis of linear regression models that can
account for skewed error distributions with fat tails. The latter two
features are often observed characteristics of empirical data sets, and
we will formally incorporate them in the inferential process. A general
procedure for introducing skewness into symmetric distributions is
first proposed. Even though this allows for a great deal of flexibility
in distributional shape, tail behaviour is not affected. In addition,
the impact on the existence of posterior moments in a regression model
with unknown scale under commonly used improper priors is quite
limited. Applying this skewness procedure to a Student-t
distribution, we generate a "skewed Student" distribution, which
displays both flexible tails and possible skewness, each entirely
controlled by a separate scalar parameter. The linear regression model
with a skewed Student error term is the main focus of the paper: we
first characterize existence of the posterior distribution and its
moments, using standard improper priors and allowing for inference on
skewness and tail parameters. For posterior inference with this model,
a numerical procedure is suggested, using Gibbs sampling with data
augmentation. The latter proves very easy to implement and renders the
analysis of quite challenging problems a practical possibility. Two
examples illustrate the use of this model in empirical data analysis.
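
A minimal sketch of both ideas, assuming scipy's gennorm for the exponential power distribution (shape p, with p = 2 the normal case) and the inverse-scale-factor skewing described in the abstract; the function name skewed_pdf and the parameter gamma below are illustrative choices, not anything taken from the paper:

import numpy as np
from scipy import stats

# Exponential power / generalized error distribution: p = 2 is normal,
# p < 2 gives positive excess kurtosis, p > 2 negative.
for p in (1.0, 1.5, 2.0, 4.0):
    print(p, float(stats.gennorm.stats(p, moments="k")))

def skewed_pdf(x, m=0.0, s=1.0, p=2.0, gamma=1.5):
    """Two-piece skewed exponential power density: effective scale
    s*gamma right of m and s/gamma left of m (gamma = 1 is symmetric)."""
    z = (x - m) / s
    halves = np.where(z >= 0,
                      stats.gennorm.pdf(z / gamma, p),
                      stats.gennorm.pdf(z * gamma, p))
    return (2.0 / (gamma + 1.0 / gamma)) / s * halves

xs = np.linspace(-6.0, 6.0, 1201)
dens = skewed_pdf(xs, p=2.0, gamma=1.5)
print(float(dens.sum() * (xs[1] - xs[0])))   # ~1.0: still a proper density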

Note to OP: Google Groups allows a message to be posted to a maximum of
5 groups. Although other methods of accessing Usenet may allow more, it
is probably a good guideline in any case. I have trimmed the list of
newsgroups.
