
cumulative probability analysis


BilZ0r
Aug 17, 2004, 11:12:57 PM
So in the paper I'm reading, they're doing intracellular voltage-clamp
recording from a single CA1 pyramidal cell. The paper is looking at IPSCs
induced by serotonin, and they have this "cumulative probability analysis"
graph... well, two of them. They both have "Cumulative Probability" on the Y
axis, and one has "amplitude" on the X, the other has "interevent
interval".

I figure that amplitude refers to the amplitude of the IPSC, but what are
the cumulative probability and the interevent interval referring to? The
cumulative probability of there being an IPSC of that amplitude? And what
are the events that the interevent interval is an interval of?

tomte
Aug 18, 2004, 6:17:10 AM
Hi!


> I figure that amplitude refers to the amplitude of the IPSC, but what are
> the cumulative probability and the interevent interval referring to? The
> cumulative probability of there being an IPSC of that amplitude?

Right idea! That's quite close. It's the probability of an incoming
IPSC with the indicated amplitude or lower. The last point is important!
cumulative probability := P(amplitude ≤ X)
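
If it helps, here is a minimal sketch of that definition in Python/NumPy
(the amplitude values are made up, purely for illustration):

import numpy as np

# hypothetical IPSC amplitudes (pA), just for illustration
amplitudes = np.array([12.0, 15.0, 18.0, 22.0, 27.0, 35.0, 50.0])

x = 25.0
# fraction of events with amplitude <= x, i.e. the empirical P(amplitude <= X)
cum_prob = np.mean(amplitudes <= x)
print(cum_prob)   # 4 of the 7 events are <= 25 pA, so this prints ~0.571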

> And what are the events that the interevent interval is an interval of?

The events are the postsynaptic events, i.e. the IPSCs. The interevent
interval is the interval between two successive IPSPs (or IPSCs).

Best regards,
Thomas

Glen M. Sizemore
Aug 18, 2004, 8:01:17 AM
T: Right idea! That's quite close. It's the probability of an incoming
IPSC with the indicated amplitude or lower.

GS: Isn't this sometimes called a "survivor plot", where the ordinate is
usually a log scale? As far as the inter-event plot goes, do you guys ever
use the opposite of cumulative probability, i.e., plot all the inter-event
intervals in a particular bin divided by that number plus all the
inter-event intervals of greater duration? That gives you the probability
of that inter-event interval given that it CAN happen (that's why all those
of shorter duration are removed).
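
In case my description is hard to parse, here is a rough sketch of that
calculation in Python/NumPy (the simulated intervals and the 5 ms bins
are my own choices, just for illustration):

import numpy as np

rng = np.random.default_rng(0)
# simulated inter-event intervals (ms), standing in for real data
intervals = rng.exponential(scale=100.0, size=1000)
bin_edges = np.arange(0.0, 1005.0, 5.0)   # 5 ms bins from 0 to 1000 ms

counts, _ = np.histogram(intervals, bins=bin_edges)

# for each bin: count in that bin plus counts in all longer-interval bins
still_possible = counts[::-1].cumsum()[::-1]

# probability of an interval of that length given that it can still happen
# (bins with nothing left in the tail just come out as 0)
conditional_prob = counts / np.maximum(still_possible, 1)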

"tomte" <tehga...@web.de> wrote in message
news:1adcf6cc.0408...@posting.google.com...

Matt Jones
Aug 18, 2004, 1:44:34 PM
BilZ0r <Bil...@TAKETHISOUThotmail.com> wrote in message news:<Xns95499AF2E278F...@202.20.93.13>...


To get the cumulative probability here's what you do:

1) Make a standard histogram of interevent intervals (IEI),
amplitudes, rise times or whatever. That is, for the parameter of
interest (say, IEI), create a number of bins spanning the range of
values you observed (say, from 0 to 1000 ms in steps of 5 ms). Then for
each bin, count the number of events that had that value. For
example, when looking at interevent intervals of a Poisson process,
one should get a histogram that decays exponentially. If looking at
amplitudes of mEPSCs at the neuromuscular junction, one would get an
amplitude histogram shaped like a Gaussian (but not usually at central
synapses, where the distribution is skewed). These are actually
"frequency histograms", since you are looking at the frequency of
observing particular events.

2) Divide each binned value by the sum of all the values. This makes
the bin heights sum to 1. So now the height in each bin is
approximately the probability of observing that class of events. This
is now the "probability distribution".

3) To get the cumulative probability distribution, make a new
histogram using the same bin spacing, but now fill each bin with the
SUM of all the bin heights from 2) leading up to and including the
current bin. Now, the height of each new bin tells the probability of
observing an event less than or equal to the current value. This
distribution (obviously) starts at zero, curves upward approximately
sigmoidally, and asymptotes toward 1 (i.e., after examining all
events, the probability is 1 that you will have observed events less
than or equal to the largest event).
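
Here is a minimal sketch of those three steps in Python/NumPy (the
simulated Poisson-process intervals are just stand-in data, not anything
from a real recording):

import numpy as np

rng = np.random.default_rng(0)
iei = rng.exponential(scale=100.0, size=2000)   # simulated IEIs, ms

# 1) frequency histogram over the observed range
bin_edges = np.arange(0.0, 1005.0, 5.0)         # 0 to 1000 ms in 5 ms steps
counts, _ = np.histogram(iei, bins=bin_edges)

# 2) probability distribution: divide each bin by the total counted events
prob = counts / counts.sum()

# 3) cumulative probability distribution: running sum of the bin
#    probabilities; bin i now holds P(IEI <= upper edge of bin i),
#    rising from ~0 toward 1
cum_prob = np.cumsum(prob)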

The cumulative distribution is useful for several reasons:

1) It is smoother than the raw distribution (because the summation smooths
out fluctuations between bins like a running average). This also means
that you can place two similar CDFs on top of each other and it's
easier to see whether they're different or not. This is hard to do
with the raw histograms because they're usually all lumpy.

2) It makes it easy to tell whether the parent distribution was symmetric
or skewed.

3) Certain parameters of interest can be read right off the graph. For
example, the point on the x-axis where the graph goes through 0.5 on
the y-axis is the median of the parent distribution, the point where
it goes through 0.95 is the 95th percentile, etc.
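
For example, with a cumulative curve built directly from the sorted data
(x = sorted values, y = fraction of events at or below each value), the
median and 95th percentile can be pulled off programmatically like this
(again with simulated intervals as stand-in data):

import numpy as np

rng = np.random.default_rng(1)
iei = np.sort(rng.exponential(scale=100.0, size=500))   # simulated IEIs, ms
cum_prob = np.arange(1, iei.size + 1) / iei.size        # empirical cumulative probability

# x values where the cumulative curve first reaches 0.5 and 0.95
median_iei = iei[np.searchsorted(cum_prob, 0.5)]
p95_iei = iei[np.searchsorted(cum_prob, 0.95)]
# roughly 69 ms and 300 ms for an exponential distribution with mean 100 ms
print(median_iei, p95_iei)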


Matt


PS:

BilZ0r, you've been asking a lot of questions about basic
electrophysiology analysis methods lately. I recommend reading a
spectacular (but short and very clear) book by Bernard Katz called
"Nerve, Muscle and Synapse". This is hard to find but you might get a
used copy on Amazon. It goes through the methodology of experiments
and analysis of such things as the Hodgkin-Huxley equations, quantal
analysis, and so forth. A great explanation of the fundamental
principles of electrical neuroscience, written by a genius and
founding father of the field. This book should be handed out to
neuroscience grad students on their first day of school (or
undergraduate humanities majors, for that matter. It's so well written,
they could probably learn a lot from it).

Matt

BilZ0r
Aug 19, 2004, 4:42:35 PM
tehga...@web.de (tomte) wrote in
news:1adcf6cc.0408...@posting.google.com:

Ohhhh, like that! I figured they would label the x axis "cumulative
amplitude" or "> amplitude" or something if they meant that. Thanks for
the answer.

BilZ0r
Aug 19, 2004, 4:45:21 PM
jone...@physiology.wisc.edu (Matt Jones) wrote in
news:b86268d4.04081...@posting.google.com:

> BilZ0r, you've been asking a lot of questions about basic
> electrophysiology analysis methods lately. I recommend reading a
> spectacular (but short and very clear) book by Bernard Katz called
> "Nerve, Muscle and Synapse". This is hard to find but you might get a
> used copy on Amazon. It goes through the methodology of experiments
> and analysis of such things as the Hodgkin-Huxley equations, quantal
> analysis, and so forth. A great explanation of the fundamental
> principles of electrical neuroscience, written by a genius and
> founding father of the field. This book should be handed out to
> neuroscience grad students on their first day of school (or
> undergraduate humanities majors, for that matter. It's so well written,
> they could probably learn a lot from it).
>
> Matt


Katz, that name certainly rings a bell. I just got through the first half
of Bertil Hille's Ion Channels of Excitable Membranes; I think that
book had many figures taken from Katz's work... in giant squid axons
maybe?
