
The Acoustic Evidence: Test Accuracy and False vs True Positives


Mitch Todd

Nov 27, 2001, 1:22:26 AM

Suppose a number of things are to be tested to find out whether they
have a particular quality. The result of each test is binary, yes
or no, and is prone to being wrong some small percentage of the time.
If T is the number of tests performed, P is the number of tested
things that actually have the quality being tested for, and A is the
accuracy of the test (i.e., the probability that the test will give
the correct answer), then the testing results will show:

P * A true positives (we'll call that Pt for short),
P * (1 - A) false negatives (Nf),
(T - P) * A true negatives (Nt), and
(T - P) * (1 - A) false positives (Pf).

Plus,

T = Pt + Pf + Nt + Nf
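
To make that concrete with some made-up numbers: if T = 1000, P = 200,
and A = 0.9, you'd expect about Pt = 180, Nf = 20, Nt = 720, and
Pf = 80, which does indeed add back up to 1000.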

Now, assume that we are given the results of a set of tests,
but we don't know the accuracy of the test, or the number
of actual positives. Since we know the number of indicated
positives (call this number Pi), we can get some idea of the
relationship between test accuracy and the number of
actual positives in the original sample. Pi = Pt + Pf, so
starting with

T = Pi + Nt + Nf

we wind up with

{Pi - T * (1 - A)} / (2A - 1) = P

and

(T - Pi - P) / (T - 2P) = A

(The derivations of these equations are at
the end of the post.) One thing that's immediately
obvious is that

Pi > T * (1 - A)

to have any chance of having an actual positive
to test for.

You're asking, Mitch, why the hell are you doing this?
I ask myself the same thing all the time! Seriously,
these equations can give us an idea of how accurate
BBN's test methodology has to be in order to
give us meaningful data.

BBN performed 12 * 36 * 6 comparisons between
recordings they made in Dealey Plaza and the
six sets of impulses on the DPD channel 1 tape.
That's a total of 2592 tests. I'm feeling generous today,
so I'll note that the first set of twelve microphones
produced no matches, and neither did any of the
recordings of pistol shots. That will reduce the
number of tests to 11 * 24 * 6. That's 1584
comparisons. From those comparisons, BBN
reported 15 matches.

Now, with these numbers, we can figure out how
accurate the test will have to be in order to have
one true positive in the results.

(T - Pi - P) / (T - 2P) = A
(1584 - 15 - 1) / (1584 - 2) = A
A = 99.1%

If you want a majority of the positives to be true,
then the accuracy has to be at least:

(1584 - 15 - 8) / (1584 - 16)
A = 99.6%

The required accuracy numbers for 2592 tests are
99.5% (1 positive) and 99.7% (8 positives).
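
If you want to play with the arithmetic yourself, here's a quick Python
sketch of my own (nothing to do with BBN's actual software) that grinds
through the same formula:

# Required overall accuracy, assuming the false positive and false
# negative rates are equal: A = (T - Pi - P) / (T - 2P)
def required_accuracy(T, Pi, P):
    return (T - Pi - P) / (T - 2 * P)

for T in (1584, 2592):        # with and without my "generous" reduction
    for P in (1, 8):          # one true positive / a majority of the 15
        A = required_accuracy(T, Pi=15, P=P)
        print(f"T={T}, P={P}: required A = {A:.1%}")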

In the real world, things get more complicated.
The rate of false negatives can be different from
the rate of false positives, sometimes significantly
so. If the rate of false positives is greater than the
rate of false negatives, then the numbers I just
calculated would be even higher. Conversely, if the
false positive rate is the smaller of the two, then
the accuracy requirement would be relaxed.

The upshot? If BBN's methodology isn't
very, very accurate, then the whole shebang
will have to be either rethought or thrown out
altogether. BTW, I'm not really arguing either
for or against the BBN study here. I think that
this exercise is good food for thought, and
not just for the acoustics controversy. Think
about it: if 5% of your coworkers use illegal
drug X (no pun intended....okay, I lied), and
your entire office is subjected to a test for X
that is 95% accurate, how many of the people
who test positive are actually users of drug X?
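
For anyone who wants to check their answer, a couple of lines of Python
(same equal-error-rate assumption as above, office size picked out of
thin air) do the bookkeeping:

# Drug X example: 5% prevalence, 95% accuracy, equal error rates
T, P, A = 1000, 50, 0.95          # a hypothetical office of 1000 people
true_pos = P * A                  # 47.5 users correctly flagged
false_pos = (T - P) * (1 - A)     # 47.5 non-users incorrectly flagged
print(f"{true_pos / (true_pos + false_pos):.0%}")   # prints 50%

Half the positives are false; a positive result is a coin flip.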

MST

Here are the derivations I promised:

T = Pi + Nt + Nf
T = Pi + A* (T - P) + P * (1 - A)
T = Pi + AT - AP + P - AP
T = Pi + AT + P - 2AP
T * (1 - A) - Pi = P (1-2A)
{T * (1 - A) - Pi} / (1 - 2 A) = P
{Pi - T * (1 - A)} / (2A - 1) = P

T = Pi + Nt + Nf
T = Pi + A* (T - P) + P * (1 - A)
T = Pi + AT - AP + P - AP
T = Pi + AT + P - 2AP
T - Pi - P = AT - 2AP
(T - Pi - P) / (T - 2P) = A
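
And a quick sanity check on the algebra, if you want one (a throwaway
Python sketch of mine): pick T, P, and A, generate the Pi you'd expect
to see, and make sure the two formulas give back the P and A you
started with.

# Round-trip check of the two derived formulas
T, P, A = 1584, 8, 0.996                 # arbitrary trial values
Pi = P * A + (T - P) * (1 - A)           # indicated positives = Pt + Pf

P_back = (Pi - T * (1 - A)) / (2 * A - 1)
A_back = (T - Pi - P) / (T - 2 * P)

print(P_back, A_back)                    # 8.0 and 0.996, modulo float fuzz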

Clark Wilkins

Nov 27, 2001, 2:36:31 PM

"Mitch Todd" <jere...@earthlink.com> wrote in message
news:1OFM7.8273$WC1.9...@newsread2.prod.itd.earthlink.net...


Okay. So BBN's study must be nearly 100% accurate to produce even 8
positives?
Am I reading you correctly?

Also, can we calculate the number of false positives we would expect the BBN
study to produce? You gave the following equation:

(T - P) * (1 - A) false positives (Pf).


I am wondering what the chances of the GK "shot" being a false positive are?
Given some 1500 plus tests I hadn't considered this possibility before. I
had simply taken one result, found it matched the GK shot to within a 95%
certainty, and said "Hey! This meets generally accepted scientific
confidence intervals. It must be a shot or we're getting some awfully
freakish random noise." Now I'm wondering if it's that simple.


This has been a radical week for me. First, in spite of my debate with
Venem, I was at the time of my debate with him a believer in the SBT. I
knew it wasn't a high probability (3%) but 3% is still possible. Venem had
made a convincing argument for placing when JBC was hit but made a far less
convincing argument for when JFK was hit (It came down, more or less, to his
opinion). That was when Barb posted JFK's elbows had begun to rise before
JFK passed behind the sign and that's when, for me, the SBT went right out
the window. Now I see here that there exists a calculation for a false
positive on the GK shot and, while I haven't done the calculation, if it's
even close to one then the BBN study is, for me, also out the window (Unless
Anthony Marsh can convince me otherwise).

I don't normally dabble with what shot was fired when, from where, and by
how many shooters, but I am finding what little faith I had in both sides to
be easily rattled. Dave Reitzes posted an interesting argument for a
pre-Z160 shot (provided you exclude his "Rosemary Willis" claims) this
last summer, which convinced me to do a frame-by-frame analysis of the
Z film to look for this shot. Not only did I not find it, but I found
that the so-called "jiggle analysis" made by others and claimed as
evidence for placing shots is all a bunch of pure garbage. Oh! I did
manage to find evidence of
a shot on the film but it wasn't where either side told me I should find it.
Then I learned Oswald can't fire a shot here anyway, unless he fires a
shot through the side wall of the building, and still be in position to
make the hit on JBC that Venem has claimed.
So where does that leave me now? It looks like I have to consider no
GK shooter, no SBT, and no pre-Z160 shot - which leaves me with Oswald
firing three shots and scoring three hits and very little clue of how he did
it.
It seems the more I read and think about it, the further behinder I
get.

So Mitch? What's the chance of the GK shot being a false positive?


Just curious.


::Clark::

Mitch Todd

Nov 28, 2001, 8:25:48 AM

"Clark Wilkins"wrote:
> "Mitch Todd" <jere...@earthlink.com> wrote:
[...]

> > Now, with these numbers, we can figure out how
> > accurate the test will have to be in order to have
> > one true positive in the results.
[...]
> > A = 99.1%

> > If you want a majority of the positives to be true,

> > then the accuracy has to be at least: [...]A = 99.6%

> > The required accuracy numbers for 2592 tests are
> > 99.5% (1 positive) and 99.7% (8 positives).


> Okay. So BBN's study must be nearly 100% accurate to produce even 8
> positives? Am I reading you correctly?

Basically, yes.

> Also, can we calculate the number of false positives we would expect the
> BBN study to produce? You gave the following equation:

> (T - P) * (1 - A) false positives (Pf).

We can, if we know what the accuracy of the procedure is, and if the rate
of false positives is the same as the rate of false negatives. If they are
different, then you'll need to use Bayesian methods.
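
To give a flavor of what that looks like (my own toy illustration, with
made-up sensitivity and specificity numbers, not anything measured for
BBN's procedure):

# Bayes' rule with separate error rates: the chance that any one
# indicated positive is a true positive
def prob_true_positive(prevalence, sensitivity, specificity):
    p_pos = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
    return prevalence * sensitivity / p_pos

# e.g. 6 real shot patterns somewhere in 1584 comparisons, with a test
# that catches 99% of true matches and false-alarms on 1% of non-matches
print(prob_true_positive(6 / 1584, 0.99, 0.99))    # roughly 0.27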


> I am wondering what the chances of the GK "shot" being a false positive
> are?
> Given some 1500 plus tests I hadn't considered this possibility before. I
> had simply taken one result, found it matched the GK shot to within a 95%
> certainty, and said "Hey! This meets generally accepted scientific
> confidence intervals. It must be a shot or we're getting some awfully
> freakish random noise." Now I'm wondering if it's that simple.

If it were only that simple, would this group exist? ;->

Unfortunately, it really isn't that simple. This sort of analysis is very
dependent on how many alternative hypotheses you can account for. Weiss
and Aschkenasy only considered the null hypothesis, that the impulses were
just random noise. They didn't consider the two other matches for that
particular impulse group. That kind of oversight winds up killing a lot of
research. Reputations, too.

Part of the problem is that the level of accuracy of the procedure is
unknown. The basic principles are sound, and these principles are used to
great effect by oceanographers, geophysicists, and submariners. However,
these applications have a very basic advantage. They know where the
microphone is. The geologists and oceanographers know the exact location
of the sound source, too. BBN's solution to the problem is well thought
out, but its accuracy is entirely unknown. Aye, that's the rub.


> This has been a radical week for me. First, in spite of my debate with
> Venem, I was at the time of my debate with him a believer in the SBT. I
> knew it wasn't a high probability (3%) but 3% is still possible. Venem had
> made a convincing argument for placing when JBC was hit but made a far
> less convincing argument for when JFK was hit (It came down, more or less,
> to his opinion). That was when Barb posted JFK's elbows had begun to rise
> before JFK passed behind the sign and that's when, for me, the SBT went
> right out the window. Now I see here that there exists a calculation for
> a false positive on the GK shot and, while I haven't done the calculation,
> if it's even close to one then the BBN study is, for me, also out the
> window (Unless Anthony Marsh can convince me otherwise).

[...]


> It seems the more I read and think about it, the further behinder I get.

In this biz, if you never find yourself challenged by some fold in the
evidence, you aren't trying! There's a lot of stuff in the box, a lot of it
will never quite agree...and a lot of things that seem convincing now won't
be later on. With the acoustics, at least there is some baseline of
knowledge about why we see what we see. But who's ever seen genuine
research into how people react to being shot? What bag of knowledge do we
have to dig into to make comparisons from? Loftus et al. have done fine
work demonstrating how unreliable eyewitness testimony can be, but they
can't tell us what to filter out.


> So Mitch? What's the chance of the GK shot being a false positive?

> Just curious.

That's the trick. I don't think anyone knows. The smallest number of FP's
I can come up with just using F367 is five. That's assuming that the
sounds are shots, and that the microphone was where BBN said it
should be.

MST


R2JUDGE

Nov 28, 2001, 6:04:25 PM

Subject: Re: The Acoustic Evidence: Test Accuracy and False vs True Positives
From: "Clark Wilkins" clwi...@prodigy.net
Date: 11/27/01 11:36 AM Pacific Standard Time
Message-id: <X%QM7.88$Mi.32...@newssvr17.news.prodigy.com>


***Clark, Kennedy's elbows did not start to rise until Z225. Kennedy's left
elbow was straight down at his side at Z224, and while his right elbow is not
visible, his right hand dropped slightly from Z224 to Z225.

In the same frame that Kennedy's left elbow starts to move away from his side,
Connally's hat rises into view. In the next frame the movement has accelerated.
Both are reacting simultaneously. The SBT is supported by the Zapruder film.

***Ron Judge
