Positive LR where specificity=100%


Liz

unread,
Jun 11, 2008, 9:03:23 AM6/11/08
to MedStats
Dear all,
When calculating positive LR, what should one do if specificity is
100%? This would lead to a division by zero if left unadjusted. I have
seen abstracts where authors simply stated that +ve LR could not be
calculated in this situation. I'm intending to combine multiple test
results to give a post-test probability of diagnosis x if tests a, b
and c are all positive, so given that a specificity of 100% represents
a very useful result I'd rather not just exclude it! I was thinking of
substituting 99 for 100 - this would give a +ve LR numerically equal
to the sensitivity of the test expressed as a percentage. Any thoughts?
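In R terms, with a made-up sensitivity of 40%, the snag I mean looks
like this:

sens <- 40; spec <- 100   # the 40% sensitivity is purely illustrative
sens / (100 - spec)       # positive LR = sens/(100-spec): Inf, since 100 - spec is 0
sens / (100 - 99)         # with 99 substituted for 100, LR = 40, i.e. the sensitivity figure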
Thanks,
Liz Hensor

Francesca Chappell

unread,
Jun 11, 2008, 9:55:35 AM6/11/08
to MedS...@googlegroups.com
There's some correspondence on exactly this in the American Journal of
Roentgenology 1987;148(6):1272-3 (this is available online, and is an
authors' response by WC Black et al to a letter). They added 0.5 to
each cell of the 2x2 table to avoid division by zero and recalculated
sensitivity, specificity, and the likelihood ratio. But this method
has been criticised as the 0.5 is an arbitrary figure. There is also
some discussion about imprecision of estimates and use of the lower
end of the 95% (score) confidence interval.
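As a rough sketch of what they did, in R, with made-up counts (the
20/30 and 0/50 splits below are purely illustrative):

TP <- 20; FN <- 30   # diseased group:      20 test positive, 30 test negative (invented)
FP <- 0;  TN <- 50   # non-diseased group:   0 test positive, 50 test negative (invented)
k <- 0.5             # the correction added to every cell of the 2x2 table
sens <- (TP + k) / (TP + FN + 2*k)   # 20.5/51, about 0.40
spec <- (TN + k) / (TN + FP + 2*k)   # 50.5/51, about 0.99
sens / (1 - spec)                    # positive LR, about 41, rather than division by zero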

I would be interested to hear what other people suggest.

Francesca




Francesca Chappell
Medical Statistician
University of Edinburgh
Bramwell Dott Building
Western General Hospital
Crewe Road
Edinburgh EH4 2XU
Tel 0131 537 3585
Fax 0131 332 5150

The University of Edinburgh is a charitable body, registered in Scotland, with
registration number SC005336



Bruce Weaver

unread,
Jun 11, 2008, 4:14:19 PM6/11/08
to MedStats

A value of 0.5 may be arbitrary, but it certainly is fairly
conventional. Consider the relative risk, for example.

          Event
          Y    N
   Tx     a    b    n1
   Ctl    c    d    n2

RR = (a/n1)/(c/n2)
VAR[ln(RR)] = 1/a - 1/n1 + 1/c - 1/n2

According to Alan Agresti (Categorical Data Analysis, 2nd edition, p.
56), a less biased estimator of the RR is obtained when you add 0.5 to
each of a, n1, c, and n2. (Notice that 0.5 is added to the row
totals, not to the b and d cells.) You do likewise in the formula for
the variance.

I've not found anything similar for likelihood ratios per se; but if
you buy what Agresti says about relative risks, it should also apply
to likelihood ratios, because both are the ratio of two proportions.
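For what it's worth, a minimal R sketch of that adjustment, using the
notation above and some invented counts (zero events in the control
group, to mimic the zero-cell situation):

a <- 15; n1 <- 100   # treatment group: 15 events out of 100 (invented)
c <- 0;  n2 <- 100   # control group:    0 events out of 100 (invented)
RR_adj <- ((a + 0.5)/(n1 + 0.5)) / ((c + 0.5)/(n2 + 0.5))   # = 31, finite despite c = 0
se_lnRR <- sqrt(1/(a + 0.5) - 1/(n1 + 0.5) + 1/(c + 0.5) - 1/(n2 + 0.5))
exp(log(RR_adj) - 1.96*se_lnRR)   # lower 95% limit, about 1.9
exp(log(RR_adj) + 1.96*se_lnRR)   # upper 95% limit, about 510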

--
Bruce Weaver
bwe...@lakeheadu.ca
www.angelfire.com/wv/bwhomedir
"When all else fails, RTFM."

Ray Koopman

unread,
Jun 11, 2008, 7:05:56 PM6/11/08
to MedStats
See problem 14.4, p 595.


Francesca Chappell

unread,
Jun 12, 2008, 4:13:49 AM6/12/08
to MedS...@googlegroups.com
To digress slightly from the original problem, in Liz's original
email, she said that the test had a specificity of 100% and a
sensitivity of 99%, and she wanted to combine test results to get a
post-test probability. Assuming that these are not overly optimistic
estimates of accuracy, is there anything to be gained by combining
test results? After all, combining test results simply trades
sensitivity against specificity, it cannot improve them both, and the
single test looks pretty good as it stands. I think a key question is
how accurate are the other tests?

Francesca


Martin Holt

unread,
Jun 12, 2008, 5:20:58 AM6/12/08
to MedS...@googlegroups.com
Thanks Ray, but to which text are you referring?

Best,

Martin

Bruce Weaver

unread,
Jun 12, 2008, 11:48:46 AM6/12/08
to MedStats
Thanks Ray. I just noticed that I gave the wrong edition for the
material I cited above. It is from the 1st edition (1990), not the
second.

Bruce Weaver

unread,
Jun 12, 2008, 11:49:45 AM6/12/08
to MedStats


Martin, Ray is referring to the 2nd edition (2002). The material I
cited in my earlier post is actually from the 1st edition (1990), not
the second. Sorry about any confusion this caused.

mcap

unread,
Jun 12, 2008, 3:29:41 PM6/12/08
to MedStats
Hi Liz:

I don't have a comment on your numerical issue here. It seems
like there are good suggestions. However, you may be fundamentally
misinterpreting sensitivity and specificity.

"I'm intending to combine multiple test
results to give a post-test probability of diagnosis x if tests a, b
and c are all positive,"

Sensitivity and specificity refer to the probability of getting a
particular test result given a disease state, not the other way
around. Combining them as you describe would give the probability
that tests a, b and c are all positive given that the patient has the
disease you are studying, not the probability of disease. Look at the
PPVs and NPVs. There is some degree of controversy about the utility
of sens/spec.

Marc


Liz

unread,
Jun 18, 2008, 9:20:16 AM6/18/08
to MedStats
Thanks Marc, perhaps I haven't explained myself fully here.
I'm following the methods of Rudwaleit et al. (Ann Rheum Dis
2004;63;535-543) in which they calculate the probability of the
presence of the disease in case of a positive test as 100/[1 + ((100-
pretest prob)/pretest prob)*((100-specificity)/sensitivity).
The probability of the presence of disease in the case of a negative
test inverts the last part of that equation, i.e. uses
specificity/(100-sensitivity), so if either sensitivity or
specificity is 100 we hit a snag.
Rudwaleit et al. apply these equations in chains, so that the post-
test probability of disease after applying test a then becomes the pre-
test probability for test b, and so on.
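A minimal R sketch of that chaining (the sensitivities, specificities
and starting pre-test probability below are invented; chaining LRs
like this also treats the tests as conditionally independent):

post_pos <- function(pre, sens, spec) {   # post-test probability after a positive test, all in %
  100 / (1 + ((100 - pre)/pre) * ((100 - spec)/sens))
}
pre <- 30                                            # invented pre-test probability
for (t in list(c(40, 90), c(25, 95), c(50, 85))) {   # c(sensitivity, specificity) for tests a, b, c
  pre <- post_pos(pre, t[1], t[2])   # each post-test probability becomes the next pre-test probability
}
pre                                  # about 97% after three positive tests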
My understanding of positive/negative predictive value is that they
are highly prevalence-dependent so have their own associated problems.
Does anyone have any comments about the methods of Rudwaleit et al
highlighted above?
Thanks for all of your responses - I feel we're getting somewhere!
Liz

Liz

unread,
Jun 18, 2008, 9:24:21 AM6/18/08
to MedStats
Sorry Francesca, just to confirm - I have a specificity of 100 but
sensitivity is much lower (20-50% for most of the tests where this
issue has arisen). Sens and spec around 100 would be the holy grail
and I would not need any other tests, you're right! I meant I was
thinking of substituting 99 for 100 in my calculations so I didn't end
up dividing by 0, and this would give a positive LR equal to the
sensitivity of the test (LR=50 or so).


Peter Flom

unread,
Jun 18, 2008, 9:39:37 AM6/18/08
to MedStats
I've not been following this thread carefully, so excuse me if this
has been covered.

If you have specificity of 100 and sensitivity much lower (20-50%)
then there is something seriously wrong. Isn't this equivalent to
just telling everyone they have the disease? So, I think you need to
adjust your model.

Peter

Peter L. Flom, PhD
Statistical Consultant
www DOT peterflom DOT com

Francesca Chappell

unread,
Jun 18, 2008, 10:03:09 AM6/18/08
to MedS...@googlegroups.com
Sorry Liz, I must have misread your original email.

But to digress again: if you're not constrained to use the
chains-of-LRs route, some people estimate the sensitivity and
specificity of test combinations, using Boolean AND or OR to combine
the individual test results, which would avoid the zero-divisor
problem. For example, you could use the OR rule (any test positive
means the diagnosis is positive) to increase sensitivity, though of
course specificity would go down. I haven't read the Rudwaleit et al.
paper so can't comment on that.
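To make that concrete: if one is willing to treat the tests as
conditionally independent (a strong assumption), the two rules work
out roughly as below in R; the individual figures are invented:

sens <- c(0.40, 0.25, 0.50)   # invented per-test sensitivities
spec <- c(0.95, 0.90, 0.98)   # invented per-test specificities
1 - prod(1 - sens)            # OR rule sensitivity, about 0.78
prod(spec)                    # OR rule specificity, about 0.84
prod(sens)                    # AND rule sensitivity, 0.05
1 - prod(1 - spec)            # AND rule specificity, about 0.9999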

Francesca

Liz

unread,
Jun 19, 2008, 6:39:10 AM6/19/08
to MedStats
Sorry Peter I'm not sure I understand you. I'm looking at predictors
of inflammatory arthritis. All of the patients who are in the 'non-
inflammatory' group are test negative (hence specificity=100%),
whereas between 20% and 50% of those in the 'inflammatory' group are
test positive, depending on the test in question (sensitivity
20%-50%). Is there a problem with my model?

Liz

unread,
Jun 19, 2008, 6:47:57 AM6/19/08
to MedStats
Thanks Francesca,
That's a method I have used before, but my colleagues are keen to
replicate the work of Rudwaleit et al. because the design of their
study is so similar and the concepts/diseases under scrutiny are very
comparable. If I can't solve the division by zero problem I may end up
reverting to the Boolean approach, but I'm going to investigate the
adding 0.5 to the cells approach and see where that gets me.
Thanks again for all your help!
Liz


Peter Flom

unread,
Jun 19, 2008, 6:56:58 AM6/19/08
to MedStats
Liz <lizh...@hotmail.com> wrote

>
>Sorry Peter I'm not sure I understand you. I'm looking at predictors
>of inflammatory arthritis. All of the patients who are in the 'non-
>inflammatory' group are test negative (hence specificity=100%),
>whereas between 20% and 50% of those in the 'inflammatory' group are
>test positive, depending on the test in question (sensitivity
>20%-50%). Is there a problem with my model?

Maybe, maybe not. Unless there is some huge problem with giving
people a false positive, I think you may need to adjust the point at
which you diagnose inflammatory arthritis. What if you accepted 90%
specificity - would you get much better sensitivity?

Usually, I think, guaranteeing no false negatives would increase false positives by a lot....

But I overstated it in my previous message.

Liz

unread,
Jun 19, 2008, 8:00:04 AM6/19/08
to MedStats
If we were going to use the tests individually then yes this would
lead to a lot of false positives; this is why we are combining the
results of different tests, which should hopefully increase the
sensitivity. In a lot of cases we can't adjust the diagnostic
criterion - certain inflammatory/genetic markers are either present or
absent. It's very difficult in the early stages of potential
inflammatory arthritis to give an exact diagnosis, but if we can
definitively rule out inflammatory disease in some patients on the
basis of a selection of their test results, then that will help. It's
not like a positive result will lead to surgery, just a certain
treatment regime. On the other hand, current evidence suggests that if
you treat inflammatory arthritis early enough you can prevent joint
damage, so it's important we don't miss anyone.


Adrian Sayers

unread,
Jun 19, 2008, 8:21:30 AM6/19/08
to MedS...@googlegroups.com
Just to chip in on this thread: if you're using many tests in
succession to determine treatment, and what you want by the sounds of
it is an assessment of sensitivity and specificity, why don't you
just work out the sensitivity and specificity for the series of tests
you will perform?

It's difficult to draw probability trees in an email, but if you set
out your test series as branches and work out some rules about how
many tests have to be +ve before a patient is classed as overall
positive (and how many -ve before they are classed as overall
negative), then it's quite easy to compute the sensitivity and
specificity of the combination of tests you want to use.
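As a sketch of the sort of calculation I mean (assuming for
simplicity that the tests behave independently, which a proper
probability tree would let you relax), here is an "at least 2 of 3
tests positive" rule in R with invented figures:

p_at_least <- function(p, k) {
  # P(at least k of the independent tests are positive), p = per-test positive probabilities
  n <- length(p)
  sum(sapply(k:n, function(m)
    sum(apply(combn(n, m), 2, function(idx) prod(p[idx]) * prod(1 - p[-idx])))))
}
sens <- c(0.40, 0.25, 0.50)   # invented per-test sensitivities
spec <- c(0.95, 0.90, 0.98)   # invented per-test specificities
p_at_least(sens, 2)           # sensitivity of the "2 or more positive" rule, about 0.33
1 - p_at_least(1 - spec, 2)   # specificity of the same rule, about 0.99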

Adrian


Ted Harding

unread,
Jun 19, 2008, 8:43:11 AM6/19/08
to MedS...@googlegroups.com
I'm beginning to wonder what the real issues in this thread are.

Specificity, as I understand it, is a measure of the probability that
you will not get a positive result when the condition being tested
for is not present -- a kind of reassurance that a positive result is
indeed specific to that condition; but this interpretation implicitly
has a "reference level" based on the prevalence.

Secondly, now that I look back again through this thread, I cannot
see that Liz has stated what the sizes of the two groups are.
Whatever the size of the "non-inflammatory" group, an observed 100%
"test negative" still leaves uncertainty about what the probability
of "positive given not present" really is. Say we assess this with a
95% confidence interval. Let N be the size of the "non-inflammatory"
group. Then:

N = 50: CI = (0,0.0582)
N = 100: CI = (0,0.0295)
N = 250: CI = (0,0.0119)
N = 500: CI = (0,0.00598)
N = 1000: CI = (0,0.00299)

So you need a big group before you can be reasonably sure that the
probability is quite small; and you are never sure that it is zero.
In order to avoid unrealistic numerical results, one should try to
take account of uncertainties of this kind.
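For the record, figures like those in the table above can be
reproduced in R as one-sided 95% exact binomial upper bounds for 0
positives out of N (this is only one of several reasonable ways to
construct such an interval):

N <- c(50, 100, 250, 500, 1000)
1 - 0.05^(1/N)   # upper bounds: roughly 0.058, 0.030, 0.012, 0.006, 0.003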

At Date: Wed, 18 Jun 2008 06:20:16 -0700 (PDT), Liz stated:

I'm following the methods of Rudwaleit et al. (Ann Rheum Dis
2004;63;535-543) in which they calculate the probability of the
presence of the disease in case of a positive test as

100/[1 + ((100-pretestprob)/pretestprob)*((100-specificity)/sensitivity)].

(I've added the closing "]" in the above). This formula is
just Bayes' theorem for Prob(present given positive-test),
using "pretestprob" to mean "prevalence", i.e. the prior
probability that the condition is present.

If, now (as in Liz's statement of her problem), the specificity is
100%, then the factor (100-specificity)/sensitivity is 0 (provided
sensitivity > 0, which Liz states is the case).

Hence the result of that formula is 100/1 = 100%: if specificity
is 100%, then the probability of presence given positive test is
100%, as one would expect; and so no problems arise. So far so good.

Liz goes on to state:

The probability of the presence of disease in the case of a negative
test inverts the last part of that equation, i.e. uses
specificity/(100-sensitivity), so if either sensitivity or
specificity is 100 we hit a snag.

This may be where confusion could arise. If I work out (from the
Bayes' theorem direction) what the probability of presence is when
the test is negative, I get (rephrasing it in the above terms):

100/[1 + (specificity/(100 - sensitivity))*
((100 - pretestprob)/pretestprob)]

so, if the specificity is 100% again, then the result still makes
sense (no "snag") provided the sensitivity is not 100%. Liz
states that the sensitivity is 20-50%, so no problem. Even if the
sensitivity were 100%, all that results (mathematically speaking)
is 100/(1 + infinity) = 0. But that's OK too, since 100% sensitivity
means that every true positive gives a positive test, so a negative
test must mean that the disease is not present -- so Prob=0.

So still no problem! Though, depending on how the calculation of
such a formula is implemented in software or programmed in code,
one may get an uninformative answer. However, that is dependent
on the software. If, for instance, I do it in R, then (adopting
pretestprob=17% arbitrarily):

> spec=100 ; sens=100; pretestprob=17
> 100/(1 + (spec/(100 - sens))*((100 - pretestprob)/pretestprob))
[1] 0

as expected. And this is because R does the right thing with Infinity:

> 1/0
[1] Inf
> 1/Inf
[1] 0

I don't know if this helps -- maybe there's something more we need
to be told about where the real problem is arising -- but I hope
it helps a bit!
Ted.


--------------------------------------------------------------------
E-Mail: (Ted Harding) <Ted.H...@manchester.ac.uk>
Fax-to-email: +44 (0)870 094 0861
Date: 19-Jun-08 Time: 13:43:05
------------------------------ XFMail ------------------------------

Ted Harding

unread,
Jun 19, 2008, 9:01:41 AM6/19/08
to MedS...@googlegroups.com
Follow-up: In my previous reply I intended to include (but in the
heat of the creative moment forgot) the following (the rest
suppressed):

On 19-Jun-08 12:43:11, Ted Harding wrote:
> [...]


> At Date: Wed, 18 Jun 2008 06:20:16 -0700 (PDT), Liz stated:
>
> I'm following the methods of Rudwaleit et al.
> (Ann Rheum Dis 2004;63;535-543) in which they
> calculate the probability of the presence of the
> disease in case of a positive test as
> 100/[1 + ((100-pretestprob)/pretestprob)*
> ((100-specificity)/sensitivity)].
>
> (I've added the closing "]" in the above). This formula is
> just Bayes' theorem for Prob(present given positive-test),
> using "pretestprob" to mean "prevalence", i.e. the prior
> probability that the condition is present.

This formula can equivalently be re-written (and will then
be in the form arising directly from Bayes' theorem):

The probability of the presence of the disease in case
of a positive test is

sensitivity*pretestprob/
[sensitivity*pretestprob + (100-specificity)*(100-pretestprob)]

If you use it in that form, you will NEVER have any computational
problems! So why adopt a form which could (in certain cases) be
problematic?
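A quick check of both forms in R, at the awkward corner (specificity
= 100; the sensitivity and pre-test probability here are arbitrary):

sens <- 40; spec <- 100; pretestprob <- 17
# the form quoted from the paper: percentages in, a percentage out
100/(1 + ((100 - pretestprob)/pretestprob)*((100 - spec)/sens))                  # 100
# the re-arranged Bayes form: percentages in, a proportion out, so scale by 100 to compare
100 * sens*pretestprob / (sens*pretestprob + (100 - spec)*(100 - pretestprob))   # also 100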

Best wishes to all,
Ted.

--------------------------------------------------------------------
E-Mail: (Ted Harding) <Ted.H...@manchester.ac.uk>
Fax-to-email: +44 (0)870 094 0861

Date: 19-Jun-08 Time: 14:01:38
------------------------------ XFMail ------------------------------

Liz

unread,
Jun 19, 2008, 11:01:38 AM6/19/08
to MedStats
Ah!!! Thanks Ted - I was just following the methods exactly as set out
in the paper, like a statistical sheep. I have no idea why the formula
was presented like that in the text if it stems from an existing
theorem that obviously works rather more reliably. Not that hot on
Bayes - obviously something I need to work on (a revision of basic
balancing of equations evidently wouldn't hurt either).
As for the sample size I absolutely take your point - which is why I
was originally opting to reduce specificity from 100 to 99. This is a
small exploratory dataset (N=40! often the case with imaging studies
where MRIs are so expensive) so we'll be saying something along the
lines of "whilst our findings need to be replicated in a much larger
study, these particular tests [a,b,c] are the most promising, and if
the larger study reveals similar sensitivities and specificities to
those measured here, then in combination these tests would allow us to
be x% sure of our diagnosis." Rudwaleit et al. reviewed many existing
studies and combined them to obtain their estimates of sens and spec.

Thanks so much Ted, and everyone else, for your help.
Liz


Ted Harding

unread,
Jun 19, 2008, 11:23:59 AM6/19/08
to MedS...@googlegroups.com
Well, just for the record, I'll spell it out!

For abbreviation, sensitivity is "sens", specificity is "spec",
and pretestprob is "prev" (short for "prevalence").

Then:

Prob(disease given test+) = Prob(disease AND test+)/Prob(test+)

= (Prob(test+ GIVEN disease)*Prob(disease))/Prob(test+)

= (sens*prev)/Prob(test+)

= (sens*prev)/
[Prob(test+ GIVEN disease)*Prob(disease) +
Prob(test+ GIVEN (not disease))*Prob(not disease)]

= (sens*prev)/[sens*prev + (100-spec)*(100-prev)]

QED -- where probabilities are expressed as percentages, of course.

So the form that does not fall over in certain cases is the one
that directly derives from the Bayes approach.

Ted.


--------------------------------------------------------------------
E-Mail: (Ted Harding) <Ted.H...@manchester.ac.uk>
Fax-to-email: +44 (0)870 094 0861
Date: 19-Jun-08 Time: 16:23:56
------------------------------ XFMail ------------------------------

mcap

unread,
Jun 19, 2008, 3:02:09 PM6/19/08
to MedStats
I think the statistical discussion has gone over my head a bit.
However, I have a few basic questions for you. I have my lecture on
sens/spec coming up for my allied health students. So, I am not
trying to nitpick. I am curious.

1. How are you determining the true disease state in this case? If
you are talking about RA, there is no definitive diagnostic standard.
If I remember correctly, a combination of several symptoms/signs and
tests is used. Seronegative diseases like AS or psoriatic arthritis
also lack definitive diagnostic criteria. If the idea is to catch
people early, how are you determining whether these early people
actually have what you are testing for?

2. In all of these calculations you are including the pretest
probability. I assume you get that just by looking at the prevalence
in your sample, but the prevalence in a study sample is often far
different from that in the typical population of patients presenting
to a PCP or rheumatology office with suspicion of inflammatory joint
disease.

Sounds like a very interesting study.

Best,
Marc


Liz

unread,
Jun 24, 2008, 6:08:13 AM6/24/08
to MedStats
Hi Marc,
We've followed patients up and obtained their eventual diagnosis. So
we're looking at baseline measures to see what, if anything, will
allow us to identify which patients will go on to be diagnosed with
inflammatory disease in general, and RA in particular. For our pre-
test probability we are having to use the best estimate we can obtain
from published studies as to the prevalence of inflammatory disease/RA
in patients who seek treatment for a variety of hand problems - we are
not using an estimate from our study.
Thanks,
Liz

martin2

unread,
Jun 24, 2008, 10:03:12 AM6/24/08
to MedStats
Dear Liz,

Would it be imposing too much to ask you to write a summary of this
thread - not everything! - what your situation is, what you wanted to
find out from the group, and what you still want to find out... or is
it closed? I keep wanting to review the thread but find I've got
something else to do, and a summary would be very helpful.

Best Wishes,

Martin Holt


mcap

unread,
Jun 24, 2008, 11:08:38 AM6/24/08
to MedStats
Hi Liz:

Very interesting study. I teach sensitivity and spec to PA students
and the lecture is coming up next week, so I am always looking for
new material. I usually have to use Lyme disease as the example when
we discuss combining tests to improve the overall sens/spec.

How variable are the pretest probs in other studies, and aren't you
dealing with fundamentally different patient pools? Why not just
take your final prevalence rate and use it for your pretest
probability? Is that impossible?