Best regards,
Bendix
_______________________________________________
Bendix Carstensen
Senior Statistician
Steno Diabetes Center
Niels Steensens Vej 2-4
DK-2820 Gentofte
Denmark
+45 44 43 87 38 (direct)
+45 30 75 87 38 (mobile)
b...@steno.dk http://www.biostat.ku.dk/~bxc
www.steno.dk
> --
> To post a new thread to MedStats, send email to
> MedS...@googlegroups.com .
> MedStats' home page is http://groups.google.com/group/MedStats .
> Rules: http://groups.google.com/group/MedStats/web/medstats-rules
>
Now consider the correlation Corr(X-Y,X+Y) between (X-Y) and (X+Y)
(equivalent to correlation between difference and mean). In the
numerator, this has the covariance
Cov((X-Y),(X+Y)) = Exp((X-Y)*(X+Y)) - (Exp(X-Y))*(Exp(X+Y))
Exp((X-Y)*(X+Y)) = Exp(X^2 - Y^2) = Exp(X^2) - Exp(Y^2)
(Exp(X-Y))*(Exp(X+Y)) = (Exp(X) - Exp(Y))*(Exp(X) + Exp(Y))
= (Exp(X))^2 - (Exp(Y))^2
Hence
Cov((X-Y),(X+Y))
= {Exp(X^2) - (Exp(X))^2} - {Exp(Y^2) - (Exp(Y))^2}
= Var(X) - Var(Y)
Hence zero correlation between difference (X-Y) and mean (X+Y)/2
is equivalent to Var(X) = Var(Y), so a test of zero correlation
can be made by testing equality of these variances.
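The algebra above is easy to check numerically. A minimal sketch with made-up numbers (not Eric's data), using numpy:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Two correlated measurements with unequal variances:
# Var(X) = 5, Var(Y) = 2, so Cov(X-Y, X+Y) should be about 5 - 2 = 3.
u = rng.normal(0.0, 1.0, n)
x = u + rng.normal(0.0, 2.0, n)      # Var(X) = 1 + 4 = 5
y = u + rng.normal(0.0, 1.0, n)      # Var(Y) = 1 + 1 = 2

cov = np.cov(x - y, x + y)[0, 1]     # empirical Cov(X-Y, X+Y)
print(cov)                           # close to Var(X) - Var(Y) = 3
```

Note that X and Y are deliberately correlated (through u), which shows the identity Cov(X-Y, X+Y) = Var(X) - Var(Y) does not require independence.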
That is the relation between the Bland-Altman plot and the
associated test of difference in variance, and it is what lies behind
Eric's first result:
"we used a Bland Altmann plot and obtained a Pitman's
Test of difference in variance: r = 0.290, n = 66, p = 0.02."
Eric goes on to say:
"My dilemma is that when we do a pairwise correlation (pwcorr)
of the measurements between the two raters the correlation
coefficient r =0.70 which is distinctly different from what
we obtain after performing a "baplot"."
The implication of this is that Eric has calculated the correlation
Corr(X,Y) between X and Y (i.e. not between (X-Y) and (X+Y)).
This has nothing to do with Corr(X-Y,X+Y). In particular, if the
data X and Y are obtained by A and B measuring a quantity U,
each without bias, with equal variances for each measurement,
independently of each other, and with measurement variances
independent of the value being measured, so that the two raters
agree perfectly in their performance, and if the values of U range
over a wide interval (much greater than the SD of each observer),
then a high value of the correlation Corr(X,Y) will be obtained,
while Corr((X-Y),(X+Y)) will be zero:
X = U + E.A (A's error, expected value = 0)
Y = U + E.B (B's error, expected value = 0)
Exp(X*Y) = Exp(U^2) + Exp(U)*Exp(E.A) + Exp(U)*Exp(E.B) + Exp(E.A)*Exp(E.B)
= Exp(U^2)
Exp(X)*Exp(Y) = Exp(U + E.A)*Exp(U + E.B)
= Exp(U)^2 + Exp(U)*Exp(E.A) + Exp(U)*Exp(E.B) + Exp(E.A)*Exp(E.B)
= Exp(U)^2
so the numerator in Corr(X,Y) is Exp(U^2) - (Exp(U))^2 = Var(U).
The denominator will be
sqrt(Var(U+E.A)*Var(U+E.B)) = (Var(U) + V)
where V is the common variance of E.A and E.B. Hence, if Var(U)
is much greater than V,
Corr(X,Y) = Var(U)/(Var(U) + V) approx = 1
On the other hand, the numerator of Corr(X-Y,X+Y) will (as above)
be Var(X) - Var(Y) = (Var(U) + V) - (Var(U) + V) = 0.
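Ted's point shows up directly in a quick simulation (the numbers below are purely illustrative, chosen so Var(U) is much larger than the common error variance V):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Wide-ranging true values U, and two unbiased raters with the same
# error variance V, independent of U and of each other.
u = rng.normal(50.0, 10.0, n)        # Var(U) = 100
x = u + rng.normal(0.0, 2.0, n)      # rater A, V = 4
y = u + rng.normal(0.0, 2.0, n)      # rater B, V = 4

r_xy = np.corrcoef(x, y)[0, 1]           # about Var(U)/(Var(U)+V) = 100/104
r_dm = np.corrcoef(x - y, x + y)[0, 1]   # about 0
print(r_xy, r_dm)
```

So two raters who agree perfectly in this sense give a Corr(X,Y) near 1 and a Corr(X-Y, X+Y) near 0: the two correlations answer different questions.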
So there is no reason to expect the two correlations Corr(X,Y)
and Corr((X-Y),(X+Y)) to be equal!
Hoping this helps,
Ted.
On 23-Jul-10 22:46:43, BXC (Bendix Carstensen) wrote:
> I am not quite sure what "baplot" is, but a fair guess would be
> that it is some command in (Stata, SPSS...) which plots differences
> versus averages. And presumably gives limits of agreement and/or sd
> of the differences. The correlation coefficient is in any case
> irrelevant for the problem of comparing the raters.
>
> Best regards,
> Bendix
> _______________________________________________
>
>> -----Original Message-----
>> From: meds...@googlegroups.com
>> [mailto:meds...@googlegroups.com] On Behalf Of ohumaeric
>> Sent: 23. juli 2010 10:23
>> To: MedStats
>> Subject: {MEDSTATS} Pitman's test of variance
>>
>> Dear colleagues,
>>
>> I was recently involved in an analysis where we were
>> interested in assessing agreement between two raters on
>> tympanometry baseline pressure measurements in children. To
>> do this we used a Bland Altmann plot and obtained a Pitman's
>> Test of difference in variance: r = 0.290, n = 66, p = 0.02.
>> My dilemma is that when we do a pairwise correlation (pwcorr)
>> of the measurements between the two raters the correlation
>> coefficient r =0.70 which is distinctly different from what
>> we obtain after performing a "baplot". Does someone know or
>> have an explanation why this is so - seems like i am missing
>> an important concept here...
>>
>> Thanks in advance
>> Eric
--------------------------------------------------------------------
E-Mail: (Ted Harding) <Ted.H...@manchester.ac.uk>
Fax-to-email: +44 (0)870 094 0861
Date: 24-Jul-10 Time: 11:03:27
------------------------------ XFMail ------------------------------
-----FW: <XFMail.10072411...@manchester.ac.uk>-----
Date: Sat, 24 Jul 2010 11:03:41 +0100 (BST)
Sender: meds...@googlegroups.com
From: (Ted Harding) <Ted.H...@manchester.ac.uk>
To: meds...@googlegroups.com
Subject: RE: {MEDSTATS} Pitman's test of variance
Hence
X = U + E.A (A's error, expected value = 0)
Y = U + E.B (B's error. expected value = 0)
Exp(X*Y) = Exp(U^2) + Exp(U)*Exp(E.A) + Exp(U)*Exp(E.B) + Exp(E.A)*Exp(E.B)
= Exp(U^2)
[### ******** ###] Errors in the following:
# Exp(X)*Exp(Y) = Exp(U + E.A)*Exp(U+E.B)
# = Exp(U^2) + Exp(U)*Exp(E.A) + Exp(U)*Exp(E.B) + Exp(E.A*E.B)
# = Exp(U^2)
[### ******** ###] Should be:
Exp(X)*Exp(Y) = Exp(U + E.A)*Exp(U+E.B)
= Exp(U)^2 + Exp(U)*Exp(E.A) + Exp(U)*Exp(E.B) + Exp(E.A*E.B)
= Exp(U)^2
Hoping this helps,
Ted.
Date: 24-Jul-10 Time: 13:50:40
------------------------------ XFMail ------------------------------
> Corr(X,Y) = Var(U)/(Var(U) + V) approx = 1
where Var(U) is the variation BETWEEN the subjects measured and V is the measurement variance of the two methods.
Along the way Ted uses the expression for the denominator:
> sqrt(Var(U+E.A)*Var(U+E.B)) = (Var(U) + V)
where the last expression is a bit of an oversimplification, because it assumes that Var(E.A) = Var(E.B) = V. You could ask: who would care to compare two methods where the measurement variances are the same? Well, for the bias, of course.
The crux is that if you only let each of the raters rate each subject once, there is no way to find out whether the variances are different. And so you are stuck with the sum of the two variances as the variance of X-Y, which is the core of the Bland-Altman plot and the Limits of Agreement.
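The standard single-rating summary can be sketched in a few lines; the data below are simulated stand-ins for the two raters' measurements (only n = 66 is taken from Eric's description, everything else is invented):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 66                                   # number of subjects, as in Eric's study
u = rng.normal(0.0, 30.0, n)             # true values, wide between-subject spread
x = u + rng.normal(0.5, 5.0, n)          # rater A (a small bias assumed, for illustration)
y = u + rng.normal(0.0, 5.0, n)          # rater B

d = x - y
bias = d.mean()                          # mean difference between raters
sd_d = d.std(ddof=1)                     # sd of differences, estimates sqrt(V_A + V_B)
loa = (bias - 1.96 * sd_d, bias + 1.96 * sd_d)   # 95% limits of agreement
print(bias, loa)
```

Note that sd_d only estimates the sum V_A + V_B; with single ratings the two components are not separately identifiable, exactly as stated above.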
The correlation between X and Y is only approximately 1 if the subjects are chosen to be substantially variable, that is, if the variation between them is large compared to the measurement variances [Var(E.A), Var(E.B)].
This is usually the case in laboratory medicine (otherwise one would not bother to use the methods anyway), but I am not so sure that it is the case in other areas.
There IS a way to get your hands on the separate variances if you have replicate measurements, see:
B Carstensen, J Simpson & LC Gurrin:
Statistical models for assessing agreement in method comparison studies with replicate measurements.
International Journal of Biostatistics, 4(1):Article 16, 2008.
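A toy sketch of why replicates help (this is only the simplest moment-based idea, not the full model of the paper): if each rater measures each subject twice, the variance of a rater's own replicate differences is 2 times that rater's error variance, so the two variances become separately estimable.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2_000                                        # subjects (illustrative)
u = rng.normal(0.0, 10.0, n)                     # true values

# Two replicates per rater per subject; the raters have DIFFERENT error SDs.
xa = u[:, None] + rng.normal(0.0, 2.0, (n, 2))   # rater A, Var(E.A) = 4
yb = u[:, None] + rng.normal(0.0, 3.0, (n, 2))   # rater B, Var(E.B) = 9

# The difference between a rater's two replicates has variance 2*V_rater
# (the true value U cancels), so half that variance estimates V_rater.
va = np.var(xa[:, 0] - xa[:, 1], ddof=1) / 2     # estimates 4
vb = np.var(yb[:, 0] - yb[:, 1], ddof=1) / 2     # estimates 9
print(va, vb)
```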
You can actually also assess the extent of the bias and how it varies, see:
Carstensen B:
Comparing methods of measurement: Extending the LoA by regression.
Stat Med. 2010 Feb 10;29(3):401-10.
Finally there is a book out now covering the classical designs (incidentally authored by...):
Bendix Carstensen:
Comparing Clinical Measurement Methods: A practical guide
Wiley, 2010
All this is based on starting out by setting up a model for the data, pretty much in the vein of what Ted does, but insisting on the irrelevance of the distribution of U (the true values of the subjects). If you compare two methods, one would hope that the results of the comparison came out the same if you had done it on another sample with, say, twice the variation between subjects.
So any mention of an expectation with U in it is trespassing on the fields of irrelevance, which, as illustrated above, can occasionally still be illustrative.
Best regards,
Bendix