N.cohen.kappa(rate1, rate2, kappa, hypokappa, power=.8, alpha=.05, twosided=FALSE)

rate1      the probability that the first rater will record a positive diagnosis
rate2      the probability that the second rater will record a positive diagnosis
kappa      the true Cohen's kappa statistic
hypokappa  the value of kappa under the null hypothesis
power      the desired power to detect the difference between kappa and hypokappa
alpha      the Type I error rate of the test
twosided   TRUE if the test is two-sided

Returns the required sample size.
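For readers without R, here is a rough Python sketch of the kind of normal-approximation calculation behind a test of kappa against a null value. This is illustrative only: the actual irr::N.cohen.kappa implementation may use a different (exact) variance formula, and the simplified variance here treats the chance-agreement term pe as fixed, which understates the true variance of the kappa estimate.

```python
from math import ceil, sqrt
from statistics import NormalDist

def approx_n_kappa(rate1, rate2, kappa, hypokappa,
                   power=0.8, alpha=0.05, twosided=False):
    """Approximate sample size to test H0: kappa == hypokappa against the
    alternative value `kappa`, for a binary trait rated by two raters.
    Simplified normal approximation -- NOT the exact irr formula."""
    # chance agreement implied by the two raters' marginal positive rates
    pe = rate1 * rate2 + (1 - rate1) * (1 - rate2)

    def sd_one(k):
        # observed agreement implied by a given kappa: po = k*(1 - pe) + pe
        po = k * (1 - pe) + pe
        # per-observation SD of kappa-hat by the delta method, pe held fixed
        return sqrt(po * (1 - po)) / (1 - pe)

    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2) if twosided else z.inv_cdf(1 - alpha)
    zb = z.inv_cdf(power)
    n = ((za * sd_one(hypokappa) + zb * sd_one(kappa)) / (kappa - hypokappa)) ** 2
    return ceil(n)

# hypothetical inputs: both raters call 30% positive; detect kappa 0.7 vs null 0.4
print(approx_n_kappa(rate1=0.3, rate2=0.3, kappa=0.7, hypokappa=0.4))  # → 63
```

The answer from irr's N.cohen.kappa will generally differ somewhat, but the structure (two marginal rates fixing pe, then a power calculation on the difference kappa − hypokappa) is the same.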
My question here is: can I use the intraobserver agreement of the first two observers as rate1 and rate2, respectively?
0.01 to 0.54 and those for intraobserver agreement from –0.11 to +0.82. The BMD measured by DEXA ranged from 0.35 g/cm2 to 1.14 g/cm2. We found absolutely no correlation between the Singh index and BMD (Fig. 5). The percentage of variance ‘explained’ by the Singh index was 6.5%."
I have searched for methods of calculating the sample size for inter-rater reliability. The most comprehensive and appealing approaches were either the Stata command sskapp or the formula n = 1/(r^2 * (pa - pe)^2).
As I am applying these tools for the first time, I am unable to work out the statistics required for sample size estimation with these two tools.
Can anyone help me compute the sample size using the above information?
I would be grateful for any help.
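Taking the second formula at face value, a quick worked example follows. Note the post does not define r, pa, or pe; pa and pe are commonly the observed and chance agreement, and r is assumed here to be a precision parameter. All of the numeric values below are hypothetical, chosen only to show the arithmetic.

```python
# Illustrative only: plugs hypothetical values into n = 1 / (r^2 * (pa - pe)^2).
# pa = observed agreement, pe = chance agreement (common usage, assumed here);
# r is taken to be a precision parameter -- the original post does not define it.
from math import ceil

def n_simple(r, pa, pe):
    return ceil(1.0 / (r ** 2 * (pa - pe) ** 2))

# hypothetical values: 85% observed agreement, 60% chance agreement, r = 0.5
print(n_simple(r=0.5, pa=0.85, pe=0.60))  # → 64
```

Whatever r actually denotes in the source of the formula, the arithmetic is the same: smaller r or a smaller gap between pa and pe drives the required n up sharply.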
--
To post a new thread to MedStats, send email to MedS...@googlegroups.com .
MedStats' home page is http://groups.google.com/group/MedStats .
Rules: http://groups.google.com/group/MedStats/web/medstats-rules