Sample size using kappa statistic (Need urgent Help)


Mehwish

unread,
Sep 18, 2012, 2:19:57 AM9/18/12
to meds...@googlegroups.com

Hello,

I need to calculate a sample size for a study comparing two observers' measurements of osteoporosis using the Singh Index. The only related article I found is: Koot VCM, Kesselaer SMMJ, Clevers GJ et al. Evaluation of Singh Index for Measuring Osteoporosis. J Bone Joint Surg [Br] 1996;78-B:831-4.

I am copying the Results section of their study below (because of a problem with my internet connection, I was unable to attach the full-text PDF):

"RESULTS
Eight radiographs were excluded from the study because of their poor technical quality. Only three of the remaining 72 fractures were classified identically by all observers (4%). Eight were classified identically by five of the six observers (11%), but the fact that the aberrant eight classifications were made by different observers must be taken into account. The kappa values for interobserver agreement ranged from 0.15 to 0.54 (mean 0.33). Intraobserver agreement was reached in 37 of 60 radiographs (62%). The kappa values for intraobserver agreement ranged from 0.63 to 0.88 (mean 0.78) (Table I). The best agreement scenarios from Table I are shown in detail in Figures 3 and 4. Table II shows the results after collapsing the six categories into three. After this simplification the kappa values for interobserver agreement ranged from 0.01 to 0.54 and those for intraobserver agreement from –0.11 to +0.82. The BMD measured by DEXA ranged from 0.35 g/cm2 to 1.14 g/cm2. We found absolutely no correlation between the Singh index and BMD (Fig. 5). The percentage of variance ‘explained’ by the Singh index was 6.5%."

I searched for methods of calculating the sample size for inter-rater reliability. The most comprehensive and appealing approaches were either the Stata command sskapp or the formula n = 1/(r^2(pa-pe)^2).

As I am applying these tools for the first time, I am unable to identify the statistics required for sample size estimation with these two tools. Can anyone help me compute the sample size using the information above?
I would be grateful for any help.
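[Editor's note: as a rough sketch, the posted formula can be computed directly. The interpretations of its symbols below are an assumption on my part, not taken from the cited paper, and the pe and r values are purely illustrative; only pa = 37/60 comes from the quoted intraobserver agreement.]

```python
import math

def kappa_sample_size(r, pa, pe):
    """Sample size from the posted formula n = 1 / (r^2 * (pa - pe)^2).

    Assumed meanings (editor's reading, not from the cited paper):
      r  -- desired relative precision for the kappa estimate
      pa -- observed proportion of agreement between raters
      pe -- proportion of agreement expected by chance
    """
    return math.ceil(1.0 / (r ** 2 * (pa - pe) ** 2))

# pa = 37/60 ~ 0.62 from the quoted intraobserver agreement;
# pe = 0.40 and r = 0.5 are hypothetical placeholders.
n = kappa_sample_size(r=0.5, pa=0.62, pe=0.40)
print(n)  # 83
```

With a smaller r (tighter precision) or a pa closer to pe, the required n grows rapidly, which is why the choice of pe matters so much here.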

Mehwish

unread,
Sep 18, 2012, 2:41:59 AM9/18/12
to meds...@googlegroups.com

I also found an R function, which is given below:

N.cohen.kappa(rate1,rate2,kappa,hypokappa,power=.8,alpha=.05,twosided=FALSE)

Arguments

rate1 The probability that the first rater will record a positive diagnosis
rate2 The probability that the second rater will record a positive diagnosis
kappa The true Cohen's Kappa statistic
hypokappa The value of kappa under the null hypothesis
alpha Type I error of test
power The desired power to detect the difference between kappa and hypokappa
twosided TRUE if test is two-sided

Value

Returns the required sample size.


The question here is: can I use the intraobserver agreement of the first two observers as rate1 and rate2, respectively?
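[Editor's note: per the argument descriptions quoted above, rate1 and rate2 are each rater's marginal probability of recording a positive diagnosis, not agreement statistics, so they would normally be estimated from pilot or published rating data. A minimal sketch, using hypothetical binary ratings, of how those marginals and Cohen's kappa relate:]

```python
def cohen_kappa_binary(r1, r2):
    """Cohen's kappa for two raters' binary (0/1) ratings, plus each
    rater's marginal probability of a positive rating (rate1, rate2)."""
    n = len(r1)
    pa = sum(a == b for a, b in zip(r1, r2)) / n      # observed agreement
    rate1 = sum(r1) / n                               # P(rater 1 says positive)
    rate2 = sum(r2) / n                               # P(rater 2 says positive)
    # chance agreement: both positive or both negative by chance
    pe = rate1 * rate2 + (1 - rate1) * (1 - rate2)
    kappa = (pa - pe) / (1 - pe)
    return kappa, rate1, rate2

# Hypothetical pilot ratings for eight radiographs:
rater1 = [1, 1, 0, 0, 1, 0, 1, 0]
rater2 = [1, 0, 0, 0, 1, 0, 1, 1]
k, p1, p2 = cohen_kappa_binary(rater1, rater2)
print(k, p1, p2)  # 0.5 0.5 0.5
```

The rate1/rate2 values feeding a sample-size function of this kind are thus proportions of positive calls, which is a different quantity from an intraobserver kappa.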

Christian Lerch

unread,
Sep 18, 2012, 3:32:08 AM9/18/12
to meds...@googlegroups.com, Mehwish
You may find these resources helpful:

1. www.agreestat.com
2. Sample size requirements for estimating intraclass correlations with
desired precision, Bonett DG, Stat Med. 2002 May 15;21(9):1331-5.
3. Planning a reproducibility study: how many subjects and how many
replicates per subject for an expected width of the 95 per cent
confidence interval of the intraclass correlation coefficient. Giraudeau
B, Mary JY., Stat Med. 2001 Nov 15;20(21):3205-14.

Regards,
Christian




Mehwish Hussain

unread,
Sep 18, 2012, 5:00:46 AM9/18/12
to Christian Lerch, meds...@googlegroups.com
Thank you; however, these papers are not available in full text, so I am unable to access them.

Meanwhile, I have tried both R and Stata, but neither of the two commands is accessible in either software.




--
To post a new thread to MedStats, send email to MedS...@googlegroups.com .
MedStats' home page is http://groups.google.com/group/MedStats .
Rules: http://groups.google.com/group/MedStats/web/medstats-rules



--
Regards

Mehwish Hussain, PhD*
Senior Lecturer (DUHS)
Coordinator, MSBE Program (DUHS)
Manager, ORIC (HEC)
Pakistan


Mehwish Hussain

unread,
Sep 19, 2012, 4:08:45 AM9/19/12
to Christian Lerch, meds...@googlegroups.com

I did it in Stata after installing ssklg, which is available via Stata's help options. Just sharing in case anybody doesn't know this.