--
--
To post a new thread to MedStats, send email to MedS...@googlegroups.com .
MedStats' home page is http://groups.google.com/group/MedStats .
Rules: http://groups.google.com/group/MedStats/web/medstats-rules
---
You received this message because you are subscribed to the Google Groups "MedStats" group.
To unsubscribe from this group and stop receiving emails from it, send an email to medstats+u...@googlegroups.com.
To view this discussion on the web, visit https://groups.google.com/d/msgid/medstats/7b6c671a-285b-4dba-8042-5957769201aan%40googlegroups.com.
Dear Giovanni,
Thank you for your reply. There are many members here with seriously impressive statistical knowledge; with luck, one or more will choose to respond to you.
My chief concern is medical-surgical rather than statistical: with due respect to the CDC, is it fair to ask whether they are necessarily the best influence on the design of research methodology? They may have too much of a "Headquarters" mentality and so sit behind the frontline curve; to quote an old English expression, "a stitch in time saves nine". I'm worried that using only 3 classes may mean the outcomes miss out on all kinds of medical-surgical research knowledge for the future.
Very best regards,
William Stanbury.
On Mon, 10 May 2021 at 12:57, Giovanni Delli Carpini <giovdell...@gmail.com> wrote:
- Dear William,
- Thank you for your answer!
- The CDC criteria present three outcome categories of surgical site infection (SSI): "no SSI", "superficial incisional SSI", and "deep incisional SSI", while the ASEPSIS score presents five outcome categories: "satisfactory healing" (0–10 points), "disturbance of healing" (11–20 points), "minor SSI" (21–30 points), "moderate SSI" (31–40 points), and "severe SSI" (>40 points).
- In order to compare the two systems and to determine the required sample size (with the R package "kappasize"), we decided to group the ASEPSIS categories as follows: satisfactory healing and disturbance of healing, minor and moderate SSI, and severe SSI, to obtain three categories similar to the CDC criteria.
- I have found some literature explaining several methodologies for obtaining a kappa value for multiple raters, but I am unable to find the right software to perform the analysis and to choose the correct methodology.
- I have tried SPSS (for weighted kappa), but it only shows pairwise comparisons between raters, and the R package "rel" for Krippendorff's alpha, but I was unable to determine the confidence interval. Moreover, I don't know how to subsequently compare the kappa values (CDC vs ASEPSIS); the R package "svanbelle/multiagree" seems useful, but it is difficult to run.
- Giovanni
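[Editor's note: not part of the original thread. One generic route around the confidence-interval problem is a patient-level percentile bootstrap on the difference between the two Fleiss' kappas. The sketch below is a minimal pure-Python illustration under the assumption that each patient's ratings are summarized as category counts across the three raters; all data and function names are made up, and the same idea could be tried in R with `boot::boot()` around `irr::kappam.fleiss()`.]

```python
import random

def fleiss_kappa(counts):
    """Fleiss' kappa. counts[i][j] = number of raters assigning patient i to category j."""
    n = sum(counts[0])                      # raters per patient (3 here)
    N = len(counts)                         # number of patients
    k = len(counts[0])                      # number of categories
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_bar = sum((sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts) / N
    P_e = sum(p * p for p in p_j)
    # Guard: a degenerate resample with total agreement on one category gives P_e == 1
    return 1.0 if P_e == 1 else (P_bar - P_e) / (1 - P_e)

def bootstrap_kappa_diff(cdc, asepsis, B=2000, seed=1):
    """Percentile 95% CI for kappa(CDC) - kappa(ASEPSIS), resampling patients."""
    rng = random.Random(seed)
    N = len(cdc)
    diffs = []
    for _ in range(B):
        idx = [rng.randrange(N) for _ in range(N)]   # keep each patient's two rows paired
        diffs.append(fleiss_kappa([cdc[i] for i in idx])
                     - fleiss_kappa([asepsis[i] for i in idx]))
    diffs.sort()
    return diffs[int(0.025 * B)], diffs[int(0.975 * B) - 1]
```

If zero lies inside the resulting interval, the data do not show a difference in concordance between the two scoring systems at the 5% level.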
- On Monday, 10 May 2021 at 12:19:41 UTC+2, william...@gmail.com wrote:
- Giovanni,
- Good afternoon, welcome!
- :-)
- Thank you for your interesting email.
- If I may please, I have a single question. For the categorical classification, why have only 3 classes (1. no infection 2. mild infection 3. severe infection) please and e.g. no "moderate" or other classes?
- Grazie mille! (Many thanks!)
- :-)
- William Stanbury.
- On Mon, 10 May 2021 at 10:29, Giovanni Delli Carpini <giovdell...@gmail.com> wrote:
- Good morning,
- I am Giovanni Delli Carpini, a researcher in Gynecology and Obstetrics at Università Politecnica delle Marche, Ancona, Italy.
- Thank you for accepting my request to join this interesting group.
- We are conducting a clinical study in which we are evaluating the inter-rater agreement for two scoring systems (CDC criteria and ASEPSIS score) in assessing the presence of surgical site infections after cesarean section.
- Both systems provide a categorical classification in three classes according to the presence of infection (e.g., 1. no infection, 2. mild infection, 3. severe infection).
- Three raters were asked to determine the scores and assign each patient to one of the three classes.
- My question is: which statistic should we use to obtain a kappa value for multiple raters (weighted kappa? Fleiss' kappa? Krippendorff's alpha?)? Subsequently, is it possible to compare the obtained kappa values to verify whether there is any difference between them (in other words, whether one of the two scoring systems provides higher concordance between raters)?
- Thank you for the help,
- Giovanni Delli Carpini
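[Editor's note: not part of the original thread. For several raters and unordered classes, Fleiss' kappa is the standard multi-rater generalization (Cohen's weighted kappa is defined for pairs of raters; Krippendorff's alpha also handles multiple raters and missing data). A minimal pure-Python sketch, with made-up counts for illustration:]

```python
def fleiss_kappa(counts):
    """Fleiss' kappa. counts[i][j] = number of raters putting subject i in
    category j; every row must sum to the same number of raters n."""
    n = sum(counts[0])              # raters per subject
    N = len(counts)                 # subjects
    k = len(counts[0])              # categories
    # Overall proportion of assignments falling in each category
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    # Mean per-subject pairwise agreement
    P_bar = sum((sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts) / N
    # Agreement expected by chance
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)

# Made-up example: 5 patients, 3 raters, 3 classes (no/mild/severe infection)
ratings = [[3, 0, 0], [2, 1, 0], [0, 3, 0], [0, 1, 2], [0, 0, 3]]
print(round(fleiss_kappa(ratings), 3))   # → 0.6
```

In R, `irr::kappam.fleiss()` and, in Python, `statsmodels.stats.inter_rater.fleiss_kappa` implement the same statistic on a subjects-by-categories count matrix.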
From: "Bruce Weaver" <bwe...@lakeheadu.ca>
To: "medstats" <meds...@googlegroups.com>
Sent: Tuesday, 11 May, 2021 22:40:51
Subject: Re: {MEDSTATS} Agreement between multiple raters and comparison of kappa values
Good comments, Chris. I'll offer just one small correction via this excerpt from Fleiss & Cohen (1973):
"This paper establishes the equivalence of weighted kappa with the intraclass correlation coefficient under general conditions. Krippendorff (1970) demonstrated essentially the same result."
Fleiss, J. L., & Cohen, J. (1973). The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educational and Psychological Measurement, 33(3), 613–619.
A Google Scholar search on the title of that article takes me to a PDF I can open from home. YMMV.
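[Editor's note: to make the excerpt concrete, here is a minimal sketch of Cohen's weighted kappa for two raters with quadratic weights, which penalize disagreements by the squared category distance; this quadratic-weight version is the one Fleiss & Cohen (1973) show to be equivalent to an intraclass correlation. All data are hypothetical.]

```python
def weighted_kappa(r1, r2, k):
    """Cohen's weighted kappa for two raters, quadratic (squared-distance) weights.
    r1, r2: lists of category indices 0..k-1 for the same N subjects."""
    N = len(r1)
    obs = [[0] * k for _ in range(k)]            # observed cross-classification
    for a, b in zip(r1, r2):
        obs[a][b] += 1
    row = [sum(obs[i]) for i in range(k)]                        # rater 1 marginals
    col = [sum(obs[i][j] for i in range(k)) for j in range(k)]   # rater 2 marginals
    w = lambda i, j: (i - j) ** 2                # quadratic disagreement weight
    observed = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    expected = sum(w(i, j) * row[i] * col[j] / N for i in range(k) for j in range(k))
    return 1 - observed / expected

# Hypothetical ratings on a 3-class scale
print(weighted_kappa([0, 1, 2, 1, 0], [0, 1, 2, 1, 0], 3))   # perfect agreement → 1.0
```

Linear weights (|i − j|) are the other common choice; only the quadratic version carries the ICC equivalence discussed above.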