Measuring inter-rater reliability for nominal data - which coefficients and confidence intervals are appropriate?
Issue Date
2016
Abstract
Reliability of measurements is a prerequisite of medical research. For nominal data, Fleiss' kappa (labelled in the following as Fleiss' K) and Krippendorff's alpha offer the greatest flexibility among available reliability measures with respect to the number of raters and categories. Our aim was to investigate which measures and which confidence intervals provide the best statistical properties for assessing inter-rater reliability in different situations.
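The abstract names Fleiss' kappa as one of the two most flexible agreement coefficients for nominal data. As an illustration only (this is not code from the paper, and the function and variable names are my own), a minimal Fleiss' kappa for complete nominal ratings can be sketched as follows:

```python
# Illustrative sketch of Fleiss' kappa for complete nominal data.
# Input: one row per item, one column per category; each entry is the
# number of raters who assigned that item to that category. Every row
# must sum to the same number of raters n (no missing ratings).

def fleiss_kappa(counts):
    N = len(counts)         # number of items
    n = sum(counts[0])      # raters per item (assumed constant)
    k = len(counts[0])      # number of categories

    # Observed agreement: mean of the per-item agreement P_i
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts
    ) / N

    # Chance agreement from the marginal category proportions p_j
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(pj * pj for pj in p)

    return (P_bar - P_e) / (1 - P_e)

# Example: 4 items, 3 raters, 2 categories.
ratings = [
    [3, 0],  # all three raters chose category 1
    [0, 3],  # all three chose category 2
    [2, 1],  # split 2-1
    [1, 2],  # split 1-2
]
print(fleiss_kappa(ratings))  # ≈ 0.33, agreement moderately above chance
```

Krippendorff's alpha, the other coefficient discussed in the paper, additionally accommodates missing ratings and other scale types; that generality is beyond this sketch.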
Citation
Measuring inter-rater reliability for nominal data - which coefficients and confidence intervals are appropriate? 2016, 16:93 BMC Med Res Methodol
Affiliation
Helmholtz Centre for Infection Research, Inhoffenstr. 7, 38124 Braunschweig, Germany.
Journal
BMC Medical Research Methodology
PubMed ID
27495131
Type
Article
Language
en
ISSN
1471-2288
DOI
10.1186/s12874-016-0200-9
The following license files are associated with this item:
- Creative Commons
Except where otherwise noted, this item's license is described as http://creativecommons.org/licenses/by-nc-sa/4.0/
Related articles
- Reliability in evaluator-based tests: using simulation-constructed models to determine contextually relevant agreement thresholds.
- Authors: Beckler DT, Thumser ZC, Schofield JS, Marasco PD
- Issue date: 2018 Nov 19
- Assessing the inter-rater agreement for ordinal data through weighted indexes.
- Authors: Marasini D, Quatto P, Ripamonti E
- Issue date: 2016 Dec
- A new coefficient of interrater agreement: The challenge of highly unequal category proportions.
- Authors: van Oest R
- Issue date: 2019 Aug
- Semantic Krippendorff's α for measuring inter-rater agreement in SNOMED CT coding studies.
- Authors: Karlsson D, Gøeg KR, Örman H, Højen AR
- Issue date: 2014
- Six of one, half a dozen of the other: A measure of multidisciplinary inter/intra-rater reliability of the society for fetal urology and urinary tract dilation grading systems for hydronephrosis.
- Authors: Rickard M, Easterbrook B, Kim S, Farrokhyar F, Stein N, Arora S, Belostotsky V, DeMaria J, Lorenzo AJ, Braga LH
- Issue date: 2017 Feb