Chance-adjusted index
Chance-adjusted indices estimate the amount of agreement between raters that can be expected to have occurred due to chance (i.e., random guessing). This estimation is accomplished in different ways by different indices; each index makes its own assumptions, which can lead to paradoxical results when those assumptions are violated. Chance-adjusted indices calculate reliability as the ratio of observed non-chance agreement to possible non-chance agreement:
$$\text{reliability} = \frac{p_o - p_c}{1 - p_c}$$

where $p_o$ is percent observed agreement and $p_c$ is percent chance agreement.
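The ratio of observed non-chance agreement to possible non-chance agreement can be sketched in Python (the function name is illustrative, not from any particular library):

```python
def chance_adjusted_reliability(p_o: float, p_c: float) -> float:
    """Ratio of observed non-chance agreement (p_o - p_c) to
    possible non-chance agreement (1 - p_c)."""
    if p_c >= 1.0:
        raise ValueError("chance agreement must be below 1 for the index to be defined")
    return (p_o - p_c) / (1.0 - p_c)

# 80% observed agreement when 50% would be expected by chance
print(chance_adjusted_reliability(0.80, 0.50))
```

Note that when observed agreement equals chance agreement the index is 0, and when observed agreement falls below chance agreement the index is negative.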
Numerous ways of estimating chance agreement with two raters and dichotomous categories have been proposed. In general, they follow one of three main approaches. The category-based approach is adopted by Bennett et al.'s S score; the individual-distribution-based approach is adopted by Cohen's kappa coefficient; and the average-distribution-based approach is adopted by Scott's pi coefficient, Gwet's gamma coefficient, and Krippendorff's alpha coefficient.
- $q$ is the total number of categories
- $n_{11}$ is the number of items rater 1 assigned to category 1
- $n_{12}$ is the number of items rater 1 assigned to category 2
- $n_{21}$ is the number of items rater 2 assigned to category 1
- $n_{22}$ is the number of items rater 2 assigned to category 2
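The three estimation approaches can be sketched for the two-rater, dichotomous case using the counts defined above. This is a minimal illustration (the function name and the $n_{jk}$ argument convention, rater $j$ assigning to category $k$, are assumptions); Gwet's gamma and Krippendorff's alpha use further refinements not shown here:

```python
def chance_agreement(n11: int, n12: int, n21: int, n22: int):
    """Estimate p_c three ways for two raters and two categories.

    n11, n12: items rater 1 assigned to categories 1 and 2
    n21, n22: items rater 2 assigned to categories 1 and 2
    """
    n = n11 + n12  # total number of items
    if n21 + n22 != n:
        raise ValueError("both raters must rate every item")
    q = 2  # dichotomous categories

    # Category-based (Bennett et al.'s S): each category equally likely
    p_c_s = 1.0 / q

    # Individual-distribution-based (Cohen's kappa): product of each
    # rater's own category proportions, summed over categories
    p_c_kappa = (n11 / n) * (n21 / n) + (n12 / n) * (n22 / n)

    # Average-distribution-based (Scott's pi): squared proportions of the
    # distribution averaged across raters
    pbar1 = (n11 + n21) / (2 * n)
    pbar2 = (n12 + n22) / (2 * n)
    p_c_pi = pbar1**2 + pbar2**2

    return p_c_s, p_c_kappa, p_c_pi

# e.g., rater 1 assigns 6 items to category 1 and 4 to category 2;
# rater 2 assigns 5 to each
print(chance_agreement(6, 4, 5, 5))
```

Each estimate of $p_c$ would then be plugged into the reliability ratio above, yielding a different index from the same observed agreement.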
Coming soon...
- Zhao, X., Liu, J. S., & Deng, K. (2012). Assumptions behind inter-coder reliability indices. In C. T. Salmon (Ed.), Communication Yearbook (pp. 418–480). Routledge.
- Gwet, K. L. (2014). Handbook of inter-rater reliability: The definitive guide to measuring the extent of agreement among raters (4th ed.). Gaithersburg, MD: Advanced Analytics.