
Chance-adjusted index


Overview

Chance-adjusted indices estimate the amount of agreement between raters that can be expected to occur by chance (i.e., through random guessing). Different indices accomplish this estimation in different ways; each makes its own assumptions, which can lead to paradoxical results when violated. All chance-adjusted indices calculate reliability as the ratio of observed non-chance agreement to possible non-chance agreement.

$$R = \frac{p_o - p_c}{1 - p_c}$$

$p_o$ is the percent observed agreement

$p_c$ is the percent chance agreement
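
The following minimal Python sketch (an informal illustration for this page, with hypothetical example values) shows how $R$ is computed from $p_o$ and $p_c$:

```python
def chance_adjusted_index(p_o, p_c):
    """Ratio of observed non-chance agreement to possible non-chance agreement."""
    if p_c >= 1.0:
        raise ValueError("p_c must be less than 1 for the index to be defined")
    return (p_o - p_c) / (1.0 - p_c)

# Hypothetical values: 80% observed agreement, 50% chance agreement
print(chance_adjusted_index(0.80, 0.50))  # 0.6
```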

Chance agreement in simplified formulas

Numerous approaches to estimating chance agreement with two raters and dichotomous categories have been proposed. In general, they follow one of three main approaches: Bennett et al.'s S score uses a category-based approach; Cohen's kappa coefficient uses an individual-distribution-based approach; and Scott's pi coefficient, Gwet's gamma coefficient, and Krippendorff's alpha coefficient use an average-distribution-based approach.


$$p_c^{S} = \frac{1}{q}$$

$$p_c^{\kappa} = \left(\frac{n_{1+}}{n}\right)\left(\frac{n_{+1}}{n}\right) + \left(\frac{n_{2+}}{n}\right)\left(\frac{n_{+2}}{n}\right)$$

$$m_1 = \frac{n_{1+} + n_{+1}}{2n}$$

$$m_2 = \frac{n_{2+} + n_{+2}}{2n}$$

$$p_c^{\pi} = m_1^2 + m_2^2$$

$$p_c^{\gamma} = 2\,m_1 m_2$$

$$p_c^{\alpha_K} = \frac{2n\,m_1(2n\,m_1 - 1) + 2n\,m_2(2n\,m_2 - 1)}{2n(2n - 1)}$$


$q$ is the total number of categories

$n$ is the total number of items

$n_{1+}$ is the number of items rater $r_1$ assigned to category $k_1$

$n_{2+}$ is the number of items rater $r_1$ assigned to category $k_2$

$n_{+1}$ is the number of items rater $r_2$ assigned to category $k_1$

$n_{+2}$ is the number of items rater $r_2$ assigned to category $k_2$

| | Rater $r_2$: category $k_1$ | Rater $r_2$: category $k_2$ | Total |
| --- | --- | --- | --- |
| Rater $r_1$: category $k_1$ | $n_{11}$ | $n_{12}$ | $n_{1+}$ |
| Rater $r_1$: category $k_2$ | $n_{21}$ | $n_{22}$ | $n_{2+}$ |
| Total | $n_{+1}$ | $n_{+2}$ | $n$ |
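
To make the simplified formulas concrete, here is a short Python sketch (an informal illustration, not an official implementation; the marginal counts in the example call are hypothetical) that computes each index's percent chance agreement from the marginal totals defined above:

```python
def chance_agreements(n_1p, n_2p, n_p1, n_p2, q=2):
    """Percent chance agreement under each simplified formula
    (two raters, dichotomous categories).

    n_1p, n_2p -- items rater r_1 assigned to categories k_1 and k_2
    n_p1, n_p2 -- items rater r_2 assigned to categories k_1 and k_2
    q          -- total number of categories (2 in the dichotomous case)
    """
    n = n_1p + n_2p  # total number of items
    assert n == n_p1 + n_p2, "both raters must rate the same n items"

    # Average-distribution proportions for categories k_1 and k_2
    m_1 = (n_1p + n_p1) / (2 * n)
    m_2 = (n_2p + n_p2) / (2 * n)

    return {
        # Category-based: Bennett et al.'s S score
        "S": 1 / q,
        # Individual-distribution-based: Cohen's kappa coefficient
        "kappa": (n_1p / n) * (n_p1 / n) + (n_2p / n) * (n_p2 / n),
        # Average-distribution-based: Scott's pi coefficient
        "pi": m_1 ** 2 + m_2 ** 2,
        # Average-distribution-based: Gwet's gamma (AC1) coefficient
        "gamma": 2 * m_1 * m_2,
        # Average-distribution-based: Krippendorff's alpha coefficient
        "alpha_K": (2 * n * m_1 * (2 * n * m_1 - 1)
                    + 2 * n * m_2 * (2 * n * m_2 - 1)) / (2 * n * (2 * n - 1)),
    }

# Hypothetical example: 100 items; rater r_1 assigned 60 to k_1 and 40 to k_2,
# while rater r_2 assigned 50 to each category.
print(chance_agreements(60, 40, 50, 50))
```

Even with identical data, the five estimates of chance agreement differ, which is why the resulting reliability values can diverge and, when an index's assumptions are violated, produce the paradoxical results noted above.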

Chance agreement in generalized formulas

Coming soon...

References

  1. Zhao, X., Liu, J. S., & Deng, K. (2012). Assumptions behind inter-coder reliability indices. In C. T. Salmon (Ed.), Communication Yearbook (pp. 418–480). Routledge.
  2. Gwet, K. L. (2014). Handbook of inter-rater reliability: The definitive guide to measuring the extent of agreement among raters (4th ed.). Gaithersburg, MD: Advanced Analytics.