Scott's pi coefficient


Overview

The pi coefficient is a chance-adjusted index of the reliability of categorical measurements. It estimates chance agreement using a distribution-based approach: it assumes that the observers share a "quota" for each category that they work together to meet.

History

Scott's (1955) formulation only applies to data from two raters and nominal categories. Fleiss (1971) extended this formula to accommodate multiple raters; his formulation has become known as Fleiss' kappa coefficient. This formula, in turn, was generalized by Gwet (2014) to accommodate multiple raters, any weighting scheme, and missing data; he refers to this formulation as the generalized Fleiss' kappa coefficient. Because both Fleiss' kappa formulations yield the same results as Scott's pi coefficient when applied to data from two raters and nominal categories, I refer to the generalized formulation here as the generalized Scott's pi coefficient. It is also worth noting that several reliability indices are equivalent to Scott's original formulation, including Siegel & Castellan's (1988) revised kappa coefficient and Byrt, Bishop, and Carlin's (1993) bias-adjusted kappa coefficient.

MATLAB Functions

  • FAST_PI %Calculates pi using simplified formulas
  • FULL_PI %Calculates pi using generalized formulas

Simplified Formulas

Use these formulas with two raters and two (dichotomous) categories:


$$p_o = \frac{a + d}{n}$$

$$\bar{p}_1 = \frac{f_1 + g_1}{2n}$$

$$\bar{p}_2 = \frac{f_2 + g_2}{2n}$$

$$p_c = \bar{p}_1^2 + \bar{p}_2^2$$

$$\pi = \frac{p_o - p_c}{1 - p_c}$$


a is the number of items both raters assigned to the first category

d is the number of items both raters assigned to the second category

n is the total number of items

f_1 is the number of items rater A assigned to category 1

f_2 is the number of items rater A assigned to category 2

g_1 is the number of items rater B assigned to category 1

g_2 is the number of items rater B assigned to category 2

Contingency Table

|                     | Rater B: Category 1 | Rater B: Category 2 | Total |
| ------------------- | ------------------- | ------------------- | ----- |
| Rater A: Category 1 | a                   |                     | f_1   |
| Rater A: Category 2 |                     | d                   | f_2   |
| Total               | g_1                 | g_2                 | n     |
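
For concreteness, here is a minimal MATLAB sketch of the simplified formulas. It is a stand-alone illustration (not the repository's FAST_PI function) and assumes that ratingsA and ratingsB are vectors of category labels coded as 1 or 2:

```matlab
function PI = scotts_pi_simple(ratingsA, ratingsB)
% Scott's pi for two raters and two categories (labels 1 and 2),
% computed from the simplified formulas above.
    n  = numel(ratingsA);                      % total number of items
    a  = sum(ratingsA == 1 & ratingsB == 1);   % both raters chose category 1
    d  = sum(ratingsA == 2 & ratingsB == 2);   % both raters chose category 2
    f1 = sum(ratingsA == 1);                   % rater A's total for category 1
    g1 = sum(ratingsB == 1);                   % rater B's total for category 1
    f2 = sum(ratingsA == 2);                   % rater A's total for category 2
    g2 = sum(ratingsB == 2);                   % rater B's total for category 2
    p_o = (a + d) / n;                         % observed agreement
    p_1 = (f1 + g1) / (2 * n);                 % mean proportion for category 1
    p_2 = (f2 + g2) / (2 * n);                 % mean proportion for category 2
    p_c = p_1^2 + p_2^2;                       % distribution-based chance agreement
    PI  = (p_o - p_c) / (1 - p_c);             % chance-adjusted agreement
end
```

For example, scotts_pi_simple([1 1 2 2 1], [1 2 2 2 1]) gives p_o = 0.80, p_c = 0.50, and pi = 0.60.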

Generalized Formulas

Use these formulas with multiple raters, multiple categories, any weighting scheme, and missing data:


$$r^{\ast}_{ik} = \sum_{l=1}^{q} w_{kl} \, r_{il}$$

$$p_o = \frac{1}{n'} \sum_{i=1}^{n'} \sum_{k=1}^{q} \frac{r_{ik} \left( r^{\ast}_{ik} - 1 \right)}{r_i \left( r_i - 1 \right)}$$

$$\bar{p}_k = \frac{1}{n} \sum_{i=1}^{n} \frac{r_{ik}}{r_i}$$

$$p_c = \sum_{k=1}^{q} \sum_{l=1}^{q} w_{kl} \, \bar{p}_k \, \bar{p}_l$$

$$\pi = \frac{p_o - p_c}{1 - p_c}$$


q is the total number of categories

w_kl is the weight associated with two raters assigning an item to categories k and l

r_il is the number of raters that assigned item i to category l

n' is the number of items that were coded by two or more raters

r_ik is the number of raters that assigned item i to category k

r_i is the number of raters that assigned item i to any category

n is the total number of items
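
The MATLAB sketch below illustrates one way these generalized formulas can be computed. It is a sketch under stated assumptions (not the repository's FULL_PI function): CODES is assumed to be an n-by-r matrix of category indices from 1 to q, with NaN marking missing ratings, and W is a q-by-q weight matrix (the identity matrix for nominal categories):

```matlab
function PI = scotts_pi_general(CODES, W)
% Generalized Scott's pi for multiple raters, any weighting scheme,
% and missing data (NaN entries in CODES).
    n = size(CODES, 1);                    % total number of items
    q = size(W, 1);                        % total number of categories
    r_ik = zeros(n, q);                    % raters assigning item i to category k
    for k = 1:q
        r_ik(:, k) = sum(CODES == k, 2);   % NaN entries never match, so they are skipped
    end
    r_i = sum(r_ik, 2);                    % raters who assigned item i to any category
    rstar_ik = r_ik * W';                  % weighted counts: sum over l of w_kl * r_il
    multi = r_i >= 2;                      % items coded by two or more raters
    nprime = sum(multi);
    p_o = sum(sum(r_ik(multi, :) .* (rstar_ik(multi, :) - 1), 2) ./ ...
              (r_i(multi) .* (r_i(multi) - 1))) / nprime;    % observed agreement
    coded = r_i >= 1;                      % items coded by at least one rater
    pbar_k = mean(r_ik(coded, :) ./ r_i(coded), 1);   % average category proportions
                                           % (implicit expansion, MATLAB R2016b+)
    p_c = pbar_k * W * pbar_k';            % distribution-based chance agreement
    PI = (p_o - p_c) / (1 - p_c);          % chance-adjusted agreement
end
```

With two raters, nominal weights (W = eye(2)), and no missing data, this reduces to the simplified formulas above.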

References

  1. Scott, W. A. (1955). Reliability of content analysis: The case of nominal scaling. Public Opinion Quarterly, 19(3), 321–325.
  2. Fleiss, J. L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5), 378–382.
  3. Siegel, S., & Castellan, N. J. (1988). Nonparametric statistics for the behavioral sciences (2nd ed.). New York, NY: McGraw-Hill.
  4. Byrt, T., Bishop, J., & Carlin, J. B. (1993). Bias, prevalence and kappa. Journal of Clinical Epidemiology, 46, 423–429.
  5. Gwet, K. L. (2014). Handbook of inter-rater reliability: The definitive guide to measuring the extent of agreement among raters (4th ed.). Gaithersburg, MD: Advanced Analytics.