Scott's pi coefficient


Overview

The pi coefficient is a chance-adjusted index of the reliability of categorical measurements. It estimates chance agreement using a distribution-based approach, which assumes that the raters share a single distribution of categories, as if they had conspired to meet a joint "quota" for each category.

History

Scott (1955) proposed the pi coefficient to estimate the reliability of two raters assigning items to nominal categories. Fleiss (1971) extended the pi coefficient to accommodate multiple raters. Gwet (2014) then generalized the pi coefficient to accommodate multiple raters, any weighting scheme, and missing data. The generalized formulas provided here, and instantiated in the FULL_PI function, correspond to Gwet's formulation (which he refers to as the generalized Fleiss' kappa coefficient); the simplified formulas, instantiated in the FAST_PI function, correspond to Scott's original formulation. It is also worth noting that several other reliability indices are equivalent to Scott's pi coefficient, including Siegel & Castellan's (1988) revised kappa coefficient and Byrt, Bishop, and Carlin's (1993) bias-adjusted kappa coefficient.

MATLAB Functions

  • FAST_PI %Calculates pi using simplified formulas
  • FULL_PI %Calculates pi using generalized formulas
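A rough usage sketch is given below. The argument lists shown are assumptions (only the function names appear on this page), so consult each function's help text for the actual signatures; both functions are assumed to accept an items-by-raters matrix of numeric category codes.

```matlab
% Hypothetical usage sketch; the argument lists are assumptions,
% so check each function's help text for the actual signatures.
CODES = [1 1; 1 2; 2 2; 2 2; 1 1; 2 1; 1 1; 2 2];  % items-by-raters matrix of category codes
PI_FAST = FAST_PI(CODES);                          % two raters, two categories
PI_FULL = FULL_PI(CODES, 1:2, 'identity');         % assumed extra arguments: category list, weighting scheme
```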

Simplified Formulas

Use these formulas with two raters and two (dichotomous) categories:


Percent observed agreement:

$$p_o = \frac{a + d}{n}$$

Percent chance agreement:

$$p_c = \left(\frac{f_1 + g_1}{2n}\right)^2 + \left(\frac{f_2 + g_2}{2n}\right)^2$$

Pi coefficient:

$$\pi = \frac{p_o - p_c}{1 - p_c}$$


a is the number of items both raters assigned to category 1

d is the number of items both raters assigned to category 2

n is the total number of items

f_1 is the number of items rater A assigned to category 1

f_2 is the number of items rater A assigned to category 2

g_1 is the number of items rater B assigned to category 1

g_2 is the number of items rater B assigned to category 2

Contingency Table

| | Rater B: Category 1 | Rater B: Category 2 | Total |
|---|---|---|---|
| **Rater A: Category 1** | a | b | f_1 |
| **Rater A: Category 2** | c | d | f_2 |
| **Total** | g_1 | g_2 | n |

(b and c are the numbers of items on which the two raters disagreed.)
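For example, the simplified formulas can be applied directly to the cell counts of a contingency table like the one above. The counts below are hypothetical and the script is only an illustration, not the FAST_PI source:

```matlab
% Hypothetical cell counts from a 2x2 contingency table
a = 40; b = 5; c = 10; d = 45;
n = a + b + c + d;                    % total number of items
f1 = a + b;  f2 = c + d;              % rater A marginals
g1 = a + c;  g2 = b + d;              % rater B marginals
p_o = (a + d) / n;                    % percent observed agreement
p_c = ((f1 + g1) / (2 * n))^2 + ...   % percent chance agreement based on
      ((f2 + g2) / (2 * n))^2;        % the mean category proportions
PI = (p_o - p_c) / (1 - p_c);         % Scott's pi (about 0.70 for these counts)
```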

Generalized Formulas

Use these formulas with multiple raters, multiple categories, and any weighting scheme:


Weighted count of raters assigning item i to categories similar to k:

$$r^\star_{ik} = \sum_{l=1}^{q} w_{kl} \, r_{il}$$

Percent observed agreement (summing over the n' items coded by two or more raters):

$$p_o = \frac{1}{n'} \sum_{i=1}^{n'} \sum_{k=1}^{q} \frac{r_{ik} \left( r^\star_{ik} - 1 \right)}{r_i \left( r_i - 1 \right)}$$

Average category proportions:

$$\pi_k = \frac{1}{n} \sum_{i=1}^{n} \frac{r_{ik}}{r_i}$$

Percent chance agreement:

$$p_c = \sum_{k=1}^{q} \sum_{l=1}^{q} w_{kl} \, \pi_k \pi_l$$

Pi coefficient:

$$\pi = \frac{p_o - p_c}{1 - p_c}$$


q is the total number of categories

w_kl is the weight associated with two raters assigning an item to categories k and l

r_il is the number of raters that assigned item i to category l

n' is the number of items that were coded by two or more raters

r_ik is the number of raters that assigned item i to category k

r_i is the number of raters that assigned item i to any category

n is the total number of items
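The generalized formulas translate directly into matrix operations. The script below is an illustrative sketch under identity (unweighted) agreement, not the FULL_PI source; the CODES matrix and category codes are hypothetical.

```matlab
% Illustrative sketch of the generalized formulas (not the FULL_PI source)
CODES = [1 1 2; 1 1 1; 2 2 2; 2 1 NaN; 3 3 3; 1 2 1];  % items-by-raters, NaN = missing
q = 3;                                 % number of categories
w = eye(q);                            % identity weights (unweighted agreement)
n = size(CODES, 1);                    % total number of items
r = zeros(n, q);                       % r(i,k) = raters assigning item i to category k
for k = 1:q
    r(:, k) = sum(CODES == k, 2);
end
ri = sum(r, 2);                        % raters assigning item i to any category
rstar = r * w';                        % weighted counts: sum over l of w(k,l) * r(i,l)
multi = ri >= 2;                       % the n' items coded by two or more raters
p_o = mean(sum(r(multi, :) .* (rstar(multi, :) - 1), 2) ./ ...
           (ri(multi) .* (ri(multi) - 1)));             % percent observed agreement
pik = mean(bsxfun(@rdivide, r, ri), 1);                 % average category proportions
p_c = sum(sum(w .* (pik' * pik)));     % percent chance agreement
PI = (p_o - p_c) / (1 - p_c);          % generalized pi coefficient
```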

References

  1. Scott, W. A. (1955). Reliability of content analysis: The case of nominal scaling. Public Opinion Quarterly, 19(3), 321–325.
  2. Fleiss, J. L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5), 378–382.
  3. Siegel, S., & Castellan, N. J. (1988). Nonparametric statistics for the behavioral sciences. New York, NY: McGraw-Hill.
  4. Byrt, T., Bishop, J., & Carlin, J. B. (1993). Bias, prevalence and kappa. Journal of Clinical Epidemiology, 46, 423–429.
  5. Gwet, K. L. (2014). Handbook of inter-rater reliability: The definitive guide to measuring the extent of agreement among raters (4th ed.). Gaithersburg, MD: Advanced Analytics.