
# Why Don’t Prompt-Based Fairness Metrics Correlate? (ACL main 2024)

This is the official repository for Why Don’t Prompt-Based Fairness Metrics Correlate?, accepted at ACL main 2024.

Summary (short version): We explain why fairness metrics don't correlate and propose CAIRO to make them correlate.

Summary (longer version): Prompt-based bias metrics don't correlate because prompting is an unreliable way to assess a model's knowledge. In addition, the metrics differ in how they define and quantify bias. For example, one metric may define race bias as the deviation in a model's toxicity when prompted with sentences about Black and white people, while another may measure the difference in sentiment when prompted with sentences about Asian and Middle Eastern people. CAIRO addresses the inconsistencies within these metrics.

## Usage

Please follow the instructions in our tutorial.

## Citation

```bibtex
@article{zayed2024don,
  title={Why Don't Prompt-Based Fairness Metrics Correlate?},
  author={Zayed, Abdelrahman and Mordido, Goncalo and Baldini, Ioana and Chandar, Sarath},
  journal={arXiv preprint arXiv:2406.05918},
  year={2024}
}
```