Fairness-Aware Structured Pruning in Transformers (AAAI 2024)
We propose Fairness-Aware Structured Pruning (FASP): we show that certain attention heads in transformer language models are responsible for bias, and that pruning them improves fairness.
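The selection step can be sketched as follows. This is a toy illustration, not the paper's implementation: the function, the scores, and the parameter names (`gamma`, `prune_frac`) are hypothetical stand-ins for the idea of pruning the most biased heads while protecting those most important for language modeling.

```python
def select_heads_to_prune(bias_scores, importance_scores, gamma, prune_frac):
    """Toy head selection for fairness-aware pruning.

    bias_scores / importance_scores: dicts mapping (layer, head) -> float.
    (Hypothetical scores; in practice they would be estimated on held-out data.)
    gamma: fraction of heads to protect because they matter most for performance.
    prune_frac: fraction of all heads to prune.
    """
    # Protect the gamma fraction of heads most important for performance.
    by_importance = sorted(importance_scores, key=importance_scores.get, reverse=True)
    protected = set(by_importance[: int(gamma * len(by_importance))])

    # Among the remaining heads, rank by bias contribution and prune the top ones.
    candidates = [h for h in sorted(bias_scores, key=bias_scores.get, reverse=True)
                  if h not in protected]
    return candidates[: int(prune_frac * len(bias_scores))]
```

In a HuggingFace `transformers` model, the returned heads could then be removed with `model.prune_heads({layer: [head, ...]})`.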
The figure below illustrates how FASP is applied to a model.
Get started with the Colab tutorial, FASP_AAAI24_reproducibility.ipynb, which guides you through downloading the models, understanding the preprocessing steps, and creating the scripts required to run the experiments.
@inproceedings{zayed2024fairness,
  title={Fairness-aware structured pruning in transformers},
  author={Zayed, Abdelrahman and Mordido, Gon{\c{c}}alo and Shabanian, Samira and Baldini, Ioana and Chandar, Sarath},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={38},
  number={20},
  pages={22484--22492},
  year={2024}
}