Debiasing Stereotyped Language Models via Model Editing
Python · Updated Oct 20, 2024
Tools and resources for the paper "Do Neural Ranking Models Intensify Gender Bias?"
This project investigates different models for the stereotype-detection task.
CalgaryHacks 2021: a rogue-like adventure game heavily based on widespread Canadian stereotypes.
Investigating how culturally sensitive terms are used in Linked Open Data