Added SocialStigmaQA to README.md #17

Open — wants to merge 1 commit into master
1 change: 1 addition & 0 deletions README.md
@@ -242,6 +242,7 @@ Fairness in Legal Text Processing](https://arxiv.org/pdf/2203.07228.pdf). Ilias
50. [Mind Your Bias: A Critical Review of Bias Detection Methods for Contextual Language Models](https://aclanthology.org/2022.findings-emnlp.311/). Silke Husse, Andreas Spitz. EMNLP 2022
51. [Bias Against 93 Stigmatized Groups in Masked Language Models and Downstream Sentiment Classification Tasks](https://dl.acm.org/doi/10.1145/3593013.3594109). Katelyn X. Mei, Sonia Fereidooni, Aylin Caliskan. ACM FAccT 2023
52. [On the Independence of Association Bias and Empirical Fairness in Language Models](https://dl.acm.org/doi/10.1145/3593013.3594004). Laura Cabello, Anna Katrine Jørgensen, Anders Søgaard. ACM FAccT 2023
53. [SocialStigmaQA: A Benchmark to Uncover Stigma Amplification in Generative Language Models](https://ojs.aaai.org/index.php/AAAI/article/view/30142). Manish Nagireddy, Lamogha Chiazor, Moninder Singh, Ioana Baldini. AAAI 2024

##### Bias Mitigation
1. [Reducing Gender Bias in Abusive Language Detection](https://www.aclweb.org/anthology/D18-1302). Ji Ho Park, Jamin Shin, Pascale Fung. EMNLP 2018