Detecting implicit biases in large language corpora

πŸ“ virtual MZES, Mannheim

πŸ“† June 01, 2022

In this tutorial, I will show you how the R package sweater can be used to detect biases in word embeddings. The package provides highly optimized functions for calculating the following bias metrics: mean average cosine similarity, relative norm distance, SemAxis, normalized association score, relative negative sentiment bias, embedding coherence test, and word embedding association test. Using two publicly available word embeddings trained on media content, I will demonstrate how sweater can be used to study implicit gender and racial biases.
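
As a quick illustration, below is a minimal sketch of a gender-bias query, assuming the `query()` interface and the bundled `glove_math` sample embeddings described in the sweater documentation; the word sets, the `method` values, and the `calculate_es()` helper are illustrative and may differ across package versions.

```r
## Minimal sketch (assumptions: query() interface, glove_math sample data,
## and calculate_es() helper as described in the sweater documentation).
library(sweater)

## Target words: two concepts to be probed for gender bias
S1 <- c("math", "algebra", "geometry", "calculus", "equations",
        "computation", "numbers", "addition")
T1 <- c("poetry", "art", "dance", "literature", "novel",
        "symphony", "drama", "sculpture")

## Attribute words: the two poles of the gender dimension
A1 <- c("male", "man", "boy", "brother", "he", "him", "his", "son")
B1 <- c("female", "woman", "girl", "sister", "she", "her", "hers", "daughter")

## Run a Word Embedding Association Test (WEAT) and report its effect size;
## other metrics listed above can be requested via the method argument.
res <- query(glove_math, S_words = S1, T_words = T1,
             A_words = A1, B_words = B1, method = "weat")
calculate_es(res)
```

A positive effect size would indicate that the male attribute words sit closer to the math-related target words than the female attribute words do, i.e. an implicit gender bias encoded in the embeddings.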

πŸ“ Slides

👤 Chung-hong Chan is a Research Fellow at the Mannheim Center for European Social Research (MZES), University of Mannheim.
