Bias in Pre-trained Neural Language Models


Towards Comprehensive Understanding of Bias in Pre-trained Neural Language Models: A Survey with Special Emphasis on Affective Bias
Anoop K1, Manjary P Gangan1, Deepak P2 and Lajish V L1
1University of Calicut, Kerala, India
2Queen’s University Belfast, UK

📝 Paper: Pre-print (Accepted at ICDSE)

Abstract: The remarkable progress in Natural Language Processing (NLP) brought about by deep learning, particularly with the recent advent of large pre-trained neural language models, is marred as several studies have begun to discuss and report potential biases in NLP applications. Bias in NLP is found to originate from latent historical biases encoded by humans into textual data, which get perpetuated or even amplified by NLP algorithms. We present a survey to comprehend bias in large pre-trained language models, the stages at which it occurs in these models, and the various ways in which these biases are quantified and mitigated. Considering the wide applicability of textual affective computing based downstream tasks in real-world systems such as business, healthcare, education, etc., we place special emphasis on investigating bias in the context of affect (emotion), i.e., Affective Bias, in large pre-trained language models. We present a summary of various bias evaluation corpora that can aid future research and discuss challenges in the research on bias in pre-trained language models. We believe that our attempt to draw a comprehensive view of bias in pre-trained language models, and especially the exploration of affective bias, will be highly beneficial for acquiring deep knowledge of recent paradigms in this area of research.
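A central thread of the survey is how bias is quantified. Many evaluation approaches score counterfactual sentence pairs that differ only in an identity term and measure the gap in the model's affective predictions. The sketch below is a minimal, hypothetical illustration of that idea: `emotion_score` is a toy lexicon-based stand-in for a real pre-trained model's emotion-intensity output, and the template sentences and identity terms are illustrative, not drawn from any specific benchmark.

```python
# Counterfactual-pair bias probe: fill templates with identity terms swapped
# and measure the average gap in predicted emotion intensity.

TEMPLATES = [
    "{person} feels angry.",
    "{person} made me feel happy.",
    "The conversation with {person} was infuriating.",
]

# Toy anger lexicon; a real study would query a pre-trained language model.
TOY_LEXICON = {"angry": 0.9, "infuriating": 0.8, "happy": 0.1}

def emotion_score(sentence: str) -> float:
    """Return a toy anger-intensity score based on lexicon hits."""
    words = sentence.lower().strip(".").split()
    return max((TOY_LEXICON.get(w, 0.0) for w in words), default=0.0)

def counterfactual_gap(term_a: str, term_b: str) -> float:
    """Mean absolute score difference when only the identity term changes."""
    gaps = [
        abs(emotion_score(t.format(person=term_a))
            - emotion_score(t.format(person=term_b)))
        for t in TEMPLATES
    ]
    return sum(gaps) / len(gaps)

if __name__ == "__main__":
    # The toy lexicon ignores the identity term, so the gap here is zero;
    # with a real pre-trained model, a nonzero gap would signal affective bias.
    print(counterfactual_gap("she", "he"))
```

With a pre-trained model plugged into `emotion_score`, a nonzero `counterfactual_gap` indicates that the model's affective predictions depend on the identity term rather than the sentence content alone.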

For other inquiries, please contact:
Anoop K 📧 [email protected] 🌏 website
Manjary P Gangan 📧 [email protected] 🌏 website
Deepak P 📧 [email protected] 🌏 website
Lajish V L 📧 [email protected] 🌏 website

Citation

@misc{anoop2022towards,
  title={Towards an Enhanced Understanding of Bias in Pre-trained Neural Language Models: A Survey with Special Emphasis on Affective Bias},
  author={Anoop, K and Manjary, P Gangan and Deepak, P and Lajish, VL},
  journal={arXiv preprint arXiv:2204.10365},
  year={2022}
}
