Defining and Detecting Bias

  • Jacobs, Abigail Z. and Hanna Wallach (2019): Measurement and Fairness. https://arxiv.org/abs/1912.05511
  • Buolamwini, Joy and Timnit Gebru (2018): Gender shades: Intersectional accuracy disparities in commercial gender classification. Conference on Fairness, Accountability and Transparency.
  • Suresh, Harini and John V. Guttag (2019): A framework for understanding unintended consequences of machine learning. arXiv preprint arXiv:1901.10002.
  • Caliskan, Aylin, Joanna J. Bryson and Arvind Narayanan (2017): Semantics derived automatically from language corpora contain human-like biases. Science 356.
  • Garg, Nikhil, Londa Schiebinger, Dan Jurafsky and James Zou (2018): Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences.
  • May, Chandler, Alex Wang, Shikha Bordia, Samuel R. Bowman and Rachel Rudinger (2019): On Measuring Social Biases in Sentence Encoders. Conference of the NAACL.
  • Sap, Maarten, Dallas Card, Saadia Gabriel, Yejin Choi and Noah A. Smith (2019): The Risk of Racial Bias in Hate Speech Detection. Conference of the ACL.
  • Swinger, Nathaniel, Maria De-Arteaga, Neil Heffernan IV, Mark Leiserson and Adam Kalai (2019): What are the biases in my word embedding? Conference on Artificial Intelligence, Ethics, and Society (AIES).
  • The Meaning and Measurement of Bias
  • Algorithmic bias (CACM 2016): http://web.engr.oregonstate.edu/~burnett/CS507/algorithmicBias-cacm2016.pdf

Mitigating Bias

  • Reducing sentiment polarity for demographic attributes in word embeddings using adversarial learning
  • Bolukbasi, Tolga, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama and Adam T. Kalai (2016): Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Advances in Neural Information Processing Systems (NIPS).
  • Y. Elazar and Y. Goldberg (2018): Adversarial removal of demographic attributes from text data. arXiv preprint arXiv:1808.06640.
  • Gonen, Hila and Yoav Goldberg (2019): Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them. Conference of the NAACL.
  • K. Lu, P. Mardziel, F. Wu, P. Amancharla and A. Datta (2018): Gender bias in neural natural language processing. arXiv preprint arXiv:1807.11714.
  • T. Manzini, Y. C. Lim, Y. Tsvetkov and A. W. Black (2019): Black is to criminal as caucasian is to police: Detecting and removing multiclass bias in word embeddings. arXiv preprint arXiv:1904.04047.
  • R. H. Maudslay, H. Gonen, R. Cotterell and S. Teufel (2019): It’s all in the name: Mitigating gender bias with name-based counterfactual data substitution. arXiv preprint arXiv:1909.00871.
  • T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado and J. Dean (2013): Distributed representations of words and phrases and their compositionality. Advances in Neural Information Processing Systems 26, pages 3111–3119.
  • J. H. Park, J. Shin and P. Fung (2018): Reducing gender bias in abusive language detection. arXiv preprint arXiv:1808.07231.
  • T. Sun, A. Gaut, S. Tang, Y. Huang, M. ElSherief, J. Zhao, D. Mirza, E. Belding, K.-W. Chang and W. Y. Wang (2019): Mitigating gender bias in natural language processing: Literature review. arXiv preprint arXiv:1906.08976.
  • Zhang, B. H., Lemoine, B. and Mitchell, M. (2018): Mitigating unwanted biases with adversarial learning. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.
  • Zhao, J., Wang, T., Yatskar, M., Ordonez, V. and Chang, K. W. (2018): Gender bias in coreference resolution: Evaluation and debiasing methods. arXiv preprint arXiv:1804.06876.
