In this paper, a novel dataset of music listening records is introduced to study bias in recommender systems with respect to users' demographics. The authors define fairness as the performance gap between demographic groups and evaluate various collaborative filtering algorithms on both accuracy and fairness metrics. They find significant unfairness between the male and female user groups and examine how recommender algorithms can amplify the underlying population bias.
Additionally, they explore the effectiveness of a resampling strategy as a debiasing method, which slightly improves the fairness measures while maintaining accuracy. The main contributions of the study are the introduction of a large-scale real-world dataset, the identification of algorithms that are robust to gender bias, and an assessment of the impact of data debiasing methods.
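To make the resampling idea concrete, here is a minimal sketch of group-balanced resampling as a pre-processing debiasing step: downsample users of the majority group so each demographic group contributes equally to the training data. The DataFrame layout and column names (`user_id`, `gender`) are assumptions for illustration, not the paper's actual code (see their repo linked below).

```python
import pandas as pd

def balance_by_group(interactions: pd.DataFrame,
                     group_col: str = "gender",
                     seed: int = 42) -> pd.DataFrame:
    """Keep an equal number of users from each demographic group by
    downsampling the majority group(s) to the minority group's size."""
    # One row per user, with that user's group label.
    users = interactions.drop_duplicates("user_id")[["user_id", group_col]]
    # Size of the smallest group sets the per-group user budget.
    n_min = users[group_col].value_counts().min()
    # Randomly sample n_min users from every group.
    kept = users.groupby(group_col).sample(n=n_min, random_state=seed)
    # Keep only the interactions of the retained users.
    return interactions[interactions["user_id"].isin(kept["user_id"])]
```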
Metrics used: NDCG, Recall, Diversity, Coverage
The results demonstrate that the resampling strategy enhances diversity and coverage without significant changes in NDCG and Recall.
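For reference, below is a hedged sketch of how the coverage and diversity metrics could be computed over top-k recommendation lists. Exact definitions vary across papers, so this may not match the authors' formulation; the item-similarity function `sim` is a placeholder assumption.

```python
from itertools import combinations

def catalog_coverage(topk_lists, n_items):
    # Fraction of the item catalog that appears in at least one top-k list.
    recommended = {item for lst in topk_lists for item in lst}
    return len(recommended) / n_items

def intra_list_diversity(topk_lists, sim):
    # Average pairwise dissimilarity (1 - similarity) within each user's
    # list, averaged over all users. `sim(a, b)` is an assumed item-item
    # similarity in [0, 1].
    def ild(lst):
        pairs = list(combinations(lst, 2))
        if not pairs:
            return 0.0
        return sum(1 - sim(a, b) for a, b in pairs) / len(pairs)
    return sum(ild(lst) for lst in topk_lists) / len(topk_lists)

# Example: 5 distinct items recommended out of a 10-item catalog -> 0.5
print(catalog_coverage([[1, 2, 3], [2, 4, 5]], n_items=10))
```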
@hosseinfani
These metrics seem applicable to the team formation problem, and I believe we can use their resampling method as a pre-processing debiasing step and see whether it is effective in making the teams fairer.
Link: https://dl.acm.org/doi/10.1016/j.ipm.2021.102666
Year: 2021
Venue: Information Processing and Management
GitHub code: https://github.com/CPJKU/recommendation_systems_fairness