Discussion about Adding a Baseline to Adila #85
Hi @hosseinfani, @mahdis-saeedi

```python
def fair_greedy(member_prob, att_list, prob_dist):
    L = list(zip(member_prob, att_list))
    print(L)
    R = [L[0]]  # Initialize R with the top-ranked item
    print(len(L))
    for i in range(1, len(L)):
        flag = False
        p = calculate_att_dist(R)
        # To address comments after meeting with Mahdis
        p_diff = {False: p[False] - prob_dist[False], True: p[True] - prob_dist[True]}
        while not flag:
            z_min = min(p_diff, key=p_diff.get)
            # Find the first remaining item with the underrepresented attribute
            for j in range(i, len(L)):
                if L[j][1] == z_min:
                    temp = L.pop(j)
                    # Shift the items in L so the selected item lands at position i
                    L = L[:i] + [temp] + L[i:]
                    print(len(L))
                    R.append(temp)
                    flag = True
                    break
                # Avoid an infinite loop when there are not enough samples
                # with the chosen protected attribute
                if j == len(L) - 1:
                    flag = True
                    break
    return R


def calculate_att_dist(members):
    false_ratio = [second_item for _, second_item in members].count(False) / len(members)
    return {True: 1 - false_ratio, False: false_ratio}


if __name__ == "__main__":
    member_prob = [0.9, 0.8, 0.7, 0.6, 0.5]
    att_list = [False, False, False, True, True]
    x = fair_greedy(member_prob, att_list, {False: 0.6, True: 0.4})
    print(x)
```
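As a quick sanity check on the helper above, the attribute distribution of the sample input can be recomputed directly (a standalone sketch; the `sample` pairs and the target distribution are taken from the `__main__` block above):

```python
# Standalone copy of the calculate_att_dist helper from the snippet above,
# applied to the example (score, protected attribute) pairs.
def calculate_att_dist(members):
    false_ratio = [att for _, att in members].count(False) / len(members)
    return {True: 1 - false_ratio, False: false_ratio}

sample = [(0.9, False), (0.8, False), (0.7, False), (0.6, True), (0.5, True)]
print(calculate_att_dist(sample))  # {True: 0.4, False: 0.6}
```

Note that the full sample already matches the target distribution `{False: 0.6, True: 0.4}`, so the re-ranking mainly changes which prefixes of the list satisfy it.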
Hi @hosseinfani, @mahdis-saeedi
I have attached a step-by-step traceback of these settings to this issue. This work is ongoing, and this was only the first step.
Hi,
I have attached a step-by-step traceback of my sample run to this issue.
Hi @hosseinfani, @mahdis-saeedi,
I would love to hear your thoughts.
Hi @hosseinfani, @mahdis-saeedi
Although this issue is not finalized and is still under construction, the following are the options I have come up with so far:
1) Implementing the fourth algorithm presented in Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search.
I have already implemented this option since it did not require much effort. To summarize the effect: fairness improves, but utility drops drastically (it is better than the other deterministic algorithms we have already implemented, but it still fails to hold on to utility).
2) One of the three algorithms proposed in Has CEO Gender Bias Really Been Fixed? Adversarial Attacking and Improving Gender Fairness in Image Search.
This paper provides three algorithms: i) Epsilon-greedy, ii) Relevance-aware Swapping, iii) Fair-greedy.
Fair-greedy was their best algorithm based on the reported results, but I was only able to find an implementation for Epsilon-greedy.
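For context on option 2, here is a minimal sketch of what an Epsilon-greedy re-ranker typically looks like: with probability epsilon, each position is swapped with a randomly chosen lower-ranked item, injecting controlled randomness into the ranking. This is my reading of the general technique, not the paper's released code; the function name and the epsilon value are made up for illustration.

```python
import random

def epsilon_greedy_rerank(ranked, epsilon=0.2, seed=None):
    """With probability epsilon, swap the item at each position with a
    randomly chosen lower-ranked item; otherwise keep it in place.
    A hedged sketch of the general idea, not a reference implementation."""
    rng = random.Random(seed)
    out = list(ranked)
    for i in range(len(out) - 1):
        if rng.random() < epsilon:
            j = rng.randrange(i + 1, len(out))  # pick a lower-ranked item
            out[i], out[j] = out[j], out[i]
    return out

# With epsilon=0 the ranking is unchanged; higher epsilon perturbs it more.
print(epsilon_greedy_rerank([0.9, 0.8, 0.7, 0.6, 0.5], epsilon=0.5, seed=0))
```

Because the swaps only permute the list, utility can degrade but no items are dropped, which makes it easy to compare against fair_greedy on the same inputs.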
Questions:
a) Does our new baseline (e.g., Epsilon-greedy) have to beat FA*IR?