Merge pull request #533 from bugcrowd/AI-Biases-24
Adding AI Bias Entries
Showing 39 changed files with 453 additions and 0 deletions.
3 changes: 3 additions & 0 deletions
submissions/description/algorithmic_biases/aggregation_bias/guidance.md
# Guidance

Provide a step-by-step walkthrough, with screenshots, of how you exploited the bias. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the bias, how you identified it, and what actions you were able to perform as a result.
11 changes: 11 additions & 0 deletions
submissions/description/algorithmic_biases/aggregation_bias/recommendations.md
# Recommendation(s)

Establish practices and policies that ensure responsible data collection and training. This can include:

- Conducting a comprehensive review of the training data to find and remediate biases, including re-sampling underrepresented groups and adjusting the model parameters to promote fairness; a re-sampling sketch follows this list.
- Developing, monitoring, and evaluating business processes that incorporate ethical frameworks, best practices, and fairness concerns.
- Clearly defining the desired outcomes of the AI model, then framing the key variables to capture.
- Ensuring that the data collected and used to train the AI model reflects the environment in which it will be deployed and contains diverse and representative data.
- Designing and developing algorithms that are sensitive to fairness considerations, and auditing these regularly.
- Practicing data collection principles that do not disadvantage specific groups.
- Documenting the development of the AI model, including all datasets, variables identified, and decisions made throughout the development cycle.
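As an illustration only, a minimal re-sampling sketch in Python, assuming training records live in a pandas DataFrame with a `demographic_group` column (the library choice and column name are assumptions, not part of these templates):

```python
import pandas as pd

def resample_to_parity(df: pd.DataFrame, group_col: str = "demographic_group",
                       random_state: int = 0) -> pd.DataFrame:
    """Oversample (with replacement) each underrepresented group so that
    every group appears as often as the largest one."""
    target = df[group_col].value_counts().max()
    parts = [
        group.sample(n=target, replace=True, random_state=random_state)
        for _, group in df.groupby(group_col)
    ]
    return pd.concat(parts, ignore_index=True)
```

Oversampling is only one re-balancing strategy; re-weighting or collecting additional data may be preferable where duplicated records risk overfitting.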
23 changes: 23 additions & 0 deletions
submissions/description/algorithmic_biases/aggregation_bias/template.md
# Aggregation Bias

## Overview of the Vulnerability

Aggregation bias occurs when an AI model displays systematic favoritism while processing data from different demographic groups. This bias originates from training data that is skewed or that underrepresents certain groups. Outputs from AI models with an aggregation bias can result in unequal treatment of users based on demographic characteristics, which can lead to unfair and discriminatory outcomes.

## Business Impact

Aggregation bias in this AI model can result in reputational damage and indirect financial loss due to the loss of customer trust in the output of the model.

## Steps to Reproduce

1. Obtain a diverse dataset containing demographic information
1. Feed the dataset into the AI model
1. Record the model's predictions and decisions
1. Compare outcomes across different demographic groups (see the comparison sketch after this list)
1. Observe the systematic favoritism displayed by the model toward one or more specific groups
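As an illustration only, a minimal Python sketch of the outcome comparison in step 4, assuming a scikit-learn-style model with a `predict` method and a pandas DataFrame with a `demographic_group` column (all assumed names, not part of the original template):

```python
import pandas as pd

def group_outcome_rates(model, df: pd.DataFrame, feature_cols: list,
                        group_col: str = "demographic_group") -> pd.Series:
    """Rate of favorable (positive) predictions per demographic group;
    large gaps between groups suggest aggregation bias."""
    scored = df.copy()
    scored["prediction"] = model.predict(df[feature_cols])
    return scored.groupby(group_col)["prediction"].mean()
```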
## Proof of Concept (PoC)

The screenshot(s) below demonstrate(s) the vulnerability:

{{screenshot}}
# Guidance

Provide a step-by-step walkthrough, with screenshots, of how you exploited the bias. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the bias, how you identified it, and what actions you were able to perform as a result.
3 changes: 3 additions & 0 deletions
submissions/description/algorithmic_biases/processing_bias/guidance.md
# Guidance

Provide a step-by-step walkthrough, with screenshots, of how you exploited the bias. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the bias, how you identified it, and what actions you were able to perform as a result.
11 changes: 11 additions & 0 deletions
submissions/description/algorithmic_biases/processing_bias/recommendations.md
# Recommendation(s)

Establish practices and policies that ensure responsible data collection and training. This can include:

- Conducting a comprehensive review of the training data to find and remediate biases, including re-sampling underrepresented groups and adjusting the model parameters to promote fairness.
- Developing, monitoring, and evaluating business processes that incorporate ethical frameworks, best practices, and fairness concerns.
- Clearly defining the desired outcomes of the AI model, then framing the key variables to capture.
- Ensuring that the data collected and used to train the AI model reflects the environment in which it will be deployed and contains diverse and representative data.
- Designing and developing algorithms that are sensitive to fairness considerations, and auditing these regularly.
- Practicing data collection principles that do not disadvantage specific groups.
- Documenting the development of the AI model, including all datasets, variables identified, and decisions made throughout the development cycle.
21 changes: 21 additions & 0 deletions
submissions/description/algorithmic_biases/processing_bias/template.md
# Processing Bias

## Overview of the Vulnerability

Processing bias occurs when an AI algorithm makes biased decisions or predictions because of the way it processes data. This can be a result of the algorithm's design or of the data the model was trained on. Outputs from AI models with a processing bias can result in discrimination, reinforcement of stereotypes, and unintended consequences such as the amplification or polarization of viewpoints that disadvantage certain groups.

## Business Impact

Processing bias in this AI model can result in reputational damage and indirect monetary loss due to the loss of customer trust in the output of the model.

## Steps to Reproduce

1. Split the following benchmark dataset into two sets, one to act as the training dataset and the other as the testing dataset: {{Benchmark data set}}
1. Feed the testing dataset into the AI model
1. Examine the model's predictions and note that the following disparity exists (see the sketch after this list): {{Disparity between Group A and Group B}}
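As an illustration only, a minimal Python sketch of the split-and-compare procedure, assuming a scikit-learn-style estimator and a pandas DataFrame with label and group columns (all assumed names, not part of the original template):

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def per_group_accuracy(model, df, feature_cols, label_col="label",
                       group_col="demographic_group"):
    """Fit on one half of the data, then compare test accuracy per group;
    a large gap between groups indicates a processing disparity."""
    train, test = train_test_split(df, test_size=0.5, random_state=0)
    model.fit(train[feature_cols], train[label_col])
    test = test.assign(prediction=model.predict(test[feature_cols]))
    return test.groupby(group_col).apply(
        lambda g: accuracy_score(g[label_col], g["prediction"])
    )
```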
## Proof of Concept (PoC)

The screenshot(s) below demonstrate(s) the vulnerability:

{{screenshot}}
11 changes: 11 additions & 0 deletions
submissions/description/algorithmic_biases/recommendations.md
# Recommendation(s)

Establish practices and policies that ensure responsible data collection and training. This can include:

- Conducting a comprehensive review of the training data to find and remediate biases, including re-sampling underrepresented groups and adjusting the model parameters to promote fairness.
- Developing, monitoring, and evaluating business processes that incorporate ethical frameworks, best practices, and fairness concerns.
- Clearly defining the desired outcomes of the AI model, then framing the key variables to capture.
- Ensuring that the data collected and used to train the AI model reflects the environment in which it will be deployed and contains diverse and representative data.
- Designing and developing algorithms that are sensitive to fairness considerations, and auditing these regularly.
- Practicing data collection principles that do not disadvantage specific groups.
- Documenting the development of the AI model, including all datasets, variables identified, and decisions made throughout the development cycle.
# Algorithmic Bias

## Overview of the Vulnerability

Algorithmic bias occurs when the algorithms used to develop an AI model produce biased outcomes as a result of inherent flaws or limitations in their design. This bias originates from assumptions made during algorithm development, the selection of inappropriate models, or the way data is processed and weighted. The result is an AI model that makes unfair, skewed, or discriminatory decisions.

## Business Impact

Algorithmic bias in this AI model can result in reputational damage and indirect financial loss due to the loss of customer trust in the output of the model.

## Steps to Reproduce

1. Select an AI algorithm known to have potential biases
1. Train the algorithm on a dataset that may amplify these biases
1. Test the algorithm's decisions or predictions on a diverse dataset
1. Identify and document instances where the algorithm's output is biased

## Proof of Concept (PoC)

The screenshot(s) below demonstrate(s) the vulnerability:

{{screenshot}}
# Guidance

Provide a step-by-step walkthrough, with screenshots, of how you exploited the bias. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the bias, how you identified it, and what actions you were able to perform as a result.
3 changes: 3 additions & 0 deletions
submissions/description/data_biases/pre_existing_bias/guidance.md
# Guidance

Provide a step-by-step walkthrough, with screenshots, of how you exploited the bias. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the bias, how you identified it, and what actions you were able to perform as a result.
11 changes: 11 additions & 0 deletions
submissions/description/data_biases/pre_existing_bias/recommendations.md
# Recommendation(s)

Establish practices and policies that ensure responsible data collection and training. This can include:

- Conducting a comprehensive review of the training data to find and remediate biases, including re-sampling underrepresented groups and adjusting the model parameters to promote fairness.
- Developing, monitoring, and evaluating business processes that incorporate ethical frameworks, best practices, and fairness concerns.
- Clearly defining the desired outcomes of the AI model, then framing the key variables to capture.
- Ensuring that the data collected and used to train the AI model reflects the environment in which it will be deployed and contains diverse and representative data.
- Designing and developing algorithms that are sensitive to fairness considerations, and auditing these regularly.
- Practicing data collection principles that do not disadvantage specific groups.
- Documenting the development of the AI model, including all datasets, variables identified, and decisions made throughout the development cycle.
21 changes: 21 additions & 0 deletions
submissions/description/data_biases/pre_existing_bias/template.md
# Pre-existing Bias (Historical Bias)

## Overview of the Vulnerability

Pre-existing bias occurs when historical or societal prejudices are present in the training data. This can look like a lack of certain data points, overrepresentation or underrepresentation of groups, a bias in the selection of data points that make up the AI model, or data labels that are discriminatory or subjective. Outputs from AI models that have a pre-existing bias can result in inferior performance and outcomes that disadvantage certain groups.

## Business Impact

Pre-existing bias in this AI model can result in reputational damage and indirect monetary loss due to the loss of customer trust in the output of the model.

## Steps to Reproduce

1. Input the following text into the model. It references the well-represented group within the data: {{Text denoting well represented group within the data}}
1. Input the following text into the model. It references the insufficiently represented group within the data: {{Text denoting the insufficiently represented group within the data}}
1. Note that the output of the AI model classifies these two groups disparately, showing a pre-existing bias (see the paired-input sketch after this list).
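As an illustration only, a minimal Python sketch of this paired-input probe, assuming a `classify` callable that returns a numeric score (e.g., the probability of the favorable class) for an input string; the function name and return type are assumptions:

```python
def compare_group_scores(classify, text_a: str, text_b: str) -> float:
    """Score two inputs that differ only in which group they reference;
    a large gap suggests pre-existing bias in the training data."""
    score_a = classify(text_a)  # text referencing the well-represented group
    score_b = classify(text_b)  # text referencing the underrepresented group
    print(f"Group A: {score_a:.3f}  Group B: {score_b:.3f}  gap: {score_a - score_b:+.3f}")
    return score_a - score_b
```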
## Proof of Concept (PoC)

The screenshot(s) below demonstrate(s) the vulnerability:

{{screenshot}}
# Recommendation(s)

Establish practices and policies that ensure responsible data collection and training. This can include:

- Conducting a comprehensive review of the training data to find and remediate biases, including re-sampling underrepresented groups and adjusting the model parameters to promote fairness.
- Developing, monitoring, and evaluating business processes that incorporate ethical frameworks, best practices, and fairness concerns.
- Clearly defining the desired outcomes of the AI model, then framing the key variables to capture.
- Ensuring that the data collected and used to train the AI model reflects the environment in which it will be deployed and contains diverse and representative data.
- Designing and developing algorithms that are sensitive to fairness considerations, and auditing these regularly.
- Practicing data collection principles that do not disadvantage specific groups.
- Documenting the development of the AI model, including all datasets, variables identified, and decisions made throughout the development cycle.
3 changes: 3 additions & 0 deletions
submissions/description/data_biases/representation_bias/guidance.md
# Guidance

Provide a step-by-step walkthrough, with screenshots, of how you exploited the bias. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the bias, how you identified it, and what actions you were able to perform as a result.
11 changes: 11 additions & 0 deletions
submissions/description/data_biases/representation_bias/recommendations.md
# Recommendation(s)

Establish practices and policies that ensure responsible data collection and training. This can include:

- Conducting a comprehensive review of the training data to find and remediate biases, including re-sampling underrepresented groups and adjusting the model parameters to promote fairness.
- Developing, monitoring, and evaluating business processes that incorporate ethical frameworks, best practices, and fairness concerns.
- Clearly defining the desired outcomes of the AI model, then framing the key variables to capture.
- Ensuring that the data collected and used to train the AI model reflects the environment in which it will be deployed and contains diverse and representative data.
- Designing and developing algorithms that are sensitive to fairness considerations, and auditing these regularly.
- Practicing data collection principles that do not disadvantage specific groups.
- Documenting the development of the AI model, including all datasets, variables identified, and decisions made throughout the development cycle.
21 changes: 21 additions & 0 deletions
submissions/description/data_biases/representation_bias/template.md
# Representation Bias

## Overview of the Vulnerability

Representation bias occurs when the training data of an AI model omits, or insufficiently represents, certain groups that the AI model is intended to serve (a representation audit sketch follows this overview). Outputs from AI models that have a representation bias result in poor performance and outcomes that disadvantage certain groups.
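As an illustration only, a minimal Python sketch for auditing how often each group appears in the training data, assuming records are dicts with a `group` key (an assumed schema, not part of the original template):

```python
from collections import Counter

def representation_report(records, group_key="group"):
    """Count and print how often each group appears in the training data;
    groups near zero are candidates for representation bias."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    for group, n in counts.most_common():
        print(f"{group}: {n} records ({n / total:.1%})")
    return counts
```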
## Business Impact

Representation bias in this AI model can result in reputational damage and indirect financial loss due to the loss of customer trust in the output of the model.

## Steps to Reproduce

1. Input the following text into the model. It references the well-represented group within the data: {{Text denoting well represented group within the data}}
1. Input the following text into the model. It references the insufficiently represented group within the data: {{Text denoting the insufficiently represented group within the data}}
1. Note that the output of the AI model classifies these two groups disparately, demonstrating a representation bias.

## Proof of Concept (PoC)

The screenshot(s) below demonstrate(s) the vulnerability:

{{screenshot}}
# Data Biases

## Overview of the Vulnerability

Data bias occurs when the data used to train the AI model is flawed, unrepresentative, or systematically skewed. Biases can stem from different sources, such as sampling errors, historical prejudices, or a lack of diversity in the dataset. Outputs from AI models that have a data bias can result in inaccurate, unfair, or otherwise discriminatory predictions or decisions.

## Business Impact

Data bias in this AI model can result in reputational damage and indirect monetary loss due to the loss of customer trust in the output of the model.

## Steps to Reproduce

1. Input the following text into the model. It references the well-represented group within the data: {{Text denoting well represented group within the data}}
1. Input the following text into the model. It references the insufficiently represented group within the data: {{Text denoting the insufficiently represented group within the data}}
1. Note that the output of the AI model classifies these two groups disparately, showing a bias in the data.

## Proof of Concept (PoC)

The screenshot(s) below demonstrate(s) the vulnerability:

{{screenshot}}
# Guidance

Provide a step-by-step walkthrough, with screenshots, of how you exploited the bias. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the bias, how you identified it, and what actions you were able to perform as a result.
3 changes: 3 additions & 0 deletions
submissions/description/developer_biases/implicit_bias/guidance.md
# Guidance

Provide a step-by-step walkthrough, with screenshots, of how you exploited the bias. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the bias, how you identified it, and what actions you were able to perform as a result.
11 changes: 11 additions & 0 deletions
submissions/description/developer_biases/implicit_bias/recommendations.md
# Recommendation(s)

Establish practices and policies that ensure responsible data collection and training. This can include:

- Conducting a comprehensive review of the training data to find and remediate biases, including re-sampling underrepresented groups and adjusting the model parameters to promote fairness.
- Developing, monitoring, and evaluating business processes that incorporate ethical frameworks, best practices, and fairness concerns.
- Clearly defining the desired outcomes of the AI model, then framing the key variables to capture.
- Ensuring that the data collected and used to train the AI model reflects the environment in which it will be deployed and contains diverse and representative data.
- Designing and developing algorithms that are sensitive to fairness considerations, and auditing these regularly.
- Practicing data collection principles that do not disadvantage specific groups.
- Documenting the development of the AI model, including all datasets, variables identified, and decisions made throughout the development cycle.
20 changes: 20 additions & 0 deletions
submissions/description/developer_biases/implicit_bias/template.md
# Implicit Bias

## Overview of the Vulnerability

Implicit bias occurs when biases present within the training data of an AI model affect its decision-making. These implicit biases are usually introduced into the AI model by the developers who shape the design, implementation, and deployment of the AI system.

## Business Impact

Implicit bias in this AI model can result in unintended discrimination and unfairness, which can lead to reputational damage and a loss of customer trust in the output of the model.

## Steps to Reproduce

1. Provide the AI model with data containing subtle, implicit biases.
1. Observe the model's decisions and identify instances where it unintentionally favors certain groups or viewpoints (see the counterfactual sketch after this list).
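As an illustration only, a minimal Python sketch of a counterfactual probe that swaps a single demographic term in otherwise-identical input; `classify`, the template string, and the terms are all hypothetical names, not part of the original template:

```python
def counterfactual_probe(classify, template: str, term_a: str, term_b: str) -> float:
    """Swap one demographic term in otherwise-identical input and compare
    the model's scores; a large shift suggests implicit bias."""
    score_a = classify(template.format(term_a))
    score_b = classify(template.format(term_b))
    return score_a - score_b

# Hypothetical usage:
# gap = counterfactual_probe(model.score, "The {} applicant is reliable.", "young", "elderly")
```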
## Proof of Concept (PoC)

The screenshot(s) below demonstrate(s) the vulnerability:

{{screenshot}}
11 changes: 11 additions & 0 deletions
submissions/description/developer_biases/recommendations.md
# Recommendation(s)

Establish practices and policies that ensure responsible data collection and training. This can include:

- Conducting a comprehensive review of the training data to find and remediate biases, including re-sampling underrepresented groups and adjusting the model parameters to promote fairness.
- Developing, monitoring, and evaluating business processes that incorporate ethical frameworks, best practices, and fairness concerns.
- Clearly defining the desired outcomes of the AI model, then framing the key variables to capture.
- Ensuring that the data collected and used to train the AI model reflects the environment in which it will be deployed and contains diverse and representative data.
- Designing and developing algorithms that are sensitive to fairness considerations, and auditing these regularly.
- Practicing data collection principles that do not disadvantage specific groups.
- Documenting the development of the AI model, including all datasets, variables identified, and decisions made throughout the development cycle.
# Developer Biases

## Overview of the Vulnerability

Developer bias occurs when AI model developers' perspectives, assumptions, and decisions influence the behavior and design of the model. These biases stem from developers' backgrounds, experiences, and subconscious prejudices. Outputs from AI models that have a developer bias can result in skewed or otherwise unfair outcomes.

## Business Impact

Developer bias in this AI model can result in unintended discrimination and unfairness, which can lead to reputational damage and a loss of customer trust in the output of the model.

## Steps to Reproduce

1. Provide the AI model with data containing subtle, implicit biases.
1. Observe the model's decisions and identify instances where it unintentionally favors certain groups or viewpoints.

## Proof of Concept (PoC)

The screenshot(s) below demonstrate(s) the vulnerability:

{{screenshot}}
3 changes: 3 additions & 0 deletions
submissions/description/misinterpretation_biases/context_ignorance/guidance.md
# Guidance

Provide a step-by-step walkthrough, with screenshots, of how you exploited the bias. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the bias, how you identified it, and what actions you were able to perform as a result.