
The Bias Glossary

The Bias Glossary is an adaptable framework that accompanies each dataset, providing a standardized, bias-centered, collaborative, non-exhaustive, web-based list of known biases in the data, whether they relate to social determinants of health, arise from medical devices, or stem from other characteristics. The glossary is envisaged as a living document, evolving continuously as researchers, clinicians, and other stakeholders identify new insights and types of bias. This dynamic, collaborative nature keeps the Bias Glossary an up-to-date and effective tool for highlighting and addressing biases in healthcare datasets.

The standardized, bias-centered approach of the Bias Glossary aims to create a uniform language and methodology for categorizing and documenting biases. This standardization facilitates clearer communication among researchers, improves the reproducibility of bias assessments, and enables more systematic approaches to bias mitigation. By focusing specifically on biases, from those embedded in data collection processes to those arising from algorithms built on the data, the glossary directs attention to the multifaceted impact of these biases on AI performance and healthcare outcomes.
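The repository does not prescribe a particular entry format here, but as a rough illustration of what a standardized, structured glossary entry could look like, a minimal Python sketch is shown below. The field names and the example entry are hypothetical and only meant to suggest the kind of information a contribution might carry, not the project's actual schema.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class BiasEntry:
    """One documented bias accompanying a dataset (hypothetical schema)."""
    name: str                # short label for the bias
    category: str            # e.g. "medical device", "social determinants", "sampling"
    description: str         # how the bias arises and whom it may affect
    affected_variables: List[str] = field(default_factory=list)    # dataset fields involved
    evidence: List[str] = field(default_factory=list)              # citations or links
    suggested_mitigations: List[str] = field(default_factory=list)


# Illustrative entry a contributor might propose:
entry = BiasEntry(
    name="Pulse oximetry bias across skin tones",
    category="medical device",
    description=(
        "SpO2 readings may overestimate oxygen saturation for some patient groups, "
        "which can propagate into labels and downstream models."
    ),
    affected_variables=["spo2"],
    evidence=["Sjoding et al., NEJM 2020"],
    suggested_mitigations=[
        "Report subgroup calibration",
        "Consider paired arterial SaO2 measurements where available",
    ],
)
```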

Additionally, the collaborative, non-exhaustive, web-based nature of the Bias Glossary encourages participation from a broad spectrum of stakeholders, including dataset curators, AI developers, healthcare professionals, patient advocates, and policymakers. The web-based platform should be user-friendly and accessible, so that contributors can easily consult the glossary and propose additions. This collaborative effort not only enriches the glossary with diverse perspectives and expertise but also underscores the principle that understanding and addressing bias is a collective responsibility. The non-exhaustive aspect acknowledges that our understanding of biases is ever-evolving, inviting continuous updates and contributions to refine and expand the glossary's contents.

Contributing

We strongly encourage you to contribute to the Bias Glossary by sharing any biases you have encountered while working with the MIMIC dataset. Sharing your code and findings not only makes studies more reproducible but also fosters collaborative research. Your contributions enrich the Glossary, making it a more comprehensive resource for addressing biases effectively. To contribute, please:

Coding style

Please refer to the style guide for guidelines on formatting your code for the repository.
