Responsible AI Toolbox is a suite of user interfaces and libraries for model and data exploration and assessment, enabling a better understanding of AI systems. These tools empower developers and stakeholders of AI systems to develop and monitor AI more responsibly and to take better data-driven actions.
Code for the definition and testing of three new fairness-aware algorithms: Fair Decision Tree, Fair Genetic Pruning, and Fair LightGBM (FDT, FGP, FLGBM), completed for my Master's thesis.
A novel online platform for mitigating AI bias, integrating software engineering, cybersecurity, and machine learning techniques with ethical principles.
Repository of the paper "Toward a Responsible Fairness Analysis: From Binary to Multiclass and Multigroup Assessment in Graph Neural Network-Based User Modeling Tasks"
Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central gateway to assessments created in the open source community.
Service to examine data processing pipelines (e.g., machine learning or deep learning pipelines) for uncertainty consistency (calibration), fairness, and other safety-relevant aspects.
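One common calibration check such a service might run is expected calibration error (ECE). The sketch below is a minimal pure-Python illustration, assuming binary predictions; the probabilities and labels are toy data, not output from any real pipeline.

```python
def expected_calibration_error(probs, labels, n_bins=10):
    """Mean |accuracy - confidence| per probability bin, weighted by bin size."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        idx = min(int(p * n_bins), n_bins - 1)  # assign prediction to a bin
        bins[idx].append((p, y))
    ece, n = 0.0, len(probs)
    for bucket in bins:
        if not bucket:
            continue
        conf = sum(p for p, _ in bucket) / len(bucket)  # mean confidence in bin
        acc = sum(y for _, y in bucket) / len(bucket)   # empirical accuracy in bin
        ece += (len(bucket) / n) * abs(acc - conf)
    return ece

# Toy example: overconfident positives and an underconfident miss inflate ECE.
probs = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
labels = [1, 1, 0, 0, 0, 1]
print(round(expected_calibration_error(probs, labels), 3))
```

A well-calibrated model's predicted confidences match its empirical accuracies, driving ECE toward zero; large values flag the kind of safety-relevant inconsistency the service targets.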
This hosts the code and appendix of the SIGIR 2024 full paper "Can We Trust Recommender System Fairness Evaluation: The Role of Fairness and Relevance"
Binary classification with EfficientNetV2-B0 on the Cat vs Dog dataset, using SHAP (explainable AI) for model interpretation and grid search for hyperparameter tuning.
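The grid-search step above exhaustively evaluates every hyperparameter combination and keeps the best. A minimal pure-Python sketch of that loop follows; the hyperparameter names (`lr`, `dropout`) and the scoring function are illustrative stand-ins, not the repository's actual training code, where `evaluate` would train and validate the EfficientNetV2 model.

```python
from itertools import product

def evaluate(lr, dropout):
    """Toy validation score; a real run would train and evaluate the model."""
    return -((lr - 0.01) ** 2) - ((dropout - 0.2) ** 2)

# Candidate values for each hyperparameter; grid search tries every combination.
grid = {"lr": [0.001, 0.01, 0.1], "dropout": [0.0, 0.2, 0.5]}

best_score, best_params = float("-inf"), None
for lr, dropout in product(grid["lr"], grid["dropout"]):
    score = evaluate(lr, dropout)
    if score > best_score:
        best_score, best_params = score, {"lr": lr, "dropout": dropout}

print(best_params)  # combination that maximized the toy score
```

Exhaustive search is simple and reproducible, but its cost grows multiplicatively with each added hyperparameter, which is why it is usually reserved for small grids like this one.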