This small project demonstrates how SHAP (SHapley Additive exPlanations) can be used to understand the behavior of a non-interpretable model (LightGBM) trained on genomic data.
- Book “Interpretable Machine Learning” by Christoph Molnar
- Article “A new perspective on Shapley values, part I: Intro to Shapley and SHAP”
- Article “Kernel SHAP, un paso adelante” (“Kernel SHAP, a step forward”)
- Paper “A Value for n-Person Games” by Lloyd Shapley
- Paper “A Unified Approach to Interpreting Model Predictions” (SHAP) by Scott Lundberg and Su-In Lee
- Paper “Consistent Individualized Feature Attribution for Tree Ensembles” (TreeSHAP) by Scott Lundberg and Su-In Lee
- Video “GTO-7-03: The Shapley Value”
- Paper “Interpreting machine learning models to investigate circadian regulation and facilitate exploration of clock function”
- Paper “Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI”
- Book “Molecular Biology of the Cell”
- Article “Understanding Shapley value explanation algorithms for trees”
- “Right to explanation” (legal concept)
- Equal Credit Opportunity Act (U.S.)
- Paper “Explainable machine-learning predictions for the prevention of hypoxaemia during surgery”
- Course on Explainable AI by Su-In Lee (developer of SHAP)
- Article “Problems with Shapley-value-based explanations as feature importance measures”
- Article “Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods”
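As a minimal illustration of the Shapley values these references build on, the sketch below computes them exactly by averaging marginal contributions over all feature orderings, where "removed" features are replaced by a baseline value. The toy linear model, its `WEIGHTS`, and the `baseline` are hypothetical and not part of this project; a linear model is used because its exact Shapley values are known (w_i · (x_i − baseline_i)), which makes the result easy to check.

```python
from itertools import permutations

# Hypothetical toy model: a linear function with fixed weights.
# For a linear model, the exact Shapley value of feature i is
# w_i * (x_i - baseline_i), so the output below is easy to verify.
WEIGHTS = [2.0, 3.0, 1.0]

def model(x):
    return sum(w * v for w, v in zip(WEIGHTS, x))

def value(coalition, x, baseline):
    # Features outside the coalition are replaced by their baseline value.
    z = [x[i] if i in coalition else baseline[i] for i in range(len(x))]
    return model(z)

def shapley_values(x, baseline):
    # Exact Shapley values: average each feature's marginal contribution
    # over every ordering in which features can join the coalition.
    n = len(x)
    phi = [0.0] * n
    orderings = list(permutations(range(n)))
    for order in orderings:
        coalition = set()
        for i in order:
            before = value(coalition, x, baseline)
            coalition.add(i)
            after = value(coalition, x, baseline)
            phi[i] += after - before
    return [p / len(orderings) for p in phi]

x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(x, baseline)
# Efficiency property: the attributions sum to model(x) - model(baseline).
```

This brute-force approach is exponential in the number of features; for tree ensembles such as LightGBM, the TreeSHAP algorithm (implemented in the `shap` library's `TreeExplainer`) computes the same quantities in polynomial time.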