This repository hosts https://events.fat-forensics.org, which holds details of past and upcoming events such as workshops and (hands-on) tutorials run with the FAT Forensics package.
Introduction to Machine Learning Explainability
An invited lecture given for the Ethics in Computer Science (COMP4920/SENG4920 22T3) course at the University of New South Wales (UNSW), Sydney.
For more details please see the event homepage: https://events.fat-forensics.org/2022_unsw-lecture.
Never Let the Truth Get in the Way of a Good Story: The Importance of Multilevel Human Understanding in Explainable Artificial Intelligence
An interactive presentation given at the University of New South Wales (UNSW), discussing different levels at which explainability techniques and their insights need to be understood (using the example of surrogate explainers).
For more details please see the event homepage: https://events.fat-forensics.org/2022_unsw.
Mind the Gap! Bridging Explainable Artificial Intelligence and Human Understanding
An interactive presentation given at ETH Zürich, discussing how to interpret and understand insights generated with surrogate explainers.
For more details please see the event homepage: https://events.fat-forensics.org/2022_eth.
Mind the Gap! Bridging Explainable Artificial Intelligence and Human Understanding
An interactive presentation given at Università della Svizzera italiana, discussing how to interpret and understand insights generated with surrogate explainers.
For more details please see the event homepage: https://events.fat-forensics.org/2022_usi.
Where Does the Understanding Come From When Explaining Automated Decision-making Systems?
An interactive presentation for the AI and Humanity summer cluster workshop, discussing how machine learning explainers can help humans to understand automated decision-making.
For more details please see the event homepage: https://events.fat-forensics.org/2022_simons-institute.
Transparency and Explainability
The AI/Data Science Professional Lectorial (2022, RMIT University)
An overview of surrogate explainers for the 2022 AI/Data Science Professional RMIT lectorial.
For more details please see the event homepage: https://events.fat-forensics.org/2022_rmit-lectorial.
What and How of Machine Learning Transparency:
Building Bespoke Explainability Tools with Interoperable Algorithmic Components
A summer school session with a hands-on component organised at the 2021 TAILOR Summer School held virtually between the 23rd and the 24th of September 2021.
For more details please see the event homepage: https://events.fat-forensics.org/2021_tailor-summer-school.
Practical Machine Learning Explainability:
Surrogate Explainers and Fairwashing
A summer school session with a hands-on component organised at the 2021 Bristol Interactive AI Summer School (BIAS) held between the 2nd and the 7th of September 2021.
For more details please see the event homepage: https://events.fat-forensics.org/2021_bias.
Do You Trust Your Explainer?
An interactive presentation for Work Package 3 of the TAILOR project.
For more details please see the event homepage: https://events.fat-forensics.org/2021_tailor-wp3.
Making Machine Learning Explanations Truthful and Intelligible
An invited talk for the Machines Programme of the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S).
For more details please see the event homepage: https://events.fat-forensics.org/2021_adms.
Making Machine Learning Explanations Truthful and Intelligible
An invited talk for the special session on Fair and Explainable Models held at the 31st European Conference on Operational Research (2021).
For more details please see the event homepage: https://events.fat-forensics.org/2021_euro-explainability.
Did You Get That?
Reviewing Intelligibility of State-of-the-art Machine Learning Explanations
An interactive demonstration for the AI in Action session of The Alan Turing Institute's AI UK 2021 conference.
For more details please see the event homepage: https://events.fat-forensics.org/2021_turing-ai-uk.
What and How of Machine Learning Transparency:
Building Bespoke Explainability Tools with Interoperable Algorithmic Components
A hands-on tutorial held at the ECML-PKDD 2020 conference in Ghent, Belgium, from the 14th to the 18th of September 2020.
For more details please see the event homepage: https://events.fat-forensics.org/2020_ecml-pkdd.
To reference this tutorial please use:
@article{sokol2022what,
  title={What and How of Machine Learning Transparency: {Building} Bespoke Explainability Tools with Interoperable Algorithmic Components},
  author={Sokol, Kacper and Hepburn, Alexander and Santos-Rodriguez, Raul and Flach, Peter},
  journal={Journal of Open Source Education},
  volume={5},
  number={58},
  pages={175},
  publisher={The Open Journal},
  year={2022},
  doi={10.21105/jose.00175},
  url={https://events.fat-forensics.org/2020_ecml-pkdd}
}