Artificial Intelligence now matches or exceeds human performance in a growing number of applications. It is increasingly playing a key role in our day-to-day lives, making its way into healthcare, education, criminal justice, autonomous transportation and many other fields. This has led to growing concern about potential bias in AI models, and to a demand for transparency and interpretability.
Explainability is a prerequisite for building trust and allowing AI systems to enter these high-stakes domains, which require maximum reliability and safety. Moreover, an interpretable model can deepen the understanding of a topic, surfacing patterns that a human observer might not detect.
As a consequence, AI researchers have focused their attention on explainable AI (XAI), a set of tools and frameworks to help humans understand and interpret predictions made by Machine Learning models.
This presentation introduces the topic by describing the problem and surveying the models and techniques that can address it, along with a brief overview of the European regulations surrounding it.
Finally, an example of XAI applied to electrocardiograms is presented, in which explainability techniques used in a study on sex prediction yielded new insights into electrophysiology.