Can you trust your AI?

Artificial Intelligence is everywhere, from your mobile phone to most of the websites you visit. It is becoming more and more important that such systems can be trusted by all stakeholders involved, especially when AI "predictions" have a tangible impact on humans, e.g. in domains like healthcare or finance.

Explainable AI (XAI) is a research field that aims to provide insights into how AI and predictive models generate their predictions, using explanations to make such models less opaque and more trustworthy.

The TrustyAI initiative at Red Hat embraces explainability to foster trust in decisions, together with runtime tracing of operational metrics and accountability.

Daniele Zonca

Daniele Zonca works at Red Hat as the architect for Red Hat Decision Manager and the TrustyAI initiative. He contributes to the open source projects Drools and Kogito, focusing in particular on predictive model runtime support (PMML), ML explainability, runtime tracing, and decision monitoring.

JOIN US ON THE 18th OF NOVEMBER 2021