Speaker Details

Nicole Prentzas
Red Hat
Nicole is a senior software engineer at Red Hat, contributing mainly to the Drools Business Rule Management System. She is also a PhD candidate in the area of Explainable AI (XAI), with a focus on explainable machine learning via argumentation. As a researcher, she developed an argumentation-based framework for explainable machine learning, participates in research projects applying XAI in healthcare, and has published a number of research papers at notable academic conferences.
What AI can do nowadays is simply mind-blowing. I must admit that I cannot stop being surprised, sometimes literally jumping from my seat and thinking: "I didn't imagine that AI could ALSO do this!" What is a bit misleading here is that what we tend to identify as Artificial Intelligence is actually Machine Learning, which is only a subset of the AI technologies available: ML is a fraction of the whole AI story, while Symbolic Artificial Intelligence enables experts to encode their knowledge of a specific domain through a set of human-readable and transparent rules.
In fact, there are many situations where being surprised is the last thing you want. You don't want to jump from your seat when your bank refuses your mortgage without any human-understandable reason, but only because AI said no. And the bank itself may want to grant mortgages only to applicants who are considered viable under its strict and well-defined business rules.
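To make the mortgage scenario concrete, a transparent business rule might look like the following minimal sketch. It is plain Java rather than Drools' actual DRL syntax, and the eligibility thresholds are purely hypothetical; the point is that every refusal carries an explicit, human-readable reason, unlike an opaque ML prediction.

```java
// Minimal sketch of a transparent eligibility rule in plain Java.
// In Drools this would be written as a DRL rule; thresholds are hypothetical.
public class MortgageRuleSketch {

    record Applicant(double annualIncome, double requestedAmount, int creditScore) {}

    // A human-readable rule: approve only when every condition holds,
    // and attach an explicit reason to every refusal.
    static String evaluate(Applicant a) {
        if (a.creditScore() < 650) {
            return "REFUSED: credit score below 650";
        }
        if (a.requestedAmount() > a.annualIncome() * 5) {
            return "REFUSED: requested amount exceeds 5x annual income";
        }
        return "APPROVED";
    }

    public static void main(String[] args) {
        System.out.println(evaluate(new Applicant(60_000, 250_000, 700)));
        System.out.println(evaluate(new Applicant(60_000, 400_000, 700)));
    }
}
```

Each outcome can be traced back to a specific, auditable condition, which is exactly the kind of explainability that pure ML models struggle to provide.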
Given these premises, why not mix two very different but complementary AI branches, Machine Learning and Symbolic Reasoning? During this talk we will demonstrate with practical examples why this can be a winning architectural choice in many common situations, and how Quarkus, through its langchain4j and drools extensions, makes developing applications that integrate these technologies straightforward.