Explanations come with reasoning: Symbolic explanation module for black-box classifiers

Speakers:

- Prof. dr. Gonzalo R. Nápoles
- Fabian R. Hoitsma
- Andreas J. Knoben

Explaining artificial intelligence is crucial to ensure that the models we build are valid, fair, and free of bias. Furthermore, organizations and governments may request clarifications on how and why certain decisions have been made. To provide such information, models have to be transparent and explainable to some extent. In this talk, we will present a way to derive explanations for a black-box classifier using symbolic reasoning over its inputs and outputs. Logic programming, fuzzy sets, and fuzzy-rough sets are at the heart of our model-agnostic explanation module, which can answer what-if and counterfactual questions. Additionally, we will present an interface for interacting with this explanation module in a natural way, using natural language processing powered by a conversational agent.
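
To give a flavour of the kind of query such a module answers, here is a minimal sketch of a what-if question posed to a black-box classifier using only its inputs and outputs. The scikit-learn random forest standing in for the black box and the helper function `what_if` are illustrative assumptions, not the symbolic, fuzzy, or fuzzy-rough machinery the speakers will present.

```python
# Minimal what-if sketch (illustrative only, not the speakers' module):
# change one input feature and observe whether the black-box prediction changes.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)  # stand-in black box

def what_if(model, x, feature, new_value):
    """Return the model's prediction before and after changing one feature."""
    x_mod = np.array(x, dtype=float).copy()
    x_mod[feature] = new_value
    original = model.predict([x])[0]
    modified = model.predict([x_mod])[0]
    return original, modified

before, after = what_if(black_box, X[0], feature=2, new_value=5.0)
print(f"What if feature 2 were 5.0? prediction {before} -> {after}")
```

A counterfactual question works in the opposite direction: instead of fixing a change and reading off the prediction, one searches for the smallest change to the input that flips the prediction to a desired class.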

This talk will take place at Hasselt University, Campus Diepenbeek, in room A5. Alternatively, you can join the seminar online via Google Meet; the link will be shared with you after registration.

05 July 2022
12.00 - 13.30 h CEST
Hasselt University - Room A5

This event has already taken place.

Dr. Gonzalo Nápoles received his PhD degree from Hasselt University, Belgium, in 2017. Currently, he is an Assistant Professor at the Department of Cognitive Science & Artificial Intelligence, Tilburg University, the Netherlands. Gonzalo was a recipient of the Cuban Academy of Sciences Award in 2013, 2014, and 2022, which is regarded as the highest scientific award in Cuba. He is the creator of the FCM Expert software tool for neural cognitive modeling. His research interests include fuzzy cognitive maps, recurrent neural networks, reasoning under uncertainty, fairness in machine learning, and symbolic reasoning.