Dear all,
We are pleased to announce that we will hold the sixth session of this year's Priberam Machine Learning Seminars next Tuesday, May 16. Our guest speaker will be Ricardo Moreira, who worked at Feedzai as a Research Data Scientist and is a co-founder and CTO of GoWizi. He will present Causal DiConStruct, a new explanation method for artificial intelligence (AI) models that is both concept-based and causal.
The event will take place at 1 PM at Instituto Superior Técnico (room PA2), and we will provide lunch bags for attendees. To learn more about the event and to register (registration is mandatory if you plan to attend), please follow the link below:
We hope to see you all there!
Kind regards,
Diogo Pernes
Priberam is hiring!
If you are interested in working with us, please see the open positions at priberam.com/careers.
PRIBERAM SEMINARS
__________________________________________________
Priberam Machine Learning Lunch Seminar
Speaker: Ricardo Moreira (GoWizi)
Venue: Instituto Superior Técnico (room PA2)
Date: Tuesday, May 16, 2023
Time: 1 PM
Title: Causal DiConStruct
Abstract:
Model interpretability plays a central role in human-AI decision-making systems. Ideally, explanations should be expressed through semantic concepts and their causal relations, in a way that is interpretable to human experts. Additionally, explanation methods should be efficient and should not compromise the performance of the prediction task. Despite the rapid advances in AI explainability in recent years, these problems remain largely unaddressed in practice. Furthermore, mainstream methods for local concept explainability do not produce causal explanations and incur a tradeoff between explainability and prediction performance. To fill this gap in AI explainability, we present Causal DiConStruct, an explanation method that is both concept-based and causal, with the goal of creating more interpretable local explanations in the form of structural causal models and concept attributions. Our explainer learns exogenous variables and structural assignments in two separate components, and works for any black-box machine learning model by approximating its predictions while producing the respective explanations. Because Causal DiConStruct is a distillation model, it generates explanations efficiently without impacting the black-box prediction task. We validate our method on an image dataset and a tabular dataset, and show that Causal DiConStruct approximates the black-box models with higher fidelity while obtaining more diverse concept attributions than other concept explainability baselines.
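For those curious ahead of the talk, below is a minimal, purely illustrative PyTorch sketch of the distillation idea behind concept-based surrogate explainers: a small explainer is trained to match a (hypothetical) black-box model's scores while exposing intermediate concept activations. All names, architectures, and hyperparameters here are assumptions for illustration only; the actual Causal DiConStruct method additionally learns a structural causal model over the concepts, which this sketch omits.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical black-box classifier standing in for any model to be explained.
blackbox = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()).eval()

# Distillation-style surrogate (illustrative only): one component infers
# latent variables from the input; a second maps them to concept activations
# and to an approximation of the black-box score.
class ConceptDistiller(nn.Module):
    def __init__(self, d_in=8, n_concepts=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, 16), nn.ReLU())
        self.concepts = nn.Linear(16, n_concepts)
        self.head = nn.Linear(n_concepts, 1)

    def forward(self, x):
        c = torch.sigmoid(self.concepts(self.encoder(x)))  # concept activations in [0, 1]
        y_hat = torch.sigmoid(self.head(c))                # surrogate prediction
        return c, y_hat

explainer = ConceptDistiller()
opt = torch.optim.Adam(explainer.parameters(), lr=1e-2)

x = torch.randn(256, 8)
with torch.no_grad():
    target = blackbox(x)  # distillation target: the black-box scores themselves

for _ in range(200):
    c, y_hat = explainer(x)
    loss = nn.functional.binary_cross_entropy(y_hat, target)  # fidelity to the black box
    opt.zero_grad()
    loss.backward()
    opt.step()

c, y_hat = explainer(x[:1])
print("concept activations:", c.detach().squeeze().tolist(), "surrogate score:", y_hat.item())

Because the surrogate is trained once and queried cheaply, explanations come at no cost to the black-box prediction task, which is the efficiency property the abstract highlights.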
Short Bio:
Ricardo Moreira studied mechanical engineering at Instituto Superior Técnico, Lisbon, where he also spent a year doing research on image processing for medical applications. He then joined Feedzai's customer success team, where he worked for two years developing machine learning models for fraud detection. In the last two years, he started a PhD in causality for machine learning and joined Feedzai's research team, where he has been developing novel methods for feature monitoring and model explainability.