[Priberam ML Seminars] Priberam Machine Learning Lunch Seminars (T12) - 3 - "Explainability for Sequential Decision-Making", João Bento de Sousa (IST / Feedzai)

Rúben Cardoso

Mar 31, 2021, 4:22:43 AM
to priberam_...@googlegroups.com, si...@omni.isr.ist.utl.pt, isr-...@isr.tecnico.ulisboa.pt
Hello all,

Hope you are all safe and healthy. The Priberam Machine Learning Seminars will continue to take place remotely via Zoom on Tuesdays at 1 p.m.

Next Tuesday, April 6th, João Bento de Sousa, a Research Data Scientist at Feedzai, will present his work on "Explainability for Sequential Decision-Making" at 13:00 (Zoom link: https://us02web.zoom.us/j/83634614792?pwd=V1ROM1NlU0tkMjhZUTdJT0JvM1Rrdz09 ).

You can register for this event and keep track of future seminars below.
Please note that the seminar is limited to 100 attendees on a first-come, first-served basis, so please be on time if you wish to attend.

Best regards,
Rúben Cardoso

Priberam Labs
http://labs.priberam.com/

Priberam is hiring!
If you are interested in working with us, please consult the available positions at priberam.com/careers.

PRIBERAM SEMINARS -- Zoom 836 3461 4792
__________________________________________________

Priberam Machine Learning Lunch Seminar
Speaker:  João Bento de Sousa (IST / Feedzai)
Venue: https://us02web.zoom.us/j/83634614792?pwd=V1ROM1NlU0tkMjhZUTdJT0JvM1Rrdz09
Date: Tuesday, April 6th, 2021
Time: 13:00 
Title:
Explainability for Sequential Decision-Making
Abstract:
Machine learning has been used to aid decision-making in several domains, from healthcare to finance. Understanding the decision process of ML models is paramount in high-stakes decisions that impact people's lives; otherwise, loss of control and lack of trust may arise. Often, these decisions have a sequential nature. For instance, the transaction history of a credit card must be considered when predicting the fraud risk of the most recent transaction. Although RNNs are state-of-the-art models for many sequential decision-making tasks, they are perceived as black boxes, creating a tension between accuracy and interpretability. While there has been considerable research effort towards developing explanation methods for ML, recurrent models have received relatively little attention. Recently, Lundberg and Lee unified several methods under a single family of additive feature attribution explainers. From this family, KernelSHAP has seen wide adoption throughout the literature; however, this explainer is unfit to explain models in a sequential setting, as it only accounts for the current input, not the whole sequence.
In this work, we present TimeSHAP, a model-agnostic recurrent explainer that builds upon KernelSHAP and extends it to sequences. TimeSHAP explains recurrent models by computing feature-, timestep-, and cell-level attributions, producing explanations along both the feature and time axes. As sequences may be arbitrarily long, we further propose two pruning methods that are shown to dramatically decrease TimeSHAP's computational cost and increase its reliability. We validate TimeSHAP by using it to explain the predictions of two RNN models on two real-world fraud detection tasks, obtaining relevant insights into these models and their predictions.
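
As a rough sketch of the timestep-level idea (not the authors' TimeSHAP implementation; model_fn, sequence, and background_event below are hypothetical placeholders), one can treat each timestep of a sequence as a single "feature" for the public shap library's KernelExplainer, replacing absent timesteps with a background event:

import numpy as np
import shap  # pip install shap

def timestep_attributions(model_fn, sequence, background_event, nsamples=500):
    # model_fn: maps a batch of sequences (n, T, d) to scores (n,)
    # sequence: the sequence to explain, shape (T, d)
    # background_event: a "neutral" event, shape (d,), e.g. feature means
    T, d = sequence.shape

    def coalition_fn(z):
        # z: (n, T) binary matrix; z[i, t] == 1 keeps timestep t,
        # while 0 replaces it with the background event.
        z = np.asarray(z)
        batch = np.repeat(sequence[None, :, :].astype(float), len(z), axis=0)
        for i in range(len(z)):
            batch[i, z[i] == 0, :] = background_event
        return model_fn(batch)

    # Background: all timesteps absent; explained point: all present.
    explainer = shap.KernelExplainer(coalition_fn, np.zeros((1, T)))
    return explainer.shap_values(np.ones((1, T)), nsamples=nsamples)

Feature-level attributions follow the same pattern with coalitions over features instead of timesteps; the pruning methods mentioned above reduce the number of coalitions that must be evaluated for long sequences.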
Short Bio:
João Bento is a Research Data Scientist at Feedzai working on explainability for machine learning models. He previously obtained his M.Sc. in Information Systems and Computer Engineering from Instituto Superior Técnico, Lisbon. His thesis, advised by Pedro Saleiro and Mário Figueiredo, was a collaboration between Feedzai and the university, focusing on the explainability of recurrent models. His research interests include deep learning explainability and transparency.

Eventbrite: