[Priberam ML Seminars] Priberam Machine Learning Lunch Seminars (T11) - 8 - "The Explanation Game: Towards Prediction Explainability through Sparse Communication", Marcos Treviso (DeepSPIN/IT)


Rúben Cardoso

Jun 16, 2020, 7:16:24 AM
to priberam_...@googlegroups.com, si...@omni.isr.ist.utl.pt, isr-...@isr.tecnico.ulisboa.pt
Hello all,

Hope you are all safe and healthy. The Priberam Machine Learning Seminars will continue to take place remotely via Zoom on Tuesdays at 1 p.m.

Next Tuesday, June 23rd, Marcos Treviso, a DeepSPIN/IT Ph.D. student, will present his work "The Explanation Game: Towards Prediction Explainability through Sparse Communication" at 13:00 (zoom link: https://zoom.us/j/88158646177 )

You can register for this event and keep track of future seminars below.
Food will not be provided, but feel free to eat during the talk :) Please note that the seminar is limited to 100 people and admission will work on a first-come, first-served basis, so please try to be on time if you wish to attend.

Best regards,
Rúben Cardoso

Priberam Labs

Priberam is hiring!
If you are interested in working with us please consult the available positions at priberam.com/careers. 

PRIBERAM SEMINARS   --  Zoom 88158646177

Priberam Machine Learning Lunch Seminar
Speaker:  Marcos Treviso (DeepSPIN/IT)
Venue: https://zoom.us/j/88158646177
Date: Tuesday, June 23rd, 2020
Time: 13:00 
The Explanation Game: Towards Prediction Explainability through Sparse Communication

Explainability is a topic of growing importance in NLP. In this work, we provide a unified perspective of explainability as a communication problem between an explainer and a layperson about a classifier’s decision. We use this framework to compare several prior approaches for extracting explanations, including gradient methods, representation erasure, and attention mechanisms, in terms of their communication success. In addition, we reinterpret these methods in the light of classical feature selection, and we use this as inspiration to propose new embedded methods for explainability, through the use of selective, sparse attention. Experiments in text classification and natural language inference, using different configurations of explainers and laypeople (including both machines and humans), reveal an advantage of attention-based explainers over gradient and erasure methods. Human experiments show promising results on text classification with post-hoc explainers trained to optimize communication success.
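To make the framing concrete, here is a toy sketch (not the paper's implementation) of the communication setup the abstract describes: an explainer passes a sparse message of top-attended tokens to a layperson, and communication success is measured as agreement between the layperson's prediction and the classifier's label. The attention weights, word lists, and examples below are all made up for illustration.

```python
def explain_topk(tokens, attention, k=2):
    """Explainer: keep the k tokens with the highest attention weight."""
    ranked = sorted(zip(tokens, attention), key=lambda pair: -pair[1])
    return [tok for tok, _ in ranked[:k]]

def layperson_predict(message, positive_words):
    """Layperson: a trivial bag-of-words rule over the sparse message."""
    return "pos" if any(w in positive_words for w in message) else "neg"

def communication_success(examples, positive_words, k=2):
    """Fraction of examples where the layperson recovers the classifier's label."""
    hits = 0
    for tokens, attention, clf_label in examples:
        message = explain_topk(tokens, attention, k)
        hits += layperson_predict(message, positive_words) == clf_label
    return hits / len(examples)

# Two hypothetical sentiment examples: (tokens, attention weights, classifier label).
examples = [
    (["the", "movie", "was", "great"], [0.05, 0.10, 0.05, 0.80], "pos"),
    (["plot", "felt", "dull"], [0.20, 0.10, 0.70], "neg"),
]
print(communication_success(examples, positive_words={"great", "fun"}))  # → 1.0
```

In the paper both explainer and layperson are learned models (and the attention can be made sparse, e.g. via sparsemax); this sketch only illustrates the success metric being optimized.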

Short Bio:
Marcos is a Ph.D. student in the DeepSPIN project, supervised by André Martins. His main interests include semi-parametric models and the explainability of neural networks. Previously, he obtained an M.Sc. degree in Computer Science and Computational Mathematics at the University of São Paulo (USP), where he worked on NLP and machine learning for sentence segmentation and disfluency detection. Marcos was also an AI research intern at Unbabel in 2018, where he contributed to the OpenKiwi project.



You received this message because you are subscribed to the Google Groups "Priberam Machine Learning Seminars" group.
To unsubscribe from this group and stop receiving emails from it, send an email to priberam_MLsemi...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/priberam_MLseminars/B0CC700EA38F44459E8F2F2EE3C23B0B0AE14E01%40EXCH2K10.interno.priberam.pt.
