HCML Reading Group Session on 25.03.2025

Piera Riccio

Mar 23, 2025, 9:27:00 PM
to ELLIS-Human-Centric ML
Hello everyone,

Happy to announce our next session, happening on the 25th of March at 3pm CET, at the usual meeting link! :D

Title: "iLLuMinaTE: An LLM-XAI Framework Leveraging Social Science Explanation Theories Towards Actionable Student Performance Feedback"
Abstract: "Recent advances in eXplainable AI (XAI) for education have highlighted a critical challenge: ensuring that explanations for state-of-the-art models are understandable for non-technical users such as educators and students. In response, we introduce iLLuMinaTE, a zero-shot, chain-of-prompts LLM-XAI pipeline inspired by Miller (2019)’s cognitive model of explanation. iLLuMinaTE is designed to deliver theory-driven, actionable feedback to students in online courses. iLLuMinaTE navigates three main stages — causal connection, explanation selection, and explanation presentation — with variations drawing from eight social science theories (e.g. Abnormal Conditions, Pearl’s Model of Explanation, Necessity and Robustness Selection, Contrastive Explanation). We extensively evaluate 21,915 natural language explanations of iLLuMinaTE extracted from three LLMs (GPT-4o, Gemma2-9B, Llama3-70B), with three different underlying XAI methods (LIME, Counterfactuals, MC-LIME), across students from three diverse online courses. Our evaluation involves analyses of explanation alignment to the social science theory, understandability of the explanation, and a real-world user preference study with 114 university students containing a novel actionability simulation. We find that students prefer iLLuMinaTE explanations over traditional explainers 89.52% of the time. Our work provides a robust, ready-to-use framework for effectively communicating hybrid XAI-driven insights in education, with significant generalization potential for other human-centric fields."

Bio: "Vinitra Swamy is a 5th-year PhD student at EPFL specializing in human-centric explainable AI for education, co-advised by Professors Tanja Käser and Martin Jaggi. Her research focuses on evaluating explainability methods, designing intrinsically interpretable neural networks, and leveraging large language models to communicate explanations effectively to students and teachers. Prior to EPFL, she earned both Bachelor’s and Master’s degrees in Computer Science from UC Berkeley at age 20, holding a record as the youngest-ever M.S. graduate in the EECS department, and spent two years at Microsoft AI as a lead engineer for the Open Neural Network eXchange (ONNX) initiative. Vinitra loves teaching data science and has served as a data science lecturer at UC Berkeley and the University of Washington. Her research has been featured widely and published at top venues including NeurIPS, AAAI, and ICLR, earning accolades such as the G-Research PhD Prize, EPFL EDIC Fellowship, EPFL IC Distinguished Service Awards, UC Berkeley EECS Award of Excellence for Teaching and Leadership, and paper awards at LAK, AIED LBR, and ECTEL. She has recently been named a 2024 “Rising Star in Data Science” by Stanford, UCSD, and UChicago."

See you there!

Best,

Piera