**** Apologies for cross-posting ****
IWINAC 2026
Fuerteventura, Spain, 26-29 May 2026
http://icinac.org/iwinac.org/iwinac2026/index.html

SPECIAL SESSION
"Explainable, Robust and Trustworthy Machine Learning and its Applications"
As the implementation and use of machine learning (ML) systems continues to gain significance in contemporary society, the focus is swiftly shifting from purely optimizing model performance towards building models that are both understandable and interpretable. This new emphasis stems from a growing need for applications that not only solve complex problems with high accuracy, but also provide clear, transparent insights into their decision-making processes for a range of end-users and stakeholders.
The aim of this special session is to gather researchers working on Explainable AI (xAI) in ML, with a strong emphasis on the practical applications of this framework. Its primary goal is to present innovative methods that make ML models more interpretable, transparent, and trustworthy while preserving their performance; beyond theory, we invite contributions that showcase tangible real-world implementations across different application scenarios. By centering on application-driven insights, this session seeks to bridge the gap between basic research and operational solutions, ultimately steering the ML community toward more responsible and societally beneficial AI technologies.
We are seeking contributions that address practical applications and present innovative approaches and technological xAI advancements. Topics of interest include, but are not limited to:
Explainable methods in medicine and healthcare
Explainable methods in computational neuroscience
Business and public governance applications of xAI
Explainable biomedical knowledge discovery with ML
xAI in agriculture, forestry and environmental applications
xAI and human-computer interaction
xAI methods for linguistics & machine translation
Explainability in decision-support systems
Best practices for presenting model explanations to non-technical stakeholders
Auto-encoders & explainability of latent spaces
Causal inference & explanations
Post-hoc methods for explainability
Reinforcement learning for enhancing xAI systems
xAI for Deep Learning methods
ORGANIZERS
Alfredo Vellido and Caroline König, Universitat Politècnica de Catalunya (UPC), Barcelona, Spain
DEADLINES
Full paper submission (February 15th)
Notification (April 1st)