ESANN 2021, DEADLINE EXTENSION: CFP Interpretable Models in ML and Explainable AI_ SPECIAL SESSION

alfredo vellido

May 4, 2021, 5:46:49 AM
to Machine Learning News
*** apologies for cross-posting ***
DEADLINE EXTENSION: 17/05
ESANN 2021 
The 29th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. 
Bruges, Belgium: 6-8 October 2021. https://www.esann.org  

CFP SPECIAL SESSION: Interpretable Models in Machine Learning and Explainable Artificial Intelligence
===================
Machine learning is currently dominated by neural networks and, in particular, by their deep variants. These models frequently achieve impressive results; however, deep networks usually act as black boxes, as do many other machine learning models. Moreover, because powerful off-the-shelf tools handle the learning process, which is often a gradient descent approach, that process remains hidden from both the developer and the user of the model. As a result, the network can mainly be assessed only through performance evaluations. This is problematic for many applications, for example in medicine, engineering, and economics/finance, which require a transparent decision or prediction process.

Recently there has been considerable effort to develop interpretable models in machine learning and approaches to explain the decision/prediction processes to the user.

The aim of this special session is to make these new approaches and models more visible to the community. We invite papers highlighting different aspects of interpretable models and of explaining decision support processes and inference models involving artificial intelligence. The session covers a broad range of this topic, from theoretical considerations and new machine learning models to machine learning applications requiring or benefiting from interpretability and explainability.

Topics include, but are not limited to:

Machine learning models with inherent interpretability
Methods to explain existing models
Model verification
Visualization and visual inspection of the operation of machine learning models
Confidence and trustworthiness in AI
Prediction confidence and quantification of uncertainty
Trade-off between interpretability and performance
Model transparency in safety-critical applications
We welcome both new theoretical developments and practical applications.
===========

ORGANIZERS:
Sascha Saralajew (Bosch Center for Artificial Intelligence, Germany), 
Alfredo Vellido (Universitat Politècnica de Catalunya - UPC BarcelonaTech, Spain), 
Thomas Villmann (University of Applied Sciences Mittweida, Saxony Institute for Computational Intelligence and Machine Learning, Germany), 
Paulo Lisboa (Liverpool John Moores University, United Kingdom)

DATES:
Paper submission (extended): 17/05/21
Decisions: 20/07/21

SUBMISSION: