CFP: IEEE Special Issue on Interpretable and Explainable AI

ksub...@stevens.edu
Jun 14, 2022, 11:37:21 AM
to Women in Machine Learning

Due to several requests for an extension, the deadline for the IEEE TAI special issue on Interpretable and Explainable AI has been extended to July 1, 2022. The special issue will consider three types of articles: review, short research, and regular research articles. Make sure to select the special issue from the drop-down menu in Manuscript Central. More information about the special issue is available at: https://www.kpsuba.com/activities/ieee-tai-special-issue

Call for Papers

IEEE Transactions on Artificial Intelligence

Special Issue on New Developments in Explainable and Interpretable AI

Motivation and Introduction

Over the years, machine learning (ML) and artificial intelligence (AI) models have steadily grown in complexity, accuracy, and other quality metrics, often at the expense of interpretability of the final results. Simultaneously, researchers and practitioners have begun to realize that more transparency in deep learning and artificial intelligence engines is necessary if these engines are to be adopted in practice. For example, a very good performance metric for a disease predictor is of little use if it is not possible to give an explanation to the end user (a physician, the patient, or even the designer of the tool). Similarly, being able to understand why a model makes mistakes when it does can add invaluable insight and is essential in critical applications.

This kind of transparency can be achieved by designing interpretable AI engines, which inherently offer a window into the reasoning behind the decisions they arrive at, or by designing robust post-hoc methods that can explain the decisions of an AI engine. Thus, two areas of research, called interpretable AI (IAI) and explainable AI (XAI), respectively, have emerged with the goal of producing models that are both well performing and understandable. Interpretable AI models obey domain-specific constraints so that they are more understandable to humans; in essence, they are not black-box models. Explainable AI, on the other hand, refers to models and methods that are typically used to explain another, black-box, model.
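
To make the distinction concrete, here is a minimal sketch in Python using scikit-learn; the models and the toy dataset are illustrative placeholders chosen for this announcement, not methods endorsed by the special issue:

# Illustrative sketch of the IAI/XAI distinction (assumed setup, not
# part of the CFP): an inherently interpretable model vs. a post-hoc
# surrogate explanation of a black-box model.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso
from sklearn.tree import DecisionTreeRegressor

X, y = load_diabetes(return_X_y=True)

# Interpretable AI (IAI): a sparse linear model obeys a domain
# constraint (few nonzero coefficients) and can be read directly.
iai_model = Lasso(alpha=0.1).fit(X, y)

# Explainable AI (XAI): a black-box model is explained post hoc by
# fitting an interpretable surrogate (a shallow tree) to its outputs.
black_box = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
surrogate = DecisionTreeRegressor(max_depth=3).fit(X, black_box.predict(X))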

With the sizable XAI and IAI research community that has now formed, there is a key opportunity to take the field of explainable and interpretable AI to the next level: to overcome the shortcomings of current neural network explanation techniques and to extend the related concepts and methods towards more widely applicable, semantically rich, and actionable XAI. This special issue aims to bring together these new developments in the fascinating field of interpretable and explainable AI.

SCOPE OF THE SPECIAL ISSUE
Original submissions are welcome on topics including, but not limited to:

-   Explainable and interpretable AI for classification and non-classification problems (e.g., regression, segmentation, reinforcement learning)

-   Explainable and interpretable state-of-the-art neural network architectures (e.g., transformers) and non-neural network models (e.g., trees, kernel-methods, clustering algorithms)

-   Explainable/interpretable AI for fairness, privacy, and trustworthy models

-   Novel criteria to evaluate explanation and interpretability

-   Theoretical foundations of explainable/interpretable AI

-   Causal mechanisms for explainable/interpretable AI

-   Explainable and interpretable AI for human-computer interaction

-   Explainable and interpretable AI for applications (e.g., medical diagnosis, disaster prediction, credit underwriting, remote sensing, big data)

-   Counterfactual explanations

-   Human-in-the-loop explanations

SUBMISSION INSTRUCTIONS
Three kinds of articles can be submitted to this special issue: (1) regular research articles, (2) review articles, and (3) letters.

The special issue will follow the standard IEEE TAI submission instructions, including an impact statement. Additionally, the manuscript should contain an “Interpretability/Explainability Evaluation” section that quantifies the interpretability/explainability of the proposed methods. Examples of interpretability/explainability metrics include sparsity and case-based reasoning. If proposing an XAI model, the authors are encouraged to include information on the goals of the explanation: for example, whether the explanation provided by the model is a human-understandable explanation of the black box or an approximation of a complex model.
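
As one illustration of what such a quantification might look like, the short Python sketch below computes two common metrics: the sparsity of a linear model's coefficient vector, and the fidelity (R^2 agreement) of a post-hoc surrogate to the black box it explains. These particular definitions are assumptions for illustration, not metrics mandated by IEEE TAI:

import numpy as np

def sparsity(coefficients, tol=1e-6):
    # Fraction of coefficients that are numerically zero; higher means
    # a sparser, and arguably more interpretable, linear model.
    coefficients = np.asarray(coefficients)
    return float(np.mean(np.abs(coefficients) < tol))

def surrogate_fidelity(black_box_preds, surrogate_preds):
    # R^2 agreement between black-box predictions and the surrogate's
    # approximation of them; 1.0 means the explanation is exact.
    black_box_preds = np.asarray(black_box_preds, dtype=float)
    surrogate_preds = np.asarray(surrogate_preds, dtype=float)
    ss_res = np.sum((black_box_preds - surrogate_preds) ** 2)
    ss_tot = np.sum((black_box_preds - black_box_preds.mean()) ** 2)
    return 1.0 - ss_res / ss_tot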

Note that submissions are handled via Manuscript Central: http://mc.manuscriptcentral.com/tai-ieee

Please select the appropriate special issue when submitting.

IMPORTANT DATES
Submission deadline: July 1, 2022

First round of reviews due: September 15, 2022

Revised manuscripts due: October 15, 2022

Final decision: December 15, 2022



GUEST EDITORS
K.P. (Suba) Subbalakshmi, Stevens Institute of Technology, USA

Wojciech Samek, Fraunhofer Heinrich Hertz Institute HHI, Germany

Xia “Ben” Hu, Rice University, USA