[CFP] Machine Learning Journal - Special Issue on Explainable AI for Secure Applications


Giuseppina Andresini

Jul 11, 2024, 9:59:20 AM
to AIxIA mailing list
Machine Learning Journal - Call for Papers: Special Issue on Explainable AI for Secure Applications

[APOLOGIES FOR MULTIPLE POSTINGS]


Dear colleagues,

We invite you to submit to the special issue on "Explainable AI for Secure Applications" (more details can be found here: https://link.springer.com/journal/10994/updates/27325590).


The special issue is hosted by the Machine Learning Journal (Impact Factor: 4.3, CiteScore: 11).


- Aims and scope -

Over the past decade, the rapid growth of Artificial Intelligence (AI) and Machine Learning (ML) has spurred the pervasive use of deep neural networks to improve the accuracy of decision-making systems across many fields. While the primary goal of ML models remains making correct decisions, eXplainable AI (XAI) has recently emerged as a key technology for explaining how ML algorithms and models reach those decisions. At the same time, alongside traditional cyber-attacks, AI and ML systems are as vulnerable to attack as any other software system, with the added complexity that the data, the models, and even the explanations themselves can be targeted.

This special issue focuses on the challenges and open problems in leveraging XAI for secure applications. It aims to share and discuss recent advances and future trends in secure and explainable ML, to assure stakeholders of the safety and security of ML-based decisions, and to accelerate the development of XAI approaches for secure applications. The topic of this special issue is closely connected to the emerging view of Symbiotic AI.


- Call for Papers -

Topics of interest include, but are not limited to:

Interpretability and Explainability of machine learning and deep learning models
XAI to increase the accuracy of machine learning and deep learning models
XAI to increase the transparency of machine learning and deep learning models
XAI to improve the detection of adversarial machine learning attacks and the robustness of AI models against malicious actions
XAI to develop novel adversarial machine learning algorithms
Metrics to evaluate the robustness of XAI algorithms to adversarial attacks
Exploring vulnerabilities of XAI algorithms
Novel designs and implementations of XAI algorithms that are more robust to adversarial learning
New datasets, benchmarks and challenges to assess the vulnerability of AI and XAI algorithms
Examples of innovative applications of XAI algorithms for security and vulnerability analysis of AI models


- Schedule -
Paper submission window: October 15, 2024 to January 15, 2025 (no paper can be submitted before October 15, 2024)
First notification of acceptance: March 15, 2025
Deadline for revised submissions: April 15, 2025
Final notification of acceptance: June 15, 2025
Expected publication date (online): August/September 2025


- Guest editors -

Annalisa Appice, University of Bari Aldo Moro, Italy, annalis...@uniba.it

Giuseppina Andresini, University of Bari Aldo Moro, Italy, giuseppina...@uniba.it

Przemysław Biecek, Warsaw University of Technology, Poland, przemysl...@pw.edu.pl

Christian Wressnegger, Karlsruhe Institute of Technology (KIT), KASTEL Security Research Labs, Germany, christian....@kit.edu

- More info -

https://link.springer.com/journal/10994/updates/27325590