[CFP] Last Call for Papers: IJCNN Special Session on Explainable Deep Neural Networks for Responsible AI (DeepXplain 2025)


Francielle Vargas

Jan 11, 2025, 7:13:33 AM
to NAACL-Latin-America

*** Last Call for Papers ***

We invite paper submissions to DeepXplain 2025 (Explainable Deep Neural Networks for Responsible AI: Post-Hoc and Self-Explaining Approaches), a special session at IJCNN 2025 dedicated to innovative methodologies for improving the interpretability of Deep Neural Networks (DNNs) while addressing fairness and bias mitigation.

Website: https://deepxplain.github.io/

Important Dates:

Contributions

This special session aims to foster interdisciplinary collaboration, promote the ethical design of DNN-based AI systems, and encourage the development of benchmarks and datasets for explainability research. Our goal is to advance both post-hoc and intrinsic interpretability approaches, bridging the gap between the high performance of deep neural networks and their transparency. By doing so, we seek to enhance human trust in these models and mitigate the risks of negative social impacts.

Topics of interest include, but are not limited to:

  • Theoretical advances in post-hoc explanation methods (e.g., LIME, SHAP, Grad-CAM) for DNNs.

  • Development of inherently interpretable architectures with self-explaining mechanisms, such as attention-based models, prototype networks, and self-explaining neural networks (SENNs).

  • Post-hoc and self-explaining methods for large language models (LLMs).

  • Studies on ethics, fairness, and interpretability in LLMs.

  • Evaluation studies and methodologies for identifying biased DNN models.

  • Counterfactual approaches for bias mitigation in DNNs.

  • Application-driven insights, particularly in Natural Language Processing and Computer Vision.

  • Ethical evaluations of DNN models focused on reducing bias and negative societal impact.

  • Methods, metrics, and methodologies for improving interpretability and fairness in DNNs.

  • Ethical discussions on the social impact of the lack of transparency in AI.

  • Benchmark datasets and tools for explainability.

  • Explainable AI in critical applications: healthcare, governance, misinformation and hate speech detection, etc.

  • Metrics and methodologies for evaluating interpretability and fairness.

Submission Information

We welcome submissions of academic papers (both long and short) across the spectrum of theoretical and practical work, including research ideas, methods, tools, simulations, applications or demonstrations, practical evaluations, position papers, and surveys. Submissions must be written in English, adhere to the IJCNN-2025 formatting guidelines, and be submitted as a single PDF file.


Organizers



