** CALL FOR PAPERS **
IJCAI-18 Workshop on Explainable AI (XAI-18)
Stockholm, Sweden
July 14/15, 2018
http://home.earthlink.net/~dwaha/research/meetings/faim18-xai/
Topics and Objectives
*************************
Explainable AI (XAI) systems embody explanation processes that allow users to gain insight into the system's models and decisions, with the intent of improving the user's performance on a related task. For example, an XAI system could allow a delivery drone to explain (to its remote operator) why it is operating normally and the situations in which it will deviate (e.g., avoiding unsafe locations when placing fragile packages), thus allowing an operator to better manage a set of such drones. Likewise, an XAI decision aid could explain its recommendation for an aggressive surgical intervention (e.g., in reaction to a patient's recent health patterns and medical breakthroughs) so that a doctor can provide better care. The XAI system's models could be learned and/or hand-coded, and could be used for a wide variety of analysis or synthesis tasks. However, while users of many applications (e.g., in autonomous control or in medical or financial decision making) require understanding before committing to decisions with inherent risk, most AI systems do not support robust explanation processes. Addressing this challenge has become more urgent with the increasing reliance on learned models in deployed applications.
This raises several questions, such as: how should explainable models be designed? How should user interfaces communicate decision making? What types of user interactions should be supported? How should explanation quality be measured? These questions are of interest to researchers, practitioners, and end-users, independent of what AI techniques are used. Solutions can draw from several disciplines, including cognitive science, human factors, and psycholinguistics.
This workshop will provide a forum for learning about exciting research on interactive XAI methods, highlighting and documenting promising approaches, and encouraging further work, thereby fostering connections among researchers interested in AI (e.g., causal modeling, computational analogy, constraint reasoning, intelligent user interfaces, ML, narrative intelligence, planning), human-computer interaction, cognitive modeling, and cognitive theories of explanation and transparency. While sharing an interest in technical methods with other workshops, the XAI Workshop will focus on agent explanation problems, motivated by human-machine teaming needs. This topic is of particular importance to (1) deep learning techniques (given their many recent real-world successes and black-box models), (2) other types of ML and knowledge acquisition models, and (3) symbolic and logical methods, to facilitate their use in applications where supporting explanations is critical.
This is the Second XAI Workshop; the First XAI Workshop was held at IJCAI-17. XAI-18 will be coordinated among a set of four workshops:
-Explainable Artificial Intelligence (XAI)
-Fairness, Accountability, and Transparency in Machine Learning (FAT/ML)
-Human Interpretability in Machine Learning (WHI)
-Interpretable & Reasonable Deep Learning and its Applications (IReDLia)
Topics of interest include but are not limited to:
Technologies
-Machine learning (e.g., deep, reinforcement, statistical relational, transfer)
-Cognitive architectures
-Commonsense reasoning
-AI Planning
-Decision making
-Episodic reasoning
-Intelligent agents (e.g., goal reasoning)
-Knowledge acquisition
-Narrative intelligence
-Temporal reasoning
Applications/Tasks
-After action reporting
-Ambient intelligence
-Autonomous control
-Caption generation
-Computer games
-Image processing (e.g., security/surveillance tasks)
-Information retrieval and reuse
-Intelligent Decision Aids
-Intelligent tutoring
-Plan replay
-Recommender systems
-User modeling
-Visual question-answering
Important Dates
******************
Paper submission: May 18, 2018
Notification: May 29, 2018
Camera-ready submission: TBD
Submission Details
**********************
Authors may submit *long papers* (6 pages plus up to one page of references) or *short papers* (4 pages plus up to one page of references).
All papers should be typeset in the IJCAI style. Accepted papers will be published on the workshop website.
Papers must be submitted in PDF format via the EasyChair system (https://easychair.org/conferences/?conf=xai18).
Organizing Chairs
*********************
David Aha (NRL, USA)
Trevor Darrell (UC Berkeley, USA)
Patrick Doherty (Linköping University, Sweden)
Daniele Magazzeni (King’s College London, UK)