Call for Papers - EXplainable & Responsible AI in Law (XAILA) 2020 - workshop at JURIX 2020


Michał Araszkiewicz

Oct 21, 2020, 11:03:51 AM
to Jurix Foundation for Legal Knowledge Based Systems

EXplainable & Responsible AI in Law (XAILA) 2020

(XAILA webpage http://xaila.geist.re)

Co-located with: JURIX 2020 https://jurix2020.law.muni.cz


Organizers: Grzegorz J. Nalepa, Michał Araszkiewicz, Bart Verheij, Martin Atzmüller

(Jagiellonian University, Poland; University of Groningen, The Netherlands; University of Osnabrueck, Germany)

The idea of the workshop

The idea of the XAILA series of JURIX workshops (1st edition XAILA 2018 in Groningen, 2nd edition XAILA 2019 in Madrid) is to provide an interdisciplinary platform for the discussion of explainable AI, algorithmic transparency, comprehensibility, interpretability, and related topics. This year's edition focuses in particular on the emerging idea of responsible AI (RAI) and the multiple connections between the notions of explainability and responsibility, but we also aim to continue the discussion across all domains within the workshop's scope.


Important dates:

Submission: 04.11.2020

Notification: 23.11.2020

Camera-ready: 30.11.2020

Workshop: 09.12.2020


Submission and proceedings:

We accept regular/long papers of up to 12 pages, as well as short and position papers of up to 6 pages. Please use the Springer LNCS format.

A dedicated EasyChair installation is available at

https://easychair.org/conferences/?conf=xaila2020

Workshop proceedings will be published via CEUR-WS. A post-workshop journal publication is under consideration.


Description:

In the last several years we have observed growing interest in advanced AI systems that achieve impressive task performance. However, there has also been increased awareness of their complexity and of the challenging consequences of their possibly limited understandability to humans. In response, a number of research directions have been initiated, including humanized or human-centered AI, as well as ethically aligned, ethically designed, or simply ethical AI. In many of these proposals, the principal concept is the explanatory capability of the AI system (XAI), achieved e.g. via interpretable and explainable machine learning or the inclusion of human background knowledge and adequate declarative knowledge. Such a capability could provide foundations not only for transparency and understandability, but also for value alignment and human centricity, as the explanation is ultimately to be provided to humans.

Recently, the term responsible AI (RAI) has been coined as a step beyond XAI. The discussion of RAI has again been strongly influenced by the ethical perspective. However, as practitioners in our fields, we are convinced that AI is advancing far too fast, and the ethical perspective is much too vague, to offer conclusive and constructive results on its own. We believe that the concepts of responsibility and accountability should be considered primarily from the legal perspective, not least because the operation of AI-based systems poses actual challenges to the rights and freedoms of individuals. In the field of law, these concepts should receive a well-defined interpretation, and the reasoning procedures based on them should be clarified.

The introduction of AI systems into the public as well as the legal domain brings many challenges that have to be addressed. The catalogue of these problems includes, but is not limited to: (1) the type of liability adequate for the operation of AI (be it civil, administrative, or criminal liability); (2) the (re)interpretation of classical legal concepts concerning the ascription of liability, such as causal link, fault, or foreseeability; and (3) the distribution of liability among the involved actors (AI developers, vendors, operators, customers, etc.). As the notions relevant to the discussion of legal liability evolved from the observation and evaluation of human behavior, they are not easily transferable to the new and contested domain of liability related to the operation of artificial intelligent systems.

The goal of the workshop is to cover and integrate these problems and questions, bridging XAI and RAI by combining methodological AI with the respective ethical and legal perspectives, specifically with the support of established concepts and methods regarding responsibility and accountability.

Topics of interest

Our objective is to bring together people from AI interested in XAI and RAI topics and to create ample space for discussion with people from the fields of legal scholarship and legal practice, and, most importantly, with the vibrant AI & Law community. As many members of the AI and Law community join both perspectives, the JURIX conference is the perfect venue for the workshop. Together we would like to address questions such as:

* the notions of transparency, interpretability and explainability in XAI

* non-functional design choices for explainable and transparent AI systems

* legal consequences of black-box AI systems

* legal criteria and requirements for explainable, transparent, and responsible AI systems

* criteria of legal responsibility discussed in the context of intelligent systems operation and the role of explainability in liability ascription

* possible applications of XAI systems in the area of legal policy deliberation, legal practice, teaching and research

* legal implications of the use of AI systems in different spheres of societal life

* the notion of right to explanation

* relation of XAI and RAI to argumentation technologies

* approaches and architectures for XAI and RAI in AI systems

* XAI, RAI and declarative domain knowledge

* risk-based approach to analysis of AI systems and the influence of XAI on risk assessment

* incorporation of ethical values into AI systems, its legal interpretation and consequences

* XAI, privacy and data protection (conceptual and theoretical issues)

* XAI, certification and compliance


List of members of the program committee (to be confirmed and extended):

Martin Atzmueller, Osnabrueck University, Germany

Michal Araszkiewicz, Jagiellonian University, Poland

Kevin Ashley, University of Pittsburgh, USA

Szymon Bobek, Jagiellonian University, Poland

Jörg Cassens, University of Hildesheim, Germany

David Camacho, Universidad Autonoma de Madrid, Spain

Pompeu Casanovas, Universitat Autonoma de Barcelona, Spain

Teresa Moreira, University of Minho, Braga, Portugal

Paulo Novais, University of Minho, Braga, Portugal

Grzegorz J. Nalepa, Jagiellonian University, Poland

Tiago Oliveira, National Institute of Informatics, Japan

Martijn van Otterlo, Tilburg University, The Netherlands

Adrian Paschke, Freie Universität Berlin, Germany

Monica Palmirani, Università di Bologna, Italy

Radim Polčák, Masaryk University, Czech Republic

Marie Postma, Tilburg University, The Netherlands

Ken Satoh, National Institute of Informatics, Japan

Erich Schweighofer, University of Vienna, Austria

Michal Valco, Constantine the Philosopher University in Nitra, Slovakia

Bart Verheij, University of Groningen, The Netherlands

Tomasz Żurek, Maria Curie-Skłodowska University of Lublin, Poland
