ACM IUI 2023 Workshop -- TExSS 2023 CALL FOR PAPERS

Styliani Kleanthous
Jan 5, 2023, 1:54:18 PM
Workshop on Transparency and Explanations in Smart Systems (TExSS): Responsible, Explainable AI for Inclusivity and Trust
Held in conjunction with ACM Intelligent User Interfaces (IUI) 2023, March 27-31, University of Technology Sydney (UTS), Sydney, Australia

https://explainablesystems.comp.nus.edu.sg/2023/


IMPORTANT DATES
-----------------------------------------
Submission deadline: Jan 9, 2023
Notifications sent:  Jan 29, 2023
Camera-ready:        Feb 16, 2023
Workshop date:       March 27, 2023

SUBMISSIONS
-----------------------------------------
Papers should be submitted via EasyChair (https://easychair.org/conferences/?conf=texss2023) by the end of January 9th, 2023, and will be reviewed by committee members. Position papers do not need to be anonymized.
At least one author of each accepted position paper must register for and attend the workshop. It is anticipated that accepted contributions will be published in dedicated workshop proceedings.
For further questions, please contact the workshop organizers at <texs...@easychair.org>.

Paper authors will present their work as part of thematic panels followed by smaller group activities related to the workshop theme. For more information visit our website at https://explainablesystems.comp.nus.edu.sg/2023/

MOTIVATION AND GOALS
-----------------------------------------
Smart systems that apply complex reasoning to make decisions and plan behavior, such as decision support systems and personalized recommendations, are difficult for users to understand. A large variety of algorithms allow the exploitation of rich and varied data sources in order to support human decision-making and/or take direct actions. However, there are increasing concerns surrounding their transparency and accountability, as these processes are typically opaque to the user - e.g., because they are too technically complex to be explained or are protected trade secrets. The topics of transparency and accountability have attracted increasing interest in recent years, aiming at more effective system training, better reliability, and improved usability.
This workshop will provide a venue for exploring issues that arise in designing, developing, and evaluating intelligent user interfaces that provide responsible, explainable AI, taking into account the diversity of the stakeholders involved and ensuring trust through system transparency. Furthermore, understanding users’ fairness perceptions, especially when interacting with such systems (e.g., how to explain systems and models to ensure social justice and trust), will lead to more effective system interactions, better reliability, improved usability, and a better user experience.
Suggested themes include, but are not limited to:

- How can we build inclusive transparency and explanations of algorithmic systems, particularly those that demonstrate that they are fair, accountable, and not biased?
- How do different stakeholders perceive algorithmic fairness, especially when interacting with AI-enabled systems?
- Through explanations, transparency, or other means, how can we raise stakeholders’ awareness of the potential for biases and social harms that could result from developing and using intelligent interactive systems?
- How do different groups of users (e.g. experts, developers, end-users) perceive the explanations provided by those systems?
- How can we build (good) algorithmic systems, particularly those that demonstrate that they are fair and accountable?
- What are the optimal points at which explanations are needed for transparency?
- What is important in user modeling for system transparency and explanations?
- What metrics can be used when evaluating transparent systems and explanations?
- How can we evaluate explanations and their ability to accurately explain underlying algorithms and overall systems’ behavior, especially for the goals of fairness and accountability?
- What techniques can we apply for testing models and assumptions of transparent and explainable intelligent interactive systems, being mindful of the potential for social and discriminatory harm?
- How can explanations allow human evaluators to select model(s) that are unbiased, such as by revealing traits or outcomes of the underlying learned system?
- What are important social aspects in interaction design for system transparency and explanations?
- How can we account for stakeholders’ diversity when designing and evaluating transparency and explanations?

Researchers and practitioners in academia or industry who have an interest in these areas are invited to submit papers of up to 8 pages (not including references) in the ACM SIGCHI Paper Format (see https://iui.acm.org/2023/call_for_papers.html). Submissions must be original and relevant contributions. Examples include, but are not limited to, position papers summarizing the authors’ existing research in this area and how it relates to the workshop theme, papers offering an industrial perspective or a real-world approach to the workshop theme, papers that review the related literature and offer a new perspective, and papers that describe work-in-progress research projects.


ORGANIZING COMMITTEE
----------------------------------------------
Tsvi Kuflik, Information Systems, The University of Haifa, Haifa, Israel
Styliani Kleanthous Loizou, fAIre MRG, CYENS CoE and Cyprus Centre for Algorithmic Transparency, Open University of Cyprus, Nicosia, Cyprus
Jonathan Dodge, Penn State University, State College, Pennsylvania, United States
Brian Y. Lim, Department of Computer Science, National University of Singapore, Singapore
Advait Sarkar, Microsoft Research, Cambridge, United Kingdom
Avital Shulner-Tal, Information Systems, The University of Haifa, Haifa, Israel
 
On behalf of the organizing committee,

Styliani Kleanthous, PhD
MRG Co-Leader, fAIre - Fairness and Ethics in AI - Human Interaction Group

