*** Apologies for cross postings ***
*ExUM Workshop @UMAP 2026 - Call For Papers - *** DEADLINE APRIL 9, 2026 ***
----------------------------------------------------------------------
Workshop on Explainable User Models and Personalised Systems (ExUM@UMAP 2026)
co-located with UMAP 2026 (https://www.um.org/umap2026/) - 34th ACM Conference on
User Modeling, Adaptation and Personalization, June 8-11, 2026 |
Gothenburg, Sweden.
Twitter: https://x.com/ExUM_Workshop
Web: https://exum-umap.github.io
For any information: marco.p...@uniba.it, catald...@uniba.it
=================
IMPORTANT DATES
=================
* Submission Deadline: April 9, 2026
* Notification: April 28, 2026
* Workshop Papers Camera-ready Submission (CEUR proceedings): May 7, 2026
Please note: All deadlines refer to 11:59 pm AoE (Anywhere on Earth) time.
=========
ABSTRACT
=========
Adaptive and personalized systems, including Large Language Models (LLMs), have
rapidly emerged as transformative technologies, deeply integrated into
various aspects of modern life. From conversational agents that provide
human-like interactions to recommendation algorithms that curate
personalized content such as music, movies, or products, these systems
are reshaping how individuals interact with digital platforms. As their
influence grows in supporting decision-making, content delivery, and
user engagement, it becomes increasingly important to address key issues
such as transparency, fairness, and user trust. Frameworks like the EU
General Data Protection Regulation (GDPR) and the EU AI Act have highlighted
the 'right to explanation,' underscoring the need for users to
understand the mechanisms driving these intelligent systems. Despite
that, a significant portion of research in these fields has been geared
toward maximizing performance, i.e., improving the relevance of the
results of personalized systems, often at the expense of explainability.
This trade-off risks eroding user trust and poses problems of
compliance with ethical and regulatory standards. This workshop aims
to create a forum for discussing pressing challenges, innovative
methodologies, and future directions, exploring how transparency,
explainability, and user-centric design can be incorporated into these
technologies to make them not only effective but also trustworthy,
ethical, and aligned with the diverse needs and expectations of their
users.
======
TOPICS
======
Topics of interest include but are not limited to:
- TRANSPARENT AND EXPLAINABLE PERSONALIZATION STRATEGIES
Scrutable User Models
Transparent User Profiling and Personal Data Extraction
Explainable Personalization and Adaptation Methodologies
Novel strategies (e.g., conversational recommender systems) for building transparent algorithms
Transparent Personalization and Adaptation to Groups of users
- TRANSPARENT PERSONALIZATION BASED ON LARGE LANGUAGE MODELS
- DESIGNING EXPLANATION ALGORITHMS
Explanation algorithms based on item description and item properties
Explanation algorithms based on user-generated content (e.g., reviews)
Explanation algorithms based on collaborative information
Building explanation algorithms for opaque personalization techniques
(e.g., neural networks, matrix factorization, deep learning approaches)
Explanation algorithms based on methods to build group models
- DESIGNING TRANSPARENT AND EXPLAINABLE USER INTERFACES
Transparent User Interfaces
Designing Transparent Interaction methodologies
Novel paradigms (e.g., chatbots, LLMs) for building transparent models
- EVALUATING TRANSPARENCY AND EXPLAINABILITY
Evaluating Transparency in interaction or personalization
Evaluating Explainability of the algorithms
Designing User Studies for evaluating transparency and explainability
Novel metrics and experimental protocols
- OPEN ISSUES IN TRANSPARENT AND EXPLAINABLE USER MODELS AND PERSONALIZED SYSTEMS
Ethical issues (fairness and biases) in user / group models and personalized systems
Privacy management of personal and social data
Discussing Recent Regulations (GDPR) and future directions
============
SUBMISSIONS
============
We encourage the submission of contributions investigating novel
methodologies to exploit heterogeneous personal data and approaches to
build transparent and scrutable user models. In particular, we accept
three kinds of submissions:
(A) Regular papers (10 or more standard pages, including references (CEUR format));
(B) Short papers (5–9 standard pages, including references (CEUR format));
(C) Ongoing projects, Demo, Position and Perspective Papers (less than 5 standard pages, including references (CEUR format));
Submission site:
https://easychair.org/my2/conference?conf=exum2026
All
submitted papers will be evaluated by at least two members of the
program committee, based on originality, significance, relevance, and
technical quality. Note that the references do not count toward page
limits. Submissions should be single-blind, i.e., authors' names should
be included in the submission. Papers must be formatted according to
the workflow for CEUR publications. All accepted papers will be
published by CEUR as a joint volume of Workshop UMAP 2026 Proceedings.
At least one author of each accepted paper must register for the
particular workshop and present the paper there.
CEUR Templates and Formatting:
All papers must use the CEUR-WS template in one-column format (LaTeX is strongly preferred).
Offline version:
http://ceur-ws.org/Vol-XXX/CEURART.zip
Overleaf version:
https://www.overleaf.com/latex/templates/template-for-submissions-to-ceur-workshop-proceedings-ceur-ws-dot-org/wqyfdgftmcfw
If LaTeX is not used, authors must strictly follow the ODT template instructions (provided within the CEURART package).
Microsoft Word must not be used for the ODT template.
The Libertinus font family is mandatory; installation instructions are included in the template.
Papers that do not comply with these requirements will not be suitable for publication in CEUR-WS.
Declaration of Generative AI:
Each paper must include a mandatory Declaration of Generative AI, in accordance with the CEUR-WS Generative AI Policy.
https://ceur-ws.org/GenAI/Policy.html
=============
ORGANIZATION
=============
Marco Polignano - University of Bari, Italy
Amra Delic - University of Sarajevo, Bosnia-Herzegovina
Cataldo Musto - University of Bari, Italy
Amon Rapp - University of Torino, Italy
Giovanni Semeraro - University of Bari, Italy
Juergen Ziegler - University of Duisburg-Essen, Germany