CfP BIAS - 4th Workshop on Bias and Fairness in AI

Daphne Lenders

Apr 10, 2024, 8:30:09 AM
to Machine Learning News

CALL FOR PAPERS

BIAS ‘24 - Fourth Workshop on Bias and Fairness in AI, hosted at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD)


Website:  https://sites.google.com/view/bias2024

Submission Website: will be posted on website once available



IMPORTANT DATES (all deadlines are 23:59, Anywhere on Earth)

15.06.2024 - Paper Submission Deadline

15.07.2024 - Paper Notification Deadline

09.09.2024 or 13.09.2024 - Workshop Date 


MOTIVATION

Fairness in Machine Learning continues to be a growing area of research and is perhaps now more relevant than ever: the popularity of generative large multimodal models keeps rising, new AI-powered applications and tools are in widespread public use, and legal regulations of AI/ML (e.g., the EU AI Act) are close to being adopted. While initial studies on fair ML and AI bias often focused on the technical aspects of discriminatory algorithms and treated fairness as an objective to be optimized, more recent work recognizes the importance of viewing fairness from a broader perspective: taking legal and societal implications into account and involving different stakeholders in the design of fair algorithms.



TOPICS OF INTEREST

We invite contributions that deal with bias and fairness in any ML approach (including but not limited to supervised learning, unsupervised learning, ranking, and generative models), any ML system (e.g., recommender systems, search engines, chatbots, content moderation), any type of data (tables, text, images, videos, speech, multimodal, ...), and any learning setup (batch, non-i.i.d., federated, ...). We especially welcome interdisciplinary work bridging Computer Science with fields such as Human-Computer Interaction, Law, and the Social Sciences.


Contributions may concern the fairness auditing/assessment of ML systems, covering topics such as:

  • Auditing practices and tools

  • Best practices and legal frameworks around audits

  • Case studies

  • Privacy-aware fairness audits

  • xAI for understanding/auditing biases

  • Visual analytics for understanding/auditing biases

  • Society’s perception of algorithmic fairness



Other contributions may deal with the design of fairer algorithms:

  • Human in the loop approaches for fairness

  • Case studies on fairness-aware algorithms

  • Fairness-aware learning in multimodal and multi-attribute data

  • Fairness-aware data collection

  • Fairness-aware data processing

  • Fairness-aware algorithms



SUBMISSION INFORMATION

In this workshop, we wish to stimulate the exchange of novel ideas and interdisciplinary perspectives. To do this, we will accept two different types of submissions:

  • Full papers, presenting novel and original work (max. 14 pages, excluding references)

  • Abstracts of already published work (max. 2 pages, excluding references)


All papers must be anonymized and formatted according to the Springer LNCS guidelines. Each submission will be reviewed by at least two reviewers from a panel of experts. Authors of full papers can opt to have their work published in the post-workshop proceedings of the CCIS series. At least one author of each accepted paper is required to attend the workshop and present. Depending on local venue capacity, we plan to have both regular talks and poster presentations for each accepted paper, to foster further discussion.



