[CfP] EvalRS 2023: Well-Rounded Recommender Systems For Real-World Deployments


Claudio Pomo
Apr 27, 2023, 5:52:03 AM
to Machine Learning and Statistics
ACM SIGKDD Workshop - EvalRS 2023 CALL FOR PAPERS


EvalRS 2023: Well-Rounded Recommender Systems For Real-World Deployments will be held
in conjunction with the 29th ACM SIGKDD Conference on Knowledge Discovery and
Data Mining (ACM SIGKDD 2023), onsite in Long Beach (CA), USA, 6-10 August 2023.


Full details are available online: https://reclist.io/kdd2023-cup/


# IMPORTANT DATES
Submission deadline: 23 May 2023 AoE (extended)
Notification: 23 June 2023
Camera-ready: 15 July 2023
Workshop date: 7 August 2023


# FORMAT
Research papers and a 4-hour hackathon on RecSys evaluation.


The CfP is open (see below); the full program and finalized logistics will be announced soon on the official website: https://reclist.io/kdd2023-cup/


# PRIZES
Thanks to the support of Mozilla AI, we will award monetary prizes for:
* best paper
* best student paper
* best hackathon project


# MOTIVATION & GOALS
EvalRS aims to foster closer partnerships between the academic and industrial sectors regarding the well-rounded evaluation of recommender systems (RS).  


The traditional approach to RS evaluation has been centered on accuracy metrics. However, EvalRS seeks to expand the scope of evaluation techniques beyond just accuracy, to encompass other vital aspects such as fairness, interpretability, and robustness. By bringing together experts from industry, academia, and government, EvalRS creates a forum for discussion and collaboration on the latest trends and challenges across a wide range of domains.


EvalRS 2023 follows in the footsteps of the first edition, EvalRS 2022, which featured more than 150 participants and was conducted entirely in the open, with artifacts such as datasets, metrics, and evaluation code released back to the community. A review published in Nature Machine Intelligence (https://www.nature.com/articles/s42256-022-00606-0) emphasized the first-of-its-kind nature of the workshop, in which theoretical considerations became practical contributions, as participants were asked to “live and breathe” the problem of evaluation through a data challenge.


We believe that the well-rounded evaluation of RS is, by nature, a multi-faceted and multi-disciplinary endeavor, and that the field as a whole has often been held back by the false dichotomy of quantitative-and-scalable vs. qualitative-and-manual. The introduction of the hackathon promises to be an additional element of differentiation, making EvalRS 2023 a chance to present cutting-edge work on recommender systems, network with like-minded peers, learn by doing, and, why not, win a prize!


# TOPICS OF INTEREST
Topics of interest include, but are not limited to:
- Online vs offline evaluation - e.g. making offline evaluation more trustworthy and unbiased;
- Tools and frameworks for the evaluation of RS;
- Empirical studies on the evaluation of RS;
- Reports from real-world deployments - failures, successes, and surprises;
- New metrics and methodologies for evaluation, both quantitative and qualitative;
- Multi-dimensional evaluation, combining multiple recommendation quality factors;
- Multi-disciplinary investigation on ethical questions connected to the deployment and use of RS.


# TYPES OF PAPERS
For EvalRS 2023 we encourage the submission of original contributions along our main topics. Submitted papers will be evaluated (single-blind) according to their originality, technical content, style, clarity, and relevance to the workshop. Papers must be original work and may not be under submission to another venue at the time of review.


Accepted papers will appear in the workshop proceedings (as we did for EvalRS 2022, we plan on using CEUR-WS for the proceedings).


- Long research/position papers (8 pages, excl. references) and short research/position papers (4 pages, excl. references), presenting work in progress, lessons learnt, positions, and emerging or future research issues and directions on recommender systems evaluation.


- Extended abstracts (2 pages, excl. references), describing ongoing projects or presenting already published results in the area.


Submissions must be in English, in PDF format, using the CEUR-WS two-column conference format available at: http://ceur-ws.org/Vol-XXX/CEURART.zip
or at: https://www.overleaf.com/latex/templates/template-for-submissions-to-ceur-workshop-proceedings-ceur-ws-dot-org/hpvjjzhjxzjk if an Overleaf template is preferred.


# SUBMISSION & PUBLICATION
All papers will undergo a peer review process by at least two expert reviewers to ensure a high standard of quality. Referees will consider originality, significance, technical soundness, clarity of exposition, and relevance to the workshop’s topics.


Research papers should be submitted electronically as a single PDF file through the CMT submission system at the following link: https://cmt3.research.microsoft.com/EvalRS2023


We plan to award monetary prizes to students and participants for outstanding paper contributions.


# ORGANIZING COMMITTEE
Federico Bianchi - Stanford
Patrick John Chia - Coveo
Ciro Greco - Bauplan
Gabriel Moreira - NVIDIA
Claudio Pomo - Politecnico di Bari
Davide Eynard - Mozilla AI
Fahd Husain - Mozilla AI
Jacopo Tagliabue - NYU, Bauplan