<Apologies for cross-postings>
--------------------------------------
CALL FOR PARTICIPATION
--------------------------------------
MIRROR@IberLEF2026: Motivational Interviewing Response & Rating via synthetic cOnversational tuRns
Website: https://mirror-iberlef.vercel.app/
-------------------------------
***Task description***
-------------------------------
We invite the community to develop Generative AI (GenAI) methods for creating synthetic conversation turns that can substantially improve the performance of models trained to recognize behavior codes (BCs) in the context of motivational interviewing (MI). A BC is a discrete, observable clinician action (e.g., asking a question, giving information) that is counted when coding a motivational interviewing session in order to quantify the specific techniques used. These codes allow raters to tally how often particular clinician behaviors occur, which helps assess adherence to MI-consistent versus MI-inconsistent practice. Our ultimate goal is to generate valuable data for training models for the automatic assessment of clinicians’ motivational interviewing skills. These skills, which are crucial for promoting behavior change among patients, can be evaluated using the “Motivational Interviewing Treatment Integrity (MITI)” rubric (https://tinyurl.com/38byjrwy).
This is a data-centric competition: participants are expected to produce high-quality datasets representing a wide range of clinical conversations (rather than training a model) to enhance the performance of a frozen baseline model used for BC classification. We encourage participants to include samples featuring clients from diverse backgrounds, covering varied conversation topics, and involving different types of health professionals.
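As a purely illustrative sketch of this data-centric setup (not an official baseline or a prescribed method), the Python snippet below drafts candidate clinician turns with an off-the-shelf LLM. The model name, prompt wording, and the generate_turn helper are our own assumptions and are not part of the task specification.

    # A minimal sketch, assuming the OpenAI Python SDK and an
    # OPENAI_API_KEY in the environment; the model choice and prompt
    # are illustrative assumptions, not prescribed by the task.
    from openai import OpenAI

    client = OpenAI()

    def generate_turn(label: str) -> str:
        """Ask the LLM for one clinician turn exemplifying a given BC label."""
        prompt = (
            "Write a single clinician utterance from a motivational "
            f"interviewing session that clearly exemplifies: {label}. "
            "Return only the utterance."
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # hypothetical choice of generator
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content.strip()

    # Draft candidates for one of the three BC pairs considered below.
    for label in ("Simple reflection", "Complex reflection"):
        print(label, "->", generate_turn(label))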
Participants in this competition should provide three datasets (one per pair of considered BCs) of at most 100 labeled conversation turns, which will be used to fine-tune pretrained models; the fine-tuned models will then be used to make predictions on a hold-out dataset. The performance of the fine-tuned model will be used as the leading evaluation metric to rank participants (a packaging sketch follows the list of pairs below). The considered pairs of BCs are:
(1) Simple reflection vs. Complex reflection;
(2) Open question vs. Closed question;
(3) Persuasion vs. Giving Information.
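To make the submission pipeline concrete, here is a minimal packaging sketch that assumes a two-column CSV (text, label) per BC pair. The actual file format, header names, and label strings will be specified on the MIRROR website, so everything below is a placeholder.

    # A minimal sketch of packaging and sanity-checking one dataset; the
    # CSV layout, header names, and label strings are assumptions, since
    # the official format will be published on the MIRROR website.
    import csv

    ALLOWED = {"Open question", "Closed question"}  # labels for pair (2)
    MAX_TURNS = 100  # per-dataset cap stated in this call

    rows = [
        ("What brings you in today?", "Open question"),
        ("Did you take your medication this week?", "Closed question"),
        # ... up to 100 labeled turns in total
    ]

    assert len(rows) <= MAX_TURNS, "each dataset may hold at most 100 turns"
    assert all(lab in ALLOWED for _, lab in rows), "unknown BC label"

    with open("pair2_questions.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["text", "label"])  # hypothetical header names
        writer.writerows(rows)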
Sample submissions and detailed instructions on formatting, evaluation criteria, and the competition platform will be available on the MIRROR website.
-------------------------------
***Important dates***
-------------------------------
* Mar 9th: Start of the development phase (platform starts receiving submissions for the validation set)
* May 1st: Start of the final phase (platform starts receiving submissions for the test set)
* May 11th: End of evaluation campaign (deadline for submission of runs)
* May 22nd: Publication of official results
* Jun 8th: Deadline for paper submission
* Jun 23rd: Acceptance notification
* Jun 30th: Camera-ready submission deadline
* Sep, TBD: Publication of proceedings
* Sep, TBD: Workshop co-located with SEPLN 2026
-------------------------------
***Organizing team***
-------------------------------
* Luis J. Arellano, INAOE, Mexico
* Carlos Olachea, INAOE, Mexico
* John Piette, University of Michigan, USA
* Hugo Jair Escalante, INAOE, Mexico
* Delia Irazú Hernández, INAOE, Mexico
* Luis Villaseñor, INAOE, Mexico
* Manuel Montes, INAOE, Mexico
Contact:
Hugo Jair Escalante (hugo...@gmail.com)
Join our Google Group at: https://groups.google.com/g/mirror-iberlef2026