We cordially invite you to participate in our ICCV’2021 Understanding Social Behavior in Dyadic and Small Group Interactions Workshop & Challenge
Human interaction has been a central topic in psychology and social sciences, aiming at explaining the complex underlying mechanisms of communication with respect to cognitive, affective and behavioral perspectives. From a computational point of view, research in dyadic and small group interactions enables the development of automatic approaches for detection, understanding, modeling and synthesis of individual and interpersonal social signals and dynamics. Many human-centered applications for good (e.g., early diagnosis and intervention, augmented telepresence and personalized agents) depend on devising solutions for such tasks.
Verbal and nonverbal communication channels are used in dyadic and small group interactions to convey our goals and intentions while building a common ground. During interactions, people influence each other based on the cues they perceive. However, the way we perceive, interpret, react, and adapt to them depends on a myriad of factors (e.g., our personal characteristics, either stable or transient; the relationship and shared history between individuals; the characteristics of the situation and task at hand; societal norms; and environmental factors). Analyzing individual behaviors during a conversation therefore requires the joint modeling of participants, due to the dyadic or group interdependencies at play. While these aspects are usually considered in non-computational dyadic research, context- and interlocutor-aware computational approaches are still scarce, largely due to the lack of datasets providing contextual metadata across different situations and populations.
Topics and Motivation: In line with this, we would like to bring together researchers from the field and related disciplines to discuss recent advances and new challenges in dyadic and small group interactions. We want to put a spotlight on the strengths and limitations of existing approaches, and define future directions for the field. In this context, we welcome papers addressing topics related to, but not limited to, the following:
Detection, understanding, modeling and synthesis of individual and interpersonal social signals and dynamics;
Verbal / nonverbal communication analysis in dyadic and small groups;
Contextual analysis in dyadic and small groups;
Datasets, annotation protocols and bias discovery/mitigation methods in dyadic and small groups;
Interpretability / Explainability in dyadic and small groups;
Workshop papers will be published in one of two venues, detailed below.
Papers submitted following our “ICCV Workshop schedule” will use the ICCV format and will be published in the proceedings of ICCV’2021.
Paper submission (ICCV): July 25, 2021
Author notification (ICCV): September 10, 2021
Camera-ready (ICCV): September 16, 2021
Papers submitted following our “PMLR Workshop schedule” will use the PMLR format and will be published in Proceedings of Machine Learning Research (PMLR).
Paper submission (PMLR): October 31, 2021
Author notification (PMLR): November 30, 2021
Camera-ready (PMLR): December 20, 2021
Louis-Philippe Morency, Carnegie Mellon University, USA
Alexander Todorov, Princeton University, USA
Hatice Gunes, University of Cambridge, UK
Daniel Gatica-Perez, IDIAP, Switzerland
Qiang Ji, Rensselaer Polytechnic Institute, USA
Yaser Sheikh, Carnegie Mellon University, USA
Norah Dunbar, UC Santa Barbara, USA
To advance and motivate research on visual human behavior analysis in dyadic and small group interactions, the challenge will use UDIVA, a large-scale, multimodal, multiview dataset recently collected by our group, which poses many related challenges. The challenge addresses two different problems, divided into two competition tracks:
Track 1 (personality recognition): automatic self-reported personality recognition of a single individual (i.e., a target person) during a dyadic interaction, from two individual views.
Track 2 (behavior forecasting): estimation of the future (up to N frames) 2D facial landmarks, hand, and upper body pose of a target individual in a dyadic interaction.
In both tracks, participants are expected to exploit multiview and multimodal information (audio-visual data, transcriptions, context and metadata) to solve the problem.
Dataset access request period opens: May 18, 2021
Start of the Challenge (development phase): June 1, 2021
Start of test phase: September 1, 2021
End of the Challenge: September 17, 2021
Release of final results: September 30, 2021
Top winning solutions will be invited to give a talk to present their work at the associated ICCV 2021 ChaLearn workshop (http://chalearnlap.cvc.uab.es/workshop/44/description/).
ORGANIZATION and CONTACT*
Sergio Escalera*, Computer Vision Center (CVC) and University of Barcelona, Spain <sergio.escal...@gmail.com>
Cristina Palmero*, Computer Vision Center (CVC) and University of Barcelona, Spain <c.palmero...@gmail.com>
Wei-Wei Tu, 4Paradigm Inc., China
Albert Clapés, Computer Vision Center (CVC), Spain
Julio C. S. Jacques Junior, Computer Vision Center (CVC/UAB), Spain
Sponsors: This event is sponsored by ChaLearn, 4Paradigm Inc., and Facebook Reality Labs. The University of Barcelona, the Computer Vision Center at the Autonomous University of Barcelona, and the Human Pose Recovery and Behavior Analysis (HuPBA) group are co-sponsors of the Challenge.
Prizes: Top winning solutions will be invited to give a talk presenting their work at the associated ICCV 2021 ChaLearn workshop, will receive a winning certificate, and will have free ICCV registration. Our sponsors are also offering the following prizes:
Track 1: Top-1 solution: $1,000 / Top-2 solution: $500 / Top-3 solution: $300
Track 2: Top-1 solution: $1,000 / Top-2 solution: $500 / Top-3 solution: $300