MOCHA: Multimodal cOntent annotation CHAllenge


Hugo Jair

May 18, 2021, 5:52:38 PM
to gesturechallenge

ICMI 2021 Grand Challenge:

MOCHA: Multimodal cOntent annotation CHAllenge 

https://competitions.codalab.org/competitions/31432


Streaming platforms are now pervasive and accessible to everyone. This raises major concerns for minors, who may be exposed to content that is not appropriate for their age. Automated tools for tagging online content according to the presence of questionable material can have a great impact on protecting users from disturbing and inappropriate material. With this goal in mind, we are organizing a grand challenge on labeling questionable content in the wild.


Challenge participation

You are invited to join this grand challenge, which addresses the automated detection of questionable comic mischief in videos. By comic mischief, we mean the appearance of mild harm inflicted on video characters in a manner intended to be humorous or funny. Participants of the challenge are asked to develop solutions for automatically recognizing comic mischief categories in videos.

A dataset with labeled videos is made available to participants for the development of their solutions, and a separate test dataset will be used for their evaluation. Participants are encouraged to exploit the available multimodal information when developing their solutions. Data and detailed information about the task can be found on the competition site at:


https://competitions.codalab.org/competitions/31432
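For participants new to multimodal learning, a common starting point is a simple fusion baseline that concatenates per-modality features before classification. The sketch below is purely illustrative and is not part of the challenge specification: the function names, feature dimensions (512-d video, 128-d audio, 300-d text), and the number of comic-mischief categories (4) are all assumptions, and the random vectors stand in for real extracted features.

```python
import numpy as np

# Illustrative early-fusion baseline; all names and dimensions are
# hypothetical, not prescribed by the MOCHA challenge.
rng = np.random.default_rng(0)

def fuse_features(video_feat, audio_feat, text_feat):
    """Concatenate per-modality feature vectors (early fusion)."""
    return np.concatenate([video_feat, audio_feat, text_feat])

def predict_scores(fused, weights, bias):
    """Per-category probabilities via a linear layer + sigmoid
    (multi-label, since a video may show several mischief types)."""
    logits = weights @ fused + bias
    return 1.0 / (1.0 + np.exp(-logits))

# Stand-in features: e.g. 512-d video, 128-d audio, 300-d text embeddings.
video = rng.standard_normal(512)
audio = rng.standard_normal(128)
text = rng.standard_normal(300)

fused = fuse_features(video, audio, text)   # shape (940,)

# Randomly initialised multi-label head for an assumed 4 categories.
W = rng.standard_normal((4, 940)) * 0.01
b = np.zeros(4)
probs = predict_scores(fused, W, b)         # shape (4,), values in (0, 1)
```

In practice the random head would be replaced by a trained classifier, and late-fusion variants (one model per modality, scores averaged) are an equally simple alternative worth trying.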


Awards will be granted to the best grand challenge system, best grand challenge paper, best negative results paper, and best interpretation paper.


Paper submission 

We will accept submissions from both participants of the grand challenge describing/analyzing their solutions, and non-participants presenting work related to the grand challenge. In both cases, submissions should meet the required quality standards of ICMI, as accepted papers will be published in the ICMI main proceedings. The scope comprises submissions on all aspects of the automated analysis of questionable content from multimodal information. Topics of interest include, but are not limited to:

  • Datasets and benchmarks for the automated multimodal analysis of questionable content in videos, including work that focuses on specific types of questionable content (violence, adult content, hate speech).

  • User studies and methodologies for the manual and semi-supervised labeling of multimodal datasets on questionable content.

  • Novel and effective methodologies for the automated detection and recognition of questionable content in the wild.

  • Position papers discussing ethical implications of developing multimodal technology for the detection of questionable content.


Submissions 

Paper submissions will be processed via the corresponding CMT site:

https://cmt3.research.microsoft.com/MOCHAICMI2021



Important dates 

  • May 6th, 2021: Start of development phase. Release of labeled development (training) data.

  • July 5th, 2021: Start of test / final phase. Release of unlabeled test data.

  • July 11th, 2021: End of final phase.

  • July 12th, 2021: Official results announced.

  • July 26th, 2021: Paper submission deadline for the associated ICMI 2021 workshop.

  • August 9th, 2021: Author notification

  • August 16th, 2021: Deadline for camera ready submission

  • October 2021: Workshop at ICMI 2021.



Contact email

mocha-questio...@googlegroups.com


Organizing team

Thamar Solorio, University of Houston, TX, USA

Ioannis Kakadiaris, University of Houston, TX, USA

Hugo Jair Escalante, INAOE, Mexico and ChaLearn, CA, USA


Hugo Jair

Jun 29, 2021, 6:16:32 PM
to gesturechallenge
Dear all, 

Apologies for cross-posting. This is a friendly reminder that the final phase of the MOCHA challenge is approaching. On July 5, unlabeled test data will be released. We look forward to receiving your submissions.

Best

Organizing team 
