Call for papers - HCOMP 2017 Workshop on Human Computation for Image and Video Analysis (GroupSight) - October 24 in Québec City, Canada

Genevieve Patterson

Jul 21, 2017, 10:35:20 AM
Please forgive duplicate advertisements and please do forward this to your students and interested colleagues!

HCOMP 2017 Workshop on Human Computation for Image and Video Analysis (GroupSight) - October 24 in Québec City, Canada

https://groupsight.github.io/


Submission Deadline: August 14, 2017 (5:59 pm EDT)

Notification: August 25, 2017

Workshop Date: October 24, 2017


Overview


We are pleased to announce the second annual GroupSight workshop, building on the successful first edition at last year's AAAI Conference on Human Computation and Crowdsourcing (HCOMP). This year, HCOMP is co-located with UIST 2017, the premier forum for innovations in human-computer interfaces. This promises an exciting mix of people and papers at the intersection of HCI, crowdsourcing, and computer vision.


The aim of this workshop is to promote greater interaction among the diverse researchers and practitioners who examine how to combine human and computer efforts to convert visual data into discoveries and innovations that benefit society at large. It will foster in-depth discussion of technical and application issues in engaging humans with computers to optimize cost/quality trade-offs. It will also serve as an introduction for researchers and students curious about this important, emerging field at the intersection of crowdsourced human computation and image/video analysis. Relevant topics include (but are not limited to):


  • Crowdsourcing image and video annotations (e.g., labeling methods, quality control)

  • Humans in the loop for visual tasks (e.g., recognition, segmentation, tracking, counting)

  • Richer modalities of communication between humans and visual information (e.g., language, 3D pose, attributes)

  • Semi-automated computer vision algorithms

  • Active visual learning

  • Studies of crowdsourced image/video analysis in the wild


Submission

We are requesting submissions in the following two categories:


  1. Original work not published elsewhere. In addition to research papers describing theoretical or empirical results, this year we are also encouraging demo submissions describing new systems, architectures, interaction techniques, etc. Papers should be submitted as 4-page extended abstracts (including references) using the provided author kit. Demos should also include a URL to a video (max 6 min). Multiple submissions are not allowed. Reviewing will be double-blind.

  2. Previously published work from a recent conference or journal. Authors should submit an unrevised copy of their published work. Reviewing will be single-blind.


Submissions: Email submissions to group...@outlook.com

Author Kit: http://www.aaai.org/Publications/Templates/AuthorKit17.zip


At the Workshop

All accepted authors will prepare a poster and/or demo to present during an interactive session at the workshop. The organizers will also invite a subset of authors to give short talks. We will also present two awards at the workshop, for Best Paper and Best Demo, with cash prizes courtesy of our sponsor, Evolv Technology.


Accepted submissions will be posted and publicly downloadable on the GroupSight website.


People


Organizers:

  • Danna Gurari (University of Texas at Austin)

  • Kurt Luther (Virginia Tech)

  • Genevieve Patterson (Brown University / MSR New England)

  • Steve Branson (California Institute of Technology)

Steering Committee:

  • James Hays (Georgia Institute of Technology)

  • Serge Belongie (Cornell Tech)

  • Pietro Perona (California Institute of Technology)

Keynote Speakers:

  • Walter Lasecki (University of Michigan)

  • Others TBD
