Human Computation and Crowdsourcing have become ubiquitous in the world of algorithm augmentation and data management. However, humans have various cognitive biases that influence the way they make decisions, remember information, and interact with machines. It is thus important to identify these biases and analyse their effect on complex hybrid systems. At the same time, the potential to interact with a large pool of human contributors offers an opportunity to detect and handle biases in existing data and systems.
The goal of this symposium is to analyse both existing human biases in hybrid systems and methods to manage bias via crowdsourcing and human computation. We will discuss different types of bias, measures and methods to track bias, and methodologies to prevent and mitigate it. An interdisciplinary approach is often required to capture the broad effects that these processes have on systems and people, and at the same time to improve model interpretability and system fairness.
We will provide a framework for discussion among scholars, practitioners and other interested parties, including industry, crowd workers, requesters and crowdsourcing platform managers. We expect contributions combining ideas from different disciplines, including computer science, psychology, economics and social sciences.
We welcome ~250-word abstracts describing methodologies, studies or systems relevant to the topics of the symposium. Submissions are not anonymous. Unpublished work, vision statements, and work in progress are welcome.
We are looking for contributions with interesting insights that could lead to a productive discussion during the symposium. The Programme Committee's main evaluation criteria are scientific relevance, innovation and research potential.
Please submit your abstract via the online submission system: https://easychair.org/conferences/?conf=bhcc2020
The deadline for abstract submissions is September 6th, 2020.
Topics of interest include, but are not limited to:
Biases in Human Computation and Crowdsourcing
- Human sampling bias
- Effect of cultural, gender and ethnic biases
- Effect of human-in-the-loop training and past experiences
- Effect of human expertise vs interest
- Bias in experts vs bias in crowdsourcing
- Bias in outsourcing vs bias in crowdsourcing
- Bias in task selection
- Task assignment/recommendation for reducing bias
- Effect of human engagement on bias
- Responsibility and ethics in human computation and bias management
- Preventing bias in crowdsourcing and human computation
- Creating awareness of cognitive biases among human agents
- Measuring and addressing ambiguities and biases in human annotation
- Human factors in AI
Using Human Computation and Crowdsourcing for Bias Understanding and Management
- Biases in human-in-the-loop systems
- Identifying new types of cognitive bias in data or content
- Measuring bias in data or content
- Removing bias in data or content
- Dealing with algorithmic bias
- Fake news detection
- Diversification of sources by means of crowdsourcing
- Provenance and traceability
- Long-term crowd engagement
- Generating benchmarks for bias management
Organisers
Styliani Kleanthous, Open University of Cyprus (Cyprus)
Jahna Otterbacher, Open University of Cyprus (Cyprus)
Eddy Maddalena, King’s College London (UK)
Alessandro Checco, University of Sheffield (UK)