Dear colleagues,
If any of you have experience crowdsourcing toxic or sensitive content, please consider speaking with my research team; we are conducting a study exploring how to improve AI crowdsourcing design. See the information below.
We are researchers at Carnegie Mellon University gathering insights to enhance the safety and transparency of AI-related tasks on platforms such as Amazon Mechanical Turk and Prolific. Practitioners rely on crowd workers for crucial work, like labeling data and reviewing AI outputs, that supports AI standards, but some tasks may involve risks (e.g., graphic content). Our study aims to develop best practices to help requesters better design responsible AI (RAI) tasks.

Who can participate? Practitioners who have crowdsourced any activity related to anticipating, generating, reviewing, reasoning about, or making decisions about harmful content, responsible AI, or AI safety.
What does participation involve? Participants will take part in a 60-90 minute interview and receive a $30 gift card as compensation.
How do I sign up? Please fill out our short survey, and we’ll reach out within a week of your application.
For questions, please contact aqz...@andrew.cmu.edu.
Sincerely,
--
Alice Qian Zhang
Ph.D. Student
Human-Computer Interaction Institute