Attention/motivation checks


Dasa Zeithamova

Jul 9, 2020, 2:15:18 PM
to Online Experiments
I have an individual-differences study, looking at the pattern of performance correlations among tasks. This can reveal which tasks involve similar processes and which involve distinct ones. One challenge is that if a participant does not even try, they'll bomb everything, which can create a spurious positive correlation among all tasks.
What would be good quick quality checks or catch mini-tasks to insert in the protocol that everyone should get "right," so they can serve as an independent exclusion criterion?

Joshua Hartshorne

Jul 9, 2020, 2:27:16 PM
to Online Experiments
So one option is Instructional Manipulation Checks (IMCs), which mostly get at whether the subject is paying attention, not whether they are trying hard. That should help some. Keep in mind, though, that the canonical IMCs are used in a lot of experiments on AMT, so subjects have learned to look for them. You'll want to write your own.
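
At analysis time, screening on checks like that can be pretty mechanical. A rough Python/pandas sketch, with placeholder file and column names standing in for however you log your data:

```python
# Rough sketch (not a drop-in script): exclude anyone who misses a
# custom IMC trial. File and column names below are placeholders.
import pandas as pd

trials = pd.read_csv("trials.csv")  # one row per trial, hypothetically

imc = trials[trials["trial_type"] == "imc"]
pass_rate = imc.groupby("subject")["correct"].mean()

keep = pass_rate[pass_rate == 1.0].index  # require all IMCs correct
clean = trials[trials["subject"].isin(keep)]
print(f"kept {clean['subject'].nunique()} of {trials['subject'].nunique()} subjects")
```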

Another option that we've used with some success is to make sure each task has a few *very* easy items and then pay subjects a bonus for getting them right. That incentivizes them to try harder. And you exclude subjects who miss too many of those. You may end up eliminating some enthusiastic but low-performing subjects, but if the items are easy enough, that shouldn't be too bad.
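
Scoring that scheme is similarly simple. Continuing the same hypothetical data layout as above:

```python
# Cutoffs below are made up and should be tuned to how easy your
# easy items really are.
import pandas as pd

trials = pd.read_csv("trials.csv")
easy = trials[trials["difficulty"] == "easy"]
acc = easy.groupby("subject")["correct"].mean()

BONUS_CUTOFF = 0.90    # pay the bonus at or above this accuracy
EXCLUDE_CUTOFF = 0.75  # drop subjects below this accuracy

bonus = acc[acc >= BONUS_CUTOFF].index.tolist()
excluded = acc[acc < EXCLUDE_CUTOFF].index.tolist()
print(f"{len(bonus)} bonuses, {len(excluded)} exclusions")
```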

Another option is to not use paid subjects and instead use something more like the viral-quiz-with-personalized-feedback method that I described on Tuesday. testmybrain.org does a good job of this, so you can do some of their experiments for ideas. If you are working with volunteers who are only taking the test to see how well they can do, you probably won't have as many shirkers. That said, it can be difficult to run an experiment that is longer than 15-20 minutes, so you'd need to keep your tasks short enough.

You didn't ask, but I highly recommend piloting your tasks individually to make sure they have large ranges, good internal consistency, etc. If stimuli are clearly distinguishable from one another, I'd use Item Response Theory to help with stimulus-selection and build a highly reliable test. 
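
For the internal-consistency piece, Cronbach's alpha over a subjects-by-items matrix is quick to compute yourself during piloting. A small self-contained sketch (the demo data are fake, just to make it runnable):

```python
# Cronbach's alpha for one task from pilot data: rows are subjects,
# columns are items.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
demo = (rng.random((50, 20)) > 0.4).astype(float)  # fake accuracy data
print(f"alpha = {cronbach_alpha(demo):.2f}")
```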

Joshua K Hartshorne
Assistant Professor
Boston College

Becky Gilbert

Jul 9, 2020, 2:34:03 PM
to Online Experiments
Hi Dasa,

I agree with Josh, and just wanted to add another suggestion: you could include a task that requires a similar amount of effort/attention, but where the underlying skills/mechanisms shouldn't overlap strongly with the individual differences (IDs) that you're trying to measure. For example, I've done this in a study of individual differences in auditory timing/memory: we included a visuospatial memory task that was structured similarly to the other tasks but didn't involve the key individual-differences factors we were interested in (auditory perception and memory for serial order).
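
One way to use such a control task at analysis time (just a suggestion, not the only option) is to residualize each key task on it before correlating, so shared effort variance doesn't masquerade as shared mechanism. A sketch with simulated data and illustrative variable names:

```python
# Simulated illustration: two tasks that share only "effort" variance
# look correlated raw, but not after residualizing on the control task.
import numpy as np

def residualize(y: np.ndarray, x: np.ndarray) -> np.ndarray:
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

rng = np.random.default_rng(1)
effort = rng.normal(size=200)                 # control-task scores
task_a = 0.6 * effort + rng.normal(size=200)  # fake key-task scores
task_b = 0.6 * effort + rng.normal(size=200)

raw_r = np.corrcoef(task_a, task_b)[0, 1]
part_r = np.corrcoef(residualize(task_a, effort),
                     residualize(task_b, effort))[0, 1]
print(f"raw r = {raw_r:.2f}, partialed r = {part_r:.2f}")
```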

Becky

Connie Bainbridge

Jul 9, 2020, 2:41:57 PM
to Online Experiments
Just to chime in with a few more ideas: for more survey-based attention checks, it's pretty common to use questions that have a clear answer, such as "What color is the sky? Please select the color 'red'" or basic math problems (2+2=?) if they do not interfere with the relevant tasks.

This paper may also provide some helpful insights: https://www.sciencedirect.com/science/article/abs/pii/S0272696317300402

Best,
-Connie