So one option is Instructional Manipulation Checks (IMCs), which mostly get at whether the subject is paying attention, not whether they are trying hard. That should help some. Keep in mind, though, that the canonical IMCs have been used in a lot of experiments on AMT, so subjects have learned to look for them; you'll want to write your own.
Another option that we've used with some success is to make sure each task has a few *very* easy items, pay subjects a bonus for getting them right, and exclude subjects who miss too many of them. The bonus incentivizes subjects to try harder. You may end up eliminating some enthusiastic but low-performing subjects, but if the items are easy enough, that shouldn't be too bad.
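To make the bookkeeping concrete, here is a minimal sketch of that screening step in Python, assuming your results arrive as a long-format table with one row per (subject, item). The column names, the miss cutoff, and the bonus amount are all placeholders you'd set yourself, not anything we've standardized on:

```python
# Sketch: score catch trials per subject, award bonuses, and flag exclusions.
# Assumes a long-format table with columns: subject_id, is_catch, correct.
# The cutoff and bonus amount below are illustrative placeholders.
import pandas as pd

MAX_CATCH_MISSES = 1      # exclude subjects who miss more than this many
BONUS_PER_CATCH = 0.05    # bonus (in dollars) per correct catch item

def score_catch_trials(results: pd.DataFrame) -> pd.DataFrame:
    """Summarize catch-trial accuracy, bonus, and exclusion per subject."""
    catch = results[results["is_catch"]]
    summary = catch.groupby("subject_id")["correct"].agg(
        n_catch="count", n_correct="sum"
    )
    summary["n_missed"] = summary["n_catch"] - summary["n_correct"]
    summary["bonus"] = summary["n_correct"] * BONUS_PER_CATCH
    summary["excluded"] = summary["n_missed"] > MAX_CATCH_MISSES
    return summary

# Toy example: s1 gets both catch items right; s2 misses both and is excluded.
df = pd.DataFrame({
    "subject_id": ["s1", "s1", "s1", "s2", "s2", "s2"],
    "is_catch":   [True, True, False, True, True, False],
    "correct":    [True, True, True, False, False, True],
})
print(score_catch_trials(df))
```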
Another option is to skip paid subjects altogether and instead use the viral-quiz-with-personalized-feedback method that I described on Tuesday.
testmybrain.org does a good job of this, so you can try some of their experiments for ideas. If you are working with volunteers who are only taking the test to see how well they can do, you probably won't have as many shirkers. That said, it can be difficult to get volunteers through an experiment longer than 15-20 minutes, so you'd need to keep your tasks short.
You didn't ask, but I highly recommend piloting your tasks individually to make sure they have large ranges, good internal consistency, etc. If the stimuli are clearly distinguishable from one another, I'd use Item Response Theory to help with stimulus selection and build a highly reliable test.
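For the piloting step, a quick first pass is Cronbach's alpha plus corrected item-total correlations. Here's a minimal sketch in Python, assuming pilot responses arrive as a subjects-by-items 0/1 accuracy matrix; the toy data are made up, and a real IRT analysis (e.g., a 2PL fit) would use a dedicated package rather than this:

```python
# Sketch: classical pilot statistics on a (subjects x items) accuracy matrix.
# A proper IRT fit would estimate difficulty and discrimination directly;
# this is just the quick first pass for internal consistency.
import numpy as np

def cronbach_alpha(data: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    n_items = data.shape[1]
    item_vars = data.var(axis=0, ddof=1)
    total_var = data.sum(axis=1).var(ddof=1)
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_total(data: np.ndarray) -> np.ndarray:
    """Correlation of each item with the total score of the *other* items."""
    totals = data.sum(axis=1)
    out = np.empty(data.shape[1])
    for j in range(data.shape[1]):
        rest = totals - data[:, j]  # total score excluding item j
        out[j] = np.corrcoef(data[:, j], rest)[0, 1]
    return out

# Toy pilot data: 6 subjects x 4 items, 1 = correct.
pilot = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
])
print("alpha:", round(cronbach_alpha(pilot), 3))
print("item-total r:", corrected_item_total(pilot).round(3))
```

Items with low corrected item-total correlations are the first candidates to drop before the main experiment; under IRT you'd look at the estimated discrimination parameters instead.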
Joshua K Hartshorne
Assistant Professor
Boston College