Design with long training periods + temporal sensitivity

Charlotte Guo

Jul 9, 2020, 1:19:19 PM
to Online Experiments
Dear all, 

I'm looking to design an online experiment in which participants learn to associate two contingent events (e.g. a flash + tone) during a training period. The training is foreseeably very long, as the relations learnt will be crucial to subsequent testing. Do you have any experience with roughly how long the training needs to be, and with how to implement a sanity check (I've heard of webcam eye-tracking packages in Gorilla) to filter out messy datasets where participants are not attending to the stimuli?

My experiment also needs to measure the perceived temporal distance between events (a 150–500 ms window); in a nutshell, participants will make button presses during stimulus viewing. Can this sensitivity be achieved accurately online?

Many thanks,
Charlotte 

Jenni Rodd

Jul 9, 2020, 1:23:06 PM
to Online Experiments
Is there a task during training? Will task performance give an adequate measure of attention, or do you need to add something else? I doubt that Gorilla eye-tracking will be accurate enough to give you what you want, and you may end up throwing away perfectly good data because the eye-tracking isn't working 100% of the time.

Joshua Hartshorne

Jul 9, 2020, 1:26:43 PM
to Online Experiments
Hi Charlotte,

I need a little more detail. Can you please answer the following questions:

1. Is learning the contingency the point of your experiment? Or is it a prerequisite? That is, would it make sense to have people participate in training until they demonstrate they have learned the contingency according to some criterion?

2. When you say that you need to measure the perceived temporal distance between events, what do you mean? Can you please describe in more detail?

Joshua K Hartshorne
Assistant Professor
Boston College

Charlotte Guo

Jul 9, 2020, 1:34:56 PM
to Online Experiments
Hi Jenni,

Thanks for the response - no, there shouldn't be a task during training (as far as I'm aware) other than passive viewing. So in that sense there's no task performance to give a measure of attention unless I figure out a better design... something for me to think about!

Also good to know that Gorilla eyetracking shouldn't be trusted!

Charlotte

Becky Gilbert

Jul 9, 2020, 1:39:01 PM
to Online Experiments
Just wanted to add that I agree with Jenni's reservations about using webcam eye-tracking as a sanity check, though I haven't used Gorilla's eye-tracking system myself so can't say for sure how well it would work for this. 

I wonder if you could add in some simple RT catch trials, e.g. show a red X every once in a while, and the participant needs to respond ASAP. This should help you figure out whether people are attending to the screen, but the downside is that it changes the nature of the training task. So if you wanted to go this route, ideally you would also pilot in the lab (whenever that becomes possible...) to make sure you get the same training effects when you add in an attention check.
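As a concrete sketch of what the post-hoc filtering could look like (in Python, with entirely hypothetical hit-rate and RT thresholds that you'd want to calibrate on pilot data):

```python
# Post-hoc attention filter based on catch-trial performance.
# The thresholds below are illustrative examples, not recommendations.

def passes_attention_check(catch_rts_ms, n_catch_trials,
                           min_hit_rate=0.8, max_rt_ms=1500):
    """catch_rts_ms: RTs (in ms) for the catch trials the participant
    responded to; a missed catch trial simply has no RT recorded."""
    if n_catch_trials == 0:
        return True  # no catch trials shown, nothing to check
    # Count only responses fast enough to plausibly reflect attention
    hits = [rt for rt in catch_rts_ms if rt <= max_rt_ms]
    return len(hits) / n_catch_trials >= min_hit_rate
```

So a participant who responded within 1.5 s to 4 of 5 catch trials would pass, while one who caught only 1 of 4 would be flagged for exclusion.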

Cheers,
Becky

Jenni Rodd

Jul 9, 2020, 1:44:53 PM
to Online Experiments
I agree with Becky. Generally I try to avoid passive viewing tasks online wherever possible, as it is so hard for participants to resist the temptation to check their phones etc. Is there any way of building an unrelated task into the stimuli you want them to view without messing up your paradigm? Anything that helps keep participants actively engaged with your stimuli will help.

Charlotte Guo

Jul 9, 2020, 1:47:21 PM
to Online Experiments
Hi Joshua,

Thank you for the reply! Sorry the question wasn't clear enough.

To (1): learning the contingency isn't the point of the experiment; it'd be a prerequisite to enter the 'real' testing. I think there is a lot of individual variance in how long the training needs to be, since I need participants not only to learn the surface relationship (Y follows X) but also to learn it robustly (to put it vaguely).

To (2): The idea is to introduce two levels in testing: absence and presence of agency. (The literature in this area is around temporal binding, volition, Libet's clock experiment etc., and I won't bother you with all the nuances.) I want to test the judgement of the temporal distance between X and Y, either when participants self-initiate X or when they do not (say, when they only view Y following X). Hope that makes more sense?

Charlotte

Joshua Hartshorne

Jul 9, 2020, 1:48:30 PM
to Online Experiments
I would add that avoiding passive viewing is good advice in lab-based studies as well. Unless you really need to study passive viewing specifically. 

Charlotte Guo

Jul 9, 2020, 1:50:27 PM
to Online Experiments
Thank you Becky and Joshua! That's really good advice and I'll seek to adjust the procedures accordingly.

Charlotte

Joshua Hartshorne

Jul 9, 2020, 1:54:21 PM
to Online Experiments
Hi Charlotte,

In that case, see whether you can turn your training session from passive viewing into a task that allows you to measure learning (a serial reaction time task, perhaps? You'd have to think about what makes sense in the context of your study).

The other thing is that I would be cautious about timing. There are some slides in my talk on Tuesday, starting around 1:12, that go into recent work on timing. You'll have some error in the keyboard RTs as well as in the relative timing of the audio and visual stimuli. Those errors aren't huge, and they are pretty consistent within-subject, but you may have some interesting issues across subjects. Unfortunately, it's not a solved problem. One option would be to record the browser the subject is using and use that as a covariate in your analyses, which might help some.
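If you do record the browser (e.g. from the client's user-agent string), one rough way to collapse it into a categorical covariate is sketched below. This is only a sketch: real user-agent parsing has many edge cases, and the token list here covers just a few common browsers.

```python
import re

# Coarse user-agent classifier for use as a categorical covariate.
# Order matters: Chrome's UA string also contains "Safari", and
# Chromium Edge's also contains "Chrome", so the more specific
# tokens are tested first.

def browser_family(user_agent: str) -> str:
    patterns = [
        ("Edge", r"Edg/"),
        ("Chrome", r"Chrome/"),
        ("Firefox", r"Firefox/"),
        ("Safari", r"Safari/"),
    ]
    for name, pattern in patterns:
        if re.search(pattern, user_agent):
            return name
    return "Other"
```

The resulting label could then enter the analysis as a fixed effect or grouping factor, so that systematic browser-level timing offsets are at least partially absorbed.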

But it's going to require some thinking. Please feel free to ask follow-up questions here in the future as you work through this.

Charlotte Guo

Jul 9, 2020, 2:13:41 PM
to Online Experiments
Hi Joshua,

Thanks again for the help. Both points would require a lot of thinking on my part. Intuitively, I think an SRT could work, but it also potentially confounds the key-press measures I want to use at test, unless I take out the quantitative timing-judgement measures (as per your second warning) and implement a short qualitative question (did X precede Y?). I'm confident that with more thinking and piloting the design will turn out fine!

I'll definitely read through the slides and be inspired.

Warmest Regards,
Charlotte

Becky Gilbert

Jul 9, 2020, 2:18:11 PM
to Online Experiments
Hi Charlotte,

Just want to say that I agree with Josh about timing, and add that this will really depend on what sort of (within-subject) RT difference you're expecting to see between conditions. 

It sounds like you're testing a range of different duration estimates, and IIRC, the accuracy (and precision?) of an individual's duration estimates decreases with the length of the duration. So I'm wondering if your effect size will change depending on the duration being perceived/estimated. If you have any data on this (e.g. from your own lab studies or the literature), then when moving this task online I might start with the durations that are most likely to produce the largest effect sizes, and with a training period long enough that you're pretty confident it will work (ideally with some attention checks, since making the training longer will make lapses in attention more likely). That should help you decide whether you can shorten the training period, and whether it's worth trying to use your online task to measure (possibly smaller) effects with other durations, conditions etc.
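The scalar property Becky describes can be illustrated with a toy simulation: if estimate noise is a constant fraction of the true duration (the coefficient of variation of 0.15 below is an arbitrary assumed value, not an empirical one), then estimates of a 500 ms interval are noisier in absolute terms than estimates of a 150 ms one.

```python
import random

def simulate_estimates(true_ms, n=10_000, cv=0.15, seed=0):
    """Draw n duration estimates whose SD is cv * true_ms
    (a toy model of scalar timing, not a fitted model)."""
    rng = random.Random(seed)
    return [rng.gauss(true_ms, cv * true_ms) for _ in range(n)]

def sd(xs):
    """Population standard deviation of a list of numbers."""
    mean = sum(xs) / len(xs)
    return (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
```

Under this assumption the absolute spread grows in proportion to the interval, which is one way to see why the shorter durations in the 150–500 ms range may give the cleaner effects online.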

Becky