Dear Or,
I agree entirely with Jeremy. Many people use their own custom
programming to run experiments, so in terms of publication, you're
actually ahead by being able to cite Jon's papers to show that you're
using an established system.
As Jeremy says, having a system which ALLOWS you to produce accurate
and precise experiments doesn't mean that one actually WILL. It is
entirely possible to produce non-optimal code which will muck up
timings, or to produce a dissociation between what is displayed and
what is actually recorded as data. Fundamentally, you need to test, re-
test, and test again.
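Some of that testing can be automated. As a minimal plain-Python
sketch (the function and the simulated log are hypothetical, not part
of any particular toolkit), you can record the timestamp of every
screen flip during a run and then scan the log for intervals that
suggest a dropped frame:

```python
def dropped_frames(flip_times, refresh_hz=60.0, tolerance=0.5):
    """Return indices of inter-flip intervals longer than
    (1 + tolerance) nominal refresh periods, i.e. likely dropped frames."""
    period = 1.0 / refresh_hz
    intervals = [b - a for a, b in zip(flip_times, flip_times[1:])]
    return [i for i, dt in enumerate(intervals)
            if dt > period * (1 + tolerance)]

# Simulated log: steady 60 Hz flips, with one frame silently skipped,
# so everything after it arrives one refresh late.
times = [i / 60.0 for i in range(10)]
times[5:] = [t + 1 / 60.0 for t in times[5:]]

print(dropped_frames(times))  # the long interval shows up at index 4
```

Running a check like this over every block of every pilot run is a
cheap way to catch the slippage problems described below before they
contaminate real data.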
I've just been implementing an fMRI block design in PsychoPy. Each
block should last 27 s. My initial runs showed variability of up to
several hundred milliseconds per block (not crucial from an fMRI
design point of view, but important for measuring reaction times to
the individual events). Now, each block duration is within one screen
refresh of 27 s. The initial problems were due to several factors:
-- running the development code from my laptop (with a 60 Hz internal
screen clock) rather than the dedicated stimulus PC and 100 Hz CRT to
be used on the production version in the lab (although the fMRI runs
will also be at 60 Hz so that was a useful heads-up). 100 Hz refresh
rates are great because you can set all of your experimental timings
in multiples of 10 ms. Code that runs like clockwork on a 100 Hz
screen can show a bit of slippage when run at other refresh rates,
depending on the way one does timing. e.g. it is possible to display a
stimulus for exactly 30 ms at 100 Hz, but not at 60 Hz, where
durations must be multiples of 16.67 ms (16.7, 33.3, 50 ms, and so
on).
-- issues with synchronising with our eye tracker (discovering that
successive network UDP messages aren't sent in real time on Windows
but are themselves dispatched on a 100 Hz schedule)
-- and lastly, choosing one of several possible timing schemes that
PsychoPy makes possible. You can count frames, or you can continually
check a timer. You can time individual events, or run an absolute
timer over the course of the experiment to prevent drift.
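Two of those factors (frame quantisation of durations, and per-event
versus absolute-clock timing) can be illustrated numerically. This is
a plain-Python sketch; the function names and numbers are mine, not
PsychoPy's API, and it assumes each event ends on the first refresh at
or after its target time:

```python
import math

def achievable(duration_ms, refresh_hz):
    """Nearest duration actually displayable: a whole number of frames."""
    frame_ms = 1000.0 / refresh_hz
    return round(duration_ms / frame_ms) * frame_ms

def block_end_error(n_events, event_ms, refresh_hz, absolute_clock):
    """Timing error (ms) at the end of n_events of event_ms each.
    With absolute_clock, each target is measured from the start of the
    block; otherwise from the end of the previous event."""
    frame_ms = 1000.0 / refresh_hz
    t = 0.0
    for i in range(1, n_events + 1):
        target = i * event_ms if absolute_clock else t + event_ms
        # end on the next screen flip at or after the target
        # (the small epsilon guards against floating-point round-up):
        t = math.ceil(target / frame_ms - 1e-9) * frame_ms
    return t - n_events * event_ms

print(achievable(30, 100))  # exactly 30 ms: 3 frames at 100 Hz
print(achievable(30, 60))   # best you can do at 60 Hz is ~33.3 ms

# 20 events meant to last 170 ms each, on a 60 Hz screen:
print(block_end_error(20, 170, 60, absolute_clock=False))  # error accumulates
print(block_end_error(20, 170, 60, absolute_clock=True))   # within one refresh
```

Real toolkits add further complications (missed frames, vsync), but
this arithmetic is why running one absolute timer over the course of
the experiment prevents drift, while timing each event relative to the
previous one lets the rounding errors pile up.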
What reviewers are looking for is evidence that you have given
consideration to performance and quality issues. Where those can't be
controlled, they should be quantified and stated as limits on the
accuracy of results. For example, you may be able to control the
timing of visual stimulus presentation precisely, but if your primary
dependent measure is a keypress on a USB keyboard, the reliability of
that keyboard will be the actual limiting factor. Conversely, I've
reviewed papers where people used LCD screens and expected to get
accurate response latencies, not realising that, compared to a fast
CRT, the display delays (the slower refresh rate, but also significant
response lag; don't believe the marketing specifications) were a
significant proportion of the expected fast reaction times, and were a
likely contributor to their unusually long saccade latencies.
In essence, reviewers don't care what software you use, only whether
you used it to achieve precise control and whether you are aware of
and accounted for hardware limitations.
In terms of your original question, your task can almost certainly be
implemented in PsychoPy. Open, run, and read a bunch of Jon's examples
from the Demo menu, and you should be most of the way to understanding
how.
Regards,
Michael
--
Michael R. MacAskill, PhD
Chief Scientist,
Van der Veer Institute for Parkinson's & Brain Research
66 Stewart St, Christchurch 8011, NEW ZEALAND
Ph: +64 3 3786 072
Fax: +64 3 3786 080
michael....@vanderveer.org.nz
http://www.vanderveer.org.nz