Thanks for checking that thread first, and referring to it here. But
I'm not sure how much clearer I can make it. TargetOnsetTime is the
time when a stimulus is *scheduled* to take place; OnsetTime is the time
when a stimulus actually starts its presentation. To take just one
example, suppose that a visual stimulus has a TargetOnsetTime of 10500,
but the next video frame will not start until 10508. If you want to
synchronize your visual stimulus presentation with the start of a video
frame (and you should!), then the actual OnsetTime will be pushed back
to at least 10508; if the visual stimulus requires loading a large
graphics file that takes up some time, then the OnsetTime could get
pushed back even further, say, to 10558. So again, TargetOnsetTime
represents the *plan* or *intent*, OnsetTime represents the *actuality*.
Just to be clear, TargetOnsetTime *always* precedes (or coincides with)
OnsetTime (but don't take my word for that, look for yourself!).
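The arithmetic in that example can be sketched in a few lines of code. To be clear, this is just an illustration of the relationship, not E-Prime's actual implementation -- the function name and parameters are mine:

```python
def actual_onset(target_onset_ms, next_frame_ms, load_delay_ms=0):
    """Illustrative sketch (not E-Prime internals): compute an actual
    OnsetTime from a scheduled TargetOnsetTime.

    target_onset_ms -- the scheduled (Target) onset time
    next_frame_ms   -- when the next video frame starts
    load_delay_ms   -- any extra delay, e.g., loading a large graphics file
    """
    # Wait for the next frame boundary if the target falls before it;
    # OnsetTime never precedes TargetOnsetTime.
    onset = max(target_onset_ms, next_frame_ms)
    return onset + load_delay_ms

print(actual_onset(10500, 10508))      # frame sync pushes onset to 10508
print(actual_onset(10500, 10508, 50))  # a slow graphics load pushes it to 10558
```

Run with the numbers from the example above, it reproduces the 10508 and 10558 figures, and you can see directly why OnsetTime can never come out earlier than TargetOnsetTime.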
As far as which is more "accurate", that depends on what accuracy you
want. If you want the most accurate measure of when stimuli actually
start, then you want OnsetTime. If you want the most accurate measure
of when stimuli are scheduled to start, then you want TargetOnsetTime.
E.g., when I am controlling a sequence for fMRI and I need to maintain a
tight *schedule* that matches the scanner, then I pay more attention to
TargetOnsetTime, because that is what my program actually controls (and
I use Cumulative timing mode for that); in other cases I pay more
attention to OnsetTime. It all depends. Best just to understand what
each of these time audit measures does, and then pick what best applies
to the particular situation.
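To sketch why Cumulative timing mode keeps a tight schedule: each TargetOnsetTime is computed from the *scheduled* end of the previous stimulus, not from when it actually appeared, so delays do not accumulate across the sequence. Again, this is my own illustration, not E-Prime code:

```python
def cumulative_targets(start_ms, durations_ms):
    """Illustrative sketch of Cumulative timing mode: each target onset
    is derived from the scheduled (not actual) previous onset, so a
    delay on one trial does not shift every later trial."""
    targets = []
    t = start_ms
    for d in durations_ms:
        targets.append(t)
        t += d  # advance by the planned duration, ignoring actual slippage
    return targets

# Three 2000 ms trials scheduled from a scanner pulse at time 0:
print(cumulative_targets(0, [2000, 2000, 2000]))  # [0, 2000, 4000]
```

Even if the first stimulus actually appears a few ms late (its OnsetTime slips), the second trial's TargetOnsetTime stays at 2000, which is exactly the behavior you want when matching the scanner.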
Does that help?
E-Prime training online:
Twitter: @EPrimeMaster (https://twitter.com/EPrimeMaster)