impact of sending data down TCP/IP connection


Jon

Mar 22, 2011, 3:30:45 PM
to Open Stimulus Developers
Hi there,

An auditory scientist was asking me the other day whether he would be
able to use Python to generate visual stimuli while periodically sending
data to another system via TCP/IP. Basically, he could get really good
audio-visual syncing by sending some sounds to a Tucker-Davis system and
triggering them with a parallel port or similar, but the sound data
could amount to quite a lot of samples.

I've never done anything like this, so thought it would be a good one
to kick the list off!

I feel certain it would be possible to do this in a separate thread
(or possibly a separate parallel process if using Python), but maybe
that's not even necessary. Is there likely to be much overhead in
sending data to a TCP/IP port?

Jon

Sebastiaan Mathôt

Mar 22, 2011, 3:46:13 PM
to open-sti...@googlegroups.com
Hi Jon,

Let's get the list started!

There is very little overhead and it's also quite easy to implement in
Python (or most other languages). You'll probably want to launch an
extra thread, not because of performance issues, but because otherwise
communication will block the main thread while waiting for incoming
information. You can find some examples in the Python docs:

http://docs.python.org/library/socket.html
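
For example, a minimal sketch of the idea (untested; the host, port and
payload below are made up for illustration):

import socket
import threading

def send_data(host, port, payload):
    # Runs in its own thread, so a slow or blocked connection never
    # stalls the main (stimulus) loop.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, port))
    s.sendall(payload)
    s.close()

# Fire and forget: the main thread keeps drawing while this runs.
sample_data = 'x' * 10000  # stand-in for the actual sound samples
threading.Thread(target=send_data,
                 args=('192.168.0.2', 5000, sample_data)).start()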

I actually made an object tracker that uses a TCP/IP connection to
communicate with E-Prime and it worked quite well
(www.cogsci.nl/mantra). The sound part might be tricky though, if you
really want to stream sound through the connection (?), because that
will require buffering etc. (but that's not really specific to TCP/IP).

Hope this helps!

Regards,
Sebastiaan


--
Vrije Universiteit, Amsterdam
Dept. of Cognitive Psychology
http://www.cogsci.nl/smathot

Per B. Sederberg

Mar 22, 2011, 4:16:11 PM
to open-sti...@googlegroups.com, Sebastiaan Mathôt
Wow Sebastiaan, Mantra looks great! Now I just have to figure out a
situation in which to use it in my research :)

On a related note, I was checking out OpenSesame the other day and I
couldn't figure out how to do something in parallel (sequences are
obvious). For example, here's a real-world scenario:

1) Put up an image and text below it saying Old/New.
2) Wait until a keyboard response or up to 2000ms.
3) Upon keyboard response, take away the Old/New text, leaving up the image.
4) After 2000ms take down the image and, if it wasn't already removed,
the Old/New text.

Can I set up a trial to do that in OpenSesame, or do I have to remove
everything and reshow the image upon a keypress? (I think that's how
I'd do it in PsychoPy.) In my all-code land of PyEPL I just remove
the text when I get the keypress and update the screen, leaving the
image up from before.

Best,
Per

Sebastiaan Mathôt

Mar 22, 2011, 6:10:32 PM
to open-sti...@googlegroups.com
Hi Per,

I just realized that I replied directly to you, rather than to the
mailing list, so hereby I bring the conversation back to the public...

On 03/22/2011 10:33 PM, Per B. Sederberg wrote:
> Hi Sebastiaan:
>
> That's actually a perfectly acceptable solution. PyEPL isn't
> parallel, it's just all code (no GUI), which is easier for me to think
> in :)
>
> I've been looking for an easier way for people in my classes and lab
> to write experiments without having to code. I wanted to write a GUI
> on top of PyEPL based on serial and parallel hierarchical state
> machines (I'll leave that conversation for later), but I just don't
> have time, even to manage some other programmers to do it. PsychoPy
> is pretty good, but for some reason I'm currently drawn more to
> OpenSesame, perhaps because it's new to me and perhaps because it's
> more like the GUI I had in mind and seems to have a lot of the same
> ideas as PyEPL (e.g., pools).
>
> To this end, I'd love some pointers from you in figuring out how to do
> some critical things:
>
> 1) I often have participants type in responses, but reaction time is
> important. Does the text entry widget allow for saving the timing of
> each key press?
No, the text entry item does not record the time of each keypress. It
might actually be a good idea to modify the plug-in so that it uses the
first keypress to determine the response time. I'll make a note of this
and implement it for the upcoming 0.23 (which is actually due next
weekend/week). If you need some specific functionality you could modify
the plug-in, rename it and thus create a new plug-in that does precisely
what you want. The plug-ins are written in Python and there is a small
tutorial here:

http://www.cogsci.nl/blog/tutorials/128-creating-an-opensesame-plug-in
> 2) Can you explain the timing model in OpenSesame? For example, how
> do I know that the time a key press is recorded is
> accurate/inaccurate?
OpenSesame simply uses PyGame, which in turn uses SDL to handle input
and graphics. I think timing-wise OpenSesame will not be better or worse
than other packages. That being said, the guys at our tech department
have informed me that most keyboards have a jitter in the order of 10s
of milliseconds, so in this respect hardware will probably be the
limiting factor. There's the SR Box plug-in for SR Boxes and compatible
devices, which you could use if timing is critical. In that
case, communication goes through the serial port, which should be
virtually perfect. I don't have any hard data yet, btw, but I am
planning to do extensive benchmarking.
> 3) Is there any real reason OpenSesame would not work on OSX?
On the contrary, there's every reason to presume that it will work
perfectly. Since I don't have a Mac myself, I haven't been able to
provide packages, but we're working on it (that is, I've asked friends
with Macs to take a look at it).
> 4) How hard would it be to interface with another audio library for
> low-latency recording and playback?
4) That would depend on the audio library and whether there are good
Python bindings available. So I couldn't really say right now.
> 5) I'd like to send TTL pulses out a parallel port to synchronize with
> my EEG system. Do you have suggestions for how to include that in an
> experiment?
5) pyserial is bundled with OpenSesame and provides a fairly
straightforward way to handle serial port communication. This is also
used by the SR Box plug-in. You can find more documentation here:

http://pyserial.sourceforge.net/
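
For example (a minimal sketch; the port name, baud rate and trigger
byte are just placeholders):

import serial

ser = serial.Serial('/dev/ttyS0', 115200)  # port name is system-specific
ser.write('T')  # send a one-byte trigger code
ser.close()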

For parallel port communication there appears to be PyParallel, but I
have no experience with that. It is also not bundled with OpenSesame,
which means that you need to run OpenSesame from source (under
Windows; under Linux there's no source/binary distinction).

http://pyserial.sourceforge.net/pyparallel.html
http://www.cogsci.nl/blog/tutorials/111-running-opensesame-from-source
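
Judging from the PyParallel documentation, sending a TTL pulse would
look roughly like this (an untested sketch):

import parallel
import time

p = parallel.Parallel()  # opens the first parallel port
p.setData(0xFF)          # raise the data pins
time.sleep(0.005)        # hold the pulse for ~5 ms
p.setData(0x00)          # lower them again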

I hope this is of some help! And please let me know of any suggestions/
remarks/questions, etc.!

Regards,
Sebastiaan
> I know I'll have loads more questions, but that's a good start ;)
>
> Thanks,
> Per
>
>
>
>
>
>
> On Tue, Mar 22, 2011 at 4:44 PM, Sebastiaan Mathôt <s.ma...@psy.vu.nl> wrote:
>> Hi Per,
>>
>> That's a scenario where you'd have to resort partly to an inline script
>> (i.e., Python code) in OpenSesame. This is because the second interval
>> (keypress to display clear) depends on the first. And you are correct,
>> OpenSesame is not parallel in the way that, apparently, PyEPL is. You could
>> simply write a short script to handle this (using the inline_script item),
>> in which case you would do it the same way as in PsychoPy, I suppose.
>> However, since the goal of OpenSesame is to be as GUI-ish as possible, I can
>> think of another way in which only a very short inline script (step 3) is
>> required:
>>
>> 1) sketchpad A for 0ms (text + image)
>> 2) keyboard_response item with 2000ms timeout
>> 3) inline_script item, which does something like:
>> self.experiment.set("interval2", 2000 - self.get("response_time"))
>> 4) sketchpad B for [interval2] ms (image)
>> 5) sketchpad C for 0ms (empty)
>>
>> I feel that this is somewhat more complex than it would need to be and I
>> will see if I can think of a more intuitive way to handle this type of
>> scenario. Thanks for bringing it to my attention!
>>
>> Regards,
>> Sebastiaan



Per B. Sederberg

Mar 22, 2011, 7:05:09 PM
to open-sti...@googlegroups.com, Sebastiaan Mathôt
Thanks for the responses! I forgot the most important one:

PsychoPy and PyEPL (and PsychToolbox for that matter) ensure exact
timing of each drawing to the screen by using an OpenGL trick whereby
you draw a clear pixel to the back buffer after calling flip, which
causes the flip function to block until it actually draws to the
screen. If not using OpenGL (and I think OpenSesame is not), the PyGame
flip function will return immediately, even though it's still syncing
to the vertical retrace.

Have you tested/verified what is happening under OpenSesame? If the
flip call is not blocking, just recording the time after the call to
flip will not provide accurate timing of when stimuli are shown to the
screen.

Best,
Per


Andrew Straw

Mar 23, 2011, 5:09:07 AM
to open-sti...@googlegroups.com
Hi Per,

On 03/23/2011 12:05 AM, Per B. Sederberg wrote:
> Thanks for the responses! I forgot the most important one:
>
> PsychoPy and PyEPL (and PsychToolbox for that matter) ensure exact
> timing of each drawing to the screen by using an OpenGL trick whereby
> you draw a clear pixel to the back buffer after calling flip, which
> causes the flip function to block until it actually draws to the
> screen.

Can you explain how drawing a clear pixel after the call to flip causes
the flip call itself to block? Your explanation seems to invert
causality. (Also, your explanation doesn't take into account the latency
of the display itself.) I would also be interested in a link describing
this technique, if you know of any.

In cases where one is trying to do feedback driven stimulation with
minimal latency, blocking on flip is actually what you don't want to do:
it prevents you from having the most recent data being used to draw the
frame.

-Andrew

Jon

Mar 23, 2011, 6:18:35 AM
to Open Stimulus Developers


On Mar 23, 9:09 am, Andrew Straw <andrew.st...@imp.ac.at> wrote:
> Hi Per,
>
> On 03/23/2011 12:05 AM, Per B. Sederberg wrote:
>
> > Thanks for the responses!  I forgot the most important one:
>
> > PsychoPy and PyEPL (and PsychToolbox for that matter) ensure exact
> > timing of each drawing to the screen by using an OpenGL trick whereby
> > you draw a clear pixel to the back buffer after calling flip, which
> > causes the flip function to block until it actually draws to the
> > screen.
>
> Can you explain how drawing a clear pixel after the call to flip causes
> the flip call itself to block? Your explanation seems to invert
> causality. (Also, your explanation doesn't take into account the latency
> of the display itself.) I would also be interested in a link describing
> this technique, if you know of any.
>

glFinish() is then called too, and that's the important bit. glFinish
waits for all previous gl calls to finish, but drawing the pixel can't
finish until it has somewhere to draw, and that means it needs the
buffers to finish their flip.

It's possible that you don't actually need the draw-the-pixel step
(you can just do glFinish after flip) but I suspect that's going to be
system/card dependent, because the flip is not itself a gl command and
may not be beholden to the glFinish call.

> In cases where one is trying to do feedback driven stimulation with
> minimal latency, blocking on flip is actually what you don't want to do:
> it prevents you from having the most recent data being used to draw the
> frame.
>

True dat. Blocking (and any use of glFinish) is bad for rendering
speed. If you don't need the framebuffer timestamp then sync but don't
block.

Jon

Jon

Mar 23, 2011, 6:24:34 AM
to Open Stimulus Developers


On Mar 22, 7:46 pm, Sebastiaan Mathôt <s.mat...@psy.vu.nl> wrote:
> Hi Jon,
>
> Let's get the list started!
>
> There is very little overhead and it's also quite easy to implement in
> Python (or most other languages). You'll probably want to launch an
> extra thread, not because of performance issues, but because otherwise
> communication will block the main thread while waiting for incoming
> information.

cool, thanks

> You can find some examples in the Python docs:
> http://docs.python.org/library/socket.html
>
> I actually made an object tracker that uses a TCP/IP connection to
> communicate with E-Prime and it worked quite well
> (www.cogsci.nl/mantra). The sound part might be tricky though, if you
> really want to stream sound through the connection (?), because that
> will require buffering etc. (but that's not really specific to TCP/IP).
>

The buffering aspect is something that will be handled by the TDT
hardware and (I believe) the person already knows how to handle that
end.

cheers
Jon

Andrew Straw

Mar 23, 2011, 6:53:32 AM
to open-sti...@googlegroups.com
On 03/22/2011 08:30 PM, Jon wrote:

I would recommend UDP over TCP: there's less socket state to maintain,
and you're not subject to Nagle's algorithm (
http://en.wikipedia.org/wiki/Nagle%27s_algorithm ), which can cause
undesired delays. With UDP on a gigabit LAN, latencies are less than 1
msec. Of course, UDP doesn't guarantee delivery, but on a gigabit LAN
I rarely, if ever, see dropped packets. (Such low latencies may also
eliminate the need for a separate parallel port channel.)
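
A minimal sketch (hypothetical address and payload):

import socket

# Sender: a datagram goes out immediately; no connection setup, no Nagle.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto('trigger', ('192.168.0.2', 9000))

# Receiver, on the other machine:
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.bind(('', 9000))
# data, addr = sock.recvfrom(1024)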

If you want a really good introduction to low-level network programming,
I highly recommend this:
http://beej.us/guide/bgnet/output/html/multipage/index.html

Beyond raw UDP, I would recommend a higher-level toolkit if your setup is
likely to get more complex than a single sender and a single receiver
with fixed addresses. This gets you out of learning the details of
low-level networking. ROS ( http://www.ros.org ) has a nice distributed
realtime networking design, with APIs for C++ and Python and decent
documentation. It's pretty Linux-centric, however, and includes many
more components than just networking, so it may be overkill for your
colleague's needs.

-Andrew

Per B. Sederberg

Mar 23, 2011, 7:34:40 AM
to open-sti...@googlegroups.com
Hi Andrew and Jon:

Jon, thanks again for starting this list, I have a feeling bringing
all these folks together is going to give rise to at least a JND
better world.

On Wed, Mar 23, 2011 at 6:18 AM, Jon <jon.p...@gmail.com> wrote:
>
>
> On Mar 23, 9:09 am, Andrew Straw <andrew.st...@imp.ac.at> wrote:
>> Hi Per,
>>
>> On 03/23/2011 12:05 AM, Per B. Sederberg wrote:
>>
>> > Thanks for the responses!  I forgot the most important one:
>>
>> > PsychoPy and PyEPL (and PsychToolbox for that matter) ensure exact
>> > timing of each drawing to the screen by using an OpenGL trick whereby
>> > you draw a clear pixel to the back buffer after calling flip, which
>> > causes the flip function to block until it actually draws to the
>> > screen.
>>
>> Can you explain how drawing a clear pixel after the call to flip causes
>> the flip call itself to block? Your explanation seems to invert
>> causality. (Also, your explanation doesn't take into account the latency
>> of the display itself.) I would also be interested in a link describing
>> this technique, if you know of any.
>>

Yes, you are correct that this doesn't take into account the latency
of the display, but I think we can assume this is a relatively stable
latency that we could easily quantify. Essentially, we can never
really know without some external measure such as a photodiode.

>
> glFinish() is then called too, and that's the important bit. glFinish
> waits for all previous gl calls to finish, but drawing the pixel can't
> finish until it has somewhere to draw, and that means it needs the
> buffers to finish their flip.
>
> It's possible that you don't actually need the draw-the-pixel step
> (you can just do glFinish after flip) but I suspect that's going to be
> system/card dependent because the flip is not itself a gl command and
> not may not be beholden to the glFinish call.
>

Yes, that's the idea. I think the drawing of the pixel is to make
this work in a stable manner across systems. Sadly, I don't have a
link actually explaining it, but I do have code:

# The following is taken from the PsychToolbox.
# Draw a single pixel in the left-top area of the back-buffer. This will
# wait/stall the rendering pipeline until the buffer flip has happened,
# aka immediately after the VBL has started.
# We need the pixel as a "synchronization token", so the following
# glFinish() really waits for the VBL instead of just "falling through"
# due to the asynchronous nature of OpenGL:
glDrawBuffer(GL_BACK)
# We draw our single pixel with an alpha-value of zero - so effectively
# it doesn't change the color buffer - just the z-buffer if z-writes are
# enabled...
glColor4f(0,0,0,0)
glBegin(GL_POINTS)
glVertex2i(10,10)
glEnd()
# This glFinish() will wait until point drawing is finished, ergo the
# backbuffer was ready for drawing, ergo the buffer swap in sync with
# the start of the VBL has happened.
glFinish()

>> In cases where one is trying to do feedback driven stimulation with
>> minimal latency, blocking on flip is actually what you don't want to do:
>> it prevents you from having the most recent data being used to draw the
>> frame.
>>
>
> true dat. Blocking (and any use of glFinish) are bad for rendering
> speed. If you don't need the framebuffer timestamp then sync but don't
> block.
>

Yes, you are totally right, which is why it's probably best to have
both modes: one that only calls flip on the occasional situations when
you are updating the screen (this is what PyEPL defaults to, because its
focus is on non-vision-oriented experiments that only need to update the
screen every couple of seconds, and we want the event loop to be
processing responses instead of rendering for each frame), and one that
renders for every frame and does not block (as in VisionEgg).

Best,
Per

Per B. Sederberg

Mar 23, 2011, 1:08:31 PM
to open-sti...@googlegroups.com
Hi Sebastiaan:

I've poked around the OpenSesame (OS) code a bit and it's looking
good. I have a suggestion that I think will allow you to maintain all
current functionality, yet greatly increase other capabilities.

It seems as though there is no experiment-level event loop. For
example, when you want to collect a keypress, you start a while loop
processing only key presses until a time limit. If you added an
experiment-level event loop that you call during time when you are
waiting for responses, or just waiting to do something else, then you
can actually process loads of things in the background and provide
"parallel" operations. For example, you could collect mouse presses,
mouse movements, and keyboard presses all at the same time. Or you
could play a sound or video and collect keyboard responses. Another
great feature that would fall out naturally would be the ability to
add in custom timers and callbacks for things to occur during the
experiment. For example, I'd like to send a sync pulse out the
parallel port every 1 second (jittered) in order to sync with EEG. I
can't do this with OS as it stands, but with a timer and callback this
would be totally easy with a couple lines of code at the start of the
experiment. Finally, I think this callback functionality will be the
key to adding in audio playback and recording that interfaces with
external modules for low-latency (e.g., Jack).

I think this could also improve the overall accuracy of the timing
logs, because, as in PyEPL, you can log a range over which you know an
event occurred; that lets users see when there were processing
bottlenecks and know not to trust the timing. In PyEPL, which also uses
pygame, these latencies are under 1ms, but occasionally they can pop
up, and you want to ignore events that have large time ranges.

I'd be happy to talk about this via chat or skype or whatever if
you're interested. I have code in PyEPL that we could use as a
template or for inspiration.

Best,
Per

Per B. Sederberg

Mar 23, 2011, 1:42:38 PM
to open-sti...@googlegroups.com
Me again :)

I've been testing the sync to the vertical retrace and I'm not
understanding what is happening. If I run the attached experiment, I
would expect a 16.66667ms latency between each time_sketchpad in the
log (because my monitor is running at 60Hz), but it's 14ms. If I set
the duration down to 1, the latency between each sketchpad drops to 5ms.

I'm not sure what's happening, but it doesn't seem to be exactly
right. Any thoughts?

Best,
Per

testos.opensesame.tar.gz

Sebastiaan Mathot

Mar 24, 2011, 12:57:25 PM
to Open Stimulus Developers, psede...@gmail.com
Hi Per,

Thank you for notifying me! I thought I was actively participating in
the discussion, but I see now that my comments have not made it onto
the list. Perhaps I've been marked as spam, or my comments do not meet
Google's criteria for substantial contributions to a mailing list. At
any rate, I will restrict myself to re-posting my last comment (see
below), which deals with your test experiment! From now on I will post
through the web interface.

Regards,
Sebastiaan

***

Hi Per,

There are two things going on here. First, OpenSesame internally has a
prepare and a run phase, which occur at the sequence level. What this
means is that, before a sequence is executed, it is "prepared" (i.e.,
sketchpads are drawn, sounds are generated, inline code is compiled,
etc.), so that the timing during the run phase will be as good as
possible. However, in your experiment, you are testing timing between
sequences, rather than within, and in that case an optimal timing is
not guaranteed (or even aspired for). I chose this method, because the
time-critical part of an experiment will usually be a trial, which
will typically correspond to a sequence. The idea is to sacrifice
accuracy between trials for accuracy within a trial.

So, given that your sketchpad lasts 10ms and you register a 14ms
interval, the preparatory phase took 4ms on your system. The same holds
for the 1ms => 5ms case. This is, in a sense, OpenSesame's
desired behaviour (although I can see how it might be confusing).

Another thing is the behaviour of flip(). On some systems you can
flip() as much as you want (no blocking at all), even though there is
an obvious limit on the number of displays that can actually be shown.
Nevertheless, v-sync luckily stays intact. I've seen this behaviour on
Linux and Windows 7. Under Windows XP flip() generally does block, and
the result of your experiment would probably be as you predicted (but
see the point above). I will not pretend any deep understanding of the
underlying mechanics of flip(), but this is what I've observed.
Perhaps it would be different if I used OpenGL mode as well? By default
OpenSesame uses the DOUBLEBUF and HWSURFACE display options.
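
(For reference, the difference boils down to the flags passed to
pygame's set_mode; a minimal illustration:)

import pygame

pygame.init()
# Current default: SDL surface with double buffering.
screen = pygame.display.set_mode((1024, 768),
    pygame.DOUBLEBUF | pygame.HWSURFACE | pygame.FULLSCREEN)
# OpenGL mode, where the glFinish() trick becomes possible:
# screen = pygame.display.set_mode((1024, 768),
#     pygame.OPENGL | pygame.DOUBLEBUF | pygame.FULLSCREEN)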

Which brings me to a question of my own: even if you don't care about
the fancy rendering possibilities, is there a benefit to enabling
OpenGL? E.g., will flip() behave more nicely on Linux and Windows 7?
And does anybody have an example of how you can blit a PyGame surface
onto an OpenGL display surface? I get an error if I try to do that in
the (for me) regular way.

And thank you for your suggestion regarding the event loop. It's most
certainly something to think about, and it would have some advantages
to be sure. Still, I'll have to consider the best way to incorporate
this into OpenSesame. If I need some suggestions I won't hesitate to
contact you.

Regards,
Sebastiaan

****

Hi Sebastiaan:

I wanted to verify that you saw the last couple emails on this thread
(I totally understand if you've been busy, so no worries if that's the
case). When you have the chance, I'd love to hear your thoughts on my
event loop suggestion and also your input on the timing of visual
stimuli (see email below and attached script), which I could not
verify is working properly on my machine (Debian with nVidia card and
nVidia drivers).

Thanks,
Per


Yaroslav Halchenko

Mar 24, 2011, 1:20:02 PM
to open-sti...@googlegroups.com
Hi Sebastian,

Resending my original post, which was actually rejected ;) since
apparently I was registered to the group with a different email
address. Following my own instructions below, I have now added that
address to my list of emails; let's see if this post gets through.

About Google/spam: I had lots of pain with it as well and partially got
it resolved (some IPs from which I email from time to time are still
considered "spam sources"... heh heh).

It might be that you were using your other email address as the origin
(i.e. s.ma...@psy.vu.nl), which would have led Google Groups to dump
your email (or explicitly refuse to post).

To overcome this, I have added my alternative email addresses to "Email
addresses" within "Personal Settings". To get there, go to

http://groups.google.com/

In the top-right corner there is a gear icon for "Options"; click it,
select "Account options", and you will see "Personal Settings".

Hope this is of help to someone.
Cheers,


--
=------------------------------------------------------------------=
Keep in touch www.onerussian.com
Yaroslav Halchenko www.ohloh.net/accounts/yarikoptic

Sebastiaan Mathôt

Mar 23, 2011, 6:43:12 AM
to open-sti...@googlegroups.com
Hi Per,

You're right in assuming that OpenSesame simply uses the flip function.
What I've done is create a test_suite.opensesame file, which is
essentially an OpenSesame experiment that does some basic benchmarking.
In the first step it checks whether the reported interval (i.e.,
determined by when the flip() returns) matches the requested interval
between two sketchpads. It also presents alternating yellow and blue
screens, so you can visually determine whether v-sync is enabled (I'm
not sure I trust the system's introspection on this). If it's not, you
see clear tearing (erratic horizontal lines).

My experience with running this test is that under Windows XP the
reported error is usually 0ms (although maybe I should take a look at
the double flipping), whereas under Linux and Windows 7 the reported
error is usually very small but non-zero, especially on slower machines
(e.g., 1ms or 2ms). This suggests to me that there is a difference in
the behavior of flip() on these systems, but I'm not sure exactly what.
This difference is also apparent in the fact that Linux and Windows 7
readily accept more flips than can be shown given the refresh rate,
whereas Windows XP does not. On all systems v-sync works fine, although
on my machines v-sync breaks down if you disable Compiz or Aero on
Linux and Windows 7, respectively.

I tend to think that v-sync is the crucial thing, as it basically
ensures that the relative timing of the displays is correct and stable
(unless there is a timing error so large that an entire refresh is
missed), whereas the absolute presentation time of the displays doesn't
matter that much. So, in that sense, I think the flip() behaviour is
fine (especially on XP). Nevertheless, I'm curious to learn about
better ways to handle flipping, and also about other people's
experiences with Linux and Windows 7/Vista in this respect.

Regards,
Sebastiaan


Sebastiaan Mathot

Mar 25, 2011, 5:46:40 AM
to Open Stimulus Developers
And suddenly my posts magically appear! Two days after the fact.

Jonathan Peirce

Mar 25, 2011, 5:57:36 AM
to open-sti...@googlegroups.com
Google thought they might be spam and held them back for moderation. But
I only just got the notification. You should be fine from here on in.

Jon

On 25/03/2011 09:46, Sebastiaan Mathot wrote:
> And suddenly my posts magically appear! Two days after the fact.
>

--
Dr. Jonathan Peirce
Nottingham Visual Neuroscience

http://www.peirce.org.uk/

Jonathan Peirce

Mar 23, 2011, 7:01:35 AM
to open-sti...@googlegroups.com

On 22/03/2011 20:16, Per B. Sederberg wrote:
> On a related note, I was checking out OpenSesame the other day and I
> couldn't figure out how to do something in parallel (sequences are
> obvious). For example, here's a real-world scenario:
>
> 1) Put up an image and text below it saying Old/New.
> 2) Wait until a keyboard response or up to 2000ms.
> 3) Upon keyboard response, take away the Old/New text, leaving up the image.
> 4) After 2000ms take down the Image and, if it wasn't already removed,
> the Old/New text.
>

In PsychoPy Builder all that is currently easy except for the part where
you leave the image up, but not the text, after a keypress. I do intend
to add more complex options for stimuli to start/stop in a yoked fashion
with each other, but I wanted to get the simple stuff clean and
relatively bug-free first. PsyScope can yoke stimuli to events, but
getting the GUI representation right is a challenge, I think. It will
happen one day.

Parallel objects are no problem. So the slightly modified version of the
study, where the subject's response ends the entire trial, is easy and
looks like the attached. The keyboard component has a check box called
forceEndTrial, and otherwise everything lasts for 2000ms (after a 500ms
ISI).

> Can I set up a trial to do that in OpenSesame or do I have to remove
> everything and reshow the image upon a keypress (I think that's how
> I'd do it in PsychoPy). In my all-code land of PyEPL I just remove
> the text when I get the keypress and update the screen, leaving the
> image up from before.

In PsychoPy Coder that's pretty much the same.

Jon

parallelStims.png

Per B. Sederberg

Mar 25, 2011, 4:48:11 PM
to open-sti...@googlegroups.com, Sebastiaan Mathôt
Hi Sebastiaan et al.:

I've got bad and good news.

The non-OpenGL timing is an issue. I was unable to get a single
machine in my lab (granted, it's full of Linux boxen) to provide
accurate timing of when something came on the screen in OpenSesame
using the pygame flip without OpenGL. Unless the flip blocks, the way
it's currently coded you can't know when in the screen refresh the
draw actually happened, so you could have an error of up to 16.66667ms
(if your refresh rate is 60Hz), which is no good if you care about RTs
or EEG.

That was the bad news. The good news is that I got adventurous today
and added OpenGL functionality to OpenSesame. With it I'm getting
perfect ms timing on all my machines with nVidia cards in fullscreen
mode (OpenGL syncing only works in fullscreen). The code could still
use some cleanup, but check out my opengl branch on my github fork
(psederberg is my username). The only issue that I've found so far is
that the first thing that you draw (i.e., the welcome screen) does not
show up, but everything after that works just fine. I assume that's
because something I'm doing isn't getting initialized until you draw
something once, and I'm sure that will be pretty easy to fix.

The speed is as good as the non-OpenGL version, and the testsuite
example runs just fine (except for that welcome screen not coming up).
You can switch between the OpenGL and the original pygame software
rendering versions by changing the mode_opengl line in experiment.py
to yes or no (like it was originally).

I welcome feedback on the code and feel free to pull it into upstream.
We should probably configure it so that you can select opengl or not
from the command line instead of having to change it in the code, but
that's all bells and whistles.

Best,
Per

Sebastiaan Mathot

Mar 25, 2011, 5:25:41 PM
to Open Stimulus Developers
Hi Per,

That's awesome! It seems, then, that OpenGL is without a doubt the way
to go. I was not aware that it makes such a big difference. I took a
quick look at your code, which looks fine, and I'll merge it upstream
when I get the time, perhaps with some modifications. Btw, you can
enable OpenGL mode fairly easily by adding "set mode_opengl yes" to
the general script of OpenSesame; no command-line or source
modifications required. Admittedly, without your patch, turning on
OpenGL breaks pretty much everything. But it's possible.

I'll take a look at it and get back to you in more detail as soon as I
find the time. Thank you very much!

Regards,
Sebastiaan

Per B. Sederberg

Mar 25, 2011, 6:37:57 PM
to open-sti...@googlegroups.com, Jonathan Peirce

Hi Jon:

Thanks for this example. I've been trying to implement this staggered
stimulus update and it's proven extremely difficult/impossible in most
GUIs without a weird hack. Can you suggest a way to do it (remove the
text, but keep the image up, following a keypress) in the PsychoPy
Builder with an inline code snippet? In other words, how could I do it
without dropping out completely into Coder?

Thanks,
Per

Per B. Sederberg

Mar 25, 2011, 8:22:56 PM
to open-sti...@googlegroups.com, Sebastiaan Mathot
Hi S:

I've been looking more at the code I added today and, while it's a
great first step, there are many optimizations to be made. For example,
this was a proof of concept that I could take a surface that has
everything drawn on it (essentially the entire screen) and then blit
that onto an OpenGL texture. This works, but it's actually much slower
than it will eventually be once we don't have two blitting steps (i.e.,
onto the pygame surface the size of the screen and then onto a big
texture). The real way is to draw all the objects on smaller surfaces,
which we then blit to the OpenGL surface. The ogl_image.py code is
designed to do that, so this will not be hard. The sum story is that
you may want to wait to merge with upstream until I've optimized
everything.
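
For the curious, the core of the current two-step approach is roughly
this (a simplified sketch, not the actual branch code):

import pygame
from OpenGL.GL import *

def surface_to_texture(surface):
    # Step 1: pull the raw RGBA bytes out of the (screen-sized) surface.
    data = pygame.image.tostring(surface, 'RGBA', True)
    w, h = surface.get_size()
    tex = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, tex)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
    # Step 2: upload to an OpenGL texture (the slow, full-screen copy).
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA,
                 GL_UNSIGNED_BYTE, data)
    return tex

def draw_fullscreen(tex):
    # Draw the texture as a screen-aligned quad on the OpenGL display.
    glEnable(GL_TEXTURE_2D)
    glBindTexture(GL_TEXTURE_2D, tex)
    glBegin(GL_QUADS)
    glTexCoord2f(0, 0); glVertex2f(-1, -1)
    glTexCoord2f(1, 0); glVertex2f(1, -1)
    glTexCoord2f(1, 1); glVertex2f(1, 1)
    glTexCoord2f(0, 1); glVertex2f(-1, 1)
    glEnd()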

Have a great weekend!
P

Sebastiaan Mathôt

Mar 27, 2011, 6:50:52 AM
to open-sti...@googlegroups.com
Hi Jon,

You're right, this is not an OpenSesame mailing list. I apologize for
that. So let me bring it back to a more general issue, which is related
to OpenSesame but will quite possibly interest people besides Per and
myself as well.

It would be great if our respective projects could benefit *directly*
from each other's code. I have one very specific way in mind in which
this could be done, which I already mentioned to Jon earlier. OpenSesame
could use PsychoPy or another stimulus library (PyEPL?) as a back-end to
handle display operations. As Per has already seen, all it takes is
essentially overriding the Canvas class, which is OpenSesame's extremely
thin wrapper around whatever back-end you please (similar story for
sound). Right now it would be a little tricky to use a non-pygame-derived
back-end, but even that wouldn't be too hard to fix.
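
Schematically, a PsychoPy-backed canvas might look something like this
(method names invented for illustration; the real Canvas API may differ):

from psychopy import visual

class PsychoPyCanvas(object):
    # Hypothetical drop-in replacement for the pygame-based Canvas.
    def __init__(self, experiment):
        self.win = visual.Window(fullscr=True, units='pix')
    def textline(self, text, x=0, y=0):
        visual.TextStim(self.win, text=text, pos=(x, y)).draw()
    def show(self):
        self.win.flip()  # PsychoPy handles the v-synced flip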

So, my question is: would there be any benefit/downside to using
PsychoPy as a back-end? By "back-end" I mean that the Canvas class
would invoke whatever functions PsychoPy uses for drawing primitives
and flipping stuff to the screen. So it would not use all aspects of
PsychoPy (not the Builder, for instance). I am not familiar enough with
PsychoPy to say for sure, but I'm guessing that this is possible? My
feeling is that the average user wouldn't notice or care what back-end
is used, but that advanced users would greatly appreciate the extra
functionality offered by PsychoPy. In addition, I presume that display
operations have been extensively tested in PsychoPy.

I think it might be a good idea to modify OpenSesame so that there is a
good infrastructure for plugging in different back-ends, in a way that
is mostly transparent to the user (as long as he/she doesn't use
back-end-specific functions, of course). This wouldn't be too much
work, because the basic back-end independence is already there. One
back-end would be the (soon to be legacy) non-OpenGL back-end that
exists now. Another back-end could be Per's OpenGL back-end-in-progress
(rough, I know, but looking good). A third, particularly powerful one
could be PsychoPy.

What I envision is a combination of a fully graphical experiment
builder, with a huge library of functions to match. This could blow the
proprietary competition out of the water. And it's all there, just not
in a single package.

Any thoughts?

Regards,
Sebastiaan

On 03/26/2011 12:59 PM, Per B. Sederberg wrote:
> Howdy Jon:
>
> No problem! I was worried about the same thing (for example, the
> subject line on the emails), however, I kept posting to the group
> because it seemed like occasionally some general interest things were
> cropping up. For example, when Andrew asked us about the OpenGL
> screen sync trick. This is exactly the sort of thing I think we're
> all benefiting from with this list (the collective knowledge of people
> who have worked on this same problem for many years). Even the last
> post I made, while sounding specific, is still a commentary on OpenGL
> vs. raw SDL. I could certainly have posed it as a question (e.g., why
> am I getting slow performance when I blit a surface the size of the
> screen to an OpenGL texture? I think it's that it's certainly more
> efficient to blit the individual objects than an entire surface, but
> I'd love input from people who have worked with OpenGL more than I
> have, e.g. you and Andrew).
>
> So, I agree that this thread got pretty specific, but I'm torn as to
> whether it's too specific for general consumption (especially if we
> took more care making the issues sound more general). I, for example,
> would love to be a fly on the wall for a similar conversation
> concerning say VisionEgg (I use that as an example because I don't use
> it at all and wouldn't be on the mailing list for it). If we were all
> on the mailing lists for VisionEgg, PsychoPy, PyEPL, OpenSesame,
> PsychToolbox, etc..., then we wouldn't need this list.
>
> All that said, I'm happy the open-stim-dev list exists and hope we
> continue to use it. It seems this is all part of the growing process
> for figuring out what this list will be for...
>
> Have a great weekend,
> Per
>
>
> On Sat, Mar 26, 2011 at 4:46 AM, Jonathan Peirce<jon.p...@gmail.com> wrote:
>> forgive me guys, but this thread seems to have wandered a little away from
>> 'general' issues/solutions for the packages. Maybe you could take it to a
>> list specifically for OpenSesame?
>>
>> best wishes,
>> Jon


Jonathan Peirce

Mar 28, 2011, 5:29:42 AM
to open-sti...@googlegroups.com
I don't think it would be any problem at all to add PsychoPy as a
middle-to-back-ish-end. PsychoPy then uses pyglet or pygame as a
further-towards-the-back-end and does all its drawing in OpenGL (really
quite close to the back!) ;-)

PsychoPy started off simply as a library and many people still use it
as such, because they have their own preferred editor and don't need
the Builder. The 'Coder' (IDE) got added to make it easier for newbies
who want to install an 'application' and run demos easily. The Builder
is much more recent, quite beta-quality, and was my own attempt to do
something like OpenSesame. Actually, if I'd known you would write
OpenSesame, I might not have bothered! ;-) Builder would only be useful
to you for seeing how I might go about coding a particular type of
study, because you can view the script that Builder generates.

The point is, you can quite happily use the PsychoPy API without
touching the PsychoPy app. Take a look at the Coder demos within the
package to see some examples; the (fairly) complete API reference is
available here:
http://www.psychopy.org/api/api.html
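
For example, a minimal library-only script (a sketch, assuming the
standard visual and core modules):

from psychopy import visual, core

win = visual.Window([800, 600], units='pix')
msg = visual.TextStim(win, text='Hello from the PsychoPy library')
msg.draw()
win.flip()  # flip() is synced to the vertical retrace
core.wait(2.0)
win.close()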

It may well be that the OpenSesame GUI will surpass the Builder in
functionality, and then I would be interested in distributing it as an
additional/replacement option within PsychoPy. Of course, I would then
ask users to cite your work as well as my own if they make use of it,
along the lines of "The experiment was generated in OpenSesame (Mathôt,
20XX) using the PsychoPy library (Peirce, 2007)." I have actually
wondered about adding a *Psychtoolbox* back-end to PsychoPy Builder!
Builder literally just generates a script from the graphical
representation and runs it, and it would be just as easy to generate a
Matlab script as a Python one. You can see the concepts of back-ends
and front-ends getting extremely blurred...

By the way, I'm not sure if OpenSesame is doing this yet, but I think
the trick of generating a script that the user can use as a template is
really useful. You can never write the GUI that handles all possible
experiments - code is more flexible. You want the user to be able to
carry on using your software after they reach that point where the GUI
can't keep up. The GUI/code combo is, I think, the solution.

all the best,
Jon


On 27/03/2011 11:50, Sebastiaan Mathôt wrote:
> Hi Jon,

--

Dr. Jonathan Peirce
Nottingham Visual Neuroscience

http://www.peirce.org.uk/


