eyetracking/hardware in a separate thread or process


Jonathan Peirce

Apr 17, 2012, 5:18:49 AM
to PsychoPy Developers

On the matter of adding some of this code to a hardware/smi.py module, I
think that would be fantastic! ;-)

The rest of this message is about the question of threading vs. multiprocessing.

I think using the multiprocessing module is going to be the way to go
(and it looks like it /is/ already included on the Standalone
distribution). I expect any code you've already written to work on
multiple threads will translate really easily to that module and run on
multiple cores, and would have a performance advantage whenever more
than one core is available (nearly all recent machines). On a single
core machine there might be a very slight additional overhead, but
otherwise it will operate just as a second thread alternating for
control of the core. Potentially on a second core you could do some more
detailed analysis of the inputs and just send back the result when
needed, or simply keep storing the current result in some shared location.

Having had a little play with this, it looks fairly straightforward, and I
might well add a class to hardware/__init__ like:

class AsyncProcess():
    def addFunction(self, f, args=[]):
        """Add a function that you want to run as part of the
        process. Multiple functions could be run in turn.
        """
    def readOutputs(self):
        """Read the contents of an output buffer, containing
        anything that the functions have returned while running.
        """

Already I'm thinking that this is also going to be the easiest way to
handle the new methods for fetching keyboard events in a separate process.
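Something like the following could sit behind that interface (a minimal
sketch only; the run-loop and queue handling shown here are hypothetical,
not a settled API):

import multiprocessing
import time

def runLoop(f, args, outputs, stopEvent):
    # Worker process: call f(*args) repeatedly, posting each result
    # to a queue that the main process can drain when convenient:
    while not stopEvent.is_set():
        outputs.put(f(*args))
        time.sleep(0.001)  # yield a little time; tune to the device's rate

if __name__ == '__main__':
    outputs = multiprocessing.Queue()
    stopEvent = multiprocessing.Event()
    worker = multiprocessing.Process(target=runLoop,
                                     args=(time.time, (), outputs, stopEvent))
    worker.start()
    time.sleep(0.01)  # ...the main process would draw stimuli here...
    stopEvent.set()
    worker.join()
    while not outputs.empty():  # read back whatever the function returned
        print(outputs.get())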

Jon

On 17/04/2012 04:43, Michael MacAskill wrote:
>> So threads would be a good start, but you may run into issues: if you're wanting to check the eyetracker at a very high rate, you'll consume a lot of CPU time and that will affect your rendering. Balancing the load between drawing and reading is likely to be an issue, I imagine.
>>
>> There are also libraries to allow you to get around this and use separate cores (multiprocessing or Parallel Python), but then the communication between your processes needs to be managed. From my quick look at the multiprocessing docs (actually available from 2.6, but probably not yet in the Standalone PsychoPy - I'll add it) it looks like that should be reasonably straightforward:
> After a year or two of putting this off, it only took an hour or two to implement a thread to monitor the real-time output from the SMI iView X. Even on my personal laptop, running Mail, Excel and web browsers etc, I had to **slow down** the thread so that it wasn't polling faster than the 1250 Hz UDP stream. Python didn't get above 13% of one core in CPU usage (the Terminal was using about 50%, but that may have been due to all the printing going on).
>
> That is, in the run event of the monitoring thread:
>
> # method which will run when the thread is started:
> def run(self):
>     while True:
>         if self.__stop:
>             break
>         print(self.receiveNoBlock())
>         time.sleep(0.0005)
>
> If the time.sleep() call is not there, the thread runs so fast that it gathers many more empty values from the UDP port than actual samples. With the small sleep value above, the mean time between samples was 0.800 ms (i.e. exactly 1250 Hz), with near-zero standard deviation.
>
> iView's ET_STR command (i.e. "start streaming") has an optional sample rate parameter, so one could set it to, say, 500 Hz, put a longer sleep time in the thread, and that would hopefully leave enough time for PsychoPy to do its stuff (setting a lower output rate isn't necessary, though, as we could just sample at a lower rate from the 1250 Hz stream. Each packet contains its own microsecond time stamp, so irregular sampling isn't too problematic).
>
> Have tested this stuff just in Python so far, only with text output of the eye tracker samples. Next step is to wire it into an actual PsychoPy task, extract the eye pixel coordinates and get a gaze-controlled image implemented. That will be the proof of the pudding as to whether simple threads give the necessary performance, or if the multiprocessing approach is necessary.
>
> Cheers,
>
> Mike
>
>

--
Jonathan Peirce
Nottingham Visual Neuroscience

http://www.peirce.org.uk



Michael MacAskill

Apr 17, 2012, 5:45:19 PM
to psycho...@googlegroups.com, Nate Vack
On 18 Apr, 2012, at 04:23, Nate Vack wrote:

> Maybe I'm being thick here, but couldn't you ditch the sleep() and do
> a blocking recv()?
For testing purposes, by not blocking I was able to see how fast the thread would max out, which seemed to be about 10 times as fast as the 1250 Hz stream (i.e. there were about 10 null receives for every real packet). It also let me simulate (with larger values of the sleep time) what would happen if the thread could only sample more slowly than 1250 Hz due to competing activities.

Agree that blocking receives would be easier to deal with, as they would avoid receiving occasional null packets, and would probably be the way to go when doing this for real. But the blocking is a black box to me: I have no idea whether it would still entail a very tight (but hidden under the hood) polling loop until a valid packet is received. The control freak in me likes explicitly releasing time to other activities via sleep(). But no doubt the under-the-hood stuff is documented somewhere.
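(A sketch of a timeout-based middle ground, for reference: a recv() with a timeout waits down in the OS rather than spinning in a Python-level loop. Standalone, and assuming the same port and buffer size as the code later in this thread:)

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('0.0.0.0', 5555))  # the port the iView stream is duplicated to
sock.settimeout(0.01)         # each recv() blocks for at most 10 ms
try:
    data = sock.recv(4096)
except socket.timeout:
    data = None               # no packet arrived within the timeout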

Cheers,

Mike

Michael MacAskill

Apr 17, 2012, 6:09:37 PM
to psycho...@googlegroups.com

On 17 Apr, 2012, at 17:54, Denis A. Engemann wrote:

> I would be happy to have a look at your code and to share what I have done so far to master the SMI using PsychoPy. If this is of interest to you, we could switch to the developers list and keep on discussing this over there. Maybe we will arrive at some solution worth implementing in the psychopy.hardware module.

Hi Denis,

Here is some simple code for just displaying the data stream. We have a lot more for detailed control of the eye tracker, which I'll send you directly as it won't be of interest to most others.

As Jon suggests, we should shift this to multiprocessing for its various advantages.


****First file: test.py:****

# test gaze thread

import gazeContingent
import time

iViewThread = gazeContingent.GazeContingent()
iViewThread.start()

time.sleep(5)
"""just sit here for n seconds monitoring the data stream. In practice,
this is where PsychoPy would be doing its stuff, hopefully yielding sufficient
time to the thread.
You should see n seconds worth of data being printed to the terminal.
Need to add code to the thread to extract the gaze position, detect saccades,
etc, and allow these to be queried periodically."""

# then signal the thread to stop gracefully:
iViewThread.stop()


****Second file: gazeContingent.py ****

# Class to monitor iView eye tracker in real time in a separate thread
# =====================================================================
#
# Example Usage:
#
# import gazeContingent
# gazeThread = gazeContingent.GazeContingent()
# gazeThread.start()
# #do something here
# gazeThread.stop()
#
# 2012-04-17: v0.1: initial implementation
#
# Written by Michael MacAskill <michael....@nzbri.org>

import threading # this class is a thread sub-class
import socket # it listens on a UDP port
import time # for the sleep function


class GazeContingent(threading.Thread):

    # initialise:
    # use defaults appropriate to your lab IP addresses and ports:
    def __init__(self, port=5555, iViewIP="192.168.110.63", iViewPort=4444):

        # UDP port to listen for iView data on.
        # Set the iView software to duplicate its stream to this port number
        # so that we don't conflict with the listening and sending on the
        # main port number.

        # Ports that we send and receive on:
        self.port = port
        self.iViewPort = iViewPort

        # Address to send messages to iView:
        self.iViewIP = iViewIP

        # Bind to all interfaces:
        self.host = '0.0.0.0'

        # Set up the socket:
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

        # The size of the buffer we use for receiving:
        self.buffer = 4096

        # Bind to the local host and port:
        self.sock.bind((self.host, self.port))

        # Get iView to start streaming data:
        self.send('ET_FRM "%ET %TU %SX %SY"')  # set the datagram format (see the iView manual)
        self.send('ET_STR')  # start streaming (an optional integer specifies the rate)

        # Create self as a thread:
        threading.Thread.__init__(self)

        self.__stop = False

    def receiveNoBlock(self):
        # copied from iview.py
        """Get any data that has been received.
        If there is no data waiting, it will return immediately.
        Returns the data (or 0 if nothing)."""
        self.sock.setblocking(0)
        try:
            data = self.sock.recv(self.buffer)
        except socket.error:
            return 0
        else:
            return data

    def receiveBlock(self):
        # copied from iview.py
        """Get any data that has been received, or wait until some is received.
        If there is no data waiting, it will block until some arrives.
        Returns the data."""
        self.sock.setblocking(1)
        data = self.sock.recv(self.buffer)
        return data

    def send(self, message):
        # Send messages to iView when required.
        # Note: iView requires a trailing newline (\n):
        entire_message = message + "\n"
        try:
            self.sock.sendto(entire_message, (self.iViewIP, self.iViewPort))
        except socket.error:
            print "Could not send UDP message:"
            print message

    # method which will run when the thread is started:
    def run(self):
        i = 0
        while True:
            if self.__stop:
                break
            i = i + 1
            print (i, self.receiveNoBlock())
            # could receive with block and skip the sleep():
            time.sleep(0.0005)

    # so the caller can ask for the thread to stop monitoring:
    def stop(self):
        self.send("ET_EST")  # tell iView to stop streaming
        self.__stop = True  # the run() method monitors this flag
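
A possible next step, sketching the gaze-position extraction mentioned in test.py (the field order here is an assumption: it follows the ET_FRM format string set in __init__, but should be checked against the iView manual), would be a method like:

    def parseSample(self, datagram):
        # Assumed layout per the ET_FRM string: fields[0] is the %ET
        # sample identifier, then %TU microsecond time stamp, then
        # %SX %SY screen gaze coordinates:
        fields = datagram.split()
        if len(fields) < 4:
            return None  # an empty or non-sample packet
        timestamp = int(fields[1])
        x = float(fields[2])
        y = float(fields[3])
        return (timestamp, x, y)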


Sol Simpson

May 7, 2012, 10:07:09 PM
to psycho...@googlegroups.com
Hi Michael,

I am working on a project, currently dubbed pyEyeTracker, that aims to create a standard eye tracker interface for several different eye tracking systems, including some of the SMI models like the head-supported high speed series. There are about 4-5 other manufacturers that we will be implementing an interface for in the pyEyeTracker common API in the first phase. This is part of the work being done by the COGAIN technical committee on eye tracker data quality standardization (http://www.cogain.org/info/accuracy). The committee is made up of people from universities and from eye tracking companies, working together to try to define a standard for how some of the common eye tracking system performance measures are tested, like gaze accuracy, low-level system resolution, end-to-end eye sample access delay, and such. The hope is that this effort will be a first, important step towards having clear definitions of what each eye tracking data measure means and what some best practices may be for testing them.

The general idea of pyEyeTracker is that, regardless of eye tracking device (within practical limits), a researcher writing an experiment would be able to use the same Python API for the majority of their work with the system. Using a different eye tracker would just be a matter of switching the name of the eye tracker subclass that is created at the start of the PsychoPy script; perhaps even an integration with the Builder could be possible. For areas of an eye tracker's existing interface that are commonly used, but not common between eye tracking systems, a generic way to access that 'extended' functionality for that particular model will be provided. This way the developer of the experiment should get all the benefits of using a common interface 90% of the time, but also the flexibility not to be limited by the interface for the 10% or less of functionality that differs. In a way I guess this is similar to how OpenGL grew, with a standard API version, but then video-card-specific extensions that allowed special-purpose or bleeding-edge functionality to be accessed.
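(Purely to illustrate the idea; these names are hypothetical, not the actual pyEyeTracker API, which has not been published yet:)

class EyeTracker(object):
    """Common interface; vendor-specific subclasses fill in the details."""
    def setConnectionState(self, enabled):
        raise NotImplementedError
    def getLatestSample(self):
        raise NotImplementedError
    def sendCommand(self, command, *args):
        # escape hatch for a model's 'extended', non-common functionality
        raise NotImplementedError

# Switching trackers would then just mean changing the subclass used, e.g.:
# tracker = SMIHighSpeed()   # vs. tracker = SomeOtherVendorTracker()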

As part of this initiative, we will be running some experiments to help take the suggestions and ideas that have been formed and check them empirically: trying to determine whether there is a best approach for testing a measure that generalizes across many eye tracking systems, or whether a set of suggestions that factors in mechanical and other design differences between eye trackers needs to be put forth. While developing a standard set of measure definitions and suggested techniques on how to quantify them is very important, it is also important that any suggestions made are fair to all eye tracker types and models. It may be that for some measures it is important to identify the categories of eye trackers where different testing approaches should be used.

With all that said, we will be using PsychoPy as the experiment development environment, which we are very excited about. Given the wide variety of eye tracking systems that will be tested, and wanting to keep the experiment itself as similar as possible across all the eye tracker models, so that as few conditions as possible differ other than the eye tracking hardware and software itself, developing a common interface that is used for all eye tracking systems seems like a very worthwhile goal.

As soon as we are able to put out a first draft of the proposed Python interface, with one or more functioning implementations of it on different eye tracking systems, as well as some documentation of course, I will post it and update this group on the status of the project during these very preliminary weeks. Anyone who is interested is welcome to provide feedback and constructive criticism of the standard interface design so that it can be made as good as possible.

Michael, the more short-term 'practical' question I had was whether you have any experience using the SMI UDP interface for streaming at their higher data rates, say 500-1250 Hz, and accessing all the possible sample fields made available? I ask because I had planned on using their C DLL via the ctypes form of interface that they provide some examples of, since it should have no issue accessing the sample data at full speed, without dropping any data, and with the shortest latency possible. However, if you have experience using the UDP interface, or even better, a fully functioning Python code set that could be used as a basis for the SMI implementation or for some stress testing, that may be an interesting option to consider for deciding what direction to go for the pyEyeTracker interface, if you thought so as well.

Any insight you have on the C DLL vs. open UDP ASCII forms of interface in terms of real-world performance, etc. would be of great interest, and would aid in determining what the best route for the SMI implementation will be. Thanks in advance for any thoughts you have on the subject.

I'm also shopping around for a better name for the project. ;)

Thank you; more to come in the near future.

Sol



Sol Simpson

May 7, 2012, 11:43:22 PM
to psycho...@googlegroups.com
Regarding using multiple processes vs. multiple 'threads': here are a couple of things to add to the pot of considerations.

1) This is more of a "have you seen this..." point. When doing some more research on the current state of things in this area, I found a two-year-old PyCon talk by David Beazley about GIL-bound Python threads and multicore CPUs. Maybe everyone else in the Python world has already seen it, so sorry for bringing up old news if that is the case, but I pulled my head out of the sand, I guess, and just watched it now. It is a great talk, the guy is quite funny, but the message is not funny at all!


He basically shows, through some very contrived examples and one not-so-contrived example, that using Python threads on a multicore CPU not only does not improve performance, due to the GIL, but that, due to how the GIL is implemented (at least in 2.6, for example) and an interaction with common OS scheduling methods, adding multiple GIL-bound threads greatly reduces performance! I was aware of the GIL, and that because of it Python threads run one at a time sequentially, so they really just provide time slicing, but I had no idea how negative a performance hit they could have on ye good old Python program. I am not sure how much of this is old news, or how much has been further tested and the issue space reduced based on those tests, but the talk is quite interesting to say the least.

2) Regardless, this is just even more reason to look at how to take advantage of multiple cores by a) using the multiprocessing module that is part of the Python distribution now, or b) considering the alternative, which is what we have done on previous projects: creating your worker thread in a small C extension that releases the GIL right away so it is not bound by it, and setting the affinity of the C extension thread and the Python main app such that they run on different cores. With this approach you get the benefit of parallel processing at a much lighter cost compared to a separate Python process / Python interpreter plus whatever message-passing method you pick to talk between the two. However, in the no-GIL C thread case you do have to worry about any places where there is overlap between Python objects and what you are doing to them in C, so some of the advantage may depend on the intended usage. Based on the research I did today, it appears you can also set up ctypes so that a thread created by ctypes releases the GIL prior to processing, BTW, although I have never tried this.

3) A case where the C thread approach makes some sense, I think, is when creating the parallel processing unit for the purpose of gathering various input/event streams, time stamping them, and then feeding a combined stream (if desired, sorted by time) to the Python interpreter. This is because the 'touch points' are minimized and can be handled very quickly. In this case all the input gathering would be done in C and would therefore be OS-specific, unless you used a cross-platform API like SDL for getting your keyboard and mouse (and joystick) inputs at least. So I guess this is a downside of the approach. If it can be done in ctypes it may not be as 'bad', if you think it is bad to begin with. ;) Regardless of the implementation, it makes sense to me that if keyboard and mouse handling is going to be moved, effectively all input processing should be moved, including TTL and analog input reading, and eye tracker data reading, for example. Having a common data integration hub for the experiment environment makes synchronizing the various event types much easier, IMO. This may not matter so much for keyboard and mouse inputs, because the timing accuracy of those events is very poor to begin with in general. However, for TTL, analog, voice triggering, and other device input like eye tracking, it can be much more important to the researcher that the time stamping is applied as quickly as possible, and that the relative order of events is as correct as possible.

4) One practical issue that I do not know the answer to, and would really appreciate any feedback on, is the case where you need to load a library, but the library is only designed to support one instance being loaded at a time on a PC for it to function properly. An example of this is the EyeLink C DLL, and I believe the SMI C DLL as well; perhaps other eye trackers too, once I have had time to look more closely at their APIs. The significance of this is that, when using the separate no-GIL thread with your main Python process/thread, you still only need to load the DLL once, and it can be shared by the C extension library thread and the main process thread. So, for example, all data collection can be done by the no-GIL C thread, but interaction, sending commands, etc. to the eye tracker can be done in the main Python app. This is good.

But this is what I am not sure about: what about the case of using two processes? Two separate processes, like what you would get by using multiprocessing, cannot share a single DLL instance, can they? If they cannot, then it means ALL interaction with the device needs to be done via the second process, not just collecting data. This adds a lot of overhead and extra time, because of the messaging that would need to occur every time you want to do anything with the external device.

Sending a command would no longer be:

    Python calls C_Send_Command_Func -> command sent to device via a C call

i.e. basically zero overhead. Instead it would be something like:

    put SendCommandRequest in queue
    -> 2nd process checks for and retrieves SendCommandRequest
       (possible extra memory copy required)
    -> 2nd process's Python calls C_SendCommand_Func
    -> command sent to device
    -> 2nd process communicates back that the message was sent

So this is adding a lot of overhead in cases where the parallel processing unit needs to be 'interactive', vs. cases where it is just doing processing and automatically spitting out results.
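
(A runnable sketch of that round trip with multiprocessing; loadVendorDLL is a hypothetical stand-in for loading the real single-instance library:)

import multiprocessing

def loadVendorDLL():
    # Stand-in for the real vendor DLL (hypothetical):
    class FakeDLL(object):
        def sendCommand(self, cmd):
            return 'OK: ' + cmd
    return FakeDLL()

def deviceProcess(requests, replies):
    dll = loadVendorDLL()         # the DLL instance lives only in this process
    while True:
        command = requests.get()  # block until the main process asks for something
        if command is None:       # sentinel: shut down
            break
        replies.put(dll.sendCommand(command))  # extra hop + copy per call

if __name__ == '__main__':
    requests = multiprocessing.Queue()
    replies = multiprocessing.Queue()
    proc = multiprocessing.Process(target=deviceProcess,
                                   args=(requests, replies))
    proc.start()
    requests.put('START_RECORDING')  # every interaction is a two-queue round trip
    print(replies.get())
    requests.put(None)
    proc.join()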

Do I have this right, or am I missing anything that makes the two-process route less of a concern? I know it is probably easier to implement using two Python processes and multiprocessing module goodness, but I'm coming at this from a performance point of view as well, and my understanding that two processes cannot share a single instance of a DLL running in memory may make it a no-go for some eye trackers. So I am really hoping I am wrong!

A third option which 'may' be worth considering (I just thought of this now, so it could be very stupid) is not adding another thread at all, and using greenlet or Stackless Python to get the necessary time slicing via microthreads instead of OS threads. That may be all that is needed for this. In this case no real extra threads are created; instead you manage multiple microthreads (application-created thread constructs), slicing up time as needed between them, so you can often get work done during all those periods when the app is blocked on something or other. While you are still bound to one CPU, you can often make much better use of the CPU time. Since Python threads really just provide time slicing anyhow, why not just use microthreads and save all the GIL-induced thrashing and context switching altogether? Just a thought; a rough sketch of the explicit-yield style follows.
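
(A minimal greenlet sketch; the loop bodies are placeholders:)

from greenlet import greenlet

def experimentLoop():
    for frame in range(3):
        # ... draw a frame here ...
        poller.switch()   # explicitly yield some time to the tracker poller

def pollTracker():
    while True:
        # ... read a UDP packet here ...
        main.switch()     # hand control back to the experiment loop

main = greenlet(experimentLoop)
poller = greenlet(pollTracker)
main.switch()             # run until experimentLoop returns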

Thanks for any input.

Sol

Jonathan Peirce

May 8, 2012, 5:39:16 AM
to psycho...@googlegroups.com


On 08/05/2012 04:43, Sol Simpson wrote:
> Regarding using multiple processes vs. multiple 'threads': here are a
> couple of things to add to the pot of considerations.
>
> 1) ...using Python threads on a multicore CPU not only does not
> improve performance due to the GIL, but ... greatly reduces performance!
Yes, although the users that have put it into practice so far have
typically found that they could maintain sufficient drawing/polling
rates that it was bearable.
>
> 2) Regardless, this is just even more reason to look at how to take
> advantage of multiple cores by a) using the multiprocessing module
> that is part of the Python distribution now, or b) considering the
> alternative, which is what we have done on previous projects: creating
> your worker thread in a small C extension that releases the GIL right
> away so it is not bound by it
So far PsychoPy has avoided using any compiled code. That's really handy
for distribution purposes and should continue unless it's really
important to include native C code.

For similar reasons, if communication with the eyetracker can be done with
UDP, that is generally preferable to using a DLL, because the latter is
hard to support across all platforms (or even across versions of a
single platform).

On the other hand, for keyboard handling I'm struggling to get
multiprocess.py working with pyglet. I've a feeling that there we do
need to start looking at building our own keyboard event buffer from
scratch, using either ctypes or a C extension :-(
>
> 3) A case where the C thread approach makes some sense, I think, is
> when creating the parallel processing unit for the purpose of
> gathering various input/event streams, time stamping them,
> and then feeding a combined stream (if desired, sorted by time) to the
> Python interpreter. This is because the 'touch points' are minimized
> and can be handled very quickly. In this case all the input gathering
> would be done in C and would therefore be OS-specific, unless you used
> a cross-platform API like SDL for getting your keyboard and mouse (and
> joystick) inputs at least.
Currently this is done by pyglet, which uses ctypes to access
platform-specific libs in the OS. The headache that I'm running into is
that keyboard events are closely associated with a window, which means
that both the main process and the key-checking process need access to
the window events, and we have a problem. For an eye tracker that
wouldn't be a problem, and the separate thread can do exactly as you
expect: fetching events, possibly analysing them, and storing them
time-stamped in a buffer.
> So I guess this is a downside of the approach. If it can be done in
> ctypes it may not be as 'bad', if you think it is bad to begin with.
> ;) Regardless of the implementation, it makes sense to me that if
> keyboard and mouse handling is going to be moved, effectively all input
> processing should be moved, including TTL and analog input reading,
> and eye tracker data reading, for example.
That seems hard to handle in a general sense, because we won't know what
other things the user will want to poll/analyse, or even have access to.
> Having a common data integration hub for the
> experiment environment makes synchronizing the various event types much
> easier, IMO. This may not matter so much for keyboard and mouse inputs,
> because the timing accuracy of those events is very poor to begin with
> in general. However, for TTL, analog, voice triggering, and other
> device input like eye tracking, it can be much more important to the
> researcher that the time stamping is applied as quickly as possible,
> and that the relative order of events is as correct as possible.
Agreed, and this is another reason I think we may need to write our own
platform-specific keyboard/mouse handling code. But I don't think it
means we need them all to be in one process, only that they all have
access to the system clock and represent time in the same format.
>
> 4) One practical issue that I do not know the answer to, and would
> really appreciate any feedback on, is the case where you need to
> load a library, but the library is only designed to support one
> instance being loaded at a time on a PC for it to function properly.
I imagine they should be loaded into an eyetracking
thread/process/microthread and only there. That thread would output
useful data for the user on the (specific or general) event stack. But
the main thread should only need to check that event stack, not the
eyetracker itself.

My feeling is that this is the sort of thing that might cause you to
need to integrate more tightly into psychopy, if people think a
'general' event stack is what's needed. Having multiple event buffers
(one for keyboard/mouse, one for TTL, one for eyetrackers...) is easier
to write independently but maybe less efficient.
>
> But this is what I am not sure about: what about the case of using two
> processes? Two separate processes, like what you would get by using
> multiprocessing, cannot share a single DLL instance, can they? If they
> cannot, then it means ALL interaction with the device needs to be done
> via the second process, not just collecting data. This adds a lot of
> overhead and extra time, because of the messaging that would need to
> occur every time you want to do anything with the external device.
I doubt it needs to be very interactive (but I don't know about
eyetrackers). Possibly psychopy would need to send the tracker messages
like "the stimulus is now presented, go ahead and take your calibration
reading" but on the whole I imagine the communication is pretty much
one-way.
>
> A third option which 'may' be worth considering (I just thought of this
> now, so it could be very stupid) is not adding another thread at all,
> and using greenlet or Stackless Python
I don't know anything about these, but they may be useful.

Jon

Jonathan Peirce

May 8, 2012, 6:08:59 AM
to psycho...@googlegroups.com
This is great news Sol, and welcome to the party!

Having a unified eye-tracker solution will be fabulous. Although I know some users have been 'rolling their own', I'm sure the field would benefit from something more general.

Some (probably unnecessary) tips to maximise usage and contributions from others:
    1 git/github has been very useful for managing contributions and getting interaction on the code
    2 making releases available via easy_install/pip in a platform-independent way is really important. Quite a few hardware manufacturers require the user to go to their own website, then install to a non-standard location :-/
    3 keeping things as much in pure-python as possible.
        a) it's /much/ easier to distribute if there isn't platform-specific compilation to do
        b) people are more willing to dig around in python code than in C code, so you'll get people saying "I discovered this bug and fixed it" rather than "Your package doesn't work, why not?"

Probably those are things you're already aware of, but lots of hardware manufacturers add python support as something of an afterthought and don't do it in a very standard or convenient way.

best wishes,
Jon

Michael MacAskill

May 8, 2012, 6:22:53 AM
to psycho...@googlegroups.com
Hi Sol,

I've heard of the COGAIN initiative, which is a great idea, and it's exciting to hear that you want to base things on PsychoPy.

Re the SMI interface, I've used the remote UDP system to control everything we need completely from within PsychoPy (e.g. starting and stopping recordings, embedding stimulus info in the binary data file, calibration, displaying bitmaps within iView). Re real-time streaming, I've only set up a proof-of-concept thread to monitor that, and it seemed to cope at the full 1250 Hz rate without problems. Denis Engemann is exploring some further work on that at the moment, and I'm looking forward to hearing what he achieves before I get back into it (to avoid reinventing the wheel). As you say, there may be issues to iron out with the GIL and multiprocessing, but it might be worth checking the performance of simple threads first, as handling UDP seems to be pretty well optimised (I guess it has to deal with real-time voice streaming, etc.).

Echoing Jon's message in this thread, my strong preference is to use the UDP ASCII interface rather than SMI's SDK and Windows DLL. The main reason is simply that it is cross-platform, plus the simplicity of being in pure Python code. But the other point is that I haven't found anything to date that I wasn't able to achieve by just using the simple UDP messaging system (which must be what they are doing under the hood in the SDK too), so I think the DLL just introduces an unnecessary layer of complexity for no real value. (But it may be of great benefit for people who are developing compiled Windows executables, outside of the Python ecosystem.)

Happy to discuss the nuts and bolts of the UDP interface to SMI trackers, but the arcane details of that might be best kept off-list.

Cheers,

Michael
--
Michael R. MacAskill, PhD michael....@nzbri.org
Research Director,
New Zealand Brain Research Institute

66 Stewart St http://www.nzbri.org/macaskill
Christchurch 8011 Ph: +64 3 3786 072
NEW ZEALAND Fax: +64 3 3786 080

Sol Simpson

May 8, 2012, 8:01:55 AM
to psycho...@googlegroups.com
Thanks very much Jon. The feedback on how to roll out the project is really appreciated. I'll definitely follow 1 and 2 when there is something worth sharing and asking for input on, etc. ;) It will be the first time I have used GitHub as the project maintainer, so I'll likely follow the model/process you guys have suggested, and may even have a question or two about that when the time comes. ;)

For item 3, I think what you have outlined is an important order-of-preference list, when following it does not significantly impact the performance of the device, and when a 'standard' type of interface is even available for the device at all that can be accessed via a Python lib. If the only access to the device is via a C DLL, then that is what we will have to work with. At the same time, especially for devices that output very high sampling rates at very low delay, it will be important that Python does not get a bad rep for dropping data or making data access too slow for such devices only because performance was not also considered a top priority. I think one of Python's great strengths is how lightweight and fast the interface to C libraries is, when you need to use them. It allows for the best of both worlds: pure Python implementations based on the standard library when possible, but easy extension and addition of high-performance modules by a simple wrapping of a DLL. A great example is numpy; it would never exist, and by definition neither would many other projects, unless this fast, efficient mechanism existed for making a C library look like a Python one.
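
(For instance, a trivial ctypes wrap; here the C math library on Linux stands in for a vendor DLL:)

import ctypes

lib = ctypes.CDLL('libm.so.6')        # load a shared C library by name
lib.cos.restype = ctypes.c_double     # declare the C function's signature
lib.cos.argtypes = [ctypes.c_double]
print(lib.cos(0.0))                   # the C function now looks like Python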

I totally agree that using C can be a pain and cause issues, but my sense is that this comes more from poor precompiled binary support for many packages that use external native libraries, forcing people to compile the C source into DLLs/libs when the package is installed, instead of providing precompiled libs along with the Python source. How many people complain about installing numpy?

As a rule of thumb, point 3 is a great guideline to follow, but it should not outweigh performance in the cases where it makes a big difference. In most cases the difference is insignificant; in some it is, though. I definitely am all for as little C as possible, though, don't get me wrong. ;) C extensions have their uses, and Python is a great scripting language in part because of its great support for accessing and tapping into them when needed.

For example, that is why I was asking about the UDP protocol performance of one of the tracking systems. If it does scale to 1250 Hz sampling with no big drop in performance, then it is definitely the way to go compared to accessing the C library; but if it has issues at those rates, then the C library route will be the way to go for that device. I'm hoping we can just use UDP; it is an open issue right now.

Thanks again for the great input and suggestions,

Sol

Sol Simpson

May 8, 2012, 3:33:14 PM
to psycho...@googlegroups.com
Hi Michael,

Thanks for the quick update. This sounds very promising indeed. As I have said, I would also prefer the UDP route if it can be used and offers similar results to the DLL, for all the reasons you and Jon have mentioned. Hopefully we can also avoid reinventing the wheel and can work together on hooking the UDP calls you have developed into the cross-tracker pyEyeTracker interface. As you mention, I can take that offline with you, so I will email you now about it. ;)

Thanks again!

Sol 