Hi Steve,
Sorry for the delay in responding…
> Has anyone used Psychopy with Pylink (with Eyelink II) recently? I need to use the eye tracker with my Psychopy code along with the Eyelink II eye tracker. I am a little confused by a few things. What is the difference between importing the Pylink module to use directly, and the iohub? I don't understand what an iohub is. I can only find some old posts (2009-2011) and I am not sure how relevant they are at the current time.
ioHub was written to provide a single interface to multiple eye trackers, i.e. we can write pretty much the same code to run an experiment on any of a number of supported trackers. Ideally, the only thing that needs to be changed is a simple configuration file which specifies that you are using an EyeLink II and some of its characteristics (such as its sampling rate and how to communicate with it). See the attached example .yaml file for an EyeLink system. You could simply edit such a file to suit your particular configuration.
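To give a flavour of what such a file contains, here is a fragment in the style of the ioHub EyeLink examples. The exact keys and values depend on your ioHub version and hardware, so treat this as a sketch and start from the attached example rather than from this:

```yaml
# Sketch of an ioHub device configuration for an EyeLink tracker.
# Key names follow the ioHub examples; check them against your version.
monitor_devices:
    - eyetracker.hw.sr_research.eyelink.EyeTracker:
        name: tracker
        model_name: EYELINK II
        runtime_settings:
            sampling_rate: 250        # Hz; set to what your system supports
            track_eyes: RIGHT_EYE
```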
If you go the Pylink route (which I'm not familiar with), you'll probably have to solve some of the problems which ioHub has already worked out, so you're probably better off going with ioHub.
> For my experiment, I am not interested in analysing any eye saccades post-experiment. I am only using it to make sure that the participant does not look outside of the central fixation area for the duration of the trial (attention experiment). If they do look outside of it during a trial, then I want to play a beep to inform them not to do that next time. This seems to me like simple use of the tracker. Could someone tell me if this sounds simple or complicated to implement? I have my code ready but no Pylink code yet. I have only found a few code statements from old posts.
Sure. In a code component in Builder, in the "every frame" tab, you would query the tracker for its current position. I'm assuming here that the fixation position is at (0,0) and that there is a tolerance of, say, a 20 pixel radius around that within which the eye needs to stay. Very crudely, it would look something like this:
# during the time the person is supposed to be fixating:
if t > central_fixation_start_time and t < central_fixation_end_time:
    # get the tracker's current gaze position:
    x, y = eyetracker.getPosition()
    d = np.sqrt(x**2 + y**2)  # Euclidean distance from the centre
    if d > 20:
        heldFixation = False  # could do something more sophisticated…
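The distance check itself can be factored into a small helper, which keeps the per-frame code short and makes the tolerance easy to test in isolation. The names here are hypothetical (not part of any PsychoPy API), and the 20 pixel tolerance just matches the example above:

```python
import math

FIXATION_TOLERANCE = 20.0  # pixels; the assumed radius from the example above


def held_fixation(x, y, tol=FIXATION_TOLERANCE):
    """Return True if the gaze point (x, y) lies within tol pixels
    of the central fixation position, taken to be (0, 0)."""
    return math.hypot(x, y) <= tol
```

In the running experiment, (x, y) would come from the tracker each frame, and a failed check is where you would trigger the feedback beep (PsychoPy's sound module provides a Sound class that can be played from a code component).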
> Could someone give me some guidance on how i would go about doing this? Maybe a simple breakdown of the steps I'd need to go through, or point me to some basic code that has done something similar? I can only find some small bits and pieces, which all feels a bit hacky. I'm hoping things have become a little clearer since the old posts and someone has had success in using it.
I'm working on creating a Builder eye tracking demo at the moment. Feel free to get in touch with me off-list if you want any further guidance. It would be useful to try out with someone who is using a different tracker (I use SMI systems).
Regards,
Michael
--
Michael R. MacAskill, PhD
Research Director, New Zealand Brain Research Institute
Senior Research Fellow, University of Otago, Christchurch
(Te Whare Wānanga o Otāgo, Otautahi)
66 Stewart St, Christchurch 8011, NEW ZEALAND
michael....@nzbri.org
Ph: +64 3 3786 072
http://www.nzbri.org/macaskill