NI USB Box & Fullscreen on OS X w/ Intel integrated cards


Erik Kastman

Jun 1, 2016, 10:46:45 AM
to 'Oliver Clark' via psychopy-dev
Hi all,

Sorry for the long absence from the list! I’ve been pulled away lately, but am about to get back into a psychopy block at work for a few different projects. In the run up, I’ve got a few questions from my absence:

1. Sol - I’ll be using a National Instruments USB box to interface with a hand gripper; I already have code using the NI DLL working, but was wondering if you had anything put into ioHub yet (and if not, whether you’d like the addition) (http://sine.ni.com/nips/cds/view/p/lang/en/nid/201987)

2. Fullscreen with MacBooks - I’m having trouble running a fullscreen window on a 10.11.5 El Capitan MacBook Pro with an Intel integrated card (Intel Iris Graphics 6100, 1536 MB RAM). Depending on the order in which I open the PsychoPy app and attach the external monitor, I either get a window drawn fullscreen only on the main screen, or drawn with the shape of the second screen on the full screen. I looked around at some of the recent archives and didn’t find anything that suggests this is a known problem. But, I also noticed this caveat in the Builder settings doc:

> If multiple screens are available (and if the graphics card is not an intel integrated graphics chip) - http://www.psychopy.org/builder/settings.html

Is that something we’re enforcing as a recommendation? Or something really breaking with the cards? This used to work on older models & versions, but did the modern OS X changes break something?

Good to be back,
Erik

Richard Höchenberger

Jun 1, 2016, 11:32:22 AM
to psychopy-dev
Hi,

On Wed, Jun 1, 2016 at 4:46 PM, Erik Kastman <erik.k...@gmail.com> wrote:
>
> 1. Sol - I’ll be using an National Instruments USB box to interface with a hand gripper; I already have code using the NI dll working but was wondering if you had anything put in to IOHub yet (and if not, whether you’d like the addition)  (http://sine.ni.com/nips/cds/view/p/lang/en/nid/201987)

I had discussed that briefly with Sol a year ago; there is currently no NI support in ioHub, and it wasn't planned either, because these boards are so extremely versatile that it's pretty much impossible to cover every use case. I've always used PyLibNIDAQmx, and have been very happy with it so far. When I have time, we'll try to make a new "stable" release of the package, which could probably easily be bundled with PsychoPy -- it only depends on NumPy, and Jon has always been very open when it comes to adding useful packages that don't harm anyone :)

How are you currently interfacing with the NI card, specifically?

Thanks,

    Richard

Erik Kastman

Jun 1, 2016, 11:41:12 AM
to psycho...@googlegroups.com
I was using PyDAQmx (https://pythonhosted.org/PyDAQmx/) with the NI NI-DAQmx Base Mac driver; since both of those are just wrappers around the NI C API, it looks like their interfaces are pretty similar. PyDAQmx seemed to be pretty lightweight and installed without a problem (once I had an NI driver for the right architecture). I’ll take a peek at PyLibNIDAQmx - I hadn’t seen it googling around, but it looks like it may actually be a bit clearer.

I was thinking of using ioHub just for thread management (use it to spin off another thread to talk to the box and then pass that back to the experiment); I don’t think that functionality would be too use-case specific. But, I’ve never really played with ioHub at all, so maybe my passing acquaintance is totally wrong. :)

I’ll probably throw together a first pass without ioHub, and then maybe we can think about ways to standardize something that would be more generally useful?

Thanks!



Richard Höchenberger

Jun 1, 2016, 11:45:20 AM
to psychopy-dev
Hi Erik,

On Wed, Jun 1, 2016 at 5:41 PM, Erik Kastman <erik.k...@gmail.com> wrote:
>
> I was using PyDAQmx (https://pythonhosted.org/PyDAQmx/) with the NI NiDAQmxBase mac driver

oh, in that case save your time and don't even try out PyLibNIDAQmx,
for it does not (yet) support NI-DAQmx Base :(

> I was thinking of using IOHub just for thread management (use it to spin off another thread to talk to the box and then pass that back to the experiment); I don’t think that functionality would be too use-case specific. But, I’ve never really played with IOHub at all, so maybe my passing acquaintance is totally wrong. :)

You're right, I had a similar use case a while back. Beware that in
ioHub you wouldn't use threads, but greenlets (??? oh my, it's been
quite a while!!), and you shouldn't use core.wait() and such, but the
respective gevent/greenlet functions. Or something like that ;) Sorry,
been too long. :)

But great to see you're working on this!!

All the best,

Richard

Richard Höchenberger

Jun 1, 2016, 11:52:27 AM
to psychopy-dev
On Wed, Jun 1, 2016 at 5:41 PM, Erik Kastman <erik.k...@gmail.com> wrote:
> I’ll probably throw together a first pass w/o IOHub, and then maybe we can
> think about ways to standardize something that would be more generally
> useful?

This reminds me, maybe I can dig out my old code? I did use ioHub to
interface with an olfactometer (serial port) via an NI card. I let
ioHub do something like, "set trigger HIGH, wait 900 ms, set trigger
LOW", so my experiment would run unaffected by that "sleeping", which
took place in ioHub. I could share that code if you're interested!

Richard

Erik Kastman

Jun 1, 2016, 12:03:56 PM
to psycho...@googlegroups.com
Thanks, I’d be much obliged! Sounds like that would be just about my use case. I may be overcomplicating it, but that would probably help me think through whatever I end up doing. Don’t kill yourself if you can’t find it fast, but I’ll take whatever you can give me. :)

What I’m really curious about is question #2 (using the second screen on OS X). Does anyone have insight into that one?

Jonathan Peirce

Jun 1, 2016, 12:12:43 PM
to psycho...@googlegroups.com
I can't help you with the nidaq thing but I do know about the second
screen.

Basically, Apple withdrew support for fullscreen mode on the second
screen, for no good reason that I can imagine. Like, made it impossible.
It's still possible on the primary monitor (and therefore in mirror mode
which is probably why one of your ordering methods gave you an
odd-shaped "full" screen) but for the second monitor all you can do is
create a screen-sized borderless window.

Possibly you can connect a monitor and set that up to be the primary
monitor? Not sure.

Or install Linux! ;-)

bw
Jon
--
Jonathan Peirce
http://www.peirce.org.uk






Jonathan Peirce

Jun 1, 2016, 12:55:04 PM
to psycho...@googlegroups.com


On 01/06/2016 16:44, Richard Höchenberger wrote:
> On Wed, Jun 1, 2016 at 5:41 PM, Erik Kastman <erik.k...@gmail.com> wrote:
>> I was using PyDAQmx (https://pythonhosted.org/PyDAQmx/) with the NI NiDAQmxBase mac driver
> oh, in that case save your time and don't even try out PyLibNIDAQmx,
> for it does not (yet) support NI-DAQmx Base :(
>
PyDAQmx is pip-installable whereas PyLibNIDAQmx requires
download/extract/setup (which means it can't be added as a simple
dependency). Can we stick with PyDAQmx or does the latter offer
something extra?

Jon

Erik Kastman

Jun 1, 2016, 1:03:55 PM
to psycho...@googlegroups.com
Thanks Jon, that's what I was afraid of. That was part of the fallout from the big graphics refactoring in 10.10?

Unless I'm wrong, the stdout window also always appears in the main window, and there's no code-level control of that, right? I have a request from users to watch responses on the screen and am thinking of the simplest instructions I could write that would fulfill that. Or Linux. ;)

Can you confirm in principle that this would be the same situation for other OS X task tools, e.g. Psychtoolbox? Just want to make sure I'm telling users the right thing.

Jonathan Peirce

Jun 1, 2016, 1:24:46 PM
to psycho...@googlegroups.com


On 01/06/2016 18:03, Erik Kastman wrote:
> Thanks Jon, that's what I was afraid of. That was part of the fallout from the big graphics refactoring in 10.10?
>
> Unless I'm wrong, the stdout window also always appears in the main window, and there's no code-level control of that, right? I have a request from users to watch responses on the screen and am thinking of the simplest instructions I could write that would fulfill that. Or Linux. ;)
Test and see if you drop frames when not in fullscreen mode. It might be
fine anyway if your machine is fast?
You can move a terminal window, or the PsychoPy Coder view, or you could
potentially create a second visual.Window and send messages there in
between trials (or during trials if you have a super-fast system).
> Can you confirm in principle that this would be the same situation for other OSX task tools, e.g. Pyschtoolbox? Just want to make sure I'm telling users the right thing.
Yes, inability to run genuine fullscreen has to be the same on all
software running recent OSX. The only difference is that Mario at PTB
will give you a more virulent anti-mac response if you ask the question ;-)

hope you work something out
Jon

Erik Kastman

Jun 1, 2016, 3:46:51 PM
to psycho...@googlegroups.com
My hacky solution is to add a darwin platform check to the compiled .py (if sys.platform.startswith('darwin'): screen = 0; else: screen = 1) and have people move the “primary” monitor over to the presentation monitor. That means that the desktop icons will be visible, but if these are dedicated research machines that should be ok.
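Written out as a full snippet (same logic as the one-liner above, with the quotes cleaned up):

```python
import sys

# The hack described above (illustrative): macOS can only go fullscreen
# on the primary display, so present on screen 0 there, screen 1 elsewhere.
if sys.platform.startswith('darwin'):
    screen = 0
else:
    screen = 1
```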

I know how you feel about “flexibility”, but would you be persuadable to have the Experiment Settings be evaluated (i.e. so I can put screen = $screen in the builder settings instead of hacking the compiled coder output every time for distribution?)

I experimented with using the second screen, but was getting 4-5 dropped frames out of 500 (~1%) from the timer demo, which is less than ideal. Not a deal breaker, but I’d rather use the fullscreen if I can.

I remember how Mario feels; that’s why I asked you! Thanks Jon!

As for pydaqmx, it seems to be doing everything I need it to, so until it breaks I’m going to stick with it and just write good documentation (especially because of its pip install).

Jon Peirce

Jun 1, 2016, 4:56:15 PM
to psycho...@googlegroups.com
On 01/06/2016 20:46, Erik Kastman wrote:
I know how you feel about “flexibility”, but would you be persuadable to have the Experiment Settings be evaluated (i.e. so I can put screen = $screen in the builder settings instead of hacking the compiled coder output every time for distribution?)
Ha ha, I try to be flexible where possible! ;-)

Problem 1: The reason you can't use code here at the moment is that we're subtracting 1 before writing the script. Builder users will expect the screens to be 1 and 2, like in all other software, but visual.Window expects the first screen to be zero, like in all other Python indexing. So Builder subtracts 1 while writing the script, and can only do that if the value is an int, not code. Solutions (all easy to implement; it's just a matter of knowing what to do):
  1. Subtract 1 at runtime not when the code is written. ie. each Builder script would have
            win = visual.Window(size, screen=1-1)
    • It wouldn't actually hurt anything (and it's extremely easy to implement)
    • but would look a bit odd to most users
  2. Subtract 1 when writing the code (like now) but only if we can convert to an int
    • most uses would look the same as now but you could also specify code
    • but very confusing because putting 1 in the box will give you the first screen but putting a variable that evaluates to 1 will give the second monitor! :-/ Nobody will read the manual and that's hard to debug without knowing why it does it!
  3. Maybe a combo? subtract 1 when writing (if can convert to int) and subtract 1 at runtime if not (doesn't look so weird to have screen=screenNum-1)
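For illustration only, option 3 could be sketched like this (hypothetical helper name, not the actual Builder code-writer):

```python
def compile_screen_param(value):
    """Sketch of option 3 (hypothetical helper, not PsychoPy's actual
    code-writer): subtract 1 while writing the script if the Screen
    field is a literal int, otherwise emit code that subtracts 1 at
    runtime."""
    try:
        return str(int(value) - 1)       # '2' -> '1' (zero-based index)
    except ValueError:
        return '(%s) - 1' % value        # 'screenNum' -> '(screenNum) - 1'
```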
Problem 2: Code components don't have an entry point anywhere before the window appears so either:
  1. You insert all your code into the entry box for screen in a single line, as in
        screen: $(not sys.platform.startswith('darwin'))+1
    The above evaluates to 1 for darwin and 2 for win32 which will both have 1 subtracted at runtime
  2. We add a new entry point for code after imports and before window creation (but hardly anyone will use it and it can't have the name "Begin experiment"). Possible.
  3. We move the current Begin Experiment to earlier (no, would break too many existing experiments)
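The one-liner in option 1 works because Python booleans are ints; the same expression as a small function, for clarity:

```python
def screen_for_platform(platform):
    # bool is an int in Python: False + 1 == 1 (darwin), True + 1 == 2.
    # Builder would then subtract 1 at runtime to get the 0-based index.
    return (not platform.startswith('darwin')) + 1
```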
What are your thoughts?
Jon
-- 
Jon Peirce
http://www.peirce.org.uk

Richard Höchenberger

Jun 1, 2016, 5:03:31 PM
to psychopy-dev
Hi Jon,

On Wed, Jun 1, 2016 at 6:55 PM, Jonathan Peirce <jon.p...@gmail.com> wrote:
> PyDAQmx is pip-installable whereas PyLibNIDAQmx requires
> download/extract/setup (which means it can't be added as a simple
> dependency). Can we stick with PyDAQmx or does the latter offer something
> extra?

the main differences are, as far as I can see:

- PyDAQmx supports both NI-DAQmx and NI-DAQmx Base. NI-DAQmx is the
Windows driver, supporting all (?) NI devices; while NI-DAQmx Base is
the Linux/OS X driver, supporting a limited number of devices only.
PyLibNIDAQmx supports NI-DAQmx only, but not NI-DAQmx Base. Meaning
that, essentially, PyLibNIDAQmx only works on Windows systems, while
PyDAQmx works on all systems.*

- PyDAQmx's API and "feel" is more or less directly derived from the
C API it wraps, and the user has to deal with ctype functionality
directly (e.g., passing arguments as ctypes.byref(arg)). Using PyDAQmx
feels like writing C code... whereas PyLibNIDAQmx pretty much
abstracts the C internals away from the user and offers a very
"Pythonic" API.

If the device interfacing is handled by ioHub and the actual
PyDAQmx/PyLibDAQmx calls are hidden from the user anyway (behind the
ioHub device's API), PyDAQmx offers greater portability.

For users who want to "manually" control their NI cards, PyLibNIDAQmx
is much easier to use than PyDAQmx; but as I pointed out, it currently
only supports Windows.

Like I said, we're planning to release a new version of PyLibNIDAQmx
"sometime soon": I have been working on a larger refactoring, but this
still lacks testing. Once I have time do test properly, we can make a
new release and push the package to PyPI. After that has been done, I
will start working on support for NI-DAQmx Base, to support Linux and
OS X again.

But until that release happens, I see no reason to include
PyLibNIDAQmx in PsychoPy just now...

Apart from that, the PyDAQmx/PyLibNIDAQmx should probably always stay
an optional dependency anyway, since they are both useless without an
NI-DAQmx (Base) driver installation on the host system (that's a nice
1.5 gig installation package for the recent Windows version :))

Cheers,

Richard



*Actually, that's not quite true. There used to be NI-DAQmx releases
which were feature-equivalent for all platforms. Only a couple years
back, NI cut down the functionality on the non-Windows platforms by
introducing NI-DAQmx Base. Meaning that, for VERY OLD versions of
NI-DAQmx, PyLibNIDAQmx would support Linux and OS X too.

Jon Peirce

Jun 1, 2016, 5:15:25 PM
to psycho...@googlegroups.com

Ha! I had missed the part where this is actually your project! I see. Nice! Well, it sounds like the Mac standalone only has one option, but for Windows I can't see any reason not to provide both in the standalone if yours is all pure Python. But the PyPI release will make life easier: although PsychoPy has a lot of dependencies (I'm currently having to rebuild my win32 dev installation before this next release), increasingly they all support pip and wheels, so very little double-click downloading or compiling is required.

I'm hoping to have a dev installation option a bit like the conda install, but based on a simple pip-install requirements script that will work on any Python that has pip installed already. :-)

best wishes
Jon

Richard Höchenberger

Jun 1, 2016, 5:26:23 PM
to psychopy-dev
On Wed, Jun 1, 2016 at 11:15 PM, Jon Peirce <jon.p...@gmail.com> wrote:
> Ha! I had missed the part where this is actually your project! I see.

Oh it's not my project, it's Pearu Peterson's, he wrote 95% of the
code or so. I'm just contributing here and there and preparing the
next release. So yeah, I'm kind of one of the maintainers, but I
wouldn't call it "my" project :)

Cheers,

Richard

Richard Höchenberger

Jun 1, 2016, 5:55:33 PM
to psychopy-dev
On Wed, Jun 1, 2016 at 11:15 PM, Jon Peirce <jon.p...@gmail.com> wrote:
> I'm hoping to have a dev installation option a bit like the conda install
> but using based on a simple pip install requirements script that will work
> on any python that has pip installed already. :-)

This would be great! I made one for my lab colleagues, because they
couldn't even launch PsychoPy, let alone play sounds or use ioHub,
after installing via pip without installing some additional packages
manually :)

I will let you know when we have a PyLibNIDAQmx release ready that can be
easily installed from PyPI.

All the best,

Richard

Richard Höchenberger

Jun 1, 2016, 6:32:48 PM
to psychopy-dev, Sol Simpson
Hi Erik,

On Wed, Jun 1, 2016 at 6:03 PM, Erik Kastman <erik.k...@gmail.com> wrote:
> Don’t kill yourself if you can’t find it fast, but I’ll take whatever you can give me. :)

I cannot find it on my laptop -- it's probably somewhere on our old lab
computer. But I did discover an old message from Sol; it's so
extensive it's almost like a manual ;) It should explain everything
you need to know. Enjoy, and good luck!

Richard


----------
Devices can block ioHub Event Processing

Yes, this is totally the case and is critical to keep in mind when
developing a device for iohub. iohub uses gevent and greenlet instead
of python threads (for the majority of things). There are +'s and -'s
to this. IMO, for what iohub needs to do, the +'s outweigh the -'s. ;)

The reasoning is that, due to the Python GIL only allowing one Python
thread to run at a time, having too many threads running in parallel
actually reduces the amount of 'real' work each thread can do. It can
be so bad in some use cases that single-threaded computation is
actually much faster than breaking the task up into threads. Using
greenlets allows you to program in a fashion similar to threads, but
is much more lightweight, i.e. switching between greenlets is
/much/ faster than switching between threads, and since both run
serially in Python, greenlets let Python spend more time doing real
work and less time on context switching.

The potential negative to this is that you control the scheduling of
your own 'task' code in iohub; it is not done for you via thread
context switching. (This can actually be an advantage once you get
used to it.) So you need to keep any calls made within your device
non-blocking and as fast as possible.
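[Editor's aside: the cooperative model described here can be illustrated without gevent at all; a pure-Python generator sketch of the same idea (illustrative only, not iohub code):]

```python
def tasklet(name, steps, log):
    # each "tasklet" is a generator: it does a small chunk of work,
    # then yields to let the scheduler run the others (cf. gevent.sleep(0))
    for i in range(steps):
        log.append((name, i))
        yield

def run(tasklets):
    # minimal round-robin scheduler: advance each tasklet one step per pass
    while tasklets:
        tasklets = [t for t in tasklets if next(t, 'done') != 'done']

log = []
run([tasklet('a', 2, log), tasklet('b', 2, log)])
# work interleaves: [('a', 0), ('b', 0), ('a', 1), ('b', 1)]
```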

In iohub, each device has its own greenlet that handles event
processing for that device. Basically this means that a device's
._poll() method, or ._handleNativeEvent() method, runs within a
tasklet for the device. Which of the two methods a device should use
depends on whether the native device needs to be polled for new
events, or can tell iohub when new events are available via a callback
function. Any device methods that can be called by the PsychoPy
process's script are run by the main iohub server's tasklet.

As you found out the hard way, if your device makes a call that is
blocking, then no other greenlet/tasklet can run until that blocking
call is done. This may sound serious at first, and it can be if the
design is not understood. In practice it is rarely an issue, since
devices will either have non-blocking / asynchronous versions of
blocking calls, or the native device delivers events in an
event-driven fashion, so blocking is not an issue to begin with.

Finally, sometimes you have no choice but to run the native device
interface in a separate Python thread, and then use the iohub
device's callback approach, or non-blocking Python queue calls, to
hand events to the iohub device tasklet. If the native device must be
polled for new events / data, and the device generates events at a
very slow rate, then using a thread may also make more sense than
using non-blocking methods and having the device poll every msec or
so. If a thread is almost always in a blocked state, then the issue
with the Python GIL is insignificant.

You can also explicitly 'break up' an iohub device method's execution
into chunks if it takes a long time to run and you do not want to
block other devices for that long. This is often called cooperative
multitasking: you control when one tasklet yields execution to others
that are waiting to run. The easiest way to do this is to call:

gevent.sleep(0)

within your method at points where you want to allow other tasklets to
run before continuing that method's code. The call to gevent.sleep(0)
effectively:
  1. tells the iohub scheduler to stop running your tasklet at that point;
  2. every other tasklet that is waiting to run gets a chance to run;
  3. control is then returned to the tasklet that called gevent.sleep,
     and execution continues.
If you actually want your device to sleep for x seconds during the
call to a function, then use gevent.sleep(x); this allows other
tasklets to run, and the current tasklet will be scheduled to resume
after x amount of time.

So gevent.sleep is an example of a blocking Python function
(time.sleep) that gevent has provided a non-blocking, asynchronous
version of, but that complexity is hidden from you. gevent provides an
async version of the Python socket module as well, for example.

For your example use case, you could make a non-blocking version of
the method, without using a thread, with something like:

def provideStimulation(delay, duration):
    ## wait for a certain amount of time
    gevent.sleep(delay)

    ## send a trigger (open valves)
    # ... your valve open code here ...

    ## wait for a certain amount of time (stimulation)
    gevent.sleep(duration)

    ## send another trigger (close valves)
    # ... your close valve code here ...

While the above method runs, other devices are not being blocked by
it, except at the two points of valve triggering. Note this highlights
the good side about tasklets; you decide when your method will stop
running and allow others to run, it is not arbitrarily decided by the
python GIL or OS thread scheduling.

If provideStimulation was called from your psychopy script, while it
would not block other iohub devices, it would block your psychopy
script until it returned. This is a design choice I made in the iohub
interface BTW, not something inherent to other packages used.

If you wanted the method to return to your psychopy script in an
async manner (so it returns prior to the actual method completing on
the iohub server), you could change the method as follows:

def provideStimulation(delay, duration):
    """Async version (for psychopy script) of the method."""
    def provideStimulationTask(delay, duration):
        ## wait for a certain amount of time
        gevent.sleep(delay)

        ## send a trigger (open valves)
        # ... your valve open code here ...

        ## wait for a certain amount of time (stimulation)
        gevent.sleep(duration)

        ## send another trigger (close valves)
        # ... your close valve code here ...

    gevent.spawn(provideStimulationTask, delay=delay, duration=duration)

Now the method will return very quickly back to your psychopy script
and the iohub process will run the provideStimulationTask function
asynchronously.
----------

Richard Höchenberger

Jun 1, 2016, 6:45:25 PM
to psychopy-dev, Sol Simpson
On Thu, Jun 2, 2016 at 12:32 AM, Richard Höchenberger
<richard.ho...@gmail.com> wrote:
> that should explain everything
> you need to know. Enjoy, and good luck!

... at least if you already know how to create a simple iohub
device? If not, feel free to ask :)

Cheers,

Richard

Jonathan Peirce

Jun 2, 2016, 6:55:28 AM
to psycho...@googlegroups.com
OK, I'll leave it out of the install script, but I could include your
current version (0.2.0dev) in the windows standalone release (that I'm
building)?

cheers

Richard Höchenberger

Jun 2, 2016, 9:01:22 AM
to psychopy-dev
On Thu, Jun 2, 2016 at 12:55 PM, Jonathan Peirce <jon.p...@gmail.com> wrote:
> but I could include your current version (0.2.0dev) in the windows
> standalone release (that I'm building)?

Ok great, thank you! :)

Richard

Erik Kastman

Jun 2, 2016, 12:12:39 PM
to psycho...@googlegroups.com
I don’t have a clue, but this will be a good check on the state of the documentation. ;)

Sol’s answer on greenlets is helpful; thanks for finding it. As far as I can see from the examples, the NI box usually blocks and collects measurements at its own frequency against its own internal clock, and I can’t [yet] quite find the right way to just get the latest value. I think this is just something I need to figure out from the API - it seems like it’s designed to block and collect a batch of measurements at the frequency of the box’s own internal clock, instead of simply passing on measurements, which is what I want.
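For what it's worth, one generic pattern for the "just give me the most recent sample" problem (a hypothetical sketch, not NI or ioHub API code) is to have the acquisition callback push into a bounded buffer that the experiment reads without blocking:

```python
from collections import deque

class LatestValue:
    """Hypothetical buffer pattern (not NI/ioHub API): the acquisition
    callback appends batches; the experiment polls the newest sample
    without ever blocking."""
    def __init__(self, maxlen=1000):
        self.buf = deque(maxlen=maxlen)   # old samples silently drop off

    def on_samples(self, samples):
        # would be called from the driver's every-N-samples callback
        self.buf.extend(samples)

    def latest(self, default=None):
        # non-blocking read of the most recent sample
        return self.buf[-1] if self.buf else default
```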

I’m still playing with it, but might come back and get some advice from you all if I get stuck (and if I try to implement it using ioHub).

Thanks!

Richard Höchenberger

Jun 2, 2016, 1:02:26 PM
to psychopy-dev
Hi Erik,

On Thu, Jun 2, 2016 at 6:12 PM, Erik Kastman <erik.k...@gmail.com> wrote:
> I don’t have a clue, but this will be a good check on the state of the documentation. ;)

I think there is pretty much none on this particular issue ;) I will
send you the skeleton of a very early attempt of mine to get things
working. That code actually calls an external library I wrote myself
which ITSELF wraps PyLibNIDAQmx, so it's nothing you can really use
anyway, BUT since it's so super-minimal, you will get an idea of what
is needed to get a simple ioHub device working.

> Sol’s answer on greenlets is helpful; thanks for finding it. As far as I can see from the examples, the NI box usually blocks and collects measurements at its own frequency and own internal clock, and I can’t [yet] quite find the right way to just get the last current value. I think this is just something I need to figure out from the API - it seems like its designed to block more and collect a number of measurements from the frequency of the box’s own internal clock instead of dumbly passing on measurements, which is what I want.

I cannot really follow here; the NI devices can surely collect and
generate data in the background (i.e., non-blocking). Also, I would
have thought you would use the DIO rather than analog ports for your
specific purpose (yes, I know, the USB bus-powered devices don't have
hardware-timed DIO, but using software-timed DIO via ioHub should
create a jitter of <1 ms only).

We recently switched to an internal card that supports hardware-timed
DIO (it can be synced to the internal analog clock), but so far I had
no reason to actually use that, since software timing worked well
enough.

If you tell me more about what specifically you are trying to do, I
can probably provide some more help! You may also message me off-list
if you prefer.

Cheers,

Richard

Erik Kastman

Jun 28, 2016, 10:07:34 PM
to psycho...@googlegroups.com
Apologies for the delay; just getting back to this. (Haven’t made it back to PyDAQmx, but that’s coming next week.)

My votes would be:

Problem 1 (discordance between builder numbering and coder numbering):  I think solution 3 (subtract 1 at compile if it evaluates to an int and subtract 1 at runtime if not) makes a lot of sense, doesn’t break anything, and adds the flexibility.

Problem 2 (post-import code entry): I was imagining putting all the code to evaluate in the dialog, but you’re right that that’s not a great use case. I would vote for an experiment-wide post-import code block - I often have entire routines at the start of my experiment split out just to set constants, load site config files, etc., and it seems like that would be a natural place for that setup, with the additional benefit that you could use it for the window.


For now I’m attempting to use the “window the size of the screen” trick because collaborators don’t want to change the primary display (too much possibility for human error). I’m definitely still dropping a handful of frames, but since A) I’m using non-slip timing (not timing by frames), B) button response components are using callOnFlip, and C) I’m not doing any psychophysics, I feel like a few dropped frames here doesn’t invalidate these MR experiments.

I’d like to be able to introspect the size of the second screen instead of having each site enter it. I’m trying to reuse code from the Builder DlgExperimentProperties instance method onFullScrChange, but I don’t know if I really need to initialize a full wx dialog just to get at the resolution, which is in self.paramCtrls['Screen'].valueCtrl.GetValue(). I also found this SO answer (http://stackoverflow.com/a/3129494/218118) for finding the resolution with wx, but I think that declaring the wx app is messing up the experiment, and I don’t want to create a dialog box just to use it to get properties. Do you know of anyone else who has done this? If not, I’ll add it to the docs once I get it (unless you think that would encourage recklessness ;) ). Point me in the right direction?

Thanks again,
Erik


Jonathan Peirce

Jun 29, 2016, 9:53:21 AM
to psycho...@googlegroups.com
Problem 1: numbering of screens
I agree, and I think this will be a 3-line fix in the Builder code generation, so let's do it.

Problem 2: code before screen
I've been talking with Sol about the idea that some Components should act more like Routines (i.e. they should appear on the Flow rather than be contained in a Routine). In his case, a component for running an eyetracker calibration should be a component, but arranged on the Flow, not within a Routine. Maybe this could be something similar. This isn't a 3-line fix, though, and needs some mental "cooking time" to think about what works best (e.g. does it appear in the Components panel like a Code Component, or in the Flow like an "Insert Routine" button?). Thoughts very welcome.

Problem 3: getting screen size
I think this should be possible from pyglet too, without needing to create the wx app. Pyglet has calls to determine what screens are available and what their resolution options are so should help you out

Jon
-- 
Jonathan Peirce
University of Nottingham

http://www.peirce.org.uk

Jeremy Gray

Jun 29, 2016, 10:03:39 AM
to psycho...@googlegroups.com
Quick note re Problem 2 (components that should act like routines, not be contained within one): this also applies to a component to insert a .psyexp file into the Flow. I hit issues of deep copy failing when doing that; it seems like it should be solvable, but I have not tracked it down.

--Jeremy

Erik Kastman

Jun 29, 2016, 2:16:10 PM
to psycho...@googlegroups.com
I love the idea of adding whole psyexp’s to the flow, but I agree that this would need a little experimentation. 

Personally I haven't found the “Add Loop” and “Add Routine” drop-downs that intuitive, so maybe there’s something we could do to unify them. I think just sketching out ideas may be the way to go, but hopefully these wouldn’t be huge changes, since people have learned how to use them.


You were right that pyglet had some simple screen access - my mistake going to wx. For future reference, here’s the quick code to access the last screen (secondary display) and not fail when there’s only 1 present:

import pyglet
display = pyglet.window.get_platform().get_default_display()
screens = display.get_screens()
width, height = screens[-1].width, screens[-1].height

# Setup the Window
win = visual.Window(size=(width, height), fullscr=False, screen=len(screens) - 1, allowGUI=False, ...)

Jonathan Peirce

Jun 30, 2016, 7:04:51 AM
to psycho...@googlegroups.com


On 29/06/16 19:16, Erik Kastman wrote:
>
> Personally I haven't found the "Add Loop" and "Add Routine” drop downs
> that intuitive, so maybe there’s something we could do to unify them.
> I think just sketching out ideas may be the way to go, but hopefully
> these wouldn’t be huge changes since people have learned how to use them.

Sure - if there's something more intuitive then it would be good to see :-)