Hi Gang,

I have been looking into events and threading in Python and came up against the concept of OS time-slicing. From what I read, the only way to change the time slice is to recompile the kernel with smaller time slices. Is this correct? Does this mean that every time I give up my thread I can only get it back in (typically) 10 ms or more?

I ask because I have a small app running an event loop that polls for events. If I let the loop run as fast as possible, it polls at 44 kHz or so, but it grabs 100% CPU and sets off the fans, etc. I put a sleep statement in the loop, which works fine to calm down CPU usage, but I cannot sleep for less than 10 ms (which I now discover is related to OS time-slicing). I also tried event-driven code (using Python's Queue object), and it works fine, but I still run up against the 10 ms latency.

Is this the 'granularity' of response time we can get without hogging the CPU, or is there some trick to avoid hogging the CPU but still get really small response times?

Best
-Kaushik
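[Editor's note: for context, a blocking `Queue.get()` does not need a polling loop at all: the consumer thread sleeps inside the OS until an item is put on the queue, so it uses ~0% CPU while idle and its wake-up latency is set by the scheduler, not by a fixed sleep interval. A minimal sketch (the function and variable names are illustrative, not from the original app):]

```python
import queue
import threading
import time

def consumer(q, results):
    # q.get() blocks until an event arrives -- no busy-wait, no sleep loop.
    while True:
        event = q.get()
        if event is None:  # sentinel value used here to signal shutdown
            break
        results.append((event, time.time()))

q = queue.Queue()
results = []
t = threading.Thread(target=consumer, args=(q, results))
t.start()

t0 = time.time()
q.put("tick")   # wakes the consumer thread
q.put(None)     # shut it down
t.join()

latency = results[0][1] - t0
print(f"wake-up latency: {latency * 1000:.3f} ms")
```

On most desktop OSes the wake-up typically lands well under the 10 ms figure discussed above, but the exact latency depends on the scheduler and system load.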
This message has been checked for viruses but the contents of an attachment may still contain software viruses which could damage your computer system: you are advised to perform your own checks. Email communications with the University of Nottingham may be monitored as permitted by UK legislation.
Whoops, must have been some issue with running this the first time, because now it gives:

t0 = time.time(); wx.MilliSleep(1); time.time() - t0  ->  0.0011050701141357422

On May 11, 8:07 am, Kaushik Ghose <kaushik.gh...@gmail.com> wrote:
> Jon, Jeremy,
>
> Thanks for your replies. Mac OS X is a sluggard:
>
> import time
> t0 = time.time(); time.sleep(.001); time.time() - t0  ->  0.01031494140625
>
> import wx
> t0 = time.time(); wx.MilliSleep(1); print time.time() - t0  ->  0.0186131000519
>
> So, Jon, if I understand right: psychopy does its time-sensitive part by hogging the CPU during the polling?
>
> Thanks