x in Sleep(x)    Actual Pulse Width
.01              .606 microseconds
.1               .595 microseconds
1                10 milliseconds
10               10 milliseconds
11               20 milliseconds
15               20 milliseconds
20               20 milliseconds
21               30 milliseconds
Next, I created a simple MFC application using the project wizard, called
the SetTimer() function, and toggled the same bit in my own CALLBACK
function. These are the results:
x in SetTimer()  Actual Pulse Width
.01              10 milliseconds
.1               10 milliseconds
1                10 milliseconds
5                10 milliseconds
10               10 milliseconds
11               20 milliseconds
20               20 milliseconds
21               30 milliseconds
Both functions say their resolution is 1ms. How do I account for the
less than 1ms pulse and the 10ms resolution? What is the shortest delay
I can expect to get using SetTimer()? Are there any other “interrupt”
style timers to generate a 1ms interrupt?
Thanks
> Both functions say their resolution is 1ms. How do I account for the
> less than 1ms pulse and the 10ms resolution? What is the shortest delay
> I can expect to get using SetTimer()? Are there any other "interrupt"
> style timers to generate a 1ms interrupt?
How are you measuring the actual timing?
yugami wrote:
I am measuring the timing with a LeCroy scope connected to the output bit the
program is toggling on the OPTO22 card. I have the scope set up to give me the
average pulse width. The times I gave are actually averages. There is some
variability due to the other things Windows is doing.
Using a single x86 CPU on NT/2000/XP, the expected granularity
of the system tick is 10 ms. With dual x86 CPUs, you may expect
somewhat different results, but within a factor of 2 (I think
it's 7.5 or 15 ms, but I'm not sure; I don't have access to
one). On 9x, I think it is approximately 55 ms (ugh!).
The resolution is not the same as the granularity!
Sleep takes an integer (DWORD) argument. Any floating-point value less
than 1 just gets truncated to integer 0 when you call Sleep. The
sub-millisecond results simply show how long it takes Sleep(0) to
return immediately.
Since any call to Sleep, even with an argument of 0, may result
in a process switch, sometimes it will take one tick (10 ms in
this case) to return. In general, you may expect +/- one tick
(+/- 10 ms) variation in results when you call Sleep. But
then, since NT is a preemptive multitasking system, you're
going to have to expect variations in in-line code execution
whether or not you ever call Sleep.
You can "typically" get 1 ms granularity using multimedia timers.
Check out timeBeginPeriod, timeEndPeriod, timeGetDevCaps,
timeSetEvent, timeKillEvent, and TimeProc. There is a bit of
work to get a processor-yielding delay using this API, what
with having to implement a callback and set up an event or
other messaging object, but it is not terribly difficult.
If you need better than 1 ms delay granularity, you're using
the wrong OS. However, having said that, I have gotten pretty
impressive results making short delays by calling
QueryPerformanceCounter in a tight loop, or even by accessing
the CPU's built-in CPU-clock-frequency counter in assembly
language (a 1 GHz CPU has a very accurate 1 ns counter, for
example). You can even resort to giving your process "high"
or even "real time" priority class (see SetPriorityClass),
and giving your process thread(s) the highest priority
(see SetThreadPriority), but you better not do CPU intensive
work for very long periods (more than a few ms) with these
settings, because you have virtually unlimited power to
lock out system critical tasks or even lock up the entire
system. Used judiciously and with informed knowledge,
these techniques can all be useful if you are really serious.
Beware of a QueryPerformanceCounter-related hardware glitch
that makes it statistically unreliable on some (a lot of)
systems. See:
http://support.microsoft.com/directory/article.asp?ID=KB;EN-US;Q274323&
"Ted Busky" <no_spam...@prox.com> wrote in message
news:3CA35CD3...@prox.com...
M. Nouryan
> Since any call to Sleep, even with an argument of 0, may result
> in a process switch, sometimes it will take one tick (10 ms in
Actually, Sleep(0) is a documented way of giving up the rest of your time
slice, and (not near MSDN right now) if no ready threads of equal priority
exist it will return immediately.
>Both functions say their resolution is 1ms. How do I account for the
>less than 1ms pulse and the 10ms resolution? What is the shortest delay
>I can expect to get using SetTimer()? Are there any other “interrupt”
>style timers to generate a 1ms interrupt?
The standard Windows timer uses the resolution of the system clock,
which on a PC is (or used to be) 55 ms. The multimedia timer has a
higher resolution, and most PCs these days have the hardware to use
it. If you need a higher resolution timer, don't use the standard one.
Pete Barrett