
How to delay a task in the microseconds range


ling...@my-deja.com

Aug 3, 1999

Hi,

I wonder how to delay a task in the microseconds range.
The function taskDelay() delays a task in terms of ticks, so I
converted the time into a number of ticks as follows:

taskDelay((int)(dwMicroSeconds/1000000)* sysClkRateGet());

But in my case, the above will always be zero, because I need to delay a
task for only 1000 or 2000 microseconds.

Effectively, there will be no delay of the task at all.

Can anyone give me a solution for this?

Thanks in advance.

regards,
s.shiva



Markus Mitterer

Aug 3, 1999

A while ago I needed a similar function. First of all, I would calculate in
milliseconds, because with the system tick you won't get into a lower range anyway.
Then it is important that you do the multiplication first and the division
afterwards, because otherwise the result will always be zero due to the DWORD
data type. At the end I checked whether the result is zero while the requested
time is not zero; in that case I increased the delay time to one system tick.
Hope that helps you.
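
In code, that might look something like this (just a sketch; delayMsecCoarse()
is an illustrative name, not a VxWorks routine):

#include "vxWorks.h"
#include "sysLib.h"
#include "taskLib.h"

/* Convert milliseconds to ticks, multiplying before dividing so that
 * integer truncation does not zero out small values, and rounding any
 * nonzero request up to at least one system tick. */

void delayMsecCoarse (UINT32 msec)
    {
    UINT32 ticks = (msec * sysClkRateGet ()) / 1000;

    if (ticks == 0 && msec != 0)
        ticks = 1;                      /* round up to one full tick */

    taskDelay ((int) ticks);
    }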

Markus

Douglas Fraser

Aug 3, 1999

One thing to be aware of is that a delay of 1 tick will get
you at MOST one tick of delay. If you call taskDelay() just
before the timer fires, it will return almost immediately.
In other words, taskDelay(n) will result in a delay of between
n-1 and n ticks. If you really need hard microsecond delays,
you may have to resort to wasted cycles. It all depends on
what your delay needs are. If your processor provides an
auxiliary clock, you may want to use that.

Doug

bwed...@my-deja.com

Aug 3, 1999

In article <7o6021$t1l$1...@nnrp1.deja.com>,
ling...@my-deja.com wrote:

> I wonder how to delay a task in micro seconds range.
> The function taskdelay() delays a task in terms of ticks. so i
> converted the time into number of ticks as follows:
>
> taskDelay((int)(dwMicroSeconds/1000000)* sysClkRateGet());
>
> But In my case, The above will always be zero because i need to
> delay a task only for 1000 or 2000 micro seconds.

You don't mention your target. If you have an auxiliary clock, that is
what they are for. Look at the sysAuxClk functions.
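
A minimal sketch of that approach, using the standard sysAuxClk*() routines
with a binary semaphore given from the aux-clock ISR (auxDelayInit() and
auxDelayWait() are illustrative names, and the maximum aux-clock rate is
BSP dependent):

#include "vxWorks.h"
#include "semLib.h"
#include "sysLib.h"

static SEM_ID auxDelaySem;              /* given by the aux-clock ISR */

static void auxDelayIsr (int arg)
    {
    semGive (auxDelaySem);              /* wake the waiting task */
    }

STATUS auxDelayInit (int ticksPerSec)   /* e.g. 1000 for 1 ms resolution */
    {
    auxDelaySem = semBCreate (SEM_Q_FIFO, SEM_EMPTY);
    if (auxDelaySem == NULL)
        return (ERROR);

    if (sysAuxClkConnect ((FUNCPTR) auxDelayIsr, 0) != OK)
        return (ERROR);

    sysAuxClkRateSet (ticksPerSec);
    sysAuxClkEnable ();
    return (OK);
    }

void auxDelayWait (void)                /* block until the next aux tick */
    {
    semTake (auxDelaySem, WAIT_FOREVER);
    }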

Bruce

Georg Feil

Aug 5, 1999

ling...@my-deja.com writes:
>I wonder how to delay a task in micro seconds range.

It's not easy using software alone, so many boards have hardware facilities
to allow this. For example the mv2604 has a microsecond counter which can be
used for this kind of thing.

GEORG
--
Georg Feil
| http://www.sgl.crestech.ca/
Space Geodynamics Laboratory | Email: ge...@sgl.crestech.ca
CRESTech | Phone: (416) 665-5458
4850 Keele St./North York/Ont/Canada/M3J 3K1 | Fax: (416) 665-1815

Harvey Taylor

Aug 7, 1999

ling...@my-deja.com wrote:
> I wonder how to delay a task in micro seconds range.
>

If you are on an x86 system you can use the RDTSC time-stamp counter.
There is a similar PPC mnemonic.
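
As a rough sketch (x86 only, assuming a GNU-style compiler and a fixed, known
CPU clock; CPU_HZ and rdtscDelayUsec() are illustrative, not part of any
VxWorks API):

#define CPU_HZ 200000000ULL             /* assumed CPU frequency: 200 MHz */

/* read the 64-bit time-stamp counter */
static unsigned long long rdtsc (void)
    {
    unsigned int lo, hi;

    __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
    return (((unsigned long long) hi << 32) | lo);
    }

/* busy-wait for roughly <usec> microseconds */
void rdtscDelayUsec (unsigned int usec)
    {
    unsigned long long wanted = (CPU_HZ / 1000000ULL) * usec;
    unsigned long long start  = rdtsc ();

    while ((rdtsc () - start) < wanted)
        ;
    }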
<l8r>
-het


--
"you don't really appreciate how smart a moron is
until you try to design a robot..." -a robotics designer

Harvey Taylor Internet: h...@despam.portal.ca

johne...@oscsystems.com

Aug 7, 1999
to ling...@my-deja.com

ling...@my-deja.com wrote:
>
> HI

>
> I wonder how to delay a task in micro seconds range.
> The function taskdelay() delays a task in terms of ticks. so i
> converted the time into number of ticks as follows:
>
> taskDelay((int)(dwMicroSeconds/1000000)* sysClkRateGet());
>
> But In my case, The above will always be zero because i need to delay a
> task only for 1000 or 2000 micro seconds.
>
> Effectively there will be no delay in task.
>
> can anyone give me a solution for this.

Here is a routine that I found a couple of months ago. I cleaned it up
a bit and it has been used on several different projects. The
auto-calibration nature of this routine makes it nice. Seems to survive
code optimization as well. It works great down into the microsecond
level. Use the timexN routine to test its accuracy on your platform.

Regards,
--
John W. Edwards, Sr. Engineer
Avionics Software
Orbital Sciences Corporation / Fairchild Defense
Germantown, MD USA
JohnE...@oscsystems.com

============================================================
Submitted-by: Geoffrey Espin <es...@idiom.com>, Fri Dec 5 11:35:17 1997


Marc, et al,

> Submitted-by ma...@onion.jhuapl.edu Fri Dec 5 07:26:18 1997
> We are using vxWorks 5.1.1 and are using a clock rate of 100 ticks/sec;
> we would like to switch to a higher clock rate (1000 ticks/sec) and need
> to know if/how using a higher clock rate might negatively impact performance.

Obviously, the more interrupts per second from any source, the more it will
cost you. 1000/sec on lots of CPUs these days is probably not a big deal.
If an interrupt costs you 5 microseconds:

1000 * 5 usec = 5 millisecs of mostly wasted time every second

But note that you cannot use taskDelay() from an interrupt handler! :-)

Attached is a portable sub-clock-tick "hard" (busy-loop) delay library.
I can't believe how many times I've seen code like (bletch!):

    for (ix = 0; ix < 0x40000; ix++) ;  /* spin wheels for 20 usecs (I hope) */

WRS does supply nanosleep(), but it is usually limited to the system clock
rate, I think.

Attached are my delayLib.c and delayLib.h. If you have any
suggestions or improvements let me know: es...@idiom.com.

Geoff
--
Geoffrey Espin es...@idiom.com
--

+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=
/* delayLib.c - self-calibrating hard delay routines */

/*
modification history
--------------------
27Mar96,espin  written.
*/

/*
DESCRIPTION
This module provides "hard" delay routines for microsecond and millisecond
periods.

EXAMPLE
.CS
    -> delayMsec (1);           /@ very first call used for calibration @/
    -> timexN delayMsec, 10
    timex: 75 reps, time per rep = 9555 +/- 222 (2%) microsecs
    value = 59 = 0x3b = ';'
    ->
.CE

The routines sysClkRateGet() and tickGet() are used to calibrate
the timing loop the first time either routine is called.  Therefore,
the first call takes much longer than requested.  If the system clock
rate is changed, a new calibration must be made explicitly by
passing a delay duration of -1 (0xffffffff).

INTERNAL
A one-shot timer could provide high-resolution sub-clock-tick
delay... but then this would be board dependent.
*/

#include "vxWorks.h"
#include "tickLib.h"
#include "sysLib.h"
#include <stdio.h>              /* printf() is used when DEBUG is TRUE */

#define DEBUG FALSE

void delayUsec (unsigned int u);
void delayMsec (unsigned int m);

/*******************************************************************************
*
* delayUsec - hard delay for <u> microseconds
*
* RETURNS: N/A
*/

#if DEBUG
int delayLoop = 0;
#endif /* DEBUG */

void delayUsec
    (
    unsigned int u              /* # of microsecs */
    )
    {
#if !DEBUG
    static int delayLoop = 0;
#endif /* !DEBUG */
    int ix;
    int iy;

    if (delayLoop == 0 || u == 0xffffffff)      /* need calibration? */
        {
        int maxLoop;
        int start = 0;
        int stop  = 0;
        int mpt   = (1000 * 1000) / sysClkRateGet ();   /* microsecs per tick */

        /* coarse calibration: double delayLoop until one tick is exceeded */

        for (delayLoop = 1; delayLoop < 0x1000 && stop == start; delayLoop <<= 1)
            {
            for (stop = start = tickGet (); start == stop; start = tickGet ())
                ;               /* wait for clock turn over */

            delayUsec (mpt);    /* single recursion */
            stop = tickGet ();
            }

        maxLoop = delayLoop / 2;        /* loop above overshoots */
#if DEBUG
        printf ("maxLoop = %d\n", maxLoop);
#endif /* DEBUG */
        start = 0;
        stop  = 0;
        if (delayLoop < 4)
            delayLoop = 4;

        /* fine calibration: step delayLoop up until one tick is exceeded */

        for (delayLoop /= 4; delayLoop < maxLoop && stop == start; delayLoop++)
            {
            for (stop = start = tickGet (); start == stop; start = tickGet ())
                ;               /* wait for clock turn over */

            delayUsec (mpt);    /* single recursion */
            stop = tickGet ();
            }
#if DEBUG
        printf ("delayLoop = %d\n", delayLoop);
#endif /* DEBUG */
        }

    for (iy = 0; iy < u; iy++)
        {
        for (ix = 0; ix < delayLoop; ix++)
            ;
        }
    }

/*******************************************************************************
*
* delayMsec - hard delay for <m> milliseconds
*
* RETURNS: N/A
*/

void delayMsec
    (
    unsigned int m              /* # of millisecs */
    )
    {
    delayUsec (m * 1000);
    }

Here's the include file:

/* delayLib.h - self-calibrating hard delay routines header file */

/*
modification history
--------------------
27Mar96,espin  written.
*/

#ifndef __INCdelayLibh
#define __INCdelayLibh

#if defined(__STDC__) || defined(__cplusplus)
extern void delayUsec (unsigned int u);
extern void delayMsec (unsigned int m);
#else
extern void delayUsec ();
extern void delayMsec ();
#endif /* __STDC__ || __cplusplus */

#endif /* __INCdelayLibh */

Manfred Fischer

Aug 19, 1999

> I wonder how to delay a task in micro seconds range.
> The function taskdelay() delays a task in terms of ticks. so i
> converted the time into number of ticks as follows:
>
> taskDelay((int)(dwMicroSeconds/1000000)* sysClkRateGet());
>
> But In my case, The above will always be zero because i need to delay a
> task only for 1000 or 2000 micro seconds.
>
> Effectively there will be no delay in task.
>
> can anyone give me a solution for this.
>
I think your problem is that the default sysClkRate is 60. That means that
the shortest possible delay is 1/60 s, i.e. 16.67 ms or about 16,667
microseconds. You can change the rate with sysClkRateSet() (architecture
dependent), but remember that the actual delay can differ from the time you
set by up to one system tick. Another way to delay is the POSIX function
nanosleep().
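
For example, a small wrapper around nanosleep() might look like this
(sleepUsec() is just an illustrative name, and this assumes POSIX timer
support is included in your VxWorks configuration; the actual resolution is
still limited to one system clock tick):

#include "vxWorks.h"
#include <time.h>

void sleepUsec (unsigned int usec)
    {
    struct timespec ts;

    ts.tv_sec  = usec / 1000000;
    ts.tv_nsec = (usec % 1000000) * 1000;
    nanosleep (&ts, NULL);
    }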

manfred

Curt McDowell

Aug 19, 1999

Here's a routine to delay a specified number of microseconds.

#include <sys/types.h>
#include <vxWorks.h>
#include <selectLib.h>

/*
* usleep(usec)
*/

void usleep(UINT32 usec)
{
struct timeval tv;
tv.tv_sec = (time_t) (usec / 1000000);
tv.tv_usec = (long) (usec % 1000000);
select(0, (fd_set *) 0, (fd_set *) 0, (fd_set *) 0, &tv);
}

Curt McDowell
c...@broadcom.com

Charles H. Chapman

Aug 20, 1999

On Thu, 19 Aug 1999 18:08:02 GMT, Curt McDowell <c...@broadcom.com> wrote:
>Here's a routine to delay a specified number of microseconds.
>
> struct timeval tv;
> tv.tv_sec = (time_t) (usec / 1000000);
> tv.tv_usec = (long) (usec % 1000000);
> select(0, (fd_set *) 0, (fd_set *) 0, (fd_set *) 0, &tv);
>
>Manfred Fischer wrote:

>> dependent) but remember that the delay time you set can differ in time one
>> system tic. An other way to delay can be the posix function nanosleep().

Nope, neither select nor nanosleep will allow you to time things down to
the microsecond. Even though you can specify the delay in microseconds,
the resolution is still +/- one clock tick.

Chuck


Charlie Grames

Aug 20, 1999

It is important to note that select() and all POSIX clock functions
(including nanosleep()) are based on the system clock. You may request
a delay time smaller than the resolution of the system clock with these
functions, but the actual delay will not expire until the next system
clock tick.

If you want smaller delays, you need to

1) Increase the rate of the system clock (which may cause undesired
performance problems--best to evaluate on your own platform)

2) Use the sysAuxClk routines at an appropriate rate in conjunction with
a semaphore give and take

3) Create a non-blocking processor delay loop, e.g.:
for (i = 0; i < n; i++);

Option 3 is the least desirable because of its nonportability. We chose
option 1 on the MVME2700 and increased the system clock rate to 1000
Hz. Doing so allows you to use select() and nanosleep() with 1 ms
resolution.
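
A minimal sketch of option 1, done once at startup (clkRateInit() is an
illustrative name; the achievable rate and the added interrupt overhead are
platform dependent):

#include "vxWorks.h"
#include "sysLib.h"
#include <stdio.h>

void clkRateInit (void)
    {
    if (sysClkRateSet (1000) != OK)     /* 1000 ticks per second */
        printf ("clkRateInit: sysClkRateSet (1000) failed\n");
    }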

Charlie Grames
The Boeing Company
Charles.R.Grames @ boeing.com

Curt McDowell wrote:
>
> Here's a routine to delay a specified number of microseconds.
>

> #include <sys/types.h>
> #include <vxWorks.h>
> #include <selectLib.h>
>
> /*
> * usleep(usec)
> */
>
> void usleep(UINT32 usec)
> {

> struct timeval tv;
> tv.tv_sec = (time_t) (usec / 1000000);
> tv.tv_usec = (long) (usec % 1000000);
> select(0, (fd_set *) 0, (fd_set *) 0, (fd_set *) 0, &tv);
> }
>

> Curt McDowell
> c...@broadcom.com
>
> Manfred Fischer wrote:
> >
> > > I wonder how to delay a task in micro seconds range.
> > > The function taskdelay() delays a task in terms of ticks. so i
> > > converted the time into number of ticks as follows:
> > >
> > > taskDelay((int)(dwMicroSeconds/1000000)* sysClkRateGet());
> > >
> > > But In my case, The above will always be zero because i need to delay a
> > > task only for 1000 or 2000 micro seconds.
> > >
> > > Effectively there will be no delay in task.
> > >
> > > can anyone give me a solution for this.
> > >
> > I think your problem is, that the default sysClkRate is 60. That means that
> > the shortest time a delay can be is 0.01667s or 16.67 ms or 16666
> > microseconds. You can change the rate with sysClkRateSet() (achitecture

> > dependent) but remember that the delay time you set can differ in time one
> > system tic. An other way to delay can be the posix function nanosleep().
> >

> > manfred

Steve Doiel

Aug 23, 1999

Charlie Grames <nob...@nowhere.com> wrote in message
news:37BD7B6D...@nowhere.com...

> It is important to note that select() and all POSIX clock functions
> (including nanosleep()) are based off the system clock. You may request
> a delay time less than the resolution of the system clock with these
> functions, but the actual delay will not expire until the next system
> clock tick.
>
> If you want smaller delays, you need to
>
> 1) Increase the rate of the system clock (which may cause undesired
> performance problems--best to evaluate on your own platform)
>
> 2) Use the sysAuxClk routines at an appropriate rate in conjunction with
> a semaphore give and take
>
> 3) Create a non-blocking processor delay loop, e.g.:
> for (i = 0; i < n; i++);

Option 4 is to make use of a clock (if available) on your target processor
that is not in use by the Wind kernel. In doing so you have to write your
own hardware handling and interrupt routines.

Microsecond delays are (at least on some architectures) pushing the
envelope in terms of how fast you can process. It is not uncommon for
things like context switches to be measured in microseconds.

SteveD

Ag@whd

Aug 31, 1999

On Fri, 20 Aug 1999 15:59:41 GMT, Charlie Grames <nob...@nowhere.com>
nn...@news.boeing.com (Boeing NNTP News Access) penned these words:

>3) Create a non-blocking processor delay loop, e.g.:
> for (i = 0; i < n; i++);
>

>Option 3 is the least desirable because of its nonportability. We chose
>option 1 on the MVME2700 and increased the system clock rate to 1000
>Hz. Doing so allows you to use select() and nanosleep() with 1 ms
>resolution.

To make it more "portable" you could calibrate such a loop by counting how
many iterations can be performed in, say, one second. However, this won't
help if you must have a specific and precise microsecond delay, since
interrupts can stretch it from the <n> usec you want to quite possibly a
great deal more, unless you disable interrupts (both in the calibration loop
and in the delay loop).

You may like to look at your BSP-specific "sysTimestamp()" collection of
routines, which I am still trying to track down in my BSP. I can't find the
source code or a description of their parameters, except that the routines
("sysTimestampGetFreq()" and "sysTimestampGetPeriod()", or names similar to
these) return large numbers, suggesting high resolution.
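
Assuming these are the optional timestamp-driver routines (usually named
sysTimestamp(), sysTimestampFreq(), sysTimestampPeriod() and
sysTimestampEnable()), a speculative busy-wait built on them might look like
this (tsDelayUsec() is an illustrative name; availability, frequency and
rollover behaviour are BSP dependent, and this sketch ignores counter
rollover):

#include "vxWorks.h"
#include "sysLib.h"

void tsDelayUsec (UINT32 usec)
    {
    UINT32 freq;
    UINT32 wanted;
    UINT32 start;

    (void) sysTimestampEnable ();       /* make sure the timer is running */
    freq   = sysTimestampFreq ();       /* timestamp counts per second */
    wanted = (UINT32) (((double) usec * freq) / 1000000.0);
    start  = sysTimestamp ();

    while ((sysTimestamp () - start) < wanted)
        ;
    }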

Adrian

WWW WWW Adrian Gothard
WWW ww WWW White Horse Design
WWWWWWWWWW
WWWW WWWW w...@zetnet.co.uk, http://www.users.zetnet.co.uk/whd
---
It's good to be me, yes, Heh heh heh, 'purrr', 'purrr'.
