Put in a time wasting loop:
volatile int i;
for(i=0;i<N;i++) ;
The exact time can be measured with an oscilloscope by toggling a port pin
before and after the for loop, adjusting the value of N until the delay is
right. If you need a small exact time, turn off interrupts during the for
loop.
There is also a function nanosleep() which may do the job.
I hope this helps.
Regards,
Graham Baxter
Freelance Software Engineer (vxWorks and pSOS)
gba...@bcsNOSPAM.org.uk
Nah, nanosleep may have a resolution of nanoseconds but it has a granularity
of one clock tick.
Chuck
Unless you have some external clock device with a higher resolution that
is capable of generating interrupts for you, I don't see how you can
possibly get anything with a higher resolution than your system clock.
I suppose you could try to just burn time with some bogus assembly
instructions. But the timing is going to be CPU (and clock speed)
dependent, and mighty tough to accurately calculate if you are using a
modern pipelined and cached processor.
I'd put in a taskDelay (1) and be done with it. If that's too long, jack
up your clock frequency.
--
T.E.D.
http://www.telepath.com/~dennison/Ted/TED.html
Be aware that some (most?) timestamp drivers reset the timestamp counter every
tick. Sometimes this is because of hardware limitations (on the PowerPC, for
example, the decrementer register serves as both the system tick and the
timestamp source), but sometimes it's because WindRiver made it so, like the
i8253Timer.c driver for x86, which resets the Pentium's TSC every tick.
James Marshall
Hi Ted,
Several CPUs allow you to count cycles with an RDTSC type
instruction which can be utilized to good effect.
<ciao>
-het
--
"Experience is like lead shot in your behind; one who has never
been shot doubts that such a thing exists at all." -Hans Lippman
Harvey Taylor mailto:h...@despam.pangea.ca http://www.pangea.ca/~het
Wow, thanks a lot for the info James. I always wondered how WindView
managed to get that kind of resolution, when the best I could manage was
a clock tick. Duuuh. I guess I could have read the WindView users' guide
myself.
I've got a couple of very good immediate uses for that, too.
I use the following to read the accurate clock on a PPC:
static int _accurate_time_fake = 0;

static unsigned long _accurate_time_now_upper32bits()
{
    __asm__ (" mftbu 3");  /* move time base upper into r3, the return register */
    __asm__ (" blr ");     /* return immediately; r3 already holds the result */
    return _accurate_time_fake++;  /* never reached; silences the "no return" warning */
}

static unsigned long _accurate_time_now_lower32bits()
{
    __asm__ (" mftb 3");   /* move time base lower into r3 */
    __asm__ (" blr ");
    return _accurate_time_fake++;  /* never reached; silences the "no return" warning */
}
Ben Abbott
bab...@swri.org
Many CPUs have a block of FLASH that is used for non-volatile storage.
This FLASH will NOT be cached (to allow FLASH programming algorithms to
work) and will usually have a well-defined access time for read cycles.
For very small delays, you can just use some integer number of reads from
this FLASH block to implement a predictable delay.
Yes, it is board specific, thus non-portable. But it is very lightweight
and reliable. On my last product where I needed this, FLASH reads
completed in exactly 140 ns. The instruction loop, being VERY small, fit
entirely in the instruction cache. We could change the clock rate on the
processor by a gross amount and the change in the delay time was minimal
because the access time on the FLASH swamped the execution time.
Hope that helps.
--
Douglas Fraser
Li Xin wrote:
A few years back Geoffrey Espin posted this
self-calibrating delay routine on the vxWorks exploder.
It's the next best thing to a hardware timer.
The first call to the delay routine takes a long time because
it then does the calibration. If you can't tolerate that
you can make a dummy call when installing the device driver.
You can test the accuracy on your platform with timexN.
Good luck !
--leif
...{most code snipped}
> for (iy = 0; iy < u; iy++)
> {
> for (ix = 0; ix < delayLoop; ix++)
> ;
> }
3 questions here:
First: Don't modern processors render this type of timing completely
infeasible? With pipelining, instruction caches, speculative execution,
etc., combined with possible task switches, how do you know that later
runs of this code bear any relation time-wise to the calibration run?
Second: If I want to optimize my code, what's to stop the compiler from
optimizing these loops into a couple of assignments?
Thirdly: What happens if the calling task gets switched out for a higher
priority task during the "delay"? There's no accounting for this extra
time that I see. When the calling task resumes, it will keep on looping
until the end. Depending on what you wanted the delay for, I guess this
might not always be a big problem.