clock()

robert...@gmail.com
Feb 19, 2006, 10:26:02 PM
Ok, so I am trying to establish how boost.timer works. I tried a few
things and, as far as I can tell, it doesn't work. So I opened the header
to see why and ran some tests to establish what is going on, and it all
boils down to the clock() function. Here is the code:


#include <iostream>
#include <ctime>            // std::clock
#include <unistd.h>         // sleep (POSIX)
#include <boost/timer.hpp>  // boost::timer

int main()
{
    boost::timer tmr;

    sleep(15);

    std::cout << std::clock() << std::endl;

    return 0;
}


That program always outputs "0". The documentation on clock() says that
it should return the clock ticks since the program started, and with the
15-second sleep that should be plenty. I don't understand.

example...@gmail.com
Feb 20, 2006, 3:36:21 AM
The clock() function returns an approximation of the processor time used
by the program, but the sleep() function does not use processor time.

JetSnaiL
Feb 20, 2006, 3:51:44 AM
> robert...@gmail.com wrote:

#include <ctime>
#include <cstdlib>
#include <iostream>
#include <unistd.h>  // sleep (POSIX)

using namespace std;

int main()
{
    sleep(10);
    cout << clock() << endl;
    return 0;
}


Outputs "15".

robert...@gmail.com
Feb 20, 2006, 9:35:49 AM

Well, I get 0.

Default User
Feb 20, 2006, 12:55:01 PM
robert...@gmail.com wrote:

> Ok, so I am trying to establish how boost.timer works and I try a few
> things and afaict it doesn't. So I open the header to see why and run
> some tests to establish what is going on and it all boils down to the
> clock() function. Here is the code:

1. You aren't using clock() correctly. From the C99 draft standard:

#include <time.h>
clock_t clock(void);

Description

[#2] The clock function determines the processor time used.

Returns

[#3] The clock function returns the implementation's best
approximation to the processor time used by the program
since the beginning of an implementation-defined era related
only to the program invocation. To determine the time in
seconds, the value returned by the clock function should be
divided by the value of the macro CLOCKS_PER_SEC. If the
processor time used is not available or its value cannot be
represented, the function returns the value (clock_t)-1.

In order to measure the time spent in a program, the
clock function should be called at the start of the
program and its return value subtracted from the value
returned by subsequent calls.


2. The clock() function measures process time, not wall time. You
reference sleep(), but that is NOT a standard function and we can't
really say anything about it. Probably comp.unix.programmer would be
better. Try it with calls to time() and difftime().

Brian

KeithSpook
Feb 20, 2006, 1:14:29 PM

There seem to be differences between certain implementations of
clock(). Microsoft
(http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vclib/html/_crt_clock.asp)
says that clock() returns the number of ticks since the process
started, while the Solaris 9 man pages state that it returns the number
of ticks of CPU time since the last call to clock(). (So try calling it
at the beginning of your program...)

So although it's a "standard C library call", it seems (to me) to be
rather implementation-dependent, at least for the time being.

~KS

Victor Bazarov
Feb 20, 2006, 1:17:48 PM
Default User wrote:
> [..]

> 2. The clock() function measures process time, not wall time.

Tell it to those who wrote Visual C++ C run-time library. Maybe
they will learn something new.

V
--
Please remove capital As from my address when replying by mail


Default User
Feb 20, 2006, 3:02:18 PM
Victor Bazarov wrote:

> Default User wrote:
> > [..]
> > 2. The clock() function measures process time, not wall time.
>
> Tell it to those who wrote Visual C++ C run-time library. Maybe
> they will learn something new.

Well, that's what the standard says.

Indeed this little program (warning, contains VC++ stuff) seems to
give the same value for wall and process time. I'm not sure if that
indicates a broken implementation of clock() or of Sleep(), the latter
being non-standard of course, so "broken" is questionable regardless.

Of course, the standard says "best approximation to the processor time
used by the program", so maybe that is the best.

#include <iostream>
#include <time.h>
#include <windows.h>

int main(void)
{
    time_t start, end;

    start = time(0);
    clock();
    Sleep(15 * 1000);  // equivalent to unix sleep(15)
    std::cout << clock() / CLOCKS_PER_SEC << std::endl;
    end = time(0);
    std::cout << difftime(end, start) << std::endl;

    return 0;
}

Results:

15
15

A relatively equivalent program on Solaris/gnu gives:

0
15


Brian

Default User
Feb 20, 2006, 3:07:45 PM
Victor Bazarov wrote:

> Default User wrote:
> > [..]
> > 2. The clock() function measures process time, not wall time.
>
> Tell it to those who wrote Visual C++ C run-time library. Maybe
> they will learn something new.

Thinking about it more, the standard says "time used by the program".
I guess it's not at all clear what that means, so my other comments
about it being "broken" aren't correct; one could consider the total
time since invocation, or the actual CPU usage, or some other method,
and be equally valid.

Brian

an...@servocomm.freeserve.co.uk
Feb 20, 2006, 3:36:36 PM
Default User wrote:

> In order to measure the time spent in a program, the
> clock function should be called at the start of the
> program and its return value subtracted from the value
> returned by subsequent calls.

Does anyone know what happens/is meant to happen if the clock overflows
between calls?

regards
Andy Little

Victor Bazarov
Feb 20, 2006, 3:41:36 PM

Yeah, yeah... They do provide GetProcessTimes or GetThreadTimes to
get the true CPU times used, but it seems that they are unable to
grasp what 'clock' is for or about. My speculation is that nobody
has told them. That's why I suggested you do it.

robert...@gmail.com
Feb 20, 2006, 3:51:10 PM

Victor Bazarov wrote:
> Default User wrote:
> > Victor Bazarov wrote:
> >
> >> Default User wrote:
> >>> [..]
> >>> 2. The clock() function measures process time, not wall time.
> >>
> >> Tell it to those who wrote Visual C++ C run-time library. Maybe
> >> they will learn something new.
> >
> > Thinking about it more, the standard says, "time used by the program".
> > I guess that it's not at all clear what that means. So my other
> > comments about it being "broken" aren't correct, one could consider
> > the total time since invocation or the actual CPU usage or some other
> > method and be equally valid.
>
> Yeah, yeah... They do provide GetProcessTimes or GetThreadTimes to
> get the true CPU times used, but it seems that they are unable to
> grasp what 'clock' is for or about. My speculation is that nobody
> has told them. That's why I suggested you do it.

Ok, then it bothers me that this "timer" class is part of boost. I
thought boost was supposed to be a peer-reviewed library, and this
class, which depends on a function that could basically return
anything, seems too trivial and buggy to be included.

Not only that, but it uses C-style casts.

Default User
Feb 20, 2006, 3:53:35 PM
Victor Bazarov wrote:

> Default User wrote:

> > Thinking about it more, the standard says, "time used by the
> > program". I guess that it's not at all clear what that means. So
> > my other comments about it being "broken" aren't correct, one could
> > consider the total time since invocation or the actual CPU usage or
> > some other method and be equally valid.
>
> Yeah, yeah... They do provide GetProcessTimes or GetThreadTimes to
> get the true CPU times used, but it seems that they are unable to
> grasp what 'clock' is for or about. My speculation is that nobody
> has told them. That's why I suggested you do it.

I took your comment somewhat differently :)

It does seem like having a clock() that is more or less time() with
different granularity isn't all that useful, especially as it makes
standard programs run very differently on other systems.

Brian

Default User
Feb 20, 2006, 3:59:49 PM
an...@servocomm.freeserve.co.uk wrote:

The C standard doesn't say. The POSIX standard has the following under
Application Usage:

The value returned by clock() may wrap around on some implementations.
For example, on a machine with 32-bit values for clock_t, it wraps
after 2147 seconds or 36 minutes.

Not much is specified about clock_t. Like time_t the only requirement
is that it be an "arithmetic type capable of representing times".

Theoretically it could be a signed integer and have UB on overflow, but
that would be a poor QOI.


Brian

an...@servocomm.freeserve.co.uk
Feb 20, 2006, 4:28:34 PM

Right, but I seem to remember from assembler that in 2'sC you can take
(something like) the absolute value of the difference between two
values to ignore overflow of an integer. You would need to detect a
rollover (by periodically checking whether the new value is less than
the previous one, resetting between two half-periods of the cycle, or
otherwise only doing this once per cycle) and add that on to the result
returned since startup, assuming some clock object with a ctor etc.
(the return type would have to be, e.g., a double, of course). Actually
it's quite complex to do it right, more so if you are guessing at the
behaviour!

regards
Andy Little

David Lindauer
Feb 20, 2006, 8:38:04 PM

an...@servocomm.freeserve.co.uk wrote:

The problem with that is that if you are really going to deal with
standard behavior, you can't assume 2'sC arithmetic...

David

Rolf Magnus
Feb 20, 2006, 9:38:36 PM
Default User wrote:

> Victor Bazarov wrote:
>
>> Default User wrote:
>> > [..]
>> > 2. The clock() function measures process time, not wall time.
>>
>> Tell it to those who wrote Visual C++ C run-time library. Maybe
>> they will learn something new.
>
> Thinking about it more, the standard says, "time used by the program".

It says "the processor time used by the program".

> I guess that it's not at all clear what that means. So my other
> comments about it being "broken" aren't correct, one could consider the
> total time since invocation or the actual CPU usage or some other
> method and be equally valid.

"The processor time" is for me the actual CPU time used.

an...@servocomm.freeserve.co.uk
Feb 20, 2006, 9:54:30 PM

I would be happy to adopt a divide-and-conquer approach, as there is
apparently no standard behaviour and the required spec seems reasonably
clear-cut. I'd be happy with just win32 and Linux x86. I haven't met
any non-2'sC architecture, though, FWIW. I use clock() as the basis of
a process timer for comparing code performance, and while not
life-critical it would be nice to sit down and get it right, I guess.
It's a bit pathetic if it overflows every 30 minutes or so, though, as
that gives an alarmingly high probability of it silently being wrong.

regards
Andy Little

an...@servocomm.freeserve.co.uk
Feb 21, 2006, 9:08:02 AM

an...@servocomm.freeserve.co.uk wrote:

> Right but I seem to remember from assembler that in 2'sC you can take
> (something like ) the absolute value of the difference between two
> values to ignore overflow of an integer.

FWIW, it seems that difftime(later, earlier) might do just this.

regards
Andy Little
