
Execution time of code?


mlt

Mar 5, 2009, 4:16:24 PM
I have some code that implements various search and sorting algorithms. I
would like to get some kind of timing measurement for various parts of the
algorithm, like:

void myAlgo() {
    ...
    ...

    float timer = // start measurement timer
    for (...)
    {
        // do various calculations.
    }

    std::cout << "time spent = " << timer;


    float timer2 = // start measurement timer
    for (...)
    {
        // do some other calculations.
    }
    std::cout << "time2 spent = " << timer2;

    ...
    ...

}

Is there some built-in function in C++ that is designed for this kind of
purpose? I am also interested in knowing whether there exists some performance
measuring framework for this kind of task.

Victor Bazarov

Mar 5, 2009, 4:20:24 PM

There is 'clock()', but know that it's the last function you actually
want to use to measure the performance of your code. Look into what
your OS provides. Windows has 'QueryPerformanceCounter'. UNIX
undoubtedly has something similar.
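
For instance (a bare sketch, Windows-specific, error checking omitted):

#include <windows.h>
#include <iostream>

int main()
{
    LARGE_INTEGER freq, start, stop;
    QueryPerformanceFrequency(&freq);    // counter ticks per second
    QueryPerformanceCounter(&start);

    // ... code under test ...

    QueryPerformanceCounter(&stop);
    std::cout << "time spent = "
        << double(stop.QuadPart - start.QuadPart) / freq.QuadPart
        << " s\n";
}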

Or simply get yourself a profiler. Trust me, your code and your
customers will love you for getting the performance where it should be.

V
--
Please remove capital 'A's when replying by e-mail
I do not respond to top-posted replies, please don't ask

mlt

Mar 5, 2009, 4:34:59 PM

"Victor Bazarov" <v.Aba...@comAcast.net> wrote in message
news:gopfmo$skf$1...@news.datemas.de...

I have googled 'C++ profiler' and got a lot of different hits. Most of them
deal with analysing where calls are made and not so much with how long a
block of code takes to execute.

Are there any specific profiling tools I should search for to get this kind
of time measurement functionality?


Jeff Schwab

Mar 5, 2009, 4:46:34 PM
mlt wrote:

> Are there any specific profiling tools I should search for to get this
> kind of time measurement functionality?

If you're on Unix, try gprof and oprofile. Gprof gives you accurate
call counts, but kind of blows for multithreaded code.

Victor Bazarov

Mar 5, 2009, 5:07:20 PM
mlt wrote:
> [..]

> I have googled 'C++ profiler' and got a lot of different hits. Most of
> them deal with analysing where calls are made and not so much with how
> long a block of code takes to execute.
>
> Are there any specific profiling tools I should search for to get this
> kind of time measurement functionality?
>
>

Profilers are OS-specific.

<offtopic>
You're on Windows, aren't you? Look for LTProf (trial is free, the
license is inexpensive), or, if you (your company) can afford it, try
and then buy AQtime.
</offtopic>

Dennis Jones

Mar 5, 2009, 9:13:32 PM

"mlt" <as...@asd.com> wrote in message
news:49b0413f$0$90273$1472...@news.sunsite.dk...

>I have some code that implements various search and sorting algorithms. I
>would like to get some kind of timing measurement for various parts of the
>algorithm, like:

<snipped>

> Is there some built-in function in C++ that is designed for this kind of
> purpose? I am also interested in knowing whether there exists some
> performance measuring framework for this kind of task.

If you are on Windows, I'll second Victor's recommendation for AQtime by
AutomatedQA, particularly if you want to get high-resolution timing results
on a function-by-function basis.

If you don't necessarily care about high-resolution timing and don't mind
writing some code, you can roll your own. You can do something similar to
what I did. I wrote an RAII class that measures the lifetime of objects of
the class. I followed the pattern of Alexandrescu and Marginean's ScopeGuard
to let me do something like this:

void SomeFunction()
{
    MEASURE_SCOPE();

    // do a bunch of stuff
}

The MEASURE_SCOPE macro simply creates an object of my RAII scope
measurement class. When the object is destroyed, it logs the object's
lifetime (along with the function name and line number where the object was
created). It does require me to add the macro wherever I want to do
measurements, and it doesn't provide line-by-line timing, but if I need
that, I'll use AQtime. I used Petru Marginean's logging class in the
implementation of my RAII scope measurement class, so I can turn the logging
on and off at runtime with almost no runtime penalty when it is off, thereby
eliminating the need to comment out or disable the macro when I don't need
it. In Windows, my accuracy is dependent on the resolution of the clock
(about 18ms). I could probably re-write it to use a performance counter,
but I haven't had any reason to do that.
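
In outline, it looks something like this (just a sketch -- the names here
are made up, and the real class also logs the line number and routes its
output through the logging class so it can be switched off at runtime):

#include <cstdio>
#include <ctime>

class ScopeTimer
{
public:
    explicit ScopeTimer( char const* name )
        : name_( name ), start_( std::clock() ) {}
    ~ScopeTimer()        // logs the object's lifetime on destruction
    {
        std::clock_t const stop = std::clock();
        std::printf( "%s: %.3f s\n", name_,
            double( stop - start_ ) / CLOCKS_PER_SEC );
    }
private:
    char const* name_;
    std::clock_t start_;
};

#define MEASURE_SCOPE() ScopeTimer measuredScope_( __FUNCTION__ )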

So anyway, there's a couple of ideas for you.

- Dennis


Alf P. Steinbach

Mar 5, 2009, 9:47:14 PM
* Victor Bazarov:

> mlt wrote:
>> I have some code that implements various search and sorting algorithms.
>> I would like to get some kind of timing measurement for various parts of
>> the algorithm, like:
>>
>> void myAlgo() {
>>     ...
>>     ...
>>
>>     float timer = // start measurement timer
>>     for (...)
>>     {
>>         // do various calculations.
>>     }
>>
>>     std::cout << "time spent = " << timer;
>>
>>
>>     float timer2 = // start measurement timer
>>     for (...)
>>     {
>>         // do some other calculations.
>>     }
>>     std::cout << "time2 spent = " << timer2;
>>
>>     ...
>>     ...
>>
>> }
>>
>> Is there some built-in function in C++ that is designed for this kind
>> of purpose? I am also interested in knowing whether there exists some
>> performance measuring framework for this kind of task.
>
> There is 'clock()', but know that it's the last function you actually
> want to use to measure the performance of your code.

Why do you think that?

'clock' is extremely easy to use, and it's always available.

Hence, I'd say it's the /first/ thing you should try, absolutely not the last (if
other, heavier instruments are brought to bear, then 'clock' adds nothing).

To use 'clock', call the relevant piece of code an appropriate number of times
to get well within 'clock' resolution, and check how it fared.
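
E.g., a minimal sketch (with nRuns chosen large enough that the total
time is well above the 'clock' resolution):

#include <ctime>
#include <iostream>

void myAlgo();   // the code to be measured

int main()
{
    int const nRuns = 1000;

    std::clock_t const start = std::clock();
    for( int i = 0; i < nRuns; ++i )
    {
        myAlgo();
    }
    std::clock_t const stop = std::clock();

    double const total = double( stop - start ) / CLOCKS_PER_SEC;
    std::cout << "avg time spent = " << total/nRuns << " s\n";
}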

This will often be enough to form a good opinion about rough performance, and
usually that's the best one can hope for anyway, no matter how sophisticated
instruments are employed (because detailed performance depends on data, machine
load, usage patterns, and factors that one could never imagine offhand).

Using 'clock' involves some work in adding intrusive code and/or factoring out
the relevant code to be measured.

Using a heavier instrument, unless one already has everything set up for that
instrument (which makes the question of choosing moot), involves even more work.


Cheers & hth.,

- Alf


--
Due to hosting requirements I need visits to [http://alfps.izfree.com/].
No ads, and there is some C++ stuff! :-) Just going there is good. Linking
to it is even better! Thanks in advance!

Victor Bazarov

Mar 5, 2009, 10:34:05 PM
Alf P. Steinbach wrote:
> * Victor Bazarov:
>> [..]

>> There is 'clock()', but know that it's the last function you actually
>> want to use to measure the performance of your code.
>
> Why do you think that?

I don't "think" that. I know that. From experience.

> [..]

Alf P. Steinbach

Mar 5, 2009, 10:59:02 PM
* Victor Bazarov:

> Alf P. Steinbach wrote:
>> * Victor Bazarov:
>>> [..]
>>> There is 'clock()', but know that it's the last function you actually
>>> want to use to measure the performance of your code.
>> Why do you think that?
>
> I don't "think" that. I know that. From experience.

Sorry, all that means is that your experience indicates that for *you* 'clock'
is ungood. Perhaps you have used it incorrectly. Or perhaps you always have an
expensive tool set-up geared towards profiling (which is indicated by your
strong focus on micro-efficiency, so wouldn't surprise me!).

Victor Bazarov

Mar 5, 2009, 11:55:28 PM
Alf P. Steinbach wrote:
> * Victor Bazarov:
>> Alf P. Steinbach wrote:
>>> * Victor Bazarov:
>>>> [..]
>>>> There is 'clock()', but know that it's the last function you
>>>> actually want to use to measure the performance of your code.
>>> Why do you think that?
>>
>> I don't "think" that. I know that. From experience.
>
> Sorry, all that means is that your experience indicates that for
> *you* 'clock' is ungood.

Yes, absolutely. What source of information do you use when you
claim 'clock's suitability? Marketing hype?

> Perhaps you have used it incorrectly.

Perhaps. Or perhaps on all systems I've had experience with,
the 'clock' was implemented inadequately. Or was relying on
some rather inadequate hardware mechanism.

> Or
> perhaps you always have an expensive tool set-up geared towards
> profiling (which is indicated by your strong focus on
> micro-efficiency, so wouldn't surprise me!).

My "strong focus on micro efficiency"? What gave you that idea?

And why are you so inclined to try to insult people this fine
morning? I am speaking from experience when I say that dealing
with return values is faster than using exceptions. It's what
my experience indicates. I don't try to simply convey somebody
else's viewpoint I've read somewhere. And, yes, when it comes
to efficiency, good tools are expensive. Not as expensive as
our customer's time, though.

'clock' just doesn't cut it on Windows, for example. Machines
nowadays are so fast and the software is so complex that time
measurement with the granularity of 20 milliseconds is just not
suitable for measuring time on a function level.

>
> Cheers & hth.,

It doesn't, sorry.

Alf P. Steinbach

Mar 6, 2009, 12:18:57 AM
* Victor Bazarov:
> Alf P. Steinbach wrote:
>> * Victor Bazarov:
>>> Alf P. Steinbach wrote:
>>>> * Victor Bazarov:
>>>>> [..]
>>>>> There is 'clock()', but know that it's the last function you
>>>>> actually want to use to measure the performance of your code.
>>>> Why do you think that?
>>> I don't "think" that. I know that. From experience.
>> Sorry, all that means is that your experience indicates that for
>> *you* 'clock' is ungood.
>
> Yes, absolutely. What source of information do you use when you
> claim 'clock's suitability? Marketing hype?

When you claim that 'clock' should be the last thing one tries, it is just a silly
claim until you have substantiated it somehow with facts and/or logic.

Which would be rather difficult since the claim, it seems, was purely a personal
one, referring to yourself as "you". :-o


>> Perhaps you have used it incorrectly.
>
> Perhaps. Or perhaps on all systems I've had experience with,
> the 'clock' was implemented inadequately. Or was relying on
> some rather inadequate hardware mechanism.
>
>> Or
>> perhaps you always have an expensive tool set-up geared towards
>> profiling (which is indicated by your strong focus on
>> micro-efficiency, so wouldn't surprise me!).
>
> My "strong focus on micro efficiency"? What gave you that idea?

Recent threads including this one.


> And why are you so inclined to try to insult people this fine
> morning? I am speaking from experience with I say that dealing
> with return values is faster than using exceptions. It's what
> my experience indicates.

Most serious investigations of that have yielded the opposite conclusion.

As an example of a serious investigation, the international C++ standardization
committee's Technical Report 18015:2006 on performance, available at <url:
http://www.open-std.org/jtc1/sc22/wg21/docs/TR18015.pdf>, quotes one compiler
vendor as reporting a 6% overhead for the "code" approach to implementing
exceptions (how the compiler does it internally), and asserts 0% for normal case
code for the "data" approach. Since normal-case code then avoids having to check
for error cases everywhere, it can result in a total speed-up. YMMV, of course. :-)

Plus, more importantly, as mentioned (but it seems it can't be mentioned often
enough), micro-efficiency is entirely the wrong aspect to elevate to Most
Important Criterion -- e.g. correctness and programmer time are more important.

> I don't try to simply convey somebody
> else's viewpoint I've read somewhere. And, yes, when it comes
> to efficiency, good tools are expensive. Not as expensive as
> our customer's time, though.
>
> 'clock' just doesn't cut it on Windows, for example. Machines
> nowadays are so fast and the software is so complex that time
> measurement with the granularity of 20 milliseconds is just not
> suitable for measuring time on a function level.

Have you considered calling your routine in a loop (as mentioned in the parts
you snipped from my posting)? <g>

I've not had any problems using 'clock' in Windows.


>> Cheers & hth.,
>
> It doesn't, sorry.

Perhaps it might help other readers, though.


Cheers, & again, hth.,

Victor Bazarov

Mar 6, 2009, 12:38:01 AM
> conclusion. [..]

Just as I suspected. Reading somebody else's reports... It has
to count somewhere, at least in a newsgroup. Oh well...

> [..]


> Have you considered calling your routine in a loop (as mentioned in
> the parts you snipped from my posting)? <g>

No. I do not consider calling any routine in a loop unless the
logic of our multi-million LOC application requires it. Figuring
out how long in micro- or nano-seconds any particular function
would execute is not a good use of anybody's time. Or even the CPU
time, for that matter. It is only good in a project with a few
scores of functions, well, a few hundreds, maybe. When the count
of files/classes/projects goes beyond a number of the fingers of
the entire team's hands (and feet), performance of a single function
is of no consequence. On a toy project, 'clock' would definitely
suffice.

Alf P. Steinbach

Mar 6, 2009, 12:55:50 AM

Noted, you discount the C++ standardization committee's Technical Report on
performance when discussing C++ performance.

And in addition resort to stupid personal insinuations (counting the one quoted
above, plus the one about insulting people, you're up to 2 so far).

A discussion here cannot be fruitful on those terms: completely discounting the
technical facts, referring to unspecified personal experience, and accentuating
the personal aspect.

> Oh well...

>
>> [..]
>> Have you considered calling your routine in a loop (as mentioned in
>> the parts you snipped from my posting)? <g>
>
> No. I do not consider calling any routine in a loop unless the
> logic of our multi-million LOC application requires it. Figuring
> out how long in micro- or nano-seconds any particular function
> would execute is not a good use of anybody's time. Or even the CPU
> time, for that matter. It is only good in a project with a few
> scores of functions, well, a few hundreds, maybe. When the count
> of files/classes/projects goes beyond a number of the fingers of
> the entier team's hands (and feet), performance of a single function
> is of no consequence. On a toy project, 'clock' would definitely
> suffice.

If you can't, in most cases, call the routine(s) in a loop, then you have a very
serious spaghetti problem. :-)

That said, there are some special cases where some small routine is called
zillions of times from zillions of places.

But such cases are rare.


Cheers & hth.,

Kai-Uwe Bux

Mar 6, 2009, 1:19:33 AM
Alf P. Steinbach wrote:

> * Victor Bazarov:
>> Alf P. Steinbach wrote:
>>> * Victor Bazarov:
>>>> Alf P. Steinbach wrote:
>>>>> * Victor Bazarov:
>>>>>> Alf P. Steinbach wrote:
>>>>>>> * Victor Bazarov:

[snip]


> And in addition resort to stupid personal insinuations (counting the one
> quoted above, plus the one about insulting people, you're up to 2 so far).

[snip]

I don't understand your count. It appears that you are discounting remarks
of your own like:

"Perhaps you have used it incorrectly." [with regard to std::clock()]

which _is_ an unveiled insinuation of incompetence (since you have never
seen the code you talk about and are engaging in mere speculation).

>>> [..]
>>> Have you considered calling your routine in a loop (as mentioned in
>>> the parts you snipped from my posting)? <g>
>>
>> No. I do not consider calling any routine in a loop unless the
>> logic of our multi-million LOC application requires it. Figuring
>> out how long in micro- or nano-seconds any particular function
>> would execute is not a good use of anybody's time. Or even the CPU
>> time, for that matter. It is only good in a project with a few
>> scores of functions, well, a few hundreds, maybe. When the count
>> of files/classes/projects goes beyond a number of the fingers of
>> the entire team's hands (and feet), performance of a single function
>> is of no consequence. On a toy project, 'clock' would definitely
>> suffice.
>
> If you can't, in most cases, call the routine(s) in a loop, then you have
> a very serious spaghetti problem. :-)

You do it again. You don't know the code in question. On top of that, you
misrepresent the point: Victor did not say that he _can't_ call the routine
in a loop but that he did not consider that because other measurements
yield way more meaningful data. If that, to you, can only be explained in
terms of a spaghetti code problem, the reason can as well be a lack of
imagination on your part. In any case, to speculate about the quality of
unseen code and to call its quality into question based on essentially no
evidence is _rude_. (And it does not add anything to the technical merits
of the discussion.)

[snip]


Best

Kai-Uwe Bux

Alf P. Steinbach

Mar 6, 2009, 2:13:57 AM
* Kai-Uwe Bux:

> Alf P. Steinbach wrote:
>
>> * Victor Bazarov:
>>> Alf P. Steinbach wrote:
>>>> * Victor Bazarov:
>>>>> Alf P. Steinbach wrote:
>>>>>> * Victor Bazarov:
>>>>>>> Alf P. Steinbach wrote:
>>>>>>>> * Victor Bazarov:
> [snip]
>> And in addition resort to stupid personal insinuations (counting the one
>> quoted above, plus the one about insulting people, you're up to 2 so far).
> [snip]
>
> I don't understand your count. It appears that you are discounting remarks
> of your own like:
>
> "Perhaps you have used it incorrectly." [with regard to std::clock()]
>
> which _is_ an unveiled insinuation to incompetence (since you have never
> seen the code you talk about and engage just in speculation).

No, that is an unfounded insinuating speculation that I have insinuated something.

Jeez.

When someone states in this newsgroup that they have problems using the 'clock'
routine, hinting about something to do with Windows, one naturally queries for
some concrete example.

That's just being helpful.

Otherwise, anybody could (and considering the above, /can/) state that they or
someone else are being the victims of veiled malevolent insinuation simply by
(1) stating there is a problem using, say, 'strcat', and then when respondents
list among a number of possible reasons that perhaps they're using the routine
incorrectly, respond in turn that (2) hey you're insinuating I'm incompetent,
thereby (3) insinuating something rather more nasty about the respondent.


>>>> [..]
>>>> Have you considered calling your routine in a loop (as mentioned in
>>>> the parts you snipped from my posting)? <g>
>>> No. I do not consider calling any routine in a loop unless the
>>> logic of our multi-million LOC application requires it. Figuring
>>> out how long in micro- or nano-seconds any particular function
>>> would execute is not a good use of anybody's time. Or even the CPU
>>> time, for that matter. It is only good in a project with a few
>>> scores of functions, well, a few hundreds, maybe. When the count
>>> of files/classes/projects goes beyond a number of the fingers of
>>> the entire team's hands (and feet), performance of a single function
>>> is of no consequence. On a toy project, 'clock' would definitely
>>> suffice.
>> If you can't, in most cases, call the routine(s) in a loop, then you have
>> a very serious spaghetti problem. :-)
>
> You do it again. You don't know the code in question.

You do again what you did above, insinuating by trying to give the impression
that someone has insinuated something -- that only exists in your fantasy.

If the code in question is a single example, then it has no power as argument
and is simply noise inserted into the discussion, e.g. to divert attention from
the technical matter discussed. Which I can readily believe because Victor
discounted and snipped all reference to the C++ committee's report on performance
and instead added an insinuation. It seems all about diverting attention and
obscuring the subject matter, and I'm not insinuating anything when I state very
openly that in my opinion, what I'm thinking, that's exactly what happened.

If the code in question is, on the other hand, meant as a general argument, then
talking about such code in general, as I did above, is appropriate, and carries
no insinuation about any concrete manifestation of the problem.


> On top of that, you
> misrepresent the point: Victor did not say that he _can't_ call the routine
> in a loop but that he did not consider that because other measurements
> yield way more meaningful data. If that, to you, can only be explained in
> terms of a spaghetti code problem, the reason can as well be a lack of
> imagination on your part.

Yeah. If so then some concrete examples would be nice. But the concrete is
severely lacking here, even to the degree of snipping away facts and references
(We Shall Have No Facts, they're so bothersome), and the personal is very much
present, I'm sad to observe.


> In any case, to speculate about the quality of
> unseen code and to call its quality into question based on essentially no
> evidence is _rude_.

Oh God, help me. It's rude to discuss the quality of code? Here?


> (And it does not add anything to the technical merits
> of the discussion.)

Since I'm the only one who has discussed the technical here, Victor and you
resorting to /snipping away/ the technical and going, via vague implications,
for the personal (can it really hurt so much being confronted on a technical
issue?), such a statement that is misleading about intentions -- yes, it's
insinuating -- and technically meaningless, well it doesn't surprise me.

co...@mailvault.com

Mar 6, 2009, 2:50:28 AM

I'm not sure if you responded to this very well:

> 'clock' just doesn't cut it on Windows, for example. Machines
> nowadays are so fast and the software is so complex that time
> measurement with the granularity of 20 milliseconds is just not
> suitable for measuring time on a function level.

I've done some performance testing on Windows and Linux --
www.webEbenezer.net/comparison.html. On Windows I use clock
and on Linux I use gettimeofday. From what I can tell
gettimeofday gives more accurate results than clock on Linux.
Depending on how this thread works out, I may start using the
function Victor mentioned on Windows.


>
> --
> Due to hosting requirements I need visits to [http://alfps.izfree.com/].
> No ads, and there is some C++ stuff! :-) Just going there is good. Linking

> to it is even better! Thanks in advance!
>

I'm interested in trading links with people on webEbenezer.net.
I don't care if your site doesn't get a lot of hits. I've been
there and done that and know it can be tough.

Brian Wood
Ebenezer Enterprises
www.webEbenezer.net

Alf P. Steinbach

Mar 6, 2009, 3:16:43 AM
* co...@mailvault.com:

> On Mar 5, 11:55 pm, "Alf P. Steinbach" <al...@start.no> wrote:
>
> I'm not sure if you responded to this very well:
>
>> 'clock' just doesn't cut it on Windows, for example. Machines
>> nowadays are so fast and the software is so complex that time
>> measurement with the granularity of 20 milliseconds is just not
>> suitable for measuring time on a function level.

Oh. Well, the question we were talking about was a timer-thing for measuring the
performance of various parts of an algorithm. 'clock' is eminently usable for
that; it's trivial to accumulate results, and/or adjust argument values for the
measured thing, to get into the resolution range, and I described that in
concrete terms in my first response in this thread. Not that it's necessarily how
I would do it (Windows' GetTickCount API routine comes to mind... ;-)).

But I'm taking issue with Victor's statement that 'clock' is the last thing you
should try for this.

That is so far just a silly assertion that he's failed to back up in any way,
veering instead into general profiling of massive applications, adding in
various personal perspectives, snipping facts and references, etc.


> I've done some performance testing on Windows and Linux --
> www.webEbenezer.net/comparison.html. On Windows I use clock
> and on Linux I use gettimeofday. From what I can tell
> gettimeofday gives more accurate results than clock on Linux.
> Depending on how this thread works out, I may start using the
> function Victor mentioned on Windows.

Performance counters in Windows can be great for general profiling, yes.

And (OFF-TOPIC for clc++) you can even access all that data without any special
tools, just importing it into nearest spreadsheet.

But for just measuring an algorithm, the OP's problem, that approach can be and
IME (although I have not very much experience with the performance counters)
usually is massive overkill... ;-)


>> --
>> Due to hosting requirements I need visits to [http://alfps.izfree.com/].
>> No ads, and there is some C++ stuff! :-) Just going there is good. Linking
>> to it is even better! Thanks in advance!
>>
>
> I'm interested in trading links with people on webEbenezer.net.
> I don't care if your site doesn't get a lot of hits. I've been
> there and done that and know it can be tough.

Thanks. But I'm not really into link trading. It's just that the free Norwegian
hosting I've used is being terminated (for all thousands of homepages) in May,
so I had to find some new free hosting, and they require 10 hits per month,
otherwise the site is deemed inactive and is removed. I didn't know how much
traffic I had. As it turned out it seems I have 30-40 hits per day (unique
visitors), so I should be safe against the 10 visitors per month criterion. :-)


Cheers,

- Alf

Kai-Uwe Bux

Mar 6, 2009, 3:26:11 AM
Alf P. Steinbach wrote:

> * Kai-Uwe Bux:
>> Alf P. Steinbach wrote:
>>
>>> * Victor Bazarov:
>>>> Alf P. Steinbach wrote:
>>>>> * Victor Bazarov:
>>>>>> Alf P. Steinbach wrote:
>>>>>>> * Victor Bazarov:
>>>>>>>> Alf P. Steinbach wrote:
>>>>>>>>> * Victor Bazarov:
>> [snip]

[snip]


>>>>> [..]
>>>>> Have you considered calling your routine in a loop (as mentioned in
>>>>> the parts you snipped from my posting)? <g>
>>>> No. I do not consider calling any routine in a loop unless the
>>>> logic of our multi-million LOC application requires it. Figuring
>>>> out how long in micro- or nano-seconds any particular function
>>>> would execute is not a good use of anybody's time. Or even the CPU
>>>> time, for that matter. It is only good in a project with a few
>>>> scores of functions, well, a few hundreds, maybe. When the count
>>>> of files/classes/projects goes beyond a number of the fingers of
>>>> the entire team's hands (and feet), performance of a single function
>>>> is of no consequence. On a toy project, 'clock' would definitely
>>>> suffice.
>>> If you can't, in most cases, call the routine(s) in a loop, then you
>>> have a very serious spaghetti problem. :-)
>>
>> You do it again. You don't know the code in question.
>
> You do again what you did above, insinuating by trying to give the
> impression
> that someone has insinuated something -- that only exists in your
> fantasy.

I think it exists in your post and not just in my fantasy. But we shall
see. At least, I claim that the way I understood you is a viable
interpretation, which you could have anticipated.

> If the code in question is a single example, then it has no power as
> argument and is simply noise inserted into the discussion, e.g. to divert
> attention from the technical matter discussed. Which I can readily believe
> because Victor discounted and snipped all reference to C++ committee's
> report on performance and instead added an insinuation. It seems all about
> diverting attention and obscuring the subject matter, and I'm not
> insinuating anything when I state very openly that in my opinion, what I'm
> thinking, that's exactly what happened.

So you think the code in question is a single example.

> If the code in question is, on the other hand, meant as a general
> argument, then talking about such code in general, as I did above, is
> appropriate, and carries no insinuation about any concrete manifestation
> of the problem.

This paragraph is weird, then: it seems that you (like me) think the code
that Victor was talking about is a single example (the if-clause of the
previous paragraph). But in this paragraph, you say that in responding, you
responded as if it is not a single example but "code in general", wherefore
your response carries no insinuation. When I read your post, that escaped
me because it appears clear from the quote that Victor is talking about a
specific, though large, piece of code: the "multi-million LOC application".
I took your response to be also talking about this specific piece of code
since there was no indication that the perspective changed to a more
generic point of view.

You may be right that Victor's specific example does not carry weight in the
discussion. But instead of making that point, you called the quality of the
piece of code into question by asserting that it suffers from a spaghetti
problem without having seen it. This is not in my mind, this is in your
post.

[snip]

>> In any case, to speculate about the quality of
>> unseen code and to call its quality into question based on essentially no
>> evidence is _rude_.
>
> Oh God, help me. It's rude to discuss the quality of code? Here?

I never claimed that discussing the code per se is rude. I maintain, though,
that speculating about unseen code and calling its quality into question is
rude. I am sure you see the difference.


>> (And it does not add anything to the technical merits
>> of the discussion.)
>
> Since I'm the only one who has discussed the technical here, Victor and
> you resorting to /snipping away/ the technical

As for me, I snipped the technical parts since I was _only_ interested in your
way of counting that gets Victor "up two". That is a non-technical issue.

> and going, via vague implications,

I don't think what I write is vague.

> for the personal (can it really hurt so much being confronted on a
> technical issue?), such a statement that is misleading about
> intentions -- yes, it's insinuating -- and technically meaningless,
> well it doesn't surprise me.

Since I am not interested in this particular technical problem, I focus
entirely on your way of counting. For the same reason, I am not being
confronted on a technical issue.


Best

Kai-Uwe Bux

James Kanze

Mar 6, 2009, 5:41:03 AM
On Mar 6, 3:47 am, "Alf P. Steinbach" <al...@start.no> wrote:
> * Victor Bazarov:

[...]


> >> Is there some built-in function in C++ that is designed for
> >> this kind of purpose? I am also interested in knowing whether
> >> there exists some performance measuring framework for this
> >> kind of task.

> > There is 'clock()', but know that it's the last function you
> > actually want to use to measure the performance of your
> > code.

> Why do you think that?

> 'clock' is extremely easy to use, and it's always available.

> Hence, I'd say it's the /first/ you should try, absolutely not
> the last (if other more heavy instruments are brought to bear,
> then 'clock' adds nothing).

I tend to agree, but it's important to understand what clock()
actually measures on your system. According to the C standard,
it should measure CPU time used, if this is available. In VC++,
the last time I checked, it was broken, and returned elapsed
time.

On all of the Unix based systems I've used, it is at least as
good as anything else for CPU time. But of course, do you want
to count the time spent handling a page fault, or not? Or maybe
elapsed time is what you want (but what does that mean on a
machine that is running other programs at the same time).
Still, for all of the benchmarking I've done, I've just used
clock.

As you mentioned in the parts I've cut, the results won't
really be exact, because exact really doesn't exist in a
multi-process environment with virtual memory and who knows what
all else. But at least on Unix based machines, they've always
been close enough for my purposes. And even under Windows, if
the function is pure CPU, and I'm not doing anything else on the
machine. Just make sure you do a number of runs, and eliminate
the outliers. (I'll always do a first execution before starting
measurements, to ensure that the code being measured is actually
loaded.)

--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

James Kanze

Mar 6, 2009, 6:05:38 AM
On Mar 6, 8:50 am, c...@mailvault.com wrote:
> On Mar 5, 11:55 pm, "Alf P. Steinbach" <al...@start.no> wrote:

[...]


> I've done some performance testing on Windows and Linux

> --www.webEbenezer.net/comparison.html. On Windows I use clock


> and on Linux I use gettimeofday. From what I can tell
> gettimeofday gives more accurate results than clock on Linux.
> Depending on how this thread works out, I may start using the
> function Victor mentioned on Windows.

On Unix based machines, clock() and gettimeofday() measure
different things. I use clock() when I want what clock()
measures, and gettimeofday() when I want what gettimeofday()
measures. For comparing algorithms to see which is more
effective, this means clock().
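
To illustrate the difference (a sketch for a Posix system, error
handling omitted; clock() accumulates processor time, gettimeofday()
wall-clock time):

#include <ctime>        // std::clock, CLOCKS_PER_SEC
#include <sys/time.h>   // gettimeofday (Posix)
#include <cstdio>

int main()
{
    timeval wall0, wall1;
    std::clock_t const cpu0 = std::clock();
    gettimeofday(&wall0, 0);

    // ... algorithm under test ...

    std::clock_t const cpu1 = std::clock();
    gettimeofday(&wall1, 0);

    std::printf("cpu %.3f s, wall %.3f s\n",
                double(cpu1 - cpu0) / CLOCKS_PER_SEC,
                (wall1.tv_sec - wall0.tv_sec)
                    + (wall1.tv_usec - wall0.tv_usec) / 1e6);
}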

Victor is right about one thing: the implementation of clock()
in VC++ is broken, in the sense that it doesn't conform to the
specification of the C standard, e.g. that "The clock function
returns the implementation's best approximation to the processor
time used by the program since the beginning of an
implementation-defined era related only to the program
invocation." The last time I checked, the clock() function in
VC++ returned elapsed time, and not processor time. (Of course,
if you run enough trials, on a quiescent machine, the functions
involved are pure CPU, and the goal is just to compare, not to
obtain absolute values, the information obtained is probably
adequate anyway.)

Of course, neither the C standard nor Posix are very precise
about what is meant by "processor time". Depending on what you
are trying to do, the function times() or some of the timer_...
functions might be more appropriate, at least under Unix (but I
presume that Windows also has something similar). But I
wouldn't bother until I'd determined that clock() wasn't
sufficient. (The Unix command time, for example, will probably
use gettimeofday for the real time, and times for the user and
sys time.)

For the rest: if you have a large application which is running
slow, you need a profiler, to determine where it is running
slow. Having found the critical function, however, it often
makes sense to write up a quick benchmark harness to compare
different possible implementations of the function, in order to
determine which one is best without having to rebuild and
remeasure the entire application each time.

Martin Eisenberg

Mar 6, 2009, 7:12:00 AM
Kai-Uwe Bux wrote:

> I think, it exists in your post and not just in my fantasy. But
> we shall see. At least, I claim that the way I understood you is
> a viable interpretation, which you could have anticipated.

One would think that most regulars have interacted with Alf for
enough years to take these arguefests in stride...


Martin

--
Quidquid latine scriptum est, altum videtur.

Lionel B

Mar 6, 2009, 7:26:16 AM
On Fri, 06 Mar 2009 03:05:38 -0800, James Kanze wrote:

> On Mar 6, 8:50 am, c...@mailvault.com wrote:
>> On Mar 5, 11:55 pm, "Alf P. Steinbach" <al...@start.no> wrote:
>
> [...]
>> I've done some performance testing on Windows and Linux
>> --www.webEbenezer.net/comparison.html. On Windows I use clock and on
>> Linux I use gettimeofday. From what I can tell gettimeofday gives more
>> accurate results than clock on Linux. Depending on how this thread
>> works out, I may start using the function Victor mentioned on Windows.
>
> On Unix based machines, clock() and gettimeofday() measure different
> things. I use clock() when I want what clock() measures, and
> gettimeofday() when I want what gettimeofday() measures. For comparing
> algorithms to see which is more effective, this means clock().

FWIW, on Linux at least, there is also 'clock_gettime()' which can access
a variety of clocks including CLOCK_PROCESS_CPUTIME_ID, described as a
"High resolution per-process timer". As far as I can make out, this
measures something similar to 'clock()' but at higher resolution. It does
have issues, though, on some SMP systems since it may access the CPU's
built-in timer and CPU timers on SMP systems are not guaranteed to be in
sync. It can thus potentially give bogus results if e.g. a process
migrates to another CPU. I'm not sure, but I'd imagine that something
similar may apply to high-resolution timers on Windows.
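
Usage is straightforward (a sketch; on older glibc you may need to link
with -lrt):

#include <time.h>    // clock_gettime, CLOCK_PROCESS_CPUTIME_ID (Posix)
#include <cstdio>

int main()
{
    timespec t0, t1;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &t0);

    // ... code under test ...

    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &t1);
    std::printf("process cpu time: %.6f s\n",
                (t1.tv_sec - t0.tv_sec)
                    + (t1.tv_nsec - t0.tv_nsec) / 1e9);
}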

[...]

--
Lionel B

Jeff Schwab

Mar 6, 2009, 7:31:13 AM
Martin Eisenberg wrote:
> Kai-Uwe Bux wrote:
>
>> I think, it exists in your post and not just in my fantasy. But
>> we shall see. At least, I claim that the way I understood you is
>> a viable interpretation, which you could have anticipated.
>
> One would think that most regulars have interacted with Alf for
> enough years to take these arguefests in stride...

And after all, who *doesn't* fantasize about C++ flamewars?

Alf P. Steinbach

Mar 6, 2009, 7:54:34 AM
* Jeff Schwab:

I don't, but it's amusing: after a flamefest of transparent insinuations from
Victor (consistently snipping away the technical, changing context, and so on),
then from Kai-Uwe, Martin adds one more and that's /all/ he manages to utter.

It seems you guys think this is a social engineering group, where technical
matters can be decided by girlish put-downing, pouting, posturing and suchlike.

Jeff Schwab

Mar 6, 2009, 8:01:25 AM
Alf P. Steinbach wrote:

> It seems you guys think this is a social engineering group, where
> technical matters can be decided by girlish put-downing, pouting,
> posturing and suchlike.

You have cooties. Ergo, std::clock() lacks sufficient precision for
profiling.

FWIW, I use it, too, just for first-order approximations.

co...@mailvault.com

Mar 6, 2009, 4:16:55 PM
On Mar 6, 5:05 am, James Kanze <james.ka...@gmail.com> wrote:
> On Mar 6, 8:50 am, c...@mailvault.com wrote:
>
> > On Mar 5, 11:55 pm, "Alf P. Steinbach" <al...@start.no> wrote:
>
>     [...]
>
> > I've done some performance testing on Windows and Linux
> > --www.webEbenezer.net/comparison.html.  On Windows I use clock
> > and on Linux I use gettimeofday.  From what I can tell
> > gettimeofday gives more accurate results than clock on Linux.
> > Depending on how this thread works out, I may start using the
> > function Victor mentioned on Windows.
>
> On Unix based machines, clock() and gettimeofday() measure
> different things.  I use clock() when I want what clock()
> measures, and gettimeofday() when I want what gettimeofday()
> measures.  For comparing algorithms to see which is more
> effective, this means clock().
>

I've just retested the test that saves/sends a list<int> using
clock on Linux. The range of ratios from the Boost version to
my version was between 1.4 and 4.5. The thing about clock is
it returns values like 10,000, 20,000, 30,000, 50,000, 60,000, etc.
I would be more comfortable with it if I could get it to round
its results less. The range of results with gettimeofday for the
same test is not so wide -- between 2.0 and 2.8. I don't run
other programs while I'm testing besides a shell/vi and firefox.
I definitely don't start or stop any of those between the tests,
so I'm of the opinion that the elapsed time results are meaningful.


> Victor is right about one thing: the implementation of clock()
> in VC++ is broken, in the sense that it doesn't conform to the
> specification of the C standard, e.g. that "The clock function
> returns the implementation's best approximation to the processor
> time used by the program since the beginning of an
> implementation-defined era related only to the program
> invocation."  The last time I checked, the clock() function in
> VC++ returned elapsed time, and not processor time.  (Of course,
> if you run enough trials, on a quiescent machine, the functions
> involved are pure CPU, and the goal is just to compare, not to
> obtain absolute values, the information obtained is probably
> adequate anyway.)

Except for the part about the functions being purely CPU, this
describes my approach/intent.

Alf P. Steinbach

Mar 6, 2009, 11:00:25 PM
* co...@mailvault.com:

> The thing about clock is
> it returns values like 10,000, 20,000, 30,000, 50,000, 60,000, etc.
> I would be more comfortable with it if I could get it to round
> its results less.

For a difference between 'clock' results, i.e. a time interval expressed in
'clock' units, convert to 'double' and then divide by CLOCKS_PER_SEC.

But note: do that /after/ any subtraction of the start time from the end time.
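
I.e., something like this:

    std::clock_t const start = std::clock();
    // ... the code being timed ...
    std::clock_t const stop = std::clock();

    // Subtract first, in integral clock_t units, then convert:
    double const seconds = double( stop - start ) / CLOCKS_PER_SEC;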

That's part of using 'clock' correctly as alluded to earlier in the thread
(another part of that is James' observation about wall time versus processor time).


Cheers & hth.,

- Alf

--

co...@mailvault.com

Mar 7, 2009, 2:17:08 AM
On Mar 6, 10:00 pm, "Alf P. Steinbach" <al...@start.no> wrote:
> * c...@mailvault.com:

>
> > The thing about clock is
> > it returns values like 10,000, 20,000, 30,000, 50,000, 60,000, etc.
> > I would be more comfortable with it if I could get it to round
> > its results less.
>
> For a difference between 'clock' results, i.e. a time interval expressed in
> 'clock' units, convert to 'double' and then divide by CLOCKS_PER_SEC.
>
> But note: do that /after/ any subtraction of the start time from the end time.
>

I'm aware of that, but don't see the point here. Both the Boost and
Ebenezer numbers would be divided by the same constant. It is simpler,
I think, to just add up the times from clock for each version and then
figure out the ratio. (I could document the results from clock, but
for now I just document the ratio.) I use semicolons within a shell
to run each version 3 times in a row. I execute that command twice.
The second group starts up right on the heels of the first. So the
test is run 6 times total. I ignore the first 3 runs/times and do
those just to get the machine ready for the next 3. Anyway, my
impression, and it seemed like Victor has a similar impression, is
the output from clock isn't as precise as it could be. The range
I got earlier from clock, 1.4 - 4.5, leaves quite a bit of room for
manipulation if that is a person's goal.

> That's part of using 'clock' correctly as alluded to earlier in the thread
> (another part of that is James' observation about wall time versus processor time).

I agree with James' point and plan to head in that direction.
I'm not sure if I'll use clock or platform specific APIs on Linux,
but on Windows it probably won't involve clock.

co...@mailvault.com

Mar 7, 2009, 2:30:35 AM
On Mar 7, 1:17 am, c...@mailvault.com wrote:
> On Mar 6, 10:00 pm, "Alf P. Steinbach" <al...@start.no> wrote:
>
> > * c...@mailvault.com:
>
> > > The thing about clock is
> > > it returns values like 10,000, 20,000, 30,000, 50,000, 60,000, etc.
> > > I would be more comfortable with it if I could get it to round
> > > its results less.
>
> > For a difference between 'clock' results, i.e. a time interval expressed in
> > 'clock' units, convert to 'double' and then divide by CLOCKS_PER_SEC.
>
> > But note: do that /after/ any subtraction of the start time from the end time.
>
> I'm aware of that, but don't see the point here.  Both the Boost and
> Ebenezer numbers would be divided by the same constant. It is simpler,
> I think, to just add up the times from clock for each version and then

It is probably clearer to say: add up the results from clock...

Alf P. Steinbach

Mar 7, 2009, 2:43:29 AM
* co...@mailvault.com:

> On Mar 6, 10:00 pm, "Alf P. Steinbach" <al...@start.no> wrote:
>> * c...@mailvault.com:
>>
>>> The thing about clock is
>>> it returns values like 10,000, 20,000, 30,000, 50,000, 60,000, etc.
>>> I would be more comfortable with it if I could get it to round
>>> its results less.
>> For a difference between 'clock' results, i.e. a time interval expressed in
>> 'clock' units, convert to 'double' and then divide by CLOCKS_PER_SEC.
>>
>> But note: do that /after/ any subtraction of the start time from the end time.
>>
>
> I'm aware of that, but don't see the point here. Both the Boost and
> Ebenezer numbers would be divided by the same constant. It is simpler,
> I think, to just add up the times from clock for each version and then
> figure out the ratio.

How is that different from what you quoted?

I was commenting on the "problem" with values like "10,000".

If you *really*, actually, have values like those you exemplified, like
"10,000", then that indicates with high probability that somewhere in the
testing code an integer division is used where a floating point division should
have been used.

But I assumed that was not the case, that it was just a case of dramatizing the
effect of low resolution.

If you really, actually have such values, then check out the division of the
'clock' result.


> (I could document the results from clock, but
> for now I just document the ratio.) I use semicolons within a shell
> to run each version 3 times in a row. I execute that command twice.
> The second group starts up right on the heels of the first. So the
> test is run 6 times total. I ignore the first 3 runs/times and do
> those just to get the machine ready for the next 3. Anyway, my
> impression, and it seemed like Victor has a similar impression, is
> the output from clock isn't as precise as it could be. The range
> I got earlier from clock, 1.4 - 4.5, leaves quite a bit of room for
> manipulation if that is a person's goal.

Uh, the person doing the timing is presumably /not/ your adversary, but yourself?

Anyways, if you have a range of 1.4 to 4.5 for the same code, tested in *nix
(indicated by your comment about semicolons), using 'clock' which in *nix-land
reports processor time, then Something Is Wrong.

Perhaps the integer division issue mentioned above?

James Kanze

Mar 7, 2009, 6:04:21 AM
On Mar 6, 2:01 pm, Jeff Schwab <j...@schwabcenter.com> wrote:
> Alf P. Steinbach wrote:
> > It seems you guys think this is a social engineering group,
> > where technical matters can be decided by girlish
> > put-downing, pouting, posturing and suchlike.

> You have cooties. Ergo, std::clock() lacks sufficient
> precision for profiling.

std::clock() has nothing to do with profiling; it's not a
profiler. It's a useful tool for comparing different
implementations of a function, once profiling has determined
which functions need attention.

> FWIW, I use it, too, just for first-order approximations.

I don't really know of a better function for what it does, at
least when implemented correctly. It should give you the best
precision available on the machine.

James Kanze

Mar 7, 2009, 6:21:03 AM
On Mar 6, 10:16 pm, c...@mailvault.com wrote:
> On Mar 6, 5:05 am, James Kanze <james.ka...@gmail.com> wrote:
> > On Mar 6, 8:50 am, c...@mailvault.com wrote:
> > > On Mar 5, 11:55 pm, "Alf P. Steinbach" <al...@start.no> wrote:

> > [...]
> > > I've done some performance testing on Windows and Linux
> > > --www.webEbenezer.net/comparison.html. On Windows I use
> > > clock and on Linux I use gettimeofday. From what I can
> > > tell gettimeofday gives more accurate results than clock
> > > on Linux. Depending on how this thread works out, I may
> > > start using the function Victor mentioned on Windows.

> > On Unix based machines, clock() and gettimeofday() measure
> > different things. I use clock() when I want what clock()
> > measures, and gettimeofday() when I want what gettimeofday()
> > measures. For comparing algorithms to see which is more
> > effective, this means clock().

> I've just retested the test that saves/sends a list<int> using
> clock on Linux. The range of ratios from the Boost version to
> my version was between 1.4 and 4.5. The thing about clock is
> it returns values like 10,000, 20,000, 30,000, 50,000, 60,000,
> etc.

This sounds like a defective (albeit legal) implementation.
Posix requires CLOCKS_PER_SEC to be 1000000 precisely so that
implementations can offer more precision if the system supports
it. Linux does. I'd file a bug report.

Of course, historically, a lot of systems had clocks generated
from the mains, which meant a CLOCKS_PER_SEC of 50 (in Europe)
or 60 (in North America). On such systems, better precision
simply wasn't available, and I've gotten into the habit of not
counting on values of benchmarks that run for less than about 5
minutes. So I would tend not to notice such anomalies as you
describe.

> I would be more comfortable with it if I could get it to round
> its results less. The range of results with gettimeofday for
> the same test is not so wide -- between 2.0 and 2.8. I don't
> run other programs while I'm testing besides a shell/vi and
> firefox. I definitely don't start or stop any of those
> between the tests, so I'm of the opinion that the elapsed time
> results are meaningful.

The relative values are probably meaningful if the actual values
are large enough (a couple of minutes, at least) and they are
reproducible. The actual values, not really (but that's
generally not what you're interested in).

In my own tests, with clock(), under both Linux and Solaris, I
generally get differences from one run to the next of
considerably less than 10%. Which is about as accurate as
you're going to get, I think. Under Windows, I have to be more
careful about the surrounding environment, and even then, there
will be an outlier from time to time.

> > Victor is right about one thing: the implementation of
> > clock() in VC++ is broken, in the sense that it doesn't
> > conform to the specification of the C standard, e.g. that
> > "The clock function returns the implementation's best
> > approximation to the processor time used by the program
> > since the beginning of an implementation-defined era related
> > only to the program invocation." The last time I checked,
> > the clock() function in VC++ returned elapsed time, and not
> > processor time. (Of course, if you run enough trials, on a
> > quiescent machine, the functions involved are pure CPU, and
> > the goal is just to compare, not to obtain absolute values,
> > the information obtained is probably adequate anyway.)

> Except for the part about the functions being purely CPU, this
> describes my approach/intent.

Again, it depends on what you are trying to measure. If you
want to capture disk transfer speed, for example, then clock()
is NOT the function you want (except under Windows).

James Kanze

Mar 7, 2009, 6:29:41 AM
On Mar 7, 8:43 am, "Alf P. Steinbach" <al...@start.no> wrote:
> * c...@mailvault.com:
> > On Mar 6, 10:00 pm, "Alf P. Steinbach" <al...@start.no> wrote:
> >> * c...@mailvault.com:
[...]

> I was commenting on the "problem" with values like "10,000".

> If you *really*, actually, have values like those you
> exemplified, like "10,000", then that indicates with high
> probability that somewhere in the testing code an integer
> division is used where a floating point division should have
> been used.

I've just done a few tests on my Linux machine here, and it does
seem to be an error in the library implementation under Linux.

For some reason, Posix requires that CLOCKS_PER_SEC be 1000000,
regardless of the actual accuracy available. So a machine whose
only timer source is the 50 Hz line frequency (I don't know of
any today, but that used to be a frequent case, many years ago)
will return the values 0, 20000, 40000, etc. (or 0, 10000,
20000, etc., if it triggers on each zero crossing). This sort
of defeats the purpose of CLOCKS_PER_SEC, as defined by the C
standard, but Posix does occasionally get confused. And of
course, on a machine which does support more precision (i.e. all
modern machines), you should get it, at least from a QoI point
of view.

> But I assumed that was not the case, that it was just a case
> of dramatizing the effect of low resolution.

> If you really, actually have such values, then check out the
> division of 'clock' result.

> > (I could document the results from clock, but for now I just
> > document the ratio.) I use semicolons within a shell to run
> > each version 3 times in a row. I execute that command
> > twice. The second group starts up right on the heels of the
> > first. So the test is run 6 times total. I ignore the
> > first 3 runs/times and do those just to get the machine
> > ready for the next 3. Anyway, my impression, and it seemed
> > like Victor has a similar impression, is the output from
> > clock isn't as precise as it could be. The range I got
> > earlier from clock, 1.4 - 4.5, leaves quite a bit of room
> > for manipulation if that is a person's goal.

> Uh, the person doing the timing is presumably /not/ your
> adversary, but yourself?

> Anyways, if you have a range of 1.4 to 4.5 for the same code,
> tested in *nix (indicated by your comment about semicolons),
> using 'clock' which in *nix-land reports processor time, then
> Something Is Wrong.

Yes. And that is probably true even if clock only has a
resolution of 10 ms. You don't bench a single run; you bench a
large number of runs, in a loop. For any significant
measurements, I would expect a total measured time of something
like 5 minutes, at least. Any decent benchmark harness should
be able to handle this sort of stuff.

Jeff Schwab

Mar 7, 2009, 7:09:44 AM
James Kanze wrote:


> Of course,

Of course!

> historically, a lot of systems had clocks generated
> from the mains, which meant a CLOCKS_PER_SEC of 50 (in Europe)
> or 60 (in North America).

What's that got to do with clock frequency? (And why use the generator
frequency? Since we have three-phase power, couldn't the grid be used to
generate 150 or 180 Hz signals?)

> On such systems, better precision simply wasn't available,

How so? Even the slowest processors I've ever seen had clock speeds on
the order of kHz. If you run slowly enough, weird stuff can happen;
capacitors leak voltage, and stored values flip.

Alf P. Steinbach

Mar 7, 2009, 7:36:56 AM
* Jeff Schwab:

> James Kanze wrote:
>
>> historically, a lot of systems had clocks generated
>> from the mains, which meant a CLOCKS_PER_SEC of 50 (in Europe)
>> or 60 (in North America).
>
> What's that got to do with clock frequency? (And why use the generator
> frequency? Since we have triphase power, couldn't the grid be used to
> generate 150 or 180 Hz signals?)

Hum, this is REALLY off-topic. But as I recall, in Windows the 'clock'
resolution has to do with ordinary Windows timer resolution which again, if I
recall this correctly, and I think I do, has to do with the wiring of the very
first IBM PC's timer chip, which as I recall had three timers on the chip, and
it was sort of 52 interrupts per second.

Let me check with gOOgle, just wait a moment...

Ah, not quite, it interrupted every 55 msec, that is about 18.2 times per
second. I remembered that about three channels correctly, though. :-)

And doesn't seem to be connected to Windows timer resolution after all, dang!
But while in this really off-topic mode, that search found useful article, <url:
www.microsoft.com/technet/sysinternals/information/HighResolutionTimers.mspx>.

>> On such systems, better precision simply wasn't available,
>
> How so? Even the slowest processors I've ever seen had clock speeds on
> the order of KHz. If you run slowly enough, weird stuff can happen;
> capacitors leak voltage, and stored values flip.

I'm too lazy to check the value of CLOCKS_PER_SEC with Windows compilers.


Cheers,

- Alf

Jeff Schwab

Mar 7, 2009, 7:49:43 AM
Alf P. Steinbach wrote:

> Hum, this is REALLY off-topic. But as I recall, in Windows the 'clock'
> resolution has to do with ordinary Windows timer resolution which again,
> if I recall this correctly, and I think I do, has to do with the wiring
> of the very first IBM PC's timer chip, which as I recall had three
> timers on the chip, and it was sort of 52 interrupts per second.
>
> Let me check with gOOgle, just wait a moment...
>
> Ah, not quite, it interrupted every 55 msec, that is about 18.2 times
> per second. I remembered that about three channels correctly, though. :-)

Believe it or not, that timer is still useful. The 18.2 Hz corresponds
to the DRAM refresh rate, which was the real reason for adding that
timer in the first place.

co...@mailvault.com

Mar 7, 2009, 1:06:18 PM
On Mar 7, 5:21 am, James Kanze <james.ka...@gmail.com> wrote:
> On Mar 6, 10:16 pm, c...@mailvault.com wrote:
>
>
>
> > On Mar 6, 5:05 am, James Kanze <james.ka...@gmail.com> wrote:
> > > On Mar 6, 8:50 am, c...@mailvault.com wrote:
> > > > On Mar 5, 11:55 pm, "Alf P. Steinbach" <al...@start.no> wrote:
> > >     [...]

> > I've just retested the test that saves/sends a list<int> using

My testing till now has been of tests that take less than a
second. I'll add a loop to the tests to make them last for
several minutes.

>
> Again, it depends on what you are trying to measure.  If you
> want to capture disk transfer speed, for example, then clock()
> is NOT the function you want (except under Windows).
>

All of my tests measure time to marshal data to disk or from a
disk. I'll test the longer running versions on Linux with
gettimeofday.

co...@mailvault.com

Mar 7, 2009, 5:55:44 PM

I added the following loop to the Linux versions of the tests
that send a list<int>:

for (int reps = 1; reps <= elements/100; ++reps) {
    // ... the original save/send of the list<int> goes here ...
}


elements is read from the command line and controls how
many ints are added to the list. I tested with values of
200,000 and 300,000 for elements. I tested the Ebenezer
version and then the Boost Serialization version and
went back and forth like that. Here are the results in
seconds.

input      Boost Serialization      C++ Middleware Writer
---------------------------------------------------------
200000             169                        84
200000             102                        81
200000             173                        82
200000             101                        82
200000             103                        82

300000             367                       187
300000             295                       187
300000             296                       189
300000             228                       188
300000             300                       188


The sizes of the output files when the input was 200,000
are a little over 1.6 billion bytes. When the input
was 300,000, the output files are a little over 3.6
billion bytes. I had to use O_LARGEFILE with open()
in the Ebenezer version to get the correct results
when the input was 300,000. All of the Ebenezer results
are with the version that uses O_LARGEFILE even though
it wasn't needed when the input was 200,000.

The ratios from these tests are less than the 2.4 to
2.7 that are posted on the website. I'm not convinced
though that the posted ratios are inaccurate. Those
tests may reflect more typical usage than these.
Finally, I think it's worth noting that the Ebenezer
results were more stable than the Boost Serialization
results.

James Kanze

Mar 8, 2009, 10:09:25 AM
On Mar 7, 1:09 pm, Jeff Schwab <j...@schwabcenter.com> wrote:
> James Kanze wrote:
> > Of course,

> Of course!

Well, it seems "of course" to those of us who actually lived
it. :-)

> > historically, a lot of systems had clocks generated from the
> > mains, which meant a CLOCKS_PER_SEC of 50 (in Europe) or 60
> > (in North America).

> What's that got to do with clock frequency? (And why use the
> generator frequency? Since we have triphase power, couldn't
> the grid be used to generate 150 or 180 Hz signals?)

I don't know what the grid could have been used to generate, but
most computers then (and now) got their power from a standard
wall plug, which delivered single-phase 110 V, 60 Hz in North
America, and 220 V, 50 Hz in Europe.

> > On such systems, better precision simply wasn't available,

> How so? Even the slowest processors I've ever seen had clock
> speeds on the order of KHz. If you run slowly enough, weird
> stuff can happen; capacitors leak voltage, and stored values
> flip.

Back then, quartz clocks were a luxury; the CPU "clock" was often
generated by an RC feedback to a Schmitt trigger, with a
precision of well under 10%. Whereas the mains frequency (at least
in France) was guaranteed by the electric company to have
4320000 +/- 0.5 oscillations in each 24 hour period. (Also, a
lot of machines at the time could run in pure step-by-step mode,
with each clock pulse being triggered manually. Dynamic RAM had
just been invented and still wasn't too widespread, and
magnetic core memory doesn't flip at whim.)

Jeff Schwab

Mar 8, 2009, 10:23:13 AM
James Kanze wrote:

> Back then, quartz clocks were a luxury; the CPU "clock" was often
> generated by an RC feedback to a Schmitt trigger, with a
> precision of well under 10%. Whereas the mains frequency (at least
> in France) was guaranteed by the electric company to have
> 4320000 +/- 0.5 oscillations in each 24 hour period. (Also, a
> lot of machines at the time could run in pure step-by-step mode,
> with each clock pulse being triggered manually. Dynamic RAM had
> just been invented and still wasn't too widespread, and
> magnetic core memory doesn't flip at whim.)

Now I feel young. :)
