"crea" <n...@com.notvalid> wrote in news:9blju.115250$qC.2...@fx07.am4:
> Many other examples as well where C++ is faster. Like in some
> situations where you check the type and do calculations according to
> that in C++, and C does not do the same, so it's slower. Sometimes C uses
> void* types to do calculations, when C++ knows the type and thus can
> use special, very fast methods to proceed.
This is not related to types as such, but to templates and inlining. In
principle you could use void* in C++ too and do the casting manually where
needed; if the thing gets inlined, it is as fast as the type-safe code
(static_cast<> is a no-op at run time). Such a void* approach has
sometimes been used in C++ to reduce the so-called "template bloat", but
nowadays compilers do this automatically.
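For illustration, here is a rough sketch of my own (not from any
particular library) of that void* idea: an untyped core routine plus a
thin typed wrapper. Once the wrapper is inlined, the static_cast<> costs
nothing at run time.

#include <cstddef>

// Untyped core, C qsort-style: works through void* pointers.
inline int compare_ints_untyped(const void* a, const void* b)
{
    // static_cast from void* back to the real type is purely a
    // compile-time reinterpretation; it emits no machine code.
    int x = *static_cast<const int*>(a);
    int y = *static_cast<const int*>(b);
    return (x > y) - (x < y);
}

// Typed wrapper; after inlining this is exactly as fast as a
// hand-written typed comparison.
inline int compare_ints(const int& a, const int& b)
{
    return compare_ints_untyped(&a, &b);
}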
> Also, sometimes in vectors etc. it's faster to do a memcpy instead of
> copying each element one by one like the C functions are doing. So C++
> checks if there is a faster method and proceeds according to that.
That does not follow; I'm sure C compilers also recognize loops which can
be replaced with an optimized memcpy, or vice versa.
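For example, a plain element-by-element copy loop like the following is
commonly turned into a single memcpy call by optimizing C and C++
compilers (whether it actually happens depends on the compiler and the
optimization level):

#include <cstddef>

void copy_buffer(char* dst, const char* src, std::size_t n)
{
    // Naive element-by-element copy; gcc and clang at -O2 typically
    // recognize this loop and replace it with a call to memcpy.
    for (std::size_t i = 0; i < n; ++i)
        dst[i] = src[i];
}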
>
>>
>> If you can (safely) use std::atoi(), then it's faster than the
>> equivalent code using std::stringstream.
Sure.
>
> I actually just tried both of them, and yes, it seems like this is true.
> The stringstream method seemed to be much slower. Don't know why...
> Is there no fast way of doing atoi in C++?
The "fast" is very relative. If you are parsing user-typed input or
otherwise a small number of numbers (like thousands), then stringstream
is most probably ok. Only if there are millions of numbers in tight
loops, then the parsing speed might become critical and one may need to
convolute the code by using atoi() (or probably the more manageable
variants strtol(), strtoul()).
There is also a handy boost::lexical_cast feature which is effectively
wrapping stringstream.
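A minimal sketch of the alternatives (error handling kept deliberately
thin here; a real program would check the stream state, the end pointer
and errno):

#include <cstdlib>
#include <sstream>
#include <string>
// #include <boost/lexical_cast.hpp>  // if Boost is available

int parse_with_stringstream(const std::string& s)
{
    std::istringstream iss(s);
    int value = 0;
    iss >> value;      // sets failbit on bad input, ignored here
    return value;
}

int parse_with_strtol(const std::string& s)
{
    char* end = nullptr;
    long value = std::strtol(s.c_str(), &end, 10);
    // end points past the last digit consumed; a real program would
    // check it, and errno for overflow.
    return static_cast<int>(value);
}

// With Boost:
// int value = boost::lexical_cast<int>(s);  // throws on bad input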
> This is the problem: you never know which C function is faster than the
> corresponding C++ version. So you have to time it yourself or just
> know it. But there are thousands of functions, so how can you remember
> how fast each of them is?
Experience helps a bit, but in reality you never know exactly how fast
something is, as the hardware is constantly changing. The point is to
express your algorithm in the clearest and most concise way you can find,
to get the algorithmic big-O complexities right, and to worry only if
something appears too slow in the end. And in that case one should profile
an actual, realistic usage case which is too slow, and start optimizing
from there.
Of course there are some rough guidelines: dynamic memory allocation and
frequent virtual function calls are slow, dynamic_cast is slower still,
and exception throwing is very slow; arithmetic is fast, and memory access
beyond the CPU caches is slow. C++ streams use a lot of virtual function
calls and probably some dynamic memory allocation, so no wonder they are
slower than atoi(), which has none of that. But this is still only a
constant factor of maybe 2-3, which means the same algorithmic complexity,
so one cannot just say C++ streams are useless. They are just designed for
another task. And factors like 2-3 only come into play in the code
branches where the program spends a significant fraction of its run time.
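If one really wants to see that constant factor on a particular machine,
a quick-and-dirty timing sketch along these lines is enough (the numbers
will of course vary with the compiler, standard library and hardware):

#include <chrono>
#include <cstdlib>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

int main()
{
    // One million copies of the same number; real input would vary.
    std::vector<std::string> input(1000000, "123456");

    auto t0 = std::chrono::steady_clock::now();
    long long sum1 = 0;
    for (const std::string& s : input)
    {
        std::istringstream iss(s);
        int v = 0;
        iss >> v;
        sum1 += v;
    }
    auto t1 = std::chrono::steady_clock::now();

    long long sum2 = 0;
    for (const std::string& s : input)
        sum2 += std::strtol(s.c_str(), nullptr, 10);
    auto t2 = std::chrono::steady_clock::now();

    using std::chrono::duration_cast;
    using std::chrono::milliseconds;
    std::cout << "stringstream: "
              << duration_cast<milliseconds>(t1 - t0).count()
              << " ms (checksum " << sum1 << ")\n";
    std::cout << "strtol:       "
              << duration_cast<milliseconds>(t2 - t1).count()
              << " ms (checksum " << sum2 << ")\n";
}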
hth
Paavo