Here is a description of the environment with the test code I use and the
results I am getting.
The input data file used contains 3.5 million text records of varying
lengths terminated by '\n'.
I run the tests twice to make sure that caching is not responsible for the
time difference.
-----------------------------------
{
    time_t start, end;
    double dif;
    char line[1024];
    std::ifstream in("c:\\Data\\IVData.csv");

    time(&start);
    while (in.getline(line, 1024))
    {
    }
    time(&end);
    in.close();
    dif = difftime(end, start);
    cxMemo1->Lines->Add((AnsiString)"Test 1 has taken " + dif + " seconds.");

    FILE * fp;
    char cline[512];
    fp = fopen("c:\\Data\\IVData.CSV", "r");
    time(&start);
    while (fgets(cline, 512, fp) != NULL)
    {
    }
    time(&end);
    fclose(fp);
    dif = difftime(end, start);
    cxMemo1->Lines->Add((AnsiString)"Test 2 has taken " + dif + " seconds.");
}
-----------------------------------
Results :
Test 1 has taken 203 seconds.
Test 2 has taken 5 seconds.
Test 1 has taken 201 seconds.
Test 2 has taken 4 seconds.
> char line[1024];
> std::ifstream in("c:\\Data\\IVData.csv");
> while (in.getline(line, 1024))
> {
> }
> in.close();
Try this:
std::string filename("c:\\Data\\IVData.csv");
ifstream ifs(filename.c_str(), ios::in);
std::string buffer;
while ( getline(ifs, buffer) )
{
}
ifs.close();
Hans.
http://info.borland.com/newsgroups/guide.html
- Clayton
Sorry, I posted accidentally to the wrong newsgroup and tried to cancel the
post but wasn't able to.
Thanks for your suggestion. I tried it and it helps (takes about 25% off of
the total time), but getline() still doesn't even come close to fgets(); I
am guessing that this is a flaw in the implementation of getline().
Nate
Yes, since I run each of the tests twice (i.e. test1 -> test2 -> test1 ->
test2),
I think that the order of the tests is effectively swapped, right?
> Yes, since I run each of the tests twice (i.e. test1 -> test2 -> test1 ->
> test2),
>
> I think that the order of the tests is effectively swapped, right?
>
Well, I guess so, if the file is buffered to the same memory. I found
that for one of my programs, the second, third, etc. time that I ran
it, it ran faster than the first time. I inferred that this was because
large chunks of the data were in memory - but I could be wrong. I
didn't see any time differences like yours, though.
Nate
> Thanks for your suggestion. I tried it and it helps (takes about 25% off of
> the total time), but getline() still doesn't even come close to fgets(); I
> am guessing that this is a flaw in the implementation of getline().
One difference is that getline writes to a std::basic_string object,
while fgets writes into a fixed-size array that is declared on the
stack. Perhaps the real efficiency bottleneck is in the string class
and not in the getline() function itself? The string class is,
after all, using dynamic memory.
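One way to test that hypothesis in isolation would be a micro-benchmark
along these lines (a hypothetical sketch, not from the thread: it times
only the append/copy work with no file I/O, and the 60-character record
length is an assumption):

#include <cstdio>
#include <ctime>
#include <string>

int main()
{
    const long lines = 3500000;   // mimic the 3.5M-record file
    const int len = 60;           // assumed average record length
    unsigned long sink = 0;       // keeps the loops from being optimized away

    // Build each "line" by appending into a fresh std::string,
    // roughly what getline() into a string has to do.
    std::clock_t t0 = std::clock();
    for (long i = 0; i < lines; ++i) {
        std::string s;
        for (int j = 0; j < len; ++j) s += 'x';
        sink += s.size();
    }
    std::clock_t t1 = std::clock();

    // Copy the same characters into a fixed stack buffer,
    // roughly what fgets() has to do.
    char buf[512];
    for (long i = 0; i < lines; ++i) {
        for (int j = 0; j < len; ++j) buf[j] = 'x';
        buf[len] = '\0';
        sink += buf[0];
    }
    std::clock_t t2 = std::clock();

    std::printf("string append: %ld ticks, stack buffer: %ld ticks (%lu)\n",
                (long)(t1 - t0), (long)(t2 - t1), sink);
    return 0;
}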
--
Chris (TeamB);
I have no idea what the cause is, but a 4000 percent difference in
performance seems out of line to me and points to a flaw in the
implementation of getline. Also, when fgets is compared to getline
using g++ on Linux there is some performance difference, but nothing
to write home about.
Exactly this had been explained to crhas over in
c.l.c++.m already.
Schobi
--
Spam...@gmx.de is never read
I'm Schobi at suespammers dot org
"The sarcasm is mightier than the sword."
Eric Jarvis
> I have no idea what the cause is, but a 4000 percent difference
> in performance seems out of line to me and points to a flaw in the
> implementation of getline. Also, when fgets is compared to getline
> using g++ on Linux there is some performance difference, but nothing
> to write home about.
Without profiling, it's hard to say where the time is spent. But it's
not a fair comparison between Linux/g++ and Windows/BCB. Different
OS, different compiler (with different levels of optimization... g++
probably much more aggressive), and last but not least, different
standard library implementations.
Dinkumware (BCB) has a great implementation, but I have heard it
expects the compiler to optimize away and inline things that Borland
might not be doing.
Also, I'm not making excuses for Borland, but based on what you have
said I do disagree with your conclusion that it is a flaw in
getline. It may later turn out that you are right, but the supporting
arguments you are currently using are inadequate to draw any
conclusions about anything except that your program takes a lot
longer using getline.
--
Chris (TeamB);
Schobi,
I don't know who you are, but it seems that you are taking it
personally when I say that getline()'s implementation is flawed. A lot
has been explained to me in this thread and I am grateful for
everyone's responses. It doesn't seem to me, however, that the
explanation that "strings do more" than char buffers, or that getline()
writes into a string rather than a fixed-size array, should account for
a performance difference of 4000 percent.
Do you?
Thanks Chris - good stuff.
Curt
Two points that may help the OP.
If he's profiling in a debug build I would
expect getline()/std::string to take longer.
He needs to profile in a release build with
optimizations to really compare.
Second thing is, regarding your point
about reallocation, he can always use reserve().
I've found that if I have some idea of the size
of the lines, this helps a lot.
To the OP, how do your results look in
a release build if you do:
std::string line;
line.reserve(512);
before the getline() call?
I'm not. I know the guys who implemented it only from
their newsgroups postings and have absolutely no
personal relationships with them. (One of them once
publicly plonked me, if that's any indication.)
I just find it annoying that you keep asking the same
question over and over despite the fact that you
already got it answered several times.
> A lot
> has been explained to me in this thread and I am grateful for
> everyone's responses. It doesn't seem to me, however, that the
> explanation that "strings do more" than char buffers, or that getline()
> writes into a string rather than a fixed-size array, should account for
> a performance difference of 4000 percent.
(If you don't understand this, then why don't you ask
that question, instead of repeating the already
answered one?)
> Do you?
In 'std::getline()', 'std::string' has to allocate
memory as needed in order to grow. When a string grows
beyond its capacity, it allocates a new (bigger) chunk
of memory, copies its contents into it, and frees
the old memory. Allocating and freeing dynamic memory
is very expensive (and AFAIK, regarding performance,
Borland's memory manager isn't exactly the leader of
the pack). Copying characters isn't free either.
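You can watch the growth happen with a few lines (a
minimal sketch; the exact capacity steps depend on the
library implementation):

#include <iostream>
#include <string>

int main()
{
    std::string s;
    std::string::size_type cap = s.capacity();
    std::cout << "initial capacity: " << cap << '\n';
    for (int i = 0; i < 1000; ++i) {
        s += 'x';                   // grow one character at a time
        if (s.capacity() != cap) {  // a reallocation (and copy) happened
            cap = s.capacity();
            std::cout << "grew to " << cap
                      << " at length " << s.size() << '\n';
        }
    }
    return 0;
}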
'fgets()', OTOH, only has to copy bytes from the file
into a user-provided buffer, stopping either on newline
or on end-of-buffer, leaving you to deal with the
consequences of the latter.
This /could/ explain the difference you see. But then
it could be many other things. Without profiling the
app, you can't know.
Nevertheless, I went and did some tests using VC8,
which is what I have available here. I started with
this program:
#include <iostream>
#include <fstream>
#include <string>
#include <cstdio>
#include <cassert>
#include <windows.h>
const unsigned int num_tests = 10;
void test_getline(const std::string& filename)
{
    std::ifstream ifs(filename.c_str());
    assert(ifs.good());
    std::string line;
    while( std::getline(ifs, line, '\n') ) {
    }
}

void test_fgets(const std::string& filename)
{
    std::FILE* fp = std::fopen(filename.c_str(), "r");
    assert(fp);
    char buffer[512];
    while( std::fgets(buffer, sizeof(buffer), fp) != NULL ) {
    }
    std::fclose(fp);
}

inline unsigned int test( const std::string& filename
                        , void (*func)(const std::string& filename) )
{
    const DWORD dwStart = ::GetTickCount();
    for( unsigned int u = 0; u < num_tests; ++u ) {
        func(filename);
    }
    return ::GetTickCount() - dwStart;
}

int main(int argc, char* argv[])
{
    assert(argc == 2);
    const unsigned int u_getline = test( argv[1], test_getline );
    const unsigned int u_fgets   = test( argv[1], test_fgets );
    std::cout << "reading \"" << argv[1] << "\" took ~"
              << u_getline / num_tests << " using 'std::getline()'\n";
    std::cout << "reading \"" << argv[1] << "\" took ~"
              << u_fgets / num_tests << " using 'std::fgets()'\n";
    return 0;
}
I fed it a 193k log file that happened to be in my
temp folder and got (times in milliseconds per pass)
reading "<file>" took ~139 using 'std::getline()'
reading "<file>" took ~4 using 'std::fgets()'
for the debug version and
reading "<file>" took ~6 using 'std::getline()'
reading "<file>" took ~1 using 'std::fgets()'
for the release version.
(I checked and found that reversing the order of the
tests doesn't have any notable influence on the result.)
So VC was able to speed up 'fgets()' by a factor of 4,
and 'std::getline()' by a factor of 20. To me this seems
to indicate that a good optimizer is crucial for the
C++ version to perform reasonably.
(Good optimization is another thing BCC has not been
very famous for in the last couple of years.)
Of course, these don't have to be realistic figures. VC
might, after all, find out that the results of reading
the files aren't needed and just skip a lot of code.
So I changed my code to output the results:
void test_getline(const std::string& filename)
{
    std::ifstream ifs(filename.c_str());
    assert(ifs.good());
    std::ofstream ofs( (filename + ".out").c_str() );
    assert(ofs.good());
    std::string line;
    while( std::getline(ifs, line, '\n') ) {
        ofs << line << '\n';
    }
}

void test_fgets(const std::string& filename)
{
    std::FILE* fp = std::fopen(filename.c_str(), "r");
    assert(fp);
    std::ofstream ofs( (filename + ".out").c_str() );
    assert(ofs.good());
    char buffer[512];
    while( std::fgets(buffer, sizeof(buffer), fp) != NULL ) {
        ofs << buffer;  // fgets() keeps the trailing newline
    }
    std::fclose(fp);
}
I now get
reading "<file>" took ~203 using 'std::getline()'
reading "<file>" took ~11 using 'std::fgets()'
for a debug build and
reading "<file>" took ~13 using 'std::getline()'
reading "<file>" took ~7 using 'std::fgets()'
for a release build.
VC was able to speed up the C version by a factor
of 1.5, the C++ version by a factor of ~15.5 --
obviously again indicating that optimization is
a very important factor in this.
If we take the optimized builds of the first
version, we see a factor of 7 between the C and C++
versions. That seems a lot -- but then you have to
consider that you didn't compare equal algorithms.
The C version will cut lines that are longer than
511 characters. (Indeed it fails on the log file
I fed it with.)
If you change the code to handle arbitrarily long
lines, you will have to use dynamic memory. I very
much doubt that you would find an algorithm that
is significantly faster than the (considerably
easier to write, to get right, and to read) C++
version.
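For illustration, a version of the C loop that handles
lines of any length could look like this (a sketch; it
assumes a record is complete when the chunk ends in '\n'):

#include <cstdio>
#include <string>

// Accumulate fgets() chunks into a dynamically growing
// std::string until the newline shows up.
void read_all_lines(std::FILE* fp)
{
    std::string line;
    char chunk[512];
    while (std::fgets(chunk, sizeof(chunk), fp) != NULL) {
        line += chunk;                       // may reallocate
        std::string::size_type n = line.size();
        if (n && line[n - 1] == '\n') {      // got a complete line
            line.erase(n - 1);               // drop the newline
            // ... process 'line' here ...
            line.clear();
        }
    }
}

int main(int argc, char* argv[])
{
    if (argc != 2) return 1;
    std::FILE* fp = std::fopen(argv[1], "r");
    if (fp) {
        read_all_lines(fp);
        std::fclose(fp);
    }
    return 0;
}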
Does this make it any clearer?
Schobi
I ran the tests using line.reserve() and got the following results:
NOTE:
Test 1 is getline() with line.reserve()
Test 2 is fgets()
Release Version
-----------------------
Test 1 has taken 124 seconds.
Test 2 has taken 4 seconds.
Debug Version
------------------------
Test 1 has taken 159 seconds.
Test 2 has taken 4 seconds.
If I compare these numbers to mine, the optimization
question comes up. The debug version numbers are
pretty close to mine, but my release version numbers
are much better for 'std::getline()', while yours
aren't that much different.
You typically see a big difference between debug/release
when using templates - especially boost shared ptr
stuff.
> I ran the tests using line.reserve() and got the following results:
>
> NOTE:
> Test 1 is getline() with line.reserve()
> Test 2 is fgets()
>
> Release Version
> -----------------------
> Test 1 has taken 124 seconds.
> Test 2 has taken 4 seconds.
>
> Debug Version
> ------------------------
> Test 1 has taken 159 seconds.
> Test 2 has taken 4 seconds.
Running your test in MSVC7.1,
in release with optimization I get
around 1 second for fgets() and around
4 seconds for getline() without reserve(),
around 2-3 seconds with reserve().
<snip>
> If I compare these numbers to mine, the optimization
> question comes up. The debug version numbers are
> pretty close to mine, but my release version numbers
> are much better for 'std::getline()', while yours
> aren't that much different.
With MSVC7.1 my results were similar to yours.
Using reserve() seems to reduce the time by
around 40%.
Although I have BCB6 installed,
I don't have time to test with that.
While fgets() is clearly faster, it requires a fixed-size
buffer (or reallocating one by hand). This isn't
needed with std::string/getline(), so I find the
tradeoff acceptable. I'm also optimizing
for size and speed. This may be a factor.
I don't remember how to set up optimization with
BCB. But it sounds like its "out of the box"
settings aren't very good.
I just put a
line.reserve(1024)
into the 'test_getline()' function (first version,
without the output) and it didn't change all that
much.
> [...]
I used the code as shown except that, since BCB6 refuses to inline a
function containing a 'for' loop, I commented out the word 'inline' so the
warning wouldn't clutter up the screen capture.
Note that 'reserve' was not used.
The results were:
The 1.2M, 25,000 line test file took these times for getline/fgets:
122/20 dynamic linked debug on
36/20 dynamic linked debug off
129/21 static linked debug on
36/20 static linked debug off
------------------------------
C:\Documents and Settings\Edward\My Documents\lookat\q186
>dir test
Volume in drive C has no label.
Volume Serial Number is FC8D-A209
Directory of C:\Documents and Settings\Edward\My Documents\lookat\q186
09/20/2006 09:26 AM 1,239,311 test
1 File(s) 1,239,311 bytes
0 Dir(s) 30,942,109,696 bytes free
C:\Documents and Settings\Edward\My Documents\lookat\q186
>s8 lc test
S Version 8.0 Copyright 1986-2003 Emdata Co
C:\Documents and Settings\Edward\My Documents\lookat\q186\
25450 lines test
25450 lines 1 files
C:\Documents and Settings\Edward\My Documents\lookat\q186
>bcc32 -WCR -v ques186
Borland C++ 5.6.4 for Win32 Copyright (c) 1993, 2002 Borland
ques186.cpp:
Turbo Incremental Link 5.66 Copyright (c) 1997-2002 Borland
C:\Documents and Settings\Edward\My Documents\lookat\q186
>ques186 test
reading "test" took ~122 using 'std::getline()'
reading "test" took ~20 using 'std::fgets()'
C:\Documents and Settings\Edward\My Documents\lookat\q186
>bcc32 -WCR -v- ques186
Borland C++ 5.6.4 for Win32 Copyright (c) 1993, 2002 Borland
ques186.cpp:
Turbo Incremental Link 5.66 Copyright (c) 1997-2002 Borland
C:\Documents and Settings\Edward\My Documents\lookat\q186
>ques186 test
reading "test" took ~36 using 'std::getline()'
reading "test" took ~20 using 'std::fgets()'
C:\Documents and Settings\Edward\My Documents\lookat\q186
>bcc32 -WC -v ques186
Borland C++ 5.6.4 for Win32 Copyright (c) 1993, 2002 Borland
ques186.cpp:
Turbo Incremental Link 5.66 Copyright (c) 1997-2002 Borland
C:\Documents and Settings\Edward\My Documents\lookat\q186
>ques186 test
reading "test" took ~129 using 'std::getline()'
reading "test" took ~21 using 'std::fgets()'
C:\Documents and Settings\Edward\My Documents\lookat\q186
>bcc32 -WC -v- ques186
Borland C++ 5.6.4 for Win32 Copyright (c) 1993, 2002 Borland
ques186.cpp:
Turbo Incremental Link 5.66 Copyright (c) 1997-2002 Borland
C:\Documents and Settings\Edward\My Documents\lookat\q186
>ques186 test
reading "test" took ~36 using 'std::getline()'
reading "test" took ~20 using 'std::fgets()'
C:\Documents and Settings\Edward\My Documents\lookat\q186
>
----------------------------------------------------------
The code used was
----------------------------------------------------------
C:\Documents and Settings\Edward\My Documents\lookat\q186
>type ques186.cpp
#include <iostream>
#include <fstream>
#include <string>
#include <cstdio>
#include <cassert>
#include <windows.h>
const unsigned int num_tests = 10;
void test_getline(const std::string& filename)
{
    std::ifstream ifs(filename.c_str());
    assert(ifs.good());
    std::string line;
    while( std::getline(ifs, line, '\n') ) {
    }
}

void test_fgets(const std::string& filename)
{
    std::FILE* fp = std::fopen(filename.c_str(), "r");
    assert(fp);
    char buffer[512];
    while( std::fgets(buffer, sizeof(buffer), fp) != NULL ) {
    }
    std::fclose(fp);
}

/* inline */
unsigned int test( const std::string& filename
                 , void (*func)(const std::string& filename) )
{
    const DWORD dwStart = ::GetTickCount();
    for( unsigned int u = 0; u < num_tests; ++u ) {
        func(filename);
    }
    return ::GetTickCount() - dwStart;
}

int main(int argc, char* argv[])
{
    assert(argc == 2);
    const unsigned int u_getline = test( argv[1], test_getline );
    const unsigned int u_fgets   = test( argv[1], test_fgets );
    std::cout << "reading \"" << argv[1] << "\" took ~"
              << u_getline / num_tests << " using 'std::getline()'\n";
    std::cout << "reading \"" << argv[1] << "\" took ~"
              << u_fgets / num_tests << " using 'std::fgets()'\n";
    return 0;
}
C:\Documents and Settings\Edward\My Documents\lookat\q186
>
----------------------------------------------------------
. Ed
> Duane Hebert wrote in message
> news:4511...@newsgroups.borland.com...
So yours is more in line with expectations.
I imagine since you have BCB6 it's Dinkumware.
Which BCB and which STL is the OP using?
I changed my optimization to be for speed
and the time was a bit less and reserve didn't
help as much.
FWIW, we do a lot of parsing of flat files
and I've never seen this as a bottleneck -
at least not more than you would expect
for file i/o.
As you can see from the screen capture it is BCB 6, bcc32.exe is 5.6.4, the
one which uses StlPort.
. Ed
> Duane Hebert wrote in message
> news:4511b263$1...@newsgroups.borland.com...
Right. I just posted a test from home using Turbo C++ from
Borland and MSVC8. I believe Borland uses Dinkumware
in this release so both are with the "same" std library.
My results were a bit different from yours, but I had a much smaller
file. Ignoring debug runs, you seem to get around
2:1 where I get more like 15:1 with Borland.
Maybe STLport is faster. I wonder how BDS fares? Probably
similar to the Turbo C++ that I'm running.
I ran your code at home on Turbo C++ and the free
MSVC8.
MSVC: 14/4
Turbo:
Console project, no VCL.
Debug: 93/14
Release dynamic 62/4
Release static 65/4
I was using a 314Kb text file.
I had both set to optimize for speed in the release build.
We can ignore the debug build. I get the same
small boost with the dynamic run in Borland.
But I still see that Borland is 4 times slower using
std::getline.
Admittedly, I've just installed Turbo C++ and
am just using the default release/debug settings,
but I'm also using my default settings in msvc.
To Hendrik, with MSVC8 and this code,
reserving the string made nearly no difference.
With BCB reserving 1024 for the string increased
the release build to 68.
I was talking about the OP and it looks like he's running
Borland Studio (I imagine that's BDS).
At any rate, I tried your code again with a file size
of 1,256Kb.
Results were:
MSVC: 51/12 Both from the ide with no debugging
and from the exe (still around 4:1)
Borland
From the ide, release /run with no debugging:
368/17
From the exe:
259/16
Sounds like the "run with no debugging"
option in the IDE has some problems.
I don't know what all this proves though.
This code doesn't really do anything but
load the files. Maybe things would change
profiling string parsing or something.
I am using Borland Studio 2006 on Windows XP. That's
Dinkumware, right? The file I am reading is 206 MB, or 3.5 million
text records, so the performance difference is pretty noticeable.
Same here. Most time is spent in processing
the parsed data.
That's probably an indication that the rumours regarding
VC having a pretty good allocator are right? IIUC, your
"BCB" in the above referred to a version using Dinkumware,
so it shouldn't be in 'std::string's implementation.
What do you do with the data you parse? I ask
because the read time might still turn out to be
irrelevant once the data processing kicks in. Then you
would have wasted time and effort on this issue.
If it's really all that important, download the
free VC8 and play with it. Or consider using
the Intel compiler (which AFAIK plugs into VS).
They'll most likely have an evaluation version,
too, so you can test whether it gains you
anything.
Same here but I also noticed a big difference between running
it from the IDE with no debugging and running it from the
exe directly.
> That's probably an indication that the rumours regarding
> VC having a pretty good allocator are right? IIUC, your
> "BCB" in the above referred to a version using Dinkumware,
> so it shouldn't be in 'std::string's implementation.
I ran these tests at home where I have Borland's
free Turbo C++ Explorer and MS's free VC8.
Both are using Dinkumware. I haven't applied
any patches to Turbo though.
The thing that I actually found more annoying
was the time differences with Borland between
running a release from the IDE and from the
exe. With MSVC8 at home and the licensed
MSVC7.1 that I have at the office, there's
basically no difference.
My normal practice is to get something working
in debug and switch to release to see how it
performs. Not serious profiling but just sort
of an "eye ball" test along the way. I remember
with BCB, it was the same.
Not in my case. Over 2 minutes simply to read in the file is a significant
amount of the total time. I cannot use getline() in this program with any
kind of flags or tweaks that I've come across.
Regardless of the outcome, I appreciate everyone's interest in MY problem
and I don't consider this a waste of time. Besides, the value of my time is
questionable anyway.
Once the file is read I process the comma delimited file and perform
optimized matrix calculations on the data. I am working hard to bring the
calc time down and THAT would be a waste of time if it took 2 minutes, or
more, just to input the data.
> That's probably an indication that the rumours regarding
> VC having a pretty good allocator are right? IIUC, your
I don't know when/how/if things have changed wrt VC++ allocator, but in
the past it was *very* slow, probably the slowest of all Windows compilers.
.a
>Once the file is read I process the comma delimited file and perform
>optimized matrix calculations on the data. I am working hard to bring the
>calc time down and THAT would be a waste of time if it took 2 minutes, or
>more, just to input the data.
It might be faster, then, to read the file into a large buffer (just a
straight fread) and parse that, rather than reading it line by line and
then parsing the lines.
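Something like this, perhaps (a sketch with minimal error handling;
it assumes text data with no embedded NUL bytes, and binary mode means
lines may still carry a trailing '\r' on Windows):

#include <cstdio>
#include <cstdlib>
#include <cstring>

int main(int argc, char* argv[])
{
    if (argc != 2) return 1;
    std::FILE* fp = std::fopen(argv[1], "rb");
    if (!fp) return 1;

    // Find the file size and slurp the whole thing in one fread().
    std::fseek(fp, 0, SEEK_END);
    long size = std::ftell(fp);
    std::fseek(fp, 0, SEEK_SET);
    char* buf = (char*)std::malloc(size + 1);
    size_t got = std::fread(buf, 1, size, fp);
    buf[got] = '\0';
    std::fclose(fp);

    // Walk the buffer in place, one '\n'-terminated record at a time.
    long lines = 0;
    for (char* p = buf; *p; ) {
        char* nl = std::strchr(p, '\n');
        // ... parse the record [p, nl) here ...
        ++lines;
        if (!nl) break;
        p = nl + 1;
    }
    std::printf("%ld lines\n", lines);
    std::free(buf);
    return 0;
}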
In that case you may want to investigate TStringList and
LoadFromFile(). It may be quicker with your version of
BCB.
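In BCB that would be something like this (a sketch, untested;
TStringList loads and splits the whole file in one call):

#include <Classes.hpp>

void LoadWithStringList()
{
    TStringList* sl = new TStringList;
    try {
        sl->LoadFromFile("c:\\Data\\IVData.csv");  // read and split at once
        for (int i = 0; i < sl->Count; ++i) {
            AnsiString line = sl->Strings[i];
            // ... parse 'line' here ...
        }
    }
    __finally {
        delete sl;
    }
}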
>> > It might be faster, then, to read the file into a large buffer (just a
>> > straight fread) and parse that, rather than reading it line by line and
>> > then parsing the lines.
>> >
>> That's the solution. Thank you.
>
> In that case you may want to investigate TStringList and
> LoadFromFile(). It may be quicker with your version of
> BCB.
Or to increase the ifstream buffer size; cf.
http://www.cplusplus.com/ref/iostream/streambuf/pubsetbuf.html
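For example (a sketch; note that pubsetbuf() generally only has an
effect if it is called before the file is opened, and an implementation
is allowed to ignore it entirely):

#include <fstream>
#include <string>

int main()
{
    static char iobuf[1 << 20];                // 1 MB stream buffer
    std::ifstream in;
    in.rdbuf()->pubsetbuf(iobuf, sizeof(iobuf));
    in.open("c:\\Data\\IVData.csv");           // open after setting the buffer
    std::string line;
    while (std::getline(in, line)) {
        // ... process 'line' ...
    }
    return 0;
}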
Setting the buffer to 512 or 1024 had no measurable effect.
I suspect that it's something in getline() but I can't
say for sure.
I ran a modified version of Ed Mulroy's test program under a profiler to
see what was really happening. This version uses reserve to set the
initial size of the string buffer to 512 (like the fgets char buffer).
It was compiled with BDS2006 with all updates and hotfixes installed. It
is reading a file with 446K lines containing 27.7M characters.
Total execution time using getline is 31.7 seconds.
The getline-based test program spends almost 100% of the time inside the
getline function. The profiler shows where getline spends its time.
Note, I have only listed the subroutines that account for a significant
fraction of each function's execution time. The balance of the total is
spent in other, unlisted routines.
getline 60% streambuf::snextc
19% string::operator+=
10% getline body
5% string::max_size
---
94%
streambuf::snextc 61% streambuf::sgetc
22% streambuf::sbumpc
12% snextc body
---
95%
streambuf::sgetc 92% filebuf::underflow
5% sgetc body
---
97%
filebuf::underflow 45% filebuf::pbackfail
31% filebuf::uflow
17% underflow body
---
93%
streambuf::sbumpc 80% filebuf::uflow
15% sbumpc body
---
95%
filebuf::pbackfail 51% pbackfail body
27% std::ungetc
---
78%
filebuf::uflow 54% uflow body
31% std::fgetc
---
85%
The function uflow is called twice per character, once from sbumpc and
once from underflow. The function pbackfail is called once per
character. This means that each character is read twice by fgetc and
pushed back onto the stream once by ungetc (i.e. read, pushed back, and
then read again).
The total time spent in each function (excluding time spent in called
subroutines) and the number of calls are listed below for the most
significant functions.
filebuf::uflow 12% 55.4M
std::getline 10% 446K
filebuf::pbackfail 8% 27.7M
streambuf::snextc 7% 27.3M
std::fgetc 7% 55.4M
filebuf::underflow 6% 27.7M
char_traits::eq_int_type 5% 138.1M
string::max_size 4% 54.5M
std::ungetc 4% 27.7M
char_traits::eof 4% 110.4M
string::max_size 3% 54.5M
---
70%
Interesting things to note:
each character is read, pushed back, and then read again
eof is checked four times for each character
each character is compared to eof four times and the delimiter once
string max_size is called twice per character in getline and in
string::operator+=
Now for the fgets-based version. Its total execution time is only 1.4
seconds!
This seems to match just about perfectly with calling std::fgetc 27.7M
times. The getline version spent 7% of 31.7 seconds, or 2.2 seconds in
55.4M calls to fgetc. This was to read each character twice. To read
each character only once should therefore take about 1.1 seconds.
Note, the getline timings were done with a debug build (so the profiler
could get symbol information); a release build only takes about 40%
of the time, or 13 seconds. This is probably principally due to
expanding the trivial calls like eof, max_size, and eq_int_type into
inline code.
Fundamentally, getline adds a lot more overhead to its calls to fgetc
than fgets does, since both programs ultimately end up calling it to
read the file data.
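For what it's worth, this kind of call traffic can also be observed
without a profiler by wrapping a streambuf and counting the virtual
calls (a minimal sketch using a string buffer as the source, not the
profiled Borland filebuf):

#include <iostream>
#include <sstream>
#include <string>

// Wraps another streambuf and counts the virtual calls that
// std::getline() ends up making. No get area is set up, so every
// sgetc()/sbumpc() lands in underflow()/uflow().
class counting_buf : public std::streambuf
{
public:
    explicit counting_buf(std::streambuf* src)
        : underflows(0), uflows(0), pbackfails(0), src_(src) {}
    long underflows, uflows, pbackfails;
protected:
    virtual int_type underflow() { ++underflows; return src_->sgetc(); }
    virtual int_type uflow()     { ++uflows;     return src_->sbumpc(); }
    virtual int_type pbackfail(int_type c)
    {
        ++pbackfails;
        return (c == traits_type::eof())
            ? src_->sungetc()
            : src_->sputbackc(traits_type::to_char_type(c));
    }
private:
    std::streambuf* src_;
};

int main()
{
    std::istringstream src("one\ntwo\nthree\n");
    counting_buf cb(src.rdbuf());
    std::istream in(&cb);
    std::string line;
    while (std::getline(in, line)) {
    }
    std::cout << cb.underflows << " underflow / " << cb.uflows
              << " uflow / " << cb.pbackfails << " pbackfail calls for "
              << src.str().size() << " characters\n";
    return 0;
}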
Dennis Cote
Thomas suggested some possibilities with setting the
buffer for the file stream, but they had no effect, so it
seemed like something in getline(). I didn't have time
to delve deeper, though I was curious.
What's a bit sad is that the comparison between Turbo C++
and MSVC7.1 is so different, given that they both use Dinkumware
and Borland's product is a lot newer. They need to spend
some time on their compiler.
>What's a bit sad is that the comparison between Turbo C++
>and MSVC7.1 is so different, given that they both use Dinkumware
>and Borland's product is a lot newer. They need to spend
>some time on their compiler.
They need to spend a LOT of time on their compiler if they want it to be
at all competitive. As I said before, even _if_ they achieve good
standard conformance with the next release, their compiler has to work
hard to optimize the resulting code and get rid of the huge overhead
introduced by some modern libs such as some Boost parts or
Alexandrescu's code. bcc32 is still in the stone age in this regard, as
this thread clearly demonstrated.
--
Vladimir Ulchenko aka vavan