BTW, I like your URL (tom.com) :o)
Tom
"fiveight" <five...@tom.com> wrote in message
news:E77C7784-951D-4930...@microsoft.com...
Hi,
maybe the std::vector is better designed than CArray.
You might consider using it.
std::vector is easy to use (consider that I'm not in the set of C++
template-gurus :) so, if I'm telling you that std::vector is easy, it
really is :)
with std::vector you have two concepts:
- capacity
- size
capacity >= size
The array (or vector) is very fast at accessing elements (array
element access by integer index is O(1); this is not true for linked
lists).
The problem with an array is when you have to reallocate, so we want
to avoid reallocation.
How can we do that?
We could simply request more space than is actually required; we can
make a kind of "guess". This is a problem-specific thing: e.g.
suppose that you want to have an array of integers, and likely in your
specific application you will not have more than 300 integers, so you
could create a std::vector with capacity = 300.
This means that you can add 300 items to the array *before* a
reallocation occurs.
You write:
typedef std::vector< int > Integers;
Integers integers;
// The "reserve" method sets the capacity of vector
integers.reserve( 300 );
Now, the array is empty, i.e. it has no elements in it.
integers.size() is 0, even if its capacity is 300.
This means that you can do *very fast* insertions into the array for
the first 300 items, because memory will not be reallocated!
But if you insert more items than the vector's capacity, a
reallocation will occur.
So, in your app, at startup, you may reserve some capacity for the
array, and then forget about it and just use vector.push_back() to
add items, and vector.size() to get the element count.
And, if you push back more items than capacity, the vector will do a
reallocation for the new items (but you will not notice that, it's
automatically done by vector; you just do push_back()).
To me, it seems that MFC CArray does not have this capacity vs. size
distinction (at least, I don't see a public method like SetCapacity).
MrAsm
>FWIW, I almost
>never call this function unless I know in advance how many elements I am
>adding at the time which is almost never and I've never noticed any
>performance hits.
Hi Tom,
I think that this kind of optimization is not very effective on modern
fast CPUs and for moderately sized arrays.
I have also tested that, on arrays of a few hundred elements, a linear
search (O(n)) and a binary search (O(log n)) show no measurable
difference.
Maybe this kind of thing was useful on much older machines, or for
*very* big (huge) data storage.
I like that very much from our Joe:
"Optimization: Your Worst Enemy"
:)
http://www.flounder.com/optimization.htm
MrAsm
On the contrary, I find STL collections to be very non-intuitive as compared
to the MFC ones.
> with std::vector you have two concepts:
>
> - capacity
> - size
>
> capacity >= size
>
The capacity is set in CArray::SetSize(). But CArray::GetSize(), and the
newly added method CArray::GetCount(), report the actual number of elements
added, not the value you set.
In addition, CArray::SetSize() has a second nGrowBy parameter which lets you
specify how many elements are added in case you go beyond the initial
number reserved by the first parameter. I don't see anything like that in
STL.
-- David
>"MrAsm" <mr...@usa.com> wrote in message
>news:dr0i135rc2hpk0kvo...@4ax.com...
>> maybe the std::vector is better designed than CArray.
>> You might consider using it.
>>
>> std::vector is easy to use (consider that I'm not in the set of C++
>> template-gurus :) so, if I'm telling you that std::vector is easy, it
>> really is :)
>>
>
>On the contrary, I find STL collections to be very non-intuitive as compared
>to the MFC ones.
One of the things I like about STL collections is that there is no
need for helper functions like CopyElements, ConstructElements, etc.
So e.g. if a class has a constructor, this is called without the need
to define a ConstructElements helper.
Moreover, it seems that it is simpler to build "compound" data
structures (like an array whose items are maps) with STL collections
than with MFC ones.
Could you confirm this point?
However, I just use basic STL collections. There are several advanced
things and collections in STL and Boost; but I don't use them (I'm not
at the C++ knowledge level [i.e. kind of templates gurus] to be able
to understand Boost).
>The capacity is set in CArray::SetSize(). But CArray::GetSize(), and the
>newly added method CArray::GetCount(), report the actual number of elements
>added, not the value you set.
hmm...
STL:
std::vector<int> integers;
integers.reserve(100);
// capacity = 100; size = 0
integers.push_back(3);
// capacity = 100; integers.size() = 1
I can do push_back's and I will not force a reallocation until size >=
capacity.
I think that there is no equivalent of that in MFC CArray... am I
wrong?
I thought that CArray::SetSize is the same as std::vector.resize()
('resize' is different from 'reserve'; 'reserve' acts on capacity,
'resize' acts on size), and not as std::vector.reserve().
Please correct me if I'm wrong.
Thanks,
MrAsm
Never had to do this.
> Moreover, it seems that it is simpler to build "compound" data
> structures (like an array whose items are maps) with STL collections
> than with MFC ones.
>
Never had to do this either.
> STL:
>
> std::vector<int> integers;
>
> integers.reserve(100);
> // capacity = 100; size = 0
>
> integers.push_back(3);
> // capacity = 100; integers.size() = 1
>
> I can do push_back's and I will not force a reallocation until size >=
> capacity.
>
> I think that there is no equivalent of that in MFC CArray... am I
> wrong?
>
> I thought that CArray::SetSize is the same as std::vector.resize()
> ('resize' is different from 'reserve'; 'reserve' acts on capacity,
> 'resize' acts on size), and not as std::vector.reserve().
>
> Please correct me if I'm wrong.
>
I think you are wrong.
-- David
>> I think that there is no equivalent of that in MFC CArray... am I
>> wrong?
>>
>> I thought that CArray::SetSize is the same as std::vector.resize()
>> ('resize' is different from 'reserve'; 'reserve' acts on capacity,
>> 'resize' acts on size), and not as std::vector.reserve().
>>
>> Please correct me if I'm wrong.
>>
>
>I think you are wrong.
So, what is the right thing?
How can I set capacity in CArray?
I see no CArray.SetCapacity nor CArray.Reserve method (and if I do
CArray.SetSize, I'm changing the size of the array, not its capacity).
MrAsm
--
Ajay Kalra [MVP - VC++]
ajay...@yahoo.com
"David Ching" <d...@remove-this.dcsoft.com> wrote in message
news:tSbSh.6716$Kd3....@newssvr27.news.prodigy.net...
I may be wrong, but SetSize() does set the "capacity". In my KillSpam
program I set the initial size of messages to be 17,000 (I have a lot of
spam). I don't think that means I allocate 17,000 elements. I'm pretty
sure when I call GetSize() it returns 0 because no elements have been added
yet. Yet this makes it fast to insert the first 17,000 elements.
Also, I set the GrowBy parameter to 500, so if it does have to resize the
array, it sets the new capacity to 17,500 and not 17,001. This has further
performance increase.
-- David
Note that the problem you are describing does not go away if you don't use CArray;
std::vector and raw allocation have exactly the same problem with exactly the same
performance issues.
Do you have ANY estimate of the size? Is it < 1024? > 1,000,000? I posted an article on
this newsgroup months ago comparing the performance of CArray in a couple scenarios
dealing with preallocation.
joe
On Sun, 8 Apr 2007 22:28:18 +0800, "fiveight" <five...@tom.com> wrote:
Joseph M. Newcomer [MVP]
email: newc...@flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
The need is still there. STL is hopelessly non-intuitive, IMHO! It's
interesting that the .NET collections are similar to the MFC ones and not STL.
Now they are talking about creating STL.NET to bring this monstrosity into
the world of .NET. :-(
-- David
GetSize = GetCount.
Both return the number of array elements.
Somebody posted a message earlier about removing one of these methods.
I would have assumed the same thing you are thinking, but this does not
seem to be the case.
Perhaps if there is some kind of memory issue, one will be different from
the other?
The reason I believe this is the way it is, is because you may do an
InsertAt() or RemoveAt().
So you have X number of elements in the array, but they might not be in
order.
So, you have 17000 elements with nothing in them.
It is equivalent to:

char* array = new char[17000];
array[10] = 'a';
array[37] = 'b';
array[987] = 'c';

GetCount(), the way we think of it, would return 3.
Then, we would assume they would be at elements 0, 1 and 2.
I think they let the programmer take care of where they added objects, and
track how many objects are inserted and deleted from the array.
See the SetAtGrow method.
http://msdn2.microsoft.com/en-us/library/78sf1kak(VS.80).aspx
Because indexes are zero-based, the size is 1 greater than the largest
index.
Calling this method will generate the same result as the CArray::GetCount
method.
http://msdn2.microsoft.com/en-us/library/t44ec40h(VS.80).aspx
P.S. GetUpperBound = GetSize -1.
When you open a file, write stuff, then save it,
it will find a space on the hard drive big enough to save your work
unfragmented.
If you open the file again and add stuff, it "might not" be able to
save the file unfragmented, because another program "might"
be using the space directly after your unfragmented file.
So, it is best to not make frequent saves, because the file could become
fragmented.
Though we all save files frequently, because fragmentation is negligible
compared to the effort and time saved.
(That is why we run defrag every once in a while.)
There are memory defraggers that defragment memory the same way defrag
works for a hard drive.
The way you are thinking, the memory is all in order, the same way file
access appears to be; but in fact, the underlying mechanisms change the
access to account for fragmentation issues.
HTH,
My test:
CArray<int> IntArray;
_tcprintf(_T("%d, %d, %d\n"), IntArray.GetCount(), IntArray.GetSize(),
IntArray.GetUpperBound());
IntArray.SetSize(0, 16);
_tcprintf(_T("%d, %d, %d\n"), IntArray.GetCount(), IntArray.GetSize(),
IntArray.GetUpperBound());
IntArray.Add(100);
_tcprintf(_T("%d, %d, %d\n"), IntArray.GetCount(), IntArray.GetSize(),
IntArray.GetUpperBound());
IntArray.SetSize(8, 16);
_tcprintf(_T("%d, %d, %d\n"), IntArray.GetCount(), IntArray.GetSize(),
IntArray.GetUpperBound());
_tcprintf(_T("%d\n"), IntArray[0]);
IntArray.Add(1000);
_tcprintf(_T("%d, %d, %d\n"), IntArray.GetCount(), IntArray.GetSize(),
IntArray.GetUpperBound());
_tcprintf(_T("%d\n"), IntArray[8]);
Output:
0, 0, -1
0, 0, -1
1, 1, 0
8, 8, 7
100
9, 9, 8
1000
Tom
"David Ching" <d...@remove-this.dcsoft.com> wrote in message
news:%YkSh.2436$zC....@newssvr22.news.prodigy.net...
Tom
"Joseph M. Newcomer" <newc...@flounder.com> wrote in message
news:rrhj1314f1jo4ke4i...@4ax.com...
> Do you have any numbers on how much better std::vector claims to
> perform? I keep hearing that it is incredibly better, but I don't know
> how to qualify or quantify that claim.
Tom:
I don't think it's a matter of performance. Things I dislike about the
MFC collection classes are
1. Confusing double template argument
2. Confusing helper functions
3. Confusing duplication in operator [], GetAt() and ElementAt()
4. No copy constructor or assignment operator
5. Non-portable
For me the STL collection classes just seems so much better designed.
--
David Wilkinson
Visual C++ MVP
Numbers are significantly better. I can't imagine a high performance
subsystem using MFC collection classes at all. Besides, the allocator
issue alone kills MFC collection classes: in a high performance
system, you would want a custom allocator. I use MFC collection classes,
but mainly for UI stuff and nothing more. I have also found MFC collection
classes easy to use, but the performance hit precludes using them for
anything substantial.
---
Ajay
>"MrAsm" <mr...@usa.com> wrote in message
>news:e9ti13ldctqk00bue...@4ax.com...
>> So, what is the right thing?
>>
>> How can I set capacity in CArray?
>> I see no CArray.SetCapacity nor CArray.Reserve method (and if I do
>> CArray.SetSize, I'm changing the size of the array, not its capacity).
>>
>
>I may be wrong, but SetSize() does set the "capacity". In my KillSpam
>program I set the initial size of messages to be 17,000 (I have a lot of
>spam). I don't think that means I allocate 17,000 elements. I'm pretty
>sure when I call GetSize() it returns 0 because no elements have been added
>yet. Yet this makes it fast to insert the first 17,000 elements.
I think you must be mistaken. Not even an MFC class could be so unintuitive
as to define a member SetSize that sets the capacity and a member GetSize
that returns the size.
>Also, I set the GrowBy parameter to 500, so if it does have to resize the
>array, it sets the new capacity to 17,500 and not 17,001. This has further
>performance increase.
It's still a linear growth policy and thus insignificant in the long run,
becoming ultimately glacial at some point depending on machine speed and
whether or not the underlying allocator can extend in place. The solution
is to use an exponential growth policy, such as std::vector has been
required to use all along. See the subthread beginning here for more on
this:
--
Doug Harrison
Visual C++ MVP
>Whatever the issues of comparing std::vector vs. CArray, the performance issues remain
>largely the same.
Yeah, but std::vector deals with them. :) The std::vector class provides
amortized constant time insertions at the end of the vector. The
std::vector class implements an exponential growth policy, while CArray
implements a linear growth policy. Unless you do a lot of hand holding and
good guessing for CArray, std::vector will give you far better performance
when growing an array from small to large size by appending. Even seemingly
large "grow-by" parameters such as 4096 aren't particularly helpful, as I
explained here:
MS even documents this in the current MSDN:
CArray Class
http://msdn2.microsoft.com/en-us/library/4h2f09ct(VS.80).aspx
<q>
Before using an array, use SetSize to establish its size and allocate
memory for it. If you do not use SetSize, adding elements to your array
causes it to be frequently reallocated and copied. Frequent reallocation
and copying are inefficient and can fragment memory.
</q>
While CArray::SetSize is equivalent to vector::resize, there doesn't seem
to be a CArray member corresponding to vector::reserve, so this is at best
a partial solution. At least this was fixed in CString, which now has a
Preallocate member function.
Thanks. Those are more compelling reasons than the typical "use STL it's
better" we usually get here. I appreciate your insight.
Tom
"David Wilkinson" <no-r...@effisols.com> wrote in message
news:uctxC3re...@TK2MSFTNGP06.phx.gbl...
You're right, as the fellow who actually did the test and printed out the
results confirmed. (I was confusing CArray with CHashTable, which also has
an init size method.) Well then, this is quite confusing. If SetSize(100)
is called to allocate the first 100 elements so they will be stored quickly,
then GetSize() returns 100. But then how to iterate the array to visit all
items? The doc gives the example of using GetSize()!
CTypedPtrArray<CObArray, CPerson*> myArray;
for( int i = 0; i < myArray.GetSize();i++ )
{
CPerson* thePerson = myArray.GetAt( i );
...
}
So even if zero items are added, this loop still executes 100 times! I
wonder what GetAt() returns for these non-existent items? I guess I should
find out, but I've not used SetSize() before ...
> It's still a linear growth policy and thus insignificant in the long run,
> becoming ultimately glacial at some point depending on machine speed and
> whether or not the underlying allocator can extend in place. The solution
> is to use an exponential growth policy, such as std::vector has been
> required to use all along. See the subthread beginning here for more on
> this:
>
... and perhaps the reason is that it's not necessary in later versions
of MFC. The doc says, about nGrowBy parameter: "If the default value is
used, MFC allocates memory in a way calculated to avoid memory fragmentation
and optimize efficiency for most cases."
So maybe MFC also implements exponential growth policy in the default case?
Again, I really should find out but am too busy at the moment.
-- David
>While CArray::SetSize is equivalent to vector::resize, there doesn't seem
>to be a CArray member corresponding to vector::reserve, so this is at best
>a partial solution.
A clear point here.
MrAsm
Something just dawned on me.
Speaking of the pure virtual methods and such.
It should be possible to Override methods of CArray?
>>But then how to iterate the array to visit all items?
Override the GetAt() or GetCount() to get all the items.
That is why GetCount and GetSize are the same.
Now, it makes sense as to why the 2 are the same.
GetCount is meant to be overloaded.
It could be a good way to actually use the pure virtual method?

Object* myArray::GetAt(int nElement)
{
    int nCorrectPlace;
    // Insert Super Intelligent Algorithm Here.
    nCorrectPlace = nElement + 1;
    return CArray::GetAt(nCorrectPlace);
}

int myArray::GetCount()
{
    return m_nCount;
    // CArray::GetCount();
}

void myArray::Add(Object* Obj)
{
    m_nCount++;
    CArray::Add(Obj);
}

Or imagine something like:

char Array[126];
InsertAlphabetAt100();
CreateWordAt0();
GetCount() { return GetLengthOfWord(); }
GetSize and GetCount would differ in this sense.
That must be the rationale or logic they are using?
That would make sense.
I think the rationale for not making it pure virtual is that
the above pseudocode is rarely needed, and the class is
usually used in the more standard way?
Hence, GetCount = GetSize.
Does that make sense?
>You're right, as the fellow who actually did the test and printed out the
>results confirmed. (I was confusing CArray with CHashTable, which also has
>an init size method.) Well then, this is quite confusing. If SetSize(100)
>is called to allocate the first 100 elements so they will be stored quickly,
>then GetSize() returns 100. But then how to iterate the array to visit all
>items? The doc gives the example of using GetSize()!
>
> CTypedPtrArray<CObArray, CPerson*> myArray;
> for( int i = 0; i < myArray.GetSize();i++ )
> {
> CPerson* thePerson = myArray.GetAt( i );
> ...
> }
>
>So even if zero items are added, this loop still executes 100 times! I
>wonder what GetAt() returns for these non-existent items? I guess I should
>find out, but I've not used SetSize() before ...
The obvious thing for CArray to do is default-construct the new items when
SetSize is called. I think that's what the deprecated helper
ConstructElements does nowadays; oh wait, it seems to have disappeared
entirely in VC2005. Better take those deprecated warnings seriously, I
guess. :)
(Note that vector::resize allows you to pass in an object to be copied to
the new items, so your stored type needn't be default-constructible. That's
the sort of thing MFC tends to ignore, because people just weren't thinking
in those terms back in the day.)
>Something just dawned on me.
>Speaking of the pure virtual methods and such.
>
>It should be possible to Override methods of CArray?
None of its member functions are virtual, except some of the ones it got
from CObject, which don't help. You cannot override non-virtual member
functions.
>>>But then how to iterate the array to visit all items?
>
>Override the GetAt() or GetCount() to get all the items.
>That is why GetCount and GetSize are the same.
>Now, it makes sense as to why the 2 are the same.
>GetCount is meant to be overloaded.
I don't see that. Technically, you can't overload it, because the only way
is to modify the CArray template, which you aren't gonna do. If I were to
guess, I'd say both functions exist for historical reasons. Maybe GetCount
came first, and then they added SetSize and felt there needed to be a
corresponding GetSize, but couldn't remove GetCount since it was already in
use. These things happen.
>... and perhaps the reason is that it's not necessary in later versions
>of MFC. The doc says, about nGrowBy parameter: "If the default value is
>used, MFC allocates memory in a way calculated to avoid memory fragmentation
>and optimize efficiency for most cases."
>
>So maybe MFC also implements exponential growth policy in the default case?
>Again, I really should find out but am too busy at the moment.
My advice is not to believe everything you read. More to the point, don't
interpret nebulous statements through a filter of unbridled optimism. :) It
takes about as much time to test this as it does to speculate about it. I
wrote the example below in < 10 minutes (including looking at the MFC
source code, but examples are more dramatic, so...):
#include <afx.h>
#include <afxtempl.h>
#include <vector>
#include <stdio.h>
#include <time.h>
const int N = 2*1000*1000;
const int G = 4096;
int main()
{
{
std::vector<int> v;
printf("std::vector size = %d\n", int(v.size()));
clock_t t0 = clock();
for (int i = 0; i < N; ++i)
v.push_back(1);
clock_t t1 = clock();
printf(
"It took %.2f sec to append %d items to a "
"std::vector in the default case.\n",
double(t1-t0)/CLOCKS_PER_SEC,
int(v.size()));
}
{
CArray<int, int> a;
printf("CArray size = %d\n", int(a.GetSize()));
clock_t t0 = clock();
for (int i = 0; i < N; ++i)
a.Add(1);
clock_t t1 = clock();
printf(
"It took %.2f sec to append %d items to a "
"CArray in the default case.\n",
double(t1-t0)/CLOCKS_PER_SEC,
int(a.GetSize()));
}
{
CArray<int, int> a;
a.SetSize(0, G);
printf("CArray size = %d\n", int(a.GetSize()));
clock_t t0 = clock();
for (int i = 0; i < N; ++i)
a.Add(1);
clock_t t1 = clock();
printf(
"It took %.2f sec to append %d items to a "
"CArray with nGrowBy = %d.\n",
double(t1-t0)/CLOCKS_PER_SEC,
int(a.GetSize()), G);
}
}
The output I get is:
X>cl -EHsc -O2 -W4 a.cpp
X>a
std::vector size = 0
It took 0.01 sec to append 2000000 items to a std::vector in the default
case.
CArray size = 0
It took 8.64 sec to append 2000000 items to a CArray in the default case.
CArray size = 0
It took 2.20 sec to append 2000000 items to a CArray with nGrowBy = 4096.
Pretty striking, I'd say. Note that growing by 4096 elements at a time is
still roughly 200x slower than the exponential policy of std::vector. To
get the full benefit of the example, you need to play around with the test
parameters N and G. This would be a good example to illustrate basic
computational complexity principles, as proper use of it covers relative
growth rates and also the fact that none of it matters for favorable
combinations of data and machine speed. It's the last point that people
often forget, resulting in programs that run like molasses when used with
real world data sets.
Nice bit of work there. :-)
For my apps, all of this is pretty theoretical. I would take your advice
and play with N and G... I would probably set G to 2,000,000 and enjoy
(possibly) equivalent performance with CArray. I guess the data sets I deal
with are so small, it really is programmer preference which collections to
use, as performance isn't the relevant issue. Programmer productivity is.
And I just hate STL. I mean, fingernails on chalkboard hate. It's like the
author went out of his way to make a bizarre syntax to wrap up his nice
algos.
Bottom line is I'm grateful to have a backup in case I need to speed up my
apps but I don't think it will happen any time soon.
Thanks,
David
Perhaps template methods can't be overloaded,
but with other implementations of the templates,
such as in CListBox, CListCtrl, etc.,
it is meant to be implemented or added in the hope
that there is some kind of congruence within the language.
CListBox has GetCount. Size = Count, so there was no
need to add GetSize. Understandable and congruent.
CListCtrl changes GetCount to GetItemCount.
Joe inadvertently made a mistake in his menu reply.
>>for(int i = 0; i < menu->GetCount(); i++)
It should be GetMenuItemCount()
It makes the language incongruent.
I get confused on what I say sometimes.
Start, Finish or Begin, End?
Start, End, or Begin, Finish?
IsOn()? GetOn()?
IsEnabled() GetEnabled()?
I do GetStateEnable() now.
I add State for all bool conditions.
It is too confusing otherwise.
GetMyObject()->Dang_What_Did_I_Call_It?
I changed it to implementation (containment), rather than inheritance,
since template methods can't be overloaded.
class CMyArray
{
private:
    CArray<Object, Object> m_Array;
    int m_nCount;
public:
    int GetSize();
    int GetCount();
};

int CMyArray::GetSize()
{
    return m_Array.GetSize();
}

int CMyArray::GetCount()
{
    return m_nCount;
    // return m_Array.GetCount();
}

// Added incongruence
int CMyArray::GetMyArrayItemCount()
{
}
OTOH, if you did a SetSize(0, 100), then if you add 2 elements, the GetSize is 2, but you
will have the same performance in doing Add up to 100.
Key here is that size is size. Growth capacity is a different concept (we used to call it
"slack").
The elements are not "non-existent". They exist, because you told them to exist by doing
the SetSize.
Note that if you do
CArray<t,t&>s;
s.SetSize(100);
s has 100 newly-constructed elements. They're there. This is an efficient way of doing
CArray<t,t&>s;
t tinstance;
for(int i = 0; i < 100; i++)
s.Add(tinstance);
so it should not be surprising that all the elements are there.
joe
On Mon, 9 Apr 2007 10:17:36 -0700, "David Ching" <d...@remove-this.dcsoft.com> wrote:
There are some serious performance problems with MFC collections in debug mode; the
measurements were something like 5,000 elements per second created in debug mode, and
12.5 million elements per second created in release mode with a growby of 500K elements in
a 2M element array.
joe
Tom
"Joseph M. Newcomer" <newc...@flounder.com> wrote in message
news:hkkl131ot99a465uq...@4ax.com...
That makes sense, thanks. Actually I store pointers in these collections
and not objects, so I guess NULL pointers were created, which is very
efficient.
-- David
>Yes, the issues are essentially the same; key here is that std::vector seems to have
>handled them differently.
Not "seems to"; HAS.
>The exponential growth policy is one of the crucial performance
>hacks.
It's the most obvious way to satisfy the std::vector REQUIREMENT that
insertions at the end take amortized constant time.
One of the big advantages of the STL over MFC is that the C++ Standard
specification of the STL is far more rigorous about defining behavior,
including computational complexity. To this day, MFC remains much more of a
"fill in the blank" library, and I regularly have to consult the source
code to augment the MFC documentation, sometimes to answer very simple
questions such as a class's ownership policy for an object it manages. This
is of course bad, and it requires some judgment to distinguish between info
that's likely to remain reliable in the future (e.g. a documentation
omission) and that which may change (e.g. unspecified behavior that isn't
strongly implied by what is documented).
>The exponential growth policy is one of the crucial performance
>hacks.
Also, the C# List<T> container uses the capacity concept and
exponential growth, doesn't it?
MrAsm
It's not only the size of the collection that matters. Ultimately you are
using the C run time, and frequently calling new/delete on objects is going
to have memory implications leading to performance degradation. For any
high performance app (like trading securities etc.), you would not even
consider MFC collections. STL is essentially the de facto standard in the
industry, even in MFC shops.
---
Ajay
In the case of the C++ standard library, the performance is part of the
specification (and the abstraction), in the form of big-O complexity
guarantees.
> they're right about performance being an "implementation detail" but it
> makes all the
> difference if you understand the performance issues. Therefore, I've
> always argued that
> performance specs are a critical part of the specification. Otherwise,
> you can't choose
> between qsort and bubblesort, since both have the identical abstraction.
> joe
std::sort, std::partial_sort and std::stable_sort along with std::map and
std::set have clearly specified complexity guarantees. Sadly, these
specifications rarely appear in MSDN documentation.
Jeff Flinn
I have a comparison of exponential growth vs. linear growth performance as well. The
program that generates all this will be out on my Web site someday.
joe
I think the MFC ones are a little bit friendlier, but the STL ones are
fine in practice and a bit more powerful and I've switched to them.
STL vectors in particular are very good, and perfectly intuitive with a
little practice. Lists aren't so great because they override pointer
syntax for the iterators, and so you have pointer ugliness which could
be avoided with MFC.
- Gerry Quinn