Actually, I meant something slightly different. The shame is that the
vector class is not factored into an exact-size array (let's call it
"block") and the rest of the vector functionality.
vector has the design flaw that it is too many things: it owns several
contiguous objects in memory, but it also embodies a growth strategy.
The block is useful independently of vector if:
a) a custom growth strategy is needed; or
b) no growth is needed at all (a very common case in my practice).
For small vectors, replacing them with blocks could provide significant
space savings, and those savings are not limited to dropping the
capacity field, even when no compiler trickery is used. For example,
block<size_t> could be specialized so that the underlying storage layout
is implemented like this:
struct { const size_t N; size_t data[]; }; // or like this
struct { const size_t *end; size_t data[]; };
This way, two words of memory are saved (the capacity and the begin
pointer), and one indirection is also saved when accessing elements of a
block passed by reference or pointer, so this brings both performance
and space savings.
Then, as I said, for the purpose of implementing the block (again, maybe
only some of its specializations), using compiler trickery for computing
N *could* be justified. *could* is the key word here. It *could* also be
useless or even detrimental performance-wise. It is feasible that
operator new[]
would actually take more memory from the OS for N elements than the
(non-array) allocation function
void* operator new(std::size_t size);                        // or
void* operator new(std::size_t size, std::align_val_t align);
would for the same N elements. In that case, nothing is gained by using
operator new[] and compiler trickery over the single-object form of the
allocation function "operator new" and a system-independent
implementation.
-Pavel