Not possible for two identifiers defined by one definition. But that's
as beside the point as the statement you're trying to obscure. I'm
going to recapitulate this very briefly: 'Fragmentation'
is a phenomenon which happens when long-lived objects and short-lived
objects which happen to be allocated at a time close to each other are
placed in the same memory area, because the long-lived (or
longer-lived) objects prevent this memory area from being reused for
allocation requests of a different size even though the area itself
could satisfy them (please do me a favour and stop playing the idiot
here; there's ample room for intentional misunderstanding in any
non-trivial text and we all know that). Consequently, allocating
objects whose lifetimes differ wildly in the same chunk of memory
should be avoided. The problem someone implementing a general-purpose
allocator faces here is making an educated guess at the lifetime of an
object based on the available information, and that is really only the
size of the object, because 'at the same time' is not a well-defined
concept in a sequential process. Assuming the heap isn't
already fragmented, objects of identical sizes will end up being 'time
clustered' on their own. Consequently, allocators segregating objects
based on their sizes effectively employ this two-part strategy and
shouldn't exhibit significant external fragmentation if this strategy
worked. But
they do[*].
[*] I happen to know this from first-hand experience because I
once had the displeasure to tend an instance of the Avira
AntiVir daemon running on a 64M UTM device. This was a
constant battle with the program's ever-increasing
memory requirements and one of the things I did to deal with
that was to modify the malloc implementation in the C library
we were using to satisfy all allocation requests by handing
out blocks whose size was a multiple of 32. This instantly
reduced the RAM consumption of the process by about 1/3,
which suggests that segregating objects by size is decidedly
not a good idea, at least not for small objects.
Now, why not take a step backward and assume that the problem is not
'damned to have to make malloc work' (which cannot use anything but
the object size for making placement decisions) but 'devise a
generally useful memory management scheme', maybe even adding another
restriction, namely 'a scheme useful for typed objects of varying,
fixed sizes', purposely setting 'C strings of arbitrary sizes' aside
for the moment. Assuming that the general idea that objects of
identical types will tend to have a similar life, IOW, will be
allocated and freed in batches, is sound, why not try to segregate
objects based on their actual type instead of on the size this type
happens to have? It should also be noted that the size of a type can
(and often will) change during the development and maintenance of a
piece of software, while the ways objects of this type are used,
especially where they are allocated and where they're freed again,
won't change just because some fields were added or removed. As it
turns out, that is an experiment Jeff Bonwick already conducted for
the SunOS kernel in 1996, with success. The basic strategy he devised
is meanwhile used at least in SunOS,
FreeBSD and Linux (and my guess would be that other animals living in
the BSD zoo use it as well). And the kernel is a much less forgiving
environment for a memory allocator than userspace.
The same strategy can also be used for userspace applications (but
not, dammit, by nailing the amyelencephalus named malloc on top of it;
maybe that will finally make him come alive!). Another program where I
used it was an HTTP interception proxy used
for content-filtering on the same appliance I already mentioned. And
although this program led a much more 'dynamic' life than the virus
scanner, being used concurrently by multiple people (five to fifteen
in the deployments I could observe) for 'random web surfing', its
absolute memory usage remained frugal over extended
periods of time (measured in months).