> I noticed that many buffer sizes tend to be powers of two.
> Does this really matter that much?
Obviously it depends on the application, the operating system and the
level (language, API) at which you are programming. If an algorithm
requires 256 bytes of storage then that's what you need. Operating
systems manage memory in pages whose size is a power of 2 (e.g. 4096
bytes), so if you're working at a low level or inside the kernel,
allocation routines may accept only multiples of the page size. At a
higher level it won't matter, as the memory management is done for
you.
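
For illustration, here's a minimal POSIX sketch (assuming sysconf and
posix_memalign are available on your platform) that queries the page
size at run time and rounds a request up to a whole number of pages:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        /* Page size is a run-time property; 4096 is common but not
           guaranteed. */
        long page = sysconf(_SC_PAGESIZE);
        if (page <= 0)
            return 1;

        /* Round a 16000-byte request up to whole pages. */
        size_t want = 16000;
        size_t rounded =
            (want + (size_t)page - 1) / (size_t)page * (size_t)page;

        /* posix_memalign gives page-aligned storage, which low-level
           code (drivers, mmap-style I/O) often requires. */
        void *buf;
        if (posix_memalign(&buf, (size_t)page, rounded) != 0)
            return 1;

        printf("page = %ld, requested = %zu, allocated = %zu\n",
               page, want, rounded);
        free(buf);
        return 0;
    }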
> Is a buffer that is 16384 bytes really any better than one that is 16000 bytes?
In a normal application it could actually waste memory. If you only
need 16000 bytes then that's what you should allocate. If you try to
second-guess what the OS really allocates internally, given the page
size, you are likely to end up using more than 16384 bytes (if that's
what you request), because the heap allocation routines add their own
overhead (headers, alignment padding, etc.), and that overhead can
push the allocation onto another page.
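
If you want to see that overhead, here's a small sketch using
malloc_usable_size, which is glibc-specific (declared in <malloc.h>,
non-portable); the exact figure it prints is an implementation detail,
not something to rely on:

    #include <stdio.h>
    #include <stdlib.h>
    #include <malloc.h>   /* malloc_usable_size: glibc-specific */

    int main(void)
    {
        /* Ask for a "round" power-of-two size. */
        void *p = malloc(16384);
        if (p == NULL)
            return 1;

        /* The usable size is often a little more than 16384, and the
           allocator's header sits in front of the block as well, so
           the real footprint spills past four 4096-byte pages. */
        printf("requested 16384, usable %zu\n", malloc_usable_size(p));

        free(p);
        return 0;
    }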