Subject: Re: Code density and performance?
X-Disclaimer: This message contains only personal opinions
References: <email@example.com> <3kpk31Fukb5lU1@individual.net>
From: pg...@0506.exp.sabi.co.UK (Peter Grandi)
Organization: Home's where's my rucksack
Content-Type: text/plain; charset=US-ASCII
Date: Sat, 06 Aug 2005 17:38:16 +0100
User-Agent: Gnus/5.1007 (Gnus v5.10.7) XEmacs/21.4.17 (Jumbo Shrimp, linux)
X-Trace: 1123346297 shaftesbury.zen.co.uk 31917 22.214.171.124:52339
>>> On Sat, 06 Aug 2005 04:40:44 -0400, Bill Todd
>>> <billt...@metrocast.net> said:
[ ... ]
billtodd> Well, one obvious way is to cluster fetches such that multiple
billtodd> (in this case, say, 2) pages of the smaller size are fetched
billtodd> at once (virtually for free as long as the additional transfer
billtodd> time is negligible compared with the disk head-positioning
billtodd> overhead) but the ones in the group that turn out to be
billtodd> useless are then soon discarded to make room for new fetches.
billtodd> [ ... ] The constantly increasing amount of data which one can
billtodd> efficiently fetch in a single random disk access is useful
billtodd> primarily if it has been arranged such that much of it is
billtodd> likely to be of interest if any of it is needed,
But the problem is that very very few people bother doing this, in part
out of lack of knowledge, in part because it does require some more
effort (or a lot more effort to retrofit). It is the ''dusty deck''
issue all over again.
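For what it's worth, the clustering argument quoted above is easy to put
in numbers; this is only a back-of-the-envelope sketch, and the seek and
transfer figures are assumptions picked for illustration, not measured:

```python
# Sketch: clustered fetches amortize the seek cost over adjacent pages.
# SEEK_MS and TRANSFER_MS are illustrative assumptions, not measurements.
SEEK_MS = 8.0        # head-positioning overhead per random access
TRANSFER_MS = 0.1    # transfer time per 4KiB page

def fetch_cost(pages_wanted, cluster=1):
    """Total time (ms) to fetch 'pages_wanted' randomly placed pages,
    reading 'cluster' adjacent pages per seek (extras may be useless
    and get discarded later)."""
    seeks = -(-pages_wanted // cluster)   # ceiling division
    return seeks * (SEEK_MS + cluster * TRANSFER_MS)

# One page per seek vs. two per seek: the transfer time is negligible
# next to the seek, so the second page is ''virtually free'' -- from
# the disk's point of view only.
print(fetch_cost(100, cluster=1))
print(fetch_cost(100, cluster=2))
```

Which of course only shows the disk-side half of the story; what it does
not show is where the dragged-along pages end up, which is the point
below.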
billtodd> or is managed such that portions which turn out not to be
billtodd> needed can easily be discarded.
The problem with this kind of demented argument is that unless ''memory
is not a bottleneck'' (and it certainly is one on my PC), any
dragged-along data that is then ''easily discarded'' first displaces
data that the replacement policy had judged valuable enough to keep in
memory.
Also, if ''memory is not a bottleneck'', then one should load everything
on startup and avoid the troubles of running a paging system.
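The displacement effect is trivial to demonstrate with a toy fixed-size
LRU cache (the sizes and the access pattern here are made up purely for
illustration): the dragged-along cold pages evict exactly the pages the
replacement policy had kept because they were hot.

```python
from collections import OrderedDict

class LRUCache:
    """Toy page cache with LRU replacement."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()

    def touch(self, page):
        """Reference a page; return True on hit, False on fault."""
        if page in self.pages:
            self.pages.move_to_end(page)
            return True
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)  # evict least-recently-used
        self.pages[page] = True
        return False

cache = LRUCache(4)
for p in ("A", "B", "C", "D"):   # hot working set, fits exactly
    cache.touch(p)
# A clustered fetch drags in two cold pages, evicting hot A and B...
cache.touch("X"); cache.touch("Y")
# ...so the next references to the hot set fault all over again.
print(cache.touch("A"), cache.touch("B"))
```

The two ''free'' pages X and Y were not free at all: they cost two
faults on A and B that would otherwise not have happened.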
billtodd> Fetching or writing more than is useful just because "it's
billtodd> free" from the disk-utilization viewpoint ignores the very
billtodd> real costs in other areas like memory and bus utilization.
In particular in the area of undermining the replacement policy...
The cost of bringing in low-use data is that the odds are it will
displace higher-use data, and given the sharp knee in every working-set
size versus page-fault rate graph, that is going to cost a lot
immediately thereafter.
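That knee shows up even in a crude simulation (the skewed reference
string, the hot-set size of about 10 pages, and the frame counts below
are all assumptions for illustration): once the resident set is squeezed
below the working set, the fault rate does not degrade gracefully, it
jumps.

```python
import random
from collections import OrderedDict

random.seed(1)
PAGES = 50
# Skewed reference string: 90% of touches go to a small hot set of
# 10 pages, so the program's working set is roughly 10 pages.
refs = [random.randrange(10) if random.random() < 0.9
        else random.randrange(PAGES) for _ in range(20000)]

def fault_rate(frames):
    """Fraction of references that fault under LRU with 'frames' frames."""
    cache, faults = OrderedDict(), 0
    for p in refs:
        if p in cache:
            cache.move_to_end(p)
        else:
            faults += 1
            if len(cache) >= frames:
                cache.popitem(last=False)  # evict least-recently-used
            cache[p] = True
    return faults / len(refs)

for frames in (4, 8, 12, 16):
    print(frames, round(fault_rate(frames), 3))
# The fault rate falls off a cliff once the frame count covers the hot
# set; past the knee, adding frames buys almost nothing, and below it,
# every frame stolen by dragged-along data costs dearly.
```

Which is precisely why undermining the replacement policy with
speculative bulk fetches is so expensive: the pages it steals are on the
wrong side of the knee.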