I've been exploring several scenarios and have found that in many cases
(especially when using large drives) the sheer number of clusters can cause
major performance issues.
For me it's not a big problem because I partition my drives with cluster sizes
appropriate to the content I put on them, but that's a nerd approach. Average
users put massive files, folders with huge numbers of small files, system
files, media, etc. all on one big partition.
Even on the latest quad-core and Core i7 systems I've noticed that CPU usage
can be really high when accessing large folders full of small files, such as
the Windows directory or game installs. I also did some tests with a ramdrive
and found that reading a single 512 MB file runs at around 3.5 GB/s, whereas
reading 512 MB worth of 64 KB files drops to less than 128 MB/s.
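For anyone who wants to reproduce this, here's a rough sketch of the kind of
test I ran. The script itself is just illustrative (not the exact tool I
used), and the TEST_DIR path is an assumption; point it at your own ramdrive
mount:

```python
import os
import time

TEST_DIR = "R:/fs_bench"        # assumed ramdrive mount point; change to suit
TOTAL = 512 * 1024 * 1024       # 512 MB total payload
SMALL = 64 * 1024               # 64 KB per small file

os.makedirs(TEST_DIR, exist_ok=True)

# Write one big file and many small files with the same total size.
big_path = os.path.join(TEST_DIR, "big.bin")
with open(big_path, "wb") as f:
    f.write(os.urandom(TOTAL))

chunk = os.urandom(SMALL)
small_paths = []
for i in range(TOTAL // SMALL):  # 8192 files
    p = os.path.join(TEST_DIR, f"small_{i:05d}.bin")
    with open(p, "wb") as f:
        f.write(chunk)
    small_paths.append(p)

def throughput(read_fn):
    """Time a read function and return its throughput in MB/s."""
    start = time.perf_counter()
    n = read_fn()
    elapsed = time.perf_counter() - start
    return n / elapsed / (1024 * 1024)

def read_big():
    with open(big_path, "rb") as f:
        return len(f.read())

def read_small():
    total = 0
    for p in small_paths:
        with open(p, "rb") as f:
            total += len(f.read())
    return total

print(f"one 512 MB file : {throughput(read_big):8.1f} MB/s")
print(f"8192 x 64 KB    : {throughput(read_small):8.1f} MB/s")
```

Working from my own numbers: 512 MB at under 128 MB/s is over 4 seconds for
8192 files, which comes to roughly half a millisecond of overhead per file
(open/close calls and filesystem metadata lookups), versus about 0.15 seconds
for the single big file at 3.5 GB/s. The gap is per-file overhead, not raw
copy speed.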
The reason I bring this up is that as SSD performance improves, I wonder
whether the CPU will become a bottleneck for the transfer and access of
real-world files.
Perhaps there's a need for a hardware solution, a filesystem processor? I've
heard that the new SandForce controllers used in OCZ drives do some hardware
compression and "interpretation" of filesystem data. Perhaps this concept
could be expanded so that the filesystem computation is done on that chip?