After spending a day in this configuration and getting horrible
performance (the lone IDE drive in the system was beating it, but only
slightly), I decided it was time to start from scratch.
Rebuilt as a pure RAID 0 array, performance went up considerably. But look
at this:
Version 1.01d ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
localhost 8G 69499 87 112164 60 18697 12 51132 98 56762 27 316.6 5
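Converting the block numbers above from bonnie's K/sec to MB/s makes the
roughly 2:1 write-over-read gap explicit (a quick sanity check, nothing more):

```python
# bonnie reports throughput in K/sec (1024-byte units).
block_write_k = 112164  # "Sequential Output, Block" from the run above
block_read_k = 56762    # "Sequential Input, Block" from the run above

write_mb = block_write_k / 1024
read_mb = block_read_k / 1024

print(f"block write: {write_mb:.1f} MB/s")   # ~109.5 MB/s
print(f"block read:  {read_mb:.1f} MB/s")    # ~55.4 MB/s
print(f"ratio: {write_mb / read_mb:.2f}x")   # ~1.98x
```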
Writes twice the speed of reads!? Did hell freeze over?
I'm curious to hear anyone's opinion on why write performance is besting
read performance. For reference, the RAID 0/5 performance was around
27-33MB/s across the board.
Even more appreciated would be pointers to where the problem lies. I may
end up going through the hassle of installing another OS and using it to
benchmark the RAID to eliminate FreeBSD from the list of
problems. Write-back is enabled. Soft-updates are enabled. The drives
are fast, 15K Seagate ones that individually start at a minimum of
47MB/sec. This thing should be close to pushing the 64/33 PCI bus it's
on.
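Before reinstalling another OS, a cruder cross-check is to time raw
sequential I/O outside of bonnie and the filesystem entirely. This is a
minimal sketch (the file path and sizes are arbitrary choices, and on the
real array you'd read the raw device, e.g. /dev/da0, instead of a scratch
file):

```python
# Rough sequential-throughput check: write then read a scratch file in
# large chunks and time each pass. Reading the raw device instead would
# also take soft-updates and the UFS code out of the picture.
import os
import time

PATH = "/tmp/seqtest.bin"   # scratch file; use the raw device on real runs
CHUNK = 1024 * 1024         # 1 MB per I/O
COUNT = 64                  # 64 MB total -- small, so cache effects dominate

buf = b"\0" * CHUNK

t0 = time.time()
with open(PATH, "wb") as f:
    for _ in range(COUNT):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())    # force the data out, not just into RAM
write_secs = time.time() - t0

t0 = time.time()
with open(PATH, "rb") as f:
    while f.read(CHUNK):
        pass
read_secs = time.time() - t0

os.remove(PATH)
print(f"write: {COUNT / write_secs:.1f} MB/s, read: {COUNT / read_secs:.1f} MB/s")
```

With a file this small the read pass mostly hits the buffer cache, so for
honest numbers the working set needs to be well past RAM size, as bonnie's
8G run already is.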
Anyone else have this card? Offer any pointers? Even a "Yea, you should
be doing better than that" would be helpful.
As a curiosity item, both the boot ROM utility and the dptmgr util report
the PCI bus as 528MB/s, i.e. 64-bit/66MHz. The AMD 760 chipset only does
64-bit/33MHz.
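Those two figures fall straight out of the PCI bandwidth formula, bus width
in bytes times clock rate:

```python
# Theoretical PCI bandwidth: (width in bits / 8) * clock in MHz = MB/s.
def pci_bandwidth_mb(width_bits: int, clock_mhz: int) -> int:
    return width_bits // 8 * clock_mhz

# What the card's utilities claim: 64-bit at 66 MHz.
print(pci_bandwidth_mb(64, 66))  # 528 MB/s

# What the AMD 760 chipset actually provides: 64-bit at 33 MHz.
print(pci_bandwidth_mb(64, 33))  # 264 MB/s
```

So even at 64/33 the bus has 264MB/s of headroom, well above what the
array is delivering.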
To Unsubscribe: send mail to majo...@FreeBSD.org
with "unsubscribe freebsd-scsi" in the body of the message