
Using a larger block size on large filesystems


Sheldon Hearn

Nov 23, 2001, 10:03:43 AM

Hi folks,

Background:

I recently got some disk space to play with and, following discussions
here about ffs block sizes, I used postmark [1] to test the
"MTA-visible" difference between an 8192/1024 vs a 16384/2048
filesystem.
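
For concreteness, the two layouts under test correspond to newfs invocations along these lines (the device name is illustrative):

```sh
# 8 KB blocks / 1 KB fragments (the current default)
newfs -b 8192 -f 1024 /dev/da0s1e
# 16 KB blocks / 2 KB fragments
newfs -b 16384 -f 2048 /dev/da0s1e
```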

I found that, for this large filesystem, the mixed MTA-like transaction
rate [2] on a 16384/2048 ffs [3] filesystem outperforms that on the
8192/1024 ffs filesystem by 67% (using 2-4 hour benchmarks that
consistently show exactly the same transaction rate when repeated).

Compared with the 1% improvement I score with noatime and the 38%
improvement scored by using a Mylex controller with 15Krpm drives
instead of a Compaq SmartArray (yuk! yuk!) controller with 10Krpm
drives, this is a valuable optimization!

The question:

I'm now looking at finding the minimum filesystem size for which
defaulting the block/frag ratio to 16384/2048 would make sense.

I know there was some discussion on -arch recently about teaching
sysinstall to make this decision. However, I'd like to suggest that
newfs do the thinking instead.

Obviously, if either of the -b or -f options is specified, the current
behaviour will be unchanged. However, I'd like to code in a filesystem
size threshold above which newfs sans -b or -f options will jump up to
block/frag sizes of 16384/2048.

And more obviously, I'm going to do a lot more testing to see what kinds
of applications might be negatively impacted by the change!

Assuming that the threshold my research leads me to is agreeable to the
community, is there any sense in making this a sysinstall decision, or
can I press on with making it a newfs job?

Ciao,
Sheldon.

[1] See the ports/benchmarks/postmark/pkg-descr for information on
postmark, a storage benchmark with MTAs and news servers in mind.

[2] Postmark v1.5 configuration: 100000 files, 500000 transactions,
10KB to 25KB file size spread, random seed 42.

[3] Soft updates enabled in both cases.

To Unsubscribe: send mail to majo...@FreeBSD.org
with "unsubscribe freebsd-arch" in the body of the message

David O'Brien

Nov 23, 2001, 6:56:43 PM
On Fri, Nov 23, 2001 at 05:04:34PM +0200, Sheldon Hearn wrote:
> I know there was some discussion on -arch recently about teaching
> sysinstall to make this decision. However, I'd like to suggest that
> newfs do the thinking instead.

Agreed.

--
-- David (obr...@FreeBSD.org)

Kirk McKusick

Nov 24, 2001, 12:41:30 PM
I am of the opinion that we should default to 16K/2K for most
filesystems today. I believe that the change should be in newfs.

Kirk McKusick

Matthew Dillon

Nov 24, 2001, 1:49:48 PM

:I am of the opinion that we should default to 16K/2K for most
:filesystems today. I believe that the change should be in newfs.
:
:	Kirk McKusick

The only thing I worry about is reduced performance when doing
random database accesses, which makes me kinda want to give the
system the capability to do smaller I/O's :-) But apart from that
worry I agree completely. We get fewer indirection levels (64MB
multiplier instead of 16MB per indirection block), smaller bitmaps
(1/2 the size), and less strain on the clustering code (at least for
sequential I/O). Memory is getting cheap and filesystems are getting
larger, too.

Sheldon, I think you have the go-ahead to change the newfs default. Do it!

p.s. side note on the buffer cache: The buffer cache is optimized
for both 1K/8K and 2K/16K, but it is *NOT* optimized for anything
larger. 2K/16K is thus the largest configuration we can use optimally
in regards to the buffer cache.

-Matt
Matthew Dillon
<dil...@backplane.com>
