
comments on newfs raw disk ? Safe ? (7 terabyte array)


Arone Silimantia

Feb 9, 2007, 1:35:48 AM
to freeb...@freebsd.org

FreeBSD 6.2-RELEASE.

Big 3ware sata raid with 16 disks. First two disks are a mirror to
boot off of. I installed the system with sysinstall and created all the
partitions on the boot mirror, etc., and just didn't even touch the
14-disk array that was also created.

So then I spent hours researching bsdlabel and gpt and blah blah blah,
and I just got fed up and:

dd if=/dev/zero of=/dev/da1 bs=1k count=1
newfs -m 0 /dev/da1
mount /dev/da1 /mnt

And that's that. But it seems too good to be true! Can someone please
comment on this scheme and if there are some hidden dangers or lack of
functionality that I will regret in the future ?

Will it fsck just like any other UFS2 partition I run ? Can I run
quotas and snapshots and everything else on it, just like normal ?

Other than the fact that I can't boot this, is there _any downside
whatsoever_ to newfs'ing raw disk like this ?



Ivan Voras

Feb 9, 2007, 8:12:07 AM
to freeb...@freebsd.org
Arone Silimantia wrote:

> dd if=/dev/zero of=/dev/da1 bs=1k count=1
> newfs -m 0 /dev/da1
> mount /dev/da1 /mnt
>
> And that's that. But it seems too good to be true! Can someone please
> comment on this scheme and if there are some hidden dangers or lack of
> functionality that I will regret in the future ?

No dangers at the system level - you can create your file system on any
storage-like device, use it and mount it any way you want. Raw disks are
a perfectly valid target.

> Will it fsck just like any other UFS2 partition I run ? Can I run
> quotas and snapshots and everything else on it, just like normal ?

Yes.

> Other than the fact that I can't boot this, is there _any downside
> whatsoever_ to newfs'ing raw disk like this ?

Only "collateral" problems because of the partition size: a regular
(non-softupdates) fsck will take a LONG time to finish and eat a LOT of
memory while it's doing its stuff. You'll need a lot of swap space (1GB
per TB? someone had empirical numbers on this, I'm sure) if you think
you'll need to fsck it entirely. Creating snapshots will also take a
long time on it, and you probably want to search the lists for
recommendations about creating snapshots in a second level directory in
order not to block the root directory. Related to this is
background-fsck which works by creating snapshots, so you'll probably
want to disable it.

In any case, try every feature you think you'll need before deploying it.

Also, write about your experience on this list :)

Ivan Voras

Feb 9, 2007, 8:15:10 AM
to freeb...@freebsd.org
Ivan Voras wrote:

> Only "collateral" problems because of the partition size: a regular

^^^^^^^^^^^^^^
Sorry, "file system size".

Eric Anderson

Feb 9, 2007, 8:38:07 AM
to Ivan Voras, freeb...@freebsd.org
On 02/09/07 07:12, Ivan Voras wrote:
> Arone Silimantia wrote:
>
>> dd if=/dev/zero of=/dev/da1 bs=1k count=1
>> newfs -m 0 /dev/da1
>> mount /dev/da1 /mnt
>>
>> And that's that. But it seems too good to be true! Can someone please
>> comment on this scheme and if there are some hidden dangers or lack of
>> functionality that I will regret in the future ?
>
> No dangers at the system level - you can create your file system on any
> storage-like device, use it and mount it any way you want. Raw disks are
> a perfectly valid target.


As a side benefit, one might also get a performance *increase* because
not having a slice/partition in the way might make the file system
blocks line up better with the stripe size. Scott Long has posted about
this in the past, either here, or on -performance, or somewhere. I
can't find the thread right off, but it's out there (he wrote a very
good description of what happens, why, etc).

FWIW, I almost always do it this way.


>> Will it fsck just like any other UFS2 partition I run ? Can I run
>> quotas and snapshots and everything else on it, just like normal ?
>
> Yes.
>
>> Other than the fact that I can't boot this, is there _any downside
>> whatsoever_ to newfs'ing raw disk like this ?
>
> Only "collateral" problems because of the partition size: a regular
> (non-softupdates) fsck will take a LONG time to finish and eat a LOT of
> memory while it's doing its stuff. You'll need a lot of swap space (1GB
> per TB? someone had empirical numbers on this, I'm sure) if you think
> you'll need to fsck it entirely. Creating snapshots will also take a
> long time on it, and you probably want to search the lists for
> recommendations about creating snapshots in a second level directory in
> order not to block the root directory. Related to this is
> background-fsck which works by creating snapshots, so you'll probably
> want to disable it.

I have 5 10Tb file systems (and some 2Tb ones, but who cares about those
tiny things? :)), and I can tell you that an empty huge file system is
pretty easily fsck-able, but a full one will kill you. It greatly
depends on how many files (inodes) you have used on the file system. If
you have a massive amount of small files, you'll be eating up a ton of
memory. My 'rule of thumb' for my data (which averages to about
16k/file) is 1G of memory for each 1Tb of disk space used. So, on a
10Tb file system, if I ever want the fsck to complete, I need an AMD64
box with *at least* 10G of memory, plus a lot of time. A *lot* of time.
By 'a lot', I mean anywhere from a day, to several days.
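[Eric's figures can be turned into a quick sanity check with shell arithmetic. The 16 KB average file size and the 1 GB-per-TB factor are his empirical numbers for his own workload, not universal constants:]

```shell
# Back-of-envelope from the rule of thumb above: file count on a full
# file system, and RAM wanted for fsck, at a ~16 KB average file size.
fs_tb=10                          # file system size in TB
avg_file=16384                    # average file size in bytes (workload-specific)
files=$((fs_tb * 1000000000000 / avg_file))
mem_gb=$fs_tb                     # rule of thumb: 1 GB RAM per TB used
echo "~$files files, want >= ${mem_gb} GB RAM for fsck"
```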


> In any case, try every feature you think you'll need before deploying it.
>
> Also, write about your experience on this list :)

I second that. It's important to share anything we can, so we see what
others are doing, what the needs are, etc.

I also recommend looking at gjournal, now in -CURRENT. I'm not sure if
it's still considered beta, but care should be taken when using it of
course. However, I use it for many tens of TB of data, and I'm a happy
gjournal fan (thanks Pawel!).

Eric


Indigo

Feb 9, 2007, 8:34:13 AM
to Ivan Voras, freeb...@freebsd.org
Hi,
does that mean that slicing and partitioning additional drives has no
advantages on a purely FreeBSD machine?

Vasek

> _______________________________________________
> freeb...@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-...@freebsd.org"


Ivan Voras

Feb 9, 2007, 8:44:23 AM
to freeb...@freebsd.org
Indigo wrote:
> Hi,
> does that mean that slicing and partitioning additional drives has no
> advantages on a purely FreeBSD machine?

No, except organizational - you might want to use different file systems
or different file system parameters on each partition/slice, you might
want to distribute data so that if one file system gets full (for
example, with logs) the others are not affected, etc. There was a pretty
long list of organizational benefits of partitioning on the questions-
mailing list recently.

But from a system point of view, no - partitioning is not required.

Indigo

Feb 9, 2007, 9:04:30 AM
to freeb...@freebsd.org
Hello everyone,
my new RAID card just arrived in the mail and it can expand an array.

My question is - how can I expand a filesystem (slice & partition OR raw
device) when I add another drive to a RAID-5, for example.

I will probably flood this mailing list with questions today because
there's a lot of things I couldn't find in the handbook.

Vasek

Eric Anderson

Feb 9, 2007, 9:14:12 AM
to Indigo, freeb...@freebsd.org

See growfs(8)


Eric

Oliver Fromme

Feb 9, 2007, 12:01:41 PM
to freeb...@freebsd.org, aron...@yahoo.com
Arone Silimantia wrote:
> Big 3ware sata raid with 16 disks. First two disks are a mirror to
> boot off of. I installed the system with sysinstall and created all the
> partitions on the boot mirror, etc., and just didn't even touch the
> 14-disk array that was also created.
> [...]
> newfs -m 0 /dev/da1

You didn't mention the size of the FS, but I guess it's at
least 4 TB, probably more.

You'll probably want to reduce the inode density (i.e.
increase the bytes-per-inode ratio). With the default
value, an fsck will be a royal pain, no matter whether you
use background fsck (with snapshots) or not. It might even
not work at all if you don't have a huge amount of RAM.

If you increase the ratio to 64 K, it will lower the fsck
time and RAM requirement by an order of magnitude, while
there are still about 15 million inodes available per TB.
If possible, increase the ratio (-i option) further. It
depends on the expected average file size and the maximum
number of files that you intend to store on the FS, of
course.
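[As a quick check on the 15-million figure, the inode count per TB for a given -i value is just a division; this sketch assumes a decimal TB:]

```shell
# Inodes available per TB for a given bytes-per-inode ratio (-i).
tb=1000000000000      # 1 TB (decimal) in bytes
ratio=65536           # newfs -i 65536, i.e. one inode per 64 KB
inodes=$((tb / ratio))
echo "-i $ratio leaves $inodes inodes per TB"
```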

Depending on your application, it might also make sense to
_carefully_ (!) adjust the fragment and block sizes of the
FS (-f and -b options to newfs). However, note that non-
standard values are not widely used and might expose bugs,
especially on large file systems. If you change them, you
should at least perform some extensive stress testing.

Another thing that should be mentioned is the fact that
"-m 0" will result in two things: First, it will make the
FS slower; especially when it's getting full, it will
be _much_ slower. Second, it increases fragmentation.

I recommend you don't use the -m option and leave it at the
default. Yes, that means that a whole lot of GB will not
be available to users (non-root), but for that price you'll
get a fast file system. Also note that you can change
that option at a later date with tunefs(8), so if you
decide that you _really_ need that extra space, and speed
is not an issue at all, then you can change the -m value
any time.

Just my two cents.

Oh by the way, I also agree with Eric that you should have
a look at gjournal. It practically removes the fsck issues.
At the moment it's only in -current, but I think Pawel
provided a port for 6.x.

Best regards
Oliver

--
Oliver Fromme, secnetix GmbH & Co. KG, Marktplatz 29, 85567 Grafing b. M.
Handelsregister: Registergericht Muenchen, HRA 74606, USt-Id: DE204219783
Any opinions expressed in this message are personal to the author and may
not necessarily reflect the opinions of secnetix GmbH & Co KG in any way.
FreeBSD-Dienstleistungen, -Produkte und mehr: http://www.secnetix.de/bsd

One Unix to rule them all, One Resolver to find them,
One IP to bring them all and in the zone to bind them.

Arone Silimantia

Feb 9, 2007, 2:06:52 PM
to freeb...@freebsd.org

Oliver,

Thank you for your detailed response - my own response is inline below:


Oliver Fromme <ol...@lurza.secnetix.de> wrote: Arone Silimantia wrote:
> Big 3ware sata raid with 16 disks. First two disks are a mirror to
> boot off of. I installed the system with sysinstall and created all the
> partitions on the boot mirror, etc., and just didn't even touch the
> 14-disk array that was also created.
> [...]
> newfs -m 0 /dev/da1

You didn't mention the size of the FS, but I guess it's at
least 4 TB, probably more.


Well, in the subject line I mentioned 7 TB, but I have since rearranged some things and it will be 5.5 TB.

You'll probably want to reduce the inode density (i.e.
increase the bytes-per-inode ratio). With the default
value, an fsck will be a royal pain, no matter whether you
use background fsck (with snapshots) or not. It might even
not work at all if you don't have a huge amount of RAM.

Well, I have 4 GB of physical RAM, and 4 GB of swap - so does that total of 8 GB satisfy the "1 GB per TB" requirement, or do I really need >5.5 GB of actual swap space (in addition to the physical) ?


If you increase the ratio to 64 K, it will lower the fsck
time and RAM requirement by an order of magnitude, while
there are still about 15 million inodes available per TB.
If possible, increase the ratio (-i option) further. It
depends on the expected average file size and the maximum
number of files that you intend to store on the FS, of
course.


Ok, I will look into this. My data population uses a little less than 5 million inodes per TB, so this may be workable to tune. I see the default is '4', so I could run newfs with:

newfs -i 8

to do what you are suggesting ?


Depending on your application, it might also make sense to
_carefully_ (!) adjust the fragment and block sizes of the
FS (-f and -b options to newfs). However, note that non-
standard values are not widely used and might expose bugs,
especially on large file systems. If you change them, you
should at least perform some extensive stress testing.


I think I'll make things simple by steering clear of this...

Another thing that should be mentioned is the fact that
"-m 0" will result in two things: First, it will make the
FS slower; especially when it's getting full, it will
be _much_ slower. Second, it increases fragmentation.

I recommend you don't use the -m option and leave it at the
default. Yes, that means that a whole lot of GB will not
be available to users (non-root), but for that price you'll
get a fast file system. Also note that you can change
that option at a later date with tunefs(8), so if you
decide that you _really_ need that extra space, and speed
is not an issue at all, then you can change the -m value
any time.


Ok, that's good advice - I will leave it at the default.


Oh by the way, I also agree with Eric that you should have
a look at gjournal. It practically removes the fsck issues.
At the moment it's only in -current, but I think Pawel
provided a port for 6.x.


Well, I don't mind a 24 hour fsck, and I would like to remove complexity and not be so on the bleeding edge with things. Since I am only using 5 million inodes per TB anyway, that ends up being 25-30 million inodes on the 5.5 TB file system, which I think could fsck in a day or so.

I just need to know if my 4+4 GB of memory is enough, and if this option in loader.conf:

kern.maxdsiz="2048000000"

is sufficient...

Again, many thanks.



Arone Silimantia

Feb 9, 2007, 6:31:43 PM
to freeb...@freebsd.org

On Fri, 9 Feb 2007, Eric Anderson wrote:

> I have 5 10Tb file systems (and some 2Tb ones, but who cares about those
> tiny things? :)), and I can tell you that an empty huge file system is
> pretty easily fsck-able, but a full one will kill you. It greatly
> depends on how many files (inodes) you have used on the file system. If
> you have a massive amount of small files, you'll be eating up a ton of
> memory. My 'rule of thumb' for my data (which averages to about
> 16k/file) is 1G of memory for each 1Tb of disk space used. So, on a
> 10Tb file system, if I ever want the fsck to complete, I need an AMD64
> box with *at least* 10G of memory, plus a lot of time. A *lot* of time.
> By 'a lot', I mean anywhere from a day, to several days.


So ... the time it takes to fsck is not a function of how many inodes are actually initialized from newfs, but how many you are _actually using_ ?

But the amount of memory the fsck takes is a function of how many inodes exist, regardless of how many you are actually using ?

Are those two interpretations correct ?




Antony Mawer

Feb 9, 2007, 7:05:46 PM
to Eric Anderson, freeb...@freebsd.org, Ivan Voras
On 9/02/2007 3:38 AM, Eric Anderson wrote:
>> Only "collateral" problems because of the partition size: a regular
>> (non-softupdates) fsck will take a LONG time to finish and eat a LOT of
>> memory while it's doing its stuff. You'll need a lot of swap space (1GB
>> per TB? someone had empirical numbers on this, I'm sure) if you think
>> you'll need to fsck it entirely. Creating snapshots will also take a
>> long time on it, and you probably want to search the lists for
>> recommendations about creating snapshots in a second level directory in
>> order not to block the root directory. Related to this is
>> background-fsck which works by creating snapshots, so you'll probably
>> want to disable it.
>
> I have 5 10Tb file systems (and some 2Tb ones, but who cares about those
> tiny things? :)), and I can tell you that an empty huge file system is
> pretty easily fsck-able, but a full one will kill you. It greatly
> depends on how many files (inodes) you have used on the file system. If
> you have a massive amount of small files, you'll be eating up a ton of
> memory. My 'rule of thumb' for my data (which averages to about
> 16k/file) is 1G of memory for each 1Tb of disk space used. So, on a
> 10Tb file system, if I ever want the fsck to complete, I need an AMD64
> box with *at least* 10G of memory, plus a lot of time. A *lot* of time.
> By 'a lot', I mean anywhere from a day, to several days.

Has anyone looked at the changes in DragonFly that were made in the 1.8
release? I noticed the other day, reading the release notes
(http://www.dragonflybsd.org/community/release1_8.shtml) the point:

"Greatly reduce the memory allocated by fsck when fscking filesystems
with a huge number of directories (primarily mirrors with lots of
hardlinked files). Otherwise fsck can run out of memory on such
filesystems."

Whether or not this helps in the general case, or only the scenario
described, I do not know... but it would be interesting for someone with
enough filesystem-foo to have a look at!

--Antony

Eric Anderson

Feb 10, 2007, 2:12:07 AM
to Antony Mawer, freeb...@freebsd.org, Ivan Voras


I'll check that out - didn't know about it, thanks!

Eric

Oliver Fromme

Feb 12, 2007, 10:52:44 AM
to freeb...@freebsd.org, aron...@yahoo.com
Arone Silimantia wrote:
> Thank you for your detailed response - my own response is inline below:

You should indent the quoted text, so it's easier to tell
the quoted text from your own text. Most MUAs offer a
function to indent the quoted text automatically.

I tried to fix it below.

> Oliver Fromme <ol...@lurza.secnetix.de> wrote:
> > Arone Silimantia wrote:
> > > Big 3ware sata raid with 16 disks. First two disks are a mirror to
> > > boot off of. I installed the system with sysinstall and created all the
> > > partitions on the boot mirror, etc., and just didn't even touch the
> > > 14-disk array that was also created.
> > > [...]
> > > newfs -m 0 /dev/da1
> >
> > You didn't mention the size of the FS, but I guess it's at
> > least 4 TB, probably more.
>
> Well, in the subject line I mentioned 7 TB, but I have
> since rearranged some things and it will be 5.5 TB.

Sorry, I didn't look at the subject line too closely.
I expected that all important information was included
in the mail body.

> > You'll probably want to reduce the inode density (i.e.
> > increase the bytes-per-inode ratio). With the default
> > value, an fsck will be a royal pain, no matter whether you
> > use background fsck (with snapshots) or not. It might even
> > not work at all if you don't have a huge amount of RAM.
>
> Well, I have 4 GB of physical RAM, and 4 GB of swap - so does that
> total of 8 GB satisfy the "1 GB per TB" requirement, or do I really
> need >5.5 GB of actual swap space (in addition to the physical) ?

That "1 GB per TB" requirement is just a rule of thumb.
I don't know how accurate it is. Also note that it is
desirable to avoid having fsck use swap, because it will
be even slower then. A lot slower.

> > If you increase the ratio to 64 K, it will lower the fsck
> > time and RAM requirement by an order of magnitude, while
> > there are still about 15 million inodes available per TB.
> > If possible, increase the ratio (-i option) further. It
> > depends on the expected average file size and the maximum
> > number of files that you intend to store on the FS, of
> > course.
>
> Ok, I will look into this. My data population uses a little less
> than 5 million inodes per TB, so this may be workable to tune. So I
> see the default is '4' - so I could run newfs with:

The default is 4096 (one inode per 4 KB).

> newfs -i 8
>
> to do what you are suggesting ?

I think you mean 8192. That will cut the number of inodes
in half, but I think you can go even further. Try:

# newfs -i 65536

That will leave room for about 15 million inodes per TB,
which is plenty for your needs.

By the way, reducing the inode density like that will also
give you more space for actual file data. In UFS2, every
inode takes 256 bytes. Increasing the bytes-per-inode ratio
from 4 KB to 64 KB will give you an additional ~60 GB of
space per TB.
_And_ it will reduce the memory and time requirements of
fsck.
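[The space figure can be checked the same way; this assumes a decimal TB and the 256-byte on-disk UFS2 inode mentioned above:]

```shell
# Disk space reclaimed per TB by raising bytes-per-inode from 4 KB to
# 64 KB, at 256 bytes of on-disk inode each.
tb=1000000000000
saved=$(( (tb / 4096 - tb / 65536) * 256 ))
echo "~$((saved / 1000000000)) GB reclaimed per TB"
```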

> > Oh by the way, I also agree with Eric that you should have
> > a look at gjournal. It pratically removes the fsck issues.
> > At the moment it's only in -current, but I think Pawel
> > provided a port for 6.x.
>
> Well, I don't mind a 24 hour fsck, and I would like to remove
> complexity and not be so on the bleeding edge with things. Since I
> am only using 5mil inodes per TB anyway, that ends up being 25-30
> million inodes in the 5 TB drive which I think could fsck in a day or
> so.

I suggest you test it before putting it into production,
i.e. populate the file system with the expected number of
files, then run fsck.

> I just need to know if my 4+4 GB of memory is enough, and if this
> option in loader.conf:
>
> kern.maxdsiz="2048000000"

That will limit the process size to 2 GB. You might need
to set it higher if fsck needs more than that. (I assume
you're running FreeBSD/amd64, or otherwise you'll run into
process size limitations anyway.)

Best regards
Oliver

--
Oliver Fromme, secnetix GmbH & Co. KG, Marktplatz 29, 85567 Grafing b. M.
Handelsregister: Registergericht Muenchen, HRA 74606, USt-Id: DE204219783
Any opinions expressed in this message are personal to the author and may
not necessarily reflect the opinions of secnetix GmbH & Co KG in any way.
FreeBSD-Dienstleistungen, -Produkte und mehr: http://www.secnetix.de/bsd

Perl is worse than Python because people wanted it worse.
-- Larry Wall

Oliver Fromme

Feb 12, 2007, 11:16:11 AM
to freeb...@freebsd.org, aron...@yahoo.com
I'm sorry I made a small mistake ...

Oliver Fromme wrote:


> Arone Silimantia wrote:
> > Ok, I will look into this. My data population uses a little less
> > than 5 million inodes per TB, so this may be workable to tune. So I
> > see the default is '4' - so I could run newfs with:
>
> The default is 4096 (one inode per 4 KB).

The default is 4 * fragsize, and the default fragsize is
2 KB, so the default bytes-per-inode ratio is 8 KB, not 4 KB.

(Historically the default UFS fragsize was 1 KB with a
blocksize of 8 KB, so the default ratio was indeed 4 KB
per inode. But that was changed quite some years ago.)
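[In arithmetic terms, the correction above comes down to:]

```shell
# Default bytes-per-inode is 4 * fragsize; with the modern 2 KB default
# fragment size that gives one inode per 8 KB.
fragsize=2048
default_i=$((4 * fragsize))
echo "default -i is $default_i bytes per inode"
```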

Best regards
Oliver

--
Oliver Fromme, secnetix GmbH & Co. KG, Marktplatz 29, 85567 Grafing b. M.
Handelsregister: Registergericht Muenchen, HRA 74606, USt-Id: DE204219783
Any opinions expressed in this message are personal to the author and may
not necessarily reflect the opinions of secnetix GmbH & Co KG in any way.
FreeBSD-Dienstleistungen, -Produkte und mehr: http://www.secnetix.de/bsd

With Perl you can manipulate text, interact with programs, talk over
networks, drive Web pages, perform arbitrary precision arithmetic,
and write programs that look like Snoopy swearing.

Randy Bush

Feb 12, 2007, 11:26:49 AM
to Oliver Fromme, freeb...@freebsd.org
this thread has been great. but i suspect it would be greatly
appreciated if the handbook had a page "How to format and use
multi-terabyte drives, facts, trade-offs, and recipes." i am about to
do this (5TB), have been gathering info, and feel appropriately confused.

randy

Oliver Fromme

Feb 12, 2007, 11:53:21 AM
to Randy Bush, freeb...@freebsd.org

Randy Bush wrote:
> this thread has been great. but i suspect it would be greatly
> appreciated if the handbook had a page "How to format and use
> multi-terabyte drives, facts, trade-offs, and recipes."

That would indeed be great. Unfortunately I have other
battle fields right now (fighting against sysinstall and
loader code, among other things), so I have zero time to
write a Handbook chapter from scratch. I'm also probably
not that authoritative; there are people who are more
knowledgable about UFS/UFS2.

However, that whole issue -- formatting multi-TB FS -- is
very much a moving target, especially right now that the
new gjournal code is entering the arena. It will change
quite a lot of things, and the recommendations will be
different. You'll probably have to rewrite half of the
chapter.

With that in mind, maybe it makes more sense to wait a
little bit until gjournal has matured some more and has
officially hit the RELENG_6 branch. I don't think it will
take long; the code has already proven quite stable.

Best regards
Oliver

--
Oliver Fromme, secnetix GmbH & Co. KG, Marktplatz 29, 85567 Grafing b. M.
Handelsregister: Registergericht Muenchen, HRA 74606, USt-Id: DE204219783
Any opinions expressed in this message are personal to the author and may
not necessarily reflect the opinions of secnetix GmbH & Co KG in any way.
FreeBSD-Dienstleistungen, -Produkte und mehr: http://www.secnetix.de/bsd

"In My Egoistical Opinion, most people's C programs should be indented
six feet downward and covered with dirt."
-- Blair P. Houghton

Ivan Voras

Feb 12, 2007, 4:03:47 PM
to freeb...@freebsd.org

It probably doesn't have to be a handbook chapter - I think an article
would also be good, especially because it would contain "rule of thumb"
information. I'll probably start making one during the next week or two...


Indigo

Feb 12, 2007, 4:12:58 PM
to Ivan Voras, freeb...@freebsd.org

It would be nice to have more information about partitioning and
filesystem tuning in the handbook. I'm not going to write it, but if
anyone ever decides to, I'd love to help in any way I can.

Vasek

Arone Silimantia

Feb 13, 2007, 12:58:06 AM
to freeb...@freebsd.org, ol...@lurza.secnetix.de

Oliver,


On Mon, 12 Feb 2007, Oliver Fromme wrote:

> > > You'll probably want to reduce the inode density (i.e.
> > > increase the bytes-per-inode ratio). With the default
> > > value, an fsck will be a royal pain, no matter whether you
> > > use background fsck (with snapshots) or not. It might even
> > > not work at all if you don't have a huge amount of RAM.
> >
> > Well, I have 4 GB of physical RAM, and 4 GB of swap - so does that
> > total of 8 GB satisfy the "1 GB per TB" requirement, or do I really
> > need >5.5 GB of actual swap space (in addition to the physical) ?
>
> That "1 GB per TB" requirement is just a rule of thumb.
> I don't know hoe accurate it is. Also note that it is
> desirable to avoid having fsck use swap, because it will
> be even slower then. A lot slower.


Ok, understood. But regardless of performance, fsck will use
BOTH physical and swap, so as far as fsck is concerned, I have 8 GB of
memory ?


> # newfs -i 65536
>
> That will leave room for about 15 million inodes per TB,
> which is plenty for your needs.
>
> By the way, reducing the inode density like that will also
> give your more space for actual file data. In UFS2, every
> inode takes 256 bytes. Reducing the bytes-per-inode ratio
> from 4 KB to 64 KB will give you additional 60 GB of space.
> _And_ it will reduce the memory and time requirements of
> fsck.


Thank you - this is great advice.

> > Well, I don't mind a 24 hour fsck, and I would like to remove
> > complexity and not be so on the bleeding edge with things. Since I
> > am only using 5mil inodes per TB anyway, that ends up being 25-30
> > million inodes in the 5 TB drive which I think could fsck in a day

> > so.
>
> I suggest you test it before putting it into production,
> i.e. populate the file system with the expected number of
> files, then run fsck.


Well, here is what I am assuming, and I would like to get some
confirmation on these two points:

- The time it takes to fsck is not a function of how many inodes are
initialized from newfs, but how many you are _actually using_.

- But the amount of memory the fsck takes is a function of how many inodes
exist, regardless of how many you are actually using.

Are these two interpretations correct ?


> > I just need to know if my 4+4 GB of memory is enough, and if this
> > option in loader.conf:
> >
> > kern.maxdsiz="2048000000"
>
> That will limit the process size to 2 GB. You might need
> to set it higher if fsck needs more than that. (I assume
> you're running FreeBSD/amd64, or otherwise you'll run into
> process size limitations anyway.)


Well ... no, I am using normal x86 FreeBSD on an Intel based system. I
have 4 GB of physical ram, and 4 GB of swap. So I am tempted to just make
that number 4096000000 and be done with it ... if fsck doesn't need
that much memory, there is no harm to the system in simply having an
inflated limit like that, is there ?

I guess if I want to be safe and guard against a rogue, runaway, memory
eating process, I could ratchet it up to (physical_ram - 256 megs).

Which brings me to my last question:

I understand why it's not useful to try to compute fsck _times_ - there
are so many factors from disk speed to array speed to stripe size to
population, etc. - who knows how long it will take.

BUT, why isn't it possible to compute fsck _memory needs_ ? If I have a
filesystem of A size with X inodes init'd, and Y inodes used, shouldn't I
be able to compute how much memory fsck will need ?

Thanks again.



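[One illustrative way to frame the memory question is a linear model in the number of initialized inodes. The per-inode constant below is purely a guess for illustration, not taken from the fsck_ffs source; as the later replies note, the real behaviour has several terms, so measure on a test file system instead:]

```shell
# Hypothetical fsck memory estimator. The 32 bytes/inode constant is a
# GUESS, not a documented fsck_ffs figure -- do not trust it without
# measuring on a populated test file system.
fs_tb=5
ratio=65536                        # newfs -i value (bytes per inode)
per_inode=32                       # assumed bytes of fsck state per inode
inodes=$((fs_tb * 1000000000000 / ratio))
est_mb=$((inodes * per_inode / 1000000))
echo "rough fsck state: ~${est_mb} MB for $inodes inodes"
```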

Nicole Harrington

Feb 13, 2007, 3:09:13 AM
to Arone Silimantia, freeb...@freebsd.org, ol...@lurza.secnetix.de
Or, avoid worrying about fsck entirely by using a journaling file
system.

I have been having great success with the file
journaling patches from:
http://people.freebsd.org/~pjd/patches/

See gjournal patches. I really hope these get put
into 6.3. They rock and it is something FreeBSD needs
to stay competitive for large file systems.

They no longer apply cleanly, but if you have any
questions, feel free to ask me. There is also some
basic info to be found via Google for gjournal, and in old
mailing list postings.


Nicole

Oliver Fromme

Feb 13, 2007, 3:10:18 AM
to freeb...@freebsd.org, ind...@voda.cz, ivo...@fer.hr
Indigo <ind...@voda.cz> wrote:

Note that there's already quite some information about that
in the tuning(7) manual page. Have you had a look at it?

Best regards
Oliver

--
Oliver Fromme, secnetix GmbH & Co. KG, Marktplatz 29, 85567 Grafing b. M.

Handelsregister: Registergericht Muenchen, HRA 74606, Geschäftsfuehrung:
secnetix Verwaltungsgesellsch. mbH, Handelsregister: Registergericht Mün-
chen, HRB 125758, Geschäftsführer: Maik Bachmann, Olaf Erb, Ralf Gebhart


Any opinions expressed in this message are personal to the author and may
not necessarily reflect the opinions of secnetix GmbH & Co KG in any way.
FreeBSD-Dienstleistungen, -Produkte und mehr: http://www.secnetix.de/bsd

"Perl will consistently give you what you want,
unless what you want is consistency."
-- Larry Wall

Oliver Fromme

Feb 13, 2007, 3:35:36 AM
to freeb...@freebsd.org, aron...@yahoo.com
Arone Silimantia wrote:

> Oliver Fromme wrote:
> > That "1 GB per TB" requirement is just a rule of thumb.
> > I don't know hoe accurate it is. Also note that it is
> > desirable to avoid having fsck use swap, because it will
> > be even slower then. A lot slower.
>
> Ok, understood. But regardless of performance, fsck will use
> BOTH physical and swap,

Basically yes. fsck runs as a normal userland process, so
it can use memory (RAM + swap) like any other program,
but it is also subject to the usual limitations (e.g.
resource limits, address space limitations etc.).

> so as far as fsck is concerned, I have 8 GB of
> memory ?

Only if you run a 64bit operating system (FreeBSD/amd64,
/ia64 or /sparc64). In 32bit operating systems the address
space is limited to 4 GB. Also note that the kernel needs
some room from that address space, so the available space
will be even smaller, usually 3 GB or less, depending on
how your kernel is tuned.
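A quick sketch of that arithmetic (the 1 GB of KVM below is just an
assumed example; the real figure depends on how the kernel is tuned):

```python
# Illustrative arithmetic for the 32-bit case: the 4 GB virtual
# address space is shared between the kernel and the process.
ADDR_SPACE = 4 * 1024**3   # 4 GB total on a 32-bit system
KVM = 1 * 1024**3          # assume the kernel maps 1 GB (tunable)

usable = ADDR_SPACE - KVM  # what's left for a single userland process
print(f"max userland process size: {usable // 1024**3} GB")
```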

> > I suggest you test it before putting it into production,
> > i.e. populate the file system with the expected number of
> > files, then run fsck.
>
> Well, here is what I am assuming, and I would like to get some
> confirmation on these two points:
>
> - The time it takes to fsck is not a function of how many inodes are
> initialized from newfs, but how many you are _actually using_.
>
> - But the amount of memory the fsck takes is a function of how many inodes
> exist, regardless of how many you are actually using.
>
> Are these two interpretations correct ?

The answer is yes and no. :-) I have to admit that I'm
not 100% sure here, so please someone correct me if I'm
wrong ...

However, fsck runs several passes which do different things
on the file system. One of the passes involves reading all
directory information -- this pass is obviously dependent
on the number of directories and files that are actually
allocated on the file system. In another pass fsck checks
for lost inodes -- this pass involves visiting _all_ inodes,
no matter if they're currently marked as allocated or not.
So you have both parameters in the equation, and they affect
both the memory requirements and the run time of fsck.

The exact function of inodes vs. memory/runtime is probably
not very simple. That's why I suggested you try it yourself
under the expected conditions before putting the machine
into production.

> > > I just need to know if my 4+4 GB of memory is enough, and if this
> > > option in loader.conf:
> > >
> > > kern.maxdsiz="2048000000"
> >
> > That will limit the process size to 2 GB. You might need
> > to set it higher if fsck needs more than that. (I assume
> > you're running FreeBSD/amd64, or otherwise you'll run into
> > process size limitations anyway.)
>
> Well ... no, I am using normal x86 FreeBSD on an Intel based system. I
> have 4 GB of physical ram, and 4 GB of swap. So I am tempted to just make
> that number 4096000000 and be done with it ...

See my comment about 32bit vs. 64bit above. If you're
running a 32bit OS (such as FreeBSD/i386), you have a 4GB
address space limit, and it is shared between kernel and
userland processes. Of course, every process has its own
(virtual) address space, but the kernel virtual memory
(KVM) is always mapped into it. So, for example, if the
kernel uses 1 GB of KVM, then a single userland process
can only be as big as 3 GB.

(By the way, the PAE option does _not_ change the limit of
the address space. It's still only 4 GB even with PAE.)

> if fsck doesn't need
> that much memory, there is no harm to the system in simply having an
> inflated limit like that, is there ?

Well, the process limits are useful for protection against
run-away processes that just keep growing (because of a bug,
an attack or other circumstances). If there's no limit, a
single process can take the whole system down by using up
all of its resources.

However, there's a soft and a hard limit. The maxdsiz
parameter specifies the maximum hard limit, so you can still
have a lower soft limit for certain processes or users.
You can modify the limits via /etc/login.conf. (The soft
limit can only be increased up to the hard limit, and the
hard limit can never be increased.)
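For example, a login.conf entry along these lines would give one login
class a higher data-size limit (the class name and values here are only
illustrative, and you'd run cap_mkdb /etc/login.conf after editing):

```
bigfsck:\
	:datasize-cur=2048M:\
	:datasize-max=4096M:\
	:tc=default:
```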

> BUT, why isn't it possible to compute fsck _memory needs_ ? If I have a
> filesystem of A size with X inodes init'd, and Y inodes used, shouldn't I
> be able to compute how much memory fsck will need ?

Yes, in theory that should be possible. Either by carefully
reading the fsck source code, or by running fsck on various
test file systems and trying to build a function from the
observed process sizes. However, it isn't _that_ trivial,
because it also depends on the malloc implementation and
on the malloc flags in use (e.g. via /etc/malloc.conf).

By the way, I think fsck also records and checks the path
names of all files, so those must be taken into the
equation, too. Short names will take less space. I just
did a "find /usr/src | wc" for testing, and it showed
about 50,000 files, and the path names are 2 MB total.
If you have 25,000,000 files with the same average file
name length, those names alone will take 1 GB to store.

(I'm assuming here that fsck indeed stores all the path
names at the same time. I don't know if it really does
that. I haven't examined the source code closely.)
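A quick sketch of that scaling, using the numbers from the /usr/src
sample above (all of these figures are illustrative, not measured on
the actual array):

```python
# Back-of-the-envelope scaling of fsck's path-name storage,
# extrapolated from the /usr/src sample quoted above.
files_sampled = 50_000           # "find /usr/src | wc" file count
bytes_sampled = 2 * 1024**2      # ~2 MB of path names in total

avg_name_len = bytes_sampled / files_sampled   # ~42 bytes per path

big_fs_files = 25_000_000
name_storage = big_fs_files * avg_name_len     # bytes for all names

print(f"average path length: {avg_name_len:.0f} bytes")
print(f"estimated name storage: {name_storage / 1024**3:.2f} GB")
```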

Best regards
Oliver

--
Oliver Fromme, secnetix GmbH & Co. KG, Marktplatz 29, 85567 Grafing b. M.
Handelsregister: Registergericht Muenchen, HRA 74606, Geschäftsfuehrung:
secnetix Verwaltungsgesellsch. mbH, Handelsregister: Registergericht Mün-
chen, HRB 125758, Geschäftsführer: Maik Bachmann, Olaf Erb, Ralf Gebhart
Any opinions expressed in this message are personal to the author and may
not necessarily reflect the opinions of secnetix GmbH & Co KG in any way.
FreeBSD-Dienstleistungen, -Produkte und mehr: http://www.secnetix.de/bsd

"It combines all the worst aspects of C and Lisp: a billion different
sublanguages in one monolithic executable. It combines the power of C
with the readability of PostScript."
-- Jamie Zawinski, when asked: "What's wrong with perl?"

Randy Bush

unread,
Feb 13, 2007, 4:13:08 AM2/13/07
to Oliver Fromme, freeb...@freebsd.org, aron...@yahoo.com
this has been a wonderfully well-timed thread as i am about
to hack a 4tb array tomorrow afternoon. the normal spindle
is separate and partitioned to death and newfsed using the
defaults. with 2gb of ram, i figure 6gb swap just in case
two userland hogs are running at once, e.g. some hog while
background fsck is running.

the 4tb will be used as a dump/restore target only. so i
am thinking few files, relatively big ones, little i/o and
more write than read.

so my current plan is

newfs -b 16384 -f 2048 -i 262144

i would crank up even further, but these are the largest
numbers mentioned in tuning(7).

i will leave -m alone for now.

does this seem reasonable?

thank you all for this thread.

randy

Oliver Fromme

unread,
Feb 13, 2007, 4:27:11 AM2/13/07
to freeb...@freebsd.org, ra...@psg.com, nes...@yahoo.com
Randy Bush wrote:
> this has been a wonderfully well-timed thread as i am about
> to hack a 4tb array tomorrow afternoon. the normal spindle
> is separate and partitioned to death and newfsed using the
> defaults. with 2gb of ram, i figure 6gb swap just in case
> two userland hogs are running at once, e.g. some hog while
> background fsck is running.

A bit careful here ... Background fsck had some issues,
especially when the machine crashes or is otherwise reset
while the background fsck is still running. It resulted
in corruption that could not be repaired by fsck anymore.
I don't know if all of those issues have been resolved in
RELENG_6, but personally I always disable background fsck
on all of my machines, just to be safe.

Also note that background fsck will take longer than regular
fsck, and it puts more stress on the disk, depending on
what kind of applications run at the same time.

> the 4tb will be used as a dump/restore target only. so i
> am thinking few files, relatively big ones, little i/o and
> more write than read.
>
> so my current plan is
>
> newfs -b 16384 -f 2048 -i 262144
>
> i would crank up even further, but these are the largest
> numbers mentioned in tuning(7).
>
> i will leave -m alone for now.
>
> does this seem reasonable?

Yes, I think so. I would probably use the very same numbers
in that case. I've also heard of people using 1 MB per
inode (-i 1048576), but I haven't tried that high a number
myself. -i 262144 works fine.
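For reference, a first-order estimate of how many inodes those ratios
yield on a 4 TB file system (newfs actually rounds per cylinder group,
so the real numbers will differ slightly):

```python
# First-order estimate of inode counts from newfs -i ratios
# on a 4 TB file system (newfs rounds per cylinder group).
fs_bytes = 4 * 1024**4   # 4 TB

for ratio in (262144, 1048576):   # the two values discussed above
    inodes = fs_bytes // ratio
    print(f"-i {ratio:>7}: ~{inodes:,} inodes")
```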

In a thread some time ago, Matt Dillon commented on very
high -i numbers:

Matt Dillon wrote:
> [...]
> :> newsfeed-inn# newfs -i 67108864 /dev/twed0d
> :> [stuff deleted]
> :> 1048576032, 1048641568, 1048707104, 1048772640, 1048838176, 1048903712,
> :> 1048969248, 1049034784, 1049100320, 1049165856, 1049231392, 1049296928,
> :> 1049362464, 1049428000, 1049493536, 1049559072, 1049624608, 1049690144,
> :> 1049755680, 1049821216, 1049886752, 1049952288, 1050017824, 1050083360,
> :> 1050148896, 1050214432, 1050279968, 1050345504, 1050411040, 1050476576,
> :> 1050542112, 1050607648, 1050673184, 1050738720, 1050804256, 1050869792,
> :> 1050935328
> :> fsinit: inode value out of range (2).
> :>
> :> Tried larger -i parameters, same thing.
> :>
> :> Can't newfs figure this out before it gets to this point that something
> :> isn't going to work?
> :>
> :> I'll try some different block/frag sizes, see if it helps.
>
> Specifying a byte-to-inode ratio that large is kind of silly. I
> usually use something like -i 262144 -b 16384 -f 2048 on my larger
> filesystems. A couple of million would also probably work, but beyond
> that you aren't really saving anything, not even fsck time. I'm not
> surprised that 67 million doesn't work.
>
> -Matt

Best regards
Oliver

--
Oliver Fromme, secnetix GmbH & Co. KG, Marktplatz 29, 85567 Grafing b. M.
Handelsregister: Registergericht Muenchen, HRA 74606, Geschäftsfuehrung:
secnetix Verwaltungsgesellsch. mbH, Handelsregister: Registergericht Mün-
chen, HRB 125758, Geschäftsführer: Maik Bachmann, Olaf Erb, Ralf Gebhart
Any opinions expressed in this message are personal to the author and may
not necessarily reflect the opinions of secnetix GmbH & Co KG in any way.
FreeBSD-Dienstleistungen, -Produkte und mehr: http://www.secnetix.de/bsd

"[...] one observation we can make here is that Python makes
an excellent pseudocoding language, with the wonderful attribute
that it can actually be executed." -- Bruce Eckel

John Kozubik

unread,
Feb 13, 2007, 12:49:42 PM2/13/07
to freeb...@freebsd.org

Friends,


On Tue, 13 Feb 2007, Oliver Fromme wrote:

> Randy Bush wrote:
> > this has been a wonderfully well-timed thread as i am about
> > to hack a 4tb array tomorrow afternoon. the normal spindle
> > is separate and partitioned to death and newfsed using the
> > defaults. with 2gb of ram, i figure 6gb swap just in case
> > two userland hogs are running at once, e.g. some hog while
> > background fsck is running.
>
> A bit careful here ... Background fsck had some issues,
> especially when the machine crashed or is otherwise reset
> while the background fsck is still running. It resulted
> in corruption that could not be repaired by fsck anymore.
> I don't know if all of those issues have been resolved in
> RELENG_6, but personally I always disable background fsck
> on all of my machines, just to be safe.


Also remember that filling a filesystem to capacity _while_ it is being
snapshotted will lock your system up[1]. I suppose some interesting crash
loops could arise from this bug on a near full filesystem that someone is
unlucky enough to background fsck.

I think that FreeBSD needs to address the default implementation of
background fsck in general. UFS2 snapshots are dangerous and unstable,
and have been since their introduction in 5.x [2].

Oliver and I and everyone else here knows the dangers of UFS2 snapshots
and background fsck, and it's very telling that Oliver (like myself)
refuses to use them. I won't touch either of them, despite overwhelming
financial incentive to implement them [3].

But how many innocent sysadmins and less well informed unix engineers in
the world are loading up FreeBSD because of a perceived history of safety
and stability and putting very important data and services on systems,
which _by default_ have a dangerous ticking time bomb on them ? Are these
people supposed to fall out of the womb knowing that UFS2 snapshots are
unstable and dangerous, and that _4 years later_ they still aren't safe ?

Until well-informed members of this list feel safe and secure with
snapshots and background fsck in general use, I think background fsck
should be disabled by default.


John Kozubik - jo...@kozubik.com - http://www.kozubik.com


[1] http://lists.freebsd.org/pipermail/freebsd-bugs/2006-January/016703.html

[2] http://lists.freebsd.org/pipermail/freebsd-bugs/2004-July/007574.html

[3] http://www.rsync.net

Oliver Fromme

unread,
Feb 13, 2007, 1:20:56 PM2/13/07
to freeb...@freebsd.org, jo...@kozubik.com
John Kozubik wrote:
> Oliver Fromme wrote:
> > Randy Bush wrote:
> > > this has been a wonderfully well-timed thread as i am about
> > > to hack a 4tb array tomorrow afternoon. the normal spindle
> > > is separate and partitioned to death and newfsed using the
> > > defaults. with 2gb of ram, i figure 6gb swap just in case
> > > two userland hogs are running at once, e.g. some hog while
> > > background fsck is running.
> >
> > A bit careful here ... Background fsck had some issues,
> > especially when the machine crashed or is otherwise reset
> > while the background fsck is still running. It resulted
> > in corruption that could not be repaired by fsck anymore.
> > I don't know if all of those issues have been resolved in
> > RELENG_6, but personally I always disable background fsck
> > on all of my machines, just to be safe.
>
> [...]

> UFS2 snapshots are dangerous and unstable,
> and have been since their introduction in 5.x [2].

That's not what I wrote. I wrote that they _had_ issues,
and that I do not know if they have been fixed. I don't
recall any reports of problems recently (i.e. in the past
few months), and there are no open PRs that seem to relate
to the current code, so those issues may very well have
been fixed. It's just my personal paranoia that lets me
disable bg fsck on my machines (and I don't really need
bg fsck anyway).

You have to be very careful with what you claim, or people
might accuse you of spreading FUD.

Best regards
Oliver

--
Oliver Fromme, secnetix GmbH & Co. KG, Marktplatz 29, 85567 Grafing b. M.
Handelsregister: Registergericht Muenchen, HRA 74606, Geschäftsfuehrung:
secnetix Verwaltungsgesellsch. mbH, Handelsregister: Registergericht Mün-
chen, HRB 125758, Geschäftsführer: Maik Bachmann, Olaf Erb, Ralf Gebhart
Any opinions expressed in this message are personal to the author and may
not necessarily reflect the opinions of secnetix GmbH & Co KG in any way.
FreeBSD-Dienstleistungen, -Produkte und mehr: http://www.secnetix.de/bsd

"UNIX was not designed to stop you from doing stupid things,
because that would also stop you from doing clever things."
-- Doug Gwyn

Randy Bush

unread,
Feb 13, 2007, 1:34:25 PM2/13/07
to John Kozubik, freeb...@freebsd.org
> Also remember that filling a filesystem to capacity _while_ it is
> being snapshotted will lock your system up[1]. I suppose some
> interesting crash loops could arise from this bug on a near full
> filesystem that someone is unlucky enough to background fsck.
>
> I think that FreeBSD needs to address the default implementation
> of background fsck in general. UFS2 snapshots are dangerous and
> unstable, and have been since their introduction in 5.x [2].

first, this smells of fud.

second, this is not a problem for me. as the 4tb drive is the
target of everyone else's dump/restore, i do not intend to snapshot
it and dump it live.

randy

John Kozubik

unread,
Feb 13, 2007, 2:01:27 PM2/13/07
to freeb...@freebsd.org

On Tue, 13 Feb 2007, Oliver Fromme wrote:

> > > A bit careful here ... Background fsck had some issues,
> > > especially when the machine crashed or is otherwise reset
> > > while the background fsck is still running. It resulted
> > > in corruption that could not be repaired by fsck anymore.
> > > I don't know if all of those issues have been resolved in
> > > RELENG_6, but personally I always disable background fsck
> > > on all of my machines, just to be safe.
> >
> > [...]
> > UFS2 snapshots are dangerous and unstable,
> > and have been since their introduction in 5.x [2].
>
> That's not what I wrote. I wrote that they _had_ issues,
> and that I do not know if they have been fixed. I don't
> recall any reports of problems recently (i.e. in the past
> few months), and there are no open PRs that seem to relate
> to the current code, so those issues may very well have
> been fixed. It's just my personal paranoia that lets me
> disable bg fsck on my machines (and I don't really need
> bg fsck anyway).


Fair enough. For your information, they are still dangerous and
unstable[1][2][3]. Your initial assessment is still valid today,
unfortunately. FWIW, [1] is open and relates to the current code.

It (bg_fsck and UFS2 snapshots) has gotten better over time - but it is
still not something that I feel is fair to enable by default, as if it
were rock solid, and force it onto unsuspecting end users who are not as
well informed as you and I are.

[3] [2, above] has been fixed, but large quantity inode movements keep
coming back to haunt snapshots every other release or so...

Eric Anderson

unread,
Feb 13, 2007, 2:47:38 PM2/13/07
to John Kozubik, freeb...@freebsd.org

Uhh, aren't those threads below at *least* a year old, or am I
misreading it? If so - then I think you in fact need to become more
informed, since massive UFS updates have been done in the past 6 months.
If you have pointers to more recent issues, please post them..

Eric

> [1] http://lists.freebsd.org/pipermail/freebsd-bugs/2006-January/016703.html
> [2] http://lists.freebsd.org/pipermail/freebsd-bugs/2004-July/007574.html
> [3] [2, above] has been fixed, but large quantity inode movements keep
> coming back to haunt snapshots every other release or so...

John Kozubik

unread,
Feb 13, 2007, 3:03:54 PM2/13/07
to Eric Anderson, freeb...@freebsd.org

On Tue, 13 Feb 2007, Eric Anderson wrote:

> > Fair enough. For your information, they are still dangerous and
> > unstable[1][2][3]. Your initial assessment is still valid today,
> > unfortunately. FWIW, [1] is open and relates to the current code.
> >
> > It (bg_fsck and UFS2 snapshots) has gotten better over time - but it is
> > still not something that I feel is fair to enable by default, as if it
> > were rock solid, and force it onto unsuspecting end users who are not as
> > well informed as you and I are.
>
> Uhh, aren't those threads below at *least* a year old, or am I
> misreading it? If so - then I think you in fact need to become more
> informed, since massive UFS updates have been done in the past 6 months.
> If you have pointers to more recent issues, please post them..


[1] Is from January 2006, and is currently acknowledged as an existing
problem that _is not_ fixed in 6.2. Apparently there is some pretty
heavy lifting that needs to be done to fix this "fill disk while
snapshotting" problem. It is an open, current problem.

[2], as I state below, has been fixed, but I keep re-demonstrating it
every other release or so. It has been my observation that high volume
inode movement on snapshotted UFS2 filesystems keeps popping up as a
problem.


I think you misunderstand my point in all of this. None of it affects me
at all - I keep abreast of freebsd-fs, I test things, and I, like many
others, simply don't use these features. The end.

My point is not to complain about the current state of snapshots and
bg_fsck.

My point is that the average user is not active on these lists and should
not be subject to an _enabled by default_ feature set that is dangerous.
If they want to use snapshots and bg_fsck, then by all means - but have
them turn it on themselves with some warning as to the ramifications of
doing so.


> > [1] http://lists.freebsd.org/pipermail/freebsd-bugs/2006-January/016703.html
> > [2] http://lists.freebsd.org/pipermail/freebsd-bugs/2004-July/007574.html
> > [3] [2, above] has been fixed, but large quantity inode movements keep
> > coming back to haunt snapshots every other release or so...

Oliver Fromme

unread,
Feb 13, 2007, 3:23:54 PM2/13/07
to freeb...@freebsd.org, jo...@kozubik.com
John Kozubik wrote:
> Oliver Fromme wrote:
> > That's not what I wrote. I wrote that they _had_ issues,
> > and that I do not know if they have been fixed. I don't
> > recall any reports of problems recently (i.e. in the past
> > few months), and there are no open PRs that seem to relate
> > to the current code, so those issues may very well have
> > been fixed. It's just my personal paranoia that lets me
> > disable bg fsck on my machines (and I don't really need
> > bg fsck anyway).
>
> Fair enough. For your information, they are still dangerous and
> unstable[1][2][3].

Well, let's see ... [1] is from January 2006, applies to
6.0-RELEASE and has zero follow-ups. [2] is from July
2004 and was closed in May 2005 because it wasn't
reproducible with the new compiler and no more feedback
from the submitter was provided. [3] doesn't link to
any PR or other information.

> Your initial assessment is still valid today,
> unfortunately. FWIW, [1] is open and relates to the current code.

It says 6.0-RELEASE. FreeBSD 6 might have been "-current"
at that time, but it's hardly current today. :-)

> [1] http://lists.freebsd.org/pipermail/freebsd-bugs/2006-January/016703.html
> [2] http://lists.freebsd.org/pipermail/freebsd-bugs/2004-July/007574.html
> [3] [2, above] has been fixed, but large quantity inode movements keep
> coming back to haunt snapshots every other release or so...

In that case a new PR should be opened, I think.

Best regards
Oliver

--
Oliver Fromme, secnetix GmbH & Co. KG, Marktplatz 29, 85567 Grafing b. M.
Handelsregister: Registergericht Muenchen, HRA 74606, Geschäftsfuehrung:
secnetix Verwaltungsgesellsch. mbH, Handelsregister: Registergericht Mün-
chen, HRB 125758, Geschäftsführer: Maik Bachmann, Olaf Erb, Ralf Gebhart
Any opinions expressed in this message are personal to the author and may
not necessarily reflect the opinions of secnetix GmbH & Co KG in any way.
FreeBSD-Dienstleistungen, -Produkte und mehr: http://www.secnetix.de/bsd

"I learned Java 3 years before Python. It was my language of
choice. It took me two weekends with Python before I was more
productive with it than with Java." -- Anthony Roberts

Randy Bush

unread,
Feb 14, 2007, 1:48:45 AM2/14/07
to Oliver Fromme, freeb...@freebsd.org
4TB

newfs -b 16384 -f 2048 -i 262144

# time fsck -y /dev/da1
** /dev/da1
** Last Mounted on /mnt
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
2 files, 2 used, 2173822273 free (17 frags, 271727782 blocks, 0.0% fragmentation)

***** FILE SYSTEM MARKED CLEAN *****

real 8m58.571s
user 0m31.769s
sys 0m1.246s

randy

Eric Anderson

unread,
Feb 14, 2007, 7:59:51 AM2/14/07
to Randy Bush, freeb...@freebsd.org, Oliver Fromme

Perfect.. Now, if you could only leave it empty, your fscks would take
no time at all!! :)


Eric

Randy Bush

unread,
Feb 14, 2007, 12:22:26 PM2/14/07
to Eric Anderson, freeb...@freebsd.org, Oliver Fromme
> On 02/14/07 00:48, Randy Bush wrote:
>> 4TB
>> newfs -b 16384 -f 2048 -i 262144
>>
>> # time fsck -y /dev/da1
>> ** /dev/da1
>> ** Last Mounted on /mnt
>> ** Phase 1 - Check Blocks and Sizes
>> ** Phase 2 - Check Pathnames
>> ** Phase 3 - Check Connectivity
>> ** Phase 4 - Check Reference Counts
>> ** Phase 5 - Check Cyl groups
>> 2 files, 2 used, 2173822273 free (17 frags, 271727782 blocks, 0.0% fragmentation)
>>
>> ***** FILE SYSTEM MARKED CLEAN *****
>>
>> real 8m58.571s
>> user 0m31.769s
>> sys 0m1.246s
>
> Perfect.. Now, if you could only leave it empty, your fscks would take
> no time at all!! :)

sorry. forgot to remention that this is a backup store for a dozen
systems' rdump/restores. so it will have a few hundred large files.
so we do not expect fsck time to go radically high.

randy

Ivan Voras

unread,
Feb 14, 2007, 2:35:01 PM2/14/07
to freeb...@freebsd.org
Eric Anderson wrote:

>
> Perfect.. Now, if you could only leave it empty, your fscks would take
> no time at all!! :)

Yes, this leads us to the age old discussion about users, and how the
world would be a better place without them :) Nice, clean terabytes of
storage, sitting there quietly...
