I may be converting a host to ext4 and was curious, is 0.90 still the only
superblock version for mdadm/raid-1 that you can boot from without having
to create an initrd/etc?
Are there any benefits to using a superblock > 0.90 for a raid-1 boot
volume < 2TB?
Justin.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majo...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
You need the superblock at the end of the partition: if you read the
manual, that is clearly either version 0.90 OR 1.0 (NOT 1.1 and also
NOT 1.2; those use the same superblock layout but different
locations).
FYI: http://bugs.debian.org/492897
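For reference, the version-to-location rules above can be sketched as a small lookup (offsets in 512-byte sectors, following the layout described in md(4) and drivers/md/md.c; a sketch, not authoritative):

```python
# Where each md superblock version lives on a member device.
# dev_sectors is the device size in 512-byte sectors.
def sb_offset(version, dev_sectors):
    if version == "0.90":
        # last full 64 KiB-aligned block: round the size down to a
        # 64 KiB boundary (128 sectors), then step back one block
        return (dev_sectors & ~127) - 128
    if version == "1.0":
        # near the end: 8 KiB back, rounded down to 4 KiB alignment
        return (dev_sectors - 16) & ~7
    if version == "1.1":
        return 0    # the very start of the device
    if version == "1.2":
        return 8    # 4 KiB from the start
    raise ValueError(version)

print(sb_offset("0.90", 2048000))  # 2047872
print(sb_offset("1.0", 2048000))   # 2047984
```

Both 0.90 and 1.0 land near the end of the device, which is why either leaves the boot sector alone; 1.1 and 1.2 sit at or near the start.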
--
martin | http://madduck.net/ | http://two.sentenc.es/
it may look like i'm just sitting here doing nothing.
but i'm really actively waiting
for all my problems to go away.
spamtraps: madduc...@madduck.net
0.9 has the *serious* problem that it is hard to distinguish a whole-volume
RAID device from a RAID partition at the end of the volume.
However, apparently mdadm recently switched to a 1.1 default. I
strongly urge Neil to change that to either 1.0 and 1.2, as I have
started to get complaints from users that they have made RAID volumes
with newer mdadm which apparently default to 1.1, and then want to boot
from them (without playing MBR games like Grub does.) I have to tell
them that they have to regenerate their disks -- the superblock occupies
the boot sector and there is nothing I can do about it. It's the same
pathology XFS has.
-hpa
--
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel. I don't speak on their behalf.
My original question was does the newer superblock do anything special or
offer new features *BESIDES* the quicker resync?
Justin.
--
the older superblocks have limits on the number of devices that can be
part of the raid set.
David Lang
The 1.1 and 1.2 formats ALSO play more nicely with stacking partition
contents. LVM, filesystems, and partition info all begin at the start
of a block device; putting the md labels there too makes it obvious
what order to unpack the structures in.
0.90 has a very bad problem, which is that it is hard to distinguish
between a RAID partition at the end of a volume and a full RAID device.
This is because 0.90 doesn't actually tell you the start of the device.
Then, of course, there are a lot of limitations on size, number of
devices, and so on in 0.90.
-hpa
--
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel. I don't speak on their behalf.
--
but it is the only format supporting autodetection.
So - when will autodetection be introduced with 1.X? And if not, why not?
All I found was 'autodetection might be troublesome' and nothing else.
But dealing with initrds is troublesome too. Pure evil even.
Glück Auf,
Volker
I remember hearing that 1.x had /no/ plans for kernel level
auto-detection ever. That can be accomplished in early-userspace
leaving the code in the kernel much less complex, and therefore far
more reliable.
In other words, 'auto-detection' for 1.x format devices is using an
initrd/initramfs.
> On Sat, Feb 13, 2010 at 5:51 PM, Volker Armin Hemmann
> <volke...@googlemail.com> wrote:
>>> 0.90 has a very bad problem, which is that it is hard to distinguish
>>> between a RAID partition at the end of volume and a full RAID device.
>>> This is because 0.90 doesn't actually tell you the start of the device.
>>>
>>> Then, of course, there are a lot of limitations on size, number of
>>> devices, and so on in 0.90.
>>
>> but it is the only format supporting autodetection.
>>
>> So - when will autodetection be introduced with 1.X? And if not, why not?
>>
>> All I found was 'autodetection might be troublesome' and nothing else.
>> But dealing with initrds is troublesome too. Pure evil even.
>>
>> Glück Auf,
>> Volker
>>
>
> I remember hearing that 1.x had /no/ plans for kernel level
> auto-detection ever. That can be accomplished in early-userspace
> leaving the code in the kernel much less complex, and therefore far
> more reliable.
>
> In other words, 'auto-detection' for 1.x format devices is using an
> initrd/initramfs.
hmm, I've used 1.x formats without an initrd/initramfs (and without any
config file on the server) and have had no problem with them being
discovered. I haven't tried to use one for a boot/root device, so that may
be the difference.
David Lang
Yes, that is the difference. You must have a more traditional simple
block device and filesystem drivers compiled in. You have no need for
extra drivers or higher level device detection and evaluation (with
user-set policies to determine operation). Anything past root-fs
mount can happen in normal user-space before logins are enabled.
>
> In other words, 'auto-detection' for 1.x format devices is using an
> initrd/initramfs.
which makes 1.x format useless for everybody who does not want to deal with
initrd/initramfs.
Glück Auf,
Volker
--
True, but afaik every distro uses an initrd/initramfs and bundles tools
making it easy to manage and customise them, so what's the problem?
Cheers,
John.
Yes, it is far more reliable kernel side, if only because it doesn't do
anything.
But the userspace reliability is _not_ good. initrds are a source of
problems the moment things start to go wrong, and that's when they are not
the problem themselves.
And the end result is a system that needs manual intervention to get its
root filesystem back.
In my experience, every time we moved critical codepaths to userspace, we
ended up decreasing the *overall* system reliability.
--
"One disk to rule them all, One disk to find them. One disk to bring
them all and in the darkness grind them. In the Land of Redmond
where the shadows lie." -- The Silicon Valley Tarot
Henrique Holschuh
Maybe you'd like a simple, easy to customize initramfs creator.
That's exactly what I was aiming for when I made AEUIO
https://sourceforge.net/projects/aeuio There are some things that
could use improvement, but if your system can boot without loading
modules it should be more than sufficient even across kernel versions.
In Fedora 12, for example, Dracut tries to make the distinction between
whole RAID device and a partition device, and utterly fails -- often
resulting in data loss.
With a pointer to the beginning this would have been a trivial thing to
detect.
IMO it would make sense to support autoassemble for 1.0 superblocks, and
make them the default. The purpose would be to get everyone off 0.9.
However, *any* default is better than 1.1.
-hpa
--
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel. I don't speak on their behalf.
--
If the distro doesn't use it (as its default initramfs creator, even), there
is a lot more chance of breakage. Less testing, and all that...
And if I am deploying a specific kernel in a server, you better believe it
is important enough that all due care will be taken so that it won't need an
initrd to mount the root filesystem to begin with ;-)
--
"One disk to rule them all, One disk to find them. One disk to bring
them all and in the darkness grind them. In the Land of Redmond
where the shadows lie." -- The Silicon Valley Tarot
Henrique Holschuh
> 1.x autodetection worked great for me in initramfs. Basically you
> only need /etc/mdadm/mdadm.conf copied to initramfs (via
> update-initramfs),
There is no autodetection with 1.1. Once you have mdadm.conf you have
pretty hard rules about what to look for and how to assemble it - ie.
there is not much left to "auto" detect. Real autodetection would mean
there is _no_ such information available, and you figure out everything
by just looking at the devices you find.
> initramfs procedure.
> Also consider 1.x allows you to choose which arrays are autoassembled
> (hostname written in the array name equal to hostname in the machine
> or specified in mdadm.conf): this is more precise than 0.9 which
> autoassembles all, I think.
And it also causes much more pain when you install machines on an internal
network where it gets a random name (in fact all new machines get the
same temporary name), then it is moved to its real location and
reconfigured with its real name. And you wonder why your arrays aren't
assembled any more...
Gabor
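For readers following along, the hostname-based filtering being discussed lives in mdadm.conf; a minimal sketch (the hostname and UUID below are invented placeholders):

```
# /etc/mdadm/mdadm.conf -- illustrative fragment only
# Arrays whose superblock "home host" matches are assembled at boot;
# others are ignored unless listed explicitly.
HOMEHOST myserver
ARRAY /dev/md0 metadata=1.0 UUID=0f1e2d3c:4b5a6978:87a9cbed:fedcba98
```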
Those don't require a reboot test to verify, and are far easier to rollback.
Also, they can (and SHOULD) be done on testbeds. While the kind of screwup
where an initramfs decides to bite you hard, usually cannot (they tend to
happen when things already went horribly wrong).
> (note: I speak for Debian/Ubuntu, redhat's initramfs I think is more messy.)
> 1.x autodetection worked great for me in initramfs. Basically you
> only need /etc/mdadm/mdadm.conf copied to initramfs (via
> update-initramfs), the rest is done by Debian/Ubuntu standard
> initramfs procedure.
Yeah, cute. What happens when the initrd is not updated for whatever
reason? That is a new failure mode that doesn't exist with 0.9 and kernel
autorun.
It boils down to whether failure modes new to 1.x without autorun are more
likely to happen than the failure modes that are specific to 0.9 with
autorun.
IME, the 0.9 ones are less likely to happen, and I have been through quite a
few incidents involving boot problems. Experience told me that initrds are
far more prone to operator errors than the kernel autorun. Debian's
*stable* initramfs creators have not screwed up on me yet, but I am well
aware that they could.
> Also consider 1.x allows you to choose which arrays are autoassembled
> (hostname written in the array name equal to hostname in the machine
> or specified in mdadm.conf): this is more precise than 0.9 which
> autoassembles all, I think.
That can be either a good or bad thing depending on the situation, so I
would never use it to count for (or against) 1.x or 0.9.
--
"One disk to rule them all, One disk to find them. One disk to bring
them all and in the darkness grind them. In the Land of Redmond
where the shadows lie." -- The Silicon Valley Tarot
Henrique Holschuh
Well, FWIW, I would happily use (and recommend) 1.0 with auto-assemble
(after verifying all the emergency repair toolset in use where I work has
been upgraded to support it) in distros where the bootloader has enough of a
clue to not bork on md-1.0 devices. Which should be most of the current
crop.
--
"One disk to rule them all, One disk to find them. One disk to bring
them all and in the darkness grind them. In the Land of Redmond
where the shadows lie." -- The Silicon Valley Tarot
Henrique Holschuh
> True, but afaik every distro uses an initrd/initramfs and bundles
> tools making it easy to manage and customise them, so what's the
> problem?
Distro-provided initramfs generators have a bad habit of assuming you
patch/build your kernel like the distro does. If you want to use a
vanilla kernel with different things built in/built as modules/not built
at all, then you can get nasty surprises, and debugging can be rather
painful.
My current view is if you use a distro kernel, then you should also use
an initramfs (in fact you do not have a choice). But if you want to
build your own kernel, then you should get rid of the initramfs.
Gabor
Cheers,
Rudy
On Mon, 15 Feb 2010, Rudy Zijlstra wrote:
> H. Peter Anvin wrote:
>> In Fedora 12, for example, Dracut tries to make the distinction between
>> whole RAID device and a partition device, and utterly fails -- often
>> resulting in data loss.
>>
> i do not use Fedora/redhat and do not intend to ever try them again... still,
> the point is valid
>> With a pointer to the beginning this would have been a trivial thing to
>> detect.
>>
>> IMO it would make sense to support autoassemble for 1.0 superblocks, and
>> making them the default. The purpose would be to get everyone off 0.9.
>> However, *any* default is better than 1.1.
>> -hpa
> As long as autodetect is supported in the kernel, i am willing to upgrade to
> 1.0 superblocks. BUT i need the autodetect in the kernel, as i refuse to use
> initrd for production servers.
> Cheers,
> Rudy
I also have to agree with Rudy in this matter .
Tia , JimL
--
+------------------------------------------------------------------+
| James W. Laferriere | System Techniques | Give me VMS |
| Network&System Engineer | 3237 Holden Road | Give me Linux |
| bab...@baby-dragons.com | Fairbanks, AK. 99709 | only on AXP |
+------------------------------------------------------------------+
Which, coincidentally, is where we're heading with incremental
assembly. Check the Debian experimental package if you want to try.
--
martin | http://madduck.net/ | http://two.sentenc.es/
"we should have a volleyballocracy.
we elect a six-pack of presidents.
each one serves until they screw up,
at which point they rotate."
-- dennis miller
spamtraps: madduc...@madduck.net
When mdadm defaults to 1.1 for a RAID1 it prints a warning to the effect that
the array might not be suitable to store '/boot', and requests confirmation.
So I assume that the people who are having this problem either do not read,
or are using some partitioning tool that runs mdadm under the hood using
"--run" to avoid the need for confirmation. It would be nice to confirm if
that was the case, and find out what tool is being used.
If an array is not being used for /boot (or /) then I still think that 1.1 is
the better choice as it removes the possibility for confusion over partition
tables.
I guess I could try defaulting to 1.2 in a partition, and 1.1 on a
whole-device. That might be a suitable compromise.
How do people cope with XFS??
NeilBrown
> Hi,
>
> I may be converting a host to ext4 and was curious, is 0.90 still the only
> superblock version for mdadm/raid-1 that you can boot from without having
> to create an initrd/etc?
>
> Are there any benefits to using a superblock > 0.90 for a raid-1 boot
> volume < 2TB?
The only noticeable differences that I can think of are:
1/ If you reboot during recovery of a spare, then 0.90 will restart the
recovery at the start, while 1.x will restart from where it was up to.
2/ The /sys/class/block/mdXX/md/dev-YYY/errors counter is reset on each
re-assembly with 0.90, but is preserved across stop/start with 1.x
3/ If your partition starts on a multiple of 64K from the start of the
device and is the last partition and contains 0.90 metadata, then
mdadm can get confused by it.
4/ If you move the devices to a host with a different arch and different
byte-ordering, then extra effort will be needed to see the array for
0.90, but not for 1.x
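Point 3 can be made concrete with a little arithmetic (sector numbers below are invented): when the last partition starts on a 64 KiB boundary, the 0.90 superblock inside the partition sits at exactly the sector where a whole-disk 0.90 array would keep its own.

```python
# 0.90 keeps its superblock in the last 64 KiB-aligned block of the
# device (offsets in 512-byte sectors; 64 KiB = 128 sectors).
def sb_offset_090(sectors):
    return (sectors & ~127) - 128

disk_size = 4096000       # invented whole-disk size
part_start = 1024000      # last partition starts on a 64 KiB boundary
part_size = disk_size - part_start

# Same absolute sector whether you scan the partition or the whole disk,
# so a scanner cannot tell which device the array member really is.
from_partition = part_start + sb_offset_090(part_size)
from_disk = sb_offset_090(disk_size)
print(from_partition == from_disk)  # True
```

Since the 0.90 superblock stores no pointer to the start of the device, nothing on disk disambiguates the two cases.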
I suspect none of these is a big issue.
It is likely that future extensions will only be supported on 1.x metadata.
For example I hope to add support for storing a bad-block list, so that a
read error during recovery will only be fatal for that block, not the whole
recovery process. This is unlikely ever to be supported on 0.90. However
it may not be possible to hot-enable it on 1.x either, depending on how much
space has been reserved for extra metadata, so there is no guarantee that
using 1.x now makes you future-proof.
And yes, 0.90 is still the only superblock version that supports in-kernel
autodetect, and I have no intention of adding in-kernel autodetect for any
other version.
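As background for readers: the in-kernel autodetect path only considers partitions whose MBR type is 0xfd ("Linux raid autodetect") carrying 0.90 superblocks, so a setup relying on it looks roughly like this in a partition listing (device names and sizes invented):

```
Device     Boot  Start     End  Sectors  Id  Type
/dev/sda1  *      2048  206847   204800  fd  Linux raid autodetect
/dev/sdb1  *      2048  206847   204800  fd  Linux raid autodetect
```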
My guess is that they are using the latter. However, some of it is
probably also a matter of not planning ahead, or not understanding the
error message. I'll forward one email privately (don't want to forward
a private email to a list.)
> If an array is not being used for /boot (or /) then I still think that 1.1 is
> the better choice as it removes the possibility for confusion over partition
> tables.
>
> I guess I could try defaulting to 1.2 in a partition, and 1.1 on a
> whole-device. That might be a suitable compromise.
In some ways, 1.1 is even more toxic on a whole-device, since that means
that it is physically impossible to boot off of it -- the hardware will
only ever read the first sector (MBR).
> How do people cope with XFS??
There are three options:
a) either don't boot from it (separate /boot);
b) use a bootloader which installs in the MBR and
hopefully-unpartitioned disk areas (e.g. Grub);
c) use a nonstandard custom MBR.
Neither (b) nor (c), of course, allows for chainloading from another OS
install, and thus both are bad for interoperability.
-hpa
--
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel. I don't speak on their behalf.
--
> On 02/15/2010 04:27 PM, Neil Brown wrote:
>
> There are three options:
>
> a) either don't boot from it (separate /boot);
> b) use a bootloader which installs in the MBR and
> hopefully-unpartitioned disk areas (e.g. Grub);
> c) use a nonstandard custom MBR.
>
> Neither (b) nor (c), of course, allow for chainloading from another OS
> install and thus are bad for interoperability.
I have had no problems with XFS partitions and lilo as the bootloader.
I've been doing this for a couple of years now without realizing that
there is supposed to be a problem.
David Lang
For lilo, at least, this is not so:
http://www.sfr-fresh.com/linux/misc/lilo-22.8.src.tar.gz:a/lilo-22.8/raid.c
Line 145:
if (ioctl(md_fd,RAID_VERSION,&md_version_info) < 0)
Line 155:
if (ioctl(md_fd,GET_ARRAY_INFO,&md_array_info) < 0)
Lines 160-168:
    if ((md_array_info.major_version != md_version_info.major) &&
        (md_array_info.minor_version != md_version_info.minor)) {
        die("Inconsistent Raid version information on %s (RV=%d.%d GAI=%d.%d)",
            boot,
            (int)md_version_info.major,
            (int)md_version_info.minor,
            (int)md_array_info.major_version,
            (int)md_array_info.minor_version);
    }
It's 0.90 or nothing as md_version_info gives 0.90 due to:
/linux/drivers/md/md.c:
Line 4599:
ver.major = MD_MAJOR_VERSION;
ver.minor = MD_MINOR_VERSION;
linux/include/linux/raid/md_u.h:
Line 23:
#define MD_MAJOR_VERSION 0
#define MD_MINOR_VERSION 90
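The effect of that check can be modelled in a few lines (a sketch of the logic quoted above, not lilo itself). Note the `&&` in raid.c: an array is rejected only when major *and* minor both disagree with the 0.90 that RAID_VERSION always reports, which still rejects every 1.x format in practice:

```python
# Model of the version check in lilo-22.8 raid.c (lines 160-168 above).
# Per the kernel excerpt, RAID_VERSION always reports 0.90.
def lilo_accepts(array_major, array_minor, kernel=(0, 90)):
    kmaj, kmin = kernel
    # raid.c uses '&&': die only if BOTH fields differ
    return not (array_major != kmaj and array_minor != kmin)

print(lilo_accepts(0, 90))  # True  -- 0.90 passes
print(lilo_accepts(1, 0))   # False -- 1.0 rejected
print(lilo_accepts(1, 2))   # False -- 1.2 rejected
```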
I got bitten by this as I was testing different raid superblocks on a new
setup. Wound up hand-making my own initramfs, which was a pain (right pain
to debug). Would prefer not to have one tbh.
--
"A search of his car uncovered pornography, a homemade sex aid, women's
stockings and a Jack Russell terrier."
- http://www.news.com.au/story/0%2C27574%2C24675808-421%2C00.html
There isn't, if you use partitions. It could (would) go wrong if you
tried to put an XFS filesystem, or md RAID with a v1.1 superblock, on a
whole disc without a partition table *and* you tried to put a bootloader
on. I can't say it's ever occurred to me to do that, because I always
assumed that whatever I put in a partition used all of it, and I
couldn't expect to double-book the beginning of it and have it work.
Cheers,
John.
LILO can also be stuffed into the MBR (and then uses block pointers from
there). There is one more option that I didn't mention, which is to put
the bootloader on a separate partition, OS/2 style. Again, that breaks the
standard chainloading model.
-hpa
Cheers,
Rudy
Hi Neil,
Thanks for the response, this is exactly what I was looking for and
probably should be put in a FAQ.
Justin.
so assume you have an initrd and metadata 1.x without auto-assembly.
You make some changes to the raid and screw up something else. Next boot
nothing works, mostly because the mdadm.conf in your initrd is not correct.
You whip out your trusty usb stick with a rescue system - and you are stuck.
If autoassembly worked, you would have working md devices you could
mount to edit the files you have to. But you don't, and the mdadm.conf in the
initrd is outdated.
Sounds like 'you are screwed'.
Or you have that famous grub boot line to have root autoassembled but the
device names changed.
Yeah, sounds really great.
And that because ...? Is there any good reason not to have autoassembling in
the kernel?
Glück Auf
Volker
I'm not really sure what you're getting at here, I use grub in MBR and
then add chain loader stanzas to grub.conf for many things, usually an
alternate Linux release, or to have 32/64 of the same release handy for
testing, and always memtest from the boot menu. Even Win98SP2 on one
machine, since that works very poorly under KVM. (ask Avi if you care
why, something about what it does in real mode). In any case, I don't
see the chain loader issue, unless you mean to reboot out of some other
OS into Linux.
--
Bill Davidsen <davi...@tmr.com>
"We can't solve today's problems by using the same thinking we
used in creating them." - Einstein
You make this sound like some major big deal. Are you running your own
distribution? In most cases mkinitrd does the right thing when you "make
install" the kernel, and if you are doing something in the build so
complex that it needs options, you really should understand the options
and be sure you're doing what you want.
Generally this involves preloading a module or two, and if you need it
every time you probably should have built it in, anyway.
My opinion...
Given that 4k sector drives make that a lot more likely than it used to
be, I suspect some effort will be needed to address this sooner or later.
> 4/ If you move the devices to a host with a different arch and different
> byte-ordering, then extra effort will be needed to see the array for
> 0.90, but not for 1.x
>
> I suspect none of these is a big issue.
>
> It is likely that future extensions will only be supported on 1.x metadata.
> For example I hope to add support for storing a bad-block list, so that a
> read error during recovery will only be fatal for that block, not the whole
> recovery process. This is unlikely ever to be supported on 0.90. However
> it may not be possible to hot-enable it on 1.x either, depending on how much
> space has been reserved for extra metadata, so there is no guarantee that
> using 1.x now makes you future-proof.
>
> And yes, 0.90 is still the only superblock version that supports in-kernel
> autodetect, and I have no intention of adding in-kernel autodetect for any
> other version.
>
--
Bill Davidsen <davi...@tmr.com>
"We can't solve today's problems by using the same thinking we
used in creating them." - Einstein
--
No; mdadm can assemble arrays without needing a conf file (at least
arrays which have superblocks).
And if you have otherwise screwed something up with the RAID, no amount
of in-kernel autoassembly is going to help, in fact it's more likely to
get it wrong and make things worse; you need a command line and mdadm to
sort it out.
Cheers,
John.
I'd be more than happy to push my FAQ[0], possibly fused with my
"recipes", upstream and would welcome anyone who wanted to help out.
0. http://git.debian.org/?p=pkg-mdadm/mdadm.git;a=blob;f=debian/FAQ;hb=HEAD
1. http://git.debian.org/?p=pkg-mdadm/mdadm.git;a=blob;f=debian/README.recipes;hb=HEAD
--
.''`. martin f. krafft <madduck@d.o> Related projects:
: :' : proud Debian developer http://debiansystem.info
`. `'` http://people.debian.org/~madduck http://vcs-pkg.org
`- Debian - when you have better things to do than fixing systems
"literature always anticipates life.
it does not copy it, but moulds it to its purpose.
the nineteenth century, as we know it,
is largely an invention of balzac."
-- oscar wilde
I am running my own kernels - and of course everything that is needed to boot
and get the basic system up is built in. Why should I make the disk drivers
modules?
That does not make sense.
And the reason is simple: even when the system is completely fucked up, I want
a kernel that is able to boot until init=/bin/bb takes over.
Glück Auf
Volker
>>> In other words, 'auto-detection' for 1.x format devices is using an
>>> initrd/initramfs.
>>>
>>
>> which makes 1.x format useless for everybody who does not want to deal with
>> initrd/initramfs.
>>
>
> You make this sound like some major big deal. are you running your own
> distribution? In most cases mkinitrd does the right thing when you "make
> install" the kernel,
I don't know what "make install" will do, so I'll have to expect random
results.
I don't expect it to copy bzimage to /boot/linux-version-commentfrommymind,
point the "ln" or "lt" entry (depending on if I want to upgrade or to test
a new kernel) in lilo.conf to the new kernel and to run lilo -R ln.
I don't expect it to sftp the kernel from my build machine to my server, either.
I expect it to move ~/bin/umount (a wrapper around /bin/umount fusermount -u
and smbumount) to initrd:/bin/umount. It might also create an initrd with a
passwordless rescue mode. Or it will use a minimal shell, and in case of
trouble, I have to fight the shell, too. In short: I expect it to backstab me.
(As a bonus, you can't read about rdinit= if you encounter your first initrd-ed
system and init=/bin/sh does not work.)
> and if you are doing something in the build so
> complex that it needs options, you really should understand the options
> and be sure you're doing what you want.
What I do is the most simple thing you can do. No initrd no cry. That's why I
have to use the 0.9 format. This - and the fact that my distribution defaults
to using the 1.0 format - is what I discovered after upgrading my system last
time.
I agree that it makes little sense to make something a module when you
can't unload it anyway, but...
> And the reason is simple: even when the system is completely fucked up, I want
> a kernel that is able to boot until init=/bin/bb takes over.
I put a complete set of recovery tools into my initramfses so that when
the system is completely fucked up, I have a kernel that is able to boot
until rdinit=/bin/zsh (or /bin/bb, if you prefer) takes over.
This has the added advantage of working when the root filesystem cannot
be mounted at all: a scenario which does not seem too far-fetched when
the filesystem is located on a raid array.
--
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)
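For reference, the init=/rdinit= distinction mentioned above: init= names the program run after the real root is mounted, while rdinit= names the one run inside the initramfs itself. Illustrative kernel command lines (device names invented):

```
root=/dev/md0 ro          # normal boot; init runs after the root mount
rdinit=/bin/sh            # run a shell inside the initramfs instead
```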
and what do you do if you have to boot from a cd/usb stick and need to access
the raid?
Simple with auto assembling. Not so much without.
Glück Auf,
Volker
That's one of many uses, yes
Presumably the reason you don't have problems is because the partitions
you chainload aren't RAID partitions with 1.1 superblocks, or you're
specifying an explicit offset for your chainloads (Grub syntax allows that.)
Either which way, it's a good example of the usage model. Chainloading
is important for a lot of people.
-hpa
On Tue, 16 Feb 2010, Bill Davidsen wrote:
> Volker Armin Hemmann wrote:
>> On Sonntag 14 Februar 2010, you wrote:
>>> In other words, 'auto-detection' for 1.x format devices is using an
>>> initrd/initramfs.
>>
>> which makes 1.x format useless for everybody who does not want to deal with
>> initrd/initramfs.
>
> You make this sound like some major big deal. are you running your own
> distribution? In most cases mkinitrd does the right thing when you "make
> install" the kernel, and if you are doing something in the build so complex
> that it needs options, you really should understand the options and be sure
> you're doing what you want.
>
> Generally this involves preloading a module or two, and if you need it every
> time you probably should have built it in, anyway.
>
> My opinion...
My Opinion as well . That is one of the many reasons why I have my '/'
autoassemble . And due to this I am permanently stuck at the 0.90 version of the
raid table . No big shakes for that . But at sometime in the past there was a
discussion to have the 0.90 raid table be removed , NOW THAT SCARES THE HELL
OUT OF ME . So far Neil has not done so .
I am unaware of any record from Neil or other maintainer(s) of the
/md/ device tree saying that they will not remove the 0.90 table and the
autoassembly functions there . I'd very much like to hear a statement saying
there will not be a removal of the autoassembly functions for 0.90 raid table
from the kernel tree .
Tia , JimL
--
+------------------------------------------------------------------+
| James W. Laferriere | System Techniques | Give me VMS |
| Network&System Engineer | 3237 Holden Road | Give me Linux |
| bab...@baby-dragons.com | Fairbanks, AK. 99709 | only on AXP |
+------------------------------------------------------------------+
I will not be removing 0.90 or auto-assemble from the kernel in the
foreseeable future.
None the less, I recommend weaning yourself from your dependence on it.
initramfs is the future, embrace it.
NeilBrown
> I will not be removing 0.90 or auto-assemble from the kernel in the
> foreseeable future.
> None the less, I recommend weaning yourself from your dependence on it.
> initramfs is the future, embrace it.
>
> NeilBrown
a future that is worse than the present. For what reason?
Glück Auf,
Volker
What are people's reasons for pushback against initramfs? I've heard
lots of claims that "it's not trustworthy" and "it breaks", but in 7
years of running bootable software RAID boxes on weird architectures
(even running Debian unstable) I have only once or twice had initramfs
problems.
As a software capability, initramfs makes it possible to use
*anything* as a root filesystem, no matter what is necessary to set it
up. For example, I have seen somebody use DRBD (essentially network
RAID-1) as a root filesystem with a few custom hook scripts added to
the initramfs-tools configs. Other examples include using Sun ZFS as
a root fs via an initramfs FUSE daemon, a feat which even Solaris
could not accomplish at the time. Encrypted root filesystems also
require an initramfs to prompt for encryption keys and decrypt the
block device. Multipath block devices are another example.
You should also take a look at your distro installers. There is not a
single one made in the last several years which does not use an
initramfs to start networking or access the installation media. In
fact, of all the distro installers, I have had the most consistent
behavior regardless of system hardware from the ones which operate
entirely out of their initramfs.
From a reliability perspective, an initramfs is no more essential
than, say, /sbin/init or /boot/vmlinuz-2.6.33. Furthermore, all of
the modern initramfs generation tools automatically keep backup copies
exactly the same way that "make install" keeps backup copies of your
kernel images. The two times I've managed to hose my initramfs I was
able to simply edit my grub config to use a file called something like
"/boot/initramfs-2.6.33.bak" instead.
In fact, I have had several times where an initramfs made my boot
process *more* reliable. On one of my LVM JBOD systems, I was able to
pull a group of 3 SATA drives whose backplane had failed and drop them
all in USB enclosures to get the system back up and running in a half
an hour. With just straight partitions on the volumes I would have
been hunting around for 2 hours to figure out where all my partitions
had gone only to have the USB drives spin up in a different order
during the next reboot.
If you're really concerned about boot-process reliability, go ahead
and tell your initramfs tool to include a fully-featured busybox,
coreutils, bash, strace, gdb, and a half-dozen other developer tools.
You may wait an extra 20 seconds for your bootloader to load the damn
thing during boot, but you'll be able to track down that annoying
10-second hang in your /sbin/init program during config-file parsing.
I've built specialized embedded computers with stripped-down chipset
initialization code, a tiny Linux kernel and a special-purpose
initramfs burned into the flash. By using the fastboot patches and
disabling all the excess drivers, my kernel was fully operational
within the first half-second. It used the tools on the initramfs to
poke around on the hard disk as a bootloader, then kexec() to load the
operational kernel.
Counting up all the problems I've had with system boot... I've had an
order of magnitude more problems from somebody getting careless with
"rm", "dpkg --purge", etc than with initramfs deficiencies.
Cheers,
Kyle Moffett
For the power-user system manager, who manages all his servers and has
knowledgeable backup, initrd may indeed work as above.
I have to keep in mind that when there is a problem while i am
travelling (and that happens), there is no sysadmin present. Also, i am
supporting systems remotely where no-one has the knowledge to debug using
an initrd. In such cases, initrd is an additional step. And each
additional step is an additional source of mistakes.
1/ distro tools assume that the kernel being built will run on that
machine. For servers this is often not true. There are very valid
security reasons to exclude compilation capability from many servers.
2/ For most small shops, there is a need for RAID (disks are fallible,
the shop cannot do without its server), but the RAID should work without
being visible. If there is a problem with the RAID that causes
auto-assemble to break, it means i need to travel (>100KM) to
troubleshoot. The simpler the setup, the more i like it. This is also
why i almost always
use HW raid for the system partitions. The ones i use have userland
tools in Linux which warn on disk failure, ensure auto rebuild, etc...
Still, for large storage needs it is SW RAID over SATA.
3/ for my home systems, if i need remote support to get things
working again (i am often travelling for work), the added layer of
initrd is an added layer of possible mistakes.
Cheers,
Rudy
That's simply not true, at least not for Debian. If you actually use the
distro tools [1] the only assumptions are made at kernel *installation*
time, not at kernel build time.
I've been using initramfs-tools generated initrds for years without
problems, and that includes "root on LVM on LUKS encrypted partition"
and "root on LVM on RAID" setups.
Cheers,
FJP
[1] I.e. if you build and install the kernel as a .deb package using e.g.
the deb-pkg target or kernel-package.
The same initramfs can be used on a CD or USB stick. If you were
referring to using someone else's CD or USB stick, then obviously
mdadm will need to be available.
--
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)
Actually, a properly built initramfs gives you far more reliable
behavior even without a local sysadmin. For example, most
graphical-boot tools are designed to be built into an initramfs; I
have seen some prototype initramfs images which provide end-user
accessible GUI boot menus and other tools which function reliably even
when your root filesystem is inaccessible.
> 1/ distro tools assume that the kernel being build will run on that machine.
> For servers this is often not true. There are very valid security reasons to
> exclude compilation capability from many servers.
As Frans Pop states, this is entirely untrue for (at the very least)
Debian. The "initramfs-tools" package present there works regardless
of where I obtain my kernel. If I use the "make-kpkg" Debian tool
when building my kernel (regardless of where it is built), then the
resulting package will automatically generate an appropriate initramfs
image when installed. If for some reason I install a kernel by hand I
can very trivially build the necessary initramfs with this command:
update-initramfs -c -k 2.6.32-mykernel01
In the event that you need to "customize" the initramfs for some
reason, you can simply do so. When the "update-initramfs" tool is
next run, it will report that the user has customized that image and
avoid modifying it. If you wish to return to the autogenerated
initramfs you can simply run this command:
update-initramfs -u -t -k 2.6.32-2-amd64
> 2/ For most small shops, there is need for RAID (disks are fallible, shop
> cannot do without server), the RAID should work without being visible. If
> there is a problem with the RAID that causes auto-assemble to break, it
> means i need to travel (>100KM) to trouble shoot. The simpler the setup, the
> more i like it. This is also why i almost always use HW raid for the system
> partitions. The ones i use have userland tools in Linux which warn on disk
> failure, ensure auto rebuild, etc...
> Still, for large storage needs it is SW RAID over SATA.
>
> 3/ for my home systems, if i need to remote-support to get things working
> again (i am often travelling for my work), the added layer of initrd is an
> added layer of possible mistakes.
You are actually just setting yourself up for problems. RAID
autoassembly has bad corner cases when disks disappear between reboots
(which happens with failing disk head assemblies). In that case it
will fail to find its root filesystem or wait forever for the last
disk to show up. I have had even *worse* problems (including
corruption of unrelated logical volumes) with many hardware RAID
controllers, even those from big-name server vendors such as HP and
Dell.
By contrast, an initramfs is configurable to prompt the user,
automatically degrade the array after a small delay, or even play a
kazoo if desired :-D. One of these days I may get around to building
myself a small GUI wrapper around mdadm on an initramfs which allows a
user to manually recover from RAID problems.
Cheers,
Kyle Moffett
> On Wed, Feb 17, 2010 at 04:38, Rudy Zijlstra
> <ru...@grumpydevil.homelinux.org> wrote:
>> Kyle Moffett wrote:
>>> On Tue, Feb 16, 2010 at 21:01, Neil Brown <ne...@suse.de> wrote:
>>>> I will not be removing 0.90 or auto-assemble from the kernel in the
>>>> foreseeable future.
>>>> None the less, I recommend weaning yourself from your dependence on it.
>>>> initramfs is the future, embrace it.
>>>>
>>>
>>> What are people's reasons for pushback against initramfs? I've heard
>>> lots of claims that "it's not trustworthy" and "it breaks", but in 7
>>> years of running bootable software RAID boxes on weird architectures
>>> (even running Debian unstable) I have only once or twice had initramfs
>>> problems.
Kyle,
for a distro that is trying to make one kernel image run on every
possible type of hardware, features like initramfs (and udev, modules,
etc) are wonderful.
however, for people who run systems that are known ahead of time and
static (and who build their own kernels instead of just relying on the
distro default kernel), all of this is unnecessary complication, which
leaves more room for problems to creep in.
David Lang
Such people can easily construct an initramfs containing busybox and
mdadm with a shell script hardcoded to mount their root fs and run
switch_root. It's a ~10 minute jobbie that only needs to be done once.
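For the record, such a hardcoded /init can be as small as this. It is a sketch only: the array devices, the md node, and the root filesystem mount point are assumptions that must match your own setup, and /root must already exist in the cpio archive.

```sh
#!/bin/sh
# Minimal initramfs /init: assemble the root array, mount it, hand off.
# Assumes busybox applets plus mdadm are in the image, and that root
# lives on /dev/md0 assembled from /dev/sda1 and /dev/sdb1 (adjust!).
mount -t proc proc /proc
mount -t sysfs sysfs /sys
mount -t devtmpfs devtmpfs /dev

mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1

mount -o ro /dev/md0 /root
umount /proc /sys
exec switch_root /root /sbin/init
```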
--
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)
and even better when you don't have to do that one time job at all.
btw, what about additional delay?
Glück Auf,
Volker
But people who are building their own kernels are already doing a
(much harder, imo) one time job of configuring their kernels.
> btw, what about additional delay?
It takes about half a second for mdadm to assemble my root array, is
that what you're referring to?
I assume that kernel auto-assembly is no faster, although I've never
used it. Regardless, half a second isn't very long to wait.
--
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)
> On 19:27 Wed 17 Feb , Volker Armin Hemmann wrote:
>> On Mittwoch 17 Februar 2010, Nick Bowler wrote:
>>> On 09:41 Wed 17 Feb , da...@lang.hm wrote:
>>>> for a distro that is trying to make one kernel image run on every
>>>> possible type of hardware, features like initramfs (and udev, modules,
>>>> etc) are wonderful.
>>>>
>>>> however for people who run systems that are known ahead of time and
>>>> static (and who build their own kernels instead of just relying on the
>>>> distro default kernel), all of this is unnecessary complication, which
>>>> leaves more room for problems to creep in.
>>>
>>> Such people can easily construct an initramfs containing busybox and
>>> mdadm with a shell script hardcoded to mount their root fs and run
>>> switch_root. It's a ~10 minute jobbie that only needs to be done once.
>>
>> and even better when you don't have to do that one time job at all.
>
> But people who are building their own kernels are already doing a
> (much harder, imo) one time job of configuring their kernels.
>
>> btw, what about additional delay?
>
> It takes about half a second for mdadm to assemble my root array, is
> that what you're referring to?
>
> I assume that kernel auto-assembly is no faster, although I've never
> used it. Regardless, half a second isn't very long to wait.
If you are aiming for a 5-second boot time it's 10% of your total boot
time. That's a lot for a feature that's not needed.
David Lang
well at the moment it takes less than two seconds until init takes over.
Adding .5 seconds is a lot. And loading the initrd and changing root isn't
free either, true?
I remember well all the noise in the past about making Linux boot faster.
So why slow it down with an initrd - especially if you can do without?
Glück Auf,
Volker
Only if the kernel auto-assembly takes zero time, which it obviously
does not.
--
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)
> That's simply not true, at least not for Debian. If you actually use the
> distro tools [1] the only assumptions are made at kernel *installation*
> time, not at kernel build time.
And that's why network-booted diskless clients and virtual guests have
all sorts of useless modules loaded; the HW where the kernel package was
installed in this case is very different from the HW where the kernel
will run. If only there were a switch to prohibit ever looking at the
current machine's configuration when generating the initramfs...
> I've been using initramfs-tools generated initrds for years without
> problems, and that includes "root on LVM on LUKS encrypted partition"
> and "root on LVM on RAID" setups.
I've tried a couple of times to use a Debian-built initramfs with a
custom built kernel. The kernel worked fine without an initramfs (it had
everything built in), but it did not boot with the initramfs.
Gabor
> On 10:41 Wed 17 Feb , da...@lang.hm wrote:
>> On Wed, 17 Feb 2010, Nick Bowler wrote:
>>> It takes about half a second for mdadm to assemble my root array, is
>>> that what you're referring to?
>>>
>>> I assume that kernel auto-assembly is no faster, although I've never
>>> used it. Regardless, half a second isn't very long to wait.
>>
>> If you are aiming for a 5-second boot time it's 10% of your total boot
>> time. That's a lot for a feature that's not needed.
>
> Only if the kernel auto-assembly takes zero time, which it obviously
> does not.
the assembly time would probably be the same, but the initramfs being
proposed did not include that time either.
David Lang
Interesting use case. But it is also a use case for which initramfs-tools
quite simply was never intended.
I agree that in this case you probably want to either
- have a very generic initrd that supports anything (Debian default) [1]
or
- provide a list of modules to include in the initrd based on knowing the
hardware you want to support (e.g. using /etc/initramfs-tools/modules)
and prevent including anything based on the host system.
I've never really done that so I don't know if initramfs-tools has any
features to support that.
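If memory serves, initramfs-tools does have a mode for this: setting MODULES=list in initramfs.conf should make it include only the modules you name and ignore the running host. Untested here, so treat the following as a sketch; the module names are examples and must be replaced with the drivers your target hardware actually needs.

```
# /etc/initramfs-tools/initramfs.conf
MODULES=list    # only include the modules listed below, not host-derived ones

# /etc/initramfs-tools/modules -- one module name per line
raid1
ahci
sd_mod
ext4
```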
> If only there were a switch to prohibit ever looking at the
> current machine's configuration when generating the initramfs...
Did you ever file a wishlist bug report for that?
> > I've been using initramfs-tools generated initrds for years without
> > problems, and that includes "root on LVM on LUKS encrypted partition"
> > and "root on LVM on RAID" setups.
>
> I've tried a couple of times to use a Debian-built initramfs with a
> custom built kernel. The kernel worked fine without an initramfs (it had
> everything built in), but it did not boot with the initramfs.
It's obviously hard to comment on something like this without more details
(which would be off-topic for this list).
[1] Could still leave you with problems if the clients use something fancy
for the root fs that uses info copied from /etc.
This was the *only* time that was included. Quoting myself:
> It takes about half a second for mdadm to assemble my root array
I didn't make any claim about any other timings, since I have not made
any measurements (I am not adding instrumentation code to my initramfs
and rebooting the box just to do this, and my watch is not precise
enough to measure the time spent in initramfs).
After the kernel has loaded, but before init on my root fs is run, there
are only three things that cause noticeable delays:
* probing all the disks.
* assembling the root array.
* mounting the root filesystem.
--
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)
> On 13:17 Wed 17 Feb , da...@lang.hm wrote:
>> On Wed, 17 Feb 2010, Nick Bowler wrote:
>>
>>> On 10:41 Wed 17 Feb , da...@lang.hm wrote:
>>>> On Wed, 17 Feb 2010, Nick Bowler wrote:
>>>>> It takes about half a second for mdadm to assemble my root array, is
>>>>> that what you're referring to?
>>>>>
>>>>> I assume that kernel auto-assembly is no faster, although I've never
>>>>> used it. Regardless, half a second isn't very long to wait.
>>>>
>>>> If you are aiming for a 5-second boot time it's 10% of your total boot
>>>> time. That's a lot for a feature that's not needed.
>>>
>>> Only if the kernel auto-assembly takes zero time, which it obviously
>>> does not.
>>
>> the assembly time would probably be the same, but the initramfs being
>> proposed did not include that time either.
>
> This was the *only* time that was included. Quoting myself:
>
>> It takes about half a second for mdadm to assemble my root array
sorry, I misunderstood, I thought you were referring to the time added by
using the initramfs itself.
David Lang
Note that an extremely lightweight initramfs can quite possibly be
faster than doing it in the kernel, just because userspace is so much
less constrained. I was hoping klibc would catch on for this stuff, but
it hasn't as much as I'd like.
-hpa
If you are discussing boot times rather than mdadm, might I suggest
you change the subject line?
Upstream is keen on finally dropping kernel autoassembly, and
I support that because of the gained flexibility. Boot times are
important for laptops and desktops, which are hardly the primary
target of RAID.
Anyway, this is FLOSS. If you want kernel autoassembly, take over
the code and bring it up to speed.
--
martin | http://madduck.net/ | http://two.sentenc.es/
"what's your conceptual continuity? --
well, it should be easy to see:
the crux of the bisquit is the apopstrophe!"
-- frank zappa
spamtraps: madduc...@madduck.net
>
>
> On Tue, 16 Feb 2010, Neil Brown wrote:
>
> > On Thu, 11 Feb 2010 18:00:23 -0500 (EST)
> > Justin Piszcz <jpi...@lucidpixels.com> wrote:
> >
> >> Hi,
> >>
> >> I may be converting a host to ext4 and was curious, is 0.90 still the only
> >> superblock version for mdadm/raid-1 that you can boot from without having
> >> to create an initrd/etc?
> >>
> >> Are there any benefits to using a superblock > 0.90 for a raid-1 boot
> >> volume < 2TB?
> >
> > The only noticeable differences that I can think of are:
> > 1/ If you reboot during recovery of a spare, then 0.90 will restart the
> > recovery at the start, while 1.x will restart from where it was up to.
> > 2/ The /sys/class/block/mdXX/md/dev-YYY/errors counter is reset on each
> > re-assembly with 0.90, but is preserved across stop/start with 1.x
> > 3/ If your partition starts on a multiple of 64K from the start of the
> > device and is the last partition and contains 0.90 metadata, then
> > mdadm can get confused by it.
> > 4/ If you move the devices to a host with a different arch and different
> > byte-ordering, then extra effort will be needed to see the array for
> > 0.90, but not for 1.x
> >
> > I suspect none of these is a big issue.
> >
> > It is likely that future extensions will only be supported on 1.x metadata.
> > For example I hope to add support for storing a bad-block list, so that a
> > read error during recovery will only be fatal for that block, not the whole
> > recovery process. This is unlikely ever to be supported on 0.90. However
> > it may not be possible to hot-enable it on 1.x either, depending on how much
> > space has been reserved for extra metadata, so there is no guarantee that
> > using 1.x now makes you future-proof.
> >
> > And yes, 0.90 is still the only superblock version that supports in-kernel
> > autodetect, and I have no intention of adding in-kernel autodetect for any
> > other version.
> >
> > NeilBrown
> >
>
> Hi Neil,
>
> Thanks for the response, this is exactly what I was looking for and
> probably should be put in a FAQ.
>
I believe the linux-raid wiki is open for anyone to update. Feel free :-)
NeilBrown
> On Mittwoch 17 Februar 2010, Neil Brown wrote:
>
> > I will not be removing 0.90 or auto-assemble from the kernel in the
> > foreseeable future.
> > None the less, I recommend weaning yourself from your dependence on it.
> > initramfs is the future, embrace it.
> >
> > NeilBrown
>
> a future that is worse than the present. For what reason?
Reason: some things are easier to implement and maintain in userspace.
Implementing them in the kernel would likely produce a worse product.
Worse than the present: only if you refuse to embrace it and thereby
contribute to fixing/improving it.
NeilBrown
It is worth noting that the team that was recently working on very short
boot times wanted to disable in-kernel autodetect for RAID, and added a
compile-time option to do just that.
The reason is that before the in-kernel autodetection can work reliably you
have to wait for all storage devices to have been discovered. That wait
can unnecessarily increase the total boot time.
Using user-space autodetection, you can plug "mdadm -I" into udev, and have
arrays assembled as they are found, and filesystems mounted as arrays are
assembled, and then you just have to wait for the root filesystem to appear,
not for "all devices".
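The udev hookup looks roughly like this (a sketch only; the rule file name and the exact match keys vary between distributions):

```
# /etc/udev/rules.d/64-md-incremental.rules (illustrative)
SUBSYSTEM=="block", ACTION=="add", ENV{ID_FS_TYPE}=="linux_raid_member", \
    RUN+="/sbin/mdadm --incremental $env{DEVNAME}"
```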
Yes, you could make the in-kernel autodetection smarter so it doesn't have to
wait quite so long, but that would make it quite a bit more complex, and it
is harder to maintain the complexity in the kernel.
NeilBrown
The mdadm experimental package offers this via debconf (default off
for now). I would appreciate testers — I literally whacked this up
on a rainy Sunday with a hangover, and while it seems to work fine,
it's probably got warts.
If you don't run Debian or a derivative, you can get the files from
debian/initramfs/* in git://git.debian.org/pkg-mdadm/mdadm.git or
http://git.debian.org/?p=pkg-mdadm/mdadm.git;a=tree;f=debian/initramfs;hb=HEAD
Don't be scared off by the complexity; incremental assembly actually
bypasses most of the (shell) code in both scripts.
> Yes, you could make the in-kernel autodetection smarter so it
> doesn't have to wait quite so long, but that would make it quite
> a bit more complex, and it is harder to maintain the complexity in
> the kernel.
It is definitely a user-space task, if you ask me.
--
martin | http://madduck.net/ | http://two.sentenc.es/
windows 2000: designed for the internet.
the internet: designed for unix.
spamtraps: madduc...@madduck.net
Is this ready for testing somewhere? initramfs+mdadm.conf is operator-error
bait; proper auto-assembly that does away with the requirement of an
up-to-date mdadm.conf inside the initrd would help a great deal there.
It will need something like LVM has to blacklist/whitelist what device
classes it will scan for superblocks though, or it will eventually cause a
lot of trouble.
--
"One disk to rule them all, One disk to find them. One disk to bring
them all and in the darkness grind them. In the Land of Redmond
where the shadows lie." -- The Silicon Valley Tarot
Henrique Holschuh
Debian experimental. But so far, I was unable to get rid of
mdadm.conf because it only works without the info in that file if
the homehost is correctly encoded in the metadata. So the challenge
I am facing is http://bugs.debian.org/567468.
> It will need something like LVM has to blacklist/whitelist what
> device classes it will scan for superblocks though, or it will
> eventually cause a lot of trouble.
We rely on linux-base reporting the FS type as linux_raid_member and
mdadm -E finding the metadata if that's the case.
--
martin | http://madduck.net/ | http://two.sentenc.es/
sex an und für sich ist reine selbstbefriedigung.
spamtraps: madduc...@madduck.net
> On 09:41 Wed 17 Feb , da...@lang.hm wrote:
>> for a distro that is trying to make one kernel image run on every
>> possible type of hardware, features like initramfs (and udev, modules,
>> etc) are wonderful.
>>
>> however for people who run systems that are known ahead of time and
>> static (and who build their own kernels instead of just relying on the
>> distro default kernel), all of this is unnecessary complication, which
>> leaves more room for problems to creep in.
>
> Such people can easily construct an initramfs containing busybox and
> mdadm with a shell script hardcoded to mount their root fs and run
> switch_root. It's a ~10 minute jobbie that only needs to be done once.
Except that when mdadm, cryptsetup, or lvm change, you need to update it.
Especially when you set up a new system that might have newer
metadata.
Also, at least Debian doesn't (yet) support a common initramfs for their
kernel packaging. You either build a kernel without the need for one or you
have a per-kernel initramfs that is automatically built and updated
whenever anything in the initramfs changes. Not often, but still too
often, the initramfs then doesn't work.
Does any other distribution allow building kernel image rpms that will
use a common initramfs for all kernels?
MfG
Goswin
> On Wed, Feb 17, 2010 at 02:26:46PM +0100, Frans Pop wrote:
>
>> That's simply not true, at least not for Debian. If you actually use the
>> distro tools [1] the only assumptions are made at kernel *installation*
>> time, not at kernel build time.
>
> And that's why network-booted diskless clients and virtual guests have
> all sort of useless modules loaded; the HW where the kernel package was
> installed in this case is very different from the HW where the kernel
> will run. If only there were a switch to prohibit ever looking at the
> current machine's configuration when generating the initramfs...
In my experience you must boot up your client/guest and install the
kernel in there. Then copy the kernel and initramfs over to the boot
server for use.
Initramfs was never designed to generate an image for another system, or
worse, for a pool of different systems. You can probably make it work on
a case-by-case basis, but that wasn't thought of during the design phase.
Interesting idea though.
>> I've been using initramfs-tools generated initrds for years without
>> problems, and that includes "root on LVM on LUKS encrypted partition"
>> and "root on LVM on RAID" setups.
>
> I've tried a couple of times to use a Debian-built initramfs with a
> custom built kernel. The kernel worked fine without an initramfs (it had
> everything built in), but it did not boot with the initramfs.
>
> Gabor
'make-kpkg ... --initrd kernel-image' should build you your custom
kernel with all the magic required to generate a working initramfs.
If not then please do file bugs.
MfG
Goswin
I'm not sure what the problem is. I've had to do this (because the disk
with grub on the MBR was the one that failed - now I put grub on them
all).
I booted off the Fedora install disk in rescue mode, told it not to try
and mount any system disks, got into a shell, and ran mdadm -As.
I'm not sure what else a kernel auto-assemble would be expected to do
that mdadm -As wouldn't...
--
Ian Dall <i...@beware.dropbear.id.au>
I meant "once per system". One typically doesn't _need_ to update the
mdadm in the initramfs, as long as it's capable of assembling the root
array.
> Also at least Debian doesn't (yet) support a common initramfs for their
> kernel packaging. You either build a kernel without need for one or you
> have a per kernel initramfs that is automatically build and updated
> whenever anything in the initrmafs changes. Not often, but still too
> often, the initramfs then doesn't work.
The scenario was when users configure and build their own kernel. These
users are presumably capable of using grub's "initrd" command or the
CONFIG_INITRAMFS_SOURCE kernel option.
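The latter embeds the initramfs directly into the kernel image at build time, e.g. (the source directory path below is just an example):

```
# .config fragment: build the named directory into the kernel as its initramfs
CONFIG_INITRAMFS_SOURCE="/usr/src/my-initramfs"
```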
--
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)
The only really annoying issue I've had with my custom initramfs
creator is getting it 'chained' by the various distros' auto-initramfs
update triggers so that it can grab the versions of modules that match
a given kernel. I have several ways in mind to work around that issue
at various steps, but no known userbase to support besides myself, and
thus less motivation to work on that task.
Everything else of course works exactly the same as long as the
configuration hasn't changed on the host system.
I've not been active here for a long time - sorry :)
The linux raid wiki at OSDL (http://linux-raid.osdl.org/) was 'migrated' to a
drupal system during some Linux Foundation changes - clearly not suitable for
these kinds of docs.
I spoke to maddog at kernel.org some months ago and we are now part of the
managed kernel wiki farm (which the osdl wiki pre-dated in case anyone wonders
why we didn't start out there).
I've asked osdl to redirect the current url to the kernel.org wiki but I think
this home should last us a while ;)
so:
hi martin..
martin f krafft wrote:
> also sprach Justin Piszcz <jpi...@lucidpixels.com> [2010.02.17.0214 +1300]:
>> Thanks for the response, this is exactly what I was looking for
>> and probably should be put in a FAQ.
>
> I'd be more than happy to push my FAQ[0], possibly fused with my
> "recipes", upstream and would welcome anyone who wanted to help out.
>
> 0. http://git.debian.org/?p=pkg-mdadm/mdadm.git;a=blob;f=debian/FAQ;hb=HEAD
> 1. http://git.debian.org/?p=pkg-mdadm/mdadm.git;a=blob;f=debian/README.recipes;hb=HEAD
See:
http://raid.wiki.kernel.org/
David
--
"Don't worry, you'll be fine; I saw it work in a cartoon once..."