By default, build_image will create an image with
--enable_rootfs_verification enabled for x86 boards. This change has
not landed just yet, but it will land quite soon!
= Why, why, why?
- Build bots and automated tests will be running on the expected boot
path and rootfs layer (verified to the rootfs)
- Exercise and polish the code!
- If boot is really slow, check your dmesg for it spewing errors.
(Right now, it doesn't fail on error; it just warns... a lot.)
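A quick way to spot those warnings might be something like this (the
grep pattern is just a guess; adjust as needed):
dmesg | grep -i -e verity -e device-mapper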
Also, the verified rootfs will open up booting by partition UUID which
will make recovery/usb shim booting work irrespective of device
enumeration. (This change to build_image and the installer is still
in progress...)
= Great, but how can I ignore this for now?
Build with:
./build_image --noenable_rootfs_verification ...
= I keep getting errors on the USB. What gives?!?
Right now, our machines automount any recognized filesystems. If you
put an imaged USB stick into a booted Chromium OS machine, it _will_
automount the root filesystem. In doing so, it will modify the
filesystem's metadata and make it fail the integrity checks.
For legacy/efi bootloaders, you can rerun chromeos-setimage, but
otherwise, you need to reimage the usb stick.
If you have any ideas on how to avoid this, I'd love to know. It
seems that moving to a read-only filesystem (like squashfs) is the
"easiest" solution ;)
= Disabling and re-enabling rootfs verification
If you're doing a lot of iterations, I'd suggest temporarily disabling
verification (on legacy and efi systems):
crosh -> shell
/usr/sbin/chromeos-setimage --noenable_rootfs_verification [A|B]
reboot
Just pick the A or B depending on which you'd like to boot to. Then
when you are ready to test those changes, just rerun it with
--enable_rootfs_verification and reboot!
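For example, if you've been booting from A:
/usr/sbin/chromeos-setimage --enable_rootfs_verification A
reboot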
If you have firmware that can boot to the kernel partition, then
you'll need a new kernel partition image that doesn't have dm="..." in
the commandline (or has stubbed out devices ROOT_DEV/HASH_DEV). I
believe there is a doc floating around to do this, but I can't seem to
dig it up.
Without doing anything special, you can always just disable
dm-verity/rootfs verification in your image using
cros_make_image_bootable:
1. Remove --enable_rootfs_verification from your [IMAGE_DIR]/boot.desc
(just delete the line)
2. Inside the chroot run: bin/cros_make_image_bootable [IMAGE_DIR]
chromiumos_image.bin
3. cd [IMAGE_DIR]; ./unpack_partitions chromiumos_image.bin
4. scp part2 over to your device and dd it over the kernel partition.
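For that last step, something along these lines should work (assuming
the kernel partition you boot from is partition 2 on /dev/sda on the
device; adjust the device and partition number for your setup, and
<device_ip> is just a placeholder):
scp part2 root@<device_ip>:/tmp/
# then, on the device:
dd if=/tmp/part2 of=/dev/sda2
sync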
It's probably easier to just build a fresh image with it disabled :)
= gmerge plz!
Given that the root filesystem will be read-only, you can't just
gmerge onto it. It would probably make sense to build without it, or
to disable it and then re-enable it when testing.
If you really want to gmerge and keep going, you can switch between
rootfs, but it isn't pretty:
mkdir /tmp/other_root
mount /dev/sdaX /tmp/other_root
for d in /var /tmp /usr/local /mnt/stateful_partition; do mount --bind $d /tmp/other_root$d; done
chroot /tmp/other_root
gmerge blah
exit # the chroot
for d in /var /tmp /usr/local /mnt/stateful_partition; do umount /tmp/other_root$d; done
umount /tmp/other_root
# Recomputes the rootfs hash, writes it to disk, and updates the
# legacy and efi bootloaders
chromeos-setimage [A|B]
reboot
(See the next section if you aren't using efi/legacy bootloaders.)
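If you end up doing that dance a lot, a rough wrapper along these
lines might help (an untested sketch, not something that ships on the
image; it takes the other rootfs device and the package name as
arguments):
#!/bin/sh
# other_root_gmerge.sh (hypothetical): gmerge into the inactive rootfs.
# Usage: other_root_gmerge.sh /dev/sdaX some-package
set -e
OTHER_ROOT=/tmp/other_root
DEV="$1"
PKG="$2"
mkdir -p "${OTHER_ROOT}"
mount "${DEV}" "${OTHER_ROOT}"
for d in /var /tmp /usr/local /mnt/stateful_partition; do
  mount --bind "$d" "${OTHER_ROOT}$d"
done
chroot "${OTHER_ROOT}" gmerge "${PKG}"
for d in /var /tmp /usr/local /mnt/stateful_partition; do
  umount "${OTHER_ROOT}$d"
done
umount "${OTHER_ROOT}"
# Recompute the rootfs hash and update the legacy/efi bootloaders;
# pick A or B to match the rootfs you just modified, then reboot.
# chromeos-setimage [A|B]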
= Will it get better?
There is an open task to provide a gmerge helper to make this
smoother, but it will end up doing the same thing as above.
Hopefully, turning this on will motivate additional interest in making
the development process even better! Right now, I'm pretty certain my
normal flow differs from other people's, and I've been hesitant to
optimize for it.
Please send any ideas you have to make this less painful and/or more
efficient for development! Any and all other comments are certainly
appreciated -
will
Hey team,
image_to_vm is already patched to support this. Why would we need a
different image?
I imagine so. I haven't looked at our automounting at all. So yeah,
once we enable for ARM, that would invalidate the failover rootfs too
:/
Enables this change. Reverting it is as easy as changing the default
back to FLAGS_FALSE.
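For reference, the flag in question is a shflags boolean in
build_image, roughly like this (the exact flag text may differ):
DEFINE_boolean enable_rootfs_verification "${FLAGS_TRUE}" \
  "Build the image with dm-verity rootfs verification enabled."
Reverting just means flipping that default back to "${FLAGS_FALSE}".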
There's already been some discussion about why we'd want this on by
default and what we should change. So I thought I'd open it up for
anyone still reading:
The issue: if you gmerge a lot, you will probably just build with this
disabled. If that's common, why make it a default?
Proposed suggestion: Only enable this on bots and tests
I agree with the sentiment and the goal of this change isn't to make
everyone's lives harder "in the name of security". The goal is quite
simply to raise awareness around code we plan on shipping to every
user and ensure that people are indeed running their work on it. The
root filesystem verification will impact all I/O bound processes on
the root filesystem which means every piece of code we run and all
rootfs-based config data. Hopefully getting more eyes on it will
tease out dumb issues and result in smart suggestions for how to make
it easier to develop on and better for our users. However, I'm open
to changing it.
For now, I'd like to leave this on by default. If we do change it, we
need to make sure it stays on for the factory_install and test images
and on the bots by default. sosa & kliegs suggested having
./build_image --withdev disable rootfs verification. Any thoughts on
whether that flag makes the most sense to tie it to? If so, I'd be
happy to do so (and even happier to review the change to do so :).
Anyhow, nothing is immutable, so please pipe up with suggestions,
fixes, CLs, etc.
thanks!
will
Just to clarify my thinking on tying it to withdev: I can imagine more flags in the future, and it'd be nice to keep the number of flags used by developers in the common case minimal.
-jon
Ideally, it would mean that if someone (anyone) checks out a fresh
build and goes through the build steps, they would be building as
close to a release version as possible. I think it'd be nice to
optimize for that case, then tie other flags (like turning vboot off)
to --withdev because there are a large number of possible tweaks we
may each want for our dev builds that may not be shared widely.
It just seems that the fewer custom flags release builds need, the
less fiddly and more reproducible they will be, while dev builds are
more inclined to be frankenbuilds (gmerge use is a great example). So
maybe:
./build_image --board=blah --withdev
would be all you'd need to toggle a suite of common dev flags (plus
the dev packages) but
./build_image --board=blah
would produce a closer to golden image for the given board. I realize
I'm still proposing extra typing for our day-to-day work, but I also
worry about keeping good tabs on what exactly it means to do a release
build and what magic happens. (Yes, I know overlays play a big part
in this, but I still think less magic is better for those cases. The
more insight each engineer has into a final image, the better our
result will be I think.)
Would that make sense to anyone else?
cheers!
We could also add a --release option. Overall, I don't like changing flags or default behavior. It's not just a matter of more typing; as efficient people, I'm sure we all have scripts to automate things, so it's also a matter of modifying all the scripts we've written and forgotten about.
-jon
Fair enough. --release == --nowithdev, but it would be nice to make
it explicit. However, I'd still be in favor of making changes that
streamlined, instead of bloated, the cmdline arguments even if it
requires some script/alias updates. We already have a huge number of
flags for build_image.
> It's not just a matter of more typing but as efficient
> people I'm sure we all have scripts to automate things. So its also a matter
> of modifying all the scripts we've written and forgotten about.
That's the same problem that encouraged me to make
--enable_rootfs_verification the default. There are a lot of scripts
that people use that "just work". The only way to exercise those
scripts is to change the defaults. While it can yield breakage, it
also means we see valid breakage from incompatibility. So far,
turning this on
hasn't yielded any automated breakage that I've seen, but I'm waiting
for more people to wake up and let me know exactly how irritating this
is :)
Anyway, if we tie it to --withdev we still need an extra option to
force verification when needed. shflags can't easily change one
flag's default based on other arguments unless it parses the cmdline
twice. Boo.
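To make that concrete, here's a rough sketch of what tying the
default to --withdev would take (flag names are from this thread, but
the shflags path and the exact logic are illustrative, not the real
build_image code):
. /path/to/shflags  # wherever shflags lives in your checkout

ORIG_ARGS="$*"  # keep the raw args around for the second look below

DEFINE_boolean withdev "${FLAGS_FALSE}" "Include developer packages."
DEFINE_boolean enable_rootfs_verification "${FLAGS_TRUE}" \
  "Build with dm-verity rootfs verification."

FLAGS "$@" || exit 1
eval set -- "${FLAGS_ARGV}"

# To have --withdev imply --noenable_rootfs_verification while still
# letting an explicit --enable_rootfs_verification win, we have to go
# back over the raw arguments ourselves; shflags has already applied
# the defaults by the time FLAGS returns (the "parse twice" wart).
if [ "${FLAGS_withdev}" -eq "${FLAGS_TRUE}" ] && \
   ! echo "${ORIG_ARGS}" | grep -q -- "--enable_rootfs_verification"; then
  FLAGS_enable_rootfs_verification="${FLAGS_FALSE}"
fi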
In general though, I'd be curious where the build scripts are headed
[+anush :]. For instance, it might make sense to have a
bin/cros_build_image that wraps build_image, codifying our build_image
arguments into profiles: release, developer, release-test, ...,
[profile]-[mod]. But I expect that's even too narrow a view when we
start to pull in the cros_workon stuff, binary packages, and release
branches :)
I don't have a specific course of action, but if no one else has
strong opinions, we can pursue tying it to --withdev or something
else in a few days. (I expect after a few days of dealing with this,
there may be more strong opinions :)
thanks!
What about keeping the rootfs locked down and using unionfs or aufs to cause any dev-mode or gmerge changes to be put in the stateful partition?
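For what it's worth, the mount for something like that might look
roughly like this (aufs branch syntax from memory, and the writable
branch path is made up; none of this is wired into our images today):
mkdir -p /mnt/stateful_partition/dev_overlay /tmp/union_root
mount -t aufs -o br=/mnt/stateful_partition/dev_overlay=rw:/=ro \
  none /tmp/union_root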
unionfs has never been part of the kernel tree, but the developers
keep up pretty well; a 2.6.36 patch is available
(http://www.fsl.cs.sunysb.edu/project-unionfs.html) and they still
hope to get admitted to the kernel one of these days :)
cheers,
/vb
It's not at all likely to get merged upstream now. More likely that
Val Aurora's union-at-VFS-level changes will be merged in the next few
months.
The Ubuntu kernel, on which ours is based, does have ubuntu/aufs in
the tree, but starting to make use of that at this stage seems like a
can of worms to me, useful though it may well be.
Hugh
BTW, in case you need to change the kernel command line:
dd the kernel in question into a file (<original_kernel>) and then run:
vbutil_kernel --repack <modified_kernel> --config <new_cmd_line> \
--signprivate <path_to>/vboot_reference/tests/devkeys/<key> \
--oldblob <original_kernel>
where <key> is kernel_data_key.vbprivk for the main kernel or
recovery_kernel_data_key.vbprivk for the flash drive based recovery
kernel.
<new_cmd_line> is the name of the file containing the updated command line.
and then dd the <modified_kernel> back to where <original_kernel> came from.
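Putting the dd steps together, the round trip might look roughly like
this (assuming the kernel you boot is on /dev/sda2 and that you have
vbutil_kernel and the devkeys handy; adjust paths and the partition
to taste):
dd if=/dev/sda2 of=/tmp/original_kernel
vbutil_kernel --repack /tmp/modified_kernel --config new_cmd_line \
  --signprivate <path_to>/vboot_reference/tests/devkeys/kernel_data_key.vbprivk \
  --oldblob /tmp/original_kernel
dd if=/tmp/modified_kernel of=/dev/sda2
sync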
To get the kernel command line with verified boot disabled, try this:
vbutil_kernel --verify <original_kernel> --verbose | tail -1 | sed '
s/dm_verity[^ ]\+//g
s|verity /dev/sd%D%P /dev/sd%D%P ||
s| root=/dev/dm-0 | root=/dev/sd%D%P |
s/dm="[^"]\+" //' > new_cmd_line
Edit new_cmd_line if required and use it to repack the kernel blob as
described above.
cheers,
/vb