Some questions on Kiwi to build images


Roy Yang

Sep 23, 2017, 1:18:39 AM
to kiwi
I'm thinking of switching our current home-brew build system to KIWI --- today we use kickstart plus pre-install and post-install scripts to achieve everything. I wonder whether the same can be achieved if I spend some time on it.
Here are some high-level requirements/goals:
1.)  Installation should be unattended. That means disks should be automatically partitioned, and the boot and root partitions can be on the same disk, on different disks, or RAIDed.
2.)  Boot/Root can be either MBR or GPT, depending on the hardware platform.
3.)  The same ISO image can be installed on multiple platforms, from physical hardware to virtual platforms.
4.)  System configuration files can be overridden, especially files under /etc.
5.)  The same ISO image can be used for ISO install, USB install or PXE install.
6.)  The image can be used for upgrades too. For example, decompress a tar.gz, FreeBSD-style, and manipulate the grub entries.
7.)  The build target is CentOS for historical reasons. Can the build system be CentOS too, or does it have to be SUSE or openSUSE? (I'm learning it in my spare time and like its philosophy.)
8.)  How do we build CentOS applications? Compile the application into an RPM on CentOS, put it into some repo, then let Kiwi do the packaging?
9.)  Which image type is right --- iso or oem? Frankly, I don't get the difference from reading the docs. For example, I thought both can be used on bare metal machines.
10.) How do we add/remove or even update drivers or packages, at either the OS level or the initrd level?

I want to see whether we can use KIWI in production. Please give me some pointers so that I can dive in and explore feasibility.

Thank you for your help, 

Roy

Marcus Schäfer

Sep 25, 2017, 10:18:54 AM
to kiwi-...@googlegroups.com
Hi,

> 1.) Installation should be unattended. That means disks should be
> automatically partitioned, and the boot and root partitions can be on
> the same disk, on different disks, or RAIDed.

Can be done in kiwi's oem type with:

<oemconfig>
<oem-unattended>true</oem-unattended>
</oemconfig>

If multiple install targets exist, the first one is taken,
or you specify the device via

<oem-unattended-id>by-id</oem-unattended-id>

or you filter devices out with

<oem-device-filter>...</oem-device-filter>

depending on the target.
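Putting those pieces together, a minimal unattended oemconfig sketch could look like the following; the by-id device name is a made-up placeholder, and the exact nesting can differ between kiwi versions, so check the schema you build against:

```xml
<type image="oem" filesystem="ext4" installiso="true" bootloader="grub2">
  <oemconfig>
    <!-- do not ask any questions at install time -->
    <oem-unattended>true</oem-unattended>
    <!-- pick the target disk explicitly instead of taking the first one
         (hypothetical by-id name) -->
    <oem-unattended-id>ata-SAMSUNG_SSD_0001</oem-unattended-id>
  </oemconfig>
</type>
```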

Talking about RAID, we support mdraid with level striping or mirroring:

mdraid="mirroring" | "striping"

The disk will be set up in degraded RAID mode, which means you can
add disks after deployment, as we don't know what the target provides
at image creation time.
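As a sketch, the mdraid setting goes on the type element as an attribute; the surrounding attributes here are illustrative defaults, not taken from this thread:

```xml
<!-- build a disk image whose rootfs sits on a (degraded) md mirror -->
<type image="oem" filesystem="ext4" installiso="true"
      bootloader="grub2" mdraid="mirroring"/>
```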

> 2.) Boot/Root can be either MBR or GPT, and it depends on hardware
> platform.

Yes, we support EFI, EFI secure boot and legacy (CSM/BIOS) mode,
which can be controlled by the firmware="..." attribute.
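Illustrated as a config fragment (the attribute values are assumed from the modes listed above; consult the schema of your kiwi version for the exact spelling):

```xml
<!-- plain EFI boot -->
<type image="oem" filesystem="ext4" firmware="efi"/>
<!-- legacy CSM/BIOS boot -->
<type image="oem" filesystem="ext4" firmware="bios"/>
```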

> 3.) Same ISO image can be installed on multiple platforms, from
> physical hardware to virtual platform.

Yes, kiwi detects the storage devices; as long as drivers are present
you can install to any device.

> 4.) System configuration files can be overridden, especially files under
> /etc.

Yes, we call that overlay files in kiwi. In your image description you
can put static files for overwriting in a tarball or below a root/ directory.
The structure there is put on top of the system and overwrites what exists.
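For example, an image description tree could look like this (the file names are hypothetical); everything under root/ is copied over the freshly installed system:

```
my-image-description/
├── config.xml
├── config.sh
└── root/
    └── etc/
        └── motd        <- ends up as /etc/motd in the image
```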

> 5.) Same ISO image can be used for ISO install, USB install or PXE
> install.

The install ISO kiwi creates is a hybrid image, meaning it provides
an install medium suitable for CD/DVD and USB sticks in one file.
For PXE deployment kiwi supports two systems, documented here:

http://suse.github.io/kiwi/building/build_oem_disk.html#deployment-methods

> 6.) The image can be used for upgrades too. For example, decompress a
> tar.gz, FreeBSD-style, and manipulate the grub entries.

Don't understand this

> 7.) The build target is CentOS for historical reasons. Can the build
> system be CentOS too, or does it have to be SUSE or openSUSE? (I'm
> learning it in my spare time and like its philosophy.)

There are no constraints on the underlying build system except that the core
tools used by kiwi must be compatible, meaning tools like xz, gpart, fdisk,
losetup, dmsetup, lvcreate, btrfs etc etc... kiwi is a heavy user of many
low-level system components. Fortunately, over the past years the core
Linux stack has become more and more the same across distributions. My
integration tests in the buildservice use the target system as the host
system too, so at least for my Fedora 25 build I can confirm it works.

Overall I'm not suffering from the "not invented here" syndrome :) and
try hard to make my software work on all Linux distributions... of course
there is no guarantee.

> 8.) How to build CentOS applications? Compile application into RPM on
> CentOS, put into some Repo, then let Kiwi do packaging?

Don't understand this

> 9.) Which image type is right --- iso or oem? Frankly, I don't get the
> difference from reading the docs. For example, I thought both can be
> used on bare metal machines.

The iso type builds live ISO images using an overlay technology to
allow persistent or temporary writes depending on the target's capabilities;
the rootfs is a squashfs here.

The oem type builds a disk image as a file (virtual disk) and can therefore
be used to become the real operating system, with free choice of root
filesystem, LVM, btrfs and more. It is meant to be rolled out to [n]
clients and not to be a live system on mobile media.
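As a rough sketch of the two type declarations (the flags attribute on the live type is an assumption on my part; the rest mirrors the attributes used elsewhere in this thread):

```xml
<!-- live system: squashfs root plus an overlay for writes -->
<type image="iso" flags="overlay"/>

<!-- deployable disk image with a free choice of root filesystem -->
<type image="oem" filesystem="ext4" installiso="true" bootloader="grub2"/>
```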

> 10.) How to add/remove or even update drivers or packages? In either OS
> level or initrd level.

The initrd is based on dracut, thus whenever dracut runs it embeds the
tools and libraries of the environment it is called from. Updating
packages can be done on several levels. I normally just rebuild the image.
Another approach is to run "zypper up", "dnf update" or whatever it is
called after the system has been deployed. Yet another approach is to build
in the open buildservice: any new package will automatically trigger a
rebuild of the image... there are many ways to update software, and which
is best depends highly on the use case and the environment.

> I want to see whether we can use KIWI in production. Please give me some
> pointers so that I can dive in and explore feasibility.

Not sure what you are aiming for, but kiwi is used in production all over
the SUSE company. As I work in the public cloud development team, one pointer
I can give you is the Amazon EC2 marketplace: search for the SUSE images
and run an instance. These are all kiwi images, used by everyone who
decided on SLES in Amazon EC2 :) The marketplaces of Microsoft Azure and
Google Compute Engine also offer kiwi-built images.

From my personal experience, I would only change the system which builds
images if there is a real benefit. If the end result will be the same but
just differently built, I would not put in the effort. Of course it
brings a smile to my face if you give kiwi a chance :)

Regards,
Marcus
--
Public Key available via: https://keybase.io/marcus_schaefer/key.asc
keybase search marcus_schaefer
-------------------------------------------------------
Marcus Schäfer (Res. & Dev.) SUSE Linux GmbH
Tel: 0911-740 53 0 Maxfeldstrasse 5
FAX: 0911-740 53 479 D-90409 Nürnberg
HRB: 21284 (AG Nürnberg) Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton
http://www.suse.de
-------------------------------------------------------

Roy Yang

Sep 26, 2017, 1:57:19 PM
to kiwi
Thank you for your answers. I understand more after reading the KIWI documentation over the weekend.

For 1.) The installation targets/disks may be different for each platform.
    For example, on some platforms a SATADOM is used as the boot device while a SATA SSD
    is used as the root device; on other platforms two SATA SSDs are used for boot and root.
    Also, each platform may have a lot of disks; currently I use scripts to find the disks
    for installation and do the corresponding partitioning.

For 6.) What I mean is whether the build result can be used for upgrades; ideally the build target could be a tar.gz like FreeBSD's.
       To support upgrades, we usually have one boot partition to hold grub, the initrd and the kernel, plus two root partitions.
       If a tar.gz is one build target, we can just untar it onto the spare root partition, make some adjustments, and reboot.

For 7.)  What I mean is whether KIWI can run on CentOS. So far I have only made it work on openSUSE.
     I tried to install the KIWI RPMs on CentOS and did not succeed.

For 8.)  What I mean is: if KIWI can only run on the SUSE family of distributions, while the build target is CentOS and
the applications are designed for CentOS, should I start a CentOS VM, build the application into an
RPM, then move the RPM to the SUSE host and let KIWI do the packaging? Or is there anything better?

For 10.) I just want to try KIWI for new projects. I have experienced both hacking init scripts and kickstart-style
scripts for customized installs, and found that KIWI keeps those things simple. Why should I reinvent the wheel? If
I can leverage KIWI, that would be much better.

Thank you,

Roy

Marcus Schäfer

Sep 27, 2017, 8:49:07 AM
to kiwi-...@googlegroups.com
Hi,

> For 1.) The installation target/disks may be different for each
> platform. For example, some platform, SATADOM is used as boot
> device, while SATA SSD is used as root device.

That's going to be hard for an imaging solution that provides the
entire system as one image file. kiwi does not support spread images
where one portion is on a different physical storage component than
the other. This is by intention: the image itself should be
one functional entity.

Your setup is imho better solved by a recipe-controlled installation
workflow on the target machine(s) with access to all storage components,
meaning kickstart, autoyast, etc...

Roy Yang

Sep 29, 2017, 2:00:03 AM
to kiwi
Thank you.

In my current script-based installation, I use a script to figure out which platform it is,
then which disk is for /boot and which disk is for /root; all the rest of the
installation is automated.
Yes, we are using kickstart for now.

Can autoyast work with Kiwi?

David Cassany

Sep 29, 2017, 5:40:35 AM
to kiwi-...@googlegroups.com
Hi Roy,

> Can autoyast work with Kiwi?

I do not have experience with autoyast or kickstart. But I believe this
is not a matter of whether autoyast works with kiwi or not; I believe the
issue is that they serve different purposes. KIWI's focus is to provide
complete functional systems, rather than to provide an installation
tool. Autoyast can work with kiwi in the sense that it can be requested
to be included in an image; from there, how it can be configured to
perform installation tasks when running the appliance is something
that I am not familiar with. KIWI can build live ISO images with some
pre-enabled services; I'm not sure if autoyast can work that way though.

Regards,
David


Marcus Schäfer

Sep 29, 2017, 5:43:18 AM
to kiwi-...@googlegroups.com
Hi,

> Can autoyast work with Kiwi?

It can be combined with kiwi in the sense that you can run
the system configuration modules autoyast provides, but not the
ones for partitioning and installation, because those are covered
at the OS image level in kiwi.

As I said, if the core OS is split across several different storage
devices and not organized in a top-level compound like a RAID,
cloud or virtualization system, the imaging approach is not
suitable for you.

From my experience, /boot on one disk, /root on another disk,
/xxx on yet another, etc., all glued together, provides a working
system but is imho not a flexible or easy-to-maintain one.
Also, deployment, as you said, is not a trivial task: some logic
needs to find the right place and put the right pieces together.
That's where imaging kicks in. The OS forms one functional and
testable entity, location- and storage-independent, so that the
same thing can work basically everywhere. The application layer
forms another entity, independent of the OS layer as much as
possible.

If there are many constraints that have to be fulfilled at
deployment time, e.g. your install logic has to have access to all
storage devices in order to work, then things are not loosely
coupled, which makes them less flexible. In a world
where things change every day, I count flexibility and simplicity
as very important, and it's really hard to get there :)

So don't get me wrong: your approach works and will meet
customer needs, but it does not seem to meet the criteria for
image-based applications and deployments. Thus
I'm afraid kiwi might not be the right solution
for your needs.

Hope that makes sense

Roger Oberholtzer

Sep 29, 2017, 5:53:54 AM
to kiwi-...@googlegroups.com
I use autoyast firstboot in OEM images created with KiWI. So I know
you can do at least as much as is defined in /etc/YaST/firstboot.xml

You need to install yast2-firstboot to get this.

In kiwi, you also need to add this to config.sh

touch /var/lib/YaST2/reconfig_system


Of course, this only happens once when the image is first booted. The
flag file is deleted by YaST firstboot stuff.



--
Roger Oberholtzer

Roy Yang

Sep 29, 2017, 10:47:46 AM
to kiwi
Thanks for sharing.

First-boot configuration is different. That script is executed after all files for /boot and /root have been
dumped to the disk target. We can use it to install RPMs or do some configuration.

What I want is to specify the disk target before the ISO/OEM installation --- sometimes /boot and /root will be on different disks, though mostly they are not. I will first try to install what KIWI gives me on the servers.

Packer can do similar work to Kiwi for packaging. If I have autoyast2 or kickstart files, I can use packer/kiwi to package the files into an ISO.

Thanks for all the help! I will get my hands dirty first and make the problems specific.
Yes, what I want may not work out of the box; I'm just exploring the possibility of leveraging kiwi and autoyast/kickstart as recipes for the final solution.

linzama

Aug 14, 2019, 12:59:57 PM
to kiwi

Hello,

I need a little help building an image for BIOS RAID controllers (fakeRAID). Attached is my config.xml. I tried changing parameters in my config.xml, but my ISO still does not see my RAID; it lists the disks individually.

Thanks
config.xml
issue-1.PNG

Marcus Schäfer

Aug 15, 2019, 6:25:01 AM
to kiwi-...@googlegroups.com
Hi,

> I need a little help building an image for BIOS RAID controllers
> (fakeRAID). Attached is my config.xml. I tried changing parameters in my
> config.xml, but my ISO still does not see my RAID; it lists the disks
> individually.

Haven't done tests with fake raid controllers for quite some time.
From an image perspective the following needs to be done:

1. Your type setup still uses the old oemboot stuff. Change it to dracut:

<type image="oem" filesystem="ext4" initrd_system="dracut" installiso="true" bootloader="grub2" kernelcmdline="splash" firmware="efi"/>

2. Make sure dmraid and mdadm are installed:

<package name="dmraid"/>
<package name="mdadm"/>

3. Activate RAID auto-assembly on the kernel command line:

kernelcmdline="rd.auto"

For details on the dracut level see:

http://man7.org/linux/man-pages/man7/dracut.cmdline.7.html

Search for dmraid and mdadm
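Combining the three steps above into one config.xml fragment (the non-RAID attributes are carried over unchanged from the type line in step 1):

```xml
<type image="oem" filesystem="ext4" initrd_system="dracut"
      installiso="true" bootloader="grub2"
      kernelcmdline="splash rd.auto" firmware="efi"/>

<packages type="image">
  <package name="dmraid"/>
  <package name="mdadm"/>
</packages>
```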

Maybe that helps to get you started

Regards,
Marcus
--
Public Key available via: https://keybase.io/marcus_schaefer/key.asc
keybase search marcus_schaefer
-------------------------------------------------------
Marcus Schäfer (Res. & Dev.) SUSE Software Solutions Germany GmbH

linzama

Aug 15, 2019, 7:36:33 PM
to kiwi
Hi
Thank you so much for your response.
I tried it, but it seems like it's not working, even though I have configured what you suggested as well as what the documentation says.

Any Help would be Really Appreciated

Thanks

Marcus Schäfer

Aug 16, 2019, 3:29:38 AM
to kiwi-...@googlegroups.com
Hi,

> Thank you so much for your response.
> I tried it, but it seems like it's not working, even though I have
> configured what you suggested as well as what the documentation says.
> Any help would be really appreciated.

This needs more debugging then. So what is the behavior you see?
I guess you deployed the install ISO to e.g. a USB stick and booted
the machine from it. Next you should see the list of possible target
devices to deploy to. I guess you don't see the expected devices
in that list?

If so, you can boot the machine with the following options:

rd.debug rd.kiwi.debug

If you see the dialog with the device list, hit the cancel button;
this will drop you into a shell if rd.debug is enabled. From that
shell you can debug whether dmraid sees your controller and whether
any devices are available.

The enabled rd.kiwi.debug flag causes a log file to be written:

/run/initramfs/log/boot.kiwi

This log only contains information from the kiwi dracut extensions.
It's probably not helpful here, but it can't hurt to take a look at
it too.