Debian buster nodes


Niko Kivel

Jan 25, 2021, 8:42:56 AM
to Warewulf
Hi all

... after 3 years away from Warewulf and clusters altogether, I'm back on the topic. Even though the application isn't HPC at all, I think Warewulf can provide what we need.
Short explanation:
I work in accelerator controls at the Paul Scherrer Institute in Switzerland. We are upgrading our synchrotron over the next few years, and I'm responsible for all motion software in the facility. In the end we'll have many servers running a minimal Debian 10, optimized for realtime performance.
I plan to use Warewulf to provision the servers, as this seems to be the prudent way of doing it ... and I know Warewulf v3 from my earlier life in HPC.
As I didn't know about v4 until last week (I must have overlooked it), I just forked the git development branch and built everything for RHEL 8.2 (which I have to use as the host OS for the master node). I also managed to create a chroot with Debian 10 and booted the first node about a day later. All in all a very nice experience!

So far I liked the `wwvnfs` / `wwbootstrap` command combo very much for building the image from a chroot environment. Can some of the pros please tell me what the equivalent is in v4? I couldn't find anything, only instructions for CentOS {7,8}.

Sorry for the long question, but I presume a little context can't hurt. If you tell me that it's a silly idea to handle the provisioning with Warewulf, that's fine. In that case, though, I'd appreciate an alternative.

TL;DR: How do I boot a stateless Debian 10 node from a RHEL 8 master node?

Thanks in advance
Niko

Gregory M. Kurtzer

Jan 25, 2021, 9:45:37 PM
to ware...@lbl.gov
Hi Niko,

Thank you for the background and context, indeed, very helpful.

So WW v3 is a bit long in the tooth at this point, but it is very much tried and true.

On the other hand, WW v4.0 is just about to be released, and while the gist is VERY similar to v3, there are some notable differences as well as some features that still need to be incorporated.

One such change is the migration from VNFS chroots to containers. This means that instead of building a VNFS internally, you can use whatever tools and pipelines you or your organization may already be using for containers. I encourage you to read through the docs in progress here:


You will be able to see in the Quickstarts the equivalent commands for wwvnfs (wwctl container build $NAME) and wwbootstrap (wwctl kernel import `uname -r`).
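For anyone skimming, a rough side-by-side sketch of that correspondence. The chroot path and the container name `debian-rt` are illustrative placeholders, not commands verified against a running v4 installation:

```shell
# Rough v3 -> v4 correspondence; the chroot path and the container
# name "debian-rt" are made-up placeholders for illustration.
V3_BUILD="wwvnfs --chroot=/var/chroots/debian-rt"
V4_BUILD="wwctl container build debian-rt"
V3_KERNEL="wwbootstrap $(uname -r)"
V4_KERNEL="wwctl kernel import $(uname -r)"

# Print the mapping; on a real v4 master you would run the v4 commands directly.
printf '%s  =>  %s\n' "$V3_BUILD" "$V4_BUILD"
printf '%s  =>  %s\n' "$V3_KERNEL" "$V4_KERNEL"
```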

Also, I'm not sure if you've seen it yet, but WW v4 is currently hosted out of my company's GitHub and can be found here:


Thanks,
Greg


--
You received this message because you are subscribed to the Google Groups "Warewulf" group.
To unsubscribe from this group and stop receiving emails from it, send an email to warewulf+u...@lbl.gov.
To view this discussion on the web visit https://groups.google.com/a/lbl.gov/d/msgid/warewulf/9fcb4ee3-1099-4bce-a851-81cab154a931n%40lbl.gov.


--
Gregory M. Kurtzer
CEO and Founder, CtrlCmd: http://ctrl-cmd.com
Executive Director, Next Generation of High Performance Computing: http://hpcng.org

Niko Kivel

Jan 26, 2021, 3:44:03 AM
to ware...@lbl.gov
Hi Greg

Thanks for the links to the repo and the documentation.
I already installed v4 and went through the quickstarts; they are very helpful. I eventually made it work, and one node is now booting CentOS 8. Overall a very nice experience! I think the procedure is more streamlined than it was in v3; I particularly like that there is only one command.

As you mentioned, some features of v3 are still missing. For my proposed setup, the most important one would be the ability to operate on a kernel that is _not_ present on the master node.
From what I gather from this function (https://github.com/ctrliq/warewulf/blob/f58ac3548aa35082e5ac0448e53ff3523ccaeee1/internal/pkg/kernel/kernel.go#L72), only local kernels are supported so far.
Previously I could just run `wwbootstrap --chroot=/opt/warewulf/chroot/debian-rt 4.19.0-13-rt-amd64`. I'm not sure how to do this with a kernel inside a docker image; so far I haven't had much contact with docker.

But, as you mentioned, v4 is just around the corner, and from what I can tell, it is very promising.
As I'm not under a lot of pressure, I can wait for such a feature to be implemented. That is, of course, if it's planned at all. I'm not sure to what extent I can contribute, as I'm pretty busy with other tasks.

Thanks
Niko

Gregory M. Kurtzer

Feb 4, 2021, 2:11:27 AM
to ware...@lbl.gov
Hi Niko,

In the past, where did you get the chroot that hosted the kernel you wish to use?

We can easily do a kernel import from a chroot, but you hit the nail on the head regarding docker/container images: they typically don't have a kernel installed. With that said, what would be your ideal solution here? How would you like to specify which kernel to import into Warewulf?

Thank you for testing Warewulf v4, and for the feedback!

Greg


Niko Kivel

Feb 4, 2021, 3:20:36 AM
to ware...@lbl.gov
Hi Greg

In WW v3 I used `wwmngchroot` and simply installed the kernel of interest with `apt`, exited the chroot, and pointed `wwbootstrap` to the chroot and kernel version as mentioned in the previous email. The chroot I got from running `debootstrap` on the master node.

For v4 I just copied the buster rt-kernel to the master node's respective directories and ran `wwctl kernel import`.
If wwctl had an option to specify the chroot location, that would be cleaner, as I now have a kernel lingering around on the master that is kind of out of place. Since I have to build a lot of kernel modules in the chroot, it would also allow for a clean workflow, rather than copying everything to the master every time.

Since I desperately needed an rt-kernel on the test nodes, and couldn't get them to (properly) boot a Debian container (I didn't have time to fiddle with the system overlay), I just booted your centos-8 docker image with the buster rt-kernel.
It turned out they love each other, and I get superb rt-performance, far superior to anything I ever achieved with RHEL 7 and the CentOS or CERN rt-kernel. It just looks really odd when you run `uname -a` and get a Debian kernel on a CentOS machine :)

@testing and feedback:
Sure thing; I'm certain I'll benefit from it in the long term.

best
Niko

Gregory M. Kurtzer

Feb 4, 2021, 4:29:26 PM
to ware...@lbl.gov
Hi Niko,

Comments inline...

On Thu, Feb 4, 2021 at 12:20 AM Niko Kivel <niko....@gmail.com> wrote:
Hi Greg

In WW v3 I used `wwmngchroot` and simply installed the kernel of interest with `apt`, exited the chroot, and pointed `wwbootstrap` to the chroot and kernel version as mentioned in the previous email. The chroot I got from running `debootstrap` on the master node.

Understood, got it.
 

For v4 I just copied the buster rt-kernel to the master nodes' respective directories and ran `wwctl kernel import`.
If wwctl had an option to tell it the chroot location, that would make it cleaner, as I now have a kernel lingering around on the master that is kinda out-of-place. Since I have to build a lot of kernel modules in the chroot, that would also allow for a clean workflow, rather than copying the stuff to the master every time.

Yep, that is a feature addition I'll work on ASAP and get it in before release.
 

Since I desperately needed an rt-kernel on the test nodes, and couldn't get them to (properly) boot a Debian container (I didn't have time to fiddle with the system overlay), I just booted your centos-8 docker image with the buster rt-kernel.

I have a feeling that the problem isn't the RT kernel but rather the Debian container. These, like the CentOS and other stock containers, are purposefully made NOT to actually boot. So we need to build some base images and push them to the Warewulf Docker location. The only difference between the CentOS containers I pushed there and the default CentOS container is that I created it like a real system, not a container, so it has a full systemd.

If someone wants to take a look at what would need to happen to "convert" the default OS containers such that they can boot, that would be appreciated. I don't know what they broke, and haven't had time to research it yet.
 
It turned out they love each other, and I get superb rt-performance, far superior to anything I ever achieved with RHEL 7 and the CentOS or CERN rt-kernel. It just looks really odd when you run `uname -a` and get a Debian kernel on a CentOS machine :)

Great to hear about the performance, and yeah, it is fun how modular and swappable Warewulf makes the user and kernel spaces for mixing and matching!
 

@testing and feedback:
sure thing, I'm certain I benefit from it in the long term.

Excellent, thank you!

Greg

 

Gregory M. Kurtzer

Feb 4, 2021, 9:12:22 PM
to ware...@lbl.gov
Hi Niko,

In the main branch, you can now import a kernel from a chroot directory using the command `sudo wwctl kernel import --root /path/to/chroot $KVERSION`.
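A sketch of how the new flag might be exercised, using a throwaway mock chroot so the version-discovery step is visible. The kernel version is the one from earlier in the thread; the actual import line is commented out since it needs root and a configured v4 master:

```shell
# Mock chroot standing in for a real debootstrap tree; only the
# /lib/modules layout matters for discovering the kernel version.
chroot_dir=$(mktemp -d)
mkdir -p "$chroot_dir/lib/modules/4.19.0-13-rt-amd64"

# Discover which kernel version is installed inside the chroot.
kver=$(ls "$chroot_dir/lib/modules" | head -n 1)
echo "importing kernel $kver from $chroot_dir"

# The actual import (needs root and a Warewulf v4 master):
# sudo wwctl kernel import --root "$chroot_dir" "$kver"
```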

Please confirm that this is working as expected.

Greg

Niko Kivel

Feb 5, 2021, 12:15:34 PM
to ware...@lbl.gov
Hi Greg

also inline

On Thu, Feb 4, 2021 at 10:29 PM Gregory M. Kurtzer <gmku...@gmail.com> wrote:
Hi Niko,

Comments inline...

On Thu, Feb 4, 2021 at 12:20 AM Niko Kivel <niko....@gmail.com> wrote:
Hi Greg

In WW v3 I used `wwmngchroot` and simply installed the kernel of interest with `apt`, exited the chroot, and pointed `wwbootstrap` to the chroot and kernel version as mentioned in the previous email. The chroot I got from running `debootstrap` on the master node.

Understood, got it.
 

For v4 I just copied the buster rt-kernel to the master node's respective directories and ran `wwctl kernel import`.
If wwctl had an option to specify the chroot location, that would be cleaner, as I now have a kernel lingering around on the master that is kind of out of place. Since I have to build a lot of kernel modules in the chroot, it would also allow for a clean workflow, rather than copying everything to the master every time.

Yep, that is a feature addition I'll work on ASAP and get it in before release.
 
I saw the later email, thanks; I will check next week.
 

Since I desperately needed an rt-kernel on the test nodes, and couldn't get them to (properly) boot a debian container (I didn't have time to fiddle with the system overlay) I just booted your centos-8 docker with the buster rt-kernel.

I have a feeling that the problem isn't the RT kernel but rather the Debian container. These, like the CentOS and other stock containers, are purposefully made NOT to actually boot. So we need to build some base images and push them to the Warewulf Docker location. The only difference between the CentOS containers I pushed there and the default CentOS container is that I created it like a real system, not a container, so it has a full systemd.
 
I built a new docker container from my Debian chroot (following these instructions: https://docs.docker.com/develop/develop-images/baseimages/), pushed it to our internal docker hub, and imported it from there into Warewulf. So it's got the whole init system. The nodes also boot with it ... kind of :) I wouldn't expect this to work smoothly, as the config structure is quite different and needs a dedicated system overlay; I just haven't had the time to deal with it. I'm pretty certain that, with the correct system overlay, the nodes will boot Debian just fine.
But given the experience with CentOS and the Debian rt-kernel, and the strong ties of all ETH-domain institutions to Red Hat, I'm leaning towards staying on CentOS, or Rocky later on. The pool of experts is much larger.
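The chroot-to-image step Niko describes might look roughly like this. The chroot is mocked with a single marker file, and the registry tag is a made-up placeholder; `docker import` is only attempted when docker is actually available:

```shell
# Mock chroot; a real one would come from debootstrap.
chroot_dir=$(mktemp -d)
mkdir -p "$chroot_dir/etc"
echo "buster/sid" > "$chroot_dir/etc/debian_version"

# Pack the chroot into a tarball that `docker import` understands.
tar -C "$chroot_dir" -cf rootfs.tar .

# Import as a flat base image for an internal hub (the tag is a placeholder).
if command -v docker >/dev/null 2>&1; then
    docker import rootfs.tar internal-hub.example/debian-rt:buster || \
        echo "docker import failed (daemon unavailable?)"
else
    echo "docker not installed; skipping import"
fi
```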

If someone wants to take a look at what would need to happen to "convert" the default OS containers such that they can boot, that would be appreciated. I don't know what they broke, and haven't had time to research it yet.
 
It turned out they love each other, and I get superb rt-performance, far superior to anything I ever achieved with RHEL 7 and the CentOS or CERN rt-kernel. It just looks really odd when you run `uname -a` and get a Debian kernel on a CentOS machine :)

Great to hear about the performance, and yeah, it is fun how modular and swappable Warewulf makes the user and kernel spaces for mixing and matching!
indeed.

Niko 

Niko Kivel

Feb 18, 2021, 5:56:36 AM
to ware...@lbl.gov
Hi Greg

Sorry for the late reply.
I just tested the kernel import from a chroot; it works like a charm!
Thanks for the swift implementation of the feature.

best
Niko

Gregory M. Kurtzer

Mar 18, 2021, 12:32:21 AM
to ware...@lbl.gov
Hi Niko,

My pleasure, glad it is working!

I would like to get Debian images to boot properly. My guess is that it just needs the network configuration layer to be included in the system overlay. If you, or anyone else, know which files to provision, and what their contents should be, I'll be happy to add them in.

Otherwise, I will do my best to recapitulate what Warewulf v3 does here.
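As a starting point, a guess at the kind of file such an overlay would need to provision. This is plain Debian ifupdown syntax; the interface name `eth0` and the DHCP choice are assumptions, and Warewulf's overlay templating would normally substitute per-node values here:

```shell
# Build a minimal overlay skeleton with a static ifupdown config.
# (eth0 and DHCP are assumptions; a real overlay would template per-node values.)
mkdir -p overlay/etc/network
cat > overlay/etc/network/interfaces <<'EOF'
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp
EOF
cat overlay/etc/network/interfaces
```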

Thank you,
Greg



Niko Kivel

Mar 18, 2021, 11:32:20 AM
to ware...@lbl.gov
Hoi Greg

that's great news!
I managed to boot nodes into Debian buster, but so far I have failed with the network configuration. Give me another week and I might have a working setup. If not, I'll get back to you.

best
Niko

Gregory M. Kurtzer

Mar 19, 2021, 11:21:03 PM
to ware...@lbl.gov
Hi Niko,

Perfect, and looking forward to talking about this in next week's community meeting! :)

Thanks and have a great weekend,
Greg




--
Gregory M. Kurtzer
Director, The Rocky Enterprise Software Foundation: http://www.rockylinux.org

Niko Kivel

Mar 24, 2021, 4:06:38 PM
to Warewulf, gmkurtzer
Hi all

There is a pull request ready with a basic Debian overlay.
It's still a bit rough around the edges, so it's more of a first step than a final product.

best
Niko
