Optionally support nfs/tftp servers in build containers to speed up dev cycle


Tim Black

Aug 20, 2024, 1:36:04 AM
to kas-devel

This is a feature request to optionally enable tftp/nfs servers in a kas build container. The feature would implement the canonical use case in early embedded Linux development, à la the Yocto wiki's NFS Root instructions <https://wiki.yoctoproject.org/wiki/Poky_NFS_Root>.
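For context, that use case boils down to a U-Boot session roughly like the following sketch. All addresses, file names, and the 192.168.1.x IPs are placeholders for a particular board and network, not values from kas or the wiki page:

```text
# Hypothetical U-Boot console sketch (arm64-style kernel/booti shown):
setenv serverip 192.168.1.10     # machine running the tftp/nfs servers
setenv ipaddr   192.168.1.20     # the target board
tftp ${loadaddr} Image           # fetch the kernel over tftp
tftp ${fdt_addr} board.dtb       # fetch the device tree over tftp
setenv bootargs console=ttyS0,115200 ip=dhcp root=/dev/nfs rw nfsroot=${serverip}:/srv/nfs/rootfs,v3,tcp
booti ${loadaddr} - ${fdt_addr}  # boot with an NFS-mounted rootfs
```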


Would this be considered a kas feature? Typically the build dir lives on the host and is volume-mounted into the container for persistence. One perspective may be that since the host owns the files, the host should be the one serving up the kernel over tftp and the rootfs over nfs. But my perspective and goal is to simplify and streamline the entire embedded workflow, and I believe that serving tftp/nfs, in an optional and declarative way, would strengthen kas's position as the modern embedded Linux development tool.
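To make "declarative" concrete, here is a purely hypothetical sketch of what such a config could look like. The `header`, `machine`, and `target` keys exist in kas today, but the `services` section below is invented for illustration and does not exist in kas:

```yaml
header:
  version: 14
machine: qemuarm64
target: core-image-minimal
services:            # hypothetical extension, not a real kas key
  tftp:
    root: ${KAS_WORK_DIR}/tmp/deploy/images/${MACHINE}
  nfs:
    export: ${KAS_WORK_DIR}/nfs-rootfs
```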


Jan Kiszka

Aug 20, 2024, 2:03:08 AM
to Tim Black, kas-devel
On 20.08.24 07:36, Tim Black wrote:
> This is a feature request to optionally enable tftp/nfs servers in a kas
> build container. The feature would implement the canonical use case in
> early embedded linux development, a la the yocto wiki's NFS Root
> instructions <https://wiki.yoctoproject.org/wiki/Poky_NFS_Root>.
>
> Would this be considered as a kas feature? Typically the build dir lives
> on the host and is a volume mounted in the container for persistence.
> One perspective may be that since the host owns the files, the host
> should be the one serving up the kernel over tftp and rootfs over nfs.
> But my perspective and goal is to simplify and streamline the entire
> embedded workflow, and I believe that serving tftp/nfs, in an optional
> and declarative way, would strengthen kas's position as the modern
> embedded linux development tool.
>

Less host-side configuration can indeed be a feature. But we also have
no dedicated test container, something I offered to consider in order to
take on more dependencies without enlarging the build container. For this
use case, such a container would also be needed, because the lifecycle of
the build container is different from that of an image-serving one. Do
you have a proposal, at least at a high level, for how the workflow and
interfaces should look?

FWIW, I have the host-side setup on my machine, and that can serve
multiple targets in parallel, independent of running builds.

Jan

--
Siemens AG, Technology
Linux Expert Center

Tim Black

Aug 20, 2024, 5:16:22 PM
to kas-devel
Thanks for the feedback, Jan. 

The container lifecycle aspect definitely requires some thought, but I don't yet see any problem making it work, as long as the kas team is OK with multiple containers and with orchestrating them via compose or an ultra-lightweight k8s such as k3d. I'm new to kas, but it is crystal clear that its value lies in organizations building for many platforms and applications at scale. k8s is tailor-made for this, but perhaps compose would be a better start for the average embedded dev.

BTW, I am returning to focus on embedded Linux, where I spent the first 20 years of my career, after a few years doing high-level sw to learn more about public cloud, IaC, K8s and DevOps. Discovering kas last week was music to my ears. :-)

I propose that there would be a new nfs-boot container that hosts the tftp and nfs servers. Some notes:
    • separating the nfs-boot container wouldn't bloat the build container
    • the nfs-boot container of course requires access to build artifacts (kernel and rootfs)
      • This could be just the same volume used by the build container
      • I haven't seen yet how kas manages build dirs, but I'm assuming these are or could be managed as persistent host volumes.
      • The scope of the build volume would only have to be the scope/lifetime of the "pod" composed of the build + nfs-boot containers.
    • the nfs-boot container would be started only upon a successful exit code from the build container.
    • Any number of test containers could be appended to this pod and follow the same basic approach.
      • Perhaps the first could be a locally-emulated integration test of the whole shebang: qemu - u-boot - tftp kernel - nfs boot rootfs...
    • If the user enabled the nfs-boot container and let it run for a long while, another kas build on the same host could still run a build + nfs-boot + test sequence without collision, because
      • kas would manage and dole out unique port numbers for the tftp and nfs servers, and these numbers would be shared across the containers within the "pod"
I'm sure there are numerous aspects I've not considered, but this is a high-level view. I eagerly await your feedback.
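The "pod" above could be sketched as a compose file along these lines. This is an illustration only: the kas image name is the real published one, but the nfs-boot image, volume layout, and host ports are invented placeholders; the `service_completed_successfully` condition is a real Compose feature that captures the "start only after a good build" rule:

```yaml
services:
  build:
    image: ghcr.io/siemens/kas/kas:latest      # published kas build image
    command: ["kas", "build", "/work/kas-project.yml"]
    volumes:
      - build-artifacts:/work/build
  nfs-boot:
    image: nfs-boot:local                      # hypothetical serving image
    depends_on:
      build:
        condition: service_completed_successfully  # only after a good build
    volumes:
      - build-artifacts:/srv/artifacts:ro      # same volume, read-only
    ports:
      - "6900:69/udp"   # tftp; kas would pick unique host ports per pod
      - "20490:2049"    # nfs
volumes:
  build-artifacts:
```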

BTW, I will be migrating my team's var-som-mx8mp yocto project to use kas and mender over here in the coming weeks. Time-permitting, I may prototype the above, and would be happy to collaborate towards our shared vision of kas container orchestration in all its glory. :-) 
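The unique-port idea from the last bullet above could be sketched like this. This is illustrative code, not anything from kas; it simply asks the OS for free ports and ensures the two services of one pod never share one:

```python
# Illustrative per-pod port allocation sketch; not actual kas code.
import socket
from contextlib import closing


def free_port() -> int:
    """Ask the OS for a currently free TCP port by binding to port 0."""
    with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]


def allocate_pod_ports() -> dict:
    """Pick distinct host ports for the tftp and nfs services of one pod."""
    ports = {"tftp": free_port(), "nfs": free_port()}
    # The OS can hand back the same port twice in a row; retry if so.
    while ports["nfs"] == ports["tftp"]:
        ports["nfs"] = free_port()
    return ports
```

In a real implementation kas would also have to record the allocation somewhere (e.g. in the work dir) so the containers of one pod all see the same numbers.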

Jan Kiszka

Aug 21, 2024, 9:07:22 AM
to Tim Black, kas-devel
Hi Tim,

On 20.08.24 23:16, Tim Black wrote:
> Thanks for the feedback, Jan. 
>
> Definitely the container lifecycle aspect requires some thought, but I
> don't yet see any problem making it work, that is as long as the kas
> team is ok with multiple containers and orchestrating them with compose
> or an ultra lightweight k8s like k3d. I'm new to kas, but it is crystal
> clear that its value lies in organizations with building for many
> platforms and applications at scale. k8s is tailor-made for this, but
> perhaps compose would be a better start for the average embedded dev.
>
> BTW, I am returning to focus on embedded Linux, where I was for the
> first 20 years of my career, after a few years doing high-level sw to
> learn more about public cloud, IaC, K8s and DevOps. Discovering kas last
> week was like music in my ears. :-)
>
> I propose that there would be a new nfs-boot container that hosts the
> tftp and nfs servers. Some notes:
>
>   * separating the nfs-boot container wouldn't bloat the build container
>   * the nfs-boot container of course requires access to build
>     artifacts (kernel and rootfs)
>       o This could be just the same volume used by the build container
>       o I haven't seen yet how kas manages build dirs, but I'm
>         assuming these are or could be managed as persistent host
>         volumes.
>       o The scope of the build volume would only have to be the
>         scope/lifetime of the "pod" composed of the build +
>         nfs-boot containers.

kas-container emulates plain kas: the build artifacts are left on the
host in KAS_WORK_DIR (normally "build"). For an artifact-serving,
testing, you-name-it container/tool-set, the host is therefore likely
the best place to sync from.

>   * the nfs-boot container would be started only upon a successful
>     exit code from the build container.
>   * Any number of test containers could be appended to this pod and
>     follow the same basic approach.
>       o Perhaps the first could be a locally-emulated integration
>         test of the whole shebang: qemu - u-boot - tftp kernel - nfs
>         boot rootfs...
>   * If the user enabled the nfs-boot container and let it run for
>     a long while, another kas build on the same host could still run
>     a build + nfs-boot + test sequence without collision, because
>       o kas would manage and dole out unique port numbers for the
>         tftp and nfs servers, and these numbers would be shared
>         across the containers within the "pod"
>
> I'm sure there are numerous aspects I've not considered, but this is a
> high-level view. I eagerly await your feedback.
>
> BTW, I will be migrating my team's var-som-mx8mp yocto project to use
> kas and mender over here
> <https://github.com/timblaktu/meta-mender-community/tree/scarthgap-var-som-imx8mp> in the coming weeks. Time-permitting, I may prototype the above, and would be happy to collaborate towards our shared vision of kas container orchestration in all its glory. :-) 

I would love to read a user story first, in the form of commands issued,
configurations written, etc., for these extensions. Then we can settle
on a reasonable workflow (for most people; you never catch them all) and
afterwards think about how to implement things in detail.

The other way around may only work if your technical extension is so
easy to write that it does not matter if it gets thrown away for
versions 2 and 3 once the workflows have been discussed on its basis.

Tim Black

Aug 21, 2024, 11:12:38 AM
to Jan Kiszka, kas-devel

Agreed re: the user story and design details. I will work towards a story as I start using kas in the new build environment I'll be setting up.

Where should this be posted when ready?

I noticed that the kas GitHub doesn't have public collaboration enabled (which of course is what brought me here).

Jan Kiszka

Aug 21, 2024, 12:44:45 PM
to Tim Black, kas-devel
On 21.08.24 17:12, Tim Black wrote:
> Agreed re: the user story and design details. I will work towards a
> story as I start using kas in the new build environment I'll be setting up.
>
> Where should this be posted when ready?
>
> I noticed that the kas Github doesn't have public collaboration enabled
> (which of course is what brought me here).
>

We use the mailing list for discussions, so here would be best (just
like avoiding top-posting ;-)).

Jan