Hi Tim,
On 20.08.24 23:16, Tim Black wrote:
> Thanks for the feedback, Jan.
>
> Definitely the container lifecycle aspect requires some thought, but I
> don't yet see any problem making it work, as long as the kas team is ok
> with multiple containers and orchestrating them with compose or an
> ultra-lightweight k8s like k3d. I'm new to kas, but it is crystal clear
> that its value lies in organizations building for many platforms and
> applications at scale. k8s is tailor-made for this, but perhaps compose
> would be a better start for the average embedded dev.
>
> BTW, I am returning to focus on embedded Linux, where I spent the
> first 20 years of my career, after a few years doing high-level sw to
> learn more about public cloud, IaC, K8s and DevOps. Discovering kas
> last week was music to my ears. :-)
>
> I propose that there would be a new nfs-boot container that hosts the
> tftp and nfs servers. Some notes:
>
> * Separating out the nfs-boot container wouldn't bloat the build
>   container.
> * The nfs-boot container of course requires access to build artifacts
>   (kernel and rootfs).
>   o This could be the same volume used by the build container.
>   o I haven't yet seen how kas manages build dirs, but I'm assuming
>     these are, or could be, managed as persistent host volumes.
>   o The build volume would only have to live for the scope/lifetime
>     of the "pod" composed of the build + nfs-boot containers.
kas-container emulates plain kas: the build artifacts are left on the
host in KAS_WORK_DIR (normally "build"). For an artifact-serving,
testing, you-name-it container/tool-set, the host is likely the best
place to sync from.
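To make the shared-volume idea concrete, here is a minimal compose
sketch assuming the host KAS_WORK_DIR is simply bind-mounted into both
containers. The service names, the nfs-boot image, and the kas image
reference are all illustrative, not anything that exists today:

```yaml
# Hypothetical docker-compose.yml -- names and images are made up.
services:
  build:
    image: ghcr.io/siemens/kas/kas:latest   # illustrative kas image ref
    volumes:
      - ./build:/build          # KAS_WORK_DIR on the host
    command: kas build kas-project.yml

  nfs-boot:
    image: nfs-boot:local       # made-up image hosting tftp + nfs
    volumes:
      - ./build:/srv:ro         # same host directory, read-only
    depends_on:
      build:
        condition: service_completed_successfully
```

The `service_completed_successfully` condition is what would give you
the "start nfs-boot only on successful exit of build" behavior without
any extra scripting.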
> * The nfs-boot container would be started only upon a successful exit
>   code from the build container.
> * Any number of test containers could be appended to this pod and
>   follow the same basic approach.
>   o Perhaps the first could be a locally-emulated integration test of
>     the whole shebang: qemu - u-boot - tftp kernel - nfs boot
>     rootfs...
> * If the user enabled the nfs-boot container and let it run for a
>   long while, another kas build on the same host could still run a
>   build + nfs-boot + test sequence without collision, because
>   o kas would manage and dole out unique port numbers for the tftp
>     and nfs servers, and these numbers would be shared across the
>     containers within the "pod"
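The port-doling part could be sketched in a few lines of plain Python
(function name entirely made up; note that tftp and nfs actually speak
UDP and nfs needs more than one port, so this only illustrates
handing out non-colliding numbers per pod):

```python
import socket

def allocate_free_ports(count):
    """Ask the OS for `count` distinct free TCP port numbers.

    Sketch only: binding all sockets to port 0 before reading the
    assigned numbers guarantees they are distinct on this host.
    """
    socks = []
    try:
        for _ in range(count):
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.bind(("127.0.0.1", 0))  # port 0 = let the kernel pick
            socks.append(s)
        return [s.getsockname()[1] for s in socks]
    finally:
        # Closing releases the ports; there is a small race before the
        # pod's servers actually bind them -- acceptable for a sketch.
        for s in socks:
            s.close()
```

kas could then export the numbers into the pod's environment, e.g. as
TFTP_PORT and NFS_PORT (made-up variable names), so every container in
the pod agrees on them.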
>
> I'm sure there are numerous aspects I've not considered, but this is a
> high-level view. I eagerly await your feedback.
>
> BTW, I will be migrating my team's var-som-mx8mp yocto project to use
> kas and mender over here
> <https://github.com/timblaktu/meta-mender-community/tree/scarthgap-var-som-imx8mp>
> in the coming weeks. Time-permitting, I may prototype the above, and
> would be happy to collaborate towards our shared vision of kas
> container orchestration in all its glory. :-)
I would first love to read a user story, in the form of commands
issued, configurations written, etc., for these extensions. Then we
could settle on a reasonable workflow (for most people; you never
catch them all) and only afterwards think about how to implement
things in detail.
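To illustrate the level of detail I mean, such a user story might read
like the following transcript. Every command, option, and file name
here is entirely hypothetical -- none of this exists in kas today:

```
# Hypothetical session, just to show the shape of a user story:
$ cat kas-nfs-boot.yml          # extra config enabling the serving pod
$ kas-container --pod build kas-project.yml:kas-nfs-boot.yml
$ kas-container --pod status    # nfs-boot serving on allocated ports
```

That is, the story should show what the user types, what configuration
they have to write, and what they see, before we argue about containers.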
Doing it the other way around only works if your technical extension is
so cheap to write that it doesn't matter if it gets thrown away for
versions 2 and 3 once the workflows have been discussed.