[PATCH 0/2] Rework deployment of image artifacts

Felix Moessbauer

Apr 10, 2026, 9:23:07 AM
to isar-...@googlegroups.com, w...@ilbers.de, jan.k...@siemens.com, quirin.g...@siemens.com, Felix Moessbauer
As changing DEPLOY_DIR_IMAGE has proven fundamentally incompatible
with custom initrd recipes, that patch is reverted, re-introducing the
do_copy_boot_files error on DTBs that are named identically but belong
to different mc targets.

This series mitigates the issue by keeping the layout of DEPLOY_DIR_IMAGE
but prefixing the DTB_FILES with PN and DISTRO. These prefixes are then
stripped again during imaging.
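For illustration, the prefix/strip round trip described above can be sketched in plain Python (the helper names are hypothetical; only the `${PN}-${DISTRO}.` prefix scheme is taken from the series):

```python
# Sketch of the naming scheme proposed by this series: DTBs get a
# "<PN>-<DISTRO>." prefix in the deploy dir, which imagers strip again.
# The function names here are illustrative, not part of Isar.

def deployed_dtb_name(pn: str, distro: str, dtb: str) -> str:
    """Name under which a DTB would land in DEPLOY_DIR_IMAGE."""
    prefix = f"{pn}-{distro}."  # mirrors DTB_PREFIX = "${PN}-${DISTRO}."
    return prefix + dtb.rsplit("/", 1)[-1]

def imaged_dtb_name(deployed: str, pn: str, distro: str) -> str:
    """Strip the prefix again, as the imager scripts do."""
    prefix = f"{pn}-{distro}."
    return deployed[len(prefix):] if deployed.startswith(prefix) else deployed

name = deployed_dtb_name("isar-image-base", "debian-bookworm",
                         "imx6q-phytec-mira-rdk-nand.dtb")
# name == "isar-image-base-debian-bookworm.imx6q-phytec-mira-rdk-nand.dtb"
assert imaged_dtb_name(name, "isar-image-base",
                       "debian-bookworm") == "imx6q-phytec-mira-rdk-nand.dtb"
```

Two images or distros deploying a DTB of the same name thus get distinct deploy-dir entries, while the name inside the final image stays unchanged.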

Note that the series has been CI-tested against the "dtbdeploy" and "dev"
tags.

Best regards,
Felix Moessbauer
Siemens AG

Felix Moessbauer (2):
Revert "meta: Deploy image build artifacts into distro- and
image-specific subdirs"
prefix DTB files with PN in deploy dir

RECIPE-API-CHANGELOG.md | 84 +++----------------
.../installer-add-rootfs.bbclass | 9 +-
meta/classes-recipe/image.bbclass | 17 ++--
.../imagetypes_container.bbclass | 2 +-
meta/classes-recipe/imagetypes_wic.bbclass | 2 +-
meta/conf/bitbake.conf | 3 +-
.../wic/plugins/source/bootimg-efi-isar.py | 3 +-
.../plugins/source/isoimage-isohybrid-isar.py | 2 +-
testsuite/cibase.py | 2 +-
testsuite/citest.py | 7 +-
testsuite/start_vm.py | 2 +-
11 files changed, 36 insertions(+), 97 deletions(-)

--
2.53.0

Felix Moessbauer

Apr 10, 2026, 9:23:08 AM
to isar-...@googlegroups.com, w...@ilbers.de, jan.k...@siemens.com, quirin.g...@siemens.com, Felix Moessbauer
As changing DEPLOY_DIR_IMAGE has proven fundamentally incompatible
with custom initrd recipes, that patch is reverted, re-introducing the
do_copy_boot_files error on DTBs that are named identically but belong
to different mc targets.

To mitigate this limitation without breaking custom initrds, we prefix
all DTB files with ${PN}-${DISTRO} when deploying to DEPLOY_DIR_IMAGE.
On imaging, these prefixes are stripped again by the imager scripts.
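A downstream imaging plugin would adapt along the same lines as the bootimg-efi change in this patch: locate the prefixed file in the deploy dir and restore the plain name at the destination. A minimal sketch, assuming plain paths instead of the wic plugin API (function name is hypothetical):

```python
import os
import shutil

def copy_dtb(bootimg_dir: str, hdddir: str, dtb: str, dtb_prefix: str) -> str:
    """Copy a DTB from the deploy dir (where it carries the
    <PN>-<DISTRO>. prefix) to the boot staging dir under its plain
    name. dtb_prefix corresponds to the exported DTB_PREFIX value."""
    src = os.path.join(bootimg_dir, dtb_prefix + dtb)
    dst = os.path.join(hdddir, dtb)
    shutil.copyfile(src, dst)
    return dst
```

In a real wic plugin, the prefix would be read via get_bitbake_var("DTB_PREFIX"), as the bootimg-efi-isar.py hunk in this patch does.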

Signed-off-by: Felix Moessbauer <felix.mo...@siemens.com>
---
RECIPE-API-CHANGELOG.md | 20 +++++++++++++++++++
meta/classes-recipe/image.bbclass | 6 ++++--
meta/classes-recipe/imagetypes_wic.bbclass | 2 +-
.../wic/plugins/source/bootimg-efi-isar.py | 3 ++-
4 files changed, 27 insertions(+), 4 deletions(-)

diff --git a/RECIPE-API-CHANGELOG.md b/RECIPE-API-CHANGELOG.md
index 0e6a3172..856da5de 100644
--- a/RECIPE-API-CHANGELOG.md
+++ b/RECIPE-API-CHANGELOG.md
@@ -981,3 +981,23 @@ fragment, this can be specified via adding `${S}/path/to/fragment.cfg` to
`KERNEL_CONFIG_FRAGMENTS`. If a fragment was checked out into ${WORKDIR} as
part of a repository, a tarball, or some other directory structure, just
specify it relative to ${WORKDIR} in `KERNEL_CONFIG_FRAGMENTS`.
+
+Changes in next
+---------------
+
+### Prefix DTB file names when deploying
+
+DTB files are now placed in ${DEPLOY_DIR_IMAGE} with a prefix of
+${PN}-${DISTRO}. During wic imaging, the prefix is removed again, so no changes
+to downstream wks files are needed (i.e. `dtb=my-device-tree.dtb` is not
+affected by this change). Custom imaging plugins need to be adapted to this
+change by removing the prefix from the filename. For that, the variable
+DTB_PREFIX is exported as a bitbake variable into the wic environment.
+
+This fixes errors when building different distros with the same machine,
+whereby previously the following error occurred:
+
+do_copy_boot_files: The recipe isar-image-base is trying to install
+files into a shared area when those files already exists. It happens
+when some files have the same names (e.g., dtb files) for different
+distros.
diff --git a/meta/classes-recipe/image.bbclass b/meta/classes-recipe/image.bbclass
index 26a4ec06..9b5dd23e 100644
--- a/meta/classes-recipe/image.bbclass
+++ b/meta/classes-recipe/image.bbclass
@@ -379,7 +379,8 @@ EOF
KERNEL_IMG = "${PP_DEPLOY}/${KERNEL_IMAGE}"
INITRD_IMG = "${PP_DEPLOY}/${INITRD_DEPLOY_FILE}"
# only one dtb file supported, pick the first
-DTB_IMG = "${PP_DEPLOY}/${@(d.getVar('DTB_FILES').split() or [''])[0]}"
+DTB_PREFIX = "${PN}-${DISTRO}."
+DTB_IMG = "${PP_DEPLOY}/${DTB_PREFIX}${@os.path.basename((d.getVar('DTB_FILES').split() or [''])[0])}"

do_copy_boot_files[cleandirs] += "${DEPLOYDIR}"
do_copy_boot_files[sstate-inputdirs] = "${DEPLOYDIR}"
@@ -402,7 +403,8 @@ do_copy_boot_files() {
die "${file} not found"
fi

- cp -f "$dtb" "${DEPLOYDIR}/"
+ dtb_name=$(basename "$dtb")
+ cp -f "$dtb" "${DEPLOYDIR}/${DTB_PREFIX}$dtb_name"
done
}
addtask copy_boot_files before do_rootfs_postprocess after do_rootfs_install
diff --git a/meta/classes-recipe/imagetypes_wic.bbclass b/meta/classes-recipe/imagetypes_wic.bbclass
index dd6c501d..c0813223 100644
--- a/meta/classes-recipe/imagetypes_wic.bbclass
+++ b/meta/classes-recipe/imagetypes_wic.bbclass
@@ -107,7 +107,7 @@ WICVARS += "\
ROOTFS_SIZE STAGING_DATADIR STAGING_DIR STAGING_LIBDIR TARGET_SYS TRANSLATED_TARGET_ARCH"

# Isar specific vars used in our plugins
-WICVARS += "DISTRO DISTRO_ARCH KERNEL_FILE MACHINE"
+WICVARS += "DISTRO DISTRO_ARCH KERNEL_FILE MACHINE DTB_PREFIX"

python do_rootfs_wicenv () {
wicvars = d.getVar('WICVARS')
diff --git a/meta/scripts/lib/wic/plugins/source/bootimg-efi-isar.py b/meta/scripts/lib/wic/plugins/source/bootimg-efi-isar.py
index 6bc78d42..32b220fa 100644
--- a/meta/scripts/lib/wic/plugins/source/bootimg-efi-isar.py
+++ b/meta/scripts/lib/wic/plugins/source/bootimg-efi-isar.py
@@ -57,7 +57,8 @@ class BootimgEFIPlugin(SourcePlugin):
if dtb:
if ';' in dtb:
raise WicError("Only one DTB supported, exiting")
- cp_cmd = "cp %s/%s %s" % (bootimg_dir, dtb, hdddir)
+ dtb_file = "%s%s" % (get_bitbake_var("DTB_PREFIX"), dtb)
+ cp_cmd = "cp %s/%s %s/%s" % (bootimg_dir, dtb_file, hdddir, dtb)
exec_cmd(cp_cmd, True)

@classmethod
--
2.53.0

Felix Moessbauer

Apr 10, 2026, 9:23:08 AM
to isar-...@googlegroups.com, w...@ilbers.de, jan.k...@siemens.com, quirin.g...@siemens.com, Felix Moessbauer
This reverts commit 13cb77dd04614b655499bbb6d2b88b96718634cd.

As changing DEPLOY_DIR_IMAGE has proven fundamentally incompatible
with custom initrd recipes, that patch is reverted, re-introducing the
do_copy_boot_files error on DTBs that are named identically but belong
to different mc targets.

Signed-off-by: Felix Moessbauer <felix.mo...@siemens.com>
---
RECIPE-API-CHANGELOG.md | 80 -------------------
.../installer-add-rootfs.bbclass | 9 +--
meta/classes-recipe/image.bbclass | 11 ++-
.../imagetypes_container.bbclass | 2 +-
meta/conf/bitbake.conf | 3 +-
.../plugins/source/isoimage-isohybrid-isar.py | 2 +-
testsuite/cibase.py | 2 +-
testsuite/citest.py | 7 +-
testsuite/start_vm.py | 2 +-
9 files changed, 17 insertions(+), 101 deletions(-)

diff --git a/RECIPE-API-CHANGELOG.md b/RECIPE-API-CHANGELOG.md
index c5962969..0e6a3172 100644
--- a/RECIPE-API-CHANGELOG.md
+++ b/RECIPE-API-CHANGELOG.md
@@ -981,83 +981,3 @@ fragment, this can be specified via adding `${S}/path/to/fragment.cfg` to
`KERNEL_CONFIG_FRAGMENTS`. If a fragment was checked out into ${WORKDIR} as
part of a repository, a tarball, or some other directory structure, just
specify it relative to ${WORKDIR} in `KERNEL_CONFIG_FRAGMENTS`.
-
-### Change DEPLOY_DIR_IMAGE path and artifacts naming
-
-Change DEPLOY_DIR_IMAGE from ${DEPLOY_DIR}/images/${MACHINE} to
-${DEPLOY_DIR}/images/${MACHINE}/${DISTRO}-${IMAGE_PN}.
-
-When building different distros with the same machine the following
-error occurs:
-
-do_copy_boot_files: The recipe isar-image-base is trying to install
-files into a shared area when those files already exists. It happens
-when some files have the same names (e.g., dtb files) for different
-distros.
-
-To prevent such collisions, image artifacts are now deployed into a
-distro- and image-specific subdirectory.
-
-Additionally, artifact filenames have been shortened by removing the
-${DISTRO} and ${IMAGE_PN} prefix, since this information is now
-encoded in the directory path.
-
-Example 1: Build isar-image-base (phyboard-mira, debian-bookworm)
-Under "build/tmp/deploy/images/":
-Before:
-phyboard-mira/imx6q-phytec-mira-rdk-nand.dtb
-phyboard-mira/isar-image-base-debian-bookworm-phyboard-mira-initrd.img
-phyboard-mira/isar-image-base-debian-bookworm-phyboard-mira-vmlinuz
-phyboard-mira/isar-image-base-debian-bookworm-phyboard-mira.dpkg_status
-phyboard-mira/isar-image-base-debian-bookworm-phyboard-mira.ubi
-
-After:
-phyboard-mira/debian-bookworm-isar-image-base/imx6q-phytec-mira-rdk-nand.dtb
-phyboard-mira/debian-bookworm-isar-image-base/initrd.img
-phyboard-mira/debian-bookworm-isar-image-base/vmlinuz
-phyboard-mira/debian-bookworm-isar-image-base/phyboard-mira.dpkg_status
-phyboard-mira/debian-bookworm-isar-image-base/phyboard-mira.ubi
-
-Example 2: Build isar-image-ci (qemuamd64, debian-bookworm)
-Under "build/tmp/deploy/images/":
-Before:
-qemuamd64/isar-image-ci-debian-bookworm-qemuamd64-initrd.img
-qemuamd64/isar-image-ci-debian-bookworm-qemuamd64-vmlinuz
-qemuamd64/isar-image-ci-debian-bookworm-qemuamd64.dpkg_status
-qemuamd64/isar-image-ci-debian-bookworm-qemuamd64.manifest
-qemuamd64/isar-image-ci-debian-bookworm-qemuamd64.wic
-qemuamd64/isar-image-ci-debian-bookworm-qemuamd64.wic.bmap
-qemuamd64/isar-image-ci-debian-bookworm-qemuamd64.wic.manifest
-
-After:
-qemuamd64/debian-bookworm-isar-image-ci/initrd.img
-qemuamd64/debian-bookworm-isar-image-ci/vmlinuz
-qemuamd64/debian-bookworm-isar-image-ci/qemuamd64.dpkg_status
-qemuamd64/debian-bookworm-isar-image-ci/qemuamd64.manifest
-qemuamd64/debian-bookworm-isar-image-ci/qemuamd64.wic
-qemuamd64/debian-bookworm-isar-image-ci/qemuamd64.wic.bmap
-qemuamd64/debian-bookworm-isar-image-ci/qemuamd64.wic.manifest
-
-Artifacts that do not belong to a full image (e.g. isar-image-base,
-isar-image-ci) remain unchanged. For example, a customized initramfs
-built independently is not affected.
-
-This change affects the location and naming of build artifacts and must
-be taken into account by downstream users.
-
-Note that this approach differs from OpenEmbedded.
-
-OpenEmbedded typically avoids artifact collisions in multiconfig builds
-by using separate TMPDIRs per configuration, resulting in multiple
-build directories such as tmp-qemuarm64 and tmp-qemuarm64customized. In
-this model, artifacts with identical names but different contents do
-not cause conflicts because they reside in their own isolated build
-directories.
-
-If multiple configurations are intentionally configured to share the
-same TMPDIR in OpenEmbedded, conflicts may occur and are not handled by
-OE. Artifacts with the same name overwrite each other, leading to
-incorrect build results.
-
-Changes in next
----------------
diff --git a/meta-isar/classes-recipe/installer-add-rootfs.bbclass b/meta-isar/classes-recipe/installer-add-rootfs.bbclass
index 62301c34..69d87be8 100644
--- a/meta-isar/classes-recipe/installer-add-rootfs.bbclass
+++ b/meta-isar/classes-recipe/installer-add-rootfs.bbclass
@@ -13,10 +13,9 @@ INSTALLER_TARGET_IMAGES ??= "${INSTALLER_TARGET_IMAGE}"
INSTALLER_TARGET_MC ??= "installer-target"
INSTALLER_TARGET_DISTRO ??= "${DISTRO}"
INSTALLER_TARGET_MACHINE ??= "${MACHINE}"
-INSTALLER_TARGET_IMAGE ??= "${IMAGE_PN}"
-INSTALLER_TARGET_DEPLOY_DIR_IMAGE ??= "${DEPLOY_DIR}/images/${INSTALLER_TARGET_MACHINE}/${INSTALLER_TARGET_DISTRO}-${INSTALLER_TARGET_IMAGE}"
+INSTALLER_TARGET_DEPLOY_DIR_IMAGE ??= "${DEPLOY_DIR}/images/${INSTALLER_TARGET_MACHINE}"

-IMAGE_DATA_FILE ??= "${INSTALLER_TARGET_MACHINE}"
+IMAGE_DATA_FILE ??= "${INSTALLER_TARGET_IMAGE}-${INSTALLER_TARGET_DISTRO}-${INSTALLER_TARGET_MACHINE}"
IMAGE_DATA_POSTFIX ??= "wic.zst"
IMAGE_DATA_POSTFIX:buster ??= "wic.xz"
IMAGE_DATA_POSTFIX:bullseye ??= "wic.xz"
@@ -30,7 +29,7 @@ def get_installer_sources(d, suffix):
target_machine = d.getVar('INSTALLER_TARGET_MACHINE')
sources = []
for image in installer_target_images:
- image_data = f"{target_machine}"
+ image_data = f"{image}-{target_distro}-{target_machine}"
sources.append(f"{target_deploy_dir}/{image_data}.{suffix}")
return sources

@@ -42,7 +41,7 @@ def get_installer_destinations(d, suffix):
target_machine = d.getVar('INSTALLER_TARGET_MACHINE')
dests = []
for image in installer_target_images:
- image_data = f"{target_machine}"
+ image_data = f"{image}-{target_distro}-{target_machine}"
dests.append(f"/install/{image_data}.{suffix}")
return dests

diff --git a/meta/classes-recipe/image.bbclass b/meta/classes-recipe/image.bbclass
index 866df68a..26a4ec06 100644
--- a/meta/classes-recipe/image.bbclass
+++ b/meta/classes-recipe/image.bbclass
@@ -18,9 +18,8 @@ IMAGE_ROOTFS ?= "${WORKDIR}/rootfs"
KERNEL_IMAGE_PKG ??= "${@ ("linux-image-" + d.getVar("KERNEL_NAME")) if d.getVar("KERNEL_NAME") else ""}"
IMAGE_INSTALL += "${KERNEL_IMAGE_PKG}"

-# Name the image as the machine name only, since the path includes distro name now
-IMAGE_FULLNAME = "${MACHINE}"
-IMAGE_PN = "${PN}"
+# Name of the image including distro&machine names
+IMAGE_FULLNAME = "${PN}-${DISTRO}-${MACHINE}"

# Deprecated; this would be set to e.g. "${INITRAMFS_RECIPE}-${DISTRO}-${MACHINE}-initrd.img"
INITRD_IMAGE ?= ""
@@ -30,7 +29,7 @@ INITRD_IMAGE ?= ""
IMAGE_INITRD ?= ""

# Name of the deployed initrd image
-INITRD_DEPLOY_FILE = "initrd.img"
+INITRD_DEPLOY_FILE = "${@ d.getVar('IMAGE_INITRD') or '${PN}' }-${DISTRO}-${MACHINE}-initrd.img"

# Make sure dependent initramfs recipe is built
do_image[depends] += "${@ '${IMAGE_INITRD}:do_build' if '${IMAGE_INITRD}' else '' }"
@@ -53,7 +52,7 @@ python() {
ROOTFS_FEATURES += "${@ 'generate-initrd' if (d.getVar('INITRD_IMAGE') == '' and d.getVar('IMAGE_INITRD') == '') else ''}"

# This variable is used by wic and start_vm
-KERNEL_IMAGE ?= "${KERNEL_FILE}"
+KERNEL_IMAGE ?= "${IMAGE_FULLNAME}-${KERNEL_FILE}"

# This defines the deployed dtbs for reuse by imagers
DTB_FILES ?= ""
@@ -109,7 +108,7 @@ ROOTFS_PACKAGES += "${IMAGE_PREINSTALL} ${@isar_multiarch_packages('IMAGE_INSTAL
ROOTFS_VARDEPS += "IMAGE_INSTALL"
ROOTFS_MANIFEST_DEPLOY_DIR ?= "${DEPLOY_DIR_IMAGE}"
ROOTFS_DPKGSTATUS_DEPLOY_DIR ?= "${DEPLOY_DIR_IMAGE}"
-ROOTFS_PACKAGE_SUFFIX ?= "${MACHINE}"
+ROOTFS_PACKAGE_SUFFIX ?= "${PN}-${DISTRO}-${MACHINE}"

CACHE_DEB_SRC = "${@bb.utils.contains('BASE_REPO_FEATURES', 'cache-deb-src', '1', '0', d)}"
python () {
diff --git a/meta/classes-recipe/imagetypes_container.bbclass b/meta/classes-recipe/imagetypes_container.bbclass
index fba15503..e07ce8e6 100644
--- a/meta/classes-recipe/imagetypes_container.bbclass
+++ b/meta/classes-recipe/imagetypes_container.bbclass
@@ -9,7 +9,7 @@
CONTAINER_TYPES = "oci-archive docker-archive docker-daemon containers-storage"
USING_CONTAINER = "${@bb.utils.contains_any('IMAGE_BASETYPES', d.getVar('CONTAINER_TYPES').split(), '1', '0', d)}"

-CONTAINER_IMAGE_NAME ?= "container-${DISTRO_ARCH}"
+CONTAINER_IMAGE_NAME ?= "${PN}-${DISTRO}-${DISTRO_ARCH}"
CONTAINER_IMAGE_TAG ?= "${PV}-${PR}"
CONTAINER_IMAGE_CMD ?= "/bin/dash"
CONTAINER_IMAGE_ENTRYPOINT ?= ""
diff --git a/meta/conf/bitbake.conf b/meta/conf/bitbake.conf
index 5f339d40..5c71078d 100644
--- a/meta/conf/bitbake.conf
+++ b/meta/conf/bitbake.conf
@@ -57,8 +57,7 @@ WORKDIR = "${TMPDIR}/work/${DISTRO}-${DISTRO_ARCH}/${PN}/${PV}-${PR}"
GIT_DL_LINK_DIR = "${TMPDIR}/work/${DISTRO}-${DISTRO_ARCH}"
DEPLOY_DIR_BOOTSTRAP = "${DEPLOY_DIR}/bootstrap"
DEPLOY_DIR_SDKCHROOT = "${DEPLOY_DIR}/sdkchroot"
-IMAGE_PN ?= ""
-DEPLOY_DIR_IMAGE = "${DEPLOY_DIR}/images/${MACHINE}${@('/%s-%s' % (d.getVar('DISTRO'), d.getVar('IMAGE_PN'))) if d.getVar('IMAGE_PN') != '' else ''}"
+DEPLOY_DIR_IMAGE = "${DEPLOY_DIR}/images/${MACHINE}"
DL_DIR ?= "${TOPDIR}/downloads"
SSTATE_DIR ?= "${TOPDIR}/sstate-cache"
SSTATE_MANIFESTS = "${TMPDIR}/sstate-control/${DISTRO}-${DISTRO_ARCH}"
diff --git a/meta/scripts/lib/wic/plugins/source/isoimage-isohybrid-isar.py b/meta/scripts/lib/wic/plugins/source/isoimage-isohybrid-isar.py
index eaef9c79..0ed61fff 100644
--- a/meta/scripts/lib/wic/plugins/source/isoimage-isohybrid-isar.py
+++ b/meta/scripts/lib/wic/plugins/source/isoimage-isohybrid-isar.py
@@ -196,7 +196,7 @@ class IsoImagePlugin(SourcePlugin):
raise WicError("Couldn't find MACHINE, exiting.")

pattern = '%s/%s*%s.%s' % (initrd_dir, image_name, machine, image_type)
- pattern = '%s/initrd.%s' % (initrd_dir, image_type)
+ pattern = '%s/%s-%s-initrd.%s' % (initrd_dir, image_name, machine, image_type)
files = glob.glob(pattern)
if files:
initrd = files[0]
diff --git a/testsuite/cibase.py b/testsuite/cibase.py
index 060607f7..4a6308d0 100755
--- a/testsuite/cibase.py
+++ b/testsuite/cibase.py
@@ -44,7 +44,7 @@ class CIBaseTest(CIBuilder):
self.configure(wic_deploy_parts=wic_deploy_parts, **kwargs)
self.bitbake(targets, **kwargs)

- wic_path = f"{self.build_dir}/tmp/deploy/images/*/*/*.wic.p1"
+ wic_path = f"{self.build_dir}/tmp/deploy/images/*/*.wic.p1"
partition_files = set(glob.glob(wic_path))
if wic_deploy_parts and len(partition_files) == 0:
self.fail("Found raw wic partitions in DEPLOY_DIR")
diff --git a/testsuite/citest.py b/testsuite/citest.py
index a1b362c4..3eea07e5 100755
--- a/testsuite/citest.py
+++ b/testsuite/citest.py
@@ -498,9 +498,8 @@ class InitRdBaseTest(CIBaseTest):
super().init()
self.deploy_dir = os.path.join(self.build_dir, 'tmp', 'deploy')

- def deploy_dir_image(self, mc, image):
- multiconfig = f"{mc}:{image}"
- return CIUtils.getVars('DEPLOY_DIR_IMAGE', target=multiconfig)
+ def deploy_dir_image(self, machine):
+ return os.path.join(self.deploy_dir, 'images', machine)

def dracut_in_image(self, targets):
machine = 'qemuamd64'
@@ -529,7 +528,7 @@ class InitRdBaseTest(CIBaseTest):
bb_should_fail=False):
mc = f'mc:{machine}-{distro.removeprefix("debian-")}'
initrd_image = f'{initrd}-{distro}-{machine}-initrd.img'
- initrd_path = os.path.join(self.deploy_dir_image(mc, initrd), initrd_image)
+ initrd_path = os.path.join(self.deploy_dir_image(machine), initrd_image)

# cleansstate if the initrd image was already built/deployed to verify
# that a new build does result in the image being deployed
diff --git a/testsuite/start_vm.py b/testsuite/start_vm.py
index 958ab00a..8e28f11b 100755
--- a/testsuite/start_vm.py
+++ b/testsuite/start_vm.py
@@ -50,7 +50,7 @@ def format_qemu_cmdline(
image_type = image_fstypes.split()[0]
base = 'ubuntu' if distro in ['jammy', 'focal', 'noble'] else 'debian'

- rootfs_image = f"qemu{arch}.{image_type}"
+ rootfs_image = f"{image}-{base}-{distro}-qemu{arch}.{image_type}"

if image_type == 'ext4':
kernel_image = deploy_dir_image + '/' + kernel_image
--
2.53.0

Jan Kiszka

Apr 10, 2026, 9:38:10 AM
to Felix Moessbauer, isar-...@googlegroups.com, w...@ilbers.de, quirin.g...@siemens.com
Let's put the DTBs in prefix subdirs - that will also make referring to
them cleaner.
DTBs are the most prominent conflict with multiconfigs or partial
rebuilds. We may mitigate this one, so it's fine, but the fundamental
risk will remain. That is one of the reasons why I asked to study OE
carefully and try to learn from it first.

Jan

--
Siemens AG, Foundational Technologies
Linux Expert Center

MOESSBAUER, Felix

Apr 10, 2026, 10:12:43 AM
to Kiszka, Jan, isar-...@googlegroups.com, w...@ilbers.de, Gylstorff, Quirin

I explicitly decided against putting them in subdirs so as not to
diverge from the names of the other artifacts. However, OE deploys the
DTBs via the devicetree class to ${IMAGE_DEPLOY_DIR}/devicetree/,
which does not help much, as we would still get a clash. In addition,
it deploys to the sysroot, from where other recipes should consume it
[1]. But we could do that cleanup while we are at it.

Another reason for not putting them in a directory is the file globs
commonly used in CI to copy all files out of the deploy dir.

[1]
https://docs.yoctoproject.org/dev/ref-manual/classes.html#devicetree

Apart from that, OE anyway discourages deploying directly to
DEPLOY_DIR_IMAGE. Instead, the deploy.bbclass should be used to deploy
through the sstate cache.

These problems are not solved in OE either. Even the split of the
TMPDIR is done in downstream layers, not in OE-core (at least I found
nothing there).

Felix

Zhihang Wei

Apr 10, 2026, 10:21:33 AM
to Felix Moessbauer, isar-...@googlegroups.com, jan.k...@siemens.com, quirin.g...@siemens.com
This has passed the two newly added DTB deploy test cases. I'll put it
on full CI and get back to this next week.

Zhihang

Jan Kiszka

Apr 10, 2026, 12:32:20 PM
to Moessbauer, Felix (FT RPD CED OES-DE), isar-...@googlegroups.com, w...@ilbers.de, Gylstorff, Quirin (FT RPD CED OES-DE)
Downstream pick-up scripts like
https://gitlab.com/Xenomai/xenomai-images/-/blob/master/scripts/deploy_to_aws.sh?ref_type=heads
will need adjustments as well.
Which of the cleanups? There is no sysroot in Isar. Rather, we would
have to add a package to the chroot of the imager and pick it up from there.

>
> Another reason for not putting them in a directory are file globs
> commonly used in CI to copy out all files in the deploy dir.

See above, those will have to be adjusted because you often cannot pick
up the mangled DTB file names. That is why I suggested a directory.

>
> [1]
> https://docs.yoctoproject.org/dev/ref-manual/classes.html#devicetree
>
> Apart from that, OE anyways discourages direct deploy to
> DEPLOY_DIR_IMAGE. Instead the deploy.bbclass class should be used to
> deploy through the sstate cache.
>

Then let's try that and see if it works better.
Then we are either overusing multiconfig here or are still missing some
other detail, such as indirect deployment.

As the DTB deployment conflict has existed in Isar for many years, in
practice only affecting its own setup, I would suggest taking the
revert into the tree quickly and possibly postponing a real solution
until after further research.

I'm also concerned that the pattern applied here will not easily scale
to similar problems around other artifacts we deploy in various
downstream layers. We produce the same error around firmware.bin when
rebuilding isar-cip-core for different distros, for example.

Zhihang Wei

Apr 16, 2026, 9:54:49 AM
to Felix Moessbauer, isar-...@googlegroups.com, jan.k...@siemens.com, quirin.g...@siemens.com
The last three lines should be kept.
Also, the second patch, which adds the prefix to the DTBs, needs an
entry in the API-CHANGELOG. Apart from these, we'll apply this series.

Zhihang

MOESSBAUER, Felix

Apr 16, 2026, 10:39:24 AM
to Zhihang Wei, isar-...@googlegroups.com, Kiszka, Jan, Gylstorff, Quirin

Hi,

Which lines are you referring to? The "Changes in next" line is in
there, and the added prefix is documented (see below). But feel free
to adjust as you like.

We still need a decision on whether "meta: Deploy image build
artifacts into distro- and image-specific subdirs" is JUST reverted
(including the then-broken test), or whether we switch to the API
proposed here, given that there might be another breaking change in
case other artifacts (like firmware) also need to be split.

Anyway, I would really like to move forward, one way or another, as
the Isar 1.0 release is currently not usable for us.

Best regards,
Felix

Zhihang Wei

Apr 16, 2026, 11:41:16 AM
to MOESSBAUER, Felix, isar-...@googlegroups.com, Kiszka, Jan, Gylstorff, Quirin
Sorry, overlooked this part. Please ignore.
> We still need a decision if the "meta: Deploy image build artifacts
> into distro- and image-specific subdirs" is JUST reverted (including
> the then broken test), or if we switch to the API proposed here, given
> that there might be another breaking change in case other artifacts
> (like firmware) also need to be split.
We agree to switch to prefixed DTBs. That would at least be a
workaround to support the Trixie targets.

Zhihang

Zhihang Wei

Apr 16, 2026, 11:42:52 AM
to Felix Moessbauer, isar-...@googlegroups.com, jan.k...@siemens.com, quirin.g...@siemens.com
Applied to next, thanks.

Zhihang

On 4/10/26 15:22, Felix Moessbauer wrote:

Jan Kiszka

Apr 16, 2026, 11:51:51 AM
to Zhihang Wei, Felix Moessbauer, isar-...@googlegroups.com, quirin.g...@siemens.com
On 16.04.26 17:42, Zhihang Wei wrote:
> Applied to next, thanks.
>

I am repeating myself: we broke the deployment API in v1.0. This series
is breaking it once more, just differently (and in a rather unhandy way
for many downstream users).

Are we truly sure now that this is the FINAL solution, or do we plan
for n more API breakages?

Jan Kiszka

Apr 16, 2026, 12:01:46 PM
to Zhihang Wei, Felix Moessbauer, isar-...@googlegroups.com, Baurzhan Ismagulov, quirin.g...@siemens.com
On 16.04.26 17:51, Jan Kiszka wrote:
> On 16.04.26 17:42, Zhihang Wei wrote:
>> Applied to next, thanks.
>>
>
> I am repeating myself: we broke the deployment API in v1.0. This series
> is breaking it once more, just differently (and in a rather unhandy way
> for many downstream users).
>
> Are we truly sure now that this is the FINAL solution, or do we plan for
> n more API breakages?
>

But let's assume we keep it like this:

What is the plan for a v1.0.1 hotfix release? I'm seeing other patches
being merged. Some would qualify to be part of it as well; another one
is harmless but not a hotfix. So I'm asking before things possibly move
on further. At least some users here are waiting to adopt v1.0.x.

Zhihang Wei

Apr 17, 2026, 10:15:10 AM
to Jan Kiszka, Felix Moessbauer, isar-...@googlegroups.com, Baurzhan Ismagulov, quirin.g...@siemens.com
We have tested the prefixed DTB approach on our downstreams and it
works without major issues.

Regarding splitting TMPDIRs: this would break at least one of our
downstreams. We have a setup with two multiconfigs: one for rescue
(with its own rootfs, kernel, etc.) and another regular one which
depends on the rescue one and needs to consume its deployed artifacts.
Splitting TMPDIRs is not viable in this case.


As for the release plan: we would like to target v1.1 rather than a
v1.0.1 hotfix, with the API expected to remain stable for some period.
I propose that if a final solution can be found quickly, we hold v1.1
until it is merged. Otherwise, v1.1 should be based on the prefixed DTB
patch.

We do have a new proposal for the "final solution". Let me send it
under the v6 DTB discussion.

Zhihang

Jan Kiszka

Apr 17, 2026, 10:39:24 AM
to Zhihang Wei, Felix Moessbauer, isar-...@googlegroups.com, Baurzhan Ismagulov, quirin.g...@siemens.com
On 17.04.26 16:15, Zhihang Wei wrote:
>
>
> On 4/16/26 18:01, Jan Kiszka wrote:
>> On 16.04.26 17:51, Jan Kiszka wrote:
>>> On 16.04.26 17:42, Zhihang Wei wrote:
>>>> Applied to next, thanks.
>>>>
>>> I am repeating myself: we broke the deployment API in v1.0. This series
>>> is breaking it once more, just differently (and in a rather unhandy way
>>> for many downstream users).
>>>
>>> Are we truly sure now that this is the FINAL solution, or do we plan for
>>> n more API breakages?
>>>
>> But let's assume we keep it like this:
>>
>> What is the plan for a v1.0.1 hotfix release? I'm seeing other patches
>> being merged. Some would qualify to be part of it as well; another one
>> is harmless but not a hotfix. So I'm asking before things possibly move
>> on further. At least some users here are waiting to adopt v1.0.x.
>>
>> Jan
>>
> We have tested the prefixed DTB approach on our downstreams and it
> works without major issues.

You likely have nothing like the deployment scripts for LAVA
(isar-cip-core, xenomai-images). It breaks them, and we need extra logic
because of the mangled names and paths.

>
> Regarding splitting TMPDIRs: this would break at least one of our
> downstreams. We have a setup with two multiconfigs — one for rescue
> (with its own rootfs, kernel, etc.) and another regular one which
> depends on the rescue one and needs to consume its deployed artifacts.
> Splitting TMPDIRs is not viable in this case.
>

This has already been identified as a problem, yes. OE does not use it
either, only a few downstreams do. We are apparently overusing mc: or
the deployment folder here.

>
> As for the release plan: we would like to target v1.1 rather than a
> v1.0.1 hotfix, with the API expected to remain stable for some period.

We must fix this soon or 1.0 will remain a no-go version. Otherwise,
we achieve the opposite of what "1.0" was aiming at: signaling
maturity.

> I propose that if a final solution can be found quickly, we hold v1.1
> until it is merged. Otherwise, v1.1 should be based on the prefixed DTB
> patch.
>
> We do have a new proposal for the "final solution". Let me send it
> under v6 DTB discussion.
>

I'm looking into it.

Jan Kiszka

Apr 17, 2026, 11:05:51 AM
to Zhihang Wei, Felix Moessbauer, isar-...@googlegroups.com, Baurzhan Ismagulov, quirin.g...@siemens.com
Just to make it clear (again), this is how OE deploys DTBs to images:

build/tmp/deploy/images/qemusdc2/devicetree/*.dtb

I didn't check yet, but the new subdirectory structure (mostly by
vendor) might even unfold under that folder. So this renaming here is
not getting us better aligned (to be fair: we were not aligned before).