error with version 3.1


Nils Kuhn

Aug 16, 2022, 9:38:07 AM
to kas-devel
Hi,

I am facing problems building an image with kas version 3.1. 

I am getting the following error:
ERROR: linux-yocto-5.4.132+gitAUTOINC+2ff6e59274_92705f9629-r0 do_kernel_metadata: Could not generate configuration queue for qemux86-64.
ERROR: linux-yocto-5.4.132+gitAUTOINC+2ff6e59274_92705f9629-r0 do_kernel_metadata: Execution of '/build/tmp/work/qemux86_64-poky-linux/linux-yocto/5.4.132+gitAUTOINC+2ff6e59274_92705f9629-r
0/temp/run.do_kernel_metadata.157' failed with exit code 1:
[ERROR]: processing of file /tmp/tmp.yWnxj7A6cf failed
/build/tmp/hosttools/dirname: missing operand
Try '/build/tmp/hosttools/dirname --help' for more information.
WARNING: exit code 1 from a shell command.

With version 3.0.2 it works. I am running kas within a Docker container. I created a simple example here to reproduce the error: https://github.com/iteratec/kas-tmp

The integrated layer just adds a single file to SRC_URI. Without the layer, the error does not occur. I am a beginner with kas and Yocto, so it is possible that I am just overlooking something simple.

Regards, Nils

Jan Kiszka

Aug 17, 2022, 9:33:51 AM
to Nils Kuhn, kas-devel
As a background task, I was reproducing this today. I didn't find the
reason yet, though. Something goes wrong with the scc tool from [1],
called here [2]. Maybe you can instrument that and dig deeper into why it
apparently invokes dirname without a parameter at some point. Once we know
that, we may also understand how kas/kas-container contributes to it.

Jan

[1] https://git.yoctoproject.org/yocto-kernel-tools/
[2]
https://git.yoctoproject.org/poky/tree/meta/classes/kernel-yocto.bbclass?h=yocto-3.1.10#n236

--
Siemens AG, Technology
Competence Center Embedded Linux

Nils Kuhn

Aug 17, 2022, 10:47:01 AM
to kas-devel
In the meantime, we identified the commit that introduces the problem.
We forked kas and reverted just that single commit, and the build works again. So somehow the relative paths within the generated bblayers.conf cause the problem.

Works (generated by kas version 3.0.2):
...
BBLAYERS ?= " \
    /work/sources-iteratec/meta-iteratec-tmp \
    /work/sources/poky/meta \
    /work/sources/poky/meta-poky \
    /work/sources/poky/meta-yocto-bsp"
...

Does not work (generated by version 3.1):
...
BBLAYERS ?= " \
    ${TOPDIR}/../work/sources-iteratec/meta-iteratec-tmp \
    ${TOPDIR}/../work/sources/poky/meta \
    ${TOPDIR}/../work/sources/poky/meta-poky \
    ${TOPDIR}/../work/sources/poky/meta-yocto-bsp"
...
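With TOPDIR set to /build, the second form expands to the `/build/../work/...` paths that later show up in the tool invocations. A rough shell approximation of the expansion (BitBake's real variable expansion is more involved; this is only a sketch using one path from the snippet above):

```shell
#!/bin/sh
# Substitute ${TOPDIR} the way BitBake would expand it (simplified):
TOPDIR=/build
layer='${TOPDIR}/../work/sources/poky/meta'
expanded=$(printf '%s\n' "$layer" | sed 's|${TOPDIR}|'"$TOPDIR"'|')
echo "$expanded"   # -> /build/../work/sources/poky/meta
```

This explains where the `/build/../work/...` spellings in the later scc calls come from.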

Nils Kuhn

Aug 18, 2022, 3:21:34 AM
to kas-devel
In kernel-yocto.bbclass, scc is called like this in a kas v3.0.2 build:

scc --force \
  -o /build/tmp/work-shared/qemux86-64/kernel-source/.kernel-meta:cfg,merge,meta \
  -I/build/tmp/work/qemux86_64-poky-linux/linux-yocto/5.4.132+gitAUTOINC+2ff6e59274_92705f9629-r0/kernel-meta \
  -I/work/sources-iteratec/meta-iteratec-tmp/recipes-kernel/linux/linux-yocto \
  /build/tmp/work/qemux86_64-poky-linux/linux-yocto/5.4.132+gitAUTOINC+2ff6e59274_92705f9629-r0/kernel-meta/bsp/common-pc-64/common-pc-64-standard.scc \
  /work/sources-iteratec/meta-iteratec-tmp/recipes-kernel/linux/linux-yocto/iio-driver.cfg \
  features/nfsd/nfsd-enable.scc features/debug/printk.scc features/kernel-sample/kernel-sample.scc features/netfilter/netfilter.scc cfg/virtio.scc features/drm-bochs/drm-bochs.scc cfg/sound.scc cfg/paravirt_kvm.scc features/scsi/scsi-debug.scc

From kas v3.1 build, it is called like this:

scc --force \
  -o /build/tmp/work-shared/qemux86-64/kernel-source/.kernel-meta:cfg,merge,meta \
  -I/build/tmp/work/qemux86_64-poky-linux/linux-yocto/5.4.132+gitAUTOINC+2ff6e59274_92705f9629-r0/kernel-meta \
  -I/build/../work/sources-iteratec/meta-iteratec-tmp/recipes-kernel/linux/linux-yocto \
  /build/tmp/work/qemux86_64-poky-linux/linux-yocto/5.4.132+gitAUTOINC+2ff6e59274_92705f9629-r0/kernel-meta/bsp/common-pc-64/common-pc-64-standard.scc \
  /build/../work/sources-iteratec/meta-iteratec-tmp/recipes-kernel/linux/linux-yocto/iio-driver.cfg \
  features/nfsd/nfsd-enable.scc features/debug/printk.scc features/kernel-sample/kernel-sample.scc features/netfilter/netfilter.scc cfg/virtio.scc features/drm-bochs/drm-bochs.scc cfg/sound.scc cfg/paravirt_kvm.scc features/scsi/scsi-debug.scc

There are only two differences:
* the second include flag (-I), referencing the folder within my layer to search for source files, uses the path /build/../work/sources-iteratec/... instead of the plain absolute path /work/sources-iteratec/...
* the second file name given to scc (the cfg file my layer wants to include) likewise uses /build/../work/sources-iteratec/... instead of /work/sources-iteratec/...

I tried to reproduce the error with scc outside the build context on my local machine, but without success. Providing the cfg file my layer wants to include via a relative path produced an error pointing out that the compilation was not successful because of some format problems. If I changed the relative path to some non-existent path, gcc threw a 'No such file or directory' error. So I assume that scc is capable of handling relative paths in general.

Any ideas what could cause the problem? Maybe the duplicate mounts inside the container (the project root ./ is mounted to /work, and ./build is additionally mounted to /build) cause some trouble?

Nils Kuhn

Aug 18, 2022, 10:26:53 AM
to kas-devel
The dirname: missing operand error happens in this part (reduced to the relevant lines) of the generated file /build/tmp/work/qemux86_64-poky-linux/linux-yocto/5.4.132+gitAUTOINC+2ff6e59274_92705f9629-r0/temp/run.do_kernel_metadata.157:

sccs_from_src_uri="/build/../work/sources-iteratec/meta-iteratec-tmp/recipes-kernel/linux/linux-yocto/iio-driver.cfg"
patches=""
sccs_from_src_uri=$(echo $sccs_from_src_uri | awk '(match($0, "defconfig") == 0) { print $0 }' RS=' ')
sccs="$sccs_from_src_uri"

for s in ${sccs} ${patches}; do
  sdir=$(dirname $s)
done
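For reference, `dirname` only reports "missing operand" when it receives zero arguments. With the unquoted `$s` above, that can happen if an entry expands to the empty string (a hypothetical trigger for illustration, not verified against this build):

```shell
#!/bin/sh
s=""
# Unquoted empty expansion passes zero arguments, reproducing the error
# from the build log (message text as printed by GNU coreutils):
dirname $s 2>&1 | head -n1   # -> "dirname: missing operand"
# A quoted empty string is a valid operand and yields ".":
dirname "$s"                 # -> .
```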

Nils Kuhn

Aug 18, 2022, 10:28:15 AM
to kas-devel
This doesn't make much sense to me; the snippet looks fine and should work ...

Moessbauer, Felix

Aug 18, 2022, 12:59:17 PM
to Nils Kuhn, kas-devel

Hi Nils,

 

As the developer of the patch in question, please put me in CC on the following discussions.

Using relative paths in the BBLAYERS variable might not be very common in Yocto, but according to the BBLAYERS docs there is no rule that prohibits relative paths there.

 

Probably you just hit a bug in Yocto where the assumption that the path is absolute is wrong.

Maybe it would make sense to move this discussion to the Yocto ML or try to fix the respective recipe.

Please note that there is also a lengthy discussion about this topic on Stack Overflow [1].

 

We mainly need the patch for ISAR builds to improve sstate cacheability when running builds in GitLab CI.

There, the sources folder gets mounted to a non-predictable path that changes on each build.
With improved downstream cache handling (vardepvalue, etc.), this patch might (to be verified) no longer be relevant.

Then we could simply revert it. But before doing so, I would really appreciate getting a better understanding of the issue.

 

Felix

 

[1] https://stackoverflow.com/questions/38890788/why-does-the-yocto-bblayers-conf-file-use-absolute-paths

 


Nils Kuhn

Aug 19, 2022, 3:31:30 AM
to kas-devel
Hi Felix,
I do have two questions:
1) In the kas documentation it is stated that the sstate cache dir may be set for bitbake by setting SSTATE_DIR. Shouldn't it be possible to use this to enable a shared cache?
2) When running kas in a container, wouldn't it be possible to mount an arbitrary source folder to a predictable path inside the container? So, even on your CI server, couldn't you ensure that the source folder ends up at /work/sources?
Nils

Moessbauer, Felix

Aug 19, 2022, 12:57:56 PM
to Nils Kuhn, kas-devel
> From: kas-...@googlegroups.com <kas-...@googlegroups.com> On Behalf Of Nils Kuhn
> Sent: Friday, August 19, 2022 9:32 AM
> To: kas-devel <kas-...@googlegroups.com>
> Subject: Re: error with version 3.1
>
> Hi Felix,
> two questions, I do have:
> 1) In the kas documentation (https://kas.readthedocs.io/en/latest/command-line.html#environment-variables), it is stated that the sstate cache dir may be set for bitbake by setting SSTATE_DIR. Shouldn't it be possible to use this to enable a shared cache?

No! Setting SSTATE_DIR only controls where the cache artifacts are placed on disk; it is independent of the content of the cache.
The content describes which variables and values define a particular cache item.
If absolute paths end up in expanded variables, or in variables that are not excluded, the item is generated but will never be re-used.
This is especially problematic in CI, where a lot of builds happen, because it bloats the cache with items of no value.
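A toy model of this caching point (not BitBake's actual signature machinery; the runner mount paths are hypothetical): hashing the variable values directly shows why an absolute, mount-dependent path yields a fresh cache key on every run, while a TOPDIR-relative spelling stays stable:

```shell
#!/bin/sh
# Hash a variable assignment the way a (much simplified) signature
# scheme might; single quotes keep ${TOPDIR} unexpanded on purpose.
key() { printf '%s' "$1" | sha256sum | cut -d' ' -f1; }

# Two CI runs with different (hypothetical) bind-mount locations:
k1=$(key 'BBLAYERS=/builds/runner-1/src/meta-foo')
k2=$(key 'BBLAYERS=/builds/runner-2/src/meta-foo')
# The same layer expressed relative to TOPDIR, in two runs:
k3=$(key 'BBLAYERS=${TOPDIR}/../src/meta-foo')
k4=$(key 'BBLAYERS=${TOPDIR}/../src/meta-foo')

[ "$k1" != "$k2" ] && echo "absolute paths: cache key changes per run"
[ "$k3" = "$k4" ]  && echo "relative paths: cache key stable"
```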
To find these caching issues, we even wrote a cache linter for ISAR [1].
I guess Yocto provides something similar.

> 2) When running kas in a container, wouldn't it be possible, to mount an arbitrary source folder to a predictable path inside the container? So, wouldn't it be possible, even on your ci server, to ensure that source folder ends up at /work/sources?

No! In the gitlab-ci, we use the kas-container docker image as the userland / environment of the build task.
Gitlab-CI then mounts the git repo into that container using a bind mount with an unpredictable path.
We cannot use the kas-container script with the common /work /build pattern here, as this would require docker-in-docker in the CI, which is definitely not what we want.

[1] https://github.com/ilbers/isar/blob/next/scripts/isar-sstate#L798

Felix

> Nils

Nils Kuhn

Aug 22, 2022, 3:01:17 AM
to kas-devel
OK, thanks for the clarification. We are using a shell GitLab runner on a dedicated build server and use the kas image to run the individual builds on it. So our setup is different; that is why I thought it would be easy to control the path the source folder gets mounted to.

Jan Kiszka

Aug 26, 2022, 12:14:54 PM
to Nils Kuhn, kas-devel, Moessbauer, Felix (T CED SES-DE)
Debugged a bit further down the road, and it is a bug in yocto-kernel-tools,
namely around strip_common_prefix() in tools/spp. There we have a call of

strip_common_prefix /build/../work/sources-iteratec/meta-iteratec-tmp/recipes-kernel/linux/linux-yocto/iio-driver.cfg

resulting in

build/../iio-driver.cfg

because $include_paths is

"/build/tmp/work/qemux86_64-poky-linux/linux-yocto/5.4.132+gitAUTOINC+2ff6e59274_92705f9629-r0/kernel-meta /work/sources-iteratec/meta-iteratec-tmp/recipes-kernel/linux/linux-yocto"

This seems to fix it:

diff --git a/tools/spp b/tools/spp
index 4d3fa10..27a99d1 100755
--- a/tools/spp
+++ b/tools/spp
@@ -112,7 +112,7 @@ warn()
# search paths, and can be found later.
strip_common_prefix()
{
- in_name=$1
+ in_name=$(readlink -f $1)

# this takes an input name and searches all known paths.
# the relocation that removes the MOST from the original is
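Assuming strip_common_prefix removes the include path textually (a simplification of the real tools/spp logic), the effect of canonicalizing the input first, as the readlink change above does, can be seen directly in the shell. `readlink -m` is used here instead of `-f` only so the demo works without the directories existing:

```shell
#!/bin/sh
in_name=/build/../work/sources-iteratec/meta-iteratec-tmp/recipes-kernel/linux/linux-yocto/iio-driver.cfg
p=/work/sources-iteratec/meta-iteratec-tmp/recipes-kernel/linux/linux-yocto

# Textual removal of the include path leaves a bogus relative name:
printf '%s\n' "$in_name" | sed "s|$p/||"    # -> /build/../iio-driver.cfg
# Canonicalizing first makes the prefix strip behave as intended:
readlink -m "$in_name" | sed "s|$p/||"      # -> iio-driver.cfg
```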


Feel free to submit a bug report upstream.

Jan

Nils Kuhn

Aug 29, 2022, 4:43:38 AM
to kas-devel
I can do so. 

Can you say whether it is this repository:
https://github.com/brcd-org/yocto-kernel-tools/blob/master/tools/spp
Or this one:
https://github.com/sakshaya/yocto-kernel-tools/blob/master/tools/spp
Or are these all forks, with the real one not on GitHub at all?

Bezdeka, Florian

Aug 29, 2022, 5:53:13 AM
to nils...@iteratec.com, kas-...@googlegroups.com, jan.k...@siemens.com, Moessbauer, Felix
On Mon, 2022-08-29 at 01:43 -0700, Nils Kuhn wrote:
> I can do so. 
>
> Can you say whether it is this repository:
> https://github.com/brcd-org/yocto-kernel-tools/blob/master/tools/spp
> Or this one:
> https://github.com/sakshaya/yocto-kernel-tools/blob/master/tools/spp
> Or are these all forks and it is another one not at github at all?

See https://git.yoctoproject.org/yocto-kernel-tools/tree/README.

That file holds all the necessary information. They are using the
mailing list workflow for all contributions.

Florian

Peter Hoyes

Aug 31, 2022, 5:39:40 AM
to kas-devel
Hi,

I've only just seen this thread, but we discovered the same issue a few weeks ago. I started this thread on the linux-yocto mailing list: https://lists.yoctoproject.org/g/linux-yocto/topic/93057468, referencing the Kas change. The maintainer has promised to develop a fix.

We have worked around the issue here by moving the WORKDIR and BUILDDIR down a level (so they are not mounted at the root of the filesystem).

Regards,

Peter

Bezdeka, Florian

Aug 31, 2022, 6:28:47 AM
to kas-...@googlegroups.com, peter...@arm.com, jan.k...@siemens.com, Moessbauer, Felix
Hi Peter,

On Wed, 2022-08-31 at 02:39 -0700, Peter Hoyes wrote:
> Hi,
>
> I've only just seen this thread, but we discovered the same issue a
> few weeks ago. I started this thread on the linux-yocto mailing
> list: https://lists.yoctoproject.org/g/linux-yocto/topic/93057468,
> referencing the Kas change. The maintainer has promised to develop a
> fix.

there is a proposal from Jan below. Maybe you can forward that. Might
be a good starting point.

Thanks!

Moessbauer, Felix

Aug 31, 2022, 9:09:37 AM
to Nils Kuhn, kas-devel

Hi Nils,

 

I just checked if the patch is still required for ISAR / Yocto:

Yes, it is.

 

The BBLAYERS variable is referenced in the wic logic (in WICVARS) and cannot be excluded there, as otherwise changes to the variable would not be detected correctly.
This is especially important when doing (rather rare) image-in-image builds (e.g. when building a host image containing applications in Docker containers).
These inner images (the containers) can only be cached when using relative paths in BBLAYERS.

 

Felix

Nils Kuhn

Aug 31, 2022, 11:34:15 AM
to kas-devel
Thanks for taking care and transferring the most important information to the Yocto Project mailing list! It looks like Bruce Ashfield will integrate the fix when he is back in September; that's good news!

Claudius Heine

Sep 8, 2022, 8:42:26 AM
to Jan Kiszka, Nils Kuhn, kas-devel, Moessbauer, Felix (T CED SES-DE)
Hi Jan,
Does that mean that certain versions of OE, to which the eventual upstream patch is not backported, cannot be built with kas 3.1?

I think we should document that. We might even need a compatibility matrix here.

OE only checks and complains if the distro is incompatible, but since there is no new OE release with this issue patched yet, and kas also hasn't updated its distro check, this incompatibility is not made obvious to the user.

The problem here is that the change in kas was probably merged too soon: first, OE and Isar should be made to generate relocatable bblayers.conf files, and only afterwards should kas do this as well. Now kas has broken this compatibility, and it should at least document it, if not revert the patch and release a hotfix version.

Also, third-party layers could be implemented on the assumption that those paths are set in a certain way, and they would need to be fixed in order to work with kas... This just seems backwards to me:

IMO, for this kind of change: first OE needs to be fixed, then third parties should get the chance to fix their layers so that they are compatible with the new OE release, and at the same time kas should be adapted so that it generates the same bblayers.conf as OE would.

regards,
Claudius

Jan Kiszka

Sep 8, 2022, 8:54:02 AM
to Claudius Heine, Jan Kiszka, Nils Kuhn, kas-devel, Moessbauer, Felix (T CED SES-DE)
Yes, these complications are not nice. Felix looked into avoiding them, but I think he didn't find an alternative, right?

BTW, is OE then also affected by the sstate cacheability issue that Felix's patch is addressing, even though it targets Isar layers?

Jan

Moessbauer, Felix

Sep 8, 2022, 10:34:33 AM
to jan.k...@siemens.com, Claudius Heine, Jan Kiszka, Nils Kuhn, kas-devel
> -----Original Message-----
> From: Kiszka, Jan (T CED) <jan.k...@siemens.com>
> Sent: Thursday, September 8, 2022 2:54 PM
> To: Claudius Heine <c...@denx.de>; Jan Kiszka <jan.k...@web.de>; Nils Kuhn
> <nils...@iteratec.com>; kas-devel <kas-...@googlegroups.com>;
> Moessbauer, Felix (T CED SES-DE) <felix.mo...@siemens.com>
> Subject: Re: error with version 3.1
>
> On 08.09.22 14:42, Claudius Heine wrote:
> > Hi Jan,
> >
> > On 2022-08-26 18:14, Jan Kiszka wrote:
> >> On 17.08.22 15:33, Jan Kiszka wrote:
> >>> On 16.08.22 16:38, Nils Kuhn wrote:
> >>>> I am facing problems building an image with kas version 3.1.
> >>>> I created a simple example here, to reproduce the error:
> >>>> https://github.com/iteratec/kas-tmp
Actually, if that is not supported, it is a bug.
But to be realistic, we do not want to break builds just because we insist on this behavior.
If we revert the patch, we basically break the ISAR layer CI workflows where GitLab cloud CI is used
(we don't break them, but we generate hundreds of GB of trash data in S3 for common layers).

What about making this configurable?

> >
>
> Yes, these complications are not nice. Felix looked at avoiding them, but I think
> he didn't find some alternative, right?

Yes, as mentioned in this thread, there are no alternatives apart from making the paths in bblayers.conf relative to TOPDIR (or another excluded variable). Otherwise image-in-image builds will not be cacheable.
This is especially an issue for CI runners, as each build adds a huge cache artifact that will never be reused.

>
> BTW, is OE then also affected by the sstate cachability issue that Felix' patch is
> addressing, though targeting Isar layers?

Both OE and ISAR are affected, but image-in-image builds are probably even rarer in OE than in ISAR.

Felix