[PATCH 0/3] Update bitbake to 2.8.1

Felix Moessbauer

Mar 4, 2026, 8:32:05 AM
to isar-...@googlegroups.com, Felix Moessbauer
Prior to the update, we ensure the bitbake directory is patch-free.

This makes bitbake compatible with Python 3.14 and fixes a critical
error on Debian Trixie hosts where no stacktrace was shown on a
parser exception.
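
For context: Python 3.14 switched the default multiprocessing start method
on Linux from "fork" to "forkserver". A minimal sketch (plain CPython, not
part of this series) of the fork behavior bitbake depends on:

    import multiprocessing as mp

    state = {}

    def child():
        # With the "fork" context the child inherits the parent's memory,
        # so the runtime mutation below is visible without any pickling.
        print(state)

    if __name__ == "__main__":
        print(mp.get_start_method())   # "fork" up to 3.13, "forkserver" on 3.14
        state["parsed"] = 42           # mutated at runtime, not at import
        ctx = mp.get_context("fork")   # what bitbake now requests explicitly
        p = ctx.Process(target=child)
        p.start()
        p.join()                       # prints {'parsed': 42}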

Given that isar releases happen quite rarely and that Debian Trixie is the
current stable release, I consider this patch series release-critical
(despite it coming late to the table).

Best regards,
Felix Moessbauer
Siemens AG

Felix Moessbauer (3):
partial revert of "Bitbake: use LAYERDIR_RE when setting
BBFILE_PATTERN_x"
Revert "bitbake: Downgrade python requirements"
bitbake: Update to 2.8.1 release

bitbake/bin/bitbake | 2 +-
bitbake/bin/bitbake-diffsigs | 9 +-
.../bitbake-user-manual-ref-variables.rst | 2 +-
bitbake/lib/bb/__init__.py | 47 ++++++-
bitbake/lib/bb/asyncrpc/client.py | 6 +-
bitbake/lib/bb/asyncrpc/serv.py | 2 +-
bitbake/lib/bb/codeparser.py | 33 +++--
bitbake/lib/bb/command.py | 21 ++-
bitbake/lib/bb/cooker.py | 54 +++++---
bitbake/lib/bb/data.py | 2 +-
bitbake/lib/bb/data_smart.py | 16 +--
bitbake/lib/bb/event.py | 19 +--
bitbake/lib/bb/exceptions.py | 96 --------------
bitbake/lib/bb/fetch2/__init__.py | 64 ++++-----
bitbake/lib/bb/fetch2/gcp.py | 13 +-
bitbake/lib/bb/fetch2/git.py | 3 +-
bitbake/lib/bb/fetch2/gitsm.py | 44 +++----
bitbake/lib/bb/fetch2/wget.py | 29 +++--
bitbake/lib/bb/msg.py | 4 -
bitbake/lib/bb/parse/__init__.py | 12 +-
bitbake/lib/bb/parse/ast.py | 20 +--
bitbake/lib/bb/persist_data.py | 1 +
bitbake/lib/bb/runqueue.py | 123 +++++++++++++-----
bitbake/lib/bb/server/process.py | 2 +-
bitbake/lib/bb/siggen.py | 11 +-
bitbake/lib/bb/tests/fetch.py | 14 +-
.../lib/bb/tests/runqueue-tests/recipes/g1.bb | 2 +
.../lib/bb/tests/runqueue-tests/recipes/h1.bb | 0
bitbake/lib/bb/tests/runqueue.py | 11 +-
bitbake/lib/bb/tests/support/httpserver.py | 4 +-
bitbake/lib/bb/tinfoil.py | 16 ++-
bitbake/lib/bb/ui/knotty.py | 20 ++-
bitbake/lib/bb/ui/teamcity.py | 5 -
bitbake/lib/bb/utils.py | 33 ++++-
bitbake/lib/bblayers/query.py | 15 ++-
bitbake/lib/hashserv/client.py | 106 +++++++++++++--
bitbake/lib/hashserv/tests.py | 77 ++++++++++-
.../tests/testdata/layer1/conf/layer.conf | 2 +-
.../tests/testdata/layer2/conf/layer.conf | 2 +-
.../tests/testdata/layer3/conf/layer.conf | 2 +-
.../tests/testdata/layer4/conf/layer.conf | 2 +-
bitbake/lib/toaster/tests/builds/buildtest.py | 2 +-
42 files changed, 604 insertions(+), 344 deletions(-)
delete mode 100644 bitbake/lib/bb/exceptions.py
create mode 100644 bitbake/lib/bb/tests/runqueue-tests/recipes/g1.bb
create mode 100644 bitbake/lib/bb/tests/runqueue-tests/recipes/h1.bb

--
2.53.0

Felix Moessbauer

Mar 4, 2026, 8:32:06 AM
to isar-...@googlegroups.com, Felix Moessbauer
This reverts commit da17d920ee2e0f37633b7d91755e2777219a6abd.

This change was only needed for building on a Debian Buster host. As
debootstrap support was removed in d58332b2 ("bootstrap: remove
isar-bootstrap support") and the alternative (mmdebstrap) requires at
least a Bullseye host, the minimum required host version is already
Bullseye.

By reverting the patch, the bitbake directory is patch-free again.

Signed-off-by: Felix Moessbauer <felix.mo...@siemens.com>
---
bitbake/lib/bb/__init__.py | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/bitbake/lib/bb/__init__.py b/bitbake/lib/bb/__init__.py
index 7d9bc147..eef45fe4 100644
--- a/bitbake/lib/bb/__init__.py
+++ b/bitbake/lib/bb/__init__.py
@@ -12,9 +12,8 @@
__version__ = "2.8.0"

import sys
-# It was 3.8.0 originally but set to 3.7.3 for Debian Buster
-if sys.version_info < (3, 7, 3):
- raise RuntimeError("Sorry, python 3.7.3 or later is required for this version of bitbake")
+if sys.version_info < (3, 8, 0):
+ raise RuntimeError("Sorry, python 3.8.0 or later is required for this version of bitbake")

if sys.version_info < (3, 10, 0):
# With python 3.8 and 3.9, we see errors of "libgcc_s.so.1 must be installed for pthread_cancel to work"
--
2.53.0

Felix Moessbauer

Mar 4, 2026, 8:32:10 AM
to isar-...@googlegroups.com, Felix Moessbauer
Upstream commit 1c9ec1ffde75809de34c10d3ec2b40d84d258cb4.

This makes bitbake compatible with Python 3.14 and fixes a critical
error on Debian Trixie hosts where no stacktrace was shown on a
parser exception.

Signed-off-by: Felix Moessbauer <felix.mo...@siemens.com>
---
bitbake/bin/bitbake | 2 +-
bitbake/bin/bitbake-diffsigs | 9 +-
.../bitbake-user-manual-ref-variables.rst | 2 +-
bitbake/lib/bb/__init__.py | 42 +++++-
bitbake/lib/toaster/tests/builds/buildtest.py | 2 +-
38 files changed, 598 insertions(+), 337 deletions(-)
delete mode 100644 bitbake/lib/bb/exceptions.py
create mode 100644 bitbake/lib/bb/tests/runqueue-tests/recipes/g1.bb
create mode 100644 bitbake/lib/bb/tests/runqueue-tests/recipes/h1.bb

diff --git a/bitbake/bin/bitbake b/bitbake/bin/bitbake
index f494eaa1..a2a42a3f 100755
--- a/bitbake/bin/bitbake
+++ b/bitbake/bin/bitbake
@@ -27,7 +27,7 @@ from bb.main import bitbake_main, BitBakeConfigParameters, BBMainException

bb.utils.check_system_locale()

-__version__ = "2.8.0"
+__version__ = "2.8.1"

if __name__ == "__main__":
if __version__ != bb.__version__:
diff --git a/bitbake/bin/bitbake-diffsigs b/bitbake/bin/bitbake-diffsigs
index 8202c786..9d6cb8c9 100755
--- a/bitbake/bin/bitbake-diffsigs
+++ b/bitbake/bin/bitbake-diffsigs
@@ -72,16 +72,17 @@ def find_siginfo_task(bbhandler, pn, taskname, sig1=None, sig2=None):
elif sig2 not in sigfiles:
logger.error('No sigdata files found matching %s %s with signature %s' % (pn, taskname, sig2))
sys.exit(1)
+
+ latestfiles = [sigfiles[sig1]['path'], sigfiles[sig2]['path']]
else:
sigfiles = find_siginfo(bbhandler, pn, taskname)
latestsigs = sorted(sigfiles.keys(), key=lambda h: sigfiles[h]['time'])[-2:]
if not latestsigs:
logger.error('No sigdata files found matching %s %s' % (pn, taskname))
sys.exit(1)
- sig1 = latestsigs[0]
- sig2 = latestsigs[1]
-
- latestfiles = [sigfiles[sig1]['path'], sigfiles[sig2]['path']]
+ latestfiles = [sigfiles[latestsigs[0]]['path']]
+ if len(latestsigs) > 1:
+ latestfiles.append(sigfiles[latestsigs[1]]['path'])

return latestfiles

diff --git a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-ref-variables.rst b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-ref-variables.rst
index 899e584f..f23fb7f2 100644
--- a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-ref-variables.rst
+++ b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-ref-variables.rst
@@ -424,7 +424,7 @@ overview of their function and contents.

Example usage::

- BB_HASHSERVE_UPSTREAM = "hashserv.yocto.io:8687"
+ BB_HASHSERVE_UPSTREAM = "hashserv.yoctoproject.org:8686"

:term:`BB_INVALIDCONF`
Used in combination with the ``ConfigParsed`` event to trigger
diff --git a/bitbake/lib/bb/__init__.py b/bitbake/lib/bb/__init__.py
index eef45fe4..3fb2ec5d 100644
--- a/bitbake/lib/bb/__init__.py
+++ b/bitbake/lib/bb/__init__.py
@@ -9,7 +9,7 @@
# SPDX-License-Identifier: GPL-2.0-only
#

-__version__ = "2.8.0"
+__version__ = "2.8.1"

import sys
if sys.version_info < (3, 8, 0):
@@ -36,6 +36,35 @@ class BBHandledException(Exception):

import os
import logging
+from collections import namedtuple
+import multiprocessing as mp
+
+# Python 3.14 changes the default multiprocessing context from "fork" to
+# "forkserver". However, bitbake heavily relies on "fork" behavior to
+# efficiently pass data to the child processes. Places that need this should do:
+# from bb import multiprocessing
+# in place of
+# import multiprocessing
+
+class MultiprocessingContext(object):
+ """
+ Multiprocessing proxy object that uses the "fork" context for a property if
+ available, otherwise goes to the main multiprocessing module. This allows
+ it to be a drop-in replacement for the multiprocessing module, but use the
+ fork context
+ """
+ def __init__(self):
+ super().__setattr__("_ctx", mp.get_context("fork"))
+
+ def __getattr__(self, name):
+ if hasattr(self._ctx, name):
+ return getattr(self._ctx, name)
+ return getattr(mp, name)
+
+ def __setattr__(self, name, value):
+ raise AttributeError(f"Unable to set attribute {name}")
+
+multiprocessing = MultiprocessingContext()


class NullHandler(logging.Handler):
@@ -227,3 +256,14 @@ def deprecate_import(current, modulename, fromlist, renames = None):

setattr(sys.modules[current], newname, newobj)

+TaskData = namedtuple("TaskData", [
+ "pn",
+ "taskname",
+ "fn",
+ "deps",
+ "provides",
+ "taskhash",
+ "unihash",
+ "hashfn",
+ "taskhash_deps",
+])
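
The MultiprocessingContext proxy introduced above is meant as a drop-in
for the stock module; a minimal usage sketch (assuming bitbake's lib/
directory is on sys.path):

    from bb import multiprocessing   # instead of: import multiprocessing

    q = multiprocessing.Queue()      # resolved on the "fork" context
    p = multiprocessing.Process(target=q.put, args=("hello",))
    p.start()
    p.join()
    print(q.get())                   # -> "hello"

Names the fork context does not provide fall through to the stock
multiprocessing module, so existing call sites keep working unchanged.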
diff --git a/bitbake/lib/bb/asyncrpc/client.py b/bitbake/lib/bb/asyncrpc/client.py
index a350b4fb..6fa2839f 100644
--- a/bitbake/lib/bb/asyncrpc/client.py
+++ b/bitbake/lib/bb/asyncrpc/client.py
@@ -87,7 +87,11 @@ class AsyncClient(object):
import websockets

async def connect_sock():
- websocket = await websockets.connect(uri, ping_interval=None)
+ websocket = await websockets.connect(
+ uri,
+ ping_interval=None,
+ open_timeout=self.timeout,
+ )
return WebsocketConnection(websocket, self.timeout)

self._connect_sock = connect_sock
diff --git a/bitbake/lib/bb/asyncrpc/serv.py b/bitbake/lib/bb/asyncrpc/serv.py
index a66117ac..953c02ef 100644
--- a/bitbake/lib/bb/asyncrpc/serv.py
+++ b/bitbake/lib/bb/asyncrpc/serv.py
@@ -11,7 +11,7 @@ import os
import signal
import socket
import sys
-import multiprocessing
+from bb import multiprocessing
import logging
from .connection import StreamConnection, WebsocketConnection
from .exceptions import ClientError, ServerError, ConnectionClosedError, InvokeError
diff --git a/bitbake/lib/bb/codeparser.py b/bitbake/lib/bb/codeparser.py
index 2e8b7ced..1001ca19 100644
--- a/bitbake/lib/bb/codeparser.py
+++ b/bitbake/lib/bb/codeparser.py
@@ -72,6 +72,11 @@ def add_module_functions(fn, functions, namespace):
parser.parse_python(None, filename=fn, lineno=1, fixedhash=fixedhash+f)
#bb.warn("Cached %s" % f)
except KeyError:
+ targetfn = inspect.getsourcefile(functions[f])
+ if fn != targetfn:
+ # Skip references to other modules outside this file
+ #bb.warn("Skipping %s" % name)
+ continue
lines, lineno = inspect.getsourcelines(functions[f])
src = "".join(lines)
parser.parse_python(src, filename=fn, lineno=lineno, fixedhash=fixedhash+f)
@@ -82,14 +87,14 @@ def add_module_functions(fn, functions, namespace):
if e in functions:
execs.remove(e)
execs.add(namespace + "." + e)
- modulecode_deps[name] = [parser.references.copy(), execs, parser.var_execs.copy(), parser.contains.copy()]
+ modulecode_deps[name] = [parser.references.copy(), execs, parser.var_execs.copy(), parser.contains.copy(), parser.extra]
#bb.warn("%s: %s\nRefs:%s Execs: %s %s %s" % (name, fn, parser.references, parser.execs, parser.var_execs, parser.contains))

def update_module_dependencies(d):
for mod in modulecode_deps:
excludes = set((d.getVarFlag(mod, "vardepsexclude") or "").split())
if excludes:
- modulecode_deps[mod] = [modulecode_deps[mod][0] - excludes, modulecode_deps[mod][1] - excludes, modulecode_deps[mod][2] - excludes, modulecode_deps[mod][3]]
+ modulecode_deps[mod] = [modulecode_deps[mod][0] - excludes, modulecode_deps[mod][1] - excludes, modulecode_deps[mod][2] - excludes, modulecode_deps[mod][3], modulecode_deps[mod][4]]

# A custom getstate/setstate using tuples is actually worth 15% cachesize by
# avoiding duplication of the attribute names!
@@ -112,21 +117,22 @@ class SetCache(object):
codecache = SetCache()

class pythonCacheLine(object):
- def __init__(self, refs, execs, contains):
+ def __init__(self, refs, execs, contains, extra):
self.refs = codecache.internSet(refs)
self.execs = codecache.internSet(execs)
self.contains = {}
for c in contains:
self.contains[c] = codecache.internSet(contains[c])
+ self.extra = extra

def __getstate__(self):
- return (self.refs, self.execs, self.contains)
+ return (self.refs, self.execs, self.contains, self.extra)

def __setstate__(self, state):
- (refs, execs, contains) = state
- self.__init__(refs, execs, contains)
+ (refs, execs, contains, extra) = state
+ self.__init__(refs, execs, contains, extra)
def __hash__(self):
- l = (hash(self.refs), hash(self.execs))
+ l = (hash(self.refs), hash(self.execs), hash(self.extra))
for c in sorted(self.contains.keys()):
l = l + (c, hash(self.contains[c]))
return hash(l)
@@ -155,7 +161,7 @@ class CodeParserCache(MultiProcessCache):
# so that an existing cache gets invalidated. Additionally you'll need
# to increment __cache_version__ in cache.py in order to ensure that old
# recipe caches don't trigger "Taskhash mismatch" errors.
- CACHE_VERSION = 11
+ CACHE_VERSION = 12

def __init__(self):
MultiProcessCache.__init__(self)
@@ -169,8 +175,8 @@ class CodeParserCache(MultiProcessCache):
self.pythoncachelines = {}
self.shellcachelines = {}

- def newPythonCacheLine(self, refs, execs, contains):
- cacheline = pythonCacheLine(refs, execs, contains)
+ def newPythonCacheLine(self, refs, execs, contains, extra):
+ cacheline = pythonCacheLine(refs, execs, contains, extra)
h = hash(cacheline)
if h in self.pythoncachelines:
return self.pythoncachelines[h]
@@ -338,6 +344,7 @@ class PythonParser():
self.contains = {}
for i in codeparsercache.pythoncache[h].contains:
self.contains[i] = set(codeparsercache.pythoncache[h].contains[i])
+ self.extra = codeparsercache.pythoncache[h].extra
return

if h in codeparsercache.pythoncacheextras:
@@ -346,6 +353,7 @@ class PythonParser():
self.contains = {}
for i in codeparsercache.pythoncacheextras[h].contains:
self.contains[i] = set(codeparsercache.pythoncacheextras[h].contains[i])
+ self.extra = codeparsercache.pythoncacheextras[h].extra
return

if fixedhash and not node:
@@ -364,8 +372,11 @@ class PythonParser():
self.visit_Call(n)

self.execs.update(self.var_execs)
+ self.extra = None
+ if fixedhash:
+ self.extra = bbhash(str(node))

- codeparsercache.pythoncacheextras[h] = codeparsercache.newPythonCacheLine(self.references, self.execs, self.contains)
+ codeparsercache.pythoncacheextras[h] = codeparsercache.newPythonCacheLine(self.references, self.execs, self.contains, self.extra)

class ShellParser():
def __init__(self, name, log):
diff --git a/bitbake/lib/bb/command.py b/bitbake/lib/bb/command.py
index 1fcb9bf1..5e166fe4 100644
--- a/bitbake/lib/bb/command.py
+++ b/bitbake/lib/bb/command.py
@@ -420,15 +420,30 @@ class CommandsSync:
return command.cooker.recipecaches[mc].pkg_dp
getDefaultPreference.readonly = True

+
def getSkippedRecipes(self, command, params):
+ """
+ Get the map of skipped recipes for the specified multiconfig/mc name (`params[0]`).
+
+ Invoked by `bb.tinfoil.Tinfoil.get_skipped_recipes`
+
+ :param command: Internally used parameter.
+ :param params: Parameter array. params[0] is multiconfig/mc name. If not given, then default mc '' is assumed.
+ :return: Dict whose keys are virtualfns and values are `bb.cooker.SkippedPackage`
+ """
+ try:
+ mc = params[0]
+ except IndexError:
+ mc = ''
+
# Return list sorted by reverse priority order
import bb.cache
def sortkey(x):
vfn, _ = x
- realfn, _, mc = bb.cache.virtualfn2realfn(vfn)
- return (-command.cooker.collections[mc].calc_bbfile_priority(realfn)[0], vfn)
+ realfn, _, item_mc = bb.cache.virtualfn2realfn(vfn)
+ return -command.cooker.collections[item_mc].calc_bbfile_priority(realfn)[0], vfn

- skipdict = OrderedDict(sorted(command.cooker.skiplist.items(), key=sortkey))
+ skipdict = OrderedDict(sorted(command.cooker.skiplist_by_mc[mc].items(), key=sortkey))
return list(skipdict.items())
getSkippedRecipes.readonly = True

diff --git a/bitbake/lib/bb/cooker.py b/bitbake/lib/bb/cooker.py
index c5bfef55..778cbb58 100644
--- a/bitbake/lib/bb/cooker.py
+++ b/bitbake/lib/bb/cooker.py
@@ -12,12 +12,12 @@
import sys, os, glob, os.path, re, time
import itertools
import logging
-import multiprocessing
+from bb import multiprocessing
import threading
from io import StringIO, UnsupportedOperation
from contextlib import closing
from collections import defaultdict, namedtuple
-import bb, bb.exceptions, bb.command
+import bb, bb.command
from bb import utils, data, parse, event, cache, providers, taskdata, runqueue, build
import queue
import signal
@@ -134,7 +134,8 @@ class BBCooker:
self.baseconfig_valid = False
self.parsecache_valid = False
self.eventlog = None
- self.skiplist = {}
+ # The skiplists, one per multiconfig
+ self.skiplist_by_mc = defaultdict(dict)
self.featureset = CookerFeatures()
if featureSet:
for f in featureSet:
@@ -315,13 +316,13 @@ class BBCooker:
dbfile = (self.data.getVar("PERSISTENT_DIR") or self.data.getVar("CACHE")) + "/hashserv.db"
upstream = self.data.getVar("BB_HASHSERVE_UPSTREAM") or None
if upstream:
- import socket
try:
- sock = socket.create_connection(upstream.split(":"), 5)
- sock.close()
- except socket.error as e:
+ with hashserv.create_client(upstream) as client:
+ client.ping()
+ except (ConnectionError, ImportError) as e:
bb.warn("BB_HASHSERVE_UPSTREAM is not valid, unable to connect hash equivalence server at '%s': %s"
% (upstream, repr(e)))
+ upstream = None

self.hashservaddr = "unix://%s/hashserve.sock" % self.data.getVar("TOPDIR")
self.hashserv = hashserv.create_server(
@@ -612,8 +613,8 @@ class BBCooker:
localdata = {}

for mc in self.multiconfigs:
- taskdata[mc] = bb.taskdata.TaskData(halt, skiplist=self.skiplist, allowincomplete=allowincomplete)
- localdata[mc] = data.createCopy(self.databuilder.mcdata[mc])
+ taskdata[mc] = bb.taskdata.TaskData(halt, skiplist=self.skiplist_by_mc[mc], allowincomplete=allowincomplete)
+ localdata[mc] = bb.data.createCopy(self.databuilder.mcdata[mc])
bb.data.expandKeys(localdata[mc])

current = 0
@@ -933,7 +934,7 @@ class BBCooker:
for mc in self.multiconfigs:
# First get list of recipes, including skipped
recipefns = list(self.recipecaches[mc].pkg_fn.keys())
- recipefns.extend(self.skiplist.keys())
+ recipefns.extend(self.skiplist_by_mc[mc].keys())

# Work out list of bbappends that have been applied
applied_appends = []
@@ -1459,7 +1460,6 @@ class BBCooker:

if t in task or getAllTaskSignatures:
try:
- rq.rqdata.prepare_task_hash(tid)
sig.append([pn, t, rq.rqdata.get_task_unihash(tid)])
except KeyError:
sig.append(self.getTaskSignatures(target, [t])[0])
@@ -2098,7 +2098,6 @@ class Parser(multiprocessing.Process):
except Exception as exc:
tb = sys.exc_info()[2]
exc.recipe = filename
- exc.traceback = list(bb.exceptions.extract_traceback(tb, context=3))
return True, None, exc
# Need to turn BaseExceptions into Exceptions here so we gracefully shutdown
# and for example a worker thread doesn't just exit on its own in response to
@@ -2299,8 +2298,12 @@ class CookerParser(object):
return False
except ParsingFailure as exc:
self.error += 1
- logger.error('Unable to parse %s: %s' %
- (exc.recipe, bb.exceptions.to_string(exc.realexception)))
+
+ exc_desc = str(exc)
+ if isinstance(exc, SystemExit) and not isinstance(exc.code, str):
+ exc_desc = 'Exited with "%d"' % exc.code
+
+ logger.error('Unable to parse %s: %s' % (exc.recipe, exc_desc))
self.shutdown(clean=False)
return False
except bb.parse.ParseError as exc:
@@ -2309,20 +2312,33 @@ class CookerParser(object):
self.shutdown(clean=False, eventmsg=str(exc))
return False
except bb.data_smart.ExpansionError as exc:
+ def skip_frames(f, fn_prefix):
+ while f and f.tb_frame.f_code.co_filename.startswith(fn_prefix):
+ f = f.tb_next
+ return f
+
self.error += 1
bbdir = os.path.dirname(__file__) + os.sep
- etype, value, _ = sys.exc_info()
- tb = list(itertools.dropwhile(lambda e: e.filename.startswith(bbdir), exc.traceback))
+ etype, value, tb = sys.exc_info()
+
+ # Remove any frames where the code comes from bitbake. This
+ # prevents deep (and pretty useless) backtraces for expansion error
+ tb = skip_frames(tb, bbdir)
+ cur = tb
+ while cur:
+ cur.tb_next = skip_frames(cur.tb_next, bbdir)
+ cur = cur.tb_next
+
logger.error('ExpansionError during parsing %s', value.recipe,
exc_info=(etype, value, tb))
self.shutdown(clean=False)
return False
except Exception as exc:
self.error += 1
- etype, value, tb = sys.exc_info()
+ _, value, _ = sys.exc_info()
if hasattr(value, "recipe"):
logger.error('Unable to parse %s' % value.recipe,
- exc_info=(etype, value, exc.traceback))
+ exc_info=sys.exc_info())
else:
# Most likely, an exception occurred during raising an exception
import traceback
@@ -2343,7 +2359,7 @@ class CookerParser(object):
for virtualfn, info_array in result:
if info_array[0].skipped:
self.skipped += 1
- self.cooker.skiplist[virtualfn] = SkippedPackage(info_array[0])
+ self.cooker.skiplist_by_mc[mc][virtualfn] = SkippedPackage(info_array[0])
self.bb_caches[mc].add_info(virtualfn, info_array, self.cooker.recipecaches[mc],
parsed=parsed, watcher = self.cooker.add_filewatch)
return True
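
Note that the frame trimming above relies on traceback objects having a
writable tb_next (Python 3.7 onwards). Standalone, the idea looks roughly
like this:

    def skip_frames(tb, fn_prefix):
        while tb and tb.tb_frame.f_code.co_filename.startswith(fn_prefix):
            tb = tb.tb_next
        return tb

    def trim_traceback(tb, fn_prefix):
        # Drop leading frames whose code lives under fn_prefix, then relink
        # the chain so interior frames from fn_prefix disappear as well.
        tb = skip_frames(tb, fn_prefix)
        cur = tb
        while cur:
            cur.tb_next = skip_frames(cur.tb_next, fn_prefix)
            cur = cur.tb_next
        return tb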
diff --git a/bitbake/lib/bb/data.py b/bitbake/lib/bb/data.py
index 505f4295..f672a844 100644
--- a/bitbake/lib/bb/data.py
+++ b/bitbake/lib/bb/data.py
@@ -293,7 +293,7 @@ def build_dependencies(key, keys, mod_funcs, shelldeps, varflagsexcl, ignored_va
if key in mod_funcs:
exclusions = set()
moddep = bb.codeparser.modulecode_deps[key]
- value = handle_contains("", moddep[3], exclusions, d)
+ value = handle_contains(moddep[4], moddep[3], exclusions, d)
return frozenset((moddep[0] | keys & moddep[1]) - ignored_vars), value

if key[-1] == ']':
diff --git a/bitbake/lib/bb/data_smart.py b/bitbake/lib/bb/data_smart.py
index 0128a5bb..7b67127c 100644
--- a/bitbake/lib/bb/data_smart.py
+++ b/bitbake/lib/bb/data_smart.py
@@ -31,7 +31,7 @@ logger = logging.getLogger("BitBake.Data")

__setvar_keyword__ = [":append", ":prepend", ":remove"]
__setvar_regexp__ = re.compile(r'(?P<base>.*?)(?P<keyword>:append|:prepend|:remove)(:(?P<add>[^A-Z]*))?$')
-__expand_var_regexp__ = re.compile(r"\${[a-zA-Z0-9\-_+./~:]+?}")
+__expand_var_regexp__ = re.compile(r"\${[a-zA-Z0-9\-_+./~:]+}")
__expand_python_regexp__ = re.compile(r"\${@(?:{.*?}|.)+?}")
__whitespace_split__ = re.compile(r'(\s)')
__override_regexp__ = re.compile(r'[a-z0-9]+')
@@ -272,12 +272,9 @@ class VariableHistory(object):
return
if 'op' not in loginfo or not loginfo['op']:
loginfo['op'] = 'set'
- if 'detail' in loginfo:
- loginfo['detail'] = str(loginfo['detail'])
if 'variable' not in loginfo or 'file' not in loginfo:
raise ValueError("record() missing variable or file.")
var = loginfo['variable']
-
if var not in self.variables:
self.variables[var] = []
if not isinstance(self.variables[var], list):
@@ -336,7 +333,8 @@ class VariableHistory(object):
flag = '[%s] ' % (event['flag'])
else:
flag = ''
- o.write("# %s %s:%s%s\n# %s\"%s\"\n" % (event['op'], event['file'], event['line'], display_func, flag, re.sub('\n', '\n# ', event['detail'])))
+ o.write("# %s %s:%s%s\n# %s\"%s\"\n" % \
+ (event['op'], event['file'], event['line'], display_func, flag, re.sub('\n', '\n# ', str(event['detail']))))
if len(history) > 1:
o.write("# pre-expansion value:\n")
o.write('# "%s"\n' % (commentVal))
@@ -390,7 +388,7 @@ class VariableHistory(object):
if isset and event['op'] == 'set?':
continue
isset = True
- items = d.expand(event['detail']).split()
+ items = d.expand(str(event['detail'])).split()
for item in items:
# This is a little crude but is belt-and-braces to avoid us
# having to handle every possible operation type specifically
@@ -582,12 +580,9 @@ class DataSmart(MutableMapping):
else:
loginfo['op'] = keyword
self.varhistory.record(**loginfo)
- # todo make sure keyword is not __doc__ or __module__
- # pay the cookie monster

# more cookies for the cookie monster
- if ':' in var:
- self._setvar_update_overrides(base, **loginfo)
+ self._setvar_update_overrides(base, **loginfo)

if base in self.overridevars:
self._setvar_update_overridevars(var, value)
@@ -640,6 +635,7 @@ class DataSmart(MutableMapping):
nextnew.update(vardata.contains.keys())
new = nextnew
self.overrides = None
+ self.expand_cache = {}

def _setvar_update_overrides(self, var, **loginfo):
# aka pay the cookie monster
diff --git a/bitbake/lib/bb/event.py b/bitbake/lib/bb/event.py
index 4761c868..a12adbc9 100644
--- a/bitbake/lib/bb/event.py
+++ b/bitbake/lib/bb/event.py
@@ -19,7 +19,6 @@ import sys
import threading
import traceback

-import bb.exceptions
import bb.utils

# This is the pid for which we should generate the event. This is set when
@@ -195,7 +194,12 @@ def fire_ui_handlers(event, d):
ui_queue.append(event)
return

- with bb.utils.lock_timeout(_thread_lock):
+ with bb.utils.lock_timeout_nocheck(_thread_lock) as lock:
+ if not lock:
+ # If we can't get the lock, we may be recursively called, queue and return
+ ui_queue.append(event)
+ return
+
errors = []
for h in _ui_handlers:
#print "Sending event %s" % event
@@ -214,6 +218,9 @@ def fire_ui_handlers(event, d):
for h in errors:
del _ui_handlers[h]

+ while ui_queue:
+ fire_ui_handlers(ui_queue.pop(), d)
+
def fire(event, d):
"""Fire off an Event"""

@@ -759,13 +766,7 @@ class LogHandler(logging.Handler):

def emit(self, record):
if record.exc_info:
- etype, value, tb = record.exc_info
- if hasattr(tb, 'tb_next'):
- tb = list(bb.exceptions.extract_traceback(tb, context=3))
- # Need to turn the value into something the logging system can pickle
- record.bb_exc_info = (etype, value, tb)
- record.bb_exc_formatted = bb.exceptions.format_exception(etype, value, tb, limit=5)
- value = str(value)
+ record.bb_exc_formatted = traceback.format_exception(*record.exc_info)
record.exc_info = None
fire(record, None)
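
The switch to lock_timeout_nocheck above avoids a deadlock when a UI
handler recursively fires events. Simplified to a plain non-blocking lock
(bitbake's helper uses a timed wait), the pattern is:

    import threading

    _lock = threading.Lock()
    _queue = []

    def fire(event):
        if not _lock.acquire(blocking=False):
            # Recursive call while an event is being handled: queue it and
            # let the outer invocation drain the queue after releasing.
            _queue.append(event)
            return
        try:
            print("handled", event)   # stand-in for the real UI handlers
        finally:
            _lock.release()
        while _queue:
            fire(_queue.pop())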

diff --git a/bitbake/lib/bb/exceptions.py b/bitbake/lib/bb/exceptions.py
deleted file mode 100644
index 801db9c8..00000000
--- a/bitbake/lib/bb/exceptions.py
+++ /dev/null
@@ -1,96 +0,0 @@
-#
-# Copyright BitBake Contributors
-#
-# SPDX-License-Identifier: GPL-2.0-only
-#
-
-import inspect
-import traceback
-import bb.namedtuple_with_abc
-from collections import namedtuple
-
-
-class TracebackEntry(namedtuple.abc):
- """Pickleable representation of a traceback entry"""
- _fields = 'filename lineno function args code_context index'
- _header = ' File "{0.filename}", line {0.lineno}, in {0.function}{0.args}'
-
- def format(self, formatter=None):
- if not self.code_context:
- return self._header.format(self) + '\n'
-
- formatted = [self._header.format(self) + ':\n']
-
- for lineindex, line in enumerate(self.code_context):
- if formatter:
- line = formatter(line)
-
- if lineindex == self.index:
- formatted.append(' >%s' % line)
- else:
- formatted.append(' %s' % line)
- return formatted
-
- def __str__(self):
- return ''.join(self.format())
-
-def _get_frame_args(frame):
- """Get the formatted arguments and class (if available) for a frame"""
- arginfo = inspect.getargvalues(frame)
-
- try:
- if not arginfo.args:
- return '', None
- # There have been reports from the field of python 2.6 which doesn't
- # return a namedtuple here but simply a tuple so fallback gracefully if
- # args isn't present.
- except AttributeError:
- return '', None
-
- firstarg = arginfo.args[0]
- if firstarg == 'self':
- self = arginfo.locals['self']
- cls = self.__class__.__name__
-
- arginfo.args.pop(0)
- del arginfo.locals['self']
- else:
- cls = None
-
- formatted = inspect.formatargvalues(*arginfo)
- return formatted, cls
-
-def extract_traceback(tb, context=1):
- frames = inspect.getinnerframes(tb, context)
- for frame, filename, lineno, function, code_context, index in frames:
- formatted_args, cls = _get_frame_args(frame)
- if cls:
- function = '%s.%s' % (cls, function)
- yield TracebackEntry(filename, lineno, function, formatted_args,
- code_context, index)
-
-def format_extracted(extracted, formatter=None, limit=None):
- if limit:
- extracted = extracted[-limit:]
-
- formatted = []
- for tracebackinfo in extracted:
- formatted.extend(tracebackinfo.format(formatter))
- return formatted
-
-
-def format_exception(etype, value, tb, context=1, limit=None, formatter=None):
- formatted = ['Traceback (most recent call last):\n']
-
- if hasattr(tb, 'tb_next'):
- tb = extract_traceback(tb, context)
-
- formatted.extend(format_extracted(tb, formatter, limit))
- formatted.extend(traceback.format_exception_only(etype, value))
- return formatted
-
-def to_string(exc):
- if isinstance(exc, SystemExit):
- if not isinstance(exc.code, str):
- return 'Exited with "%d"' % exc.code
- return str(exc)
diff --git a/bitbake/lib/bb/fetch2/__init__.py b/bitbake/lib/bb/fetch2/__init__.py
index 5bf2c4b8..1a6ff25d 100644
--- a/bitbake/lib/bb/fetch2/__init__.py
+++ b/bitbake/lib/bb/fetch2/__init__.py
@@ -237,7 +237,7 @@ class URI(object):
# to RFC compliant URL format. E.g.:
# file://foo.diff -> file:foo.diff
if urlp.scheme in self._netloc_forbidden:
- uri = re.sub("(?<=:)//(?!/)", "", uri, 1)
+ uri = re.sub(r"(?<=:)//(?!/)", "", uri, count=1)
reparse = 1

if reparse:
@@ -499,30 +499,30 @@ def fetcher_init(d):
Calls before this must not hit the cache.
"""

- revs = bb.persist_data.persist('BB_URI_HEADREVS', d)
- try:
- # fetcher_init is called multiple times, so make sure we only save the
- # revs the first time it is called.
- if not bb.fetch2.saved_headrevs:
- bb.fetch2.saved_headrevs = dict(revs)
- except:
- pass
-
- # When to drop SCM head revisions controlled by user policy
- srcrev_policy = d.getVar('BB_SRCREV_POLICY') or "clear"
- if srcrev_policy == "cache":
- logger.debug("Keeping SRCREV cache due to cache policy of: %s", srcrev_policy)
- elif srcrev_policy == "clear":
- logger.debug("Clearing SRCREV cache due to cache policy of: %s", srcrev_policy)
- revs.clear()
- else:
- raise FetchError("Invalid SRCREV cache policy of: %s" % srcrev_policy)
+ with bb.persist_data.persist('BB_URI_HEADREVS', d) as revs:
+ try:
+ # fetcher_init is called multiple times, so make sure we only save the
+ # revs the first time it is called.
+ if not bb.fetch2.saved_headrevs:
+ bb.fetch2.saved_headrevs = dict(revs)
+ except:
+ pass

- _checksum_cache.init_cache(d.getVar("BB_CACHEDIR"))
+ # When to drop SCM head revisions controlled by user policy
+ srcrev_policy = d.getVar('BB_SRCREV_POLICY') or "clear"
+ if srcrev_policy == "cache":
+ logger.debug("Keeping SRCREV cache due to cache policy of: %s", srcrev_policy)
+ elif srcrev_policy == "clear":
+ logger.debug("Clearing SRCREV cache due to cache policy of: %s", srcrev_policy)
+ revs.clear()
+ else:
+ raise FetchError("Invalid SRCREV cache policy of: %s" % srcrev_policy)
+
+ _checksum_cache.init_cache(d.getVar("BB_CACHEDIR"))

- for m in methods:
- if hasattr(m, "init"):
- m.init(d)
+ for m in methods:
+ if hasattr(m, "init"):
+ m.init(d)

def fetcher_parse_save():
_checksum_cache.save_extras()
@@ -536,8 +536,8 @@ def fetcher_compare_revisions(d):
when bitbake was started and return true if they have changed.
"""

- headrevs = dict(bb.persist_data.persist('BB_URI_HEADREVS', d))
- return headrevs != bb.fetch2.saved_headrevs
+ with dict(bb.persist_data.persist('BB_URI_HEADREVS', d)) as headrevs:
+ return headrevs != bb.fetch2.saved_headrevs

def mirror_from_string(data):
mirrors = (data or "").replace('\\n',' ').split()
@@ -1662,13 +1662,13 @@ class FetchMethod(object):
if not hasattr(self, "_latest_revision"):
raise ParameterError("The fetcher for this URL does not support _latest_revision", ud.url)

- revs = bb.persist_data.persist('BB_URI_HEADREVS', d)
- key = self.generate_revision_key(ud, d, name)
- try:
- return revs[key]
- except KeyError:
- revs[key] = rev = self._latest_revision(ud, d, name)
- return rev
+ with bb.persist_data.persist('BB_URI_HEADREVS', d) as revs:
+ key = self.generate_revision_key(ud, d, name)
+ try:
+ return revs[key]
+ except KeyError:
+ revs[key] = rev = self._latest_revision(ud, d, name)
+ return rev

def sortable_revision(self, ud, d, name):
latest_rev = self._build_revision(ud, d, name)
diff --git a/bitbake/lib/bb/fetch2/gcp.py b/bitbake/lib/bb/fetch2/gcp.py
index f40ce2ea..2ee9ed21 100644
--- a/bitbake/lib/bb/fetch2/gcp.py
+++ b/bitbake/lib/bb/fetch2/gcp.py
@@ -47,7 +47,6 @@ class GCP(FetchMethod):
ud.basename = os.path.basename(ud.path)

ud.localfile = d.expand(urllib.parse.unquote(ud.basename))
- ud.basecmd = "gsutil stat"

def get_gcp_client(self):
from google.cloud import storage
@@ -58,17 +57,20 @@ class GCP(FetchMethod):
Fetch urls using the GCP API.
Assumes localpath was called first.
"""
+ from google.api_core.exceptions import NotFound
logger.debug2(f"Trying to download gs://{ud.host}{ud.path} to {ud.localpath}")
if self.gcp_client is None:
self.get_gcp_client()

- bb.fetch2.check_network_access(d, ud.basecmd, f"gs://{ud.host}{ud.path}")
- runfetchcmd("%s %s" % (ud.basecmd, f"gs://{ud.host}{ud.path}"), d)
+ bb.fetch2.check_network_access(d, "blob.download_to_filename", f"gs://{ud.host}{ud.path}")

# Path sometimes has leading slash, so strip it
path = ud.path.lstrip("/")
blob = self.gcp_client.bucket(ud.host).blob(path)
- blob.download_to_filename(ud.localpath)
+ try:
+ blob.download_to_filename(ud.localpath)
+ except NotFound:
+ raise FetchError("The GCP API threw a NotFound exception")

# Additional sanity checks copied from the wget class (although there
# are no known issues which mean these are required, treat the GCP API
@@ -90,8 +92,7 @@ class GCP(FetchMethod):
if self.gcp_client is None:
self.get_gcp_client()

- bb.fetch2.check_network_access(d, ud.basecmd, f"gs://{ud.host}{ud.path}")
- runfetchcmd("%s %s" % (ud.basecmd, f"gs://{ud.host}{ud.path}"), d)
+ bb.fetch2.check_network_access(d, "gcp_client.bucket(ud.host).blob(path).exists()", f"gs://{ud.host}{ud.path}")

# Path sometimes has leading slash, so strip it
path = ud.path.lstrip("/")
diff --git a/bitbake/lib/bb/fetch2/git.py b/bitbake/lib/bb/fetch2/git.py
index c7ff769f..60291446 100644
--- a/bitbake/lib/bb/fetch2/git.py
+++ b/bitbake/lib/bb/fetch2/git.py
@@ -926,9 +926,8 @@ class Git(FetchMethod):
commits = None
else:
if not os.path.exists(rev_file) or not os.path.getsize(rev_file):
- from pipes import quote
commits = bb.fetch2.runfetchcmd(
- "git rev-list %s -- | wc -l" % quote(rev),
+ "git rev-list %s -- | wc -l" % shlex.quote(rev),
d, quiet=True).strip().lstrip('0')
if commits:
open(rev_file, "w").write("%d\n" % int(commits))
diff --git a/bitbake/lib/bb/fetch2/gitsm.py b/bitbake/lib/bb/fetch2/gitsm.py
index f7f3af72..fab4b116 100644
--- a/bitbake/lib/bb/fetch2/gitsm.py
+++ b/bitbake/lib/bb/fetch2/gitsm.py
@@ -147,6 +147,19 @@ class GitSM(Git):

return submodules != []

+ def call_process_submodules(self, ud, d, extra_check, subfunc):
+ # If we're using a shallow mirror tarball it needs to be
+ # unpacked temporarily so that we can examine the .gitmodules file
+ if ud.shallow and os.path.exists(ud.fullshallow) and extra_check:
+ tmpdir = tempfile.mkdtemp(dir=d.getVar("DL_DIR"))
+ try:
+ runfetchcmd("tar -xzf %s" % ud.fullshallow, d, workdir=tmpdir)
+ self.process_submodules(ud, tmpdir, subfunc, d)
+ finally:
+ shutil.rmtree(tmpdir)
+ else:
+ self.process_submodules(ud, ud.clonedir, subfunc, d)
+
def need_update(self, ud, d):
if Git.need_update(self, ud, d):
return True
@@ -164,15 +177,7 @@ class GitSM(Git):
logger.error('gitsm: submodule update check failed: %s %s' % (type(e).__name__, str(e)))
need_update_result = True

- # If we're using a shallow mirror tarball it needs to be unpacked
- # temporarily so that we can examine the .gitmodules file
- if ud.shallow and os.path.exists(ud.fullshallow) and not os.path.exists(ud.clonedir):
- tmpdir = tempfile.mkdtemp(dir=d.getVar("DL_DIR"))
- runfetchcmd("tar -xzf %s" % ud.fullshallow, d, workdir=tmpdir)
- self.process_submodules(ud, tmpdir, need_update_submodule, d)
- shutil.rmtree(tmpdir)
- else:
- self.process_submodules(ud, ud.clonedir, need_update_submodule, d)
+ self.call_process_submodules(ud, d, not os.path.exists(ud.clonedir), need_update_submodule)

if need_update_list:
logger.debug('gitsm: Submodules requiring update: %s' % (' '.join(need_update_list)))
@@ -195,16 +200,7 @@ class GitSM(Git):
raise

Git.download(self, ud, d)
-
- # If we're using a shallow mirror tarball it needs to be unpacked
- # temporarily so that we can examine the .gitmodules file
- if ud.shallow and os.path.exists(ud.fullshallow) and self.need_update(ud, d):
- tmpdir = tempfile.mkdtemp(dir=d.getVar("DL_DIR"))
- runfetchcmd("tar -xzf %s" % ud.fullshallow, d, workdir=tmpdir)
- self.process_submodules(ud, tmpdir, download_submodule, d)
- shutil.rmtree(tmpdir)
- else:
- self.process_submodules(ud, ud.clonedir, download_submodule, d)
+ self.call_process_submodules(ud, d, self.need_update(ud, d), download_submodule)

def unpack(self, ud, destdir, d):
def unpack_submodules(ud, url, module, modpath, workdir, d):
@@ -263,14 +259,6 @@ class GitSM(Git):
newfetch = Fetch([url], d, cache=False)
urldata.extend(newfetch.expanded_urldata())

- # If we're using a shallow mirror tarball it needs to be unpacked
- # temporarily so that we can examine the .gitmodules file
- if ud.shallow and os.path.exists(ud.fullshallow) and ud.method.need_update(ud, d):
- tmpdir = tempfile.mkdtemp(dir=d.getVar("DL_DIR"))
- subprocess.check_call("tar -xzf %s" % ud.fullshallow, cwd=tmpdir, shell=True)
- self.process_submodules(ud, tmpdir, add_submodule, d)
- shutil.rmtree(tmpdir)
- else:
- self.process_submodules(ud, ud.clonedir, add_submodule, d)
+ self.call_process_submodules(ud, d, ud.method.need_update(ud, d), add_submodule)

return urldata
diff --git a/bitbake/lib/bb/fetch2/wget.py b/bitbake/lib/bb/fetch2/wget.py
index fbfa6938..5bb3b2f3 100644
--- a/bitbake/lib/bb/fetch2/wget.py
+++ b/bitbake/lib/bb/fetch2/wget.py
@@ -87,7 +87,7 @@ class Wget(FetchMethod):
if not ud.localfile:
ud.localfile = d.expand(urllib.parse.unquote(ud.host + ud.path).replace("/", "."))

- self.basecmd = d.getVar("FETCHCMD_wget") or "/usr/bin/env wget -t 2 -T 30"
+ self.basecmd = d.getVar("FETCHCMD_wget") or "/usr/bin/env wget -t 2 -T 100"

if ud.type == 'ftp' or ud.type == 'ftps':
self.basecmd += " --passive-ftp"
@@ -108,7 +108,8 @@ class Wget(FetchMethod):

fetchcmd = self.basecmd

- localpath = os.path.join(d.getVar("DL_DIR"), ud.localfile) + ".tmp"
+ dldir = os.path.realpath(d.getVar("DL_DIR"))
+ localpath = os.path.join(dldir, ud.localfile) + ".tmp"
bb.utils.mkdirhier(os.path.dirname(localpath))
fetchcmd += " -O %s" % shlex.quote(localpath)

@@ -128,12 +129,21 @@ class Wget(FetchMethod):
uri = ud.url.split(";")[0]
if os.path.exists(ud.localpath):
# file exists, but we didnt complete it.. trying again..
- fetchcmd += d.expand(" -c -P ${DL_DIR} '%s'" % uri)
+ fetchcmd += " -c -P " + dldir + " '" + uri + "'"
else:
- fetchcmd += d.expand(" -P ${DL_DIR} '%s'" % uri)
+ fetchcmd += " -P " + dldir + " '" + uri + "'"

self._runwget(ud, d, fetchcmd, False)

+ # Sanity check since wget can pretend it succeeded when it didn't
+ # Also, this used to happen if sourceforge sent us to the mirror page
+ if not os.path.exists(localpath):
+ raise FetchError("The fetch command returned success for url %s but %s doesn't exist?!" % (uri, localpath), uri)
+
+ if os.path.getsize(localpath) == 0:
+ os.remove(localpath)
+ raise FetchError("The fetch of %s resulted in a zero size file?! Deleting and failing since this isn't right." % (uri), uri)
+
# Try and verify any checksum now, meaning if it isn't correct, we don't remove the
# original file, which might be a race (imagine two recipes referencing the same
# source, one with an incorrect checksum)
@@ -143,15 +153,6 @@ class Wget(FetchMethod):
# Our lock prevents multiple writers but mirroring code may grab incomplete files
os.rename(localpath, localpath[:-4])

- # Sanity check since wget can pretend it succeed when it didn't
- # Also, this used to happen if sourceforge sent us to the mirror page
- if not os.path.exists(ud.localpath):
- raise FetchError("The fetch command returned success for url %s but %s doesn't exist?!" % (uri, ud.localpath), uri)
-
- if os.path.getsize(ud.localpath) == 0:
- os.remove(ud.localpath)
- raise FetchError("The fetch of %s resulted in a zero size file?! Deleting and failing since this isn't right." % (uri), uri)
-
return True

def checkstatus(self, fetch, ud, d, try_again=True):
@@ -370,7 +371,7 @@ class Wget(FetchMethod):
except (FileNotFoundError, netrc.NetrcParseError):
pass

- with opener.open(r, timeout=30) as response:
+ with opener.open(r, timeout=100) as response:
pass
except (urllib.error.URLError, ConnectionResetError, TimeoutError) as e:
if try_again:
diff --git a/bitbake/lib/bb/msg.py b/bitbake/lib/bb/msg.py
index 3e18596f..4f616ff4 100644
--- a/bitbake/lib/bb/msg.py
+++ b/bitbake/lib/bb/msg.py
@@ -89,10 +89,6 @@ class BBLogFormatter(logging.Formatter):
msg = logging.Formatter.format(self, record)
if hasattr(record, 'bb_exc_formatted'):
msg += '\n' + ''.join(record.bb_exc_formatted)
- elif hasattr(record, 'bb_exc_info'):
- etype, value, tb = record.bb_exc_info
- formatted = bb.exceptions.format_exception(etype, value, tb, limit=5)
- msg += '\n' + ''.join(formatted)
return msg

def colorize(self, record):
diff --git a/bitbake/lib/bb/parse/__init__.py b/bitbake/lib/bb/parse/__init__.py
index a4358f13..7ffdaa6f 100644
--- a/bitbake/lib/bb/parse/__init__.py
+++ b/bitbake/lib/bb/parse/__init__.py
@@ -49,20 +49,23 @@ class SkipPackage(SkipRecipe):
__mtime_cache = {}
def cached_mtime(f):
if f not in __mtime_cache:
- __mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
+ res = os.stat(f)
+ __mtime_cache[f] = (res.st_mtime_ns, res.st_size, res.st_ino)
return __mtime_cache[f]

def cached_mtime_noerror(f):
if f not in __mtime_cache:
try:
- __mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
+ res = os.stat(f)
+ __mtime_cache[f] = (res.st_mtime_ns, res.st_size, res.st_ino)
except OSError:
return 0
return __mtime_cache[f]

def check_mtime(f, mtime):
try:
- current_mtime = os.stat(f)[stat.ST_MTIME]
+ res = os.stat(f)
+ current_mtime = (res.st_mtime_ns, res.st_size, res.st_ino)
__mtime_cache[f] = current_mtime
except OSError:
current_mtime = 0
@@ -70,7 +73,8 @@ def check_mtime(f, mtime):

def update_mtime(f):
try:
- __mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
+ res = os.stat(f)
+ __mtime_cache[f] = (res.st_mtime_ns, res.st_size, res.st_ino)
except OSError:
if f in __mtime_cache:
del __mtime_cache[f]
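
The richer cache key catches modifications that the old 1-second
ST_MTIME granularity could miss, roughly:

    import os, stat

    res = os.stat(__file__)
    old_key = res[stat.ST_MTIME]                          # 1s resolution
    new_key = (res.st_mtime_ns, res.st_size, res.st_ino)  # same-second rewrites differ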
diff --git a/bitbake/lib/bb/parse/ast.py b/bitbake/lib/bb/parse/ast.py
index 7581d003..327e45c8 100644
--- a/bitbake/lib/bb/parse/ast.py
+++ b/bitbake/lib/bb/parse/ast.py
@@ -391,6 +391,14 @@ def finalize(fn, d, variant = None):
if d.getVar("_FAILPARSINGERRORHANDLED", False) == True:
raise bb.BBHandledException()

+ while True:
+ inherits = d.getVar('__BBDEFINHERITS', False) or []
+ if not inherits:
+ break
+ inherit, filename, lineno = inherits.pop(0)
+ d.setVar('__BBDEFINHERITS', inherits)
+ bb.parse.BBHandler.inherit(inherit, filename, lineno, d, deferred=True)
+
for var in d.getVar('__BBHANDLERS', False) or []:
# try to add the handler
handlerfn = d.getVarFlag(var, "filename", False)
@@ -444,14 +452,6 @@ def multi_finalize(fn, d):
logger.debug("Appending .bbappend file %s to %s", append, fn)
bb.parse.BBHandler.handle(append, d, True)

- while True:
- inherits = d.getVar('__BBDEFINHERITS', False) or []
- if not inherits:
- break
- inherit, filename, lineno = inherits.pop(0)
- d.setVar('__BBDEFINHERITS', inherits)
- bb.parse.BBHandler.inherit(inherit, filename, lineno, d, deferred=True)
-
onlyfinalise = d.getVar("__ONLYFINALISE", False)

safe_d = d
@@ -487,7 +487,9 @@ def multi_finalize(fn, d):
d.setVar("BBEXTENDVARIANT", variantmap[name])
else:
d.setVar("PN", "%s-%s" % (pn, name))
- bb.parse.BBHandler.inherit(extendedmap[name], fn, 0, d)
+ inherits = d.getVar('__BBDEFINHERITS', False) or []
+ inherits.append((extendedmap[name], fn, 0))
+ d.setVar('__BBDEFINHERITS', inherits)

safe_d.setVar("BBCLASSEXTEND", extended)
_create_variants(datastores, extendedmap.keys(), extendfunc, onlyfinalise)
diff --git a/bitbake/lib/bb/persist_data.py b/bitbake/lib/bb/persist_data.py
index bcca791e..c4454b15 100644
--- a/bitbake/lib/bb/persist_data.py
+++ b/bitbake/lib/bb/persist_data.py
@@ -154,6 +154,7 @@ class SQLTable(collections.abc.MutableMapping):

def __exit__(self, *excinfo):
self.connection.__exit__(*excinfo)
+ self.connection.close()

@_Decorators.retry()
@_Decorators.transaction
diff --git a/bitbake/lib/bb/runqueue.py b/bitbake/lib/bb/runqueue.py
index bc7e1817..db68f97e 100644
--- a/bitbake/lib/bb/runqueue.py
+++ b/bitbake/lib/bb/runqueue.py
@@ -14,6 +14,7 @@ import os
import sys
import stat
import errno
+import itertools
import logging
import re
import bb
@@ -728,6 +729,8 @@ class RunQueueData:
if mc == frommc:
fn = taskData[mcdep].build_targets[pn][0]
newdep = '%s:%s' % (fn,deptask)
+ if newdep not in taskData[mcdep].taskentries:
+ bb.fatal("Task mcdepends on non-existent task %s" % (newdep))
taskData[mc].taskentries[tid].tdepends.append(newdep)

for mc in taskData:
@@ -1273,27 +1276,41 @@ class RunQueueData:

bb.parse.siggen.set_setscene_tasks(self.runq_setscene_tids)

+ starttime = time.time()
+ lasttime = starttime
+
# Iterate over the task list and call into the siggen code
dealtwith = set()
todeal = set(self.runtaskentries)
while todeal:
+ ready = set()
for tid in todeal.copy():
if not (self.runtaskentries[tid].depends - dealtwith):
- dealtwith.add(tid)
- todeal.remove(tid)
- self.prepare_task_hash(tid)
- bb.event.check_for_interrupts(self.cooker.data)
+ self.runtaskentries[tid].taskhash_deps = bb.parse.siggen.prep_taskhash(tid, self.runtaskentries[tid].depends, self.dataCaches)
+ # get_taskhash for a given tid *must* be called before get_unihash* below
+ self.runtaskentries[tid].hash = bb.parse.siggen.get_taskhash(tid, self.runtaskentries[tid].depends, self.dataCaches)
+ ready.add(tid)
+ unihashes = bb.parse.siggen.get_unihashes(ready)
+ for tid in ready:
+ dealtwith.add(tid)
+ todeal.remove(tid)
+ self.runtaskentries[tid].unihash = unihashes[tid]
+
+ bb.event.check_for_interrupts(self.cooker.data)
+
+ if time.time() > (lasttime + 30):
+ lasttime = time.time()
+ hashequiv_logger.verbose("Initial setup loop progress: %s of %s in %s" % (len(todeal), len(self.runtaskentries), lasttime - starttime))
+
+ endtime = time.time()
+ if (endtime-starttime > 60):
+ hashequiv_logger.verbose("Initial setup loop took: %s" % (endtime-starttime))

bb.parse.siggen.writeout_file_checksum_cache()

#self.dump_data()
return len(self.runtaskentries)

- def prepare_task_hash(self, tid):
- bb.parse.siggen.prep_taskhash(tid, self.runtaskentries[tid].depends, self.dataCaches)
- self.runtaskentries[tid].hash = bb.parse.siggen.get_taskhash(tid, self.runtaskentries[tid].depends, self.dataCaches)
- self.runtaskentries[tid].unihash = bb.parse.siggen.get_unihash(tid)
-
def dump_data(self):
"""
Dump some debug information on the internal data structures
@@ -2175,12 +2192,20 @@ class RunQueueExecute:
if not hasattr(self, "sorted_setscene_tids"):
# Don't want to sort this set every execution
self.sorted_setscene_tids = sorted(self.rqdata.runq_setscene_tids)
+ # Resume looping where we left off when we returned to feed the mainloop
+ self.setscene_tids_generator = itertools.cycle(self.rqdata.runq_setscene_tids)

task = None
if not self.sqdone and self.can_start_task():
- # Find the next setscene to run
- for nexttask in self.sorted_setscene_tids:
+ loopcount = 0
+ # Find the next setscene to run, exit the loop when we've processed all tids or found something to execute
+ while loopcount < len(self.rqdata.runq_setscene_tids):
+ loopcount += 1
+ nexttask = next(self.setscene_tids_generator)
if nexttask in self.sq_buildable and nexttask not in self.sq_running and self.sqdata.stamps[nexttask] not in self.build_stamps.values() and nexttask not in self.sq_harddep_deferred:
+ if nexttask in self.sq_deferred and self.sq_deferred[nexttask] not in self.runq_complete:
+ # Skip deferred tasks quickly before the 'expensive' tests below - this is key to performant multiconfig builds
+ continue
if nexttask not in self.sqdata.unskippable and self.sqdata.sq_revdeps[nexttask] and \
nexttask not in self.sq_needed_harddeps and \
self.sqdata.sq_revdeps[nexttask].issubset(self.scenequeue_covered) and \
@@ -2210,8 +2235,7 @@ class RunQueueExecute:
if t in self.runq_running and t not in self.runq_complete:
continue
if nexttask in self.sq_deferred:
- if self.sq_deferred[nexttask] not in self.runq_complete:
- continue
+ # Deferred tasks that were still deferred were skipped above so we now need to process
logger.debug("Task %s no longer deferred" % nexttask)
del self.sq_deferred[nexttask]
valid = self.rq.validate_hashes(set([nexttask]), self.cooker.data, 0, False, summary=False)
@@ -2438,14 +2462,17 @@ class RunQueueExecute:
taskdepdata_cache = {}
for task in self.rqdata.runtaskentries:
(mc, fn, taskname, taskfn) = split_tid_mcfn(task)
- pn = self.rqdata.dataCaches[mc].pkg_fn[taskfn]
- deps = self.rqdata.runtaskentries[task].depends
- provides = self.rqdata.dataCaches[mc].fn_provides[taskfn]
- taskhash = self.rqdata.runtaskentries[task].hash
- unihash = self.rqdata.runtaskentries[task].unihash
- deps = self.filtermcdeps(task, mc, deps)
- hashfn = self.rqdata.dataCaches[mc].hashfn[taskfn]
- taskdepdata_cache[task] = [pn, taskname, fn, deps, provides, taskhash, unihash, hashfn]
+ taskdepdata_cache[task] = bb.TaskData(
+ pn = self.rqdata.dataCaches[mc].pkg_fn[taskfn],
+ taskname = taskname,
+ fn = fn,
+ deps = self.filtermcdeps(task, mc, self.rqdata.runtaskentries[task].depends),
+ provides = self.rqdata.dataCaches[mc].fn_provides[taskfn],
+ taskhash = self.rqdata.runtaskentries[task].hash,
+ unihash = self.rqdata.runtaskentries[task].unihash,
+ hashfn = self.rqdata.dataCaches[mc].hashfn[taskfn],
+ taskhash_deps = self.rqdata.runtaskentries[task].taskhash_deps,
+ )

self.taskdepdata_cache = taskdepdata_cache

@@ -2460,9 +2487,11 @@ class RunQueueExecute:
while next:
additional = []
for revdep in next:
- self.taskdepdata_cache[revdep][6] = self.rqdata.runtaskentries[revdep].unihash
+ self.taskdepdata_cache[revdep] = self.taskdepdata_cache[revdep]._replace(
+ unihash=self.rqdata.runtaskentries[revdep].unihash
+ )
taskdepdata[revdep] = self.taskdepdata_cache[revdep]
- for revdep2 in self.taskdepdata_cache[revdep][3]:
+ for revdep2 in self.taskdepdata_cache[revdep].deps:
if revdep2 not in taskdepdata:
additional.append(revdep2)
next = additional
@@ -2556,17 +2585,28 @@ class RunQueueExecute:
elif self.rqdata.runtaskentries[p].depends.isdisjoint(total):
next.add(p)

+ starttime = time.time()
+ lasttime = starttime
+
# When an item doesn't have dependencies in total, we can process it. Drop items from total when handled
while next:
current = next.copy()
next = set()
+ ready = {}
for tid in current:
if self.rqdata.runtaskentries[p].depends and not self.rqdata.runtaskentries[tid].depends.isdisjoint(total):
continue
+ # get_taskhash for a given tid *must* be called before get_unihash* below
+ ready[tid] = bb.parse.siggen.get_taskhash(tid, self.rqdata.runtaskentries[tid].depends, self.rqdata.dataCaches)
+
+ unihashes = bb.parse.siggen.get_unihashes(ready.keys())
+
+ for tid in ready:
orighash = self.rqdata.runtaskentries[tid].hash
- newhash = bb.parse.siggen.get_taskhash(tid, self.rqdata.runtaskentries[tid].depends, self.rqdata.dataCaches)
+ newhash = ready[tid]
origuni = self.rqdata.runtaskentries[tid].unihash
- newuni = bb.parse.siggen.get_unihash(tid)
+ newuni = unihashes[tid]
+
# FIXME, need to check it can come from sstate at all for determinism?
remapped = False
if newuni == origuni:
@@ -2587,6 +2627,15 @@ class RunQueueExecute:
next |= self.rqdata.runtaskentries[tid].revdeps
total.remove(tid)
next.intersection_update(total)
+ bb.event.check_for_interrupts(self.cooker.data)
+
+ if time.time() > (lasttime + 30):
+ lasttime = time.time()
+ hashequiv_logger.verbose("Rehash loop slow progress: %s in %s" % (len(total), lasttime - starttime))
+
+ endtime = time.time()
+ if (endtime-starttime > 60):
+ hashequiv_logger.verbose("Rehash loop took more than 60s: %s" % (endtime-starttime))

if changed:
for mc in self.rq.worker:
@@ -2712,8 +2761,12 @@ class RunQueueExecute:
logger.debug2("%s was unavailable and is a hard dependency of %s so skipping" % (task, dep))
self.sq_task_failoutright(dep)
continue
+
+ # For performance, only compute allcovered once if needed
+ if self.sqdata.sq_deps[task]:
+ allcovered = self.scenequeue_covered | self.scenequeue_notcovered
for dep in sorted(self.sqdata.sq_deps[task]):
- if self.sqdata.sq_revdeps[dep].issubset(self.scenequeue_covered | self.scenequeue_notcovered):
+ if self.sqdata.sq_revdeps[dep].issubset(allcovered):
if dep not in self.sq_buildable:
self.sq_buildable.add(dep)

@@ -2806,13 +2859,19 @@ class RunQueueExecute:
additional = []
for revdep in next:
(mc, fn, taskname, taskfn) = split_tid_mcfn(revdep)
- pn = self.rqdata.dataCaches[mc].pkg_fn[taskfn]
deps = getsetscenedeps(revdep)
- provides = self.rqdata.dataCaches[mc].fn_provides[taskfn]
- taskhash = self.rqdata.runtaskentries[revdep].hash
- unihash = self.rqdata.runtaskentries[revdep].unihash
- hashfn = self.rqdata.dataCaches[mc].hashfn[taskfn]
- taskdepdata[revdep] = [pn, taskname, fn, deps, provides, taskhash, unihash, hashfn]
+
+ taskdepdata[revdep] = bb.TaskData(
+ pn = self.rqdata.dataCaches[mc].pkg_fn[taskfn],
+ taskname = taskname,
+ fn = fn,
+ deps = deps,
+ provides = self.rqdata.dataCaches[mc].fn_provides[taskfn],
+ taskhash = self.rqdata.runtaskentries[revdep].hash,
+ unihash = self.rqdata.runtaskentries[revdep].unihash,
+ hashfn = self.rqdata.dataCaches[mc].hashfn[taskfn],
+ taskhash_deps = self.rqdata.runtaskentries[revdep].taskhash_deps,
+ )
for revdep2 in deps:
if revdep2 not in taskdepdata:
additional.append(revdep2)
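
The itertools.cycle change keeps the setscene scan position across calls
instead of restarting from the head of the sorted list every time. A toy
version of the resumable round-robin (hypothetical tids):

    import itertools

    tids = ["a", "b", "c", "d"]
    cursor = itertools.cycle(tids)

    def find_next(ready):
        for _ in range(len(tids)):        # at most one full lap per call
            tid = next(cursor)
            if tid in ready:
                return tid
        return None

    print(find_next({"c"}))   # -> "c"
    print(find_next({"a"}))   # -> "a", scan resumed after "c"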
diff --git a/bitbake/lib/bb/server/process.py b/bitbake/lib/bb/server/process.py
index 76b18929..34b3a2ae 100644
--- a/bitbake/lib/bb/server/process.py
+++ b/bitbake/lib/bb/server/process.py
@@ -13,7 +13,7 @@
import bb
import bb.event
import logging
-import multiprocessing
+from bb import multiprocessing
import threading
import array
import os
diff --git a/bitbake/lib/bb/siggen.py b/bitbake/lib/bb/siggen.py
index 8ab08ec9..65ca0811 100644
--- a/bitbake/lib/bb/siggen.py
+++ b/bitbake/lib/bb/siggen.py
@@ -381,7 +381,7 @@ class SignatureGeneratorBasic(SignatureGenerator):
self.taints[tid] = taint
logger.warning("%s is tainted from a forced run" % tid)

- return
+ return set(dep for _, dep in self.runtaskdeps[tid])

def get_taskhash(self, tid, deps, dataCaches):

@@ -726,10 +726,13 @@ class SignatureGeneratorUniHashMixIn(object):
return result

if self.max_parallel <= 1 or len(queries) <= 1:
- # No parallelism required. Make the query serially with the single client
+ # No parallelism required. Make the query using a single client
with self.client() as client:
- for tid, args in queries.items():
- query_result[tid] = client.get_unihash(*args)
+ keys = list(queries.keys())
+ unihashes = client.get_unihash_batch(queries[k] for k in keys)
+
+ for idx, k in enumerate(keys):
+ query_result[k] = unihashes[idx]
else:
with self.client_pool() as client_pool:
query_result = client_pool.get_unihashes(queries)
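
The batch call replaces one server round-trip per task with a single
request; results come back positionally, so the key order must be kept
when re-keying. A sketch of the pattern with a hypothetical stand-in
client:

    class FakeClient:
        # Stand-in for the hashserv client; returns one unihash per query.
        def get_unihash_batch(self, args_iter):
            return ["unihash-for-" + taskhash for method, taskhash in args_iter]

    queries = {"tid1": ("method", "hash1"), "tid2": ("method", "hash2")}
    client = FakeClient()

    keys = list(queries.keys())
    unihashes = client.get_unihash_batch(queries[k] for k in keys)
    query_result = dict(zip(keys, unihashes))
    print(query_result)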
diff --git a/bitbake/lib/bb/tests/fetch.py b/bitbake/lib/bb/tests/fetch.py
index 85c1f79f..b57cf511 100644
--- a/bitbake/lib/bb/tests/fetch.py
+++ b/bitbake/lib/bb/tests/fetch.py
@@ -1419,12 +1419,12 @@ class FetchLatestVersionTest(FetcherTest):
("dtc", "git://git.yoctoproject.org/bbfetchtests-dtc.git;branch=master;protocol=https", "65cc4d2748a2c2e6f27f1cf39e07a5dbabd80ebf", "", "")
: "1.4.0",
# combination version pattern
- ("sysprof", "git://gitlab.gnome.org/GNOME/sysprof.git;protocol=https;branch=master", "cd44ee6644c3641507fb53b8a2a69137f2971219", "", "")
+ ("sysprof", "git://git.yoctoproject.org/sysprof.git;protocol=https;branch=master", "cd44ee6644c3641507fb53b8a2a69137f2971219", "", "")
: "1.2.0",
- ("u-boot-mkimage", "git://git.denx.de/u-boot.git;branch=master;protocol=git", "62c175fbb8a0f9a926c88294ea9f7e88eb898f6c", "", "")
+ ("u-boot-mkimage", "git://git.yoctoproject.org/bbfetchtests-u-boot.git;branch=master;protocol=https", "62c175fbb8a0f9a926c88294ea9f7e88eb898f6c", "", "")
: "2014.01",
# version pattern "yyyymmdd"
- ("mobile-broadband-provider-info", "git://gitlab.gnome.org/GNOME/mobile-broadband-provider-info.git;protocol=https;branch=master", "4ed19e11c2975105b71b956440acdb25d46a347d", "", "")
+ ("mobile-broadband-provider-info", "git://git.yoctoproject.org/mobile-broadband-provider-info.git;protocol=https;branch=master", "4ed19e11c2975105b71b956440acdb25d46a347d", "", "")
: "20120614",
# packages with a valid UPSTREAM_CHECK_GITTAGREGEX
# mirror of git://anongit.freedesktop.org/xorg/driver/xf86-video-omap since network issues interfered with testing
@@ -1511,7 +1511,7 @@ class FetchLatestVersionTest(FetcherTest):

def test_wget_latest_versionstring(self):
testdata = os.path.dirname(os.path.abspath(__file__)) + "/fetch-testdata"
- server = HTTPService(testdata)
+ server = HTTPService(testdata, host="127.0.0.1")
server.start()
port = server.port
try:
@@ -1519,10 +1519,10 @@ class FetchLatestVersionTest(FetcherTest):
self.d.setVar("PN", k[0])
checkuri = ""
if k[2]:
- checkuri = "http://localhost:%s/" % port + k[2]
+ checkuri = "http://127.0.0.1:%s/" % port + k[2]
self.d.setVar("UPSTREAM_CHECK_URI", checkuri)
self.d.setVar("UPSTREAM_CHECK_REGEX", k[3])
- url = "http://localhost:%s/" % port + k[1]
+ url = "http://127.0.0.1:%s/" % port + k[1]
ud = bb.fetch2.FetchData(url, self.d)
pupver = ud.method.latest_versionstring(ud, self.d)
verstring = pupver[0]
@@ -1715,6 +1715,8 @@ class GitShallowTest(FetcherTest):
if cwd is None:
cwd = self.gitdir
actual_refs = self.git(['for-each-ref', '--format=%(refname)'], cwd=cwd).splitlines()
+ # Resolve references into the same format as the comparison (needed by git 2.48 onwards)
+ actual_refs = self.git(['rev-parse', '--symbolic-full-name'] + actual_refs, cwd=cwd).splitlines()
full_expected = self.git(['rev-parse', '--symbolic-full-name'] + expected_refs, cwd=cwd).splitlines()
self.assertEqual(sorted(set(full_expected)), sorted(set(actual_refs)))

diff --git a/bitbake/lib/bb/tests/runqueue-tests/recipes/g1.bb b/bitbake/lib/bb/tests/runqueue-tests/recipes/g1.bb
new file mode 100644
index 00000000..3c7dca02
--- /dev/null
+++ b/bitbake/lib/bb/tests/runqueue-tests/recipes/g1.bb
@@ -0,0 +1,2 @@
+do_build[mcdepends] = "mc::mc-1:h1:do_invalid"
+
diff --git a/bitbake/lib/bb/tests/runqueue-tests/recipes/h1.bb b/bitbake/lib/bb/tests/runqueue-tests/recipes/h1.bb
new file mode 100644
index 00000000..e69de29b
diff --git a/bitbake/lib/bb/tests/runqueue.py b/bitbake/lib/bb/tests/runqueue.py
index cc87e8d6..74f5ded2 100644
--- a/bitbake/lib/bb/tests/runqueue.py
+++ b/bitbake/lib/bb/tests/runqueue.py
@@ -26,7 +26,7 @@ class RunQueueTests(unittest.TestCase):
a1_sstatevalid = "a1:do_package a1:do_package_qa a1:do_packagedata a1:do_package_write_ipk a1:do_package_write_rpm a1:do_populate_lic a1:do_populate_sysroot"
b1_sstatevalid = "b1:do_package b1:do_package_qa b1:do_packagedata b1:do_package_write_ipk b1:do_package_write_rpm b1:do_populate_lic b1:do_populate_sysroot"

- def run_bitbakecmd(self, cmd, builddir, sstatevalid="", slowtasks="", extraenv=None, cleanup=False):
+ def run_bitbakecmd(self, cmd, builddir, sstatevalid="", slowtasks="", extraenv=None, cleanup=False, allowfailure=False):
env = os.environ.copy()
env["BBPATH"] = os.path.realpath(os.path.join(os.path.dirname(__file__), "runqueue-tests"))
env["BB_ENV_PASSTHROUGH_ADDITIONS"] = "SSTATEVALID SLOWTASKS TOPDIR"
@@ -41,6 +41,8 @@ class RunQueueTests(unittest.TestCase):
output = subprocess.check_output(cmd, env=env, stderr=subprocess.STDOUT,universal_newlines=True, cwd=builddir)
print(output)
except subprocess.CalledProcessError as e:
+ if allowfailure:
+ return e.output
self.fail("Command %s failed with %s" % (cmd, e.output))
tasks = []
tasklog = builddir + "/task.log"
@@ -314,6 +316,13 @@ class RunQueueTests(unittest.TestCase):
["mc_2:a1:%s" % t for t in rerun_tasks]
self.assertEqual(set(tasks), set(expected))

+ # Check that a multiconfig that doesn't exist raises a correct error message
+ error_output = self.run_bitbakecmd(["bitbake", "g1"], tempdir, "", extraenv=extraenv, cleanup=True, allowfailure=True)
+ self.assertIn("non-existent task", error_output)
+ # If the word 'Traceback' or 'KeyError' is in the output we've regressed
+ self.assertNotIn("Traceback", error_output)
+ self.assertNotIn("KeyError", error_output)
+
self.shutdown(tempdir)

def test_hashserv_single(self):
diff --git a/bitbake/lib/bb/tests/support/httpserver.py b/bitbake/lib/bb/tests/support/httpserver.py
index 78f76600..03327e92 100644
--- a/bitbake/lib/bb/tests/support/httpserver.py
+++ b/bitbake/lib/bb/tests/support/httpserver.py
@@ -3,7 +3,7 @@
#

import http.server
-import multiprocessing
+from bb import multiprocessing
import os
import traceback
import signal
@@ -43,7 +43,7 @@ class HTTPService(object):
self.process = multiprocessing.Process(target=self.server.server_start, args=[self.root_dir, self.logger])

# The signal handler from testimage.bbclass can cause deadlocks here
- # if the HTTPServer is terminated before it can restore the standard
+ # if the HTTPServer is terminated before it can restore the standard
#signal behaviour
orig = signal.getsignal(signal.SIGTERM)
signal.signal(signal.SIGTERM, signal.SIG_DFL)
diff --git a/bitbake/lib/bb/tinfoil.py b/bitbake/lib/bb/tinfoil.py
index dcd3910c..4dc4590c 100644
--- a/bitbake/lib/bb/tinfoil.py
+++ b/bitbake/lib/bb/tinfoil.py
@@ -188,11 +188,19 @@ class TinfoilCookerAdapter:
self._cache[name] = attrvalue
return attrvalue

+ class TinfoilSkiplistByMcAdapter:
+ def __init__(self, tinfoil):
+ self.tinfoil = tinfoil
+
+ def __getitem__(self, mc):
+ return self.tinfoil.get_skipped_recipes(mc)
+
def __init__(self, tinfoil):
self.tinfoil = tinfoil
self.multiconfigs = [''] + (tinfoil.config_data.getVar('BBMULTICONFIG') or '').split()
self.collections = {}
self.recipecaches = {}
+ self.skiplist_by_mc = self.TinfoilSkiplistByMcAdapter(tinfoil)
for mc in self.multiconfigs:
self.collections[mc] = self.TinfoilCookerCollectionAdapter(tinfoil, mc)
self.recipecaches[mc] = self.TinfoilRecipeCacheAdapter(tinfoil, mc)
@@ -201,8 +209,6 @@ class TinfoilCookerAdapter:
# Grab these only when they are requested since they aren't always used
if name in self._cache:
return self._cache[name]
- elif name == 'skiplist':
- attrvalue = self.tinfoil.get_skipped_recipes()
elif name == 'bbfile_config_priorities':
ret = self.tinfoil.run_command('getLayerPriorities')
bbfile_config_priorities = []
@@ -514,12 +520,12 @@ class Tinfoil:
"""
return defaultdict(list, self.run_command('getOverlayedRecipes', mc))

- def get_skipped_recipes(self):
+ def get_skipped_recipes(self, mc=''):
"""
Find recipes which were skipped (i.e. SkipRecipe was raised
during parsing).
"""
- return OrderedDict(self.run_command('getSkippedRecipes'))
+ return OrderedDict(self.run_command('getSkippedRecipes', mc))

def get_all_providers(self, mc=''):
return defaultdict(list, self.run_command('allProviders', mc))
@@ -533,6 +539,7 @@ class Tinfoil:
def get_runtime_providers(self, rdep):
return self.run_command('getRuntimeProviders', rdep)

+ # TODO: teach this method about mc
def get_recipe_file(self, pn):
"""
Get the file name for the specified recipe/target. Raises
@@ -541,6 +548,7 @@ class Tinfoil:
"""
best = self.find_best_provider(pn)
if not best or (len(best) > 3 and not best[3]):
+ # TODO: pass down mc
skiplist = self.get_skipped_recipes()
taskdata = bb.taskdata.TaskData(None, skiplist=skiplist)
skipreasons = taskdata.get_reasons(pn)
diff --git a/bitbake/lib/bb/ui/knotty.py b/bitbake/lib/bb/ui/knotty.py
index f86999bb..3784c93a 100644
--- a/bitbake/lib/bb/ui/knotty.py
+++ b/bitbake/lib/bb/ui/knotty.py
@@ -577,6 +577,8 @@ def main(server, eventHandler, params, tf = TerminalFilter):
else:
log_exec_tty = False

+ should_print_hyperlinks = sys.stdout.isatty() and os.environ.get('NO_COLOR', '') == ''
+
helper = uihelper.BBUIHelper()

# Look for the specially designated handlers which need to be passed to the
@@ -640,7 +642,7 @@ def main(server, eventHandler, params, tf = TerminalFilter):
return_value = 0
errors = 0
warnings = 0
- taskfailures = []
+ taskfailures = {}

printintervaldelta = 10 * 60 # 10 minutes
printinterval = printintervaldelta
@@ -726,6 +728,8 @@ def main(server, eventHandler, params, tf = TerminalFilter):
if isinstance(event, bb.build.TaskFailed):
return_value = 1
print_event_log(event, includelogs, loglines, termfilter)
+ k = "{}:{}".format(event._fn, event._task)
+ taskfailures[k] = event.logfile
if isinstance(event, bb.build.TaskBase):
logger.info(event._message)
continue
@@ -821,7 +825,7 @@ def main(server, eventHandler, params, tf = TerminalFilter):

if isinstance(event, bb.runqueue.runQueueTaskFailed):
return_value = 1
- taskfailures.append(event.taskstring)
+ taskfailures.setdefault(event.taskstring)
logger.error(str(event))
continue

@@ -942,11 +946,21 @@ def main(server, eventHandler, params, tf = TerminalFilter):
try:
termfilter.clearFooter()
summary = ""
+ def format_hyperlink(url, link_text):
+ if should_print_hyperlinks:
+ start = f'\033]8;;{url}\033\\'
+ end = '\033]8;;\033\\'
+ return f'{start}{link_text}{end}'
+ return link_text
+
if taskfailures:
summary += pluralise("\nSummary: %s task failed:",
"\nSummary: %s tasks failed:", len(taskfailures))
- for failure in taskfailures:
+ for (failure, log_file) in taskfailures.items():
summary += "\n %s" % failure
+ if log_file:
+ hyperlink = format_hyperlink(f"file://{log_file}", log_file)
+ summary += "\n log: {}".format(hyperlink)
if warnings:
summary += pluralise("\nSummary: There was %s WARNING message.",
"\nSummary: There were %s WARNING messages.", warnings)
diff --git a/bitbake/lib/bb/ui/teamcity.py b/bitbake/lib/bb/ui/teamcity.py
index fca46c28..7eeaab8d 100644
--- a/bitbake/lib/bb/ui/teamcity.py
+++ b/bitbake/lib/bb/ui/teamcity.py
@@ -30,7 +30,6 @@ import bb.build
import bb.command
import bb.cooker
import bb.event
-import bb.exceptions
import bb.runqueue
from bb.ui import uihelper

@@ -102,10 +101,6 @@ class TeamcityLogFormatter(logging.Formatter):
details = ""
if hasattr(record, 'bb_exc_formatted'):
details = ''.join(record.bb_exc_formatted)
- elif hasattr(record, 'bb_exc_info'):
- etype, value, tb = record.bb_exc_info
- formatted = bb.exceptions.format_exception(etype, value, tb, limit=5)
- details = ''.join(formatted)

if record.levelno in [bb.msg.BBLogFormatter.ERROR, bb.msg.BBLogFormatter.CRITICAL]:
# ERROR gets a separate errorDetails field
diff --git a/bitbake/lib/bb/utils.py b/bitbake/lib/bb/utils.py
index ebee65d3..1b4fb93a 100644
--- a/bitbake/lib/bb/utils.py
+++ b/bitbake/lib/bb/utils.py
@@ -14,7 +14,7 @@ import logging
import bb
import bb.msg
import locale
-import multiprocessing
+from bb import multiprocessing
import fcntl
import importlib
import importlib.machinery
@@ -1174,8 +1174,6 @@ def process_profilelog(fn, pout = None):
#
def multiprocessingpool(*args, **kwargs):

- import multiprocessing.pool
- #import multiprocessing.util
#multiprocessing.util.log_to_stderr(10)
# Deal with a multiprocessing bug where signals to the processes would be delayed until the work
# completes. Putting in a timeout means the signals (like SIGINT/SIGTERM) get processed.
@@ -1854,15 +1852,42 @@ def path_is_descendant(descendant, ancestor):

return False

+# Recomputing the sets in signal.py is expensive (bitbake -pP idle)
+# so try and use _signal directly to avoid it
+valid_signals = signal.valid_signals()
+try:
+ import _signal
+ sigmask = _signal.pthread_sigmask
+except ImportError:
+ sigmask = signal.pthread_sigmask
+
# If we don't have a timeout of some kind and a process/thread exits badly (for example
# OOM killed) and held a lock, we'd just hang in the lock futex forever. It is better
# we exit at some point than hang. 5 minutes with no progress means we're probably deadlocked.
+# This function can still deadlock python since it can't signal the other threads to exit
+# (signals are handled in the main thread) and even os._exit() will wait on non-daemon threads
+# to exit.
@contextmanager
def lock_timeout(lock):
- held = lock.acquire(timeout=5*60)
try:
+ s = sigmask(signal.SIG_BLOCK, valid_signals)
+ held = lock.acquire(timeout=5*60)
if not held:
+ bb.server.process.serverlog("Couldn't get the lock for 5 mins, timed out, exiting.\n%s" % traceback.format_stack())
os._exit(1)
yield held
finally:
lock.release()
+ sigmask(signal.SIG_SETMASK, s)
+
+# A version of lock_timeout without the check that the lock was locked, and with a shorter timeout
+@contextmanager
+def lock_timeout_nocheck(lock):
+ try:
+ s = sigmask(signal.SIG_BLOCK, valid_signals)
+ l = lock.acquire(timeout=10)
+ yield l
+ finally:
+ if l:
+ lock.release()
+ sigmask(signal.SIG_SETMASK, s)
diff --git a/bitbake/lib/bblayers/query.py b/bitbake/lib/bblayers/query.py
index bfc18a75..9b2e081c 100644
--- a/bitbake/lib/bblayers/query.py
+++ b/bitbake/lib/bblayers/query.py
@@ -142,10 +142,11 @@ skipped recipes will also be listed, with a " (skipped)" suffix.
# Ensure we list skipped recipes
# We are largely guessing about PN, PV and the preferred version here,
# but we have no choice since skipped recipes are not fully parsed
- skiplist = list(self.tinfoil.cooker.skiplist.keys())
- mcspec = 'mc:%s:' % mc
+ skiplist = list(self.tinfoil.cooker.skiplist_by_mc[mc].keys())
+
if mc:
- skiplist = [s[len(mcspec):] for s in skiplist if s.startswith(mcspec)]
+ mcspec = f'mc:{mc}:'
+ skiplist = [s[len(mcspec):] if s.startswith(mcspec) else s for s in skiplist]

for fn in skiplist:
recipe_parts = os.path.splitext(os.path.basename(fn))[0].split('_')
@@ -162,7 +163,7 @@ skipped recipes will also be listed, with a " (skipped)" suffix.
def print_item(f, pn, ver, layer, ispref):
if not selected_layer or layer == selected_layer:
if not bare and f in skiplist:
- skipped = ' (skipped: %s)' % self.tinfoil.cooker.skiplist[f].skipreason
+ skipped = ' (skipped: %s)' % self.tinfoil.cooker.skiplist_by_mc[mc][f].skipreason
else:
skipped = ''
if show_filenames:
@@ -301,7 +302,7 @@ Lists recipes with the bbappends that apply to them as subitems.
if self.show_appends_for_pn(pn, cooker_data, args.mc):
appends = True

- if not args.pnspec and self.show_appends_for_skipped():
+ if not args.pnspec and self.show_appends_for_skipped(args.mc):
appends = True

if not appends:
@@ -317,9 +318,9 @@ Lists recipes with the bbappends that apply to them as subitems.

return self.show_appends_output(filenames, best_filename)

- def show_appends_for_skipped(self):
+ def show_appends_for_skipped(self, mc):
filenames = [os.path.basename(f)
- for f in self.tinfoil.cooker.skiplist.keys()]
+ for f in self.tinfoil.cooker.skiplist_by_mc[mc].keys()]
return self.show_appends_output(filenames, None, " (skipped)")

def show_appends_output(self, filenames, best_filename, name_suffix = ''):
diff --git a/bitbake/lib/hashserv/client.py b/bitbake/lib/hashserv/client.py
index 0b254bed..775faf93 100644
--- a/bitbake/lib/hashserv/client.py
+++ b/bitbake/lib/hashserv/client.py
@@ -5,6 +5,7 @@

import logging
import socket
+import asyncio
import bb.asyncrpc
import json
from . import create_async_client
@@ -13,6 +14,66 @@ from . import create_async_client
logger = logging.getLogger("hashserv.client")


+class Batch(object):
+ def __init__(self):
+ self.done = False
+ self.cond = asyncio.Condition()
+ self.pending = []
+ self.results = []
+ self.sent_count = 0
+
+ async def recv(self, socket):
+ while True:
+ async with self.cond:
+ await self.cond.wait_for(lambda: self.pending or self.done)
+
+ if not self.pending:
+ if self.done:
+ return
+ continue
+
+ r = await socket.recv()
+ self.results.append(r)
+
+ async with self.cond:
+ self.pending.pop(0)
+
+ async def send(self, socket, msgs):
+ try:
+ # In the event of a restart due to a reconnect, all in-flight
+ # messages need to be resent first to keep the result count in sync
+ for m in self.pending:
+ await socket.send(m)
+
+ for m in msgs:
+ # Add the message to the pending list before attempting to send
+ # it so that if the send fails it will be retried
+ async with self.cond:
+ self.pending.append(m)
+ self.cond.notify()
+ self.sent_count += 1
+
+ await socket.send(m)
+
+ finally:
+ async with self.cond:
+ self.done = True
+ self.cond.notify()
+
+ async def process(self, socket, msgs):
+ await asyncio.gather(
+ self.recv(socket),
+ self.send(socket, msgs),
+ )
+
+ if len(self.results) != self.sent_count:
+ raise ValueError(
+ f"Expected result count {len(self.results)}. Expected {self.sent_count}"
+ )
+
+ return self.results
+
+
class AsyncClient(bb.asyncrpc.AsyncClient):
MODE_NORMAL = 0
MODE_GET_STREAM = 1
@@ -36,11 +97,27 @@ class AsyncClient(bb.asyncrpc.AsyncClient):
if become:
await self.become_user(become)

- async def send_stream(self, mode, msg):
+ async def send_stream_batch(self, mode, msgs):
+ """
+ Does a "batch" process of stream messages. This sends the query
+ messages as fast as possible, and simultaneously attempts to read the
+ messages back. This helps to mitigate the effects of latency to the
+ hash equivalence server by allowing multiple queries to be "in-flight"
+ at once.
+
+ The implementation does more complicated tracking using a count of sent
+ messages so that `msgs` can be a generator function (i.e. its length is
+ unknown)
+
+ """
+
+ b = Batch()
+
async def proc():
+ nonlocal b
+
await self._set_mode(mode)
- await self.socket.send(msg)
- return await self.socket.recv()
+ return await b.process(self.socket, msgs)

return await self._send_wrapper(proc)

@@ -89,10 +166,15 @@ class AsyncClient(bb.asyncrpc.AsyncClient):
self.mode = new_mode

async def get_unihash(self, method, taskhash):
- r = await self.send_stream(self.MODE_GET_STREAM, "%s %s" % (method, taskhash))
- if not r:
- return None
- return r
+ r = await self.get_unihash_batch([(method, taskhash)])
+ return r[0]
+
+ async def get_unihash_batch(self, args):
+ result = await self.send_stream_batch(
+ self.MODE_GET_STREAM,
+ (f"{method} {taskhash}" for method, taskhash in args),
+ )
+ return [r if r else None for r in result]

async def report_unihash(self, taskhash, method, outhash, unihash, extra={}):
m = extra.copy()
@@ -115,8 +197,12 @@ class AsyncClient(bb.asyncrpc.AsyncClient):
)

async def unihash_exists(self, unihash):
- r = await self.send_stream(self.MODE_EXIST_STREAM, unihash)
- return r == "true"
+ r = await self.unihash_exists_batch([unihash])
+ return r[0]
+
+ async def unihash_exists_batch(self, unihashes):
+ result = await self.send_stream_batch(self.MODE_EXIST_STREAM, unihashes)
+ return [r == "true" for r in result]

async def get_outhash(self, method, outhash, taskhash, with_unihash=True):
return await self.invoke(
@@ -237,10 +323,12 @@ class Client(bb.asyncrpc.Client):
"connect_tcp",
"connect_websocket",
"get_unihash",
+ "get_unihash_batch",
"report_unihash",
"report_unihash_equiv",
"get_taskhash",
"unihash_exists",
+ "unihash_exists_batch",
"get_outhash",
"get_stats",
"reset_stats",
diff --git a/bitbake/lib/hashserv/tests.py b/bitbake/lib/hashserv/tests.py
index 0809453c..ed1ade74 100644
--- a/bitbake/lib/hashserv/tests.py
+++ b/bitbake/lib/hashserv/tests.py
@@ -11,7 +11,7 @@ from bb.asyncrpc import InvokeError
from .client import ClientPool
import hashlib
import logging
-import multiprocessing
+from bb import multiprocessing
import os
import sys
import tempfile
@@ -594,6 +594,43 @@ class HashEquivalenceCommonTests(object):
7: None,
})

+ def test_get_unihash_batch(self):
+ TEST_INPUT = (
+ # taskhash outhash unihash
+ ('8aa96fcffb5831b3c2c0cb75f0431e3f8b20554a', 'afe240a439959ce86f5e322f8c208e1fedefea9e813f2140c81af866cc9edf7e','218e57509998197d570e2c98512d0105985dffc9'),
+ # Duplicated taskhash with multiple output hashes and unihashes.
+ ('8aa96fcffb5831b3c2c0cb75f0431e3f8b20554a', '0904a7fe3dc712d9fd8a74a616ddca2a825a8ee97adf0bd3fc86082c7639914d', 'ae9a7d252735f0dafcdb10e2e02561ca3a47314c'),
+ # Equivalent hash
+ ("044c2ec8aaf480685a00ff6ff49e6162e6ad34e1", '0904a7fe3dc712d9fd8a74a616ddca2a825a8ee97adf0bd3fc86082c7639914d', "def64766090d28f627e816454ed46894bb3aab36"),
+ ("e3da00593d6a7fb435c7e2114976c59c5fd6d561", "1cf8713e645f491eb9c959d20b5cae1c47133a292626dda9b10709857cbe688a", "3b5d3d83f07f259e9086fcb422c855286e18a57d"),
+ ('35788efcb8dfb0a02659d81cf2bfd695fb30faf9', '2765d4a5884be49b28601445c2760c5f21e7e5c0ee2b7e3fce98fd7e5970796f', 'f46d3fbb439bd9b921095da657a4de906510d2cd'),
+ ('35788efcb8dfb0a02659d81cf2bfd695fb30fafa', '2765d4a5884be49b28601445c2760c5f21e7e5c0ee2b7e3fce98fd7e5970796f', 'f46d3fbb439bd9b921095da657a4de906510d2ce'),
+ ('9d81d76242cc7cfaf7bf74b94b9cd2e29324ed74', '8470d56547eea6236d7c81a644ce74670ca0bbda998e13c629ef6bb3f0d60b69', '05d2a63c81e32f0a36542ca677e8ad852365c538'),
+ )
+ EXTRA_QUERIES = (
+ "6b6be7a84ab179b4240c4302518dc3f6",
+ )
+
+ for taskhash, outhash, unihash in TEST_INPUT:
+ self.client.report_unihash(taskhash, self.METHOD, outhash, unihash)
+
+
+ result = self.client.get_unihash_batch(
+ [(self.METHOD, data[0]) for data in TEST_INPUT] +
+ [(self.METHOD, e) for e in EXTRA_QUERIES]
+ )
+
+ self.assertListEqual(result, [
+ "218e57509998197d570e2c98512d0105985dffc9",
+ "218e57509998197d570e2c98512d0105985dffc9",
+ "218e57509998197d570e2c98512d0105985dffc9",
+ "3b5d3d83f07f259e9086fcb422c855286e18a57d",
+ "f46d3fbb439bd9b921095da657a4de906510d2cd",
+ "f46d3fbb439bd9b921095da657a4de906510d2cd",
+ "05d2a63c81e32f0a36542ca677e8ad852365c538",
+ None,
+ ])
+
def test_client_pool_unihash_exists(self):
TEST_INPUT = (
# taskhash outhash unihash
@@ -636,6 +673,44 @@ class HashEquivalenceCommonTests(object):
result = client_pool.unihashes_exist(query)
self.assertDictEqual(result, expected)

+ def test_unihash_exists_batch(self):
+ TEST_INPUT = (
+ # taskhash outhash unihash
+ ('8aa96fcffb5831b3c2c0cb75f0431e3f8b20554a', 'afe240a439959ce86f5e322f8c208e1fedefea9e813f2140c81af866cc9edf7e','218e57509998197d570e2c98512d0105985dffc9'),
+ # Duplicated taskhash with multiple output hashes and unihashes.
+ ('8aa96fcffb5831b3c2c0cb75f0431e3f8b20554a', '0904a7fe3dc712d9fd8a74a616ddca2a825a8ee97adf0bd3fc86082c7639914d', 'ae9a7d252735f0dafcdb10e2e02561ca3a47314c'),
+ # Equivalent hash
+ ("044c2ec8aaf480685a00ff6ff49e6162e6ad34e1", '0904a7fe3dc712d9fd8a74a616ddca2a825a8ee97adf0bd3fc86082c7639914d', "def64766090d28f627e816454ed46894bb3aab36"),
+ ("e3da00593d6a7fb435c7e2114976c59c5fd6d561", "1cf8713e645f491eb9c959d20b5cae1c47133a292626dda9b10709857cbe688a", "3b5d3d83f07f259e9086fcb422c855286e18a57d"),
+ ('35788efcb8dfb0a02659d81cf2bfd695fb30faf9', '2765d4a5884be49b28601445c2760c5f21e7e5c0ee2b7e3fce98fd7e5970796f', 'f46d3fbb439bd9b921095da657a4de906510d2cd'),
+ ('35788efcb8dfb0a02659d81cf2bfd695fb30fafa', '2765d4a5884be49b28601445c2760c5f21e7e5c0ee2b7e3fce98fd7e5970796f', 'f46d3fbb439bd9b921095da657a4de906510d2ce'),
+ ('9d81d76242cc7cfaf7bf74b94b9cd2e29324ed74', '8470d56547eea6236d7c81a644ce74670ca0bbda998e13c629ef6bb3f0d60b69', '05d2a63c81e32f0a36542ca677e8ad852365c538'),
+ )
+ EXTRA_QUERIES = (
+ "6b6be7a84ab179b4240c4302518dc3f6",
+ )
+
+ result_unihashes = set()
+
+
+ for taskhash, outhash, unihash in TEST_INPUT:
+ result = self.client.report_unihash(taskhash, self.METHOD, outhash, unihash)
+ result_unihashes.add(result["unihash"])
+
+ query = []
+ expected = []
+
+ for _, _, unihash in TEST_INPUT:
+ query.append(unihash)
+ expected.append(unihash in result_unihashes)
+
+
+ for unihash in EXTRA_QUERIES:
+ query.append(unihash)
+ expected.append(False)
+
+ result = self.client.unihash_exists_batch(query)
+ self.assertListEqual(result, expected)

def test_auth_read_perms(self):
admin_client = self.start_auth_server()
diff --git a/bitbake/lib/toaster/tests/builds/buildtest.py b/bitbake/lib/toaster/tests/builds/buildtest.py
index cacfccd4..e54d5613 100644
--- a/bitbake/lib/toaster/tests/builds/buildtest.py
+++ b/bitbake/lib/toaster/tests/builds/buildtest.py
@@ -128,7 +128,7 @@ class BuildTest(unittest.TestCase):
if os.environ.get("TOASTER_TEST_USE_SSTATE_MIRROR"):
ProjectVariable.objects.get_or_create(
name="SSTATE_MIRRORS",
- value="file://.* http://cdn.jsdelivr.net/yocto/sstate/all/PATH;downloadfilename=PATH",
+ value="file://.* http://sstate.yoctoproject.org/all/PATH;downloadfilename=PATH",
project=project)

ProjectTarget.objects.create(project=project,
--
2.53.0

Felix Moessbauer

unread,
Mar 4, 2026, 8:32:11 AM (14 days ago) Mar 4
to isar-...@googlegroups.com, Felix Moessbauer
This partially reverts commit 457b394c124890f03501529812eb95a38e860888.

We must not patch inside the bitbake directory, as this is a 1:1 copy of
the upstream bitbake stable branch. We revert the changes inside the
bitbake dir while keeping the fixes in isar.

Signed-off-by: Felix Moessbauer <felix.mo...@siemens.com>
---
bitbake/lib/layerindexlib/tests/testdata/layer1/conf/layer.conf | 2 +-
bitbake/lib/layerindexlib/tests/testdata/layer2/conf/layer.conf | 2 +-
bitbake/lib/layerindexlib/tests/testdata/layer3/conf/layer.conf | 2 +-
bitbake/lib/layerindexlib/tests/testdata/layer4/conf/layer.conf | 2 +-
4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/bitbake/lib/layerindexlib/tests/testdata/layer1/conf/layer.conf b/bitbake/lib/layerindexlib/tests/testdata/layer1/conf/layer.conf
index c7a372d7..966d5319 100644
--- a/bitbake/lib/layerindexlib/tests/testdata/layer1/conf/layer.conf
+++ b/bitbake/lib/layerindexlib/tests/testdata/layer1/conf/layer.conf
@@ -4,7 +4,7 @@ BBPATH .= ":${LAYERDIR}"
BBFILES += "${LAYERDIR}/recipes-*/*/*.bb"

BBFILE_COLLECTIONS += "core"
-BBFILE_PATTERN_core = "^${LAYERDIR_RE}/"
+BBFILE_PATTERN_core = "^${LAYERDIR}/"
BBFILE_PRIORITY_core = "5"

LAYERSERIES_CORENAMES = "sumo"
diff --git a/bitbake/lib/layerindexlib/tests/testdata/layer2/conf/layer.conf b/bitbake/lib/layerindexlib/tests/testdata/layer2/conf/layer.conf
index dc9d36a6..7569d1c2 100644
--- a/bitbake/lib/layerindexlib/tests/testdata/layer2/conf/layer.conf
+++ b/bitbake/lib/layerindexlib/tests/testdata/layer2/conf/layer.conf
@@ -6,7 +6,7 @@ BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
${LAYERDIR}/recipes-*/*/*.bbappend"

BBFILE_COLLECTIONS += "networking-layer"
-BBFILE_PATTERN_networking-layer := "^${LAYERDIR_RE}/"
+BBFILE_PATTERN_networking-layer := "^${LAYERDIR}/"
BBFILE_PRIORITY_networking-layer = "5"

# This should only be incremented on significant changes that will
diff --git a/bitbake/lib/layerindexlib/tests/testdata/layer3/conf/layer.conf b/bitbake/lib/layerindexlib/tests/testdata/layer3/conf/layer.conf
index 54ddee90..7089071f 100644
--- a/bitbake/lib/layerindexlib/tests/testdata/layer3/conf/layer.conf
+++ b/bitbake/lib/layerindexlib/tests/testdata/layer3/conf/layer.conf
@@ -5,7 +5,7 @@ BBPATH .= ":${LAYERDIR}"
BBFILES += "${LAYERDIR}/recipes*/*/*.bb ${LAYERDIR}/recipes*/*/*.bbappend"

BBFILE_COLLECTIONS += "meta-python"
-BBFILE_PATTERN_meta-python := "^${LAYERDIR_RE}/"
+BBFILE_PATTERN_meta-python := "^${LAYERDIR}/"
BBFILE_PRIORITY_meta-python = "7"

# This should only be incremented on significant changes that will
diff --git a/bitbake/lib/layerindexlib/tests/testdata/layer4/conf/layer.conf b/bitbake/lib/layerindexlib/tests/testdata/layer4/conf/layer.conf
index 4646c234..6649ee02 100644
--- a/bitbake/lib/layerindexlib/tests/testdata/layer4/conf/layer.conf
+++ b/bitbake/lib/layerindexlib/tests/testdata/layer4/conf/layer.conf
@@ -5,7 +5,7 @@ BBPATH .= ":${LAYERDIR}"
BBFILES += "${LAYERDIR}/recipes-*/*/*.bb ${LAYERDIR}/recipes-*/*/*.bbappend"

BBFILE_COLLECTIONS += "openembedded-layer"
-BBFILE_PATTERN_openembedded-layer := "^${LAYERDIR_RE}/"
+BBFILE_PATTERN_openembedded-layer := "^${LAYERDIR}/"

# Define the priority for recipes (.bb files) from this layer,
# choosing carefully how this layer interacts with all of the
--
2.53.0

Zhihang Wei

unread,
Mar 6, 2026, 5:11:01 AM (12 days ago) Mar 6
to Felix Moessbauer, isar-...@googlegroups.com
Hi,

I tested this patch set on CI with the patch "bitbake: Downgrade python
requirements" re-applied.

However, errors still occur. The function "valid_signals()" here was
added to the signal module in Python 3.8, but Buster only has 3.7.3. Not
sure how many other similar problems we might encounter, as the test
fails at this point.
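
If we want to keep buster working, a small compatibility shim along
these lines might be enough (untested sketch; it assumes that
enumerating signal.Signals is an acceptable approximation of
valid_signals()):

    import signal

    try:
        valid_signals = signal.valid_signals()  # Python >= 3.8
    except AttributeError:
        # Python 3.7 fallback (e.g. Debian buster): approximate the set
        # by enumerating all signals the interpreter defines
        valid_signals = set(signal.Signals)

Zhihang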

MOESSBAUER, Felix

unread,
Mar 6, 2026, 5:26:00 AM (12 days ago) Mar 6
to isar-...@googlegroups.com, w...@ilbers.de, Kiszka, Jan
On Fri, 2026-03-06 at 11:10 +0100, Zhihang Wei wrote:
> On 3/4/26 14:31, 'Felix Moessbauer' via isar-users wrote:
> > Upstream commit 1c9ec1ffde75809de34c10d3ec2b40d84d258cb4.
> >
> > This makes bitbake compatible with Python 3.14 and fixes a critical
> > error on Debian Trixie hosts where no stacktrace was shown on a
> > parser exception.
> >
> > Signed-off-by: Felix Moessbauer <felix.mo...@siemens.com>
> > ---

[shortening the thread]

> >
> >
> > +# Recomputing the sets in signal.py is expensive (bitbake -pP idle)
> > +# so try and use _signal directly to avoid it
> > +valid_signals = signal.valid_signals()
> Hi,
>
> I tested this patch set on CI with the patch "bitbake: Downgrade python
> requirements" re-applied.
>
> However, errors still occur. The function "valid_signals()" here was
> added to the signal module in Python 3.8, but Buster only has 3.7.3. Not
> sure how many other similar problems we might encounter, as the test
> fails at this point.
>
> Zhihang

Too bad. So either we drop buster support, or we maintain our own
bitbake stable branch, or we re-implement the imaging plugins so they
do not run inside the chroot. Buster's eLTS runs until 30 June 2029
[1], so I also don't like the idea of dropping buster support.

However, we can't hold back the fixes we need for trixie that long. We
already got complaints that simple bitbake syntax errors are basically
impossible to debug on trixie, as the parser just crashes without any
indication of what went wrong. So we have to make a decision.

PS: The only fix we really need is [2]. Maybe we can just cherry-pick
that prior to the isar release to buy us some time.

[1] https://wiki.debian.org/de/LTS/Extended
[2]
https://github.com/openembedded/bitbake/commit/c25e7ed128b9fd5b53d28d678238e2f3af52ef8b

Felix

> > +try:
> > + import _signal
> > + sigmask = _signal.pthread_sigmask
> > +except ImportError:
> > + sigmask = signal.pthread_sigmask
> > +
> > # If we don't have a timeout of some kind and a process/thread exits badly (for example
> > # OOM killed) and held a lock, we'd just hang in the lock futex forever. It is better
> > # we exit at some point than hang. 5 minutes with no progress means we're probably deadlocked.
> > +# This function can still deadlock python since it can't signal the other threads to exit
> > +# (signals are handled in the main thread) and even os._exit() will wait on non-daemon threads
> > +# to exit.
> > @contextmanager


--
Siemens AG
Linux Expert Center
Friedrich-Ludwig-Bauer-Str. 3
85748 Garching, Germany

Jan Kiszka

unread,
Mar 6, 2026, 6:05:31 AM (12 days ago) Mar 6
to Moessbauer, Felix (FT RPD CED OES-DE), isar-...@googlegroups.com, w...@ilbers.de
We cannot drop it; it's part of CIP's portfolio as well.

Let's fix the compat issue and at least contribute that to bitbake upstream.

>
> However, we can't hold the fixes we need for trixie back that long. We
> already got complains that simple bitbake syntax errors are basically
> impossible to debug on trixie as the parser just crashes without any
> indication what went wrong. So we have to make a decision.
>
> PS: The only fix we really need is [2]. Maybe we can just cherry-pick
> that prior to the isar release to buy us some time.

Exactly: Do that now and resync with bitbake, also upstream, after the
release.

Jan

--
Siemens AG, Foundational Technologies
Linux Expert Center

MOESSBAUER, Felix

unread,
Mar 6, 2026, 10:51:57 AM (12 days ago) Mar 6
to isar-...@googlegroups.com, Kiszka, Jan, w...@ilbers.de
What exactly needs to be contributed? I'm pretty sure bitbake will not
accept patches that lower the required Python version. The signal
handling is not easy to implement for both older and newer versions.

>
> >
> > However, we can't hold the fixes we need for trixie back that long. We
> > already got complains that simple bitbake syntax errors are basically
> > impossible to debug on trixie as the parser just crashes without any
> > indication what went wrong. So we have to make a decision.
> >
> > PS: The only fix we really need is [2]. Maybe we can just cherry-pick
> > that prior to the isar release to buy us some time.
>
> Exactly: Do that now and resync with bitbake, also upstream, after the
> release.

@Zhihang Wei: Shall I send a v2, or do you want to take over (which is
probably faster, as you can directly pass it through your CI)?

Just let me know.

Felix

>
> Jan
>
> --
> Siemens AG, Foundational Technologies
> Linux Expert Center

Zhihang Wei

unread,
Mar 6, 2026, 10:55:12 AM (12 days ago) Mar 6
to MOESSBAUER, Felix, isar-...@googlegroups.com, Kiszka, Jan
It's better to send a v2 to the mailing list, thanks.

Zhihang