[PATCH 0/2] Python code style for the testsuite


Anton Mikanovich

Jul 12, 2024, 8:13:35 AM
to isar-...@googlegroups.com, Anton Mikanovich
The current testcases are written in a variety of code styles, ignoring any
linter checks. Fix this, and also document some rules and useful tools for
future test writers.

Anton Mikanovich (2):
testsuite: Provide code style documentation
testsuite: Fix code style

testsuite/README.md | 45 ++
testsuite/cibase.py | 237 ++++++----
testsuite/cibuilder.py | 425 +++++++++++-------
testsuite/citest.py | 245 ++++++----
testsuite/repro-build-test.py | 39 +-
testsuite/start_vm.py | 152 +++++--
testsuite/unittests/bitbake.py | 22 +-
testsuite/unittests/rootfs.py | 9 +-
.../unittests/test_image_account_extension.py | 162 ++++---
9 files changed, 861 insertions(+), 475 deletions(-)

--
2.34.1

Anton Mikanovich

Jul 12, 2024, 8:13:36 AM
to isar-...@googlegroups.com, Anton Mikanovich
Add some recommendations for testcase creators.

Signed-off-by: Anton Mikanovich <ami...@ilbers.de>
---
testsuite/README.md | 45 +++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 45 insertions(+)

diff --git a/testsuite/README.md b/testsuite/README.md
index cfcfb1bf..7cbacf99 100644
--- a/testsuite/README.md
+++ b/testsuite/README.md
@@ -137,6 +137,51 @@ avocado so that isar testsuite files could be found:
export PYTHONPATH=${PYTHONPATH}:${TESTSUITEDIR}
```

+# Code style for testcases
+
+The recommended Python code style for testcases is based on the
+[PEP8 Style Guide for Python Code](https://peps.python.org/pep-0008) with
+several additions described below.
+
+## Using quotes
+
+Although [PEP8](https://peps.python.org/pep-0008) makes no recommendation on
+string quote usage, the preferred Isar style is the following:
+
+ - Single quotes for data and small symbol-like strings.
+ - Double quotes for human-readable strings and string interpolation.
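
A short sketch of these quoting rules (the variable names are illustrative
only, not taken from the testsuite):

```python
# Single quotes for data and small symbol-like strings
distro = 'bookworm'
arch = 'amd64'

# Double quotes for human-readable strings and string interpolation
message = f"Building {distro} for {arch}..."
print(message)
```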
+
+## Line wrapping
+
+Argument lists that do not fit within the 79-character line limit should be
+moved to a new line, keeping all arguments on one line if possible.
+Otherwise, every argument should be placed on a separate line.
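
For instance, a call that no longer fits on one line would be wrapped like
this (the function and its arguments are hypothetical):

```python
def configure(build_dir, arch='amd64', cross=True, offline=False):
    # Placeholder body: just echo the arguments back
    return (build_dir, arch, cross, offline)


# Short argument lists stay on one line
result = configure('/tmp/build', arch='arm64')

# When the call exceeds 79 characters and the arguments do not fit
# together on a new line, each argument goes on its own line:
result = configure(
    '/tmp/build',
    arch='arm64',
    cross=False,
    offline=True,
)
```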
+
+## Function definition spacing
+
+Function and class definitions should be spaced in the following way:
+
+ - One blank line before and after inner functions.
+ - Two blank lines before and after module-level functions and classes.
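
A minimal sketch of this spacing, loosely modelled on the testsuite helpers
(the function bodies are placeholders):

```python
def wait_connection(timeout):
    """Module-level functions get two blank lines around them."""

    def ssh_ping():
        # Inner functions get one blank line before and after
        return 0

    return ssh_ping()


class CIBaseTest:
    """Classes are also separated by two blank lines."""

    pass
```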
+
+## Tools for checking code style
+
+To check compliance with the PEP8 standard:
+
+```
+$ flake8 sample.py
+```
+
+To reformat the code according to the recommended style:
+
+```
+$ black -S -l 79 sample.py
+```
+
+Black uses its own [code style](https://black.readthedocs.io/en/stable/the_black_code_style/current_style.html)
+based on [PEP8](https://peps.python.org/pep-0008), so the options above are
+needed to override its defaults: `-S` keeps the existing string quotes and
+`-l 79` sets the 79-character line limit.
+
# Example of the downstream testcase

See `meta-isar/test` for an example of the testcase for kas-based downstream.
--
2.34.1

Anton Mikanovich

Jul 12, 2024, 8:13:38 AM
to isar-...@googlegroups.com, Anton Mikanovich, Ilia Skochilov
Bring the Python code into compliance with PEP8 requirements.
Also change string quotes style for consistency throughout the code.
Rebuild line wrapping and function/classes declaration spacing to be
compliant with the current rules described in testsuite/README.md.

Used black v23.1.0 and flake8 v5.0.4.

Signed-off-by: Anton Mikanovich <ami...@ilbers.de>
Signed-off-by: Ilia Skochilov <iskoc...@ilbers.de>
---
testsuite/cibase.py | 237 ++++++----
testsuite/cibuilder.py | 425 +++++++++++-------
testsuite/citest.py | 245 ++++++----
testsuite/repro-build-test.py | 39 +-
testsuite/start_vm.py | 152 +++++--
testsuite/unittests/bitbake.py | 22 +-
testsuite/unittests/rootfs.py | 9 +-
.../unittests/test_image_account_extension.py | 162 ++++---
8 files changed, 816 insertions(+), 475 deletions(-)

diff --git a/testsuite/cibase.py b/testsuite/cibase.py
index b2a804b7..cccac86c 100755
--- a/testsuite/cibase.py
+++ b/testsuite/cibase.py
@@ -5,18 +5,18 @@ import os
import re
import shutil
import tempfile
-import time

from cibuilder import CIBuilder, isar_root
from utils import CIUtils

from avocado.utils import process

+
class CIBaseTest(CIBuilder):
def perform_build_test(self, targets, **kwargs):
self.configure(**kwargs)

- self.log.info('Starting build...')
+ self.log.info("Starting build...")

self.bitbake(targets, **kwargs)

@@ -24,31 +24,43 @@ class CIBaseTest(CIBuilder):
self.configure(wic_deploy_parts=wic_deploy_parts, **kwargs)
self.bitbake(targets, **kwargs)

- partition_files = set(glob.glob(f'{self.build_dir}/tmp/deploy/images/*/*.wic.p1'))
+ wic_path = f"{self.build_dir}/tmp/deploy/images/*/*.wic.p1"
+ partition_files = set(glob.glob(wic_path))
if wic_deploy_parts and len(partition_files) == 0:
- self.fail('Found raw wic partitions in DEPLOY_DIR')
+ self.fail("Found raw wic partitions in DEPLOY_DIR")
if not wic_deploy_parts and len(partition_files) != 0:
- self.fail('Did not find raw wic partitions in DEPLOY_DIR')
+ self.fail("Did not find raw wic partitions in DEPLOY_DIR")

def perform_repro_test(self, targets, signed=False, **kwargs):
- gpg_pub_key = os.path.dirname(__file__) + '/keys/base-apt/test_pub.key'
- gpg_priv_key = os.path.dirname(__file__) + '/keys/base-apt/test_priv.key'
+ keys_dir = os.path.dirname(__file__) + '/keys/base-apt'
+ gpg_pub_key = os.path.join(keys_dir, 'test_pub.key')
+ gpg_priv_key = os.path.join(keys_dir, 'test_priv.key')

- self.configure(gpg_pub_key=gpg_pub_key if signed else None, sstate_dir="", **kwargs)
+ self.configure(
+ gpg_pub_key=gpg_pub_key if signed else None,
+ sstate_dir='',
+ **kwargs,
+ )

os.chdir(self.build_dir)

os.environ['GNUPGHOME'] = gnupg_home = tempfile.mkdtemp()
- result = process.run('gpg --import %s %s' % (gpg_pub_key, gpg_priv_key))
+ result = process.run(f"gpg --import {gpg_pub_key} {gpg_priv_key}")

if result.exit_status:
- self.fail('GPG import failed')
+ self.fail("GPG import failed")

try:
self.bitbake(targets, **kwargs)

- self.move_in_build_dir('tmp', 'tmp_middle_repro_%s' % ('signed' if signed else 'unsigned'))
- self.configure(gpg_pub_key=gpg_pub_key if signed else None, offline=True, sstate_dir="", **kwargs)
+ repro_type = 'signed' if signed else 'unsigned'
+ self.move_in_build_dir('tmp', f"tmp_middle_repro_{repro_type}")
+ self.configure(
+ gpg_pub_key=gpg_pub_key if signed else None,
+ offline=True,
+ sstate_dir='',
+ **kwargs,
+ )

self.bitbake(targets, **kwargs)

@@ -71,13 +83,13 @@ class CIBaseTest(CIBuilder):
count = 0
for filename in glob.iglob(dir + '/**/stats', recursive=True):
if os.path.isfile(filename):
- with open(filename,'r') as file:
+ with open(filename, 'r') as file:
content = file.readlines()
- if (field < len(content)):
+ if field < len(content):
count += int(content[field])
return count

- self.configure(ccache=True, sstate_dir="", **kwargs)
+ self.configure(ccache=True, sstate_dir='', **kwargs)

# Field that stores direct ccache hits
direct_cache_hit = 22
@@ -86,21 +98,21 @@ class CIBaseTest(CIBuilder):
self.delete_from_build_dir('sstate-cache')
self.delete_from_build_dir('ccache')

- self.log.info('Starting build and filling ccache dir...')
+ self.log.info("Starting build and filling ccache dir...")
self.bitbake(targets, **kwargs)
hit1 = ccache_stats(self.build_dir + '/ccache', direct_cache_hit)
- self.log.info('Ccache hits 1: ' + str(hit1))
+ self.log.info(f"Ccache hits 1: {str(hit1)}")

self.move_in_build_dir('tmp', 'tmp_middle_ccache')
self.delete_from_build_dir('sstate-cache')

- self.log.info('Starting build and using ccache dir...')
+ self.log.info("Starting build and using ccache dir...")
self.bitbake(targets, **kwargs)
hit2 = ccache_stats(self.build_dir + '/ccache', direct_cache_hit)
- self.log.info('Ccache hits 2: ' + str(hit2))
+ self.log.info(f"Ccache hits 2: {str(hit2)}")

if hit2 <= hit1:
- self.fail('Ccache was not used on second build')
+ self.fail("Ccache was not used on second build")

# Cleanup
self.move_in_build_dir('tmp', 'tmp_after_ccache')
@@ -112,10 +124,10 @@ class CIBaseTest(CIBuilder):
# Use a different isar root for populating sstate cache
isar_sstate = f"{isar_root}/isar-sstate"
os.makedirs(isar_sstate)
- process.run(f'git --work-tree={isar_sstate} checkout HEAD -- .')
+ process.run(f"git --work-tree={isar_sstate} checkout HEAD -- .")

self.init('../build-sstate', isar_dir=isar_sstate)
- self.configure(sstate=True, sstate_dir="", **kwargs)
+ self.configure(sstate=True, sstate_dir='', **kwargs)

# Cleanup sstate and tmp before test
self.delete_from_build_dir('sstate-cache')
@@ -127,17 +139,30 @@ class CIBaseTest(CIBuilder):
# Remove isar configuration so the the following test creates a new one
self.delete_from_build_dir('conf')

- def perform_signature_lint(self, targets, verbose=False, sources_dir=isar_root,
- excluded_tasks=None, **kwargs):
- """Generate signature data for target(s) and check for cachability issues."""
+ def perform_signature_lint(
+ self,
+ targets,
+ verbose=False,
+ sources_dir=isar_root,
+ excluded_tasks=None,
+ **kwargs,
+ ):
+ """
+ Generate signature data for target(s) and check for cachability issues
+ """
self.configure(**kwargs)
- self.move_in_build_dir("tmp", "tmp_before_sstate")
- self.bitbake(targets, sig_handler="none")
-
- verbose_arg = "--verbose" if verbose else ""
- excluded_arg = f"--excluded-tasks {','.join(excluded_tasks)}" if excluded_tasks else ""
- cmd = f"{isar_root}/scripts/isar-sstate lint --lint-stamps {self.build_dir}/tmp/stamps " \
- f"--build-dir {self.build_dir} --sources-dir {sources_dir} {verbose_arg} {excluded_arg}"
+ self.move_in_build_dir('tmp', 'tmp_before_sstate')
+ self.bitbake(targets, sig_handler='none')
+
+ verbose_arg = '--verbose' if verbose else ''
+ excluded_arg = ''
+ if excluded_tasks:
+ excluded_arg = f"--excluded-tasks {','.join(excluded_tasks)}"
+ cmd = (
+ f"{isar_root}/scripts/isar-sstate lint --lint-stamps "
+ f"{self.build_dir}/tmp/stamps --build-dir {self.build_dir} "
+ f"--sources-dir {sources_dir} {verbose_arg} {excluded_arg}"
+ )
self.log.info(f"Running: {cmd}")
exit_status, output = process.getstatusoutput(cmd, ignore_status=True)
if exit_status > 0:
@@ -148,10 +173,11 @@ class CIBaseTest(CIBuilder):

def perform_sstate_test(self, image_target, package_target, **kwargs):
def check_executed_tasks(target, expected):
- taskorder_file = glob.glob(f'{self.build_dir}/tmp/work/*/{target}/*/temp/log.task_order')
+ recipe_workdir = f"{self.build_dir}/tmp/work/*/{target}/*"
+ taskorder_file = glob.glob(f"{recipe_workdir}/temp/log.task_order")
try:
with open(taskorder_file[0], 'r') as f:
- tasks = [l.split()[1] for l in f.readlines()]
+ tasks = [line.split()[1] for line in f.readlines()]
except (FileNotFoundError, IndexError):
tasks = []
if expected is None:
@@ -163,75 +189,116 @@ class CIBaseTest(CIBuilder):
should_run = False
e = e[1:]
if should_run != (e in tasks):
- self.log.error(f"{target}: executed tasks {str(tasks)} did not match expected {str(expected)}")
+ self.log.error(
+ f"{target}: executed tasks {str(tasks)} did not match "
+ f"expected {str(expected)}"
+ )
return False
return True

- self.configure(sstate=True, sstate_dir="", **kwargs)
+ self.configure(sstate=True, sstate_dir='', **kwargs)
+
+ deploy_dir = f"{self.build_dir}/tmp/deploy"

- # Check signature files for cachability issues like absolute paths in signatures
- result = process.run(f'{isar_root}/scripts/isar-sstate lint {self.build_dir}/sstate-cache '
- f'--build-dir {self.build_dir} --sources-dir {isar_root}')
+ # Check signature files for cachability issues like absolute paths in
+ # signatures
+ result = process.run(
+ f"{isar_root}/scripts/isar-sstate lint "
+ f"{self.build_dir}/sstate-cache --build-dir {self.build_dir} "
+ f"--sources-dir {isar_root}"
+ )
if result.exit_status > 0:
self.fail("Detected cachability issues")

# Save contents of image deploy dir
- expected_files = set(glob.glob(f'{self.build_dir}/tmp/deploy/images/*/*'))
+ expected_files = set(glob.glob(f"{deploy_dir}/images/*/*"))

# Rebuild image
self.move_in_build_dir('tmp', 'tmp_before_sstate')
self.bitbake(image_target, **kwargs)
- if not all([
- check_executed_tasks('isar-bootstrap-target',
- ['do_bootstrap_setscene', '!do_bootstrap']),
- check_executed_tasks('sbuild-chroot-target',
- ['do_rootfs_install_setscene', '!do_rootfs_install']),
- check_executed_tasks('isar-image-base-*',
- ['do_rootfs_install_setscene', '!do_rootfs_install'])
- ]):
+ if not all(
+ [
+ check_executed_tasks(
+ 'isar-bootstrap-target',
+ ['do_bootstrap_setscene', '!do_bootstrap'],
+ ),
+ check_executed_tasks(
+ 'sbuild-chroot-target',
+ ['do_rootfs_install_setscene', '!do_rootfs_install'],
+ ),
+ check_executed_tasks(
+ 'isar-image-base-*',
+ ['do_rootfs_install_setscene', '!do_rootfs_install'],
+ ),
+ ]
+ ):
self.fail("Failed rebuild image")

# Verify content of image deploy dir
- deployed_files = set(glob.glob(f'{self.build_dir}/tmp/deploy/images/*/*'))
+ deployed_files = set(glob.glob(f"{deploy_dir}/images/*/*"))
if not deployed_files == expected_files:
if len(expected_files - deployed_files) > 0:
- self.log.error(f"{target}: files missing from deploy dir after rebuild with sstate cache:"
- f"{expected_files - deployed_files}")
+ self.log.error(
+ f"{image_target}: files missing from deploy dir after "
+ f"rebuild with sstate cache:"
+ f"{expected_files - deployed_files}"
+ )
if len(deployed_files - expected_files) > 0:
- self.log.error(f"{target}: additional files in deploy dir after rebuild with sstate cache:"
- f"{deployed_files - expected_files}")
+ self.log.error(
+ f"{image_target}: additional files in deploy dir after "
+ f"rebuild with sstate cache:"
+ f"{deployed_files - expected_files}"
+ )
self.fail("Failed rebuild image")

# Rebuild single package
self.move_in_build_dir('tmp', 'tmp_middle_sstate')
self.bitbake(package_target, **kwargs)
- if not all([
- check_executed_tasks('isar-bootstrap-target',
- ['do_bootstrap_setscene']),
- check_executed_tasks('sbuild-chroot-target',
- ['!do_sbuildchroot_deploy']),
- check_executed_tasks('hello',
- ['do_dpkg_build_setscene', 'do_deploy_deb', '!do_dpkg_build'])
- ]):
+ if not all(
+ [
+ check_executed_tasks(
+ 'isar-bootstrap-target', ['do_bootstrap_setscene']
+ ),
+ check_executed_tasks(
+ 'sbuild-chroot-target', ['!do_sbuildchroot_deploy']
+ ),
+ check_executed_tasks(
+ 'hello',
+ [
+ 'do_dpkg_build_setscene',
+ 'do_deploy_deb',
+ '!do_dpkg_build',
+ ],
+ ),
+ ]
+ ):
self.fail("Failed rebuild single package")

# Rebuild package and image
self.move_in_build_dir('tmp', 'tmp_middle2_sstate')
- process.run(f'find {self.build_dir}/sstate-cache/ -name sstate:hello:* -delete')
+ sstate_cache_dir = f"{self.build_dir}/sstate-cache/"
+ process.run(f"find {sstate_cache_dir} -name sstate:hello:* -delete")
self.bitbake(image_target, **kwargs)
- if not all([
- check_executed_tasks('isar-bootstrap-target',
- ['do_bootstrap_setscene', '!do_bootstrap']),
- check_executed_tasks('sbuild-chroot-target',
- ['do_rootfs_install_setscene', '!do_rootfs_install']),
- check_executed_tasks('hello',
- ['do_fetch', 'do_dpkg_build']),
- # TODO: if we actually make a change to hello, then we could test
- # that do_rootfs is executed. currently, hello is rebuilt,
- # but its sstate sig/hash does not change.
- check_executed_tasks('isar-image-base-*',
- ['do_rootfs_install_setscene', '!do_rootfs_install'])
- ]):
+ if not all(
+ [
+ check_executed_tasks(
+ 'isar-bootstrap-target',
+ ['do_bootstrap_setscene', '!do_bootstrap'],
+ ),
+ check_executed_tasks(
+ 'sbuild-chroot-target',
+ ['do_rootfs_install_setscene', '!do_rootfs_install'],
+ ),
+ check_executed_tasks('hello', ['do_fetch', 'do_dpkg_build']),
+ # TODO: if we actually make a change to hello, then we could
+ # test that do_rootfs is executed. currently, hello is
+ # rebuilt, but its sstate sig/hash does not change.
+ check_executed_tasks(
+ 'isar-image-base-*',
+ ['do_rootfs_install_setscene', '!do_rootfs_install'],
+ ),
+ ]
+ ):
self.fail("Failed rebuild package and image")

def perform_source_test(self, targets, **kwargs):
@@ -242,9 +309,9 @@ class CIBaseTest(CIBuilder):
package = target.rsplit(':', 1)[-1]
isar_apt = CIUtils.getVars('REPO_ISAR_DB_DIR', target=target)
fpath = f"{package}/{package}*.tar.*"
- targz = set(glob.glob(f'{isar_apt}/../apt/*/pool/*/*/{fpath}'))
+ targz = set(glob.glob(f"{isar_apt}/../apt/*/pool/*/*/{fpath}"))
if len(targz) < 1:
- self.fail('No source packages found')
+ self.fail("No source packages found")
for fname in targz:
sfiles[target][fname] = CIUtils.get_tar_content(fname)
return sfiles
@@ -260,14 +327,16 @@ class CIBaseTest(CIBuilder):
for filename in sfiles_before[tdir]:
for file in sfiles_before[tdir][filename]:
if os.path.basename(file).startswith('.git'):
- self.fail('Found .git files')
+ self.fail("Found .git files")

package = targets[0].rsplit(':', 1)[-1]
- tmp_layer_nested_dirs = os.path.join(tmp_layer_dir,
- 'recipes-app', package)
+ tmp_layer_nested_dirs = os.path.join(
+ tmp_layer_dir, 'recipes-app', package
+ )
os.makedirs(tmp_layer_nested_dirs, exist_ok=True)
- bbappend_file = os.path.join(tmp_layer_nested_dirs,
- package + '.bbappend')
+ bbappend_file = os.path.join(
+ tmp_layer_nested_dirs, package + '.bbappend'
+ )
with open(bbappend_file, 'w') as file:
file.write('DPKG_SOURCE_EXTRA_ARGS = ""')

@@ -278,12 +347,12 @@ class CIBaseTest(CIBuilder):
for tdir in sfiles_after:
for filename in sfiles_after[tdir]:
if not sfiles_before[tdir][filename]:
- self.fail('Source filenames are different')
+ self.fail("Source filenames are different")
diff = []
for file in sfiles_after[tdir][filename]:
if file not in sfiles_before[tdir][filename]:
diff.append(file)
if len(diff) < 1:
- self.fail('Source packages are equal')
+ self.fail("Source packages are equal")
finally:
self.cleanup_tmp_layer(tmp_layer_dir)
diff --git a/testsuite/cibuilder.py b/testsuite/cibuilder.py
index a20e88f9..4731bf69 100755
--- a/testsuite/cibuilder.py
+++ b/testsuite/cibuilder.py
@@ -19,7 +19,7 @@ from avocado import Test
from avocado.utils import path
from avocado.utils import process

-sys.path.insert(0, os.path.dirname(os.path.realpath(__file__)) + '/../bitbake/lib')
+sys.path.append(os.path.join(os.path.dirname(__file__), '../bitbake/lib'))

import bb

@@ -28,19 +28,23 @@ DEF_VM_TO_SEC = 600
isar_root = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
backup_prefix = '.ci-backup'

-app_log = logging.getLogger("avocado.app")
+app_log = logging.getLogger('avocado.app')
+

class CanBeFinished(Exception):
pass

+
class CIBuilder(Test):
def setUp(self):
super(CIBuilder, self).setUp()
job_log = os.path.join(os.path.dirname(self.logdir), '..', 'job.log')
self._file_handler = logging.FileHandler(filename=job_log)
self._file_handler.setLevel(logging.ERROR)
- fmt = ('%(asctime)s %(module)-16.16s L%(lineno)-.4d %('
- 'levelname)-5.5s| %(message)s')
+ fmt = (
+ '%(asctime)s %(module)-16.16s L%(lineno)-.4d '
+ '%(levelname)-5.5s| %(message)s'
+ )
formatter = logging.Formatter(fmt=fmt)
self._file_handler.setFormatter(formatter)
app_log.addHandler(self._file_handler)
@@ -49,22 +53,31 @@ class CIBuilder(Test):
# initialize build_dir and setup environment
# needs to run once (per test case)
if hasattr(self, 'build_dir'):
- self.error("Broken test implementation: init() called multiple times.")
+ self.error(
+ "Broken test implementation: init() called multiple times."
+ )
self.build_dir = os.path.join(isar_dir, build_dir)
os.chdir(isar_dir)
- os.environ["TEMPLATECONF"] = "meta-test/conf"
+ os.environ['TEMPLATECONF'] = 'meta-test/conf'
path.usable_rw_dir(self.build_dir)
- output = process.getoutput('/bin/bash -c "source isar-init-build-env \
- %s 2>&1 >/dev/null; env"' % self.build_dir)
- env = dict(((x.split('=', 1) + [''])[:2] \
- for x in output.splitlines() if x != ''))
+ output = process.getoutput(
+ f"/bin/bash -c 'source isar-init-build-env {self.build_dir} 2>&1 "
+ f">/dev/null; env'"
+ )
+ env = dict(
+ (
+ (x.split('=', 1) + [''])[:2]
+ for x in output.splitlines()
+ if x != ''
+ )
+ )
os.environ.update(env)

self.vm_dict = {}
self.vm_dict_file = '%s/vm_dict_file' % self.build_dir

if os.path.isfile(self.vm_dict_file):
- with open(self.vm_dict_file, "rb") as f:
+ with open(self.vm_dict_file, 'rb') as f:
data = f.read()
if data:
self.vm_dict = pickle.loads(data)
@@ -73,12 +86,25 @@ class CIBuilder(Test):
if not hasattr(self, 'build_dir'):
self.error("Broken test implementation: need to call init().")

- def configure(self, compat_arch=True, cross=True, debsrc_cache=False,
- container=False, ccache=False, sstate=False, offline=False,
- gpg_pub_key=None, wic_deploy_parts=False, dl_dir=None,
- sstate_dir=None, ccache_dir=None,
- source_date_epoch=None, use_apt_snapshot=False,
- image_install=None, **kwargs):
+ def configure(
+ self,
+ compat_arch=True,
+ cross=True,
+ debsrc_cache=False,
+ container=False,
+ ccache=False,
+ sstate=False,
+ offline=False,
+ gpg_pub_key=None,
+ wic_deploy_parts=False,
+ dl_dir=None,
+ sstate_dir=None,
+ ccache_dir=None,
+ source_date_epoch=None,
+ use_apt_snapshot=False,
+ image_install=None,
+ **kwargs,
+ ):
# write configuration file and set bitbake_args
# can run multiple times per test case
self.check_init()
@@ -104,24 +130,26 @@ class CIBuilder(Test):
# get parameters from environment
distro_apt_premir = os.getenv('DISTRO_APT_PREMIRRORS')

- self.log.info(f'===================================================\n'
- f'Configuring build_dir {self.build_dir}\n'
- f' compat_arch = {compat_arch}\n'
- f' cross = {cross}\n'
- f' debsrc_cache = {debsrc_cache}\n'
- f' offline = {offline}\n'
- f' container = {container}\n'
- f' ccache = {ccache}\n'
- f' sstate = {sstate}\n'
- f' gpg_pub_key = {gpg_pub_key}\n'
- f' wic_deploy_parts = {wic_deploy_parts}\n'
- f' source_date_epoch = {source_date_epoch} \n'
- f' use_apt_snapshot = {use_apt_snapshot} \n'
- f' dl_dir = {dl_dir}\n'
- f' sstate_dir = {sstate_dir}\n'
- f' ccache_dir = {ccache_dir}\n'
- f' image_install = {image_install}\n'
- f'===================================================')
+ self.log.info(
+ f"===================================================\n"
+ f"Configuring build_dir {self.build_dir}\n"
+ f" compat_arch = {compat_arch}\n"
+ f" cross = {cross}\n"
+ f" debsrc_cache = {debsrc_cache}\n"
+ f" offline = {offline}\n"
+ f" container = {container}\n"
+ f" ccache = {ccache}\n"
+ f" sstate = {sstate}\n"
+ f" gpg_pub_key = {gpg_pub_key}\n"
+ f" wic_deploy_parts = {wic_deploy_parts}\n"
+ f" source_date_epoch = {source_date_epoch} \n"
+ f" use_apt_snapshot = {use_apt_snapshot} \n"
+ f" dl_dir = {dl_dir}\n"
+ f" sstate_dir = {sstate_dir}\n"
+ f" ccache_dir = {ccache_dir}\n"
+ f" image_install = {image_install}\n"
+ f"==================================================="
+ )

# determine bitbake_args
self.bitbake_args = []
@@ -142,7 +170,10 @@ class CIBuilder(Test):
f.write('IMAGE_INSTALL += "kselftest"\n')
if cross:
f.write('ISAR_CROSS_COMPILE = "1"\n')
- f.write('IMAGE_INSTALL:append:hikey = " linux-headers-${KERNEL_NAME}"\n')
+ f.write(
+ 'IMAGE_INSTALL:append:hikey = '
+ '" linux-headers-${KERNEL_NAME}"\n'
+ )
if debsrc_cache:
f.write('BASE_REPO_FEATURES = "cache-deb-src"\n')
if offline:
@@ -150,7 +181,10 @@ class CIBuilder(Test):
f.write('BB_NO_NETWORK = "1"\n')
if container:
f.write('SDK_FORMATS = "docker-archive"\n')
- f.write('IMAGE_INSTALL:remove = "example-module-${KERNEL_NAME} enable-fsck"\n')
+ f.write(
+ 'IMAGE_INSTALL:remove = '
+ '"example-module-${KERNEL_NAME} enable-fsck"\n'
+ )
if gpg_pub_key:
f.write('BASE_REPO_KEY="file://' + gpg_pub_key + '"\n')
if wic_deploy_parts:
@@ -161,7 +195,9 @@ class CIBuilder(Test):
f.write('USE_CCACHE = "1"\n')
f.write('CCACHE_TOP_DIR = "%s"\n' % ccache_dir)
if source_date_epoch:
- f.write('SOURCE_DATE_EPOCH_FALLBACK = "%s"\n' % source_date_epoch)
+ f.write(
+ 'SOURCE_DATE_EPOCH_FALLBACK = "%s"\n' % source_date_epoch
+ )
if use_apt_snapshot:
f.write('ISAR_USE_APT_SNAPSHOT = "1"\n')
if dl_dir:
@@ -194,9 +230,9 @@ class CIBuilder(Test):

def bitbake(self, target, bitbake_cmd=None, sig_handler=None, **kwargs):
self.check_init()
- self.log.info('===================================================')
- self.log.info('Building ' + str(target))
- self.log.info('===================================================')
+ self.log.info("===================================================")
+ self.log.info(f"Building {str(target)}")
+ self.log.info("===================================================")
os.chdir(self.build_dir)
cmdline = ['bitbake']
if self.bitbake_args:
@@ -212,9 +248,13 @@ class CIBuilder(Test):
else:
cmdline.append(target)

- with subprocess.Popen(" ".join(cmdline), stdout=subprocess.PIPE,
- stderr=subprocess.PIPE, universal_newlines=True,
- shell=True) as p1:
+ with subprocess.Popen(
+ ' '.join(cmdline),
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ universal_newlines=True,
+ shell=True,
+ ) as p1:
poller = select.poll()
poller.register(p1.stdout, select.POLLIN)
poller.register(p1.stderr, select.POLLIN)
@@ -229,28 +269,28 @@ class CIBuilder(Test):
app_log.error(p1.stderr.readline().rstrip())
p1.wait()
if p1.returncode:
- self.fail('Bitbake failed')
+ self.fail("Bitbake failed")

def backupfile(self, path):
self.check_init()
try:
shutil.copy2(path, path + backup_prefix)
except FileNotFoundError:
- self.log.warn(path + ' not exist')
+ self.log.warn(f"{path} not exist")

def backupmove(self, path):
self.check_init()
try:
shutil.move(path, path + backup_prefix)
except FileNotFoundError:
- self.log.warn(path + ' not exist')
+ self.log.warn(f"{path} not exist")

def restorefile(self, path):
self.check_init()
try:
shutil.move(path + backup_prefix, path)
except FileNotFoundError:
- self.log.warn(path + backup_prefix + ' not exist')
+ self.log.warn(f"{path}{backup_prefix} not exist")

def create_tmp_layer(self):
tmp_layer_dir = os.path.join(isar_root, 'meta-tmp')
@@ -259,82 +299,102 @@ class CIBuilder(Test):
os.makedirs(conf_dir, exist_ok=True)
layer_conf_file = os.path.join(conf_dir, 'layer.conf')
with open(layer_conf_file, 'w') as file:
- file.write('\
-BBPATH .= ":${LAYERDIR}"\
-\nBBFILES += "${LAYERDIR}/recipes-*/*/*.bbappend"\
-\nBBFILE_COLLECTIONS += "tmp"\
-\nBBFILE_PATTERN_tmp = "^${LAYERDIR}/"\
-\nBBFILE_PRIORITY_tmp = "5"\
-\nLAYERVERSION_tmp = "1"\
-\nLAYERSERIES_COMPAT_tmp = "v0.6"\
-')
-
- bblayersconf_file = os.path.join(self.build_dir, 'conf',
- 'bblayers.conf')
+ file.write(
+ 'BBPATH .= ":${LAYERDIR}"\n'
+ 'BBFILES += "${LAYERDIR}/recipes-*/*/*.bbappend"\n'
+ 'BBFILE_COLLECTIONS += "tmp"\n'
+ 'BBFILE_PATTERN_tmp = "^${LAYERDIR}/"\n'
+ 'BBFILE_PRIORITY_tmp = "5"\n'
+ 'LAYERVERSION_tmp = "1"\n'
+ 'LAYERSERIES_COMPAT_tmp = "v0.6"\n'
+ )
+
+ bblayersconf_file = os.path.join(
+ self.build_dir, 'conf', 'bblayers.conf'
+ )
bb.utils.edit_bblayers_conf(bblayersconf_file, tmp_layer_dir, None)

return tmp_layer_dir

def cleanup_tmp_layer(self, tmp_layer_dir):
- bblayersconf_file = os.path.join(self.build_dir, 'conf',
- 'bblayers.conf')
+ bblayersconf_file = os.path.join(
+ self.build_dir, 'conf', 'bblayers.conf'
+ )
bb.utils.edit_bblayers_conf(bblayersconf_file, None, tmp_layer_dir)
bb.utils.prunedir(tmp_layer_dir)

def get_ssh_cmd_prefix(self, user, host, port, priv_key):
- cmd_prefix = 'ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no '\
- '-p %s -o IdentityFile=%s %s@%s ' \
- % (port, priv_key, user, host)
+ cmd_prefix = (
+ f"ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no -p {port} "
+ f"-o IdentityFile={priv_key} {user}@{host}"
+ )

return cmd_prefix

-
def exec_cmd(self, cmd, cmd_prefix):
- proc = subprocess.run('exec ' + str(cmd_prefix) + ' "' + str(cmd) + '"', shell=True,
- stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+ proc = subprocess.run(
+ f"exec {str(cmd_prefix)} '{str(cmd)}'",
+ shell=True,
+ stdin=subprocess.PIPE,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ )

return proc.returncode, proc.stdout, proc.stderr

-
def remote_send_file(self, src, dest, mode):
priv_key = self.prepare_priv_key()
- cmd_prefix = self.get_ssh_cmd_prefix(self.ssh_user, self.ssh_host, self.ssh_port, priv_key)
-
- proc = subprocess.run('cat %s | %s install -m %s /dev/stdin %s' %
- (src, cmd_prefix, mode, dest), shell=True,
- stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+ cmd_prefix = self.get_ssh_cmd_prefix(
+ self.ssh_user, self.ssh_host, self.ssh_port, priv_key
+ )
+
+ proc = subprocess.run(
+ f"cat {src} | {cmd_prefix} install -m {mode} /dev/stdin {dest}",
+ shell=True,
+ stdin=subprocess.PIPE,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ )

return proc.returncode, proc.stdout, proc.stderr

def run_script(self, script, cmd_prefix):
- script_dir = self.params.get('test_script_dir',
- default=os.path.abspath(os.path.dirname(__file__))) + '/scripts/'
+ file_dirname = os.path.abspath(os.path.dirname(__file__))
+ script_dir = self.params.get('test_script_dir', default=file_dirname)
+ script_dir = script_dir + '/scripts/'
script_path = script_dir + script.split()[0]
script_args = ' '.join(script.split()[1:])

if not os.path.exists(script_path):
- self.log.error('Script not found: ' + script_path)
- return (2, '', 'Script not found: ' + script_path)
+ self.log.error(f"Script not found: {script_path}")
+ return (2, '', f"Script not found: {script_path}")

- rc, stdout, stderr = self.remote_send_file(script_path, "./ci.sh", "755")
+ rc, stdout, stderr = self.remote_send_file(
+ script_path, './ci.sh', '755'
+ )

if rc != 0:
- self.log.error('Failed to deploy the script on target')
+ self.log.error("Failed to deploy the script on target")
return (rc, stdout, stderr)

time.sleep(1)

- proc = subprocess.run('%s ./ci.sh %s' % (cmd_prefix, script_args), shell=True,
- stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+ proc = subprocess.run(
+ f"{cmd_prefix} ./ci.sh {script_args}",
+ shell=True,
+ stdin=subprocess.PIPE,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ )

return (proc.returncode, proc.stdout, proc.stderr)

def wait_connection(self, cmd_prefix, timeout):
- self.log.info('Waiting for SSH server ready...')
+ self.log.info("Waiting for SSH server ready...")

rc = None
- stdout = ""
- stderr = ""
+ stdout = ''
+ stderr = ''

goodcnt = 0
# Use 3 good SSH ping attempts to consider SSH connection is stable
@@ -348,33 +408,34 @@ BBPATH .= ":${LAYERDIR}"\
goodcnt = 0

time_left = timeout - time.time()
- self.log.info('SSH ping result: %d, left: %.fs' % (rc, time_left))
+ self.log.info("SSH ping result: %d, left: %.fs" % (rc, time_left))

return rc, stdout, stderr

-
def prepare_priv_key(self):
- # copy private key to build directory (that is writable)
+ # Copy private key to build directory (that is writable)
priv_key = '%s/ci_priv_key' % self.build_dir
if not os.path.exists(priv_key):
- shutil.copy(os.path.dirname(__file__) + '/keys/ssh/id_rsa', priv_key)
+            key = os.path.join(os.path.dirname(__file__), 'keys/ssh/id_rsa')
+ shutil.copy(key, priv_key)
os.chmod(priv_key, 0o400)

return priv_key

-
def remote_run(self, cmd=None, script=None, timeout=0):
if cmd:
- self.log.info('Remote command is `%s`' % (cmd))
+ self.log.info(f"Remote command is `{cmd}`")
if script:
- self.log.info('Remote script is `%s`' % (script))
+ self.log.info(f"Remote script is `{script}`")

priv_key = self.prepare_priv_key()
- cmd_prefix = self.get_ssh_cmd_prefix(self.ssh_user, self.ssh_host, self.ssh_port, priv_key)
+ cmd_prefix = self.get_ssh_cmd_prefix(
+ self.ssh_user, self.ssh_host, self.ssh_port, priv_key
+ )

rc = None
- stdout = ""
- stderr = ""
+ stdout = ''
+ stderr = ''

if timeout != 0:
rc, stdout, stderr = self.wait_connection(cmd_prefix, timeout)
@@ -382,20 +443,20 @@ BBPATH .= ":${LAYERDIR}"\
if rc == 0 or timeout == 0:
if cmd is not None:
rc, stdout, stderr = self.exec_cmd(cmd, cmd_prefix)
- self.log.info('`' + cmd + '` returned ' + str(rc))
+ self.log.info(f"`{cmd}` returned {str(rc)}")
elif script is not None:
rc, stdout, stderr = self.run_script(script, cmd_prefix)
- self.log.info('`' + script + '` returned ' + str(rc))
+ self.log.info(f"`{script}` returned {str(rc)}")

return rc, stdout, stderr

-
- def ssh_start(self, user='ci', host='localhost', port=22,
- cmd=None, script=None):
- self.log.info('===================================================')
- self.log.info('Running Isar SSH test for `%s@%s:%s`' % (user, host, port))
- self.log.info('Isar build folder is: ' + self.build_dir)
- self.log.info('===================================================')
+ def ssh_start(
+ self, user='ci', host='localhost', port=22, cmd=None, script=None
+ ):
+ self.log.info("===================================================")
+ self.log.info(f"Running Isar SSH test for `{user}@{host}:{port}`")
+ self.log.info(f"Isar build folder is: {self.build_dir}")
+ self.log.info("===================================================")

self.check_init()

@@ -404,52 +465,63 @@ BBPATH .= ":${LAYERDIR}"\
self.ssh_port = port

priv_key = self.prepare_priv_key()
- cmd_prefix = self.get_ssh_cmd_prefix(self.ssh_user, self.ssh_host, self.ssh_port, priv_key)
- self.log.info('Connect command:\n' + cmd_prefix)
+ cmd_prefix = self.get_ssh_cmd_prefix(
+ self.ssh_user, self.ssh_host, self.ssh_port, priv_key
+ )
+ self.log.info(f"Connect command:\n{cmd_prefix}")

if cmd is not None or script is not None:
rc, stdout, stderr = self.remote_run(cmd, script)

if rc != 0:
- self.fail('Failed with rc=%s' % rc)
+ self.fail(f"Failed with rc={rc}")

return stdout, stderr

- self.fail('No command to run specified')
+ self.fail("No command to run specified")

-
- def vm_turn_on(self, arch='amd64', distro='buster', image='isar-image-base',
- enforce_pcbios=False):
+ def vm_turn_on(
+ self,
+ arch='amd64',
+ distro='buster',
+ image='isar-image-base',
+ enforce_pcbios=False,
+ ):
logdir = '%s/vm_start' % self.build_dir
if not os.path.exists(logdir):
os.mkdir(logdir)
- prefix = '%s-vm_start_%s_%s_' % (time.strftime('%Y%m%d-%H%M%S'),
- distro, arch)
- fd, boot_log = tempfile.mkstemp(suffix='_log.txt', prefix=prefix,
- dir=logdir, text=True)
+ prefix = f"{time.strftime('%Y%m%d-%H%M%S')}-vm_start_{distro}_{arch}_"
+ fd, boot_log = tempfile.mkstemp(
+ suffix='_log.txt', prefix=prefix, dir=logdir, text=True
+ )
os.chmod(boot_log, 0o644)
latest_link = '%s/vm_start_%s_%s_latest.txt' % (logdir, distro, arch)
if os.path.exists(latest_link):
os.unlink(latest_link)
os.symlink(os.path.basename(boot_log), latest_link)

- cmdline = start_vm.format_qemu_cmdline(arch, self.build_dir, distro, image,
- boot_log, None, enforce_pcbios)
+ cmdline = start_vm.format_qemu_cmdline(
+ arch, self.build_dir, distro, image, boot_log, None, enforce_pcbios
+ )
cmdline.insert(1, '-nographic')

need_sb_cleanup = start_vm.sb_copy_vars(cmdline)

- self.log.info('QEMU boot line:\n' + ' '.join(cmdline))
- self.log.info('QEMU boot log:\n' + boot_log)
-
- p1 = subprocess.Popen('exec ' + ' '.join(cmdline), shell=True,
- stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
- universal_newlines=True)
+ self.log.info(f"QEMU boot line:\n{' '.join(cmdline)}")
+        self.log.info(f"QEMU boot log:\n{boot_log}")
+
+ p1 = subprocess.Popen(
+ f"exec {' '.join(cmdline)}",
+ shell=True,
+ stdin=subprocess.PIPE,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ universal_newlines=True,
+ )
self.log.info("Started VM with pid %s" % (p1.pid))

return p1, cmdline, boot_log, need_sb_cleanup

-
def vm_wait_boot(self, p1, timeout):
login_prompt = b' login:'

@@ -471,7 +543,7 @@ BBPATH .= ":${LAYERDIR}"\
shift = max(0, len(data) + len(databuf) - databuf_size)
databuf = databuf[shift:] + bytearray(data)
if login_prompt in databuf:
- self.log.info('Got login prompt')
+ self.log.info("Got login prompt")
return 0
if fd == p1.stderr.fileno():
app_log.error(p1.stderr.readline().rstrip())
@@ -479,35 +551,31 @@ BBPATH .= ":${LAYERDIR}"\
self.log.error("Didn't get login prompt")
return 1

-
def vm_parse_output(self, boot_log, multiconfig, skip_modulecheck):
# the printk of recipes-kernel/example-module
module_output = b'Just an example'
resize_output = None
- image_fstypes, \
- wks_file, \
- bbdistro = CIUtils.getVars('IMAGE_FSTYPES',
- 'WKS_FILE',
- 'DISTRO',
- target=multiconfig)
+ image_fstypes, wks_file, bbdistro = CIUtils.getVars(
+ 'IMAGE_FSTYPES', 'WKS_FILE', 'DISTRO', target=multiconfig
+ )

# only the first type will be tested in start_vm
if image_fstypes.split()[0] == 'wic':
if wks_file:
# ubuntu is less verbose so we do not see the message
# /etc/sysctl.d/10-console-messages.conf
- if bbdistro and "ubuntu" not in bbdistro:
- if "sdimage-efi-sd" in wks_file:
+ if bbdistro and 'ubuntu' not in bbdistro:
+ if 'sdimage-efi-sd' in wks_file:
# output we see when expand-on-first-boot runs on ext4
resize_output = b'resized filesystem to'
- if "sdimage-efi-btrfs" in wks_file:
+ if 'sdimage-efi-btrfs' in wks_file:
resize_output = b': resize device '
rc = 0
if os.path.exists(boot_log) and os.path.getsize(boot_log) > 0:
- with open(boot_log, "rb") as f1:
+ with open(boot_log, 'rb') as f1:
data = f1.read()
- if (module_output in data or skip_modulecheck):
- if resize_output and not resize_output in data:
+ if module_output in data or skip_modulecheck:
+ if resize_output and resize_output not in data:
rc = 1
self.log.error("No resize output while expected")
else:
@@ -515,13 +583,11 @@ BBPATH .= ":${LAYERDIR}"\
self.log.error("No example module output while expected")
return rc

-
def vm_dump_dict(self, vm):
- f = open(self.vm_dict_file, "wb")
+ f = open(self.vm_dict_file, 'wb')
pickle.dump(self.vm_dict, f)
f.close()

-
def vm_turn_off(self, vm):
pid = self.vm_dict[vm][0]
os.kill(pid, signal.SIGKILL)
@@ -529,24 +595,30 @@ BBPATH .= ":${LAYERDIR}"\
if self.vm_dict[vm][3]:
start_vm.sb_cleanup()

- del(self.vm_dict[vm])
+ del self.vm_dict[vm]
self.vm_dump_dict(vm)

self.log.info("Stopped VM with pid %s" % (pid))

-
- def vm_start(self, arch='amd64', distro='buster',
- enforce_pcbios=False, skip_modulecheck=False,
- image='isar-image-base', cmd=None, script=None,
- keep=False):
+ def vm_start(
+ self,
+ arch='amd64',
+ distro='buster',
+ enforce_pcbios=False,
+ skip_modulecheck=False,
+ image='isar-image-base',
+ cmd=None,
+ script=None,
+ keep=False,
+ ):
time_to_wait = self.params.get('time_to_wait', default=DEF_VM_TO_SEC)

- self.log.info('===================================================')
- self.log.info('Running Isar VM boot test for (' + distro + '-' + arch + ')')
- self.log.info('Remote command is ' + str(cmd))
- self.log.info('Remote script is ' + str(script))
- self.log.info('Isar build folder is: ' + self.build_dir)
- self.log.info('===================================================')
+ self.log.info("===================================================")
+ self.log.info(f"Running Isar VM boot test for ({distro}-{arch})")
+ self.log.info(f"Remote command is {str(cmd)}")
+ self.log.info(f"Remote script is {str(script)}")
+ self.log.info(f"Isar build folder is: {self.build_dir}")
+ self.log.info("===================================================")

self.check_init()

@@ -556,41 +628,50 @@ BBPATH .= ":${LAYERDIR}"\

p1 = None
pid = None
- cmdline = ""
- boot_log = ""
+ cmdline = ''
+ boot_log = ''

run_qemu = True

- stdout = ""
- stderr = ""
+ stdout = ''
+ stderr = ''

if vm in self.vm_dict:
pid, cmdline, boot_log, need_sb_cleanup = self.vm_dict[vm]

# Check that corresponding process exists
- proc = subprocess.run("ps -o cmd= %d" % (pid), shell=True, text=True,
- stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
+ proc = subprocess.run(
+ f"ps -o cmd= {pid}",
+ shell=True,
+ text=True,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.STDOUT,
+ )
if cmdline[0] in proc.stdout:
- self.log.info("Found '%s' process with pid '%d', use it" % (cmdline[0], pid))
+ self.log.info(
+ f"Found '{cmdline[0]}' process with pid '{pid}', use it"
+ )
run_qemu = False

if run_qemu:
- self.log.info("No qemu-system process for `%s` found, run new VM" % (vm))
+ self.log.info(
+ f"No qemu-system process for `{vm}` found, run new VM"
+ )

- p1, cmdline, boot_log, \
- need_sb_cleanup = self.vm_turn_on(arch, distro, image,
- enforce_pcbios)
+ p1, cmdline, boot_log, need_sb_cleanup = self.vm_turn_on(
+ arch, distro, image, enforce_pcbios
+ )
self.vm_dict[vm] = p1.pid, cmdline, boot_log, need_sb_cleanup
self.vm_dump_dict(vm)

rc = self.vm_wait_boot(p1, timeout)
if rc != 0:
self.vm_turn_off(vm)
- self.fail('Failed to boot qemu machine')
+ self.fail("Failed to boot qemu machine")

if cmd is not None or script is not None:
- self.ssh_user='ci'
- self.ssh_host='localhost'
+ self.ssh_user = 'ci'
+ self.ssh_host = 'localhost'
self.ssh_port = 22
for arg in cmdline:
match = re.match(r".*hostfwd=tcp::(\d*).*", arg)
@@ -599,21 +680,23 @@ BBPATH .= ":${LAYERDIR}"\
break

priv_key = self.prepare_priv_key()
- cmd_prefix = self.get_ssh_cmd_prefix(self.ssh_user, self.ssh_host, self.ssh_port, priv_key)
- self.log.info('Connect command:\n' + cmd_prefix)
+ cmd_prefix = self.get_ssh_cmd_prefix(
+ self.ssh_user, self.ssh_host, self.ssh_port, priv_key
+ )
+ self.log.info(f"Connect command:\n{cmd_prefix}")

rc, stdout, stderr = self.remote_run(cmd, script, timeout)
if rc != 0:
if not keep:
self.vm_turn_off(vm)
- self.fail('Failed to run test over ssh')
+ self.fail("Failed to run test over ssh")
else:
multiconfig = 'mc:qemu' + arch + '-' + distro + ':' + image
rc = self.vm_parse_output(boot_log, multiconfig, skip_modulecheck)
if rc != 0:
if not keep:
self.vm_turn_off(vm)
- self.fail('Failed to parse output')
+ self.fail("Failed to parse output")

if not keep:
self.vm_turn_off(vm)
diff --git a/testsuite/citest.py b/testsuite/citest.py
index 8dd907d0..4e1634b7 100755
--- a/testsuite/citest.py
+++ b/testsuite/citest.py
@@ -1,8 +1,7 @@
#!/usr/bin/env python3

-import os
-
from avocado import skipUnless
+from avocado.core import exceptions
from avocado.utils import path
from cibase import CIBaseTest
from utils import CIUtils
@@ -26,22 +25,23 @@ class DevTest(CIBaseTest):

:avocado: tags=dev,fast,full
"""
+
def test_dev(self):
targets = [
'mc:qemuamd64-bullseye:isar-image-ci',
'mc:qemuarm-bullseye:isar-image-base',
'mc:qemuarm-bullseye:isar-image-base:do_populate_sdk',
'mc:qemuarm64-bullseye:isar-image-base',
- ]
+ ]

self.init()
- self.perform_build_test(targets, image_install="example-raw")
+ self.perform_build_test(targets, image_install='example-raw')

def test_dev_apps(self):
targets = [
'mc:qemuamd64-bullseye:isar-image-ci',
'mc:qemuarm64-bullseye:isar-image-base',
- ]
+ ]

self.init()
self.perform_build_test(targets)
@@ -73,6 +73,7 @@ class DevTest(CIBaseTest):
self.init()
self.vm_start('arm', 'bullseye', skip_modulecheck=True)

+
class ReproTest(CIBaseTest):

"""
@@ -80,12 +81,13 @@ class ReproTest(CIBaseTest):

:avocado: tags=repro,full
"""
+
def test_repro_signed(self):
targets = [
'mc:rpi-arm-v7-bullseye:isar-image-base',
'mc:rpi-arm64-v8-bullseye:isar-image-base',
'mc:qemuarm64-bullseye:isar-image-base',
- ]
+ ]

self.init()
try:
@@ -97,7 +99,7 @@ class ReproTest(CIBaseTest):
targets = [
'mc:qemuamd64-bullseye:isar-image-base',
'mc:qemuarm-bullseye:isar-image-base',
- ]
+ ]

self.init()
try:
@@ -105,6 +107,7 @@ class ReproTest(CIBaseTest):
finally:
self.move_in_build_dir('tmp', 'tmp_repro_unsigned')

+
class CcacheTest(CIBaseTest):

"""
@@ -112,11 +115,13 @@ class CcacheTest(CIBaseTest):

:avocado: tags=ccache,full
"""
+
def test_ccache_rebuild(self):
targets = ['mc:qemuamd64-bullseye:hello-isar']
self.init()
self.perform_ccache_test(targets)

+
class CrossTest(CIBaseTest):

"""
@@ -124,6 +129,7 @@ class CrossTest(CIBaseTest):

:avocado: tags=cross,fast,full
"""
+
def test_cross(self):
targets = [
'mc:qemuarm-buster:isar-image-ci',
@@ -135,7 +141,7 @@ class CrossTest(CIBaseTest):
'mc:qemuarm64-focal:isar-image-base',
'mc:nanopi-neo-efi-bookworm:isar-image-base',
'mc:phyboard-mira-bookworm:isar-image-base',
- ]
+ ]

self.init()
self.perform_build_test(targets, debsrc_cache=True)
@@ -143,14 +149,15 @@ class CrossTest(CIBaseTest):
def test_cross_rpi(self):
targets = [
'mc:rpi-arm-v7-bullseye:isar-image-base',
- ]
+ ]

self.init()
try:
self.perform_build_test(targets, debsrc_cache=True)
- except:
+ except exceptions.TestFail:
self.cancel('KFAIL')

+
class WicTest(CIBaseTest):

"""
@@ -158,21 +165,31 @@ class WicTest(CIBaseTest):

:avocado: tags=wic,full
"""
+
def test_wic_nodeploy_partitions(self):
targets = ['mc:qemuarm64-bookworm:isar-image-ci']

self.init()
self.move_in_build_dir('tmp', 'tmp_before_wic')
- self.perform_wic_partition_test(targets,
- wic_deploy_parts=False, debsrc_cache=True, compat_arch=False)
+ self.perform_wic_partition_test(
+ targets,
+ wic_deploy_parts=False,
+ debsrc_cache=True,
+ compat_arch=False,
+ )

def test_wic_deploy_partitions(self):
targets = ['mc:qemuarm64-bookworm:isar-image-ci']

self.init()
# reuse artifacts
- self.perform_wic_partition_test(targets,
- wic_deploy_parts=True, debsrc_cache=True, compat_arch=False)
+ self.perform_wic_partition_test(
+ targets,
+ wic_deploy_parts=True,
+ debsrc_cache=True,
+ compat_arch=False,
+ )
+

class NoCrossTest(CIBaseTest):

@@ -181,6 +198,7 @@ class NoCrossTest(CIBaseTest):

:avocado: tags=nocross,full
"""
+
def test_nocross(self):
targets = [
'mc:qemuarm-buster:isar-image-ci',
@@ -209,7 +227,7 @@ class NoCrossTest(CIBaseTest):
'mc:hikey-bookworm:isar-image-base',
'mc:de0-nano-soc-bookworm:isar-image-base',
'mc:beagleplay-bookworm:isar-image-base',
- ]
+ ]

self.init()
# Cleanup after cross build
@@ -226,12 +244,12 @@ class NoCrossTest(CIBaseTest):
'mc:rpi-arm-v7-bookworm:isar-image-base',
'mc:rpi-arm-v7l-bookworm:isar-image-base',
'mc:rpi-arm64-v8-bookworm:isar-image-base',
- ]
+ ]

self.init()
try:
self.perform_build_test(targets, cross=False, debsrc_cache=True)
- except:
+ except exceptions.TestFail:
self.cancel('KFAIL')

def test_nocross_trixie(self):
@@ -239,12 +257,12 @@ class NoCrossTest(CIBaseTest):
'mc:qemuamd64-trixie:isar-image-base',
'mc:qemuarm64-trixie:isar-image-base',
'mc:qemuarm-trixie:isar-image-base',
- ]
+ ]

self.init()
try:
self.perform_build_test(targets, cross=False)
- except:
+ except exceptions.TestFail:
self.cancel('KFAIL')

def test_nocross_sid(self):
@@ -252,14 +270,15 @@ class NoCrossTest(CIBaseTest):
'mc:qemuriscv64-sid:isar-image-base',
'mc:sifive-fu540-sid:isar-image-base',
'mc:starfive-visionfive2-sid:isar-image-base',
- ]
+ ]

self.init()
try:
self.perform_build_test(targets, cross=False)
- except:
+ except exceptions.TestFail:
self.cancel('KFAIL')

+
class ContainerImageTest(CIBaseTest):

"""
@@ -267,17 +286,19 @@ class ContainerImageTest(CIBaseTest):

:avocado: tags=containerbuild,fast,full,container
"""
+
@skipUnless(UMOCI_AVAILABLE and SKOPEO_AVAILABLE, 'umoci/skopeo not found')
def test_container_image(self):
targets = [
'mc:container-amd64-buster:isar-image-base',
'mc:container-amd64-bullseye:isar-image-base',
'mc:container-amd64-bookworm:isar-image-base',
- ]
+ ]

self.init()
self.perform_build_test(targets, container=True)

+
class ContainerSdkTest(CIBaseTest):

"""
@@ -285,35 +306,44 @@ class ContainerSdkTest(CIBaseTest):

:avocado: tags=containersdk,fast,full,container
"""
+
@skipUnless(UMOCI_AVAILABLE and SKOPEO_AVAILABLE, 'umoci/skopeo not found')
def test_container_sdk(self):
targets = ['mc:container-amd64-bullseye:isar-image-base']

self.init()
- self.perform_build_test(targets, bitbake_cmd='do_populate_sdk', container=True)
+ self.perform_build_test(
+ targets, bitbake_cmd='do_populate_sdk', container=True
+ )
+

class SignatureTest(CIBaseTest):
+
"""
Test for signature cachability issues which prevent shared state reuse.

- SstateTest also checks for these, but this test is faster and will check more cases.
+ SstateTest also checks for these, but this test is faster and will check
+ more cases.

:avocado: tags=signatures,sstate
"""
+
def test_signature_lint(self):
- verbose = bool(int(self.params.get("verbose", default=0)))
+ verbose = bool(int(self.params.get('verbose', default=0)))
targets = [
'mc:qemuamd64-bullseye:isar-image-ci',
'mc:qemuarm-bullseye:isar-image-base',
'mc:qemuarm-bullseye:isar-image-base:do_populate_sdk',
'mc:qemuarm64-bullseye:isar-image-base',
- 'mc:qemuamd64-focal:isar-image-base'
- ]
+ 'mc:qemuamd64-focal:isar-image-base',
+ ]

self.init()
self.perform_signature_lint(targets, verbose=verbose)

+
class SstateTest(CIBaseTest):
+
"""
Test builds with artifacts taken from sstate cache

@@ -332,6 +362,7 @@ class SstateTest(CIBaseTest):
self.init('build-sstate')
self.perform_sstate_test(image_target, package_target)

+
class SingleTest(CIBaseTest):

"""
@@ -339,6 +370,7 @@ class SingleTest(CIBaseTest):

:avocado: tags=single
"""
+
def test_single_build(self):
self.init()
machine = self.params.get('machine', default='qemuamd64')
@@ -354,6 +386,7 @@ class SingleTest(CIBaseTest):

self.vm_start(machine.removeprefix('qemu'), distro)

+
class SourceTest(CIBaseTest):

"""
@@ -361,15 +394,17 @@ class SourceTest(CIBaseTest):

:avocado: tags=source
"""
+
def test_source(self):
targets = [
'mc:qemuamd64-bookworm:libhello',
'mc:qemuarm64-bookworm:libhello',
- ]
+ ]

self.init()
self.perform_source_test(targets)

+
class VmBootTestFast(CIBaseTest):

"""
@@ -380,47 +415,72 @@ class VmBootTestFast(CIBaseTest):

def test_arm_bullseye(self):
self.init()
- self.vm_start('arm','bullseye', image='isar-image-ci', keep=True)
+ self.vm_start('arm', 'bullseye', image='isar-image-ci', keep=True)

def test_arm_bullseye_example_module(self):
self.init()
- self.vm_start('arm','bullseye', image='isar-image-ci',
- cmd='lsmod | grep example_module', keep=True)
+ self.vm_start(
+ 'arm',
+ 'bullseye',
+ image='isar-image-ci',
+ cmd='lsmod | grep example_module',
+ keep=True,
+ )

def test_arm_bullseye_getty_target(self):
self.init()
- self.vm_start('arm','bullseye', image='isar-image-ci',
- script='test_systemd_unit.sh getty.target 10')
-
+ self.vm_start(
+ 'arm',
+ 'bullseye',
+ image='isar-image-ci',
+ script='test_systemd_unit.sh getty.target 10',
+ )

def test_arm_buster(self):
self.init()
- self.vm_start('arm','buster', image='isar-image-ci', keep=True)
+ self.vm_start('arm', 'buster', image='isar-image-ci', keep=True)

def test_arm_buster_getty_target(self):
self.init()
- self.vm_start('arm','buster', image='isar-image-ci',
- cmd='systemctl is-active getty.target', keep=True)
+ self.vm_start(
+ 'arm',
+ 'buster',
+ image='isar-image-ci',
+ cmd='systemctl is-active getty.target',
+ keep=True,
+ )

def test_arm_buster_example_module(self):
self.init()
- self.vm_start('arm','buster', image='isar-image-ci',
- script='test_kernel_module.sh example_module')
-
+ self.vm_start(
+ 'arm',
+ 'buster',
+ image='isar-image-ci',
+ script='test_kernel_module.sh example_module',
+ )

def test_arm_bookworm(self):
self.init()
- self.vm_start('arm','bookworm', image='isar-image-ci', keep=True)
+ self.vm_start('arm', 'bookworm', image='isar-image-ci', keep=True)

def test_arm_bookworm_example_module(self):
self.init()
- self.vm_start('arm','bookworm', image='isar-image-ci',
- cmd='lsmod | grep example_module', keep=True)
+ self.vm_start(
+ 'arm',
+ 'bookworm',
+ image='isar-image-ci',
+ cmd='lsmod | grep example_module',
+ keep=True,
+ )

def test_arm_bookworm_getty_target(self):
self.init()
- self.vm_start('arm','bookworm', image='isar-image-ci',
- script='test_systemd_unit.sh getty.target 10')
+ self.vm_start(
+ 'arm',
+ 'bookworm',
+ image='isar-image-ci',
+ script='test_systemd_unit.sh getty.target 10',
+ )


class VmBootTestFull(CIBaseTest):
@@ -433,92 +493,119 @@ class VmBootTestFull(CIBaseTest):

def test_arm_bullseye(self):
self.init()
- self.vm_start('arm','bullseye')
-
+ self.vm_start('arm', 'bullseye')

def test_arm_buster(self):
self.init()
- self.vm_start('arm','buster', image='isar-image-ci', keep=True)
+ self.vm_start('arm', 'buster', image='isar-image-ci', keep=True)

def test_arm_buster_example_module(self):
self.init()
- self.vm_start('arm','buster', image='isar-image-ci',
- cmd='lsmod | grep example_module', keep=True)
+ self.vm_start(
+ 'arm',
+ 'buster',
+ image='isar-image-ci',
+ cmd='lsmod | grep example_module',
+ keep=True,
+ )

def test_arm_buster_getty_target(self):
self.init()
- self.vm_start('arm','buster', image='isar-image-ci',
- script='test_systemd_unit.sh getty.target 10')
-
+ self.vm_start(
+ 'arm',
+ 'buster',
+ image='isar-image-ci',
+ script='test_systemd_unit.sh getty.target 10',
+ )

def test_arm64_bullseye(self):
self.init()
- self.vm_start('arm64','bullseye', image='isar-image-ci', keep=True)
+ self.vm_start('arm64', 'bullseye', image='isar-image-ci', keep=True)

def test_arm64_bullseye_getty_target(self):
self.init()
- self.vm_start('arm64','bullseye', image='isar-image-ci',
- cmd='systemctl is-active getty.target', keep=True)
+ self.vm_start(
+ 'arm64',
+ 'bullseye',
+ image='isar-image-ci',
+ cmd='systemctl is-active getty.target',
+ keep=True,
+ )

def test_arm64_bullseye_example_module(self):
self.init()
- self.vm_start('arm64','bullseye', image='isar-image-ci',
- script='test_kernel_module.sh example_module')
-
+ self.vm_start(
+ 'arm64',
+ 'bullseye',
+ image='isar-image-ci',
+ script='test_kernel_module.sh example_module',
+ )

def test_i386_buster(self):
self.init()
- self.vm_start('i386','buster')
-
+ self.vm_start('i386', 'buster')

def test_amd64_buster(self):
self.init()
# test efi boot
- self.vm_start('amd64','buster', image='isar-image-ci')
+ self.vm_start('amd64', 'buster', image='isar-image-ci')
# test pcbios boot
self.vm_start('amd64', 'buster', True, image='isar-image-ci')

-
def test_amd64_focal(self):
self.init()
- self.vm_start('amd64','focal', image='isar-image-ci', keep=True)
+ self.vm_start('amd64', 'focal', image='isar-image-ci', keep=True)

def test_amd64_focal_example_module(self):
self.init()
- self.vm_start('amd64','focal', image='isar-image-ci',
- cmd='lsmod | grep example_module', keep=True)
+ self.vm_start(
+ 'amd64',
+ 'focal',
+ image='isar-image-ci',
+ cmd='lsmod | grep example_module',
+ keep=True,
+ )

def test_amd64_focal_getty_target(self):
self.init()
- self.vm_start('amd64','focal', image='isar-image-ci',
- script='test_systemd_unit.sh getty.target 10')
-
+ self.vm_start(
+ 'amd64',
+ 'focal',
+ image='isar-image-ci',
+ script='test_systemd_unit.sh getty.target 10',
+ )

def test_amd64_bookworm(self):
self.init()
self.vm_start('amd64', 'bookworm', image='isar-image-ci')

-
def test_arm_bookworm(self):
self.init()
- self.vm_start('arm','bookworm', image='isar-image-ci')
-
+ self.vm_start('arm', 'bookworm', image='isar-image-ci')

def test_i386_bookworm(self):
self.init()
- self.vm_start('i386','bookworm')
-
+ self.vm_start('i386', 'bookworm')

def test_mipsel_bookworm(self):
self.init()
- self.vm_start('mipsel','bookworm', image='isar-image-ci', keep=True)
+ self.vm_start('mipsel', 'bookworm', image='isar-image-ci', keep=True)

def test_mipsel_bookworm_getty_target(self):
self.init()
- self.vm_start('mipsel','bookworm', image='isar-image-ci',
- cmd='systemctl is-active getty.target', keep=True)
+ self.vm_start(
+ 'mipsel',
+ 'bookworm',
+ image='isar-image-ci',
+ cmd='systemctl is-active getty.target',
+ keep=True,
+ )

def test_mipsel_bookworm_example_module(self):
self.init()
- self.vm_start('mipsel','bookworm', image='isar-image-ci',
- script='test_kernel_module.sh example_module')
+ self.vm_start(
+ 'mipsel',
+ 'bookworm',
+ image='isar-image-ci',
+ script='test_kernel_module.sh example_module',
+ )
diff --git a/testsuite/repro-build-test.py b/testsuite/repro-build-test.py
index 04e4ddc7..d24e8f84 100755
--- a/testsuite/repro-build-test.py
+++ b/testsuite/repro-build-test.py
@@ -15,32 +15,33 @@ class ReproBuild(CIBuilder):

def test_repro_build(self):
target = self.params.get(
- "build_target", default="mc:qemuamd64-bullseye:isar-image-base"
+ 'build_target', default='mc:qemuamd64-bullseye:isar-image-base'
)
source_date_epoch = self.params.get(
- "source_date_epoch", default=self.git_last_commit_timestamp()
+ 'source_date_epoch', default=self.git_last_commit_timestamp()
)
self.init()
- self.build_repro_image(target, source_date_epoch, "image1.tar.gz")
- self.build_repro_image(target, source_date_epoch, "image2.tar.gz")
- self.compare_repro_image("image1.tar.gz", "image2.tar.gz")
+ self.build_repro_image(target, source_date_epoch, 'image1.tar.gz')
+ self.build_repro_image(target, source_date_epoch, 'image2.tar.gz')
+ self.compare_repro_image('image1.tar.gz', 'image2.tar.gz')

def git_last_commit_timestamp(self):
- return process.run("git log -1 --pretty=%ct").stdout.decode().strip()
+ return process.run('git log -1 --pretty=%ct').stdout.decode().strip()

def get_image_path(self, target_name):
- image_dir = "tmp/deploy/images"
- machine, image_name = CIUtils.getVars('MACHINE', 'IMAGE_FULLNAME',
- target=target_name)
+ image_dir = 'tmp/deploy/images'
+ machine, image_name = CIUtils.getVars(
+ 'MACHINE', 'IMAGE_FULLNAME', target=target_name
+ )
return f"{image_dir}/{machine}/{image_name}.tar.gz"

def build_repro_image(
- self, target, source_date_epoch=None, image_name="image.tar.gz"
+ self, target, source_date_epoch=None, image_name='image.tar.gz'
):
-
if not source_date_epoch:
self.error(
- "Reproducible build should configure with source_date_epoch time"
+                "Reproducible build should be configured with "
+                "source_date_epoch time"
)

# clean artifacts before build
@@ -48,7 +49,9 @@ class ReproBuild(CIBuilder):

# Build
self.log.info("Started Build " + image_name)
- self.configure(source_date_epoch=source_date_epoch, use_apt_snapshot=True)
+ self.configure(
+ source_date_epoch=source_date_epoch, use_apt_snapshot=True
+ )
self.bitbake(target)

# copy the artifacts image name with given name
@@ -57,18 +60,16 @@ class ReproBuild(CIBuilder):
self.move_in_build_dir(image_path, image_name)

def clean(self):
- self.delete_from_build_dir("tmp")
- self.delete_from_build_dir("sstate-cache")
+ self.delete_from_build_dir('tmp')
+ self.delete_from_build_dir('sstate-cache')

def compare_repro_image(self, image1, image2):
self.log.info(
"Compare artifacts image1: " + image1 + ", image2: " + image2
)
result = process.run(
- "diffoscope "
- "--text " + self.build_dir + "/diffoscope-output.txt"
- " " + self.build_dir + "/" + image1 +
- " " + self.build_dir + "/" + image2,
+ f"diffoscope --text {self.build_dir}/diffoscope-output.txt"
+ f" {self.build_dir}/{image1} {self.build_dir}/{image2}",
ignore_status=True,
)
if result.exit_status > 0:
diff --git a/testsuite/start_vm.py b/testsuite/start_vm.py
index d6e04049..2c986344 100755
--- a/testsuite/start_vm.py
+++ b/testsuite/start_vm.py
@@ -9,43 +9,48 @@ import socket
import subprocess
import sys
import shutil
-import time

from utils import CIUtils

OVMF_VARS_PATH = '/usr/share/OVMF/OVMF_VARS_4M.ms.fd'

-def format_qemu_cmdline(arch, build, distro, image, out, pid, enforce_pcbios=False):
- multiconfig = f'mc:qemu{arch}-{distro}:{image}'
-
- image_fstypes, \
- deploy_dir_image, \
- kernel_image, \
- initrd_image, \
- serial, \
- root_dev, \
- qemu_arch, \
- qemu_machine, \
- qemu_cpu, \
- qemu_disk_args = CIUtils.getVars('IMAGE_FSTYPES',
- 'DEPLOY_DIR_IMAGE',
- 'KERNEL_IMAGE',
- 'INITRD_DEPLOY_FILE',
- 'MACHINE_SERIAL',
- 'QEMU_ROOTFS_DEV',
- 'QEMU_ARCH',
- 'QEMU_MACHINE',
- 'QEMU_CPU',
- 'QEMU_DISK_ARGS',
- target=multiconfig)
+
+def format_qemu_cmdline(
+ arch, build, distro, image, out, pid, enforce_pcbios=False
+):
+ multiconfig = f"mc:qemu{arch}-{distro}:{image}"
+
+ (
+ image_fstypes,
+ deploy_dir_image,
+ kernel_image,
+ initrd_image,
+ serial,
+ root_dev,
+ qemu_arch,
+ qemu_machine,
+ qemu_cpu,
+ qemu_disk_args,
+ ) = CIUtils.getVars(
+ 'IMAGE_FSTYPES',
+ 'DEPLOY_DIR_IMAGE',
+ 'KERNEL_IMAGE',
+ 'INITRD_DEPLOY_FILE',
+ 'MACHINE_SERIAL',
+ 'QEMU_ROOTFS_DEV',
+ 'QEMU_ARCH',
+ 'QEMU_MACHINE',
+ 'QEMU_CPU',
+ 'QEMU_DISK_ARGS',
+ target=multiconfig,
+ )

extra_args = ''
- cpu = ['']

image_type = image_fstypes.split()[0]
base = 'ubuntu' if distro in ['jammy', 'focal'] else 'debian'

- rootfs_image = image + '-' + base + '-' + distro + '-qemu' + arch + '.' + image_type
+ rootfs_image = f"{image}-{base}-{distro}-qemu{arch}.{image_type}"

if image_type == 'ext4':
kernel_image = deploy_dir_image + '/' + kernel_image
@@ -55,33 +60,37 @@ def format_qemu_cmdline(arch, build, distro, image, out, pid, enforce_pcbios=Fal
else:
initrd_image = deploy_dir_image + '/' + initrd_image

- kargs = ['-append', '"console=' + serial + ' root=/dev/' + root_dev + ' rw"']
+ kargs = ['-append', f'"console={serial} root=/dev/{root_dev} rw"']

extra_args = ['-kernel', kernel_image, '-initrd', initrd_image]
extra_args.extend(kargs)
elif image_type == 'wic':
extra_args = ['-snapshot']
else:
- raise ValueError('Invalid image type: ' + str(image_type))
+ raise ValueError(f"Invalid image type: {str(image_type)}")

if out:
- extra_args.extend(['-chardev','stdio,id=ch0,logfile=' + out])
- extra_args.extend(['-serial','chardev:ch0'])
- extra_args.extend(['-monitor','none'])
+ extra_args.extend(['-chardev', 'stdio,id=ch0,logfile=' + out])
+ extra_args.extend(['-serial', 'chardev:ch0'])
+ extra_args.extend(['-monitor', 'none'])
if pid:
extra_args.extend(['-pidfile', pid])

- qemu_disk_args = qemu_disk_args.replace('##ROOTFS_IMAGE##', deploy_dir_image + '/' + rootfs_image).split()
+ rootfs_path = os.path.join(deploy_dir_image, rootfs_image)
+ qemu_disk_args = qemu_disk_args.replace('##ROOTFS_IMAGE##', rootfs_path)
+ qemu_disk_args = qemu_disk_args.split()
if enforce_pcbios and '-bios' in qemu_disk_args:
bios_idx = qemu_disk_args.index('-bios')
- del qemu_disk_args[bios_idx : bios_idx+2]
+ del qemu_disk_args[bios_idx : bios_idx + 2]

# Support SSH access from host
ssh_sock = socket.socket()
ssh_sock.bind(('', 0))
- ssh_port=ssh_sock.getsockname()[1]
+ ssh_port = ssh_sock.getsockname()[1]
extra_args.extend(['-device', 'e1000,netdev=net0'])
- extra_args.extend(['-netdev', 'user,id=net0,hostfwd=tcp::' + str(ssh_port) + '-:22'])
+ extra_args.extend(
+ ['-netdev', 'user,id=net0,hostfwd=tcp::' + str(ssh_port) + '-:22']
+ )

cmd = ['qemu-system-' + qemu_arch, '-m', '1024M']

@@ -105,8 +114,10 @@ def sb_copy_vars(cmdline):
if os.path.exists(ovmf_vars_filename):
break
if not os.path.exists(OVMF_VARS_PATH):
- print(f'{OVMF_VARS_PATH} required but not found!',
- file=sys.stderr)
+ print(
+ f"{OVMF_VARS_PATH} required but not found!",
+ file=sys.stderr,
+ )
break
shutil.copy(OVMF_VARS_PATH, ovmf_vars_filename)
return True
@@ -119,7 +130,9 @@ def sb_cleanup():


def start_qemu(arch, build, distro, image, out, pid, enforce_pcbios):
- cmdline = format_qemu_cmdline(arch, build, distro, image, out, pid, enforce_pcbios)
+ cmdline = format_qemu_cmdline(
+ arch, build, distro, image, out, pid, enforce_pcbios
+ )
cmdline.insert(1, '-nographic')

need_cleanup = sb_copy_vars(cmdline)
@@ -136,17 +149,60 @@ def start_qemu(arch, build, distro, image, out, pid, enforce_pcbios):
def parse_args():
parser = argparse.ArgumentParser()
arch_names = ['arm', 'arm64', 'amd64', 'amd64-sb', 'i386', 'mipsel']
- parser.add_argument('-a', '--arch', choices=arch_names,
- help='set isar machine architecture.', default='arm')
- parser.add_argument('-b', '--build', help='set path to build directory.', default=os.getcwd())
- parser.add_argument('-d', '--distro', choices=['buster', 'bullseye', 'bookworm', 'trixie', 'focal', 'jammy'], help='set isar Debian distribution.', default='bookworm')
- parser.add_argument('-i', '--image', help='set image name.', default='isar-image-base')
- parser.add_argument('-o', '--out', help='Route QEMU console output to specified file.')
- parser.add_argument('-p', '--pid', help='Store QEMU pid to specified file.')
- parser.add_argument('--pcbios', action="store_true", help='remove any bios options to enforce use of pc bios')
+ distro_names = [
+ 'buster',
+ 'bullseye',
+ 'bookworm',
+ 'trixie',
+ 'focal',
+ 'jammy',
+ ]
+ parser.add_argument(
+ '-a',
+ '--arch',
+ choices=arch_names,
+ help='set isar machine architecture.',
+ default='arm',
+ )
+ parser.add_argument(
+ '-b',
+ '--build',
+ help='set path to build directory.',
+ default=os.getcwd(),
+ )
+ parser.add_argument(
+ '-d',
+ '--distro',
+ choices=distro_names,
+ help='set isar Debian distribution.',
+ default='bookworm',
+ )
+ parser.add_argument(
+ '-i', '--image', help='set image name.', default='isar-image-base'
+ )
+ parser.add_argument(
+ '-o', '--out', help='Route QEMU console output to specified file.'
+ )
+ parser.add_argument(
+ '-p', '--pid', help='Store QEMU pid to specified file.'
+ )
+ parser.add_argument(
+ '--pcbios',
+ action='store_true',
+ help='remove any bios options to enforce use of pc bios',
+ )
return parser.parse_args()

-if __name__ == "__main__":
+
+if __name__ == '__main__':
args = parse_args()

- start_qemu(args.arch, args.build, args.distro, args.image, args.out, args.pid, args.pcbios)
+ start_qemu(
+ args.arch,
+ args.build,
+ args.distro,
+ args.image,
+ args.out,
+ args.pid,
+ args.pcbios,
+ )
diff --git a/testsuite/unittests/bitbake.py b/testsuite/unittests/bitbake.py
index 1e2f685a..66cd0b2c 100644
--- a/testsuite/unittests/bitbake.py
+++ b/testsuite/unittests/bitbake.py
@@ -3,35 +3,35 @@
#
# SPDX-License-Identifier: MIT

+import os
import sys
-import pathlib
from typing import Callable

-location = pathlib.Path(__file__).parent.resolve()
-sys.path.insert(0, "{}/../../bitbake/lib".format(location))
+location = os.path.dirname(__file__)
+sys.path.append(os.path.join(location, "../../bitbake/lib"))

from bb.parse import handle
from bb.data import init

-# Modules added for reimport from testfiles
-from bb.data_smart import DataSmart
-

def load_function(file_name: str, function_name: str) -> Callable:
"""Load a python function defined in a bitbake file.

Args:
- file_name (str): The path to the file e.g. `meta/classes/my_special.bbclass`.
- function_name (str): The name of the python function without braces e.g. `my_special_function`
+ file_name (str): The path to the file
+ e.g. `meta/classes/my_special.bbclass`.
+ function_name (str): The name of the python function without braces
+ e.g. `my_special_function`

Returns:
Callable: The loaded function.
"""
d = init()
- parse = handle("{}/../../{}".format(location, file_name), d)
+ parse = handle(f"{location}/../../{file_name}", d)
if function_name not in parse:
- raise KeyError("Function {} does not exist in {}".format(
- function_name, file_name))
+ raise KeyError(
+ f"Function {function_name} does not exist in {file_name}"
+ )
namespace = {}
exec(parse[function_name], namespace)
return namespace[function_name]
diff --git a/testsuite/unittests/rootfs.py b/testsuite/unittests/rootfs.py
index 6c511493..da97d0d3 100644
--- a/testsuite/unittests/rootfs.py
+++ b/testsuite/unittests/rootfs.py
@@ -12,7 +12,7 @@ temp_dirs = []


class TemporaryRootfs:
- """ A temporary rootfs folder that will be removed after the testrun. """
+ """A temporary rootfs folder that will be removed after the testrun."""

def __init__(self):
self._rootfs_path = tempfile.mkdtemp()
@@ -22,7 +22,7 @@ class TemporaryRootfs:
return self._rootfs_path

def create_file(self, path: str, content: str) -> None:
- """ Create a file with the given content.
+ """Create a file with the given content.

Args:
path (str): The path to the file e.g. `/etc/hostname`.
@@ -31,8 +31,9 @@ class TemporaryRootfs:
Returns:
None
"""
- pathlib.Path(self._rootfs_path +
- path).parent.mkdir(parents=True, exist_ok=True)
+ pathlib.Path(self._rootfs_path + path).parent.mkdir(
+ parents=True, exist_ok=True
+ )
with open(self._rootfs_path + path, 'w') as file:
file.write(content)

diff --git a/testsuite/unittests/test_image_account_extension.py b/testsuite/unittests/test_image_account_extension.py
index 08021a4a..636c2a8b 100644
--- a/testsuite/unittests/test_image_account_extension.py
+++ b/testsuite/unittests/test_image_account_extension.py
@@ -3,158 +3,202 @@
#
# SPDX-License-Identifier: MIT

-from bitbake import load_function, DataSmart
+from bitbake import load_function
from rootfs import TemporaryRootfs

+import os
+import sys
import unittest
from unittest.mock import patch
from typing import Tuple

+sys.path.append(os.path.join(os.path.dirname(__file__), '../../bitbake/lib'))

-file_name = "meta/classes/image-account-extension.bbclass"
-image_create_users = load_function(file_name, "image_create_users")
-image_create_groups = load_function(file_name, "image_create_groups")
+from bb import process
+from bb.data_smart import DataSmart

+file_name = 'meta/classes/image-account-extension.bbclass'
+image_create_users = load_function(file_name, 'image_create_users')
+image_create_groups = load_function(file_name, 'image_create_groups')

-class TestImageAccountExtensionCommon(unittest.TestCase):

+class TestImageAccountExtensionCommon(unittest.TestCase):
def setup(self) -> Tuple[DataSmart, TemporaryRootfs]:
rootfs = TemporaryRootfs()

d = DataSmart()
- d.setVar("ROOTFSDIR", rootfs.path())
+ d.setVar('ROOTFSDIR', rootfs.path())

return (d, rootfs)


-class TestImageAccountExtensionImageCreateUsers(TestImageAccountExtensionCommon):
-
+class TestImageAccountExtensionImageCreateUsers(
+ TestImageAccountExtensionCommon
+):
def setup(self, user_name: str) -> Tuple[DataSmart, TemporaryRootfs]:
d, rootfs = super().setup()
rootfs.create_file(
- "/etc/passwd", "test:x:1000:1000::/home/test:/bin/sh")
- d.setVar("USERS", user_name)
+ '/etc/passwd', 'test:x:1000:1000::/home/test:/bin/sh'
+ )
+ d.setVar('USERS', user_name)
return (d, rootfs)

def test_new_user(self):
- test_user = "new"
+ test_user = 'new'
d, rootfs = self.setup(test_user)
- # make the list a bit clumsy to simulate appends and removals to that var
- d.setVarFlag('USER_{}'.format(test_user), 'groups', 'dialout render foo ')
+ # Make the list a bit clumsy to simulate appends and removals to that
+ # var
+ d.setVarFlag(f"USER_{test_user}", 'groups', 'dialout render foo ')

- with patch.object(bb.process, "run") as run_mock:
+ with patch.object(process, 'run') as run_mock:
image_create_users(d)

run_mock.assert_called_once_with(
- ["sudo", "-E", "chroot", rootfs.path(), "/usr/sbin/useradd",
- '--groups', 'dialout,render,foo', test_user])
+ [
+ 'sudo',
+ '-E',
+ 'chroot',
+ rootfs.path(),
+ '/usr/sbin/useradd',
+ '--groups',
+ 'dialout,render,foo',
+ test_user,
+ ]
+ )

def test_existing_user_no_change(self):
- test_user = "test"
+ test_user = 'test'
d, _ = self.setup(test_user)

- with patch.object(bb.process, "run") as run_mock:
+ with patch.object(process, 'run') as run_mock:
image_create_users(d)

run_mock.assert_not_called()

def test_existing_user_home_change(self):
- test_user = "test"
+ test_user = 'test'
d, _ = self.setup(test_user)
- d.setVarFlag("USER_{}".format(test_user), "home", "/home/new_home")
+ d.setVarFlag(f"USER_{test_user}", 'home', '/home/new_home')

- with patch.object(bb.process, "run") as run_mock:
+ with patch.object(process, 'run') as run_mock:
image_create_users(d)

assert run_mock.call_count == 1
- assert run_mock.call_args[0][0][-5:] == ["/usr/sbin/usermod",
- '--home', '/home/new_home', '--move-home', 'test']
+ assert run_mock.call_args[0][0][-5:] == [
+ '/usr/sbin/usermod',
+ '--home',
+ '/home/new_home',
+ '--move-home',
+ 'test',
+ ]

def test_deterministic_password(self):
- test_user = "new"
- cleartext_password = "test"
+ test_user = 'new'
+ cleartext_password = 'test'
d, _ = self.setup(test_user)

- d.setVarFlag("USER_{}".format(test_user),
- "flags", "clear-text-password")
- d.setVarFlag("USER_{}".format(test_user),
- "password", cleartext_password)
+ d.setVarFlag(f"USER_{test_user}", 'flags', 'clear-text-password')
+ d.setVarFlag(f"USER_{test_user}", 'password', cleartext_password)

- source_date_epoch = "1672427776"
- d.setVar("SOURCE_DATE_EPOCH", source_date_epoch)
+ source_date_epoch = '1672427776'
+ d.setVar('SOURCE_DATE_EPOCH', source_date_epoch)

- # openssl passwd -6 -salt $(echo "1672427776" | sha256sum -z | cut -c 1-15) test
- encrypted_password = "$6$eb2e2a12cccc88a$IuhgisFe5AKM5.VREKg8wIAcPSkaJDWBM1cMUsEjNZh2Wa6BT2f5OFhqGTGpL4lFzHGN8oiwvAh0jFO1GhO3S."
+ # openssl passwd -6 -salt $(echo "1672427776" | sha256sum -z | cut \
+ # -c 1-15) test
+ encrypted_password = (
+ '$6$eb2e2a12cccc88a$IuhgisFe5AKM5.VREKg8wIAcPSkaJDWBM1cMUsEjNZh2W'
+ 'a6BT2f5OFhqGTGpL4lFzHGN8oiwvAh0jFO1GhO3S.'
+ )

- with patch.object(bb.process, "run") as run_mock:
+ with patch.object(process, 'run') as run_mock:
image_create_users(d)

- assert run_mock.call_count == 2
- assert run_mock.call_args[0][1] == "{}:{}".format(
- test_user, encrypted_password).encode()
+ password_data = f"{test_user}:{encrypted_password}".encode()

+ assert run_mock.call_count == 2
+ assert run_mock.call_args[0][1] == password_data

-class TestImageAccountExtensionImageCreateGroups(TestImageAccountExtensionCommon):

+class TestImageAccountExtensionImageCreateGroups(
+ TestImageAccountExtensionCommon
+):
def setup(self, group_name: str) -> Tuple[DataSmart, TemporaryRootfs]:
d, rootfs = super().setup()
- rootfs.create_file("/etc/group", "test:x:1000:test")
- d.setVar("GROUPS", group_name)
+ rootfs.create_file('/etc/group', 'test:x:1000:test')
+ d.setVar('GROUPS', group_name)
return (d, rootfs)

def test_new_group(self):
- test_group = "new"
+ test_group = 'new'
d, rootfs = self.setup(test_group)

- with patch.object(bb.process, "run") as run_mock:
+ with patch.object(process, 'run') as run_mock:
image_create_groups(d)

run_mock.assert_called_once_with(
- ["sudo", "-E", "chroot", rootfs.path(), "/usr/sbin/groupadd", test_group])
+ [
+ 'sudo',
+ '-E',
+ 'chroot',
+ rootfs.path(),
+ '/usr/sbin/groupadd',
+ test_group,
+ ]
+ )

def test_existing_group_no_change(self):
- test_group = "test"
+ test_group = 'test'
d, _ = self.setup(test_group)

- with patch.object(bb.process, "run") as run_mock:
+ with patch.object(process, 'run') as run_mock:
image_create_groups(d)

run_mock.assert_not_called()

def test_existing_group_id_change(self):
- test_group = "test"
+ test_group = 'test'
d, rootfs = self.setup(test_group)
- d.setVarFlag("GROUP_{}".format(test_group), "gid", "1005")
+ d.setVarFlag(f"GROUP_{test_group}", 'gid', '1005')

- with patch.object(bb.process, "run") as run_mock:
+ with patch.object(process, 'run') as run_mock:
image_create_groups(d)

run_mock.assert_called_once_with(
- ["sudo", "-E", "chroot", rootfs.path(), "/usr/sbin/groupmod", "--gid", "1005", test_group])
+ [
+ 'sudo',
+ '-E',
+ 'chroot',
+ rootfs.path(),
+ '/usr/sbin/groupmod',
+ '--gid',
+ '1005',
+ test_group,
+ ]
+ )

def test_new_group_system_flag(self):
- test_group = "new"
+ test_group = 'new'
d, _ = self.setup(test_group)
- d.setVarFlag("GROUP_{}".format(test_group), "flags", "system")
+ d.setVarFlag(f"GROUP_{test_group}", 'flags', 'system')

- with patch.object(bb.process, "run") as run_mock:
+ with patch.object(process, 'run') as run_mock:
image_create_groups(d)

assert run_mock.call_count == 1
- assert "--system" in run_mock.call_args[0][0]
+ assert '--system' in run_mock.call_args[0][0]

def test_existing_group_no_system_flag(self):
- test_group = "test"
+ test_group = 'test'
d, _ = self.setup(test_group)
- d.setVarFlag("GROUP_{}".format(test_group), "flags", "system")
- d.setVarFlag("GROUP_{}".format(test_group), "gid", "1005")
+ d.setVarFlag(f"GROUP_{test_group}", 'flags', 'system')
+ d.setVarFlag(f"GROUP_{test_group}", 'gid', '1005')

- with patch.object(bb.process, "run") as run_mock:
+ with patch.object(process, 'run') as run_mock:
image_create_groups(d)

assert run_mock.call_count == 1
- assert "--system" not in run_mock.call_args[0][0]
+ assert '--system' not in run_mock.call_args[0][0]


-if __name__ == "__main__":
+if __name__ == '__main__':
unittest.main()
--
2.34.1

Schmidt, Adriaan

Jul 15, 2024, 2:22:35 AM
to Anton Mikanovich, isar-...@googlegroups.com
Anton Mikanovich, Sent: Freitag, 12. Juli 2024 14:13:
> Add some recomendations for testcase creators.

Hi Anton,

Thanks for this! I like clear rules for consistent code.

One additional thing I see in the refactoring that could be stated explicitly:

## String formatting

Use format strings (f"The value is {x}") instead of printf-style formatting
("The value is %d" % x) or string concatenations ("The value is " + str(x)).
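If it helps, here is a tiny self-contained illustration of the three styles (plain Python, nothing Isar-specific — the variable and strings are made up):

```python
x = 42

# Preferred: f-string interpolation.
new_style = f"The value is {x}"

# Discouraged: printf-style formatting.
printf_style = "The value is %d" % x

# Discouraged: string concatenation.
concatenated = "The value is " + str(x)

# All three build the same string; the f-string is the most readable.
print(new_style)
```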

Plus a few minor things below.

>
> Signed-off-by: Anton Mikanovich <ami...@ilbers.de>
> ---
> testsuite/README.md | 45
> +++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 45 insertions(+)
>
> diff --git a/testsuite/README.md b/testsuite/README.md
> index cfcfb1bf..7cbacf99 100644
> --- a/testsuite/README.md
> +++ b/testsuite/README.md
> @@ -137,6 +137,51 @@ avocado so that isar testsuite files could be found:
> export PYTHONPATH=${PYTHONPATH}:${TESTSUITEDIR}
> ```
>
> +# Code style for testcases
> +
> +Recommended Python code style for the testcases is based on
> +[PEP8 Style Guide for Python Code](https://peps.python.org/pep-0008) with
> +several additions described below.
> +
> +## Using quotes
> +
> +Despite [PEP8](https://peps.python.org/pep-0008) doesn't have any string quote
> +usage recommendations, Isar preffered style is the following:

*preferred

> +
> + - Single quotes for data and small symbol-like strings.
> + - Double quotes for human-readable strings and string interpolation.
> +
> +## Line wrapping
> +
> +Argument lists don't fit in 79 characters line limit should be placed on the
> +new line, keeping them on the same line if possible. Otherwise every single
> +argument should be placed in separate line.

Argument lists *that don't fit in *the 79 characters...
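As a self-contained illustration of that wrapping rule (the function and argument names here are made up, not from the testsuite):

```python
def do_build(arch, distro, image, out_dir, enforce_pcbios=False):
    # Hypothetical helper, used only to demonstrate argument wrapping.
    return ' '.join([arch, distro, image, out_dir])


# The argument list fits on a single continuation line: keep it together.
result = do_build(
    'arm64', 'bookworm', 'isar-image-base', '/build'
)

# It would not fit even then: place every argument on a separate line.
result = do_build(
    'arm64',
    'bookworm',
    'isar-image-base',
    '/build',
    enforce_pcbios=True,
)
print(result)
```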

> +
> +## Function definition spacing
> +
> +Any function and class definition should done in the following way:

should *be done

Adriaan
> --

Anton Mikanovich

Jul 19, 2024, 2:29:51 AM
to isar-...@googlegroups.com, Anton Mikanovich
Current testcases are written in various code styles, ignoring any linter
checks. Fix this and also declare some rules and useful tools for future
test writers.

Changes since v1:
- Fix join in startvm test cases.
- Fix wording and typos.

Anton Mikanovich (2):
testsuite: Provide code style documentation
testsuite: Fix code style

testsuite/README.md | 50 +++
testsuite/cibase.py | 237 ++++++----
testsuite/cibuilder.py | 425 +++++++++++-------
testsuite/citest.py | 245 ++++++----
testsuite/repro-build-test.py | 39 +-
testsuite/start_vm.py | 152 +++++--
testsuite/unittests/bitbake.py | 22 +-
testsuite/unittests/rootfs.py | 9 +-
.../unittests/test_image_account_extension.py | 162 ++++---
9 files changed, 866 insertions(+), 475 deletions(-)

--
2.34.1

Anton Mikanovich

Jul 19, 2024, 2:29:53 AM
to isar-...@googlegroups.com, Anton Mikanovich
Add some recommendations for testcase creators.

Signed-off-by: Anton Mikanovich <ami...@ilbers.de>
---
testsuite/README.md | 50 +++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 50 insertions(+)

diff --git a/testsuite/README.md b/testsuite/README.md
index cfcfb1bf..3b2be5af 100644
--- a/testsuite/README.md
+++ b/testsuite/README.md
@@ -137,6 +137,56 @@ avocado so that isar testsuite files could be found:
export PYTHONPATH=${PYTHONPATH}:${TESTSUITEDIR}
```

+# Code style for testcases
+
+Recommended Python code style for the testcases is based on
+[PEP8 Style Guide for Python Code](https://peps.python.org/pep-0008) with
+several additions described below.
+
+## Using quotes
+
+Although [PEP8](https://peps.python.org/pep-0008) doesn't give any string
+quote usage recommendations, the preferred Isar style is the following:
+
+ - Single quotes for data and small symbol-like strings.
+ - Double quotes for human-readable strings and string interpolation.
+
+## Line wrapping
+
+Argument lists that don't fit within the 79-character line limit should be
+moved to a new line, keeping all arguments on that line if possible.
+Otherwise, every argument should be placed on a separate line.
+
+## String formatting
+
+Use format strings (f"The value is {x}") instead of printf-style formatting
+("The value is %d" % x) or string concatenations ("The value is " + str(x)).
+
+## Function definition spacing
+
+Any function and class definition should be done in the following way:

Anton Mikanovich

Jul 19, 2024, 2:29:54 AM
to isar-...@googlegroups.com, Anton Mikanovich, Ilia Skochilov
Bring the Python code into compliance with PEP8 requirements.
Also change string quotes style for consistency throughout the code.
Rebuild line wrapping and function/classes declaration spacing to be
compliant with the current rules described in testsuite/README.md.

Used black v23.1.0 and flake8 v5.0.4.

Signed-off-by: Anton Mikanovich <ami...@ilbers.de>
Signed-off-by: Ilia Skochilov <iskoc...@ilbers.de>
---
testsuite/cibase.py | 237 ++++++----
testsuite/cibuilder.py | 425 +++++++++++-------
testsuite/citest.py | 245 ++++++----
testsuite/repro-build-test.py | 39 +-
testsuite/start_vm.py | 152 +++++--
testsuite/unittests/bitbake.py | 22 +-
testsuite/unittests/rootfs.py | 9 +-
.../unittests/test_image_account_extension.py | 162 ++++---
index a20e88f9..35af3d9c 100755
+ key = os.path.join(os.path.dirname(__file__), 'keys/ssh/id_rsa')