[PATCH 00/29] DELTA Update


Stefano Babic

Oct 11, 2021, 7:22:09 AM
to swup...@googlegroups.com, Stefano Babic
This is a major feature and adds support for binary delta
updates. This work is sponsored by Siemens CT (Munich) - many thanks
to Siemens for making this development possible!

The patchset is split into several parts:

- Doc for delta update: this briefly describes the design;
more details will follow later.
- Add generic functions used later in the delta handler,
but that can be reused by other code.
- Rework channel and channel_curl to make them SWUpdate-unaware,
that is, the downloaded data can be forwarded to another process
and not just to the installer. Add some more features, like
downloading via HTTP Range Requests.
- Delta Handler, split into the downloader process and the delta
handler itself.
- The last patch is just an optimization, useful in case a filesystem
is put into a much larger partition. This allows indexing just
a part of the whole partition and reduces memory consumption.

Test:
-----

I tested on eMMC / SD cards using the raw and rawfile handlers as chained handlers.
By design, though, any handler can be used to install the resulting artifact.
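As a side note, the HTTP Range Requests mentioned above boil down to sending one `Range: bytes=first-last` header per requested span. A minimal, illustrative sketch in C (the helper name and its use are my own, not code from this series):

```c
#include <stdio.h>

/* Hypothetical helper: format an HTTP Range header covering one chunk,
 * given its byte offset and size inside the remote file.
 * HTTP ranges are inclusive, so the last byte is offset + size - 1. */
static int format_range(char *buf, size_t len,
                        unsigned long long offset, unsigned long long size)
{
	return snprintf(buf, len, "Range: bytes=%llu-%llu",
	                offset, offset + size - 1);
}
```

With libcurl, the same range can be set via CURLOPT_RANGE (which takes just "first-last", without the "bytes=" prefix) or sent as a custom header.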

Stefano Babic (29):
Doc for delta update
Handlers: sort list of handlers in menuconfig
Add utility function to detect filesystem
util: add function to convert string to lowercase
Hide copyfile() implementation to add more input
Introduce copybuffer to copy from memory
Import small multipart library
Fix warning in multipart code
channel_curl: statify entry points functions
channel_curl: fix wrong usage of pointer in assert
channel_curl: do not automatically add charset header
curl: change signature of write data callback
curl: add a noipc parameter
channel_curl: pass channel_data to headers callback
channel_curl: allow an external callback for headers
channel_curl: add optional pointer for callbacks
channel_curl: pass channel pointer to internal callbacks
channel_curl: add the possibility to request a range
channel_curl: do not check download size if range requested
channel_curl: store HTTP return code before callbacks
Be sure to initialize channel_data_t by users
delta: add process to download chunks
tools: fix warning due to new SOURCE type
example: userid and groupid in case of downloader
Start chunks downloader if delta is enabled
delta: add handler for delta update
doc: add documentation for delta handler
doc: drop delta update from roadmap
delta: add the option to limit size for source

Kconfig | 4 +
Makefile.deps | 4 +
Makefile.flags | 5 +
core/cpio_utils.c | 85 +-
core/swupdate.c | 12 +
core/util.c | 12 +
corelib/Makefile | 1 +
corelib/channel_curl.c | 178 +++--
corelib/downloader.c | 5 +-
corelib/multipart_parser.c | 306 ++++++++
doc/source/delta-update.rst | 223 ++++++
doc/source/handlers.rst | 86 +++
doc/source/index.rst | 1 +
doc/source/roadmap.rst | 19 -
examples/configuration/swupdate.cfg | 6 +
fs/diskformat.c | 28 +-
handlers/Config.in | 269 +++----
handlers/Makefile | 1 +
handlers/delta_downloader.c | 217 ++++++
handlers/delta_handler.c | 1106 +++++++++++++++++++++++++++
handlers/delta_handler.h | 37 +
include/channel_curl.h | 8 +-
include/delta_process.h | 10 +
include/fs_interface.h | 1 +
include/multipart_parser.h | 49 ++
include/swupdate_status.h | 3 +-
include/util.h | 3 +
suricatta/server_general.c | 5 +-
suricatta/server_hawkbit.c | 28 +-
tools/swupdate-progress.c | 3 +
tools/swupdate-sysrestart.c | 2 +
31 files changed, 2474 insertions(+), 243 deletions(-)
create mode 100644 corelib/multipart_parser.c
create mode 100644 doc/source/delta-update.rst
create mode 100644 handlers/delta_downloader.c
create mode 100644 handlers/delta_handler.c
create mode 100644 handlers/delta_handler.h
create mode 100644 include/delta_process.h
create mode 100644 include/multipart_parser.h

--
2.25.1

Stefano Babic

Oct 11, 2021, 7:22:12 AM
to swup...@googlegroups.com, Stefano Babic
Signed-off-by: Stefano Babic <sba...@denx.de>
---
doc/source/delta-update.rst | 223 ++++++++++++++++++++++++++++++++++++
doc/source/index.rst | 1 +
2 files changed, 224 insertions(+)
create mode 100644 doc/source/delta-update.rst

diff --git a/doc/source/delta-update.rst b/doc/source/delta-update.rst
new file mode 100644
index 0000000..5307dcd
--- /dev/null
+++ b/doc/source/delta-update.rst
@@ -0,0 +1,223 @@
+..
+ SPDX-FileCopyrightText: 2021 Stefano Babic <sba...@denx.de>
+ SPDX-License-Identifier: GPL-2.0-only
+
+==========================
+Delta Update with SWUpdate
+==========================
+
+Overview
+--------
+
+The size of update packages is steadily increasing. While the whole software was once just
+a bunch of megabytes, it is now not unusual that the OS and applications on devices running
+Linux reach sizes of several gigabytes.
+
+Several mechanisms can be used to reduce the size of downloaded data. The resulting images
+can be compressed. However, this is not enough when bandwidth is scarce or expensive.
+It is very common that a device is upgraded to a version that is similar to the
+running one but adds new features and fixes some bugs. Especially in the case of pure fixes,
+the new version is pretty much equal to the original one. This calls for methods to
+download just the differences from the current software instead of a full image.
+When an update is performed from a known base, we talk about *delta updates*. In the following
+chapter some well known algorithms are considered and evaluated for whether they can be
+integrated into SWUpdate. The following criteria are important to find a suitable algorithm:
+
+ - the license must be compatible with GPLv2
+ - good performance for smaller downloads, but not necessarily the best one
+ - SWUpdate retains the concept of delivering one package (SWU), the same
+   independently of the source where the SWU is stored (USB, OTA, etc.)
+ - it must comply with SWUpdate's security requirements (signed images, privilege separation, etc.)
+
+Specific ad-hoc delta update mechanisms can be realized when the nature of the updated files is
+the same. It is always possible with SWUpdate to install single files, but coherency and compatibility
+with the running software must be guaranteed by the integrator / manufacturer. This is not covered here:
+the scope is to get an efficient and content-unaware *delta* mechanism that can upgrade two arbitrary
+images in differential mode, without any previous knowledge about their content.
+
+FOSS projects for delta encoding
+--------------------------------
+
+There are several algorithms for *delta encoding*, that is, for finding the difference between files,
+generally in binary format. Only algorithms available under a compatible FOSS license (GPLv2)
+are considered for SWUpdate.
+One of the goals in SWUpdate is that it should work independently of the format of the
+artifacts. Very specialized algorithms and libraries like Google's Courgette, used in Chromium, give
+much better results, but they work on programs (ELF files) and take advantage of the structure of compiled
+code. In the case of OTA updates, not only software but any kind of artifact can be delivered, and this
+includes configuration data, databases, videos, docs, etc.
+
+librsync_
+.........
+
+librsync_ is an independent implementation of rsync and does not use the rsync protocol. It is well
+suited to generating offline differential updates and it is already integrated into SWUpdate.
+However, librsync takes the whole artifact and generates a differential image that is applied
+to the whole image. It gives the best results in terms of reduced size when differences are
+very small, but the differential output tends to be very large as soon as the differences
+become meaningful. Differential images created for SWUpdate show that, as the difference grows,
+the resulting delta image can even become larger than the original one.
+
+SWUpdate supports `librsync` as delta encoder via the rdiff handler.
+
+xdelta_
+.......
+
+xdelta_ uses the VCDIFF algorithm to compute differences between binaries. It is often used
+to deliver smaller images for CDs and DVDs. The resulting images are created from an installed
+image that must be loaded entirely into main memory. For this reason, it does not scale well
+when images become larger, and it is unsuitable for embedded systems and SWUpdate.
+
+casync_
+.......
+
+casync_ is, according to its author, a tool for distributing images. It has several interesting
+aspects that can be helpful with OTA updates.
+Files are grouped together in chunks, and casync creates a "chunk storage" where each chunk
+is stored in a separate file. The chunk storage is part of the delivery, and it must be stored on
+a server. casync checks if a chunk is already present on the target, and if not,
+downloads it. While this seems to be what is required, there are some drawbacks if casync
+were to be integrated into SWUpdate:
+
+ - because of the nature of casync, each chunk is a separate file. This causes a huge
+   number of new connections, because each file is a separate GET on the server.
+   The overhead of re-establishing connections is high on small devices,
+   where SSL connections also increase CPU load. Hundreds or thousands of
+   small files must be downloaded just to recreate the original metadata
+   file.
+ - casync has no authentication and verification, and the index files (.caidx or .caibx)
+   are not signed. This is known, but casync's goals and scope lie outside
+   embedded devices.
+ - it is difficult to deliver a whole chunk storage. The common usage for OTA is to deliver
+   artifacts, and they should be just a few. Thousands of files to be delivered so that
+   casync can compute the new image are not practical for companies: they have a new "firmware"
+   or "software" and they need an easy way to deliver this file (the output of their build system)
+   to the devices. In some cases, they are not even responsible for that, and the firmware is handed to
+   another authority that groups all packages from vendors and implements a sort of OTA service.
+ - casync is quite a huge project - even if it was stated that it would be converted into
+   a library, this never happened. This makes it difficult to interface with SWUpdate,
+   and using it as an external process is a no-go in SWUpdate for security reasons:
+   it breaks privilege separation, and adds a lot of code that is difficult
+   to maintain.
+
+For all these reasons, even if the idea of a chunk storage is good for an OTA updater, casync
+is not a candidate for SWUpdate. An out-of-the-box solution cannot be found, and it is necessary
+to implement a custom solution that better suits SWUpdate.
+
+Zchunk_ - compression format
+............................
+
+zchunk_ seems to combine the advantages of a chunk storage without having to deliver one on a server.
+zchunk is a FOSS project released under BSD by its author_. The goal of this project is something else:
+zchunk creates a new compression format that adds the ability to download the differences between
+a new and an old file. This matches very well with SWUpdate. A zchunk file contains a header that
+holds metadata for all chunks, and from the header it is known which chunks must be
+downloaded and which ones can be reused. zchunk has utilities to download the missing chunks itself,
+but it can also be used just to find which parts of an artifact must be downloaded,
+and SWUpdate can proceed in its own way to do this.
+
+One big advantage of this approach is that metadata and compressed chunks are still bound into a single file,
+which can be built by the build system and delivered as usual. The updater first needs the metadata, that is,
+the header of the zchunk file, and processes it to detect which chunks need to be downloaded. Each chunk has
+its own hash, and the chunks already available on the device are verified against the hashes to be sure
+they are not corrupted.
+
+zchunk supports multiple sha algorithms - to be compatible with SWUpdate, zchunk should be instructed
+to generate sha256 hashes.
+
+Design Delta Update in SWUpdate
+-------------------------------
+
+For all the reasons stated before, `zchunk` is chosen as the format to deliver delta updates in SWUpdate. An artifact
+can be generated in ZCK format, and then the ZCK header (as described in format_) can be extracted and
+added to the SWU. In this way, the ZCK header is signed (and if requested compressed and/or encrypted) as
+part of the SWU, and chunks loaded from an external URL can be verified as well, because the corresponding
+hashes are already verified as part of the header.
+
+
+.. _casync: http://0pointer.net/blog/casync-a-tool-for-distributing-file-system-images.html
+.. _xdelta: http://xdelta.org/
+.. _zchunk: https://github.com/zchunk/zchunk
+.. _author: https://www.jdieter.net/posts/2018/05/31/what-is-zchunk/
+.. _librsync: https://librsync.github.io/
+.. _format: https://github.com/zchunk/zchunk/blob/main/zchunk_format.txt
+
+Changes in ZCHUNK project
+-------------------------
+
+zchunk has an API that hides most of its internals, and provides a set of tools for creating
+and downloading files in ZCK format. Nevertheless, zchunk relies on hashes for the compressed
+(ZST) chunks, and support for uncompressed data was missing. To combine SWUpdate and zchunk,
+it is required that a comparison can be done on uncompressed data, because a device should not
+be obliged to compress big amounts of data just to perform a comparison.
+A short list of changes in the zchunk project is:
+
+ - create hashes for uncompressed data and extend the format to support them. The header
+   must be extended to include both size and hash of the uncompressed data.
+ - make the library embedded-friendly, that is, report errors in case of failure
+   instead of exiting, and find a suitable way to integrate the log output
+   for the caller.
+ - allow use of sha256 (already foreseen in zchunk), as this is the only hash type
+   used in SWUpdate.
+ - add an API to allow an external caller to decide itself whether a chunk must be
+   downloaded or reused.
+
+Some of these changes were merged into the zchunk project, some are still open. The zchunk version
+working with SWUpdate is stored on a separate branch_.
+
+.. _branch: https://github.com/sbabic/zchunk/tree/devel
+
+Most of the missing features listed in zchunk's TODO have no relevance here:
+SWUpdate already verifies the downloaded data, and there is no need to add signatures to zchunk itself.
+
+Integration in sw-description
+-----------------------------
+
+The most important part of a zchunk file is the header: it contains all metadata and hashes needed to perform
+comparisons. The `zck` tool splits a file into chunks and creates the header. The size of the header is known,
+and the header itself can be extracted from the ZCK file.
+The header will be part of sw-description: it is the header for the file that must be installed. Because the
+header is very small compared to the whole file (roughly 1 %), it can be delivered inside the SWU.
+
+
+Integration in SWUpdate: the delta handler
+------------------------------------------
+
+The delta handler is responsible for computing the differences and downloading the missing parts. It is not
+responsible for installing the artifact, because this would break SWUpdate's modular design and would
+constrain updates to just one artifact type, for example installing via `raw` or `rawfile`. But what if the
+artifact should be installed by a different handler, for example UBI, or a custom handler?
+The best approach is that the delta handler does not install, but creates a stream itself so that this stream
+can be passed to another (chained) handler that is responsible for installing. All current SWUpdate handlers
+can be reused: a handler does not know that the artifact comes in separate chunks, and it sees just a stream
+as before.
+In short, the delta handler has the following duties:
+
+ - parse and understand the ZCK header
+ - create a ZCK header from the file / partition used as source for the comparison
+ - detect which chunks are missing and which ones can be copied
+ - build a mixer that copies and downloads all chunks and generates a stream
+   for the following handler
+ - detect any error coming from the chained handler
+
+Because the delta handler needs to download more data, it must open a connection to the storage
+where the original ZCK file is stored. This can lead to security issues, because handlers run with high
+privileges, as they write to the hardware. In fact, this breaks the `privilege separation` that is
+part of SWUpdate's design.
+To avoid this, the delta handler does not download by itself. A separate process, which can run with a
+different userid and groupid, is responsible for this. The handler sends a request to this process with a list
+of ranges that should be downloaded (see HTTP Range requests). The delta handler does not know how the chunks
+are downloaded, and even if using HTTP Range Requests is the most frequent choice, the design is open to
+further implementations.
+The downloader process prepares the connection and asks the server for ranges. If the server is not
+able to provide ranges, the update aborts. It is in fact a requirement for delta updates that the
+server storing the ZCK file is able to answer HTTP Range Requests; there is no fallback to download
+the full file.
+A simple IPC is implemented between the delta handler and the downloader process. This allows them to
+exchange messages, and the downloader can inform the handler if any error occurs so that the update can be
+stopped. The downloader sends a termination message when all chunks have been downloaded.
+Because the number of missing chunks can be very high, the delta handler must send and organize
+several requests to the downloader, tracking each of them.
+The downloader is conceived as a dumb servant: it starts the connection, retrieves HTTP headers and data,
+and sends them back to the caller. The delta handler is then responsible for parsing the answer and
+retrieving the missing chunks from the multipart HTTP body.
diff --git a/doc/source/index.rst b/doc/source/index.rst
index 0b10576..cd57d95 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -46,6 +46,7 @@ SWUpdate Documentation
bindings.rst
building-with-yocto.rst
swupdate-best-practise.rst
+ delta-update.rst

############################################
Utilities and tools
--
2.25.1
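To make the header-comparison step described in the documentation concrete, the reuse-or-download decision can be sketched as below. This is an illustrative model only: `struct chunk` and `plan_chunks` are made-up names, and the real handler works on zchunk's own data structures rather than flat arrays.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical per-chunk descriptor: sha256 of the uncompressed data. */
struct chunk {
	unsigned char hash[32];
};

/*
 * For each chunk listed in the (already verified) ZCK header, mark
 * whether it must be downloaded or can be reused from the local source.
 * Returns the number of chunks that must be downloaded.
 */
static size_t plan_chunks(const struct chunk *header, size_t n_header,
                          const struct chunk *local, size_t n_local,
                          bool *download /* out: n_header entries */)
{
	size_t missing = 0;

	for (size_t i = 0; i < n_header; i++) {
		download[i] = true;
		for (size_t j = 0; j < n_local; j++) {
			if (!memcmp(header[i].hash, local[j].hash,
			            sizeof(header[i].hash))) {
				download[i] = false; /* identical chunk found locally */
				break;
			}
		}
		if (download[i])
			missing++;
	}
	return missing;
}
```

The real implementation additionally has to preserve chunk order and feed both reused and downloaded chunks into one output stream for the chained handler.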

Stefano Babic

Oct 11, 2021, 7:22:13 AM
to swup...@googlegroups.com, Stefano Babic
This can be used to detect the filesystem on a device instead of specifying
the expected filesystem.

Signed-off-by: Stefano Babic <sba...@denx.de>
---
fs/diskformat.c | 28 ++++++++++++++++++++--------
include/fs_interface.h | 1 +
2 files changed, 21 insertions(+), 8 deletions(-)

diff --git a/fs/diskformat.c b/fs/diskformat.c
index 0841f0b..8d58fc3 100644
--- a/fs/diskformat.c
+++ b/fs/diskformat.c
@@ -39,36 +39,48 @@ static struct supported_filesystems fs[] = {
* Checks if file system fstype already exists on device.
* return 0 if not exists, 1 if exists, negative values on failure
*/
-int diskformat_fs_exists(char *device, char *fstype)
+
+char *diskformat_fs_detect(char *device)
{
- char buf[10];
- const char *value = buf;
+ const char *value;
+ char *s = NULL;
size_t len;
blkid_probe pr;
- int ret = 0;

pr = blkid_new_probe_from_filename(device);

if (!pr) {
ERROR("%s: failed to create libblkid probe",
device);
- return -EFAULT;
+ return NULL;
}

while (blkid_do_probe(pr) == 0) {
if (blkid_probe_lookup_value(pr, "TYPE", &value, &len)) {
ERROR("blkid_probe_lookup_value failed");
- ret = -EFAULT;
break;
}

- if (!strncmp(value, fstype, sizeof(buf))) {
- ret = 1;
+ if (len > 0) {
+ s = strndup(value, len);
break;
}
}
blkid_free_probe(pr);

+ return s;
+}
+
+int diskformat_fs_exists(char *device, char *fstype)
+{
+ int ret = 0;
+ char *filesystem = diskformat_fs_detect(device);
+
+ if (filesystem) {
+ ret = !strcmp(fstype, filesystem);
+ }
+
+ free(filesystem);
return ret;
}

diff --git a/include/fs_interface.h b/include/fs_interface.h
index 25c22e5..581f02a 100644
--- a/include/fs_interface.h
+++ b/include/fs_interface.h
@@ -7,6 +7,7 @@
#ifndef _FS_INTERFACE_H
#define _FS_INTERFACE_H

+char *diskformat_fs_detect(char *device);
int diskformat_fs_exists(char *device, char *fstype);

int diskformat_mkfs(char *device, char *fstype);
--
2.25.1

Stefano Babic

Oct 11, 2021, 7:22:13 AM
to swup...@googlegroups.com, Stefano Babic
The number of supplied handlers has grown; sort the list to make them
easier to find in menuconfig.

Signed-off-by: Stefano Babic <sba...@denx.de>
---
handlers/Config.in | 256 ++++++++++++++++++++++-----------------------
1 file changed, 128 insertions(+), 128 deletions(-)

diff --git a/handlers/Config.in b/handlers/Config.in
index 14f8af9..ad5dfdd 100644
--- a/handlers/Config.in
+++ b/handlers/Config.in
@@ -9,67 +9,31 @@

menu "Image Handlers"

-config UBIVOL
- bool "ubivol"
+config ARCHIVE
+ bool "archive"
+ depends on HAVE_LIBARCHIVE
default n
- depends on HAVE_LIBUBI
- depends on MTD
help
- ubi is the default format for NAND device.
- Say Y if you have NAND or you use UBI on
- your system.
+ Handler using the libarchive to extract tarballs
+ into a filesystem.

-comment "ubivol support needs libubi"
- depends on !HAVE_LIBUBI
+comment "archive support needs libarchive"
+ depends on !HAVE_LIBARCHIVE

-config UBIATTACH
- bool "Automatically attach UBI devices"
+config LOCALE
+ bool "Locale support for filenames"
+ depends on ARCHIVE
default y
- depends on UBIVOL
- help
- If this option is enabled, swupdate will try to attach
- UBI devices to all MTD devices.
-
- Make sure UBIBLACKLIST or UBIWHITELIST is set correctly,
- since attaching a UBI device will write to it if it is
- found to be empty, and that may destroy already existing
- content on that device.
-
-config UBIBLACKLIST
- string "List of MTD devices to be excluded for UBI"
- depends on UBIATTACH
- help
- Define a list of MTD devices that are excluded
- by scan_mtd_device. The devices are still available
- as raw devices.
- The list can be set as a string with the mtd numbers.
- Examples: "0 1 2"
- This excludes mtd0-mtd1-mtd2 to be searched for UBI volumes
-
-config UBIWHITELIST
- string "List of MTD devices that must have UBI"
- depends on UBIATTACH
help
- Define a list of MTD devices that are planned to have
- always UBI. If first attach fails, the device is erased
- and tried again.
- The list can be set as a string with the mtd numbers.
- Examples: "0 1 2"
- This sets mtd0-mtd1-mtd2 to be used as UBI volumes.
- UBIBLACKLIST has priority on UBIWHITELIST.
+ Option to remove attempts to use locale in systems
+ without locale support in toolchain.

-config UBIVIDOFFSET
- int "VID Header Offset"
- depends on UBIATTACH
- default 0
+config BOOTLOADERHANDLER
+ bool "bootloader"
+ default n
help
- Force UBI to set a VID header offset to be 2048 bytes
- instead of the value reported by the kernel.
- In other words, you may ask UBI to avoid using sub-pages.
- This is not recommended since this will require
- more storage overhead, but may be useful
- if your NAND driver incorrectly reports that it can handle
- sub-page accesses when it should not.
+ Enable it to change bootloader environment
+ during the installation process.

config CFI
bool "cfi"
@@ -124,53 +88,6 @@ config DISKFORMAT_HANDLER

source fs/Config.in

-config UNIQUEUUID
- bool "uniqueuuid"
- depends on HAVE_LIBBLKID
- default n
- help
- This handler checks that no filesystem on the device has
- a UUID from a list (list is added as part of "properties"
- in sw-description) for this handler.
- This is useful for bootloader (like GRUB) that use UUID to
- select the partition to be started, and in case two or
- more filesystem have the same UUID, a wrong one is started.
- This handler is a partition handler and it is guaranteed that
- it runs before any image is installed on the device.
-
-comment "uniqueuuid support needs libblkid"
- depends on !HAVE_LIBBLKID
-
-config RAW
- bool "raw"
- default n
- help
- This is a simple handler that simply copies
- into the destination.
-
-config RDIFFHANDLER
- bool "rdiff"
- depends on HAVE_LIBRSYNC
- default n
- help
- Add support for applying librsync's rdiff patches,
- see http://librsync.sourcefrog.net/
-
-comment "rdiff support needs librsync"
- depends on !HAVE_LIBRSYNC
-
-config READBACKHANDLER
- bool "readback"
- depends on HASH_VERIFY
- default n
- help
- To verify that an image was written properly, this readback handler
- calculates the sha256 hash of a partition (or part of it) and compares
- it against a given hash value.
-
- This is a post-install handler running at the same time as
- post-install scripts.
-
config LUASCRIPTHANDLER
bool "Lua Script"
depends on LUA
@@ -179,14 +96,6 @@ config LUASCRIPTHANDLER
Handler to be called for pre- and post scripts
written in Lua.

-config SHELLSCRIPTHANDLER
- bool "shellscript"
- default n
- help
- Handler to be called for pre- and post scripts
- written as shell scripts. The default shell /bin/sh
- is called.
-
config HANDLER_IN_LUA
bool "Handlers in Lua"
depends on LUASCRIPTHANDLER
@@ -219,24 +128,35 @@ config EMBEDDED_LUA_HANDLER_SOURCE
Path to the Lua handler source code file to be
embedded into the SWUpdate binary.

-config ARCHIVE
- bool "archive"
- depends on HAVE_LIBARCHIVE
+config RAW
+ bool "raw"
default n
help
- Handler using the libarchive to extract tarballs
- into a filesystem.
+ This is a simple handler that simply copies
+ into the destination.

-comment "archive support needs libarchive"
- depends on !HAVE_LIBARCHIVE
+config RDIFFHANDLER
+ bool "rdiff"
+ depends on HAVE_LIBRSYNC
+ default n
+ help
+ Add support for applying librsync's rdiff patches,
+ see http://librsync.sourcefrog.net/

-config LOCALE
- bool "Locale support for filenames"
- depends on ARCHIVE
- default y
+comment "rdiff support needs librsync"
+ depends on !HAVE_LIBRSYNC
+
+config READBACKHANDLER
+ bool "readback"
+ depends on HASH_VERIFY
+ default n
help
- Option to remove attempts to use locale in systems
- without locale support in toolchain.
+ To verify that an image was written properly, this readback handler
+ calculates the sha256 hash of a partition (or part of it) and compares
+ it against a given hash value.
+
+ This is a post-install handler running at the same time as
+ post-install scripts.

config REMOTE_HANDLER
bool "Remote handler"
@@ -253,6 +173,14 @@ config REMOTE_HANDLER
comment "remote handler needs zeromq"
depends on !HAVE_LIBZEROMQ

+config SHELLSCRIPTHANDLER
+ bool "shellscript"
+ default n
+ help
+ Handler to be called for pre- and post scripts
+ written as shell scripts. The default shell /bin/sh
+ is called.
+
config SWUFORWARDER_HANDLER
bool "SWU forwarder"
depends on HAVE_LIBCURL
@@ -275,13 +203,6 @@ comment "swuforward handler needs json-c and libcurl"
comment "swuforward handler needs websockets and uriparser"
depends on !HAVE_LIBWEBSOCKETS || !HAVE_URIPARSER

-config BOOTLOADERHANDLER
- bool "bootloader"
- default n
- help
- Enable it to change bootloader environment
- during the installation process.
-
config SSBLSWITCH
bool "Second Stage Switcher"
depends on MTD
@@ -293,6 +214,68 @@ config SSBLSWITCH
way between two software set. It can be used to reliable update
a second stage bootloader.

+config UBIVOL
+ bool "ubivol"
+ default n
+ depends on HAVE_LIBUBI
+ depends on MTD
+ help
+ ubi is the default format for NAND device.
+ Say Y if you have NAND or you use UBI on
+ your system.
+
+comment "ubivol support needs libubi"
+ depends on !HAVE_LIBUBI
+
+config UBIATTACH
+ bool "Automatically attach UBI devices"
+ default y
+ depends on UBIVOL
+ help
+ If this option is enabled, swupdate will try to attach
+ UBI devices to all MTD devices.
+
+ Make sure UBIBLACKLIST or UBIWHITELIST is set correctly,
+ since attaching a UBI device will write to it if it is
+ found to be empty, and that may destroy already existing
+ content on that device.
+
+config UBIBLACKLIST
+ string "List of MTD devices to be excluded for UBI"
+ depends on UBIATTACH
+ help
+ Define a list of MTD devices that are excluded
+ by scan_mtd_device. The devices are still available
+ as raw devices.
+ The list can be set as a string with the mtd numbers.
+ Examples: "0 1 2"
+ This excludes mtd0-mtd1-mtd2 to be searched for UBI volumes
+
+config UBIWHITELIST
+ string "List of MTD devices that must have UBI"
+ depends on UBIATTACH
+ help
+ Define a list of MTD devices that are planned to have
+ always UBI. If first attach fails, the device is erased
+ and tried again.
+ The list can be set as a string with the mtd numbers.
+ Examples: "0 1 2"
+ This sets mtd0-mtd1-mtd2 to be used as UBI volumes.
+ UBIBLACKLIST has priority on UBIWHITELIST.
+
+config UBIVIDOFFSET
+ int "VID Header Offset"
+ depends on UBIATTACH
+ default 0
+ help
+ Force UBI to set a VID header offset to be 2048 bytes
+ instead of the value reported by the kernel.
+ In other words, you may ask UBI to avoid using sub-pages.
+ This is not recommended since this will require
+ more storage overhead, but may be useful
+ if your NAND driver incorrectly reports that it can handle
+ sub-page accesses when it should not.
+
config UCFWHANDLER
bool "microcontroller firmware update"
depends on HAVE_LIBGPIOD
@@ -316,4 +299,21 @@ config UCFW_OLD_LIBGPIOD
Rather there is no way to get this changes from the library
at build time.

+config UNIQUEUUID
+ bool "uniqueuuid"
+ depends on HAVE_LIBBLKID
+ default n
+ help
+ This handler checks that no filesystem on the device has
+ a UUID from a list (list is added as part of "properties"
+ in sw-description) for this handler.
+ This is useful for bootloader (like GRUB) that use UUID to
+ select the partition to be started, and in case two or
+ more filesystem have the same UUID, a wrong one is started.
+ This handler is a partition handler and it is guaranteed that
+ it runs before any image is installed on the device.
+
+comment "uniqueuuid support needs libblkid"
+ depends on !HAVE_LIBBLKID
+
endmenu
--
2.25.1

Stefano Babic

Oct 11, 2021, 7:22:15 AM
to swup...@googlegroups.com, Stefano Babic
There is no such function available in libc, so add one to the utilities.

Signed-off-by: Stefano Babic <sba...@denx.de>
---
core/util.c | 12 ++++++++++++
include/util.h | 1 +
2 files changed, 13 insertions(+)

diff --git a/core/util.c b/core/util.c
index 7e96652..2844e1d 100644
--- a/core/util.c
+++ b/core/util.c
@@ -269,6 +269,18 @@ char *substring(const char *src, int first, int len) {
return s;
}

+/*
+ * Convert all chars of a string to lower,
+ * there is no ready to use function
+ */
+
+char *string_tolower(char *s)
+{
+ char *p = s;
+ for ( ; *p; ++p) *p = tolower(*p);
+ return s;
+}
+
int openfileoutput(const char *filename)
{
int fdout;
diff --git a/include/util.h b/include/util.h
index 31f67b1..3d328ee 100644
--- a/include/util.h
+++ b/include/util.h
@@ -190,6 +190,7 @@ char **splitargs(char *args, int *argc);
char *mstrcat(const char **nodes, const char *delim);
char** string_split(const char* a_str, const char a_delim);
char *substring(const char *src, int first, int len);
+char *string_tolower(char *s);
size_t snescape(char *dst, size_t n, const char *src);
void freeargs (char **argv);
int get_hw_revision(struct hw_type *hw);
--
2.25.1
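A minimal usage sketch of the new helper. The body matches the patch, except for the `unsigned char` cast, which keeps `tolower()` well defined for bytes outside the ASCII range:

```c
#include <ctype.h>

/* Convert all chars of a string to lowercase, in place; returns s. */
static char *string_tolower(char *s)
{
	char *p = s;

	for (; *p; ++p)
		*p = (char)tolower((unsigned char)*p);
	return s;
}
```

This is handy, for example, for normalizing filesystem type names before comparing them.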

Stefano Babic

Oct 11, 2021, 7:22:16 AM
to swup...@googlegroups.com, Stefano Babic
Wrap the copyfile() implementation with a wrapper function, allowing the
internals to be changed without modifying the API.

Signed-off-by: Stefano Babic <sba...@denx.de>
---
core/cpio_utils.c | 20 +++++++++++++++++++-
1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/core/cpio_utils.c b/core/cpio_utils.c
index 2e4aca3..e06bf5f 100644
--- a/core/cpio_utils.c
+++ b/core/cpio_utils.c
@@ -380,7 +380,7 @@ static int zstd_step(void* state, void* buffer, size_t size)

#endif

-int copyfile(int fdin, void *out, unsigned int nbytes, unsigned long *offs, unsigned long long seek,
+static int __swupdate_copy(int fdin, void *out, unsigned int nbytes, unsigned long *offs, unsigned long long seek,
int skip_file, int __attribute__ ((__unused__)) compressed,
uint32_t *checksum, unsigned char *hash, bool encrypted, const char *imgivt, writeimage callback)
{
@@ -633,6 +633,24 @@ copyfile_exit:
return ret;
}

+int copyfile(int fdin, void *out, unsigned int nbytes, unsigned long *offs, unsigned long long seek,
+ int skip_file, int __attribute__ ((__unused__)) compressed,
+ uint32_t *checksum, unsigned char *hash, bool encrypted, const char *imgivt, writeimage callback)
+{
+ return __swupdate_copy(fdin,
+ out,
+ nbytes,
+ offs,
+ seek,
+ skip_file,
+ compressed,
+ checksum,
+ hash,
+ encrypted,
+ imgivt,
+ callback);
+}
+
int copyimage(void *out, struct img_type *img, writeimage callback)
{
return copyfile(img->fdin,
--
2.25.1
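The point of the wrapper is API stability: the internal function is free to grow new parameters while public callers keep their signature. A minimal sketch of the same pattern with hypothetical names (`copy_internal`, `copy_public`, and the `flags` parameter are invented for illustration):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Internal worker: a new parameter was added later; passing 0 keeps
 * the historical behavior. Only this file knows about it. */
static int copy_internal(const char *src, char *dst, size_t n, int flags)
{
	(void)flags; /* hypothetical future option, unused so far */
	memcpy(dst, src, n);
	dst[n] = '\0';
	return 0;
}

/* Public wrapper: the exported signature never changes,
 * so existing callers do not need to be touched. */
int copy_public(const char *src, char *dst, size_t n)
{
	return copy_internal(src, dst, n, 0);
}
```

This is exactly the move the follow-up patch exploits when it extends the internal copy routine with an in-memory input source.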

Stefano Babic

Oct 11, 2021, 7:22:18 AM
to swup...@googlegroups.com, Stefano Babic
This adds a central function to copy a SWUpdate artifact.
The functions are:
copyfile() : copy an artifact reading from a file descriptor
copyimage() : copy an artifact from an image definition
NEW copybuffer() : copy an artifact from memory

Signed-off-by: Stefano Babic <sba...@denx.de>
---
core/cpio_utils.c | 67 ++++++++++++++++++++++++++++++++++++++++++-----
include/util.h | 2 ++
2 files changed, 63 insertions(+), 6 deletions(-)

diff --git a/core/cpio_utils.c b/core/cpio_utils.c
index e06bf5f..e1325ef 100644
--- a/core/cpio_utils.c
+++ b/core/cpio_utils.c
@@ -29,6 +29,11 @@

#define NPAD_BYTES(o) ((4 - (o % 4)) % 4)

+typedef enum {
+ INPUT_FROM_FD,
+ INPUT_FROM_MEMORY
+} input_type_t;
+
int get_cpiohdr(unsigned char *buf, struct filehdr *fhdr)
{
struct new_ascii_header *cpiohdr;
@@ -120,8 +125,10 @@ int copy_write(void *out, const void *buf, unsigned int len)
int ret;
int fd;

- if (!out)
+ if (!out) {
+ ERROR("Output file descriptor invalid !");
return -1;
+ }

fd = *(int *)out;

@@ -187,6 +194,9 @@ typedef int (*PipelineStep)(void *state, void *buffer, size_t size);
struct InputState
{
int fdin;
+ input_type_t source;
+ unsigned char *inbuf;
+ size_t pos;
unsigned int nbytes;
unsigned long *offs;
void *dgst; /* use a private context for HASH */
@@ -196,12 +206,26 @@ struct InputState
static int input_step(void *state, void *buffer, size_t size)
{
struct InputState *s = (struct InputState *)state;
+ int ret = 0;
if (size >= s->nbytes) {
size = s->nbytes;
}
- int ret = fill_buffer(s->fdin, buffer, size, s->offs, &s->checksum, s->dgst);
- if (ret < 0) {
- return ret;
+ switch (s->source) {
+ case INPUT_FROM_FD:
+ ret = fill_buffer(s->fdin, buffer, size, s->offs, &s->checksum, s->dgst);
+ if (ret < 0) {
+ return ret;
+ }
+ break;
+ case INPUT_FROM_MEMORY:
+ memcpy(buffer, &s->inbuf[s->pos], size);
+ if (s->dgst) {
+ if (swupdate_HASH_update(s->dgst, &s->inbuf[s->pos], size) < 0)
+ return -EFAULT;
+ }
+ ret = size;
+ s->pos += size;
+ break;
}
s->nbytes -= ret;
return ret;
@@ -380,7 +404,7 @@ static int zstd_step(void* state, void* buffer, size_t size)

#endif

-static int __swupdate_copy(int fdin, void *out, unsigned int nbytes, unsigned long *offs, unsigned long long seek,
+static int __swupdate_copy(int fdin, unsigned char *inbuf, void *out, unsigned int nbytes, unsigned long *offs, unsigned long long seek,
int skip_file, int __attribute__ ((__unused__)) compressed,
uint32_t *checksum, unsigned char *hash, bool encrypted, const char *imgivt, writeimage callback)
{
@@ -398,6 +422,9 @@ static int __swupdate_copy(int fdin, void *out, unsigned int nbytes, unsigned lo

struct InputState input_state = {
.fdin = fdin,
+ .source = INPUT_FROM_FD,
+ .inbuf = NULL,
+ .pos = 0,
.nbytes = nbytes,
.offs = offs,
.dgst = NULL,
@@ -435,6 +462,14 @@ static int __swupdate_copy(int fdin, void *out, unsigned int nbytes, unsigned lo
#endif
#endif

+ /*
+ * If inbuf is set, read from buffer instead of from file
+ */
+ if (inbuf) {
+ input_state.inbuf = inbuf;
+ input_state.source = INPUT_FROM_MEMORY;
+ }
+
PipelineStep step = NULL;
void *state = NULL;
uint8_t buffer[BUFF_SIZE];
@@ -604,7 +639,8 @@ static int __swupdate_copy(int fdin, void *out, unsigned int nbytes, unsigned lo
}
}

- fill_buffer(fdin, buffer, NPAD_BYTES(*offs), offs, checksum, NULL);
+ if (!inbuf)
+ fill_buffer(fdin, buffer, NPAD_BYTES(*offs), offs, checksum, NULL);

if (checksum != NULL) {
*checksum = input_state.checksum;
@@ -638,6 +674,7 @@ int copyfile(int fdin, void *out, unsigned int nbytes, unsigned long *offs, unsi
uint32_t *checksum, unsigned char *hash, bool encrypted, const char *imgivt, writeimage callback)
{
return __swupdate_copy(fdin,
+ NULL,
out,
nbytes,
offs,
@@ -651,6 +688,24 @@ int copyfile(int fdin, void *out, unsigned int nbytes, unsigned long *offs, unsi
callback);
}

+int copybuffer(unsigned char *inbuf, void *out, unsigned int nbytes, int __attribute__ ((__unused__)) compressed,
+ unsigned char *hash, bool encrypted, const char *imgivt, writeimage callback)
+{
+ return __swupdate_copy(-1,
+ inbuf,
+ out,
+ nbytes,
+ NULL,
+ 0,
+ 0,
+ compressed,
+ NULL,
+ hash,
+ encrypted,
+ imgivt,
+ callback);
+}
+
int copyimage(void *out, struct img_type *img, writeimage callback)
{
return copyfile(img->fdin,
diff --git a/include/util.h b/include/util.h
index 3d328ee..52adc25 100644
--- a/include/util.h
+++ b/include/util.h
@@ -177,6 +177,8 @@ int copyfile(int fdin, void *out, unsigned int nbytes, unsigned long *offs,
int skip_file, int compressed, uint32_t *checksum,
unsigned char *hash, bool encrypted, const char *imgivt, writeimage callback);
int copyimage(void *out, struct img_type *img, writeimage callback);
+int copybuffer(unsigned char *inbuf, void *out, unsigned int nbytes, int compressed,
+ unsigned char *hash, bool encrypted, const char *imgivt, writeimage callback);
off_t extract_next_file(int fd, int fdout, off_t start, int compressed,
int encrypted, char *ivt, unsigned char *hash);
int openfileoutput(const char *filename);
--
2.25.1
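The core of the patch is the tagged input source dispatched in input_step(). A condensed, self-contained sketch of the same dispatch follows; here fill_buffer() is replaced by a plain read() and the hash/checksum bookkeeping is omitted:

```c
#include <assert.h>
#include <string.h>
#include <unistd.h>

typedef enum { INPUT_FROM_FD, INPUT_FROM_MEMORY } input_type_t;

struct input_state {
	input_type_t source;        /* which source is active */
	int fd;                     /* valid for INPUT_FROM_FD */
	const unsigned char *inbuf; /* valid for INPUT_FROM_MEMORY */
	size_t pos;                 /* current offset into inbuf */
	size_t nbytes;              /* bytes still to deliver */
};

/* Deliver up to `size` bytes from whichever source is configured,
 * clamped to the number of bytes left, as in the patch's input_step(). */
static ssize_t input_step(struct input_state *s, void *buffer, size_t size)
{
	ssize_t ret = 0;

	if (size > s->nbytes)
		size = s->nbytes;

	switch (s->source) {
	case INPUT_FROM_FD:
		ret = read(s->fd, buffer, size);
		if (ret < 0)
			return ret;
		break;
	case INPUT_FROM_MEMORY:
		memcpy(buffer, &s->inbuf[s->pos], size);
		s->pos += size;
		ret = (ssize_t)size;
		break;
	}
	s->nbytes -= (size_t)ret;
	return ret;
}
```

The rest of the decompression/decryption pipeline is untouched: it only ever sees the bytes produced by this step, regardless of where they came from.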

Stefano Babic

Oct 11, 2021, 7:22:19 AM
to swup...@googlegroups.com, Stefano Babic
Fix warning due to unused parameter in log.

Signed-off-by: Stefano Babic <sba...@denx.de>
---
corelib/multipart_parser.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/corelib/multipart_parser.c b/corelib/multipart_parser.c
index 5014dc8..982e853 100644
--- a/corelib/multipart_parser.c
+++ b/corelib/multipart_parser.c
@@ -8,7 +8,7 @@
#include <stdarg.h>
#include <string.h>

-static void multipart_log(const char * format, ...)
+static void multipart_log(const char __attribute__ ((__unused__)) *format, ...)
{
#ifdef DEBUG_MULTIPART
va_list args;
--
2.25.1

Stefano Babic

Oct 11, 2021, 7:22:20 AM
to swup...@googlegroups.com, Stefano Babic
This is a lightweight multipart parser for C, licensed under the MIT
license. Sources are available at https://github.com/iafonov/multipart-parser-c.

Fix a warning due to a missing va_end() and set the SPDX header.

Signed-off-by: Stefano Babic <sba...@denx.de>
---
corelib/Makefile | 1 +
corelib/multipart_parser.c | 306 +++++++++++++++++++++++++++++++++++++
include/multipart_parser.h | 49 ++++++
3 files changed, 356 insertions(+)
create mode 100644 corelib/multipart_parser.c
create mode 100644 include/multipart_parser.h

diff --git a/corelib/Makefile b/corelib/Makefile
index 037e72f..d6ea6a2 100644
--- a/corelib/Makefile
+++ b/corelib/Makefile
@@ -2,6 +2,7 @@
#
# SPDX-License-Identifier: GPL-2.0-only

+lib-y += multipart_parser.o
lib-$(CONFIG_DOWNLOAD) += downloader.o
lib-$(CONFIG_MTD) += mtd-interface.o
lib-$(CONFIG_LUA) += lua_interface.o lua_compat.o
diff --git a/corelib/multipart_parser.c b/corelib/multipart_parser.c
new file mode 100644
index 0000000..5014dc8
--- /dev/null
+++ b/corelib/multipart_parser.c
@@ -0,0 +1,306 @@
+/*
+ * Copyright (c) 2012 Igor Afonov afo...@gmail.com
+ *
+ * SPDX-License-Identifier: MIT
+ */
+#include "multipart_parser.h"
+#include <stdio.h>
+#include <stdarg.h>
+#include <string.h>
+
+static void multipart_log(const char * format, ...)
+{
+#ifdef DEBUG_MULTIPART
+ va_list args;
+ va_start(args, format);
+
+ fprintf(stderr, "[HTTP_MULTIPART_PARSER] %s:%d: ", __FILE__, __LINE__);
+ vfprintf(stderr, format, args);
+ fprintf(stderr, "\n");
+ va_end(args);
+#endif
+}
+
+#define NOTIFY_CB(FOR) \
+do { \
+ if (p->settings->on_##FOR) { \
+ if (p->settings->on_##FOR(p) != 0) { \
+ return i; \
+ } \
+ } \
+} while (0)
+
+#define EMIT_DATA_CB(FOR, ptr, len) \
+do { \
+ if (p->settings->on_##FOR) { \
+ if (p->settings->on_##FOR(p, ptr, len) != 0) { \
+ return i; \
+ } \
+ } \
+} while (0)
+
+
+#define LF 10
+#define CR 13
+
+struct multipart_parser {
+ void * data;
+
+ size_t index;
+ size_t boundary_length;
+
+ unsigned char state;
+
+ const multipart_parser_settings* settings;
+
+ char* lookbehind;
+ char multipart_boundary[1];
+};
+
+enum state {
+ s_uninitialized = 1,
+ s_start,
+ s_start_boundary,
+ s_header_field_start,
+ s_header_field,
+ s_headers_almost_done,
+ s_header_value_start,
+ s_header_value,
+ s_header_value_almost_done,
+ s_part_data_start,
+ s_part_data,
+ s_part_data_almost_boundary,
+ s_part_data_boundary,
+ s_part_data_almost_end,
+ s_part_data_end,
+ s_part_data_final_hyphen,
+ s_end
+};
+
+multipart_parser* multipart_parser_init
+ (const char *boundary, const multipart_parser_settings* settings) {
+
+ multipart_parser* p = malloc(sizeof(multipart_parser) +
+ strlen(boundary) +
+ strlen(boundary) + 9);
+
+ strcpy(p->multipart_boundary, boundary);
+ p->boundary_length = strlen(boundary);
+
+ p->lookbehind = (p->multipart_boundary + p->boundary_length + 1);
+
+ p->index = 0;
+ p->state = s_start;
+ p->settings = settings;
+
+ return p;
+}
+
+void multipart_parser_free(multipart_parser* p) {
+ free(p);
+}
+
+void multipart_parser_set_data(multipart_parser *p, void *data) {
+ p->data = data;
+}
+
+void *multipart_parser_get_data(multipart_parser *p) {
+ return p->data;
+}
+
+size_t multipart_parser_execute(multipart_parser* p, const char *buf, size_t len) {
+ size_t i = 0;
+ size_t mark = 0;
+ char c, cl;
+ int is_last = 0;
+
+ while(i < len) {
+ c = buf[i];
+ is_last = (i == (len - 1));
+ switch (p->state) {
+ case s_start:
+ multipart_log("s_start");
+ p->index = 0;
+ p->state = s_start_boundary;
+
+ /* fallthrough */
+ case s_start_boundary:
+ multipart_log("s_start_boundary");
+ if (p->index == p->boundary_length) {
+ if (c != CR) {
+ return i;
+ }
+ p->index++;
+ break;
+ } else if (p->index == (p->boundary_length + 1)) {
+ if (c != LF) {
+ return i;
+ }
+ p->index = 0;
+ NOTIFY_CB(part_data_begin);
+ p->state = s_header_field_start;
+ break;
+ }
+ if (c != p->multipart_boundary[p->index]) {
+ return i;
+ }
+ p->index++;
+ break;
+
+ case s_header_field_start:
+ multipart_log("s_header_field_start");
+ mark = i;
+ p->state = s_header_field;
+
+ /* fallthrough */
+ case s_header_field:
+ multipart_log("s_header_field");
+ if (c == CR) {
+ p->state = s_headers_almost_done;
+ break;
+ }
+
+ if (c == ':') {
+ EMIT_DATA_CB(header_field, buf + mark, i - mark);
+ p->state = s_header_value_start;
+ break;
+ }
+
+ cl = tolower(c);
+ if ((c != '-') && (cl < 'a' || cl > 'z')) {
+ multipart_log("invalid character in header name");
+ return i;
+ }
+ if (is_last)
+ EMIT_DATA_CB(header_field, buf + mark, (i - mark) + 1);
+ break;
+
+ case s_headers_almost_done:
+ multipart_log("s_headers_almost_done");
+ if (c != LF) {
+ return i;
+ }
+
+ p->state = s_part_data_start;
+ break;
+
+ case s_header_value_start:
+ multipart_log("s_header_value_start");
+ if (c == ' ') {
+ break;
+ }
+
+ mark = i;
+ p->state = s_header_value;
+
+ /* fallthrough */
+ case s_header_value:
+ multipart_log("s_header_value");
+ if (c == CR) {
+ EMIT_DATA_CB(header_value, buf + mark, i - mark);
+ p->state = s_header_value_almost_done;
+ break;
+ }
+ if (is_last)
+ EMIT_DATA_CB(header_value, buf + mark, (i - mark) + 1);
+ break;
+
+ case s_header_value_almost_done:
+ multipart_log("s_header_value_almost_done");
+ if (c != LF) {
+ return i;
+ }
+ p->state = s_header_field_start;
+ break;
+
+ case s_part_data_start:
+ multipart_log("s_part_data_start");
+ NOTIFY_CB(headers_complete);
+ mark = i;
+ p->state = s_part_data;
+
+ /* fallthrough */
+ case s_part_data:
+ multipart_log("s_part_data");
+ if (c == CR) {
+ EMIT_DATA_CB(part_data, buf + mark, i - mark);
+ mark = i;
+ p->state = s_part_data_almost_boundary;
+ p->lookbehind[0] = CR;
+ break;
+ }
+ if (is_last)
+ EMIT_DATA_CB(part_data, buf + mark, (i - mark) + 1);
+ break;
+
+ case s_part_data_almost_boundary:
+ multipart_log("s_part_data_almost_boundary");
+ if (c == LF) {
+ p->state = s_part_data_boundary;
+ p->lookbehind[1] = LF;
+ p->index = 0;
+ break;
+ }
+ EMIT_DATA_CB(part_data, p->lookbehind, 1);
+ p->state = s_part_data;
+ mark = i --;
+ break;
+
+ case s_part_data_boundary:
+ multipart_log("s_part_data_boundary");
+ if (p->multipart_boundary[p->index] != c) {
+ EMIT_DATA_CB(part_data, p->lookbehind, 2 + p->index);
+ p->state = s_part_data;
+ mark = i --;
+ break;
+ }
+ p->lookbehind[2 + p->index] = c;
+ if ((++ p->index) == p->boundary_length) {
+ NOTIFY_CB(part_data_end);
+ p->state = s_part_data_almost_end;
+ }
+ break;
+
+ case s_part_data_almost_end:
+ multipart_log("s_part_data_almost_end");
+ if (c == '-') {
+ p->state = s_part_data_final_hyphen;
+ break;
+ }
+ if (c == CR) {
+ p->state = s_part_data_end;
+ break;
+ }
+ return i;
+
+ case s_part_data_final_hyphen:
+ multipart_log("s_part_data_final_hyphen");
+ if (c == '-') {
+ NOTIFY_CB(body_end);
+ p->state = s_end;
+ break;
+ }
+ return i;
+
+ case s_part_data_end:
+ multipart_log("s_part_data_end");
+ if (c == LF) {
+ p->state = s_header_field_start;
+ NOTIFY_CB(part_data_begin);
+ break;
+ }
+ return i;
+
+ case s_end:
+ multipart_log("s_end: %02X", (int) c);
+ break;
+
+ default:
+ multipart_log("Multipart parser unrecoverable error");
+ return 0;
+ }
+ ++ i;
+ }
+
+ return len;
+}
diff --git a/include/multipart_parser.h b/include/multipart_parser.h
new file mode 100644
index 0000000..015e8ad
--- /dev/null
+++ b/include/multipart_parser.h
@@ -0,0 +1,49 @@
+/*
+ * Copyright (c) 2012 Igor Afonov afo...@gmail.com
+ *
+ * SPDX-License-Identifier: MIT
+ */
+#ifndef _multipart_parser_h
+#define _multipart_parser_h
+
+#ifdef __cplusplus
+extern "C"
+{
+#endif
+
+#include <stdlib.h>
+#include <ctype.h>
+
+typedef struct multipart_parser multipart_parser;
+typedef struct multipart_parser_settings multipart_parser_settings;
+typedef struct multipart_parser_state multipart_parser_state;
+
+typedef int (*multipart_data_cb) (multipart_parser*, const char *at, size_t length);
+typedef int (*multipart_notify_cb) (multipart_parser*);
+
+struct multipart_parser_settings {
+ multipart_data_cb on_header_field;
+ multipart_data_cb on_header_value;
+ multipart_data_cb on_part_data;
+
+ multipart_notify_cb on_part_data_begin;
+ multipart_notify_cb on_headers_complete;
+ multipart_notify_cb on_part_data_end;
+ multipart_notify_cb on_body_end;
+};
+
+multipart_parser* multipart_parser_init
+ (const char *boundary, const multipart_parser_settings* settings);
+
+void multipart_parser_free(multipart_parser* p);
+
+size_t multipart_parser_execute(multipart_parser* p, const char *buf, size_t len);
+
+void multipart_parser_set_data(multipart_parser* p, void* data);
+void * multipart_parser_get_data(multipart_parser* p);
+
+#ifdef __cplusplus
+} /* extern "C" */
+#endif
+
+#endif
--
2.25.1

Stefano Babic

Oct 11, 2021, 7:22:22 AM
to swup...@googlegroups.com, Stefano Babic
Functions are defined as part of a generic channel and initialized via a
channel_new() call, so their scope can be limited to the module instead
of being global.

Signed-off-by: Stefano Babic <sba...@denx.de>
---
corelib/channel_curl.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/corelib/channel_curl.c b/corelib/channel_curl.c
index ff2053d..62009de 100644
--- a/corelib/channel_curl.c
+++ b/corelib/channel_curl.c
@@ -75,12 +75,12 @@ char *channel_get_redirect_url(channel_t *this);
static void channel_log_effective_url(channel_t *this);

/* Prototypes for "public" functions */
+static channel_op_res_t channel_close(channel_t *this);
+static channel_op_res_t channel_open(channel_t *this, void *cfg);
+static channel_op_res_t channel_get(channel_t *this, void *data);
+static channel_op_res_t channel_get_file(channel_t *this, void *data);
+static channel_op_res_t channel_put(channel_t *this, void *data);
channel_op_res_t channel_curl_init(void);
-channel_op_res_t channel_close(channel_t *this);
-channel_op_res_t channel_open(channel_t *this, void *cfg);
-channel_op_res_t channel_get(channel_t *this, void *data);
-channel_op_res_t channel_get_file(channel_t *this, void *data);
-channel_op_res_t channel_put(channel_t *this, void *data);
channel_t *channel_new(void);


--
2.25.1
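The pattern being enforced: only the constructor is exported, while the operations stay `static` and are reachable solely through the channel's function-pointer table. A minimal self-contained sketch of that pattern (reduced to two operations; SWUpdate's real channel_t carries more):

```c
#include <assert.h>
#include <stdlib.h>

typedef struct channel channel_t;
struct channel {
	int (*open)(channel_t *this);
	int (*close)(channel_t *this);
	void *priv;
};

/* static: invisible outside this module, callable only via the table */
static int channel_open(channel_t *this)
{
	(void)this;
	return 0;
}

static int channel_close(channel_t *this)
{
	(void)this;
	return 0;
}

/* The constructor is the module's only exported entry point:
 * it allocates the object and wires up the operations. */
channel_t *channel_new(void)
{
	channel_t *c = calloc(1, sizeof(*c));

	if (!c)
		return NULL;
	c->open = channel_open;
	c->close = channel_close;
	return c;
}
```

Callers then go through `c->open(c)` and friends, which is what lets the implementations be statified without breaking anyone.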

Stefano Babic

Oct 11, 2021, 7:22:23 AM
to swup...@googlegroups.com, Stefano Babic
Signed-off-by: Stefano Babic <sba...@denx.de>
---
corelib/channel_curl.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/corelib/channel_curl.c b/corelib/channel_curl.c
index 62009de..5c88ef7 100644
--- a/corelib/channel_curl.c
+++ b/corelib/channel_curl.c
@@ -1312,7 +1312,7 @@ channel_op_res_t channel_get(channel_t *this, void *data)
{
channel_curl_t *channel_curl = this->priv;
assert(data != NULL);
- assert(channel_curl.handle != NULL);
+ assert(channel_curl->handle != NULL);

channel_op_res_t result = CHANNEL_OK;
channel_data_t *channel_data = (channel_data_t *)data;
--
2.25.1

Stefano Babic

Oct 11, 2021, 7:22:24 AM
to swup...@googlegroups.com, Stefano Babic
If the Content-Type is raw bytes / binaries, a charset has no meaning.
Add it automatically only in case of application/json and
application/text. It is still possible to set a charset in other cases,
but then the caller must add it to the list of headers.

Signed-off-by: Stefano Babic <sba...@denx.de>
---
corelib/channel_curl.c | 11 +++++++++++
1 file changed, 11 insertions(+)

diff --git a/corelib/channel_curl.c b/corelib/channel_curl.c
index 5c88ef7..50d5cc8 100644
--- a/corelib/channel_curl.c
+++ b/corelib/channel_curl.c
@@ -522,6 +522,17 @@ static channel_op_res_t channel_set_content_type(channel_t *this,
result = CHANNEL_EINIT;
}
}
+ /*
+ * Add default charset for application content
+ */
+ if ((!strcmp(content, "application/json") || !strcmp(content, "application/text")) &&
+ (result == CHANNEL_OK)) {
+ if ((channel_curl->header = curl_slist_append(
+ channel_curl->header, "charsets: utf-8")) == NULL) {
+ ERROR("Set channel charset header failed.");
+ result = CHANNEL_EINIT;
+ }
+ }

return result;
}
--
2.25.1

Stefano Babic

Oct 11, 2021, 7:22:26 AM
to swup...@googlegroups.com, Stefano Babic
The "checkdwl" callback is used up to now only by the Hawkbit backend to
check if there is a cancel request on the server. It is called by the
curl WRITEFUNCTION callback, but the callback in channel_curl.c changes
the signature and does not pass the parameters on to checkdwl.

Rename checkdwl to the more generic "dwlwrdata" and pass all parameters
foreseen by curl for the callback. This gives access to the incoming
stream, so the channel can be used in other cases without sending data
to the installer.

Signed-off-by: Stefano Babic <sba...@denx.de>
---
corelib/channel_curl.c | 6 ++++--
include/channel_curl.h | 3 ++-
suricatta/server_general.c | 2 +-
suricatta/server_hawkbit.c | 17 ++++++++++-------
4 files changed, 17 insertions(+), 11 deletions(-)

diff --git a/corelib/channel_curl.c b/corelib/channel_curl.c
index 50d5cc8..0636efc 100644
--- a/corelib/channel_curl.c
+++ b/corelib/channel_curl.c
@@ -191,8 +191,10 @@ size_t channel_callback_ipc(void *streamdata, size_t size, size_t nmemb,
return 0;
}

- if (data->channel_data->checkdwl && data->channel_data->checkdwl())
- return 0;
+ if (data->channel_data->dwlwrdata) {
+ return data->channel_data->dwlwrdata(streamdata, size, nmemb, data->channel_data);
+ }
+
/*
* Now check if there is a callback from the server
* during the download
diff --git a/include/channel_curl.h b/include/channel_curl.h
index 8ecaf59..49d5242 100644
--- a/include/channel_curl.h
+++ b/include/channel_curl.h
@@ -68,7 +68,8 @@ typedef struct {
bool nocheckanswer;
long http_response_code;
bool nofollow;
- int (*checkdwl)(void);
+ size_t (*dwlwrdata)(char *streamdata, size_t size, size_t nmemb,
+ void *data);
struct swupdate_digest *dgst;
char sha1hash[SWUPDATE_SHA_DIGEST_LENGTH * 2 + 1];
sourcetype source;
diff --git a/suricatta/server_general.c b/suricatta/server_general.c
index e8c3186..194ad20 100644
--- a/suricatta/server_general.c
+++ b/suricatta/server_general.c
@@ -545,7 +545,7 @@ server_op_res_t server_install_update(void)

channel_data.nofollow = false;
channel_data.nocheckanswer = false;
- channel_data.checkdwl = NULL;
+ channel_data.dwlwrdata = NULL;

channel_data.url = strdup(url);

diff --git a/suricatta/server_hawkbit.c b/suricatta/server_hawkbit.c
index f8f560e..1cad5cf 100644
--- a/suricatta/server_hawkbit.c
+++ b/suricatta/server_hawkbit.c
@@ -651,12 +651,15 @@ cleanup:
return result;
}

-static int server_check_during_dwl(void)
+static size_t server_check_during_dwl(char __attribute__ ((__unused__)) *streamdata,
+ size_t size,
+ size_t nmemb,
+ void __attribute__ ((__unused__)) *data)
{
struct timeval now;
channel_data_t channel_data = channel_data_defaults;
int action_id;
- int ret = 0;
+ int ret = size * nmemb;
const char *update_action;

server_get_current_time(&now);
@@ -668,7 +671,7 @@ static int server_check_during_dwl(void)
* was requested
*/
if ((now.tv_sec - server_time.tv_sec) < ((int)server_get_polling_interval()))
- return 0;
+ return ret;

/* Update current server time */
server_time = now;
@@ -685,7 +688,7 @@ static int server_check_during_dwl(void)
* go on downloading
*/
free(channel);
- return 0;
+ return ret;
}

/*
@@ -696,13 +699,13 @@ static int server_check_during_dwl(void)
if (result == SERVER_UPDATE_CANCELED) {
/* Mark that an update was cancelled by the server */
server_hawkbit.cancelDuringUpdate = true;
- ret = -1;
+ ret = 0;
}
update_action = json_get_deployment_update_action(channel_data.json_reply);

/* if the deployment is skipped then stop downloading */
if (update_action == deployment_update_action.skip)
- ret = -1;
+ ret = 0;

check_action_changed(action_id, update_action);

@@ -1146,7 +1149,7 @@ server_op_res_t server_process_update_artifact(int action_id,
goto cleanup_loop;
}

- channel_data.checkdwl = server_check_during_dwl;
+ channel_data.dwlwrdata = server_check_during_dwl;

/*
* There is no authorizytion token when file is loaded, because SWU
--
2.25.1
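The return-value changes above follow the libcurl WRITEFUNCTION contract: returning the full `size * nmemb` lets the transfer continue, while any other value (here 0) makes curl abort it. A self-contained sketch of a cancel-aware callback with that signature (the `cancel_requested` flag is invented for illustration; the real server_check_during_dwl() polls the backend instead):

```c
#include <assert.h>
#include <stddef.h>

static int cancel_requested; /* would be set e.g. after polling the server */

/* Same signature libcurl expects for CURLOPT_WRITEFUNCTION. */
static size_t dwlwrdata(char *streamdata, size_t size, size_t nmemb,
			void *data)
{
	(void)streamdata;
	(void)data;

	if (cancel_requested)
		return 0;    /* != size * nmemb: curl aborts the download */

	/* ... consume size * nmemb bytes of streamdata here ... */
	return size * nmemb; /* tell curl everything was handled */
}
```

This is why the patch flips the "stop" value from -1 to 0: the callback now speaks curl's protocol directly instead of a private convention.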

Stefano Babic

Oct 11, 2021, 7:22:28 AM
to swup...@googlegroups.com, Stefano Babic
The headers callback collects the headers in a dictionary and receives a
pointer to it as parameter. The callback may need more information when
the headers must be evaluated, so pass the curl setup (channel_data_t)
instead of just the dictionary.

Signed-off-by: Stefano Babic <sba...@denx.de>
---
corelib/channel_curl.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/corelib/channel_curl.c b/corelib/channel_curl.c
index be553f0..20323b7 100644
--- a/corelib/channel_curl.c
+++ b/corelib/channel_curl.c
@@ -450,7 +450,8 @@ static int channel_callback_xferinfo_legacy(void *p, double dltotal, double dlno

static size_t channel_callback_headers(char *buffer, size_t size, size_t nitems, void *userdata)
{
- struct dict *dict = (struct dict *)userdata;
+ channel_data_t *channel_data = (channel_data_t *)userdata;
+ struct dict *dict = channel_data->received_headers;
char *info = malloc(size * nitems + 1);
char *p, *key, *val;

--
2.25.1

Stefano Babic

Oct 11, 2021, 7:22:28 AM
to swup...@googlegroups.com, Stefano Babic
The get_file() function sends data over IPC to install the SWU. To make
the channel more generic, add a parameter to control whether the
incoming data must be forwarded to the IPC.

This allows using get_file() in other contexts by providing one's own
callback to handle the stream; the curl callback in channel_curl.c then
becomes a proxy that simply forwards the data to the callback supplied
as "dwlwrdata" in the channel_data_t structure.

Signed-off-by: Stefano Babic <sba...@denx.de>
---
corelib/channel_curl.c | 50 +++++++++++++++++++++++-------------------
include/channel_curl.h | 1 +
2 files changed, 28 insertions(+), 23 deletions(-)

diff --git a/corelib/channel_curl.c b/corelib/channel_curl.c
index 0636efc..be553f0 100644
--- a/corelib/channel_curl.c
+++ b/corelib/channel_curl.c
@@ -184,7 +184,8 @@ size_t channel_callback_ipc(void *streamdata, size_t size, size_t nmemb,
}
}

- if (ipc_send_data(data->output, streamdata, (int)(size * nmemb)) <
+ if (!data->channel_data->noipc &&
+ ipc_send_data(data->output, streamdata, (int)(size * nmemb)) <
0) {
ERROR("Writing into SWUpdate IPC stream failed.");
result_channel_callback_ipc = CHANNEL_EIO;
@@ -1083,7 +1084,8 @@ channel_op_res_t channel_put(channel_t *this, void *data)
channel_op_res_t channel_get_file(channel_t *this, void *data)
{
channel_curl_t *channel_curl = this->priv;
- int file_handle;
+ int file_handle = -1;
+ struct swupdate_request req;
assert(data != NULL);
assert(channel_curl->handle != NULL);

@@ -1143,30 +1145,32 @@ channel_op_res_t channel_get_file(channel_t *this, void *data)
goto cleanup_header;
}

- struct swupdate_request req;
- swupdate_prepare_req(&req);
- req.dry_run = channel_data->dry_run;
- req.source = channel_data->source;
- if (channel_data->info) {
- strncpy(req.info, channel_data->info,
- sizeof(req.info) - 1 );
- }
- for (int retries = 3; retries >= 0; retries--) {
- file_handle = ipc_inst_start_ext( &req, sizeof(struct swupdate_request));
- if (file_handle > 0)
- break;
- sleep(1);
- }
- if (file_handle < 0) {
- ERROR("Cannot open SWUpdate IPC stream: %s", strerror(errno));
- result = CHANNEL_EIO;
- goto cleanup_header;
- }
-
write_callback_t wrdata;
wrdata.channel_data = channel_data;
+ if (!channel_data->noipc) {
+ swupdate_prepare_req(&req);
+ req.dry_run = channel_data->dry_run;
+ req.source = channel_data->source;
+ if (channel_data->info) {
+ strncpy(req.info, channel_data->info,
+ sizeof(req.info) - 1 );
+ }
+ for (int retries = 3; retries >= 0; retries--) {
+ file_handle = ipc_inst_start_ext( &req, sizeof(struct swupdate_request));
+ if (file_handle > 0)
+ break;
+ sleep(1);
+ }
+ if (file_handle < 0) {
+ ERROR("Cannot open SWUpdate IPC stream: %s", strerror(errno));
+ result = CHANNEL_EIO;
+ goto cleanup_header;
+ }
+ }
+
wrdata.output = file_handle;
result_channel_callback_ipc = CHANNEL_OK;
+
if ((curl_easy_setopt(channel_curl->handle, CURLOPT_WRITEFUNCTION,
channel_callback_ipc) != CURLE_OK) ||
(curl_easy_setopt(channel_curl->handle, CURLOPT_WRITEDATA,
@@ -1305,7 +1309,7 @@ cleanup_file:
* so use close() here directly to issue an error in case.
* Also, for a given file handle, calling ipc_end() would make
* no semantic sense. */
- if (close(file_handle) != 0) {
+ if (file_handle > 0 && close(file_handle) != 0) {
ERROR("Channel error while closing download target handle: '%s'",
strerror(errno));
}
diff --git a/include/channel_curl.h b/include/channel_curl.h
index 49d5242..fe68a99 100644
--- a/include/channel_curl.h
+++ b/include/channel_curl.h
@@ -66,6 +66,7 @@ typedef struct {
bool usessl;
bool strictssl;
bool nocheckanswer;
+ bool noipc; /* do not send to SWUpdate IPC if set */
long http_response_code;
bool nofollow;
size_t (*dwlwrdata)(char *streamdata, size_t size, size_t nmemb,
--
2.25.1

Stefano Babic

Oct 11, 2021, 7:22:30 AM
to swup...@googlegroups.com, Stefano Babic
Headers are handled automatically by the channel. Add an optional
callback that the channel calls for each header, so that the caller can
evaluate the headers itself.

Signed-off-by: Stefano Babic <sba...@denx.de>
---
corelib/channel_curl.c | 63 +++++++++++++++++++++++-------------------
include/channel_curl.h | 2 ++
2 files changed, 37 insertions(+), 28 deletions(-)

diff --git a/corelib/channel_curl.c b/corelib/channel_curl.c
index 20323b7..9af9c39 100644
--- a/corelib/channel_curl.c
+++ b/corelib/channel_curl.c
@@ -452,36 +452,43 @@ static size_t channel_callback_headers(char *buffer, size_t size, size_t nitems,
{
channel_data_t *channel_data = (channel_data_t *)userdata;
struct dict *dict = channel_data->received_headers;
- char *info = malloc(size * nitems + 1);
+ char *info;
char *p, *key, *val;

- if (!info) {
- ERROR("No memory allocated for headers, headers not collected !!");
- return nitems * size;
- }
- /*
- * Work on a local copy because the buffer is not
- * '\0' terminated
- */
- memcpy(info, buffer, size * nitems);
- info[size * nitems] = '\0';
- p = memchr(info, ':', size * nitems);
- if (p) {
- *p = '\0';
- key = info;
- val = p + 1; /* Next char after ':' */
- while(isspace((unsigned char)*val)) val++;
- /* Remove '\n', '\r', and '\r\n' from header's value. */
- *strchrnul(val, '\r') = '\0';
- *strchrnul(val, '\n') = '\0';
- /* For multiple same-key headers, only the last is saved. */
- dict_set_value(dict, key, val);
- TRACE("Header processed: %s : %s", key, val);
- } else {
- TRACE("Header not processed: '%s'", info);
+ if (dict) {
+ info = malloc(size * nitems + 1);
+ if (!info) {
+ ERROR("No memory allocated for headers, headers not collected !!");
+ return nitems * size;
+ }
+ /*
+ * Work on a local copy because the buffer is not
+ * '\0' terminated
+ */
+ memcpy(info, buffer, size * nitems);
+ info[size * nitems] = '\0';
+ p = memchr(info, ':', size * nitems);
+ if (p) {
+ *p = '\0';
+ key = info;
+ val = p + 1; /* Next char after ':' */
+ while(isspace((unsigned char)*val)) val++;
+ /* Remove '\n', '\r', and '\r\n' from header's value. */
+ *strchrnul(val, '\r') = '\0';
+ *strchrnul(val, '\n') = '\0';
+ /* For multiple same-key headers, only the last is saved. */
+ dict_set_value(dict, key, val);
+ TRACE("Header processed: %s : %s", key, val);
+ } else {
+ TRACE("Header not processed: '%s'", info);
+ }
+
+ free(info);
}

- free(info);
+ if (channel_data->headers)
+ return channel_data->headers(buffer, size, nitems, userdata);
+
return nitems * size;
}

@@ -608,7 +615,7 @@ channel_op_res_t channel_set_options(channel_t *this, channel_data_t *channel_da
goto cleanup;
}

- if (channel_data->received_headers) {
+ if (channel_data->received_headers || channel_data->headers) {
/*
* Setup supply request and receive reply HTTP headers.
* A LIST_INIT()'d dictionary is expected at channel_data->headers.
@@ -619,7 +626,7 @@ channel_op_res_t channel_set_options(channel_t *this, channel_data_t *channel_da
CURLOPT_HEADERFUNCTION,
channel_callback_headers) != CURLE_OK) ||
(curl_easy_setopt(channel_curl->handle, CURLOPT_HEADERDATA,
- channel_data->received_headers) != CURLE_OK)) {
+ channel_data) != CURLE_OK)) {
result = CHANNEL_EINIT;
goto cleanup;
}
diff --git a/include/channel_curl.h b/include/channel_curl.h
index fe68a99..456367d 100644
--- a/include/channel_curl.h
+++ b/include/channel_curl.h
@@ -71,6 +71,8 @@ typedef struct {
bool nofollow;
size_t (*dwlwrdata)(char *streamdata, size_t size, size_t nmemb,
void *data);
+ size_t (*headers)(char *streamdata, size_t size, size_t nmemb,
+ void *data);
struct swupdate_digest *dgst;
char sha1hash[SWUPDATE_SHA_DIGEST_LENGTH * 2 + 1];
sourcetype source;
--
2.25.1
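The parsing done in channel_callback_headers() can be exercised in isolation: copy the non-NUL-terminated curl buffer, split at the first ':', skip leading spaces, strip the trailing CR/LF. A self-contained version of just that step, with dict_set_value() replaced by output parameters and the GNU-specific strchrnul() replaced by portable strcspn():

```c
#include <assert.h>
#include <ctype.h>
#include <stdlib.h>
#include <string.h>

/* Split one raw header line (not NUL-terminated) into key and value.
 * Returns 0 on success; the caller must free() *key, and *val points
 * into the same allocation. */
static int parse_header(const char *buffer, size_t len, char **key, char **val)
{
	char *info = malloc(len + 1);
	char *p;

	if (!info)
		return -1;
	/* Work on a local copy because the buffer is not '\0' terminated. */
	memcpy(info, buffer, len);
	info[len] = '\0';
	p = memchr(info, ':', len);
	if (!p) { /* status line or malformed header: not processed */
		free(info);
		return -1;
	}
	*p = '\0';
	*key = info;
	*val = p + 1; /* next char after ':' */
	while (isspace((unsigned char)**val))
		(*val)++;
	/* Remove '\n', '\r', and '\r\n' from the header's value. */
	(*val)[strcspn(*val, "\r\n")] = '\0';
	return 0;
}
```

Keeping this logic in one place is what lets the patch make the dictionary optional while still offering the raw line to an external callback.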

Stefano Babic

Oct 11, 2021, 7:22:31 AM
to swup...@googlegroups.com, Stefano Babic
Channel callbacks receive as parameter the channel_data_t structure used
to set up the transfer. Add to this structure an optional pointer that
can be used by the callbacks for their own data.

Signed-off-by: Stefano Babic <sba...@denx.de>
---
include/channel_curl.h | 1 +
1 file changed, 1 insertion(+)

diff --git a/include/channel_curl.h b/include/channel_curl.h
index 456367d..7d1e892 100644
--- a/include/channel_curl.h
+++ b/include/channel_curl.h
@@ -79,4 +79,5 @@ typedef struct {
struct dict *headers_to_send;
struct dict *received_headers;
unsigned int max_download_speed;
+ void *user;
} channel_data_t;
--
2.25.1
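To illustrate the pattern this field enables, here is a minimal, self-contained sketch of a callback recovering its private context through an opaque user pointer. The struct and function names are illustrative, not SWUpdate's API:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Sketch of the pattern the new "user" field enables: the transfer
 * setup carries one opaque pointer, and each callback casts it back
 * to its private context. Struct and function names are illustrative,
 * not SWUpdate's API. */
typedef struct {
	size_t (*write_cb)(const char *buf, size_t len, void *userdata);
	void *user;			/* opaque, owned by the callback */
} transfer_t;

typedef struct {
	char collected[64];
	size_t nbytes;
} my_ctx_t;

static size_t my_write_cb(const char *buf, size_t len, void *userdata)
{
	my_ctx_t *ctx = (my_ctx_t *)userdata;	/* recover private data */
	memcpy(ctx->collected + ctx->nbytes, buf, len);
	ctx->nbytes += len;
	return len;
}

static void run_transfer(transfer_t *t, const char *chunk, size_t len)
{
	/* the channel only forwards the opaque pointer, it never looks inside */
	t->write_cb(chunk, len, t->user);
}
```

In SWUpdate the channel forwards channel_data (and hence channel_data->user) untouched to the registered callbacks.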

Stefano Babic

Oct 11, 2021, 7:22:32 AM10/11/21
to swup...@googlegroups.com, Stefano Babic
The channel pointer can be used by the callbacks to call curl functions whose
results can be evaluated later.

Signed-off-by: Stefano Babic <sba...@denx.de>
---
corelib/channel_curl.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/corelib/channel_curl.c b/corelib/channel_curl.c
index 9af9c39..abe252c 100644
--- a/corelib/channel_curl.c
+++ b/corelib/channel_curl.c
@@ -53,6 +53,7 @@ typedef struct {
channel_data_t *channel_data;
int output;
output_data_t *outdata;
+ channel_t *this;
} write_callback_t;

typedef struct {
@@ -940,7 +941,7 @@ static channel_op_res_t channel_post_method(channel_t *this, void *data, int met
channel_op_res_t result = CHANNEL_OK;
channel_data_t *channel_data = (channel_data_t *)data;
output_data_t outdata = {};
- write_callback_t wrdata = { .channel_data = channel_data, .outdata = &outdata };
+ write_callback_t wrdata = { .this = this, .channel_data = channel_data, .outdata = &outdata };

if ((result = channel_set_content_type(this, channel_data)) !=
CHANNEL_OK) {
@@ -1153,7 +1154,7 @@ channel_op_res_t channel_get_file(channel_t *this, void *data)
goto cleanup_header;
}

- write_callback_t wrdata;
+ write_callback_t wrdata = { .this = this };
wrdata.channel_data = channel_data;
if (!channel_data->noipc) {
swupdate_prepare_req(&req);
@@ -1343,7 +1344,7 @@ channel_op_res_t channel_get(channel_t *this, void *data)
channel_data_t *channel_data = (channel_data_t *)data;
channel_data->http_response_code = 0;
output_data_t outdata = {};
- write_callback_t wrdata = { .channel_data = channel_data, .outdata = &outdata };
+ write_callback_t wrdata = { .this = this, .channel_data = channel_data, .outdata = &outdata };

if ((result = channel_set_content_type(this, channel_data)) !=
CHANNEL_OK) {
--
2.25.1

Stefano Babic

Oct 11, 2021, 7:22:34 AM10/11/21
to swup...@googlegroups.com, Stefano Babic
In case just a part of the file is requested via a byte range, do not ask the
server for the size of the file to be downloaded.

Signed-off-by: Stefano Babic <sba...@denx.de>
---
corelib/channel_curl.c | 16 +++++++++++-----
1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/corelib/channel_curl.c b/corelib/channel_curl.c
index bed631b..4aebaa5 100644
--- a/corelib/channel_curl.c
+++ b/corelib/channel_curl.c
@@ -1147,14 +1147,20 @@ channel_op_res_t channel_get_file(channel_t *this, void *data)
}

download_callback_data_t download_data;
- if (channel_enable_download_progress_tracking(channel_curl,
- channel_data->url,
- &download_data) == CHANNEL_EINIT) {
- WARN("Failed to get total download size for URL %s.",
+ /*
+ * In case of range do not ask the server for file size
+ */
+ if (!channel_data->range) {
+ if (channel_enable_download_progress_tracking(channel_curl,
+ channel_data->url,
+ &download_data) == CHANNEL_EINIT) {
+ WARN("Failed to get total download size for URL %s.",
channel_data->url);
} else
INFO("Total download size is %lu kB.",
- download_data.total_download_size / 1024);
+ download_data.total_download_size / 1024);
+
+ }

if (curl_easy_setopt(channel_curl->handle, CURLOPT_CUSTOMREQUEST, "GET") !=
CURLE_OK) {
--
2.25.1

Stefano Babic

Oct 11, 2021, 7:22:34 AM10/11/21
to swup...@googlegroups.com, Stefano Babic
This allows setting a range request for a file to be downloaded.

Signed-off-by: Stefano Babic <sba...@denx.de>
---
corelib/channel_curl.c | 9 +++++++++
include/channel_curl.h | 1 +
2 files changed, 10 insertions(+)

diff --git a/corelib/channel_curl.c b/corelib/channel_curl.c
index abe252c..bed631b 100644
--- a/corelib/channel_curl.c
+++ b/corelib/channel_curl.c
@@ -757,6 +757,15 @@ channel_op_res_t channel_set_options(channel_t *this, channel_data_t *channel_da
}
}

+ if (channel_data->range) {
+ if (curl_easy_setopt(channel_curl->handle, CURLOPT_RANGE,
+ channel_data->range) != CURLE_OK) {
+ ERROR("Bytes Range could not be set.");
+ result = CHANNEL_EINIT;
+ goto cleanup;
+ }
+ }
+
cleanup:
return result;
}
diff --git a/include/channel_curl.h b/include/channel_curl.h
index 7d1e892..4409dca 100644
--- a/include/channel_curl.h
+++ b/include/channel_curl.h
@@ -79,5 +79,6 @@ typedef struct {
struct dict *headers_to_send;
struct dict *received_headers;
unsigned int max_download_speed;
+ char *range; /* Range request for get_file, if any */
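For reference, CURLOPT_RANGE expects a string such as "0-1023" or, for several ranges, a comma-separated list like "0-1023,4096-8191". A small sketch of building such a string from offset/length pairs follows; build_range() is an illustrative helper, not part of this patch:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* CURLOPT_RANGE takes byte positions, inclusive on both ends, so a
 * range of `len` bytes starting at `start` is "start-(start+len-1)".
 * Several ranges are joined with commas. */
struct byterange { unsigned long start; unsigned long len; };

static int build_range(char *out, size_t outsize,
		       const struct byterange *r, size_t count)
{
	size_t pos = 0;
	for (size_t i = 0; i < count; i++) {
		int n = snprintf(out + pos, outsize - pos, "%s%lu-%lu",
				 i ? "," : "",
				 r[i].start, r[i].start + r[i].len - 1);
		if (n < 0 || (size_t)n >= outsize - pos)
			return -1;	/* output buffer too small */
		pos += (size_t)n;
	}
	return 0;
}
```

The resulting string would then be passed as the `range` field set up by this patch.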

Stefano Babic

Oct 11, 2021, 7:22:36 AM10/11/21
to swup...@googlegroups.com, Stefano Babic
Callbacks need to know the HTTP return code, if any.

Signed-off-by: Stefano Babic <sba...@denx.de>
---
corelib/channel_curl.c | 3 +++
1 file changed, 3 insertions(+)

diff --git a/corelib/channel_curl.c b/corelib/channel_curl.c
index 4aebaa5..d051ae4 100644
--- a/corelib/channel_curl.c
+++ b/corelib/channel_curl.c
@@ -185,6 +185,9 @@ size_t channel_callback_ipc(void *streamdata, size_t size, size_t nmemb,
}
}

+ if (!data->channel_data->http_response_code)
+ channel_map_http_code(data->this, &data->channel_data->http_response_code);
+
if (!data->channel_data->noipc &&
ipc_send_data(data->output, streamdata, (int)(size * nmemb)) <
0) {
--
2.25.1
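A sketch of why the write callback wants the response code: with libcurl, returning a count different from the passed-in size aborts the transfer, and the delta downloader added later in this series uses that, together with the HTTP code made available here, to reject any reply to a range request that is not "206 Partial Content" (a plain 200 would deliver the whole file). guarded_write() is illustrative, not SWUpdate code:

```c
#include <assert.h>
#include <stddef.h>

/* Reject data unless the server answered the Range request with
 * "206 Partial Content"; returning 0 (fewer bytes than received)
 * would make libcurl abort the transfer. */
static size_t guarded_write(const char *buf, size_t nbytes,
			    long http_code, size_t *stored)
{
	(void)buf;
	if (http_code != 206)
		return 0;	/* abort: server ignored the Range header */
	*stored += nbytes;	/* otherwise account for / forward the data */
	return nbytes;
}
```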

Stefano Babic

Oct 11, 2021, 7:22:37 AM10/11/21
to swup...@googlegroups.com, Stefano Babic
Initialize the newly introduced fields in the structure in every use case
(downloader, Hawkbit, general server).

Signed-off-by: Stefano Babic <sba...@denx.de>
---
corelib/downloader.c | 5 ++++-
suricatta/server_general.c | 3 +++
suricatta/server_hawkbit.c | 11 +++++++----
3 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/corelib/downloader.c b/corelib/downloader.c
index 8596694..9eec3f1 100644
--- a/corelib/downloader.c
+++ b/corelib/downloader.c
@@ -106,7 +106,10 @@ static channel_data_t channel_options = {
.retries = DL_DEFAULT_RETRIES,
.low_speed_timeout = DL_LOWSPEED_TIME,
.headers_to_send = NULL,
- .max_download_speed = 0 // Unlimited download speed is default.
+ .max_download_speed = 0, /* Unlimited download speed is default. */
+ .noipc = false,
+ .range = NULL,
+ .headers = NULL,
};

int start_download(const char *fname, int argc, char *argv[])
diff --git a/suricatta/server_general.c b/suricatta/server_general.c
index 194ad20..8716122 100644
--- a/suricatta/server_general.c
+++ b/suricatta/server_general.c
@@ -109,7 +109,10 @@ static channel_data_t channel_data_defaults = {.debug = false,
#ifdef CONFIG_SURICATTA_SSL
.usessl = true,
#endif
+ .noipc = false,
+ .headers = NULL,
.format = CHANNEL_PARSE_NONE,
+ .range = NULL,
.nocheckanswer = true,
.nofollow = true,
.strictssl = true};
diff --git a/suricatta/server_hawkbit.c b/suricatta/server_hawkbit.c
index 1cad5cf..936ceb0 100644
--- a/suricatta/server_hawkbit.c
+++ b/suricatta/server_hawkbit.c
@@ -134,10 +134,13 @@ static channel_data_t channel_data_defaults = {.debug = false,
.nocheckanswer = false,
.nofollow = false,
.strictssl = true,
- .connection_timeout = 0,
- .headers_to_send = NULL,
- .received_headers = NULL,
- .max_download_speed = 0 // No download speed limit is default.
+ .max_download_speed = 0, // No download speed limit is default.
+ .noipc = false,
+ .range = NULL,
+ .connection_timeout = 0,
+ .headers = NULL,
+ .headers_to_send = NULL,
+ .received_headers = NULL
};

static struct timeval server_time;
--
2.25.1

Stefano Babic

Oct 11, 2021, 7:22:39 AM10/11/21
to swup...@googlegroups.com, Stefano Babic
Signed-off-by: Stefano Babic <sba...@denx.de>
---
tools/swupdate-progress.c | 3 +++
tools/swupdate-sysrestart.c | 2 ++
2 files changed, 5 insertions(+)

diff --git a/tools/swupdate-progress.c b/tools/swupdate-progress.c
index 716c8fb..0bb3d1f 100644
--- a/tools/swupdate-progress.c
+++ b/tools/swupdate-progress.c
@@ -290,6 +290,9 @@ int main(int argc, char **argv)
case SOURCE_DOWNLOADER:
fprintf(stdout, "DOWNLOADER\n\n");
break;
+ case SOURCE_CHUNKS_DOWNLOADER:
+ fprintf(stdout, "CHUNKS DOWNLOADER\n\n");
+ break;
case SOURCE_LOCAL:
fprintf(stdout, "LOCAL\n\n");
break;
diff --git a/tools/swupdate-sysrestart.c b/tools/swupdate-sysrestart.c
index 078da4b..221bb45 100644
--- a/tools/swupdate-sysrestart.c
+++ b/tools/swupdate-sysrestart.c
@@ -208,6 +208,8 @@ int main(int argc, char **argv)
case SOURCE_LOCAL:
fprintf(stdout, "LOCAL\n\n");
break;
+ default:
+ break;
}

}
--
2.25.1

Stefano Babic

Oct 11, 2021, 7:22:40 AM10/11/21
to swup...@googlegroups.com, Stefano Babic
The delta handler must download the missing parts of an artifact.
Because it runs with high privileges, the download itself should be done
by a separate process to avoid breaking privilege separation. The process
gets user id and group id from the configuration file, and as a fallback
will run with the same rights as the installer.

The downloader waits for requests, and writes the downloaded
data into the IPC pipe. Only one process is allowed to talk to the
downloader at a time. If more than one user exists in the future,
access to the downloader should be queued (not done at present).
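The message-oriented framing described above can be sketched as follows: the stream is chopped into fixed-size IPC messages carrying a request id and a payload length, so the receiver can reassemble the data on the other side of the pipe. Sizes and names here are illustrative, not the values used in the patch:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define PAYLOAD_MAX 8	/* illustrative; the patch uses a much larger size */

struct msg {
	unsigned int id;	/* request id, echoed in every message */
	size_t len;		/* valid bytes in data[] */
	char data[PAYLOAD_MAX];
};

/* Split an incoming buffer into as many fixed-size messages as needed,
 * returning how many messages were produced. */
static size_t frame_stream(unsigned int id, const char *buf, size_t nbytes,
			   struct msg *out, size_t maxmsgs)
{
	size_t count = 0;
	while (nbytes > 0 && count < maxmsgs) {
		size_t l = nbytes < PAYLOAD_MAX ? nbytes : PAYLOAD_MAX;
		out[count].id = id;
		out[count].len = l;
		memcpy(out[count].data, buf, l);
		buf += l;
		nbytes -= l;
		count++;
	}
	return count;
}
```

In the patch each such message additionally carries a type (headers, data, completed, error) and a CRC over the payload.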

Signed-off-by: Stefano Babic <sba...@denx.de>
---
handlers/delta_downloader.c | 217 ++++++++++++++++++++++++++++++++++++
handlers/delta_handler.h | 37 ++++++
include/delta_process.h | 10 ++
include/swupdate_status.h | 3 +-
4 files changed, 266 insertions(+), 1 deletion(-)
create mode 100644 handlers/delta_downloader.c
create mode 100644 handlers/delta_handler.h
create mode 100644 include/delta_process.h

diff --git a/handlers/delta_downloader.c b/handlers/delta_downloader.c
new file mode 100644
index 0000000..ddb7cdb
--- /dev/null
+++ b/handlers/delta_downloader.c
@@ -0,0 +1,217 @@
+/*
+ * (C) Copyright 2021
+ * Stefano Babic, sba...@denx.de.
+ *
+ * SPDX-License-Identifier: GPL-2.0-only
+ */
+
+/*
+ * This is part of the delta handler. It is started as a separate process
+ * and is told by the main task which chunks should be downloaded.
+ * The main task just sends a RANGE request, and the downloader starts
+ * a curl connection to the server and sends the received data back to the main task.
+ * The IPC is message oriented, and the process adds small metadata
+ * to report whether the download returned errors (from libcurl).
+ * This is used explicitly to retrieve ranges: an answer
+ * different from "Partial Content" (206) is rejected. This avoids that the
+ * whole file is downloaded if the server is not able to work with ranges.
+ */
+
+#include <stdbool.h>
+#include <stdio.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <unistd.h>
+#include <fcntl.h>
+#include <stdlib.h>
+#include <errno.h>
+#include <string.h>
+#include <util.h>
+#include <pctl.h>
+#include <zlib.h>
+#include <channel.h>
+#include <channel_curl.h>
+#include "delta_handler.h"
+#include "delta_process.h"
+
+/*
+ * Structure used in curl callbacks
+ */
+typedef struct {
+ unsigned int id; /* Request id */
+ int writefd; /* IPC file descriptor */
+ range_answer_t *answer;
+} dwl_data_t;
+
+extern channel_op_res_t channel_curl_init(void);
+
+static channel_data_t channel_data_defaults = {
+ .debug = false,
+ .source=SOURCE_CHUNKS_DOWNLOADER,
+ .retries=CHANNEL_DEFAULT_RESUME_TRIES,
+ .retry_sleep=
+ CHANNEL_DEFAULT_RESUME_DELAY,
+ .nocheckanswer=false,
+ .nofollow=false,
+ .connection_timeout=0,
+ .headers_to_send = NULL,
+ .received_headers = NULL
+ };
+
+/*
+ * Data callback: takes the buffer, wraps it with IPC metadata
+ * and sends it to the process that requested the download
+ */
+static size_t wrdata_callback(char *buffer, size_t size, size_t nmemb, void *data)
+{
+ channel_data_t *channel_data = (channel_data_t *)data;
+ dwl_data_t *dwl = (dwl_data_t *)channel_data->user;
+ ssize_t nbytes = nmemb * size;
+ int ret;
+ if (!nmemb) {
+ return 0;
+ }
+ if (!data)
+ return 0;
+
+ if (channel_data->http_response_code != 206) {
+ ERROR("Bytes request not supported by server, returning %ld",
+ channel_data->http_response_code);
+ return 0;
+ }
+ while (nbytes > 0) {
+ range_answer_t *answer = dwl->answer;
+ answer->id = dwl->id;
+ answer->type = RANGE_DATA;
+ answer->len = min(nbytes, RANGE_PAYLOAD_SIZE);
+ memcpy(answer->data, buffer, answer->len);
+ answer->crc = crc32(0, (unsigned char *)answer->data, answer->len);
+ ret = copy_write(&dwl->writefd, answer, sizeof(range_answer_t));
+ if (ret < 0) {
+ ERROR("Error sending IPC data !");
+ return 0;
+ }
+ nbytes -= answer->len;
+ }
+
+ return size * nmemb;
+}
+
+/*
+ * This function just extracts the header and sends it
+ * to the process initiating the transfer.
+ * It envelops the header in the answer struct.
+ * The receiver knows from the metadata whether the payload
+ * contains headers or data.
+ * A single header is encapsulated in one IPC message.
+ */
+static size_t delta_callback_headers(char *buffer, size_t size, size_t nitems, void *data)
+{
+ channel_data_t *channel_data = (channel_data_t *)data;
+ dwl_data_t *dwl = (dwl_data_t *)channel_data->user;
+ int ret;
+
+ range_answer_t *answer = dwl->answer;
+ answer->id = dwl->id;
+ answer->type = RANGE_HEADERS;
+ answer->len = min(size * nitems , RANGE_PAYLOAD_SIZE - 2);
+ memcpy(answer->data, buffer, answer->len);
+ answer->len++;
+ answer->data[answer->len] = '\0';
+
+ ret = write(dwl->writefd, answer, sizeof(range_answer_t));
+ if (ret != sizeof(range_answer_t)) {
+ ERROR("Error sending IPC data !");
+ return 0;
+ }
+
+ return nitems * size;
+}
+
+/*
+ * Process that is spawned by the handler to download the missing chunks.
+ * Downloading should be done in a separate process to not break
+ * privilege separation
+ */
+int start_delta_downloader(const char __attribute__ ((__unused__)) *fname,
+ int __attribute__ ((__unused__)) argc,
+ __attribute__ ((__unused__)) char *argv[])
+{
+ ssize_t ret;
+ range_request_t *req = NULL;
+ channel_op_res_t transfer;
+ range_answer_t *answer;
+ struct dict httpheaders;
+ dwl_data_t priv;
+
+ TRACE("Starting Internal process for downloading chunks");
+ if (channel_curl_init() != CHANNEL_OK) {
+ ERROR("Cannot initialize curl");
+ return SERVER_EINIT;
+ }
+ req = (range_request_t *)malloc(sizeof *req);
+ if (!req) {
+ ERROR("OOM requesting request buffers !");
+ exit (EXIT_FAILURE);
+ }
+
+ answer = (range_answer_t *)malloc(sizeof *answer);
+ if (!answer) {
+ ERROR("OOM requesting answer buffers !");
+ exit (EXIT_FAILURE);
+ }
+
+ channel_data_t channel_data = channel_data_defaults;
+ channel_t *channel = channel_new();
+ if (!channel) {
+ ERROR("Cannot get channel for communication");
+ exit (EXIT_FAILURE);
+ }
+ LIST_INIT(&httpheaders);
+ if (dict_insert_value(&httpheaders, "Accept", "*/*")) {
+ ERROR("Database error setting Accept header");
+ exit (EXIT_FAILURE);
+ }
+
+ for (;;) {
+ ret = read(sw_sockfd, req, sizeof(range_request_t));
+ if (ret < 0) {
+ ERROR("reading from sockfd returns error, aborting...");
+ exit (EXIT_FAILURE);
+ }
+
+ if ((req->urllen + req->rangelen) > ret) {
+ ERROR("Malformed data");
+ continue;
+ }
+ priv.writefd = sw_sockfd;
+ priv.id = req->id;
+ priv.answer = answer;
+ channel_data.url = req->data;
+ channel_data.noipc = true;
+ channel_data.method = CHANNEL_GET;
+ channel_data.content_type = "*";
+ channel_data.headers = delta_callback_headers;
+ channel_data.dwlwrdata = wrdata_callback;
+ channel_data.range = &req->data[req->urllen + 1];
+ channel_data.user = &priv;
+
+ if (channel->open(channel, &channel_data) == CHANNEL_OK) {
+ transfer = channel->get_file(channel, (void *)&channel_data);
+ } else {
+ ERROR("Cannot open channel for communication");
+ transfer = CHANNEL_EINIT;
+ }
+
+ answer->id = req->id;
+ answer->type = (transfer == CHANNEL_OK) ? RANGE_COMPLETED : RANGE_ERROR;
+ answer->len = 0;
+ if (write(sw_sockfd, answer, sizeof(*answer)) != sizeof(*answer)) {
+ ERROR("Answer cannot be sent back, maybe deadlock !!");
+ }
+
+ (void)channel->close(channel);
+ }
+
+ exit (EXIT_SUCCESS);
+}
diff --git a/handlers/delta_handler.h b/handlers/delta_handler.h
new file mode 100644
index 0000000..4a9196b
--- /dev/null
+++ b/handlers/delta_handler.h
@@ -0,0 +1,37 @@
+/*
+ * (C) Copyright 2021
+ * Stefano Babic, sba...@denx.de.
+ *
+ * SPDX-License-Identifier: GPL-2.0-only
+ */
+
+#pragma once
+
+#include <sys/types.h>
+#include <stdint.h>
+
+#define RANGE_PAYLOAD_SIZE (32 * 1024)
+typedef enum {
+ RANGE_GET,
+ RANGE_HEADERS,
+ RANGE_DATA,
+ RANGE_COMPLETED,
+ RANGE_ERROR
+} request_type;
+
+typedef struct {
+ uint32_t id;
+ request_type type;
+ size_t urllen;
+ size_t rangelen;
+ uint32_t crc;
+ char data[RANGE_PAYLOAD_SIZE]; /* URL + RANGE */
+} range_request_t;
+
+typedef struct {
+ uint32_t id;
+ request_type type;
+ size_t len;
+ uint32_t crc;
+ char data[RANGE_PAYLOAD_SIZE]; /* Payload */
+} range_answer_t;
diff --git a/include/delta_process.h b/include/delta_process.h
new file mode 100644
index 0000000..51d1e04
--- /dev/null
+++ b/include/delta_process.h
@@ -0,0 +1,10 @@
+/*
+ * (C) Copyright 2021
+ * Stefano Babic, sba...@denx.de.
+ *
+ * SPDX-License-Identifier: GPL-2.0-only
+ */
+
+#pragma once
+
+extern int start_delta_downloader(const char *fname, int argc, char *argv[]);
diff --git a/include/swupdate_status.h b/include/swupdate_status.h
index 8ac9af1..29eea0f 100644
--- a/include/swupdate_status.h
+++ b/include/swupdate_status.h
@@ -35,7 +35,8 @@ typedef enum {
SOURCE_WEBSERVER,
SOURCE_SURICATTA,
SOURCE_DOWNLOADER,
- SOURCE_LOCAL
+ SOURCE_LOCAL,
+ SOURCE_CHUNKS_DOWNLOADER
} sourcetype;

#ifdef __cplusplus
--
2.25.1

Stefano Babic

Oct 11, 2021, 7:22:42 AM10/11/21
to swup...@googlegroups.com, Stefano Babic
Signed-off-by: Stefano Babic <sba...@denx.de>
---
examples/configuration/swupdate.cfg | 6 ++++++
1 file changed, 6 insertions(+)

diff --git a/examples/configuration/swupdate.cfg b/examples/configuration/swupdate.cfg
index 7d15bb5..2f19ffa 100644
--- a/examples/configuration/swupdate.cfg
+++ b/examples/configuration/swupdate.cfg
@@ -82,6 +82,10 @@ logcolors : {
# complete URL pointing to the SWU image of the update package
# retries : integer
# Number of retries (0=forever)
+# userid : integer
+# userID for Webserver process
+# groupid : integer
+# groupId for Webserver process
# timeout : integer
# it is the number of seconds that can be accepted without
# receiving any packets. If it elapses, the connection is
@@ -95,6 +99,8 @@ download :
retries = 3;
timeout = 1800;
url = "http://example.com/software.swu";
+ userid = 1000;
+ groupid = 1000;
};

#
--
2.25.1

Stefano Babic

Oct 11, 2021, 7:22:43 AM10/11/21
to swup...@googlegroups.com, Stefano Babic
Start the downloader process if delta is activated. The process is
monitored by SWUpdate.

Signed-off-by: Stefano Babic <sba...@denx.de>
---
core/swupdate.c | 12 ++++++++++++
1 file changed, 12 insertions(+)

diff --git a/core/swupdate.c b/core/swupdate.c
index a31aeb1..a83a5bf 100644
--- a/core/swupdate.c
+++ b/core/swupdate.c
@@ -41,6 +41,7 @@
#include "network_ipc.h"
#include "sslapi.h"
#include "suricatta/suricatta.h"
+#include "delta_process.h"
#include "progress.h"
#include "parselib.h"
#include "swupdate_settings.h"
@@ -843,6 +844,17 @@ int main(int argc, char **argv)
freeargs(dwlav);
}
#endif
+#if defined(CONFIG_DELTA)
+ {
+ uid_t uid;
+ gid_t gid;
+ read_settings_user_id(&handle, "download", &uid, &gid);
+ start_subprocess(SOURCE_CHUNKS_DOWNLOADER, "chunks_downloader", uid, gid,
+ cfgfname, ac, av,
+ start_delta_downloader);
+ }
+#endif
+

/*
* Start all processes added in the config file
--
2.25.1

Stefano Babic

Oct 11, 2021, 7:22:47 AM10/11/21
to swup...@googlegroups.com, Stefano Babic
Explain attributes and properties for the delta update handler.

Signed-off-by: Stefano Babic <sba...@denx.de>
---
doc/source/handlers.rst | 80 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 80 insertions(+)

diff --git a/doc/source/handlers.rst b/doc/source/handlers.rst
index 8689585..54a9b6c 100644
--- a/doc/source/handlers.rst
+++ b/doc/source/handlers.rst
@@ -912,3 +912,83 @@ found on the device. It is a partition handler and it runs before any image is i
"18e12df1-d8e1-4283-8727-37727eb4261d"];
}
});
+
+Delta Update Handler
+--------------------
+
+The handler processes a ZCHUNK header and finds which chunks should be downloaded
+after generating the corresponding header of the running artifact to be updated.
+The handler uses just a couple of attributes from the main setup, and gets more information
+from the properties. The attributes are then passed to a secondary handler that
+will install the artifact after the delta handler has assembled it.
+The handler requires ZSTD because this is the compression format used by Zchunk.
+
+The SWU must just contain the ZCK's header, while the ZCK file is put as it is on the server.
+The utilities in Zchunk project are used to build the zck file.
+
+::
+
+ zck -u -h sha256 <artifact>
+
+This generates a file <artifact>.zck. To extract the header, use the `zck_read_header`
+utility:
+
+::
+
+ HSIZE=`zck_read_header -v <artifact>.zck | grep "Header size" | cut -d':' -f2`
+ dd if=<artifact>.zck of=<artifact>.header bs=1 count=$((HSIZE))
+
+The resulting header file must be packed inside the SWU.
+
+.. table:: Properties for delta update handler
+
+ +-------------+-------------+----------------------------------------------------+
+ | Name | Type | Description |
+ +=============+=============+====================================================+
+ | url | string | This is the URL from where the handler will |
+ | | | download the missing chunks. |
+ | | | The server must support byte range header. |
+ +-------------+-------------+----------------------------------------------------+
+ | source | string | name of the device or file to be used for |
+ | | | the comparison. |
+ +-------------+-------------+----------------------------------------------------+
+ | chain | string | this is the name (type) of the handler |
+ | | | that is called after reassembling |
+ | | | the artifact. |
+ +-------------+-------------+----------------------------------------------------+
+ | max-ranges | string | Max number of ranges that a server can |
+ | | | accept. Default value (150) should be ok |
+ | | | for most servers. |
+ +-------------+-------------+----------------------------------------------------+
+ | zckloglevel | string | this sets the log level of the zcklib. |
+ | | | Logs are intercepted by SWupdate and |
+ | | | appear in SWUpdate's log. |
+ | | | Value is one of debug,info |
+ | | | warn,error,none |
+ +-------------+-------------+----------------------------------------------------+
+ | debug-chunks| string | "true", default is not set. |
+ | | | This activates more verbose debugging |
+ | | | output and the list of all chunks is |
+ | | | printed, and it reports if a chunk |
+ | | | is downloaded or copied from the source. |
+ +-------------+-------------+----------------------------------------------------+
+
+
+Example:
+
+::
+
+ {
+ filename = "software.header";
+ type = "delta";
+
+ path = "testimage.raw";
+ properties: {
+ url = "http://examples.com/software.zck";
+ chain = "rawfile";
+ source = "/dev/mmcblk0p3";
+ zckloglevel = "error";
+ /* debug-chunks = "true"; */
+ };
+ }
+
--
2.25.1

Stefano Babic

Oct 11, 2021, 7:22:48 AM10/11/21
to swup...@googlegroups.com, Stefano Babic
Signed-off-by: Stefano Babic <sba...@denx.de>
---
doc/source/roadmap.rst | 19 -------------------
1 file changed, 19 deletions(-)

diff --git a/doc/source/roadmap.rst b/doc/source/roadmap.rst
index 9f25b1a..b507f33 100644
--- a/doc/source/roadmap.rst
+++ b/doc/source/roadmap.rst
@@ -29,25 +29,6 @@ To reduce bandwidth or for big images, a stronger compressor could help.
Adding a new compressor must be careful done because it changes the core of
handling an image.

-More efficient delta updates
-============================
-
-A whole update could be very traffic intensive. Specially in case
-of low-bandwidth connections, it could be interesting to introduce
-a way for delta binary updates.
-There was already several discussions on the Mailing List about
-this. If introducing binary delta is high desired, on the other side
-it is strictly required to not reduce the reliability of the update
-and the feature should not introduce leaks and make the system
-more vulnerable. It is accepted that different technologies could be added,
-each of them solves a specific use case for a delta update.
-
-SWUpdate is already able to perform delta updates based on librsync library. This is
-currently a good compromise to reduce complexity. Anyway, this helps in case of
-small changes, and it is not a general solution between two generic releases.
-A general approach could be to integrate SWUpdate with a storage to allow one
-a delta upgrade from any release.
-
Support for OpenWRT
===================

--
2.25.1

Stefano Babic

Oct 11, 2021, 7:22:48 AM10/11/21
to swup...@googlegroups.com, Stefano Babic
Large images or low bandwidth require reducing the size of the
downloaded data. This implements a delta update using the zchunk project
as a basis. The full documentation and design specification are in doc.
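The core decision the handler takes can be sketched independently of zchunk: compare the chunk digests of the target (from the ZCK header shipped in the SWU) with those of the image already on the device, reuse matching chunks locally, and mark only the rest for download. A minimal illustration over plain digest strings (the real handler walks zchunk's chunk lists and uses SHA-256 digests):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* For each target chunk, set download[i] = true if no local chunk
 * has the same digest; return how many chunks must be fetched. */
static size_t mark_downloads(const char *target[], size_t ntarget,
			     const char *local[], size_t nlocal,
			     bool download[])
{
	size_t ndownload = 0;
	for (size_t i = 0; i < ntarget; i++) {
		bool found = false;
		for (size_t j = 0; j < nlocal; j++) {
			if (strcmp(target[i], local[j]) == 0) {
				found = true;	/* chunk can be copied locally */
				break;
			}
		}
		download[i] = !found;
		if (!found)
			ndownload++;
	}
	return ndownload;
}
```

The chunks marked for download are then requested from the server as HTTP byte ranges.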

Signed-off-by: Stefano Babic <sba...@denx.de>
---
Kconfig | 4 +
Makefile.deps | 4 +
Makefile.flags | 5 +
handlers/Config.in | 13 +
handlers/Makefile | 1 +
handlers/delta_handler.c | 1071 ++++++++++++++++++++++++++++++++++++++
6 files changed, 1098 insertions(+)
create mode 100644 handlers/delta_handler.c

diff --git a/Kconfig b/Kconfig
index cb86d55..e28b8a6 100644
--- a/Kconfig
+++ b/Kconfig
@@ -117,6 +117,10 @@ config HAVE_URIPARSER
bool
option env="HAVE_URIPARSER"

+config HAVE_ZCK
+ bool
+ option env="HAVE_ZCK"
+
menu "Swupdate Settings"

menu "General Configuration"
diff --git a/Makefile.deps b/Makefile.deps
index 3f4cbf9..58ed373 100644
--- a/Makefile.deps
+++ b/Makefile.deps
@@ -109,3 +109,7 @@ endif
ifeq ($(HAVE_URIPARSER),)
export HAVE_URIPARSER = y
endif
+
+ifeq ($(HAVE_ZCK),)
+export HAVE_ZCK = y
+endif
diff --git a/Makefile.flags b/Makefile.flags
index e549b46..019ef77 100644
--- a/Makefile.flags
+++ b/Makefile.flags
@@ -226,6 +226,11 @@ ifneq ($(CONFIG_SWUFORWARDER_HANDLER),)
LDLIBS += websockets uriparser
endif

+# Delta Update
+ifneq ($(CONFIG_DELTA),)
+LDLIBS += zck
+endif
+
# If a flat binary should be built, CFLAGS_swupdate="-elf2flt"
# env var should be set for make invocation.
# Here we check whether CFLAGS_swupdate indeed contains that flag.
diff --git a/handlers/Config.in b/handlers/Config.in
index ad5dfdd..efb0e8d 100644
--- a/handlers/Config.in
+++ b/handlers/Config.in
@@ -60,6 +60,19 @@ config CFIHAMMING1

You do not need this if you do not have an OMAP SoC.

+config DELTA
+ bool "delta"
+ depends on HAVE_LIBCURL
+ depends on HAVE_URIPARSER
+ depends on HAVE_ZSTD
+ depends on HAVE_ZCK
+ select CHANNEL_CURL
+ default n
+ help
+ Handler to enable delta images. The handler computes the differences,
+ downloads the missing parts, and passes the resulting image to the
+ next handler.
+
config DISKPART
bool "diskpart"
depends on HAVE_LIBFDISK
diff --git a/handlers/Makefile b/handlers/Makefile
index 534259c..9cca6a6 100644
--- a/handlers/Makefile
+++ b/handlers/Makefile
@@ -11,6 +11,7 @@ obj-y += dummy_handler.o
obj-$(CONFIG_ARCHIVE) += archive_handler.o
obj-$(CONFIG_BOOTLOADERHANDLER) += boot_handler.o
obj-$(CONFIG_CFI) += flash_handler.o
+obj-$(CONFIG_DELTA) += delta_handler.o delta_downloader.o
obj-$(CONFIG_DISKFORMAT_HANDLER) += diskformat_handler.o
obj-$(CONFIG_DISKPART) += diskpart_handler.o
obj-$(CONFIG_UNIQUEUUID) += uniqueuuid_handler.o
diff --git a/handlers/delta_handler.c b/handlers/delta_handler.c
new file mode 100644
index 0000000..e0202ab
--- /dev/null
+++ b/handlers/delta_handler.c
@@ -0,0 +1,1071 @@
+/*
+ * (C) Copyright 2021
+ * Stefano Babic, sba...@denx.de.
+ *
+ * SPDX-License-Identifier: GPL-2.0-only
+ */
+
+/*
+ * This handler computes the difference between an artifact
+ * and an image on the device, and downloads the missing chunks.
+ * The resulting image is then passed to a chained handler for
+ * installation.
+ * The handler uses its own properties and shares the same
+ * img struct with the chained handler. All other fields
+ * in sw-description are reserved for the chained handler, which
+ * works as if there is no delta handler in between.
+ */
+
+#include <stdbool.h>
+#include <stdio.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <sys/wait.h>
+#include <unistd.h>
+#include <fcntl.h>
+#include <ctype.h>
+#include <stdlib.h>
+#include <errno.h>
+#include <string.h>
+#include <swupdate.h>
+#include <handler.h>
+#include <signal.h>
+#include <zck.h>
+#include <zlib.h>
+#include <util.h>
+#include <pctl.h>
+#include <pthread.h>
+#include "delta_handler.h"
+#include "multipart_parser.h"
+#include "installer.h"
+
+#define FIFO_FILE_NAME "deltafifo"
+#define DEFAULT_MAX_RANGES 150 /* Apache has default = 200 */
+
+const char *handlername = "delta";
+void delta_handler(void);
+
+/*
+ * Structure passed to callbacks
+ */
+/*
+ * state machine when answer from
+ * server is parsed.
+ */
+typedef enum {
+ NOTRUNNING,
+ WAITING_FOR_HEADERS,
+ WAITING_FOR_BOUNDARY,
+ WAITING_FOR_FIRST_DATA,
+ WAITING_FOR_DATA,
+ END_TRANSFER
+} dwl_state_t;
+
+/*
+ * There are two kind of answer from an HTTP Range request:
+ * - if just one range is selected, the server sends a
+ * content-range header with the delivered bytes as
+ * <start>-<end>/<totalbytes>
+ * - if multiple ranges are requested, the server sends
+ * a multipart answer and sends a header with
+ * Content-Type: multipart/byteranges; boundary=<boundary>
+ */
+typedef enum {
+ NONE_RANGE, /* Range not found in Headers */
+ SINGLE_RANGE,
+ MULTIPART_RANGE
+} range_type_t;
+
+struct dwlchunk {
+ unsigned char *buf;
+ size_t chunksize;
+ size_t nbytes;
+ bool completed;
+};
+
+struct hnd_priv {
+ /* Attributes retrieved from sw-description */
+ char *url; /* URL to get full ZCK file */
+ char *srcdev; /* device as source for comparison */
+ char *chainhandler; /* Handler to pass the decompressed image */
+ zck_log_type zckloglevel; /* if found, set log level for ZCK to this */
+ unsigned long max_ranges; /* Max allowed ranges (configured via sw-description) */
+ /* Data to be transferred to chain handler */
+ struct img_type img;
+ char fifo[80];
+ int fdout;
+ int fdsrc;
+ zckCtx *tgt;
+ /* Structures for downloading chunks */
+ bool dwlrunning;
+ range_type_t range_type; /* Single or multipart */
+ char boundary[SWUPDATE_GENERAL_STRING_SIZE];
+ int pipetodwl; /* pipe to downloader process */
+ dwl_state_t dwlstate; /* for internal state machine */
+ range_answer_t *answer; /* data from downloader */
+ uint32_t reqid; /* Current request id to downloader */
+ struct dwlchunk current; /* Structure to collect data for working chunk */
+ zckChunk *chunk; /* Current chunk to be processed */
+ size_t rangelen; /* Value from Content-range header */
+ size_t rangestart; /* Value from Content-range header */
+ bool content_range_received; /* Flag to indicate that last header is content-range */
+ bool error_in_parser; /* Flag to report if an error occurred */
+ multipart_parser *parser; /* pointer to parser, allocated at any download */
+ /* Some nice statistics */
+ size_t bytes_to_be_reused;
+ size_t bytes_to_download;
+ size_t totaldwlbytes; /* bytes downloaded, including headers */
+ /* flags to improve logging */
+ bool debugchunks;
+};
+
+static bool copy_existing_chunks(zckChunk **dstChunk, struct hnd_priv *priv);
+
+/*
+ * Callbacks for multipart parsing.
+ */
+static int network_process_data(multipart_parser* p, const char *at, size_t length)
+{
+ struct hnd_priv *priv = (struct hnd_priv *)multipart_parser_get_data(p);
+ size_t nbytes = length;
+ const char *bufSrc = at;
+ int ret;
+
+ /* Stop if previous error occurred */
+ if (priv->error_in_parser)
+ return -EFAULT;
+
+ while (nbytes) {
+ size_t to_be_filled = priv->current.chunksize - priv->current.nbytes;
+ size_t tobecopied = min(nbytes, to_be_filled);
+ memcpy(&priv->current.buf[priv->current.nbytes], bufSrc, tobecopied);
+ priv->current.nbytes += tobecopied;
+ nbytes -= tobecopied;
+ bufSrc += tobecopied;
+ /*
+ * Chunk complete, it must be copied
+ */
+ if (priv->current.nbytes == priv->current.chunksize) {
+ char *sha = zck_get_chunk_digest(priv->chunk);
+ unsigned char hash[SHA256_HASH_LENGTH]; /* SHA-256 is 32 bytes */
+ ascii_to_hash(hash, sha);
+ free(sha);
+
+ if (priv->debugchunks)
+ TRACE("Copying chunk %ld from NETWORK, size %ld",
+ zck_get_chunk_number(priv->chunk),
+ priv->current.chunksize);
+ if (priv->current.chunksize != 0) {
+ ret = copybuffer(priv->current.buf,
+ &priv->fdout,
+ priv->current.chunksize,
+ COMPRESSED_ZSTD,
+ hash,
+ 0,
+ NULL,
+ NULL);
+ } else
+ ret = 0; /* skipping, nothing to be copied */
+ /* Buffer can be discarded */
+ free(priv->current.buf);
+ priv->current.buf = NULL;
+ /*
+ * if an error occurred, stop
+ */
+ if (ret) {
+ ERROR("copybuffer failed !");
+ priv->error_in_parser = true;
+ return -EFAULT;
+ }
+ /*
+ * Set the chunk as completed and switch to next one
+ */
+ zck_set_chunk_valid(priv->chunk, 1);
+ priv->chunk = zck_get_next_chunk(priv->chunk);
+ if (!priv->chunk && nbytes > 0) {
+ WARN("Still data in range, but no chunks anymore !");
+ close(priv->fdout);
+ }
+ if (!priv->chunk)
+ break;
+
+ size_t current_chunk_size = zck_get_chunk_comp_size(priv->chunk);
+ priv->current.buf = (unsigned char *)malloc(current_chunk_size);
+ if (!priv->current.buf) {
+ ERROR("OOM allocating new chunk %lu!", current_chunk_size);
+ priv->error_in_parser = true;
+ return -ENOMEM;
+ }
+
+ priv->current.nbytes = 0;
+ priv->current.chunksize = current_chunk_size;
+ }
+ }
+ return 0;
+}
+
+/*
+ * This is called after headers are processed. Allocate a
+ * buffer big enough to contain the next chunk to be processed
+ */
+static int multipart_data_complete(multipart_parser* p)
+{
+ struct hnd_priv *priv = (struct hnd_priv *)multipart_parser_get_data(p);
+ size_t current_chunk_size;
+
+ current_chunk_size = zck_get_chunk_comp_size(priv->chunk);
+ priv->current.buf = (unsigned char *)malloc(current_chunk_size);
+ priv->current.nbytes = 0;
+ priv->current.chunksize = current_chunk_size;
+ /*
+ * Buffer check should be done in each callback
+ */
+ if (!priv->current.buf) {
+ ERROR("OOM allocating new chunk !");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+/*
+ * This is called after a range is completed and before next range
+ * is processed. Between two ranges, chunks are taken from SRC.
+ * Check which chunks can be copied and copy them until a chunk must
+ * be retrieved from the network
+ */
+static int multipart_data_end(multipart_parser* p)
+{
+ struct hnd_priv *priv = (struct hnd_priv *)multipart_parser_get_data(p);
+ free(priv->current.buf);
+ priv->current.buf = NULL;
+ priv->content_range_received = true;
+ copy_existing_chunks(&priv->chunk, priv);
+ return 0;
+}
+
+/*
+ * Set multipart parser callbacks.
+ * No need at the moment to process multipart headers
+ */
+static multipart_parser_settings multipart_callbacks = {
+ .on_part_data = network_process_data,
+ .on_headers_complete = multipart_data_complete,
+ .on_part_data_end = multipart_data_end
+};
+
+/*
+ * Walk all chunks to compute the total size and the reuse / download
+ * statistics; if debug is on, show for each chunk whether it
+ * can be copied from the current software or must be downloaded
+ */
+static size_t get_total_size(zckCtx *zck, struct hnd_priv *priv) {
+ zckChunk *iter = zck_get_first_chunk(zck);
+ size_t pos = 0;
+ priv->bytes_to_be_reused = 0;
+ priv->bytes_to_download = 0;
+ if (priv->debugchunks)
+ TRACE("Index Typ HASH %*c START(chunk) SIZE(uncomp) Pos(Device) SIZE(comp)",
+ (((int)zck_get_chunk_digest_size(zck) * 2) - (int)strlen("HASH")), ' '
+ );
+ while (iter) {
+ if (priv->debugchunks)
+ TRACE("%12lu %s %s %12lu %12lu %12lu %12lu",
+ zck_get_chunk_number(iter),
+ zck_get_chunk_valid(iter) ? "SRC" : "DST",
+ zck_get_chunk_digest_uncompressed(iter),
+ zck_get_chunk_start(iter),
+ zck_get_chunk_size(iter),
+ pos,
+ zck_get_chunk_comp_size(iter));
+
+ pos += zck_get_chunk_size(iter);
+ if (!zck_get_chunk_valid(iter)) {
+ priv->bytes_to_download += zck_get_chunk_comp_size(iter);
+ } else {
+ priv->bytes_to_be_reused += zck_get_chunk_size(iter);
+ }
+ iter = zck_get_next_chunk(iter);
+ }
+
+ INFO("Total bytes to be reused : %12lu\n", priv->bytes_to_be_reused);
+ INFO("Total bytes to be downloaded : %12lu\n", priv->bytes_to_download);
+
+ return pos;
+}
+
+/*
+ * Get attributes from sw-description
+ */
+static int delta_retrieve_attributes(struct img_type *img, struct hnd_priv *priv) {
+ if (!priv)
+ return -EINVAL;
+
+ priv->zckloglevel = ZCK_LOG_DDEBUG;
+ priv->url = dict_get_value(&img->properties, "url");
+ priv->srcdev = dict_get_value(&img->properties, "source");
+ priv->chainhandler = dict_get_value(&img->properties, "chain");
+ if (!priv->url || !priv->srcdev ||
+ !priv->chainhandler || !strcmp(priv->chainhandler, handlername)) {
+ ERROR("Wrong Attributes in sw-description: url=%s source=%s, handler=%s",
+ priv->url, priv->srcdev, priv->chainhandler);
+ free(priv->url);
+ free(priv->srcdev);
+ free(priv->chainhandler);
+ return -EINVAL;
+ }
+ errno = 0;
+ if (dict_get_value(&img->properties, "max-ranges"))
+ priv->max_ranges = strtoul(dict_get_value(&img->properties, "max-ranges"), NULL, 10);
+ if (errno || priv->max_ranges == 0)
+ priv->max_ranges = DEFAULT_MAX_RANGES;
+
+ char *zckloglevel = dict_get_value(&img->properties, "zckloglevel");
+ if (!zckloglevel)
+ return 0;
+ if (!strcmp(zckloglevel, "debug"))
+ priv->zckloglevel = ZCK_LOG_DEBUG;
+ else if (!strcmp(zckloglevel, "info"))
+ priv->zckloglevel = ZCK_LOG_INFO;
+ else if (!strcmp(zckloglevel, "warn"))
+ priv->zckloglevel = ZCK_LOG_WARNING;
+ else if (!strcmp(zckloglevel, "error"))
+ priv->zckloglevel = ZCK_LOG_ERROR;
+ else if (!strcmp(zckloglevel, "none"))
+ priv->zckloglevel = ZCK_LOG_NONE;
+
+ char *debug = dict_get_value(&img->properties, "debug-chunks");
+ if (debug) {
+ priv->debugchunks = true;
+ }
+
+ return 0;
+}
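[Editor's note] For reference, the properties parsed above map to a sw-description entry along these lines (device paths, URL, and values are illustrative placeholders; `chain` must name a different, real handler such as `raw`):

```
images: (
	{
		filename = "software.zck.header";
		type = "delta";
		device = "/dev/mmcblk0p2";
		properties: {
			url = "http://examplehost/software.zck";
			source = "/dev/mmcblk0p1";
			chain = "raw";
			max-ranges = "150";
			zckloglevel = "error";
			/* debug-chunks = "true"; */
		};
	}
);
```

`url`, `source` and `chain` are mandatory per the checks above; `max-ranges` falls back to DEFAULT_MAX_RANGES when absent or invalid.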
+
+/*
+ * Prepare a request for the chunk downloader process
+ * It fills a range_request structure with data for the
+ * connection
+ */
+
+static range_request_t *prepare_range_request(const char *url, const char *range, size_t *len)
+{
+ range_request_t *req = NULL;
+
+ if (!url || !len)
+ return NULL;
+
+ if (strlen(range) > RANGE_PAYLOAD_SIZE - 1) {
+ ERROR("RANGE request too long !");
+ return NULL;
+ }
+ req = (range_request_t *)calloc(1, sizeof(*req));
+ if (req) {
+ req->id = rand();
+ req->type = RANGE_GET;
+ req->urllen = strlen(url);
+ req->rangelen = strlen(range);
+ strcpy(req->data, url);
+ strcpy(&req->data[strlen(url) + 1], range);
+ } else {
+ ERROR("OOM preparing internal IPC !");
+ return NULL;
+ }
+
+ return req;
+}
+
+/*
+ * ZCK and SWUpdate have different levels for logging
+ * so map them
+ */
+static zck_log_type map_swupdate_to_zck_loglevel(LOGLEVEL level) {
+
+ switch (level) {
+ case OFF:
+ return ZCK_LOG_NONE;
+ case ERRORLEVEL:
+ return ZCK_LOG_ERROR;
+ case WARNLEVEL:
+ return ZCK_LOG_WARNING;
+ case INFOLEVEL:
+ return ZCK_LOG_INFO;
+ case TRACELEVEL:
+ return ZCK_LOG_DEBUG;
+ case DEBUGLEVEL:
+ return ZCK_LOG_DDEBUG;
+ }
+ return ZCK_LOG_ERROR;
+}
+
+static LOGLEVEL map_zck_to_swupdate_loglevel(zck_log_type lt) {
+ switch (lt) {
+ case ZCK_LOG_NONE:
+ return OFF;
+ case ZCK_LOG_ERROR:
+ return ERRORLEVEL;
+ case ZCK_LOG_WARNING:
+ return WARNLEVEL;
+ case ZCK_LOG_INFO:
+ return INFOLEVEL;
+ case ZCK_LOG_DEBUG:
+ return TRACELEVEL;
+ case ZCK_LOG_DDEBUG:
+ return DEBUGLEVEL;
+ }
+ return loglevel;
+}
+
+/*
+ * Callback for ZCK to send ZCK logs to SWUpdate instead of writing
+ * into a file
+ */
+static void zck_log_toswupdate(const char *function, zck_log_type lt,
+ const char *format, va_list args) {
+ LOGLEVEL l = map_zck_to_swupdate_loglevel(lt);
+ char buf[NOTIFY_BUF_SIZE];
+ int pos;
+
+ pos = snprintf(buf, NOTIFY_BUF_SIZE - 1, "(%s) ", function);
+ vsnprintf(buf + pos, NOTIFY_BUF_SIZE - 1 - pos, format, args);
+
+ switch(l) {
+ case ERRORLEVEL:
+ ERROR("%s", buf);
+ return;
+ case WARNLEVEL:
+ WARN("%s", buf);
+ return;
+ case INFOLEVEL:
+ INFO("%s", buf);
+ return;
+ case TRACELEVEL:
+ TRACE("%s", buf);
+ return;
+ case DEBUGLEVEL:
+ TRACE("%s", buf);
+ return;
+ default:
+ return;
+ }
+}
+
+/*
+ * Create a zck Index from a file
+ */
+static bool create_zckindex(zckCtx *zck, int fd)
+{
+ const size_t bufsize = 16384;
+ char *buf = malloc(bufsize);
+ ssize_t n;
+ int ret;
+
+ if (!buf) {
+ ERROR("OOM creating temporary buffer");
+ return false;
+ }
+ while ((n = read(fd, buf, bufsize)) > 0) {
+ ret = zck_write(zck, buf, n);
+ if (ret < 0) {
+ ERROR("ZCK returns %s", zck_get_error(zck));
+ free(buf);
+ return false;
+ }
+ }
+
+ free(buf);
+
+ return true;
+}
+
+/*
+ * Thread to start the chained handler.
+ * It receives from the FIFO the reassembled stream with
+ * the artifact and passes it to the handler responsible for the install.
+ */
+static void *chain_handler_thread(void *data)
+{
+ struct hnd_priv *priv = (struct hnd_priv *)data;
+ struct img_type *img = &priv->img;
+ unsigned long ret;
+
+ thread_ready();
+ /*
+ * Try sometimes to open FIFO
+ */
+ if (!priv->fifo) {
+ ERROR("Named FIFO not set, thread exiting !");
+ return (void *)1;
+ }
+ for (int cnt = 5; cnt > 0; cnt--) {
+ img->fdin = open(priv->fifo, O_RDONLY);
+ if (img->fdin > 0)
+ break;
+ sleep(1);
+ }
+ if (img->fdin < 0) {
+ ERROR("Named FIFO cannot be opened, exiting");
+ return (void *)1;
+ }
+
+ img->install_directly = true;
+ ret = install_single_image(img, false);
+
+ if (ret) {
+ ERROR("Chain handler return with Error");
+ close(img->fdin);
+ }
+
+ return (void *)ret;
+}
+
+/*
+ * Chunks must be retrieved from the network: prepare and send
+ * a request to the downloader
+ */
+static bool trigger_download(struct hnd_priv *priv)
+{
+ range_request_t *req = NULL;
+ zckCtx *tgt = priv->tgt;
+ size_t reqlen;
+ zckRange *Range;
+ bool status = true;
+
+
+ priv->boundary[0] = '\0';
+
+ Range = zck_get_missing_range(tgt, priv->max_ranges);
+
+ req = prepare_range_request(priv->url, zck_get_range_char(tgt, Range), &reqlen);
+ if (!req) {
+ ERROR("Internal chunk request cannot be prepared");
+ free(Range);
+ return false;
+ }
+
+ /* Store request id to compare later */
+ priv->reqid = req->id;
+ priv->range_type = NONE_RANGE;
+
+ if (write(priv->pipetodwl, req, sizeof(*req)) != sizeof(*req)) {
+ ERROR("Cannot write all bytes to pipe");
+ status = false;
+ }
+
+ free(req);
+ free(Range);
+ priv->dwlrunning = true;
+ return status;
+}
+
+/*
+ * drop all temporary data collected during download
+ */
+static void dwl_cleanup(struct hnd_priv *priv)
+{
+ multipart_parser_free(priv->parser);
+ priv->parser = NULL;
+}
+
+static bool read_and_validate_package(struct hnd_priv *priv)
+{
+ ssize_t nbytes = sizeof(range_answer_t);
+ range_answer_t *answer;
+ int count = -1;
+ uint32_t crc;
+
+ do {
+ count++;
+ if (count == 1)
+ DEBUG("id does not match in IPC, skipping..");
+
+ char *buf = (char *)priv->answer;
+ do {
+ ssize_t ret;
+ ret = read(priv->pipetodwl, buf, sizeof(range_answer_t));
+ if (ret < 0)
+ return false;
+ buf += ret;
+ nbytes -= ret;
+ } while (nbytes > 0);
+ answer = priv->answer;
+
+ if (nbytes < 0)
+ return false;
+ } while (answer->id != priv->reqid);
+
+
+ if (answer->type == RANGE_ERROR) {
+ ERROR("Transfer was unsuccessful, aborting...");
+ priv->dwlrunning = false;
+ dwl_cleanup(priv);
+ return false;
+ }
+
+ if (answer->type == RANGE_DATA) {
+ crc = crc32(0, (unsigned char *)answer->data, answer->len);
+ if (crc != answer->crc) {
+ ERROR("Corrupted package received !");
+ exit(1);
+ return false;
+ }
+ }
+
+ priv->totaldwlbytes += answer->len;
+
+ return true;
+}
+
+/*
+ * This is called to parse the HTTP headers.
+ * It searches for content-range and selects a SINGLE or
+ * MULTIPART answer.
+ */
+static bool parse_headers(struct hnd_priv *priv)
+{
+ int nconv;
+ char *header = NULL, *value = NULL, *boundary_string = NULL;
+ char **pair;
+ int cnt;
+
+ range_answer_t *answer = priv->answer;
+ answer->data[answer->len] = '\0';
+ /* Convert to lower case to make comparison easier */
+ string_tolower(answer->data);
+
+ /* Check for multipart */
+ nconv = sscanf(answer->data, "%ms %ms %ms", &header, &value, &boundary_string);
+
+ if (nconv == 3) {
+ if (!strncmp(header, "content-type", strlen("content-type")) &&
+ !strncmp(boundary_string, "boundary", strlen("boundary"))) {
+ pair = string_split(boundary_string, '=');
+ cnt = count_string_array((const char **)pair);
+ if (cnt == 2) {
+ memset(priv->boundary, '-', 2);
+ strlcpy(&priv->boundary[2], pair[1], sizeof(priv->boundary) - 2);
+ priv->range_type = MULTIPART_RANGE;
+ }
+ free(pair);
+ }
+
+ if (!strncmp(header, "content-range", strlen("content-range")) &&
+ !strncmp(value, "bytes", strlen("bytes"))) {
+ pair = string_split(boundary_string, '-');
+ priv->range_type = SINGLE_RANGE;
+ size_t start = strtoul(pair[0], NULL, 10);
+ size_t end = strtoul(pair[1], NULL, 10);
+ free(pair);
+ priv->rangestart = start;
+ priv->rangelen = end - start;
+ }
+ free(header);
+ free(value);
+ free(boundary_string);
+ } else if (nconv == 1) {
+ free(header);
+ } else if (nconv == 2) {
+ free(header);
+ free(value);
+ }
+
+ return true;
+}
+
+static bool search_boundary_in_body(struct hnd_priv *priv)
+{
+ char *s;
+ range_answer_t *answer = priv->answer;
+ size_t i;
+
+ if (priv->range_type == NONE_RANGE) {
+ ERROR("Malformed body, no boundary found");
+ return false;
+ }
+
+ if (priv->range_type == SINGLE_RANGE) {
+ /* Body contains just one range, it is data, do nothing */
+ return true;
+ }
+ s = answer->data;
+ for (i = 0; i < answer->len; i++, s++) {
+ if (!strncmp(s, priv->boundary, strlen(priv->boundary))) {
+ DEBUG("Boundary found in body");
+ /* Reset buffer to start from here */
+ if (i != 0)
+ memcpy(answer->data, s, answer->len - i);
+ answer->len -= i;
+ return true;
+ }
+ }
+
+ return false;
+}
+
+static bool fill_buffers_list(struct hnd_priv *priv)
+{
+ range_answer_t *answer = priv->answer;
+ /*
+ * If there is a single range, all chunks
+ * are consecutive. Same processing can be done
+ * as with multipart and data is received.
+ */
+ if (priv->range_type == SINGLE_RANGE) {
+ return network_process_data(priv->parser, answer->data, answer->len) == 0;
+ }
+
+ multipart_parser_execute(priv->parser, answer->data, answer->len);
+
+ return true;
+}
+
+/*
+ * copy_network_chunks() retrieves chunks from the network and triggers
+ * a transfer if none is running.
+ * It collects data in a buffer until the chunk is fully
+ * downloaded, and then copies it to the pipe feeding the installer
+ * thread that starts the chained handler.
+ */
+static bool copy_network_chunks(zckChunk **dstChunk, struct hnd_priv *priv)
+{
+ range_answer_t *answer;
+
+ priv->chunk = *dstChunk;
+ priv->error_in_parser = false;
+ while (1) {
+ switch (priv->dwlstate) {
+ case NOTRUNNING:
+ if (!trigger_download(priv))
+ return false;
+ priv->dwlstate = WAITING_FOR_HEADERS;
+ break;
+ case WAITING_FOR_HEADERS:
+ if (!read_and_validate_package(priv))
+ return false;
+ answer = priv->answer;
+ if (answer->type == RANGE_HEADERS) {
+ if (!parse_headers(priv)) {
+ return false;
+ }
+ }
+ if (answer->type == RANGE_DATA) {
+ priv->dwlstate = WAITING_FOR_BOUNDARY;
+ }
+ break;
+ case WAITING_FOR_BOUNDARY:
+ /*
+ * No need to read data here because the package
+ * was already received as the last step in WAITING_FOR_HEADERS
+ */
+ if (!search_boundary_in_body(priv))
+ return false;
+ priv->parser = multipart_parser_init(priv->boundary,
+ &multipart_callbacks);
+ multipart_parser_set_data(priv->parser, priv);
+ priv->dwlstate = WAITING_FOR_FIRST_DATA;
+ break;
+ case WAITING_FOR_FIRST_DATA:
+ if (!fill_buffers_list(priv))
+ return false;
+ priv->dwlstate = WAITING_FOR_DATA;
+ break;
+ case WAITING_FOR_DATA:
+ if (!read_and_validate_package(priv))
+ return false;
+ answer = priv->answer;
+ if (answer->type == RANGE_COMPLETED) {
+ priv->dwlstate = END_TRANSFER;
+ } else if (!fill_buffers_list(priv))
+ return false;
+ break;
+ case END_TRANSFER:
+ dwl_cleanup(priv);
+ priv->dwlstate = NOTRUNNING;
+ *dstChunk = priv->chunk;
+ return !priv->error_in_parser;
+ }
+ }
+
+ return !priv->error_in_parser;
+}
+
+/*
+ * This writes a chunk from an existing copy on the source path
+ * The chunk to be copied is retrieved via zck_get_chunk_src()
+ */
+static bool copy_existing_chunks(zckChunk **dstChunk, struct hnd_priv *priv)
+{
+ unsigned long offset = 0;
+ uint32_t checksum;
+ int ret;
+ unsigned char hash[SHA256_HASH_LENGTH];
+
+ while (*dstChunk && zck_get_chunk_valid(*dstChunk)) {
+ zckChunk *chunk = zck_get_chunk_src(*dstChunk);
+ size_t len = zck_get_chunk_size(chunk);
+ size_t start = zck_get_chunk_start(chunk);
+ char *sha = zck_get_chunk_digest_uncompressed(chunk);
+ if (!len) {
+ *dstChunk = zck_get_next_chunk(*dstChunk);
+ continue;
+ }
+ if (!sha) {
+ ERROR("Cannot get hash for chunk %ld", zck_get_chunk_number(chunk));
+ return false;
+ }
+ if (lseek(priv->fdsrc, start, SEEK_SET) < 0) {
+ ERROR("Seeking source file at %lu", start);
+ free(sha);
+ return false;
+ }
+
+ ascii_to_hash(hash, sha);
+
+ if (priv->debugchunks)
+ TRACE("Copying chunk %ld from SRC %ld, start %ld size %ld",
+ zck_get_chunk_number(*dstChunk),
+ zck_get_chunk_number(chunk),
+ start,
+ len);
+ ret = copyfile(priv->fdsrc, &priv->fdout, len, &offset, 0, 0, COMPRESSED_FALSE,
+ &checksum, hash, false, NULL, NULL);
+
+ free(sha);
+ if (ret)
+ return false;
+
+ *dstChunk = zck_get_next_chunk(*dstChunk);
+ }
+ return true;
+}
+
+/*
+ * Handler entry point
+ */
+static int install_delta(struct img_type *img,
+ void __attribute__ ((__unused__)) *data)
+{
+ struct hnd_priv *priv;
+ int ret = -1;
+ int dst_fd = -1, in_fd = -1;
+ zckChunk *iter;
+ range_request_t *req = NULL;
+ zckCtx *zckSrc = NULL, *zckDst = NULL;
+ char *FIFO = NULL;
+ pthread_t chain_handler_thread_id;
+
+ /*
+ * No streaming allowed
+ */
+ if (img->install_directly) {
+ ERROR("Do not set install-directly with delta, the header cannot be streamed");
+ return -EINVAL;
+ }
+
+ /*
+ * Initialize handler data
+ */
+ priv = (struct hnd_priv *)calloc(1, sizeof(*priv));
+ if (!priv) {
+ ERROR("OOM when allocating handler data !");
+ return -ENOMEM;
+ }
+ priv->answer = (range_answer_t *)malloc(sizeof(*priv->answer));
+ if (!priv->answer) {
+ ERROR("OOM when allocating buffer !");
+ free(priv);
+ return -ENOMEM;
+ }
+
+ /*
+ * Read setup from sw-description
+ */
+ if (delta_retrieve_attributes(img, priv)) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+
+ priv->pipetodwl = pctl_getfd_from_type(SOURCE_CHUNKS_DOWNLOADER);
+
+ if (priv->pipetodwl < 0) {
+ ERROR("Chunks downloader is not running, delta update not available !");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if ((asprintf(&FIFO, "%s/%s", get_tmpdir(), FIFO_FILE_NAME) ==
+ ENOMEM_ASPRINTF)) {
+ ERROR("Path too long: %s", get_tmpdir());
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
+ /*
+ * FIFO to communicate with the chainhandler thread
+ */
+ unlink(FIFO);
+ ret = mkfifo(FIFO, 0600);
+ if (ret) {
+ ERROR("FIFO cannot be created in delta handler");
+ goto cleanup;
+ }
+ /*
+ * Open files
+ */
+ dst_fd = open("/dev/null", O_TRUNC | O_WRONLY | O_CREAT, 0666);
+ if (dst_fd < 0) {
+ ERROR("/dev/null not present or cannot be opened, aborting...");
+ goto cleanup;
+ }
+ in_fd = open(priv->srcdev, O_RDONLY);
+ if(in_fd < 0) {
+ ERROR("Unable to open Source : %s for reading", priv->srcdev);
+ goto cleanup;
+ }
+
+ /*
+ * Set ZCK log level
+ */
+ zck_set_log_level(priv->zckloglevel >= 0 ?
+ priv->zckloglevel : map_swupdate_to_zck_loglevel(loglevel));
+ zck_set_log_callback(zck_log_toswupdate);
+
+ /*
+ * Initialize zck context for source and destination
+ * source : device / file of current software
+ * dst : final software to be installed
+ */
+ zckSrc = zck_create();
+ if (!zckSrc) {
+ ERROR("Cannot create ZCK Source %s", zck_get_error(NULL));
+ zck_clear_error(NULL);
+ goto cleanup;
+ }
+ zckDst = zck_create();
+ if (!zckDst) {
+ ERROR("Cannot create ZCK Destination %s", zck_get_error(NULL));
+ zck_clear_error(NULL);
+ goto cleanup;
+ }
+
+ /*
+ * Prepare zckSrc for writing: the ZCK header must be computed from
+ * the running source
+ */
+ if(!zck_init_write(zckSrc, dst_fd)) {
+ ERROR("Cannot initialize ZCK for writing (%s), aborting..",
+ zck_get_error(zckSrc));
+ goto cleanup;
+ }
+ if (!zck_init_read(zckDst, img->fdin)) {
+ ERROR("Unable to read ZCK header from %s : %s",
+ img->fname,
+ zck_get_error(zckDst));
+ goto cleanup;
+ }
+
+ TRACE("ZCK Header read successfully from SWU, creating header from %s",
+ priv->srcdev);
+ /*
+ * Now read completely source and generate the index file
+ * with hashes for the uncompressed data
+ */
+ if (!zck_set_ioption(zckSrc, ZCK_UNCOMP_HEADER, 1)) {
+ ERROR("%s\n", zck_get_error(zckSrc));
+ goto cleanup;
+ }
+ if (!zck_set_ioption(zckSrc, ZCK_COMP_TYPE, ZCK_COMP_NONE)) {
+ ERROR("Error setting ZCK_COMP_NONE %s\n", zck_get_error(zckSrc));
+ goto cleanup;
+ }
+ if (!zck_set_ioption(zckSrc, ZCK_HASH_CHUNK_TYPE, ZCK_HASH_SHA256)) {
+ ERROR("Error setting HASH Type %s\n", zck_get_error(zckSrc));
+ goto cleanup;
+ }
+
+ if (!create_zckindex(zckSrc, in_fd)) {
+ WARN("ZCK Header from %s cannot be created, fallback to full download",
+ priv->srcdev);
+ } else {
+ zck_create_hashdb(zckSrc);
+ zck_assembly_ctx(zckSrc, zckDst);
+ }
+
+ size_t uncompressed_size = get_total_size(zckDst, priv);
+ INFO("Size of artifact to be installed : %lu", uncompressed_size);
+
+ /*
+ * Everything checked: now starts to combine
+ * source data and ranges from server
+ */
+
+
+ /* Overwrite some parameters for chained handler */
+ memcpy(&priv->img, img, sizeof(*img));
+ priv->img.compressed = COMPRESSED_FALSE;
+ priv->img.size = uncompressed_size;
+ memset(priv->img.sha256, 0, SHA256_HASH_LENGTH);
+ strlcpy(priv->img.type, priv->chainhandler, sizeof(priv->img.type));
+ strlcpy(priv->fifo, FIFO, sizeof(priv->fifo));
+
+ signal(SIGPIPE, SIG_IGN);
+
+ chain_handler_thread_id = start_thread(chain_handler_thread, priv);
+ wait_threads_ready();
+
+ priv->fdout = open(FIFO, O_WRONLY);
+ if (priv->fdout < 0) {
+ ERROR("Failed to open FIFO %s", FIFO);
+ goto cleanup;
+ }
+
+ ret = 0;
+
+ iter = zck_get_first_chunk(zckDst);
+ bool success;
+ priv->tgt = zckDst;
+ priv->fdsrc = in_fd;
+ while (iter) {
+ if (zck_get_chunk_valid(iter)) {
+ success = copy_existing_chunks(&iter, priv);
+ } else {
+ success = copy_network_chunks(&iter, priv);
+ }
+ if (!success) {
+ ERROR("Delta Update fails : aborting");
+ ret = -1;
+ goto cleanup;
+ }
+ }
+
+ INFO("Total downloaded data : %ld bytes", priv->totaldwlbytes);
+
+ void *status;
+ ret = pthread_join(chain_handler_thread_id, &status);
+ if (ret) {
+ ERROR("return code from pthread_join() is %d", ret);
+ }
+ ret = (unsigned long)status;
+ TRACE("Chained handler returned %d", ret);
+
+cleanup:
+ if (zckSrc) zck_free(&zckSrc);
+ if (zckDst) zck_free(&zckDst);
+ if (req) free(req);
+ if (dst_fd > 0) close(dst_fd);
+ if (in_fd > 0) close(in_fd);
+ if (FIFO) {
+ unlink(FIFO);
+ free(FIFO);
+ }
+ if (priv->answer) free(priv->answer);
+ free(priv);
+ return ret;
+}
+
+__attribute__((constructor))
+void delta_handler(void)
+{
+ register_handler(handlername, install_delta,
+ IMAGE_HANDLER | FILE_HANDLER, NULL);
+}
--
2.25.1

Stefano Babic

Oct 11, 2021, 7:22:49 AM10/11/21
to swup...@googlegroups.com, Stefano Babic
When upgrading a partition, the filesystem can be much smaller than
the partition itself. By default the handler reads the whole partition
and creates the full ZCK index for it, but this requires more memory
than really needed. Add a way to check the real size of the
filesystem, and do not index the rest of the partition.

Signed-off-by: Stefano Babic <sba...@denx.de>
---
doc/source/handlers.rst | 6 ++++++
handlers/delta_handler.c | 39 +++++++++++++++++++++++++++++++++++++--
2 files changed, 43 insertions(+), 2 deletions(-)

diff --git a/doc/source/handlers.rst b/doc/source/handlers.rst
index 54a9b6c..d5ddd6b 100644
--- a/doc/source/handlers.rst
+++ b/doc/source/handlers.rst
@@ -972,6 +972,12 @@ The resulting header file must be packed inside the SWU.
| | | printed, and it reports if a chunk |
| | | is downloaded or copied from the source. |
+-------------+-------------+----------------------------------------------------+
+ | source-sitze| string | This limits the index of the source |
+ | | | It is helpful in case of filesystem in much |
+ | | | bigger partition. It has the value for the size |
+ | | | or it can be set to "detect" and the handler |
+ | | | will try to find the effective size of fs. |
+ +-------------+-------------+----------------------------------------------------+


Example:
diff --git a/handlers/delta_handler.c b/handlers/delta_handler.c
index e0202ab..0a59a59 100644
--- a/handlers/delta_handler.c
+++ b/handlers/delta_handler.c
@@ -21,6 +21,7 @@
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/wait.h>
+#include <sys/statvfs.h>
#include <unistd.h>
#include <fcntl.h>
#include <ctype.h>
@@ -35,6 +36,7 @@
#include <util.h>
#include <pctl.h>
#include <pthread.h>
+#include <fs_interface.h>
#include "delta_handler.h"
#include "multipart_parser.h"
#include "installer.h"
@@ -89,6 +91,8 @@ struct hnd_priv {
char *srcdev; /* device as source for comparison */
char *chainhandler; /* Handler to pass the decompressed image */
zck_log_type zckloglevel; /* if found, set log level for ZCK to this */
+ bool detectsrcsize; /* if set, try to compute size of filesystem in srcdev */
+ size_t srcsize; /* Size of source */
unsigned long max_ranges; /* Max allowed ranges (configured via sw-description) */
/* Data to be transferred to chain handler */
struct img_type img;
@@ -319,6 +323,15 @@ static int delta_retrieve_attributes(struct img_type *img, struct hnd_priv *priv
if (errno || priv->max_ranges == 0)
priv->max_ranges = DEFAULT_MAX_RANGES;

+ char *srcsize;
+ srcsize = dict_get_value(&img->properties, "source-size");
+ if (srcsize) {
+ if (!strcmp(srcsize, "detect"))
+ priv->detectsrcsize = true;
+ else
+ priv->srcsize = ustrtoull(srcsize, 10);
+ }
+
char *zckloglevel = dict_get_value(&img->properties, "zckloglevel");
if (!zckloglevel)
return 0;
@@ -452,7 +465,7 @@ static void zck_log_toswupdate(const char *function, zck_log_type lt,
/*
* Create a zck Index from a file
*/
-static bool create_zckindex(zckCtx *zck, int fd)
+static bool create_zckindex(zckCtx *zck, int fd, size_t maxbytes)
{
const size_t bufsize = 16384;
char *buf = malloc(bufsize);
@@ -470,6 +483,8 @@ static bool create_zckindex(zckCtx *zck, int fd)
free(buf);
return false;
}
+ if (maxbytes && n > maxbytes)
+ break;
}

free(buf);
@@ -917,6 +932,26 @@ static int install_delta(struct img_type *img,
ERROR("/dev/null not present or cannot be opened, aborting...");
goto cleanup;
}
+
+ char *filesystem;
+ if (priv->detectsrcsize) {
+ filesystem = diskformat_fs_detect(priv->srcdev);
+ if (filesystem) {
+ char* DATADST_DIR = alloca(strlen(get_tmpdir())+strlen(DATADST_DIR_SUFFIX)+1);
+ sprintf(DATADST_DIR, "%s%s", get_tmpdir(), DATADST_DIR_SUFFIX);
+ if (!swupdate_mount(priv->srcdev, DATADST_DIR, filesystem)) {
+ struct statvfs vfs;
+ if (!statvfs(DATADST_DIR, &vfs)) {
+ TRACE("Detected filesystem %s, block size : %lu, %lu blocks = %lu size",
+ filesystem, vfs.f_frsize, vfs.f_blocks, vfs.f_frsize * vfs.f_blocks);
+ priv->srcsize = vfs.f_frsize * vfs.f_blocks;
+ }
+ swupdate_umount(DATADST_DIR);
+ }
+ free(filesystem);
+ }
+ }
+
in_fd = open(priv->srcdev, O_RDONLY);
if(in_fd < 0) {
ERROR("Unable to open Source : %s for reading", priv->srcdev);
@@ -983,7 +1018,7 @@ static int install_delta(struct img_type *img,
goto cleanup;
}

- if (!create_zckindex(zckSrc, in_fd)) {
+ if (!create_zckindex(zckSrc, in_fd, priv->srcsize)) {
WARN("ZCK Header from %s cannot be created, fallback to full download",
priv->srcdev);
} else {
--
2.25.1

Dominique MARTINET

Oct 11, 2021, 7:25:43 PM10/11/21
to Stefano Babic, swup...@googlegroups.com
Stefano Babic wrote on Mon, Oct 11, 2021 at 01:21:40PM +0200:
> The get_file() function sends data to IPC to install the SWU. To make
> channel() more generic, add a parameter to control if the incoming data
> must be forwarded to the IPC.
>
> This allows get_file() to be used in other contexts, providing one's
> own callback to handle the stream; the curl callback in channel_curl.c
> becomes a proxy that simply forwards the data to a supplied callback as
> "dwlwrdata" in the channel_data_t structure.
>
> Signed-off-by: Stefano Babic <sba...@denx.de>
> ---
> corelib/channel_curl.c | 50 +++++++++++++++++++++++-------------------
> include/channel_curl.h | 1 +
> 2 files changed, 28 insertions(+), 23 deletions(-)
>
> diff --git a/corelib/channel_curl.c b/corelib/channel_curl.c
> index 0636efc..be553f0 100644
> --- a/corelib/channel_curl.c
> +++ b/corelib/channel_curl.c
> @@ -1305,7 +1309,7 @@ cleanup_file:
> * so use close() here directly to issue an error in case.
> * Also, for a given file handle, calling ipc_end() would make
> * no semantic sense. */
> - if (close(file_handle) != 0) {
> + if (file_handle > 0 && close(file_handle) != 0) {

should be >= 0 as 0 is a valid fd number

Stefano Babic

Oct 12, 2021, 4:49:07 AM10/12/21
to Dominique MARTINET, Stefano Babic, swup...@googlegroups.com
Thanks, I will fix in V2.

Regards,
Stefano

--
=====================================================================
DENX Software Engineering GmbH, Managing Director: Wolfgang Denk
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
Phone: +49-8142-66989-53 Fax: +49-8142-66989-80 Email: sba...@denx.de
=====================================================================

Pierre-Jean Texier

Nov 5, 2021, 6:13:59 PM11/5/21
to Stefano Babic, swup...@googlegroups.com
Hi Stefano,

Le 11/10/2021 à 13:21, Stefano Babic a écrit :
> When upgrading a partition, the filesystem can be much smaller than
> the partition itself. By default the handler reads the whole partition
> and creates the full ZCK index for it, but this requires more memory
> than really needed. Add a way to check the real size of the
> filesystem, and do not index the rest of the partition.
>
> Signed-off-by: Stefano Babic <sba...@denx.de>
> ---
> doc/source/handlers.rst | 6 ++++++
> handlers/delta_handler.c | 39 +++++++++++++++++++++++++++++++++++++--
> 2 files changed, 43 insertions(+), 2 deletions(-)
>
> diff --git a/doc/source/handlers.rst b/doc/source/handlers.rst
> index 54a9b6c..d5ddd6b 100644
> --- a/doc/source/handlers.rst
> +++ b/doc/source/handlers.rst
> @@ -972,6 +972,12 @@ The resulting header file must be packed inside the SWU.
> | | | printed, and it reports if a chunk |
> | | | is downloaded or copied from the source. |
> +-------------+-------------+----------------------------------------------------+
> + | source-sitze| string | This limits the index of the source |

Small typo --^ (source-size)

Thanks,
--
Pierre-Jean Texier

Pierre-Jean Texier

Nov 5, 2021, 6:40:33 PM11/5/21
to Stefano Babic, swup...@googlegroups.com
Hi Stefano,

Le 11/10/2021 à 13:21, Stefano Babic a écrit :
> Large size or small bandwidth require to reduce the size of the
> downloaded data. This implements a delta update using the zchunk project
> as basis. The full documentation and design specification is in doc.
>
> Signed-off-by: Stefano Babic <sba...@denx.de>
> ---

<snip>

> # If a flat binary should be built, CFLAGS_swupdate="-elf2flt"
> # env var should be set for make invocation.
> # Here we check whether CFLAGS_swupdate indeed contains that flag.
> diff --git a/handlers/Config.in b/handlers/Config.in
> index ad5dfdd..efb0e8d 100644
> --- a/handlers/Config.in
> +++ b/handlers/Config.in
> @@ -60,6 +60,19 @@ config CFIHAMMING1
>
> You do not need this if you do not have an OMAP SoC.
>
> +config DELTA
> + bool "delta"
> + depends on HAVE_LIBCURL
> + depends on HAVE_URIPARSER

I think 'uriparser' is not needed here, right ?

Thanks,
--
Pierre-Jean Texier

Stefano Babic

Nov 6, 2021, 5:59:27 AM11/6/21
to Pierre-Jean Texier, Stefano Babic, swup...@googlegroups.com
Hi Pierre-Jean,
Yes, thanks - I fix in V2.

Regards,
Stefano

> Thanks,
> --
> Pierre-Jean Texier

Stefano Babic

Nov 6, 2021, 6:01:42 AM11/6/21
to Pierre-Jean Texier, Stefano Babic, swup...@googlegroups.com
Hi Pierre-Jean,
Thanks, I fix it in V2.

Regards,
Stefano