Segmented Installed-Directly


Rob Lee

Jun 22, 2021, 5:02:48 PM
to swup...@googlegroups.com
Hello,

The devices I work on have A/B (dual copy) type partition
configurations with a large rootfs (>1GiB) and thus large rootfs
images even with compression. We do not have room to store the rootfs
image temporarily, so we want to use the "installed-directly" option.

For the current "installed-directly" operation, my understanding from
reading the documentation is that the image data can only be verified
*after* it is completely written to storage. While we can perform an
erase after a verification error is detected, our security goal is to
never write any unverified data to storage.

Are there SWUpdate methods one could recommend to achieve this goal?

As a newbie to SWUpdate my initial thoughts are that perhaps we could
split the large rootfs image into sub-images (e.g., 1 MiB in size) and
install them in sequence to a ramdisk (e.g., 1MiB in size), verify
them, then install the data from the ram disk to the proper storage
location. That said, it would be preferable to do something like this
within the available functionality of swupdate and not in custom
scripts. Also, I realize there would be an over-the-air data size
increase with this chunking scheme, due to worse compression
performance from splitting the data into smaller segments.

Thanks,
Rob

James Hilliard

Jun 22, 2021, 8:14:45 PM
to Rob Lee, swupdate
So the system doesn't have enough RAM to temporarily store the rootfs
image? You might want to look into the libarchive/archive handler with,
say, a rootfs.tar.xz (or multiple archives, so you can split things up
and combine them during install to work around RAM limitations), which
gets extracted to a partition (look at the diskpart handler for
formatting the partition automatically before install). This may work
better than a disk image, depending on the filesystem/setup you need.
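
A minimal sw-description fragment for that approach might look roughly like this (a sketch only; handler and attribute names per my reading of the SWUpdate docs, with placeholder paths and filenames):

```
files: (
    {
        filename = "rootfs.tar.xz";
        type = "archive";
        path = "/mnt/newroot";       /* mount point of the target partition */
        preserve-attributes = true;
        installed-directly = true;   /* stream straight out of the .swu */
    }
);
```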

Regards,
James
> --
> You received this message because you are subscribed to the Google Groups "swupdate" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to swupdate+u...@googlegroups.com.
> To view this discussion on the web visit https://groups.google.com/d/msgid/swupdate/CAH593dAQOYg%3D3pmOER2VSsoq-H3DFhV1-8nnud%2B02kLEjBC-tQ%40mail.gmail.com.

Stefano Babic

Jun 23, 2021, 6:38:18 AM
to Rob Lee, swup...@googlegroups.com
Hi Rob,

On 22.06.21 23:02, Rob Lee wrote:
> Hello,
>
> The devices I work on have A/B (dual copy) type partition
> configurations with a large rootfs (>1GiB) and thus large rootfs
> images even with compression. We do not have room to store the rootfs
> image temporarily, so we want to use the "installed-directly" option.
>
> For the current "installed-directly" operation, from reading the
> documentation my understanding is that the image data can only be
> verified *after* it is completely written to storage. While we can
> perform an erase after a verification error is detected, our security
> goal is to never to write any unverified data to storage.

This is quite weird, because what is important is that no unverified
software will run on the device; whether the flash / storage is
afterwards filled or erased should not matter.

Anyway...

>
> Are there SWUpdate methods one could recommend to achieve this goal?
>

You can reach this goal easily if you encrypt the images. In fact,
SWUpdate first works with internal RAM buffers, and if it cannot decrypt
the data, nothing is stored on the device.
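
For illustration, the image would be pre-encrypted on the build host (per the SWUpdate encryption docs, e.g. with `openssl enc -aes-256-cbc -K <key> -iv <iv>`) and then marked as encrypted in sw-description. A sketch, with placeholder device and IV; the exact attributes (e.g. `ivt`) depend on your SWUpdate version:

```
images: (
    {
        filename = "rootfs.ext4.enc";
        device = "/dev/mmcblk0p2";   /* placeholder target partition */
        encrypted = true;
        ivt = "<hex-encoded IV>";    /* per-image initialization vector */
        installed-directly = true;
    }
);
```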

> As a newbie to SWUpdate my initial thoughts are that perhaps we could
> split the large rootfs image into sub-images (e.g., 1 MiB in size) and
> install them in sequence to a ramdisk (e.g., 1MiB in size),

Then you need a ramdisk equal to or bigger than your image.

> verify
> them, then install the data from the ram disk to the proper storage
> location.

That is a very convoluted way and IMHO not needed at all. SWUpdate
already verifies signed images. And if you encrypt the rootfs, you get
the same result without splitting.

> That said, it would be preferable to do something like this
> within the available functionality of swupdate and not in custom
> scripts. Also, I realize there would be an over-the-air data size
> increases with this chunking scheme due to worse compression
> performance due to splitting the data into smaller segments.
>

Let me say it sounds weird.


Best regards,
Stefano Babic

--
=====================================================================
DENX Software Engineering GmbH, Managing Director: Wolfgang Denk
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
Phone: +49-8142-66989-53 Fax: +49-8142-66989-80 Email: sba...@denx.de
=====================================================================

James Hilliard

Jun 23, 2021, 6:54:59 AM
to Stefano Babic, Rob Lee, swupdate
On Wed, Jun 23, 2021 at 4:38 AM Stefano Babic <sba...@denx.de> wrote:
>
> Hi Rob,
>
> On 22.06.21 23:02, Rob Lee wrote:
> > Hello,
> >
> > The devices I work on have A/B (dual copy) type partition
> > configurations with a large rootfs (>1GiB) and thus large rootfs
> > images even with compression. We do not have room to store the rootfs
> > image temporarily, so we want to use the "installed-directly" option.
> >
> > For the current "installed-directly" operation, from reading the
> > documentation my understanding is that the image data can only be
> > verified *after* it is completely written to storage. While we can
> > perform an erase after a verification error is detected, our security
> > goal is to never to write any unverified data to storage.
>
> This is quite weird, because what is important is that not verified
> software will run on the device, and if the flash / storage is then
> filled / erased should not matter.
>
> Anyway...
>
> >
> > Are there SWUpdate methods one could recommend to achieve this goal?
> >
>
> You reach easy this goal if you encrypt the images. In fact, SWUpdate
> first work with internal RAM buffers, and if it cannot decrypt, nothing
> is stored on the device.

I think he's trying to verify the image, not obfuscate it (symmetric
encryption by itself doesn't help much there, as someone could just
extract the key from the device and use it to encrypt a new image; the
same is not possible with the asymmetric encryption used for verified
images).

>
> > As a newbie to SWUpdate my initial thoughts are that perhaps we could
> > split the large rootfs image into sub-images (e.g., 1 MiB in size) and
> > install them in sequence to a ramdisk (e.g., 1MiB in size),
>
> Then you need a ramdisk equal or bigger as your image.
>
> > verify
> > them, then install the data from the ram disk to the proper storage
> > location.
>
> Very convoluted way and IMHO not needed at all. SWUpdate already
> verifies signed images. And if you encrypt the rootfs, you get the same
> result without splitting.

Signing is different from encryption, as you never have to distribute
the private key. Best I can tell, the symmetric encryption is just for
obfuscation and provides little security benefit. Or am I missing
something here?

>
> > That said, it would be preferable to do something like this
> > within the available functionality of swupdate and not in custom
> > scripts. Also, I realize there would be an over-the-air data size
> > increases with this chunking scheme due to worse compression
> > performance due to splitting the data into smaller segments.
> >
>
> Let me say it sounds weird.
>
>
> Best regards,
> Stefano Babic

Stefano Babic

Jun 23, 2021, 7:04:33 AM
to James Hilliard, Stefano Babic, Rob Lee, swupdate
Hi James,
As usual, where the key is stored is separate from the decryption
itself. If the device has secure storage, for example, the attacker
cannot extract the key. Or if the key is first loaded via a secure
channel and not stored on the device at all, the attacker cannot
extract the key either. So yes, it helps.

> and use it to encrypt a new image, the same is not possible
> with asymmetric encryption used for verified images).
>
>>
>>> As a newbie to SWUpdate my initial thoughts are that perhaps we could
>>> split the large rootfs image into sub-images (e.g., 1 MiB in size) and
>>> install them in sequence to a ramdisk (e.g., 1MiB in size),
>>
>> Then you need a ramdisk equal or bigger as your image.
>>
>>> verify
>>> them, then install the data from the ram disk to the proper storage
>>> location.
>>
>> Very convoluted way and IMHO not needed at all. SWUpdate already
>> verifies signed images. And if you encrypt the rootfs, you get the same
>> result without splitting.
>
> Signing is different from encryption as you never have to distribute the
> private key.

This is clear - but he wants to avoid writing into the flash, even
though it does not add further security as long as it is ensured that
the new software cannot be started (and for that, there are different
mechanisms).

> Best I can tell the symmetric encryption is just for obfuscation
> and provides little security benefit. Or am I missing something here?

If we are talking about security, avoiding writing into the flash adds
no security at all. He could also invalidate the flash after a failed
update if he wants.

Best regards,
Stefano

James Hilliard

Jun 23, 2021, 7:33:07 AM
to Stefano Babic, Rob Lee, swupdate
I mean, you can make it difficult to extract, but fundamentally the key
must be present on the device during the update... even if it's only
ever in RAM it can still be extracted; someone just needs to find an
infoleak or RCE exploit to read out the key during extraction. Failing
that, someone who knows what they are doing could likely find a
hardware side-channel/glitching attack or similar (these are often used
for extracting keys from heavily locked-down devices like game
consoles) to grab the key.

I've pulled GPG update decryption keys from some pretty heavily
locked-down ISP routers in the past using RCE vulnerabilities. Once you
have one decryption key, you can pretty much indefinitely decrypt
future updates without too much effort.

>
> > and use it to encrypt a new image, the same is not possible
> > with asymmetric encryption used for verified images).
> >
> >>
> >>> As a newbie to SWUpdate my initial thoughts are that perhaps we could
> >>> split the large rootfs image into sub-images (e.g., 1 MiB in size) and
> >>> install them in sequence to a ramdisk (e.g., 1MiB in size),
> >>
> >> Then you need a ramdisk equal or bigger as your image.
> >>
> >>> verify
> >>> them, then install the data from the ram disk to the proper storage
> >>> location.
> >>
> >> Very convoluted way and IMHO not needed at all. SWUpdate already
> >> verifies signed images. And if you encrypt the rootfs, you get the same
> >> result without splitting.
> >
> > Signing is different from encryption as you never have to distribute the
> > private key.
>
> This is clear - but he want to avoid to write into the flash, even if it
> does not add further security if it is ensured that the new software
> cannot be started (and for that, there are different mechanism).
>
> > Best I can tell the symmetric encryption is just for obfuscation
> > and provides little security benefit. Or am I missing something here?
>
> If we are talking about security, avoiding to write into the flash adds
> no security at all. He could also invalidate the flash after a failing
> update if he wants.

I don't think this is the case; blocking extraction by verifying first
can significantly reduce attack surface. Parsers and extraction
software in general tend to be a very large source of security issues,
so by validating payloads before extraction, an attacker would have to
find a vulnerability in something that runs before validation.

For example, if there's an RCE vulnerability in, say, libmtd that can
be exploited by passing it a malicious image file, your system could
get attacked unless the validator halts the update before passing the
image to libmtd to be written.

I've actually found exploitable vulnerabilities like this in a number
of devices using different update systems. For example, a system that
extracted a tar file before validating the update signature could be
exploited due to a path vulnerability in busybox tar, which one could
use to effectively write arbitrary files anywhere in the filesystem;
but this was only exploitable because signature validation happened
after tar extraction. If it had happened before, the input would never
have reached the vulnerable busybox tar code paths.

Stefano Babic

Jun 23, 2021, 8:16:22 AM
to James Hilliard, Stefano Babic, Rob Lee, swupdate
Hi James,
But this is a different topic. Many SoCs have secure storage for keys,
and the keys are then present only in the kernel. Of course, we rely on
the SoC manufacturers for this. But then an attacker would first have
to gain full access to the device (with root rights) and find the key,
which should be obfuscated in the kernel.

It looks much easier for the attacker to replace the public key with
his own.

> If you don't find
> something there then someone who knows what they are doing could
> likely find a hardware side channel/glitching attack or something(these
> are often used for extracting keys from heavily locked down devices
> like game consoles) to grab the key.

Of course, attackers are creative...

>
> I've pulled gpg update decryption keys from some pretty heavily locked
> down ISP routers in the past using RCE vulnerabilities. Once you have
> one decryption key you can pretty much indefinitely decrypt future updates
> without too much effort.

Sure.
But if data is streamed without a temporary copy, the data is verified
on the fly (as SWUpdate does).

>
> For example if there's a RCE vulnerability in say libmtd that can
> be exploited by passing it a malicious image file your system could
> get attacked unless the validator halts the update before passing
> the image to libmtd to be written.

But then this is valid for every package. SWUpdate can have leaks, too,
which must be fixed. What helps is to keep the device up to date and
fix any CVE that is found. If we go on with hypotheses, then even the
verification step before installing could be tricked in some way (an
OpenSSL bug? Whatever...), so it does not help either. It just
introduces an additional step without many benefits.

James Hilliard

Jun 23, 2021, 9:14:12 AM
to Stefano Babic, Rob Lee, swupdate
Sure, I just mean that verification has a somewhat different security
model than encryption, and may have different requirements, such as
secure key storage or similar, to be all that secure.

It's only decrypted on the fly from what I can tell; verification
requires the sha256 hash of the data, which can only be obtained at the
end of the upload, and by that time most of the data would already be
written, at least in streaming mode.

If you wanted to verify on the fly, you'd probably need a set of signed
hashes for each chunk of data being streamed, with each chunk being no
larger than what you are OK with caching in RAM, as you can only
validate one chunk (of any size, however) of data per hash, I think.
You'd probably just use a merkle tree of chunk hashes and then sign the
merkle root. Right now we essentially have the entire image as one
chunk with a single hash, so we can't verify the image until it's fully
received. You can validate it at the end of streaming, just not before
you have the end.

To validate, you would first validate the merkle root (probably in the
sw-description file) using RSA, then validate the chunk hashes using
the merkle tree, then use the chunk hashes to validate each chunk as
the data is streamed. This way, essentially no unvalidated chunk would
make it past a validation code path.
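
As a rough illustration of this scheme (hypothetical helper names; SWUpdate does not implement this), here is a sketch of building a merkle root over chunk hashes and checking each chunk against its leaf hash before any write:

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaf_hashes):
    """Fold a list of chunk hashes up to a single root hash."""
    level = list(leaf_hashes)
    while len(level) > 1:
        if len(level) % 2:                     # odd count: duplicate the last node
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

CHUNK = 4                                      # tiny chunk size, for illustration
data = b"example rootfs image bytes"           # stand-in for the streamed image
chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
leaf_hashes = [sha256(c) for c in chunks]
root = merkle_root(leaf_hashes)                # only this root needs to be signed

# streaming side: check each chunk against its leaf hash *before* writing it
for c, h in zip(chunks, leaf_hashes):
    assert sha256(c) == h                      # reject the chunk on mismatch
```

Only the root would need an RSA signature; a tampered chunk changes its leaf hash and therefore the recomputed root, so it is caught while the chunk is still in memory.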

It's a little bit complex but should be possible to do streaming validation,
unsure if that would be worth implementing though.

>
> >
> > For example if there's a RCE vulnerability in say libmtd that can
> > be exploited by passing it a malicious image file your system could
> > get attacked unless the validator halts the update before passing
> > the image to libmtd to be written.
>
> But then this is valid for each package. SWUpdate can have leaks, too,
> that must be fixed. What helps is to maintain the device up to data, and
> fix any CVE that can be found. If we go on with hypothesis, then even
> the verification step before installing can be tricked in some way
> (openSSL bug ? Whatever...), and it does not help as well. It just
> introduce an additional step without many benefits.

Yeah, it may not have much benefit, especially in cases where there are
easier things to exploit, but if your use case requires extra security you
might want the reduced attack surface by validating before extraction.

James Hilliard

Jun 23, 2021, 9:38:20 AM
to Stefano Babic, Rob Lee, swupdate
Oh, looks like someone wrote a library that already does this sort of thing:
https://github.com/IAIK/secure-block-device

Stefano Babic

Jun 23, 2021, 9:43:44 AM
to James Hilliard, Stefano Babic, Rob Lee, swupdate
Hi James,
Right.
OK, so let's say computation of the hash is done on the fly; the result
is of course available only at the end, once the whole data has been
downloaded.

> and by that time most of the data would be written, at least
> in streaming mode.

That is correct.

This is also why streaming mode is optional: if someone strictly needs
to verify before touching the hardware (mostly this is the case for
single-copy mode), streaming mode is off.

>
> If you wanted to verify on the fly you'd probably need to have a set of
> signed hashes for each chunk of data being streamed, with each chunk
> being no larger than what you are ok caching in RAM as you can only
> validate one chunk(can be any size however) of data for each hash I
> think. You'd probably just use a merkle tree of chunk hashes and then
> sign the merkle root. Right now we essentially would have the entire
> image as the chunk with a single hash, so we can't verify the image
> until it's fully received. You can validate it at the end of streaming, just
> not before you have the end.

You can even do this with the current implementation if you split the
whole image (though most projects have not a single artifact but
several of them) and then use the "offset" attribute to write each
piece at the right place. Because SWUpdate uses 16KB buffers, splitting
the image into 16KB chunks (!!) has the side effect of verifying each
chunk before installing, because the chunk is still in memory and the
write is the last step in the pipeline. It sounds crazy, but well...

All hashes are signed in sw-description - but well, I have never tested
such a thing, and sw-description would become quite large...
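
Purely as an illustration of that trick - with placeholder hashes, device, and sizes - the sw-description would carry one entry per chunk, each written at its own offset:

```
images: (
    {
        filename = "rootfs.chunk0";
        device = "/dev/mmcblk0p2";
        offset = "0";
        sha256 = "<hash-of-chunk0>";
        installed-directly = true;
    },
    {
        filename = "rootfs.chunk1";
        device = "/dev/mmcblk0p2";
        offset = "16384";
        sha256 = "<hash-of-chunk1>";
        installed-directly = true;
    }
    /* ... one entry per 16KB chunk ... */
);
```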

>
> To validate you would first validate the merkle root(probably in the
> sw-description file) using RSA, then validate the chunk hashes using
> the merkle tree, then use the chunk hashes to validate each chunk
> as the data is streamed. This way no unvalidated chunk would have
> to make it past a validation code path essentially.
>
> It's a little bit complex but should be possible to do streaming validation,
> unsure if that would be worth implementing though.

I agree, it should be possible; and agree again, no, it is not worth it.

>
>>
>>>
>>> For example if there's a RCE vulnerability in say libmtd that can
>>> be exploited by passing it a malicious image file your system could
>>> get attacked unless the validator halts the update before passing
>>> the image to libmtd to be written.
>>
>> But then this is valid for each package. SWUpdate can have leaks, too,
>> that must be fixed. What helps is to maintain the device up to data, and
>> fix any CVE that can be found. If we go on with hypothesis, then even
>> the verification step before installing can be tricked in some way
>> (openSSL bug ? Whatever...), and it does not help as well. It just
>> introduce an additional step without many benefits.
>
> Yeah, it may not have much benefit, especially in cases where there are
> easier things to exploit, but if your use case requires extra security you
> might want the reduced attack surface by validating before extraction.

Ok

James Hilliard

Jun 23, 2021, 9:55:11 AM
to Stefano Babic, Rob Lee, swupdate
Yeah, that would work, assuming sw-description itself doesn't get too
large to fit into RAM; otherwise you would probably need something like
a merkle tree, which would allow validating the chunk hashes branch by
branch.

But yeah... it definitely sounds crazy, but it does sound like it might
sorta work, heh.

Rob Lee

Jun 23, 2021, 11:31:25 PM
to James Hilliard, Stefano Babic, swupdate
Thanks James and Stefano for this thought experiment and information.

I had initially considered the encryption option, but I wanted to
explore the segmented option first. As per my previous thread with
subject "New Feature Contributions"
(https://groups.google.com/u/1/g/swupdate/c/nbfx2HaDPqw/m/ZGulta5rBwAJ),
we also require the ability to continue a download/install even with
the "installed-directly" option being used on a large rootfs image.
In such a system, a segmented approach whose chunks are independently
compressed could be accessed via a "chunk index" that translates to a
download offset to continue from. The chunk index to continue from
could be determined if each chunk has its own hash in the
sw-description file.

Additionally, the chunk size could be chosen to balance reasonably good
compression against a size that won't significantly impact system RAM
(our OTA updates will run in the background while other applications
are running), allowing each chunk to be uncompressed and verified
before writing to storage - for example, 1 or 2 MiB per chunk.
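
The resume logic could, for instance, be expressed as follows (illustrative only; `resume_index` is a hypothetical helper, not SWUpdate functionality): find the first chunk whose hash check fails and restart the download from there.

```python
import hashlib

def resume_index(written: bytes, chunk_hashes, chunk_size: int) -> int:
    """Return the index of the first chunk that is missing or fails its
    hash check, i.e. the chunk at which the download should resume."""
    for i, expected in enumerate(chunk_hashes):
        chunk = written[i * chunk_size:(i + 1) * chunk_size]
        if hashlib.sha256(chunk).hexdigest() != expected:
            return i
    return len(chunk_hashes)          # everything verified: nothing to redo

chunk_size = 4
image = b"0123456789abcdef"           # stand-in for the (compressed) rootfs
hashes = [hashlib.sha256(image[i:i + chunk_size]).hexdigest()
          for i in range(0, len(image), chunk_size)]

print(resume_index(image, hashes, chunk_size))       # 4: image complete
print(resume_index(image[:10], hashes, chunk_size))  # 2: chunk 2 truncated
```

With independently compressed chunks, the returned index maps directly to a byte offset in the OTA stream, which is what makes the continue-download feature work.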

I'll look further into the archive handler and into performing
encryption as a "verify-before-storage-write" solution. We will have
keys accessible from an ARM trusted execution environment, which are
already used for other purposes such as filesystem encryption.

Thanks,
Rob