Hi Andrew,
On 04.02.23 17:04, Andrew Murray wrote:
> Hi Stefano, Matt,
>
> On Sat, 4 Feb 2023 at 11:14, Stefano Babic <
sba...@denx.de> wrote:
>>
>> Hi Matt,
>>
>> (maybe adding the article's author to the discussion):
>
> Thanks for adding me in :)
You're welcome :-)
Ok, understood. That point is not so clear in the article.
>
> We identified that when you upload artifacts via the Hawkbit UI they
> were accessible via the RESTful management API, hence the suggestion
> in the blog post that this could be considered for use. But as you and
> the documentation point out this isn't intended for consumption by
> the device and requires HTTP Basic Authentication which isn't
> supported by SWupdate. (But we didn't understand all of this at the
> time of writing).
Ok
>
>
>>
>> As a concept for delta updates, SWUpdate requires the SWU and a URL
>> for each artifact that requires a delta update, in the format of a ZCK
>> file. In the simplest case this is just one file (rootfs), but it can
>> be many files (for example with containers). The URLs can belong to
>> different repositories / different servers. The URLs are defined at
>> build time, because they are part of the SWU and then signed. On the
>> other side, Hawkbit dynamically adds the artifacts into its database
>> with a software module ID, which cannot be known at build time, and
>> the URL contains this ID. We cannot even exclude that the rule / way
>> the ID is set could change in the future; only the API is guaranteed
>> by Hawkbit's developers to be maintained. This makes it dangerous to
>> bind these URLs, and the way to get them, to what Hawkbit is doing.
>
> You are referring to the URLs exchanged across DDI right? Which
> currently look a bit like this?
>
> /DEFAULT/controller/v1/site/softwaremodules/3/artifacts/image.swu
Right.
>
>
>>
>> So from the start, the repository to store the ZCK artifacts is not
>> bound to Hawkbit and can be any reachable URL (supporting byte range
>> requests, of course). Nothing prohibits setting up such a server on
>> the same machine as Hawkbit, but this is generally not required.
>
> If someone wanted to put these artifacts on a generic server, does
> SWUpdate provide any support for authentication/authorisation?
This can be done via a reverse proxy, see Matt's previous thread.
SWUpdate passes the key and certificate for the connection to libcurl,
if provided. This is also the preferred way in Hawkbit to authenticate
the device (the token, and in this specific case the gateway token, are
meant for development / testing). So either there is no authentication
at all (plug and play mode), or the device should receive its own key
and certificate for the connection.
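In swupdate.cfg this is just the SSL part of the suricatta section -
something like the following (option names as I recall them, please
double-check against the documentation):

```
suricatta :
{
    tenant = "default";
    id = "device-001";
    url = "https://hawkbit.example.com";
    /* mutual TLS: key and certificate are handed to libcurl */
    sslkey = "/etc/ssl/private/device.key";
    sslcert = "/etc/ssl/certs/device.crt";
    cafile = "/etc/ssl/certs/ca.crt";
    /* targettoken / gatewaytoken exist too, but are meant for
       development / testing only */
};
```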
> I.e. I
> can imagine people may want this in an S3 bucket, etc?
>
Well, Hawkbit supports artifacts in an S3 bucket, see the hawkbit
extensions for artifactory.
But really, people want to update both with and without the Hawkbit
server. SWUpdate makes this independent from the way you update, and the
same SWU (with or without delta) can be stored on Hawkbit, pushed to the
device via the webserver, delivered locally via USB storage, or whatever.
> I think it's a reasonable assumption to want to put any ZCK artifacts
> alongside the SWUs within the control of Hawkbit.
As I said, there are extensions, specifically for S3 buckets and
Microsoft Azure. But people want to use any kind of cloud service, and
integrating this into Hawkbit means that a new extension must be written
(and integrated into the Hawkbit project).
>
>
>>
>> The Hawkbit server is often located in the cloud, whose costs depend
>> on the generated traffic, and moving the artifact repos outside allows
>> one to look for the competitor with the cheapest solution. That means
>> it is possible from release to release to put the artifacts on another
>> server, if this reduces costs, while older releases can still be
>> deployed. With delta updates, the traffic generated on the Hawkbit
>> server can become negligible, because it may contain little more than
>> metadata (ZCK headers).
>>
>> It is not clear to me what the intention and the possible security
>> issues are that the author mentions with "we located it on publicly
>> accessible server – but of course this isn’t recommended for
>> production due to security concerns.". Hawkbit itself is a "public"
>> server as well, and you know that you can set up a reverse proxy if
>> you want to limit access. Nevertheless, the server is not in
>> SWUpdate's chain of trust. It could be compromised, too, without
>> compromising the device. SWUpdate will detect any manipulation, and
>> this holds for the ZCK (delta) files, too, even if they are not
>> deployed with the SWU.
>
> The blog post wasn't clear here
Maybe time to update the post ;-)
> - by security concerns I meant you may
> not want your binaries on a server that anyone can download via a
> public URL without authentication - which was the case for our
> testing.
Ok, this is clear now. I would say that this is generally not the main
concern (yes, it is requested, too, but with lower priority). The main
requirement is that an unverified SWU is not allowed to be installed on
the device. The SWU can generally be read, and the Hawkbit server could
be compromised, too. If there are concerns about making the SWU public,
the SWU itself should be encrypted (and SWUpdate supports this).
And as I explained above, customers often have several ways to update
their devices - fleet management is one case, but the same device could
be updated in another way, even offline via a USB pen drive. SWUpdate is
transparent to how the software reaches the device.
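Marking an artifact as encrypted in sw-description is then just a flag -
roughly like this (a sketch; key generation and IV handling are
described in the documentation):

```
images: (
    {
        filename = "rootfs.ext4.enc";
        device = "/dev/mmcblk0p2";
        encrypted = true;
    }
);
```

The symmetric key reaches the device out of band (it is never part of
the SWU), so even a compromised artifact server only ever sees
ciphertext.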
>
>
>>
>> After these considerations, I do not say it would be impossible to
>> bind the repo to Hawkbit in some way, but this should be done
>> exclusively using the DDI interface, for example by retrieving and
>> parsing the list of software modules, one of which provides the URLs -
>> but this was never asked for until now. Splitting the artifact repos
>> from the main fleet management server seems a much more flexible
>> solution.
>
> After writing the post, we did make further progress such that we were
> able to download the zck via the DDI. Our approach looked like this:
>
> - Make a note of the URL of the artifact downloaded in
> server_process_update_artifact (e.g. url/rootfs.ext4) - thus allowing
> us to infer the URL to the location of the directory (software module)
> containing the SWU and any other artifacts.
ok - but this probably makes the SWU not work outside Hawkbit.
Nevertheless, I agree it is a use case.
> - Inside the delta handler (delta_retrieve_attributes), if the URL of
> the zck file is a specific identifier/token, then we substitute it with
> the URL we previously made note of, but with the additional .zck
> suffix (e.g. url/rootfs.ext4.zck)
I would probably prefer that this happens independently from the delta
handler. The delta handler receives the URL via a "property" - if the
URL should be replaced, this should happen when sw-description is parsed
and / or in a suitable Lua extension.
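Just to sketch the idea (untested - the hook signature and property
access must be checked against SWUpdate's Lua documentation):
sw-description can carry an embedded Lua script, and an image entry can
name a hook that rewrites its properties at parse time. The "@HAWKBIT@"
placeholder and the base-URL mechanism below are purely hypothetical:

```
-- embedded-script section of sw-description (sketch, untested)
-- The image entry would set: hook = "fix_url";
function fix_url(image)
    -- "@HAWKBIT@" is a hypothetical placeholder set at build time
    if image.properties["url"] == "@HAWKBIT@" then
        -- hawkbit_base_url would have to be injected at runtime,
        -- e.g. derived from the DDI deployment answer (hypothetical)
        image.properties["url"] = hawkbit_base_url .. "/rootfs.ext4.zck"
    end
    return true, image
end
```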
> - This allows us to store the ext4.zck in Hawkbit along side the .ext4
> and obtain a URL for it (as part of the same software module).
>
Ok, I understand that the URL should be replaced in some way; it remains
to be seen which is the best way. My concern is that if the URL is
provided as the URL of an additional file of the same software module,
it is not signed and verified. Nothing terrible - SWUpdate is still able
to recognize a malformed or modified ZCK file - but maybe not the best.
> However the delta_downloader.c doesn't support passing an
> Authorisation token - so we needed to add that (i.e.
> channel_data_defaults.auth_token),
Right, it was never requested. And again, the delta handler is designed
to work with any server supporting byte range requests, not only
Hawkbit. But the connection can be verified if a private key /
certificate are passed for the SSL connection.
> along with a new chunks_downloader
> module settings for providing the gateway token.
>
> And finally, we noticed that the Hawkbit server doesn't conform to the
> multi-part RFC which says "The boundary must be followed immediately
> either by another CRLF and the header fields for the next part, or by
> two CRLFs, in which case there are no header fields for the next part
> (and it is therefore assumed to be of Content-Type text/plain)" - this
> causes multipart handling to fail - we made a fix for that.
Fine - does that mean you will push the fix to the Hawkbit project to be
integrated into mainline?
> Also
> SWUpdate assumes that the multi-part boundary separator is lowercase,
> however Hawkbit outputs an upper case one! (Happy to share any of
> those fixes).
Please share.
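For the record, MIME parameter names like "boundary" are
case-insensitive, while the boundary value itself must be matched
verbatim - so the fix belongs on the parsing side. A small illustration
in Python (the real fix in SWUpdate is of course C):

```python
import re

def extract_boundary(content_type: str):
    """Extract the multipart boundary from a Content-Type header value.

    The parameter name is matched case-insensitively (MIME parameter
    names are case-insensitive), while the boundary value is returned
    verbatim. Sketch only - not the actual SWUpdate patch.
    """
    m = re.search(r'boundary\s*=\s*"?([^";]+)"?', content_type, re.IGNORECASE)
    return m.group(1) if m else None

# Hawkbit may emit the parameter with different capitalization:
print(extract_boundary('multipart/byteranges; Boundary=THIS_STRING_SEPARATES'))
# -> THIS_STRING_SEPARATES
```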
>
> We got it to work, but with the limited time we had the solution
> wasn't elegant or generic and it makes many assumptions.
>
> Going forward,
Right - there is room for improvement.
> I think there are some questions open with regards to
> configuration:
>
> - How does the user specify in the sw-description the location of the
> zck that is relative to the location of the swu obtained via the DDI?
My current use cases and customer requests were to split the artifact
repository from the Hawkbit server, with the URL fixed and defined at
build time.
If companies require this, a way should be found, just probably a more
generic one than what you described before. But I agree, the information
is coming from DDI as "links" in the deployment info answer.
> - How does the user specify authorization tokens for it - presumably
> use the same one used for the DDI?
Yes
> - When the zck is not located on a hawkbit server, how does the user
> specify authorization information?
Key and certificate; no authorization token is currently provided.