[RFC PATCH 2/3] tools/swupdate-progress: sticky FAILURE state modifications

Konrad Schwarz

Apr 11, 2026, 3:46:30 PM
to swup...@googlegroups.com, Konrad Schwarz
From: Konrad Schwarz <konrad....@siemens.com>

tools/swupdate-progress can trigger various mechanisms
when the FAILURE state has been reached, e.g.,
run a "post script".

The previous patch turned the state=FAILURE message into
a "sticky" state: all further messages continue to report
state=FAILURE until the next update starts, as reported
by state=START. This makes the message's state field
mirror the state of the current update attempt,
rather than report a state transition.

To prevent multiple (back-to-back) messages with state=FAILURE
from tools/swupdate-progress triggering the post script multiple times,
this patch includes an edge-filter: triggering only
occurs on the transition to state=FAILURE.

Signed-off-by: Konrad Schwarz <konrad....@siemens.com>
---
tools/swupdate-progress.c | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/tools/swupdate-progress.c b/tools/swupdate-progress.c
index 94cd84e1..26f260c8 100644
--- a/tools/swupdate-progress.c
+++ b/tools/swupdate-progress.c
@@ -300,7 +300,9 @@ int main(int argc, char **argv)
connfd = -1;
redirected = !isatty(fileno(stdout));

+ bool failure = false;
while (1) {
+
if (connfd < 0) {
connfd = progress_ipc_connect(opt_w);
}
@@ -423,8 +425,14 @@ int main(int argc, char **argv)
}

switch (msg.status) {
- case SUCCESS:
+ case START:
+ failure = false;
+ break;
case FAILURE:
+ if (failure)
+ goto progress_case;
+ failure = true;
+ case SUCCESS:
if (opt_c) {
if (msg.status == FAILURE)
textcolor(BLINK, RED, BLACK);
@@ -466,6 +474,7 @@ int main(int argc, char **argv)
fprintf(stdout, "\nDONE.\n\n");
break;
case PROGRESS:
+progress_case:
/*
* Could also check for "source": <sourcetype> as sent
* by wfx but that's left for later when we have full
--
2.47.3

Stefano Babic

Apr 12, 2026, 4:48:28 AM
to Konrad Schwarz, swup...@googlegroups.com, Konrad Schwarz
Hi Konrad,

On 4/11/26 21:45, Konrad Schwarz wrote:
> From: Konrad Schwarz <konrad....@siemens.com>
>
> tools/swupdate-progress can trigger various mechanisms
> when the FAILURE state has been reached, e.g.,
> run a "post script".
>
> The previous patch turned the state=FAILURE message into
> a "sticky" state: all further messages continue to report
> state=FAILURE until the next update starts, as reported
> by state=START. This makes the messages state field
> mirror the state of the current update attempt,
> and not a "state transition" field.
>
> To prevent multiple (back-to-back) messages with state=FAILURE
> from tools/swupdate-progress triggering the post script multiple times,
> this patch includes an edge-filter: triggering only
> occurs on the transition to state=FAILURE.
>

This seems tricky, and it is not how this is meant to work.

"failure" post-scripts were already introduced. A script can be
preinstall, postinstall, or failure. The last ones are called when the
update fails, to restore the status from before the update started. This is
under SWUpdate's control, and it is already guaranteed that each
script is called just once.

A second important advantage is that the behavior is update-specific,
that is, each SWU can have its own set of "failure" scripts, while a
script in the progress tool is global.

Best regards,
Stefano

Konrad Schwarz

Apr 12, 2026, 10:13:11 AM
to swupdate
Hello Stefano,

for some reason,  my 1+3 patch series was pulled apart in groups.google.com -- I'm not exactly sure why this happened,
it seems I omitted --thread=shallow to git format-patch by accident.  I apologize for that.

In any case, patch 1 and 2 are not independent: if you decide to apply patch 1, you should also apply patch 2, which "undoes" the effects of patch 1 in tools/swupdate-progress.  If not, the post-failure script can run multiple times -- a big change in semantics!

I would understand rejecting the entire patch series, that's why I marked them RFC.  But applying only 1 without 2 would be dangerous.

Best Regards,

Konrad

Stefano Babic

Apr 12, 2026, 11:32:18 AM
to Konrad Schwarz, swupdate
Hi Konrad,

On 4/12/26 16:13, Konrad Schwarz wrote:
> Hello Stefano,
>
> for some reason,  my 1+3 patch series was pulled apart in
> groups.google.com -- I'm not exactly sure why this happened,
> it seems I omitted --thread=shallow to git format-patch by accident.  I
> apologize for that.
>

It is not an issue.

> In any case, patch 1 and 2 are *not* independent: if you decide to apply
> patch 1, you should also apply patch 2, which "undoes" the effects of
> patch 1 in tools/swupdate-progress.  If not, the post-failure script can
> run multiple times -- a big change in semantics!
>

They are not supposed to be applied individually.

The series seems to want to address different topics:

1) the progress interface sends some messages that can confuse a monitor
application. This should be specified, but then it should be fixed
inside SWUpdate.

2) some actions are to be taken if an update fails, and you do it by
adding a sort of post process + filtering.

Point 1) should be identified. At the beginning I had foreseen some
additional state transitions (a DONE state), but that was unnecessary. So
which transitions after a FAILURE seem confusing?

Note: progress interfaces wait for a START after a
SUCCESS/FAILURE. Other states are ignored.

Point 2) is solved in another way by adding post-failure scripts. These
are guaranteed to run once, and they run at the right time in the update.

So the patches weren't applied, but point 1) can be investigated further
to check whether something should be done.

Best regards,
Stefano

konrad....@gmail.com

Apr 13, 2026, 3:33:22 AM
to Stefano Babic, swupdate
> -----Original Message-----
> From: Stefano Babic <stefan...@swupdate.org>
> > In any case, patch 1 and 2 are *not *independent: if you decide to
> > apply patch 1, you should also apply patch 2, which "undoes" the
> > effects of patch 1 in tools/swupdate-progress. If not, the
> > post-failure script can run multiple times -- a big change in semantics!
> >
>
> They are not supposed to be applied singularly.
>
> The series seems to want to address different topics:
>
> 1) the progress interface sends some messages that can confuse a monitor
> application. This should be specified, but then it should be fixed inside SWUpdate.

I was writing a monitor application that displayed the current state using an industrial
"status light" with five colors stacked upon each other. Different color patterns
indicate different states. This was done for a trade fair, as an eye-catcher.

The funny thing was that in case of a failure, the FAILURE state didn't appear. Upon
closer examination, this was because some PROGRESS messages were being
sent *after* the FAILURE message, so the simple logic in the lamp switched from
FAILURE (blinking red) to the PROGRESS pattern (blinking green, IIRC).

This is obviously easy to filter; the first patch does this in SWUpdate. All messages
after a FAILURE message have their state changed to FAILURE, until the next START
message comes. (For the fair, I added the filter to the lamp code; but it affects
every progress monitor, hence, the patch to core).
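The remapping filter described above (applied in the lamp code, and by patch 1 in the core) can be sketched as a small, self-contained C function. The enum and function names here are simplified stand-ins for illustration, not swupdate's actual progress API:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-ins for swupdate's progress states (illustrative names). */
enum state { START, PROGRESS, SUCCESS, FAILURE };

/*
 * Remap the state of incoming messages: once FAILURE has been seen,
 * every following message is reported as FAILURE until the next START.
 * '*sticky' carries the filter state between calls.
 */
static enum state remap_state(enum state incoming, bool *sticky)
{
	if (incoming == START)
		*sticky = false;   /* new update attempt: reset the latch */
	else if (incoming == FAILURE)
		*sticky = true;    /* latch the failure */

	return *sticky ? FAILURE : incoming;
}
```

A monitor feeding each received status through such a function never sees PROGRESS after FAILURE, which is exactly the lamp's requirement.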


> 2) some actions are to be taken if an update fails, and you do it by adding a sort of
> post process + filtering.

SWUpdate's own tools/swupdate-progress is not just a passive progress monitor,
but can call an external script / reboot the machine upon certain state transitions.

The logic there up to now has been that the post script is invoked when a
FAILURE message is received. Because of my change above, multiple FAILURE
messages are now generated: the first (real) one, and then the re-labeled PROGRESS messages.

The second patch is to prevent tools/swupdate-progress from starting the post script multiple times:
the script is invoked only on the first FAILURE message.
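The edge filter of patch 2 boils down to remembering whether the previous message already reported FAILURE and acting only on the transition into it. A standalone sketch (again with simplified stand-in names, not swupdate's actual enum or structs):

```c
#include <assert.h>
#include <stdbool.h>

enum state { START, PROGRESS, SUCCESS, FAILURE };

/*
 * Return true only on the transition *into* FAILURE, so an action
 * hooked to this event runs once per failed update even when several
 * back-to-back FAILURE messages arrive.  '*in_failure' persists
 * between calls (e.g. a local variable in the caller's receive loop).
 */
static bool failure_edge(enum state incoming, bool *in_failure)
{
	if (incoming == START) {
		*in_failure = false;   /* new update attempt: re-arm */
		return false;
	}
	if (incoming != FAILURE)
		return false;
	if (*in_failure)
		return false;          /* repeated FAILURE: already handled */
	*in_failure = true;
	return true;               /* rising edge: trigger the post script */
}
```

The caller would invoke the post script only when this returns true, which matches the diff's `failure` flag and its reset on START.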

> Point 1) should be identified. I have foreseen at the beginning some additional state
> transitions (DONE state), that was unnecessary. So which are the transitions after
> a FAILURE that seems confusing ?

See above: PROGRESS messages appear after FAILURE, which under a naïve
interpretation would mean that some sort of progress is occurring after FAILURE.

(Unfortunately, I didn't save the trace showing this and it's not easy for me to recreate it).

>
> Note: progress interfaces wait for a START after a SUCCESS/FAILURE.
> Other states are ignored.
>
> Point 2) is solved in another way by adding post-failure scripts. These are
> guaranteed to run once, and they run at the right time in the update.

--
Konrad

Michael Adler

Apr 13, 2026, 4:37:31 AM
to konrad....@gmail.com, Stefano Babic, swupdate
Hi Stefano and Konrad,

> > Point 1) should be identified. I have foreseen at the beginning some additional state
> > transitions (DONE state), that was unnecessary. So which are the transitions after
> > a FAILURE that seems confusing ?

Agreed. The proposed patch is more of a workaround for a fundamental
problem, in my opinion. Maybe I can shed some light on this, as I also
did a brief analysis back then:

1. server_wfx.lua executes (around line 1490):

local msg = ("Error installing artifact %d/%d."):format(count, #job.definition.artifacts)
suricatta.notify.error(msg)

2. The SWUpdate progress listener receives a message of severity PROGRESS: {"0": [100%] Error installing artifact 1/1.}

Expected behavior would be to receive an error/failure message.

More generally, it appears that Lua suricatta.notify.error() messages
are actually logged with PROGRESS severity.

I haven't investigated this in detail, but my best guess was that
progress_thread.c:swupdate_progress_update unconditionally sets the
severity to PROGRESS, overriding the original value. This needs
further investigation though.
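If that hypothesis is right, a possible fix would be to make the update path preserve a terminal status instead of unconditionally stamping PROGRESS onto every message. The sketch below is hypothetical and simplified; the struct and function names are illustrative, not swupdate's actual progress_thread.c code:

```c
#include <assert.h>

enum state { START, PROGRESS, SUCCESS, FAILURE };

/* Illustrative stand-in for the progress message being maintained. */
struct progress_report {
	enum state status;
	unsigned int percent;
};

/*
 * Guarded update: only overwrite the status with PROGRESS while the
 * update is still running.  Once FAILURE (or SUCCESS) has been
 * recorded, keep it, so late percentage updates cannot mask the error.
 */
static void update_progress(struct progress_report *r, unsigned int percent)
{
	r->percent = percent;
	if (r->status != FAILURE && r->status != SUCCESS)
		r->status = PROGRESS;
}
```

Whether this is the right place for such a guard depends on where the stray PROGRESS messages actually originate, which is the open question in this thread.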

> (Unfortunately, I didn't save the trace showing this and it's not easy
> for me to recreate it).

Any non-successful update via wfx should actually reproduce this
behavior. (In our case, it was attempting to install an update with the
same rootfs UUID as the currently running system.)

Kind regards,
Michael

--
Michael Adler

Siemens AG
Technology
Connectivity & Edge
Open Source Embedded Systems
FT RPD CED OES-DE
Friedrich-Ludwig-Bauer-Str. 3
85748 Garching, Germany


Stefano Babic

Apr 13, 2026, 4:37:31 AM
to konrad....@gmail.com, Stefano Babic, swupdate
Hi Konrad,

On 4/13/26 09:33, konrad....@gmail.com wrote:
>> -----Original Message-----
>> From: Stefano Babic <stefan...@swupdate.org>
>>> In any case, patch 1 and 2 are *not *independent: if you decide to
>>> apply patch 1, you should also apply patch 2, which "undoes" the
>>> effects of patch 1 in tools/swupdate-progress. If not, the
>>> post-failure script can run multiple times -- a big change in semantics!
>>>
>>
>> They are not supposed to be applied singularly.
>>
>> The series seems to want to address different topics:
>>
>> 1) the progress interface sends some messages that can confuse a monitor
>> application. This should be specified, but then it should be fixed inside SWUpdate.
>
> I was writing a monitor application that displayed the current state using an industrial
> "status light" with five colors stacked upon each other. Different color patterns
> indicate different stati. This was done for a trade fair, as an eye catcher.
>

Ok, nice.

> The funny thing was that in case of a failure, the FAILURE state didn't appear. Upon
> closer examination, this was because some PROGRESS messages were being
> sent *after* the FAILURE message, so the simple logic in the lamp switched from
> FAILURE (blinking red) to the PROGRESS pattern (blinking green, IIRC).

This is exactly what should be found. Somewhere the PROGRESS status is set,
and I would like to check when and how that happens.

>
> This is obviously easy to filter; the first patch does this in SWUpdate.

Yes, but the patch simply overwrites the status instead of fixing where
it occurs. So it won't be applied.


> All messages
> after a FAILURE message have their state changed to FAILURE, until the next START
> message comes. (For the fair, I added the filter to the lamp code; but it affects
> every progress monitor, hence, the patch to core).

Ok

>
>
>> 2) some actions are to be taken if an update fails, and you do it by adding a sort of
>> post process + filtering.
>
> SWUpdates own tools/swupdate_progress is not just a passive progress monitor,
> but can call an external script / reboot the machine upon certain state transitions.
>
> The logic there up to now has been that when receiving a FAILURE message,
> the post script is invoked. Because of my change above, multiple FAILURE
> messages are being generated: the first (real) one, and then the re-labeled PROGRESS messages.

Ok - but why is this not done with post-failure scripts in the SWU, as
these are intended for this purpose?

The post script in swupdate-progress is global, and it is called for
each SWU. It is also unrelated to the SWU, so it may already be in the
old (the running) version, and this adds a bad dependency between
software versions.

>
> The second patch is to prevent tools/swupdate_progress from starting the post script multiple times:
> the script is invoked only on the first FAILURE message.

That is clear, but it is a work-around for the first work-around.

>
>> Point 1) should be identified. I have foreseen at the beginning some additional state
>> transitions (DONE state), that was unnecessary. So which are the transitions after
>> a FAILURE that seems confusing ?
>
> See above: PROGRESS messages are appearing after FAILURE, which in naïve
> interpretation would mean that some sort of progress is occurring after FAILURE.
>
> (Unfortunately, I didn't save the trace showing this and it's not easy for me to recreate it).

This is what should be necessary to identify the issue.

>
>>
>> Note: progress interfaces wait for a START after a SUCCESS/FAILURE.
>> Other states are ignored.
>>
>> Point 2) is solved in another way by adding post-failure scripts. These are
>> guaranteed to run once, and they run at the right time in the update.

Best regards,
Stefano

--
_______________________________________________________________________
Nabla Software Engineering GmbH
Hirschstr. 111A | 86156 Augsburg | Tel: +49 821 45592596
Geschäftsführer : Stefano Babic | HRB 40522 Augsburg
E-Mail: sba...@nabladev.com

konrad....@gmail.com

Apr 13, 2026, 5:07:39 AM
to Stefano Babic, swupdate, Adler, Michael (FT RPD CED OES-DE)
Hi Stefano,

> > The funny thing was that in case of a failure, the FAILURE state
> > didn't appear. Upon closer examination, this was because some
> > PROGRESS messages were being sent *after* the FAILURE message, so the
> > simple logic in the lamp switched from FAILURE (blinking red) to the PROGRESS
> pattern (blinking green, IIRC).
>
> This exactly what should be found. Somewhere the PROGRESS status is set, and
> I would like to check when it happens and how.
>
> >
> > This is obviously easy to filter; the first patch does this in SWUpdate.
>
> Yes, but the patch simply overwrite the status instead of fixing where it occurs. So
> it won't be applied.

I am obviously new to this code, but all locations where PROGRESS messages are
generated seemed "normal"/non-buggy, at least to me. My conclusion was that, from the viewpoint
of the PROGRESS-generating code, it was basically doing the right thing.

Even if we did find a different location, the fix would ultimately be the same:
the originator would need to remember that FAILURE had already
been generated and remap the PROGRESS messages as FAILURE afterward.

On the other hand, if unrelated parts of the system were generating PROGRESS messages,
then the central solution I created would actually be the "right" place to do this remapping.

> > SWUpdates own tools/swupdate_progress is not just a passive progress monitor,
> > but can call an external script / reboot the machine upon certain state transitions.
> >
> > The logic there up to now has been that when receiving a FAILURE message,
> > the post script is invoked. Because of my change above, multiple FAILURE
> > messages are being generated: the first (real) one, and then the re-labeled
> PROGRESS messages.
>
> Ok - but why ist this not done with post-failure scripts in the SWU, as
> these are thought for this purpose ?
>
> The post script in swupdate-progress is global, and it is called from
> each SWU. It is also unrelated to the SWU, so it is maybe already in the
> old (the running) version and this adds a bad dependency between
> software versions.

Yes, I see. A staged roll-out would be required: first updating swupdate-progress,
then updating the core logic.

The change is definitely not backwards compatible: patch #2, to swupdate-progress,
is required to reinstate the old behavior. Note that the patched swupdate-progress
will work fine with an older swupdate core,
because the first/single FAILURE message still arrives.

In any case, patch #3 updates the minor revision number of the API, to be able
to distinguish the old and new behavior.

Even if the problem is solved differently than in patch #1, patch #2 is required, because the
effect would be the same: several FAILURE messages are produced (unless
those PROGRESS messages are simply dropped at the source after a FAILURE).

--
Konrad

Stefano Babic

Apr 13, 2026, 6:15:59 AM
to Michael Adler, konrad....@gmail.com, Stefano Babic, swupdate
Hi Michael & Konrad,

On 4/13/26 10:36, 'Michael Adler' via swupdate wrote:
> Hi Stefano and Konrad,
>
>>> Point 1) should be identified. I have foreseen at the beginning some additional state
>>> transitions (DONE state), that was unnecessary. So which are the transitions after
>>> a FAILURE that seems confusing ?
>
> Agreed. The proposed patch is more of a workaround for a fundamental
> problem, in my opinion.

I fully agree.

> Maybe I can shed some light on this, as I also
> did a brief analysis back then:
>
> 1. server_wfx.lua executes (around line 1490):
>
> local msg = ("Error installing artifact %d/%d."):format(count, #job.definition.artifacts)
> suricatta.notify.error(msg)
>
> 2. The SWUpdate progress listener receives a message of severity PROGRESS: {"0": [100%] Error installing artifact 1/1.}
>
> Expected behavior would be to receive an error/failure message.

Maybe this is not the last error, but in any case we should check the
sequence of messages.

>
> More generally, it appears that Lua suricatta.notify.error() messages
> are actually logged with PROGRESS severity.
>
> I haven't investigated this in detail, but my best guess was that
> progress_thread.c:swupdate_progress_update unconditionally sets the
> severity to PROGRESS, overriding the original value. This needs
> further investigation though.

Fully agree.

>
>> (Unfortunately, I didn't save the trace showing this and it's not easy
>> for me to recreate it).
>
> Any non-successful update via wfx should actually reproduce this
> behavior. (In our case, it was attempting to install an update with the
> same rootfs UUID as the currently running system.)
>

Stefano Babic

Apr 13, 2026, 6:21:05 AM
to konrad....@gmail.com, Stefano Babic, swupdate, Adler, Michael (FT RPD CED OES-DE)
On 4/13/26 11:07, konrad....@gmail.com wrote:
> Hi Stefano,
>
>>> The funny thing was that in case of a failure, the FAILURE state
>>> didn't appear. Upon closer examination, this was because some
>>> PROGRESS messages were being sent *after* the FAILURE message, so the
>>> simple logic in the lamp switched from FAILURE (blinking red) to the PROGRESS
>> pattern (blinking green, IIRC).
>>
>> This exactly what should be found. Somewhere the PROGRESS status is set, and
>> I would like to check when it happens and how.
>>
>>>
>>> This is obviously easy to filter; the first patch does this in SWUpdate.
>>
>> Yes, but the patch simply overwrite the status instead of fixing where it occurs. So
>> it won't be applied.
>
> I am obviously new to this code, but all locations where PROGRESS messages are
> generated seemed "normal"/non buggy to me at least. My conclusion was that from the viewpoint
> of the PROGRESS generating code, it was basically doing the right thing.
>
> Even if we did find a different location, the fix would ultimately be the same:
> the originator would need to remember that FAILURE had already
> been generated and remap the PROGRESS messages as FAILURE afterward.

Mmmhhh... no. It could be that something in this direction should be
done, but it should first be verified how this happens,
instead of building a work-around that fixes the output rather than
checking the source of the issue.

>
> On the other hand, if unrelated parts of the system were generating PROGRESS messages,
> then the central solution I created would actually be the "right" place to do this remapping.

The central solution is an overwrite without checking why we get this,
and it won't be merged.

>
>>> SWUpdates own tools/swupdate_progress is not just a passive progress monitor,
>>> but can call an external script / reboot the machine upon certain state transitions.
>>>
>>> The logic there up to now has been that when receiving a FAILURE message,
>>> the post script is invoked. Because of my change above, multiple FAILURE
>>> messages are being generated: the first (real) one, and then the re-labeled
>> PROGRESS messages.
>>
>> Ok - but why ist this not done with post-failure scripts in the SWU, as
>> these are thought for this purpose ?
>>
>> The post script in swupdate-progress is global, and it is called from
>> each SWU. It is also unrelated to the SWU, so it is maybe already in the
>> old (the running) version and this adds a bad dependency between
>> software versions.
>
> Yes, I see. A staged roll-out would be required, first updating swupdate-progress,
> then updating the core logic.
>
> The change is definitely not backwards compatible: patch #2, to swudate-progress,
> is required to reinstate the old behavior. Note that the patched swupdate-progress
> will work fine with an older swupdate core,
> because the first/single FAILURE message still arrives.
>
> In any case, patch #3 updates the minor revision number of the API, to be able
> to distinguish the old and new behavior.
>
> Even if the problem is solved different from patch #1, patch #2 is required, because the
> effect would be the same: several FAILURE messages are produced (unless
> those PROGRESS messages are simply dropped at the source after a FAILURE).

You are not answering the question. These two patches try to add a
feature that is already realized in a better, per-SWU way. Again:
why is this not done with post-failure scripts in the SWU, instead of
relying on post-update scripts that live outside, in the running software?

Stefano Babic

Apr 13, 2026, 9:00:16 AM
to konrad....@gmail.com, Adler, Michael (FT RPD CED OES-DE), swup...@googlegroups.com
Hi Konrad,

please *never* remove the ML from the communication once a discussion
has been posted there.

On 4/13/26 13:34, konrad....@gmail.com wrote:
> Hello Stefano,
>
>> -----Original Message-----
>> The central solution is an overwrite without checking why we get this
>> and won't be merged.
>
> I understand. I will try to recreate the problem later this week.
>
>> You are not answering the question. These two patches try to add a
>> feature that is already realized in a better and per SWU way. Again:
>> why is this not done with post-failure scripts in the SWU instead of
>> rely to psot-update scripts that are outside in the running software ?
>
> Hmm, there seems to be some miscommunication.

Maybe.

>
> Is your question
> "why is this not done with post-failure scripts in the SWU,
> as these are thought for this purpose?", where "this"
> is "Still work if called repeatedly, i.e.,
> don't do the FAILURE action again"? (If not, see below...)
>
> If the core were changed to produce multiple FAILURE messages
> and swupdate-progress were not adjusted accordingly, swupdate-progress
> would call the post-failure script multiple times. The post-failure
> scripts are user defined and outside of SWUpdate's purview,
> so requiring them to be idempotent would place a new burden on
> SWUpdate's users. This would be a -- completely unnecessary --
> backwards-incompatible change, at least in my mind.

No, there is still a misunderstanding.

You want to use swupdate-progress, a monitoring application (which
should just monitor and not play an active role in the update), to
execute restore scripts in case of FAILURE.

I am telling you that this should be part of the SWU; that means in the
SWU (sw-description) you add a section:

scripts: (
	{
		filename = <...>;
		type = "postfailure";
		data = ".....";
	},

	<multiple post-failure scripts allowed>
);

They will roll back (if required) in case of failure, and it is up to the
release manager (or integrator, or whoever) to decide how it is done.

>
> It would be poor engineering as well, since the script would have to record
> the fact that it has been called already in a persistent location
> somewhere (e.g. a file), and remember to clear this location at certain state transitions,
> whereas we here just use a local variable on the stack.
>

You are not getting the point.

>
> Upon re-reading:
> I don't understand what you mean with "why is this not with post-failure
> scripts in the SWU instead of rely on ... outside in the running software".

In fact, there is a misunderstanding. I hope it is clearer now.

>
> My application does not use post-failure scripts at all, I'm attaching to the lua bindings
> of the progress API.

How to react when an update succeeds or fails should be part of the
release itself, that is, it should be embedded in the SWU. That also means
you do not need to roll out new scripts when they change
(i.e., update swupdate-progress and its related scripts),
because this is part of the release, which remains self-contained.

> I am trying to make that API a little bit easier to use while preserving
> the existing interface to the post script of tools/swupdate-progress.
>

Best regards,
Stefano Babic

konrad....@gmail.com

Apr 14, 2026, 6:49:08 AM
to Stefano Babic, Adler, Michael (FT RPD CED OES-DE), swup...@googlegroups.com
Hi Stefano,

> No, the misunderstanding is going on.
>
> You want with a monitoring application (that should just monitor and not be active in
> the update) use swupdate-progress to execute restore scripts in case of FAILURE.

No, I don't want this at all. I have no stake in the matter. My only interest is to make
progress monitors a little bit simpler to implement.

However, your existing code base has a monitoring application that offers this ability.
Is this a good idea? Maybe not, but it's there.

> I am telling you that this should be part of the SWU, that means in the SWU (sw-
> description) you add a section:
>
> scripts:(
> {
> filename=<...>;
> type = "postfailure";
> data = ".....";
> },
>
> < multiple post-failure scripts allowed>
>
> }
>
> They will rollback (if required) in case of failure and it is the release manager (or
> integrator or whatever) to decide how it is done.

Nevertheless, the existing tools/swupdate-progress.c has the options

-e, --exec <script> call the script with the result of the update
-r, --reboot [<script>] reboot after a successful update by calling the given script, or
by calling the reboot() syscall by default.

I understand you to be saying these options should be deprecated. That's fine,
but is not the subject of my patch (and not for me to decide).

The patch merely ensures that the script passed as the argument to -e
continues to be invoked exactly as before.

> > My application does not use post-failure scripts at all, I'm attaching
> > to the lua bindings of the progress API.
>
> The fact that an update is successful or not should be part of the release itself, that
> means it should be embedded in the SWU. That means also that you do not need
> to roll out new scripts in case they are changed (that means update of swupdate-
> progress and related scripts), because this is part of the release itself that remains
> self contained.

That's good to know, but can we rule out that some of SWUpdate's users are using
tools/swupdate-progress with the options mentioned above?
If not, they will likely be unpleasantly surprised if things
suddenly stop working.


Anyhow, all of this is going off on a tangent -- if a root cause for the excess
PROGRESS messages can be found and eliminated, all of this is
moot.

--

All the best,
Konrad

Stefano Babic

Apr 14, 2026, 7:16:35 AM
to konrad....@gmail.com, Adler, Michael (FT RPD CED OES-DE), swup...@googlegroups.com
Hi Konrad,

On 4/14/26 12:49, konrad....@gmail.com wrote:
> Hi Stefano,
>
>> No, the misunderstanding is going on.
>>
>> You want with a monitoring application (that should just monitor and not be active in
>> the update) use swupdate-progress to execute restore scripts in case of FAILURE.
>
> No, I don't want this at all. I have no stake in the matter. My only interest is to make
> progress monitors a little bit simpler to implement.
>
> However, your existing code base has a monitoring application that offers this ability.
> Is this a good idea? Maybe not, but it's there.
>

SWUpdate is very flexible, and there are several ways to do the same
thing. So yes, swupdate-progress monitors the update and can run
scripts at the end; this is a use case.

>> I am telling you that this should be part of the SWU, that means in the SWU (sw-
>> description) you add a section:
>>
>> scripts:(
>> {
>> filename=<...>;
>> type = "postfailure";
>> data = ".....";
>> },
>>
>> < multiple post-failure scripts allowed>
>>
>> }
>>
>> They will rollback (if required) in case of failure and it is the release manager (or
>> integrator or whatever) to decide how it is done.
>
> Nevertheless, the existing tools/swupdate-progress.c has the options
>
> -e, --exec <script> call the script with the result of the update
> -r, --reboot [<script>] reboot after a successful update by call the given script or
> by calling the reboot() syscall by default.
>

Sure. It depends on the use case - but the progress tool has less control
over the status. The scripts above can be driven with Lua variables
that are valid for the whole update itself.

> I understand you to be saying these options should be deprecated.

They are not deprecated, but the script in the progress tool is meant to
restore the monitor itself, e.g., switching between applications for an
HMI. Sure, it can be used to restore the status from before the
update (restoring containers, etc.), but for that, Lua scripts inside the SWU are better.

> That's fine,
> but is not the subject of my patch (and not for me to decide).
>
> The patch merely ensures that the script passed as the argument to -e
> continues to be invoked exactly as before.
>
>>> My application does not use post-failure scripts at all, I'm attaching
>>> to the lua bindings of the progress API.
>>
>> The fact that an update is successful or not should be part of the release itself, that
>> means it should be embedded in the SWU. That means also that you do not need
>> to roll out new scripts in case they are changed (that means update of swupdate-
>> progress and related scripts), because this is part of the release itself that remains
>> self contained.
>
> That's good to know, but can we rule out that some of SWUpdate's users are using
> tools/swupdate-progress with the options mentioned above?

Users have this possibility, too. They have to design their own update
concept and find the best solution.

> If not, it's likely they will be unpleasantly surprised if things
> suddenly stopped working.
>
>
> Anyhow, all of this is going on a tangent -- if a root cause for excess
> PROGRESS messages can be found and eliminated, all of this is
> moot.

Right, this is the point.

Best regards,
Stefano