
Function pointers: good or bad things?

pozz

Nov 18, 2021, 2:58:10 AM
In reply to my previous post, Paul Rubin says:

"I think MISRA C disallows function pointers, partly for this reason."

Are function pointers really bad things?

Honestly, I have found them useful and lately I have been using them
more and more. Am I wrong?

Function pointers help me in at least three situations.

With function pointers I can isolate C modules from the rest of the
project, so it is much simpler to create a well-defined hardware
abstraction layer that improves portability to other hardware platforms,
but above all lets me create a "simulator" on the development machine.

For example, consider a simple leds module interface:

int leds_init(void (*on_fn)(uint8_t idx), void (*off_fn)(uint8_t idx));
int led_on(uint8_t led_idx);
int led_off(uint8_t led_idx);
int led_toggle(uint8_t led_idx);
int led_blink(uint8_t led_idx);
int leds_on_all(void);
...

Here the important function is leds_init(), whose function-pointer
arguments actually switch LED idx on or off. I call leds_init()
in this way:

leds_init(bsp_led_on, bsp_led_off);

On the target, bsp_led_on() manipulates GPIO registers, an SPI GPIO
expander and so on; on the development machine, bsp_led_on() could be a
printf() or a different icon on a GUI.
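
On the development machine, for example, the BSP functions could be as
trivial as this (just a sketch):

/* bsp_sim.c - simulator BSP */
#include <stdint.h>
#include <stdio.h>

void bsp_led_on(uint8_t idx) {
    printf("LED %u: ON\n", (unsigned)idx);
}

void bsp_led_off(uint8_t idx) {
    printf("LED %u: OFF\n", (unsigned)idx);
}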

In this context, function pointers help make modules testable. If I
want to test the leds module, the test code could call leds_init() in
this way:

leds_init(test_led_on, test_led_off);

So during tests it's much simpler to break the links between modules and
insert the test suite where it is necessary.


Another nice thing that is possible with function pointers is to use
some OOP tricks, for example polymorphism.

Of course, one drawback of using function pointers is stack usage
calculation (see my previous post): the calculation becomes impossible,
because the tool can't resolve the function-pointer call.

David Brown

Nov 18, 2021, 4:37:30 AM
On 18/11/2021 08:58, pozz wrote:
> In reply to my previous post, Paul Rubin says:
>
>   "I think MISRA C disallows function pointers, partly for this reason."
>

He is, AFAIK, wrong - but I only looked at one version of MISRA (MISRA C
2012).

> Are function pointers really bad things?
>

Yes.

> Honestly, I have found them useful and lately I have been using them
> more and more. Am I wrong?

No.

They are useful, but they are also bad!

There is no doubt that function pointers are useful for this kind of
thing. But really, what you describe here is crying out for a move to
C++ and to use an Led class with virtual functions. While that may seem
like just function pointers underneath, they are /much/ safer because
they are tied so tightly to specific types and uses.

One alternative is to have a "proxy" in the middle that routes between
the modules. Another is to have connections handled via a header,
perhaps with conditional compilation.

Think of this as analogous to electronics. Function pointers are like
free connectors on a board that let you re-wire the board when you use
it. That can be very flexible, but makes it very difficult to be sure
the board is working and can quickly give you spaghetti systems. C++
virtual functions are like connectors with unique shapes - you can make
a few choices of your connections, but only to those points you have
specifically allowed. A "proxy" module is like a multiplexer or buffer
for controlling the signal routing, and a header with conditional
compilation is like DIP switches or jumpers.

Remember, the strength of a programming technique is often best measured
in terms of what it /restricts/ you from doing, not what it /allows/ you
to do.  It is more important to make it hard to get things wrong than to
make it easy to get things right.

>
> Another nice thing that is possible with function pointers is to use
> some OOP tricks, for example polymorphism.

Don't go there. Go to C++, rather than a half-baked home-made solution
in C. Using C for OOP like this made sense in the old days - but not now.

>
> Of course, one drawback of using function pointers is stack usage
> calculation (see my previous post): the calculation becomes impossible,
> because the tool can't resolve the function-pointer call.

That is one drawback. In general, function pointers mean you can't
make a call-tree from your program. (Embedded systems generally have a
"call forest", since interrupts start their own trees, as do threads or
tasks.) You can't follow the logic of the program, either with tools or
manually.

pozz

Nov 18, 2021, 5:59:01 PM
On 18/11/2021 10:37, David Brown wrote:
> On 18/11/2021 08:58, pozz wrote:
[...]
> One alternative is to have a "proxy" in the middle that routes between
> the modules. Another is to have connections handled via a header,
> perhaps with conditional compilation.

It would be nice to have a few examples of these approaches.
However, function pointers usually can assume only a few fixed values,
often only *one* value in a given build (for example, bsp_led_on() and
bsp_led_off() in my previous example).
I think that if it were possible to inform the call-graph or stack-usage
tool of these "connections", it could generate a good call graph and
worst-case stack usage.


Another situation where I use function pointers is when I have a
callback. Just to describe an example: you have two modules, where the
lower-level module [battery] continuously monitors the battery level
and emits an event (calls a function) when the level goes below a
custom threshold.
Suppose this event must be handled by the [control] module. One
solution without function pointers could be:

/* battery.h */
int battery_set_low_thres(uint16_t lvl_mV);

/* battery.c */
#include "battery.h"
#include "control.h"
...
    if (current_lvl < low_thres) {
        control_battery_low_level_event(current_lvl);
    }
...

/* control.h */
void control_battery_low_level_event(uint16_t lvl_mV);

/* control.c */
#include "control.h"
#include "battery.h"

void control_battery_low_level_event(uint16_t lvl_mV) {
    bsp_led_on(BSP_LED_BATTERY_LOW);
    power_down();
}

I don't like it: the lower-level module [battery] is tightly coupled to
the [control] module, because it needs the declaration of
control_battery_low_level_event(). Indeed, "control.h" must be included
in battery.c.

Instead I like an approach that uses a function pointer:

/* battery.h */
typedef void (*battery_low_cb)(uint16_t lvl_mV);
int battery_set_low_thres(uint16_t lvl_mV, battery_low_cb cb);

/* battery.c */
#include "battery.h"
...
static battery_low_cb low_cb;   /* registered by battery_set_low_thres() */
...
    if (current_lvl < low_thres) {
        if (low_cb != NULL) low_cb(current_lvl);
    }
...

/* control.h */
void control_init(void);

/* control.c */
#include "control.h"
#include "battery.h"

static void control_battery_low_level_event(uint16_t lvl_mV);

void control_init(void) {
    battery_set_low_thres(2700, control_battery_low_level_event);
}

static void control_battery_low_level_event(uint16_t lvl_mV) {
    bsp_led_on(BSP_LED_BATTERY_LOW);
    power_down();
}

Here the lower-level module [battery] doesn't know anything about the
higher-level module [control]; indeed, "control.h" isn't included at
all in battery.c.

This approach appears perfectly legal to me and much simpler to test,
because there is less coupling between modules.
Because [battery] doesn't depend on any higher-level module, I can
reuse it in other projects as-is.

Moreover, there are many C projects, even for embedded, that make heavy
use of this approach. For example, many functions in the lwIP project
accept function pointers as callbacks (for example, tcp_connect() [1]).
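
For instance, lwIP's raw-API connect call takes the callback directly;
from memory (double-check the exact signatures against the lwIP
headers), usage looks roughly like this:

/* Called by lwIP when the connection is established. */
static err_t on_connected(void *arg, struct tcp_pcb *tpcb, err_t err) {
    /* connection is up: start sending/receiving */
    ...
    return ERR_OK;
}

...
struct tcp_pcb *pcb = tcp_new();
tcp_connect(pcb, &server_ip, 80, on_connected);  /* server_ip: ip_addr_t */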


Another approach could be using weak functions:

/* battery.h */
int battery_set_low_thres(uint16_t lvl_mV);
void battery_event_low(uint16_t lvl_mV);   /* weak default in battery.c */

/* battery.c */
#include "battery.h"
...
/* Default handler: does nothing unless overridden by the application. */
void battery_event_low(uint16_t lvl_mV) __attribute__((weak));
void battery_event_low(uint16_t lvl_mV) {
    (void)lvl_mV;
}
...
    if (current_lvl < low_thres) {
        battery_event_low(current_lvl);
    }
...

/* control.h */
void control_init(void);

/* control.c */
#include "control.h"
#include "battery.h"

void control_init(void) {
    battery_set_low_thres(2700);
}

/* Overrides the weak default in battery.c. */
void battery_event_low(uint16_t lvl_mV) {
    bsp_led_on(BSP_LED_BATTERY_LOW);
    power_down();
}

[1] https://www.nongnu.org/lwip/2_1_x/group__tcp__raw.html

David Brown

Nov 19, 2021, 11:20:04 AM
On 18/11/2021 23:58, pozz wrote:
> On 18/11/2021 10:37, David Brown wrote:
>> On 18/11/2021 08:58, pozz wrote:
> [...]
>> One alternative is to have a "proxy" in the middle that routes between
>> the modules.  Another is to have connections handled via a header,
>> perhaps with conditional compilation.
>
> It would be nice to have a few examples of these approaches.

The rough idea is that you want to have something like these modules :

blinker (handling timing and the "user interface" of an led)
gpio (handling locally connected pins)
spi (handling pins connected via an spi bus).

You want the modules to be basically independent. You should be able to
write the blinker module without knowing whether the actual led is
connected directly to the microcontroller, or via an SPI bus. You
should be able to write the gpio and spi modules without knowing what
the pins will be used for.

Then you have a "master" module that somehow joins things together.

You have been using function pointers - in "master", you call a function
in "blinker" with pointers to functions in "gpio" or "spi".

Alternatively, blinker could call fixed, named functions "blink_on" and
"blink_off" that are expected to be defined by users of the "blinker"
module.  These would be defined in the "master" module, and call the
appropriate functions in "gpio" or "spi".  These are the proxy
functions, and are a little unusual in that they are declared in
"blinker.h", but defined in "master.c".

Another option is to have hooks defined in a header that is included by
modules such as "blinker", and which defines the functions to be called
for turning the lights on and off.  Then the connections are given in
that header, not in the blinker module.  And the blinker module can use
conditional compilation - if no hook functions are defined, none are
called.
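
Very roughly, and with invented names, the two approaches might look
like this (just a sketch):

/* blinker.h - proxies declared here, defined by the module's user */
void blink_on(void);
void blink_off(void);

/* master.c - proxy functions routing "blinker" to the real driver */
#include "blinker.h"
#include "gpio.h"

void blink_on(void)  { gpio_set_pin(LED_PIN); }
void blink_off(void) { gpio_clear_pin(LED_PIN); }

Or the hook-header version, with the connections given in a header:

/* blinker_hooks.h */
#define BLINK_ON_HOOK()   gpio_set_pin(LED_PIN)
#define BLINK_OFF_HOOK()  gpio_clear_pin(LED_PIN)

/* blinker.c */
#include "blinker_hooks.h"
...
#ifdef BLINK_ON_HOOK
    BLINK_ON_HOOK();   /* only used if a hook is defined */
#endif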

In both cases, you can use the "blinker", "gpio" and "spi" modules in
test harnesses or real code without changing the source code used.

> However, function pointers usually can assume only a few fixed values,
> often only *one* value in a given build (for example, bsp_led_on() and
> bsp_led_off() in my previous example).

Yes, that's true - but there is no way to express that in C.  Even if
you use a few specific types (struct typedefs) as parameters to ensure
that only functions with very specific signatures are accepted by the
compiler, it still means /any/ function with a compatible signature
could be used, and your external tools are just as helpless at
following the code flow.

> I think that if it were possible to inform the call-graph or stack-usage
> tool of these "connections", it could generate a good call graph and
> worst-case stack usage.
>

You can't do that in C.

> This approach appears perfectly legal to me and much simpler to test,
> because there is less coupling between modules.

It is perfectly legal, and may be easier to test - but it is not easier
to analyse, and it is harder to follow the code flow.  Don't
misunderstand me - your use of function pointers here is common and
idiomatic.  But function pointers have a cost, and alternative
structures can be better (though they too have their costs).

> Another approach could be using weak functions:

Yes, I have made use of weak functions in a similar fashion (noting that
weak functions are not standard C).  These too will confuse analysis
tools.

Don Y

Nov 19, 2021, 6:21:51 PM
On 11/18/2021 12:58 AM, pozz wrote:
> Are function pointers really bad things?

That's sort of like asking if floats or NULLs are "bad things".
It all boils down to how they are used -- or misused.

Using a pointer to a function when the function could otherwise be
called directly is seldom "advisable". Consider:

sort(args, direction_t dir) {
    switch (dir) {
    case FORWARD:
        fwd_sort(args); break;
    case BACKWARDS:
        bwd_sort(args); break;
    case SIDEWAYS:
        sdw_sort(args); break;
    case UPSIDEDOWN:
        updn_sort(args); break;
    }
    ...
    do_other_stuff();
}

contrasted with:

sort(args, sort_fn_t dir) {   /* dir: pointer to a sort function */
    (*dir)(args);
    ...
    do_other_stuff();
}

The first implementation avoids use of function pointers by
using a selector ("dir") to determine which particular "_sort()"
to invoke.

The second requires the caller to provide an explicit pointer
to a function that performs that action.

The latter is more flexible in that some unforeseen algorithm
can be used, down the road (e.g., diag_sort()). The first would
have to be recompiled to include an explicit reference to that
"new" algorithm (and a new "selector" -- "DIAGONAL" -- created).

/cf./ qsort(3C)

Function pointers facilitate late binding. Most applications
*know* what their bindings will be at compile time, so the
support is often not needed (even if the actual binding is deferred
to runtime).

Along with LATE binding, they facilitate *dynamic* binding -- where
you want to alter some set of invocations programmatically, at
run-time. E.g., call-backs, dispatchers and ISRs.

/cf./ atexit(3C) and similar

I use them heavily in my FSM implementations. It lets me encode
the transition tables into highly compressed forms while still
retaining a high degree of performance and flexibility.

The typical "anxiety" associated with them comes from the syntax
required to properly declare them -- coupled with the whole
notion of "pointers are bad". typedefs are your friend, here.
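
For example (a trivial illustration with made-up names):

/* Raw declaration: a function taking a pointer to a function that
   takes two const void pointers and returns int... hard on the eyes: */
void register_compare(int (*cmp)(const void *, const void *));

/* With a typedef, both the declaration and its uses become obvious: */
typedef int (*compare_fn)(const void *, const void *);
void register_compare(compare_fn cmp);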

> Honestly, I have found them useful and lately I have been using them
> more and more. Am I wrong?

Use them when needed, but not casually. You don't use long longs for
iterators, do you? Or doubles?

> Of course, one drawback of using function pointers is stack usage
> calculation (see my previous post): the calculation becomes impossible,
> because the tool can't resolve the function-pointer call.

Not "impossible" -- just not within the range of capabilities of most
current tools. Clearly *you* could chase down every reference to such
a pointer and examine the value(s) that are used in its place... and,
then, compute the stack usage of each of those functions and add it to the
current stack penetration at the time the function is actually
invoked. So, the information *exists*[1]...

[1] unless the application prompts the user to enter a hexadecimal
constant that it then uses as the address of the function to be
invoked (I've built debugging tools that supported such interfaces).

David Brown

Nov 20, 2021, 7:17:33 AM
All you write above is true (and it's a useful point to make about
compile-time and run-time binding).

In small embedded systems, however, you almost never /need/ dynamic
run-time binding. Your tasks are known at compile-time - as are your
interrupt functions, your state machines, and lots of other things.
Being able to change things at run-time gives you /no/ benefits in
itself, but significant costs in analysability, code safety, and static
checking (plus possibly significant code efficiency costs).

No one cares if a source module has to be re-compiled - but we /do/ care
if it has to be changed, and has to go through reviews or testing again.
Some kind of late-binding mechanism can help avoid that at times.

However, when we compare the two "sort" versions above, remember that
there are other possible arrangements and other possible pros and cons.
First, note that in the explicit switch, the compiler has more
information - it can, for example, check that all cases are covered
(assuming you have a decent compiler or an external static analysis tool
or linter).

Also note the pattern here. If you want more compact source code that
does not need to be changed when adding diagonal sorting, X-macros are a
good choice. (Or, better, C++ templates.)
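
A minimal X-macro sketch of that sort dispatch (names invented, just to
show the shape):

/* One line per algorithm - adding DIAGONAL means adding one line here. */
#define SORT_ALGOS \
    X(FORWARD,    fwd_sort)  \
    X(BACKWARDS,  bwd_sort)  \
    X(SIDEWAYS,   sdw_sort)  \
    X(UPSIDEDOWN, updn_sort)

/* Generate the prototypes... */
#define X(name, fn) void fn(int *args);
SORT_ALGOS
#undef X

/* ...the selector enum... */
typedef enum {
#define X(name, fn) name,
SORT_ALGOS
#undef X
} direction_t;

/* ...and the dispatch switch, always in sync with the enum. */
void sort(int *args, direction_t dir) {
    switch (dir) {
#define X(name, fn) case name: fn(args); break;
SORT_ALGOS
#undef X
    }
}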

>
> I use them heavily in my FSM implementations.  It lets me encode
> the transition tables into highly compressed forms while still
> retaining a high degree of performance and flexibility.
>

Function pointers are certainly often used in such circumstances. I
generally don't use them - because I rarely see compressed forms as
beneficial, they can't be analysed or graphed, you can't follow the code
directly, and you often have /less/ flexibility and significantly less
performance.

However, these things are always a balance, and there are many types of
state machine, many types of code, and many solutions.  How readable the
code is depends on who writes it and on what other information, diagrams
and documentation are available, as much as on the code structure.

> The typical "anxiety" associated with them comes from the syntax
> required to properly declare them -- coupled with the whole
> notion of "pointers are bad".  typedefs are your friend, here.
>

I agree that typedefs are your friend here - I'm in less agreement that
the syntax of function pointers is significant to people choosing not to
use them much. But maybe it is relevant for some people.

>> Honestly, I have found them useful and lately I have been using them
>> more and more. Am I wrong?
>
> Use them when needed, but not casually.  You don't use long longs for
> iterators, do you?  Or doubles?
>

Agreed - be aware of the pros and cons, and choose carefully according
to your own needs and your own ways of coding.

>> Of course, one drawback of using function pointers is stack usage
>> calculation (see my previous post): the calculation becomes
>> impossible, because the tool can't resolve the function-pointer call.
>
> Not "impossible" -- just not within the range of capabilities of most
> current tools.  Clearly *you* could chase down every reference to such
> a pointer and examine the value(s) that are used in its place... and,
> then, compute the stack usage of each of those functions and add it to the
> current stack penetration at the time the function is actually
> invoked.  So, the information *exists*[1]...
>
> [1] unless the application prompts the user to enter a hexadecimal
> constant that it then uses as the address of the function to be
> invoked (I've built debugging tools that supported such interfaces).


Although this thread has been talking about C, I think a comparison with
C++ is worth noting. For much of C++'s history, a major and
heavily-used feature has been class inheritance hierarchies with virtual
functions. These allow late binding - you have "Animal * p; p->run();"
where the actual "run" function depends on the actual type of the
animal. Such virtual functions are a big step up from function pointers
that you have in C, because they are more limited - "run()" is not just
any function, but it is a method for a type that derives from "Animal".
You can't do as much analysis or checking as for compile-time binding,
but it is still vastly better than you can get from C function pointers.

However, modern C++ is moving significantly away from that towards
compile-time polymorphism - templates and generic programming. The
tools in modern C++ have improved greatly for compile-time work, and do
so with every new C++ standard revision. Instead of having a table of
states and actions that is dynamically interpreted at run-time via
function pointers, or manually writing long switch statements for the
job, you can pass the table to a template function and have the optimal
compile-time binding code generated for you.

Instead of the "leds" module (in the first post) having an "led_init"
function that takes two function pointers and having the "led_on"
function calling one of those functions, you now have an "Led<>"
template class that takes a "DigitalOutput" type as a parameter.  The
Led<> class is fully flexible and independent of the actual type of
digital output in use, but the resulting code is as optimal as possible
and it is all generated, checked and analysed at compile-time.

Don Y

Nov 20, 2021, 6:17:17 PM
On 11/20/2021 5:17 AM, David Brown wrote:
> In small embedded systems, however, you almost never /need/ dynamic
> run-time binding. Your tasks are known at compile-time - as are your
> interrupt functions, your state machines, and lots of other things.
> Being able to change things at run-time gives you /no/ benefits in
> itself, but significant costs in analysability, code safety, and static
> checking (plus possibly significant code efficiency costs).

Define "small". In the late 70's, early 80's, I was working with
"small" 8b devices in 64KB address spaces. ROM and RAM were precious
and often in short supply.

Yet, we found it advantageous to implement run-time linking in our
products. The processor would do some essential testing, on power up.
Then, probe the memory space for "ROMs" (typically located on
other cards). Finding a ROM, it would examine a descriptor that
declared entry points ("functions") *in* that ROM before moving on
to locate the next ROM.

It would then invoke the "POST" entry point for each ROM that declared
one to finish POST.

Assuming that went as planned, it would invoke the "init()" entry
point. The ROM's init() could query the processor and other ROMs
for routines that it needed. In each case, a "fixup" table was built
and revised to allow CALLs (everything was ASM, back then) to be
dispatched through the RAM-based fixup tables to piece the code
together.

So, "display_character()" on a 7-segment display board could be
implemented entirely differently than "display_character()" on
a graphic LCD.

The 7-segment display board might be interested in the "line_frequency()"
determined by the power supply board (to minimize beat against ambient
light sources) -- while the LCD display might have no interest.

This allowed us to swap out boards without having to rebuild the
sources for any of the other boards.

It also allowed us to design "plug-in modules" (the size of a
pack of cigarettes) that could contain code libraries that
were accessed by the user (so, the number and names of the
functions within were unknown to the rest of the system
UNTIL that module was plugged in).

> No one cares if a source module has to be re-compiled - but we /do/ care
> if it has to be changed, and has to go through reviews or testing again.
> Some kind of late-binding mechanism can help avoid that at times.

If you are developing for a regulated industry (medical, pharma,
aviation, gaming, etc.) you *do* care about having to recompile
because you now have a different codebase -- that must be validated.

OTOH, if you've validated your "display board" against its
interface contract, you can use that with any compatible system
that *expects* that interface. New display board? Validate *its*
code -- and only its code.

>> I use them heavily in my FSM implementations. It lets me encode
>> the transition tables into highly compressed forms while still
>> retaining a high degree of performance and flexibility.
>
> Function pointers are certainly often used in such circumstances. I
> generally don't use them - because I rarely see compressed forms as
> beneficial, they can't be analysed or graphed, you can't follow the code
> directly, and you often have /less/ flexibility and significantly less
> performance.

If you have a few hundred states in an FSM and each state has a dozen
or more branches out to other states, the space consumed by the
state tables quickly escalates.

test_criteria_a()  &tested_item  &next_state_table     transition_routine()
.
test_criteria_c()  &other_item   &other_state_table    other_transition()
.
.
.
test_criteria_a()  &other_item   &another_state_table  one_more_transition()
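
In C, each row of such a table boils down to something like this
(simplified sketch; the names mirror the schematic above):

#include <stdbool.h>
#include <stddef.h>

struct state;                          /* forward declaration */

typedef struct {
    bool (*test)(const void *item);    /* test_criteria_x()    */
    const void *item;                  /* &tested_item         */
    const struct state *next;          /* &next_state_table    */
    void (*action)(void);              /* transition_routine() */
} transition_t;

struct state {
    const transition_t *rows;          /* transitions out of this state */
    size_t num_rows;
};

The dispatcher walks the current state's rows, fires the first test
that matches, runs its action and follows "next" -- identically for
every state.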

The biggest modules in my applications are usually those that
implement the "logic" of the UI -- because there are often so many
interactions and competing issues that have to be addressed
(contrast that with the code required for a UART driver).

Additionally, this reduces the effort to implement the FSM, thereby
ensuring *each* potential condition and transition is processed
identically. No risk of the developer forgetting to do <whatever>
on some particular transition.

>>> Honestly, I have found them useful and lately I have been using them
>>> more and more. Am I wrong?
>>
>> Use them when needed, but not casually. You don't use long longs for
>> iterators, do you? Or doubles?
>
> Agreed - be aware of the pros and cons, and choose carefully according
> to your own needs and your own ways of coding.

No. You always have to consider that someone else WILL end up having to
"deal with" your code. You want your code to be understandable without
requiring a tedious, detailed examination. Folks maintaining code tend
not to have the time to spend "trying on" the code to see how it fits.
They have to be able to quickly AND ACCURATELY understand what you are doing.
If they have trouble with expression syntax and might misunderstand
what you are doing, then "your way" was likely not the RIGHT way.

Be able to justify why you make design and implementation decisions.
Not just because it's "fun" or "elegant".

I suspect we've all encountered products that were overly complex (to
use) -- because the developer thought he was "giving the user flexibility"
(more flexible is better, right?).

Function pointers afford lots of flexibility. But, they also leave you
always wondering if there is some *other* value that may be used and
if your understanding of the code would change -- had you known of its
existence. (the first sort() nails down ALL the possibilities; the
second one leaves you forever uncertain...)

David Brown

Nov 21, 2021, 8:05:57 AM
On 21/11/2021 00:16, Don Y wrote:
> On 11/20/2021 5:17 AM, David Brown wrote:
>> In small embedded systems, however, you almost never /need/ dynamic
>> run-time binding.  Your tasks are known at compile-time - as are your
>> interrupt functions, your state machines, and lots of other things.
>> Being able to change things at run-time gives you /no/ benefits in
>> itself, but significant costs in analysability, code safety, and static
>> checking (plus possibly significant code efficiency costs).
>
> Define "small".  In the late 70's, early 80's, I was working with
> "small" 8b devices in 64KB address spaces.  ROM and RAM were precious
> and often in short supply.
>

Small-systems embedded programming is about dedicated devices with
dedicated programs, rather than the size of the device.

<snip war stories>

>> No one cares if a source module has to be re-compiled - but we /do/ care
>> if it has to be changed, and has to go through reviews or testing again.
>>   Some kind of late-binding mechanism can help avoid that at times.
>
> If you are developing for a regulated industry (medical, pharma,
> aviation, gaming, etc.) you *do* care about having to recompile
> because you now have a different codebase -- that must be validated.
>

You care more about the source than the compiled code, but yes, you /do/
care about having to recompile. However, there is very little
difference between recompiling one module in a program or many - the
resulting binary changes and must be tested and qualified appropriately.

> OTOH, if you've validated your "display board" against its
> interface contract, you can use that with any compatible system
> that *expects* that interface.  New display board?  Validate *its*
> code -- and only its code.

We are not talking about separate devices here. The posts are long
enough without extra side-tracking.

>
>>> I use them heavily in my FSM implementations.  It lets me encode
>>> the transition tables into highly compressed forms while still
>>> retaining a high degree of performance and flexibility.
>>
>> Function pointers are certainly often used in such circumstances.  I
>> generally don't use them - because I rarely see compressed forms as
>> beneficial, they can't be analysed or graphed, you can't follow the code
>> directly, and you often have /less/ flexibility and significantly less
>> performance.
>
> If you have a few hundred states in an FSM and each state has a dozen
> or more branches out to other states, the space consumed by the
> state tables quickly escalates.

Your design is totally and utterly broken, so there is no point in
pretending there is a "good" way to implement it. Throw it out and
start again, by dividing the problem into manageable pieces.

Don Y

Dec 2, 2021, 11:43:23 AM
[Sorry for the delay, I tend not to watch c.a.e anymore (lack of
"interesting" traffic)]

On 11/21/2021 6:05 AM, David Brown wrote:
> On 21/11/2021 00:16, Don Y wrote:
>> On 11/20/2021 5:17 AM, David Brown wrote:
>>> In small embedded systems, however, you almost never /need/ dynamic
>>> run-time binding. Your tasks are known at compile-time - as are your
>>> interrupt functions, your state machines, and lots of other things.
>>> Being able to change things at run-time gives you /no/ benefits in
>>> itself, but significant costs in analysability, code safety, and static
>>> checking (plus possibly significant code efficiency costs).
>>
>> Define "small". In the late 70's, early 80's, I was working with
>> "small" 8b devices in 64KB address spaces. ROM and RAM were precious
>> and often in short supply.
>
> Small-systems embedded programming is about dedicated devices with
> dedicated programs, rather than the size of the device.

So, static link a Linux kernel, slap it in a "closed"/dedicated
functionality device (like a DVR?) and it's now "small"?

I consider "small" to be an indication of relative complexity.

Fewer resources (small) *tends* to mean less complexity. A "mouse"
is a small system. An MRI scanner isn't. Both can be implemented
however their designers CHOSE to implement them -- neither is likely
going to morph into a chess program with the flip of a configuration
switch (so, no *need* to be able to dynamically RElink)!

> <snip war stories>

No, REAL examples of how this was used to advantage. Instead of
baseless claims with nothing to back them up.

>>> No one cares if a source module has to be re-compiled - but we /do/ care
>>> if it has to be changed, and has to go through reviews or testing again.
>>> Some kind of late-binding mechanism can help avoid that at times.
>>
>> If you are developing for a regulated industry (medical, pharma,
>> aviation, gaming, etc.) you *do* care about having to recompile
>> because you now have a different codebase -- that must be validated.
>
> You care more about the source than the compiled code, but yes, you /do/
> care about having to recompile. However, there is very little
> difference between recompiling one module in a program or many - the
> resulting binary changes and must be tested and qualified appropriately.

There's a huge difference! You only have to test the *component*
that has been modified -- not the entire system!

If you use COTS devices (hardware/software), do you *validate* their
designs? (against what -- a "marketing blurb"?) Are you sure all
the setup and hold times for signals are met (where did you find the
schematics?)? All the calling parameters and return values of
all the internal routines used? (source code availability?)

You likely just "trust" that they've done things "right" and
hope for the best. (after all, what *can* you do to convince
yourself of that?)

When *you* are the creator of those "components", you have access
to all of this information and can *ensure* that your vendor
(i.e., yourself) has produced the product that they claim to
have produced.

I can throw together a product that requires very little final
assembly testing if I've already tested/validated all of the components.
Treating the software that drives a component as part of the component,
FROZEN in silicon (ROM) reduces the size and complexity of the
portions that are yet-to-be-tested (e.g., the application layer)

Did they revalidate the *entire* 737MAX design?? (if you think
they did, then gotta wonder why it took so long to validate it
the *first* time 'round!)

>> OTOH, if you've validated your "display board" against its
>> interface contract, you can use that with any compatible system
>> that *expects* that interface. New display board? Validate *its*
>> code -- and only its code.
>
> We are not talking about separate devices here. The posts are long
> enough without extra side-tracking.

You've missed the point, entirely. Read more carefully:

"Then, probe the memory space for 'ROMs' (typically located on
other cards). Finding a ROM, it would examine a descriptor that
declared entry points ("functions") *in* that ROM before moving on
to locate the next ROM."

There's only one "device". It is implemented using multiple boards
("cards") -- as a good many "devices" are NOT implemented on "single
PCBs". For example:

- One board has the processor and whatever devices seem appropriate.

- Another board has a display (imagine a bunch of 7-segment LEDs
and drive electronics... or PGDs, VFDs, LCDs, etc.) along with
a ROM containing code that knows how to "talk" to the hardware
on *that* board -- BUT NO PROCESSOR.

- Another has the power supply and "power interface" (to monitor
voltage, battery, charger, line frequency, mains power available,
etc.) along with a ROM containing code to talk to the hardware
on *that* board -- BUT, again, NO PROCESSOR.

A "bus" connects them -- so *the* processor on the main board can
interact with the hardware on those cards. And, coincidentally, it
can access the ROM(s) containing the code (that DOES that interaction)!

I.e., the code for the *single* device is scattered across
three boards (in this case). The product glues them together
at run time. Power down the product. Swap out a board with
another that is *functionally* equivalent but potentially
implemented entirely differently (e.g., a different power
supply for a different market; or a different display technology
for use in a different environment) and the product FUNCTIONS
the same as it did before power was removed.

There are *huge* advantages to this approach -- esp in prototyping
new systems. How long do you want to wait to have functional
hardware on which to run your application? If you're buying COTS
modules, you're essentially doing this -- except module X from
vendor A likely won't have *code* on it that will seamlessly interface
with *code* on module Y from vendor B!

They *may* provide you with some "sample code" to assist with your
development. And, I'm *sure* that was written with YOUR needs in
mind? (not!)

I use the same approach in my current project -- except the boards
are *tiny* (~3 sq in) and the ROMs are virtual -- downloaded over
a network interface. So, the *internal* ROM in the processor can be
identical (because "processor boards" are *all* identical!) and, yet,
support any number of add-on boards, with the appropriate software
loaded and configured at run-time (the processor just has to identify
each connected board/module and request associated "firmware" -- in
a manner similar to a kernel "probing" the hardware available in
the environment in which it finds itself executing). Easy to do with
tiny MCUs, nowadays, which can also add "addressable functionality"
(like "ensure the card is powered down on reset -- until the processor
directs you to power it up").

Want to drive a speaker AND a video display? Put a speaker board
and a video display board on a processor, power up, wait for code
to load and you're all set. Want to scatter those functions onto
different "nodes"? Put the speaker board on one processor and the
video board on another. No change to software.

>>>> I use them heavily in my FSM implementations. It lets me encode
>>>> the transition tables into highly compressed forms while still
>>>> retaining a high degree of performance and flexibility.
>>>
>>> Function pointers are certainly often used in such circumstances. I
>>> generally don't use them - because I rarely see compressed forms as
>>> beneficial, they can't be analysed or graphed, you can't follow the code
>>> directly, and you often have /less/ flexibility and significantly less
>>> performance.
>>
>> If you have a few hundred states in an FSM and each state has a dozen
>> or more branches out to other states, the space consumed by the
>> state tables quickly escalates.
>
> Your design is totally and utterly broken, so there is no point in
> pretending there is a "good" way to implement it. Throw it out and
> start again, by dividing the problem into manageable pieces.

Amusing conclusion.

You're suggesting an *application* that is inherently of sufficient
complexity to require hundreds of states to REPRESENT, should NOT be
implemented by EXPOSING those states, the stimuli to which each responds
and the follow-on states? Instead, it should be implemented in some
other, likely LESS OBVIOUS way -- that makes it *easier* to determine
when a particular stimulus is being "unhandled"?

The user interface is the one place in a design where you want to see
EVERYTHING that affects the user. Because the *user* will see everything,
regardless of the "module" in which you try to "hide" it.

"What happens if power fails while he's doing X? What if the backup
battery will only guarantee 5 minutes of continued operation at
that point? What if 30 minutes? How will he know if the 'typical'
operation can likely be completed in the 'remaining battery'?"

Should you bury the device's reaction to these events in the "power
monitor" module? And, maybe have the UI module *talk* to it to tell
it when it is acceptable to commandeer the (one line!) display to
paint a message to inform the user of this event? And, when that
message should be dismissed? Should the "power monitor" module
need to talk to the display module to preserve the previous content
of the display, overwritten by its power fail message and needing
restoration thereafter? What if the previous display contents should
NOT be restored (because the UI doesn't *want* them restored in that
event)?

"What if the user decides he wants to pump Hi-Test instead of Regular
Unleaded... AFTER he's already selected the latter? Is his choice locked
in? Or, is there a means by which he can change his mind, after the fact?
Do we require him to finish the transaction and start a new one? How do
we tell him that?"

Should the module that monitors the fuel selection buttons make/enforce
that restriction? How should they communicate that fact to the UI
module (which can then decide how to communicate it to the user?)

"What if the customer doesn't scan their membership/discount card
at the *start* of the checkout process? Do we still award the
discounts on those 'special-sale-for-members-only' items that
have already been scanned? Or, is the customer SoL? What if he
tries to complete the transaction before doing so -- do we remind
him and give him one last chance? Or, too-bad-so-sad?"

Should the card reader module have direct access to a "membershipID"
variable, somewhere, that it uses to tell the rest of the machine
that the customer has presented a membership card? How do we tell
that card reader module NOT to accept the membership card except
in certain places in the UI protocol?

FSMs make ALL of these conditions (events) very visible. They
aren't buried in conditionals or switch statements or "between
the semicolons" or interactions between modules, etc. A "power
monitor" module may *determine* that power has failed (using
whatever algorithm is appropriate to *it* -- and of which the
user need not be aware). But, if that *event* is going to
affect the flow of activities/operations that the *user*
experiences, then it has to be made visible to the "process"
that is interacting with the user.

An FSM makes this VERY terse. It simply enumerates states
(which will exist in EVERY implementation you choose because they
are a characteristic of the user interface protocol and not the
implementation, though likely in a less discrete/visible manner)
and events of interest in each of those states. Choose state
names, event names and transition routine names well, and there's
no need to even go digging through the *code*!

I have an annoying capability of being able to walk up to a
product and tickle a yet-undiscovered bug in its user interface.
Because I *know* that folks aren't systematic about how they
express these interfaces. They scatter decisions/algorithms
that are pertinent to the UI in several "must be remembered"
locations in their codebases.

So, while it's likely that you verified that the cables that
interconnect the different "boxes" in your design are in place
during POST, I'll wager you've not thought about the fact
that the state of those cables can *change* during operation!

"Ooops! I just unplugged this cable and now your machine
THINKS the world is round but I've suddenly rendered it flat!"

Instead, put the POST in the UI FSM (because it will be interacting
with the user -- if only to allow you to say "Ready for operation")
and, chances are, someone will notice that there is a transition
that reacts to ALL_CABLES_CONNECTED to exit the "POST" state (and
generate that "ready" message on its outbound transition). And,
if they are diligent, they will ask why that "event" is never
present in subsequent operating states.

"Gee, I wonder what will happen if I unplug this cable *now*...
who/what will see that? How? What will they deduce from the
consequence of that *fault*?"

Ask yourself if your tech writer can write a user manual from your
codebase. The "next developer" will be starting maintenance of
your codebase from that level of experience (with THAT product).

Or, do YOU have to effectively tell him what your code is doing,
and when, in order for him to undertake that effort? (Are you
sure you remember EVERYTHING that can happen and how it will
manifest to the user?)

I designed a little "electronic taperule" many years ago. A
"simple" device (in terms of implementation, inherent complexity
and user interactions). Two buttons, one "7 segment" display.

But, all sorts of "modes" (states!) that it can operate in!

Power it on, power it off.
Indicate whether you want to display results in imperial or metric units.
Decimals, or fractions.
"Zero" the rule's readout -- regardless of how far extended the tape
may be at the time ("how much LONGER is this piece of lumber than
THAT piece?")
Flip the display (so you can hold the rule in your other hand)
Battery status.
Auto-power-off.
etc.

I drew a state transition chart and used that to present the design
to management. No technical skills needed (all "financial" people)
yet they could all understand what the device would do and how the
user would KNOW where he was (in the state diagram) as well as how
to navigate to another place in that diagram ("usage mode").

Yeah, it's amusing to watch grown men tracing lines from one "state
bubble" to another. But, had I given them a lengthy bit of prose
or some source code listings, their eyes would have glazed over.

[And, I could overtrace those same lines with a yellow highlighter
to make it apparent how much of the device's operation we'd
already discussed. "What does THIS transition do? It hasn't been
highlighted, yet... Oh, yeah! If you put the tape rule in your
other hand, the displayed values would be upside down! I see..."]