<snip>
> > I am stuck with what is available on the market. And a diode
> > is a logically simple component, yet we need many kinds.
> >
> >> I.e., we readily accept differences in "standard components"
> >> in other disciplines; why not when it comes to software
> >> modules?
> >
> > Well, software is _much_ more complicated than physical
> > engineering artifacts. A physical thing may have 10000 joints,
> > but if the joints are identical, then this is the moral
> > equivalent of a simple loop that iterates a fixed number of
> > times.
>
> This is the argument in favor of components. You'd much rather
> read a comprehensive specification ("datasheet") for a software
> component than have to read through all of the code that implements
> it.
Well, if there is a simple-to-use component that does what
you need, then using it is fine. However, for many tasks,
once a component is flexible enough to cover both your needs
and mine, its specification may be longer and trickier than
the code doing the task at hand.
> What if it was implemented in some programming language in
> which you aren't expert? What if it was a binary "BLOB" and
> couldn't be inspected?
There are many reasons why existing code cannot be reused.
Concerning BLOBs, I try to avoid them, and to a first
approximation I do not use them. One (serious, IMO) problem
with BLOBs is that sooner or later they become incompatible
with other things (the OS, other libraries, my code). Very
old source code can usually be run on modern systems with
modest effort; BLOBs normally require much more.
> > At the software level the number of possible pre-composed
> > blocks is so large that it is infeasible to deliver all of
> > them.
>
> You don't have to deliver all of them. When you wire a circuit,
> you still have to *solder* connections, don't you? The
> components don't magically glue themselves together...
Yes, one needs to make connections. In fact, in programming
most of the work is "making connections", so you want
something that is simple to connect. In other words, you want
all parts of your design to play nicely together. With code
delivered by other folks that is not always the case.
> > The classic trick is to parametrize. However, even if you
> > parametrize, there are hundreds of design decisions going
> > into a relatively small piece of code. If you expose all
> > design decisions, then the user may as well write his/her own
> > code, because the complexity will be similar. So normally
> > parametrization is limited, and there will be users who
> > find the hardcoded design choices inadequate.
> >
> > Another thing is that current tools are rather weak
> > at supporting parametrization.
>
> Look at a fleshy UART driver and think about how you would decompose
> it into N different variants that could be "compile time configurable".
> You'll be surprised as to how easy it is. Even if the actual UART
> hardware differs from instance to instance.
UARTs are simple. And yet some things are tricky: in C, to
get a "compile time configurable" buffer size you need to use
macros. That works, but in a sense the UART implementation
"leaks" into user code.
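A minimal sketch of what I mean (names hypothetical): the user
must define the macro before the driver header is pulled in, so
an implementation detail escapes the module boundary:

  /* uart_driver.h -- hypothetical driver header */
  #ifndef UART_RX_BUF_SIZE
  #define UART_RX_BUF_SIZE 64   /* default; user code may override */
  #endif

  /* uart_driver.c */
  static volatile unsigned char rx_buf[UART_RX_BUF_SIZE];
  static volatile unsigned rx_head, rx_tail;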
Well, there are routine tasks; for them it is natural to
reuse existing code. There are new tasks that are "almost"
routine; there one can come up with a good design at the
start. But in a sense the "interesting" tasks are those where
at the start you have only limited understanding. In such a
case it is hard to know "where the design is headed", except
that it is likely to change. Of course, a customer may be
dissatisfied if you say "I will look at the problem and maybe
I will find a solution". But lack of understanding is normal
in research (at the starting point), and I think that software
houses also take on risky projects, hoping that big wins on
successful ones will cover the losses on failures.
> I approach a design from the top (down) and bottom (up). This
> lets me gauge the types of information that I *may* have
> available from the hardware -- so I can sort out how to
> approach those limitations from above. E.g., if I can't
> control the data rate of a comm channel, then I either have
> to ensure I can catch every (complete) message *or* design a
> protocol that lets me detect when I've missed something.
Well, with a UART there will be some fixed transmission rate
(with the wrong clock frequency a UART would be unable to
receive anything). I would expect the MCU to be able to
receive all incoming characters (OK, assuming a hardware UART
with a driver using a high-priority interrupt). So, detecting
that you got too much should not be too hard. OTOH, sensibly
handling excess input is a different issue: if characters are
coming in faster than you can process them, then either your
CPU is underpowered or there is some failure causing excess
transmission. In either case the specific application will
dictate what should be avoided.
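For instance, the detection itself can be a few lines in the
receive interrupt; a hedged sketch with hypothetical names
(uart_read_data() stands for the MCU-specific data register
read):

  #define RX_BUF_SIZE 64
  static volatile unsigned char rx_buf[RX_BUF_SIZE];
  static volatile unsigned rx_head, rx_tail;
  static volatile unsigned char rx_overrun;

  void uart_rx_isr(void)
  {
      unsigned next = (rx_head + 1u) % RX_BUF_SIZE;
      if (next == rx_tail) {
          (void)uart_read_data();  /* still read, to clear the IRQ */
          rx_overrun = 1;          /* got too much: note it, drop char */
      } else {
          rx_buf[rx_head] = uart_read_data();
          rx_head = next;
      }
  }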
> There are costs to both approaches. If I dedicate resource to
> ensuring I don't miss anything, then some other aspect of the
> design will bear that cost. If I rely on detecting missed
> messages, then I have to put a figure on their relative
> likelihood so my device doesn't fail to provide its desired
> functionality (because it is always missing one or two characters
> out of EVERY message -- and, thus, sees NO messages).
My thinking goes toward using relatively short messages and a
buffer big enough for two messages. If there is a need for
high speed, I would go for continuous messages and DMA
transfers (using the break interrupt to discover the end of a
message in the case of variable-length messages). So the
device should be able to get all messages, and in the case of
excess message traffic whole messages could be dropped
(possibly looking first for some high-priority messages). Of
course, there may be some externally mandated message format
and/or communication protocol making DMA inappropriate.
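A hedged sketch of that scheme, with all hardware access
behind hypothetical helpers (the registers differ per MCU):
one buffer fills via DMA while the previous message is handed
off, and the break/idle interrupt marks end of message:

  #define MSG_MAX 64
  static unsigned char msg_buf[2][MSG_MAX];
  static unsigned active;             /* buffer DMA currently fills */

  void uart_break_isr(void)           /* end of message detected */
  {
      unsigned len  = MSG_MAX - dma_remaining();   /* hypothetical */
      unsigned done = active;
      active ^= 1u;
      dma_restart(msg_buf[active], MSG_MAX);       /* hypothetical */
      enqueue_message(msg_buf[done], len); /* may drop if overloaded */
  }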
Still, assuming interrupts, all characters should reach the
interrupt handler, possibly causing some extra CPU load. The
only possibility of unnoticed loss of characters would be
blocking interrupts for too long. If interrupts can be blocked
for too long, then I would expect loss of whole messages. In
such a case the protocol should have something like "don't
talk to me for the next 100 milliseconds, I will be busy" to
warn other nodes and request silence. Now, if you need to
faithfully support silliness like Modbus RTU timeouts, then I
hope that you are adequately paid...
> > In a slightly different spirit: in another thread you wrote
> > about accessing the disc without the OS file cache. Here I
> > normally depend on the OS, and OS file caching is a big
> > thing. It is not perfect, but the OS (OK, at least Linux)
> > does it reasonably well, so I have no temptation to avoid
> > it. And I appreciate that with the OS cache, performance is
> > usually much better than it would be "without cache".
> > OTOH, I routinely avoid stdio for I/O-critical things
> > (so no printf in I/O-critical code).
>
> My point about the cache was that it is of no value in my case;
> I'm not going to revisit a file once I've seen it the first
> time (so why hold onto that data?)
Well, the OS "cache" has many functions. One of them is
read-ahead; another is scheduling of requests to minimize seek
time. And besides data there is also metadata. OS functions
need access to metadata, and OSes are designed under the
assumption that there is a decent cache hit rate on metadata
accesses.
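(On Linux both behaviours can also be steered per file, instead
of bypassing the cache; a minimal sketch using posix_fadvise(),
assuming a one-pass sequential reader:)

  #include <fcntl.h>

  /* before reading: ask the kernel for aggressive read-ahead */
  static void advise_sequential(int fd)
  {
      posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
  }

  /* after the single pass: the pages won't be reused, drop them */
  static void advise_done(int fd)
  {
      posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
  }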
> >>> In other cases I fixed
> >>> bugs by replacing a composition of library routines with a
> >>> single routine: there were interactions making the simple
> >>> composition incorrect. The correct alternative was a single
> >>> routine.
> >>>
> >>> As I wrote, my embedded programs are simple and small. But I
> >>> use almost no external libraries. Trying some existing
> >>> libraries, I found that some produce rather large programs,
> >>> linking in a lot of unneeded stuff.
> >>
> >> Because they try to address a variety of solution spaces without
> >> trying to be "optimal" for any. You trade flexibility/capability
> >> for speed/performance/etc.
> >
> > I think that this is more subtle: libraries frequently force
> > some way of doing things. That may be good if you are trying
> > to quickly roll out a solution and are within the capabilities
> > of the library. But if you need/want a different design, then
> > the library may be too inflexible to deliver it.
>
> Use a different diode.
Well, when needed, I use my own library.
Nice, but I am not sure how practical this would be in modern
times. I have C code and can reasonably estimate resource use.
But there are changeable parameters which may enable/disable
some parts, and size/speed/stack use depends on compiler
optimizations. So there is variation. And there are traps: the
linker transitively pulls in dependencies, and if there are
"false" dependencies, they can pull in much more than is
strictly needed. One example of a "false" dependency is (or
maybe was) C++ VMTs. Namely, any use of an object/class pulled
in the VMT, which in turn pulled in all ancestors and methods.
If unused methods referenced other classes, that could easily
cascade. In both cases the authors of the libraries probably
thought that the provided "goodies" justified the size (the
intended targets were larger).
> So, before I designed the hardware, I knew what I would need
> by way of ROM/RAM (before the days of FLASH) and could commit
> the hardware to foil without fear of running out of "space" or
> "time".
>
> > code. That may be fine if you have a bigger device and need
> > the features, but for smaller MCUs it may be the difference
> > between not fitting into the device or (without the library)
> > having plenty of free space.
>
> Sure. But a component will have a datasheet that tells you what
> it provides and at what *cost*.
My 16x2 text LCD routine may pull in the I2C driver. If I2C is
not otherwise needed, this is an additional cost; otherwise the
cost is shared. The LCD routine also depends on a timer. Both
the timer and I2C affect MCU initialization. So even in very
simple situations the total cost is rather complex. And the
libraries that I tried presumably were not "components" in
your sense: you had to link the program to learn the total
size. Documentation mentioned dependencies when they affected
correctness, but otherwise not. To tell the truth, when a
library supports hundreds or thousands of different targets
(combinations of CPU core, RAM/ROM sizes, peripheral
configurations) with different compilers, it is hard to make
exact statements.
IMO, in an ideal world, for "standard" MCU functionality we
would have a configuration tool where the user could specify
the needed functionality and the tool would generate
semi-custom code and estimate its resource use. MCU vendor
tools attempt to offer something like this, but the reports I
have heard were rather unfavourable; in particular, it seems
that vendors simply deliver a thick library that supports
"everything", and linking to this library causes code bloat.
> > When I tried it, FreeRTOS for STM32 needed about 8k of
> > flash. That is fine if you need an RTOS. But ATM my designs
> > run without an RTOS.
>
> RTOS is a commonly misused term. Many are more properly called
> MTOSs (they provide no real timeliness guarantees, just multitasking
> primitives).
Well, FreeRTOS comes with "no warranty", but AFAICS they make
an honest effort to have good real-time behaviour. In
particular, code paths through FreeRTOS from events to user
code are of bounded and rather short length. User code may
still be delayed by interrupts/process priorities, but they
give a reasonable explanation. So it is up to the user to code
things in a way that gives the needed real-time behaviour, but
FreeRTOS normally will not spoil it and may help.
> IMO, the advantages of writing in a multitasking environment so
> far outweigh the "costs" of an MTOS that it behooves one to consider
> how to shoehorn that functionality into EVERY design.
>
> When writing in a HLL, there are complications that impose
> constraints on how the MTOS provides its services. But, for small
> projects written in ASM, you can gain the benefits of an MTOS
> for very few bytes of code (and effectively zero RAM).
Well, looking at books and articles, I did not find a
convincing argument/example showing that one really needs
multitasking for small systems. I tend to think rather in
terms of a collection of coupled finite state machines (or, if
you prefer, a Petri net). State machines transition in
response to events and may generate events. Each finite state
machine could be a task, but it is not clear that it should
be. Some transitions are simple and should be fast; those I
would do in interrupt handlers. Some others are triggered in a
regular way from other machines and are naturally handled by
function calls. Some need queues. The whole thing fits
reasonably well into the "super loop" paradigm.
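A minimal sketch of that structure (all names illustrative;
each *_step() runs one transition and may post events for the
other machines):

  extern int  uart_event_pending(void);
  extern void uart_fsm_step(void);
  extern int  timer_event_pending(void);
  extern void timer_fsm_step(void);

  void main_loop(void)
  {
      for (;;) {                     /* the super loop */
          if (uart_event_pending())  uart_fsm_step();
          if (timer_event_pending()) timer_fsm_step();
      }
  }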
I have found one issue that at first glance "requires"
multitasking. Namely, when one wants to put the system into
sleep mode when there is no work, the natural "super loop"
approach looks like
  if (work_to_do) {
      do_work();
  } else {
      /* an interrupt setting work_to_do may arrive here */
      wait_for_interrupt();
  }
where 'work_to_do' is a flag which may be set by interrupt
handlers. But there is a nasty race condition: if an interrupt
comes between the test of 'work_to_do' and
'wait_for_interrupt', then despite having work to do the
system will go to sleep and only wake on the next interrupt
(which, depending on specific requirements, may be harmless or
a disaster). I was unable to find simple code that avoids this
race. With a multitasking kernel the race vanishes: there is
an idle task which only does 'wait_for_interrupt', and the OS
scheduler passes control to worker tasks when there is work to
do. But when one looks at how the multitasker avoids the race,
it is clear that the crucial point is doing the control
transfer via return from interrupt. More precisely, variables
are tested with interrupts disabled, and after the decision is
made a return from interrupt transfers control. The important
point is that if an interrupt comes after the control
transfer, the interrupt handler will redo the test before
returning to user code. So what is needed is a piece of
low-level code that uses return from interrupt for control
transfer, and all interrupt handlers need to jump to this code
when finished. The rest (usually the majority) of the
multitasker is not needed...
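(An architecture-specific aside: on some cores the sleep
instruction itself closes the window. On ARM Cortex-M, WFI
wakes the core even while PRIMASK masks interrupts, so the
test and the sleep become effectively atomic. A minimal
sketch, assuming CMSIS intrinsics:)

  #include "cmsis_compiler.h" /* __disable_irq, __enable_irq, __WFI */

  extern volatile unsigned work_to_do; /* set by interrupt handlers */
  extern void do_work(void);

  void idle_loop(void)
  {
      for (;;) {
          __disable_irq();
          if (work_to_do) {
              work_to_do = 0;
              __enable_irq();
              do_work();
          } else {
              __WFI();         /* wakes: the interrupt goes pending */
              __enable_irq();  /* the pending handler runs here */
          }
      }
  }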
Ownership may cause problems: there is a tendency to "solve"
problems locally, that is, in code that a given person "owns".
This is good if there is an easy local solution. However, it
may also lead to ugly workarounds that do not really work
well, while the problem is easily solvable in a different part
(one "owned" by a different programmer). I have seen such a
thing several times: looking at the whole codebase, after some
effort it was possible to make a simple fix, while workarounds
sat in different ("wrong") places. I had no contact with the
original authors, but it seems that the workarounds were due
to "ownership".
--
Waldek Hebisch