
C++, Ada, ...


pozz

Apr 17, 2021, 11:49:03 AM
What do you think about languages other than the usual C for embedded
systems?

I mean C++ and Ada, but also Python. Ever more powerful embedded
processors are coming, so I expect new and modern languages to enter
the embedded world.

Hardware is cheaper and more powerful than ever, but software stays
expensive. New and modern languages could reduce the cost of software,
because they are simpler than C and much closer to the desktop/server
programming paradigm.

We embedded software developers have been lucky: electronics and
technologies change rapidly, but embedded software has changed more
slowly than desktop/mobile software. Think of mobile app developers:
they may already have changed IDEs, tools and languages ten times in a
few years. C for embedded today is very similar to what it was 10 years ago.

However, I think this situation will change for embedded developers in
the next few years, and we will be forced to learn modern technologies,
such as new languages and tools.
It's fine for me to study and learn new things... but it will be harder
to acquire new skills for real jobs.

What do you think?

David Brown

Apr 17, 2021, 12:45:57 PM
You should probably add Rust to your list - I think its popularity will
increase.

Python is great when you have the resources. It's the language I use
most on PCs and servers, and it is very common on embedded Linux
systems (like Pis, and industrial equivalents). MicroPython is
sometimes found on smaller systems, such as ESP32 devices.

Ada involves a fair amount of change to how you work, compared to C
development. (Disclaimer - I have only done a very small amount of Ada
coding, and no serious projects. But I have considered it as an
option.) I really don't see it becoming much more common, and outside
of niche areas (defence, aerospace) it is very rare. Programming in Ada
often takes a lot more effort even for simple things, leading quickly to
code that is so wordy that it is unclear what is going on. And most of
the advantages of Ada (such as better typing) can be achieved in C++
with less effort, and at the same time C++ can give additional safety on
resources that is harder to get in Ada. (But Ada has some nice
introspective features that C++ programmers currently only dream about.)

C++ is not uncommon in embedded systems, and I expect to see more of it.
I use it as my main embedded language now. C++ gives more scope for
getting things wrong in weird ways, and more scope for accidentally
making your binary massively larger than you expect, but with care it
makes it a lot easier to write clear code and safe code, where common C
mistakes either can't happen or you get compile-time failures.

It used to be the case that C++ compilers were expensive and poor
quality, that the resulting code was very slow on common
microcontrollers, and that the language didn't have much extra to offer
small-systems embedded developers. That, I think, has changed in all
aspects. I use gcc 10 with C++17 on Cortex-M devices, and features like
templates, strong enumerations, std::array, and controlled automatic
conversions make it easier to write good code.
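As a small sketch of how those features help (all identifiers here are
invented for illustration, not taken from any real project):

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Strong enumeration: no implicit conversion to int, so a Channel
// cannot be silently mixed up with a raw index or another enum.
enum class Channel : std::uint8_t { Temp, Pressure, Flow };

constexpr std::size_t channelCount = 3;

// std::array knows its size and does not decay to a pointer.
std::array<std::uint16_t, channelCount> readings{};

// A template catches size mismatches at compile time.
template <std::size_t N>
std::uint32_t sum(const std::array<std::uint16_t, N>& a) {
    std::uint32_t total = 0;
    for (auto v : a) total += v;
    return total;
}

std::uint16_t& reading(Channel c) {
    // The enum-to-index conversion must be written explicitly.
    return readings[static_cast<std::size_t>(c)];
}
```

None of this costs anything at run time compared to the plain-C version;
the checking happens in the compiler.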


Tom Gardner

Apr 17, 2021, 2:55:33 PM
On 17/04/21 17:45, David Brown wrote:
> And most of
> the advantages of Ada (such as better typing) can be achieved in C++
> with less effort, and at the same time C++ can give additional safety on
> resources that is harder to get on Ada.

Interesting. Could you give a quick outline of that?

Clifford Heath

Apr 17, 2021, 6:29:51 PM
On 18/4/21 2:45 am, David Brown wrote:
> On 17/04/2021 17:48, pozz wrote:
>> What do you think about different languages than usual C for embedded
>> systems?
>>
>> I mean C++, Ada but also Python. Always more powerful embedded
>> processors are coming, so I expect new and modern languages will enter
>> in the embedded world.
>>
>> Hardware are cheaper and more powerful than ever, but software stays
>> expensive. New and modern languages could reduce the software cost,
>> because they are simpler than C and much more similar to desktop/server
>> programming paradigm.
>>
>> We embedded sw developers were lucky: electronics and technologies
>> change rapidly, but sw for embedded has changed slower than
>> desktop/mobile sw. Think of mobile app developers: maybe they already
>> changed IDEs, tools and languages ten times in a few years.
>> C language for embedded is today very similar than 10 years ago.
>>
>> However I think this situation for embedded developers will change in
>> the very next years. And we will be forced to learn modern technologies,
>> such as new languages and tools.
>> Is it ok for me to study and learn new things... but it will be more
>> difficult to acquire new skills for real jobs.
>>
>> What do you think?
>
> You should probably add Rust to your list - I think its popularity will
> increase.

+1.

I've used C++ for over 30 years, but Rust is going to rule. It's not
really any harder, but vastly safer, especially for parallel coding.

CH

Dave Nadler

Apr 17, 2021, 9:07:29 PM
On 4/17/2021 11:48 AM, pozz wrote:
> What do you think about different languages than usual C for embedded
> systems?
>
> I mean C++, Ada but also Python. Always more powerful embedded
> processors are coming, so I expect new and modern languages will enter
> in the embedded world.

Another +1 for C++. We started using C++ for embedded ~25 years ago, and
the tool quality (and language) is vastly better now. The main problem
using C++ is the lack of available qualified staff - too many are still
only trained in C. C++, even more than C, *requires* a painful amount of
knowledge of how things work under the hood to use safely, not a good
quality for any tool in any realm. But for *qualified* developers, C++
certainly gives faster development, produces better (i.e. faster/smaller)
code, and generally improves life (compared to C).

Don Y

Apr 17, 2021, 9:54:12 PM
On 4/17/2021 6:07 PM, Dave Nadler wrote:

> Another +1 for C++. We started using C++ for embedded ~25 years ago, and the
> tool quality (and language) is vastly better now. Main problem using C++ is
> lack of available qualified staff - too many still only trained in C. C++, even
> more than C, *requires* painful amount of knowledge of how things work under
> the hood to use safely, not a good quality for any tool in any realm. But for
> *qualified* developers, C++ certainly gives faster development, produces better
> (ie faster/smaller) code, and generally improves life (compared to C).

This is exactly why I *don't* use C++ (or any "new fangled" languages)
for product development. I'm often the "Rev 1" author of a product.
Then, move on (to another product, another client, another application
domain).

I'm contractually obligated to fix bugs in my code (indefinitely and at no
charge) -- it's one of my selling points ("Yes, I'm *that* confident in the
code I've created for you!")

The last thing I need is some coder-wannabe tweaking my code with some
"harmless" change... then defiantly claiming (to his management)
that the newly broken code is a consequence of my initial design
(he's just UNCOVERED a bug).

So, I get called back to *teach* the dweeb why what he's done
won't work. And, in the process, inevitably have to show him
how to do what he wants to do.

Yeah, I can bill the client -- which is countered with "but you said
you'd support your code, for free?" (yeah, but that wasn't MY code!).
Regardless, I'm not keen on spending my time as a tutor!

Or, swallow it as cost of doing business.

It's considerably easier to find multiple C coders (and hope one
of them will catch the guy's mistake) than a qualified C++, Erlang,
Python, etc. coder -- esp when you're dealing with small firms.

So, my strategy becomes: "Have you rebuilt MY sources using the
tools that *I* used? Does the product exhibit this bug? If not,
it's not my problem!"

[I can do this even with some other language. But, the dweeb will
inevitably claim -- to the owner/management -- that his changes are
benign. They, of course, won't want to think they hired an inept
employee, so...]

Niklas Holsti

Apr 18, 2021, 3:39:13 AM
On 2021-04-17 19:45, David Brown wrote:
> On 17/04/2021 17:48, pozz wrote:
>> What do you think about different languages than usual C for embedded
>> systems?
>>
>> I mean C++, Ada but also Python. Always more powerful embedded
>> processors are coming, so I expect new and modern languages will enter
>> in the embedded world.
>>
>> ...
>> Is it ok for me to study and learn new things... but it will be more
>> difficult to acquire new skills for real jobs.
>>
>> What do you think?
>
> ...
> Ada involves a fair amount of change to how you work, compared to C
> development. (Disclaimer - I have only done a very small amount of Ada
> coding, and no serious projects. But I have considered it as an
> option.) I really don't see it becoming much more common, and outside
> of niche areas (defence, aerospace) it is very rare. Programming in Ada
> often takes a lot more effort even for simple things, leading quickly to
> code that is so wordy that it is unclear what is going on.


That depends, I guess, on what one considers "simple", and also, of
course, on how familiar one is with the languages.

Perhaps it is time to link once again to the experience of C versus Ada
in an introductory embedded-programming class using a model rail-road
example, where the students using Ada fared much better than those using C:

http://archive.adaic.com/projects/atwork/trains.html

I am (as you guessed) an Ada fan, and find it a pleasure to write, even
if it takes more keystrokes than the equivalent in C.

For embedded systems in particular, I find portability to be a great
advantage of Ada. With few exceptions, the embedded program can be
exercised on the development host, and if the target supports the
standard Ada tasking system (or a subset of it), this applies even to
multi-threaded programs (without the need to emulate some embedded
RTK/OS on the host).


> And most of the advantages of Ada (such as better typing) can be
> achieved in C++ with less effort,

According to the link above, the main Ada advantage was the ability to
define one's own problem-oriented scalar types, such as integers with
specific value ranges. AIUI, this is not easily done in C++.


> and at the same time C++ can give additional safety on
> resources that is harder to get on Ada. (But Ada has some nice features
> introspective that C++ programmers currently only dream about.)


Can you be more specific on those points, please?


> It used to be the case that C++ compilers were expensive and poor
> quality, that the resulting code was very slow on common
> microcontrollers, and that the language didn't have much extra to offer
> small-systems embedded developers. That, I think, has changed in all
> aspects. I use gcc 10 with C++17 on Cortex-M devices, and features like
> templates, strong enumerations, std::array, and controlled automatic
> conversions make it easier to write good code.


AdaCore provides the gcc-based GNAT Ada compiler for several embedded
targets, free to use for open-source projects: libre.adacore.com.

Commercial projects can use Ada on other targets by means of the
(different) AdaCore compiler that generates C code as an intermediate
representation.

Tom Gardner

Apr 18, 2021, 4:42:34 AM
On 18/04/21 02:07, Dave Nadler wrote:
> Main problem using C++ is lack of available qualified staff - too many still
> only trained in C. C++, even more than C, *requires* painful amount of knowledge
> of how things work under the hood to use safely, not a good quality for any tool
> in any realm.

Yes indeed. Especially when coupled with inexperience leading
someone to believe they understand C or C++.

Your other (valid) caveats and points snipped for brevity.

David Brown

Apr 18, 2021, 6:53:08 AM
Which part?

My understanding of Ada classes is that, like Pascal classes, you need
to explicitly construct and destruct objects. This gives far greater
scope for programmers to get things wrong than when they are handled
automatically by the language.

On the other hand, some of Ada's type mechanisms make it a lot easier to
make new types with similar properties while remaining distinct (such as
subranges of integer types). You can do that in C++, but someone needs
to make the effort to make the appropriate class templates. Thus these
are more often used in Ada coding than in C++ coding.

On the third hand (three hands are always useful for programming), the
wordy nature of type conversions in Ada means programmers may be
tempted to take shortcuts and skip these extra types.

dalai lamah

Apr 18, 2021, 8:39:19 AM
One fine day pozz typed:

> What do you think about different languages than usual C for embedded
> systems?
>
> I mean C++, Ada but also Python. Always more powerful embedded
> processors are coming, so I expect new and modern languages will enter
> in the embedded world.

I wouldn't include C++ in the list of "modern languages". One might as well
use C.

As for the "more modern" languages (Python, C#, Java, etc.), I've used
most of them, and maybe it's the old man in me speaking, but I haven't
found any important inherent advantages in using them. Their success is
mostly due to better support from popular development tools.

These days, smarter people praise functional programming languages
(Haskell, Clojure, etc.). They promise tangible advantages compared to
imperative languages, especially for maintainability and reliability. But
I think that even if they gain popularity, years will pass before you
see real applications in the embedded world.

Python is a "hybrid": it can also be used as a functional programming
language, and it will probably gain popularity in embedded systems.

--
I flex my muscles and I am in the void.

David Brown

Apr 18, 2021, 8:59:33 AM
Are there no newer examples? While this study was interesting, it is
some 30 years out of date and compares an old version of Ada with an
old version of C (perhaps not even C90). I'd rather see the pros and
cons of Ada 2012 compared to C++17.

>
> I am (as you guessed) an Ada fan, and find it a pleasure to write, even
> if it takes more keystrokes than the equivalent in C.

I had no need to guess - it's not the first time you have posted about
Ada here! I'm glad that someone with your Ada qualifications can chime
in here, and can correct me if I'm wrong.

>
> For embedded systems in particular, I find portability to be a great
> advantage of Ada. With few exceptions, the embedded program can be
> exercised on the development host, and if the target supports the
> standard Ada tasking system (or a subset of it), this applies even to
> multi-threaded programs (without the need to emulate some embedded
> RTK/OS on the host).
>

For suitable targets, I can see that being useful. C and C++ can be
used on a wider range of targets, and with a wider range of RTOS (if
any). Of course, code for widely differing target systems is often very
different anyway, so portability is not always a concern.

>
>> And most of the advantages of Ada (such as better typing) can be
>> achieved in C++ with less effort,
>
> According to the link above, the main Ada advantage was the ability to
> define one's own problem-oriented scalar types, such as integers with
> specific value ranges. AIUI, this is not easily done in C++.
>

Sure it can be done in C++.

<https://foonathan.net/2016/10/strong-typedefs/>

You can make types with specific ranges (with compile-time checks where
possible, optional run-time checks otherwise). You can choose which
operations you allow on them, and how they combine (so that multiplying
a speed and a time give you a distance automatically, but adding them is
not allowed). You can make integer types with different overflow
behaviour (wrapping, saturating, throwing errors, unspecified behaviour,
undefined behaviour) that can be used as easily as normal integers, and
mixed if and only if it is safe to do so or you make it explicit. You
can make array types that are indexed by enumerated types. It's all
possible - more so than in Ada, AFAIK.
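A minimal sketch of a run-time-checked subrange type in C++ (the class
and alias names are invented; a real library would add arithmetic
operators and compile-time checks for constant values):

```cpp
#include <stdexcept>

// The value is checked against [Lo, Hi] on construction, similar in
// spirit to an Ada subrange type. Distinct instantiations are distinct
// types, so a Percent cannot be passed where Degrees is expected.
template <int Lo, int Hi>
class Ranged {
    int v;
public:
    Ranged(int x) : v(x) {
        if (x < Lo || x > Hi)
            throw std::out_of_range("value outside range");
    }
    int get() const { return v; }
};

using Percent = Ranged<0, 100>;
using Degrees = Ranged<0, 359>;
```

On small embedded targets where exceptions are disabled, the check would
typically call an assertion or error hook instead of throwing.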

Where you have a difference, however, is that many of the most
beneficial cases (such as strong subrange types) are simple to use in
Ada and built into the language, and therefore they are often used. In
C++, you need to find a suitable template library or make things
yourself, giving a much higher barrier to use.

A very interesting distinction when looking at language features and
safe coding is how much people /could/ use safety features, and how much
they /do/ use them. For example, with C programming you can get strong
type safety as long as you wrap all your new types in structs. The
compiler will complain for type mistakes, just as it would in Ada - but
the /use/ of such types is often very inconvenient in C.
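The struct-wrapping technique mentioned above looks like this (Celsius
and Fahrenheit are invented examples; the snippet is deliberately
written so it compiles as both C and C++):

```cpp
// Wrapping each scalar in its own struct makes the types distinct, so
// passing a Fahrenheit where a Celsius is expected fails to compile.
typedef struct { double value; } Celsius;
typedef struct { double value; } Fahrenheit;

Fahrenheit to_fahrenheit(Celsius c) {
    Fahrenheit f = { c.value * 9.0 / 5.0 + 32.0 };
    return f;
}
// The safety is real, but every access needs ".value" and every
// constant needs a wrapper - the inconvenience described above.
```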

>
>> and at the same time C++ can give additional safety on
>> resources that is harder to get on Ada.  (But Ada has some nice features
>> introspective that C++ programmers currently only dream about.)
>
>
> Can you be more specific on those points, please?
>

For the first point - my understanding is that in Ada, when you have
classes you need to manually call constructor and destructor
(initializer and finalizer) functions. RAII is thus harder, and it is
as easy to make mistakes as it is in C (with malloc and free). Does Ada
have the equivalent of std::unique_ptr, std::shared_ptr and other smart
pointers, or container classes that handle allocation?
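For comparison, the C++ side of that question takes only a few lines
(the Packet type and the counter are invented here just to make the
effect visible):

```cpp
#include <memory>

static int live_objects = 0;   // instrumentation for the example only

struct Packet {
    unsigned char data[64];
    Packet()  { ++live_objects; }
    ~Packet() { --live_objects; }
};

// RAII via std::unique_ptr: the Packet is destroyed automatically when
// 'pkt' goes out of scope, on every exit path, with no explicit free.
void process() {
    auto pkt = std::make_unique<Packet>();
    pkt->data[0] = 0x55;
}   // ~unique_ptr runs here and releases the Packet
```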

For the second point, I am thinking of things like type attributes. I
would /really/ like C++ enumerations to have something matching the
"first" and "last" attributes! AFAIK, C++ won't get this until
reflection is added, probably not until C++26.
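The usual C++ workaround today is a hand-written trait that must be kept
in sync manually (a sketch with invented names):

```cpp
// A substitute for Ada's 'First and 'Last attributes: a trait template
// that has to be specialized by hand for each enumeration.
enum class Mode { Idle, Run, Sleep };

template <typename E> struct EnumRange;   // deliberately undefined

template <> struct EnumRange<Mode> {
    static constexpr Mode first = Mode::Idle;
    static constexpr Mode last  = Mode::Sleep;
};
// Ada gives Mode'First and Mode'Last for free; this specialization
// silently goes stale if someone extends the enum.
```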

>
>> It used to be the case that C++ compilers were expensive and poor
>> quality, that the resulting code was very slow on common
>> microcontrollers, and that the language didn't have much extra to offer
>> small-systems embedded developers.  That, I think, has changed in all
>> aspects.  I use gcc 10 with C++17 on Cortex-M devices, and features like
>> templates, strong enumerations, std::array, and controlled automatic
>> conversions make it easier to write good code.
>
>
> AdaCore provides the gcc-based GNAT Ada compiler for several embedded
> targets, free to use for open-source projects: libre.adacore.com.
>
> Commercial projects can use Ada on other targets by means of the
> (different) AdaCore compiler that generates C code as an intermediate
> representation.
>

This means that Ada suffers from a similar barrier to adoption that C++
used to face. Big companies will happily pay the cost for the tools that
they think will lead to better development efficiency or reliability.
Smaller ones will have a much harder time trying to justify the expense,
especially as they will often have few developers who can use the tools.
A move from C to Ada is compounded by needing a complete change of the
code - with C to C++, you can re-use old code, and you can also use a
few of the C++ features rather than moving wholescale (though of course
you also miss out on corresponding benefits).

Niklas Holsti

Apr 18, 2021, 11:48:20 AM
On 2021-04-18 13:53, David Brown wrote:
> On 17/04/2021 20:55, Tom Gardner wrote:
>> On 17/04/21 17:45, David Brown wrote:
>>> And most of
>>> the advantages of Ada (such as better typing) can be achieved in C++
>>> with less effort, and at the same time C++ can give additional safety on
>>> resources that is harder to get on Ada.
>>
>> Interesting. Could you give a quick outline of that?
>>
>
> Which part?
>
> My understanding of Ada classes is that, like Pascal classes, you need
> to explicitly construct and destruct objects. This gives far greater
> scope for programmers to get things wrong than when they are handled
> automatically by the language.


If you mean automatic allocation and deallocation of storage, Ada lets
you define types that have an "initializer" that is called
automatically, and can allocate memory if it needs to, and a "finalizer"
that is automatically called when leaving the scope of the object in
question. The finalizer does have to explicitly deallocate any
explicitly allocated and owned resources, and it may have to use
reference counting for that, for complex data structures.

AIUI that also holds for C++, although I believe recent C++ standards may
have simplified the definition and use of reference-counted data structures.
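For example, std::shared_ptr (available since C++11) does that reference
counting automatically (Node is an invented example type):

```cpp
#include <memory>

struct Node { int value; };

// The count is maintained by the library: copying a shared_ptr
// increments it, destroying a copy decrements it, and the Node is
// freed when the last owner goes away.
long owners_after_copy() {
    auto a = std::make_shared<Node>();
    a->value = 7;
    auto b = a;               // count goes from 1 to 2
    return a.use_count();     // both owners alive at this point
}                             // count drops to 0, Node freed
```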


> On the other hand, some of Ada's type mechanisms make it a lot easier to
> make new types with similar properties while remaining distinct (such as
> subranges of integer types). You can do that in C++, but someone needs
> to make the effort to make the appropriate class templates. Thus these
> are more often used in Ada coding than in C++ coding.


(That relative ease in Ada was the point of my other response -- more to
come on that.)


> On the third hand (three hands are always useful for programming), the
> wordy nature of type conversions in Ada mean programmers would be
> tempted to take shortcuts and skip these extra types.


Huh? A normal conversion in C is written "(newtype)expression", the same
in Ada is written "newtype(expression)". Exactly the same number of
characters, only the placement of the () is different. The C form might
even require an extra set of parentheses around it, to demarcate the
expression to be converted from any containing expression.

Of course, in C you have implicit conversions between all kinds of
numerical types, often leading to a whole lot of errors... not only
apples+oranges, but also truncation or other miscomputation.
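A two-line illustration of that hazard (invented names; the commented-out
line shows the brace-initialization guard C++ adds):

```cpp
#include <cstdint>

std::uint8_t level = 0;

// The silent truncation described above: assigning an int to a narrower
// unsigned type compiles without a peep in C, and in C++ too when it is
// written as a plain assignment.
void set_level(int requested) {
    level = requested;        // 300 quietly becomes 300 % 256 == 44
}

// C++ list-initialization rejects the same narrowing at compile time:
//     std::uint8_t checked{requested};   // error: narrowing conversion
```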

Niklas Holsti

Apr 18, 2021, 12:23:54 PM
I have none to hand, at least.


> While this study was interesting, it is some 30 years out of date

I would call it "30 years old". Whether it is "out of date" can be
debated -- I don't think the basic ideas of the two languages have
changed very drastically even in that time.


> and compares and old version of Ada with an old version of C (perhaps
> not even C90). I'd rather see the pros and cons of Ada 2012 compared
> to C++17.

So would I, but I haven't seen any such studies reported.

I can only point to the biased material at
https://www.adaic.org/advantages/.

And a recent Ada-adoption story from NVIDIA:

https://www.adacore.com/webinars/securing-future-of-embedded-software

but that is more about the AdaCore tools and the SPARK form of Ada than
about C-vs-Ada.

>>
>> I am (as you guessed) an Ada fan, and find it a pleasure to write, even
>> if it takes more keystrokes than the equivalent in C.
>
> I had no need to guess - it's not the first time you have posted about
> Ada here! I'm glad that someone with your Ada qualifications can chime
> in here, and can correct my if I'm wrong.
>
>>
>> For embedded systems in particular, I find portability to be a great
>> advantage of Ada. With few exceptions, the embedded program can be
>> exercised on the development host, and if the target supports the
>> standard Ada tasking system (or a subset of it), this applies even to
>> multi-threaded programs (without the need to emulate some embedded
>> RTK/OS on the host).
>>
>
> For suitable targets, I can see that being useful. C and C++ can be
> used on a wider range of targets, and with a wider range of RTOS (if
> any).


Ada can be used with other RTOSes too, although one loses both some
ease-of-use advantages and the portability advantage. Typical Ada
compilers for embedded systems also let one write Ada programs that do
not need an RTOS at all and run with a single thread.


> Of course, code for widely differing target systems is often very
> different anyway, so portability is not always a concern.
>
>>
>>> And most of the advantages of Ada (such as better typing) can be
>>> achieved in C++ with less effort,
>>
>> According to the link above, the main Ada advantage was the ability to
>> define one's own problem-oriented scalar types, such as integers with
>> specific value ranges. AIUI, this is not easily done in C++.
>>
>
> Sure it can be done in C++.
>
> <https://foonathan.net/2016/10/strong-typedefs/>

I did not see any mention of range-checked types there. But yes, they
can be constructed in C++, just not as easily as in Ada, as it seems we
agree. I should have said "/as/ easily" in the quote above.


> You can make types with specific ranges (with compile-time checks where
> possible, optional run-time checks otherwise). You can choose which
> operations you allow on them, and how they combine (so that multiplying
> a speed and a time give you a distance automatically, but adding them is
> not allowed). You can make integer types with different overflow
> behaviour (wrapping, saturating, throwing errors, unspecified behaviour,
> undefined behaviour) that can be used as easily as normal integers, and
> mixed if and only if it is safe to do so or you make it explicit. You
> can make array types that are indexed by enumerated types. It's all
> possible - more so that in Ada, AFAIK.


I admit that C++ templates allow the programming of more compile-time
activities than Ada allows.


> Where you have a difference, however, is that many of the most
> beneficial cases (such as strong subrange types) are simple to use in
> Ada and built into the language, and therefore they are often used. In
> C++, you need to find a suitable template library or make things
> yourself, giving a much higher barrier to use.


Yup.


> A very interesting distinction when looking at language features and
> safe coding is how much people /could/ use safety features, and how much
> they /do/ use them. For example, with C programming you can get strong
> type safety as long as you wrap all your new types in structs. The
> compiler will complain for type mistakes, just as it would in Ada - but
> the /use/ of such types is often very inconvenient in C.


Indeed.


>>> and at the same time C++ can give additional safety on
>>> resources that is harder to get on Ada.  (But Ada has some nice features
>>> introspective that C++ programmers currently only dream about.)
>>
>>
>> Can you be more specific on those points, please?
>>
>
> For the first point - my understanding is that in Ada, when you have
> classes you need to manually call constructor and destructor
> (initializer and finalizer) functions. RAII is thus harder, and it is
> as easy to make mistakes as it is in C (with malloc and free). Does Ada
> have the equivalent of std::unique_ptr, std::shared_ptr and other smart
> pointers, or container classes that handle allocation?


Answered in another message. The Ada "controlled" types have
automatically called initializer and finalizer operations. They can be
used to implement smart pointers, but no smart pointers are built into
Ada (perhaps mainly because they would then have to be thread-safe,
which would be a lot of overhead for a single-threaded program).


> For the second point, I am thinking of things like type attributes. I
> would /really/ like C++ enumerations to have something matching the
> "first" and "last" attributes! AFAIK, C++ won't get this until
> reflection is added, probably not until C++26.


Yes, good point in favour of Ada. I've seen some C++ programs that go to
great metaprogramming lengths to define "trait" templates to get the
equivalent of the "first", "last", "size" etc. attributes. So it is,
again, possible in C++, but not as easy as when it is built-in, as in Ada.


>>> It used to be the case that C++ compilers were expensive and poor
>>> quality, that the resulting code was very slow on common
>>> microcontrollers, and that the language didn't have much extra to offer
>>> small-systems embedded developers.  That, I think, has changed in all
>>> aspects.  I use gcc 10 with C++17 on Cortex-M devices, and features like
>>> templates, strong enumerations, std::array, and controlled automatic
>>> conversions make it easier to write good code.
>>
>>
>> AdaCore provides the gcc-based GNAT Ada compiler for several embedded
>> targets, free to use for open-source projects: libre.adacore.com.
>>
>> Commercial projects can use Ada on other targets by means of the
>> (different) AdaCore compiler that generates C code as an intermediate
>> representation.
>>
>
> This means that Ada suffers from a similar barrier to adoption that C++
> used to do. Big companies will happily pay the cost for the tools that
> they think will lead to better development efficiency or reliability.
> Smaller ones will have a much harder time trying to justify the expense,
> especially as they will often have few developers who can use the tools.
> A move from C to Ada is compounded by needing a complete change of the
> code - with C to C++, you can re-use old code, and you can also use a
> few of the C++ features rather than moving wholescale (though of course
> you also miss out on corresponding benefits).


Of course you can move from C to Ada without a full re-write, because
all Ada compilers provide inter-linking with C code, and most data
structures can be mapped between the two languages. (For some Ada
compilers like GNAT, this even holds between C++ and Ada.)

In fact, there is another C-vs-Ada study,

http://archive.adaic.com/intro/ada-vs-c/cada_art.html

where an Ada compiler vendor incrementally moved their Ada compiler from
being written in C to being written in Ada, and found a steady increase
of quality and decrease of cost-to-repair as modules were moved from C
to Ada. But it is from 1995, so perhaps "out of date" :-)


David Brown

Apr 18, 2021, 12:59:16 PM
On 18/04/2021 17:48, Niklas Holsti wrote:
> On 2021-04-18 13:53, David Brown wrote:
>> On 17/04/2021 20:55, Tom Gardner wrote:
>>> On 17/04/21 17:45, David Brown wrote:
>>>> And most of
>>>> the advantages of Ada (such as better typing) can be achieved in C++
>>>> with less effort, and at the same time C++ can give additional
>>>> safety on
>>>> resources that is harder to get on Ada.
>>>
>>> Interesting. Could you give a quick outline of that?
>>>
>>
>> Which part?
>>
>> My understanding of Ada classes is that, like Pascal classes, you need
>> to explicitly construct and destruct objects.  This gives far greater
>> scope for programmers to get things wrong than when they are handled
>> automatically by the language.
>
>
> If you mean automatic allocation and deallocation of storage, Ada lets
> you define types that have an "initializer" that is called
> automatically, and can allocate memory if it needs to, and a "finalizer"
> that is automatically called when leaving the scope of the object in
> question. The finalizer does have to explicitly deallocate any
> explicitly allocated and owned resources, and it may have to use
> reference counting for that, for complex data structures.

I had a little look for this (these discussions are great for inspiring
learning!). The impression I got was that it was possible, but what
takes a few lines of C++ (excluding whatever work must be done inside
the constructor and destructor bodies) involves inheriting from a
specific library type. And you don't have automatic initialisation of
subobjects and ancestors in a controlled order, nor automatic
finalisation of them in the reverse order.

Let's take a little example. And since this is comp.arch.embedded,
let's take a purely embedded example of disabling interrupts, rather
than shunned dynamic memory allocations:

static inline uint32_t disableGlobalInterrupts(void) {
    uint32_t pri;
    asm volatile(
        " mrs %[pri], primask\n\t"  // Get old mask
        " cpsid i\n\t"              // Disable interrupts entirely
        " dsb"                      // Ensures that this takes effect before next
                                    // instruction
        : [pri] "=r" (pri) :: "memory");
    return pri;
}

static inline void restoreGlobalInterrupts(uint32_t pri) {
    asm volatile(
        " msr primask, %[pri]"      // Restore old mask
        :: [pri] "r" (pri) : "memory");
}

class CriticalSectionLock {
private :
    uint32_t oldpri;
public :
    CriticalSectionLock() { oldpri = disableGlobalInterrupts(); }
    ~CriticalSectionLock() { restoreGlobalInterrupts(oldpri); }
};


You can use it like this:

bool compare_and_swap64(uint64_t * p, uint64_t old, uint64_t x)
{
    CriticalSectionLock lock;

    if (*p != old) return false;
    *p = x;
    return true;
}

This is the code compiled for a 32-bit Cortex-M device:

<https://godbolt.org/z/7KM9M6Kcd>

The use of the class here has no overhead compared to manually disabling
and re-enabling interrupts.

What would be the Ada equivalent of this class, and of the
"compare_and_swap64" function?

>
> AUI that also holds for C++, although I believe recent C++ standards may
> have simplified the definition and use of reference-counted data
> structures.
>
>
>> On the other hand, some of Ada's type mechanisms make it a lot easier to
>> make new types with similar properties while remaining distinct (such as
>> subranges of integer types).  You can do that in C++, but someone needs
>> to make the effort to make the appropriate class templates.  Thus these
>> are more often used in Ada coding than in C++ coding.
>
>
> (That relative ease in Ada was the point of my other response -- more to
> come on that.)
>
>
>> On the third hand (three hands are always useful for programming), the
>> wordy nature of type conversions in Ada mean programmers would be
>> tempted to take shortcuts and skip these extra types.
>
>
> Huh? A normal conversion in C is written "(newtype)expression", the same
> in Ada is written "newtype(expression)". Exactly the same number of
> characters, only the placement of the () is different. The C form might
> even require an extra set of parentheses around it, to demarcate the
> expression to be converted from any containing expression.
>
> Of course, in C you have implicit conversions between all kinds of
> numerical types, often leading to a whole lot of errors... not only
> apples+oranges, but also truncation or other miscomputation.

C also makes explicit conversions wordy, yes. In C++, you can choose
which conversions are explicit and which are implicit - done carefully,
your safe conversions will be implicit and unsafe ones need to be explicit.

(C++ suffers from its C heritage and backwards compatibility, meaning it
can't fix things that were always implicit conversion. It's too late to
make "int x = 123.4;" an error. The best C++ can do is add a new syntax
with better safety - so "int y { 123 };" is fine but "int z { 123.4 };"
is an error.)

David Brown

Apr 18, 2021, 1:26:37 PM
C made a big leap, IMHO, with C99 - it became a significantly better
language, and it became easier to write safe, clear and efficient code
than with C90. (C11 and C17 add little in comparison.)

C++ is a completely different language now than it was 30 years ago.

And the tools for both C and C++ have changed hugely in that time.

But some programmers have not changed, and the way they write C code has
not changed. Again, it's a question of whether you are comparing how
languages /could/ be used at their best, or how some people use them at
their worst, or guessing something in between.

>
>> and compares and old version of Ada with an old version of C (perhaps
>> not even C90).  I'd rather see the pros and cons of Ada 2012 compared
>> to C++17.
>
> So would I, but I haven't seen any such studies reported.
>
> I can only point to the biased material at
> https://www.adaic.org/advantages/.

As an example of the bias, I looked at the "What constructs exist in Ada
that do not have an equivalent construct in C/C++?" page. First off,
anyone who talks about "C/C++" as though it were a single language is
either ignorant or biased. Some of their points are outdated (C++ has
"proper" array types in the standard library, and support for very nice
syntaxes for literals). Some are fair enough - strong type checking of
arithmetic types needs a good deal more effort (such as shown in the
"strong typedef" link from earlier); once you have made appropriate
templates they can be used as easily as in Ada, but such templates are
not in the C++ standard library. Some are absolutely a point to Ada,
such as named parameters. And the page is missing the "What constructs
exist in C++ that do not have an equivalent in Ada?" list :-)

>
>
>> You can make types with specific ranges (with compile-time checks where
>> possible, optional run-time checks otherwise).  You can choose which
>> operations you allow on them, and how they combine (so that multiplying
>> a speed and a time give you a distance automatically, but adding them is
>> not allowed).  You can make integer types with different overflow
>> behaviour (wrapping, saturating, throwing errors, unspecified behaviour,
>> undefined behaviour) that can be used as easily as normal integers, and
>> mixed if and only if it is safe to do so or you make it explicit.  You
>> can make array types that are indexed by enumerated types.  It's all
>> possible - more so that in Ada, AFAIK.
>
>
> I admit that C++ templates allow the programming of more compile-time
> activities than Ada allows.
>

C++ has got a lot stronger at compile-time work in recent versions. Not
only have templates got more powerful, but we've got "concepts" (named
sets of features or requirements for types), constexpr functions (that
can be evaluated at compile time or runtime), consteval functions (that
must be evaluated at compile time), constinit data (that must have
compile-time constant initialisers), etc. And constants determined at
compile-time are not restricted to scalar or simple types.
Agreed.

There is a proposal in the works for "metaclasses" in C++, which goes
beyond reflection and into code generation. I'm looking forward to that
- it will put an end to a lot of "boilerplate" code. But it's going to
take a good while to arrive.
You can move file by file from C to Ada, but you can move from C to C++
by adding the "-x c++" flag to your build or clicking the "compile C
code as C++" on the IDE. Then you change as much or as little as you
want, when you want.

Tom Gardner

Apr 18, 2021, 2:29:46 PM
On 18/04/21 18:26, David Brown wrote:
> But some programmers have not changed, and the way they write C code has
> not changed. Again, it's a question of whether you are comparing how
> languages/could/ be used at their best, or how some people use them at
> their worst, or guessing something in between.

That is indeed a useful first question.

A second question is "how easy is it to get intelligent
but inexperienced programmers to avoid the worst features
and use the languages well?" (We'll ignore all the programmers
that shouldn't be given a keyboard :) )

A third is "will that happen in company X's environment when
they are extending code written by people that left 5 years
ago?"

Tom Gardner

Apr 18, 2021, 2:30:18 PM
On 18/04/21 18:26, David Brown wrote:
> C++ has got a lot stronger at compile-time work in recent versions. Not
> only have templates got more powerful, but we've got "concepts" (named
> sets of features or requirements for types), constexpr functions (that
> can be evaluated at compile time or runtime), consteval functions (that
> must be evaluated at compile time), constinit data (that must have
> compile-time constant initialisers), etc. And constants determined at
> compile-time are not restricted to scalar or simple types.

Sounds wordy ;)

David Brown

Apr 18, 2021, 4:17:11 PM
Have you looked at the C++ standards documents? There are more than a
few words there!

I'm not suggesting C++ is a perfect language - not by a long way. It
has plenty of ugliness, and in this thread we've covered several points
where Ada can do something neater and clearer than you can do it in C++.

But it's a good thing that it has more ways for handling things at
compile time. In many of my C projects, I have had Python code for
pre-processing, for computing tables, and that kind of thing. With
modern C++, these are no longer needed.

An odd thing about the compile-time calculation features of C++ is that
they came about partly by accident, or unforeseen side-effects. Someone
discovered that templates with integer parameters could be used to do
quite a lot of compile-time calculations. The code was /really/ ugly,
slow to compile, limited in scope. But people were finding use for it.
So the motivation for "constexpr" was that programmers were already
doing compile-time calculations, and so it was best to let them do it in
a nicer way.

David Brown

Apr 18, 2021, 4:23:49 PM
All good questions.

Another is what will happen to the company when the one person that
understood the language at all, leaves? With C++, you can hire another
qualified and experienced C++ programmer. (Okay, so you might have to
beat them over the head with an 8051 emulator until they stop using
std::vector and realise you don't want UCS2 encoding for your 2x20
character display, but they'll learn that soon enough.) With Ada, or
Forth, or any other minor language, you are scuppered.

These are important considerations.

Tom Gardner

Apr 18, 2021, 6:09:15 PM
They are important considerations, indeed.

I'm unconvinced that it is practical to rely on hiring
another C++ programmer that is /actually/ qualified
and experienced - and wants to work on someone else's
code.

The "uncanny valley" is a major problem for any new tech,
languages included.

We've all seen the next better mousetrap that turns
out to merely have exchanged swings and roundabouts.

Tom Gardner

Apr 18, 2021, 6:17:19 PM
On 18/04/21 21:17, David Brown wrote:
> On 18/04/2021 20:30, Tom Gardner wrote:
>> On 18/04/21 18:26, David Brown wrote:
>>> C++ has got a lot stronger at compile-time work in recent versions.  Not
>>> only have templates got more powerful, but we've got "concepts" (named
>>> sets of features or requirements for types), constexpr functions (that
>>> can be evaluated at compile time or runtime), consteval functions (that
>>> must be evaluated at compile time), constinit data (that must have
>>> compile-time constant initialisers), etc.  And constants determined at
>>> compile-time are not restricted to scalar or simple types.
>>
>> Sounds wordy ;)
>
> Have you looked at the C++ standards documents? There are more than a
> few words there!

No. I value my sanity.


> I'm not suggesting C++ is a perfect language - not by a long way. It
> has plenty of ugliness, and in this thread we've covered several points
> where Ada can do something neater and clearer than you can do it in C++.
>
> But it's a good thing that it has more ways for handling things at
> compile time. In many of my C projects, I have had Python code for
> pre-processing, for computing tables, and that kind of thing. With
> modern C++, these are no longer needed.

The useful question is not whether something is good,
but whether there are better alternatives. "Better",
of course, can include /anything/ relevant!



> An odd thing about the compile-time calculation features of C++ is that
> they came about partly by accident, or unforeseen side-effects. Someone
> discovered that templates with integer parameters could be used to do
> quite a lot of compile-time calculations. The code was /really/ ugly,
> slow to compile, limited in scope. But people were finding use for it.
> So the motivation for "constexpr" was that programmers were already
> doing compile-time calculations, and so it was best to let them do it in
> a nicer way.

Infamously, getting a valid C++ program that caused the
compiler to generate the sequence of prime numbers during
compilation came as an unpleasant /surprise/ to the C++
standards committee.

The code is short; whether it is ugly is a matter of taste!
https://en.wikibooks.org/wiki/C%2B%2B_Programming/Templates/Template_Meta-Programming#History_of_TMP

David Brown

Apr 19, 2021, 2:52:41 AM
On 19/04/2021 00:17, Tom Gardner wrote:
> On 18/04/21 21:17, David Brown wrote:
>> On 18/04/2021 20:30, Tom Gardner wrote:
>>> On 18/04/21 18:26, David Brown wrote:
>>>> C++ has got a lot stronger at compile-time work in recent versions. 
>>>> Not
>>>> only have templates got more powerful, but we've got "concepts" (named
>>>> sets of features or requirements for types), constexpr functions (that
>>>> can be evaluated at compile time or runtime), consteval functions (that
>>>> must be evaluated at compile time), constinit data (that must have
>>>> compile-time constant initialisers), etc.  And constants determined at
>>>> compile-time are not restricted to scalar or simple types.
>>>
>>> Sounds wordy ;)
>>
>> Have you looked at the C++ standards documents?  There are more than a
>> few words there!
>
> No. I value my sanity.

I have looked at it, and come back as sane as ever (apart from the
pencils up my nose and underpants on my head). But I've worked up to it
through many versions of the C standards.

More seriously, if I need to look up any details of C or C++, I find
<https://en.cppreference.com/w/> vastly more user-friendly.

>
>
>> I'm not suggesting C++ is a perfect language - not by a long way.  It
>> has plenty of ugliness, and in this thread we've covered several points
>> where Ada can do something neater and clearer than you can do it in C++.
>>
>> But it's a good thing that it has more ways for handling things at
>> compile time.  In many of my C projects, I have had Python code for
>> pre-processing, for computing tables, and that kind of thing.  With
>> modern C++, these are no longer needed.
>
> The useful question is not whether something is good,
> but whether there are better alternatives. "Better",
> of course, can include /anything/ relevant!
>

Yes - and "better" is usually highly subjective.

In the case of compile-time calculations, modern C++ is certainly better
than older C++ versions or C (or, AFAIK, Ada). It can't do everything
that I might do with external Python scripts - it can't do code
generation, or make tables that depend on multiple source files, or make
CRC checks for the final binary. But it can do some things that
previously required external scripts, and that's nice.

>
>
>> An odd thing about the compile-time calculation features of C++ is that
>> they came about partly by accident, or unforeseen side-effects.  Someone
>> discovered that templates with integer parameters could be used to do
>> quite a lot of compile-time calculations.  The code was /really/ ugly,
>> slow to compile, limited in scope.  But people were finding use for it.
>>   So the motivation for "constexpr" was that programmers were already
>> doing compile-time calculations, and so it was best to let them do it in
>> a nicer way.
>
> Infamously, getting a valid C++ program that caused the
> compiler to generate the sequence of prime numbers during
> compilation came as an unpleasant /surprise/ to the C++
> standards committee.

I don't believe that the surprise was "unpleasant" - it's just something
they hadn't considered. (I'm not even sure of that either - my feeling
is that this is an urban myth, or at least a story exaggerated in the
regular retelling.)

>
> The code is short; whether it is ugly is a matter of taste!
> https://en.wikibooks.org/wiki/C%2B%2B_Programming/Templates/Template_Meta-Programming#History_of_TMP
>

Template-based calculations were certainly very convoluted - they needed
a functional programming style structure but with much more awkward
syntax (and at the height of their "popularity", horrendous compiler
error messages when you made a mistake - that too has improved greatly).
And that is why constexpr (especially in latest versions) is so much
better.

Template-based calculations are a bit like trying to do calculations and
data structures in LaTeX. It is all possible, but it doesn't roll off
the tongue very easily.


(I wonder if anyone else understands the pencil and underpants
reference. I am sure Tom does.)

David Brown

Apr 19, 2021, 2:56:39 AM
You don't tell the new guy that he or she must maintain old code! You
spring that as a surprise, once you've got them hooked on the quality of
your coffee machine.

>
> The "uncanny valley" is a major problem for any new tech,
> languages included.
>
> We've all seen the next better mousetrap that turns
> out to merely have exchanged swings and roundabouts.

Indeed.

But code written in the latest fad language is not the worst. I've had
to deal with code (for a PC) written in ancient versions of a proprietary
"RAD tool" where the vendor will no longer sell the outdated version and
the new tool version is not remotely compatible. I'd pick Ada over that
any day of the week.


Tom Gardner

Apr 19, 2021, 5:08:53 AM
On 19/04/21 07:52, David Brown wrote:
> On 19/04/2021 00:17, Tom Gardner wrote:
>> Infamously, getting a valid C++ program that caused the
>> compiler to generate the sequence of prime numbers during
>> compilation came as an unpleasant /surprise/ to the C++
>> standards committee.
>
> I don't believe that the surprise was "unpleasant" - it's just something
> they hadn't considered. (I'm not even sure of that either - my feeling
> is that this is an urban myth, or at least a story exaggerated in the
> regular retelling.)

I was thinking of "unpleasant" /because/ it was a surprise.
Bjarne dismissed the possibility before being stunned a
couple of days later.

Here's a google (mis)translation of the Erwin Unruh's account at
http://www.erwin-unruh.de/meta.html

Temple meta programming

The template meta programming is a way to carry out calculations already in C
++ during the translation. This allows additional checks to be installed. This
is particularly used to build efficient algorithms.
In 2000, a special workshop was held for this purpose.

This started with the C ++ standardization meeting in 1994 in Sandiego. Here
is my personal memory:

We discussed the possibilities of determining template arguments from a
template. The question came up as to whether the inverse of a function could be
determined. So if "i + 1 == 5" could be closed, that "i == 4". This was
denied, but the question inspired me to the idea to calculate primes during the
translation. The first version I crafted on Monday, but she was fundamentally
wrong. Bjarne Strouffup said so something would not work in principle.
This stacked my zeal, and so I finished the scaffolding of the program
Wednesday afternoon. On Wednesday evening another work meeting was announced
where I had some air. There I met Tom Pennello, and we put together. He had
his notebook and we tapped my program briefly. After some crafts, the program
ran. We made a run and printed program and error message. Then Tom came to the
idea of ​​taking a more complicated function. We chose the Ackermann function.
After a few hours, this also ran and calculated the value of the Ackermann
function during the translation.
On Thursday I showed the term Bjarne. He was extremely stunned. I then made
copies for all participants and officially distributed this curious program. I
kept the whole thing for a joke.

A few weeks later, I developed a proof that the template mechanism is
turbine-complete. However, since this proof was quite dry, I just put it to the
files. I still have the notes. On the occasion, I will tempt this time and
provide here.

Later, Todd Veldhuizen picked up the idea and published an article in the C ++
Report. This appeared in May 1995. He understood the possibilities behind the
idea and put them in concrete metatograms that make something constructive.
This article was the basis on which template meta programming was built.
Although I gave the kick-off, but did not recognize the range of the idea.

Erwin Unruh, 1. 1. 2002


> Template-based calculations are a bit like trying to do calculations and
> data structures in LaTeX. It is all possible, but it doesn't roll off
> the tongue very easily.

Being perverse can be fun, /provided/ it doesn't happen
accidentally in everyday life.

David Brown

Apr 19, 2021, 6:15:17 AM
On 19/04/2021 11:08, Tom Gardner wrote:
> On 19/04/21 07:52, David Brown wrote:
>> On 19/04/2021 00:17, Tom Gardner wrote:
>>> Infamously, getting a valid C++ program that caused the
>>> compiler to generate the sequence of prime numbers during
>>> compilation came as an unpleasant /surprise/ to the C++
>>> standards committee.
>>
>> I don't believe that the surprise was "unpleasant" - it's just something
>> they hadn't considered.  (I'm not even sure of that either - my feeling
>> is that this is an urban myth, or at least a story exaggerated in the
>> regular retelling.)
>
> I was thinking of "unpleasant" /because/ it was a surprise.
> Bjarne dismissed the possibility before being stunned a
> couple of days later.
>
> Here's a google (mis)translation of the Erwin Unruh's account at
> http://www.erwin-unruh.de/meta.html
>
> Temple meta programming

<snip for brevity>

>
>
>> Template-based calculations are a bit like trying to do calculations and
>> data structures in LaTeX.  It is all possible, but it doesn't roll off
>> the tongue very easily.
>
> Being perverse can be fun, /provided/ it doesn't happen
> accidentally in everyday life.

Usually it is not a problem when you discover something has extra
features or possibilities beyond what you imagined. About the only
disadvantage of "turbine-complete" templates is that compilers need to
have limits to how hard they will try to compile them - it would be
quite inconvenient to have your compiler work for hours trying to
calculate Ackermann(5, 2) before melting your processor.

(I've done "perverted" stuff in LaTeX - but it wasn't an accident.
Fortunately I don't have to do it /every/ day.)

Niklas Holsti

Apr 19, 2021, 6:51:19 AM
Yes, you must inherit from one of the types Ada.Finalization.Controlled
or Ada.Finalization.Limited_Controlled when you create a type for which
you can program an initializer and/or a finalizer.

However, you can aggregate a component of such a type into some other
composite type, and then that component's initializer and finalizer will
be called automatically when any object of the containing composite type
is constructed and destroyed.


> And you don't have automatic initialisation of
> subobjects and ancestors in a controlled order, nor automatic
> finalisation of them in the reverse order.


No, and yes. Subobjects (components) are automatically initialized
before the composite is initialized (bottom-up), and are automatically
finalized after the composite is finalized (top-down). But there is no
automatic invocation of the initializer or finalizer of the parent
class; that would have to be called explicitly (except in the case of an
"extension aggregate" expression, where an object of the parent type is
created and then extended to an object of the derived class).

The Ada initializer and finalizer concept is subtly different from the
C++ constructor and destructor concept. In Ada, the construction and
destruction are considered to happen implicitly and automatically. The
construction step can assign some initial values that can be defined by
default (pointers default to null, for example) or can be specified for
the type of the component in question, or can be defined for that
component explicitly. For example:

type Down_Counter is range 0 .. 100 with Default_Value => 100;

type Zero_Handler is access procedure;

type Counter is record
   Count   : Down_Counter;       -- Implicit init to 100.
   Running : Boolean := False;   -- Explicit init.
   At_Zero : Zero_Handler;       -- Default init to null.
end record;

Beyond that automatic construction step, the programmable initializer is
used to perform further automatic activities that may further initialize
the object, or may have some other effects. For example, we might want
to automatically register every instance of a Counter (as above) with
the kernel, and that would be done in the initializer. Conversely, the
finalizer would then deregister the Counter, before the Counter is
automatically destroyed (removed from the stack or from the heap).

So the Ada "initializer" is not like a C++ constructor, which in Ada
corresponds more closely to a function returning an object of the class.

An Ada "finalizer" is more similar to a C++ destructor, taking care of
any clean-up that is needed before the object disappears.

>
> Let's take a little example. And since this is comp.arch.embedded,
> let's take a purely embedded example of disabling interrupts, rather
> than shunned dynamic memory allocations:
>
> static inline uint32_t disableGlobalInterrupts(void) {
> uint32_t pri;
> asm volatile(
> " mrs %[pri], primask\n\t" // Get old mask
> " cpsid i\n\t" // Disable interrupts entirely
> " dsb" // Ensures that this takes effect before next
> // instruction
> : [pri] "=r" (pri) :: "memory");
> return pri;
> }
>
> static inline void restoreGlobalInterrupts(uint32_t pri) {
> asm volatile(
> " msr primask, %[pri]" // Restore old mask
> :: [pri] "r" (pri) : "memory");
> }


I won't try to write Ada equivalents of the above :-) though I have of
course written much Ada code to manage and handle interrupts.


> class CriticalSectionLock {
> private :
> uint32_t oldpri;
> public :
> CriticalSectionLock() { oldpri = disableGlobalInterrupts(); }
> ~CriticalSectionLock() { restoreGlobalInterrupts(oldpri); }
> };


Here is the same in Ada. I chose to derive from Limited_Controlled
because that makes it illegal to assign a Critical_Section value from
one object to another.

-- Declaration of the type:

type Critical_Section is new Ada.Finalization.Limited_Controlled
with record
   old_pri : Interfaces.Unsigned_32;
end record;

overriding procedure Initialize (This : in out Critical_Section);
overriding procedure Finalize   (This : in out Critical_Section);

-- Implementation of the operations:

procedure Initialize (This : in out Critical_Section)
is begin
   This.old_pri := disableGlobalInterrupts;
end Initialize;

procedure Finalize (This : in out Critical_Section)
is begin
   restoreGlobalInterrupts (This.old_pri);
end Finalize;


>
> You can use it like this:
>
> bool compare_and_swap64(uint64_t * p, uint64_t old, uint64_t x)
> {
> CriticalSectionLock lock;
>
> if (*p != old) return false;
> *p = x;
> return true;
> }
>


function Compare_and_Swap64 (
   p      : access Interfaces.Unsigned_64;
   old, x : in     Interfaces.Unsigned_64)
return Boolean
is
   Lock : Critical_Section;
begin
   if p.all /= old then
      return False;
   else
      p.all := x;
      return True;
   end if;
end Compare_and_Swap64;

(I think there should be a "volatile" spec for the "p" object, don't you?)



> This is the code compiled for a 32-bit Cortex-M device:
>
> <https://godbolt.org/z/7KM9M6Kcd>
>
> The use of the class here has no overhead compared to manually disabling
> and re-enabling interrupts.
>
> What would be the Ada equivalent of this class, and of the
> "compare_and_swap64" function?


See above. I don't have an Ada-to-Cortex-M compiler at hand to compare
the target code, sorry.

But critical sections in Ada applications are more often written using
the Ada "protected object" feature. Here is the same as a protected
object "CS", with separate declaration and body as usual in Ada. Here I
must write the operation as a procedure instead of a function, because
protected objects have "single writer, multiple readers" semantics, and
any function is considered a "reader" although it may have side effects:

protected CS
   with Priority => System.Interrupt_Priority'Last
is

   procedure Compare_and_Swap64 (
      p      : access Interfaces.Unsigned_64;
      old, x : in     Interfaces.Unsigned_64;
      result :    out Boolean);

end CS;

protected body CS
is

   procedure Compare_and_Swap64 (
      p      : access Interfaces.Unsigned_64;
      old, x : in     Interfaces.Unsigned_64;
      result :    out Boolean)
   is begin
      result := p.all = old;
      if result then
         p.all := x;
      end if;
   end Compare_and_Swap64;

end CS;

However, it would be more in the style of Ada to focus on the thing that
is being "compared and swapped", so that "p" would be either a
discriminant of the protected object, or a component of the protected
object, instead of a parameter to the compare-and-swap operation. But it
would look similar to the above.


>>> On the third hand (three hands are always useful for programming), the
>>> wordy nature of type conversions in Ada mean programmers would be
>>> tempted to take shortcuts and skip these extra types.
>>
>>
>> Huh? A normal conversion in C is written "(newtype)expression", the same
>> in Ada is written "newtype(expression)". Exactly the same number of
>> characters, only the placement of the () is different. The C form might
>> even require an extra set of parentheses around it, to demarcate the
>> expression to be converted from any containing expression.
>>
>> Of course, in C you have implicit conversions between all kinds of
>> numerical types, often leading to a whole lot of errors... not only
>> apples+oranges, but also truncation or other miscomputation.
>
> C also makes explicit conversions wordy, yes. In C++, you can choose
> which conversions are explicit and which are implicit - done carefully,
> your safe conversions will be implicit and unsafe ones need to be explicit.


Ada does not have programmable implicit conversions, but one can
override some innocuous operator, usually "+", to perform whatever
conversions one wants. For example:

function "+" (Item : Boolean) return Float
is (if Item then 1.0 else 0.0);

or more directly

function "+" (Item : Boolean) return Float
is (Float (Boolean'Pos (Item)));


> (C++ suffers from its C heritage and backwards compatibility, meaning it
> can't fix things that were always implicit conversion. It's too late to
> make "int x = 123.4;" an error. The best C++ can do is add a new syntax
> with better safety - so "int y { 123 };" is fine but "int z { 123.4 };"
> is an error.)


Ada also has some warts, but perhaps not as easily illustrated.

Tom Gardner

Apr 19, 2021, 7:00:57 AM
On 19/04/21 11:15, David Brown wrote:
> (I've done "perverted" stuff in LaTeX - but it wasn't an accident.
> Fortunately I don't have to do it/every/ day.)

We've all done that in one language or another!

Niklas Holsti

Apr 19, 2021, 7:04:31 AM
On 2021-04-18 23:23, David Brown wrote:
> On 18/04/2021 20:29, Tom Gardner wrote:
>> On 18/04/21 18:26, David Brown wrote:
>>> But some programmers have not changed, and the way they write C code has
>>> not changed.  Again, it's a question of whether you are comparing how
>>> languages/could/  be used at their best, or how some people use them at
>>> their worst, or guessing something in between.
>>
>> That is indeed a useful first question.
>>
>> A second question is "how easy is it to get intelligent
>> but inexperienced programmers to avoid the worst features
>> and use the languages well?" (We'll ignore all the programmers
>> that shouldn't be given a keyboard :) )
>>
>> A third is "will that happen in company X's environment when
>> they are extending code written by people that left 5 years
>> ago?"
>
> All good questions.
>
> Another is what will happen to the company when the one person that
> understood the language at all, leaves?


If the company trained only one person in the language, that was a
stupid (risky) decision by the company, or they should not have started
using that language at all.


> With C++, you can hire another
> qualified and experienced C++ programmer. (Okay, so you might have to
> beat them over the head with an 8051 emulator until they stop using
> std::vector and realise you don't want UCS2 encoding for your 2x20
> character display, but they'll learn that soon enough.) With Ada, or
> Forth, or any other minor language, you are scuppered.


During all my years (since about 1995) working on on-board SW for ESA
spacecraft, the company hired one person with earlier experience in Ada,
and that was I. All other hires working in Ada projects learned Ada on
the job (and some became enthusiasts).

Sadly, some of the large aerospace "prime" companies in Europe are
becoming unwilling to accept subcontracted SW products in Ada, for the
reason discussed: because their HR departments say that they cannot find
programmers trained in Ada. Bah, a competent programmer will pick up the
core concepts quickly, says I.

Of course there are also training companies that offer Ada training courses.

Dimiter_Popoff

unread,
Apr 19, 2021, 8:16:29 AM4/19/21
to
On 4/19/2021 14:04, Niklas Holsti wrote:
> ....
> Sadly, some of the large aerospace "prime" companies in Europe are
> becoming unwilling to accept subcontracted SW products in Ada, for the
> reason discussed: because their HR departments say that they cannot find
> programmers trained in Ada. Bah, a competent programmer will pick up the
> core concepts quickly, says I.

This is valid not just for Ada. An experienced programmer will need days
to adjust to this or that language. I guess most if not all of us have
been through it.

Dimiter

======================================================
Dimiter Popoff, TGI http://www.tgi-sci.com
======================================================
http://www.flickr.com/photos/didi_tgi/

David Brown

unread,
Apr 19, 2021, 8:22:14 AM4/19/21
to
OK. Am I right in assuming the subobjects here also need to inherit
from the "Finalization" types individually, in order to be automatically
initialised in order?

Are there any overheads (other than in the source code) for all this
inheriting? Ada (like C++) aims to be minimal overhead, AFAIUI, but it's
worth checking.
C++ gives you the choice. You can do work in a constructor, or you can
leave it as a minimal (often automatically generated) function. You can
give default values to members. You can add "initialise" member
functions as you like. You can have "factory functions" that generate
instances. This lets you structure the code and split up functionality
in whatever way suits your requirements.

For a well-structured class, the key point is that a constructor will
always establish the class invariant. Any publicly accessible function
will assume that invariant, and maintain it. Private functions might
temporarily break the invariant - these are only accessible by code that
"knows what it is doing". And the destructor will always clean up after
the object, recycling any resources used.

C++-style constructors are not a requirement for having control
of the class invariant, but they do make it more convenient and more
efficient (both at run-time and in the source code) than separate
minimal constructors (or default values) and initialisers.
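As a concrete illustration of the invariant idea (a minimal sketch; the
class and its names are invented for this post, not taken from any
library):

```cpp
#include <cstddef>
#include <cstdint>

// Invented example: a fixed-capacity buffer. The constructor
// establishes the class invariant (count_ <= Capacity) and every
// public member function maintains it.
template <std::size_t Capacity>
class SmallBuffer {
public:
    SmallBuffer() : count_(0) {}                  // invariant established
    bool push(std::uint8_t b) {
        if (count_ == Capacity) return false;     // refuse, don't break invariant
        data_[count_++] = b;
        return true;
    }
    std::size_t size() const { return count_; }
private:
    std::uint8_t data_[Capacity];
    std::size_t count_;                           // invariant: count_ <= Capacity
};
```

For a class owning a resource, the destructor would release it; here
there is nothing to clean up, so the implicit destructor suffices.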

>>
>> Let's take a little example.  And since this is comp.arch.embedded,
>> let's take a purely embedded example of disabling interrupts, rather
>> than shunned dynamic memory allocations:
>>
>> static inline uint32_t disableGlobalInterrupts(void) {
>>      uint32_t pri;
>>      asm volatile(
>>          "  mrs %[pri], primask\n\t" // Get old mask
>>          "  cpsid i\n\t"             // Disable interrupts entirely
>>          "  dsb"        // Ensures that this takes effect before next
>>                      // instruction
>>          : [pri] "=r" (pri) :: "memory");
>>      return pri;
>> }
>>
>> static inline void restoreGlobalInterrupts(uint32_t pri) {
>>      asm volatile(
>>          "  msr primask, %[pri]"         // Restore old mask
>>          :: [pri] "r" (pri) : "memory");
>> }
>
>
> I won't try to write Ada equivalents of the above :-) though I have of
> course written much Ada code to manage and handle interrupts.

I think Ada has built-in (or standard library) support for critical
sections, does it not? But this is just an example, not necessarily
something that would be directly useful. Obviously the code above is
highly target-specific.
Are "Initialize" and "Finalize" overloaded global procedures, or is this
the syntax always used for member functions?

>>
>> You can use it like this:
>>
>> bool compare_and_swap64(uint64_t * p, uint64_t old, uint64_t x)
>> {
>>     CriticalSectionLock lock;
>>
>>     if (*p != old) return false;
>>     *p = x;
>>     return true;
>> }
>>
>
>
>    function Compare_and_Swap64 (
>       p      : access Interfaces.Unsigned_64;
>       old, x : in     Interfaces.Unsigned_64)
>    return Boolean
>    is
>       Lock : Critical_Section;
>    begin
>       if p.all /= old then
>          return False;
>       else
>          p.all := x;
>          return True;
>       end if;
>    end Compare_and_Swap64;
>
> (I think there should be a "volatile" spec for the "p" object, don't you?)

It might be logical to make it volatile, but the code would not be
different (the inline assembly has memory clobbers already, which force
the memory accesses to be carried out without re-arrangements). But
adding "volatile" would do no harm, and let the user of the function
pass a volatile pointer.

The Ada and C++ code is basically the same here, which is nice.

How would it look with block scope?

extern int bar(int x);
int foo(volatile int * p, int x, int y, int z) {
    int u = bar(x);
    {
        CriticalSectionLock lock;
        *p += z;
    }
    int v = bar(y);
    return v;
}

The point of this example is that the "*p += z;" line should be within
the calls to disableGlobalInterrupts and restoreGlobalInterrupts, but
the calls to "bar" should be outside. This requires the lifetime of the
lock variable to be more limited.


>
>
>
>> This is the code compiled for a 32-bit Cortex-M device:
>>
>> <https://godbolt.org/z/7KM9M6Kcd>
>>
>> The use of the class here has no overhead compared to manually disabling
>> and re-enabling interrupts.
>>
>> What would be the Ada equivalent of this class, and of the
>> "compare_and_swap64" function?
>
>
> See above. I don't have an Ada-to-Cortex-M compiler at hand to compare
> the target code, sorry.

godbolt.org has Ada and gnat 10.2 too, but only for x86-64. The
enable/restore interrupt functions could be changed to simply reading
and writing a volatile int. Then you could compare the outputs of Ada
and C++ for x86-64. If you have the time and inclination, it might be
fun to see.
I think the idea of language support for protected sections is nice, but
I'd be concerned about how efficiently it would map to the requirements
of the program and the target. Such things are often a bit "brute
force", because they have to emphasise "always works" over efficiency.
For example, if you have a 64-bit atomic type (on a 32-bit device), you
don't /always/ need to disable interrupts around it. If you are already
in an interrupt routine and know that no higher priority interrupt
accesses the data, no locking is needed. If you are in main thread code
but only read the data, maybe repeatedly reading it until you get two
reads with the same value is more efficient. Such shortcuts must, of
course, be used with care.
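The "read twice until stable" shortcut can be sketched like this
(invented names; it assumes the only writer runs at interrupt level,
so the reader just needs a torn-free snapshot):

```cpp
#include <cstdint>

// Invented example: a 64-bit tick counter written only by an interrupt
// handler on a 32-bit MCU, where a single read could be torn between
// the two 32-bit halves.
volatile std::uint64_t tick_count = 0;

// Main-thread read without disabling interrupts: re-read until two
// consecutive reads agree, so no update can have torn the snapshot.
std::uint64_t read_ticks(void) {
    std::uint64_t a = tick_count;
    for (;;) {
        std::uint64_t b = tick_count;
        if (a == b) {
            return a;
        }
        a = b;
    }
}
```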

In C and C++, there are atomic types (since C11/C++11). They require
library support for different targets, which are (unfortunately) not
always good. But certainly it is common in C++ to think of an atomic
type here rather than atomic access functions, just as you describe in Ada.
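In C++ that atomic-type style looks roughly like this (an invented
sketch; whether the operation is lock-free on a given 32-bit target
depends on the library support mentioned above):

```cpp
#include <atomic>
#include <cstdint>

// Invented example of the C++11 atomic-type style: the operations
// belong to the object itself, rather than being separate access
// functions.
std::atomic<std::uint32_t> shared_count{0};

// Advance the counter only if it still holds the value we last saw.
bool try_claim(std::uint32_t expected) {
    // compare_exchange_strong is the standard compare-and-swap.
    return shared_count.compare_exchange_strong(expected, expected + 1);
}
```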
A language without warts would be boring!

Thank you for the insights and updates to my Ada knowledge.

David Brown

unread,
Apr 19, 2021, 8:26:07 AM4/19/21
to
I agree with you there. But so many people learn "programming in C++"
or "programming in Java", rather than learning "programming".

Tauno Voipio

unread,
Apr 19, 2021, 9:15:09 AM4/19/21
to
I just wonder if templates and C++ would be valid for the IOCCC contest.
The example would be a good candidate.

--

-TV

David Brown

unread,
Apr 19, 2021, 9:43:13 AM4/19/21
to
I am sure that if the IOCCC were open to C++ entries, templates would be
involved.

But that particular example is not hard to follow IMHO. The style is
more like functional programming, with recursion and pattern matching
rather than loops and conditionals, so that might make it difficult to
understand at first.

Niklas Holsti

unread,
Apr 19, 2021, 12:16:55 PM4/19/21
to
Yes, if they need more initialization than provided by the automatic
"construction" step (Default_Value etc.)

>
> Are there any overheads (other than in the source code) for all this
> inheriting? Ada (like C++) aims to be minimal overhead, AFAIUI, but it's
> worth checking.
>


If the compiler sees the actual type of an object that needs
finalization (as in the critical-section example) it can generate direct
calls to Initialize and Finalize without dispatching. If the object is
polymorphic (what in Ada is called a "class") the calls must go through
a dispatch table according to the actual type of the object.


>> So the Ada "initializer" is not like a C++ constructor, which in Ada
>> corresponds more closely to a function returning an object of the class.
>>
>> An Ada "finalizer" is more similar to a C++ destructor, taking care of
>> any clean-up that is needed before the object disappears.
>>
>
> C++ gives you the choice. You can do work in a constructor, or you can
> leave it as a minimal (often automatically generated) function. You can
> give default values to members. You can add "initialise" member
> functions as you like. You can have "factory functions" that generate
> instances. This lets you structure the code and split up functionality
> in whatever way suits your requirements.


So, just as in Ada.


> For a well-structured class, the key point is that a constructor will
> always establish the class invariant. Any publicly accessible function
> will assume that invariant, and maintain it. Private functions might
> temporarily break the invariant - these are only accessible by code that
> "knows what it is doing". And the destructor will always clean up after
> the object, recycling any resources used.
>
> C++-style constructors are not a requirement for having control
> of the class invariant, but they do make it more convenient and more
> efficient (both at run-time and in the source code) than separate
> minimal constructors (or default values) and initialisers.


In Ada one can write the preconditions, invariants and postconditions in
the source code itself, with standard "aspects" and Ada Boolean
expressions, and have them either checked at run-time or verified by
static analysis/proof.


>>> Let's take a little example.  And since this is comp.arch.embedded,
>>> let's take a purely embedded example of disabling interrupts, rather
>>> than shunned dynamic memory allocations:
>>>
>>> static inline uint32_t disableGlobalInterrupts(void) {
>>>      uint32_t pri;
>>>      asm volatile(
>>>          "  mrs %[pri], primask\n\t" // Get old mask
>>>          "  cpsid i\n\t"             // Disable interrupts entirely
>>>          "  dsb"        // Ensures that this takes effect before next
>>>                      // instruction
>>>          : [pri] "=r" (pri) :: "memory");
>>>      return pri;
>>> }
>>>
>>> static inline void restoreGlobalInterrupts(uint32_t pri) {
>>>      asm volatile(
>>>          "  msr primask, %[pri]"         // Restore old mask
>>>          :: [pri] "r" (pri) : "memory");
>>> }
>>
>>
>> I won't try to write Ada equivalents of the above :-) though I have of
>> course written much Ada code to manage and handle interrupts.
>
> I think Ada has built-in (or standard library) support for critical
> sections, does it not?


Yes, "protected objects". See below.
They are operations ("member functions") of the
Ada.Finalization.Limited_Controlled type, that are null (do nothing) for
that (base) type, and here we override them for the derived
Critical_Section type, to replace the inherited null operations.

The "overriding" keyword is optional (an unfortunate wart from history).


>>> You can use it like this:
>>>
>>> bool compare_and_swap64(uint64_t * p, uint64_t old, uint64_t x)
>>> {
>>>     CriticalSectionLock lock;
>>>
>>>     if (*p != old) return false;
>>>     *p = x;
>>>     return true;
>>> }
>>>
>>
>>
>>    function Compare_and_Swap64 (
>>       p      : access Interfaces.Unsigned_64;
>>       old, x : in     Interfaces.Unsigned_64)
>>    return Boolean
>>    is
>>       Lock : Critical_Section;
>>    begin
>>       if p.all /= old then
>>          return False;
>>       else
>>          p.all := x;
>>          return True;
>>       end if;
>>    end Compare_and_Swap64;
>>
>> (I think there should be a "volatile" spec for the "p" object, don't you?)
>
> It might be logical to make it volatile, but the code would not be
> different (the inline assembly has memory clobbers already, which force
> the memory accesses to be carried out without re-arrangements).


So you are relying on the C++ compiler actually respecting the "inline"
directive? Are C++ compilers required to do that?


> But adding "volatile" would do no harm, and let the user of the
> function pass a volatile pointer.
> The Ada and C++ code is basically the same here, which is nice.


(Personally I dislike this style of critical sections. I think it is a
confusing mis-use of local variables. Its only merit is that it ensures
that the finalization occurs even in the case of an exception or other
abnormal exit from the critical section.)


> How would it look with block scope?
>
> extern int bar(int x);
> int foo(volatile int * p, int x, int y) {
> int u = bar(x);
> {
> CriticalSectionLock lock;
> *p += z;
> }
> int v = bar(y);
> return v;
> }
>
> The point of this example is that the "*p += z;" line should be within
> the calls to disableGlobalInterrupts and restoreGlobalInterrupts, but
> the calls to "bar" should be outside. This requires the lifetime of the
> lock variable to be more limited.


Much as you would expect; the block is

declare
   Lock : Critical_Section;
begin
   p.all := p.all + z;
end;

(In the next Ada standard -- probably Ada 2022 -- one can write such
updating assignments more briefly, as

p.all := @ + z;

but the '@' can be anywhere in the right-hand-side expression, in one or
more places, which is more flexible than the C/C++ combined
assignment-operations like "+=".)


>> But critical sections in Ada applications are more often written
>> using the Ada "protected object" feature.
>>
>> <snip PO example> >>>
> I think the idea of language support for protected sections is nice, but
> I'd be concerned about how efficiently it would map to the requirements
> of the program and the target.


I haven't had any problems so far. Though in some highly stressed
real-time applications I have resorted to communicating through shared
atomic variables with lock-free protocols.


> Such things are often a bit "brute
> force", because they have to emphasise "always works" over efficiency.
> For example, if you have a 64-bit atomic type (on a 32-bit device), you
> don't /always/ need to disable interrupts around it. If you are already
> in an interrupt routine and know that no higher priority interrupt
> accesses the data, no locking is needed.


Interrupt handlers in Ada are written as procedures in protected
objects, with the protected object given the appropriate priority. Other
operations in that same protected object can then be executed in
automatic mutual exclusion with the interrupt handler. The protected
object can also provide one or more "entry" operations with Boolean
"guards" on which tasks can wait, for example to wait for an interrupt
to have occurred. I find this works very well in practice.


> If you are in main thread code but only read the data, maybe
> repeatedly reading it until you get two reads with the same value is
> more efficient. Such shortcuts must, of course, be used with care.

Sure.


> In C and C++, there are atomic types (since C11/C++11). They require
> library support for different targets, which are (unfortunately) not
> always good. But certainly it is common in C++ to think of an atomic
> type here rather than atomic access functions, just as you describe in Ada.


The next Ada standard includes several generic atomic operations on
typed objects in the standard package System.Atomic_Operations and its
child packages. The proposal is at

http://www.ada-auth.org/standards/2xaarm/html/AA-C-6-1.html

Follow the "Next" arrows to see all of it.

In summary, it seems to me that the main difference we have found in
this discussion, so far, between Ada and C++ services in the areas we
have looked at, is that Ada makes it simpler to define one's own scalar
types, while C++ has more compile-time programmability like constexpr
functions.


pozz

unread,
Apr 19, 2021, 12:38:59 PM4/19/21
to
On 17/04/2021 18:45, David Brown wrote:
> On 17/04/2021 17:48, pozz wrote:
>> What do you think about different languages than usual C for embedded
>> systems?
>>
>> I mean C++, Ada but also Python. Always more powerful embedded
>> processors are coming, so I expect new and modern languages will enter
>> in the embedded world.
>>
>> Hardware are cheaper and more powerful than ever, but software stays
>> expensive. New and modern languages could reduce the software cost,
>> because they are simpler than C and much more similar to desktop/server
>> programming paradigm.
>>
>> We embedded sw developers were lucky: electronics and technologies
>> change rapidly, but sw for embedded has changed slower than
>> desktop/mobile sw. Think of mobile app developers: maybe they already
>> changed IDEs, tools and languages ten times in a few years.
>> C language for embedded is today very similar than 10 years ago.
>>
>> However I think this situation for embedded developers will change in
>> the very next years. And we will be forced to learn modern technologies,
>> such as new languages and tools.
>> Is it ok for me to study and learn new things... but it will be more
>> difficult to acquire new skills for real jobs.
>>
>> What do you think?
>
> You should probably add Rust to your list - I think its popularity will
> increase.
>
> Python is great when you have the resources. It's the language I use
> most on PC's and servers, and it is very common on embedded Linux
> systems (like Pi's, and industrial equivalents). Micropython is
> sometimes found on smaller systems, such as ESP32 devices.
>
> Ada involves a fair amount of change to how you work, compared to C
> development. (Disclaimer - I have only done a very small amount of Ada
> coding, and no serious projects. But I have considered it as an
> option.) I really don't see it becoming much more common, and outside
> of niche areas (defence, aerospace) it is very rare. Programming in Ada
> often takes a lot more effort even for simple things, leading quickly to
> code that is so wordy that it is unclear what is going on. And most of
> the advantages of Ada (such as better typing) can be achieved in C++
> with less effort, and at the same time C++ can give additional safety on
> resources that is harder to get on Ada. (But Ada has some nice
> introspective features that C++ programmers currently only dream about.)
>
> C++ is not uncommon in embedded systems, and I expect to see more of it.
> I use it as my main embedded language now. C++ gives more scope for
> getting things wrong in weird ways, and more scope for accidentally
> making your binary massively larger than you expect, but with care it
> makes it a lot easier to write clear code and safe code, where common C
> mistakes either can't happen or you get compile-time failures.
>
> It used to be the case that C++ compilers were expensive and poor
> quality, that the resulting code was very slow on common
> microcontrollers, and that the language didn't have much extra to offer
> small-systems embedded developers. That, I think, has changed in all
> aspects. I use gcc 10 with C++17 on Cortex-M devices, and features like
> templates, strong enumerations, std::array, and controlled automatic
> conversions make it easier to write good code.

What do you suggest for a poor C embedded developer that wants to try
C++ on the next project?

I would use gcc on Cortex-M MCUs.


Paul Rubin

unread,
Apr 19, 2021, 1:48:37 PM4/19/21
to
pozz <pozz...@gmail.com> writes:
> What do you suggest for a poor C embedded developer that wants to try
> C++ on the next project?

I'm not sure what kind of answer you are looking for, but I recommend
the book "Effective Modern C++" by Scott Meyers. C++ is a mudball with
many layers of cruft, but it has improved tremendously over the past few
revisions. The book shows you how to do things the right way, using the
improvements instead of the cruft.

Tom Gardner

unread,
Apr 19, 2021, 2:33:17 PM4/19/21
to
On 19/04/21 17:38, pozz wrote:

> What do you suggest for a poor C embedded developer that wants to try C++ on the
> next project?

Choose a /very/ small project, and try to Get It Right (TM).

When you think there might be a better set of implementation
cliches and design strategies, refactor bits of your code to
investigate them.

Don't forget to use your favourite IDE to do the mechanics
of that refactoring.

Paul Rubin

unread,
Apr 19, 2021, 3:20:32 PM4/19/21
to
Niklas Holsti <niklas...@tidorum.invalid> writes:
> In Ada... The construction step can assign some initial values that
> can be defined by default (pointers default to null, for example)...
>
> type Zero_Handler is access procedure;
> type Counter is record ...
> At_Zero : Zero_Handler; -- Default init to null.

Wow, that is disappointing. I had thought Ada access types were like
C++ or ML references, i.e. they have to be initialized to point to a
valid object, so they can never be null. I trust or at least hope that
SPARK has good ways to ensure that a given access variable is always
valid.

Debugging null pointer exceptions is a standard time-waster in most
languages that have null pointers. Is it that way in Ada as well?

Fwiw, I'm not a C++ expert but I do use it. I try to write in a style
that avoids pointers (e.g. by using references instead), and still find
myself debugging stuff with invalid addresses that I think wouldn't
happen in Ada. But I've never used Ada beyond some minor playing
around. It seems like a big improvement over C. C++ it seems to me
also improves on C, but by going in a different direction than Ada.

I plan to spend some time on Rust pretty soon. This is based on
impression rather than experience, but ISTM that a lot of Rust is
designed around managing dynamic memory allocation by ownership
tracking, like C++ unique_ptr on steroids built into the language. That
lets you write big applications that heavily use dynamic allocation
while avoiding the usual malloc/free bugs and without using garbage
collection. Ada on the other hand is built for high assurance embedded
applications that don't use dynamic allocation much, except maybe at
program initialization time. So Rust and Ada aim to solve different
problems.
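For reference, the C++ mechanism alluded to here looks roughly like
this (a sketch with invented names):

```cpp
#include <memory>
#include <utility>

// Invented sketch of the C++ ownership model mentioned above:
// std::unique_ptr has exactly one owner, and handing the object over
// requires an explicit std::move, after which the source is empty.
std::unique_ptr<int> make_value(int v) {
    return std::make_unique<int>(v);       // caller becomes the owner
}

int consume(std::unique_ptr<int> p) {      // ownership moves into p
    return *p;                             // object freed when p dies
}
```

Rust enforces the equivalent transfer at compile time, whereas here a
use of the moved-from pointer is only caught (as a null dereference) at
run time.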

I like to think it is reasonable to write the outer layers of complex
applications in garbage collected languages, with critical or realtime
parts written in something like Ada. Tim Sweeney talks about this in
his old slide deck "The Next Mainstream Programming Language":

https://www.st.cs.uni-saarland.de//edu/seminare/2005/advanced-fp/docs/sweeny.pdf

The above url currently throws an expired-certificate warning but it is
ok to click past that.

Paul Rubin

unread,
Apr 19, 2021, 3:47:15 PM4/19/21
to
Dimiter_Popoff <d...@tgi-sci.com> writes:
> On 4/19/2021 14:04, Niklas Holsti wrote:
>> ...their HR departments say that they cannot find programmers trained
>> in Ada. Bah, a competent programmer will pick up the core concepts
>> quickly, says I.
>
> This is valid not just for ADA. An experienced programmer will need days
> to adjust to this or that language. I guess most if not all of us have
> been through it.

No it's much worse than that. First of all some languages are really
different and take considerable conceptual adjustment: it took me quite
a while as a C and Python programmer to become anywhere near clueful
about Haskell. But understanding Haskell then demystified parts of C++
that had made no sense to me at all.

Secondly, being competent in a language now means far more than the
language itself. There is also a culture and a code corpus out there
which also have to be assimilated for each language. E.g. Ruby is a
very simple language, but coming up to speed as a Ruby developer means
getting used to a decade of Rails hacks, ORM internals, 100's of "gems"
(packages) scattered over 100s of Github repositories, etc. It's the
same way with Javascript and the NPM universe plus whatever
framework-of-the-week your project is using. Python is not yet that
bad, because it traditionally had a "batteries included" ethic that
tried to standardize more useful functions than other languages did, but
it seems to have given up on that in the past few years.

Maybe things are better in the embedded world, but in the internet world
any significant application will have far too much internal
functionality (dealing with complex network protocols, file formats,
etc) for the developers to get anything done without bringing in a mass
of external dependencies. A lot of dev work ISTM now is about
understanding and managing those dependencies, and also in connecting to
a wider dev community that you can exchange wisdom with. "Computer
science", such as knowing how to balance binary trees, is now almost a
useless subject. (On the other hand, math in general, particularly
probability, has become a lot more useful. This is kind of satisfying
for me since I studied a lot of it in school and then for many years
never used it in programming.)

David Brown

unread,
Apr 19, 2021, 4:19:36 PM4/19/21
to
On 19/04/2021 18:16, Niklas Holsti wrote:
> On 2021-04-19 15:22, David Brown wrote:
>> On 19/04/2021 12:51, Niklas Holsti wrote:
>>> On 2021-04-18 19:59, David Brown wrote:

(I'm snipping for brevity - I appreciate your comments even though I've
snipped many.)

>
> In Ada one can write the preconditions, invariants and postconditions in
> the source code itself, with standard "aspects" and Ada Boolean
> expressions, and have them either checked at run-time or verified by
> static analysis/proof.
>

Yes, I like that for Ada. These are on the drawing board for C++, but
it will be a while yet before they are in place.

>>>
>>> (I think there should be a "volatile" spec for the "p" object, don't
>>> you?)
>>
>> It might be logical to make it volatile, but the code would not be
>> different (the inline assembly has memory clobbers already, which force
>> the memory accesses to be carried out without re-arrangements).
>
>
> So you are relying on the C++ compiler actually respecting the "inline"
> directive? Are C++ compilers required to do that?
>

No, it is not relying on the "inline" at all - it is relying on the
semantics of the inline assembly code (which is compiler-specific,
though several major compilers support the gcc inline assembly syntax).

Compilers are required to support "inline" correctly, of course - but
the keyword doesn't actually mean "generate this code inside the calling
function". It is one of these historical oddities - it was originally
conceived as a hint to the compiler for optimisation purposes, but what
it /actually/ means is roughly "It's okay for there to be multiple
definitions of this function in the program - I promise they will all do
the same thing, so I don't mind which you use in any given case".

The compiler is likely to generate the code inline as part of normal
optimisation, but it would do that anyway.
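A minimal illustration of that meaning of "inline" (invented function;
the point is legality across translation units, not speed):

```cpp
// Invented one-liner illustrating what "inline" licenses: the same
// definition may appear in several translation units (typically via a
// header) without violating the one-definition rule. Actual inlining
// of the call is a separate optimisation decision made by the compiler.
inline int twice(int x) { return x + x; }
```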

>
>> But adding "volatile" would do no harm, and let the user of the
>> function pass a volatile pointer.
>> The Ada and C++ code is basically the same here, which is nice.
>
>
> (Personally I dislike this style of critical sections. I think it is a
> confusing mis-use of local variables. Its only merit is that it ensures
> that the finalization occurs even in the case of an exception or other
> abnormal exit from the critical section.)

Fair enough - styles are personal things. And they are heavily
influenced by what is convenient or idiomatic in the language(s) we
commonly use (and vice versa).

>
>
>> How would it look with block scope?
>>
>> extern int bar(int x);
>> int foo(volatile int * p, int x, int y, int z) {
>>     int u = bar(x);
>>     {
>>         CriticalSectionLock lock;
>>         *p += z;
>>     }
>>     int v = bar(y);
>>     return v;
>> }
>>
>> The point of this example is that the "*p += z;" line should be within
>> the calls to disableGlobalInterrupts and restoreGlobalInterrupts, but
>> the calls to "bar" should be outside.  This requires the lifetime of the
>> lock variable to be more limited.
>
>
> Much as you would expect; the block is
>
>    declare
>       Lock : Critical_Section;
>    begin
>       p.all := p.all + z;
>    end;

Fair enough. (I expected there was some way to have smaller block-scope
variables in Ada, though I didn't know how to write them. And it is not
a given that the scope and the lifetime would be the same, though it
looks like it is the case here.)

As a matter of style, I really do not like the "declare all variables at
the start of the block" style, standard in Pascal, C90 (or older), badly
written (IMHO) newer C, and apparently also Ada. I much prefer to avoid
defining variables until I know what value they should hold, at least
initially. Amongst other things, it means I can be much more generous
about declaring them as "const", there are almost no risks of using
uninitialised data, and the smaller scope means it is easier to see all
use of the variable.
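A small invented example of the style described (each value declared
where it is known, and const where possible):

```cpp
// Invented example: each value is declared at the point where it is
// known, so it can be const and can never be read uninitialised.
int sum_of_squares(const int *v, int n) {
    int total = 0;
    for (int i = 0; i < n; ++i) {       // index scoped to the loop
        const int sq = v[i] * v[i];     // const: value known here
        total += sq;
    }
    return total;
}
```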

>
> (In the next Ada standard -- probably Ada 2022 -- one can write such
> updating assignments more briefly, as
>
>     p.all := @ + z;
>
> but the '@' can be anywhere in the right-hand-side expression, in one or
> more places, which is more flexible than the C/C++ combined
> assignment-operations like "+=".)
>

It may be flexible, but I'm not convinced it is clearer nor that it
would often be useful. But I guess that will be highly related to
familiarity.
These are certainly example differences. In general it would appear
that most things that can be written in one language could be written in
the other in roughly the same way (given appropriate libraries or type
definitions).

And we can probably agree that both are better than plain old C in terms
of expressibility and (in the right hands) writing safer code by
reducing tedious and error-prone manual boilerplate.


David Brown

unread,
Apr 19, 2021, 4:48:09 PM4/19/21
to
On 19/04/2021 18:38, pozz wrote:

> What do you suggest for a poor C embedded developer that wants to try
> C++ on the next project?
>
> I would use gcc on Cortex-M MCUs.
>

I'm not entirely sure what you are asking - "gcc on Cortex-M" is, I
would say, the right answer if you are asking about tools.

Go straight to a new C++ standard - C++17. (If you see anything that
mentions C++98 or C++03, run away - it is pointless unless you have to
maintain old code.) Lots of things got a lot easier in C++11, and have
improved since. Unfortunately the law of backwards compatibility means
old cruft still has to work, and is still there in the language. But that
doesn't mean you have to use it.

Go straight to a newer gcc - gcc 10 from GNU Arm Embedded. The error
messages are much better (or at least less horrendous), and the static
checking is better. Be generous with your warnings, and use a good IDE
with syntax highlighting and basic checking in the editor.

Disable exceptions and RTTI (-fno-exceptions -fno-rtti), and enable
optimisation. C++ (used well) results in massive and incomprehensible
assembly unless you have at least -O1.

Don't try and learn everything at once. Some things, like rvalue
references, are hard and rarely useful unless you are writing serious
template libraries. There are many features of C++ that are needed to
write libraries rather than use them.

Don't be afraid of templates - they are great.

Be wary of the bigger parts of the C++ standard library - std::vector
and std::unordered_map are very nice for PC programming, but are far too
dynamic for small systems embedded programming. (std::array, however,
is extremely useful. And I like std::optional.)
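As a small sketch of those embedded-friendly library types (the function here is invented for illustration): a fixed-capacity std::array plus a std::optional result, with no heap use anywhere.

```cpp
#include <array>
#include <optional>

// Fixed-capacity, statically allocated buffer - no dynamic memory.
std::array<int, 8> samples{};

// A lookup that may fail, expressed without null pointers or magic
// sentinel values: the "no result" case is part of the return type.
std::optional<int> find_first_above(const std::array<int, 8>& buf,
                                    int threshold) {
    for (int v : buf) {
        if (v > threshold) {
            return v;        // engaged optional
        }
    }
    return std::nullopt;     // checked at the call site
}
```

The caller then writes something like "if (auto r = find_first_above(samples, 10)) { use(*r); }" and the compiler keeps the failure case visible.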

Think about how the code might be implemented - if it seems that a
feature or class could be implemented in reasonably efficient object
code on a Cortex-M, then it probably will be. If it looks like it will
need dynamic memory, it probably does.

Big class inheritance hierarchies, especially with multiple inheritance,
virtual functions, etc., is old-fashioned. Where you can, use
compile-time (static) polymorphism rather than run-time polymorphism.
That means templates, overloaded functions, CRTP, etc.
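A minimal CRTP sketch of that static polymorphism (class names invented for illustration): the base class knows the derived type at compile time, so there is no vtable, and the call can typically be inlined on a Cortex-M build.

```cpp
// Compile-time (static) polymorphism via the Curiously Recurring
// Template Pattern: dispatch is resolved at compile time, no vtable.
template <typename Derived>
class Driver {
public:
    void init() {
        // Statically resolved call into the derived class.
        static_cast<Derived*>(this)->do_init();
    }
};

class UartDriver : public Driver<UartDriver> {
public:
    void do_init() { initialised = true; }
    bool initialised = false;
};
```

Usage is just "UartDriver u; u.init();" - same shape as a virtual call, but the compiler sees straight through it.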


Keep <https://en.cppreference.com/w/cpp> handy. Same goes for
<https://godbolt.org>.

And keep smiling! [](){}(); (That's the C++11 smiley - when you
understand what it means, you're laughing!)

pozz

unread,
Apr 19, 2021, 5:17:49 PM4/19/21
to
Il 19/04/2021 22:48, David Brown ha scritto:
> On 19/04/2021 18:38, pozz wrote:
>
>> What do you suggest for a poor C embedded developer that wants to try
>> C++ on the next project?
>>
>> I would use gcc on Cortex-M MCUs.
>>
>
> I'm not entirely sure what you are asking - "gcc on Cortex-M" is, I
> would say, the right answer if you are asking about tools.

I mentioned the tools just as a starting point. I don't know almost
anything about C++, but I coded C for many years. I think there are some
precautions to take when learning C++ in this situation, compared with
learning it as a first language.
Dynamic memory... is it possible to have a C++ project without using
heap at all?

Niklas Holsti

unread,
Apr 19, 2021, 5:26:26 PM4/19/21
to
On 2021-04-19 22:20, Paul Rubin wrote:
> Niklas Holsti <niklas...@tidorum.invalid> writes:
>> In Ada... The construction step can assign some initial values that
>> can be defined by default (pointers default to null, for example)...
>>
>> type Zero_Handler is access procedure;
>> type Counter is record ...
>> At_Zero : Zero_Handler; -- Default init to null.
>
> Wow, that is disappointing. I had thought Ada access types were like
> C++ or ML references, i.e. they have to be initialized to point to a
> valid object, so they can never be null.


You can impose that constraint if you want to: if I had defined

type Zero_Handler is not null access procedure;

then the above declaration of the At_Zero component would be illegal,
and the compiler would insist on a non-null initial value.

But IMO sometimes you need pointers that can be null, just as you
sometimes need null values in a database.
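For comparison, a rough C++ analogue of the not-null constraint can be sketched as a thin wrapper (the NotNull template here is invented for illustration; gsl::not_null from the C++ Core Guidelines support library plays the same role). Note that, unlike the Ada rule, this check fires at run time unless a static analyser proves it away:

```cpp
#include <cassert>

// A pointer wrapper that rejects null at construction, so code that
// holds a NotNull<T> never needs a null check afterwards.
template <typename T>
class NotNull {
public:
    explicit NotNull(T* p) : ptr(p) { assert(p != nullptr); }
    T& operator*() const { return *ptr; }
    T* operator->() const { return ptr; }
private:
    T* ptr;
};
```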

There are also other means in Ada to force the explicit initialization
of objects at declaration (the "unspecified discriminants" method).

The state of the art in Ada implementations of critical systems is
slowly shifting toward using static analysis and proof tools to verify that no
run-time check failures, such as accessing a null pointer, can happen.
That is already fairly easy to do with the AdaCore tools (CodePeer,
SPARK and others). Proving functional correctness still remains hard.


> I plan to spend some time on Rust pretty soon. This is based on
> impression rather than experience, but ISTM that a lot of Rust is
> designed around managing dynamic memory allocation by ownership
> tracking, like C++ unique_ptr on steroids built into the language. That
> lets you write big applications that heavily use dynamic allocation
> while avoiding the usual malloc/free bugs and without using garbage
> collection. Ada on the other hand is built for high assurance embedded
> applications that don't use dynamic allocation much, except maybe at
> program initialization time. So Rust and Ada aim to solve different
> problems.


There is a proposal and an implementation from AdaCore to augment Ada
pointers with an "ownership" concept, as in Rust. I believe that SPARK
supports that proposal. Again, in some cases you want shared ownership
(multiple pointers to the same object), and then the Rust ownership
concept is not enough, as I understand it (not expert at all).

Niklas Holsti

unread,
Apr 19, 2021, 5:37:30 PM4/19/21
to
On 2021-04-19 22:47, Paul Rubin wrote:
> Dimiter_Popoff <d...@tgi-sci.com> writes:
>> On 4/19/2021 14:04, Niklas Holsti wrote:
>>> ...their HR departments say that they cannot find programmers trained
>>> in Ada. Bah, a competent programmer will pick up the core concepts
>>> quickly, says I.
>>
>> This is valid not just for ADA. An experienced programmer will need days
>> to adjust to this or that language. I guess most if not all of us have
>> been through it.
>
> No it's much worse than that. First of all some languages are really
> different and take considerable conceptual adjustment: it took me quite
> a while as a C and Python programmer to become anywhere near clueful
> about Haskell. But understanding Haskell then demystified parts of C++
> that had made no sense to me at all.


I agree that it takes a mental leap to go from an imperative language to
a functional language, or to a logic-programming / declarative language.

To maintain a Haskell program, one would certainly prefer to hire a
programmer experienced in functional programming, over one experienced
only in C, C++ or Ada.


> Secondly, being competent in a language now means far more than the
> language itself. There is also a culture and a code corpus out there
> which also have to be assimilated for each language. E.g. Ruby is a
> very simple language, but coming up to speed as a Ruby developer means
> getting used to a decade of Rails hacks, ORM internals, 100's of "gems"
> (packages) scattered over 100s of Github repositories, etc. It's the
> same way with Javascript and the NPM universe plus whatever
> framework-of-the-week your project is using. Python is not yet that
> bad, because it traditionally had a "batteries included" ethic that
> tried to standardize more useful functions than other languages did, but
> it seems to have given up on that in the past few years.


Relying on libraries/packages from the Internet is also a huge
vulnerability. Not long ago a large part of the world's programs in one
of these languages (unfortunately I forget which -- it was reported on
comp.risks) suddenly stopped working because they all depended on
real-time download of a small library package from a certain repository,
where the maintainer of that package had quarreled with the repository
owner/provider and had removed the package. Boom... Fortunately it was a
very small piece of SW and was easily replaced.

The next step is for a malicious actor to replace some such package with
malware... the programs which use it will seem to go on working, but may
not do what they are supposed to do.

Niklas Holsti

unread,
Apr 19, 2021, 6:03:35 PM4/19/21
to
On 2021-04-19 23:19, David Brown wrote:
> On 19/04/2021 18:16, Niklas Holsti wrote:
>> On 2021-04-19 15:22, David Brown wrote:
>>> On 19/04/2021 12:51, Niklas Holsti wrote:
>>>>
>>>> (I think there should be a "volatile" spec for the "p" object, don't
>>>> you?)
>>>
>>> It might be logical to make it volatile, but the code would not be
>>> different (the inline assembly has memory clobbers already, which force
>>> the memory accesses to be carried out without re-arrangements).
>>
>>
>> So you are relying on the C++ compiler actually respecting the "inline"
>> directive? Are C++ compilers required to do that?
>>
>
> No, it is not relying on the "inline" at all - it is relying on the
> semantics of the inline assembly code (which is compiler-specific,
> though several major compilers support the gcc inline assembly syntax).
>
> Compilers are required to support "inline" correctly, of course - but
> the keyword doesn't actually mean "generate this code inside the calling
> function". It is one of these historical oddities - it was originally
> conceived as a hint to the compiler for optimisation purposes, but what
> it /actually/ means is roughly "It's okay for there to be multiple
> definitions of this function in the program - I promise they will all do
> the same thing, so I don't mind which you use in any given case".


If the call is not in fact inlined, it seems to me that the compilation
of the caller does not see the "asm volatile" in the callee, and
therefore might reorder non-volatile accesses in the caller with respect
to the call. But perhaps such reordering is forbidden in this example,
because p is a pointer, and the callee might access the same underlying
object (*p) through some other pointer to it, or directly.


> As a matter of style, I really do not like the "declare all variables at
> the start of the block" style, standard in Pascal, C90 (or older), badly
> written (IMHO) newer C, and apparently also Ada. I much prefer to avoid
> defining variables until I know what value they should hold, at least
> initially. Amongst other things, it means I can be much more generous
> about declaring them as "const", there are almost no risks of using
> uninitialised data, and the smaller scope means it is easier to see all
> use of the variable.


I mostly agree. However, there is always the possibility of having an
exception, and the question of which variables an exception handler can
see and use.

When all variable declarations are collected in one place, as in

declare
<local var declarations>
begin
<statements>
exception
<handlers>
end

it is easy and safe to say that the handlers can rely on all the
variables declared between "declare" and "begin" being in existence when
handling some exception from the <statements>. If variables are declared
here and there in the <statements>, they might or might not yet exist
when the exception happens, and it would not be safe for the local
exception handler to access them in any way.

Of course one can nest exception handlers when one nests blocks, but
that becomes cumbersome pretty quickly.

I don't know how C++ really addresses this problem.


>> (In the next Ada standard -- probably Ada 2022 -- one can write such
>> updating assignments more briefly, as
>>
>>     p.all := @ + z;
>>
>> but the '@' can be anywhere in the right-hand-side expression, in one or
>> more places, which is more flexible than the C/C++ combined
>> assignment-operations like "+=".)
>>
>
> It may be flexible, but I'm not convinced it is clearer nor that it
> would often be useful. But I guess that will be highly related to
> familiarity.


Yes, I have not yet used it at all, although I believe GNAT already
implements it.

I imagine one not uncommon use might be in function calls, such as

x := Foo (@, ...);

I believe that the main reason this new Ada feature was formulated in
this "more flexible" way was not the desire for more flexibility, but to
avoid the introduction of many more lexical tokens and variations like
"+=", or "+:=" as it would have been for Ada.

Richard Damon

unread,
Apr 19, 2021, 6:09:40 PM4/19/21
to
Unless the compiler knows otherwise, it must assume that a call to a
function might access volatile information, so can not migrate volatile
access across that call. Non-volatile accesses that cannot be affected
by the call can be.
>

Tom Gardner

unread,
Apr 19, 2021, 6:30:31 PM4/19/21
to
On 19/04/21 20:20, Paul Rubin wrote:

> I plan to spend some time on Rust pretty soon. This is based on
> impression rather than experience, but ISTM that a lot of Rust is
> designed around managing dynamic memory allocation by ownership
> tracking, like C++ unique_ptr on steroids built into the language. That
> lets you write big applications that heavily use dynamic allocation
> while avoiding the usual malloc/free bugs and without using garbage
> collection.

It also helps with concurrency. From
https://doc.rust-lang.org/book/ch16-00-concurrency.html

"Initially, the Rust team thought that ensuring memory safety
and preventing concurrency problems were two separate challenges
to be solved with different methods. Over time, the team
discovered that the ownership and type systems are a powerful
set of tools to help manage memory safety and concurrency
problems! By leveraging ownership and type checking, many
concurrency errors are compile-time errors in Rust rather
than runtime errors."

I haven't kicked Rust's tyres, but that seems plausible

Paul Rubin

unread,
Apr 19, 2021, 6:55:33 PM4/19/21
to
pozz <pozz...@gmail.com> writes:
> I mentioned the tools just as a starting point. I don't know almost
> anything about C++, but I coded C for many years. I think there are
> some precautions to take when learning C++ in this situation, compared
> with learning it as a first language.

In this case use the book I mentioned, plus Stroustrup's introductory
book "Programming: Principles and Practice Using C++" is supposed to be
good (I haven't used it). He also wrote "The C++ Programming language"
which is more of a reference manual and which I found indispensable.
Make sure to get the most recent edition in either case. C++ changed
tremendously with C++11 and anything from before that should be
considered near-useless. So that means 4th edition or later for the
reference manual: I'm not sure for the intro book.

Alternatively it might be best to skip C++ entirely and use Ada or
Rust. I don't have a clear picture in my mind of how to address that.

> Dynamic memory... is it possible to have a C++ project without using
> heap at all?

Yes, C++ is a superset of C, more or less. You do have to maintain some
awareness of where dynamic allocation can happen, to avoid using it,
at least after program initialization is finished.

>> And keep smiling! [](){}(); (That's the C++11 smiley - when you
>> understand what it means, you're laughing!)

Heh, if that means what I think it means.

Paul Rubin

unread,
Apr 19, 2021, 7:01:35 PM4/19/21
to
Niklas Holsti <niklas...@tidorum.invalid> writes:
> You can impose that constraint if you want to: if I had defined
> type Zero_Handler is not null access procedure;
> then the above declaration of the At_Zero component would be illegal,
> and the compiler would insist on a non-null initial value.

Oh, this is nice.

> But IMO sometimes you need pointers that can be null, just as you
> sometimes need null values in a database.

Preferable these days is to use a separate type for a value that is
nullable or optional, so failing to check for null gives a compile-time
type error. This is 't Option in ML, Maybe a in Haskell, and iirc
std::optional<T> these days in C++.
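In C++ that separate-type approach looks roughly like this (a sketch; the parse_digit function is invented for illustration). The "no value" case is part of the type, so the compiler makes the caller deal with it before using the result:

```cpp
#include <optional>

// Failure is encoded in the return type rather than as a null
// pointer or a sentinel value such as -1.
std::optional<int> parse_digit(char c) {
    if (c >= '0' && c <= '9') {
        return c - '0';
    }
    return std::nullopt;
}
```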

> The state of the art in Ada implementations of critical systems is
> slowly becoming to use static analysis and proof tools to verify that
> no run-time check failures, such as accessing a null pointer, can
> happen. That is already fairly easy to do with the AdaCore tools
> (CodePeer, SPARK and others). Proving functional correctness still
> remains hard.

I know about SPARK. Is CodePeer something along the same lines? Is it
available through GNU, or is it Adacore proprietary or what?

I still have Burns & Wellings' book on SPARK on the recommendation of
someone here. It looks good but has been in my want-to-read pile since
forever. One of these days.

David Brown

unread,
Apr 20, 2021, 12:58:29 AM4/20/21
to
Either the compiler "sees" the definition of the functions, and can tell
that there are things that force the memory access to be done in middle
(whether these functions are inlined or not), or the compiled does not
"see" the definitions and must therefore make pessimistic assumptions
about what the functions might do. The compiler can't re-order things
unless it can /prove/ that it is safe to do so.

>
>
>> As a matter of style, I really do not like the "declare all variables at
>> the start of the block" style, standard in Pascal, C90 (or older), badly
>> written (IMHO) newer C, and apparently also Ada.  I much prefer to avoid
>> defining variables until I know what value they should hold, at least
>> initially.  Amongst other things, it means I can be much more generous
>> about declaring them as "const", there are almost no risks of using
>> uninitialised data, and the smaller scope means it is easier to see all
>> use of the variable.
>
>
> I mostly agree. However, there is always the possibility of having an
> exception, and the question of which variables an exception handler can
> see and use.
>

In C++ (and C99, but it doesn't have exceptions), you don't need to put
your variables at the start of a block. But their scope and lifetime
will last until the end of the block. So if you need a variable in an
exception, you have to have declared it before the exception handling
but you don't need it in a rigid variables-then-code-block structure :

{
foo1();
int x = foo2(); // <- Not at start of block
foo3(x);
try {
foo4(x, x);
} catch (...) {
foo5(x);
}
}


> When all variable declarations are collected in one place, as in
>
>    declare
>       <local var declarations>
>    begin
>       <statements>
>    exception
>       <handlers>
>    end
>
> it is easy and safe to say that the handlers can rely on all the
> variables declared between "declare" and "begin" being in existence when
> handling some exception from the <statements>. If variables are declared
> here and there in the <statements>, they might or might not yet exist
> when the exception happens, and it would not be safe for the local
> exception handler to access them in any way.

Flexible placement of declarations does not mean /random/ placement, nor
does it mean you don't know what is in scope and what is not!

And you can be pretty sure the compiler will tell you if you are trying
to use a variable outside its scope.

>
> Of course one can nest exception handlers when one nests blocks, but
> that becomes cumbersome pretty quickly.
>
> I don't know how C++ really addresses this problem.
>

For the kinds of code I write - on small systems - I disable exceptions
in C++. I prefer to write code that doesn't go wrong (unusual
circumstances are just another kind of value), and the kind of places
where exceptions might be most useful don't turn up often. (I use
exceptions in Python in PC programming, but the balance is different there.)

>
>>> (In the next Ada standard -- probably Ada 2022 -- one can write such
>>> updating assignments more briefly, as
>>>
>>>      p.all := @ + z;
>>>
>>> but the '@' can be anywhere in the right-hand-side expression, in one or
>>> more places, which is more flexible than the C/C++ combined
>>> assignment-operations like "+=".)
>>>
>>
>> It may be flexible, but I'm not convinced it is clearer nor that it
>> would often be useful.  But I guess that will be highly related to
>> familiarity.
>
>
> Yes, I have not yet used it at all, although I believe GNAT already
> implements it.
>
> I imagine one not uncommon use might be in function calls, such as
>
>    x := Foo (@, ...);
>
> I believe that the main reason this new Ada feature was formulated in
> this "more flexible" way was not the desire for more flexibility, but to
> avoid the introduction of many more lexical tokens and variations like
> "+=", or "+:=" as it would have been for Ada.

Sounds reasonable.

David Brown

unread,
Apr 20, 2021, 1:50:52 AM4/20/21
to
On 19/04/2021 23:37, Niklas Holsti wrote:
> On 2021-04-19 22:47, Paul Rubin wrote:
>> Dimiter_Popoff <d...@tgi-sci.com> writes:
>>> On 4/19/2021 14:04, Niklas Holsti wrote:
>>>> ...their HR departments say that they cannot find programmers trained
>>>> in Ada. Bah, a competent programmer will pick up the core concepts
>>>> quickly, says I.
>>>
>>> This is valid not just for ADA. An experienced programmer will need days
>>> to adjust to this or that language. I guess most if not all of us have
>>> been through it.
>>
>> No it's much worse than that.  First of all some languages are really
>> different and take considerable conceptual adjustment: it took me quite
>> a while as a C and Python programmer to become anywhere near clueful
>> about Haskell.  But understanding Haskell then demystified parts of C++
>> that had made no sense to me at all.
>
>
> I agree that it takes a mental leap to go from an imperative language to
> a functional language, or to a logic-programming / declarative language.
>
> To maintain a Haskell program, one would certainly prefer to hire a
> programmer experienced in functional programming, over one experienced
> only in C, C++ or Ada.

Certainly. The differences between C++ and Ada are much smaller than to
Haskell.

Some languages mix imperative and functional paradigms. In C++, you can
do a certain amount of functional-style coding - you have lambdas, and
with templates and generic functions you can manipulate functions to
some extent. Ranges and list comprehensions are new to the latest
standard, though being library features they are not as neat in syntax
as you get in languages that support these directly (like Python or,
more obviously, Haskell). Pattern matching is on its way too.
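A small example of that mixed style (pre-ranges, so it works from C++11 onwards): a lambda, a first-class function value, handed to a standard algorithm.

```cpp
#include <algorithm>
#include <array>

// Imperative container, functional-style transformation: the lambda
// is passed as a value to std::transform.
std::array<int, 4> doubled(std::array<int, 4> xs) {
    std::transform(xs.begin(), xs.end(), xs.begin(),
                   [](int x) { return 2 * x; });
    return xs;
}
```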

You see at least some functional programming features on many
interpreted languages too, like Python and Javascript.

Going the other way, you get ocaml that is basically a functional
programming language with some imperative features added on.


But for someone unused to functional programming, Haskell does look
quite bizarre. (Though it is not at the level of APL!).

>
>> Secondly, being competent in a language now means far more than the
>> language itself.  There is also a culture and a code corpus out there
>> which also have to be assimilated for each language.  E.g. Ruby is a
>> very simple language, but coming up to speed as a Ruby developer means
>> getting used to a decade of Rails hacks, ORM internals, 100's of "gems"
>> (packages) scattered over 100s of Github repositories, etc.  It's the
>> same way with Javascript and the NPM universe plus whatever
>> framework-of-the-week your project is using.  Python is not yet that
>> bad, because it traditionally had a "batteries included" ethic that
>> tried to standardize more useful functions than other languages did, but
>> it seems to have given up on that in the past few years.
>
>
> Relying on libraries/packages from the Internet is also a huge
> vulnerability. Not long ago a large part of the world's programs in one
> of these languages (unfortunately I forget which -- it was reported on
> comp.risks) suddenly stopped working because they all depended on
> real-time download of a small library package from a certain repository,
> where the maintainer of that package had quarreled with the repository
> owner/provider and had removed the package. Boom... Fortunately it was a
> very small piece of SW and was easily replaced.
>

Would that have been a Javascript library? It is common (mad, but
common) to pull these from external servers (rather than your own
webserver), and it would happen every time a webpage with the code is
loaded.

Paul Rubin

unread,
Apr 20, 2021, 2:18:12 AM4/20/21
to
David Brown <david...@hesbynett.no> writes:
> For the kinds of code I write - on small systems - I disable
> exceptions in C++. I prefer to write code that doesn't go wrong
> (unusual circumstances are just another kind of value), and the kind
> of places where exceptions might be most useful don't turn up often.

That's not so great if you call library routines that can raise
exceptions, and having to propagate error values back up the call chain
is precisely the hassle that exceptions avoid.

There is a proposal for "deterministic exceptions" in C++ that work like
Haskell's Either monad. I.e. they are syntax sugar for propagating
error values upwards, with the compiler automatically inserting the
tests so you don't have to manually check the return values everywhere.
I guess they are a good idea but I don't know if they can replace the
other kind of exceptions. I don't know the current state of the
proposal but some info and links should be here:

https://github.com/cplusplus/papers/issues/310
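Meanwhile the Either-like shape can be approximated with today's library types; here is a sketch using std::variant (names invented for illustration; the proposal would add language-level sugar for this pattern, and std::expected, proposed for a later standard, plays a similar role):

```cpp
#include <variant>

enum class Err { DivByZero };

// A poor man's Either: the result is exactly one of value-or-error,
// and the caller must inspect which alternative it holds.
using Result = std::variant<int, Err>;

Result safe_div(int a, int b) {
    if (b == 0) {
        return Err::DivByZero;   // the error travels as an ordinary value
    }
    return a / b;
}
```

At the call site you write something like "if (auto* v = std::get_if<int>(&r)) ..." rather than catching anything.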

David Brown

unread,
Apr 20, 2021, 2:20:46 AM4/20/21
to
Barring a few minor issues and easy tweaks, pretty much any C code will
also be valid C++ code. Some people use C++ as ABC (A Better C), taking
advantage of a few features like better constants, references (rather
than pointers), namespaces, and other points that look like they could
work as extensions to C rather than a new language.

In general, you avoid dynamic memory in C++ by avoiding "new" or any
standard library functions that might use it - just as you avoid
"malloc" in C by not using it. There are many standard types and
functions that can use dynamic memory in C++, but usually it is quite
obvious if they will need it or not.

When you want to be more sophisticated, C++ gives you features to
override and control "new" and memory allocation in many ways, with
memory pools and custom code. But don't try that in the first couple of
days.
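One way to make that concrete, sketched here for illustration: replace the global allocation functions so that any hidden heap use is counted (or trapped). In a real project you might abort instead, or simply not link an allocator at all so the linker flags any accidental use.

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>

// Counts every hidden heap allocation; on a small embedded target
// this would more likely assert/abort, or be left undefined so the
// link fails on any use of "new".
static std::size_t heap_allocations = 0;

void* operator new(std::size_t n) {
    ++heap_allocations;
    return std::malloc(n);    // sketch only: no bad_alloc handling
}
void operator delete(void* p) noexcept { std::free(p); }
void operator delete(void* p, std::size_t) noexcept { std::free(p); }
```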

David Brown

unread,
Apr 20, 2021, 2:21:58 AM4/20/21
to
On 20/04/2021 00:55, Paul Rubin wrote:

>>> And keep smiling! [](){}(); (That's the C++11 smiley - when you
>>> understand what it means, you're laughing!)
>
> Heh, if that means what I think it means.
>

If you think it means anything at all, then you are probably correct
about what you think it means. (That sounds a bit cryptic, but it is
meant literally.)

David Brown

unread,
Apr 20, 2021, 3:36:42 AM4/20/21
to
On 20/04/2021 08:18, Paul Rubin wrote:
> David Brown <david...@hesbynett.no> writes:
>> For the kinds of code I write - on small systems - I disable
>> exceptions in C++. I prefer to write code that doesn't go wrong
>> (unusual circumstances are just another kind of value), and the kind
>> of places where exceptions might be most useful don't turn up often.
>
> That's not so great if you call library routines that can raise
> exceptions, and having to propagate error values back up the call chain
> is precisely the hassle that exceptions avoid.
>

If you don't use exceptions, don't call library functions that raise
exceptions. If you don't treat "errors" as something mysterious that
might or might not happen, which should wander around the code in the
vague hope that something somewhere can make it all better, then you
don't need silent propagation of exceptions - also known as invisible
and undocumented longjumps.

When you are programming on a PC, you can be using huge libraries where
you know the basic interface of the parts that you use, but have no idea
of the contents and what might possibly go wrong.

When you are programming for small-systems embedded systems, you /know/
what can go wrong in the functions you call - /nothing/. There are no
unexpected or untreated errors - or your development is not finished
yet. Any possible returns are part of the interface to the function,
whether they are "success" returns or "failure" returns. In this kind
of system, you don't have the option of showing a dialogue box to the
user saying "unexpected exception encountered - press OK to quit".

> There is a proposal for "deterministic exceptions" in C++ that work like
> Haskell's Either monad. I.e. they are syntax sugar for propagating
> error values upwards, with the compiler automatically inserting the
> tests so you don't have to manually check the return values everywhere.
> I guess they are a good idea but I don't know if they can replace the
> other kind of exceptions. I don't know the current state of the
> proposal but some info and links should be here:
>
> https://github.com/cplusplus/papers/issues/310
>

They are indeed a good idea. I am quite happy with explicit exceptions
or error indications that are part of the interface to functions, and I
would be happy with syntactic sugar and efficient compiler support.
What I would /not/ use on embedded systems is hidden mechanisms where
you don't know what exceptions a function might throw.

There are plenty of other attempts at conveniently and explicitly
handling errors using templates and classes that typically work as sum
types (like the "Either" you mentioned) or a product type (like a struct
with a "valid" flag as well as the real result type). I have found it
convenient to use std::optional, for example.


Niklas Holsti

unread,
Apr 20, 2021, 3:47:12 AM4/20/21
to
On 2021-04-20 2:01, Paul Rubin wrote:
> Niklas Holsti <niklas...@tidorum.invalid> writes:
>> You can impose that constraint if you want to: if I had defined
>> type Zero_Handler is not null access procedure;
>> then the above declaration of the At_Zero component would be illegal,
>> and the compiler would insist on a non-null initial value.
>
> Oh, this is nice.
>
>> But IMO sometimes you need pointers that can be null, just as you
>> sometimes need null values in a database.
>
> Preferable these days is to use a separate type for a value that is
> nullable or optional, so failing to check for null gives a compile-time
> type error. This is 't Option in ML, Maybe a in Haskell, and iirc
> std::Optional<T> these days in C++.


I'm not aware of proposals for such a thing in Ada.


>> The state of the art in Ada implementations of critical systems is
>> slowly shifting toward using static analysis and proof tools to verify that
>> no run-time check failures, such as accessing a null pointer, can
>> happen. That is already fairly easy to do with the AdaCore tools
>> (CodePeer, SPARK and others). Proving functional correctness still
>> remains hard.
>
> I know about SPARK. Is CodePeer something along the same lines? Is it
> available through GNU, or is it Adacore proprietary or what?


CodePeer is a for-money AdaCore tool, a static analyzer that basically
(AIUI) applies bottom-up weakest-precondition construction and
pre-post-condition analysis to detect many kinds of logical programming
problems, in particular possible failures of run-time checks.

See https://www.adacore.com/codepeer.

To use it in practice on larger programs, one usually has to guide and
help its analysis by writing a certain amount of explicit precondition
and postcondition aspects into the Ada source (which CodePeer then
verifies as part of its analysis), but that has other benefits too, of
course.


> I still have Burns & Wellings' book on SPARK on the recommendation of
> someone here. It looks good but has been in my want-to-read pile since
> forever. One of these days.


Note that most if not all of the SPARK-specific comments that were used
earlier have now been superseded by the precondition, postcondition,
invariant etc. aspects in standard Ada, and the SPARK tools have been
updated accordingly. I'm not sure if this replacement is complete; I
haven't used SPARK.

By the way, did you know that NVIDIA has started using SPARK? There's a
presentation at
https://www.adacore.com/webinars/securing-future-of-embedded-software.

niklas holsti tidorum fi
. @ .

Tauno Voipio

unread,
Apr 20, 2021, 8:09:36 AM4/20/21
to
I'd like to add: Read the generated assembly code. C++ is
prone to explode surprises into it.

An experienced C embedded programmer would read the code,
anyway ...

--

-TV

Tom Gardner

unread,
Apr 20, 2021, 10:29:29 AM4/20/21
to
Agreed.

Unfortunately I've come across too many interview
candidates that haven't a clue about the code emitted
by a compiler when it encounters a function call.
