I would like to ask whether you know of any companies that are using
C++ in safety-critical systems such as flight control systems.
I personally worked in the telecommunication business in the past,
where I implemented several things in C++ and in Java. For two years
now I have been working for the transportation business systems unit.
Here we implement electronic interlocking systems to guarantee that a
train can pass through a station on a secured route. An operator can
see e.g. the layout of a whole train station with all elements such as
tracks, points, signals, level crossings and so on. A main task is to
set a route for a train, e.g. a route from the entry main signal to the
exit main signal. It must then be guaranteed that no other train can
enter this route. The route is protected by setting the points of
neighbouring routes to the opposite direction and by having signals
show the stop aspect so that no train can enter the protected route.
Currently the system is implemented in C, but we are in the process of
moving to C++. Do you know of companies that have experience in
programming safety-critical systems using C++?
Note that the question has nothing to do with whether C++ is suitable or
not suitable for safety critical systems, only about
companies/systems/people with experience using C++ for such systems. My
knowledge of such companies/systems/people is very limited, so I'm hoping
that participants of this newsgroup may be able to point me and my
correspondent to people/places we are not familiar with.
Thanks,
Scott
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]
> I would like to ask whether you know of any companies that are using
> C++ in safety-critical systems such as flight control systems.
Please see this:
http://icalepcs2005.web.cern.ch/Icalepcs2005/
This conference took place just last week and the participants were
those who are involved in high-energy accelerators (including the ones
used for medical purposes), nuclear fusion, astronomy, etc. - most of
them are pure research projects (so you can always claim that they are
not safety critical ;) ), but there is certainly plenty of room for
safety considerations, and some of the devices are available for
"public" use, like medical accelerators for hadron cancer therapy or
nuclear fusion facilities which are eventually going to be actual
power stations.
I think they *are* safety critical.
(The abstracts of all the papers (and some posters) are already
available for download, so those who are interested may take a tour.)
The overall impression from the conference is that the *majority* of
such projects use C/C++ very extensively. The reasons (my humble opinion
and other disclaimers apply) are not that C++ is the best language
ever, but that the world of real-time and control systems is really
"natively" C/C++ -- this means that operating systems, drivers for all
kinds of devices and even external utilities (LabView, Matlab, whatever)
all are made and sold with C(/C++) interfaces bundled, so it is just
easier to bring all those pieces together when C/C++ is the main language.
Of course, there are other languages as well, but not on the "critical"
side: Java is very actively used for the client applications (GUI
consoles), some people use Python as well for some non-real-time
activities, Tcl is less frequent, but also in use.
Interestingly, I can point to only *two* projects presented at the
conference where Ada was used as the main implementation language.
> Note that the question has nothing to do with whether C++ is suitable or
> not suitable for safety critical systems,
Apparently it is.
> only about
> companies/systems/people with experience using C++ for such systems.
The research and scientific community uses C++ extensively, and again,
it seems to be the standard language. Some of the projects can be
qualified as safety-critical, and some clearly end up as commercial
facilities with a safety-critical label.
Having said that, I'm also interested in non-research uses of C++ (or at
least those that were not born as research activities) in
safety-critical systems.
--
Maciej Sobczak : http://www.msobczak.com/
Programming : http://www.msobczak.com/prog/
is a good reference for a start. They use C and C++ on Unix.
I also worked on ATC-related programs on Unix (a simulator program
and one project on tracking airplanes on the ground)
for a company in Germany, but that company no longer exists,
so it's not relevant.
Greetings, Bane.
| I recently received the following message from an attendee of one of my C++
| seminars:
|
| I would like to ask whether you know of any companies that are using
| C++ in safety-critical systems such as flight control systems.
Some companies are listed at Bjarne Stroustrup's "Application" website; you
might find Lockheed Martin, the rover on Mars (OK, it is not a company
but whoever sent it there was using C++ ;-)), etc.
--
Gabriel Dos Reis
g...@integrable-solutions.net
Isn't that rover currently 'missing in action' ?
SCNR
Stefan
--
Stefan Naewe
naewe.s_AT_atlas_DOT_de
| Gabriel Dos Reis wrote:
| > Scott Meyers <use...@aristeia.com> writes:
| >
| > | I recently received the following message from an attendee of one of my C++
| > | seminars:
| > |
| > | I would like to ask whether you know of any companies that are
| > | using C++ in safety-critical systems such as flight control systems.
| >
| > Some companies are listed at Bjarne Stroustrup's "Application" website; you
| > might find Lockheed Martin, the rover on Mars (OK, it is not a company
| > but whoever sent it there was using C++ ;-)), etc.
|
| Isn't that rover currently 'missing in action' ?
There must be a joke here that I missed.
--
Gabriel Dos Reis
g...@integrable-solutions.net
But the context of Scott's question was clearly C++ as opposed to C.
Not C and C++ as opposed to Java or Ada.
Huh?? Where did you read that?? I guess it would be up to Scott to
clarify if this is what he really meant, but I really don't see how
one could draw this conclusion... (the question he was asked does
mention that the system is in C, and they're moving away from that;
but I don't see that as a "how is C++ compared to C for safety-critical
applications?")
Personally, I think C is like 180 degrees opposed to safety-critical
systems. C is satirically described as operating a chainsaw (or a
table-saw, or whatever other power tools) with all the safety switches
off -- you get all the raw power, but you're taking very high risks
in exchange.
C++'s increased type safety and "task-automation" idioms (constructors
and destructors as the most obvious examples -- you make sure that
you don't forget to do certain things) definitely make the C vs. C++
a non-contest when we're talking about safety-critical systems....
But I do believe Scott's question was more along the lines of "is C++
sufficiently safe to be used for safety-critical applications?" (well,
his question really, and explicitly, asks for statistics about use --
what I mean is that, if anything, the intent behind the question would
be C++ vs. other alternatives usually perceived as "safer" than C++,
such as Ada and the what-the-hell-is-wrong-with-this-planet Java option)
Carlos
--
> Personally, I think C is like 180 degrees opposed to safety-critical
> systems. C is satirically described as operating a chainsaw (or a
> table-saw, or whatever other power tools) with all the safety switches
> off -- you get all the raw power, but you're taking very high risks
> in exchange.
Yep, with emphasis on the "satirically". While all those folks
are tossing off bon mots at conferences about how dangerous
C is, programmers are busy delivering software that works in C.
And as for all those systems written in "safer" languages,
guess what all their low-level code is written in?
> C++'s increased type safety and "task-automation" idioms (constructors
> and destructors as the most obvious examples -- you make sure that
> you don't forget to do certain things) definitely make the C vs. C++
> a non-contest when we're talking about safety-critical systems....
Uh huh. Except for the uncertainties of new expressions, and
thrown exceptions, and ...
> But I do believe Scott's question was more along the lines of "is C++
> sufficiently safe to be used for safety-critical applications?" (well,
> his question really, and explicitlym, asks for statistics about use --
> what I mean is that, if anything, the intent behind the question would
> be C++ vs. other alternatives usually perceived as "safer" than C++,
> such as Ada and the what-the-hell-is-wrong-with-this-planet Java option)
Yep. And at the end of the day, the safety of a system depends
remarkably little on the choice of programming language, and
(not at all remarkably) much on the robustness of the *process*
by which the product is developed and tested.
P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
My reading between the lines of the message I received is that a decision has
been made to add C++ to an existing safety-critical system in C, but the
decision was not without controversy, and everybody involved would be reassured
to find that others have already successfully implemented such systems in C++,
i.e., this company won't be blazing brand new trails. That's just a guess,
however. The words I posted are the words that were sent to me. It's also
possible that the person who wrote me would like to contact other people who
have experience using C++ in safety-critical systems to find out what issues
they encountered.
Scott
Lockheed Martin. Most of the on-board software for the Joint Strike
Fighter (F-35) will be written in C++.
--
mail1dotstofanetdotdk
LOL. Yes, C with a suitable tool chain is much safer than C++, but
somehow I do not think that is what you meant.
--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects
- Bjarne Stroustrup; http://www.research.att.com/~bs
PS the majority of the code on the rovers is C and assembler. The C++
part is the scene analysis and the autonomous driving (a rover has to
manage on its own for something like 24 hours between receiving new
instructions about what to do).
C's power is in its simplicity. People who write safety-critical code
don't like complex languages. If someone is not able to write
safe code in C, they will not be able to write safe code in Ada or C++
or anything else. So-called safer languages are safe in that they are
fault-tolerant towards programmer errors; a safety-critical system
isn't. That's why in such cases the primary criterion for choosing a
language is simplicity. Someone who is responsible for lives doesn't
want to rely on complex tools, but rather on him/herself.
>
> C++'s increased type safety and "task-automation" idioms (constructors
> and destructors as the most obvious examples -- you make sure that
> you don't forget to do certain things) definitely make the C vs. C++
> a non-contest when we're talking about safety-critical systems....
The primary reason to choose C++ would be close integration with C, not
safety. What if you have to use threads? Threads are undefined behavior
in C++. Thread programming in C is simple, but even experienced
C++ programmers don't know that they can't start/join threads in
constructors/destructors of shared objects.
And we were doing software for air traffic control.
I found such a bug in the code of a very good programmer, and I saw a
book which *recommends* such code!
Because of a lot of problems with C++, some older programmers felt that
C++ is too complex and unpredictable. They wanted to revert to pure C.
And these just scratch the surface of the problems.
Greetings, Bane.
>>Personally, I think C is like 180 degrees opposed to
>>safety-critical systems. C is satirically described as
>>operating a chainsaw (or a table-saw, or whatever other power
>>tools) with all the safety switches off -- you get all the raw
>>power, but you're taking very high risks in exchange.
> C's power is in its simplicity. People who write safety-critical
> code don't like complex languages. If someone is not able to write
> safe code in C, they will not be able to write safe code in Ada or
> C++ or anything else.
That's probably true. The question is how much it costs to
achieve the same level of reliability.
> So-called safer languages are safe in that they are fault-tolerant
> to programmer errors.
Actually, I think that Ada is safer above all because it is more
readable. It is easier to prove an Ada program (without
e.g. pointer arithmetic) correct than it is a C program.
> A safety-critical system isn't. That's why in such cases the
> primary criterion for choosing a language is simplicity. Someone
> who is responsible for lives doesn't want to rely on complex
> tools, but rather on him/herself.
Actually, when writing code responsible for human life, one
wants redundancy. Relying uniquely on oneself is not a very
redundant solution, and would not be considered acceptable.
>>C++'s increased type safety and "task-automation" idioms
>>(constructors and destructors as the most obvious examples --
>>you make sure that you don't forget to do certain things)
>>definitely make the C vs. C++ a non-contest when we're talking
>>about safety-critical systems....
> The primary reason to choose C++ would be close integration with
> C, not safety. What if you have to use threads? Threads are
> undefined behavior in C++. Thread programming in C is simple,
> but even experienced C++ programmers don't know that they
> can't start/join threads in constructors/destructors of shared
> objects.
The status of threading in C and in C++ is exactly identical.
It's undefined behavior as far as the language standard is
concerned, but most implementations where it is relevant do
define something.
As for the problem of starting a thread in a constructor, given
the amount that has been written about it, it's a pretty poor
programmer who isn't aware of it. (And of course, you can do
just as stupid things in C.)
> And we were doing software for air traffic control.
> I found such bug in code of very good programmer and I saw
> book which *recommends* such code!
I've seen a lot of junk in books. It's well known that some
books are to be avoided.
> Because of a lot of problems with C++, some older programmers
> felt that C++ is too complex and unpredictable. They wanted
> to revert to pure C. These just scratch the surface of the problems.
There is a potential problem that the C++ world is not yet as
mature as the C world in this respect. Which is sort of normal;
it hasn't been around as long. But I know of some pretty large
and robust applications written in C++, without problems. All
in all, in the measure that C++ can make the code more readable,
it is a positive factor, compared to C. Obviously, if instead
you use it to make the code less readable (which it can also do,
if used incorrectly), then it becomes a negative factor. But,
coming back to your initial comment, if your shop is so
organized that using C++ results in less readable code, then it
is organized in a fashion that makes reliable code impossible in
any language.
--
James Kanze mailto: james...@free.fr
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 pl. Pierre Sémard, 78210 St.-Cyr-l'École, France +33 (0)1 30 23 00 34
A very good point. Now, tell me how version A is simpler than
version B, and thus less critical-error prone:
Version A:
free (str1);
str1 = malloc (strlen(str2) + strlen(str3) + 1);
if (str1 != NULL)
{
strcpy (str1, str2);
strcat (str1, str3);
}
Version B:
str1 = str2 + str3;
And let me clarify something -- I *did* make a reasonable
effort to code version A correctly; any bug that there may
be was *not* intentional as a way to prove my point :-)
(though who knows, my subconscious mind may have done it,
if there are bugs)
> If one is not able to write
> safe code in C it is not capable to write safe code in ADA or C++
> or anything.
Though I agree with the sentiment, I'm still puzzled by how people
only see this as a black-or-white issue; my take is: for a given
level of skills of a programmer, if a given language involves a
higher probability of introducing bugs per task, then it is less
suitable for safety-critical systems, because the average of bugs
per unit of functionality will be higher... This argument only
fails if you have programmers for which the probability of coding
a bug is *zero*, which simply does not exist (and if it does,
they exist with probability 0)
Agreed that the issue is more complex, given that good quality
assurance practices (which *must* be adequately used in any
safety-critical system development) should detect and require
correction of any bug -- but then the issue becomes one of
increased development time required to have the same average
rate of bugs.
And even then, bugs *do* occur after the toughest and most skilled
of quality assurances. Buffer overflows still do account for the
highest fraction of security vulnerabilities that we've seen (don't
quote me on this :-)) -- it does not matter how much we want to
take comfort on saying that "it's because the programmer was an
incompetent"; buffer overflows *do happen* and do account for the
highest fraction of security-vulnerabilities (Windows or Linux/Unix).
If you have a language in which you physically cannot have a buffer
overflow -- or equivalently, a subset of idioms in which it is not
physically possible to have a buffer overflow -- then either you'll
have fewer bugs for a given amount of work, or you'll require less
work to reach a given bug rate.
If I write my networking application using exclusively C++ strings,
then I know that, spending *zero* effort, I'll have a
buffer-overflow-free application, with 100% certainty (provided that
the compiler works correctly -- and OK, this may blur the issue in
favor of your argument; but let's face it, compilers and their
standard libraries are better debugged than the code that I, and most
people, including very competent programmers, write :-)).
Anyway, granted that I'm concentrating on certain features of the
language, and you pointed out other aspects of it (point taken
with the examples of multithreading), but still, I do believe the
balance is in favor of C++ hands-down.
What I've always hated is the arguments that put the situation
as a black-or-white, one extreme or the other only -- the people
that argue that using whatever particular language is a guarantee
that you'll have a bug-free (safe) system, and the other extreme,
that just because it is *possible* to write bugs with any programming
language, then it is entirely irrelevant what language we choose
and assembler, C, C++, or Ada are equally suitable.
Carlos
--
True, but in C that would look something like this (mypr_str,
mypr_strprintf and CHECK being project-specific helpers; CHECK
records the error code and jumps to the label on failure):
int func(mypr_str* str1, const char* str2, const char* str3)
{
    int rc = 0;
    CHECK(mypr_strprintf(str1, "%s%s", str2, str3) != NULL, rc, EXIT_POINT);
    /* code */
EXIT_POINT:
    return rc;
}
>
> > If one is not able to write
> > safe code in C it is not capable to write safe code in ADA or C++
> > or anything.
>
> Though I agree with the sentiment, I'm still puzzled by how people
> only see this as a black-or-white issue; my take is: for a given
> level of skills of a programmer, if a given language involves a
> higher probability of introducing bugs per task, then it is less
> suitable for safety-critical systems, because the average of bugs
> per unit of functionality will be higher... This argument only
> fails if you have programmers for which the probability of coding
> a bug is *zero*, which simply does not exist (and if it does,
> they exist with probability 0)
Agreed. To me and some C programmers I know, the main reasons for
introducing C++ are exception handling, constructors/destructors
and the type safety introduced by templates. But their usage is
sometimes restricted by the environment. For example, if the C++
compiler is not thread-aware, the only choice is to use just the C
subset or to be very careful.
Greetings, Bane.
I also prefer version B. Unfortunately if (in either version) I verify
that the code above succeeded (A: no NULL, B: no exception) and I then
write
char c1 = str1[0];
I know that it will succeed for version A (barring meteor strikes or
stack overflow). For version B (assuming str1 is a common implementation
of std::string), indexing an empty string with operator[] is undefined
for a non-const string, and str1.at(0) would throw an exception. There
are plenty of ways to avoid or handle the problem in version B, but the
problem is not obvious. Not obvious means likely to be missed by the
programmer, reviewer, tests, ... .
> What I've always hated is the arguments that put the situation
> as a black-or-white, one extreme or the other only -- the people
> that argue that using whateverparticular language is a guarantee
> that you'll have a bug-free (safe) system, and the other extreme,
> that just because it is *possible* to write bugs with any programming
> language, then it is entirely irrelevant what language we choose
> and assembler, C, C++, or Ada are equally suitable.
I agree. It is a hard problem, and we don't seem to be close to a
general solution.
>> Branimir Maksimovic wrote:
>>> Carlos Moreno wrote:
>>>> Personally, I think C is like 180 degrees opposed to
>>>> safety-critical systems. C is satirically described as
>>>> operating a chainsaw (or a table-saw, or whatever other power
>>>> tools) with all the safety switches off -- you get all the raw
>>>> power, but you're taking very high risks in exchange.
>>> C's power is in its simplicity. People who write safety-critical
>>> code don't like complex languages. If someone is not able to write
>>> safe code in C, they will not be able to write safe code in Ada or
>>> C++ or anything else.
>> That's probably true. The question is how much it costs to
>> achieve the same level of reliability.
It is of course cheaper to use the safety mechanisms of tools than
to code them yourself, but then again, what if the mechanism fails?
>>> So-called safer languages are safe in that they are fault-tolerant
>>> to programmer errors.
>> Actually, I think that Ada is safer above all because it is more
>> readable. It is easier to prove an Ada program (without
>> e.g. pointer arithmetic) correct than it is a C program.
I think that for proving algorithm correctness, a pseudo-language
specially designed for that purpose is best. Then the proof can be
done mechanically.
>>> A safety-critical system isn't. That's why in such cases the
>>> primary criterion for choosing a language is simplicity. Someone
>>> who is responsible for lives doesn't want to rely on complex
>>> tools, but rather on him/herself.
>> Actually, when writing code responsible for human life, one
>> wants redundancy. Relying uniquely on oneself is not a very
>> redundant solution, and would not be considered acceptable.
I didn't mean that a single person would be responsible
for the code.
>>>> C++'s increased type safety and "task-automation" idioms
>>>> (constructors and destructors as the most obvious examples --
>>>> you make sure that you don't forget to do certain things)
>>>> definitely make the C vs. C++ a non-contest when we're talking
>>>> about safety-critical systems....
>>> The primary reason to choose C++ would be close integration with
>>> C, not safety. What if you have to use threads? Threads are
>>> undefined behavior in C++. Thread programming in C is simple,
>>> but even experienced C++ programmers don't know that they
>>> can't start/join threads in constructors/destructors of shared
>>> objects.
>> The status of threading in C and in C++ is exactly identical.
>> It's undefined behavior as far as the language standard is
>> concerned, but most implementations where it is relevant do
>> define something.
For C, mainly. C++ inherits what is defined for C because of the
tight integration.
>> As for the problem of starting a thread in a constructor, given
>> the amount that has been written about it, it's a pretty poor
>> programmer who isn't aware of it. (And of course, you can do
>> just as stupid things in C.)
I wouldn't say that. I only found something about it in the
comp.programming.threads FAQ, and there are controversies
about that matter.
So I am sure that even now some people do it.
The problem is that a very good programmer is not necessarily a
good C++ programmer. For that matter, I found the Effective C++
series most useful.
>>> And we were doing software for air traffic control.
>>> I found such a bug in the code of a very good programmer, and I saw
>>> a book which *recommends* such code!
>> I've seen a lot of junk in books. It's well known that some
>> books are to be avoided.
There is no authoritative C++ threading book
(at least I didn't find one). There are some for C,
but for C++ there are none.
The only one I found useful is
"Pattern-Oriented Software Architecture:
Patterns for Concurrent and Networked Objects",
and even there double-checked locking is advised.
>>> Because of a lot of problems with C++, some older programmers
>>> felt that C++ is too complex and unpredictable. They wanted to
>>> revert to pure C. These just scratch the surface of the problems.
>> There is a potential problem that the C++ world is not yet as
>> mature as the C world in this respect. Which is sort of normal;
>> it hasn't been around as long. But I know of some pretty large
>> and robust applications written in C++, without problems.
Yes, now, C++ tools are mature and pretty stable.
>> All
>> in all, in the measure that C++ can make the code more readable,
>> it is a positive factor, compared to C. Obviously, if instead
>> you use it to make the code less readable (which it can also do,
>> if used incorrectly), then it becomes a negative factor. But,
>> coming back to your initial comment, if your shop is so
>> organized that using C++ results in less readable code, then it
>> is organized in a fashion that makes reliable code impossible in
>> any language.
Oh, I never had to use a debugger in those days.
Greetings, Bane.
>> A very good point. Now, tell me how version A is simpler than
>> version B, and thus less critical-error prone:
>>
>> Version A:
>>
>> free (str1);
>> str1 = malloc (strlen(str2) + strlen(str3) + 1);
>> if (str1 != NULL)
>> {
>> strcpy (str1, str2);
>> strcat (str1, str3);
>> }
>>
>> Version B:
>>
>> str1 = str2 + str3;
>>
>>
>> And let me clarify something -- I *did* make a reasonable
>> effort to code version A correctly; any bug that there may
>> be was *not* intentional as a way to prove my point :-)
>> (though who knows, my subconscious mind may have done it,
>> if there are bugs)
What happens to version A if str1 originally points to a
string allocated on the stack? How does free() behave when
an attempt is made to free stack memory?
>> What I've always hated is the arguments that put the situation
>> as a black-or-white, one extreme or the other only -- the people
>> that argue that using whatever particular language is a guarantee
>> that you'll have a bug-free (safe) system, and the other extreme,
>> that just because it is *possible* to write bugs with any programming
>> language, then it is entirely irrelevant what language we choose
>> and assembler, C, C++, or Ada are equally suitable.
Since your example deals with what is essentially dynamic
string allocation, the Ada version is
function "&"(Left, Right: String) return Unbounded_String is
begin
return To_Unbounded_String(Left & Right);
end "&";
str1 : Unbounded_String := str2 & str3;
This will always work because Ada's Unbounded_String always
deals with heap-based memory in its internal data representation.
It also automatically handles all deallocation issues.
Note that Ada allows overload resolution based upon the
return type of a function.
Jim Rogers
Within the past year or so, I was told by an engineer at Boeing that when
the FAA reviews the process by which safety-critical flight software for
commercial airlines is produced, they assume that compilers produce bad
code. (More accurately, they do not assume that compilers produce object
code whose behavior is consistent with the source code.) I believe he also
told me that it is possible to get compilers that are certified to produce
reliable code, but that no C++ compilers have yet been certified.
This is probably only part of the story, and it would not surprise me if it
is a story fragment that is in some way misleading, but my point is that
for at least some safety-critical applications, my understanding is that
the correctness of the source code is irrelevant unless the correctness of
the compiler has already been established.
Scott
What happens if the operation fails?
Are there any mechanisms for heap resource management
and retrying the operation in case of failure?
Greetings, Bane.
>> Carlos Moreno wrote:
>>> A very good point. Now, tell me how version A is simpler than
>>> version B, and thus less critical-error prone:
>>>
>>> Version A:
>>>
>>> free (str1);
>>> str1 = malloc (strlen(str2) + strlen(str3) + 1);
>>> if (str1 != NULL)
>>> {
>>> strcpy (str1, str2);
>>> strcat (str1, str3);
>>> }
>>>
>>> Version B:
>>>
>>> str1 = str2 + str3;
>>>
>>> And let me clarify something -- I *did* make a reasonable
>>> effort to code version A correctly; any bug that there may
>>> be was *not* intentional as a way to prove my point :-)
>>> (though who knows, my subconscious mind may have done it,
>>> if there are bugs)
>> What happens to version A if str1 originally points to a
>> string allocated on the stack? How does free() behave when
>> an attempt is made to free stack memory?
free() is called for memory created with malloc (on the heap). As far
as I know, freeing memory allocated on the stack is UB. We have to
assume that the author of A knows that, and wouldn't have called free
for memory created on the stack. That is part of his argument for C++,
I think, as std::string encapsulates this.
>> Since your example deals with what is essentially dynamic
>> string allocation, the Ada version is
>>
>> function '&'(Left, Right: String) return Unbounded_String is
>> begin
>> return To_Unbounded_String(Left & Right);
>> end;
>>
>> str1 : Unbounded_String := str2 & str3;
>>
>> This will always work because Ada's Unbounded_String always
>> deals with heap-based memory in its internal data representation.
>> It also automatically handles all deallocation issues.
I have not implemented std::string myself, but I know that most (if not
all) std::string implementations' internal data representation is also
heap-based. For this case, BTW, what would Ada do if the heap is
depleted? Also, we could use an implementation where we make use of the
stack only...
char buff[200];
std::ostrstream buffStream( buff, sizeof(buff) );
buffStream << "H..." << "...d" << std::ends;
...or a bit of both.
return std::string( buff );
>> Note that Ada allows overload resolution based upon the
>> return type of a function.
I do not know whether this is advantageous or not (others may
comment on this; it would be interesting to see a discussion of which
is better). All I know is that this can easily be mimicked in C++ by
providing a dummy argument carrying the type. Therefore one could
reason that in C++, overload resolution can be based on both the return
type and the arguments of the function, as follows:
template <class T>
struct discriminator
{
    typedef T type;
};

// for some concrete types T and U:
T make( discriminator<T> ) { return T(); }
U make( discriminator<U> ) { return U(); }

make is effectively overloaded with almost no overhead. Although
overload resolution is applied on the argument and not the return type,
it does not matter anymore.
>>
>> Jim Rogers
>>
Regards,
Werner
> Within the past year or so, I was told by an engineer at Boeing that when
> the FAA reviews the process by which safety-critical flight software for
> commercial airlines is produced, they assume that compilers produce bad
> code. (More accurately, they do not assume that compilers produce object
> code whose behavior is consistent with the source code.) I believe he also
> told me that it is possible to get compilers that are certified to produce
> reliable code, but that no C++ compilers have yet been certified.
>
> This is probably only part of the story, and it would not surprise me if it
> is a story fragment that is in some way misleading, but my point is that
> for at least some safety-critical applications, my understanding is that
> the correctness of the source code is irrelevant unless the correctness of
> the compiler has already been established.
Having studied DO-178B (FAA guidelines for software in safety-critical
systems), I agree with the engineer at Boeing --- for high-safety systems, the
compilers cannot be assumed to be correct, unless they have been certified. I
have also been told that there are no certified C++ compilers.
It is possible to achieve certification of a product containing software built
without a certified compiler, but it is a lot harder.
Anthony
--
Anthony Williams
Software Developer
Just Software Solutions Ltd
http://www.justsoftwaresolutions.co.uk
You are correct. There is always the finite possibility of
exhausting heap resources.
The Unbounded_String inherits from the abstract type
Ada.Controlled. Controlled types provide programmer-defined
initialize, adjust, and finalize procedures. The combination
of those procedures as defined for Unbounded_String ensures
that the heap memory used for the Unbounded_String object
is properly managed.
Ada allows the programmer to define custom storage pools for
heap management. In the absence of custom storage pools, Ada
uses a default storage pool. If an attempt to allocate from
the default storage pool fails, the exception Storage_Error
is raised. Storage_Error can be handled, just like any other
exception.
Storage pools can be very useful in a concurrent design.
Separate threads or tasks can allocate their own storage
pools for variables of locally defined types. Alternatively,
specific types can be allocated from a common custom storage
pool.
The type Unbounded_String allocates from the default
storage pool.
Jim Rogers
> >> Branimir Maksimovic wrote:
> >>> > Carlos Moreno wrote:
> >>>> >>Personally, I think C is like 180 degrees opposed to
> >>>> >>safety-critical systems. C is satirically described as
> >>>> >>operating a chainsaw (or a table-saw, or whatever other
> >>>> >>power tools) with all the safety switches off -- you
> >>>> >>get all the raw power, but you're taking very high
> >>>> >>risks in exchange.
> >>> > C's power is in its simplicity. One who writes safety-
> >>> > critical code doesn't like a complex language. If one is
> >>> > not able to write safe code in C, one is not capable of
> >>> > writing safe code in Ada or C++ or anything.
> >> That's probably true. The question is how much it costs to
> >> achieve the same level of reliability.
> It is of course cheaper to use the safety mechanisms of tools than
> to code them yourself, but then again, what if a mechanism fails?
This is, of course, a major problem. What if there is an error
in the compiler, and it generates incorrect code? One frequent
argument against C++ for critical code is that the language is
too complex, which leads to a larger risk of compiler errors.
(Whether it is a valid argument is another question. But it is
certainly an argument which cannot be dismissed off hand as
irrelevant.)
> >>> > So-called safer languages are safe in that they are
> >>> > fault-tolerant to programmer errors.
> >> Actually, I think that Ada is safer above all because it is
> >> more readable. It is easier to prove an Ada program
> >> (without e.g. pointer arithmetic) correct than it is a C
> >> program.
> I think that for proving algorithm correctness, a pseudo-language
> specially designed for that purpose is best. Then the proof can
> be done mechanically.
It isn't sufficient for the algorithm to be correct. Its
translation into the compilable language must be correct too.
> >>> > A safety-critical system isn't. That's why in such cases
> >>> > the primary choice for a language is simplicity. One who is
> >>> > responsible for lives doesn't want to rely on complex
> >>> > tools, rather on him/herself.
> >> Actually, when writing code responsible for human life, one
> >> wants redundancy. Relying uniquely on oneself is not a
> >> very redundant solution, and would not be considered
> >> acceptable.
> I didn't mean that single person would be responsible for
> code.
Yes, but that means that readability is a critical asset.
I know that a lot of people do it. It was one of the first
things I spotted in Java's SwingWorker class. Still, the issue
isn't exactly unknown. And when writing a critical system, you
sort of have the responsibility to learn about all such issues
before starting.
> So I am sure that even now some people do it.
I'm also sure that some people write beyond the end of buffers.
Especially in C.
Such people don't work on critical systems (hopefully).
> Problem is that a very good programmer is not necessarily a good
> C++ programmer. For that matter, I found the Effective C++ series
> most useful.
Quite. But a good programmer can learn. And a good programmer
knows his limits, and knows enough to find out what he doesn't
know before trying to do something beyond his depth.
A good organization makes bad programmers behave as if they were
good programmers (or look for jobs elsewhere).
Safety critical systems are only developed by good organizations
(hopefully).
> >>> > And we were doing software for air traffic control. I
> >>> > found such bug in code of very good programmer and I saw
> >>> > book which *recommends* such code!
> >> I've seen a lot of junk in books. It's well known that
> >> some books are to be avoided.
> There is no authoritative C++ threading book (at least I
> didn't find one). There are some for C, but for C++ there are
> none.
I've not found much language specific literature concerning
thread safety in a critical environment, for any language.
> Only this one I found useful:
> "Pattern-Oriented Software Architecture:
> Patterns for Concurrent and Networked Objects"
> and there double-checked locking is advised.
That's based on ACE, I think. My opinion of ACE is not very
high, certainly not something I'd allow in a critical system.
> >>> > Because of a lot of problems with C++, some older
> >>> > programmers felt that C++ is too complex and
> >>> > unpredictable. They wanted to revert to pure C. These
> >>> > problems just scratch the surface.
> >> There is a potential problem that the C++ world is not yet
> >> as mature as the C world in this respect. Which is sort of
> >> normal; it hasn't been around as long. But I know of some
> >> pretty large and robust applications written in C++,
> >> without problems.
> Yes, now, C++ tools are mature and pretty stable.
Not all, but some. The days when I found more errors in the
compiler than in my code are long past.
--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Another addition to the mix is the operating system the application
runs on. Many FAA systems also require the OS to be certified to
DO-178B Level A. Some specialized real-time OS offerings have achieved
DO-178B certification.
Jim Rogers
Only partially so, as many implementations now use the small string
optimisation, where short (usually fewer than 16 char) strings are held
internally.
--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects
> Having studied DO-178B (FAA guidelines for software in safety-critical
> systems), I agree with the engineer at Boeing --- for high-safety systems, the
> compilers cannot be assumed to be correct, unless they have been certified. I
> have also been told that there are no certified C++ compilers.
There is one for Embedded C++:
http://www.ghs.com/news/20050502_embedded.html
--
mail1dotstofanetdotdk
> Anthony Williams wrote:
>
>> Having studied DO-178B (FAA guidelines for software in safety-critical
>> systems), I agree with the engineer at Boeing --- for high-safety
>> systems, the
>> compilers cannot be assumed to be correct, unless they have been
>> certified. I
>> have also been told that there are no certified C++ compilers.
>
> There is one for Embedded C++:
>
> http://www.ghs.com/news/20050502_embedded.html
Nice to know. That product uses the Edison Design Group front end
and the Dinkumware EC++ Library.
P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
I'd be happy to answer any specific questions about our experience and
what we have encountered.
While not dismissible off hand, I will argue that the tool is not what
we should be most afraid of. I consider myself a coder of average
skill. The number of bugs generated by me usually dwarfs the number of
bugs generated by tools by many orders of magnitude. Since the laws of
statistics make it likely that average developers will appear on many
projects, tool bugs will not be the main danger.
While this is mainly a discussion about C vs C++, I have met a similar
attitude in safety-critical environments - but then about the dangers
of C compilers, when assembly language is so pure and simple. I guess
these folks forgot that their macro assemblers and linkers could have
bugs too...
My experience is that any tool that prevents programmers from making
simple errors will be worth it, even if the tool-generated errors
increase slightly.
Also, rather than worrying about the complexity level of the tool, one
should look at its maturity. The worst experience I have ever had was
when we decided to use a brand-new C compiler for a brand-new chip. We
were swamped with compiler bugs, and would single-step through
the code in assembly view, hoping to catch where the compiler messed
up. In contrast, I have worked with mature high-level code-generating
tools (SDL) that had very few bugs, maybe only a couple per year.
And that is the real nub of the issue. Safety critical systems should
only be developed by those who are certified competent as programmers in
the relevant domains. That competence should include the ability to
select an appropriate language and set of tools for the task.
No tool, process or language is going to turn an average programmer
into one suitable for working on critical (life-threatening)
applications.
Almost any computer language can be used in a safe way, and every
computer language I have ever come across is a dangerous tool in the
hands of those who know less than they think they do. The problem with
C++ is that it is such a large language with so many corner cases that
it places heavy demands on the programmer. Reliance on tools to avoid
such corner cases is, at best, misguided.
I would consider an expert programmer who lacked adequate domain
knowledge about as dangerous as a domain expert who lacked adequate
programming knowledge. Unfortunately a great number of those currently
working in development fall foul of one or the other of those
requirements.
--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects
While what you say is not precisely wrong, it's based on a premise that
is somewhat flawed in safety critical work.
When doing safety critical work, the engineering process is concerned
(over-simplistically) with achieving a high level of assurance that
design correctly meets requirements and then that code correctly meets
design. This occurs regardless of what language is used. In such a
scenario, errors in the tool (development environment, compiler,
linker, etc) become a more significant contributor to faults in the end
product. While that doesn't mean that one should "be afraid of the
tool", it does mean that assurance of the tool (in terms of ensuring
that correct code is translated into a correct executable) becomes a
significant determinant of being able to offer assurance that the end
product works --- and to convince an independent reviewer of that.
> > Safety is part of our everyday dealings and we have found
> > that it is sufficiently safe as long as (as P.J. Plauger already
> > pointed out earlier) the programmer is sufficiently
> > knowledgeable about his environment and the proper testing has
> > taken place.
> And that is the real nub of the issue. Safety critical systems
> should only be developed by those who are certified competent
> as programmers in the relevant domains. That competence should
> include the ability to select an appropriate language and set
> of tools for the task.
> No tools, process or language is going to turn an average
> programmer into one suitable for working on critical (life
> threatening) applications.
I disagree with regards to the process, although it may depend
on our definition of "average". I've worked on half-critical
projects; nobody would die if the program crashed, but the
customer wanted 24 hours a day, 7 days a week of up time, with
contractual penalties for down time. (I've also worked on truly
critical systems, but that was a long time ago.)
What we found is that with a good process, at least 90% of the
programmers could be used for productive work. And the process
generally was capable of finding useful things for the other 10%
to do. Typically, the problem with the average programmer with
regards to critical work is attitude, not competence, and a good
process can have a very positive effect on attitude.
When the process for a critical system fails, management does
like to blame it on programmer incompetence, but it just isn't
true. When a critical system fails, it's management's fault.
Period.
> Almost any computer language can be used in a safe way, and
> every computer language I have ever come across is a dangerous
> tool in the hands of those who know less than they think they
> do.
That's true enough. The question is never: is it possible? but
rather, is it the most cost efficient means to achieve the goal.
> The problem with C++ is that it is such a large language with
> so many corner cases that it places heavy demands on the
> programmer.
I'll disagree here. Most of the corner cases can easily be
avoided with some simple rules. These simple rules may limit
what you can do somewhat, but not to the point of eliminating
such advantages as stricter type checking, etc. (In fact, they
actually don't limit that much. Avoiding uncertain areas of the
compiler is generally more limiting.)
> Reliance on tools to avoid such corner cases is, at best,
> misguided.
It depends. For the moment, I've not seen such tools, so it's
hard to say.
> I would consider an expert programmer who lacked adequate
> domain knowledge about as dangerous as a domain expert who
> lacked adequate programming knowledge. Unfortunately a great
> number of those currently working in development fall foul of
> one or the other of those requirements.
In practice, the key is teamwork. It's almost impossible to
find someone who is truly expert in both the domain and the
technical aspects. In a well run team, however, there will be
different experts in different areas, and the entire team
benefits from their knowledge. (FWIW: I've very little
knowledge of either banking or telephone networks, but I've
played a critical role in developing systems for both. The key
is realizing that while having one person with my skill set on
the team is essential, having everyone with the same skill set
would be a disaster.)
--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
> > > >> Branimir Maksimovic wrote:
> > > >>> > Carlos Moreno wrote:
In a critical system, no error can be tolerated. The company
you are working for has control over the quality of your work,
and will take the necessary steps to ensure that your code has
no errors. (Obviously, I'm talking here about the code which is
fully integrated into the final product. Obviously, you don't
just sit down in front of the terminal and type in 100% correct
code. But before the code you write ends up in the product, it
will have been sufficiently reviewed and tested that the company
can be 100% sure of it.)
It does not have similar control over the tools.
> While this is mainly a discussion about C vs C++, I have met a
> similar attitude in safety-critical environments - but then
> about the dangers of C compilers, when assembly language is so
> pure and simple. I guess these folks forgot that their macro
> assemblers and linkers could have bugs too...
> My experience is that any tool that prevents programmers from
> making simple errors will be worth it, even if the
> tool-generated errors increase slightly.
Certainly. Tools which prevent programmers from making simple
errors reduce the total development cost for the same level of
quality. A plus. As long as the tool doesn't introduce other
errors.
My personal opinion is that if C++ were available, I would not
even consider C for a critical system. Depending on the
maturity of the C++ compiler, I might very likely ban the use of
the more complex and newer features. I might also ban the use
of the code which makes absolute rigorous analysis more
difficult, like exceptions or operator overloaded (or anything
else which introduced hidden flow paths or potential state
changes in the code)
> Also, rather than worrying about the complexity level of the
> tool, one should look at its maturity.
I agree here. The problem is that the two are often related.
In the case of C++, compiler maturity, at least with regards to
the newer features (where "newer" means anything added since
CFront 2.1), is often a serious problem.
--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Not all (or maybe not many) domain experts are competent programmers. I
consider programming a different responsibility to being a domain
expert. I have met very competent domain experts that write code that
only they can maintain. They cannot use the programming language to
convey their intent, but they can use English and Math well enough
that a good programmer can implement it. I would like to believe that a
domain expert in tandem with a competent programmer could be used for
this purpose too. Competent programmers don't start out knowing their
problem domain. They learn it on the fly in close consultation with
domain experts.
> That competence should include the ability to
> select an appropriate language and set of tools for the task.
How many languages would you consider appropriate when developing
safety-critical systems? Would you as a programmer be biased towards
certain languages? What would make you select C ahead of C++ or Ada?
Does personal taste in tools make you favour the worse instead of the
better, or does lack of experience with one make you favour the other?
>
> No tools, process or language is going to turn an average programmer
> into one suitable for working on critical (life threatening)
> applications.
Therefore a competent programmer will always be able to achieve what is
required without necessarily relying on a large bag of languages/tools,
but just by sticking with what he knows. Let's say, for example, that
you are a very proficient Ada programmer: you know SW engineering
concepts, OO design etc., but you haven't programmed in C++ before.
Would you choose it as the language, or would you allow yourself onto a
team that is to develop a safety-critical system where C++ was the
language of choice? Maybe as architect or consultant, yes, but not as
developer.
I must ask this question: if I'm proficient at OOD, does this mean I'm
automatically proficient in assembler, or in functional programming?
Certain people have stronger preferences (and more ability) towards
certain paradigms. The saying goes that a good programmer can write
good code in any language. I'm not saying that this is not true, but I
have my doubts (I may be wrong, of course).
>
> Almost any computer language can be used in a safe way, and every
> computer language I have ever come across is a dangerous tool in the
> hands of those who know less than they think they do.
Yes, agreed.
> The problem with
> C++ is that it is such a large language with so many corner cases that
> it places heavy demands on the programmer. Reliance on tools to avoid
> such corner cases is, at best, misguided.
Yes, and I'm certainly still learning the corner cases :-). The less
experienced should have the opportunity to learn from the more
experienced, and the more experienced should have the opportunity to
learn from (and become) the experts. Of course, some people never
will, for lack of ability, but they are (or can be) recognized.
> I would consider an expert programmer who lacked adequate domain
> knowledge about as dangerous as a domain expert who lacked adequate
> programming knowledge. Unfortunately a great number of those currently
> working in development fall foul of one or the other of those
> requirements.
Yes, an expert programmer without domain knowledge is dangerous, but
not if he has the ability to learn from the domain expert. Ultimately
he does not have to become a domain expert to develop good software
(IMHO). He must be able to portray his understanding of the domain to
such an extent that the domain expert can recognize where he errs (a
good communicator). If this is not possible (one can't expect every
programmer to be a good communicator), a mediator may exist who
understands software to such an extent that he can do this.
All said, I would like to believe (maybe I'm an idealist) that being a
domain expert, a requirements analyst, a developer, and a software
architect are all roles within the programming paradigm. If these
roles can collaborate well, it is possible to develop
safety-critical software.
Regards,
Werner
> etc...), and even a couple at NASA and the space station. Safety is
> part of our everyday dealings and we have found that it is sufficiently
> safe as long as (as P.J. Plauger already pointed out earlier) the
> programmer is sufficiently knowledgeable about his environment and the
> proper testing has taken place.
This depends on whether or not the system can reach a safe state when a
failure occurs. For example, a radiation machine can be turned off, a
respiratory ventilator cannot; a train engine can be turned off, its
brakes cannot. As the stakes are raised, the correctness of the tools
used to build the system becomes increasingly important.
The cost of an accident is another factor that determines the importance
of correct tools.
--
mail1dotstofanetdotdk
No, I do not think the premise is flawed. We can never rely on a tool,
no matter how simple, to translate to a correct executable. We always
need tests.
Each of the steps in a process generates some errors, and removes
others. As we define requirements, we forget some and distort some.
During later reviews, some of the errors are caught. One of the steps
in the process will include programming. This activity will generate
errors - with a good process, good tools and disciplined developers
hopefully few errors, but still. Then system test ought to catch most
safety-critical errors.
Focusing on the programming part: suppose I had to write some software
that included signal processing. I then have the choice to:
1) write a 20-line Matlab script, resulting in a very clean, simple and
testable source file. This is then converted to C code using a complex
code generator. While I cannot guarantee this code generator is fault
free, it ought to be of reasonably high quality, assuming it has been
in use for some years.
2) write thousands of lines of C code that handles all that is built
into Matlab, and pass this source code to a C compiler that I know
always produces correct code (assuming I can find one).
Now, which approach do you think would lead to the fewest errors in the
code? Which would be easiest to maintain during the product's 10-year
life span?
[...]
> No, I do not think the premise is flawed. We can never rely on
> a tool, no matter how simple, to translate to a correct
> executable. We always need tests.
We need tests. But if the tests show up an error, that's cause
to be worried.
> Each of the steps in a process generates some errors, and
> removes others. As we define requirements, we forget some and
> distort some. During later reviews, some of the errors are
> caught. One of the steps in the process will include
> programming. This activity will generate errors - with a good
> process, good tools and disciplined developers hopefully few
> errors, but still.
With a good process, the error rate can be as low as one error
per million lines of code. An error rate of one per 100,000 lines is
relatively frequent, although I don't think it would be
considered sufficient in a place developing critical software.
If an error is found in the tests, there is an investigation as
to why, and the process is changed so that it doesn't happen
again.
When a compiler (or any other critical element in the tool
chain) is validated for use in critical systems, the development
process of the tool is evaluated; if the development process
used on the compiler is not capable of producing similarly low
error rates, then the compiler doesn't pass validation.
> Then system test ought to catch most safety-critical errors.
> Focusing on the programming part: suppose I had to write some
> software that included signal processing. I then have the choice to:
> 1) write a 20-line Matlab script, resulting in a very clean,
> simple and testable source file. This is then converted to C
> code using a complex code generator. While I cannot guarantee
> this code generator is fault free, it ought to be of
> reasonably high quality, assuming it has been in use for some
> years.
> 2) write thousands of lines of C code that handles all that is
> built into Matlab, and pass this source code to a C compiler
> that I know always produces correct code (assuming I can find
> one).
> Now, which approach do you think would lead to the fewest
> errors in the code?
If your process ensures that your couple of thousand lines of C
code don't contain an error, and the generated code might,
then the answer is clear: approach 2.
> Which would be easiest to maintain during the product's 10-year
> life span?
We're talking about critical software here. Ease of
maintenance only becomes an issue after we've ensured that
there are no errors.
--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Yes, the rule "I don't use what I don't know"
works best. One doesn't have to know or use everything
available in C++ in order to write successful programs.
> > I would consider an expert programmer who lacked adequate
> > domain knowledge about as dangerous as a domain expert who
> > lacked adequate programming knowledge. Unfortunately a great
> > number of those currently working in development fall foul of
> > one or the other of those requirements.
>
> In practice, the key is teamwork. It's almost inexistant to
> find someone who is truly expert in both the domain and the
> technical aspects. In a well run team, however, there will be
> different experts in different areas, and the entire team
> benefits from their knowledge.
Yes, this is also my experience. I didn't have to know what a SID,
STAR, VHF navaid, non-directional beacon, holding pattern or
departure profile is, because there were people who knew.
My job was to write code and help others with C++, and there were also
people who educated us in the domain we were working in.
The problem is that people do not realise that software engineering
is a domain too. One cannot be devoted to two domains (or one can, but
one will always be behind those who are solely devoted to one
domain, because one learns and gains experience constantly).
Programmers usually learn something about the domain of the program,
but that is just superficial knowledge. I worked in blood transfusion,
accounting, banking, the airline business etc., but I have only
superficial knowledge of those. Alright, I generally know how things
work, but I can't write an app in those domains without help from
domain experts. Domain experts can't write apps as well
without help from programmers. There are tools that help domain
experts write apps without a programmer, but such tools usually have
limits.
Finally, there is no point in time when one can say: alright, I know
everything, let's move to another domain. That is impossible.
There will always be many more things one doesn't know
than one knows, even when solely devoted to only one domain
such as programming.
Greetings, Bane.