I am just curious to know the reason behind MISRA rule 111.
The rule says: "Bit fields shall only be defined to be of type unsigned
int or signed int."
I have used UINT32 instead of unsigned int in my code, but the compiler
has thrown a warning saying "field type should be int".
Could somebody please shed light on this?
Thanks
Vikas
---------------------------------------
Posted through http://www.EmbeddedRelated.com
According to C99:
"A bit-field shall have a type that is a qualified or unqualified
version of _Bool, signed int, unsigned int, or some other
implementation-defined type"
Assuming UINT32 is implemented as 'unsigned long', it falls in the
'implementation-defined' category, which is not portable.
"Unsigned int" and "signed int" are the only types that any C compiler
is guaranteed to accept for bitfields. Other types typically work with most
compilers, but may have different sizes or alignments on different
targets or compilers. For example, if you use an enumerated type, one
compiler may align it to 8-bit boundaries, whereas another might align
it to "int" boundaries (16-bit or 32-bit).
If you are careful to make sure your code is as portable as it needs to
be, and that sizes and alignments are correct on your target, then I
personally see no reason to follow this rule. Using specific types
correctly is an important aspect of writing clear and correct code.
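For reference, a rule-conforming declaration might look like the sketch
below - the field names and widths are invented purely for illustration:

```c
#include <assert.h>

/* MISRA-compliant bitfields: only "unsigned int" or "signed int" as
   the declared type (C99 also permits _Bool). Names and widths here
   are hypothetical. */
struct flags {
    unsigned int ready  : 1;   /* holds 0..1  */
    unsigned int mode   : 2;   /* holds 0..3  */
    signed int   offset : 4;   /* holds -8..7 */
};

/* Pack three values into the struct; values must fit the field widths. */
static struct flags make_flags(unsigned int ready, unsigned int mode,
                               int offset)
{
    struct flags f;
    f.ready  = ready;
    f.mode   = mode;
    f.offset = offset;
    return f;
}
```

Using UINT32 here would draw exactly the warning the original poster saw.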
On top of the reasons other people have posted, think about this for a
bit. A bitfield already has a defined field length, given when you
specify it. By declaring a UINT32, you're calling out a typedef that
also has a defined field length.
If the bitfield you're calling out is 32 bits long you're simply being
redundant. If it's anything other than that, you're conflicting with
yourself. In neither case has declaring the bitfield to be of type
UINT32 provided you with any advantage.
--
Rob Gaddi, Highland Technology
Email address is currently out of order
Not here they can't.
and go to the forum where you will get the definitive answer.
--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Which is irrelevant.
That's not the whole story. When you pick a type for a bitfield, you
give several things - one is the /maximum/ allowed bitlength (such as 32
bits in this case).
Another is the alignment. For example, if you write "struct { uint8_t a
: 2; uint32_t b : 4; } bits" then bits.a will be at the start of the
structure, followed by padding to allow b to fit within the next 32-bit
alignment block.
You also affect the type of the extracted field - with the above
structure on a 16-bit machine, "a" will be promoted to 16-bit int if you
use it in arithmetic, while "b" will be promoted to 32-bit uint after
extraction.
And if you have declared the bitfield to be volatile (or its components
as volatile), then the underlying type may have some effect on the
instructions used to access the data. However, I don't think this is
well defined by C, and compilers are not consistent.
Bitfields have their uses, but you definitely want to check your
compiler and what it is generating - don't expect them to be portable.
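A quick way to see what your compiler actually did is to check the
struct's size on your target. This sketch assumes the compiler accepts
uint8_t/uint32_t as bitfield types at all, which is itself
implementation-defined:

```c
#include <stddef.h>
#include <stdint.h>
#include <assert.h>

/* Non-standard bitfield types: many compilers accept these as an
   extension, but the resulting size, alignment and padding vary. */
struct mixed {
    uint8_t  a : 2;
    uint32_t b : 4;
};

/* Report what *this* compiler chose - there is no portable answer;
   common results range from 1 byte up to 8. */
static size_t mixed_size(void)
{
    return sizeof(struct mixed);
}
```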
> That's not the whole story. When you pick a type for a bitfield, you
> give several things - one is the /maximum/ allowed bitlength (such as
> 32 bits in this case).
>
> Another is the alignment.
No, it's not. Portable C code has _no_ control over alignment
whatsoever, particularly not regarding bitfields.
> For example, if you write "struct { uint8_t a : 2; uint32_t b : 4; }
> bits" then bits.a will be at the start of the structure, followed by
> padding to allow b to fit within the next 32-bit alignment block.
No. Nothing in the language definition requires any padding in this
case. That struct can perfectly legally fit into a single byte.
> You also affect the type of the extracted field - with the above
> structure on a 16-bit machine, "a" will be promoted to 16-bit int if
> you use it in arithmetic, while "b" will be promoted to 32-bit uint
> after extraction.
Not necessarily, given that any compiler, including the one for that
unspecified "16-bit machine", may not even allow you to _compile_ that
code, much less do what you think it should with it.
> Bitfields have their uses, but you definitely want to check your
> compiler
Actually from the point of view of MISRA C you have that backwards. The
moment you use any information you needed to check your compiler manual
for, you've left common ground, and made your code unportable.
Using bitfields and expecting a certain representation is unportable anyway,
as the endianness of bit-fields is implementation-defined.
Vinzent.
--
A C program is like a fast dance on a newly waxed dance floor by people carrying
razors.
-- Waldi Ravens
A lot of stuff is implementation defined, such as representation of
integers and floats. It doesn't have to be a problem, unless you move
the binary representation from one implementation to another.
If you use the bitfields only internally, there are no portability
concerns.
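For instance, a purely internal packing like this (the names are
invented) never exposes the layout to the outside world, so the
implementation-defined ordering and padding simply don't matter:

```c
#include <assert.h>

/* Internal-only use: the in-memory representation never leaves the
   program, so ordering/padding differences between compilers are
   irrelevant. Field names are illustrative. */
struct task_state {
    unsigned int running  : 1;
    unsigned int blocked  : 1;
    unsigned int priority : 3;   /* 0..7 */
};

/* A task can be scheduled if it is not blocked and has priority > 0. */
static int is_schedulable(const struct task_state *t)
{
    return !t->blocked && t->priority > 0u;
}
```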
You are correct - /portable/ C does not define alignment for bitfields.
But /practical/ C does - different compilers use different methods, and
it can make a difference to the layout, alignment and padding of the
fields. It may also affect the instructions used and the width of the
accesses.
As you say, /portable/ C has no control over alignment of bitfields -
the C standards don't give any guidelines. It is up to the compiler,
along with the standard ABI for the target, to determine the
implementation. And for some implementations, the choice of the
underlying type of the bitfield will influence the layout.
>> For example, if you write "struct { uint8_t a : 2; uint32_t b : 4; }
>> bits" then bits.a will be at the start of the structure, followed by
>> padding to allow b to fit within the next 32-bit alignment block.
>
> No. Nothing in the language definition requires any padding in this
> case. That struct can perfectly legally fit into a single byte.
>
Correct - and yet some /implementations/ will pad it into two bytes, or
even more. A compiler could justify using 8 bytes for this struct
(though the ones I tested with used 1 or 2 bytes).
>> You also affect the type of the extracted field - with the above
>> structure on a 16-bit machine, "a" will be promoted to 16-bit int if
>> you use it in arithmetic, while "b" will be promoted to 32-bit uint
>> after extraction.
>
> Not necessarily, given that any compiler, including the one for that
> unspecifited "16-bit machine" may not even allow you to _compile_ that
> code, much less do what you think it should with it.
>
Here, I think, you are wrong. The C standards only allow int, signed
int and unsigned int as bitfield types - C99 also allows _Bool. In each
case, the extracted field is given the type used in the bitfield
declaration. C++ allows any integral type, including enumerations, as
bitfield types - and again, on extraction the data takes on this type.
Many C compilers allow other integral types in bitfields, and they
follow the same rules as with C++.
Again, this is not something in the C standards - it is something from
practical reality. /If/ a compiler supports more general bitfield
types, /then/ it will treat them as that type when the bitfield is
extracted.
>> Bitfields have their uses, but you definitely want to check your
>> compiler
>
> Actually from the point of view of MISRA C you have that backwards. The
> moment you use any information you needed to check your compiler manual
> for, you've left common ground, and made your code unportable.
>
Bitfields are /always/ unportable. If nothing else, there is no
standard defining the ordering of bitfields. Some compilers order them
from the LSB, others from the MSB. MS Visual C++ actually changed this
order between two versions of the compiler!
So if you are writing portable code, you can't rely on bitfield details.
It's fine if all you want to do is pack some data into a smaller
space, for use entirely within a program. It is also okay to use them
for inherently unportable code, such as for hardware registers - but you
must check that your bitfield structure actually matches the hardware.
Given these limitations, I'm actually surprised MISRA allows them at all.
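If you do need to know the ordering on a given compiler, one possible
run-time probe is sketched below. It assumes the 1+31 bit struct
occupies a single unsigned int, as it does on common ABIs - the whole
point is that the answer is implementation-defined:

```c
#include <string.h>
#include <assert.h>

/* Probe struct: one bit followed by 31 filler bits. */
struct probe { unsigned int first : 1; unsigned int rest : 31; };

/* Returns 1 if the first-declared field landed in the least
   significant bit of the storage unit, 0 if it did not. */
static int bitfields_start_at_lsb(void)
{
    struct probe p;
    unsigned int raw = 0u;
    memset(&p, 0, sizeof p);
    p.first = 1u;
    /* Copy only as many bytes as both objects actually have. */
    memcpy(&raw, &p, sizeof raw < sizeof p ? sizeof raw : sizeof p);
    return (raw & 1u) != 0u;
}
```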
The idea behind the MISRA rules and recommendations is that when, years
later, the stupidest schmuck in the world maintains your code, the
probability that he misunderstands it and introduces a bug shall be as
low as possible.
--
Fredrik Östman
MISRA has no view on portability as such.
>>> That's not the whole story. When you pick a type for a bitfield, you
>>> give several things - one is the /maximum/ allowed bitlength (such as
>>> 32 bits in this case).
>>> Another is the alignment.
>> No, it's not. Portable C code has _no_ control over alignment
>> whatsoever, particularly not regarding bitfields.
> You are correct - /portable/ C does not define alignment for bitfields.
>
> But /practical/ C does - different compilers use different methods,
Indeed. And that's why every statement such as yours above claiming
bitfields behave in any particular way is wrong by default. At the very
minimum such a statement would have to be qualified by the exact version
of a particular compiler it's supposed to be applicable for. Often the
exact flags have to be specified, too.
> it can make a difference to the layout,
But you didn't say it "can". You say it _does_, without any
restriction. That's where your claim became incorrect by insufficiently
justified generalization.
> Correct - and yet some /implementations/ will pad it into two bytes, or
> even more.
Yet you effectively claimed that they _all_ did so. And that again is
an unjustified generalization.
> Here, I think, you are wrong. The C standards only allow int, signed int
> and unsigned int as bitfield types - C99 also allows _Bool. In each
> case, the extracted field is given the type used in the bitfield
> declaration.
Relying on the behaviour of such non-standard bitfield types on usage is
every bit as unportable as their very existence.
Extensions to the language can behave whichever way they want. Yes,
they'll usually behave sanely, following the guidance set forth by
standard elements of the language. But there's no guarantee either way.
> Many C compilers allow other integral types in bitfields, and they
> follow the same rules as with C++.
"Many" is an unreliable subset.
> Again, this is not something in the C standards - it is something from
> practical reality. /If/ a compiler supports more general bitfield types,
> /then/ it will treat them as that type when the bitfield is extracted.
And what evidence other than "I have never seen otherwise" do you have
for that claim?
>> Actually from the point of view of MISRA C you have that backwards. The
>> moment you use any information you needed to check your compiler manual
>> for, you've left common ground, and made your code unportable.
> Bitfields are /always/ unportable.
No. Only relying on implementation-specific information about them
makes them so. That's why I advised against even looking at that
information. You can't abuse information you don't have.
Point taken. What I really meant was that these are factors that
/could/ be affected by the choice of underlying type - as we know, there
are no guarantees here.
>> it can make a difference to the layout,
>
> But you didn't say it "can". You say it _does_, without any restriction.
> That's where your claim became incorrect by insufficiently justified
> generalization.
>
>> Correct - and yet some /implementations/ will pad it into two bytes, or
>> even more.
>
> Yet you effectively claimed that they _all_ did so. And that again is an
> unjustified generalization.
>
>> Here, I think, you are wrong. The C standards only allow int, signed int
>> and unsigned int as bitfield types - C99 also allows _Bool. In each
>> case, the extracted field is given the type used in the bitfield
>> declaration.
>
> Relying on the behaviour of such non-standard bitfield types on usage is
> every bit as unportable as their very existence.
>
> Extensions to the language can behave whichever way they want. Yes,
> they'll usually behave sanely, following the guidance set forth by
> standard elements of the language. But there's no guarantee either way.
>
Agreed.
>> Many C compilers allow other integral types in bitfields, and they
>> follow the same rules as with C++.
>
> "Many" is an unreliable subset.
>
True.
There are a number of features in C that are implementation dependent,
rather than being fixed by the standards. If your code relies on any of
these features, you have to check how it works on the particular target
and compiler combination you are using. Bitfields are just an area in
which there are particularly many implementation-dependent issues.
And of course just because the standards happen to be clear on a
particular point, does not mean that any given compiler will actually
follow them!
>> Again, this is not something in the C standards - it is something from
>> practical reality. /If/ a compiler supports more general bitfield types,
>> /then/ it will treat them as that type when the bitfield is extracted.
>
> And what evidence other than "I have never seen otherwise" do you have
> for that claim?
>
I haven't much more evidence, I admit - other than to claim that
compiler writers who implement such extensions would normally do so in
the clearest and most sensible way.
If you know of counter examples, I'll be happy to accept them. And I
will try to be more careful to qualify my points in this thread - though
I think it is fairly clear that the lack of standards and consistency
around bitfields makes /all/ points somewhat vague.
By the same argument, 'int' and 'double' would be unportable as well.
But they have very well-defined properties you can use. And as long as
you use only those properties, your code is portable.
For example, an 'unsigned int foo : 1;' gives a neat place to store a
bit. 100% portable. I use that all the time. It gets unportable when you
'memcpy' or 'fwrite' the structure containing it around and make
assumptions about the result. But that's no different for 'int' and
'double'.
Stefan
There is lots in C that is "implementation defined". Some aspects of
"int" and "double" fall into this, such as their bit width. If you are
writing portable code, you /do/ often avoid using "int" if the bit width
is important. Instead, you use "int16_t", "uint32_t", or whatever you
actually mean.
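As a sketch of "saying which width you mean" - pack_rgb is an invented
helper, not anything from the thread:

```c
#include <stdint.h>
#include <assert.h>

/* The shifts below only make sense because uint32_t is exactly 32 bits
   on every target where it exists; a plain "unsigned int" might be 16. */
static uint32_t pack_rgb(uint8_t r, uint8_t g, uint8_t b)
{
    return ((uint32_t)r << 16) | ((uint32_t)g << 8) | (uint32_t)b;
}
```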
Bitfields have lots of implementation-defined behaviour, and are
therefore often non-portable. But as you say, they have certain
well-defined aspects - if you only need to rely on those aspects, then
your code is portable.
It is a problem if you rely on a certain representation, "(unsigned) int"
or not. So I don't think this particular MISRA rule had close-to-100%
portability in mind.
> If you use the bitfields only internally, there are no portability
> concerns.
LOL.
It would be nice to use bitfields to define device registers, and that
simply ain't possible in a "portable" (i.e. compiler-independent) way,
because the representation may change once a different compiler has been
chosen by the customer. BTSTFU.
The only "portable" way to use bitfields at that time was to define a
collection of flags which were situated at something the Z80 programmer
knows as zero-page to cut down code size by a couple of bytes because
then direct bit addressing was possible. If that's all that bitfields
can deliver, they sure are close to useless, I'd say.
>> If you use the bitfields only internally, there are no portability
>> concerns.
>
> LOL.
>
> It would be nice to use bitfields to define device registers, and that
> simply ain't possible in a "portable" (i.e. compiler-independent) way,
> because the representation may change once a different compiler has been
> chosen by the customer. BTSTFU.
I meant internally, only as a method to cram more data into the same
memory. Using bitfields in device interfaces is no longer 'internal'.
Still, you could define device registers using bitfields, if you include
some #ifdef COMPILER_XYZ around it, followed by
#else
#error Please add register definitions for this compiler
#endif
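Fleshed out, that guarded definition might look like the sketch below.
The register layout and names are hypothetical, and only a GCC-style
branch is shown; a real version would match the compiler's documented
bitfield ordering against the datasheet:

```c
#include <stdint.h>
#include <assert.h>

/* Hypothetical UART status register. The bit assignments are invented
   for illustration and would differ per device. */
#if defined(__GNUC__)
typedef struct {
    uint32_t rx_ready : 1;
    uint32_t tx_empty : 1;
    uint32_t overrun  : 1;
    uint32_t          : 29;   /* reserved bits */
} uart_status_t;
#else
#error Please add register definitions for this compiler
#endif
```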
Which exposes portability problems. Switching compilers mid-project is
pretty rare anyway, and very few embedded projects are 100% portable
between compilers. Try implementing a portable interrupt handler, for
instance.
On 5/12/2011 4:00 AM, vikasvds wrote:
> I am just curious to know the reason behind MISRA rule 111.
Why do you *care* what MISRA has to say? Considering "rules"
to be anything other than "guidelines" abrogates your design
responsibilities to some dweeb who *thinks* he can foresee
everything that you *might* do "wrong" and opt to protect
you from yourself. *At* some expense (reliability, cost,
maintainability, etc.)
Most *guidelines* point out issues that a conscientious designer
should already be aware of -- undefined behaviors in The Standard,
compiler-specific variations, etc. Avoiding something *simply*
because a piece of paper says "you might get screwed" is intended
for The Mindless Masses who don't/can't think things through.
("Don't run with scissors" -- "But, if I *walk*, the patient will
be *dead* before I get there!")
It's simplistic to think that imposing a simple set of "rules"
(so simple, in fact, that the compiler can check and enforce
them!) is going to make a dramatic difference. "Yes, I dotted
all my i's and crossed all my t's... but I still don't know
the difference between a noun and a verb..."
Figure out what your client/user/industry requires and *try* to
adhere to that. But, most of all, *understand* why your client
wants to impose those constraints and make sure that your *client*
understands the consequences of those constraints. E.g., writing
code for a *gaming* system to MISRA is one sure way to get yourself
laughed out of the regulatory process! (I suspect DoD would simply
smile at you and tell you to go back and "do it over"...)
*If* your client insists on MISRA compliance, write your code to
suppress all of those warnings, shrug your shoulders and cash their
check letting them *think* they "bought something (MISRA compliance)
worthwhile". [this is often the path of least resistance rather
than trying to educate them to the folly behind this -- "write in
Ada and software quality will improve"...]
Good luck!
> On 05/13/2011 10:28 PM, Vinzent Hoefler wrote:
>
>>> If you use the bitfields only internally, there are no portability
>>> concerns.
>>
>> LOL.
>>
>> It would be nice to use bitfields to define device registers, and that
>> simply ain't possible in a "portable" (i.e. compiler-independent) way,
>> because the representation may change once a different compiler has been
>> chosen by the customer. BTSTFU.
>
> I meant internally, only as a method to cram more data into the same
> memory. Using bitfields in device interfaces is no longer 'internal'.
Sure. And if you do that, you do not rely on a specific internal
representation and all is well.
> Still, you could define device registers using bitfields, if you include
> some #ifdef COMPILER_XYZ around it, followed by
>
> #else
> #error Please add register definitions for this compiler
> #endif
Which certainly violates MISRA rule 10 then: "Sections of code should not be
'commented out'." ;)
> Which exposes portability problems. Switching compilers mid-project is
> pretty rare anyway, and very few embedded projects are 100% portable
> between compilers. Try implementing a portable interrupt handler, for
> instance.
The interrupt handler was no problem for the self-written emulator. But
the fact that the two different compilers decided to count the bits from
different directions was.
>> Still, you could define device registers using bitfields, if you include
>> some #ifdef COMPILER_XYZ around it, followed by
>>
>> #else
>> #error Please add register definitions for this compiler
>> #endif
>
> Which certainly violates MISRA rule 10 then: "Sections of code should
> not be 'commented out'." ;)
In that case, just use different header files, and some Makefile tricks :)
>Figure out what your client/user/industry requires and *try* to
>adhere to that.
Many want MISRA compliance.
>*If* your client insists on MISRA compliance, write your code to
>suppress all of those warnings, shrug your shoulders and cash their
>check letting them *think* they "bought something (MISRA compliance)
>worthwhile".
Sounds completely unethical and unprofessional to me.
On 5/15/2011 3:01 AM, Chris H wrote:
> In message<iql7n4$vcv$1...@speranza.aioe.org>, D Yuniskis
> <not.goi...@seen.com> writes
>>
>> On 5/12/2011 4:00 AM, vikasvds wrote:
>>> I am just curious to know the reason behind MISRA rule 111.
>
>> Figure out what your client/user/industry requires and *try* to
>> adhere to that.
>
> Many want MISRA compliance.
There are groups/individuals in every industry that aspire to
create/impose "standards" on the work performed *in*/for that
industry. Most appear to be motivated by "noble causes".
They all suffer from trivializing what is a complex problem.
I've seen movements to make engineers legally *liable* for
the consequences of their designs ("Sure! And when do I
get veto power over MY BOSS??"). I've seen folks insist
that only "qualified" individuals write the code ("So,
does this require a certain type of education? A certain
level of intelligence?").
When the first MISRA release came out, a client waved it under
my nose AS IF this would make the code I was about to write for
him better -- *magically*. In the time it took me to skim the
document (over lunch), I was able to show him that:
- much of it falls under the category of "don't use undefined or
implementation-defined behaviors"
- some things were inconsistent ("test error values returned by
functions" -- but *don't* use errno??)
- some things were crippling ("break" and "continue" -phobia!)
- some of it was just *silly* (the dreaded '$' character and
the *sinister* "register" qualifier -- not to mention the
worrisome *comma* operator!)
Much of the code I wrote last week would make a MISRA checker complain
because of my liberal use of offsetof()! :<
Needless to say, he dropped the compliance requirement, I've heard
of no deaths/injuries/recalls associated with the product. In fact,
I've not had a single bug-fix request for it, either ("bug fixes"
are "free" and immediate -- a *practical* way to keep pressure on
the developer to provide quality code)
[apologies if I have misremembered any of this and/or if changes
have been made in the years since this event]
*Personally*, I abhor "closed" and "for pay" standards -- if what
you have is so wonderful (and really little more than a piece of
electronic paper), why hoard it?
>> *If* your client insists on MISRA compliance, write your code to
>> suppress all of those warnings, shrug your shoulders and cash their
>> check letting them *think* they "bought something (MISRA compliance)
>> worthwhile".
>
> Sounds completely unethical and unprofessional to me.
How so? By giving the client what he claims to *want*?
"Here, hire a team of (fill-in-the-blank) experts. Have them
pore over the code for many man-years. Let them convince you
the code I've delivered is 100.00000000000% (fill-in-the-blank)
compliant. Go to bed with peace of mind *hoping* that you
really have a safe/secure/robust/'whatever' product/system
(because that's what you *think* this compliance is giving
you)"
These sorts of "Rules" try to make "better coders" out of
run-of-the-mill coders by artificially imposing constraints
on *how* they do *what* they do. But, they just shift
attention to "silencing compiler warnings" instead of
rethinking what they might be "doing wrong".
"Aw, crap! The compiler says I have two 'return' statements
in this function. Now I have to rewrite the damn thing so
there is only *one*... (regardless of how inappropriate that
may be for this piece of code)"
It's the "Just Say No" (to drugs) mentality -- as if a catchy
saying will solve the problem. The hope/delusion that you can
(effectively) *LEGISLATE* quality, etc. Spend your resources
hiring and training better staff so they design *in* quality
from the start.
By far, the best environments I've found are those in which peer
pressure (*not* "competition") inspires you to produce a quality
product. Those in which "compliance" is pushed down "from above"
end up fostering CYA behavior: "The code passed the compliance
suite! It's not *my* fault..."
> Hi Chris,
>
> On 5/15/2011 3:01 AM, Chris H wrote:
>> In message<iql7n4$vcv$1...@speranza.aioe.org>, D Yuniskis
>> <not.goi...@seen.com> writes
>>>
>>> On 5/12/2011 4:00 AM, vikasvds wrote:
>>>> I am just curious to know the reason behind MISRA rule 111.
>>
>>> Figure out what your client/user/industry requires and *try* to
>>> adhere to that.
>>
>> Many want MISRA compliance.
>
> There are groups/individuals in every industry that aspire to
> create/impose "standards" on the work performed *in*/for that
> industry. Most appear to be motivated by "noble causes".
> They all suffer from trivializing what is a complex problem.
MISRA is not about making the code better. MISRA is about eliminating
common sources of errors ("common" as in "statistically proven" -- which,
OTOH, doesn't mean there aren't any others, or that such requirements do
not lead to other sources of errors, "other" as in "individually proven").
That said, I certainly doubt that obeying the rules of MISRA accomplishes
anything for someone who knows what s/he is doing. But that's also an
individual perception, of course, not a statistically proven one.
On 5/15/2011 8:10 AM, Vinzent Hoefler wrote:
[snip]
>>>>> I am just curious to know the reason behind MISRA rule 111.
>>>
>>>> Figure out what your client/user/industry requires and *try* to
>>>> adhere to that.
>>>
>>> Many want MISRA compliance.
>>
>> There are groups/individuals in every industry that aspire to
>> create/impose "standards" on the work performed *in*/for that
>> industry. Most appear to be motivated by "noble causes".
>> They all suffer from trivializing what is a complex problem.
>
> MISRA is not about making the code better. MISRA is about eliminating
> common sources of errors ("common" as in "statistically proven" -- which,
> OTOH, doesn't mean there aren't any others, or that such requirements do
> not lead to other sources of errors, "other" as in "individually proven").
Correct. But, anyone "skilled in the art" *should* already
know most of the issues that MISRA tries to address (replace
"MISRA" with damn near any other "standard").
I contend that adopting these sorts of "standards" and the
imperatives they impose ends up focusing attention on
trivialities -- attention that should, instead, be focused
on *bigger* issues.
It's the same sort of mentality that seeks to impose "style
guidelines" on code in an attempt to make it more readable,
maintainable, etc. People end up working to appease the
Standard's God instead of focusing on the product they are
preparing.
[this comes back to my assertion that "rules" should always be
GUIDELINES. There's a big difference between a compiler offering
reminders/suggestions -- "missing else", etc. -- to help catch
problems. It's another thing entirely to require/prohibit certain
things for fear they *might* cause problems.]
> That said, I certainly doubt that obeying the rules of MISRA accomplishes
> anything for someone who knows what s/he is doing.
But it (and similar "standards") can be counterproductive by
forcing implementations to take "unnatural" forms. For example,
I often wrap a section of code in a do-while(1) and liberally
"break" out of that control structure (for error conditions, etc.).
MISRA tells me this is "prohibited". So, I am now forced to
contort the code to comply with this *arbitrary* prohibition
just in case I might not be smart enough to use it properly?
Should we *ban* division because folks forget to test the denominator
for zero? Should we *ban* "==" because folks fail to implement
fuzzy equality tests for floats? Should we *ban* the use of the
value "10" because some might consider it 10r10 while others
consider it 10r2, 10r8 or 10r16? (or, perhaps we should ban all
non-decimal radix? or, force all constants to explicitly declare
their radix *and* data type -- is 50,000 a long or an int?)
Do you assume your staff are *competent* and provide them with
tools to assist them in the performance of their duties? Or, do
you assume they are INcompetent and impose a policeman/regulator
to watch over them and scold them each time they do something
"unexpected"?
> But that's also an
> individual perception, of course, not a statistically proven one.
Spend your resources ($$) on improving the quality of your staff.
That's something that will benefit you in other ways that things
like artificial "standards" can *never* address.
Over the years, I've found that "coding" is the *least* important
of the aspects of product development. By far, proper *specification*
and *testing* will result in a higher quality product (by *any*
measure of "quality") than attempts to micromanage the "coding".
(e.g., I budget 40% of my time for specification/design and 40% for
testing/verification/validation... so "coding" takes a scant 20%
of the effort. IMO, energy spent trying to improve that has far
less impact on the result than improvements in this other "80%").
YMMV, of course.
Not /all/ suffer from trivializing complex problems. Just as many
suffer equally from complicating trivial problems.
>
> *Personally*, I abhor "closed" and "for pay" standards -- if what
> you have is so wonderful (and really little more than a piece of
> electronic paper), why hoard it?
>
As far as I am concerned, a set of rules or definitions is not a
standard unless it is well-maintained by a reputable body, freely
available to view by anyone who wants it, freely implementable by anyone
who wants to, represents a real and practical document that is widely
followed, and adapts as required by its users, and modern developments.
Without that, it's just a set of private rules for a particular club.
That's fine in itself - private rules have lots of uses. But it is
not a standard.
I have an even lower opinion of those who call their club rules "open
standards" and yet still charge vast amounts to sell you a pseudo-pdf
file that you can't use properly.
I understand that it costs a lot of money to produce such a set of
rules, and that not everything can be state-sponsored even if it is for
the greater good. But there are plenty of ways to make money from such
an endeavour - charge for certification or the use of logos, sell
printed books, provide courses and training, consultancy fees,
compliance-checking software, etc.
> By far, the best environments I've found are those in which peer
> pressure (*not* "competition") inspires you to produce a quality
> product. Those in which "compliance" is pushed down "from above"
> end up fostering CYA behavior: "The code passed the compliance
> suite! It's not *my* fault..."
You'll hear that a lot, regarding the code itself, the methodology used
to write it, and the development tools to compile it. Some people think
it is more important for a compiler to have passed a particular
certification suite than to be correct - because then you can use that
certification as a defence in court if you get sued.
On 5/15/2011 9:23 AM, David Brown wrote:
>> There are groups/individuals in every industry that aspire to
>> create/impose "standards" on the work performed *in*/for that
>> industry. Most appear to be motivated by "noble causes".
>> They all suffer from trivializing what is a complex problem.
>
> Not /all/ suffer from trivializing complex problems. Just as many suffer
> equally from complicating trivial problems.
The same can be said of the language gods trying to design the
capacity for "error" (?) out of a language. Eventually, you end
up with a language that isn't useful for anything other than
academia :>
Educate; don't Legislate (impose).
>> *Personally*, I abhor "closed" and "for pay" standards -- if what
>> you have is so wonderful (and really little more than a piece of
>> electronic paper), why hoard it?
>
> As far as I am concerned, a set of rules or definitions is not a
> standard unless it is well-maintained by a reputable body, freely
> available to view by anyone who wants it, freely implementable by anyone
> who wants to, represents a real and practical document that is widely
> followed, and adapts as required by its users, and modern developments.
+42
> Without that, it's just a set of private rules for a particular club.
> That's fine in itself - private rules have lots of uses. But it is not a
> standard.
>
> I have an even lower opinion of those who call their club rules "open
> standards" and yet still charge vast amounts to sell you a pseudo-pdf
> file that you can't use properly.
>
> I understand that it costs a lot of money to produce such a set of
> rules, and that not everything can be state-sponsored even if it is for
> the greater good. But there are plenty of ways to make money from such
> an endeavour - charge for certification or the use of logos, sell
> printed books, provide courses and training, consultancy fees,
> compliance-checking software, etc.
Or, hope for the benevolence of "key players" in those industries
to underwrite all or part of their efforts. Things like Standards
are so tenuous that you have to be wary that The Industry might
just pick up and head off in a different direction regardless of
your concern/interests.
The problem with "paid" organizations promoting/sponsoring things
like this is they tend to be self-perpetuating. They have a
vested interest in "their" Standard. So, the biological organisms
involved in it have a *huge* stake -- their SALARIES!
>> By far, the best environments I've found are those in which peer
>> pressure (*not* "competition") inspires you to produce a quality
>> product. Those in which "compliance" is pushed down "from above"
>> end up fostering CYA behavior: "The code passed the compliance
>> suite! It's not *my* fault..."
>
> You'll hear that a lot, regarding the code itself, the methodology used
> to write it, and the development tools to compile it. Some people think
> it is more important for a compiler to have passed a particular
> certification suite than to be correct - because then you can use that
> certification as a defence in court if you get sued.
By comparison, if you inspire people to take *personal* ownership of
their product, they actually *care* about what they are producing
and *want* it to work/be better, etc.
Abrogating some/any portion of your "ownership" of a product leaves
you exposed to whatever "deficiencies" that other organization
brings to the table (i.e., what if their quality isn't up to
par? what if they have made a mistake? etc.).
You can never protect completely against litigation (or stupidity!).
So, you do some *reasonable* amount of "due diligence" and hope
for the best. It's folly to try to design a hammer that *can't*
be used to hit your own foot... or a wood chisel that can't be
used to cut aluminum gutters.
OTOH, it costs very little to provide written warnings against
these sorts of things...
D Yuniskis wrote:
> > MISRA is not about making the code better. MISRA is about eliminating
> > common sources of errors ("common" as in "statistically proven", which
> > OTOH doesn't mean there aren't any other or that such requirements do
> > not lead to other sources of errors ("others" as in "individually proven").
>
> Correct. But, anyone "skilled in the art" *should* already
> know most of the issues that MISRA tries to address (replace
> "MISRA" with damn near any other "standard").
>
> I contend that adopting these sorts of "standards" and the
> imperatives they impose ends up focusing attention on
> trivialities -- attention that should, instead, be focused
> on *bigger* issues.
>
> It's the same sort of mentality that seeks to impose "style
> guidelines" on code in an attempt to make it more readable,
> maintainable, etc. People end up working to appease the
> Standard's God instead of focusing on the product they are
> preparing.
Coding standards, MISRA and others, do a lot to make big projects
much more reliable. They tend to force people to use clear
statements devoid of the kind of one-off programming tricks
that create debugging and application nightmares.
I have seen a lot of code written by many different people.
The best, fastest and easiest-to-maintain code is simple
and clear, and lets modern tools do their work.
MISRA is a low-cost standard well worth the price.
Regards,
--
Walter Banks
Byte Craft Limited
http://www.bytecraft.com
D Yuniskis wrote:
> *Personally*, I abhor "closed" and "for pay" standards -- if what
> you have is so wonderful (and really little more than a piece of
> electronic paper), why hoard it?
I assume that you don't charge for the work you do for customers.
w..
>-----< Vinzent Hoefler >
> Which certainly violates MISRA rule 10 then: "Sections of code should
> not be 'commented out'."
No, absolutely not. That rule (now 2.4) specifically says that you
should use #if or #ifdef ...#endif instead of commenting out using
/* ... */.
--
Fredrik Östman
I am just curious to know the reason behind MISRA rule 111.
The rule says: Bit fields shall only be defined to be of type unsigned
int or signed int.
I have used UINT32 instead of unsigned int in my code, but the compiler
has thrown a warning saying "field type should be int".
Could somebody please shed light on this?
Thanks
Vikas
---------------------------------------
Posted through http://www.EmbeddedRelated.com
Then neither ISO C nor C++ is a "Standard"
You are asking in the wrong place.
Try the forum at
>Could somebody please put light on this.
No one in this NG can.
Then there are no standards you can rely on.
By the definition I used, then that's correct. They come close,
however. But until you can freely download the pdfs, and use google to
search online html versions, it's not a full standard to me. I think it
is absurd that so many millions of developers around the world rely on
these "standards", yet have no simple and easy way to view them. Online
versions, especially with an interactive comment / wiki setup, would be
a huge boon to developers.
There are plenty of different business models for different types of
work. I have no issues with standards developers making money out of
their work (though I think state sponsorship of standards committees is
a better model in many cases). I just think that a better way to make
that money is by publishing the standards freely and spreading them as
wide as possible, then selling services (trademark licensing,
consultancy, certification, etc.).
Maybe I'm naive here, and the sums wouldn't work out in the end. But
Misra charge £10 for their pdf - it's absurd. Give it out free, and
charge £100 for a Misra rule checker program.
They all have a VERY easy way to get the standards... they go and buy a
copy of the PDF.
Incidentally the MISRA standards come closer than the ISO C and C++
standards by your definitions. For the ISO (and certainly the BSI parts
of it) there is no requirement to have any qualifications or experience
in the field of the standard.
BTW an online wiki setup would be a complete disaster.
No it is not. Why should MISRA not charge for the standard? Do you
charge for your work?
> Give it out free, and charge £100 for a Misra rule checker program.
If it is free, why charge for a rule checker program?
And will that pdf work on /my/ choice of pdf reader? Or is it one of
these pseudo-pdf files that require a specific bug-ridden and security
nightmare pdf reader with non-standard plugins and a specific OS? (I
haven't got a copy of MISRA, so I don't know - but I know that applies
to many other "standards" that are available on "pdf".)
> Incidentally the MISRA standards come closer than the ISO C and C++
> standards by your definitions. For the ISO (and certainly the BSI parts
> of it) there is no requirement to have any qualifications or experience
> in the field of the standard.
>
MISRA are also closer in that £10 is a lot less than ISO charges for the
C standards.
>
> BTW an online wiki setup would be a complete disaster.
>
That depends on how it was done. I was thinking of the model used by
PostgreSQL, such as here (look at the bottom of the page):
<http://www.postgresql.org/docs/9.0/interactive/sql-createtable.html>
Registered community members can add comments to clarify the document,
or to give hints or tips. The authors use these comments to improve
later versions of the manual (and sometimes the software).
No idea. Probably Adobe, and you can use any reader you wish that is
compliant.
>> Incidentally the MISRA standards come closer than the ISO C and C++
>> standards by your definitions. For the ISO (and certainly the BSI parts
>> of it) there is no requirement to have any qualifications or experience
>> in the field of the standard.
>MISRA are also closer in that £10 is a lot less than ISO charges for
>the C standards.
MISRA can't do it for free for obvious reasons.
>> BTW an on line wiki set up would be a complete disaster.
>Registered community members can add comments to clarify the document,
>or to give hints or tips. The authors use these comments to improve
>later versions of the manual (and sometimes the software).
That is how MISRA works now. There is a forum for registered users to
comment. They can not write to the document and "clarify" it as they
will not have attended the meetings to know what was intended.
On 5/15/2011 11:23 PM, Walter Banks wrote:
>> It's the same sort of mentality that seeks to impose "style
>> guidelines" on code in an attempt to make it more readable,
>> maintainable, etc. People end up working to appease the
>> Standard's God instead of focusing on the product they are
>> preparing.
>
> Coding standards, MISRA and others, do a lot to make big projects
> much more reliable. They tend to force people to use clear
> statements devoid of the kind of one-off programming tricks
> that create debugging and application nightmares.
Coding *guidelines* can have the same effect -- without the
"policeman". If your staff aren't competent enough to
understand the costs and benefits associated with different
language features, then you have bigger problems than a
"standard" can fix.
> I have seen a lot of code written by many different people.
> The best, fastest and easiest-to-maintain code is simple
> and clear, and lets modern tools do their work.
>
> MISRA is a low-cost standard well worth the price.
The "cost" of an e-file isn't the issue. Rather, the cost
of BLINDLY (i.e., following the "shalls" and "shoulds")
adhering to it can be significant!
Standards cost a lot more than the "paper" they're written
on. Someone has to codify an enforcement policy for your
organization, put in place mechanisms to verify compliance,
maintain any tools that are required for these activities,
educate users as to why rules that obviously have significant
coding consequences are BLINDLY enforced ("Yes, we value you
as an employee... we just don't think you are smart enough
to know when *to* use a goto/break/continue/etc. and when
*not* to -- so we'll just make it illegal to use them!"),
determine (in some empirical manner) if the costs are being
justified by productivity increases, etc.
Or, go the DoD route -- impose heavy-handed "requirements"
on the code, the coding *process*, specification, testing,
etc. *That* sure seems to have worked out well for *them*,
eh? ;-)
On 5/16/2011 12:14 AM, Chris H wrote:
> In message<iqp21t$pag$1...@speranza.aioe.org>, D Yuniskis
> <not.goi...@seen.com> writes
>>
>> Or, hope for the benevolence of "key players" in those industries
>> to underwrite all or part of their efforts. Things like Standards
>> are so tenuous that you have to be wary that The Industry might
>> just pick up and head off in a different direction regardless of
>> your concern/interests.
>>
>> The problem with "paid" organizations promoting/sponsoring things
>> like this is they tend to be self-perpetuating. They have a
>> vested interest in "their" Standard. So, the biological organisms
>> involved in it have a *huge* stake -- their SALARIES!
>
> Then there are no standards you can rely on.
What prevents me from relying on *any* standard (guideline) that
I choose? Why does it have to have an organization behind it?
MISRA isn't trying to define something akin to interoperability.
I.e., defining a consistent API, etc. so code from vendor A
works with vendor B. So, there is nothing "shared".
I can take MISRA (or any other "standard"), drag out a red pen
and mark it up to my heart's content, laminate it between two
sheets of clear plastic, write "Company Guidelines" across
the top and now I have a "standard that I can rely on".
Does the fact that *this* company (and not *that* organization)
has assumed ownership of it make it any less reliable? The
people who will be held accountable to it will have had a
*real* say in its creation. They will have *control* over
its evolution. Your claim is that some third-party needs
to be involved in order to make it *credible*/reliable??
(if my quality/performance is higher than yours, why do you care
what my "standard" is?)
If a customer wants me to design a set of guidelines (for
coding style, testing, etc.) then I charge them for the
work I do TOWARDS THAT GOAL.
[I *really* don't like this sort of task because you "can't win":
the sorts of clients that want to develop these guidelines tend to
be small shops suffering "growing pains". They invariably want
*rules* that they can forget about (hire a policeman). And, you
*know* that those "rules" will be resented by the folks they are
imposed upon. And, *you* (I) will be the personification of that
resentment!]
I don't tell them, "before I start this coding project for you,
I need to charge you to develop a set of guidelines that will apply
to that code -- its design, documentation and testing". They benefit
from the guidelines that I've evolved over the past few decades.
[I've had several clients amused by how "consistent" my designs
are -- whether hardware or software. My ASICs look like they
were designed "mechanically"!]
They pay for that in terms of my level of experience. They benefit
by being exposed to those code samples and my exchanges with their
staff. I, in turn, learn from them -- the particulars of their
application domain, any hardware or software "tricks" that I
develop while working on their project, etc.
I'm not getting paid to share my beliefs *here*. I'm not trying
to be coy saying, "Buy *my* 'standard' instead". Rather, I am
putting forth the argument (for *free*) that "Standards" (in the
sense we are discussing here) have big downsides when treated as
legislation instead of recommendation.
Anyone who's spent more than a minute around the watercooler
arguing/pondering/complaining about some unilaterally imposed
"coding rule" represents a hidden cost of that "standard's
enforcement". If you listen to the folks in shops that have
these sorts of things *imposed* vs. taking control of their
*own* "guidelines", you will see a big difference in their
attitudes towards their work and their employer.
Invest in your staff so that they are better able to *make*
these decisions intelligently instead of imposing "rules"
arbitrarily.
On 5/16/2011 1:50 AM, David Brown wrote:
>>> *Personally*, I abhor "closed" and "for pay" standards -- if what
>>> you have is so wonderful (and really little more than a piece of
>>> electronic paper), why hoard it?
>>
>> I assume that you don't charge for the work you do for customers.
>
> There are plenty of different business models for different types of
> work. I have no issues with standards developers making money out of
> their work (though I think state sponsorship of standards committees is
> a better model in many cases). I just think that a better way to make
> that money is by publishing the standards freely and spreading them as
> wide as possible, then selling services (trademark licensing,
> consultancy, certification, etc.).
As I mention elsewhere, recall that we aren't talking about
a "Standard" for interoperability, here. It's not like needing
to come to consensus about how to enumerate a USB device, etc!
The "value added", in this particular case, is someone sat down and
codified a set of rules (most of which are obvious to a student
in a formal language course) regarding what you should *avoid*
when writing code. [note that this is less severe than saying
you *must* avoid -- as MISRA does in many cases]
Spend an evening searching for "C coding standards" and you'll find
at least a dozen that address the same sorts of issues. And none
of those web sites will require a PayPal account to access the
content...
If MISRA wants to try to elevate their status to something
comparable to ISO 9000 certification, they need to add far more
value than "codifying the obvious". (and, they'll have to be
able to defend their claims more aggressively to gain that
level of acceptance -- like DoD's Ada)
> Maybe I'm naive here, and the sums wouldn't work out in the end. But
> Misra charge £10 for their pdf - it's absurd. Give it out free, and
> charge £100 for a Misra rule checker program.
What are the *costs* associated with it? Besides "order takers",
what ongoing costs can they claim? "Certification costs"?? Pass
those on to the vendors being certified (so that the vendor can
make an economic decision as to the *value* of that certification).
Charging to distribute a PDF is just silly. It suggests that
they can't command a high enough premium from *vendors* to
cover their overhead (which implies that vendors don't consider
it worthwhile).
I wonder how widespread PDF's would be if every *reader* had
to be *purchased* from Adobe? (yet, obviously they fare well
enough charging for *writers*!)
On 5/16/2011 2:02 AM, Chris H wrote:
>> Give it out free, and charge £100 for a Misra rule checker program.
>
> If it is free, why charge for a rule checker program?
Because the rule checker ADDS VALUE. It automates what would
otherwise be a manual process of inspecting code for compliance
with this set of rules and, presumably, *commenting* on the
results it discovers to *educate* the user (to the point where
the user will ultimately not need the checker, at all!)
If it works with any compliant pdf reader, then I'm happy. I've just
seen too many pay-for "pdf" files that are /not/ pdf format (i.e., they
don't follow the pdf standards, and only work with Adobe Acrobat).
>>> Incidentally the MISRA standards come closer than the ISO C and C++
>>> standards by your definitions. For the ISO (and certainly the BSI parts
>>> of it) there is no requirement to have any qualifications or experience
>>> in the field of the standard.
>> MISRA are also closer in that £10 is a lot less than ISO charges for
>> the C standards.
>
> MISRA can't do it for free for obvious reasons.
>
These reasons are no doubt obvious to you, but they are not obvious to
me. I understand that it costs a fair amount of money to run a group
like Misra, and you need to get that income somehow. But as I've said
elsewhere, I don't think charging for the document is the best way to
get that money. Obviously, of course, you know far more about this than
I do - I can only comment from the outside.
It's not that £10 is expensive - it's peanuts to a professional, and
even the most hard-up amateur could find the money. But when it is
paid-for and single-user, a company has to figure out and track who has
the documents, how many they need, what are the rules for when the
developer gets a new computer, etc., etc. If it's free, you download a
copy and pass it around as needed.
Having a price - any price - makes it an exclusive club. If it is free,
the knowledge can be spread around so much more easily - you'd find more
information on the web, and more in discussions. The OP in this thread
could have quoted rule 111 for everyone's benefit. Some things are
worth more when they are free.
>
>>> BTW an on line wiki set up would be a complete disaster.
>
>> Registered community members can add comments to clarify the document,
>> or to give hints or tips. The authors use these comments to improve
>> later versions of the manual (and sometimes the software).
>
> That is how MISRA works now. There is a forum for registered users to
> comment. They can not write to the document and "clarify" it as they
> will not have attended the meetings to know what was intended.
>
I had a little look at the forums, and they seem very useful - I
definitely like the way they are organised by rule.
I am not proposing that outsiders be able to modify the documents
themselves - that would be useless. But if you looked at the postgresql
link I posted, you can see that it is user annotations at the end of the
pages - not modifications to the pages themselves. Imagine that the
MISRA standards were available in html format, with one rule per page,
and at the bottom of each page was a link to the matching subforum and
perhaps a copy of the most popular relevant forum posts.
Time == money.
It takes time to issue a licensed PDF.
>I understand that it costs a fair amount of money to run a group like
>Misra, and you need to get that income somehow.
Yes.
>It's not that £10 is expensive - it's peanuts to a professional, and
>even the most hard-up amateur could find the money.
Exactly.
> But when it is paid-for and single-user, a company has to figure out
>and track who has the documents, how many they need, what are the rules
>for when the developer gets a new computer, etc., etc.
Yes. As you say time is money.
>Having a price - any price - makes it an exclusive club.
Just like life in general and ALL business in particular.
>I am not proposing that outsiders be able to modify the documents
>themselves - that would be useless. But if you looked at the
>postgresql link I posted, you can see that it is user annotations at
>the end of the pages - not modifications to the pages themselves.
We don't want those... It would take far too much time to administer.
>Imagine that the MISRA standards were available in html format, with
>one rule per page, and at the bottom of each page was a link to the
>matching subforum and perhaps a copy of the most popular relevant forum
>posts.
There would be no usable MISRA standards.
--
Support Sarah Palin for the next US President
Go Palin! Go Palin! Go Palin!
In God We Trust! Rapture Ready!!!
http://www.sarahpac.com/
You don't have to comply with the MISRA rules to be MISRA compliant. You
just have to get the manager/product owner/whoever to sign a paper
explaining when and why you choose not to comply. This is the real
problem.
Instead of engineers making design and implementation decisions, managers
are by MISRA invited to make (or not make) them.
They don't
If you are unable to work to customer specifications and guidelines, you
seem unemployable.
This is only a problem if the managers or programmers don't fully
understand what they are doing. After 4 decades I have yet to be
convinced that either group knows what it is doing any better than the
other.
Why do you insist on trying to provoke me with ad hominem attacks?
Please *cite* where I stated that *I* am "unable to work to customer
specifications and guidelines"? My parse (and intent) in the above
seems very clear:
"If a customer wants me to DESIGN A SET OF GUIDELINES then I
charge them for the work I do towards that goal" (i.e., "If I am
hired for the express purpose of designing a set of guidelines,
then I charge them TO DESIGN THAT SET OF GUIDELINES").
This implies:
-- the customer doesn't have (or isn't happy with) suitable guidelines;
-- I am employable (because the customer hired me);
-- and that I am expected to be competent to perform this task
within whatever "specifications and guidelines" are set forth
for its performance.
As I've said (elsewhere?), I don't like this sort of task because
it's a no-win scenario. I can, instead, provide a list of books
that he/his staff might want to review or point them to other
published guidelines, etc.
In every case, my *recommendation* is the same: invest in your
people (if you don't believe in their abilities, then why did you
hire them?)
On 5/16/2011 3:51 AM, Fredrik Östman wrote:
>> -----< D Yuniskis>
>> The "value added", in this particular case, is someone sat down and
>> codified a set of rules (most of which are obvious to a student in a
>> formal language course) regarding what you should *avoid* when writing
>> code. [note that this is less severe than saying you *must* avoid -- as
>> MISRA does in many cases]
>
> You don't have to comply with the MISRA rules to be MISRA compliant. You
> just have to get the manager/product owner/whoever to sign a paper
> explaining when and why you choose not to comply. This is the real
> problem.
Sheesh! So what value does "MISRA compliance" have to *me*, a
"concerned third party"? Does the manager have to be capable of
entering into a contractual relationship on behalf of the company
(i.e., legally obligating them)? Or, is he just "speaking for
himself"? (if the latter, does his *replacement* have to come in
and immediately re-certify these claims so *he's* "on-the-hook"
for past performance?)
> Instead of engineers making design and implementation decisions, managers
> are by MISRA invited to make (or not make) them.
And we all know that managers are *the* go-to people when it comes to
knowing the latest technology, best practices, etc. -- since they spend
*so* much of their time keeping up with these things in a "hands-on"
manner. (sarcasm)
(beating a dead horse) Invest in your staff. You can't *impose*
quality, reliability, etc. from "outside". People have to feel
motivated, empowered and *safe* to build quality *into* any
product. Otherwise, they just seek to cover-their-*sses so
they can't *technically* be held responsible for failures
(and, with the relative dearth of formal specifications, this
is *really* easy to do! there's no "contract" for the design!)
On one of my first jobs, I supervised the build for a piece of
military avionics kit for a subcontractor. There were lots of
problems with the documentation (since the serial numbers are
single digits, this is not uncommon). Things were always getting
held up on the line as the device, as documented, couldn't be built!
When I was given the job, I didn't know *squat* about the actual
design ("subcontractor"). But, I would listen to folks on the line
explain their problems (with the drawing set), *ask* their opinions
as to what *they* thought the "right answer" should be, then, weigh
their answer against my "engineering knowledge" (sure, it may be
easier to change that #6 screw to a #4 so that it would fit in the
#4 tapped hole, but the *right* answer might be to redrill and
retap the hole for a #6!) and make an "executive decision" on the
spot -- mark up the print, initial it and get things rolling again.
Thereafter, folks were almost *happy* to bring things to my attention
even if they weren't "deal breakers":
"You know, Don, these cables really shouldn't be routed over here
as they are likely to be chafed by these unfinished edges. (sparks
are a Bad Thing on the flight line! :> ) We could either reroute
the cables *or* add a finishing step to the metalwork in this area.
What do you think?"
No longer were they worried about covering their backsides but,
rather, now eager to improve the product they were being paid to
build. I had shown respect for their abilities (they work with this
stuff every day!) *and* was willing to put *my* neck on the line
(if the aircraft exploded, it was my initials on the print set).
When I subcontract work out to other folks, I don't want to have
to micromanage how they do that work. If I don't trust your
abilities and work ethic, then why would I bother working *with*
you??
[reread that as if it was a corporation speaking... why hire people
that you don't *implicitly* have faith in? why not invest in them
to ensure they *remain* at the top of their game?]
David Brown wrote:
> Maybe I'm naive here, and the sums wouldn't work out in the end. But
> Misra charge £10 for their pdf - it's absurd. Give it out free, and
> charge £100 for a Misra rule checker program.
The main reason is the misra group are in the business of developing
standards and not software. There are some very good software
developers whose business model is to implement standards including
checking misra rules.
w..
D Yuniskis wrote:
You have just made good arguments for design standards.
MISRA is one such standard. Use it or not as you see fit. It
is low cost and written by people who have spent the time to
understand why some code is far more reliable than other code,
and to translate that understanding into a form that can be used
as implementation guidelines.
w..
D Yuniskis wrote:
Coding standards are a lot like author standards from a
publishing house. They are a way to create a consistent
implementation. BTW the DoD route does work. Some
of the best development groups that I know have serious
standards in place that they follow.
The 20% coding cost that you quoted elsewhere is
about double the cost of coding and debug of many
big projects that I know. Good software is engineered,
not a black art, and good engineering standards
and practices produce products that are lower cost
and more reliable.
w..
On 5/16/2011 7:25 AM, Walter Banks wrote:
>> When I subcontract work out to other folks, I don't want to have
>> to micromanage how they do that work. If I don't trust your
>> abilities and work ethic, then why would I bother working *with*
>> you??
>
> You have just made good arguments for design standards.
You're missing the distinction I am trying to make.
"Standards" imply *rules*. This takes the decision making
ability away from the person best qualified to make them
FOR THIS APPLICATION and forces a prescribed format on the
result -- that may or may not be appropriate.
I am all for *guidelines*. I love running compilers with
all warnings enabled -- to see what they can suggest about the
code I've just written. *BUT*, I can choose to ignore any
or all of those warnings as I see fit.
Standards turn what could be "warnings" into "errors"
that are artificially imposed.
"No, I *really* REALLY want to use a 'goto' here. Yes, I
know why its use is discouraged. But, I also know it was
included in the language for a *reason*."
I've been porting a design I wrote in Limbo *back* to C.
It is *so* much easier to do things in C without Limbo
trying to protect me from doing things that *might* be
risky. The code reads a lot cleaner and runs a lot
faster (though that could just be a consequence of Limbo's
overhead).
Sure, some things are a bit more work -- all the RPC's,
setting up secure tunnels, etc. But, for the most part,
those are problems that can be solved once and "inherited"
repeatedly.
> MISRA is one such standard. Use it or not as you see fit. It
> is low cost and written by people who have spent the time to
> understand why some code is far more reliable than other code,
> and to translate that understanding into a form that can be used
> as implementation guidelines.
So, what does it buy me that any number of other *guidelines*
(not "standards") or books don't? Why adopt (and conform)
to something that someone else is controlling?
Take the list of rules, white out the M, I, S, R and A at
the top of the page -- along with anything else that you
disagree with -- write your own initials there, instead,
and treat them as GUIDELINES not REQUIREMENTS.
MISRA sort of claims to be able to mitigate that state of things.
> After 4 decades I have yet to be
> convinced that either group knows what it is doing any better than the
> other.
So what's the use of shifting responsibilities between these groups?
I once worked with the MISRA rules. The company (automotive
subcontractor) applied MISRA thusly:
- All "advisory" rules were made "required".
- All rules, advisory or required, which couldn't be checked with a
static code checker were completely ignored. The instructions for code
review didn't contain them, and weren't supposed to.
- While "break" and "continue" continued to be banned for no good reason,
"goto" was allowed. It was allowed only to jump to the only return
statement in a function, but how should the static code checker know
that, and as for code review, see above.
Apparently the customer (automotive OEM) was satisfied with this
"compliance" to MISRA.
--
Fredrik Östman
D Yuniskis wrote:
> Hi Chris,
>
> On 5/16/2011 12:14 AM, Chris H wrote:
> > In message<iqp21t$pag$1...@speranza.aioe.org>, D Yuniskis
> > <not.goi...@seen.com> writes
> >>
> >> Or, hope for the benevolence of "key players" in those industries
> >> to underwrite all or part of their efforts. Things like Standards
> >> are so tenuous that you have to be wary that The Industry might
> >> just pick up and head off in a different direction regardless of
> >> your concern/interests.
> >>
> >> The problem with "paid" organizations promoting/sponsoring things
> >> like this is they tend to be self-perpetuating. They have a
> >> vested interest in "their" Standard. So, the biological organisms
> >> involved in it have a *huge* stake -- their SALARIES!
> >
> > Then there are no standards you can rely on.
>
> What prevents me from relying on *any* standard (guideline) that
> I choose? Why does it have to have an organization behind it?
One reason is an organization is likely to have a broader
base of experience than most individuals.
> MISRA isn't trying to define something akin to interoperability.
> I.e., defining a consistent API, etc. so code from vendor A
> works with vendor B. So, there is nothing "shared".
>
> I can take MISRA (or any other "standard"), drag out a red pen
> and mark it up to my heart's content, laminate it between two
> sheets of clear plastic, write "Company Guidelines" across
> the top and now I have a "standard that I can rely on".
Do you have reasons why your red pen markings make the
resulting code written to your *standard* likely to be more reliable
than the code would otherwise be? If so, you have a start.
> Does the fact that *this* company (and not *that* organization)
> has assumed ownership of it make it any less reliable?
Maybe. Automotive company development groups adopt
standards for their development. Their developers are among
the brightest around. Even small groups have serious coding
standards and their work is subject to peer review.
Before you point out the obvious failures in automotive code
divide the failures by lines of active code and compare to
your own work.
w..
On 5/16/2011 7:50 AM, Walter Banks wrote:
>>>> The problem with "paid" organizations promoting/sponsoring things
>>>> like this is they tend to be self-perpetuating. They have a
>>>> vested interest in "their" Standard. So, the biological organisms
>>>> involved in it have a *huge* stake -- their SALARIES!
>>>
>>> Then there are no standards you can rely on.
>>
>> What prevents me from relying on *any* standard (guideline) that
>> I choose? Why does it have to have an organization behind it?
>
> One reason is an organization is likely to have a broader
> base of experience than most individuals.
Sure, but a firm isn't limited to the experiences of one or two
"individual employees"!
>> MISRA isn't trying to define something akin to interoperability.
>> I.e., defining a consistent API, etc. so code from vendor A
>> works with vendor B. So, there is nothing "shared".
>>
>> I can take MISRA (or any other "standard"), drag out a red pen
>> and mark it up to my heart's content, laminate it between two
>> sheets of clear plastic, write "Company Guidelines" across
>> the top and now I have a "standard that I can rely on".
>
> Do you have reasons why your red pen markings make the
> resulting code written to your *standard* likely to be more reliable
> than the code would otherwise be? If so, you have a start.
I have been (and continue to be) lucky in that I work with
competent individuals and organizations. So, my experiences
are biased -- no "newbies" to drag down code quality "unattended".
I can't think of a *released* product I've been involved with that
ever needed a recall or field upgrade (other than to provide
optional or advanced functionality). So, I guess the people
and processes that I have been involved with work "good enough"
(without selling $600 toilet seats :> )
In almost every case, there was a "pride of ownership" involved
that kept folks on their toes. *You* didn't want to be the reason
the product failed, etc.
With bigger firms, I found that people spent time finding a place
to "duck for cover" if things started getting sour. Interestingly,
these same firms were the most likely to try to legislate the
development process.
>> Does the fact that *this* company (and not *that* organization)
>> has assumed ownership of it make it any less reliable?
>
> Maybe. Automotive company development groups adopt
> standards for their development. Their developers are among
> the brightest around. Even small groups have serious coding
> standards and their work is subject to peer review.
And are those RIGID STANDARDS or FLEXIBLE GUIDELINES??
When I sit in a code review, if I see a "goto", I don't
immediately think, "Ah, this guy is an idiot! Doesn't
he know you shouldn't *use* goto's?" Rather, I go looking
to see *why* he *chose* to use it -- KNOWING that it would
attract our attention. (i.e., I assume he is competent)
We don't spend any time arguing about other ways that he
could (possibly) have eliminated that goto in favor of a more
structured flow. Instead, we look at it as an example of
a situation that *we* may someday be faced with *or* an
indication of a serious flaw in the design specification
(which had been previously approved -- so what might *we*
have specified wrongly?)
In no case do we tell him "rewrite this to remove the goto".
(nor does The Boss have to sign a statement saying "goto's
are acceptable in this project")
> Before you point out the obvious failures in automotive code
> divide the failures by lines of active code and compare to
> your own work.
And multiply by dollars spent?
I've worked in automotive, medical, navigation, pharmaceutical,
process control and gambling & gaming industries. All have
varying degrees of standards/requirements -- many of which are
*statutory* (by far, the most stringent are gambling). I've
rarely seen "rigid standards" imposed on the code (though often
the *process* is heavily scripted) aside from DoD Ada (which,
you will note, the DoD has backed off of its initial goal of
having Ada used *exclusively* for their projects... I suspect
far more code is written in C, there, than Ada!)
By far, the best bang for the buck comes from specification and
testing -- not constraints on the code itself.
That's *my* cost. How many projects outside of big
shops do you know that can meet that goal? And for how many
of them have you seen specifications, test suites, etc.?
Big projects, big shops impose lots of overhead. They
justify it because of the size of the project, the fact that
they need a large team, that there will be *big* variations
in the skillsets of team members, that some significant
portion of those team members may "move on" before the
project is completed, etc.
> big projects that I know. Good software is engineered
Exactly. -------------------------^^^^^^^^^^^^
> and not a black art and good engineering standards
> and practices produces products that are lower cost
> and more reliable.
And your contention is that GUIDELINES are NOT EFFECTIVE?
That you can't *trust* your staff to "do the right thing"
without *enforcement*?
I try to design things (hardware, software, systems) so that
it is easiest for those who follow me to "do the right thing".
But, I don't *prevent* them from doing something *else* -- as
I am not prescient. I trust them to know when to go in a
different direction.
Substitute "guidelines" for "standards" and I heartily agree
with your last statement!
D Yuniskis wrote:
> On 5/16/2011 7:33 AM, Walter Banks wrote:
>
> > The 20% coding costs that you quoted elsewhere is
> > about double the cost of coding and debug of many big projects that I know.
>
> That's *my* cost. How many projects outside of big
> shops do you know that can meet that goal? And for how many
> of them have you seen specifications, test suites, etc.?
I write compilers for a living, I deal with customers
ranging from one man job shops to very large
companies.
> That you can't *trust* your staff to "do the right thing"
> without *enforcement*?
Standards are not about trust or enforcement they
are about disciplined engineering.
w..
Do they? Sorry, my practice differs wildly. The only tools which have
ever *found* real bugs in my code were cppcheck and PREfast. Both are
free, I might add. And both found bugs because they know the APIs I use.
Our MISRA checker mostly makes problems because it's hard to set up and
does not parse perfectly valid code (such as 'char foo[] = { "foo" }',
which is valid C [ISO 9899:1999 §6.7.8(14)]). To add irony, their
website claims they understand the language standards better than most
developers.
Does MISRA improve code quality? People want measurable quality. Using
MISRA the seemingly-usual way ("we want zero violations") is easily
measurable, but does not improve code quality.
§2.4 "Sections of code shall not be commented out." This forbids me from
writing examples in my comments ("these three functions are used with a
for() loop like this: ..."). The intent of the rule is to avoid (a) old
code hanging around in comments (which is indeed very annoying), and (b)
to have a screenful of code which is not compiled because I don't see
the comment delimiters. But the intent is not measurable. The described
workaround solves neither problem. And (b) could be very easily solved
using "//" comments. Alas, they are forbidden.
§7.1 "Octal constants [...] shall not be used." How does my code get any
better if I have to specify file permissions in hex (or with lengthy
constants)? Everyone knows what mode 0755 is. But do you know what
S_IRWXO is (without looking up the octal value)?
§10 requires casts for almost everything. How does my code get any
better when I have to write 'a = (uint8_t) (b & 255)' instead of just 'a
= (b & 255)'? The observable result is just that it gets slower.
§14.1 says no unreachable code. §15.3 says I need a 'default' clause in
switches, even if I have already covered all possibilities. How do these
go together? And §14.1 forbids the defensive 'return 0' after a
thread's 'while(1)' main loop -- which means some compilers now annoy me
with a warning about a missing return, and the code breaks when someone
changes the loop to stop.
§20.5 "The error indicator errno shall not be used." How does my code
get any better when I ignore errors from 'fopen'?
MISRA is good when it encourages people to think before they type.
I know MISRA doesn't want me to use macros. I also know that 90% of all
compilers/optimizers explode on my codec when I use inline functions
instead. So I can write you a nice paper for a deviation why I need
macros here (and only here). But don't treat me like a lobotomized
chimpanzee who cannot distinguish "||" and "|" operators.
Stefan
--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
Do you need to implement control loops in software?
"Applied Control Theory for Embedded Systems" was written for you.
See details at http://www.wescottdesign.com/actfes/actfes.html
I know... proved my point
> MISRA is good when it encourages people to think before they type.
> I know MISRA doesn't want me to use macros. I also know that 90% of all
> compilers/optimizers explode on my codec when I use inline functions
> instead. So I can write you a nice paper for a deviation why I need
> macros here (and only here).
And that's all that's needed. You don't have to obey the rules if you
can explain why they don't make sense in a particular case. That just
documents that you thought about it.
Look at it this way: The rules are not for you (because you know what you're
doing, of course), the rules are for the engineer maintaining the code after
you (because he's the one who doesn't know what you were doing).
> But don't treat me like a lobotomized
> chimpanzee who cannot distinguish "||" and "|" operators.
You don't need lobotomy for that. Quite often the standard biological neural
network tends to see only what you expect it to see.
Vinzent.
--
A C program is like a fast dance on a newly waxed dance floor by people carrying
razors.
-- Waldi Ravens
I know that. Tell that to the QA inquisition.
Compare <http://www.leshatton.org/Documents/MISRA_comp_1105.pdf>, which
concludes:
| [...] this paper attempts to assess whether important deficiencies in
| the original standard have been addressed satisfactorily.
| Unfortunately, they have not and the important real to false positive
| ratio is not much better in MISRA C 2004 than it was in MISRA C 1998
| and it is unacceptably low in both. [...]
|
| [MISRA C 2004] has not solved the most fundamental problem of MISRA C
| 1998, viz. that its unadulterated use as a compliance document is
| likely to lead to more faults and not less because of the fault
| re-injection phenomenon first noted by [1].
Which precisely matches my observation. Still, people want "zero
deviations, 100% compliance".
>> But don't treat me like a lobotomized
>> chimpanzee who cannot distinguish "||" and "|" operators.
>
> You don't need lobotomy for that. Quite often the standard biological
> neural network tends to see only what you expect it to see.
Actually, I haven't ever had a bug regarding those two. (However, I
already had a couple of functions where I added a comment "yes, I'm
really using '|' on bools here because I want both paths", to reduce
that "oops, shouldn't there be two of them" feeling.)
Which is precisely my problem with most coding standards. They don't
find or prevent *practical* bugs. Yes, they make code a little nicer and
more consistent, which is why I'm at least following those parts. But
when I have to rewrite well-tested and perfectly working code just to
get it through the checker, the coding standard has failed its goal.
Stefan
On 5/16/2011 9:15 AM, Walter Banks wrote:
>>> The 20% coding costs that you quoted elsewhere is
>>> about double the cost of coding and debug of many big projects that I know.
>>
>> That's *my* cost. How many projects outside of big
>> shops do you know that can meet that goal? And for how many
>> of them have you seen specifications, test suites, etc.?
>
> I write compilers for a living, I deal with customers
> ranging from one man job shops to very large
> companies.
But you didn't answer either of the two questions I posed!
What are the relative costs that you see in that *range*
of "customers" and customer projects?
I'm *sure* the "coding costs" for SDI, e.g., are a tiny fraction
of the total project costs, etc. Does that mean they write better
code? More *efficient* coders? *Or*, that as a percentage of an
incredibly large project, the costs are RELATIVELY smaller??
>> That you can't *trust* your staff to "do the right thing"
>> without *enforcement*?
>
> Standards are not about trust or enforcement they
> are about disciplined engineering.
As are *guidelines*! The difference is *solely* "trust
and enforcement"! (what is a Standard *without* the LACK
of trust and IMPOSED enforcement? Don't you call that a
*guideline*?! "Advice"? "Best practices"?)
On 5/16/2011 10:05 AM, Stefan Reuther wrote:
> §7.1 "Octal constants [...] shall not be used." How does my code get any
> better if I have to specify file permissions in hex (or with lengthy
> constants)? Everyone knows what mode 0755 is. But do you know what
> S_IRWXO is (without looking up the octal value)?
Actually, I really *hate* the way octal is indicated in C.
I much prefer the <value>r<radix> notation. It is extensible
(to other radix), unambiguous, doesn't add much clutter to
the code and doesn't "magically" assign extra value to things
like leading zeroes.
(I got bit by #include-ing some tables of constants that were
created with leading zero padding instead of leading spaces)
> §20.5 "The error indicator errno shall not be used." How does my code
> get any better when I ignore errors from 'fopen'?
Isn't there *also* a requirement that "error values" from functions
be explicitly tested (e.g., malloc returning NULL)? So, test it
everywhere *except* those cases where a mechanism has explicitly been
created for this purpose.
(how do you code for range errors on something like pow(a,b) if
you can't look at errno? do you try to detect the error in the
*domain*, instead? *try* it! :> )
> MISRA is good when it encourages people to think before they type.
But that is true of *any* "guideline". Where things go awry is when
some *thing* (a "Standard") unilaterally tries to *force* (and ENforce)
a behavior/practice without knowledge of the application, individual,
etc.
Tools should *enable* good practices by assisting the user, not
try to *force* practices that *hope* to make things better.
> I know MISRA doesn't want me to use macros. I also know that 90% of all
> compilers/optimizers explode on my codec when I use inline functions
> instead. So I can write you a nice paper for a deviation why I need
> macros here (and only here). But don't treat me like a lobotomized
> chimpanzee who cannot distinguish "||" and "|" operators.
+42
>>> But don't treat me like a lobotomized
>>> chimpanzee who cannot distinguish "||" and "|" operators.
>>
>> You don't need lobotomy for that. Quite often the standard biological
>> neural network tends to see only what you expect it to see.
>
>Actually, I haven't ever had a bug regarding those two. (However, I
>already had a couple of functions where I added a comment "yes, I'm
>really using '|' on bools here because I want both paths", to reduce
>that "oops, shouldn't there be two of them" feeling.)
Here I think C overdid it. As I like readable code -- especially if I
am the one who has to read it some years in the future -- I use several
#defines,
e.g.
| BITOR
|| OR
...
The | can be overseen very easily if the formulas are written without
enough parenteses and spaces.
RK
I give you rule 36 from MIStRAy-C
Rule 36 Remember that and is and and or is or. The compiler can usually
tell by context the difference between && & & & || & | & ~ & !
Well, then add spaces when writing? I expect every C programmer to know
what "||" means. But those macros break everyone's (mental and
programmatical) parsers. And every second yacc parser which uses OR as
the macro for a "||" token.
If you insist on words, at least use <iso646.h>. It still breaks
parsers, but at least it's standardized (and the names it defines are
keywords in C++, making name clashes less likely).
Stefan
Agreed. But still we have that syntax and have to live with it.
> (I got bit by #include-ing some tables of constants that were
> created with leading zero padding instead of leading spaces)
And that's precisely the point why I believe ruling it out makes no
sense: people have to know it anyway. You don't save any teaching
effort. If you say "don't use <pthread.h>", you can save yourself all
multi-threading teaching effort. But not if you say "don't use octal",
because people will hit it by accident.
>> §20.5 "The error indicator errno shall not be used." How does my code
>> get any better when I ignore errors from 'fopen'?
>
> Isn't there *also* a requirement that "error values" from functions
> be explicitly tested (e.g., malloc returning NULL)? So, test it
> everywhere *except* those cases where a mechanism has explicitly been
> created for this purpose.
Of course I test return values where it matters. The problem is again
people who want to enforce some rules, and overlook that it doesn't
always matter. For an example, look at the trouble generated by the gcc
function attribute "warn_unused_result" which is applied to the
functions read() and write() in recent libcs.
A popular method of inter-process signaling is a pipe, where one end
writes a character to wake the other end. So I have this pipe fd, open
with O_NONBLOCK, and I write a character into it:
write(fd, "", 1);
I don't care whether it fails. Because that means the pipe is full and
the other end will wake up soon, which is what I want. This gives a
warning. WHICH CANNOT BE TURNED OFF. I have to uglify my code, like this,
int result = write(fd, "", 1);
if (result == 0) { }
to get rid of the warning. Just assigning it to a variable is not
enough, because then I have a variable which was assigned but not used.
Wonderful.
> (how do you code for range errors on something like pow(a,b) if
> you can't look at errno? do you try to detect the error in the
> *domain*, instead? *try* it! :> )
One of my favorite credos is "don't check for an error you don't know
how to handle", which goes nicely along with the popular pow()
implementations that don't set errno :-)
Stefan
> write(fd, "", 1);
> I don't care whether it fails. Because that means the pipe is full and
> the other end will wake up soon, which is what I want. This gives a
> warning. WHICH CANNOT BE TURNED OFF. I have to uglify my code, like this,
> int result = write(fd, "", 1);
> if (result == 0) { }
(void) write (fd, "", 1);
wouldn't work?
On 5/17/2011 10:11 AM, Stefan Reuther wrote:
>>> §7.1 "Octal constants [...] shall not be used." How does my code get any
>>> better if I have to specify file permissions in hex (or with lengthy
>>> constants)? Everyone knows what mode 0755 is. But do you know what
>>> S_IRWXO is (without looking up the octal value)?
>>
>> Actually, I really *hate* the way octal is indicated in C.
>> I much prefer the<value>r<radix> notation. It is extensible
>> (to other radix), unambiguous, doesn't add much clutter to
>> the code and doesn't "magically" assign extra value to things
>> like leading zeroes.
>
> Agreed. But still we have that syntax and have to live with it.
Yes. So, you make a *guideline* that effectively warns you
of how you can screw yourself by using/not using a particular
language feature, etc. "Invest in your staff", as I've said
before. *Educate* them so they know how to write better code
and *care* about writing better code (instead of "doing what it
takes" to silence the policeman)
>> (I got bit by #include-ing some tables of constants that were
>> created with leading zero padding instead of leading spaces)
>
> And that's precisely the point why I believe ruling it out makes no
> sense: people have to know it anyway. You don't save any teaching
> effort. If you say "don't use<pthread.h>", you can save yourself all
> multi-threading teaching effort. But not if you say "don't use octal",
> because people will hit it by accident.
Exactly. In my case, the problem was easy to spot: "Hmmm... why
did this variable just get loaded with 66? The value should be much
higher than that... yes, the acceleration profile (graph) shows
something like 100, here. So, where did the 66 come from? Ah,
'00102' isn't '102r10' but, rather, '102r8'! Need to 86 these
leading zeroes..."
>>> §20.5 "The error indicator errno shall not be used." How does my code
>>> get any better when I ignore errors from 'fopen'?
>>
>> Isn't there *also* a requirement that "error values" from functions
>> be explicitly tested (e.g., malloc returning NULL)? So, test it
>> everywhere *except* those cases where a mechanism has explicitly been
>> created for this purpose.
>
> Of course I test return values where it matters. The problem is again
> people who want to enforce some rules, and oversee that it doesn't
> always matter. For an example, look at the trouble generated by the gcc
> function attribute "warn_unused_result" which is applied to the
> functions read() and write() in recent libcs.
>
> A popular method of inter-process signalisation is a pipe, where one end
> writes a character to wake the other end. So I have this pipe fd, open
> with O_NONBLOCK, and I write a character into it:
> write(fd, "", 1);
> I don't care whether it fails. Because that means the pipe is full and
> the other end will wake up soon, which is what I want. This gives a
> warning. WHICH CANNOT BE TURNED OFF. I have to uglify my code, like this,
> int result = write(fd, "", 1);
> if (result == 0) { }
> to get rid of the warning. Just assigning it to a variable is not
> enough, because then I have a variable which was assigned but not used.
> Wonderful.
Exactly. Attempts to "legislate" rules force you to spend extra
effort working around those rules. In the process, you either
distort your code so that it is harder to read (requiring comments
explaining why you did something that looks 'stupid': "To appease
lint et al.") or add bugs (which is what the rules are supposedly
trying to *prevent*).
In either case, you break your "flow" of thought (productivity)
by having to deal with some triviality. Like MS's Mr Paperclip
"reminding" you of stuff while you are trying to concentrate on
getting something *done*.
I have a distinctive coding style in that I do lots of things
you are told *not* to do. Whenever I do something that will
raise eyebrows, I alert the reader/reviewer to that fact in
my commentary and describe *why* I made that choice. This
doesn't prevent someone who comes after me from tearing out
all that code and replacing it with something more appropriate
to his/her sensibilities. *But*, it shows that I have thought
out the issue and cautions them to take similar care when/if
they opt to replace/rewrite it.
For example, I might *deliberately* order the expressions in
a compound conditional to exploit something I know about the
data that will be encountered (i.e., if expr1 is expensive to
evaluate and expr2 is often FALSE, I might deliberately say
"expr2 && expr1" instead of "if expr2 { if expr1 { } }" which
would make the deliberate choice more obvious). But, my
commentary would deliberately draw attention to this.
"/* Here, There be Dragons */"
>> (how do you code for range errors on something like pow(a,b) if
>> you can't look at errno? do you try to detect the error in the
>> *domain*, instead? *try* it! :> )
>
> One of my favorite credos is "don't check for an error you don't know
> how to handle",
This seems to be the approach in much desktop software. Esp. things
coded in C++ (where it seems obvious that some layer is probably
throwing exceptions, but the layers above aren't smart enough to
deal with them and just terminate the application, instead).
> which goes nicely along with the popular pow()
> implementations that don't set errno :-)
Yet, MISRA doesn't *want* you to look at errno. Instead, you
should try to *guess* as to whether the function is trying to (or
should) signal an error!
And some PHB is going to be able to "sign off" on whether some
particular set of "violations" are "acceptable"??
<frown>
Hire competent people and invest in their skillsets. If you're
afraid they'll take that "investment" elsewhere (i.e., quit),
then make your workplace attractive enough that they don't
*want* to quit (or, hope that other firms have adopted a
similar policy and you can "inherit" some of *their* "enhanced"
staff losses).
See, I'd say that the octal constants rule is the best example of where
having the tools enforce coding style as a restriction, rather than as
a guideline to be manually enforced, is beneficial.
Octal constants serve almost no purpose now that we've standardized on 8
bit bytes. File permissions are an extremely rare exception, and can be
handled easily through something along the lines of
#define PERMISSION(owner, group, world) \
    (((owner) << 6) | ((group) << 3) | (world))
Most often, if you've got a leading zero it's a mistake, not a decision
to invoke an octal. This is the sort of thing that humans are great at
missing, leading to costly debugging time, whereas compilers are great
at catching. Other examples include
if (x == 0)
    do_this();
    do_that();
When you've got coding styles that are so easy to make mistakes with,
and offer little value, then letting the tools enforce them as
restrictions, rather than guidelines, absolutely improves reliability.
I certainly don't agree with everything in the MISRA spec. But to say
that setting down hard fast rules has no value just doesn't add up.
--
Rob Gaddi, Highland Technology
Email address is currently out of order
On 5/17/2011 12:12 PM, Rob Gaddi wrote:
>> On 5/17/2011 10:11 AM, Stefan Reuther wrote:
>>>>> §7.1 "Octal constants [...] shall not be used." How does my code get
>>>>> any
>>>>> better if I have to specify file permissions in hex (or with lengthy
>>>>> constants)? Everyone knows what mode 0755 is. But do you know what
>>>>> S_IRWXO is (without looking up the octal value)?
>>>>
>>>> Actually, I really *hate* the way octal is indicated in C.
>>>> I much prefer the<value>r<radix> notation. It is extensible
>>>> (to other radix), unambiguous, doesn't add much clutter to
>>>> the code and doesn't "magically" assign extra value to things
>>>> like leading zeroes.
>>>
>>> Agreed. But still we have that syntax and have to live with it.
>>
>> Yes. So, you make a *guideline* that effectively warns you
>> of how you can screw yourself by using/not using a particular
>> language feature, etc. "Invest in your staff", as I've said
>> before. *Educate* them so they know how to write better code
>> and *care* about writing better code (instead of "doing what it
>> takes" to silence the policeman)
>
> See, I'd say that the octal constants rule is the best example of where
> having the tools enforce coding style as a restriction, rather than as
> a guideline to be manually enforced, is beneficial.
"Manually enforced" doesn't mean that a tool can't *detect*
these things for you!
> Octal constants serve almost no purpose now that we've standardized on 8
> bit bytes. File permissions are an extremely rare exception, and can be
> handled easily through something along the lines of
>
> #define PERMISSION(owner, group, world) (((owner) << 6) | ((group) << 3)
> | (world))
Why don't we abolish hex also? Or, universally *adopt* hex (and
abolish *decimal*)? Or, force anything non-decimal to be a
lengthy *binary* constant?
C suffers from legacy problems. Dealing with legacy codebases
as well as legacy "practices". You can invent a "new C" (there
have been attempts to do so) at the expense of flexibility,
legacy support, etc.
Continuing to support octal comes at some small cost. A
"-warn_octal_constant" is just as easy to add to a compiler
as a "-prohibit_octal_constant" would be.
While we're at it, should we do away with %o (and %x?)
conversions in printf()/scanf() et al.? And, maybe even
redesign them to not use <stdarg.h>?
> Most often, if you've got a leading zero it's a mistake, not a decision
> to invoke an octal. This is the sort of thing that humans are great at
> missing, leading to costly debugging time, whereas compilers are great
> at catching. Other examples include
>
> if (x == 0)
>     do_this();
>     do_that();
Should we prohibit:
while (this = count--) {
...
}
(note '=' not '==')
Should we restrict ourselves to just *one* of the for(), while() or
do-while() constructs? (personally, I use each in very different
circumstances and my choice of which tells you something about
the nature of the code that is encapsulated within)
> When you've got coding styles that are so easy to make mistakes with,
> and offer little value, then letting the tools enforce them as
> restrictions, rather than guidelines, absolutely improves reliability.
It's easy to forget to check a denominator for zero. But, you
might explicitly *want* to divide by zero. If you treat it as
an error, then what recourse do you have when you want to do so?
How do you effect a "jump to 0x0000" if you refuse to allow
a pointer to "0" to be dereferenced?
Should the compiler automatically append a NUL to each "series of
characters" (avoiding the use of the word "string")?
All of these are "easy to make mistakes with" yet, when needed,
they *do* offer value -- just not in 99.9728305% of most instances :>
Personally, I live in constant fear of trigraphs screwing me over.
So much so, that I deliberately force myself to avoid double
question marks *anywhere*. Fortunately, this works in my favor
as doubling them in anything other than a colloquial exchange
is "bad style". But, I am often tempted to issue a diagnostic
like: "Shirley, we can't have run out of memory so soon??!"
[Thankfully, some compilers are sensitive to this sort of thing
and catch my transgressions since catching them by *encountering*
their unintended consequences is hard -- the diagnostic condition
would have to take place for me to see the botched message!]
> I certainly don't agree with everything in the MISRA spec. But to say
> that setting down hard fast rules has no value just doesn't add up.
I don't see how unilaterally *prohibiting* them buys you anything
more than having a tool find them and signal them as warnings.
I.e., if that tool can prohibit them, then it must know how to
*find* them. Just change what the tool does in those cases.
You can then choose which warnings are "deal breakers" instead
of being at the mercy of whomever codified the "prohibitions".
Most non-toy compilers already emit lots of useful warnings.
What's a few more (, among friends :> )?
Presumably, their authors -- and *users* -- see value in these
as WARNINGS instead of HARD ERRORS. This, IMO, indicates that
there is cause for allowing the user to determine the value of
each of these "messages".
>Hi Stefan,
>
>On 5/16/2011 10:05 AM, Stefan Reuther wrote:
>
>> §7.1 "Octal constants [...] shall not be used." How does my code get any
>> better if I have to specify file permissions in hex (or with lengthy
>> constants)? Everyone knows what mode 0755 is. But do you know what
>> S_IRWXO is (without looking up the octal value)?
>
>Actually, I really *hate* the way octal is indicated in C.
>I much prefer the <value>r<radix> notation. It is extensible
>(to other radix), unambiguous, doesn't add much clutter to
>the code and doesn't "magically" assign extra value to things
>like leading zeroes.
For a parser, trailing radix takes more space and slightly more
effort. For the programmer, trailing radix is easier to overlook.
But I agree that C's solution is unpalatable ... the compiler accepts
only bases 8, 10 and 16. Using functions like strtoul() you can, in
code, convert values from other bases but you have to separate the
base and value and figure out what is the base yourself.
I really prefer Lisp's syntax (there are 2):
The simplified common syntax:
#{C}{value} where C is a character
- 'b' binary #b10100101
- 'o' octal #o245
- 'x' hexadecimal #xA5
or the general syntax:
#{N}r{value} where N is a number, 2..36
ex: #2r10100101
#8r245
#16rA5
YMMV,
George
I use them sometimes. For example, in electronics buffers and ports tend
to come in 8-bit multiples. So we have a PLC-like thing with its I/Os
numbered
0500
0501
0502
0503
0504
0505
0506
0507
0510
0511
...
Etc.
The I/O line number gets defined and passed around as an octal
constant. Pretty rare though I agree.
[...]
--
John Devereux
On 5/17/2011 3:19 PM, George Neuner wrote:
[8<]
>> Actually, I really *hate* the way octal is indicated in C.
>> I much prefer the <value>r<radix> notation. It is extensible
>> (to other radix), unambiguous, doesn't add much clutter to
>> the code and doesn't "magically" assign extra value to things
>> like leading zeroes.
>
> For a parser, trailing radix takes more space and slightly more
> effort. For the programmer, trailing radix is easier to overlook.
And *both* suffer from the fact that we don't *think* of the
constants in terms of *any* "radix indication". I.e., to me,
ASCII SP is "20" or "40", depending on "where I am". I don't
think of "0x20" or "040" until I have to "write it down".
> But I agree that C's solution is unpalatable ... the compiler accepts
> only bases 8, 10 and 16. Using functions like strtoul() you can, in
> code, convert values from other bases but you have to separate the
> base and value and figure out what is the base yourself.
But the value of different radix is in expressing constants.
You want to say things like:
#define FLASH 0xFFFF0000
#define SECS_PER_TIMEZONE (24*60*60/24)
#define PREAMBLE "\001\040\040"
instead of being *forced* into "standardizing" on a single radix.
I.e., How many folks would recognize 0x15180? Or, "32"?
[I miss being able to use *binary* constants (since I write lots
of drivers that need to talk to I/O's)]
As you force people to express things in particular ways, solely
for the sake of "standardization", you obfuscate their intent
and *add* bugs (at design time and/or maintenance).
> [snip]
> [I miss being able to use *binary* constants (since I write lots
> of drivers that need to talk to I/O's)]
You can use 0b binary constants in gcc, actually. Of course, being a
compiler specific extension, this isn't supported by MISRA.
On 5/18/2011 9:04 AM, Rob Gaddi wrote:
> On 5/18/2011 6:28 AM, D Yuniskis wrote:
>
>> [snip]
>> [I miss being able to use *binary* constants (since I write lots
>> of drivers that need to talk to I/O's)]
>
> You can use 0b binary constants in gcc, actually. Of course, being a
> compiler specific extension, this isn't supported by MISRA.
Yes, but it's an extension and not part of the language. I.e.,
not portable (won't work on HC11 compiler, etc.)
In a semi-portable way, you can write an m4 macro to handle
it and preprocess your source. But that just gets painful...
Of course not. The test suite even contains a test for this.
<http://gcc.gnu.org/bugzilla/show_bug.cgi?id=25509>
In the form it is defined ("not checking the result is either a security
problem or always a bug"), this is a library design error, a hardly
useful compiler feature, and another example of mechanical style
checking gone bad.
Stefan
That's why I like Lisp's generalized radix syntax - just by looking at
the number, it is clear that it is "unusual" in some way and it should
be closely examined. I've used the Lisp syntax for a number of DSL
compilers I've written and I've never heard anybody complain that it
was weird or hard to understand.
Personally I can't remember ever having a reason to use constants in
bases other than 2, 10 or 16 ... but my non-decimal uses always have
been limited to bit masks.
I also hate octal and the only times I ever have used it have been for
setting file permissions from the shell in Unix (back when chmod took
only a single octal argument).
George
On 5/19/2011 5:51 PM, George Neuner wrote:
>> [I miss being able to use *binary* constants (since I write lots
>> of drivers that need to talk to I/O's)]
>>
>> As you force people to express things in particular ways, solely
>> for the sake of "standardization", you obfuscate their intent
>> and *add* bugs (at design time and/or maintenance).
>
> That's why I like Lisp's generalized radix syntax - just by looking at
> the number, it is clear that it is "unusual" in some way and it should
> be closely examined. I've used the Lisp syntax for a number of DSL
> compilers I've written and I've never heard anybody complain that it
> was weird or hard to understand.
>
> Personally I can't remember ever having a reason to use constants in
> bases other than 2, 10 or 16 ...
What?? No sexagesimal support?? :>
> but my non-decimal uses always have been limited to bit masks.
I use hex for things like K, M, G, etc. (instead of their
"marketing" versions -- which end in zeroes :> ) as well as
for physical addresses.
> I also hate octal and the only times I ever have used it have been for
> setting file permissions from the shell in Unix (back when chmod took
> only a single octal argument).
In code, I use octal primarily for character constants.
I.e., \010 is easier to recognize than a "BS" manifest
constant.
An early employer used to push "split-octal" (0xFFFF -> 377377) for
Z80 programming by arguing that it was easier to "hand assemble"
code using octal because, for many instructions, you could synthesize
an opcode by memorizing "register identifiers" and "instruction
types", etc. IMO, a perfect example of misplaced priorities
("Why should I risk MISremembering an opcode when I can have the
*machine* generate the code for me???")
Grrr... s/zeroes/decimal zeroes/
> for physical addresses.
>Hi George,
>
>On 5/19/2011 5:51 PM, George Neuner wrote:
>
>><snip>
>> I also hate octal and the only times I ever have used it have been for
>> setting file permissions from the shell in Unix (back when chmod took
>> only a single octal argument).
>
>In code, I use octal primarily for character constants.
>I.e., \010 is easier to recognize than a "BS" manifest
>constant.
>
>An early employer used to push "split-octal" (0xFFFF -> 377377) for
>Z80 programming by arguing that it was easier to "hand assemble"
>code using octal because, for many instructions, you could synthesize
>an opcode by memorizing "register identifiers" and "instruction
>types", etc. IMO, a perfect example of misplaced priorities
>("Why should I risk MISremembering an opcode when I can have the
>*machine* generate the code for me???")
Hmm. I like hex well enough but since I worked a lot with
octal (PDP-11) many years ago, I am comfortable with it.
However, there is a nifty procedure that works by hand well,
if you are lost on a deserted island somewhere and only have
sand and your fingers to work conversions. (There are times,
you know.) It uses octal.
Sometimes, when I convert something from hex to decimal
without a calculator around, I first convert to octal and
perform one of the below algorithms. Similarly, at times
when converting from some decimal number into hex, I may
first convert it to octal (see below) and then quickly
translate that over to hex.
Anyone care to explain why the algorithms below work at all
and are "symmetrical?"
CONVERSION OF DECIMAL TO OCTAL
(0) Prefix the number with "0." Be sure to include
the radix point. It's an important marker.
(1) Double the value to the left side of the radix,
using octal rules, move the radix point one digit
rightward, and then place this doubled value
underneath the current value so that the radix
points align.
(2) If the moved radix point crosses over a digit
that is 8 or 9, convert it to 0 or 1 and add
the carry to the next leftward digit of the
current value.
(3) Add octally those digits to the left of the radix
and simply drop down those digits to the right,
without modification.
(4) If digits remain to the right of the radix, goto 1.
CONVERSION OF OCTAL TO DECIMAL
(0) Prefix the number with "0." Be sure to include
the radix point. It's an important marker.
(1) Double the value to the left side of the radix,
using decimal rules, move the radix point one digit
rightward, and then place this doubled value
underneath the current value so that the radix
points align.
(2) Subtract decimally those digits to the left of
the radix and simply drop down those digits to
the right, without modification.
(3) If digits remain to the right of the radix, goto 1.
For example,
0.4 9 1 8 decimal value
+0
---------
4.9 1 8
+1 0
--------
6 1.1 8
+1 4 2
--------
7 5 3.8
+1 7 2 6
--------
1 1 4 6 6. octal value
Let's convert it back:
0.1 1 4 6 6 octal value
-0
-----------
1.1 4 6 6
- 2
----------
9.4 6 6
- 1 8
----------
7 6.6 6
- 1 5 2
----------
6 1 4.6
- 1 2 2 8
----------
4 9 1 8. decimal value
Jon
> Vinzent Hoefler wrote:
>> Stefan Reuther wrote:
>>> write(fd, "", 1);
>>> I don't care whether it fails. Because that means the pipe is full and
>>> the other end will wake up soon, which is what I want. This gives a
>>> warning. WHICH CANNOT BE TURNED OFF. I have to uglify my code, like this,
>>> int result = write(fd, "", 1);
>>> if (result == 0) { }
>>
>> (void) write (fd, "", 1);
>>
>> wouldn't work?
>
> Of course not. The test suite even contains a test for this.
> <http://gcc.gnu.org/bugzilla/show_bug.cgi?id=25509>
Yes. Why did I even ask?
> Vinzent Hoefler wrote:
>> Stefan Reuther wrote:
>>> MISRA is good when it encourages people to think before they type.
>>> I know MISRA doesn't want me to use macros. I also know that 90% of all
>>> compilers/optimizers explode on my codec when I use inline functions
>>> instead. So I can write you a nice paper for a deviation why I need
>>> macros here (and only here).
>>
>> And that's all what's needed. You don't have to obey the rules if you
>> can reason why they don't make sense in a particular case. That just
>> documents you thought about it.
>
> I know that. Tell that the QA inquisition.
;)
> Compare <http://www.leshatton.org/Documents/MISRA_comp_1105.pdf>, which
> concludes:
> |
> | [MISRA C 2004] has not solved the most fundamental problem of MISRA C
> | 1998, viz. that its unadulterated use as a compliance document is
> | likely to lead to more faults and not less because of the fault
> | re-injection phenomenon first noted by [1].
Something like that is also my concern. As an example, if tasking is
forbidden for safety reasons, solutions to problems that are naturally
solved by using tasking, tend to become more complex and thus have a
bigger potential for errors.
> Which precisely matches my observation. Still, people want "zero
> deviations, 100% compliance".
Which is, of course, complete nonsense.
>>> But don't treat me like a lobotomized
>>> chimpanzee who cannot distinguish "||" and "|" operators.
>>
>> You don't need lobotomy for that. Quite often the standard biological
>> neural network tends to see only what you expect it to see.
>
> Actually, I haven't ever had a bug regarding to those two. (However, I
> already had a couple of functions where I added a comment "yes, I'm
> really using '|' on bools here because I want both paths", to reduce
> that "oops, shouldn't there be two of them" feeling.)
I wasn't talking about writing the code. I was talking about reading it.
> Which is precisely my problem with most coding standards. They don't
> find or prevent *practical* bugs.
They avoid constructs that are statistically proven to fail more often
than others. As I tried to say already, that doesn't mean this doesn't
just open a whole new can of worms introducing a lot new possibilities
of doing it wrong - just in some other way.
Vinzent Hoefler wrote:
> Stefan Reuther wrote:
>>>> But don't treat me like a lobotomized
>>>> chimpanzee who cannot distinguish "||" and "|" operators.
>>>
>>> You don't need lobotomy for that. Quite often the standard biological
>>> neural network tends to see only what you expect it to see.
>>
>> Actually, I haven't ever had a bug regarding to those two. (However, I
>> already had a couple of functions where I added a comment "yes, I'm
>> really using '|' on bools here because I want both paths", to reduce
>> that "oops, shouldn't there be two of them" feeling.)
>
> I wasn't talking about writing the code. I was talking about reading it.
At least I'm well-conditioned enough to immediately question a construct
having just "|" in a boolean context. Hence the comments.
>> Which is precisely my problem with most coding standards. They don't
>> find or prevent *practical* bugs.
>
> They avoid constructs that are statistically proven to fail more often
> than others.
Problem is, I don't see any evidence that MISRA does that.
We're doing C++. MISRA C++ outlaws functions that return *this or a
reference parameter. That is, it outlaws all implementations of
iostreams. It outlaws <stdio.h> (use <cstdio> instead), which means
there's no longer a well-defined way to get the declaration of POSIX
'fileno' or 'fdopen'. Of course, it also outlaws errno, to encourage
code like
std::FILE* fp = std::fopen("file", "rb");
if (fp == NULL) {
printf("something bad happened, but I don't tell you what\n");
}
(needless to say, all practical C++ers I know agree that NULL is bad
style in C++.) It also outlaws strcpy, memcmp, etc., because they might
be accidentally used on a wrong buffer or without a null check. By the
same argument, we'd have to outlaw addition, multiplication and division
as well, because they might be accidentally used on a wrong operand.
Neither of these has any specific track record for our product. The only
single problem source sticking out is signed/unsigned problems, of which
we had a handful: '(-20)/2U' has an unexpected result. Of course, our
rule checker does not find that (gcc does, with an incredibly high
false-positive rate). Other than that, most defects were algorithmic,
not language problems.
Stefan
> Vinzent Hoefler wrote:
>> Stefan Reuther wrote:
>>>>> But don't treat me like a lobotomized
>>>>> chimpanzee who cannot distinguish "||" and "|" operators.
>>>>
>>>> You don't need lobotomy for that. Quite often the standard biological
>>>> neural network tends to see only what you expect it to see.
>>>
>>> Actually, I haven't ever had a bug regarding to those two. (However, I
>>> already had a couple of functions where I added a comment "yes, I'm
>>> really using '|' on bools here because I want both paths", to reduce
>>> that "oops, shouldn't there be two of them" feeling.)
>>
>> I wasn't talking about writing the code. I was talking about reading it.
>
> At least I'm well-conditioned enough to immediately question a construct
> having just "|" in a boolean context. Hence the comments.
Well, that's surely a plus of having to work in only one language mostly,
but when you're forced to adapt your mind from Python to Perl to Ada, back
to Java and then to C again, conditioning the mind for each potentially
questionable construct of every one of those languages is not that easy.
(Of course, not all have the same criticality level, yet conditioning the
mind for sloppiness in one case and strict rules in another seems even
harder.)
>>> Which is precisely my problem with most coding standards. They don't
>>> find or prevent *practical* bugs.
>>
>> They avoid constructs that are statistically proven to fail more often
>> than others.
>
> Problem is, I don't see any evidence that MISRA does that.
Which part? The statistics or the failures? ;)
> We're doing C++. MISRA C++ outlaws functions that return *this or a
> reference parameter. That is, it outlaws all implementations of
> iostreams. It outlaws <stdio.h> (use <cstdio> instead), which means
> there's no longer a well-defined way to get the declaration of POSIX
> 'fileno' or 'fdopen'.
Maybe original MISRA wasn't intended for systems which even had I/O. ;)
> Neither of these has any specific track record for our product. The only
> single problem source sticking out is signed/unsigned problems, of which
> we had a handful: '(-20)/2U' has an unexpected result. Of course, our
> rule checker does not find that (gcc does, with an incredibly high
> false-positive rate). Other than that, most defects were algorithmic,
> not language problems.
Playing devil's advocate, this is a sure sign of MISRA helping you in
the development, by making it harder for the "easy" bugs to creep in,
leaving you only the hard ones to concentrate on. Mission accomplished. :)
And I am not trying to argue _for_ the MISRA rules here, I am just trying
to point out why they exist. Again, if anybody thinks MISRA will make
the code safer just by obeying the rules, they are mistaken. The human
part is still the important one. We, as truly responsible coders don't
need those rules, the rules are always there for the other 80% who make
the silly mistakes. ;)
I remember the last talk with my boss:
B: "So what are you currently doing?"
M: "Still at refactoring the $horrible_stuff, should be finished by today.
One can say, I completely rewrote it by now."
B: *raises eyebrows* "So it needs to be tested before?"
M: *question mark in my face* "Errm, what answer do you expect now? - Yes,
of course, it needs to be tested."
I am human, I did, I do and I will make errors and no f*cking rules will
ever be able to change that.
I do less Pascal than I used to do, but I'm switching from C to C++ to
Assemblers to Perl to Lisp to Javascript to Lua to Shell, that's not an
issue :)
>> We're doing C++. MISRA C++ outlaws functions that return *this or a
>> reference parameter. That is, it outlaws all implementations of
>> iostreams. It outlaws <stdio.h> (use <cstdio> instead), which means
>> there's no longer a well-defined way to get the declaration of POSIX
>> 'fileno' or 'fdopen'.
>
> Maybe original MISRA wasn't intended for systems which even had I/O. ;)
Indeed, many of the rules make much more sense if you have an engine
control unit in mind, not an infotainment device with a megapixel screen
and a few Gigs of flash.
>> Neither of these has any specific track record for our product. The only
>> single problem source sticking out is signed/unsigned problems, of which
>> we had a handful: '(-20)/2U' has an unexpected result. Of course, our
>> rule checker does not find that (gcc does, with an incredibly high
>> false-positive rate). Other than that, most defects were algorithmic,
>> not language problems.
>
> Playing devil's advocate, this is a sure sign of MISRA helping you in
> the development, by making it harder for the "easy" bugs to creep in,
> leaving you only the hard ones to concentrate on. Mission accomplished. :)
How does MISRA come into play here? For the code I have statistics
about, we still managed to avoid most of it :-)
Right now, I'm just seeing I have to add things, like casts for every
other assignment, which makes it harder to see the forest for the trees.
I'm not complaining about adding '{' for 'if', and I'm only half
complaining about having to use 'int_fast16_t' instead of 'int' (which
have precisely the same semantics, but the practical problem is that
many C/C++ developers don't realize that), but I'm complaining about
p[0] = static_cast<uint8_t>(v & 255);
p[1] = static_cast<uint8_t>((v >> 8) & 255);
which just doesn't add value, just noise.
And, in particular, I'm getting grumpy when having to change older,
perfectly working and tested code, adding the potential for new bugs,
just to get it through a checker which has already proven by example
that it itself has bugs.
Stefan
All systems MISRA is aimed at have IO... It is just that most don't have
screens.
> All systems MISRA is aimed at have IO... It is just that most don't have
> screens.
Depends on what you count as I/O. Of course, any electric wire going into
or out of the MCU is technically I/O.
But I don't think I could drive a PWM signal with the facilities of
<stdio.h>. ;)
They are I/O... not just "technically". Comparatively few computers have
screens.
>But I don't think, I could drive a PWM-signal with the facilities of
><stdio.h>. ;)
printf is essential :-)
> Vinzent Hoefler wrote:
>
>> Maybe original MISRA wasn't intended for systems which even had I/O. ;)
>
> Indeed, many of the rules make much more sense if you have an engine
> control unit in mind, not an infotainment device with a megapixel screen
> and a few Gigs of flash.
I see. It sure is a very critical device, so that restricting the developers
to use a safety-critical language subset saves a lot of money. ;)
Not like that stupid airbag that nobody's gonna need anyway.