Our intent is to complete major development of Version 1.0 by April 18,
2008, with the published version of the standard being available in
September. Once Version 1.0 of the standard goes to the publisher, we
will begin development of Version 2.0. That is, we will continue to
maintain the wiki to further advance the "working version" of the CERT C
Secure Coding Standard. The published 1.0 version will become the
official version, until replaced by a future version. It is unlikely
that a subsequent version will be released in the next 2-3 years, so we
would like to ensure that Version 1.0 is a high-quality product that
will promote and encourage secure coding practices.
Thanks for any help and assistance you have already provided and for any
additional contribution you may make. There are currently 184
individuals who have contributed to the development of this standard,
without whom this effort could not have succeeded.
Thanks,
rCs
--
Robert C. Seacord
Senior Vulnerability Analyst
CERT/CC
Work: 412-268-7608
FAX: 412-268-6989
It has been a while since I looked at your wiki on secure coding, and I had
forgotten how important and clear it is on good coding practice. I would like to
link the wiki from our website as a service to our customers writing code
for embedded systems applications.
Let me know when it is published and I will be happy to have an item
posted on that as well.
Nice piece of work
Walter Banks
Byte Craft Limited
Secure programming goes beyond avoiding C's undefined behavior. Perfectly
conforming programs can still be insecure, and your pages seem to
contain neither conforming nor secure programs.
Update: okay, in that *same* page (INT01-A) I found this code which
claims to be secure & conforming
>> #define BUFF_SIZE 10
>> int main(int argc, char *argv[]){
>> rsize_t size;
>> char buf[BUFF_SIZE];
>> size = atoi(argv[1]); /* vipp: atoi in secure code? */
>> /* vipp: where are your checks for argc? */
>> if (size < BUFF_SIZE){
>> strncpy(buf, argv[2], size); /* vipp: ka-boom */
>> buf[size] = '\0';
>> }
>> }
This is explained in:
https://www.securecoding.cert.org/confluence/display/seccode/INT01-A.+Use+rsize_t+or+size_t+for+all+integer+values+representing+the+size+of+an+object
The type size_t generally covers the entire address space. [TR 24731-1]
introduces a new type rsize_t, defined to be size_t but explicitly used
to hold the size of a single object. In code that documents this
purpose by using the type rsize_t, the size of an object can be checked
to verify that it is no larger than RSIZE_MAX, the maximum size of a
normal single object, which provides additional input validation for
library functions. See [STR00-A. Use TR 24731 for remediation of
existing string manipulation code] for additional discussion of TR 24731-1.
Any variable that is used to represent the size of an object, including
integer values used as sizes, indices, loop counters, and lengths, should
be declared as rsize_t if available.
> Secure programming goes beyond avoiding C's undefined behavior. Perfectly
> conforming programs can still be insecure, and your pages seem to
> contain neither conforming nor secure programs.
>
> Update: okay, in that *same* page (INT01-A) I found this code which
> claims to be secure & conforming
>
>>> #define BUFF_SIZE 10
>>> int main(int argc, char *argv[]){
>>> rsize_t size;
>>> char buf[BUFF_SIZE];
>>> size = atoi(argv[1]); /* vipp: atoi in secure code? */
>>> /* vipp: where are your checks for argc? */
>>> if (size < BUFF_SIZE){
>>> strncpy(buf, argv[2], size); /* vipp: ka-boom */
>>> buf[size] = '\0';
>>> }
>>> }
This is bogus. The code is trying to explain one specific problem, i.e.,
that comparing a signed int with some constant implicitly assumes the
integer is greater than zero. To avoid this, rsize_t is recommended.
Conclusion:
I wasted 10 minutes reading your message. You have no comprehension
about what the code examples are intended to show, and you start
complaining about things that do not concern the example directly.
--
jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique
http://www.cs.virginia.edu/~lcc-win32
I think the point was valid: If examples are given, they ought
to follow *all* the "good practice guidelines". The exhibited
code had *several* security, portability, and reliability issues.
Concentrating on some presumed benefit of rsize_t rather than
genuinely serious safety issues is bad strategy and bad tactics.
Please do not top-post. Your answer belongs after (or intermixed
with) the quoted material to which you reply, after snipping all
irrelevant material. See the following links:
--
<http://www.catb.org/~esr/faqs/smart-questions.html>
<http://www.caliburn.nl/topposting.html>
<http://www.netmeister.org/news/learn2quote.html>
<http://cfaj.freeshell.org/google/> (taming google)
<http://members.fortunecity.com/nnqweb/> (newusers)
--
Posted via a free Usenet account from http://www.teranews.com
I could find no way to download a draft.
--
[mail]: Chuck F (cbfalconer at maineline dot net)
[page]: <http://cbfalconer.home.att.net>
Try the download section.
> Walter Banks wrote:
>>
>> It has been a while since I looked at your wiki on secure coding
>> and forgot how important and clear it was on good coding practice.
>> I would like to link the wiki from our website as a service to our
>> customers writing code for embedded systems applications.
>>
>> Let me know when it is published and I will be happy to have an
>> item posted on that as well.
>
> Please do not top-post.
Those who have nothing to say should say nothing, and those who breach
netiquette guidelines should stop doing so before preaching netiquette to
others.
Now - secure coding.
I have a few questions about the document. Firstly, why is it based on C99?
Is it because it was written at a time when C99 take-up was anticipated
more optimistically?
The trouble is that this dependence makes the document itself less useful
than it might be. By encouraging the use of inline functions over macros,
for example, you're actually encouraging people to make their code
uncompilable under C90. If C99 were as widely implemented as C90, that
wouldn't be a problem. But it isn't.
I said I had a few questions, but during the course of asking the last one
I had two browser crashes. Does the OP have a single-page
(very-long-page!) version available for download?
--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
C99 is standard C, and they are correct to use it. If you do not like
standard C, or feel that you have to wage your personal little war
against the standard, go elsewhere.
> The trouble is that this dependence makes the document itself less useful
> than it might be.
Do not use it then.
> By encouraging the use of inline functions over macros,
> for example, you're actually encouraging people to make their code
> uncompilable under C90.
C90 has not been standard C since 1999.
> If C99 were as widely implemented as C90, that
> wouldn't be a problem. But it isn't.
>
There are several implementations. Gcc's implementation is almost
ready. They did not finish it because C has disappeared from
their objectives. GNU doesn't care about the language anymore.
More or less the same with Microsoft, even if their interest in C
has picked up recently.
> I said I had a few questions, but during the course of asking the last one
> I had two browser crashes. Does the OP have a single-page
> (very-long-page!) version available for download?
>
--
> Richard Heathfield wrote:
>> CBFalconer said:
>>
>>> Walter Banks wrote:
>>>> It has been a while since I looked at your wiki on secure coding
>>>> and forgot how important and clear it was on good coding practice.
>>>> I would like to link the wiki from our website as a service to our
>>>> customers writing code for embedded systems applications.
>>>>
>>>> Let me know when it is published and I will be happy to have an
>>>> item posted on that as well.
>>> Please do not top-post.
>>
>> Those who have nothing to say should say nothing, and those who breach
>> netiquette guidelines should stop doing so before preaching netiquette
>> to others.
>>
>> Now - secure coding.
>>
>> I have a few questions about the document. Firstly, why is it based on
>> C99? Is it because it was written at a time when C99 take-up was
>> anticipated more optimistically?
>>
>
> C99 is standard C, and they are correct to use it.
I didn't say they were incorrect to use it. I asked *why* they used it.
> If you do not like standard C or feel that you have to wage your
> personal little war against the standard go elsewhere.
I think my question was reasonable. If you don't like what I write, well,
nobody is forcing you to read it.
>> The trouble is that this dependence makes the document itself less
>> useful than it might be.
>
> Do not use it then.
Is that the recommendation of the OP, too? That people who wish to write
portable code should not use the CERT C thing? I'd like to hear that from
him, rather than you, since it's his document.
>> By encouraging the use of inline functions over macros,
>> for example, you're actually encouraging people to make their code
>> uncompilable under C90.
>
> C90 is no longer standard C since 1999.
By that reasoning, lcc-win32 hasn't been a conforming implementation for
over eight years, and it remains non-conforming today, so please stop
banging on about it in comp.lang.c all the time because - by *your*
argument - it isn't a C compiler.
<snip>
Consider me told. I have seen your message and I have read your links.
Spare everyone else the off-topic posts they have to endure; if
you must tell me, do so privately. I will arrange for my filters to
ignore the messages, you will get the satisfaction of pointing
out a wrong, and the rest of the newsgroup will not endure endless
off-topic threads whose subject line doesn't fit a message
about top vs. bottom posting vs. 4-line signatures (yours is 9, btw).
Regards,
w..
But the rest of us will still have to deal with your top posting.
Please stop. It is annoying.
Note: I did NOT fix your post, I quoted verbatim. Is it really so hard
for you to not top post?
Ed
Falconer is our resident troll. Please just ignore him.
I would not complain about the errors if the link led to some
personal web blog or Usenet post clarifying that the code is pseudo-secure.
However the above link comes from CERT/CC, which according to
wikipedia is a major coordination center and according to them, they
improve security.
I don't know what makes you think that errors in security documents &
papers should be easily excused or ignored.
If you look at securecoding.cert.org there is a link labeled "Top 10
Secure Coding Practices", from which practice n10 is
> 10. Adopt a secure coding standard. Develop and/or apply a secure coding standard for your target development language and platform.
So, according to them, having a *secure* standard is important, and I
do agree to some degree.
Moreover, this link was posted here, in clc (cross-posted actually;
comp.lang.c comp.std.c) implying the OP seeks technical errors in his
documents.
In fact, Mr. Seacord (the OP) even mentioned so in his post body & title.
Given this, I don't think the OP would be disappointed with my reply.
So, having said what had to be said,
> Conclusion:
>
> I wasted 10 minutes to read your message. You have no comprehension
> about what the code examples are intended to show, and you start
> complaining about things that do not concern the example directly.
would you like to elaborate?
But before you do, consider this: I'm not part of any "clique" (nor do I
accuse anybody of being a member of a clique), and I'm not trying to troll
you (though perhaps you believe that).
It is my honest opinion that this security document is bad in its
current form.
My attitude has been that you are flipping between areas with
different standards, and forgetting to flip your attitude. I guess
that is wrong. At any rate, top-posting is not really acceptable
in c.l.c., and you would do us a favor by abandoning it.
--
[mail]: Chuck F (cbfalconer at maineline dot net)
[page]: <http://cbfalconer.home.att.net>
Try the download section.
--
> Richard Heathfield wrote:
>> If C99 were as widely implemented as C90, that wouldn't be a problem.
>> But it isn't.
>
> There are several implementations. Gcc's implementation is almost
> ready. They did not finish it because C has disappeared from
> their objectives. GNU doesn't care about the language anymore.
How can you say this when the recommended language in the GNU Coding
Standards is C?
> More or less the same with Microsoft, even if their interest on C
> has picked up recently.
Has it? I see no evidence for that.
Because even though they have had one of the best implementations
for years, they do not care enough to fix the very small problems
it has.
I just do not understand how they can do 95% of the work and then fail
for *years* to improve anything. Their C99 page has stayed
the same for years.
>> More or less the same with Microsoft, even if their interest on C
>> has picked up recently.
>
> Has it? I see no evidence for that.
>
The recent blog entry by the Microsoft representative, and
their recent TRs and proposals prove this. Their proposal
for the "secure" functions, even if it did not really
change things, is a step forward.
> santosh wrote:
>> jacob navia wrote:
>>
>>> Richard Heathfield wrote:
>>
>>>> If C99 were as widely implemented as C90, that wouldn't be a
>>>> problem. But it isn't.
>>> There are several implementations. Gcc's implementation is almost
>>> ready. They did not finish it because C has disappeared from
>>> their objectives. GNU doesn't care about the language anymore.
>>
>> How can you say this when the recommended language in the GNU Coding
>> Standards is C?
>>
>
> Because even if they have since years one of the best
> implementations, they do not care enough to fix the
> very small problems their implementation has.
>
> I just do not understand that they do 95% of the work and then fail
> during *years* to improve anything. Their C99 page has stayed
> the same for years.
That's probably because the gcc developers consider those unimplemented
features either not useful or problematic (like conflicting with
similar gcc specific extensions).
Here is a list of C99 features not fully implemented in mainline gcc:
1. wide character library support in <wchar.h> and <wctype.h>
   (originally specified in AMD1).
2. variable-length arrays.
3. complex (and imaginary) support in <complex.h>.
4. extended identifiers.
5. library functions in <inttypes.h>.
6. extended integer types in <stdint.h>.
7. additional math library functions in <math.h>.
8. treatment of error conditions by math library functions
   (math_errhandling).
9. IEC 60559 (also known as IEC 559 or IEEE arithmetic) support.
10. additional predefined macro names.
11. standard pragmas.
12. deprecation of ungetc at the beginning of a binary file.
Of these, 5 and 6 don't seem to be present in my version of gcc/glibc,
as far as I can see.
7, 10, 11, and 12 ought to be very easy to implement. I don't know why
they are still unimplemented, though of course 7 and 12 belong to
glibc rather than gcc proper.
1, 8, and 9 should also, I think, not be too difficult. I personally
don't care much for 2, 3, and 4.
All told, it *is* surprising that they would come so close to
conformance, but then slow down to a crawl.
>>> More or less the same with Microsoft, even if their interest on C
>>> has picked up recently.
>>
>> Has it? I see no evidence for that.
>>
> The recent blog entry by the Microsoft representative, and
> their recent TRs and proposals prove this. Their proposal
> for the "secure" functions, even if it did not really
> change things, is a step forward.
They need to do more than blog. At least gcc has made a valiant attempt.
Microsoft seem to treat C as a subset of C++, with only the superset
worthy of consideration. That of course leaves C99 in a bad way.
The hot thing these days is "managed" code and "managed" languages,
something C probably can never be. MS recommend C++ even for drivers.
BTW, have you considered submitting patches for one or more of the
missing C99 features in gcc?
No it hasn't. At the very least, one change between gcc 4.2 and gcc 4.3
is that inline functions are no longer broken, and gcc 4.3 was released
March 5th this year. Of course, this is still *very* slow movement, but
it is not the complete failure to improve anything that you claim.
You say you care, yet you have not finished implementing C99 in your
own compiler. If you are going to claim that not completing C99 is a
sign of lack of care, then you had better finish your implementation of
C99 rather than concentrating on things which are not part of C.
>>> More or less the same with Microsoft, even if their interest on C
>>> has picked up recently.
>>
>> Has it? I see no evidence for that.
>
> The recent blog entry by the Microsoft representative, and
> their recent TRs and proposals prove this. Their proposal
> for the "secure" functions, even if it did not really
> change things, is a step forward.
So MS are still saying they will not implement C99 because there is no
demand, unlike GNU, who have at least implemented most of it, yet MS are
the ones interested in C?
Also, gcc has added another extension for one target, which is
of interest to some people, namely the fixed-point data types described
in n1169.pdf, as well as other extensions; so if, as you suggest,
extending the language is a sign of interest, GNU is still interested.
--
Flash Gordon
*You* complain about this?
When is your own lcc-win going to be fully C99 compliant? So far, you
seem to be spending most of your time inventing and implementing
extensions rather than finishing work on the implementation of the
core language. (That's not to minimize your apparently considerable
efforts in implementing as much of C99 as you have.)
--
Keith Thompson (The_Other_Keith) <ks...@mib.org>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Yes, me. Consider the size and budget difference between gcc's
team (several dozen people working full time) and my effort (two
people working part time).
> When is your own lcc-win going to be fully C99 compliant?
The only feature really missing is designated initializers.
> So far, you
> seem to be spending most of your time inventing and implementing
> extensions rather than finishing work on the implementation of the
> core language.
No. Using my extensions I have implemented the complex numbers
part of C99, and using the generic functions feature I implemented
all of tgmath.h in a general way.
Yes. I have been extending many things in the compiler and
doing other things like marketing, going to customers, fixing
problems, etc. I haven't been able to do everything I would wish to
do.
> (That's not to minimize your apparently considerable
> efforts in implementing as much of C99 as you have.)
>
--
> Keith Thompson wrote:
>> *You* complain about this?
>>
>
>
> Yes me. You could see the size and budget difference between gcc's
> team (several dozen people working full time) and my effort (two
> people working part time).
>
>> When is your own lcc-win going to be fully C99 compliant?
>
> The only feature really missing is designated initializers.
>
>> So far, you
>> seem to be spending most of your time inventing and implementing
>> extensions rather than finishing work on the implementation of the
>> core language.
>
> No. I have implemented with my extensions the complex numbers
> part of C99, and using the generic functions feature I implemented
> all the tgmath.h in a general way.
So from a C99 P.O.V. your compiler still lacks complex support and
tgmath.h?
Of course you are free to implement them with your extensions, but to
claim conformance to C99 you'll also have to provide a mode where the
standard specified syntax and semantics are also accepted.
<snip>
Well, they are accepted:
double _Complex a,b;
// ...
a = a+b;
If my compiler uses operator overloading to implement that,
it complies with the AS IF rule, as far as I see.
From earlier exchanges, I didn't realise you could transparently
implement complex numbers with your operator overloading scheme. Do
programs using _Complex built with your compiler link with modules built
with other compilers?
--
Ian Collins.
Linking different compilers' object code doesn't ever work.
Specifically, you need to link with the C runtime of lcc-win,
where functions like complex divide, complex square root, etc.
are implemented.
Now, I am in the process of augmenting operator overloading
with inlined assembler operators. When I am finished doing this,
it will be easier to do since there will be no library calls for
the complex arithmetic operators.
Problem is, I will do this in the 64 bit versions using the
new opcodes of Intel/AMD for complex math (SSE3), and I do not
know if I will have the time to backport this into the 32 bit
versions.
In general, since all those functions are pure and no
malloc/free is invoked either directly or indirectly, it
*could* work with older versions of MSVC. But MSVC 2008 is
full of .NET stuff, even when you use it as a plain C
compiler, and I do not feel like digging into the highly
optimized Microsoft implementation to find out why
the program mysteriously crashes...
--
Ian Collins.
Ah, but does it still work and produce all required diagnostics if you
put it in standard compliant mode? I seem to recall that for operator
overloading you use a syntax that in standard compliant mode requires a
diagnostic (not that there is anything wrong with extensions that
require a diagnostic in standard compliant mode).
--
Flash Gordon
1) During compilation of tgmath.h I put the compiler in non-compliant
mode even if you requested the contrary. After compiling it, I
reset the mode again, and the AS IF rule is then followed, unless you
modify tgmath.h, which is not a good idea.
2) There is some stuff that can't be done with operator overloading,
specifically the strange syntax that C99 uses:
double _Complex b = 1+2*I;
This part has to be done in the compiler proper. It would be much
more convenient if there was a simple way to parse a complex constant
without having a full expression simplifier with complex arithmetic
in the compiler obviously. I have still not finished this part, and
if you type
double _Complex b = 1+2*I-245-56*I+56+66*I;
the operations will be done at run time and not at compile time.
Operator overloading can't help here since this is compile time.
The same problem appears in C++, by the way. The solution of
course is to be able to define compile-time functions, but after
all the criticism of this simple operator overloading stuff I will
not propose any further extensions here.
3) There is not a lot of complex C99 code available that would test
my implementation, and even though I have ported some code to the
new syntax, it needs more testing.
Understood.
>> When is your own lcc-win going to be fully C99 compliant?
>
> The only feature really missing is designated initializers.
Last time I asked you about this, your reply was:
| Designated initializers and structure initializers with the
| dot notation are missing.
| [...]
| Besides the preprocessor is still missing the variable
| arguments feature.
>> So far, you
>> seem to be spending most of your time inventing and implementing
>> extensions rather than finishing work on the implementation of the
>> core language.
>
> No. I have implemented with my extensions the complex numbers
> part of C99, and using the generic functions feature I implemented
> all the tgmath.h in a general way.
>
> Yes. I have been extending many things in the compiler and
> doing other things like marketing, going to customers, fixing
> problems, etc. I haven't been able to do everything I would wish to
> do.
And I'm sure the folks working on gcc, even though there are more of
them, haven't been able to do everything they would wish to do.
I'm not criticizing your efforts; I'm merely questioning your attitude
regarding gcc's failure to fully comply with C99.
Not a bad approach. Standard headers absolutely don't have to be
written in standard-conforming C (they don't have to be written in C,
or even exist as files).
> 2) There is some stuff that can't be done with operator overloading,
> specifically the strange syntax that C99 uses:
>
> double _Complex b = 1+2*I;
What strange syntax? "1+2*I" isn't a single literal, it's an
expression. I is a macro that expands to _Complex_I, which in turn
expands to a constant expression of type "const float _Complex". From
there, the rest is just ordinary complex arithmetic with the usual
arithmetic conversions (C99 6.3.1.8, though the wording that describes
promotion from real to complex is a bit subtle). There's nothing
there that requires any more special handling than the initialization
of z in:
int a = 1;
int b = 2;
float _Complex c = I;
double _Complex z = a + b * c;
C99 has no complex literals (just as it has no negative literals;
``-42'' is a unary "-" applied to a constant 42).
> This part has to be done in the compiler proper. It would be much
> more convenient if there was a simple way to parse a complex constant
> without having a full expression simplifier with complex arithmetic
> in the compiler obviously. I have still not finished this part, and
> if you type
>
> double _Complex b = 1+2*I-245-56*I+56+66*I;
>
> the operations will be done at run time and not at compile time.
> Operator overloading can't help here since this is compile time.
> The same problem appears in C++ by the way. The solution of
> course is to be able to define compile time functions, but after
> all the criticism of this simple operator overloading stuff I will
> not propose any further extensions here.
As far as I know, performing those operations at run time is perfectly
legal as far as the standard is concerned. Obviously doing as much as
possible at compilation time would be a nice optimization, but that's
not a conformance issue.
> 3) There is not a lot of complex C99 code available that would test
> my implementation, and even though I have ported some code to the
> new syntax, it needs more testing.
Thanks for acknowledging that. Perhaps some subset of the gcc test
suite would be useful. (I haven't looked at it myself; it might well
turn out to be totally useless for this purpose.)
This raises an interesting question (for the group in general, not
necessarily just for jacob). In complex.h, the macro _Complex_I
expands to a constant expression of type const float _Complex,
with the value of the imaginary unit.
Is there any portable way to do this, or must it be done via some
extension? One example: GNU libc's <complex.h> has
#define _Complex_I (__extension__ 1.0iF)
There doesn't actually *need* to be a portable way to do this, but it
seems odd to define built-in complex types but not be able to express
the value ``i'' without using (directly or indirectly) a compiler
extension. It might have been more consistent either to make
_Complex_I a new keyword, or to define a new suffix to specify complex
or imaginary constants.
I used in my implementation
double _Complex I = {0.0,1.0};
since double _Complex *is* a structure after all.
Anyway, if it was defined as an array, it would be
the same: the above syntax is also valid.
>I used in my implementation
>double _Complex I = {0.0,1.0};
>since double _Complex *is* a structure after all.
>Anyway, if it was defined as an array, it would be
>the same: the above syntax is also valid.
Hmmm, is it? You cannot typedef an array type distinct from
what it is an array of, so if _Complex were defined as an array,
it would have to be defined as a double array, leading to the
equivalent of
double double I[2] = {0.0,1.0};
Is that valid syntax? I don't have my standard here to check
against, but I thought a typedef could only be further qualified
by auto, static, register, volatile, and const? (And of course,
array specifiers and pointer indicators.)
--
"Okay, buzzwords only. Two syllables, tops." -- Laurie Anderson
To be clear: in standard C, double _Complex certainly isn't a
structure, it's an arithmetic type. In user code, the above
declaration would require a diagnostic.
Now if you're saying that while processing <complex.h> your compiler
goes into a mode in which it accepts the above extension without a
diagnostic, and actually issues the required diagnostic if it sees the
equivalent in user code, then that's fine.
However, it doesn't answer my question, which was whether it's
possible to define _Complex_I in portable C *without* using any
compiler extensions.
When I looked at the briefly released Linux version, there seemed to
be a problem with this method. It looked as if operator overloading
was required for _Complex to work. At that time,
double _Complex x = 1.0;
was rejected when the -ansic flag was set with an error:
Error <path>/complex.c: complex.c: 10 operands of = have
illegal types 'struct long double _Complex' and 'double'
Similarly, just writing 'x + 2' produced:
Error <path>/complex.c: complex.c: 8 operands of + have illegal types
'struct long double _Complex' and 'int'
--
Ben.
> jacob navia <ja...@nospam.com> writes:
>> Keith Thompson wrote:
>>> *You* complain about this?
>>
>> Yes me. You could see the size and budget difference between gcc's
>> team (several dozen people working full time) and my effort (two
>> people working part time).
>
> Understood.
>
>>> When is your own lcc-win going to be fully C99 compliant?
>>
>> The only feature really missing is designated initializers.
>
> Last time I asked you about this, your reply was:
>
> | Designated initializers and structure initializers with the
> | dot notation are missing.
> | [...]
> | Besides the preprocessor is still missing the variable
> | arguments feature.
At the time of the brief Linux release, there were some problems with
compound literals as well.
--
Ben.
"Doesn't ever" is too strong -- the whole point of standard platform
ABIs and run-time libraries is to allow different compilers' object code
to be linked together and work. It's another lesson that Microsoft has
been very slow to learn, but it works just dandy in lots of other
environments (like VMS, Unix, and MVS/ESA). One of my company's main
products is written in a combination of C, C++, and Fortran, compiled by
two or three different compilers, all linked together into a single
executable. It works just fine on all major current platforms. That's
not to say that it's *easy* -- there are lots of potential traps and
pitfalls -- but it can be done.
-Larry Jones
It works on the same principle as electroshock therapy. -- Calvin
<snip>
I agree with Keith that there is absolutely nothing wrong with this
approach.
--
Flash Gordon
It works just fine in the MS world as long as the compilers have been
designed to allow it. On one product I'm involved in, the bulk is written
in Borland Delphi 5 but one module is written with MS VC++ 6.0 (this was
done back when these were current technologies).
MinGW can also link to DLLs written in Visual Studio; MinGW even has a
FAQ on how to interface between Visual Studio and MinGW.
This is part of the point of the specification for DLLs under Windows!
> but it works just dandy in lots of other
> environments (like VMS, Unix, and MVS/ESA). One of my company's main
> products is written in a combination of C, C++, and Fortran, compiled by
> two or three different compilers, all linked together into a single
> executable. It works just fine on all major current platforms. That's
> not to say that it's *easy* -- there are lots of potential traps and
> pitfalls -- but it can be done.
The main pitfall is implementations that are not designed to work with
the implementation provided by the OS provider.
--
Flash Gordon
What does it mean "to define _Complex_I in portable C"? Can
you define 1.0L in portable C?
Yevgen
Can you give a correct definition of the standard library macro
_Complex_I that would be usable on any C99-conforming compiler?
> Can you define 1.0L in portable C?
In that sense, 1.0L is already portable C.
Actually, for once, I don't think this is to be blamed on MS. The
x86 chips have had a long history, and a wide performance span, and
have never really had a consistent system for which to design
libraries, calling sequences, etc. Even their opcodes have been
heavily modified. Remember, the family is already over 30 years
old.
--
[mail]: Chuck F (cbfalconer at maineline dot net)
[page]: <http://cbfalconer.home.att.net>
Try the download section.
--
Posted via a free Usenet account from http://www.teranews.com
That is fine for standard headers, but what about the user's program?
As far I can see, operator overloading is needed for complex numbers
to work at all. Maybe I've got this wrong, but it looks that way.
Jacob, if you are interested I will post bug reports to
comp.compilers.lcc. I would be very pleased to see another
C99-conforming compiler.
--
Ben.
_Complex_I is a macro, and it has to expand to some expression. My
question is whether it's possible to write that expression portably.
It's somewhat similar to the situation with the offsetof() macro,
except that offsetof() is typically written using only standard C
constructs, but in a non-portable way.
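For comparison, here is the classic implementation-specific definition of an offsetof-like macro (a sketch of the well-known idiom, with hypothetical names; it uses only standard C syntax, yet its behavior is not defined by the standard, since it forms a member access through a null pointer -- implementations that ship something like it rely on knowledge of their own object model):

```c
#include <stddef.h>  /* size_t, and the real offsetof for comparison */
#include <assert.h>  /* for the checks below */

/* Classic (non-portable) idiom: pretend an object of the type sits at
   address 0 and take the address of the member within it. */
#define MY_OFFSETOF(type, member) ((size_t)&((type *)0)->member)

struct point { char tag; double x; };
```

On typical implementations MY_OFFSETOF agrees with the standard offsetof macro, but only the latter is guaranteed to work everywhere -- which is exactly the situation being claimed for _Complex_I.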
1.0L isn't a macro; it's already a portable standard C expression.
C already has restricted operator overloading, though it doesn't make
it available to user code. For example, "+" can be applied to any
arithmetic type, or to an integer and a pointer. User programs just
can't define their own new overloadings.
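Keith's point is visible in a single translation unit: the same "+" token already denotes different operations, selected by operand type (a minimal sketch; the helper names are made up for illustration):

```c
#include <assert.h>  /* for the checks below */

/* The language itself "overloads" + by operand type: */
int add_int(int a, int b) { return a + b; }                 /* integer addition  */
double add_dbl(double a, double b) { return a + b; }        /* floating addition */
const char *skip(const char *p, int n) { return p + n; }    /* pointer + integer */
```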
If an implementation supports user-defined overloading as an
extension, and uses that extension to implement the language-defined
operators on complex types, I see no fundamental conformance problem
with that -- as long as any attempts to declare overloaded functions
outside the system headers are diagnosed.
> Jacob, if you are interested I will post bug reports to
> comp.compilers.lcc. I would be very pleased to see another
> C99-conforming compiler.
Good luck with that. A good attitude regarding bug reports is very
important.
Yes, it relies on operator overloading. I suspect the standards mode
doesn't disable overloaded definitions that are already set up, only
prevents further definitions from being added. Since a conforming
program cannot tell that it is a "user defined" overloaded operator
making things work and the user still cannot define his/her own
overloaded operators, this is fine.
> Jacob, if you are interested I will post bug reports to
> comp.compilers.lcc. I would be very pleased to see another
> C99-conforming compiler.
I don't believe a bug report is needed for this as I don't believe it is
a bug.
--
Flash Gordon
Yes, that would be nice. I am interested in fixing my bugs of
course, and my compiler *is* very important to me. Just tell me
what you see that doesn't work.
Thanks in advance.
A refreshing attitude! (Seriously.)
But as far as I can see, that means you can't turn on "standards mode"
(the -ansic flag) with any complex number programs. This, in turn,
means that other valid programs are rejected because of the
extensions. It seems that:
int main(void)
{
    double _Complex x = 1.0;
    return 0;
}
is rejected by lcc-win32. Adding #include <complex.h> allows it to
compile, but it is still rejected if -ansic is specified. For more
details see my post in comp.compilers.lcc.
> I suspect the standards mode
> doesn't disable overloaded definitions that are already set up, only
> prevents further definitions from being added. Since a conforming
> program cannot tell that it is a "user defined" overloaded operator
> making things work and the user still cannot define his/her own
> overloaded operators, this is fine.
>
>> Jacob, if you are interested I will post bug reports to
>> comp.compilers.lcc. I would be very pleased to see another
>> C99-conforming compiler.
>
> I don't believe a bug report is needed for this as I don't believe it
> is a bug.
How can you say "this" does not need one? I may have it all wrong,
but you seem to be guessing at what I think the bug is since I have
not reported one yet!
--
Ben.
> If an implementation supports user-defined overloading as an
> extension, and uses that extension to implement the language-defined
> operators on complex types, I see no fundamental conformance problem
> with that -- as long as any attempts to declare overloaded functions
> outside the system headers are diagnosed.
That is my point. It seems to me that proviso is missing in
lcc-win32. I have posted some examples to comp.compilers.lcc.
--
Ben.
Jacob said you can, and, not having his compiler, I had no reason to
disbelieve him.
> means that other valid programs are rejected because of the
> extensions. It seems that:
>
> int main(void)
> {
>     double _Complex x = 1.0;
>     return 0;
> }
>
> is rejected by lcc-win32. Adding #include <complex.h> allows it to
> compile, but it is still rejected if -ansic is specified. For more
> details see my post in comp.compilers.lcc.
OK, then there is a reason.
<snip>
>>> Jacob, if you are interested I will post bug reports to
>>> comp.compilers.lcc. I would be very pleased to see another
>>> C99-conforming compiler.
>> I don't believe a bug report is needed for this as I don't believe it
>> is a bug.
>
> How can you say "this" does not need one? I may have it all wrong,
> but you seem to be guessing at what I think the bug is since I have
> not reported one yet!
I had believed what Jacob said, which was that it would work in
standards-compatible mode. Obviously, as you have demonstrated, it does
not, so a bug report is appropriate.
The method Jacob said of enabling extensions within his headers is
perfectly valid and *could* be made to work transparently. All it needs
is that the overloading defined in his headers is not lost when
standards mode is re-enabled at the end of the header, and I had assumed
that this was what Jacob did.
--
Flash Gordon
I had just forgotten to write
#pragma extensions(push,on)
...
#pragma extensions(po)
in complex.h. Now it works.
Miaow! Ladies, please, put down your handbags.
<usual irritatingly pompous vacuities snipped>
Falconer is a hypocrite and a troll. No one in this group takes him
seriously - just ignore his prissy net-nannying.
> Flash Gordon wrote:
>>
>> The method Jacob said of enabling extensions within his headers is
>> perfectly valid and *could* be made to work transparently. All it
>> needs is that the overloading defined in his headers is not lost
>> when standards mode is re-enabled at the end of the header, and I
>> had assumed that this was what Jacob did.
>
> I had just forgotten to write
> #pragma extensions(push,on)
> ...
> #pragma extensions(po)
"pop", here I think.
> in complex.h now it works.
This does not do it for me.
--
Ben.
> Ben Bacarisse wrote, On 23/03/08 02:04:
>> Flash Gordon <sp...@flash-gordon.me.uk> writes:
>>> Ben Bacarisse wrote, On 22/03/08 16:42:
<snip>
>>>>> Jacob, if you are interested I will post bug reports to
>>>> comp.compilers.lcc. I would be very pleased to see another
>>>> C99-conforming compiler.
>>> I don't believe a bug report is needed for this as I don't believe it
>>> is a bug.
>>
>> How can you say "this" does not need one? I may have it all wrong,
>> but you seem to be guessing at what I think the bug is since I have
>> not reported one yet!
>
> I had believed what Jacob said which was that it would work in
> standards compatible mode.
Yes, so had I. I should have made it plain at the start that I had
believed him until I saw evidence to the contrary.
> Obviously, as you have demonstrated, it does
> not, so a bug report is appropriate.
>
> The method Jacob said of enabling extensions within his headers is
> perfectly valid and *could* be made to work transparently. All it
> needs is that the overloading defined in his headers is not lost when
> standards mode is re-enabled at the end of the header, and I had
> assumed that this was what Jacob did.
It is not quite that simple. A file can use complex numbers and not
include any header files[1], so the overloading, if it is needed for
complex numbers to work, must be there in programs that never see
complex.h[1, again]. It was this thought that made me look to see if
it had been done in the right way.
This is, in part, the point made earlier by Keith Thompson -- that C
already has operator overloading for arithmetic and extending that
using compiler extensions as a way to implement _Complex is perfectly
valid. I agree, but it must, of course, be done in a way that
conforms to the standard. My objection was purely practical -- it
seems that Jacob has not covered all the bases yet[1, yet again].
[1] I think this is true. I feel out on a limb here because I am not
an expert in complex number support in C99.
--
Ben.
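Ben's footnoted claim checks out: _Complex is a keyword, so a translation unit can use complex arithmetic without including any header at all; only the <complex.h> conveniences such as I and creal() need the header. A minimal sketch (assert.h here is only for the checks, and is unrelated to complex support):

```c
#include <assert.h>  /* only for the checks below; note: no <complex.h> */

/* _Complex and its arithmetic operators are part of the C99 language
   proper, so this compiles without any complex-related header. */
double _Complex csquare(double _Complex z)
{
    return z * z;  /* built-in complex multiplication */
}
```

This is why overloading set up only inside complex.h cannot be the whole story: the operators must also work in files that never see that header.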
Yes, I did not tell you to eliminate
#ifndef __ANSIC_ONLY
To avoid these bugs I sent you the include file
in comp.lang.lcc.
Thanks for your help, Mr Bacarisse
<snip lcc-win complex support>
> It is not quite that simple. A file can use complex numbers and not
> include any header files[1], so the overloading, if it is needed for
> complex numbers to work, must be there in programs that never see
> complex.h[1, again]. It was this thought that made me look to see if
> it had been done in the right way.
<snip>
> [1] I think this is true. I feel out on a limb here because I am not
> an expert in complex number support in C99.
I'm not an expert on it either (I don't need it for my current work) and
I had not thought of that. Yes, it does sound like more work might be
involved for Jacob.
--
Flash Gordon
me too!
--
Nick Keighley
> Chuck,
>
> Consider me told. I have seen your message I have read your links
> save the off topic posts that everyone else has to endure. If
> you must tell me do so privately. I will arrange for my filters to
> ignore the messages and you will get the satisfaction of pointing
> out a wrong and the rest of the newgroup will not endure endless
> off topic threads whose subject line doesn't fit the message
> of top vs bottom vs 4 line signature (yours is 9 btw).
>
> Regards,
>
> w..
Ignore "Chuck". He's considered to be a bit of a joke here. He is wrong
more often he is right and his group policing and net nannying make him
look like the hypocrite and the fool he is when you consider the fact
that he is STILL posting with a double signature.
You are better off killfiling him and be done with it.
>> Please do not top-post. Your answer belongs after (or intermixed
>> with) the quoted material to which you reply, after snipping all
>> irrelevant material. See the following links:
>>
>> --
>> <http://www.catb.org/~esr/faqs/smart-questions.html>
>> <http://www.caliburn.nl/topposting.html>
>> <http://www.netmeister.org/news/learn2quote.html>
>> <http://cfaj.freeshell.org/google/> (taming google)
>> <http://members.fortunecity.com/nnqweb/> (newusers)
If you were really interested, you would read comp.compilers.lcc.
For example, here's a "bug report", the reaction to which so surprised
poor Keith:
http://groups.google.com/group/comp.compilers.lcc/browse_thread/thread/4a7bcd24bc1cf0f7/36e1c1a6330af3f8
Read the program text in the "bug report" and then talk
about attitudes (do read it, all of it, and understand
what it's doing, and try to guess why it was written).
In the same newsgroup you can see Jacob's reaction to real
bug reports too. You guys are jerks; you won't understand
that some people don't feel like being polite robots when
they get insulted or humiliated (it's funny to see how old wise
Keith acts like a child in such a situation, and yet talks
about "attitude" and shit). Temper, temper, huh?
Yevgen
Ahem. *WE* don't killfile people. In fact, nobody really killfiles
anybody (other than obvious spammers). My basic rule is: If you are
thinking about responding to someone, you're not killfiling them.
Clique members *claim* to have us all (i.e., "us" = the sensible people)
killfiled, but we know they are lying. Given that a lot of them are
(religious) nutters as well, the habit of claiming things that they know
aren't so comes naturally to them.
>Robert Seacord wrote:
>>
>> We would like to invite the C community to review and comment on the
>> current version of the CERT C Secure Coding Standard available online
>> at www.securecoding.cert.org <http://www.securecoding.cert.org>
>> before Version 1.0 is published. To comment, you can create an
>> account on the Secure Coding wiki and post your comments there.
>
>I could find no way to download a draft.
As it's a wiki, I don't expect any comments posted in NGs will be read.
You could use wget -np ... to download the pages, or get a possibly
earlier revision of the complete document (2MB) from
http://www.open-std.org/JTC1/SC22/WG14/www/docs/n1255.pdf.
The web pages seem to be an advert for his book.
The content reminds me too much of Schildt.
Started out as best practices before being pushed as a "standard", with
associated discussions, training, and tools available.
Says it provides rules and recommendations: seems like other coding
guidelines or suggestions of things to avoid; may be a useful checklist
to supplement other approaches.
--
Thanks. Take care, Brian Inglis Calgary, Alberta, Canada
Brian....@CSi.com (Brian[dot]Inglis{at}SystematicSW[dot]ab[dot]ca)
fake address use address above to reply
> > > >> Jacob, if you are interested I will post bug reports to
> > > >> comp.compilers.lcc. I would be very pleased to see another
> > > >> C99-conforming compiler.
>
> > > > Yes, that would be nice. I am interested in fixing my bugs of
> > > > course, and my compiler *is* very important to me. Just tell me
> > > > what you see it doesn't work.
>
> > > > Thanks in advance.
>
> > > A refreshing attitude! (Seriously.)
>
> > me too!
>
> If you were really interested, you would read comp.compilers.lcc.
no, not really.
In the past Jacob Navia has stated that he will not fix
bugs that are reported by people who do not have a maintenance
contract.
I think this is a bad idea.
He seems to have changed that policy.
I think that is a good idea.
> For example, here's a "bug report", reaction to which so surprised
> poor Keith: http://groups.google.com/group/comp.compilers.lcc/browse_thread/threa...
> Read the program text in the "bug report" and then talk
> about attitudes (do read it, all of it, and understand
> what it's doing, and try to guess why it was written).
we were saying JN had a *good* attitude in
this instance.
> In the same newsgroup you can see Jacob reaction to real
> bug reports too. You guys are jerks, you won't understand
> that some people don't feel like being polite robots when
> get insulted or humiliated (it's funny to see how old wise
> Keith acts like a child in such a situation, and yet talks
> about "attitude" and shit). Temper, temper, huh?
JN seems to treat any comment about what he says as
a personal attack.
--
Nick Keighley
I don't believe he's ever really had such a policy, his sarcastic
remarks to the contrary notwithstanding.
Yes; for instance, Mr Falconer wanted me to debug my IDE on his
486. He refused to use a more modern machine.
In those cases a maintenance contract is needed, I am sorry.
In most cases I do fix bugs, as I have demonstrated here
quite a number of times. But I have days with only 24 hours,
and excuse me, I have to sleep, eat, etc etc, I can't work all the
time.
I have corrected most bugs presented by people even if they do not
have a maintenance contract. I can't give you *any* guarantee
that I will do so, however. It depends on whether I have time, and
whether the bug is simple/blocking/important, or hard to fix and
appears only in very unusual contexts.
If you have a maintenance contract you
have the priority, and your problems will be addressed first.
> I think this is a bad idea.
>
> He seems to have changed that policy.
>
No, see above. I always have the same policy of fixing *real*
bugs. There are things open for discussion, obviously.
> I think that is a good idea.
I never did otherwise.
You're lucky he didn't want you to make it interoperate with his
multithreaded Pascal code for the 8080...
> Nick Keighley wrote:
>>
>> In the past Jacob Navia has stated that he will not fix
>> bugs that are reported by people who do not have a maintenance
>> contract.
>>
>
> Yes, for instance Mr Falconer wanted that I debug my IDE in his
> 486. He refused to accept to use a more modern machine.
I think the 486 can be safely ignored at this time. Very few C programs
of any significant size are portable to every existing C
implementation, or even, I suspect, the majority of them.
For example, a piece of code in Chuck's own hashlib.zip which he says is
pure ISO C (and by implication, portable without modification to all
ISO C implementations), does not compile on systems where ULONG_MAX is
not exactly 2^32-1, as was shown by jaysome yesterday.
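The usual way to avoid that kind of breakage is to mask down to 32 bits wherever a fixed width is genuinely needed, rather than assuming ULONG_MAX is exactly 2^32-1 (a sketch of the general technique, with a made-up helper name, not hashlib's actual fix):

```c
#include <limits.h>  /* ULONG_MAX: guaranteed >= 2^32-1, but may be larger */
#include <assert.h>  /* for the checks below */

/* Reduce a value to its low 32 bits, so arithmetic behaves identically
   whether unsigned long is 32 or 64 bits wide. */
static unsigned long mask32(unsigned long x)
{
    return x & 0xFFFFFFFFUL;
}
```

With a helper like this, something such as a 32-bit PRNG's state update gives the same results on implementations with 64-bit unsigned long.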
I think complaining that your IDE fails on the 486 is going a bit
overboard, particularly when you repeatedly mention on your site that
it is guaranteed to run only on Pentiums and above.
> Yes, for instance Mr Falconer wanted that I debug my IDE in his
> 486. He refused to accept to use a more modern machine.
>
> In those cases a maintenance contract is needed, I am sorry.
If part of the problem is not having a 486, you're invited to
contact me off line. I have one in storage that I don't mind
loaning out.
> In most cases I do fix bugs, as I have demonstrated here
> quite a number of times. But I have days with only 24 hours,
> and excuse me, I have to sleep, eat, etc etc, I can't work all the
> time.
You do seem to spend a fair amount of time posting to usenet. :-D
--
Morris Dovey
DeSoto Solar
DeSoto, Iowa USA
http://www.iedu.com/DeSoto/
Answers below.
> I have a few questions about the document. Firstly, why is it based on C99?
We tried to explain this in the Rationale section of our Scope:
https://www.securecoding.cert.org/confluence/display/seccode/Scope
> Is it because it was written at a time when C99 take-up was anticipated
> more optimistically?
No, we started the effort after the Berlin meeting of WG14, just about
2 years ago.
rCs
Jacob doesn't point out that my complaint was 6 or more years ago,
and since then I have simply pointed out his error in claiming lcc
functions under W98.
I have put up replies about the hashlib problem. I would be very
happy to remove that restriction in cokusmt (it only affects the
testing sequence, not the library), but I have no equipment
suitable for testing. The other error is fixed by replacing stdin
with stdout, which it should have been in the first place. A
genuine insect.
--
[mail]: Chuck F (cbfalconer at maineline dot net)
[page]: <http://cbfalconer.home.att.net>
Try the download section.
No I didn't. I advised you that the system had failed within about
2 or 3 months of your causing the failure. You had no idea what
went wrong.
The later criticisms have been when you claim the system works
under W98. It doesn't, if the W98 is running on a 486. Note that
W98 doesn't care. You still don't know why your system fails.
BTW, all this, except the later advisories (about W98) happened
about 6 years ago.
> I think the point was valid: If examples are given, they ought
> to follow *all* the "good practice guidelines". The exhibited
> code had *several* security, portability, and reliability issues.
i agree these comments are valid; this is the sort of feedback we are
looking for.
i only wish i could get you folks to post them as comments on the
wiki; it would really help.
also, please don't stop after you find one error--keep going!
thanks,
rCs
Response below.
On Mar 24, 5:07 am, Brian Inglis <Brian.Ing...@SystematicSW.Invalid>
wrote:
> As it's a wiki, I don't expect any comments posted in NGs will be read.
> You could use wget -np ... to download the pages, or get a possibly
> earlier revision of the complete document (2MB) from
> http://www.open-std.org/JTC1/SC22/WG14/www/docs/n1255.pdf.
>
> The web pages seem to be an advert for his book.
> The content reminds me too much of Schildt.
> Started out as best practices before being pushed as a "standard", with
> associated discussions, training, and tools available.
> Says it provides rules and recommendations: seems like other coding
> guidelines or suggestions of things to avoid; may be a useful checklist
> to supplement other approaches.
the term coding standard has many different interpretations, and there
are a variety of ways such a document can be applied in practice. one
of these, clearly, is to adopt these as coding guidelines. we have
also had a great deal of interest from source code analysis tool
vendors, who would like to be able to check code for compliance with
the rule sets. in this sense, it does provide a "standard" set of
rules for multiple vendors to adopt.
thanks-
rCs
> Richard,
>
> Answers below.
>
>> I have a few questions about the document. Firstly, why is it based on
>> C99?
>
> We tried to explain this in the Rationale section of our Scope:
>
> https://www.securecoding.cert.org/confluence/display/seccode/Scope
Thanks. Alas, my browser crashed whilst trying to load that page, so I
ended up wgetting it.
The document says: "C99 is more widely implemented, but even if it were not
yet, it is the direction in which the industry is moving." Actually, C99
isn't very widely implemented at all, and it isn't a single direction in
which the entire industry is moving. Rather, various bits of the industry
are moving in some of the directions mapped out by C99, but there is
little unanimity.
I was hoping to provide rather more useful feedback to you about the actual
document. But in three attempts to access your site, I've had three
browser crashes. The browser I normally use for Usenet-posted links,
Konqueror, works just fine most of the time - so I have come to perceive
sites that it crashes on as badly-written sites (although of course that's
no excuse for its crashing).
Could you please publish a URL that allows me to download the entire
document as one file, via wget? I'm prepared to spend some time helping
you out, but I don't want to fight your Web site.
--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
> santosh wrote:
>>
> ... snip ...
>>
>> For example, a piece of code in Chuck's own hashlib.zip which he
>> says is pure ISO C (and by implication, portable without
>> modification to all ISO C implementations), does not compile on
>> systems where ULONG_MAX is not exactly 2^32-1, as was shown by
>> jaysome yesterday.
>>
>> I think complaining that your IDE fails on the 486 is going a bit
>> overboard, particularly when you repeatedly mention on your site
>> that it is guaranteed to run only on Pentiums and above.
>
> Jacob doesn't point out that my complaint was 6 or more years ago,
> and since then I have simply pointed out his error in claiming lcc
> functions under W98.
But I'm not aware that he claims the IDE *does* function under
Windows 98. Sure, he might have claimed that in the past, but I believe
he no longer does.
> I have put up replies about the hashlib problem. I would be very
> happy to remove that restriction in cokusmt (it only affects the
> testing sequence, not the library), but I have no equipment
> suitable for testing. The other error is fixed by replacing stdin
> with stdout, which it should have been in the first place. A
> genuine insect.
Yes, and the small problem with the makefile not functioning properly
under UNIX systems. It generates executables with a .exe suffix, but
the runtests file tries to invoke them without their suffix, which
fails. Also 'make hashlib' fails, because it tries to link hashlib.o
into a complete executable, which it can't be. However 'make' does
succeed. Also the fourth test fails.
> rCs said:
>
>> Richard,
>>
>> Answers below.
>>
>>> I have a few questions about the document. Firstly, why is it based
>>> on C99?
>>
>> We tried to explain this in the Rationale section of our Scope:
>>
>> https://www.securecoding.cert.org/confluence/display/seccode/Scope
>
> Thanks. Alas, my browser crashed whilst trying to load that page, so I
> ended up wgetting it.
> I was hoping to provide rather more useful feedback to you about the
> actual document. But in three attempts to access your site, I've had
> three browser crashes. The browser I normally use for Usenet-posted
> links, Konqueror, works just fine most of the time - so I have come to
> perceive sites that it crashes on as badly-written sites (although of
> course that's no excuse for its crashing).
>
> Could you please publish a URL that allows me to download the entire
> document as one file, via wget? I'm prepared to spend some time
> helping you out, but I don't want to fight your Web site.
Hm, I can browse that site fine in both Firefox, Galeon and Konqueror.
Strange that it crashes for you.
<snip>
> Hm, I can browse that site fine in both Firefox, Galeon and Konqueror.
> Strange that it crashes for you.
Possibly due to archaism at this end. Nevertheless, for most sites it works
fine, and I have never yet come across a reason powerful enough to justify
my spending time *today* to grab a later version. Scott Adams explained
the economics of this problem in one of his books.
Is there a single URL from which the "Standard" can be wgetted?
apologies if I misinterpreted you.
--
Nick Keighley
His browser crashes when it sees the magic words
"C99"
:-)
Sometimes you wonder whether this group is for real. Did it occur to you
that Jacob might not care about debugging his IDE on a 486, since CBF is
the only person alive still using one, and he's probably only
complaining to be his usual contrary self?
And the economics of shipping a 486 box across the atlantic are
mind-boggling...
I must admit, I would prefer a document based on the Standard C my
compiler uses, which is C90.
--
Martin
> jacob navia wrote:
>
>> Yes, for instance Mr Falconer wanted that I debug my IDE in his
>> 486. He refused to accept to use a more modern machine.
>>
>> In those cases a maintenance contract is needed, I am sorry.
>
> If part of the problem is not having a 486, you're invited to
> contact me off line. I have one in storage that I don't mind
> loaning out.
Please tell me that I am dreaming here. A troll like Falconer expects
something to be debugged for a totally obsolete processor and now you
are backing him up and offering to post such to Jacob?!?!?!
>
>> In most cases I do fix bugs, as I have demonstrated here
>> quite a number of times. But I have days with only 24 hours,
>> and excuse me, I have to sleep, eat, etc etc, I can't work all the
>> time.
>
> You do seem to spend a fair amount of time posting to usenet. :-D
It's called multitasking.
ROTFLM! "Indeed".
I'm surprised "Mr Falconer" didn't ask for a hex dump to be posted to
"user" so he could scan the opcodes and detect any serious breaches of
C90.
> santosh said:
>
> <snip>
>
>> Hm, I can browse that site fine in both Firefox, Galeon and
>> Konqueror. Strange that it crashes for you.
>
> Possibly due to archaism at this end. Nevertheless, for most sites it
> works fine, and I have never yet come across a reason powerful enough
> to justify my spending time *today* to grab a later version. Scott
> Adams explained the economics of this problem in one of his books.
>
> Is there a single URL from which the "Standard" can be wgetted?
I don't think so, but maybe the printer-friendly index of the standard
will be acceptable to your software?
Also this document is interesting:
<http://www.cert.org/archive/pdf/07tn027.pdf>
It's a proposal for introducing ranged integers into C.
I am way behind on everything. I hope to be able to attack that
test problem this weekend, but no guarantees. On the makefile, for
DJGPP .exe files are needed. Detailed suggestions are welcome.
<snip>
> On the makefile, for
> DJGPP .exe files are needed. Detailed suggestions are welcome.
Why not just add the suffix in runtests? dotest.exe 1 1 and so on.
If I understand you correctly, that would offend my sense of
correctness, in that it would not leave suitably named files for
the system. This may be minor, because the objective is really the
linkable hashlib module alone.
> Richard Heathfield wrote:
>> CBFalconer said:
>>
>> <snip>
>>
>>> On the makefile, for
>>> DJGPP .exe files are needed. Detailed suggestions are welcome.
>>
>> Why not just add the suffix in runtests? dotest.exe 1 1 and so on.
>
> If I understand you correctly, that would offend my sense of
> correctness, in that it would not leave suitably named files for
> the system.
What offends your sense of correctness *more* - a solution that violates an
unwritten and unenforced style rule, or a solution that *doesn't work*?
> CBFalconer said:
>
>> Richard Heathfield wrote:
>>> CBFalconer said:
>>>
>>> <snip>
>>>
>>>> On the makefile, for
>>>> DJGPP .exe files are needed. Detailed suggestions are welcome.
>>>
>>> Why not just add the suffix in runtests? dotest.exe 1 1 and so on.
>>
>> If I understand you correctly, that would offend my sense of
>> correctness, in that it would not leave suitably named files for
>> the system.
>
> What offends your sense of correctness *more* - a solution that
> violates an unwritten and unenforced style rule, or a solution that
> *doesn't work*?
Actually UNIX systems don't care about file suffixes, so he can very
well generate *.exe files. The only necessary fix is to change
his 'runtests' shell script so that it invokes those files with
the .exe suffix. Also, for the 'make hashlib' target he needs to stop
at compilation and not attempt to produce an executable.
> Richard Heathfield wrote:
>
>> CBFalconer said:
>>
>>> Richard Heathfield wrote:
>>>> CBFalconer said:
>>>>
>>>> <snip>
>>>>
>>>>> On the makefile, for
>>>>> DJGPP .exe files are needed. Detailed suggestions are welcome.
>>>>
>>>> Why not just add the suffix in runtests? dotest.exe 1 1 and so on.
>>>
>>> If I understand you correctly, that would offend my sense of
>>> correctness, in that it would not leave suitably named files for
>>> the system.
>>
>> What offends your sense of correctness *more* - a solution that
>> violates an unwritten and unenforced style rule, or a solution that
>> *doesn't work*?
>
> Actually UNIX systems don't care about file suffixes, so he can very
> well generate *.exe files.
Yes, that's precisely my point.
> The only necessary fix is to change
> his 'runtests' shell script so that it invokes those files with
> the .exe suffix.
That's why I said "why not just add the suffix in runtests?" (quoted
above).
<snip>
Yes, sorry, I posted without reading properly.
A reasonable question.
>> If I understand you correctly, that would offend my sense of
>> correctness, in that it would not leave suitably named files for
>> the system.
A perfectly good answer to the question.
> What offends your sense of correctness *more* - a solution that
> violates an unwritten and unenforced style rule, or a solution that
> *doesn't work*?
Those aren't the only choices.
Clearly the current situation is incorrect, since it doesn't work.
Unconditionally adding the ".exe" suffix in runtests would fix that
problem, but it would be an ugly solution, since Unix doesn't require
a ".exe" suffix on executables (and adding one is, in my opinion and
apparently in Chuck's, poor style). I understand that in the djgpp
environment, the ".exe" suffix is required but needn't be specified
when executing a command; Cygwin works the same way, as does plain
Windows.
A better solution would use the ".exe" suffix where it's required, and
not use it where it isn't.
<OT>Probably the best way to do that would be to define a variable in
the makefile whose value is either ".exe" or "" depending on the
system. The method for doing this, particularly for determining which
system you're on, is left as an exercise. A simpler brute-force
solution would be to provide two (or more) makefiles.</OT>
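One hypothetical shape for such a makefile variable (the names are illustrative, not taken from the thread's actual makefile):

```make
# EXEEXT is empty on Unix and ".exe" for DJGPP-style hosts;
# select one definition per system (or let a configure step do it).
EXEEXT =
#EXEEXT = .exe

dotest1$(EXEEXT): dotest1.c
	$(CC) $(CFLAGS) -o $@ dotest1.c

# runtests then invokes ./dotest1$(EXEEXT), matching whichever
# suffix the build produced.
```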
--
Keith Thompson (The_Other_Keith) <ks...@mib.org>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
>Unconditionally adding the ".exe" suffix in runtests would fix that
>problem, but it would be an ugly solution, since Unix doesn't require
>a ".exe" suffix on executables (and adding one is, in my opinion and
>apparently in Chuck's, poor style). I understand that in the djgpp
>environment, the ".exe" suffix is required but needn't be specified
>when executing a command; Cygwin works the same way, as does plain
>Windows.
>
>A better solution would use the ".exe" suffix where it's required, and
>not use it where it isn't.
>
><OT>Probably the best way to do that would be to define a variable in
>the makefile whose value is either ".exe" or "" depending on the
>system. The method for doing this, particularly for determining which
>system you're on, is left as an exercise. A simpler brute-force
>solution would be to provide two (or more) makefiles.
An intermediate between those is to localize the bits that need to
change between systems into a bunch of definitions in one place, and
have everything else use those definitions.
<pseudo-topicality type="stretched">This is much the same way
mostly-portable C programs that depend on interfacing to non-portable
system features can be written.</>
My makefiles collect rules for converting base names into executable
names, object file names, library file names, and command-line
arguments for referring to all of those (along with a few other things
that globally influence the build, like compiler flags), into
build-config.mk.<system>; then building on a different system just
requires copying or symlinking build-config.mk.<system> to
build-config.mk, and (if I've gotten things abstracted correctly, which
can take a while to get sorted out) the makefile that includes that
Just Works.
> </OT>
dave
--
Dave Vandervies dj3vande at eskimo dot com
Either that, or %s is the nearest he can come using ascii to his
native-language term for "hedgehog" ...
--Chris Dollin in comp.lang.c
There is some likelihood that CERT-sponsored coding guidelines
would be added to the set of "de facto standards" against which
conformance is checked by some commercial compilers. (They
usually have option settings to determine which guidelines to
check against.) There is a somewhat similar set of guidelines (the
name temporarily escapes me) used in the automotive industry.
So it would be useful to provide feedback on the guidelines
before they get embedded into such compilers and into the apps
where they have been mandated by management or regulation.
Robert Seacord said:
>
> We would like to invite the C community to review and comment on the
> current version of the CERT C Secure Coding Standard available online at
> www.securecoding.cert.org before
> Version 1.0 is published.
Here are my comments on the preprocessor section, PRE00-A to PRE31-C.
These comments can also be found at
http://www.cpax.org.uk/prg/portable/c/reviews/seccode.php
I will endeavour to get to the rest of the document as and when I can find
the time.
PRE00-A. Prefer inline functions to macros
The claim that 'macros are dangerous' can, to a certain extent, be
justified, in much the same way that the claim 'crossing the road is
dangerous' can be justified. Crossing the road is dangerous, even to the
experienced road user, if done carelessly. Likewise, using macros can be
problematic. SECCODE quotes the classic i++ abuse of a macro call, and
suggests that inline functions can eliminate this problem.
This is certainly true where inline functions are available. It is true
that C99 guarantees that inline functions are available, but nowhere is
there any guarantee that a C99 implementation is available! Yes, within
the terms of reference of SECCODE, the suggestion to use inline functions
has obvious merit. But for those of us without access to inline functions,
it is impractical. SECCODE itself recognises this, by pointing out
portability concerns.
For the experienced C programmer, the use of UPPER CASE for macro names is
generally sufficient to act as a warning that macro arguments with side
effects should be avoided.
In the first code fragment, we see a #define that is clearly local to a
function (because the fragment contains executable code). This is
generally considered to be unwise. I can see why the fragment is written
that way, of course - it's to save space! But in a document which purports
to provide a guide to best practice, it's still a curious way to write
code.
The expansion given in the second code fragment is (trivially) incorrect.
Strictly, it should be:
int a = 81 / ((++i) * (++i) * (++i));
The SECCODE version of the expansion omits the extra parentheses that its
previous definition requires.
The third code fragment, which demonstrates the inline version of the code,
is poorly written. When we're cubing a value, even a moderate input can
result in overflow. For example, on 16-bit-int systems, i = 32 will
produce overflow in the call to cube. On 32-bit-int systems, i = 1291
(hardly excessive) will produce overflow. Precisely how one would solve
this problem depends on one's requirements. If it is known that excessive
inputs will never occur, an assertion is appropriate. If they may well
occur, it must be decided whether they constitute an error. If so, the
function needs a way to report that error. Otherwise, an alternative
strategy (e.g. bignums) must be considered. In any event, the function as
written is insecure, because it can produce undefined behaviour.
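One way to make the cube function secure is to report overflow rather than permit it. The sketch below uses a hypothetical checked_cube (SECCODE defines no such function) and assumes C99's long long; the bounds are the largest magnitudes whose cube fits in a 64-bit long long.

```c
#include <assert.h>

/* Hypothetical replacement for the inline cube() criticized above:
   reports overflow through the return value instead of invoking
   undefined behaviour.  Assumes a 64-bit long long. */
static int checked_cube(int i, long long *result)
{
    long long x = i;
    /* 2097151^3 < LLONG_MAX < 2097152^3, and (-2097152)^3 == LLONG_MIN */
    if (x > 2097151LL || x < -2097152LL)
        return -1;            /* would overflow: refuse */
    *result = x * x * x;
    return 0;                 /* success */
}
```

Whether the caller treats the failure return as an error or falls back to a bignum path is exactly the requirements question raised above.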
The fourth fragment illustrates that a clash between a macro parameter name
and a file scope object name can cause incorrect results. It seems to me,
however, that this is not so much a reason not to use macros as a reason
not to use file scope objects!
In the 'SWAP' example, the hoary old XOR trick is used to swap two values.
This is a really bad idea, for two reasons: firstly, it only works on
integer types: SWAP(mydoubleA, mydoubleB) will fail to compile. Secondly,
if you do this: SWAP(myintA, myintA), you don't get a swap - you get 0!
The example, then, is a poor one. Whilst the suggested replacement
sidesteps the first problem neatly (because it suggests a function that
takes int *, so nobody can reasonably expect it to swap doubles), it fails
to address the second problem. Far better to use a temp:
void swap(int *pa, int *pb)
{
int tmp = *pa;
*pa = *pb;
*pb = tmp;
}
This is guaranteed to work, provided only that pa and pb point to valid int
objects.
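The difference can be demonstrated directly. The do/while form of SWAP below is a well-sequenced rendering of the XOR trick (SECCODE's own spelling may differ), used here only to show its failure when both arguments name the same object:

```c
#include <assert.h>

/* The XOR trick criticized above: zeroes the object if a and b
   designate the same int. */
#define SWAP(a, b) do { (a) ^= (b); (b) ^= (a); (a) ^= (b); } while (0)

/* The temp-based version from the text: works even when pa == pb. */
static void swap(int *pa, int *pb)
{
    int tmp = *pa;
    *pa = *pb;
    *pb = tmp;
}
```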
The example where the interaction of two macros and a file scope object
produce incorrect results is rather contrived. It's an argument against
using macros to adjust file scope object values. It's an argument against
file scope object values themselves. It's an argument against tight
coupling. But it's not an argument against macros.
The claim that the execution of functions cannot be interleaved 'so
problematic orderings are not possible' ignores the fact that the order of
evaluation of multiple functions called in the same statement is not
specified. Whilst this doesn't cause a problem in the example given, it
can certainly cause problems in cases where the called functions have
related side effects (e.g. updating the same object, writing to the same
stream, or whatever). Although y = f(x) + g(x); is harmless in the example
given, it is not a recipe for success.
Lest the wrong impression be garnered from the above, let me reiterate that
the danger of side effects being unwittingly duplicated by macros is
significant, and the careful programmer should ensure that (a) macros
don't evaluate arguments more than once if at all possible; (b) macro
names are written in UPPER CASE to draw attention to them; (c) macros are
only used if there is no sensible alternative that meets the project
requirements.
PRE01-A. Use parentheses within macros around parameter names
It is of course a good idea to parenthesise macro parameters. I would,
however, take issue with the reasoning that it is not necessary to do this
when the parameter names are surrounded by commas in the replacement text
'because commas have lower precedence than any other operator'! The
example given is: #define FOO(a, b, c) bar(a, b, c) - which suggests that
bar is a function. The commas that separate arguments in a function call
are comma separators, not comma operators, so precedence doesn't enter
into it.
PRE02-A. Macro replacement lists should be parenthesized
Good advice - macro replacement lists should indeed be parenthesised. The
example given, EOF, is a strange one, because the claim is that defining
this to -1 rather than (-1) results in a typographical error of if(c EOF)
(a typo for if(c != EOF)) failing to be diagnosed, whereas the parentheses
make the expansion of the typo syntactically invalid. That, of course, is
quite wrong. if(c(-1)) is perfectly valid, syntactically speaking. It is a
function call!
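The point can be seen by expanding the two definitions by hand; the macro names below are illustrative, not from SECCODE:

```c
/* Two ways of defining an EOF-like macro. */
#define END_BARE   -1      /* unparenthesised */
#define END_PAREN  (-1)    /* parenthesised   */

/* With the bare definition, the typo "c END_BARE" (missing the !=)
   expands to "c -1", a perfectly legal subtraction that compiles
   silently.  With END_PAREN it expands to "c (-1)", which is
   syntactically a function call and is rejected when c is an int. */
static int typo_demo(int c)
{
    return c END_BARE;     /* really: c - 1 */
}
```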
PRE03-A. Prefer typedefs to defines for encoding types
In my experience, typedef should be used sparingly, although there are
definite cases where it is useful. If, however, you are going to 'encode'
(create a new name for) a type anyway, then you should certainly use
typedef rather than a macro! So, to that extent, I agree with SECCODE
here.
The example, however, is less than ideal. Because ISO reserves identifiers
of the form str[a-z]* in most situations, it is generally wise to avoid
creating any such identifiers yourself. I haven't bothered to look up
whether typedef char * string; breaches ISO rules (feel free to do so
yourself). But if I were writing code that used such a typedef, I'd be
forced to look it up. Rather than bother to do that, I'd choose a
different name - e.g. cstring, or something like that. Worse still, the
name is misleading - char * is not a synonym for string in C, and to
suggest (via the typedef) that it is, is a disservice to the code reader.
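A further classic argument for typedef over #define, not raised by SECCODE, is that a macro "type" interacts badly with multiple declarators; the names below (including cstring, as suggested above) are illustrative:

```c
/* The macro expansion applies the '*' only to the first declarator. */
#define STRING_MACRO char *
typedef char *cstring;   /* 'cstring' rather than 'string', staying
                            clear of the reserved str[a-z]* space */

STRING_MACRO p, q;   /* expands to: char *p, q;  -- q is a plain char! */
cstring r, s;        /* both r and s are char *                        */
```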
PRE04-A. Do not reuse a standard header file name
Quite right - don't reuse a standard header name. Note, however, that
headers are not required to be files. Implementations have considerable
licence with headers, and are not obliged to provide or use files to
encapsulate the information that headers are required to represent.
PRE05-A. Understand macro replacement
Whilst I certainly agree that it is a great idea to understand macro
replacement, I'm not quite sure why this advice appears in SECCODE. It
would seem to be better suited to an introductory tutorial or an FAQ.
PRE06-A. Enclose header file in an inclusion sandwich
I'm not sure why SECCODE refers to an 'inclusion sandwich' rather than
'inclusion guards', but the idea itself is obviously sound. SECCODE fails
to mention, however, that the identifier used for the guard should observe
ISO rules on reserved identifiers. It's an easy trap to fall into, if you
use the convention that the identifier HEADER_H is used for guarding
header.h - this means that, say, errors.h will be guarded by ERRORS_H,
which violates the ISO rule reserving identifiers that begin with E
followed by a digit or another upper case letter (they belong to
<errno.h>'s error macros). A more robust convention is to
protect header.h with H_HEADER (so errors.h's guard becomes H_ERRORS).
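A sketch of that convention for a hypothetical errors.h (report_error is an invented example, and a real header would normally only declare it):

```c
/* The guard H_ERRORS stays clear of the E[0-9A-Z]... identifier
   space that ISO reserves for <errno.h>. */
#ifndef H_ERRORS
#define H_ERRORS

static int report_error(int code)   /* illustrative only */
{
    return code < 0 ? -1 : 0;
}

#endif /* H_ERRORS */
```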
PRE07-A. Avoid using repeated question marks
The point of this advice is to protect you from accidental trigraphs! And
of course it's sound advice. The example, though, is an interesting one.
It introduces a problem for which the blame should probably be shared
about evenly between single-line comments and trigraphs, and presents a
solution in which both have been eliminated. I'm not complaining about
this, but it seems to lack focus. (Note that the simplest fix would have
been the introduction of a space after, or even just the removal of, the
second question mark.)
If you don't use trigraphs, see if you can get your implementation to
disable them. (Some do this by default - you actually have to turn them
on, rather than off.) If you can't disable them (either because your
implementation won't let you or because you do actually use them), it's
worth grepping your source code occasionally in search of trigraph
sequences, so that you can decide on a case-by-case basis whether you
intended to use a trigraph in that situation.
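If a literal pair of adjacent question marks must stay in a string, the usual escape-based workaround looks like this (the strings themselves are illustrative):

```c
#include <string.h>

/* "??!" is the trigraph for '|', so the first literal may silently
   become "What|" under a trigraph-honouring translator; escaping the
   second '?' preserves the characters everywhere. */
static const char risky[] = "What??!";
static const char safe[]  = "What?\?!";   /* always ends ? ? !  */
```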
PRE08-A. Guarantee that header filenames are unique
The advice given here is good. In general, you should favour short,
descriptive, unambiguous header names. Yes, I know - how can they be
descriptive and unambiguous if they're short? Nevertheless, the Standard
does limit the length that you're guaranteed to be able to use.
Fortunately, there are 2,037,468,266,496 (26 * 36^7) different
combinations of alpha + 7 alphanumeric, so you should be able to find
something that is, at the very least, short and unambiguous!
PRE30-C. Do not create a universal character name through concatenation
This advice is C99-specific, obviously, but it's still a good idea to avoid
accidental use of UCNs in C89 code, lest you choose to port to a C99
implementation some day.
PRE31-C. Never invoke an unsafe macro with arguments containing assignment,
increment, decrement, or function call
By 'unsafe macro', SECCODE means a macro that evaluates at least one of its
arguments more than once. It is clearly a bad idea to pass to such a macro
any argument that has side effects. The four side effects that are
singled(?!) out in the title are in fact the only four I can think of -
but if you do manage to think of any others, don't pass those to macros
either, okay?
The example is strange, because it's completely pointless. Would it really
have taken the author too much time to write a = ABS(n); rather than just
ABS(n)?
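The double evaluation is easy to exhibit with a side-effecting argument; the counter and helper below are illustrative, not from SECCODE:

```c
/* The classic unsafe macro: evaluates its argument twice. */
#define ABS(x) ((x) < 0 ? -(x) : (x))

static int abs_calls;

static int next(void)   /* a side-effecting "argument" */
{
    abs_calls++;
    return -5;
}
```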
"Richard Heathfield" <r...@see.sig.invalid> wrote in message
news:X6qdncC25oPbyG3a...@bt.com...
I think I fell prey to this, demonstrably and recently.
Now that I've worked through it, with help, it's much less of a C wicked
witch.
I thought I had a trigraph but had, instead, \177 (decimal 127, the
only octal value to receive such attention: DEL).
I wonder what role this character plays that escapes my attention.
A lot of other things above and below that I couldn't catch with tired eyes.
--
"I am waiting for them to prove that God is really American."
~~ Lawrence Ferlinghetti
The first three are only one kind of side effect: assignment.
The fourth one is only a side effect,
if the called function has side effects.
> but if you do manage to think of any others, don't pass those to
> macros either, okay?
Writing to a file is a side effect.
Accessing a volatile object is a side effect.
--
pete
More accurately:
The first three are only one kind of side effect: modifying an object.
<snip>
> Writing to a file is a side effect.
How do you write to a file without calling a function (or function-like
macro)?
> Accessing a volatile object is a side effect.
Agreed.