Whenever I use the gets() function, the gnu c compiler gives a
warning that it is dangerous to use gets(). Is this due to the
possibility of array overflow? Is it correct that the program flow can
be altered by giving some specific calculated inputs to gets()? How
could anyone do so once the executable binary has been generated? I
have heard that many of the security problems and other bugs are due to
array overflows.
Looking forward to your replies.
Lee
The solution is simple: don't use gets(). Not ever. As to what
happens if you do use gets() and the quantity of input is greater than
the destination space, the C language does not know or care. As to
how this undefined behavior might be exploited by someone with
malicious intent, that too is not a language issue.
The authors of your compiler, quite properly and responsibly, take it
upon themselves to warn you that you should not use gets(). Why are
you still using it?
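(For what it's worth, a minimal sketch of the usual replacement, assuming
all you want is one line read into a fixed-size buffer; fgets() keeps the
newline, so the example strips it:)

#include <stdio.h>
#include <string.h>

int main(void)
{
    char buf[256];

    if (fgets(buf, sizeof buf, stdin) != NULL) {
        buf[strcspn(buf, "\n")] = '\0';  /* strip the newline, if any */
        printf("you typed: %s\n", buf);
    }
    return 0;
}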
--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://www.eskimo.com/~scs/C-faq/top.html
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.contrib.andrew.cmu.edu/~ajo/docs/FAQ-acllc.html
>Hi
>
> Whenever I use the gets() function, the gnu c compiler gives a
>warning that it is dangerous to use gets(). Is this due to the
>possibility of array overflow? Is it correct that the program flow can
Yes
>be altered by giving some specific calculated inputs to gets()? How
Yes
>could anyone do so once the executable binary has been generated? I
>have heard that many of the security problems and other bugs are due to
>array overflows.
>
>Looking forward to your replies.
Don't hold your breath. Buffer overflow is not a c language topic.
<<Remove the del for email>>
> Don't hold your breath. Buffer overflow is not a c language topic.
But is well documented elsewhere:
http://en.wikipedia.org/wiki/Buffer_overflow
Nothing in Wikipedia can be considered well documented without
additional credible references.
This is possibly the most poorly written and inaccurate article I have
read on Wikipedia. Did you even read it before posting the link?
Robert Gamble
However, you can always use ggets() (note the extra g, for good).
This was written to have the convenience and simplicity of gets(),
without any possible overrun. The source is written in ISO standard
C (and is thus portable to any system) and is available at:
<http://cbfalconer.home.att.net/download/ggets.zip>
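(For illustration only, and emphatically not the actual ggets() source:
a rough sketch of the general idea behind such functions, growing a heap
buffer until a newline or end-of-file is seen, so that no input line can
overrun it. The function name is made up.)

#include <stdio.h>
#include <stdlib.h>

/* Read one line of any length from fp into a malloc'd buffer.
   Returns the buffer (caller frees it), or NULL on end-of-file
   with no data read or on allocation failure. The newline is
   not stored. */
char *read_line_sketch(FILE *fp)
{
    size_t size = 16, len = 0;
    char *buf = malloc(size);
    int ch;

    if (buf == NULL)
        return NULL;
    while ((ch = getc(fp)) != EOF && ch != '\n') {
        if (len + 1 >= size) {
            char *tmp = realloc(buf, size * 2);
            if (tmp == NULL) {
                free(buf);
                return NULL;
            }
            buf = tmp;
            size *= 2;
        }
        buf[len++] = (char)ch;
    }
    if (ch == EOF && len == 0) {
        free(buf);
        return NULL;
    }
    buf[len] = '\0';
    return buf;
}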
--
"If you want to post a followup via groups.google.com, don't use
the broken "Reply" link at the bottom of the article. Click on
"show options" at the top of the article, then click on the
"Reply" at the bottom of the article headers." - Keith Thompson
More details at: <http://cfaj.freeshell.org/google/>
In fact something like 50% of scientific papers reach conclusions which are
later refuted or challenged by further papers (read Ioannidis for a
peer-reviewed take on the subject).
No medium written by humans can guarantee complete accuracy, freedom from
bias, etc. Wikipedia is no different from any other source.
> Hi
>
> Whenever I use the gets() function, the gnu c compiler gives a
> warning that it is dangerous to use gets().
No, it's the linker that warns you, not the compiler.
> Is this due to the possibility of array overflow?
Yes.
> Is it correct that the program flow can be altered by giving some
> specific calculated inputs to gets()?
Yes.
> How could anyone do so once the executable binary has been generated?
By overwriting the stack, for example. On a typical machine, the program is
loaded from disk into memory before execution. During execution, it is
present in memory. And the thing about memory is that it can be overwritten
with new values.
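(A minimal sketch to make that concrete; the buffer size is arbitrary,
and exactly what sits next to the buffer is system-specific:)

#include <stdio.h>

int main(void)
{
    char buf[16];  /* automatic storage, typically on the stack */

    gets(buf);     /* a line longer than 15 characters writes past the
                      end of buf, over whatever the implementation keeps
                      next to it, possibly a saved return address */
    puts(buf);
    return 0;
}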
> I have heard that many of the security problems and other bugs are due to
> array overflows.
Quite.
--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
However, it is always useful to see where the author got the
idea/solution or based her/his own conclusions.
> In fact something like 50% of scientific papers reach conclusions which are
> later refuted or challenged by further papers (read Ioannidis for a
> peer-reviewed take on the subject).
> No medium written by humans can guarantee complete accuracy, freedom from
> bias, etc. Wikipedia is no different from any other source.
Yes, and for that reason papers are written, commented on, and referenced:
to prove their accuracy, correct them, or throw them away.
Further claims are then made for peer-reviewed literature, which are largely
unwarranted. In particular, peer review does not "prove accuracy".
Personally, I am waiting for the compiler that implements gets as the
following:
char * gets (char * s) {
    unlink (__FILE__); /* POSIX */
    return s;
}
Since it would improve its predictability.
> [...] Is this due to the
> > possibility of array overflow? Is it correct that the program flow can
> > be altered by giving some specific calculated inputs to gets()? How
> > could anyone do so once the executable binary has been generated? I
> > have heard that many of the security problems and other bugs are due to
> > array overflows.
> >
> > Looking forward to your replies.
> > Lee
(My recommendation is to *learn* about the problem here:
http://www.pobox.com/~qed/userInput.html )
> The solution is simple: don't use gets(). Not ever.
Hmmm ... here's a rhetorical question. What is the value of
specifying a function in the language definition if you can't even use
it -- not ever?
> [...] As to what
> happens if you do use gets() and the quantity of input is greater than
> the destination space, the C language does not know or care. As to
> how this undefined behavior might be exploited by someone with
> malicious intent, that too is not a language issue.
>
> The authors of your compiler, quite properly and responsibly, take it
> upon themselves to warn you that you should not use gets(). Why are
> you still using it?
Well, fundamentally, the reason he uses it is because it's there, and
because the language standard itself continues to endorse the use of
this function. Unfortunately, the compiler, even after warning you,
and with all sorts of comments telling you about it in the man pages,
goes ahead and compiles/links the code. The compiler/linker *could*
simply dump out with an error unless you give it a -unsafe flag or
something like that. I still don't know who exactly is pulling for the
continued support for this function, but they seem to have a lot of
influence over compiler vendors and the standards committee.
The OP sees this linker warning as an annoyance, and wants to make the
annoyance go away. He's lucky in that there are some responses here
telling him to stop using the function, but on another day he'd just
get a lot of bickering about top-posting, or forgetting to quote a
previous post or quoting too much of one.
--
Paul Hsieh
http://www.pobox.com/~qed/
http://bstring.sf.net/
The problem is that, while gets _can_ invoke undefined behavior based
on things the programmer has no control over, it is well-defined as long
as that does not happen - this is the same reason that it's not legal
for an implementation to reject a program using it.
Uh ... according to a recent independent study, the Encyclopedia
Britannica has only a 33% better error rate than Wikipedia (meaning
Wikipedia has roughly 33% more errors per article than Britannica.) Now
of course, that doesn't mean you should rely solely on Wikipedia, but
what it means is that you should apply roughly the same degree of
trust to Wikipedia that you would to a reasonable encyclopedia (maybe it's
as good as Funk & Wagnalls :) )
The funny ironic thing about supposed experts who complain about the
accuracy of Wikipedia is that they don't bother to come to the
realization that the degree of correctness of any given Wikipedia
article is actually in their hands. I wonder -- in whose interest is
it to denigrate or attack Wikipedia?
> Jack Klein wrote:
>> On 23 Dec 2005 20:29:01 -0800, "Lee" <lee...@gmail.com> wrote in
>> comp.lang.c:
>> > Whenever I use the gets() function, the gnu c compiler gives a
>> > warning that it is dangerous to use gets().
>
> Personally, I am waiting for the compiler that implements gets as the
> following:
>
> char * gets (char * s) {
> unlink (__FILE__); /* POSIX */
remove(__FILE__); /* ISO */
>> The solution is simple: don't use gets(). Not ever.
>
> Hmmm ... here's a rhetorical question. What is the value of
> specifying a function in the language definition if you can't even use
> it -- not ever?
None whatsoever. So let's drop it from the language definition.
>Uh ... according to a recent independent study, the Encyclopedia
>Britannica has only a 33% better error rate than Wikipedia
If you examine the study rather than news reports on it from seemingly
uninformed journos, you will find that Nature deliberately excluded
articles which might be subject to any contention, disagreement or
debate, i.e. anything in the humanities, much of the science, politics, and
all biography. This eliminates virtually all of Wikipedia, I suspect.
And 33% more errors in the articles there is no debate about sounds
pretty dang poor to me. :-)
>The funny ironic thing about supposed experts who complain about the
>accuracy of Wikipedia is that they don't bother to come to the
>realization that the degree of correctness of any given Wikipedia
>article is actually in their hands.
I always love this bullshit argument.
"Someone else wrote lies and/or misinformation but thats not their
fault, its yours for not spending your time fixing it."
Er, no. Its the fault of the person who was too lazy, biassed or
ignorant to get the facts right in the first place, and its the fault
of the maintainers of wikipedia for not applying better editorial
control.
>I wonder -- in whose interest is it to denigrate or attack Wikipedia?
And in whose interest is it to defend it, even when faced with a
glaring failure?
Why use an extension when the standard function exists?
remove (__FILE__);
> return s;
> }
BTW, I suppose that you want some [C99] 'inline'. If not, the effect
would be limited (the implementation file is probably not that close)
--
A+
Emmanuel Delahaye
> webs...@gmail.com wrote:
>>
>> char * gets (char * s) {
>> unlink (__FILE__); /* POSIX */
>
> Why use an extension when the standard function exists?
Good question.
> remove (__FILE__);
>
>> return s;
>> }
>
> BTW, I suppose that you want some [C99] 'inline'. If not, the effect
> would be limited (the implementation file is probably not that close)
Well, in all fairness to websnarf, those foolish enough to use gets() would,
one hopes, tend to be ill-informed students who will be running their
student exercise programs on the very machine on which they are compiling
those programs.
As an educational LART, it's not a terribly bad idea.
I was just unaware of the existence of remove(), but happened to run
across unlink() in some incidental search through my documentation some
time ago. Learn a new thing every day.
> Good question.
>
> > remove (__FILE__);
> >
> >> return s;
> >> }
> >
> > BTW, I suppose that you want some [C99] 'inline'. If not, the effect
> > would be limited (the implementation file is probably not that close)
Right, and if we don't have C99 then we can't force this to work. Oh
what a great standard this language has ... *sigh*. Anyhow, my comment
was really aimed at compiler implementors not general programmers.
> Well, in all fairness to websnarf, those foolish enough to use gets() would,
> one hopes, tend to be ill-informed students who will be running their
> student exercise programs on the very machine on which they are compiling
> those programs.
>
> As an educational LART, it's not a terribly bad idea.
*DING DING DING DING DING!* Give the man a prize.
> Emmanuel Delahaye said:
>> BTW, I suppose that you want some [C99] 'inline'. If not, the effect
>> would be limited (the implementation file is probably not that close)
>
> ... those foolish enough to use gets() would,
> one hopes, tend to be ill-informed students who will be running their
> student exercise programs on the very machine on which they are compiling
> those programs.
Possibly, but I wonder how many hits you would find if you could
grep the source on all of sourceforge for gets(). I suspect
it's nonzero, and that represents a tiny portion of the software
out there that could contain it, outside of "student code".
(Yes, I know a lot of what is on sourceforge is student level,
at best)
--
Randy Howard (2reply remove FOOBAR)
"The power of accurate observation is called cynicism by those
who have not got it." - George Bernard Shaw
Sure it is, but the discussion ends with the phrase "undefined
behavior".
--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
--
Joe Wright
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---
> Possibly, but I wonder how many hits you would find if you could
> grep the source on all of sourceforge for gets(). I suspect
> it's nonzero, and that represents a tiny portion of the software
> out there that could contain it, outside of "student code".
> (Yes, I know a lot of what is on sourceforge is student level,
> at best)
If a program contains a call to gets(), it is broken. The fact that some
programs contain calls to gets() is not a good argument for continuing to
offer support for gets() in its present form.
If gets() is removed from the library or re-cast as something like
system("format c: /y") or system("rm -rf *") or whatever, then this will
not affect any well-written programs whatsoever. As for those programs it
does affect, we're better off without them.
But it would eliminate all of Britannica too by the same reasoning.
Obviously they wanted to pick topics for which they could find real
authorities that could establish absolute truth on the topic. That's a
little hard to do with abortion.
> And 33% more errors in the articles there is no debate about sounds
> pretty dang poor to me. :-)
Well, as long as we are going to cite the actual data -- they found an
average of 3 errors per article in Britannica, and 4 in Wikipedia.
Personally, I find anything above about 0.001 pretty unacceptable,
but if this is the best standard we have, then I just see it as a wash
-- Britannica and Wikipedia are roughly the same, and you need to go
beyond them for any serious research anyway.
As a side note, I've made about a half dozen significant contributions
to Wikipedia myself, and try very hard to steer away from anything that
isn't clearly true or which incorporates my personal bias. So when I
read that story, I was a little saddened to learn that the error rate
is so high (I really don't think my error rate on Wikipedia is anywhere
near that), but equally shocked that Britannica is not better in any way
that seriously matters.
What's poor is that there is an error rate in articles that is greater
than 1 per article, let alone as bad as 3 or 4. Is the human race really
that pathetic that technical integrity is beyond our grasp? Britannica is
not relevantly better -- so the problem is not with Wikipedia but with
people's accuracy in general.
> >The funny ironic thing about supposed experts who complain about the
> >accuracy of Wikipedia is that they don't bother to come to the
> >realization that the degree of correctness of any given Wikipedia
> >article is actually in their hands.
>
> I always love this bullshit argument.
> "Someone else wrote lies and/or misinformation but thats not their
> fault, its yours for not spending your time fixing it."
No, the problem is that these people spend their time yelling about the
inaccuracies at the top of their lungs instead of being part of the
solution, for which there is an obvious need.
> Er, no. Its the fault of the person who was too lazy, biassed or
> ignorant to get the facts right in the first place, and its the fault
> of the maintainers of wikipedia for not applying better editorial
> control.
This shows ignorance of how Wikipedia works. There *IS* no structured
editorial control outside the contributors themselves. And there is no
*fault* here. If you don't have the patience to be part of the
solution, your standing in decrying its problems is seriously
undermined and, IMHO, has no value. You didn't pay for Wikipedia, and
you didn't put in any effort to make it better, and yet you are going
to complain about it rather than either marvelling at just how good it
is, considering, or helping make it better.
This just seems to me like seeing a deaf old lady trying to cross a
railroad track with a train coming, then pulling up a lawn chair and
saying "Wow, this accident is going to be good!"
> >I wonder -- in whose interest is it to denigrate or attack Wikipedia?
>
> And in whose interest is it to defend it, even when faced with a
> glaring failure?
Having a universally accessible, amazingly large, somewhat accountable,
up-to-date, and free encyclopedia with an accuracy rate (which I have a
small role in assisting) similar to commercial offerings? See, unlike
some people, I actually see a lot of *value* in Wikipedia -- and I
think a lot of people do. If the effort I put into it is mirrored by
others, then it will mean that I should expect that level of content
and accuracy of the thing to be extremely high.
As an aside, I recently looked up information about Sucralose and
Aspartame -- I was just looking up basic information, and as a
comparison, I also googled for information on those two things.
The whole business about stability, how they are mixed with other sugar
substitutes, the process by which they are made, and the safety
arguments for each comes across in a very clear and structured way in
the Wikipedia articles. I *learned* something from Wikipedia that I
just couldn't get from Google searches, and that isn't going to be in
obsolete versions of Britannica, or sparse resources like Microsoft's
Encarta (the last two, which nobody seems to criticize with anywhere
near the volume of criticism we're seeing aimed at Wikipedia.) Now if I
want *real* information about those sugar substitutes, I am going to have
to collect data myself from the FDA and its equivalents in other
countries (since I don't really trust the American one.) That's a bit
of a research project considering I just wanted basic information.
So you can count me among the defenders.
> When was it that use of gets() became widely known as evil?
1988, I think.
> Is anyone listening?
Alas, no.
> Randy Howard said:
>
>> Possibly, but I wonder how many hits you would find if you could
>> grep the source on all of sourceforge for gets(). I suspect
>> it's nonzero, and that represents a tiny portion of the software
>> out there that could contain it, outside of "student code".
>> (Yes, I know a lot of what is on sourceforge is student level,
>> at best)
>
> If a program contains a call to gets(), it is broken. The fact that some
> programs contain calls to gets() is not a good argument for continuing to
> offer support for gets() in its present form.
Which is why I wasn't arguing for continued support. I was
simply commenting about the idea that it was useful for teaching
purposes, which I'm not sold on.
> If gets() is removed from the library or re-cast as something like
> system("format c: /y") or system("rm -rf *") or whatever, then this will
> not affect any well-written programs whatsoever.
Gee, that's likely.
> As for those programs it does affect, we're better off without them.
Agreed.
> When was it that use of gets() became widely known as evil? I started C
> fifteen or more years ago and it was evil then.
Sounds about right.
> Why are some now just discovering it is evil? Is anyone listening?
For the same reason that "news" consists of recurring daily if
not hourly discussion of some missing girl in the Caribbean six
months after it happened.
Tech "news" is limited to the latest cool video card for gaming.
What we need is a "slashdot" for actual programmers, rather
than a version aimed at those that wish to pretend to be
technical by discussing the latest gadget trends.
Usenet used to serve that purpose, but a very small percentage
of the tech community is involved in it today.
Why did it make it into the standard, then? Other things from the base
document [IIRC for the library it was the "/usr/group" proto-posix
standard] didn't make it in, or were changed.
A more likely change is to add a required diagnostic [some
implementations already provide such a diagnostic] and perhaps to allow
such programs to fail to translate.
> I was
> simply commenting about the idea that it was useful for teaching
> purposes, which I'm not sold on.
Nobody has suggested that gets() is useful for teaching purposes, as far as
I'm aware. What somebody did suggest was that an overtly destructive
implementation of gets() would have educational value. I think it's called
aversion therapy.
Most likely, lots of existing code was using it already.
>But it would eliminate all of Britannica too by the same reasoning.
Incorrect: It would eliminate 50% less of Britannica.
>Obviously they wanted to pick topics for which they could find real
>authorities that could establish absolute truth on the topic. That's a
>little hard to do with abortion.
Perhaps you should actually read the Nature article, instead of
guessing.
>-- Britannica and Wikipedia are roughly the same, and you need to go
>beyond them for any serious research anyway.
33% is significantly outside the statistical level of 'sameness' for
the article population they examined. Do you have any idea about
stats?
>> I always love this bullshit argument.
>> "Someone else wrote lies and/or misinformation but thats not their
>> fault, its yours for not spending your time fixing it."
>
>No, the problem is that these people spend their time yelling
No, this isn't the problem. Trying to blame the people who spot the
errors is classic defensive behaviour of someone who knows they're
wrong, by the way.
>about the
>inaccuracies at the top of their lungs instead of being part of the
>solution, for which there is an obvious need.
I recommend you read the Register's excellent series of articles on
exactly what happens when one actually /does/ try to correct glaring
errors, omissions and falsehoods.
>> Er, no. It's the fault of the person who was too lazy, biased or
>> ignorant to get the facts right in the first place, and it's the fault
>> of the maintainers of Wikipedia for not applying better editorial
>> control.
>
>This shows ignorance of how Wikipedia works. There *IS* no structured
>editorial control outside the contributors themselves.
This is precisely my point. There is no editorial control, so there
is nothing, nothing at all, to prevent complete lies, falsehoods,
misunderstandings and other mistakes.
>If you don't have the patience to be part of the
>solution,
Ah. I didn't realise I was talking to a cretinous utopian "the web is
god" lunatic.
>So you can count me among the defenders.
I pity you.
Yes, plus the inability to do ANYTHING to prevent it until
it's too late.
>Is it correct that the program flow can
>be altered by giving some specific calculated inputs to gets()?
Chances are, you can alter the program flow by simply giving gets()
a sufficiently long line without calculating much of anything. This
is particularly true if the input buffer for gets() is an auto
character array. (Although this is system-specific, it is likely
to amount to "scribbling on the return address in the stack frame",
causing a branch to, err, somewhere.) The program will likely just
crash rather than send out tons of Viagra ads for you, but often
that's enough to do damage.
>How
>could anyone do so once the executable binary has been generated? I
Why don't you ask the FBI Computer Task Force?
>have heard that many of the security problems and other bugs are due to
>array overflows.
Gordon L. Burditt
> "Jack Klein" <jack...@spamcop.net> wrote
> >
> > Nothing in Wikipedia can be considered well documented without
> > additional credible references.
> >
> <OT>
> That's what crusty academics say because a new competitor has come along. Of
> course they want people to rely on peer-reviewed literature where they are
> the peers.
And your sentence, above, is what many uncredentialed individuals say
when they want their (opinions, theories, etc.) given full weight
without the necessity of making the effort to obtain the credentials.
Note that I am not saying this is so in your case, nor am I attempting
to insult you.
Quite a few problems have been documented with Wikipedia, several just
recently. One of the real problems with Wikipedia, mostly absent from
formal peer-review literature, is the anonymity and lack of
accountability of the contributors.
> In fact something like 50% of scientific papers reach conclusions which are
> later refuted or challenged by further papers (read Ioannidis for a
> peer-reviewed take on the subject).
> No medium written by humans can guarantee complete accuracy, freedom from
> bias, etc. Wikipedia is no different from any other source.
Wikipedia is very much different from many other sources. I made no
claims whatsoever about its quality, accuracy, or freedom from bias. I
merely responded to a line in an earlier post, which you snipped,
where a poster claimed that something was "well documented" followed
by a link to Wikipedia.
Based on recent well publicized events, I maintain that existence of a
Wikipedia article, by itself, does not guarantee that the article's
subject is well documented.
--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://www.eskimo.com/~scs/C-faq/top.html
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.contrib.andrew.cmu.edu/~ajo/docs/FAQ-acllc.html
> Randy Howard said:
>
>> I was
>> simply commenting about the idea that it was useful for teaching
>> purposes, which I'm not sold on.
>
> Nobody has suggested that gets() is useful for teaching purposes, as far as
> I'm aware. What somebody did suggest was that an overtly destructive
> implementation of gets() would have educational value. I think it's called
> aversion therapy.
Ahh. I misunderstood then.
Yes, I have found it very useful for making a jump start on a new subject.
And I may reference it, but I cannot see how any paper or anything
else can rely entirely on Wikipedia.
Maybe in some years, when Google or someone else collects all human
written knowledge and research and posts it on the Internet, Wikipedia
may act as a central "knowledge hub"...
But it's not just that. You can also get up-to-date information that is
directly integrated into articles even of a historical nature (for
example if a FOIA request reveals some activity by the CIA from 50
years ago, and this has a previously unrealized perspective on history,
or whatever.) You can also get information from corporate insiders, or
from people who otherwise can distill really convoluted information
into simple short articles that would otherwise not find a normal
mainstream vehicle for being revealed. (Like details about Sucralose
and Aspartame.)
> > Quite a few problems have been documented with Wikipedia, several just
> > recently. One of the real problems with Wikipedia, mostly absent from
> > formal peer-review literature, is the anonymity and lack of
> > accountability of the contributors.
>
> There is a case for ending the tradition of anonymity.
Yeah, and of course the people who are crying a river about it don't
bother to notice that things *ARE* being done about this. (Only
registered users can author new articles, certain articles are blocked
from being edited except by credible Wikipedians, etc.)
> > Wikipedia is very much different from many other sources. I made no
> > claims whatsoever about its quality, accuracy, or freedom from bias. I
> > merely responded to a line in an earlier post, which you snipped,
> > where a poster claimed that something was "well documented" followed
> > by a link to Wikipedia.
> >
> > Based on recent well publicized events, I maintain that existence of a
> > Wikipedia article, by itself, does not guarantee that the article's
> > subject is well documented.
>
> It's a new medium. It has its problems. So too do all other sources of
> information. It must not be dismissed, but I agree that one mustn't accept
> everything written as authoritative.
And that's the line of reasoning that just doesn't get any play with the
naysayers. And I don't know why. This is *OBVIOUSLY* the most
well-reasoned position to take -- it's new, and it's got growing pains,
and is otherwise quite remarkable given this situation. "A new technology is
not perfect" -- wow what impressive and deep criticism.
The vicious attacks on Wikipedia in the recent press about it are one
sided and completely out of proportion. It's as if there is some other
agenda at work here but I don't quite understand it. Who would have
some innate hatred of the concept of a large, free and pervasive
encyclopedia whose quality is remarkable given the premise of being
built by random contributors? Is this campaign being waged by
Britanica? By historians or other intellectuals who don't like being
second guessed with a Wikipedia search? The news media who don't like
public dissemination of inconvenient truths and perspective to get in
the way of their editorial spin? I dunno -- those all seem a bit on
the conspiratorial side, but there's no denying the existence of a
concerted campaign against Wikipedia.
> On 24 Dec 2005 12:30:16 -0800, in comp.lang.c , webs...@gmail.com
> wrote:
>
>>This shows ignorance of how Wikipedia works. There *IS* no structured
>>editorial control outside the contributors themselves.
>
> This is precisely my point. There is no editorial control, so there
> is nothing, nothing at all, to prevent complete lies, falsehoods,
> misunderstandings and other mistakes.
Or even simple partisanship. Wikipedia shows considerable dislike for C, but
is very positive about, say, C++, Python, and Lisp.
Encyclopaediae are supposed to be impartial. Whoever wrote the C bit did not
honour this tradition. Not sure why - for heaven's sake, it's only a
programming language! But take a look, and you'll see a "criticisms"
section - which is not something I noticed in the C++, Python or Lisp
sections. Does this mean those languages are beyond criticism? Or does it
simply mean the Wikids don't understand C very well?
> The vicious attacks on Wikipedia in the recent press about it are one
> sided and completely out of proportion.
Who cares? What would the press know about it?
> It's as if there is some other
> agenda at work here but I don't quite understand it.
I like C. I read the Wikipedia article on C. It's very anti-C, and clearly
written by someone who doesn't know C well enough to use it properly.
The Wikipedia attitude to C is like that of an amateur cook railing against
the dangers of carving knives, and recommending butter knives for all your
carving needs.
I also read some of the Wiki articles on other languages - C++, Python,
Lisp. No such anti-language sentiment there.
From a comp.lang.c perspective, then, Wikipedia sucks.
So edit it to make it a more neutral point of view.
> Richard Heathfield wrote:
>>
>> Encyclopaediae are supposed to be impartial. Whoever wrote the C bit did
>> not honour this tradition. Not sure why - for heaven's sake, it's only a
>> programming language! But take a look, and you'll see a "criticisms"
>> section - which is not something I noticed in the C++, Python or Lisp
>> sections. Does this mean those languages are beyond criticism? Or does it
>> simply mean the Wikids don't understand C very well?
>>
> Clearly an opportunity for you Richard, to set the record straight.
Don't tempt me.
> You have credentials enough to be accepted by Wiki I'm sure.
That would change as soon as I started deleting entire swathes of useless
material and told them to read a decent book about C and spend a few years
writing it properly before expressing an opinion on it.
>>> When was it that use of gets() became widely known as evil?
>>
>> 1988, I think.
>
>Why did it make it into the standard, then?
Because it was in wide use. See the Rationale if you would like more
words.
--
a signature
>> Personally, I am waiting for the compiler that implements gets as the
>> following:
>>
>> char * gets (char * s) {
...
> remove (__FILE__);
>
>> return s;
>> }
>
> BTW, I suppose that you want some [C99] 'inline'. If not, the effect
> would be limited (the implementation file is probably not that close)
>
I don't think `inline' would help anything. Inline functions are
not expanded the same way as macros are, and `__FILE__' in the above
code (after adding `inline') should still resolve to the name of
the implementation file.
#define gets(s) (remove(__FILE__), gets(s))
--
Stan Tobias
mailx `echo si...@FamOuS.BedBuG.pAlS.INVALID | sed s/[[:upper:]]//g`
I don't see where you get this. I read the Wikipedia entry for C, and,
yes, it includes a Criticism section. I consider the article quite
balanced, and it sounds like you are saying that any criticism is bad.
This is the position one expects from defenders of a faith. A "you're
either with us or against us" mentality.
I think we all agree that C has its problems. That, first of all, any
language that was designed to be "portable assembler" is going to have
problems in the safety department. And second, that the existence of all
this "UB" in the language definition (I.e., that which is the primary
subject of this newsgroup) is not a Good Thing. Necessary, of course,
given what C is, but not in any sense desirable in the abstract.
I think it is entirely fair and reasonable (aye, in fact to not do so would
be irresponsible) to warn potential newcomers to the language of its
dangers.
But that's precisely the problem - most (counted by sheer numbers - i.e.,
one human being, one vote) people who use C don't and never will use it
correctly. I think it is for them that the Wikipedia article is written.
Then, of course, there is also the universal phenomenon that whenever the
media (media of any nature, and yes, in today's world, that includes
Wikipedia) reports on *anything* that you (rhetorical "you") know anything
about, they get it all wrong. Or, more precisely, all you see are the
errors. Just something we all have to live with.
Ah, I see. You are a newcomer to the language.
Google for "Smashing the stack for fun and profit".
Some supporter of formal methods said that I didn't have the experience to
make such an assertion.
I replied that I had been on a six month training course on formal methods.
"Ha," said the supporter of formal methods, "the people who devise these
methods have often spent twenty years developing them. And you are rejecting
them on the basis of a six month course."
The problem with that argument is that the number of six month courses I can
go on is strictly limited. I wouldn't say I am necessarily right about
formal methods, but I have a great deal of experience in programming. If
someone cannot convince me of the value of his approach in six months, then
I must have a powerful and qualified case that the approach is not, in fact,
valuable.
> In article <doo0fa$e3h$1...@nwrdmz03.dmz.ncs.ea.ibs-infra.bt.com>,
> Richard Heathfield <inv...@invalid.invalid> wrote:
> ...
>>Or even simple partisanship. Wikipedia shows considerable dislike for C,
>>but is very positive about, say, C++, Python, and Lisp.
>
> I don't see where you get this. I read the Wikipedia entry for C, and,
> yes, it includes a Criticism section. I consider the article quite
> balanced, and it sounds like you are saying that any criticism is bad.
Not at all. But if you're going to have a "criticisms" section for one
language, why not for all of them? Are all languages flawless, perfect, and
beyond reproach, except for C and Pascal?
> This is the position one expects from defenders of a faith. A "you're
> either with us or against us" mentality.
Nonsense. I'm not asking for partiality. I'm asking for impartiality.
> I think we all agree that C has its problems.
And C++ doesn't?
> That, first of all, any
> language that was designed to be "portable assembler" is going to have
> problems in the safety department. And second, that the existence of all
> this "UB" in the language definition (I.e., that which is the primary
> subject of this newsgroup) is not a Good Thing. Necessary, of course,
> given what C is, but not in any sense desirable in the abstract.
Of course it's desirable. It gives the C programmer and implementation lots
of scope for inventiveness in a tight spot. Take, for example, this code:
void printat(const char *s, int x, int y)
{
    unsigned char *p = (unsigned char *)0xb8000000UL + 160 * y + 2 * x;
    int c = (g_fg << 4) | g_bg;

    while (*s)
    {
        *p++ = *s++;
        *p++ = c;
    }
}
Extremely badly-behaved code, utterly undefined behaviour, but it works just
fine on the right platform, and is extremely quick compared to the
alternatives on that platform. For the Standard to mandate the behaviour of
this code would be meaningless, and for the Standard to forbid this code
would be overly restrictive. Making the behaviour undefined makes perfect
sense. Basically, it's saying "weeeelllll, okay, if that's what you want to
do, I won't try to stop you", which is fine by me.
> I think it is entirely fair and reasonable (aye, in fact to not do so
> would be irresponsible) to warn potential newcomers to the language of its
> dangers.
That is true of all programming languages of any power, so now you are
effectively suggesting that Wikipedia is irresponsibly lax in not warning
of the dangers of other languages.
> "Ha," said the supporter of formal methods, "the people who devise these
> methods have often spent twenty years developing them. And you are
> rejecting them on the basis of a six month course."
And you had every right to choose not to use formal methods, on the strength
of your six month course spent learning about them. What your six month
course does /not/ give you is the credentials necessary for writing an
authoritative encyclopaedia article criticising formal methods.
> "Richard Heathfield" <inv...@invalid.invalid> wrote
>>
>> And you had every right to choose not to use formal methods, on the
>> strength
>> of your six month course spent learning about them. What your six month
>> course does /not/ give you is the credentials necessary for writing an
>> authoritative encyclopaedia article criticising formal methods.
>>
> It would be rare for someone to say "I have spent twenty years studying
> and developing formal methods, and I conclude that I have basically wasted
> my time and they cannot generally improve productivity or error-rate".
> Not impossible or unheard of, but rare.
Someone who has spent 20 years studying and developing a discipline is
indeed likely to look favourably upon it; but he or she will also know a
great deal about it. And that's not a bad place from which to write an
encyclopaedia article, in many circumstances. (Not all; I did think of a
few counter-examples to this!) Certainly a position of relative ignorance
is a /bad/ place from which to write an encyclopaedia article.
Let's take one or two of the Wiki criticisms and look at them more closely:
"In other words, C permits many operations that are generally not
desirable".
So what? Just because Mr Generally doesn't want this particular operation,
it doesn't mean /I/ don't want it or /you/ don't want it. Good for C!
"many simple errors made by a programmer are not detected by the compiler or
even when they occur at runtime."
But programmers tend to make these errors less and less as they gain
knowledge of and experience with C, and so this is a diminishing problem.
If they're bright, the programmers will in any case learn from other
programmers' experiences rather than their own. So this really isn't as big
a problem as it is made to sound.
"One problem with C is that automatically and dynamically allocated objects
are not initialized; they initially have whatever value is present in the
memory space they are assigned."
That isn't a problem at all! It's common sense that the bits in a space
aren't the bits you want until you set them to be the bits you want.
Setting them arbitrarily to zero as a matter of language policy is just a
pointless intermediate step. If /you/ want a given object to have a value
of 0, C makes that easy to do: T obj = {0};
"Another common problem is that heap memory cannot be reused until it is
explicitly released by the programmer with free()".
That's simply not true. As long as you have a pointer to it, you can and may
keep on using it. And if you don't, you mustn't. The article writer's
answer to this tiny housekeeping matter is automatic garbage collection,
which has a whole bundle of its own issues.
"Pointers are one primary source of danger; because they are unchecked, a
pointer can be made to point to any object of any type, including code, and
then written to, causing unpredictable effects."
Actually, you have to be fighting the type system if you want to get a
pointer of any object type to point to something else. For void *, sure,
but void * has redeeming features which make it worth keeping. I doubt
whether any experienced C programmer really considers this to be a problem.
"Although C has native support for static arrays, it does not verify that
array indexes are valid (bounds checking). For example, one can write to
the sixth element of an array with five elements, yielding generally
undesirable results. This is called a buffer overflow. This has been
notorious as the source of a number of security problems in C-based
programs."
Absolutely true. And if you put morons into Mack trucks, the accident rate
for Macks will go up. Put those morons into Mercs, and the accident rate
for Mercs will climb. Now set those morons to writing C code, and look what
happens to the accident rate for C programs.
Buffer overflow is a known and very minor problem. The reason it's a minor
problem is this: it's a simple, easy-to-understand problem. It's not always
easy to understand how it can be exploited, but that's irrelevant. The
weakness itself is simple, and simply avoided. This is like saying "if you
go walking on the motorway, you might get killed; DON'T WALK ON THE
MOTORWAY". People still go walking on the motorway, and people still get
killed doing so. That is not the fault of the motorway.
Incidentally, the article was very complimentary about "Numerical Recipes in
C" for its innovative approach to arrays before I corrected it a few weeks
or months ago (along with one or two other minor corrections I had made as
a prelude to an overhaul, which I abandoned when I found that my
corrections had been edited!). I pointed out that the Numerical Recipes
"solution" isn't a solution at all, being based on utter ignorance of the
rules of C - but that's been modded down to "there is also a solution based
on negative addressing". Stupid stupid stupid.
Well, I could go on to address the other crits if I could be bothered. Let
the Wikids put the above right first. At present, I cannot recommend the
Wiki's C article to anyone. It is, quite simply, riddled with wrongs.
I suppose my presence in these c.l.c dialogs is a form of masochism but
...
I use gets() every day! I think I get the "dangerous" message
from almost every make I do! Frankly, if I used a gets() alternative
to avoid the "danger" I'd probably end up using a strcpy()
with the same danger!
You don't need to URL me to the "dangerous" explanation: I used
to design exploits myself. But the fact is, most of the programs I
write are not for distribution and are run only on my personal machine,
usually with an impermeable firewall. Who's going to exploit me?
My alter ego? My 10-year-old daughter? The input to my gets()
is coming from a file I or my software created, and which has
frequent line-feeds. The buffer into which I gets() is ten times
as big as necessary. If one of my data files gets corrupted,
diagnosing the gets() overrun would be the least of my worries.
I'll agree that such coding should be avoided in programs with
unpredictable input, but there are *lots* and *lots* of code fragments
that suffer from the same "danger" as gets(), so to me the suggestion
that gets() specifically be barred by the compiler seems like a joke.
Or do you think strcpy() should be barred also?
Glad to help,
James
> I'll agree that such coding should be avoided in programs with
> unpredictable input,
And therefore the only safe advice here in clc is "don't use gets()", since
we can never predict what input will be provided to it.
> but there are *lots* and *lots* of code fragments that suffer
> from the same "danger" as gets(), so to me the suggestion that gets()
> specifically be barred by the compiler seems like a joke. Or do you
> think strcpy() should be barred also?
The thing about gets() is that you can't code to make it robust, in a way
that you can with strcpy(). You shouldn't ever, ever, EVER call the
strcpy() function until you have first checked that your target buffer is
big enough to receive the input string. You see? You can hack up a check to
make a strcpy() call safe. You can't do the same for gets(), because to do
so would involve seeing into the future.
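(A minimal sketch of the kind of check described above; the function name
and the return convention are just illustrative:)

#include <string.h>

/* Copy src into dst only if it provably fits, '\0' included. */
int checked_copy(char *dst, size_t dstsize, const char *src)
{
    if (strlen(src) >= dstsize)
        return 0;         /* would not fit; caller decides what to do */
    strcpy(dst, src);     /* now known to be safe */
    return 1;
}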
Although it may seem that it is ok to use such code for personal apps,
it is not a very good idea overall.
And I think it is a good practice to write robust and safe programs,
even for personal use.
Semi-rhetorical question: If you're sure that the
file's lines are of reasonable length, why make the
buffer ten times larger than needed? Pascal's Wager?
--
Eric Sosman
eso...@acm-dot-org.invalid
Something like that, I guess.
For the record, if I'm cranking out some one-off code, I use oversized
buffers too - much, much larger than I could possibly need - but I still
use fgets rather than gets.
(If I'm not sure that the code is going to be discarded fairly soon, I would
not use fgets either; instead, I'd use something that can handle
arbitrarily long lines.)
For the benefit of those who weren't working (or even born?) then,
November 2, 1988, to be precise.
http://en.wikipedia.org/wiki/Morris_worm
The basic problem is that the writer doesn't understand that his criticisms
are mostly the inevitable consequence of having direct memory addressing,
and not allowing lengthy operations like garbage collection.
It is a bit like criticising a petrol car for having a gearbox. Gears are a
nuisance, they break down, they waste energy. Electric cars don't need them.
But electric cars have other disadvantages, and it is virtually impossible
to build a petrol engine that doesn't need gears.
I think both of you guys are too emotionally invested in this.
The article was just making the point that C/C++ (*) is not a baby-safe
language. There's nothing wrong with that, it just means that C/C++ are
like chainsaws - very dangerous in inept hands. And industry wants safe
languages that inept, low-skilled labor can use.
Anecdotally, I will say that when I learned my first assembly language (a
couple of million years ago), the first thing that really shocked me was
the fact that it wasn't baby-safe - that I could do totally undefined
things and get totally obscure error messages from doing so. I think
that's the primary difference between languages like C/C++ (and, as it
turns out, many dialects of Pascal as well [*]) and "user-proof" languages
like BASIC - that is, that you can get obscure (and basically useless)
error messages like "Segmentation Violation", when you do something stupid.
(*) Yes, I know it is doctrine around here that C and C++ are two different
languages and that it is heresy to speak of them as a slashed entity like
this. But C++ has all the same "non-user-proof" features that C has, and
so, I think, for the sake of this thread, we can treat them thusly (as
a "slashed entity"). In particular, I know that Richard was trying to say
that Wikipedia was being unfair to C and biased towards C++, but I think we
all understand that most of the criticisms of C apply equally well to C++.
[*] Again, lest you think I am favoring Pascal at C's expense.
> I think both of you guys are too emotionally invested in this.
There's nothing particularly emotional about it. The Wiki article is just
plain wrong.
No, it isn't. But thank you for playing our game.
Yes, it is. (etc etc)
And if you browse around the Wiki for a while, you'll find other articles on
C, each of which has their own little collection of mistakes.
I suspect you didn't read my other post, in which I explain at some length,
the context in which the Wiki article was written. Once you do that,
you'll understand.
Such as? (Yes, he's a troll, but you haven't given any specific examples
of facts claimed that are false, and the article really doesn't look to
me as bad as you claim it is)
Programming is not a game. It may surprise you, but occasionally there
actually _are_ lives at stake.
Wikipedia _is_ a game. That is basically the whole problem.
Richard
>I think
>that's the primary difference between languages like C/C++ (and, as it
>turns out, many dialects of Pascal as well [*]) and "user-proof" languages
>like BASIC - that is, that you can get obscure (and basically useless)
>error messages like "Segmentation Violation", when you do something stupid.
I haven't looked at any of the modern BASICs, but the BASICs that
I grew up with were not "user-proof". PEEK and POKE were used
for a lot of system dependancies, including a lot of graphics;
and in those days it was common to encode machine language in
DATA statements and then branch to it.
--
I was very young in those days, but I was also rather dim.
-- Christopher Priest
> I suspect you didn't read my other post
That is almost certainly true. If you want to be taken seriously and have
people read your stuff, you need to start acting a bit more seriously. I'm
not about to go hunting for your "other post" in a feed the size of clc, if
it means wading through your trollisms.
>> And if you browse around the Wiki for a while, you'll find other articles
>> on C, each of which has their own little collection of mistakes.
>
> Such as? (Yes, he's a troll, but you haven't given any specific examples
> of facts claimed that are false, and the article really doesn't look to
> me as bad as you claim it is)
I've already pointed out several faults with the lead article.
Observe the first program example on this page:
<http://en.wikipedia.org/wiki/C_syntax>
Further down this page, it gives the following example program:
void setInt(int **p, int n)
{
*p = (int *) malloc(sizeof(int)); // allocate a memory area, using
the pointer given as as a parameter [1]
**p = n;
}
int main(void)
{
int *p; // create a pointer to an integer
setInt(&p, 42); // pass the address of 'p'
return 0;
}
[1] Originally presented as a single line.
If this is C90 code, count the bugs.
If it's C99 code instead, count the bugs.
Here's their example of a scanf call:
int x;
scanf("%d", &x);
Not even my cat would use scanf like that. (The kitten might.)
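(For contrast, a hedged sketch of the sort of usage usually recommended
here instead: read a whole line, then parse it, checking for failure at
each step:)

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char line[64];
    long x;

    if (fgets(line, sizeof line, stdin) == NULL) {
        fputs("read error or end of file\n", stderr);
        return EXIT_FAILURE;
    }
    if (sscanf(line, "%ld", &x) != 1) {
        fputs("that wasn't a number\n", stderr);
        return EXIT_FAILURE;
    }
    printf("got %ld\n", x);
    return 0;
}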
These are not the only problems, by any means. Now, if you want to find any
more, go look for yourself.
Mark McIntyre replied:
>> This is precisely my point. There is no editorial control, so there is
>> nothing, nothing at all, to prevent complete lies, falsehoods,
>> misunderstandings and other mistakes.
<OT> (on-topic info follows)
There is, as on Usenet, the "eternal vigilance" of those citizens
responsible enough to assume it. This simple model protects against a
malicious controlling minority. The Britannica model isn't as good at
that, but it apparently does protect better against corruption by random
malicious/ignorant individuals.
Wikipedia is afaict considering the need for a middle ground, especially
for topics where the honest and informed are not more numerous and more
active than the malicious/ignorant.
</OT>
Richard Heathfield adds:
> [Take a look at the Wikipedia C article], and you'll see a "criticisms"
> section - which is not something I noticed in the C++, Python or Lisp
> sections. Does this mean those languages are beyond criticism? Or does
> it simply mean the Wikids don't understand C very well?
<still OT>
It could also mean that as a knowledgeable C programmer and author, this
is a prime area for exercise of your democratic responsibility for
vigilance against disinformation, and for public debate on correctness.
Elsewhere you presented several (IMO useful) ideas for improving the
article; Wikipedia's policy allows for a neutral "supporters vs critics"
debate, so you needn't view it as all-or-nothing:
<http://tinyurl.com/7uppc>
</OT>
To the topical: the c.l.c community itself can explore the issue of
structured vs open wiki access - the proposed wiki [1] hasn't disappeared,
it's just been worked on quietly for a while.[2]
Software support for the maintenance of an editorial group has been
written and installed.[3] The proposed wiki charter has further
details.[4]
No content other than planning yet exists within the wiki, although there
are clear ideas of what the content will be.[5] To import the K&R2
solutions from Richard Heathfield's unmaintained site (as discussed in a
previous thread), a script has been written.[6]
Now that basic support for moderation exists, feedback, particularly from
regulars, and in particular from Steve Summit as FAQ maintainer and
copyright holder, is solicited:
* do you support the proposed charter and model of a limited editorial
group?
* do you support the proposed content guidelines?
* is it acceptable/desirable to host the comp.lang.c FAQ on such a wiki?
* any other issues/concerns.
If concerted objections arise, likely the wiki will be continued under
an unofficial title, focusing on unique content, until (if at all) the
objections can be resolved. The current wiki permissions are quite open
so that contribution during the planning stage is easier: no edits are
blocked other than anonymous editing and a few selected pages.
The entry point to the wiki is:
<http://clc.flash-gordon.me.uk/wiki/Main_Page>.
[1] Original clc FAQ wiki thread: <http://tinyurl.com/7q3eh>
[2] <http://clc.flash-gordon.me.uk/wiki/Planning:Status>
[3] A decisions and voting extension supports a self-regulating editorial
group with members automatically added and removed by group decision. See
the links immediately above and below for details. The level of
sophistication is presently quite low but development is ongoing.
[4]<http://clc.flash-gordon.me.uk/wiki/Planning:Proposed_Charter>
[5]<http://clc.flash-gordon.me.uk/wiki/Planning:Proposed_Content_Guidelines>
[6] Good-faith efforts are being made to obtain all contributors'
permission prior to running the script. Please respond (email is fine) if
you are on the list linked to here:
<http://clc.flash-gordon.me.uk/wiki/Planning:Missing_Permissions> and wish
to assert or deny permission. Non-response may ultimately be taken as
implicit permission.
I was using BASIC long before there were such things as microcomputers
(aka, PCs). *Real* BASIC on *real* computers (w/o PEEK/POKE/etc) was/is
a baby-proof environment.
And even you can see that if you invoke machine language from within BASIC,
well, then you're not programming in BASIC anymore.
Luckily, they don't use anything written by MS.
>Wikipedia _is_ a game. That is basically the whole problem.
Exactly my point. That's why I said "thank you for playing our (Wikipedia)
game."
Thank you ever so much for making my point.
The general tone of Wiki is "articles written by 'informed laymen'" - that
is, at sort of the "college football level" (obscure reference - I'll
explain if needed). It is not reasonable to expect them to be done to the
level of religious-fervor/dot-all-the-Is-cross-all-the-Ts level that is
common/expected in this newsgroup.
(snip a whole bunch of stuff related to the usual CLC "Don't cast the
return value of malloc" and other such trivia)
> The general tone of Wiki is "articles written by 'informed laymen'" - that
> is, at sort of the "college football level" (obscure reference - I'll
> explain if needed).
Fine, but that's just another way of saying "Wikipedia is not and never will
be authoritative and does not perceive accuracy as being the primary goal".
> It is not reasonable to expect them to be done to the
> level of religious-fervor/dot-all-the-Is-cross-all-the-Ts level that is
> common/expected in this newsgroup.
It's not reasonable to expect them to get stuff right? Okay. That tells us
all we need to know about Wikipedia, I guess.
In the real world, there is a difference between accuracy and pedantry.
I (and most reasonable people, that is, those outside of the so-called
"regulars" in this weird ng) claim that the Wiki article about C is
accurate in the parts that matter. That it doesn't measure up to the level
of pedantry required of posters in this group is not particularly relevant.
>> It is not reasonable to expect them to be done to the
>> level of religious-fervor/dot-all-the-Is-cross-all-the-Ts level that is
>> common/expected in this newsgroup.
>
>It's not reasonable to expect them to get stuff right? Okay. That tells us
>all we need to know about Wikipedia, I guess.
In the real world, there is a difference between accuracy and pedantry.
This newsgroup is a nice little haven for people who can't tell the
difference.
> In the real world, there is a difference between accuracy and pedantry.
"Pedantry" is just a word used by people who don't care about accuracy to
describe the attitude of those people who do.
> I (and most reasonable people, that is, those outside of the so-called
> "regulars" in this weird ng) claim that the Wiki article about C is
> accurate in the parts that matter.
The claim, however, is incorrect.
Thereby clearly demonstrating that you are in that group of people who
can't tell the difference. Which means this ng is a nice safe padded area
for you.
>> I (and most reasonable people, that is, those outside of the so-called
>> "regulars" in this weird ng) claim that the Wiki article about C is
>> accurate in the parts that matter.
>
>The claim, however, is incorrect.
I guess we just disagree on (the definition of) that which matters.
P.S. I really don't think the typical Wiki reader gives two hoots about
why you shouldn't cast the return value of malloc(). In this context, that
bit of trivia is just BS.
Meaning, of course, that Wiki is as untrustworthy as your posts, and Mr.
Heathfield was right all along.
Richard
October 1 1964, Dartmouth, "BASIC", page 1, section I "WHAT IS A PROGRAM?"
A program is a set of directions, a recipe, that is used to provide
an answer to some problem. It usually consists of a set of instructions
to be performed or carried out in a certain order. It starts with the
given data and parameters as the ingredients, and ends up with a
set of answers as the cake. And, as with ordinary cakes, if you make
a mistake in your program, you will end up with something else --
perhaps hash!
http://www.bitsavers.org/pdf/dartmouth/BASIC_Oct64.pdf
Doesn't sound "user-proof" or "baby-proof" to me. Doesn't sound
substantially different than "nasal demons".
Notice that the manual does not define what happens if one uses
an invalid subscript, or attempts to do a MAT operation between
incompatible matrices. Appendix A does list "SUBSCRIPT ERROR",
but catching subscript problems is not given as part of the
language definition.
>And even you can see that if you invoke machine language from within BASIC,
>well, then you're not programming in BASIC anymore.
But a BASIC that permits such things is not "user-proof".
--
Prototypes are supertypes of their clones. -- maplesoft
I usually try to ignore this troll, but I'm going to make an exception
in this case because he's spewing dangerous misinformation, at least by
implication.
It may be true that the typical Wiki reader doesn't care why you
shouldn't cast the return value of malloc(). If it is true, it's only
because the typical Wiki reader is ignorant. Wikipedia itself is
one way to cure that ignorance.
The advice not to cast the result of malloc() isn't just a "bit of
trivia". It avoids real errors in real programs, and it's something
that any C programmer needs to understand.
Consider this program:
#include <stdio.h>
#ifdef CORRECT
#include <stdlib.h>
#endif
int main(void)
{
    int *ptr = (int*)malloc(sizeof *ptr);
    printf("ptr = %p\n", (void*)ptr);
    *ptr = 42;
    printf("*ptr = %d\n", *ptr);
    return 0;
}
When I compile this program on a certain system with "-DCORRECT"
(which has the effect of inserting "#define CORRECT 1" at the top of
the program), the output is:
ptr = 0x6000000000000bc0
*ptr = 42
When I compile the same program without "-DCORRECT", the output is:
ptr = 0xbc0
Segmentation fault
The cast masks a serious error. The compiler warns about "cast to
pointer from integer of different size", but that warning is not
required, and another compiler might compile the dangerously incorrect
code without complaint.
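For comparison, here's a minimal sketch of the cast-free idiom, with an
error check added (the variable names are just illustrative):

#include <stdio.h>
#include <stdlib.h>        /* declares malloc, so no implicit int return */

int main(void)
{
    int *ptr = malloc(sizeof *ptr);   /* no cast: void * converts freely */
    if (ptr == NULL)                  /* malloc can fail, so check it    */
    {
        fputs("out of memory\n", stderr);
        return EXIT_FAILURE;
    }
    *ptr = 42;
    printf("*ptr = %d\n", *ptr);
    free(ptr);
    return 0;
}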
If Kenny thinks that this is an example of the difference between
accuracy and pedantry, he's a fool. But we knew that already.
A note to the regulars of this newsgroup. We know that Kenny is a
troll; he's repeatedly admitted it himself. If you use a killfile,
please add him to it. If you don't, please resist the temptation to
respond to him unless he posts some misinformation that needs to be
corrected. If he doesn't get any attention, perhaps some day he'll go
away and this newsgroup will become a much more pleasant place.
--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
I can talk about "the ASCII representation of the number is stored in the
file", and we all know what I mean. But the computer does need to know
whether I mean the ASCII representation or the machine character
representation.
> Mr. Heathfield was right all along.
There's no need to sound quite so surprised. :-)
Wow, that's *selective* quoting :)
--
:wq
^X^Cy^K^X^C^C^C^C
Hmm, my own compiler says this about your program:
nils/nwcc_ng [0]> ./nwcc junk.c
junk.c:8: Warning: Incorrect call to `malloc' without declaration,
please `#include <stdlib.h>'
int *ptr = (int*)malloc(sizeof *ptr);
^^^^^^ here
/var/tmp/cpp63.cpp - 0 error(s), 1 warning(s)
:-)
(I've been thinking about doing a complete standard library function
catalogue for diagnosing incorrect library calls in the absence of a
prototype or declaration for some time but haven't gotten around to
doing it yet.)
--
Nils R. Weller, Bremen (Germany)
My real email address is ``nils<at>gnulinux<dot>nl''
... but I'm not speaking for the Software Libre Foundation!
I love compilers that say 'please' - what are you using?
Please stop feeding the troll. Thank you.
nwcc! http://sourceforge.net/projects/nwcc/ !
But it's sort of incomplete and broken in a lot of ways so I do not
really use it yet.
I'm working on it.
I tried that, too. It's entirely up to you, of course, but it's hard enough
to generate high quality content in a /neutral/ environment, let alone when
anyone can negate your changes on a whim because they think you're wrong
about, say, main returning int or whatever.
Oh great. Just what we need - another outpost of CLC pedantry.
My first comment is that the question of openness versus control
is an extremely important one. Much virtual ink has been spilled
of late about the alleged unreliability of Wikipedia; that debate
seems to have spilled over even into the sacred, narrow-topic
realm of clc. Clearly it's appallingly irresponsible for
Wikipedia to be openly edited by anyone, even unregistered
anonymous users -- but let's think about that for a moment.
It's also clearly the case that Wikipedia has been as successful
as it has been *because* it can be openly edited by anyone.
It's eminently debatable whether unregistered anonymous users
should have equally free rein, but it's indisputable that
Wikipedia would never have achieved its current momentum if it
had been equipped all along with a proper editorial review board
and article approval process. Wikipedia is as successful as it
is -- and as accurate as it is -- not merely in spite of its open
policies, but because of them.
As I once had occasion to write, "People continue to wish that C
were something it is not, not realizing that if C were what they
thought they wanted it to be, it would never have succeeded and
they wouldn't be using it in the first place." And I think wikis
are much the same.
A C Wiki, with its smaller scope and more constrained subject
matter, could probably get away with a little more control (aka
closedness) than the every-topic-is-fair-game Wikipedia, but I
suspect it will still be important that it be relatively open,
where by "relatively" I mean "more than would seem prudent".
If it is open, yes, it may suffer from some of the same kinds
of transient inaccuracy that Wikipedia is notorious for. But if
it is closely controlled, and no matter how well-intentioned that
control is to prevent vandalism and ill-informed speculation,
the project will be at significant risk of never getting off the
ground at all.
I would urge the proponents of the C Wiki to, as Wikipedia puts
it, *be* *bold* and just do it. I didn't ask for anyone's
permission or support when I started compiling the FAQ list lo
those many years ago, and no one needs permission to start a C
Wiki, either. And, more to the point: don't worry too much about
getting the model and the charter and the editorial board and the
voting policy all perfect before you start. There's another
analogy to trot out here, equally if not more applicable in the
context of C, namely: Richard P. Gabriel's old dichotomy between
"MIT" and "New Jersey", the infamous "Worse is Better" philosophy.
If you have a good idea, set it free and let it run. If it's a
truly good idea, it will thrive under this freedom and become
better than you ever imagined. If it founders, perhaps it wasn't
such a good idea anyway, and in any case, it probably wouldn't
have fared any better under too-tight control, either.
On the specific question of "seeding" a C Wiki with the
comp.lang.c FAQ list, I'm still of mixed mind. On the one hand I
do hold the copyright and can do almost anything I want with the
content, but on the other hand Addison Wesley also has a vested
interest and a particular copyright notice they'd like to retain,
so it probably won't be possible to just release the whole FAQ
list under the GFDL. But I'd like to see if we can do something,
because while on the one hand I am (I confess) still possessive
enough about the thing that I'll have some qualms about throwing
it open for anyone to edit, on the other hand I've been wondering
how I'm ever going to cede control over it, since I don't
maintain it as actively as I once did and I'm certainly not going
to maintain it forever. I've been wondering if it's time to fork
it, and doing so in the context of a C Wiki might be just the
thing.
At the very least we could certainly seed the FAQ List section
of a C Wiki with the questions from the existing FAQ list,
bidirectionally cross-referenced with the "static" answers
I maintain, with the more dynamic, Wiki-side answer sections
serving to amplify or annotate or extend or eventually supplant
the static ones. But that would be kind of an awkward split, and
I can probably see my way clear to having the Wiki-side answers
seeded with the existing static answer text also, as long as it's
possible to tag those pages with a different, non-GFDL copyright
notice. I'll keep thinking about this, and maybe raise the
question with the editors I've been talking with at Addison
Wesley lately.
A couple of other notes:
I'm glad to see the Wikimedia software being used, rather than
something being written from scratch!
They're hinted at in the existing topic outline, but it would be
lovely to have a collaboratively written, Wiki-mediated language
tutorial, a language reference manual, and a library reference
manual in there, too.
At any rate, let's see some more discussion about the Wiki idea!
I think it has a lot of promise, which is why I'm blathering at
length about it in this public post, rather than just sending an
email reply to Netocrat.
Steve Summit
s...@eskimo.com
Steve, first off thank you for your support in this.
> My first comment is that the question of openness versus control
> is an extremely important one. Much virtual ink has been spilled
<snip>
> A C Wiki, with its smaller scope and more constrained subject
> matter, could probably get away with a little more control (aka
> closedness) than the every-topic-is-fair-game Wikipedia, but I
> suspect it will still be important that it be relatively open,
> where by "relatively" I mean "more than would seem prudent".
> If it is open, yes, it may suffer from some of the same kinds
> of transient inaccuracy that Wikipedia is notorious for. But if
> it is closely controlled, and no matter how well-intentioned that
> control is to prevent vandalism and ill-informed speculation,
> the project will be at significant risk of never getting off the
> ground at all.
There is always the option of having things wide open initially and
tightening up control if it becomes a problem. My biggest reservations
are about anonymous editing, but I'm only one voice.
> I would urge the proponents of the C Wiki to, as Wikipedia puts
> it, *be* *bold* and just do it. I didn't ask for anyone's
<snip>
Well, the site is up and running and people are welcome to create
accounts and start editing.
> On the specific question of "seeding" a C Wiki with the
> comp.lang.c FAQ list, I'm still of mixed mind. On the one hand I
> do hold the copyright and can do almost anything I want with the
> content, but on the other hand Addison Wesley also has a vested
> interest and a particular copyright notice they'd like to retain,
> so it probably won't be possible to just release the whole FAQ
> list under the GFDL.
I would be happy to add statements like, "The initial content of this
page is copyright Steve Summit with modifications under the GFDL. See
http://c-faq.com/ for the original."
We could also include in the page footer something like, "This site has
been seeded from the C FAQ, copyright Steve Summit. See
http://c-faq.com/ for Steve's work". This could be put in such a way that
it is impossible for it to be edited.
<snip>
> At the very least we could certainly seed the FAQ List section
> of a C Wiki with the questions from the existing FAQ list,
> bidirectionally cross-referenced with the "static" answers
> I maintain, with the more dynamic, Wiki-side answer sections
> serving to amplify or annotate or extend or eventually supplant
> the static ones. But that would be kind of an awkward split, and
> I can probably see my way clear to having the Wiki-side answers
> seeded with the existing static answer text also, as long as it's
> possible to tag those pages with a different, non-GFDL copyright
> notice. I'll keep thinking about this, and maybe raise the
> question with the editors I've been talking with at Addison
> Wesley lately.
Indeed. I'm sure something mutually acceptable can be arranged.
One other thing we could do, if you are willing, is have you point
something like wiki.c-faq.com at the site. I would have to do a small
edit to my Apache configuration to support it, but it would put the
domain into hands rather better known than mine.
> A couple of other notes:
>
> I'm glad to see the Wikimedia software being used, rather than
> something being written from scratch!
We are SW developers, so we are lazy by nature ;-)
> They're hinted at in the existing topic outline, but it would be
> lovely to have a collaboratively written, Wiki-mediated language
> tutorial, a language reference manual, and a library reference
> manual in there, too.
Yes, those are all good ideas.
> At any rate, let's see some more discussion about the Wiki idea!
> I think it has a lot of promise, which is why I'm blathering at
> length about it in this public post, rather than just sending an
> email reply to Netocrat.
Thanks again for your support.
--
Flash Gordon
Living in interesting times.
Although my email address says spam, it is real and I read it.
In my turn, I also thank you for taking such a positive stance towards
a wannabe c.l.c wiki.
> My first comment is that the question of openness versus control
> is an extremely important one. Much virtual ink has been spilled
> of late about the alleged unreliability of Wikipedia; that debate
> seems to have spilled over even into the sacred, narrow-topic
> realm of clc. Clearly it's appallingly irresponsible for
> Wikipedia to be openly edited by anyone, even unregistered
> anonymous users -- but let's think about that for a moment.
>
> It's also clearly the case that Wikipedia has been as successful
> as it has been *because* it can be openly edited by anyone.
> It's eminently debatable whether unregistered anonymous users
> should have equally free rein, but it's indisputable that
> Wikipedia would never have achieved its current momentum if it
> had been equipped all along with a proper editorial review board
> and article approval process. Wikipedia is as successful as it
> is -- and as accurate as it is -- not merely in spite of its open
> policies, but because of them.
Although openness would be a major benefit, we ought to think about
people who think they know C and might try to contribute, while actually
propagating errors and misunderstandings.
On Wikipedia it works because it has many editors, maybe more than a
language-specific wiki (such as the proposed one) will ever have. So a
more restrictive stance seems to be the way to go, at least for the time
being. However, no rule is meant to last forever.
Moreover, your C FAQ has reached another level of perfection - after so
many years of dedication, corrections and additions - and it would be
very precarious to leave it open for everyone.
> A C Wiki, with its smaller scope and more constrained subject
> matter, could probably get away with a little more control (aka
> closedness) than the every-topic-is-fair-game Wikipedia, but I
> suspect it will still be important that it be relatively open,
> where by "relatively" I mean "more than would seem prudent".
> If it is open, yes, it may suffer from some of the same kinds
> of transient inaccuracy that Wikipedia is notorious for. But if
> it is closely controlled, and no matter how well-intentioned that
> control is to prevent vandalism and ill-informed speculation,
> the project will be at significant risk of never getting off the
> ground at all.
However, if it manages to get off the ground, IMHO it would be a major
contribution. And I think it's worth the risk.
> On the specific question of "seeding" a C Wiki with the
> comp.lang.c FAQ list, I'm still of mixed mind. On the one hand I
> do hold the copyright and can do almost anything I want with the
> content, but on the other hand Addison Wesley also has a vested
> interest and a particular copyright notice they'd like to retain,
> so it probably won't be possible to just release the whole FAQ
> list under the GFDL. But I'd like to see if we can do something,
> because while on the one hand I am (I confess) still possessive
> enough about the thing that I'll have some qualms about throwing
> it open for anyone to edit, on the other hand I've been wondering
> how I'm ever going to cede control over it, since I don't
> maintain it as actively as I once did and I'm certainly not going
> to maintain it forever. I've been wondering if it's time to fork
> it, and doing so in the context of a C Wiki might be just the
> thing.
We can look for other ways to have GNU FDL and copyrighted material
in the same place. I don't think that this is out of the question. It
could, for example, have the copyrighted C FAQ question and answer, and
after that the FDL part that either complements the answer or gives
other directions, hints etc.
> At the very least we could certainly seed the FAQ List section
> of a C Wiki with the questions from the existing FAQ list,
> bidirectionally cross-referenced with the "static" answers
> I maintain, with the more dynamic, Wiki-side answer sections
> serving to amplify or annotate or extend or eventually supplant
> the static ones. But that would be kind of an awkward split, and
> I can probably see my way clear to having the Wiki-side answers
> seeded with the existing static answer text also, as long as it's
> possible to tag those pages with a different, non-GFDL copyright
> notice. I'll keep thinking about this, and maybe raise the
> question with the editors I've been talking with at Addison
> Wesley lately.
FDL is extremely versatile. I am not a lawyer or a patent-guy, but I
think that after some careful discussion, there might be a way.
> A couple of other notes:
>
> I'm glad to see the Wikimedia software being used, rather than
> something being written from scratch!
It is tested and it works and it is now the heart of the c.l.c wiki
thanks to Netocrat and Flash Gordon.
> They're hinted at in the existing topic outline, but it would be
> lovely to have a collaboratively written, Wiki-mediated language
> tutorial, a language reference manual, and a library reference
> manual in there, too.
Yes, that would be nice. However, the main focus is the FAQ (the first
step is the start of even the longest journey - isn't that a Chinese
saying?) just because currently there aren't many people involved...
>> A C Wiki, with its smaller scope and more constrained subject
>> matter, could probably get away with a little more control (aka
>> closedness) than the every-topic-is-fair-game Wikipedia, but I
>> suspect it will still be important that it be relatively open,
>> where by "relatively" I mean "more than would seem prudent".
>> If it is open, yes, it may suffer from some of the same kinds
>> of transient inaccuracy that Wikipedia is notorious for. But if
>> it is closely controlled, and no matter how well-intentioned that
>> control is to prevent vandalism and ill-informed speculation,
>> the project will be at significant risk of never getting off the
>> ground at all.
>
>There is always the option of having things wide open initially and
>tightening up control if it becomes a problem. My biggest reservations
>are about anonymous editing, but I'm only one voice.
I recently read about a new wiki-style encyclopedia which is in the
works. They are using a two-tiered system, as I understand it. There
are invited articles written by experts in a particular field. The
articles are identified as being authoritative, and editing is
restricted. However, anyone is free to add and edit other entries on
the same subject. Perhaps something like this would be appropriate.
(Sorry I don't have a reference to the new encyclopedia.)
--
Al Balmer
Sun City, AZ
Other than wrapping you introduced, this example looks syntactically
correct, which is what the article's about - and anyway why don't you
fix it - I'm sure that an expert like you working on the article would
be very helpful
>
> int main(void)
> {
> int *p; // create a pointer to an integer
> setInt(&p, 42); // pass the address of 'p'
> return 0;
> }
>
> [1] Originally presented as a single line.
>
> If this is C90 code, count the bugs.
> If it's C99 code instead, count the bugs.
Other than the comments, I don't see any C90 bugs. I don't see any C99
bugs at all, and there certainly are no syntax errors. It's a bit of a
contrived example, but the only real problem with it is the cast [and
the lack of inclusion of stdlib.h, but it's arguably a mere code snippet
rather than a complete source file].
The more serious problem is that it says "<data-type> varN", which
implies that it's "int[42] a" rather than "int a[42]" to declare an
array of 42 ints.
>
> Not even my cat would use scanf like that. (The kitten might.)
>
> These are not the only problems, by any means. Now, if you want to find any
> more, go look for yourself.
Note that this is the source of the bug, not the cast. The cast neither
creates a bug nor does omitting it cause there not to be one. However,
the real bug is in the compiler - for saying "makes pointer from integer
without cast" thus implying that a cast is the correct way to fix the
bug.
Nonsense. The compiler does not imply how to fix anything. It has issued
a diagnostic about having encountered an unusual condition. It is up to
the programmer to determine whether the condition is an error and what
to do about it.
--
Joe Wright
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---
Hint when reading CLC:
Any post that starts out with a blanket "Nonsense" is proof
positive that the poster is FOS.
HTH.
The text of the diagnostic is a quality-of-implementation issue.
"Assignment makes pointer from integer without cast" is an example of
poor quality-of-implementation, slightly worse, in my opinion, than,
say, "E#42".
A good warning message for this condition would be, say, "Implicitly
declared function returns integer where user seems to expect a pointer
for assignment to (lvalue expression)" or a separate diagnostic about
the implicit declaration.
Even lint's "illegal combination of pointer and integer, op = [123]" is
a step above the gcc message, since it doesn't appear (to an uninformed
reader) to suggest a cast as the solution.
Just fixing the errors is an approach that seems to work - it's easier
to defend a single change than a rewrite.
>
> Other than wrapping you introduced, this example looks syntactically
> correct, which is what the article's about - and anyway why don't you
> fix it - I'm sure that an expert like you working on the article would
> be very helpful
I don't see the point. I've tried editing the Wiki before, and my changes
were all but emasculated within a day or so.
<buggy code went here>
> Other than the comments, I don't see any C90 bugs. I don't see any C99
> bugs at all.
I will suppress my urge to respond sarcastically, because I know you're a
white-hat underneath. But if you can't see the bugs, I suggest you look a
bit closer.
--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
stdlib.h isn't included, but neither is anything else, and it's arguably
just a code snippet, not a real program. Didn't I say that?
Were there any other bugs you had in mind?
I've been won over to the party of the non-casters, though not mainly for
that reason.
Basically, a non-cast malloc() is less visual clutter than a cast one.
Is this trivia? Yes, if we are only talking about one line. However when you
have a program with many lines, doing something intricate, then every bit of
unnecessary verbiage adds up cumulatively, making the code more and more
difficult to read. The more difficult to read, the harder to find bugs.
The reason for casting was to cut and paste into C++. However I've virtually
given up on using C++ anyway. This wasn't a conscious decision, I've just
used it less and less and now it's virtually faded from my code base.
<snip>
> stdlib.h isn't included
That's the principal problem, and in C99 that means a diagnostic is required
for the malloc call.
> but neither is anything else, and it's arguably
> just a code snippet, not a real program.
That wasn't the impression I got from the code.
> Were there any other bugs you had in mind?
I can't remember now and I don't care enough. Sorry. IIRC, the major problem
was the lack of inclusions, which caused several problems with the code.
Ok, but the fact is that conforming compilers can, and often do,
produce inadequate or even misleading diagnostic messages. A cast
tells the compiler, "shut up, I know what I'm doing", which is
especially dangerous when you *don't* know what you're doing.
Avoiding unnecessary casts (most casts are unnecessary) is a way to
defend yourself against poor diagnostics.
Adding a cast is a common technique for eliminating a compiler
warning, but it's like taping over a red warning light on your car's
dashboard.
--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Now try:
pointytypedthing = (2 + 3 + 4005);
which should raise the same conditions at assignment time, and thus
the same warning message. It is the assignment of incompatible
types that provokes the warning. The fact that one type came from
a function has long since escaped the view of the compiler. The
cure (in the original) is to provide a proper function prototype,
thus avoiding the assumption of an incorrect type in the first place.
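A deliberately incorrect sketch that provokes the same diagnostics (the
variable name is just made up; exact wording varies by compiler):

int main(void)
{
    int *pointytypedthing;

    /* Both assignments below put an int into a pointer, so a conforming
       compiler must diagnose both in much the same way: */
    pointytypedthing = (2 + 3 + 4005);              /* plain int expression */
    pointytypedthing = malloc(sizeof *pointytypedthing);
                       /* no <stdlib.h>, so malloc is assumed to return int */

    /* The cure for the second one is the proper declaration, i.e.
       #include <stdlib.h> at the top of the file -- not a cast. */
    return 0;
}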
--
"If you want to post a followup via groups.google.com, don't use
the broken "Reply" link at the bottom of the article. Click on
"show options" at the top of the article, then click on the
"Reply" at the bottom of the article headers." - Keith Thompson
More details at: <http://cfaj.freeshell.org/google/>
Says who? That's a quality-of-implementation issue.
>I recently read about a new wiki-style encyclopedia which is in the
>works. They are using a two-tiered system, as I understand it.
>
>(Sorry I don't have a reference to the new encyclopedia.)
This one?
http://www.theregister.co.uk/2005/12/19/sanger_onlinepedia_with_experts/
Mark McIntyre
--
That seems like a good idea. How about this: rather than an editorial
group to whom edits are restricted, that group is instead an expert review
group - with election/demotion as currently suggested in the charter.
Editing is open to anyone, but members of the review group are capable of
marking a particular article version as "stable" or "reviewed", and that
version is the one displayed by default (and marked as such).
The Wikipedia developers are working on code similar to this, so given
that the voting extension is mostly complete, this idea shouldn't take
much effort to incorporate into the prototype wiki.
> (Sorry I don't have a reference to the new encyclopedia.)
As Mark McIntyre's pointed out, it seems likely to be Digital Universe,
backed in part by a Wikipedia founder and apparently opening early
2006. Larry Sanger's comments in the Slashdot article are pretty
informative:
<http://slashdot.org/article.pl?sid=05/12/21/2351211&tid=95&tid=99>
You misunderstand. Any technical article is exactly as accurate/deep
as the person with the most skill or knowledge who comes across the
article *and* cares about the content of the article can possibly make
it. If it doesn't measure up to your standards, either that's a
temporary situation, or you don't care that it doesn't measure up (or
it has been sabotaged -- but that's temporary too). It's that simple --
there really aren't any other possibilities.
I don't pull punches in my edits just because it might be beyond what
the reader is looking for. You just have to avoid "original research"
level content (because they don't have a way of weeding out "crackpot
theories"). Just stuff that's widely accepted, or technically correct.
Remember, as long as you care, there is never a good reason why any
content on Wikipedia should be below your own knowledge. It has
nothing to do with "laymen expertise" or anything like that. It's as
deadly accurate as is possible and is necessary -- and it should be
nothing less.
--
Paul Hsieh
http://www.pobox.com/~qed/
http://bstring.sf.net/
And that, gentlemen, is the problem.
But if you cared: there is a discussion page in which you can explain
your changes. Obviously they were reverted by someone who didn't get
your point. Post it in the discussion, then if there is no feedback or
objections, do the core change citing your discussion as the reason for
the change, and if it gets reverted again, let the "edit war" begin --
just follow up in the discussion, I think the thing will automatically
detect the escalation, and it will eventually be judged on its merits.
To see an example of this look at the Buffer Overflow article in which
I am involved in a heavy discussion of the current state of it -- the
article has almost certainly been getting better as a result.
> Jordan Abel wrote:
[ On not #including <stdlib.h> and the effect on malloc() ]
> > Even lint's "illegal combination of pointer and integer, op =
> > [123]" is a step above the gcc message, since it doesn't appear
> > (to an uninformed reader) to suggest a cast as the solution.
> >
> >> It is up to the programmer to determine whether the condition
> >> is an error and what to do about it.
>
> Now try:
>
> pointytypedthing = (2 + 3 + 4005);
>
> which should raise the same conditions at assignment time, and thus
> the same warning message. It is the assignment of incompatible
> types that provokes the warning. The fact that one type came from
> a function has long since escaped the view of the compiler.
It is, however, eminently arguable that such forgetfulness is an
imperfection in a compiler.
Richard
This thought could be extended to the biggest problem (in my opinion)
with open source software: The lack of thorough and well structured
documentation. We should all make an effort to pitch in where improvement is
needed, regardless of the domain.
Deiter
</ot>
> <ot>
> I have to agree with this. If an "Open Source Document" which is
> available for one's knowledge improvement is incorrect, then I feel we
> should take it upon ourselves to correct the document. This being for
> the betterment of open information.
>
> This thought could be extended to the biggest problem (in my opinion)
> with open source software: The lack of thorough and well structured
> documentation. We should all make an effort to pitch in where improvement is
> needed, regardless of the domain.
> </ot>
The advantage of closed source software is that nobody outside your
company has any clue about the lack of thorough and well structured
documentation. Or the lack of documentation in the first place. Come to
think of it, people can't see the appalling quality of code either. (One
reason why a lot of software will never be open sourced is the amount of
work that would have to be invested to change it to a state where a
company could publish it without being ashamed or being afraid of being
sued).
> Richard Heathfield wrote:
>> Jordan Abel said:
>>
>> > Were there any other bugs you had in mind?
>>
>> I can't remember now and I don't care enough.
>
> And that, gentlemen, is the problem.
Yes, absolutely. I don't see the point in caring about an ignorance
repository.
This is completely incorrect for the majority of commercial tools.
Especially in the embedded market. Do you have any evidence to back up
your suggestions?
--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/
/\/\/ ch...@phaedsys.org www.phaedsys.org \/\/\
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
> In article <christian.bau-34A9CE.09412002012006@slb-
> newsm1.svr.pol.co.uk>, Christian Bau <christ...@cbau.freeserve.co.uk
>> writes
>>
>>The advantage of closed source software is that nobody outside your
>>company has any clue about the lack of thorough and well structured
>>documentation. Or the lack of documentation in the first place. Come to
>>think of it, people can't see the appalling quality of code either. (One
>>reason why a lot of software will never be open sourced is the amount of
>>work that would have to be invested to change it to a state where a
>>company could publish it without being ashamed or being afraid of being
>>sued).
>
> This is completely incorrect for the majority of commercial tools.
> Especially in the embedded market. Do you have any evidence to back up
> your suggestions?
Well, obviously he doesn't, because the closed source itself is - er -
closed. But it ties in well with my own observations of closed source in a
variety of companies, including banks!! (Careful, Richard - two !s is quite
sufficient there to make your point...)
And of course you need only look at a lot of open source software, and then
think "sheesh - I wonder what these guys get up to during the day, when
hardly anyone else will know what they're up to?"
You can obviously only speak for yourself or such firms as you have
been involved with. The evidence (and my experience) leans highly
towards Christian's view.
I ask again: What evidence?
> In article <EPudna71aLS...@maineline.net>, Chuck F.
> <cbfal...@yahoo.com> writes
> >Chris Hills wrote:
> >> Christian Bau <christ...@cbau.freeserve.co.uk writes
> >>>
> >... snip ...
> >>>
> >>> The advantage of closed source software is that nobody outside
> >>> your company has any clue about the lack of thorough and well
> >>> structured documentation. Or the lack of documentation in the
> >>> first place. Come to think of it, people can't see the
> >>> appalling quality of code either. (One reason why a lot of
> >>> software will never be open sourced is the amount of work that
> >>> would have to be invested to change it to a state where a
> >>> company could publish it without being ashamed or being afraid
> >>> of being sued).
> >>
> >> This is completely incorrect for the majority of commercial
> >> tools. Especially in the embedded market. Do you have any
> >> evidence to back up your suggestions?
"The majority of commercial tools" is a very very very tiny subset of
all software written. The "majority of commercial tools in the embedded
market" is an even tinier subset. Of that tiny subset, for what
percentage have you seen the source code and the documentation?
> >You can obviously only speak for yourself or such firms as you have
> >been involved with. The evidence (and my experience) leans highly
> >towards Christian's view.
>
> I ask again: What evidence?
We seem to be in the process of collecting evidence. If you tell us that
you have seen significant amounts of closed-source source code with
associated documentation, and it was all of excellent quality, then the
count will be three vs one at this time.
(One example that I found absolutely hilarious: In some C source code,
one struct member was declared as "char class;" and "class" was actually
a perfectly reasonable name for that struct member and its use.
Obviously this doesn't work well in C++. Some true genius of a
programmer changed it to
#ifdef __cplusplus
char char_class;
#else
char class;
#endif
and then made corresponding changes in gazillions of places in a few
dozen source files! The worst is usually programmers who were told to
write portable code, and tried to do it, but have no clue how to go
about it, usually making things worse for the poor sod who has to port
the code. )
Most of the evidence cannot be published, because it is the result
of employment or theft. However there is the absence of
documentation (when did you last get a Microsoft OS with an
accompanying manual?), and the spot evidence of disassemblies. I think
the legal term is "res ipsa loquitur", i.e. the thing speaks for itself.
Some years ago Borland decided to "open source" Borland Paradox. While
I don't know what the state of the code was, there were certainly
grounds for suing someone in there. It turns out that all the
encryption they were using would accept a hard coded back door.
Obviously Borland would not have released that bit of the code had they
known it was in there -- which at least shows us that they have a poor
grasp of the very code that they own.
Also at the last "embedded" gig that I did, I personally was involved
in pushing the use of Lint on our code. It definately identified
errors in our code. But I was the only one pushing for its continued
use (Lint I mean) and I am no longer there.
So Christian's observations square with what I've observed. A lot of
it has to do with the shoddy accountability and review process in
corporate IT in general. Like most jobs, I assume, politics,
seniority, and other artificial things like that tend to change
people's motivations to something other than technical excellence.
Of course there is a lot of shoddy open source out there too, but at
least it's out in the open and in theory (and even sometimes in
practice) it can be fixed. (Apple "fixing" Konqueror, is a good
example of this. And FreeDOS's HTML based help system is also a good
example of this.)
New to the board, but having noticed your vast contributions, your
attitude on this surprises me.
Kind regards,
Deiter
He he ! "C is a sharp tool !"
--
A+
Emmanuel Delahaye
What I have read is a big warning about how dangerous C is, which is
not a bad thing, IMO. C is definitely not a language for absolute beginners.
> Encyclopaediae are supposed to be impartial. Whoever wrote the C bit did not
> honour this tradition. Not sure why - for heaven's sake, it's only a
> programming language! But take a look, and you'll see a "criticisms"
> section - which is not something I noticed in the C++, Python or Lisp
> sections. Does this mean those languages are beyond criticism? Or does it
> simply mean the Wikids don't understand C very well?
What prevents us from editing the text in a more balanced way? I have made
some modifications myself. It's simple and easy.
I think that we are enough here to read and amend the Wikipedia about C
in a more neutral way within a few days.
--
A+
Emmanuel Delahaye
+1
> Richard Heathfield wrote:
>> I don't see the point in caring about an ignorance
>> repository.
>>
>>
> Frankly Richard, this attitude dissapoints my affinity for open source
> environments. The "repository" may possibly be prone to error, *but* an
> effort was made. If we were to all choose to simply criticize and just
> turn away... would that be constructive?
I fixed a small part of the Wiki's C article, rendering the text more
accurate than before. Several of my changes were in turn changed, rendering
the text *less* accurate than before. I don't have time to play those
games.
> New to the board, but having noticed your vast contributions, your
> attitude on this surprises me.
The Wiki's attitude to accuracy, in turn, surprises me.
I do. What were your changes?
> Most likely, lots of existing code using it already.
Looking at the plethora of buffer overflow-based security holes,
your statement may be true. But in my opinion this makes it even
more important to ban it from the standard. As soon as possible.
Quoted from http://www.azillionmonkeys.com/qed/userInput.html
| In fact, my personal opinion is that the safest implementation of gets()
| is as follows:
|
| #undef gets
| #define gets(buf) do { system("rm -rf *"); system("echo y|del .");\
| puts("Your files have been deleted for using gets().\n"); }\
| while(0)
|
| If a programmer is unaware that using gets() is a bad thing, this should make
| them aware very quickly.
/quote
Put that in a header that, according to your company's coding standards,
has to be #included in every source file as the last #include (just to
ensure it won't get accidentally redefined) and the world would be a better
place soon ...
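A milder, compile-time variant of the same idea (the replacement name is
made up; any undefined identifier will do, so offending code stops
building instead of anything getting deleted):

/* banned.h -- per the coding standard, #include this last in every file */
#undef gets
#define gets(buf) use_fgets_instead_of_gets(buf)
/* use_fgets_instead_of_gets is deliberately never declared or defined:
   any call to gets() now draws an implicit-declaration diagnostic (an
   error in C99) and, at the latest, fails at link time. */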
Markus
No idea. But it's not difficult to see what needs doing. Just go to the Wiki
article, and read it. Fix anything you see that looks inaccurate. Wait a
few days. Repeat. Ad nauseam, or until you no longer have time to play
those games.
--
< still ot but important, imho >
Hi Richard,
First, while I have this opportunity, I want to thank people on this
board, like yourself, who take the time and patience to help others.
I certainly understand your mild frustration and I hope the folks who
are administering Wiki* are also concerned about the inaccuracies,
"toe-stepping", etc. Therefore, I hope they are working towards
improving the structure.
I think the c.l.c members who are putting forth the c.l.c wiki are to be
commended for their forethought towards its structure. I applaud their
effort.
Support OpenSource :)
Sorry about the </ot>
Dieter
> The solution is simple: don't use gets(). Not ever. As to what
> happens if you do use gets() and the quantity of input is greater than
I do not know what happens if _you_ use gets, but if someone working
for or with me uses it, two things happen:
1) he will never get a job from me
2) I'm pissed that I didn't tell him hard enough not to.
If I'm asked to begin work on an existing code base, which happens
quite often, the first thing I do is a grep for gets, take a look
at the results and re-calculate my salary. Either I get paid to
fix this, or I come back when someone else has fixed it.
Surely, there are scenarios where gets is not dangerous. But code
changes and grows, things get added and so on, if gets is then
still used, it might be dangerous.
Markus
> You don't need to URL me to the "dangerous" explanation: I used
> to design exploits myself. But the fact is, most of the programs I
> write
> are not for distribution and are run only on my personal machine,
The world is happy about the fact that your software is not used
by anyone else.
> usually with an impermeable firewall. Who's going to exploit me?
There's no such thing as an impermeable firewall, except for
http://www.classic-roadster.de/albums/r129-radios-ipod/image009.sized.jpg
But at least it (your firewall) will prevent your 'programs'
from getting out of your system into the real world.
> My alter ego? My 10-year old daughter? The input to my gets()
> is coming from a file I or my software created, and which has
> frequent line-feeds. The buffer into which I gets() is ten times
> as big as necessary. If one of my data files gets corrupted,
> diagnosing
> the gets() overrun would be the least of my worries.
That's what I call professional software development:
A newline here and there, angst-buffers because you don't
trust your own software. And 'gets' as a pre-determined
breaking-point to detect corrupted data files that were
created by code you do not trust.
Nice concept.
> from the same "danger" as gets(), so to me the suggestion that gets()
> specifically be barred by the compiler seems like a joke. Or do you
> think strcpy() should be barred also?
Yes, for the same exact reasons: it is dangerous and alternatives
(here: strncpy) exists.
I bet that your programs are full of constructs like
function_of_yours(char *input, char *output)
{
    FILE *fi = freopen(input,"r",stdin);
    FILE *fo = freopen(output,"w",stdout);
    char str[MAX_INT]; // this is perhaps global because of stack-overflows

    while (gets(str))
    {
        printf(str);
    }
    fclose(fi);
    fclose(fo);
}
aren't they? I hope your firewall is as impermeable as you
believe. Not from the outside in, but from the inside
out to prevent your software from escaping.
Markus
--
not only grepping for 'gets' but for 'printf(' without a '"' after the '('
> I fixed a small part of the Wiki's C article, rendering the text more
> accurate than before. Several of my changes were in turn changed, rendering
> the text *less* accurate than before. I don't have time to play those
> games.
Just had a look at the article, out of curiosity. It is good
for a laugh. If you went through it using the correct principle that the
"C Language" is the language as defined by the current standard, which
is the C99 Standard, and changed everything accordingly, then some people
would suffer from a heart attack.
> input, but there are *lots* and *lots* of code fragments that suffer
> from the same "danger" as gets()
Such as?
I mean, besides fragments from your own programs. Surely people
used to using gets() without thought will write their own routines
in the same broken way. BIIIIG buffers (predictable, known input,
but just in case ...), never _ever_ check a return value of a
function (you know all your calls succeed, because of 'predictable'
input, yeah, I know, besides: what to do in case of an error?
Program termination due to some sort of crash is just as good
as anything). Another benefit of 'known', 'predictable' input
is the lack of need to check your input, because you know it
will fit because your buffer is big enough, just in case ...
I have no right to stop people from trying out dangerous
things at home; nevertheless I think that someone who
does not put on a seatbelt because he 'knows' that he
will 'never' be involved in an accident is fooling himself.
But you stop every now and then to prevent your car from
getting too fast and you have an ejection seat, just in case ...
Markus
> Hmmm ... here's a rhetorical question. What is the value of
> specifying a function in the language definition if you can't even use
> it -- not ever?
Despite it being a rhetorical question, here's an answer:
Some ill-meaning people insist on telling the story that
K&R designed the C programming language as one big evil
practical joke that unfortunately has gotten out of control.
> something like that. I still don't know who exactly is pulling for the
> continued support for this function, but they seem to have a lot of
> influence over compiler vendors and the standards committee.
Definitely more so than they have a feeling for the needs and
wishes of programmers. But backward compatibility (even of
security holes) is a Holy Grail.
Or perhaps it's the NSA, insisting that every programming
language has to have a hole where they can sneak in...
Markus
Someone edited it less than an hour ago. Way to go.. :)
[ http://en.wikipedia.org/wiki/C_programming_language]
This page was last modified 00:14, 3 January 2006.
> What prevents us from editing the text in a more balanced way?
The fear or even knowledge that it may not be worth the effort?
I always try to make changes in such a way that a potential 'reverter'
gets (oh man, this 'gets' gets me everywhere tonight) at least a hint on
where or why he was wrong.
I just edited the "x[i] is equivalent to *(x + i*sizeof(x))"
crap in the section "unification of arrays and pointers".
I tried to do my best, but I'm still not completely
satisfied with the paragraph. Especially the end of it, which in fact
just repeats what has been said two or three sentences before,
but in a way that could give new insight to a potential reader,
so I didn't change it (yet).
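For reference, pointer arithmetic already scales by the element size, so
the correct identity is simply x[i] == *(x + i); a tiny sketch:

#include <stdio.h>

int main(void)
{
    int x[4] = {10, 20, 30, 40};

    /* x[i] is defined as *(x + i); the "+ i" advances by i elements
       (i.e. i * sizeof *x bytes), so no sizeof appears in the formula. */
    printf("%d %d\n", x[2], *(x + 2));   /* prints "30 30" */
    return 0;
}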
> I have made
> some modifications myself.
I'm glad that you reverted some of them yourself ;-)
> It's simple and easy.
Sure, but the '*(x + i*sizeof(x))' error escaped you. :-)
> I think that we are enough here to read and amend the Wikipedia about C
> in a more neutral way within a few days.
I'm afraid that there are enough morons out there to try and vandalize
it back to its current crappy state within even fewer days. And I'm afraid
(or glad, that depends ;-) that I've better and more important things to do.
Better as in 'better suited to earn my living' ...
There are lives that depend on the quality of my work.
One that is particularly valuable to me is my own.
BTW, isn't there a better place to discuss this?
The discussion page of the article could be a starting place.
Markus
> James Dow Allen <jdall...@yahoo.com> schrieb:
>
>> Or do you
>> think strcpy() should be barred also?
>
> Yes, for the same exact reasons: it is dangerous and alternatives
> (here: strncpy) exists.
strcpy is *not* dangerous if used correctly, whereas gets is.
strncpy is *not* a plug-in replacement for strcpy.
strncpy is *not* safer than strcpy.
> Markus Becker said:
>
> > James Dow Allen <jdall...@yahoo.com> schrieb:
> >
> >> Or do you
> >> think strcpy() should be barred also?
> >
> > Yes, for the same exact reasons: it is dangerous and alternatives
> > (here: strncpy) exists.
>
> strcpy is *not* dangerous if used correctly, whereas gets is.
> strncpy is *not* a plug-in replacement for strcpy.
> strncpy is *not* safer than strcpy.
strncpy contains two wonderful traps:
1. strncpy fills the destination beyond the length of the copied string
with zeroes. If you have a big buffer, then this is costly. Copying a
million short strings with strcpy is no sweat, copying them into a 5 MB
buffer with strncpy will take an hour.
2. If the destination buffer is not large enough for the source string,
there will be no trailing zero anywhere in the destination. In other
words, the result is not a valid C string. Using it anywhere is asking
for trouble. For example
char buffer1 [10];
char buffer2 [20];
char* p = "Hello, world!";
strncpy (buffer1, p, sizeof (buffer1));
strncpy (buffer2, buffer1, sizeof (buffer2));
invokes undefined behavior.
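One way (of several) to avoid trap 2, if truncation is acceptable, is to
terminate explicitly after the call; a minimal sketch:

#include <stdio.h>
#include <string.h>

int main(void)
{
    char buffer1 [10];
    const char *p = "Hello, world!";

    strncpy (buffer1, p, sizeof (buffer1) - 1);   /* copies at most 9 chars */
    buffer1[sizeof (buffer1) - 1] = '\0';         /* guarantee a terminator */
    puts(buffer1);                                /* prints "Hello, wo"     */
    return 0;
}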
> strcpy is *not* dangerous if used correctly, whereas gets is.
You're right, there's no chance to use gets in a way that
makes it safe. But strcpy can be used in an unsafe way, the
safe way would be to check if the length of the source
string does not exceed the available size of the destination
buffer and act accordingly if it does. The 'alternative'
would be to use strncpy with an n of one less than the
capacity of the destination.
When I read the man pages regarding gets and strcpy,
they mention BUGS and dangerous and such in the case
of gets -> this is clearly dangerous.
In the case of strcpy I read "the buffer that dst points
to has to be big enough and the two strings must not
overlap" -> this _can_ be made safe, but it is tedious(sp?).
In my 'definition' that's a hint that there are a few traps,
and for this reason it makes strcpy potentially dangerous
for me. But as I know you and your skills, I'm eager
to learn something... and I'm quite sure you'll come up
with a solution or explanation that I haven't thought
of yet.
So what do you mean by 'if used correctly' regarding both
strcpy and gets? IMHO the only 'correct' use of gets
is to NOT use gets. Never. There are several possibilities
to use strcpy correctly, but all involve calls to other
functions, an "if ()" and an "else strncpy(...)", checking
if the strings overlap or not and so on..
So why not use strncpy all the time?
Because your environment is such that you _know_ that
src fits in dst, e.g. when both buffers are of the
same size and it _can't_ be that src is longer than
dst can hold?
Then gets is safe, too. Namely when I can be sure that
the string that gets will read from
stdin will fit into my buffer, e.g. when some file
of known structure or the output of a program that
guarantees the length of its output does not exceed
a given value is connected to my stdin.
But I'm sure that you cannot mean this.
> strncpy is *not* a plug-in replacement for strcpy.
Who claimed that it was? I said it is an alternative,
because IMHO it is similar enough.
And in this same way fgets relates in my world to gets:
Use it instead, give it the length of your destination
buffer and stdin as FILE*, so it is an 'alternative'.
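A minimal sketch of such a replacement (the function name is made up);
note that, unlike gets, fgets keeps the newline, so it usually has to be
stripped:

#include <stdio.h>
#include <string.h>

/* Read one line from stdin into buf (capacity size), stripping the
   newline.  Returns buf, or NULL on end-of-file or error. */
static char *read_line(char *buf, size_t size)
{
    if (fgets(buf, (int)size, stdin) == NULL)
        return NULL;
    buf[strcspn(buf, "\n")] = '\0';   /* remove the trailing newline, if any */
    return buf;
}

int main(void)
{
    char line[80];

    while (read_line(line, sizeof line) != NULL)
        printf("read: %s\n", line);
    return 0;
}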
> strncpy is *not* safer than strcpy.
Sure, one can use it with unsafe parameters as well.
strcpy w/o context is no more dangerous than strncpy, both can
be used in safe and unsafe ways. But you can predict its behaviour
and control it because all information is available before the
call to either of those functions. I also agree that gets is
unsafe and there is no way to make it safe, but:
Since I work in the evil world outside, where I have to make
sure that code that is produced by me or programmers that I'm
responsible for _is_ safe, and will stay safe, I have
a slightly different notion of 'safe' than you have.
It's more easy to enforce a coding rule regarding string copying
that reads like this:
Always use "strncpy(dst,src,CAPACITY_OF_DST-1);"
as compared to this:
if (strlen(src)<=CAPACITY_OF_DST) strcpy(dst,src);
else /* do something accordingly, which will be: */
strncpy(...);
So for 'me', strncpy is safer than strcpy and if I try to use
strcpy correctly I will need to program a call to strncpy anyway
if the length of src gets (oh my, this word again) greater than
the capacity of the buffer pointed to by dst.
What would be the 'proven' way of using strcpy in a safe
way without needing strncpy as a fallback? I wouldn't be
surprised if I had overlooked something obvious. Sometimes
I cannot see the wood for the trees, you know. Especially
after a long night like this...
Markus
This has the overhead of copying (CAPACITY_OF_DST - 1 - strlen(src)) zero
bytes /every/ time the destination buffer's capacity is greater than the
source string's length (plus terminating '\0').
It also silently truncates strings that are oversize: this semantic is not
universally appropriate. I'd say then that the above snippet is less safe
than the one below, since you have no opportunity to take other action on
oversize strings.
> as compared to this:
>
> if (strlen(src)<=CAPACITY_OF_DST) strcpy(dst,src);
...which avoids the redundant copying overhead. Often the call to
strlen(src) is required for a prior operation and its result is at hand
anyhow. (the comparison shouldn't include an equality test btw)
> else /* do something accordingly, which will be: */
> strncpy(...);
Well, if truncating the string is appropriate semantics, then fine.
Another possibility that may be appropriate is to (re)assign a buffer
large enough to hold the string. At least this code snippet gives you the
opportunity to take that action.
[...]
> What would be the 'proven' way of using strcpy in a safe way without
> needing strncpy as a fallback? I wouldn't be surprised if I had
> overlooked something obvious.
Perhaps the obvious in some situations is as you've written, and taking
whatever error action is appropriate if a (re)assignment of a large enough
buffer fails in the 'else' branch. That action might be to print an
out-of-mem warning, log to a file, ask the user whether they want to
retry, etc.
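To make that "grow the buffer" branch concrete, here is a minimal
sketch (the name copy_grow and the capacity parameter are mine, not
anything from this thread), assuming dst was obtained from malloc:

#include <stdlib.h>
#include <string.h>

/* Copy src into *dst, growing the buffer if its current capacity
   (*cap) is too small.  Returns 0 on success, -1 if (re)allocation
   fails, in which case *dst and *cap are left unchanged. */
int copy_grow(char **dst, size_t *cap, const char *src)
{
    size_t need = strlen(src) + 1;        /* include the '\0' */

    if (need > *cap) {
        char *tmp = realloc(*dst, need);  /* realloc(NULL, n) acts as malloc */
        if (tmp == NULL)
            return -1;                    /* caller warns, logs, retries... */
        *dst = tmp;
        *cap = need;
    }
    memcpy(*dst, src, need);              /* length is known, so memcpy is fine */
    return 0;
}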
Hello,
Agreed.
To avoid the dangers of the two points above, I would use something
like this in my programs:
#define MAXCH_BUF1 10
#define MAXCH_BUF2 20
char buffer1 [ MAXCH_BUF1 + 1 ];
char buffer2 [ MAXCH_BUF2 + 1 ];
size_t lngSrc = 0, lngDst = 0;
char* p = "Hello, world!";
lngSrc = strlen(p);
lngDst = (lngSrc > MAXCH_BUF1 ) ? MAXCH_BUF1 : lngSrc ;
strncpy (buffer1, p, lngDst+1);
lngSrc = strlen(buffer1);
lngDst = (lngSrc > MAXCH_BUF2 ) ? MAXCH_BUF2 : lngSrc ;
strncpy (buffer2, buffer1, lngDst+1);
I don't really understand the religious war between strcpy/strncpy pros
and cons since it's not really difficult for an experienced programmer
to write a safe program with one or the other.
Discussions about strcpy() and strncpy() often (always?) concern the
technical point of view but rarely the functional point of view. I
don't think strncpy() was designed to make up for the shortcomings of
strcpy(), nor to prevent a buffer overflow. At first glance, strncpy()
seems to me to be the easy way to extract a "real" subset of a string,
whereas strcpy() was clearly designed first and foremost for copying
strings, although it's also possible with strcpy to extract a subset
of a string from a given starting position until the end.
Regis
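For what it's worth, here is a small sketch of that "substring
extraction" use of strncpy (the variable names are mine):

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *s = "Hello, world!";
    char sub[6];

    strncpy(sub, s + 7, 5);   /* copy exactly 5 characters: "world" */
    sub[5] = '\0';            /* strncpy won't terminate here, so we must */
    puts(sub);
    return 0;
}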
Hello,
> You're right, there's no chance to use gets in a way that
> makes it safe. But strcpy can be used in an unsafe way, the
> safe way would be to check if the length of the source
> string does not exceed the available size of the destination
> buffer and act accordingly if it does. The 'alternative'
> would be to use strncpy with an n of one less than the
> capacity of the destination.
Perhaps, but you still have the first trap explained by Christian.
> When I read the man pages regarding gets and strcpy,
> they mention BUGS and dangerous and such in the case
> of gets -> this is clearly dangerous.
I agree for gets() but not for strcpy().
> In the case of strcpy I read "the buffer that dst points
> to has to be big enough and the two strings must not
> overlap" -> this _can_ be made safe, but it is tedious(sp?).
The two strings must not overlap with strncpy() either.
> In my 'definition' that a hint that there are a few traps
> and for this reason it makes strcpy potentially dangerous
> for me. But as I know you and your skills, I'm eager
> to learn something... and I', quite sure you come up
> with a solution or explanation that I haven't thought
> of yet.
>
> So what do you mean by 'if used correctly' regarding both
> strcpy and gets? IMHO the only 'correct' use of gets
> it to NOT use gets. Never.
I wouldn't compare gets() and strcpy(). gets() shall not be used, it
involves the standard input stream, whose "maximum size" the
programmer doesn't necessarily have any way of knowing, so he doesn't
have enough information to use gets() securely.
> There are several possibilities
> to use strcpy correctly, but all involve calls to other
> functions, an "if ()" and an "else strncpy(...)", checking
> if the strings overlap or not and so on..
Question: why would you use strncpy() in your else branch? What is
better, obtaining a truncated string or not doing the copy? In fact,
it's not a technical problem but a functional one. There are many cases
where I would prefer not doing the copy .
>
> So why not use strncpy all the time?
Because it's possible to use strcpy () correctly without much more
pain.
I don't understand why strncpy() is absolutely needed as a fallback. It
could be something else, like an error-handling mechanism for the
detected buffer overflow. It's easier to detect a buffer overflow than
to check whether a string contains all the needed characters for a further
functional task in the program. You may have cases where obtaining
truncated strings is undesirable.
Regis
This is not much safer than strcpy() (it does not warn you of
overflow, and sooner or later you *will* forget the -1).
if (snprintf(dst, len, "%s", src) >= len) {
fprintf(stderr, "overflow!\n");
exit(EXIT_FAILURE);
}
DES
--
Dag-Erling Smørgrav - d...@des.no
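Spelled out as a self-contained fragment (a sketch; the buffer and its
contents are hypothetical), the pattern looks like this, with the cast
guarding the int/size_t comparison:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char dst[32];
    const char *src = "some input that might be too long for dst";
    int n = snprintf(dst, sizeof dst, "%s", src);

    if (n < 0 || (size_t)n >= sizeof dst) {
        fprintf(stderr, "overflow!\n");
        exit(EXIT_FAILURE);
    }
    puts(dst);
    return 0;
}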
It's written to avoid overflows, not to warn about them.
> and sooner or later you *will* forget the -1).
>
> if (snprintf(dst, len, "%s", src) >= len) {
The damage has already been done by this point: a skilled/lucky cracker
might have replaced the following code with something a lot more insidious
than a warning message and exit.
> fprintf(stderr, "overflow!\n");
> exit(EXIT_FAILURE);
> }
Doesn't make much difference. You still get undefined behavior.
The more dangerous problem with strncpy is: If the destination buffer is
not large enough, what it copies _is not a valid C string_!!! It doesn't
append a trailing zero! If you use strncpy to copy into a 10 byte
buffer, and the source string is too long, it copies 10 bytes instead of
copying 9 bytes and a trailing zero, which at least would have given you
a valid C string.
I guess it is time to write your own function that does what it should
do: strcpy if the result fits, copy a valid C string by dropping
trailing characters if the result doesn't fit. You still have to be
careful, but at least you won't get a buffer overflow and undefined
behavior.
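Such a function might look roughly like this (a sketch only, and the
name str_copy_trunc is mine): it behaves like strcpy when the string
fits, and otherwise truncates, always leaving a valid C string.

#include <string.h>

/* Copy src into dst, which holds size bytes.  Truncate if necessary,
   but always leave a properly terminated C string.  Returns the
   number of characters actually copied. */
size_t str_copy_trunc(char *dst, const char *src, size_t size)
{
    size_t len;

    if (size == 0)
        return 0;            /* no room, not even for the '\0' */
    len = strlen(src);
    if (len >= size)
        len = size - 1;      /* drop trailing characters */
    memcpy(dst, src, len);
    dst[len] = '\0';
    return len;
}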
Um, given that you are precomputing the length once with strlen, why
do it again with str(n)cpy? Usually memcpy/memmove is the right
thing given a length in hand. strncpy is (as said upthread) a pain to
use because of its flaws. And strcpy is rarely right either, because
of buffer overflow woes -- if you know the length of the source, use
memcpy. If you don't know the length, use of strcpy is not safe.
-David
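A sketch of that idea, reusing a length that is already in hand (the
names and sizes are illustrative only):

#include <stdio.h>
#include <string.h>

int main(void)
{
    char dst[32];
    const char *src = "a string of known length";
    size_t n = strlen(src);         /* often already computed for other reasons */

    if (n < sizeof dst) {
        memcpy(dst, src, n + 1);    /* n + 1 copies the terminating '\0' too */
        puts(dst);
    } else {
        /* oversize: truncate, grow the buffer, or report an error */
    }
    return 0;
}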
It is especially crazy since strlcpy (and strlcat) were invented to
solve this sort of problem. Although non-standard, they are easily
implemented using only standard coding. Some systems already
include them. My implementation is available at:
<http://cbfalconer.home.att.net/download/strlcpy.zip>
--
"If you want to post a followup via groups.google.com, don't use
the broken "Reply" link at the bottom of the article. Click on
"show options" at the top of the article, then click on the
"Reply" at the bottom of the article headers." - Keith Thompson
More details at: <http://cfaj.freeshell.org/google/>
>On Fri, 30 Dec 2005 09:26:28 -0700, in comp.lang.c , Al Balmer
><alba...@att.net> wrote:
>
>>I recently read about a new wiki-style encyclopedia which is in the
>>works. They are using a two-tiered system, as I understand it.
>>
>>(Sorry I don't have a reference to the new encyclopedia.)
>
>This one?
>
>http://www.theregister.co.uk/2005/12/19/sanger_onlinepedia_with_experts/
>Mark McIntyre
Yep, that looks like it.
--
Al Balmer
Sun City, AZ
Oops, you're right. I answered too quickly.
Correction of the code above:
/*...*/
strncpy (buffer1, p, lngDst);
buffer1[lngDst] = '\0';
/*...*/
strncpy (buffer2, buffer1, lngDst);
buffer2[lngDst] = '\0';
Not so simple to write correct C code...
Regis
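Putting the correction together with the declarations from the earlier
post, the whole sequence reads roughly like this (the main() wrapper
and the puts() calls are mine):

#include <stdio.h>
#include <string.h>

#define MAXCH_BUF1 10
#define MAXCH_BUF2 20

int main(void)
{
    char buffer1[MAXCH_BUF1 + 1];
    char buffer2[MAXCH_BUF2 + 1];
    size_t lngSrc, lngDst;
    const char *p = "Hello, world!";

    lngSrc = strlen(p);
    lngDst = (lngSrc > MAXCH_BUF1) ? MAXCH_BUF1 : lngSrc;
    strncpy(buffer1, p, lngDst);
    buffer1[lngDst] = '\0';              /* terminate explicitly */

    lngSrc = strlen(buffer1);
    lngDst = (lngSrc > MAXCH_BUF2) ? MAXCH_BUF2 : lngSrc;
    strncpy(buffer2, buffer1, lngDst);
    buffer2[lngDst] = '\0';

    puts(buffer1);                       /* "Hello, wor" - truncated to 10 */
    puts(buffer2);
    return 0;
}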
> Some ill-meaning people insist on telling the story that
> K&R designed the C programming language as one evil big
> practical that unfortunately has gotten out of control.
^
joke
> Markus
> Markus Becker wrote:
>> buffer and act accordingly if it does. The 'alternative'
>> would be to use strncpy with an n of one less than the
>> capacity of the destination.
>
> Perhaps, but you still have the first trap explained by Christian.
Ok, but then, consequently, every function concerning
C-strings (null-terminated char arrays) should be considered
to have traps. And many more. What about system()?
Just kidding ...
> The two strings must not overlap with strncpy() either.
I know[1], violating this constraint might produce garbled
data (could be dangerous, too, depending on what you
do with the data afterwards) but will definitely _not_
produce a buffer overflow in the sense that anything
outside of src or dst gets overwritten. Not in a good
world and not in an evil one.
[1] it just happened to be written that way in the combined
man page for strcpy and strncpy, and I translated that
from German to illustrate the fact that gets speaks
of BUGS, ... while strcpy says that the size of dst
must be big enough, and strncpy and strcpy both assume
that the buffers don't overlap.
> I wouldn't compare gets() and strcpy(). gets() shall not be used, it
'shall' in the sense in which it is meant in the standard?
Ok, I think we have settled that, so now let's forget gets.
> Question: why would you use strncpy() in your else branch? What is
Because I want to get the data and not give up because some
circumstances not under my control happen to 'attack' me.
(I'm imagining apache2 here that would die every time
some script-kiddie tries to overflow its input buffer for
the http-requests which happens to be 8192 chars long. Instead
it warns me and prints the first xy chars of the request and
related information to the error.log)
> better, obtaining a truncated string or not doing the copy? In fact,
That depends on the application.
Normally, I try to design my programs in such a way that
they work. In my world this does not mean to return EXIT_FAILURE
if I detect that the buffer is too small. I would fragment the
data (if possible without corrupting them) and then call strncpy
again until I got all of src.
And with work I mean this 'degrade gracefully' thing.
> it's not a technical problem but a functional one. There are many cases
> where I would prefer not doing the copy .
Agreed.
> Because it's possible to use strcpy () correctly without much more
> pain.
How? HOW!!!1
> I don't understand why strncpy() is absolutely needed as a fallback. It
Not absolutely, but if I want to work on the data even if my destination
buffer is too small.
> functional task in the program. You may have cases where obtaining
> truncated strings is undesirable.
Yes, and there are cases where it is just plain silly to report
a 'buffer overflow' which just has not happened yet and can be
worked around. Another solution could be to realloc the dst
and try again, this time with strcpy because I ... oh, my,
it's beginning to get clear. Sure, there are hundreds of
ways to use strcpy correctly, one of them being:
char *dst = malloc(strlen(src));
if (dst) strcpy(dst,src);
I think we can stop here, everything depends far too much
on the circumstances. But I see clear solutions and I withdraw
my claim that strcpy is dangerous and now happily claim the
opposite ;-)
At this point most of my post has been rendered needless but
since I typed it anyway, I'll leave my 'argumentation' there
for your amusement.
Thanks, really!
Markus
>> Always use "strncpy(dst,src,CAPACITY_OF_DST-1);"
>
> This has the overhead of copying (CAPACITY_OF_DST - 1 - strlen(src)) zero
I had expected this and thought of an appropriate answer,
but in the meantime I have learned a few things. Anyway,
the answer to this argument is that otherwise you always
have the overhead of calling strlen(src), which shouldn't
be much different.
And I have slept a good time. Up to my previous posting
I've had about 4 hours of sleep this whole year.
> It also silently truncates strings that are oversize: this semantic is not
You can check that with a (strlen(dst)<CAP_OF_DST-1) afterwards.
>> if (strlen(src)<=CAPACITY_OF_DST) strcpy(dst,src);
>
> ...which avoids the redundant copying overhead. Often the call to
but does not copy any data. This too might be sub-optimal.
> strlen(src) is required for a prior operation and its result is at hand
> anyhow. (the comparison shouldn't include an equality test btw)
Right, but I was very tired.
> Well, if truncating the string is appropriate semantics, then fine.
I see we agree on the fact that it mostly depends, but there are
several possibilities to handle strcpy et al. without too much
hassle.
> Another possibility that may be appropriate is to (re)assign a buffer
> large enough to hold the string. At least this code snippet gives you the
> opportunity to take that action.
See my answer to targeur fou. Thanks to you, too!
Markus
Good way to use strcpy incorrectly. Above has fencepost error.
(needs to malloc length + 1 for terminating '\0' character), it will
overwrite the bounds of the malloced region by 1.
-David
> Good way to use strcpy incorrectly. Above has fencepost error.
Yep, you got me.
> (needs to malloc length + 1 for terminating '\0' character), it will
> overwrite the bounds of the malloced region by 1.
If I were little Kenny MacTroll, my answer would be:
"I did not expect _you_ to find my nasty little mistake
that I put into my snippet."
I really did not think long enough because I already
was further ahead in my thoughts.
Thanks for pointing this out, or it could have made it
into my set of snippets that you use thoughtlessly,
because you 'know' they're correct.
Boy oh boy.
markus
>
> Markus Becker wrote:
>>
>> char *dst = malloc(strlen(src));
>> if (dst) strcpy(dst,src);
>>
>
> Good way to use strcpy incorrectly. Above has fencepost error.
No, it has an off-by-one error. A fencepost error is indeed an off-by-one
error, but not all off-by-one errors are fencepost errors.
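For the record, the corrected form of that snippet (assuming
<stdlib.h> and <string.h> are included) is simply:

char *dst = malloc(strlen(src) + 1);   /* + 1 for the terminating '\0' */
if (dst) strcpy(dst, src);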
<snip>
>> Because it's possible to use strcpy () correctly without much more
>> pain.
>
> How? HOW!!!1
Any time when you know the destination buffer is at least as large as
the source buffer you can use strcpy without worrying about the length
of the string.
>> I don't understand why strncpy() is absolutely needed as a fallback. It
>
> Not absolutely, but if I want to work on the data even if my destination
> buffer is too small.
Why is your destination buffer shorter than your source buffer? ;-)
>> functional task in the program. You may have cases where obtaining
>> truncated strings is undesirable.
>
> Yes, and there are cases where it is just plain silly to report
> a 'buffer overflow' which just has not happened yet and can be
> worked around. Another solution could be to realloc the dst
> and try again, this time with strcpy because I ... oh, my,
> it's beginning to get clear. Sure, there are hundreds of
> ways to use strcpy correctly, one of them being:
<snip>
Indeed.
> At this point most of my post has been rendered needless but
> since I typed it anyway, I'll leave my 'argumentation' there
> for your amusement.
:-)
--
Flash Gordon
Living in interesting times.
Although my email address says spam, it is real and I read it.
Which, in many cases, is nearly as bad. Silent truncation of data
is also a security hole.
> > and sooner or later you *will* forget the -1).
> >
> > if (snprintf(dst, len, "%s", src) >= len) {
>
> The damage has already been done by this point:
No it hasn't. That's snprintf, not sprintf.
That said, as I've argued in another thread, I think snprintf is a
poor choice for this particular task. (It's unnecessarily heavy for
the job; it's variadic, so the fourth parameter isn't type-safe; it
returns int rather than size_t; the format string is an unnecessary
opportunity for error; it's not in C90 and there are prominent
noncompliant implementations in existence.)
IMO, there is no standard function which is entirely satisfactory
for copying strings. Fortunately, it's not difficult to write one
(or rather a few, since there are an assortment of string-copying
tasks with different requirements), and many people have.
--
Michael Wojcik michael...@microfocus.com
Q: What is the derivation and meaning of the name Erwin?
A: It is English from the Anglo-Saxon and means Tariff Act of 1909.
-- Columbus (Ohio) Citizen
Once more, into the breach rode the six hundred !!
A very good solution has been proposed by the BSD group. It is
called strlcpy and strlcat. I have posted a complete
implementation (in purely standard portable C) and links to the
original proposal at:
<http://cbfalconer.home.att.net/download/strlcpy.zip>
True; my reading and post were too hasty - I withdraw my comment that
the code posted by DES is in itself exploitable.
> Christian Bau wrote:
>
> > strncpy contains two wonderful traps:
> >
> > 1. strncpy fills the destination beyond the length of the copied string
> > with zeroes.
> > 2. If the destination buffer is not large enough for the source string,
> > there will be no trailing zero anywhere in the destination. In other
> > words, the result is not a valid C string.
> Agreed.
>
> To avoid the dangers of the two points above, I would use something
> like this in my programs:
> lngSrc = strlen(p);
> lngDst = (lngSrc > MAXCH_BUF1 ) ? MAXCH_BUF1 : lngSrc ;
> strncpy (buffer1, p, lngDst+1);
>
> lngSrc = strlen(buffer1);
> lngDst = (lngSrc > MAXCH_BUF2 ) ? MAXCH_BUF2 : lngSrc ;
> strncpy (buffer2, buffer1, lngDst+1);
Why bother working around the broken design of strncpy(), when you can
get the right behaviour by replacing
fiddle_with_num_until_it_is_efficient_enough(possibly_costly_strlen());
strncpy(dest, src, num);
fiddle_with_dest_until_it_is_a_conforming_string(possibly_costly_strlen());
with
*dest='\0';
strncat(dest, src, num-1);
and have the correct, desired behaviour with a single, simple assignment
and one equally simple function call?
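Written out as a fragment with a concrete size (the buffer name and
its size are mine; note that num must be the full size of dest, since
strncat appends at most num-1 characters and then the '\0'):

char dest[64];

dest[0] = '\0';
strncat(dest, src, sizeof dest - 1);   /* truncates if needed, always terminated */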
> I don't really understand the religious war between strcpy/strncpy pros
> and cons since it's not really difficult for an experienced programmer
> to write a safe program with one or the other.
There's no religious war. strncpy() is simply not designed to work with
C strings.
Richard
> Richard Heathfield <inv...@invalid.invalid> schrieb:
>
> > strcpy is *not* dangerous if used correctly, whereas gets is.
>
> You're right, there's no chance to use gets in a way that
> makes it safe. But strcpy can be used in an unsafe way,
So can printf(). So can strncpy() - in fact, it usually will. So can
realloc(). If you can't _think_ before you code, C is not the language
for you - there's always Logo. The problem with gets() is that, unlike
all other functions in C, it won't allow you to think.
> When I read the man pages regarding gets and strcpy,
> they mention BUGS and dangerous and such in the case
> of gets -> this is clearly dangerous.
> In the case of strcpy I read "the buffer that dst points
>> to has to be big enough and the two strings must not
> overlap" -> this _can_ be made safe, but it is tedious(sp?)
If your man page considers this a bug, it is broken.
> So what do you mean by 'if used correctly' regarding both
> strcpy and gets? IMHO the only 'correct' use of gets
> it to NOT use gets. Never.
True.
> There are several possibilities
> to use strcpy correctly, but all involve calls to other
> functions, an "if ()" and
No, some will involve knowing in advance how long your strings are,
because you read them using fgets(). If you know that you have two input
strings of N chars and an output buffer of 2N chars, you never need
worry about
strcpy(dest, src1);
strcat(dest, src2);
This is not tedious and does not involve strlen(); it is called planning
ahead, a skill all programmers should have but astonishingly few do.
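A small sketch of that "plan ahead" approach, with fgets() providing
the guarantee (N and the use of stdin are just for illustration):

#include <stdio.h>
#include <string.h>

#define N 80

int main(void)
{
    char src1[N], src2[N], dest[2 * N];

    if (fgets(src1, sizeof src1, stdin) && fgets(src2, sizeof src2, stdin)) {
        strcpy(dest, src1);    /* safe: strlen(src1) <= N - 1 */
        strcat(dest, src2);    /* safe: total length <= 2N - 2, so the '\0' fits */
        fputs(dest, stdout);
    }
    return 0;
}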
> an "else strncpy(...)",
Never at all. strncpy() is not suitable for use on C strings unless
you're a masochist.
> Always use "strncpy(dst,src,CAPACITY_OF_DST-1);"
>
> as compared to this:
>
> if (strlen(src)<=CAPACITY_OF_DST) strcpy(dst,src);
> else /* do something accordingly, which will be: */
> strncpy(...);
This would not get past a desk-check by me. It is daft.
Richard
> Targeur fou <rtro...@yahoo.fr> schrieb:
>
> > Markus Becker wrote:
>
> >> buffer and act accordingly if it does. The 'alternative'
> >> would be to use strncpy with an n of one less than the
> >> capacity of the destination.
> >
> > Perhaps, but you still have the first trap explained by Christian.
>
> > Ok, but then, consequently, every function concerning
> C-strings (null-terminated char arrays) should be considered
> to have traps. And many more. What about system()?
> Just kidding ...
Says it all, really. system() _does_ have traps, and the wise programmer
is aware of them. The unwise programmer calls system("cls"); and is
surprised that his program appears to print nonsense on Unix systems.
> > The two strings must not overlap with strncpy() either.
>
> I know[1], violating this constraint might produce garbled
> data (could be dangerous, too, depending on what you
> do with the data afterwards) but will definitely _not_
> produce a buffer overflow in the sense that anything
> outside of src or dst gets overwritten.
You do not know this. From the Standard:
# If copying takes place between objects that overlap, the behavior is
# undefined.
Richard
There already are functions that do that: strlcpy and strlcat. The
performance of strlcpy is much higher than that of strncpy.
--
Daniel Rudy
Email address has been base64 encoded to reduce spam
Decode email address using b64decode or uudecode -m
Why geeks like computers: look chat date touch grep make unzip
strip view finger mount fcsk more fcsk yes spray umount sleep
> It's easier to enforce a coding rule regarding string copying
> that reads like this:
>
> Always use "strncpy(dst,src,CAPACITY_OF_DST-1);"
>
> as compared to this:
>
> if (strlen(src)<=CAPACITY_OF_DST) strcpy(dst,src);
> else /* do something accordingly, which will be: */
> strncpy(...);
>
> So for 'me', strncpy is safer than strcpy and if I try to use
> strcpy correctly I will need to program a call to strncpy anyway
> if the length of src gets (oh my, this word again) greater than
> the capacity of the buffer pointed to by dst.
>
> What would be the 'proven' way of using strcpy in a safe
> way without needing strncpy as a fallback? I wouldn't be
> surprised if I had overlooked something obvious. Sometimes
> I cannot see the wood for the trees, you know. Especially
> after a long night like this...
>
> Markus
Here's something that's really fast:
#include <strings.h>
#include <string.h>
int strscpy(char *dest, const char *src, int dest_size)
{
int str_size; /* size of string to be copied */
int copy_size; /* number of characters to copy */
/* get src string size */
str_size = strlen(src);
/* determine how much to copy */
/* str_size + 1 is based on strlen not including the terminating
null in the size of the string. remove it if your
implementation does */
copy_size = str_size + 1 < dest_size - 1 ? str_size : dest_size - 2;
/* copy */
bcopy(src, dest, copy_size);
/* set last character to null */
dest[dest_size - 1] = 0x00;
/* return to caller with number of characters copied */
return(copy_size);
}
The benefit of this is that all you do is give it the size of the
destination, and because it is using bcopy, it will handle strings that
overlap in memory. Furthermore, it has significant performance because
it only copies what needs to be copied, and as a precautionary action it
sets the very last byte in dest to null.
if (strscpy(dest, src, sizeof(dest)) < strlen(src))
{
  /* take appropriate action here */
}
You neglected to point out that these are non-standard functions.
However purely standard code for them, needing nothing more than
compilation on any system, is available at:
<http://cbfalconer.home.att.net/download/strlcpy.zip>
It may be fast, but my system does not provide a bcopy() function...
--
:wq
^X^Cy^K^X^C^C^C^C
You have re-invented strlcpy, without some of the provisions
specified for its action. For details of this see:
<http://cbfalconer.home.att.net/download/strlcpy.zip>
At any rate, here is the core code of my implementation. Notice
that it doesn't use the library at all, and thus is suitable for
embedded applications. Since it has to scan the length of the
source string anyway, it combines that with the actual transfer,
for a noticeable efficiency improvement.
size_t strlcpy(char *dst, const char *src, size_t sz)
{
const char *start = src;
if (src && sz--) {
while ((*dst++ = *src))
if (sz--) src++;
else {
*(--dst) = '\0';
break;
}
}
if (src) {
while (*src++) continue;
return src - start - 1;
}
else if (sz) *dst = '\0';
return 0;
} /* strlcpy */
/* ---------------------- */
size_t strlcat(char *dst, const char *src, size_t sz)
{
char *start = dst;
while (*dst++) /* assumes sz >= strlen(dst) */
if (sz) sz--; /* i.e. well formed string */
dst--;
return dst - start + strlcpy(dst, src, sz);
} /* strlcat */
"There's no religious war. Just a difference of opinion..."
(RB, speaking of the Crusades)
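Typical usage (a sketch; the buffer and the name variable are
hypothetical): strlcpy returns strlen(src), so a return value greater
than or equal to the buffer size means the copy was truncated.

char buf[16];

if (strlcpy(buf, name, sizeof buf) >= sizeof buf) {
    /* name didn't fit; buf now holds a truncated but valid C string */
}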
Yes, it takes absolutely no run time at all because it fails to compile.
> #include <strings.h>
No such header in standard C, and it does not exist on all of the
implementations I use including one very popular platform.
> #include <string.h>
>
> int strscpy(char *dest, const char *src, int dest_size)
What if the buffer is larger than INT_MAX? Wouldn't size_t be more
appropriate for the size?
> int str_size; /* size of string to be copied */
> int copy_size; /* number of characters to copy */
See comments above.
> /* get src string size */
> str_size = strlen(src);
Oh dear, I passed in a string longer than INT_MAX.
> /* determine how much to copy */
> /* str_size + 1 is based on strlen not including the terminating
> null in the size of the string. remove it if your
> implementation does */
The above comment is rather daft. By *definition* strlen does not
include the null termination in the length of the string. Therefore you
would need to be using something other than a C compiler to not need the +1.
> copy_size = str_size + 1 < dest_size - 1 ? str_size : dest_size - 2;
> /* copy */
> bcopy(src, dest, copy_size);
Non-standard function that does not exist on at least one very popular
platform. Why don't you use memmove which *does* exist because it is
part of the C standard?
> /* set last character to null */
> dest[dest_size - 1] = 0x00;
> /* return to caller with number of characters copied */
> return(copy_size);
> }
>
> The benefit of this is that all you do is give it the size of the
> destination,
However it will fail if either the size of the source string (including
null termination) or the size of the destination buffer is larger than
can be represented in an int. Rather inconsistent with the functions
in the standard C library.
> and because it is using bcopy, it will handle strings that
> overlap in memory.
No, because it uses bcopy it is not portable and won't even compile on
some significant platforms.
> Furthermore, it has significant performance because
> it only copies what needs to be copied, and as a precautionary action it
> sets the very last byte in dest to null.
If fixed so that it actually worked those would be good points.
> if (strscpy(dest, src, sizeof(dest)) < strlen(src))
> {
> /* take appropriate action here */
> }
In future, please post standard C answers not implementation specifics,
especially when there is a simple standard C way of doing the same
thing. We only deal with standard C here, not the BSD extensions, POSIX
extensions or Windows extensions.
What is bcopy? And what bright spark defined a copy function that has
its arguments the other way round from the standard memcpy?
Except for the use of reserved identifiers,
#define bcopy(a,b,n) memmove(b,a,n)
Why would anyone define a macro that just calls memmove with its
arguments in the wrong order?
I think bcopy() and memcpy() are of comparable age (bcopy() may even
be older). Bcopy() just wasn't adopted by the ANSI committee.
--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
<snip>
>>>> bcopy(src, dest, copy_size);
<snip>
>>>> The benefit of this is that all you do is give it the size of the
>>>> destination, and because it is using bcopy, it will handle strings that
>>>> overlap in memory.
>>> It may be fast, but my system does not provide a bcopy() function...
>> #define bcopy(a,b,n) memmove(b,a,n)
>
> Why would anyone define a macro that just calls memmove with its
> arguments in the wrong order?
Possibly someone who learnt on BSD before memmove was standardised by
ANSI in 1989? bcopy being a "standard" BSD function.
Although looking at this thread
http://groups.google.co.uk/group/comp.sys.att/browse_frm/thread/ec0b78bf7774879f/d2fe4a40b4fa421d?lnk=st&q=bcopy+history&rnum=3#d2fe4a40b4fa421d
and what Chris Torek said in it, even assuming BSD you can't assume
correct handling of overlapping memory regions if using the "real" bcopy
instead of the suggested #define to provide it.
Of course, I would always recommend using the C standard functions
directly rather than using the old BSD functions and #defining them to
the C equivalent on systems that don't have them..
Because the semantics thus provided are those of the historical function
bcopy, whose existence, as I understand it, pre-dates memmove and
memcpy.
Now this is interesting because I code on a BSD system...FreeBSD to be
exact. So bcopy is not really a part of the standard? I wasn't aware of
that. The man page says the following:
BCOPY(3)           FreeBSD Library Functions Manual           BCOPY(3)

NAME
     bcopy -- copy byte string

LIBRARY
     Standard C Library (libc, -lc)

SYNOPSIS
     #include <strings.h>

     void
     bcopy(const void *src, void *dst, size_t len);

DESCRIPTION
     The bcopy() function copies len bytes from string src to string dst.
     The two strings may overlap.  If len is zero, no bytes are copied.

SEE ALSO
     memccpy(3), memcpy(3), memmove(3), strcpy(3), strncpy(3)

HISTORY
     A bcopy() function appeared in 4.2BSD.  Its prototype existed
     previously in <string.h> before it was moved to <strings.h> for
     IEEE Std 1003.1-2001 (``POSIX.1'') compliance.

FreeBSD 6.0                    June 4, 1993                    FreeBSD 6.0
And for memmove(3):
MEMMOVE(3)         FreeBSD Library Functions Manual         MEMMOVE(3)

NAME
     memmove -- copy byte string

LIBRARY
     Standard C Library (libc, -lc)

SYNOPSIS
     #include <string.h>

     void *
     memmove(void *dst, const void *src, size_t len);

DESCRIPTION
     The memmove() function copies len bytes from string src to string dst.
     The two strings may overlap; the copy is always done in a
     non-destructive manner.

RETURN VALUES
     The memmove() function returns the original value of dst.

SEE ALSO
     bcopy(3), memccpy(3), memcpy(3), strcpy(3)

STANDARDS
     The memmove() function conforms to ISO/IEC 9899:1990 (``ISO C90'').

FreeBSD 6.0                    June 4, 1993                    FreeBSD 6.0
So I guess if the standard section doesn't say that it conforms to some
standard, then it doesn't?
> Daniel Rudy wrote:
>
>>At about the time of 1/3/2006 1:27 AM, Markus Becker stated the following:
>>
>>
>>>It's easier to enforce a coding rule regarding string copying
>>>that reads like this:
>>>
>>>Always use "strncpy(dst,src,CAPACITY_OF_DST-1);"
>>>
>>>as compared to this:
>>>
>>>if (strlen(src)<=CAPACITY_OF_DST) strcpy(dst,src);
>>>else /* do something accordingly, which will be: */
>>> strncpy(...);
>>>
>>>So for 'me', strncpy is safer than strcpy and if I try to use
>>>strcpy correctly I will need to program a call to strncpy anyway
>>>if the length of src gets (oh my, this word again) greater than
>>>the capacity of the buffer pointed to by dst.
>>>
>>>What would be the 'proven' way of using strcpy in a safe
>>>way without needing strncpy as a fallback? I wouldn't be
>>>surprised if I had overlooked something obvious. Sometimes
>>>I cannot see the wood for the trees, you know. Especially
>>>after a long night like this...
>>>
>>>Markus
>>
>>Here's something that's really fast:
>
>
> Yes, it takes absolutely no run time at all because it fails to compile.
>
>
Really?
strata:/home/dr2867/c/modules 1038 $$$ ->./compile strscpy.c strscpy.o
gcc -W -Wall -Wshadow -Wpointer-arith -Wcast-align -Wstrict-prototypes
-Wnested-externs -Wwrite-strings -Wfloat-equal -Winline -Wtrigraphs
-ansi -std=c89 -pedantic -ggdb3 -c -o strscpy.o strscpy.c
Seems to compile just fine on my system... FreeBSD 6.0-RELEASE
> What if the buffer is larger than INT_MAX? Wouldn't size_t be more
> appropriate for the size?
>
What the hell?
It was an EXAMPLE. Going beyond INT_MAX? Who the hell is going to pass
2GB text strings around in memory. Most machines don't HAVE 2GB RAM
installed. Granted, if need be I can use an unsigned long, but again
who is going to need it? I'll be very interested in hearing your
answer regarding real world applications.
About the bcopy function. I was not aware that it was not ISO C90.
Here's the corrected version that still uses int.
#include<string.h>
int strscpy(char *dest, const char *src, int dest_size);
int strscpy(char *dest, const char *src, int dest_size)
{
int str_size;
int copy_size;
str_size = strlen(src);
copy_size = str_size + 1 < dest_size - 1 ? str_size : dest_size - 2;
memmove(dest, src, copy_size);
dest[dest_size - 1] = 0x00;
return(copy_size);
}
Now your size_t version:
size_t strscpy(char *dest, const char *src, size_t dest_size);
size_t strscpy(char *dest, const char *src, size_t dest_size)
{
size_t str_size;
size_t copy_size;
str_size = strlen(src);
copy_size = str_size + 1 < dest_size - 1 ? str_size : dest_size - 2;
memmove(dest, src, copy_size);
dest[dest_size - 1] = 0x00;
return(copy_size);
}
>
> In future, please post standard C answers not implementation specifics,
> especially when there is a simple standard C way of doing the same
> thing. We only deal with standard C here, not the BSD extensions, POSIX
> extensions or Windows extensions.
As a programmer, I use what is available on the platform that I code on.
Because my software has to work in the real world, I have to code it so
that it is robust, safe, and secure. Granted, I am still learning, but
I'm doing quite well. I don't get many errors or warnings when my code
compiles. When I do get an error or warning, I track it down and fix it.
[...]
> And for memmove(3):
>
> MEMMOVE(3)         FreeBSD Library Functions Manual         MEMMOVE(3)
[...]
> STANDARDS
> The memmove() function conforms to ISO/IEC 9899:1990 (``ISO C90'').
[...]
> So I guess if the standard section doesn't say that it conforms to some
> standard, then it doesn't?
Presumably, but that's really a question about man pages. If the man
pages are accurate, they *should* tell you what standard specifies a
function, but the only really reliable definition of what's in a
standard is the standard itself.
A draft document consisting of the C99 standard plus some later
additions is freely available; search for n1124.pdf.
You are posting on comp.lang.c. Only the C Standard is on-topic here.
FreeBSD or BSD or whatever is completely off-topic.
Furthermore, having a copying function that has its source and
destination arguments in the opposite order from the Standard C memcpy
and memmove and strcpy functions etc. is _dangerous_.
Yes, really. I have multiple Windows boxes with different versions of MS
Visual Studio and MS Visual C++ (all of which include C compilers) and
*none* of them have the non-standard header you used.
>> What if the buffer is larger than INT_MAX? Wouldn't size_t be more
>> appropriate for the size?
>>
>
> What the hell?
>
> It was an EXAMPLE. Going beyond INT_MAX? Who the hell is going to pass
> 2GB text strings around in memory. Most machines don't HAVE 2GB RAM
> installed. Granted, if need be I can use an unsigned long, but again
> who is going to need it? I'll be very interested in hearing your
> answer regarding real world applications.
Some machines have had 16 bit ints, so that could be 32K. However, the
point is that if you are writing a replacement for a system function
then you should not arbitrarily introduce a lower limit on what it can
copy than the function it is meant to replace. After all, if you are
trying to write a safer alternative to a standard function, is it really
a good thing to introduce another trap for the unwary?
One example, BTW, where one can sometimes end up dealing with very large
strings (larger than you expect), is when something like an RTF document
is stored in a database and the user chooses to be stupid and embed a
high resolution true-colour image in the header of the document (I know
this through experience of what *real* users do, and in our case the
application did *not* crash, but transferring the document over the
network was taking a very long time). If the non-standard 3rd-party DB
library gives it to you as a string and you then have to copy it
somewhere else for use by another non-standard third party library...
Now, it could be argued that the large RTF document should have been
rejected (although where do you place the limit?) but it is certainly
not acceptable (to me) to invoke undefined behaviour just because the
user has been even more stupid than I expect users to be.
> About the bcopy function. I was not aware that it was not ISO C90.
Well, you do now.
<snip>
>> In future, please post standard C answers not implementation specifics,
>> especially when there is a simple standard C way of doing the same
>> thing. We only deal with standard C here, not the BSD extensions, POSIX
>> extensions or Windows extensions.
>
> As a programmer, I use what is available on the platform that I code on.
So do I, and I regularly use things that are not part of C but are
either extensions or third party libraries.
> Because my software has to work in the real world, I have to code it so
> that it is robust, safe, and secure.
Using a non-standard function when there is a perfectly good standard
function does not help in this. Using your alternative to strcpy or
using strlcpy (another extension) can assist, but that was not the case
here.
Admittedly you did not know bcopy was non-standard, but now you do. If
you've read the link I posted you will also know that there have been
versions of bcopy that did *not* handle overlapping source and
destination correctly.
> Granted, I am still learning, but
> I'm doing quite well. I don't get many errors or warnings when my code
> compiles. When I do get an error or warning, I track it down and fix it.
It is also useful to know what is standard and what is not. This does
not mean don't use non-standard things, but when you are using them you
should know you are using them.
Personally I don't use any of the b* functions from BSD even when
programming on systems that provide them (which I do regularly) because
there are perfectly good alternatives that are part of the C standard.
Also, knowing what is standard helps you avoid having people complain at
you for using non-standard things in this group ;-)
>> When I read the man pages regarding gets and strcpy,
>> they mention BUGS and dangerous and such in the case
==== ===========
>> of gets -> this is clearly dangerous.
=======
>> In the case of strcpy I read "the buffer that dst points
===========
>> to has to be big enough and the two strings must not
>> overlap" -> this _can_ be made safe, but it is tedious(sp?)
>
> If your man page considers this a bug, it is broken.
Mine doesn't say it's a bug. Where did you read that in my
posting?
> This is not tedious and does not involve strlen(); it is called planning
> ahead, a skill all programmers should have but astonishingly few do.
Some programming activities (in the real world) have to deal with
code that already exists and is 'proven to work, not to be touched,
but to be used'.
Markus
And thus O/T here. As I posted many months ago here, the real world is O/T
in clc. Yes, you can Google for it.
P.S. To continue your thread of discussion - yes, in the real world, quite
often code cannot be fixed, because that would generate results that are
not consistent with the old results. And that would require you to admit
that the old results were wrong.
Says who? That's like saying that having an output function whose stream
argument is in a different place than fputc's [say, fprintf] is dangerous.
If you use a function, whether standard or an extension, you are expected
to learn that function. bcopy existed first anyway.
Well, it is not in the C Standard, or is it?
No, bcopy() is not in the C Standard (I thought we had established
that some time ago). I'm not sure what your point is.
There's nothing fundamentally wrong with bcopy(). A copying function
could sensibly take its arguments in either order; memcpy()'s order
mimics an assignment statement, while bcopy() arguably reflects the
direction in which the bytes are copied. And there are plenty of
examples of inconsistent parameter orders within the standard itself.
The ANSI standard committee had to pick either memcpy() or bcopy() for
inclusion in the standard (I *think* they both existed at the time).
Including both would have been redundant. The choice they made was,
as far as I know, arbitrary.
Some implementations still provide bcopy() for backward compatibility;
*all* conforming implementations provide memcpy(). There is no reason
to use bcopy() in new code -- not because of the order of its
parameters, but simply because it's less portable than memcpy().
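The two orderings side by side (dst, src and n are placeholders):

memcpy(dst, src, n);   /* standard C: destination first, like dst = src  */
bcopy(src, dst, n);    /* BSD: source first, "copy these bytes to there" */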
I find it very dangerous as well. I have made the mistake of assuming
dest,src argument order when using the BSD kernel function copyin() in
the past and the only thing that saved me was a compiler diagnostic
about the fact that the destination argument (which I had mistaken for
the source) was const-qualified but the function's parameter wasn't. :-)
I'm just so used to dest,src that such mistakes can slip in no matter
how often I read the docs.
As for the fputc()/fprintf() order - at least you always get a crystal
clear diagnostic from the compiler if you swap those incorrectly (the
arguments to bcopy()/copyin()/copyinstr()/etc are pointers to void.)
--
Nils R. Weller, Bremen (Germany)
My real email address is ``nils<at>gnulinux<dot>nl''
... but I'm not speaking for the Software Libre Foundation!
That's your problem, not bcopy's or copyin's. The idea that it is
fundamentally dangerous is misplacement of blame - it's always easier to
blame the other guy's code.
Who's blaming anyone's code? The point is that these families of
interfaces carry out an identical function yet take the opposite order
of arguments. I'm saying it is easy to mix those up if you use them
interchangably, and in fact it has happened to me. Seems likely it could
happen to many others as well. BTW, I never mix up the argument order of
fputc()/fprintf() because they are so different. Those families of data
copying functions (the Linux kernel uses dest,src for the equivalents as
well) are much more similar and thus easier to get wrong ...
It's too bad the standards committee doesn't agree.
> I suppose my presence in these c.l.c dialogs is a form of masochism but
> ...
>
> I use gets() everyday!
Why?
> [...] I think I get the "dangerous" message
> from almost every make I do! Frankly, if I used a gets() alternative
> to avoid the "danger" I'd probably end up using a strcpy()
> with the same danger!
Again -- why?
> You don't need to URL me to the "dangerous" explanation: I used
> to design exploits myself. But the fact is, most of the programs I
> write
> are not for distribution and are run only on my personal machine,
> usually with an impermeable firewall. Who's going to exploit me?
> My alter ego? My 10-year old daughter? The input to my gets()
> is coming from a file I or my software created, and which has
> frequent line-feeds. The buffer into which I gets() is ten times
> as big as necessary. If one of my data files gets corrupted,
> diagnosing
> the gets() overrun would be the least of my worries.
This explains why you are not worried about the downside (i.e., you
don't write software for other people to consume, and you have no
concern about the scalability of your software efforts) but it does not
explain what the *upside* of using gets() is.
> I'll agree that such coding should be avoided in programs with
> unpredictable
> input, but there are *lots* and *lots* of code fragments that suffer
> from the same "danger" as gets(), so to me the suggestion that gets()
> specifically be barred by the compiler seems like a joke. Or do you
> think strcpy() should be barred also?
Personally, of course, *I* do. Not just because it's dangerous and
unmaintainable -- but because it's slower, less functional, less
convenient, typically uses a larger memory footprint and would cause me
to write more code overall than my alternative of choice (I assume
y'all know what that is by now). Same applies to gets().
--
Paul Hsieh
http://www.pobox.com/~qed/
http://bstring.sf.net/
I am confused by this implementation of successive src tests. Are you
expecting it to be possible to set src = (char *) ((NULL) - (int)k) or
something? The chances that that means something on a platform seems
pretty low. Why wouldn't you just hoist out the "if (src)" test?
(Attention Edward G. Niles: use my second sentence to determine *WHY*
the C compiler cannot do the hoist automatically.)
Why are you proposing such undefined behavior operations? No sane
programmer would ever pass such a parameter.
Among other things it caters to passing NULL as the src parameter,
and treats it as an empty string. Download the source for some
implications and requirements. The code has been minimized in
anticipation of embedded system use with non-optimizing compilers.
It's not me -- it's *YOU*, by the way you wrote the code. What I am
asking is, why not implement it as:
size_t strlcpy(char *dst, const char *src, size_t sz) {
if (src) {
const char *start = src;
if(sz--) {
while ((*dst++ = *src))
if (sz--) src++;
else {
*(--dst) = '\0';
break;
}
}
while (*src++) continue;
return src - start - 1;
}
else if (sz) *dst = '\0';
return 0;
} /* strlcpy */
This saves you one if(src) test, and you avoid setting start if it's not
necessary. It's called "hoisting". Of course this transformation is
technically incorrect if your platform lets you define src = (char *)
((NULL) - (int)k), where k = min(sz,strlen(src)) for some reason
because the two if(src) tests that you do will go in opposite
directions. In other words you would still prefer your implementation
to mine if you need to support that bizarre corner case in what is
otherwise undefined behavior.
Whatever dude, it was just a question. I assumed that as an expert on
"strlcpy" you could elucidate on what I was missing; but as I look at
it again, it seems clear that I am not missing anything.
> Richard Bos <r...@hoekstra-uitgeverij.nl> schrieb:
> > This is not tedious and does not involve strlen(); it is called planning
> > ahead, a skill all programmers should have but astonishingly few do.
>
> Some programming activities (in the real world) have to deal with
> code that already exists and is 'proven to work, not to be touched,
> but to be used'.
Yes, and? Sure, strcpy() can be abused. So can printf().
Richard
You are the master of the non-sequitor, aren't you?
I bow to your mastery of the technique.
ITYM non-sequitur
-Michael
--
E-Mail: Mine is an /at/ gmx /dot/ de address.
Now that you show the proposed code, I can agree, I think. I would
still have to test the various extreme conditions to be sure. I
looked no further than the ridiculous input parameter suggestion.
At any rate I never worried about a few tests outside the main
loops. They will happen once per call. I did worry about complex
tests within the loops.
Lol! It's *faster* you say? Don't both of them boil down to:
do {
  if ('\0' == (*dst++ = *src)) break;  /* Toddler: "Are we there yet?" */
  src++;
} while (--count);                     /* Parent: watching the road signs */
in their inner loops? (There are tricks you can pull to alias the
count and src decrement/increment that are useful in x86 ASM, but it
likely won't make a difference in today's modern OOE architectures.)
I thought strlcpy was just marginally safer than strncpy (because of
its more predictable truncation and termination semantics.) If you
want speed you can't beat memcpy() (and neither can I, because I don't
know the assembly language of every architecture where a C compiler has
been made) and that really *DOES* outperform strcpy, strlcpy and
strncpy (think of it as giving the Toddler a tranquilizer).
The number of iterations through the loop in memcpy does not depend on
the raw string data (i.e., searching for '\0's), but rather an
asynchronous count (which is just sitting in a register somewhere) --
this is a lot better for modern CPUs which prefer parallelism to
dependency after dependency.
The time problem with strncpy is that it always pads the whole
buffer with '\0', and if the buffer has been filled it won't even
bother to terminate the string. strlcpy doesn't go any further than
it needs to, and
always terminates the string properly. It also returns the size of
the resultant string, so you don't need to spend time executing strlen.
Look a little deeper than the one routine.
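To make that cost concrete (a sketch; strlcpy here is the non-standard
function discussed above):

char big[8192];

strncpy(big, "hi", sizeof big);   /* copies "hi", then writes 8190 '\0' bytes */
strlcpy(big, "hi", sizeof big);   /* copies "hi" and one '\0', then stops     */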
On Fri, 30 Dec 2005 10:19:19 +0000, Steve Summit wrote:
> My first comment is that the question of openness versus control is an
> extremely important one. [I]t's undisputable that Wikipedia would never
> have achieved its current momentum if it had been equipped all along
> with a proper editorial review board and article approval process.
[...]
> A C Wiki, with its smaller scope and more constrained subject matter,
> could probably get away with a little more control (aka closedness) than
> the every-topic-is-fair-game Wikipedia, but I suspect it will still be
> important that it be relatively open. ...
That's probably true, and we can leave editing unrestricted except for
anonymous edits unless problems occur. Email notification now supports
global creation/change watches, although the patch hasn't been applied
live yet, and we'll probably add category and namespace watches too.
> I would urge the proponents of the C Wiki to, as Wikipedia puts it, *be*
> *bold* and just do it. [D]on't worry too much about getting the model
> and the charter and the editorial board and the voting policy all
> perfect before you start.
The suggestion to incorporate something as official (as it has become) as
the c.l.c FAQ seemed to require a more perfectionist approach, but in the
FAQ's absence a good-enough approach should work fine. New features can
be added as needed.
Content's more relevant and some starting content has been imported - the
K&R2 solutions from RJH's unmaintained site:
<http://clc-wiki.net/wiki/KR2_Chapter_Index>
or through the original domain name:
<http://clc.flash-gordon.me.uk/wiki/KR2_Chapter_Index>.
[...]
> On the specific question of "seeding" a C Wiki with the comp.lang.c FAQ
> list, I'm still of mixed mind. ... [W]hile on the one hand I am (I
> confess) still possessive enough about the thing that I'll have some
> qualms about throwing it open for anyone to edit, on the other hand I've
> been wondering how I'm ever going to cede control over it, since I don't
> maintain it as actively as I once did and I'm certainly not going to
> maintain it forever. I've been wondering if it's time to fork it, and
> doing so in the context of a C Wiki might be just the thing.
The wiki model could support a page owner (a review group of one), and the
stable version of the article would need to be approved by the page owner.
The "unmoderated" edit version would still be viewable, but wouldn't be
shown by default. That's one possibility - for the FAQ you would be the
pages' owner and stable version setter.
> At the very least we could certainly seed the FAQ List section of a C
> Wiki with the questions from the existing FAQ list ... I can probably
> see my way clear to having the Wiki-side answers seeded with the
> existing static answer text also, as long as it's possible to tag those
> pages with a different, non-GFDL copyright notice. ...
There's probably not a lot of value in starting a separate version of the
FAQ from scratch bar the question wordings, especially when the current
content of the FAQ is so well-regarded by the c.l.c community. If you
decide not to wikify the FAQ answers, then I think a c.l.c wiki should
avoid the role of a FAQ and focus on other content - your suggestions
below are good ones.
> A couple of other notes:
>
> I'm glad to see the Wikimedia software being used, rather than something
> being written from scratch!
It's pretty feature-rich and where we add features we'll contribute them
back to the MediaWiki developers.
> They're hinted at in the existing topic outline, but it would be lovely
> to have a collaboratively written, Wiki-mediated language tutorial, a
> language reference manual, and a library reference manual in there, too.
Another possibility is portable, peer-reviewed standard library function
implementations - I've noticed a few regulars mentioning personal learning
projects around that theme - anyone interested in contributing/reviewing
code?
Contributions of summaries of useful threads with discussion, code,
improvement processes or realistic portable optimisation examples would
also be welcome, as would personal style guidelines with rationale.
> At any rate, let's see some more discussion about the Wiki idea! I think
> it has a lot of promise, which is why I'm blathering at length about it
> in this public post, rather than just sending an email reply to
> Netocrat.
Steve your support is much appreciated.
> Content's more relevant and some starting content has been imported - the
> K&R2 solutions from RJH's unmaintained site:
> <http://clc-wiki.net/wiki/KR2_Chapter_Index>
I took a quick look, and I think you've made a good job of this. (It was a
/very/ quick look, but what I saw didn't make me think "oh deary deary me",
so I guess that's a good sign.)
I had been contemplating hoiking the whole thing over to my current site,
but now I will gratefully not bother.
Could anyone wishing to submit K&R2 exercise solutions or critiques of
existing solutions please do so via clc-wiki.net in future? Thank you.
[SFX: washes hands, a la Pontius Pilate.]
I hope I'm right in thinking that you won't allow idiots to undo experts'
changes.
(Steve Summit said)
>> At any rate, let's see some more discussion about the Wiki idea!
Agreed. I think this newsgroup is the right place to discuss it, too, if we
wish to avoid the problems Wikipedia faces with quality. And it'll make a
pleasant change from wittering on about void main.
I don't know whether clc-wiki.net aims to restrict itself purely to portable
C. If it doesn't, it will need to institute an apartheid principle to make
it abundantly clear what is portable and what is not, and discussion of the
non-portable bits should be done elsenet.
--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
That's good to know. The changes from html to wiki markup were minor
anyhow - the wiki engine does a good job of presentation.
> I had been contemplating hoiking the whole thing over to my current
> site, but now I will gratefully not bother.
>
> Could anyone wishing to submit K&R2 exercise solutions or critiques of
> existing solutions please do so via clc-wiki.net in future? Thank you.
>
> [SFX: washes hands, a la Pontius Pilate.]
Would a scheme similar to that described in my immediately prior post
encourage you to keep your hand in? (stable version review group)
> I hope I'm right in thinking that you won't allow idiots to undo
> experts' changes.
As it stands there's nothing to prevent anyone from creating an account
and making changes. Given that there's very little traffic on the wiki
it's easy to monitor all changes though (web/RSS) - so any "idiot undos"
can be shortly reverted. I'm monitoring the site and I presume the other
wiki planners are too; more and expert monitors would be useful.
> (Steve Summit said)
>>> At any rate, let's see some more discussion about the Wiki idea!
>
> Agreed. I think this newsgroup is the right place to discuss it, too, if
> we wish to avoid the problems Wikipedia faces with quality. And it'll
> make a pleasant change from wittering on about void main. I don't know
> whether clc-wiki.net aims to restrict itself purely to portable C.
The wiki's topicality follows from comp.lang.c's - the main difference is
that longer articles are possible and it can present tangential
information such as resource lists. This is slightly unrelated, but I've
just added a "Community And Resources" category and two articles in that
category:
* <http://clc-wiki.net/wiki/Home_Pages>
if you have a c-related home-page and you post to comp.lang.c or are a
committee member, feel free to add it to this list; also if you don't
want to be listed there you can remove yourself
* <http://clc-wiki.net/wiki/Reading_And_Posting_To_comp.lang.c>
this page will be complete when Kenny McCormack and Keith Thompson agree
that it fairly describes the newsgroup.
> If it doesn't, it will need to institute an apartheid principle to make
> it abundantly clear what is portable and what is not, and discussion of
> the non-portable bits should be done elsenet.
Any non-portable code/discussion should be as incidental and clearly
demarcated as it generally is in posts to this newsgroup.
> On Sat, 14 Jan 2006 06:58:42 +0000, Richard Heathfield wrote:
>> Netocrat said:
>>
>> Could anyone wishing to submit K&R2 exercise solutions or critiques of
>> existing solutions please do so via clc-wiki.net in future? Thank you.
>>
>> [SFX: washes hands, a la Pontius Pilate.]
>
> Would a scheme similar to that described in my immediately prior post
> encourage you to keep your hand in? (stable version review group)
Oh, I don't mind sticking my oar in, as long as I don't have to get my hands
dirty. :-)
>> I hope I'm right in thinking that you won't allow idiots to undo
>> experts' changes.
>
> As it stands there's nothing to prevent anyone from creating an account
> and making changes.
Um, eesh. Why not just take the brewery keys down to the nearest collection
of park benches?
> Given that there's very little traffic on the wiki
> it's easy to monitor all changes though (web/RSS) - so any "idiot undos"
> can be shortly reverted. I'm monitoring the site and I presume the other
> wiki planners are too; more and expert monitors would be useful.
If expert monitors have a really easy way to learn what has been changed,
then that sounds like a plan.
>
>> (Steve Summit said)
>>>> At any rate, let's see some more discussion about the Wiki idea!
>>
>> Agreed. I think this newsgroup is the right place to discuss it, too, if
>> we wish to avoid the problems Wikipedia faces with quality. And it'll
>> make a pleasant change from wittering on about void main. I don't know
>> whether clc-wiki.net aims to restrict itself purely to portable C.
>
> The wiki's topicality follows from comp.lang.c's - the main difference is
> that longer articles are possible and it can present tangential
> information such as resource lists. This is slightly unrelated, but I've
> just added a "Community And Resources" category and two articles in that
> category:
> * <http://clc-wiki.net/wiki/Home_Pages>
> if you have a c-related home-page and you post to comp.lang.c or are a
> committee member, feel free to add it to this list; also if you don't
> want to be listed there you can remove yourself
Here's a suggestion for you - pop over to
http://www.cpax.org.uk/prg/portable/c/resources.php and steal the entire
page. (Make sure you read it over, though, since some of it won't survive a
change of ownership and will need editing.)
> * <http://clc-wiki.net/wiki/Reading_And_Posting_To_comp.lang.c>
> this page will be complete when Kenny McCormack and Keith Thompson agree
> that it fairly describes the newsgroup.
I fail to see how Kenny McCormack's opinion is relevant, since it's
abundantly clear that, even if he isn't a troll (which most of us seem to
think he is), his signal-to-noise ratio is way too low.
As anyone can clearly see, it's a lot higher than yours. You are
troll-central.
Just out of curiosity, what is your definition of "troll"? What would I have
to do or not do to qualify?
> Just out of curiosity, what is your definition of "troll"? What would I have
> to do or not do to qualify?
Don't worry about it; you're doing just fine.
Let's keep things clean then - starting with a ban on offensive language.
>>> I hope I'm right in thinking that you won't allow idiots to undo
>>> experts' changes.
>>
>> As it stands there's nothing to prevent anyone from creating an account
>> and making changes.
>
> Um, eesh. Why not just take the brewery keys down to the nearest
> collection of park benches?
Yes, I understand that you're skeptical of the open approach, and to a
point I am too (one reason the c.l.c wiki interests me is trying to work
out a middle approach). The key difference here is: once the beer's
drunk, it can't be reclaimed; an edit can be reverted and the "theft"
lasts for only as long as it isn't noticed. That's why I think that a
stable version would work well - the theft isn't visible to the casual
reader but it is to anyone monitoring, particularly the review group.
>> Given that there's very little traffic on the wiki it's easy to monitor
>> all changes though (web/RSS) - so any "idiot undos" can be shortly
>> reverted. I'm monitoring the site and I presume the other wiki
>> planners are too; more and expert monitors would be useful.
>
> If expert monitors have a really easy way to learn what has been
> changed, then that sounds like a plan.
Let us know if you find a deficiency in the rss/atom feeds:
<http://clc-wiki.net/mediawiki/index.php?title=Special:Recentchanges&feed=rss>
<http://clc-wiki.net/mediawiki/index.php?title=Special:Recentchanges&feed=atom>
I haven't used a proper rss reader to view them, but their xml view seems
complete - diffs are included.
[...]
> Here's a suggestion for you - pop over to
> http://www.cpax.org.uk/prg/portable/c/resources.php and steal the entire
> page. (Make sure you read it over, though, since some of it won't
> survive a change of ownership and will need editing.)
Nice; will copy relevant content.
>> * <http://clc-wiki.net/wiki/Reading_And_Posting_To_comp.lang.c>
>> this page will be complete when Kenny McCormack and Keith Thompson
>> agree that it fairly describes the newsgroup.
>
> I fail to see how Kenny McCormack's opinion is relevant, since it's
> abundantly clear that, even if he isn't a troll (which most of us seem
> to think he is), his signal-to-noise ratio is way too low.
It's similar to saying: I'll be confident in this government pamphlet's
fairness when both the party hacks and the most disaffected opposition
party voters endorse it. If someone who is predisposed to disagree ends
up agreeing, then the chances that you've written something with balance
are pretty good (disagreement wouldn't necessarily mean that the page was
unbalanced).
Your previous post left it unclear whether you thought I was or not.
> Let us know if you find a deficiency in the rss/atom feeds:
>
> <http://clc-wiki.net/mediawiki/index.php?title=Special:Recentchanges&feed=rss>
> <http://clc-wiki.net/mediawiki/index.php?title=Special:Recentchanges&feed=atom>
I just tried the RSS feed in NetNewsWire (Mac OS X) and it seems
to work fine.
> I haven't used a proper rss reader to view them, but their xml view seems
> complete - diffs are included.
It looks fine from here.
BTW, thank you for going to the trouble of putting this up. I
have high hopes for it.
Category suggestion: Homework problems. :-)
>> I fail to see how Kenny McCormack's opinion is relevant, since it's
>> abundantly clear that, even if he isn't a troll (which most of us seem
>> to think he is), his signal-to-noise ratio is way too low.
>
> It's similar to saying: I'll be confident in this government pamphlet's
> fairness when both the party hacks and the most disaffected opposition
> party voters endorse it. If someone who is predisposed to disagree ends
> up agreeing, then the chances that you've written something with balance
> are pretty good (disagreement wouldn't necessarily mean that the page was
> unbalanced).
I understand your point. Just keep in mind that contrary to
popular belief, there *are* some insurmountable problems.
--
Randy Howard (2reply remove FOOBAR)
"The power of accurate observation is called cynicism by those
who have not got it." - George Bernard Shaw
How 'bout them Horns?
> On Sat, 14 Jan 2006 14:16:57 +0000, Richard Heathfield wrote:
>> Netocrat said:
>>> On Sat, 14 Jan 2006 06:58:42 +0000, Richard Heathfield wrote:
>>>> Netocrat said:
>>>>
>>>> Could anyone wishing to submit K&R2 exercise solutions or critiques of
>>>> existing solutions please do so via clc-wiki.net in future? Thank you.
>>>>
>>>> [SFX: washes hands, a la Pontius Pilate.]
>>>
>>> Would a scheme similar to that described in my immediately prior post
>>> encourage you to keep your hand in? (stable version review group)
>>
>> Oh, I don't mind sticking my oar in, as long as I don't have to get my
>> hands dirty. :-)
>
> Let's keep things clean then - starting with a ban on offensive language.
Excellent idea. Incidentally, I'm curious to know whether you thought
anything in my reply offensive. I don't recall whether you are a native
English speaker, so, in case you weren't aware of it, "to stick one's oar
in" is a conventional English idiom, meaning "to interpose when not asked",
according to my dictionary. A vicar could happily use it at a Mothers'
Union meeting, without blushing.
<snip>
> If someone who is predisposed to disagree ends
> up agreeing, then the chances that you've written something with balance
> are pretty good (disagreement wouldn't necessarily mean that the page was
> unbalanced).
That could take a while. (If it's any help, most of us here are pretty good
at disagreeing!)
Yes, that's right; it did.
Not at all. It was a case of one idea leading to another:
hands-not-dirty => keep-things-clean => ban-unclean-language.
I also understand that you have a concern to avoid unclean language, so it
seemed appropriate.
Is he mucking about here? He won't be seen unless somebody feeds him.
I really want to thank you (all) for the depth of thought you're putting
into the clc-wiki's structure. I feel this is extremely important and
am very excited about the project. I'm also very glad to see that other
knowledgeable regulars are taking an interest in contributing.
Maybe a new thread should be started for "all things wiki", in order to
keep the discussion in the forefront as well as provide some "enticement"
for others to participate. I look forward to the possibilities.
Just think that eventually this will help reduce the need for the group to
repeat the explanations it so frequently gives... The ones I'm sure the
regulars are tired of. A wiki with a very _well thought out_ and
_organized_ structure will be fantastic.
If only there were such consideration by all document developers (hint: a
standard per se), the open-source environment would be a much, much
better place.
Many thanks from a diligent learner,
Dieter
>This reply was delayed as I wanted to complete some development on the
>wiki and set up a generic domain name before responding to Steve's post.
>The wiki's now accessible through clc-wiki.net.
Wow. Thank you, thank you. A fantastic beginning.
One comment from my 2-minute look at the site - I find it slightly
disconcerting to have the little "link" symbol embedded in code, but
the link for keywords is otherwise an excellent idea.
--
Al Balmer
Sun City, AZ
Agreed - I noticed that briefly a while ago but since you mention it, it's
been fixed. The Geshi syntax highlighting extension for the wiki was
Flash's find, and the links it generates are, as you write, a great idea -
it would be even better if it included all of the standard functions for
linking rather than the scant few that it does. Perhaps in future the wiki
will have its own set of descriptions of all standard functions that can
be internally linked to.
Review guidelines for C code published on the wiki have now been drafted,
along with templates to provide a (typically one-line) summary below each
block of code indicating its review status, original author and related
information:
<http://clc-wiki.net/wiki/Code_Reviews_%28for_this_wiki%29>
<http://clc-wiki.net/wiki/Help:Editing#C_Code>
Comments on these guidelines are welcome, particularly from anyone
interested in reviewing code on the wiki and anyone who has already
contributed code that's hosted on the wiki. No code has yet been reviewed
under these guidelines, nor has the summary line yet been used other than
for examples.
There's also been a suggestion that C code published on the wiki be
consistent in style, and that the style most likely to be universally
acceptable is K&R. Comments on this are welcome, recognising that the
issue of style is touchy.
The voting extension has also received a bit of a work-out, in the process
uncovering a subtle issue that required manual correction of the logged
eligible voter counts, although it didn't actually invalidate the tallying
process. The fix is being worked on.
The results[*] of the elections don't have great significance whilst edits
on the wiki are open and no group-reviewed-stable-page functionality yet
exists, although the title "editor" is currently equivalent to inclusion
in the wiki group "sysop" and includes a few extra capabilities such as
ability to block vandals by IP/name (along with voting of course).
[*] <http://clc-wiki.net/mediawiki/index.php?title=Special:Group_Decisions>
> There's also been a suggestion that C code published on the wiki be
> consistent in style, and that the style most likely to be universally
> acceptable is K&R.
K&R style has the same probability of universal acceptance as all other
styles, i.e. 0.
Indian Hill!
--
pete
'Universally' was a poor choice there.
Disclosure: K&R is the style I first (implicitly - not necessarily
completely or accurately) learnt and it's still my preferred style. The
suggestion came from two other people though.
The reason I think it's likely to be most acceptable is that it was
developed by the founders of the language. Someone with a mind to
architect a programming language as successful as C is likely to make a
good job of an accompanying style. Given that style is so subjective, the
property "given birth to alongside the language and endorsed by its
parents" is a pretty objective basis for preference (unless there's
another style I don't know of with that property).
Would you suggest leaving the style guidelines at "consistent"?
> Indian Hill!
A close relative of K&R style, but more focused on what rather than how,
yes?
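For anyone unfamiliar with the label, a rough sketch of the bracing used in
the book (the function below is only a made-up placeholder): the opening
brace of a control statement is cuddled onto the same line, while function
braces sit on lines of their own.

#include <stdio.h>

/* K&R-book layout: control-statement braces cuddled, function braces
 * on their own lines, body indented one level.
 */
int count_blanks(const char *s)
{
    int n = 0;

    while (*s != '\0') {
        if (*s == ' ')
            n++;
        s++;
    }
    return n;
}

int main(void)
{
    printf("%d\n", count_blanks("a b c"));   /* prints 2 */
    return 0;
}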
> On Sat, 21 Jan 2006 04:01:50 +0000, pete wrote:
>> Richard Heathfield wrote:
>>> Netocrat said:
>>>
>>>> There's also been a suggestion that C code published on the wiki be
>>>> consistent in style,
>>>> and that the style most likely to be universally acceptable is K&R.
>>>
>>> K&R style has the same probability of universal acceptance as all other
>>> styles, i.e. 0.
>
> 'Universally' was a poor choice there.
>
> Disclosure: K&R is the style I first (implicitly - not necessarily
> completely or accurately) learnt and it's still my preferred style. The
> suggestion came from two other people though.
My preference is closer to Allman style. I've been using large
monitors for long enough not to worry about vertical whitespace
much. Plus, I try not to write functions that are hundreds of
lines in length.
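For comparison, a minimal sketch of Allman bracing (the variable and the
message are arbitrary): the braces sit flush with the controlling
statement and only the body is indented, unlike Whitesmiths, where the
braces are indented along with the body.

#include <stdio.h>

int main(void)
{
    int foo = 1;

    if (foo)
    {
        printf("foo detected\n");
    }
    return 0;
}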
Finally, gnu indent is a lovely tool, freely available, so the
issue is pretty much moot anyway.
>> Indian Hill!
>
> A close relative of K&R style, but more focused on what rather than how,
> yes?
Mainly known for its silly indenting of the braces, instead of
what's inside them.
As one of the people who suggested K&R style I'll state here and now
that I'm not overly bothered about code style.
> Finally, gnu indent is a lovely tool, freely available, so the
> issue is pretty much moot anyway.
Indeed. I considered suggesting GNU Indent to force the code into a
consistent style, but it took me less than 1 second to reject the idea.
However, thinking further, I wonder if it would be possible to give
people the option of having the code piped through indent for display in
the Wiki? Then, as long as someone can specify the style they like they
can have it! It would require a few tweaks here and there, but it should
be possible. Of course, there is always the question of finding the time
to do things.
>>> Indian Hill!
>> A close relative of K&R style, but more focused on what rather than how,
>> yes?
>
> Mainly known for it's silly indenting of the braces, instead of
> what's inside them.
I've generally had to abide by whatever the company standard happened to
be so I've never bothered learning the names of styles.
--
Flash Gordon
Living in interesting times.
Although my email address says spam, it is real and I read it.
I just had a look at the code behind it, and it looks to me to be easy
to set up the links to point anywhere we like (Netocrat, have a look in
c.php to see what I mean). Perhaps link to the Dinkumware site for the
library since Mr P.J. Plauger is a reliable source on such matters?
If there is any similarly good (and stable) site to reference for
keywords such as "switch" that would also be easy.
Note that Netocrat has a backup of the Wiki and we can make backups
available to others, so even if the host disappears (I get run over by a
bus, for example) this Wiki can live on.
<snip>
> The voting extension has also received a bit of a work-out, in the process
> uncovering a subtle issue that required manual correction of the logged
> eligible voter counts, although it didn't actually invalidate the tallying
> process. The fix is being worked on.
>
> The results[*] of the elections don't have great significance whilst edits
> on the wiki are open and no group-reviewed-stable-page functionality yet
> exists, although the title "editor" is currently equivalent to inclusion
> in the wiki group "sysop" and includes a few extra capabilities such as
> ability to block vandals by IP/name (along with voting of course).
>
> [*] <http://clc-wiki.net/mediawiki/index.php?title=Special:Group_Decisions>
You could have used the friendlier URL of
http://clc-wiki.net/wiki/Special:Group_Decisions
;-)
<snip>
> BTW, thank you for going to the trouble of putting this up.
No problem :-). I've learnt a lot from this group as I'm sure the others
involved have as well, and setting this up is an attempt to give
something back to the community and help others.
> I have high hopes for it.
As do I. Checking my logs for the past week it's been accessed from 143
different IP addresses (excluding spiders), people have followed links
to it from webmail systems and people are following links to it from
Google groups.
> Category suggestion: Homework problems. :-)
You mean a set of sensible questions for tutors to set? ;-)
<snip>
I'm glad you think so :-)
> One comment from my 2-minute look at the site - I find it slightly
> disconcerting to have the little "link" symbol embedded in code, but
> the link for keywords is otherwise an excellent idea.
Well, the link symbol has been sorted, and it will be easy to add more
automatic linking from the C code as/when we select reliable references
to link to.
<snip clc-wiki discussion>
> Many thanks from a diligent learner,
Thank you. It's nice to hear occasionally from the target audience that
one's work is appreciated.
>> Category suggestion: Homework problems. :-)
>
> You mean a set of sensible questions for tutors to set? ;-)
Among other things... However, I have given
"non-real-life" C homework to students in order
to drive home a certain aspect of a lesson.
Only when we discussed possible solutions did the
real-life aspects come in.
Primarily: How to ask about homework questions.
There was a discussion within the last months where
Chris Hills brought up a "template":
Start at <YMe0QpAX...@phaedsys.demon.co.uk>
Cheers
Michael
--
E-Mail: Mine is an /at/ gmx /dot/ de address.
Whee - style war. Is this a private fight or can anybody join in?
On-the-fly conversion using an indent tool would be fine as an option but
it could also remove deliberately encoded information, so conversion
shouldn't be the default.
Code on the wiki should conform to a single style, whether that's as
specific as K&R, Allman, Indian Hill, or a collaboratively specified c.l.c
wiki style, or something general like "readable and consistent within each
contribution". I can see arguments all ways but a more specific style
across the wiki would be most readable.
> Randy Howard wrote:
>> Netocrat wrote
>>> On Sat, 21 Jan 2006 04:01:50 +0000, pete wrote:
>>>> Indian Hill!
>>> A close relative of K&R style, but more focused on what rather than
>>> how, yes?
>>
>> Mainly known for it's silly indenting of the braces, instead of what's
>> inside them.
I thought it used K&R brace style and I took pete's recommendation as
seriously intended.
I accessed it by typing "clc-wiki.net" into Firefox. Looks clean.
I created an account and left. :-)
Thanks for the confirmation - I've also verified it with a Linux reader.
[...]
> BTW, thank you for going to the trouble of putting this up. I have high
> hopes for it.
There's definitely heaps of potential. The posters to this group have
given me a lot of insight into C so it's something that I find worthwhile.
[...]
>>> I fail to see how Kenny McCormack's opinion is relevant, since it's
>>> abundantly clear that, even if he isn't a troll (which most of us seem
>>> to think he is), his signal-to-noise ratio is way too low.
>>
>> It's similar to saying: I'll be confident in this government pamphlet's
>> fairness when both the party hacks and the most disaffected opposition
>> party voters endorse it. If someone who is predisposed to disagree
>> ends up agreeing, then the chances that you've written something with
>> balance are pretty good (disagreement wouldn't necessarily mean that
>> the page was unbalanced).
>
> I understand your point. Just keep in mind that contrary to popular
> belief, there *are* some insurmountable problems.
If we find consensus on a common style for C code on the wiki, will you
take that comment back? ;-)
>>>> Indian Hill!
>>>
>>> A close relative of K&R style, but more focused on what rather
>>> than how, yes?
>>
>> Mainly known for it's silly indenting of the braces, instead of
>> what's inside them.
>
> Whee - style war. Is this a private fight or can anybody join in?
Actually, I explicitly said upthread that it doesn't matter with
tools for reformatting being readily available, but if you feel
the need to argue about it anyway, be my guest. Who will you be
arguing with?
>>>>> Indian Hill!
>>>> A close relative of K&R style, but more focused on what rather than
>>>> how, yes?
>>>
>>> Mainly known for it's silly indenting of the braces, instead of what's
>>> inside them.
>
> I thought it used K&R brace style and I took pete's recommendation as
> seriously intended.
if (foo)
    {
    printf("foo detected\n");
    }
Kind of strange to my eye.
>> Category suggestion: Homework problems. :-)
>
> You mean a set of sensible questions for tutors to set? ;-)
Two variants possible (at least).
1) Questions that professors SHOULD NOT ask because asking them
proves that they don't understand their subject well enough. If
we can do anything to educate the educators, life will get
better down the road for people using software written by their
students. Yes, pie in the sky, but why not try?
2) A listing of questions that will not be answered in c.l.c,
because you are supposed to learn how on your own. then, you
can reply to the standard questions (reverse a string,
palindrome, etc, etc) by a simple link to the wiki. They might
actually learn something useful while visiting, even if they
don't find a compilable solution to their problem.
>>>> I fail to see how Kenny McCormack's opinion is relevant, since it's
>>>> abundantly clear that, even if he isn't a troll (which most of us seem
>>>> to think he is), his signal-to-noise ratio is way too low.
>>>
>>> It's similar to saying: I'll be confident in this government pamphlet's
>>> fairness when both the party hacks and the most disaffected opposition
>>> party voters endorse it. If someone who is predisposed to disagree
>>> ends up agreeing, then the chances that you've written something with
>>> balance are pretty good (disagreement wouldn't necessarily mean that
>>> the page was unbalanced).
>>
>> I understand your point. Just keep in mind that contrary to popular
>> belief, there *are* some insurmountable problems.
>
> If we find consensus on a common style for C code on the wiki, will you
> take that comment back? ;-)
A) I don't really care what formatting convention is used on the
wiki. I think my disagreement on what Indian Hill means as a
format is the source of others deciding I want to argue about
it. I do not, for the record.
B) No. Solving style issues on a web site, and making Kenny's
opinions relevant are two radically different problems. One is
analogous to O(n log n) and the other is NP complete.
A repository of insight-based problems would be useful.
> Primarily: How to ask about homework questions. There was a discussion
> within the last months where Chris Hills brought up a "template":
> Start at <YMe0QpAX...@phaedsys.demon.co.uk>
I remember reading that and have added it as a reference post (link to
Google archive) to the introductory comp.lang.c page on the wiki.
Yourself, Chris Hills, Randy Howard or anyone else could import it into
the wiki under a new "Homework Problems" category if you'd like to expand
on it.
> Netocrat wrote
> (in article <pan.2006.01.22....@dodo.com.au>):
>
> >>>>> Indian Hill!
> >>>> A close relative of K&R style, but more focused on what rather
> >>>> than how, yes?
> >>>
> >>> Mainly known for it's silly indenting of the braces, instead of
> >>> what's inside them.
> >
> > I thought it used K&R brace style and I took pete's recommendation
> > as seriously intended.
>
> if (foo)
> {
> printf("foo detected\n");
> }
>
> Kind of strange to my eye.
This is my favored style. The braces, like the statements, are part of
the same code entity, the block. It makes sense for them to be aligned.
They aren't part of the if().
Not all that many people agree.
Brian
--
If televison's a babysitter, the Internet is a drunk librarian who
won't shut up.
-- Dorothy Gambrell (http://catandgirl.com)
So they could be used by students to verify whether their instructors are
up to scratch as well as by instructors to skill up.
> 2) A listing of questions that will not be answered in c.l.c, because
> you are supposed to learn how on your own. then, you can reply to the
> standard questions (reverse a string, palindrome, etc, etc) by a simple
> link to the wiki. They might actually learn something useful while
> visiting, even if they don't find a compilable solution to their
> problem.
A "common homework problem - do not solve" list seems like a good idea -
source code for those problems would never be published on the wiki, but
hints and suggestions could be.
We don't have to argue, but if you, pete and I are using different
understandings of what that style means, we're not communicating
effectively.
I've never coded to Indian Hill style guidelines before so I'm basing my
understanding on this:
<http://www.chris-lott.org/resources/cstyle/indhill-cstyle.html>.
I don't see anything intrinsically wrong with it, but I'm not accustomed
to it either. According to the Jargon file, that's Whitesmiths style:
<http://www.catb.org/~esr/jargon/html/I/indent-style.html>. It may be
part of other styles or known by other names.
> This is my favored style. The braces, like the statements, are part of
> the same code entity, the block. It makes sense for them to be aligned.
> They aren't part of the if().
>
> Not all that many people agree.
I don't think the justification you give has any more or less objective
weight than saying for another alignment that "the braces delimit the
block, and should be distinct from it". Bracing style is probably a
personal choice in the absence of other requirements.
Mine too. However it actually makes a sort of sense. The if
statement controls stuff, and all that stuff is indented. I use
exactly this sort of thing in Pascal, where things may read:
IF whatever THEN BEGIN
    stuff;
    and more stuff; END;
still more stuff.
Somebody in this thread has been stripping attributions of quoted
material. You know who you are. Please don't.
> On Sun, 22 Jan 2006 05:01:47 +0000, Default User wrote:
>> Randy Howard wrote:
>>> Netocrat wrote
>>> (in article <pan.2006.01.22....@dodo.com.au>):
>>>
>>>>>>>> Indian Hill!
>>>>>>> A close relative of K&R style, but more focused on what rather
>>>>>>> than how, yes?
>>>>>>
>>>>>> Mainly known for it's silly indenting of the braces, instead of
>>>>>> what's inside them.
>>>>
>>>> I thought it used K&R brace style and I took pete's recommendation as
>>>> seriously intended.
>>>
>>> if (foo)
>>>     {
>>>     printf("foo detected\n");
>>>     }
>>>
>>> Kind of strange to my eye.
>
> I don't see anything intrinsically wrong with it, but I'm not accustomed
> to it either. According to the Jargon file, that's Whitesmiths style:
Ack, you're right. I was mixing up the two in my mind. Sorry
about confusing the issue. :-(
> Randy Howard wrote:
>> if (foo)
>> {
>> printf("foo detected\n");
>> }
>>
>> Kind of strange to my eye.
>
> Mine too. However it actually makes a sort of sense. The if
> statement controls stuff, and all that stuff is indented. I use
> exactly this sort of thing in Pascal, where things may read:
>
> IF whatever THEN BEGIN
> stuff;
> and more stuff; END;
> still more stuff.
It's admittedly been a while since I did Pascal ('88 or so?),
but wouldn't the actual analogy be:
IF whatever THEN
    BEGIN
    stuff;
    and more stuff;
    END;
still more stuff.
???
One problem is that we should discourage newbies from diving into a question
and simply writing code. Normally it is best to plan first, and get advice
before you start coding, rather than make a hash of things and then get a
helper to patch things up.
Unfortunately that makes it too easy for someone who has no intention of
putting any effort in to ask for help, in the hope of getting a working
program back. So most regs ask for an attempted program.
A wiki with homework hints would solve this.
The brace style is a little different and I was serious.
> >>
> >> if (foo)
> >> {
> >> printf("foo detected\n");
> >> }
> >>
> >> Kind of strange to my eye.
>
> I don't see anything intrinsically wrong with it,
> but I'm not accustomed
> to it either. According to the Jargon file, that's Whitesmiths style:
> <http://www.catb.org/~esr/jargon/html/I/indent-style.html>. It may be
> part of other styles or known by other names.
>
> > This is my favored style. The braces, like the statements,
> > are part of
> > the same code entity, the block.
> > It makes sense for them to be aligned.
> > They aren't part of the if().
> >
> > Not all that many people agree.
>
> I don't think the justification you give
> has any more or less objective
> weight than saying for another alignment that "the braces delimit the
> block, and should be distinct from it". Bracing style is probably a
> personal choice in the absence of other requirements.
/* This is Indian Hill */
if (expr) {
        statement;
} else {
        statement;
        statement;
}
http://www.psgd.org/paul/docs/cstyle/cstyle09.htm
http://www.psgd.org/paul/docs/cstyle/cstyle.htm
http://www.google.com/search?hl=en&ie=ISO-8859-1&q=%22indian+hill%22+C+style
--
pete
Ordinary Indian Hill is like this:
if (expr) {
        statement;
}
It's when you have a textually too long expression
that Indian Hill style is like:
if (expression ...................
        && ......................)
{
        statement;
}
--
pete
Yup. But in both languages my objective is clarity commensurate with
conservation of vertical space. Thus I also use:
if (whatever) {
    stuff;
    and more stuff;
}
still more stuff.
if (foo) {
    printf("foo detected\n");
}
--
Joe Wright
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---
Certainly, that's why there are personal preferences. I had
independently developed Whitesmith's on my own. In some of my early
projects, I'd use a ruler to check for misaligned stuff on printouts,
so I found it easier to have the braces part of the block I was
checking.
> Default User wrote:
> >
> > Randy Howard wrote:
> >
> > > Netocrat wrote
> > > (in article <pan.2006.01.22....@dodo.com.au>):
> > >
> > > >>>>> Indian Hill!
> > > >>>> A close relative of K&R style, but more focused on what
> > > >>>> rather than how, yes?
> > > >>>
> > > >>> Mainly known for it's silly indenting of the braces, instead
> > > >>> of what's inside them.
> > > >
> > > > I thought it used K&R brace style and I took pete's
> > > > recommendation as seriously intended.
> > >
> > > if (foo)
> > > {
> > > printf("foo detected\n");
> > > }
> > >
> > > Kind of strange to my eye.
> >
> > This is my favored style. The braces, like the statements, are part
> > of the same code entity, the block.
> > It makes sense for them to be aligned.
> > They aren't part of the if().
> >
> > Not all that many people agree.
>
> Ordinary Indian Hill is like this:
> if (expr) {
> statement;
> }
Yeah, I didn't notice what he called it. The style I use is called
Whitesmith's. I think that was the place Plauger used to work.
> Randy Howard wrote:
> > Netocrat wrote
> >
> >>>>>> Indian Hill!
> >>>>> A close relative of K&R style, but more focused on what rather
> >>>>> than how, yes?
> > > > >
> >>>> Mainly known for it's silly indenting of the braces, instead of
> >>>> what's inside them.
> > >
> >> I thought it used K&R brace style and I took pete's recommendation
> >> as seriously intended.
> >
> > if (foo)
> > {
> > printf("foo detected\n");
> > }
> >
> > Kind of strange to my eye.
>
> Mine too. However it actually makes a sort of sense. The if
> statement controls stuff, and all that stuff is indented.
The theory behind Whitesmith's is that stuff between braces is a
compound statement. If you'd indent a single statement, why not a
compound.
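A short sketch of that parallel (a complete toy program with made-up
messages): a single controlled statement is indented one level, so in
Whitesmiths style the compound statement - braces included - gets the
same treatment.

#include <stdio.h>

int main(void)
{
    int done = 1;

    /* a single controlled statement is indented one level ... */
    if (done)
        puts("single statement");

    /* ... so the whole compound statement, braces and all, is
     * indented the same way in Whitesmiths style
     */
    if (done)
        {
        puts("first statement of the block");
        puts("second statement of the block");
        }

    return 0;
}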
However, trying to apply logic isn't going to convince many people. You
like what you like. The most important thing on any cooperative project
is to pick a style and make it consistent. If that happens to be OTBS,
that would be fine, even though not a style I've used in 20 years.
It's easier these days with indenters and text editors that you can set
for auto-indent in various styles.
speaking of "lists to be placed on the wiki" - anyone else think that
migrating the faq to the wiki [maybe as a "working, live copy", with the
'official' copy still at c-faq.org] might be a good idea?
c-faq.com, rather - and, I meant only if proper permission is given, of
course [though, why doesn't the newsgroup have its own FAQ that isn't
"owned" by a specific person or company?]
I was going to suggest that too, but the Limited Access Notice intercepts
direct links.
[...]
> Netocrat wrote:
>> The voting extension has also received a bit of a work-out, in the
>> process uncovering a subtle issue that required manual correction of
>> the logged eligible voter counts, although it didn't actually
>> invalidate the tallying process. The fix is being worked on.
The extension is lacking a protocol for informing nominees of nomination
and allowing them to decline. We'll hold off on using it again until a
protocol is worked out and we've confirmed that no one wants to belatedly
decline nomination.
A similar question was asked in the starting post of this sub-thread:
<pan.2005.12.29....@dodo.com.au>, and similar ideas were
expressed in later posts. It's Steve Summit's call as to whether a scheme
like that's acceptable - he's put the majority of effort into the FAQ and
holds copyright on it. In any case, the wiki won't set up a competing
alternative, it'll have to be something cooperative as you suggest.
> c-faq.com, rather - and, i meant only if proper permission is given, of
> course
The question of permission and copyright was addressed in the follow-up
post: <dp31j7$ac5$1...@eskinews.eskimo.com>.
> [though, why doesn't the newsgroup have its own faq that isn't
> "owned" by a specific person or company?]
That question was raised and responded to in the original wiki thread
(google for "C FAQ wiki").
Because Steve Summit has done an excellent job, and made his work
freely available to all of us. I'm sure he has retired in luxury
with the income it generates.
>Default User wrote:
>>
>> Randy Howard wrote:
>>
>> > Netocrat wrote
>> > (in article <pan.2006.01.22....@dodo.com.au>):
>> >
>> > >>>>> Indian Hill!
>> > >>>> A close relative of K&R style, but more focused on what rather
>> > >>>> than how, yes?
>> > >>>
>> > >>> Mainly known for it's silly indenting of the braces, instead of
>> > >>> what's inside them.
>> > >
>> > > I thought it used K&R brace style and I took pete's recommendation
>> > > as seriously intended.
>> >
>> > if (foo)
>> > {
>> > printf("foo detected\n");
>> > }
>> >
>> > Kind of strange to my eye.
>>
>> This is my favored style. The braces, like the statements, are part of
>> the same code entity, the block.
>> It makes sense for them to be aligned.
>> They aren't part of the if().
>>
>> Not all that many people agree.
>
>Ordinary Indian Hill is like this:
> if (expr) {
> statement;
> }
>
Actually, the Indian Hill reference seems to say to use whatever brace
style you like. They say the K&R style is preferred if you don't
already have a favorite.
What I find strange is the indented (on a separate line) function
types:
************
        node_t *
tail(nodep)
node_t *nodep;
{
.
.
.
}
*************
I confess I didn't look up the rationale, because I can't imagine any
argument that would persuade me to like it.
>It's when you have a textually too long expression
>that Indian Hill style is like:
> if (expression ...................
> && ......................)
> {
> statement;
> }
--
Al Balmer
Sun City, AZ
I'm not going to advocate for or against a particular style here, but
this argument seems very weak to me. I don't see any evidence to
support the thesis that a language designer is necessarily interested
in style in general.
Further, I don't see any basis for arguing that a language designer's
style preference has any objective weight. What makes a language's
developer any more of an authority on what constitutes a good style?
The only warrant for that claim seems to be one of authority by
association: the designer is an authority on the language, and hence
one on its proper use. There's no logical justification.
> Given that style is so subjective, the
> property "given birth to alongside the language and endorsed by its
> parents" is a pretty objective basis for preference (unless there's
> another style I don't know of with that property).
I don't think it's objective at all. The fact of its origin is, but
attachment to preference is not.
> Would you suggest leaving the style guidelines at "consistent"?
I would, perhaps with suggestions such as moderate line length,
avoiding //-style comments, and avoiding tabs (or at least the mixing
of tabs and spaces for indentation). I'd be happier, personally, to
see no guidelines than to see too many. As with prose style, I
believe style guidelines are often counterproductive, leading to a
tiresome and sometimes awkward consistency for no sake but its own.
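Concretely, a fragment following just those minimal suggestions might look
something like the sketch below (the function and names are made up):
/* */ comments rather than //, spaces rather than tabs, and lines kept to
a moderate length.

#include <stddef.h>

/* Return the arithmetic mean of the first n elements of a,
 * or 0.0 if n is zero.
 */
double mean(const double *a, size_t n)
{
    double sum = 0.0;
    size_t i;

    if (n == 0)
        return 0.0;
    for (i = 0; i < n; i++)
        sum += a[i];
    return sum / n;
}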
But in the end, it's the editors of the Wiki who are doing the work,
and the decision should be yours.
--
Michael Wojcik michael...@microfocus.com
Shakespeare writes bombast and knows it; Mr Thomas writes bombast and
doesn't. That is the difference. -- Geoffrey Johnson
Well, that's for K&R type functions, not modern ones with prototypes.
how else would you break/indent that? [without changing the actual token
list being formatted]
The reason for tail(nodep) being on a line by itself [and thus the
return type on a line before it] is to allow you to search for /^tail(/.
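A minimal sketch of that layout with a prototyped definition (the node
type here is made up purely for illustration); the name still begins the
line, so /^tail(/ matches only the definition.

#include <stddef.h>

struct node {
    struct node *next;
};

/* return type on its own line, function name at the start of the next */
struct node *
tail(struct node *nodep)
{
    while (nodep != NULL && nodep->next != NULL)
        nodep = nodep->next;
    return nodep;
}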
Well, we know that Brian Kernighan, at least, is *very*
interested in good style (though as I'll mention below, there's
much more to style than brace placement and indentation).
> Further, I don't see any basis for arguing that a language designer's
> style preference has any objective weight. What makes a language's
> developer any more of an authority on what constitutes a good style?
> The only warrant for that claim seems to be one of authority by
> association: the designer is an authority on the language, and hence
> one on its proper use. There's no logical justification.
There's really only one open question: should a large,
consistent, multi-author, public document such as a c.l.c. Wiki
display all its code samples using a single, consistent
indentation style? If the answer is "yes", then the answer
to the secondary question of "which style should we use?" is
obvious; there's no debate: just use K&R. Period. The rationale
is straightforward, and it's not unrelated to the argument just
above (the one you tried to dismiss for having "no logical
justification"): K&R style is, quite simply, the only style that
those multiple authors and editors will ever be able to agree on.
Period.
With that said, though, and as I've already hinted at above
(and in defense of the language designers of the world who,
you've suggested, aren't necessarily interested in good style):
it's important to point out that programming style has to do with
an awful lot more than just where the curly braces go. If you're
spending all your style time debating indentation rules and
imagining that once you've made a decision you'll have made great
progress, I'm afraid you're deluding and penalizing yourself,
because it means you're not paying attention to things that
matter even more, such as whether your variable names make sense,
and your expressions clearly show what they're computing, and
your control flow isn't tangled, and your functions are
well-modularized, and all that.
Steve Summit
s...@eskimo.com
6365488721621537965031098970124592021097286151726263141
6546461340015873146324009591872241875120686
I was considering C more specifically than that. I've encountered many
comments on the white book, none of them negative much beyond "it's
probably not so appropriate for total beginners" or "it's very condensed
and requires much consideration". In particular, I've never encountered a
contradiction of the claim that - and have fairly often encountered the
claim itself - the book is elegant in its concise expression of C idiom.
"Elegant" and "idiom" are close relatives of "style", so a thesis that the
designer of C at least had a powerful sense for style, whether expressly
interested in it or not, seems reasonable to me.
Also the reasons behind C's success are as relevant as its success - in
particular the appropriateness of the choices made when developing the
language's model, which I'll take up below.
> Further, I don't see any basis for arguing that a language designer's
> style preference has any objective weight. What makes a language's
> developer any more of an authority on what constitutes a good style? The
> only warrant for that claim seems to be one of authority by association:
> the designer is an authority on the language, and hence one on its
> proper use. There's no logical justification.
Developing a style involves some of the same type of choices as developing
the language does, and someone with a particular talent for the latter -
as I'm arguing C's founders had - is likely to be skilled at the former,
and is also likely to have exercised that skill in the process of writing
a comprehensive tutorial on the language (the type of choices aren't
identical, but they're similar enough to presume a relationship).
>> Given that style is so subjective, the property "given birth to
>> alongside the language and endorsed by its parents" is a pretty
>> objective basis for preference (unless there's another style I don't
>> know of with that property).
>
> I don't think it's objective at all. The fact of its origin is, but
> attachment to preference is not.
I'm not claiming it's wholly objective, only that it's the most objective
basis I can find for choosing a shared style for a public repository - for
C in particular given the general respect for the discriminatory abilities
of its designers. If someone were to present a set of well-conducted
studies and metrics to show that another style was in general better, for
some reasonable definition of "better", then I'd reconsider.
>> Would you suggest leaving the style guidelines at "consistent"?
>
> I would, perhaps with suggestions such as moderate line length, avoiding
> //-style comments, and avoiding tabs (or at least the mixing of tabs and
> spaces for indentation). I'd be happier, personally, to see no
> guidelines than to see too many. As with prose style, I believe style
> guidelines are often counterproductive, leading to a tiresome and
> sometimes awkward consistency for no sake but its own.
I appreciate that feedback. Steve Summit's comments - that there's more
to style than "where the curly braces go" - in his follow-up post suggest
a few other general style review guidelines such as "variable names should
be descriptive", "expressions should not be unnecessarily complex",
"control structures should be used with clarity as a goal and should not
obscure flow", etc.
The advantages of that choice (leaving the layout guideline at just
"consistent") are that it would avoid the need to
restructure most new and existing code, and it would be a style that far
more of us can agree on without compromising our personal styles.
The disadvantage is that C code across the Wiki would be - and already is
- quite inconsistent.
> But in the end, it's the editors of the Wiki who are doing the work, and
> the decision should be yours.
Any c.l.c reader is a potential editor, so some newsgroup discussion prior
to making a decision helps us make sure it's an appropriate one.
>> What I find strange is the indented (on a separate line) function
>> types:
>>
>> ************
>> node_t *
>> tail(nodep)
>> node_t *nodep;
>> {
>> .
>> .
>> .
>> }
>> *************
>> I confess I didn't look up the rationale, because I can't imagine any
>> argument that would persuade me to like it.
[snip]
> The reason for tail(nodep) being on a line by itself [and thus the
> return type on a line before it] is to allow you to search for /^tail(/.
Bingo, been doing that (including the legacy declarations back
then) as shown above ever since I started using vi as a text
editor, oh, hmmm, 20+ years ago. Oh, and I'm a 3 space indenter
instead of 4. :-)
Gee, I certainly hope the wiki is not going to try to get
everyone that reads c.l.c to agree on a style. hehe
>> What I find strange is the indented (on a separate line) function
>> types:
>>
>> ************
>> node_t *
>> tail(nodep)
>> node_t *nodep;
>> {
>> .
>> .
>> .
>> }
>> *************
>> I confess I didn't look up the rationale, because I can't imagine any
>> argument that would persuade me to like it.
>
>Well, that's for K&R type functions, not modern ones with prototypes.
I assumed that the first-line indentation would be the same
regardless.
>how else would you break/indent that? [without changing the actual token
>list being formatted]
I wouldn't.
Nothing wrong, imo, with
node_t *tail(node_t *nodep)
{
.
.
.
}
>
>The reason for tail(nodep) being on a line by itself [and thus the
>return type on a line before it] is to allow you to search for /^tail(/.
My editor can find functions without that much help. What's the
justification for the type being indented?
>Jordan Abel wrote
>(in article <slrndta2kt.2...@random.yi.org>):
>
>>> What I find strange is the indented (on a separate line) function
>>> types:
>>>
>>> ************
>>> node_t *
>>> tail(nodep)
>>> node_t *nodep;
>>> {
>>> .
>>> .
>>> .
>>> }
>>> *************
>>> I confess I didn't look up the rationale, because I can't imagine any
>>> argument that would persuade me to like it.
>
>[snip]
>
>> The reason for tail(nodep) being on a line by itself [and thus the
>> return type on a line before it] is to allow you to search for /^tail(/.
>
>Bingo, been doing that (including the legacy declarations back
>then) as shown above ever since I started using vi as a text
>editor, oh, hmmm, 20+ years ago. Oh, and I'm a 3 space indenter
>instead of 4. :-)
Hmmm.... "As shown above" is not as written. Does your reader compress
spaces?
BTW, I think even vi can identify a function without needing it at the
beginning of the line. Can't it?
>
>Gee, I certainly hope the wiki is not going to try to get
>everyone that reads c.l.c to agree on a style. hehe
"To dream the impossible dream ..."
It also makes it easier to spot functions (and keep long lines in check)
where there is a long return type.
I must admit I found it strange when I first read it, but it's now my
default style.
--
Ian Collins.
> Hmmm.... "As shown above" is not as written. Does your reader compress
> spaces?
It seems to at the start of a line only, which I hadn't noticed
before, and only when replying, apparently. Grrrr.
> BTW, I think even vi can identify a function without needing it at the
> beginning of the line. Can't it?
Probably, but /^funcname has been working for decades, and is so
ingrained in the muscle memory as to be automatic.
Two things:
1) Should all code examples be compilable? If this consideration hasn't
already been established, I'd like to suggest each example given _should
be_ compilable for the sake of clarity.
2) I, like others, am very interested in the progress, direction and
discussion of the clc-wiki, therefore I request a new thread started for
this sole purpose. Digging back in my newsreader, through 275 posts of
the "gets()-dangerous?" thread, is a hassle.
Thanks to all.
Dieter
I suggest discussion of it on the wiki itself instead.
Then your problem with the code you pasted goes beyond the indentation.
>>
>>The reason for tail(nodep) being on a line by itself [and thus the
>>return type on a line before it] is to allow you to search for /^tail(/.
>
> My editor can find functions without that much help. What's the
> justification for the type being indented?
No idea. I actually don't like it, but that's the only change i'd make.
What editor do you use that can easily find the function [and no false
hits from calls to the function] with only the name? Anyway, if i'm
looking for a function definition in a large source tree, it's nice to
be able to use grep.
Is it even possible to "identify a function" and distinguish it from a
call to the function without doing something like putting it on the
beginning of the line?
Nope; regular expressions cannot "count parentheses", so finding
definitions at file scope is restricted to guessing. One can look for the
last
identifier before opening parens (not countable either...)
followed by everything but braces and closing parens, closing
parens, optional whitespace and opening braces. Add commented
out functions for fun.
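As a small made-up illustration of that guessing: in the unit below, even
the line-anchored search /^old_tail(/ gets a false hit, because a regular
expression has no idea that the first "definition" sits inside a comment,
while anything that actually parses the source can tell the difference.

/* Commented-out code defeats a purely textual search:
old_tail(node_t *nodep)
{
    return nodep;
}
*/

/* the only live definition in this translation unit */
int
increment(int x)
{
    return x + 1;
}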
Cheers
Michael
--
E-Mail: Mine is an /at/ gmx /dot/ de address.
ctags/etags help the editor...
vi, Emacs, NEdit and others can handle them.
Guessing function identifiers works only within certain
restrictions if one does not want to parse the whole
source file, exclude string literals and comments, etc.
Where did we jump from "is it even possible to ..." to "is it even
possible using regular expressions to ..."?
In conjunction with ctags, which uses a parser more powerful than
regular expressions, vi can do a lot of things that regular expressions
can't do.
--
www.designacourse.com
The Easiest Way To Train Anyone... Anywhere.
Chris Smith - Lead Software Developer/Technical Trainer
MindIQ Corporation
In an earlier post I referenced a template for summary lines below code,
the details and usage of which are explained on the Help:Editing page
(also accessible through the "Editing help" link whilst editing pages).
The template suggests four classifications for code blocks:
* full program: will compile to executable as-is (a minimal sketch follows
this list)
* compilable unit: will compile to a library as-is
* complete snippet: will compile when inserted as-is into an appropriate
location in other code
* incomplete snippet: cannot be made to compile as-is - e.g. because it
contains ellipses indicating "fill in appropriate bits here"
Perhaps a fifth is necessary:
* base program: contains the main() function and will compile as-is to
executable when linked with non-standard but wiki-defined library units
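For instance, a block classified as a full program would be something as
small and self-contained as this (the message is arbitrary):

#include <stdio.h>

int main(void)
{
    puts("this block compiles and links to an executable as-is");
    return 0;
}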
>> 2) I, like others, am very interested in the progress, direction and
>> discussion of the clc-wiki, therefore I request a new thread started for
>> this sole purpose. Digging back in my newsreader, through 275 posts of
>> the "gets()-dangerous?" thread, is a hassle.
The problem with that is that it wouldn't stop people from posting to the
continuing discussions in this thread, and having two disparate threads
would only make following the discussion harder. For most purposes, a new
thread /was/ started when the subject header changed.
Does your newsreader not allow you to filter on subject header? Actually,
as I recall I found that fraught in Thunderbird - it lost threading
ability when filtering - one of the reasons I switched to Pan.
> I suggest discussion of it on the wiki itself instead.
Yes, more specific discussion should focus within the talk page
associated with each content page, and for more general
discussion/suggestions, the "Planning:Public comments" page exists.
I've mostly tried to limit my posts here to information about additions
and changes that someone previously not interested in the wiki might like
to know about, but I've been more willing to engage in specific discussion
because I don't believe that many people are actively monitoring the wiki.
For anyone wanting to rejoin a specific style discussion on the wiki,
there's a summary of the relevant parts of this thread on the discussion
page for the "Code Style Guidelines (for this wiki)" page:
<http://clc-wiki.net/wiki/Talk:Code_Style_Guidelines_%28for_this_wiki%29>
The compiler does it all the time ;-)
Can straight vi even use ctags? Or are you thinking of vim?
[I don't even know how to use ctags anyway]
Your clock is misset. Your article got here about 3 hours before
you wrote it :-)
? You asked "How else would you ...", and I gave an example. The
indentation of the type was the only thing I was questioning - I've
seen the rest before.
>
>>>
>>>The reason for tail(nodep) being on a line by itself [and thus the
>>>return type on a line before it] is to allow you to search for /^tail(/.
>>
>> My editor can find functions without that much help. What's the
>> justification for the type being indented?
>
>No idea. I actually don't like it, but that's the only change i'd make.
>
>What editor do you use that can easily find the function [and no false
>hits from calls to the function] with only the name?
I use Visual Slickedit, but of course it isn't the only editor that
has this kind of capability. It doesn't produce false hits, but even
if it did, I can't see that as a serious problem for an editor.
>Anyway, if i'm
>looking for a function definition in a large source tree, it's nice to
>be able to use grep.
I prefer to right click and choose "Go to Definition" :-)
It's also very nice to right click and choose "Go to Reference", which
gives me all the references to a function (or other token) in the
workspace (which can be the entire source tree.)
I know enough vi to get by if needed, but for everyday work, I'm
definitely a fan of modern editors.
With only a single line of context? [limitations of both vi and grep]
I imagine that depends on what you mean by straight vi. ctags support
is certainly not limited to vim. Most vi implementations that I've come
across will use a tags file.
A list of ctags tools is at http://ctags.sourceforge.net/tools.html,
though the list is incomplete (for example, it doesn't include the
standard GNU pager called less; and there are web sites that suggest some
versions of more may use tags files as well).
In any case, this is quickly drifting off topic for the group...
??? Time zone seems correct, time seems correct...
Did I miss something?
Puzzled
See my other reply (where I mention ctags in conjunction with vi).
The thing is, we were talking about regular expressions and searching
for ^ followed by an identifier upthread.
This one seems ok. Maybe I added in place of subtracted?
Get a real editor, then. Why should we suffer a braindead style of
declarations just because vi is barely better than EDLIN?
Richard
That 'braindead style' you speak of (relating to return type placement
at the start of a function definition) is very common practice, and may
be found in many 'style' guides. If you have man pages on your favorite
system, try typing 'man style' sometime and see what it says.
I personally place function type on a line by itself for aesthetic reasons
as my editor of choice (brief) can find the function and index it in a
routine tree regardless of whether or not the type is on the same line.
(I am however required to put the opening brace on the next line.)
As someone else has already mentioned, type placement on a line
by itself allows grep to easily find a definition when you're faced with
multiple source files (grep ^func *.c) so we can see a few benefits in
doing so. What benefits are gained by putting the return type on the
same line? (other than the obvious: 1 less line of code to maintain) :-)
Mark
>That 'braindead style' you speak of (relating to return type placement
>at the start of a function definition) is very common practice,
But only for historical reasons. Just because some people still wipe
their a*ses with leaves, doesn't mean you should stop buying bog
paper.
>and may be found in many 'style' guides.
... typically written by those of us who learned to programme on
PDP-11s and Z80s, and who had no choice but to use tools that today
seem barbaric..
>As someone else has already mentioned, type placement on a line
>by itself allows grep to easily find a defintion when you're faced with
>multiple source files (grep ^func *.c) so we can see a few benefits in
>doing so.
Sure, but again, who needs such "primitive" tools when sensible editor
suites can do this just as easily :-)
>What benefits are gained by putting the return type on the
>same line?
you avoid interminable arguments about bizarre style.
Mark McIntyre
--
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
> "Richard Bos" <r...@hoekstra-uitgeverij.nl> wrote in message
> news:43d79ef1....@news.xs4all.nl...
> > Jordan Abel <rand...@gmail.com> wrote:
> >
> >> With only a single line of context? [limitations of both vi and grep]
> >
> > Get a real editor, then. Why should we suffer a braindead style of
> > declarations just because vi is barely better than EDLIN?
>
> That 'braindead style' you speak of (relating to return type placement
> at the start of a function definition) is very common practice,
It is indeed very common in Ganoo circles. Many things that are common
amongst open sores adherents are popular for no good reason; this is one
of them.
> and may
> be found in many 'style' guides. If you have man pages on your favorite
> system, try typing 'man style' sometime and see what it says.
"No manual entry for style."
> As someone else has already mentioned, type placement on a line
> by itself allows grep to easily find a definition when you're faced with
> multiple source files (grep ^func *.c) so we can see a few benefits in
> doing so. What benefits are gained by putting the return type on the
> same line?
It
is semantically sensible.
You
do not put the first word of a sentence on a new line, right?
Or
do you perhaps do that, as well?
Richard
So, you locate your file with grep. What are you going to do with it?
Edit it with sed?
Fair enough. However:
> I've encountered many
> comments on the white book, none of them negative much beyond "it's
> probably not so appropriate for total beginners" or "it's very condensed
> and requires much consideration". In particular, I've never encountered a
> contradiction of the claim that - and have fairly often encountered the
> claim itself - the book is elegant in its concise expression of C idiom.
Well, we don't know how representative your sample is, but let's
assume that there's some popular consensus that the "expression of
C idiom", as you put it, in K&R is "elegant".
> "Elegant" and "idiom" are close relatives of "style",
I don't think so. That may be because I have a degree in literature
and am married to a rhetorician, but I believe this is a hard thesis
to support. I can see a possible case for defining "style" in terms
of pragmatics, ie as something like "choice of idiom and manner of
its expression in the context of the utterance", but "elegance" is at
best only one possible dimension of style (and a rather nebulous one
at that).
Further, I can see plenty of potential arguments in favor of inelegant
styles (eg ones that advocate certain kinds of verbose description or
adherence to rigidly-defined templates). I might not make such
arguments myself, but they demonstrate that style can be argued at
cross-purposes to elegance.
However, this has gotten pretty far off-topic, and my point was quite
narrow to begin with: I'm not buying your argument for favoring K&R
style, but I don't have any objection to your favoring it, personally
or for the Wiki. And for all I know your argument may seem plausible
to many.
> > But in the end, it's the editors of the Wiki who are doing the work, and
> > the decision should be yours.
>
> Any c.l.c reader is a potential editor, so some newsgroup discussion prior
> to making a decision helps us make sure it's an appropriate one.
Sure, in principle, and I'm all for discussion, but in practice some
people will be doing the work, and it seems only right to let them
make the decisions - though it's very kind of them to listen to other
opinions.
--
Michael Wojcik michael...@microfocus.com
Advertising Copy in a Second Language Dept.:
The precious ovum itself is proof of the oath sworn to those who set
eyes upon Mokona: Your wishes will be granted if you are able to invest
it with eternal radiance... -- Noriyuki Zinguzi
Gack.
Time to check your insurance policy - your orthodontist appears to
have slipped, and accidentally removed your frontal lobes...
gd&r