
egcs as the standard linux compiler


Christopher Browne

May 22, 1998

On 22 May 1998 16:12:39 -0400, Paul D. Smith <psm...@baynetworks.com> wrote:
>%% jb...@best.com (Joe Buck) writes:
>
> jb> Albert D. Cahalan <acah...@jupiter.cs.uml.edu> wrote:
>
> >> Reports seem to indicate that gcc 2.8 is buggier than egcs.
>
> jb> Yes, the egcs releases are far more solid than 2.8.1 is. Almost all
> jb> of the gcc maintainers are now working on egcs.
>
>If true, doesn't this strike anyone as A Bad Thing?
>
>I mean, the idea of egcs as a proving ground for cool new technology is
>great, but don't people think getting 2.8.1 stable is just as, if not
>more, important?

Parallel this with the development of the Linux kernel...

Which is more important?
- Getting 2.0.34 stable? or
- Getting 2.1.104 stable?

There are certainly more people working on 2.1.x; the overall process
encourages that, as they try to restrict the degree of change that goes
into 2.1.x.

The changes made in 2.0.x are pretty much a set of changes that were
already made to 2.1.x.

I would suggest that what is happening is that GCC is being turned into
a "Bazaar" project.

The "normal" state of affairs would be that EGCS 1.0.x would be based on
the "very stable" GCC 2.8.x, and add in "entertaining experimental
changes."

Good changes from EGCS would then be passed back into the "very stable"
GCC 2.8.x, incrementing x.

At some point, they decide to move to "EGCS 2.0" (or some such number),
and deploy the "stabilized" 1.x version of EGCS as the "new, stable"
GCC 3.0.

That's the "normal" approach. (And I'm being "wild" with the
numbering... Real life might use other numbers...)

Reality is that GCC development had somewhat broken down so that 2.8.x
has been, and still is, fairly much "experimental," and not nearly as
stable as one might like. It will take some effort to get it "fixed
up," and it seems to me that it is quite likely that what *REALLY*
happens is that those that are working on EGCS (that includes much of
the "former GCC team") will at some point stabilize a release of EGCS
and essentially push it out to being the "modern and stable" GCC
release.

In some respects, this has really not been a good year for the FSF;
there have been enough occurrences of RMS saying things that are
exceedingly "flameworthy" as to encourage the growth of some really
rather independent development efforts.

It is quite possible that the "wider Linux community" could effectively
make the FSF, as an organization, irrelevant.

When you consider the longterm efforts of FSF contributors including
RMS, that is unfortunate.

When you consider that the FSF is a fairly small organization with
clearcut leadership, and that the Linux community really has no single
fixed "centre," it has to make one ask some questions about how such a
relatively "unthinking mob" (which is not intended as a "flame" to
"Linux people;" merely to suggest that the diversity makes it somewhat
difficult to make *clear* decisions) can so readily overcome a group
that has a clear mandate and clear leadership.

--
"Linux: the operating system with a CLUE... Command Line User
Environment". (seen in a posting in comp.software.testing)
cbbr...@hex.net - <http://www.hex.net/~cbbrowne/lsf.html>

David Kastrup

May 23, 1998

cbbr...@news.amrcorp.com (Christopher Browne) writes:

> In some respects, this has really not been a good year for the FSF;
> there have been enough occurrences of RMS saying things that are
> exceedingly "flameworthy" as to encourage the growth of some really
> rather independent development efforts.
>
> It is quite possible that the "wider Linux community" could effectively
> make the FSF, as an organization, irrelevant.
>
> When you consider the longterm efforts of FSF contributors including
> RMS, that is unfortunate.

First, they are not working on opposing goals. Second, I can't see
how it is unfortunate that RMS' ideas and ideals get embraced and used
by so many people that he himself is no longer able to completely
control the movement he initiated. RMS has always fought for software
being in the hands of the public, not for software being only in his
hands.

Even if some people seem not to understand this.


--
David Kastrup Phone: +49-234-700-5570
Email: d...@neuroinformatik.ruhr-uni-bochum.de Fax: +49-234-709-4209
Institut für Neuroinformatik, Universitätsstr. 150, 44780 Bochum, Germany

Horst von Brand

May 23, 1998

In article <6k4sfb$c7...@hector.sabre.com>, Christopher Browne wrote:
>On 22 May 1998 16:12:39 -0400, Paul D. Smith <psm...@baynetworks.com> wrote:
>>%% jb...@best.com (Joe Buck) writes:
>> jb> Albert D. Cahalan <acah...@jupiter.cs.uml.edu> wrote:

>> >> Reports seem to indicate that gcc 2.8 is buggier than egcs.

>> jb> Yes, the egcs releases are far more solid than 2.8.1 is. Almost all
>> jb> of the gcc maintainers are now working on egcs.

>>If true, doesn't this strike anyone as A Bad Thing?

Not really...

>>I mean, the idea of egcs as a proving ground for cool new technology is
>>great, but don't people think getting 2.8.1 stable is just as, if not
>>more, important?

Why have three development branches (bleeding edge (egcs snapshots), stable
progressive (egcs releases) and utterly stable (gcc)), when Linux has shown
that two are enough?

[...]

>I would suggest that what is happening is that GCC is being turned into
>a "Bazaar" project.

Exactly! And this has attracted lots of talent to the egcs effort. The fact
that (almost) weekly snapshots are available means they get much more
thorough testing, so bugs tend to be short-lived.

>The "normal" state of affairs would be that EGCS 1.0.x would be based on
>the "very stable" GCC 2.8.x, and add in "entertaining experimental
>changes."

There is no "very stable" gcc-2.8.x; there is a "very stable" egcs-1.0.3a
instead. Sure, egcs is based on gcc-2.8 snapshots. The egcs-1.0.x stable
releases will be followed by egcs-1.1 (another stable release) as soon as
the current branch stabilizes, and work there will mostly go on fixing bugs.
Meanwhile, progress will continue to be folded back as 1.2 or so. Just like
Linux development works; they just copied the (extremely successful) scheme
Linus spearheaded.

Note that gcc-2.7.2 was Nov 1995, and gcc-2.8.0 only appeared in Jan 1998.
There were tons of important improvements that were delayed, like real
support for ix86: gcc used to cater (badly, to boot) to i386 and i486, even
though i486, Pentium and PPro were probably the most used and visible
targets. Look at your standard CFLAGS when compiling the kernel: the whole
-malign-* stuff should just be assumed by the compiler, and the
-fno-strength-reduce flag inhibits a quite basic optimization to work
around a long-known compiler bug. Not to mention massive changes in C++ the
language (gcc-2.7.2's support for C++ was quite broken, even for its time).
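For concreteness, the workarounds being complained about here lived right
in the kernel's build files. A rough sketch of what a 2.0.x-era x86 kernel
Makefile carried (a reconstruction from memory, not a verbatim quote; the
exact flags varied by kernel and gcc version):

```makefile
# Sketch of 2.0.x-era kernel build flags (reconstruction, not verbatim).
# -fno-strength-reduce works around the long-known gcc optimizer bug;
# the -m486/-malign-* tuning is exactly the sort of thing the poster
# argues the compiler ought to assume on its own.
CFLAGS := $(CFLAGS) -Wall -O2 -fomit-frame-pointer -fno-strength-reduce -pipe

ifdef CONFIG_M486
CFLAGS := $(CFLAGS) -m486 -malign-loops=2 -malign-jumps=2 -malign-functions=2
endif
```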

>Good changes from EGCS would then be passed back into the "very stable"
>GCC 2.8.x, incrementing x.

That is the idea, but AFAICS egcs will split off for good.

>At some point, they decide to move to "EGCS 2.0" (or some such number),
>and deploy the "stabilized" 1.x version of EGCS as the "new, stable"
>GCC 3.0.

[...]

>In some respects, this has really not been a good year for the FSF;
>there have been enough occurrences of RMS saying things that are
>exceedingly "flameworthy" as to encourage the growth of some really
>rather independent development efforts.

True, the FSF did (and is still doing) an enormous job. But in many ways
they screwed up badly: the long awaited Hurd is still at 0.2, and last time
I tried to install it to play around a little it was impossible to do so (I
really tried, spending some two or three days just getting grub to grok the
disk, and that just failed utterly; requests for help went ignored, so I
just gave up). The extremely important gcc development was stalled way too
long (at least in the public eye). All of this was of no real consequence
as long as free software was just a way for money-starved universities to
put their hands on critical development software, while "real people" in
the "real world" used "real software" (i.e., with licences that cost "real
money"). This was radically changed by Linux, which has grown to be
"mainstream Unix", not just a "playground for hackers and hippies" anymore.
The FSF just didn't keep up with its charter to oversee *all* free
software; it probably never could have, and it's probably better that way
too... It really can't be more than one more player in a worldwide bazaar
of free software; there are several independent efforts (the FSF's GNU
project, Perl, Linux itself; even the BSD, Aladdin (of Ghostscript fame),
Qt and lately Netscape communities are somehow involved in this).

>It is quite possible that the "wider Linux community" could effectively
>make the FSF, as an organization, irrelevant.

>When you consider the longterm efforts of FSF contributors including
>RMS, that is unfortunate.

Why? Their efforts certainly aren't lost. Linux would have been absolutely
impossible without gcc and the free equivalents of so many Unix utilities
the FSF nurtured.

>When you consider that the FSF is a fairly small organization with
>clearcut leadership, and that the Linux community really has no single
>fixed "centre," it has to make one ask some questions about how such a
>relatively "unthinking mob" (which is not intended as a "flame" to
>"Linux people;" merely to suggest that the diversity makes it somewhat
>difficult to make *clear* decisions) can so readily overcome a group
>that has a clear mandate and clear leadership.

Force of numbers favors the "mob", for one; and if said leadership does
everything in its hands to alienate said "mob" (witness the "GNU/Linux"
flamewar, and the other recent "flameworthy" outbursts you mention above),
the outcome isn't exactly surprising. Even when the "mob" and the "group"
have rather similar goals.

Sort of reminds one of the bitching that went around in the Unix camp,
which left the field wide open for MS...
--
Horst von Brand vonb...@sleipnir.valparaiso.cl
Casilla 9G, Viña del Mar, Chile +56 32 672616

Christopher B. Browne

May 23, 1998

On 23 May 1998 11:59:04 +0200, David Kastrup <d...@mailhost.neuroinformatik.ruhr-uni-bochum.de> posted:

>cbbr...@news.amrcorp.com (Christopher Browne) writes:
>> In some respects, this has really not been a good year for the FSF;
>> there have been enough occurrences of RMS saying things that are
>> exceedingly "flameworthy" as to encourage the growth of some really
>> rather independent development efforts.
>>
>> It is quite possible that the "wider Linux community" could effectively
>> make the FSF, as an organization, irrelevant.
>>
>> When you consider the longterm efforts of FSF contributors including
>> RMS, that is unfortunate.
>
>First, they are not working on opposing goals. Second, I can't see
>how it is unfortunate that RMS' ideas and ideals get embraced and used
>by so many people that he himself is no longer able to completely
>control the movement he initiated. RMS has always fought for software
>being in the hands of the public, not for software being only in his
>hands.
>
>Even if some people seem not to understand this.

I would like to agree with you.

It is certainly possible to interpret things to indicate that RMS intends
for software to be "in the hands of the public."

Unfortunately, the "lignux" debacle as well as the continuing saga of RMS
"explaining" in GNU's Bulletin how the proper name ought to be "GNU/Linux"
suggest otherwise.

I prefer to interpret things in as positive a manner as is sensible, and
don't personally have *real* strong feelings concerning these items. The
"flame wars" and the "elaborations" that have come up surrounding these two
issues *are* supportive of the view that RMS *does* want to be the "true
leader of the free software movement," and, in that fashion, "own" things
like Linux.

The Perl FAQ thing, in contrast, shows that people at the FSF can make
mistakes, and that there are some equally vigorous and "flammable" opinions
that can come out of people that in no way stand with the FSF... I would go
along with the somewhat distinct notion that RMS has gotten "cranky" lately.
If, in contrast, RMS has gone a further step and become a "crank," it is
certain that he is not alone there...

What is unfortunate is that RMS has said enough "highly flammable" things
that many seem to be deciding that they are unwilling to continue to stand
with his leadership in the community.

Another couple of years of:

a) Non-FSF projects forking off of FSF projects,

b) RMS saying things that anger significant populations without being
correspondingly *HIGHLY* useful to the "free software community," and

c) Additional organizations "open sourcing" their products,

and the FSF may well become irrelevant to most people. I do perceive this
"shift towards irrelevance" taking place, and I do find it unfortunate.

The FSF *should* be seeking to be a highly credible organization that people
can trust, one where they could expect to send funds and see useful things
created Real Soon Now.

My "vision" (described in more detail in the "lsf.html" essay) is that there
should be millions of dollars worth of "gift economy" coming from the Linux
community, going towards developing things that are valuable to the
community such as:

- The Ultimate "Libre" File System (logging/journalling/expandable/
  shrinkable/multi-device/...)
- The "Libre" Word Processor
- The "Libre" Database System
- The "Libre" Spreadsheet
- The "Libre" Personal Finance System
- The "Libre" GUI System
- The "Libre" Compiler Suite
- ... and the list of course continues ...

It would be a real good thing if we had some places where people could
direct "gifts" or "grants" to help pay for the time of people that are well
qualified to help build these sorts of things.

The FSF is one of the more stable and longstanding organizations that
directly seeks contributions to help sponsor development and improvement of
things not unlike the list above.

Unfortunately, every time RMS fires off a "salvo" of controversial
statements, it scares off people that might otherwise be willing to invest
time, effort, and funds in the things that the FSF can help with. And that,
unfortunately, contributes to the possible "decline into irrelevance."

It doesn't much matter if the reality is that RMS is merely fine-tuning
some fine point of "GPL Law"; the point is that I *regularly* hear a
variety of people say (in conversation, not merely on Usenet) that they
think he's doing "insane" things. (Kendall, you know who you are :-).)

RMS has been scaring off the people that, based on the sorts of things I see
them do in the "free software community," *ought* to be staunch supporters
of one another.

This is not unlike the historical events of the Middle Ages and after where
one sect of Protestants would be vigorously persecuting another despite the
fact that they really had fairly close theological positions, certainly in
comparison with "true" enemies such as Moslems.

That may be an unkind comparison to those of whatever religious persuasion,
but the parallel works quite nicely.

The "free software community" is wasting efforts infighting rather than
spending the time doing things about real threats from *clearly* proprietary
software...

--
Those who do not understand Unix are condemned to reinvent it, poorly.
-- Henry Spencer <http://www.hex.net/~cbbrowne/lsf.html>
cbbr...@hex.net - "What have you contributed to Linux today?..."

Greg Lindahl

May 23, 1998

cbbr...@news.brownes.org (Christopher B. Browne) writes:

> The "free software community" is wasting efforts infighting rather than
> spending the time doing things about real threats from *clearly* proprietary
> software...

Amen. So please help stop the fighting by not making up motives for
rms and posting them on Usenet -- re-read your posting, and you'll see
what I mean.

-- g


Per Bothner

May 23, 1998

In article <6k89l0$n2...@crash.videotron.ab.ca>,
Kurt Fitzner <kfit...@nexus.v-wave.com> wrote:
>The same goes for egcs/gcc/whatever. The paramount issue is to get
>something out that is stable. It doesn't matter what features you have to
>strip out. Have no optimizations at all if you have to, but release
>something that is dependable.

This is utter nonsense. Suppose you have a choice between two compilers:
a) X is solid, with little or no optimization, and no known bugs.
b) Y works pretty well, generates code that is twice as fast, but in
rare cases has been known to make some incorrect optimizations.

So which do you use to compile your application, X or Y?

You would be a fool to choose X, because you will be beaten up in
the market, and your users will complain you are too slow.
And for what? The hypothetical and unlikely chance you might
run into a bug in the compiler? You would be silly to worry more
about that than about bugs in your own code, hardware errors,
bugs in the standard libraries, etc etc. All these might bite
you - that is why you need to do as extensive testing as you can,
and fix as many bugs as you can. But at some point you have to
ship the damn thing, and you have to decide which bugs you can live
with. This is true for optimizing compilers, as well as applications.
You just do the best you can.
--
--Per Bothner
Cygnus Solutions bot...@cygnus.com http://www.cygnus.com/~bothner

Kurt Fitzner

May 24, 1998

In article <6k4sfb$c7...@hector.sabre.com>,

cbbr...@news.amrcorp.com (Christopher Browne) writes:
> On 22 May 1998 16:12:39 -0400, Paul D. Smith <psm...@baynetworks.com> wrote:

>>I mean, the idea of egcs as a proving ground for cool new technology is
>>great, but don't people think getting 2.8.1 stable is just as, if not
>>more, important?
>

> Parallel this with the development of the Linux kernel...
>
> Which is more important?
> - Getting 2.0.34 stable? or
> - Getting 2.1.104 stable?

GCC as a package isn't anywhere to be found on www.kernel.org. Someone is
making a statement.

But, though that's a rhetorical question, the answer has to be that 2.0.X
is of paramount importance to get stable. 2.1.X can drop into deep space;
it just doesn't matter. If you don't have a rock-solid, totally stable
kernel for implementation, then what is everyone doing the work for
anyways? We've gone beyond installing Linux for the coolness factor. Now,
we want to get real work done with it.

The same goes for egcs/gcc/whatever. The paramount issue is to get
something out that is stable. It doesn't matter what features you have to
strip out. Have no optimizations at all if you have to, but release
something that is dependable. Whoever said that an egcs internal compiler
error is the fault of the code it was compiling is on drugs.

> I would suggest that what is happening is that GCC is being turned into
> a "Bazaar" project.

Bizarre is more like it.



> The "normal" state of affairs would be that EGCS 1.0.x would be based on
> the "very stable" GCC 2.8.x, and add in "entertaining experimental
> changes."
>

> Good changes from EGCS would then be passed back into the "very stable"
> GCC 2.8.x, incrementing x.

egcs splitting off from gcc is a political statement, not a programming
statement. If it were purely about programming, then it wouldn't be egcs;
it would be a development version of gcc. We're not going to see egcs
become the equivalent of the 2.1 development kernel, simply because it is
two separate groups doing it. If they had the level of joint communication
going that is needed for that, then they wouldn't be two separate groups.

The -real- question will be: which compiler will be supported in the new
Linux Standard Base? Bruce Perens will have a hot potato in his hands.
I just hope that (and I'm not technically qualified enough myself to make
this judgement) they choose the more conservative/stable one for the base
system. I just hope that the decision is made solely on the merit of
stability, not on how many cool switches are thrown in with impressive
sounding optimizations, or on the fact you can put in -O84 and get code
that's been fractally compressed and comes preoptimized for 17 different
CPU brands, but spits out an internal compiler error due to a faulty
source file.

Kurt

David Kastrup

May 24, 1998

vonb...@sleipnir.valparaiso.cl (Horst von Brand) writes:

> The FSF just didn't keep up with its charter to oversee *all* free
> software, probably never could have and it's probably better that
> way too...

Please point out exactly where in their charter they claim to want to
oversee all free software. Please explain why they then have chosen
to use a license (GPL) even on their own works which does not give
them any special status with regard to future development.

Please think twice before accusing them of something similar to what
Bill Gates tries to do with commercial software.

David Kastrup

May 24, 1998

kfit...@nexus.v-wave.com (Kurt Fitzner) writes:

> egcs splitting off from gcc is a political statement, not a programming
> statement. If it was purely for programming, then it wouldn't be egcs,
> would be a development version of gcc. We're not going to see egcs
> become the equivalent of the 2.1 development kernel, simply because of the
> fact that it is two separate groups doing it. If they had the level of
> joint communication going that is needed for that, then they wouldn't be
> two separate groups.

Please check up on the history. Gcc-2.8 was handled by the FSF in a
way that harmed a lot of people depending on gcc, probably mostly due
to lack of resources. People were accusing Cygnus of withholding the
release and only selling their vastly superior C++ front end to paying
customers, using the GPL to rip off people, when Cygnus had long since
contributed everything to the FSF, but no release resulted. Other
compiler groups (like Ada and Fortran) were also hampered by the
release delay. It was a technical necessity to do something about this.

That it was not a political statement can be seen by the fact that all
egcs contributions are required to have copyright disclaimers for the
FSF. Everything in egcs is contributed to the FSF. They just don't
want to have the public wait as long as it takes the FSF to make a
release (and two years is too long and harmful to all participants).

If you want to indulge in searching for political statements, you
might look into XEmacs, although even they have become a lot more
FSF-conformant in their demands on proper copyright disclaimers.
Even they would think better cooperation with the FSF worthwhile by
now.

Kurt Fitzner

May 24, 1998

In article <6k8buv$5b1$1...@rtl.cygnus.com>,
bot...@cygnus.com (Per Bothner) writes:

>>Have no optimizations at all if you have to, but release something that is
>>dependable.
>

> This is utter nonsense. Suppose you have a choice between two compilers:
> a) X is solid, with little or no optimization, and no known bugs.
> b) Y works pretty well, generates code that is twice as fast, but in
> rare cases has been known to make some incorrect optimizations.
>
> So which do you use to compile your application, X or Y?
>
> You would be a fool to choose X, because you will be beaten up in
> the market, and your users will complain you are too slow.
> And for what? The hypothetical and unlikely chance you might
> run into a bug in the compiler?

This isn't some application that will be sitting on store shelves where
the prettiest box, and the best advertising wins. This is not commodity
software. These are serious tools for serious applications. I don't
want to worry about my kernel crashing when turning on an optimization.

I don't care what people choose to use to compile their applications.
People can download and install what they want. My concern is what is
chosen as a 'standard'. My concern is that this is going to turn into
a feature race for all the different compiler groups, all competing for
control of the standard. My concern is that we'll be left with no stable
solution. And we're not talking about twice as fast here. Please leave
the rhetorical exaggerations out.

> You would be silly to worry more
> about that than about bugs in your own code, hardware errors,
> bugs in the standard libraries, etc etc. All these might bite
> you - that is why you need to do as extensive testing as you can,
> and fix as many bugs as you can. But at some point you have to
> ship the damn thing, and you have to decide which bugs you can live
> with.

If it has a bug, leave it out of the release until it's fixed. What
is so hard about that?

The problem with this model is that the attitude is becoming: just
release it, and 50 million internet users will test it for bugs for you.
And this isn't bad. This is a good thing, unless it's the standard
that people are relying on for stability.

Why is there such a push for a Linux Base System? Why the kernel
development split? Because of the number of users. When Linux had a small
following of hackers, it didn't matter if there were bugs. But
we're reaching a critical mass now. The number of users is growing to
be enough that you can't just push something out the door and figure that
everyone depending on it is a hacker who is doing it to be cool, or to
give Microsoft the finger. That's why Linux had such a hard time getting
into stability-critical implementations. It still does.

As I said, I don't care what gets done with egcs, or GCC, or PCC, or
anything. My concern is that a stable, reliable implementation of
something is chosen as the standard, and that a development model
is chosen where updates to it are done when they are -proven- stable.
Let's all pretend that each 'release' is going to be used for computer
monitoring equipment in the intensive care ward of a hospital.

> This is true for optimizing compilers, as well as applications.
> You just do the best you can.

That's right, you do the best you can. But one can choose methods of doing
that which allow the best of both worlds: stability for those who need it,
and rich experimental features for those who like to live dangerously.

Toon Moene

May 24, 1998

kfit...@nexus.v-wave.com (Kurt Fitzner) wrote:

> If it has a bug, leave it out of the release until it's fixed. What
> is so hard about that?

This seems very hard to grasp for people who never developed compilers. A
compiler is simply a huge piece of software that has bugs, just as every
other piece of software of comparable size and complexity does.

If commercial compiler vendors can get away with shipping compilers with
bugs, why should we (i.e. the egcs community) try the impossible?

The egcs crowd tries to stomp out bugs before every release; however, we
cannot compile every piece of software on earth in our regression tests, so
we're sure to miss some bugs - tough luck.

--
Toon Moene (mailto:to...@moene.indiv.nluug.nl)
Saturnushof 14, 3738 XG Maartensdijk, The Netherlands
Phone: +31 346 214290; Fax: +31 346 214286
g77 Support: mailto:for...@gnu.org; NWP: http://www.knmi.nl/hirlam

Rudolf Leitgeb

May 24, 1998

In article <6k9p87$1l8$1...@news.utrecht.nl.net>,

Toon Moene <to...@moene.indiv.nluug.nl> writes:
> kfit...@nexus.v-wave.com (Kurt Fitzner) wrote:
>
>> If it has a bug, leave it out of the release until it's fixed. What
>> is so hard about that?
>
> This seems very hard to grasp for people who never developed compilers. A
> compiler is simply a huge piece of software that has bugs just as every other
> piece of software of comparable size and complexity.
>
> If commercial compiler vendors can get away shipping compilers with bugs, why
> should we (i.e. the egcs community) try the impossible ?

It's not about writing bug free code. Everybody knows that this is impossible
as soon as the program does more than "Hello world". It's about putting in
code that does some cool stuff but is _known_ to fail every so often.

The problem is phrases in the FAQ like:
You may get an internal compiler error compiling process.c in newer versions
of the Linux kernel on x86 machines. This is a bug in an asm statement in
process.c, not a bug in egcs. XXX How to fix?!?

I had enough problems getting the AIC7XXX options right under 2.0.33 (and
enough windows-lovers laughing at me when they weren't right and the system
screwed up weekly). When I discovered what was wrong I was glad that I had
a C compiler that I could rely on ...

It sounds like Linux developed an extremely successful strategy with the
parallel trees of 2.0.x and 2.1.x, and I can foresee a similar split
happening to egcs as soon as it becomes mainstream.

Rudi

--

| | | | |
\ _____ /
/ \ B O R N
-- | o o | -- T O
-- | | -- S L E E P
-- | \___/ | -- I N
\_____/ T H E S U N
/ \
| | | | |

George Bonser

May 24, 1998

In article <6k8buv$5b1$1...@rtl.cygnus.com>,
bot...@cygnus.com (Per Bothner) writes:
>
> So which do you use to compile your application, X or Y?
>

In the real world, you do BOTH. You compile with X and sell the resulting
binary while you test the binary created by Y in-house. Then after a decent
interval has elapsed, you release the binary compiled by Y and add the word
Turbo to your program name and again sell it to the same people you just
sold it to with no extra software development. It is a way of making more
money on the same code.

;)

--
George Bonser

Microsoft! Which end of the stick do you want today?

Fergus Henderson

May 24, 1998

kfit...@nexus.v-wave.com (Kurt Fitzner) writes:

>Whoever said that an egcs internal compiler error is the fault of the
>code it was compiling is on drugs.

I think this report is probably the result of someone misreporting the
true state of affairs.

GNU C (and hence egcs) sometimes emits an error message which goes
something along the lines of

impossible register spilled
this may be due to an internal compiler error, or impossible asm

Someone may have misreported this as "egcs gets an internal compiler
error" when in fact the true cause may have been incorrect "asm"
statements in the code being compiled.

--
Fergus Henderson <f...@cs.mu.oz.au> | "I have always known that the pursuit
WWW: <http://www.cs.mu.oz.au/~fjh> | of excellence is a lethal habit"
PGP: finger f...@128.250.37.3 | -- the last words of T. S. Garp.

Ronald Cole

May 24, 1998

cbbr...@news.brownes.org (Christopher B. Browne) writes:
> RMS has been scaring off the people that, based on the sorts of things I see
> them do in the "free software community," *ought* to be staunch supporters
> of one another.

IMO, he doesn't "scare off" so much as "piss off" those people.
Myself being one of them. I've quit using "cathedral" GPL'd software
and embraced "bazaar" GPL'd software simply because the latter's
liberal attitude toward the GPL seems more in the spirit of the GNU
Manifesto than the former's conservative attitude.

In the real world, that means I jettisoned the Hurd from my sole
remaining 486 and deleted Debian from my firewall and installed
Slackware-3.4. I'm also now using egcs instead of gcc, and I've
stopped using GNAT.

--
Forte International, P.O. Box 1412, Ridgecrest, CA 93556-1412
Ronald Cole <ron...@forte-intl.com> Phone: (760) 499-9142
President, CEO Fax: (760) 499-9152
My PGP fingerprint: E9 A8 E3 68 61 88 EF 43 56 2B CE 3E E9 8F 3F 2B

H. Peter Anvin

May 24, 1998

Followup to: <87vhqv6...@yakisoba.forte-intl.com>
By author: Ronald Cole <ron...@yakisoba.forte-intl.com>
In newsgroup: comp.os.linux.development.system

>
> In the real world, that means I jettisoned the Hurd from my sole
> remaining 486 and deleted Debian from my firewall and installed
> Slackware-3.4. I'm also now using egcs instead of gcc, and I've
> stopped using GNAT.
>

I think this is unfair to Debian; they seem to be a very open
development. Their amount of centralization doesn't seem to be any
more than what the Linux kernel does (with Linus as the chief
gatekeeper.)

-hpa
--
PGP: 2047/2A960705 BA 03 D3 2C 14 A8 A8 BD 1E DF FE 69 EE 35 BD 74
See http://www.zytor.com/~hpa/ for web page and full PGP public key
I am Bahá'í -- ask me about it or see http://www.bahai.org/
"To love another person is to see the face of God." -- Les Misérables

robert havoc pennington

May 24, 1998, 3:00:00 AM

h...@transmeta.com (H. Peter Anvin) writes:
>
> I think this is unfair to Debian; they seem to be a very open
> development. Their amount of centralization doesn't seem to be any
> more than what the Linux kernel does (with Linus as the chief
> gatekeeper.)
>

The amount of centralization at Debian is far less than on the Linux
kernel, really. To the point that it's a problem. It looks like Debian
will be the first project (that I know of anyway) to write a formal
constitution and make decisions by voting, simply because there is no
one person or small group who can claim to be in charge.

Debian is certainly not a "cathedral" project, with 300+ developers.

Havoc Pennington ==== http://pobox.com/~hp

Jimen Ching

May 24, 1998, 3:00:00 AM

David Kastrup (d...@mailhost.neuroinformatik.ruhr-uni-bochum.de) wrote:
>Please check up on the history. Gcc-2.8 was handled by the FSF in a
>way that harmed a lot of people depending on gcc, probably mostly due
>to lack of resources.

This is kind of off track, but personally I do not blame lack of
resources for the long release period. I blame resource
management; i.e., the maintainer of the gcc source is not managing the
resources very well. I do not see this changing even with the creation
of the egcs group. Unless someone corrects this problem, I do not believe
we will see more frequent releases of gcc. The egcs group, on the other
hand, has excellent resource management. Note, when I refer to resources,
I don't necessarily mean internal resources; resources could come from
external sources.

>That it was not a political statement can be seen by the fact that all
>egcs contributions are required to have copyright disclaimers for the

If you interpret 'political statement' in the original author's post as
a legal, moral, or ethical statement, then I agree with you. But I think
the original author means 'office politics'. In this sense, I have
to agree with him. As I've said above, it is a management problem. The
gcc contributors, like H.J. Lu and Mark Mitchell, do not like how gcc
is managed. This is, of course, speculation; I've never met H.J. Lu
or Mark Mitchell. But from their contribution styles, I think my
interpretation is close, if not right on.

The success of software depends heavily on the outlook of the maintainer.
Compare someone like Linus Torvalds with RMS (or Thomas Bushnell). Linus
seems a lot more outgoing and free-spirited when it comes to software
development, while RMS is closed-minded and stiff (and seems to be rubbing
off on TB). Another example is the WINE vs. TWIN project. Both try to
provide a win32 platform for Linux (UNIX). But movement in WINE is a lot
more pronounced than in TWIN, which has a slow and closed development. By
closed, I mean the ability to access the current source. I.e., compare the
ease of access of the egcs source to that of gcc. Gcc snapshots are on some
out-of-the-way ftp server, while egcs can be found on the web page and
an open CVS server, both of which get updated often. As opposed to the gcc
web page, which hasn't been changed for over a month.

This has nothing to do with copyright transfers or legal issues. Those
are individual issues; i.e., they apply to each contributor. But the
overall success of free software depends on the management. I think
this applies to commercial software as well. If the engineers do
not like the direction the project is going, they quit and go elsewhere.
Of course, the management must be pretty bad, and for a long period of time,
for this to occur.

--jc
--
Jimen Ching (WH6BRR) jch...@flex.com wh6...@uhm.ampr.org

Daniel Robert Franklin

May 25, 1998, 3:00:00 AM

Ronald Cole <ron...@yakisoba.forte-intl.com> writes:

>In the real world, that means I jettisoned the Hurd from my sole
>remaining 486 and deleted Debian from my firewall and installed
>Slackware-3.4. I'm also now using egcs instead of gcc, and I've
>stopped using GNAT.

It seems to me that dumping Debian for "political" reasons is a supremely
silly thing to do. Anyway, Debian has a very open development model.

But I agree, egcs is (and promises to be) a better compiler than the
"traditional" gcc.

- Daniel
--
******************************************************************************
* Daniel Franklin - 4th Year Electrical Engineering Student
* dr...@uow.edu.au
******************************************************************************

H. Peter Anvin

May 25, 1998, 3:00:00 AM

Followup to: <wsnn2c7...@harper.uchicago.edu>
By author: robert havoc pennington <h...@pobox.com>
In newsgroup: comp.os.linux.development.system

>
> The amount of centralization at Debian is far less than on the Linux
> kernel, really. To the point that it's a problem. It looks like Debian
> will be the first project (that I know of anyway) to write a formal
> constitution and make decisions by voting, simply because there is no
> one person or small group who can claim to be in charge.
>
> Debian is certainly not a "cathedral" project, with 300+ developers.
>

Apache already does that.

jo...@dhh.gt.org

May 25, 1998, 3:00:00 AM

Ronald Cole writes:
> IMO, he [RMS] doesn't "scare off" so much as "piss off" those people.

> Myself being one of them. I've quit using "cathedral" GPL'd software and
> embraced "bazaar" GPL'd software simply because the latter's liberal
> attitude toward the GPL seems more in the spirit of the GNU Manifesto
> than the former's conservate attitude.

> In the real world, that means I jettisoned the Hurd from my sole
> remaining 486 and deleted Debian from my firewall...

Are you laboring under the delusion that there is some connection between
Debian and the FSF? The Open Source Definition is a copy of the Debian
Free Software Guidelines.

> ...and installed Slackware-3.4.

You consider Debian "cathedral" and Slackware "bazaar"? Debian has more
than 300 developers with more joining all the time. How many developers
does Slackware have? Does Slackware have a web page encouraging you to
join and telling you how to do so? Are its decisions discussed on open
mailing lists?
--
John Hasler This posting is in the public domain.
jo...@dhh.gt.org Do with it what you will.
Dancing Horse Hill Make money from it if you can; I don't mind.
Elmwood, Wisconsin Do not send email advertisements to this address.

Rudolf Leitgeb

May 25, 1998, 3:00:00 AM

In article <6ka2uc$eis$1...@mulga.cs.mu.oz.au>,

f...@cs.mu.oz.au (Fergus Henderson) writes:
> Someone may have misreported this as "egcs gets an internal compiler
> error" when in fact the true cause may have been incorrect "asm"
> statements in the code being compiled.

The scary thing about this is not so much the internal error (or whatever
it really is) but the fact that there doesn't seem to be a workaround, or
at least I can't find one on the egcs FAQ page. So the average user expects
that all hell will break loose if one installs egcs and actually needs to get
some work done with it ...

Horst von Brand

May 25, 1998, 3:00:00 AM

In article <6k8mb0$os...@crash.videotron.ab.ca>, Kurt Fitzner wrote:
>In article <6k8buv$5b1$1...@rtl.cygnus.com>,
> bot...@cygnus.com (Per Bothner) writes:

[...]

>I don't care what people choose to use to compile their applications.
>People can download and install what they want. My concern is what is
>chosen as a 'standard'. My concern is that this is going to turn into
>a feature race for all the different compiler groups, all competing for
>control of the standard. My concern is that we'll be left with no stable
>solution. And we're not talking about twice as fast here. Please leave
>the rhetorical exadgerations out.

Sorry, but whether a solution is stable or not can only be determined (given
current technology) by testing, testing, testing. That's the price you pay
for free software: the (unlikely) case of it working wrongly, the duty of
reporting funnies, and the possibility of fixing it yourself (or getting
somebody to do it for you). Or you could go commercial, where the unlikely
case of working wrongly is about as likely or more so, and you've got a
guarantee in writing that if it breaks, it's just too bad: they won't do
anything about it if it's not one of their priorities, and you are stuck.

>> You would be silly to worry more
>> about that than about bugs in your own code, hardware errors,
>> bugs in the standard libraries, etc etc. All these might bite
>> you - that is why you need to do as extensive testing as you can,
>> and fix as many bugs as you can. But at some point you have to
>> ship the damn thing, and you have to decide which bugs you can live
>> with.

>If it has a bug, leave it out of the release until it's fixed. What


>is so hard about that?

If you could just leave out the bug, it wouldn't be a bug anymore,
would it? ;-)

>The problem is, with this model, is that the attitude is becoming just
>release it, and 50 million internet users will test it for bugs for you.
>And this isn't bad. This is a good thing, unless this is the standard
>that people are relying on for stability.

"Stability" is a relative term...

>Why is there such a push for a Linux Base System? Why the kernel
>development split? Becase of the number of users. When Linux had a small
>following of hackers, then it didn't matter if there were bugs. But
>we're reaching a critical mass now. The number of users is growing to
>be enough that you can't just push something out the door, and figure that
>everyone depending on it are hackers who are doing it to be cool, or to
>give Microsoft the finger. That's why Linux had such a hard time getting
>into stability critical implementations. It still does.

The problem with linux is that the resources for exhaustive testing and bug
fixing on older versions just isn't there: It's much more interesting to
play around with the latest&greatest (even if buggy).

>As I said, I don't care what gets done with egcs, or GCC, or PCC, or
>anything. My concern is that a stable, reliable implementation of
>something is chosen as the standard, and that a development model
>is chosen where updates to it are done when they are -proven- stable

Great. Who is doing the "proving"? Read over some of the drivers, and you'll
see that some of them are more workaround-for-broken-hardware code than
driver code proper. And it's not as if one particular board shows all the
problems: there are probably hundreds of different NE2000 clones, each with
its own complement of bugs. Parts of the kernel have been coded the way they
are to work around compiler bugs. And obviously parts of the kernel contain
bugs of their own too. The only way of finding problems in a mess like this
is by having it run many different applications, on many different hardware
configurations, under very different loads. I.e., in real-world use.

>Lets all pretend that each 'release' is going to be used for a computer
>monitoring equipment in the intensive care ward of a hospital.

If that will be the case, we will. Just pay for the testing time; give us
stable, certified bug-free hardware to test and run on. And linux
development will give you linux-1.0 in, say, 5 years' time. Good enough?
The sad fact of life is that you pay for development speed with bugs. The
good news is that the linux model gives surprisingly few bugs for the
development speed. The great news is that it is self-regulating: if people
start having real troubles, the development slows down while the bugs are
fixed.

Another piece of news is that whining about bugs, or about not having
<favorite feature> in linux or some other piece of free software, has no
effect whatsoever; you will be silently ignored for the most part. Or it
might lead to the Dave Miller disaster: he was maintaining linux-2.0, and
got so fed up with the complaints and whining that he just dropped it.

Kristian Koehntopp

May 25, 1998, 3:00:00 AM

h...@transmeta.com (H. Peter Anvin) writes:
>> Debian is certainly not a "cathedral" project, with 300+ developers.
>Apache already do that.

So it is "tribal" software development?

Kristian
--
Kristian Koehntopp, Wassilystrasse 30, 24113 Kiel, +49 431 688897
"See, these two penguins walked into a bar, which was really stupid, 'cause
the second one should have seen it."
-- /usr/games/fortune

Paul Flinders

May 25, 1998, 3:00:00 AM

bot...@cygnus.com (Per Bothner) writes:

> In article <6k89l0$n2...@crash.videotron.ab.ca>,
> Kurt Fitzner <kfit...@nexus.v-wave.com> wrote:

> >The same goes for egcs/gcc/whatever. The paramount issue, is get something
> >out that is stable. It doesn't matter what features you have to strip out.

> >Have no optimizations at all if you have to, but release something that is
> >dependable.
>
> This is utter nonsense. Suppose you have a choice between two compilers:
> a) X is solid, with little or no optmization, and no known bugs.
> b) Y works pretty well, generates code that is twice as fast, but in
> rare cases has been known to make some incorrect optimizations.
>

> So which do you use to compile your application, X or Y?
>

> You would be a fool to choose X, because you will be beaten up in
> the market, and your users will complain you are too slow.

No, you would (probably) be a fool to use Y.

The correct approach is to use X, profile your code, find the
bottlenecks, and eliminate them. You can always hand-optimise if your
compiler won't do it for you. Choice of algorithm is usually the biggest
factor in performance anyway - compiler Y can give you a bubble sort
which is twice as fast as compiler X's, but neither will turn it into
quicksort.

> And for what? The hypothetical and unlikely chance you might
> run into a bug in the compiler?

The issue (commercially) is risk management. An unquantifiable risk is
always worse than a quantifiable one, even if it looks smaller. If you
use the possibly buggy compiler then the chances are you'll spend *much*
more time tracking down bugs caused by it compared to tracking down
"ordinary" bugs in the code.

> You would be silly to worry more about that than about bugs in your
> own code, hardware errors, bugs in the standard libraries, etc etc.
> All these might bite you - that is why you need to do as extensive
> testing as you can, and fix as many bugs as you can. But at some
> point you have to ship the damn thing, and you have to decide which

> bugs you can live with. This is true for optimizing compilers, as


> well as applications. You just do the best you can.

Well, all of the above is true, but it doesn't stop it being worthwhile
to try to reduce the risks to which you expose yourself.

However, it is a balancing act (as most things are, and why I said
"probably" above); typically it's not a choice between solid X with
little optimisation and speedy Y with perhaps some bugs. More usually
the newer but riskier version brings a feature that you can't really
do without - an example might be C++ exception handling in gcc 2.7 vs
egcs. Lack of optimisation at worst means a few tweaks to your code; lack
of exception handling leads to very different implementation strategies
- and that _is_ a cost which you might well decide not to bear.

--
Paul

Brian A. Pomerantz

May 25, 1998, 3:00:00 AM

In article <87vhqv6...@yakisoba.forte-intl.com>,
Ronald Cole <ron...@yakisoba.forte-intl.com> wrote:
>
>IMO, he doesn't "scare off" so much as "piss off" those people.

>Myself being one of them. I've quit using "cathedral" GPL'd software
>and embraced "bazaar" GPL'd software simply because the latter's
>liberal attitude toward the GPL seems more in the spirit of the GNU
>Manifesto than the former's conservate attitude.
>

This attitude sounds worse than the so-called "conservative" attitude
of the FSF. You can have any attitude you want and use the GPL, as
long as you stick to the legal aspect of it. That is what it is there
for: to protect the rights of those who write software for the benefit
of others and who want to keep their work freely available without
commercial molestation. To reject a good piece of code based on its
development philosophy seems a little extreme. Besides, most of the
code that is in Slackware and indeed ALL Linux distributions comes
from the FSF and the GNU project. Try reading the man page for grep.
:)

I do agree that "bazaar" style of software development is a better way
to go if you can manage to get it to work.


BAPper

Michael Thomas

May 25, 1998, 3:00:00 AM

Paul Flinders <pa...@dawa.demon.co.uk> writes:

> bot...@cygnus.com (Per Bothner) writes:
> > You would be a fool to choose X, because you will be beaten up in
> > the market, and your users will complain you are too slow.
>
> No, you would (probably) be a fool to use Y
>
> The correct approach is to use X, profile your code and find the
> bottlenecks and elliminate them.

Er, no. The bottleneck is the compiler itself.

> You can always hand optimise if your
> compiler won't do it for you.

Now there's a non-bug-prone approach.

::snort::

> Choice of algorithm is usually the biggest
> factor in performance anyway - compiler Y can give you a bubble sort
> which is twice as fast as compiler X but neither will turn it into
> quicksort.

This is just plain lunacy. Have you ever heard
of convergent evolution? If you're using the same
algorithms, the guy with the better compiler wins.
Period.

> The issue (commercially) is risk management. An unquantifiable risk is
> always worse than a quantifiable one, even if it looks smaller. If you
> use the possibly buggy compiler then the chances are you'll spend *much*
> more time tracking down bugs caused by it compared to tracking down
> "ordinary" bugs in the code.

And you're talking about hand tuning poorly
optimized code??? In what, assembly???

::double snort::

> > You would be silly to worry more about that than about bugs in your
> > own code, hardware errors, bugs in the standard libraries, etc etc.
> > All these might bite you - that is why you need to do as extensive
> > testing as you can, and fix as many bugs as you can. But at some
> > point you have to ship the damn thing, and you have to decide which
> > bugs you can live with. This is true for optimizing compilers, as
> > well as applications. You just do the best you can.
>
> Well all of the above are true but they don't stop it being worthwhile
> trying to reduce the risks to which you expose yourself.

I think that Per is right on the money: the
likelihood of it being the hardware or OS or
compiler is probably an order of magnitude or
three down from the likelihood of it being *your*
code's problem. Nobody likes compiler bugs, but
the reality is that if you're hunting for a
problem, the likelihood of it being the compiler
is negligible. Though sometimes vexing, compiler
bugs are not really much different from other
bugs. If your testing misses them, then your
testing is inadequate.
I haven't seen anybody saying that Wiz-Bang
compiler X version 0.0 ought to be shipped in
preference to Old Established compiler Y V100.1.
What I have seen is people saying that absolute
insistence on Bugs over Features is a lousy way of
dealing with risk management. Safe over sorry in
the new reality is also known as "out of business".
At some point, you have to take chances, even if
that means the possibility of <cue up organ music>
Compiler Bugs.
--
Michael Thomas (mi...@mtcc.com http://www.mtcc.com/~mike/)
"I dunno, that's an awful lot of money."
Beavis

Horst von Brand

May 25, 1998, 3:00:00 AM

Paul Flinders <pa...@dawa.demon.co.uk> writes:

[...]

> The correct approach is to use X, profile your code and find the

> bottlenecks and elliminate them. You can always hand optimise if your
> compiler won't do it for you. Choice of algorithm is usually the biggest


> factor in performance anyway - compiler Y can give you a bubble sort
> which is twice as fast as compiler X but neither will turn it into
> quicksort.

"Twice as fast" for no extra effort at all vs. perhaps "three times as fast"
(if that) by careful tweaking, breaking the design of the system into
tiny little pieces, all different, and introducing tons of subtle bugs in
the process, is a loss?!

[...]

> The issue (commercially) is risk management. An unquantifiable risk is
> always worse than a quantifiable one, even if it looks smaller. If you
> use the possibly buggy compiler then the chances are you'll spend *much*
> more time tracking down bugs caused by it compared to tracking down
> "ordinary" bugs in the code.

Surely you jest... show me just one bug-free compiler for whatever language
you might choose. Compilers are complex pieces of software, and they _do_
contain bugs. No way around that fact of life.
--
Dr. Horst H. von Brand mailto:vonb...@inf.utfsm.cl
Departamento de Informatica Fono: +56 32 654431
Universidad Tecnica Federico Santa Maria +56 32 654239
Casilla 110-V, Valparaiso, Chile Fax: +56 32 797513

Toon Moene

May 25, 1998, 3:00:00 AM

lei...@variable.stanford.edu (Rudolf Leitgeb) wrote:

> In article <6k9p87$1l8$1...@news.utrecht.nl.net>,
> Toon Moene <to...@moene.indiv.nluug.nl> writes:

> > If commercial compiler vendors can get away shipping compilers with bugs,
why
> > should we (i.e. the egcs community) try the impossible ?
>
> It's not about writing bug free code. Everybody knows that this is
impossible
> as soon as the program does more than "Hello world". It's about putting in
> code that does some cool stuff but is _known_ to fail every so often.
>
> The problem is phrases in the FAQ like:
> You may get an internal compiler error compiling process.c in newer
versions
> of the Linux kernel on x86 machines. This is a bug in an asm statement in
> process.c, not a bug in egcs. XXX How to fix?!?

OK, I can see that point - the problem is that this issue was resolved so
fast (a matter of days, IIRC) that it would probably have been better to
remove it from the FAQ altogether, or at least tie it to specific kernel
versions and specific egcs releases. Unfortunately, we still haven't found a
good way to deal with the FAQ (as in: find someone who has enough time *and*
oversight to keep it up to date). Heck, there isn't even a way to search the
archives, so I'm still saving all messages to egcs[-bugs]@cygnus.com here at
home (11,400 and 4,600 respectively) to be able to search their content.

Toon Moene

May 25, 1998, 3:00:00 AM

Paul Flinders <pa...@dawa.demon.co.uk> wrote:

> The issue (commercially) is risk management. An unquantifiable risk is
> always worse than a quantifiable one, even if it looks smaller. If you
> use the possibly buggy compiler then the chances are you'll spend *much*
> more time tracking down bugs caused by it compared to tracking down
> "ordinary" bugs in the code.

But which compiler *do* you want to use, then? The only compiler I've ever
used in which I haven't found an error _yet_ comes with $50 million of
hardware. The (extra) risk of using one compiler or the other isn't
quantifiable, period.

David Kastrup

May 25, 1998, 3:00:00 AM

One has to point out that the GPL is a *public* license. At any point
in time you are free to take a cathedralized version and start a
bazaar with it (and vice versa). Whether you succeed will depend on
what people prefer. Most people will usually stay with the original
author/maintainer unless he proves to have severely lacking management
capabilities.

Paul Flinders

May 25, 1998, 3:00:00 AM

Michael Thomas <mi...@mtcc.com> writes:

> Paul Flinders <pa...@dawa.demon.co.uk> writes:
> > The correct approach is to use X, profile your code and find the
> > bottlenecks and elliminate them.
>

> Er, no. The bottleneck is the compiler itself.

But for much code in many systems, doubling the speed will have almost
no impact on the overall feel of the application to the user.

Sometimes the reason is that the delays are actually outside most of
the code - I/O is a good example here. It doesn't matter if I can
parse my file in one millisecond or two if it takes 10 to get it from
the disk.

Sometimes it is that you're using something that is inherently slow -
COM and CORBA come to mind here.

Also, it doesn't matter if I can turn user input into an output in 10
milliseconds or 20, because the user won't notice the difference (get
up to 100 millisecs and they will, though).

> > You can always hand optimise if your compiler won't do it for you.
>

> Now there's a non-bug prone approach.
>
> ::snort::
>

> And you're talking about hand tuning poorly
> optimized code??? In what, assembly???
>
> ::double snort::
>

No, I'm talking about doing sensible reviews of critical path code
(having demonstrated that it is critical path), perhaps giving the
compiler a little help with an explicit "register" declaration or
moving a few invariants out of loops or dereferencing a pointer once
into a local variable and then using the variable.

Also I've _seen_ people write code like

    for (i = 0; i < strlen(s); i++)
        ....

Is there a compiler which will move strlen out of the loop? Will it
still do it when there's a user-defined function doing the job of
strlen (say you have strings implemented with an explicit length
rather than using straight C-style strings)?


How about

    for (...)
        if (expensive test but constant within loop)
            +++
        else
            ---

rather than

    if (expensive test)
        for (...)
            +++
    else
        for (...)
            ---

egcs doesn't do that as an optimisation AFAICS.

I've seen code which went 5x or 10x faster when expensive operations
were moved manually out of loops. When I asked about the code I got
the answer "well the compiler will optimise it".

Once you've gone through the above then a compiler with a better
optimiser may still be able to fine tune you code but we're back to
the first point - it won't matter for much of the code in the first
place.

> > Choice of algorithm is usually the biggest
> > factor in performance anyway - compiler Y can give you a bubble sort
> > which is twice as fast as compiler X but neither will turn it into
> > quicksort.
>

> This is just plain lunacy. Have you ever heard
> of convergent evolution? If you're using the same
> algorithms, the guy with the better compiler wins.
> Period.

_if_ you're using the same algorithms, and the code is actually time
critical.


> I think that Per is right on the money: the likelihood of it being
> the hardware or OS or compiler is probably an order of magnitude or
> three down from the likelihood of it being *your* code's
> problem. Nobody likes compiler bugs, but the reality is that if
> you're hunting for a problem, the likelihood of it being the
> compiler is negligible. Though sometimes vexing, compiler bugs are
> not really much different from other bugs.

Actually Per _does_ make a good point here, and you do have to weigh up
the likelihood of things going wrong. I would not use an egcs snapshot
for delivered code, but I would probably use one of the releases. I
also agree that the likelihood is that the bug is in your (my) code;
however, I've come across two or three GCC bugs whilst writing fairly
mundane programs, so they certainly _do_ occur.

> If your testing misses them, then your testing is inadequate.

A reasonable point. However, in the real world bugs do slip past
testing (even when testing is done well, which is often not the case),
and anything you can do to avoid them hitting customers is at least
worth considering. Again it's a question of reducing risk.

--
Paul

Tor Slettnes

May 25, 1998, 3:00:00 AM

>>>>> "Christopher" == Christopher B Browne <cbbr...@news.brownes.org> writes:

Christopher> I would go along with the somewhat distinct notion
Christopher> that RMS has gotten "cranky" lately.

That is an impression that stuck also after some of the NPR interviews
in April and May regarding Linux, Free Software, and Netscape. One
caller was so totally _amazed_, _astonished_, etc. at Linus Torvalds's
altruism in providing the Linux operating system to the _entire_
_world_ _for_ _free_ - after which RMS snapped that _he_ had been doing
that for years before Linux. (That's deserved, but maybe someone else
ought to make that point rather than him.) See
http://www.npr.org/ramfiles/980417.totn.02.ram
(skip the first 27 minutes).

-tor

Albert D. Cahalan

May 26, 1998, 3:00:00 AM

Paul Flinders <pa...@dawa.demon.co.uk> writes:
> But for much code in many systems doubling the speed will have almost
> no impact on the overall feel of the application to the user.

Think of a spell checker. (or more obvious: image processing)

>>> You can always hand optimise if your compiler won't do it for you.

No you can't, unless you use assembly. You can do a few things,
like turn if(foo) into if(!foo) and add lots of goto statements,
but it is pretty much impossible to work around a compiler with
inferior register allocation.

> No, I'm talking about doing sensible reviews of critical path code
> (having demonstrated that it is critical path), perhaps giving the
> compiler a little help with an explicit "register" declaration or
> moving a few invariants out of loops or dereferencing a pointer once
> into a local variable and then using the variable.

Sure, but:

1. that is lots of work and could introduce more bugs
2. after you have done that, then what?

> Also I've _seen_ people write code like
> for (i=0; i < strlen(s); i++)
>

> Is there a compiler which will move strlen out of the loop?

I certainly hope so. The header file should specify that strlen()
does not have side effects, using whatever gcc-specific hack needed.

>>> Choice of algorithm is usually the biggest factor in performance
>>> anyway - compiler Y can give you a bubble sort which is twice as
>>> fast as compiler X but neither will turn it into quicksort.

That would be a great compiler.

Jeff Epler

May 26, 1998, 3:00:00 AM

>Paul Flinders <pa...@dawa.demon.co.uk> writes:
>> Also I've _seen_ people write code like
>> for (i=0; i < strlen(s); i++)
>>
>> Is there a compiler which will move strlen out of the loop?
>
In article <vc790np...@jupiter.cs.uml.edu>, Albert D. Cahalan wrote:
>I certainly hope so. The header file should specify that strlen()
>does not have side effects, using whatever gcc-specific hack needed.

I think the difficulty is in proving that 's' is unmodified---I believe
that gcc already has adequate markers to note that the function has no
side-effects. But consider the code snippet:

    extern void foo(int i);
    void bar(char *s) {
        int i;
        for (i = 0; i < strlen(s); i++) foo(i);
    }

Who knows if this external 'foo' can see and modify 's'? If it
doesn't, then we could move the strlen out of the loop. But, if it
could possibly modify 's', then we can't safely do it. The same goes
for the case of

    for (...)
        if (expensive_test) { block1; }
        else { block2; }

it may be hard to prove that expensive_test is invariant over the
loop. I've heard that there's a new "global" optimization in egcs (i.e.
one translation-unit wide) which may help for cases where 'foo' or
'expensive_test' don't touch any extern functions...

I don't know, does declaring bar as
void bar(const char *s);
give an adequate guarantee in the first case?

Followups in comp.lang.c.

Jeff
--
Jeff Epler jep...@inetnebr.com (an american student living in France)


David Kastrup

unread,
May 26, 1998, 3:00:00 AM5/26/98
to

jep...@zero.aec.at (Jeff Epler) writes:

> >Paul Flinders <pa...@dawa.demon.co.uk> writes:
> >> Also I've _seen_ people write code like
> >> for (i=0; i < strlen(s); i++)
> >>
> >> Is there a compiler which will move strlen out of the loop?
> >
> In article <vc790np...@jupiter.cs.uml.edu>, Albert D. Cahalan wrote:
> >I certainly hope so. The header file should specify that strlen()
> >does not have side effects, using whatever gcc-specific hack needed.
>
> I think the difficulty is in proving that 's' is unmodified---I believe
> that gcc already has adequate markers to note that the function has no
> side-effects. But consider the code snippet:

[...]
bar called in a loop.

> I don't know, does declaring bar as
> void bar(const char *s);
> give an adequate guarantee in the first case?

Only if bar has no other possible handle on the memory area passed in
via s. That means no writable pointer to that area (including its
declaration) may exist that is accessible globally or that has been
passed out to an unknown function as a writable pointer.

Paul Flinders

unread,
May 26, 1998, 3:00:00 AM5/26/98
to

acah...@jupiter.cs.uml.edu (Albert D. Cahalan) writes:

> Paul Flinders <pa...@dawa.demon.co.uk> writes:
> > But for much code in many systems doubling the speed will have almost
> > no impact on the overall feel of the application to the user.
>
> Think of a spell checker. (or more obvious: image processing)

Spell checkers tend to spend a lot of time waiting for user input so
they only have to get to the next word "fast enough". You need good
dictionary organisation (remember you have to find candidate words, not
just a "not in dictionary" message) and that won't come from the
compiler's optimiser.

Image processing is probably a special case (i.e. people _do_ write inner
loops in assembler for image processing applications).

>
> >>> You can always hand optimise if your compiler won't do it for you.
>
> No you can't, unless you use assembly. You can do a few things
> like turn if(foo) into if(!foo) and add lots of goto statements,
> but it is fairly impossible to work around a compiler with inferior
> register allocation.

If it's really broken, or just really bad, then no; but we're not really
hypothesising that state of affairs.

The choice is: a solid compiler with a basic optimiser (Per actually said
"little or no", but I'm going to assume basic, since the "real"
compilers here are probably gcc 2.7 and egcs/gcc 2.8, and gcc 2.7 has
much better than "basic" optimisation) versus a new version of the
compiler with much better optimisation but a definite chance of
compiler bugs.

In this scenario it's the fast compiler which is more likely to have
broken register allocation.

>
> > No, I'm talking about doing sensible reviews of critical path code
> > (having demonstrated that it is critical path), perhaps giving the
> > compiler a little help with an explicit "register" declaration or
> > moving a few invariants out of loops or dereferencing a pointer once
> > into a local variable and then using the variable.
>
> Sure, but:
>
> 1. that is lots of work and could introduce more bugs

However, you should do it anyway; the compiler can only optimise so
much, and blind reliance on its abilities will not lead to efficient
code.

> 2. after you have done that, then what?

If it's _still_ too slow? Depends on circumstances - maybe you decide
that it _is_ worth the risk of using the possibly buggy compiler (but
by then the speed-up may not be that great). Maybe you decide to use
a faster processor, but that's not always possible. Maybe you decide
to hire an assembler programmer.

>
> > Also I've _seen_ people write code like
> > for (i=0; i < strlen(s); i++)
> >
> > Is there a compiler which will move strlen out of the loop?
>

> I certainly hope so. The header file should specify that strlen()
> does not have side effects, using whatever gcc-specific hack needed.

Which is why I asked whether the compiler will still optimise the call
if strlen is a user defined function. Also as Jeff Epler points out the
compiler may not be able to tell that the length of the string is
invariant in the loop.

> >>> Choice of algorithm is usually the biggest factor in performance
> >>> anyway - compiler Y can give you a bubble sort which is twice as
> >>> fast as compiler X but neither will turn it into quicksort.
>
> That would be a great compiler.

It's called a software engineer.

Remember - bug fixing is a very expensive process. Assuming that it
takes two weeks¹ of effort to accept a bug report, schedule it to an
engineer, reproduce and investigate the problem, fix the bug, document
the fix, re-test and re-release the application, then an isolated bug
could easily cost $5000 to fix. That's why you want to reduce the risk
that it's there.

Of course it's a value judgement (like most things) and the right
choice in one environment may be the wrong one in another.

¹ We measured it once; 10 days is about right, including QA, for a bug
in a medium sized program.
--
Paul


James Youngman

unread,
May 26, 1998, 3:00:00 AM5/26/98
to Per Bothner

>>>>> "pb" == Per Bothner <bot...@cygnus.com> writes:

pb> This is utter nonsense. Suppose you have a choice between two
pb> compilers:

pb> a) X is solid, with little or no optimization, and no known bugs.
pb> b) Y works pretty well, generates code that is twice as fast,
pb> but in rare cases has been known to make some incorrect
pb> optimizations.

pb> So which do you use to compile your application, X or Y?

pb> You would be a fool to choose X, because you will be beaten up
pb> in the market, and your users will complain you are too slow.

Markets differ. I understand how you could see it that way. People
don't like to wait long for compilers to run. (I assume that this is
the kind of thing you're working on, since you're at Cygnus). However
most (well, certainly many) applications are not CPU-bound and so the
absolute speed of the compiler is not as important.

My mileage, as I am trying to say, differs. As far as my day job
goes, the answer is X, not Y, without hesitation.

Thomas G. McWilliams

unread,
May 26, 1998, 3:00:00 AM5/26/98
to

Per Bothner <bot...@cygnus.com> wrote:
: Suppose you have a choice between two compilers:
: a) X is solid, with little or no optimization, and no known bugs.
: b) Y works pretty well, generates code that is twice as fast, but in
: rare cases has been known to make some incorrect optimizations.

: So which do you use to compile your application, X or Y?

It depends of course. Which would you choose for the Airbus or a
nuclear medicine accelerator?


Horst von Brand

unread,
May 26, 1998, 3:00:00 AM5/26/98
to

kfit...@nexus.v-wave.com (Kurt Fitzner) writes:

[...]

> The same goes for egcs/gcc/whatever. The paramount issue is to get something
> out that is stable. It doesn't matter what features you have to strip out.
> Have no optimizations at all if you have to, but release something that is
> dependable. Whoever said that an egcs internal compiler error is the
> fault of the code it was compiling is on drugs.

If you write invalid asm()s, it is unfortunately quite possible to crash
gcc, egcs or whatever. Sure, the compiler shouldn't crash; but it's your
fault anyway ;-)

Horst von Brand

unread,
May 26, 1998, 3:00:00 AM5/26/98
to

David Kastrup <d...@mailhost.neuroinformatik.ruhr-uni-bochum.de> writes:
> vonb...@sleipnir.valparaiso.cl (Horst von Brand) writes:

> > The FSF just didn't keep up with its charter to oversee *all* free
> > software, probably never could have and it's probably better that
> > way too...

> Please point out exactly where in their charter they claim to want to
> oversee all free software. Please explain why they then have chosen
> to use a license (GPL) even on their own works which does not give
> them any special status with regard to future development.

No, it's not in their charter. But they sure do act that way: Linux is
suddenly lignux, or the GNU/Linux system for them. The freely distributable
Perl documentation has to be rewritten, just so it falls under the GPL
(witness the flames about that gem).

> Please think twice before accusing them with something similar to what
> Bill Gates tries doing to commercial software.

I sure hope they don't want to do something like that. But sometimes they
just act a bit too militant, and bluntly disregard other people's options
(like BSD vs GNU licences)...

Horst von Brand

unread,
May 26, 1998, 3:00:00 AM5/26/98
to

lei...@variable.stanford.edu (Rudolf Leitgeb) writes:

[...]

> It sounds like linux developed an extremely successful strategy with the
> parallel trees of 2.0.x and 2.1.x and I can foresee a similar split happening
> to egcs as soon as it becomes mainstream.

It is working right now: There is egcs-1.0.x (stable branch for now), there
are the weekly (or so) snapshots, and there is even the CVS repository for
up to the minute snapshots. There are the egcs{,-bugs}@cygnus.com lists,
and pages at <http://egcs.cygnus.com>

Josh Stern

unread,
May 26, 1998, 3:00:00 AM5/26/98
to

James Youngman <JYou...@vggas.com> wrote:

>Markets differ. I understand how you could see it that way. People
>don't like to wait long for compilers to run. (I assume that this is
>the kind of thing you're working on, since you're at Cygnus). However
>most (well, certainly many) applications are not CPU-bound and so the
>absolute speed of the compiler is not as important.

In the PC marketplace, people often pay a 40% premium for
a 10-20% gain in CPU performance (even though the faster
CPU will be a bargain basement model next year).
Granted, some of these customers are idiots, and the
effect is accentuated by the lack of product differentiation
in the PC marketplace. But it is still dangerous, from
an empirical point of view, to claim that performance
doesn't matter any more. I'm sure all of the would-be
Java ISVs are not really planning on selling their
applications for half the price of a C/C++ application,
but extrapolating from the CPU price/demand curve
should have given some Java investors pause when
their business plan was formulated.


- Josh


Greg Lindahl

unread,
May 26, 1998, 3:00:00 AM5/26/98
to

Horst von Brand <vonb...@inf.utfsm.cl> writes:

> No, it's not in their charter. But they sure do act that way: Linux is
> suddenly lignux, or the GNU/Linux system for them. The freely distributable
> Perl documentation has to be rewritten, just so it falls under the GPL
> (witness the flames about that gem).

Nope, sorry. Both of these flamewars are people putting words in the
FSF's mouth and then flaming them about it. Please don't contribute to
such unconstructive flames; we're all friends, and we write less code
when we're engaging in silly flamewars.

-- g

Paul Flinders

unread,
May 26, 1998, 3:00:00 AM5/26/98
to

jst...@citilink.com. (Josh Stern) writes:

> In the PC marketplace, people often pay a 40% premium for a 10-20%
> gain in CPU performance (even though the faster CPU will be a
> bargain basement model next year). Granted, some of these customers
> are idiots, and the effect is accentuated by the lack of product
> differentiation in the PC marketplace. But it is still dangerous, from
> an empirical point of view, to claim that performance doesn't matter
> any more. I'm sure all of the would-be Java ISVs are not really
> planning on selling their applications for half the price of a C/C++
> application, but extrapolating from the CPU price/demand curve
> should have given some Java investors pause when their business plan
> was formulated.

Performance does matter, of course, but spending a premium *just* to
get 20% more CPU performance is largely self delusional, or playing
office politics. Going from 200MHz to 233MHz (a 16% increase) will give
you maybe 5%, at best 10%, overall, as it doesn't speed up memory, video
or disk.

If you work somewhere that doesn't mind the hardware being fiddled
with try dropping a workmate's CPU speed down a multiplier (from 233
to 200 say), see how long it takes for them to notice.

My own view is that you need a performance increase of 33-50% just to
register as "noticeably better".

--
Paul

David Kastrup

unread,
May 26, 1998, 3:00:00 AM5/26/98
to

Horst von Brand <vonb...@inf.utfsm.cl> writes:

> David Kastrup <d...@mailhost.neuroinformatik.ruhr-uni-bochum.de> writes:
> > vonb...@sleipnir.valparaiso.cl (Horst von Brand) writes:
>
> > > The FSF just didn't keep up with its charter to oversee *all* free
> > > software, probably never could have and it's probably better that
> > > way too...
>
> > Please point out exactly where in their charter they claim to want to
> > oversee all free software. Please explain why they then have chosen
> > to use a license (GPL) even on their own works which does not give
> > them any special status with regard to future development.
>

> No, it's not in their charter. But they sure do act that way: Linux is
> suddenly lignux, or the GNU/Linux system for them.

That's not overseeing things. Their arguments for it have a certain
validity, but definitely not enough to warrant such a PR fiasco.

> The freely distributable
> Perl documentation has to be rewritten, just so it falls under the GPL
> (witness the flames about that gem).

It is not freely distributable. Read again. You are prohibited from
making any profit from distributing it, meaning that it will only be
distributed by amateurs.

They also don't want a GPLed documentation, but a free one (namely
freely redistributable, freely modifiable). A BSD-like license would
do fine.

> I sure hope they don't want to do something like that. But sometimes they
> just act a bit too militant, and bluntly disregard other people's options
> (like BSD vs GNU licences)...

Come off it. They *use* BSD-like licensed software in their own
products without problems. Why, The Hurd is *running* on the Mach
microkernel. They encourage using the GPL, obviously, as they think it
better suited to the spread of free software. But they don't have the
hostile stance towards other free software that all people seem to want
to insinuate.

David Kastrup

unread,
May 26, 1998, 3:00:00 AM5/26/98
to

Paul Flinders <paul.f...@finobj.com> writes:

> Performance does matter, of course, but spending a premium *just* to
> get 20% more CPU performance is largely self delusional, or playing
> office politics. Going from 200Mhz to 233 (a 16% increase) will give
> you maybe 5, at best 10% overall as it doesn't speed memory, video or
> disk.

It also makes you liable to get forged, relabeled processors, which
can result in overheating, unreliability and premature death of the
processor. As long as you are not buying the extremely expensive high
end, but the low end of chips that might possibly be made to run
temporarily at a certain frequency, you are pretty safe from
relabelers and the associated dangers.

Joe Buck

unread,
May 26, 1998, 3:00:00 AM5/26/98
to

Ronald Cole <ron...@yakisoba.forte-intl.com> wrote:
>IMO, he doesn't "scare off" so much as "piss off" those people.
>Myself being one of them. I've quit using "cathedral" GPL'd software
>and embraced "bazaar" GPL'd software simply because the latter's
>liberal attitude toward the GPL seems more in the spirit of the GNU
Manifesto than the former's conservative attitude.

It seems that you've done the opposite of what you think you have done.

>In the real world, that means I jettisoned the Hurd from my sole
>remaining 486 and deleted Debian from my firewall and installed
>Slackware-3.4. I'm also now using egcs instead of gcc, and I've
>stopped using GNAT.

Slackware is a one-person cathedral project. Debian has hundreds of
developers ... it is a bazaar project.

--
-- Joe Buck
work: jb...@synopsys.com, otherwise jb...@welsh-buck.org or jb...@best.net
http://www.welsh-buck.org/

Joe Buck

unread,
May 26, 1998, 3:00:00 AM5/26/98
to

Horst von Brand <vonb...@sleipnir.valparaiso.cl> wrote:
>Why have three development branches (bleeding edge (egcs snapshots), stable
>progressive (egcs releases) and utterly stable (gcc)), when Linux has shown
>that two are enough?

It is possible that gcc and egcs will re-merge at some point. If this
happens, though, it will under the egcs terms.

That is, eventually the stable egcs releases will probably be called gcc.

Josh Stern

unread,
May 26, 1998, 3:00:00 AM5/26/98
to

Paul Flinders <paul.f...@finobj.com> wrote:
>jst...@citilink.com. (Josh Stern) writes:

>> In the PC marketplace, people often pay a 40% premium for a 10-20%
>> gain in CPU performance (even though the faster CPU will be a
>> bargain basement model next year). Granted, some of these customers
>> are idiots, and the effect is accentuated by the lack of product
>> differentiation in the PC marketplace. But it still dangerous, from
>> an empirical point of view, to claim that performance doesn't matter
>> any more. I'm sure all of the would-be Java ISVs are not really
>> planning on selling their applications for half the price of a C/C++
>> application, but extrapolating from the CPU price/demand curve
>> should have given some Java investors pause when their business plan
>> was formulated.

>Performance does matter, of course, but spending a premium *just* to
>get 20% more CPU performance is largely self delusional, or playing
>office politics.

In scientific jargon, there is a placebo effect. Programmers
are also subject to placebo effect when they evaluate languages
and compilers.

>Going from 200Mhz to 233 (a 16% increase) will give
>you maybe 5, at best 10% overall as it doesn't speed memory, video or
>disk.
>

>If you work somewhere that doesn't mind the hardware being fiddled
>with try dropping a workmate's CPU speed down a multiplier (from 233
>to 200 say), see how long it takes for them to notice.
>
>My own view is that you need an performance increase of 33-50% just to
>register as "noticably better".

I think these numbers are generally reasonable. Put another way,
about a 15% difference in overall performance is noticeable,
but a 15% difference in CPU speed translates to something
much less than that in overall performance.

My point is that when people notice a small difference in
performance (objectively or subjectively) they often care a lot
about that (rationally or irrationally).


- Josh

Ronald Cole

unread,
May 26, 1998, 3:00:00 AM5/26/98
to

Tor Slettnes <t...@netcom.com> writes:
> That is an impression that stuck also after some of the NPR interviews
> in April and May regarding Linux, Free Software, and Netscape. Some
> caller was so totally _amazed_, _astonished_ etc at Linus Torvald's
> altruism in providing to the _entire_ _world_ _for_ _free_ the Linux
> operating system - after which RMS snapped that _he_ had been doing
> that for years before Linux. (That's deserved, but maybe someone else
> ought to make that point rather than himself). See
> http://www.npr.org/ramfiles/980417.totn.02.ram
> (skip the first 27 minutes).

RMS didn't understand what the caller was saying because he wasn't
really listening... (He really seems to prefer to hear himself talk)
Besides, the Hurd is too little, too late (just like gcc-2.8). And
despite what RMS intimates here, he's hardly altruistic. I know from
personal experience.

Clearly, RMS will never use the open development model because he
feels it would mean giving up control. Of course, Linus has proved
that such a notion is absurd, but there's no convincing RMS.

--
Forte International, P.O. Box 1412, Ridgecrest, CA 93556-1412
Ronald Cole <ron...@forte-intl.com> Phone: (760) 499-9142
President, CEO Fax: (760) 499-9152
My PGP fingerprint: 15 6E C7 91 5F AF 17 C4 24 93 CB 6B EB 38 B5 E5

Bo Johansson

unread,
May 27, 1998, 3:00:00 AM5/27/98
to

lin...@pbm.com (Greg Lindahl) writes:

> Horst von Brand <vonb...@inf.utfsm.cl> writes:
>
> > No, it's not in their charter. But they sure do act that way: Linux is
> > suddenly lignux, or the GNU/Linux system for them. The freely distributable
> > Perl documentation has to be rewritten, just so it falls under the GPL
> > (witness the flames about that gem).
>
> Nope, sorry. Both of these flamewars are people putting words in the
> FSF's mouth and then flaming them about it. Please don't contribute to
> such unconstructive flames; we're all friends, and we write less code
> when we're engaging in silly flamewars.

I guess that's the reason they have included the "GNU/Linux"
rant in the info file for egcs, and I guess gcc-2.8, then: because
they are so considerate towards their friends.

Bernd Paysan

unread,
May 27, 1998, 3:00:00 AM5/27/98
to

Brian A. Pomerantz wrote:
> I do agree that "bazaar" style of software development is a better way
> to go if you can manage to get it to work.

I think the bazaar style depends on the number of users and developers.
A small project which doesn't need many developers won't have much use
for the bazaar-style approach.

I "direct" a cathedral-like free software development: Gforth. There are
basically three core developers with different responsibilities. We
certainly accept patches and suggestions from anybody outside, but they
are rare and don't contribute much. There are not that many Forth users
out there (you might have guessed it), and there are not that many bugs
in Gforth either. None of the three core developers invests much time in
Gforth now, since most of our technical goals are reached. Also, there
are few things that really can be split up between developers. We don't
have device drivers; nor do we need to do much work to support many
different platforms (most of that comes through GCC, and for the small
embedded controllers we brought the time to port to a new one down to
about one or two afternoons).

After all, "cathedral" is the wrong word for what we build. The
philosophy of Forth is much about simplicity, so we just put a stone in
the plain and praise our goddess there (the ceiling is certainly higher
than that of any cathedral ;-). It's a weather goddess, so building a
roof above the stone is considered "blasphemy" anyway ;-).

--
Bernd Paysan
"Late answers are wrong answers!"
http://www.jwdt.com/~paysan/

Bernd Paysan

unread,
May 27, 1998, 3:00:00 AM5/27/98
to

Toon Moene wrote:
>
> kfit...@nexus.v-wave.com (Kurt Fitzner) wrote:
>
> > If it has a bug, leave it out of the release until it's fixed. What
> > is so hard about that?
>
> This seems very hard to grasp for people who never developed compilers. A
> compiler is simply a huge piece of software that has bugs just as every other
> piece of software of comparable size and complexity.

It should also be noted that no one intentionally codes bugs. Bugs happen
(should be a T-shirt motto), and they get into a release when they
aren't discovered and fixed beforehand. GCC 2.7 has several known bugs,
and, by the rule that for every known bug there are three other bugs
hiding, it also has several unknown bugs. The slow progress of GCC makes
sure that a bug won't be fixed fast. The faster progress of EGCS gives a
higher risk that new bugs are introduced, but it also gives a higher
chance that old bugs are discovered and fixed (often bugs aren't
discovered in usage, they are discovered in code inspection! Or made
obvious through changes elsewhere).

IMHO EGCS would be the compiler of choice, were it not that the
"stable" Linux kernels have bugs that are only visible if you compile
with EGCS. This isn't just about features: GCC 2.7 is broken (with the
default optimization you _need_ -fno-strength-reduce), and GCC 2.8 has
less quality than the EGCS releases.

Bernd Paysan

unread,
May 27, 1998, 3:00:00 AM5/27/98
to

Horst von Brand wrote:
> Sure you jest... show me just one bug free compiler for whatever language
> you might choose. Compilers are complex pieces of software, and they _do_
> contain bugs. No way around that fact of life.

A Forth compiler can be bug-free (for a simple threaded-code Forth);
most are. It is, however, not a complicated piece of software, and it
doesn't perform any optimization. It is not a trivial piece of software,
either, but the non-triviality doesn't sit in the code; it sits in the
design and the simplicity. You pay a performance penalty for a bug-free
compiler (I would say that any good optimizing Forth compiler contains
bugs).

I also would argue that TeX's tokenizer is bug-free, too. It is a bit
more complex than a Forth compiler, but not really that much; both have
a lot in common about how they work, although the execution model of the
resulting code is quite different.

Bernd Paysan

unread,
May 27, 1998, 3:00:00 AM5/27/98
to

Albert D. Cahalan wrote:
> No you can't, unless you use assembly. You can do a few things
> like turn if(foo) into if(!foo) and add lots of goto statements,
> but it is fairly impossible to work around a compiler with inferior
> register allocation.

Certainly you can. GCC for ix86 has "inferior register allocation" (no
life range split, no profiling "how often is this variable used in real
code"). Therefore we (the Gforth team) allocate the critical registers
by hand, using asm() statements. We know which variables should go to
registers. This improved performance by about a factor of two (in the
time of GCC 2.5.7 on a 386, there is much less difference now).

James Youngman

unread,
May 27, 1998, 3:00:00 AM5/27/98
to Bernd Paysan

>>>>> "bp" == Bernd Paysan <bernd....@remove.muenchen.this.org.junk> writes:

bp> It should also be noted that no one intentionally codes bugs.

Almost no one. I read an interesting software engineering article a
while back that went like this:-

1. At any time, your users/QA people/developers etc. will have found a
certain proportion of the bugs in the program.
2. If, during development, you introduce a number of known bugs, you
keep track of how many of the deliberate bugs have been found.

Let Bd be the number of deliberate bugs introduced.
Let Ba be the (unknown) number of other bugs.
Let R be the number of reported bugs (we shall assume that it is
possible to remove reports for bugs that have already been reported
even though the symptoms may differ, for example by fixing the
reported bugs).
Suppose a fraction X of the reported bugs turn out to have been
deliberate.

Hence we have found (R.X) deliberate and R(1-X) non-deliberate bugs.
We have found a fraction (R.X)/Bd of the total number of deliberate
bugs.

Let us assume that the fraction of nondeliberate bugs discovered is
the same as the fraction of deliberate ones (Hmm.....).

Hence we assume we have found a fraction (R.X)/Bd of the nondeliberate
bugs, hence the fraction remaining is 1-(R.X/Bd).

But introducing bugs deliberately seems a bad idea to me, especially
as the implicit assumption that the deliberate bugs are no easier to
find than the nondeliberate ones is a bit tenuous.

Tristan Wibberley

unread,
May 27, 1998, 3:00:00 AM5/27/98
to


Rudolf Leitgeb <lei...@variable.stanford.edu> wrote in article
<6kat22$s0d$1...@nntp.Stanford.EDU>...
> In article <6ka2uc$eis$1...@mulga.cs.mu.oz.au>,
> f...@cs.mu.oz.au (Fergus Henderson) writes:
> > Someone may have misreported this as "egcs gets an internal compiler
> > error" when in fact the true cause may have been incorrect "asm"
> > statements in the code being compiled.
>
> The scary thing about this is not so much the internal error (or whatever
> it really is) but the fact that there doesn't seem to be a work-around, or
> I at least can't find one on the egcs FAQ page. So the average user expects
> that hell will break loose if one installs egcs and actually needs to get
> some work done with it ...

I think the egcs developers should only be expected to fix bugs in the
egcs code - if the code you tell egcs to compile is incorrectly written,
it is your fault and you should correct your code. When I write some code
and it doesn't compile because I've made a mistake, I don't complain to
the gcc developers and tell them to correct my code for me!

--
Tristan Wibberley

Rudolf Leitgeb

unread,
May 27, 1998, 3:00:00 AM5/27/98
to

In article <01bd898e$f897c820$2e1657a8@w_tristan.gb.tandem.com>,

"Tristan Wibberley" <tristan....@compaq.com> writes:
> I think the egcs developers should only be expected to bugfix the egcs code
> - if the code you tell egcs to compile is incorrectly written it is your
> fault - you should correct your code - when I write some code and it
> doesn't compile because I've made a mistake, I don't complain to the gcc
> developers and tell them to correct my code for me!

This is the MSDOS philosophy: It is stable, it's just bad applications
that cause it to freeze.

Hellooooo !!! It's 1998 !

A long time ago people discovered that it might be useful if a compiler
gives meaningful error messages if it is unhappy.

It is not egcs's job to fix buggy code but it should tell you why it is
unhappy. And, no, a core dump does not count as error message (although
it probably contains more information than any real error message :-)

Anyway, someone else wrote that this issue has been resolved and that
the FAQ was outdated ...

Rudi

--

| | | | |
\ _____ /
/ \ B O R N
-- | o o | -- T O
-- | | -- S L E E P
-- | \___/ | -- I N
\_____/ T H E S U N
/ \
| | | | |

Kaz Kylheku

unread,
May 27, 1998, 3:00:00 AM5/27/98
to

On 27 May 1998 18:08:03 GMT, lei...@variable.stanford.edu (Rudolf Leitgeb)
wrote:

>In article <01bd898e$f897c820$2e1657a8@w_tristan.gb.tandem.com>,
> "Tristan Wibberley" <tristan....@compaq.com> writes:
>> I think the egcs developers should only be expected to bugfix the egcs code
>> - if the code you tell egcs to compile is incorrectly written it is your
>> fault - you should correct your code - when I write some code and it
>> doesn't compile because I've made a mistake, I don't complain to the gcc
>> developers and tell them to correct my code for me!
>
>This is the MSDOS philosophy: It is stable, it's just bad applications
>that cause it to freeze.

Strictly speaking, that is true.

>Hellooooo !!! It's 1998 !
>
>A long time ago people discovered that it might be useful if a compiler
>gives meaningful error messages if it is unhappy.

GCC has great error messages, much more meaningful than those of other
popular compilers. I don't know what you are talking about.

>It is not egcs's job to fix buggy code but it should tell you why it is
>unhappy. And, no, a core dump does not count as error message (although
>it probably contains more information than any real error message :-)

Only EGCS dumps core. Previous versions of GCC _never_ did that, right?

Rodger Donaldson

unread,
May 27, 1998, 3:00:00 AM5/27/98
to

On 26 May 1998 17:26:15 -0700, Ronald Cole
<ron...@yakisoba.forte-intl.com> wrote:

>Clearly, RMS will never use the open development model because he
>feels it would mean giving up control. Of course, Linus has proved
>that such a notion is absurd, but there's no convincing RMS.

Linus still has ultimate power over Linux. In that sense, development is
closed. It isn't run by a board or core team like Apache or *BSD. OTOH,
what Linus is, is a great manager who has made people want to work with
him.

--
Rodger Donaldson rod...@ihug.co.nz
The Pinguin Patrol

Toon Moene

unread,
May 27, 1998, 3:00:00 AM5/27/98
to

Bernd Paysan <bernd....@remove.muenchen.this.org.junk> wrote:

> I wrote:

> > This seems very hard to grasp for people who never developed compilers. A
> > compiler is simply a huge piece of software that has bugs just as every
> > other piece of software of comparable size and complexity.

> It should also be noted that no one intentionally codes bugs. Bugs happen
> (should be a T-shirt motto), and they get into a release when they
> aren't discovered and fixed before. GCC 2.7 has several known bugs,
> and due to the rule that for every known bug there are three other bugs
> hiding, it also has several unknown bugs.

The truth is far worse. gcc-2.7.[12].x is so sloppy in optimisation that
various bugs are hidden because they are in code paths that never get
exercised. This is the main reason why you can get gcc-2.7.x.y to compile
weird Linux kernels chock full of asm statements using dubious constructs on
hopeless architectures like the x86 into correct code.

Aside from this, there was an asm that gcc-2.x (and by inheritance, egcs-1.0)
compiled wrongly because it didn't follow its own documentation, and of
course there were plenty of "grey" areas.

Egcs is the way to go because that's where these shortcomings are
*addressed*.

--
Toon Moene (mailto:to...@moene.indiv.nluug.nl)
Saturnushof 14, 3738 XG Maartensdijk, The Netherlands
Phone: +31 346 214290; Fax: +31 346 214286
g77 Support: mailto:for...@gnu.org; NWP: http://www.knmi.nl/hirlam

Rudolf Leitgeb

May 27, 1998

In article <356c5b37....@207.126.101.81>,

k...@cafe.dot.net (Kaz Kylheku) writes:
>>A long time ago people discovered that it might be useful if a compiler
>>gives meaningful error messages if it is unhappy.
>
> GCC has great error messages, much more meaningful than those of other
> popular compilers. I don't know what you are talking about.

I am fully aware of that and love to use gcc (and everybody else in our
group is trying to get away from HP's and SUN's "professional development
kits")

I was only responding to this previous post where Tristan claimed that it
would be sufficient if the compiler produced correct code from correct
source code.

The gcc info pages specifically say:

* If the compiler gets a fatal signal, for any input whatever, that
is a compiler bug. Reliable compilers never crash.

I assume (and hope) that egcs follows the same philosophy ...

> Only EGCS dumps core. Previous versions of GCC _never_ did that, right?

Hey, settle down! I had gcc 2.7.0 crashing, too, and had to upgrade to 2.7.2,
which seemed to work fine.

All I said was that it does not really encourage me to go to egcs if the
FAQ says that it crashes when I compile the kernel. I know, the kernel does
a lot of crazy stuff and has a lot of hacks to work around gcc 2.7.2 bugs.
But all this doesn't help me if I have to reconfigure and recompile the
kernel.

Anyway, since this crash supposedly got resolved a while ago, it is probably
not worth starting a silly flame war here ...

Cheers

Andi Kleen

May 27, 1998

k...@cafe.dot.net (Kaz Kylheku) writes:

> GCC has great error messages, much more meaningful than those of other
> popular compilers. I don't know what you are talking about.

At least the syntax error messages generated by the bison parser are rather
poor. g++ error messages have some problems too (e.g. try to find a syntax
error in a STL program). If you want to see how a compiler can generate
really good error messages look e.g. what GNAT (the Gnu Ada95 compiler)
generates.

-Andi

Andi Kleen

May 27, 1998

Toon Moene <to...@moene.indiv.nluug.nl> writes:
>
> Aside from this, there was an asm that gcc-2.x (and by inheritance, egcs-1.0)
> compiled wrongly because it didn't follow its own documentation, and of
> course there were plenty of "grey" areas.

To be fair the linux kernel[1] found a real gcc bug too. gcc 2.8.0 ignored
certain casts to volatile. Kenner fixed it in gcc 2.8.1, but AFAIK the
bug is only fixed in egcs-current (by the gcc 2.8 merges), but not in the
egcs 1.0.x tree. A compiler ignoring volatile is generally very suspicious
for low-level system programming...

-Andi

[1] the actual test case never appeared in a released linux kernel, but
it was one of the early proposed fixes for the sys_iopl() kernel bug
that was brought to light by gcc 2.8.x's addressof optimization.

Kaz Kylheku

May 27, 1998

On 27 May 1998 20:32:13 GMT, lei...@variable.stanford.edu (Rudolf Leitgeb)
wrote:

>I was only responding to this previous post where Tristan claimed that it
>would be sufficient if the compiler produced correct code from correct
>source code.

That's not sufficient for ANSI C compliance even; the compiler also has
to produce a diagnostic for any translation unit that requires it.
That's good enough for me. :)

Scott McDermott

May 27, 1998
David Kastrup on Tue, May 26, 1998 at 09:33:49PM +0200:

> It also makes you liable to getting forged and relabeled processors
> which can result in overheating, unreliability and premature death of
> processors. As long as you are not buying into the extreme expensive
> high end, but into the low end of chips being able to possible
> temporarily make it at a certain frequency, you are pretty safe from
> relabelers and the associated dangers.

The newer Intel chips have the multiplier locked, I believe -- the pins
no longer connect to anything internally. I think this took place
around the 333s.

--
Scott

Michael Meissner

May 27, 1998

Paul Flinders <pa...@dawa.demon.co.uk> writes:

> Is there a compiler which will move strlen out of the loop? Will it
> still do it when there's a user defined function doing the job of
> strlen (say you have strings implemented with an explicit length
> rather than using straight C-style strings)?

GCC will do this for user functions declared with the const attribute.

--
Michael Meissner, Cygnus Solutions (Massachusetts office)
4th floor, 955 Massachusetts Avenue, Cambridge, MA 02139, USA
meis...@cygnus.com, 617-354-5416 (office), 617-354-7161 (fax)

Robert Hyatt

May 27, 1998

In comp.os.linux.development.system Andi Kleen <a...@muc.de> wrote:

: Toon Moene <to...@moene.indiv.nluug.nl> writes:
:>
:> Aside from this, there was an asm that gcc-2.x (and by inheritance, egcs-1.0)
:> compiled wrongly because it didn't follow its own documentation, and of
:> course there were plenty of "grey" areas.

: To be fair the linux kernel[1] found a real gcc bug too. gcc 2.8.0 ignored
: certain casts to volatile. Kenner fixed it in gcc 2.8.1, but AFAIK the
: bug is only fixed in egcs-current (by the gcc 2.8 merges), but not in the
: egcs 1.0.x tree. A compiler ignoring volatile is generally very suspicious
: for low-level system programming...

Not to mention threaded programs... :)

--
Robert Hyatt Computer and Information Sciences
hy...@cis.uab.edu University of Alabama at Birmingham
(205) 934-2213 115A Campbell Hall, UAB Station
(205) 934-5473 FAX Birmingham, AL 35294-1170

Paul Flinders

May 28, 1998

Michael Meissner <meis...@cygnus.com> writes:

> Paul Flinders <pa...@dawa.demon.co.uk> writes:
>
> > Is there a compiler which will move strlen out of the loop? Will it
> > still do it when there's a user defined function doing the job of
> > strlen (say you have strings implemented with an explicit length
> > rather than using straight C-style strings.
>
> GCC will do this for user functions declared with the const attribute.

Even in the presence of a possible alias for the string storage?

Tristan Wibberley

May 28, 1998


Rudolf Leitgeb <lei...@variable.stanford.edu> wrote in article

<6khkq3$9r1$1...@nntp.Stanford.EDU>...


> In article <01bd898e$f897c820$2e1657a8@w_tristan.gb.tandem.com>,
> "Tristan Wibberley" <tristan....@compaq.com> writes:
> > I think the egcs developers should only be expected to bugfix the egcs
> > code - if the code you tell egcs to compile is incorrectly written it is
> > your fault - you should correct your code - when I write some code and it
> > doesn't compile because I've made a mistake, I don't complain to the gcc
> > developers and tell them to correct my code for me!
>
> This is the MSDOS philosophy: It is stable, it's just bad applications
> that cause it to freeze.
>

> Hellooooo !!! It's 1998 !
>

> A long time ago people discovered that it might be useful if a compiler
> gives meaningful error messages if it is unhappy.
>

> It is not egcs's job to fix buggy code but it should tell you why it is
> unhappy. And, no, a core dump does not count as error message (although
> it probably contains more information than any real error message :-)

> Anyway, someone else wrote that this issue has been resolved and that
> the FAQ were outdated ...

That was partly my point: the egcs code was wrong, and it was the job of the
egcs developers to fix it (which it seems they did). But I misunderstood
you; I thought you said that the egcs team should provide a workaround for
the bad code that people tried to compile, and I was arguing against that. I
completely agree with you on this.

--
Tristan Wibberley

Bernd Schmidt

May 28, 1998

Andi Kleen <a...@muc.de> writes:

>Toon Moene <to...@moene.indiv.nluug.nl> writes:
>>
>> Aside from this, there was an asm that gcc-2.x (and by inheritance, egcs-1.0)
>> compiled wrongly because it didn't follow its own documentation, and of
>> course there were plenty of "grey" areas.

>To be fair the linux kernel[1] found a real gcc bug too. gcc 2.8.0 ignored
>certain casts to volatile. Kenner fixed it in gcc 2.8.1, but AFAIK the
>bug is only fixed in egcs-current (by the gcc 2.8 merges), but not in the
>egcs 1.0.x tree. A compiler ignoring volatile is generally very suspicious
>for low-level system programming...

It can't be fixed in 1.0.x because it's not present in 1.0.x. egcs-1.0.x
doesn't have the addressof optimization that caused the problem.

Bernd

Andi Kleen

May 28, 1998

Paul Flinders <paul.f...@finobj.com> writes:

With the const attribute the user guarantees that the function has no
side effects. If you lie to the compiler you get what you deserve.


-Andi

Greg Lindahl

May 28, 1998

lei...@variable.stanford.edu (Rudolf Leitgeb) writes:

> The gcc info pages specifically say:
>
> * If the compiler gets a fatal signal, for any input whatever, that
> is a compiler bug. Reliable compilers never crash.
>
> I assume (and hope) that egcs follows the same philosophy ...

My experience is that the egcs folks follow the same philosophy.

-- g

Alexander Viro

May 28, 1998

In article <m390nm4...@fred.muc.de>, Andi Kleen <a...@muc.de> wrote:
>Paul Flinders <paul.f...@finobj.com> writes:
[snip]

>> Even in the presence of a possible alias for the string storage?
>
>With the const attribute the user gurantees that the function has no
>side effects. If you lie to the compiler you get what you deserve.

Erm...
	for (i = 0; i < strlen(s); i++) {
		s[100-i] = '\0';
	}
strlen _really_ has no side-effects. The body does.

--
My theory is that someone's Emacs crashed on a very early version of Linux
while reading alt.flame and the resulting unholy combination of Elisp and
Minix code somehow managed to bootstrap itself and take on an independent
existence. -- James Raynard in c.u.b.f.m on nature of Albert Cahalan

Osma Ahvenlampi

May 28, 1998

vi...@riemann.math.psu.edu (Alexander Viro) writes:
> >With the const attribute the user gurantees that the function has no
> >side effects. If you lie to the compiler you get what you deserve.

> Erm...
> for(i=0;i<strlen(s);i++) {
> s[100-i]='\0';
> }
> strlen _really_ has no side-effects. The body does.

But that's fairly trivial for the compiler to understand.. let's see, a
more fleshed out example..

I would guess that the compiler can not move const functions out of a
loop if 1) the function is passed a pointer to non-local data, or 2)
_all_ functions inside the loop are declared const and none share
arguments with the function you want moved out.

main.c:

char buf[100];
extern const int strlen(char *s);
extern void foo(int i);

int main()
{
	int i;

	for (i = 0; i < strlen(buf); i++) {
		foo(i);
	}
	return 0;
}

foo.c:

extern char buf[100];

void foo(int i) {
buf[i] = 0;
}

--
Liar: One who tells an unpleasant truth.
Osma Ahvenlampi <oa at iki fi> (damn spammers)

Michael Meissner

May 28, 1998

Paul Flinders <paul.f...@finobj.com> writes:

> Even in the presence of a possible alias for the string storage?

The const attribute says that you promise that, given the same input, the
function will always give the same return and not modify external storage.
Hence calls to functions with arguments that don't change within a loop can
be hoisted outside of the loop. Note, I was quite careful to say a user
function with the const attribute and not strlen itself, since right now the
compiler doesn't move the strlen function out of a loop (mainly because none
of us thought that it was an optimization that would be needed in real code).

Nathan Myers

May 28, 1998

Andi Kleen <a...@muc.de> wrote:
>Paul Flinders <paul.f...@finobj.com> writes:
>
>> Michael Meissner <meis...@cygnus.com> writes:
>>
>> > Paul Flinders <pa...@dawa.demon.co.uk> writes:
>> >
>> > > Is there a compiler which will move strlen out of the loop? Will it
>> > > still do it when there's a user defined function doing the job of
>> > > strlen (say you have strings implemented with an explicit length
>> > > rather than using straight C-style strings.
>> >
>> > GCC will do this for user functions declared with the const attribute.
>>
>> Even in the presence of a possible alias for the string storage?
>
>With the const attribute the user gurantees that the function has no
>side effects. If you lie to the compiler you get what you deserve.

If you read the thread more carefully, you will realize that Paul asked
a question worthy of a serious answer. The question was whether a
call to strlen might be moved out of a loop. This can only be safe
if the body of the loop cannot change any of the storage pointed to
by strlen's argument.

If Egcs does this analysis, it can consider this optimization.
I don't know whether it does, but I am interested to learn.

--
Nathan Myers
n...@nospam.cantrip.org http://www.cantrip.org/


Ronald Cole

May 28, 1998

lin...@pbm.com (Greg Lindahl) writes:

> Ronald Cole <ron...@yakisoba.forte-intl.com> writes:
> > Clearly, RMS will never use the open development model because he
> > feels it would mean giving up control. Of course, Linus has proved
> > that such a notion is absurd, but there's no convincing RMS.
>
> Here's a good example of someone flaming rms for something rms didn't
> even say -- you just made up something out of thin air, assign it to
> rms, and flame him for it. This is constructive dialogue?

Please, tell us where RMS or the FSF has used the open development
model.

> Try writing some code instead. And learn how to respect someone's
> contribution even if you don't agree with all of their ideas.

Talk about making stuff up out of thin air... Do some research, huh?

--
Forte International, P.O. Box 1412, Ridgecrest, CA 93556-1412
Ronald Cole <ron...@forte-intl.com> Phone: (760) 499-9142
President, CEO Fax: (760) 499-9152
My PGP fingerprint: 15 6E C7 91 5F AF 17 C4 24 93 CB 6B EB 38 B5 E5

Christopher Browne

May 29, 1998

How about another possibility, that being that it is *never* safe to
consider the optimization?

If the program uses threads, is it not possible that a separate thread
could manipulate said string, possibly leaving no traces that would be
particularly visible to the current context of execution?

I realize that this is highly unlikely to be what was intended by the
code in question.

Also, if the string was statically allocated within the context of the
function or compilation unit in question, it may be possible to statically
determine that it can't possibly change within the loop. But it certainly
requires a fair bit of analysis to do so...

--
"What's wrong with 3rd party tools? Especially if they are free? What
the **** do you think unix is anyway? It's a big honkin' party of 3rd
party free tools." --Bob Cassidy (rmca...@uci.edu)
cbbr...@hex.net - <http://www.hex.net/~cbbrowne/lsf.html>

Greg Lindahl

May 29, 1998

The insult was:

> > > Clearly, RMS will never use the open development model because he
> > > feels it would mean giving up control. Of course, Linus has proved
> > > that such a notion is absurd, but there's no convincing RMS.

[...]

> Please, tell us where RMS or the FSF has used the open development
> model.

Let's say that the fsf never has used the open model. You still can't
logically conclude that rms feels "it would mean giving up
control". But when someone posts that, it's pretty rude. I guess
rudeness is the norm this month.

-- g

Alexander Viro

May 29, 1998

In article <6kl781$g...@news3.newsguy.com>,
Greg Lindahl <lin...@pbm.com> wrote:
>The insult was:
[snip]

>Let's say that the fsf never has used the open model. You still can't
>logically conclude that rms feels "it would mean giving up
>control". But when someone posts that, it's pretty rude. I guess
>rudeness is the norm this month.

Yup. Endless September. What do you want - look at the guy's .sig.
September is lusers' month...

Nathan Myers

May 29, 1998

Christopher Browne<cbbr...@hex.net> wrote:
>On 28 May 1998 11:35:55 -0700, Nathan Myers <n...@nospam.cantrip.org> wrote:
>> Andi Kleen <a...@muc.de> wrote:
>>>Paul Flinders <paul.f...@finobj.com> writes:
>>>> Michael Meissner <meis...@cygnus.com> writes:
>>>> > Paul Flinders <pa...@dawa.demon.co.uk> writes:
>>>> > > Is there a compiler which will move strlen out of the loop? Will it
>>>> > > still do it when there's a user defined function doing the job of
>>>> > > strlen (say you have strings implemented with an explicit length
>>>> > > rather than using straight C-style strings.
>>>> >
>>>> > GCC will do this for user functions declared with the const attribute.
>>>>
>>>> Even in the presence of a possible alias for the string storage?
>>>
>>>With the const attribute the user gurantees that the function has no
>>>side effects. If you lie to the compiler you get what you deserve.
>>
>>If you read the thread more carefully, you will realize that Paul asked
>>a question worthy of a serious answer. The question was whether a
>>call to strlen might be moved out of a loop. This can only be safe
>>if the body of the loop cannot change any of the storage pointed to
>>by strlen's argument.
>>
>
>How about another possibility, that being that it is *never* safe to
>consider the optimization?
>
>If the program uses threads, is it not possible that a separate thread
>could manipulate said string, possibly leaving no traces that would be
>particularly visible to the current context of execution?

No. The compiler assumes that storage it is not touching is not
being changed, unless you are using a pointer to volatile. It's
your job to make sure that assumption is justified.

Paul Flinders

May 29, 1998


Michael Meissner <meis...@cygnus.com> writes:

> Paul Flinders <paul.f...@finobj.com> writes:
>
> > Even in the presence of a possible alias for the string storage?
>

> The const attribute says that you promise given the same input the function
> will always give the same return and not modify external storage. Hence calls
> to functions with arguments that don't change within a loop can be hoisted
> outside of the loop.

I thought there was a separate attribute for no side effects (I don't
have a copy of gcc at work as it's all Mickeysoft around here so I
couldn't check).

I assume that the optimisation isn't done if the input arg is "volatile"?

> Note, I was quite careful to say a user function with const
> attribute and not strlen itself, since right now, the compiler
> doesn't move the strlen function out of a loop (mainly because none
> of thought that it was an optimization that would be needed in real
> code).

As I said I've seen loops like that more than once in real code, but
it wouldn't survive a review.

Paul Flinders

May 29, 1998

cbbr...@news.hex.net (Christopher Browne) writes:
> If the program uses threads, is it not possible that a separate thread
> could manipulate said string, possibly leaving no traces that would be
> particularly visible to the current context of execution?

Or a signal handler in a single threaded program.

Although I *think* you should be able to warn the compiler of this
possibility with the "volatile" keyword.

>
> Also it is possible if the string was statically allocated within the
> context of the function or compilation unit in question, it may be
> possible to statically determine that it can't possibly change within
> the loop. But it certainly requires a fair bit of analysis to do so...

This is really my point - a programmer has more opportunities for
optimisation because the programmer has access to more "meta data" than
the compiler. In the specific example (although we've got a little
bogged down in it) a programmer has access to the context of the
function and its data within the program. They are also in a position to
recognise patterns in the program, its data and the environment and
choose algorithms appropriately. Compilers can optimise, at best, only a
portion of what _can_ be optimised.

In the end hints like "const", "no side effects" and "volatile" are a
way of informing the compiler of _some_ of the environment so that it
can safely carry out optimisations which might otherwise be dangerous.

However, even if the compiler doesn't have these optimisations (which
need the attributes setting anyway) you can still achieve most of the
same result by hand. In fact you won't run into portability problems
when only one of your compilers understands __attribute__ ((const))
(real programs must often be compiled by many compilers on many
systems).

So back to the original point - just because compiler A optimises twice
as well as compiler B does not mean that using compiler B implies that
the application has to run twice as slowly. It definitely doesn't mean
that the application will be *perceived* as running twice as slowly.

If compiler A brings a significant risk of bugs, which will cause you
increased maintenance costs, then I don't think it should be used if its
only advantage is the optimisation.

--
Paul

Karl Berry

May 29, 1998

Please, tell us where RMS or the FSF has used the open development
model.

As far as I know (almost?) all FSF projects are ``open development''.

Speaking for myself (maintaining Texinfo at the moment), I certainly
welcome all contributions (and many people not connected with the FSF at
all have made very significant ones), I make releases as quickly as I
can, there is a mailing list for interested people, etc., etc.
Am I missing some crucial criterion?

The same is true of many other GNU projects.

ka...@cs.harvard.edu


Andi Kleen

May 29, 1998

Karl Berry <ka...@suite.deas.harvard.edu> writes:

> Please, tell us where RMS or the FSF has used the open development
> model.
>
> As far as I know (almost?) all FSF projects are ``open development''.

For example emacs isn't, because you can't download development snapshots
and the beta testers seem to be a closed group.

-Andi

Jonathan Magid

May 29, 1998

In article <slrn6mnna9....@orwell.rm.gen.nz>,
Rodger Donaldson <rod...@ihug.co.nz> wrote:
>
>Linus still has ultimate power in Linux. In that sense, development is
>closed. It isn't a board or core like Apache or *BSD. OTOH, what Linus is,
>is a great manager, who has made people want to work with him.
>

Well- he has power because people agree to let his decisions be (more or less)
final. Of course he exercises no coercive force. There is nothing to stop
someone else from appointing themselves the "Linux Development Board" or
somesuch- but they would first have to convince everyone else to go along
with their pronouncements...

cheers,
jem.


--
--
j...@sunsite.unc.edu|SunSITE/METAlab Technical Director (Full Time Geek)


Kaz Kylheku

May 29, 1998

On Wed, 27 May 1998 09:30:49 +0000, rod...@orwell.rm.gen.nz (Rodger Donaldson)
wrote:

>Linus still has ultimate power in Linux. In that sense, development is
>closed. It isn't a board or core like Apache or *BSD. OTOH, what Linus is,
>is a great manager, who has made people want to work with him.

Albeit a special kind of manager who can actually code. :)

Joe Buck

May 29, 1998

Rodger Donaldson <rod...@ihug.co.nz> wrote:
>>Linus still has ultimate power in Linux. In that sense, development is
>>closed. It isn't a board or core like Apache or *BSD. OTOH, what Linus is,
>>is a great manager, who has made people want to work with him.

Jonathan Magid <j...@daintree.oit.unc.edu> wrote:
>Well- he has power because people agree to let his decisions be (more or less)
>final. Of course he exercizes no coercive force. There is nothing to stop
>someone else to appoint themselves the "Linux Development Board" or somesuch-
>but they would first have to convince everyone else to go along with their
>pronouncements...

Linus gets to be benevolent dictator as long as people think he is doing
a good job. If a significant faction thinks otherwise, they can do a fork.
However, there is an interesting complication. Thanks to a legal battle,
Linus owns the trademark on Linux. This means that no one else can legally
call their OS Linux, should he choose to enforce his rights.


>
>--
>--
>j...@sunsite.unc.edu|SunSITE/METAlab Technical Director (Full Time Geek)
>


--
-- Joe Buck
work: jb...@synopsys.com, otherwise jb...@welsh-buck.org or jb...@best.net
http://www.welsh-buck.org/

Joe Buck

May 29, 1998

Karl Berry <ka...@suite.deas.harvard.edu> writes:
>> Please, tell us where RMS or the FSF has used the open development
>> model.
>>
>> As far as I know (almost?) all FSF projects are ``open development''.

In article <m3u369e...@fred.muc.de>, Andi Kleen <a...@muc.de> wrote:
>For example emacs isn't, because you can't download development snapshots
>and the beta testers seem to be a closed group.

Correct ... some FSF projects have followed an open development model
and some have followed a closed model. The FSF has left it up to the
head maintainer to decide. Most have chosen a relatively open process;
some have chosen a more closed process.

Ronald Cole

May 29, 1998

lin...@pbm.com (Greg Lindahl) writes:
> Let's say that the fsf never has used the open model. You still can't
> logically conclude that rms feels "it would mean giving up
> control". But when someone posts that, it's pretty rude. I guess
> rudeness is the norm this month.

RMS has always said that putting up remote cvs access of the fsf
projects (like the egcs people have) is not in his best interest.
What do you think that "best interest" is, then, if not "control"?
Please tell us.

Zack Weinberg

May 29, 1998

[Followups trimmed -- not relevant to comp.os.linux.*]

In article <87ra1cv...@yakisoba.forte-intl.com>,


Ronald Cole <ron...@yakisoba.forte-intl.com> wrote:
>lin...@pbm.com (Greg Lindahl) writes:
>> Let's say that the fsf never has used the open model. You still can't
>> logically conclude that rms feels "it would mean giving up
>> control". But when someone posts that, it's pretty rude. I guess
>> rudeness is the norm this month.
>
>RMS has always said that putting up remote cvs access of the fsf
>projects (like the egcs people have) is not in his best interest.
>What do you think that "best interest" is, then, if not "control"?
>Please tell us.

I don't know about RMS or other FSF projects, but let me talk a bit
about remote CVS and libc.

We do in fact have remote CVS access to our source tree available, and
if you ask us privately we will tell you where to look. We don't
publicize this information because we don't want J. Random Linux User
to pull the CVS tree, install it as his primary C library without
reading the documentation, and then pester us about how his system
suddenly doesn't work anymore. (The development libc, like the
development kernel, requires a number of utility upgrades and a bit of
caution to work at all.)

EGCS can publicise remote CVS access with less hassle because upgrading
your compiler usually doesn't break the system -- but you'll note that
ever since EGCS appeared there have been perpetual complaints about how it
miscompiles Linux 2.0, despite this being well documented as a known
problem which will be fixed only in kernel 2.1.

Glibc development welcomes new hackers. Please, go read the bug list
(http://www-gnats.gnu.org:8080/cgi-bin/quick?Category=libc&State=open)
or the tasks list (PROJECTS file in a snapshot tarball; this should be
on the web but isn't yet) and send us patches. All we ask is that you
exhibit a clue.

Please note I am only a contributor to libc; this is not an official
statement. I also am a bit bitter about people who don't read docs,
since, as one of the fielders of bug reports, I get to answer the same
questions over and over again from those people.

zw

jo...@dhh.gt.org

May 29, 1998

Joe Buck writes:
> However, there is an interesting complication. Thanks to a legal battle,
> Linus owns the trademark on Linux. This means that no one else can
> legally call their OS Linux, should he choose to enforce his rights.

That trademark is unenforceable.
--
John Hasler This posting is in the public domain.
jo...@dhh.gt.org Do with it what you will.
Dancing Horse Hill Make money from it if you can; I don't mind.
Elmwood, Wisconsin Do not send email advertisements to this address.

Thomas Bushnell, n/BSG

May 29, 1998

Andi Kleen <a...@muc.de> writes:

> For example emacs isn't, because you can't download development snapshots
> and the beta testers seem to be a closed group.

In that case, the Hurd is an open development model.

Aaron M. Renn

May 29, 1998

Karl Berry wrote:
> As far as I know (almost?) all FSF projects are ``open development''.
>
> Speaking for myself (maintaining Texinfo at the moment), I certainly
> welcome all contributions (and many people not connected with the FSF at
> all have made very significant ones), I make releases as quickly as I
> can, there is a mailing list for interested people, etc., etc.
> Am I missing some crucial criterion?

A major stumbling block towards FSF projects being truly open is their very
rigid stance wrt copyright assignment and employer waivers. The FSF demands
that virtually any contributor to an official FSF project sign over
copyright to their code to the FSF. Additionally, if a contributor is
employed as a professional software developer, the FSF will probably demand
that you get an explicit copyright waiver from your employer before
accepting the contribution. This is true even if the contributor has
already released the code under the GPL.

Both of these items are major problems for an open development model. Even
if people are willing to assign copyright - which is not a major problem for
an established GNU package, IMO - the waiver problem is a really big deal.

I just checked my source code to the latest release of Texinfo and see that
the modules I checked were all FSF copyrighted. Karl, if someone releases
their code under the GPL, but refuses to assign copyright, will you and the
FSF accept it? Linus does not require that people sign over legal rights to
their code to him before accepting their contribution.

Before anyone disputes my assertion, let me mention that I have personal
experience with the FSF which indicates they will refuse any non-assigned
contributions, even if they find them of value and even if they are
significant (i.e., thousands of lines of code).

--
*****************************************************
* Aaron M. Renn *
* Email: ar...@urbanophile.com *
* Homepage: <URL:http://www.urbanophile.com/arenn/> *
*****************************************************

Tom Payne

unread,
May 30, 1998, 3:00:00 AM5/30/98
to

In comp.os.linux.misc Paul Flinders <paul.f...@finobj.com> wrote:

: cbbr...@news.hex.net (Christopher Browne) writes:
: > If the program uses threads, is it not possible that a separate thread
: > could manipulate said string, possibly leaving no traces that would be
: > particularly visible to the current context of execution?

: Or a signal handler in a single threaded program.

: Although I *think* you should be able to warn the compiler of this
: possibility with the "volatile" keyword.

Per the standard, one gets undefined behavior from any attempt by a
signal handler to write to a static object whose type is other than
volatile sig_atomic_t. Signal handlers can set flags and that's that.
They can't even read the flags they set.

Tom Payne

David Kastrup

unread,
May 30, 1998, 3:00:00 AM5/30/98
to

"Aaron M. Renn" <ar...@urbanophile.com> writes:

> A major stumbling block towards FSF projects being truly open is
> their very rigid stance wrt copyright assignment and employer
> waivers. The FSF demands that virtually any contributor to an
> official FSF project sign over copyright to their code to the FSF.

Wrong. They demand a *non-exclusive* transfer of copyright interests,
and a disclaimer in which you (and perhaps your employer) renounce any
interest in the software. In short, they demand the legal documents
necessary for them to
a) be able to sue if someone breaches the GPL on the resulting
software
b) be sure not to get sued for using presumably GPLed software under
the GPL.

Read the forms at www.fsf.org. Those requirements are not out of
proportion, and giving them once for several projects is sufficient.

> I just checked my source code to the latest release of Texinfo and
> see that the modules I checked were all FSF copyrighted. Karl, if
> someone releases their code under the GPL, but refuses to assign
> copyright, will you and the FSF accept it?

The FSF probably not, in particular not for what they consider "key
components". They would give up having the GPL enforceable by them if
they did. How much weight other developers give that may vary.

> Linus does not require that people sign over legal rights to
> their code to him before accepting their contribution.

Well, perhaps he is less paranoid. But in software circles, a bit of
paranoia is not per se a bad thing.


--
David Kastrup Phone: +49-234-700-5570
Email: d...@neuroinformatik.ruhr-uni-bochum.de Fax: +49-234-709-4209
Institut für Neuroinformatik, Universitätsstr. 150, 44780 Bochum, Germany

Fergus Henderson

unread,
May 30, 1998, 3:00:00 AM5/30/98
to

Michael Meissner <meis...@cygnus.com> writes:

>Paul Flinders <pa...@dawa.demon.co.uk> writes:
>
>> Is there a compiler which will move strlen out of the loop? Will it
>> still do it when there's a user defined function doing the job of
>> strlen (say you have strings implemented with an explicit length
>> rather than using straight C-style strings)?
>
>GCC will do this for user functions declared with the const attribute.

Yes, but it would be incorrect to declare strlen() with the const
attribute, because it depends not only on the value of the pointer
passed, but also on what it points to.

--
Fergus Henderson <f...@cs.mu.oz.au> | "I have always known that the pursuit
WWW: <http://www.cs.mu.oz.au/~fjh> | of excellence is a lethal habit"
PGP: finger f...@128.250.37.3 | -- the last words of T. S. Garp.
