It concerns me that the homepage for egcs (http://egcs.cygnus.com)
reads, at the very top:
"egcs is an experimental step in the development of GCC,
the GNU C compiler."
Is it as stable as GCC was... is it now fully backward source-compatible
with GCC 2.7.2.3? Reading old articles about it on DejaNews, I see
references to kernel compiles giving copious numbers of warnings, and to
several software packages that need patching to compile under it.
Have these been cleaned up in the latest release of egcs? I keep my server
up to date with the latest -stable- releases of kernels, libraries, and
compilers. I have seen far too many 'stable' kernels broken in the last
year for comfort. The growing sentiment that I am seeing out there is that
Linux, even the 'stable' side, is for 'bleeding-edge'ists: that if you have
serious need of a stable, dependable system, you use FreeBSD.
This isn't a FreeBSD vs. Linux post. This is an "I'm concerned about
the stability of an experimental compiler" post. If it is stable, then
I'm all for it. If it is not at least as stable, as compatible with
existing source, and as compatible with existing hardware as GCC 2.7.2.3,
then I would put my voice in to urge returning GCC to Sunsite, as that is
the general benchmark of what is 'standard' in Linux.
Kurt Fitzner
Problems with compiling kernels with egcs were problems with the
kernel source, not egcs. The kernel was depending on undefined behavior
of gcc, and that dependence was exposed by more aggressive optimization
in egcs. Problems with other software packages are likely to be
broken code that compiled under gcc but that egcs rejects. You would be
amazed at the kinds of things that gcc-2.7.2 will compile without a
burp, especially in C++ code.
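To make that concrete, here is a minimal, hypothetical illustration (not
actual kernel code) of the classic offender: an inline asm that clobbers
a register without telling the compiler.

    #include <stdio.h>

    /* Hypothetical illustration, not actual kernel code. */
    void zero_eax_broken(void)
    {
        /* BUG: silently clobbers %eax.  An older, less aggressive
         * gcc may happen to generate surrounding code that survives
         * this; a compiler that keeps values live in registers
         * across the asm is entitled to break it. */
        __asm__("movl $0, %eax");
    }

    void zero_eax_fixed(void)
    {
        /* Correct: the clobber list tells the register allocator
         * that %eax is destroyed by the asm. */
        __asm__("movl $0, %%eax" : : : "eax");
    }

    int main(void)
    {
        zero_eax_fixed();
        puts("ok");
        return 0;
    }

Under gcc-2.7.2 the broken form often happened to get away with it;
egcs's more aggressive register allocation is exactly what exposes it.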
> The growing sentiment that I am seeing out there is that Linux, even
> the 'stable' side, is for 'bleeding-edge'ists.
The "growing sentiment" is that Linux has an obligation to those who
want stability. This is actually a very new thing in the lifetime of
Linux. Linux started as a hacker's OS, and the development cycle is
still working to find a balance between the interests of the
programmer and the end user. I think it's doing a fine job.
> then I would put my voice in to urge returning GCC to Sunsite, as
> that is the general benchmark of what is 'standard' in Linux.
Sunsite isn't the benchmark of anything significant in Linux, as far
as I can tell. It used to be a "standard" repository for stuff; I
personally haven't gotten a file from there in several years.
b.g.
This raises the question:
"Were these issues due to problems with EGCS, or were they due to problems
with the software?"
The answer, at least with respect to the Linux kernel, is definitely the
latter, and furthermore it represented workarounds for *old* problems in
previous versions of GCC.
From the EGCS FAQ:
"Problems building Linux kernels
If you installed a recent binutils/gas snapshot on your Linux system, you
may not be able to build the kernel because objdump does not understand the
"-k" switch. The solution for this problem is to remove /usr/bin/encaps.
The reason you must remove /usr/bin/encaps is because it is an obsolete
program that was part of older binutils distributions; the Linux kernel's
Makefile looks for this program to decide if you have an old or a new
binutils. Problems occur if you installed a new binutils but haven't removed
encaps, because the Makefile thinks you have the old one. So zap it; trust
us, you won't miss it.
You may get an internal compiler error compiling process.c in newer versions
of the Linux kernel on x86 machines. This is a bug in an asm statement in
process.c, not a bug in egcs. XXX How to fix?!?
You may get errors with the X driver of the form
_X11TransSocketUNIXConnect: Can't connect: errno = 111
It's a kernel bug. The function sys_iopl in arch/i386/kernel/ioport.c does
an illegal hack which used to work but is now broken since GCC optimizes
more aggressively. The newer 2.1.x kernels already have a fix which should
also work in 2.0.32."
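For the curious, the sys_iopl hack was of roughly this shape - a
simplified, hypothetical sketch, not the actual kernel source. The
syscall entry stub pushes the saved user registers, the C function
declares them as ordinary parameters, and the code writes through a
parameter's address to patch the saved EFLAGS in place:

    /* Hypothetical sketch of the general shape of the bug; the real
     * function declared many more register parameters. */
    long iopl_style(long level, long eflags)
    {
        /* Assumes &eflags still aliases the register save area
         * pushed by the syscall entry stub.  True only if the
         * compiler leaves every argument in its incoming stack
         * slot -- an ABI detail it never promised, and one that
         * more aggressive optimization stopped honoring. */
        *(&eflags) = (eflags & ~0x3000L) | ((level & 3) << 12);
        return 0;
    }

Once the optimizer copies arguments into registers or its own stack
slots, that write goes nowhere, and iopl() silently stops working.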
>Have these been cleaned up in the latest release of egcs?
Alternatively, "have the bugs in the kernel been cleaned up?"
In effect, the former state of affairs was that the Linux kernel and GCC
2.7.x *both* had bugs; by fixing the bugs in GCC, some of the kernel bugs
have been exposed.
I am led to understand that roughly the same issues apply to GCC 2.8, as it
cleans up many of the same problems in the C and C++ compilers.
At least it's not like MSC 5.1; I worked with software that *could not* be
upgraded to new compiler versions because there were so many "hacks" to work
around Microsoft's bugs...
--
Those who do not understand Unix are condemned to reinvent it, poorly.
-- Henry Spencer <http://www.hex.net/~cbbrowne/lsf.html>
cbbr...@hex.net - "What have you contributed to Linux today?..."
In article <slrn6m7cfe....@knuth.brownes.org>, cbbr...@news.brownes.org (Christopher B. Browne) writes:
> On 20 May 1998 21:11:53 GMT, Kurt Fitzner <kfit...@nexus.v-wave.com> posted:
> In effect, the former state of affairs was that the Linux kernel
> and GCC 2.7.x *both* had bugs; by fixing the bugs in GCC, some
> of the kernel bugs have been exposed.
I'm not a programmer by trade (systems engineer), but with Linux's
increasing popularity, it would seem that a 2.0.34 patch would be
in order to fix this. Or, if the 2.0.xx kernel source were fixed
for the gcc 2.8 compiler, would that break things when compiling
with the gcc 2.7.x compiler?
> I am led to understand that roughly the same issues apply to
> GCC 2.8, as it cleans up many of the same problems in the C
> and C++ compilers.
Watching and wondering,
Perry Grieb
--
Perry Grieb
c23...@eng.delcoelect.com
My Primary Application: Win95 Hour Glass (Is there a unix port?)
>> Is it as stable as GCC was...
Both are unstable, because they are software and software is like that.
> "Were these issues due to problems with EGCS, or were they due
> to problems with the software?"
>
> The answer, at least with respect to the Linux kernel,
> is definitely the latter,
No, the compiler is buggy too:
http://linuxwww.db.erau.edu/mail_archives/linux-kernel/May_98/3162.html
In spite of that, egcs may be _less_ buggy than gcc. We've gotten
comfortable with the bugs in gcc, so we don't notice them anymore.
> You may get an internal compiler error compiling process.c in
> newer versions of the Linux kernel on x86 machines. This is a bug
> in an asm statement in process.c, not a bug in egcs. XXX How to fix?!?
Internal compiler errors are not compiler bugs?!?
It shouldn't matter how bogus the code is. I should be able to
feed a .jpeg to the compiler and not get any internal compiler errors.
cat /dev/urandom | head >> /usr/src/linux/kernel/sys.c
> I am led to understand that roughly the same issues apply to GCC 2.8,
> as it cleans up many of the same problems in the C and C++ compilers.
Reports seem to indicate that gcc 2.8 is buggier than egcs.
I've heard Red Hat has been doing builds with egcs, so it can't
be too bad. There is an RPM, so if you screw up the system you
can just reinstall the old compiler.
| > You may get an internal compiler error compiling process.c in
| > newer versions of the Linux kernel on x86 machines. This is a bug
| > in an asm statement in process.c, not a bug in egcs. XXX How to fix?!?
|
| Internal compiler errors are not compiler bugs?!?
|
| It shouldn't matter how bogus the code is. I should be able to
| feed a .jpeg to the compiler and not get any internal compiler errors.
|
| cat /dev/urandom | head >> /usr/src/linux/kernel/sys.c
Damn! Is *that* how they write the code ;-) And I thought they had those
ten thousand monkeys typing away...
| > I am led to understand that roughly the same issues apply to GCC 2.8,
| > as it cleans up many of the same problems in the C and C++ compilers.
|
| Reports seem to indicate that gcc 2.8 is buggier than egcs.
| I've heard Red Hat has been doing builds with egcs, so it can't
| be too bad. There is an RPM, so if you screw up the system you
| can just reinstall the old compiler.
I will cautiously say I've been using pgcc-1.02 for a few weeks,
recompiled the kernel, all the stuff you need to build to go from 2.0.33
to 2.1.101, most of my applications, and no problems yet.
--
bill davidsen <davi...@tmr.com>
"If I were a diplomat, in the best case I'd go hungry. In the worst
case, people would die."
-- Robert Lipe
Yes, the egcs releases are far more solid than 2.8.1 is. Almost all
of the gcc maintainers are now working on egcs.
>I've heard Red Hat has been doing builds with egcs, so it can't
>be too bad. There is an RPM, so if you screw up the system you
>can just reinstall the old compiler.
egcs 1.0.3 was done just for Red Hat (especially for Red Hat/Alpha).
--
-- Joe Buck
work: jb...@synopsys.com, otherwise jb...@welsh-buck.org or jb...@best.net
http://www.welsh-buck.org/
jb> Albert D. Cahalan <acah...@jupiter.cs.uml.edu> wrote:
>> Reports seem to indicate that gcc 2.8 is buggier than egcs.
jb> Yes, the egcs releases are far more solid than 2.8.1 is. Almost all
jb> of the gcc maintainers are now working on egcs.
If true, doesn't this strike anyone as A Bad Thing?
I mean, the idea of egcs as a proving ground for cool new technology is
great, but don't people think getting 2.8.1 stable is just as, if not
more, important?
--
-------------------------------------------------------------------------------
Paul D. Smith <psm...@baynetworks.com> Network Management Development
"Please remain calm...I may be mad, but I am a professional." --Mad Scientist
-------------------------------------------------------------------------------
These are my opinions--Bay Networks takes no responsibility for them.
> %% jb...@best.com (Joe Buck) writes:
>
> jb> Albert D. Cahalan <acah...@jupiter.cs.uml.edu> wrote:
>
> >> Reports seem to indicate that gcc 2.8 is buggier than egcs.
>
> jb> Yes, the egcs releases are far more solid than 2.8.1 is. Almost all
> jb> of the gcc maintainers are now working on egcs.
>
> If true, doesn't this strike anyone as A Bad Thing?
>
> I mean, the idea of egcs as a proving ground for cool new technology is
> great, but don't people think getting 2.8.1 stable is just as, if not
> more, important?
yes, and no. this is a bad thing, iff gcc 2.8.1 doesn't get the fixes from
egcs, and distros continue to ship 2.8.1. however, i don't think this will
be the case - redhat is already planning on shipping egcs.
the linux kernel, and now egcs, show that a looser model of software
development can indeed have vast positive results. in particular, doing
lots of small releases, like egcs has been doing, means that the bugs pop
up in smaller numbers, making them easier to find and squash. new behavior
is much less of a surprise, because there's a much wider audience that's
been playing with it, and commenting on it.
nowadays, doing a major software upgrade, such as gcc 2.7 to 2.8, is almost
akin to switching over to a different program instead of a new version of
the same one. the free software market is showing that these huge,
infrequent releases simply can't compete with the flurries of snapshots
that egcs and the linux kernel produce.
in the end, i see one of two things happening. one, more work goes into
bringing the features and bugfixes from egcs back into gcc, keeping the two
closer together. this seems to have been the original intent of egcs, but
personally, i question if gcc can keep up with egcs for much longer.
two, the two compilers stay separate, probably with one of them 'winning'
as the primary compiler. for the linux world at least, this would probably
be egcs, the way things have been going. if the FSF, or another large
chunk of the free software world such as the *BSD people, stuck with gcc,
this could make for yet another set of flamewars... :-P
--
Frank Sweetser rasmusin at wpi.edu fsweetser at blee.net | PGP key available
paramount.res.wpi.net RedHat 5.0 kernel 2.1.102 i586 | at public servers
I've run DOOM more in the last few days than I have the last few
months. I just love debugging ;-)
(Linus Torvalds)
Parallel this with the development of the Linux kernel...
Which is more important?
- Getting 2.0.34 stable? or
- Getting 2.1.104 stable?
There are certainly more people working on 2.1.x; the overall process
encourages that, as they try to restrict the degree of change that goes
into 2.0.x.
The changes made in 2.0.x are pretty much a set of changes that were
already made to 2.1.x.
I would suggest that what is happening is that GCC is being turned into
a "Bazaar" project.
The "normal" state of affairs would be that EGCS 1.0.x would be based on
the "very stable" GCC 2.8.x, and add in "entertaining experimental
changes."
Good changes from EGCS would then be passed back into the "very stable"
GCC 2.8.x, incrementing x.
At some point, they decide to move to "EGCS 2.0" (or some such number),
and deploy the "stabilized" 1.x version of EGCS as the "new, stable"
GCC 3.0.
That's the "normal" approach. (And I'm being "wild" with the
numbering... Real life might use other numbers...)
Reality is that GCC development had somewhat broken down so that 2.8.x
has been, and still is, fairly much "experimental," and not nearly as
stable as one might like. It will take some effort to get it "fixed
up," and it seems to me that it is quite likely that what *REALLY*
happens is that those that are working on EGCS (that includes much of
the "former GCC team") will at some point stabilize a release of EGCS
and essentially push it out to being the "modern and stable" GCC
release.
In some respects, this has really not been a good year for the FSF;
there have been enough occurrences of RMS saying things that are
exceedingly "flameworthy" as to encourage the growth of some really
rather independent development efforts.
It is quite possible that the "wider Linux community" could effectively
make the FSF, as an organization, irrelevant.
When you consider the longterm efforts of FSF contributors including
RMS, that is unfortunate.
When you consider that the FSF is a fairly small organization with
clearcut leadership, and that the Linux community really has no single
fixed "centre," it has to make one ask some questions about how such a
relatively "unthinking mob" (which is not intended as a "flame" to
"Linux people;" merely to suggest that the diversity makes it somewhat
difficult to make *clear* decisions) can so readily overcome a group
that has a clear mandate and clear leadership.
--
"Linux: the operating system with a CLUE... Command Line User
Environment". (seen in a posting in comp.software.testing)
cbbr...@hex.net - <http://www.hex.net/~cbbrowne/lsf.html>
Naw, that's Billy's Boys. Linux is *automated*. :-)
--
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
Felix Finch: scarecrow repairman & rocket surgeon / fe...@crowfix.com
PGP = 91 B3 94 7C E9 E8 76 2D E1 63 51 AA A0 48 89 2F ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o
Frank, you missed a (sub)variant here: RH takes egcs, Debian - gcc. Or
vice versa. The *BSD folks are a relatively separate group. But _that_ <shrug>
Considering the number of newbies who barely know about anything but RH5.0...
Did you notice the postings mentioning "Linux 5.0"? ;-/ I'm afraid that
when Debian 2.0 is released we will all be forced to use killfiles in a
big way, or sleep in asbestos underwear...
ObBSDfolks: the FreeBSD folks finally decided to switch to ELF. My pity to
them - I remember that pain in the ass all too well. OTOH, they have seen
how we did it...
--
My theory is that someone's Emacs crashed on a very early version of Linux
while reading alt.flame and the resulting unholy combination of Elisp and
Minix code somehow managed to bootstrap itself and take on an independent
existence. -- James Raynard in c.u.b.f.m on nature of Albert Cahalan
I made some tests yesterday comparing produced code
sizes and speed on an x86 (an AMD K6) using gcc-2.7.2.3 and
egcs-1.0.3.
In short:
I tried various sorting algorithms and an implementation of Blowfish.
It seems like egcs still has some optimization problems with certain
constructs; at least, comparing the i386 assembler code showed that
in some cases real nonsense was produced (nonsense which worked,
but was unnecessary).
I looked at the RTL output, and for some reason it looks
like egcs avoids the use of the indexing modes of the i386, as far
as I can tell.
Is someone from the egcs project interested in looking further?
(I looked at the i386.md file myself, but it seems like that's
the wrong place?)
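For anyone who wants to reproduce this kind of comparison, a sketch of a
test harness (hypothetical file name and invocations; dump-file naming
varies between compiler versions):

    /* probe.c - any hot loop will do; this one is a natural
     * candidate for i386 indexed addressing (base + index*4).
     * Compare the generated code with, e.g.:
     *   gcc -O2 -S probe.c -o probe-272.s    (gcc 2.7.2.3)
     *   gcc -O2 -S probe.c -o probe-egcs.s   (egcs, if installed as gcc)
     * and diff the .s files; adding -da also dumps the RTL after
     * each pass into probe.c.* files for closer inspection. */
    void probe(int *a, int n)
    {
        int i;
        for (i = 0; i < n; i++)
            a[i] = a[i] * 2 + i;
    }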
so long
Ingo
--
Please remove .PLOPP at the end of my mail address if you want to reach
me. I'm sorry for the inconvenience, but I'm fed up with spammers.
> I don't think this is acceptable for GCC. I think it would be
> devastating if GCC became unstable or even unbuildable on many of its
> currently-supported platforms, due to neglect of portability issues by
> maintainers.
Paul, do you really think this could be in the interest of the
developers? Cygnus has many customers on non-Intel platforms (in
fact, there is probably not a single customer paying for ix86 support).
Even if the developers use Linux machines for their main development
(which is not even true for all the main developers), they all have at
least one other machine on their desk, plus the whole ballpark of
machines standing around here. This is no fun project, and we
certainly know something about quality assurance.
-- Uli
---------------. drepper at gnu.org ,-. 1325 Chesapeake Terrace
Ulrich Drepper \ ,-------------------' \ Sunnyvale, CA 94089 USA
Cygnus Solutions `--' drepper at cygnus.com `------------------------
I don't think this is acceptable for GCC. I think it would be
devastating if GCC became unstable or even unbuildable on many of its
currently-supported platforms, due to neglect of portability issues by
maintainers.
When EGCS was announced I thought it was great, esp. due to the slow
pace of GCC releases. But now it seems there's too heavy a price to
pay: GCC releases will come even _more_ slowly as the majority of
developers turn away from the (often boring and annoying) task of
keeping GCC portable in order to accelerate EGCS features on Linux.
> I find this equally disturbing. You seem to be implying that, like
> libc5, the majority of development in egcs is happening on and
> directed at Linux systems.
I've been following the egcs mailing list intermittently for a while,
and that's not my impression.
> I don't think this is acceptable for GCC. I think it would be
> devastating if GCC became unstable or even unbuildable on many of
> its currently-supported platforms, due to neglect of portability
> issues by maintainers.
Of course. I agree with you absolutely that if there are significant
bugs in gcc2.8.1, then fixing those and releasing a bugfix 2.8.2 ought
to be the way to go. Linux needs a stable compiler, as well as an
experimental one. I think it's great that RedHat is packaging up
egcs-1.0.3, but there's something seriously wrong with the release
schedule of gcc if it becomes the standard one.
> In some respects, this has really not been a good year for the FSF;
> there have been enough occurrences of RMS saying things that are
> exceedingly "flameworthy" as to encourage the growth of some really
> rather independent development efforts.
>
> It is quite possible that the "wider Linux community" could effectively
> make the FSF, as an organization, irrelevant.
>
> When you consider the longterm efforts of FSF contributors including
> RMS, that is unfortunate.
First, they are not working toward opposing goals. Second, I can't see
how it is unfortunate that RMS's ideas and ideals get embraced and used
by so many people that he himself is no longer able to completely
control the movement he initiated. RMS has always fought for software
being in the hands of the public, not for software being only in his
hands.
Even if some people seem not to understand this.
--
David Kastrup Phone: +49-234-700-5570
Email: d...@neuroinformatik.ruhr-uni-bochum.de Fax: +49-234-709-4209
Institut für Neuroinformatik, Universitätsstr. 150, 44780 Bochum, Germany
> psm...@baynetworks.com (Paul D. Smith) writes:
>
> > I don't think this is acceptable for GCC. I think it would be
> > devastating if GCC became unstable or even unbuildable on many of its
> > currently-supported platforms, due to neglect of portability issues by
> > maintainers.
>
> Paul, do you really think this could be in the interest of the
> developers? Cygnus has many customers for non-Intel customers (in
> fact, there is problably no single customer paying for ix86 support).
> Even if the developers use Linux machines for the main development
> (which is not even true for all main developers) they all have at
> least one other machine on there desk and the whole ballpark of
> machines standing around here. This is no fun project and we
> certainly know something about quality assurance.
out of curiosity, have many of those customers switched from gcc 2.7 to
2.8?
paramount.res.wpi.net RedHat 5.0 kernel 2.1.103 i586 | at public servers
"The problem might possibly be to do with the fact that asm code written
for the x86 environment is, on other platforms, about as much use as a
pork pie at a jewish wedding."
Andrew Gierth in comp.unix.programmer
All points well-taken; from "outside" it's not clear how much work is going
into GCC 2.8 as compared to EGCS. The interest certainly seems to be in
EGCS, which, since it has the "ambitious" goal of being something of an
experimental platform, is no big surprise.
Appearances are that GCC 2.8 is still nearly as "experimental" as EGCS,
which certainly isn't the intent I've heard previously from the FSF.
The "natural" thing to have happen is for EGCS to get enhanced and
stabilized to the point where it becomes the natural choice to be treated as
"GCC 2.9" (or some such thing); the question will be how well the FSF will
react to this. RMS has been getting pretty "cranky" lately.
Of course, this assumes that the FSF *is* of importance in this. Which is
an assumption that is, for better or for worse, getting increasingly
questionable. The FSF seems to be working on projects that increasingly
have:
a) Parallel "competitors" (GNU Emacs vs XEmacs, as a somewhat "hostile"
situation, and GCC vs EGCS as a hopefully more friendly situation...)
It's not clear that the FSF "entrants" have any likelihood of dominance,
which is not where they want to be...
b) No FSF involvement outside of being "repository"
c) Projects that seem not to be getting anywhere terribly quickly (GNUStep,
Hurd)
d) "Me too" projects (Guile is only one of many Scheme implementations, and
not necessarily the best...)
The area of greatest "core competency" of late has been "Making statements
that result in flame wars on Usenet," which is *not* where you want to go
today...
> All points well-taken; from "outside" it's not clear how much work is going
> into GCC 2.8 as compared to EGCS. The interest certainly seems to be in
> EGCS, which, since it has the "ambitious" goal of being something of an
> experimental platform, is no big surprise.
>
> Appearances are that GCC 2.8 is still nearly as "experimental" as EGCS,
> which certainly isn't the intent I've heard previously from the FSF.
Perhaps I can throw in some cents (to the maximum amount of $0.02).
I have been using egcs snapshots since the last week of August last year
(and by "using" I mean: I ran the complete 1300-routine Fortran snapshot of
our operational weather forecasting code with it).
Only now and then did I have a problem with it.
OTOH, when fulfilling my duty as an alpha tester of g77 and trying
g77-0.5.23-prerelease with gcc-2.8.1 it became clear how far behind gcc is at
the moment.
Aside from a couple of Fortran routines being incorrectly compiled, it also
miscompiled the sole C routine in our package (when using loop unrolling).
Note that I'm talking about the gcc-2.8.1 *release* here, without any patch
from the g77 side.
Perhaps this is superfluous, but the "experimental" in "e"gcs refers to its
development model, *not* to the quality of releases.
[ And yes, I know Linus found a problem with the alias analysis code in the
  current snapshots this week - it just shows that snapshots are just that:
  unfinished work. Oh, and BTW, it was fixed within hours, in keeping with
  Linux custom ]
--
Toon Moene (mailto:to...@moene.indiv.nluug.nl)
Saturnushof 14, 3738 XG Maartensdijk, The Netherlands
Phone: +31 346 214290; Fax: +31 346 214286
g77 Support: mailto:for...@gnu.org; NWP: http://www.knmi.nl/hirlam
>> >> Reports seem to indicate that gcc 2.8 is buggier than egcs.
>> jb> Yes, the egcs releases are far more solid than 2.8.1 is. Almost all
>> jb> of the gcc maintainers are now working on egcs.
>>If true, doesn't this strike anyone as A Bad Thing?
Not really...
>>I mean, the idea of egcs as a proving ground for cool new technology is
>>great, but don't people think getting 2.8.1 stable is just as, if not
>>more, important?
Why have three development branches (bleeding edge (egcs snapshots), stable
progressive (egcs releases) and utterly stable (gcc)), when Linux has shown
that two are enough?
[...]
>I would suggest that what is happening is that GCC is being turned into
>a "Bazaar" project.
Exactly! And this has attracted lots of talent to the egcs effort. The fact
that (almost) weekly snapshots are available for testing means that
snapshots are much more thoroughly tested, so bugs have shorter lives.
>The "normal" state of affairs would be that EGCS 1.0.x would be based on
>the "very stable" GCC 2.8.x, and add in "entertaining experimental
>changes."
There is no "very stable" gcc-2.8.x; there is a "very stable" egcs-1.0.3a
instead. Sure, egcs is based on gcc-2.8 snapshots; the egcs-1.0.x stable
releases will be followed by egcs-1.1 (another stable release) as soon as
the current branch stabilizes, and work there will go on mostly fixing bugs.
Meanwhile, progress will continue to be folded back in as 1.2 or so. Just
like Linux development works; they just copied the (extremely successful)
scheme Linus spearheaded.
Note that gcc-2.7.2 was Nov 1995 and gcc-2.8.0 was Jan 1998. There were tons
of important improvements that were delayed, like real support for ix86: gcc
used to cater (badly, to boot) to the i386 and i486, even though the i486,
Pentium and PPro were probably the most used and visible targets. Look at
your standard CFLAGS when compiling the kernel: the whole -malign-* business
should just be assumed by the compiler, and the -fno-strength-reduce flag
inhibits a quite basic optimization in order to work around a long-known
compiler bug. Not to mention massive changes in C++ the language (gcc-2.7.2's
support for C++ was quite broken, even for its time).
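For reference, strength reduction - the optimization that
-fno-strength-reduce disables to dodge that long-known bug - is the
rewrite sketched below, hand-applied here for illustration; the compiler
performs it on RTL:

    /* Strength reduction replaces the per-iteration address
     * arithmetic (a + i * sizeof(long)) of the indexed form with a
     * simple pointer increment; -fno-strength-reduce suppresses
     * exactly this rewrite. */
    long sum_indexed(long *a, int n)
    {
        long s = 0;
        int i;
        for (i = 0; i < n; i++)
            s += a[i];          /* address recomputed from i */
        return s;
    }

    long sum_reduced(long *a, int n)  /* what the optimizer makes of it */
    {
        long s = 0;
        long *p, *end = a + n;
        for (p = a; p < end; p++)
            s += *p;            /* just bump the pointer */
        return s;
    }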
>Good changes from EGCS would then be passed back into the "very stable"
>GCC 2.8.x, incrementing x.
That is the idea, but AFAIKS egcs will split off for good.
>At some point, they decide to move to "EGCS 2.0" (or some such number),
>and deploy the "stabilized" 1.x version of EGCS as the "new, stable"
>GCC 3.0.
[...]
>In some respects, this has really not been a good year for the FSF;
>there have been enough occurrences of RMS saying things that are
>exceedingly "flameworthy" as to encourage the growth of some really
>rather independent development efforts.
True, the FSF did (and is still doing) an enormous job. But in many ways
they screwed up badly: the long-awaited Hurd is still at 0.2, and the last
time I tried to install it to play around a little, it was impossible to do
so (I really tried - some two or three days just getting grub to grok the
disk, and that failed utterly; requests for help went ignored, so I just
gave up). The extremely important gcc development was stalled way too long
(at least in the public eye). All of this was of no real consequence as
long as free software was just a way for money-starved universities to get
their hands on critical development software, while "real people" in the
"real world" used "real software" (i.e., with licenses that cost "real
money"). This was radically changed by Linux, which has grown to be
"mainstream Unix", not just a "playground for hackers and hippies" anymore.
The FSF just didn't keep up with its charter to oversee *all* free
software; it probably never could have, and it's probably better that way
too... It really can't be more than one more player in a worldwide bazaar
of free software; there are several independent efforts (the FSF's GNU
project, Perl, Linux itself; even the BSD, Aladdin (of Ghostscript fame),
Qt and lately Netscape communities are somehow involved in this).
>It is quite possible that the "wider Linux community" could effectively
>make the FSF, as an organization, irrelevant.
>When you consider the longterm efforts of FSF contributors including
>RMS, that is unfortunate.
Why? Their efforts certainly aren't lost. Linux would have been absolutely
impossible without gcc and the free equivalents of so many Unix utilities
the FSF nurtured.
>When you consider that the FSF is a fairly small organization with
>clearcut leadership, and that the Linux community really has no single
>fixed "centre," it has to make one ask some questions about how such a
>relatively "unthinking mob" (which is not intended as a "flame" to
>"Linux people;" merely to suggest that the diversity makes it somewhat
>difficult to make *clear* decisions) can so readily overcome a group
>that has a clear mandate and clear leadership.
Force of numbers favors the "mob", for one; and if said leadership does
everything in its hands to alienate said "mob" (witness the "GNU/Linux"
flamewar, and the other recent "flameworthy" outbursts you mention above),
the outcome isn't exactly surprising - even when the "mob" and the "group"
have rather similar goals.
It sort of reminds me of the bitching that went on in the Unix camp, which
left the field wide open for MS...
--
Horst von Brand vonb...@sleipnir.valparaiso.cl
Casilla 9G, Viña del Mar, Chile +56 32 672616
> egcs 1.0.3 was done just for Red Hat (especially for Red Hat/Alpha).
What do you mean by that?
Navin
--
"Evolution is not only about survival, it's about domination." --Prey
I would like to agree with you.
It is certainly possible to interpret things to indicate that RMS intends
for software to be "in the hands of the public."
Unfortunately, the "lignux" debacle as well as the continuing saga of RMS
"explaining" how the proper name ought to be "GNU/Linux" in the GNUS
Bulletin suggest otherwise.
I prefer to interpret things in as positive a manner as is sensible, and
don't personally have *real* strong feelings concerning these items. The
"flame wars" and the "elaborations" that have come up surrounding these two
issues *are* supportive of the view that RMS *does* want to be the "true
leader of the free software movement," and, in that fashion, "own" things
like Linux.
The Perl FAQ thing, in contrast, shows that people at the FSF can make
mistakes, and that there are some equally vigorous and "flammable" opinions
that can come out of people that in no way stand with the FSF... I would go
along with the somewhat distinct notion that RMS has gotten "cranky" lately.
If, in contrast, RMS has gone a further step and become a "crank," it is
certain that he is not alone there...
What is unfortunate is that RMS has said enough "highly flammable" things
that many seem to be deciding that they are unwilling to continue to stand
with his leadership in the community.
Another couple of years of:
a) Non-FSF projects forking off of FSF projects,
b) RMS saying things that make significant populations angry without being
correspondingly *HIGHLY* useful to the "free software community," and
c) Additional organizations "open sourcing" their products,
and the FSF may well become irrelevant to most people. I do perceive this
"shift towards irrelevance" taking place, and I do find it unfortunate.
The FSF *should* be seeking to be a highly credible organization that people
can trust where they could expect to send funds and see useful things
created Real Soon Now.
My "vision" (described in more detail in the "lsf.html" essay) is that there
should be millions of dollars worth of "gift economy" coming from the Linux
community, going towards developing things that are valuable to the
community such as:
- The Ultimate "Libre" File System (logging/ journalling/ expandable/
shrinkable/ multi-device/ ...)
- The "Libre" Word Processor
- The "Libre" Database System
- The "Libre" Spreadsheet
- The "Libre" Personal Finance System
- The "Libre" GUI System
- The "Libre" Compiler Suite
- ... and the list of course continues ...
It would be a real good thing if we had some places where people could
direct "gifts" or "grants" to help pay for the time of people that are well
qualified to help build these sorts of things.
The FSF is one of the more stable and longstanding organizations that
directly seeks contributions to help sponsor development and improvement of
things not unlike the list above.
Unfortunately, every time RMS fires off a "salvo" of controversial
statements, it scares off people that might otherwise be willing to invest
time, effort, and funds in the things that the FSF can help with - and,
unfortunately, it contributes to the possible "decline into irrelevance."
It doesn't much matter if the reality is that RMS is trying to fine-tune
some point of "GPL Law"; the point is that I *regularly* hear a variety of
people say (in conversation, not merely on Usenet) that they think he's
doing "insane" things. (Kendall, you know who you are :-).)
RMS has been scaring off the people that, based on the sorts of things I see
them do in the "free software community," *ought* to be staunch supporters
of one another.
This is not unlike the historical events of the Middle Ages and after where
one sect of Protestants would be vigorously persecuting another despite the
fact that they really had fairly close theological positions, certainly in
comparison with "true" enemies such as Moslems.
That may be an unkind comparison to those of whatever religious persuasion,
but the parallel works quite nicely.
The "free software community" is wasting efforts infighting rather than
spending the time doing things about real threats from *clearly* proprietary
software...
> The "free software community" is wasting efforts infighting rather than
> spending the time doing things about real threats from *clearly* proprietary
> software...
Amen. So please help stop the fighting by not making up motives for
rms and posting them on Usenet -- re-read your posting, and you'll see
what I mean.
-- g
This is utter nonsense. Suppose you have a choice between two compilers:
a) X is solid, with little or no optimization, and no known bugs.
b) Y works pretty well, generates code that is twice as fast, but in
rare cases has been known to make some incorrect optimizations.
So which do you use to compile your application, X or Y?
You would be a fool to choose X, because you will be beaten up in
the market, and your users will complain you are too slow.
And for what? The hypothetical and unlikely chance you might
run into a bug in the compiler? You would be silly to worry more
about that than about bugs in your own code, hardware errors,
bugs in the standard libraries, etc etc. All these might bite
you - that is why you need to do as extensive testing as you can,
and fix as many bugs as you can. But at some point you have to
ship the damn thing, and you have to decide which bugs you can live
with. This is true for optimizing compilers, as well as applications.
You just do the best you can.
--
--Per Bothner
Cygnus Solutions bot...@cygnus.com http://www.cygnus.com/~bothner
That comment brings a smile to my face.
> I will cautiously say I've been using pgcc-1.02 for a few weeks,
> recompiled the kernel, all the stuff you need to build to go from 2.0.33
> to 2.1.101, most of my applications, and no problems yet.
Would you use pgcc-1.02 in a spaceship to the planet Mars with yourself
on board? I think that's the question every programmer should be asked.
Maybe this is the only way to get rid of these Windows people: put them
in a huge spaceship and give them a free trip to Mars with Windows 98
running. We should throw in a few USB scanners too, just to be sure.
Andre Pfeuffer
>>I mean, the idea of egcs as a proving ground for cool new technology is
>>great, but don't people think getting 2.8.1 stable is just as, if not
>>more, important?
>
> Parallel this with the development of the Linux kernel...
>
> Which is more important?
> - Getting 2.0.34 stable? or
> - Getting 2.1.104 stable?
GCC as a package isn't anywhere to be found on www.kernel.org. Someone is
making a statement.
But, though that's a rhetorical question, the answer has to be that 2.0.X
is of paramount importance to get stable. 2.1.X can drop into deep space;
it just doesn't matter. If you don't have a rock-solid, totally stable
kernel to deploy, then what is everyone doing the work for anyway? We've
gone beyond installing Linux for the coolness factor. Now we want to get
real work done with it.
The same goes for egcs/gcc/whatever. The paramount issue is to get something
out that is stable. It doesn't matter what features you have to strip out.
Have no optimizations at all if you have to, but release something that is
dependable. Whoever said that an egcs internal compiler error is the
fault of the code it was compiling is on drugs.
> I would suggest that what is happening is that GCC is being turned into
> a "Bazaar" project.
Bizarre is more like it.
> The "normal" state of affairs would be that EGCS 1.0.x would be based on
> the "very stable" GCC 2.8.x, and add in "entertaining experimental
> changes."
>
> Good changes from EGCS would then be passed back into the "very stable"
> GCC 2.8.x, incrementing x.
egcs splitting off from gcc is a political statement, not a programming
statement. If it were purely about programming, then it wouldn't be egcs;
it would be a development version of gcc. We're not going to see egcs
become the equivalent of the 2.1 development kernel, simply because of the
fact that it is two separate groups doing it. If they had the level of
joint communication going that is needed for that, then they wouldn't be
two separate groups.
The -real- question will be which compiler will be supported in the new
Linux Standard Base. Bruce Perens will have a hot potato in his hands.
I just hope that (and I'm not technically qualified enough myself to make
this judgement) they choose the more conservative/stable one for the base
system. I just hope that the decision is made solely on the merit of
stability - not on how many cool switches are thrown in with impressive-
sounding optimizations, or on the fact that you can put in -O84 and get
code that's been fractally compressed and comes preoptimized for 17
different CPU brands, but spits out an internal compiler error due to a
faulty source file.
Kurt
> egcs splitting off from gcc is a political statement, not a programming
> statement. If it were purely about programming, then it wouldn't be egcs;
> it would be a development version of gcc. We're not going to see egcs
> become the equivalent of the 2.1 development kernel, simply because of the
> fact that it is two separate groups doing it. If they had the level of
> joint communication going that is needed for that, then they wouldn't be
> two separate groups.
Please check up on the history. Gcc-2.8 was handled by the FSF in a
way that harmed a lot of people depending on gcc, probably mostly due
to lack of resources. People were accusing Cygnus of withholding the
release and of selling their vastly superior C++ front end only to paying
customers, using the GPL to rip people off - when in fact Cygnus had long
since contributed everything to the FSF, but no release resulted. Other
compiler groups (like Ada and Fortran) were also hampered by the delay.
It was a technical necessity to do something about this.
That it was not a political statement can be seen by the fact that all
egcs contributions are required to have copyright disclaimers for the
FSF. Everything in egcs is contributed to the FSF. They just don't
want to have the public wait as long as it takes the FSF to make a
release (and two years is too long and harmful to all participants).
If you want to indulge in searching for political statements, you
might look into XEmacs, although even they have become a lot more
FSF-conformant in their demands on proper copyright disclaimers.
Even they would think better cooperation with the FSF worthwhile by
now.
>>> Yes, the egcs releases are far more solid than 2.8.1 is.
>>> Almost all of the gcc maintainers are now working on egcs.
>>
>> If true, doesn't this strike anyone as A Bad Thing?
No, egcs is a perfectly fine GPL compiler. It is better to quietly
and politely start a fork than to flame and rudely start a fork.
Something had to happen, and I'm glad it was handled well.
>> I mean, the idea of egcs as a proving ground for cool new
>> technology is great, but don't people think getting 2.8.1
>> stable is just as, if not more, important?
No, perhaps gcc 2.8 should be removed from FTP sites.
A few weeks ago, H. J. Lu recommended that we use egcs.
Red Hat 5.1 is shipping with egcs AFAIK. Goodbye gcc.
Does the name "gcc" really matter? It's not any better than
egcs, cc, or c89. We now have a GPL compiler that can do
more than the old one could. We get FORTRAN, modern C++,
and GNU C, with better optimization and fewer bugs.
> Parallel this with the development of the Linux kernel...
>
> Which is more important?
> - Getting 2.0.34 stable? or
> - Getting 2.1.104 stable?
It would be good to put out a 2.0.34, but 2.0 really doesn't
need much work anymore. The future 2.2 is more important in
the long run.
> It is quite possible that the "wider Linux community" could
> effectively make the FSF, as an organization, irrelevant.
>
> When you consider the longterm efforts of FSF contributors
> including RMS, that is unfortunate.
Oh well, they still get a place in the history books.
The FSF is too concerned about political stuff. It is better
to just write software and leave politics to the EFF.
> Unfortunately, the "lignux" debacle as well as the continuing saga
> of RMS "explaining" how the proper name ought to be "GNU/Linux"
> in the GNUS Bulletin suggest otherwise.
I find that whole thing highly offensive.
> I prefer to interpret things in as positive a manner as is sensible, and
> don't personally have *real* strong feelings concerning these items. The
> "flame wars" and the "elaborations" that have come up surrounding these two
> issues *are* supportive of the view that RMS *does* want to be the "true
> leader of the free software movement," and, in that fashion, "own" things
> like Linux.
Kid 1 builds a sand castle, with help for other kids.
Kid 2 builds a big fancy sand castle, with even more help.
Most kids decide to work on the big fancy sand castle.
Kid 1 claims to own both, and expects to direct construction.
> What is unfortunate is that RMS has said enough "highly flammable"
> things that many seem to be deciding that they are unwilling to
> continue to stand with his leadership in the community.
What community? (Linux, Hurd, *BSD...)
> The FSF *should* be seeking to be a highly credible organization
> that people can trust where they could expect to send funds and
> see useful things created Real Soon Now.
I see what you hope for. Try Linux International instead.
The FSF keeps throwing money down a hole called "The HURD".
It's an ego problem I guess, caused by the success of Linux.
> My "vision" (described in more detail in the "lsf.html" essay) is
> that there should be millions of dollars worth of "gift economy"
> coming from the Linux community, going towards developing things
> that are valuable to the community such as:
>
> - The Ultimate "Libre" File System (logging/ journalling/ expandable/
> shrinkable/ multi-device/ ...)
We're getting there, with both ext2 extensions and Reiserfs.
> - The "Libre" Word Processor
We'd need to agree on too much. (structured? simple? exact layout?
binary? SGML? TeX? roff? LISP? Scheme? SLang? Qt? GTK? ...)
> - The "Libre" Database System
Done, many times over.
> - The "Libre" Spreadsheet
> - The "Libre" Personal Finance System
Done, at least twice.
> - The "Libre" GUI System
GNOME
> - The "Libre" Compiler Suite
egcs, now with FORTRAN and modern C++
> It would be a real good thing if we had some places where people
> could direct "gifts" or "grants" to help pay for the time of people
> that are well qualified to help build these sorts of things.
>
> The FSF is one of the more stable and longstanding organizations
> that directly seeks contributions to help sponsor development
> and improvement of things not unlike the list above.
The HURD is unlike that listed above. We already have a good
kernel. I certainly wouldn't want money wasted on a microkernel
system that I will never use. Linux International directly seeks
contributions, so consider sending one there instead.
> Unfortunately, every time RMS fires off a "salvo" of controversial
> statements, it scares off people that might otherwise be willing to invest
> time, efforts, and funds in the things that the FSF can help with. And,
> unfortunately, contributes to the possible "decline into irrelevance."
Development funds: Linux International
Political issues: Electronic Frontier Foundation
> The "free software community" is wasting efforts infighting
> rather than spending the time doing things about real threats
> from *clearly* proprietary software...
The "doing things about real threats" means we start projects
like egcs and just ignore the FSF. The slow progress of gcc was
a real threat.
>>Have no optimizations at all if you have to, but release something that is
>>dependable.
>
> This is utter nonsense. Suppose you have a choice between two compilers:
> a) X is solid, with little or no optimization, and no known bugs.
> b) Y works pretty well, generates code that is twice as fast, but in
> rare cases has been known to make some incorrect optimizations.
>
> So which do you use to compile your application, X or Y?
>
> You would be a fool to choose X, because you will be beaten up in
> the market, and your users will complain you are too slow.
> And for what? The hypothetical and unlikely chance you might
> run into a bug in the compiler?
This isn't some application that will be sitting on store shelves, where
the prettiest box and the best advertising win. This is not commodity
software. These are serious tools for serious applications. I don't
want to worry about my kernel crashing when I turn on an optimization.
I don't care what people choose to use to compile their applications.
People can download and install what they want. My concern is what is
chosen as a 'standard'. My concern is that this is going to turn into
a feature race among all the different compiler groups, all competing for
control of the standard. My concern is that we'll be left with no stable
solution. And we're not talking about twice as fast here. Please leave
the rhetorical exaggerations out.
> You would be silly to worry more
> about that than about bugs in your own code, hardware errors,
> bugs in the standard libraries, etc etc. All these might bite
> you - that is why you need to do as extensive testing as you can,
> and fix as many bugs as you can. But at some point you have to
> ship the damn thing, and you have to decide which bugs you can live
> with.
If it has a bug, leave it out of the release until it's fixed. What
is so hard about that?
The problem with this model is that the attitude is becoming: just
release it, and 50 million Internet users will test it for bugs for you.
And this isn't bad - it's a good thing - unless it's the standard
that people are relying on for stability.
Why is there such a push for a Linux Base System? Why the kernel
development split? Because of the number of users. When Linux had a small
following of hackers, it didn't matter if there were bugs. But
we're reaching critical mass now. The number of users is growing to the
point that you can't just push something out the door and figure that
everyone depending on it is a hacker doing it to be cool, or to
give Microsoft the finger. That's why Linux had such a hard time getting
into stability-critical installations. It still does.
As I said, I don't care what gets done with egcs, or GCC, or PCC, or
anything. My concern is that a stable, reliable implementation of
something is chosen as the standard, and that a development model
is chosen where updates are made only when they are -proven- stable.
Let's all pretend that each 'release' is going to be used for computer
monitoring equipment in the intensive-care ward of a hospital.
> This is true for optimizing compilers, as well as applications.
> You just do the best you can.
That's right, you do the best you can. But one can choose methods that
allow the best of both worlds: stability for those who need it,
and rich experimental features for those who like to live dangerously.
Allow me to comment on that. I've been one of the people working on the
Debian compiler packages.
For Debian 2.0, the situation will in all likelihood look like this:
             | gcc 2.7.2.3                  egcs 1.0.3a
-------------+------------------------------------------------------------
C            | yes (gcc package) [1][P][2]  yes (gcc / egcc package) [1]
C++          | yes (g++272)                 yes (g++) [P][3]
libg++       | yes (libg++272(-dev))        yes (libg++28(-dev))
libstdc++    | yes (part of libg++272)      yes (libstdc++28(-dev))
Objective C  | no                           yes (gobjc)
Fortran 77   | no                           yes (g77)

[P] Preferred / default.
[1] "gcc" on architectures where the egcs C compiler is the primary or only
    one; "egcc" on others (e.g. i386).
[2] Preferred / default because of the "egcs-compiled 2.0.x kernels can't
    run X" situation.
[3] Preferred / default because egcs C++ support is far superior to gcc
    2.7.2.3's.
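On an i386 system laid out as above, the same source can then be built
with either compiler side by side. A trivial sanity check (hypothetical
invocations; __VERSION__ is the compiler's predefined version string,
handy for confirming which one actually built the binary):

    /* which.c - prints the version string of whichever compiler
     * built it.  Build and compare (hypothetical invocations):
     *   gcc  -o which-272  which.c    (gcc 2.7.2.3)
     *   egcc -o which-egcs which.c    (egcs 1.0.3a, per [1] above)
     */
    #include <stdio.h>

    int main(void)
    {
        printf("compiled by %s\n", __VERSION__);
        return 0;
    }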
>Did you notice the postings mentioning "Linux 5.0"? ;-/ I'm afraid that
>when Debian 2.0 is released we will all be forced to use killfiles in a
>big way, or sleep in asbestos underwear...
Most of the Debian developers aren't very inflammatory; I don't think this
scenario is very likely.
Ray
--
Furthermore, I am of the opinion that the Netherlands ought to be roofed over.
> If it has a bug, leave it out of the release until it's fixed. What
> is so hard about that?
This seems very hard to grasp for people who have never developed compilers.
A compiler is simply a huge piece of software, and it has bugs just like
every other piece of software of comparable size and complexity.
If commercial compiler vendors can get away with shipping compilers with
bugs, why should we (i.e. the egcs community) attempt the impossible?
The egcs crowd tries to stomp out bugs before every release; however, we
cannot compile every piece of software on earth in our regression tests, so
we're sure to miss some bugs - tough luck.
It's not about writing bug-free code. Everybody knows that is impossible
as soon as the program does more than "Hello world". It's about putting in
code that does some cool stuff but is _known_ to fail every so often.
The problem is phrases in the FAQ like:
You may get an internal compiler error compiling process.c in newer versions
of the Linux kernel on x86 machines. This is a bug in an asm statement in
process.c, not a bug in egcs. XXX How to fix?!?
I had enough problems getting the AIC7XXX options right under 2.0.33 (and
enough Windows-lovers laughing at me when they weren't right and the system
screwed up weekly). When I discovered what was wrong, I was glad that I had
a C compiler I could rely on...
It sounds like Linux developed an extremely successful strategy with the
parallel trees of 2.0.x and 2.1.x, and I can foresee a similar split
happening to egcs as soon as it becomes mainstream.
Rudi
--
| | | | |
\ _____ /
/ \ B O R N
-- | o o | -- T O
-- | | -- S L E E P
-- | \___/ | -- I N
\_____/ T H E S U N
/ \
| | | | |
In the real world, you do BOTH. You compile with X and sell the resulting
binary while you test the binary created by Y in-house. Then, after a decent
interval has elapsed, you release the binary compiled by Y, add the word
Turbo to your program name, and sell it again to the same people you just
sold it to - with no extra software development. It is a way of making more
money on the same code.
;)
--
George Bonser
Microsoft! Which end of the stick do you want today?
>Whoever said that an egcs internal compiler error is the fault of the
>code it was compiling is on drugs.
I think this report is probably the result of someone misreporting the
true state of affairs.
GNU C (and hence egcs) sometimes emits an error message which goes
something along the lines of
impossible register spilled
this may be due to an internal compiler
error, or impossible asm
Someone may have misreported this as "egcs gets an internal compiler
error" when in fact the true cause may have been incorrect "asm"
statements in the code being compiled.
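For what it's worth, the diagnostic is easy to provoke deliberately. Here
is a hypothetical asm whose constraints are unsatisfiable on i386 - it is
meant to fail to compile (on x86-64, the extra registers may let it
through):

    /* Deliberately broken, for illustration: there is no i386
     * general register left for the "r" operands once all six are
     * declared clobbered, so the register allocator fails with a
     * diagnostic of the "impossible register spilled / impossible
     * asm" family. */
    int impossible_asm(int x)
    {
        int r;
        __asm__("movl %1, %0"
                : "=r"(r)
                : "r"(x)
                : "eax", "ebx", "ecx", "edx", "esi", "edi");
        return r;
    }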
--
Fergus Henderson <f...@cs.mu.oz.au> | "I have always known that the pursuit
WWW: <http://www.cs.mu.oz.au/~fjh> | of excellence is a lethal habit"
PGP: finger f...@128.250.37.3 | -- the last words of T. S. Garp.
IMO, he doesn't "scare off" so much as "piss off" those people -
myself being one of them. I've quit using "cathedral" GPL'd software
and embraced "bazaar" GPL'd software, simply because the latter's
liberal attitude toward the GPL seems more in the spirit of the GNU
Manifesto than the former's conservative attitude.
In the real world, that means I jettisoned the Hurd from my sole
remaining 486 and deleted Debian from my firewall and installed
Slackware-3.4. I'm also now using egcs instead of gcc, and I've
stopped using GNAT.
--
Forte International, P.O. Box 1412, Ridgecrest, CA 93556-1412
Ronald Cole <ron...@forte-intl.com> Phone: (760) 499-9142
President, CEO Fax: (760) 499-9152
My PGP fingerprint: E9 A8 E3 68 61 88 EF 43 56 2B CE 3E E9 8F 3F 2B
I think this is unfair to Debian; they seem to be a very open
development. Their amount of centralization doesn't seem to be any
more than what the Linux kernel does (with Linus as the chief
gatekeeper.)
-hpa
--
PGP: 2047/2A960705 BA 03 D3 2C 14 A8 A8 BD 1E DF FE 69 EE 35 BD 74
See http://www.zytor.com/~hpa/ for web page and full PGP public key
I am Bahá'í -- ask me about it or see http://www.bahai.org/
"To love another person is to see the face of God." -- Les Misérables
The amount of centralization at Debian is far less than on the Linux
kernel, really. To the point that it's a problem. It looks like Debian
will be the first project (that I know of anyway) to write a formal
constitution and make decisions by voting, simply because there is no
one person or small group who can claim to be in charge.
Debian is certainly not a "cathedral" project, with 300+ developers.
Havoc Pennington ==== http://pobox.com/~hp
This is kind of off track, but personally, I do not blame lack of
resources for the long release period; I blame resource management.
I.e., the maintainer of the gcc source is not managing the
resources very well. I do not see this changing even with the creation
of the egcs group. Unless someone corrects this problem, I do not believe
we will see more frequent releases of gcc. The egcs group, on the other
hand, has excellent resource management. Note, when I refer to resources,
I don't necessarily mean internal resources; resources could come from
external sources.
>That it was not a political statement can be seen by the fact that all
>egcs contributions are required to have copyright disclaimers for the
If you interpret 'political statement' in the original author's post as
a legal, moral or ethical statement, then I agree with you. But I think
the original author means 'office politics'. In this sense, I have
to agree with him. As I've said above, it is a management problem. Gcc
contributors like H.J. Lu and Mark Mitchell do not like how gcc
is managed. This is, of course, speculation - I've never met H.J. Lu
or Mark Mitchell. But from their contribution styles, I think my
interpretation is close, if not right on.
The success of a piece of software depends heavily on the outlook of the
maintainer. Compare someone like Linus Torvalds with RMS (or Thomas
Bushnell). Linus seems a lot more outgoing and free-spirited when it comes
to software development, while RMS is closed-minded and stiff (and seems to
be rubbing off on TB). Another example is WINE vs. TWIN. Both try to
provide a win32 platform for Linux (UNIX), but movement in WINE is a lot
more pronounced than in TWIN. This is because WINE is more open and free-
spirited than TWIN, which has a slow and closed development. By closed,
I mean the ability to access the current source. I.e., compare the ease
of access of the egcs source to that of gcc: gcc snapshots sit on some
out-of-the-way ftp server, while egcs has a web page and an open CVS
server, both of which get updated often - as opposed to the gcc web page,
which hasn't changed in over a month.
This has nothing to do with copyright transfers or legal issues. Those
are individual issues; i.e., they apply to each contributor. But the
overall success of a free software project depends on the management. I
think this applies to commercial software as well. If the engineers do
not like the direction the project is going, they quit and go elsewhere.
Of course, the management must be pretty bad, and for a long period of
time, for this to occur.
--jc
--
Jimen Ching (WH6BRR) jch...@flex.com wh6...@uhm.ampr.org
>In the real world, that means I jettisoned the Hurd from my sole
>remaining 486 and deleted Debian from my firewall and installed
>Slackware-3.4. I'm also now using egcs instead of gcc, and I've
>stopped using GNAT.
It seems to me that dumping Debian for "political" reasons is a supremely
silly thing to do. Anyway, Debian has a very open development model.
But I agree, egcs is (and promises to be) a better compiler than the
"traditional" gcc.
- Daniel
--
******************************************************************************
* Daniel Franklin - 4th Year Electrical Engineering Student
* dr...@uow.edu.au
******************************************************************************
Apache already does that.
The scary thing about this is not so much the internal error (or whatever
it really is) but the fact that there doesn't seem to be a work around, or
I at least can't find one on the egcs FAQ page. So the average user expects
that all hell will break loose if one installs egcs and actually needs to get
some work done with it ...
Vladislav
Bruce Stephens wrote:
>
> psm...@baynetworks.com (Paul D. Smith) writes:
>
> > I find this equally disturbing. You seem to be implying that, like
> > libc5, the majority of development in egcs is happening on and
> > directed at Linux systems.
>
> I've been following the egcs mailing list intermittently for a while,
> and that's not my impression.
>
> > I don't think this is acceptable for GCC. I think it would be
> > devastating if GCC became unstable or even unbuildable on many of
> > its currently-supported platforms, due to neglect of portability
> > issues by maintainers.
>
> Of course. I agree with you absolutely that if there are significant
> bugs in gcc2.8.1, then fixing those and releasing a bugfix 2.8.2 ought
> to be the way to go. Linux needs a stable compiler, as well as an
> experimental one. I think it's great that RedHat is packaging up
> egcs-1.0.3, but there's something seriously wrong with the release
> schedule of gcc if it becomes the standard one.
[...]
>I don't care what people choose to use to compile their applications.
>People can download and install what they want. My concern is what is
>chosen as a 'standard'. My concern is that this is going to turn into
>a feature race for all the different compiler groups, all competing for
>control of the standard. My concern is that we'll be left with no stable
>solution. And we're not talking about twice as fast here. Please leave
>the rhetorical exaggerations out.
Sorry, but whether a solution is stable can only be determined (given
current technology) by testing, testing, testing. That's the price you
pay for free software: the (unlikely) case of it working wrong, the duty
of reporting funnies, and the possibility of fixing it yourself (or
getting somebody to do it for you). Or you could go commercial, where the
unlikely case of it working wrong is about the same or higher, and you've
got a guarantee in writing that if it breaks, it's just too bad: they
won't do anything about it if it's not one of their priorities, and you
are stuck.
>> You would be silly to worry more
>> about that than about bugs in your own code, hardware errors,
>> bugs in the standard libraries, etc etc. All these might bite
>> you - that is why you need to do as extensive testing as you can,
>> and fix as many bugs as you can. But at some point you have to
>> ship the damn thing, and you have to decide which bugs you can live
>> with.
>If it has a bug, leave it out of the release until it's fixed. What
>is so hard about that?
If you could just leave out the bug, it wouldn't be a bug anymore,
would it? ;-)
>The problem is, with this model, is that the attitude is becoming just
>release it, and 50 million internet users will test it for bugs for you.
>And this isn't bad. This is a good thing, unless this is the standard
>that people are relying on for stability.
"Stability" is a relative term...
>Why is there such a push for a Linux Base System? Why the kernel
>development split? Becase of the number of users. When Linux had a small
>following of hackers, then it didn't matter if there were bugs. But
>we're reaching a critical mass now. The number of users is growing to
>be enough that you can't just push something out the door, and figure that
>everyone depending on it are hackers who are doing it to be cool, or to
>give Microsoft the finger. That's why Linux had such a hard time getting
>into stability critical implementations. It still does.
The problem with linux is that the resources for exhaustive testing and bug
fixing on older versions just isn't there: It's much more interesting to
play around with the latest&greatest (even if buggy).
>As I said, I don't care what gets done with egcs, or GCC, or PCC, or
>anything. My concern is that a stable, reliable implementation of
>something is chosen as the standard, and that a development model
>is chosen where updates to it are done when they are -proven- stable
Great. Who is doing the "proving"? Read over some of the drivers, and you'll
see that some of them are more workaround-for-broken-hardware code than
driver code proper. Not that any one particular board shows all the
problems: there are probably hundreds of different NE2000 clones, each with
its own complement of bugs. Parts of the kernel have been coded the way they
are to work around compiler bugs. And obviously parts of the kernel contain
bugs in themselves too. The only way of finding problems in a mess like this
is by having it run many different applications, on many different hardware
configurations, under very different loads. I.e., in real-world use.
>Lets all pretend that each 'release' is going to be used for a computer
>monitoring equipment in the intensive care ward of a hospital.
If that's to be the case, we will. Just pay for the testing time; provide
stable, certified bug-free hardware to test and run on. And Linux
development will give you linux-1.0 in, say, 5 years' time. Good enough?
The sad fact of life is that you pay for development speed with bugs. The
good news is that the linux model gives surprisingly few bugs for the
development speed. The great news is that it is self-regulating: If people
start having real troubles, the development slows down while the bugs are
fixed.
Another piece of news is that whining about bugs, or about not having
<favorite feature> in Linux or some other piece of free software, has no
effect whatsoever; you will be silently ignored for the most part. Or it
might lead to the Dave Miller disaster: he was maintaining linux-2.0, and
got so fed up with the complaints and whining that he just dropped it.
So it is "tribal" software development?
Kristian
--
Kristian Koehntopp, Wassilystrasse 30, 24113 Kiel, +49 431 688897
"See, these two penguins walked into a bar, which was really stupid, 'cause
the second one should have seen it."
-- /usr/games/fortune
bot...@cygnus.com (Per Bothner) writes:
> In article <6k89l0$n2...@crash.videotron.ab.ca>,
> Kurt Fitzner <kfit...@nexus.v-wave.com> wrote:
> >The same goes for egcs/gcc/whatever. The paramount issue, is get something
> >out that is stable. It doesn't matter what features you have to strip out.
> >Have no optimizations at all if you have to, but release something that is
> >dependable.
>
> This is utter nonsense. Suppose you have a choice between two compilers:
> a) X is solid, with little or no optimization, and no known bugs.
> b) Y works pretty well, generates code that is twice as fast, but in
> rare cases has been known to make some incorrect optimizations.
>
> So which do you use to compile your application, X or Y?
>
> You would be a fool to choose X, because you will be beaten up in
> the market, and your users will complain you are too slow.
No, you would (probably) be a fool to use Y.
The correct approach is to use X, profile your code and find the
bottlenecks and eliminate them. You can always hand optimise if your
compiler won't do it for you. Choice of algorithm is usually the biggest
factor in performance anyway - compiler Y can give you a bubble sort
which is twice as fast as compiler X but neither will turn it into
quicksort.
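For concreteness, that profile-first workflow with gcc and gprof might
look like this (a sketch only; the program name is hypothetical):

    gcc -O2 -pg -o myprog myprog.c   # build with profiling instrumentation
    ./myprog                         # run it; this writes gmon.out
    gprof myprog gmon.out | more     # see where the time actually goes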
> And for what? The hypothetical and unlikely chance you might
> run into a bug in the compiler?
The issue (commercially) is risk management. An unquantifiable risk is
always worse than a quantifiable one, even if it looks smaller. If you
use the possibly buggy compiler then the chances are you'll spend *much*
more time tracking down bugs caused by it compared to tracking down
"ordinary" bugs in the code.
> You would be silly to worry more about that than about bugs in your
> own code, hardware errors, bugs in the standard libraries, etc etc.
> All these might bite you - that is why you need to do as extensive
> testing as you can, and fix as many bugs as you can. But at some
> point you have to ship the damn thing, and you have to decide which
> bugs you can live with. This is true for optimizing compilers, as
> well as applications. You just do the best you can.
Well, all of the above are true, but they don't make it any less
worthwhile to try to reduce the risks to which you expose yourself.
However, it is a balancing act (as most things are, which is why I said
"probably" above); typically it's not a choice between solid X with
little optimisation and speedy Y with perhaps some bugs. More usually
the newer but riskier version brings a feature that you can't really
do without - an example might be C++ exception handling in gcc 2.7 vs
egcs. Lack of optimisation at worst means a few tweaks to your code. Lack
of exception handling leads to very different implementation strategies
- and that _is_ a cost which you might well decide not to bear.
--
Paul
2.0.34pre16 is out and looking good...
--
Thorsten Roskowetz | E-mail: ro...@earthling.net
: Allow me to comment on that. I've been one of the people working on the
: Debian compiler packages.
: For Debian 2.0, the situation will in all likeliness look like this:
: | gcc 2.7.2.3 egcs 1.0.3a
: -------------+-------------------------------------------------------------
I think the original poster meant RH - egcs, Debian gcc 2.8.
I'm glad to see that Debian will also ship egcs...
Regards,
Lokh.
This attitude sounds worse than the so called "conservative" attitude
of the FSF. You can have any attitude you want and use the GPL, as
long as you stick to the legal aspect of it. That is what it is there
for, to protect the rights of those who write software for the benefit
of others and who want to keep their work in the public domain without
commercial molestation. To reject a good piece of code based on their
development philosophy seems a little extreme. Besides, most of the
code that is in Slackware and indeed ALL Linux distributions comes
from the FSF and the HURD project. Try reading the man page for grep.
:)
I do agree that "bazaar" style of software development is a better way
to go if you can manage to get it to work.
BAPper
Er, no. The bottleneck is the compiler itself.
> You can always hand optimise if your
> compiler won't do it for you.
Now there's a non-bug prone approach.
::snort::
> Choice of algorithm is usually the biggest
> factor in performance anyway - compiler Y can give you a bubble sort
> which is twice as fast as compiler X but neither will turn it into
> quicksort.
This is just plain lunacy. Have you ever heard
of convergent evolution? If you're using the same
algorithms, the guy with the better compiler wins.
Period.
> The issue (commercially) is risk management. An unquantifiable risk is
> always worse than a quantifiable one, even if it looks smaller. If you
> use the possibly buggy compiler then the chances are you'll spend *much*
> more time tracking down bugs caused by it compared to tracking down
> "ordinary" bugs in the code.
And you're talking about hand tuning poorly
optimized code??? In what, assembly???
::double snort::
> > You would be silly to worry more about that than about bugs in your
> > own code, hardware errors, bugs in the standard libraries, etc etc.
> > All these might bite you - that is why you need to do as extensive
> > testing as you can, and fix as many bugs as you can. But at some
> > point you have to ship the damn thing, and you have to decide which
> > bugs you can live with. This is true for optimizing compilers, as
> > well as applications. You just do the best you can.
>
> Well all of the above are true but they don't stop it being worthwhile
> trying to reduce the risks to which you expose yourself.
I think that Per is right on the money: the
likelihood of it being the hardware or OS or
compiler is probably an order of magnitude or
three down from the likelihood of it being *your*
code's problem. Nobody likes compiler bugs, but
the reality is that if you're hunting for a
problem, the likelihood of it being the compiler
is negligible. Though sometimes vexing, compiler
bugs are not really much different from other
bugs. If your testing misses them, then your
testing is inadequate.
I haven't seen anybody saying that Wiz-Bang
compiler X version 0.0 ought to be shipped in
preference to Old Established compiler Y V100.1.
What I have seen is people saying that absolute
insistence on Bugs over Features is a lousy way of
dealing with risk management. Safe over sorry in
the new reality is also known as "out of business".
At some point, you have to take chances, even if
that means the possibility of <cue up organ music>
Compiler Bugs.
--
Michael Thomas (mi...@mtcc.com http://www.mtcc.com/~mike/)
"I dunno, that's an awful lot of money."
Beavis
My impression is that at this point, the "stable" C compiler is in fact GCC
2.7.x.
GCC 2.8 is, at this point, every bit as much an "experimental" compiler as
EGCS; it has a whopping lot of changes compared to 2.7.x, particularly with
regard to C++.
I don't think there's any question of it yet being "stable," or that making
it stable would happen any more quickly than the stabilization of EGCS.
Once EGCS stabilizes further, I expect that it will likely be deployed as
the "new stable GCC;" I would not want to speculate *too* much on the
version number.
Vendors that absolutely need a "stable" C++ compiler do not at this point
have much choice. GCC 2.7.x has abundant "defects;" GCC 2.8 is not yet a
good choice, and I don't think that this is a defect caused by EGCS; EGCS
is likely a better choice than either of the other two, but is
understandably not a terrific choice for vendors that want something that
has been stable for a while.
There are commercial alternatives such as C++ compilers from KAI, The
Portland Group, or Greg Comeau (and possibly others) that may be better
choices at this time.
Hopefully EGCS/GCC2.8 will become stable and will become reasonable
alternatives to the commercial compilers; that would be a change as there
has really never been a *truly* acceptable "free" C++ compiler; G++ has not
been that, at least for people looking at the whole of the C++ language...
I would be interested in seeing some attention go into TenDRA as well; I
dunno how stable that environment is...
--
Those who do not understand Unix are condemned to reinvent it, poorly.
-- Henry Spencer <http://www.hex.net/~cbbrowne/lsf.html>
cbbr...@hex.net - "What have you contributed to Linux today?..."
[...]
> The correct approach is to use X, profile your code and find the
> bottlenecks and eliminate them. You can always hand optimise if your
> compiler won't do it for you. Choice of algorithm is usually the biggest
> factor in performance anyway - compiler Y can give you a bubble sort
> which is twice as fast as compiler X but neither will turn it into
> quicksort.
"Twice as fast" for no extra effort at all vs perhaps "three times as fast"
(if so much) by careful tweaking, breaking the design of the system into
tiny little pieces, all different, and introducing tons of subtle bugs in
the process is a loose?!
[...]
> The issue (commercially) is risk management. An unquantifiable risk is
> always worse than a quantifiable one, even if it looks smaller. If you
> use the possibly buggy compiler then the chances are you'll spend *much*
> more time tracking down bugs caused by it compared to tracking down
> "ordinary" bugs in the code.
Surely you jest... show me just one bug-free compiler for whatever language
you might choose. Compilers are complex pieces of software, and they _do_
contain bugs. No way around that fact of life.
--
Dr. Horst H. von Brand mailto:vonb...@inf.utfsm.cl
Departamento de Informatica Fono: +56 32 654431
Universidad Tecnica Federico Santa Maria +56 32 654239
Casilla 110-V, Valparaiso, Chile Fax: +56 32 797513
> In article <6k9p87$1l8$1...@news.utrecht.nl.net>,
> Toon Moene <to...@moene.indiv.nluug.nl> writes:
> > If commercial compiler vendors can get away with shipping compilers with
> > bugs, why should we (i.e. the egcs community) try the impossible?
>
> It's not about writing bug free code. Everybody knows that this is
> impossible as soon as the program does more than "Hello world". It's about
> putting in code that does some cool stuff but is _known_ to fail every so
> often.
>
> The problem is phrases in the FAQ like:
> You may get an internal compiler error compiling process.c in newer
> versions of the Linux kernel on x86 machines. This is a bug in an asm
> statement in process.c, not a bug in egcs. XXX How to fix?!?
OK, I can see that point - the problem is that this issue was resolved so
fast (a matter of days, IIRC) that it was probably better to remove it from
the FAQ altogether, or at least tie it to specific kernel versions and
specific egcs releases. Unfortunately, we still haven't found a good way to
deal with the FAQ (as in: find someone who has enough time *and* oversight
to keep it up to date). Heck, there isn't even a way to search the archives,
so I'm still saving all messages to egcs[-bugs]@cygnus.com here at home
(11,400 and 4,600 respectively) to be able to search their content.
> The issue (commercially) is risk management. An unquantifiable risk is
> always worse than a quantifiable one, even if it looks smaller. If you
> use the possibly buggy compiler then the chances are you'll spend *much*
> more time tracking down bugs caused by it compared to tracking down
> "ordinary" bugs in the code.
But which compiler *do* you want to use then ? The only compiler I've ever
used in which I haven't found an error _yet_ comes with $50 million of
hardware. The (extra) risk from using one or the other compiler isn't
quantifiable, period.
One has to point out that the GPL is a *public* license. At any point
in time you are free to take a cathedralized version and start a
bazaar with it (and vice versa). Whether you succeed will depend on
what people prefer. Most people will usually stay with the original
author/maintainer unless he proves to have severely lacking management
capabilities.
--
David Kastrup Phone: +49-234-700-5570
Email: d...@neuroinformatik.ruhr-uni-bochum.de Fax: +49-234-709-4209
Institut für Neuroinformatik, Universitätsstr. 150, 44780 Bochum, Germany
That means, more or less, the following:
Red Hat suggested that it would be a good idea for the EGCS developers
to checkpoint and make up a ``stable release'' at the 1.0.3 level.
This 1.0.3 version could thus be considered usable for Red Hat 5.1,
having solved some problems with 1.0.2 in its interaction with the
libraries, Linux kernel, and other Red Hat Linux components.
- This benefits Red Hat, in that it allows them to have a fairly stable
release of EGCS that they can unleash on the public.
It's not so important from a C perspective, as GCC has been quite stable
for quite a long time. (*Many* years, to be more specific...)
It is *critical* with C++, as earlier releases of G++ are seriously
deficient in overall functionality.
- This benefits people involved in the EGCS project, as it pushes more
copies of EGCS out into "production," thus encouraging more interest,
possibly some more developers, and almost certainly a few more bug
reports so that they can make 1.0.4 or 1.0.5 even better...
--
"Linux: the operating system with a CLUE... Command Line User
Environment". (seen in a posting in comp.software.testing)
cbbr...@hex.net - <http://www.hex.net/~cbbrowne/lsf.html>
Michael Thomas <mi...@mtcc.com> writes:
> Paul Flinders <pa...@dawa.demon.co.uk> writes:
> > The correct approach is to use X, profile your code and find the
> > bottlenecks and eliminate them.
>
> Er, no. The bottleneck is the compiler itself.
But for much code in many systems doubling the speed will have almost
no impact on the overall feel of the application to the user.
Sometimes the reason is that the delays are actually outside most of
the code - I/O is a good example here. It doesn't matter if I can
parse my file in one millisecond or two if it takes 10 to get it from
the disk.
Sometimes it is that you're using something that is inherently slow -
COM and CORBA come to mind here.
Also, it doesn't matter if I can turn user input into an output in 10
milliseconds or 20, because the user won't notice that difference (get
up to 100 millisecs and they will, though).
> > You can always hand optimise if your compiler won't do it for you.
>
> Now there's a non-bug prone approach.
>
> ::snort::
>
> And you're talking about hand tuning poorly
> optimized code??? In what, assembly???
>
> ::double snort::
>
No, I'm talking about doing sensible reviews of critical path code
(having demonstrated that it is critical path), perhaps giving the
compiler a little help with an explicit "register" declaration or
moving a few invariants out of loops or dereferencing a pointer once
into a local variable and then using the variable.
Also I've _seen_ people write code like
for (i=0; i < strlen(s); i++)
....
Is there a compiler which will move strlen out of the loop? Will it
still do it when there's a user-defined function doing the job of
strlen (say you have strings implemented with an explicit length
rather than using straight C-style strings)?
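For what it's worth, the manual hoist is trivial once you spot it; a
minimal sketch, with hypothetical names:

    #include <string.h>

    void scan(char *s)
    {
        size_t i, len = strlen(s);  /* length computed once, before the loop */
        for (i = 0; i < len; i++) {
            /* ... per-character work that doesn't change the length ... */
        }
    }

The interesting question is whether a compiler will do this for you.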
How about
for (...)
if (expensive test but constant within loop)
+++
else
---
rather than
if (expensive test)
for (...)
+++
else
for (...)
---
egcs doesn't do that as an optimisation AFAICS.
I've seen code which went 5x or 10x faster when expensive operations
were moved manually out of loops. When I asked about the code I got
the answer "well the compiler will optimise it".
Once you've gone through the above, a compiler with a better
optimiser may still be able to fine-tune your code, but we're back to
the first point - it won't matter for much of the code in the first
place.
> > Choice of algorithm is usually the biggest
> > factor in performance anyway - compiler Y can give you a bubble sort
> > which is twice as fast as compiler X but neither will turn it into
> > quicksort.
>
> This is just plain lunacy. Have you ever heard
> of convergent evolution? If you're using the same
> algorithms, the guy with the better compiler wins.
> Period.
_if_ you're using the same algorithms, and the code is actually time
critical.
> I think that Per is right on the money: the likelihood of it being
> the hardware or OS or compiler is probably an order of magnitude or
> three down from the likelihood of it being *your* code's
> problem. Nobody likes compiler bugs, but the reality is that if
> you're hunting for a problem, the likelihood of it being the
> compiler is negligible. Though sometimes vexing, compiler bugs are
> not really much different from other bugs.
Actually Per _does_ make a good point here, and you do have to weigh up
the likelihood of things going wrong. I would not use an egcs snapshot
for delivered code, but I would probably use one of the releases. I
also agree that the likelihood is that the bug is in your (my) code;
however, I've come across two or three GCC bugs whilst writing fairly
mundane programs, so they certainly _do_ occur.
> If your testing misses them, then your testing is inadequate.
A reasonable point. However, in the real world bugs do slip past
testing (even when testing is done well, which is often not the case),
and anything you can do to stop them hitting customers is at least
worth considering. Again it's a question of reducing risk.
--
Paul
Christopher> I would go along with the somewhat distinct notion
Christopher> that RMS has gotten "cranky" lately.
That is an impression that also stuck after some of the NPR interviews
in April and May regarding Linux, Free Software, and Netscape. Some
caller was so totally _amazed_, _astonished_, etc. at Linus Torvalds's
altruism in providing the Linux operating system to the _entire_ _world_
_for_ _free_ - after which RMS snapped that _he_ had been doing that for
years before Linux. (The credit is deserved, but maybe someone other than
RMS ought to be the one making that point.) See
http://www.npr.org/ramfiles/980417.totn.02.ram
(skip the first 27 minutes).
-tor
ud> psm...@baynetworks.com (Paul D. Smith) writes:
>> I don't think this is acceptable for GCC. I think it would be
>> devastating if GCC became unstable or even unbuildable on many of its
>> currently-supported platforms, due to neglect of portability issues by
>> maintainers.
ud> Paul, do you really think this could be in the interest of the
ud> developers? Cygnus has many customers on non-Intel platforms (in
ud> fact, there is probably no single customer paying for ix86 support).
ud> Even if the developers use Linux machines for the main development
ud> (which is not even true for all main developers), they all have at
ud> least one other machine on their desk and the whole ballpark of
ud> machines standing around here. This is no fun project, and we
ud> certainly know something about quality assurance.
First, note that I know nothing much about EGCS except what I've seen
here; these posts might have given me the wrong impression, or I misread
them. I'm quite willing to be convinced of that and be happy.
Second, I'm not talking about Cygnus, I'm talking about GCC as released
by the FSF on ftp://ftp.gnu.org/pub/gnu/. I know full well that Cygnus
is quite concerned with portability, etc. etc. but their version of GCC
has been moving ahead since GCC 2.7.x came out, and as far as I can tell
not many of their enhancements have been merged back in--at least before
2.8 was out.
Cygnus is a different corporate entity: they'll ship changes back to the
FSF of course, but their developers aren't (I expect) going to be
involved with getting the FSF's GCC package released.
EGCS, though, I assume, is manned by people who could otherwise be
working on GCC "proper".
From my perspective, it's all fine and good to have EGCS as a "Linux
2.1.x" for GCC's "Linux 2.0.x", but to _me_ it seems like everyone is
working on the new version to the detriment of the base: GCC 2.8.0 was
released almost 5.5 months ago and we still don't have a version that
most people feel could be called "very stable", even for the basic C
compiler.
To me, that should mean everyone is hot and bothered and working on GCC
2.8 to stabilize it as fast as possible.
If that's not happening, there's something still wrong somewhere (IMO).
--
-------------------------------------------------------------------------------
Paul D. Smith <psm...@baynetworks.com> Network Management Development
"Please remain calm...I may be mad, but I am a professional." --Mad Scientist
-------------------------------------------------------------------------------
These are my opinions--Bay Networks takes no responsibility for them.
adc> cbbr...@news.amrcorp.com (Christopher Browne) writes:
>>>> Yes, the egcs releases are far more solid than 2.8.1 is.
>>>> Almost all of the gcc maintainers are now working on egcs.
>>>
>>> If true, doesn't this strike anyone as A Bad Thing?
adc> No, egcs is a perfectly fine GPL compiler. It is better to quietly
adc> and politely start a fork than to flame and rudely start a fork.
adc> Something had to happen, and I'm glad it was handled well.
>>> I mean, the idea of egcs as a proving ground for cool new
>>> technology is great, but don't people think getting 2.8.1
>>> stable is just as, if not more, important?
adc> No, perhaps gcc 2.8 should be removed from FTP sites.
adc> A few weeks ago, H. J. Lu recommended that we use egcs.
adc> Red Hat 5.1 is shipping with egcs AFAIK. Goodbye gcc.
adc> Does the name "gcc" really matter? It's not any better than
adc> egcs, cc, or c89. We now have a GPL compiler that can do
adc> more than the old one could. We get FORTRAN, modern C++,
adc> and GNU C, with better optimization and fewer bugs.
On Linux, maybe...
However, Albert, you didn't address at all my main concern: that EGCS is
being directed mainly by and for Linux development, and that the "base"
GCC is suffering because of it.
And, no offense, but based on your past statements of opinion on the
subject of portability and Linux, your comments don't at all give me any
warm-fuzzies :) ;)
At any rate, for much commercial development you're not going to find
too many people willing to live with a "cutting edge" compiler. In my
environment, for example, any upgrade to the compiler means an _entire_
QA cycle, rather than merely an incremental one. In other words, even
an upgrade is treated as if it were a compiler from a totally different
vendor.
It's just not possible to put the "flavor of the week" compiler into a
process like this... what if you standardize on a "bad" flavor? Weeks
of QA work down the drain... missed ship dates... ugh.
As I said, I'd be happy to be shown wrong... I'd be even happier if a
GCC 2.8.2 was released that was considered to be as stable as 2.7.x.
Think of a spell checker. (Or, more obviously: image processing.)
>>> You can always hand optimise if your compiler won't do it for you.
No, you can't, unless you use assembly. You can do a few things
like turn if(foo) into if(!foo) and add lots of goto statements,
but it is nearly impossible to work around a compiler with inferior
register allocation.
> No, I'm talking about doing sensible reviews of critical path code
> (having demonstrated that it is critical path), perhaps giving the
> compiler a little help with an explicit "register" declaration or
> moving a few invariants out of loops or dereferencing a pointer once
> into a local variable and then using the variable.
Sure, but:
1. that is lots of work and could introduce more bugs
2. after you have done that, then what?
> Also I've _seen_ people write code like
> for (i=0; i < strlen(s); i++)
>
> Is there a compiler which will move strlen out of the loop?
I certainly hope so. The header file should specify that strlen()
does not have side effects, using whatever gcc-specific hack is needed.
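For illustration, the sort of annotation meant here would look roughly
like this in gcc's extension syntax (a sketch; my_strlen is a
hypothetical stand-in, and the "pure" attribute - no side effects, but
may read memory - is only spelled this way in later gcc/egcs versions):

    #include <stddef.h>

    /* Promise the compiler this function has no side effects, so calls
       with the same argument may be merged or hoisted out of a loop --
       provided the compiler can also prove the string doesn't change. */
    size_t my_strlen(const char *s) __attribute__ ((pure));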
>>> Choice of algorithm is usually the biggest factor in performance
>>> anyway - compiler Y can give you a bubble sort which is twice as
>>> fast as compiler X but neither will turn it into quicksort.
That would be a great compiler.
> adc> Does the name "gcc" really matter? It's not any better than
> adc> egcs, cc, or c89. We now have a GPL compiler that can do
> adc> more than the old one could. We get FORTRAN, modern C++,
> adc> and GNU C, with better optimization and fewer bugs.
It matters just a little bit because it has good brand recognition.
Other than that, it doesn't matter at all.
>On Linux, maybe...
>
>However, Albert, you didn't address at all my main concern: that EGCS is
>being directed mainly by and for Linux development, and that the "base"
>GCC is suffering because of it.
I don't think this is true, in any interesting sense.
The fact that many of the egcs developers work for cygnus,
and cygnus has customers on all sorts of platforms makes
it prima facie unlikely. However, the development of egcs
is totally open, so you can look at the archives of the
mailing list and the source tree and judge for
yourself. There is no need, in this case, to spread
speculation, and bug reports for any bugs that you find
on any platform, including Linux, would be appreciated.
- Josh
I think the difficulty is in proving that 's' is unmodified---I believe
that gcc already has adequate markers to note that the function has no
side-effects. But consider the code snippet:
#include <string.h>

extern void foo(int i);

void bar(char *s) {
    int i;
    for (i = 0; i < strlen(s); i++) foo(i);
}
Who knows if this external 'foo' can see and modify 's'? If it
doesn't, then we could move the strlen out of the loop. But, if it
could possibly modify 's', then we can't safely do it. The same goes
for the case of
for(...)
if(expensive_test) { block1; }
else { block2; }
it may be hard to prove that expensive_test is invariant over the
loop. I've heard that there's new "global" optimization in egcs (i.e.
translation-unit-wide) which may help for cases where 'foo' or
'expensive_test' don't touch any extern functions...
I don't know, does declaring bar as
void bar(const char *s);
give an adequate guarantee in the first case?
Followups in comp.lang.c.
Jeff
--
Jeff Epler jep...@inetnebr.com (an american student living in France)
> >Paul Flinders <pa...@dawa.demon.co.uk> writes:
> >> Also I've _seen_ people write code like
> >> for (i=0; i < strlen(s); i++)
> >>
> >> Is there a compiler which will move strlen out of the loop?
> >
> In article <vc790np...@jupiter.cs.uml.edu>, Albert D. Cahalan wrote:
> >I certainly hope so. The header file should specify that strlen()
> >does not have side effects, using whatever gcc-specific hack needed.
>
> I think the difficulty is in proving that 's' is unmodified---I believe
> that gcc already has adequate markers to note that the function has no
> side-effects. But consider the code snippet:
[...]
bar called in a loop.
> I don't know, does declaring bar as
> void bar(const char *s);
> give an adequate guarantee in the first case?
Only if bar has no other possible handle on the memory area passed in
via s. That means no writable pointer to that area (including in its
declaration) may exist that is accessible globally, or that has been
passed out to an unknown function as a writable pointer.
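A tiny illustration of why the const qualifier alone isn't enough (all
names hypothetical):

    #include <string.h>

    char buf[] = "hello";
    char *alias = buf;           /* a second, writable handle on the bytes */

    void foo(int i)
    {
        alias[0] = 'A' + i;      /* mutates the string behind bar's back */
    }

    void bar(const char *s)      /* const: bar won't write through s ... */
    {
        int i;
        for (i = 0; i < (int) strlen(s); i++)
            foo(i);              /* ... but foo still can, so strlen(s)
                                    is not provably loop-invariant */
    }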
acah...@jupiter.cs.uml.edu (Albert D. Cahalan) writes:
> Paul Flinders <pa...@dawa.demon.co.uk> writes:
> > But for much code in many systems doubling the speed will have almost
> > no impact on the overall feel of the application to the user.
>
> Think of a spell checker. (or more obvious: image processing)
Spell checkers tend to spend a lot of time waiting for user input, so
they only have to get to the next word "fast enough". You need good
dictionary organisation (remember you have to find candidate words, not
just a "not in dictionary" message), and that won't come from the
compiler's optimiser.
Image processing is probably a special case (i.e. people _do_ write inner
loops in assembler for image processing applications).
>
> >>> You can always hand optimise if your compiler won't do it for you.
>
> No you can't, unless you use assembly. You can do a few things
> like turn if(foo) into if(!foo) and add lots of goto statements,
> but it is fairly impossible to work around a compiler with inferior
> register allocation.
If it's really broken, or just really bad, then no - but that's not the
state of affairs we're hypothesising.
The choice is a solid compiler with a basic optimiser (Per actually said
"little or no", but I'm going to assume basic, since the "real"
compilers here are probably gcc 2.7 and egcs/gcc 2.8, and gcc 2.7 has
much better than "basic" optimisation) versus a new version of the
compiler with much better optimisation but a definite chance of
compiler bugs.
In this scenario it's the fast compiler which is more likely to have
broken register allocation.
>
> > No, I'm talking about doing sensible reviews of critical path code
> > (having demonstrated that it is critical path), perhaps giving the
> > compiler a little help with an explicit "register" declaration or
> > moving a few invariants out of loops or dereferencing a pointer once
> > into a local variable and then using the variable.
>
> Sure, but:
>
> 1. that is lots of work and could introduce more bugs
However, you should do it anyway; the compiler can only optimise so
much, and blind reliance on its abilities will not lead to efficient
code.
> 2. after you have done that, then what?
If it's _still_ too slow? Depends on circumstances - maybe you decide
that it _is_ worth the risk of using the possibly buggy compiler (but
by then the speed-up may not be that great). Maybe you decide to use
a faster processor, but that's not always possible. Maybe you decide
to hire an assembler programmer.
>
> > Also I've _seen_ people write code like
> > for (i=0; i < strlen(s); i++)
> >
> > Is there a compiler which will move strlen out of the loop?
>
> I certainly hope so. The header file should specify that strlen()
> does not have side effects, using whatever gcc-specific hack needed.
Which is why I asked whether the compiler will still optimise the call
if strlen is a user-defined function. Also, as Jeff Epler points out,
the compiler may not be able to tell that the length of the string is
invariant in the loop.
> >>> Choice of algorithm is usually the biggest factor in performance
> >>> anyway - compiler Y can give you a bubble sort which is twice as
> >>> fast as compiler X but neither will turn it into quicksort.
>
> That would be a great compiler.
It's called a software engineer.
Remember - bug fixing is a very expensive process. Assuming that it
takes two weeks[1] of effort to accept a bug report, schedule it to an
engineer, reproduce and investigate the problem, fix the bug, document
the fix, re-test and re-release the application, then an isolated bug
could easily cost $5000 to fix. That's why you want to reduce the risk
that it's there.
Of course it's a value judgement (like most things) and the right
choice in one environment may be the wrong one in another.
[1] We measured it once; 10 days is about right, including QA, for a bug
in a medium-sized program.
--
Paul
: However, Albert, you didn't address at all my main concern: that EGCS is
: being directed mainly by and for Linux development, and that the "base"
: GCC is suffering because of it.
IMHO, the egcs/gcc split was made to release C++ and Fortran compilers.
The egcs guys may use Linux as a primary development box (I said 'may'),
but the goal is certainly not to build compilers for Linux as a primary
target.
If you're not convinced, think about these 2 things:
- Cygnus probably sells more stuff to non-Linux companies (their
usual customers)
- The Pentium Compiler Group still exists, works on egcs, and
releases its own compiler (pgcc) based on egcs snapshots (and releases).
: At any rate, for much commercial development you're not going to find
: too many people willing to live with a "cutting edge" compiler. In my
: environment, for example, any upgrade to the compiler means an _entire_
: QA cycle, rather than merely an incremental one. In other words, even
: an upgrade is treated as if it were a compiler from a totally different
: vendor.
If I read you correctly, you would upgrade to gcc 2.8.x without a
QA cycle, and to egcs with an entire QA cycle?
Both have the same roots (a gcc 2.8 08-97 snapshot), and both have
huge changes from 2.7.x (at least on the C++ side).
gcc 2.8 and egcs should be considered the same thing for this matter.
: It's just not possible to put the "flavor of the week" compiler into a
: process like this... what if you standardize on a "bad" flavor? Weeks
: of QA work down the drain... missed ship dates... ugh.
This is a valid argument for corporations like BN, though :).
: As I said, I'd be happy to be shown wrong... I'd be even happier if a
: GCC 2.8.2 was released that was considered to be as stable as 2.7.x.
Personally, I'd be happy if we don't have the 18-month "no gcc/g++
release" period again :).
Lokh.
pb> This is utter nonsense. Suppose you have a choice between two
pb> compilers:
pb> a) X is solid, with little or no optimization, and no known bugs.
pb> b) Y works pretty well, generates code that is twice as fast,
pb> but in rare cases has been known to make some incorrect
pb> optimizations.
pb> So which do you use to compile your application, X or Y?
pb> You would be a fool to choose X, because you will be beaten up
pb> in the market, and your users will complain you are too slow.
Markets differ. I understand how you could see it that way. People
don't like to wait long for compilers to run. (I assume that this is
the kind of thing you're working on, since you're at Cygnus). However
most (well, certainly many) applications are not CPU-bound and so the
absolute speed of the compiler is not as important.
My mileage, as I am trying to say, differs. As far as my day job
goes, the answer is X, not Y, without hesitation.
: So which do you use to compile your application, X or Y?
It depends of course. Which would you choose for the Airbus or a
nuclear medicine accelerator?
: At any rate, for much commercial development you're not going to find
: too many people willing to live with a "cutting edge" compiler. In my
: environment, for example, any upgrade to the compiler means an _entire_
: QA cycle, rather than merely an incremental one. In other words, even
: an upgrade is treated as if it were a compiler from a totally different
: vendor.
pf> If I read you correctly, you would upgrade to gcc 2.8.x without a
pf> QA cycle, and to egcs with an entire QA cycle?
No, no no. I wasn't clear at all.
What I meant was that upgrading the compiler (whether to 2.8.x or
whatever) often involves a whole lot of work. So, you don't want to
have to do it more than once if at all possible... certainly not once a
week (or once a month... or even once a year, really).
BTW, I don't want to give anyone the idea that Bay uses GCC as
downloaded from ftp.gnu.org to build any of our router, switch,
etc. embedded software... because we don't :). We do, however, use it
for some ancillary tools: many (most) internal and a few external ones.
[gcc 2.7.2.3 and egcs 1.0.3a]
>I think the original poster meant RH - egcs, Debian gcc 2.8.
I realised that; unfortunately I left out the rationale for why we prefer
egcs over gcc 2.8, which is that egcs appears to be the more stable one,
and that its open and fast development model makes it much more likely to
be fixed quickly in response to problems found.
Ray
--
UNFAIR Term applied to advantages enjoyed by other people which we tried
to cheat them out of and didn't manage. See also DISHONESTY, SNEAKY,
UNDERHAND and JUST LUCKY I GUESS.
- The Hipcrime Vocab by Chad C. Mulligan
[...]
> The same goes for egcs/gcc/whatever. The paramount issue is to get something
> out that is stable. It doesn't matter what features you have to strip out.
> Have no optimizations at all if you have to, but release something that is
> dependable. Whoever said that an egcs internal compiler error is the fault
> of the code it was compiling is on drugs.
If you write invalid asm()s, it is unfortunately quite possible to crash
gcc, egcs or whatever. Sure, the compiler shouldn't crash; but it's your
fault anyway ;-)
[...]
> It sounds like linux developed an extremely successful strategy with the
> parallel trees of 2.0.x and 2.1.x and I can foresee a similar split happening
> to egcs as soon as it becomes mainstream.
It is working right now: there is egcs-1.0.x (the stable branch for now),
there are the weekly (or so) snapshots, and there is even the CVS repository
for up-to-the-minute snapshots. There are the egcs{,-bugs}@cygnus.com lists,
and pages at <http://egcs.cygnus.com>.
>Markets differ. I understand how you could see it that way. People
>don't like to wait long for compilers to run. (I assume that this is
>the kind of thing you're working on, since you're at Cygnus). However
>most (well, certainly many) applications are not CPU-bound and so the
>absolute speed of the compiler is not as important.
In the PC marketplace, people often pay a 40% premium for
a 10-20% gain in CPU performance (even though the faster
CPU will be a bargain basement model next year).
Granted, some of these customers are idiots, and the
effect is accentuated by the lack of product differentiation
in the PC marketplace. But it is still dangerous, from
an empirical point of view, to claim that performance
doesn't matter any more. I'm sure all of the would-be
Java ISVs are not really planning on selling their
applications for half the price of a C/C++ application,
but extrapolating from the CPU price/demand curve
should have given some Java investors pause when
their business plan was formulated.
- Josh
jst...@citilink.com. (Josh Stern) writes:
> In the PC marketplace, people often pay a 40% premium for a 10-20%
> gain in CPU performance (even though the faster CPU will be a
> bargain basement model next year). Granted, some of these customers
> are idiots, and the effect is accentuated by the lack of product
> differentiation in the PC marketplace. But it is still dangerous, from
> an empirical point of view, to claim that performance doesn't matter
> any more. I'm sure all of the would-be Java ISVs are not really
> planning on selling their applications for half the price of a C/C++
> application, but extrapolating from the CPU price/demand curve
> should have given some Java investors pause when their business plan
> was formulated.
Performance does matter, of course, but spending a premium *just* to
get 20% more CPU performance is largely self-delusional, or playing
office politics. Going from 200MHz to 233 (a 16% increase) will give
you maybe 5%, at best 10%, overall, as it doesn't speed up memory,
video or disk.
If you work somewhere that doesn't mind the hardware being fiddled
with try dropping a workmate's CPU speed down a multiplier (from 233
to 200 say), see how long it takes for them to notice.
My own view is that you need a performance increase of 33-50% just to
register as "noticeably better".
--
Paul
> Performance does matter, of course, but spending a premium *just* to
> get 20% more CPU performance is largely self-delusional, or playing
> office politics. Going from 200MHz to 233 (a 16% increase) will give
> you maybe 5%, at best 10%, overall, as it doesn't speed up memory,
> video or disk.
It also makes you liable to getting forged and relabeled processors
which can result in overheating, unreliability and premature death of
processors. As long as you are not buying into the extreme expensive
high end, but into the low end of chips being able to possible
temporarily make it at a certain frequency, you are pretty safe from
relabelers and the associated dangers.
It seems that you've done the opposite of what you think you have done.
>In the real world, that means I jettisoned the Hurd from my sole
>remaining 486 and deleted Debian from my firewall and installed
>Slackware-3.4. I'm also now using egcs instead of gcc, and I've
>stopped using GNAT.
Slackware is a one-person cathedral project. Debian has hundreds of
developers ... it is a bazaar project.
--
-- Joe Buck
work: jb...@synopsys.com, otherwise jb...@welsh-buck.org or jb...@best.net
http://www.welsh-buck.org/
[...]
: No, egcs is a perfectly fine GPL compiler. It is better to quietly
: and politely start a fork than to flame and rudely start a fork.
: Something had to happen, and I'm glad it was handled well.
[...]
: No, perhaps gcc 2.8 should be removed from FTP sites.
: A few weeks ago, H. J. Lu recommended that we use egcs.
: Red Hat 5.1 is shipping with egcs AFAIK. Goodbye gcc.
: Does the name "gcc" really matter? It's not any better than
: egcs, cc, or c89. We now have a GPL compiler that can do
: more than the old one could. We get FORTRAN, modern C++,
: and GNU C, with better optimization and fewer bugs.
Thanks for the very clear perspective.
I'm told that a couple years ago there were attempts to compile the
Linux kernel under the g++ facilities of gcc, to take advantage of
C++'s more stringent type checking, etc. At the time, g++ was too
buggy, and the effort was dropped. Has anyone retried under egcs?
Tom Payne
>> In the PC marketplace, people often pay a 40% premium for a 10-20%
>> gain in CPU performance (even though the faster CPU will be a
>> bargain basement model next year). Granted, some of these customers
>> are idiots, and the effect is accentuated by the lack of product
>> differentiation in the PC marketplace. But it is still dangerous, from
>> an empirical point of view, to claim that performance doesn't matter
>> any more. I'm sure all of the would-be Java ISVs are not really
>> planning on selling their applications for half the price of a C/C++
>> application, but extrapolating from the CPU price/demand curve
>> should have given some Java investors pause when their business plan
>> was formulated.
>Performance does matter, of course, but spending a premium *just* to
>get 20% more CPU performance is largely self-delusional, or playing
>office politics.
In scientific jargon, there is a placebo effect. Programmers
are also subject to the placebo effect when they evaluate languages
and compilers.
>Going from 200MHz to 233 (a 16% increase) will give
>you maybe 5%, at best 10%, overall, as it doesn't speed up memory,
>video or disk.
>
>If you work somewhere that doesn't mind the hardware being fiddled
>with try dropping a workmate's CPU speed down a multiplier (from 233
>to 200 say), see how long it takes for them to notice.
>
>My own view is that you need a performance increase of 33-50% just to
>register as "noticeably better".
I think these numbers are generally reasonable. Put another way,
about a 15% difference in overall performance is noticeable,
but a 15% difference in CPU speed translates to something
much less than that in overall performance.
My point is that when people notice a small difference in
performance (objectively or subjectively) they often care a lot
about that (rationally or irrationally).
- Josh
The only solution is to declare the stable branch of egcs to be gcc.
The gcc developers will never be willing to go back to working the old
way.
The "base" gcc is suffering because its process totally broke down, but
gcc 2.8.x has the quality it does because the egcs folks stomped most of
the bugs out and sent the fixes to both distributions.
egcs is not Linux-centric in any way. Cygnus has to keep that code going
on all the platforms they support; furthermore, there are folks from a
lot of companies (SCO, NeXT, etc.) working with the egcs team.
(Though egcs was the first GNU C++ compiler ever to build out of the box
on Linux ... it used to be that HJ had to hack every release.)
>And, no offense, but based on your past statements of opinion on the
>subject of portability and Linux, your comments don't at all give me any
>warm-fuzzies :) ;)
But Albert has no role in egcs, so you needn't worry. :-)
>At any rate, for much commercial development you're not going to find
>too many people willing to live with a "cutting edge" compiler.
I understand. But egcs 1.0.3 is substantially more stable than gcc 2.8.1.
Folks building 2.0.x Linux kernels, however, should continue using
2.7.2.3, because of issues with inline assembly language.
>As I said, I'd be happy to be shown wrong... I'd be even happier if a
>GCC 2.8.2 was released that was considered to be as stable as 2.7.x.
Unless drastic changes occur, you'd be far better off with egcs 1.0.3 than
any 2.8.2 release you might obtain.
It'd probably be a useless exercise, in my opinion. The amount of
effort it would take to port the kernel to C++ (yes, the word is
"port", not "recompile" -- C++ is _not_ a strict superset of C) would
far outweigh any benefit you could possibly imagine from e.g. stricter
typechecking. Now if you wanted to port a kernel to C++ in order to
make use of, say, objects or templates or something, then you're not
talking about a port but pretty much a ground-up rewrite.
Besides, gcc (or cc1, we should say) can be convinced to tighten up its
own typechecking somewhat (-Wpointer-arith -Wreturn-type -Wconversion
-Wbad-function-cast -Wcast-qual -Wcast-align -Wchar-subscripts from a
quick browse through an info page) if you are really worried about it.
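For the curious, invoking those on a single file might look like this
(a sketch; the file name is hypothetical):

    gcc -Wpointer-arith -Wreturn-type -Wconversion -Wbad-function-cast \
        -Wcast-qual -Wcast-align -Wchar-subscripts -c foo.c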
--
Peter Samuelson
<sampo.creighton.edu ! psamuels>
RMS didn't understand what the caller was saying because he wasn't
really listening... (He really seems to prefer to hear himself talk)
Besides, the Hurd is too little, too late (just like gcc-2.8). And
despite what RMS intimates here, he's hardly altruistic. I know from
personal experience.
Clearly, RMS will never use the open development model because he
feels it would mean giving up control. Of course, Linus has proved
that such a notion is absurd, but there's no convincing RMS.
--
Forte International, P.O. Box 1412, Ridgecrest, CA 93556-1412
Ronald Cole <ron...@forte-intl.com> Phone: (760) 499-9142
President, CEO Fax: (760) 499-9152
My PGP fingerprint: 15 6E C7 91 5F AF 17 C4 24 93 CB 6B EB 38 B5 E5
>> egcs, cc, or c89. We now have a GPL compiler that can do
>> more than the old one could. We get FORTRAN, modern C++,
>> and GNU C, with better optimization and fewer bugs.
...
> I'm told that a couple years ago there were attempts to compile the
> Linux kernel under the g++ facilities of gcc, to take advantage of
> C++'s more stringent type checking, etc. At the time, g++ was too
> buggy, and the effort was dropped. Has anyone retried under egcs?
I doubt it, and I'm fairly sure it would be hopeless.
I'll always curse Bjarne Stroustrup for making "new" a keyword.
He should have grabbed "alloc" or something else less popular.
int foo(int old, int new) {
    return (old + 5 < new) ? 0 : -1;
}
If egcs had a switch to disable "new" and name mangling, then maybe
the porting effort would be reasonable. Of course that mostly
kills C++. Call it "C+" then.
> On Linux, maybe...
>
> However, Albert, you didn't address at all my main concern:
> that EGCS is being directed mainly by and for Linux development,
> and that the "base" GCC is suffering because of it.
Looking at the mailing list archives that Cygnus keeps, I can
see that egcs has a test suite that is regularly run on a large
variety of strange systems.
> And, no offense, but based on your past statements of opinion
> on the subject of portability and Linux, your comments don't
> at all give me any warm-fuzzies :) ;)
No problem, just don't get mad that I don't see things your way.
Some people take the issue much too personally.
> At any rate, for much commercial development you're not going to find
> too many people willing to live with a "cutting edge" compiler.
Cygnus has real customers. I wouldn't worry that egcs will be
bleeding edge -- cutting edge is a Good Thing.
> As I said, I'd be happy to be shown wrong... I'd be even happier if a
> GCC 2.8.2 was released that was considered to be as stable as 2.7.x.
I can see only one way to do that, and it would make RMS screaming mad
(release egcs as gcc 2.8.2). It is better to avoid the flames, I think.
I remove that gnu.* group every time I respond to this thread, because
it is better to quietly and politely ignore the FSF version of gcc.
Ahem... Methinks ++C would be more appropriate. BTW, name mangling
wouldn't be that terrible. As for the 'new' monstrosity - some time ago I
became curious about that. I looked through the source of 2.0.32 for
places where new was used (that is, outside of comments and string
constants). Result: 22 functions and 4 macros in the whole tree! Not that
hard to change. I didn't check 2.1.x or 2.0.3[34], but I suspect the
numbers would also be surprisingly low.
Adding a G-machine or TIM to the kernel and rewriting part of the
kernel in something like Miranda (arrgh!) or Haskell would be way more
fun (and have bigger hack value, BTW).
--
An enthusiast once turned up at my office with a huge stone that filled
the trunk of his aging Chevy, the specimen so heavy that the car's front wheels
were almost off the ground. It was, he told me solemnly, a fossil human skull;
see? there's the eye, the nose, the ear... Alan Walker, The Wisdom of the Bones
I think the bazaar style depends on the number of users and developers.
A small project which doesn't need many developers won't have much use
for the bazaar-style approach.
I "direct" a cathedral-like free software development: Gforth. There are
basically three core developers with different responsibilities. We
certainly accept patches and suggestions from anybody outside, but they
are rare and don't contribute much. There are not that many Forth users
out there (you might have guessed it), and there are not that many bugs
in Gforth either. None of the three core developers invests much time in
Gforth now, since most of our technical goals are reached. Also, there
are few things that really can be split up between developers. We don't
have device drivers; we neither need to do much work to support many
different platforms (most through GCC, and for the small embedded
controllers, we brought the time to port it to a new one down to about
one or two afternoons).
After all "cathedral" is the wrong word for that what we build. The
philosophy of Forth is much about simplicity, so we just put a stone in
the plain and praise our godness there (the ceiling is certainly higher
than of any cathedral ;-). It's a weather godness, so building a roof
above the stone is considered "blasphemie", anyway ;-).
--
Bernd Paysan
"Late answers are wrong answers!"
http://www.jwdt.com/~paysan/
It should also be noted that no one intentionally codes bugs. Bugs happen
(should be a T-shirt motto), and they get into a release when they
aren't discovered and fixed beforehand. GCC 2.7 has several known bugs,
and due to the rule that for every known bug there are three other bugs
hiding, it also has several unknown bugs. The slow progress of GCC makes
sure that a bug won't be fixed fast. The faster progress of EGCS gives a
higher risk that new bugs are introduced, but it also gives a higher
chance that old bugs are discovered and fixed (often bugs aren't
discovered in usage, they are discovered in code inspection! Or made
obvious through changes elsewhere).
IMHO EGCS should be the compiler of choice, were it not that the
"stable" Linux kernels have bugs that are only visible if you compile
with EGCS. This isn't just about features: GCC 2.7 is broken (with the
default optimization you _need_ -fno-strength-reduce), and GCC 2.8 is
of lower quality than the EGCS releases.
Certainly you can. GCC for ix86 has "inferior register allocation" (no
live range splitting, no profiling of how often a variable is used in
real code). Therefore we (the Gforth team) allocate the critical
registers by hand, using asm() statements. We know which variables
should go into registers. This improved performance by about a factor
of two (in the days of GCC 2.5.7 on a 386; there is much less
difference now).
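To illustrate the technique (a minimal sketch with invented names, not
Gforth's actual source): GNU C's explicit register variables let you
pin hot variables to specific machine registers at file scope.

/* GNU C extension: bind interpreter variables to fixed x86 registers.
   The names and register choices here are hypothetical. */
register void **ip asm("%esi");  /* virtual instruction pointer */
register long  *sp asm("%edi");  /* data stack pointer */

Throughout the compilation unit, ip and sp then live in %esi and %edi
(both call-saved on the x86) instead of being spilled to memory.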
In article <slrn6mdreb....@knuth.brownes.org>,
cbbr...@news.brownes.org (Christopher B. Browne) writes:
> On 22 May 1998 22:20:20 -0700, Ulrich Drepper <dre...@cygnus.com>
> posted:
>
> Of course, this assumes that the FSF *is* of importance in this.
> Which is an assumption that is, for better or for worse, getting
> increasingly questionable. The FSF seems to be working on
> projects that increasingly have:
>
> a) Parallel "competitors" (GNU Emacs vs XEmacs, as a somewhat
> "hostile" situation, and GCC vs EGCS as a hopefully more friendly
> situation...)
>
> It's not clear that the FSF "entrants" have any likelihood of
> dominance, which is not where they want to be...
>
> b) No FSF involvement outside of being "repository"
>
> c) Projects that seem not to be getting anywhere terribly
> quickly (GNUStep, Hurd)
>
> d) "Me too" projects (Guile is only one of many Scheme
> implementations, and not necessarily the best...)
Opinion follows:
I'm not a developer - just a user of Linux and GNU software.
And I don't know RMS personally. It seems to me, however,
that RMS has done a lot to make Linux possible in the first
place. Of course he didn't know he was doing it at the time.
He was intending to make the Hurd a reality. However, you need
a compiler to get anywhere....
From the little I know of history, without his vision and
efforts XEmacs, egcs, and Linux would probably not exist.
He created the tools (the enabling technology) that made
Unix clones possible and popular. And based on the number
of Linux servers on the net, maybe even helped shape the
Internet.
Perhaps the FSF has "lost control" due to the volume of developers
working on other, competing packages. Perhaps they even want too much
control. Perhaps they need more people on their team. Perhaps
they have served their purpose and times are changing. Perhaps
none of these. Whatever the case, I feel the FSF and RMS have
played a key role that was needed to get to where we are today.
> The area of greatest "core competency" of late has been "Making
> statements that result in flame wars on Usenet," which is *not*
> where you want to go today.
Amen.
> --
> Those who do not understand Unix are condemned to reinvent it, poorly.
> -- Henry Spencer <http://www.hex.net/~cbbrowne/lsf.html>
> cbbr...@hex.net - "What have you contributed to Linux today?..."
--
Perry Grieb
c23...@eng.delcoelect.com
Primary Application: Win95 hour glass ... Is there a unix port?
bp> It should also be noted that no one intentionally codes bugs.
Almost no one. I read an interesting software engineering article a
while back that went like this:
1. At any time, your users/QA people/developers etc. will have found a
certain proportion of the bugs in the program.
2. If, during development, you deliberately introduce a number of known
bugs, you can keep track of how many of the deliberate bugs have been
found.
Let Bd be the number of deliberate bugs introduced.
Let Ba be the (unknown) number of other bugs.
Let R be the number of reported bugs (we shall assume that duplicate
reports of the same bug can be weeded out even though the symptoms may
differ, for example by fixing the reported bugs).
Suppose a fraction X of the reported bugs turn out to have been
deliberate.
Hence we have found R.X deliberate and R.(1-X) non-deliberate bugs.
We have found a fraction (R.X)/Bd of the total number of deliberate
bugs.
Let us assume that the fraction of non-deliberate bugs discovered is
the same as the fraction of deliberate ones (Hmm.....).
Hence we assume we have found a fraction (R.X)/Bd of the non-deliberate
bugs, so the fraction remaining is 1-(R.X/Bd). Equivalently, since the
R.(1-X) non-deliberate bugs found are that same fraction of Ba, we can
estimate Ba = R.(1-X) / ((R.X)/Bd) = Bd.(1-X)/X.
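To make the arithmetic concrete, here is a toy calculation in C (all
numbers invented for illustration):

#include <stdio.h>

int main(void)
{
    double Bd = 20.0;  /* deliberate bugs seeded                   */
    double R  = 50.0;  /* distinct bug reports received            */
    double X  = 0.2;   /* fraction of reports that hit seeded bugs */

    double found_deliberate = R * X;                 /* 10 of 20 found */
    double fraction_found   = found_deliberate / Bd; /* 0.5            */
    double Ba     = Bd * (1.0 - X) / X;              /* estimate: 80   */
    double hiding = Ba * (1.0 - fraction_found);     /* about 40 left  */

    printf("estimated non-deliberate bugs: %.0f, still hiding: %.0f\n",
           Ba, hiding);
    return 0;
}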
But introducing bugs deliberately seems a bad idea to me, especially
as the implicit assumption that the deliberate bugs are no easier
to find than the non-deliberate ones is a bit tenuous.
Rudolf Leitgeb <lei...@variable.stanford.edu> wrote in article
<6kat22$s0d$1...@nntp.Stanford.EDU>...
> In article <6ka2uc$eis$1...@mulga.cs.mu.oz.au>,
> f...@cs.mu.oz.au (Fergus Henderson) writes:
> > Someone may have misreported this as "egcs gets an internal compiler
> > error" when in fact the true cause may have been incorrect "asm"
> > statements in the code being compiled.
>
> The scary thing about this is not so much the internal error (or
> whatever it really is) but the fact that there doesn't seem to be a
> workaround, or I at least can't find one on the egcs FAQ page. So the
> average user expects that hell will break loose if one installs egcs
> and actually needs to get some work done with it ...
I think the egcs developers should only be expected to fix bugs in the
egcs code. If the code you tell egcs to compile is incorrectly written,
it is your fault and you should correct your code. When I write some
code and it doesn't compile because I've made a mistake, I don't
complain to the gcc developers and tell them to correct my code for me!
--
Tristan Wibberley
This is the MSDOS philosophy: It is stable, it's just bad applications
that cause it to freeze.
Hellooooo !!! It's 1998 !
A long time ago people discovered that it might be useful if a compiler
gives meaningful error messages when it is unhappy.
It is not egcs's job to fix buggy code, but it should tell you why it is
unhappy. And, no, a core dump does not count as an error message
(although it probably contains more information than any real error
message :-)
Anyway, someone else wrote that this issue has been resolved and that
the FAQ is outdated ...
Rudi
--
| | | | |
\ _____ /
/ \ B O R N
-- | o o | -- T O
-- | | -- S L E E P
-- | \___/ | -- I N
\_____/ T H E S U N
/ \
| | | | |
>In article <01bd898e$f897c820$2e1657a8@w_tristan.gb.tandem.com>,
> "Tristan Wibberley" <tristan....@compaq.com> writes:
>> I think the egcs developers should only be expected to bugfix the egcs code
>> - if the code you tell egcs to compile is incorrectly written it is your
>> fault - you should correct your code - when I write some code and it
>> doesn't compile because I've made a mistake, I don't complain to the gcc
>> developers and tell them to correct my code for me!
>
>This is the MSDOS philosophy: It is stable, it's just bad applications
>that cause it to freeze.
Strictly speaking, that is true.
>Hellooooo !!! It's 1998 !
>
>A long time ago people discovered that it might be useful if a compiler
>gives meaningful error messages if it is unhappy.
GCC has great error messages, much more meaningful than those of other
popular compilers. I don't know what you are talking about.
>It is not egcs's job to fix buggy code but it should tell you why it is
>unhappy. And, no, a core dump does not count as error message (although
>it probably contains more information than any real error message :-)
Only EGCS dumps core. Previous versions of GCC _never_ did that, right?
>Clearly, RMS will never use the open development model because he
>feels it would mean giving up control. Of course, Linus has proved
>that such a notion is absurd, but there's no convincing RMS.
Linus still has ultimate power in Linux. In that sense, development is
closed. It isn't run by a board or core team like Apache or *BSD. OTOH,
what Linus is, is a great manager who has made people want to work with
him.
--
Rodger Donaldson rod...@ihug.co.nz
The Pinguin Patrol
> I wrote:
> > This seems very hard to grasp for people who never developed
> > compilers. A compiler is simply a huge piece of software that has
> > bugs just as every other piece of software of comparable size and
> > complexity.
> It should also be noted that no one intentionally codes bugs. Bugs
> happen (should be a T-shirt motto), and they get into a release when
> they aren't discovered and fixed beforehand. GCC 2.7 has several known
> bugs, and due to the rule that for every known bug there are three
> other bugs hiding, it also has several unknown bugs.
The truth is far worse. gcc-2.7.[12].x is so sloppy in its optimisation
that various bugs are hidden, because they sit in code paths that never
get exercised. This is the main reason why you can get gcc-2.7.x.y to
compile weird Linux kernels chock full of asm statements using dubious
constructs on hopeless architectures like the x86 into correct code.
Aside from this, there was an asm that gcc-2.x (and by inheritance, egcs-1.0)
compiled wrongly because it didn't follow its own documentation, and of
course there were plenty of "grey" areas.
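As an illustration of such a "grey" area (my own minimal sketch, not
code from the kernel): an asm statement with a wrong constraint may
happen to work under a sloppy optimizer and break under an aggressive
one.

int flag = 0;

void set_flag_wrong(void)
{
        /* WRONG: the asm writes to flag but declares it only as an
           input, so an aggressive optimizer is free to assume that
           flag is unchanged and keep using a cached value. */
        asm("movl $1, %0" : : "m" (flag));
}

void set_flag_right(void)
{
        /* RIGHT: an output constraint tells the compiler that the
           asm modifies flag. */
        asm("movl $1, %0" : "=m" (flag));
}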
Egcs is the way to go because that's where these shortcomings are being
*addressed*.
--
Toon Moene (mailto:to...@moene.indiv.nluug.nl)
Saturnushof 14, 3738 XG Maartensdijk, The Netherlands
Phone: +31 346 214290; Fax: +31 346 214286
g77 Support: mailto:for...@gnu.org; NWP: http://www.knmi.nl/hirlam
I am fully aware of that and love to use gcc (and everybody else in our
group is trying to get away from HP's and Sun's "professional
development kits").
I was only responding to the previous post, where Tristan claimed that
it would be sufficient if the compiler produced correct code from
correct source code.
The gcc info pages specifically say:
* If the compiler gets a fatal signal, for any input whatever, that
is a compiler bug. Reliable compilers never crash.
I assume (and hope) that egcs follows the same philosophy ...
> Only EGCS dumps core. Previous versions of GCC _never_ did that, right?
Hey, settle down! I had gcc 2.7.0 crashing, too, and had to upgrade to 2.7.2,
which seemed to work fine.
All I said was that it does not really encourage me to move to egcs if
the FAQ says that it crashes when I compile the kernel. I know, the
kernel does a lot of crazy stuff and has a lot of hacks to work around
gcc 2.7.2 bugs. But all this doesn't help me if I have to reconfigure
and recompile the kernel.
Anyways, since this crash supposedly got resolved a while ago, it is probably
not worth starting a silly flame war here ...
Cheers
> GCC has great error messages, much more meaningful than those of other
> popular compilers. I don't know what you are talking about.
At least the syntax error messages generated by the bison parser are
rather poor. g++ error messages have some problems too (e.g. try to find
a syntax error in an STL program). If you want to see how a compiler can
generate really good error messages, look e.g. at what GNAT (the GNU
Ada 95 compiler) generates.
-Andi
To be fair, the Linux kernel[1] found a real gcc bug too. gcc 2.8.0
ignored certain casts to volatile. Kenner fixed it in gcc 2.8.1, but
AFAIK the fix is only in egcs-current (via the gcc 2.8 merges), not in
the egcs 1.0.x tree. A compiler ignoring volatile is generally very
suspicious for low-level systems programming...
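For illustration, here is the kind of construct that depends on the
cast being honoured (an invented minimal example, not the actual
kernel code):

int poll_status(int *status_reg)
{
        /* The cast to volatile forces a fresh load from memory on
           every iteration; a compiler that ignores the cast may
           hoist the load out of the loop and spin forever. */
        while (*(volatile int *)status_reg == 0)
                ;
        return *status_reg;
}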
-Andi
[1] the actual test case never appeared in a released Linux kernel, but
it was one of the early proposed fixes for the sys_iopl() kernel bug
that was brought to light by gcc 2.8.x's addressof optimization.