
open64 versus gcc


dz

Nov 22, 2006, 9:17:33 PM

Hi
Can anyone give me some opinion on how these two compilers compare in
terms of the optimizations they implement? Also, can anyone comment on
their experience with the stability of these compilers?

Any pointers or answers will be highly appreciated

thanks
dz

touati

Nov 24, 2006, 8:21:12 AM

dz wrote:
Actually, Open64 produces better code than gcc.
Open64 is programmed in C++, while gcc is programmed in C.
Open64 is not as actively maintained as gcc.
gcc has more backends than Open64.
The gcc community is larger, but that is not a quality criterion.

S

shrey

Nov 24, 2006, 8:20:47 AM

One advantage that gcc has over open64 is the variety of architectures
it supports, especially embedded ones, if that is what you are
interested in. Perhaps somebody can add whether open64 does that as well.

Shrey

Steven Bosscher

Nov 26, 2006, 9:44:55 PM

On 24 Nov 2006 08:21:12 -0500, touati wrote:
> Actually, Open64 produces better code than gcc.

Ah, generalizations... It depends on the target you want to look at.
Open64 on Itanium destroys GCC, but on e.g. AMD64 the difference is
really not that large. Sometimes, Open64 compilers produce better code
than GCC, and sometimes GCC produces better code. For example, look
at the following comparable SPEC results:

http://www.spec.org/osg/cpu2000/results/res2006q3/cpu2000-20060814-06971.html
http://www.spec.org/osg/cpu2000/results/res2006q3/cpu2000-20060724-06813.html

PathScale is an Open64-based compiler. Note that the PathScale
scores are with profile feedback, and only base runs were done
with GCC. The GCC baseline rate (without PDO) is within 10% of
the PathScale scores (with PDO).
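
(For reference, "profile feedback" here is GCC's usual two-step build;
a minimal sketch follows -- the toy program is invented, the options
are GCC's standard -fprofile-generate / -fprofile-use.)

    /* toy.c -- a loop with a heavily biased branch, the kind of code
       where a training run can guide block layout and inlining. */
    #include <stdio.h>

    int main(void)
    {
        long i, sum = 0;
        for (i = 0; i < 100000000; i++)
            sum += (i % 100 == 0) ? 3 : 1;  /* taken ~1% of the time */
        printf("%ld\n", sum);
        return 0;
    }

    /* Two compiles with one training run in between:
         gcc -O2 -fprofile-generate toy.c -o toy && ./toy
         gcc -O2 -fprofile-use toy.c -o toy                 */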

What's certainly true is that Open64 has more optimizations
implemented, especially Open64 2.0 for Itanium. But whether
those pay off for other targets, I don't know. Also, the
implementation of some of these optimizations is apparently
very Itanium-specific (people from STM wrote about this here:
http://www.st.com/stonline/press/magazine/stjournal/vol0102/special_report/pdf/vol12sr.pdf).


> Open64 is programmed in C++, while gcc is programmed in C.
> Open64 is not as actively maintained as gcc.
> gcc has more backends than Open64.

Does anyone even know how many Open64 backends there are? The
more visible ones are the Itanium and x86-64 backends, but are
there others (publicly) available?

> The gcc community is larger, but that is not a quality criterion.

It should help for stability though.

Gr.
Steven

Sid Touati

Nov 27, 2006, 5:45:12 PM

Steven Bosscher wrote:

> On 24 Nov 2006 08:21:12 -0500, touati wrote:
>> Actually, Open64 produces better code than gcc.
>
> Ah, generalizations... It depends on the target you want to look at.

I agree that my initial sentence looks like a generalization, but I am
aware that it isn't one. It is difficult to make a fair comparison
between compilers; there are too many parameters to explore...

Indeed, since all code optimization techniques (and their order of
execution) are based on ad hoc heuristics, we can never guarantee that
a compiler is better than another for any input program. Usually,
benchmarks are used, but benchmarks are rarely representative of
programs (they may represent workloads, but not
programs). Consequently, for any pair of distinct compilers C1 and C2,
you can always find programs better optimized with C1, and others
better optimized with C2.

S

dz

Nov 29, 2006, 12:52:03 AM

The criteria I am looking for are stability and the strength of some
basic analysis in the compiler, such as alias analysis. Can anyone
comment on that?
dz

Diego Novillo

Nov 29, 2006, 11:07:43 AM

dz wrote on 11/29/06 00:52:

> The criteria I am looking for are stability and the strength of some
> basic analysis in the compiler, such as alias analysis. Can anyone
> comment on that?

For alias analysis, GCC uses a fairly sophisticated constraint-based
points-to analysis and complements it with type-based disambiguation.
You can read about it in the various GCC Summit proceedings over the
last 2-3 years. You can find them in http://gcc.gnu.org/wiki

I'm not sure what exactly you mean by "some basic analysis", but you
will find GCC a fairly featureful compiler. In terms of strength, GCC
is the system compiler of every Linux distribution out there, so it is
thoroughly tested and breaking it takes a bit of effort.
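
To make the type-based part concrete, here is a minimal sketch (an
invented example, not from GCC's sources) of the kind of case
type-based disambiguation resolves; GCC enables -fstrict-aliasing at
-O2, and -fno-strict-aliasing turns it off:

    /* Under C's aliasing rules an int object may not be accessed
       through a float lvalue, so the compiler may assume *ip and *fp
       do not overlap. */
    int store_both(int *ip, float *fp)
    {
        *ip = 1;
        *fp = 2.0f;   /* cannot legally modify *ip      */
        return *ip;   /* may be folded to "return 1;"   */
    }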

A.L.

Dec 1, 2006, 9:49:03 AM

On 29 Nov 2006 11:07:43 -0500, Diego Novillo <dnov...@redhat.com>
wrote:

>dz wrote on 11/29/06 00:52:
>> The criteria I am looking for are stability and the strength of some
>> basic analysis in the compiler, such as alias analysis. Can anyone
>> comment on that?
>
>For alias analysis, GCC uses a fairly sophisticated constraint-based
>points-to analysis and complements it with type-based disambiguation.
>You can read about it in the various GCC Summit proceedings over the
>last 2-3 years. You can find them in http://gcc.gnu.org/wiki
>
>I'm not sure what exactly you mean by "some basic analysis", but you
>will find GCC a fairly featureful compiler.

One of the features on the list of "rich features" is that the
results of numerical computations (such as inverting a large matrix
or solving a large set of linear equations) depend strongly on the
activated options, especially the optimization level.

If you are a hobbyist, game programmer, or GUI programmer, gcc is
pretty likely good enough. If you do mission-critical applications,
intensive number crunching, or both, stay away from gcc.

A.L.
[GCC is fine for systems programming. I've never done serious
numerical work in it, so you may well be right about that. -John]

A.L.

Dec 1, 2006, 11:52:42 AM

On 1 Dec 2006 09:49:03 -0500, "A.L." <alew...@fala2005.com> wrote:

>One of the features on the list of "rich features" is that the
>results of numerical computations (such as inverting a large matrix
>or solving a large set of linear equations) depend strongly on the
>activated options, especially the optimization level.
>
>If you are a hobbyist, game programmer, or GUI programmer, gcc is
>pretty likely good enough. If you do mission-critical applications,
>intensive number crunching, or both, stay away from gcc.
>
>A.L.
>[GCC is fine for systems programming. I've never done serious
>numerical work in it, so you may well be right about that. -John]

Disclaimer: My experiments with gcc and numerical computations ended
in 2003. Maybe something has changed since then. If there is somebody
here who is using an up-to-date version of gcc for large-scale,
intensive floating-point number crunching, please share the
experience.

A.L.

Diego Novillo

Dec 3, 2006, 9:30:41 PM

A.L. wrote on 12/01/06 11:52:

> Disclaimer: My experiments with gcc and numerical computations ended
> in 2003. Maybe something has changed since then. If there is somebody
> here who is using an up-to-date version of gcc for large-scale,
> intensive floating-point number crunching, please share the
> experience.
>

GCC has gone through a major overhaul starting with version 4.0. It now
supports Fortran 95, vectorization, OpenMP, and several high-level loop
optimizations. It still needs work in scheduling and register
allocation, but there is work underway in those areas as well.

That being said, we have not yet closed the gap completely in terms of
floating-point performance.
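
As a rough illustration (an invented example, not from the post), this
is the kind of loop those features target: it can be auto-vectorized
with -ftree-vectorize and, thanks to the pragma, parallelized with
OpenMP when built with -fopenmp:

    /* saxpy-style loop; the restrict qualifiers (C99) promise the
       arrays do not overlap, which helps the vectorizer, and the
       pragma makes it an OpenMP work-sharing loop. */
    void saxpy(int n, float a, const float *restrict x, float *restrict y)
    {
        int i;
    #pragma omp parallel for
        for (i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }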

Jonathan Thornburg -- remove -animal to reply

Dec 3, 2006, 9:30:19 PM

A.L. <alew...@fala2005.com> wrote:
[[about gcc]]

> One of the features on the list of "rich features" is that the
> results of numerical computations (such as inverting a large matrix
> or solving a large set of linear equations) depend strongly on the
> activated options, especially the optimization level.

This is just what you would expect for a computation which is
highly sensitive to small differences in roundoff errors.
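
A toy example (added here for illustration, not from the original
post): float addition is not associative, so any optimization that
reorders a reduction can change the result.

    #include <stdio.h>

    int main(void)
    {
        float a = 1.0e8f, b = -1.0e8f, c = 1.0f;
        /* (a + b) + c == 1, but a + (b + c) == 0, because b + c rounds
           back to -1.0e8f in single precision.  Options in the spirit
           of GCC's -ffast-math permit this kind of reassociation. */
        printf("%g  %g\n", (a + b) + c, a + (b + c));
        return 0;
    }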


> If you are a hobbyist, game programmer, or GUI programmer, gcc is
> pretty likely good enough. If you do mission-critical applications,
> intensive number crunching, or both, stay away from gcc.

I disagree. I've done heavy-duty number-crunching for a living for
over 20 years (simulating black hole collisions & similar problems).
I'm very happy with gcc, and would recommend it as an excellent compiler
for numerical work. It's not quite as good an optimizer as (say) the
Intel compiler, but it's usually within 10 or 20%. My colleagues and
I definitely find that the Intel compilers generate wrong code, or
abort with internal errors in valid code, considerably more often
than gcc. This is for both gcc 3.3* and gcc 4, with code that is
usually a mix of C, C++, Fortran 77, and Fortran 90.

ciao,

--
"Jonathan Thornburg -- remove -animal to reply" <jth...@aei.mpg-zebra.de>
Max-Planck-Institut fuer Gravitationsphysik (Albert-Einstein-Institut),
Golm, Germany, "Old Europe" http://www.aei.mpg.de/~jthorn/home.html

Greg Lindahl

Dec 3, 2006, 9:32:51 PM

> >For alias analysis, GCC uses a fairly sophisticated constraint-based
> >points-to analysis and complements it with type-based disambiguation.

I believe Open64 basically claims the same features. Comparing
superficial feature lists from compilers will rarely teach you much;
it's much more interesting to see how well the compiler does on
benchmarks or your favorite application.

> One of the features on the list of "rich features" is that the
> results of numerical computations (such as inverting a large matrix
> or solving a large set of linear equations) depend strongly on the
> activated options, especially the optimization level.

Can you name a modern optimizing compiler for which this statement
*isn't* true? I don't think there are any. If you want results which
are the same every time, most compilers list additional flags in their
documentation which inhibit optimizations that change the answer.
Almost all intensive number crunching is done without those flags.

-- greg
[employed by, not speaking for, QLogic Corporation]

Gary Oblock

Dec 3, 2006, 9:34:35 PM

Diego Novillo wrote:

I worked on a custom VLIW scheduler grafted into gcc 4.x, and at the
RTL level the alias information available wasn't all that great in my
opinion. I think there is a fundamental lack of communication between
the tree level and the RTL level: although the initial RTL generated
has a way to get at the tree-level information, the subsequent
optimizations need to preserve it (which they don't).

-- Gary

--
Bronze Dreams
http://www.bronzedreams.com

Brooks Moses

Dec 3, 2006, 9:33:44 PM


Well, it's benchmarks rather than a real-world experience, but
Polyhedron's Fortran benchmarks are specifically aimed at "large-scale
intensive floating-point number crunching". On those, gfortran (the gcc
Fortran front-end) comes in, on average, about 25-30% slower than the
best compilers -- but there's a fair bit of variation; in a fair number
of cases it's about as fast as anything else.

It should also be noted that this is testing GCC's Fortran front end
(gfortran), of course, rather than the far-more-mature C or C++
compilers; that probably accounts for some of the slowness. (Compare,
for instance, the g95 compiler, which also uses the gcc backend but
doesn't have nearly as much optimization work on the frontend.) In
particular, I believe the notable slowness on the AERMOD benchmark has
been traced to one particular inner loop optimization that's missed in
the front end.

Actual results (for P4 on Linux) here, and this page links AMD results:
http://www.polyhedron.com/pb05/linux/f90bench_p4.html

At the very least, it's certainly quite usable for serious numerical
work; the OpenFOAM CFD package (which I use) uses gcc, and seems to work
quite acceptably well, though that doesn't mean too much since I haven't
actually compared the results with other compilers.

- Brooks


--
The "bmoses-nospam" address is valid; no unmunging needed.

ST

Dec 6, 2006, 8:59:52 AM

> I worked on a custom VLIW scheduler grafted into gcc 4.x, and at the
> RTL level the alias information available wasn't all that great in my
> opinion. I think there is a fundamental lack of communication between
> the tree level and the RTL level: although the initial RTL generated
> has a way to get at the tree-level information, the subsequent
> optimizations need to preserve it (which they don't).


Yes, I agree. There is some ongoing work on gcc aiming to improve this
point.

S
