
Cracking DES with C++ is faster than Java?


Julie

Apr 25, 2004, 3:37:43 PM
Hi,

I am going to write a program cracking DES.
One says that C++ (actually C) is faster than Java.
Is this true?
Thanks.

J

David Rasmussen

Apr 25, 2004, 3:49:44 PM

It depends on the implementation (i.e. the compiler), but in practice
and in general: yes.

/David

Douglas A. Gwyn

Apr 25, 2004, 3:52:18 PM
Julie wrote:
> One says that C++ (actually C) is faster than Java.

That depends on several factors,
but in general, programs coded in C will
be slightly faster than if C++ had been used
and noticeably faster than if Java were used.
However, the fastest DES crackers tend to use
some assembly language in order to exploit
available hardware instructions that are not
likely to be used in code generated by a
compiler.

Why bother to write a DES cracker when you
can obtain one that has already had a lot of
development work put into it?

Liz

Apr 25, 2004, 4:11:26 PM

"David Rasmussen" <david.r...@gmx.net> wrote in message
news:c6h4hm$i7j$2...@news.net.uni-c.dk...

or as they say in Canada
yaaaaaaaaaaaaaaaaaaaaaaaaaaaa

>
> /David


Claudio Puviani

Apr 25, 2004, 5:46:24 PM
"Julie" <juri...@aol.com> wrote

At first I thought you were our resident Julie, but I see that you aren't.

The aspect of Java that's inherently slow is the management of object
creation/allocation/copying. If your DES cracker works strictly with a raw
array of bytes and your compiler generates reasonable code (some compilers,
like GNU Java, can generate native executables), there's a good possibility
that the difference in execution times could be negligible. However, if you
introduce object creation into the mix or target a JVM that has no JIT
compilation, you're just handing the victory to C++ on a silver platter.

Claudio Puviani


Jan Panteltje

Apr 25, 2004, 5:52:45 PM
On a sunny day (25 Apr 2004 12:37:43 -0700) it happened juri...@aol.com
(Julie) wrote in <c76800c6.04042...@posting.google.com>:

There are special techniques for DES; most programs
are in C and assembler.
Java is snail-like.

Leor Zolman

Apr 25, 2004, 8:11:28 PM
On Sun, 25 Apr 2004 15:52:18 -0400, "Douglas A. Gwyn" <DAG...@null.net>
wrote:

>Julie wrote:
>> One says that C++ (actually C) is faster than Java.
>
>That depends on several factors,
>but in general, programs coded in C will
>be slightly faster than if C++ had been used

Why do you say that? C++ was designed so as "not to leave room for a
lower-level language" (including C), so I can't imagine how anything you
code in C would become slower as a result of adaptation to / compilation
under C++ ... unless you go from a superior C compiler to an inferior C++
compiler, of course.
-leor


>and noticeably faster than if Java were used.
>However, the fastest DES crackers tend to use
>some assembly language in order to exploit
>available hardware instructions that are not
>likely to be used in code generated by a
>compiler.
>
>Why bother to write a DES cracker when you
>can obtain one that has already had a lot of
>development work put into it?

--
Leor Zolman --- BD Software --- www.bdsoft.com
On-Site Training in C/C++, Java, Perl and Unix
C++ users: download BD Software's free STL Error Message Decryptor at:
www.bdsoft.com/tools/stlfilt.html

Tom St Denis

Apr 25, 2004, 8:20:07 PM

It can be. If you don't know why, you really ought to take a step back
and learn some computer science...

Tom

Paul Schlyter

Apr 26, 2004, 2:13:36 AM
In article <eIudnVU1Ycx...@comcast.com>,

Douglas A. Gwyn <DAG...@null.net> wrote:

> Julie wrote:
>
>> One says that C++ (actually C) is faster than Java.
>
> That depends on several factors, but in general, programs
> coded in C will be slightly faster than if C++ had been used
> and noticeably faster than if Java were used. However, the
> fastest DES crackers tend to use some assembly language in
> order to exploit available hardware instructions that are not
> likely to be used in code generated by a compiler.

No -- the fastest DES crackers use dedicated DES hardware, and then
it doesn't matter that much whether you use assembler or C to "talk"
to that hardware. DES as an algorithm was designed to be implemented
in hardware, and that's why software implementations of DES are so
slow. Later algorithms (e.g. AES) are usually designed to be
implemented on common microprocessors, and that's why software
implementations of these are so much faster than software
implementations of DES.


> Why bother to write a DES cracker when you can obtain one that
> has already had a lot of development work put into it?

...such as a DES hardware chip...

--
----------------------------------------------------------------
Paul Schlyter, Grev Turegatan 40, SE-114 38 Stockholm, SWEDEN
e-mail: pausch at stockholm dot bostream dot se
WWW: http://www.stjarnhimlen.se/
http://home.tiscali.se/pausch/

Thomas Pornin

Apr 26, 2004, 3:44:25 AM
According to Julie <juri...@aol.com>:

> I am going to write a program cracking DES.
> One says that C++ (actually C) is faster than Java.
> Is this true?

Usually, Java is compiled into bytecode, which is in effect native code
for a "virtual machine". When the program is executed, the bytecode is
compiled into native code (by the "JIT" compiler). That compilation
handles only local optimizations, and is not as good at other types of
optimizations (such as inlining). Besides, Java is designed to be
"safe", therefore all array accesses are checked against the array
length. This slows Java down considerably. On pure integer computations
(such as DES cracking), you may expect a factor of 3 between a Java
implementation and an optimized C (or C++) implementation.

Of course, since most applications are I/O constrained, or limited by
the memory bandwidth, that slowdown factor is of little importance,
except for the very few routines which require raw cpu power. A DES
cracker is a very uncommon application.

Besides, you can get much worse performance, both with Java and C, if
you do not know what happens "under the hood". For instance, using
dynamic memory allocation (new, malloc()...) inside the inner loop can
kill speed very effectively. If you want to produce optimized code,
you have to take care of the fine details.


As for C vs C++, this is a long-debated issue. The C++ model can put
some limitations on the compiler (for instance, exceptions have to
work their way out through the stack frames, which means that the
compiler cannot get too fancy with them) but it can also provide the
programmer with syntactic constructs which help in building optimized
code. A very good C programmer and a very good C++ programmer, both
using proper tools, will produce programs of similar performance. On
some architectures (such as the PC), a very good assembly programmer will
beat both (at the cost of much reduced portability).


--Thomas Pornin

Julie

Apr 26, 2004, 12:36:08 PM
Claudio Puviani wrote:
>
> "Julie" <juri...@aol.com> wrote
> > Hi,
> >
> > I am going to write a program cracking DES.
> > One says that C++ (actually C) is faster than Java.
> > Is this true?
>
> At first I thought you were our resident Julie, but I see that you aren't.

Yep, different Julie.

Michael Scovetta

Apr 26, 2004, 5:13:08 PM
Jan Panteltje <pNaonSt...@yahoo.com> wrote in message news:<c6hbu9$156o$1...@news.wplus.net>...

This clearly shows a poor understanding of Java, especially of recent versions.
Read up on the subject first:
http://www.idiom.com/~zilla/Computer/javaCbenchmark.html

or Google your own benchmarks. C/C++ certainly has some advantages, but
Java is certainly no "snail".

-Mike Scovetta

Douglas A. Gwyn

Apr 26, 2004, 6:05:11 PM
Leor Zolman wrote:
> Why do you say that? C++ was designed so as "not to leave room for a
> lower-level language" (including C), so I can't imagine how anything you
> code in C would become slower as a result of adaptation to / compilation
> under C++ ... unless you go from a superior C compiler to an inferior C++
> compiler, of course.

C++ has several features that force the runtime to
use more time. Just the need to run through the
table of static initializers before beginning
program execution is already a slowdown. Functions
generally have an extra (hidden) argument, which
slows down function linkage. When virtual functions
are involved, there is an additional slowdown.

There is always room for a lower-level language.
In the past couple of years much of my programming
has been of necessity in assembly language!

Daniel Sjöblom

Apr 26, 2004, 1:26:37 PM
Thomas Pornin wrote:
> According to Julie <juri...@aol.com>:
>
>>I am going to write a program cracking DES.
>>One says that C++ (actually C) is faster than Java.
>>Is this true?
>
>
> Usually, Java is compiled into bytecode, which is somehow native code
> for a "virtual machine". When the program is executed, the bytecode is
> compiled into native code (by the "JIT" compiler). That compilation
> handles only local optimizations, and is not good as other types of
> optimizations (such as inlining).

The JVM from Sun is actually very good at inlining, and it can also
inline virtual methods if it can prove that a method is not overridden.

> Besides, Java is designed so as to be
> "safe", therefore all array accesses are checked with regards to the
> array length. This slows down Java much.

This is true, although modern JVMs can eliminate bounds checks in many
cases.

>On pure integer computations
> (such as DES cracking), you may expect a factor of 3 between a Java
> implementation and an optimized C (or C++) implementation.

Sounds about right. I see no point in writing a DES cracker in Java. The
features that make Java good (OO, safety, portability, ease of use) are
of no use in such a program, especially as it is not hard to write a
safe, portable C or C++ program for this task.
--
Daniel Sjöblom
Remove _NOSPAM to reply by mail

Julie

Apr 26, 2004, 7:41:32 PM

Sorry, but you are comparing two completely different things between the
languages.

What is being talked about is if you take a C source file and create an
executable, then take that same source file and compile it as C++, you will end
up w/ (virtually) identical executables.

C++ specific features can't be compared w/ C, simply because they do not exist
in C!

Paul Mensonides

Apr 26, 2004, 7:45:43 PM

Well, one possible difference is the type_info structures for every user-defined
type, which can increase executable size if not optimized away -- which is not
always possible.

Regards,
Paul Mensonides


Leor Zolman

Apr 26, 2004, 10:45:50 PM
"Douglas A. Gwyn" <DAG...@null.net> wrote in message news:<yfqdnR1VhYE...@comcast.com>...

> Leor Zolman wrote:
> > Why do you say that? C++ was designed so as "not to leave room for a
> > lower-level language" (including C), so I can't imagine how anything you
> > code in C would become slower as a result of adaptation to / compilation
> > under C++ ... unless you go from a superior C compiler to an inferior C++
> > compiler, of course.
>
> C++ has several features that force the runtime to
> use more time. Just the need to run through the
> table of static initializers before beginning
> program execution is already a slowdown.

Okay, let's call that a possible one-time hit at start up, but
generally speaking you're going to pay the same start-up costs for
run-time initialization whether they end up being C++-style or
C-style.

> Functions
> generally have an extra (hidden) argument, which
> slows down function linkage.

You mean passing the "this" pointer? But that's just a matter of
semantics; if you need such a "parameter" in C, it goes into the
argument/parameter lists, and if you don't, neither do you need it in
C++ (you just use non-member functions).


> When virtual functions
> are involved, there is an additional slowdown.

And if you need a similar dispatch mechanism in a C program, it may
very well run even slower, but not likely any faster. Again, in C++
you don't pay for what you do not use or need. If you /need/ virtual
functions, you need that functionality. If not, you don't, and neither
do you pay for it.

>
> There is always room for a lower-level language.
> In the past couple of years much of my programming
> has been of necessity in assembly language!

There may be reasons to code something in assembly, but I still find
it difficult to believe that there would need to be room for /C/ in
any self-respecting C++ environment. And moving code from C to C++
should never incur any palpable performance penalty.
-leor

Douglas A. Gwyn

Apr 27, 2004, 1:09:27 AM
Julie wrote:
> What is being talked about is if you take a C source file and create an
> executable, then take that same source file and compile it as C++, you will end
> up w/ (virtually) identical executables.

(Since the languages do not have a subset/
superset relationship, in general compiling a
C source program with a C++ compiler is not safe.)

There is more overhead (both space and time)
even for a C-looking program compiled with C++.

However, the question was about developing a
specific application using various PLs. If you
choose C++, then presumably you will make
essential use of features specific to that
language. Objects, for example.

Douglas A. Gwyn

Apr 27, 2004, 1:12:50 AM
Leor Zolman wrote:
> There may be reasons to code something in assembly, but I still find
> it difficult to believe that there would need to be room for /C/ in
> any self-respecting C++ environment.

The question was about the performance to be
expected when a single language is used, not
about combining pieces done in several languages.
I gave an accurate answer to the question.

Dez Akin

Apr 27, 2004, 4:43:54 AM
"Douglas A. Gwyn" <DAG...@null.net> wrote in message news:<yfqdnR1VhYE...@comcast.com>...
> Leor Zolman wrote:
> > Why do you say that? C++ was designed so as "not to leave room for a
> > lower-level language" (including C), so I can't imagine how anything you
> > code in C would become slower as a result of adaptation to / compilation
> > under C++ ... unless you go from a superior C compiler to an inferior C++
> > compiler, of course.
>
> C++ has several features that force the runtime to
> use more time. Just the need to run through the
> table of static initializers before beginning
> program execution is already a slowdown. Functions
> generally have an extra (hidden) argument, which
> slows down function linkage. When virtual functions
> are involved, there is an additional slowdown.

Most compilers today aren't so stupid. Also, C++ has a number of
features that make it 'faster' than C in a number of cases, such as
functor inlining and compile-time code generation. std::sort will
almost always run faster than qsort.

> There is always room for a lower-level language.
> In the past couple of years much of my programming
> has been of necessity in assembly language!

For perf? The tightest inner loops, perhaps, given that we're talking
about crypto with very deterministic algorithms. It lends itself to
SIMD instruction extensions that most compilers don't exploit.

Optimizers are getting smarter though, and I suspect in not too many
years it will be a waste of time to attempt to out-do the optimizer.
In a couple of decades at the most I suspect it will be impossible for
any human to outperform an optimizer. (Massalin's superoptimizer led
to Denali, which I imagine will lead to more generic optimal code
approximators)

David Rasmussen

Apr 27, 2004, 5:11:16 AM
Douglas A. Gwyn wrote:
>
> (Since the languages do not have a subset/
> superset relationship, in general compiling a
> C source program with a C++ compiler is not safe.)
>

Nonsense...

> There is more overhead (both space and time)
> even for a C-looking program compiled with C++.
>

More nonsense...

> However, the question was about developing a
> specific application using various PLs. If you
> choose C++, then presumably you will make
> essential use of features specific to that
> language. Objects, for example.
>

And even more.

/David

Douglas A. Gwyn

Apr 27, 2004, 7:28:34 AM
Dez Akin wrote:
> For perf?

No. There are some things that simply cannot be coded
in any high-level language. For example, much of the
stuff that an embedded processor has to do before
starting to process the actual application.

Douglas A. Gwyn

Apr 27, 2004, 7:29:47 AM
David Rasmussen wrote:
> Nonsense...

Apparently he thinks he knows more than he
actually does.

Julie

Apr 27, 2004, 11:29:53 AM
"Douglas A. Gwyn" wrote:
>
> Julie wrote:
> > What is being talked about is if you take a C source file and create an
> > executable, then take that same source file and compile it as C++, you will end
> > up w/ (virtually) identical executables.
>
> (Since the languages do not have a subset/
> superset relationship,

??? How have you determined that? Do you have any substantive references? If
not, name something that exists in C but not C++ (excluding new C features
introduced since the last C++ standard).

> in general compiling a
> C source program with a C++ compiler is not safe.)

This was a toy example, not a mandate. The fact still stands: compiling a C
program w/ a C++ compiler will result in virtually the same executable code;
there is no intrinsic penalty for C features in C++.

If you still feel that this is not the case, please provide some facts,
evidence, or substantive references to back up your argument.

> There is more overhead (both space and time)
> even for a C-looking program compiled with C++.

Again, evidence of such a statement would help. Aside from compiler QOI
issues, could you please point out anything in the standard or supporting
documentation that indicates that C++ is larger and slower for an executable
that uses C constructs?

> However, the question was about developing a
> specific application using various PLs. If you
> choose C++, then presumably you will make
> essential use of features specific to that
> language. Objects, for example.

Right, but as soon as you use a feature in one language that isn't in the
other, you can no longer make a comparison.

Regardless, you bring up 'objects'. These can be compared across languages
such as:

C:
struct of function pointers and an instance pointer to simulate (non-virtual)
class instance

C++:
standard (non-virtual) class instance

ASM:
hand-coded structure, probably not too different from the method used in the C
implementation

Java:
standard class instance

All of these have approximately the same behavior, as far as the language
permits.

I suspect (merely stating that I haven't built a test harness for the above
tests) that ASM, C, and C++ all perform virtually the same for this example,
and Java is consistently slower.

According to what I gather from your arguments, the performance would always be
ordered as, and appreciably different: ASM < C < C++ < Java.

Mok-Kong Shen

Apr 27, 2004, 11:35:08 AM

David Rasmussen wrote:
> Douglas A. Gwyn wrote:
>
>>
>> (Since the languages do not have a subset/
>> superset relationship, in general compiling a
>> C source program with a C++ compiler is not safe.)
>>
>
> Nonsense...

[snip]

If I'm not mistaken, there can be code that is valid in both
C and C++ but has different meanings according to the
respective standards. If you happen to have such code and compile
it once as C and the other time as C++, you might
get into difficulties under some circumstances.

M. K. Shen

Julie

Apr 27, 2004, 11:38:22 AM
"Douglas A. Gwyn" wrote:
>
> Leor Zolman wrote:
> > Why do you say that? C++ was designed so as "not to leave room for a
> > lower-level language" (including C), so I can't imagine how anything you
> > code in C would become slower as a result of adaptation to / compilation
> > under C++ ... unless you go from a superior C compiler to an inferior C++
> > compiler, of course.
>
> C++ has several features that force the runtime to
> use more time. Just the need to run through the
> table of static initializers before beginning
> program execution is already a slowdown.

Just as slow as if a C program had registered an equal number of startup
routines.

> Functions
> generally have an extra (hidden) argument, which
> slows down function linkage.

Standalone functions *never* have an extra hidden argument. Non-static class
member functions (methods) do have the 'this' pointer passed around as function
calls are made.

> When virtual functions
> are involved, there is an additional slowdown.

Not a slowdown, but a difference in behavior. You can't compare a regular (C)
function w/ a class method or virtual method; they just don't compare.

Mok-Kong Shen

Apr 27, 2004, 11:44:39 AM

Julie wrote:

> "Douglas A. Gwyn" wrote:
>
>>Julie wrote:
>>
>>>What is being talked about is if you take a C source file and create an
>>>executable, then take that same source file and compile it as C++, you will end
>>>up w/ (virtually) identical executables.
>>
>>(Since the languages do not have a subset/
>>superset relationship,
>
>
> ??? How have you determined that? Do you have any substantive references? If
> not, name something that exists in C but not C++ (excluding new C features
> introduced since the last C++ standard).

[snip]

What Gwyn meant is, I suppose, that C is not a genuine subset
of C++, no more nor less. See the pointer given in Grumble's
post.

M. K. Shen

Niklas Borson

Apr 27, 2004, 1:29:24 PM
"Paul Mensonides" <leav...@comcast.net> wrote in message news:<4c-dnZemzMC...@comcast.com>...
[snip]

>
> Well one possible difference is type_info structures for every user defined type
> which can increase executable size if not optimized--which is not always
> possible.

type_info structures are only created for "polymorphic" types, i.e.,
types with virtual functions.

Paul Schlyter

Apr 27, 2004, 5:16:06 PM
In article <408E7C71...@nospam.com>, Julie <ju...@nospam.com> wrote:

> ??? How have you determined that? Do you have any substantive references?
> If not, name something that exists in C but not C++ (excluding new C
> features introduced since the last C++ standard).

There are a number of ways to write code which is valid C (i.e., C89)
but illegal C++:


1. C has fewer reserved words which can be used as identifiers:

int class, template, try, catch;

Such declarations are illegal in C++.



2. In C this is legal:

char a[6] = "123456";

In C++ the array must be large enough also for the terminating NUL of
the string constant.



3. In C, struct tags and typedef symbols are in different name spaces
but in C++ they are in the same name space. Thus this is legal in
C but illegal in C++:

   typedef int type;

   struct type
   {
       type memb;            /* int */
       struct type *next;    /* struct pointer */
   };

   void foo(type t, int i)
   {
       int type;
       struct type s;

       type = i + t + sizeof(type);
       s.memb = type;
   }


The code below is valid both in C and C++ but the semantics will be different:

   int sz = 80;

   int size(void)
   {
       struct sz
       { ... };

       return sizeof(sz);   /* sizeof(int) in C,        */
                            /* sizeof(struct sz) in C++ */
   }



4. C++ requires a cast when assigning a void pointer to another
kind of pointer; C does not require that:

char *p = malloc(1234); /* Legal C, illegal C++ */

Gregory G Rose

Apr 27, 2004, 6:01:08 PM
In article <c6lv0t$s4a$02$1...@news.t-online.com>,

Mok-Kong Shen <mok-ko...@t-online.de> wrote:
>What Gwyn meant is, I suppose, that C is not a genuine subset
>of C++, no more nor less. See the pointer given in Grumble's
>post.

No, he means that C++ is not a genuine superset of
C. One must get precedence right.

Greg.
--
Greg Rose
232B EC8F 44C6 C853 D68F E107 E6BF CD2F 1081 A37C
Qualcomm Australia: http://www.qualcomm.com.au

Mok-Kong Shen

Apr 27, 2004, 7:57:28 PM

Gregory G Rose wrote:
> In article <c6lv0t$s4a$02$1...@news.t-online.com>,
> Mok-Kong Shen <mok-ko...@t-online.de> wrote:
>
>>What Gwyn meant is, I suppose, that C is not a genuine subset
>>of C++, no more nor less. See the pointer given in Grumble's
>>post.
>
>
> No, he means that C++ is not a genuine superset of
> C. One must get precedence right.

Sorry, what is 'precedence' here? If A is subset of B,
then B is superset of A, and vice versa, isn't it?

M. K. Shen

Jerry Coffin

Apr 27, 2004, 8:46:56 PM
dez...@usa.net (Dez Akin) wrote in message news:<dd43b4da.04042...@posting.google.com>...

[ ... ]

> Optimizers are getting smarter though, and I suspect in not too many
> years it will be a waste of time to attempt to out-do the optimizer.
> In a couple of decades at the most I suspect it will be impossible for
> any human to outperform an optimizer. (Massalin's superoptimizer led
> to Denali, which I imagine will lead to more generic optimal code
> approximators)

Hmm...an interesting proposition. I've been hearing variations on
this theme since (at least) the FORTRAN IV days.

If anything, it looks to me like the situation is getting worse: I
don't think I've looked at anything in the last 5 years that was
nearly as close to optimal as what Control Data's FORTRAN IV compiler
typically produced 20+ years ago!

Part of this may be that I've gotten better at writing assembly
language, but frankly I rather doubt it -- I doubt that the extra
experience has compensated for the simple fact that I don't write
assembly code nearly as often anymore.

--
Later,
Jerry.

The universe is a figment of its own imagination.

Julie

Apr 28, 2004, 12:05:14 AM
Dez Akin wrote:
[snip]

> Optimizers are getting smarter though, and I suspect in not too many
> years it will be a waste of time to attempt to out-do the optimizer.
> In a couple of decades at the most I suspect it will be impossible for
> any human to outperform an optimizer. (Massalin's superoptimizer led
> to Denali, which I imagine will lead to more generic optimal code
> approximators)

I don't buy that.

If processing unit architecture stayed the same during that time, then you
would have an argument.

However, each time a processor is revved, new features, behaviors, opcodes,
etc. are added. At that point, the compiler writer must decide which
features to incorporate, if any, before even getting to optimizations.

It will always be possible to out-do the optimizer, however the value of such
has been steadily decreasing as processor speeds have increased.

Paul Schlyter

Apr 28, 2004, 2:44:15 AM
In article <b2e4b04.04042...@posting.google.com>,

Jerry Coffin <jco...@taeus.com> wrote:

> dez...@usa.net (Dez Akin) wrote in message news:<dd43b4da.04042...@posting.google.com>...
>
> [ ... ]
>
>> Optimizers are getting smarter though, and I suspect in not too many
>> years it will be a waste of time to attempt to out-do the optimizer.
>> In a couple of decades at the most I suspect it will be impossible for
>> any human to outperform an optimizer. (Massalin's superoptimizer led
>> to Denali, which I imagine will lead to more generic optimal code
>> approximators)
>
> Hmm...an interesting proposition. I've been hearing variations on
> this theme since (at least) the FORTRAN IV days.
>
> If anything, it looks to me like thie situation is getting worse: I
> don't think I've looked at anything in the last 5 years that was
> nearly AS CLOSE to optimal as Control Data's FORTRAN IV compiler
> typically produced 20+ years ago!

Programming languages have evolved a bit since FORTRAN IV. If all
we had was still FORTRAN IV, I'm convinced that almost no-one would
be able to write assembly code outperforming the output from a
modern FORTRAN IV compiler.

The programming language FORTRAN was designed with one goal being the
highest priority: to produce efficient machine code. When John
Backus implemented the FORTRAN I compiler back in 1957, programmers
were skeptical of high-level languages in general -- they were
usually convinced that the efficiency of the generated code would be
terrible or, at best, bad. Remember that at this time, computers were,
by today's standards, extremely slow. In addition, computer time
cost a lot of money, perhaps half a dollar for every CPU second or
so. Therefore efficient programs were a very high priority;
in comparison, programmer time was cheap.

FORTRAN I did succeed in producing code almost as efficient as
hand-written code by a skilled programmer, and as a result high-level
languages got accepted. Later, when computers got more powerful and
CPU time got much cheaper, the priorities changed: when the
programmer time got more expensive than the CPU time, it became
sensible to design high-level languages to ease program development
rather than to produce efficient machine code.

Therefore the situation is probably "worse" today: the code
produced by, say, a Java compiler is definitely much less
efficient than the code produced by a FORTRAN IV compiler.
But OTOH the complex programs written in today's HLL's could
never reasonably have been implemented in FORTRAN IV, a
language which even lacks recursion.

Finally, if your program includes some text processing (which
virtually every modern program does to some extent), FORTRAN IV would
be a hopeless language to use, since it lacks string variables. To
do text processing in FORTRAN IV, you must put text in numerical
variables, and keep track of how many characters fit in a machine
word. And to port such a program from a computer where, say, 4
characters fit in a word to another computer where, say, 10
characters fit in a word would be a nightmare if the program does
non-trivial text processing. In such a case it's highly likely that
the text processing parts of the FORTRAN IV program will produce code
with bad efficiency.



> Part of this may be that I've gotten better at writing assembly
> language, but frankly I rather doubt it -- I doubt that the extra
> experience has compensated for the simple fact that I don't write
> assembly code nearly as often anymore.
>
> --
> Later,
> Jerry.

Phil Carmody

Apr 28, 2004, 4:05:54 AM
Julie <ju...@nospam.com> writes:

> "Douglas A. Gwyn" wrote:
> >
> > Julie wrote:
> > > What is being talked about is if you take a C source file and create an
> > > executable, then take that same source file and compile it as C++, you will end
> > > up w/ (virtually) identical executables.
> >
> > (Since the languages do not have a subset/
> > superset relationship,
>
> ??? How have you determined that? Do you have any substantive references?

How about the standards? Doug is better acquainted with the standard than most.
How about the section in Stroustrup that covers things that used to be legal
in C but aren't legal in C++?

> not, name something that exists in C but not C++ (excluding new C features
> introduced since the last C++ standard).

There are hundreds of examples.

Dumb examples, which you should have been able to work out yourself, would
be major changes to language structure such as:

- Use of C++-only keywords as variable names is legal in C,
int class;

- Change in comment syntax: the ability in traditional C to have a /*comment*/
immediately after a division symbol.
i = x//**/y
;

> > in general compiling a
> > C source program with a C++ compiler is not safe.)

However, I'd disagree with Doug's "in general" - unless he means
"given arbitrary source by other people". I frequently migrate
my own old C code to C++ with no changes at all. I don't remember
the last time I encountered _any_ of the many issues.
In general, for me, it is perfectly safe to compile as C++ the code
that I have previously been compiling as C. I know others who
migrate code safely too. In fact in general, people I know migrate
code safely.

Phil
--
1st bug in MS win2k source code found after 20 minutes: scanline.cpp
2nd and 3rd bug found after 10 more minutes: gethost.c
Both non-exploitable. (The 2nd/3rd ones might be, depending on the CRTL)

Douglas A. Gwyn

unread,
Apr 28, 2004, 4:21:56 AM4/28/04
to
Julie wrote:
> ??? How have you determined that? Do you have any substantive references? If
> not, name something that exists in C but not C++ (excluding new C features
> introduced since the last C++ standard).

First off, you are trying to redefine the subsetting issue.
Something that exists in both languages but has different
meaning also demonstrates the lack of subsetting.
Secondly, sizeof 'a' > 1 "exists" in nearly all conforming
implementations of C but not in C++. Or maybe you would
prefer extern void f(); f(42); which "exists" in C but not
in C++. There are numerous other examples.

> Again, evidence of such a statement would help.

I already explained some ways in which that happens.

> According to what I gather from your arguments, the performance would always be
> ordered as, and appreciably different: ASM < C < C++ < Java.

I didn't say "always", but *on average* that is true for
the speed of comparable applications implemented in the
various languages by skilled programmers. There are
other aspects to programming besides execution speed,
however, and anybody who thinks that just one of those
languages should be used for every application is naive.

Douglas A. Gwyn

unread,
Apr 28, 2004, 4:27:06 AM4/28/04
to
Phil Carmody wrote:
> ... I frequently migrate
> my own old C code to C++ with no changes at all.

It is *possible* to write code that will compile
correctly and have the same semantics in both
languages. Presumably you have learned how to
code within this common subset of the two.
However, there is a *lot* of existing C code
that will not compile using a C++ compiler, and
more dangerously there is some existing C code
that will compile using C++ but will produce
different results when executed. People who
are completely unaware of the issues are of
course most likely to fall into that trap.

Douglas A. Gwyn

unread,
Apr 28, 2004, 4:35:02 AM4/28/04
to
Jerry Coffin wrote:
> If anything, it looks to me like the situation is getting worse: I
> don't think I've looked at anything in the last 5 years that was
> nearly as close to optimal as what Control Data's FORTRAN IV compiler
> typically produced 20+ years ago!

Actually optimizer technology really is substantially better
these days. Any such comparison needs to keep in mind that
Fortran has properties that allow tighter optimization than
is feasible for C on many platforms; for example, C pointer
parameters can alias whereas Fortran array parameters cannot
(according to the language spec), which permits vectorization
that is not safe for C code. (We have addressed that in C99
by introducing "restrict" qualification.)
Also, hardware has changed in many ways. RISC architectures
are usually harder to program optimally without machine
assistance in allocating registers, for example.

Mark Wooding

unread,
Apr 28, 2004, 7:11:57 AM4/28/04
to
Julie <ju...@nospam.com> wrote:
> "Douglas A. Gwyn" wrote:
>> (Since the languages do not have a subset/superset relationship,
>
> ??? How have you determined that?

I can't speak for Doug, but presumably by reading the language standards.

> Do you have any substantive references?

The language standards. You might want to read them before making
further ill-informed pronouncements.

> If not, name something that exists in C but not C++ (excluding new C
> features introduced since the last C++ standard).

A number of old C features which C++ dropped, such as K&R-style function
declarations. Also some more significant differences:

* C allows implicit cast from `void *' to other pointer types (e.g.,
result of `malloc'); C++ requires an explicit cast for some stupid
reason. This is probably the most significant difference, and it's
why most of my C programs aren't C++.

* C's character constants have type `int'; C++'s have type `char'.

* C's structure tags have a separate namespace; C++ puts them in the
main identifier namespace.

* C++ reserves a number of new keywords, most heinously `import', but
also `template', `class', etc.; these are normal identifiers in C.
Also, `wchar_t' is a built-in type in C++, but not in C.

-- [mdw]

Andrew Swallow

unread,
Apr 28, 2004, 8:18:12 AM4/28/04
to
"Jerry Coffin" <jco...@taeus.com> wrote in message
news:b2e4b04.04042...@posting.google.com...
[snip]

> If anything, it looks to me like the situation is getting worse: I
> don't think I've looked at anything in the last 5 years that was
> nearly as close to optimal as what Control Data's FORTRAN IV compiler
> typically produced 20+ years ago!
>
Computers can access fixed locations in memory
faster than relative locations. Recursion requires
local variables to be stored on the stack, i.e. to
be accessed using relative locations. Data structures
on the heap are even more complicated to access.

Dynamic storage is very bad, you can see the computer
stop for several thousand million clock cycles whilst the
garbage collector tries to find some more memory.

Andrew Swallow

Ernst Lippe

unread,
Apr 28, 2004, 10:04:30 AM4/28/04
to
On Wed, 28 Apr 2004 12:18:12 +0000, Andrew Swallow wrote:

> "Jerry Coffin" <jco...@taeus.com> wrote in message
> news:b2e4b04.04042...@posting.google.com...
> [snip]
>> If anything, it looks to me like the situation is getting worse: I
>> don't think I've looked at anything in the last 5 years that was
>> nearly as close to optimal as what Control Data's FORTRAN IV compiler
>> typically produced 20+ years ago!
>>
> Computers can access fixed locations in memory
> faster than relative locations.

I assume that with "fixed location" you mean addresses that are known at link
time, when the executable is produced.

I don't think that your statement is true in general; it all depends on the
circumstances. One of the disadvantages of using absolute memory addresses is
that the instruction must always be larger than the number of bits in
the address space. Many CPUs have built-in support for addressing
small offsets from a stack pointer, and in general the size of these
instructions is smaller than the size of the instructions that use absolute
addresses. When the instruction is longer, either it will take more time to
fetch it from memory, or it will decrease the effective number of instructions
that can be held in the instruction cache.

Even on CPU architectures where both types of instructions have the same size,
there may not be any difference in execution speed. For most modern CPU's
memory access is the main bottle-neck. Simple arithmetic operations to
registers, such as adding a small offset, are so fast relative to the time it
takes to fetch/store the contents of some address in memory, that there is no
difference. Also, virtually all modern computers have memory caches, and the
performance depends heavily on whether the contents are available in the
cache. But for the memory cache there is no difference between a fixed address
and an address that has been computed dynamically; the only thing that
matters is whether that address has been used recently.


> Recursion requires
> local variables to be stored on the stack, i.e. to
> be accessed using relative locations.

The overhead of allowing recursion (if any) is minimal on modern
CPU's. Anyhow, all modern programming languages (even Fortran) allow it, so
apparently language designers believe that the benefits are sufficiently
important.

>Data structures
> on the heap are even more complicated to access.

In most cases this statement is false: in most computers data structures on
the heap are simply identified by their address, which is not more complicated
than using absolute addresses or locations on the stack. I have the feeling
that you believe that heaps only occur in languages that have a relocating
garbage collector (see below), but it is standard usage to refer to the memory
area from which dynamically sized memory is allocated as "the heap". For
example, the malloc in C uses a heap.


> Dynamic storage is very bad, you can see the computer
> stop for several thousand million clock cycles whilst the
> garbage collector tries to find some more memory.

What do you mean by dynamic storage? The standard meaning of the term is
storage where the actual memory size of an object is not known at compile
time. This is pretty common in most computer languages, and in most cases this
is not at all a performance problem.

You are probably referring to relocating garbage collectors that can change
the actual memory addresses where objects are located. Traditionally, these
used to have the problems that you describe, but there have been important
improvements (generation scavenging, multi-threading) and most of these
problems have disappeared.

Anyhow, I don't think that your argument that fixed addresses are
faster is very relevant. Virtually all computer programs are written
with procedures that can accept varying arguments (that was already
the case for these old Fortran programs). So there are hardly any instances
where computers actually use fixed addresses, because most code
relies on variables that have been passed as arguments instead of
global variables.

Ernst Lippe

Julie

unread,
Apr 28, 2004, 10:59:00 AM4/28/04
to
Julie wrote:
>
> Hi,
>
> I am going to write a program cracking DES.
> One says that C++ (actually C) is faster than Java.
> Is this true?
> Thanks.
>
> J

Review:

http://page.mi.fu-berlin.de/~prechelt/Biblio/jccpprt_computer2000.pdf

and decide for yourself.

Julie

unread,
Apr 28, 2004, 11:22:08 AM4/28/04
to
Mark Wooding wrote:
>
> Julie <ju...@nospam.com> wrote:
> > "Douglas A. Gwyn" wrote:
> >> (Since the languages do not have a subset/superset relationship,
> >
> > ??? How have you determined that?
>
> I can't speak for Doug, but presumably by reading the language standards.
>
> > Do you have any substantive references?
>
> The language standards. You might want to read them before being
> further ill-informed pronouncements.

I'm asking the PP to post substantive references for HIS COMMENTS. It isn't my
responsibility to justify others' comments; it is theirs. My responsibility is
to justify my comments.

Do you have any references to the standards that explicitly state that C++ is
*NOT* essentially a superset of C?

>
> > If not, name something that exists in C but not C++ (excluding new C
> > features introduced since the last C++ standard).
>
> A number of old C features which C++ dropped, such as K&R-style function
> declarations. Also some more significant differences:
>
> * C allows implicit cast from `void *' to other pointer types (e.g.,
> result of `malloc'); C++ requires an explicit cast for some stupid
> reason. This is probably the most significant difference, and it's
> why most of my C programs aren't C++.
>
> * C's character constants have type `int'; C++'s have type `char'.
>
> * C's structure tags have a separate namespace; C++ puts them in the
> main identifier namespace.
>
> * C++ reserves a number of new keywords, most heinously `import', but
> also `template', `class', etc.; these are normal identifiers in C.
> Also, `wchar_t' is a built-in type in C++, but not in C.
>
> -- [mdw]

It seems that most of the respondents have already forgotten what we are
talking about. Doug said that there isn't a subset/superset relationship
between C and C++, and I questioned him on that.

Take this for example:

> * C++ reserves a number of new keywords, most heinously `import', but
> also `template', `class', etc.; these are normal identifiers in C.
> Also, `wchar_t' is a built-in type in C++, but not in C.

*EXACTLY* -- C++ contains C, and then extends it -- hence, a SUPERSET. Your
examples only justify that C++ is a superset of C.

Excepting the few syntactic differences here and there, mostly for type safety,
C++ is definitely a superset of C, and that is by design.

Read section B2 of the Appendix of TC++PL:

http://www.filibeto.org/sun/lib/development/c++_iiird/appB.pdf

Julie

unread,
Apr 28, 2004, 11:24:30 AM4/28/04
to
"Douglas A. Gwyn" wrote:
[snip]

> (Since the languages do not have a subset/
> superset relationship, in general compiling a

> C source program with a C++ compiler is not safe.)

Compare that with what BS says in TC++PL:

http://www.filibeto.org/sun/lib/development/c++_iiird/appB.pdf

Tom St Denis

unread,
Apr 28, 2004, 11:42:00 AM4/28/04
to

Um I just read that PDF. The PDF seems to agree that there are valid
[albeit of questionable usage] C statements that are not valid in C++
[and vice versa].

I don't see how that contradicts what Douglas said.

Tom

Andrew Swallow

unread,
Apr 28, 2004, 11:43:48 AM4/28/04
to
"Ernst Lippe" <ernstl-at-p...@ignore.this> wrote in message
news:pan.2004.04.28....@ignore.this...

Do not forget the time required to load the pointers.

> Even on CPU architectures where both types of instructions have the same
> size, there may not be any difference in execution speed. For most modern
> CPU's memory access is the main bottle-neck. Simple arithmetic operations to
> registers, such as adding a small offset, are so fast relative to the time
> it takes to fetch/store the contents of some address in memory, that there
> is no difference. Also, virtually all modern computers have memory caches,
> and the performance depends heavily on whether the contents are available
> in the cache. But for the memory cache there is no difference between a
> fixed address and an address that has been computed dynamically; the only
> thing that matters is whether that address has been used recently.
>
>
> > Recursion requires
> > local variables to be stored on the stack, i.e. to
> > be accessed using relative locations.
>
> The overhead of allowing recursion (if any) is minimal on modern
> CPU's. Anyhow, all modern programming languages (even Fortran) allow it, so
> apparently language designers believe that the benefits are sufficiently
> important.
>

Fortran IV did not support recursion. It was even designed
so that its compilers did not need to use recursion. That is one
of the reasons that only simple calculations were permitted in
array subscripts.

> >Data structures
> > on the heap are even more complicated to access.
>
> In most cases this statement is false: in most computers data structures
> on the heap are simply identified by their address, which is not more
> complicated than using absolute addresses or locations on the stack. I have
> the feeling that you believe that heaps only occur in languages that have
> a relocating garbage collector (see below), but it is standard usage to
> refer to the memory area from which dynamically sized memory is allocated
> as "the heap". For example, the malloc in C uses a heap.
>

As a user of programs written in C, I have noticed. It does
not matter whether the runtime garbage collector is in the program
or the operating system; both its time delays and probability
of failure are still there. It is a construct that is definitely not
suitable for use in realtime or safety-critical programs.

>
> > Dynamic storage is very bad, you can see the computer
> > stop for several thousand million clock cycles whilst the
> > garbage collector tries to find some more memory.
>
> What do you mean by dynamic storage? The standard meaning of the term is
> storage where the actual memory size of an object is not known at compile
> time. This is pretty common in most computer languages, and in most cases
> this is not at all a performance problem.
>
> You are probably referring to relocating garbage collectors that can
> change the actual memory addresses where objects are located. Traditionally,
> these used to have the problems that you describe, but there have been important
> improvements (generation scavenging, multi-threading) and most of these
> problems have disappeared.
>

Only yesterday I saw my PC crash because of a stack
overflow.

> Anyhow, I don't think that your argument that fixed addresses are
> faster is very relevant. Virtually all computer programs are written
> with procedures that can accept varying arguments (that was already
> the case for these old Fortran programs). So there are hardly any
> instances where computers actually use fixed addresses, because most code
> relies on variables that have been passed as arguments instead of
> global variables.
>

One of the speed optimisations Fortran programmers made
was to pass global variables around in common blocks
rather than as parameters.

> Ernst Lippe

Andrew Swallow
>

Julie

unread,
Apr 28, 2004, 12:43:59 PM4/28/04
to

Step by step, then:

Douglas A. Gwyn wrote:

(Since the languages [C and C++] do not have a
subset/superset relationship, in general compiling a


C source program with a C++ compiler is not safe.)

TC++PL, Bjarne Stroustrup, Appendix
B.2 C/C++ Compatibility
With minor exceptions, C++ is a superset of C. Most
differences stem from C++'s greater emphasis on type
checking. Well-written C programs tend to be C++
programs as well. All differences between C++ and C
can be diagnosed by a compiler.

Even simpler yet:

Doug: "do not have a subset/superset relationship"

Bjarne: "C++ is a superset of C"

Doug: "in general compiling a C source program with a

C++ compiler is not safe"

Bjarne: "Well-written C programs tend to be C++
programs as well"

Simplest:

They contradict.

Tom St Denis

unread,
Apr 28, 2004, 12:51:04 PM4/28/04
to
Julie wrote:
> Even simpler yet:
>
> Doug: "do not have a subset/superset relationship"

They don't.

> Bjarne: "C++ is a superset of C"

"With minor exceptions, pi is equal to 3."

> Doug: "in general compiling a C source program with a
> C++ compiler is not safe"
>
> Bjarne: "Well-written C programs tend to be C++
> programs as well"

From a cryptographer's point of view.

> Simplest:
>
> They contradict.

No. You're misreporting Bjarne.

Tom

Claudio Puviani

unread,
Apr 28, 2004, 2:37:51 PM4/28/04
to
"Julie" <ju...@nospam.com> wrote

> Do you have any references to the standards that
> explicitly state that C++ is *NOT* essentially a
> superset of C?

All of Appendix B of ISO/IEC 14882:2003 (the standard) is dedicated to
differences between C and C++. It cites many differences. Here's just one among
many:

/* valid in C at file scope, but NOT in C++ */
int i;
int i; // the second (tentative) definition is OK in C, but violates C++'s ODR

The appendix cites many more, but it only takes one to cause C++ to not be a
superset of C.

> It seems that most of the respondents have soon
> forgotten what we are talking about. Doug said
> that there isn't a sub/super set relationship
> between C and C++, and I questioned him on that.

It's not enough for C++ to be compatible with MOST C code to name it a superset.
It has to be compatible with ALL valid C code. "All women" is not a superset of
"three women and one man".

> *EXACTLY* -- C++ contains C, and then extends it --
> hence, a SUPERSET.

WRONG! Read Appendix B and Chapter 1 of the standard. It's clearly stated that
C++ is BASED ON C, not that it contains C as a subset.

> Excepting the few syntactic differences here and
> there, mostly for type safety, C++ is definitely
> a superset of C, and that is by design.

You're either being argumentative for the sake of arguing or you don't understand
the meaning of "superset". There is no "mostly" in set theory. Either it's a
superset, or it's not. There is a very large intersection between C and C++, but
there is NO containment relationship. NONE. NADA. You don't get to have a
"personal opinion" or your own "perspective" on mathematical axioms.

Claudio Puviani


Jerry Coffin

unread,
Apr 28, 2004, 2:43:16 PM4/28/04
to
Julie <ju...@nospam.com> wrote in message news:<408FC6B4...@nospam.com>...

[ ... ]

> Review:
>
> http://page.mi.fu-berlin.de/~prechelt/Biblio/jccpprt_computer2000.pdf

IMO, basing any decision about DES on this paper is a mistake.

First of all, the conclusions that it draws are mostly based on an 80%
confidence level. In most cases, a 95% confidence level is the
minimum that's considered meaningful, and in more demanding
situations, a 99% confidence level may be required. To look at things
from the opposite direction, there's roughly a 20% chance that any
conclusion you see in this paper is based on pure chance.

Second, some of the data was collected in ways that render it highly
suspect to start with (a fact acknowledged, but IMO insufficiently
emphasized, in the paper).

Finally, and probably most importantly, this examines solutions to
only one problem -- and a problem with little relationship to DES at
that. As such, even if the data had been collected more carefully,
and enough more data had been collected to support conclusions, those
conclusions would still mean little or nothing about the task at hand
anyway.

Now, don't get me wrong: I'm not, at any point, saying that its
conclusions are necessarily wrong -- only that we have insufficient
evidence to believe they're right, and even if they were it would be
inapplicable to the question at hand.

Julie

unread,
Apr 28, 2004, 3:18:16 PM4/28/04
to
Claudio Puviani wrote:
>
> "Julie" <ju...@nospam.com> wrote
> > Do you have any references to the standards that
> > explicitly state that C++ is *NOT* essentially a
> > superset of C?
>
> All of Appendix B of ISO/IEC 14882:2003 (the standard) is dedicated to
> differences between C and C++. It cites many differences. Here's just one among
> many:
>
> /* valid in C, but NOT in C++ */
> int i;
> int i; // second definition violates C++'s ODR, but is OK in C
>
> The appendix cites many more, but it only takes one to cause C++ to not be a
> superset of C.

True in absolute terms. I was talking in figurative (looser) terms.

> > It seems that most of the respondents have soon
> > forgotten what we are talking about. Doug said
> > that there isn't a sub/super set relationship
> > between C and C++, and I questioned him on that.
>
> It's not enough for C++ to be compatible with MOST C code to name it a superset.
> It has to be compatible with ALL valid C code. "All women" is not a superset of
> "three women and one man".
>
> > *EXACTLY* -- C++ contains C, and then extends it --
> > hence, a SUPERSET.
>
> WRONG! Read Appendix B and Chapter 1 of the standard. It's clearly stated that
> C++ is BASED ON C, not that it contains C as a subset.

Fine, I shall read it (later).

> > Excepting the few syntactic differences here and
> > there, mostly for type safety, C++ is definitely
> > a superset of C, and that is by design.
>
> You're either being argumentative for the sake of arguing or you don't understand
> the meaning of "superset". There is no "mostly" in set theory. Either it's a
> superset, or it's not. There is a very large intersection between C and C++, but
> there is NO containment relationship. NONE. NADA. You don't get to have a
> "personal opinion" or your own "perspective" on mathematical axioms.

I understand set theory just fine. I wasn't talking in explicit mathematical
terms, but in approximate, figurative terms. Further, I was essentially
regurgitating Bjarne in TC++PL, Appendix B.2:

"With minor exceptions, C++ is a superset of C."

Honestly, I'm not interested in arguing about the minutiae of the various
comments relating to what is and isn't a superset/subset. Conceptually,
C++ is a superset of C; explicitly (in pure mathematical set-theory terms), it
is not. If you or anyone disagrees, feel free to take it up w/ Bjarne.

Julie

unread,
Apr 28, 2004, 3:21:09 PM4/28/04
to

How can I be 'misreporting' Bjarne when I provided the *explicit* context in
which he made the statements?

If you still disagree on the topic, feel free to discuss it w/
Bjarne, as I have no interest in battling over the explicit minutiae of pure
set theory. FWIW, my comments are based on the *conceptual* idea of
superset/subset.

Julie

unread,
Apr 28, 2004, 3:25:32 PM4/28/04
to
Jerry Coffin wrote:
>
> Julie <ju...@nospam.com> wrote in message news:<408FC6B4...@nospam.com>...
>
> [ ... ]
>
> > Review:
> >
> > http://page.mi.fu-berlin.de/~prechelt/Biblio/jccpprt_computer2000.pdf
>
> IMO, basing any decision about DES on this paper is a mistake.

Just posting information that is based on an empirical study. Looking over the
other responses in this thread, I see a lot of anecdote, conjecture, and
off-topic posts (I'm not excluding myself from that either).

Personally, I'd view this paper as more evidential (in general terms) about the
various performance characteristics between languages.

Naturally, the OP should use whatever language is appropriate depending on the
explicit formula for the attack.

Douglas A. Gwyn

unread,
Apr 29, 2004, 1:35:22 AM4/29/04
to
Julie wrote:
> ... FWIW, my comments are based on the *conceptual* idea of
> superset/subset.

As opposed to an *accurate* idea of superset/subset?

Dez Akin

unread,
Apr 29, 2004, 1:39:17 AM4/29/04
to
Julie <ju...@nospam.com> wrote in message news:<408F2D7A...@nospam.com>...

> Dez Akin wrote:
> [snip]
> > Optimizers are getting smarter though, and I suspect in not too many
> > years it will be a waste of time to attempt to out-do the optimizer.
> > In a couple of decades at the most I suspect it will be impossible for
> > any human to outperform an optimizer. (Massalin's superoptimizer led
> > to Denali, which I imagine will lead to more generic optimal code
> > approximators)
>
> I don't buy that.
>
> If processing unit architecture stayed the same during that time, then you
> would have an argument.
>
> However, each time a processor is revved, new features, behaviors, opcodes,
> etc. are added. At that point, the compiler writer must then decide on what
> feature to incorporate, if at all, let alone optimizations.

Optimizing compiler technology has been relentlessly advancing for
years. It's much more difficult now to outdo the optimizer at
'ordinary' control flow code than it was a decade or two ago. When you have
assembly implementations that are faster, they often take advantage of
processor-supported operations that are used only by rather
specialized algorithms (crypto and bignum math being rather notable
here).

Also, with superoptimizer techniques, you can define the entirety of
the CPU machine model in a logical modeler that selects the actual
optimal instruction sequence for some metric (size, speed, cache
coherency), so long as the length of the code for the function being
generated is 'small', given that this is reducible to satisfiability
solving, which is NP-complete. (See the paper on the Denali
superoptimizer.)

And for larger pieces of code, it's possible for the compiler to do
data dependency analysis to find easily parallelizable pieces of code.

We're getting better at abstracting processors in general, and
translating that to optimizing compilers. I expect we'll eventually
(20-30 years) have back-end optimizer generators from abstract
processor architecture descriptions that automatically generate
optimizers.

> It will always be possible to out-do the optimizer, however the value of such
> has been steadily decreasing as processor speeds have increased.

People said the same thing about chess. Don't expect people to have
the edge in the future just because we've done all right until now.

Julie

unread,
Apr 29, 2004, 10:56:48 AM4/29/04
to
Dez Akin wrote:
[snip]

> > It will always be possible to out-do the optimizer, however the value of such
> > has been steadily decreasing as processor speeds have increased.
>
> People said the same thing about chess. Don't expect people to have
> the edge in the future just because we've done all right until now.

Yes, but the rules of chess have stayed the same.

Chip rules haven't, and won't.

I think that increases in speed will mitigate the need for most optimizations,
so my feeling is that hand optimization will die mainly due to lack of need,
rather than lack of ability.

Phil Carmody

unread,
Apr 29, 2004, 12:55:34 PM4/29/04
to
Julie <ju...@nospam.com> writes:
...

> TC++PL, Bjarne Stroustrup, Appendix
> B.2 C/C++ Compatibility
> With minor exceptions, C++ is a superset of C.
...
[summarising:]

> Bjarne: "C++ is a superset of C"


Bzzzt!

Bjarne ~ C++ is not a superset of C due to the exceptions.

Would you agree with
"Except for the number 2, all primes are odd"
?

Would you agree with
"All primes are odd"
?


Do you see your mistake now?

Claudio Puviani

unread,
Apr 29, 2004, 5:01:43 PM4/29/04
to
"Julie" <ju...@nospam.com> wrote

That naively fallacious argument has existed since the early days of computers,
and it's usually uttered by academics who have little contact with the real
world. No matter how fast a computer is, there will always be problems that take
extremely long to process and the volume of operations that need to be done per
unit time is growing faster than the hardware can keep up. A compiler that
performs worse optimizations (let alone none at all) will not be competitive with
one that performs better ones. Certainly, a company that uses slower code will be
at a disadvantage with respect to one that strives for efficiency. You can't
think in absolutes; it's relative performance that matters.

Claudio Puviani


Julie

unread,
Apr 29, 2004, 6:41:21 PM4/29/04
to

Claudio, please follow the context of the thread before making such responses.

The topic of discussion relates to HAND-OPTIMIZED ASSEMBLY, not optimizations
in general.

My feeling is that the need for hand-optimized assembly will diminish over time
as baseline performance increases over that same period of time. That is what
I was communicating, nothing more, nothing less.

Claudio Puviani

unread,
Apr 29, 2004, 8:18:06 PM4/29/04
to
"Julie":

> > > I think that increases in speed will mitigate the need for most
> > > optimizations, so my feeling is that hand optimization will die mainly
> > > due to lack of need, rather than lack of ability.
> >

"Claudio":


> > That naively fallacious argument has existed since the early days of
> > computers, and it's usually uttered by academics who have little contact
> > with the real world. No matter how fast a computer is, there will always
> > be problems that take extremely long to process and the volume of
> > operations that need to be done per unit time is growing faster than the
> > hardware can keep up. A compiler that performs worse optimizations (let
> > alone none at all) will not be competitive with one that performs better
> > ones. Certainly, a company that uses slower code will be at a
> > disadvantage with respect to one that strives for efficiency. You can't
> > think in absolutes; it's relative performance that matters.

"Julie":


> Claudio, please follow the context of the thread before making such responses.
>
> The topic of discussion relates to HAND-OPTIMIZED ASSEMBLY, not optimizations
> in general.
>
> My feeling is that the need for hand-optimized assembly will diminish over time
> as baseline performance increases over that same period of time. That is what
> I was communicating, nothing more, nothing less.

I apologize if I misinterpreted your intent, but even in context, the statement,


"I think that increases in speed will mitigate the need for most optimizations",

seemed to apply to compiler optimizations as well.

Claudio Puviani


Julie

unread,
Apr 29, 2004, 11:35:07 PM4/29/04
to

Thank you for your apology.

I entirely agree that optimization (in general terms) will always be necessary,
regardless of processing performance improvements.

Jerry Coffin

unread,
Apr 30, 2004, 12:00:07 AM4/30/04
to
"Douglas A. Gwyn" <DAG...@null.net> wrote in message news:<loSdnSEF160...@comcast.com>...

[ ... ]

> Actually optimizer technology really is substantially better
> these days.

Oddly enough, I quite agree.

> Any such comparison needs to keep in mind that
> Fortran has properties that allow tighter optimization than
> is feasible for C on many platforms; for example, C pointer
> parameters can alias whereas Fortran array parameters cannot
> (acording to the language spec), which permits vectorization
> that is not safe for C code. (We have addressed that in C99
> by introducing "restrict" qualification.)

My opinions aren't based purely on looking at code produced by C
and/or C++ compilers, but also by various other languages (including
Fortran). It's true that Fortran has changed since then as well, but
I'm not particularly convinced that the sub-optimal code is due to new
language features or anything like that.

Now, how do I, on one hand, say that the situation is worse today than
it was 20+ years ago, but also agree that optimizer technology has
gotten better during that time as well?

It's pretty simple really: optimizers have gotten better, but the
point of "optimum" has grown at a faster rate. I.e. while the code
that's produced has gotten better, the code that COULD be produced has
gotten better by an even larger margin.

Most reasonably current computers have SIMD instructions now, but very
few compilers even attempt to use them -- and those that do almost
universally produce code that's really _quite_ bad.

The result is that 20 years ago I had to plan on working fairly hard
to beat the compiler by more than 20%. Nowadays, I can frequently
just do an _obvious_ implementation, and still beat the compiler by
2:1.

> Also, hardware has changed in many ways. RISC architectures
> are usually harder to program optimally without machine
> assistance in allocating registers, for example.

From my experience, I'd say rather the opposite: RISCy architectures
tend to have fairly large, mostly orthogonal register sets, which
makes register allocation quite easy.

The x86 (for example) makes life _substantially_ more difficult -- for
optimal usage, you often have to put initial values in non-obvious
places to produce final results where they can be used without further
movement. The floating point unit is substantially worse -- those of
us who've used HP calculators for years do all right with it, but
others really struggle. Then again, quite a few compilers seem to
have similar problems; Borland's, for example, typically writes
intermediate values to memory far more than needed, and rarely uses
more than the top three (or so) stack locations.

Jerry Coffin

unread,
Apr 30, 2004, 12:13:46 AM4/30/04
to
"Andrew Swallow" <am.sw...@eatspam.btinternet.com> wrote in message news:<c6o7e4$pmn$1...@sparta.btinternet.com>...

[ ... ]

> Computers can access fixed locations in memory
> faster than relative locations.

That's not necessarily true. In theory, it takes more work to
generate the absolute address from a relative address, but in fact
nearly all reasonably current computers have hardware dedicated to the
task.

With most modern computers, a cache miss causes a MUCH larger delay
than figuring an effective address.

> Recursion requires
> local variables to be stored on the stack, i.e. to
> be accessed using relative locations. Data structures
> on the heap are even more complicated to access.

Not so -- allocating and freeing data can be expensive, but at least
in C and C++, when you allocate memory on the heap/free store you get
the (absolute) address of the memory allocated. Once you have that,
the access is relatively direct. By contrast, access to the stack is
essentially always done relative to some register.

As noted above, however, this rarely means much. Accessing data on
the stack IS usually quite fast, but that's primarily because the page
at the top of the stack will nearly always be in the cache.

> Dynamic storage is very bad, you can see the computer
> stop for several thousand million clock cycles whilst the
> garbage collector tries to find some more memory.

Only, for starters, when you actually use a garbage collector. Even
then, techniques to reduce waits during a collection cycle have been
well known for quite some time. I realize that many JVMs (for one
example) use GC that dates back to roughly the Lisp 1.5 era, but that
doesn't mean nothing better is known. Generational scavenging,
incremental GC, etc., have been around for some time and can't be
summarily dismissed even for hard real-time systems.

Bryan Olson

unread,
Apr 30, 2004, 6:52:28 AM4/30/04
to
Jerry Coffin wrote:
> My opinions aren't based purely on looking at code produced by C
> and/or C++ compilers, but also by various other languages (including
> Fortran). It's true that Fortran has changed since then as well, but
> I'm not particularly convinced that the sub-optimal code is due to new
> language features or anything like that.
>
> Now, how do I, on one hand, say that the situation is worse today than
> it was 20+ years ago, but also agree that optimizer technology has
> gotten better during that time as well?

Hey -- there are more important things to optimize than clock-
cycle counts. I too am old enough to have learned FORTRAN IV,
and let's not kid the kids: FORTRAN IV sucked! Wasting a few
machine cycles is one thing, but don't waste large amounts of my
time and claim to be doing me a favor.

How bad did FORTRAN IV suck? Well, my favorite example is that
a for-loop (actually a "DO" loop in FORTRAN IV) could not
execute its body zero times. Even if the condition was false at
the start, it would go through at least once. Is that as
awkward as it sounds? Absolutely! Why did they do it? Well,
the way most machine instruction sets work, one can save a cycle
or two by testing the condition at the end of the loop, and
using a conditional jump backwards.

We have much more *efficient* languages today. C and C++ are
far from top of the list. Java doesn't crash, but it's still
notably primitive. My hope for the future is with elegant,
polymorphic-type-safe languages, such as the ML family, or
perhaps even more purely functional languages such as Haskell.
They have yet to make a commercial splash comparable to C/C++ or
Java, but when tested in coding-competition, they romp.

In my own grad-school work with ML, I fought long and hard with
the type-checker. It could take an hour or more just to get several
dozen lines of my code to compile. Still, I like ML; when
programs did compile, they worked. Logic errors are, of course,
still possible, and I cannot be certain my programs were
entirely correct. Nevertheless, I'm impressed: in about a year
of using ML, *every* known error in my code was caught at
compile-time.


--
--Bryan

Andrew Swallow

unread,
Apr 30, 2004, 10:09:41 AM4/30/04
to
"Jerry Coffin" <jco...@taeus.com> wrote in message
news:b2e4b04.04042...@posting.google.com...
[snip]
> Only, for starters, when you actually use a garbage collector. Even
> then, techniques to reduce waits during a collection cycle have been
> well known for quite some time. I realize that many JVMs (for one
> example) use GC that dates back to roughly the Lisp 1.5 era, but that
> doesn't mean nothing better is known. Generataional scavenging,
> incremental GC, etc., have been around for some time and can't be
> summarily dismissed even for hard real-time systems.

Stick to constructions that always produce the same
results given the same inputs. Designs that fail x%
of the time fail x% of the time.

Andrew Swallow

Jerry Coffin

unread,
Apr 30, 2004, 12:19:43 PM4/30/04
to
Bryan Olson <fakea...@nowhere.org> wrote in message news:<Mdqkc.3497$7u6....@newssvr27.news.prodigy.com>...

[ ... ]

> Hey -- there are more important things to optimize than clock-
> cycle counts. I too am old enough to have learned FORTRAN IV,
> and let's not kid the kids: FORTRAN IV sucked! Wasting a few
> machine cycles is one thing, but don't waste large amounts of my
> time and claim to be doing me a favor.

This was posted as a follow-up to my article, but seems to have been
based on entirely ignoring everything I wrote.

I have not at any point advocated Fortran as a cure for anything (at
least not in the last 20 years). While others in the thread have
claimed that if Fortran IV was still in use that optimization would be
improved, I have NOT done so, and in fact have specifically stated
that I believe this to be mostly wrong.

I've also stated that even though hand optimization makes a greater
difference percentage-wise than it did back then, I no longer do
so nearly as often as I did then. I would have thought that made it
fairly obvious that I consider it less important than I did then.

> How bad did FORTRAN IV suck? Well, my favorite example is that
> a for-loop (actually a "DO" loop in FORTRAN IV) could not
> execute its body zero times. Even if the condition was false at
> the start, it would go through at least once. Is that as
> awkward as it sounds? Absolutely! Why did they do it? Well,
> the way most machine instruction sets work, one can save a cycle
> or two by testing the condition at the end of the loop, and
> using a conditional jump backwards.

IMO, in the pantheon of Fortran's shortcomings, this is one of the
lesser evils. Nonetheless, since I haven't advocated Fortran, arguing
against it is irrelevant.



> We have much more *efficient* languages today. C and C++ are
> far from top of the list. Java doesn't crash, but it's still
> notably primitive. My hope for the future is with elegant,
> polymorphic-type-safe languages, such as the ML family, or
> perhaps even more purely functional languages such as Haskell.
> They have yet to make a commercial splash comparable to C/C++ or
> Java, but when tested in coding-competition, they romp.

I'd class functional programming right along with the verification it
enables: it's the future of programming; always has been and always
will be.

Seriously, while I _like_ a number of functional languages quite a
lot, none of them I've used yet really works particularly well for
most of the real work I do. Then again, I largely do system
programming, rather than typical applications.

Perhaps we're finally reaching the point at which the programming
world will start to recognize that there are differences between
system programming and application programming, and using system
programming languages like C and C++ for application programming isn't
particularly productive. To a limited extent that's already happened,
but it's still done on a regular basis anyway.

Truthfully, I'm as much a culprit in this respect as anybody -- for
anything but a truly colossal task, my familiarity with C++ tends to
outweigh the advantages of other languages, so I typically use it even
in situations where another language would provide some advantages.

Then again, C++ does have one advantage in this respect: it supports
enough different levels of abstraction that it can be used right down
to the metal, or in a fairly safe, high-level manner, and perhaps most
importantly, more or less seamlessly marrying the two, so even in my
system programming, much of what I write works at a fairly high level
of abstraction.



> In my own grad-school work with ML, I fought long and hard with
> the type-checker. It could take an hour or more just to get several
> dozen lines of my code to compile. Still, I like ML; when
> programs did compile, they worked. Logic errors are, of course,
> still possible, and I cannot be certain my programs were
> entirely correct. Nevertheless, I'm impressed: in about a year
> of using ML, *every* known error in my code was caught at
> compile-time.

I'm afraid my experience with functional programming hasn't been quite
so positive. In the end, I think even if functional programming did
provide huge benefits, it _probably_ wouldn't take over any time soon
anyway.

I suspect that the majority of code in the world today is still
written in COBOL and Fortran, and if we can't even get the world
beyond them, the possibility of functional programming becoming
mainstream anytime soon seems remote indeed.

Paul Schlyter

unread,
May 1, 2004, 3:14:07 AM5/1/04
to
In article <b2e4b04.04043...@posting.google.com>,

Jerry Coffin <jco...@taeus.com> wrote:
> Bryan Olson <fakea...@nowhere.org> wrote in message news:<Mdqkc.3497$7u6....@newssvr27.news.prodigy.com>...
[ ... ]
> I'd class functional programming right along with the verification it
> enables: it's the future of programming; always has been and always
> will be.

You're really saying here that you believe functional programming
will always be mostly just a vision and never become mainstream.
Because if those languages became mainstream, they would switch from
"the future of programming" to "the present of programming", and you
believe that will never happen.


> Then again, C++ does have one advantage in this respect: it supports
> enough different levels of abstraction that it can be used right down
> to the metal, or in a fairly safe, high-level manner, and perhaps most
> importantly, more or less seamlessly marrying the two, so even in my
> system programming, much of what I write works at a fairly high level
> of abstraction.

The disadvantage of using C++ at a high abstraction level is that
there are so many ways to do that -- it depends on which class or
template library you choose to use (or which one others have chosen
for you; sometimes you don't have a choice). If you instead use a
genuinely high-level language, there's only one way to learn how to
use that.


> I suspect that the majority of code in the world today is still
> written in COBOL and Fortran, and if we can't even get the world
> beyond them, the possibility of functional programming becoming
> mainstream anytime soon seems remote indeed.

The reason COBOL is still used is that there is so much COBOL code
out there which still runs, and it would just take too much time to
rewrite all that code in some other language. Thus the classic "keep
backwards compatibility or throw away all that's old and redo
everything from scratch" problem.

The reason Fortran (no longer FORTRAN IV, but a mixture of
Fortran-77 and Fortran-90/95) is still used is that it's still the
most machine-efficient language for heavy number crunching problems.
While this won't matter much to software which has modest demands
for CPU cycles, it matters a lot if you're doing stuff like wind
tunnel simulations or numerical weather predictions. BTW Fortran as
a language has evolved too, and Fortran-90/95 is a language very
different from FORTRAN IV - but it still focuses on generating
efficient code, and the language is designed such that aliasing
issues (which are so common in C and C++ and which make optimization
of generated machine code much harder) do not exist in that
language.

There will always be several different languages, each one best for
one particular type of problem and none best for any kind of problem.

--
----------------------------------------------------------------
Paul Schlyter, Grev Turegatan 40, SE-114 38 Stockholm, SWEDEN
e-mail: pausch at stockholm dot bostream dot se
WWW: http://www.stjarnhimlen.se/
http://home.tiscali.se/pausch/

Mok-Kong Shen

unread,
May 1, 2004, 8:56:41 AM5/1/04
to

Paul Schlyter wrote:
[snip]

> BTW Fortran as
> a languare has evolved too, and Fortran-90/95 is a language very
> different from FORTRAN IV - but it still focuses on generating
> efficient code, and the language is designed such that aliasing
> issues (which are so common in C and C++ and which makes optimization
> of generated machine code much harder) does not exist in that
> language.

Sorry for a question of ignorance. What are the 'aliasing issues'
that exist in C and C++ but don't exist in Fortran? Thanks.

M. K. Shen
----------------------------------
http://home.t-online.de/home/mok-kong.shen

Douglas A. Gwyn

unread,
May 1, 2004, 11:33:59 AM5/1/04
to
Mok-Kong Shen wrote:
> Sorry for a question of ignorance. What are the 'aliasing issues'
> that exist in C and C++ but don't exist in Fortran? Thanks.

void f(int *a, int *b) {
    *a = 42;
    *b = 0;
    if (*a == 42)   /* cannot assume this is true: a and b may alias */
        *b = 1;
}

Jerry Coffin

unread,
May 1, 2004, 2:42:44 PM5/1/04
to
pau...@saaf.se (Paul Schlyter) wrote in message news:<c6vibk$2v46$1...@merope.saaf.se>...

[ ... ]

> You're really saying here that you believe functional programming
> always will be mostly just a vision and never will become mainstream
> programming languages.

Yup -- but my phrasing was more fun. :-)

> The disadvantage of using C++ at a high abstraction level is that
> there are so many ways to do that -- it depends on which class or
> template library you choose to use (or which one others have chosen
> for you; sometimes you don't have a choice). If you instead use a
> genuinely high-level language, there's only one way to learn how to
> use that.

Either of these can be viewed as either an advantage or a
disadvantage.

[ ... ]



> The reason COBOL is still used

[ ... ]

Anything that starts this way, assuming that there's only one reason
COBOL is still used, is guaranteed to be wrong. There are quite a few
reasons that COBOL is used, and likewise with Fortran.

[ ... ]

> There will always be several different languages, each one best for
> one particular type of problem and none best for any kind of problem.

I doubt that sentence is saying what you really intended -- I suspect
you intended to say something more like "each one best for one
particular kind of problem and none best for all kinds of problems."

In any case, we're wandering far afield from the original question.
At least personally I'm not particularly interested in a debate over
the relative qualities of languages in general, or anything like that
-- I recognize the names of most of the recent participants in this
sub-thread well enough that I think I can honestly say we all know
that's a debate that generates far more heat than light.

Shailesh

unread,
May 1, 2004, 3:38:36 PM5/1/04
to
Tom St Denis wrote:
> Julie wrote:
>
>> Hi,
>> I am going to write a program cracking DES.
>> One says that C++ (actually C) is faster than Java.
>> Is this true?
>> Thanks.
>
>
> It can be. If you don't know why you really ought to take a step back
> and learn some computer science...
>
> Tom

No kidding. Trying to crack DES with a high bit-length is a fool's
endeavor (unless you are a researcher). If you do decide to try it,
the programming language choice is the least of your worries. Your
strategy should be to write your cracking algorithm on paper in
pseudo-code first, and then analyze its time requirements. Then you
might find out that you need thousands of computers:

http://www.distributed.net/des/

If you're just some script kiddie, then use Javascript and run your
program in a web browser for maximum speed.

perry

unread,
May 1, 2004, 4:38:54 PM5/1/04
to
actually, it's a bit of an illusion to argue that c & c++ produce faster
implementations than java. before the days of JIT (just-in-time)
compilation java was primarily an interpreted language so there was no
argument there. however with the introduction of JIT things changed.
true, technically speaking c/c++ is faster but only by a margin of less
than 1%.

check out:
http://java.sun.com/products/hotspot/docs/whitepaper/Java_Hotspot_v1.4.1/Java_HSpot_WP_v1.4.1_1002_4.html

http://java.sun.com/docs/books/tutorial/post1.0/preview/performance.html
http://java.sun.com/developer/onlineTraining/Programming/JDCBook/perf2.html
http://java.sun.com/developer/onlineTraining/Programming/JDCBook/perf2.html#jit
http://java.sun.com/developer/onlineTraining/Programming/JDCBook/perfTech.html

"The server VM contains an advanced adaptive compiler that supports many
of the same types of optimizations performed by optimizing C++
compilers, as well as some optimizations that can't be done by
traditional compilers, such as aggressive inlining across virtual method
invocations. This is a competitive and performance advantage over static
compilers. Adaptive optimization technology is very flexible in its
approach, and typically outperforms even advanced static analysis and
compilation techniques."

http://java.sun.com/products/hotspot/docs/whitepaper/Java_HotSpot_WP_Final_4_30_01.html

i know, you're going to stick to your guns over the 1%. however, the
difference in performance at that level is typically insignificant.

- perry

Julie wrote:
> Hi,
>
> I am going to write a program cracking DES.
> One says that C++ (actually C) is faster than Java.
> Is this true?
> Thanks.
>

> J

Douglas A. Gwyn

unread,
May 1, 2004, 3:52:45 PM5/1/04
to
perry wrote:
> true, technically speaking c/c++ is faster but only by a marging of less
> than 1%.

That's simply not true. Have you *measured* the performance
of comparable implementations of the same algorithm in C vs.
Java? There are several reasons for the slowdown, one
being mandatory array bounds checking at run time, another
being treating everything as an object, another being
dynamically allocating almost everything and relying on
garbage collection.

Paul Schmidt

unread,
May 1, 2004, 9:51:21 PM5/1/04
to
Bryan Olson wrote:
> Jerry Coffin wrote:
> > My opinions aren't based purely on looking at code produced by C
> > and/or C++ compilers, but also by various other languages (including
> > Fortran). It's true that Fortran has changed since then as well, but
> > I'm not particularly convinced that the sub-optimal code is due to new
> > language features or anything like that.
> >
> > Now, how do I, on one hand, say that the situation is worse today than
> > it was 20+ years ago, but also agree that optimizer technology has
> > gotten better during that time as well?
>
> Hey -- there are more important things to optimize than clock-
> cycle counts. I too am old enough to have learned FORTRAN IV,
> and let's not kid the kids: FORTRAN IV sucked! Wasting a few
> machine cycles is one thing, but don't waste large amounts of my
> time and claim to be doing me a favor.

You need to look at conditions of the time though, you could hire a
programmer for $5.00 an hour, computer time cost over $1000 per machine
second, so if wasting 400 hours of programmer time saved 5 machine
seconds you were ahead of the game.

Today we look at different conditions, you can get a year of computer
time for $1,000 but the programmer costs that for a week, so tools need
to be programmer efficient rather than machine efficient. If you waste
5 hours of machine time and save a week of programmer time, you're ahead
of the game.

Java becomes more programmer efficient by 2 of the 3Rs (reduce is the
missing one) reuse and recycle, because a class is an independent
entity, you can use the same class over and over again, in different
programs.

I think the future will be more descriptive, in that a program will
describe what an object needs to accomplish rather than how the object
does it. The compiler will then figure out how to do that.

Paul

George Neuner

unread,
May 2, 2004, 3:24:10 AM5/2/04
to
On Wed, 28 Apr 2004 12:21:09 -0700, Julie <ju...@nospam.com> wrote:

>
>How can I be 'misreporting' Bjarne when I provided the *explicit* context in
>which he made the statements?

Stroustrup explicitly qualified the relationship between C and C++ with
the statement "Except for minor details, C++ is a superset of the C
programming language." Even read informally, that statement cannot be
interpreted simply as "C++ is a superset of C".

The intersection of C and C++ is not equal to C, but it is close
enough to cause problems that are rarely noticed at compile time and
manifest as hard-to-debug run-time crashes. Notably code in either
language that relies heavily on sizeof (e.g., generic data
structures), and C code that abuses goto in ways that violate C++
scoping rules (typically machine generated), can break if compiled
using the "other" language.

George
=============================================
Send real email to GNEUNER2 at COMCAST o NET

Paul Schlyter

unread,
May 2, 2004, 4:44:17 AM5/2/04
to
In article <b2e4b04.04050...@posting.google.com>,

Jerry Coffin <jco...@taeus.com> wrote:

> pau...@saaf.se (Paul Schlyter) wrote in message news:<c6vibk$2v46$1...@merope.saaf.se>...
>
> [ ... ]
>
>> The disadvantage of using C++ at a high abstraction level is that
>> there are so many ways to do that -- it depends on which class or
>> template library you choose to use (or which one others have chosen
>> for you; sometimes you don't have a choice). If you instead use a
>> genuinely high-level language, there's only one way to learn how to
>> use that.
>
> Either of these can be viewed as either an advantage or a
> disadvantage.

Yes ... your mileage may vary....


> [ ... ]
>
> > The reason COBOL is still used
> [ ... ]
>
> Anything that starts this way, assuming that there's only one reason
> COBOL is still used, is guaranteed to be wrong.

True --- so let's change it to "The main reason COBOL is still used...."


> There are quite a few reasons that COBOL is used, and likewise with
> Fortran.

However the huge amount of legacy code is one very important reason.
If that legacy code wasn't there, I don't think COBOL would be used
very much. Fortran could still have a significant use though, due to
its superiority in producing efficient machine code for heavy number
crunching programs.


> [ ... ]
>
> > There will always be several different languages, each one best for
> > one particular type of problem and none best for any kind of problem.
>
> I doubt that sentence is saying what you really intended -- I suspect
> you intended to say something more like "each one best for one
> particular kind of problem and none best for all kinds of problems."

I don't see much difference between the phrase "any kind of problem"
and "all kinds of problems", except that the latter would indicate
an attempt to actually try to solve all conceivable kinds of problems,
while the former only recognizing the potential of doing so.

Paul Schlyter

unread,
May 2, 2004, 4:44:17 AM5/2/04
to
In article <tuYkc.60514$OU.14...@news20.bellglobal.com>,
Paul Schmidt <wogs...@yahoo.ca> wrote:


> Bryan Olson wrote:
>
>> Hey -- there are more important things to optimize than clock-
>> cycle counts. I too am old enough to have learned FORTRAN IV,
>> and let's not kid the kids: FORTRAN IV sucked! Wasting a few
>> machine cycles is one thing, but don't waste large amounts of my
>> time and claim to be doing me a favor.
>
> You need to look at conditions of the time though, you could hire a
> programmer for $5.00 an hour, computer time cost over $1000 per machine
> second, so if wasting 400 hours of programmer time saved 5 machine
> seconds you were ahead of the game.

Could you give an actual example of computer time costing over $1000
per machine second? That would amount to over $86 _million_ per day
or $31 _billion_ per year ---- quite a lucrative business for just
one single computer!!!! And even if the computer would be used by
paying customers only 10% of the time, it would still mean over $3
billion per year --- several decades ago when money was worth much
more than today!

Also, remember that the computers of the late 60's were, by today's
standards, quite slow. The amount of computation made during CPU
second on one of those machines could be duplicated by a human on a
mechanical calculator in less than 10 hours. And if a programmer
cost $5/hour, a human doing calculations could probably be obtained
for $2.5/hour. So why waste over $1000 on computer time when the
same amount of computations could be done by a human for less than
$25 ?????



> Today we look at different conditions, you can get a year of computer
> time for $1,000 but the programmer costs that for a week, so tools need
> to be programmer efficient rather then machine efficient. If you waste
> 5 hours of machine time and save a week of programmer time, your ahead
> of the game.
>
> Java becomes more programmer efficient by 2 of the 3Rs (reduce is the
> missing one) reuse and recycle, because a class is an independent
> entity, you can use the same class over and over again, in different
> programs.

There was software reuse before classes -- the subroutine was
invented for that specific purpose: to wrap code into a package
making it suitable for reuse in different programs.

Also: in real life, classes aren't as independent as you describe
here: most classes are dependent on other classes. And in extreme
examples, trying to extract a class from a particular program for use
in another program will force you to bring along a whole tree of
classes, which can make moving that class to the new program
infeasible.


> I think the future will be more descriptive, in that a program will
> describe what an object needs to accomplish rather then how the object
> does it. The compiler will then figure out how to do that.

Like Prolog ? It was considered "the future" in the 1980's .....

perry

unread,
May 2, 2004, 10:11:30 AM5/2/04
to

Yes ... your mileage may vary....


>> [ ... ]
>>
>
>>> > The reason COBOL is still used
>
>> [ ... ]
>>
>> Anything that starts this way, assuming that there's only one reason
>> COBOL is still used, is guaranteed to be wrong.


True --- so let's change it to "The main reason COBOL is still used...."


>> There are quite a few reasons that COBOL is used, and likewise with
>> Fortran.


"However the huge amount of legacy code is one very important reason.
If that legacy code wasn't there, I don't think COBOL would be used
very much. Fortran could still have a significant use though, due to
its superiority in producing efficient machine code for heavy number
crunching programs."

we have to get out of the mindset that one language is one size fits all.

the universe is constantly expanding and this expansion is constantly
creating new and unique opportunities for growth. both early and modern
computer language design is a reflection of this.

further what you are talking about now is commonly addressed using
design patterns, one in particular is the wrapper that allows legacy
code to be "wrapped" inside another (typically more advanced)
implementation in order to harness the best of both worlds without
nullifying past efforts.

you might scoff at COBOL and FORTRAN but a great many people have
accomplished many great feats with these tools.... the piece of plastic
in your back pocket would not be there except for these...

- perry

Andrew Swallow

unread,
May 2, 2004, 2:06:07 PM5/2/04
to
"Paul Schlyter" <pau...@saaf.se> wrote in message
news:c72c9s$uqo$1...@merope.saaf.se...
[snip]

>
> However the huge amount of legacy code is one very important reason.
> If that legacy code wasn't there, I don't think COBOL would be used
> very much. Fortran could still have a significant use though, due to
> its superiority in producing efficient machine code for heavy number
> crunching programs.
>

COBOL's big enemy is Visual BASIC. This would be a
big surprise to people in the 1960s.

Andrew Swallow

Paul Schlyter

unread,
May 2, 2004, 3:44:45 PM5/2/04
to
In article <c73daf$4rd$1...@sparta.btinternet.com>,
Occasionally I'm actually amazed myself that BASIC didn't die many
years ago......

Btw should there ever be a COBOL with object oriented extensions to
the language, the name of that object oriented COBOL would be:

"Add one to COBOL" ...... :-)

Mok-Kong Shen

unread,
May 2, 2004, 5:58:07 PM5/2/04
to

Do you mean the case when f is called through supplying the
same actual argument to the two formal parameters?
But then that's also a problem with Fortran, if I don't
err. (The issue was the difference between these PLs.)

M. K. Shen

Mok-Kong Shen

unread,
May 2, 2004, 6:19:41 PM5/2/04
to

Paul Schlyter wrote:

> In article <c73daf$4rd$1...@sparta.btinternet.com>,
> Andrew Swallow <am.sw...@btinternet.com> wrote:
>
>
>>"Paul Schlyter" <pau...@saaf.se> wrote in message
>>news:c72c9s$uqo$1...@merope.saaf.se...
>>[snip]
>>
>>
>>>However the huge amount of legacy code is one very important reason.
>>>If that legacy code wasn't there, I don't think COBOL would be used
>>>very much. Fortran could still have a significant use though, due to
>>>its superiority in producing efficient machine code for heavy number
>>>crunching programs.
>>
>>COBOL's big enemy is Visual BASIC. This would be a
>>big surprise to people in the 1960s.

>

> Occasionally I'm actually amazed myself that BASIC didn't die many
> years ago......
>
> Btw should there ever be a COBOL with object oriented extensions to
> the language, the name of that object oriented COBOL would be:
>
> "Add one to COBOL" ...... :-)

I don't understand in which sense is Visual BASIC an enemy
of COBOL. COBOL has widespread use in certain commercial
sectors, notably banking, where BASIC is barely used, if
I don't err.

As to 'object-oriented' COBOL, I happen to know the title of
one book (which I have never seen though):

Ned Chapin, Standard Object-Oriented Cobol

M. K. Shen

Andrew Swallow

unread,
May 2, 2004, 7:40:32 PM5/2/04
to
"Mok-Kong Shen" <mok-ko...@t-online.de> wrote in message
news:c73s0g$2lv$07$1...@news.t-online.com...
[snip]

>
> I don't understand in which sense is Visual BASIC an enemy
> of COBOL. COBOL has widespread use in certain commercial
> sectors, notably banking, where BASIC is barely used, if
> I don't err.
>
The transfer of data entry from punch cards to PCs
has allowed Visual BASIC to take over as the main
data processing language in new developments.

In simple English, that is where the jobs are.

Although Java is now trying to become the main user-friendly
MMI language.

Andrew Swallow

Paul Schmidt

unread,
May 2, 2004, 8:22:51 PM5/2/04
to

In trying to prove that my implementation of the theory was wrong, you
missed the point of the theory. In the 1960's computer time was
expensive and labour was cheap, so systems attempted to use as little
computer time as possible, due to the cost. So if writing 400 lines of
Fortran, COBOL or Assembler saved a few seconds of computer time, it was
worth it. Today computers are cheap and labour is expensive, so new
languages have to be oriented more toward reducing labour resources at the
expense of computer resources. Using massive libraries of precanned
classes and reusing classes is a good way of reducing labour.
The fact that the more general code may not be as machine efficient is a
small tradeoff.

>
>
>>Today we look at different conditions, you can get a year of computer
>>time for $1,000 but the programmer costs that for a week, so tools need
>>to be programmer efficient rather then machine efficient. If you waste
>>5 hours of machine time and save a week of programmer time, you're ahead
>>of the game.
>>
>>Java becomes more programmer efficient by 2 of the 3Rs (reduce is the
>>missing one) reuse and recycle, because a class is an independent
>>entity, you can use the same class over and over again, in different
>>programs.
>
>
> There was software reuse before classes -- the subroutine was
> invented for that specific purpose: to wrap code into a package
> making it suitable for reuse in different programs.

Subroutines only dealt with the code; you had to be very careful with
the data. A lot of programs used global data, and it was common that one
subroutine would step on another subroutine's data. C and Pascal
allowed for local data, but programs still relied largely on global data.
Objects cured this to a large extent: it's easier to black-box an object
than to black-box a subroutine.

>
> Also: in real life, classes aren't as independent as you describe
> here: most classes are dependent on other classes. And in extreme
> examples, trying to extract a class from a particular program for use
> in another program will force you to bring along a whole tree of
> classes, which can make moving that class to the new program
> infeasible.
>

You missed the point, you CAN write classes with the idea of writing a
class once, and then using it over and over again, in each new project
that needs that kind of class, put it in a package or class library and
just bolt in the library or package.

>
>>I think the future will be more descriptive, in that a program will
>>describe what an object needs to accomplish rather then how the object
>>does it. The compiler will then figure out how to do that.
>
>
> Like Prolog ? It was considered "the future" in the 1980's .....
>


Okay, so it's not a new idea, and previous implementations have failed.
Objects were the same way: the first attempt to objectify C was
Objective-C -- who uses it today? You don't see a big call for Smalltalk
programmers either. Everybody seemed to like C++, and Java has been
popular enough. We have objects; we will eventually move away from low
level object handling to high level object handling.

Paul

Jerry Coffin

unread,
May 2, 2004, 9:37:12 PM5/2/04
to
pau...@saaf.se (Paul Schlyter) wrote in message news:<c72c9s$uqo$1...@merope.saaf.se>...

[ ... ]

> > > The reason COBOL is still used
> > [ ... ]
> >
> > Anything that starts this way, assuming that there's only one reason
> > COBOL is still used, is guaranteed to be wrong.
>
> True --- so let's change it to "The main reason COBOL is still used...."

I'm not certain I agree, but at least I'm not absolutely certain this
is wrong.



> > There are quite a few reasons that COBOL is used, and likewise with
> > Fortran.
>
> However the huge amount of legacy code is one very important reason.
> If that legacy code wasn't there, I don't think COBOL would be used
> very much. Fortran could still have a significant use though, due to
> its superiority in producing efficient machine code for heavy number
> crunching programs.

I'd tend toward more or less the opposite: Fortran has little real
advantage for most work. C99 (for one example) allows one to express
the same concepts, but even without that, well written C++ meets or
exceeds the standard set by Fortran.

COBOL, OTOH, provides reasonable solutions for a fairly large class of
problems that nearly no other language addresses as well. Keep in
mind that COBOL was intended for use by people who are not primarily
programmers, and that's still its primary use -- most people who write
COBOL are business majors and such who rarely have more than a couple
of classes in programming. The people I've talked to in that field
seem to think that's the way things should be; they've nearly all
tried to use real programmers to do the job, but have almost
universally expressed disappointment in the results (or, often, lack
thereof).

Now I'm not sure they're entirely correct, but I'm hard put to
completely ignore or discount their experiences either.

[ ... ]

> > > There will always be several different languages, each one best for
> > > one particular type of problem and none best for any kind of problem.
> >
> > I doubt that sentence is saying what you really intended -- I suspect
> > you intended to say something more like "each one best for one
> > particular kind of problem and none best for all kinds of problems."
>
> I don't see much difference between the phrase "any kind of problem"
> and "all kinds of problems", except that the latter would indicate
> an attempt to actually try to solve all conceivable kinds of problems,
> while the former only recognizing the potential of doing so.

As I'd interpret your original statement ("none best for any kind of
problem") it means "there is no problem for which any of them is the
best". I can hardly believe that's what you intended to say, but as
it was worded, I can't figure out another interpretation for it
either.

Douglas A. Gwyn

unread,
May 2, 2004, 10:41:52 PM5/2/04
to
Mok-Kong Shen wrote:
> But then that's also a problem with Fortran, ...

No.

Thomas Pornin

unread,
May 3, 2004, 3:34:38 AM5/3/04
to
According to Thomas Pornin <por...@nerim.net>:
> On pure integer computations (such as DES cracking), you may expect a
> factor of 3 between a Java implementation and an optimized C (or C++)
> implementation.

While thinking about it, I became aware that, for a DES cracker, it
_may_ be possible to reduce that factor. What kills speed in Java are
memory allocations (you _have_ to use "new" to allocate a small array
of bytes, whereas in C or C++ you can often use a buffer on the stack,
which is way faster, both for allocating and releasing) and array
accesses (which are checked against the array length).

A DES cracker (as opposed to a simple DES encryption engine) can be
programmed without any array at all, thus suppressing both problems. The
implementation would most likely use so-called "bitslice" techniques,
where any data is spread over many variables (one bit per variable).
Thus, S-boxes are no longer tables but "circuits", and bit
permutations become "free" (it is a matter of routing data, solved at
compilation and not at runtime). With the Java "long" type, 64 instances
are performed in parallel. In bitslice representation, I/O becomes a
real problem (you have to bitswap a lot) but a DES cracker does _not_
perform I/O.

So an optimized DES cracker in Java would look like a _big_ method with
a plethora of local variables, no array access, no object, no memory
allocation. It _may_ be as efficient as a C or C++ implementation,
provided that the JIT compiler does not drown under the task (even C
compilers have trouble handling a 55000-lines function with 10000+ local
variables -- try it !). Of course, it will be slow as hell on any JVM
without a JIT (e.g., the Microsoft VM under Internet Explorer 5.5).

Either way, the Java implementation will not be better than the C
implementation.


--Thomas Pornin

Paul Schlyter

unread,
May 3, 2004, 3:44:23 AM5/3/04
to
In article <vhglc.7003$ZJ5.3...@news20.bellglobal.com>,
Sure --- but still, computer time wasn't THAT expensive!!!! At $1000
per CPU second, as you claimed, with the slow computers available
back then, the computer business would have effectively killed itself
at a very early stage, since hand computation by humans would then
have been cost effective in comparison.

$1 per CPU second is more reasonable -- and that's still a lot!

(btw the word "computer" existed in the English language already
100+ years ago, but with a different meaning: a human, hired to
perform computations)


> Today computers are cheap, and labour is expensive, so new
> languages have to be more oriented to reducing labour resources at the
> expensive of computer resources. Using massive libraries of precanned
> class libraries and reusing classes is a good way of reducing labour.
> The fact that the more general code may not be as machine efficient is a
> small tradeoff.

I never argued against that. However, for the most CPU intensive
programs, such as numerical weather prediction, it's still cost
effective to devote programmer time to make the program more
efficient. And sometimes it's not just a matter of cutting runtime
costs, but the matter of being able to solve a particular problem at
all or not. Admittedly, only a small fraction of all existing
programs are of this kind.



>>>Today we look at different conditions, you can get a year of computer
>>>time for $1,000 but the programmer costs that for a week, so tools need
>>>to be programmer efficient rather then machine efficient. If you waste
>>>5 hours of machine time and save a week of programmer time, you're ahead
>>>of the game.
>>>
>>>Java becomes more programmer efficient by 2 of the 3Rs (reduce is the
>>>missing one) reuse and recycle, because a class is an independent
>>>entity, you can use the same class over and over again, in different
>>>programs.
>>
>> There was software reuse before classes -- the subroutine was
>> invented for that specific purpose: to wrap code into a package
>> making it suitable for reuse in different programs.
>
> Subroutines only dealt with the code, you had to be very careful with
> the data, a lot of programs used global data, and it was common that one
> subroutine would step on another subroutine's data. C and Pascal
> allowed for local data, but still rely largely on global data.

FORTRAN had data local to subroutines before C and Pascal even existed.
Yes, this local data was static, but since FORTRAN explicitly disallowed
recursion, that was not a problem.


> Objects cured this to a large extent, it's easier to black-box an
> object than to black-box a subroutine.

Sure, but software was still reused before object orientation came
into fashion....


>> Also: in real life, classes aren't as independent as you describe
>> here: most classes are dependent on other classes. And in extreme
>> examples, trying to extract a class from a particular program for use
>> in another program will force you to bring along a whole tree of
>> classes, which can make moving that class to the new program
>> infeasible.
>
> You missed the point, you CAN write classes with the idea of writing a
> class once, and then using it over and over again, in each new project
> that needs that kind of class, put it in a package or class library and
> just bolt in the library or package.

You can do the same with subroutines.....



>>>I think the future will be more descriptive, in that a program will
>>>describe what an object needs to accomplish rather then how the object
>>>does it. The compiler will then figure out how to do that.
>>
>> Like Prolog ? It was considered "the future" in the 1980's .....
>
> Okay, so it's not a new idea, and previous implementations have failed,
> objects were the same way, the first attempt to Objectize C was
> ObjectiveC who uses it today, you don't see a big call for SmallTalk
> programmers either? Everybody seemed to like C++, and Java has been
> popular enough. We have objects, we will eventually move away from low
> level object handling to high level object handling.
>
> Paul

....or some new programming paradigm will become popular, making
objects obsolete. You never know what the future will bring....

Paul Schlyter

unread,
May 3, 2004, 3:44:24 AM5/3/04
to
In article <b2e4b04.04050...@posting.google.com>,
Jerry Coffin <jco...@taeus.com> wrote:

> pau...@saaf.se (Paul Schlyter) wrote in message news:<c72c9s$uqo$1...@merope.saaf.se>...
>
> [ ... ]
>
>>>> The reason COBOL is still used
>>> [ ... ]
>>>
>>> Anything that starts this way, assuming that there's only one reason
>>> COBOL is still used, is guaranteed to be wrong.
>>
>> True --- so let's change it to "The main reason COBOL is still used...."
>
> I'm not certain I agree, but at least I'm not absolutely certain this
> is wrong.
>
>>> There are quite a few reasons that COBOL is used, and likewise with
>>> Fortran.
>>
>> However the huge amount of legacy code is one very important reason.
>> If that legacy code wasn't there, I don't think COBOL would be used
>> very much. Fortran could still have a significant use though, due to
>> its superiority in producing efficient machine code for heavy number
>> crunching programs.
>
> I'd tend toward more or less the opposite: Fortran has little real
> advantage for most work.

Your view and my view are not contradicting one another. Indeed
Fortran has little real advantage for most work, since most work isn't
heavy number crunching.


> C99 (for one example) allows one to express the same concepts,

Unfortunately, there are few C99 implementations out there. Do
you know of any C99 implementation available for supercomputers, for instance?


> but even without that, well written C++ meets or exceeds the standard
> set by Fortran.

:-) .... try the standard problem of writing a subroutine to invert a
matrix of arbitrary size. Fortran has had the ability to pass a
2-dimensional array of arbitrary size to subroutines for decades. In
C++ you cannot do that -- you'll have to play games with pointers to
achieve similar functionality. That's why I once wrote the amalloc()
function (it's written in C89 but compilable in C++), freely
available at http://www.snippets.org


> COBOL, OTOH, provides reasonable solutions for a fairly large class
> of problems that nearly no other language addresses as well.

In principle true, but not particularly relevant: the database
functionality which is missing from most programming languages is
instead achieved with a suitable library.


> Keep in mind that COBOL was intended for use by people who are not
> primarily programmers, and that's still its primary use -- most
> people who write COBOL are business majors and such who rarely have
> more than a couple of classes in programming.

The vision of COBOL was to enable the programmers to express their
solutions in plain English. COBOL didn't reach quite that far, but it's
still a quite "babbling" language in comparison to almost all other
programming languages.


> The people I've talked to in that field seem to think that's the
> way things should be; they've nearly all tried to use real programmers
> to do the job, but have almost universally expressed disappointment
> in the results (or, often, lack thereof).

I guess real programmers want real problems, or else they'll get
bored. Try to put a very skilled engineer on an accounting job
and you'll probably see similar results....


> Now I'm not sure they're entirely correct, but I'm hard put to
> completely ignore or discount their experiences either.

I believe you --- and COBOL will most likely continue to be used
for a long time.


> [ ... ]
>
>>>> There will always be several different languages, each one best for
>>>> one particular type of problem and none best for any kind of problem.
>>>
>>> I doubt that sentence is saying what you really intended -- I suspect
>>> you intended to say something more like "each one best for one
>>> particular kind of problem and none best for all kinds of problems."
>>
>> I don't see much difference between the phrase "any kind of problem"
>> and "all kinds of problems", except that the latter would indicate
>> an attempt to actually try to solve all conceivable kinds of problems,
>> while the former only recognizing the potential of doing so.
>
> As I'd interpret your origial statement ("none best for any kind of
> problem") it means "there is no problem for which any of them is the
> best". I can hardly believe that's what you intended to say, but as
> it was worded, I can't figure out another interpretation for it
> either.

True, I didn't intend to say that -- with "any problem" I meant "any
problem which could appear" and not "at least one problem"... but
OK, English isn't my native language....

Douglas A. Gwyn

unread,
May 3, 2004, 5:14:39 AM5/3/04
to
Paul Schlyter wrote:
> :-) .... try the standard problem of writing a subroutine to invert a
> matrix of arbitrary size. Fortran has had the ability to pass a
> 2-dimensional array of arbitrary size to subroutines for decades. In
> C++ you cannot do that -- you'll have to play games with pointers to
> achieve similar functionality. ...

C doesn't have multidimensional arrays, but it does support
arrays of arrays and other complex structures. Using these
tools you get a *choice* of how to represent matrices,
unlike the native Fortran facility where you're stuck with
whatever the compiler has wired in.

In C++ one would normally use a matrix class in order to be
able to apply the standard operators, e.g. + and *.

Liwp

unread,
May 3, 2004, 7:02:39 AM5/3/04
to

Someone posted the link below to this thread earlier. I'm guessing you
did not read the article. For example, C has problems with optimizing
pointers that result in similar problems as Java has with array bounds
checking. Also, GCs provide memory locality which again reduces the
number of cache misses which results in better performance. Then again
if you don't allocate anything dynamically you don't have to worry about
that.

http://www.idiom.com/~zilla/Computer/javaCbenchmark.html

If you look at the benchmarks Java goes from being 9 times slower to
being 4 times faster than C. I think the only conclusions you can draw
from the stats is that you can seriously muck things up with both Java
and C unless you know how certain structures affect performance in
relation to register allocations, memory access, and optimizations.

--
! Lauri

Mok-Kong Shen

unread,
May 3, 2004, 10:53:31 AM5/3/04
to

Andrew Swallow wrote:

> "Mok-Kong Shen" <mok-ko...@t-online.de> wrote in message
> news:c73s0g$2lv$07$1...@news.t-online.com...
> [snip]
>
>>I don't understand in which sense is Visual BASIC an enemy
>>of COBOL. COBOL has widespread use in certain commercial
>>sectors, notably banking, where BASIC is barely used, if
>>I don't err.
>>
>
> The transfer of data entry from punch cards to PCs
> has allowed Visual BASIC to take over as the main
> data processing language in new developments.

At least from what I was told, programming in businesses such
as banking continues to use COBOL, and BASIC was never given
a chance, whether in new developments or not.

M. K. Shen

Paul Schmidt

unread,
May 3, 2004, 10:53:41 AM5/3/04
to
Paul Schlyter wrote:
> In article <vhglc.7003$ZJ5.3...@news20.bellglobal.com>,
> Paul Schmidt <wogs...@yahoo.ca> wrote:
>
>
>>Paul Schlyter wrote:
>>>Also: in real life, classes aren't as independent as you describe
>>>here: most classes are dependent on other classes. And in extreme
>>>examples, trying to extract a class from a particular program for use
>>>in another program will force you to bring along a whole tree of
>>>classes, which can make moving that class to the new program
>>>infeasible.
>>
>>You missed the point, you CAN write classes with the idea of writing a
>>class once, and then using it over and over again, in each new project
>>that needs that kind of class, put it in a package or class library and
>>just bolt in the library or package.
>
>
> You can do the same with subroutines.....
>
>

In many cases you can; however, subroutines don't offer a way to protect
data from the general program. I have seen a lot of program bugs that
were caused by this lack of protection. There were various attempts at
curing it: Hungarian notation was one of them; in a multi-developer
system you could have a central repository of variable names so that
variable names don't overlap, or include the developer's initials in the
variable names. But don't forget the computer is better at boring
repetitive tasks than people are. Objects make this process much
easier, in that the object is responsible for its own data.

>>>>I think the future will be more descriptive, in that a program will
>>>>describe what an object needs to accomplish rather then how the object
>>>>does it. The compiler will then figure out how to do that.
>>>
>>>
>>>Like Prolog ? It was considered "the future" in the 1980's .....
>>
>>Okay, so it's not a new idea, and previous implementations have failed,
>>objects were the same way, the first attempt to Objectize C was
>>ObjectiveC who uses it today, you don't see a big call for SmallTalk
>>programmers either? Everybody seemed to like C++, and Java has been
>>popular enough. We have objects, we will eventually move away from low
>>level object handling to high level object handling.
>

> ....or some new programming paradigm will become popular, making
> objects obsolete. You never know what the future will bring....
>

That is also possible: in 5 to 10 years' time, the object may be
replaced with something else. Most likely you will use
diagramming tools to diagram what you want the objects to do, and
then the computer will compile the drawing into machine language. The
software designer would then do the diagrams and, rather than email or
courier them to India or China for coding, will simply compile the
diagrams into code.

Paul

Mok-Kong Shen

unread,
May 3, 2004, 11:10:45 AM5/3/04
to

Douglas A. Gwyn wrote:

Fortran's subroutine parameters are name-parameters (not
value parameters), i.e. just like the 'int *a' of your C
example. So, if two formal parameters of that type in a
Fortran subroutine are associated with one and the same
actual parameter, then certain code sequences could cause
the same problem, if the equivalent C function causes a problem.

M. K. Shen

Chris McDonald

unread,
May 3, 2004, 12:29:50 PM5/3/04
to
Mok-Kong Shen <mok-ko...@t-online.de> writes:


>Fortran's subroutine parameters are name-parameters (not
>value parameters), i.e. just like the 'int *a' of your C
>example. So, if two formal parameters of that type in a
>Fortran subroutine are associated with one and the same
>actual parameter, then certain code sequences could cause
>the same problem, if the equivalent C function causes problem.


I'm sure that you mean 'pass-by-reference', as name-parameters
(pass-by-name) are completely different again (c.f. Algol).

_______________________________________________________________________________
Dr Chris McDonald EMAIL: ch...@csse.uwa.edu.au
School of Computer Science & Software Engineering
The University of Western Australia WWW: http://www.csse.uwa.edu.au/~chris
Crawley, Western Australia, 6009 PH: +61 8 6488 2533, FAX: +61 8 6488 1089

Paul Schlyter

unread,
May 3, 2004, 3:45:28 PM5/3/04
to
In article <4qSdnUXRwbb...@comcast.com>,

Douglas A. Gwyn <DAG...@null.net> wrote:

> Paul Schlyter wrote:
>> :-) .... try the standard problem of writing a subroutine to invert a
>> matrix of arbitrary size. Fortran has had the ability to pass a
>> 2-dimensional array of arbitrary size to subroutines for decades. In
>> C++ you cannot do that -- you'll have to play games with pointers to
>> achieve similar functionality. ...
>
> C doesn't have multidimensional arrays, but it does support
> arrays of arrays

C's arrays of arrays are useless here, because you must decide the
number of elements in all dimensions except the first one. In C
you must use pointers to pointers instead, forcing the storage
of a number of pointers, which must be accessed at runtime. This
slows down the execution of the code.

> and other complex structures.

True, C is more versatile. Just a pity they left out that case
of multi-dimensional arrays. C borrowed so many other good
properties from Fortran (efficient code, separate compilation,
static variables, a good set of math functions, floating-point
numbers in single AND double precision) so it could have borrowed
that feature as well.

In essence C was a very good mixture of the best features of several
languages existing prior to C. I've already listed what it borrowed
from Fortran; from Pascal it borrowed automatic variables and dynamic
allocation of memory, recursion, and structured programming concepts.
And from assembler it borrowed low-level support such as pointers
and bit manipulation. However C didn't borrow object orientation
from Simula... :-)


> Using these tools you get a *choice* of how to represent matrices,

In C there's not much of a choice here -- if you want to represent
an arbitrary sized matrix in a way convenient to use, you're
pretty much stuck with the pointer-to-pointer representation.


> unlike the native Fortran facility where you're stuck with
> whatever the compiler has wired in.

True, C is more versatile, I never argued against that.


> In C++ one would normally use a matrix class in order to be
> able to apply the standard operators, e.g. + and *.

...and that matrix class would have to be implemented using the same
basic language features which are available in C -- in that respect
C++ does not offer any advantage over C.

Paul Schlyter

unread,
May 3, 2004, 3:45:29 PM5/3/04
to
In article <c75n8b$ua3$03$1...@news.t-online.com>,

Mok-Kong Shen <mok-ko...@t-online.de> wrote:

> Fortran's subroutine parameters are name-parameters (not
> value parameters),

No - they are reference parameters. Algol had name parameters, but
they were so complex to implement and not of that much use that they
were abandoned in later programming languages.


> i.e. just like the 'int *a' of your C example.

That's an implementation of a reference parameter. If you wanted to
try passing the equivalent of a name parameter in C, the parameter
declaration would look something like

struct name_parameter_a *a

where the struct could be defined as something like:

struct name_parameter_a
{
    int  (*reference_a)(void);
    void (*assign_to_a)(int value);
};

Name parameters were usually implemented as "thunks", i.e. pointers
to pieces of code which computed the parameter, i.e. implemented the
semantics of the name parameter.



> So, if two formal parameters of that type in a
> Fortran subroutine are associated with one and the same
> actual parameter, then certain code sequences could cause
> the same problem, if the equivalent C function causes problem.

The well-known aliasing problem....

Paul Schlyter

unread,
May 3, 2004, 3:45:28 PM5/3/04
to
In article <V1tlc.17508$3Q4.3...@news20.bellglobal.com>,
True of course, however there WAS software reuse also before object
orientation. And that was my point.



>> ....or some new programming paradigm will become popular, making
>> objects obsolete. You never know what the future will bring....
>
> That is also possible, that in 5 to 10 years time, the object will be
> replaced with something else. Probably most likely is that you will use
> diagramming tools, to diagram what you want the objects to do, and
> then the computer will compile the drawing into machine language. The
> software designer would then do the diagrams, and rather then email or
> courier them to India or China for coding, will simply compile the
> diagrams into code.

Automatic code generators have been another "dream of the future" for
decades now. I particularly remember one program, written for the
good ol' Apple II 8-bit microcomputer in the early 1980's. That
program was called "The Last One", suggesting it was the last program
which ever had to be written. What did it do? It asked the user for
a number of parameters for some administrative program, and then it
output Applesoft BASIC code for that program...... Needless to
say, it didn't at all end all further software development.... :-)

The interesting thing about the future is that it cannot be
predicted. So whenever one envisions computer software and hardware
10-20 years into the future, think back some decades instead and
think about what people back then imagined about our times:

Around 1950, IBM estimated the world market for computers at some
five machines.

Around 1970, DEC thought there was absolutely no need for personal
computers.

Around 1980, Bill Gates made his now famous statement that 640 kBytes
was more memory than anyone would ever need.

Around 1983-1985, the programming language Ada was envisioned to,
within 5-10 years, take over virtually all software development. At
the same time, there were very high hopes for Japan's "5'th
generation" computers, based on the language Prolog.

Around 1990, Bill Gates thought that OS/2 would run on most personal
computers in the late 1990's.

And so on and so on ......

Mok-Kong Shen

unread,
May 3, 2004, 5:54:07 PM5/3/04
to

Chris McDonald wrote:
> Mok-Kong Shen <mok-ko...@t-online.de> writes:
>
>>Fortran's subroutine parameters are name-parameters (not
>>value parameters), i.e. just like the 'int *a' of your C
>>example. So, if two formal parameters of that type in a
>>Fortran subroutine are associated with one and the same
>>actual parameter, then certain code sequences could cause
>>the same problem, if the equivalent C function causes problem.
>
>
>
> I'm sure that you mean 'pass-by-reference', as name-parameters
> (pass-by-name) are completely different again (c.f. Algol).

You are certainly right. Thanks for the correction.

M. K. Shen

Mok-Kong Shen

unread,
May 3, 2004, 6:07:41 PM5/3/04
to

Paul Schlyter wrote:

> Mok-Kong Shen <mok-ko...@t-online.de> wrote:
>
>
>>Fortran's subroutine parameters are name-parameters (not
>>value parameters),
>
>
> No - they are reference parameters. Algol had value parameters, but
> they were so complex to implement and not of that much use so they
> were abandoned in later programming languages.
>
>
>>i.e. just like the 'int *a' of your C example.
>
>
> That's an implementation of a reference parameter. If you wanted to
> try passing the equivalent of a name parameter in C, the parameter
> declaration would look something like
>
> struct name_parameter_a *a
>
> where the struct could be defined as something like:
>
> struct name_parameter_a
> {
> int (*reference_a)();
> void (*assign_to_a)(int value);
> };
>
> Name parameters were usually implemented as "thunks", i.e. pointers
> to pieces of code which computed the parameter, i.e. implemented the
> semantics of the name parameter.

Chris McDonald had already pointed out my use of the
wrong term.

>
>>So, if two formal parameters of that type in a
>>Fortran subroutine are associated with one and the same
>>actual parameter, then certain code sequences could cause
>>the same problem, if the equivalent C function causes problem.
>
>
> The well-known aliasing problem....

So in Fortran there do exist aliasing problems just like
in C and C++, contrary to what you previously claimed. BTW,
in the newer standard versions of Fortran (starting with
Fortran 95), one can also have value parameters.

M. K. Shen
