
Execution speed of various machines


Rick.Rashid@cmu-10a

Jun 14, 1981, 5:10:12 AM
I have noticed a great deal of "rumor" and
"gossip" about the relative speed of various
machines. In order to reduce the noise factor
in these discussions, I decided to run a
benchmark program on three different
time-sharing systems familiar to most of the
correspondents and on the Three Rivers Corp.
Perq in my office. The benchmark program used
was developed several months ago by Forest Baskett
of Stanford (and PARC) and is available for
inspection on my account on CMUA as
"puzzle.pas[p100rr60]". Basically, it has a lot
of looping, integer arithmetic, and array access.
The program is written in PASCAL and the same
program text was used on each of the machines.
(Forest originally used this benchmark in various
programming languages and has written a paper
describing the execution times of the benchmark
algorithm on a number of machines in many different
languages.)

The systems used in the study were: DEC KL-10/TOPS-10,
DEC KA-10/TOPS-10, DEC VAX 11/780 running Berkeley VMUNIX,
and PERQ. All benchmarks were run between 10:00 am and
10:30 am Sunday June 14, 1981. At the time of execution
on the VAX, KA-10 and PERQ only one user (myself) was
logged on and active. Several users were logged on the
KL-10, but the load was very light (> 50% idle). I measured
CPU time, elapsed real time, and the real time required
to compile, link and load the PASCAL program. Execution
times both with and without runtime range checking were
obtained. In addition, on the VAX I ran the benchmark with
two different PASCAL compilers - the PASCAL compiler
supplied by UC Berkeley, and the CMU PERQ/VAX PASCAL compiler
(which generates code both for the PERQ and the VAX).
The code for the VAX was run through the C code optimizer
before execution. Timings were done with <cntrl-T> for
the KL-10 and KA-10, with the UNIX time command on the
VAX and with my Casio alarm chronograph on the PERQ.
Each benchmark was repeated three times and the times
were consistent. All times are in seconds.

Machine   Compiler      Runtime w/range    Runtime w/o range
KA-10     PASCAL-10     48.6               39.4
PERQ      PERQ/PASCAL   28.6 (elapsed)     22.0 (elapsed)
VAX       CMU PASCAL    17.3               10.7
VAX       UCB PASCAL    77.0               15.8
KL-10     PASCAL-10      9.2                7.3

Machine   Compiler      Elapsed time w/o range
KA-10     PASCAL-10     44   (Single user)
PERQ      PERQ/PASCAL   22   (Single user)
VAX       CMU PASCAL    12   (Single user)
VAX       UCB PASCAL    17   (Single user)
KL-10     PASCAL-10     31   (Several users)

Machine   Compiler      Compile/link/load time (elapsed)
KA-10     PASCAL-10     23   (Single user)
PERQ      PERQ/PASCAL   23   (Single user)
VAX       CMU PASCAL    32   (Single user)
VAX       UCB PASCAL    35   (Single user)
KL-10     PASCAL-10     23   (Several users)


While I tried to reduce the variables in the test by
sticking with the same language (and in the case of
the PERQ and VAX the same basic compiler), the
comparison of CMU PASCAL and UCB PASCAL on the VAX
should warn the wary that benchmarks test many
things besides raw execution speed. They test the
compiler and the code it generates, the match-up of
machine architecture to language requirements, and
the microcode implementation of the machine
architecture. As an example, the PERQ/PASCAL compiler
both generates fairly bad code and is hampered by
the lack of looping byte codes (the PERQ Q-Code is
very similar to UCSD P-Code). My guess is that a
factor of two speed-up could be obtained with a good
optimizer and a better byte code set.

-Rick
