Anyway, I'm having a lot of fun... :-) With the pcap libraries the VAX goes
very well on the net with NetBSD 1.5 installed, and the same for the
PDP-11 running 2.11BSD. :-) Only thing: the Alpha at 233 MHz seems slow
compared with an Intel PIII 450, both running Linux... :-|
gianluca@alpha-debian:~/supnik/simh$ make USE_NETWORK=1 vax
gcc -O2 -lm -I . VAX/vax_cpu1.c VAX/vax_cpu.c VAX/vax_fpa.c
VAX/vax_io.c VAX/vax_mmu.c VAX/vax_stddev.c VAX/vax_sys.c
VAX/vax_sysdev.c PDP11/pdp11_rl.c PDP11/pdp11_rq.c PDP11/pdp11_ts.c
PDP11/pdp11_dz.c PDP11/pdp11_lp.c PDP11/pdp11_tq.c PDP11/pdp11_pt.c
PDP11/pdp11_xq.c scp.c scp_tty.c sim_sock.c sim_tmxr.c sim_ether.c -I
VAX/ -I PDP11/ -DUSE_INT64 -DUSE_NETWORK -lpcap -o vax
sim_sock.c: In function `sim_accept_conn':
sim_sock.c:193: warning: passing arg 3 of `accept' from incompatible pointer type
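(The accept() warning looks harmless; I'd guess it's the usual int vs. socklen_t
mismatch on the third argument. I haven't actually checked what sim_sock.c does
at line 193, but the usual shape of the problem is something like this sketch,
with a made-up helper name:)

/* Illustrative sketch only (made-up helper name): I haven't looked at
 * what sim_sock.c actually does at line 193, but that warning usually
 * means the length argument is an int * while accept() wants a
 * socklen_t *.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>

int accept_one(int listener)
{
    struct sockaddr_in cli;
    socklen_t len = sizeof(cli);   /* declaring this as 'int len;' triggers the warning */

    return accept(listener, (struct sockaddr *)&cli, &len);
}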
bye,
Gian Luca
--
Logic is not about the plausibility of the hypotheses,
but about the coherence of the theses with the hypotheses.
Take a look at the SPECint95 numbers on that Alpha vs. the PIII. Off the
top of my head the Alpha is around 3-4 and the PIII is about 20. I've run
both the PDP-10 and PDP-11 versions of simh on my AlphaStation 200 4/233,
and while it's useable, I'd much rather run them on my 500 MHz Celeron or
1 GHz PIII :^)
Zane
"Zane H. Healy" wrote:
> Gian Luca Sole <gluc...@tiscalinet.it> wrote:
> > PDP-11 running 2.11BSD. :-) Only thing: the Alpha at 233 MHz seems slow
> > compared with an Intel PIII 450, both running Linux... :-|
>
> Take a look at the SPECint95 numbers on that Alpha vs. the PIII. Off the
> top of my head the Alpha is around 3-4 and the PIII is about 20. I've run
> both the PDP-10 and PDP-11 versions of simh on my AlphaStation 200 4/233,
> and while it's useable
Thanks, Zane.
Is the emulator on the Alpha 233 able to reach at least the speed of a real
VAX-11/780? mmhh... :-|
The PDP-11 (I used a real one, a /23 with RSX at school, in '83-'86) with
2.11BSD seems to run quickly (muuuuch more quickly on my PIII700, however).
Out of curiosity, what do I have to do to determine the instantaneous MIPS
rate of the emulator, compared with a real PDP-11 (or VAX) of a given model?
bye G.L.
I'm not positive, but based on my experience running the PDP-10 and
PDP-11 emulators on a 233 MHz Alpha, I really doubt it. Even the 700 MHz PIII
is likely to be only 2-3x the speed of a VAX-11/780, if I'm remembering the
performance figures I've seen tossed about for the SIMH VAX emulator (I've
yet to try running it).
> The PDP-11 (I used a real one, a /23 with RSX at school, in '83-'86) with
> 2.11BSD seems to run quickly (muuuuch more quickly on my PIII700, however).
A real /23 is a pretty slow system, so I suspect that the 233 MHz Alpha is
faster. It's been quite a while since I ran it, but I seem to remember it
feeling faster under RT-11 than my PDP-11/73, but part of that is likely to
be the much faster disk I/O. Even my AlphaStation 200 4/233 using Narrow
SCSI disks has faster disk I/O than my PDP-11/73 using Narrow SCSI disks.
> Out of curiosity, what do I have to do to determine the instantaneous MIPS
> rate of the emulator, compared with a real PDP-11 (or VAX) of a given model?
Your best bet is going to be to write some code that will give you the
ability to compare (the problem there being that you then need access to the
real hardware). I know someone had been looking into coming up with some
software that would give us some idea as to how many VUPS the VAX version of
SIMH runs at, but I've not heard of anyone actually doing it.
I for one would like to see programs that would allow us to benchmark the
following on various systems:
SIMH PDP-10 (KS10)
KLH10 (KS10 and KL10B)
SIMH PDP-11
E11
SIMH VAX (IIRC, this emulates a KA655)
It wouldn't be too hard for some of us to run benchmarks on real PDP-11s or
VAXen, but getting someone to run them on real PDP-10s would be more
difficult. Also, I really don't see much need to run tests on other systems
such as the PDP-8.
The benchmarks would need to test general CPU usage, memory access, disk
access and anything else that people can think to toss in there.
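Just to illustrate the CPU part of what I mean (this is only a throwaway
sketch of mine, not a proposed benchmark), even something as dumb as the
following, compiled and timed on the real machine and again under the
emulator, would give a rough ratio:

/* Throwaway CPU-loop sketch, purely illustrative; a real benchmark
 * would need memory and disk tests as well.  Compile it on the real
 * machine and under the emulator and time both runs.
 */
#include <stdio.h>

int main(void)
{
    long i, sum = 0;

    for (i = 0; i < 10000000L; i++)     /* ten million iterations */
        sum += i & 7;

    printf("sum = %ld\n", sum);         /* print the result so the loop isn't optimized away */
    return 0;
}

Crude, but it's the same crudeness on both sides, so the ratio should mean
something.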
Zane
The simulator's 'clock' is an instruction counter. Get the current
count with
sim> show time
start your favorite piece of code, time off 10 seconds with your
watch, interrupt with ^E, and type show time again. The difference in
times, divided by 10, is ips. Divide again by 10^6 to get mips.
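For example (numbers made up): if the first show time reports 12000000 and
the second reports 37000000 after a 10 second run, that's
(37000000 - 12000000) / 10 = 2500000 ips, or 2.5 mips.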
/Bob
That's pretty bad. The 11/780 was less than 1 MIPS (leading to the question
of whether you were measuring "real MIPS" or "VAX MIPS"). That means the
interpreter is executing hundreds of instructions per VAX instruction.
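(Rough arithmetic: ~1.6 million emulated VAX instructions a second on a
700 MHz PIII comes to roughly 700/1.6, call it 440, host clock cycles per
VAX instruction, i.e. hundreds of host instructions even at about one
instruction per clock.)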
The VAX isn't *that* much more complex than the 68000, and the common 68000
emulation engine (the one used in UAE, Basilisk, and Copilot/POSE) does a
better job than that.
--
Rev. Peter da Silva, ULC. 29.6852N 95.5770W WWFD?
"Be conservative in what you generate, and liberal in what you accept"
-- Matthew 10:16 (l.trans)
The KLH10 emulator seems to have a mere 40x performance hit compared to
the native hardware when doing Dhrystones. Roughly, my experience is that
you get a KL10 equivalent in performance for each 100MHz of native CPU, so
KLH10 running on a dedicated 700 MHz PIII should be about 7 times a KL10.
Of course, the PDP-10 architecture is much easier to implement quickly
than the VAX.
-- Mark --
http://staff.washington.edu/mrc
Science does not emerge from voting, party politics, or public debate.
Didn't the microcode in the original KL10 run at 100 MHz too?
In which case, the emulator is as good as the microcode.
>Of course, the PDP-10 architecture is much easier to implement quickly
>than the VAX.
I just wish this knowledge could warp back in time to 1982 or thereabouts.
Still, there are areas where the emulator has to do some unnecessary
work. The 18<->32<->36<->64 bit masks and shifts are omnipresent in the
code, the EA calculations are interesting, and the instruction execute
loop gets to be pretty big. Not that I can see any immediate solutions
to these, though.
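(To give a concrete picture of the masking overhead, here is an illustrative
sketch of my own, not the emulator's actual code, of what carrying 36-bit
words in 64-bit host integers ends up looking like:)

#include <stdint.h>

/* Illustrative sketch only, not the emulator's actual code.  A 36-bit
 * PDP-10 word carried in a 64-bit host integer has to be masked back
 * to 36 bits after nearly every operation, and the 18-bit halfwords
 * need their own shifts and masks on top of that.
 */
#define W36_MASK ((1ULL << 36) - 1)
#define H18_MASK ((1ULL << 18) - 1)

typedef uint64_t w36;

w36 add36(w36 a, w36 b)  { return (a + b) & W36_MASK; }   /* mask every result */
w36 left_half(w36 w)     { return (w >> 18) & H18_MASK; }
w36 right_half(w36 w)    { return w & H18_MASK; }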
-- mrr
My guess is that dhrystone wouldn't have exercised much microcode.
It would be mostly plain arithmetic and memory instructions, iirc.
What's more interesting is that building and linking all of the TOPS-20
release 7 monitor, including all the TCP and DECnet stuff, takes 12.5
minutes under KLH10 on an Athlon 1700+.
The tables in
http://www.inwap.com/pdp10/models.txt
say the KL10 clock cycle was either 40 or 33 nanoseconds, depending on
the model. Dunno if that's the speed of the micromachine. (The XKL-1
clock is 30 ns, but microinstructions execute in n * 15 ns, n = 2, 3, 4,
or 5.)
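(For what it's worth, a 40 ns cycle works out to 25 MHz and a 33 ns cycle
to roughly 30 MHz, so nowhere near 100 MHz even at the microinstruction
level.)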
--
Lars Brinkhoff http://lars.nocrew.org/ Linux, GCC, PDP-10,
Brinkhoff Consulting http://www.brinkhoff.se/ HTTP programming
Good grief! We could have used that when packaging the -10 software!
/BAH
Subtract a hundred and four for e-mail.
How long does it take on real hardware?
--
Magnus Olsson (m...@df.lth.se)
PGP Public Key available at http://www.df.lth.se/~mol
Bob Supnik wrote:
> sim> show time
>
> start your favorite piece of code, time off 10 seconds with your watch,
> interrupt with ^E, and type show time again. The difference in times, divided
> by 10, is ips. Divide again by 10^6 to get mips.
Thanks, Bob. Reading simh_doc.txt, I had seen "time units"; I hadn't understood
that it means "instructions actually executed". :-)
I obtained the following MIPS rates:

             PIII700       AlphaServer 2000 4/233
   pdp11     ~2.5 mips     ~0.45 mips
   vax       ~1.6 mips     ~0.20 mips
IMHO only the pdp11 appears to be acceptable on the Alpha... I was optimistic
about the performance of the Alpha processor: even if it's a relatively old model
and the clock frequency is low, it's still a _64-bit RISC processor_.
The BogoMIPS measured by Linux (/proc/cpuinfo) are respectively about 460 on the
Alpha and 1400 on the PIII700.
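(So the raw BogoMIPS ratio is only about 3:1 in the PIII's favor, yet the
emulated MIPS gap is roughly 5.5:1 for the pdp11 and 8:1 for the vax.)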
Obviously it serves no purpose other than more faithfully simulating the
whole 'experience' of old iron!
Perhaps a new feature?
--gb
Hours. Something like 3 hours on a KL, overnight on a KS.
In the days of old iron we were always trying to get faster processors.
None of us would have thought of deliberately slowing things down.
16x KL speed isn't really that great a performance jump. It basically
brings us up to 1990 vintage performance; Lingling is comparable to a PMAX
running Ultrix. And indeed, by 1990 the last few KLs still running were
looking pretty sad.
We had many processors of differing speeds along the way. I see Lingling
as simply another in the tradition, somewhat behind the curve but much
further along than the XKL or SC machines.
Now, we do have certain advantages; we've also missed the past 20 years of
bloatware. Lingling finally represents a PDP-10 processor that is
competent to run Lisp (which for a KL was bloatware).
Almost all of it was due to a slow CPU.
Disk and tape I/O on old mainframes was quite fast, especially with a
winning operating system such as TOPS-20 which knew how to do I/O right.
I was sort of waiting for this reply :-)
So that us old users can appreciate it, how long did it take on the real
hardware?
I seem to recall a monitor build on our KI-10 took many hours but as
I wasn't doing it myself I might be out by a bit here....
--
Huw Davies | e-mail: Huw.D...@kerberos.davies.net.au
| "If God had wanted soccer played in the
| air, the sky would be painted green"
I can't answer about the -20 builds. I can talk a little about the
-10, with the caveat that my work dealt with doing things in such
an order that what we put on those tapes could be BINCOMed
with no difference when the customers rebuilt the contents of those
tapes. Our shop also limited the number of batch jobs running.
So, with all those caveats, I had the process of building all CUSPs,
and unsupported CUSPs down to 8 hours (from DEPEN0.CTL to
LOGOUT of the final build) before the files could
be laid out for BACKUP. Note that this included a human checking
each cusp.LOG file for success.
DEPEN0.CTL was the first control file that would start the
dependency builds. It submitted stuff and then DEPEN1.CTL,
which would run after all of the DEPEN0 jobs were completed.
IIRC, my DEPEN?.CTL went from 0-4.
>I seem to recall a monitor build on our KI-10 took many hours but as
>I wasn't doing it myself I might be out by a bit here....
Heh. I bet they had a /COMPILE switch on every line whether
it needed it or not.
If you recompiled everything, it could have taken many hours.
Again, that depended on how many batch jobs could run at the
same time. There was a maximum job limit (ours was usually 3).
The longest compiles were S, F, COMMON, COMMOD, COMDEV.
Those babies were loaded with macros, each of which was expanded
based on the answers given when running MONGEN. Note that these
had to compile successfully before anything else could
be compiled. Most of the other monitor modules took a
couple of minutes of wall-clock time. Our weekly monitor
build procedures routinely built 12-14 monitors (perhaps
more). They were usually done by 17:00 iff the edit was
done early. The edit usually started at 8:30 or so.
/BAH
Yep. Too bad you couldn't use a faster OS.
[impish emoticon failing to resist the barb that was
posted yesterday]
Actually, the files that were slow to compile were all the DECnet-36 stuff
with its intensive use of macros. That junk came from the TOPS-10 losers
as I recall; its cluelessness about TOPS-20 sure looks that way (e.g. the
NMXTIM routine).
No. The clock was 33 or 40 MHz, depending on model, but many
microinstructions took multiple clock cycles, so the effective
microinstruction rate was under 20 MHz.
I don't remember who did DECnet in the -20 monitor. IIRC, DECnet
per se didn't go into the -20 monitor until phase IV with the
ethernet implementation. The previous releases were MCB-based
DECnet. The MCB got done first on the -20, then it came over
to the -10. We hired our own to do ethernet and they didn't
work on the -20.