What is the oldest computer that could be used today for real work?

Jason Evans

unread,
Sep 5, 2021, 5:55:27 AM9/5/21
to
I know this is an odd question, so let me explain what I'm thinking.

First of all, what is "real work"? Let's say that you're a Linux/Unix/BSD
system administrator who spends 90% of his day on the command line. What is
the oldest computer that he could get by with to do his job?

For example: Could you rig up a serial connection from a modern PC to a C64
to get a command prompt on the C64 to use as the interface for the command
line? Sure, it would demote the C64 to a "dumb terminal" but could it work?
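
A rough sketch of what the PC side of such a rig could look like, in Python;
the /dev/ttyUSB0 device path, the 2400 baud rate, and the pyserial dependency
are assumptions here, and a real setup would more likely just run a getty on
the serial port:

    # Hypothetical sketch only: attach an interactive shell to a serial
    # port so whatever terminal sits at the far end sees a command prompt.
    # Assumes a USB serial adapter at /dev/ttyUSB0 and the pyserial package.
    import subprocess
    import serial  # pyserial

    port = serial.Serial("/dev/ttyUSB0", baudrate=2400)  # slow, C64-friendly rate

    # Everything the shell reads and writes goes over the serial line.
    subprocess.run(["/bin/sh", "-i"],
                   stdin=port.fileno(),
                   stdout=port.fileno(),
                   stderr=port.fileno())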

These are the kinds of things that I would like to hear about on this group.

Jason

gareth evans

unread,
Sep 5, 2021, 6:35:26 AM9/5/21
to
Well, by jerking the slide from the middle of your slide rule, you could
use it to move the pebbles of your abacus.


Grant Taylor

unread,
Sep 5, 2021, 12:28:17 PM9/5/21
to
On 9/5/21 3:55 AM, Jason Evans wrote:
> First of all, what is "real work"? Let's say that you're a
> Linux/Unix/BSD system administrator who spends 90% of his day on the
> command line. What is the oldest computer that he could get by with
> to do his job?

The problem is the remaining 10%. (I'm re-using your numbers.)

IMM/RSA/iLO/LOM/iDRAC/etc. consoles are inherently GUI, and they are
invaluable when recovering systems during outages.

Don't forget that email clients /almost/ *need* to be GUI to display
more than simple text ~> attachments. -- We can't forget the venerable
PowerPoint slides that we need to look at before the next meeting.

I would be remiss if I didn't mention video chat, especially with work
from home, which has been quite common for the last ~18 months. Not to
mention conference rooms for geographically dispersed team meetings.

I don't know about you, but I would have a problem justifying my
employment if I didn't participate in that 10%. And I'm quite sure that
CLI /only/ is not sufficient to do so.

I leave you with ...

Link - Terminal forever | CommitStrip
- https://www.commitstrip.com/en/2016/12/22/terminal-forever/



--
Grant. . . .
unix || die

Michael Trew

unread,
Sep 5, 2021, 10:54:18 PM9/5/21
to
I have an IBM Datamaster/System 23 in my basement that is functional
with its original dot matrix printer. I'd have to imagine it can still
do some basic functions like some kind of word processing. I have boxes
and boxes of 8" floppies as well.

https://en.wikipedia.org/wiki/IBM_System/23_Datamaster

J. Clarke

unread,
Sep 6, 2021, 12:06:45 AM9/6/21
to
You can run Unix from a teletype. Not something anyone in their right
mind wants to do these days but you can do it.

The real work in this case is running the Unix system and you already
have a computer if you do that.

Jason Evans

unread,
Sep 6, 2021, 2:30:47 AM9/6/21
to
On Mon, 06 Sep 2021 00:06:43 -0400, J. Clarke wrote:

> You can run Unix from a teletype. Not something anyone in their right
> mind wants to do these days but you can do it.

Linux via ham radio RTTY would be stupid and awesome, lol.

Ahem A Rivet's Shot

unread,
Sep 6, 2021, 4:30:03 AM9/6/21
to
Erm KA9Q was originally TCP/IP over souped up RTTY (aka packet
radio) was it not. OK it was not Linux (that was still in the future) but
it did come with email, usenet, ftp and a multi-tasking kernel to run them
under messy dos - I never saw the CP/M version but 64K is awfully tight for
TCP/IP.

It was awesome and far from stupid.

--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/

gareth evans

unread,
Sep 6, 2021, 6:55:54 AM9/6/21
to
On 06/09/2021 09:14, Ahem A Rivet's Shot wrote:
> On Mon, 6 Sep 2021 06:30:46 -0000 (UTC)
> Jason Evans <jse...@mailfence.com> wrote:
>
>> On Mon, 06 Sep 2021 00:06:43 -0400, J. Clarke wrote:
>>
>>> You can run Unix from a teletype. Not something anyone in their right
>>> mind wants to do these days but you can do it.
>>
>> Linux via ham radio RTTY would be stupid and awesome, lol.
>
> Erm KA9Q was originally TCP/IP over souped up RTTY (aka packet
> radio) was it not. OK it was not Linux (that was still in the future) but
> it did come with email, usenet, ftp and a multi-tasking kernel to run them
> under messy dos - I never saw the CP/M version but 64K is awfully tight for
> TCP/IP.
>
> It was awesome and far from stupid.
>

One needs to be careful about terminology.

RTTY in Ham Radio terms means ITA No2, 5-unit start-stop stuff with
the awkward Figure Shift and Letter Shift keys.

Packet Radio was something else, I know not what, but certainly
8-bit character transmissions.

Gareth G4SDW



Jason Evans

unread,
Sep 6, 2021, 7:12:26 AM9/6/21
to
On Mon, 6 Sep 2021 11:55:48 +0100, gareth evans wrote:

>>>> You can run Unix from a teletype. Not something anyone in their
>
> RTTY in Ham Radio terms means ITA No2, 5-unit start-stop stuff with the
> awkward Figure Shift and Letter Shift keys.
>
> Packet Radio was something else, I know not what, but certainly 8-bit
> character transmissions.
>
> Gareth G4SDW

When J Clarke mentioned teletype, I immediately thought of radioteletype
i.e. RTTY and that's why I mentioned it.

Jason KI4GMX

Jason Evans

unread,
Sep 6, 2021, 7:16:38 AM9/6/21
to
On Mon, 6 Sep 2021 09:14:41 +0100, Ahem A Rivet's Shot wrote:

> Erm KA9Q was originally TCP/IP over souped up RTTY (aka packet
> radio) was it not. OK it was not Linux (that was still in the future)
> but it did come with email, usenet, ftp and a multi-tasking kernel to
> run them under messy dos - I never saw the CP/M version but 64K is
> awfully tight for TCP/IP.
>
> It was awesome and far from stupid.

I meant "stupid" only in the amount of time and effort it would take to
use an old radioteletype machine as a Linux console interface.
It does sound very awesome, though!

Grant Taylor

unread,
Sep 6, 2021, 1:42:26 PM9/6/21
to
On 9/6/21 12:30 AM, Jason Evans wrote:
> Linux via ham radio RTTY would be stupid and awesome, lol.

It's not ham radio RTTY, but it is darned close.

Link - Curious Marc tweets from a TTY.
- https://twitter.com/curious_marc/status/1253216773370867717

Grant Taylor

unread,
Sep 6, 2021, 1:44:56 PM9/6/21
to
On 9/6/21 2:14 AM, Ahem A Rivet's Shot wrote:
> Erm KA9Q was originally TCP/IP over souped up RTTY (aka packet radio)
> was it not. OK it was not Linux (that was still in the future) but
> it did come with email, usenet, ftp and a multi-tasking kernel to
> run them under messy dos - I never saw the CP/M version but 64K is
> awfully tight for TCP/IP.

Was it Usenet (UUCP / NNTP) or FTP proper? Or was it other non-standard
services that provided similar function to the proper services?

Many BBSs, both radio and non-radio, of the time provided similar
functionality without using /Internet/ standard protocols for doing so.

> It was awesome and far from stupid.

~chuckle~

Peter Flass

unread,
Sep 6, 2021, 1:56:11 PM9/6/21
to
This is kind of a bizarre question. Any computer could be used for "real
work" today. They did their thing years ago, and could still do the same
kinds of things today: statistics, engineering calculations, payroll,
inventory, etc. I was going to say the IBM 1130, but just realized any of
them could. Obviously no internet, and things like graphics and relational
databases that require gobs of memory would be out.

--
Pete

Ahem A Rivet's Shot

unread,
Sep 6, 2021, 3:30:02 PM9/6/21
to
On Mon, 6 Sep 2021 11:44:57 -0600
Grant Taylor <gta...@tnetconsulting.net> wrote:

> On 9/6/21 2:14 AM, Ahem A Rivet's Shot wrote:
> > Erm KA9Q was originally TCP/IP over souped up RTTY (aka packet radio)
> > was it not. OK it was not Linux (that was still in the future) but
> > it did come with email, usenet, ftp and a multi-tasking kernel to
> > run them under messy dos - I never saw the CP/M version but 64K is
> > awfully tight for TCP/IP.
>
> Was it Usenet (UUCP / NNTP) or FTP proper? Or was it other non-standard
> services that provided similar function to the proper services?

It was the real thing, there was a pretty good TCP/IP stack in there
and a multi-tasking kernel, the applications were pluggable at build time
but most settled on a variant of Elm for email backed by an SMTP server,
Tin for USENET (NNRP) and I forget where the usual ftp client originated.
When Demon Internet first started offering dial up connections with a
static IP address KA9Q was the standard offering for messy dos.

Sn!pe

unread,
Sep 6, 2021, 4:05:37 PM9/6/21
to
Ahem A Rivet's Shot <ste...@eircom.net> wrote:

> On Mon, 6 Sep 2021 11:44:57 -0600
> Grant Taylor <gta...@tnetconsulting.net> wrote:
>
> > On 9/6/21 2:14 AM, Ahem A Rivet's Shot wrote:
> > > Erm KA9Q was originally TCP/IP over souped up RTTY (aka packet radio)
> > > was it not. OK it was not Linux (that was still in the future) but
> > > it did come with email, usenet, ftp and a multi-tasking kernel to
> > > run them under messy dos - I never saw the CP/M version but 64K is
> > > awfully tight for TCP/IP.
> >
> > Was it Usenet (UUCP / NNTP) or FTP proper? Or was it other non-standard
> > services that provided similar function to the proper services?
>
> It was the real thing, there was a pretty good TCP/IP stack in there
> and a multi-tasking kernel, the applications were pluggable at build time
> but most settled on a variant of Elm for email backed by an SMTP server,
> Tin for USENET (NNRP) and I forget where the usual ftp client originated.
> When Demon Internet first started offering dial up connections with a
> static IP address KA9Q was the standard offering for messy dos.

It worked very well; I began by using KA9Q too, in 1994 with Demon.

--
^Ï^ <https://youtu.be/_kqytf31a8E>

My pet rock Gordon just is.

Grant Taylor

unread,
Sep 6, 2021, 4:54:47 PM9/6/21
to
On 9/6/21 1:07 PM, Ahem A Rivet's Shot wrote:
> It was the real thing, there was a pretty good TCP/IP stack in there
> and a multi-tasking kernel, the applications were pluggable at build
> time but most settled on a variant of Elm for email backed by an
> SMTP server, Tin for USENET (NNRP) and I forget where the usual ftp
> client originated. When Demon Internet first started offering dial
> up connections with a static IP address KA9Q was the standard offering
> for messy dos.

Interesting.

Thank you for confirming.

TIL :-)

John Goerzen

unread,
Sep 6, 2021, 10:30:37 PM9/6/21
to
On 2021-09-05, Jason Evans <jse...@mailfence.com> wrote:
> First of all, what is "real work"? Let's say that you're a Linux/Unix/BSD
> system administrator who spends 90% of his day on the command line. What is

... that fits...

> the oldest computer that he could get by with to do his job?

So I'm going to be "that guy" that says "it depends on what you mean by
computer."

So I have a DEC vt510 that I do still use. It has a serial connection to a
Raspberry Pi, from which I can ssh wherever. I actually enjoy using it as a
"focus mode" break. It was sold as an ANSI terminal. Is it a computer? Well,
it has an 8080 in it IIRC. I do actually use it for doing work on my job from
time to time too.

I also have a Linux box, more modern, that is a Micro PC I used to do backups.
It doesn't permit ssh or such for security reasons. My only way into it is via
serial console or local console. So the vt510 can hook up to that and it is
then doing actual work too. I also have older terminals.

What about older general-purpose machines? I've seen plenty of DOS still
kicking around. Various industrial machinery still uses DOS machines as
controllers or programmers. A lot of the time they are running on more modern
hardware, but also a lot of the time they wouldn't NEED to be; that's just what
is out there. So that takes us back firmly into the 80s.

The DEC PDP-10 was introduced in 1966 and was famously used by CompuServe up
until at least 2007, 41 years later.

Here's an article from 2008 about how Seattle still uses DEC VAXes (released
1977):
https://www.seattletimes.com/seattle-news/education/dinosaur-computer-stalls-seattle-schools-plans/

Here's an article from just last year about how Kansas is still using a
mainframe from 1977 to manage unemployment claims:
https://www.kctv5.com/coronavirus/kansas-department-of-labor-mainframe-is-from-1977/article_40459370-7ac4-11ea-b4db-df529463a7d4.html

No word on what precise type of mainframe that is.
https://www.dol.ks.gov/documents/20121/85583/KDOL+Modernization+Timeline.pdf/d186de09-851b-d996-d235-ad6fb9286fcb?version=1.0&t=1620335465573
gives a clue that it may be some sort of IBM something.
https://governor.kansas.gov/wp-content/uploads/2021/04/GintherTaxCouncilKDOT.pdf
hints that it may be an IBM System 370/Model 145.

> For example: Could you rig up a serial connection from a modern PC to a C64
> to get a command prompt on the C64 to use as the interface for the command
> line? Sure, it would demote the C64 to a "dumb terminal" but could it work?

The display resolution may be tricky, but an old IBM PC certainly would.

- John

Robin Vowels

unread,
Sep 6, 2021, 11:27:51 PM9/6/21
to
Early computers had "gobs of memory" via endless numbers of
punch cards, endless lengths of paper tape and/or magnetic tape.
Some even had graphics. Yesterday I came across a subroutine for
DEUCE, written in 1955, that rotated the display by 90 degrees.
On that same computer was an animated version of a mouse
finding its way around a maze; also of "hickory dickory dock"
with sound effects; and noughts and crosses [tic-tac-toe].

J. Clarke

unread,
Sep 7, 2021, 1:04:01 AM9/7/21
to
I'd be very surprised if it actually was. When did IBM end
maintenance on those?

Scott Lurndal

unread,
Sep 7, 2021, 11:08:06 AM9/7/21
to
John Goerzen <jgoe...@complete.org> writes:
>On 2021-09-05, Jason Evans <jse...@mailfence.com> wrote:
>> First of all, what is "real work"? Let's say that you're a Linux/Unix/BSD
>> system administrator who spends 90% of his day on the command line. What is

>
>The DEC PDP-10 was introduced in 1966 and was famously used by CompuServe up
>until at least 2007, 41 years later.
>
>Here's an article from 2008 about how Seattle still uses DEC VAXes (released
>1977):
>https://www.seattletimes.com/seattle-news/education/dinosaur-computer-stalls-seattle-schools-plans/
>
>Here's an article from just last year about how Kansas is still using a
>mainframe from 1977 to manage unemployment claims:
>https://www.kctv5.com/coronavirus/kansas-department-of-labor-mainframe-is-from-1977/article_40459370-7ac4-11ea-b4db-df529463a7d4.html

Burroughs medium systems, introduced in 1965, were still running the city of
Santa Ana until 2010 (that system was donated to the Living Computer Museum,
which ran it for a couple of years thereafter).

I've a Burroughs/Unisys T27 block-mode terminal hooked up to a medium systems simulator that
still runs today.

Michael Trew

unread,
Sep 7, 2021, 11:34:04 AM9/7/21
to
On 9/6/2021 4:11 PM, Andreas Kohlbach wrote:
> Read about this long ago. Is considered by many to be the "first
> PC". Reminds me on the Apple Lisa which went the right direction but was
> too expensive. So the little less capable but more successful McIntosh
> was released.

I have most all of the manuals as well. It was used as a database in a
radio station in the early 80's. I never actually sat down and figured
out how it works, but it does boot when you flip the switch; numbers
come up on the green screen.

Dennis Boone

unread,
Sep 7, 2021, 12:14:36 PM9/7/21
to
> I have most all of the manuals as well. It was used as a database in a
> radio station in the early 80's. I never actually sat down and figured
> out how it works, but it does boot when you flip the switch; numbers
> come up on the green screen.

User programming is done in BASIC. It's a curious implementation, with
a lot of fairly powerful stuff for business applications, statement
labels, some sort-of cursor editing of statements. Most IBM supplied
utilities are not in BASIC; I think it may have been possible for some
ecosystem developers to get the tooling to do assembler or maybe
compiled development.

There were two types of base machine: the all-in-one type, and a floor
standing one with separate screen and keyboard. Peripherals included
several printers, a twinax-based network for interconnecting stations,
and a hard disk unit that could be shared across the network. 8085
processor, paged address space so that the machine can (and does) have
well over 64k of RAM, and quite a bit of ROM too.

IBM sold various software for them, including a menu-driven application
development system that wrote BASIC applications. There was at least
a small third party ecosystem.

De

John Goerzen

unread,
Sep 7, 2021, 11:10:02 PM9/7/21
to
On 2021-09-07, J Clarke <jclarke...@gmail.com> wrote:
>>https://governor.kansas.gov/wp-content/uploads/2021/04/GintherTaxCouncilKDOT.pdf
>>hints that it may be an IBM System 370/Model 145.
>
> I'd be very surprised if it actually was. When did IBM end
> maintenance on those?

I have no more information, other than that link claims "The Kansas UI System
runs on a Mainframe that was installed in 1977."

Is it possible the hardware was upgraded to something that can emulate the
370/145, and that difference was lost on a non-technical author? Sure.

I have known other places to run mainframes an absurdly long time. I've seen it
in universities and, of course, there's the famous CompuServe PDP-10 story -
though presumably they had more technical know-how to keep their PDP-10s alive.
You are right; it does seem farfetched.

... so I did some more digging, and found
https://ldh.la.gov/assets/medicaid/mmis/docs/IVVRProcurementLibrary/Section3RelevantCorporateExperienceCorporateFinancialCondition.doc
which claims that the "legacy UI system applications run on the Kansas
Department of Administration's OBM OS/390 mainframe."

I know little of IBM's mainframe lineup, but
https://en.wikipedia.org/wiki/IBM_System/390 claims that the System/390 has some
level of compatibility with the S/370.

- John

J. Clarke

unread,
Sep 8, 2021, 12:00:08 AM9/8/21
to
On Tue, 7 Sep 2021 13:06:16 -0000 (UTC), John Goerzen
<jgoe...@complete.org> wrote:

>On 2021-09-07, J Clarke <jclarke...@gmail.com> wrote:
>>>https://governor.kansas.gov/wp-content/uploads/2021/04/GintherTaxCouncilKDOT.pdf
>>>hints that it may be an IBM System 370/Model 145.
>>
>> I'd be very surprised if it actually was. When did IBM end
>> maintenance on those?
>
>I have no more information, other than that link claims "The Kansas UI System
>runs on a Mainframe that was installed in 1977."
>
>Is it possible the hardware was upgraded to something that can emulate the
>370/145, and that difference was lost on a non-technical author? Sure.

A brand new Z can emulate the 370/145. So can my Raspberry Pi if I
don't expect any performance.

>I have known other places to run mainframes an absurdly long time. I've seen it
>in universities and, of course, there's the famous CompuServe PDP-10 story -
>though presumably they had more technical know-how to keep their PDP-10s alive.
>You are right; it does seem farfetched.
>
>... so I did some more digging, and found
>https://ldh.la.gov/assets/medicaid/mmis/docs/IVVRProcurementLibrary/Section3RelevantCorporateExperienceCorporateFinancialCondition.doc
>which claims that the "legacy UI system applications run on the Kansas
>Department of Administration's OBM OS/390 mainframe."
>
>I know little of IBM's mainframe lineup, but
>https://en.wikipedia.org/wiki/IBM_System/390 claims that the System/390 has some
>level of compatibility with the S/370.

From the 360 on, application-level backwards compatibility has been
maintained. I occasionally encounter code today that has dated
comments from the '70s. The OS is tuned for the specific hardware and
new features are provided, but application programmers don't generally
deal with that.

We just transferred our entire system to new hardware; it was done over a
weekend. That's a system that manages a Fortune 100 financial
services company.

A common misconception among people who don't work with mainframes is
that the mainframe you have today is the same as the one that was
installed in the mid '60s. They don't understand that the modern
mainframe is just that, with numerous cores, vast quantities of RAM,
and very high clock speeds that can be sustained under any workload.

Ahem A Rivet's Shot

unread,
Sep 8, 2021, 3:30:03 AM9/8/21
to
On Wed, 08 Sep 2021 00:00:04 -0400
J. Clarke <jclarke...@gmail.com> wrote:

> We just transferred our entire system to new hardware, was done over a
> weekend. That's a system that manages a Fortune 100 financial
> services company.

In some circles they just throw new hardware into the racks and tell
the virtual swarm coordinator that runs their systems where to find the new
hardware or tell the coordinator to stop using an obsolete machine so they
can pull it. The systems never notice.

Peter Flass

unread,
Sep 8, 2021, 2:50:39 PM9/8/21
to
You can still run programs compiled on a 360 on the latest “z” box.

--
Pete

John Goerzen

unread,
Sep 8, 2021, 6:55:19 PM9/8/21
to
On 2021-09-08, Peter Flass <peter...@yahoo.com> wrote:
> You can still run programs compiled on a 360 on the latest “z” box.

I gotta say - that's darn impressive. I'm not aware of anything else that
maintains compatibility that long; am I missing anything?

Dan Espen

unread,
Sep 8, 2021, 8:28:30 PM9/8/21
to
Nope. S/360 in its various flavors is the only survivor of that era.

Before then, IBM kept introducing new incompatible models, each one
programmed in its own assembly language. The promise of S/360 was that
you would never again have to throw out your massive investment in
software.

Object code will still run.

--
Dan Espen

John Levine

unread,
Sep 8, 2021, 9:59:51 PM9/8/21
to
According to Dan Espen <dan1...@gmail.com>:
>John Goerzen <jgoe...@complete.org> writes:
>
>> On 2021-09-08, Peter Flass <peter...@yahoo.com> wrote:
>>> You can still run programs compiled on a 360 on the latest “z” box.
>>
>> I gotta say - that's darn impressive. I'm not aware of anything else that
>> maintains compatibility that long; am I missing anything?
>
>Nope. S/360 in it's various flavors is the only survivor of that era.

I thought the Unisys Clearpath machines still run Univac 1100 code from the 1960s.

--
Regards,
John Levine, jo...@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly

J. Clarke

unread,
Sep 8, 2021, 10:24:22 PM9/8/21
to
On Thu, 9 Sep 2021 01:59:49 -0000 (UTC), John Levine <jo...@taugh.com>
wrote:

>According to Dan Espen <dan1...@gmail.com>:
>>John Goerzen <jgoe...@complete.org> writes:
>>
>>> On 2021-09-08, Peter Flass <peter...@yahoo.com> wrote:
>>>> You can still run programs compiled on a 360 on the latest “z” box.
>>>
>>> I gotta say - that's darn impressive. I'm not aware of anything else that
>>> maintains compatibility that long; am I missing anything?
>>
>>Nope. S/360 in it's various flavors is the only survivor of that era.
>
>I thought the Unisys Clearpath machines still run Unival 1100 code from the 1960s.

Unisys Clearpath is an emulator running on Intel. IBM implements the
Z in purpose-made hardware.

Grant Taylor

unread,
Sep 9, 2021, 1:14:40 AM9/9/21
to
On 9/8/21 8:24 PM, J. Clarke wrote:
> Unisys Clearpath is an emulator running on Intel. IBM implements
> the Z in purpose-made hardware.

IBM implements it in microcode. Which is as much software as it is
hardware.

J. Clarke

unread,
Sep 9, 2021, 1:59:40 AM9/9/21
to
On Wed, 8 Sep 2021 23:14:42 -0600, Grant Taylor
<gta...@tnetconsulting.net> wrote:

>On 9/8/21 8:24 PM, J. Clarke wrote:
>> Unisys Clearpath is an emulator running on Intel. IBM implements
>> the Z in purpose-made hardware.
>
>IBM implements it in microcode. Which is as much software as it is
>hardware.

They did once. Do they still, now that they have (from a 360 architecture
viewpoint) vast quantities of silicon real estate to play with?

Thomas Koenig

unread,
Sep 9, 2021, 9:08:37 AM9/9/21
to
J Clarke <jclarke...@gmail.com> schrieb:
> On Tue, 7 Sep 2021 13:06:16 -0000 (UTC), John Goerzen
><jgoe...@complete.org> wrote:
>
>>On 2021-09-07, J Clarke <jclarke...@gmail.com> wrote:
>>>>https://governor.kansas.gov/wp-content/uploads/2021/04/GintherTaxCouncilKDOT.pdf
>>>>hints that it may be an IBM System 370/Model 145.
>>>
>>> I'd be very surprised if it actually was. When did IBM end
>>> maintenance on those?
>>
>>I have no more information, other than that link claims "The Kansas UI System
>>runs on a Mainframe that was installed in 1977."
>>
>>Is it possible the hardware was upgraded to something that can emulate the
>>370/145, and that difference was lost on a non-technical author? Sure.
>
> A brand new Z can emulate the 370/145. So can my Raspberry Pi if I
> don't expect any performance.

Surely, a higher performance than the original? Bitsavers claims that

"The Model 145 has a variable-length CPU cycle time. Cycle times of
202.5, 247.5, 292.5, and 315 nanoseconds are implemented. The time
required for the CPU to perform operations is made up of combinations of
these cycles. The CPU fetches instructions from processor storage a
doubleword at a time, while data accesses, both fetches and stores, are
made on a word basis. Eight instruction bytes or four data bytes can be
fetched by the CPU in 540 nanoseconds."

Variable-length CPU cycle time sounds strange, but the clock ran at
somewhere between 3.2 and 5 MHz. Not sure what sort of Pi you
have, but even a 700 MHz ARMv6 should be able to run rings around
that old machine in emulation with a factor of more than 100 in
CPU cycle time.
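
A quick back-of-envelope check of that factor, using only the cycle times
quoted above and the 700 MHz host figure (a rough sketch, nothing measured):

    # Rough check of the "factor of more than 100 in CPU cycle time" claim,
    # using the 370/145 cycle times quoted above and a 700 MHz ARMv6 host.
    cycle_times_ns = [202.5, 247.5, 292.5, 315.0]   # 370/145 CPU cycle times
    host_cycle_ns = 1000.0 / 700.0                  # ~1.43 ns per ARM cycle

    for t in cycle_times_ns:
        print(f"{t} ns -> {t / host_cycle_ns:.0f}x one ARM cycle")
    # Even the shortest 370/145 cycle is ~140x longer than one ARM cycle,
    # so ~100 host cycles per emulated cycle would still come out ahead.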

Grant Taylor

unread,
Sep 9, 2021, 1:53:48 PM9/9/21
to
On 9/8/21 11:59 PM, J. Clarke wrote:
> They did once. Do they still when they have from a 360 architecture
> viewpoint vast quantities of silicon real estate to play with?

Absolutely.

If anything they do even more in microcode now than they used to.

The microcode has somewhat become an abstraction layer. The processor
underneath can do whatever it wants and rely on the microcode to be the
abstraction boundary.

There have been multiple episodes of the Terminal Talk podcast that talk
about microcode, millicode, and other very low-level code that falls into
the broader category of firmware.

Scott Lurndal

unread,
Sep 9, 2021, 2:27:09 PM9/9/21
to
Dan Espen <dan1...@gmail.com> writes:
>John Goerzen <jgoe...@complete.org> writes:
>
>> On 2021-09-08, Peter Flass <peter...@yahoo.com> wrote:
>>> You can still run programs compiled on a 360 on the latest “z” box.
>>
>> I gotta say - that's darn impressive. I'm not aware of anything else that
>> maintains compatibility that long; am I missing anything?
>
>Nope. S/360 in it's various flavors is the only survivor of that era.

Actually, that's not precisely true. The Burroughs B5500 still lives on
as the Unisys Clearpath systems, and still supports object files from
the 1960s.

Peter Flass

unread,
Sep 9, 2021, 3:23:08 PM9/9/21
to
I thought that the 5500’s successor systems weren’t object-compatible
with it. I don’t know about the degree of compatibility between the 6000s,
7000s, and 8000s. I’d be happy to be corrected.

--
Pete

Dan Espen

unread,
Sep 9, 2021, 3:44:37 PM9/9/21
to
hmm, I actually have been in contact with some of those systems but had
no idea they went back as far as 64.

--
Dan Espen

Scott Lurndal

unread,
Sep 9, 2021, 5:10:30 PM9/9/21
to
There was a step change between the B5500 and the B6500; after that they
were binary compatible (e-mode in the early 1980s added support for larger
memory, but still ran old codefiles).

Scott Lurndal

unread,
Sep 9, 2021, 5:14:48 PM9/9/21
to
Here's a video from 1968 on the B6500. I worked at that plant in Pasadena
in the 1980s.

https://www.youtube.com/watch?v=rNBtjEBYFPk

The family really started with the B5000

https://www.youtube.com/watch?v=K3q5n1mR9iM

which was quickly superseded by the B5500:

https://www.youtube.com/watch?v=KswWJ6zvBUs

Dan Espen

unread,
Sep 9, 2021, 9:02:26 PM9/9/21
to
sc...@slp53.sl.home (Scott Lurndal) writes:

> Peter Flass <peter...@yahoo.com> writes:
>>Scott Lurndal <sc...@slp53.sl.home> wrote:
>>> Dan Espen <dan1...@gmail.com> writes:
>>>> John Goerzen <jgoe...@complete.org> writes:
>>>>
>>>>> On 2021-09-08, Peter Flass <peter...@yahoo.com> wrote:
>>>>>> You can still run programs compiled on a 360 on the latest “z” box.
>>>>>
>>>>> I gotta say - that's darn impressive. I'm not aware of anything else that
>>>>> maintains compatibility that long; am I missing anything?
>>>>
>>>> Nope. S/360 in it's various flavors is the only survivor of that era.
>>>
>>> Actually, that's not precisely true. The Burroughs B5500 still lives on
>>> as the Unisys Clearpath systems, and still supports object files from
>>> the 1960s.
>>>
>>
>>? I thought that the 5500’s successor systems weren’t object-compatible
>>with it. I don’t know about the degree of compatibility between the 6000s,
>>7000s, and 8000s. Id be happy to be corrected.
>
> There was a step change between the B5500 and the B6500; after than they
> were binary compatible (e-mode in the early 1980s added support for larger
> memory, but still ran old codefiles).

I see a date of 1969 for the B6500.
That gives the title back to S/360.

I had to support a project moving Unisys code to z-Arch.
We had persistent performance issues, the mainframe just couldn't deal
with loading lots of small programs while the app was running.
I see Unisys is naturally reentrant. That probably had a lot to do with
the problems we were having.

--
Dan Espen

John Levine

unread,
Sep 9, 2021, 9:35:36 PM9/9/21
to
According to Thomas Koenig <tko...@netcologne.de>:
>> A brand new Z can emulate the 370/145. So can my Raspberry Pi if I
>> don't expect any performance.
>
>Surely, a higher performance than the original? ...

CPU speed, sure, but the point of a mainframe is that it has high performance
peripherals. A /145 could have up to four channels and could attach several
dozen disk drives.

These days one SSD holds more data than two dozen 2314 disks, but I wouldn't
think a Pi has particularly high I/O bandwidth.

Scott Lurndal

unread,
Sep 9, 2021, 11:19:57 PM9/9/21
to
Dan Espen <dan1...@gmail.com> writes:
>sc...@slp53.sl.home (Scott Lurndal) writes:
>
>> Peter Flass <peter...@yahoo.com> writes:
>>>Scott Lurndal <sc...@slp53.sl.home> wrote:
>>>> Dan Espen <dan1...@gmail.com> writes:
>>>>> John Goerzen <jgoe...@complete.org> writes:
>>>>>
>>>>>> On 2021-09-08, Peter Flass <peter...@yahoo.com> wrote:
>>>>>>> You can still run programs compiled on a 360 on the latest “z” box.
>>>>>>
>>>>>> I gotta say - that's darn impressive. I'm not aware of anything else that
>>>>>> maintains compatibility that long; am I missing anything?
>>>>>
>>>>> Nope. S/360 in it's various flavors is the only survivor of that era.
>>>>
>>>> Actually, that's not precisely true. The Burroughs B5500 still lives on
>>>> as the Unisys Clearpath systems, and still supports object files from
>>>> the 1960s.
>>>>
>>>
>>>? I thought that the 5500’s successor systems weren’t object-compatible
>>>with it. I don’t know about the degree of compatibility between the 6000s,
>>>7000s, and 8000s. Id be happy to be corrected.
>>
>> There was a step change between the B5500 and the B6500; after than they
>> were binary compatible (e-mode in the early 1980s added support for larger
>> memory, but still ran old codefiles).
>
>I see a date of 1969 for the B6500.
>That gives the title back to S/360.

I wouldn't be surprised to find that B5000 applications
ran on the B6500 - Burroughs was good about backwards
compatibility - as shown by the B3500 line, which ran
the original binaries through end of life (last system
powered off in 2010 so far as I'm aware - a 45 year run).

Someone pointed to this, which talks about the
Pasadena plant and the development of the B5000
line. I hadn't realized that Cliff Berry had
any connection to Burroughs when I was working
there; he is one of my school's most famous alumni.

http://www.digm.com/UNITE/2019/2019-Origins-Burroughs-Algol.pdf


>
>I had to support a project moving Unisys code to z-Arch.
>We had persistent performance issues, the mainframe just couldn't deal
>with loading lots of small programs while the app was running.

The Burroughs systems were all designed to be very easy
to use and to program.

>I see Unisys is naturally reentrant. That probably had a lot to do with
>the problems we were having.

Yes, it was quite advanced for the day. The capability model
that Burroughs invented with the large systems line is being
investigated for new processor architectures today, see for example CHERI.

J. Clarke

unread,
Sep 9, 2021, 11:52:24 PM9/9/21
to
On Fri, 10 Sep 2021 01:35:34 -0000 (UTC), John Levine
<jo...@taugh.com> wrote:

>According to Thomas Koenig <tko...@netcologne.de>:
>>> A brand new Z can emulate the 370/145. So can my Raspberry Pi if I
>>> don't expect any performance.
>>
>>Surely, a higher performance than the original? ...

No. The pi is painfully slow running the emulator.

>CPU speed, sure, but the point of a mainframe is that it has high performance
>peripherals. A /145 could have up to four channels and could attach several
>dozen disk drives.
>
>These days one SSD holds more data than two dozen 2314 disks, but I wouldn't
>think a Pi has particularly high I/O bandwidth.

Its I/O is a USB port, Gigabit Ethernet, and wifi. It can also implement I/O
with a USART, but that's very limited bandwidth.

Anne & Lynn Wheeler

unread,
Sep 9, 2021, 11:56:28 PM9/9/21
to
John Levine <jo...@taugh.com> writes:
> CPU speed, sure, but the point of a mainframe is that it has high performance
> peripherals. A /145 could have up to four channels and could attach several
> dozen disk drives.
>
> These days one SSD holds more data than two dozen 2314 disks, but I wouldn't
> think a Pi has particularly high I/O bandwidth.

raspberry Pi 4 specs and benchmarks (2 yrs ago)
https://magpi.raspberrypi.org/articles/raspberry-pi-4-specs-benchmarks

SoC: Broadcom BCM2711B0 quad-core A72 (ARMv8-A) 64-bit @ 1.5GHz
GPU: Broadcom VideoCore VI
Networking: 2.4GHz and 5GHz 802.11b/g/n/ac wireless LAN
RAM: 1GB, 2GB, or 4GB LPDDR4 SDRAM
Bluetooth: Bluetooth 5.0, Bluetooth Low Energy (BLE)
GPIO: 40-pin GPIO header, populated
Storage: microSD
Ports: 2x micro-HDMI 2.0, 3.5mm analogue audio-video jack, 2x USB 2.0,
2x USB 3.0, Gigabit Ethernet, Camera Serial Interface (CSI), Display Serial Interface (DSI)
Dimensions: 88mm x 58mm x 19.5mm, 46g

linpack mips 925MIPS, 748MIPS, 2037MIPS
memory bandwidth (1MB blocks r&w) 4129/sec, 4427/sec
USB storage thruput (megabytes/sec r&w) 353mbytes/sec, 323mbytes/sec

more details
https://en.wikipedia.org/wiki/Raspberry_Pi
best picks Pi microSD cards (32gbytes)
https://www.tomshardware.com/best-picks/raspberry-pi-microsd-cards

===

by comparison, 145 would be .3MIPS and 512kbyte memory.

2314 capacity 29mbytes ... need 34 2314s/gbyte or 340 2314s for
10gbytes
https://www.ibm.com/ibm/history/exhibits/storage/storage_2314.html
2314 disk rate 312kbytes/sec ... ignoring channel program overhead, disk
access, etc., assuming that all four 145 channels would continuously be doing
disk i/o transfer at a sustained 312kbytes/sec ... that is a theoretical
1.2mbytes/sec
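
The same back-of-envelope sums written out, using only the figures above:

    # Back-of-envelope arithmetic using the figures above.
    mbytes_per_2314 = 29                    # one 2314 pack
    print(1000 // mbytes_per_2314)          # ~34 packs per gigabyte
    print(10 * 1000 // mbytes_per_2314)     # ~340+ packs for 10 gigabytes

    kbytes_per_sec = 312                    # sustained 2314 transfer rate
    channels = 4                            # all four 145 channels busy
    print(channels * kbytes_per_sec)        # 1248 kbytes/sec, ~1.2 mbytes/sec ceiling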

trivia: after transferring to San Jose Research (bldg28), I got roped
into playing disk engineer part time (across the street in
bldg14&15). The 3830 controller for 3330 & 3350 disk drives was replaced
with 3880 controller for 3380 disk drives. While 3880 had special
hardware data path for handling 3380 3mbyte/sec transfer ... it had a
microprocessor that was significantly slower than 3830 for everything
else ... which drastically drove up channel busy overhead ... especially
for the channel program chatter latency between processor and
controller.

The 3090 folks had configured number of channels, assuming the 3880
would be similar to 3830 but handling 3mbyte data transfer ... when they
found out how bad the 3880 channel busy really was ... they realized
they would have to drastically increase the number of channels. The
channel number increase required an extra (very expensive) TCM (there
were jokes that the 3090 office was going to charge the 3880 office for
the increase in 3090 manufacturing cost). Eventually marketing respun
big increase in number of channels (to handle the half-duplex chatter
channel busy overhead) as how great all the 3090 channels were.

Other trivia: in 1980, IBM STL (lab) was bursting at the seams and they
were moving 300 people from the IMS DBMS development group to an
offsite bldg, with dataprocessing back to the STL datacenter. The group had
tried "remote" 3270 terminal support and found the human factors totally
unacceptable. I got conned into doing channel-extender support so they
could put local channel-connected 3270 controllers at the offsite bldg
(with no perceived difference in human factors offsite and in STL).

The hardware vendor tried to get IBM to release my support, but there
were some people in POK playing with some serial stuff who got it
vetoed (they were worried that if it was in the market, it would be harder
to justify releasing their stuff). Then in 1988, I'm asked to help LLNL
standardize some serial stuff they are playing with ... which quickly
becomes the fibre channel standard (including some stuff I had done in
1980), initially 1gbit (100mbyte) full-duplex (2gbit, aka 200mbyte,
aggregate).

In 1990, the POK people get their stuff released with ES/9000 as ESCON
(when it is already obsolete, around 17mbyte aggregate). Later some of
the POK people start playing with the fibre channel standard and define a
heavyweight protocol that drastically cuts the native throughput, which
is finally released as FICON.

The latest published benchmark I can find is "peak I/O" for the z196, which
used 104 FICON (running over 104 fibre channel) to get 2M IOPS. About
the same time there was a fibre channel announced for an E5-2600 blade
claiming over a million IOPS; two such fibre channels would get higher
(native) throughput than 104 FICON running over 104 fibre channel.

--
virtualization experience starting Jan1968, online at home since Mar1970

J. Clarke

unread,
Sep 10, 2021, 1:17:13 AM9/10/21
to
I suppose I could compile Linpack under z/OS on the pi and see what
it actually does. I'm not that ambitious though. Native on the pi
doesn't count.

Thomas Koenig

unread,
Sep 10, 2021, 11:54:12 AM9/10/21
to
John Levine <jo...@taugh.com> schrieb:
> According to Thomas Koenig <tko...@netcologne.de>:
>>> A brand new Z can emulate the 370/145. So can my Raspberry Pi if I
>>> don't expect any performance.
>>
>>Surely, a higher performance than the original? ...
>
> CPU speed, sure, but the point of a mainframe is that it has high performance
> peripherals. A /145 could have up to four channels and could attach several
> dozen disk drives.

The "Functional characteristics" document from 1972 from Bitsavers
gives a maximum rate per channel of 1.85 MB per second with a
word buffer installed, plus somewhat lower figures for four channels
for a total of 5.29 MB/s (which would be optimum).

> These days one SSD holds more data than two dozen 2314 disks, but I wouldn't
> think a Pi has particularly high I/O bandwidth.

A single USB2 port can do around 53 MB/s theoretical maximum, a factor
of approximately 10 vs. the 370/145. I didn't look up the speed
of the Pi's SSD.

Thomas Koenig

unread,
Sep 10, 2021, 11:57:40 AM9/10/21
to
J Clarke <jclarke...@gmail.com> schrieb:
> On Fri, 10 Sep 2021 01:35:34 -0000 (UTC), John Levine
><jo...@taugh.com> wrote:
>
>>According to Thomas Koenig <tko...@netcologne.de>:
>>>> A brand new Z can emulate the 370/145. So can my Raspberry Pi if I
>>>> don't expect any performance.
>>>
>>>Surely, a higher performance than the original? ...
>
> No. The pi is painfully slow running the emulator.

Slower than the original? How many cycles per S/360 instruction does
it take? If it was really slower than the original, it would have
to be more than 100 cycles per instruction. I find that hard to
believe.

Anne & Lynn Wheeler

unread,
Sep 10, 2021, 12:50:04 PM9/10/21
to

Thomas Koenig <tko...@netcologne.de> writes:
> Slower than the original? How many cycles per S/360 instruction does
> it take? If it was really slower than the original, it would have
> to be more than 100 cycles per instruction. I find that hard to
> believe.

Endicott cons me into helping do ECPS for 138/148. I was told that the
low/mid 370s, 115-148, averaged ten native instructions per emulated 370
instruction (i.e. the 80kips 370/115 had an 800kips engine, the 120kips
370/125 had a 1.2mips engine, etc.) and the 138/148 had 6kbytes of
available microcode storage. I was to identify the 6kbytes of
highest-executed kernel instructions for moving to microcode (on a
roughly byte-for-byte basis). Old archived post with the analysis:
http://www.garlic.com/~lynn/94.html#21

6kbytes of kernel instruction pathlength accounted for 79.55% of kernel
execution time ... dropped directly into microcode, it would run 10 times
faster. Also implemented for the later 4331/4341.
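
Plugging the two quoted figures (79.55% of kernel time, 10x once in
microcode) into Amdahl's law gives roughly a 3.5x kernel speedup; the
Amdahl's-law framing here is illustrative, not from the original analysis:

    # Amdahl's-law view of the ECPS numbers above (framing is illustrative;
    # the 79.55% and 10x figures are the ones quoted in the post).
    hot_fraction = 0.7955     # share of kernel time in the 6kbytes of hottest paths
    microcode_speedup = 10.0  # those paths run ~10x faster in microcode

    overall = 1.0 / ((1.0 - hot_fraction) + hot_fraction / microcode_speedup)
    print(f"overall kernel speedup ~ {overall:.1f}x")   # ~3.5x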

In the early 80s I got permission to give how-to ECPS presentations at
the local user group's (silicon valley) monthly baybunch meetings and would
get lots of questions from the Amdahl people.

They would say that IBM had started doing lots of trivial microcode
implementations for the 3033 which would be required for MVS to
run. Amdahl eventually responded with "macrocode" ... effectively a
370-like instruction set that ran in microcode mode ... where Amdahl
could implement the 3033 microcode changes much more easily, with much
less effort.

Note the low/mid range 370s had vertical instruction microcode
processors (i.e. programming like cics/risc processors). high-end 370
had horizontal microcode and typically expressed in the avg. number of
machine cycles per 370 instruction. The 370/165 ran 2.1 machine cycles
per 370 instruction. That was optimized for 370/168 to 1.6 machine
cycles per 370 instruction. The 3033 started out as 168-3 logic remapped
to 20% faster chips, and the microcode was further optimized to one
machine cycle per 370 instruction. It was claimed that all those 3033
microcode tweaks ran at the same speed as 370 code (and in some cases slower).

Amdahl was then using macrocode to implement hypervisor support ...
subset of virtual machines support w/o needing vm370 ... which took IBM
several years to respond with PR/SM and LPAR in native horizontal
microcode (well after 3081 and well into 3090 product life).

some years later after retiring from IBM ... I was doing some stuff with
http://www.funsoft.com/

and their experience with emulating 370 (an average of ten native
instructions per 370 instruction) was about the same as the low & mid-range
370s ... although they had some other tweaks that could dynamically
translate high-use instruction paths directly into native code on-the-fly
(getting a 10:1 improvement).

I believe hercules is somewhat similar
https://en.wikipedia.org/wiki/Hercules_(emulator)

Andreas Kohlbach

unread,
Sep 10, 2021, 12:51:26 PM9/10/21
to
Having the issue here with an (aged) AMD PC. For years now (software
bloat over the years, I assume) it has no longer been able to emulate a
Commodore 64 at full speed (around 1 MHz for the 6510 CPU), while the host
is supposed to run at 780 MHz (according to /proc/cpuinfo here in Linux), so
780 times faster. Years ago it was able to emulate the C64 at its max
speed, while I could run other tasks on the host at the same time.
--
Andreas

Scott Lurndal

unread,
Sep 10, 2021, 4:45:42 PM9/10/21
to
Thomas Koenig <tko...@netcologne.de> writes:
>John Levine <jo...@taugh.com> schrieb:
>> According to Thomas Koenig <tko...@netcologne.de>:
>>>> A brand new Z can emulate the 370/145. So can my Raspberry Pi if I
>>>> don't expect any performance.
>>>
>>>Surely, a higher performance than the original? ...
>>
>> CPU speed, sure, but the point of a mainframe is that it has high performance
>> peripherals. A /145 could have up to four channels and could attach several
>> dozen disk drives.
>
>The "Functional characteristics" document from 1972 from Bitsavers
>gives a maximum rate per channel of 1.85 MB per second with a
>word buffer installed, plus somewhat lower figures for four channels
>for a total of 5.29 MB/s (which would be optimum).

But it is unlikely that a single drive was dense enough to drive
anywhere near that rate. Regardless of the channel speed, the
drive is limited by how fast it can get the data off the platter.

Burroughs disk channels had similar transfer rates and supported
multiple independently seeking drives on a single channel (up
to 16) to use the available bandwidth. The I/O controller on
the B4900 was rated at 8MB/sec across 32 channels.

USB3.0 on a raspberry pi crushes that by orders of magnitude.

>
>> These days one SSD holds more data than two dozen 2314 disks, but I wouldn't
>> think a Pi has particularly high I/O bandwidth.
>
>A single USB2 port can do around 53 MB/s theoretical maximum, a factor
>of approximately 10 vs. the 370/145. I didn't look up the speed
>of the Pi's SSD.

The fastest NVME SSDs can read three GByte/second and write one Gbyte/second.

The fastest USB SSDs are limited to 600MByte/sec, but few can reach that speed.

As NVME simply requires a PCI express port, which is available on many raspberry
pi boards, the max I/O speed for a pi is the speed of a single PCI Express Gen 3
(1GByte/s) or Gen 4 (2Gbytes/s) lane depending on the pi. Might even see
Gen 5 in the next couple of years (4Gbytes/sec) in future Pi processors.

John Levine

unread,
Sep 10, 2021, 6:03:49 PM9/10/21
to
According to Scott Lurndal <sl...@pacbell.net>:
>>The "Functional characteristics" document from 1972 from Bitsavers
>>gives a maximum rate per channel of 1.85 MB per second with a
>>word buffer installed, plus somewhat lower figures for four channels
>>for a total of 5.29 MB/s (which would be optimum).
>
>But it is unlikely that a single drive was dense enough to drive
>anywhere near that rate. Regardless of the channel speed, the
>drive is limited by how fast it can get the data off the platter.

That's why there were four channels. Each channel can have a
disk transfer going. An IBM web page said a 2314 could transfer
312K bytes per second, so the fastest burst speed would be four
times that, say 1.2M bytes/sec.

I'd think later on people would be more likely to use 3330 or 3340
disks, which were 800K bytes/sec so say total data rate of 3.2MB/sec.

>USB3.0 on a raspberry pi crushes that by orders of magnitude.

Yeah, I would think so. No seek time on your SSD either.

Peter Flass

unread,
Sep 10, 2021, 9:41:00 PM9/10/21
to
Scott Lurndal <sc...@slp53.sl.home> wrote:
> Dan Espen <dan1...@gmail.com> writes:
>> sc...@slp53.sl.home (Scott Lurndal) writes:
>>
>>> Dan Espen <dan1...@gmail.com> writes:
>>>> John Goerzen <jgoe...@complete.org> writes:
>>>>
>>>>> On 2021-09-08, Peter Flass <peter...@yahoo.com> wrote:
>>>>>> You can still run programs compiled on a 360 on the latest z box.
>>>>>
>>>>> I gotta say - that's darn impressive. I'm not aware of anything else that
>>>>> maintains compatibility that long; am I missing anything?
>>>>
>>>> Nope. S/360 in it's various flavors is the only survivor of that era.
>>>
>>> Actually, that's not precisely true. The Burroughs B5500 still lives on
>>> as the Unisys Clearpath systems, and still supports object files from
>>> the 1960s.
>>
>> hmm, I actually have been in contact with some of those systems but had
>> no idea they went back as far as 64.
>
> Here's a video from 1968 on the B6500. I worked at that plant in Pasadena
> in the 1980s.
>
> https://www.youtube.com/watch?v=rNBtjEBYFPk

White shirts and ties, sideburns, and cigarettes in the office.


>
> The family really started with the B5000
>
> https://www.youtube.com/watch?v=K3q5n1mR9iM
>
> which was quickly superceded by the B5500:
>
> https://www.youtube.com/watch?v=KswWJ6zvBUs
>



--
Pete

Peter Flass

unread,
Sep 10, 2021, 9:41:01 PM9/10/21
to
You could have reentrant programs on S/360, too, but they had to be coded
as reentrant. I believe all HLLs would generate reentrant code, unless you
deliberately wrote them to be otherwise. There were lots of tuning
techniques you could use to optimize the “lots of small programs,” too, but
they weren’t automatic. That is what CICS is really good at. I thought
UNIVAC TIP would be a dog compared to CICS, because it did just run lots
of small programs.
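
A language-neutral sketch of the static-versus-automatic distinction
(illustration only, nothing to do with actual S/360 code generation):

    # Illustration only: why per-task ("automatic") working storage is
    # reentrant and shared ("static") working storage is not.

    _work = {"total": 0}          # one shared copy, like static storage

    def sum_static(values):
        # Not reentrant: two tasks running this at once clobber _work.
        _work["total"] = 0
        for v in values:
            _work["total"] += v
        return _work["total"]

    def sum_automatic(values):
        # Reentrant: working storage is allocated per invocation.
        total = 0
        for v in values:
            total += v
        return total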

--
Pete

Dan Espen

unread,
Sep 10, 2021, 10:15:20 PM9/10/21
to
This was a C project and the Unisys code invoked lots of mains.
The IBM LE code to establish reentrancy (mainly building the WSA)
was a major player in the slowness. I'm guessing that Unisys
had more efficient ways of establishing reentrancy.

--
Dan Espen

maus

unread,
Sep 11, 2021, 2:42:03 AM9/11/21
to
On 2021-09-10, Scott Lurndal <sc...@slp53.sl.home> wrote:
> Thomas Koenig <tko...@netcologne.de> writes:
>>John Levine <jo...@taugh.com> schrieb:
>>> According to Thomas Koenig <tko...@netcologne.de>:
>>>>> A brand new Z can emulate the 370/145. So can my Raspberry Pi if I
>>>>> don't expect any performance.
>>>>
>>>>Surely, a higher performance than the original? ...
>>>
>>> CPU speed, sure, but the point of a mainframe is that it has high performance
>>> peripherals. A /145 could have up to four channels and could attach several
>>> dozen disk drives.
>>
>>The "Functional characteristics" document from 1972 from Bitsavers
>>gives a maximum rate per channel of 1.85 MB per second with a
>>word buffer installed, plus somewhat lower figures for four channels
>>for a total of 5.29 MB/s (which would be optimum).
>
> But it is unlikely that a single drive was dense enough to drive
> anywhere near that rate. Regardless of the channel speed, the
> drive is limited by how fast it can get the data off the platter.
>
>
> The fastest NVME SSDs can read three GByte/second and write one Gbyte/second.
>
> The fastest USB SSDs are limited to 600MByte/sec, but few can reach that speed.
>
> As NVME simply requires a PCI express port, which is available on many raspberry
> pi boards, the max I/O speed for a pi is the speed of a single PCI Express Gen 3
> (1GByte/s) or Gen 4 (2Gbytes/s) lane depending on the pi. Might even see
> Gen 5 in the next couple of years (4Gbytes/sec) in future Pi processors.


I have several Pi's, and only in the last have I found what I think is
a grievious error, installed heat sinks, turned it on. After a few
minutes I noticed a searing pain where my hand was leaning on one of
the heat sinks.

grey...@mail.com

Thomas Koenig

unread,
Sep 11, 2021, 3:34:59 AM9/11/21
to
Andreas Kohlbach <a...@spamfence.net> schrieb:
> On Fri, 10 Sep 2021 15:57:39 -0000 (UTC), Thomas Koenig wrote:
>>
>> J Clarke <jclarke...@gmail.com> schrieb:
>>> On Fri, 10 Sep 2021 01:35:34 -0000 (UTC), John Levine
>>><jo...@taugh.com> wrote:
>>>
>>>>According to Thomas Koenig <tko...@netcologne.de>:
>>>>>> A brand new Z can emulate the 370/145. So can my Raspberry Pi if I
>>>>>> don't expect any performance.
>>>>>
>>>>>Surely, a higher performance than the original? ...
>>>
>>> No. The pi is painfully slow running the emulator.
>>
>> Slower than the original? How many cycles per S/360 instruction does
>> it take? If it was really slower than the original, it would have
>> to be more than 100 cycles per instruction. I find that hard to
>> believe.
>
> Having the issue here with an (aged) AMD PC. It's since years (software
> bloat over the years I assume) no longer able to emulated a Commodore 64
> with full speed (around 1 MHz for the 6510 CPU), while the host is
> supposed to run at 780 MHz (according to /proc/cpuinfo here in Linux), so
> 780 times faster.

That is a bit different. An emulator for a whole C-64 including
graphics and sound has to do much more work than an emulator for a
370/145 which did computation and I/O.

> Yeas ago it was able to emulate the C64 at its max
> speed, while I could run other tasks on the host at the same time.

"Other tasks" including running a browser?

I agree that modern software, also for Linux, has become incredibly
bloated. IIRC, the first Linux I ran at home was Slackware
0.99-something on a 486 with 4 MB running a simple window manager
and xterm. It wasn't as nice as the HP workstations I used
at the university, but it ran well enough.

Now... not a chance of getting things going with that setup.

Thomas Koenig

unread,
Sep 11, 2021, 3:36:52 AM9/11/21
to
Peter Flass <peter...@yahoo.com> schrieb:

> You could have reentrant programs on S/360, too, but they had to be coded
> as reentrant. I believe all HLLs would generate reentrant code, unless you
> deliberately wrote them to be otherwise.

Fortran was not reentrant (at least not by default); it used the
standard OS/360 linkage convention.

Thomas Koenig

unread,
Sep 11, 2021, 7:36:42 AM9/11/21
to
Peter Flass <peter...@yahoo.com> schrieb:
> Scott Lurndal <sc...@slp53.sl.home> wrote:
>> Dan Espen <dan1...@gmail.com> writes:
>>> sc...@slp53.sl.home (Scott Lurndal) writes:
>>>
>>>> Dan Espen <dan1...@gmail.com> writes:
>>>>> John Goerzen <jgoe...@complete.org> writes:
>>>>>
>>>>>> On 2021-09-08, Peter Flass <peter...@yahoo.com> wrote:
>>>>>>> You can still run programs compiled on a 360 on the latest z box.
>>>>>>
>>>>>> I gotta say - that's darn impressive. I'm not aware of anything else that
>>>>>> maintains compatibility that long; am I missing anything?
>>>>>
>>>>> Nope. S/360 in it's various flavors is the only survivor of that era.
>>>>
>>>> Actually, that's not precisely true. The Burroughs B5500 still lives on
>>>> as the Unisys Clearpath systems, and still supports object files from
>>>> the 1960s.
>>>
>>> hmm, I actually have been in contact with some of those systems but had
>>> no idea they went back as far as 64.
>>
>> Here's a video from 1968 on the B6500. I worked at that plant in Pasadena
>> in the 1980s.
>>
>> https://www.youtube.com/watch?v=rNBtjEBYFPk
>
> White shirts and ties, sideburns, and cigarettes in the office.

I have a similar video from the early 1970s about a building I came
to work in a few decades earlier. Same style.

Unfortunately, I do not think I can share it, there are probably
all sorts of legal obstacles, such as the personality rights of
the persons who are shown working in it.

J. Clarke

unread,
Sep 11, 2021, 8:55:57 AM9/11/21
to
I think this is mostly moot. A maxed out 360 would have under a gig
of DASD. On a pi 4 with 4 or 8 gig of RAM there's enough to buffer
the entire system, so our emulated 360, DASD and all, would be running
mostly RAM resident.

I think we forget how _immense_ the capacity of even a good _watch_ is
by '60s standards.

Scott Lurndal

unread,
Sep 11, 2021, 11:26:07 AM9/11/21
to
Peter Flass <peter...@yahoo.com> writes:
>Scott Lurndal <sc...@slp53.sl.home> wrote:
>> Dan Espen <dan1...@gmail.com> writes:
>>> sc...@slp53.sl.home (Scott Lurndal) writes:
>>>
>>>> Dan Espen <dan1...@gmail.com> writes:
>>>>> John Goerzen <jgoe...@complete.org> writes:
>>>>>
>>>>>> On 2021-09-08, Peter Flass <peter...@yahoo.com> wrote:
>>>>>>> You can still run programs compiled on a 360 on the latest z box.
>>>>>>
>>>>>> I gotta say - that's darn impressive. I'm not aware of anything else that
>>>>>> maintains compatibility that long; am I missing anything?
>>>>>
>>>>> Nope. S/360 in it's various flavors is the only survivor of that era.
>>>>
>>>> Actually, that's not precisely true. The Burroughs B5500 still lives on
>>>> as the Unisys Clearpath systems, and still supports object files from
>>>> the 1960s.
>>>
>>> hmm, I actually have been in contact with some of those systems but had
>>> no idea they went back as far as 64.
>>
>> Here's a video from 1968 on the B6500. I worked at that plant in Pasadena
>> in the 1980s.
>>
>> https://www.youtube.com/watch?v=rNBtjEBYFPk
>
>White shirts and ties, sideburns, and cigarettes in the office.

It wasn't until 1986 that smoking in the Pasadena Burroughs plant
was fully banned. I moved into an office in 1985 and I had to scrub
every surface to remove the nicotine stains and odor.

Andy Burns

unread,
Sep 11, 2021, 3:02:33 PM9/11/21
to
Scott Lurndal wrote:

> As NVME simply requires a PCI express port, which is available on many raspberry
> pi boards

The latest pi compute module has PCIe and various breakout boards make
it available as a normal X1 slot, but other than that, I thought to get
access to PCIe on any other pi required de-soldering the USB chip?

Scott Lurndal

unread,
Sep 11, 2021, 3:42:42 PM9/11/21
to
Hence "many" instead of "all".

John Levine

unread,
Sep 11, 2021, 8:02:24 PM9/11/21
to
According to Thomas Koenig <tko...@netcologne.de>:
>> You could have reentrant programs on S/360, too, but they had to be coded
>> as reentrant. I believe all HLLs would generate reentrant code, unless you
>> deliberately wrote them to be otherwise.
>
>Fortran was not reentrant (at least not by default), it used the
>standard OS/360 linkage convention.

You could write reentrant code that used the standard linkage scheme, but
it was fiddly and required extra conventions for handling the dynamic storage areas.

Fortran and Cobol were never reentrant. PL/I could be if you used the
REENTRANT option in your source code. The PL/I programmers' guides
have examples of calling reentrant and non-reentrant assembler code.

My impression is that most reentrant code was written in assembler and preloaded
at IPL time to be used as shared libraries.

Andy Burns

unread,
Sep 12, 2021, 6:45:06 AM9/12/21
to
Scott Lurndal wrote:

> Andy Burns writes:
>
>> The latest pi compute module has PCIe and various breakout boards make
>> it available as a normal X1 slot, but other than that, I thought to get
>> access to PCIe on any other pi required de-soldering the USB chip?
>
> Hence "many" instead of "all".

It'd be "a few" in my book.


J. Clarke

unread,
Sep 12, 2021, 12:24:26 PM9/12/21
to
On Sun, 12 Sep 2021 11:45:01 +0100, Andy Burns <use...@andyburns.uk>
wrote:
I think "hardly any" comes closer.

Peter Flass

unread,
Sep 13, 2021, 2:39:54 PM9/13/21
to
Okay, I guess just PL/I and assembler then. I’m not sure CICS supported
FORTRAN. The linkage convention isn’t the problem; FORTRAN and COBOL used
only static storage for data, rather than automatic per-task data. For PL/I
you just had to not modify STATIC storage (AUTOMATIC is the default), or at
least be careful about how you modified it.
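
For illustration, the storage-class distinction in miniature, with C
standing in for the mainframe languages (OS/360 linkage itself is not
shown): static working data is one copy shared by every caller, while
automatic data is private to each invocation, which is what reentrancy
needs.

    #include <stdio.h>

    /* Static working data has a single shared copy, so two tasks entering
     * the routine concurrently would clobber each other's state; automatic
     * data gets a fresh copy per call. */

    int next_serial(void)
    {
        static int counter = 0;   /* one shared copy: not reentrant */
        return ++counter;
    }

    int scaled(int x)
    {
        int work = x * 2;         /* automatic: private to this call */
        return work + 1;
    }

    int main(void)
    {
        int a = next_serial();
        int b = next_serial();
        printf("%d %d\n", a, b);     /* 1 2: state persists across calls */
        printf("%d\n", scaled(20));  /* 41 */
        return 0;
    }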

--
Pete

Branimir Maksimovic

unread,
Sep 17, 2021, 11:57:08 PM9/17/21
to
On 2021-09-08, Peter Flass <peter...@yahoo.com> wrote:
> John Goerzen <jgoe...@complete.org> wrote:
>> On 2021-09-07, J Clarke <jclarke...@gmail.com> wrote:
>>>> https://governor.kansas.gov/wp-content/uploads/2021/04/GintherTaxCouncilKDOT.pdf
>>>> hints that it may be an IBM System 370/Model 145.
>>>
>>> I'd be very surprised if it actually was. When did IBM end
>>> maintenance on those?
>>
>> I have no more information, other than that link claims "The Kansas UI System
>> runs on a Mainframe that was installed in 1977."
>>
>> Is it possible the hardware was upgraded to something that can emulate the
>> 370/145, and that difference was lost on a non-technical author? Sure.
>>
>> I have known other places to run mainframes an absurdly long time. I've seen it
>> in universities and, of course, there's the famous CompuServe PDP-10 story -
>> though presumably they had more technical know-how to keep their PDP-10s alive.
>> You are right; it does seem farfetched.
>>
>> ... so I did some more digging, and found
>> https://ldh.la.gov/assets/medicaid/mmis/docs/IVVRProcurementLibrary/Section3RelevantCorporateExperienceCorporateFinancialCondition.doc
>> which claims that the "legacy UI system applications run on the Kansas
>> Department of Administration's IBM OS/390 mainframe."
>>
>> I know little of IBM's mainframe lineup, but
>> https://en.wikipedia.org/wiki/IBM_System/390 claims that the System/390 has some
>> level of compatibility with the S/370.
>>
>> - John
>>
>
> You can still run programs compiled on a 360 on the latest “z” box.
>
Why?

--
bmaxa now listens rock.mp3

John Levine

unread,
Sep 18, 2021, 12:07:52 AM9/18/21
to
According to Branimir Maksimovic <branimir....@gmail.com>:
>> You can still run programs compiled on a 360 on the latest “z” box.
>>
>Why?

Because they are still useful? Is this a trick question?

Ahem A Rivet's Shot

unread,
Sep 18, 2021, 2:30:03 AM9/18/21
to
On Sat, 18 Sep 2021 04:07:50 -0000 (UTC)
John Levine <jo...@taugh.com> wrote:

> According to Branimir Maksimovic <branimir....@gmail.com>:
> >> You can still run programs compiled on a 360 on the latest “z” box.
> >>
> >Why?
>
> Because they are still useful? Is this a trick question?

	I suppose the real question is why not recompile them to take
advantage of the newer hardware. I know during Y2K work that a lot of
instances of lost source code came to light; are people still running
binaries for which there is no source?

--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/

Grant Taylor

unread,
Sep 18, 2021, 2:49:02 AM9/18/21
to
On 9/18/21 12:27 AM, Ahem A Rivet's Shot wrote:
> I suppose the real question is why not recompile them to take
> advantage of the newer hardware. I know during Y2K work that a lot of
> instances of lost source code came to light, are people still running
> binaries for which there is no source ?

Why recompile something just for the sake of recompiling it?

If it's working just fine and is exhibiting no symptoms, why mess with it?



--
Grant. . . .
unix || die

Thomas Koenig

unread,
Sep 18, 2021, 5:23:37 AM9/18/21
to
Ahem A Rivet's Shot <ste...@eircom.net> schrieb:
> On Sat, 18 Sep 2021 04:07:50 -0000 (UTC)
> John Levine <jo...@taugh.com> wrote:
>
>> According to Branimir Maksimovic <branimir....@gmail.com>:
>> >> You can still run programs compiled on a 360 on the latest “z” box.
>> >>
>> >Why?
>>
>> Because they are still useful? Is this a trick question?
>
> I suppose the real question is why not recompile them to take
> advantage of the newer hardware.

Recompile?

You mean re-assemble?

>I know during Y2K work that a lot of
> instances of lost source code came to light, are people still running
> binaries for which there is no source ?

AFAIK, people still use commercial software like Microsoft Windows.
One can presume that Microsoft has the source, but most users
certainly don't (and for the user, this amounts to the same thing).

J. Clarke

unread,
Sep 18, 2021, 5:47:59 AM9/18/21
to
However, sometimes there are binaries with no source. The source was on a
tape or card deck that got archived and can no longer be found.

Ahem A Rivet's Shot

unread,
Sep 18, 2021, 11:00:03 AM9/18/21
to
	Yeah, I get it: you might be depending on an old undocumented
compiler bug, or you might fall foul of a new one, so why risk the shiny
new compiler that might get 10% better performance and might break the
application.

Peter Flass

unread,
Sep 18, 2021, 2:02:06 PM9/18/21
to
Because they still work, and do what you need them to do. If it works,
leave it alone.

--
Pete

Peter Flass

unread,
Sep 18, 2021, 2:02:07 PM9/18/21
to
Or at least messed up. I tried to rebuild the PL/I(F) compiler from source.
Several modules had minor problems, such as missing or extra END statements,
but one had a major problem: a large chunk of the program was missing. I spent
several days working from a disassembly to reconstruct the original.

IBM had a big fire at PID in Mechanicsburg, and a lot of sources went
missing. Digital Research lost the source to PL/I-86. The source for PL/C
has not (yet) been found, although the executable still works fine.

--
Pete

Peter Flass

unread,
Sep 18, 2021, 2:02:07 PM9/18/21
to
Thomas Koenig <tko...@netcologne.de> wrote:
> Ahem A Rivet's Shot <ste...@eircom.net> schrieb:
>> On Sat, 18 Sep 2021 04:07:50 -0000 (UTC)
>> John Levine <jo...@taugh.com> wrote:
>>
>>> According to Branimir Maksimovic <branimir....@gmail.com>:
>>>>> You can still run programs compiled on a 360 on the latest “z” box.
>>>>>
>>>> Why?
>>>
>>> Because they are still useful? Is this a trick question?
>>
>> I suppose the real question is why not recompile them to take
>> advantage of the newer hardware.
>
> Recompile?
>
> You mean re-assemble?
>
>> I know during Y2K work that a lot of
>> instances of lost source code came to light, are people still running
>> binaries for which there is no source ?

There was a lot of this during conversions from the 1401 to S/360. People
tended to patch the 1401 object decks rather than change the source and
recompile. Even if the source hadn’t been lost, it likely didn’t reflect the
running program.

>
> AFAIK, people still use commercial software like Microsoft Windows.
> One can presume that Microsoft has the source, but most users
> certainly don't (and for the user, this amounts to the same thing).
>



--
Pete

Peter Flass

unread,
Sep 18, 2021, 2:02:08 PM9/18/21
to
Ahem A Rivet's Shot <ste...@eircom.net> wrote:
> On Sat, 18 Sep 2021 00:49:00 -0600
> Grant Taylor <gta...@tnetconsulting.net> wrote:
>
>> On 9/18/21 12:27 AM, Ahem A Rivet's Shot wrote:
>>> I suppose the real question is why not recompile them to take
>>> advantage of the newer hardware. I know during Y2K work that a lot of
>>> instances of lost source code came to light, are people still running
>>> binaries for which there is no source ?
>>
>> Why recompile something just for the sake of recompiling it?
>>
>> If it's working just fine and is exhibiting no symptoms, why mess with it?
>
> Yeah I get it, you might be depending on an old undocumented
> compiler bug or you might fall foul of a new one so why risk the new shiny
> compiler that might get 10% better performance and might break the
> application.
>

I ran into this trying to recompile some code that was written for PL/I(F)
with the Enterprise compiler. Several constructs were rejected. This was in
gray areas where the documentation didn’t definitively allow or disallow
the code. After a while it wasn’t worth it to me to make a lot of
changes to fit the new compiler.

--
Pete

John Levine

unread,
Sep 18, 2021, 2:20:23 PM9/18/21
to
It appears that Ahem A Rivet's Shot <ste...@eircom.net> said:
>On Sat, 18 Sep 2021 04:07:50 -0000 (UTC)
>John Levine <jo...@taugh.com> wrote:
>
>> According to Branimir Maksimovic <branimir....@gmail.com>:
>> >> You can still run programs compiled on a 360 on the latest “z” box.
>> >>
>> >Why?
>>
>> Because they are still useful? Is this a trick question?
>
> I suppose the real question is why not recompile them to take
>advantage of the newer hardware.

A lot of 360 software was written in assembler. I gather a fair amount still is.
For some they still have the source, for some they don't, but even if they do, it's assembler.

The newer hardware has bigger addresses and some more instructions, but they don't run any faster.
If you look at the zSeries Principles of Operation you can see the many hacks they invented to
let old 24-bit-address 360 code work with more modern 31- and 64-bit code.
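
One well-known wrinkle: old 24-bit code was free to stash flag bits in the
high byte of an address word, so in 24-bit mode those bits have to be
ignored, while 31-bit mode reserves only the top bit. A rough C sketch of
the masking idea (an illustration only, not the architected rules):

    #include <stdio.h>
    #include <stdint.h>

    /* Sketch of the masking idea only: 24-bit addressing mode ignores the
     * top byte of an address word (old code often kept flags there);
     * 31-bit mode ignores only the top bit. */

    #define MASK_24BIT 0x00FFFFFFu
    #define MASK_31BIT 0x7FFFFFFFu

    int main(void)
    {
        uint32_t word = 0x80012345u;  /* hypothetical flag bits plus address */

        printf("AMODE 24 sees %06X\n", (unsigned)(word & MASK_24BIT));
        printf("AMODE 31 sees %08X\n", (unsigned)(word & MASK_31BIT));
        return 0;
    }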

J. Clarke

unread,
Sep 18, 2021, 4:07:29 PM9/18/21
to
And I remember the time that NASTRAN got dropped down the stairwell at
a PPOE. Three floors with cards flying merrily the whole way. I
_think_ they were all found (I had to be somewhere and didn't get to
participate in the search).

J. Clarke

unread,
Sep 18, 2021, 4:13:53 PM9/18/21
to
On Sat, 18 Sep 2021 18:20:22 -0000 (UTC), John Levine
<jo...@taugh.com> wrote:

>It appears that Ahem A Rivet's Shot <ste...@eircom.net> said:
>>On Sat, 18 Sep 2021 04:07:50 -0000 (UTC)
>>John Levine <jo...@taugh.com> wrote:
>>
>>> According to Branimir Maksimovic <branimir....@gmail.com>:
>>> >> You can still run programs compiled on a 360 on the latest “z” box.
>>> >>
>>> >Why?
>>>
>>> Because they are still useful? Is this a trick question?
>>
>> I suppose the real question is why not recompile them to take
>>advantage of the newer hardware.
>
>A lot of 360 software was written in assembler. I gather a fair amount still is.
>For some they still have the source, some they don't, but even if they do, it's assembler.
>
>The newer hardware has bigger addresses and some more instructions but they don't run any faster.
>If you look at the zSeries principles of operation you can see the many hacks they invented to
>let old 24 bit addresss 360 code work with more modern 31 and 64 bit code.

Something that ran adequately on a machine with a 10 MHz clock will
generally run so much more than adequately on a machine with a 5 GHz
clock that there's not much incentive to optimize anyway.

Dan Espen

unread,
Sep 18, 2021, 5:46:13 PM9/18/21
to
Similar here: large amounts of PL/I, and a bit of it broke with
Enterprise PL/I. Strange how a new compiler can suddenly make
uninitialized variables start causing problems.

We had more than that, though, including some new compiler bugs.

--
Dan Espen

Dan Espen

unread,
Sep 18, 2021, 5:50:48 PM9/18/21
to
Peter Flass <peter...@yahoo.com> writes:

> Thomas Koenig <tko...@netcologne.de> wrote:
>> Ahem A Rivet's Shot <ste...@eircom.net> schrieb:
>>> On Sat, 18 Sep 2021 04:07:50 -0000 (UTC)
>>> John Levine <jo...@taugh.com> wrote:
>>>
>>>> According to Branimir Maksimovic <branimir....@gmail.com>:
>>>>>> You can still run programs compiled on a 360 on the latest “z” box.
>>>>>>
>>>>> Why?
>>>>
>>>> Because they are still useful? Is this a trick question?
>>>
>>> I suppose the real question is why not recompile them to take
>>> advantage of the newer hardware.
>>
>> Recompile?
>>
>> You mean re-assemble?
>>
>>> I know during Y2K work that a lot of
>>> instances of lost source code came to light, are people still running
>>> binaries for which there is no source ?
>
> There was a lot of this during conversions from 1401 to S/360. People
> tended to patch the 1401 object decks rather than change the source and
> recompile. If the source hadn’t been lost, it likely didn’t reflect the
> running program.

During my long career I ran into very few instances of missing source.
Only one comes to mind.

As I remember, operations would not accept an object deck with patches.
At the same time they accepted a new object deck, they
secured the source code and listing.

--
Dan Espen

Thomas Koenig

unread,
Sep 18, 2021, 6:30:23 PM9/18/21
to
J Clarke <jclarke...@gmail.com> schrieb:
There are a couple of things that could go wrong, though, especially
if the problem sizes have grown, as they tend to do.

Tradeoffs between disk speed and memory made in the 1980s may not
work as well now that the relative performance of CPUs and disks has
diverged as much as it has, and there is a factor of 10^n more
data to process, and all of a sudden you find there is an
n^2 algorithm hidden somewhere...

John Levine

unread,
Sep 18, 2021, 9:55:24 PM9/18/21
to
According to Thomas Koenig <tko...@netcologne.de>:
>>>>> >> You can still run programs compiled on a 360 on the latest “z” box.

>There are a couple of things that could go wrong, though, especially
>if the problem sizes have grown, as they tend to do.
>
>Tradeoffs between disk speed and memory made in the 1980s may not
>work as well when the relative performances of CPU and discs have
>diverged as much as they did, and there is a factor of 10^n more
>data to process, and all of a sudden you find there is this
>n^2 algorithm hidden somewhere...

Nobody is claiming we still run *all* of the code written in the 1960s.

I gather there is some code that, for financial reasons, has to produce
the same results it has produced for the past forty years, even
though the programmer who wrote it has retired or died, and even though
the results may depend on funky details of the 360's ill-designed floating
point, or on shift-and-round-decimal instructions where for some reason
the code uses a rounding digit of 6 rather than the normal 5.

It can be worth a lot to keep running the actual code rather than trying
to reverse engineer it and hope you got all the warts right for every case.
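
To make the rounding-digit point above concrete, a sketch of the
equivalent arithmetic in C (not the actual SRP instruction, and assuming
non-negative values): shift right by n decimal digits after adding the
chosen rounding digit in the position of the first digit dropped.

    #include <stdio.h>

    /* Rounding digit 5 gives ordinary round-half-up; 6 rounds up whenever
     * the first dropped digit is 4 or more.  Illustration only. */
    static long shift_and_round(long value, int ndigits, int round_digit)
    {
        long pow10 = 1;
        for (int i = 0; i < ndigits; i++)
            pow10 *= 10;
        return (value + round_digit * (pow10 / 10)) / pow10;
    }

    int main(void)
    {
        /* 12345 shifted right two digits, i.e. 123.45 scaled down */
        printf("rounding digit 5: %ld\n", shift_and_round(12345, 2, 5)); /* 123 */
        printf("rounding digit 6: %ld\n", shift_and_round(12345, 2, 6)); /* 124 */
        return 0;
    }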

J. Clarke

unread,
Sep 18, 2021, 10:08:43 PM9/18/21
to
On Sun, 19 Sep 2021 01:55:22 -0000 (UTC), John Levine
<jo...@taugh.com> wrote:

>According to Thomas Koenig <tko...@netcologne.de>:
>>>>>> >> You can still run programs compiled on a 360 on the latest “z” box.
>
>>There are a couple of things that could go wrong, though, especially
>>if the problem sizes have grown, as they tend to do.
>>
>>Tradeoffs between disk speed and memory made in the 1980s may not
>>work as well when the relative performances of CPU and discs have
>>diverged as much as they did, and there is a factor of 10^n more
>>data to process, and all of a sudden you find there is this
>>n^2 algorithm hidden somewhere...
>
>Nobody is claiming we still run *all* of the code written in the 1960s.
>
>I gather there is some code where for financial reasons it has to produce
>results the same as what it has produced for the past forty years, even
>though the programmer who wrote it has retired or died, even though
>the results may depend on funky details of the 360's ill-designed floating
>point, or of shift-and-round-decimal instructions where for some reason
>it uses a rounding digit of 6 rather than the normal 5.
>
>It can be worth a lot to keep running the actual code rather than trying
>to reverse engineer it and hope you got all the warts right for every case.

That's something I live with. If there's a mismatch we don't let it
slide; we learn the reason why. When you have assets under management
that look like the National Debt, a tiny mistake can turn into a huge
lawsuit.

Bob Eager

unread,
Sep 19, 2021, 2:22:03 AM9/19/21
to
On Sun, 19 Sep 2021 01:55:22 +0000, John Levine wrote:

> I gather there is some code where for financial reasons it has to
> produce results the same as what it has produced for the past forty
> years, even though the programmer who wrote it has retired or died, even
> though the results may depend on funky details of the 360's ill-designed
> floating point, or of shift-and-round-decimal instructions where for
> some reason it uses a rounding digit of 6 rather than the normal 5.

Do you have a reference to anything on that rounding decision? It's
actually relevant to something I'm working on...



--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org

John Levine

unread,
Sep 19, 2021, 11:24:34 PM9/19/21
to
According to Bob Eager <news...@eager.cx>:
>On Sun, 19 Sep 2021 01:55:22 +0000, John Levine wrote:
>
>> I gather there is some code where for financial reasons it has to
>> produce results the same as what it has produced for the past forty
>> years, even though the programmer who wrote it has retired or died, even
>> though the results may depend on funky details of the 360's ill-designed
>> floating point, or of shift-and-round-decimal instructions where for
>> some reason it uses a rounding digit of 6 rather than the normal 5.
>
>Do you have a reference to anything on that rounding decision? It's
>actually relevant to something I'm working on...

Sorry, it's a real instruction but a hypothetical example.

The closest I got to this was back in the 1980s, when I was working on a modelling package called Javelin
and had to write the functions that computed bond prices and yields. The securities association published
a pamphlet with the algorithms and examples, and needless to say my code wasn't done until it got all the
examples exactly right. Some of the calculations were rather odd, like the ones that decreed that a year
has 360 days.
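
For illustration, a minimal C sketch of one common 30/360 day-count
variant; published conventions differ in their end-of-month adjustments,
so this is not necessarily what that pamphlet specified:

    #include <stdio.h>

    /* One common "30/360" variant: every month counts as 30 days and a
     * year as 360.  Conventions such as 30/360 US and 30E/360 differ in
     * their end-of-month adjustments; this is an illustration only. */
    static int days_30_360(int y1, int m1, int d1, int y2, int m2, int d2)
    {
        if (d1 == 31)
            d1 = 30;
        if (d2 == 31 && d1 == 30)
            d2 = 30;
        return 360 * (y2 - y1) + 30 * (m2 - m1) + (d2 - d1);
    }

    int main(void)
    {
        int days = days_30_360(2021, 1, 31, 2021, 7, 31);
        printf("days = %d, year fraction = %.4f\n", days, days / 360.0);
        return 0;
    }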

Dan Cross

unread,
Sep 22, 2021, 3:00:54 PM9/22/21
to
In article <20210918155210.02c1...@eircom.net>,
Ahem A Rivet's Shot <ste...@eircom.net> wrote:
>On Sat, 18 Sep 2021 00:49:00 -0600
>Grant Taylor <gta...@tnetconsulting.net> wrote:
>> On 9/18/21 12:27 AM, Ahem A Rivet's Shot wrote:
>> > I suppose the real question is why not recompile them to take
>> > advantage of the newer hardware. I know during Y2K work that a lot of
>> > instances of lost source code came to light, are people still running
>> > binaries for which there is no source ?
>>
>> Why recompile something just for the sake of recompiling it?
>>
>> If it's working just fine and is exhibiting no symptoms, why mess with it?
>
> Yeah I get it, you might be depending on an old undocumented
>compiler bug or you might fall foul of a new one so why risk the new shiny
>compiler that might get 10% better performance and might break the
>application.

By that logic, one should never upgrade anything if it can
be avoided. An operating system upgrade in particular would
be terribly fraught.

It strikes me how much process we've built predicated on the
presumed difficulty of testing and qualifying software for
production use.

- Dan C.

J. Clarke

unread,
Sep 22, 2021, 5:54:41 PM9/22/21
to
Nothing "presumed" about it.

Dan Espen

unread,
Sep 22, 2021, 9:15:12 PM9/22/21
to
You only update software when the benefit justifies the cost.

--
Dan Espen

J. Clarke

unread,
Sep 22, 2021, 10:34:43 PM9/22/21
to
On Wed, 22 Sep 2021 21:15:07 -0400, Dan Espen <dan1...@gmail.com>
wrote:
I wish our management understood that. I spend half my time
recovering from "updates".

Charlie Gibbs

unread,
Sep 22, 2021, 10:51:23 PM9/22/21
to
On 2021-09-23, Dan Espen <dan1...@gmail.com> wrote:

> cr...@spitfire.i.gajendra.net (Dan Cross) writes:
>
>> In article <20210918155210.02c1...@eircom.net>,
>> Ahem A Rivet's Shot <ste...@eircom.net> wrote:
>>
>>> On Sat, 18 Sep 2021 00:49:00 -0600
>>> Grant Taylor <gta...@tnetconsulting.net> wrote:
>>>
>>>> On 9/18/21 12:27 AM, Ahem A Rivet's Shot wrote:
>>>>
>>>>> I suppose the real question is why not recompile them to take
>>>>> advantage of the newer hardware. I know during Y2K work that a lot of
>>>>> instances of lost source code came to light, are people still running
>>>>> binaries for which there is no source ?
>>>>
>>>> Why recompile something just for the sake of recompiling it?
>>>>
>>>> If it's working just fine and is exhibiting no symptoms, why mess with it?
>>>
>>> Yeah I get it, you might be depending on an old undocumented
>>> compiler bug or you might fall foul of a new one so why risk the
>>> new shiny compiler that might get 10% better performance and might
>>> break the application.
>>
>> By that logic, one should never upgrade anything if it can
>> be avoided. An operating system upgrade in particular would
>> be terribly fraught.

s/would be/is/

Especially if you have no mechanism for parallel testing
prior to the cutover.

>> It strikes me how much process we've built predicated on the
>> presumed difficulty of testing and qualifying software for
>> production use.
>
> You only update software when the benefit justifies the cost.

Sadly, people now update software whenever the vendor tells them to
(or does it behind their back).

Last year I heard that a number of 911 sites went down (i.e. no
dial tone) for at least half an hour thanks to a buggy Windows
update that was pushed out to them.

--
/~\ Charlie Gibbs | Life is perverse.
\ / <cgi...@kltpzyxm.invalid> | It can be beautiful -
X I'm really at ac.dekanfrus | but it won't.
/ \ if you read it the right way. | -- Lily Tomlin

J. Clarke

unread,
Sep 22, 2021, 11:28:21 PM9/22/21
to
Push updates should be a criminal offense.

Scott Lurndal

unread,
Sep 23, 2021, 10:00:09 AM9/23/21
to
Running critical infrastructure on Windows should be a criminal offense.

Peter Flass

unread,
Sep 23, 2021, 10:32:05 AM9/23/21
to
I hate to say that’s what they get for using Windows, but…

--
Pete

Dan Espen

unread,
Sep 23, 2021, 10:52:31 AM9/23/21
to
All these stories about companies paying ransoms.
Seldom do they place the blame directly on Windows.

To be fair, poor backup and recovery probably plays a role too.

--
Dan Espen

Peter Flass

unread,
Sep 23, 2021, 1:01:39 PM9/23/21
to
I was going to say that Windows is just a bigger target than Linux, but
Linux is used extensively in servers and mission-critical situations, and
you seldom hear about a successful attack targeting Linux. You’re right
about poor backup software and procedures.

Someone recently posted here about ransomware that just lay low and
corrupted backups for a while before it struck, but don’t good systems
checksum the backups and verify a good one? Duplicity does that, and also
does a test restore periodically. I have had occasion to restore some files
a few times, and I’m grateful to have it, although I also do some of my own
backups.

I did prefer the system I had previously, whose name I have forgotten, that
had a better UI, but they dropped support for individual users in favor of
corporate licenses.

--
Pete

Scott Lurndal

unread,
Sep 23, 2021, 1:05:58 PM9/23/21
to
Peter Flass <peter...@yahoo.com> writes:
>Dan Espen <dan1...@gmail.com> wrote:
>> Peter Flass <peter...@yahoo.com> writes:

>>
>> Too be fair, poor backup and recovery probably plays a role too.
>>
>
>I was going to say that windows is just a bigger target than Linux, but
>Linux is used extensively in servers and mission-critical situations, and
>you seldom hear about a successful attack targeting Linux. You’re right
>about poor backup software and procedures.

Fundamentally, it comes down to Microsoft's choices: using HTML
in email, allowing executable content in mail, forgoing
any form of user security, et cetera.

Simple text is far safer, and forcing someone to manually cut & paste
URLs from a text mail (where the URL is unobfuscatable) to
a sandboxed browser would have been a far more secure paradigm.

Dan Espen

unread,
Sep 23, 2021, 1:12:18 PM9/23/21
to
I'm not and never have been a professional system admin.
It seems to me the system doing backups should only be connected to the
disk farm. That would make corrupting backups an unlikely event.

> I did prefer the system I had previously, whose name I have forgotten, that
> had a better UI, but they dropped support for individual users in favor of
> corporate licenses.

Backup systems? For my home system, cron-driven rsync with periodic
changes of the backup USB sticks. Couldn't be much simpler.
Rsync just creates another copy; getting to the backup is just a matter
of copying.

--
Dan Espen

Andreas Kohlbach

unread,
Sep 23, 2021, 1:40:28 PM9/23/21
to
On Thu, 23 Sep 2021 10:01:37 -0700, Peter Flass wrote:
>
> Dan Espen <dan1...@gmail.com> wrote:
>>
>> All these stories about companies paying ransoms.
>> Seldom do they place the blame directly on Windows.
>>
>> Too be fair, poor backup and recovery probably plays a role too.
>>
>
> I was going to say that windows is just a bigger target than Linux, but
> Linux is used extensively in servers and mission-critical situations, and
> you seldom hear about a successful attack targeting Linux. You’re right
> about poor backup software and procedures.

While it's true that Linux is extensively used on servers, compromising
it needs an exploit targeting the server software (a PHP exploit or
something).

But Linux desktop installations are AFAIK rarely attacked.

Anybody who runs Linux on a desktop for their daily work (email, social
media, watching pr0n) is less likely to find their computer compromised
than their Windows counterparts are.

[...]
--
Andreas