
Evolution of Floating Point


Charles Richmond

Sep 23, 2009, 11:24:13 AM
ISTM that initially, the floating point representation for numbers
and the manipulation of those numbers... were handled by a
subroutine package. Eventually, floating point migrated into the
hardware implementation. So here are my questions:

When did floating point first appear as implemented in hardware
rather than just software??? What were the first computers to
offer floating point in hardware???

What advantage was gained by making floating point numbers use the
"signed magnitude" representation rather than "two's complement"
for the mantissa???

Who innovated the "excess 128" (or excess whatever) representation
for the exponent, instead of a "two's complement" exponent???

The IBM 360 uses "power of sixteen" exponents, while it seems most
other representations use "power of two" exponents. Are there any
other ways that exponents are represented???


--
+----------------------------------------+
| Charles and Francis Richmond |
| |
| plano dot net at aquaporin4 dot com |
+----------------------------------------+

Stephen Wolstenholme

Sep 23, 2009, 11:41:39 AM
On Wed, 23 Sep 2009 10:24:13 -0500, Charles Richmond
<fri...@tx.rr.com> wrote:

>When did floating point first appear as implemented in hardware
>rather than just software??? What were the first computers to
>offer floating point in hardware???

Leo 3 had floating point hardware in 1962. I don't know if it was the
first.

Steve

--
Neural Planner Software Ltd www.NPSL1.com

Nick Spalding

Sep 23, 2009, 1:30:14 PM
Charles Richmond wrote, in <h9dej0$57i$1...@news.eternal-september.org>
on Wed, 23 Sep 2009 10:24:13 -0500:

> ISTM that initially, the floating point representation for numbers
> and the manipulation of those numbers... were handled by a
> subroutine package. Eventually, floating point migrated into the
> hardware implementation. So here are my questions:
>
> When did floating point first appear as implemented in hardware
> rather than just software??? What were the first computers to
> offer floating point in hardware???
>
> What advantage was gained by making floating point numbers use the
> "signed magnitude" representation rather than "two's complement"
> for the mantissa???
>
> Who innovated the "excess 128" (or excess whatever) representation
> for the exponent, instead of a "two's complement" exponent???
>
> The IBM 360 uses "power of sixteen" exponents, while it seems most
> other representations use "power of two" exponents. Are there any
> other ways that exponents are represented???

The IBM 7010 could have decimal floating point as an extra. Fixed areas
of core were used as FP registers; I think they were 100 digits long,
with a small proportion used as the exponent. I don't know if they ever
actually sold one.
--
Nick Spalding

Louis Krupp

Sep 23, 2009, 1:33:24 PM
Charles Richmond wrote:
<snip>

> The IBM 360 uses "power of sixteen" exponents, while it seems most other
> representations use "power of two" exponents. Are there any other ways
> that exponents are represented???

Unisys MCP systems use powers of eight.

Louis


Quadibloc

Sep 23, 2009, 3:12:31 PM
On Sep 23, 9:24 am, Charles Richmond <friz...@tx.rr.com> wrote:

> When did floating point first appear as implemented in hardware
> rather than just software??? What were the first computers to
> offer floating point in hardware???

The IBM 704 was an early computer to offer floating-point in hardware,
and it may have been the first major commercial computer that offered
it.

It had predecessors, though. MANIAC had hardware floating-point. And
it's possible one of the Zuse electromechanical machines did as well,
but I can't remember exactly.

> What advantage was gained by making floating point numbers use the
> "signed magnitude" representation rather than "two's complement"
> for the mantissa???

Mainly, it seems more logical. Since the sign is more significant than
the exponent, and the exponent is more significant than the mantissa,
the sign is separated from the mantissa.

As well, keeping the mantissa always positive simplifies how
normalization is done.

> Who innovated the "excess 128" (or excess whatever) representation
> for the exponent, instead of a "two's complement" exponent???

It is advantageous. Basically, handling signed numbers is a pain.
One doesn't need to do everything with an exponent that one does
with an integer, since I/O conversions work in powers of ten, not
powers of two; so treating the exponent as an *unsigned* integer,
with an offset, simplifies multiplication and normalization.

It only saves a couple of logic gates, really, but why do anything you
don't have to?

The IBM 704 did this for the exponent, so this was done right from the
beginning.

> The IBM 360 uses "power of sixteen" exponents, while it seems most
> other representations use "power of two" exponents. Are there any
> other ways that exponents are represented???

MANIAC used a quite large radix so as to minimize the need for shifts.
There are machines that use a power of eight. According to one
reference I read, powers of eight don't cause the numerical issues
that powers of sixteen do.

John Savard

Quadibloc

Sep 23, 2009, 3:13:11 PM
On Sep 23, 11:33 am, Louis Krupp <lkrupp_nos...@indra.com.invalid>
wrote:

OK, so it _is_ the Burroughs machines that used powers of eight.

John Savard

Joe Pfeiffer

Sep 23, 2009, 4:00:33 PM
Charles Richmond <fri...@tx.rr.com> writes:

>
> What advantage was gained by making floating point numbers use the
> "signed magnitude" representation rather than "two's complement" for
> the mantissa???

I've never seen a really satisfactory explanation for this. Note the
CDC6600 used 1's complement of the entire word (including both exponent
and mantissa) for negative.

> Who innovated the "excess 128" (or excess whatever) representation for
> the exponent, instead of a "two's complement" exponent???

Don't know -- notice that by using excess notation and normalized
numbers, the floating point numbers are sorted by magnitude. You can
just do an integer compare and get the right answer, with a smidge more
logic if one or both numbers are negative.
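The same observation holds for modern IEEE-754 formats, which also put the sign bit first over a biased exponent and mantissa. A quick Python check (the `bits` helper exists only for the demonstration):

```python
import struct

def bits(x: float) -> int:
    """IEEE-754 single-precision bit pattern as an unsigned int."""
    return struct.unpack('<I', struct.pack('<f', x))[0]

# For non-negative floats, the raw bit patterns sort in the same
# order as the values themselves.
vals = [0.0, 1.5, 2.0, 100.25, 1e30]
assert sorted(vals, key=bits) == sorted(vals)

# Negatives are the "smidge more logic": their raw patterns sort
# backwards, so a comparator must special-case the sign bit.
assert bits(-2.0) > bits(-1.0)
```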

> The IBM 360 uses "power of sixteen" exponents, while it seems most
> other representations use "power of two" exponents. Are there any
> other ways that exponents are represented???

6600 used eight. Using 2 lets you use the "hidden bit", so you get one
more bit of accuracy.
--
As we enjoy great advantages from the inventions of others, we should
be glad of an opportunity to serve others by any invention of ours;
and this we should do freely and generously. (Benjamin Franklin)

bert

Sep 23, 2009, 4:59:08 PM
On 23 Sep, 16:41, Stephen Wolstenholme <st...@tropheus.demon.co.uk>
wrote:

> On Wed, 23 Sep 2009 10:24:13 -0500, Charles Richmond
>
> <friz...@tx.rr.com> wrote:
> >When did floating point first appear as implemented in hardware
> >rather than just software??? What were the first computers to
> >offer floating point in hardware???
>
> Leo 3 had floating point hardware in 1962. I don't know if it was the
> first.

Except, of course, it wasn't really hardware,
because the whole of its instruction set was
implemented in microcode. But I'm glad that
we were allowed to leave out all the floating
point operations when we re-microcoded it for
the ICL 2960 in 1977-79.
--

Peter Flass

Sep 23, 2009, 6:09:48 PM
Charles Richmond wrote:
>
> When did floating point first appear as implemented in hardware rather
> than just software??? What were the first computers to offer floating
> point in hardware???

IBM 704, 1954 (says Wikipedia).

>
> The IBM 360 uses "power of sixteen" exponents, while it seems most other
> representations use "power of two" exponents. Are there any other ways
> that exponents are represented???
>

Power of 10.


Scott Lurndal

Sep 23, 2009, 7:31:39 PM
Charles Richmond <fri...@tx.rr.com> writes:
>ISTM that initially, the floating point representation for numbers
>and the manipulation of those numbers... were handled by a
>subroutine package. Eventually, floating point migrated into the
>hardware implementation. So here are my questions:
>
>When did floating point first appear as implemented in hardware
>rather than just software??? What were the first computers to
>offer floating point in hardware???

The Burroughs systems had hardware floating point as early as the
B3500 in 1966. I'd have to look, but I don't believe the Electrodata
200's or the subsequent Burroughs B300's offered hardware floating point.

>
>What advantage was gained by making floating point numbers use the
>"signed magnitude" representation rather than "two's complement"
>for the mantissa???

Not all used that format, of course. BCD machines used a BCD
exponent and a BCD mantissa.

scott

Scott Lurndal

Sep 23, 2009, 7:33:43 PM

A-Series MCP (MCP/AS) perhaps, but V-Series (MCP/VS) and medium systems (MCPVI, MCPIX) did not
bias the exponent, but rather used a 2 digit BCD exponent and an 8 or
16-digit BCD mantissa (single and double precision). Both were stored
in a 20-digit accumulator (1 digit for exponent sign, 2 digits for
exponent, 1 digit for mantissa sign and 16 digits of mantissa).

scott

hanc...@bbs.cpcn.com

Sep 23, 2009, 8:45:06 PM
On Sep 23, 11:24 am, Charles Richmond <friz...@tx.rr.com> wrote:

> The IBM 360 uses "power of sixteen" exponents, while it seems most
> other representations use "power of two" exponents. Are there any
> other ways that exponents are represented???

Just a correction, today's Z series (descendants of System/360) offer
three types of floating point representation (see separate post).

Quadibloc

Sep 24, 2009, 7:57:49 AM
On Sep 23, 2:00 pm, Joe Pfeiffer <pfeif...@cs.nmsu.edu> wrote:
> Charles Richmond <friz...@tx.rr.com> writes:

> > What advantage was gained by making floating point numbers use the
> > "signed magnitude" representation rather than "two's complement" for
> > the mantissa???

> I've never seen a really satisfactory explanation for this.

I remember seeing an ad for the DDP-24 which noted that it used the
highest-quality form of integer arithmetic, sign-magnitude, like the
IBM 7090 and other quality computers, not the intermediate one's
complement form, or two's complement like the really cheap computers.

So, since the IBM 704 used sign-magnitude representation for
_integers_, it's hardly surprising that it would do so for floating-
point numbers.

As you note, an integer sort is possible on positive floats if they're
represented in the order sign, exponent, magnitude. This separates the
sign from the magnitude, making two's complement representation of the
magnitude seem inelegant or illogical.

But the main reason for sign-magnitude floats is to simplify the logic
for normalization, which is a very common FP operation.

If you have hardware subtraction as well as addition, sign-magnitude
isn't harder to implement than two's complement, it just changes which
operation you're doing. Two's complement only simplifies the
arithmetic unit when you only have an add instruction, and no subtract
instruction; instead, you complement, then add. A computer with
floating-point will have a subtract instruction. So two's complement
doesn't gain it anything, but sign magnitude is simpler for doing
multiplication as well.
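For reference, the representation under discussion keeps the sign bit apart from the magnitude. A toy 8-bit sketch:

```python
def sign_magnitude(x: int, bits: int = 8) -> int:
    """Top bit holds the sign; the remaining bits hold |x|."""
    assert abs(x) < 2 ** (bits - 1)
    return ((1 << (bits - 1)) | -x) if x < 0 else x

assert sign_magnitude(5)  == 0b00000101
assert sign_magnitude(-5) == 0b10000101  # same magnitude, sign bit set
# Negation is a single bit flip -- no carry propagation as in
# two's complement negation.
assert sign_magnitude(-5) ^ 0b10000000 == sign_magnitude(5)
```

For multiplication this pays off directly: the result sign is just the XOR of the operand signs, and the magnitudes multiply as unsigned values.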

> > Who innovated the "excess 128" (or excess whatever) representation for
> > the exponent, instead of a "two's complement" exponent???
>
> Don't know -- notice that by using excess notation and normalized
> numbers, the floating point numbers are sorted by magnitude.  You can
> just do an integer compare and get the right answer, with a smidge more
> logic if one or both numbers are negative.

And the main benefit of this representation is that when the exponent
crosses zero, no extra logic is used to handle that. It's basically
the obvious right way to represent the exponent.

John Savard

jmfbahciv

Sep 24, 2009, 8:13:27 AM
Charles Richmond wrote:
> ISTM that initially, the floating point representation for numbers and
> the manipulation of those numbers... were handled by a subroutine
> package. Eventually, floating point migrated into the hardware
> implementation. So here are my questions:
>
> When did floating point first appear as implemented in hardware rather
> than just software??? What were the first computers to offer floating
> point in hardware???

PDP-10s had floating point instructions.

<snip>

/BAH

Louis Krupp

Sep 24, 2009, 10:34:14 AM

I used the present tense "use powers of eight" since the A-Series
architecture is the only MCP system that survives today, but in this
group, I should have included some historical background. A-Series data
representation dates back to the Burroughs B5500 (1960's), which
preceded the Burroughs Large Systems architecture that evolved into the
Unisys A-Series.

I never actually used Medium Systems. My only contact with them was on
a project to convert to Large Systems. It was an OK port except for the
online transaction processing, which involved moving to a monstrosity
called GEMCOS.

Louis

Al Kossow

Sep 24, 2009, 12:03:21 PM
Quadibloc wrote:
> So two's complement doesn't gain it anything

I was told 2's comp was used in small word size computers
because it makes multiple-precision arithmetic easier to implement.

Scott Lurndal

Sep 24, 2009, 12:29:21 PM

Ah, EVA. Could have been worse, you could have ended up with SWITCH
(CP3680 - an HP-2000 based comm processor linked via a magnetic tape DLP)
or MSNDL (B874) instead.

scott

John Byrns

Sep 24, 2009, 1:55:12 PM
In article <h9fn6...@news2.newsguy.com>, jmfbahciv <jmfbahciv@aol>
wrote:

Was the original PDP-10 floating point actually hardwired, or was it
implemented with microcode?

--
Regards,

John Byrns

Surf my web pages at, http://fmamradios.com/

Quadibloc

Sep 24, 2009, 2:01:20 PM
On Sep 24, 10:03 am, Al Kossow <a...@bitsavers.org> wrote:

> I was told 2's comp was used in small word size computers
> because it makes multiple-precision arithmetic easier to implement.

Yes, it does. Carry out of one word transfers perfectly to the next
word.
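A sketch of why: in two's complement, the carry out of a low word is exactly the increment the high word needs, with no sign adjustments (toy 16-bit words; the names are made up):

```python
MASK16 = 0xFFFF

def add32(a: int, b: int) -> int:
    """32-bit add built from two 16-bit word adds plus carry."""
    lo = (a & MASK16) + (b & MASK16)
    hi = (a >> 16) + (b >> 16) + (lo >> 16)  # carry feeds straight in
    return ((hi & MASK16) << 16) | (lo & MASK16)

assert add32(0x0001FFFF, 0x00000001) == 0x00020000
# Works unchanged for negative values in two's complement:
assert add32(0xFFFFFFFF, 0x00000002) == 0x00000001  # -1 + 2 == 1
```

In sign-magnitude or one's complement, the low word would need sign or end-around-carry fix-ups before its carry could propagate.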

John Savard

Quadibloc

Sep 24, 2009, 2:02:59 PM
On Sep 24, 6:13 am, jmfbahciv <jmfbahciv@aol> wrote:

> PDP-10s had floating point instructions.

Of course they did. And so did the PDP-6, I believe. They chose their
floating-point format to be similar, but not identical, to that of the
IBM 704/709/7090 computers that were the industry standard that
preceded it.

John Savard

Quadibloc

Sep 24, 2009, 2:05:45 PM
On Sep 24, 11:55 am, John Byrns <byr...@sbcglobal.net> wrote:

> Was the original PDP-10 floating point actually hardwired, or was it
> implemented with microcode?

Surprisingly enough, the PDP-9 had microcode.

I'm not sure that the original PDP-10 (KA-10 chassis, discrete
transistors) even _had_ microcode, though. It's true that the much
later KS-10, implemented with standard commercial bit-slice chips,
would have been microcoded.

John Savard

Al Kossow

Sep 24, 2009, 2:26:46 PM
Quadibloc wrote:

> I'm not sure that the original PDP-10 (KA-10 chassis, discrete
> transistors) even _had_ microcode

The KL-10 and KS-10 are the only released DEC microcoded implementations.

Later Foonlys were microcoded, would have to dig around if the
Super Foonly (ie. F1) was.

Not sure which of the Systems Concepts machines were.

Was talking to the person who designed the console processor for the F1
a few weeks ago, and he told me there was a Foonly implementation that
had a 4-bit data path (a single 2901). Not sure which machine that would
have been.

Louis Krupp

Sep 24, 2009, 3:05:49 PM

EVA?

GEMCOS was a software package originally written, I believe, by a
Burroughs customer. It depended on Burroughs' smart block-mode,
poll-select terminals (or their emulations). I don't remember how it
did this, but when a user hit the transmit button, the transmit light
wouldn't go out until the transaction was logged (and presumably flushed
to disk). The idea was that no transactions would be lost.

Louis

Scott Lurndal

Sep 24, 2009, 3:34:16 PM

Internal code name for the "V" to "A" transition project back in the early
90's; amongst other things, this brought GEMCOS to the A-series side, IIRC.

>
>GEMCOS was a software package originally written, I believe, by a
>Burroughs customer.

Yes, it was quite common on Medium and V-series systems supporting
B774, B874, B974 and CP2000 communications processors. I'm trying
to get it to work on my emulator (which means I need to emulate a
communications controller as well as a station (like a TD830)).

> It depended on Burroughs' smart block-mode,
>poll-select terminals (or their emulations). I don't remember how it
>did this, but when a user hit the transmit button, the transmit light
>wouldn't go out until the transaction was logged (and presumably flushed
>to disk). The idea was that no transactions would be lost.

The point-to-point communications sequence is roughly:

1) transmitting entity (station or host) sends ASCII ENQ.
2) receiving entity (station or host) sends either ASCII ACK or ASCII NAK.
3) transmitting entity (on receipt of ACK) sends
SOH <AD1> <AD2> STX message ETX BCC
4) receiving entity sends ACK or NAK (if block check character mismatch)
5) transmitting entity sends EOT

The transmit light on the station wouldn't extinguish until the receiving
entity sent the ACK character. <AD1> <AD2> is a two-byte terminal address
used more commonly on multidrop lines. In point to point, the address is
ignored.

Multidrop terminals were handled by polling code in the data communications
processor. The communications controller would poll each terminal on the
line to determine if any station had transmitted. The communications
controller would select each terminal on the line when the host produced
output for the terminal. Polls and selects would be interleaved by the
network definition language (NDL) running on the communications processor.

In a Poll:

1) Host sends EOT <AD1> <AD2> POL ENQ
2) Station with designated address sends EOT if no traffic, or
SOH <AD1> <AD2> STX message ETX BCC
3) Host sends ACK or NAK
4) Station sends EOT
5) Host proceeds to step 1 for next address on line

In a Select:

1) Host sends EOT <AD1> <AD2> SEL ENQ
2) Station send ACK (if ready) or NAK (if in LOCAL mode)
3) Host sends SOH <AD1> <AD2> STX message ETX BCC
4) Station sends ACK or NAK (if bad BCC)
5) Host sends EOT
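The text-frame step in these sequences can be sketched in code. This is a hypothetical framing helper: the BCC is assumed here to be a longitudinal XOR of everything after SOH up to and including ETX, a common convention for such protocols; check the TD830 documentation for the exact coverage Burroughs used.

```python
SOH, STX, ETX, EOT, ENQ = b'\x01', b'\x02', b'\x03', b'\x04', b'\x05'

def bcc(data: bytes) -> bytes:
    """Block check character: XOR of all covered bytes (assumed)."""
    acc = 0
    for byte in data:
        acc ^= byte
    return bytes([acc])

def text_frame(ad1: bytes, ad2: bytes, message: bytes) -> bytes:
    """SOH <AD1> <AD2> STX message ETX BCC."""
    body = ad1 + ad2 + STX + message + ETX
    return SOH + body + bcc(body)

frame = text_frame(b'A', b'1', b'HELLO')
assert frame[0:1] == SOH and frame[-2:-1] == ETX
```

The receiver recomputes the BCC over the same span and replies ACK on a match, NAK otherwise.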

scott

Tim Shoppa

Sep 24, 2009, 4:47:15 PM
Al writes:
> Was talking to the person who designed the console processor for the F1
> a few weeks ago, and he told me there was a Foonly implementation that
> had a 4-bit data path (a single 2901).

Isn't that like the Peter Schickele joke about the Wagner Ring Cycle
on "convenient 45's"?

Tim.

Rich Alderson

Sep 24, 2009, 5:24:57 PM
Al Kossow <a...@bitsavers.org> writes:

> Quadibloc wrote:

>> I'm not sure that the original PDP-10 (KA-10 chassis, discrete
>> transistors) even _had_ microcode

> The KL-10 and KS-10 are the only released DEC microcoded implementations.

> Later Foonlys were microcoded, would have to dig around if the
> Super Foonly (ie. F1) was.

Since the SuperFoonly was the origin of the KL-10 (I know people who worked on
it at SAIL who moved to DEC with the project), I'd say it's a real good bet.

> Not sure if which of the Systems Concepts machines were.

All of them. I took the microcode class in April, 1989, at their headquarters
when their only delivered system was the SC-30M at LOTS.

> Was talking to the person who designed the console processor for the F1
> a few weeks ago, and he told me there was a Foonly implementation that
> had a 4-bit data path (a single 2901). Not sure which machine that would
> have been.

No idea. I *saw* a Foonly, once, from across the floor of the D. C. Power Lab.

--
Rich Alderson "You get what anybody gets. You get a lifetime."
ne...@alderson.users.panix.com --Death, of the Endless

Rich Alderson

Sep 24, 2009, 6:39:35 PM
John Byrns <byr...@sbcglobal.net> writes:

The original PDP-10 (KA-10 CPU) was an asynchronous state machine implemented
entirely in hardware. The second model, the KI-10, was a synchronous state
machine, also entirely hardware. The KL-10, the third model, was the first in
the line to be microcoded.

Jim Haynes

Sep 24, 2009, 8:56:29 PM
> Charles Richmond wrote:
>> The IBM 360 uses "power of sixteen" exponents, while it seems most other
>> representations use "power of two" exponents. Are there any other ways
>> that exponents are represented???

Power of 8 in the Burroughs B5000 and its successors. These machines
also have the feature that the number is represented as an integer
rather than as a fraction, so a number with zero exponent is simply
an integer and there is no separate integer type. Rather than always
normalizing, the hardware attempts to keep the exponent of integers
at zero.

Jim Haynes

Sep 24, 2009, 8:58:00 PM
On 2009-09-23, Scott Lurndal <sc...@slp53.sl.home> wrote:
>
> The Burroughs systems had hardware floating point as early as the
> B3500 in 1966. I'd have to look, but I don't believe the Electrodata
> 200's or the subsequent Burroughs B300's offered hardware floating point.
>
The Burroughs 220 had decimal floating point in the late 1950s.

Jim Haynes

Sep 24, 2009, 9:13:44 PM
On 2009-09-24, Quadibloc <jsa...@ecn.ab.ca> wrote:
> On Sep 23, 2:00 pm, Joe Pfeiffer <pfeif...@cs.nmsu.edu> wrote:
>> Charles Richmond <friz...@tx.rr.com> writes:
>
>> > What advantage was gained by making floating point numbers use the
>> > "signed magnitude" representation rather than "two's complement" for
>> > the mantissa???
>
>> I've never seen a really satisfactory explanation for this.
>
Probably in the IBM 704 they used sign-magnitude just because they were
also using that notation for integers. And it makes rounding simple.

The G.E. 635 used two's complement arithmetic. In floating point operations
the exponent was kept in a separate register in the CPU, and the full
36-bit accumulator was used for the fraction. When a floating point
result had to be stored in memory it was necessary to throw away the least
significant bits of the fraction so that the exponent could be stuffed
into the same word. At first this was done by truncation; the extra bits
were just shifted off the end. In sign-magnitude this would not have
been particularly bad; but in twos-complement it biases the results
in the negative direction. Positive numbers get smaller, closer to zero,
and negative numbers get more negative, farther from zero.
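The bias is easy to reproduce: dropping low-order bits of a two's complement value is an arithmetic right shift, which truncates toward minus infinity. Python's `>>` on integers behaves the same way:

```python
# Truncating two's complement values rounds toward minus infinity:
# positive results shrink toward zero, negative results are pushed
# further from zero -- exactly the bias described above.
assert (13 >> 2) == 3    #  13/4 =  3.25: magnitude reduced
assert (-13 >> 2) == -4  # -13/4 = -3.25: magnitude increased
```

In sign-magnitude, the same truncation always shrinks the magnitude, so the error is symmetric around zero.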

There was folklore, which I can't verify, that this became an issue
when the computer was used to generate a control tape for a numerically
controlled cutting torch. The job was to cut a circular hole in a steel
plate. Because of the bias in the number system the cutting line did
not close on itself when the torch completed the circle.

The solution required a new instruction, Floating STore Rounded, that would
handle the truncation properly. Then all the compilers had to be modified
to use the new instruction. It was not possible simply to fix the existing
floating store instruction, because programmers had learned to use it for
some obscure operation that had nothing to do with floating point numbers
and depended on the truncation behavior.

Charles Richmond

Sep 24, 2009, 10:40:32 PM

Hey, hardwired or microcode... I accept *all* of that as a
hardware implementation. I was contrasting this sort of
implementation with using subroutines in assembly language (or an
HLL).

--
+----------------------------------------+
| Charles and Francis Richmond |
| |
| plano dot net at aquaporin4 dot com |
+----------------------------------------+

Quadibloc

Sep 24, 2009, 11:33:09 PM
On Sep 23, 6:45 pm, hanco...@bbs.cpcn.com wrote:

> Just a correction, today's Z series (descendants of System/360) offer
> three types of floating point representation (see separate post).

Can't find another post by you in this thread.

I know today's zSystem machines offer both 'hexadecimal floating
point', that is, native 360 floating point, and 'binary floating
point', that is, the standard IEEE-754 floats used on all today's
microprocessors with floating-point support...

oh, yes, they *now* even support decimal floating point. And not BCD
packed decimal floating point either, but instead formats that encode
3 digits in 10 bits. I should remember this, I only describe the
format at

http://www.quadibloc.com/comp/cp020302.htm

on my web page, after all.
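The arithmetic behind "3 digits in 10 bits": 2^10 = 1024 just covers 0..999, where plain BCD spends 12 bits on the same three digits. A toy comparison (real densely packed decimal uses a cleverer encoding that keeps individual digits cheap to extract; this shows only the capacity saving):

```python
def bcd_pack(d: int) -> int:
    """Plain BCD: 4 bits per digit, 12 bits for three digits."""
    assert 0 <= d <= 999
    return (d // 100 << 8) | (d // 10 % 10 << 4) | (d % 10)

assert bcd_pack(999) == 0x999  # needs 12 bits
assert 999 < 2 ** 10           # but the value itself fits in 10
```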

John Savard

Mike Hore

Sep 25, 2009, 4:10:04 AM
Joe Pfeiffer wrote:

> Charles Richmond <fri...@tx.rr.com> writes:
>
>> What advantage was gained by making floating point numbers use the
>> "signed magnitude" representation rather than "two's complement" for
>> the mantissa???
>
> I've never seen a really satisfactory explanation for this. Note the
> CDC6600 used 1's complement of the entire word (including both exponent
> and mantissa) for negative.

I gather that for really serious number-crunching-type work (these days
called HPC for High Performance Computing), it's very important that the
numerical properties of the number system be symmetrical around zero.
Otherwise, after 10-to-the-whatever iterations, your accumulated error
has a bias which is a Very Bad Thing. I've never done much of this
myself, but it's discussed over in comp.arch from time to time.

Cheers, Mike.

---------------------------------------------------------------
Mike Hore mike_h...@OVE.invalid.aapt.net.au
---------------------------------------------------------------

Mike Hore

Sep 25, 2009, 4:14:54 AM
bert wrote:
>...
>> Leo 3 had floating point hardware in 1962. I don't know if it was the
>> first.
>
> Except, of course, it wasn't really hardware,
> because the whole of its instruction set was
> implemented in microcode. But I'm glad that
> we were allowed to leave out all the floating
> point operations when we re-microcoded it for
> the ICL 2960 in 1977-79.

Totally OT, but hey, this is afc... do you still have any 2900 manuals?
Bitsavers is still lacking them -- I'd LOVE to see a manual with
instruction descriptions. If you have any, please get them to Al Kossow
for scanning! Please??????

jmfbahciv

Sep 25, 2009, 7:05:47 AM
PDP-10s didn't have microcode until the KL.

/BAH

jmfbahciv

Sep 25, 2009, 7:06:54 AM
Quadibloc wrote:
> On Sep 24, 6:13 am, jmfbahciv <jmfbahciv@aol> wrote:
>
>> PDP-10s had floating point instructions.
>
> Of course they did.

I was trying to give a time line so people would think further
in the past :-).

> And so did the PDP-6, I believe. They chose their
> floating-point format to be similar, but not identical, to that of the
> IBM 704/709/7090 computers that were the industry standard that
> preceded it.
>

/BAH

jmfbahciv

Sep 25, 2009, 7:08:00 AM
I think the first time I heard the word, microcode, was when the
KL was being designed.

/BAH

Louis Krupp

Sep 25, 2009, 7:06:52 AM

GEMCOS (or some mutation thereof) was running on a B6700 in 1977. I'm
trying to remember just what made the program so bad. Some of it was
design; my employer was used to having screen images in an indexed file
back on the 3700 (or whatever they were porting from), and GEMCOS wanted
screens specified in something like a FORTRAN format. I dutifully wrote
a program to read the screen database and generate GEMCOS screen
formats; when I showed this to management, they laughed at me. So I
modified GEMCOS to read screens from a DMSII data set or something. It
worked, at least to some extent.

It all sounds familiar. It's kind of scary that you remember that much
detail.

As I recall, Harris made terminals that emulated TD820s or maybe TD830s.
I don't think there was a TD840. You wouldn't want to emulate the TD850.

Louis

Al Kossow

Sep 25, 2009, 10:38:48 AM
Mike Hore wrote:

> Totally OT, but hey, this is afc... do you still have any 2900 manuals?

I have them scanned, including the Mick and Brick app notes.
I'll see about getting them on line.

Scott Lurndal

unread,
Sep 25, 2009, 11:53:21 AM9/25/09
to

Memory augmented by the TD830 Engineering Design Specification (which is up
on bitsavers, btw).

>
>As I recall, Harris made terminals that emulated TD820s or maybe TD830s.
> I don't think there was a TD840. You wouldn't want to emulate the TD850.

My terminal emulator program currently emulates a T27 (which was the successor
to the ET-1100, which succeeded the MT983, which succeeded the TD830), including
forms mode (and a custom X11 font for the forms delimiter and ETX characters).

scott

Quadibloc

unread,
Sep 25, 2009, 2:50:41 PM9/25/09
to
On Sep 24, 8:40 pm, Charles Richmond <friz...@tx.rr.com> wrote:

> Hey, hardwired or microcode... I accept *all* of that as a
> hardware implementation. I was contrasting this sort of
> implementation with using subroutines in assembly language (or an
> HLL).

It's certainly true that any microcoded implementation looks like
hardware to the programmer.

Still, one could draw a useful distinction between floating-point
instructions on an IBM 360/30, with an eight-bit wide integer
arithmetic unit doing all the work, and floating-point instructions on
an IBM 360/65, where the arithmetic unit controlled by the microcode
includes an eight-bit register containing the exponent, and a 56-bit
register containing the mantissa.

John Savard

Walter Bushell

unread,
Sep 25, 2009, 3:58:47 PM9/25/09
to
In article <h9haj0$kku$2...@news.eternal-september.org>,
Charles Richmond <fri...@tx.rr.com> wrote:

> John Byrns wrote:
> > In article <h9fn6...@news2.newsguy.com>, jmfbahciv <jmfbahciv@aol>
> > wrote:
> >
> >> Charles Richmond wrote:
> >>> ISTM that initially, the floating point representation for numbers and
> >>> the manipulation of those numbers... were handled by a subroutine
> >>> package. Eventually, floating point migrated into the hardware
> >>> implementation. So here are my questions:
> >>>
> >>> When did floating point first appear as implemented in hardware rather
> >>> than just software??? What were the first computers to offer floating
> >>> point in hardware???
> >> PDP-10s had floating point instructions.
> >
> > Was the original PDP-10 floating point actually hardwired, or was it
> > implemented with microcode?
> >
>
> Hey, hardwired or microcode... I accept *all* of that as a
> hardware implementation. I was contrasting this sort of
> implementation with using subroutines in assembly language (or an
> HLL).

My view also. Analogously, that's why Unix is Unix: if the programming
interface is the same, it's the same OS, even though the cores of the
OSen are completely different. I suppose one could build a Unix shell on
Windows, even if current implementations are, how we say, lacking.

--
A computer without Microsoft is like a chocolate cake without mustard.

Walter Bushell

unread,
Sep 25, 2009, 4:33:19 PM9/25/09
to
In article <h9htt2$qt4$1...@news.eternal-september.org>,
Mike Hore <mike_h...@OVE.invalid.aapt.net.au> wrote:

> I gather that for really serious number-crunching-type work (these days
> called HPC for High Performance Computing), it's very important that the
> numerical properties of the number be symmetrical around zero.
> Otherwise, after 10-to-the-whatever iterations, your accumulated error
> has a bias which is a Very Bad Thing. I've never done much of this
> myself, but it's discussed over in comp.arch from time to time.
>
> Cheers, Mike.

I would feel shaky about the results of calculations affected by
rounding error certainly modulo some kind of proof or evidence that the
calculation is correct. On the surface it would seem you are on the
limits of the number of bits used for the accuracy you require and are
depending on compensating errors.

William Hamblen

unread,
Sep 25, 2009, 5:40:31 PM9/25/09
to
On Fri, 25 Sep 2009 15:58:47 -0400, Walter Bushell <pr...@panix.com>
wrote:

Cygwin does a fair job of imitating Unix on Microsoft Windows. You can
compile and run some heavy duty packages on Cygwin. IRAF (Image
Reduction and Analysis Facility) is the biggest one I've fiddled with.
Cygwin _is_ a bit slow compared to native Linux or FreeBSD.

Bud

ArarghMai...@not.at.arargh.com

unread,
Sep 25, 2009, 5:47:59 PM9/25/09
to
On Fri, 25 Sep 2009 15:58:47 -0400, Walter Bushell <pr...@panix.com>
wrote:

<snip>


>
>My view also. Anologously that's why Unix is Unix, if the programming
>interface is the same, it's the same OS, even though the core of the
>OSen are completely different. I suppose one could build a Unix shell on
>Windows, even if current implementations are, how we say, lacking.

There is a version of Bash which runs on Windows. I think from DJGPP.
--
ArarghMail909 at [drop the 'http://www.' from ->] http://www.arargh.com
BCET Basic Compiler Page: http://www.arargh.com/basic/index.html

To reply by email, remove the extra stuff from the reply address.

Charles Richmond

unread,
Sep 25, 2009, 11:32:09 PM9/25/09
to

The book _Bit-Slice Microprocessor Design_, by Mick and Brick can
be purchased used at <www.abebooks.com> for a reasonable price. It
has a lot of data from the spec sheets in it. The ISBN is
0-07-041781-4.

Charles Richmond

unread,
Sep 25, 2009, 11:37:17 PM9/25/09
to

Indeed "chomping" the instruction execution eight bits at a time
made - it - a - lot - slow - er . . . But the programmer would
write his code in the same way. Of course, programs that made
extensive use of floating point may *not* be practical on an IBM
360/30. The university I attended had an IBM 370/155 and the
floating point performance was okay. Of course, the current batch
of Intel microprocessors make the 370/155 look laughable.

Walter Bushell

unread,
Sep 26, 2009, 12:57:50 AM9/26/09
to
In article <h9k29e$p8l$1...@news.eternal-september.org>,
Charles Richmond <fri...@tx.rr.com> wrote:

> Indeed "chomping" the instruction execution eight bits at a time
> made - it - a - lot - slow - er . . . But the programmer would
> write his code in the same way. Of course, programs that made
> extensive use of floating point may *not* be practical on an IBM
> 360/30. The university I attended had an IBM 370/155 and the
> floating point performance was okay. Of course, the current batch
> of Intel microprocessors make the 370/155 look laughable.

And you have the entire processor to yourself. But imagine converting
and editing movie files on a 370/155. ;|

Chris Barts

unread,
Sep 26, 2009, 5:47:34 PM9/26/09
to
Al Kossow <a...@bitsavers.org> writes:

> Quadibloc wrote:
>> So two's complement doesn't gain it anything
>
> I was told 2's comp was used in small word size computers
> because it makes multiple-precision arithmetic easier to implement.

It does, but why would that limit it to small word size computers? It
seems that even if you have (say) 36-bit native integer arithemetic it
still makes sense to be able to handle integer bignums efficiently.
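[Editor's note: the add-with-carry loop at the heart of any multiple-precision
package is the same regardless of word size. The following is an illustrative
Python model, not anyone's actual machine code; the 36-bit word size is simply
borrowed from the discussion. It adds two unsigned bignums stored as lists of
words, least-significant word first, propagating the carry exactly as a chain
of add-with-carry instructions would.]

```python
WORD_BITS = 36
WORD_MASK = (1 << WORD_BITS) - 1

def bignum_add(a, b):
    """Add two multi-word unsigned integers (least-significant word first).

    Models the add-with-carry loop a machine-language bignum package
    would use; each list element stands for one machine word.
    """
    n = max(len(a), len(b))
    result = []
    carry = 0
    for i in range(n):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        result.append(s & WORD_MASK)   # keep the low 36 bits of the sum
        carry = s >> WORD_BITS         # carry out into the next word
    if carry:
        result.append(carry)
    return result

# adding 1 to 2**72 - 1 ripples a carry through both words
assert bignum_add([WORD_MASK, WORD_MASK], [1]) == [0, 0, 1]
```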

Gene Wirchenko

unread,
Sep 26, 2009, 6:25:31 PM9/26/09
to
On Sat, 26 Sep 2009 00:57:50 -0400, Walter Bushell <pr...@panix.com>
wrote:

>In article <h9k29e$p8l$1...@news.eternal-september.org>,

Imagine getting men on the moon with an Intel-based system.

Sincerely,

Gene Wirchenko

Mensanator

unread,
Sep 26, 2009, 6:52:55 PM9/26/09
to
On Sep 26, 5:25 pm, Gene Wirchenko <ge...@ocis.net> wrote:
> On Sat, 26 Sep 2009 00:57:50 -0400, Walter Bushell <pr...@panix.com>
> wrote:
>
> >In article <h9k29e$p8...@news.eternal-september.org>,

> > Charles Richmond <friz...@tx.rr.com> wrote:
>
> >> Indeed "chomping" the instruction execution eight bits at a time
> >> made - it - a - lot - slow - er . . . But the programmer would
> >> write his code in the same way. Of course, programs that made
> >> extensive use of floating point may *not* be practical on an IBM
> >> 360/30. The university I attended had an IBM 370/155 and the
> >> floating point performance was okay. Of course, the current batch
> >> of Intel microprocessors make the 370/155 look laughable.
>
> >And you have the entire processor to yourself. But imagine converting
> >and editing movie files on a 370/155. ;|
>
> Imagine getting men on the moon with an Intel-based system.

Did they have a mainframe installed in the Apollo command
capsule?

I heard that long after Apollo, the Space Shuttle used
core memory (and it wasn't large enough - they couldn't
load the landing program until after achieving orbit).

Those early astronauts were a LOT braver than you think.

And today's are even braver considering there is no
Moore's Law for PHBs.

>
> Sincerely,
>
> Gene Wirchenko

Chris Barts

unread,
Sep 26, 2009, 6:14:44 PM9/26/09
to
Jim Haynes <jha...@alumni.uark.edu> writes:

> On 2009-09-24, Quadibloc <jsa...@ecn.ab.ca> wrote:
>> On Sep 23, 2:00 pm, Joe Pfeiffer <pfeif...@cs.nmsu.edu> wrote:
>>> Charles Richmond <friz...@tx.rr.com> writes:
>>
>>> > What advantage was gained by making floating point numbers use the
>>> > "signed magnitude" representation rather than "two's complement" for
>>> > the mantissa???
>>
>>> I've never seen a really satisfactory explanation for this.
>>
> Probably in the IBM 704 they used sign-magnitude just because they were
> also using that notation for integers. And it makes rounding simple.

<snip>


> Probably in the IBM 704 they used sign-magnitude just because they were
> also using that notation for integers. And it makes rounding simple.

<snip>


> Probably in the IBM 704 they used sign-magnitude just because they were
> also using that notation for integers. And it makes rounding simple.

So... did anyone else get three copies of this post in one message? Is
it really the case that what a misconfigured newsreader tells me three
times is true?

Quadibloc

unread,
Sep 26, 2009, 7:19:15 PM9/26/09
to
On Sep 26, 4:14 pm, Chris Barts <chbarts+use...@gmail.com> wrote:

> So... did anyone else get three copies of this post in one message?

Yes, the post is even that way on Google Groups.

John Savard

Walter Bushell

unread,
Sep 26, 2009, 10:44:28 PM9/26/09
to
In article <87ab0h4...@chbarts.motzarella.org>,
Chris Barts <chbarts...@gmail.com> wrote:

> Al Kossow <a...@bitsavers.org> writes:ti

Perhaps, but if your arithmetic is 8 bits, multiprecision arithmetic is
mandatory. With 36 bits, it is not such a constant imperative as to
overwhelm other design desirables.

Chris Barts

unread,
Sep 26, 2009, 11:17:36 PM9/26/09
to
Walter Bushell <pr...@panix.com> writes:

That sounds more than reasonable.

Charles Richmond

unread,
Sep 27, 2009, 1:11:41 AM9/27/09
to

Many digital photographs are *larger* than the main *core* memory
of the old IBM 370/155 at my college.

Charles Richmond

unread,
Sep 27, 2009, 1:12:59 AM9/27/09
to

We'll have to imagine... because NASA has *cancelled* the moon
mission that was to be for 2020. The government can *not* afford
to give NASA the paltry $3 billion a year it would take... :-(

Nick Spalding

unread,
Sep 27, 2009, 3:13:08 AM9/27/09
to
Chris Barts wrote, in <87zl8h2...@chbarts.motzarella.org>
on Sat, 26 Sep 2009 16:14:44 -0600:

That's what arrived here too.
--
Nick Spalding

Stan Barr

unread,
Sep 27, 2009, 6:58:08 AM9/27/09
to
On Sun, 27 Sep 2009 00:11:41 -0500, Charles Richmond <fri...@tx.rr.com> wrote:
> Walter Bushell wrote:
>> In article <h9k29e$p8l$1...@news.eternal-september.org>,
>> Charles Richmond <fri...@tx.rr.com> wrote:
>>
>>> Indeed "chomping" the instruction execution eight bits at a time
>>> made - it - a - lot - slow - er . . . But the programmer would
>>> write his code in the same way. Of course, programs that made
>>> extensive use of floating point may *not* be practical on an IBM
>>> 360/30. The university I attended had an IBM 370/155 and the
>>> floating point performance was okay. Of course, the current batch
>>> of Intel microprocessors make the 370/155 look laughable.
>>
>> And you have the entire processor to yourself. But imagine converting
>> and editing movie files on a 370/155. ;|
>>
>
> Many digital photographs are *larger* than the main *core* memory
> of the old IBM 370/155 at my college.
>

I've just done a few scans - 62.5MB per picture!
Even my normal jpegs come out at over 10MB - about 16,000 x 12,000
pixels at 48-bits. Thank $DEITY for big disks :-)
My first hard disk of 20MB couldn't have held two of my pictures.

--
Cheers,
Stan Barr plan.b .at. dsl .dot. pipex .dot. com

The future was never like this!

jmfbahciv

unread,
Sep 27, 2009, 9:29:55 AM9/27/09
to

There isn't a machine big enough to handle only integers.

/BAH

John Byrns

unread,
Sep 27, 2009, 10:12:59 AM9/27/09
to
In article <h9nop...@news6.newsguy.com>, jmfbahciv <jmfbahciv@aol>
wrote:

They are getting close though; it won't be long before we have computers
big enough to handle only "integers".

--
Regards,

John Byrns

Surf my web pages at, http://fmamradios.com/

Joe Pfeiffer

unread,
Sep 27, 2009, 12:24:25 PM9/27/09
to
Chris Barts <chbarts...@gmail.com> writes:

It's used because 2's complement arithmetic Just Works. You don't need
separate instructions for signed and unsigned addition, you don't need
to treat signs-the-same and signs-different as three (that's right,
three) separate cases....
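[Editor's note: a quick way to see the "Just Works" property is that adding
the raw bit patterns modulo 2**n gives the correct answer under both the
unsigned and the two's-complement reading. A small Python illustration; the
8-bit width is chosen arbitrarily.]

```python
BITS = 8
MASK = (1 << BITS) - 1

def to_signed(x):
    """Interpret an 8-bit pattern as a two's-complement value."""
    return x - (1 << BITS) if x & (1 << (BITS - 1)) else x

def add(x, y):
    """One adder for everything: plain addition modulo 2**BITS."""
    return (x + y) & MASK

# 0b11111011 is 251 unsigned but -5 signed; the same addition
# serves both interpretations at once.
s = add(0b11111011, 7)
assert s == 2                   # unsigned: (251 + 7) mod 256 = 2
assert to_signed(s) == -5 + 7   # signed: -5 + 7 = 2
```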
--
As we enjoy great advantages from the inventions of others, we should
be glad of an opportunity to serve others by any invention of ours;
and this we should do freely and generously. (Benjamin Franklin)

Al Kossow

unread,
Sep 27, 2009, 1:05:02 PM9/27/09
to
Joe Pfeiffer wrote:
> Chris Barts <chbarts...@gmail.com> writes:
>
>> Al Kossow <a...@bitsavers.org> writes:
>>
>>> Quadibloc wrote:
>>>> So two's complement doesn't gain it anything
>>> I was told 2's comp was used in small word size computers
>>> because it makes multiple-precision arithmetic easier to implement.
>> It does, but why would that limit it to small word size computers? It
>> seems that even if you have (say) 36-bit native integer arithemetic it
>> still makes sense to be able to handle integer bignums efficiently.
>
> It's used because 2's complement arithmetic Just Works. You don't need
> separate instructions for signed and unsigned addition, you don't need
> to treat signs-the-same and signs-different as three (that's right,
> three) separate cases....

I looked at the problem of multiple precision arithmetic on the PDP-1,
and it is a PITA to do.

So why was it used as much as it was in the 50's ?

The Cray-designed machines use 1's complement and borrow pyramids in
subtractors (no adders) up through the 6600. Fewer/faster logic elements?

Quadibloc

unread,
Sep 27, 2009, 1:34:07 PM9/27/09
to
On Sep 27, 7:29 am, jmfbahciv <jmfbahciv@aol> wrote:

> There isn't a machine big enough to handle only integers.

Lots of machines only had integer arithmetic in hardware, such as a
PDP-8.

I presume, therefore, you must mean that a machine would have to be
really big in order to use integer arithmetic to cover numbers as big
as could be expressed in floats. But such a machine still wouldn't be
able to handle fractions, so if it needed to do arithmetic on the kind
of numbers that floats represent, it still would need floating-point.

If it doesn't need floating-point, then it can use only integers - as
commercial computers, before the IBM 360 pointed the way to doing both
commercial and scientific tasks on the same machine, always had done.

Of course, no machine is big enough, or ever will be big enough, to do
arithmetic on arbitrary elements of I, the set of integers, simply
because there is no finite maximum size for an integer. Turing
machines, with infinite storage, can do that, but not real computers.

John Savard

Patrick Scheible

unread,
Sep 27, 2009, 1:35:05 PM9/27/09
to
Charles Richmond <fri...@tx.rr.com> writes:

> Gene Wirchenko wrote:
> > On Sat, 26 Sep 2009 00:57:50 -0400, Walter Bushell <pr...@panix.com>
> > wrote:
> >
> >> In article <h9k29e$p8l$1...@news.eternal-september.org>,
> >> Charles Richmond <fri...@tx.rr.com> wrote:
> >>
> >>> Indeed "chomping" the instruction execution eight bits at a time
> >>> made - it - a - lot - slow - er . . . But the programmer would
> >>> write his code in the same way. Of course, programs that made
> >>> extensive use of floating point may *not* be practical on an IBM
> >>> 360/30. The university I attended had an IBM 370/155 and the
> >>> floating point performance was okay. Of course, the current batch
> >>> of Intel microprocessors make the 370/155 look laughable.
> >> And you have the entire processor to yourself. But imagine converting
> >> and editing movie files on a 370/155. ;|
> >
> > Imagine getting men on the moon with an Intel-based system.
> >
>
> We'll have to imagine... because NASA has *cancelled* the moon
> mission that was to be for 2020. The government can *not* afford
> to give NASA the paltry $3 billion a year it would take... :-(

Sigh. We started with almost nothing in 1961, and got to the moon in
eight years. Now, with all our experience and more advanced
technology, it would take eleven years... and we still can't do it.
Sad.

-- Patrick

Quadibloc

unread,
Sep 27, 2009, 1:39:56 PM9/27/09
to
On Sep 27, 11:05 am, Al Kossow <a...@bitsavers.org> wrote:

> I looked at the problem of multiple precision arithmetic on the PDP-1,
> and it is a PITA to do.
>
> So why was it used as much as it was in the 50's ?
>
> The Cray-designed machines use 1's compliment and borrow pyramids in
> subtractors (no adders) up through the 6600. Fewer/faster logic elements?

Sign-magnitude has the advantage of simplifying conversion to decimal.
One's complement, the preferred number representation of the PDP-4,
was easy to convert to sign-magnitude.

And sign-magnitude may make addition/subtraction more complicated, but
it makes multiplication easier to understand and implement with
consistent behavior - particularly in floating-point, where rounding
is an issue.

When multiplication could be done more quickly if the numbers involved
were small, that is another advantage for sign-magnitude, because it
doesn't make small negative numbers "look" big.
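[Editor's note: to make the multiplication point concrete, here is a toy
sign-magnitude model in Python, not patterned on any specific machine. The
product's sign is just the XOR of the operand signs, and a "small operand"
test looks only at the magnitude field, so -3 still looks small.]

```python
def sm_pack(value):
    """Encode a value as a (sign, magnitude) pair -- a toy sign-magnitude word."""
    return (1 if value < 0 else 0, abs(value))

def sm_mul(a, b):
    """Sign-magnitude multiply: XOR the signs, multiply the magnitudes."""
    sa, ma = a
    sb, mb = b
    return (sa ^ sb, ma * mb)

def sm_value(x):
    """Decode a (sign, magnitude) pair back to a Python integer."""
    s, m = x
    return -m if s else m

assert sm_value(sm_mul(sm_pack(-3), sm_pack(7))) == -21
# an early-out multiplier checking magnitude sees -3 as small:
assert sm_pack(-3)[1] == 3
```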

John Savard

John Byrns

unread,
Sep 27, 2009, 2:06:56 PM9/27/09
to
In article
<71897ce8-e541-4f4e...@x6g2000prc.googlegroups.com>,
Quadibloc <jsa...@ecn.ab.ca> wrote:

> On Sep 27, 7:29�am, jmfbahciv <jmfbahciv@aol> wrote:
>
> > There isn't a machine big enough to handle only integers.
>
> Lots of machines only had integer arithmetic in hardware, such as a
> PDP-8.
>
> I presume, therefore, you must mean that a machine would have to be
> really big in order to use integer arithmetic to cover numbers as big
> as could be expressed in floats. But such a machine still wouldn't be
> able to handle fractions, so if it needed to do arithmetic on the kind
> of numbers that floats represent, it still would need floating-point.

Barb's big integer machine could use two big integers to represent
"floating point" numbers: one for the integer part and a second for the
fractional part. Or it could use a single big integer with a defined
binary point to divide the big "integer" into integer and fractional
parts; if the machine includes multiply and divide instructions,
variations would be needed to handle the integer/fractional format.
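[Editor's note: the single-big-integer variant is ordinary binary fixed-point.
Pick a binary point, and remember that multiplication doubles the number of
fraction bits, so the product must be shifted back. A toy Python sketch; the
16-bit fraction width is an arbitrary assumption.]

```python
FRAC_BITS = 16          # assumed position of the binary point
ONE = 1 << FRAC_BITS    # the fixed-point representation of 1.0

def to_fix(x):
    """Convert a float to fixed-point (scaled integer)."""
    return round(x * ONE)

def fix_mul(a, b):
    """Multiply: the raw product has 2*FRAC_BITS fraction bits,
    so shift back down to FRAC_BITS."""
    return (a * b) >> FRAC_BITS

def to_float(a):
    """Convert fixed-point back to a float for display."""
    return a / ONE

# 1.5 * 2.25 = 3.375, exactly representable in binary fractions
assert to_float(fix_mul(to_fix(1.5), to_fix(2.25))) == 3.375
```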

> If it doesn't need floating-point, then it can use only integers - as
> commercial computers, before the IBM 360 pointed the way to doing both
> commercial and scientific tasks on the same machine, always had done.

But those commercial computers didn't use pure integers for commercial
applications, the "integer" value included an assumed decimal point.

Charlie Gibbs

unread,
Sep 27, 2009, 3:23:22 PM9/27/09
to
In article <h9nop...@news6.newsguy.com>, jmfbahciv@aol (jmfbahciv)
writes:

That depends on the applications you're using. The packed
decimal supported by the IBM 360 and its clones easily supported
any fixed-point calculations I had to do. And the rest of the
time, 32-bit integers suffice.

In 40 years of programming commercial applications, I think I've
used floating point twice.

--
/~\ cgi...@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ HTML will DEFINITELY be ignored. Join the ASCII ribbon campaign!

grey...@mail.com

unread,
Sep 27, 2009, 3:52:13 PM9/27/09
to


There was a news item somewhere recently that said that studies had
been done on the amount of radiation absorbed by travellers on a Mars
flight, and that this was greater than would be regarded as healthy.
So, no travel to Mars. The US sent men to the Moon, but it was like a
stunt, no real work done. Now, building and maintaining a moon base?
(100 times harder?) Probably the Chinese, with a low regard for
human life, would send humans to Mars, but the same effect could be
better achieved with robots, as has been done since the Moon landings
in space generally. I have a little thing that will go around the house
and return a TV picture of each room (at least it could, now being
`improved'). A machine that would carry me around to do the same job
would have to be, say, 20 times bigger, and a lot heavier, and use a
lot of fuel. Compare the amount of power needed to get to and from
the office, compared to teleworking. In many ways, the Moon would be
a more horrible environment than Kolyma. Let the machines go there; I
would prefer Barbados.

--
Greymaus....
Are You a `human` or a `zombie`?
Case A, vote "No".

grey...@mail.com

unread,
Sep 27, 2009, 3:52:13 PM9/27/09
to

Lynn had a posting some time ago on this.

Joe Pfeiffer

unread,
Sep 27, 2009, 4:40:42 PM9/27/09
to
Al Kossow <a...@bitsavers.org> writes:
>
> I looked at the problem of multiple precision arithmetic on the PDP-1,
> and it is a PITA to do.
>
> So why was it used as much as it was in the 50's ?

No idea. Radix-complement arithmetic was certainly known long before
then; my mother's Comptometer does 10s complement.


>
> The Cray-designed machines use 1's compliment and borrow pyramids in
> subtractors (no adders) up through the 6600. Fewer/faster logic elements?

Symmetric ranges. 2's complement has one more negative number than
positive; if you negate that number you get the same number back. The
disadvantage of 1's complement is that it has two 0s; Cray cleverly got
around that by making sure arithmetic produced +0.

Cray switched to 2's complement with the Cray 1.
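[Editor's note: both quirks are easy to demonstrate on 8-bit patterns -- a
generic illustration, not any particular Cray. Two's complement has one
number, -128, that is its own negation; one's complement has a symmetric
range but two representations of zero.]

```python
BITS = 8
MASK = (1 << BITS) - 1

def twos_neg(x):
    """Two's-complement negation: invert the bits and add one, mod 2**BITS."""
    return (~x + 1) & MASK

def ones_neg(x):
    """One's-complement negation: just invert the bits."""
    return ~x & MASK

# Two's complement is asymmetric: negating -128 (0x80) yields 0x80 again.
assert twos_neg(0x80) == 0x80
# One's complement is symmetric but has two zeros: +0 (0x00) and -0 (0xFF).
assert ones_neg(0x00) == 0xFF
assert ones_neg(0xFF) == 0x00
```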

Joe Pfeiffer

unread,
Sep 27, 2009, 4:43:20 PM9/27/09
to
John Byrns <byr...@sbcglobal.net> writes:

Alternatively, they worked in cents.
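[Editor's note: working in cents is the simplest form of scaled-integer
arithmetic -- store everything as an integer count of the smallest unit, and
only the input/output routines know where the decimal point goes. A minimal
Python sketch, assuming two implied decimal places; the function names are the
editor's own.]

```python
def parse_dollars(s):
    """'12.34' -> 1234 cents (two implied decimal places assumed)."""
    dollars, _, cents = s.partition('.')
    return int(dollars) * 100 + int((cents + '00')[:2])

def format_cents(c):
    """1234 cents -> '12.34'; only here does the decimal point reappear."""
    sign = '-' if c < 0 else ''
    c = abs(c)
    return f'{sign}{c // 100}.{c % 100:02d}'

# all arithmetic is exact integer arithmetic -- no float rounding surprises
total = parse_dollars('19.99') + parse_dollars('0.01')
assert total == 2000
assert format_cents(total) == '20.00'
```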

Patrick Scheible

unread,
Sep 27, 2009, 7:07:10 PM9/27/09
to
grey...@mail.com writes:

> On 2009-09-27, Patrick Scheible <k...@zipcon.net> wrote:
> > Charles Richmond <fri...@tx.rr.com> writes:
> >
> >> Gene Wirchenko wrote:
> >> >
> >> > Imagine getting men on the moon with an Intel-based system.
> >> >
> >>
> >> We'll have to imagine... because NASA has *cancelled* the moon
> >> mission that was to be for 2020. The government can *not* afford
> >> to give NASA the paltry $3 billion a year it would take... :-(
> >
> > Sigh. We started with almost nothing in 1961, and got to the moon in
> > eight years. Now, with all our experience and more advanced
> > technology, it would take eleven years... and we still can't do it.
> > Sad.
> >
>
>
> There was a newsitem somewhere recently that said that studies had
> been done on the amount of radiation absorbed by travellers on a Mars
> flight, and that this was greater than would be regarded as healthy.
> So, no travel to Mars.

You probably looked at:

http://www.nasa.gov/vision/space/livinginspace/17feb_radiation_prt.htm

It sounds like it's a challenge, but not an impossible one. The
article says the added cancer risk might be anywhere from 1% to 19%.
First they have to find out where in that band the added risk actually
is. Then, they may have to experiment with different shielding
materials besides aluminum.

> The US sent men to the Moon, but it was like a
> stunt, no real work done. Now, building and mentaining a moon base?.
> (100 times harder?). Probably the Chinese, with a low regard for
> human life, would sent humans to Mars, but the same effect could
> better done with robots, as has been done since the Moon Landings, in
> Space generally. I have a little thing that will go around the house,
> and return a TV picture of each room (at least it could, now being
> `improved'). A machine that would carry me around to do the same job
> would have to be, say, 20 times bigger, and a lot heavier, and use a
> lot of fuel. Compare the amount of power need to get to and fro from
> the office, compared to teleworking. In many ways, the Moon would be
> a more horrible environment than Kolmya. Let the machines go there, I
> would prefer Barbados.

Unmanned probes are great and we've certainly learned a lot from them
and should continue to do so. However, there's nothing like an actual
person being there.

-- Patrick

Peter Flass

unread,
Sep 27, 2009, 8:08:45 PM9/27/09
to
Scott Lurndal wrote:
>
> The point-to-point communications sequence is roughly:
...
This sounds pretty much like my recollection of IBM BISYNC, except that
only the host controlled the process.

Peter Flass

unread,
Sep 27, 2009, 8:22:16 PM9/27/09
to
Louis Krupp wrote:
>
> GEMCOS (or some mutation thereof) was running on a B6700 in 1977. I'm
> trying to remember just what made the program so bad.

I don't think anyone had anything to match CICS. At one point I worked
at a college that people were trying to talk into converting from CICS
to TIP on a UNIVAC system. It took us all of about ten minutes to trash
that idea.

Rob Warnock

unread,
Sep 27, 2009, 8:39:01 PM9/27/09
to
Quadibloc <jsa...@ecn.ab.ca> wrote:
+---------------

| jmfbahciv <jmfbahciv@aol> wrote:
| > There isn't a machine big enough to handle only integers.
|
| Lots of machines only had integer arithmetic in hardware, such as a
| PDP-8.
|
| I presume, therefore, you must mean that a machine would have to be
| really big in order to use integer arithmetic to cover numbers as big
| as could be expressed in floats. But such a machine still wouldn't be
| able to handle fractions, so if it needed to do arithmetic on the kind
| of numbers that floats represent, it still would need floating-point.
+---------------

Not necessarily. Don't forget fixed-point (a.k.a. "scaled integer").

+---------------


| If it doesn't need floating-point, then it can use only integers - as
| commercial computers, before the IBM 360 pointed the way to doing both
| commercial and scientific tasks on the same machine, always had done.

+---------------

Exactly. And most small embedded computers do so even today.


-Rob

-----
Rob Warnock <rp...@rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607

Rob Warnock

unread,
Sep 27, 2009, 8:40:36 PM9/27/09
to
John Byrns <byr...@sbcglobal.net> wrote:
+---------------

| Quadibloc <jsa...@ecn.ab.ca> wrote:
| > I presume, therefore, you must mean that a machine would have to be
| > really big in order to use integer arithmetic to cover numbers as big
| > as could be expressed in floats. But such a machine still wouldn't be
| > able to handle fractions, so if it needed to do arithmetic on the kind
| > of numbers that floats represent, it still would need floating-point.
|
| Barb's big integer machine could use two big integers to represent
| "floating point" numbers one for the integer part and a second for the
| fractional part, or it could use a single big integer with a defined
| binary point to divide the big "integer" into integer and fractional
| parts, if the machine includes multiply and divide instructions
| variations would be needed to handle the integer/fractional format.
+---------------

Yes, this is called "fixed point" or "scaled integer".

+---------------


| > If it doesn't need floating-point, then it can use only integers - as
| > commercial computers, before the IBM 360 pointed the way to doing both
| > commercial and scientific tasks on the same machine, always had done.
|
| But those commercial computers didn't use pure integers for commercial
| applications, the "integer" value included an assumed decimal point.

+---------------

Exactly. (See above.)

ArarghMai...@not.at.arargh.com

unread,
Sep 27, 2009, 9:09:38 PM9/27/09
to
On Sun, 27 Sep 2009 10:34:07 -0700 (PDT), Quadibloc
<jsa...@ecn.ab.ca> wrote:

<snip>


>Of course, no machine is big enough, or ever will be big enough, to do
>arithmetic on arbitrary elements of I, the set of integers, simply
>because there is no finite maximum size for an integer. Turing
>machines, with infinite storage, can do that, but not real computers.

No, but way back when, on a full memory IBM 7010, you could add two
49,000 digit numbers and get an exact number. Couldn't do much else
with them, though. :-)

(That's assuming that the 7010 didn't have a timeout for long
operations and that it worked the same way as a 1401 did for math,
which is the machine that I knew.)
--
ArarghMail909 at [drop the 'http://www.' from ->] http://www.arargh.com
BCET Basic Compiler Page: http://www.arargh.com/basic/index.html

To reply by email, remove the extra stuff from the reply address.

Walter Bushell

unread,
Sep 27, 2009, 11:33:34 PM9/27/09
to
In article <h9ms6d$and$2...@news.eternal-september.org>,
Charles Richmond <fri...@tx.rr.com> wrote:

> Walter Bushell wrote:
> > In article <h9k29e$p8l$1...@news.eternal-september.org>,
> > Charles Richmond <fri...@tx.rr.com> wrote:
> >
> >> Indeed "chomping" the instruction execution eight bits at a time
> >> made - it - a - lot - slow - er . . . But the programmer would
> >> write his code in the same way. Of course, programs that made
> >> extensive use of floating point may *not* be practical on an IBM
> >> 360/30. The university I attended had an IBM 370/155 and the
> >> floating point performance was okay. Of course, the current batch
> >> of Intel microprocessors make the 370/155 look laughable.
> >
> > And you have the entire processor to yourself. But imagine converting
> > and editing movie files on a 370/155. ;|
> >
>
> Many digital photographs are *larger* than the main *core* memory
> of the old IBM 370/155 at my college.

So, the 370/155 had virtual memory, no? IIRC, I have edited photos
bigger than the memory on my computer at the time. Not that I would
particularly recommend the procedure if it could be avoided. Not for the
impatient. ;)


I had a 68040 Performa (LC 475) with 12 Megs of memory and I was editing
scanned film photos of 25 Megabytes.

Walter Bushell

unread,
Sep 27, 2009, 11:44:14 PM9/27/09
to
In article <w9zljk0...@zipcon.net>,
Patrick Scheible <k...@zipcon.net> wrote:

> It sounds like it's a challenge, but not an impossible one. The
> article says the added cancer risk might be anywhere from 1% to 19%.
> First they have to find out where in that band the added risk actually
> is. Then, they may have to experiment with different shielding
> materials besides aluminum.

They may have to find a quicker transit to Mars. Also, Mars doesn't have a
magnetic field as the Earth does, so the exposure on Mars will be
greater, unless we can provide shielding.

I suppose the mission would have most of the consumables for the time on
Mars and the return sent ahead, which could follow the economical orbits.

But we got to find a cheaper way to low Earth orbit.

Charles Richmond

unread,
Sep 28, 2009, 12:18:15 AM9/28/09
to

Like the little boy told his dad, when the boy discovered how much
it would cost to attend college: "I can't afford to be successful."

Charles Richmond

unread,
Sep 28, 2009, 12:23:52 AM9/28/09
to
grey...@mail.com wrote:
> On 2009-09-27, Patrick Scheible <k...@zipcon.net> wrote:
>> Charles Richmond <fri...@tx.rr.com> writes:
>>
>>> Gene Wirchenko wrote:
>>>> Imagine getting men on the moon with an Intel-based system.
>>>>
>>> We'll have to imagine... because NASA has *cancelled* the moon
>>> mission that was to be for 2020. The government can *not* afford
>>> to give NASA the paltry $3 billion a year it would take... :-(
>> Sigh. We started with almost nothing in 1961, and got to the moon in
>> eight years. Now, with all our experience and more advanced
>> technology, it would take eleven years... and we still can't do it.
>> Sad.
>>
>
> There was a news item somewhere recently that said that studies had
> been done on the amount of radiation absorbed by travellers on a Mars
> flight, and that this was greater than would be regarded as healthy.
> So, no travel to Mars.

Now I think you have made a common error here. Perhaps with the
current state of technology, a trip to Mars would *not* be
workable. But there is a new type of propulsion that will allow
the trip to be *much* shorter: plasma rockets. Just perfect those
and the trip can be made safely.

> The US sent men to the Moon, but it was like a
> stunt, no real work done. Now, building and maintaining a moon base?
> (100 times harder?). Probably the Chinese, with a low regard for
> human life, would send humans to Mars, but the same effect could be
> better done with robots, as has been done since the Moon Landings, in
> Space generally. I have a little thing that will go around the house,
> and return a TV picture of each room (at least it could, now being
> `improved'). A machine that would carry me around to do the same job
> would have to be, say, 20 times bigger, and a lot heavier, and use a
> lot of fuel. Compare the amount of power needed to get to and from
> the office, compared to teleworking. In many ways, the Moon would be
> a more horrible environment than Kolyma. Let the machines go there; I
> would prefer Barbados.
>

While you are sunning yourself in Barbados, you should realize
that *people* have to travel in space. The human race needs to
spread out to other planets. If the earth is destroyed by a
cataclysm, those living on other planets will continue the
species. Besides, property taxes are *much* lower on Mars. ;-)

Anne & Lynn Wheeler

unread,
Sep 28, 2009, 12:28:04 AM9/28/09
to

Walter Bushell <pr...@panix.com> writes:
> So, the 370/155 had virtual memory, no? IIRC, I have edited photos
> bigger than the memory on my computer at the time. Not that I would
> particularly recommend the procedure if it could be avoided. Not for the
> impatient. ;)

... originally 370 (135, 145, 155, 165) didn't have virtual memory
... after virtual memory was eventually announced, existing customer 155
& 165 required a pricy (especially for 165) hardware field upgrade for
virtual memory. 135 & 145 were more in the nature of a different
microcode load.

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Charles Richmond

unread,
Sep 28, 2009, 12:30:41 AM9/28/09
to
Charlie Gibbs wrote:
> In article <h9nop...@news6.newsguy.com>, jmfbahciv@aol (jmfbahciv)
> writes:
>
>> Chris Barts wrote:
>>
>>> Al Kossow <a...@bitsavers.org> writes:
>>>
>>>> Quadibloc wrote:
>>>>
>>>>> So two's complement doesn't gain it anything
>>>> I was told 2's comp was used in small word size computers
>>>> because it makes multiple-precision arithmetic easier to implement.
>>> It does, but why would that limit it to small word size computers?
>>> It seems that even if you have (say) 36-bit native integer
>>> arithemetic it still makes sense to be able to handle integer
>>> bignums efficiently.
>> There isn't a machine big enough to handle only integers.
>
> That depends on the applications you're using. The packed
> decimal supported by the IBM 360 and its clones easily supported
> any fixed-point calculations I had to do. And the rest of the
> time, 32-bit integers suffice.
>
> In 40 years of programming commercial applications, I think I've
> used floating point twice.
>

Perhaps you did *not* need floating point because you *never*
designed "air foils" for new aircraft. Some of the matrices
involved are difficult to work with and require the use of double
or quad precision floating point.
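The claim quoted above, that two's complement makes multiple-precision
arithmetic easier to implement, can be sketched in a few lines of Python
(the 8-bit word size and the helper name are just for illustration):
each word is added as a plain unsigned quantity and the carry chains
into the next word, with no special handling for sign.

```python
BITS = 8
MASK = (1 << BITS) - 1

def add_multiword(a_words, b_words):
    """Add two little-endian lists of 8-bit words, two's-complement style."""
    result, carry = [], 0
    for a, b in zip(a_words, b_words):
        s = a + b + carry
        result.append(s & MASK)   # keep the low 8 bits
        carry = s >> BITS         # carry out feeds the next word
    return result

# -1 in 16-bit two's complement is [0xFF, 0xFF]; adding 1 wraps to 0,
# with the final carry simply discarded -- signs never need checking.
assert add_multiword([0xFF, 0xFF], [0x01, 0x00]) == [0x00, 0x00]
```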

Charles Richmond

unread,
Sep 28, 2009, 12:36:06 AM9/28/09
to
Joe Pfeiffer wrote:
> Al Kossow <a...@bitsavers.org> writes:
>>
>> [snip...] [snip...] [snip...]

>>
>> The Cray-designed machines use 1's complement and borrow pyramids in
>> subtractors (no adders) up through the 6600. Fewer/faster logic elements?
>
> Symmetric ranges. 2's complement has one more negative number than
> positive; if you negate that number you get the same number back. The
> disadvantage to 1's complement is that it has two 0s; Cray cleverly got
> around that by making sure arithmetic produced +0.
>
> Cray switched to 2's complement with the Cray 1.

Remember: "Parity is for farmers."
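Joe's point about the asymmetric range and the two zeros can be checked
with a small Python sketch (the 8-bit width and helper names are
illustrative only):

```python
MASK = 0xFF  # model an 8-bit word

def neg2(x):
    """Two's-complement negation: invert and add one, modulo 2**8."""
    return (~x + 1) & MASK

def neg1(x):
    """One's-complement negation: just invert the bits."""
    return ~x & MASK

# The most negative two's-complement value (-128, bit pattern 0x80)
# is its own negation -- the range is asymmetric.
assert neg2(0x80) == 0x80

# One's complement is symmetric but has two zeros: +0 (0x00) and -0 (0xFF).
assert neg1(0x00) == 0xFF
assert neg1(0xFF) == 0x00
```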

Ahem A Rivet's Shot

unread,
Sep 28, 2009, 1:01:10 AM9/28/09
to
On Sun, 27 Sep 2009 13:06:56 -0500
John Byrns <byr...@sbcglobal.net> wrote:

> Barb's big integer machine could use two big integers to represent
> "floating point" numbers one for the integer part and a second for the
> fractional part, or it could use a single big integer with a defined
> binary point to divide the big "integer" into integer and fractional
> parts, if the machine includes multiply and divide instructions
> variations would be needed to handle the integer/fractional format.

Or it could use pairs of big integers to implement rationals.

--
Steve O'Hara-Smith | Directable Mirror Arrays
C:>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
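Steve's suggestion is essentially what Python's standard `fractions`
module does: each rational is a pair of arbitrary-precision integers, so
arithmetic stays exact with no rounding error at all.

```python
from fractions import Fraction

third = Fraction(1, 3)
# Three thirds make exactly one -- no rounding, unlike binary floats.
assert third * 3 == 1
assert Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)
# The float equivalents miss:
assert 0.1 + 0.2 != 0.3
```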

grey...@mail.com

unread,
Sep 28, 2009, 5:52:10 AM9/28/09
to
On 2009-09-27, Patrick Scheible <k...@zipcon.net> wrote:
> grey...@mail.com writes:
>
>
> http://www.nasa.gov/vision/space/livinginspace/17feb_radiation_prt.htm

Probably derived from that report, I think it was slashdot.

>
> It sounds like it's a challenge, but not an impossible one. The
> article says the added cancer risk might be anywhere from 1% to 19%.
> First they have to find out where in that band the added risk actually
> is. Then, they may have to experiment with different shielding
> materials besides aluminum.

Seems more likely that we (humans) will have to develop a moonbase,
and build the actual Mars travellers there, rather than bringing the
material up the gravity well from Earth to orbit. We would have to
have a good `solar weather' forecast (although, AFAIRemember from that
article, cosmic rays are the big problem.)


>
>
> Unmanned probes are great and we've certainly learned a lot from them
> and should continue to do so. However, there's nothing like an actual
> person being there.


--

grey...@mail.com

unread,
Sep 28, 2009, 5:52:10 AM9/28/09
to

Taxes, which would be caused by transport costs, development costs,
etc, would be _incredible_ on Mars. They would have to be paid by those
back on Earth until Mars was self-sufficient, and then, by analogy to
those ungrateful colonists in North America, they would probably
declare independence and refuse to pay back anything.
:)))))))))))))))))

jmfbahciv

unread,
Sep 28, 2009, 6:39:29 AM9/28/09
to
Quadibloc wrote:
> On Sep 27, 7:29 am, jmfbahciv <jmfbahciv@aol> wrote:
>
>> There isn't a machine big enough to handle only integers.
>
> Lots of machines only had integer arithmetic in hardware, such as a
> PDP-8.
>
> I presume, therefore, you must mean that a machine would have to be
> really big in order to use integer arithmetic to cover numbers as big
> as could be expressed in floats.

Yes. If one does get "big enough", somebody will want to use a
larger number. It's an aspect of computing that, if a resource
is increased, it will be used up until capacity is saturated.

> But such a machine still wouldn't be
> able to handle fractions, so if it needed to do arithmetic on the kind
> of numbers that floats represent, it still would need floating-point.

Not to mention those other numbers that show up in real life.

>
> If it doesn't need floating-point, then it can use only integers - as
> commercial computers, before the IBM 360 pointed the way to doing both
> commercial and scientific tasks on the same machine, always had done.
>
> Of course, no machine is big enough, or ever will be big enough, to do
> arithmetic on arbitrary elements of I, the set of integers, simply
> because there is no finite maximum size for an integer. Turing
> machines, with infinite storage, can do that, but not real computers.

right. :-)

/BAH

jmfbahciv

unread,
Sep 28, 2009, 6:41:00 AM9/28/09
to

and that would recreate problems that had been put into a coffin,
nailed, and buried.

/BAH

Peter Flass

unread,
Sep 28, 2009, 7:19:48 AM9/28/09
to
Quadibloc wrote:
> On Sep 27, 7:29 am, jmfbahciv <jmfbahciv@aol> wrote:
>
>> There isn't a machine big enough to handle only integers.
>
> Lots of machines only had integer arithmetic in hardware, such as a
> PDP-8.
>
> I presume, therefore, you must mean that a machine would have to be
> really big in order to use integer arithmetic to cover numbers as big
> as could be expressed in floats. But such a machine still wouldn't be
> able to handle fractions, so if it needed to do arithmetic on the kind
> of numbers that floats represent, it still would need floating-point.

Not true at all. PL/I has scaled fixed-point data. In the simplest
case this is decimal, which might be dollars and cents, for example. It
also has scaled binary fixed-point. It doesn't need floating point at
all for this, though it has that as well.
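A rough Python analogue of PL/I's scaled decimal fixed point, using the
standard `decimal` module (the correspondence to a PL/I declaration like
FIXED DECIMAL(9,2) is loose, just to show the idea):

```python
from decimal import Decimal

# Dollars-and-cents values held as exact decimal quantities,
# the way scaled FIXED DECIMAL data behaves in PL/I.
price = Decimal("19.99")
total = price * 3
assert total == Decimal("59.97")

# Binary floating point cannot represent such values exactly:
assert 0.1 + 0.2 != 0.3
```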

Peter Flass

unread,
Sep 28, 2009, 7:39:13 AM9/28/09
to
Walter Bushell wrote:
> In article <h9ms6d$and$2...@news.eternal-september.org>,
> Charles Richmond <fri...@tx.rr.com> wrote:
>
>> Walter Bushell wrote:
>>> In article <h9k29e$p8l$1...@news.eternal-september.org>,
>>> Charles Richmond <fri...@tx.rr.com> wrote:
>>>
>>>> Indeed "chomping" the instruction execution eight bits at a time
>>>> made - it - a - lot - slow - er . . . But the programmer would
>>>> write his code in the same way. Of course, programs that made
>>>> extensive use of floating point may *not* be practical on an IBM
>>>> 360/30. The university I attended had an IBM 370/155 and the
>>>> floating point performance was okay. Of course, the current batch
>>>> of Intel microprocessors make the 370/155 look laughable.
>>> And you have the entire processor to yourself. But imagine converting
>>> and editing movie files on a 370/155. ;|
>>>
>> Many digital photographs are *larger* than the main *core* memory
>> of the old IBM 370/155 at my college.
>
> So, the 370/155 had virtual memory, no? IIRC, I have edited photos
> bigger than the memory on my computer at the time. Not that I would
> particularly recommend the procedure if it could be avoided. Not for the
> impatient. ;)
>
>
> I had a 68040 Performa (LC 475) with 12 Megs of memory and I was editing
> scanned film photos of 25 Megabytes.
>

Still only a 24-bit address space, 16MB total. I suppose you could
slice the photo up into sections and work on them in parallel using
different processes...

Walter Bushell

unread,
Sep 28, 2009, 9:08:34 AM9/28/09
to
In article <1btyyo2...@snowball.wb.pfeifferfamily.net>,
Joe Pfeiffer <pfei...@cs.nmsu.edu> wrote:

And then some joker made a loan in Francs or Pesos or Australian dollars.

Walter Bushell

unread,
Sep 28, 2009, 9:24:31 AM9/28/09
to
In article <1by6o02...@snowball.wb.pfeifferfamily.net>,
Joe Pfeiffer <pfei...@cs.nmsu.edu> wrote:

> Symmetric ranges. 2's complement has one more negative number than
> positive; if you negate that number you get the same number back. The
> disadvantage to 1's complement is that it has to 0s; Cray cleverly got
> around that by making sure arithmetic got +0.
>
> Cray switched to 2's complement with the Cray 1.

Two zeros was an advantage: you could clear memory to -0, assume that
any -0 was an unset quantity, and treat an attempt to use one as an error.
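A toy Python model of that trick (word size and names are made up for
illustration): memory is cleared to the all-ones one's-complement -0
pattern, and any load of a word that was never stored gets flagged.

```python
BITS = 16
MINUS_ZERO = (1 << BITS) - 1   # all ones: one's-complement -0

memory = [MINUS_ZERO] * 8      # "clear" memory to -0

def load(addr):
    """Fetch a word, treating -0 as 'never stored'."""
    word = memory[addr]
    if word == MINUS_ZERO:
        raise RuntimeError("use of unset word at address %d" % addr)
    return word

memory[3] = 42
assert load(3) == 42           # stored words read back normally
```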

Christian Brunschen

unread,
Sep 28, 2009, 9:30:16 AM9/28/09
to
In article <h9q79r$7vi$5...@news.eternal-september.org>,
Peter Flass <Peter...@Yahoo.com> wrote:
>Walter Bushell wrote:

[ snippage ]

>> I had a 68040 Performa (LC 475) with 12 Megs of memory and I was editing
>> scanned film photos of 25 Megabytes.
>
>Still only a 24-bit address space, 16MB total. I suppose you could
>slice the photo up into sections and work on them in parallel using
>different processes...

The 68040 has a full 32-bit address space, 4GiB total, and an MMU that
maps between the 32-bit virtual and a 32-bit physical address space.

// Christian Brunschen

Charles Richmond

unread,
Sep 28, 2009, 11:16:27 AM9/28/09
to

I think Peter was referring to the address space of the IBM
370/155. ISTR that the *largest* hard disk, a refrigerator-sized
multi-platter affair, had *less* than two gigs of storage space.
So you have to worry about where you will *store* all those 25 meg
pictures...

Ahem A Rivet's Shot

unread,
Sep 28, 2009, 10:19:15 AM9/28/09
to
On 28 Sep 2009 09:52:10 GMT
grey...@mail.com wrote:

> On 2009-09-27, Patrick Scheible <k...@zipcon.net> wrote:
> > grey...@mail.com writes:
> >
> >
> > http://www.nasa.gov/vision/space/livinginspace/17feb_radiation_prt.htm
>
> Probably derived from that report, I think it was slashdot.
>
> >
> > It sounds like it's a challenge, but not an impossible one. The
> > article says the added cancer risk might be anywhere from 1% to 19%.
> > First they have to find out where in that band the added risk actually
> > is. Then, they may have to experiment with different shielding
> > materials besides aluminum.
>
> Seems more likely that we (humans) will have to develope a moonbase,
> and build the actual Mars travellers there, rather than bringing the
> material up the gravity well from Earth to orbit. We would have to

Pushing everything necessary to mine, smelt and fashion metals to
the moon is probably even more prohibitive, and I'm pretty sure there's a
distinct lack of useful fuel on the moon.

Cheap ground to orbit is the main requirement - laser launch and
rotating tethers is probably the way to go given that skyhooks are still
not feasible. Rockets are a damn silly way to get to orbit.

Ahem A Rivet's Shot

unread,
Sep 28, 2009, 10:20:39 AM9/28/09
to
On 28 Sep 2009 09:52:10 GMT
grey...@mail.com wrote:

See "The Moon is a Harsh Mistress"; at least they'd have a harder
job slinging rocks at us from Mars :)

Anne & Lynn Wheeler

unread,
Sep 28, 2009, 11:35:45 AM9/28/09
to

Charles Richmond <fri...@tx.rr.com> writes:
> I think Peter was referring to the address space of the IBM
> 370/155. ISTR that the *largest* hard disk, a refrigerator-sized
> multi-platter affair, had *less* than two gigs of storage space. So
> you have to worry about where you will *store* all those 25 meg
> pictures...

early 155s (before virtual memory) would typically have one (8-drive)
2314 strings ... removable packs at approx 29mbytes/pack. one or two
mbyte of "2micsec" real storage was typical. later ... (8-drive) 3330
strings with removable packs at 100mbyte/pack.

155 was faster than 145 ... even tho 145 had approx. 400+nsec memory
(compared to 155 2mic memory) because 155 had cache (8kbytes; as long as
what you were doing fit within the small 155 cache size). the size of
many of today's programs wouldn't even fit in 155 real storage
(independent of the data).

announcement that virtual memory could be retrofit with (purchased)
hardware upgrade ... came about the same time as 370/158 ... which
had approx. same speed memory as 145 (and cache).

the gigabyte refrigerators+ size were a decade after 155.

155 announced 30jun70, withdrawn 23dec77
http://www-03.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3155.html

above shown with 2314 string (9th drive was "spare" could be used for
service and/or staging mounting packs).

158 announced 2aug72 and withdrawn 15sep80
http://www-03.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3158.html

shown with 8-drive 3330 string

Quadibloc

unread,
Sep 28, 2009, 12:07:44 PM9/28/09
to
On Sep 25, 2:14 am, Mike Hore <mike_hore...@OVE.invalid.aapt.net.au>
wrote:

> Totally OT, but hey, this is afc...  do you still have any 2900 manuals?
>   Bitsavers is still lacking them -- I'd LOVE to see a manual with
> instruction descriptions.  If you have any, please get them to Al Kossow
> for scanning!  Please??????

By the way, Brian Spoor e-mailed me an update on a test he did on his
ICL 1900 emulation software, and asked that I share it with this
group. I have done so, but it's possible some people have killfiled
Google Groops and can't see it.

John Savard
