
negative zero in fortran


Uno

Sep 20, 2010, 11:01:26 PM
I was reading Plauger's _The Standard C Library_ today while waiting for
lunch, making this time precious as opposed to tedious. He says that in
his implementation, he isn't going to fuss about negative zero in math.h.

I read a little bit more, and the references he has built his testing
facility on are fortran, to wit, Cody and Waite. The border between the
syntaxes here is hard to discern, although they both have signed on to
what used to be called IEEE 754.

So are there 2 zeroes in these representations, one with the sign bit
set and the other without?

Thanks for your comment,
--
Uno

steve

Sep 21, 2010, 1:33:35 AM

If a Fortran processor can distinguish between -0 and 0, then
yes there are 2 representations for zero. Of course, one needs
to read the standard (or experiment) to determine what happens
with

!
! Typed without testing. All bugs in the below code are mine.
!
program a
real x
x = - 0.
if (x == 0) print *, 'zero'
end program a

--
steve

Arjen Markus

Sep 21, 2010, 3:12:57 AM

The reason a signed zero exists, as far as I can tell, is that for
certain computations with complex numbers it is important
to be able to distinguish numbers on either side of a branch cut.
(see http://mathworld.wolfram.com/RiemannSurface.html for instance)
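The branch-cut point can be made concrete with a small sketch (modern Fortran, not from the original posts; it assumes an IEEE processor such as current gfortran, and uses SIGN to construct a negative zero since a literal -0.0 may be folded to +0.0 at compile time):

```fortran
! Sketch: the sign of a zero imaginary part selects the side of the
! SQRT branch cut along the negative real axis on IEEE processors.
program branch_cut
  implicit none
  complex :: zplus, zminus
  ! sign(0.0, -1.0) yields a negative zero where the processor
  ! distinguishes one; a bare -0.0 literal might not.
  zplus  = cmplx(-1.0, 0.0)
  zminus = cmplx(-1.0, sign(0.0, -1.0))
  print *, sqrt(zplus)    ! expected near (0.0, 1.0)
  print *, sqrt(zminus)   ! expected near (0.0,-1.0) when -0.0 is honored
end program branch_cut
```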

Regards,

Arjen

Terence

Sep 21, 2010, 3:47:34 AM

Negative zero was implemented on several old (1956-1976) mainframe
computers.
The Fortran compiler and programs that ran on those machines could
distinguish on input and save in memory two versions of zero, one of
which was negative. Then the execution of a Fortran program could make
a call to a machine-dependent routine, (which was an extension of
those Fortran IV versions), to test and return a truth value to
indicate a negative zero situation for that cell value.

I met this on CDC and DEC computers, and when converting some DEC
Fortran programs (about 60) last year, I had to write a special input
routine to catch the leading negative sign on a zero floating point
number and store a truth value for each cell in the arrays read in.
In most cases the negative zero was a signal that the actual value was
missing, and appropriate changes to the statistical measures were then
made at each cell in the arrays that had this negative-zero signal.

I got the impression that this was a fairly common requirement, and
the old mainframe computers (especially the IBM Service Bureau
machines) spent hours of running time on commercial statistics.


e p chandler

Sep 21, 2010, 4:09:06 AM
"Terence" wrote

>Negative zero was implemented on several old (1956-1976) mainframe
>computers.

[snip]


>I met this on CDC and DEC computers, and when converting some DEC
>Fortran programs (about 60) last year, I had to write a special input
>routine to catch the leading negative sign on a zero floating point
>number and store a truth value for each cell in the arrays read in.
>In most cases the negative zero wa a signal that the actual value was
>missing, and approprriate changes to statitical measures to be taken
>at each cell in the arrays, that had this negative-zero signal.

I don't remember the internal representation on Univac machines, but I
did use this feature (outside of Fortran) to indicate missing values in
Minitab (a statistical package) with

RECODE -0 *

When writing FORTRAN programs on IBM, DEC or UNIVAC machines, I never had
any particular use for nor did I pay any attention to negative zero.


Uno

Sep 21, 2010, 4:25:55 AM

> If a Fortran processor can distinguish between -0 and 0, then
> yes there are 2 representations for zero. Of course, one needs
> to read the standard (or experiment) to determine what happens
> with
>
> !
> ! Typed without testing. All bugs in the below code are mine.

$ gfortran -Wall -Wextra j1.f90 -o out1
$ ./out1
zero
$ cat j1.f90


program a
real x
x = - 0.
if (x == 0) print *, 'zero'
endprogram


! gfortran -Wall -Wextra j1.f90 -o out1

$

This is good news. I say that because of the algebraic awkwardness of
having two additive identities.
--
Uno

Dave Flower

Sep 21, 2010, 4:37:33 AM

The UNIVAC 1108 had an extremely useful function, FLD, that would
extract specified bits from a (36 bit) word. It could be used on
either side of an assignment statement.

However, it misbehaved if given a 'minus zero' argument. So programs
tended to contain the line:

IF ( I .EQ. 0 ) I = 0

which solved the problem

Dave Flower

Uno

Sep 21, 2010, 4:41:57 AM
Arjen Markus wrote:

> The reason a signed zero exists, as far as I can tell, is that for
> certain computations with complex numbers it is important
> to be able to distinguish numbers on either side of a branch cut.
> (see http://mathworld.wolfram.com/RiemannSurface.html for instance)

Heady stuff. I think of a complex as a direct product of reals.

You might think this is funny. I sure do:

http://www.youtube.com/watch?v=skCV2L0c6K0

Get real!:-)
--
Uno

Aris

Sep 21, 2010, 4:53:31 AM
Arjen Markus <arjen.m...@gmail.com> wrote:
> On 21 sep, 05:01, Uno <merrilljen...@q.com> wrote:
>> I was reading Plauger's _The Standard C Library_ today while waiting for
>> lunch, making this time precious as opposed to tedious. He says that in
>> his implementation, he isn't going to fuss about negative zero in math.h.
>>
>> I read a little bit more, and the references he has built his testing
>> facility on are fortran, to wit, Cody and Waite. The border between the
>> syntaxes here is hard to discern, although they both have signed on to
>> what used to be called IEEE 754.
>>
>> So are there 2 zeroes in these representations, one with the sign bit
>> set and the other without?
>>
>> Thanks for your comment,
>> --
>> Uno
>
> The reason a signed zero exists, as far as I can tell, is that for
> certain computations with complex numbers it is important
> to be able to distinguish numbers on either side of a branch cut.
> (see http://mathworld.wolfram.com/RiemannSurface.html for instance)

I definitely wouldn't trust that; I'd make sure there is at least a tiny
non-zero number.

David Muxworthy

Sep 21, 2010, 5:44:18 AM
Terence wrote:

> Negative zero was implemented on several old (1956-1976) mainframe
> computers.
> The Fortran compiler and programs that ran on those machines could
> distinguish on input and save in memory two versions of zero, one of
> which was negative. Then the execution of a Fortran program could make
> a call to a machine-dependant routine, (which was an extension of
> those Fortran IV versions), to test and return a truth value to
> indicate a negative zero situation for that cell value.

The main use as I recall was to distinguish an actual zero from a blank
field in an input record. Both were read in as the value zero and were
then disambiguated by use of the SIGN function. This was a de facto
standard on the IBM 7090/4 and Univac 1107/8 amongst others.
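The SIGN-based test described above can be sketched in modern Fortran (not the historical 7090 code; it assumes a processor, such as gfortran, whose SIGN distinguishes negative zero):

```fortran
! Sketch: using SIGN to tell a negative zero (e.g. a blank input
! field under the old convention) apart from an ordinary zero.
program blank_vs_zero
  implicit none
  real :: x
  x = sign(0.0, -1.0)          ! construct a negative zero
  if (x == 0.0) then           ! -0.0 compares equal to 0.0 either way
     if (sign(1.0, x) < 0.0) then
        print *, 'negative zero (blank field)'
     else
        print *, 'ordinary zero'
     end if
  end if
end program blank_vs_zero
```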
David Muxworthy

glen herrmannsfeldt

Sep 21, 2010, 6:55:38 AM
Arjen Markus <arjen.m...@gmail.com> wrote:
(snip)


> The reason a signed zero exists, as far as I can tell, is that for
> certain computations with complex numbers it is important
> to be able to distinguish numbers on either side of a branch cut.
> (see http://mathworld.wolfram.com/RiemannSurface.html for instance)

Well, the reason it exists is that it comes naturally with sign
magnitude and ones complement representation. Most early computers
used sign magnitude, and slightly later ones complement for integers.

It turned out, though, that twos complement is a better choice
for fixed point, but sign magnitude seems to work best for
floating point.

In addition, the Fortran 66 ISIGN, SIGN, and DSIGN functions can
naturally, but as an extension to the standard, detect negative
zero. (For sign magnitude and ones complement representations.)

Much more recently, and conditional on the processor being able
to distinguish negative zero, some intrinsic functions have
specific definitions for their values when given negative zero
as arguments.
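One of those intrinsic-function cases can be illustrated with a short sketch (my example, not Glen's; on a processor that distinguishes negative zero, the standard specifies ATAN2(+0.0, -1.0) = +pi and ATAN2(-0.0, -1.0) = -pi):

```fortran
! Sketch: ATAN2 has a specific defined value for a negative-zero
! first argument, giving -pi instead of +pi on the negative real axis.
program atan2_negzero
  implicit none
  real :: negzero
  negzero = sign(0.0, -1.0)      ! negative zero, where supported
  print *, atan2(0.0, -1.0)      ! approximately +3.1415927
  print *, atan2(negzero, -1.0)  ! approximately -3.1415927, where supported
end program atan2_negzero
```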

-- glen

glen herrmannsfeldt

Sep 21, 2010, 7:01:56 AM
e p chandler <ep...@juno.com> wrote:
(snip)


> I don't remember the internal representation on Univac machines,
> but I did use this feature (outside of Fortran) to indicate
> missing values in Minitab (a statistical package) with
> RECODE -0 *

The 704, where Fortran originated, used sign magnitude for INTEGER
(and REAL).

Univac, I believe, still builds machines with ones complement
integers, allowing for a negative zero.



> When writing FORTRAN programs on IBM, DEC or UNIVAC machines, I never
> had any particular use for nor did I pay any attention to
> negative zero.

There are machines using twos complement for floating point,
but it is much less common. In most cases it doesn't make it
any easier, and in some it makes it harder.


-- glen

m_b_metcalf

Sep 21, 2010, 9:39:44 AM

No, it's bad news (an error in gfortran? -- I've no idea what -Wextra
does). The program should print zero and doesn't. "Within Fortran, -0
is treated as the same as a zero in all intrinsic operations and
comparisons, but it can be detected by the sign function and is
respected on formatted output". (MR&C, p. 220).
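The MR&C behaviour quoted above can be demonstrated with a small sketch (my example, assuming gfortran or another IEEE processor):

```fortran
! Sketch: -0 is equal to zero in comparisons, but its sign is
! respected on formatted output.
program negzero_output
  implicit none
  real :: x
  x = sign(0.0, -1.0)        ! negative zero, where supported
  print *, x == 0.0          ! T: treated as zero in comparisons
  print '(f6.1)', x          ! the minus sign survives formatted output
end program negzero_output
```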

Regards,

Mike Metcalf

Gordon Sande

Sep 21, 2010, 9:46:23 AM

If you follow the references you will find that both 1 + 0 and 1 - 0 will
evaluate to 1, so it is more a notational curiosity until you get into
some of the more arcane properties of Riemann sheets and multivalued
functions, where the notational quirks can be handy.

The old "a little knowledge is not always a useful amount" or something.


FX

Sep 21, 2010, 10:55:39 AM
>> $ gfortran -Wall -Wextra j1.f90 -o out1
>> $ ./out1
>>   zero
>> $ cat j1.f90
>> program a
>>     real x
>>     x = - 0.
>>     if (x == 0) print *, 'zero'
>> endprogram
>
> No, it's bad news (an error in gfortran? -- I've no idea what -Wextra
> does). The program should print zero and doesn't.

It does.

--
FX

robin

Sep 21, 2010, 11:12:17 AM
"glen herrmannsfeldt" <g...@ugcs.caltech.edu> wrote in message news:i7a2va$pnk$1...@speranza.aioe.org...

| Arjen Markus <arjen.m...@gmail.com> wrote:
|
| > The reason a signed zero exists, as far as I can tell, is that for
| > certain computations with complex numbers it is important
| > to be able to distinguish numbers on either side of a branch cut.
| > (see http://mathworld.wolfram.com/RiemannSurface.html for instance)
|
| Well, the reason it exists is that it comes natually with sign
| magnitude and ones complement representation. Most early computers
| used sign magnitude, and slightly later ones complement for integers.

Most early computers used twos complement for integers.

That was a fact of life for serial computers.

BCD computers naturally tended to signed magnitude.

| It turned out, though, that twos complement is a better choice
| for fixed point, but sign magnitude seems to work best for
| floating point.

A few machines, notably CDC 6000 and successors,
used ones complement for integers. That was a mistake, for they
created the -0 +0 problem.


steve

Sep 21, 2010, 11:49:51 AM

gfortran has the correct behavior. From what Uno wrote,
one might conclude that Uno forgot to actually run the
program. Here's a slightly modified version for Uno to
ponder
troutmask:kargl[224] cat z.f90


program a
real x
x = - 0.

if (x == 0) print *, sqrt(x)
end program a

troutmask:kargl[225] gfortran -o z -O z.f90
troutmask:kargl[226] ./z
-0.0000000

--
steve

Richard Maine

Sep 21, 2010, 1:15:14 PM
Aris <us...@domain.invalid> wrote:

> Arjen Markus <arjen.m...@gmail.com> wrote:
>
> > The reason a signed zero exists, as far as I can tell, is that for
> > certain computations with complex numbers it is important
> > to be able to distinguish numbers on either side of a branch cut.
>

> I definitely wouldn't trust that, and make sure there is at least a tiny
> non-zero number.

I agree with not trusting that, particularly as it is quite plausible
for things like

x = -0.

to end up with x being a positive zero. The standard certainly allows
that and arguably encourages it. One could make at least some argument
that the standard requires it, but that would be even more arguable.
(don't think I'll try to make the argument, as I don't actually agree
with it, but I can imagine such an argument. I think the bit about
almost everything being approximation in floating point trumps it.)

Note that the standard doesn't require a negative zero exist at all. It
allows for the fact, but does not require it. Makes the wording a little
awkward at times because of all the phrases like "on processors that
support a negative zero".
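On compilers supporting the Fortran 2003 IEEE modules, one can test what actually got stored rather than arguing from the standard's wording; a hedged sketch (my example, meaningful only where IEEE support is available):

```fortran
! Sketch: IEEE_CLASS reports whether "x = -0." actually stored a
! negative zero, which the standard leaves processor-dependent.
program classify
  use, intrinsic :: ieee_arithmetic
  implicit none
  real :: x
  x = -0.
  if (ieee_class(x) == ieee_negative_zero) then
     print *, 'stored a negative zero'
  else
     print *, 'stored an ordinary zero'
  end if
end program classify
```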

The Fortran standard allows for a negative zero because some hardware
(in fact, most of it these days) has one. I don't think any more
esoteric explanation is either needed or accurate. Things like branch
cuts might influence what the Fortran standard says about how negative
zero acts in some cases, but I seriously doubt they have anything to do
with the reason why negative zero exists.

I also don't think that's a highly likely explanation for why hardware
often supports a negative zero. I suspect rather that, as Glen
mentioned, sign magnitude is usually chosen for floating point for other
reasons, and you just naturally get a negative zero from sign magnitude.

--
Richard Maine | Good judgment comes from experience;
email: last name at domain . net | experience comes from bad judgment.
domain: summertriangle | -- Mark Twain

GaryScott

Sep 21, 2010, 6:52:44 PM

Ugh, I'll never get positive/negative zero. Negative means "less than
zero" and positive means "greater than zero". Zero doesn't have a
negativeness or positiveness, if it did, the value wouldn't be
zero...oh well, those mathematicians...:)

glen herrmannsfeldt

Sep 21, 2010, 7:32:26 PM
GaryScott <garyl...@sbcglobal.net> wrote:
(snip)


> Ugh, I'll never get positive/negative zero. Negative means "less than
> zero" and positive means "greater than zero". Zero doesn't have a
> negativeness or positiveness, if it did, the value wouldn't be
> zero...oh well, those mathematicians...:)

Well, much of math uses 0+ and 0-, that is, the limit as
(some variable) approaches 0 from the + or - side.

In that case, the least negative value should work just fine.

Adding the ability for SIGN to test the sign bit of zero,
which it likely has been doing for a long time (at least
when written in assembler), didn't seem so bad. Extending
it through atan2 and log, though, seems a little more than
was really needed.

Though my biggest complaint about current floating point
implementations is denormals (or gradual underflow).
It seems like a small gain for a lot of extra work
(either in hardware, or software interrupt routines).

-- glen

Uno

Sep 21, 2010, 8:23:42 PM

I think you misread my post. I am consistent in this manner, showing
first the compile command, second the output, third the source. I
think you missed a zero in there.

From gcc.pdf:

-Wextra
This enables some extra warning flags that are not enabled by ‘-Wall’.
(This option used to be called ‘-W’. The older name is still supported,
but the newer name is more descriptive.)
-Wclobbered
-Wempty-body
-Wignored-qualifiers
-Wmissing-field-initializers
-Wmissing-parameter-type (C only)
-Wold-style-declaration (C only)
-Woverride-init
-Wsign-compare
-Wtype-limits
-Wuninitialized
-Wunused-parameter (only with ‘-Wunused’ or ‘-Wall’)

I started using -Wall and -Wextra a couple years ago and haven't been
told since to turn up my warnings.

One of the hard things for gfortran users is knowing what's going to be
in gcc.pdf, and what in gfortran.pdf, with the idea that the latter
builds on the former.
--
Uno

robin

Sep 21, 2010, 8:44:19 PM
"steve" <kar...@comcast.net> wrote in message news:0384ced2-f11c-44d9...@x20g2000pro.googlegroups.com...

No he didn't. The output is there for all to see.

Uno

Sep 21, 2010, 8:52:34 PM
steve wrote:

> gfortran has the correct behavior. From what Uno wrote,
> one might conclude that Uno forgot to actually run the
> program. Here's a slightly modified version for Uno to
> ponder
> troutmask:kargl[224] cat z.f90
> program a
> real x
> x = - 0.
> if (x == 0) print *, sqrt(x)
> end program a
>
> troutmask:kargl[225] gfortran -o z -O z.f90
> troutmask:kargl[226] ./z
> -0.0000000
>
> --

Uno did just fine with representing the output, so we need not add that
to his Ponderous Compendium of Mistakes.

I claim that giving the source first is the equivalent of top-posting
for those who are shaping their source. It has to compile. Then it has
to behave, which one sees in the output. Then to discuss it, you cat
the source and select on the terminal just like it happened, in its
briefest form.

$ gfortran -Wall -Wextra j2.f90 -o out
$ ./out
-0.0000000
$ cat j2.f90


program a
real x
x = - 0.
if (x == 0) print *, sqrt(x)
end program a

! gfortran -Wall -Wextra j2.f90 -o out

$

Cool. I was puzzling on how I'd come up with one of these.
--
Uno

robin

Sep 21, 2010, 9:04:50 PM
"glen herrmannsfeldt" <g...@ugcs.caltech.edu> wrote in message news:i7a2va$pnk$1...@speranza.aioe.org...
| Arjen Markus <arjen.m...@gmail.com> wrote:
| (snip)
|
| > The reason a signed zero exists, as far as I can tell, is that for
| > certain computations with complex numbers it is important
| > to be able to distinguish numbers on either side of a branch cut.
| > (see http://mathworld.wolfram.com/RiemannSurface.html for instance)
|
| Well, the reason it exists is that it comes natually with sign
| magnitude and ones complement representation. Most early computers
| used sign magnitude, and slightly later ones complement for integers.

The reason that most early computers DIDN'T use ones
complement was that execution time for arithmetic operations
would have been doubled or more.

Most of the early computers were serial, and the problem with
ones complement addition was that if there is a carry out
of the addition, a "one" must THEN be added into the result --
thus requiring TWO additions. In a serial machine, the arithmetic
proceeds from the least-significant bit, one bit at a time. By the time that
a carry out has occurred and been detected, the word has gone, and
the addition operation must then be repeated, by bringing back the sum
to the adder, and adding a "one". Thus, two additions!

For signed-magnitude, similar difficulties exist,
depending on whether the result of addition is negative or positive.

| It turned out, though, that twos complement is a better choice
| for fixed point,

It didn't "turn out" better.
Before building any computer, designers had already determined
that TWOS-COMPLEMENT was the ONLY choice for
a binary arithmetic unit.

| but sign magnitude seems to work best for
| floating point.

Are you forgetting that CDC 6000 series and later used
ones complement floating-point?


robin

Sep 22, 2010, 7:28:56 AM
"Shmuel (Seymour J.) Metz" <spam...@library.lspace.org.invalid> wrote in message
news:4c99e35d$1$fuzhry+tra$mr2...@news.patriot.net...
| In <4c995639$0$90005$c30e...@exi-reader.telstra.net>, on 09/22/2010

| at 11:04 AM, "robin" <rob...@dodo.com.au> said:
|
| >The reason that most early computers DIDN'T use ones
| >complement was that execution time for arithmetic operations would
| >have been doubled or more.
|
| Nonsense.

You don't know what you are talking about.

| >Most of the early computers were serial,
|

| 1940's, perhaps, certainly not 1950's.

Definitely in the 1950s.

| >For signed-magnitude, similar difficulties exist,
|

| And for anything but ones' complement, negating a number requires
| going through the adder instead of just a bunch on not gates.

To form the negative in a serial machine, it was necessary to subtract it from zero.
Even if ones complement was used for such a machine (which it wasn't),
all the bits in the word would have had to pass thru the adder/complementer,
and would have taken the same time as forming the twos complement.

| >Before building any computer, designers had already determined that
| >TWOS-COMPLEMENT was the ONLY choice for
| >a binary arithmetic unit.
|

| In what universe?

In a serial machine, as already stated.


Terence

Sep 22, 2010, 7:46:03 AM

EXACTLY!
I wish I had remembered that SIGN function reference, but yes, that
was it; and yes, it was set to -0 on a blank input numeric field.
And here I have some examples of those calls, sitting right here in
the original BMD source code (from DEC) on this same computer!!!

glen herrmannsfeldt

Sep 22, 2010, 8:54:18 AM
Terence <tbwr...@cantv.net> wrote:
(snip)


> EXACTLY!
> I wish I had remembered that SIGN function reference, but yes, that
> was it; and yes, it was set to -0 on a blank input numeric field.
> And here I have some examples of those calls, sitting right here in
> the original BMD source code (from DEC) on this same computer!!!

Note that the PDP-10 (strangely enough) uses twos-complement
for floating point. That has the advantage of allowing the
integer comparison instructions to be used on floating point
values. Otherwise, it would seem to make processing of floating
point operations harder.

So, no negative zero on the PDP-10.

-- glen

Ken Fairfield

Sep 22, 2010, 11:27:34 AM
On Sep 22, 5:54 am, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:

I was thinking along the same lines. But I'd guess
Terence was actually thinking about a DEC-10 or
DEC-20 (36 bit, right?), or even a PDP-6 (!), when
he writes "DEC".

I really do wish that people would be more precise
in their references, much like the recurrent request
for compiler vendor and version when asking about
compilation errors, sigh... Terence could be talking
about almost any archeological computer when he
says "DEC" and none of us would be any the wiser. :-)

-Ken

P.S. I only go back to PDP-8 (barely) and
Burroughs 3600 (IIRC)...

glen herrmannsfeldt

Sep 22, 2010, 2:30:15 PM
Ken Fairfield <ken.fa...@gmail.com> wrote:
(snip)

>> Note that the PDP-10 (strangely enough) uses twos-complement
>> for floating point.  That has the advantage of allowing the
>> integer comparison instructions to be used on floating point
>> values.  Otherwise, it would seem to make processing of floating
>> point operations harder.

>> So, no negative zero on the PDP-10.

> I was thinking along the same lines. But I'd guess
> Terence was actually thinking about a DEC-10 or
> DEC-20 (36 bit, right?), or even a PDP-6 (!), when
> he writes "DEC".

The DEC-10 and DEC-20 are systems based on the PDP-10
processor. The PDP-6 is the predecessor to them,
yes, all are 36 bit. I don't know about floating point
on the PDP-6, though.



> I really do wish that people would be more precise
> in their references, much like the recurrent request
> for compiler vendor and version when asking about
> compilation errors, sigh... Terence could be talking
> about almost any archeological computer when he
> says "DEC" and none of us would be any the wiser. :-)

The PDP-10 (running TOPS-10) was the first DEC system
that I used.



> P.S. I only go back to PDP-8 (barely) and
> Burroughs 3600 (IIRC)...

My first few programs were on the B5500, before it was sold.
That was five years before I started learning Fortran.

-- glen

dpb

Sep 22, 2010, 3:06:00 PM
glen herrmannsfeldt wrote:
...

> The PDP-10 (running TOPS-10) was the first DEC system
> that I used.

Ditto, but not for other than editing purposes for stuff submitted to
the IBM at ORNL lo! many moons ago...

>> P.S. I only go back to PDP-8 (barely) and
>> Burroughs 3600 (IIRC)...

Ditto, too, altho no Burroughs...

> My first few programs were on the B5500, before it was sold.
> That was five years before I started learning Fortran.

...

_First_ (undergrad eng'g) was FORTRAN on 1620.

First in anger on Philco. It was 48-bit word; I don't recall ever
worrying about what the floating point representation actually was; that
wasn't on my radar screen yet at the time...

<http://archive.computerhistory.org/resources/text/Philco/Philco.transec_S2000.1958.102646276.pdf>

I'm not sure when the machine at B&W dated from; it was upgraded several
times before and during its lifetime there before being retired for the
6600. I don't recall precisely when that was, either, probably about
70,71 I'd guess now. My biggest recollection was the 27 7-track tapes
and trying to minimize need to swap tapes or add a drive in the middle
of cross-section multi-group energy-reduction processing that was a
major bottleneck...those issues outweighed the numerics issues I had to
deal with then.

--

Richard Maine

Sep 22, 2010, 3:46:10 PM
dpb <no...@non.net> wrote:

> _First_ (undergrad eng'g) was FORTRAN on 1620.

While that wasn't the first computer I programmed in Fortran, a 1620 was
the first computer that I was actually allowed to lay my own hands on.
Ok, the first digital electronic computer; that rules out the 3-bit
mechanical Digi-Comp I that I played with as a kid. Google it if you
care; apparently I should have saved that old toy as an investment,
though I'm afraid my siblings and I were a little hard on it.

Prior to the 1620, my programming was either by giving a deck of cards
to an operator, who disappeared into a secret room with it, or by using
a teletype connected to some machine not even in the same state.

By the time I entered Purdue in 1969, the 1620 was decrepit enough that
even peon undergrads like myself were allowed to operate it.

dpb

Sep 22, 2010, 4:25:41 PM
Richard Maine wrote:
> dpb <no...@non.net> wrote:
>
>> _First_ (undergrad eng'g) was FORTRAN on 1620.
>
> While that wasn't the first computer I programmed in Fortran, a 1620 was
> the first computer that I was actually allowed to lay my own hands on.
> Ok, the first digital electronic computer; that rules out the 3-bit
> mechanical Digi-Comp I that I played with as a kid. Google it if you
> care; apparently I should have saved that old toy as an investment,
> though I'm afraid my siblings and I were a little hard on it.
>
> Prior to the 1620, my programming was either by giving a deck of cards
> to an operator, who disappeared into a secret room with it, or by using
> a teletype connected to some machine not even in the same state.
>
> By the time I entered Purdue in 1969, the 1620 was decrepit enough that
> even peon undergrads like myself were allowed to operate it.

I'd never seen a computer of any sort prior to matriculating (oohh, I'd
never heard the word until a prof used it in a class and then, when
nobody asked what it meant, started an object lesson in how _not_ to
proceed to learn :) ) at K-State in '63.

The 1620 was already in the basement and accessible for undergrad labs
then; it wasn't terribly reliable but was only access we had other than
in-class assignments that were submitted to the IBM via the deck at the
door route and those were controlled to a bare minimum -- I think a good
guess would have been a couple submittals a semester course.

NE isotopes lab had a PACE analog (a very late model, pricey system as a
corporate grant at the time). Closest other than that were the Wang
computing stations w/ the nixie-tube displays.

At B&W for both the Philco and then the CDC it was all behind closed
doors until about the last year at B&W ('77/78 roughly) they finally did
open up a number of CRTs w/ submittal privileges to the 7600 and minimal
computational privileges on the front-end 6400 (they neutered the
FORTRAN compiler to not accept over 100 card images as a simple-minded
ploy--it, of course, didn't take long to learn that one could save the
object files and link. Then, they added some more restrictions. It was
a continuing game of how to evade the limits to get some turnaround on
meaningful results. :) ). The claimed reason for all the above was
the requirements for NRC QA and the intent that engineers not write code
because didn't have a process that met (their interpretation) 10CFR50
reqm'ts.

B&W in early days before the CDC when things got tight on deadlines sent
some of us up to the Philco service center in Philly for marathon
fuel-cycle enrichment computational sessions on the hopped-up 2200's
there that were about 1/3-rd to maybe half-again as fast. They were
nightmarish 24-hr, 2- or 3-day sessions w/ tag-teams to get SWU orders in
to the enrichment chain for reactor fuel. Which, of course, all turned out
to be tempest in a teapot when some of the Oconee I reactor internals
failed during the preload hot functional testing owing to flow-induced
vibrations that delayed initial startup almost an additional two years,
pushing all the other similar plants out, too...

Sorry, belated fogey alert... :)

--

glen herrmannsfeldt

Sep 22, 2010, 6:04:41 PM
dpb <no...@non.net> wrote:

(snip)



> The 1620 was already in the basement and accessible for undergrad labs
> then; it wasn't terribly reliable but was only access we had other than
> in-class assignments that were submitted to the IBM via the deck at the
> door route and those were controlled to a bare minimum -- I think a good
> guess would have been a couple submittals a semester course.

We had a PDP-10 for undergrads to use. No limits, other than those
set by other undergrads, and that your job might get set to the
lowest priority.

> At B&W for both the Philco and then the CDC it was all behind closed
> doors until about the last year at B&W ('77/78 roughly) they finally did

I will guess that B&W is Babcock and Wilcox. My grandfather worked
for them, in Barberton Ohio, though I never knew much about what
he did there. He was a mechanical engineer, and had various books
with steam tables and such. I inherited some of his chemistry
and math books, and some of his photography equipment, when I
was 10 years old or so.

(snip)

-- glen

dpb

Sep 22, 2010, 7:25:32 PM
glen herrmannsfeldt wrote:
...

> I will guess that B&W is Babcock and Wilcox. My grandfather worked
> for them, in Barberton Ohio, though I never knew much about what
> he did there. He was a mechanical engineer, and had various books
> with steam tables and such. I inherited some of his chemistry
> and math books, and some of his photography equipment, when I
> was 10 years old or so.
>
> (snip)
>
> -- glen

Indeed it would be so.

Barberton was headquarters ("Main Works") of the Fossil Power Generation
Division (would have been known as the "Boiler Division" during your
grandfather's time; wasn't renamed until after B&W entered the nuclear
market and split to the FPGD and NPGD). I was in the NPGD in Lynchburg,
VA (commercial reactors). B&W was the sole supplier to USN for naval
fuel--that facility was the Navy Nuclear Fuel Division, outside
Lynchburg at Mt. Athos, also home of the Lynchburg Nuclear Research
Center and the Critical Experiment Laboratory.

They were/are a comprehensive power-generation company. Besides
Barberton, there were the Tubular Products Division in Beaver Falls, PA
(Joe Namath's father was an employee there), the Refractories Division
Works in Augusta, GA, various manufacturing facilities for the Boiler
Division in Alliance, OH, Wilmington, NC, West Point, MS, Paris, TX, and
Tubular Products Division facilities in Alliance, Brunswick, GA,
Milwaukee, Beaver Falls, Koppel and even a steel mill, also in Beaver
Falls. The NPGD built the Mt Vernon, IN, works for the fabrication of
reactor pressure vessels, steam generators, and other nuclear-grade
components. It, unfortunately, didn't survive the nuclear downturn (nor
did any other US-based nuclear heavy manufacturing).

Over the years I've lost track of most of the company other than the part
that evolved into supporting mostly the fossil utilities; I came in contact w/
a fair number of the people on that side in the Alliance Research Center.

Interestingly, B&W has just become independent again after 32 or so
years as an operating entity under J Ray McDermott. McDermott bought
the entire B&W company in 1967 or early 1968 in order to acquire their
proprietary seamless tubing technology. McDermott was getting heavily
involved in the North Sea exploration and needed (or at least wanted
desperately) it for the depths encountered there. B&W wanted to keep it
proprietary as a competitive advantage in making nuclear fuel cladding
and boiler tubing, etc. But, enough money finally won out.

McDermott then divested themselves of the nuclear business, which is now
with BWX, which is (I think) privately held. I just saw an announcement
that B&W has reformed a new Nuclear Power Division but it's
headquartered in Charlotte, NC, and is, at least so far, only a design
operation and a small one at that, I think. Something like what NPGD was
shortly before my arrival (along w/ a cast of a couple thousand others
over about 3 years) as it transitioned from the sales engineering to the
final design, construction and startup of the Oconee-class reactors.

I left shortly after the McD takeover--I was in an internal R&D
organization and could begin to see handwriting on the wall that the
beginning slowdown was likely to be protracted (though I'd never have
guessed 30+ yrs protracted and counting) and that B&W didn't have the
resources to hang around like W or GE (or even Combustion Engineering
who held out for a while before being bought up by Framatome). I
followed a former colleague to a new consulting office opening in Oak
Ridge about a year before TMI and the rest is, as they say, history...

B&W, particularly NPGD, was a great place to be a youngster out of
school in the time--just really getting cranked up to do "real work" --
I was only the 9th NE to come on board in '68 so had almost complete
freedom to choose what I wanted to work on and with whom before the rest
of the influx began to require more structure...by that time, I was
pretty well ensconced. :)

Again, far more than anybody asked....

--

dpb

unread,
Sep 22, 2010, 8:28:37 PM9/22/10
to
dpb wrote:
...
> ...McDermott bought the entire B&W company in 1967 or early 1968 ...

That would have been 1977 or early 1978...

--

robin

unread,
Sep 22, 2010, 10:39:25 PM9/22/10
to
"Shmuel (Seymour J.) Metz" <spam...@library.lspace.org.invalid> wrote in message
news:4c99e35d$1$fuzhry+tra$mr2...@news.patriot.net...

| >Most of the early computers were serial,
|


| 1940's, perhaps, certainly not 1950's.

The Bendix G15, of which 400 were sold,
began operations in 1956. It was a serial machine.

The DEUCE was serial, commenced operation in
1955.

The ACE was serial, commenced operation in late 1950s.

UNIVAC I and UNIVAC II were serial.

In fact, a whole heap of serial machines was produced in the early years.

Choice of serial was based on the type of storage medium
(typically acoustic delay lines, but could be magnetic), and on the need to
minimise equipment.

Thus, early (serial) machines got by with some 1000 vacuum tubes
(compare with 18,000 for ENIAC).

(Parallel was expensive in hardware, requiring an adder
having some 32 adder circuits instead of 1 for serial.)


glen herrmannsfeldt

unread,
Sep 23, 2010, 12:13:50 AM9/23/10
to
In comp.lang.fortran robin <rob...@dodo.com.au> wrote:
> "Shmuel (Seymour J.) Metz" <spam...@library.lspace.org.invalid> wrote in message
(snip)


> The Bendix G15, of which 400 were sold,
> began operations in 1956. It was a serial machine.

> The DEUCE was serial, commenced operation in 1955.

> The ACE was serial, commenced operation in late 1950s.

> UNIVAC I and UNIVAC II were serial.

> In fact, a whole heap of serial machines was produced in the early years.

Weren't some of these serial decimal, that is, one decimal
digit at a time? Not quite bit serial as we would expect now.

-- glen

Uno

unread,
Sep 23, 2010, 1:39:54 AM9/23/10
to
GaryScott wrote:

> Ugh, I'll never get positive/negative zero. Negative means "less than
> zero" and positive means "greater than zero". Zero doesn't have a
> negativeness or positiveness, if it did, the value wouldn't be
> zero...oh well, those mathematicians...:)

I used this source to try to get a better feel for it:

$ gfortran -Wall -Wextra j3.f90 -o out
$ ./out
-0.0000000
-2.0000000
y and z are 1.17549435E-38 1.17549435E-38
y and z are -0.0000000 1.17549435E-38
y and z are 0.0000000 1.17549435E-38
x is -0.0000000
x is 0.0000000
$ cat j3,f90
cat: j3,f90: No such file or directory
$ cat j3.f90
program a
real x, y, z
x = - 0.
print *, sqrt(x)
print *, sign(2.0, x)
y = tiny(x)
z = spacing(y)
print *, "y and z are", y, z
y = x
z = spacing(y)
print *, "y and z are", y, z
y = 0.0
z = spacing(y)
print *, "y and z are", y, z

x = -0.0
print *, "x is", x
if ((x == 0.0) .and. (sign(2.0, x) .lt. 0.0)) then
x = - x
end if
print *, "x is", x

end program a
! gfortran -Wall -Wextra j3.f90 -o out
$

The last part looks to me to be a standard way to turn both zeroes into
a non-negative real zero.

I was surprised to read on MR&C p. 220 that the standard stipulates that
sqrt(-0.0) is negative zero. Is that the specific feature that is on
account of these branch cuts?
--
Uno

dpb

unread,
Sep 23, 2010, 10:38:52 AM9/23/10
to

If you care, Glen...

<http://babcock.com/about/history.html>

Only sorta' an edited timeline; not a lot of the early history narrative
and they left the McD era out... :)

I did not know of the Chap 11 nor realize that in '07 they reacquired
the BWXT facilities back to the NPGD. Now I don't understand the recent
announcement re: a new NPD in Charlotte, for sure...

Another interesting chapter surprisingly not mentioned (and one I regret
I was too late to get in on) was the NS Savannah...

--

Terence

unread,
Sep 24, 2010, 6:05:13 AM9/24/10
to

Yeah!
Dec-10 and Dec-20.
And not ANY archeological computer; I would exclude the Antikythera
mechanism (I have a simulation), and the Babbage difference engine
(which I've seen); also I never used Deuce, but did use Pegasus and
Atlas and some delightful hand-held and hand-driven devices, (Oh! and
the Marchant which used house current). Then I joined IBM after they
dropped the bacon slicer and went electric.

robin

unread,
Sep 24, 2010, 11:02:26 PM9/24/10
to
"Arjen Markus" <arjen.m...@gmail.com> wrote in message
news:adadbe17-bda5-4cf5...@c21g2000vba.googlegroups.com...

>The reason a signed zero exists, as far as I can tell, is that for
>certain computations with complex numbers it is important
>to be able to distinguish numbers on either side of a branch cut.
>(see http://mathworld.wolfram.com/RiemannSurface.html for instance)

The reason a signed zero may exist in a computer is merely a quirk
of the design of the arithmetic unit and the choice of number representation:
ones complement, twos complement, and signed magnitude,
and whether binary or BCD, etc.

robin

unread,
Sep 27, 2010, 6:08:04 PM9/27/10
to
"Shmuel (Seymour J.) Metz" <spam...@library.lspace.org.invalid> wrote in message
news:4c9eab67$1$fuzhry+tra$mr2...@news.patriot.net...
| In <4c99e87f$0$90003$c30e...@exi-reader.telstra.net>, on 09/22/2010

| at 09:28 PM, "robin" <rob...@dodo.com.au> said:
|
| >You don't know what youare taling about.
|
| ROTF,LMAO! I may not know what I am taling (sic) about, but I
| certainly know what I am talking about, and the binary machines of the
| 1950s were dominated by parallel ones.

You've not mentioned a single computer that did not use serial arithmetic
and/or did not use non-serial memory.

Looks like you still don't know what you are talking about.

| >Definitely in the 1950s.
|
| FSVO "1050's" that excludes most of them.

Instead of making silly arrogant remarks,
why not actually check it out?

| >To form the negative in a serial machine,
|

| K3wl. Unfortunately, bit-serial machines were rare in the 1950's.

BTW, the IBM 701 (1952) had a bit-serial memory,
as did all machines that used the Williams tube for memory.
Although the Ferranti Meg (1954) was a parallel machine,
it used a serial arithmetic unit.
Likewise the MV950 (1956) had a serial arithmetic unit.

As for your fatuous statement that bit-serial machines were rare in the 1950s,
I think that you have not looked at those serial computers that I already cited.


robin

unread,
Sep 27, 2010, 10:42:48 AM9/27/10
to
"glen herrmannsfeldt" <g...@ugcs.caltech.edu> wrote in message news:i7a2va$pnk$1...@speranza.aioe.org...
| Arjen Markus <arjen.m...@gmail.com> wrote:
| (snip)
|
| > The reason a signed zero exists, as far as I can tell, is that for
| > certain computations with complex numbers it is important
| > to be able to distinguish numbers on either side of a branch cut.
| > (see http://mathworld.wolfram.com/RiemannSurface.html for instance)
|
| Well, the reason it exists is that it comes natually with sign
| magnitude and ones complement representation. Most early computers
| used sign magnitude, and slightly later ones complement for integers.

Expanding the record of serial computers using
twos complement arithmetic:-

Pilot ACE first ran in 1950.
DEUCE was first delivered in 1955
ACE commissioned in 1958
Bendix G-15 deliveries commenced in 1856
Pegasus (1956)
Ferranti Mark I (1951)

Other serial machines and/or machines having serial arithmetic units :--

Ferranti Meg (1954) serial arithmetic unit.
MV950 (1956) serial arithmetic unit
ORDFIAC used serial acoustic memory.
SEAC used acoustic and electrostatic memories.
RAYDAC (1953) used serial memory.
OARAC (1953) used serial and parallel arithmetic.

For others, see "Computer Structures, Readings and Examples".


robin

unread,
Sep 27, 2010, 10:44:13 AM9/27/10
to
"Shmuel (Seymour J.) Metz" <spam...@library.lspace.org.invalid> wrote in message
news:4c99e35d$1$fuzhry+tra$mr2...@news.patriot.net...

| And for anything but ones' complement, negating a number requires
| going through the adder instead of just a bunch on not gates.

Forming the twos complement in a serial machine
does not require "going through the adder".

For a number held in twos complement form, forming the negative
requires complementing all bits, starting after the first one-bit.
(recall that in a serial machine, the bits are presented
starting with the least-significant bit).

To form the ones complement in a serial machine
requires that each bit is complemented as it passes the inverter (negator) gate.
(N.B. Only one gate is required, not "a bunch on [sic] not gates".)

The principles of serial machines and serial arithmetic
were established by Alan Turing back in 1945.


robin

unread,
Sep 27, 2010, 8:06:39 AM9/27/10
to
"Shmuel (Seymour J.) Metz" <spam...@library.lspace.org.invalid> wrote in message
news:4c9eaf78$3$fuzhry+tra$mr2...@news.patriot.net...
| In <4c9abde5$0$90003$c30e...@exi-reader.telstra.net>, on 09/23/2010

| at 12:39 PM, "robin" <rob...@dodo.com.au> said:
|
| >The Bendix G15, of which 400 were sold,
| >began operations in 1956. It was a serial machine.
|
| K3wl. Unfortunately, you wrote "most", not "some".

Obviously you can't read either.
Where can you find the word "most" in the above sentence
(hint: start from "The Bendix ...")?

| >(Parallel was expensive in hardware, requiring an adder having some
| >32 adder circuits instead of 1 for serial.)
|

| How did Univac manage to do an ALU for a 36-bit machine using only 30
| adder circuits?

What machine?

How did manufacturers of a serial machine manage to do it in a single
adder circuit?

Terence

unread,
Sep 27, 2010, 8:49:55 PM9/27/10
to

I was given a Fortran manual on my first working day with IBM (after
IBM school in 1961) and told to help Babcock & Wilcox staff (of UK) on
designing electricity pylons (from top down, based on line numbers,
unit length weight and two-side span lengths [catenaries] to produce a
complete standard parts list and cut lengths; and then the concrete
base design). I still see the same designs today.

Red Rooster

unread,
Sep 27, 2010, 9:22:36 PM9/27/10
to
On Tue, 28 Sep 2010 00:42:48 +1000, "robin" <rob...@dodo.com.au>
wrote:

>Bendix G-15 deliveries commenced in 1856
Wow, now that's a feat...

RR ;-)

robin

unread,
Sep 28, 2010, 8:16:23 AM9/28/10
to
"Red Rooster" <NewRed...@hottmail.com> wrote in message news:hpg2a6tjopu0osicg...@4ax.com...

Obviously to a design by Charles Babbage!


Red Rooster

unread,
Sep 29, 2010, 11:42:06 PM9/29/10
to
On Tue, 28 Sep 2010 22:16:23 +1000, "robin" <rob...@dodo.com.au>
wrote:

>Babbage
Ahhh, that explains it! ;-)

RR

ttw...@att.net

unread,
Sep 30, 2010, 3:09:18 PM9/30/10
to
One minor point. Using the negative of the smallest-magnitude floating
point number for negative zero isn't a very good idea. Multiplying
-0.0 by 2 yields -0.0, whereas multiplying -x by 2.0 (for a very small
x) yields a smaller floating point number (bigger in magnitude).

glen herrmannsfeldt

unread,
Sep 30, 2010, 3:33:32 PM9/30/10
to

If this is a reply to one of my posts, I wasn't suggesting using
the smallest magnitude floating point value for negative zero,
but for the mathematical lim x-->0-.

The newer Fortran standards seem to use the floating point
negative zero, for implementations that have one and can distinguish
it, for the mathematical limit.

-- glen

robin

unread,
Oct 4, 2010, 5:20:06 AM10/4/10
to
"Shmuel (Seymour J.) Metz" <spam...@library.lspace.org.invalid> wrote in message
news:4ca7cefe$1$fuzhry+tra$mr2...@news.patriot.net...

Another lot of arrogant and erroneous replies from Metz,
demonstrating his ignorance of computer operations.

Robin wrote:

| >Forming the twos complement in a serial machine
| >does not require "going through the adder".
|

| It requires adding one to the low order bit and propagating the carry.

That's just the point. It doesn't "require" that at all.
Forming the ones complement and adding 1 is how you might
picture it, but hardware is not constrained by that simpleton approach.

| The easiest way to do that is to feed it through the adder. Admittedly
| you could build a special complementing unit that combined flipping
| the bits with propagating the carry, but that would add to the cost of
| the machine.

The easiest way to do that is not through the adder.

Early computers were hard-wired. In some machines, the adder
was hard-wired to one particular register. Having to form the
complement Metz's way would have destroyed the content of that register.
The twos complement can be formed trivially in other ways,
all of which avoid trashing the contents of another register.

As I laboriously explained before:

In a serial machine, the bits of a word emerge one-at-a-time,
commencing with the least-significant bit.

To form the twos complement in a serial machine,
it requires one and only one inverter and a bi-stable flip-flop.

Initially the bi-stable flip-flop is off. When it is off, the inverter is off.
The bits are examined one-at-a-time. Immediately after a non-zero bit
has been entirely processed, the flip-flop is turned on.
When it is on, the inverter is operative,
and thus negates every subsequent bit in the word.
Thus is formed the twos complement.

Incidentally to form the ones complement of a word in such a machine,
only the inverter is required.

The cost of forming the twos complement was a trigger and inverter,
about three valves and a handful of resistors, total cost about 6 dollars --
neither here nor there in a machine costing about $100,000.

To form the twos complement Metz' way would have required
a source of 1, components to activate the adder, etc, etc.

Next he'll be telling us that it's necessary to use the adder to subtract 1.


robin

unread,
Oct 4, 2010, 8:56:18 PM10/4/10
to
"Shmuel (Seymour J.) Metz" <spam...@library.lspace.org.invalid> wrote in message
news:4ca9d674$1$fuzhry+tra$mr2...@news.patriot.net...
| In <4ca9ad3f$0$33792$c30e...@exi-reader.telstra.net>, on 10/04/2010

| at 08:20 PM, David Frank <rob...@dodo.com.au> said:
|
| >Another lot of arrogant and erroneous replies
|
| PKB.

|
| >That's just the point. It doesn't "require" that at all.
|
| You can't read, and you're confusing implementation with requirements.

I can read, and you're wrong again.

| >Initially the bi-stable flip is off. When it is off, the inverter
| >is off. The bits are examined one-at-a-time. Immediately after a
| >non-zero bit has been entirely processed, the flip-flop is turned
| >on. When it is on, the inverter is operative,
| >and thus negates every subsequent bit in the word.
|

| ROTF,LMAO!

More arrogant replies.

| >Thus is formed the twos complement.
|

| Had you not omitted a crucial step,

No step was omitted.

| what you described would have been
| an adder. As it is, it was just wrong.

Not only don't you understand plain English,
you don't understand even the basics of computer arithmetic.

Taking the bits one-at-a-time, starting from the least-significant end,
let's negate the 8-bit word 11001000
which is written 00010011 (in reverse for convenience).
Performing the above algorithm, we get (l.s. bit on left):
0
00
000
0001 (the 1-bit is unchanged)
00011 (the 0-bit is changed to 1)
000111 (do.)
0001110 (the 1-bit is changed to 0)
00011100 (do.)
Writing the bits in normal order, we have
00111000 (l.s. bit on right)
which is the twos complement of 11001000.

| >To form the twos complement Metz' way
|

| What you describe is your way, since it doesn't come from what I
| wrote.

The way to which I referred as "your way" was the "solution"
you proposed, namely, to use the adder. That's what you wrote.

John Rausch

unread,
Oct 5, 2010, 1:01:26 PM10/5/10
to
The IBM 1620 was mentioned. It is the oddball of all old IBM
computers. It did not have add hardware, but used a table in low
memory! The CEs called it the CADET (Can't Add, Doesn't Even Try).

Gordon Sande

unread,
Oct 5, 2010, 1:08:17 PM10/5/10
to

A great way to do octal arithmetic as well. I believe that later models
in fact had a hardware adder.

Terence

unread,
Oct 5, 2010, 4:52:43 PM10/5/10
to

Yup! I installed a few for IBM in the UK. Leister Tech and Lloyd's
Register of Shipping come to mind. Table add/subtract routine; repeat
call to this for multiply (No, not just a stupid loop, but the school
way); divide was an even longer school method. Don't ask about square
roots!

robin

unread,
Oct 5, 2010, 7:13:07 PM10/5/10
to
"Shmuel (Seymour J.) Metz" <spam...@library.lspace.org.invalid> wrote in message
news:4cab00bc$4$fuzhry+tra$mr2...@news.patriot.net...
| In <4caa77ec$0$33795$c30e...@exi-reader.telstra.net>, on 10/05/2010
| at 11:56 AM, "robin" <rob...@dodo.com.au> said:
|
| >More arrogant replies.
|
| PKB.

More arrogant replies.

| >No step was omitted.
|
| Really? The flip-flop is supposed to stay on once you set it? In what
| universe?

What don't you understand about a flip-flop?
I described what it did, twice.

| >you don't understand even the basics of computer arithmetic.
|

| I don't have to run faster than the bear, I just have to run faster
| than you.

As I said, you (still) don't understand the basics of computer arithmetic.
1. You don't understand a flip-flop;
2. You can't tell the difference between an adder and a complementer.

| >The way to which I referred as "your way" was the "solution"
|

| No, David. It was not what I wrote, but what *you* wrote,

You've got yourself back-to-front. It was YOU
who said that


"It requires adding one to the low order bit and propagating the carry."

| and it was dishonest to attribute it to me.

Now you're lying. It was YOU who said that "It requires".

(which was your false statement, because it didn't "require" that at all.)


nm...@cam.ac.uk

unread,
Oct 6, 2010, 3:36:53 AM10/6/10
to
In article <4cabb1f6$0$33795$c30e...@exi-reader.telstra.net>,

robin <rob...@dodo.com.au> wrote:
>
>What don't you understand about a flip-flop?
>I described what it did, twice.

Which clearly restored the initial state!


Regards,
Nick Maclaren.

robin

unread,
Oct 6, 2010, 6:41:29 AM10/6/10
to
<nm...@cam.ac.uk> wrote in message news:i8h8ul$3r4$1...@gosset.csi.cam.ac.uk...

?????


James Van Buskirk

unread,
Oct 6, 2010, 1:19:53 PM10/6/10
to
"robin" <rob...@dodo.com.au> wrote in message
news:4cac525f$0$33797$c30e...@exi-reader.telstra.net...

> ?????

T flip-flop.

--
write(*,*) transfer((/17.392111325966148d0,6.5794487871554595D-85, &
6.0134700243160014d-154/),(/'x'/)); end


robin

unread,
Oct 7, 2010, 7:53:44 AM10/7/10
to
"James Van Buskirk" <not_...@comcast.net> wrote in message news:i8ib3t$et7$1...@news.eternal-september.org...

| "robin" <rob...@dodo.com.au> wrote in message
| news:4cac525f$0$33797$c30e...@exi-reader.telstra.net...
| > <nm...@cam.ac.uk> wrote in message
| > news:i8h8ul$3r4$1...@gosset.csi.cam.ac.uk...
| > | In article <4cabb1f6$0$33795$c30e...@exi-reader.telstra.net>,
| > | robin <rob...@dodo.com.au> wrote:
| > | >
| > | >What don't you understand about a flip-flop?
| > | >I described what it did, twice.
| > |
| > | Which clearly restored the initial state!
|
| > ?????
|
| T flip-flop.

?????


robin

unread,
Oct 7, 2010, 7:54:58 AM10/7/10
to
"Shmuel (Seymour J.) Metz" <spam...@library.lspace.org.invalid> wrote in message
news:4cac8fa5$5$fuzhry+tra$mr2...@news.patriot.net...
| In <4cabb1f6$0$33795$c30e...@exi-reader.telstra.net>, on 10/06/2010

| at 10:13 AM, "robin" <rob...@dodo.com.au> said:
|
| >What don't you understand about a flip-flop?
|
| Nothing.

That's patently obviously a false statement.

| >I described what it did, twice.

And you still don't understand it.


nm...@cam.ac.uk

unread,
Oct 7, 2010, 8:36:18 AM10/7/10
to
In article <4cadb51a$0$33796$c30e...@exi-reader.telstra.net>,

Actually, it's demonstrably YOU who still doesn't understand it :-)


Regards,
Nick Maclaren.

robin

unread,
Oct 7, 2010, 9:44:22 AM10/7/10
to
<nm...@cam.ac.uk> wrote in message news:i8kes2$e24$1...@gosset.csi.cam.ac.uk...

Actually, it's you who doesn't understand it.

I correctly described it.


Bill Klein

unread,
Oct 7, 2010, 10:23:19 AM10/7/10
to
<much snippage>

>>Initially the bi-stable flip is off. When it is off, the inverter
>>is off. The bits are examined one-at-a-time. Immediately after a
>>non-zero bit has been entirely processed, the flip-flop is turned
>>on. When it is on, the inverter is operative,
>>and thus negates every subsequent bit in the word.
>
>
>>Thus is formed the twos complement.
>
> Had you not omitted a crucial step, what you described would have been

> an adder. As it is, it was just wrong.
>
<more snippage>

Shmuel,
This is the beginning of the "flip-flop" discussion. As someone who:
A) doesn't know/understand "flip-flops" and "adders"
and
B) assumes that when Robin starts calling names and doesn't actually respond
to the content in notes, that he may well be wrong (but will NEVER admit it)

Can you tell me directly WHAT step was missed in the original statement? In
other words, can you repeat Robin's original description and explicitly add
what was missing, so I can see/understand it?


robin

unread,
Oct 8, 2010, 2:02:01 AM10/8/10
to
"Bill Klein" <wmk...@noreply.ix.netcom.com> wrote in message news:KJkro.239970$MG3....@en-nntp-16.dc1.easynews.com...

| Shmuel,
| This is the beginning of the "flip-flop" discussion. As someone who:
| A) doesn't know/understand "flip-flops" and "adders"
| and
| B) assumes that when Robin starts calling names and doesn't actually respond
| to the content in notes, that he may well be wrong (but will NEVER admit it)

If you think that you are going to come back here and start
another campaign of abuse and lying, you'd better think again.
Just go away.

| Can you tell me directly WHAT step was missed in the original statement?

There was nothing missing.

| In other words, can you repeat Robin's original description and explicitly add
| what was missing, so I can see/understand it?

As I said, there was nothing missing.
Metz doesn't understand how to form a complement
using the alternative algorithm.

He also couldn't tell the difference between a complementer
and an adder.

That's all there is to it.

Bill Klein

unread,
Oct 8, 2010, 7:34:02 AM10/8/10
to
"robin" <rob...@dodo.com.au> wrote in message
news:4caef640$0$33794$c30e...@exi-reader.telstra.net...

> If you think that you are going to come back here and start
> another campaign of abuse and lying, you'd better think again.
> Just go away.
>

Please point to a single post that I have ever made in this group that was a
lie. I know of one mis-use of a term that I admitted was an error; the use
of "stand-alone" when I meant "unbundled" (and even there, I don't know of
any reader who was confused by the term)

I would be HAPPY to point out many times in the past that you have made
erroneous statements and NEVER admitted your error (minor, major, technical,
or otherwise).

I would also be happy to point out the substantive note from Shmuel in the
recent thread where you did NOT answer the substance but started calling
names.

Please note that I changed the subject in my past note and in this note. If
you would kindly reply to notes using their current subjects, then those
wishing to follow technical information/discussions in this forum could do
so without needing to follow off-topic discussions.

Please note, I am certain that if you queried the active and less visible
participants in this forum, your technical knowledge and expertise would be
commonly acknowledged. HOWEVER, if you also queried the viewers and
participants in the forum, I think that you would find a majority (a VERY
LARGE majority) that would confirm that you have a "online personality" that
seems totally incapable of accepting correction, criticism, or even
suggestions of other views.

My note that prompted your reply was a specific question to another
participant to explain his post. I hope he will do so. Your response, as
always, was neither responsive nor helpful.

Terence

unread,
Oct 7, 2010, 8:43:34 PM10/7/10
to
On Oct 8, 12:23 am, "Bill Klein" <wmkl...@noreply.ix.netcom.com>
wrote:

> <much snippage>>>Initially the bi-stable flip is off.  When it is off, the inverter
> >>is off. The bits are examined one-at-a-time. Immediately after a
> >>non-zero bit has been entirely processed, the flip-flop is turned
> >>on. When it is on, the inverter is operative,
> >>and thus negates every subsequent bit in the word.
>
> >>Thus is formed the twos complement.
>
This posting after Robin failed, even after removing the "D.Do"
address part which is treated by Google as a bodily function measure
(!!). I actually got a rejection slip!

Trying again

Flip-flops?
What a fuss about who said what!


Actually, at Blackburn Aircraft in 1951, we were trying to build a
computer to find why aircraft tails sometimes fell off. We needed to
vibrate a bomber tail and measure at least 50 wheatstone bridge
strain-responsive elements (strain gauges) mounted on the tail.
So we built a computer based on DC amplifiers, with twin-triode
flip-flop 1-bit storage memories, nixie-tube digital readouts and a
jazzed-up-to-10cps teleprinter i/o.

Note: the UK had the same problem again (see "Highway in the sky"):
cold metal fatigue.

But any book on elementary electronics will show the different forms
of flip-flop logic gates. We had the logic (a-la-Babbage) long before
we had the actual hardware.

robin

unread,
Oct 8, 2010, 10:23:29 PM10/8/10
to
"Shmuel (Seymour J.) Metz" <spam...@library.lspace.org.invalid> wrote in message
news:4caf527b$3$fuzhry+tra$mr2...@news.patriot.net...
| In <KJkro.239970$MG3....@en-nntp-16.dc1.easynews.com>, on 10/07/2010

| at 09:23 AM, "Bill Klein" <wmk...@noreply.ix.netcom.com> said:
|
| >Can you tell me directly WHAT step was missed in the original
| >statement? In other words, can you repeat Robin's original
| >description and explicitly add what was missing, so I can
| >see/understand it?
|
| Sent offline.

Because you know that your silly assertion won't stand up to public scrutiny.

You might fool Klein, who confessed that
he doesn't understand it, but you don't fool anyone else.

