
Unisys A11 worth keeping ?


Richard Steiner

Jun 8, 2002, 11:39:43 PM
Here in alt.folklore.computers,
ace...@iinet.net.au (Tony Epton) spake unto us, saying:

>Our museum has been offered a Unisys in RM36 style cabinet with
>A11 processor, floppy drive, tape cartridge drive, No CA and No SCSI
>drives.
>Some manuals and install floppies.
>
>Guessing it would be late 80's early 90's machine.
>
>Any opinion on whether this machine is any sort of classic and is
>worth keeping - or should we just scrap it ?

Anyone wanna explain to him why Burroughs-esque A-series boxes are cool?

I can't give good reasons, just parrot what I've read on USENET. :-)

(Stack-based machine, ALGOL-like machine instructions, cool storyline
and/or characters masquerading as routine names in the MCP source...)

--
-Rich Steiner >>>---> http://www.visi.com/~rsteiner >>>---> Eden Prairie, MN
OS/2 + BeOS + Linux + Win95 + DOS + PC/GEOS = PC Hobbyist Heaven! :-)
Applications analyst/designer/developer (13 yrs) seeking employment.
See web site in my signature for current resume and background.

Ignatios Souvatzis

Jun 10, 2002, 7:31:54 AM
In article </3sA9oHpv...@visi.com>,

rste...@visi.com (Richard Steiner) writes:
> Anyone wanna explain to him why Burroughs-esque A-series boxes are cool?
> [...]

> (Stack-based machine, ALGOL-like machine instructions, cool storyline
> and/or characters masquerading as routine names in the MCP source...)

Algol 60 or 68?
-is

Larry Krablin

Jun 13, 2002, 11:03:42 AM
Algol 60

"Ignatios Souvatzis" <igna...@tarski.cs.uni-bonn.de> wrote in message
news:ae22na$11rq$2...@f1node01.rhrz.uni-bonn.de...

Brian Boutel

Jun 15, 2002, 2:36:40 AM

Randall Gellens wrote:

> In article <ae22na$11rq$2...@f1node01.rhrz.uni-bonn.de>,

> It's really about 10% Algol 60 and 90% Burroughs extensions. Quite
> wonderful.
>
>

I loved it, but the sad thing was that, despite the architectural
support for Algol, Fortran code generally ran faster.

--brian

--
Brian Boutel
Wellington New Zealand


Note the NOSPAM

J Ahlstrom

Jun 17, 2002, 11:22:07 AM
Randall Gellens wrote:

> In article <ae22na$11rq$2...@f1node01.rhrz.uni-bonn.de>,
> igna...@tarski.cs.uni-bonn.de (Ignatios Souvatzis) wrote:
>

> It's really about 10% Algol 60 and 90% Burroughs extensions. Quite
> wonderful.
>

> --
> Opinions are personal; facts are suspect; I speak for myself only

Any comparison documents between Algol 60 and the A-Series Algol?
Algol 60 or A-Series Algol and Newp (the sys prog replacement for Espol)?

Thanks

JKA

--
You can't reason someone out of something they
haven't been reasoned into.


J Ahlstrom

Jun 17, 2002, 11:23:37 AM
Brian Boutel wrote:

My experience with the 5000 and its successors ended with
the 7700 but Fortran ran slower than Algol then. Any
comparisons of more recent machines????

Brian Boutel

Jun 17, 2002, 11:07:27 PM

J Ahlstrom wrote:

> Brian Boutel wrote:
>
>
>>Randall Gellens wrote:
>>
>>
>>>In article <ae22na$11rq$2...@f1node01.rhrz.uni-bonn.de>,
>>> igna...@tarski.cs.uni-bonn.de (Ignatios Souvatzis) wrote:
>>>
>>>
>>>
>>>>In article </3sA9oHpv...@visi.com>,
>>>> rste...@visi.com (Richard Steiner) writes:
>>>>
>>>>
>>>>>Anyone wanna explain to him why Burroughs-esque A-series boxes are cool?
>>>>>[...]
>>>>>(Stack-based machine, ALGOL-like machine instructions, cool storyline
>>>>>and/or characters masquerading as routine names in the MCP source...)
>>>>>
>>>>>
>>>>Algol 60 or 68?
>>>>
>>>>
>>>It's really about 10% Algol 60 and 90% Burroughs extensions. Quite
>>>wonderful.
>>>
>>>
>>>
>>I loved it, but the sad thing was, that despite the architectural
>>support for Algol, Fortran code generally ran faster.
>>
>>--brian
>>

>>
>

> My experience with the 5000 and its successors ended with
> the 7700 but Fortran ran slower than Algol then. Any
> comparisons of more recent machines????
>

I only used a 6700, in a university computer centre environment,
from '74 to '80, when it got replaced by IBM.

Obviously our experiences/recollections differ, but I suspect some of the
performance difference was due to coding style. Algol, with nested
procedure declarations, and call-by-name, implied more complex VALC
evaluations, more stack searching for copy descriptors, more display
updates. Even so, I wouldn't like to argue this very strongly after not
thinking about it for over 20 years.

Edward Reid

Jun 18, 2002, 8:28:19 AM
On Mon, 17 Jun 2002 11:23:37 -0400, J Ahlstrom wrote

> My experience with the 5000 and its successors ended with
> the 7700 but Fortran ran slower than Algol then. Any
> comparisons of more recent machines????

Obviously I don't have access to the details of the programs which the
previous poster compared. I suspect that the Algol and Fortran programs
were not identical but were written differently in some critical
respects. A good Algol coder can avoid the few inefficiencies in
generated Algol code.

The other possibility is that the program contained some code highly
susceptible to optimization. I don't think the A-Series Fortran
compiler ever did much optimization, but the Algol compiler has never
done any. (Not, at any rate, of the sort that is typically applied to
Fortran programs.) Algol allows too many side effects and thus inhibits
optimization.

> Any comparison documents between Algol 60 and the A-Series Algol?

Not that I know of. But Algol60 is such a small language that if you
know it, you can pretty much go through the Unisys Algol manual saying
nope, not in Algol 60, nope, not even in the style of Algol 60, etc ...

> Algol 60 or A-Series Algol and Newp (the sys prog replacement for Espol)?

The NEWP manual is actually implemented as an "overlay" on the Algol
manual -- that is, it only documents what is different between Algol
and NEWP. You can download both from

http://public.support.unisys.com/os1/txt/web-verity?type=list

Edward Reid


Hans Vlems

Jun 18, 2002, 3:14:43 PM
Algol60 is very small.
Base types: integer, real, boolean, label, and arrays of all these (except
labels?)
Constructs: while <be> do <block>
for <ae1> step <ae2> until <be> do <block>
if <be> then <block1> else <block2>
(where <be> is a boolean expression and <ae> an arithmetic expression).
There is no defined I/O or file base type in Algol60.
The motto of Algol60 says it all: "all that needs explaining is done
easily, the rest is unnecessary"

Burroughs Algol adds bit manipulation, strings, truthsets and
translate tables.
It has array handling that is quite unique, plus file arrays (switch files)
and extensive control over the I/O subsystem.
The language has facilities for multitasking built in.

Man I wish I could use BEA instead of C (in whatever flavor that is
currently in vogue).
Now where did I put that RT11 TU56 tape with the port of the V2 BEA
compiler....

Hans Vlems

J Ahlstrom <jahl...@cisco.com> wrote in message
news:3D0DFE9F...@cisco.com...

PeteK

Jun 19, 2002, 10:29:38 AM
> The other possibility is that the program contained some code highly
> susceptible to optimization. I don't think the A-Series Fortran
> compiler ever did much optimization, but the Algol compiler has never
> done any. (Not at any rate of the sort that is typically applied to
> Fortran programs.) Algol allows too many side effects and thus inhibits
> optimization.

What, not even if you set $OPTIMIZE ?


Jim Haynes

Jun 20, 2002, 8:26:38 PM
In article <3D0AE078...@boutel.co.nz>,

Brian Boutel <brian...@boutel.co.nz> wrote:
>
>I loved it, but the sad thing was, that despite the architectural
>support for Algol, Fortran code generally ran faster.
>
Yeah, but then Algol is conceptually a more complicated language than
Fortran. And Fortran users were more apt to demand the fastest possible
running.

Hans Vlems

Jun 21, 2002, 2:35:40 PM
IIRC Fortran handled matrices differently from Algol. Object files
generated by the Fortran compiler had their elements in sequence column
by column, while Algol lined them up row by row. Now I do not know how
the Burroughs hardware was designed, but perhaps vector-mode
instructions are (were?) affected by this. It seems odd that Algol code
would always be slower than Fortran, especially for an implementation
language for the compilers themselves and other system software.

Hans

Jim Haynes <hay...@alumni.uark.edu> wrote in message
news:2puQ8.8212$Fv1.8...@newsread2.prod.itd.earthlink.net...

Andrew Williams

Jun 21, 2002, 2:54:09 PM
Every language except Fortran has A(1,4) next to A(1,5). Only Fortran
has A(4,1) next to A(5,1).
This unnatural design mistake was perpetrated in the early 60's in order
to make writing Fortran compilers easier.
Another mistake perpetrated at the same time was to have all loops
execute at least once. The Fortran77 standard changed this and allows
loops to be executed zero times (= bypassed) if the parameters are such,
i.e. DO 10 I = 5,4
would bypass the loop because 5 > 4.

The idea was to make writing compilers easier; this has nothing to do
with execution speed. The only way this could have any effect on speed
is if you avoided (or hit) some memory-interleave problem.
I get around this array allocation problem by allocating arrays as (3,6)
(for example) in Fortran and (6,3) in any other language.

Based on what I can remember of Algol 60, it had the following speed
problems:
- all storage was dynamically allocated (Fortran was static)
- Recursion was permitted
- The floating point IF A = B statement (syntax?) evaluated as TRUE if A
was nearly = B (!!), the idea being that Floating point numbers are
approximations.

I am sure that there were additional reasons why Algol 60 was usually
slower, but I have not looked at the language for around 25 years.


Hans Vlems wrote:
> IIRC fortran handled matrices different from algol. Object files generated
> by the fortran compiler
> had their elements in sequence column by column, while algol lined them up
> row by row.
> Now I do not know how the Burroughs hardware was designed but perhaps that
> vectormode instructions are (were?) affected by this.
> It seems odd that algolcode would always be slower than fortran, especially
> for an
> implementation language for the compilers themselves and other system
> software.
>
> Hans
>


--
opinions personal, facts suspect.
http://home.arcor.de/36bit/samba.html

CBFalconer

Jun 21, 2002, 6:13:11 PM
Andrew Williams wrote:
>
... snip ...

>
> Based on what I can remember of Algol 60, it had the following
> speed problems:
> - all storage was dynamically allocated (Fortran was static)
> - Recursion was permitted
> - The floating point IF A = B statement (syntax?) evaluated as
> TRUE if A was nearly = B (!!), the idea being that Floating
> point numbers are approximations.

Years ago I started to implement that in my floating point package
for the 8080/Z80. I eventually abandoned it as creating too much
overhead, and easily lost in the mud. For example, accumulating a
sum over some range of nearly equal values - what do you use for a
criterion? Especially if the original pairs vary widely in
magnitude. So I chose KISS.

--
Chuck F (cbfal...@yahoo.com) (cbfal...@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!

Rupert Pigott

Jun 21, 2002, 7:02:57 PM
"CBFalconer" <cbfal...@yahoo.com> wrote in message
news:3D13A3D5...@yahoo.com...

> Andrew Williams wrote:
> >
> ... snip ...
> >
> > Based on what I can remember of Algol 60, it had the following
> > speed problems:
> > - all storage was dynamically allocated (Fortran was static)
> > - Recursion was permitted
> > - The floating point IF A = B statement (syntax?) evaluated as
> > TRUE if A was nearly = B (!!), the idea being that Floating
> > point numbers are approximations.
>
> Years ago I started to implement that in my floating point package
> for the 8080/z80. I eventually abandoned it as creating too much
> overhead, and easily lost in the mud. For example accumulate a
> sum over some range of nearly equal values - what do you use for a
> criterion? Especially if the original pairs vary widely in
> magnitude. So I chose KISS.

Sadly, many people often forget that FP numbers are
approximations. I even make the classic "a == b"
cock-up myself on occasion... Sigh... It certainly
does pose some interesting questions about how
you go about testing for equality...

Cheers,
Rupert


Joe Pfeiffer

Jun 21, 2002, 7:54:19 PM
> Andrew Williams wrote:
> >
> ... snip ...
> >
> > Based on what I can remember of Algol 60, it had the following
> > speed problems:
> > - all storage was dynamically allocated (Fortran was static)

Assuming activations records were allocated on the stack, this is
negligible.

> > - Recursion was permitted

Assuming a reasonably sane instruction set, this also makes little or
no difference.

> > - The floating point IF A = B statement (syntax?) evaluated as
> > TRUE if A was nearly = B (!!), the idea being that Floating
> > point numbers are approximations.

That would be perverse... if you see code in which the programmer is
comparing floating point numbers for equality, you can figure you're
looking at broken code.
--
Joseph J. Pfeiffer, Jr., Ph.D. Phone -- (505) 646-1605
Department of Computer Science FAX -- (505) 646-1002
New Mexico State University http://www.cs.nmsu.edu/~pfeiffer
Southwestern NM Regional Science and Engr Fair: http://www.nmsu.edu/~scifair

Brian Inglis

Jun 21, 2002, 10:43:13 PM
On Fri, 21 Jun 2002 22:13:11 GMT, CBFalconer
<cbfal...@yahoo.com> wrote:

>Andrew Williams wrote:
>>
>... snip ...
>>
>> Based on what I can remember of Algol 60, it had the following
>> speed problems:
>> - all storage was dynamically allocated (Fortran was static)
>> - Recursion was permitted
>> - The floating point IF A = B statement (syntax?) evaluated as
>> TRUE if A was nearly = B (!!), the idea being that Floating
>> point numbers are approximations.
>
>Years ago I started to implement that in my floating point package
>for the 8080/z80. I eventually abandoned it as creating too much
>overhead, and easily lost in the mud. For example accumulate a
>sum over some range of nearly equal values - what do you use for a
>criterion? Especially if the original pairs vary widely in
>magnitude. So I chose KISS.

ISTR DEC's Basic+/BP2/VMS Basic having approximate comparison
operators for doubles and strings, using = and == -- of course,
the operator symbols for exactly equals and approximately equals
were flipped between the two data types IIRC.

--

Thanks. Take care, Brian Inglis Calgary, Alberta, Canada

Brian....@CSi.com (Brian dot Inglis at SystematicSw dot ab dot ca)
fake address use address above to reply


Randall Bart

Jun 22, 2002, 12:10:19 AM
'Twas 21 Jun 2002 17:54:19 -0600 when all comp.sys.unisys stood in awe as
Joe Pfeiffer <pfei...@cs.nmsu.edu> uttered:

>if you see code in which the programmer is
>comparing floating point numbers for equality, you can figure you're
>looking at broken code.

Not necessarily broken, but likely.
--
RB |\ © Randall Bart
aa |/ ad...@RandallBart.spam.com Bart...@att.spam.net
nr |\ Please reply without spam I LOVE YOU 1-917-715-0831
dt ||\ http://RandallBart.com/ DOT-HS-808-065 MS^7=6/28/107
a |/ "Believe nothing, no matter where you read it, or who
l |\ said it, no matter if I have said it, unless it agrees
l |/ with your own reason and your own common sense."--Buddha

jmfb...@aol.com

Jun 22, 2002, 5:44:18 AM
In article <1blm98u...@cs.nmsu.edu>,
Joe Pfeiffer <pfei...@cs.nmsu.edu> wrote:
>> Andrew Williams wrote:

<snip>

>> > - The floating point IF A = B statement (syntax?) evaluated as
>> > TRUE if A was nearly = B (!!), the idea being that Floating
>> > point numbers are approximations.
>
>That would be perverse... if you see code in which the programmer is
>comparing floating point numbers for equality, you can figure you're
>looking at broken code.

Unless it's a diagnostic or sanity check. errmmm....People still
do write bad programs for testing purposes...don't they? We had
lots of little tests that would print out "This should generate
a FOO error."

/BAH

Subtract a hundred and four for e-mail.

Hans Vlems

Jun 22, 2002, 9:43:37 AM

Andrew Williams <andrew....@t-online.de> wrote in message
news:3D137651...@t-online.de...

> Every language except Fortran has A (1,4) next to A (1,5). Only Fortran
> has A (4,1) next to A (5,1).
> This unnatural design mistake was perpetrated in the early 60's in order
> to make writing Fortran compilers easier.
> Another mistake perpetrated at the same time was to have all loops
> execute at least once. The Fortran77 standard changed this and allows
> loops to be executed zero times (= bypassed) if the parameters are such.
> i.e. DO 10 I = 5,4
> would bypass the loop because 5 > 4.

>
>


> --
> opinions personal, facts suspect.
> http://home.arcor.de/36bit/samba.html
>

The main problem with code efficiency in Algol was call by name IIRC.
All arrays in Algol are passed call by name (which incidentally led to
an interesting security loophole in the earlier versions of the MCP)
and this is perhaps not the case for Fortran. That may account for the
difference in performance in software that uses linear algebra.

Hans


Rupert Pigott

Jun 22, 2002, 2:23:38 PM
"Hans Vlems" <hvl...@iae.nl> wrote in message
news:af1uer$an5nl$1...@ID-143435.news.dfncis.de...

Eeek ! You mean that it resolved the names at RUN-TIME ?

Cheers,
Rupert


Lars Poulsen

Jun 22, 2002, 3:31:55 PM
"Hans Vlems" <hvl...@iae.nl> wrote in message
news:af1uer$an5nl$1...@ID-143435.news.dfncis.de...
>> The main problem with code efficiency in Algol was call by name IIRC.
>> All arrays in Algol are passed call by name (which incidentally led
>> to an interesting security loophole in the earlier versions of the MCP)
>> and this is perhaps not the case for fortran. That may account
>> for the difference in performance in software that uses linear algebra.


Rupert Pigott wrote:
> Eeek ! You mean that it resolved the names at RUN-TIME ?

Not at all. Algol's call by name was a wonderful concept that
made sense to people who ever knew the inside of a machine,
but thought abstractly. But no major language since then has
used it.

Think of the following code snippets:

real procedure sum(real a, value integer low,
                   value integer high, integer k);
begin
    real partial;
    integer i;
    partial := 0.0;

    for i := low step 1 until high do
    begin
        k := i;
        partial := partial + a;
    end;
    sum := partial;
end;

real array series[1900:2002];
integer k;
...
write("Sum is ", sum(series[k], 1900, 2002, k), "\r\n");

(I am half remembering, half reinventing the algol 60 syntax;
I don't think I have seen an algol 60 program in 25 years.)

Notice how the first argument to the function is formally a
"real" passed by name, but the actual is an expression.
The last argument to the function is a variable passed by
name which is part of the expression. When the function
changes the variable and then uses the expression, the
expression is evaluated using the current version of the
variable. In contrast, the two middle arguments are passed
by value.

Fortran uses "call by reference": All arguments are represented
by an address. In C, all arguments are passed by value,
and if you want the effect of "call by reference" you pass
the value of the address of the variable.

Call by name allows some very elegant algorithms for
matrix arithmetic, but they are much harder for a compiler
to optimize than the Fortran kind of code. In the case of
the Burroughs machines, I suspect that the Fortran was
basically translated into quasi-algol and then compiled
from there, since the hardware was built to facilitate
algol anyway.

"You can write bad Fortran in any language".

--
/ Lars Poulsen +1-805-569-5277 http://www.beagle-ears.com/lars/
125 South Ontare Rd, Santa Barbara, CA 93105 USA la...@beagle-ears.com

Louis Krupp

Jun 22, 2002, 3:50:35 PM
Lars Poulsen wrote:
<snip>

> Call by name allows some very elegant algorithms for
> matrix arithmetic, but they are much harder for a compiler
> to optimize than the Fortran kind of code. In the case of
> the Burroughs machines, I suspect that the Fortran was
> basically translated into quasi-algol and then compiled
> from there, since the hardware was built to facilitate
> algol anyway.

It's been at least twenty years since I've touched the
source for a Burroughs FORTRAN compiler, but I remember it
as a single pass from FORTRAN to machine code. There was a
FORTRAN to ALGOL translator once upon a time, but I don't
think the FORTRAN compiler ever used it.

Louis Krupp

Hans Vlems

Jun 22, 2002, 4:14:00 PM
> > > opinions personal, facts suspect.
> > > http://home.arcor.de/36bit/samba.html
> > >
> > The main problem with code efficiency in Algol was call by name IIRC.
All
> > arrays in Algol
> > are passed call by name (which incidentally led to an interesting
> security
> > loophole
> > in the earlier versions of the MCP) and this is perhaps not the case for
> > fortran. That may account
> > for the difference in performance in software that uses linear algebra.
>
> Eeek ! You mean that it resolved the names at RUN-TIME ?
>
> Cheers,
> Rupert
>
Not sure what your idea is behind "resolving", it's not a DNS thing or so...
What happens is that the formal parameter is replaced by the name of
the actual parameter. This allows a technique called Jensen's device.
An example:
INTEGER PROCEDURE SOM(A,I,J);
VALUE J;
INTEGER A,I,J;
BEGIN
SOM:=0;
FOR I:=0 STEP 1 UNTIL J DO SOM:=* + A;
END;

This innocent looking procedure seems to return (J+1)*A, right? In any
language other than Algol that would be the case. Called in a main
program like this:

BEGIN
INTEGER ARRAY G[0:1];
INTEGER I,J;
FILE TV(KIND=REMOTE,MYUSE=OUT);
I:=0; J:=99; RESIZE(G,J);
FOR I:=0 STEP 1 UNTIL J DO G[I]:=I-1;
WRITE(TV,<I3>,SOM(G[I],I,J));
END.

Each array element of G holds a value identical to its index minus one.
The procedure SOM returns the sum of all these array elements. This is
a trivial application. The nice thing is you can pass a function
instead of an array and do things like:

REAL PROCEDURE INTEGRAL(F(X),X,DELTAX,START,BEGIN);

This way you could write fairly generic toolkits. Once you get used to
the idea it is a very powerful tool.
BTW this is my first Algol since 1983 so I guess the syntax is a little
rusty.
Now if someone would get me access to an MCP system...

Hans


Rupert Pigott

Jun 22, 2002, 5:32:59 PM
"Hans Vlems" <hvl...@iae.nl> wrote in message
news:af2lav$b2l9q$1...@ID-143435.news.dfncis.de...
[SNIP]

> REAL PROCEDURE INTEGRAL(F(X),X,DELTAX,START,BEGIN);
>
> This way you could write fairly generic toolkits. Once you get used to the
> idea it is a very powerful
> tool.

That's exceedingly nifty. It was what I was looking for
and instead I got lumbered with C++ templates. Bugger. :)

Cheers,
Rupert


CBFalconer

Jun 22, 2002, 5:41:02 PM
Rupert Pigott wrote:
> "Hans Vlems" <hvl...@iae.nl> wrote in message
> >
... snip ...

> > >
> > The main problem with code efficiency in Algol was call by name IIRC.
> > All arrays in Algol are passed call by name (which incidentally led
> > to an interesting security loophole in the earlier versions of the
> > MCP) and this is perhaps not the case for fortran. That may account
> > for the difference in performance in software that uses linear algebra.
>
> Eeek ! You mean that it resolved the names at RUN-TIME ?

Eee-yup. The world of thunks. I never did get it straight.

Edward Reid

Jun 23, 2002, 7:41:37 PM
On Wed, 19 Jun 2002 10:29:38 -0400, PeteK wrote

> What, not even if you set $OPTIMIZE ?

RTFM. In A-Series Algol, $OPTIMIZE only allows early exit from Boolean
expressions, nothing else. The docs say that the compiler analyzes the
Boolean expression and only does early exit if no side effects are
possible in the skipped code (since Boolean expressions can have side
effects in Algol), and thus does not change the meaning of the code.
This is not done normally, since studies in the 1970s showed that the
disruption in the pipeline caused by the branch for early exit was
often more costly than evaluating the entire expression without a
branch. (I have no idea whether this is still true on modern
architectures. Certainly the ability to predict multiple paths would
mitigate the problem.)

On Thu, 20 Jun 2002 20:26:38 -0400, Jim Haynes wrote


> Yeah, but then Algol is conceptually a more complicated language than
> Fortran. And Fortran users were more apt to demand the fastest possible
> running.

And are today. The ongoing Fortran standards development constantly
cites efficiency as an argument for or against changes in the language
-- to an extent that you hardly ever see with the other programming
language standards.

AFAIK, Fortran remains the second most commonly used programming
language, after COBOL.

On Fri, 21 Jun 2002 14:35:40 -0400, Hans Vlems wrote


> IIRC fortran handled matrices different from algol. Object files
> generated by the fortran compiler had their elements in sequence
> column by column, while algol lined them up row by row. Now I do not
> know how the Burroughs hardware was designed but perhaps that
> vectormode instructions are (were?) affected by this.

Only a few of the early predecessors of the A/NX/LX systems had vector
mode instructions. In any case, IIRC the vector mode allowed a step
larger than one.

Fortran always implemented multidimensional arrays as single rows to
the instruction set architecture, and the compiler generated the code
to calculate the row subscript based on the multiple given subscripts.
This was essential for compatibility with other Fortrans -- many MANY
programs were written which assumed that when the column subscript
exceeded its bound, it would simply wrap around into the next column.
(Note another subthread about column vs row subscripts in Fortran.) In
fact, in FORTRAN IV it was common practice for a subroutine to receive
an array as a single dimension dummy argument when the actual argument
was two or more dimensions, and for the subroutine code to handle the
subscript calculations. This was because FORTRAN IV did not have
sufficient facilities for declaring an array dummy argument which might
have a different column dimension on different calls. (Or maybe that
was FORTRAN II and corrected in FORTRAN IV ... my first programming job
involved converting programs from FORTRAN II to FORTRAN IV, but that
was 35 years ago and those neurons have rusted.) In any case, common
practice dictated that an array had to be a contiguous memory area, for
reasons unrelated to the ISA.

> It seems odd that algolcode would always be slower than fortran,
> especially for an implementation language for the compilers
> themselves and other system software.

Burroughs Algol was always quite efficient for system software -- much
more so than Fortran would have been -- when written by programmers
knowledgeable about the implementation. The language (which as Randy
Gellens pointed out is mostly extensions built around an Algol
structural core) is not organized for numeric computation, despite the
Algol60 history. As often said, it's a matter of picking the proper
wrench to pound in the screw.

On Fri, 21 Jun 2002 14:54:09 -0400, Andrew Williams wrote


> Based on what I can remember of Algol 60, it had the following speed
> problems:
> - all storage was dynamically allocated (Fortran was static)

An Algol program written to operate in the same way as a comparable
Fortran program would do no allocation. This *would* mean writing some
things differently -- for example, making some local variables global.
But this was required anyway because common practice in Fortran was to
assume that values were saved between subroutine invocations. (Now
codified, with the SAVE attribute, but only practice in 1960.)

A smart compiler could have determined what locals could be static, but
I'm not aware that this was ever done.

> - Recursion was permitted

But not required ... and as others have pointed out, with proper ISA
support, recursion is not expensive.

> - The floating point IF A = B statement (syntax?) evaluated as TRUE if A
> was nearly = B (!!), the idea being that Floating point numbers are
> approximations.

In any case, Burroughs Algol never used inexact comparison, and this
discussion started with Burroughs/A-Series/NX/LX Algol.

I assume this statement derives from the following paragraph in the
Algol60 Report:

<quote>

3.3.6. Arithmetics of REAL quantities. Numbers and variables of
type REAL must be interpreted in the sense of numerical analysis,
i.e. as entities defined inherently with only a finite accuracy.
Similary, the possibility of the occurance of a finite deviation from
the mathematically defined result in any arithmetic expression is
explicitly understood. No exact arithmetic will be specified, however,
and it is indeed understood that different hardware representations may
evaluate arithmetic expressions differently. The control of the
possible consequences of such differences must be carried out by the
methods of numerical analysis. This control must be considered a part
of the process to be described, and will therefore be expressed in
terms of the language itself.

</quote>

It isn't at all clear to me that this requires inexact comparisons. In
fact, I read it as a warning to the programmer that REAL arithmetic is
inexact and the programmers must use appropriate numerical analysis in
developing algorithms. If some Algol implementation used inexact
comparison, it was probably an independent experiment.

After all, floating-point arithmetic was new then, and it was
appropriate to warn explicitly about its pitfalls. (Today one generally
assumes that programmers have become aware of this in the past 40
years. Unfortunately, one is wrong.)

> I am sure that there were additional reasons why Algol 60 was usually
> slower, but I have not looked at the language for around 25 years.

If you talk about the language in general rather than specific
implementations, I don't think it's a valid statement that Algol was
slower. Comparable programs in Algol and Fortran don't need to run
differently given adequate compilers. Algol, however, offers additional
opportunities for the programmer to write inefficient programs -- which
is really unnecessary given that Fortran already offers plenty of such
opportunities.

On Sat, 22 Jun 2002 9:43:37 -0400, Hans Vlems wrote


> The main problem with code efficiency in Algol was call by name
> IIRC. All arrays in Algol are passed call by name (which
> incidentally led to an interesting security loophole in the earlier
> versions of the MCP)

B/A/NX/LX Algol has always passed arrays using a reference rather than
a thunk mechanism (aka accidental entry). This does lead to some
inconsistencies: for example, if you pass A[J,K] as an actual parameter
(to a scalar formal parameter) by name, then the element of A
referenced changes if J changes. But if you pass A[J,*] as an actual
parameter to a single-dimension formal, it is passed by reference, and
changing J does not change the array row referenced.

Additional analysis by a compiler should be able to determine more
precisely when a thunk is required, and generate a reference instead in
most cases. But B/A/NX/LX Algol generally errs on the side of
efficiency.

So although an unexpected thunk is occasionally a performance problem,
the call-by-name mechanism is not a general performance issue. Given
that it's almost never used intentionally, it wouldn't be a bad idea to
have a compiler option which must be set to a non-default value to
allow call-by-name. Hmm, a New Feature Suggestion ...

On Sat, 22 Jun 2002 15:31:55 -0400, Lars Poulsen wrote


> Call by name allows some very elegant algorithms for
> matrix arithmetic, but they are much harder for a compiler
> to optimize than the Fortran kind of code.

They are also hell to read and understand. A neat idea -- may it rest
in peace.
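
The canonical example was Jensen's device: passing both the index variable and an expression by name, so the callee can drive the expression through a loop. A rough Python emulation with explicit thunks (the names here are illustrative, not from any Algol source):

```python
# SUM(i, lo, hi, expr) in the style of Jensen's device: the by-name
# index becomes a setter thunk, the by-name expression a getter thunk.
def jensen_sum(set_i, lo, hi, expr):
    total = 0
    for v in range(lo, hi + 1):
        set_i(v)              # assign through the by-name index
        total += expr()       # re-evaluate the by-name expression
    return total

env = {'i': 0}
a = [0, 1, 2, 3, 4]
# sum of a[i]*i for i = 1..4
result = jensen_sum(lambda v: env.update(i=v), 1, 4,
                    lambda: a[env['i']] * env['i'])
print(result)                 # 1*1 + 2*2 + 3*3 + 4*4 = 30
```

Elegant on paper, but as the posters note, every use of the expression costs a procedure call unless the compiler can prove a plain reference suffices.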

> In the case of
> the Burroughs machines, I suspect that the Fortran was
> basically translated into quasi-algol and then compiled
> from there, since the hardware was built to facilitate
> algol anyway.

As I've pointed out repeatedly, despite the publicity, the B5000
architecture really supported Fortran better than Algol, as it had
trouble addressing intermediate lexical levels (which Algol needs but
Fortran at the time didn't). The B6700 rectified this difficulty, but
still retained what Fortran needed.

There's a widespread misconception that because the Burroughs systems
were designed with Algol in mind, that their ISA was somehow almost a
one-to-one mapping to Algol. I held this misconception before I first
worked with a B6700 (in 1973), though I don't recall where I acquired
it. It's not true. The B6700 (and to a lesser extent the B5500 before
it) does have support for some things important in Algol -- good
support for procedure calls and dynamic memory allocation. Of course,
these are useful for supporting other languages as well. But the
B/A/NX/LX Algol compiler has always been a full-fledged compiler, not a
simple translator (though it remains an old, hand-written recursive
descent compiler). This is true even for the most basic features of
Algol.

Ironically, the NX/LX systems today are used almost entirely for COBOL
applications; Fortran 90 hasn't even been implemented and AFAIK there
are no plans. It isn't a very good architecture for COBOL. But as even
COBOL becomes more dependent on support functions -- especially
database management -- the strengths of the system continue to support
ongoing development.

Edward Reid


Lars Poulsen

unread,
Jun 23, 2002, 8:30:34 PM6/23/02
to
Edward Reid wrote:

> As I've pointed out repeatedly, despite the publicity, the B5000
> architecture really supported Fortran better than Algol, as it had
> trouble addressing intermediate lexical levels (which Algol needs but
> Fortran at the time didn't). The B6700 rectified this difficulty, but
> still retained what Fortran needed.


Am I wrong in "remembering" that the B6700 architecture implemented
type-tagging of variables in memory and used the same opcodes
for integer arithmetic as for floating point arithmetic?

This would seriously screw up the 1970's style of portable
Fortran-IV scientific programs, which used named and unnamed
common blocks for data shared between several modules, and
sometimes completely redefined a named common block between
subsystems as a way of overlaying data areas.

In fact, I believe that this might make it impossible to
implement a conformant Fortran of the day.

I am putting quotes around "remembering" because I never actually
worked with these beasts, although at the time I worked for the
Nordic Institute of Theoretical Atomics, I could probably have
gotten access to the B6700 at the Danish AEC's Risoe research
center. At the time, I was more interested in playing with
first the IBSYS system at NEUCC, later its OS/MVT/HASP replacement,
during the time I wasn't doing "real work(tm)" on our EXEC_8
system.

Warwick J. Hughes

unread,
Jun 23, 2002, 11:13:34 PM6/23/02
to

"Lars Poulsen" <la...@beagle-ears.com> wrote in message
news:3D14D0AB...@beagle-ears.com...

However, in most cases, 'call by name' is equivalent to 'call by reference',
the exceptions being oddball examples like the one detailed above, which in
practice are pretty rare. (In fact, they were uncommon enough that the
compiler emitted a warning message when a 'call by name' occurred.) 'Call by
reference' was very much the actual norm in Algol (still is, in fact!).

>
> Call by name allows some very elegant algorithms for
> matrix arithmetic, but they are much harder for a compiler
> to optimize than the Fortran kind of code. In the case of
> the Burroughs machines, I suspect that the Fortran was
> basically translated into quasi-algol and then compiled
> from there, since the hardware was built to facilitate
> algol anyway.

Incorrect..the Fortran compiler generated machine code directly (as did all
the compilers).

John Homes

unread,
Jun 23, 2002, 10:37:43 PM6/23/02
to

"Lars Poulsen" <la...@beagle-ears.com> wrote in message
news:3D16682A...@beagle-ears.com...

> Edward Reid wrote:
>
> > As I've pointed out repeatedly, despite the publicity, the B5000
> > architecture really supported Fortran better than Algol, as it had
> > trouble addressing intermediate lexical levels (which Algol needs but
> > Fortran at the time didn't). The B6700 rectified this difficulty, but
> > still retained what Fortran needed.
>
>
> Am I wrong in "remembering" that the B6700 architecture implemented
> type-tagging of variables in memory and used the same opcodes
> for integer arithmetic as for floating point arithmetic?
>

The B6700, which I used briefly, did use type-tagging for some purposes, but
not for distinguishing between integer and floating point.

The floating point representation was such that (within-range) integers, if
interpreted as though they were floating point numbers, would resolve to the
same numerical values. The hardware also knew that if the operands were
integers, the result should be too. This did permit the same opcodes to be
used.


> This would seriously screw up the 1970's style of portable
> Fortran-IV scientific programs, which used named and unnamed
> common blocks for data shared between several modules, and
> sometimes completely redefined a named common block between
> subsystems as a way of overlaying data areas.
>

Integer vs floating point should not prove problematical. However,
type-tagging *was* used for double precision, and this could perhaps cause
trouble. I never did anything that hit it, though.

> In fact, I believe that this might make it impossible to
> implement a conformant Fortran of the day.
>

Possibly the double-precision tags.

John Homes.

Stephen Fuld

unread,
Jun 24, 2002, 12:42:25 AM6/24/02
to

"Edward Reid" <edwar...@spamcop.net> wrote in message
news:01HW.B93BD4F10...@news-east.usenetserver.com...

> On Wed, 19 Jun 2002 10:29:38 -0400, PeteK wrote
>
> AFAIK, Fortran remains the second most commonly used programming
> language, after COBOL.

I had thought that some time ago BASIC supplanted COBOL as the most used (at
the time of the early PC, when every PC came with BASIC and lots of new
"programmers" got into the game). Then I thought C won out. But I guess it
also depends on what you mean by "most commonly used". Is it the language the
most existing programs are written in, the most newly coded programs are
written in, or the one the most programmers say is their first choice? I
guess it may also depend on whether the program must in some sense be in
official production use to count.

In any event, does someone have a cite for any real data on the subject?

--
- Stephen Fuld
e-mail address disguised to prevent spam


Edward Reid

unread,
Jun 24, 2002, 9:20:48 AM6/24/02
to
On Sun, 23 Jun 2002 20:30:34 -0400, Lars Poulsen wrote

> Am I wrong in "remembering" that the B6700 architecture implemented
> type-tagging of variables in memory and used the same opcodes
> for integer arithmetic as for floating point arithmetic?

As John Homes already explained well, it wasn't just the same opcodes,
but the same tags, as the integer and floating point formats were (and
are) compatible. The instruction for converting from integer to
floating point is ... NOOP. The instruction for converting from
floating to integer is NTGR (integerize), which really is just
"unnormalize to exponent=0" (and can generate the appropriate fault if
the value is out of range).
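
For contrast, on an IEEE 754 machine the integer 1 and the floating-point 1.0 have entirely different bit patterns, so integer-to-float can never be a NOOP; a quick Python check:

```python
import struct

# Bit pattern of the IEEE 754 double 1.0, viewed as a 64-bit integer
bits = struct.unpack('<Q', struct.pack('<d', 1.0))[0]
print(hex(bits))    # 0x3ff0000000000000 -- nothing like the integer 0x1
```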

Single-precision operands had a tag of 0, double-precision operands a
2, occurs index words a 4 (these are no longer used), and uninitialized
operands a 6. However, any of these could (can) be stored over any of
the others, so the incompatibility is far less than one might think.
The tags were used mostly when retrieving from memory: a double tag
caused both words to be loaded, and an uninitialized tag caused an
exception (far ahead of its time!).

In recent years the tag has been expanded from three bits to four, but
I don't remember the details of how the expanded values are used,
especially with respect to operands.

> This would seriously screw up the 1970's style of portable
> Fortran-IV scientific programs, which used named and unnamed
> common blocks for data shared between several modules, and
> sometimes completely redefined a named common block between
> subsystems as a way of overlaying data areas.

Of course, many of these programs were really portable only across a
small range of systems.

> In fact, I believe that this might make it impossible to
> implement a conformant Fortran of the day.

Yet the B6700 FORTRAN was conformant to the standards of the day (which
were less rigorous than later standards).

In the 1970s I worked for the Florida Department of Highway Safety and
Motor Vehicles. During that time, some people from an outfit under
contract to (I think) the federal Department of Transportation came to
install a program -- something they were installing in every state to
gather certain statistics for the feds. (I don't think I ever knew just
what statistics they were gathering, and I certainly don't recall now.)
I think the software was called OMNITAB, though my memory could be off
on that too.

At any rate, the software was written in FORTRAN. It had been developed
on a Univac 1100, and previously ported to IBM. They did indeed run
into numerous problems getting it to run on the B6700. But almost
everything the B6700 insisted they change was actually an *error* that
the other systems had not caught, such as mismatched argument types and
other things I don't remember.

At the end of the project, they said they wished they had done the
B6700 installation first, because it found all the errors, and porting
*from* the B6700 to the other systems would have been a breeze.
(Presumably they still had a port or two to go, but I never heard from
them again.)

So not only was B6700 FORTRAN conformant, it was actually one of the
best for writing portable programs.

Edward Reid


Marco S Hyman

unread,
Jun 24, 2002, 2:44:46 PM6/24/02
to
Edward Reid <edwar...@spamcop.net> writes:

> In recent years the tag has been expanded from three bits to four, but
> I don't remember the details of how the expanded values are used,
> especially with respect to operands.

Did they get rid of the parity bit, then?

// marc

Hans Vlems

unread,
Jun 23, 2002, 5:18:03 PM6/23/02
to

Rupert Pigott <dark.try-eati...@btinternet.com> wrote in message
news:af2qe9$9v5$1...@knossos.btinternet.com...
C++, or rather C, is for Unix what Algol is for the MCP.
Now if only those guys at Bell Labs had used Algol....


Hans Vlems

unread,
Jun 24, 2002, 4:18:43 PM6/24/02
to
>
> However, in most cases, 'call by name' is equivalent to 'call by
> reference', the exceptions being oddball examples like the one detailed
> above, which in practice are pretty rare. (In fact, they were uncommon
> enough that the compiler emitted a warning message when a 'call by name'
> occurred.) 'Call by reference' was very much the actual norm in Algol
> (still is, in fact!).
>
Not sure what Algol compiler version you're referring to, but the Mk 2.7 up
to Mk 3.1 compilers for the B6700, B7700 and B7900 did not generate warnings
or error messages. The concept of a thunk was not even taught in programming
101, though Jensen's device was.

I thought that the statement was "you can write fortran in any language" :-)
(Yup, I know it was basic, not fortran).

Hans


J Ahlstrom

unread,
Jun 24, 2002, 4:30:09 PM6/24/02
to
Hans Vlems wrote:

Now if only that guy at Bell Labs had made
C++ to C as Simula was to Algol.

JKA

--
You can't reason someone out of something they
haven't been reasoned into.


Hans Vlems

unread,
Jun 24, 2002, 4:48:33 PM6/24/02
to

J Ahlstrom <jahl...@cisco.com> wrote in message
news:3D178151...@cisco.com...
I've heard of Simula; in fact I have an old VAX/VMS compiler from the Univ.
of Hamburg. I was not aware that it supported call-by-name, though. I always
considered it a somewhat more structured Pascal. Since VMS Pascal also
supports modules (as separate compilation units) I never got to use Modula
much. I used Burroughs Algol for 7 years; after that I was forced onto VMS,
RSX and RDOS. I always found Algol an easy language to live with, and
usually all its replacements have something missing. Like Dijkstra said
about Basic, the language after a while affects the way your brain comes up
with algorithms. I got blessed/cursed with Burroughs Algol...

BTW I suddenly remember that there's also Modula-2 and -3 but I've never
seen those.

Hans

Rupert Pigott

unread,
Jun 24, 2002, 5:02:44 PM6/24/02
to
"Hans Vlems" <hvl...@iae.nl> wrote in message
news:af7ube$c53ig$1...@ID-143435.news.dfncis.de...

> >
> > However, in most cases, 'call by name' is equivalent to 'call by
> > reference', the exceptions being oddball examples like the one detailed
> > above, which in practice are pretty rare. (In fact, they were uncommon
> > enough that the compiler emitted a warning message when a 'call by name'
> > occurred.) 'Call by reference' was very much the actual norm in Algol
> > (still is, in fact!).
> >
> Not sure what Algol compiler version you're referring to, but the Mk 2.7 up
> to Mk 3.1 compilers for the B6700, B7700 and B7900 did not generate
> warnings or error messages.

Didn't they remove "call by name" from the Algol68
spec ? Looked like a bit of a radical move to me
considering it had been in since the '60 spec....

Cheers,
Rupert


Rupert Pigott

unread,
Jun 24, 2002, 5:04:00 PM6/24/02
to
"Hans Vlems" <hvl...@iae.nl> wrote in message
news:af7sfo$c5ek0$1...@ID-143435.news.dfncis.de...

I would have settled for BCPL proper. ;)

Cheers,
Rupert

Sam Yorko

unread,
Jun 24, 2002, 8:40:30 PM6/24/02
to
Edward Reid wrote:
>
> As often said, it's a matter of picking the proper
> wrench to pound in the screw.
>

I've got to remember this......

Sam

John Homes

unread,
Jun 24, 2002, 6:21:28 PM6/24/02
to

"Rupert Pigott" <dark.try-eati...@btinternet.com> wrote in message
news:af81dk$7d4$1...@paris.btinternet.com...

>
> Didn't they remove "call by name" from the Algol68
> spec ? Looked like a bit of a radical move to me
> considering it had been in since the '60 spec....
>

It's been a long time, but IIRC it is still possible to achieve the
various effects of "call by name" in Algol 68, but it has to be requested
explicitly by means of coercions, rather than just happening, as in Algol
60.

Algol 68 was a new language that built on the ideas of Algol 60, not just
Algol 60 on steroids. IMHO (very H, I never used the language) it suffered
greatly from a head-on collision between the grandiose concepts they were
trying to implement and the grossly inadequate (for what they were doing)
hardware platforms they assumed they would be using.

John Homes.


Brian Inglis

unread,
Jun 24, 2002, 8:33:46 PM6/24/02
to

Think a somewhat wordier version of C++: but remember where C
inherited the assignment operators from.

Brian Inglis

unread,
Jun 24, 2002, 8:37:12 PM6/24/02
to

One of his historical web pages implies he made C++ to Simula as
C was to Algol. ;^>

Edward Reid

unread,
Jun 24, 2002, 9:54:36 PM6/24/02
to
On Mon, 24 Jun 2002 14:44:46 -0400, Marco S Hyman wrote

>> In recent years the tag has been expanded from three bits to four, but
>> I don't remember the details of how the expanded values are used,
>> especially with respect to operands.
>
> Did they get rid of the parity bit, then?

Long before the tag was extended, memory was using single bit error
correction (SBEC) and double bit error detection. IIRC (an iffy
proposition on such a fine point) this required 60 bits for each 51-bit
word (48 data plus 3 tag). I would guess that it went to 61, 62, or 64
bits for the words with 4-bit tags. 61 or 62 would make sense when each
bit was stored on a separate board. Perhaps 64 would make more sense
with modern systems which store the entire word on one board, but I'm
not familiar enough with the hardware aspects to say for sure.

Besides, each new generation redesigned the memory. So there wasn't any
reason to rob Peter to pay Paul. It's only hardware; the software
doesn't know how many actual bits the hardware uses to provide a given
number of reliable bits.

It's only in the past very few years that they've moved toward using
standard memory. The LX systems use standard RAM (of course, since it's
just a Windows box emulating the NX architecture as a task). I don't
know whether NX systems are using standard memory cards. It would be
easy to think "OK, 64 bits gives plenty of room for data, tag, and SBEC
codes" ... but of course the RAM should *already* have error codes, so
the extra bits are wasted. (Some modern RAM doesn't have even simple
error detection, much less SBEC -- a false economy.) Nowadays it makes
more sense to waste some bits than to build custom RAM.

Edward Reid


robert d

unread,
Jun 24, 2002, 11:06:51 PM6/24/02
to
I guess it's time to reply.

Hello Mr Reid, haven't heard you since Jacksonville. Long time.

Real arithmetic: Read the current ALGOL documentation. It says (and I
will not quote it because I don't have a copy at hand): for arithmetic
comparisons use the = sign or EQL, but for boolean comparisons use the IS
or ISNT comparison, which is a bit-by-bit comparison. One compares the
mathematical value and the other doesn't! In other words, if one does not
know how the language works, one can stab oneself in the back with no
help from others!

I also think COBOL probably still has more lines of code hanging around
than any other language. Look at how long it has lived, and it's still going.

Just my two cents worth,
Robert deJarnette

CBFalconer

unread,
Jun 24, 2002, 11:21:49 PM6/24/02
to
Edward Reid wrote:
>
... snip ...

>
> Long before the tag was extended, memory was using single bit error
> correction (SBEC) and double bit error detection. IIRC (an iffy
> proposition on such a fine point) this required 60 bits for each 51-bit
> word (48 data plus 3 tag). I would guess that it went to 61, 62, or 64
> bits for the words with 4-bit tags. 61 or 62 would make sense when each
> bit was stored on a separate board. Perhaps 64 would make more sense
> with modern systems which store the entire word on one board, but I'm
> not familiar enough with the hardware aspects to say for sure.

64 bits is quite enough to handle ECC on memory up to 58 bits
wide.

A thought - to take advantage of the cheaper memory modules
available, and still have byte addressability, maybe we should be
designing machinery with a 56 bit main memory. If we allocate 8
bits of that for tags a la Burroughs we are still left with a 48
bit usable memory unit. I think the only rub would be to satisfy
C standards for doubles.

Brian Boutel

unread,
Jun 25, 2002, 12:14:26 AM6/25/02
to

Edward Reid wrote:

> On Mon, 24 Jun 2002 14:44:46 -0400, Marco S Hyman wrote
>
>>>In recent years the tag has been expanded from three bits to four, but
>>>I don't remember the details of how the expanded values are used,
>>>especially with respect to operands.
>>>
>>Did they get rid of the parity bit, then?
>>
>
> Long before the tag was extended, memory was using single bit error
> correction (SBEC) and double bit error detection. IIRC (an iffy
> proposition on such a fine point) this required 60 bits for each 51-bit
> word (48 data plus 3 tag). I would guess that it went to 61, 62, or 64
> bits for the words with 4-bit tags. 61 or 62 would make sense when each
> bit was stored on a separate board. Perhaps 64 would make more sense
> with modern systems which store the entire word on one board, but I'm
> not familiar enough with the hardware aspects to say for sure.
>
>


I am looking at one side of a 64kWord core board from a B6700. (It is
mounted on a wooden board, with a suitable inscription commemorating my
retirement).

It has 256k bits, and the cores are visible with a magnifying glass. 15
such sides would be necessary for 64k of 60-bit words, and IIRC, the
whole module was an 8-leaf, hinged, fanfold thing with half-density core
on the 2 outer surfaces. This does not fit with each bit of a word being
on a separate board.

Again, IIRC, the error correction/detection could be turned off, and
then there was a single parity bit. 6 bits are sufficient to provide
the detection/correction for up to a 64 bits, including the 6, so the
total requirement was 48 (data) + 3 (tag) + 1 (parity) + 6 (corr/det) =
58. It would be possible to expand to a 4-bit tag without increasing the
total word size, although I do not know if that was what was done.

What puzzles me is why single precision operands used only 47 of the 48
data bits (39 mantissa, 6 exponent, 1 mantissa sign, 1 exponent sign).
Anyone?

---brian
--
Brian Boutel
Wellington New Zealand


Note the NOSPAM

Warwick J. Hughes

unread,
Jun 25, 2002, 12:27:43 AM6/25/02
to

"Brian Boutel" <brian...@boutel.co.nz> wrote in message
news:3D17EE22...@boutel.co.nz...

Simple..it dates from the days of the B5500, which had no memory tags,
and used the high-order bit to denote descriptors and control words
(the so-called 'flag bit').

Brian Boutel

unread,
Jun 25, 2002, 1:48:39 AM6/25/02
to

Warwick J. Hughes wrote:

> "Brian Boutel" <brian...@boutel.co.nz> wrote in message
> news:3D17EE22...@boutel.co.nz...
>
>
>>

>>What puzzles me is why single precision operands used only 47 of the 48
>>data bits (39 mantissa, 6 exponent, 1 mantissa sign, 1 exponent sign).
>>
>
> Simple..it dates from the days of the B5500, which had no memory tags,
> and used the high-order bit to denote descriptors and control words
> (the so-called 'flag bit').
>


The wonders of the Internet! 22 years after I stopped using the machine,
I ask a question that bothered me back then and get an answer in
minutes!

Louis Krupp

unread,
Jun 25, 2002, 5:22:37 AM6/25/02
to
Hans Vlems wrote:
<snip>

> I always found Algol an easy language
> to live with and
> usually all its replacements have something missing.


C. A. R. Hoare, in "Hints on Programming Language Design,"
said that ALGOL 60 was "so far ahead of its time that it was
not only an improvement on its predecessors, but also on
nearly all its successors."

Louis Krupp

J Ahlstrom

unread,
Jun 25, 2002, 10:33:07 AM6/25/02
to SSc...@cisco.com, DJa...@cisco.com, CKa...@cruzio.com, Wayne....@sportsvigor.com, Alan....@unisys.com, Bi...@cisco.com
Brian Inglis wrote:

> On Mon, 24 Jun 2002 13:30:09 -0700, J Ahlstrom
> <jahl...@cisco.com> wrote:
>
> >Hans Vlems wrote:
> >
> >> C++, actually C is for unix what Algol is for the MCP.
> >> Now if only those guys at Bell labs would have used Algol....
> >
> >Now if only that guy at Bell Labs had made
> >C++ to C as Simula was to Algol.
>
> One of his historical web pages implies he made C++ to Simula as
> C was to Algol. ;^>
>
> --
>
> Thanks. Take care, Brian Inglis Calgary, Alberta, Canada
>
> Brian....@CSi.com (Brian dot Inglis at SystematicSw dot ab dot ca)
> fake address use address above to reply

I heard him say that's what he wanted/was trying to do.
I don't think there is any evidence that he succeeded
in any meaningful way. Perhaps C was just a terrible
base upon which to build compared to Algol.

Tim McCaffrey

unread,
Jun 25, 2002, 11:39:39 AM6/25/02
to
The newest systems use a 16 bit "tag". Actually, from what little I
understand, the extra 12 bits on the tag are used to add address bits to
descriptors. This also allows some of those "wasted" bits to be actually used
on the emulated systems.

- Tim

NOT speaking for my employer.

Edward Reid

unread,
Jun 25, 2002, 12:48:29 PM6/25/02
to
On Tue, 25 Jun 2002 0:27:43 -0400, Warwick J. Hughes wrote

>> What puzzles me is why single precision operands used only 47 of the 48
>> data bits (39 mantissa, 6 exponent, 1 mantissa sign, 1 exponent sign).
> Simple..it dates from the days of the B5500, which had no memory tags,
> and used the high-order bit to denote descriptors and control words
> (the so-called 'flag bit').

And it was deemed important to be compatible, since Burroughs wanted
its B5500 customers to migrate to the B6700, not to some other vendor.

Edward Reid


Edward Reid

unread,
Jun 25, 2002, 1:07:01 PM6/25/02
to
On Mon, 24 Jun 2002 23:06:51 -0400, robert d wrote

> Hello Mr Reid, haven't heard you since Jacksonville. Long time.

Robert! Long time indeed. Good to hear from you.

> Real arithmetic: Read the current ALGOL documentation it says, and I
> will not quote it cause I don't have one at hand, for arithmetic
> comparisons use the = sign or EQL but for boolean comparisons use the IS
> or ISNT comparison which is a bit by bit comparison. One compares the
> mathematics value and the other don't! In other words if one does not
> know how the language works they can stab themselves in the back with no
> help from others!

Yup. This is a flip side of the earlier question. In that question, two
values which were different both numerically and bit-wise might have
compared "equal" (fuzzy comparison). In this issue, more than one bit
pattern represents the *same* numerical value. Nothing fuzzy, just that
a value has more than one representation. For example, 4"000000000001"
(that's the Burroughs Algol way of writing a value in hex) and
4"261000000000" are both *exactly* 1 numerically, just different
representations.

This multiple representation occurs in most floating point systems, but
I think it's more likely to show up in MCP systems because of the ease
which which values shift between normalized and integer formats. The
issue usually arises when a programmer is using a full word as a bit
bucket and accidentally does something which causes the representation
to change.
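
Those two representations can be verified with a small decoder. This is only a sketch, assuming the commonly documented single-precision operand layout (bit 46 mantissa sign, bit 45 exponent sign, bits 44-39 an octal exponent, bits 38-0 the mantissa); take the exact field positions as an assumption:

```python
from fractions import Fraction

def decode_sp(word):
    """Decode a 48-bit B6700-style single-precision operand exactly."""
    mant_sign = (word >> 46) & 1
    exp_sign  = (word >> 45) & 1
    exponent  = (word >> 39) & 0x3F            # a power of 8, not of 2
    mantissa  = word & ((1 << 39) - 1)
    value = Fraction(mantissa) * Fraction(8) ** (-exponent if exp_sign else exponent)
    return -value if mant_sign else value

print(decode_sp(0x000000000001))   # 1: integer form, exponent 0
print(decode_sp(0x261000000000))   # 1: normalized form, same numeric value
```

Both words decode to exactly 1, which is the point: same value, different bit patterns, so a bit-wise IS/ISNT comparison distinguishes them while a numeric comparison does not.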

> I also think COBOL probably still has more lines of code hanging around
> than any other.

There's no question that COBOL has the most LOC. As Stephen Fuld
asked, the real question is how "use" is measured -- lines in use,
lines being written, programmers employed, programming hours spent
using, end user hours supported, etc etc -- and what is the relative
use under different definitions. I'm interested in any followup to his
comments, but so far nothing much new has come up.

Edward


Stephen Fuld

unread,
Jun 25, 2002, 2:01:51 PM6/25/02
to

"CBFalconer" <cbfal...@yahoo.com> wrote in message
news:3D17E11A...@yahoo.com...

> Edward Reid wrote:
> >
> ... snip ...
> >
> > Long before the tag was extended, memory was using single bit error
> > correction (SBEC) and double bit error detection. IIRC (an iffy
> > proposition on such a fine point) this required 60 bits for each 51-bit
> > word (48 data plus 3 tag). I would guess that it went to 61, 62, or 64
> > bits for the words with 4-bit tags. 61 or 62 would make sense when each
> > bit was stored on a separate board. Perhaps 64 would make more sense
> > with modern systems which store the entire word on one board, but I'm
> > not familiar enough with the hardware aspects to say for sure.
>
> 64 bits is quite enough to handle ECC on memory up to 58 bits
> wide.

No it isn't. Assuming that you want SBEC/DBED, you need seven protection
bits for up to 32 bits of data and eight protection bits for up to 64 data
bits. You can work this out for yourself as follows: Conceptually, to
point to a faulted bit in a 32 bit word, you would need five bits (2**5 =
32), but you then need another bit to indicate that there was no bit in
error (otherwise you would always spoint to some bit) and yet another to
indicate that there were two bits in error (the double bit detection part).
So you need seven protection bits for 32 data bits. A similar argument
applies to why you need 8 bits for up to 64 data bits. Please note that
real ECC codes don't use the bits the way I indicated, but the math works
out the same.
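
The counting argument reduces to finding the smallest r with 2**(r-1) >= m + r, one of the r bits being the overall parity that upgrades single-error correction to double-error detection. A quick Python sketch:

```python
def secded_check_bits(m):
    """Smallest number of check bits for SEC-DED over m data bits."""
    r = 1
    while 2 ** (r - 1) < m + r:
        r += 1
    return r

print(secded_check_bits(32))   # 7, as argued above
print(secded_check_bits(64))   # 8
print(secded_check_bits(51))   # 7, for the 48-bit-plus-3-bit-tag word
```

The m=51 case matches the 48 + 3 + 7 = 58-bit total mentioned earlier in the thread.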

CBFalconer

unread,
Jun 25, 2002, 7:31:52 PM6/25/02
to

You are right, and I used the same reasoning, but sloppily. One
more off-by-one error.

Which brings up a thought - if we build 48 bit memories with 6 bit
ecc using 64 bit modules, we have 10 unused bits. This should
allow building those memory modules with bogey chips and
discretionary wiring, thus mightily increasing the manufacturing
yield. Assuming the module is built out of 16 4-bit-wide components,
as many as 10 of these can be faulty in one bit plane.

Next, maybe we can eliminate the discretionary wiring by tricks in
the memory module driver, which can test and reconfigure itself on
power up.

I seem to remember core memory modules with spare planes, for just
this purpose.

CBFalconer

unread,
Jun 25, 2002, 7:31:53 PM6/25/02
to
Edward Reid wrote:
> On Mon, 24 Jun 2002 23:06:51 -0400, robert d wrote
>
... snip ...

>
> > I also think COBOL probably still has more lines of code hanging around
> > than any other.
>
> There's no question that COBOL has the most LOC. As Stephen Fuld
> asked, the real question is how "use" is measured -- lines in use,
> lines being written, programmers employed, programming hours spent
> using, end user hours supported, etc etc -- and what is the relative
> use under different definitions. I'm interested in any followup to his
> comments, but so far nothing much new has come up.

I would think that is in part because COBOL has the lowest IPLOC
(information per line) of the major languages, with the extra length spent
in pure verbosity rather than in useful redundancy.

Rupert Pigott

unread,
Jun 25, 2002, 8:39:11 PM6/25/02
to
"CBFalconer" <cbfal...@yahoo.com> wrote in message
news:3D18F531...@yahoo.com...
[SNIP]

> Which brings up a thought - if we build 48 bit memories with 6 bit
> ecc using 64 bit modules, we have 10 unused bits. This should
> allow building those memory modules with bogey chips and
> discretionary wiring, thus mightily increasing the manufacturing
> yield.

When I worked at INMOS I got to see the inside of some
memory chips (they were just getting out of the game at
that point). They always had some spare bits which they
could sub in for busted ones. Just about all silicon
memories do this, even cache and register files ! I think
they usually blow some fuses to replace the bad chunks of
memory before packaging.

IIRC some vendors actually used some reject DRAMs in
products which had half of their full capacity. The
other half failed qualification and was disabled.

> Assuming the module is build out of 16 4 wide components,
> as many as 10 of these can be faulty in one bit plane.
>
> Next, maybe we can eliminate the discretionary wiring by tricks in
> the memory module driver, which can test and reconfigure itself on
> power up.

I wonder if DRAMs do this these days. They do seem to
have acquired a hell of a lot of logic. ;)

Cheers,
Rupert


Edward Reid

unread,
Jun 25, 2002, 11:47:35 PM6/25/02
to
On Tue, 25 Jun 2002 19:31:53 -0400, CBFalconer wrote

> I would think that is in part because COBOL has the lowest IPLOC
> (Information per line) of the major languages, and it is spent in
> pure verbosity, rather than in useful redundancy.

I assume you refer to the LOC in use.

This is a factor, but I'm pretty sure COBOL outpaces all the others by
a very large amount, perhaps an order of magnitude -- far more than can
be accounted for by information density. But I don't have any current
references. Or any old references either except for my memory.

Also, it's been my experience that information density is not
particularly low in COBOL compared with similar applications written by
comparable programmers in other languages. A lot of what is done in
COBOL is basically drudge work -- move this here, move that there. It's
going to take a lot of lines of rather simple code in any language. For
more complex tasks, COBOL will typically take more LOC, but not a huge
amount more. Mind you, this is my highly subjective opinion, but it's
based on a lot of experience with COBOL, Algol, and Fortran, and
smaller amounts of experience with Icon, Pascal, Snobol, LISP, various
assemblers, and probably others.

OTOH, programmer skill, knowledge and training make very large
differences in LOC to perform a given task. And it is true that less
skilled programmers tend to end up in jobs where COBOL is used. It
doesn't work the other way -- there are a great many highly skilled
programmers working with COBOL systems. But the less skilled
programmers will often churn out lots of code with very low IPLOC.

So there are several factors lending the impression that COBOL is
terribly wordy, when in fact it is only a little wordier than other
languages, other things being equal ... which they seldom are.

Should have started crossposting this to comp.lang.cobol a few messages
back ... too late now unless I report the background.

Edward Reid


Brian Boutel

Jun 26, 2002, 12:15:41 AM

CBFalconer wrote:


I think it's not quite right.

6 correction/detection bits plus 1 normal parity bit, that's 7 in total,
are enough for 64 bit words. No eighth bit is needed for 2-bit-error
detection. In discussing the Unisys machines, we were talking about 6
bits *in addition to the usual parity bit*.

It works like this: The 6 check bits serve as parity bits for subsets
of the 64 bits. A 1-bit error will always show as a normal parity error,
and the subsets for which parity errors occur will indicate the
offending bit. A 2-bit error will show as parity errors in at least one
of the 6 subsets, but no overall normal parity error. The subsets with
parity errors do not indicate a unique bit-pair, so no correction can be
done.

The subsets are constructed as
1. All bits with a 1-bit in their bit number
2. All bits with a 2-bit in their bit number
...
6. All bits with a 32-bit in their bit number

Bit 0 is not in any set, but that's OK, a parity error alone identifies
this bit.

The eighth bit is not needed because a 2-bit error always generates a
parity error in at least one subset. For this not to be so, either both
error bits or neither error bit would have to be in each subset, i.e.
the error bits would be the same bit.

Rupert Pigott

Jun 26, 2002, 1:38:08 AM
"Edward Reid" <edwar...@spamcop.net> wrote in message
news:01HW.B93EB1970...@news-east.usenetserver.com...

> On Tue, 25 Jun 2002 19:31:53 -0400, CBFalconer wrote
> > I would think that is in part because COBOL has the lowest IPLOC
> > (Information per line) of the major languages, and it is spent in
> > pure verbosity, rather than in useful redundancy.
>
> I assume you refer to the LOC in use.
>
> This is a factor, but I'm pretty sure COBOL outpaces all the others by
> a very large amount, perhaps an order of magnitude -- far more than can
> be accounted for by information density. But I don't have any current
> references. Or any old references either except for my memory.

I find this odd. I've only come across COBOL (or derivatives/
work alikes) twice in my coding history. Virtually everything
I've seen has been written in C. The biggest single subsystem
I've seen was 250,000 lines of PASCAL... I find it hard to
believe given my own personal experience that COBOL has more
LOCs than C for example. Simple question of C being the
language of choice for 99.9% of Micros and micros have been
in far wider circulation than COBOL war-horses.

Could just be one of those weird things where you have an
utterly diff perspective because of the circles you move in...

Cheers,
Rupert


Charlie Gibbs

Jun 26, 2002, 3:28:12 AM
In article <3D18F531...@yahoo.com> cbfal...@yahoo.com
(CBFalconer) writes:

>I seem to remember core memory modules with spare planes, for just
>this purpose.

The Univac 9300's plated-wire memory (also used in early 9400s)
was 10 bits wide: 8 data bits, one parity bit, and one spare.
IIRC swapping in the spare involved a bit of work with a soldering
iron.

It was nice stuff at the time (late '60s through late '70s) -
non-volatile, non-destructive read-out, and 600-ns cycle time.

--
cgi...@sky.bus.com (Charlie Gibbs)
Remove the first period after the "at" sign to reply.
I don't read top-posted messages. If you want me to see your reply,
appropriately trim the quoted text and put your reply below it.

Charlie Gibbs

Jun 26, 2002, 3:33:28 AM
In article <3D18F6B3...@yahoo.com> cbfal...@yahoo.com (CBFalconer)
writes:

>Edward Reid wrote:
>
>> On Mon, 24 Jun 2002 23:06:51 -0400, robert d wrote
>>
>... snip ...
>>
>>> I also think COBOL probably still has more lines of code hanging
>>> around than any other.
>>
>> There's no question that COBOL has the most LOC. As Stephen Fuld
>> asked, the real question is how "use" is measured -- lines in use,
>> lines being written, programmers employed, programming hours spent
>> using, end user hours supported, etc etc -- and what is the relative
>> use under different definitions. I'm interested in any followup to
>> his comments, but so far nothing much new has come up.
>
>I would think that is in part because COBOL has the lowest IPLOC
>(Information per line) of the major languages, and it is spent in
>pure verbosity, rather than in useful redundancy.

That's OK, modern programming techniques as wielded by bureaucracies
are pushing C++ to COBOL's level and beyond. :-p

Richard Steiner

Jun 26, 2002, 3:46:09 AM
Here in comp.sys.unisys,
"Rupert Pigott" <dark.try-eati...@btinternet.com>
spake unto us, saying:

>I find this odd. I've only come across COBOL (or derivatives/
>work alikes) twice in my coding history. Virtually everything
>I've seen has been written in C. The biggest single subsystem
>I've seen was 250,000 lines of PASCAL...

Interesting. To provide a counterexample, almost everything that I've
encountered professionally has been written in C, FORTRAN, or COBOL,
with the latter two languages representing the greatest volume by a
huge margin.

The system I worked on at Northwest Airlines was roughly two million
lines of FORTRAN, and most of the mainframe systems it talked to were
written in FORTRAN or COBOL (with the exceptions typically being code
written in C and running on Unix systems).

Of course, most of my professional time has been spent in Unisys 2200-
series mainframeland within the airline industry where apps that are
written in older languages are relatively common.

--
-Rich Steiner >>>---> http://www.visi.com/~rsteiner >>>---> Eden Prairie, MN
OS/2 + BeOS + Linux + Win95 + DOS + PC/GEOS = PC Hobbyist Heaven! :-)
Applications analyst/designer/developer (13 yrs) seeking employment.
See web site in my signature for current resume and background.

Rupert Pigott

Jun 26, 2002, 6:02:27 AM
"Richard Steiner" <rste...@visi.com> wrote in message
news:BFXG9oHp...@visi.com...

> Here in comp.sys.unisys,
> "Rupert Pigott" <dark.try-eati...@btinternet.com>
> spake unto us, saying:
>
> >I find this odd. I've only come across COBOL (or derivatives/
> >work alikes) twice in my coding history. Virtually everything
> >I've seen has been written in C. The biggest single subsystem
> >I've seen was 250,000 lines of PASCAL...
>
> Interesting. To provide a counterexample, almost everything that I've
> encountered professionally has been written in C, FORTRAN, or COBOL,
> with the latter two languages representing the greatest volume by a
> huge margin.

I should point out that the PASCAL one was a bit of a freak. To
be honest I think it was actually a very pleasant experience, I
liked working with PASCAL on that kind of scale. I can't say I
find it as pleasant to use C or C++ on that kind of scale. Very
strange considering I've spent 14 years with C->C++ and only
about 15 months with PASCAL. Says a lot for the language and the
people working on that project I think (it was a 12 year old
code base back then). :P

> The system I worked on at Northwest Airlines was roughly two million
> lines of FORTRAN, and most of the mainframe systems it talked to were
> written in FORTRAN or COBOL (with the exceptions typically being code
> written in C and running on Unix systems).
>
> Of course, most of my professional time has been spent in Unisys 2200-
> series mainframeland within the airline industry where apps that are
> written in older languages are relatively common.

Yeah, I'm thinking it's a "never the twain shall" meet kind of
thing. I'm fairly certain there's more C in the world now than
anything else. Can't get away from the bloody stuff, I want a
change dammit !

Cheers,
Rupert


Edward Reid

Jun 26, 2002, 7:37:00 AM
On Wed, 26 Jun 2002 1:38:08 -0400, Rupert Pigott wrote

> Could just be one of those weird things where you have an
> utterly diff perspective because of the circles you move in...

Yes. Ever worked in a financial institution? There are a few of them
around, and I guarantee their software is almost entirely COBOL. A
recent client of mine has about 2 million lines of COBOL. You'll have a
hard time finding a single stick of C in a financial institution.

> I should point out that the PASCAL one was a bit of a freak. To
> be honest I think it was actually a very pleasant experience, I
> liked working with PASCAL on that kind of scale. I can't say I
> find it as pleasant to use C or C++ on that kind of scale.

Not surprising, given that Pascal was designed and C grew.

> Yeah, I'm thinking it's a "never the twain shall" meet kind of
> thing. I'm fairly certain there's more C in the world now than
> anything else. Can't get away from the bloody stuff, I want a
> change dammit !

Try getting a job at a bank. They are likely to be interested in HTML,
JavaScript, maybe even Java ... but less likely C, and they'll have a
lot more COBOL on hand than anything else.

The problem is that a high portion of the new and interesting software
work is indeed being done in C. If you can find a place doing new
development using Fortran 90, you might have the best of both worlds --
interesting programming in an interesting language -- since Fortran 90
is a better language than either COBOL (even COBOL 2002) or C. Another
choice might be Ada -- I haven't seen enough of it to be sure whether
I'd like it or not.

Edward Reid


Larry__Weiss

Jun 26, 2002, 7:55:22 AM
Rupert Pigott wrote:
> ... I liked working with PASCAL on that kind of scale. I can't say I

> find it as pleasant to use C or C++ on that kind of scale. Very
> strange considering I've spent 14 years with C->C++ and only
> about 15 months with PASCAL. Says a lot for the language and the
> people working on that project I think (it was a 12 year old
> code base back then). :P
>

At least Pascal has a string type, and an infix string concatenation operator.

I did some Pascal coding after doing many years of C programming and it felt
good not to have to be concerned with the disposal of memory allocated
for handling dynamic character strings.

But, I do like the overall syntax of a C/C++/Java/C#/AWK program more than
the more verbose Pascal-like syntax.

- LarryW

Randall Bart

Jun 26, 2002, 8:08:54 AM
'Twas Tue, 25 Jun 2002 18:01:51 GMT when all comp.sys.unisys stood in awe as
"Stephen Fuld" <s.f...@PleaseRemove.att.net> uttered:

>Assuming that you want SBEC/DBED, you need seven protection
>bits for up to 32 bits of data and eight protection bits for up to 64 data
>bits.

That's not quite how I understand it. First you need a parity bit, then you
need hamming bits. You need enough hamming bits to address the data bits,
the parity bit, and the hamming bits themselves. You can fit 57 data bits,
1 parity bit, and 6 hamming bits into 64 bits. If you had 58 data bits, you
would need 66 bits. For the purposes of A Series architecture, the tag bits
are included in the data bits.
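The counts follow directly from the addressing argument; a quick sketch (of the arithmetic only, not of any particular machine's ECC):

```python
# r hamming bits can address locations 1 .. 2**r - 1; those locations hold
# the r hamming bits themselves plus the data, and location 0 holds the
# overall parity bit.
def max_data_bits(r):
    return 2**r - 1 - r

print(max_data_bits(6))   # -> 57   (57 data + 6 hamming + 1 parity = 64)
print(max_data_bits(7))   # -> 120  (58 data bits would need 58 + 7 + 1 = 66)
```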

This is not how error correction works on the current LX systems, and I
don't know if it works that way on the NX systems either. The RAM is on
standard 64 bit modules, with some amount of parity and error correction on
top of that. For A Series purposes, the 64 bits are 48 data bits and a 16
bit tag, but most of the tag bits are unused.
--
RB |\ © Randall Bart
aa |/ ad...@RandallBart.spam.com Bart...@att.spam.net
nr |\ Please reply without spam I LOVE YOU 1-917-715-0831
dt ||\ http://RandallBart.com/ DOT-HS-808-065 MS^7=6/28/107
a |/ "Believe nothing, no matter where you read it, or who
l |\ said it, no matter if I have said it, unless it agrees
l |/ with your own reason and your own common sense."--Buddha

Stephen Fuld

Jun 26, 2002, 1:02:09 PM

"Brian Boutel" <brian...@boutel.co.nz> wrote in message
news:3D193FED...@boutel.co.nz...

But how do you distinguish between bit zero flipped and the parity bit
flipped? Wouldn't they both indicate a parity error with no "syndrome" bits
flipped? How would you know whether to correct bit zero or not? Similarly,
what about a two bit error that was one of the check bits and the parity
bit? Wouldn't that seem identical to a single bit error in one of the data
bits?

Stephen Fuld

Jun 26, 2002, 1:12:23 PM

"Randall Bart" <Bart...@att.spam.net> wrote in message
news:k5ajhuog0dmho5paq...@4ax.com...

> 'Twas Tue, 25 Jun 2002 18:01:51 GMT when all comp.sys.unisys stood in awe as
> "Stephen Fuld" <s.f...@PleaseRemove.att.net> uttered:
>
> >Assuming that you want SBEC/DBED, you need seven protection
> >bits for up to 32 bits of data and eight protection bits for up to 64 data
> >bits.
>
> That's not quite how I understand it.  First you need a parity bit, then you
> need hamming bits. You need enough hamming bits to address the data bits,
> the parity bit, and the hamming bits themselves.  You can fit 57 data bits,
> 1 parity bit, and 6 hamming bits into 64 bits.

I don't quite understand. Does the parity bit cover just the data bits or
the data bits plus the Hamming bits? If it includes the Hamming bits, then
two Hamming bits flipped would cause mis-correction, not a reported double
error. If it does not include the Hamming bits, then two data bits flipped
would cause the same problem. (At least I think that is the issue - my
brain is a little fuzzy - must get more coffee!!!!)

Steve O'Hara-Smith

Jun 26, 2002, 1:42:38 PM
On 25 Jun 02 23:33:28 -0800
"Charlie Gibbs" <cgi...@sky.bus.com> wrote:

CG> In article <3D18F6B3...@yahoo.com> cbfal...@yahoo.com (CBFalconer)
CG> writes:
CG>
CG> >Edward Reid wrote:
CG> >
CG> >I would think that is in part because COBOL has the lowest IPLOC
CG> >(Information per line) of the major languages, and it is spent in
CG> >pure verbosity, rather than in useful redundancy.
CG>
CG> That's OK, modern programming techniques as wielded by bureaucracies
CG> are pushing C++ to COBOL's level and beyond. :-p

While Java and the associated code generators have long since
left the both of them standing.

--
C:>WIN | Directable Mirrors
The computer obeys and wins. |A Better Way To Focus The Sun
You lose and Bill collects. | licenses available - see:
| http://www.sohara.org/

Hans Vlems

Jun 26, 2002, 3:11:11 PM

Louis Krupp <lkr...@pssw.NOSPAMPLEASE.com.invalid> wrote in message
news:3D18365D...@pssw.NOSPAMPLEASE.com.invalid...

I wonder what his opinion is on Burroughs Algol...


Brian Boutel

Jun 26, 2002, 4:50:10 PM

Stephen Fuld wrote:

> "Brian Boutel" <brian...@boutel.co.nz> wrote in message
> news:3D193FED...@boutel.co.nz...
>
>


Remember that all bits, including the parity bit and the check bits are
part of the 64-bit word, and all 64 bits are included in the parity and
other checks. While a bit 0 error would cause a parity error and no
errors in the check subsets, a parity bit error would cause both a
parity error and an error in the subsets that included the parity bit.
Any 2-bit error will appear to be parity-correct, and so can be
distinguished from a 1-bit error, which will not.

Brian Boutel

Jun 26, 2002, 5:04:35 PM

Stephen Fuld wrote:


All bits, data, tag, parity, Hamming, are part of the word and are
included in parity checks.

A parity error is not indicated by the value of the parity bit, but by
the oddness (or evenness) of the count of 1-bits in the whole word,
including the parity bit.

Similarly, it's not the values of the Hamming bits that are important,
but the parity checks on the associated bit subsets. This is exactly the
same as with normal 1-bit parity checking, where any single bit in
error, including the parity bit itself, will show as a parity failure,
because, in the case of odd parity, the total count of 1-bits will then
be even.

Flipping two Hamming bits (or data bits) will not cause an overall
parity error (flipping 2 bits preserves parity), but will cause a parity
error in at least one bit-subset covered by the Hamming check.

CBFalconer

Jun 26, 2002, 5:45:39 PM
Larry__Weiss wrote:
> Rupert Pigott wrote:
>
> > ... I liked working with PASCAL on that kind of scale. I can't
> > say I find it as pleasant to use C or C++ on that kind of scale.
> > Very strange considering I've spent 14 years with C->C++ and
> > only about 15 months with PASCAL. Says a lot for the language
> > and the people working on that project I think (it was a 12
> > year old code base back then). :P
>
> At least Pascal has a string type, and an infix string
> concatenation operator.

Not until you get to ISO10206 Extended Pascal. Which is not really
any major loss, when you have conformant arrays.

John Homes

Jun 26, 2002, 4:42:28 PM

"Rupert Pigott" <dark.try-eati...@btinternet.com> wrote in message
news:afbjvv$rqh$1...@helle.btinternet.com...

>
> I find this odd. I've only come across COBOL (or derivatives/
> work alikes) twice in my coding history. Virtually everything
> I've seen has been written in C. The biggest single subsystem
> I've seen was 250,000 lines of PASCAL... I find it hard to
> believe given my own personal experience that COBOL has more
> LOCs than C for example. Simple question of C being the
> language of choice for 99.9% of Micros and micros have been
> in far wider circulation than COBOL war-horses.

A lot of C programs for micros were written once, and spread far and wide.
For the purposes of this thread, I think it's fair that each such program
have its LoC counted only once. A *lot* of COBOL is bespoke.

John Homes (who has written a fair amount of bespoke COBOL in his time).


Randall Bart

Jun 26, 2002, 10:49:29 PM
Lemme see if I can reconstruct this from memory.

Single Bit Error Correction Via Hamming Bits (8-Bit Example)

For error correction, we need a parity bit plus enough hamming bits to
address the data bits, the parity bit, and the hamming bits. For eight
bits, that will be one parity bit and four hamming bits. The hamming bits
go in hamming locations which correspond to powers of 2. Therefore the
hamming bits will be 1, 2, 4, and 8. Let's lay out the bits:

Data bits:         P  H1 H2 D0 H4 D1 D2 D3 H8 D4 D5 D6 D7
Hamming location:  0  1  2  3  4  5  6  7  8  9  10 11 12
(It looks better monospaced.)

The parity bit and the hamming bits each have a domain. The domain of each
hamming bit is the data bits whose hamming location in binary contains that
bit. The domain of the parity bit is all data and hamming bits.

H1 - D0, D1, D3, D4, D6
H2 - D0, D2, D3, D5, D6
H4 - D1, D2, D3, D7
H8 - D4, D5, D6, D7

The value of the parity bit and the hamming bits is set so that each domain
has an even number of 1 bits. (Or odd, but I'll use even parity.)

Example: Take the data 01101110

Data bits:         P  H1 H2 D0 H4 D1 D2 D3 H8 D4 D5 D6 D7
Hamming location:  0  1  2  3  4  5  6  7  8  9  10 11 12
Value:             0  1  1  0  0  1  1  0  1  1  1  1  0

To validate the data, the parity and hamming bits are calculated. If any
one bit is flipped, the parity bit will be wrong, indicating correctable
error. Calculate the hamming bits, and their binary value points to the bit
in error. If it comes out to zero, the bad bit is the parity bit itself.
If it comes out to 1, 2, 4, or 8, the error is in the hamming bit. If it
comes out any other value, the error in the data bit in that hamming
location.

If the parity bit works out right, but one or more hamming bits are wrong,
that's a two bit error. It can't be corrected, but that's two bit error
detection. If three bits are flipped, the parity bit will come out wrong,
and error correction will flip one bit (erroneously). If you turn off one
bit error correction, you will have three bit error detection.

Play around with this and you'll see what I mean.
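The layout can in fact be checked mechanically. A short sketch (even parity, as in the post) reproduces both the computed bits for 01101110 and the bit-pointing behavior:

```python
# The 13-bit layout from the post: 8 data bits, hamming bits at locations
# 1/2/4/8, overall parity at location 0, even parity throughout.
DATA_LOC = [3, 5, 6, 7, 9, 10, 11, 12]        # locations of D0..D7

def lay_out(data):                             # data: list of 8 ints (0/1)
    word = [0] * 13
    for d, loc in zip(data, DATA_LOC):
        word[loc] = d
    for h in (1, 2, 4, 8):                     # hamming bit h covers every
        word[h] = sum(word[i] for i in range(13) if i & h) % 2  # location with h set
    word[0] = sum(word) % 2                    # overall parity over all 13 bits
    return word

word = lay_out([0, 1, 1, 0, 1, 1, 1, 0])       # the data 01101110
print([word[0]] + [word[h] for h in (1, 2, 4, 8)])   # -> [0, 1, 1, 0, 1]

word[6] ^= 1                                   # flip D2 (hamming location 6)
syn = sum(h for h in (1, 2, 4, 8)
          if sum(word[i] for i in range(13) if i & h) % 2)
print(sum(word) % 2, syn)                      # -> 1 6: parity odd, bad bit at 6
```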

The.Central.Scr...@invalid.pobox.com

Jun 26, 2002, 1:01:03 PM
In article <01HW.B93EB1970...@news-east.usenetserver.com>,

Edward Reid wrote:
>On Tue, 25 Jun 2002 19:31:53 -0400, CBFalconer wrote
>> I would think that is in part because COBOL has the lowest IPLOC
>> (Information per line) of the major languages, and it is spent in
>> pure verbosity, rather than in useful redundancy.
>
>I assume you refer to the LOC in use.
>
>This is a factor, but I'm pretty sure COBOL outpaces all the others by
>a very large amount, perhaps an order of magnitude -- far more than can
>be accounted for by information density. But I don't have any current
>references. Or any old references either except for my memory.
>
>Also, it's been my experience that information density is not
>particularly low in COBOL compared with similar applications written by
>comparable programmers in other languages. A lot of what is done in
>COBOL is basically drudge work -- move this here, move that there. It's
>going to take a lot of lines of rather simple code in any language. For
>more complex tasks, COBOL will typically take more LOC, but not a huge
>amount more. Mind you, this is my highly subjective opinion, but it's
>based on a lot of experience with COBOL, Algol, and Fortran, and
>smaller amounts of experience with Icon, Pascal, Snobol, LISP, various
>assemblers, and probably others.
...

For *anything* cobol requires more LOC than any other language, including
assembler. I consider cobol to be a negatively high level language.

Vax macro is a far higher level language. At least it has subroutine
calls with parameters, nested conditionals and things like a case
statement.

Edward Reid

Jun 27, 2002, 9:13:34 AM
On Wed, 26 Jun 2002 13:01:03 -0400,
The.Central.Scr...@invalid.pobox.com () wrote

> For *anything* cobol requires more LOC than any other language, including
> assembler.

I've seen programs written in 1000 lines of Algol that I could easily
write in 100 lines of COBOL. It depends far more on the programmer than
on the language. The language matters, but much less so than the
programmer.

Whether you can beat COBOL with assembler depends on the assembler.
I'll contend that assembler can beat COBOL only by using macros
extensively. And if you use macros extensively, then it doesn't matter
what language you are writing in, because the actual code is all in the
macros. There are numerous macro processors for COBOL, and they quite
predictably drop the LOC count drastically on many programs. Without
macros, any "true" assembler (that is, assembler for a machine
language) without macros will require many times the LOC of COBOL.

So this isn't a choice-of-language issue, it's a whether-to-use-macros
issue, at least between COBOL and any assembler.

Of course, the generally acknowledged champion for minimum LOC is APL.
In APL, every program can be written in one line, information density
approaches infinity, and programs are impenetrable.

Edward Reid


Randall Bart

Jun 27, 2002, 10:06:16 AM
'Twas 26 Jun 2002 11:01:03 -0600 when all comp.sys.unisys stood in awe as
The.Central.Scr...@invalid.pobox.com () uttered:

>For *anything* cobol requires more LOC than any other language, including
>assembler. I consider cobol to be a negatively high level language.

If you're an exceptionally slow typist, don't use Cobol. The intent of
Cobol is to produce code that is easy to maintain, not to write it quickly
the first time.

>Vax macro is a far higher level language. At least it has subroutine
>calls with parameters, nested conditionals and things like a case
>statement.

Have you actually used Cobol? I mean a version more recent than Cobol-68.
Cobol has subroutine calls with parameters (since Cobol-74). Cobol has
nested conditionals (clunkily in Cobol-74, done well in Cobol-85). Cobol
has a case statement (since Cobol-85).

Does Vax Macro have fixed point numbers? I don't mean integers; I mean
fixed point numbers with decimal fractions like we use for money. I come
out of banking, and bankers care about the precision of their monetary
values. In fact since we're judging languages by what they once were, not
what they are, did Vax Macro have fixed point 30 years ago?

And certainly your claim that *anything* takes more LOC in Cobol is false.
In Vax Macro can you edit a number the way we do in Cobol? How do you
produce an output field like "$12,345.67"? Are the text parsing facilities
of Vax Macro as concise as UNSTRING?

Randall Bart

Jun 27, 2002, 10:13:36 AM
'Twas Thu, 27 Jun 2002 9:13:34 -0400 when all comp.sys.unisys stood in awe
as Edward Reid <edwar...@spamcop.net> uttered:

>I've seen programs written in 1000 lines of Algol that I could easily
>write in 100 lines of COBOL. It depends far more on the programmer than
>on the language. The language matters, but much less so than the
>programmer.

A determined programmer can write bad Fortran in any language.

Larry__Weiss

Jun 27, 2002, 10:26:27 AM
Edward Reid wrote:
> I've seen programs written in 1000 lines of Algol that I could easily
> write in 100 lines of COBOL. It depends far more on the programmer than
> on the language. The language matters, but much less so than the
> programmer.
>

At least COBOL had a concept of currency amounts built-in.

I've been surprised at how few computer languages have a monetary
currency data type.

- LarryW
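Larry's point is easy to demonstrate; a small sketch using Python's decimal module as a stand-in for a built-in currency type:

```python
# A tenth has no exact binary floating-point representation, so repeated
# addition drifts; a decimal type stays exact. This is the guarantee a
# monetary data type exists to provide.
from decimal import Decimal

total_float = sum(0.10 for _ in range(1000))             # binary floats
total_fixed = sum(Decimal("0.10") for _ in range(1000))  # exact decimals

print(total_float == 100.0)   # -> False: rounding error accumulated
print(total_fixed == 100)     # -> True
```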

jmfb...@aol.com

Jun 27, 2002, 7:16:34 AM
In article
<0B5619B62D7C4F86.015928FF...@lp.airnews.net>,

Larry__Weiss <l...@airmail.net> wrote:
>Edward Reid wrote:
>> I've seen programs written in 1000 lines of Algol that I could easily
>> write in 100 lines of COBOL. It depends far more on the programmer than
>> on the language. The language matters, but much less so than the
>> programmer.
>>
>
>At least COBOL had a concept of currency amounts built-in.

Oh, yea. I had to explain this to a bank after they had
handed gazillion amount to contract workers who thought
that FORTRAN was the bees knees.

My management would not allow me to tell the bank that they
should have specified that COBOL be used.

>
>I've been surprised at how few computer languages have a monetary
>currency data type.

Look how some guys in this newsgroup "prove" their manhood by
slamming COBOL. I'm not surprised at all.

/BAH

Subtract a hundred and four for e-mail.

CBFalconer

Jun 27, 2002, 11:15:38 AM
Edward Reid wrote:
>
... snip ...
>
> Of course, the generally acknowledged champion for minimum LOC is APL.
> In APL, every program can be written in one line, information density
> approaches infinitity, and programs are impenetrable.

Wasn't APL the original one-way algorithm, that inspired
Rivest-Adleman and sired public key cryptography :-)

Rupert Pigott

Jun 26, 2002, 12:07:13 PM
"Larry__Weiss" <l...@airmail.net> wrote in message
news:F4FA1BCD88963A77.1CFD8891...@lp.airnews.net...
[SNIP]

> At least Pascal has a string type, and an infix string concatenation operator.
>
> I did some Pascal coding after doing many years of C programming and it felt
> good not to have to be concerned with the disposal of memory allocated
> for handling dynamic character strings.
>
> But, I do like the overall syntax of a C/C++/Java/C#/AWK program more than
> the more verbose Pascal-like syntax.

I remember finding that PASCAL seemed verbose after C, but
what I found was that I was typing a *shed-load* less code
to do basic things. Those savings (string handling was one
of them) more than made up for it, and gave me more reliable
code too.

The sad thing is that I feel that C++ hasn't really helped
a great deal. Maybe it's just my coding style in C++, but
I find myself having to put in a ton of junk to ensure
portability or avoid dubious & dangerous language features.

Cheers,
Rupert


Rupert Pigott

Jun 26, 2002, 12:12:49 PM
"Edward Reid" <edwar...@spamcop.net> wrote in message
news:01HW.B93F1F9C0...@news-east.usenetserver.com...

> On Wed, 26 Jun 2002 1:38:08 -0400, Rupert Pigott wrote
> > Could just be one of those weird things where you have an
> > utterly diff perspective because of the circles you move in...
>
> Yes. Ever worked in a financial institution? There are a few of them
> around, and I guarantee their software is almost entirely COBOL. A
> recent client of mine has about 2 million lines of COBOL. You'll have a
> hard time finding a single stick of C in a financial institution.

LOL, yeah, most of my recent experience has been in the city
of London. No. Never saw or heard of COBOL there. It's possible
that some vendor supplied apps were in COBOL, I'm sure there
MUST have been something in COBOL there. But honestly all the
internally developed stuff I came across was in C/C++/VB etc.

I was surprised too btw, as C seemed like an incredibly bad fit
for the problem domain... Lots of string handling, formatted I/O,
big money figures etc... It amazed me the boneheaded mistakes
and workarounds for C they had to come up with to do the most
basic tasks in that environment. :P

[SNIP]

> Try getting a job at a bank. They are likely to be interested in HTML,
> JavaScript, maybe even Java ... but less likely C, and they'll have a
> lot more COBOL on hand than anything else.

Few to zero COBOL jobs here, all C. That's been the case for the
last 5 years anyway.

> The problem is that a high portion of the new and interesting software
> work is indeed being done in C. If you can find a place doing new

C++, Java and VB seem to be the language skills most wanted by
the banks at the moment in the City.

> development using Fortran 90, you might have the best of both worlds --
> interesting programming in an interesting language -- since Fortran 90
> is a better language than either COBOL (even COBOL 2002) or C. Another
> choice might be Ada -- I haven't seen enough of it to be sure whether
> I'd like it or not.

Sigh... Never the twain met methinks. :P

Cheers,
Rupert


Pete Fenelon

Jun 27, 2002, 12:28:52 PM
In alt.folklore.computers Rupert Pigott <dark.try-eati...@btinternet.com> wrote:
> I remember finding that PASCAL seemed verbose after C, but
> what I found was that I was typing a *shed-load* less code
> to do basic things. Those savings (string handling was one
> of them) more than made up for it, and gave me more reliable
> code too.

I assume this was Turbo Pascal or some extended version of the language?
Standard Pascal is painful when it comes to any form of text
processing! (I've written a lot of code in BS6192 Pascal and it
*hurts*.) "Why Pascal Is Not My Favourite Programming Language" speaks
to me loud and clear :)


pete
--
pe...@fenelon.com "serious sport has nothing to do with fair play" - orwell

Rupert Pigott

Jun 26, 2002, 7:11:54 PM
"John Homes" <john....@eds.com> wrote in message
news:afd8vk$s0a$1...@hermes.nz.eds.com...

I would only be counting those once. There's a HELL of a lot of
bespoke C out there.... There's more to Micros than PCs you know
and in case you haven't noticed the embedded market is *far*
more diverse and specialises in "custom" apps. :)

All the C I've written has been "bespoke", aside from a stint at
a very small-volume turnkey box vendor. Even then we specialised
stuff for each customer - though I wouldn't count whole libraries
for each customer, just the changed lines. :)


Cheers,
Rupert


Rupert Pigott

unread,
Jun 27, 2002, 2:01:57 PM6/27/02
to
"Randall Bart" <Bart...@att.spam.net> wrote in message
news:2u5mhu0o8nvkk5o7r...@4ax.com...
[SNIP]

> And certainly your claim that *anything* takes more LOC in Cobol is false.
> In Vax Macro can you edit a number the way we do in Cobol? How do you
> produce an output field like "$12,345.67"? Are the text parsing
> facilities of Vax Macro as concise as UNSTRING?

You might be surprised what you can do with VAX Macro
assembler. It's not your "usual" type instruction set.
The VAX instruction set *does* support lots of complex
string handling (EDIT instructions) and a zillion
datatypes - including BCD *strings* of course. To be
honest I'm amazed that they were able to fit all that
into the VAX-11/780 cabinet in the first place.

Cheers,
Rupert
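(For comparison, the edited-field idea Randall asks about -- producing
"$12,345.67" -- sketched in modern Python. Purely illustrative: this is
not what a COBOL PICTURE clause or the VAX EDITPC instruction actually
executes, just the same grouping-plus-currency effect:)

```python
# Illustrative sketch only: render a fixed-point amount the way an
# edited COBOL field (e.g. PICTURE $ZZ,ZZ9.99) or a VAX EDITPC pattern
# would -- currency sign, comma grouping, two decimals.
from decimal import Decimal

def edit_money(amount):
    """Return e.g. Decimal('12345.67') formatted as '$12,345.67'."""
    return f"${amount:,.2f}"

print(edit_money(Decimal("12345.67")))  # $12,345.67
```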


Larry__Weiss

unread,
Jun 27, 2002, 2:46:44 PM6/27/02
to
Pete Fenelon wrote:
> In alt.folklore.computers Rupert Pigott <dark.try-eati...@btinternet.com> wrote:
> > I remember finding that PASCAL seemed verbose after C, but
> > what I found was that I was typing a *shed-load* less code
> > to do basic things. Those savings (string handling was one
> > of them) more than made up for it, and gave me more reliable
> > code too.
>
> I assume this was Turbo Pascal or some extended version of the language?
> Standard Pascal is painful when it comes to any form of text
> processing! (I've written a lot of code in BS6192 Pascal and it
> *hurts*.) "Why Pascal Is Not My Favourite Programming Language" speaks
> to me loud and clear :)
>

Can Pascal be changed to become a better language the way that C is
changed every decade or so? Or is any "Pascal" evolution just left
to the individual implementors at this point in time?

- LarryW

Stephen Fuld

unread,
Jun 27, 2002, 2:57:12 PM6/27/02
to

<The.Central.Scr...@invalid.pobox.com> wrote in message
news:slrnahjsqe.i0q.The.Cen...@flatland.dimensional.com...

snip

> For *anything* cobol requires more LOC than any other language, including
> assembler. I consider cobol to be a negatively high level language.
>
> Vax macro is a far higher level language. At least it has subroutine
> calls with parameters,

As does COBOL (if they are separately compiled).

> nested conditionals

As does COBOL (the ELSEIF statement)

> and things like a case
> statement.

COBOL has a limited form of the case statement, where the dependent variable
is an integer (GO TO ... DEPENDING ON). But when that is combined with the
TRANSFORM verb (which I don't know is standard, but it is a common
extension), you get a more generalized facility.
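(A sketch of the shape of that construct in Python -- the handler names
are made up -- showing how GO TO ... DEPENDING ON selects the nth branch
from an integer and falls through when the value is out of range:)

```python
# Hypothetical handlers standing in for COBOL paragraphs; the point is
# the dispatch shape of GO TO para-1 para-2 para-3 DEPENDING ON N.
def add_record():    return "add"
def delete_record(): return "delete"
def update_record(): return "update"

PARAGRAPHS = [add_record, delete_record, update_record]

def depending_on(n):
    # COBOL's N is 1-based; an out-of-range N "falls through" to the
    # statement after the GO TO instead of raising an error.
    if 1 <= n <= len(PARAGRAPHS):
        return PARAGRAPHS[n - 1]()
    return "fell through"

print(depending_on(2))  # delete
print(depending_on(7))  # fell through
```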

Stephen Fuld

unread,
Jun 27, 2002, 2:57:13 PM6/27/02
to

"Larry__Weiss" <l...@airmail.net> wrote in message
news:0B5619B62D7C4F86.015928FF...@lp.airnews.net...

Or even the less difficult and far more generally applicable facility of
printing numbers with digit separators every three digits! I get so ticked
off seeing large numbers printed as a string of digits without separators.
It makes easy "getting" of the magnitude almost impossible.

Larry__Weiss

unread,
Jun 27, 2002, 3:28:55 PM6/27/02
to
Stephen Fuld wrote:
> "Larry__Weiss" <l...@airmail.net> wrote in message
> > Edward Reid wrote:
> > > I've seen programs written in 1000 lines of Algol that I could easily
> > > write in 100 lines of COBOL. It depends far more on the programmer than
> > > on the language. The language matters, but much less so than the
> > > programmer.
> > >
> > At least COBOL had a concept of currency amounts built-in.
> > I've been surprised at how few computer languages have a monetary
> > currency data type.
> >
> Or even the less difficult and far more generally applicable facility of
> printing numbers with digit separators every three digits! I get so ticked
> off seeing large numbers printed as a string of digits without separators.
> It makes easy "getting" of the magnitude almost impossible.
>

Does COBOL have that built-in support of amount formatting with digit
clustering?

- LarryW

Bob Rahe

unread,
Jun 27, 2002, 3:40:06 PM6/27/02
to
In article <3D1B2ABC...@yahoo.com>,

CBFalconer <cbfal...@worldnet.att.net> wrote:
>Edward Reid wrote:
>>
>... snip ...
>>
>> Of course, the generally acknowledged champion for minimum LOC is APL.
>> In APL, every program can be written in one line, information density
>> approaches infinity, and programs are impenetrable.

>Wasn't APL the original one-way algorithm, that inspired
>Rivest-Adleman and sired public key cryptography :-)

Possibly, but it was the first write-only language.... 8-))

--
----------------------------------------------------------------------------
|Bob Rahe, Delaware Tech&Comm Coll. / |
|Computer Center, Dover, Delaware / |
|Internet: b...@dtcc.edu (RWR50) / |
----------------------------------------------------------------------------

Bob Rahe

unread,
Jun 27, 2002, 3:39:22 PM6/27/02
to
In article <deJS8.56823$LC3.4...@bgtnsc04-news.ops.worldnet.att.net>,
Stephen Fuld <s.f...@PleaseRemove.att.net> wrote:
...

>Or even the less difficult and far more generally applicable facility of
>printing numbers with digit separators every three digits! I get so ticked
>off seeing large numbers printed as a string of digits without separators.
>It makes easy "getting" of the magnitude almost impossible.

Interestingly, B series and A series do a lot of that in hardware. And
there is support for accessing the code in Algol - pictures etc.

Pete Fenelon

unread,
Jun 27, 2002, 3:46:21 PM6/27/02
to
In alt.folklore.computers Larry__Weiss <l...@airmail.net> wrote:
>
> Can Pascal be changed to become a better language the way that C is
> changed every decade or so? Or is any "Pascal" evolution just left
> to the individual implementors at this point in time?

C peaked with ANSI C in the late 80s. Pascal's evolved in lots of
directions. Delphi's a popular visual programming environment. I
used Modula-2 quite a lot for fairly serious programming tasks.
I've used Ada extensively - good software engineering language, but
verbose and the tools were, until GNAT, clunky. Oberon's an interesting
minimalist/systems programming language.

Richard Steiner

unread,
Jun 27, 2002, 3:09:34 PM6/27/02
to
Here in comp.sys.unisys,
Randall Bart <Bart...@att.spam.net> spake unto us, saying:

>'Twas Thu, 27 Jun 2002 9:13:34 -0400 when all comp.sys.unisys stood in awe
>as Edward Reid <edwar...@spamcop.net> uttered:
>
>>I've seen programs written in 1000 lines of Algol that I could easily
>>write in 100 lines of COBOL. It depends far more on the programmer than
>>on the language. The language matters, but much less so than the
>>programmer.
>
>A determined programmer can write bad Fortran in any language.

Sometimes it's difficult to maintain those low standards no matter how
hard one tries -- I've seen macro languages that have nothing at all
similar to an arithmetic IF or even a GOTO statement. :-)

--
-Rich Steiner >>>---> http://www.visi.com/~rsteiner >>>---> Eden Prairie, MN
OS/2 + BeOS + Linux + Win95 + DOS + PC/GEOS = PC Hobbyist Heaven! :-)
Applications analyst/designer/developer (13 yrs) seeking employment.
See web site in my signature for current resume and background.

Rupert Pigott

unread,
Jun 27, 2002, 4:00:26 PM6/27/02
to
"Larry__Weiss" <l...@airmail.net> wrote in message
news:429574481B5FE425.9337A7BA...@lp.airnews.net...

I liked the changes from K&R to C89 (in the most part), but I
thought that they didn't really go far enough. The spec gives
implementors too much lee-way which only seems to suit the
vendors and not the users in the long run... So you get the
same lock-in problems as you would with vendors doing their own
thing - which they do anyways... Remember the introduction of
"long long" ? What a total crock that was. :(

Cheers,
Rupert


Stan Barr

unread,
Jun 27, 2002, 4:27:57 PM6/27/02
to


Modula (-2 or -3) - or Oberon...see http://www.oberon.ethz.ch/
especially the link to Bluebottle.

--
Cheers,
Stan Barr st...@dial.pipex.com

The future was never like this!

Andrew Williams

unread,
Jun 27, 2002, 4:25:37 PM6/27/02
to
Larry__Weiss wrote:

> Stephen Fuld wrote:
>
>
>
> Does COBOL have that built-in support of amount formatting with digit
> clustering?
>
> - LarryW

If I understand that correctly, yes it does. It works best with the US$
though.

--
opinions personal, facts suspect.
http://home.arcor.de/36bit/samba.html
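(An aside on the "works best with US$" point: grouping conventions
differ by country, but the clustering itself is easy to emulate. A
hedged Python sketch -- not COBOL, and deliberately done by hand rather
than via the locale module so it runs without any locale installed:)

```python
# Digit clustering for the continental-European convention 12.345,67:
# format with US-style grouping first, then swap the separator roles.
def group_european(amount):
    whole, frac = f"{amount:,.2f}".split(".")
    return whole.replace(",", ".") + "," + frac

print(group_european(12345.67))   # 12.345,67
print(group_european(1234567.8))  # 1.234.567,80
```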
