
Seeking CANDE and/or WFL reference materials...


Richard Steiner

Oct 2, 2002, 4:54:43 PM

Hello, folks...

Due to a strange set of circumstances, I've managed to land a contract
position in a shop using an A-series box (COBOL74, DMSII).

I've been a 2200 guy my whole career, but I've always been very curious
about the A-series, so this is pretty cool from my perspective. :-)

Anyway -- I spent a couple of hours yesterday afternoon with one of the
programmers there, and he showed me a number of interesting things,
including a few WFL files (WFL seems quite powerful at first glance)
and a few basic editing operations in CANDE.

Are there any references available on the net for either WFL or CANDE?

I've found one site here:

http://www.metalogic.eu.com/Main/docum/ref/cards.htm

that might be applicable, and I'm aware that Don Gregory's publishing
company sells manuals (the two that caught my eye right away are the
_Beginner's Guide to WFL_ and the _Complete CANDE Primer_), but that's
all I've found so far.

Does anyone have any opinions on those two books from www.gregpub.com?

I know the company I'll be working for has documentation CD-ROMs from
Unisys, but I don't know at this point what they contain -- it's quite
possible that all I need to know is resident on them, but I'm not sure.

This'll be fun -- now I'll finally get a chance to use this CANDE thing
I've heard so much about and directly compare it to the editors I know
like UEDIT/FSED/IPF in OS2200, EDT in VMS, EMACS/vim/FTE on the PC, etc.

--
-Rich Steiner >>>---> http://www.visi.com/~rsteiner >>>---> Eden Prairie, MN
OS/2 + BeOS + Linux + Win95 + DOS + PC/GEOS = PC Hobbyist Heaven! :-)
Applications analyst/designer/developer (13 yrs) seeking employment.
See web site in my signature for current resume and background.

Victor A. Garcia

Oct 2, 2002, 7:03:18 PM

Don Gregory's manuals are still the best out there.
The Unisys docs are mostly reference manuals, but you can get some really
good courses at Lombard, IL, or NorthCross, GA (Unisys training centers).
WFL is the most powerful scripting language available for any platform;
you'll enjoy working with it, but it will take you a while to master it.
CANDE is a lot more than an editor; talk about it with non-programmers, since
they use it a lot too, and the editing features are pretty decent.

Welcome to the other side (darker ???, brighter ???), keep us informed of
your experiences.

"Richard Steiner" <rste...@visi.com> wrote in message
news:T01m9oHp...@visi.com...

Randall Bart

Oct 2, 2002, 10:16:47 PM

'Twas Wed, 02 Oct 2002 15:54:43 -0500 when all comp.sys.unisys stood in awe
as rste...@visi.com (Richard Steiner) uttered:

>(WFL seems quite powerful at first glance)

It's even more powerful once you get to know it.
--
RB |\ © Randall Bart
aa |/ ad...@RandallBart.spam.com Bart...@att.spam.net
nr |\ Please reply without spam I LOVE YOU 1-917-715-0831
dt ||\ http://RandallBart.com/ DOT-HS-808-065 MS^7=6/28/107
a |/ "Believe nothing, no matter where you read it, or who
l |\ said it, no matter if I have said it, unless it agrees
l |/ with your own reason and your own common sense."--Buddha

Richard Steiner

Oct 3, 2002, 3:35:53 AM

Here in comp.sys.unisys,
Randall Bart <Bart...@att.spam.net> spake unto us, saying:

>'Twas Wed, 02 Oct 2002 15:54:43 -0500 when all comp.sys.unisys stood
>in awe as rste...@visi.com (Richard Steiner) uttered:
>
>>(WFL seems quite powerful at first glance)
>
>It's even more powerful once you get to know it.

Yes, I've found that to be true with other languages as well. It'll be
an interesting experience, and I'm looking forward to it.

Richard Steiner

Oct 3, 2002, 3:29:51 AM

Here in comp.sys.unisys,
"Victor A. Garcia" <vgar...@tampabay.rr.com> spake unto us, saying:

>Don Gregory Manuals, still are the best out there.

I've gotten similar opinions via e-mail, and I suspect I'll be sending
an order form his way very soon. :-)

>WFL is the most powerful scripting language available for any platform,
>you'll enjoy working with it, but it will take you a while to master it.

I've been learning perl for the past couple of months (an interesting
scripting language in its own right), and I'm also familiar with a
number of other scripting languages including CALL under OS2200 and
REXX for OS/2, so it'll be quite interesting to see how WFL compares.

>CANDE is a lot more than an editor, talk about it with non-programmers,
>they use it a lot too, the editing features are pretty decent.

Interesting. I'll keep that in mind.

>Welcome to the other side (darker ???, brighter ???), keep us informed
>of your experiences.

Thanks, and I certainly will. :-)

Denny Brouse

Oct 3, 2002, 3:14:05 AM

Richard,

When we converted from the Unisys V-series to the NX-5600, I found this
"WFL Made Simple" book very handy to get me started. Nothing too deep in
it, but it answered many of my questions.

http://public.support.unisys.com/aseries/docs/ClearPath-MCP-7.0-SSP2/PDF/88077391-003.pdf

Hope this helps,

Denny

"Richard Steiner" <rste...@visi.com> wrote in message
news:T01m9oHp...@visi.com...

Don Payette

Oct 3, 2002, 2:05:19 PM

And a pitch for my product, Programmer's Workbench (aka NXEdit).
It's a Windows development environment for A Series programmers.
It comes free with the MCP.

Documentation is in the Programmer's Workbench Installation and
Operations guide, and in the PC client online help.

rste...@visi.com (Richard Steiner) wrote:


-----------
Don Payette
Unisys Corporation
I speak only for myself; not my employer
Please reply in the newsgroup. Don't try
sending e-mail.

Tim McCaffrey

Oct 3, 2002, 4:12:35 PM

In article <T01m9oHp...@visi.com>, rste...@visi.com says...
>
>Hello, folks...
>

>Are there any references available on the net for either WFL or CANDE?
>

When you get there, CANDE has a HELP command that is pretty useful.

The documentation CD-ROM has every reference manual you could ever want
(for the A series); it does not, however, have tutorials.

(Yeah, I know we don't call it A series anymore, but the sentence gets
awkward saying "Clearpath NX/LX/CS and Libra").

- Tim

Bryan Souster

Oct 4, 2002, 1:39:40 PM

"Tim McCaffrey" <t...@spamfilter.asns.tr.unisys.com> wrote in message
news:ani8bj$4rd$1...@trsvr.tr.unisys.com...
[snip]

>
> (Yeah, I know we don't call it A series anymore, but the sentence gets
> awkward saying "Clearpath NX/LX/CS and Libra").

I just call them 'MCP Servers'. Accurate enough and uses a current buzzword
as well as referencing the MCP ;-)

Bryan.


Richard Steiner

Oct 4, 2002, 3:09:26 PM

Here in comp.sys.unisys,
Don Payette <Nob...@nowhere.com> spake unto us, saying:

>And a pitch for my product, Programmer's Workbench (aka NXEdit).
>It's a Windows development environment for A Series programmers.
>It comes free with the MCP.

I'm not sure what all they're using, but I'll keep it in mind (and will
probably take a look at it myself if they'll let me).

>Documentation is in the Programmer's Workbench Installation and
>Operations guide, and in the PC client online help.

Noted, and thanks!

--
-Rich Steiner >>>---> http://www.visi.com/~rsteiner >>>---> Eden Prairie, MN
OS/2 + BeOS + Linux + Win95 + DOS + PC/GEOS = PC Hobbyist Heaven! :-)

Now running in text mode on a PPro/200. Eat my dust, GUI freaks!
The Theorem Theorem: If If, Then Then.

Richard Steiner

Oct 4, 2002, 3:08:00 PM

Here in comp.sys.unisys,
"Denny Brouse" <den...@prodigy.net> spake unto us, saying:

>When we converted from the Unisys V-series to the NX-5600, I found this
>"WFL Made Simple" book very handy to get me started. Nothing too deep in
>it, but it answered many of my questions.

Hey... Nice link! Thanks...

I notice that Unisys is making lots of stuff available on their web
site, which is good -- it puts some useful materials within easy reach.

>Hope this helps,

Yes, it will. Thank you.

--
-Rich Steiner >>>---> http://www.visi.com/~rsteiner >>>---> Eden Prairie, MN
OS/2 + BeOS + Linux + Win95 + DOS + PC/GEOS = PC Hobbyist Heaven! :-)

Richard Steiner

Oct 4, 2002, 3:10:39 PM

Here in comp.sys.unisys,
t...@spamfilter.asns.tr.unisys.com (Tim McCaffrey) spake unto us, saying:

>When you get there, CANDE has a HELP command that is pretty useful.

I suspect I'll be making liberal use of that command for a while. :-)

>The documentation CD-ROM has every reference manual you could ever want
>(for the A series), it does not, however, have tutorials.

No matter -- I bought a few books from gregpub.com, so those should
help.

>(Yeah, I know we don't call it A series anymore, but the sentence gets
>awkward saying "Clearpath NX/LX/CS and Libra").

I still say 2200 instead of Clearpath IX. :-)

--
-Rich Steiner >>>---> http://www.visi.com/~rsteiner >>>---> Eden Prairie, MN
OS/2 + BeOS + Linux + Win95 + DOS + PC/GEOS = PC Hobbyist Heaven! :-)

Greig Blanchett

Oct 4, 2002, 6:00:04 PM

On Wed, 02 Oct 2002 23:03:18 GMT, "Victor A. Garcia"
<vgar...@tampabay.rr.com> wrote:

>Don Gregory Manuals, still are the best out there.
>The Unisys doc's, are mostly reference manuals, but you can get some really
>good courses at Lombard ILL, or NorthCross GA, (Unisys training centers).
>WFL is the most powerful scripting language available for any platform,
>you'll enjoy working with it, but it will take you a while to master it.

Have to disagree here. Unix shell (esp. Korn shell) scripting walks
all over WFL, unless you can write your own ALGOL routines to open
files, mess about with their contents and other such useful stuff. And
if Perl is classed as a scripting language, then it wins hands down.
One day somebody somewhere will port Perl to A Series and become an
instant legend, but I'm not holding my breath ....


>CANDE is a lot more than an editor, talk about it with non-programmers, they
>use it a lot too, the editing features are pretty decent.

I've never really got into EMACS, which is considered by many Unix
folk to be better than vi, but irrespective, vi (and its derivatives)
are better than CANDE.

>
[...]

Richard Steiner

Oct 4, 2002, 8:33:43 PM

Here in comp.sys.unisys,
Greig Blanchett <gre...@nzrfu.com> spake unto us, saying:

>On Wed, 02 Oct 2002 23:03:18 GMT, "Victor A. Garcia"
><vgar...@tampabay.rr.com> wrote:
>
>>WFL is the most powerful scripting language available for any platform,
>>you'll enjoy working with it, but it will take you a while to master it.
>
>Have to disagree here. Unix shell (esp. Korn shell) scripting walks
>all over WFL, unless you can write your own ALGOL routines to open
>files, mess about with their contents and other such useful stuff.

I can even do that with a limited scripting language like CALL (the one
used to write such tools as UEDIT, CSHELL, FINDREF, and other things on
the 2200 and Clearpath IX boxes).

>And if Perl is classed as a scripting language, then it wins hands
>down.

Why wouldn't Perl be considered a scripting language?

>I've never really got into EMACS, which is considered by many Unix
>folk to be better than vi, but irrespective, vi (and its derivatives)
>are better than CANDE.

The subject of editors and their relative merits is fertile ground for
holy wars. :-) Every user has biases, some users more than others.

I personally find the "vim" variant of vi to be a decent editor, and I
recognize its power, but I dislike its approach when it is compared to
editors like FTE (which follow a more PC-oriented interface standard).

--
-Rich Steiner >>>---> http://www.visi.com/~rsteiner >>>---> Eden Prairie, MN
OS/2 + BeOS + Linux + Win95 + DOS + PC/GEOS = PC Hobbyist Heaven! :-)

Ken Grubb

Oct 5, 2002, 2:28:20 AM

Richard Steiner wrote:

>>(Yeah, I know we don't call it A series anymore, but the sentence gets
>>awkward saying "Clearpath NX/LX/CS and Libra").
>
>I still say 2200 instead of Clearpath IX. :-)

I still say 1100 instead of 2200. And I know folks who were still
saying Sperry up until a few years ago. Any UNIVAC-folk still walking
about?
;^>

Ken Grubb
Burlington, NC

Louis Krupp

Oct 5, 2002, 2:52:07 AM

Greig Blanchett wrote:

> On Wed, 02 Oct 2002 23:03:18 GMT, "Victor A. Garcia"
> <vgar...@tampabay.rr.com> wrote:

<snip>

>>WFL is the most powerful scripting language available for any platform,
>>you'll enjoy working with it, but it will take you a while to master it.
>>
>
> Have to disagree here. Unix shell (esp. Korn shell) scripting walks
> all over WFL, unless you can write your own ALGOL routines to open
> files, mess about with their contents and other such useful stuff. And
> if Perl is classed as a scripting language, then it wins hands down.


It's been at least ten years since I've written any WFL (I taught a
day-and-a-half class on "old" (2.8?) WFL at the Bureau of Mines
in Amarillo in March of 1977, for what that's worth), but in my
experience it's hard to compare WFL to UNIX shells.

WFL is really good at doing things like "run programs A and B in
parallel and if program A fails run program C."
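
For anyone coming from other systems, that control flow sketches out
roughly like this in Python terms (just an analogy, not WFL syntax, and
the program names are placeholders):

    import subprocess

    # Start A and B in parallel; WFL expresses this with its own task
    # syntax, this only shows the shape of the control flow.
    a = subprocess.Popen(["./program_a"])
    b = subprocess.Popen(["./program_b"])

    # If program A fails, run program C.
    if a.wait() != 0:
        subprocess.run(["./program_c"])

    b.wait()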

WFL is elegant and readable. The Korn shell and its friends (sh,
bash, etc) are a bit more obscure. So is Perl, typically.

WFL wasn't a data manipulation language when I used it. And it
didn't do pipes. UNIX has a certain ... completeness.


> One day somebody somewhere will port Perl to A Series and become an
> instant legend, but I'm not holding my breath ....


Sounds like a challenge. I understand there's a C compiler for
the A Series (or whatever it's called now). Anyone have any
experience with it?


>>CANDE is a lot more than an editor, talk about it with non-programmers, they
>>use it a lot too, the editing features are pretty decent.
>>
>
> I've never really got into EMACS, which is considered by many Unix
> folk to be better than vi, but irrespective, vi (and its derivatives)
> are better than CANDE.


CANDE is (or at least was) a combination line editor and interactive
shell. My recollection is that it was a good editor, as line editors
go, but it could have done with some of WFL's shell features. (Yes,
I know you can run WFL jobs from CANDE.)

A lot of this may have changed in the last ten years or so.

I miss ALGOL, and DCALGOL...

Louis Krupp

--

Remove .NOSPAMPLEASE and .invalid to reply.


Edward Reid

Oct 5, 2002, 9:28:37 PM

I'm a bit late jumping in, but that's never stopped me before ... BTW,
some of us still refer to this system as a B6700 ;-).

On Wed, 2 Oct 2002 16:54:43 -0400, Richard Steiner wrote


> Anyway -- I spent a couple of hours yesterday afternoon with one of the
> programmers there, and he showed me a number of interesting things,
> including a few WFL files (WFL seems quite powerful at first glance)

As already pointed out, this is true, as long as you remember that it
is a task control language and does not pretend to be a full
programming language -- in particular it lacks minor things like I/O
and arrays.

> and a few basic editing operations in CANDE.

Very basic. Since you're familiar with other editors, you REALLY want
to use SYSTEM/EDITOR or NX/Edit (aka Programmer's Workbench, or PW, in
newspeak). CANDE's abilities, though good, are basic by today's
standards and for the most part are *not* being upgraded. The ongoing
effort is in PW and EDITOR. At many sites, you find people who know
CANDE and question why you want to use anything else; try not to let
them stop you from using the better tools.

EDITOR is more complete at this point and runs entirely on the host,
needing only a terminal emulator, though it can handle terminals of
various sizes, anything you can fit on a screen (I often use an
88-line, 96-column terminal). Its greatest flaw is that it's a
separately charged product and Burroughs/Unisys marketing never had a
clue how to sell it, so it's criminally underused and you can't count
on finding it installed, especially at the sites that need it most.
EDITOR has been used in the software plants since the late 1970s and
has the power you expect from programmers who eat their own dog food.
Just make sure you know which key is SPCFY on your emulator! It isn't
marked any longer ...

Don Payette mentioned PW; I'll add a plug from an outsider. PW is a
client/server system, though the editor can run entirely offline and
still understands MCP concepts when it does. PW provides a PC-style
editing environment while still being integrated with all the MCP
facilities. It's much newer than EDITOR and as a result does not yet
have many powerful facilities in EDITOR (especially macros) but it's
catching up. When I was using PW a year and a half ago, Don's group
quickly fixed problems I found. (I tend to be known for exploring every
corner of a product, and thus I report a lot of problems that most
people never see.) And as Don mentioned, PW is bundled, so generally
all you need to do is get the system admin to install the server. You
can download and install the client on your workstation if necessary.
Depending on the site's configuration and practices, you might need
permission to install, and you might need a firewall adjusted to allow
the port that PW uses.

With either EDITOR or PW, make darned sure you learn how to use XREF
files (cross-reference files generated by the compiler) with the
editor. You lose some of the most powerful features if you don't. I've
heard people diss one or the other and then discovered that they hadn't
even heard of using the XREF!!! Kind of like using WFL but not knowing
any statements except RUN. These *are* programmers' development tools,
not end user tools, so don't expect either one to be totally
point-and-click.

> Are there any references available on the net for either WFL or CANDE?

http://public.support.unisys.com/os1/txt/web-verity?type=list

has all Unisys docs (even 2200 docs), and you don't need any login. But
get a copy of the CD if you possibly can. Generally Unisys doesn't mind
if you copy the CD.

The "WFL Made Simple" document is an excellent place to start. If
you're experienced at reading reference manuals -- as I suspect you are
-- the ones for WFL, CANDE, and EDITOR will serve you pretty well.
EDITOR docs are mostly also available online (use ]HELP), and most PW
docs are only in the help file installed with the PC application.

Since I don't recommend using the CANDE editor, you don't really need
to learn more than the basics of CANDE; you can look up specific
commands when you need them.

> This'll be fun -- now I'll finally get a chance to use this CANDE thing
> I've heard so much about and directly compare it to the editors I know
> like UEDIT/FSED/IPF in OS2200, EDT in VMS, EMACS/vim/FTE on the PC, etc.

I'll certainly be interested in hearing the comparisons!

Edward Reid


Richard Steiner

Oct 6, 2002, 2:31:24 AM

Here in comp.sys.unisys,
Edward Reid <edwar...@spamcop.net> spake unto us, saying:

>I'm a bit late jumping in, but that's never stopped me before ...

Hey, better late than never. :-)

>> Anyway -- I spent a couple of hours yesterday afternoon with one of the
>> programmers there, and he showed me a number of interesting things,
>> including a few WFL files (WFL seems quite powerful at first glance)
>
>As already pointed out, this is true, as long as you remember that it
>is a task control language and does not pretend to be a full
>programming language -- in particular it lacks minor things like I/O
>and arrays.

Those "minor things" would weaken it in comparison to other languages,
at least if it were ever used as an application prototyping or utility
development language (as perl, CALL, and REXX often are).

However, it might have other strengths, and we *are* talking about a
task/job control language after all.

I won't be in a position to come to any meaningful conclusions for a
while, but I know already that it's better than ECL in terms of basic
control structures (which doesn't say much, as ECL has almost none that
I know off the top of my head except @SETC/@TEST and @JUMP). That's why
I used to write my more complex 2200 runstreams either as SymStream
(SSG) or CALL files. Or CSHELL aliases.

>> and a few basic editing operations in CANDE.
>
>Very basic. Since you're familiar with other editors, you REALLY want
>to use SYSTEM/EDITOR or NX/Edit (aka Programmer's Workbench, or PW, in
>newspeak).

Hmmm. Noted.

>CANDE's abilities, though good, are basic by today's standards and for
>the most part are *not* being upgraded. The ongoing effort is in PW and
>EDITOR. At many sites, you find people who know CANDE and question why
>you want to use anything else; try not to let them stop you from using
>the better tools.

Well, I'm the one who maintained and modified UEDIT (a commonly used but
"unofficial" fullscreen text editor) on the 2200's at NWA when some of
the folks there were still using and advocating line-oriented things like
CTS or ED (or FSED, a decent fullscreen editor in its own right).

No need to worry about me limiting myself to standard tools unless the
client requests that I do so. If I find out that an interesting tool
exists (an editor or anything else, really), I'll evaluate it.

I'm a utility junkie at heart on *all* of the platforms I use! :-)

>EDITOR is more complete at this point and runs entirely on the host,
>needing only a terminal emulator, though it can handle terminals of
>various sizes, anything you can fit on a screen (I often use an
>88-line, 96-column terminal). Its greatest flaw is that it's a
>separately charged product and Burroughs/Unisys marketing never had a
>clue how to sell it, so it's criminally underused and you can't count
>on finding it installed, especially at the sites that need it most.

Interesting. Sounds like a rough analog of IPF on the 2200 side, at
least in some ways. Except for the fact that IPF has been surpassed
in many ways (not all) by third-party editors.

>Don Payette mentioned PW; I'll add a plug from an outsider. PW is a
>client/server system, though the editor can run entirely offline and
>still understands MCP concepts when it does. PW provides a PC-style
>editing environment while still being integrated with all the MCP
>facilities.

An interesting concept.

I'm not a big fan of Windows-based text editing environments as a whole
(most of the Windows editors I've used are admittedly weak examples, but
they've left a bad impression), but I'll see if they know what PW is.

>And as Don mentioned, PW is bundled, so generally all you need to do
>is get the system admin to install the server.

I'll have to get a feel for the overall atmosphere at the client site
first, but I'll keep this in mind.

>With either EDITOR or PW, make darned sure you learn how to use XREF
>files (cross-reference files generated by the compiler) with the
>editor. You lose some of the most powerful features if you don't.

I assume this is similar to using ctags in a Unix environment (which I
am aware of but only passingly familiar with)?

Does the A-series environment have a multi-source-file parsing and cross-
reference searching/reporting tool analogous to CULL\IACULL\FINDREF? Or
the "cscope" tool on Solaris?

>These *are* programmers' development tools, not end user tools, so don't
>expect either one to be totally point-and-click.

I don't expect that, but I'm sure I'll figure them out. :-)

> Are there any references available on the net for either WFL or CANDE?
>
>http://public.support.unisys.com/os1/txt/web-verity?type=list
>
>has all Unisys docs (even 2200 docs), and you don't need any login. But
>get a copy of the CD if you possibly can. Generally Unisys doesn't mind
>if you copy the CD.

I suspect I'll have one of these at my disposal.

>The "WFL Made Simple" document is an excellent place to start. If
>you're experienced at reading reference manuals -- as I suspect you are
>-- the ones for WFL, CANDE, and EDITOR will serve you pretty well.

Excellent.

>Since I don't recommend using the CANDE editor, you don't really need
>to learn more than the basics of CANDE; you can look up specific
>commands when you need them.

This depends (to some extent) on whether or not EDITOR is available at
all, and on how willing the client is to accommodate PW. I should have
a much better feel for those two things by the end of the week.

>> This'll be fun -- now I'll finally get a chance to use this CANDE thing
>> I've heard so much about and directly compare it to the editors I know
>> like UEDIT/FSED/IPF in OS2200, EDT in VMS, EMACS/vim/FTE on the PC, etc.
>
>I'll certainly be interested in hearing the comparisons!

It's sometimes hard to compare text editors in different environments,
and even in the same environment the definitions of "good" and "bad"
editor elements can be extremely subjective. Opinions may vary. :-)

We'll see. If you start hearing whining from my quarter, you'll know
I'm not as happy with the tools I've found as I could be... ;-)

--
-Rich Steiner >>>---> http://www.visi.com/~rsteiner >>>---> Eden Prairie, MN
OS/2 + BeOS + Linux + Win95 + DOS + PC/GEOS = PC Hobbyist Heaven! :-)

Randall Bart

Oct 6, 2002, 11:16:58 AM

'Twas Sat, 5 Oct 2002 21:28:37 -0400 when all comp.sys.unisys stood in awe
as Edward Reid <edwar...@spamcop.net> uttered:

>I'm a bit late jumping in, but that's never stopped me before ... BTW,
>some of us still refer to this system as a B6700 ;-).

Actually, it was originally the B6500, but only a few dozen shipped before
marketing fell in love with the number 7.

>As already pointed out, this is true, as long as you remember that it
>is a task control language and does not pretend to be a full
>programming language -- in particular it lacks minor things like I/O
>and arrays.

I don't know why it doesn't. I recall some managers in the Languages
Department discussing how WFL might be the world's only language where you
can declare the file, open the file, change attributes of the file, and
close the file, but you can never read or write. It was never important
enough to any single user for that user to finance the project, so it
couldn't be done. I have two little programs, one to write a record to a
file, another to read a record.

For lack of arrays, I've seen some WFLs which parse a string of commands,
only to create a new string which needs to be parsed again. I was just
noting yesterday a WFL which creates a string of messages which it needs to
parse so it can display (and redisplay) them all at a critical time.

>EDITOR is more complete at this point and runs entirely on the host,
>needing only a terminal emulator, though it can handle terminals of
>various sizes, anything you can fit on a screen (I often use an
>88-line, 96-column terminal).

It's too bad that MARC and many other programs assume you have 80 columns.
CANDE handles it, but you have to invoke the TERM command. It would be very
nice if COMS had a feature to remap screens. To make use of wide screens in
EDITOR, I find I need to have two terminal emulators running on my PC.

>Its greatest flaw is that it's a
>separately charged product and Burroughs/Unisys marketing never had a
>clue how to sell it, so it's criminally underused and you can't count
>on finding it installed, especially at the sites that need it most.

The programmers and engineers at Burroughs/Unisys have always had great
respect and admiration for the superb marketing department. B^) I've been
at too many A Series shops that don't have EDITOR. They don't use it
because they haven't paid for it. They don't want to pay for it, because
they don't use it. I'm going to try to get it into our budget, but it's
hard to explain to someone who isn't using CANDE why we need it.

Edward Reid

Oct 6, 2002, 11:48:07 AM

On Sun, 6 Oct 2002 2:31:24 -0400, Richard Steiner wrote

> Well, I'm the one who maintained and modified UEDIT (a commonly used but
> "unofficial" fullscreen text editor) on the 2200's at NWA when some of
> the folks there were still using and advocating line-oriented things like
> CTS or ED (or FSED, a decent fullscreen editor in its own right).

It's interesting that no third party editor has ever taken off on MCP
systems. This is a little surprising, since writing such a thing would
be easier in that environment than in many others. The only other
commonly used editors I know of are for LINC, and of course those are
from Unisys too. I guess that SYSTEM/EDITOR was always just good enough
to put off the utility junkies from writing their own, but it's only a
guess.

There are people who will tell you that CANDE page mode is a full
screen editor. It isn't; it's a kludge that allows you to do a full
page of single-line edits at once. I'm not going to pontificate over
the distinction now. The most obvious is the way the two handle
unnumbered line. Line numbers are very important in the A-Series
environment; they are used universally to identify a specific line.
You'll see this in all the editors, and when you generate a patch file,
it's merged with the main source by line number. (Columns 1-6 in COBOL,
1-5 in BASIC, 73-80 in Algol and Fortran and some other languages,
etc.) CANDE cannot tolerate unnumbered lines, so when you send a page
mode transmission with unnumbered (added) lines, CANDE immediately
numbers them by interpolating between adjacent lines. Not only does
this impose requirements for the presence of numbered lines in the same
transmission, it means that if you are repeatedly editing a small
section of code, it quickly runs out of numbers to interpolate -- and
CANDE cannot renumber other lines to compensate. By contrast, EDITOR
and PW allow added lines to remain unnumbered until the end of a
session and number them then, getting the most out of the range
available. Furthermore, if they run out of space (either totally or
because the increment is below a user-specified minimum), they will
offer to renumber lines to make more space.
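
To see how quickly that interpolation runs out of room, here's a toy
sketch in Python (assuming simple midpoint numbering between the two
neighboring sequence numbers, which is only an approximation of what
CANDE actually does):

    def interpolate(prev, nxt):
        """Midpoint sequence number between two existing lines."""
        mid = (prev + nxt) // 2
        return mid if prev < mid < nxt else None   # None: no room left

    prev, nxt = 1000, 1100            # two adjacent existing lines
    inserted = []
    while True:
        n = interpolate(prev, nxt)
        if n is None:
            break                     # CANDE is stuck; EDITOR/PW renumber
        inserted.append(n)
        prev = n                      # keep inserting below the same neighbor
    print(inserted)                   # [1050, 1075, 1087, 1093, 1096, 1098, 1099]

Only seven insertions fit before the numbers are exhausted.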

OK, so I pontificated a little.

> I'm not a big fan of Windows-based text editing environments as a whole

I agree. From the techies' point of view, the main advantage of putting
it in the Windows environment is that it will get a lot more internal
support, and lot more client site usage, and thus more development
funding. It'll be a better editor because of this support, though in
spite of the Windows environment.

The PW environment also decreases the amount of interaction between the
host and the workstation. However, when you're on a LAN, this hardly
matters. It also doesn't matter most of the time with a fast remote
connection, except on those days when there's an Internet bottleneck.
Communication speeds have diminished the importance of this aspect of
PW.

>> With either EDITOR or PW, make darned sure you learn how to use XREF
>> files (cross-reference files generated by the compiler) with the
>> editor. You lose some of the most powerful features if you don't.
>
> I assume this is similar to using ctags in a Unix environment (which I
> am aware of but only passingly familiar with)?

I'm not familiar with ctags at all (or with most things Unix). The XREF
is a complete identifier cross-reference. It's generated by the
compiler, and so includes references within macro expansions (DEFINEs
in Algol), is sensitive to conditional compilation, includes references
in external ("INCLUDE" in Algol, COPY in COBOL) files, etc. It's also
very fast in EDITOR because it's walking an actual list of references,
not doing a textual find. On modern systems finding text is so fast
that this advantage has mostly disappeared except when you're working
with something 10,000 lines or more, but there's a lot of software in
that category. Working with the MCP, at almost 1.5 million lines, is
almost impossible without XREF.

> Does the A-series environment have a multi-source-file parsing and cross-
> reference searching/reporting tool analogous to CULL\IACULL\FINDREF? Or
> the "cscope" tool on Solaris?

Yes, it's called the "compiler" ;-).

I'm not familiar with the tools you mention, but in general the
A-Series is weak on multi-source-file processing except by the
compilers. If you need a fast multi-file FIND capability, I know
someone who has one he'd probably sell. Be cautious of trying to
jury-rig a CANDE solution. At times I've used a combination of tools to
generate a CANDE DO file (batch file) to do FINDs on a lot of files.
The problem is that this can bring the largest system to its knees
(especially a single-CPU system) because CANDE runs at super-priority,
as befits software that's intended for interactive use. Thus regular
user software is better for this task.

However, saying the compiler is the tool is not just a joke, because
all the compilers generate complete cross-reference files, representing
every identifier encountered, sometimes including things you don't even
think of as identifiers, and combining all files used in the
compilation. I know that the Unisys
people take this seriously; every time I've filed a trouble report
saying that a compiler fails to XREF something, it's been fixed with no
argument.

> It's sometimes hard to compare text editors in different environments,
> and even in the same environment the definitions of "good" and "bad"
> editor elements can be extremely subjective. Opinions may vary. :-)

Yup. That's why an informed comparison is worth more than a simple
judgement or opinion.

> We'll see. If you start hearing whining from my quarter, you'll know
> I'm not as happy with the tools I've found as I could be... ;-)

Good luck ;-).

Edward


Richard Steiner

Oct 6, 2002, 9:40:44 PM

Here in comp.sys.unisys,
Edward Reid <edwar...@spamcop.net> spake unto us, saying:

>There are people who will tell you that CANDE page mode is a full
>screen editor. It isn't; it's a kludge that allows you to do a full
>page of single-line edits at once. I'm not going to pontificate over
>the distinction now. The most obvious is the way the two handle
>unnumbered line. Line numbers are very important in the A-Series
>environment; they are used universally to identify a specific line.

Yes, I noticed that the CANDE display I was seeing had line numbers on
the left, which sort of reminded me of DCF or ISPF on the IBM side or
IPF on the 2200. I guess I assumed he was simply using it in a mode
which had the line numbers displayed.

A dependency on having actual line numbers saved in the file itself is
rather interesting... Sounds like CTS, if I remember correctly...?

>> Does the A-series environment have a multi-source-file parsing and cross-
>> reference searching/reporting tool analogous to CULL\IACULL\FINDREF? Or
>> the "cscope" tool on Solaris?
>
>Yes, it's called the "compiler" ;-).
>
>I'm not familiar with the tools you mention, but in general the
>A-Series is weak on multi-source-file processing except by the
>compilers.

On the 2200, CULL is a utility now maintained (I think) by Teamquest
that you feed a list of source files, and it proceeds to spin through
the file(s) you specified and build a nice indexed dictionary file and
a list of pointers into the original source code.

Depending on the specific options you use when building the cull file,
CULL may or may not apply language-specific rules when identifying the
set of indexable tokens in each text file (usually by doing things like
skipping keywords specific to a given language like "IF", etc.).

IACULL (which I assume stands for InterActive CULL) is the tool that a
programmer would use to interactively process the cull file created by
CULL and search/display a list of those source files which contain one
or more search tags and the line offsets of the hits.

It also allows the programmer to bounce back and forth between the tag
summary screen(s) and the original source file to see each occurrence
in context, PUSH the current search environment if one wants to go off
on an unrelated search tangent, POP back to the original search context
when done, etc.

FINDREF is a fancy front-end to IACULL which allows for more powerful
searches than one can do with IACULL alone. I have a picture of it on
my web site (upper right terminal window in this desktop snapshot):

http://www.visi.com/~rsteiner/desktops/macdesk.gif

Of course, my version of FINDREF was rather nonstandard, as I added my
own functions to it during the time I was maintaining it.

A tool such as CULL and IACULL is useful if you have several thousand
source files spread out among a number of different directories and you
want to quickly determine which files reference which library routines,
etc., without having to perform a brute-force search.

FINDREF was a third-party add-on, but made it much easier to perform a
fancy multi-pass search and save the end results for later use.
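
To give a flavor of the idea for anyone who hasn't seen such a tool,
here's a toy sketch in Python of the kind of inverted index CULL
builds (only an illustration -- the token rules and keyword skip list
are made up, not anything from the real CULL):

    import re
    from collections import defaultdict

    SKIP = {"IF", "ELSE", "THEN", "MOVE", "PERFORM"}   # per-language skip list

    def build_index(paths):
        """Map each token to the (file, line number) pairs where it appears."""
        index = defaultdict(list)
        for path in paths:
            with open(path, errors="replace") as f:
                for lineno, line in enumerate(f, start=1):
                    for token in re.findall(r"[A-Za-z][A-Za-z0-9-]*", line):
                        if token.upper() not in SKIP:
                            index[token.upper()].append((path, lineno))
        return index

    # An IACULL-style lookup is then just a dictionary probe:
    #   build_index(my_files)["MY-LIBRARY-ROUTINE"] -> list of (file, line) hits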

>If you need a fast multi-file FIND capability, I know someone who has
>one he'd probably sell.

It'd be easy enough to write one. All I'd need is time.

To be honest, though, I'd probably implement something like CULL, since
brute-force searches are typically quite inefficient.

>However, saying the compiler is the tool is not just a joke, because of
>the fact that the compiler (all the compilers) generate(s) the complete
>cross-reference files, representing every identifier encountered,

I understand. We took the cross-reference listings generated by the
various compilers quite seriously on the 2200 side as well.

>> It's sometimes hard to compare text editors in different environments,
>> and even in the same environment the definitions of "good" and "bad"
>> editor elements can be extremely subjective. Opinions may vary. :-)
>
>Yup. That's why an informed comparison is worth more than a simple
>judgement or opinion.

True. Also, if one states one's biases up front, that helps to define
the context in which the observations/comparisons are made.

I'm coming into the A-series environment from a mixed background, I'd
say mostly OS2200, OS/2, PC/MS/DR-DOS, VMS, and Solaris/Linux, so my
preferences are probably a bit weird (and vary depending on context).

>> We'll see. If you start hearing whining from my quarter, you'll know
>> I'm not as happy with the tools I've found as I could be... ;-)
>
>Good luck ;-).

Thanks. :-)

Edward Reid

Oct 6, 2002, 11:14:09 PM

On Sun, 6 Oct 2002 11:16:58 -0400, Randall Bart wrote

> It's too bad that MARC and many other programs assume you have 80 columns.
> CANDE handles it, but you have to invoke the TERM command.

It's also too bad the Unisys TELNET server doesn't do the screen size
negotiation that's part of the telnet protocol. It doesn't even handle
an explicit ?ATTR command that changes attributes. The Upstanding
Systems FasTERM emulator and server are much better in this respect,
just because you can size the screen as you want it before making the
connection, and the size will be passed to the telnet server, to COMS,
and to CANDE automatically.

Since I normally don't have FasTERM available, I have a combination of
a STARTUP file and a small program that check the terminal name and,
when possible, guess at the screen size and set it. Kludgy.

However, one of the problems with CANDE page mode is that it doesn't
handle input from screens with more than 80 columns, at least not the
last time I looked. Never got around to UCF-ing it since I don't use
CANDE page mode when I can possibly avoid it.

> It would be very
> nice if COMS had a feature to remap screens. To make use of wide screens in
> EDITOR, I find I need to have two terminal emulators running on my PC.

It's not hard to code screens which at least work on other terminal
sizes even if they only use the upper left 24x80 -- and I'm talking
exactly the same data transmitted, no need to adjust for the screen
dimensions at all. MGS's SCREEN/GEN product (unpromoted), a screen
painter/designer, generates screens with this characteristic. The main
trick is always to start a new line with a cursor positioning sequence.

So it's too bad that COMS doesn't just have better coded screens.

MGS's ViewPoint/SightLine Reporter program uses this feature for all
its interactive screens. As a result, it does the interactive part in a
24x80 area no matter how large the actual screen is, but uses the full
screen for generating reports to the screen. (I wrote the Reporter
program. It takes its name from the ViewPoint/SightLine system and uses
the same data, but runs entirely on the host using a terminal and does
not use the VP/SL PC display and interface.)

Disclosure: MGS is at http://www.mgsinc.com. I've worked with them a
lot.

Edward Reid


Edward Reid

Oct 6, 2002, 11:47:53 PM

On Sun, 6 Oct 2002 21:40:44 -0400, Richard Steiner wrote

> A dependency on having actual line numbers saved in the file itself is
> rather interesting... Sounds like CTS, if I remember correctly...?

I don't know CTS ... I know that people coming from other environments
often find the line number requirement bizarre. But having a permanent
identifier attached to each line makes a lot of things much simpler.
Think of it this way: an arbitrary but unique and persistent key to a
"virtually indexed" file. I find the contortions used to uniquely
identify lines on other systems painful.
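
A rough way to model the idea in Python terms (purely an illustration;
no MCP tool actually stores files this way, and the COBOL-ish lines are
made up):

    # A sequence-numbered file as a mapping: the number is a persistent
    # key for each line, not a position, so a change can be expressed
    # and applied purely in terms of those keys.
    source = {
        100: "PROCEDURE DIVISION.",
        200: "    PERFORM MAIN-LOOP.",
        300: "    STOP RUN.",
    }

    patch = {
        200: "    PERFORM MAIN-LOOP UNTIL ALL-DONE.",   # replace line 200
        250: "    PERFORM CLEAN-UP.",                   # insert a new line
    }

    source.update(patch)
    for seqno in sorted(source):
        print(seqno, source[seqno])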

> PUSH the current search environment if one wants to go off
> on an unrelated search tangent, POP back to the original search context
> when done, etc.

The EDITOR/PW XREF facilities do push (it's automatic) and pop, also
pick from a recent list, search for identifiers based on wildcards and
several attributes, and more.

> A tool such as CULL and IACULL is useful if you have several thousand
> source files spread out among a number of different directories and you
> want to quickly determine which files reference which library routines,
> etc., without having to perform a brute-force search.

No doubt. It's worth pointing out that the A-Series Way of Doing Things
tends to encourage fewer files than on many systems. Not for any good
reason -- the reasons I could give are mostly after-the-fact -- but
simply because We've Always Done It That Way.

Also, third party source control systems (there are at least two)
handle library references, and the Unisys ADDS product keeps a lot of
this kind of information in a database. Of course these are only
examples, and CULL could do many other things for which there are no
tools now.

Note that CULL could not replace XREF files unless it could handle
Algol DEFINE and COBOL REPLACE and COPY ... REPLACING. These cause
identifier references at places in the code where the identifier name
does not actually appear.

This is not to denigrate in any way the potential value of CULL on
A-Series -- I think it would be very useful and I can think of many
times when I could have made very good use of it.

> It'd be easy enough to write one. All I'd need is time.

You must have more of that commodity than I do. Of course, if you have
access to the original program, porting it would be a lot faster than
starting from scratch, even if you had to rewrite all the actual code.
AFAIK the only 3GL common to 2200 and A-Series is COBOL. If it were
written entirely in ANSI COBOL, porting would be easy, but I suspect
this is not the case. If it's in C, you may have a very easy time.

> To be honest, though, I'd probably implement something like CULL, since
> brute-force searches are typically quite inefficient.

Though on modern systems -- even an LX100 -- you can do a lot of brute
force searches in the time it takes to port a program.

OTOH, I did a lot of brute force searches across several hundred files
last year. It might have taken longer to implement CULL, but it would
have saved a lot of waiting at critical times. On still another hand,
this was on an LX100; on an NX6800 it would have been so fast I'd have
hardly cared -- unless on yet one more hand the machine were overloaded
... better stop while I'm ahead.

If you do port it, consider giving it a longer and more descriptive
name (up to 17 characters, the file name node size). A-Series program
names traditionally are longer than on most mainframes. FINDREF is
pretty good; CULL is a bit terse by A-Series conventions.

> True. Also, if one states one's biases up front, that helps to define
> the context in which the observations/comparisons are made.

It's very obvious that the Unisys software engineers brought a lot of
ideas from other platforms into EDITOR and PW. Many aspects were
totally odd on the A-Series at the time they were introduced.
Cross-pollination enhances fertility.

Edward Reid


Charles M Rader

Oct 8, 2002, 7:36:28 PM

"Greig Blanchett" <gre...@nzrfu.com> wrote in message
news:494F1A7F243A44A8.0B3064AE...@lp.airnews.net...

>One day somebody somewhere will port Perl to A Series and become an
>instant legend, but I'm not holding my breath ....

A search at Perl.com found at least one Perl implementation for Java Virtual
Machine.

Newer MCP systems run Java Virtual Machine, so it MIGHT be possible to run
Perl on Java on ClearPath NX4800 or newer MCP systems, depending on which
Java subsets were used in porting Perl.

Multiple levels of interpreters might run very slowly, but it's possible to
compile native code from Java byte code on MCP 7.0...
Charles Rader
ClearPath NX Technical Services, Eagan Service Center
Unisys Global Outsourcing
Eagan, Minnesota

*** Text contains my personal opinions ***


Richard Steiner

Oct 11, 2002, 7:40:45 PM

Here in comp.sys.unisys,
Edward Reid <edwar...@spamcop.net> spake unto us, saying:

>On Sun, 6 Oct 2002 21:40:44 -0400, Richard Steiner wrote
>
>> A dependency on having actual line numbers saved in the file itself is
>> rather interesting... Sounds like CTS, if I remember correctly...?
>
>I don't know CTS ...

CTS is a line-editing environment for the 1100/2200/Clearpath IX that
had a tendency to embed line numbers in the files that one edited (if I
recall correctly, anyway -- it's been a number of years since I used it
interactively for any length of time).

>I know that people coming from other environments often find the line
>number requirement bizarre.

Hmmm. "Bizarre" is one word for it, yes. :-)

>But having a permanent identifier attached to each line makes a lot
>of things much simpler.

Unfortunately, it also seems (at least in my novice-A-series-user's
eyes) to make many other otherwise routine tasks more complex, like
large line insertions or file imports.

>That of it that way: an arbitrary but unique and persistent key to a
>"virtually indexed" file. I find the contortions used to uniquely
>identify lines on other systems painful.

Most editors simply keep track of each line's current position in the
editing buffer, and renumber things on the fly during line deletion
or insertion operations.

There's typically no need for a permanent "line number" at all, at
least for the types of text editing applications I can think of.

When is a permanently-attached line number an advantage?

They would have some use if one is comparing a modified program to the
original source, perhaps, but there are usually file-comparison tools
for a given platform which take care of that type of thing.

>Note that CULL could not replace XREF files unless it could handle
>Algol DEFINE and COBOL REPLACE and COPY ... REPLACING. These cause
>identifier references at places in the code where the identifier name
>does not actually appear.

I don't believe IACULL handles that type of reference indirection.

FINDREF could trace back to some extent if one was using DCL, I think,
at least based on my reading its help files and code, but as I'm not at
all conversant with DCL I'm not really sure of its capabilities (it
followed EQU and EQUF statements back one or two steps, I think).

>This is not to denigrate in any way the potential value of CULL on
>A-Series -- I think it would be very useful and I can think of many
>times when I could have made very good use of it.
>
>> It'd be easy enough to write one. All I'd need is time.
>
>You must have more of that commodity than I do.

Or more interest. :-)

To be fair, I have no time for such activity at the site where I'm
currently working, nor would it be fair to them to make such time.

I do, however, have a certain amount of time here at home.

>Of course, if you have access to the original program, porting it would
>be a lot faster than starting from scratch, even if you had to rewrite
>all the actual code.

I don't have access to the CULL or IACULL source, no, but the general
concept is simple. The actual *implementation* would require a bit (or
perhaps quite a bit) of thought, as I'd want it to be fairly efficient,
as well as a portable tool and not limited to Unisys mainframe systems
(if I go to that effort, I'd also want to be able to use it in my OS/2
and Linux programming environments here at home).

>> To be honest, though, I'd probably implement something like CULL, since
>> brute-force searches are typically quite inefficient.
>
>Though on modern systems -- even an LX100 -- you can do a lot of brute
>force searches in the time it takes to port a program.

True. However, brute-force searches aren't all that stimulating, while
designing and implementing a useful utility (or thinking about it) is.

:-)

On a sidenote -- why doesn't the UUSIG web site have much "A-series"
software? Are there other sites?

>> True. Also, if one states one's biases up front, that helps to define
>> the context in which the observations/comparisons are made.
>
>It's very obvious that the Unisys software engineers brought a lot of
>ideas from other platforms into EDITOR and PW. Many aspects were
>totally odd on the A-Series at the time they were introduced.

I've had a week of exposure now to things such as the MCP, MARC, CANDE,
COMS-style transactions, and PSI (a screen creation and code-generating
program), and I admit I find the environment to be fascinating.

Separate line and page transmit keys? Weird... ;-) :-)

>Cross-pollination enhances fertility.

Yes, it does.

Edward Reid

Oct 12, 2002, 2:08:37 AM

On Fri, 11 Oct 2002 19:40:45 -0400, Richard Steiner wrote

>> But having a permanent identifier attached to each line makes a lot
>> of things much simpler.
>
> Unfortunately, it also seems (at least in my novice-A-series-user's
> eyes) to make many other otherwise routine tasks more complex, like
> large line insertions or file imports.

Only because the site you're at doesn't use the available tools --
SYSTEM/PATCH, EDITOR, PW. Such tasks are much easier with these tools.
Maintaining sequence numbers still is some burden, but the burden is
far less and the benefits far greater when you are using the available
tools.

> Most editors simply keep track of each line's current position in the
> editing buffer, and renumber things on the fly during line deletion
> or insertion operations.

EDITOR and PW keep track of inserted lines and renumber at the end of a
session, or when you save the file. The interfaces make numbers on
inserted lines unneeded, so on-the-fly renumbering does not arise even
in concept.

> There's typically no need for a permanent "line number" at all, at
> least for the types of text editing applications I can think of.
>
> When is a permanently-attached line number an advantage?

1) When you want to return to the same location in a large program.
"Large" may depend on your environment, but I'm thinking tens of
thousands of lines, or even just thousands -- not much of an issue with
only hundreds of lines. The more often you return, the greater the
value.

2) When you are discussing or corresponding about a change and may not
have the exact same version of the source. As long as the area under
consideration is exactly or nearly the same, you can discuss changes
without confusion, even if there are major differences in other parts
of the source.

3) When using a patch file which may apply to different versions of a
source file. Of course applying patches out of sequence is not
necessarily valid, but once you've verified the validity (no
interaction), the fact that lines in the source have unique, persistent
identifiers means that each patch can be applied to the source
separately. Note that your site probably doesn't use patch files, thus
missing another of the strong development tools available, especially
for large programs.

4) In conjunction with compiler-generated XREF files. Because the line
identifiers (sequence numbers) do not change, even a somewhat out of
date XREF file is often still useful (as long as you do not depend on
it to locate absolutely every reference to an identifier).

5) Identifying fault locations. When a program is properly compiled
(with the $LINEINFO option), a program which faults automatically
displays the line number of the fault, and a traceback of calls with
line numbers. (I believe that the most recent version of COBOL even
displays a PERFORM traceback, which was previously a limitation. Algol
and Fortran and C procedure invocations, and COBOL CALL statements,
have always provided the traceback.) You don't need information (such
as a compile listing) from the relevant compilation to interpret these
line numbers, since they are persistent identifiers.

6) For finding conflicts among multiple patches. SYSTEM/PATCH by
default gives a report showing lines altered by multiple patches. Of
course this is only a very coarse filter and does not guarantee a lack
of conflicts, but it's very useful. And of course it's also possible to
do this without line numbers, but more difficult -- SYSTEM/PATCH was
doing it in 1970.

7) Probably more, this is off the cuff.

I'm sure that all these issues are addressed in one way or another in
many systems. I'm pretty sure that this method handles more of these
issues more smoothly with a single concept than most or all of the
others. However, I'm only familiar with a very small subset of the
others, so certainly there may be other approaches which address these
issues elegantly as well.

> They would have some use if one is comparing a modified program to the
> original source, perhaps, but there are usually file-comparison tools
> for a given platform which take care of that type of thing.

I've used both, and find the comparison using line numbers to be easier
to follow and less susceptible to artifacts. (See the CANDE MATCH
verb.) The other tools (diff etc) have certainly improved vastly over
the years; 20 years ago, CANDE was far superior.

You can also MATCH two files to create a patch file, then edit the
patch file against the source with EDITOR or PW. (Both implement the
concept that your work file is just a patch file -- the alterations --
to a base source, so your current patch file is always distinct.) This
allows you to examine the changes in a different way, one which is
often useful.

Last year I implemented a rather off-the-wall variant of this. I was
involved in a project which entailed extensive desk-checking of major
and minor modifications -- we were not able to test the changes we
made, but wanted the minimum possible errors in what we returned to our
client. This required extensive side-by-side manual examination. I
wrote a program which tied together two terminal screens (side by side
on my workstation) with two EDITOR sessions. All data passed through my
"hub" program. The master terminal and master EDITOR session operated
normally with a modified source file. The hub program sent commands to
cause the slave EDITOR session to keep the slave terminal in sync with
the master, so that I could examine the two versions side by side,
without printing either, and without having to scroll the slave version
manually. A bit bizarre even to me, but it worked very well, and would
have been much more difficult without persistent line identifiers.

> True. However, brute-force searches aren't all that stimulating, while
> designing and implementing a useful utility (or thinking about it) is.

Ah yes, I do understand that point.

> On a sidenote -- why doesn't the UUSIG web site have much "A-series"
> software? Are there other sites?

Can't give you a good reason. No, there are no other sites. There was
once a CUBE Library with a lot of software -- it was maintained by the
Air Force Academy for quite a while, and then I think by someone else
who I regret that I am unable to identify tonight, and distributed on
tape. But after it dropped, there was no grass roots push to reinstate
it. Whether it has to do with a difference in support level, or a
difference in the types of client sites, or a cultural difference, or
something else entirely, I can't say. I know that for my part, nowadays
I seldom have the luxury of taking the time to extend my tools to make
them generally usable.

BTW, you seem to have a couple of links on your site still pointing to
crewstone.com.

> Separate line and page transmit keys? Weird... ;-) :-)

Yeah, I agree. I almost never use the line transmit -- that's mostly
used by people who use CANDE page mode, and you've heard my rant on
that. Have you gotten used to the fact that the terminal transmits up
to but NOT including the cursor position?

Edward Reid
(looking for A/NX/LX contract work; see
http://user.talstar.com/reide/resume2002.html)


Richard Steiner

Oct 12, 2002, 4:36:57 AM
On Sat, 12 Oct 2002 2:08:37 -0400 in comp.sys.unisys,
Edward Reid <edwar...@spamcop.net> spake unto us, saying:

>> Most editors simply keep track of each line's current position in the
>> editing buffer, and renumber things on the fly during line deletion
>> or insertion operations.
>
> EDITOR and PW keep track of inserted lines and renumber at the end of a
> session, or when you save the file. The interfaces make numbers on
> inserted lines unneeded, so on-the-fly renumbering does not arise even
> in concept.

That's good, and it's what I would expect from a relatively modern (or
even 15-year-old) fullscreen editor.

In a UEDIT editing window on a 2200, for example, one typically would
perform location-specific editing operations on the screen using a set
of commands of the following form:

>CMD<xmit>

where > is actually a UTS SOE character (looks like a filled ">" sign)
placed on the point where the operation is to take place, CMD is the
specific command, and <xmit> is the UTS transmit key, which usually
transmits all data on the screen from the current cursor position
left and then up each line until it hits an SOE character (in this
example, just before an "I5" typed on the screen).

In UTS "fullscreen" mode, the terminal also returns to the editor the
X,Y coordinates of the SOE, making it easy for the editor to know the
exact X,Y position in the text buffer that was indicated.

Various operations are possible, including

In insert n lines after indicated line
IBn insert n lines before indicated line
Dn delete n lines
BLn mark a set of lines
SB mark the start of a block
EB mark the end of a block and copy block to new editing block
EBD mark the end of a block, copy to new block, and delete original
DB delete from the SB point to the indicated DB point (no copy)
PB place contents of last defined block after the indicated spot
PBB place contents of last defined block before the indicated spot
PBn place contents of block number n after the indicated spot
SF mark the upper left corner of a rectangular edit field
EF mark the lower right corner of a rectangular edit field
DF delete the contents of a rectangular editing field
M place a bookmark at the indicated point
T make the indicated point the top of the screen
B make the indicated point the bottom of the screen
Rn refresh indicated n lines
SPL split indicated line at indicated point
< shift display so indicated character is new left margin
> shift display so indicated character is new right margin

etc. Fun stuff. :-)
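
Roughly, the dispatch step works like this -- a toy Python sketch, not
UEDIT itself; the names (dispatch, top_line, soe_row) are made up, and
it only handles the In/IBn/Dn forms from the list above:

def dispatch(buffer, top_line, soe_row, command):
    # buffer: list of text lines; top_line: buffer index shown on screen
    # row 0; soe_row: screen row where the SOE sat; command: the text
    # that was transmitted after it.
    line = top_line + soe_row                 # map screen row to buffer line
    cmd = command.strip().upper()
    if cmd.startswith("IB"):                  # IBn: insert n (blank) lines before
        buffer[line:line] = [""] * int(cmd[2:] or 1)
    elif cmd.startswith("I"):                 # In: insert n (blank) lines after
        buffer[line + 1:line + 1] = [""] * int(cmd[1:] or 1)
    elif cmd.startswith("D"):                 # Dn: delete n lines from here
        del buffer[line:line + int(cmd[1:] or 1)]
    return buffer

The column matters for the character-oriented commands (SPL, <, >), but
for the line operations above only the row is needed.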

>> There's typically no need for a permanent "line number" at all, at
>> least for the types of text editing applications I can think of.
>>
>> When is a permanently-attached line number an advantage?
>
> 1) When you want to return to the same location in a large program.

Yes, but that depends on the other tools the editor makes available.

For example, in UEDIT I would tend to use Named Bookmarks -- you can
use the MARK command (or >M<xmit> on the screen) to set a bookmark (mine
supported up to five of those per editing block using MARKA, MARKB, etc),
create a meaningful freetext LABEL for them, change the F-key display
at the bottom to show the bookmark labels instead of F-key help, and
use the GO or SWITCH commands to bounce between the bookmarked lines.

Given the lack of a bookmarking facility, line numbers would be quite
valuable. I found it easier to have the editor remember for me. :-)

> "Large" may depend on your environment, but I'm thinking tens of
> thousands of lines, or even just thousands -- not much of an issue
> with only hundreds of lines. The more often you return, the greater
> the value.

FWIW, the source elements I worked with in the 2200 environment were
usually between 500 and 7000 lines in length. Data and trace files
were often much larger (sometimes several hundred thousand lines).

> 2) When you are discussing or corresponding about a change and may not
> have the exact same version of the source.

Yes, this would have value.



> 3) When using a patch file which may apply to different versions of a
> source file.

On the 2200, we used a system which involved a base version of the
source files, a set of change files (called CCF's), and a current
copy of the source.

When the entire system was recompiled once a month (called a GEN), the
existing changes plus any new ones were reapplied to the base source,
and a new set of "current" source files was dynamically generated.

When one submitted a code change for permanent integration, one cut a
change image against the file either by hand or using a tool like SCOMP
or DOWN, and then used a tool like SCSCCF to automatically convert the
changes from current-relative to base-relative line numbers.

Once a file had enough changes against it that CCF's became relatively
complex, the file was re-based and a new integrated version introduced
for that file.

Base-relative line numbers in CCF's tended to be the same for many
years (in most cases) before an actively modified file was rebased.

A relatively simple CCF works something like this:

Original file:

This is an
example of the
CCF change
image method
used on many
2200-series
machines.
Cool!

Set of change images:

-2
This line gets inserted after line 2 in the source.
-3,4
..These lines replace lines
..3 and 4 in the source, and
..effectively add an extra two
..lines to the file
-6,7
-8
The "-6,7" line above deleted lines 6 through 7 in
the source. This text goes after line 8.

New file after merge:

This is an
example of the
This line gets inserted after line 2 in the source.
..These lines replace lines
..3 and 4 in the source, and
..effectively add an extra two
..lines to the file
used on many
Cool!
The "-6,7" line above deleted lines 6 through 7 in
the source. This text goes after line 8.

Of course, sometimes one has to modify existing change lines, and the
syntax gets a little more complex.
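
In Python, the merge step itself is simple enough to sketch. This is
purely illustrative -- not any real 2200 utility -- with the directive
semantics inferred from the example above, and it assumes change text
lines never themselves start with "-":

def apply_ccf(base, ccf):
    # base: list of source lines; ccf: change images, starting with a directive.
    # Directives:  -N    insert the following text after base line N
    #              -M,N  replace base lines M..N (delete them if no text follows)
    changes, i = [], 0
    while i < len(ccf):
        nums = ccf[i][1:].split(",")
        first, last = int(nums[0]), int(nums[-1])
        i += 1
        text = []
        while i < len(ccf) and not ccf[i].startswith("-"):
            text.append(ccf[i])
            i += 1
        changes.append((first, last, len(nums) == 1, text))

    out, pos = [], 1                          # pos = next base line to copy
    for first, last, insert_after, text in changes:
        keep_through = first if insert_after else first - 1
        out.extend(base[pos - 1:keep_through])    # copy untouched base lines
        out.extend(text)                          # then the change text, if any
        pos = last + 1
    out.extend(base[pos - 1:])                    # copy the remainder
    return out

Fed the original file and change images above, this reproduces the
merged file shown.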

> Note that your site probably doesn't use patch files, thus missing
> another of the strong development tools available, especially for
> large programs.

Yes. Change control is very basic (paper in folders), and changes are
done to the test system's source files directly in many cases.

> 4) In conjunction with compiler-generated XREF files. Because the line
> identifiers (sequence numbers) do not change, even a somewhat out of
> date XREF file is often still useful (as long as you do not depend on
> it to locate absolutely every reference to an identifier).

I understand the value in the A-series environment, but such a thing is
somewhat less valuable when programmers have interactive cross-reference
tools available. Subject to an individual's preferences, of course.

> 5) Identifying fault locations. When a program is properly compiled
> (with the $LINEINFO option), a program which faults automatically
> displays the line number of the fault, and a traceback of calls with
> line numbers.

A compiler should be able to generate relative line numbers from any
source file regardless of the presence of hard-coded line numbers in
that file.

The hard-coded line numbers would produce compilation errors that are
consistent between versions, of course. I would hope, however, that
the same error would not be encountered consistently across versions.

> 6) For finding conflicts among multiple patches. SYSTEM/PATCH by
> default gives a report showing lines altered by multiple patches.

Yes. That's what we used the base source elements for. :-)

> 7) Probably more, this is off the cuff.
>
> I'm sure that all these issues are addressed in one way or another in
> many systems. I'm pretty sure that this method handles more of these
> issues more smoothly with a single concept than most or all of the
> others.

It may. I'd have to actually see it used in practice first, I think.

> However, I'm only familiar with a very small subset of the
> others, so certainly there may be other approaches which address these
> issues elegantly as well.

I need to spend more time with Unix "diff" files, but it seems to me
that the Unix world is missing some text-mode tools, though it makes up
for it if one is using a programmer's editor with good cross-referencing
capabilities.

Things like ctags seem useful as well, but I've not used them enough
to really appreciate them yet.

>> They would have some use if one is comparing a modified program to the
>> original source, perhaps, but there are usually file-comparison tools
>> for a given platform which take care of that type of thing.
>
> I've used both, and find the comparison using line numbers to be easier
> to follow and less susceptible to artifacts. (See the CANDE MATCH
> verb.) The other tools (diff etc) have certainly improved vastly over
> the years; 20 years ago, CANDE was far superior.

I suspect that each Unisys mainframe platform had a sophisticated set
of tools 20 years ago. It's hard to judge which was "superior" without
knowing the actual capabilities of each platform at the time.

I didn't start using any tools seriously on the 1100 side until 1988 or
so, when I started working on Unisys systems at NWA. By that time UEDIT was
at least 10 years old and was already fairly mature, and 1100 file
comparison tools like the University of Maryland's DOWNDATER had been
in use for some years.

> Last year I implemented a rather off-the-wall variant of this. I was
> involved in a project which entailed extensive desk-checking of major
> and minor modifications -- we were not able to test the changes we
> made, but wanted the minimum possible errors in what we returned to our
> client. This required extensive side-by-side manual examination. I
> wrote a program which tied together two terminal screens (side by side
>on my workstation) with two EDITOR sessions. All data passed through my
> "hub" program. The master terminal and master EDITOR session operated
> normally with a modified source file. The hub program sent commands to
> cause the slave EDITOR session to keep the slave terminal in sync with
> the master, so that I could examine the two versions side by side,
> without printing either, and without having to scroll the slave version
> manually. A bit bizarre even to me, but it worked very well, and would
> have been much more difficult without persistent line identifiers.

Sounds like a nifty setup.

I tended to use UEDIT in split-window mode so it showed one editing
block (containing file A) on the top half of the screen and another
(containing file B) on the bottom, and UEDIT had a COMPARE command to
find differences (it would reposition both windows at the next point
in the files which differed, offset from the top by a defined margin to
give context).

I added to that, implementing a feature called synchro-scroll which
allowed all of the UEDIT text positioning commands (page and line
movement commands as well as auto-scrolling) to apply in lock-step to
both windows at the same time, which was nice when the two files were
very similar (you could page down in each concurrently by hitting xmit).
I also implemented a flag which told UEDIT to apply all FIND and
LOCATE commands on both of the visible blocks concurrently, allowing
one to go to the same place in each with a single F or L command.

The end result was quite useful for comparing old and new versions of a
given source, since one could use COMPARE to get to a known point of
differences, or one could use the standard locate/find/move commands
in a synched manner to otherwise move about.

Of course, it was often faster to run DOWN and generate a CCF...

> BTW, you seem to have a couple of links on your site still pointing to
> crewstone.com.

I should fix them. It's been a while since I've verified any links.

>> Separate line and page transmit keys? Weird... ;-) :-)
>
> Yeah, I agree. I almost never use the line transmit -- that's mostly
> used by people who use CANDE page mode, and you've heard my rant on
> that. Have you gotten used to the fact that the terminal transmits up
> to but NOT including the cursor position?

Sure - I've seen that behavior before on the 2200. That's what the EOL
key is for. :-)

--
-Rich Steiner >>>---> http://www.visi.com/~rsteiner >>>---> Eden Prairie, MN
OS/2 + BeOS + Linux + Win95 + DOS + PC/GEOS = PC Hobbyist Heaven! :-)
Now running in text mode on a PPro/200. Eat my dust, GUI freaks!

The Theorem Theorem: If If, Then Then.

Randall Bart

Oct 12, 2002, 8:19:20 PM
'Twas Sat, 12 Oct 2002 08:36:57 GMT when all comp.sys.unisys stood in awe as
Richard Steiner <rste...@visi.com> uttered:

>On Sat, 12 Oct 2002 2:08:37 -0400 in comp.sys.unisys,
>Edward Reid <edwar...@spamcop.net> spake unto us, saying:

<some really good stuff which I agree with>

>>> When is a permanently-attached line number an advantage?
>>
>> 1) When you want to return to the same location in a large program.
>
>Yes, but that depends on the other tools the editor makes available.
>
>For example, in UEDIT I would tend to use Named Bookmarks -- you can
>use the MARK command (or >M<xmit> on the screen) to set a bookmark (mine
>supported up to five of those per editing block using MARKA, MARKB, etc),
>create a meaningful freetext LABEL for them, change the F-key display
>at the bottom to show the bookmark labels instead of F-key help, and
>use the GO or SWITCH commands to bounce between the bookmarked lines.
>
>Given the lack of a bookmarking facility, line numbers would be quite
>valuable. I found it easier to have the editor remember for me. :-)

EDITOR has bookmarks, but they exist only in a single user's session. What if
multiple people are editing the same source; how do you keep their bookmarks
in sync? There are tools which do that now, but there weren't in the 1960s
when the patching technique originated, nor in the 1970s when EDITOR
originated.

When I was in the Commercial Languages section, the RPG compiler was 110,000
lines, the COBOL74 compiler was 70,000 lines, and we had a dozen people all
poking around the same source at the same time. The MCP was 650,000 lines
(it's over a million now). In the course of one system release cycle,
hundreds of people made thousands of patches updating tens of thousands of
lines of that source. And we were maintaining as many as five release
streams at the same time. I can't imagine managing that without permanent
line numbers.

BTW, in the history of the MCP, I believe there was exactly one patch to the
MCP made under the username BARTICUS.

>> "Large" may depend on your environment,

>FWIW, the source elements I worked with in the 2200 environment were
>usually between 500 and 7000 lines in length.

See above for definition of "large". Where I am now, our largest program is
3000 lines, and we have one programmer and one manager (me) making program
changes. It's not the same thing at all, but even so, I am trying to train
him not to resequence whole programs.

>> 4) In conjunction with compiler-generated XREF files. Because the line
>> identifiers (sequence numbers) do not change, even a somewhat out of
>> date XREF file is often still useful (as long as you do not depend on
>> it to locate absolutely every reference to an identifier).
>
>I understand the value in the A-series environment, but such a thing is
>somewhat less valuable when programmers have interactive cross-reference
>tools available. Subject to an individual's preferences, of course.

Ed is referring to what we call interactive XREF on A Series. I admit that
some editors maintain symbol cross references, but it's not the same thing.
Imagine this ALGOL:

DEFINE THIS_FIELD(I) = THIS_ARRAY[I] THESE_BITS #

In EDITOR, when I look up THIS_ARRAY, it will point to every reference to
THIS_FIELD as well. Only a compiler can know this. If your editor does
this, then it's doing at least a partial compile.
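
A toy way to see the difference, in Python (nothing like real
interactive XREF, just an illustration):

source = {
    100: "DEFINE THIS_FIELD(I) = THIS_ARRAY[I] THESE_BITS #",
    200: "X := THIS_FIELD(3);",        # touches THIS_ARRAY only via the define
    300: "THIS_ARRAY[0] := 0;",
}

# An editor-style text search finds the define and the direct use,
# but misses line 200 entirely:
print(sorted(n for n, text in source.items() if "THIS_ARRAY" in text))  # [100, 300]

# A compiler-built XREF knows THIS_FIELD expands to THIS_ARRAY[...] and
# would charge line 200 to THIS_ARRAY as well.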

>> 5) Identifying fault locations. When a program is properly compiled
>> (with the $LINEINFO option), a program which faults automatically
>> displays the line number of the fault, and a traceback of calls with
>> line numbers.
>
>A compiler should be able to generate relative line numbers from any
>source file regardless of the presence of hard-coded line numbers in
>that file.
>
>The hard-coded line numbers would produce compilation errors that are
>consistent between versions, of course. I would hope, however, that
>the same error would not be encountered consistently across versions.

Imagine the user calls up and says the program blew up at line 12345.
You're supporting five different release streams, three of which are in the
field at a total of 20 different patch levels. You determine the release
stream and patch level which the user has, find that line of code, and send
him a patch which adds 13 more lines. Now it blows up at 12567. Find that
line. Eventually you have a working patch which affects 30 lines scattered
around five routines in the program. Refit this patch into the four other
release streams. I can't imagine managing this without permanent line
numbers.

>>> Separate line and page transmit keys? Weird... ;-) :-)
>>
>> Yeah, I agree. I almost never use the line transmit -- that's mostly
>> used by people who use CANDE page mode, and you've heard my rant on
>> that.

In NX/View, there are three kinds of transmit: Page, line, and mixed. The
mixed transmit is line transmit when you aren't in forms mode, but it's page
transmit in forms mode. I have never seen an application where you wanted
line transmit in forms mode. The people who wrote FASTerm must not have
either, because the line transmit in FASTerm is just like mixed transmit in
NX/View.

Edward Reid

Oct 12, 2002, 10:29:44 PM
I'll try not to duplicate too much of what Randall Bart already wrote.

On Sat, 12 Oct 2002 4:36:57 -0400, Richard Steiner wrote


> In a UEDIT editing window on a 2200, for example, one typically would
> perform location-specific editing operations on the screen using a set
> of commands of the following form:

The Burroughs terminals never had a way of combining location with
text. Thus in EDITOR, location-specific commands which don't act on the
current location have to either include the location as part of the
command (for example, ]INSERT AFTER +2) or use a SPCFY first to
indicate the location. SPCFY is the only thing which sends the
location. (Some third party terminals had a CTRL-m-n-SPCFY sequence
which sent a predefined control sequence and the location, but this was
never standard.)

The inability of the application to know the cursor location allows the
user to play some tricks, but mostly it's a PITA. It's too bad that the
Burroughs terminal spec was never improved in this respect. Basically
that spec was frozen by 1975 in all important respects.

Of course, it's not an issue with PW -- you just click where you mean.

> Various operations are possible, including

These are very similar to EDITOR, except for

> In insert n lines after indicated line
> IBn insert n lines before indicated line

In EDITOR you simply go into insert mode; all lines you send are
inserted until you leave insert mode. This is easier than saying how
many lines you plan to insert, even when it's easy to insert more
later.

> Given the lack of a bookmarking facility, line numbers would be quite
> valuable. I found it easier to have the editor remember for me. :-)

In practice, far more often I just go to the declaration of whatever
procedure I'm working on:

]DEC MYPROCEDURE

or something like that. But sequence numbers are shared by all users.
Again, I won't try to defend specific uses against other tools; I see
the value of sequence numbers being in addressing quite a few issues
with one concept. (Even if that concept did originate in the need to
sort boxes of punched cards after they were dropped. ;-)

> A relatively simple CCF works something like this:

Looks terribly cumbersome to me. (But of course I don't use this
method, so duh.) Also it suffers from the issue I noted: if you add a
single line to the beginning of the Original, you can no longer apply
your CCFs without changing them. You may have tools to do this, but
it's another step. An A-Series patch file would need no modification.

When using EDITOR or PW, one never needs to "cut a change image"
because one is working with the change image -- aka patch file -- all
along. (It's also possible to work with a local copy of the source and
generate a patch file later; there's an option of CANDE MATCH for this.
It's commonly used in shops where the programmers don't know how to use
EDITOR or PW.)

Typically a new base source is generated for each release. There's no
reason to put it off, since the base line numbers won't change as a
result of creating a new source file.

>> 5) Identifying fault locations. When a program is properly compiled
>> (with the $LINEINFO option), a program which faults automatically
>> displays the line number of the fault, and a traceback of calls with
>> line numbers.
>
> A compiler should be able to generate relative line numbers from any
> source file regardless of the presence of hard-coded line numbers in
> that file.

But the $LINEINFO method means that the execution-time display is
useful even if I didn't save the program version from which it was
compiled, or if it's sufficiently cumbersome to generate that I'd
rather just look at a current version. As long as the patch marks show
no recent activity in the area, I can diagnose the problem without the
extra work. This is certainly not to recommend that one discard the
source for active programs, but in many cases it's very convenient to
just pick up the closest copy.

> The hard-coded line numbers would produce compilation errors that are
> consistent between versions, of course. I would hope, however, that
> the same error would not be encountered consistently across versions.

The issue here is run-time errors, which are sometimes rare and
non-reproducible.

>> I've used both, and find the comparison using line numbers to be easier
>> to follow and less susceptible to artifacts. (See the CANDE MATCH
>> verb.) The other tools (diff etc) have certainly improved vastly over
>> the years; 20 years ago, CANDE was far superior.
>
> I suspect that each Unisys mainframe platform had a sophisticated set
> of tools 20 years ago. It's hard to judge which was "superior" without
> actually knowing the actual capabilities of each platform at the time.

I only meant comparison tools. I agree that as far as the entire tool
set of the various Burroughs and Sperry (and other) platforms is
concerned, they are as different as apples and oranges. You may be able to
compare specific tools, but when you try to compare the entire tool
set, you have to be satisfied with saying they are all useful and are
very different from one another.

> I tended to use UEDIT in split-window mode [...] I added to that,
> implementing a feature called synchro-scroll

Cool. Those address exactly the situation I was dealing with.

Edward

Stephen Fuld

Oct 13, 2002, 12:14:43 AM

"Randall Bart" <Bart...@att.spam.net> wrote in message
news:nh1hqugj9cje4gaoh...@4ax.com...

snip

> When I was in the Commercial Languages section, the RPG compiler was
> 110,000 lines, the COBOL74 compiler was 70,000 lines, and we had a dozen
> people all poking around the same source at the same time. The MCP was
> 650,000 lines (it's over a million now). In the course of one system
> release cycle, hundreds of people made thousands of patches updating tens
> of thousands of lines of that source. And we were maintaining as many as
> five release streams at the same time. I can't imagine managing that
> without permanent line numbers.

But were these large programs all in one source file? That seems really
extreme. If the code was broken into multiple (many) separate files that
are independently compiled and later linked, line numbers can be relative to
the individual file, and changes to one don't affect the line numbers of the
others. This seems pretty basic. What am I missing here?

--
- Stephen Fuld
e-mail address disguised to prevent spam


Edward Reid

Oct 13, 2002, 4:27:58 PM
On Sun, 13 Oct 2002 0:14:43 -0400, Stephen Fuld wrote

> But were these large programs all in one source file?

Yes. Actually, Randall's figures are only for the main source file. All
the products he mentioned use included files extensively, and at least
the MCP has a few modules of significant size bound in (that is,
statically linked).

A combination of factors disposes toward using fewer, larger source
files on the A-Series: the tools we've been discussing, a preference
for having the compiler do most of the work, and just simple history
and culture. I'm not going to claim it's better, but it works quite
well, and I see no reason but majority vote to go the way of many tiny
source modules.

> If the code was broken into multiple (many) separate files that
> are independently compiled and later linked, line numbers can be relative to
> the individual file, and changes to one don't affect the line numbers of the
> others. This seems pretty basic. What am I missing here?

Probably nothing, except that it works just fine. Obviously people on
other systems find that other methods work just fine too.

You could also say that the A-Series method *allows* you to manage
large source files, and other systems *force* you to use many small
files. This is a gross exaggeration of course, but there's a grain of
truth in it.

People who haven't worked with the A-Series ask "why"; people who have,
say you'll take it away from them over their dead bodies ;-). Having
worked with this method for a long time, I find it bizarre that anyone
would question the value of the persistent line identifier, and I find
it a bit frightening that I might have to depend on a bunch of
utilities and procedures to locate a particular line reliably instead of
just having it immutably marked. But intellectually I realize that I'd
probably get used to other systems. Right now my experience is so
heavily on the A-Series systems that I can't give a fully balanced
comparison of the methods.

Edward Reid


Randall Bart

Oct 13, 2002, 11:14:42 PM
'Twas Sun, 13 Oct 2002 04:14:43 GMT when all comp.sys.unisys stood in awe as
"Stephen Fuld" <s.f...@PleaseRemove.att.net> uttered:

>But were these large programs all in one source file? That seems really
>extreme. If the code was broken into multiple (many) separate files that
>are independently compiled and later linked, line numbers can be relative to
>the individual file, and changes to one don't affect the line numbers of the
>others. This seems pretty basic. What am I missing here?

After I posted that, I realized that nobody has source files this size
without permanent line numbers, and therefore people who work without line
numbers always break up their programs into little files. Multiple files
are a problem as well. I implemented a major feature in RPG (calling
dynamic libraries). This implementation caused me to update about 15
routines. I would hate to be in the position where I needed to change 15
different source files and then compile and link the whole thing. Since it
was new, it didn't need to go into five release streams, but at times it
would need to go into two. Bug fixes would rarely hit that many modules,
but I can see a bug fix hitting five or more.

There are trade offs. It's nice to have the tools to support large source
files. I don't think much would have been gained by splitting up those
compilers. OTOH, MCP should have been split up. Unfortunately, NEWP (the
compiler used for the MCP) didn't support binding. It's a case where people
were so happy with one technique they neglected the other.
--
RB |\ © Randall Bart GM O AA
aa |/ ad...@RandallBart.spam.com 1-917-715-0831 oo A nn
nr |\ Bart...@att.spam.net DOT-HS-808-065 Rn / \ ag
dt ||\ Please reply without spam MS^7=6/28/107 ak / \ he
a |/ http://RandallBart.com/ I LOVE YOU le / \ el
l |\ Our monkey spanky the Yankees & the Hankies ly /=======\ is
l |/ Giants & Cardinals: We're waiting for you y! / \ m!

Hans Vlems

Oct 15, 2002, 5:34:18 PM
>
> But were these large programs all in one source file? That seems really
> extreme. If the code was broken into multiple (many) separate files that
> are independently compiled and later linked, line numbers can be relative
> to the individual file, and changes to one don't affect the line numbers
> of the others. This seems pretty basic. What am I missing here?
>
The largest program I ever wrote on the B7700 was about 12000 lines of
Algol. It was one source file and the Algol compiler was all I needed to
maintain that program (and its associate XREF of course). I used line
numbers as well as patch codes (in 81-90) to maintain that program. Of
course it was not complex since I was the only maintainer and 12.000 lines
is not that bad.
Before that I experimented with smaller units. I had a 3000 line program
that started with BEGIN and ended with END. and not a procedure in sight.
After a couple of years I decided that it needed some structure and used
procedure calls. At the same time I'd discovered *SYSTEM/BINDER and it
seemed a nice idea to have separate compilation units. It worked, but after
a while it became clear that separate compilation and binding took more
effort than a full, straight compile. Now the university was paying for all
these experiments but even at this tiny scale it was obvious that one source
file and (static) line numbers combined with patch id's was a pretty good
way to maintain sources.

Hans Vlems

(just having tried MVS on the pc and wishing for an MCP Mark III.0 release
for hobbyists...)

Richard Steiner

Oct 20, 2002, 12:24:05 AM
Here in comp.sys.unisys,
Randall Bart <Bart...@att.spam.net> spake unto us, saying:

>Richard Steiner <rste...@visi.com> uttered:


>
>>Given the lack of a bookmarking facility, line numbers would be quite
>>valuable. I found it easier to have the editor remember for me. :-)
>
>EDITOR has bookmarks, but it's only in a single user's session. What if
>multiple people are editing the same source; how do you keep their bookmarks
>in sync?

In practice, at least at the sites at which I've worked, multiple people
do not edit the same source file concurrently.

Instead, one makes a copy of the original source file and edits the copy,
then uses the tools available to cut a change file for final testing.

If two people are modifying the same module, testing is usually done by
applying both change files.

Note that, at least in the 2200 environment, most compilers and other
compilation front-ends are able to merge change files into the source
as part of the basic compilation process. The programmer doesn't have
to perform that step at all.

In any case, the "bookmarks" to which I refer in a UEDIT context are not
saved within the edited source itself, but are dynamic data structures
that the editor maintains locally for use during an editing session.

>There are tools which do that now, but there weren't in the 1960s
>when the patching technique originated, nor in the 1970s when EDITOR
>originated.

I wouldn't know -- my earliest exposure to tools like DOWNDATER was in
1988 (when I left the academic 1100 world for the commercial 1100 world).

>When I was in the Commercial Languages section, the RPG compiler was 110,000
>lines, the COBOL74 compiler was 70,000 lines, and we had a dozen people all
>poking around the same source at the same time. The MCP was 650,000 lines
>(it's over a million now). In the course of one system release cycle,
>hundreds of people made thousands of patches updating tens of thousands of
>lines of that source. And we were maintaining as many as five release
>streams at the same time. I can't imagine managing that without permanent
>line numbers.

I can, quite easily, but I don't have the same mindset that you do (and
I also suspect, given the reaction I'm seeing, that my poor attempts at
explaining our change process have been less than successful).

Regardless, though, the method we used (using a fixed base version of
the source to give us a set of working line numbers) would work just
fine in the cases you describe.

There is *no* need to tie line numbers directly to the source file in a
way which impacts the actual editing of that source file, only in a way
which coordinates the application of code patches against the source.

>>> "Large" may depend on your environment,
>
>>FWIW, the source elements I worked with in the 2200 environment were
>>usually between 500 and 7000 lines in length.
>
>See above for definition of "large".

No criticism is intended, but the use of monolithic source elements as
large as you describe seems to fly in the face of everything I've been
taught about modular software design.

In any case, in a transaction environment such as that commonly used in
2200-land, large programs are relatively unknown. The whole object of
a transaction program is to get in, perform a simple task, and exit as
quickly as possible so subsequent transactions can be executed with as
little delay as possible. Get the input, parse the input, read a few
files, do a screen build, display the screen, and exit.

> Where I am now, our largest program
>is 3000 lines, and we have one programmer and one manager (me) making program
>changes. It's not the same thing at all, but even so, I am trying to train
>him not to resequence whole programs.

When one operates the way we did (and the way most platforms I've seen
do), local source file resequencing while editing is a nonissue since
the base line numbers are only relevant once one's set of changes is
done and one is cutting the change file.

In that situation, the concept of "resequencing" is replaced with the
concept of rebasing the base source file, thus changing the base set of
line numbers used for change application.

It really doesn't matter what one does with one's local copy.

>>> 4) In conjunction with compiler-generated XREF files. Because the line
>>> identifiers (sequence numbers) do not change, even a somewhat out of
>>> date XREF file is often still useful (as long as you do not depend on
>>> it to locate absolutely every reference to an identifier).
>>
>>I understand the value in the A-series environment, but such a thing is
>>somewhat less valuable when programmers have interactive cross-reference
>>tools available. Subject to an individual's preferences, of course.
>
>Ed is referring to what we call interactive XREF on A Series. I admit that
>some editors maintain symbol cross references, but it's not the same thing.
>Imagine this ALGOL:
>
>DEFINE THIS_FIELD(I) = THIS_ARRAY[I] THESE_BITS #
>
>The EDITOR, when I look up THIS_ARRAY, it will point to every reference to
>THIS_FIELD. Only a compiler can know this.

Any tool capable enough to be able to recognize DEFINE statements in a
source listing and do some fairly basic parsing would know this. Even
the creation/maintenance of a fairly basic symbol table would suffice.

>If your editor does this, then it's doing at least a partial compile.

No, it's parsing the source, something which I would consider quite a
different activity from the types of things a compiler does, since a
real compiler does a *lot* more than the simple parsing of file(s).

>>> 5) Identifying fault locations. When a program is properly compiled
>>> (with the $LINEINFO option), a program which faults automatically
>>> displays the line number of the fault, and a traceback of calls with
>>> line numbers.
>>
>>A compiler should be able to generate relative line numbers from any
>>source file regardless of the presence of hard-coded line numbers in
>>that file.
>>
>>The hard-coded line numbers would produce compilation errors that are
>>consistent between versions, of course. I would hope, however, that
>>the same error would not be encountered consistently across versions.
>
>Imagine the user calls up and says the program blew up at line 12345.
>You're supporting five different release streams, three of which are in the
>field at a total of 20 different patch levels. You determine the release
>stream and patch level which the user has, find that line of code, and send
>him a patch which adds 13 more lines. Now it blows up at 12567. Find that
>line. Eventually you have a working patch which affects 30 lines scattered
>around five routines in the program. Refit this patch into the four other
>release streams. I can't imagine managing this without permanent line
>numbers.

The use of hard source line number references in user error messages is
a totally foreign concept to me.

Most of the errors I've seen that are intended for programmer
consumption tend to provide reference information such as source file
and module name/number, or perhaps milestone markers.

I can easily imagine finding and fixing an error in that many versions
of code. All one needs is a consistent set of base source elements and
a good code comparison and change-generation tool like DOWN.

Richard Steiner

Oct 20, 2002, 1:43:55 AM
Here in comp.sys.unisys,
Edward Reid <edwar...@spamcop.net> spake unto us, saying:

>I'll try not to duplicate too much of what Randall Bart already wrote.

Sometimes a restatement of something tends to make it clearer, though,
so a certain amount of redundancy in a discussion can be positive.

I'm not always quick on the uptake, so a certain amount of repetition
will probably do me some good! :-) :-)

>On Sat, 12 Oct 2002 4:36:57 -0400, Richard Steiner wrote
>
>> In a UEDIT editing window on a 2200, for example, one typically would
>> perform location-specific editing operations on the screen using a set
>> of commands of the following form:
>
>The Burroughs terminals never had a way of combining location with
>text. Thus in EDITOR, location-specific commands which don't act on the
>current location have to either include the location as part of the
>comment (for example, ]INSERT AFTER +2) or use a SPCFY first to
>indicate the location.

And I can see how that could impose a limitation on what the program on
the other side (say, an editor) could do. Though I would guess, if the
whole screen were sent as a bytestream, that one could obtain a rough
screen position by simply counting data characters. :-)

Many editors that I've seen in the IBM or 2200 worlds have a line number
region along the left margin (DCF was this way, as is IPF) which is also
a command region, and commands that are typed on a given line number will
take effect on that line.

UEDIT has such a mode, and that might've been the way it operated back
when it was an earlier prototype. I didn't use UEDIT until after folks
like Tim Radde and Chuck Caldarale had modified it somewhat.

>The inability of the application to know the cursor location allows the
>user to play some tricks, but mostly it's a PITA. It's too bad that the
>Burroughs terminal spec was never improved in this respect. Basically
>that spec was frozen by 1975 in all important respects.

It's a limitation that I can sympathize with. :-(

>Of course, it's not an issue with PW -- you just click where you mean.

True. Most editors which operate in a more interactive environment
where things like cursor movements and such are known can do that.

>> Various operations are possible, including
>
>These are very similar to EDITOR, except for
>
>> In insert n lines after indicated line
>> IBn insert n lines before indicated line
>
>In EDITOR you simply go into insert mode; all lines you send are
>inserted until you leave insert mode. This is easier than saying how
>many lines you plan to insert, even when it's easy to insert more
>later.

Yes, that would be a nice approach. Typically with UEDIT, though, you
weren't sending "lines" of data, since UEDIT was running in fullscreen
mode and the Xmit key was only hit for SOE commands like I described or
for conventional editing commands entered in the small common region at
the top.

Come to think of it, UEDIT had an "APPEND" mode for adding text to the
end of a file. I don't think it would work for inserting text, though,
but changing that wouldn't be hard. Hmmm.

Damn. I wish I had a 2200 here at home. So many ideas... :-)

>> A relatively simple CCF works something like this:
>
>Looks terribly cumbersome to me. (But of course I don't use this
>method, so duh.) Also it suffers from the issue I noted: if you add a
>single line to the beginning of the Original, you can no longer apply
>your CCFs without changing them. You may have tools to do this, but
>it's another step. An A-Series patch file would need no modification.

Actually, that isn't true. Someone who added a single line to the top
could do so in the following way:

-1,1
This is the new line #1.
This is the original line #1.

The base source would still be the same, and the application of the
above CCF would add a line to the top by replacing line #1 with two
new lines, one of which is the same as the old line #1.

No other previously-cut change lines would have to be altered, since
the base source file line numbers are unchanged and still apply.
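
Using the toy apply_ccf sketch from earlier in the thread against the
eight-line example file (again, illustration only, not how our real
tools worked):

base = ["This is an", "example of the", "CCF change", "image method",
        "used on many", "2200-series", "machines.", "Cool!"]

prepend = ["-1,1",
           "A brand new first line.",
           "This is an"]               # keep the original line 1 as well

print(apply_ccf(base, prepend)[:3])
# ['A brand new first line.', 'This is an', 'example of the']

The "-3,4" and "-6,7" images from the earlier CCF still mean the same
thing, because they're keyed to the unchanged base line numbers.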

>When using EDITOR or PW, one never needs to "cut a change image"
>because one is working with the change image -- aka patch file -- all
>along. (It's also possibly to work with a local copy of the source and
>generate a patch file later; there's an option of CANDE MATCH for this.
>It's commonly used in shops where the programmers don't know how to use
>EDITOR or PW.)

In the environment I'm in now, one makes a copy of the original source
file, makes changes, copies the old production source to a file called
something/BACKUP, and copies the new source over the old source.

Simple, but it makes things difficult when two people make changes at
the same time. :-)

>Typically a new base source is generated for each release. There's no
>reason to put it off, since the base line numbers won't change as a
>result of creating a new source file.

We weren't releasing a product at NWA -- it was a set of applications
for internal use.

>>> 5) Identifying fault locations. When a program is properly compiled
>>> (with the $LINEINFO option), a program which faults automatically
>>> displays the line number of the fault, and a traceback of calls with
>>> line numbers.
>>
>> A compiler should be able to generate relative line numbers from any
>> source file regardless of the presence of hard-coded line numbers in
>> that file.
>
>But the $LINEINFO method means that the execution-time display is
>useful even if I didn't save the program version from which it was
>compiled, or if it's sufficiently cumbersome to generate that I'd
>rather just look at a current version. As long as the patch marks show
>no recent activity in the area, I can diagnose the problem without the
>extra work. This is certainly not to recommend that one discard the
>source for active programs, but in many cases it's very convenient to
>just pick up the closest copy.

I can see where that can be true at times. Since I tended to work on
my source online most of the time, though, the "closest copy" tended to
be precisely the one that generated the code in the first place. :-)

>> The hard-coded line numbers would produce compilation errors that are
>> consistent between versions, of course. I would hope, however, that
>> the same error would not be encountered consistently across versions.
>
>The issue here is run-time errors, which are sometimes rare and
>non-reproducible.

Yes. I'd much rather have a program generate some sort of logical name
(like a module name or a logical milemarker) than a line number in an
error message -- that way I don't introduce a dependency on having line
numbers intact. :-)

How's that for circular logic? :-)

>>> I've used both, and find the comparison using line numbers to be easier
>>> to follow and less susceptible to artifacts. (See the CANDE MATCH
>>> verb.) The other tools (diff etc) have certainly improved vastly over
>>> the years; 20 years ago, CANDE was far superior.
>>
>> I suspect that each Unisys mainframe platform had a sophisticated set
>> of tools 20 years ago. It's hard to judge which was "superior" without
>> actually knowing the actual capabilities of each platform at the time.
>
>I only meant comparison tools. I agree that as far as the entire tool
>set of the various Burroughs and Sperry (and other) platforms is
>concerned, they are as different as apples and oranges. You may be able to
>compare specific tools, but when you try to compare the entire tool
>set, you have to be satisfied with saying they are all useful and are
>very different from one another.

Yes.

The application environments are different, too. I'm currently quite
fascinated by the whole concept of "WINDOWS" and "AGENDAS" in COMS, and
in the way one ties a specific transaction code to a window. I don't
have it all down yet, but I'm learning.

Very different from the TIP transaction environment on the 2200, where
one has a specific transaction environment that one generally signs
into in order to run programs, and a separate command-line ("DEMAND")
session where one does editing and compiling and such, and those tend
to be on separate terminal windows using completely separate PIDs.

It's also disconcerting, though, and I find I sometimes forget where I
am and I'll type a trancode in CANDE or CANDE commands in a transaction
window. Is it typical in the A-series world to only have one terminal
ID and to have to flip back and forth, or is the place I'm working for
being somewhat stingy when allocating terminal resources?

>> I tended to use UEDIT in split-window mode [...] I added to that,
>> implementing a feature called synchro-scroll
>
>Cool. Those address exactly the situation I was dealing with.

I *LOVED* having the source to the text editor I was using on the 2200,
particularly after I'd finally learned how UEDIT was organized.

If I had the source to CANDE... :-)

Edward Reid

Oct 20, 2002, 10:35:00 PM
On Sun, 20 Oct 2002 1:43:55 -0400, Richard Steiner wrote

> And I can see how that could impose a limitation on what the program on
> the other side (say, an editor) could do. Though I would guess, if the
> whole screen were sent as a bytestream, that one could obtain a rough
> screen position by simply counting data characters. :-)

And that's exactly what the Burroughs terminals don't do, except in
non-forms mode with the page transmit. In forms mode, only the
unprotected data are sent. And of course with a line transmit, only the
current line is sent. And there's no indication in the data sent of
what came from where. A COBOL program defines a raw received screen
(without COMS headers) something like

1 THE-SCREEN.
2 TRANCODE PIC X(5).
2 FIELD-2 PIC X(8).
2 FIELD-3 PIC X(2).

etc.
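
A rough Python equivalent of unpacking that record, just to make the
fixed-layout point concrete (field names from the COBOL above; the
sample values are invented):

def unpack_screen(raw):
    # Only the unprotected field contents arrive, back to back, so the
    # program carves them up purely by position -- nothing in the data
    # says which field is which.
    return {"TRANCODE": raw[0:5],
            "FIELD-2":  raw[5:13],
            "FIELD-3":  raw[13:15]}

print(unpack_screen("INQ  SMITH   NY"))
# {'TRANCODE': 'INQ  ', 'FIELD-2': 'SMITH   ', 'FIELD-3': 'NY'}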

> Damn. I wish I had a 2200 here at home. So many ideas... :-)

Too bad there's no IX equivalent to the LX100 ...

>> Looks terribly cumbersome to me. (But of course I don't use this
>> method, so duh.) Also it suffers from the issue I noted: if you add a
>> single line to the beginning of the Original, you can no longer apply
>> your CCFs without changing them. You may have tools to do this, but
>> it's another step. An A-Series patch file would need no modification.
>
> Actually, that isn't true. Someone who added a single line to the top
> could do so in the following way:

I don't think we're talking about the same situation. In the
environments you're talking about, "rebasing" is both creating a new
source with all changes incorporated AND renumbering the lines.

These are independent in the A-Series method.

For creating new source without renumbering, consider: if you have a
patch file intended for the next release and decide it isn't ready, you
pull the patch (and of course do whatever necessary verification that
the software works without it!). You generate a new source with the
remaining patches. Then you continue working with your postponed patch
against the new source. You may have to verify that no code you are
patching has now changed out from under you, but otherwise your
postponed patch is still valid with no modification needed.

This is because you were able to create a new source, with many and
extensive patches, without renumbering the lines.

For renumbering without creating a new source: you can trivially
resequence the source without merging patches, a concept which does not
exist (and isn't needed) without sequence numbers. However, this is not
normally done, and there's no simple way to update any current patch
files. So this side of the independence isn't useful.

> In the environment I'm in now, one makes a copy of the original source
> file, makes changes, copies the old production source to a file called
> something/BACKUP, and copies the new source over the old source.
>
> Simple, but it makes things difficult when two people make changes at
> the same time. :-)

It's definitely not making use of the available tools. This method is
very common, because it's the full-speed-ahead-damn-the-torpedoes
method. I think we agree failure to plan does not yield good results in
most situations.

>> Typically a new base source is generated for each release. There's no
>> reason to put it off, since the base line numbers won't change as a
>> result of creating a new source file.
>
> We weren't releasing a product at NWA -- it was a set of applications
> for internal use.

Presumably you still went through a release cycle, even if it was much
less formal. There aren't many shops these days that put changes
into production without a verification process.

> Yes. I'd much rather have a program generate some sort of logical name
> (like a module name or a logical milemarker) than a line number in an
> error message -- that way I don't introduce a dependency on having line
> numbers intact. :-)
>
> How's that for circular logic? :-)

Yup. But what it says is that the different systems work AS SYSTEMS,
not as individual pieces. If you were to mix and match without great
care, you'll have a far less usable system than either original. The
logic circles around within one system or the other but does not cross
systems.

> The application environments are different, too. I'm currently quite
> fascinated by the whole concept of "WINDOWS" and "AGENDAS" in COMS, and
> in the way one ties a specific transaction code to a window. I don't
> have it all down yet, but I'm learning.

A lot of people have worked with it for a long time and don't claim to
have it all down ;-).

COMS is one of the best pieces of A-Series software because it was
designed and implemented fairly late in the cycle for that category of
software. Back in the early 1970s, Burroughs provided no MCS (message
control system) at all -- each site designed and implemented its own if
it wanted to do online processing (which was not a given at the time).
A bit belatedly, Burroughs recognized the need to provide a
general-purpose product in this category. They homed in on a program
from ... I think it was from a customer in Australia, but I could be
off base. They spiffed it up a bit and released it ... but this was
GEMCOS. It worked, but not very well, and a lot of the problems lay in
the basic architecture, which did not take advantage of the best
techniques learned by other implementers. (You'll hear -- very rarely
these days -- of the "Recant MCS", after Bruce Recant, who was writing
them.)

A vendor -- I think it was Joseph and Cogan, but again I'm shooting
from the hip -- brought out an MCS called Gateway, which did
incorporate many of the better architectural features. However, by the
time Gateway came out, Burroughs was well on the way to developing
COMS. COMS was designed and implemented from scratch, so it used the
best available architectures and code and did not depend on historical
code. It was amazingly fast at the time and still uses the hardware
very efficiently. It was named MPS (for Message Processing System???)
until shortly before its release, when the marketing people discovered
a dog food of the same name and insisted on a change. The release of
COMS pretty much made Gateway unsalable, which was a shame, because
Gateway was an excellent product also.

This was about 1984 ... I know because I was working as a contractor in
the Burroughs plant in Santa Ana, helping to fix problems in GEMCOS to
make it stable enough for a put-it-on-the-back-burner release. Every
time I'd think "we could redesign this part to work a lot better" I'd
follow it up with "oh that's right, that's what those people on the
other side of the aisle are doing".

For those from other Burroughs backgrounds, note that the A-Series
GEMCOS product resembles the GEMCOS products on other lines in name
only. The others were much better constructed and lasted a lot longer.

> It's also disconcerting, though, and I find I sometimes forget where I
> am and I'll type a trancode in CANDE or CANDE commands in a transaction
> window. It is typical in the A-series world to only have one terminal
> ID and to have to flip back and forth, or is the place I'm working for
> being somewhat stingy when allocating terminal resources?

Most sites allow multiple terminals on the same workstation, partly for
exactly this reason. I don't know of any good reason not to. There was
a slight reason back when the terminals were using poll/select protocol
(too many terminal addresses slowed response on the line), but that's
meaningless for terminals connected by TCP/IP.

The COMS windowing is very useful for some things, but the necessity of
sending an explicit ?ON command for each change makes it quite
cumbersome.

> I *LOVED* having the source to the text editor I was using on the 2200,
> particularly after I'd finally learned how UEDIT was organized.
>
> If I had the source to CANDE... :-)

Sounds pretty doubtful that your site licenses source. Maybe you'll get
another A-Series contract someplace that does license source. CANDE is
kind of fun; the internal architecture was designed to make very
efficient use of the early, small systems. It's tricky as a result, and
the resulting maintenance difficulties are cited by Unisys engineering
as one reason for putting effort into (for example) PW rather than
CANDE. I'm not convinced, but it's a valid argument. The two main
processes in CANDE are called BUMP (the message switcher and queuer)
and GRIND (which handles tasks which require any significant amount of
time).

Edward Reid
(looking for A/NX/LX contracts, resume at
http://user.talstar.com/reide/resume2002.html)


Edward Reid

Oct 20, 2002, 11:11:37 PM
On Sun, 20 Oct 2002 0:24:05 -0400, Richard Steiner wrote

> In practice, at least at the sites at which I've worked, multiple people
> do not edit the same source file concurrently.

By "editing the same source", what we mean is "creating different
patches on the same base". Naturally these people want to be able to
talk about their common base. Line numbers enable that. Are they able
to share their UEDIT bookmarks for the base source for discussion
purposes?

> Instead, one makes a copy of the original source file and edits the copy,
> then uses the tools available to cut a change file for final testing.

In the A-Series method, people using the best available development
tools never modify a copy of the source. Using EDITOR or PW, they
simply create the patch file directly. This is trivial because with
both EDITOR and PW, editing a patch against a base isn't any different
from editing a base, except that you end up with a patch file, and so
on your next editing session EDITOR/PW still knows which lines are your
changed lines.
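
To make that concrete, a patch file is nothing but an ordinary source
file whose records carry sequence numbers keyed to the base. A sketch,
with made-up COBOL74 text and sequence numbers:

   001200     MOVE WS-NEW-RATE TO OUT-RATE.
   001250     ADD 1 TO WS-CHANGE-COUNT.

At merge time the first record replaces base record 001200 (same
number), and the second falls between 001200 and 001300, so it's an
insertion. Deletions are requested with a compiler control record
($ VOID, if memory serves) rather than by touching the base text.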

> Note that, at least in the 2200 environment, most compilers and other
> compilation front-ends are able to merge change files into the source
> as part of the basic compilation process. The programmer doesn't have
> to perform that step at all.

As with the A-Series. Every compiler can merge a single patch file.
SYSTEM/PATCH merges multiple patches, creating a single combined patch
file which the compiler can merge. Major software frequently has
hundreds of patches applied in one release. The MCP might have
thousands. I've had several dozen patches in one release for software I
was developing alone.
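
For the single-patch case, the WFL for such a compile looks roughly
like this -- the file titles are invented, and I'm quoting the CARD and
TAPE internal file names from memory, so check them against your
compiler manual:

   ?BEGIN JOB COMPILE/WITH/PATCH;
   COMPILE MYPROG WITH COBOL74 LIBRARY;
       COMPILER FILE CARD (TITLE = PATCH/MYPROG, KIND = DISK);
       COMPILER FILE TAPE (TITLE = SOURCE/MYPROG, KIND = DISK);
   ?END JOB.

CARD is the patch (the compiler's primary input) and TAPE is the base
source it's merged against -- the names are leftovers from when that's
literally where the files lived.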

> In any case, the "bookmarks" to which I refer in a UEDIT context are not
> saved within the edited source itself, but are dynamic data structures
> that the editor maintains locally for use during an editing session.

So the question is, can these local structures be merged and shared?

> No criticism is intended, but the use of monolithic source elements as
> large as you describe seems to fly in the face of everything I've been
> taught about modular software design.

I don't see any reason that the physical size of source elements should
have anything at all to do with the modularity of the logical design.
In fact, physically separating modules makes it more difficult to apply
more complex structures, even nested modules. I think you're accustomed
to equating a logical module with a physical module, and there's no
reason to do so other than common practice.

> In that situation, the concept of "resequencing" is replaced with the
> concept of rebasing the base source file, thus changing the base set of
> line numbers used for change application.

But as I discussed in another posting, on the A-Series you typically
generate a new source WITHOUT resequencing -- and thus without changing
the base set of line numbers used for patch files.

> Any tool capable enough to be able to recognize DEFINE statements in a
> source listing and do some fairly basic parsing would know this. Even
> the creation/maintenance of a fairly basic symbol table would suffice.
>
>> If your editor does this, then it's doing at least a partial compile.
>
> No, it's parsing the source, something which I would consider quite a
> different activity from the types of things a compiler does, since a
> real compiler does a *lot* more than the simple parsing of file(s).

I think you're underestimating how much parsing is needed. Consider:

100 begin
110 define a = b #;
120 real b;
130 procedure proc;
140 begin
150 real b;
160 a := 0;
170 end proc;
180 a := 0;
190 end.

The parser must determine that the assignment at line 160 references
the declaration of b at line 150, and the assignment at line 180
references the declaration of b at line 120. I came up with this
example in about five seconds ...

COBOL's syntax is harder to parse but may not have any ambiguities of
this sort -- I haven't thought it out.

> The use of hard source line number references in user error messages is
> a totally foreign concept to me.

Remember that as used, it's really a unique, persistent record
identifier rather than what you are accustomed to thinking of as a
"line number". They could be changed to alphanumeric. In fact, the
compilers are perfectly happy with non-numeric sequence numbers, and
you'll see them in run time fault messages if you use them. It's CANDE
and EDITOR and PW that can't handle non-numeric sequence numbers.

> Most of the errors I've seen which provide hard reference information
> for programmer consumption tend to provide reference information such
> as source file and module name/number, or perhaps milestone markers.

Again, just shows that different systems can work. To bring in a
different topic, the usual objection to Microsoft's monopoly is its
anticompetitive practices. While I agree that this is a serious
problem, I also think that so much concentration is a problem per se,
simply because we don't yet know enough about computing to be
discarding many approaches for one. This is a similar situation:
different systems work, and at this time we don't know which is best in
the long run.

Edward Reid

(looking for NX/LX/A contracts)


Stephen Fuld

unread,
Oct 21, 2002, 1:32:25 AM10/21/02
to

"Richard Steiner" <rste...@visi.com> wrote in message
news:bKks9oHp...@visi.com...
> Here in comp.sys.unisys,

snip

> Actually, that isn't true. Someone who added a single line to the top
> could do so in the following way:
>
> -1,1
> This is the new line #1.
> This is the original line #1.
>
> The base source would still be the same, and the application of the
> above CCF would add a line to the top by replacing line #1 with two
> new lines, one of which is the same as the old line #1.

While, of course, that will work, it isn't necessary to duplicate the
original line. There is an implicit -0 at the beginning of the correction
images, so any image placed there (except of course one beginning with a
minus sign or whatever is the current correction indicator) will be placed
in the file before the existing first line.
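
In other words, the whole correction file for that change could be the
single image (text made up, as before):

   This is the new line #1.

and the original line #1 stays in the base untouched.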

Lueko Willms

unread,
Nov 9, 2002, 7:38:00 AM11/9/02
to
On 05.10.02,
kgr...@ix.netcom.com (Ken Grubb) wrote
on /COMP/SYS/UNISYS
in cc1tpu41317bh2fd0...@4ax.com
about Re: Seeking CANDE and/or WFL reference materials...

>> I still say 2200 instead of Clearpath IX. :-)

KG> I still say 1100 instead of 2200.

I still prefer 1100, too. For a simple reason: my primary language is
German, and "11" in German is just one syllable: "elf", while "22" is
much longer: "zweiundzwanzig" (pronounced: tsveioundtsvantsik).

Lüko Willms http://www.mlwerke.de
/--------- L.WI...@jpberlin.de -- All rights reserved --

"Die Interessen der Nation lassen sich nicht anders formulieren als unter
dem Gesichtspunkt der herrschenden Klasse oder der Klasse, die die
Herrschaft anstrebt." - Leo Trotzki (27. Januar 1932)

Dennis

unread,
Nov 17, 2002, 3:36:46 AM11/17/02
to
Have you ever worked for Deluxe Check?

Dennis - Milwaukee

In article <T01m9oHp...@visi.com>, rste...@visi.com wrote:
>Hello, folks...
>
>Due to a strange set of circumstances, I've managed to land a contract
>position in a shop using an A-series box (COBOL74, DMSII).
>
>I've been a 2200 guy my whole career, but I've always been very curious
>about the A-series, so this is pretty cool from my perspective. :-)
>
>Anyway -- I spent a couple of hours yesterday afternoon with one the
>programmers there, and he showed me a number of interesting things,
>including a few WFL files (WFL seems quite powerful at first glance)
>and a few basic editing operations in CANDE.
>
>Are there any references available on the net for either WFL or CANDE?
>
>I've found one site here:
>
> http://www.metalogic.eu.com/Main/docum/ref/cards.htm
>
>that might be applicable, and I'm aware that Don Gregory's publishing
>company sells manuals (the two that caught my eye right aware are the
>_Beginner's Guide to WFL_ and the _Complete CANDE Primer_), but that's
>all I've found so far.
>
>Does anyone have any opinions on those two books from www.gregpub.com?
>
>I know the company I'll be working for has documentation CD-ROMs from
>Unisys, but I don't know at this point what they contain -- it's quite
>possible that all I need to know is resident on them, but I'm not sure.
