
PL/I, COBOL, Advantages, Equivalence, et al


William M. Klein

Sep 18, 2006, 3:32:20 PM
I started to write a similar note last night, but decided not to. Today, I
really think I should.

I hate to disagree AND agree with both D.F. *and* Robin, but it seems that in my
opinion (not fact <G>) they both have similar problems in what they post (on
some topics).

To me, the POWER of a programming language has absolutely NOTHING to do with
"Can you translate XYZ syntax from one language to another in 27 keystrokes, taking
no more than 3.64 lines of code". The power of a programming language is
determined by:

- What type of programming requirements can you SOLVE in a programming language
(what types of applications can the programming language be used for)?
- Given that MOST currently supported programming languages can be used to
solve MOST programming requirements (not all, for either of these), the
questions then become:

- How well does the resulting object (machine) code perform? (You can DO
complex arithmetic in COBOL, for example, but it certainly wouldn't perform very
well.)
- Then compare the run-time performance with the ability to maintain the code
(how easy is it to get programmers to understand, maintain, and enhance the
source code?). The often-cited COBOL requirement is commonly stated as "Can the
average COBOL maintenance programmer understand and fix a 'bug' in the source
code at 3 a.m.?" If not, the code is probably not "easily maintainable".

***
Language wars are usually about as useful as discussions by children of "My
father can beat up your father".

I would use COBOL to write a weather forecasting/modeling application about as
soon as I would recommend using Fortran to write an IBM mainframe CICS
transaction processing routine. I think that languages such as REXX, PERL, even
AWK or SPITBOL/SNOBOL are better for "regular expression" text handling than
either COBOL or PL/I (although both of them usually CAN do such text handling).
If I were writing a new version of a Unix-(like) operating system and didn't use
C (or equivalent), then I can't imagine anyone thinking that I had made the
correct decision. I don't (personally) know what language(s) current games,
CAD/CAM, or embedded systems are being written in, but I can almost guarantee it
isn't REXX (or COBOL).

The "right tool" for the "right job" has and probably always will make sense.
In fact, the HISTORY of PL/I was that much (not all) of its original design
criteria was that it be able to handle (well) what then-current COBOL and
Fortran could already separately do - but what neither could do that the other
could. Even today, if I were in an IBM mainframe shop that did BOTH scientific
and business data processing and wanted to share resources (data and
programmers), PL/I would probably be a better choice than COBOL or Fortran (but
NOT necessarily C/C++). However, it is equally true that both Fortran and COBOL
have added features since the days that PL/I was designed to make them BETTER
(not perfectly) suited to more "general" programming needs.

***

To me, D.F. is (for no useful reason) so bothered by Robin's "fact" statement on
"language power" that he often makes erroneous statements and raises issues that
have little or nothing to do with actual programming language requirements, e.g.
"Translate this syntax from Fortran into some other language" - rather than
"SOLVE this programming requirement in your language of choice". Meanwhile,
Robin states things as "fact" that are neither substantiated nor universally
accepted. Probably MOST (not all) programmers who PREFER using PL/I agree with
them (so they are reasonable to express in this newsgroup and the PL/I FAQ),
but when viewed by "non-PL/I believers" they do tend to reduce Robin's GENERAL
credibility. My biggest objection to the FAQ statement is not that it
accurately reflects a COMMONLY (not universally) held opinion, but rather that
D.F. has indicated that IF it were changed he would stop posting his (often
stupid) challenges in THIS newsgroup. If the FAQ were reworded to more
accurately reflect "opinion" - that "for some applications in some environments"
PL/I *is* a better choice than other programming languages that COULD solve the
same problem - *AND* if D.F. actually did (then) stop his ridiculous posts, I
would think many comp.lang.pl1 readers would be MUCH happier.

***

Having made comments on "fact vs opinion", I did want to express (yet again)
some of what I understand (but can be corrected on) to be the advantages of both COBOL
and PL/I *for IBM mainframe business* programming.

PL/I advantages:
- preprocessor (other than - possibly but not certainly - the HLASM macro
processor, I don't know of any similar tool on IBM or other environments that
has NEARLY the power of the PL/I preprocessor.)
- Bit processing (COBOL can use LE callable services for this, but it is pretty
ugly and not very intuitive. The current ISO 2002 Standard includes bit
support - but it isn't yet available on IBM mainframes. I don't see the need in
MOST IBM mainframe business applications, but it certainly would be nice to have
in COBOL)
- Vector handling and Complex arithmetic (Both of these could be done in
COBOL, but certainly not in "normal" code. Vectors can be handled by "loop"
logic, but certainly not in a good or well-performing manner. Complex
arithmetic would require LE callable services or other hand-written subroutines.
HOWEVER, in my (limited) experience, neither of these is commonly needed in
business logic. In fact, I have never seen a requirement to use a PL/I
subroutine to handle these for an IBM mainframe business application - which would limit
the use of COBOL for the same application. I am certain such applications DO
exist; they simply are NOT common).
- PL/I "native" condition handling does provide portable features not available
in IBM mainframe COBOL. (Again, the "common condition handling" declaratives
model is part of the ISO 2002 COBOL Standard, but not available on IBM
mainframes. The LE condition handling provides most - possibly all - that PL/I
can do, but this is NOT something that the "average" COBOL programmer would know
how to use or feel very comfortable with)
- In a site that wants to SHARE resources (data, programmers, etc.) between a
"scientific" side and a "business" side, PL/I would definitely be a better
choice than COBOL.
- VARYING strings (mentioned in a number of PL/I newsgroup threads) have
similar facilities in COBOL but the design is sufficiently different that the
COBOL standards groups are in the process of adding "prefixed" and "delimited"
ANY LENGTH strings into the next COBOL Standard. I don't know when (or if) they
will be added to IBM mainframe COBOL, but this is certainly something
available today in PL/I and not in COBOL.

* * * *

COBOL Advantages
- first and foremost, COBOL is MORE commonly available at IBM mainframe shops
than PL/I - and this includes programming resources currently available and also
the "pool" of programmers from which to hire NEW programmers. OTOH, the pool is
shrinking (at least outside of India and some "out-sourcing" areas of the
globe). However, I think it is still SIGNIFICANTLY greater than PL/I resources.
It is my perception (and has been ever since I first became aware of PL/I vs
COBOL issues) that there are parts of the world and specific industries in which
PL/I is more popular than it is in "general" data processing in the US.
However, there are NO places in the world and no industries in which IBM PL/I is
SIGNIFICANTLY more popular than IBM COBOL for BUSINESS data processing; while
the converse is true that there are parts of the world and industries in which
IBM mainframe COBOL has significantly greater use than IBM mainframe PL/I.
- IBM's mainframe COBOL fully supports OO COBOL - both with and without Java
interoperability. This has not (as far as I can tell) "caught on" among IBM
mainframe COBOL users, but it does have a "slightly" growing interest - if not
use. (As others have pointed out, you CAN do much of this with PL/I today, but
it isn't much better than trying to do bit-twiddling with COBOL).
- COBOL Standard Report Writer. (This is controversial even among IBM mainframe
COBOL sites; some love it and some hate it. It is currently limited to "fixed
font" reports so I don't personally see it having a "growing future" among
today's IBM mainframe sites, but it does have its uses that require
significantly more complex coding in PL/I).
- The latest COBOL compiler seems to support larger INDIVIDUAL character
data-items. According to:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/ibm3lr40/A.0
"Maximum length of CHARACTER 32767"
while
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/IGY3LR31/APPENDIX1.2
has
"01-49 data item size 134,217,727 bytes"

This is (somewhat) useful for XML data and DB2 "BLOBs". On the other hand,
some of the PL/I limits are 2,147,483,647 where COBOL is limited to
134,217,727. So PL/I may be able to handle some data that COBOL can't.

- I believe that there are more "purchasable" packages that allow for COBOL
customization and subroutines than there are PL/I packages on the IBM mainframe
market today.

***

In general (but probably with SOME exceptions), the rest of the "advantages"
for IBM mainframe data processing between COBOL and PL/I are a matter of "style"
and what a programmer is used to. Certainly, items like "verbosity" (COBOL) or
unstructured condition handling (PL/I) *are* matters of opinion rather than
matters of fact. The popularity of COBOL (over PL/I) in existing and past IBM
mainframe shops does speak to "how common" certain opinions are. On the other
hand, just because some shops have used and continue to use COBOL when PL/I *could*
be used doesn't mean that they have made the BEST choice - any more than the
reverse decision is "always right" when it occurs.

***

Again, I find "language wars" not very useful, but I did think I should post
this note to express MY opinion and hopefully separate SOME "fact" from
"opinion" (mine or others).


--
Bill Klein
wmklein <at> ix.netcom.com


Tom Linden

Sep 18, 2006, 4:03:59 PM

I would go one step further, and drop the importance of object code
efficiency
in lieu of how does the language help the programmer right more reliable
code? How well does the compiler do in semantic analysis?

--
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/

glen herrmannsfeldt

Sep 18, 2006, 5:17:27 PM
William M. Klein <wmk...@nospam.netcom.com> wrote:
(snip)


> To me, the POWER of a programming language has absolutely NOTHING
> to do with "Can you translate XYZ syntax from language to another
> in 27 keystrokes taking

(big snip)

I sometimes find comparisons of languages interesting, in that you
can understand the design goals of a language by seeing what it allows
and disallows. I try to make my comparisons fair, stating facts separately
from opinions. I consider it similar to the "compare and contrast"
assignments for studying literature.

-- glen

Tom Linden

Sep 18, 2006, 5:34:41 PM
On Mon, 18 Sep 2006 13:03:59 -0700, Tom Linden <t...@kednos-remove.com>
wrote:

write

Christopher Browne

Sep 19, 2006, 9:28:36 AM
"Tom Linden" <t...@kednos-remove.com> writes:
> I would go one step further, and drop the importance of object code
> efficiency in lieue of how does the language help the programmer
> right more reliable code? How well does the compiler do in semantic
> analysis?

There are times when that's most relevant, and times when it's not.

When trying to get a Cray (or the likes) to render as many frames per
hour as possible on the latest would-be blockbuster, the goal may
indeed be to maximize object code efficiency.

On the other hand, for "business applications," whether that be
accounting, inventory analysis, or the like, you're probably I/O
bound, and hence not so interested in maxing out the CPU. (And
hence, features that somehow enable reliability / resilience are to be
desired...)

Both are legitimate scenarios that crop up. The latter is probably
more common...
--
let name="cbbrowne" and tld="ca.afilias.info" in String.concat "@" [name;tld];;
<http://dba2.int.libertyrms.com/>
Christopher Browne
(416) 673-4124 (land)

Christopher Browne

Sep 19, 2006, 9:33:40 AM

The fact that different languages are good at expressing different
things has pointed some to the notion that you should learn a new
language (and not one just like the others you already know) every few
years.

COBOL, FORTRAN, and PL/I are three different languages, but to get to something
*truly* different requires going a fair bit further afield.

Very different from any of these (and each other) would be such things
as:
- ICON
- Snobol
- Lisp
- Haskell
- Python
- Perl
- C

There are problems that each of these could solve "in 27 keystrokes"
that would likely take 27 pages in some other choice of language
(perhaps hyperbole, to a small degree...)

Walking in such extra shoes is claimed to expand your ability to think
about different kinds of problems, solutions, and solution methods...
--
output = ("cbbrowne" "@" "ca.afilias.info")

Tom Linden

Sep 19, 2006, 9:33:11 AM
On Tue, 19 Sep 2006 06:28:36 -0700, Christopher Browne
<cbbr...@ca.afilias.info> wrote:

> When trying to get a Cray (or the likes) to render as many frames per
> hour as possible on the latest would-be blockbuster, the goal may
> indeed be to maximize object code efficiency.

This is IMV a special case requiring more hands-on nurturing, much
like a race car vs. an ordinary car. IIRC, SGI used the MIPS compiler
suite, which pruned the stack frame from leaf nodes, making it more
difficult to recover gracefully from errors.

glen herrmannsfeldt

Sep 19, 2006, 2:37:35 PM
Christopher Browne <cbbr...@ca.afilias.info> wrote:
(snip)


> The fact that different languages are good at expressing different
> things has pointed some to the notion that you should learn a new
> language (and not one just like the others you already know) every few
> years.

> COBOL, FORTRAN, and PL/I are three different, but to head to *truly*
> different requires going a fair bit further afield.

> Very different from any of these (and each other) would be such things
> as:
> - ICON
> - Snobol
> - Lisp
> - Haskell
> - Python
> - Perl
> - C

I would also add languages like Mathematica, Matlab, and R.

Interpreted languages are especially convenient for doing things
in a small number of keystrokes, if the language has defined just
the operation that one needs. While the runtime may be slower,
getting the right answer might still be faster.

-- glen

robin

Sep 19, 2006, 11:53:40 PM
William M. Klein wrote in message <7vCPg.71251$PM1....@fe04.news.easynews.com>...

> - Then compare the run-time performance with the ability to maintain the code
>(how easy is it to get programmers to understand, maintain and enhance the
>source code. The often-cited COBOL requirement is commonly stated as "Can the
>average COBOL maintenance programmer understand and fix a "bug" in the source
>code at 3 a.m. in the morning?

Which is why you should be using PL/I.

PL/I programs can be made failsafe, and do not need
debugging at 3 a.m.
PL/I can trap virtually every kind of run-time error,
and can recover and continue, after having produced an
exception report.


robin

Sep 19, 2006, 11:53:42 PM
William M. Klein wrote in message <7vCPg.71251$PM1....@fe04.news.easynews.com>...

>The "right tool" for the "right job" has and probably always will make sense.


>In fact, the HISTORY of PL/I was that much (not all) of its original design
>criteria was that it be able to handle (well) what then-current COBOL and
>Fortran could already separately do - but what neither could do that the other
>could.

It was also designed to do things that Algol could.

There were many tasks that Fortran could not do at all,
and these were addressed in the design of PL/I.

Three of those issues that spring to mind were the ability
to handle errors [Fortran simply gave up, i.e., abended],
dynamic arrays, and character strings.

[With IBM's compilers, it was possible to write a main program
and subroutines in PL/I, and to call a Fortran subprogram.
Even if an error occurred in the Fortran code (e.g., division by zero),
the whole thing did not fall over, because PL/I's error handling
trapped the error and allowed the program to continue.]

> Even today, if I were in an IBM mainframe shop that did BOTH scientific
>and business data processing and wanted to share resources (data and
>programmers), PL/I would probably be a better choice than COBOL or Fortran (but
>NOT necessarily C/C++).

PL/I is unequivocally better than C, in terms of reliability and robustness
in particular, and from every other standpoint.

> However, it is equally true that both Fortran and COBOL
>have added features since the days that PL/I was designed to make them BETTER
>(not perfectly) suited to more "general" programming needs.


But so has PL/I added features. So the relative relationship has
remained unchanged.


robin

Sep 19, 2006, 11:53:41 PM
William M. Klein wrote in message <7vCPg.71251$PM1....@fe04.news.easynews.com>...
>I started to write a similar note last night, but decided not to. Today, I
>really think I should.
>
>I hate to disagree AND agree with both D.F. *and* Robin, but it seems that in my
>opinion (not fact <G>) they both have similar problems in what they post (on
>some topics).
>
>To me, the POWER of a programming language has absolutely NOTHING to do with
>"Can you translate XYZ syntax from language to another in 27 keystrokes taking
>no more than 3.64 lines of code". The power of a programming language is
>determined by:
>
> - What type of programming requirements can you SOLVE in a programming language
>(what types of applications can the programming language be used for)
> - Given that MOST currently supported programming languages can be used to
>solve MOST programming requirements, (not all for either of these),

Can they? I would dispute C, for example.
There is also the issue of how well they do that,
and how reliably.

You speak of debugging COBOL programs at 3 o'clock in the morning.

Let's examine that in the context of your statements above.

A PL/I program is robust and fault tolerant.
In the event that something unexpected should happen,
the program (with its built-in PL/I facilities) can print (or write to
an exception file) the details of the error and all the circumstances
(including the actual data) that caused the error.

And it can then continue with the next lot of data.

No need for someone to come in at 3am to fix the program.
No need to re-run the program to find out where and why the
program crashed.

The problem can be analyzed with a fresh mind in the light of day.


Bob Lidral

Sep 20, 2006, 3:22:53 AM
robin wrote:

> William M. Klein wrote in message <7vCPg.71251$PM1....@fe04.news.easynews.com>...
>

>> [...]


>>To me, the POWER of a programming language has absolutely NOTHING to do with
>>"Can you translate XYZ syntax from language to another in 27 keystrokes taking
>>no more than 3.64 lines of code". The power of a programming language is
>>determined by:
>>
>>- What type of programming requirements can you SOLVE in a programming language
>>(what types of applications can the programming language be used for)
>>- Given that MOST currently supported programming languages can be used to
>>solve MOST programming requirements, (not all for either of these),
>
>
> Can they? I would dispute C, for example.

C can do just about anything PL/I can do. And assembly/machine language
absolutely can do anything PL/I can do (it does get compiled into
machine language, after all).

> There is also the issue of how well they do that,
> and how reliably.

Absolutely.

>
> You speak of bebugging COBOL programs at 3 o'clock in the morning.
>
> Let's examine that in the context of your statements above.
>
> A PL/I program is robust and fault tolerant.

Not exactly. The most you can truthfully say is that a PL/I program can
be robust and fault tolerant. Much depends on the skill and experience
of the programmer.

In my experience, the more powerful and more expressive a programming
language is, the easier it is for inexperienced programmers to get into
real trouble. Well, as a general rule, anyway. I have seen
horribly-written, virtually incomprehensible code written in just about
every computer language I've ever learned well (no fair for me to
complain about hard-to-understand code in languages I don't know well :-) ).

I once heard a theory that one of the reasons COBOL was so verbose was
to make it difficult to do anything really clever because that helped
keep beginning programmers from getting into too much trouble and from
creating a need for the 3:00 AM emergency debugging sessions. Lest all
you COBOL proponents (there are some here, aren't there) take exception
to that, I don't subscribe to that theory. COBOL is an old enough
language that it was designed before such issues had come into
consideration.

One of the costs of using a robust language is the length of time it
takes to learn it well enough to use it properly.

> In the event that something unexpected should happen,
> the program (with its built-in PL/I facilities) can print (or write to
> an exception file) the details of the error and all the circumstances
> including the actual data) that caused the error.
>
> And it can then continue with the next lot of data.
>

Well, only if it's written that way. PL/I programs don't write
themselves. The language certainly has those capabilities, but it's a
little optimistic to assert that all programs written in it make use of
those facilities.

> No need for someone to come in at 3am to fix the program.
> No need to re-run the program to find out where and why the
> program crashed.
>
> The problem can be analyzed with a fresh mind in the light of day.
>
>

That's actually more a function of management than of programming
language. I don't care what language is used; if a shop is sufficiently
under-funded, under-staffed, or over-committed, mistakes will happen that
require the proverbial 3:00 AM debugging session. Choosing PL/I (or
Fortran, or COBOL, or RPG, or ...) won't change that.


Bob Lidral
lidral at alum dot mit edu

David Frank

Sep 20, 2006, 7:40:24 AM

"William M. Klein" <wmk...@nospam.netcom.com> wrote in message
news:7vCPg.71251$PM1....@fe04.news.easynews.com...

>
> I hate to disagree AND agree with both D.F. *and* Robin, but it seems that
> in my opinion (not fact <G>) they both have similar problems in what they
> post (on some topics).

The problem in this newsgroup is that the REALLY talented PL/I users left
years ago, leaving today a cadre of mostly inactive users that don't even know the
syntax for IF THEN or how to translate simple Fortran statements.

E.g., no one is willing to confirm whether PL/I has an equivalent declaration to
Fortran's defined type variables, because they don't trust their own knowledge
well enough to state the facts, OR, in Vowels' case, he won't respond because he
knows it does NOT.

Without such syntax I don't see how the arbitrary list problem can be solved
using PL/I...

type list
character,allocatable :: name(:)
integer,allocatable :: nums(:)
end type
type (list),allocatable :: lists(:)

Donald L. Dobbs

Sep 20, 2006, 12:08:08 PM

Here, I have to agree with Robin. I have been programming in PL/I since
it was first available (I was working at a beta site, ca. 1965). The
only problems we had were with really junior programmers and the integer
divide gotcha. Once we got them past that concept they produced good
robust code. Bottom line: in 41 years of coding in PL/I and being
around PL/I shops, etc. I don't ever recall a single emergency 3 a.m.
bug fixing session. When the programs were released for production they
were solid.

Donald L. Dobbs

Sep 20, 2006, 12:15:53 PM

Bob Lidral wrote:

> robin wrote:
>
>> William M. Klein wrote in message
>> <7vCPg.71251$PM1....@fe04.news.easynews.com>...
>>
>>> [...]
>>> To me, the POWER of a programming language has absolutely NOTHING to
>>> do with
>>> "Can you translate XYZ syntax from language to another in 27
>>> keystrokes taking
>>> no more than 3.64 lines of code". The power of a programming
>>> language is
>>> determined by:
>>>
>>> - What type of programming requirements can you SOLVE in a
>>> programming language
>>> (what types of applications can the programming language be used for)
>>> - Given that MOST currently supported programming languages can be
>>> used to
>>> solve MOST programming requirements, (not all for either of these),
>>
>>
>>
>> Can they? I would dispute C, for example.
>
>
> C can do just about anything PL/I can do.

It doesn't do nesting and scoping of variables very well. And it
certainly can obfuscate.

glen herrmannsfeldt

Sep 20, 2006, 3:00:20 PM
David Frank <dave_...@hotmail.com> wrote:

> E.G. No one is willing to confirm if PL/I has equivalent declaration of
> Fortran's defined type variables because they dont trust there own knowledge
> well enuf to state the facts OR in Vowels case he wont respond because he
> knows it does NOT.

I don't know if it has defined type variables; I presume you mean
something like C's typedef. I don't remember that Fortran does, either.

PL/I has had structures (as far as I know, borrowed from COBOL) since
the beginning, along with structure pointers. List processing
with pointer variables has been part of PL/I from the beginning.



> Without such syntax I dont see how the arbitrary list problem can be solved
> using PL/I...

> type list
> character,allocatable :: name(:)
> integer,allocatable :: nums(:)
> end type
> type (list),allocatable :: lists(:)

I don't know what this has to do with arbitrary lists. If you want
list processing, you need pointers. PL/I has always had allocatable
arrays of structures of allocatable arrays.

-- glen

William M. Klein

Sep 20, 2006, 4:22:32 PM

"robin" <rob...@bigpond.com> wrote in message
news:8X2Qg.32466$rP1....@news-server.bigpond.net.au...

Current (and recent) IBM mainframe COBOL can do the same. The question (as
others have pointed out) is what the programmer has put into the code. Since
24/7 processing has become more common, there certainly IS a lot more
"fail-safe" COBOL than there used to be. However, the "tradition" in COBOL
application programming was to give the user what they SAID they
wanted/expected. Often, this led to middle of the night "it wasn't ever
SUPPOSED to get data like this" application failures. (See comments in this
newsgroup and in the IBM documentation on the performance overhead from using
TOO MUCH PL/I "condition handling".)

Bob Lidral

Sep 21, 2006, 3:37:38 AM
Donald L. Dobbs wrote:

>
>
> Bob Lidral wrote:
>
>> robin wrote:
>>
>>> William M. Klein wrote in message
>>> <7vCPg.71251$PM1....@fe04.news.easynews.com>...
>>>
>>>> [...]
>>>> To me, the POWER of a programming language has absolutely NOTHING to
>>>> do with
>>>> "Can you translate XYZ syntax from language to another in 27
>>>> keystrokes taking
>>>> no more than 3.64 lines of code". The power of a programming
>>>> language is
>>>> determined by:
>>>>
>>>> - What type of programming requirements can you SOLVE in a
>>>> programming language
>>>> (what types of applications can the programming language be used for)
>>>> - Given that MOST currently supported programming languages can be
>>>> used to
>>>> solve MOST programming requirements, (not all for either of these),
>>>
>>>
>>>
>>>
>>> Can they? I would dispute C, for example.
>>
>>
>>
>> C can do just about anything PL/I can do.
>
>
> It doesn't do nesting, and scoping of variables very well. And it
> certainly can obfuscate.
>

My comments were intended to be in the same context as William M.
Klein's: "What type of programming requirements can you SOLVE ..." which
I took to mean the type of application rather than the programming
method. C doesn't have PICTURE or DECIMAL data, either -- and a lot of
other things PL/I doesn't have. OTOH, PL/I doesn't allow the use of an
assignment operator as the condition for an if or while statement nor
does it have short-cut Boolean operators except as an extension. Those
differences, by themselves, don't mean they can't solve the same
programming problems, merely that they could, or in some cases would
have to, use different methods to do so.

Actually, I've seen some interesting and occasionally difficult to
analyze bugs caused by nesting and variable scoping.

David Frank

Sep 21, 2006, 4:30:58 AM

"glen herrmannsfeldt" <g...@seniti.ugcs.caltech.edu> wrote in message
news:ees344$6s$7...@naig.caltech.edu...
> David Frank <dave_...@hotmail.com> wrote:
>
<snip>

>
>> Without such syntax I dont see how the arbitrary list problem can be
>> solved
>> using PL/I...
>
>> type list
>> character,allocatable :: name(:)
>> integer,allocatable :: nums(:)
>> end type
>> type (list),allocatable :: lists(:)
>
> I don't know what this has to do with arbitrary lists. If you want
> list processing, you need pointers.

Obviously not, since my arbitrary-#lists-from-file solution decodes the data into the
data structure (above) with no pointer declarations.

> PL/I has always had allocatable arrays of structures of allocatable
> arrays.
>
> -- glen

OK, show everyone that it does and post a translation of my data structure
declaration above


David Frank

Sep 20, 2006, 7:08:19 PM

"glen herrmannsfeldt" <g...@seniti.ugcs.caltech.edu> wrote in message
news:ees344$6s$7...@naig.caltech.edu...
> David Frank <dave_...@hotmail.com> wrote:
>
>> E.G. No one is willing to confirm if PL/I has equivalent declaration of
>> Fortran's defined type variables because they dont trust there own
>> knowledge
>> well enuf to state the facts OR in Vowels case he wont respond because he
>> knows it does NOT.
>
> I don't know if it has defined type variables, I presume you mean
> something like C's typedef. I don't remember that Fortran does, either.
>

My documentation calls the declarations below Defined Type Variables;
what do you call them?
C's typedef == Fortran TYPE.

> PL/I has had structures (as far as I know, borrowed from COBOL) since
> the beginning, along with structure pointers. List processing
> with pointer variables has been part of PL/I from the beginning.
>
>> Without such syntax I dont see how the arbitrary list problem can be
>> solved
>> using PL/I...
>
>> type list
>> character,allocatable :: name(:)
>> integer,allocatable :: nums(:)
>> end type
>> type (list),allocatable :: lists(:)
>
> I don't know what this has to do with arbitrary lists. If you want
> list processing, you need pointers. PL/I has always had allocatable
> arrays of structures of allocatable arrays.
>
> -- glen

Then quit farting around and show us the translation of the above TYPE
statements to a PL/I allocatable structure with 2 allocatable array members.
If that's true, how about showing us the translation of the above to such a
PL/I structure?


Peter Flass

Sep 21, 2006, 6:36:51 AM
Bob Lidral wrote:
> OTOH, PL/I doesn't allow the use of an
> assignment operator as the condition for an if or while statement

Good one! This "feature" alone has probably led to more C programming
errors than all others combined.
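
As an aside (not from the original thread), here is a minimal C sketch of the
pitfall being discussed: "=" in a condition compiles cleanly but assigns rather
than compares. The variable names are made up for illustration.

#include <stdio.h>

int main(void) {
    int limit = 10;
    int count = 0;

    /* Intended comparison was (count == limit); this assigns 10 to count,
       and the condition is then the assigned value, i.e. true here. */
    if (count = limit) {   /* BUG */
        printf("count \"reached\" the limit\n");
    }

    /* The comparison PL/I forces you to write, expressed correctly in C: */
    if (count == limit) {
        printf("count really equals limit\n");
    }
    return 0;
}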

John W. Kennedy

Sep 21, 2006, 9:58:21 AM
Bob Lidral wrote:
> My comments were intended to be in the same context as William M.
> Klein's: "What type of programming requirements can you SOLVE ..." which
> I took t mean the type of application rather than the programming
> method. C doesn't have PICTURE or DECIMAL data, either -- and a lot of
> other things PL/I doesn't have. OTOH, PL/I doesn't allow the use of an
> assignment operator as the condition for an if or while statement nor
> does it have short-cut Boolean operators except as an extension.

Worse than that -- it doesn't have short-cut Boolean operators, period,
but some compilers produce short-cut semantics as a side-effect of
optimization.

--
John W. Kennedy
"The blind rulers of Logres
Nourished the land on a fallacy of rational virtue."
-- Charles Williams. "Taliessin through Logres: Prelude"

robin

Sep 21, 2006, 10:09:01 AM
William M. Klein wrote in message ...

>
>"robin" <rob...@bigpond.com> wrote in message
>news:8X2Qg.32466$rP1....@news-server.bigpond.net.au...
>> William M. Klein wrote in message
>> <7vCPg.71251$PM1....@fe04.news.easynews.com>...
>>
>>> - Then compare the run-time performance with the ability to maintain the code
>>>(how easy is it to get programmers to understand, maintain and enhance the
>>>source code. The often-cited COBOL requirement is commonly stated as "Can the
>>>average COBOL maintenance programmer understand and fix a "bug" in the source
>>>code at 3 a.m. in the morning?
>>
>> Which is why you should be using PL/I.
>>
>> PL/I programs can be made failsafe, and do not need
>> debugging at 3a.m. in the morning.
>> PL/I can trap virtually every kind of run-time error,
>> and can recover and continue, after having produced an
>> exception report.
>
>Current (and recent) IBM mainframe COBOL can do the same. The question (as
>others have pointed out) is what the programmer has put into the code. Since
>24/7 processing has become more common,

In those days (1960s) 24-hour processing was the norm.
It was unusual NOT to run around the clock.
Computers were very expensive, and often had inadequate
processing capacity.

> there certainly IS a lot more
>"fail-safe" COBOL than there used to be. However, the "tradition" in COBOL
>application programming was to give the user what they SAID they
>wanted/expected. Often, this led to middle of the night "it wasn't ever
>SUPPOSED to get data like this" application failures.

Sounds like poor programming. One of the first things
a production program must do is to check that the data is valid,
and to produce an exception report for any data that is not.
If it doesn't at least do that, the program is not robust.
That has nothing to do with interrupt handling.
Now, with condition handling, unforeseen problems can be
trapped and handled, and it doesn't require a 3am debugging session.

> (See comments in this
>newsgroup and in the IBM documentation on the performance overhead from using
>TOO MUCH PL/I "condition handling.)

Where?


robin

Sep 21, 2006, 10:09:02 AM
Bob Lidral wrote in message <4510EC4D...@comcast.net>...

>robin wrote:
>
>> William M. Klein wrote in message <7vCPg.71251$PM1....@fe04.news.easynews.com>...
>>
>>> [...]
>>>To me, the POWER of a programming language has absolutely NOTHING to do with
>>>"Can you translate XYZ syntax from language to another in 27 keystrokes taking
>>>no more than 3.64 lines of code". The power of a programming language is
>>>determined by:
>>>
>>>- What type of programming requirements can you SOLVE in a programming language
>>>(what types of applications can the programming language be used for)
>>>- Given that MOST currently supported programming languages can be used to
>>>solve MOST programming requirements, (not all for either of these),
>>
>>
>> Can they? I would dispute C, for example.
>
>C can do just about anything PL/I can do. And assembly/machine language
> absolutely can do anything PL/I can do (it does get compiled into
>machine language, after all).
>
>> There is also the issue of how well they do that,
>> and how reliably.
>
>Absolutely.
>
>> You speak of bebugging COBOL programs at 3 o'clock in the morning.
>>
>> Let's examine that in the context of your statements above.
>>
>> A PL/I program is robust and fault tolerant.
>
>Not exactly. The most you can truthfully say is that a PL/I program can
>be robust and fault tolerant.

That's exactly what I said in my immediately-preceding post
(that was posted within a few seconds of the other), and is quoted here :-


"PL/I programs can be made failsafe, and do not need
"debugging at 3a.m. in the morning.
"PL/I can trap virtually every kind of run-time error,
"and can recover and continue, after having produced an
"exception report."

> Much depends on the skill and experience
>of the programmer.

It takes no skill to include SIZE, STRINGRANGE, and SUBSCRIPTRANGE
in a program.
And as for validating data, one of the first things a beginner
learns is the importance of validating data.

But yes, to recover from an error does require some experience.

>In my experience, the more powerful and more expressive a programming
>language is, the easier it is for inexperienced programmers to get into
>real trouble. Well, as a general rule, anyway. I have seen
>horribly-written, virtually incomprehensible code written in just about
>every computer language I've ever learned well (no fair for me to
>complain about hard-to-understand code in languages I don't know well :-) ).
>
>I once heard a theory that one of the reasons COBOL was so verbose was
>to make it difficult to do anything really clever because that helped
>keep beginning programmers from getting into too much trouble and from
>creating a need for the 3:00 AM emergency debugging sessions. Lest all
>you COBOL proponents (there are some here, aren't there) take exception
>to that, I don't subscribe to that theory. COBOL is an old enough
>language that it was designed before such issues had come into
>consideration.
>
>One of the costs of using a robust language is the length of time it
>takes to learn it well enough to use it properly.
>
>> In the event that something unexpected should happen,
>> the program (with its built-in PL/I facilities) can print (or write to
>> an exception file) the details of the error and all the circumstances
>> including the actual data) that caused the error.
>>
>> And it can then continue with the next lot of data.
>>
>Well, only if it's written that way.

That's what I said.

> PL/I programs don't write
>themselves. The language certainly has those capabilities, but it's a
>little optimistic to assert that all programs written in it make use of
>those facilities.

That's why I said "can". And recall my text that I quoted above.

>> No need for someone to come in at 3am to fix the program.
>> No need to re-run the program to find out where and why the
>> program crashed.
>>
>> The problem can be analyzed with a fresh mind in the light of day.
>>
>>
>That's actually more a function of management than of programming
>language.

No it's not.

> I don't care what language is used, if a shop is sufficiently
>under-funded, under-staffed, or over-committed mistakes will happen that
>require the proverbial 3:00 AM debugging session.

Not in PL/I, because putting in the code to make a program robust
is trivial.

> Choosing PL/I (or
>Fortran, or COBOL, or RPG, or ... won't change that).

Choosing Fortran* won't change that, and other languages too, including C.

But choosing PL/I can and does change that,
because PL/I was designed for real-time processing,
and has the above-mentioned facilities built-in and ready to use.

>Bob Lidral

______________
footnote
* Earlier versions of Fortran simply crashed when a division by zero
or some such thing occurred.


Tom Linden

Sep 21, 2006, 10:32:22 AM
On Thu, 21 Sep 2006 06:58:21 -0700, John W. Kennedy
<jwk...@attglobal.net> wrote:

> Bob Lidral wrote:
>> My comments were intended to be in the same context as William M.
>> Klein's: "What type of programming requirements can you SOLVE ..."
>> which I took t mean the type of application rather than the programming
>> method. C doesn't have PICTURE or DECIMAL data, either -- and a lot of
>> other things PL/I doesn't have. OTOH, PL/I doesn't allow the use of an
>> assignment operator as the condition for an if or while statement nor
>> does it have short-cut Boolean operators except as an extension.
>
> Worse than that -- it doesn't have short-cut Boolean operators, period,
> but some compilers produce short-cut semantics as a side-effect of
> optimization.
>

That is a questionable optimization, and strictly speaking is not legal.
We added extensions to handle these:
http://www.kednos.com/pli/docs/REFERENCE_MANUAL/6291pro_016.html#index_x_799
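
As an illustrative aside (not from the thread), this is the kind of C code that
depends on short-circuit evaluation; since standard PL/I does not guarantee that
"&" skips its second operand, the same guard is usually written as a nested IF
(or with an extension such as the one linked above). The function name is made up.

#include <stddef.h>
#include <string.h>

/* Returns 1 if s is non-null and non-empty. The && guarantees that
   strlen(s) is never evaluated when s is NULL. */
int has_text(const char *s) {
    return s != NULL && strlen(s) > 0;
}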

glen herrmannsfeldt

Sep 21, 2006, 2:22:01 PM
David Frank <dave_...@hotmail.com> wrote:

>>> type list
>>> character,allocatable :: name(:)
>>> integer,allocatable :: nums(:)
>>> end type
>>> type (list),allocatable :: lists(:)

> Then quit farting around and show us the translation of above TYPE
> statements
> to a PL/I allocatable structure with 2 allocatable array members.
> If thats true, how about showing us the translation of above to such a
> PL/I structure>

If it were a defined type variable, it wouldn't need the type keyword.

In C,

typedef struct {
    char *name;
    int  *nums;
} list;

list *lists;

Note that it doesn't say struct list. Can you do that in Fortran
(through 2008)?

-- glen
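
As a hedged aside (not part of glen's post), here is one way the declaration above
might be fleshed out in C, allocating the per-list arrays with malloc. The nnums
field and the init_list helper are additions made up for this sketch.

#include <stdlib.h>
#include <string.h>

typedef struct {
    char  *name;    /* dynamically allocated id string           */
    int   *nums;    /* dynamically allocated array of numbers    */
    size_t nnums;   /* added so the length travels with the data */
} list;

/* Allocate one list entry holding a copy of id and room for n numbers;
   returns 0 on success, -1 on allocation failure. */
int init_list(list *l, const char *id, size_t n) {
    l->name = malloc(strlen(id) + 1);
    l->nums = malloc(n * sizeof *l->nums);
    if (l->name == NULL || l->nums == NULL) {
        free(l->name);
        free(l->nums);
        return -1;
    }
    strcpy(l->name, id);
    l->nnums = n;
    return 0;
}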

William M. Klein

Sep 21, 2006, 5:02:21 PM
"robin" <rob...@bigpond.com> wrote in message
news:22xQg.33336$rP1....@news-server.bigpond.net.au...

> Bob Lidral wrote in message <4510EC4D...@comcast.net>...
>>robin wrote:
<snip>

>> Choosing PL/I (or
>>Fortran, or COBOL, or RPG, or ... won't change that).
>
> Choosing Fortran* won't change that, and other languages too, including C.
>
> But choosing PL/I can and does change that,
> because PL/I was designed for real-time processing,
> and has the above-mentioned facilities built-in and ready to use.
>

Again, for IBM mainframe commercial programming (the target of my ORIGINAL
comments), this is just as built-in for COBOL as it is for PL/I. Programmers
either will or won't use it based on whatever experience and design requirements
they have/get.

David Frank

Sep 22, 2006, 8:15:40 AM

"glen herrmannsfeldt" <g...@seniti.ugcs.caltech.edu> wrote in message
news:eeul89$n45$5...@naig.caltech.edu...

>
> In C,
>
> typedef struct {
> char *name;
> int *nums;
> } list;
>
> list *lists;
>
> Note that it doesn't say struct list. Can you do that in Fortran
> (through 2008)?

In Fortran


type list
character,allocatable :: name(:)
integer,allocatable :: nums(:)
end type
type (list),allocatable :: lists(:)

Please show Vowels how to translate above Fortran Derived Type declarations
to PL/I
or retract your statement that PL/I has had this capability "since the
beginning"


glen herrmannsfeldt

Sep 25, 2006, 11:41:37 PM
David Frank wrote:

(snip)

> In Fortran
> type list
> character,allocatable :: name(:)
> integer,allocatable :: nums(:)
> end type
> type (list),allocatable :: lists(:)

> Please show Vowels how to translate above Fortran Derived Type declarations
> to PL/I
> or retract your statement that PL/I has had this capability "since the
> beginning"

DCL 1 lists(*) ctl
2 name(*) ctl char(1),
2 nums(*) ctl fixed bin;

-- glen

James J. Weinkam

Sep 26, 2006, 4:14:43 AM

Sorry, Glen, this won't do. The controlled attribute can only be applied to a
level 1 identifier. Here's what it takes:

dcl
  (m,l) bin fixed(15), n bin fixed(31),
  1 arrays(m) ctl,
    2 id ptr,
    2 values ptr,

  1 arrayid based,
    2 idlen bin fixed(15),
    2 idtext char(l refer(idlen)),

  1 arrayvalues based,
    2 vlen bin fixed(31),
    2 numbers(n refer(vlen)) bin fixed(31);

As in DF's original program, you read the entire file into a (presumably huge)
character array and count the *'s (to do it right they should be only the *'s
that come in column 1, i.e., the very first character or one immediately
following a cr-lf pair), giving m. (DF's program will fail if there are any *'s
in the array id's.) Then you allocate the controlled structure, arrays.

Next you scan the input again and process each of the m arrays. For the i-th array,
you find the i-th * in column 1 and get l (that's an ell, not a one), the length
of the rest of the line, i.e., up to the next cr-lf, then allocate an arrayid,
stuff the l characters into idtext, and store the pointer in id(i). Then for
each line up to the next * in column 1, you count the commas and add 1, and sum
these up, giving n. Now you can allocate an arrayvalues and store the pointer in
values(i). Finally you go back to the line following that with the i-th *
(whose location you have thoughtfully remembered) and process each line again,
picking off the character values between commas and/or line boundaries and
converting them to bin fixed(31) (by assignment) while stuffing the j-th result
into values(i)->numbers(j). It's really rather simple.

Needless to say I am not going to post the code. If DF wants to see that he can
download a PL/I LRM and work it out for himself. I've given him enough hints.

Unfortunately, this so called challenge is not typical of how things are done in
real data processing applications. Usually the files are much too big even to
dream about reading the entire thing into memory.

David Frank

Sep 26, 2006, 9:16:37 AM

"James J. Weinkam" <j...@cs.sfu.ca> wrote in message
news:Tj5Sg.35750$cz3.14004@edtnps82...

<snip description of a typical pointer allocation chain>

> It's really rather simple.

What you describe is not simple, and is NOT equivalent to other languages'
syntax using derived type variables.
Can't you just admit PL/I has no such support?

>
> Needless to say I am not going to post the code. If DF wants to see that
> he can download a PL/I LRM and work it out for himself. I've given him
> enough hints.
>

At the end of my program's reading of the lists, I show that the data is contained
in a data structure DIRECTLY addressable with standard syntax like SUM etc.,
e.g.
id = lists(n)%name                 ! directly accesses n'th list name
total = total + sum(lists(n)%nums) ! add sum of n'th list's numbers

Your PROPOSED data structure is not an entity that contains ALL the data in
its internal arrays but a string of pointer-connected arrays.

> Unfortunately, this so called challenge is not typical of how things are
> done in real data processing applications. Usually the files are much too
> big even to dream about reading the entire thing into memory.

Many PCs have more memory than the old main-frame clunkers running 30-year-old
COBOL programs.

The first solution I posted over in comp.lang.fortran does not read the file
completely into memory before starting its processing
http://home.earthlink.net/~dave_gemini/list.f90

In any case, while interesting, your proposal remains just that:
a description that you think might work, but without actual source or any
proof posted.


robin

Sep 26, 2006, 10:36:07 AM
William M. Klein wrote in message ...

>"robin" <rob...@bigpond.com> wrote in message
>news:22xQg.33336$rP1....@news-server.bigpond.net.au...
>> Bob Lidral wrote in message <4510EC4D...@comcast.net>...
>>>robin wrote:
><snip>
>>> Choosing PL/I (or
>>>Fortran, or COBOL, or RPG, or ... won't change that).
>>
>> Choosing Fortran* won't change that, and other languages too, including C.
>>
>> But choosing PL/I can and does change that,
>> because PL/I was designed for real-time processing,
>> and has the above-mentioned facilities built-in and ready to use.

>Again, for IBM mainframe commercial programming (the target of my ORIIGNAL
>comments), this is just as built-in for COBOL as it is for PL/I.

Consider the following PL/I fragment:

on error snap begin;
   on error system;
   put data (p, q, r); /* Could create an exception file here */
   go to start_set;
   end;

start_set: do forever;
   <<stuff>>
   end start_set;

which regains control for any kind of error.
The ON statement in this fragment specifies the action
to be taken in the event of any kind of error.
Having executed the ON statement (not the ON-unit),
the program then enters the main loop specified by DO - END.
Now, in the event that some problem arises, the ON-unit
is executed; as well as producing details of the error (including
naming the error and the location where it occurred),
PL/I produces a traceback giving the names of the procedures
in the calling chain and where they were invoked.
The names and values of variables P, Q, and R are then printed
[in practice, some kind of detailed error report would be produced].
Finally, the program resumes with the next set of data.


glen herrmannsfeldt

Sep 26, 2006, 1:48:10 PM
James J. Weinkam <j...@cs.sfu.ca> wrote:

> glen herrmannsfeldt wrote:

>> DCL 1 lists(*) ctl
>> 2 name(*) ctl char(1),
>> 2 nums(*) ctl fixed bin;
> Sorry, Glen, this won't do. The controlled attribute can only be
> applied to a level 1 identifier. Here's what it takes:

I wondered about that just after I posted it. Still,
you can use (*) and specify the size later.



> dcl
> (m,l) bin fixed(15), n bin fixed(31),
> 1 arrays(m) ctl,
> 2 id ptr,
> 2 values ptr,

> 1 arrayid based,
> 2 idlen bin fixed(15),
> 2 idtext char(l refer(idlen)),

> 1 arrayvalues based,
> 2 vlen bin fixed(31),
> 2 numbers(n refer(vlen)) bin fixed(31),

I probably would have done pointers to controlled
arrays or structures.

I don't know why DF has character arrays instead
of character variables, though. I forget when
Fortran allowed allocatable length character variables.

> Unfortunately, this so called challenge is not typical of how
> things are done in real data processing applications.
> Usually the files are much too big even to
> dream about reading the entire thing into memory.

This is true, and it is done way too often. Even so, there
are still applications where reading a large amount of
data of unknown size into memory is useful. C's realloc()
usually works pretty well, if you only realloc() one array
(or array of struct) inside the loop.

-- glen
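
A minimal sketch of the realloc() growth pattern described above, reallocating a
single array of struct inside the read loop; the identifiers are illustrative,
not from the thread.

#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int value;
} record;

/* Read integers of unknown count from stdin into a dynamically grown
   array; the length is returned through *count (0 and NULL on failure). */
record *read_records(size_t *count) {
    record *buf = NULL;
    size_t used = 0, cap = 0;
    int v;

    while (scanf("%d", &v) == 1) {
        if (used == cap) {                       /* grow geometrically */
            size_t newcap = cap ? cap * 2 : 16;
            record *tmp = realloc(buf, newcap * sizeof *tmp);
            if (tmp == NULL) {                   /* give up cleanly on failure */
                free(buf);
                *count = 0;
                return NULL;
            }
            buf = tmp;
            cap = newcap;
        }
        buf[used++].value = v;
    }
    *count = used;
    return buf;
}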

James J. Weinkam

Sep 26, 2006, 3:47:37 PM
glen herrmannsfeldt wrote:
>
> I wondered about that just after I posted it. Still,
> you can use (*) and specify the size later.
>
>
The only reason to use * is if the allocated dimension is going to be in
different variables with different allocate statements. In fact, even if a
variable is specified for the dimension in the declare statement, it can always
be overridden in an allocate statement. So the * doesn't introduce any new
capability.

robin

Sep 27, 2006, 6:33:12 AM
James J. Weinkam wrote in message ...

>Sorry, Glen, this won't do. The controlled attribute can only be applied to a
>level 1 identifier. Here's what it takes:
>
> dcl
> (m,l) bin fixed(15), n bin fixed(31),
> 1 arrays(m) ctl,
> 2 id ptr,
> 2 values ptr,
>
> 1 arrayid based,
> 2 idlen bin fixed(15),
> 2 idtext char(l refer(idlen)),
>
> 1 arrayvalues based,
> 2 vlen bin fixed(31),
> 2 numbers(n refer(vlen)) bin fixed(31),
>
>as in DF's original program, you read the entire file into a (presumably huge)
>character array and count the *'s

No, this won't do at all.
You say that the array is huge, but your declarations of bounds
are for arrays and variables to be up to only 32,767.

> (to do it right they should be only the *'s
>that come in column 1, i.e., the very first character or one immediately
>following a cr-lf pair) giving m. (DF's program will fail if there are any *'s
>in the array id's.) Then you allocate the controlled structure, arrays. Next
>you scan the input again

This won't do at all.
You are reading the data twice.

> and process each of the m arrays. For the i-th array,
>you find the i-th * in column 1 and get l (that's an ell not a one) the length
>of the rest of the line, i.e., up to the next cr-lf, then allocate an arrayid,
>stuff the l characters into idtext and store the pointer in id(i). Then for
>each line up to the next * in column 1, you count the commas and add 1 and sum
>these up giving n. Now you can allocate an arrayvalues and store the pointer in
>values(i). Finally you go back to the line following that with the i-th *
>(whose location you have thoughtfully remembered) and process each line again
>picking off the character values between commas and/or line boundaries,
>converting them to bin fixed(31) (by assignment) while stuffing the j-th result
>into values(i)->numbers(j). It's really rather simple.

It is?


James J. Weinkam

Sep 27, 2006, 8:55:01 PM
robin wrote:
> James J. Weinkam wrote in message ...
>
>>Sorry, Glen, this won't do. The controlled attribute can only be applied to a
>>level 1 identifier. Here's what it takes:
>>
>> dcl
>> (m,l) bin fixed(15), n bin fixed(31),
>> 1 arrays(m) ctl,
>> 2 id ptr,
>> 2 values ptr,
>>
>> 1 arrayid based,
>> 2 idlen bin fixed(15),
>> 2 idtext char(l refer(idlen)),
>>
>> 1 arrayvalues based,
>> 2 vlen bin fixed(31),
>> 2 numbers(n refer(vlen)) bin fixed(31),
>>
>>as in DF's original program, you read the entire file into a (presumably huge)
>>character array and count the *'s
>
>
> No, this won't do at all.
> You say that the array is huge,

The main purpose of my post was to point out that you cannot nest controlled
variables within an aggregate. I then showed how the "problem" can be
approached using the same methodology DF used in his original post, namely:

1. Find out the file size, allocate space and read the entire file into it. (I
did not actually show the data structure for this part but it would have to be
an array with large bounds.)

2. Count the number of *'s and allocate an array of that length to keep track of
the id's and number arrays.

3. For each id line:

a) find the end, allocate the string, and assign the id to it;

b) count the ,'s and line ends up to the next id line, allocate the number
array, and place the values in it (a rough C sketch of these counting passes
follows below).
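
A rough sketch of those two counting passes in C (an editorial illustration,
assuming the whole file is already in a buffer buf of length len and that
lines end with cr-lf, as described above):

#include <stddef.h>

/* Step 2: count array headers, i.e. a '*' in column 1 (offset 0 or
   immediately following a cr-lf pair). */
size_t count_arrays(const char *buf, size_t len) {
    size_t m = 0;
    for (size_t i = 0; i < len; i++)
        if (buf[i] == '*' &&
            (i == 0 || (i >= 2 && buf[i-2] == '\r' && buf[i-1] == '\n')))
            m++;
    return m;
}

/* Step 3b: count the values in the data lines buf[start..end) by counting
   commas plus line ends (each line contributes commas + 1 values). */
size_t count_values(const char *buf, size_t start, size_t end) {
    size_t n = 0;
    for (size_t i = start; i < end; i++)
        if (buf[i] == ',' || buf[i] == '\n')
            n++;
    return n;
}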

I then pointed out that this so called "challenge" posed by DF is not typical of
the approach taken in most modern data processing applications.


> but your declarations of bounds
> are for arrays and variables to be up to only 32,767.
>

>
>>(to do it right they should be only the *'s
>>that come in column 1, i.e., the very first character or one immediately
>>following a cr-lf pair) giving m. (DF's program will fail if there are any *'s
>>in the array id's.) Then you allocate the controlled structure, arrays. Next
>>you scan the input again
>
>
> This won't do at all.
> You are reading the data twice.

No, I am reading it once and scanning it several times internally which is the
approach used by DF. I am not endorsing that approach. The method that you
posted read the file once and created an intermediate data structure consisting
of an allocation of a controlled binary fixed(31) variable for each number in
the array currently being processed. Once the number of values in that array
was known, you allocated an array of the desired size and copied each value to
the appropriate array element then freed the value. This is an additional
internal scan of the data. In personal PL/I for OS/2 the intermediate data
structure occupies 12 times as much storage as the final array. This ratio may
be different in other implementations, but is probably at least 6 in all
implementations. Moreover the allocation and freeing of each element adds
significant time overhead to the large space overhead.


>
>
>>and process each of the m arrays. For the i-th array,
>>you find the i-th * in column 1 and get l (that's an ell not a one) the length
>>of the rest of the line, i.e., up to the next cr-lf, then allocate an arrayid,
>>stuff the l characters into idtext and store the pointer in id(i). Then for
>>each line up to the next * in column 1, you count the commas and add 1 and sum
>>these up giving n. Now you can allocate an arrayvalues and store the pointer in
>>values(i). Finally you go back to the line following that with the i-th *
>>(whose location you have thoughtfully remembered) and process each line again
>>picking off the character values between commas and/or line boundaries,
>>converting them to bin fixed(31) (by assignment) while stuffing the j-th result
>>into values(i)->numbers(j). It's really rather simple.
>
>
> It is?
>
>

It is.

James J. Weinkam

Sep 28, 2006, 1:36:26 AM
James J. Weinkam wrote:

> robin wrote:
>
>
>> but your declarations of bounds
>> are for arrays and variables to be up to only 32,767.
>>
Sorry for replying to my own post but the following sentence failed to appear:

Also if you look again you will see that the variables for the bound of the
number array are bin fixed(31). It is only the number of arrays and the lengths
of the id's that were assumed to be <32768. If that is too small it is readily
changed.

adaw...@sbcglobal.net

Oct 4, 2006, 9:23:02 AM

"glen herrmannsfeldt" <g...@seniti.ugcs.caltech.edu> wrote in message
news:een2d7$lol$4...@naig.caltech.edu...
>
> I sometimes find comparisons of languages interesting, in that you
> can understand the design goals of a language by seeing what it allows
> and disallows. I try to make my comparisons fair, stating facts separately
> from opinions. I consider it similar to the "compare and contrast"
> assignments for studying literature.
>
Good observation. When comparing programming languages, one
needs to set out the criteria by which such a comparison will be
evaluated. Not every language is good at everything. Not every
language will evaluate well under every criterion.

Some years ago, one of the popular computer science magazines
had a "hello world" contest to determine which language could
solve this problem in the fewest statements. Such
contests are usually pretty useless, but this one was among the
silliest of all.

The criteria for language evaluation need to be carefully selected,
clearly stated, and weighted according to their importance in
the targeted problem-solution domain.

Often, the inherent virtues of the language will not be the dominant
concerns. For example, people often choose C++ for their
programming projects, but that language is characterized largely
by its potential for creating flawed software. In fact, it often
causes me to wonder why anyone would choose a toolset that
is error-prone for creating software and expect a result that is
error-free. The reasons for choosing C++, the criteria being
used, have little to do with the inherent difficulties of that
language and more to do with its widespread use by
the programming community.

If one of my criteria is that a language support object-oriented
programming, PL/I will be quickly eliminated from consideration.
If I am concerned about support for some specific database
environment, the language must include direct support for that
environment. If dependability is the foremost concern, we would
probably choose Eiffel or Ada. If we are writing a bunch of
"hello world" programs, we would probably want to use a
simple interpreted scripting language.

Arguing about programming languages in the abstract is a lot
like saying, "My dog is better than your dog!" Better at
what? Some dogs are better at pointing to a potential
winged target hiding in the brush. Others are better at
catching a frisbee. If I am going deer hunting, I really
don't want to take a noisy little chihuahua.

So, when comparing programming languages, we need
to understand the bounds within which each comparison
will be made. We need to agree on the criteria. We
must get beyond the abstract and go to the heart of the
problem domain in which we intend to use that language.

Richard Riehle


Tom Linden

unread,
Oct 4, 2006, 9:28:33 AM10/4/06
to

Support for OOP as a criterion seems more of a fashion statement, at
least it could at best be a derived requirement from more fundamental
criteria.

>
> Arguing about programming languages in the abstract is a lot
> like saying, "My dog is better than your dog!" Better at
> what? Some dogs are better at pointing to a potential
> winged target hiding in the brush. Others are better at
> catching a frisbee. If I am going deer hunting, I really
> don't want to take a noisy little chihuahua.
>
> So, when comparing programming languages, we need
> to understand the bounds within which each comparison
> will be made. We need to agree on the criteria. We
> must get beyond the abstract and go to the heart of the
> problem domain in which we intend to use that language.
>
> Richard Riehle
>
>

--

LR

unread,
Oct 4, 2006, 10:35:28 AM10/4/06
to
adaw...@sbcglobal.net wrote:


>
> Good observation. When comparing programming languages, one
> needs to set out the criteria by which such a comparison will be
> evaluated. Not every language is good at everything. Not every
> language will evaluate well under every criteria.

> The criteria for language evaluation need to be carefully selected,
> clearly stated, and weighted according to their importance in
> the targeted problem-solution domain.

Yes, particularly if you're going to go on to make statements like the
one below. So what are your criteria?

>
> Often, the inherent virtues of the language will not be the dominant
> concerns. For example, people often choose C++ for their
> programming projects,

And for good reason IMO, but of course YMMV.

> but that language is characterized largely
> by its potential for creating flawed software.

Really? Um, can you tell us who characterizes it that way? And for
what reasons? Probably, keeping in mind that any language can be abused.

> In fact, it often
> causes me to wonder why anyone would choose a toolset that
> is error-prone for creating software and expect a rresult that is
> error-free.

Can you please be more specific about the "error prone"?


> The reasons for choosing C++, the criteria being
> used, has little to do with the inherent difficulties of that
> language and more to do with its widespread use by
> the programming community.

Widespread use? Again, yes, and for good reason.

LR


LR

unread,
Oct 4, 2006, 10:37:04 AM10/4/06
to
Tom Linden wrote:


> Support for OOP as a criterion seems more of a fashion statement, at
> least it could at best be a derived requirement from more fundamental
> criteria.


Seems more like an ease of use thing to me than a fashion statement.
Can you please tell me why you seem to have implied that OOP is merely
fashion? Do you think it will go away?

LR

Tom Linden

unread,
Oct 4, 2006, 11:02:53 AM10/4/06
to

Good programmers can write good code in any language; some may require more
effort than others. But there aren't that many 'good' programmers.
Languages like C++ do not enforce adequate discipline. Overloading of
objects leads, with the passage of time, to diffuse meaning, resulting in
disuse of objects, contrary to one of the stated advantages of OOP. Class
libraries, I would suspect, aren't as rigorously tested as traditional
compilers like PL/I, Ada or Cobol, as many are amended and cobbled together
for a particular application.

No, I don't think it will go away. If selection were based on the merits
of the language everyone would be coding in PL/I.

LR

unread,
Oct 4, 2006, 12:13:26 PM10/4/06
to
Tom Linden wrote:

> On Wed, 04 Oct 2006 07:37:04 -0700, LR <lr...@superlink.net> wrote:
>
>> Tom Linden wrote:
>>
>>
>>> Support for OOP as a criterion seems more of a fashion statement, at
>>> least it could at best be a derived requirement from more fundemantal
>>> criteria.
>>
>>
>>
>> Seems more like an ease of use thing to me than a fashion statement.
>> Can you please tell me why you seem to have implied that OOP is
>> merely fashion? Do you think it will go away?
>>
>> LR
>
>
> Good programmers can write good code in any language,

Yes, very true.


> some may require more
> effort than others.

Do you refer to the effort by the programmers, the effort to write in a
particular language, both, or something else entirely?


> But there aren't that many 'good' programmers.

Relevance?

> Languages
> like C++ do not enforce adequate discipline.

What particular discipline do you want enforced?


> Overloading of objects leads,
> with the passage of time, to diffuse meaning,

In what way?


> resulting in disuse of
> objects,

How can non-OOP languages do any better? For example, in non-OOP
languages you very often find flags of some kind used to do what can
more easily be done in OOP through inheritance. Talk about becoming
diffuse over time. Besides which, these flags lead to an enormous
maintenance headache. Or lead the programmer to invent their own OOPish
'language' in whatever language they're programming in.

Besides which, doesn't your fave language have some support for
overloading functions at least? I seem to recall a post about this
where I jumped to the conclusion that PL/I didn't have this feature;
being corrected reminded me that jump-tos are considered evil.


> contrary to one of the stated advantages of OOP.

Per above, I think we disagree on this.

> Class libraries, I
> would suspect,
> aren't as rigorously tested as traditional compilers like PL/I, Ada
> or Cobol,

Are you speaking of the class libraries that might, for example, come
with a standard C++ compiler? I suspect these are tested about the same
as a compiler is.


> as many are amended and cobbled together for a particular application.

Not sure what you're talking of here. This sounds more like application
libs put together by application programmers. Surely you're not
suggesting that non-OOP libs aren't ever amended and cobbled together
for a particular application?

Or perhaps you have some specific example in mind?

> No I don't think it will go away.

There we agree, which makes me wonder why you think it's a fashion.


> If selection were based on the
> merits of the
> language everyone would be coding in PL/I.

And strangely, I think that if selection were based on the merits of the
language everyone would be coding in C++.

So what? Unless you are able to put forth the actual merits, and prove,
or at least provide reasons for, their superiority, it's just another
pointless 'my language is better than yours' claim.

LR

robin

unread,
Oct 5, 2006, 4:01:38 AM10/5/06
to
<adaw...@sbcglobal.net> wrote in message
news:WAOUg.9626$e66....@newssvr13.news.prodigy.com...

>
> "glen herrmannsfeldt" <g...@seniti.ugcs.caltech.edu> wrote in message
> news:een2d7$lol$4...@naig.caltech.edu...
> >
> > I sometimes find comparisons of languages interesting, in that you
> > can understand the design goals of a language by seeing what it allows
> > and disallows. I try to make my comparisons fair, stating facts separately
> > from opinions. I consider it similar to the "compare and contrast"
> > assignments for studying literature.
> >
> Good observation. When comparing programming languages, one
> needs to set out the criteria by which such a comparison will be
> evaluated. Not every language is good at everything. Not every
> language will evaluate well under every criteria.
>
> Some years ago, one of the popular computer science magazines
> had a "hello world" contest to determine which language could
> solve this problem in the fewest number of statements. Such
> contests are usually pretty useless, but this one was among the
> silliest of all.

Not necessarily.
It identifies immediately those languages that are verbose,
and which may be unsuitable for such a purpose.

> The criteria for language evaluation need to be carefully selected,
> clearly stated, and weighted according to their importance in
> the targeted problem-solution domain.
>
> Often, the inherent virtues of the language will not be the dominant
> concerns. For example, people often choose C++ for their
> programming projects, but that language is characterized largely
> by its potential for creating flawed software. In fact, it often
> causes me to wonder why anyone would choose a toolset that
> is error-prone for creating software and expect a result that is
> error-free. The reasons for choosing C++, the criteria being
> used, has little to do with the inherent difficulties of that
> language and more to do with its widespread use by
> the programming community.
>
> If one of my criteria is that a language support object-oriented
> programming, PL/I will be quickly eliminated from consideration.
> If I am concerned about support for some specific database
> environment, the language must include direct support for that
> environment. If dependability is the foremost concern, we would
> probably choose Eiffel or Ada.

Or PL/I, of course.
Dependability and robustness are attributes of PL/I,
and have been for 40 years.

> If we are writing a bunch of
> "hello world" programs, we would probably want to use a
> simple interpreted scripting language.

No we wouldn't.

> Arguing about programming languages in the abstract is a lot
> like saying, "My dog is better than your dog!" Better at
> what? Some dogs are better at pointing to a potential
> winged target hiding in the brush. Others are better at
> catching a frisbee. If I am going deer hunting, I really
> don't want to take a noisy little chihuahua.

You wouldn't want _any_ dog.

> Richard Riehle


LR

unread,
Oct 5, 2006, 8:39:30 AM10/5/06
to
robin wrote:

> <adaw...@sbcglobal.net> wrote in message
> news:WAOUg.9626$e66....@newssvr13.news.prodigy.com...
>
>>"glen herrmannsfeldt" <g...@seniti.ugcs.caltech.edu> wrote in message
>>news:een2d7$lol$4...@naig.caltech.edu...
>>
>>>I sometimes find comparisons of languages interesting,

>>Some years ago, one of the popular computer science magazines
>>had a "hello world" contest to determine which language could
>>solve this problem in the fewest number of statements. Such
>>contests are usually pretty useless, but this one was among the
>>silliest of all.
>
>
> Not necessarily.
> It identifies immediately those languages that are verbose,
> and which may be unsuitable for such a purpose.

Hmmm. That sounds like a certain poster who thinks that the number of
lines a program takes is pretty important. Am I seeing the beginnings of
some common ground?


Anyway, those of you who enjoyed the "Hello World" program contest may
enjoy "99 Bottles of Beer", in the language of your choice.
http://www.westnet.com/mirrors/99bottles/beer.html

Argue all you want, but Brainf*** is best.
http://www.westnet.com/mirrors/99bottles/beer_a_c.html#brainfuck

LR


adaw...@sbcglobal.net

unread,
Oct 5, 2006, 10:54:19 AM10/5/06
to

"LR" <lr...@superlink.net> wrote in message
news:4523c687$0$25791$cc2e...@news.uslec.net...
> adaw...@sbcglobal.net wrote:
>
>
>
> > but that language (C++) is characterized largely

>> by its potential for creating flawed software.
>
> Really? Um, can you tell us who characterizes it that way? And for what
> reasons? Probably, keeping in mind that any language can be abused.
>
I have written software in C++. Also, every conversation I have had,
in recent weeks, with a group of highly experienced C++ programmers
in the midst of a project on which they are working, has reinforced this
view. There are more ways to make programming mistakes in C++
than in any contemporary language. The mistakes are often difficult
to discover even long after the programs have been deployed.

> > In fact, it often
>> causes me to wonder why anyone would choose a toolset that
>> is error-prone for creating software and expect a result that is
>> error-free.
>
> Can you please be more specific about the "error prone"?
>

As noted above. However, the pointer model is horrid, the
defaults on constructors and copy constructors can cause
serious defects in the code, and the memory management
model is non-existent. We could go on for many pages
itemizing specific problems with C++, but anyone who has
used the language for any length of time knows how sensitive
it is to even the slightest deviation from careful programming.
Worse, the compiler fails to notify the programmer of a lot
of those problems. This is why debuggers are regarded as
a necessary tool when programming in C++. Not so in
some other languages.
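
To make the copy-constructor point concrete, here is a minimal C++ sketch
(the class and names are illustrative, not from any of the projects mentioned)
of what the compiler-generated defaults get wrong and what has to be written
by hand to avoid it:

#include <cstring>
#include <utility>

class Buffer {
    char* data_;
public:
    explicit Buffer(const char* s) : data_(new char[std::strlen(s) + 1]) {
        std::strcpy(data_, s);
    }
    // Without these two, the default member-wise copy would leave two
    // objects sharing one allocation, and the second destructor would
    // free it twice.
    Buffer(const Buffer& other)
        : data_(new char[std::strlen(other.data_) + 1]) {
        std::strcpy(data_, other.data_);
    }
    Buffer& operator=(Buffer other) {        // copy-and-swap
        std::swap(data_, other.data_);
        return *this;
    }
    ~Buffer() { delete[] data_; }
};

int main() {
    Buffer a("hello");
    Buffer b = a;      // deep copy rather than a shared raw pointer
    b = a;             // assignment is safe for the same reason
}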


>
> > The reasons for choosing C++, the criteria being
>> used, has little to do with the inherent difficulties of that
>> language and more to do with its widespread use by
>> the programming community.
>
> Widespread use? Again, yes, and for good reason.
>

But those reasons have nothing to do with the dependability
of the final software product.

Richard Riehle


LR

unread,
Oct 5, 2006, 11:24:20 AM10/5/06
to
adaw...@sbcglobal.net wrote:
> "LR" <lr...@superlink.net> wrote in message
> news:4523c687$0$25791$cc2e...@news.uslec.net...
>
>>adaw...@sbcglobal.net wrote:
>>
>>
>>
>>
>>>but that language (C++) is characterized largely
>>>by its potential for creating flawed software.
>>
>>Really? Um, can you tell us who characterizes it that way? And for what
>>reasons? Probably, keeping in mind that any language can be abused.
>>
>
> I have written software in C++. Also, every conversation I have had,
> in recent weeks, with a group of highly experienced C++ programmers
> in the midst of a project on which they are working, has reinforced this
> view. There are more ways to make programming mistakes in C++
> than in any contemporary language. The mistakes are often difficult
> to discover even long after the programs have been deployed.

Highly experienced C++ programmers? In C++ or another language?
Because if they're really 'experienced', for which we might more
reasonably read 'knowledgeable', then they are likely to avoid the nasty
corners of the language. Which C++ has, no question. BTW, do you know
of a language without those? Are they actually making mistakes of the
kind we're discussing, or only complaining about the potential?

>
>
>>>In fact, it often
>>>causes me to wonder why anyone would choose a toolset that
>>>is error-prone for creating software and expect a result that is
>>>error-free.
>>
>>Can you please be more specific about the "error prone"?
>>
>
> As noted above. However, the pointer model is horrid,

Use with caution if at all. If used, wrap it up in a class. Manage
your risk. Use smart pointers.

> the
> defaults on constructors and copy constructors can cause
> serious defects in the code,

Yes. Don't do that. Are the "highly experienced" programmers you spoke
to using the default ctors? Don't forget the default assignment operator.

> and the memory management
> model is non-existent.

It's not non-existent, it's just not what you like. Maybe the
programmers you spoke to are using raw pointers and not smart pointers?
Using malloc/free instead of new/delete? Using raw pointers to arrays
instead of std::vector? Shame on them. The beauty of C++ is that if you
don't like the features the language has you can roll your own. And
with boost and TR1, more are available.
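
A minimal sketch of the style being recommended here, using the standard
containers and a reference-counted smart pointer. The Employee type is made
up, and the sketch uses C++11 names; the 2006-era equivalents would be
boost::shared_ptr or std::tr1::shared_ptr.

#include <memory>
#include <string>
#include <vector>

struct Employee {
    std::string name;
    double salary;
};

int main() {
    // std::vector manages its own storage; no new[]/delete[] anywhere.
    std::vector<Employee> staff;
    staff.push_back({"Ada", 1000.0});

    // A smart pointer releases its object automatically when the last
    // owner goes away, removing one common source of leaks.
    auto boss = std::make_shared<Employee>(Employee{"Grace", 2000.0});

    staff.push_back(*boss);                 // copies the pointed-to value
    return staff.size() == 2 ? 0 : 1;
}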

> We could go on for many pages
> itemizing specific problems with C++,

Whereas most other languages suffer from a single flaw: They're not C++. ;)

> but anyone who has
> used the language for any length of time knows how sensitive
> it is to even the slightest deviation from careful programming.

Please suggest a language that doesn't require careful programming.

> Worse, the compiler fails to notify the programmer for a lot
> of those problems.

That's an implementation issue, not a language specific issue. I
recommend lint. I recommend it highly.

> This is why debuggers are regarded as
> a necessary tool when programming in C++.

I've never met anyone who regarded a debugger as necessary in any
language. Nice to have. And it's particularly nice that C++'s market
share makes for nice debugging and other tools.

> Not so in
> some other languages.

Perchance, are those languages without a debugger available? BTW, I
don't think that I've ever met a programmer who wouldn't rather have a
good symbolic debugger for a language than not.


>>>The reasons for choosing C++, the criteria being
>>>used, has little to do with the inherent difficulties of that
>>>language and more to do with its widespread use by
>>>the programming community.
>>
>>Widespread use? Again, yes, and for good reason.
>>
>
> But those reasons have nothing to do with the dependability
> of the final software product.

No language choice has anything to do with the dependability of the
final software product.

You either program well, or you don't. Use the language wisely or don't.

LR

glen herrmannsfeldt

unread,
Oct 5, 2006, 3:11:38 PM10/5/06
to
adaw...@sbcglobal.net wrote:
(snip)

> I have written software in C++. Also, every conversation I have had,
> in recent weeks, with a group of highly experienced C++ programmers
> in the midst of a project on which they are working, has reinforced this
> view. There are more ways to make programming mistakes in C++
> than in any contemporary language. The mistakes are often difficult
> to discover even long after the programs have been deployed.

I agree. If one is dedicated to object oriented methodology,
and explicitly avoids the possible mistakes, it might not be
so bad. I believe that the designers of Java tried to learn
from C++'s mistakes. It seems to me that for an OO extension
of C, Java is closer to C in many ways than C++, except for
actually using C code in a C++ compiler.

(snip)

> As noted above. However, the pointer model is horrid, the
> defaults on constructors and copy constructors can cause
> serious defects in the code, and the memory management
> model is non-existent.

I think some of that is left over from the requirement
of early C++ compilers to translate to C as an intermediate,
and for C compatibility in any case. Java's insistence
on initializing scalar variables helps prevent some defects,
though I still don't like it when it is wrong.

(snip)

-- glen

glen herrmannsfeldt

unread,
Oct 5, 2006, 3:18:13 PM10/5/06
to
LR <lr...@superlink.net> wrote:
(snip on C++ and experienced programmers)


> Highly experienced C++ programmers? In C++ or another language?
> Because if they're really 'experienced', for which we might more
> reasonably read 'knowlegeable', then they are likely to avoid the nasty
> corners of the language. Which C++ has, no question. BTW, do you know
> of a language without those? Are they actually making mistakes of the
> kind we're discussing, or only complaining about the potential?

I recently found a bug in a large program written by an
experienced and knowledgeable C++ programmer. This program tries
to check every argument for being in range, and otherwise having
the right value. At one point it does a recursive search through
what is supposed to be a binary tree, but hadn't actually been
allocated yet. Due to one small mistake in not initializing
a pointer to null, the program chased an infinite loop
of pointers with a cycle over 19000 long (and all four-byte aligned)
until it ran out of memory. Anyone can miss one initialization
in a large program, no matter how much experience they have.

-- glen
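
For illustration, a tiny C++ sketch of the class of fix implied by that story:
give every pointer member a known value at construction so a missed assignment
cannot leave garbage behind. This is not the program described above, just the
general shape.

struct Node {
    int key = 0;
    Node* left = nullptr;    // default member initializers guarantee the
    Node* right = nullptr;   // children never start out as garbage
};

bool contains(const Node* root, int key) {
    // The nullptr test ends the recursion instead of chasing whatever
    // happened to be sitting in an uninitialized pointer.
    if (root == nullptr) return false;
    if (key == root->key)  return true;
    return contains(key < root->key ? root->left : root->right, key);
}

int main() {
    Node leaf;
    leaf.key = 7;
    Node root;
    root.key = 5;
    root.right = &leaf;      // 7 > 5, so it belongs in the right subtree
    return contains(&root, 7) ? 0 : 1;
}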

LR

unread,
Oct 5, 2006, 6:37:57 PM10/5/06
to
glen herrmannsfeldt wrote:

> LR <lr...@superlink.net> wrote:
> (snip on C++ and experienced programmers)
>
>
>>Highly experienced C++ programmers? In C++ or another language?
>>Because if they're really 'experienced', for which we might more
>>reasonably read 'knowlegeable', then they are likely to avoid the nasty
>>corners of the language. Which C++ has, no question. BTW, do you know
>>of a language without those? Are they actually making mistakes of the
>>kind we're discussing, or only complaining about the potential?
>
>
> I recently found a bug in a large program written by an
> experienced and knowledgable C++ programmer. This program tries
> to check every argument for being in range, and otherwise having
> the right value. At one point it does a recursive search through
> what is supposed to be a binary tree, but hadn't actually been
> allocated yet.

Was this something the experienced and knowledgeable C++ programmer had
tried to implement themselves when std::set and std::map are available
and waiting to be used?

> Due to one small mistake in not initializing
> a pointer to null,

Sounds like someone was using raw pointers.

> the program chased an infinite loop
> of pointer with a cycle over 19000 long (and all four byte aligned)
> until it ran out of memory.

Lint, lint always, lint forever. Also, some compilers give warnings of
uninitialized variables.

> Anyone can miss one initialization
> in a large program, no matter how much experience they have.

I agree, but I don't think this problem is limited to C++ or pointers,
and besides, even if you do initialize things you can give them the wrong
value. Right?

I remember writing some code in PL/C (is PL/C close enough for the point
I'm trying to make?) years ago that resulted in a couple of infinite
loops. Lucky I was using an account that was limited to a few CPU secs
per run, IIRC. I don't, ahem, make mistakes like this anymore, well, not
often, and if I do, I, uh, no longer tell anyone. ;)

LR

adaw...@sbcglobal.net

unread,
Oct 6, 2006, 12:44:15 AM10/6/06
to
I will answer all your questions in this part of the reply
rather than embedding them in the text.

My preferred language is one that does not have all the
potential for errors that you seem to admit is present in
C++. It is designed so the compiler will catch the maximum
number of errors at compile time. It provides a model for
indirection that does not require me to wonder whether
a particular pointer construct might have a hidden dangling
reference or an eventual conflict somewhere. I don't have
the concerns with copy constructors that are present in
C++. Lint is not a substitute for good language design
in the first place.

Language choice does impact dependability. There are
languages that you probably have not used that are characterized
by their emphasis on dependability. C++ is not one of them.

It is true that one cannot depend entirely on the programming
language, and programming always involves being careful. However,
C++ is especially error-prone when compared with most
alternatives.

When I compare C++ with one of the better languages
such as Ada, I find myself preferring Ada. When I
compare it with Eiffel, I find myself preferring Eiffel.

Furthermore, with Ada I get all the flexibility I need,
along with the required efficiency. The compiler catches
more errors at compile-time leaving me time to spend on
my own programming mistakes, those not inherent in
the design of the language.

I know C++ well. I know Ada well. When I compare,
feature-for-feature, according to the one criterion that
is most important to me, dependability of the final
program, C++ consistently falls short.

The better I get to know both languages, the more I become
aware that C++ is one of the worst choices for any software
where dependability is important. It is, at first, fun chasing
the little bugs around the code, but after a while, one needs to
take a more professional attitude toward one's work and
realize that we are not in the business of tracking down bugs,
but rather we are in the business of trying to produce reliable
software. C++ is not focused on that concern. While some
programmers may find it fun to deal with the peculiarities of
C++ on a day-by-day basis, I would rather be able to focus
on the problems we are supposed to solve than the eccentricities
of the toolset we use to solve those problems.

Richard Riehle


"LR" <lr...@superlink.net> wrote in message

news:45252379$0$25786$cc2e...@news.uslec.net...

adaw...@sbcglobal.net

unread,
Oct 6, 2006, 12:53:02 AM10/6/06
to

"glen herrmannsfeldt" <g...@seniti.ugcs.caltech.edu> wrote in message
news:eg3lda$ep4$3...@naig.caltech.edu...

>
> I think some of that is left over from the requirement
> of early C++ compilers to translate to C as an intermediate,
> and for C compatibility in any case. Java's insistence
> on initializing scalar variables helps prevent some defects,
> though I still don't like it when it is wrong.
>
LR is correct when he suggests that it is possible to initialize
a variable with the wrong value. The Ada Safety and Security
Annex has a pragma Normalize_Scalars that helps to ameliorate
this problem.

Often, it is better not to initialize a scalar to some value simply
because it can be done. An Ada compiler always gives the
programmer a warning when a scalar is never assigned a
value anywhere, initialized or not. This enables the
programmer to examine the warning and determine what
action is appropriate. The fact that a scalar is not initialized
is less problematic than the realization that it never gets a
value assigned anywhere in the program.

When using the SPARK examiner (a preprocessor for
creating highly reliable Ada code), one gets an even stronger
model for correctness. At this stage of software practice,
there is no toolset better guaranteed to provide correct
programs than SPARK. Before naysaying this, you need
to study SPARK for yourself. Otherwise, you simply
won't understand the argument.

Richard Riehle


adaw...@sbcglobal.net

unread,
Oct 6, 2006, 1:05:55 AM10/6/06
to

"glen herrmannsfeldt" <g...@seniti.ugcs.caltech.edu> wrote in message
news:ees344$6s$7...@naig.caltech.edu...
> David Frank <dave_...@hotmail.com> wrote:
>
>> E.G. No one is willing to confirm if PL/I has equivalent declaration of
>> Fortran's defined type variables because they dont trust there own knowledge
>> well enuf to state the facts OR in Vowels case he wont respond because he
>> knows it does NOT.
>
> I don't know if it has defined type variables, I presume you mean
> something like C's typedef. I don't remember that Fortran does, either.
>
typedef is a farce. Too many C programmers think it is doing
something it isn't doing at all. It is not a capability for declaring or
defining new types. Rather, it is a way to create an alias for an existing
type.
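
A short C++ illustration of that point (C++11 static_assert; the alias names
are made up):

#include <type_traits>

typedef int Celsius;
typedef int Fahrenheit;

// Both aliases are still plain int; no new, distinct type exists.
static_assert(std::is_same<Celsius, Fahrenheit>::value,
              "typedef introduces a synonym, not a new type");

int main() {
    Celsius c = 20;
    Fahrenheit f = c;   // compiles silently; nothing prevents the mix-up
    return f - c;       // 0
}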

I think maybe David is asking whether one can invent new types as one
does in Ada. For example, how would one declare, in PL/I, the following?

type Int16 is range -2**15 .. 2**15 - 1;
for Int16'Size use 16;

which says give the new type called Int16 a range as shown
and force it to be stored in 16 bits; or,

type Color is (Red, Yellow, Blue);
for Color use (Red    => 16#34F2#,
               Yellow => 16#34F3#,
               Blue   => 16#34F4#);

which says, for the enumerated values named Red, Yellow, and Blue
force the machine representation to the hexadecimal values shown.

I am pretty sure something like this is possible in PL/I. Perhaps Robin
can give an example in PL/I source code.

Richard Riehle


adaw...@sbcglobal.net

unread,
Oct 6, 2006, 1:27:26 AM10/6/06
to

"Tom Linden" <t...@kednos-remove.com> wrote in message
news:op.tgwfpvk4tte90l@hyrrokkin...

>
> Support for OOP as a criterion seems more of a fashion statement, at
> least it could at best be a derived requirement from more fundamental
> criteria.
>
When a language does not support OOP, especially in these times,
that language is slightly crippled. On the other hand, when a language
does support OOP, but is so filled with potential for screw-ups, one
needs to question whether it would not be better to stay away from
OOP if that is the only language available.

A software object is an instance of a class. A class is simply a
specialized kind of abstract data type. The special features of
a class are support for inheritance, dynamic binding, and
polymorphism. A fully-formed class model will also include
parameterized classes (sometimes called templates). These
features taken together make it possible to consider the
lifecycle of the software process more as an evolutionary
model.

A class is extensible. That is, we can specialize an
extended class based on an existing class without changing
the base class. This is a powerful idea and lends itself
well to evolutionary and prototypical styles of software
development and management.

A language that does not support the class construct is going
to be limited to a more linear way of thinking about software
development. That is, without the class notion, one is forced
into procedural thinking. This is not a bad thing and we have
used this approach to building perfectly good software for
well over forty years.

However, without the extensibility afforded by OOP, each time
one needs to extend the capabilities of an existing software
product, it is necessary to do the close equivalent of open-heart
surgery. OOP does not require this. We extend existing
code without touching the existing code. This makes long
term adaptability a little easier and a lot safer.
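
A minimal sketch of that extensibility point in C++ terms (the class names are
invented): the base class is extended by derivation, without any edit to its
source.

#include <iostream>

class Account {                          // existing code, left untouched
public:
    virtual ~Account() = default;
    virtual double fee() const { return 1.0; }
};

class PremiumAccount : public Account {  // the extension lives elsewhere
public:
    double fee() const override { return 0.0; }
};

int main() {
    PremiumAccount p;
    Account& a = p;                      // dynamic binding picks the override
    std::cout << a.fee() << '\n';        // prints 0
}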

I think it is very short-sighted of the PL/I community to continue
to resist developing an OOP version of the language. Fortran
has support for some of the important ideas in OOP. COBOL
now has that support. Most modern languages have support
for OOP. If PL/I does not eventually have OOP as part of
its fundamental model, it will continue to fall into disuse. No
programmer graduating from any computer science program
anywhere in the world would consider adopting a programming
language that fails to support the object model.

All of that being said, PL/I could be adapted to OOP. At this
stage of our knowledge of the good, the bad, and the ugly (C++)
of OOP, the upgrade of PL/I to support OOP could learn from
the many mistakes already in place with some languages that
are ostensibly OOP.

I would encourage those who want to see the long-term survival
of the best of PL/I to examine this issue and foment action on the
part of those who are charged with the continued health of the
language.

Richard Riehle


James J. Weinkam

unread,
Oct 6, 2006, 4:04:46 AM10/6/06
to
adaw...@sbcglobal.net wrote:
>
> typedef is a farce. Too many C programmers think it is doing
> something it isn't doing at all. It is not a capability for declaring or
> defining
> new types. Rather, it is a way to create an alias for an existing type.
>
> I think maybe David is asking whether one can invent new types as one
> does in Ada. For example, how would one declare, in PL/I, the following?
>
> type Int16 is range -2**15 .. 2**15 - 1;
> for Int16'Size use 16;

You have just described bin fixed(15). You can give it a name if you insist,
but why bother.

>
> which says give the new type called Int16 a range as shown
> and force it to be stored in 16 bits; or,
>
> type Color is (Red, Yellow, Blue);
> for Color use (Red => 16#34F2#,
> Yellow => 16#34F3#,
> Blue => 16#34F4#);
>
> which says, for the enumerated values named Red, Yellow, and Blue
> force the machine representation to the hexadecimal values shown.
>
> I am pretty sure something like this is possible in PL/I. Perhaps Robin
> can give an example in PL/I source code.
>

define ordinal color
(red value('34f2'xn),yellow value('34f3'xn),blue value('34f4'xn))
precision(16) unsigned;

BTW, I have never seen an uglier representation for hex values than the one you
used above. Just my opinion.

David Frank

unread,
Oct 6, 2006, 5:36:54 AM10/6/06
to

<adaw...@sbcglobal.net> wrote in message
news:TulVg.7845$TV3....@newssvr21.news.prodigy.com...

>
>
> I think maybe David is asking whether one can invent new types as one
> does in Ada. For example, how would one declare, in PL/I, the following?
>
> type Int16 is range -2**15 .. 2**15 - 1;
> for Int16'Size use 16;

integer(2) :: Int16

but if you insist on declaring a derived type variable then
type Int16
integer(2) :: k
end type

>
> which says give the new type called Int16 a range as shown
> and force it to be stored in 16 bits; or,
>
> type Color is (Red, Yellow, Blue);
> for Color use (Red => 16#34F2#,
> Yellow => 16#34F3#,
> Blue => 16#34F4#);
>
> which says, for the enumerated values named Red, Yellow, and Blue
> force the machine representation to the hexadecimal values shown.
>

integer(2),parameter :: Red = #34f2, Yellow = #34f3, Blue = #34f4

provides exact size 16bit constants,
plus the new Fortran standard has "C Interoperate" syntax which includes
support for C's enum syntax.

Otoh, Since you havent explicitly shown us Ada's equivalent of Fortran's


type list
   character, allocatable :: name(:)
   integer, allocatable :: nums(:)
end type
type (list), allocatable :: lists(:)

which I assume from your silence means it doesnt have an equivalent,
just like we have to deduce that PL/I doesnt have derived types,
let alone derived types with allocatable members.

Shmuel (Seymour J.) Metz

unread,
Oct 6, 2006, 8:56:25 AM10/6/06
to
In <2PlVg.7848$TV3....@newssvr21.news.prodigy.com>, on 10/06/2006

at 05:27 AM, <adaw...@sbcglobal.net> said:

>I think it is very short-sighted of the PL/I community to continue to
>resist developing an OOP version of the language.

It would be if they were.

--
Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail. Reply to
domain Patriot dot net user shmuel+news to contact me. Do not
reply to spam...@library.lspace.org

Shmuel (Seymour J.) Metz

unread,
Oct 6, 2006, 9:00:04 AM10/6/06
to
In <y6oVg.47165$bf5.6750@edtnps90>, on 10/06/2006

at 08:04 AM, "James J. Weinkam" <j...@cs.sfu.ca> said:

>You have just described bin fixed(15). You can give it a name if you
>insist, but why bother.

It was a poorly chosen example. Better would have been

type Int16 is range -10000 .. 20000;
for Int16'Size use 16;

--

robin

unread,
Oct 6, 2006, 10:14:28 AM10/6/06
to
<adaw...@sbcglobal.net> wrote in message
news:TulVg.7845$TV3....@newssvr21.news.prodigy.com...
>
> "glen herrmannsfeldt" <g...@seniti.ugcs.caltech.edu> wrote in message
> news:ees344$6s$7...@naig.caltech.edu...
> > David Frank <dave_...@hotmail.com> wrote:
> >
> >> E.G. No one is willing to confirm if PL/I has equivalent declaration of
> >> Fortran's defined type variables because they dont trust there own knowledge
> >> well enuf to state the facts OR in Vowels case he wont respond because he
> >> knows it does NOT.
> >
> > I don't know if it has defined type variables, I presume you mean
> > something like C's typedef. I don't remember that Fortran does, either.
> >
> I think maybe David is asking whether one can invent new types as one
> does in Ada. For example, how would one declare, in PL/I, the following?
>
> type Int16 is range -2**15 .. 2**15 - 1;
> for Int16'Size use 16;
>
> which says give the new type called Int16 a range as shown
> and force it to be stored in 16 bits; or,
>
> type Color is (Red, Yellow, Blue);
> for Color use (Red => 16#34F2#,
> Yellow => 16#34F3#,
> Blue => 16#34F4#);
>
> which says, for the enumerated values named Red, Yellow, and Blue
> force the machine representation to the hexadecimal values shown.

Why would you want to do that?*
Simpler is:-
define ordinal color (red, yellow, blue);

Or, if you must have specific values,
define ordinal color (red value(1), yellow value (5), blue value(200));

Or if you really must have a hex constant,

define ordinal color (red value('34F2'xn), yellow, blue);

is sufficient, as the internal values increase consecutively.
_______
* JW has already given an equivalent, so I'll just add a few remarks.

robin

unread,
Oct 6, 2006, 10:14:27 AM10/6/06
to
"David Frank" <dave_...@hotmail.com> wrote in message
news:45262687$0$3016$ec3e...@news.usenetmonster.com...

>
> <adaw...@sbcglobal.net> wrote in message
> news:TulVg.7845$TV3....@newssvr21.news.prodigy.com...
>
> > I think maybe David is asking whether one can invent new types as one
> > does in Ada. For example, how would one declare, in PL/I, the following?
> >
> > type Int16 is range -2**15 .. 2**15 - 1;
> > for Int16'Size use 16;
>
> integer(2) :: Int16

No, this doesn't give you 16 bits in Fortran.
It doesn't guarantee anything.
In fact, with this, you could even get a severe compilation error,
because there's no guarantee that a compiler has a corresponding
kind.

> but if you insist on declaring a derived type variable then
> type Int16
> integer(2) :: k

It doesn't. Same problem as above.

> end type
>
> > which says give the new type called Int16 a range as shown
> > and force it to be stored in 16 bits; or,
> > type Color is (Red, Yellow, Blue);
> > for Color use (Red => 16#34F2#,
> > Yellow => 16#34F3#,
> > Blue => 16#34F4#);
> > which says, for the enumerated values named Red, Yellow, and Blue
> > force the machine representation to the hexadecimal values shown.
>
> integer(2),parameter :: Red = #34f2, Yellow = #34f3, Blue = #34f4

> provides exact size 16bit constants,

No it doesn't. Still the same problem.


LR

unread,
Oct 6, 2006, 11:37:19 AM10/6/06
to
adaw...@sbcglobal.net wrote:
> I will answer all your questions in this part of the reply
> rather than embedding them in the text.
>
> My preferred language is one that does not have all the
> potential for errors that you seem to admit is present in
> C++.

And in all languages.

> It is designed so the compiler will catch the maximum
> number of errors at compile time.

The kinds of errors that you're speaking of are ones that are mostly
made by sloppy programmers and sloppy programmers will make errors no
matter what language they're working in.

> It provides a model for
> indirection that does not require me to wonder whether
> a particular pointer construct might have a hidden dangling
> reference or an eventual conflict somewhere. I don't have
> the concerns with copy constructors that are present in
> C++. Lint is not a substitute for good language design
> in the first place.

You've made some assumptions about what a "good language design" is.


> Language choice does impact dependability.

I suspect not as much as programmer choice.

> There are
> languages that you probably have not used that are characterized
> by their emphasis on dependability. C++ is not one of them.

It might be interesting if you'd define dependability for us.

> It is true that one cannot depend entirely on the programming
> language, and programming always involves being careful. However,
> C++ is especially error-prone when compared with most
> alternatives.

Sure, if you're not careful, you'll be error prone.

> When I compare C++ with one of the better languages
> such as Ada, I find myself preferring Ada. When I
> compare it with Eiffel, I find myself preferring Eiffel.

Well, since you've already decided that it's better, of course you find
yourself preferring it.


>
> Furthermore, with Ada I get all the flexibility I need,

I've never thought that there was some task you can accomplish in
another language that you can't accomplish in Ada. Or did you mean
something else by the word "flexibility"?


> along with the required efficiency. The compiler catches
> more errors at compile-time leaving me time to spend on
> my own programming mistakes, those not inherent in
> the design of the language.

Strange, but after years of programming in C++, I just don't seem to run
into that many errors that I think are language based. I wonder why?

> I know C++ well. I know Ada well. When I compare,
> feature-for-feature, according to the one criterion that
> is most important to me, dependability of the final
> program, C++ consistently falls short.

Details?


>
> The better I get to know both languages, the more I become
> aware that C++ is one of the worst choices for any software
> where dependability is important. It is, at first, fun chasing
> the little bugs around the code,

Huh?


> but after a while, one needs to
> take a more professional attitdude toward one's work

Programming is not and never will be a profession. Simply not possible.
And also not legal, even, or perhaps most especially where it's been
made 'law', whatever that might mean nowadays. IMHO. But IANAL.

> and
> realize that we are not in the business of tracking down bugs,
> but rather we are in the business of trying to produce reliable
> software.


Reliable? At what cost? And how do you measure your results?


> C++ is not focused on that concern. While some
> programmers may find it fun to deal with the peculiarities of
> C++ on a day-by-day basis, I would rather be able to focus
> on the problems we are supposed to solve than the eccentricities
> of the toolset we use to solve those problems.

I would rather be able to focus on my ability to express my thoughts in
code. I find it leads to fewer problems.

LR

LR

unread,
Oct 6, 2006, 12:00:12 PM10/6/06
to
adaw...@sbcglobal.net wrote:

> "glen herrmannsfeldt" <g...@seniti.ugcs.caltech.edu> wrote in message
> news:eg3lda$ep4$3...@naig.caltech.edu...
>
>>I think some of that is left over from the requirement
>>of early C++ compilers to translate to C as an intermediate,
>>and for C compatibility in any case. Java's insistence
>>on initializing scalar variables helps prevent some defects,
>>though I still don't like it when it is wrong.
>>
>
> LC is correct when he suggests that it is possible to initialize
> a variable with the wrong value. The Ada Safety and Security
> Annex has a pragma Normalize_Scalars that helps to ameliorate
> this problem.

Does that really help? Seriously?
http://en.wikibooks.org/wiki/Ada_Programming/Pragmas/Normalize_Scalars

"The pragma Normalize_Scalars directs the compiler to initialize
otherwise uninitialized scalar variables with predictable values. If
possible, the compiler will choose out-of-range values."

I think that might sometimes be worse.

I remember PL/C being 'helpful', with messages like (and this isn't even
close to being exact): "Array index out of bounds, set to one." With
predictably unpredictable results.

>
> Often, it is better not to initialize a scalar to some value simply
> because it can be done.

Could you clarify/amplify that?

> An Ada compiler always gives the
> programmer a warning when a scalar is never assigned a
> value anywhere, initialized or not.

Does Ada support a separate compilation model? Interlanguage programming?


> This warning enables the
> programmer to examine that warning and determine what
> action is appropriate. The fact that a scalar is not initialized
> is less problematic than the realization that it never gets a
> value asssigned anywhere in the program.

"anywhere in the program"? Or anywhere in a "translation unit" (sorry,
I'm not sure what the proper name for this would be for Ada, so please
translate appropriately)?

>
> When using the SPARK examiner (a preprocessor for
> creating highly reliable Ada code), one gets an even stronger
> model for correctness. At this stage of software practice,
> there is no toolset better guaranteed to provide correct
> programs than SPARK. Before naysaying this, you need
> to study SPARK for yourself. Otherwise, you simply
> won't understand the argument.

I took a look at this:
http://en.wikipedia.org/wiki/SPARK_programming_language

Interesting, but it leaves me unconvinced. I looked at
http://www.praxis-his.com/sparkada/ but couldn't find a tutorial there.
Perhaps you could recommend an online tutorial.

LR

adaw...@sbcglobal.net

unread,
Oct 6, 2006, 12:03:47 PM10/6/06
to
First, thanks for all the replies. Note that I never said
that PL/I could not accomplish the equivalent of what
I posted. In fact, I suggested that Robin would have
a good solution and invited him to show it to us.

As to the Ada list type, there are a variety of ways to
do the same thing in Ada. For the example shown, I might
simply do this:

type List_Type is record
   name : string(1..30);
   nums : integer;
end record;

type List_Type_Collection is array (Positive range <>) of List_Type;

giving me an unconstrained array of List_Type records. I could also simply
use an existing linked-list library, a tree library, or whatever other
collection library I might want.

or, if I want to have an unconstrained name in List_Type,


type List_Type is record
   name : unbounded_string;
   nums : integer;
end record;

which will allow me to have strings of whatever size I want.

Richard Riehle

============================================================


"David Frank" <dave_...@hotmail.com> wrote in message
news:45262687$0$3016$ec3e...@news.usenetmonster.com...
>

David Frank

unread,
Oct 6, 2006, 1:15:16 PM10/6/06
to

<adaw...@sbcglobal.net> wrote in message
news:D7vVg.2266$NE6...@newssvr11.news.prodigy.com...

but you have declared nums as a scalar, not as an allocatable ARRAY member of
List_Type; therefore it isnt equivalent to my Fortran declaration and as a
result cant hold ALL the data of the "arbitrary lists" problem.

glen herrmannsfeldt

unread,
Oct 6, 2006, 2:19:18 PM10/6/06
to
adaw...@sbcglobal.net wrote:
(very large snip)

> No programmer graduating from any computer science program
> anywhere in the world would consider adopting a programming
> language that fails to support the object model.

I would say that most scientific programmers don't come
from the computer science program, but from engineering
and physical sciences.

PL/I by design included features from COBOL for
the business community, and from Fortran for the
scientific community. The life cycle of scientific
and engineering software is a little different from
that of business or 'computer science' software.

-- glen

glen herrmannsfeldt

unread,
Oct 6, 2006, 2:30:55 PM10/6/06
to
LR <lr...@superlink.net> wrote:

> "The pragma Normalize_Scalars directs the compiler to initialize
> otherwise uninitialized scalar variables with predictable values. If
> possible, the compiler will choose out-of-range values."

For debugging programs that will generally work on a
system that doesn't initialize variables, initializing
to a value that will easily be recognized as wrong works.

My favorite is X'81', which tends to be a large negative
integer, and a small negative floating point value.

For pointers, it may or may not point outside the available
addressing range.

-- glen
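
A tiny C++ sketch of the recognizable-fill idea, assuming two's-complement
integers and 32-bit IEEE float; on other hardware the exact values differ, but
the pattern is still easy to spot in a dump or a debugger.

#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    unsigned char raw[sizeof(std::int32_t)];
    std::memset(raw, 0x81, sizeof raw);        // simulate a debug fill

    std::int32_t as_int;
    std::memcpy(&as_int, raw, sizeof as_int);

    static_assert(sizeof(float) == sizeof raw, "sketch assumes 32-bit float");
    float as_float;
    std::memcpy(&as_float, raw, sizeof as_float);

    // 0x81818181 reads back as a large negative integer and, as an IEEE
    // float, as a tiny negative number; both stand out immediately.
    std::printf("int: %ld  float: %g\n", (long)as_int, (double)as_float);
}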

glen herrmannsfeldt

unread,
Oct 6, 2006, 2:38:18 PM10/6/06
to
adaw...@sbcglobal.net wrote:
(snip)

> typedef is a farce. Too many C programmers think it is doing
> something it isn't doing at all. It is not a capability for declaring or
> defining
> new types. Rather, it is a way to create an alias for an existing type.

True, but that existing type can be a struct or union, which
gives it some generality. C has the standard FILE, where the
internal structure is system dependent, but contains everything
needed for file I/O.

> I think maybe David is asking whether one can invent new types as one
> does in Ada. For example, how would one declare, in PL/I, the following?

> type Int16 is range -2**15 .. 2**15 - 1;
> for Int16'Size use 16;

PL/I allows specifying the number of bits or decimal digits
needed, independent of the underlying machine.

Typedef is interesting in that the types it creates work
like the standard types, with no extra qualifier such as TYPE.



> which says give the new type called Int16 a range as shown
> and force it to be stored in 16 bits; or,

(snip of enumeration example)

I am pretty sure PL/I now has enumerations, but
I don't believe it did originally.

-- glen

LR

unread,
Oct 6, 2006, 5:07:53 PM10/6/06
to
glen herrmannsfeldt wrote:

> (very large snip)
[more snippage]

> I would say that most scientific programmers don't come
> from the computer science program, but from engineering
> and physical sciences.

Where I hear they're still all (ok, all might be an overstatement)
taught a nice simple subset of Fortran, because it's so 'simple' and
'easy to use'.


> The life cycle of scientific
> and engineering software is a little different from
> that of business or 'computer science' software.

In what way?

LR

LR

unread,
Oct 6, 2006, 5:27:12 PM10/6/06
to
glen herrmannsfeldt wrote:

> LR <lr...@superlink.net> wrote:
>
>
>>"The pragma Normalize_Scalars directs the compiler to initialize
>>otherwise uninitialized scalar variables with predictable values. If
>>possible, the compiler will choose out-of-range values."
>
>
> For debugging programs that will generally work on a
> system that doesn't initialize variables, initializing
> to a value that will easily be recognized as wrong works.
>
> My favorite is X'81', which tends to be a large negative
> integer, and small negative floating point value.

I've heard about "deadbeef".

A compiler I use in debug mode initializes uninitialized integer
variables (I think that might be bin(31) fixed to you) to 0xcccccccc,
which is -858993460. Similar results for real/float types. I'm thankful
that this particular compiler warns me that the variable is uninitialized.
(Although, that's an implementation issue.)

> For pointers, it may or may not point outside the available
> addressing range.

NULL works well for this in C & C++. I'm curious, is there a value in
PL/I for a pointer which will always be invalid? If not, what do you do
about writing code that has to move between platforms?


Also from
http://en.wikibooks.org/wiki/Ada_Programming/Pragmas/Normalize_Scalars
---------------------------------------------------------------------
My_Variable : Positive; -- Oops, forgot to initialize this variable.
-- The compiler (may) initialize this to 0
...
-- Oops, using a variable before it is initialized!
-- An exception should be raised here, since the compiler
-- initialized the value to 0 - an out-of-range value for the Positive type.
Some_Other_Variable := My_Variable;
---------------------------------------------------------------------

Which looks interesting, but I worry about variables that are set to
invalid values to begin with. And I think I'd rather know about it at
compile time than run time.

LR

LR

unread,
Oct 6, 2006, 5:38:46 PM10/6/06
to
glen herrmannsfeldt wrote:

> adaw...@sbcglobal.net wrote:
> (snip)
>
>
>>typedef is a farce. Too many C programmers think it is doing
>>something it isn't doing at all. It is not a capability for declaring or
>>defining
>>new types. Rather, it is a way to create an alias for an existing type.
>
>
> True, but that existing type can be a struct or union, which
> gives it some generality. C has the standard FILE, where the
> internal structure is system dependent, but contains everything
> needed for file I/O.

And typedef can be very useful. If you know what you're doing with it.
IOW, don't use a screwdriver as a chisel if you know what's good for you.


> PL/I allows specifying the number of bits or decimal digits
> needed, independent of the underlying machine.

I've always wondered though, what happens if you specify something like
bin(1000) fixed, on a machine whose largest native fixed type is 32 bits?

> Typedef is interesting in that the types it creates work
> like the standard types, with no extra qualifier such as TYPE.

Allowing, as you pointed out, the usage of FILE.

LR

glen herrmannsfeldt

unread,
Oct 6, 2006, 6:00:11 PM10/6/06
to
LR <lr...@superlink.net> wrote:

(I wrote)


>> The life cycle of scientific
>> and engineering software is a little different from
>> that of business or 'computer science' software.

> In what way?

One is that speed is usually pretty important, so run time
checks are usually reduced.

Another is that many times, though not all, something
is written to solve one problem and never used again.
In that case, extendability is not very important.

Some of the previously mentioned compiler restrictions that
stop you from making mistakes require a lot of work to get
around, resulting in just as many mistakes. That is, when
you really do need to get around them.

-- glen

glen herrmannsfeldt

unread,
Oct 6, 2006, 6:11:55 PM10/6/06
to
LR <lr...@superlink.net> wrote:
(snip regarding initializing variables)


>> My favorite is X'81', which tends to be a large negative
>> integer, and small negative floating point value.

> I've heard about "deadbeef".

How about X'cafebabe'. That is the first four bytes of
a Java class file.

(snip)



>> For pointers, it may or may not point outside the available
>> addressing range.

> NULL works well for this in C & C++. I'm curious, is there a value in
> PL/I for a pointer which will always be invalid? If not, what do you do
> about writing code that has to move between platforms?

Well, NULL, which PL/I also has, tends to be a valid value
when a pointer doesn't have anything to point to. It is
often used and tested for in many programs and languages.

Note that Intel processors reserve segment selector zero
as the null segment selector. That is, hardware support
for a null pointer.

I would say a large value for a true invalid pointer.
Odd on machines with word aligned data.

-- glen

LR

unread,
Oct 6, 2006, 7:25:23 PM10/6/06
to
glen herrmannsfeldt wrote:

> LR <lr...@superlink.net> wrote:

[snip]

>>>For pointers, it may or may not point outside the available
>>>addressing range.
>
>
>
>>NULL works well for this in C & C++. I'm curious, is there a value in
>>PL/I for a pointer which will always be invalid? If not, what do you do
>>about writing code that has to move between platforms?
>
>
> Well, NULL, which PL/I also has, tends to be a valid value
> when a pointer doesn't have anything to point to.

I'm not sure I follow that. Do you mean valid until dereferenced?

> It is
> often used and tested for in many programs and languages.
>
> Note that Intel processors reserve segment selector zero
> as the null segment selector. That is, hardware support
> for a null pointer.
>
> I would say a large value for a true invalid pointer.

Sorry, but I don't understand what advantage this would have over NULL.

> Odd on machines with word aligned data.

Does that assume that you're pointing to something that requires word
alignment? Does character data normally require word alignment in PL/I?

LR

Tom Linden

unread,
Oct 6, 2006, 8:14:30 PM10/6/06
to
On Fri, 06 Oct 2006 16:25:23 -0700, LR <lr...@superlink.net> wrote:

> glen herrmannsfeldt wrote:
>
>> LR <lr...@superlink.net> wrote:
>
> [snip]
>
>>>> For pointers, it may or may not point outside the available
>>>> addressing range.
>>
>>> NULL works well for this in C & C++. I'm curious, is there a value in
>>> PL/I for a pointer which will always be invalid? If not, what do you
>>> do about writing code that has to move between platforms?
>> Well, NULL, which PL/I also has, tends to be a valid value
>> when a pointer doesn't have anything to point to.
>
> I'm not sure I follow that. Do you mean valid until dereferenced?

Dereferencing is not meaningful within the context of PL/I. Pointers
are a bona fide data type, unlike in C, where they are attributed by
association. A pointer is simply an address of a memory location;
what you choose to put there or get from there is up to you.


>
> > It is
>> often used and tested for in many programs and languages.
>> Note that Intel processors reserve segment selector zero
>> as the null segment selector. That is, hardware support
>> for a null pointer.
>> I would say a large value for a true invalid pointer.
>
> Sorry, but I don't understand what advantage this would have over NULL.

In PL/I there is a builtin function null() which returns the value of the
null pointer, which may not be zero. This is implementation defined. On
Prime, for example, it had some unique value which addressed an invalid
segment, causing a trap, IIRC, which facilitated error recovery.

>
>> Odd on machines with word aligned data.
>
> Does that assume that you're pointing to something that requires word
> alignment? Does character data normally require word alignment in PL/I?

This is somewhat of a fuzzy issue owing to the advent of architectures with
stricter alignment requirements, but in general, unless the ALIGNED
attribute is specified, you may assume byte alignment. Bit fields are
padded to the nearest byte.
>
> LR

--
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/

adaw...@sbcglobal.net

unread,
Oct 6, 2006, 9:42:58 PM10/6/06
to

"LR" <lr...@superlink.net> wrote in message
news:45252379$0$25786$cc2e...@news.uslec.net...
> adaw...@sbcglobal.net wrote:
>> "LR" <lr...@superlink.net> wrote in message
>> news:4523c687$0$25791$cc2e...@news.uslec.net...
>>
I have snipped away your comments and mine. It is clear
that we will not find many points of agreement in this discussion.

What is also clear is that you are satisfied with C++, just
as I was satisfied with languages I once knew when I was
busy working in the day-to-day world of programming
over a nearly thirty year period.

I have been privileged to have the time to step back and
look closely at the relative merits of a large number of
languages in recent years. I suspect that this is not
something you have had the time to do since you are
probably heavily involved in actually making software
work each day, just as I was during my early career.

My comments about C++, and other languages, are
based on both my experience as a programmer and
my research into the foundations of programming and
programming language design. That research includes
a lot of examination of a lot of programs and interviews
with a lot of programmers. Most of those programmers
are ardent about their own language choice and will
argue the virtues of their chosen language with vigor
and fervent commitment. That is as it should be
since it is important to have confidence in one's
choices.

However, as I examine different language designs, it
becomes clear that some language design choices,
while seeming to be a good idea when developed,
have not been as good as they might have seemed
to the original designers. This is why new and better
language designs continue to emerge.

Whatever your favorite language might be, it is important
for intellectual honesty to prevail. As you have indicated,
no language design is perfect. Even the best of the newly
designed languages can be criticized at some level. Still,
those new designs do advance the state of programming
practice. Older languages that evolve to adapt to new
ideas about programming and software development
are able to hold on to some share of the programming
marketplace. In some cases, the evolution results in the
language becoming really good in some niche. Other
times, the evolution of the language represents some real
improvements that guarantee a following for a long time.
The continued evolution of Fortran is a good case for
that last statement.

As I look at the evolution of C++, it seems that many new
features are intended to compensate for flaws in the original
design. The language seems to be turning into the rough
equivalent of a "pile of dry rot held up by flying buttresses."
New language designs such as Eiffel are so much better
that one wonders why C++ even exists. Of course the
answer is largely based on tradition, not on the value of
its inherent language design model.

I indicated earlier that language design choices need to be
made on the basis of criteria relevant to the problem one
is trying to solve. One of the primary criteria for the
environment in which I work is dependability. At present,
the most powerful language toolset to satisfy the need
for high-integrity, highly dependable software is called
SPARK, not C++, not PL/I. It is a niche language,
to be sure. One would not use SPARK for pedestrian
projects such as business data processing. However,
there is currently no language model better suited to
the creation of safety-critical software.

On the other hand, most languages can be used with
some confidence for other kinds of software, even C++.
As you have agreed, C++ includes a lot of very dangerous
options. What you have not acknowledged is that not
all languages do include those same opportunities for
mistakes.

Perhaps when you have the opportunity to step away from
your programming practice long enough to make objective
comparisons of the many languages choices, you will begin
to discover how each of these variations in design makes
a difference to the success of a project, depending on the
criteria you have chosen to define success.

Thanks for an interesting dialogue,

Richard Riehle


adaw...@sbcglobal.net

unread,
Oct 6, 2006, 9:54:27 PM10/6/06
to

"LR" <lr...@superlink.net> wrote in message
news:45267d5e$0$25792$cc2e...@news.uslec.net...

> adaw...@sbcglobal.net wrote:
>
>
>>
>> Often, it is better not to initialize a scalar to some value simply
>> because it can be done.
>
> Could you clarify/amplify that?
>
Yes. Under rare circumstances, it might be better to
let an exception occur, provided one includes a proper
exception handler. This decision would be driven by the
case where a valid value of any kind might deliver incorrect
results that might not be trapped until too late in the program.

> > An Ada compiler always gives the
>> programmer a warning when a scalar is never assigned a
>> value anywhere, initialized or not.
>
> Does Ada support a seperate compilation model? Interlanguage programming?
>

Actually, Ada is one of the most democratic languages you
will find. The separate compilation model is multi-layered,
and the language includes direct support for interoperability
with C, Java, C++, Fortran, and COBOL. That model
could be easily extended (it is defined in the language) to
include other languages as they become popular. Oh, and
Ada can also interact with Assembler and low-level machine code.
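
As a rough sketch of what that interoperability looks like in source
(the Ada names are invented; the imported routine is just the C
library's abs, and pragma Import is the Ada 95 mechanism):

with Interfaces.C;

procedure Interop_Demo is
   use type Interfaces.C.int;
   --  Bind an Ada name to the C library's abs() function.
   function C_Abs (X : Interfaces.C.int) return Interfaces.C.int;
   pragma Import (C, C_Abs, "abs");
   R : Interfaces.C.int;
begin
   R := C_Abs (-5);   -- R = 5, computed by the C routine
end Interop_Demo;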


>
>> This warning enables the
>> programmer to examine that warning and determine what
>> action is appropriate. The fact that a scalar is not initialized
>> is less problematic than the realization that it never gets a
>> value asssigned anywhere in the program.
>
> "anywhere in the program"? Or anywhere in a "translation unit" (sorry, I'm
> not sure what the proper name for this would be for Ada, so please translate
> appropriately)?
>

The compiler will determine that a scalar is never initialized at
any point where it is visible. The scoping rules are quite a bit
more strict in Ada than in most languages. Also Ada separates
the notion of scope and visibility. Therefore, the compiler can
easily determine whether a scalar has any chance of ever getting
a legal value when the program is executed.


>>
>> When using the SPARK examiner (a preprocessor for
>> creating highly reliable Ada code), one gets an even stronger
>> model for correctness. At this stage of software practice,
>> there is no toolset better guaranteed to provide correct
>> programs than SPARK. Before naysaying this, you need
>> to study SPARK for yourself. Otherwise, you simply
>> won't understand the argument.
>
> I took a look at this: http://en.wikipedia.org/wiki/SPARK_programming_language
>
> Interesting, but it leaves me unconvinced. I looked at
> http://www.praxis-his.com/sparkada/ but couldn't find a tutorial there.
> Perhaps you could recomend an online tutorial.
>

I don't have a stake in SPARK other than as a user. However, I
think the people at PRAXIS might be quite willing to answer any
questions you might have. SPARK, at present, seems to be the
most effective design for the support of and inclusion of formal
methods in a programming process.

Thanks again for your interest,

Richard Riehle


adaw...@sbcglobal.net

unread,
Oct 6, 2006, 10:00:28 PM10/6/06
to

"David Frank" <dave_...@hotmail.com> wrote in message
news:452691fb$0$3036$ec3e...@news.usenetmonster.com...
OK. I will simply include an allocatable array of integers
in the record.

type Integer_List is array(Positive range <>) of Integer;
type List_Type(Allocated_Size : Natural) is record
name : unbounded_string;
nums : integer_list(1..Allocated_Size);
end record;

Now, when I declare a record of this type I will simply code,

My_Record : List_Type(Allocated_Size => 200);

which will give me a bounded list, dynamically allocated, for the
record. I could also do this with an indirection-based solution,
but that will require a little more code.

Richard Riehle


LR

unread,
Oct 6, 2006, 11:21:42 PM10/6/06
to
adaw...@sbcglobal.net wrote:
> "LR" <lr...@superlink.net> wrote in message
> news:45252379$0$25786$cc2e...@news.uslec.net...
>
>>adaw...@sbcglobal.net wrote:
>>
>>>"LR" <lr...@superlink.net> wrote in message
>>>news:4523c687$0$25791$cc2e...@news.uslec.net...
>>>
>
> I have snipped away your comments and mine. It is clear
> that we will not find many points of agreement in this discussion.
>
> What is also clear is that you are satisfied with C++, just
> as I was satisfied with languages I once knew when I was
> busy working in the day-to-day world of programming
> over a nearly thirty year period.

No, that's not clear. I use the language and like it for many reasons,
but satisfied? I'm not sure about that.


> I have been privileged to have the time to step back and
> look closely at the relative merits of a large number of
> languages in recent years. I suspect that this is not
> something you have had the time to do since you are
> probably heavily involved in actually making software
> work each day, just as I was during my early career.

I wouldn't say that I haven't done this at all.


>
> My comments about C++, and other languages, are
> based on both my experience as a programmer and
> my research into the foundations of programming and
> programming language design. That research includes
> a lot of examination of a lot of programs and interviews
> with a lot of programmers. Most of those programmers
> are ardent about their own language choice and will
> argue the virtues of their chosen language with vigor
> and fervent commitment. That is as it should be
> since it is important to have confidence in one's
> choices.
>
> However, as I examine different language designs, it
> becomes clear that some language design choices,
> while seeming to be a good idea when developed,
> have not been as good as they might have seemed
> to the original designers. This is why new language
> and better designs continue to emerge.

And worse as well, no? In any case, most languages have both good
things and bad things.

> Whatever your favorite language might be, it is important
> for intellectual honesty to prevail. As you have indicated,
> no language design is perfect. Even the best of the newly
> designed languages can be criticized at some level.

New isn't the equivalent of better.

> Still,
> those new designs do advance the state of programming
> practice.

I don't always share that view. There's at least one newer language
that I know of that I don't think was created to advance the state of
the art, but to attack one of the creator's competitors.

> Older languages that evolve to adapt to new
> ideas about programming and software development
> are able to hold on to some share of the programming
> marketplace.

Yes, as you've pointed out, this is the case for Fortran.

> In some cases, the evolution results in the
> language becoming really good in some niche. Other
> times, the evolution of the language represents some real
> improvements that guarantee a following for a long time.
> The continued evolution of Fortran is a good case for
> that last statement.

I'm curious to see how this plays out. There are plenty of people who
program in Fortran who, or so it seems to me, are probably using a very
narrow subset of the current language. This persists because the subset
is considered to be simple and easy to use in an age where software is
becoming more and more complex.

> As I look at the evolution of C++, it seems that many new
> features are intended to compensate for flaws in the original
> design. The language seems to be turning into the rough
> equivalent of a "pile of dry rot held up by flying buttresses."

Interesting perspective. I tend to think that languages that get used
acquire interesting features. Much like human languages do.

> New language designs such as Eiffel are so much better
> that one wonders why C++ even exists.

Utility? Availability? Familiarity? I wonder why you wonder about it.

> Of course the
> answer is largely based on tradition, not on the value of
> its inherent language design model.

Tradition? Honestly, I've never heard anyone suggest that they chose a
language because of tradition.


>
> I indicated earlier that language design choices need to be
> made on the basis of criteria relevant to the problem one
> is trying to solve.

The best tool for the job? I once read a book printed around the start
of WWII that was for owners of milling machines that showed them how
they could do things that are normally done on a lathe, like make gears.
I guess there was a shortage of lathes. Best to use the tools you
have, and the people who know how to use them, and make the gears rather
than suffer the paralysis of fretting over the best tool.


> One of the primary criterion for the
> environment in which I work is dependability. At present,
> the most powerful language toolset to satisfy the need
> for high-integrity, highly dependable software is called
> SPARK, not C++, not PL/I.

I don't see how SPARK is really all that different from a liberal use of
assert() and lint. Perhaps I'm missing something obvious.

> It is a niche language,
> to be sure. One would not use SPARK for pedestrian
> projects such as business data processing.

I honestly don't understand this. Why not?

> However,
> there is currently no language model better suited to
> the creation of safety-critical software.


> On the other hand, most languages can be used with
> some confidence for other kinds of software, even C++.
> As you have agreed, C++ includes a lot of very dangerous
> options. What you have not acknowledged is that not
> all languages do include those same opportunities for
> mistakes.

All languages include opportunities for mistakes.

> Perhaps when you have the opportunity to step away from
> your programming practice long enough to make objective
> comparisons of the many languages choices, you will begin
> to discover how each of these variations in design makes
> a difference to the success of a project, depending on the
> criteria you have chosen to define success.

I like that qualifier. ;)


> Thanks for an interesting diaglogue,


Thank you very much too.

LR

LR

unread,
Oct 6, 2006, 11:27:54 PM10/6/06
to
adaw...@sbcglobal.net wrote:

> "LR" <lr...@superlink.net> wrote in message
> news:45267d5e$0$25792$cc2e...@news.uslec.net...
>
>>adaw...@sbcglobal.net wrote:
>>
>>
>>
>>>Often, it is better not to initialize a scalar to some value simply
>>>because it can be done.
>>
>>Could you clarify/amplify that?
>>
>
> Yes. Under rare circumstances, it might be better to
> let an exception occur, provided one includes a proper
> exception handler. This decision would be driven by the
> case where a valid value of any kind might deliver incorrect
> results that might not be trapped until too late in the program.

I'm not sure that I understand this. Are you saying that the condition
is met when a variable in your code holds a valid value that will
nevertheless cause an incorrect result? That sounds like poor design.

As if we had a shoe that is fine unless it has a foot in it.

>>>An Ada compiler always gives the
>>>programmer a warning when a scalar is never assigned a
>>>value anywhere, initialized or not.
>>
>>Does Ada support a seperate compilation model? Interlanguage programming?
>>
>
> Actually, Ada is one of the most democratic languages you
> will find. The separate compilation model is multi-layered,
> and the language includes direct support for interoperability
> with C, Java, C++, Fortran, and COBOL. That model
> could be easily extended (it is defined in the language) to
> include other languages as they become popular. Oh, and
> Ada can also interact with Assembler and low-level machine code.

My question really had more to do with how SPARK was going to figure out
if a particular variable is initialized or how it should be. It seems
to me that separate compilation might cause complications. Is the
conditional information saved in whatever the object files are called?


>
>>>This warning enables the
>>>programmer to examine that warning and determine what
>>>action is appropriate. The fact that a scalar is not initialized
>>>is less problematic than the realization that it never gets a
>>>value asssigned anywhere in the program.
>>
>>"anywhere in the program"? Or anywhere in a "translation unit" (sorry, I'm
>>not sure what the proper name for this would be for Ada, so please translate
>>appropriately)?
>>
>
> The compiler will determine that a scalar is never initialized at
> any point where it is visible. The scoping rules are quite a bit
> more strict in Ada than in most languages. Also Ada separates
> the notion of scope and visibility. Therefore, the compiler can
> easily determine whether a scalar has any chance of ever getting
> a legal value when the program is executed.

Are you speaking of Ada, or SPARK here? I'm not sure I can see how this
can happen for a separate compilation model unless there is an awful lot
of info kept after compilation.

>
>>>When using the SPARK examiner (a preprocessor for
>>>creating highly reliable Ada code), one gets an even stronger
>>>model for correctness. At this stage of software practice,
>>>there is no toolset better guaranteed to provide correct
>>>programs than SPARK. Before naysaying this, you need
>>>to study SPARK for yourself. Otherwise, you simply
>>>won't understand the argument.
>>
>>I took a look at this: http://en.wikipedia.org/wiki/SPARK_programming_language
>>
>>Interesting, but it leaves me unconvinced. I looked at
>>http://www.praxis-his.com/sparkada/ but couldn't find a tutorial there.
>>Perhaps you could recomend an online tutorial.
>>
>
> I don't have a stake in SPARK other than as a user. However, I
> think the people at PRAXIS might be quite willing to answer any
> questions you might have. SPARK, at present, seems to be the
> most effective design for the support of and inclusion of formal
> methods in a programming process.

For now. Like you said elsewhere in this thread, languages evolve.
It'll be interesting to see if other languages try to adapt this more
formally. Or not.

LR

John W. Kennedy

unread,
Oct 7, 2006, 1:30:54 AM10/7/06
to
adaw...@sbcglobal.net wrote:
> "glen herrmannsfeldt" <g...@seniti.ugcs.caltech.edu> wrote in message
> news:ees344$6s$7...@naig.caltech.edu...
>> David Frank <dave_...@hotmail.com> wrote:
>>
>>> E.G. No one is willing to confirm if PL/I has equivalent declaration of
>>> Fortran's defined type variables because they dont trust there own knowledge
>>> well enuf to state the facts OR in Vowels case he wont respond because he
>>> knows it does NOT.
>> I don't know if it has defined type variables, I presume you mean
>> something like C's typedef. I don't remember that Fortran does, either.
>>
> typedef is a farce. Too many C programmers think it is doing
> something it isn't doing at all. It is not a capability for declaring or
> defining
> new types. Rather, it is a way to create an alias for an existing type.
>
> I think maybe David is asking whether one can invent new types as one
> does in Ada. For example, how would one declare, in PL/I, the following?
>
> type Int16 is range -2**15 .. 2**15 - 1;
> for Int16'Size use 16;
>
> which says give the new type called Int16 a range as shown
> and force it to be stored in 16 bits; or,
>
> type Color is (Red, Yellow, Blue);
> for Color use (Red => 16#34F2#,
> Yellow => 16#34F3#,
> Blue => 16#34F4#);
>
> which says, for the enumerated values named Red, Yellow, and Blue
> force the machine representation to the hexadecimal values shown.
>
> I am pretty sure something like this is possible in PL/I. Perhaps Robin
> can give an example in PL/I source code.

DEFINE ALIAS INT16 FIXED BINARY(15,0); -- but it is not guaranteed to be
in 16 bits. Of course, it isn't necessarily guaranteed in Ada, either.
And PL/I does not have the ability to declare true numeric types -- only
a aliases. Ranges are also unavailable, except insofar as they are
implied by precisions.

DEFINE ORDINAL COLOR (RED VALUE (34F2B4),
YELLOW VALUE (34F3B4),
BLUE VALUE (34F4B4));

--
John W. Kennedy
"The blind rulers of Logres
Nourished the land on a fallacy of rational virtue."
-- Charles Williams. "Taliessin through Logres: Prelude"

John W. Kennedy

unread,
Oct 7, 2006, 1:35:45 AM10/7/06
to
LR wrote:
> I've always wondered though, what happens if you specify something like
> bin(1000) fixed, on a machine whose largest native fixed type is 32 bits?

Whatever the compiler designer feels like doing -- but, as a rule, it
will only go up to whatever a C compiler on the same system implements
as "long".

James J. Weinkam

unread,
Oct 7, 2006, 3:42:47 AM10/7/06
to
LR wrote:
>
>
> I don't always share that view. There's at least one newer language
> that I know of that I don't think was created to advance the state of
> the art, but to attack one of the creator's competitors.
>

Wow, what a statement! Either tell us the language, the creator, and the
competitor, or don't tell us anything at all. Otherwise all it is is innuendo.

James J. Weinkam

unread,
Oct 7, 2006, 3:47:06 AM10/7/06
to
You seem to have omitted some quotes in your B4 constants.

David Frank

unread,
Oct 7, 2006, 4:50:32 AM10/7/06
to

<adaw...@sbcglobal.net> wrote in message
news:0TDVg.8033$TV3....@newssvr21.news.prodigy.com...

>
> "David Frank" <dave_...@hotmail.com> wrote in message
> news:452691fb$0$3036$ec3e...@news.usenetmonster.com...
>>
>
>> but you have declared nums as a scalar not as a allocatable ARRAY member
>> of List_Type
>> therefore it isnt equivalent to my Fortran declaration and as a result
>> cant hold ALL the data
>> of the "arbitrary lists" problem..
>>
> OK. I will simply include an allocatable array of integers
> in the record.
>
> type Integer_List is array(Positive range <>) of Integer;
> type List_Type(Allocated_Size) is record
> name : unbounded_string;
> nums : integer_list(1..Allocated_Size);
> end record;
>
> Now, when I declare a record of this type I will simply code,
>
> My_Record : List_Type(Allocated_Size => 200);
>
> which will give me a bounded list, dynamically allocated, for the
> record. I could also do this with an indirection-based solution,
> but that will require a little more code.
>
> Richard Riehle
>

Not sure, but it appears you still haven't duplicated my Fortran declaration:
Type List
Character,Allocatable :: Name(:)
Integer,Allocatable :: Nums(:)
End Type
Type (List),Allocatable :: Lists(:)

Which allows me to have INDEPENDENT array sizes for the
Name, Nums members of EACH instance of List within Lists,
i.e. EACH Lists(n)%Nums can be independently allocated.

IOW all data from a file with arbitrary #lists can be contained.


adaw...@sbcglobal.net

unread,
Oct 7, 2006, 12:14:24 PM10/7/06
to

"LR" <lr...@superlink.net> wrote in message
news:45271e8c$0$25783$cc2e...@news.uslec.net...

> adaw...@sbcglobal.net wrote:
>
>
>>
>> Yes. Under rare circumstances, it might be better to
>> let an exception occur, provided one includes a proper
>> exception handler. This decision would be driven by the
>> case where a valid value of any kind might deliver incorrect
>> results that might not be trapped until too late in the program.
>
> I'm not sure that I understand this. Are you saying that the condition is met
> if you have a variable in your code with a valid value will cause an incorrect
> result? That sounds like poor design.
>
> As if we had a shoe that is fine unless it has a foot in it.
>
The initialization of a scalar with a value that could be interpreted
as correct at run-time, if it becomes a kind of default value, may
cause more run-time errors than if it is not initialized at all. It is not
always possible to decide that a given initialization is better than no
value at all. The circumstances will vary, of course.

>
>
>>>>An Ada compiler always gives the
>>>>programmer a warning when a scalar is never assigned a
>>>>value anywhere, initialized or not.
>>>
>>>Does Ada support a seperate compilation model? Interlanguage programming?
>>>
>>
>> Actually, Ada is one of the most democratic languages you
>> will find. The separate compilation model is multi-layered,
>> and the language includes direct support for interoperability
>> with C, Java, C++, Fortran, and COBOL. That model
>> could be easily extended (it is defined in the language) to
>> include other languages as they become popular. Oh, and
>> Ada can also interact with Assembler and low-level machine code.
>
> My question really had more to do with how SPARK was going to figure out if a
> particular variable is initialized or how it should be. It seems to me that
> seperate compilation might cause complications. Is the conditional
> information saved in whatever the object files are called?
>
During development, there is quite a bit of supporting information saved
for a full verification process. Separate compilation is designed over the
package model. Let me provide a simple example here. Be aware that
this is a toy example.

A library unit may be composed of multiple compilation units. In every
case, the various parts of a library unit may be compiled separately.
The first compilation unit is the specification.

generic
   type Item is private;
package Stack is
   procedure Push (Data : in Item);
   procedure Pop  (Data : out Item);
   function  Is_Full return Boolean;
end Stack;

This is a generic package meaning that it is independent of any
particular data type. The same package could be instantiated
for an integer, a float, or whatever.

Next is the implementation part, called a package body.

package body Stack is
   --  here we can define the structure of the Stack
   procedure Push (Data : in Item) is separate;
   procedure Pop  (Data : out Item) is separate;
   function  Is_Full return Boolean is separate;
end Stack;

Note that the body does not need to refer to the specification
through #include mechanisms as one would with C++. This
is because the library unit is integral even though it can be
separately compiled.

Each of the procedures and functions can also be compiled
as separate files. Once again, the Ada library model is
designed so the entire library unit is treated as a single library
unit, even though the compilation units can be in separate
files.
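
To make that concrete, here is one way those separately compiled
pieces might look once the body is given an actual (array-backed)
representation. Capacity, Buffer, and Top are invented names, and the
whole thing is only a sketch:

package body Stack is
   Capacity : constant := 100;   -- invented bound for the sketch
   Buffer   : array (1 .. Capacity) of Item;
   Top      : Natural := 0;

   procedure Push (Data : in Item) is separate;
   procedure Pop  (Data : out Item) is separate;
   function  Is_Full return Boolean is separate;
end Stack;

-- and then, each in its own file, the subunits:

separate (Stack)
procedure Push (Data : in Item) is
begin
   Top := Top + 1;
   Buffer (Top) := Data;
end Push;

separate (Stack)
procedure Pop (Data : out Item) is
begin
   Data := Buffer (Top);
   Top  := Top - 1;
end Pop;

separate (Stack)
function Is_Full return Boolean is
begin
   return Top = Capacity;
end Is_Full;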

Ada (and SPARK which uses Ada as its underlying compilation
engine) has a unique visibility model that ensures consistency across
these compilation units. Unlike a #include which gives one both
scope and visibility, Ada separates these two ideas. An element
of a library unit may be in scope, but not be directly visible. This
subtle difference assists in the feature I described earlier: ensuring
that every scalar can be tested for whether it can ever be given a
value anywhere in the final program.

In another reply, you indicate that you don't see the difference
between lint assert and what SPARK does. The assert is not
as fine-grained in its checking as the SPARK model. Further,
SPARK's checking is primarily static. A designer inserts the
assertions in the code and the entire program is statically
evaluated to determine whether there are any places in the
code where those assertions can be violated. Also, SPARK
can detect, not in all cases, but in some, whether there will
be conflicting assertions.
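
Roughly, and from memory (the exact annotation syntax should be
checked against the Praxis manuals), those dependency assertions are
special comments that the Examiner reads, along these lines --
Odometer and Trip are just illustrative names:

package Odometer
--# own Trip;                    -- package state announced to the Examiner
is
   procedure Zero_Trip;
   --# global out Trip;
   --# derives Trip from ;       -- new value of Trip depends on nothing

   procedure Add_Mile;
   --# global in out Trip;
   --# derives Trip from Trip;   -- new value depends only on the old value
end Odometer;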

The SPARK model is very close to a theorem-proving
approach, although we still have a long way to go in software
before we are really able to satisfy all the issues of theorem
proving.

You asked why we would not do this with all software. The
answer is primarily economics. Formal methods are not the
right approach for every software problem. It is an expensive
way to build software. However, it is also the right way to
build software in safety-critical environments. SPARK is
unique and that means expensive. If a software system
must absolutely work according to its specification or
people could be killed or maimed because of a software
failure, SPARK is probably the right approach. Most
software does not fall into that category.

For safety-critical software, the stakes are very high. Few
other language designs will be adequate; not C++, not
PL/I, not C, not Fortran, and not even all of Ada. SPARK
forbids the use of some Ada constructs because they cannot
be confirmed as safe by the SPARK Examiner.

Thanks for your question.

Richard Riehle


adaw...@sbcglobal.net

unread,
Oct 7, 2006, 12:18:34 PM10/7/06
to

"David Frank" <dave_...@hotmail.com> wrote in message
news:45276d40$0$3047$ec3e...@news.usenetmonster.com...
The example I posted shows an unconstrained array and an unconstrained
record. At the time of declaration, each record can be allocated a different
number of integer values. The Name is an unbounded string which means
it can vary according to however much I want to put in it. In fact, we
can vary that string size dynamically, if we wish. For the integer list,
had I chosen to use a simple linked list for my implementation, the list
could also grow and shrink dynamically.
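
Concretely, assuming the Integer_List and List_Type declarations from
earlier in the thread, two objects of the same type can be given quite
different sizes (a fragment, to be placed inside some subprogram):

declare
   Small : List_Type (Allocated_Size => 10);
   Large : List_Type (Allocated_Size => 10_000);
begin
   Small.Nums (1)      := 1;
   Large.Nums (10_000) := 42;
   --  Small.Nums (11) := 0;  would raise Constraint_Error at run time
end;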

Richard Riehle


adaw...@sbcglobal.net

unread,
Oct 7, 2006, 12:23:10 PM10/7/06
to

"LR" <lr...@superlink.net> wrote in message
news:4526ccb8$0$25774$cc2e...@news.uslec.net...

>
> I've always wondered though, what happens if you specify something like
> bin(1000) fixed, on a machine whose largest native fixed type is 32 bits?
>
Some languages such as Scheme, Smalltalk, Python, and many others
allow the programmer to have numeric values of any size they want, and
to do arithmetic on them. Consider,

x := 15415112987195719571729512219305 /
1412989375160325123512512612740571975923571925791

This would evaluate just fine in some of the languages named. The
numbers are not implemented simply on the basis of the underlying
word size of the machine.

Richard Riehle

LR

unread,
Oct 7, 2006, 1:46:57 PM10/7/06
to
adaw...@sbcglobal.net wrote:

I'm aware of this, but I was asking particularly about PL/I.

But since you've raised the issue, how for example, are irrational
constants represented/stored in these languages?

And of course, for my money, the nice thing about C++ is that you can
come pretty close to the syntax above, if you take the time to write
classes that can deal with these kinds of numbers.

I think that Java has something like BigNum or BigInt or something to
handle these already.

LR

LR

unread,
Oct 7, 2006, 2:03:13 PM10/7/06
to
adaw...@sbcglobal.net wrote:

> "LR" <lr...@superlink.net> wrote in message
> news:45271e8c$0$25783$cc2e...@news.uslec.net...
>
>>adaw...@sbcglobal.net wrote:
>>
>>
>>
>>>Yes. Under rare circumstances, it might be better to
>>>let an exception occur, provided one includes a proper
>>>exception handler. This decision would be driven by the
>>>case where a valid value of any kind might deliver incorrect
>>>results that might not be trapped until too late in the program.
>>
>>I'm not sure that I understand this. Are you saying that the condition is met
>>if you have a variable in your code with a valid value will cause an incorrect
>>result? That sounds like poor design.
>>
>>As if we had a shoe that is fine unless it has a foot in it.
>>
>
> The initialization of a scalar with a value that could be intepreted
> as correct at run-time, if it becomes a kind of default value, may
> cause more run-time errors than if it is not initialized at all. It is not
> always possible to decide that a given initialization is better than no
> value at all. The circumstances will vary, of course.

I find this pretty confusing. How can a variable have "no value at all"
unless you have some meta-data attached to the variable that indicates
that it hasn't been initialized or had a value assigned to it?
Otherwise, I think the bits will have some 'value'. It may be a 'legal'
value or 'not legal' but the bits will indicate some value. No?

I get the feeling I'm missing something.

Also, can you give an example where no initialization is better than
initialization?

[snip]
[snip]


> Note that the body does not need to refer to the specification
> through #include mechanisms as one would with C++. This
> is because the library unit is integral even though it can be
> separately compiled.

I see pluses and minuses in that.

>
> Each of the procedures and functions can also be compiled
> as separate files. Once again, the Ada library model is
> designed so the entire library unit is treated as a single library
> unit, even though the compilation units can be in separate
> files.

There has to be some underlying method for determining where the files
are though, right? Is this implementation/platform dependent?


[snip]


> In another reply, you indicate that you don't see the difference
> between lint assert and what SPARK does.

I should have been more specific. I don't see much of a difference,
although it seems to me that SPARK is kind of like these but stronger.

> The assert is not
> as fine-grained in its checking as the SPARK model. Further,
> SPARK's checking is primarily static.

Another difference is that assert is a runtime check. But C++ may yet
get some static checking features; templates will make that likely.

> A designer inserts the
> assertions in the code and the entire program is statically
> evaluated to determine whether there are any places in the
> code where those assertions can be violated.

'Are' violated, or 'can be' violated? I feel a little confused here, is
the code that evaluates the SPARK assertions good enough to tell if the
constraints will be violated at run time?

> Also, SPARK
> can detect, not in all cases, but in some, whether there will
> be conflicting assertions.
>
> The SPARK model is very close to a theorem-proving
> approach, although we still have a long way to go in software
> before we are really able to satisfy all the issues of theorem
> proving.

I'd like to know about it when you can tell if my program will halt. ;)

>
> You asked why we would not do this with all software. The
> answer is primarily economics.

Of course.

I'm also curious about the size of the programs that you've used these
methods for, and how long the compilation step takes. Is there any
overhead in the executables that you create?

Also, what kinds of problems have you run into? Things that surprised you?

> Formal methods are not the
> right approach for every software problem. It is an expensive
> way to build software. However, it is also the right way to
> build software in safety-critical environments. SPARK is
> unique and that means expensive. If a software system
> must absolutely work according to is specification or

What method are you using to specify how the software works, and how do
you convert or translate the specification into SPARK code?


> people could be killed or maimed because of a software
> failure, SPARK is probably the right approach. Most
> software does not fall into that category.
>
> For safety-critical software, the stakes are very high. Few
> other language designs will be adequate; not C++, not
> PL/I, not C, not Fortran, and not even all of Ada. SPARK
> forbids the use of some Ada constructs because they cannot
> be confirmed as safe by the SPARK Examiner.

Examples of this last part please?

>
> Thanks for your question.

Thanks for your answers.

LR

John W. Kennedy

unread,
Oct 7, 2006, 7:40:56 PM10/7/06
to

C#, of course. Microsoft is unalterably opposed to portable standards of
all kinds.

John W. Kennedy

unread,
Oct 7, 2006, 7:49:52 PM10/7/06
to
LR wrote:
> adaw...@sbcglobal.net wrote:
>
>> "LR" <lr...@superlink.net> wrote in message
>> news:4526ccb8$0$25774$cc2e...@news.uslec.net...
>>
>>> I've always wondered though, what happens if you specify something
>>> like bin(1000) fixed, on a machine whose largest native fixed type is
>>> 32 bits?
>>>
>>
>> Some languages such as Scheme, Smalltalk, Python, and many others
>> allow the programmer to have numeric values of any size they want, and
>> to do arithmetic on them. Consider,
>>
>> x := 15415112987195719571729512219305 /
>> 1412989375160325123512512612740571975923571925791
>>
>> This would evaluate just fine in some of the languages named. The
>> numbers are not implemented simply on the basis of the underlying
>> word size of the machine.
>
> I'm aware of this, but I was asking particularly about PL/I.
>
> But since you've raised the issue, how for example, are irrational
> constants represented/stored in these languages?

Irrationals are normally restricted to floating-point, and to a certain
precision.

A few languages implement rationals as a distinct type, with numerators
and denominators. In such a language 7*(1/7) is guaranteed to return
exactly 1.
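
For instance, a deliberately minimal, hand-rolled rational type
sketched in Ada (Ada has no built-in rational type; the names are
invented) shows why the exactness claim holds:

procedure Rational_Demo is
   type Rational is record
      Num : Integer;
      Den : Positive;
   end record;

   function "*" (L, R : Rational) return Rational is
   begin
      return (Num => L.Num * R.Num, Den => L.Den * R.Den);
   end "*";

   Seven       : constant Rational := (Num => 7, Den => 1);
   One_Seventh : constant Rational := (Num => 1, Den => 7);
   Product     : constant Rational := Seven * One_Seventh;
begin
   --  Product is (7, 7), i.e. exactly one; nothing was ever rounded.
   if Product.Num /= Product.Den then
      raise Program_Error;
   end if;
end Rational_Demo;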

> And of course, for my money, the nice thing about C++ is that you can
> come pretty close to the syntax above, if you take the time to write
> classes that can deal with these kinds of numbers.
>
> I think that Java has something like BigNum or BigInt or something to
> handle these already.

BigInteger and BigDecimal. However, because Java does not overload
operators, expressions are of the form

a.multiply(x).add(b)

rather than

a*x+b

Ruby (which is purely OO) automatically switches between machine
integers and big software integers.

John W. Kennedy

unread,
Oct 7, 2006, 7:55:25 PM10/7/06
to

Yeah. Make that: '34F2'XN, etc.. I've never had occasion to use hex
FIXED BIN constants (as opposed to hex BIT constants), so I messed up.

LR

unread,
Oct 7, 2006, 8:36:38 PM10/7/06
to
John W. Kennedy wrote:

> James J. Weinkam wrote:
>
>> LR wrote:
>>
>>>
>>>
>>> I don't always share that view. There's at least one newer language
>>> that I know of that I don't think was created to advance the state of
>>> the art, but to attack one of the creator's competitors.
>>>
>>
>> Wow, what a statement! Either tell us the language, the creator, and
>> the competitor, or don't tell us anything at all. Otherwise all it is
>> is innuendo.

True.

>
>
> C#, of course.

I said "at least one".

> Microsoft is unalterably opposed to portable standards of
> all kinds.

I don't think this is so. I recall reading something by Herb Sutter
that said that MS was maintaining its commitment to the C++ standard.

And speaking of standards where is the standard for Java, or when was
the standard for PL/I last updated?

LR

adaw...@sbcglobal.net

unread,
Oct 7, 2006, 9:32:40 PM10/7/06
to

"LR" <lr...@superlink.net> wrote in message
news:452847e6$0$25778$cc2e...@news.uslec.net...
> John W. Kennedy wrote:
>>
>> C#, of course.

>
> > Microsoft is unalterably opposed to portable standards of
>> all kinds.
>
> I don't think this is so. I recall reading something by Herb Sutter that said
> that MS was maintaining it's commitment to the C++ standard.
>
> And speaking of standards where is the standard for Java, or when was the
> standard for PL/I last updated?
>
The C# language, the existence of which may have been motivated
by the evil intent of Microsoft policy, is actually a slight improvement
over Java. In particular, the addition of a feature called "delegates"
enhances the power of C# over Java for functional style software
development.

Microsoft originally announced that it intended for C# to become
an ISO standard (as are C++, Ada, and several other languages)
although I have not seen them submit an application to ISO yet.

Even so, I will admit that the reason for creating C# in the first
place appeared less inspired by the need for a better language
than the spiteful deed of a mean-spirited monopolistic company.
The reasons for the invention of C# should not be taken as a
reason for criticizing the good job its designers did in bringing
it into existence.

Further, the entire .NET model (on which C# is built) has some
very nice properties. In particular, the CLR (Common
Language Runtime) enhances the options for language
interoperability. CLR and .NET are significant contributions
to the software architecture environment.

I am no fan of Microsoft, and don't place a lot of trust in
their good intentions. However, when they do produce a
good product, I have to admit it.

Richard Riehle


adaw...@sbcglobal.net

unread,
Oct 7, 2006, 10:12:44 PM10/7/06
to

"LR" <lr...@superlink.net> wrote in message
news:4527ebb2$0$25792$cc2e...@news.uslec.net...

> adaw...@sbcglobal.net wrote:
>>
>> The initialization of a scalar with a value that could be intepreted
>> as correct at run-time, if it becomes a kind of default value, may
>> cause more run-time errors than if it is not initialized at all. It is not
>> always possible to decide that a given initialization is better than no
>> value at all. The circumstances will vary, of course.
>
> I find this pretty confusing. How can a variable have "no value at all"
> unless you have some meta-data attached to the variable that indicates that it
> hasn't been initialized or had a value assigned to it? Otherwise, I think the
> bits will have some 'value'. It may be a 'legal' value or 'not legal' but the
> bits will indicate some value. No?
>
> I get the feeling I'm missing something.
>
> Also, can you give an example where no initialization is better than
> initialization?
>
Suppose I have a variable that I initialize to zero so my program
can compile without warnings. If my program is designed so I
never have a method that updates that value, when the program
tries to use that value, it turns out to be valid and there is no
immediate error message.

On the other hand, suppose I do not assign an initial value to that
variable. When I try to use it in my program, it will be an invalid
value and the program will raise an exception. It is often better
for a program to fail to do anything than to do something that looks
right but isn't.
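
In Ada terms, a minimal sketch of that trade-off might look like this
(Reading and the other names are invented; 'Valid is the standard
attribute for asking whether a scalar currently holds a legal value):

with Ada.Text_IO;

procedure Fail_Loudly is
   subtype Reading is Integer range 1 .. 1_000;
   Current : Reading;   -- deliberately NOT given a plausible default
begin
   --  ... the assignment to Current that should happen never does ...

   if not Current'Valid then
      raise Program_Error;   -- fail here, loudly, not later with bad data
   end if;
   Ada.Text_IO.Put_Line (Integer'Image (Current));
exception
   when Program_Error =>
      Ada.Text_IO.Put_Line ("Reading was never assigned");
end Fail_Loudly;

Of course, if the stray bits in Current happen to look like a legal
Reading, even this check cannot tell; the compiler will at least warn
that Current may be referenced before it has a value.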


>
>> Note that the body does not need to refer to the specification
>> through #include mechanisms as one would with C++. This
>> is because the library unit is integral even though it can be
>> separately compiled.
>
> I see pluses and minuses in that.
>
>>
>> Each of the procedures and functions can also be compiled
>> as separate files. Once again, the Ada library model is
>> designed so the entire library unit is treated as a single library
>> unit, even though the compilation units can be in separate
>> files.
>
> There has to be some underlying method for determining where the files are
> though, right? Is this implementation/platform dependent?
>

No. This is not implementation dependent. The specification for
the Ada language demands that the compiler detect every inconsistency,
even in separate compilation.

Unlike C or C++, Ada library units must compile correctly before
any dependent units can be compiled. That is, where the #include
is textual, the Ada equivalent is library based. The existence of the
library, along with the scope and visibility rules, ensure that no
artifact of a large program will be ignored during the compilation
of some dependent unit.

In the early days of Ada, we made a lot of mistakes in design that
led to excessively long compilation times. I recall a program of
4.5 million lines where the compilations took almost two days. The
computers were slower, but most of the slowness was due to our
failure to understand correct design procedures. Once we did
understand those procedures, slowness due to dependency
checking virtually vanished.


>
> [snip]
>> In another reply, you indicate that you don't see the difference
>> between lint assert and what SPARK does.
>
> I should have been more specific. I don't see much of a difference, although
> it seems to me that SPARK is kind of like these but stronger.
>

SPARK goes well beyond the simple assert model. To begin with, it
directly supports the notion of pre-, post-, and invariant conditions. The
post-condition model is especially powerful. Eiffel also supports this
as a dynamic (run-time) feature.

SPARK also goes beyond simple assertion checking. It includes a
special program called the Examiner which performs static analysis
of the entire set of programs. In part, this is possible because of
SPARK's use of Ada's library model, but it is also a function of
the many kinds of assertions (including dependency assertions)
the designer can include in the code.
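
A rough, from-memory sketch of what those pre- and post-conditions
look like on a subprogram declaration (Tank, Add_Fuel, and Capacity
are invented names; the tilde denotes the initial value of an in out
item, and the exact syntax should be checked against the Praxis
documentation):

package Tank is
   Capacity : constant := 500;

   procedure Add_Fuel (Level : in out Integer; Amount : in Integer);
   --# derives Level from Level, Amount;
   --# pre  Amount > 0 and Level + Amount <= Capacity;
   --# post Level = Level~ + Amount;
end Tank;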


>
> 'Are' violated, or 'can be' violated? I feel a little confused here, is the
> code that evaluates the SPARK assertions good enough to tell if the
> constraints will be violated at run time?
>

A good question. The PRAXIS people will tell you that the
kind of static checking done by SPARK will eliminate any
errors that can be checked by the SPARK Examiner. This
seems to be a very large number of kinds of errors.

Even so, no one claims that a programmer, or software
designer, will always specify every feature with perfect
accuracy. There is always room for some kind of error.
All SPARK can do, and does do, is lower the probability
of errors. It seems to do that better than anything else currently
available for software development.


>
>>
>> The SPARK model is very close to a theorem-proving
>> approach, although we still have a long way to go in software
>> before we are really able to satisfy all the issues of theorem
>> proving.
>
> I'd like to know about it when you can tell if my program will halt. ;)
>

The "halting problem" is still with us, as a problem in formal proofs.
However, we usually find ways to avoid dealing with it in real
software solutions. I cannot think of the last time one of my
programs failed to halt, even though I could not have provided
a formal proof that it would.


>>
>> You asked why we would not do this with all software. The
>> answer is primarily economics.
>
> Of course.
>
> I'm also curious about the size of the programs that you've used these methods
> for, and how long the compilation step takes. Is there any overhead in the
> executables that you create?
>

Whenever we leave exception-handling activated in a deployed program,
there is a slight overhead. Engineering is largely about trade-offs in
design and deployment decisions. An engineer is striving to create a
product that abides by the "principle of least surprise." SPARK and
Ada are designed, to a large extent, to reduce surprise in a software
product.

While we cannot eliminate surprise entirely in large-scale software
products, we can reduce the incidence of surprise. Further, we can
also include "software circuit breakers" in our design in the form of
exception handling routines. Safety-critical software should not
rely too heavily on exception handling, but no one would install
electrical wiring in their home without considering the potential for
a spike in the current that might burn down their home.

>>
>> For safety-critical software, the stakes are very high. Few
>> other language designs will be adequate; not C++, not
>> PL/I, not C, not Fortran, and not even all of Ada. SPARK
>> forbids the use of some Ada constructs because they cannot
>> be confirmed as safe by the SPARK Examiner.
>
> Examples of this last part please?
>

Ada includes a model for concurrency. The use of this
feature cannot be proven correct for a complex system. Also,
dynamic binding (as in OOP) is a no-no for SPARK. It is
too dependent on unpredictable events (regardless of what
OOP language one might use). There are other features of
Ada (and other languages) that cannot be proven with formal
methods. Anything that cannot be confirmed by SPARK is
rejected by it.

Even so, SPARK does nothing more than ensure that those
things it can check are valid and safe. If someone chooses
to use unsafe constructs, they are on their own.
I strongly recommend you check out the web site from
PRAXIS. You will get a lot more information from them,
probably more accurate information than you will get from me.
The PRAXIS people are continually improving their product,
and I may be a little short on information regarding the latest
advances in their pursuit for highly-reliable software.

I do know that they devote their entire set of corporate
resources to high-integrity software and that those organizations
who have chosen to use SPARK are regularly contributing
new ideas for even better dependability. The safety-critical
software community is fairly small relative to other parts of
the software world, but they are dedicated to the constant
improvement in tools that ensure the safety of the software
products that fly people around the planet, control nuclear
power-plants, control the switching mechanisms in rail
transportation systems, and keep software-controlled
medical devices working without failures.

Richard Riehle


adaw...@sbcglobal.net

unread,
Oct 7, 2006, 10:33:02 PM10/7/06
to

"John W. Kennedy" <jwk...@attglobal.net> wrote in message
news:G2XVg.69$Ii5...@newsfe10.lga...

> LR wrote:
>> adaw...@sbcglobal.net wrote:
>>
>>> "LR" <lr...@superlink.net> wrote in message
>>> news:4526ccb8$0$25774$cc2e...@news.uslec.net...
>>>
>>>> I've always wondered though, what happens if you specify something like
>>>> bin(1000) fixed, on a machine whose largest native fixed type is 32 bits?
>>>
>>> Some languages such as Scheme, Smalltalk, Python, and many others
>>> allow the programmer to have numeric values of any size they want, and
>>> to do arithmetic on them. Consider,
>>>
>>> x := 15415112987195719571729512219305 /
>>> 1412989375160325123512512612740571975923571925791
>>>
>>> This would evaluate just fine in some of the languages named. The
>>> numbers are not implemented simply on the basis of the underlying
>>> word size of the machine.
>>
>> I'm aware of this, but I was asking particularly about PL/I.
>>
>> But since you've raised the issue, how for example, are irrational constants
>> represented/stored in these languages?
>
> Irrationals are normally restricted to floating-point, and to a certain
> precision.
>
We need to be a little careful about terminology. The numbers I showed
were rational numbers (i.e., based on a ratio model). Many languages
have built in support for rational numbers in the form of fractions. For
example, in Scheme I can add the following quite easily (not Scheme
syntax),

(1/4 + 5/17 + 3/93) * (827 / 515/359)

will give a fractional result, not a decimal fraction.

The rational numbers, as shown, are never converted, internally,
to decimal fractions. This preserves a high degree of accuracy
since we never lose precision due to the conversion to binary
and back to decimal that occurs with so many language designs.

Richard Riehle


adaw...@sbcglobal.net

unread,
Oct 7, 2006, 10:35:46 PM10/7/06
to

"Shmuel (Seymour J.) Metz" <spam...@library.lspace.org.invalid> wrote in
message news:45266089$14$fuzhry+tra$mr2...@news.patriot.net...
> In <2PlVg.7848$TV3....@newssvr21.news.prodigy.com>, on 10/06/2006
> at 05:27 AM, <adaw...@sbcglobal.net> said:
>
>>I think it is very short-sighted of the PL/I community to continue to
>>resist developing an OOP version of the language.
>
> It would be if they were.
>
Please expand on this reply. Is there an operational version
of PL/I that now supports object-oriented programming?

Extensible inheritance?
Polymorphism?
Dynamic binding?
Message passing?
Distinguished receiver?
Genericity?

Thanks.

Richard Riehle


David Frank

unread,
Oct 8, 2006, 1:55:37 AM10/8/06
to

"robin" <rob...@bigpond.com> wrote in message
news:7xtVg.42536$rP1....@news-server.bigpond.net.au...

> "David Frank" <dave_...@hotmail.com> wrote in message
> news:45262687$0$3016$ec3e...@news.usenetmonster.com...
>>
>> integer(2) :: Int16
>

> No, this doesn' give you 16 bits in Fortran.

It certainly does for those current compilers that support 16 bit integers

..

Bob Lidral

unread,
Oct 8, 2006, 4:55:07 AM10/8/06
to
adaw...@sbcglobal.net wrote:

What do you mean by an "invalid value"? Most variables are defined in
some context that restricts their valid values to some subset (or, in
pathological cases, some superset) of the values representable by the
underlying machine representation. For example, a variable intended to
be used as an array subscript would probably be represented internally
as some sort of native machine integer value. Valid values for such a
variable would be the range of integers corresponding to valid
subscripts for the array. In such a case, it's easy to choose an
"invalid value" -- i.e., one outside the range of legal subscripts --
but it's not clear to me how that would necessarily raise an exception
at run time. Certainly it could be made to do so if the array were a
properly-defined C++ class or if SUBSCRIPTRANGE were in effect in PL/I
or similar mechanisms in other languages but, in most cases, raising an
exception for such a case is not automatic.

The IEEE floating point standard has a signaling NaN that could be used
to cause an uninitialized floating point variable (well, one initialized
to a signaling NaN in the absence of explicit initialization) to raise
an exception but that depends on the underlying hardware and is not
likely to work for integers, character strings, Booleans, pointers, etc.

Then there's the issue of the validity of a value depending on context.
For a real-world example, take shirt sizes. Men's shirt sizes are
frequently specified as a combination of neck size and sleeve length.
Except for special orders, only certain sleeve lengths are available for
any given neck size and the available sleeve lengths will be different
for different neck sizes. In this case, the range of valid values for
one variable would depend on the current value of another variable.
This is not the best example, because one could easily pick a value for
sleeve length that would be guaranteed to be invalid for any neck size
(negative length, for example). But there are certain to be
applications for which any initial value chosen could be either valid or
invalid depending on the value of some other variable at run time.

So -- what do you mean by "invalid value" for various data types
(especially for a 1-bit representation of a Boolean) such that its use
at run time will cause an exception to be raised without some additional
programmer effort?

> [...]


>
>>>The SPARK model is very close to a theorem-proving
>>>approach, although we still have a long way to go in software
>>>before we are really able to satisfy all the issues of theorem
>>>proving.
>>
>>I'd like to know about it when you can tell if my program will halt. ;)
>>
>
> The "halting problem" is still with us, as a problem in formal proofs.
> However, we usually find ways to avoid dealing with it in real
> software solutions. I cannot think of the last time one of my
> programs failed to halt, even though I could not have provided
> a formal proof that it would.
>

The halting problem is only a problem for formal machines such as Turing
machines. Humans do not share the same limitations.

> [...]
>
> Richard Riehle
>
>
Bob Lidral
lidral at alum dot mit dot edu

Tom Linden

unread,
Oct 8, 2006, 8:37:18 AM10/8/06
to

Do you mean it determines the common denominator? Does it reduce it?
Interesting, but not useful.

>
> The rational numbers, as shown, are never converted, internally,
> to decimal fractions. This preserves a high degree of accuracy
> since we never lose precision due to the conversion to binary
> and back to decimal that occurs with so many language designs.
>
> Richard Riehle
>
>

--
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
