http://www.winestockwebdesign.com/Essays/Lisp_Curse.html
"Imagine adding object orientation to the C and Scheme programming
languages. Making Scheme object-oriented is a sophomore homework
assignment. On the other hand, adding object orientation to C requires
the programming chops of Bjarne Stroustrup.
"The consequences of this divergence in needed talent and effort cause
The Lisp Curse:
"Lisp is so powerful that problems which are technical issues in other
programming languages are social issues in Lisp."
Andrew.
OK, as you've had no takers I'll bite.
It was pretty easy to add object orientation to VP assembler, using
just assembler macros, back in 1990. So no, I don't think it's that
hard. :)
Who was it that said, 'object orientation is just a bloody look up table...' ;)
Chris
I don't know, but in addition to dispatching the call, you need some
methodology for instance variables. I wouldn't claim that OO is the fix to
all programming ills, but at its best it does provide a lot of expressive
power from a very modest set of core concepts.
--
Andy Valencia
Home page: http://www.vsta.org/andy/
To contact me: http://www.vsta.org/contact/andy.html
> Chris Hinsley <chris....@gmail.com> wrote:
>> Who was it that said, 'object orientation is just a bloody look up
>> table...' ;)
>
> I don't know, but in addition to dispatching the call, you need some
> methodology for instance variables. I wouldn't claim that OO is the fix to
> all programming ills, but at its best it does provide a lot of expressive
> power from a very modest set of core concepts.
OK, have a structure with the first item being a pointer to a look up
table, etc etc. ;)
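A hypothetical C sketch of that structure-with-a-lookup-table approach (names here are illustrative, not from any particular system):

```c
/* Single-dispatch OO in plain C: the first field of every object
   is a pointer to a lookup table (vtable) of function pointers,
   exactly the "structure with the first item being a pointer to
   a look up table" idea. */

typedef struct Shape Shape;

typedef struct {
    int (*area)(Shape *self);   /* one slot per method */
} ShapeVtable;

struct Shape {
    const ShapeVtable *vt;      /* pointer to the lookup table */
    int w, h;
};

static int rect_area(Shape *self) { return self->w * self->h; }

static const ShapeVtable rect_vt = { rect_area };

/* Dispatch goes through the table, so callers need not know
   the concrete type behind the Shape pointer. */
static int area(Shape *s) { return s->vt->area(s); }
```

Calling `area` on any object laid out this way dispatches through whatever table its first field points at.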
I'm not dissing OO, I like it. Just that it's not hard to do even when
the language you're coding in doesn't support it directly.
However, reading the linked-to article I feel Andrew might have posted
this here in the Forth newsgroup because there's a lot of similar
arguments for Forth!
Chris
It's more than that. If the superclass has ivars "a" and "b", then when you
subclass it and have your own ivars "c" and "d", instances of your subclass
have to have ivars "a".."d". Often when you allocate you want
instance-specific initialization of ivars, so you also have to have a way to
connect subclass initializing code to superclass code.
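That ivar layout and initializer chaining can be sketched in C via struct embedding (a hypothetical illustration; the field names just mirror the "a".."d" example above):

```c
/* A subclass instance must carry the superclass ivars ("a", "b")
   in front of its own ("c", "d"), and its initializer must chain
   to the superclass initializer before doing its own work. */

typedef struct {
    int a, b;
} Super;

typedef struct {
    Super base;      /* superclass ivars come first */
    int c, d;        /* subclass adds its own on the end */
} Sub;

static void super_init(Super *s) { s->a = 1; s->b = 2; }

static void sub_init(Sub *s) {
    super_init(&s->base);   /* connect subclass init to superclass init */
    s->c = 3;
    s->d = 4;
}
```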
> I'm not dissing OO, I like it. Just that it's not hard to do even when
> the language you're coding in doesn't support it directly.
Speaking from experience, doing OO coding in Forth is harder than doing it in
Smalltalk. Or Java. Or Python. Forth is more like a very powerful macro
assembler on top of hand-coded assembly language for a CPU with a dual stack
architecture.
> However, reading the linked to article I feel Andrew might have posted
> this here in the Forth newsgroup because there's a lot of similar
> arguments for Forth!
I assume you mean:
http://forthos.org/oo.html
I found the OO abstractions helpful some of the time, but it was still a win
to port to Python and take advantage of a really impressive set of amenities,
both in the base language and its available libraries.
> Chris Hinsley <chris....@gmail.com> wrote:
>>> I don't know, but in addition to dispatching the call, you need some
>>> methodology for instance variables.
>> OK, have a structure with the first item being a pointer to a look up
>> table, etc etc. ;)
>
> It's more than that. If the superclass has ivars "a" and "b", then when you
> subclass it and have your own ivars "c" and "d", instances of your subclass
> have to have ivars "a".."d". Often when you allocate you want
> instance-specific initialization of ivars, so you also have to have a way to
> connect subclass initializing code to superclass code.
Yes, I know, and it was all implementable in assembler macros. The VP
assembler was pretty hot on what you could do in the macro system, I
grant you, but inheritance, constructors, destructors, superclass
variable access, initialisation of all superclasses etc. were all
covered.
One thing that it didn't do directly was multiple inheritance, but
speaking personally I never missed that.
>
>> I'm not dissing OO, I like it. Just that it's not hard to do even when
>> the language you're coding in doesn't support it directly.
>
> Speaking from experience, doing OO coding in Forth is harder than doing it in
> Smalltalk. Or Java. Or Python. Forth is more like a very powerful macro
> assembler on top of hand-coded assembly language for a CPU with a dual stack
> architecture.
I agree with you that Forth is a powerful macro assembler, which is
why I don't think it's any harder to do OOP in Forth than it was in
assembler code.
>
>> However, reading the linked to article I feel Andrew might have posted
>> this here in the Forth newsgroup because there's a lot of similar
>> arguments for Forth!
>
> I assume you mean:
> http://forthos.org/oo.html
>
> I found the OO abstractions helpful some of the time, but it was still a win
> to port to Python and take advantage of a really impressive set of amenities,
> both in the base language and its available libraries.
I wasn't referring to any implementation of OOP in Forth. In the article
(which I assume you've read) it was more pointing out that the
flexibility of Lisp (and I believe this carries over to Forth) works
against it in getting anything done long term. You only ever end up
with 80% solutions to a problem, as someone always just does it their
way, because it's so easy to define your own rather than agree and
co-operate with other people using the language...
I, like Andrew Haley, don't agree with all of the article, but its
sentiment is just as true for Forth as it is for Lisp.
Chris
In the "AI Winter" article it links to, the similarities are even more
pronounced:
http://c2.com/cgi/wiki?AiWinter
Read the Lisp quote. With slight changes, here is the start of that Lisp
quote as modified for Forth:
"Forth was always very general-purpose, but was especially good at real-time
control, and was for a long time very closely associated with astronomers'
telescopes. The Forth companies rode the great embedded microcontroller
wave in the early 90's, when large corporations poured millions of dollars
into Forth, which hype promised would deliver manageable, readable, and fast
Forth software in 10 years. When the promises turned out to be harder than
originally thought, ... "
From the "Lisp Curse" article:
"Real Hackers have also known, for a while, that C and C++ are not
appropriate for most programs that don't need to do arbitrary bit-fiddling."
Wow, some people just truly hate C, don't they? I guess belittling a
successful HLL is one way to make your unwise decision to use
another unpopular, unsuccessful one seem less grating. What's really
telling though is that none of the people who hate C seem to hate other
Algol-derived languages such as BASIC and Pascal, nor Pascal derivatives:
Algol W, Modula, Modula-2, Oberon, Prolog, nor C derivatives: C++, Java,
Objective-C. What is that: irrationality, or hate? ISTM that Forth and
Lisp programmers seem to use the word "feel" a lot, instead of "think"...
Would you say this is true?
Rod Pemberton
The promises of Forth were more like smaller programs and much, much
shorter development times -- now, not in 10 years. In my experience, we
delivered every time, right then, not as some future promise. But we
failed spectacularly in the public relations/marketing sphere: our
successes went largely unnoticed.
The great myth in technology is that "if you build a better mousetrap
the world will beat a path to your door." In other words, technical
superiority will win on its merits. That is not true. Technical
superiority will win if and only if there is a substantial, dedicated,
and well-funded marketing campaign behind it.
Cheers,
Elizabeth
--
==================================================
Elizabeth D. Rather (US & Canada) 800-55-FORTH
FORTH Inc. +1 310.999.6784
5959 West Century Blvd. Suite 700
Los Angeles, CA 90045
http://www.forth.com
"Forth-based products and Services for real-time
applications since 1973."
==================================================
> From the "Lisp Curse" article:
>
> "Real Hackers have also known, for a while, that C and C++ are not
> appropriate for most programs that don't need to do arbitrary
> bit-fiddling."
>
> Wow, some people just truly hate C don't they? I guess belittling a
> successful HLL language is one way to make your unwise decision to
> use an another unpopular, unsuccessful one seem less grating.
Eh? Surely the comment is simply true, or at the worst a reasonable
opinion. I can't see that there's any hatred there.
> What's really telling though is that none of the people who hate C
> seem to hate other Algol derived languages such as BASIC and Pascal,
> nor Pascal derivatives: Algol W, Modula, Modula-2, Oberon, Prolog,
> nor C derivatives: C++, Java, Objective-C. What is that:
> irrationality, or hate? ISTM that Forth and Lisp programmers seem
> to use the word "feel" alot, instead of "think"... Would you say
> this is true?
No. I think your imagination is running away.
Andrew.
ISTM, the "better mousetrap" was invented *before* the mousetrap by over
half a century (1894 vs. 1832 or so). The mechanism used in spring-loaded
bar mouse traps is basically identical to the firing mechanism in
single-action revolvers. They both have a hammer, trigger, latch (sear),
and a spring. Variations of this mechanism are what guns still use to this
day. So, yeah, a path was beaten to someone's door, I'd say ...
Oh, BTW, we now know the egg came first, since birds _evolved_ from
dinosaurs which laid eggs... So, that quote is out also. :-)
> Technical
> superiority will win if and only if there is a substantial, dedicated,
> and well-funded marketing campaign behind it.
>
For most cases, I agree. I.e., that's true for a free, open, capitalistic
marketplace. There are exceptions though. E.g., SCSI, x86 PCs, Intel/MS,
military weapons, etc. SCSI just won't die. The x86 computing platform won
by default. "Last man standing." Intel and MS are monopolies. The
military tests everything. The government buys from whoever won the
contract bid. And, there are many markets around the world that are
"closed" or "private" or not capitalistic.
Rod Pemberton
> E.g., SCSI, x86 PCs, Intel/MS, military weapons, etc. SCSI just
> won't die. The x86 computing platform won by default. "Last man
> standing."
Well, let's see. SCSI was certainly good enough. The x86 won because
it was compatible with a lot of what went before and because it was
decently fast. (People always cite the 68000, but it came a fair bit
later and it was dog slow.) As Hermann Hauser put it, "I remember
when Bill Gates visited us and I showed him an operating system that
was much more developed than MS-DOS at the time. We thought this was a
clear advantage; not realising that the game had changed and the real
advantage was standards."
> Intel and MS are monopolies.
Intel perhaps, but both of them have problems getting into mobile
technology, which is where a lot of the attention (and money) is
going. Look instead at ARM and Google.
Andrew.
Amen, sister!
The best tech definitely doesn't mean you'll win. :(
Chris
> "Real Hackers have also known, for a while, that C and C++ are not
> appropriate for most programs that don't need to do arbitrary
> bit-fiddling."
>
> Wow, some people just truly hate C don't they?
Yes, we do. But what you quoted doesn't sound hateful to me, maybe you're
projecting? ;-)
> I guess belittling a successful HLL language is one way to make your
> unwise decision to use an another unpopular, unsuccessful one seem less
> grating.
That's not much of an argument unless you sell software. How popular or
successful a language (implementation) is has nothing to do with how good it
is. C was used to write UNIX (yes I know not the original UNIX clunker, just
the next few decades of clunkers) so if you want to work in that environment
it's a top choice. Other OS were written in other languages so whatever
they're written in is a top choice. Companies sell this or that and people
are basically sheep so they do what they're told and believe what companies
tell them. How does popularity relate to how good something is? Not at all.
C and C++ are not HLL. You really ought to know better.
> What's really telling though is that none of the people who hate C seem to
> hate other Algol derived languages such as BASIC and Pascal, nor Pascal
> derivatives: Algol W, Modula, Modula-2, Oberon, Prolog, nor C derivatives:
> C++, Java, Objective-C.
C isn't ALGOL derived more than any HLL with control structures is ALGOL
derived. That is way too general. And BASIC has nothing to do with ALGOL at
all. If people like Pascal then it's either because they're mindless
structured programming groupies or because they grew up on Turbo Pascal
which I've heard is really great but have never used. The original Pascal
was a one-pass teaching compiler so anybody who thinks the language is good
obviously doesn't code for his day job. The rest of the Wirth-originated
languages are just him trying to fix the screwups he made in his original
designs and for his groupies just like Apple fanboys will always buy Macs
and iPhones even when they're all looks and no go. Nobody uses any of the
Wirth languages for work. C++? Abomination. Java? For people who can't code
and want to write bad code cross platform. Objective C? Never heard of
it. ALGOL? I liked ALGOL 68 for academic work but it's not practical for
commercial coding. Ada is, and it's much better than C, C++, Java, or almost
anything else. It's somewhat ALGOL derived.
I hate C and I like ALGOL, PL/I, assembler, FORTRAN and basically anything
designed before 1970. Never used Forth but it looks interesting. I wouldn't
call it readable and I don't buy the false argument once you know Forth it's
readable. There are readable languages like Ada that any programmer can
read. I could also claim APL is readable because I can read it. That doesn't
make it readable, and it's not. Neither is Forth. Lisp is only a little
better.
> What is that: irrationality, or hate? ISTM that Forth and Lisp
> programmers seem to use the word "feel" alot, instead of "think"... Would
> you say this is true?
Lisp is interesting but not very practical and they haven't grown it in a
controlled way. There are too many quirks and too many ways to do one thing,
just because the people involved in developing it are strange and they're
afraid to break backward compatibility (misplaced concern IMHO). People love
or hate their niche languages just like people love or hate sports teams.
Why is it ok for sports but not ok for programming? If you say sports is
based on opinion and programming is based on facts then you missed the
answer. Both are based on opinions because nobody can agree on the facts.
Hmm... psychological term... Another clue to your true identity? IMO,
unlikely...
Well, I took the statement to be hateful. Here's a breakdown.
C [is] not appropriate
<-- intentional put down of language as a whole
not appropriate for most programs
<-- false claim that C is utterly useless in regards to programming
that don't need to do arbitrary bit-fiddling
<-- belittling of C's very powerful HLL capabilities
Basically, that statement puts C on the same level as Brainfuck. I.e., it's
hateful.
> > I guess belittling a successful HLL language is one way to make your
> > unwise decision to use an another unpopular, unsuccessful one seem less
> > grating.
>
> [...] How popular or
> successful a language (implementation) is has nothing to do
> with how good it is.
If you had only said "popular", I'd agree.
> How does popularity relate to how good something is? Not at all.
>
I agree. See, you only used "popular" here.
> C and C++ are not HLL. You really ought to know better.
>
Hateful ...
> C isn't ALGOL derived more than any HLL with control structures is ALGOL
> derived. That is way too general.
http://hopl.murdoch.edu.au/showlanguage.prx?exp=2273
http://hopl.murdoch.edu.au/showlanguage.prx?exp=577
> And BASIC has nothing to do with ALGOL at all.
http://hopl.murdoch.edu.au/showlanguage.prx?exp=176
Are you sure? Personally, I'm not familiar with Algol. So, I take HOPL as
the definitive perspective.
> [ ... ] so anybody who thinks
> the [Pascal] language is good
> obviously doesn't code for his day job.
Although this discussion is suited for comp.lang.misc and not c.l.f., I find
that statement from you to be *really* interesting, from a psychological
perspective. The entire problem with Pascal is (or was) that it is purely a
HLL. There are (or were) no low-level capabilities. C has both, yet you
deride it.
> ALGOL? I liked ALGOL 68 for academic work but it's not
> practical for commercial coding.
...
> Ada is, and it's much better than C, C++, Java, or almost
> anything else. It's somewhat ALGOL derived.
>
Ok, Ada question: AIR, Ada is the only major language that does not use zero
to represent logical "false", such as used by conditionals. True?
Yes, I "got" that you are an "Ada is perfect!" fanboy from the c.l.m. post
(you?)... Like I said, there are, what, 3 or 4 of you guys left? ;-)
There is another one on comp.lang.c (KT) and another on comp.lang.misc
(DAK). Slightly older posts show another on comp.lang.c (EP) as well as
comp.programming (J). ISTM, Forth programmers are almost, but not quite, as
fanatical as you guys.
> I hate C and I like ALGOL, PL/I, assembler, FORTRAN and
> basically anything designed before 1970.
>
It's interesting that you don't like C. C is great. I've probably used a
different set of languages than you, but it consistently ranks at the top of
my list.
As for Ada, I've seen more than a few archived posts that represent Ada as
just like C but with stronger type checking. Disagree?
As for the PL/1 variant I programmed, it pointed out one serious mistake
with C: pass-by-value. Pass-by-reference worked beautifully in PL/1.
Pass-by-value was never used, basically. I only had to force pass-by-value
twice. Otherwise, I saw no advantage over C. It was just as capable, but
no more so. But, that could've been due to it being a non-standard
variation.
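The pass-by-value point can be sketched in C directly (illustrative only): mutating a plain parameter changes a copy, so getting PL/1-style by-reference behaviour means passing a pointer explicitly.

```c
/* C always passes arguments by value, so a callee sees a copy.
   To emulate pass-by-reference, the caller must take the address
   of its variable and pass a pointer. */

static void inc_copy(int x)  { x = x + 1; }   /* mutates the copy only   */
static void inc_ref(int *x)  { *x = *x + 1; } /* mutates the caller's var */
```

A call like `inc_copy(v)` leaves `v` unchanged, while `inc_ref(&v)` actually increments it.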
FORTRAN sucked. I *hate* FORTRAN. I've got *NO* pleasant memories in
regards to FORTRAN some two decades after the fact. It always surprises me
how intense the negative memories are on this language... It's the perfect
example of how to *NOT* design a language.
> Never used Forth but it looks interesting. I wouldn't
> call it readable and I don't buy the false argument once
> you know Forth it's readable.
I've never programmed in Forth either. I am implementing an interpreter for
it. At this point, I think one _must_ implement their own version to
understand some of the concepts. The fact that the Forth community uses
non-standard language, i.e., non Comp. Sci. language, for everything,
obscures how much of the functionality works. The fact that not much seems
to be well documented as to implementation of Forth does so also. You (or
some other remailer dude ...) may have seen my c.l.m. post discussing how it
works using normal terminology.
> There are readable languages
> like Ada that any programmer can read.
AIR, I once read that about COBOL ...
> Lisp is only a little better.
>
I heard this from a hobbyist LISP programmer once: "Lost In Stupid
Parenthesis". It's a common joke, but no joke that he told it to me. He
loved it at first, hated it later.
Rod Pemberton
Rob, can you post a link to your description. I'd quite like to read
that. I think I get how Forth works, but another way of looking at it
could be good.
Chris
Sorry 'Rod', typo.
Chris
Put down? Value judgement? Statement of fact? I don't get that wrapped up in
it. Maybe I would if I wrote C compilers. I read it as "not a good choice"
in the author's opinion. I didn't read the link btw, only your post.
> not appropriate for most programs
> <-- false claim that C is utterly useless in regards to programming
Depends on what the author thinks most programs do and where they run. If he
thinks they run on mainframes (probably not valid) then true, C isn't
appropriate in most cases. If he is talking about mini computers, again, C
isn't on the radar. If he is talking about PC's running Windows I don't know
but I guess he would be wrong. If he is talking about PC's running Linux or
Unix then he's definitely wrong. So sue him LOL. Who cares? But hateful? No.
Maybe he just believes there are better languages for the kind of code he
writes, or better general purpose languages for the code he thinks is most
written.
> that don't need to do arbitrary bit-fiddling
> <-- belittling of C's very powerful HLL capabilities
I don't think C has any powerful HLL capabilities but you would know better
than me about that since you like and use it and I hate it and don't use
it. Mostly I hate it because it's ugly (personal value judgement) and the
operators are confusing and overloaded for the sake of terseness. I consider
that a mistake. I guess I could use the preprocessor and clean up some of
what I think is so hideous (those endless ugly braces etc) but it would
still be C and I still wouldn't like it or the people who like it or the
people who wrote it. It's not my kind of people. No offense, because I enjoy
reading your posts but most other C people I could live without.
> Basically, that statement puts C on the same level as Brainfuck. I.e.,
> it's hateful.
As a Brainfuck fan, I resent that remark ;-). I didn't read the statement
that way at all. I figure he means why play around with stuff like
malloc/calloc and free and low-level abstractions when modern languages make
it easy for idiots to write bad code faster than ever before. Garbage
collection, classes (yeah I know C++ is really C) all that stuff makes life
fun for guys born after 1990 and anything older than that sucks. Who cares.
> > [...] How popular or
> > successful a language (implementation) is has nothing to do
> > with how good it is.
>
> If you had only said "popular", I'd agree.
Why? C is successful because it was used to write two major, bad operating
systems. If you want to write code on those systems then you need to use C
because all the header files and system calls are set up for C.
If you look at the IBM mainframe the operating system is written in
assembler. All the header files and system calls are in assembler. If you
want to use C on a mainframe you cannot physically write system code. At
all. Assembler is the best choice on that platform. Is it popular or
successful? There yes, elsewhere no. It all depends what we are discussing.
> > How does popularity relate to how good something is? Not at all.
> >
>
> I agree. See, you only used "popular" here.
>
> > C and C++ are not HLL. You really ought to know better.
> >
>
> Hateful ...
Not intentional!
> > C isn't ALGOL derived more than any HLL with control structures is ALGOL
> > derived. That is way too general.
> http://hopl.murdoch.edu.au/showlanguage.prx?exp=2273
> http://hopl.murdoch.edu.au/showlanguage.prx?exp=577
>
> > And BASIC has nothing to do with ALGOL at all.
> http://hopl.murdoch.edu.au/showlanguage.prx?exp=176
>
> Are you sure? Personally, I'm not familiar with Algol. So, I take HOPL
> as the definitive perspective.
Yes, quite sure. I am not going to look at those links because I really
don't care what they say. I used ALGOL68 on two platforms and if anything is
like ALGOL it's PL/I and Pascal to some degree. BASIC has no resemblance to
ALGOL at all. C doesn't either, like I said unless you consider any language
with control structures (if/end if while/do while etc) ALGOL derived. If you
do then almost every HLL qualifies except original COBOL. Even later COBOL
and FORTRAN would qualify. That does not make any sense.
> > [ ... ] so anybody who thinks
> > the [Pascal] language is good
> > obviously doesn't code for his day job.
>
> Although this discussion is suited for comp.lang.misc and not c.l.f., I
> find that statement from you to be *really* interesting, from a
> psychological perspective. The entire problem with Pascal is (or was)
> that it is purely a HLL. There are (or were) no low-level capabilities.
> C has both, yet you deride it.
The problems with Pascal are as I said, it is a one-pass teaching
compiler. It generates crap code. It was never intended to be used in
production, like most of what Wirth put out there. AFAIK Turbo Pascal does
have low level facilities but again I am not sure. The other thing I don't
like about Pascal is Wirth. He and Dijkstra fucked up generations of
programmers with their unfounded anti-COBOL spewage and blind hatred of
GOTO's. Some of the worst code ever written is thanks to those two
assholes. Their stuff is good on paper but it falls over in production.
The problems for me with C are not whether it's an HLL or not (it's not). I
prefer assembler so you can't say I have anything against low level
languages. I guess in NIX C is ok but I don't write code in NIX because it's
a crap OS. In my context C is a solution looking for a problem. On other
platforms it's a high level assembler.
> Ok, Ada question: AIR, Ada is the only major language that does not use zero
> to represent logical "false", such as used by conditionals. True?
AFAIK, no. I believe COBOL, PL/I, and maybe even ALGOL68 use false for
false. 0 does not compute. I think that would be true of any typed language
unless you have implicit conversions. Many of them do, but many of them
don't do implicit conversions for boolean. You should check me on that
because I haven't done anything in HLL recently.
> Yes, I "got" that you are an "Ada is perfect!" fanboy from the c.l.m. post
> (you?)...
I don't think so. It's got problems, especially the post 95 versions but
it's a very nice language.
> Like I said, there are, what, 3 or 4 of you guys left? ;-)
Ada? I'm not an Ada guy. They have a niche. I have no idea how many people
it is but there is more than one company surviving by selling Ada compilers
so there has to be some market. It should be used more widely.
> It's interesting that you don't like C. C is great. I've probably used a
> different set of languages than you, but it consistently ranks at the top of
> my list.
I think it gets back to what platform(s) you code on most. My comments
aren't really relevant for most people since almost none of my coding is on
Intel and almost everyone else's is. One of the reasons I never spent much
time coding on PCs is I don't like C.
> As for Ada, I've seen more than a few archived posts that represent Ada as
> just like C but with stronger type checking. Disagree?
No, it's a lot different. In a way, all algebraic languages with control
structures are similar, but that's where it ends. Ada uses English like COBOL
does (your point, taken further on) and after you get the hang of the structure I
think it's much more attractive to look at and read than C and that's
important to me. Ada offers a pretty nice list of features built into the
language instead of providing them through libraries so it's more coherent
and one-piece feeling than C. Examples: exceptions, storage management,
tasking, etc. are all Ada language features. They are not add ons or library
calls. That makes Ada very portable, in my tiny experience even more
portable than C. People who love C won't like Ada because C is terse and
overloads operators, and Ada makes you spell stuff out because the compiler
and build system are designed to ensure that you write what you mean. When
you do, it should be pretty close to working.
> As for the PL/1 variant I programmed, it pointed out one serious mistake
> with C: pass-by-value. Pass-by-reference worked beautifully in PL/1.
> Pass-by-value was never used, basically. I only had to force pass-by-value
> twice. Otherwise, I saw no advantage over C. It was just as capable, but
> no more so. But, that could've been due to it being a non-standard
> variation.
I haven't used it in a long time but it's a nice language, again with many
features built in rather than added on as library calls. IBM puts out a
super nice optimizing compiler, it's been improved for decades. One main
advantage to PL/I over C is how PL/I handles strings, but since I believe
you are a fan of null terminated strings you probably won't agree. PL/I
avoids the whole issue of buffer overruns on string operations since it
knows the length of the source and target and will positively not blow off
the end of a string. That's pretty important for any serious code. I know,
the C guys take responsibility for themselves and that is good but in
reality they write an awful lot of crap code. I'm not saying PL/I coders
don't, mostly because there aren't a whole lot of them around anymore, but
PL/I is a better language than C in my opinion and certainly on the
mainframe. Elsewhere it would depend on the implementation, but it's a very
nice language with almost no downside.
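The length-aware string point can be illustrated with a hypothetical counted-string sketch in C: because both the source length and the target capacity are known, the copy is clamped and physically cannot run off the end of the buffer, unlike a `strcpy` on a null-terminated string.

```c
#include <string.h>

/* Illustrative counted string: the length and capacity travel
   with the buffer, so every copy can be bounds-checked. */

typedef struct {
    size_t len;     /* bytes currently in use   */
    size_t cap;     /* bytes available in buf   */
    char   buf[16];
} CountedStr;

/* Copy at most dst->cap bytes from src; returns bytes copied.
   The clamp makes a buffer overrun impossible by construction. */
static size_t cs_copy(CountedStr *dst, const char *src, size_t n) {
    if (n > dst->cap)
        n = dst->cap;           /* clamp: never blow off the end */
    memcpy(dst->buf, src, n);
    dst->len = n;
    return n;
}
```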
> FORTRAN sucked. I *hate* FORTRAN. I've got *NO* pleasant memories in
> regards to FORTRAN some two decades after the fact. It always surprises me
> how intense the negative memories are on this language... It's the perfect
> example of how to *NOT* design a language.
I think it's perfect for what it's for just like COBOL is perfect for what
it's for. If you try to use it for something it's not designed to do, both
of them fall over quickly.
> I've never programmed in Forth either. I am implementing an interpreter for
> it. At this point, I think one _must_ implement their own version to
> understand some of the concepts. The fact that the Forth community uses
> non-standard language, i.e., non Comp. Sci. language, for everything,
> obscures how much of the functionality works. The fact that not much seems
> to be well documented as to implementation of Forth does so also. You (or
> some other remailer dude ...) may have seen my c.l.m. post discussing how it
> works using normal terminology.
It's an interesting topic. I follow the ng just because of how odd it is,
and how much the users like it.
>
> > There are readable languages
> > like Ada that any programmer can read.
>
> AIR, I once read that about COBOL ...
That's true, I should have included it. COBOL can be very readable to the
point of picking things up really fast. Like anything else it's not
impervious to idiots but when used by a competent person coding a solution
to a business problem like doing financial reports or moving money
etc. there's nothing better. On the mainframe anyway.
> > Lisp is only a little better.
> >
>
> I heard this from a hobbyist LISP programmer once: "Lost In Stupid
> Parenthesis". It's a common joke, but no joke that he told it to me. He
> loved it at first, hated it later.
I got to where I can read it now but it's just too quirky. Kind of makes me
think the people developing the new standards can't let go of old baggage
and just take the good and throw out the bad and call it something
else. It's been around so long it has sentimental value like an old pair of
jeans. They may be ripped and patched and stained but when they come out of
the wash nothing fits better. I wanted to like it but since I have no
history with it, I decided for me it's not worth it.
>
> I've never programmed in Forth either. I am implementing an interpreter for
> it. At this point, I think one _must_ implement their own version to
That is considered anathema here. The Forth nomenklatura would prefer
that you purchase or download an eval 'modern' Forth, and learn the
language prior to *NEVER* implementing your own. *I*, on the other hand,
agree with you, and I've implemented at least 3 C-based Forths over the
years.
> understand some of the concepts. The fact that the Forth community uses
I agree ... by implementing the feature, you at least understand the
mechanics, and a knowledge of how to apply it in an application is not a
stretch from there.
> non-standard language, i.e., non Comp. Sci. language, for everything,
> obscures how much of the functionality works. The fact that not much seems
> to be well documented as to implementation of Forth does so also. You (or
It was not always so. Back in the FIG days, reference implementations
were provided for every micro architecture which was popular, and
"verbatim" re-implementations based upon prior reference platforms kept
Forth relatively static.
Subsequent standards work has tended away from implementation details, not
only for portability, but also to remove the handcuffs from the avant-garde
to innovate ... probably a good thing (TM).
My miniforth (a C based implementation) is an attempt to take the classic
fig model, and provide a degree of portability. From that base, I'll be
adding the platform and language features which will make it a useful
production scripting language with a small memory footprint ... for *ME*
anyways. I suspect that in the end, I will make no attempt to comply
with any standard, except perhaps the base wordset.
Good luck with your implementation, and if you have pointers to your work,
they would be appreciated ... (as a BoF) ...
http://www.ControlQ.com/blob/wordpress/?p=352
Rob Sciuk.
Do you think Forth is a good language? Or just an easy language to
implement? If the former, why aren't you implementing *in* Forth? Even if
you had to use some throw-away C code to bootstrap (hell, I used straight
assembly code for my own bootstrap of ForthOS), wouldn't you at this point be
happily hosted on your own metacompiler?
That is so incorrect as to be laughable. Not only is DIY Forth not
"anathema" here, a significant number of regular posters have either
done their own or are in the process of doing so.
My position has been that if you want to implement your own Forth for
whatever reason, you should at least *learn* Forth by using an existing
modern Forth and developing a working ability using existing books
(preferably more modern books than Starting Forth) and tutorials before
starting the project.
My general feeling about Lisp and Forth is that they are so damn simple at
the core (which is what makes them powerful), and so simple to implement,
that it doesn't take much to convince yourself to just implement your own
when the ones out there don't provide [out of the box] what you are looking
for.
When anyone can - and does - make their own <insert core product here>, you
end up with no real "standard" that <insert product extension here>
implementers can target. With a lack of extensions, users are less likely
to adopt, yadda yadda yadda.
Lisp's and Forth's greatest assets are also their greatest liabilities.
However, their contributions to the world of computing cannot be overstated
or appreciated enough.
One final thought as it comes to languages in general. The world is moving extremely fast. Languages should solve a problem; they shouldn't be everything to everyone. But a large percentage of the "problems" today focus around networking, distributed problem solving, databases, the web, .... These are areas for which the Common Lisp and Forth standards have potentially held back these languages from continuing to grow (not that they necessarily should - Forth is for embedded systems and maybe should never attempt to be Erlang) and benefit others. Again, it's not that various implementations are incapable of doing so. It's that they are either non-standards conforming or there are now N implementations of "sockets" to maintain for a "web framework" creator.
Languages that are built and maintained by a single person/group - and are just complex enough that others don't want to try and duplicate the effort - seem to have a leg-up. They are a single target and immediately provide some usefulness to the masses. Then again, they have the downside of turning into vaporware at a moment's notice.
Jeff M.
Hi, Vandy,
On Wed, 29 Jun 2011, van...@vsta.org wrote:
> Date: 29 Jun 2011 18:27:10 GMT
> From: van...@vsta.org
> Newsgroups: comp.lang.forth
> Subject: Forth as implementation language
>
> Sp...@controlq.com wrote:
>> My miniforth (a C based implementation) is an attempt to take the classic
>> fig model, and provide a degree of portability.
>
> Do you think Forth is a good language? Or just an easy language to
> implement?
Not necessarily, at least not out of the box, but add a few words which
pertain to your problem domain, and it becomes a great language.
When I took my first undergrad programming course in FORTRAN, the first
couple of weeks were spent using a "mini" assembler like interpreter ...
just to avoid "big syntax", and to get the basics across. Forth might
have been a better choice for teaching undergrads, but this was back in
the 70's, and Forth hadn't yet hit the radar screens at the University I
attended (though it eventually did, and there was even a Forth Users group
headed by Nick Solntseff).
The "factoring" techniques historically ascribed to good forth coding are
an excellent way to code in any language, and while I have scripted in
Tcl, python, bash, awk (and toyed with Ruby, Perl, Rexx etc), I keep
coming back to Forth as a "pet" language. I suppose it is a great
language, and as Elizabeth made reference a day or so ago, it offers up
its internals to the end user ... useful for the low level coder, but
very scary to the Erlang, Occam and Haskell types ...
> If the former, why aren't you implementing *in* Forth?
You know, that's a very good question. What do you think the minimal
wordset to bootstrap a hosted Forth consists of? I've been meaning to
write an assembler in my C based forth, and bootstrap from there ... time
permitting ... but unfortunately, it doesn't -- hence I proceeded *in* C.
Once I had the basic forth framework in C, it was arguably simpler/faster
to add a bunch of words in C, and so I never analysed what was the minimal
subset required to bootstrap a "forth".
> Even if you had to use some throw-away C code to bootstrap (hell, I used
> straight assembly code for my own bootstrap of ForthOS), wouldn't you at
> this point be happily hosted on your own metacompiler?
exactly ... and I just might start in assembler for the AVR platform, it
being so different from a von Neumann architecture.
I'd ask what words were required to bootstrap your ForthOS, but as it was
a native implementation on x86 "iron", much of the boot code would likely
have been in assembler, and the comparison would not have been a fair one
... also, I recall you used some BIOS i/o routines (no?). Still, I regard
ForthOS as a remarkable effort ... and all the more notable by the fact
that it *was* a native implementation.
My miniforth runs "native" on ARM, and Nintendo DS (homebrew) ... and I'm
planning to use u-boot to run "native" on an 8core PPC Soc in the near
term -- but I've got some i/o words to write first 8-). As for the Hosted
version of Miniforth, it runs on Windows, Mac/OSX, Linux, 3 BSD's and on
intel 32/64, AMD, Arm, PPC and HP-PA. (Pretty much anything I have a
compiler for and access to -- one file, one compile and voila!).
Am I a purist? Not by a long shot, but I've got a version of Forth on
every single computing device I own -- and I *DO* frequently code in
forth.
Cheers,
Rob Sciuk
> Date: Wed, 29 Jun 2011 08:38:44 -1000
> From: Elizabeth D Rather <era...@forth.com>
> Newsgroups: comp.lang.forth
> Subject: Re: The Lisp Curse
>
> On 6/29/11 7:25 AM, Sp...@ControlQ.com wrote:
>> On Wed, 29 Jun 2011, Rod Pemberton wrote:
>>
>>>
>>> I've never programmed in Forth either. I am implementing an
>>> interpreter for
>>> it. At this point, I think one _must_ implement their own version to
>>
>> That is considered anathema here. The Forth nomenklatura would prefer
>> that you purchase or download an eval 'modern' forth, learn the
>> language, and *NEVER* implement your own. *I*, on the other hand,
>> agree with you, and I've implemented at least 3 C based Forths over the
>> years.
>
> That is so incorrect as to be laughable. Not only is DIY Forth not
> "anathema" here, a significant number of regular posters have either done
> their own or are in the process of doing so.
>
What is incorrect? My impression or my position? ;-)
> My position has been that if you want to implement your own Forth for
> whatever reason, you should at least *learn* Forth by using an existing
> modern Forth and developing a working ability using existing books
> (preferably more modern books than Starting Forth) and tutorials before
> starting the project.
>
> Cheers,
> Elizabeth
Is *YOUR* book available for download, Elizabeth? I've found Stephen's to
be very helpful, if a little dated ... I would love to re-learn Forth in
a more modern language, and then perhaps re-implement my forth yet again
8-).
Cheers,
Rob Sciuk
Bottom paragraph:
http://groups.google.com/group/comp.lang.misc/msg/16a9b1e59e0056e2
RP
There are some posts here which describe some aspects. I posted an update a
while ago, "Status of my Forth interpreter in C" 10/15/2010. But, nothing
showing my method of implementation. I've only added a handful of other
Forth words, the current work is in other areas. Most C based Forths are
switch() based. Mine is not. It's ITC (indirect threaded code), like an
assembly version. I can set up compiled Forth words basically the same way
as assembly, but in C.
I've got a set of primitives or low-level words, in C, as well as a decent
set of high level words, roughly half of ANS core. I doubt they are ANS
compliant. That wasn't a goal at the time. It's fig-Forth like with stuff
I think works well from other Forths or my own solutions, e.g., many
fig-Forth definitions, -1 or all bits set for logical true (F83), Forth
coded in Forth (i.e., primitives and compiled words) ala Ting and Muench,
stuff that makes sense to me: CELLS, CELL+, CHARS, CHAR+, etc. I can
precompile words in C like assembly versions, or enter them from the
interpreter command line. The stacks and dictionary currently have
dependencies on the C code for them. Basically, it's a functional Forth.
It's just more C than needs to be there. There are a handful of areas that
I'm still working on: converting more code from C to Forth, implementing
some of the remaining Forth control flow words, implementing input
line-buffering, allowing text from files to be redirected through the
interpreter (e.g., IN >IN LOAD FLOAD etc.) so I don't have to precompile so
many words, elimination of C "background" code and dependence on it,
fleshing out more Forth words. After that stuff is done, it'll be a bit
slower I think (less optimized C), but should be, in theory, mostly Forth,
except for the primitives. Hopefully, after that, I'll be able to eliminate
the C support code for the stacks and dictionary. If it works, that'll
leave very little C code.
Rod Pemberton
Lisp is good for list processing,
but Forth is best for AI.
Mentifex
--
http://cyborg.blogspot.com
http://mind.sourceforge.net/lisp.html
http://www.scn.org/~mentifex/AiMind.html
http://www.tfeb.org/lisp/mad-people.html
It's great to see you open to the concept!
> ... also, I recall you used some BIOS i/o routines (no?).
No, ForthOS talks directly to the hardware.
> Am I a purist? Not by a long shot, but I've got a version of Forth on
> every single computing device I own -- and I *DO* frequently code in
> forth.
You should go ahead and shoot for a metacompiler, IMHO. It'll take your
Forth system to another level, if my own experience is any guide.
Regards,
The assertion that DIY Forth is "anathema" on c.l.f.
>> My position has been that if you want to implement your own Forth for
>> whatever reason, you should at least *learn* Forth by using an
>> existing modern Forth and developing a working ability using existing
>> books (preferably more modern books than Starting Forth) and tutorials
>> before starting the project.
>>
>> Cheers,
>> Elizabeth
>
> Is *YOUR* book available for download, Elizabeth? I've found Stephen's
> to be very helpful, if a little dated ... I would love to re-learn Forth
> in a more modern language, and then perhaps re-implement my forth yet
> again 8-).
The Forth Programmer's Handbook is included with the free evaluation
download of SwiftForth. Forth Application Techniques is a relatively
inexpensive purchase from Amazon.
What does your classic unsubstantiated and erroneous claim have to do
with web design, Mentifex?
& and * are "overloaded." Two #defines and you've taken care
of the "issue" ... All you need do is come up with some syntax you like.
I've seen C code, using the preprocessor and #defines to change syntax,
that looks like BASIC, Pascal, etc.
> I consider that a mistake.
It's very minimal, so, why?
There are other more serious language issues with C, mostly related to
parsing.
Er... Did you mean the overloading or the terseness is a mistake? I
originally took that to mean overloading was a mistake, but I now think you
meant terseness is a mistake ... Why?
> hideous (those endless ugly braces etc)
What would you prefer? Blocked BEGIN END? Is that any clearer?
How do you group items or code together? That's an issue with all
languages. Forth has : and ; to do that.
> If you look at the IBM mainframe the operating system is written in
> assembler.
Has any operating system written in assembly been ported to
another platform? (No, or very few...)
Has any operating system written in C been ported to
another platform? (Yes, many...)
Assembly code "dies". It's affixed to the specific processor in use. I
learned that decades ago. HLL code survives. It's not dependent on a
specific processor.
> All the header files and system calls are in assembler. If you
> want to use C on a mainframe you cannot physically write system
> code. At all
"At all" is a bit of an extreme claim. The GCCMVS project (Paul Edwards)
ported GCC to MVS. I don't know much about MVS, but I assume it was
written in assembly. AIUI, it's a mainframe OS by IBM. So, it probably did
what you claim cannot be done "at all". DOS is also written in assembly.
DJGPP (GCC port) for DOS does just what you claim cannot be done.
http://gccmvs.sourceforge.net/
> [Wirth] and Dijkstra fucked up generations of programmers
> with their [...] blind hatred of GOTO's.
Line-oriented BASIC was the only place GOTOs were needed. As for C, there is
no need for them whatsoever. Use of GOTOs in C usually indicates a program
has a structural issue of some sort: too many nested functions, functions
too large, a programmer unfamiliar with coding techniques such as fall-through
or status variables, etc. They can always be eliminated, usually without
any additional overhead, albeit sometimes with difficulty. Of course, they
are still needed for porting code, or for code generated by code.
> I prefer assembler so you can't say I have anything against
> low level languages.
There are problems with assembler: the code "dies", it's not as easily
maintained, and it's harder to use variables, structs, etc. I saw this
with 6502 and even x86 (DOS). Look at the mountains and mountains
of DOS code that is still available to this day. It made DOS great, but
it's also all "dead": no source, difficult to update or modify, difficult
to debug, etc.
> Ada? I'm not an Ada guy. They have a niche. I have no idea how many people
> it is but there is more than one company surviving by selling Ada
> compilers so there has to be some market. It should be used more widely.
>
Ada is a US Gov't requirement for some projects.
> One of the reasons I never spent much
> time coding on PC's is I don't like C.
>
The PC has assembly too, and a half-dozen assemblers for it. NASM is a
decent assembler for it.
> One main advantage to PL/I over C is how PL/I handles strings,
> but since I believe you are a fan of null terminated strings you
> probably won't agree.
I don't agree. I lived through the "counted strings" era and saw
the numerous issues with them. NUL-terminated strings put an
end to those problems, many of which I mentioned here recently.
> PL/I avoids the whole issue of buffer overruns on string operations
> since it knows the length of the source and target and will positively
> not blow off the end of a string.
>
One disadvantage versus many ...
> That's pretty important for any serious code.
>
Today, C has routines that fix such problems. Forth also had numerous
such "blow-up" situations. I'm not as familiar with ANS, so I can't say
whether Forth still "has" them ...
> > FORTRAN sucked.
>
> I think it's perfect for what it's for [...]
>
"... everything has a purpose or place ..." Psychologically, I'd guess your
MBTI type indicates you seek: harmony (NF) ...
> COBOL [..] when used by a competent person coding a solution
> to a business problem like doing financial reports or moving money
> etc. there's nothing better. On the mainframe anyway.
>
Well, the PL/1 variant I used was used for a RT OLTP application. IMO, C
would've been a better solution. I think it would've been much easier to
maintain and be more flexible. However, their base of programmers,
consultants, and management were all PL/1 skilled. They had considered
porting their application to C to lower their programmer salary costs.
AIUI, the manager was unfamiliar with C and believed attempting to manage C
code and C programmers wouldn't be wise (probably quite a bit of "job
security" in there too... when you get paid big $ to play fantasy football
all day while your underlings work, one likes to protect that). Management
was also concerned about having C programmers, even skilled ones, who were
not very familiar with their codebase, having to program it. They couldn't
figure out a solution to the issue of not having programmers around who were
experienced or skilled with their codebase. So, that never happened, AFAIK.
I.e., they weren't willing to pay for PL/1 consultants to train the C
programmers after a port, and they weren't willing to "retrain" in-house
PL/1 programmers for C. A few, like me, already knew C. I suspect staying
with PL/1 cost them much in dollars.
Rod Pemberton
I have no issue with most of what you say, but this statement is
truly bizarre. Zero-terminated strings are the biggest drawback, bug
and even enormity of the languages that use them. They have no
redeeming qualities I can think of (would you please list at least one
positive?), and the drawbacks I can produce off the top of my head
(and there're no doubt many others), are:
- lack of security
- inefficient string manipulation
- violation of the principle of separation of the metadata from the
data. Any data structure whose description is obtained by parsing
"magic" tokens that are not part of the normal data, is conceptually
suspect.
You say you mentioned issues with counted strings. What could they
possibly be? Do you have a link to your post?
--
No, no, you can't e-mail me with the nono.
The two fundamental aspects of OO are inheritance and polymorphism.
Inheritance is pretty easy; I did that extensively in my novice
package, although I did have to rewrite ALLOCATE and friends to
support an ALLOCATION function.
Polymorphism is more difficult. You generally use local variables in
your functions, and arrange for these locals to have type declarations
attached to them, so that when you call a member function you get the
one that is associated with that type. This works, but it is not very
Forth-like. Normally in Forth you don't use local variables very much,
but just keep the data on the stack. Forth OO code, with its extensive
use of locals, doesn't look very much like Forth --- it looks like a
postfix version of Object Pascal.
I think that Forth's primary use is in micro-controllers. Considering
that Object Pascal never was used in micro-controllers, a Forth
version isn't going to be used much either. I would really prefer a
very light-weight OO Forth, such as I used in my novice package. This
is still in keeping with the Forth spirit (short functions with little
or no use of locals), but you do get inheritance. Most micro-controller
applications aren't complex enough to really need
polymorphism, which assumes a lot of classes with name collisions
between them. On the other hand, there are a lot of micro-controller
applications that are complex enough to need inheritance, which
assumes data structures in which not all of the nodes are the same
type (you have both parent and child nodes in there). Traditionally in
both Forth and C, this was accomplished by making a union of the
parent and child data types (as promoted by Rod Pemberton). This
wastes memory, and it is just cheesy --- you are better off to use
inheritance, which is the *correct* way.
The OO system presented in my novice package is the Goldilocks
solution. It is more complex than the use of unions, but it is less
complex than a full-blown OO system with polymorphism --- it is just
the right size for most applications (that you would be using Forth
for in the first place).
That article that Andrew Haley referenced was about Lisp. There is
some similarity between Forth and Lisp in so much as they both allow
the programmer to write code that executes at compile-time --- they
are traditionally the only two extensible languages available. There
isn't much similarity in usage though. Forth is a much more down-to-earth
language --- it is primarily used for controlling machinery, it
runs just fine on 8-bit and 16-bit processors, and it is often done by
electrical engineers who are more interested in hardware than in
software. Lisp is very high-brow --- it is primarily used for AI, it
runs on 32-bit and 64-bit processors, and it is often done by men with
ponytails who can't even change the oil in their car. The culture is
completely different. Forthers drink beer and Lispers drink white
wine. Forthers still think that the 65c02 is a great processor (and
know the instruction set by heart), and Lispers think that 64-bit
processors are mandatory for serious programming (but don't know the
instruction set at all).
Interesting.
"Programs written by individual hackers tend to follow the scratch-an-itch
model. These programs will solve the problem that the hacker,
himself, is having without necessarily handling related parts of the
problem which would make the program more useful to others.
Furthermore, the program is sure to work on that lone hacker's own
setup, but may not be portable to other Scheme implementations or to
the same Scheme implementation on other platforms. Documentation may
be lacking. Being essentially a project done in the hacker's copious
free time, the program is liable to suffer should real-life
responsibilities intrude on the hacker. As Olin Shivers noted, this
means that these one-man-band projects tend to solve eighty-percent of
the problem."
Anyone hear a bell ringing? ;-)
>I've found Stephen's to
>be very helpful, if a little dated ...
Please send me an email telling me what is dated. Then I can make
changes for the next revision.
Stephen
--
Stephen Pelc, steph...@mpeforth.com
MicroProcessor Engineering Ltd - More Real, Less Time
133 Hill Lane, Southampton SO15 5AF, England
tel: +44 (0)23 8063 1441, fax: +44 (0)23 8033 9691
web: http://www.mpeforth.com - free VFX Forth downloads
I always thought it was a terrible processor! Quick though for what it
was. Imagine what it could have been with a decent set of registers....
Dr. Ting spent a long time evaluating the minimum number of words to
bootstrap a Forth. I think it was about 30. That's what eForth
was/became IIRC.
It's a valid point. I wasted a lot of time reading articles such as the
JonesForth and had to re-work a lot of code that I thought was done and
dusted.
Taos/Intent. Ran on around 30 different CPUs, numerous hosted
environments, and tens of different customer platforms.
But yes, it's not a common thing. And no doubt, people would argue that
VP wasn't assembler, it was a compiled intermediate code you could hand
code at the source level like an assembler. But I couldn't let your
comment just go by. :)
Chris
Actually, it seems, NN said "null terminated strings". I mistook that for
NUL-terminated strings. "Null terminated strings" are something else
entirely in the C context, nowadays. Null in C is not a character value
anymore. It was prior to ANSI standards. Modern C's null is a pointer.
So, no, I don't like null terminated strings.
> I have no issue with most of what you say, but this statement is
> truly bizarre.
>
Ok.
> Zero-terminated strings are the biggest drawback
Zero-terminated might be a perfectly valid description of NUL-terminated
strings for some languages. FYI, that's not so for C.
In C, strings are not zero terminated. In C, zero is a value for an integer
or character. In C, strings are "all bits zero" terminated. This is a
"byte" in C, or multiple bytes, and not a character. Simply put, a C "byte"
is an address unit as large as, or larger than, a character's size in bits.
If strings were zero terminated in C, an ASCII or EBCDIC NUL would suffice
to terminate the string. Syntactically, that's how you terminate strings in
C, you use a NUL character: '\0'. However, to comply with the C
specifications, C must clear the bits for the entire string terminator
"byte" with '\0', not just the bits used for the character. This is an
issue if C bytes and characters are of different sizes in bits.
> Zero-terminated strings are the biggest drawback, bug
> and even enormity of the languages that use them. They have no
> redeeming qualities I can think of (would you please list at least one
> positive?), and the drawbacks I can produce off the top of my head
> (and there're no doubt many others), are:
>
> - lack of security
How is this any different than counted strings? I.e., I suspect you have
some example(s) which to you qualify as "lack of security," such as missing
terminators?
As I said, C today has functions which fix many of the serious string
issues. All one needs to do is use them. Problem solved. The C compiler
places terminators on all known strings automatically. So, it shouldn't be a
serious issue. It's really only an issue when strings are constructed from
characters, or strings are broken up into sub-strings, *and* someone is
using the older functions which aren't safe *and* the programmer forgets to
terminate the string. A bunch of stuff has to go wrong.
Counted strings can result in erroneous code too. The count can be wrong
for the string. That can lead to a number of problems: storage that's not
free'd, storage that shouldn't have been free'd, wrong text output, etc.
String storage that's not free'd because of an incorrect count value is a
memory leak. Storage that shouldn't have been free'd can lead to data
corruption or program crashes. If the count is short, the text output is
truncated. What if that happens on a choice-based input prompt? If the
count is too long, it can emit unintentional text characters, such as
control characters which could shift the character set on terminals and
terminal emulators. Some control sequences may even block keyboard input.
Counted strings also do not resolve the issues with string lengths causing
overflow and underflows when copying. They have the exact same issues as
terminated strings in regards to copying. Also see issues in the links you
asked for, which are at the bottom.
> - inefficient string manipulation
How so? The string length is almost never needed. Loop until terminator
found. That's good for copying, printing, moving strings. Search for a
character, insert NUL. That's good for substrings etc. When the string
length is needed, the string should be terminated and can be counted. If
you want to be safer, without using safe string functions, you can
automatically insert a NUL in the last character position of the string.
You know the string length. It's in the declaration or malloc() call.
> - violation of the principle of separation of the metadata from the
> data. Any data structure whose description is obtained by parsing
> "magic" tokens that are not part of the normal data, is conceptually
> suspect.
>
Why is that important? Isn't "locality of reference" preferred?
(Don't link me to Wikipedia on those. Both "metadata" and "locality of
reference" are there.)
How do you explain integers, floats, structs, unions, and the count for
counted strings, etc? Integers and floats have no "metadata" at runtime.
How do you separate data from not-present metadata? Structs and unions have
lots of "metadata" for their addressing of elements that the user in most
languages doesn't have access to, i.e., hidden. AISI, the count for counted
strings is a "'magic' token that [is] not part of the normal data" either,
assuming the string is the "normal data". Well, it's not a "token" per se,
but it's still nearby somewhere, stored at a "magic" location. The "magic"
location has no user-accessible metadata telling the user the size of the
location or the location. But, both must be known by the user to be used.
I.e., it's what a few people around here have been calling "carnal
knowledge" of a language or "fruits of the forbidden tree of knowledge",
etc.
> You say you mentioned issues with counted strings. What could they
> possibly be? Do you have a link to your post?
>
http://groups.google.com/group/comp.lang.forth/msg/9bd0c7d06b2c68a8
http://groups.google.com/group/comp.lang.forth/msg/fe585a620e28486b
http://groups.google.com/group/comp.lang.forth/msg/642828bbd8b1addb
Rod Pemberton
So, which 30? Pointers?
http://groups.google.com/group/comp.lang.forth/msg/10872cb68edcb526
http://groups.google.com/group/comp.lang.forth/msg/7912fa3634b2bd06
http://groups.google.com/group/comp.lang.forth/msg/3caa5b2e62f53d7a
Rod Pemberton
<snip>
> Zero-terminated might be perfectly valid description for NUL-terminated
> strings for some languages. FYI, that's not so for C.
>
> In C, strings are not zero terminated. In C, zero is a value for an integer
> or character. In C, strings are "all bits zero" terminated. This is a
> "byte" in C, or multiple bytes, and not a character. Simply put, a C "byte"
> is an address unit(s) as large, or larger, than a character's size in bits.
> If strings were zero terminated in C, an ASCII or EBCDIC NUL would suffice
> to terminate the string. Syntactically, that's how you terminate strings in
> C, you use a NUL character: '\0'. However, to comply with the C
> specifications, C must clear the bits for the entire string terminator
> "byte" with '\0', not just the bits used for the character. This is an
> issue if C bytes and characters are of different sizes in bits.
What about paragraph 2 of N1548 5.2.1 Character sets:
In a character constant or string literal, members of the execution
character set shall be represented by corresponding members of the source
character set or by escape sequences consisting of the backslash \ followed
by one or more characters. A byte with all bits set to 0, called the null
character, shall exist in the basic execution character set; it is used to
terminate a character string.
So a string is terminated with a null byte (or character).
--
Coos
CHForth, 16 bit DOS applications
http://home.hccnet.nl/j.j.haak/forth.html
What I got out of "The Lisp Curse" article was that powerful languages
(that is, languages that expose and let you manipulate the underlying
mechanisms) lead developers to build up exactly what they need. That's
great, but with every developer implementing their own take on a
feature (such as object orientation), what you get is that each
implements only the portions of the feature that developer felt were
important. Applying this to Forth, what you see is a number of
different OO packages, each with disjoint features and syntax. And of
course, the authors of these packages each think their particular set
of features, syntax, optimizations, and so on are best.
One of the things I find interesting about the Lua community is how
they addressed this problem. The Lua language doesn't have any built-in
notion of objects and -- like Lisp and Forth -- Lua is not an object-oriented
language. What Lua does offer is a set of extensible
mechanisms and some syntactic sugar related to objects. Like with
Lisp and Forth, the Lua programmer is free to build up whatever kind
of objects they see fit. If you think multiple inheritance class-
based objects are cool, you can build that. If you like simpler
prototype-based objects, you can build that. Early binding, late
binding, protection mechanisms baked into the objects or left as a
matter of convention, simple work-a-day objects or based on more
exotic notions. If you think an object should inherit just code and
not data, go for it. As practical or as insane as you want, you can
do it. If you want to mix different kinds of objects, nothing stops
you.
But what is interesting about object orientation in the Lua world is
that because it's all built on the same base, for the most part, all
of these object models coexist peacefully. So for example, in the
last major Lua-based project I worked on, most of the application code
was written using a prototype-style of objects, but calling libraries
that used class-style objects. And while there were a couple subtle
and interesting edges there, it worked just fine.
I wonder how Forth would be different today if at some point the
community said, "we disagree what an object is and what an object
should be, but we can find common ground in a core set of abstractions
that we think are a suitable base for most object-based systems."
Like with Lua, that wouldn't be an endorsement of any particular
flavor of objects-- or even the use of objects at all. And it
wouldn't stop passionate arguments for or against different object
models. It's just the realization that there is a certain amount of
conceptual overlap in most object systems, and that overlap could be
standardized.
The closest thing Forth offers to this is CREATE/DOES>. It's not the
only way to create objects, but often plays a role in most designs.
The problem is that it's too low-level. Nobody has gotten the various
authors of Forth object systems together in a room, locked the doors,
turned up the heat, and said, "you're not leaving here until you all
come up with a core set of facilities that you agree are all useful in
implementing objects." Had that happened-- either metaphorically or
in fact-- where would Forth be today?
Yes.
> (or character).
>
No.
It's *called* the "null character". The emphasis is on called. It's not a
character. It's a byte. A byte must be at least the same size as a
character. But, a byte can be larger. That's the critical distinction.
ISTM, that those who cite this section as a claim that it's a character
instead of a byte associate "called" with "is" instead of with "named".
You must not be reading all the articles here. I mentioned this recently with
some others. They cited the same thing and argued about its meaning too.
Rod Pemberton
Hmmmm.... Taos.... Swoon... ;-)
This is pretty typical of salespeople --- to believe that anything can
be sold with a well-funded marketing campaign behind it. This actually
works pretty well when selling soda pop or energy drinks. It is not a
good plan with a computer programming language however.
I think that Forth failed largely because it continued to use threaded
schemes well into the 1990s, whereas C generated machine code. The
result was that Forth code got a reputation for being at least an
order of magnitude slower than C code.
Memory was also a problem. In the days of MS-DOS, PolyForth required
the programmer to fit his code, data, and dictionary headers, all into
a *single* 64K segment. Apparently nobody at Forth Inc. knew that the
8086 allows programs to be put into multiple segments (CS for code, DS
for data, and ES for the dictionary headers). Most likely, the 8086
version of PolyForth was just a direct port of an old 8080 Forth
system.
PolyForth just didn't compare well with Turbo C, which was the main
competition. The slow indirect-threaded-code and the 64K limit
combined to make PolyForth unsuitable for anything except tiny toy
programs. All of this talk about the need for a "well-funded marketing
campaign" is just Elizabeth Rather wishing that people would give her
money. The problem was technical.
I read that article about the "Lisp Curse." There is actually a huge
abundance of Lisp code available. If you need a library of code, you
generally have dozens of choices available. That is the so-called
"Lisp Curse." By comparison, the "Forth Curse" is that you don't
generally have anything available, but you are required to write
everything yourself from scratch. When I wrote my LLRB tree
implementation in Forth, I was called "incompetent" (Passaniti) and a
"donkey" (Ron), etc.. That doesn't happen in the Lisp community ---
people there are encouraged to write software tools and make them
publicly available --- that is why there are so many Lisp libraries
available.
I really wish that somebody would write a library of code in
competition with my novice package. You don't have to go up against
the entire thing --- you could just pick out one aspect, such as
arrays, lists or associative arrays, and write your own version. Would
that be so difficult?
Let's start with efficiency. I claim, that without exception, *any*
operation on counted strings is more efficient than when the length is
unknown a priori and has to be determined from the contents.
- outputting: z-strings need a program loop that examines each
character in turn, always. Depending on how output is implemented in
the particular environment, with c-strings it can be a single machine
language instruction - REPNZ MOVSB, or LDIR / OTIR; and even in the
cases where the OS expects character by character calls, it is still
cheaper to do the loop using a counting instruction like LOOPNZ or
DJNZ, rather than explicitly reading *and* testing every byte.
- copying: even worse; mostly single machine language instructions for
c-strings, and there's no OS overhead for each byte, that would mask
the advantage in the outputting case. Add to that the fact that if you
need to allocate the memory for the copy of the string, you need to
read the string all over *twice* - the first time to count to see how
much memory to allocate, and the second time to do the actual copy.
- concatenation: same thing. If you know the lengths beforehand, you
can allocate the total first, and then do the efficient moves (or
resize the first string if appending in place, etc.) With z-strings,
you need to scan through at least one of them twice, and through both
of them once. In contrast, with c-strings, if appending in place, you
don't need to touch the first string at all, just copy the second.
- substrings: with c-strings, you don't need to touch the data at all,
if providing a reference - just return the address and count data.
With z-strings, you need to scan the source string at least once, no
matter what the operation is, to make sure that the desired substring
is inside the source string's length; with c-strings this is just a
calculation.
- parsing: with z-strings, you need two comparisons - is the byte 0,
is it what we want. In c-strings, we setup the loop parameters by
string length, and make just one comparison inside, leaving the loop
when done.
Second, let's look at the "errors" of the counted strings. They are
mostly of your imagining. How can it happen that the count is wrong?
Maybe if *you* program it, but not in a program by anybody who
routinely uses counted strings, especially a Forther. Concatenation -
just add the lengths, store the result. Substring - same thing;
subtract, store. If the initial counts are correct, all string
operations will trivially yield correct results. Contrary to your
absurd statement in the last of the links you provided, when a
c-string is being modified, one does *not* need to re-count. The new
size is trivially calculated from the old. You are under the mistaken
impression that the string data is somehow manipulated in isolation
from the length, while in fact they form a logical tuple (address
count), and any correct program manipulates them that way. For ex,
when editing in a line buffer, you increase the count when inserting a
character, decrease the count when deleting, don't change it when
overwriting. There's no need to re-scan or re-count.
Third, security. Is it, or isn't it true, that when accepting a
stream of data, where the size of the data is determined by its
contents, one can overflow any pre-allocated buffer? I agree, however,
that this is mostly fixed nowadays (as you point out), by extra
overhead in the re-written security conscious C functions. All that
needed not be done, had it started with proper counted strings, where
all operations are correct from the get-go, and screwups of that
nature can't happen.
Fourth, metadata. It's part of a bigger problem, actually. Even the
counted strings where the count is adjacent to the beginning of the
string are flawed, but less so than the z-strings. Ideally, the
address/count pair should be in a separate location from the string
itself, in a separately allocated memory area. The reason is that if a
misbehaving program overwrites it, the size is lost. This is
especially troublesome for other data structures. For example arrays,
if they store their sizes inside themselves. Or memory managers, that
store the size and owner of the allocated memory segment at the
beginning of the chunk, at a negative offset to the address they
return to the requesting application.
Fifth, one more flaw of z-strings that I just remembered - they
can't contain the NUL character. :) (Yes, I've had that problem in
'94 - a modem needed some NULs as part of its initialization, and the
M$ DTE software we were using couldn't output it, even though it
accepted any hex values in its custom command strings. I don't
remember what exactly we did (maybe wrote a small custom
initialization program?), but we had to find a workaround.)
Still waiting for just one redeeming quality of z-strings. ;)
--
Actually, you need not bother answering. I read the entire thread
referenced below only after I posted my message, while I should have
read the entire thread first. There, all your absurd arguments were
thoroughly refuted, many with the same arguments that I use. And you
still come forward and post a link to where you were totally shown wrong?
I have nothing more to say, and don't think you have either. Move
along people, nothing to see here.
> Rod Pemberton wrote:
>> "Elko T" <nono.bl...@gmail.com> wrote in message
>>
> Still waiting for just one redeeming quality of z-strings. ;)
There must be some redeeming features, as Delphi started with counted
strings and now provides both. The main problem with counted strings is
the length limit. Extracting substrings and parsing is fine, but when
appending it is fairly easy to get a string that is too long to fit.
There is also a possible problem with OS that expand relative paths to
absolute paths and return something that is too long to fit.
I don't know where counted strings originated, but I remember using the
Microsoft Basic string functions, which, by the way, used a separate
string space at the top of memory; a 255-character limit could bite back.
Ken Young
You make very good arguments for the addr u representation of strings
(and I agree with them). The only problem is that you call this
representation "counted strings", which are commonly understood to be
strings preceded with a count byte and represented only by the address
of the count byte. E.g., from the Forth-94 document:
|counted string: A data structure consisting of one character
| containing a length followed by zero or more contiguous data
| characters. Normally, counted strings contain text.
And calling counted or addr u strings c-strings is confusing because
it makes me think of C strings, which are zero-terminated.
- anton
--
M. Anton Ertl http://www.complang.tuwien.ac.at/anton/home.html
comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html
New standard: http://www.forth200x.org/forth200x.html
EuroForth 2011: http://www.euroforth.org/ef11/
Yes it is, and its history is quite well documented:
ALGOL 60 was the primary influence on CPL
which was the primary influence on BCPL
which was the primary influence on C.
Andrew.
I started with Algol, then FORTRAN, then C.
A wrong Algol run ended with
array index out of bounds
or
memory exhausted
(The latter meaning infinite recursion)
And a precise account where it happened.
Both FORTRAN and C crash, with a coredump at best.
Neither C nor FORTRAN has a decent BNF definition.
Neither FORTRAN nor C deserves to be called an Algol-type, or
Algol-derived, language. They are hackish mishmashes. (As is Forth, by the way.)
--
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst
Indeed, I should have used better terminology, mea culpa. You
understood me perfectly, however - I'm not talking about any
particular implementation of counted strings - be it the Forth
character "counted string", or the stack-contained "addr u" strings.
I'm talking about the situation where each string "object" is
characterized by its address and length, which exist as metadata
outside of the string, and describe it.
Delphi most likely provides both since the host OS library is likely in C.
Rod Pemberton
I disagree. Your last post was totally erroneous. I've posted those
errors. You failed to understand x86. You don't know what a "c-string" is.
You think c-strings don't have many of the same issues as z-strings. You
used numerous arguments supporting my position. You used flawed logic.
You're beyond my help.
Rod Pemberton
The x86 segmented memory model caused problems for C too. Yes, many, many
x86 compilers produced code for the segmented model without problems.
But, there are examples of C compilers that won't: GCC, LCC. To be fair,
that's not the entire story for either of them. LCC did produce x86
segmented code at first. They abandoned that after having problems with C
specification compliance with segmentation. GCC was designed for
non-segmented memory platforms from the start.
Rod Pemberton
Who are you talking to? I don't see the headers for the prior conversation.
It should be here. Your comments should be interspersed between those of
whom you are replying. You're posting to Usenet...
> Let's start with efficiency. I claim, that without exception, *any*
> operation on counted strings is more efficient than when the length is
> unknown a priori and has to be determined from the contents.
True, but irrelevant. If the length was needed for operations with
z-strings (your terminology), I'd agree with you that c-strings (your
terminology) would be faster than terminated strings. But, as I stated, the
length is almost never needed with z-string operations. So, there is no
advantage to having the count or string's length with z-strings.
> outputting: z-strings need a program loop that examines
> each character in turn, always.
Outputting c-strings needs a program loop that examines the count on each
iteration, always... Is there a point to this?
> Depending on how output is implemented in
> the particular environment, with c-strings it can be a single
> machine language instruction - REPNZ MOVSB,
Did you *ACTUALLY* use "REPNZ MOVSB" as proof that c-strings are an
improvement over z-strings? With me...? Am I stupid? Wait... Are you
stupid? You understand that REPNZ repeats while there is no zero byte, yes?
"REPeat (while) Not Zero MOVe String Byte" There's no count involved.
Where's the c-string? I.e., it's designed for outputting z-strings... WTF?
Wow, you must've gotten seriously confused in your arguments somewhere...
It's best to delete such posts rather than send them. Start over, rewrite,
it'll come out much better. Trust me.
> [...] in the
> cases where the OS expects character by character calls, it is still
> cheaper to do the loop using a counting instruction like LOOPNZ or
> DJNZ, rather than explicitly reading *and* testing every byte.
>
Dude...
JNZ is not a counting instruction (z-string). JCXZ is a counting instruction
(c-string). LOOPNZ is not a purely counting instruction (z-string &
c-string). LOOP is a counting instruction (c-string).
> - copying: even worse; mostly single machine language instructions for
> c-strings,
Uh, you clearly misunderstand something, at least for x86...
> Add to that the fact that if you
> need to allocate the memory for the copy of the string, you need to
> read the string all over *twice* - the first time to count to see how
> much memory to allocate, and the second time to do the actual copy.
>
That depends on the copy routine implemented. If a block was previously
allocated and is known to be sufficiently large, which holds in many
situations, then that claim isn't true.
> - concatenation: same thing. If you know the lengths beforehand, you
> can allocate the total first, and then do the efficient moves (or
> resize the first string if appending in place, etc.) With z-strings,
> you need to scan through at least one of them twice, and through both
> of them once. In contrast, with c-strings, if appending in place, you
> don't need to touch the first string at all, just copy the second.
>
Ditto.
> With z-strings, you need to scan the source string at least once, no
> matter what the operation is, to make sure that the desired substring
> is inside the source string's length;
You have to do that with c-strings also. How do you locate the substring
for either c-strings or z-strings without scanning the original string?
That makes no sense whatsoever.
> - parsing: with z-strings, you need two comparisons - is the byte 0,
> is it what we want. In c-strings, we setup the loop parameters by
> string length, and make just one comparison inside, leaving the loop
> when done.
You have to do two comparisons with c-strings also: "is it what we want" and
"has the counter reached its terminating value".
> Second, let's look at the "errors" of the counted strings. They are
> mostly of your imagining. How can it happen that the count is wrong?
How does the count get set? Just as someone or something must set the
terminator for z-strings, someone or something must get and set an accurate
count for c-strings. When something goes wrong, you've got problems.
Count errors in c-strings cause more problems.
> Maybe if *you* program it, but not in a program by anybody who
> routinely uses counted strings, especially a Forther.
> Concatenation -
> just add the lengths, store the result.
Oh, so z-strings via your prior statements must allocate memory but
c-strings don't need to? You can just store the results? Your argument is
flawed.
> Substring - same thing;
> subtract, store. If the initial counts are correct, all string
> operations will trivially yield correct results.
Oh, so z-strings via your prior statements must allocate memory but
c-strings don't need to? You can just store the results? Your argument is
flawed.
When things are done correctly, z-strings work well too. The problems you
point out about z-strings are when things do not work well.
> Contrary to your
> absurd statement in the last of the links you provided, when a
> c-string is being modified, one does *not* need to re-count.
Why? The string is of a different length. I hope you recounted.
> The new
> size is trivially calculated from the old. You are under the mistaken
> impression that the string data is somehow manipulated in isolation
> from the length,
No, I'm not. I pointed out what happens in erroneous situations. When you
point out flaws in z-strings, you'll point out the error condition: that
things go wrong with z-strings when they do not have a terminator. Well,
bad things happen with counted string too when the count is not correct.
> [..] while in fact they form a logical tuple (address
> count), and any correct program manipulates them that way.
Just as any correct program using z-strings has no z-strings without a
terminator... Using your argumentative logic, z-strings are perfect. They
have no problems.
> For ex,
> when editing in a line buffer, you increase the count when inserting a
> character, decrease the count when deleting, don't change it when
> overwriting. There's no need to re-scan or re-count.
>
Increasing or decreasing the count is not a "re-count"? Word games... As
for z-strings, as long as the buffer is large enough - I'm assuming it is
since you didn't allocate any space for your example, then you only need to
move characters, just as needed for c-strings, when inserting or deleting
characters. I.e., there is no need to update a count or a terminator. It's
less work.
> Is it, or isn't it true, that when accepting a
> stream of data, where the size of the data is determined by its
> contents, one can overflow any pre-allocated buffer?
Yes, and c-strings have this issue too. What's your point?
> All that
> needed not be done, had it started with proper counted strings, where
> all operations are correct from the get-go, and screwups of that
> nature can't happen.
>
Wrong!
> Fourth, metadata. It's part of a bigger problem, actually. Even the
> counted strings where the count is adjacent to the beginning of the
> string are flawed, but less so than the z-strings. Ideally, the
> address/count pair should be in a separate location from the string
> itself, in a separately allocated memory area. The reason is that if a
> misbehaving program overwrites it, the size is lost.
z-strings don't have that problem... There's no count. So, the count
cannot be overwritten. The address of a z-string, depending on what the
compiler did with it, may or may not be overwritten. The z-string may or
may not be overwritten, depending on how it was declared in C or what
operating system privileges are available. Strings are not always writable.
The length of a z-string is determined by the terminator. If the terminator
is overwritten, you'll have a problem. If that terminator is more than the
allocation, then there is a bug. If the count for a c-string is more than
the allocation, then there is a bug too. But, z-strings have fewer
side-effects and serious problems.
> This is
> especially troublesome for other data structures. For example arrays,
> if they store their sizes inside themselves. Or memory managers, that
> store the size and owner of the allocated memory segment at the
> beginning of the chunk, at a negative offset to the address they
> return to the requesting application.
>
It seems the systems you're familiar with all use writable strings. Some
languages and operating systems restrict that.
> Fifth, one more flaw of z-strings that I just remembered - they
> can't contain the NUL character.
So? It's not a z-string then is it? It's just raw byte data. C handles
that without any problems at all. The raw data is an "array" of small
integers: characters specifically. Yes, C handles the NUL character by
using characters. Is your mind "blown" now? C's functions determine if the
data is a z-string or not. strxxxx() recognize NUL. memxxx() do not
recognize NUL.
> Rod Pemberton wrote:
> > "Elko T" <nono.bl...@gmail.com> wrote in message
> > news:iuglrm$96e$1...@dont-email.me...
> >> Rod Pemberton wrote:
> >>> "Nomen Nescio" <nob...@dizum.com> wrote in message
> >>>
These go at the top. They are supposed to stay there. This goes afterwards,
with your comments inserted. The quantity of > indicates who said what, to
indicate to what you are replying.
> >>>> One main advantage to PL/I over C is how PL/I handles strings,
> >>>> but since I believe you are a fan of null terminated strings you
> >>>> probably won't agree.
> >>> I don't agree. I lived through the "counted strings" era and saw
> >>> the numerous issues with them. Nul-terminated strings put an
> >>> end to those problems, many of which I mentioned here recently.
> >
> > Actually, it seems, NN said "null terminated strings". I mistook that for
> > NUL-terminated strings. "Null terminated strings" are something else
> > entirely in the C context, nowadays. Null in C is not a character value
> > anymore. It was prior to ANSI standards. Modern C's null is a pointer.
> > So, no I don't like null terminated strings.
> >
> >> I have no issue with most of what you say, but this statement is
> >> truly bizarre.
> >>
> >
> > [SNIP]
> >
Down here, you delete the signature of the person replying, when you reply.
Rod Pemberton
No. *Your* new post is totally erroneous. Have you ever programmed
an Intel CPU in assembly? You don't appear to know what its
instructions do; for example you appear to think that REPNZ MOVSB
repeats until a zero string byte is encountered. Reread your CPU
instruction set manual.
All the rest of your arguments are like that - either in error, or
twisting the truth and making nonsensical assumptions. You have been
repeatedly shown wrong - by my post, and by all the people who
answered you the previous time. You obviously refuse to learn from
your mistakes; so be it. One can't teach the unteachables.
If you want to "convert" e.g. an ACCEPT buffer to a terminated string,
simply terminate it (if you have the space). You don't have to shift
the whole text one character up to make room for the count.
Most of my strings live most of their life as an addr/count string.
E.g. all string manipulations (on my system) work on addr/count
strings. You don't use variables if you don't have to. We're not doing
BASIC here.
So, in short, I don't think that on an ANS system (where the addr/
count form is used) the difference is that significant. Of course,
somebody will stand up and prove that in the middle of a tight loop
counted strings are half a millisecond faster, but IRL? I don't think
so.
Hans Bezemer
32-bit polyFORTH used a full 32-bit address space. The DOS
implementation (pF32-386/pMSD) used the Phar Lap DOS extender and
operated in virtual mode, with address space to 1 Mb.
Cheers,
Elizabeth
--
==================================================
Elizabeth D. Rather (US & Canada) 800-55-FORTH
FORTH Inc. +1 310.999.6784
5959 West Century Blvd. Suite 700
Los Angeles, CA 90045
http://www.forth.com
"Forth-based products and Services for real-time
applications since 1973."
==================================================
>
> I really wish that somebody would write a library of code in
> competition with my novice package. You don't have to go up against
> the entire thing --- you could just pick out one aspect, such as
> arrays, lists or associative arrays, and write your own version. Would
> that be so difficult?
Do you have a memory problem or are you just disingenuous? You've been
told before about the Forth Foundation Library, and indeed have
adversely commented on it.
The FFL has been in existence longer than your novice package and has
far more functionality. If you think your novice package is better, and
I don't know whether it is or not, the onus is on you to provide facts
and figures proving so instead of just stating opinions and just
ignoring its existence.
--
Gerry
Yes, but I already knew all these things, so since some of the things
you wrote did not make sense when applying the usual meaning of
"counted strings", I read the whole length of your posting carefully
and found out that it was about addr u strings.
But I did not benefit from your posting, since I knew these things
already. Somebody who might have benefitted from it, had you used
better terminology, probably found it confusing or was misled by it.
> - I'm not talking about any
>particular implementation of counted strings - be it the Forth
>character "counted string", or the stack-contained "addr u" strings.
>I'm talking about the situation where each string "object" is
>characterized by its address and length, which exist as metadata
>outside of the string, and describe it.
But that's not the case for counted strings. There the length
metadata is part of the pointed-to data structure, and therefore
certain advantages of addr u strings are not shared by counted
strings, e.g. creating a substring is not as simple for counted
strings as you describe, whereas it is as simple for addr u strings.
Yes. It describes most hobbyist programming projects, irrespective of
programming language.
However, some languages have developed a culture that results in a
repository of useful code, even if much of that is written by
hobbyists. Still, even in these languages most hobbyist programs are
described by the paragraph above, and most professionally developed
programs share many of the same characteristics. The difference is
that, in the small fraction of programs where the programmer can be
induced to go the extra mile, the programmer is actually encouraged to
do so.
"REPNE and REPNZ ... repeat their associated string instruction the
number of times specified in the counter register (rCX). The
repetition terminates when the value in rCX reaches 0 or when the zero
flag (ZF) is set to 1. The REPNE and REPNZ prefixes can be used with
the CMPS, CMPSB, CMPSD, CMPSW, SCAS, SCASB, SCASD, and SCASW
instructions."
"REPE and REPZ ... repeat their associated string instruction the
number of times specified in the counter register (rCX). The
repetition terminates when the value in rCX reaches 0 or when the zero
flag (ZF) is cleared to 0. The REPE and REPZ prefixes can be used with
the CMPS, CMPSB, CMPSD, CMPSW, SCAS, SCASB, SCASD, and SCASW
instructions."
"The REP prefix repeats its associated string instruction the number
of times specified in the counter register (rCX). It terminates the
repetition when the value in rCX reaches 0. The prefix can be used
with the INS, LODS, MOVS, OUTS, and STOS instructions."
AMD64 Architecture Programmer’s Manual Vol 3, Pub# 24594 3.15 November
2009
MOVSx instructions are not permitted with a REPE/REPNE/REPZ/REPNZ
prefix. Only REP is permitted, which is entirely dependent on a count
in rCX, and does not inspect the bytes/words/dwords moved.
>
> > [...] in the
> > cases where the OS expects character by character calls, it is still
> > cheaper to do the loop using a counting instruction like LOOPNZ or
> > DJNZ, rather than explicitly reading *and* testing every byte.
>
> Dude...
>
> JNZ is not a counting instruction (z-string). JCX is a counting instruction
> (c-string). LOOPNZ is not a purely counting instruction (z-string &
> c-string). LOOP is a counting instruction (c-string).
>
All instructions that branch based on a condition code require it to
be set by something; an arithmetic or comparison operation normally.
In the case of a null-terminated string (a z-string in the terminology
here), such a compare will read and test every byte.
<snip>
> MOVSx instructions are not permitted with a REPE/REPNE/REPZ/REPNZ
> prefix. Only REP is permitted, which is entirely dependent on a count
> in rCX, and does not inspect the bytes/words/dwords moved.
>
If I write REP MOVS or REPZ MOVS the same opcodes are assembled ;-)
By some strange decision of Intel, there are only two repeat prefixes
REPZ/REPE or REP (F3) and the other is REPNZ or REPNE (F2).
All those errors are bugs in compilers, which are nowadays unusual and in
any case are all fixed in one place, unlike the uncountable buffer overflows
coded by incompetent coders. I trust somebody putting out a compiler (OK,
not gcc, but a normal company) to do a better job than the average code
butcher or GPL groupie.
> How so? The string length is almost never needed. Loop until terminator
> found. That's good for copying, printing, moving strings. Search for a
> character, insert NUL. That's good for substrings etc. When the string
> length is needed, the string should be terminated and can be counted. If
> you want to be safer, without using safe string functions, you can
> automatically insert a NUL in the last character position of the string.
> You know the string length. It's in the declaration or malloc() call.
That's in your architecture on your OS. Byte-by-byte moves are horribly
inefficient on the platform I work on. We have instructions to move up to
256 bytes with one opcode, up to 4096 page-aligned bytes with one opcode,
and a microprogrammed long move of any practical length (I think it's 2**24,
but I can't remember at the moment). Same thing for compares. PL/I knows how
to move strings efficiently because of having the length of source and
target at all times. Avoids those embarrassing printf and deadly scanf type
errors too.
> How do you explain integers, floats, structs, unions, and the count for
> counted strings, etc? Integers and floats have no "metadata" at runtime.
I don't necessarily agree with what he wrote but to answer you, those other
objects also have either architecturally defined, system defined, or
programmer defined lengths that *are* known.
Yes. REPNZ (F2) or REPZ (F3) before MOVSx means REP, as no condition code is
set by MOVSx; the effect is REP regardless. They're also used in SIMD
instructions, where F2h and F3h modify the opcode rather than acting as a
REP.
> MOVSx instructions are not permitted with a REPE/REPNE/REPZ/REPNZ
> prefix. Only REP is permitted, which is entirely dependent on a count
> in rCX, and does not inspect the bytes/words/dwords moved.
False. It's not the intended prefix. Microsoft does such stuff in their
code. There are a number of such examples. The most well known one is in
their boot loader code. I think that's still in-use to this day. Does
REPNE with an incorrect instruction work as a REP or REPNE?
I don't know. I would assume REP.
> All instructions that branch based on a condition code require it to
> be set by something; an arithmetic or comparison operation normally.
> In the case of a null-terminated string (a z-string in the terminology
> here), such a compare will read and test every byte.
Yes, and a count will be compared for each and every byte too, for instructions
suited to c-strings... I've already stated this. Elko T stated that this
is some sort of advantage for c-strings over z-strings. It's not. Both
have the same issues in regards to looping.
RP
Wrong. It's not. There is one mistake, due to you...
> Have you ever
> programmed an Intel CPU in assembly?
>
Yes, frequently, albeit I prefer C and use it much more often.
> You don't appear to know what its
> instructions do; for example you appear to think
> that REPNZ MOVSB
> repeats until a zero string byte is encountered.
>
*YOU* wrote "REPNZ MOVSB". Take responsibility for your error: incorrect
pairing of repeat prefix and instruction. A failure to catch your mistake
or trust that you got something correct was my mistake. REPNZ MOVSB wasn't.
Yes, I assumed you got the correct combination before I posted, but like
everything else in that post, that was wrong too. So, I explained what
each of those meant - together. How is that my error? REPNZ means what I
said. MOVSB means what I said. Together, it means what I said. Do they
work together? Yes. Do they work that way together? Probably not... REPNZ
and MOVSB will probably work as REP MOVSB, i.e., c-string only. MS uses
incorrect repeat prefix pairings in a number of places in their code. Some of
their bootloader code is the most well known example. This doesn't change
the flaws in your argument regarding c-strings being better suited to
assembly versus z-strings.
> All the rest of your arguments are like that - either in error, or
> twisting the truth and making nonsensical assumptions.
Wrong. There is nothing twisted. It's accurate and correct. The only
thing that isn't is your mistakes and things derived from them...
> You have been
> repeatedly shown wrong
By whom? When? Not by you... All of your claims were incorrect. I've
explained *repeatedly* why you're wrong.
> by my post
Your post was completely wrong. I showed you where and told you why. What
is it that you don't understand? Comprehension, or lack of, real or
feigned, seems to be an issue with you.
> and by all the people who
> answered you the previous time.
I reread the thread that I posted links to after your BS claims that I was
thoroughly refuted somewhere. I don't see where anyone demonstrated
anything incorrect about what I said.
> You obviously refuse to learn from your mistakes; so be it. One can't
> teach the unteachables.
Cute! Recycling my statements about you: "... You used flawed logic. You're
beyond my help." It's there. Go back and reread it.
Since you didn't realize it, I'll state it very clearly. I'm not the one
being taught. That's as politely as I can state the issue.
Rod Pemberton
On the 65c02, the zero-page could be used as 16-bit soft-registers for
pointing at data. That was ahead of its time. The 65C02 didn't allow
the location of the soft-registers to be changed, which would have
been nice (the 65C816 did), but it was still a pretty good design.
This is all immaterial now. The 8-bit processors are obsolete now, so
nobody cares if Forth runs on them or not. The 16-bit processors are
on their way toward becoming obsolete too. We still have processors
such as the PIC24 in use, but they will disappear pretty soon. I
didn't predict this. It seems like just a little while ago that 8-bit
processors (especially the 80c320) were the standard, 16-bit
processors (such as the MC6812) were the heavy hitters, and 32-bit
processors were unheard of (in the micro-controller world). I expected
that 16-bit processors would become the standard (and I was glad of
it, as most of those 8-bit processors were too limited), but that 32-
bit processors would never be used as micro-controllers. Now 32-bit
processors are the standard.
I could write a Forth for the PIC24, but doing so would be somewhat
pointless. The PIC24 will soon be obsolete, along with all of the
other 16-bit processors. When 16-bit processors become obsolete, Forth
will become obsolete too. There are myriad powerful languages
available for 32-bit processors. Who is going to want to use a
language that doesn't have *any* support for OOP? Of course, C will
also become obsolete along with Forth, but that is not much of a
consolation, considering that C++ will take its place.
I tried to introduce ALLOCATION into Forth-200x in order to allow for
inheritance, but this idea was killed because Forth has to be
implementable in C and C doesn't have ALLOCATION. Imagine if
ALLOCATION had been introduced into Forth in the late 1980s when OOP
was first becoming popular. People would have flocked to Forth because
they could have inheritance, and they would have abandoned C
altogether. A quarter of a century later, Forth would be the standard
language and nobody would care if Forth could be implemented in C or
not, because C would have become obsolete long ago.
I remember in 1984 that Forth and C were both popular, and it was
anybody's guess which would become the standard. A failure in
leadership in the Forth community resulted in Forth's complete
failure. Now most people have never heard of Forth, but the few who
have heard of it are only familiar with Gforth, which is just a toy
--- it is slow as molasses compared to C (because it is written in C!)
and it is unsuitable for any kind of application programming
whatsoever. Almost everybody interested in Forth (including yourself)
is interested in writing their own hobby Forth system (or in learning
how Gforth works internally), but nobody is interested in writing
applications. There are a few hobby applications, such as Sudoku
solvers or my slide-rule program, but even those are rare --- there
are more Forth systems being written than Forth applications, and
almost none of those Forth systems are capable of being used to write
Forth applications.
The 8086 architecture was the *only* thing being used for MS-DOS, and
MS-DOS commanded maybe 99% of the market (the Apple Macintosh, Atari
ST and Amiga had *very* thin usage). Anybody who was going to program
under MS-DOS was expected to master the concept of segments (it is not
difficult). Anybody who fails to master this most rudimentary concept
would not be taken seriously --- this includes QBasic and PolyForth
and some other toy languages (actually, I think that QBasic did use
the Small memory model internally, so even it was ahead of PolyForth).
GCC has primarily been used on Linux (and BSD) systems, and on micro-
controllers, neither of which use the 8086 architecture. As for LCC,
I've never heard of it being used by anybody --- afaik, it is just an
educational system intended to teach compiler design.
Show me *any* application program that has been written using FFL. Or
better yet, write one yourself!
FFL isn't any good. It is extremely primitive in many ways --- I don't
think that it is suitable for use in application programming.
Indeed. All three forms (REP/REPZ/REPNZ) are supposed to produce
the same outcome when assembling, and the two opcodes F2 and F3 are
supposed to work the same, on *the non-testing* instructions.
Interestingly, however, Intel always gives F3 for REP in its docs,
while an old Turbo Assembler manual I have gives F2 for the LODS
instructions and F3 for the rest.
Why on earth should I? I have little interest in the relative merits of
FFL and the novice package and was just pointing out that your claim of
no competing libraries was false and that you were well aware of the fact.
> Or better yet, write one yourself!
>
> FFL isn't any good. It is extremely primitive in many ways --- I don't
> think that it is suitable for use in application programming.
It's a well known sales technique to promote a product by omitting any
mention of competing products or rubbishing any such products without
evidence. You've used both tactics. A better sales technique is to
recognise there are competitors and to present some evidence, for
example an analysis showing why your product is superior whether by:
- absence of bugs
- superior functionality
- better performance
- ease of use
...
I don't think that just shouting "it's rubbish" convinces anybody,
particularly knowledgeable members of this group, indeed it is more
likely to antagonise them.
You've complained several times about nobody using your library. Trying
to be helpful I would suggest:
1. Give it a better name than the novice package. It may be suitable for
novices but calling it that makes it plain that it is *only* suitable
for novices. Why should a seasoned Forther who already possesses his own
library even look at yours?
2. Deposit it or a link in FLAG http://soton.mpeforth.com/flag/ so that
its existence has a better chance of becoming known.
3. Provide evidence of why it is superior without rubbishing the opposition.
4. Provide comprehensive test programs using the Hayes tester, that
prove your library is well tested. Others do e.g. David Williams (see
http://www-personal.umich.edu/~williams/archive/forth/strings/index.html
- incidentally there are quite a few library packages in ANS Forth on
this site that you probably haven't considered) and Krishna Myneni.
Again why should experienced Forthers use a library when there's little
or no evidence that it's been well tested.
5. Write a small, easily understandable application where your library
is beneficial. Your slide rule program is too large.
Of course items 3, 4 (see **) and 5 take quite a bit of work which you
may be unwilling to do, in which case you'll probably have to live with
the indifference shown.
I hope this helps.
** If such test programs are developed simultaneously with the library
code, I would argue that it both speeds development and you get the final
test programs automatically for free. At least that is my experience.
Also when you modify the library running a test suite provides
confidence that nothing has been broken.
--
Gerry
Just as humans are dwarfed by bacteria, not only in numbers but in
bio-mass, so you will be surprised by the number of 4-bit computers
around, let alone 8-bit ones. You know nothing.
<SNIP>
Greetings, Albert
Not important, especially in the case I am talking about, since the hardware
and software were designed for each other. It doesn't make sense to port the
OS and they have never needed to because the company that puts out the OS
also designs and builds the hardware.
You can be portable or optimized, but not both. IBM chose to optimize as
much as possible and they did it by controlling the hardware and software as
a unified package instead of making it run everywhere badly.
> Has any operating system written in C been ported to
> another platform? (Yes, many...)
Many bad ones. Again, so what?
> Assembly code "dies". It's affixed to the specific processor in use. I
> learned that decades ago.
Well I learned decades ago that the same assembler code that worked in 1964
still works today. That's almost 50 years. How much longer should the code
live before your concern becomes invalid?
> HLL code survives.
Really? What about the guy who posted recently about his production Modula 2
code breaking and having no way to fix it? What about all the incompatibilities
introduced by Intel and MS so that old code doesn't work anymore on those
platforms? DOS code won't run on anything from NT and on...etc.
Bottom line is more HLL code has been thrown away than assembler!
> > All the header files and system calls are in assembler. If you
> > want to use C on a mainframe you cannot physically write system
> > code. At all
>
> "At all" is a bit of an extreme claim.
No, it's simple fact. You cannot write system code in MVS in any language
but assembler. Even if you could get mappings for system control blocks in
other languages the code would quickly fall over because it doesn't have
control over facilities that system code needs.
> The GCCMVS project (Paul Edwards) ported GCC to MVS. I don't know much
> about MVS, but I assume it was written in assembly.
Before, you said assembly was bad because it doesn't go cross-platform, but
here you're giving an example of probably the most highly ported product out
there, and it's written in assembly, at least enough of it to be
self-bootstrapping; otherwise they would have needed an IBM-licensed
compiler to compile gcc, which I'm sure Stallman and his groupies would never
tolerate, since IBM is evil and charges money and does so much closed source
code.
I looked into this the last time we talked about it. You don't understand
what gccmvs is and what it's good for (not much) because you don't work in
that environment. If you did you would realize how limited it is. First of
all there is a much more capable C compiler out there from IBM with real
libraries that give an MVS programmer the tools he needs to write useful
application code and access a reasonable range of application facilities.
Not nearly as much as COBOL or PL/I, but about as good as Java on
MVS. gccmvs is missing basic things required for applications and doesn't
have access to virtually anything a programmer needs in that environment,
it's a special purpose tool, not a general purpose one. MVS is not POSIX. In
Linux on MVS hardware gccmvs could be worthwhile, but I think regular gcc is
already ported there. That brings us to the reason for gccmvs.
Mainframes are not used by hobbyists because the machines are too expensive
and the OS and software are expensive, and no production shop uses GCCMVS;
it's basically something Paul needed in order to do his updates to the public
domain OS IBM released in 1974. gccmvs has no system interface (it has very limited support
for application interface) and it is not system code. They had to write
wrappers to make it appear POSIX-like but it doesn't use IBM's POSIX
facilities, because they were not available in 1974 and he needs his code to
work backwards. The OS he is patching also doesn't have the underlying OS
that does have POSIX support. It's basically a way to make a mainframe run C
like on a PC, with some minor local modifications. If you are happy with
printf etc. it might be enough for you. You cannot write system code in it,
you can't do basic file system operations that the IBM C compiler supports,
etc. It's a special purpose tool for hobbyists and that's it. Even IBM's C
compiler has no support for writing systems software, and not their Metal C
either.
> So, it probably did what you claim cannot be done "at all".
No, it didn't. Not even close. It just provides application wrappers to
application level services. It does not go anywhere near systems
services. Paul himself clearly says that in various yahoo groups posts I
found. Windows and UNIX are so different from MVS and magnitudes less
capable, so I'm not surprised there is no reference frame here for
discussion. However most of the doc is online, if you have a couple free
decades and want to see how it should be done, you can look!
> DOS is also written in assembly. DJGPP (GCC port) for DOS does just what
> you claim cannot be done.
What does DOS do that I claim cannot be done? I have no idea what you are
talking about.
> Line-oriented BASIC was the only place GOTOs are needed.
No. FORTRAN and COBOL also need them to have the code generated in an
efficient way and used properly they increase readability dramatically.
> As for C, there is no need for them whatsoever.
http://www.kernel.org/pub/linux/docs/lkml/#s15-5
What you said is not true and goes against the original K&R where they
explained when goto is appropriate in C. The Linux kernel developers use
gotos heavily in the kernel, see above link. Now I realize goto's are almost
never appropriate in C but you don't realize they are almost always
appropriate in FORTRAN and COBOL. You have to know the language and use it
idiomatically instead of trying to make everything a C program, because if
you do that with other languages the code doesn't look right and performance
suffers a lot.
> There are problems with assembler: the code "dies",
Not on the platform and OS I work on.
> it's not as easily maintained
Depends who you are and what code. I can maintain bad assembler a lot better
than I can maintain good C. YMMV.
> and it's harder to use variables, struct's, etc.
Maybe in your assembler(s) but not in ours. The assembler we use was
designed for heavy duty use and has tons of features that make it as capable
as any HLL including a native macro language that is almost an HLL itself.
> Ada is a US Gov't requirement for some projects.
Not for more than a decade. And what does that have to do with what we were
discussing?
> It's got assembly. It's got a half-dozen assemblers for it. NASM is a
> decent assembler for it.
yeah but then you still have to deal with Intel's abominable "architecture".
The reason MVS is so nice for assembler developers is the OS is written in
assembler and all the system interface is in assembler. We don't have to
figure out idiotic C calling conventions designed for compiler back ends or
look at header files in a foreign language, everything is designed around
supporting the assembler programmer. The architecture is elegant, powerful
and efficient. No horseshit libraries needed to do things like storage
management (can you spell libc boys and girls?), everything is a documented
application or system service. It comes with real manuals and real error
messages, not stuff like ooops, sorry, etc.
> > PL/I avoids the whole issue of buffer overruns on string operations
> > since it knows the length of the source and target and will positively
> > not blow off the end of a string.
> >
>
> One disadvantage versus many ...
Maybe but PL/I application code doesn't crash the system or cause integrity
exposures and C code does. Every day of the week.
> "... everything has a purpose or place ..." Psychologically, I'd guess your
> MBTI type indicates you seek: harmony (NF) ...
Ha, no. You missed that one. Assembler, FTW!
Why? Lots of people don't. All you have to do is follow the ABI.
> If you look at the IBM mainframe the operating system is written in
> assembler. All the header files and system calls are in assembler.
MVS? Written in PL/S, surely.
Andrew.
I misused the word "permitted"; all F2/F3 REPs prefixing MOVSx, or any other
opcode that doesn't set the condition code, are treated by the
processor as REP.
>
> > All instructions that branch based on a condition code require it to
> > be set by something; an arithmetic or comparison operation normally.
> > In the case of a null-terminated string (a z-string in the terminology
> > here), such a compare will read and test every byte.
>
> Yes, and a count will be compared for each and every byte too, for instructions
> suited to c-strings... I've already stated this. Elko T stated that this
> is some sort of advantage for c-strings over z-strings. It's not. Both
> have the same issues in regards to looping.
But if for example SIMD instructions are used, then no testing of
condition codes is required if you know the length in advance. I can't
understand your passion for nul-terminated strings.
>
> RP
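Even without SIMD, the known-length point shows up in plain C: a counted copy
never inspects the data it moves, so the library is free to use wide blocks,
while a z-string copy must load and test every byte. A small sketch (the
function names are mine, for illustration only):

```c
#include <string.h>

/* Counted copy: length known in advance; the data itself is never
   tested, so the implementation may move wide blocks (SIMD, etc.). */
static void copy_counted(char *dst, const char *src, size_t n)
{
    memcpy(dst, src, n);
}

/* z-string copy: every byte must be loaded and compared against NUL
   before we know whether to continue. */
static void copy_z(char *dst, const char *src)
{
    while ((*dst++ = *src++) != '\0')
        ;
}
```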
> Date: Thu, 30 Jun 2011 08:15:12 GMT
> From: Stephen Pelc <steph...@mpeforth.com>
> Reply-To: steph...@INVALID.mpeforth.com
> Newsgroups: comp.lang.forth
> Subject: Re: The Lisp Curse
>
> On Wed, 29 Jun 2011 18:01:06 -0400, Sp...@ControlQ.com wrote:
>
>> I've found Stephen's to
>> be very helpful, if a little dated ...
>
> Please send me and email telling me what is dated. Then I can make
> changes for the next revision.
>
> Stephen
Stephen,
The manual stands on its own, and while it addresses the "historical"
forth model, only mentions in passing "modern forth". Frankly, I intended
the comment more as kudo than criticism. It has been a boon to the
community, and in a significant way, can be pointed to as a starting point
for the initiate, or for those coming back to the language. It works also
as clarification when certain words or techniques are difficult to
understand, particularly for the newbie.
Sorry to get your shorts knotted ...
Rob.
Keep the headers please! I can't figure out which post(s) you were replying
to. I wasn't sure it was me either...
> > Assembly code "dies". It's affixed to the specific processor in use. I
> > learned that decades ago.
>
> Well I learned decades ago that the same assembler code that worked in
> 1964 still works today. That's almost 50 years. How much longer should
> the code live before your concern becomes invalid?
>
If you're still operating an ancient computing machine, then yes, it could. If
a platform has lived that long, then yes, it could. What computing platform
has lived that long? x86 is only 3 decades old. I think it's the longest
lived platform, at least without changes to its instruction set. I'm not
aware of any computing platform pre-dating microprocessors that hasn't died.
Even IBM's platforms have died or been changed substantially over time. Have
they managed to keep their instruction set the same for 50 years? By
"died", I mean that they are no longer in production, not that all produced
machines are no longer functional. Once the platform has "died" the code
requires a complete rewrite for a new platform. Recompiling code for HLLs
on the other hand is frequently a nonevent. Yes, some code requires serious
rework. It depends.
Does my 6502 code still work? Yes, if I pull out and power up a collectible
6502 based machine assuming the machine didn't fail upon power-up. Are
6502s still used as the primary processor in a computing platform? Not as
far as I'm aware. Are 6502s still in production? I don't know. I know
that fast 6502 variants were produced until a decade or so ago for I/O
coprocessors. So, the 6502 microprocessor and its codebase are effectively
dead as a computing platform. If I want my 6502 source code to work on x86,
I have to rewrite, recode, or port it. Assembly code "dies". It's too
dependent on the platform.
> > HLL code survives.
>
> Really? What about the guy who posted recently about his
> production Modula 2
> code breaking and having no way to fix it?
Well, I don't recall seeing that. But, yes, some languages survive better
than others. Are you sure GNU doesn't have a GCC front-end for it? It
seems to have one for all the "trivial" HLLs, like Pascal.
> What about all the incompatbilities
> introduced by Intel and MS so that old code doesn't
> work anymore on those
> platforms? DOS code won't run on anything from
> NT and on...etc.
>
Well, I'm far from sure about all of the details of what Windows does for
DOS emulation. What I do understand is that under Windows, DOS is an
emulation. Apparently, the emulation is usually an application called NTVDM
that's basically DOS 5.0 with software traps/exceptions. It's some other
application for Win98/SE/ME. There are limitations with more recent
versions of Windows with their DOS emulation. I don't recall people saying
NT was one of them. I recall them saying XP and later has various
restrictions. From what I recall, people said 64-bit Win7 removed it
completely. It's likely it uses x86's v86 cpu mode, at least on 32-bit
Windows. For 64-bit x86, it's more complicated to get back to x86's RM to
enable v86 mode. So, that was probably why it was removed. But, you can
run other DOS emulators: DOSBOX, Bochs with FreeDOS image, probably MESS
w/AT image and DOS, etc. You can also run real-mode DOS on the bare
machine, e.g., BBS spec boot an external USB stick with DOS.
> Bottom line is more HLL code has been thrown away than assembler!
>
Does DOS still have the largest collection of software for a single
platform? I believe it still does... So, vice-versa...
> > The GCCMVS project (Paul Edwards) ported GCC to MVS. I don't know much
> > about MVS, but I assume it was written in assembly.
>
> Before you said assembly was bad because it doesn't go cross platform but
> here you're giving an example of the most highly ported product out there,
> probably, and it's written in assembly at least enough of it to be able to
> be self bootstrapping or they would have needed to use an IBM licensed
> compiler to compile gcc which I'm sure Stallman and his groupies would
> never tolerate since IBM is evil and charges money and does so much
> closed source code.
>
FYI. MVS is not copyrighted (apparently). The MVS OS is not ported; they
run it in an emulator. The GCC license (apparently) allows use with non-GPL
code. The GLIBC license (apparently) doesn't allow use with non-GPL code.
So, they use Paul Edwards's public domain C library: PDPCLIB.
> [GCCMVS] is basically a way to make a mainframe run C
> like on a PC, with some minor local modifications. If you are happy
> with printf etc. it might be enough for you.
I've not used it. I figured someday I could try it in order to test if my C
code worked with EBCDIC, or if I made some non-portable assumptions.
This would be a better test if it were a good non-GCC compiler.
> > DOS is also written in assembly. DJGPP (GCC port) for DOS does just what
> > you claim cannot be done.
>
> What does DOS do that I claim cannot be done? I have no idea what you are
> talking about.
>
I meant the claim of not being able to write system code on an OS coded in
assembly; admittedly, DOS is not run on a mainframe... System code isn't
written in C, but could be, for DOS anyway. My personal (stalled,
in-progress) OS for x86 is in C except for specialized x86 instructions.
> > As for C, there is no need for them whatsoever.
>
> http://www.kernel.org/pub/linux/docs/lkml/#s15-5
>
That's total BS. I can understand them not wanting to restructure the code
properly. I can also understand them claiming that use of GOTOs ensures the
OS is speedy, without them having to test their logic. That's probably part
of the real reason. The real reason being: control and/or speed. They
don't have to determine if some logic will exit properly. They know it'll
exit, correctly or not, if it reaches a GOTO.
> What you said is not true and goes against the original K&R where they
> explained when goto is appropriate in C.
Huh? Sorry, I never read K&R. I knew a bit of C before I bought mid-level
and advanced books about it many years ago. I recall the K&R book "seemed
thin" on content. I started basically with H&S "C: A Reference Manual",
3rd. I do have a .pdf, that may be of that book. Let me go take a look to
see what you're talking about. It might be a more recent version
than the original book:
"3.8 Goto and labels
C provides the infinitely-abusable goto statement, and labels to branch to.
Formally, the goto statement is never necessary, and in practice it is
almost always easy to write code without it. We have not used goto in this
book.
Nevertheless, there are a few situations where gotos may find a place.
[snip examples]
Code involving a goto can always be written without one, though perhaps at
the price of some repeated tests or an extra variable.
[snip example]
With a few exceptions like those cited here, code that relies on goto
statements is generally harder to understand and to maintain than code
without gotos. Although we are not dogmatic about the matter, it does seem
that goto statements should be used rarely, if at all.
"
Like I said, there's no point in using gotos with C...
They've given some examples:
1) used to exit nested code
This is unnecessary: restructure the code, or use status flags, or set up a
fall-through, or use different flow control.
2) use of labels
Labels are unnecessary in C code. They are there for ported or
program-generated code.
3) to avoid use of a status flag
...
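For concreteness, here is the classic nested-loop case side by side: the goto
form K&R describe, and the restructured status-flag form suggested above. The
function names and the 3x3 search are my own illustration, not from any of the
sources being discussed:

```c
#include <stdbool.h>

/* goto version: break out of two loops at once. */
static int find_goto(int m[3][3], int want, int *r, int *c)
{
    int i, j;
    for (i = 0; i < 3; i++)
        for (j = 0; j < 3; j++)
            if (m[i][j] == want)
                goto found;
    return 0;
found:
    *r = i; *c = j;
    return 1;
}

/* Restructured version: a status flag replaces the goto, at the
   price of an extra variable and an extra test per iteration. */
static int find_flag(int m[3][3], int want, int *r, int *c)
{
    bool found = false;
    for (int i = 0; i < 3 && !found; i++)
        for (int j = 0; j < 3 && !found; j++)
            if (m[i][j] == want) {
                *r = i; *c = j;
                found = true;
            }
    return found;
}
```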
> Now I realize goto's are almost
> never appropriate in C but you don't realize
> they are almost always
> appropriate in FORTRAN and COBOL.
I never programmed COBOL. I hated FORTRAN. Two decades after the fact I'm
supposed to remember FORTRAN had GOTOs? Unfortunately, I don't recall there
being any GOTOs in FORTRAN...
> > Ada is a US Gov't requirement for some projects.
>
> Not for more than a decade. And what does that have to do with what we
> were discussing?
>
You didn't seem to know why a market for Ada programmers still exists, i.e.,
"... so there has to be some market ..." Maintaining Gov't software is one
likely reason, e.g., 100-year code lifetime...
> yeah but then you still have to deal with Intel's abominable
> "architecture".
>
So? It's got what you said you desired. Now, you're adding *more*
constraints as to what you'll accept??? It seems you're rationalizing more
reasons to not attempt to use it, ever.
> The reason MVS is so nice for assembler developers is the OS is written in
> assembler and all the system interface is in assembler.
There are many OS projects for x86 written in assembler. Usually, the
assembler-based OSes for x86 don't seem to become as complete or as
professional as OSes coded in C. They get the basics down and then stall.
That could be due to complexity, or to a lack of (skilled)
x86 assembly programmers. I can only speculate. E.g., here is one example
of an OS in C that was written from scratch:
http://visopsys.org/
> We don't have to
> figure out idiotic C calling conventions designed for compiler
> back ends or look at header files in a foreign language,
> everything is designed around
> supporting the assembler programmer.
Well then, try an assembly based OS for x86.
> The architecture is elegant, powerful
> and efficient.
x86 can be set up in quite a variety of ways depending on what the OS
designer needs. It can be fully open and clean, or it can be complicated,
or it can use full security features, or it can use a wild mix of features.
> No horseshit libraries needed to do things like storage
> management (can you spell libc boys and girls?),
You can do that without libc. Novices just don't understand how. And, it's
not as flexible or guaranteed to be portable. Yes, there are some trivial
language design mistakes in regards to memory allocation without libc, IMO.
> Maybe but PL/I application code doesn't crash the system or cause
> integrity exposures and C code does. Every day of the week.
>
You mean IBM PL/I on an IBM computing system... Now, it just happened that the
non-IBM PL/1 I programmed was for a fault tolerant system, so I don't know
if the language itself had problems. I've not tried the reduced PL/I
variants for an x86 PC.
Rod Pemberton
A fun exercise for anyone who believes that 8-bit systems are
obsolete: run through the electronic systems in one's home and
determine how many, by percentage, are 8-bit (or, as you point out,
less). Not sure if something is 8-bit? Open it up and see. When
I've done this with people who made the same claim Hugh has, they are
often surprised at the sheer number of 8-bit systems. And quite
often, they miss quite a few because they don't consider the
subcomponents in larger systems like desktop computers, monitors and
televisions, cars, and so on.
8-bit systems are quite alive and well, and will be for many years to
come. The reason is simple: often they are just the right size for
the job at hand, and can do it at lower system cost.
Ah, the young of today. No sense of history ;-)
> I'm not
> aware of any computing platform pre-dating microprocessors that hasn't died.
> Even IBMs platforms have died or been changed substantially over time. Have
> they managed to keep their instruction set the same for 50 years?
Yes. IBM's most modern machines (z Series) will still run code written
in 1964.
[rest snipped]
Those who cannot remember the past are condemned to repeat it.
System Z is the upward compatible system that exists today based on the
System 360 from 1964.
> x86 is only 3 decades old. I think it's the longest lived platform, at
> least without changes to it's instruction set.
Not even close. Object code from 1964 will run on today's IBM machines. Much
source from those days will still compile and most of it will assemble. If
it doesn't, you can almost always use an older version of a compiler or
assembler and get anything written since then to run.
> I'm not aware of any computing platform pre-dating microprocessors that
> hasn't died.
Now you are. It's the most widely used commercial data processing platform
in the world, virtually all the Fortune 1000 companies have one or more of
them running their enterprise workload. Until the mid 1980s it was *the*
computer system the world ran on. It still is, but now it has company.
> Even IBMs platforms have died or been changed substantially over time. Have
> they managed to keep their instruction set the same for 50 years? By
> "died", I mean that they are no longer in production, not that all produced
> machines are no longer functional. Once the platform has "died" the code
> requires a complete rewrite for a new platform. Recompiling code for HLLs
> on the other hand is frequently a nonevent. Yes, some code requires serious
> rework. It depends.
Yes, they have kept the same instruction set for 50 years, of course it is
now much bigger and has many more features, but none of the old design was
removed. For example it still has 24 bit addressing, but it also has 64 bit
(and 31 bit) addressing. It now has extra floating point registers for IEEE
in addition to the original IBM floating point regs. All the instructions
defined in the 1960's era manuals still work as stated.
> Does my 6502 code still work? Yes, if I pull out and power up a collectible
> 6502 based machine assuming the machine didn't fail upon power-up. Are
> 6502s still used as the primary processor in a computing platform? Not as
> far as I'm aware. Are 6502s still in production? I don't know. I know
> that fast 6502 variants were produced until a decade or so ago for I/O
> coprocessors. So, the 6502 microprocessor and it's codebase is effectively
> dead as a computing platform. If I want my 6502 source code to work on x86,
> I have to rewrite, recode, or port it. Assembly code "dies". It's too
> dependent on the platform.
Again, not in all cases! One of the most important cases from a commercial
and technology viewpoint is IBM System Z. It is still being developed and
sold, hardware is being updated and sold. It's IBM's flagship and it's
amazing how little people who don't work on it know about it, considering
how important it is.
> Well, I don't recall seeing that. But, yes, some languages survive better
> than others. Are you sure GNU doesn't have a GCC front-end for it? It
> seems to have one for all the "trivial" HLLs, like Pascal.
Hehe afaik, gcc was originally a Pascal compiler. Anyway there are several
Modula-2 compilers around, just not one that can save the guy whose
production code, built with a specific compiler that is no longer
maintained, now needs fixing. That's just one case. My point is: think of how many
compilers and languages have come and gone, there is no guarantee any HLL
code will live past the time the company who wrote the compiler goes out of
business.
> Does DOS still have the largest collection of software for a single
I don't know, probably is close to Linux for the amount of freely available
apps.
> FYI. MVS is not copyrighted (apparently). The MVS OS is not ported, they
> run it an emulator. GCC license (apparently) allows use with non-GPL code.
> GLIBC license (apparently) doesn't allow use with non-GPL code. So, they
> use Paul Edwards's Public Domain C library: PDPCLIB.
Ok, let's step back. MVS is licensed and copyrighted and covered under
thousands if not tens of thousands of patents. The 1974 version that was
released by IBM to the public domain was pre copyright (I heard US copyright
law changed in 1980) and the specific one we are talking about in reference
to what Paul is doing with gccmvs, MVS 3.8, is the only publicly available
version. MVS does not use GPL code or gcc or anything to do with PDPCLIB. It
was and is a pure IBM product and has IBM and third party compilers
available for it. FYI, MVS is the generic name for all of the OS on the
mainframe from 1964 to the present day. They call UNIX version this and UNIX
version that, whereas IBM has changed the name of the OS to match the
hardware level. But it's an evolution of the same OS which is why we still
call it MVS (which was actually not the first version but I digress).
Paul's work came much later and has nothing to do with IBM or MVS. He wants
to take that publicly available 1974 version which uses 24 bit addressing
and bring it up to the early 1990's when they had 31 bit addressing. I don't
know why he chose to use C, but I do know there was no C compiler for IBM
for the first decades, I don't know when it started because as I have been
saying, C has never been a significant language on that platform. In the
first 20 or 30 years, there was no C at all on mainframes. At some point IBM
came out with their own compiler but again it has a full set of libraries to
make application programming possible. pdpclib does not come close to
providing that useful level of functionality. And all the IBM stuff is
copyrighted, patented, licensed, and expensive. I can't make this point
strongly enough, MVS doesn't need gcc, doesn't include gcc, and doesn't ship
with any gpl code, and it never will. MVS has nothing to do with gcc.
> I've not used it. I figured someday I could try it in order to test if my C
> code worked with EBCDIC, or if I made some non-portable assumptions.
> This would be a better test if it were a good non-GCC compiler.
I have access to an IBM compiler, so if you need any code compiled post it
somewhere and I'll run it for you. It's about 15 years old but that's still
pretty new in IBM years.
> I meant the claim of not being able to write system code on an OS coded in
> assembly, admittedly DOS is not run on a mainframe... System code isn't
> written in C, but could be, for DOS anyway. My personal (stalled,
> in-progress) OS for x86 is in C except for specialized x86 instructions.
I did not mean that at all. What I said is you cannot write system code in
any language but assembler on the mainframe. That's just the way it
is. That's not a theoretical argument and I only said it concerning the
platform I know about. I did not think it applied to other platforms but I
do know it's easier to write system code in C on UNIX because the OS is
written in C with C interface and needs libc to do basic stuff. It's not an
exact comparison because you can write system code in other languages
(assembly) on *NIX but it's harder because you have to create mappings. In
the IBM environment it is simply impossible, not *only* because of the
mappings and interface, but because of other issues that simply don't exist
on other architectures. No other language available on IBM systems can
support the requirements.
Since Andy is trying to cut in with his new-found wikipedia knowledge, I am
including PL/X variants when I say "assembler" because that's what they
are. They aren't available outside of IBM (PL/X was, and I used it, but it
is no longer available) so all system code written by vendors is written in
assembler proper. It always has been.
> 1) used to exit nested code
> This is unnecessary: restructure the code, or use status flags, or set up a
> fall-through, or use different flow control.
That's intellectual dishonesty. At the end of the day it's still a goto. If
you can tolerate extra branches in your loop structures or have to define
more switches to test just so you can say you avoided a goto, what have you
really accomplished?
> You didn't seem to know why a market for Ada programmers still exists, i.e.,
> "... so there has to be some market ..." Maintaining Gov't software is one
> likely reason, .e.g., 100 year code lifetime...
Oh, no, I understand why the market exists. I'm not sure why you thought I
meant that.
> > The reason MVS is so nice for assembler developers is the OS is written in
> > assembler and all the system interface is in assembler.
>
> There are many OS projects for x86 written in assembler. Usually, the
> assembler based OS' for x86 don't seem to become as complete or as
> professional as OS' coded in C. They get the basics down and then stall.
> That could be due to complexity or that could be due to a lack of (skilled)
> x86 assembly programmers. I can only speculate. E.g., here is one example
> of an OS in C that was written from scratch:
> http://visopsys.org/
Ok but it doesn't matter. I'm talking about the most widely used,
longest-lived influential OS of all time, it's not a toy OS, it's in
production in tens of thousands of companies world wide. It's more
professional than any UNIX or LINUX or Windows will ever live to be. It has
complete, professional documentation sets, real error messages, real
recovery, real resource management. It's the highest quality OS and
programming environment you will never see. But you can see the doc if you
want, it's online.
> You can do that without libc. Novices just don't understand how. And, it's
> not as flexible or guaranteed to be portable. Yes, there are some trivial
> language design mistakes in regards to memory allocation without libc,
> IMO.
UNIX delegated application memory management to libc. It's a chickenshit
"solution". How can you do malloc and free without libc? AFAIK you
can't. You can set the program break, but it doesn't free memory after you
set it repeatedly, at least according to what I read. The more I learn about
UNIX and LINUX the more I realize I was right to avoid learning them all
these years, they're true crap.
If I am wrong, please tell me how I can dynamically allocate and free
storage for structures in assembly, or even discrete variables. I heard you
have to write your own memory manager in UNIX/LINUX if you don't use libc.
Didn't understand your comment about the posting headers. All mine should
have a References header to make threading work (it does for me) but other
headers are stripped by the mix system; nothing we can do about that.
Roughly the same span of time as the Filet-O-Fish....
> > x86 is only 3 decades old. I think it's the longest lived platform, at
> > least without changes to it's instruction set.
>
> Not even close. Object code from 1964 will run on today's IBM machines. Much
> source from those days will still compile and most of it will assemble. If
> it doesn't, you can almost always use an older version of a compiler or
> assembler and get anything written since then to run.
>
And, oddly enough, critics of the WinTel architecture usually cite
backwards compatibility as a harsh criticism....
I'll restate one of your statements:
"It's IBM's flagship and it's amazing how little
people who don't work on it know about it, considering how important
it is."
snip dialogue about GCC,MVS, etc....
> > I meant the claim of not being able to write system code on an OS coded in
> > assembly, admittedly DOS is not run on a mainframe... System code isn't
> > written in C, but could be, for DOS anyway. My personal (stalled,
> > in-progress) OS for x86 is in C except for specialized x86 instructions.
>
> I did not mean that at all. What I said is you cannot write system code in
> any language but assembler on the mainframe. That's just the way it
> is. That's not a theoretical argument and I only said it concerning the
> platform I know about. I did not think it applied to other platforms but I
> do know it's easier to write system code in C on UNIX because the OS is
> written in C with C interface and needs libc to do basic stuff. It's not an
> exact comparison because you can write system code in other languages
> (assembly) on NIX but it's harder because you have to create mappings. In
Not on Linux. Kernel-level system calls use a register-based argument
system. Programming in assembler on Linux is a breeze, for me.
I'm guessing you're thinking of BSD and derivatives, which almost
require the use of C-isms and stack frames and such.
> the IBM environment it is simply impossible, not *only* because of the
> mappings and interface, but because of other issues that simply don't exist
> on other architectures. No other language available on IBM systems can
> support the requirements.
>
This is a deliberate architectural design decision. IMO, one made
to enforce vendor lock-in, requiring hefty licenses to make or sell
third-party products, and keep the end-user dependent upon IBM.
Profitable? Yes. Important? Perhaps to IBM's shareholders, and
the shareholders of the companies that depend on IBM to run
their 'enterprise'.
> Since Andy is trying to cut in with his new-found wikipedia knowledge, I am
> including PL/X variants when I say "assembler" because that's what they
> are. They aren't available outside of IBM (PL/X was, and I used it, but it
> is no longer available) so all system code written by vendors is written in
> assembler proper. It always has been.
>
> > 1) used to exit nested code
> > This is unnecessary: restructure the code, or use status flags, or set up a
> > fall-through, or use different flow control.
>
> That's intellectual dishonesty. At the end of the day it's still a goto. If
> you can tolerate extra branches in your loop structures or have to define
> more switches to test just so you can say you avoided a goto, what have you
> really accomplished?
>
You omit code restructuring, which in most cases would obviate the
need for goto; that's a shade of dishonesty, eh?
> > You didn't seem to know why a market for Ada programmers still exists, i.e.,
> > "... so there has to be some market ..." Maintaining Gov't software is one
> > likely reason, .e.g., 100 year code lifetime...
>
> Oh, no, I understand why the market exists. I'm not sure why you thought I
> mean that.
>
> > > The reason MVS is so nice for assembler developers is the OS is written in
> > > assembler and all the system interface is in assembler.
>
I hope you're aware that some detractors of C consider it a glorified
assembler...
> > There are many OS projects for x86 written in assembler. Usually, the
> > assembler based OS' for x86 don't seem to become as complete or as
> > professional as OS' coded in C. They get the basics down and then stall.
> > That could be due to complexity or that could be due to a lack of (skilled)
> > x86 assembly programmers. I can only speculate. E.g., here is one example
> > of an OS in C that was written from scratch:
> >http://visopsys.org/
>
> Ok but it doesn't matter. I'm talking about the most widely used,
> longest-lived influential OS of all time, it's not a toy OS, it's in
> production in tens of thousands of companies world wide. It's more
> professional than any UNIX or LINUX or Windows will ever live to be. It has
> complete, professional documentation sets, real error messages, real
> recovery, real resource management. It's the highest quality OS and
> programming environment you will never see. But you can see the doc if you
> want, it's online.
>
That's a boatload of opinion you have there. Is there any scientific
work being done with MVS/zSystem? When DARPA farmed out development of
ARPANet, where were zSys/MVS? And, of course, what about one of
Forth's earliest applications, Radio Telescope Astronomy? What did
Lorenz discover his strange attractor on? Can zSys/MVS sequence
the human, or, for that matter, _any_ genome?
I opine that you have a tremendous amount of worship for what
is essentially an overgrown Data Processing System, a kind of
Incredible Hulk of a spreadsheet/database/timesharing system.
Quarterly reports of sales of the McFlurry do not interest me.
Please, enlighten me with something tremendous- control
systems at CERN, digital imaging of the surface of Mars,
adaptive learning networks....anything of real value to humanity?
> > You can do that without libc. Novices just don't understand how. And, it's
> > not as flexible or guaranteed to be portable. Yes, there are some trivial
> > language design mistakes in regards to memory allocation without libc,
> > IMO.
>
> UNIX delegated application memory management to libc. It's a chickenshit
> "solution". How can you do malloc and free without libc? AFAIK you
> can't. You can set the break point but it doesn't free memory after you set
> it repeatedly, at least according to what I read. The more I learn about
> UNIX and LINUX the more I realize I was right to avoid learning them all
> these years, they're true crap.
>
Subjective. Though your pet may be able to track the sales of smart
phones, a lot of them are running Android, which is Linux with some
chrome....
> If I am wrong, please tell me how I can dynamically allocate and free
> storage for structures in assembly, or even discrete variables. I heard you
> have to write your own memory manager in UNIX/LINUX if you don't use libc.
>
Yes. A trivial programming exercise. So, tell yourself how you
dynamically allocate and free storage structures in assembly: you write
your own, using the brk/sbrk system call, easily done in Linux; other
*nixes, YMMV
<snipped stuff about headers>
TTFN,
Tarkin
Lots of OSes were written in various system-specific PL/ languages: I
remember PLZ/SYS and PL/M. Are you really going to claim that they
are all, in fact, "assembly language" or that IBM's PL/S was
lower-level than those? I don't know, I never used it. The Wikipedia
page says it was based on PL/1.
Andrew.
Evidence please.
Spreadsheets? Do you even know what you're talking about?
>
> Quaterly reports of sales of the McFlurry do not interest me.
> Please, enlighten me with something tremendous- control
> systems at CERN, digital imaging of the surface of Mars,
> adaptive learning networks....anything of real value to humanity?
Your bank account. Boring, yes, but of real value if you have one. Or
work on nuclear power simulations. Boring, yes, but of real value if
you happen to own one. Or large scale weather simulations. Is there
weather where you are? One of the real strengths of an IBM mainframe
system is the amount of data it can move around. Cray systems were
often fed IO by IBM mainframes; they were the only processors that
could keep up.
>
> > > You can do that without libc. Novices just don't understand how. And, it's
> > > not as flexible or guaranteed to be portable. Yes, there are some trivial
> > > language design mistakes in regards to memory allocation without libc,
> > > IMO.
>
> > UNIX delegated application memory management to libc. It's a chickenshit
> > "solution". How can you do malloc and free without libc? AFAIK you
> > can't. You can set the break point but it doesn't free memory after you set
> > it repeatedly, at least according to what I read. The more I learn about
> > UNIX and LINUX the more I realize I was right to avoid learning them all
> > these years, they're true crap.
>
> Subjective. Though your pet may be able to track the sales of smart
> phones,
> a lot of them are running Android, which is Linux with some chrome....
>
> > If I am wrong, please tell me how I can dynamically allocate and free
> > storage for structures in assembly, or even discrete variables. I heard you
> > have to write your own memory manager in UNIX/LINUX if you don't use libc.
>
> Yes. A trivial programming exercise. So, tell yourself how you
> dynamically
> allocate and free storage structures in assembly: you write your own,
> using the s/brk system call, easily done in Linux; other *nices, YMMV
>
> <snipped stuff about headers>
>
> TTFN,
> Tarkin
That's a whole boatload of opinion there.
PL/S was pretty low level. It was to PL/I what C is to Algol.
> At the end of the day it's still a goto.
Exactly, usually... With some exceptions, C compilers are very good at
optimization. I.e., if a GOTO worked there originally, the optimized
non-GOTO code will usually reduce to similar code with a jump. So, what's
the reason for using a GOTO?
> If you can tolerate extra branches in your loop structures or have to
> define more switches to test just so you can say you avoided a goto,
> what have you really accomplished?
>
You've created code that can be modified much more easily. Once a GOTO is
used in C code, it becomes more difficult to reorganize the code. You must
eliminate the use of the GOTO before reorganizing the code. In one case I
came across, there were three GOTOs in the same procedure. You can't
reorganize, modify, or change the existing control flow, because you're
effectively being blocked by the GOTO(s).
> Ok but it doesn't matter. I'm talking about the most widely used,
> longest-lived influential OS of all time, it's not a toy OS, it's in
> production in tens of thousands of companies world wide. It's more
> professional than any UNIX or LINUX or Windows will ever live to be.
> It has complete, professional documentation sets, real error messages,
> real recovery, real resource management. It's the highest quality OS
> and programming environment you will never see. But you can see the
> doc if you want, it's online.
>
You need to drop the "most widely used" and "influential"... That comes off
as an extreme exaggeration.
To be the "most widely used" it must've outsold x86 PCs.
To be the "most widely used" it must've outsold the billions of ARM-based
portable devices.
To be the most "influential" it must displace
<pick-a-world-changing-OS-from-history>, e.g., Macintosh, Wintel, C64,
etc.
> How can you do malloc and free without libc?
> AFAIK you can't.
Oh, sure you can... Memory is allocated and freed behind the scenes in C
in a number of ways. 1) You can allocate large file-scope arrays and apply
your own memory allocation. This can be far simpler than malloc()/free().
You just set a pointer to a typedef for a struct, union, etc., to free space
in the array. Or, you can use one of the three or so publicly available
memory allocators applied to the array. 2) You can also declare
procedure-local variables, which allocates space from the stack (like the
non-standard alloca()). Returning from a procedure call frees the allocated
stack space. You can call C functions recursively, which allocates from the
stack repeatedly (like multiple alloca() calls, freed on return). 3) The
file I/O functions in C essentially create a hidden memory allocator behind
the scenes. Most C file I/O functions allocate space on a storage device,
not in memory. However, a file created by tmpfile() will be created in
memory in most C implementations. So, file I/O can also be used effectively
as a memory allocator, i.e., appending to the end of a file effectively
allocates memory. If tmpfile() doesn't create in memory, it'll create on
disk, and with enough buffering it's effectively the same as using memory.
> You can set the break point but it doesn't free memory after you set
> it repeatedly, at least according to what I read. The more I learn about
> UNIX and LINUX the more I realize I was right to avoid learning them all
> these years, they're true crap.
>
I don't like their CLIs, but I do like their C compiler when combined with
Posix file I/O. The everything is a file concept works well for me.
> If I am wrong, please tell me how I can dynamically allocate and free
> storage for structures in assembly,
In assembly or for assembly structures? I think you meant the latter.
You're not asking me to write assembly for C are you? ...
> how I can dynamically allocate and free
> storage or even discrete variables.
The easiest is #1 above. C allows typedefs of objects. A typedef creates an
addressing structure without allocating space for the object. If you
declare a pointer to the typedef of some object, i.e., a struct or union, you
can then set the pointer to a previously allocated memory region to provide
space. This is commonly done with memory allocated via malloc(). However,
it can be done with any memory that has been allocated. In C you can declare
arrays, i.e., allocate "empty" storage space that can hold a bunch of
objects.
> I heard you
> have to write your own memory manager in
> UNIX/LINUX if you don't use libc.
You could write your own. When you have many objects of a fixed size and
use typedefs, it can be very simple, i.e., not much more than a linked list
of typedef'd objects stored in an array. If you don't want to, you can use
one of these:
John Walker's bget
http://www.fourmilab.ch/bget/
Doug Lea's dlmalloc
http://gee.cs.oswego.edu/dl/html/malloc.html
Dynamic Storage Allocator, Richard Harter, comp.lang.c, Nov. 11, 1990.
http://groups.google.com/group/comp.lang.c/msg/7da27dcbc6e2ace1
Apparently, FreeBSD has a nice memory allocator also. Or, #1 above is
easiest. #3 is easy also.
> but other headers are stripped by the mix system, nothing we can do about
> that.
Sorry to hear that... The carets (>) in the header, and used as indentation
(one more caret per level) in the post, indicate who said what in the
replied-to posts. Since the method you're using to post strips them, I
can't tell who said what from the post. I must go back to the indentation
of the messages to figure out which caret level was which person.
> [Usenet] headers
On the message I replied to, there was no header. With my reply, it adds
your header (above and copied here):
"Fritz Wuehler" <fr...@spamexpire-201107.rodent.frell.theremailer.net> wrote
in message
news:fbea4798d922ac75...@msgid.frell.theremailer.net...
Since you replied to me, my header should've been there, indented once:
> Rod Pemberton" <do_no...@noavailemail.cmm> wrote in message
news:iur1a7$gad$1...@speranza.aioe.org...
Since some of your comments from the earlier message were included, your
prior header should've been there indented twice:
> > "Fritz Wuehler" <fr...@spamexpire-201107.rodent.frell.theremailer.net>
wrote in message
news:2746c0939d1a238e...@msgid.frell.theremailer.net...
So, it should've looked something like this at the top of the post:
"Fritz Wuehler" <fr...@spamexpire-201107.rodent.frell.theremailer.net> wrote
in message
news:fbea4798d922ac75...@msgid.frell.theremailer.net...
> Rod Pemberton" <do_no...@noavailemail.cmm> wrote in message
news:iur1a7$gad$1...@speranza.aioe.org...
> > "Fritz Wuehler" <fr...@spamexpire-201107.rodent.frell.theremailer.net>
wrote in message
news:2746c0939d1a238e...@msgid.frell.theremailer.net...
See, you can see it was "you"-me-"you"... But, let's say someone else was
in between; without those headers, it becomes more confusing as to who said
what.
Rod Pemberton
Evidence to the contrary, please.
I mismatched metaphors there, perhaps.
An overgrown spreadsheet.
I am curious as to why you decline to opine on any appearances of
MVS / zSys at places / times of notable innovation and
paradigm-shifting discovery.
> > Quaterly reports of sales of the McFlurry do not interest me.
> > Please, enlighten me with something tremendous- control
> > systems at CERN, digital imaging of the surface of Mars,
> > adaptive learning networks....anything of real value to humanity?
>
> Your bank account. Boring, yes, but of real value if you have one. Or
> work on nuclear power simulations. Boring, yes, but of real value if
> you happen to own one. Or large scale weather simulations. Is there
> weather where you are? One of the real strengths of an IBM mainframe
> system is the amount of data it can move around. Cray systems were
> often fed IO by IBM mainframes; they were the only processors that
> could keep up.
Towards the end there, you expose what I consider a mainframe's real
strength: I/O. So, it seems there is something that we can agree
upon.
Nuclear simulations? A quick search turns up US D.o.E. / ANL, so,
I'll eat crow on that one.
But let's look at performance:
( http://www.ne.anl.gov/codes/mc2-2/ )
"A 1740-group consistent P1 homogeneous twelve-isotope problem with 27
broad groups requires about 6.5 minutes of CPU time on an IBM 370/195.
The same problem requires approximately 30% less time on the CDC 7600
and approximately 50% less time on the RS6000 and the SS20 SUN
systems."
Hehe.
You want to model atomic particles and nuclear forces? Ok, use
an IBM solution. You want to manipulate those forces...check
out what CERN is using....
Yes, there is weather where I am. Any source for statistics regarding
accurate predictions? I'm prepared to be pleasantly surprised, but my gut
tells me they wouldn't be much better than the Farmer's Almanac.
> > > > You can do that without libc. Novices just don't understand how. And, it's
> > > > not as flexible or guaranteed to be portable. Yes, there are some trivial
> > > > language design mistakes in regards to memory allocation without libc,
> > > > IMO.
>
> > > UNIX delegated application memory management to libc. It's a chickenshit
> > > "solution". How can you do malloc and free without libc? AFAIK you
> > > can't. You can set the break point but it doesn't free memory after you set
> > > it repeatedly, at least according to what I read. The more I learn about
> > > UNIX and LINUX the more I realize I was right to avoid learning them all
> > > these years, they're true crap.
>
> > Subjective. Though your pet may be able to track the sales of smart
> > phones,
> > a lot of them are running Android, which is Linux with some chrome....
>
> > > If I am wrong, please tell me how I can dynamically allocate and free
> > > storage for structures in assembly, or even discrete variables. I heard you
> > > have to write your own memory manager in UNIX/LINUX if you don't use libc.
>
> > Yes. A trivial programming exercise. So, tell yourself how you
> > dynamically
> > allocate and free storage structures in assembly: you write your own,
> > using the s/brk system call, easily done in Linux; other *nices, YMMV
>
> > <snipped stuff about headers>
>
> > TTFN,
> > Tarkin
>
> That's a whole boatload of opinion there.
It's my experience one tends to get what one gives.
Don't be too proud of this technological terror that IBM
has constructed; the power to issue my bank statement
is insignificant next to the power of smashing hadrons
together.
http://accelconf.web.cern.ch/accelconf/ica05/proceedings/pdf/I1_001.pdf
TTFN,
Tarkin
They all were, from what I remember, although PLZ is the only one I
actually used. Nothing like assembly language, though.
Andrew.
[some snipped]
>
> > > > the IBM environment it is simply impossible, not *only* because of the
> > > > mappings and interface, but because of other issues that simply don't exist
> > > > on other architectures. No other language available on IBM systems can
> > > > support the requirements.
>
> > > This is a deliberate architectural design decision.
>
> > Evidence please.
>
> Evidence to the contrary, please.
>
No, you made the statement. Support it with evidence.
[more snippage]
>
> > > That's a boatload of opinion you have there. Is there any scientific
> > > work being
> > > done with MVS/zSystem? When DARPA farmed out development of
> > > ARPANet, where were zSys/MVS? And, of course, what about one of
> > > Forth's earliest applications, Radio Telescope Astronomy? What did
> > > Lorenz discover his strange attractor on? Can zSys/MVS sequence
> > > the human, or, for that matter, _any_ genome?
>
> > > I opine that you have a tremendous amount of worship for what
> > > is essentially an overgrown Data Processing System, a kind of
> > > Incredible Hulk of a spreadsheet/database/timesharing system.
>
> > Spreadsheets? Do you even know what you're talking about?
>
> I mismatched metaphors there, perhaps.
> An overgrown spreadsheet.
> I am curious as to why you decline to opine on any appearances of
> MVS / zSys at places / times of notable innovation and
> paradigm-shifting discovery.
I see you're impressed by whizz-bangs. The 360 was the most
significant computing architecture ever introduced. In no particular
order:
. The 8-bit byte, with byte-addressable memory and 32-bit words
. First commercial microcoded CPU
. Floating point; the IEEE 754-1985 floating-point standard took 20
years to arrive
. Paging, virtual memory, segmentation, instruction pipelining, memory
protection, unburstable security...
A z series can stay on its feet for years, with no downtime while
major components are replaced or upgraded. It was VM ready from day 1;
something most computers couldn't claim. The Intel line of x86 chips
couldn't do VMs until 1985, and only then very badly. Solaris couldn't
do VMs until 2004.
The IBM 360/370/390/z series has been at the heart of computing for five
decades, and ahead of the pack for most of that time.
>
> > > Quaterly reports of sales of the McFlurry do not interest me.
> > > Please, enlighten me with something tremendous- control
> > > systems at CERN, digital imaging of the surface of Mars,
> > > adaptive learning networks....anything of real value to humanity?
>
> > Your bank account. Boring, yes, but of real value if you have one. Or
> > work on nuclear power simulations. Boring, yes, but of real value if
> > you happen to own one. Or large scale weather simulations. Is there
> > weather where you are? One of the real strengths of an IBM mainframe
> > system is the amount of data it can move around. Cray systems were
> > often fed IO by IBM mainframes; they were the only processors that
> > could keep up.
>
> Towards the end there, you expose what I consider what a mainframe's
> strength is: I/O. So, it seems there is something that we can agree
> upon.
> Nuclear simulations? A quick search turns up US D.o.E. / ANL, so,
> I'll eat crow on that one.
>
> But let's look at performance:
> (http://www.ne.anl.gov/codes/mc2-2/)
> "A 1740-group consistent P1 homogeneous twelve-isotope problem with 27
> broad groups requires about 6.5 minutes of CPU time on an IBM 370/195.
> The same problem requires approximately 30% less time on the CDC 7600
> and approximately 50% less time on the RS6000 and the SS20 SUN
> systems."
>
> Hehe.
370/195? CDC 7600? That's ancient history. The 1970s; before you were
even a twinkle in your daddy's eye.
>
> You want to model atomic particles and nuclear forces? Ok, use
> an IBM solution. You want to manipulate those forces...check
> out what CERN is using....
>
> Yes, there is weather where I am. Any source for statistics regarding
> accurate predictions? I'm prepared to be pleasantly surprised, but my gut
> tells me they wouldn't be much better than the Farmer's Almanac.
Now that's remarkably silly, since I suspect what you know about
weather modelling consists of deciding to wear a coat when it rains.
[more snipped]
>
> > That's a whole boatload of opinion there.
>
> It's my experience one tends to get what one gives.
> Don't be too proud of this technological terror that IBM
> has constructed; the power to issue my bank statement
> is insignificant next to the power of smashing hadrons
> together. http://accelconf.web.cern.ch/accelconf/ica05/proceedings/pdf/I1_001.pdf
Oh, get real. Computers don't smash atoms together.
>
> TTFN,
> Tarkin
Don't ask me, I don't write C code. I'm just disputing the universal wisdom
that goto's have no place in code when people who never coded anything but C
are saying it, and showing a project that is widely regarded as good (not by
me though) uses them and K&R says there is a place for them even in C.
> You need to drop the "most widely used" and "influential"... That comes off
> as an extreme exaggeration.
Extreme exaggeration? Not just exaggeration but "extreme exaggeration"? If I
am going to exaggerate I like to do it with extremism. Go all the way.
But it's not exaggeration, on this topic that would be almost impossible to
do. Every company who ran a computer in the 1960s and 1970s used IBM
mainframes, and every major company and government still uses them. They're
king, baby!
> To be the "most widely used" it must've outsold x86 PCs.
It was the most widely used computer in business for decades, before there
was such a thing as Intel, and IBM is still the world's biggest computer
company; they make more money than Microsoft, and their software quality is
so much better that Microsoft can't even see them from where they are.
> To be the "most widely used" it must've outsold the billions of ARM-based
> portable devices.
It outsold everything before there was such a thing as ARM, and it's still
selling and running major corporations and countries today. Nobody cares if
one or a dozen Intel boxes crash. If a mainframe crashes, it's millions of
dollars a minute. So they stay up.
> To be the most "influential" must deny the
> <pick-a-world-changing-OS-from-history> OS, e.g., Macintosh, Wintel, C64,
> etc.
That all came later, and was influenced by what IBM did. You have everything
upside down. The world didn't start with the 8080. Learn some history,
bucko! It's good for you!
>
> > How can you do malloc and free without libc?
> > AFAIK you can't.
>
> Oh, sure you can... Memory is allocated and freed behind the scenes in C
> in a number of ways. 1) You can allocate large file-scope arrays and apply
> your own memory allocation.
mmap?
> This can be far simpler than malloc()/free(). You just set a pointer to a
> typedef for a struct, union, etc to free space in the array. Or, you can
> use one of the three or so publicly available memory allocators applied to
> the array.
But that calls malloc or libc, does it not?
> 2) You can also declare procedure local variables which allocates space
> from the stack (like non-standard alloca()). Returning from a procedure
> call frees the allocated stack space.
Ok.
> You can call C functions recursively, which allocates from the stack
> repeatedly (like multiple alloca() calls and free's when returned). 3) The
> file I/O functions in C essentially create a hidden memory allocator behind
> the scenes. Most C file I/O functions allocate space from a storage device,
> not from memory. However, a file created by tmpfile() will be created in
> memory in most C implementations. So, file I/O can also be used effectively
> as a memory allocator, i.e., appending to the end of a file effectively
> allocates memory. If tmpfile() doesn't create in memory, it'll create on
> disk and with enough buffering, it's effectively the same as using memory.
Ok but I was not talking about C. And are we talking DOS or Windows or Linux
or what? How is this hidden memory allocator invoked? Libc call? The whole
point is that CrapOS doesn't give you any application level memory
management, you have to call libc. That's chicken shit. In a real OS you can
ask for storage and free it, you don't need a library call to do it. I think
even Windows got this right, why not UNIX?
> > If I am wrong, please tell me how I can dynamically allocate and free
> > storage for structures in assembly,
>
> In assembly or for assembly structures? I think you meant the latter.
> You're not asking me to write assembly for C are you? ...
In assembly...since presumably everything you do in c uses libc.
>
> > how I can dynamically allocate and free
> > storage or even discrete variables.
>
> The easiest is #1 above. C allows typedef's of objects. This creates an
> addressing structure without allocating space for the object. If you
> declare a pointer to the typedef of some object, i.e., struct or union, you
> can then set the pointer to a previously allocated memory region to provide
> space. This is commonly done with memory allocated via malloc(). However,
> it can be done with any memory that is allocated. In C you can declare
> arrays, i.e., allocate "empty" storage space that can hold a bunch of
> objects.
But we're going in circles. The question is how do you allocate memory on
Linux without malloc (libc) since Linux/UNIX doesn't provide any application
memory management at the OS level, only through libc.
If you're saying you just create a bunch of .data elements that's also not
dynamically allocating anything. Even .bss doesn't qualify because it's just
stealing from heap or stack once, when the program is loaded. For example if
you want to read in an unknown amount of data and want to create a linked
list of it, how are you supposed to allocate each node? AFAIK you have to
call libc whether you do it directly or indirectly.
The point is not that C is bad, the point is that UNIX/LINUX are CrapOS, since
there is no OS call to allocate and free memory at the application
level. They tell you to use libc or write your own. That's not an OS, that's
a chickenshit Berkeleyism.
> Apparently, FreeBSD has a nice memory allocator also. Or, #1 above is
> easiest. #3 is easy also.
Yeah but that kind of proves there's no memory management at the OS level
for applications? And AFAIK, FreeBSD has the same problem as other
UNIX/LINUX. It only has brk and mmap like all the other UNIXes. I hope I missed
something, but I don't think I did, from looking at the syscalls.
> Sorry to hear that... The carets > in the header and used as indentation
> (one more caret) in the post are used to indicate who said what in the
> replied to posts. Since the method you're using to post, strips them, I
> can't tell who said what from the post. I must go back to the indentation
> of the messages to figure out which caret level was which person.
I'm replying to one person and the single > are you, the double >> are me or
someone else using a remailer, and the >>> are you, etc.
> Since you replied to me, my header should've been there, indented once:
> > "Rod Pemberton" <do_no...@noavailemail.cmm> wrote in message
> news:iur1a7$gad$1...@speranza.aioe.org...
Oh, I just cut that out since the threading tells me who posted it. Sorry,
I'll try to leave it in.
> See, you can see it was "you"-me-"you"... But, let's say someone else was
> in between; without those headers, it becomes more confusing as to who said
> what.
I'll try to fix it in the future (too late for this post though).
You laugh. I programmed all those! And some of their predecessors.
>> You want to model atomic particles and nuclear forces? Ok, use
>> an IBM solution. You want to manipulate those forces...check
>> out what CERN is using....
>>
>> Yes, there is weather where I am. Any source for statistics regarding
>> accurate predictions? I'm prepared to be pleasantly surprised, but my gut
>> tells me they wouldn't be much better than the Farmer's Almanac.
>
> Now that's remarkably silly, since I suspect what you know about
> weather modelling consists of deciding to wear a coat when it rains.
Should you want to look at some of the source code, it's available for
one of the models here:
http://www.mcs.anl.gov/~michalak/B/mpmm_index.html
Among the features of the latest release, I note:
"5. Optimized routines to ensure faster run times. The new routines
especially improve run times on IBM computers. In order to give
users a choice, both the optimized and original code are available.
6. New Cray X1 and INTEL compiler flags are added."
>>
>>> That's a whole boatload of opinion there.
>>
>> It's my experience one tends to get what one gives.
>> Don't be too proud of this technological terror that IBM
>> has constructed; the power to issue my bank statement
>> is insignificant next to the power of smashing hadrons
>> together. http://accelconf.web.cern.ch/accelconf/ica05/proceedings/pdf/I1_001.pdf
>
> Oh, get real. Computers don't smash atoms together.
Wanna bet? How do you think they control every aspect of the operation
of a supercollider? From the link above, there's a list of the
computers used for this purpose, including:
"The VME Front End Computers: 350 VME computers are being added to the
existing 300 crates already existing for the PS Complex and SPS, to deal
with high performance acquisitions and real-time processing. They house
a large variety of commercial or CERN-made I/O modules. These FECs run
either the same LynxOS real-time operating system as those of the PS and
of the SPS or Red Hat Linux. Mostly they are diskless to increase
reliability and they boot over the network. Typically, the LHC beam
instrumentation and the LHC beam interlock systems use VME front-ends."
Then there are more *categories* of front- and back-end computers
listed, including industrial PC front ends and PLC back ends.
Cheers,
Elizabeth
--
==================================================
Elizabeth D. Rather (US & Canada) 800-55-FORTH
FORTH Inc. +1 310.999.6784
5959 West Century Blvd. Suite 700
Los Angeles, CA 90045
http://www.forth.com
"Forth-based products and Services for real-time
applications since 1973."
==================================================
The 8-bit processors are used in mass-produced items because they are
inexpensive. In mass-produced items, a few pennies savings can add up
to quite a lot. In this case, the cost of hiring programmers to write
the software is very small compared to the cost of manufacturing the
item. The programming is a one-time cost that usually took place a
long time ago (in many cases, the company no longer even has the
source-code). Most such systems use an 8051 derivative, a small PIC
chip, a COP8, etc., and they are written in assembly language ---
those kinds of chips don't really support high-level languages. Most
of the time, assembly language is used rather than an HLL to reduce
memory usage because this allows a less expensive chip to be used. As
I said, programming is a one-time cost, so if the use of assembly
language rather than C or Forth doubles the cost of the software
development, that is insignificant compared to the gain that is
achieved by using a chip that is a few pennies less expensive than
what would otherwise be used. Nobody cares if the source-code is
readable or maintainable, because when mass-production of the item
begins, the source-code will never be changed or even looked at again.
The point that I'm making is that, while there may be a lot of 8-bit
micro-controllers being manufactured and used, there is very little
programming of 8-bit micro-controllers being done. There is not a
whole lot of programming of 16-bit micro-controllers being done either
--- pretty much all programming being done is of 32-bit micro-
controllers, mostly the ARM, and mostly for onesy-twosy items. The
fact that Forth can be used on 8-bit and 16-bit processors is
irrelevant to the vast majority of programmers, because they use C on
32-bit processors. If they consider upgrading to a new language, it
would most likely be to C++ or Java --- Forth isn't considered at all.
Hehe, Alex, check the source code for CMOVE in Win32Forth. :)
--
No, no, you can't e-mail me with the nono.
I really doubt that anybody has ever used FFL for any application.
Just the ugly naming convention alone precludes this. Also, FFL
doesn't have any features. For example, the AVL-tree code doesn't
provide a way for the programmer to in-order traverse the tree within
a selected range. That is the whole point of using a tree rather than
a hash table! The linked-list code doesn't provide a way to find the
node prior to the node that you are looking for, which is necessary
for sorting lists. There is no conversion between lists and arrays.
With both the AVL-trees and the lists, there is no way to have nodes
of different sizes, and be able to clone the entire data structure.
FFL doesn't provide any way to have a mixture of nodes that are in the
heap and in the dictionary, which is necessary for shifting some of
the work over to compile-time (see how in my slide-rule program I
generate some lists at compile-time to reduce the program's run time).
FFL is so primitive that I wouldn't even consider using it in an
application. My impression of FFL is that those were homework
assignments (implement a linked list, implement an AVL tree, etc.),
but that the authors never had any intention of their code actually
being used in any application. They just implemented the data
structure simplistically, and they were done --- they didn't provide
any features that would allow the code to be useful. Who did write
that stuff anyway? Were those Anton Ertl's students striving to get an
A from Anton by proving that they could implement some data structure?
Whoever the authors were, they definitely weren't application
programmers.
> It's a well known sales technique to promote a product by omitting any
> mention of competing products or rubbishing any such products without
> evidence. You've used both tactics. A better sales technique is to
> recognise there are competitors and to present some evidence, for
> example an analysis showing why your product is superior whether by:
> - absence of bugs
> - superior functionality
> - better performance
> - ease of use
> ...
>
> I don't think that just shouting "it's rubbish" convinces anybody,
> particularly knowledgeable members of this group, indeed it is more
> likely to antagonise them.
What "knowledgeable members of this group" are you referring to? If
anybody is at all knowledgeable of my novice package and FFL (spent an
hour perusing both), they are going to see for themselves which is
superior. This obviously doesn't describe yourself, as you seem to
know nothing of either package. This doesn't describe anybody that I
know of.
> You've complained several times about nobody using your library. Trying
> to be helpful I would suggest:
>
> 1. Give it a better name than the novice package. It may be suitable for
> novices but calling it that makes it plain that it is *only* suitable
> for novices. Why should a seasoned Forther who already possesses his own
> library even look at yours?
I don't think that calling it the "expert package" is going to make it
more suitable to "seasoned Forthers." If they can't learn how a novice
package works, then they aren't going to learn how an expert package
works.
> 2. Deposit it or a link in FLAG http://soton.mpeforth.com/flag/ so that
> its existence has a better chance of becoming known.
It seems very unlikely that Stephen Pelc would allow that --- that is
about as likely as www.forth.com providing a link. lol
In my novice package I have a lot of work-arounds for problems in ANS-
Forth. For example, I rewrote ALLOCATE and friends so that I could
have ALLOCATION. Obviously, members of the Forth-200x committee
(including Stephen Pelc) aren't going to support my novice package,
because doing so would be a tacit admission that ANS-Forth has
problems. Leon Wagner has stated that ALLOCATION is worthless, and
that damns the entire novice package along with it.
I put my novice package on www.forth.org because that is one place
that the Forth-200x members don't have enough authority to order that
it be deleted.
> 3. Provide evidence of why it is superior without rubbishing the opposition.
I didn't "rubbish" the opposition (is that even a verb?) --- the
authors of FFL inflicted rubbish on the world under the grandiose name
"Forth Foundation Library." I spent an hour or so browsing through the
FFL and I found nothing of value in there. This was after I had
written my novice package, but if I had known about FFL prior to
writing my novice package I would have still written it just the way
that I did --- I don't really concern myself with other people's code,
as I feel confident that I can always do a better job myself.
> 4. Provide comprehensive test programs using the Hayes tester, that
> prove your library is well tested. Others do, e.g., David Williams (see
> http://www-personal.umich.edu/~williams/archive/forth/strings/index.html
> - incidentally there are quite a few library packages in ANS Forth on
> this site that you probably haven't considered) and Krishna Myneni.
> Again why should experienced Forthers use a library when there's little
> or no evidence that it's been well tested.
I don't know what a "Hayes tester" is.
The only evidence I have that my novice package has been well tested
is that I wrote a lot of application code, including the slide-rule
program, using the novice package and that software worked. As I have
said, I have never heard of anybody writing any application program
using FFL, and I doubt that it could be done anyway --- so FFL has
never been tested in the crucible of application-writing.
> 5. Write a small, easily understandable application where your library
> is beneficial. Your slide rule program is too large.
>
> Of course items 3, 4 (see **) and 5 take quite a bit of work which you
> may be unwilling to do, in which case you'll probably have to live with
> the indifference shown.
I find it amazing that you say my slide-rule program is too large, and
then immediately you say that I am unwilling to do the "quite a bit of
work" involved in writing a small program. Are small programs more
difficult than big programs? You are essentially accusing me of being
lazy, because I don't take the time to spoon-feed you.
Actually, most of my novice package is oriented toward writing large
programs. For example, I have ALLOCATION provided, as well as CLONE-
LIST and COPY-ASSOCIATION that use it. These are for situations in
which the data structure contains nodes of different types (parent and
child types typically). This obviously only becomes an issue in large
programs that have inheritance of data types. For the most part,
applications that need data-structure support are fairly large, in
that they are working with a lot of data of more than one type. Small
programs tend to also be simple programs.
There is a lot in the novice package that can even be used by small
programs though. For example, I have <SPLIT> that breaks a string
apart on delimiters, building a list. One guy used <SPLIT> to break
apart file names on the / character. His program was presumably quite
small, although I never looked at it. <SPLIT> could be used for
working with comma-delimited sequential files, which are a pretty
common database-dump format. Such programs tend to be small --- I used
to write programs like that every day when I was working as an IBM370
assembly-language programmer. I had a personal library of functions
and macros, including something similar to <SPLIT>, that allowed me to
write small programs in an hour or two. I could write assembly-
language programs significantly faster than other people could write
HLL programs, primarily because I had macros (and macros are the one
thing that no HLL other than Forth and Lisp allows).
> I hope this helps.
It didn't. You are pretty much in the same category as John Passaniti
in the sense that you are talking at length about a subject that you
know nothing about. On the plus side though, you didn't say that my
novice package "sucks," so that does put you a step above Passaniti.
If you would actually spend an hour looking at my package before
commenting on it, that would put you leagues above anybody that I
know. I spent that much time looking at FFL before I commented on it.
Try looking at LIST.4TH as that one is pretty simple --- linked lists
are possibly the simplest data-structure in existence, but also the
most useful (the Lispers think so, anyway).
> ** If such test programs are developed simultaneously with the library
> code I would argue that it both speeds development and you get the final
> test programs automatically for free. At least that is my experience.
> Also when you modify the library running a test suite provides
> confidence that nothing has been broken.
I'm aware of the concept of Agile development. I don't think that test
suites are all that useful for code such as a library that gets
written once and isn't changed again --- that is more for applications
that are in continual flux, especially when multiple programmers are
all working on the application.
By the time that the 80386 came out, Forth was already dead.
There was a short period after the 80386 came out during which people
continued to use MS-DOS and would use DOS-extenders to access memory
beyond the 640K limit (mostly because Windows 3.1 was so horrible).
This trailing edge of MS-DOS became obsolete when Windows-95 came out.
When I wrote MFX at Testra, this was done on the 32-bit version of UR/
Forth running on a DOS-extender. Forth was already dead though ---
other than a few rinky-dink outfits such as Testra that used Forth,
everybody was using C and C++ (Turbo Pascal and Delphi too).
During the decade that the 8088 and 80286 were used to run MS-DOS,
PolyForth supported only the Tiny memory-model. This is largely what
killed Forth. Most casual observers assume that Forth Inc. defines
Forth (they assume this because Forth Inc. owns the name "Forth").
When these people see that Forth Inc. is run by incompetents who don't
even know that the 8088 provides access to more than 64K of memory,
they believe that the entire Forth community must be incompetent
without any further evidence --- PolyForth permanently ruined Forth's
reputation --- this all happened many years prior to the introduction
of the 80386.
If Forth Inc. hadn't dragged the name "Forth" through the mud in the
late 1980s, Forth might have succeeded. I blame Forth Inc. entirely
for Forth's failure. To be more specific, I blame *you* for ruining
Forth's reputation.
My personal version of CMOVE uses REP. The existing assembler is
deficient and allows such a thing. I'm rewriting the assembler at the
moment; I found it extremely difficult making minor changes to the
existing code without having it break. There's a simplified version
that does 32bit addressing only out on
http://tech.groups.yahoo.com/group/win32forth/files/Users/Alex/asm.zip.
Oh, I knew I was right, you're getting this all from Wikipedia. It would
have been a lot better if you had just asked instead of stating incorrect
things as if you knew they were facts.
I have never heard of PLZ/SYS, so I have nothing to say on that.
I'm familiar with PL/I, it's a very nice HLL that should have been more
successful. No IBM version of PL/I (and I believe only IBM ever sold a PL/I
capable of running on a mainframe) can be used for writing systems software.
AFAIK, PL/M *was* a derivative of PL/I that was created (I think) by Digital
Research and used to write much of their DOS code. Again, it was not a
mainframe language and was not an assembly language although it could be
used, and was, to write system code, since that's what they designed it
for. But it doesn't run on a mainframe and can't be used to write systems
software on mainframes, it's strictly a PC language.
I can tell you PL/S, PL/AS, and PL/X are all proprietary, high level
assemblers targeting the IBM mainframe. They do have similarities to PL/I
because they're all more or less ALGOL-derived languages with control
structures and declarations similar to ALGOL. That's where the similarity
ends. PL/X and predecessors don't have a runtime, because they translate
pretty much directly to assembler and they have critical features for
writing system code that HLLs don't, like being able to manipulate registers
and other low level areas, and issue system service requests. All the system
services requests are actually dual-language PL/AS / PL/X and assembler.
This wasn't worth mentioning before because these languages don't go outside
IBM, but since we are discussing it the truth is all the systems services
macros can be invoked from either assembler or PL/X. Another point where the
wikipedia article is wrong (I hadn't looked at it until now) is about not
being able to modify the OS because of not having access to PL/S. PL/S and
assembler coexist without effort, there is no problem to extend one from the
other and indeed many people did all kinds of system modifications in
assembler then and now. I do that for my job, and I use assembler.
I'm not sure PL/S could be based on PL/I since they both came out around the
same time. I don't even know whether they were developed by the same group
or just happen to be similar and it's not impossible that they were designed
by different groups and look similar because of the ALGOL influence. Many
ALGOL descendants are strikingly similar in many ways. Syntax is similar,
but that's it. What they can actually do and how they are used is completely
different.
So yes, PL/S, PL/AS, and PL/X are much lower level languages than PL/I, and
they're only distant relatives insomuch as their structure looks familiar to
a PL/I programmer...but not more than that, and in many ways much less. They
have no library calls, no conversion routines, etc. They're really not much
more than a custom built HLA for the IBM platform. I'm familiar with all of
them more or less because we have to read code in PL/AS and I wrote code in
PL/X when it was available for a short time. I didn't like it though, I feel
assembler was more natural and easier to use. It's not unusual for systems
software vendors to develop their own languages for writing their OS and
tools and part of the reason is it keeps their staff around since what they
learn isn't portable. Nobody outside IBM uses these languages, so people who
spent careers learning them don't have any transferable skills, aside from
their internals knowledge, which of course is valuable.
That brings us around to what I have been saying. If you want to write
systems software on the mainframe, it can only be done in assembler. Now
I'll add, if you work for IBM, you can do it in PL/X.